| Column | Type | Values / length range |
| --- | --- | --- |
| title | string | 4–295 characters |
| pmid | string | 8 characters |
| background_abstract | string | 12–1.65k characters |
| background_abstract_label | string | 12 distinct values |
| methods_abstract | string | 39–1.48k characters |
| methods_abstract_label | string | 6–31 characters |
| results_abstract | string | 65–1.93k characters |
| results_abstract_label | string | 10 distinct values |
| conclusions_abstract | string | 57–1.02k characters |
| conclusions_abstract_label | string | 22 distinct values |
| mesh_descriptor_names | sequence | n/a |
| pmcid | string | 6–8 characters |
| background_title | string | 10–86 characters |
| background_text | string | 215–23.3k characters |
| methods_title | string | 6–74 characters |
| methods_text | string | 99–42.9k characters |
| results_title | string | 6–172 characters |
| results_text | string | 141–62.9k characters |
| conclusions_title | string | 9–44 characters |
| conclusions_text | string | 5–13.6k characters |
| other_sections_titles | sequence | n/a |
| other_sections_texts | sequence | n/a |
| other_sections_sec_types | sequence | n/a |
| all_sections_titles | sequence | n/a |
| all_sections_texts | sequence | n/a |
| all_sections_sec_types | sequence | n/a |
| keywords | sequence | n/a |
| whole_article_text | string | 6.93k–126k characters |
| whole_article_abstract | string | 936–2.95k characters |
| background_conclusion_text | string | 587–24.7k characters |
| background_conclusion_abstract | string | 936–2.83k characters |
| whole_article_text_length | int64 | 1.3k–22.5k |
| whole_article_abstract_length | int64 | 183–490 |
| other_sections_lengths | sequence | n/a |
| num_sections | int64 | 3–28 |
| most_frequent_words | sequence | n/a |
| keybert_topics | sequence | n/a |
| annotated_base_background_abstract_prompt | string | 1 distinct value |
| annotated_base_methods_abstract_prompt | string | 1 distinct value |
| annotated_base_results_abstract_prompt | string | 1 distinct value |
| annotated_base_conclusions_abstract_prompt | string | 1 distinct value |
| annotated_base_whole_article_abstract_prompt | string | 1 distinct value |
| annotated_base_background_conclusion_abstract_prompt | string | 1 distinct value |
| annotated_keywords_background_abstract_prompt | string | 28–460 characters |
| annotated_keywords_methods_abstract_prompt | string | 28–701 characters |
| annotated_keywords_results_abstract_prompt | string | 28–701 characters |
| annotated_keywords_conclusions_abstract_prompt | string | 28–428 characters |
| annotated_keywords_whole_article_abstract_prompt | string | 28–701 characters |
| annotated_keywords_background_conclusion_abstract_prompt | string | 28–428 characters |
| annotated_mesh_background_abstract_prompt | string | 53–701 characters |
| annotated_mesh_methods_abstract_prompt | string | 53–701 characters |
| annotated_mesh_results_abstract_prompt | string | 53–692 characters |
| annotated_mesh_conclusions_abstract_prompt | string | 54–701 characters |
| annotated_mesh_whole_article_abstract_prompt | string | 53–701 characters |
| annotated_mesh_background_conclusion_abstract_prompt | string | 54–701 characters |
| annotated_keybert_background_abstract_prompt | string | 100–219 characters |
| annotated_keybert_methods_abstract_prompt | string | 100–219 characters |
| annotated_keybert_results_abstract_prompt | string | 101–219 characters |
| annotated_keybert_conclusions_abstract_prompt | string | 100–240 characters |
| annotated_keybert_whole_article_abstract_prompt | string | 100–240 characters |
| annotated_keybert_background_conclusion_abstract_prompt | string | 100–211 characters |
| annotated_most_frequent_background_abstract_prompt | string | 67–217 characters |
| annotated_most_frequent_methods_abstract_prompt | string | 67–217 characters |
| annotated_most_frequent_results_abstract_prompt | string | 67–217 characters |
| annotated_most_frequent_conclusions_abstract_prompt | string | 71–217 characters |
| annotated_most_frequent_whole_article_abstract_prompt | string | 67–217 characters |
| annotated_most_frequent_background_conclusion_abstract_prompt | string | 71–217 characters |
| annotated_tf_idf_background_abstract_prompt | string | 74–283 characters |
| annotated_tf_idf_methods_abstract_prompt | string | 67–325 characters |
| annotated_tf_idf_results_abstract_prompt | string | 69–340 characters |
| annotated_tf_idf_conclusions_abstract_prompt | string | 83–403 characters |
| annotated_tf_idf_whole_article_abstract_prompt | string | 70–254 characters |
| annotated_tf_idf_background_conclusion_abstract_prompt | string | 71–254 characters |
| annotated_entity_plan_background_abstract_prompt | string | 20–313 characters |
| annotated_entity_plan_methods_abstract_prompt | string | 20–452 characters |
| annotated_entity_plan_results_abstract_prompt | string | 20–596 characters |
| annotated_entity_plan_conclusions_abstract_prompt | string | 20–150 characters |
| annotated_entity_plan_whole_article_abstract_prompt | string | 50–758 characters |
| annotated_entity_plan_background_conclusion_abstract_prompt | string | 50–758 characters |
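A minimal sketch of how a dataset with this schema can be loaded and inspected with the Hugging Face `datasets` library; the Hub identifier below is a placeholder (the dataset's actual path is not given in this excerpt), and the field accesses simply mirror the column names listed above.

```python
# Sketch: load one split and inspect a single row of the schema described above.
# "user/structured-pubmed-articles" is a placeholder identifier, not the real Hub path.
from datasets import load_dataset

ds = load_dataset("user/structured-pubmed-articles", split="train")
row = ds[0]

# Scalar identifiers
print(row["pmid"], row["pmcid"], row["title"])

# Structured-abstract fields and their section labels
for part in ("background", "methods", "results", "conclusions"):
    label = row[part + "_abstract_label"]
    text = row[part + "_abstract"]
    print(f"[{label}] {text[:80]}...")

# Full-text sections are stored both as dedicated columns (background_text,
# results_text, ...) and as parallel title/text sequences.
for title, text in zip(row["all_sections_titles"], row["all_sections_texts"]):
    print(f"{title}: {len(text)} characters")
```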
Example row:

title: Burden of Malnutrition among Children and Adolescents with Cerebral Palsy in Arabic-Speaking Countries: A Systematic Review and Meta-Analysis.

pmid: 34579076

background_abstract: We aimed to estimate the burden and underlying risk factors of malnutrition among children and adolescents with cerebral palsy in Arabic-speaking countries.

background_abstract_label: BACKGROUND
methods_abstract: OVID Medline, OVID Embase, CINAHL via EBSCO, Cochrane Library, and SCOPUS databases were searched up to 3 July 2021. Publications were reviewed to identify relevant papers following pre-defined inclusion/exclusion criteria. Two reviewers independently assessed the studies for inclusion, and data extraction was completed independently by two reviewers. Descriptive and pooled analyses are reported.

methods_abstract_label: METHODS
results_abstract: From a total of 79 records screened, nine full-text articles were assessed for eligibility, of which seven studies met the inclusion criteria. Study characteristics, the anthropometric measurements used, and the nutritional outcomes reported varied between the studies. The included studies contained data on a total of 400 participants aged 1-18 years. Overall, 71.46% (95% confidence interval: 55.52-85.04%) of children with cerebral palsy had at least one form of malnutrition. Severe gross motor function limitation, feeding difficulties, cognitive impairment, and inadequate energy intake were commonly reported as underlying risk factors for malnutrition among children with cerebral palsy.

results_abstract_label: RESULTS
conclusions_abstract: The burden of malnutrition is high among children with cerebral palsy in Arabic-speaking countries. More research is needed to better understand this public health issue in these countries.

conclusions_abstract_label: CONCLUSIONS
[ "Adolescent", "Cerebral Palsy", "Child", "Child Nutrition Disorders", "Global Health", "Humans", "Language" ]
8468429
background_title: 1. Introduction
background_text: Cerebral palsy (CP) is one of the leading causes of motor disability among children and adolescents [1]. Malnutrition is defined as deficient, excessive, or unbalanced intake of energy and/or nutrients. The term covers two broad groups of conditions: undernutrition, which includes stunting (low height for age), wasting (low weight for height), underweight (low weight for age), and micronutrient deficiencies or insufficiencies (a lack of important vitamins and minerals); and overweight, obesity, and diet-related noncommunicable diseases (such as heart disease, stroke, diabetes, and cancer) [2]. Malnutrition can be seen as a secondary health issue that can affect the overall health and well-being of children with CP and their families [3]. It occurs when food intake falls short of the requirements for normal body functions, causing growth and development problems [4]. Malnutrition must be diagnosed, prevented, and managed early in children's lives because growth and development depend on optimal nutritional intake. Malnutrition in children with a chronic condition such as CP is caused by various factors, including the underlying disorder and non-illness-related factors such as increased caloric demands, malabsorption, altered nutrient use, and limits on nutrient provision due to fluid status and/or feeding tolerance [5].

There are many ways to evaluate malnutrition and related risk factors among children, including, but not limited to, standard anthropometric measures such as weight and its percentile, height and its percentile, body mass index (BMI), and waist, head, and arm circumferences. Other measurements that can be used are total body water, fat mass, triceps skinfold thickness, z-scores, and biochemical parameters such as hemoglobin, ferritin, and albumin [4,6].

Despite differences among Arabic-speaking countries (ASCs) (Table 1) in the quality of health care provided, they share many common cultural, social, and food-related customs. Regardless of these similarities and differences, children with CP are equally vulnerable to malnutrition, yet the burden of malnutrition among children and adolescents with CP in these countries has not been quantified through a systematic review.

Because of the dearth of knowledge regarding the nutritional status of children with CP in ASCs, and to advance the global knowledge base on this crucial issue, we aimed to estimate the burden and underlying risk factors of malnutrition among children and adolescents with CP in ASCs based on the available published literature, to facilitate evidence-based medicine. We recognize the need for systematic data collection and reporting across the limited available studies. In this review, therefore, we focused on summarizing the available information on the size of the problem and its causes, despite the scarcity of resources available for conducting large-scale studies and nutrition interventions in this context.
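The Introduction above refers to z-scores for indicators such as weight-for-age and BMI-for-age. As an illustration only, the sketch below shows how such a z-score is conventionally computed from growth-reference LMS parameters (Box-Cox power L, median M, coefficient of variation S); the parameter values used are invented placeholders, not actual WHO reference values.

```python
# Illustrative LMS z-score calculation (the form in which growth references such
# as the WHO child growth standards are published). The L/M/S values below are
# invented placeholders, not real reference data.
import math

def lms_zscore(value: float, L: float, M: float, S: float) -> float:
    """Z-score of a measurement given reference skewness L, median M, and CV S."""
    if L == 0:
        return math.log(value / M) / S  # limiting case of the Box-Cox transform
    return ((value / M) ** L - 1.0) / (L * S)

# Hypothetical example: an 11.5 kg child scored against made-up weight-for-age
# reference parameters; z-scores below about -2 are conventionally read as underweight.
print(round(lms_zscore(11.5, L=0.2, M=14.0, S=0.11), 2))
```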
methods_title: null

methods_text: null
results_title: 3. Results
results_text:

3.1. Study Characteristics and Participants
A total of 79 titles were identified from the databases following the search strategy described above. After deduplication using the EndNote X9 citation manager and a manual re-check, 50 primary studies were identified, of which 41 irrelevant studies were excluded and nine studies were eligible for full-text review. Following consensus among the reviewers, seven articles were selected for inclusion and data extraction. The details are summarized in Figure 1.
Table 3 summarizes the characteristics of the included studies (n = 7). The studies were published between 1984 and 2021, all in English [4,11,12,13,14,15,16]. Most were from Saudi Arabia (n = 4) [4,11,12,16]; the remainder were from Egypt (n = 1) [13], the United Arab Emirates (n = 1) [14], and Jordan (n = 1) [15]. The study designs were cross-sectional (n = 5) [4,11,13,14,15] and retrospective record review (n = 1) [12]; in one study the design was not clearly stated [16]. Five studies were hospital/institution-based [4,12,13,15,16], one was school-based [11], and one was population-based [14]. Studies differed in duration, sample size, causes of malnutrition, assessment measures used, and nutritional indicators reported.
Overall, two studies were conducted among children with CP only [4,12], whereas the remaining studies included children with CP as part of a larger cohort of children with disability (n = 2) [12,15] or special needs (n = 1) [11], or compared them with control groups (n = 2) [13,15]. The 400 pooled participants ranged from 12 to 119 children with CP per study, with ages between 1 and 18.4 years. Male/female numbers or percentages were available for five of the seven studies, ranging from 47% to 58.3% males and 41% to 53% females.

3.2. Measurements Used for Nutritional Assessment
All studies used at least one standard anthropometric measurement tool. The most commonly reported indicators were percentiles/z-scores for weight-for-age (n = 3) [4,12,13], height-for-age (n = 2) [4,13], and BMI/BMI-for-age (n = 3) [4,11,14]. Additionally, body composition and biochemical tests were reported in one study [13] as indicators of nutritional status. The nutritional indicators reported in the included studies are summarized in Table 4.

3.3. Malnutrition Rate among Children with CP
Of a total of N = 952 participants in the included studies, n = 400 were children with CP and were eligible for estimation of the pooled prevalence of malnutrition. The proportion with at least one form of malnutrition among children with CP was reported in six studies [4,11,12,13,14,16], whereas a mean (SD) nutritional indicator was reported in two studies [14,15]. The pooled estimates suggest that 48.84–91.67% of children with CP in the included studies had at least one form of malnutrition (pooled prevalence 71.46%, 95% CI: 55.52–85.04, p < 0.0001). Moderate to severe underweight was the most frequently reported form (n = 4 studies) and ranged between 7% and 84.9% among the participating children with CP [4,11,12,13]. Overweight was reported in three studies and ranged between 2.5% and 25% [11,12,14] (Table 5, Figure 2 and Figure 3).

3.4. Underlying Risk Factors of Malnutrition
Five of the seven included studies reported factors related to malnutrition among children with CP in ASCs [4,11,12,13,14]. Overall, malnutrition was higher among children with moderate to severe gross motor function limitation (e.g., GMFCS levels III–V) [13,15], oro-motor dysfunction/feeding difficulties [12,13], and traumatic dental injury, caries, and medical complications [11,12]. Furthermore, older age of the child, presence of cognitive impairment, and inadequate energy intake were reported as contributing factors to malnutrition among children with CP in one study [4].

3.5. Study Quality and Heterogeneity
The symmetrical funnel plot in Figure 3, which plots the proportion-of-malnutrition estimates against their corresponding standard errors, revealed no substantial publication bias in the meta-analysis (Figure 2). However, heterogeneity was high (I² = 88.40%), as the included studies did not use uniform measurements of malnutrition.
conclusions_title: 5. Conclusions
conclusions_text: Malnutrition in children and adolescents with disabilities and/or CP is an existing problem in ASCs, but there is a dearth of medical research on it. Focused research is needed to fill this large evidence gap and to identify need-based, effective nutrition interventions for children with CP in these countries.
[ "2. Materials and Methods", "2.1. Data Sources and Search Strategy", "2.2. Study Selection and Inclusion", "2.3. Risk of Bias Assessment", "2.4. Data Extraction", "2.5. Data Analysis", "3.2. Measurements Used for Nutritional Assessment", "3.3. Malnutrition Rate among Children with CP", "3.4. Underlying Risk Factors of Malnutrition", "3.5. Study Quality and Heterogeneity" ]
[ "For this review, we followed the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines on conducting systematic reviews, including the 27-item checklist [7,8].\n2.1. Data Sources and Search Strategy We identified 22 countries whose official language is Arabic [9]. One author (C.K.) searched the following bibliographic databases—OVID Medline (1946–25 June 2021), OVID Embase (1947–1 July 2021), CINAHL via EBSCO (1982–July 2021), Cochrane Library Database of Systematic Reviews (Issue 7 of 12, 2021), Cochrane Central Register of Controlled Trials (Issue 7 of 12, 2021) and SCOPUS (1788–July 2021) to find publications on nutritional status among children and adolescents with CP in ASCs. The final search was completed on 3 July 2021. No language or date limits were applied to ensure maximum retrieval.\nThe search used controlled vocabulary terms including ‘Cerebral Palsy’, ‘Nutritional status’, ‘Nutritional Sciences’, ‘Malnutrition’, ‘Thinness’, ‘Growth disorders’, ‘Cachexia’, ‘Body Mass Index’, ‘Overweight,’ ‘Obesity’, “Infant Newborn, ‘Infant’, ’Child Preschool’, ‘Child’ and ‘Adolescent’. These were used with corresponding text-word terms. Text-word terms were truncated where necessary to include all relevant term endings. The search terms were combined with the individual country list terms provided in Table 1. The Ovid Medline search strategy used is provided in Appendix A.\nWe identified 22 countries whose official language is Arabic [9]. One author (C.K.) searched the following bibliographic databases—OVID Medline (1946–25 June 2021), OVID Embase (1947–1 July 2021), CINAHL via EBSCO (1982–July 2021), Cochrane Library Database of Systematic Reviews (Issue 7 of 12, 2021), Cochrane Central Register of Controlled Trials (Issue 7 of 12, 2021) and SCOPUS (1788–July 2021) to find publications on nutritional status among children and adolescents with CP in ASCs. The final search was completed on 3 July 2021. No language or date limits were applied to ensure maximum retrieval.\nThe search used controlled vocabulary terms including ‘Cerebral Palsy’, ‘Nutritional status’, ‘Nutritional Sciences’, ‘Malnutrition’, ‘Thinness’, ‘Growth disorders’, ‘Cachexia’, ‘Body Mass Index’, ‘Overweight,’ ‘Obesity’, “Infant Newborn, ‘Infant’, ’Child Preschool’, ‘Child’ and ‘Adolescent’. These were used with corresponding text-word terms. Text-word terms were truncated where necessary to include all relevant term endings. The search terms were combined with the individual country list terms provided in Table 1. The Ovid Medline search strategy used is provided in Appendix A.\n2.2. Study Selection and Inclusion Study selection was completed following a pre-set eligibility criteria developed by three reviewers (G.K., S.M., & S.M.M.). The inclusion criteria were as follows: (1) studies reported original observations (from observational and analytical study design); (2) study participants were children and/or adolescents with CP aged up to 18 years in ASCs; and (3) studies reported malnutrition (i.e., underweight, or overweight) as an outcome or in the background characteristics.\nThe exclusion criteria were as follows: (1) studies reporting a single case, case series, non-observational studies (e.g., systematic reviews, narrative reviews, scoping reviews), conference reports/posters, (2) study participants were only malnourished children or adults with CP, (3) conducted in non-Arabic speaking countries.\nTwo reviewers, (S.M.M. and G.K.) 
independently reviewed the identified studies and disagreements were resolved by a third reviewer (I.J.) by consensus. The review protocol has been registered in PROSPERO (registration number: CRD42021244171—https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=244171—accessed on 27 July 2021).\nStudy selection was completed following a pre-set eligibility criteria developed by three reviewers (G.K., S.M., & S.M.M.). The inclusion criteria were as follows: (1) studies reported original observations (from observational and analytical study design); (2) study participants were children and/or adolescents with CP aged up to 18 years in ASCs; and (3) studies reported malnutrition (i.e., underweight, or overweight) as an outcome or in the background characteristics.\nThe exclusion criteria were as follows: (1) studies reporting a single case, case series, non-observational studies (e.g., systematic reviews, narrative reviews, scoping reviews), conference reports/posters, (2) study participants were only malnourished children or adults with CP, (3) conducted in non-Arabic speaking countries.\nTwo reviewers, (S.M.M. and G.K.) independently reviewed the identified studies and disagreements were resolved by a third reviewer (I.J.) by consensus. The review protocol has been registered in PROSPERO (registration number: CRD42021244171—https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=244171—accessed on 27 July 2021).\n2.3. Risk of Bias Assessment We assessed the selected studies to identify risk of bias using the Newcastle-Ottawa Quality Assessment Scale (NOS) [10]. The assessment was completed by the first author (S.M.M.) with support of an external reviewer (H.B.). Results of individual studies included in this review are shown in Table 2. All seven articles included in this review displayed good quality in all three areas of the assessment (i.e., selection, comparability, and outcome). None of the studies were excluded due to poor quality at this stage as all of them met the standard thresholds for inclusion.\nWe assessed the selected studies to identify risk of bias using the Newcastle-Ottawa Quality Assessment Scale (NOS) [10]. The assessment was completed by the first author (S.M.M.) with support of an external reviewer (H.B.). Results of individual studies included in this review are shown in Table 2. All seven articles included in this review displayed good quality in all three areas of the assessment (i.e., selection, comparability, and outcome). None of the studies were excluded due to poor quality at this stage as all of them met the standard thresholds for inclusion.\n2.4. Data Extraction Data extraction was completed in an Excel templated developed by the first author (S.M.M.) in consultation with another two reviewers (G.K. and I.J.). Two reviewers (R.S. and I.J.) completed data extraction from all seven studies independently. Any differences identified were resolved following discussion with a third reviewer (G.K.). As the most commonly utilized method reported in the studies was anthropometric measurements, the following were extracted as available: (i) study characteristics (citation, implementation country, study settings, study design, study participants, samples size, age and gender, study duration), (ii) outcome measures/measurements used (anthropometric, biochemical, others), (iii) outcome reported (malnutrition proportions and significantly associated risk factors). 
If any information was unavailable, then it was documented as ‘not reported’.\nData extraction was completed in an Excel templated developed by the first author (S.M.M.) in consultation with another two reviewers (G.K. and I.J.). Two reviewers (R.S. and I.J.) completed data extraction from all seven studies independently. Any differences identified were resolved following discussion with a third reviewer (G.K.). As the most commonly utilized method reported in the studies was anthropometric measurements, the following were extracted as available: (i) study characteristics (citation, implementation country, study settings, study design, study participants, samples size, age and gender, study duration), (ii) outcome measures/measurements used (anthropometric, biochemical, others), (iii) outcome reported (malnutrition proportions and significantly associated risk factors). If any information was unavailable, then it was documented as ‘not reported’.\n2.5. Data Analysis Descriptive information (e.g., study characteristics and outcome measures) were presented in table format. The rate of malnutrition was reported as documented in the original study. Factors related to malnutrition reported in individual studies were also summarized, but the effect size could not be estimated due to lack of consistent data. Furthermore, a forest plot and a funnel plot showing the proportion (with 95% confidence interval (CI)) of at least one form of malnutrition as reported in individual studies were constructed. For studies where malnutrition rate was reported for multiple indicators, the highest proportion was included. For meta-analysis, we used MedCalc® Statistical Software version 20.009 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; accessed on 20 July 2021). To investigate the heterogeneity we used a random effect model in the analysis. Heterogeneity was considered mild if I2 < 30%, moderate if I2 = 30–50%, and notable if I2 > 50%.\nDescriptive information (e.g., study characteristics and outcome measures) were presented in table format. The rate of malnutrition was reported as documented in the original study. Factors related to malnutrition reported in individual studies were also summarized, but the effect size could not be estimated due to lack of consistent data. Furthermore, a forest plot and a funnel plot showing the proportion (with 95% confidence interval (CI)) of at least one form of malnutrition as reported in individual studies were constructed. For studies where malnutrition rate was reported for multiple indicators, the highest proportion was included. For meta-analysis, we used MedCalc® Statistical Software version 20.009 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; accessed on 20 July 2021). To investigate the heterogeneity we used a random effect model in the analysis. Heterogeneity was considered mild if I2 < 30%, moderate if I2 = 30–50%, and notable if I2 > 50%.", "We identified 22 countries whose official language is Arabic [9]. One author (C.K.) searched the following bibliographic databases—OVID Medline (1946–25 June 2021), OVID Embase (1947–1 July 2021), CINAHL via EBSCO (1982–July 2021), Cochrane Library Database of Systematic Reviews (Issue 7 of 12, 2021), Cochrane Central Register of Controlled Trials (Issue 7 of 12, 2021) and SCOPUS (1788–July 2021) to find publications on nutritional status among children and adolescents with CP in ASCs. The final search was completed on 3 July 2021. 
No language or date limits were applied to ensure maximum retrieval.\nThe search used controlled vocabulary terms including ‘Cerebral Palsy’, ‘Nutritional status’, ‘Nutritional Sciences’, ‘Malnutrition’, ‘Thinness’, ‘Growth disorders’, ‘Cachexia’, ‘Body Mass Index’, ‘Overweight,’ ‘Obesity’, “Infant Newborn, ‘Infant’, ’Child Preschool’, ‘Child’ and ‘Adolescent’. These were used with corresponding text-word terms. Text-word terms were truncated where necessary to include all relevant term endings. The search terms were combined with the individual country list terms provided in Table 1. The Ovid Medline search strategy used is provided in Appendix A.", "Study selection was completed following a pre-set eligibility criteria developed by three reviewers (G.K., S.M., & S.M.M.). The inclusion criteria were as follows: (1) studies reported original observations (from observational and analytical study design); (2) study participants were children and/or adolescents with CP aged up to 18 years in ASCs; and (3) studies reported malnutrition (i.e., underweight, or overweight) as an outcome or in the background characteristics.\nThe exclusion criteria were as follows: (1) studies reporting a single case, case series, non-observational studies (e.g., systematic reviews, narrative reviews, scoping reviews), conference reports/posters, (2) study participants were only malnourished children or adults with CP, (3) conducted in non-Arabic speaking countries.\nTwo reviewers, (S.M.M. and G.K.) independently reviewed the identified studies and disagreements were resolved by a third reviewer (I.J.) by consensus. The review protocol has been registered in PROSPERO (registration number: CRD42021244171—https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=244171—accessed on 27 July 2021).", "We assessed the selected studies to identify risk of bias using the Newcastle-Ottawa Quality Assessment Scale (NOS) [10]. The assessment was completed by the first author (S.M.M.) with support of an external reviewer (H.B.). Results of individual studies included in this review are shown in Table 2. All seven articles included in this review displayed good quality in all three areas of the assessment (i.e., selection, comparability, and outcome). None of the studies were excluded due to poor quality at this stage as all of them met the standard thresholds for inclusion.", "Data extraction was completed in an Excel templated developed by the first author (S.M.M.) in consultation with another two reviewers (G.K. and I.J.). Two reviewers (R.S. and I.J.) completed data extraction from all seven studies independently. Any differences identified were resolved following discussion with a third reviewer (G.K.). As the most commonly utilized method reported in the studies was anthropometric measurements, the following were extracted as available: (i) study characteristics (citation, implementation country, study settings, study design, study participants, samples size, age and gender, study duration), (ii) outcome measures/measurements used (anthropometric, biochemical, others), (iii) outcome reported (malnutrition proportions and significantly associated risk factors). If any information was unavailable, then it was documented as ‘not reported’.", "Descriptive information (e.g., study characteristics and outcome measures) were presented in table format. The rate of malnutrition was reported as documented in the original study. 
Factors related to malnutrition reported in individual studies were also summarized, but the effect size could not be estimated due to lack of consistent data. Furthermore, a forest plot and a funnel plot showing the proportion (with 95% confidence interval (CI)) of at least one form of malnutrition as reported in individual studies were constructed. For studies where malnutrition rate was reported for multiple indicators, the highest proportion was included. For meta-analysis, we used MedCalc® Statistical Software version 20.009 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; accessed on 20 July 2021). To investigate the heterogeneity we used a random effect model in the analysis. Heterogeneity was considered mild if I2 < 30%, moderate if I2 = 30–50%, and notable if I2 > 50%.", "Among reported nutritional assessment indicators used, all studies used at least one standard anthropometric measurement tool. Most commonly reported indicators were percentiles/z-scores for weight-for-age (n = 3) [4,12,13], height for age (n = 2) [4,13], and BMI/BMI-for-age (n = 3) [4,11,14]. Additionally, body composition and biochemical tests were reported in one study [13] as an indicator for nutritional status. The nutritional indicators reported in the included studies have been summarized in Table 4.", "Out of a total N = 952 participants in the included studies, n = 400 were children with CP and were eligible for estimation of the pooled prevalence of malnutrition. However, the proportion of at least one form of malnutrition among children with CP was reported in n = 6 studies [4,11,12,13,14,16] whereas the mean (SD) nutritional indicator was reported in n = 2 studies [14,15]. The pooled estimates suggest that 48.84–91.67% children with CP in the included studies had at least one form of malnutrition (pooled prevalence of 71.46%, 95% CI: 55.52–85.04, p < 0.0001). Moderate to severe underweight was most frequently reported (n = 4) and ranged between 7%–84.9% among the participating children with CP [4,11,12,13]. Being overweight was reported in n = 3 studies and ranged between 2.5–25% [11,12,14] (Table 5, Figure 2 and Figure 3).", "Five out of seven included studies reported the factors related to malnutrition among children with CP in ASCs [4,11,12,13,14]. Overall, malnutrition was found higher among children with moderate-severe gross motor function limitation (e.g., GMFCS level III–V) [13,15], oro-motor dysfunction/feeding difficulties [12,13], with traumatic dental injury, caries and medical complications [11,12]. Furthermore, older age of the child, presence of cognitive impairment and inadequate energy intake were reported as contributing factors to malnutrition among children with CP in one study [4].", "The symmetrical funnel plot in Figure 3 revealed that there was no substantial publication bias in the meta-analysis (Figure 2) for the proportion of malnutrition estimates against corresponding standard error. However, there is a high clinical heterogeneity (I2 = 88.40%) as the included studies did not use uniform measurements of malnutrition." ]
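Section 2.5 describes pooling the proportion of children with at least one form of malnutrition under a random-effects model and grading heterogeneity by I². The sketch below illustrates that kind of calculation with a DerSimonian-Laird estimator on logit-transformed proportions; the per-study counts are invented placeholders, and the review's own analysis was run in MedCalc (which uses a different transformation), so these numbers will not reproduce the published 71.46% pooled estimate.

```python
# Illustrative DerSimonian-Laird random-effects pooling of study proportions.
# Per-study event counts below are placeholders, not the reviewed studies' data.
import numpy as np
from scipy.special import logit, expit

events = np.array([30, 11, 55, 40, 70, 12])    # children with >=1 form of malnutrition (hypothetical)
totals = np.array([60, 12, 90, 60, 119, 20])   # children with CP per study (hypothetical)

# Logit-transformed proportions and their approximate variances
theta = logit(events / totals)
var = 1.0 / events + 1.0 / (totals - events)

# Fixed-effect weights and Cochran's Q
w = 1.0 / var
theta_fe = np.sum(w * theta) / np.sum(w)
Q = np.sum(w * (theta - theta_fe) ** 2)
df = len(theta) - 1
I2 = max(0.0, (Q - df) / Q) * 100              # heterogeneity; >50% is "notable" per the review's cut-offs

# DerSimonian-Laird between-study variance, then random-effects pooling
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)
w_re = 1.0 / (var + tau2)
theta_re = np.sum(w_re * theta) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

pooled = expit(theta_re)
ci = expit([theta_re - 1.96 * se_re, theta_re + 1.96 * se_re])
print(f"pooled prevalence = {pooled:.1%}, 95% CI {ci[0]:.1%}-{ci[1]:.1%}, I^2 = {I2:.1f}%")
```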
other_sections_sec_types: [ null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Data Sources and Search Strategy", "2.2. Study Selection and Inclusion", "2.3. Risk of Bias Assessment", "2.4. Data Extraction", "2.5. Data Analysis", "3. Results", "3.1. Study Characteristics and Participants", "3.2. Measurements Used for Nutritional Assessment", "3.3. Malnutrition Rate among Children with CP", "3.4. Underlying Risk Factors of Malnutrition", "3.5. Study Quality and Heterogeneity", "4. Discussion", "5. Conclusions" ]
[ "Cerebral palsy (CP) is considered as one of the leading causes of motor disability among children and adolescents [1]. Malnutrition is defined as a person’s energy and/or nutritional consumption being deficient, excessive, or unbalanced. Malnutrition has a broad definition that refers to two types of problem. First, stunting (low height for age), wasting (low weight for height), underweight (low weight for age), and micronutrient deficiencies or insufficiencies are some of the symptoms of undernutrition (a lack of important vitamins and minerals). Second, overweight, obesity, and noncommunicable diseases linked to diet are the other two (such as heart disease, stroke, diabetes, and cancer) [2]. Malnutrition can be seen as a secondary health issue that can impact on the overall health and well-being of children with CP and their families [3]. It occurs when food intake falls short of the requirements for normal body functions, causing growth and development problems [4]. Malnutrition must be diagnosed, prevented, and managed early in children’s lives because growth and development depend on optimum nutritional intake. Malnutrition in children with a chronic condition such as CP is caused by various factors, including the underlying disorder and non-illness-related factors such as increased caloric demands, malabsorption, altered nutrient use, and nutrient provision limits due to fluid status and/or feeding tolerance [5].\nThere are many ways to evaluate malnutrition and related risk factors among children, including, but not limited to, standard anthropometric measures like weight and its percentile, height and its percentile, body mass index (BMI), waist, head, and arm circumferences. Other measurements that could be used are total body water, fat mass, triceps fold thickness, z-score, and biochemical parameters such as hemoglobin, ferritin, and albumin [4,6].\nDespite differences among Arabic-speaking countries (ASCs) (Table 1) in the quality of health care provided, they share many common customs in relation to cultural, social, and food habits. Regardless of these similarities and differences, children with CP are equally vulnerable to malnutrition, yet the burden of malnutrition among children and adolescents with CP in these countries has not been quantified through a systematic review.\nBecause of the dearth of knowledge regarding the nutritional status of children with CP from ASCs, and to advance the global knowledge base on this crucial issue, we aimed to estimate the burden and underlying risk factors of malnutrition among children and adolescents with CP in the ASCs based on available published literature, to facilitate evidence-based medicine. We realize the need for systematic data collection and reporting of the limited available studies. In this review, therefore, we focused on summarizing the available information regarding the size of the problem and its causes, despite the scarcity of available resources that could be used to conduct large scale studies and nutrition intervention in a similar context.", "For this review, we followed the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines on conducting systematic reviews, including the 27-item checklist [7,8].\n2.1. Data Sources and Search Strategy We identified 22 countries whose official language is Arabic [9]. One author (C.K.) 
searched the following bibliographic databases—OVID Medline (1946–25 June 2021), OVID Embase (1947–1 July 2021), CINAHL via EBSCO (1982–July 2021), Cochrane Library Database of Systematic Reviews (Issue 7 of 12, 2021), Cochrane Central Register of Controlled Trials (Issue 7 of 12, 2021) and SCOPUS (1788–July 2021) to find publications on nutritional status among children and adolescents with CP in ASCs. The final search was completed on 3 July 2021. No language or date limits were applied to ensure maximum retrieval.\nThe search used controlled vocabulary terms including ‘Cerebral Palsy’, ‘Nutritional status’, ‘Nutritional Sciences’, ‘Malnutrition’, ‘Thinness’, ‘Growth disorders’, ‘Cachexia’, ‘Body Mass Index’, ‘Overweight,’ ‘Obesity’, “Infant Newborn, ‘Infant’, ’Child Preschool’, ‘Child’ and ‘Adolescent’. These were used with corresponding text-word terms. Text-word terms were truncated where necessary to include all relevant term endings. The search terms were combined with the individual country list terms provided in Table 1. The Ovid Medline search strategy used is provided in Appendix A.\nWe identified 22 countries whose official language is Arabic [9]. One author (C.K.) searched the following bibliographic databases—OVID Medline (1946–25 June 2021), OVID Embase (1947–1 July 2021), CINAHL via EBSCO (1982–July 2021), Cochrane Library Database of Systematic Reviews (Issue 7 of 12, 2021), Cochrane Central Register of Controlled Trials (Issue 7 of 12, 2021) and SCOPUS (1788–July 2021) to find publications on nutritional status among children and adolescents with CP in ASCs. The final search was completed on 3 July 2021. No language or date limits were applied to ensure maximum retrieval.\nThe search used controlled vocabulary terms including ‘Cerebral Palsy’, ‘Nutritional status’, ‘Nutritional Sciences’, ‘Malnutrition’, ‘Thinness’, ‘Growth disorders’, ‘Cachexia’, ‘Body Mass Index’, ‘Overweight,’ ‘Obesity’, “Infant Newborn, ‘Infant’, ’Child Preschool’, ‘Child’ and ‘Adolescent’. These were used with corresponding text-word terms. Text-word terms were truncated where necessary to include all relevant term endings. The search terms were combined with the individual country list terms provided in Table 1. The Ovid Medline search strategy used is provided in Appendix A.\n2.2. Study Selection and Inclusion Study selection was completed following a pre-set eligibility criteria developed by three reviewers (G.K., S.M., & S.M.M.). The inclusion criteria were as follows: (1) studies reported original observations (from observational and analytical study design); (2) study participants were children and/or adolescents with CP aged up to 18 years in ASCs; and (3) studies reported malnutrition (i.e., underweight, or overweight) as an outcome or in the background characteristics.\nThe exclusion criteria were as follows: (1) studies reporting a single case, case series, non-observational studies (e.g., systematic reviews, narrative reviews, scoping reviews), conference reports/posters, (2) study participants were only malnourished children or adults with CP, (3) conducted in non-Arabic speaking countries.\nTwo reviewers, (S.M.M. and G.K.) independently reviewed the identified studies and disagreements were resolved by a third reviewer (I.J.) by consensus. 
The review protocol has been registered in PROSPERO (registration number: CRD42021244171—https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=244171—accessed on 27 July 2021).\nStudy selection was completed following a pre-set eligibility criteria developed by three reviewers (G.K., S.M., & S.M.M.). The inclusion criteria were as follows: (1) studies reported original observations (from observational and analytical study design); (2) study participants were children and/or adolescents with CP aged up to 18 years in ASCs; and (3) studies reported malnutrition (i.e., underweight, or overweight) as an outcome or in the background characteristics.\nThe exclusion criteria were as follows: (1) studies reporting a single case, case series, non-observational studies (e.g., systematic reviews, narrative reviews, scoping reviews), conference reports/posters, (2) study participants were only malnourished children or adults with CP, (3) conducted in non-Arabic speaking countries.\nTwo reviewers, (S.M.M. and G.K.) independently reviewed the identified studies and disagreements were resolved by a third reviewer (I.J.) by consensus. The review protocol has been registered in PROSPERO (registration number: CRD42021244171—https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=244171—accessed on 27 July 2021).\n2.3. Risk of Bias Assessment We assessed the selected studies to identify risk of bias using the Newcastle-Ottawa Quality Assessment Scale (NOS) [10]. The assessment was completed by the first author (S.M.M.) with support of an external reviewer (H.B.). Results of individual studies included in this review are shown in Table 2. All seven articles included in this review displayed good quality in all three areas of the assessment (i.e., selection, comparability, and outcome). None of the studies were excluded due to poor quality at this stage as all of them met the standard thresholds for inclusion.\nWe assessed the selected studies to identify risk of bias using the Newcastle-Ottawa Quality Assessment Scale (NOS) [10]. The assessment was completed by the first author (S.M.M.) with support of an external reviewer (H.B.). Results of individual studies included in this review are shown in Table 2. All seven articles included in this review displayed good quality in all three areas of the assessment (i.e., selection, comparability, and outcome). None of the studies were excluded due to poor quality at this stage as all of them met the standard thresholds for inclusion.\n2.4. Data Extraction Data extraction was completed in an Excel templated developed by the first author (S.M.M.) in consultation with another two reviewers (G.K. and I.J.). Two reviewers (R.S. and I.J.) completed data extraction from all seven studies independently. Any differences identified were resolved following discussion with a third reviewer (G.K.). As the most commonly utilized method reported in the studies was anthropometric measurements, the following were extracted as available: (i) study characteristics (citation, implementation country, study settings, study design, study participants, samples size, age and gender, study duration), (ii) outcome measures/measurements used (anthropometric, biochemical, others), (iii) outcome reported (malnutrition proportions and significantly associated risk factors). If any information was unavailable, then it was documented as ‘not reported’.\nData extraction was completed in an Excel templated developed by the first author (S.M.M.) 
in consultation with another two reviewers (G.K. and I.J.). Two reviewers (R.S. and I.J.) completed data extraction from all seven studies independently. Any differences identified were resolved following discussion with a third reviewer (G.K.). As the most commonly utilized method reported in the studies was anthropometric measurements, the following were extracted as available: (i) study characteristics (citation, implementation country, study settings, study design, study participants, samples size, age and gender, study duration), (ii) outcome measures/measurements used (anthropometric, biochemical, others), (iii) outcome reported (malnutrition proportions and significantly associated risk factors). If any information was unavailable, then it was documented as ‘not reported’.\n2.5. Data Analysis Descriptive information (e.g., study characteristics and outcome measures) were presented in table format. The rate of malnutrition was reported as documented in the original study. Factors related to malnutrition reported in individual studies were also summarized, but the effect size could not be estimated due to lack of consistent data. Furthermore, a forest plot and a funnel plot showing the proportion (with 95% confidence interval (CI)) of at least one form of malnutrition as reported in individual studies were constructed. For studies where malnutrition rate was reported for multiple indicators, the highest proportion was included. For meta-analysis, we used MedCalc® Statistical Software version 20.009 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; accessed on 20 July 2021). To investigate the heterogeneity we used a random effect model in the analysis. Heterogeneity was considered mild if I2 < 30%, moderate if I2 = 30–50%, and notable if I2 > 50%.\nDescriptive information (e.g., study characteristics and outcome measures) were presented in table format. The rate of malnutrition was reported as documented in the original study. Factors related to malnutrition reported in individual studies were also summarized, but the effect size could not be estimated due to lack of consistent data. Furthermore, a forest plot and a funnel plot showing the proportion (with 95% confidence interval (CI)) of at least one form of malnutrition as reported in individual studies were constructed. For studies where malnutrition rate was reported for multiple indicators, the highest proportion was included. For meta-analysis, we used MedCalc® Statistical Software version 20.009 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; accessed on 20 July 2021). To investigate the heterogeneity we used a random effect model in the analysis. Heterogeneity was considered mild if I2 < 30%, moderate if I2 = 30–50%, and notable if I2 > 50%.", "We identified 22 countries whose official language is Arabic [9]. One author (C.K.) searched the following bibliographic databases—OVID Medline (1946–25 June 2021), OVID Embase (1947–1 July 2021), CINAHL via EBSCO (1982–July 2021), Cochrane Library Database of Systematic Reviews (Issue 7 of 12, 2021), Cochrane Central Register of Controlled Trials (Issue 7 of 12, 2021) and SCOPUS (1788–July 2021) to find publications on nutritional status among children and adolescents with CP in ASCs. The final search was completed on 3 July 2021. 
No language or date limits were applied to ensure maximum retrieval.\nThe search used controlled vocabulary terms including ‘Cerebral Palsy’, ‘Nutritional status’, ‘Nutritional Sciences’, ‘Malnutrition’, ‘Thinness’, ‘Growth disorders’, ‘Cachexia’, ‘Body Mass Index’, ‘Overweight,’ ‘Obesity’, “Infant Newborn, ‘Infant’, ’Child Preschool’, ‘Child’ and ‘Adolescent’. These were used with corresponding text-word terms. Text-word terms were truncated where necessary to include all relevant term endings. The search terms were combined with the individual country list terms provided in Table 1. The Ovid Medline search strategy used is provided in Appendix A.", "Study selection was completed following a pre-set eligibility criteria developed by three reviewers (G.K., S.M., & S.M.M.). The inclusion criteria were as follows: (1) studies reported original observations (from observational and analytical study design); (2) study participants were children and/or adolescents with CP aged up to 18 years in ASCs; and (3) studies reported malnutrition (i.e., underweight, or overweight) as an outcome or in the background characteristics.\nThe exclusion criteria were as follows: (1) studies reporting a single case, case series, non-observational studies (e.g., systematic reviews, narrative reviews, scoping reviews), conference reports/posters, (2) study participants were only malnourished children or adults with CP, (3) conducted in non-Arabic speaking countries.\nTwo reviewers, (S.M.M. and G.K.) independently reviewed the identified studies and disagreements were resolved by a third reviewer (I.J.) by consensus. The review protocol has been registered in PROSPERO (registration number: CRD42021244171—https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=244171—accessed on 27 July 2021).", "We assessed the selected studies to identify risk of bias using the Newcastle-Ottawa Quality Assessment Scale (NOS) [10]. The assessment was completed by the first author (S.M.M.) with support of an external reviewer (H.B.). Results of individual studies included in this review are shown in Table 2. All seven articles included in this review displayed good quality in all three areas of the assessment (i.e., selection, comparability, and outcome). None of the studies were excluded due to poor quality at this stage as all of them met the standard thresholds for inclusion.", "Data extraction was completed in an Excel templated developed by the first author (S.M.M.) in consultation with another two reviewers (G.K. and I.J.). Two reviewers (R.S. and I.J.) completed data extraction from all seven studies independently. Any differences identified were resolved following discussion with a third reviewer (G.K.). As the most commonly utilized method reported in the studies was anthropometric measurements, the following were extracted as available: (i) study characteristics (citation, implementation country, study settings, study design, study participants, samples size, age and gender, study duration), (ii) outcome measures/measurements used (anthropometric, biochemical, others), (iii) outcome reported (malnutrition proportions and significantly associated risk factors). If any information was unavailable, then it was documented as ‘not reported’.", "Descriptive information (e.g., study characteristics and outcome measures) were presented in table format. The rate of malnutrition was reported as documented in the original study. 
Factors related to malnutrition reported in individual studies were also summarized, but the effect size could not be estimated due to lack of consistent data. Furthermore, a forest plot and a funnel plot showing the proportion (with 95% confidence interval (CI)) of at least one form of malnutrition as reported in individual studies were constructed. For studies where malnutrition rate was reported for multiple indicators, the highest proportion was included. For meta-analysis, we used MedCalc® Statistical Software version 20.009 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; accessed on 20 July 2021). To investigate the heterogeneity we used a random effect model in the analysis. Heterogeneity was considered mild if I2 < 30%, moderate if I2 = 30–50%, and notable if I2 > 50%.", "3.1. Study Characteristics and Participants A total of 79 titles were identified from the databases following the search strategy described above. After deduplication using EndNoteX9 citation manager and a manual re-check, 50 primary studies were identified of which 41 irrelevant studies were excluded and nine studies were eligible for full-text review. Following consensus among the reviewers, seven articles were selected for inclusion and data extraction. The details have been summarized in Figure 1.\nTable 3 summarizes the characteristics of the included studies (n = 7). The studies were published between 1984 and 2021 and in English language [4,11,12,13,14,15,16]. Most of the studies included were from Saudi Arabia (n = 4) [4,11,12,16], and the remaining were from Egypt (n = 1) [13], United Arab Emirates (n = 1) [14], and Jordan (n = 1) [15]. The study designs of the included studies were cross-sectional (n = 5) [4,11,13,14,15], retrospective record review (n = 1) [12], and in one study the design was not clearly mentioned [16]. Five studies were hospital/institution-based [4,12,13,15,16], one was school-based [11] and one was a population-based study [14]. Studies differed in terms of duration, sample size, causes of malnutrition, assessment measures used, and nutritional indicators reported.\nOverall, two studies were conducted among children with CP only [4,12], whereas, the remaining studies included children with CP as part of a larger cohort of children with disability (n = 2) [12,15], special needs (n = 1) [11], or compared with control groups (n = 2) [13,15]. The total of 400 pooled participants ranged from 12 to 119 children with CP in each study, whose age ranged between 1–18.4 years. Male-female numbers/percentages were available for five of seven studies, which ranged between 47% to 58.3% males, and 41% to 53% females.\nA total of 79 titles were identified from the databases following the search strategy described above. After deduplication using EndNoteX9 citation manager and a manual re-check, 50 primary studies were identified of which 41 irrelevant studies were excluded and nine studies were eligible for full-text review. Following consensus among the reviewers, seven articles were selected for inclusion and data extraction. The details have been summarized in Figure 1.\nTable 3 summarizes the characteristics of the included studies (n = 7). The studies were published between 1984 and 2021 and in English language [4,11,12,13,14,15,16]. Most of the studies included were from Saudi Arabia (n = 4) [4,11,12,16], and the remaining were from Egypt (n = 1) [13], United Arab Emirates (n = 1) [14], and Jordan (n = 1) [15]. 
The study designs of the included studies were cross-sectional (n = 5) [4,11,13,14,15], retrospective record review (n = 1) [12], and in one study the design was not clearly mentioned [16]. Five studies were hospital/institution-based [4,12,13,15,16], one was school-based [11] and one was a population-based study [14]. Studies differed in terms of duration, sample size, causes of malnutrition, assessment measures used, and nutritional indicators reported.\nOverall, two studies were conducted among children with CP only [4,12], whereas, the remaining studies included children with CP as part of a larger cohort of children with disability (n = 2) [12,15], special needs (n = 1) [11], or compared with control groups (n = 2) [13,15]. The total of 400 pooled participants ranged from 12 to 119 children with CP in each study, whose age ranged between 1–18.4 years. Male-female numbers/percentages were available for five of seven studies, which ranged between 47% to 58.3% males, and 41% to 53% females.\n3.2. Measurements Used for Nutritional Assessment Among reported nutritional assessment indicators used, all studies used at least one standard anthropometric measurement tool. Most commonly reported indicators were percentiles/z-scores for weight-for-age (n = 3) [4,12,13], height for age (n = 2) [4,13], and BMI/BMI-for-age (n = 3) [4,11,14]. Additionally, body composition and biochemical tests were reported in one study [13] as an indicator for nutritional status. The nutritional indicators reported in the included studies have been summarized in Table 4.\nAmong reported nutritional assessment indicators used, all studies used at least one standard anthropometric measurement tool. Most commonly reported indicators were percentiles/z-scores for weight-for-age (n = 3) [4,12,13], height for age (n = 2) [4,13], and BMI/BMI-for-age (n = 3) [4,11,14]. Additionally, body composition and biochemical tests were reported in one study [13] as an indicator for nutritional status. The nutritional indicators reported in the included studies have been summarized in Table 4.\n3.3. Malnutrition Rate among Children with CP Out of a total N = 952 participants in the included studies, n = 400 were children with CP and were eligible for estimation of the pooled prevalence of malnutrition. However, the proportion of at least one form of malnutrition among children with CP was reported in n = 6 studies [4,11,12,13,14,16] whereas the mean (SD) nutritional indicator was reported in n = 2 studies [14,15]. The pooled estimates suggest that 48.84–91.67% children with CP in the included studies had at least one form of malnutrition (pooled prevalence of 71.46%, 95% CI: 55.52–85.04, p < 0.0001). Moderate to severe underweight was most frequently reported (n = 4) and ranged between 7%–84.9% among the participating children with CP [4,11,12,13]. Being overweight was reported in n = 3 studies and ranged between 2.5–25% [11,12,14] (Table 5, Figure 2 and Figure 3).\nOut of a total N = 952 participants in the included studies, n = 400 were children with CP and were eligible for estimation of the pooled prevalence of malnutrition. However, the proportion of at least one form of malnutrition among children with CP was reported in n = 6 studies [4,11,12,13,14,16] whereas the mean (SD) nutritional indicator was reported in n = 2 studies [14,15]. The pooled estimates suggest that 48.84–91.67% children with CP in the included studies had at least one form of malnutrition (pooled prevalence of 71.46%, 95% CI: 55.52–85.04, p < 0.0001). 
Moderate to severe underweight was most frequently reported (n = 4) and ranged between 7%–84.9% among the participating children with CP [4,11,12,13]. Being overweight was reported in n = 3 studies and ranged between 2.5–25% [11,12,14] (Table 5, Figure 2 and Figure 3).\n3.4. Underlying Risk Factors of Malnutrition Five out of seven included studies reported the factors related to malnutrition among children with CP in ASCs [4,11,12,13,14]. Overall, malnutrition was found higher among children with moderate-severe gross motor function limitation (e.g., GMFCS level III–V) [13,15], oro-motor dysfunction/feeding difficulties [12,13], with traumatic dental injury, caries and medical complications [11,12]. Furthermore, older age of the child, presence of cognitive impairment and inadequate energy intake were reported as contributing factors to malnutrition among children with CP in one study [4].\nFive out of seven included studies reported the factors related to malnutrition among children with CP in ASCs [4,11,12,13,14]. Overall, malnutrition was found higher among children with moderate-severe gross motor function limitation (e.g., GMFCS level III–V) [13,15], oro-motor dysfunction/feeding difficulties [12,13], with traumatic dental injury, caries and medical complications [11,12]. Furthermore, older age of the child, presence of cognitive impairment and inadequate energy intake were reported as contributing factors to malnutrition among children with CP in one study [4].\n3.5. Study Quality and Heterogeneity The symmetrical funnel plot in Figure 3 revealed that there was no substantial publication bias in the meta-analysis (Figure 2) for the proportion of malnutrition estimates against corresponding standard error. However, there is a high clinical heterogeneity (I2 = 88.40%) as the included studies did not use uniform measurements of malnutrition.\nThe symmetrical funnel plot in Figure 3 revealed that there was no substantial publication bias in the meta-analysis (Figure 2) for the proportion of malnutrition estimates against corresponding standard error. However, there is a high clinical heterogeneity (I2 = 88.40%) as the included studies did not use uniform measurements of malnutrition.", "A total of 79 titles were identified from the databases following the search strategy described above. After deduplication using EndNoteX9 citation manager and a manual re-check, 50 primary studies were identified of which 41 irrelevant studies were excluded and nine studies were eligible for full-text review. Following consensus among the reviewers, seven articles were selected for inclusion and data extraction. The details have been summarized in Figure 1.\nTable 3 summarizes the characteristics of the included studies (n = 7). The studies were published between 1984 and 2021 and in English language [4,11,12,13,14,15,16]. Most of the studies included were from Saudi Arabia (n = 4) [4,11,12,16], and the remaining were from Egypt (n = 1) [13], United Arab Emirates (n = 1) [14], and Jordan (n = 1) [15]. The study designs of the included studies were cross-sectional (n = 5) [4,11,13,14,15], retrospective record review (n = 1) [12], and in one study the design was not clearly mentioned [16]. Five studies were hospital/institution-based [4,12,13,15,16], one was school-based [11] and one was a population-based study [14]. 
Studies differed in terms of duration, sample size, causes of malnutrition, assessment measures used, and nutritional indicators reported.\nOverall, two studies were conducted among children with CP only [4,12], whereas, the remaining studies included children with CP as part of a larger cohort of children with disability (n = 2) [12,15], special needs (n = 1) [11], or compared with control groups (n = 2) [13,15]. The total of 400 pooled participants ranged from 12 to 119 children with CP in each study, whose age ranged between 1–18.4 years. Male-female numbers/percentages were available for five of seven studies, which ranged between 47% to 58.3% males, and 41% to 53% females.", "Among reported nutritional assessment indicators used, all studies used at least one standard anthropometric measurement tool. Most commonly reported indicators were percentiles/z-scores for weight-for-age (n = 3) [4,12,13], height for age (n = 2) [4,13], and BMI/BMI-for-age (n = 3) [4,11,14]. Additionally, body composition and biochemical tests were reported in one study [13] as an indicator for nutritional status. The nutritional indicators reported in the included studies have been summarized in Table 4.", "Out of a total N = 952 participants in the included studies, n = 400 were children with CP and were eligible for estimation of the pooled prevalence of malnutrition. However, the proportion of at least one form of malnutrition among children with CP was reported in n = 6 studies [4,11,12,13,14,16] whereas the mean (SD) nutritional indicator was reported in n = 2 studies [14,15]. The pooled estimates suggest that 48.84–91.67% children with CP in the included studies had at least one form of malnutrition (pooled prevalence of 71.46%, 95% CI: 55.52–85.04, p < 0.0001). Moderate to severe underweight was most frequently reported (n = 4) and ranged between 7%–84.9% among the participating children with CP [4,11,12,13]. Being overweight was reported in n = 3 studies and ranged between 2.5–25% [11,12,14] (Table 5, Figure 2 and Figure 3).", "Five out of seven included studies reported the factors related to malnutrition among children with CP in ASCs [4,11,12,13,14]. Overall, malnutrition was found higher among children with moderate-severe gross motor function limitation (e.g., GMFCS level III–V) [13,15], oro-motor dysfunction/feeding difficulties [12,13], with traumatic dental injury, caries and medical complications [11,12]. Furthermore, older age of the child, presence of cognitive impairment and inadequate energy intake were reported as contributing factors to malnutrition among children with CP in one study [4].", "The symmetrical funnel plot in Figure 3 revealed that there was no substantial publication bias in the meta-analysis (Figure 2) for the proportion of malnutrition estimates against corresponding standard error. However, there is a high clinical heterogeneity (I2 = 88.40%) as the included studies did not use uniform measurements of malnutrition.", "To the best of our knowledge, this is the first systematic review reporting the burden of malnutrition and its underlying risk factors among children and adolescents with CP in ASCs. In our review we observed that the burden of malnutrition among children with CP in ASCs is obviously understudied. Although we included all 22 countries during our detailed search, the results yielded studies from only four countries. 
Furthermore, most studies were conducted in institution-based settings (e.g., hospitals, health care facilities, schools) limiting the opportunities to generalize the findings. This indicates an urgent need for more medical research on this crucial issue, especially in the setting of low-to-middle income countries (LMIC). Although most of the ASCs are classified as low or middle income, with the exception of the Gulf countries [17], among the included studies in our review only two (out of seven) were from LMIC settings (e.g., Egypt, Jordan) [13,15]. More research is needed to investigate the factors that contribute to this evidence gap.\nThe indicators used/type of malnutrition reported varied substantially between the studies and sufficient data were not available to estimate the pooled prevalence of different types of malnutrition (e.g., underweight, stunting, overweight, wasting, etc.). Hence, we reported the pooled proportion of at least one form of malnutrition among participating children with CP in the Arabic-speaking countries. Nevertheless, the overall malnutrition rate was high among children with CP in ASCs, especially when compared to children without CP.\nBeing underweight was the most commonly reported form of malnutrition, although the proportions varied substantially between countries. However, when compared to other institution-based studies, the proportion of undernutrition was higher in Arabic-speaking LMICs (e.g., Egypt) than non-Arabic-speaking LMICs (e.g., Vietnam and Argentina) [18,19,20]. We also observed a wide range of overweight/obesity among the participating children in the included studies.\nMalnutrition in children with disabilities, including CP, could be due to several interlinked underlying risk factors which varies from one population to another [20]. Only a few of the included studies reported the underlying factors, of which gross motor function and feeding difficulties were predominant [12,13,15]. Although we could not measure the effect size of these underlying factors on malnutrition rate, due to the heterogeneity in the reported data (I2 = 84.40%), it is known that gross motor function significantly affects nutritional status and is closely related to the presence and severity of feeding difficulties among children with CP [21,22]. Children with higher gross motor impairment therefore require careful evaluation and nutritional intervention to improve their nutritional as well as functional outcome [20]. One study also reported inadequate energy intake as an influencing factor of malnutrition among the participating children [4]. This relationship is straightforward, but the reason for lack of energy consumption could be due to clinical factors or lack of access to resources. All these findings indicate that there is an urgent need to generate robust data to identify the modifiable causes and a potential practical intervention relating to these crucial issues among children with CP in ASCs. Malnutrition among children with CP is a major concern. It is often associated with a number of other comorbidities. Iron deficiency anaemia (IDA), renal impairment, auditory and visual deficiency, low bone mineral density, poor growth, and infections have been reported in previous studies [4,12,14,15,16,23].\nThis review has some limitations which are evident in the small number of studies (seven for 22 countries), so not all countries are represented. Thence, we did not exclude any studies based on the CP definition. 
However, for outcome measures such as undernutrition or overnutrition, we used standard criteria. For instance, underweight was defined as a child’s weight-for-age being ≤ −2 SD or below the 15th percentile.\nAlthough we conducted a comprehensive search, the number of studies identified was very small, indicating that there is a large gap in the evidence in ASCs in this regard. This is one of the main reasons why this review assesses and maps the existing evidence to generate comprehensive data on the nutritional status of children with CP in ASCs.\nIn addition, there was high clinical heterogeneity, non-uniform anthropometric measurements, and the age group ranged up to 18.4 years in one study [14], although one of the inclusion criteria was up to 18 years old. The included studies were mostly conducted in institution-based settings, hence the pooled estimates are not generalizable. We could not estimate the effect size of different underlying factors on nutritional status of children with CP in ASCs, although this was one of our study objectives. Furthermore, malnutrition can take several forms, including underweight and/or overweight. However, because anthropometric measurements are the most commonly used method, and the majority of studies reporting nutritional status of children used those terminologies, we only focused on nutritional status reported based on anthropometric measurements. Nevertheless, the strength of this review is that it is a novel systematic review and meta-analysis on an under-researched theme. It addresses a very important public health issue involving children with disabilities such as CP. All of the studies included are of good quality with a symmetrical funnel plot.", "Malnutrition in children and adolescents with disabilities and/or CP is an existing problem in ASCs but there is a dearth of medical research. Focused research is needed to fill the large evidence gap and identify need-based effective nutrition intervention for children with CP in these countries." ]
[ "intro", null, null, null, null, null, null, "results", "subjects", null, null, null, null, "discussion", "conclusions" ]
[ "Arabic-speaking countries", "malnutrition", "children", "adolescents", "cerebral palsy" ]
1. Introduction: Cerebral palsy (CP) is considered as one of the leading causes of motor disability among children and adolescents [1]. Malnutrition is defined as a person’s energy and/or nutritional consumption being deficient, excessive, or unbalanced. Malnutrition has a broad definition that refers to two types of problem. First, stunting (low height for age), wasting (low weight for height), underweight (low weight for age), and micronutrient deficiencies or insufficiencies are some of the symptoms of undernutrition (a lack of important vitamins and minerals). Second, overweight, obesity, and noncommunicable diseases linked to diet are the other two (such as heart disease, stroke, diabetes, and cancer) [2]. Malnutrition can be seen as a secondary health issue that can impact on the overall health and well-being of children with CP and their families [3]. It occurs when food intake falls short of the requirements for normal body functions, causing growth and development problems [4]. Malnutrition must be diagnosed, prevented, and managed early in children’s lives because growth and development depend on optimum nutritional intake. Malnutrition in children with a chronic condition such as CP is caused by various factors, including the underlying disorder and non-illness-related factors such as increased caloric demands, malabsorption, altered nutrient use, and nutrient provision limits due to fluid status and/or feeding tolerance [5]. There are many ways to evaluate malnutrition and related risk factors among children, including, but not limited to, standard anthropometric measures like weight and its percentile, height and its percentile, body mass index (BMI), waist, head, and arm circumferences. Other measurements that could be used are total body water, fat mass, triceps fold thickness, z-score, and biochemical parameters such as hemoglobin, ferritin, and albumin [4,6]. Despite differences among Arabic-speaking countries (ASCs) (Table 1) in the quality of health care provided, they share many common customs in relation to cultural, social, and food habits. Regardless of these similarities and differences, children with CP are equally vulnerable to malnutrition, yet the burden of malnutrition among children and adolescents with CP in these countries has not been quantified through a systematic review. Because of the dearth of knowledge regarding the nutritional status of children with CP from ASCs, and to advance the global knowledge base on this crucial issue, we aimed to estimate the burden and underlying risk factors of malnutrition among children and adolescents with CP in the ASCs based on available published literature, to facilitate evidence-based medicine. We realize the need for systematic data collection and reporting of the limited available studies. In this review, therefore, we focused on summarizing the available information regarding the size of the problem and its causes, despite the scarcity of available resources that could be used to conduct large scale studies and nutrition intervention in a similar context. 2. Materials and Methods: For this review, we followed the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines on conducting systematic reviews, including the 27-item checklist [7,8]. 2.1. Data Sources and Search Strategy We identified 22 countries whose official language is Arabic [9]. One author (C.K.) 
searched the following bibliographic databases—OVID Medline (1946–25 June 2021), OVID Embase (1947–1 July 2021), CINAHL via EBSCO (1982–July 2021), Cochrane Library Database of Systematic Reviews (Issue 7 of 12, 2021), Cochrane Central Register of Controlled Trials (Issue 7 of 12, 2021) and SCOPUS (1788–July 2021) to find publications on nutritional status among children and adolescents with CP in ASCs. The final search was completed on 3 July 2021. No language or date limits were applied to ensure maximum retrieval. The search used controlled vocabulary terms including ‘Cerebral Palsy’, ‘Nutritional status’, ‘Nutritional Sciences’, ‘Malnutrition’, ‘Thinness’, ‘Growth disorders’, ‘Cachexia’, ‘Body Mass Index’, ‘Overweight,’ ‘Obesity’, “Infant Newborn, ‘Infant’, ’Child Preschool’, ‘Child’ and ‘Adolescent’. These were used with corresponding text-word terms. Text-word terms were truncated where necessary to include all relevant term endings. The search terms were combined with the individual country list terms provided in Table 1. The Ovid Medline search strategy used is provided in Appendix A. We identified 22 countries whose official language is Arabic [9]. One author (C.K.) searched the following bibliographic databases—OVID Medline (1946–25 June 2021), OVID Embase (1947–1 July 2021), CINAHL via EBSCO (1982–July 2021), Cochrane Library Database of Systematic Reviews (Issue 7 of 12, 2021), Cochrane Central Register of Controlled Trials (Issue 7 of 12, 2021) and SCOPUS (1788–July 2021) to find publications on nutritional status among children and adolescents with CP in ASCs. The final search was completed on 3 July 2021. No language or date limits were applied to ensure maximum retrieval. The search used controlled vocabulary terms including ‘Cerebral Palsy’, ‘Nutritional status’, ‘Nutritional Sciences’, ‘Malnutrition’, ‘Thinness’, ‘Growth disorders’, ‘Cachexia’, ‘Body Mass Index’, ‘Overweight,’ ‘Obesity’, “Infant Newborn, ‘Infant’, ’Child Preschool’, ‘Child’ and ‘Adolescent’. These were used with corresponding text-word terms. Text-word terms were truncated where necessary to include all relevant term endings. The search terms were combined with the individual country list terms provided in Table 1. The Ovid Medline search strategy used is provided in Appendix A. 2.2. Study Selection and Inclusion Study selection was completed following a pre-set eligibility criteria developed by three reviewers (G.K., S.M., & S.M.M.). The inclusion criteria were as follows: (1) studies reported original observations (from observational and analytical study design); (2) study participants were children and/or adolescents with CP aged up to 18 years in ASCs; and (3) studies reported malnutrition (i.e., underweight, or overweight) as an outcome or in the background characteristics. The exclusion criteria were as follows: (1) studies reporting a single case, case series, non-observational studies (e.g., systematic reviews, narrative reviews, scoping reviews), conference reports/posters, (2) study participants were only malnourished children or adults with CP, (3) conducted in non-Arabic speaking countries. Two reviewers, (S.M.M. and G.K.) independently reviewed the identified studies and disagreements were resolved by a third reviewer (I.J.) by consensus. The review protocol has been registered in PROSPERO (registration number: CRD42021244171—https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=244171—accessed on 27 July 2021). 
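The screening itself was done manually by the reviewers; purely as a schematic restatement of the stated rules, the hypothetical filter below shows how the inclusion and exclusion criteria translate into simple checks. The record fields, excluded-design labels and the (deliberately abbreviated) country list are assumptions for illustration only.

    # subset of Arabic-speaking countries, for illustration; the review covered 22 countries
    ARABIC_SPEAKING_COUNTRIES = {"Saudi Arabia", "Egypt", "United Arab Emirates", "Jordan"}
    EXCLUDED_DESIGNS = {"case report", "case series", "systematic review",
                        "narrative review", "scoping review", "conference abstract"}

    def is_eligible(record):
        """Schematic screening check mirroring the review's eligibility criteria."""
        original_observation = record["design"] not in EXCLUDED_DESIGNS
        right_population = (record["has_cp_participants"]
                            and record["max_age_years"] <= 18
                            and record["country"] in ARABIC_SPEAKING_COUNTRIES)
        reports_malnutrition = record["reports_malnutrition_outcome"]
        return original_observation and right_population and reports_malnutrition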
Study selection was completed following a pre-set eligibility criteria developed by three reviewers (G.K., S.M., & S.M.M.). The inclusion criteria were as follows: (1) studies reported original observations (from observational and analytical study design); (2) study participants were children and/or adolescents with CP aged up to 18 years in ASCs; and (3) studies reported malnutrition (i.e., underweight, or overweight) as an outcome or in the background characteristics. The exclusion criteria were as follows: (1) studies reporting a single case, case series, non-observational studies (e.g., systematic reviews, narrative reviews, scoping reviews), conference reports/posters, (2) study participants were only malnourished children or adults with CP, (3) conducted in non-Arabic speaking countries. Two reviewers, (S.M.M. and G.K.) independently reviewed the identified studies and disagreements were resolved by a third reviewer (I.J.) by consensus. The review protocol has been registered in PROSPERO (registration number: CRD42021244171—https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=244171—accessed on 27 July 2021). 2.3. Risk of Bias Assessment We assessed the selected studies to identify risk of bias using the Newcastle-Ottawa Quality Assessment Scale (NOS) [10]. The assessment was completed by the first author (S.M.M.) with support of an external reviewer (H.B.). Results of individual studies included in this review are shown in Table 2. All seven articles included in this review displayed good quality in all three areas of the assessment (i.e., selection, comparability, and outcome). None of the studies were excluded due to poor quality at this stage as all of them met the standard thresholds for inclusion. We assessed the selected studies to identify risk of bias using the Newcastle-Ottawa Quality Assessment Scale (NOS) [10]. The assessment was completed by the first author (S.M.M.) with support of an external reviewer (H.B.). Results of individual studies included in this review are shown in Table 2. All seven articles included in this review displayed good quality in all three areas of the assessment (i.e., selection, comparability, and outcome). None of the studies were excluded due to poor quality at this stage as all of them met the standard thresholds for inclusion. 2.4. Data Extraction Data extraction was completed in an Excel templated developed by the first author (S.M.M.) in consultation with another two reviewers (G.K. and I.J.). Two reviewers (R.S. and I.J.) completed data extraction from all seven studies independently. Any differences identified were resolved following discussion with a third reviewer (G.K.). As the most commonly utilized method reported in the studies was anthropometric measurements, the following were extracted as available: (i) study characteristics (citation, implementation country, study settings, study design, study participants, samples size, age and gender, study duration), (ii) outcome measures/measurements used (anthropometric, biochemical, others), (iii) outcome reported (malnutrition proportions and significantly associated risk factors). If any information was unavailable, then it was documented as ‘not reported’. Data extraction was completed in an Excel templated developed by the first author (S.M.M.) in consultation with another two reviewers (G.K. and I.J.). Two reviewers (R.S. and I.J.) completed data extraction from all seven studies independently. 
Any differences identified were resolved following discussion with a third reviewer (G.K.). As the most commonly utilized method reported in the studies was anthropometric measurements, the following were extracted as available: (i) study characteristics (citation, implementation country, study settings, study design, study participants, samples size, age and gender, study duration), (ii) outcome measures/measurements used (anthropometric, biochemical, others), (iii) outcome reported (malnutrition proportions and significantly associated risk factors). If any information was unavailable, then it was documented as ‘not reported’. 2.5. Data Analysis Descriptive information (e.g., study characteristics and outcome measures) were presented in table format. The rate of malnutrition was reported as documented in the original study. Factors related to malnutrition reported in individual studies were also summarized, but the effect size could not be estimated due to lack of consistent data. Furthermore, a forest plot and a funnel plot showing the proportion (with 95% confidence interval (CI)) of at least one form of malnutrition as reported in individual studies were constructed. For studies where malnutrition rate was reported for multiple indicators, the highest proportion was included. For meta-analysis, we used MedCalc® Statistical Software version 20.009 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; accessed on 20 July 2021). To investigate the heterogeneity we used a random effect model in the analysis. Heterogeneity was considered mild if I2 < 30%, moderate if I2 = 30–50%, and notable if I2 > 50%. Descriptive information (e.g., study characteristics and outcome measures) were presented in table format. The rate of malnutrition was reported as documented in the original study. Factors related to malnutrition reported in individual studies were also summarized, but the effect size could not be estimated due to lack of consistent data. Furthermore, a forest plot and a funnel plot showing the proportion (with 95% confidence interval (CI)) of at least one form of malnutrition as reported in individual studies were constructed. For studies where malnutrition rate was reported for multiple indicators, the highest proportion was included. For meta-analysis, we used MedCalc® Statistical Software version 20.009 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; accessed on 20 July 2021). To investigate the heterogeneity we used a random effect model in the analysis. Heterogeneity was considered mild if I2 < 30%, moderate if I2 = 30–50%, and notable if I2 > 50%. 2.1. Data Sources and Search Strategy: We identified 22 countries whose official language is Arabic [9]. One author (C.K.) searched the following bibliographic databases—OVID Medline (1946–25 June 2021), OVID Embase (1947–1 July 2021), CINAHL via EBSCO (1982–July 2021), Cochrane Library Database of Systematic Reviews (Issue 7 of 12, 2021), Cochrane Central Register of Controlled Trials (Issue 7 of 12, 2021) and SCOPUS (1788–July 2021) to find publications on nutritional status among children and adolescents with CP in ASCs. The final search was completed on 3 July 2021. No language or date limits were applied to ensure maximum retrieval. 
The search used controlled vocabulary terms including ‘Cerebral Palsy’, ‘Nutritional status’, ‘Nutritional Sciences’, ‘Malnutrition’, ‘Thinness’, ‘Growth disorders’, ‘Cachexia’, ‘Body Mass Index’, ‘Overweight,’ ‘Obesity’, “Infant Newborn, ‘Infant’, ’Child Preschool’, ‘Child’ and ‘Adolescent’. These were used with corresponding text-word terms. Text-word terms were truncated where necessary to include all relevant term endings. The search terms were combined with the individual country list terms provided in Table 1. The Ovid Medline search strategy used is provided in Appendix A. 2.2. Study Selection and Inclusion: Study selection was completed following a pre-set eligibility criteria developed by three reviewers (G.K., S.M., & S.M.M.). The inclusion criteria were as follows: (1) studies reported original observations (from observational and analytical study design); (2) study participants were children and/or adolescents with CP aged up to 18 years in ASCs; and (3) studies reported malnutrition (i.e., underweight, or overweight) as an outcome or in the background characteristics. The exclusion criteria were as follows: (1) studies reporting a single case, case series, non-observational studies (e.g., systematic reviews, narrative reviews, scoping reviews), conference reports/posters, (2) study participants were only malnourished children or adults with CP, (3) conducted in non-Arabic speaking countries. Two reviewers, (S.M.M. and G.K.) independently reviewed the identified studies and disagreements were resolved by a third reviewer (I.J.) by consensus. The review protocol has been registered in PROSPERO (registration number: CRD42021244171—https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=244171—accessed on 27 July 2021). 2.3. Risk of Bias Assessment: We assessed the selected studies to identify risk of bias using the Newcastle-Ottawa Quality Assessment Scale (NOS) [10]. The assessment was completed by the first author (S.M.M.) with support of an external reviewer (H.B.). Results of individual studies included in this review are shown in Table 2. All seven articles included in this review displayed good quality in all three areas of the assessment (i.e., selection, comparability, and outcome). None of the studies were excluded due to poor quality at this stage as all of them met the standard thresholds for inclusion. 2.4. Data Extraction: Data extraction was completed in an Excel templated developed by the first author (S.M.M.) in consultation with another two reviewers (G.K. and I.J.). Two reviewers (R.S. and I.J.) completed data extraction from all seven studies independently. Any differences identified were resolved following discussion with a third reviewer (G.K.). As the most commonly utilized method reported in the studies was anthropometric measurements, the following were extracted as available: (i) study characteristics (citation, implementation country, study settings, study design, study participants, samples size, age and gender, study duration), (ii) outcome measures/measurements used (anthropometric, biochemical, others), (iii) outcome reported (malnutrition proportions and significantly associated risk factors). If any information was unavailable, then it was documented as ‘not reported’. 2.5. Data Analysis: Descriptive information (e.g., study characteristics and outcome measures) were presented in table format. The rate of malnutrition was reported as documented in the original study. 
Factors related to malnutrition reported in individual studies were also summarized, but the effect size could not be estimated due to lack of consistent data. Furthermore, a forest plot and a funnel plot showing the proportion (with 95% confidence interval (CI)) of at least one form of malnutrition as reported in individual studies were constructed. For studies where malnutrition rate was reported for multiple indicators, the highest proportion was included. For meta-analysis, we used MedCalc® Statistical Software version 20.009 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; accessed on 20 July 2021). To investigate the heterogeneity we used a random effect model in the analysis. Heterogeneity was considered mild if I2 < 30%, moderate if I2 = 30–50%, and notable if I2 > 50%. 3. Results: 3.1. Study Characteristics and Participants A total of 79 titles were identified from the databases following the search strategy described above. After deduplication using EndNoteX9 citation manager and a manual re-check, 50 primary studies were identified of which 41 irrelevant studies were excluded and nine studies were eligible for full-text review. Following consensus among the reviewers, seven articles were selected for inclusion and data extraction. The details have been summarized in Figure 1. Table 3 summarizes the characteristics of the included studies (n = 7). The studies were published between 1984 and 2021 and in English language [4,11,12,13,14,15,16]. Most of the studies included were from Saudi Arabia (n = 4) [4,11,12,16], and the remaining were from Egypt (n = 1) [13], United Arab Emirates (n = 1) [14], and Jordan (n = 1) [15]. The study designs of the included studies were cross-sectional (n = 5) [4,11,13,14,15], retrospective record review (n = 1) [12], and in one study the design was not clearly mentioned [16]. Five studies were hospital/institution-based [4,12,13,15,16], one was school-based [11] and one was a population-based study [14]. Studies differed in terms of duration, sample size, causes of malnutrition, assessment measures used, and nutritional indicators reported. Overall, two studies were conducted among children with CP only [4,12], whereas, the remaining studies included children with CP as part of a larger cohort of children with disability (n = 2) [12,15], special needs (n = 1) [11], or compared with control groups (n = 2) [13,15]. The total of 400 pooled participants ranged from 12 to 119 children with CP in each study, whose age ranged between 1–18.4 years. Male-female numbers/percentages were available for five of seven studies, which ranged between 47% to 58.3% males, and 41% to 53% females. A total of 79 titles were identified from the databases following the search strategy described above. After deduplication using EndNoteX9 citation manager and a manual re-check, 50 primary studies were identified of which 41 irrelevant studies were excluded and nine studies were eligible for full-text review. Following consensus among the reviewers, seven articles were selected for inclusion and data extraction. The details have been summarized in Figure 1. Table 3 summarizes the characteristics of the included studies (n = 7). The studies were published between 1984 and 2021 and in English language [4,11,12,13,14,15,16]. Most of the studies included were from Saudi Arabia (n = 4) [4,11,12,16], and the remaining were from Egypt (n = 1) [13], United Arab Emirates (n = 1) [14], and Jordan (n = 1) [15]. 
The study designs of the included studies were cross-sectional (n = 5) [4,11,13,14,15], retrospective record review (n = 1) [12], and in one study the design was not clearly mentioned [16]. Five studies were hospital/institution-based [4,12,13,15,16], one was school-based [11] and one was a population-based study [14]. Studies differed in terms of duration, sample size, causes of malnutrition, assessment measures used, and nutritional indicators reported. Overall, two studies were conducted among children with CP only [4,12], whereas, the remaining studies included children with CP as part of a larger cohort of children with disability (n = 2) [12,15], special needs (n = 1) [11], or compared with control groups (n = 2) [13,15]. The total of 400 pooled participants ranged from 12 to 119 children with CP in each study, whose age ranged between 1–18.4 years. Male-female numbers/percentages were available for five of seven studies, which ranged between 47% to 58.3% males, and 41% to 53% females. 3.2. Measurements Used for Nutritional Assessment Among reported nutritional assessment indicators used, all studies used at least one standard anthropometric measurement tool. Most commonly reported indicators were percentiles/z-scores for weight-for-age (n = 3) [4,12,13], height for age (n = 2) [4,13], and BMI/BMI-for-age (n = 3) [4,11,14]. Additionally, body composition and biochemical tests were reported in one study [13] as an indicator for nutritional status. The nutritional indicators reported in the included studies have been summarized in Table 4. Among reported nutritional assessment indicators used, all studies used at least one standard anthropometric measurement tool. Most commonly reported indicators were percentiles/z-scores for weight-for-age (n = 3) [4,12,13], height for age (n = 2) [4,13], and BMI/BMI-for-age (n = 3) [4,11,14]. Additionally, body composition and biochemical tests were reported in one study [13] as an indicator for nutritional status. The nutritional indicators reported in the included studies have been summarized in Table 4. 3.3. Malnutrition Rate among Children with CP Out of a total N = 952 participants in the included studies, n = 400 were children with CP and were eligible for estimation of the pooled prevalence of malnutrition. However, the proportion of at least one form of malnutrition among children with CP was reported in n = 6 studies [4,11,12,13,14,16] whereas the mean (SD) nutritional indicator was reported in n = 2 studies [14,15]. The pooled estimates suggest that 48.84–91.67% children with CP in the included studies had at least one form of malnutrition (pooled prevalence of 71.46%, 95% CI: 55.52–85.04, p < 0.0001). Moderate to severe underweight was most frequently reported (n = 4) and ranged between 7%–84.9% among the participating children with CP [4,11,12,13]. Being overweight was reported in n = 3 studies and ranged between 2.5–25% [11,12,14] (Table 5, Figure 2 and Figure 3). Out of a total N = 952 participants in the included studies, n = 400 were children with CP and were eligible for estimation of the pooled prevalence of malnutrition. However, the proportion of at least one form of malnutrition among children with CP was reported in n = 6 studies [4,11,12,13,14,16] whereas the mean (SD) nutritional indicator was reported in n = 2 studies [14,15]. The pooled estimates suggest that 48.84–91.67% children with CP in the included studies had at least one form of malnutrition (pooled prevalence of 71.46%, 95% CI: 55.52–85.04, p < 0.0001). 
Moderate to severe underweight was most frequently reported (n = 4) and ranged between 7%–84.9% among the participating children with CP [4,11,12,13]. Being overweight was reported in n = 3 studies and ranged between 2.5–25% [11,12,14] (Table 5, Figure 2 and Figure 3). 3.4. Underlying Risk Factors of Malnutrition Five out of seven included studies reported the factors related to malnutrition among children with CP in ASCs [4,11,12,13,14]. Overall, malnutrition was found higher among children with moderate-severe gross motor function limitation (e.g., GMFCS level III–V) [13,15], oro-motor dysfunction/feeding difficulties [12,13], with traumatic dental injury, caries and medical complications [11,12]. Furthermore, older age of the child, presence of cognitive impairment and inadequate energy intake were reported as contributing factors to malnutrition among children with CP in one study [4]. Five out of seven included studies reported the factors related to malnutrition among children with CP in ASCs [4,11,12,13,14]. Overall, malnutrition was found higher among children with moderate-severe gross motor function limitation (e.g., GMFCS level III–V) [13,15], oro-motor dysfunction/feeding difficulties [12,13], with traumatic dental injury, caries and medical complications [11,12]. Furthermore, older age of the child, presence of cognitive impairment and inadequate energy intake were reported as contributing factors to malnutrition among children with CP in one study [4]. 3.5. Study Quality and Heterogeneity The symmetrical funnel plot in Figure 3 revealed that there was no substantial publication bias in the meta-analysis (Figure 2) for the proportion of malnutrition estimates against corresponding standard error. However, there is a high clinical heterogeneity (I2 = 88.40%) as the included studies did not use uniform measurements of malnutrition. The symmetrical funnel plot in Figure 3 revealed that there was no substantial publication bias in the meta-analysis (Figure 2) for the proportion of malnutrition estimates against corresponding standard error. However, there is a high clinical heterogeneity (I2 = 88.40%) as the included studies did not use uniform measurements of malnutrition. 3.1. Study Characteristics and Participants: A total of 79 titles were identified from the databases following the search strategy described above. After deduplication using EndNoteX9 citation manager and a manual re-check, 50 primary studies were identified of which 41 irrelevant studies were excluded and nine studies were eligible for full-text review. Following consensus among the reviewers, seven articles were selected for inclusion and data extraction. The details have been summarized in Figure 1. Table 3 summarizes the characteristics of the included studies (n = 7). The studies were published between 1984 and 2021 and in English language [4,11,12,13,14,15,16]. Most of the studies included were from Saudi Arabia (n = 4) [4,11,12,16], and the remaining were from Egypt (n = 1) [13], United Arab Emirates (n = 1) [14], and Jordan (n = 1) [15]. The study designs of the included studies were cross-sectional (n = 5) [4,11,13,14,15], retrospective record review (n = 1) [12], and in one study the design was not clearly mentioned [16]. Five studies were hospital/institution-based [4,12,13,15,16], one was school-based [11] and one was a population-based study [14]. Studies differed in terms of duration, sample size, causes of malnutrition, assessment measures used, and nutritional indicators reported. 
Overall, two studies were conducted among children with CP only [4,12], whereas, the remaining studies included children with CP as part of a larger cohort of children with disability (n = 2) [12,15], special needs (n = 1) [11], or compared with control groups (n = 2) [13,15]. The total of 400 pooled participants ranged from 12 to 119 children with CP in each study, whose age ranged between 1–18.4 years. Male-female numbers/percentages were available for five of seven studies, which ranged between 47% to 58.3% males, and 41% to 53% females. 3.2. Measurements Used for Nutritional Assessment: Among reported nutritional assessment indicators used, all studies used at least one standard anthropometric measurement tool. Most commonly reported indicators were percentiles/z-scores for weight-for-age (n = 3) [4,12,13], height for age (n = 2) [4,13], and BMI/BMI-for-age (n = 3) [4,11,14]. Additionally, body composition and biochemical tests were reported in one study [13] as an indicator for nutritional status. The nutritional indicators reported in the included studies have been summarized in Table 4. 3.3. Malnutrition Rate among Children with CP: Out of a total N = 952 participants in the included studies, n = 400 were children with CP and were eligible for estimation of the pooled prevalence of malnutrition. However, the proportion of at least one form of malnutrition among children with CP was reported in n = 6 studies [4,11,12,13,14,16] whereas the mean (SD) nutritional indicator was reported in n = 2 studies [14,15]. The pooled estimates suggest that 48.84–91.67% children with CP in the included studies had at least one form of malnutrition (pooled prevalence of 71.46%, 95% CI: 55.52–85.04, p < 0.0001). Moderate to severe underweight was most frequently reported (n = 4) and ranged between 7%–84.9% among the participating children with CP [4,11,12,13]. Being overweight was reported in n = 3 studies and ranged between 2.5–25% [11,12,14] (Table 5, Figure 2 and Figure 3). 3.4. Underlying Risk Factors of Malnutrition: Five out of seven included studies reported the factors related to malnutrition among children with CP in ASCs [4,11,12,13,14]. Overall, malnutrition was found higher among children with moderate-severe gross motor function limitation (e.g., GMFCS level III–V) [13,15], oro-motor dysfunction/feeding difficulties [12,13], with traumatic dental injury, caries and medical complications [11,12]. Furthermore, older age of the child, presence of cognitive impairment and inadequate energy intake were reported as contributing factors to malnutrition among children with CP in one study [4]. 3.5. Study Quality and Heterogeneity: The symmetrical funnel plot in Figure 3 revealed that there was no substantial publication bias in the meta-analysis (Figure 2) for the proportion of malnutrition estimates against corresponding standard error. However, there is a high clinical heterogeneity (I2 = 88.40%) as the included studies did not use uniform measurements of malnutrition. 4. Discussion: To the best of our knowledge, this is the first systematic review reporting the burden of malnutrition and its underlying risk factors among children and adolescents with CP in ASCs. In our review we observed that the burden of malnutrition among children with CP in ASCs is obviously understudied. Although we included all 22 countries during our detailed search, the results yielded studies from only four countries. 
Furthermore, most studies were conducted in institution-based settings (e.g., hospitals, health care facilities, schools) limiting the opportunities to generalize the findings. This indicates an urgent need for more medical research on this crucial issue, especially in the setting of low-to-middle income countries (LMIC). Although most of the ASCs are classified as low or middle income, with the exception of the Gulf countries [17], among the included studies in our review only two (out of seven) were from LMIC settings (e.g., Egypt, Jordan) [13,15]. More research is needed to investigate the factors that contribute to this evidence gap. The indicators used/type of malnutrition reported varied substantially between the studies and sufficient data were not available to estimate the pooled prevalence of different types of malnutrition (e.g., underweight, stunting, overweight, wasting, etc.). Hence, we reported the pooled proportion of at least one form of malnutrition among participating children with CP in the Arabic-speaking countries. Nevertheless, the overall malnutrition rate was high among children with CP in ASCs, especially when compared to children without CP. Being underweight was the most commonly reported form of malnutrition, although the proportions varied substantially between countries. However, when compared to other institution-based studies, the proportion of undernutrition was higher in Arabic-speaking LMICs (e.g., Egypt) than non-Arabic-speaking LMICs (e.g., Vietnam and Argentina) [18,19,20]. We also observed a wide range of overweight/obesity among the participating children in the included studies. Malnutrition in children with disabilities, including CP, could be due to several interlinked underlying risk factors which varies from one population to another [20]. Only a few of the included studies reported the underlying factors, of which gross motor function and feeding difficulties were predominant [12,13,15]. Although we could not measure the effect size of these underlying factors on malnutrition rate, due to the heterogeneity in the reported data (I2 = 84.40%), it is known that gross motor function significantly affects nutritional status and is closely related to the presence and severity of feeding difficulties among children with CP [21,22]. Children with higher gross motor impairment therefore require careful evaluation and nutritional intervention to improve their nutritional as well as functional outcome [20]. One study also reported inadequate energy intake as an influencing factor of malnutrition among the participating children [4]. This relationship is straightforward, but the reason for lack of energy consumption could be due to clinical factors or lack of access to resources. All these findings indicate that there is an urgent need to generate robust data to identify the modifiable causes and a potential practical intervention relating to these crucial issues among children with CP in ASCs. Malnutrition among children with CP is a major concern. It is often associated with a number of other comorbidities. Iron deficiency anaemia (IDA), renal impairment, auditory and visual deficiency, low bone mineral density, poor growth, and infections have been reported in previous studies [4,12,14,15,16,23]. This review has some limitations which are evident in the small number of studies (seven for 22 countries), so not all countries are represented. Thence, we did not exclude any studies based on the CP definition. 
However, for outcome measures such as undernutrition or overnutrition, we used standard criteria. For instance, underweight was defined as a child’s weight-for-age being ≤ −2 SD or below the 15th percentile. Although we conducted a comprehensive search, the number of studies identified was very small, indicating that there is a large gap in the evidence in ASCs in this regard. This is one of the main reasons why this review assesses and maps the existing evidence to generate comprehensive data on the nutritional status of children with CP in ASCs. In addition, there was high clinical heterogeneity, non-uniform anthropometric measurements, and the age group ranged up to 18.4 years in one study [14], although one of the inclusion criteria was up to 18 years old. The included studies were mostly conducted in institution-based settings, hence the pooled estimates are not generalizable. We could not estimate the effect size of different underlying factors on nutritional status of children with CP in ASCs, although this was one of our study objectives. Furthermore, malnutrition can take several forms, including underweight and/or overweight. However, because anthropometric measurements are the most commonly used method, and the majority of studies reporting nutritional status of children used those terminologies, we only focused on nutritional status reported based on anthropometric measurements. Nevertheless, the strength of this review is that it is a novel systematic review and meta-analysis on an under-researched theme. It addresses a very important public health issue involving children with disabilities such as CP. All of the studies included are of good quality with a symmetrical funnel plot. 5. Conclusions: Malnutrition in children and adolescents with disabilities and/or CP is an existing problem in ASCs but there is a dearth of medical research. Focused research is needed to fill the large evidence gap and identify need-based effective nutrition intervention for children with CP in these countries.
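Since the review's outcome definitions rest on anthropometric cut-offs such as the weight-for-age criterion noted in the limitations above, the short sketch below illustrates how WHO-style z-scores are commonly mapped to malnutrition categories. The thresholds shown follow the conventional WHO under-five cut-offs and are assumptions here; they are not necessarily the exact criteria applied by every included study.

    def classify_anthropometry(waz=None, haz=None, whz=None, bmi_z=None):
        """Map WHO-style z-scores to commonly used malnutrition labels.

        waz: weight-for-age, haz: height-for-age, whz: weight-for-height,
        bmi_z: BMI-for-age. Any indicator may be None if it was not measured.
        Cut-offs follow the WHO under-five convention; school-age conventions differ.
        """
        labels = []
        if waz is not None and waz <= -2:
            labels.append("severely underweight" if waz <= -3 else "underweight")
        if haz is not None and haz <= -2:
            labels.append("severely stunted" if haz <= -3 else "stunted")
        if whz is not None and whz <= -2:
            labels.append("severely wasted" if whz <= -3 else "wasted")
        if bmi_z is not None and bmi_z > 2:
            labels.append("obese" if bmi_z > 3 else "overweight")
        elif bmi_z is not None and bmi_z > 1:
            labels.append("at risk of overweight")
        return labels or ["no malnutrition flag on the supplied indicators"]

    # example: a child measured only on weight-for-age and BMI-for-age
    print(classify_anthropometry(waz=-2.4, bmi_z=0.3))   # -> ['underweight']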
Background: We aimed to estimate the burden and underlying risk factors of malnutrition among children and adolescents with cerebral palsy in Arabic-speaking countries. Methods: OVID Medline, OVID Embase, CINAHL via EBSCO, Cochrane Library, and SCOPUS databases were searched up to 3 July 2021. Publications were reviewed to identify relevant papers following pre-defined inclusion/exclusion criteria. Two reviewers independently assessed the studies for inclusion. Data extraction was independently completed by two reviewers. Descriptive and pooled analyses are reported. Results: From a total of 79 records screened, nine full-text articles were assessed for eligibility, of which seven studies met the inclusion criteria. Study characteristics, anthropometric measurements used, and nutritional outcomes reported varied between the studies. The included studies contained data on a total of 400 participants aged 1-18 years. Overall, an estimated 71.46% (95% confidence interval: 55.52-85.04) of children with cerebral palsy had at least one form of malnutrition. Severe gross motor function limitation, feeding difficulties, cognitive impairment and inadequate energy intake were the commonly reported underlying risk factors for malnutrition among children with cerebral palsy. Conclusions: The burden of malnutrition is high among children with cerebral palsy in Arabic-speaking countries. More research is needed for better understanding of this public health issue in these countries.
1. Introduction: Cerebral palsy (CP) is considered as one of the leading causes of motor disability among children and adolescents [1]. Malnutrition is defined as a person’s energy and/or nutritional consumption being deficient, excessive, or unbalanced. Malnutrition has a broad definition that refers to two types of problem. First, stunting (low height for age), wasting (low weight for height), underweight (low weight for age), and micronutrient deficiencies or insufficiencies are some of the symptoms of undernutrition (a lack of important vitamins and minerals). Second, overweight, obesity, and noncommunicable diseases linked to diet are the other two (such as heart disease, stroke, diabetes, and cancer) [2]. Malnutrition can be seen as a secondary health issue that can impact on the overall health and well-being of children with CP and their families [3]. It occurs when food intake falls short of the requirements for normal body functions, causing growth and development problems [4]. Malnutrition must be diagnosed, prevented, and managed early in children’s lives because growth and development depend on optimum nutritional intake. Malnutrition in children with a chronic condition such as CP is caused by various factors, including the underlying disorder and non-illness-related factors such as increased caloric demands, malabsorption, altered nutrient use, and nutrient provision limits due to fluid status and/or feeding tolerance [5]. There are many ways to evaluate malnutrition and related risk factors among children, including, but not limited to, standard anthropometric measures like weight and its percentile, height and its percentile, body mass index (BMI), waist, head, and arm circumferences. Other measurements that could be used are total body water, fat mass, triceps fold thickness, z-score, and biochemical parameters such as hemoglobin, ferritin, and albumin [4,6]. Despite differences among Arabic-speaking countries (ASCs) (Table 1) in the quality of health care provided, they share many common customs in relation to cultural, social, and food habits. Regardless of these similarities and differences, children with CP are equally vulnerable to malnutrition, yet the burden of malnutrition among children and adolescents with CP in these countries has not been quantified through a systematic review. Because of the dearth of knowledge regarding the nutritional status of children with CP from ASCs, and to advance the global knowledge base on this crucial issue, we aimed to estimate the burden and underlying risk factors of malnutrition among children and adolescents with CP in the ASCs based on available published literature, to facilitate evidence-based medicine. We realize the need for systematic data collection and reporting of the limited available studies. In this review, therefore, we focused on summarizing the available information regarding the size of the problem and its causes, despite the scarcity of available resources that could be used to conduct large scale studies and nutrition intervention in a similar context. 5. Conclusions: Malnutrition in children and adolescents with disabilities and/or CP is an existing problem in ASCs but there is a dearth of medical research. Focused research is needed to fill the large evidence gap and identify need-based effective nutrition intervention for children with CP in these countries.
Background: We aimed to estimate the burden and underlying risk factors of malnutrition among children and adolescents with cerebral palsy in Arabic-speaking countries. Methods: OVID Medline, OVID Embase, CINAHL via EBSCO, Cochrane Library, and SCOPUS databases were searched up to 3 July 2021. Publications were reviewed to identify relevant papers following pre-defined inclusion/exclusion criteria. Two reviewers independently assessed the studies for inclusion. Data extraction was independently completed by two reviewers. Descriptive and pooled analyses are reported. Results: From a total of 79 records screened, nine full-text articles were assessed for eligibility, of which seven studies met the inclusion criteria. Study characteristics, anthropometric measurements used, and nutritional outcomes reported varied between the studies. The included studies contained data on a total of 400 participants aged 1-18 years. Overall, an estimated 71.46% (95% confidence interval: 55.52-85.04) of children with cerebral palsy had at least one form of malnutrition. Severe gross motor function limitation, feeding difficulties, cognitive impairment and inadequate energy intake were the commonly reported underlying risk factors for malnutrition among children with cerebral palsy. Conclusions: The burden of malnutrition is high among children with cerebral palsy in Arabic-speaking countries. More research is needed for better understanding of this public health issue in these countries.
7,131
257
[ 1901, 251, 212, 112, 158, 181, 109, 171, 109, 61 ]
15
[ "studies", "malnutrition", "children", "reported", "study", "cp", "12", "included", "children cp", "13" ]
[ "related malnutrition reported", "causes malnutrition", "factors related malnutrition", "malnutrition children cp", "malnutrition children disabilities" ]
null
[CONTENT] Arabic-speaking countries | malnutrition | children | adolescents | cerebral palsy [SUMMARY]
null
[CONTENT] Arabic-speaking countries | malnutrition | children | adolescents | cerebral palsy [SUMMARY]
[CONTENT] Arabic-speaking countries | malnutrition | children | adolescents | cerebral palsy [SUMMARY]
[CONTENT] Arabic-speaking countries | malnutrition | children | adolescents | cerebral palsy [SUMMARY]
[CONTENT] Arabic-speaking countries | malnutrition | children | adolescents | cerebral palsy [SUMMARY]
[CONTENT] Adolescent | Cerebral Palsy | Child | Child Nutrition Disorders | Global Health | Humans | Language [SUMMARY]
null
[CONTENT] Adolescent | Cerebral Palsy | Child | Child Nutrition Disorders | Global Health | Humans | Language [SUMMARY]
[CONTENT] Adolescent | Cerebral Palsy | Child | Child Nutrition Disorders | Global Health | Humans | Language [SUMMARY]
[CONTENT] Adolescent | Cerebral Palsy | Child | Child Nutrition Disorders | Global Health | Humans | Language [SUMMARY]
[CONTENT] Adolescent | Cerebral Palsy | Child | Child Nutrition Disorders | Global Health | Humans | Language [SUMMARY]
[CONTENT] related malnutrition reported | causes malnutrition | factors related malnutrition | malnutrition children cp | malnutrition children disabilities [SUMMARY]
null
[CONTENT] related malnutrition reported | causes malnutrition | factors related malnutrition | malnutrition children cp | malnutrition children disabilities [SUMMARY]
[CONTENT] related malnutrition reported | causes malnutrition | factors related malnutrition | malnutrition children cp | malnutrition children disabilities [SUMMARY]
[CONTENT] related malnutrition reported | causes malnutrition | factors related malnutrition | malnutrition children cp | malnutrition children disabilities [SUMMARY]
[CONTENT] related malnutrition reported | causes malnutrition | factors related malnutrition | malnutrition children cp | malnutrition children disabilities [SUMMARY]
[CONTENT] studies | malnutrition | children | reported | study | cp | 12 | included | children cp | 13 [SUMMARY]
null
[CONTENT] studies | malnutrition | children | reported | study | cp | 12 | included | children cp | 13 [SUMMARY]
[CONTENT] studies | malnutrition | children | reported | study | cp | 12 | included | children cp | 13 [SUMMARY]
[CONTENT] studies | malnutrition | children | reported | study | cp | 12 | included | children cp | 13 [SUMMARY]
[CONTENT] studies | malnutrition | children | reported | study | cp | 12 | included | children cp | 13 [SUMMARY]
[CONTENT] children | malnutrition | cp | health | low | available | height | factors | weight | despite [SUMMARY]
null
[CONTENT] studies | 13 | 12 | 11 | 14 | children cp | 15 | children | 11 12 | reported [SUMMARY]
[CONTENT] research | identify need based | intervention children cp | problem ascs | problem ascs dearth | effective nutrition intervention children | effective nutrition intervention | effective nutrition | effective | focused research needed fill [SUMMARY]
[CONTENT] studies | reported | children | malnutrition | cp | study | 13 | 12 | children cp | 11 [SUMMARY]
[CONTENT] studies | reported | children | malnutrition | cp | study | 13 | 12 | children cp | 11 [SUMMARY]
[CONTENT] Arabic [SUMMARY]
null
[CONTENT] 79 | nine | seven ||| ||| 400 | 1-18 years ||| 71.46% | 95% | 55.52-85.04 | at least one ||| [SUMMARY]
[CONTENT] Arabic ||| [SUMMARY]
[CONTENT] Arabic ||| OVID Medline | OVID Embase | EBSCO | Cochrane Library | 3 July 2021 ||| ||| Two ||| two ||| ||| 79 | nine | seven ||| ||| 400 | 1-18 years ||| 71.46% | 95% | 55.52-85.04 | at least one ||| ||| Arabic ||| [SUMMARY]
[CONTENT] Arabic ||| OVID Medline | OVID Embase | EBSCO | Cochrane Library | 3 July 2021 ||| ||| Two ||| two ||| ||| 79 | nine | seven ||| ||| 400 | 1-18 years ||| 71.46% | 95% | 55.52-85.04 | at least one ||| ||| Arabic ||| [SUMMARY]
Outcome analysis of single-stage transanal endorectal pull through in selected patients with hirschsprung disease.
34916354
Hirschsprung disease is a notable cause of neonatal intestinal obstruction and of constipation in older children. Transanal endorectal pull through (TEPT) is a newer technique of definitive management compared with staged procedures. The aim of our study was to evaluate the feasibility and outcome of the procedure in selected children with Hirschsprung disease managed by this technique, together with a review of the literature.
BACKGROUND
Medical records of 12 children who underwent single-stage TEPT in a tertiary care centre over a period of 3 years from 2015 to 2018 were reviewed and retrospectively analysed on the basis of age, investigations, intraoperative parameters, complications, functional outcome and hospital stay.
MATERIALS AND METHODS
The median age at surgery was 9 months. Nine patients were boys. The median weight of patients was 7.5 kg. The transition zone was observed at the level of the rectosigmoid in eight patients (66.6%) and the sigmoid colon in four patients (33.3%). The mean length of the muscle cuff was 3 cm, the mean length of resected bowel was 25 cm, the median operative time was 105 min and the mean hospital stay was 8 days. Perianal excoriation (n = 2) and enterocolitis (n = 1) were the complications encountered postoperatively; however, no patient had a cuff abscess, anastomotic leak or stricture. Stool frequency at 2 weeks averaged six to ten times a day and gradually reduced to two to three times a day by 3 months postoperatively. None of the patients had faecal soiling or constipation on follow-up.
RESULTS
Single-stage transanal endorectal pull through is an effective technique in the management of Hirschsprung disease with minimal complications.
CONCLUSION
[ "Child", "Hirschsprung Disease", "Humans", "Infant, Newborn", "Retrospective Studies", "Tertiary Care Centers" ]
8759412
INTRODUCTION
Hirschsprung disease is a frequent cause of intestinal obstruction in neonates and of constipation in older children. Various surgical techniques have evolved over time for the definitive management of Hirschsprung disease, single-stage transanal endorectal pull through (TEPT) being one of the more recent. This study aimed to evaluate the feasibility and outcome of the procedure. We present the outcome analysis of 12 children with Hirschsprung disease managed by single-stage TEPT over a period of 3 years, from 2015 to 2018, with regard to technique, functional outcome and complications. They were followed up for a period of 2 years.
null
null
RESULTS
In our study, the median age at the time of surgery was 9 months (range: 6 months–3 years). Nine boys and three girls underwent surgery. The median weight at surgery was 7.5 kg (range: 6.4–10 kg). TZ was at rectosigmoid in eight patients (66.6%) and the sigmoid colon in four patients (33.3%) as seen on barium enema. Intraoperatively, the mean length of muscle cuff was 3 cm, and the average length of resected bowel was 25 cm. The median operative time required for the procedure was 105 min. The patients stayed in the hospital for an average of 8 days. None of the patients required laparoscopy or laparotomy. In the post-operative period, two children (16.6%) had perianal excoriation and one child (8.3%) had enterocolitis, which responded to medical management. The patients were kept on follow-up for 2 years. No patient had a cuff abscess, anastomotic leak or stricture. Stool frequency initially at 2 weeks was six to ten times a day and gradually reduced to two to three times a day by 3 months post-operatively. No patient had faecal soiling or constipation.
CONCLUSION
Single-stage transanal endorectal pull through for the management of rectosigmoid and sigmoid Hirschsprung disease is feasible and may be preferred in carefully selected patients. The safety and cost-effectiveness of this procedure are of special interest for developing countries. The functional outcome after the procedure is highly satisfactory. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
[ "Operative procedure", "Financial support and sponsorship" ]
[ "The patients were placed in lithotomy position with a pelvic tilt; the bladder was catheterized and the anal canal was exposed by the use of stay sutures at the anal verge. Mucosal stay sutures were placed 1.5 cm proximal from the dentate line. A circumferential incision was made 1 cm proximal to the dentate line followed by rectal mucosal dissection in the submucosal plane for 3 cm. The mucosa was stripped from the underlying muscle, initially using fine electrocautery and subsequently using blunt dissection. After the mucosal dissection was completed, the rectal muscle was incised circumferentially. Dissection was then continued full thickness by dividing fibrovascular bands, and proximal bowel was telescoped through the muscular sleeve [Figure 2]. The vessels were divided just as they entered the bowel wall, to avoid injury to pelvic nerves, as well as the prostate gland or vagina. The principle of surgery is to resect aganglionated bowel segment; pull through and anastomosis of ganglionated bowel segment. Multiple full-thickness biopsies were sent for the frozen section to define the level of TZ and ganglion cells in proximal dilated bowel. Once the normally innervated bowel was reached, the bowel was divided and aganglionic segment resected. The length of muscular cuff was measured by thin, sterile surgical ruler. The muscle cuff was split at 6 o'clock position. The ganglionic part of the colon was fixed within muscle cuff from below. Then, the coloanal anastomosis was completed.\nIntraoperative-aganglionic rectal segment, transition zone and dilated sigmoid colon (original)\nPostoperatively, the patients were kept nil by mouth with intravenous antibiotics for 5 days to prevent possible stool contamination and resultant anastomotic complications. Perianal skin hygiene was strictly maintained, and petroleum jelly was applied locally as barrier cream. The patients were kept on regular follow-up for 2 years, and parents were asked to note the stooling pattern. No patient was lost to follow-up.\nThe data obtained from the patients' medical records were analysed with respect to age at surgery, investigations, operative parameters, complications, functional outcome, duration of hospital stay and follow-up.", "Nil." ]
[ null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Operative procedure", "RESULTS", "DISCUSSION", "CONCLUSION", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Hirschsprung disease is a frequent cause of intestinal obstruction in neonates and constipation in older children. Various surgical techniques have evolved over time for the definitive management of Hirschsprung disease, single-stage Transanal endorectal pullthrough (TEPT) being one of the recent techniques. The study is aimed at evaluating the feasibility and outcome of the procedure. We present the outcome analysis in 12 children with Hirschsprung disease managed by single-stage TEPT over a period of 3 years from 2015 to 2018 with regard to technique, functional outcome and complications. They were followed up for a period of 2 years.", "Medical records of 12 children who underwent one-stage transanal endorectal pull through at a tertiary care centre from 2015 to 2018 were reviewed and retrospectively analysed on the basis of age, investigations, operative parameters, complications, functional outcome and duration of hospital stay. Patients with clinical suspicion of Hirschsprung disease were put on rectal washouts and investigated with a barium enema to look for the presence and level of radiological transition zone (TZ). Those with the presence of TZ underwent full-thickness rectal biopsy (FTRB). Only those patients who were deflating well on regular washouts; the patients whose barium enema [Figure 1] suggested TZ in rectum, rectosigmoid or sigmoid colon and with histopathologically confirmed aganglionosis at FTRB were included in the study. The patients who presented late with a dilated sigmoid colon; those who were non-compliant with rectal washouts and with radiologic TZ proximal to sigmoid colon were excluded from the study.\nBarium enema showing level of transition zone (original)\nPreoperatively, the patients were kept nil orally 24 h before surgery to prevent intraoperative and early post-operative wound contamination. Double catheter rectal washouts with warm saline were administered for bowel preparation. Prophylactic intravenous broad-spectrum antibiotics were administered 1 h before surgery to cover gram-negative bacilli and colonic anaerobes.\nOperative procedure The patients were placed in lithotomy position with a pelvic tilt; the bladder was catheterized and the anal canal was exposed by the use of stay sutures at the anal verge. Mucosal stay sutures were placed 1.5 cm proximal from the dentate line. A circumferential incision was made 1 cm proximal to the dentate line followed by rectal mucosal dissection in the submucosal plane for 3 cm. The mucosa was stripped from the underlying muscle, initially using fine electrocautery and subsequently using blunt dissection. After the mucosal dissection was completed, the rectal muscle was incised circumferentially. Dissection was then continued full thickness by dividing fibrovascular bands, and proximal bowel was telescoped through the muscular sleeve [Figure 2]. The vessels were divided just as they entered the bowel wall, to avoid injury to pelvic nerves, as well as the prostate gland or vagina. The principle of surgery is to resect aganglionated bowel segment; pull through and anastomosis of ganglionated bowel segment. Multiple full-thickness biopsies were sent for the frozen section to define the level of TZ and ganglion cells in proximal dilated bowel. Once the normally innervated bowel was reached, the bowel was divided and aganglionic segment resected. The length of muscular cuff was measured by thin, sterile surgical ruler. The muscle cuff was split at 6 o'clock position. 
The ganglionic part of the colon was fixed within muscle cuff from below. Then, the coloanal anastomosis was completed.\nIntraoperative-aganglionic rectal segment, transition zone and dilated sigmoid colon (original)\nPostoperatively, the patients were kept nil by mouth with intravenous antibiotics for 5 days to prevent possible stool contamination and resultant anastomotic complications. Perianal skin hygiene was strictly maintained, and petroleum jelly was applied locally as barrier cream. The patients were kept on regular follow-up for 2 years, and parents were asked to note the stooling pattern. No patient was lost to follow-up.\nThe data obtained from the patients' medical records were analysed with respect to age at surgery, investigations, operative parameters, complications, functional outcome, duration of hospital stay and follow-up.\nThe patients were placed in lithotomy position with a pelvic tilt; the bladder was catheterized and the anal canal was exposed by the use of stay sutures at the anal verge. Mucosal stay sutures were placed 1.5 cm proximal from the dentate line. A circumferential incision was made 1 cm proximal to the dentate line followed by rectal mucosal dissection in the submucosal plane for 3 cm. The mucosa was stripped from the underlying muscle, initially using fine electrocautery and subsequently using blunt dissection. After the mucosal dissection was completed, the rectal muscle was incised circumferentially. Dissection was then continued full thickness by dividing fibrovascular bands, and proximal bowel was telescoped through the muscular sleeve [Figure 2]. The vessels were divided just as they entered the bowel wall, to avoid injury to pelvic nerves, as well as the prostate gland or vagina. The principle of surgery is to resect aganglionated bowel segment; pull through and anastomosis of ganglionated bowel segment. Multiple full-thickness biopsies were sent for the frozen section to define the level of TZ and ganglion cells in proximal dilated bowel. Once the normally innervated bowel was reached, the bowel was divided and aganglionic segment resected. The length of muscular cuff was measured by thin, sterile surgical ruler. The muscle cuff was split at 6 o'clock position. The ganglionic part of the colon was fixed within muscle cuff from below. Then, the coloanal anastomosis was completed.\nIntraoperative-aganglionic rectal segment, transition zone and dilated sigmoid colon (original)\nPostoperatively, the patients were kept nil by mouth with intravenous antibiotics for 5 days to prevent possible stool contamination and resultant anastomotic complications. Perianal skin hygiene was strictly maintained, and petroleum jelly was applied locally as barrier cream. The patients were kept on regular follow-up for 2 years, and parents were asked to note the stooling pattern. No patient was lost to follow-up.\nThe data obtained from the patients' medical records were analysed with respect to age at surgery, investigations, operative parameters, complications, functional outcome, duration of hospital stay and follow-up.", "The patients were placed in lithotomy position with a pelvic tilt; the bladder was catheterized and the anal canal was exposed by the use of stay sutures at the anal verge. Mucosal stay sutures were placed 1.5 cm proximal from the dentate line. A circumferential incision was made 1 cm proximal to the dentate line followed by rectal mucosal dissection in the submucosal plane for 3 cm. 
The mucosa was stripped from the underlying muscle, initially using fine electrocautery and subsequently using blunt dissection. After the mucosal dissection was completed, the rectal muscle was incised circumferentially. Dissection was then continued full thickness by dividing fibrovascular bands, and proximal bowel was telescoped through the muscular sleeve [Figure 2]. The vessels were divided just as they entered the bowel wall, to avoid injury to pelvic nerves, as well as the prostate gland or vagina. The principle of surgery is to resect aganglionated bowel segment; pull through and anastomosis of ganglionated bowel segment. Multiple full-thickness biopsies were sent for the frozen section to define the level of TZ and ganglion cells in proximal dilated bowel. Once the normally innervated bowel was reached, the bowel was divided and aganglionic segment resected. The length of muscular cuff was measured by thin, sterile surgical ruler. The muscle cuff was split at 6 o'clock position. The ganglionic part of the colon was fixed within muscle cuff from below. Then, the coloanal anastomosis was completed.\nIntraoperative-aganglionic rectal segment, transition zone and dilated sigmoid colon (original)\nPostoperatively, the patients were kept nil by mouth with intravenous antibiotics for 5 days to prevent possible stool contamination and resultant anastomotic complications. Perianal skin hygiene was strictly maintained, and petroleum jelly was applied locally as barrier cream. The patients were kept on regular follow-up for 2 years, and parents were asked to note the stooling pattern. No patient was lost to follow-up.\nThe data obtained from the patients' medical records were analysed with respect to age at surgery, investigations, operative parameters, complications, functional outcome, duration of hospital stay and follow-up.", "In our study, the median age at the time of surgery was 9 months (range: 6 months–3 years). Nine boys and three girls underwent surgery. The median weight at surgery was 7.5 kg (range: 6.4–10 kg). TZ was at rectosigmoid in eight patients (66.6%) and the sigmoid colon in four patients (33.3%) as seen on barium enema. Intraoperatively, the mean length of muscle cuff was 3 cm, and the average length of resected bowel was 25 cm. The median operative time required for the procedure was 105 min. The patients stayed in the hospital for an average of 8 days. None of the patients required laparoscopy or laparotomy.\nIn the post-operative period, two children (16.6%) had perianal excoriation and one child (8.3%) had enterocolitis, which responded to medical management. The patients were kept on follow-up for 2 years. No patient had a cuff abscess, anastomotic leak or stricture. Stool frequency initially at 2 weeks was six to ten times a day and gradually reduced to two to three times a day by 3 months post-operatively. No patient had faecal soiling or constipation.", "Treatment options for Hirschsprung disease have evolved from a staged procedure-colostomy followed by definitive surgery to a single-stage procedure. The latter has been observed to be comparable or even better than the staged procedure.[1] TEPT when done as a one-stage procedure avoids multiple anaesthesia exposures, exempts the morbidity of stoma and reduces the cost.\nTEPT represents a natural evolution from the laparoscopic procedure.[2] The minimal access approach for Hirschsprung disease was first described by Georgeson et al. 
in the early 1990s wherein the procedure consisted of a laparoscopic biopsy to identify the TZ, laparoscopic mobilization of the rectum below peritoneal reflection and a short endorectal mucosal dissection from below.[3] The initial series of children with Hirschsprung disease was published by de La Torre-Mondragon and Ortega-Salgado and Langer et al. in the late 1990s.[45] Many studies published later have proved the safety, efficacy, cost-effectiveness and good functional outcome of the procedure.\nSingle-stage TEPT has a precise indication; hence, case selection is very important – cases with TZ involving rectum and sigmoid colon are most suitable for this procedure, parents should be compliant with rectal washes and colon should be effectively decompressed with washes.\nIn most cases, TEPT is performed in infancy. There are series, in which TEPT is performed in neonatal age group.[6] In our study, the median age at the time of surgery was 9 months (6 months–3 years). TEPT can be performed successfully in all ages of children with good results, avoiding abdominal exploration.[7] Almost all studies regarding TEPT showed male preponderance over females. This might be due to the higher incidence of Hirschsprung disease (especially short segment Hirschsprung disease) in males as compared to females (M:F = 4:1). Similar findings regarding male preponderance were noted in our study with nine boys and three girls.\nAlthough contrast study is commonly used to identify the level of TZ, it is not accurate in locating the pathological transition zone. In 12% of cases, pathologic TZ is different from the radiological TZ.[8] The accuracy of contrast enema in identifying the level of TZ in older children may be improved by discontinuing the rectal irrigations for 1–3 days before the study. By discontinuation of the washes, adequate time is offered for the proximal bowel to distend and demarcate the TZ on contrast enema.\nTannuri et al. in their series on TEPT have reported a refinement in technique by not giving preoperative bowel preparation.[9] However, we have followed the technique of bowel preparation in our study, as per the classical technique to avoid wound contamination and dehiscence. The mucosal incision above the dentate line depends on the size of the child, but it is crucial that the incision is high enough above the dentate line so that the transitional epithelium is not damaged.[10] This is important to prevent the loss of sensation, which may predispose the child to long-term problems with incontinence. Langer et al. state that it ranges from 0.5 to 1.0 cm above the dentate line in a new born and 1.0–2.0 cm above the dentate line in an older child.[11] The present technique involves proceeding with a short mucosal dissection for 1.0–3.0 cm and then incising the rectal wall circumferentially. With a very short cuff, the muscle does not need to be incised in most cases. Some surgeons have eliminated the mucosal dissection entirely and performed a transanal Swenson procedure.[12] The advantage of leaving a short cuff or no cuff is the avoidance of a constricting ring or residual aganglionic bowel, with a lower risk of obstruction and enterocolitis.[13] The disadvantage is that dissection on outside of the rectum deep in the pelvis may increase the risk of injury to pelvic nerves and vessels, prostate gland, urethra or vagina. 
Initial descriptions of TEPT involved a long rectal cuff, but it may either constrict the pulled through bowel or roll down into a ring during the pull through; hence, a shorter cuff is preferred now.[14]\nThe length of resected bowel depends on the length of aganglionic bowel segment. Teeraratkul, Isa et al. and Pratap et al. reported the length of resected bowel to be 9–25 cm, 18.64 cm and 30 cm, respectively, which is comparable to our study.[151617] The operative time and as a result, the overall anaesthesia time can range from 95 min in some studies to about 180 min in some other studies.[1517] The average operative time in our study was 105 min. Operative time included the process and reporting of the frozen section to confirm the presence of ganglion cells.\nAt least 50% of children develop perianal dermatitis because of frequent bowel movements and liquid discharge during the initial months after a transanal pull-through operation. It is important to prevent this as much as possible by immediate application of barrier creams and in some cases antidiarrheal medication. Increased stool frequency and perianal excoriation both are known to settle down within several weeks to months post-operatively.[2]\nThe most important and dangerous complication after a pull-through procedure is enterocolitis.[18] One patient developed enterocolitis 1-month post-surgery in our study. It was managed with intravenous antibiotics and adequate hydration. Many preventive measures have been described including routine post-operative irrigations or rectal stimulation, the use of intravenous antibiotics such as vancomycin and metronidazole and also administering probiotics.[1920] Menezes et al. have reported the incidence of obstructive symptoms to be 8%–30% after TEPT.[21] However, we did not encounter obstruction/constipation in any of our operated children. These obstructive symptoms can be taken care of by bowel management program, after stricture and residual aganglionosis is ruled out.\nThere are some intraoperative difficulties encountered during TEPT-narrow field of vision, retraction of vessels if adequate care is not taken, stretching of anal sphincters, TZ seen in pre-operative barium enema may be located at a higher level intraoperatively, thus making it difficult to reach the ganglionic colon.[8] However, the advantages of TEPT-minimal access approach, negligible risk of intra-abdominal adhesions, good cosmesis as there is no abdominal scar, bowel not opened intra-abdominally or intraperitoneally, well preserved pelvic structures, sphincters, local blood supply and innervation, thus no effect on faecal and urinary continence. Furthermore, there is a significant decrease in need of analgesics in immediate post-operative period and a decreased total hospital stay and better cosmetic outcome.[222324]\nThere are a few limitations of this study – retrospective analysis in a small cohort of patients. We acknowledge that large population-based studies/randomized control trials would be needed for better analysis.", "Single-stage transanal endorectal pull through for the management of rectosigmoid and sigmoid Hirschsprung disease is feasible and may be preferred in carefully selected patients. The safety and cost-effectiveness of this procedure is of special interest for developing countries. 
The functional outcome after the procedure is highly satisfactory.\nFinancial support and sponsorship Nil.\nNil.\nConflicts of interest There are no conflicts of interest.\nThere are no conflicts of interest.", "Nil.", "There are no conflicts of interest." ]
[ "intro", "materials|methods", null, "results", "discussion", "conclusion", null, "COI-statement" ]
[ "Colorectal", "hirschsprung disease", "paediatric", "transanal endorectal pull through" ]
INTRODUCTION: Hirschsprung disease is a frequent cause of intestinal obstruction in neonates and constipation in older children. Various surgical techniques have evolved over time for the definitive management of Hirschsprung disease, single-stage Transanal endorectal pullthrough (TEPT) being one of the recent techniques. The study is aimed at evaluating the feasibility and outcome of the procedure. We present the outcome analysis in 12 children with Hirschsprung disease managed by single-stage TEPT over a period of 3 years from 2015 to 2018 with regard to technique, functional outcome and complications. They were followed up for a period of 2 years. MATERIALS AND METHODS: Medical records of 12 children who underwent one-stage transanal endorectal pull through at a tertiary care centre from 2015 to 2018 were reviewed and retrospectively analysed on the basis of age, investigations, operative parameters, complications, functional outcome and duration of hospital stay. Patients with clinical suspicion of Hirschsprung disease were put on rectal washouts and investigated with a barium enema to look for the presence and level of radiological transition zone (TZ). Those with the presence of TZ underwent full-thickness rectal biopsy (FTRB). Only those patients who were deflating well on regular washouts; the patients whose barium enema [Figure 1] suggested TZ in rectum, rectosigmoid or sigmoid colon and with histopathologically confirmed aganglionosis at FTRB were included in the study. The patients who presented late with a dilated sigmoid colon; those who were non-compliant with rectal washouts and with radiologic TZ proximal to sigmoid colon were excluded from the study. Barium enema showing level of transition zone (original) Preoperatively, the patients were kept nil orally 24 h before surgery to prevent intraoperative and early post-operative wound contamination. Double catheter rectal washouts with warm saline were administered for bowel preparation. Prophylactic intravenous broad-spectrum antibiotics were administered 1 h before surgery to cover gram-negative bacilli and colonic anaerobes. Operative procedure The patients were placed in lithotomy position with a pelvic tilt; the bladder was catheterized and the anal canal was exposed by the use of stay sutures at the anal verge. Mucosal stay sutures were placed 1.5 cm proximal from the dentate line. A circumferential incision was made 1 cm proximal to the dentate line followed by rectal mucosal dissection in the submucosal plane for 3 cm. The mucosa was stripped from the underlying muscle, initially using fine electrocautery and subsequently using blunt dissection. After the mucosal dissection was completed, the rectal muscle was incised circumferentially. Dissection was then continued full thickness by dividing fibrovascular bands, and proximal bowel was telescoped through the muscular sleeve [Figure 2]. The vessels were divided just as they entered the bowel wall, to avoid injury to pelvic nerves, as well as the prostate gland or vagina. The principle of surgery is to resect aganglionated bowel segment; pull through and anastomosis of ganglionated bowel segment. Multiple full-thickness biopsies were sent for the frozen section to define the level of TZ and ganglion cells in proximal dilated bowel. Once the normally innervated bowel was reached, the bowel was divided and aganglionic segment resected. The length of muscular cuff was measured by thin, sterile surgical ruler. The muscle cuff was split at 6 o'clock position. 
The ganglionic part of the colon was fixed within muscle cuff from below. Then, the coloanal anastomosis was completed. Intraoperative-aganglionic rectal segment, transition zone and dilated sigmoid colon (original) Postoperatively, the patients were kept nil by mouth with intravenous antibiotics for 5 days to prevent possible stool contamination and resultant anastomotic complications. Perianal skin hygiene was strictly maintained, and petroleum jelly was applied locally as barrier cream. The patients were kept on regular follow-up for 2 years, and parents were asked to note the stooling pattern. No patient was lost to follow-up. The data obtained from the patients' medical records were analysed with respect to age at surgery, investigations, operative parameters, complications, functional outcome, duration of hospital stay and follow-up. The patients were placed in lithotomy position with a pelvic tilt; the bladder was catheterized and the anal canal was exposed by the use of stay sutures at the anal verge. Mucosal stay sutures were placed 1.5 cm proximal from the dentate line. A circumferential incision was made 1 cm proximal to the dentate line followed by rectal mucosal dissection in the submucosal plane for 3 cm. The mucosa was stripped from the underlying muscle, initially using fine electrocautery and subsequently using blunt dissection. After the mucosal dissection was completed, the rectal muscle was incised circumferentially. Dissection was then continued full thickness by dividing fibrovascular bands, and proximal bowel was telescoped through the muscular sleeve [Figure 2]. The vessels were divided just as they entered the bowel wall, to avoid injury to pelvic nerves, as well as the prostate gland or vagina. The principle of surgery is to resect aganglionated bowel segment; pull through and anastomosis of ganglionated bowel segment. Multiple full-thickness biopsies were sent for the frozen section to define the level of TZ and ganglion cells in proximal dilated bowel. Once the normally innervated bowel was reached, the bowel was divided and aganglionic segment resected. The length of muscular cuff was measured by thin, sterile surgical ruler. The muscle cuff was split at 6 o'clock position. The ganglionic part of the colon was fixed within muscle cuff from below. Then, the coloanal anastomosis was completed. Intraoperative-aganglionic rectal segment, transition zone and dilated sigmoid colon (original) Postoperatively, the patients were kept nil by mouth with intravenous antibiotics for 5 days to prevent possible stool contamination and resultant anastomotic complications. Perianal skin hygiene was strictly maintained, and petroleum jelly was applied locally as barrier cream. The patients were kept on regular follow-up for 2 years, and parents were asked to note the stooling pattern. No patient was lost to follow-up. The data obtained from the patients' medical records were analysed with respect to age at surgery, investigations, operative parameters, complications, functional outcome, duration of hospital stay and follow-up. Operative procedure: The patients were placed in lithotomy position with a pelvic tilt; the bladder was catheterized and the anal canal was exposed by the use of stay sutures at the anal verge. Mucosal stay sutures were placed 1.5 cm proximal from the dentate line. A circumferential incision was made 1 cm proximal to the dentate line followed by rectal mucosal dissection in the submucosal plane for 3 cm. 
The mucosa was stripped from the underlying muscle, initially using fine electrocautery and subsequently using blunt dissection. After the mucosal dissection was completed, the rectal muscle was incised circumferentially. Dissection was then continued full thickness by dividing fibrovascular bands, and proximal bowel was telescoped through the muscular sleeve [Figure 2]. The vessels were divided just as they entered the bowel wall, to avoid injury to pelvic nerves, as well as the prostate gland or vagina. The principle of surgery is to resect aganglionated bowel segment; pull through and anastomosis of ganglionated bowel segment. Multiple full-thickness biopsies were sent for the frozen section to define the level of TZ and ganglion cells in proximal dilated bowel. Once the normally innervated bowel was reached, the bowel was divided and aganglionic segment resected. The length of muscular cuff was measured by thin, sterile surgical ruler. The muscle cuff was split at 6 o'clock position. The ganglionic part of the colon was fixed within muscle cuff from below. Then, the coloanal anastomosis was completed. Intraoperative-aganglionic rectal segment, transition zone and dilated sigmoid colon (original) Postoperatively, the patients were kept nil by mouth with intravenous antibiotics for 5 days to prevent possible stool contamination and resultant anastomotic complications. Perianal skin hygiene was strictly maintained, and petroleum jelly was applied locally as barrier cream. The patients were kept on regular follow-up for 2 years, and parents were asked to note the stooling pattern. No patient was lost to follow-up. The data obtained from the patients' medical records were analysed with respect to age at surgery, investigations, operative parameters, complications, functional outcome, duration of hospital stay and follow-up. RESULTS: In our study, the median age at the time of surgery was 9 months (range: 6 months–3 years). Nine boys and three girls underwent surgery. The median weight at surgery was 7.5 kg (range: 6.4–10 kg). TZ was at rectosigmoid in eight patients (66.6%) and the sigmoid colon in four patients (33.3%) as seen on barium enema. Intraoperatively, the mean length of muscle cuff was 3 cm, and the average length of resected bowel was 25 cm. The median operative time required for the procedure was 105 min. The patients stayed in the hospital for an average of 8 days. None of the patients required laparoscopy or laparotomy. In the post-operative period, two children (16.6%) had perianal excoriation and one child (8.3%) had enterocolitis, which responded to medical management. The patients were kept on follow-up for 2 years. No patient had a cuff abscess, anastomotic leak or stricture. Stool frequency initially at 2 weeks was six to ten times a day and gradually reduced to two to three times a day by 3 months post-operatively. No patient had faecal soiling or constipation. DISCUSSION: Treatment options for Hirschsprung disease have evolved from a staged procedure-colostomy followed by definitive surgery to a single-stage procedure. The latter has been observed to be comparable or even better than the staged procedure.[1] TEPT when done as a one-stage procedure avoids multiple anaesthesia exposures, exempts the morbidity of stoma and reduces the cost. TEPT represents a natural evolution from the laparoscopic procedure.[2] The minimal access approach for Hirschsprung disease was first described by Georgeson et al. 
in the early 1990s wherein the procedure consisted of a laparoscopic biopsy to identify the TZ, laparoscopic mobilization of the rectum below peritoneal reflection and a short endorectal mucosal dissection from below.[3] The initial series of children with Hirschsprung disease was published by de La Torre-Mondragon and Ortega-Salgado and Langer et al. in the late 1990s.[45] Many studies published later have proved the safety, efficacy, cost-effectiveness and good functional outcome of the procedure. Single-stage TEPT has a precise indication; hence, case selection is very important – cases with TZ involving rectum and sigmoid colon are most suitable for this procedure, parents should be compliant with rectal washes and colon should be effectively decompressed with washes. In most cases, TEPT is performed in infancy. There are series, in which TEPT is performed in neonatal age group.[6] In our study, the median age at the time of surgery was 9 months (6 months–3 years). TEPT can be performed successfully in all ages of children with good results, avoiding abdominal exploration.[7] Almost all studies regarding TEPT showed male preponderance over females. This might be due to the higher incidence of Hirschsprung disease (especially short segment Hirschsprung disease) in males as compared to females (M:F = 4:1). Similar findings regarding male preponderance were noted in our study with nine boys and three girls. Although contrast study is commonly used to identify the level of TZ, it is not accurate in locating the pathological transition zone. In 12% of cases, pathologic TZ is different from the radiological TZ.[8] The accuracy of contrast enema in identifying the level of TZ in older children may be improved by discontinuing the rectal irrigations for 1–3 days before the study. By discontinuation of the washes, adequate time is offered for the proximal bowel to distend and demarcate the TZ on contrast enema. Tannuri et al. in their series on TEPT have reported a refinement in technique by not giving preoperative bowel preparation.[9] However, we have followed the technique of bowel preparation in our study, as per the classical technique to avoid wound contamination and dehiscence. The mucosal incision above the dentate line depends on the size of the child, but it is crucial that the incision is high enough above the dentate line so that the transitional epithelium is not damaged.[10] This is important to prevent the loss of sensation, which may predispose the child to long-term problems with incontinence. Langer et al. state that it ranges from 0.5 to 1.0 cm above the dentate line in a new born and 1.0–2.0 cm above the dentate line in an older child.[11] The present technique involves proceeding with a short mucosal dissection for 1.0–3.0 cm and then incising the rectal wall circumferentially. With a very short cuff, the muscle does not need to be incised in most cases. Some surgeons have eliminated the mucosal dissection entirely and performed a transanal Swenson procedure.[12] The advantage of leaving a short cuff or no cuff is the avoidance of a constricting ring or residual aganglionic bowel, with a lower risk of obstruction and enterocolitis.[13] The disadvantage is that dissection on outside of the rectum deep in the pelvis may increase the risk of injury to pelvic nerves and vessels, prostate gland, urethra or vagina. 
Initial descriptions of TEPT involved a long rectal cuff, but it may either constrict the pulled through bowel or roll down into a ring during the pull through; hence, a shorter cuff is preferred now.[14] The length of resected bowel depends on the length of aganglionic bowel segment. Teeraratkul, Isa et al. and Pratap et al. reported the length of resected bowel to be 9–25 cm, 18.64 cm and 30 cm, respectively, which is comparable to our study.[151617] The operative time and as a result, the overall anaesthesia time can range from 95 min in some studies to about 180 min in some other studies.[1517] The average operative time in our study was 105 min. Operative time included the process and reporting of the frozen section to confirm the presence of ganglion cells. At least 50% of children develop perianal dermatitis because of frequent bowel movements and liquid discharge during the initial months after a transanal pull-through operation. It is important to prevent this as much as possible by immediate application of barrier creams and in some cases antidiarrheal medication. Increased stool frequency and perianal excoriation both are known to settle down within several weeks to months post-operatively.[2] The most important and dangerous complication after a pull-through procedure is enterocolitis.[18] One patient developed enterocolitis 1-month post-surgery in our study. It was managed with intravenous antibiotics and adequate hydration. Many preventive measures have been described including routine post-operative irrigations or rectal stimulation, the use of intravenous antibiotics such as vancomycin and metronidazole and also administering probiotics.[1920] Menezes et al. have reported the incidence of obstructive symptoms to be 8%–30% after TEPT.[21] However, we did not encounter obstruction/constipation in any of our operated children. These obstructive symptoms can be taken care of by bowel management program, after stricture and residual aganglionosis is ruled out. There are some intraoperative difficulties encountered during TEPT-narrow field of vision, retraction of vessels if adequate care is not taken, stretching of anal sphincters, TZ seen in pre-operative barium enema may be located at a higher level intraoperatively, thus making it difficult to reach the ganglionic colon.[8] However, the advantages of TEPT-minimal access approach, negligible risk of intra-abdominal adhesions, good cosmesis as there is no abdominal scar, bowel not opened intra-abdominally or intraperitoneally, well preserved pelvic structures, sphincters, local blood supply and innervation, thus no effect on faecal and urinary continence. Furthermore, there is a significant decrease in need of analgesics in immediate post-operative period and a decreased total hospital stay and better cosmetic outcome.[222324] There are a few limitations of this study – retrospective analysis in a small cohort of patients. We acknowledge that large population-based studies/randomized control trials would be needed for better analysis. CONCLUSION: Single-stage transanal endorectal pull through for the management of rectosigmoid and sigmoid Hirschsprung disease is feasible and may be preferred in carefully selected patients. The safety and cost-effectiveness of this procedure is of special interest for developing countries. The functional outcome after the procedure is highly satisfactory. Financial support and sponsorship Nil. Nil. Conflicts of interest There are no conflicts of interest. There are no conflicts of interest. 
Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Background: Hirschsprung disease is a notable cause of neonatal intestinal obstruction and of constipation in older children. Transanal endorectal pull through (TEPT) is a newer technique of definitive management compared with staged procedures. The aim of our study was to evaluate the feasibility and outcome of the procedure in selected children with Hirschsprung disease managed by this technique, together with a review of the literature. Methods: Medical records of 12 children who underwent single-stage TEPT in a tertiary care centre over a period of 3 years from 2015 to 2018 were reviewed and retrospectively analysed on the basis of age, investigations, intraoperative parameters, complications, functional outcome and hospital stay. Results: The median age at surgery was 9 months. Nine patients were boys. The median weight of patients was 7.5 kg. The transition zone was observed at the level of the rectosigmoid in eight patients (66.6%) and the sigmoid colon in four patients (33.3%). The mean length of the muscle cuff was 3 cm, the mean length of resected bowel was 25 cm, the median operative time was 105 min and the mean hospital stay was 8 days. Perianal excoriation (n = 2) and enterocolitis (n = 1) were the complications encountered postoperatively; however, no patient had a cuff abscess, anastomotic leak or stricture. Stool frequency at 2 weeks averaged six to ten times a day and gradually reduced to two to three times a day by 3 months postoperatively. None of the patients had faecal soiling or constipation on follow-up. Conclusions: Single-stage transanal endorectal pull through is an effective technique in the management of Hirschsprung disease with minimal complications.
INTRODUCTION: Hirschsprung disease is a frequent cause of intestinal obstruction in neonates and of constipation in older children. Various surgical techniques have evolved over time for the definitive management of Hirschsprung disease, single-stage transanal endorectal pull through (TEPT) being one of the more recent. This study aimed to evaluate the feasibility and outcome of the procedure. We present the outcome analysis of 12 children with Hirschsprung disease managed by single-stage TEPT over a period of 3 years, from 2015 to 2018, with regard to technique, functional outcome and complications. They were followed up for a period of 2 years. CONCLUSION: Single-stage transanal endorectal pull through for the management of rectosigmoid and sigmoid Hirschsprung disease is feasible and may be preferred in carefully selected patients. The safety and cost-effectiveness of this procedure are of special interest for developing countries. The functional outcome after the procedure is highly satisfactory. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Background: Hirschsprung disease is a notable cause of neonatal intestinal obstruction and of constipation in older children. Transanal endorectal pull through (TEPT) is a newer technique of definitive management compared with staged procedures. The aim of our study was to evaluate the feasibility and outcome of the procedure in selected children with Hirschsprung disease managed by this technique, together with a review of the literature. Methods: Medical records of 12 children who underwent single-stage TEPT in a tertiary care centre over a period of 3 years from 2015 to 2018 were reviewed and retrospectively analysed on the basis of age, investigations, intraoperative parameters, complications, functional outcome and hospital stay. Results: The median age at surgery was 9 months. Nine patients were boys. The median weight of patients was 7.5 kg. The transition zone was observed at the level of the rectosigmoid in eight patients (66.6%) and the sigmoid colon in four patients (33.3%). The mean length of the muscle cuff was 3 cm, the mean length of resected bowel was 25 cm, the median operative time was 105 min and the mean hospital stay was 8 days. Perianal excoriation (n = 2) and enterocolitis (n = 1) were the complications encountered postoperatively; however, no patient had a cuff abscess, anastomotic leak or stricture. Stool frequency at 2 weeks averaged six to ten times a day and gradually reduced to two to three times a day by 3 months postoperatively. None of the patients had faecal soiling or constipation on follow-up. Conclusions: Single-stage transanal endorectal pull through is an effective technique in the management of Hirschsprung disease with minimal complications.
3,134
317
[ 394, 2 ]
8
[ "bowel", "patients", "rectal", "cm", "procedure", "cuff", "tz", "dissection", "operative", "tept" ]
[ "hirschsprung disease managed", "options hirschsprung disease", "segment hirschsprung disease", "children hirschsprung disease", "hirschsprung disease rectal" ]
null
[CONTENT] Colorectal | hirschsprung disease | paediatric | transanal endorectal pull through [SUMMARY]
null
[CONTENT] Colorectal | hirschsprung disease | paediatric | transanal endorectal pull through [SUMMARY]
[CONTENT] Colorectal | hirschsprung disease | paediatric | transanal endorectal pull through [SUMMARY]
[CONTENT] Colorectal | hirschsprung disease | paediatric | transanal endorectal pull through [SUMMARY]
[CONTENT] Colorectal | hirschsprung disease | paediatric | transanal endorectal pull through [SUMMARY]
[CONTENT] Child | Hirschsprung Disease | Humans | Infant, Newborn | Retrospective Studies | Tertiary Care Centers [SUMMARY]
null
[CONTENT] Child | Hirschsprung Disease | Humans | Infant, Newborn | Retrospective Studies | Tertiary Care Centers [SUMMARY]
[CONTENT] Child | Hirschsprung Disease | Humans | Infant, Newborn | Retrospective Studies | Tertiary Care Centers [SUMMARY]
[CONTENT] Child | Hirschsprung Disease | Humans | Infant, Newborn | Retrospective Studies | Tertiary Care Centers [SUMMARY]
[CONTENT] Child | Hirschsprung Disease | Humans | Infant, Newborn | Retrospective Studies | Tertiary Care Centers [SUMMARY]
[CONTENT] hirschsprung disease managed | options hirschsprung disease | segment hirschsprung disease | children hirschsprung disease | hirschsprung disease rectal [SUMMARY]
null
[CONTENT] hirschsprung disease managed | options hirschsprung disease | segment hirschsprung disease | children hirschsprung disease | hirschsprung disease rectal [SUMMARY]
[CONTENT] hirschsprung disease managed | options hirschsprung disease | segment hirschsprung disease | children hirschsprung disease | hirschsprung disease rectal [SUMMARY]
[CONTENT] hirschsprung disease managed | options hirschsprung disease | segment hirschsprung disease | children hirschsprung disease | hirschsprung disease rectal [SUMMARY]
[CONTENT] hirschsprung disease managed | options hirschsprung disease | segment hirschsprung disease | children hirschsprung disease | hirschsprung disease rectal [SUMMARY]
[CONTENT] bowel | patients | rectal | cm | procedure | cuff | tz | dissection | operative | tept [SUMMARY]
null
[CONTENT] bowel | patients | rectal | cm | procedure | cuff | tz | dissection | operative | tept [SUMMARY]
[CONTENT] bowel | patients | rectal | cm | procedure | cuff | tz | dissection | operative | tept [SUMMARY]
[CONTENT] bowel | patients | rectal | cm | procedure | cuff | tz | dissection | operative | tept [SUMMARY]
[CONTENT] bowel | patients | rectal | cm | procedure | cuff | tz | dissection | operative | tept [SUMMARY]
[CONTENT] period years | techniques | disease | hirschsprung disease | hirschsprung | outcome | tept | single stage | period | single [SUMMARY]
null
[CONTENT] patients | months | median | day | kg | times day | times | required | surgery | range [SUMMARY]
[CONTENT] interest | conflicts interest | conflicts | interest conflicts | conflicts interest conflicts interest | conflicts interest conflicts | interest conflicts interest | nil | procedure | transanal endorectal pull management [SUMMARY]
[CONTENT] nil | interest | conflicts interest | conflicts | bowel | patients | tept | rectal | cm | procedure [SUMMARY]
[CONTENT] nil | interest | conflicts interest | conflicts | bowel | patients | tept | rectal | cm | procedure [SUMMARY]
[CONTENT] Hirschsprung ||| ||| Hirschsprung [SUMMARY]
null
[CONTENT] 9 months ||| Nine ||| 7.5 kg ||| eight | 66.6% | four | 33.3% ||| 3 cm | 25 cm | 105 | 8 days ||| 2 | 1 ||| 2 weeks | six to ten | two to three | 3 months ||| [SUMMARY]
[CONTENT] Hirschsprung [SUMMARY]
[CONTENT] Hirschsprung ||| ||| Hirschsprung ||| 12 | tertiary | 3 years | 2015 | 2018 ||| ||| 9 months ||| Nine ||| 7.5 kg ||| eight | 66.6% | four | 33.3% ||| 3 cm | 25 cm | 105 | 8 days ||| 2 | 1 ||| 2 weeks | six to ten | two to three | 3 months ||| ||| Hirschsprung [SUMMARY]
[CONTENT] Hirschsprung ||| ||| Hirschsprung ||| 12 | tertiary | 3 years | 2015 | 2018 ||| ||| 9 months ||| Nine ||| 7.5 kg ||| eight | 66.6% | four | 33.3% ||| 3 cm | 25 cm | 105 | 8 days ||| 2 | 1 ||| 2 weeks | six to ten | two to three | 3 months ||| ||| Hirschsprung [SUMMARY]
Emergency physician attitudes towards illness verification (sick notes).
33464695
Emergency physicians frequently provide care for patients who are experiencing viral illnesses and may be asked to provide verification of the patient's illness (a sick note) for time missed from work. Exclusion from work can be a powerful public health measure during epidemics; both legislation and physician advice contribute to patients' decisions to recover at home.
INTRODUCTION
We surveyed Canadian Association of Emergency Physicians members to determine what impacts sick notes have on patients and the system, the duration of time off work that physicians recommend, and what training and policies are in place to help providers. Descriptive statistics from the survey are reported.
METHODS
A total of 182 of 1524 physicians responded to the survey; 51.1% practice in Ontario. 76.4% of physicians write at least one sick note per day, with 4.2% writing 5 or more sick notes per day. Thirteen percent of physicians charge for a sick note (mean cost $22.50). Patients were advised to stay home for a median of 4 days with influenza and 2 days with gastroenteritis and upper respiratory tract infections. 82.8% of physicians believe that most of the time, patients can determine when to return to work. Advice varied widely between respondents. 61% of respondents were unfamiliar with sick leave legislation in their province and only 2% had received formal training about illness verification.
RESULTS
Providing sick notes is a common practice of Canadian Emergency Physicians; return-to-work guidance is variable. Improved physician education about public health recommendations and provincial legislation may strengthen physician advice to patients.
CONCLUSIONS
[ "Adult", "Attitude of Health Personnel", "Canada", "Decision Making", "Emergency Medicine", "Female", "Health Services Misuse", "Humans", "Male", "Physician-Patient Relations", "Physicians", "Practice Patterns, Physicians'", "Return to Work", "Sick Leave", "Surveys and Questionnaires" ]
7814758
INTRODUCTION
Public health agencies recommend that patients with minor illnesses stay home to recover and to avoid infecting others [1]. Exclusion from work can be a powerful public health measure during outbreaks [2]; in the US, workplace spread of influenza-like illness conferred a population-attributable risk of 5 million additional cases in a study of H1N1 spread during the 2009 epidemic [3]. Emergency physicians frequently see patients with viral infections in the emergency department and are asked to provide guidance about return to work. In some instances, patients may seek medical care solely for the purpose of obtaining a “sick note”, placing unnecessary burden on emergency care providers and systems [4]. In Canada, provincial legislation determines the duration of time that a patient can stay home from work, whether or not an employer can require a sick note, and whether patients are paid for sick days. These standards vary significantly between the provinces and territories. In Ontario, Canada's most populated province, the current legislation provides three unpaid days of job-protected leave for personal illness, injury, or medical emergency. Employers may ask for a sick note for illness verification, thereby requiring the employee to seek medical care. During the COVID-19 pandemic, the legislation has been amended to provide an unspecified number of days of unpaid, job-protected infectious disease emergency leave. This leave covers isolation, quarantine, and caring for family members affected by school and day-care closures. Employers cannot require employees to provide sick notes for the infectious disease emergency leave [5]. Legislation on emergency medical leave and sick notes affects the ability of patients to adhere to physician recommendations. Despite this, little is known about physician knowledge of these standards.
METHODS
Following a literature review, the survey was designed by consensus of four authors, and revised following review by an additional physician and a labor policy expert. The survey was distributed in English only via SurveyMonkey 6 (Appendix 1). The Canadian Association of Emergency Physicians (CAEP) administered the survey. The link was distributed by email to all CAEP physician members three times in 2‐week intervals between December 2019 and January 2020. Participation in the survey was voluntary and all responses were anonymous. The survey included multiple‐choice demographic questions, as well as multiple‐choice questions and open‐ended, numeric responses to quantify variables such as the duration of time that physicians advise patients to stay home from work, the cost of a sick note, and the frequency with which patients require additional medical care. Participants were allowed to skip questions and data from incomplete surveys were included. Data were analyzed in the R statistical programming language. 7 No financial incentive was provided for participating.
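The methods describe a descriptive analysis of survey responses carried out in R. As an illustration only, here is a minimal Python sketch of the kind of summary reported (share of physicians writing at least one note per day, median recommended days off work); the field names and example values below are hypothetical and are not drawn from the actual survey data.

```python
# Illustrative sketch only: hypothetical records, not the CAEP survey data.
from statistics import median

responses = [
    {"notes_per_day": 1, "influenza_days_off": 4, "charges_fee": False},
    {"notes_per_day": 0, "influenza_days_off": 5, "charges_fee": False},
    {"notes_per_day": 5, "influenza_days_off": 3, "charges_fee": True},
]

# Share of respondents who report writing at least one sick note per day
share_writing = sum(r["notes_per_day"] >= 1 for r in responses) / len(responses)

# Median number of days respondents advise staying home for influenza-like illness
median_days_off = median(r["influenza_days_off"] for r in responses)

print(f"write >=1 note/day: {share_writing:.1%}; median days off: {median_days_off}")
```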
RESULTS
Participant characteristics: Of the 1524 CAEP physician members surveyed, 182 participated. 51.1% reported Ontario as their practice location. 79% practiced emergency medicine exclusively, with the remainder practicing emergency medicine and family medicine, sports medicine, or other specialties. Impact on emergency department flow and functioning: The majority (75.1%) of respondents answered that their practice environment does not have a sick note policy in place, or that they were unsure about the policy (10.7% of respondents). Only 13% of emergency providers charge patients for a sick note. The fee charged ranges from $5 to 80 (Canadian dollars), with a mean cost of $22.50. Most physicians provide at least one sick note per day (76.4%), with 4.2% reporting that they provide 5 or more notes per day. 89.7% of respondents believe that patients require additional medical care on half of visits or less. Advice to patients: Most respondents answered that they advise patients to remain at home for minor illnesses, however, the duration of exclusion from work varied. For influenza‐like illness, respondents advised patients to remain home from work for a median of 4 days. For an upper respiratory tract infection and for gastroenteritis, respondents advised patients to remain home from work for a median of 2 days. For all conditions, a proportion of respondents did not provide a discrete number of days, rather answered that they advise patients to remain at home until the fever has resolved. The distribution of responses is illustrated in Figure 1. 82.8% of respondents believed that a patient is capable of determining when to return to work “most of the time”, 17.2% believed “sometimes”, and none answered “rarely”. Figure 1: Distribution of responses of survey participants to the question of how long they advise patients to stay home when sick for three different illnesses. UnFR, until fever resolves; URTI, upper respiratory tract infection. What knowledge do providers have? 61.2% of participants answered that they were not familiar with the current sick leave legislation in their province, and 18.8% stated that they were unsure. Only 2% of respondents had received training on illness verification during medical school or residency.
CONCLUSIONS
Providing sick notes is a common practice of Canadian Emergency Physicians, and may have significant impact on departmental functioning, particularly during viral outbreaks. Advice to patients is variable, and physicians have limited knowledge of governmental policy that impacts sick leave. Improved physician education may be one mechanism to provide better return‐to‐work guidance to patients.
[ "INTRODUCTION", "Participant characteristics", "Impact on emergency department flow and functioning", "Advice to patients", "What knowledge do providers have?", "Interpretation and implications", "Limitations", "AUTHOR CONTRIBUTIONS" ]
[ "Public health agencies recommend that patients with minor illnesses stay home to recover and to avoid infecting others.\n1\n Exclusion from work can be a powerful public health measure during outbreaks\n2\n; in the US, workplace spread of influenza‐like illness conferred a population‐attributable risk of 5 million additional cases in a study of H1N1 spread during the 2009 epidemic.\n3\n Emergency physicians frequently see patients with viral infections in the emergency department and are asked to provide guidance about return to work. In some instances, patients may seek medical care solely for the purpose of obtaining a “sick note”, placing unnecessary burden on emergency care providers and systems.\n4\n\n\nIn Canada, provincial legislation determines the duration of time that a patient can stay home from work, whether or not an employer can require a sick note, and if patients are paid for sick days. These standards vary significantly between the different provinces and territories. In Ontario, Canada's most populated province, the current legislation provides three unpaid days of job‐protected leave for personal illness, injury, or medical emergency. Employers may ask for a sick note for illness verification, thereby necessitating the employee to seek medical care. During the COVID‐19 pandemic, the legislation has been amended to provide an unspecified number of days of unpaid, job‐protected infectious disease emergency leave. This leave covers isolation, quarantine, and to provide care for family members due to school and day‐care closures. Employers cannot require employees to provide sick notes for the infectious disease emergency leave.\n5\n\n\nLegislation on emergency medical leave and sick notes impact the ability of patients to adhere to physician recommendations. Despite this, little is known about physician knowledge of these standards.", "Of the 1524 CAEP physician members surveyed, 182 participated. 51.1% reported Ontario as their practice location. 79% practiced emergency medicine exclusively, with the remainder practicing emergency medicine and family medicine, sports medicine, or other specialties.", "The majority (75.1%) of respondents answered that their practice environment does not have a sick note policy in place, or that they were unsure about the policy (10.7% of respondents). Only 13% of emergency providers charge patients for a sick note. The fee charged ranges from $5 to 80 (Canadian dollars), with a mean cost of $22.50. Most physicians provide at least one sick note per day (76.4%), with 4.2% reporting that they provide 5 or more notes per day. 89.7% of respondents believe that patients require additional medical care on half of visits or less.", "Most respondents answered that they advise patients to remain at home for minor illnesses, however, the duration of exclusion from work varied. For influenza‐like illness, respondents advised patients to remain home from work for a median of 4 days. For an upper respiratory tract infection and for gastroenteritis, respondents advised patients to remain home from work for a median of 2 days. For all conditions, a proportion of respondents did not provide a discrete number of days, rather answered that they advise patients to remain at home until the fever has resolved. The distribution of responses is illustrated in Figure 1. 
82.8% of respondents believed that a patient is capable of determining when to return to work “most of the time”, 17.2% believed “sometimes”, and none answered “rarely”.\nDistribution of responses of survey participants to the question of how long they advise patients to stay home when sick for three different illnesses. UnFR, until fever resolves; URTI, upper respiratory tract infection", "61.2% of participants answered that they were not familiar with the current sick leave legislation in their province, and 18.8% stated that they were unsure. Only 2% of respondents had received training on illness verification during medical school or residency.", "This survey confirmed our hypothesis that many Emergency Department (ED) physicians are writing sick notes on a daily basis, and that ED providers believe that most of these patients do not require additional care for their viral illness and can safely decide for themselves when to return to work. The CAEP position statement on sick notes recommends that governments prevent employers from requesting sick notes; this study improves our understanding of the frequency with which ED providers are required to see patients for this administrative task.\n8\n\n\nDespite the public health implications, emergency physicians are providing varied advice to patients about exclusion from work while unwell. Increased physician education is needed to ensure that providers are aware of the latest public health recommendations for patients, both during and beyond epidemics. In other conditions, physicians guidance to remain off work is associated with the duration of time used for recovery, however, patients also report receiving advice that they cannot adhere to.\n9\n In our study, most physicians reported that they were unfamiliar with local sick leave legislation, meaning that providers are unknowingly advising patients to stay home for a period that could, in fact, threaten job and income loss. Physicians require greater knowledge beyond the biomedical paradigm in order to have informed conversations with their patients about return to work. This education could take place in the form of dedicated teaching on occupational health during residency training programs, increased involvement of occupational health specialists in emergency departments, and improved communication of current medical leave policy by policymakers to front‐line health practitioners.", "This study included only Canadian emergency physicians, and findings may not be generalizable to primary care settings or in other jurisdictions. Respondents may also represent a biased subset; it is possible that respondents have different sick note provision practices than non‐respondents. This survey did not use a previously validated questionnaire, however, open‐ended questions allowed physicians to provide further clarification to accompany their responses, which was reviewed by the authors. Despite the response rate, we believe that these findings provide an important initial understanding of the current situation in Canada.", "KH, JM, and HS came up with the idea for this study. KH, JM, HS, and CJV all contributed to survey development and conceptualization of the study. CJV helped with the REB submission. DA did the statistical analysis for this study. KH wrote the first draft of the manuscript, and JM, HS, and CJV edited and revised the manuscript." ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "OBJECTIVES", "METHODS", "RESULTS", "Participant characteristics", "Impact on emergency department flow and functioning", "Advice to patients", "What knowledge do providers have?", "DISCUSSION", "Interpretation and implications", "Limitations", "CONCLUSIONS", "AUTHOR CONTRIBUTIONS", "DISCLOSURES" ]
[ "Public health agencies recommend that patients with minor illnesses stay home to recover and to avoid infecting others.\n1\n Exclusion from work can be a powerful public health measure during outbreaks\n2\n; in the US, workplace spread of influenza‐like illness conferred a population‐attributable risk of 5 million additional cases in a study of H1N1 spread during the 2009 epidemic.\n3\n Emergency physicians frequently see patients with viral infections in the emergency department and are asked to provide guidance about return to work. In some instances, patients may seek medical care solely for the purpose of obtaining a “sick note”, placing unnecessary burden on emergency care providers and systems.\n4\n\n\nIn Canada, provincial legislation determines the duration of time that a patient can stay home from work, whether or not an employer can require a sick note, and if patients are paid for sick days. These standards vary significantly between the different provinces and territories. In Ontario, Canada's most populated province, the current legislation provides three unpaid days of job‐protected leave for personal illness, injury, or medical emergency. Employers may ask for a sick note for illness verification, thereby necessitating the employee to seek medical care. During the COVID‐19 pandemic, the legislation has been amended to provide an unspecified number of days of unpaid, job‐protected infectious disease emergency leave. This leave covers isolation, quarantine, and to provide care for family members due to school and day‐care closures. Employers cannot require employees to provide sick notes for the infectious disease emergency leave.\n5\n\n\nLegislation on emergency medical leave and sick notes impact the ability of patients to adhere to physician recommendations. Despite this, little is known about physician knowledge of these standards.", "We performed a survey of Canadian emergency physicians to determine:\nWhat impacts do “sick notes” for brief illnesses have on patients and the healthcare system?How long do healthcare providers advise patients to remain off work if they are sick with a brief illness?What training and/or policies are in place to help healthcare providers issue sick notes?\n\nWhat impacts do “sick notes” for brief illnesses have on patients and the healthcare system?\nHow long do healthcare providers advise patients to remain off work if they are sick with a brief illness?\nWhat training and/or policies are in place to help healthcare providers issue sick notes?", "Following a literature review, the survey was designed by consensus of four authors, and revised following review by an additional physician and a labor policy expert. The survey was distributed in English only via SurveyMonkey\n6\n (Appendix 1). The Canadian Association of Emergency Physicians (CAEP) administered the survey. The link was distributed by email to all CAEP physician members three times in 2‐week intervals between December 2019 and January 2020. Participation in the survey was voluntary and all responses were anonymous.\nThe survey included multiple‐choice demographic questions, as well multiple‐choice questions and open‐ended, numeric responses to quantify variables such as the duration of time that physicians advise patients to stay home from work, the cost of a sick note, and the frequency with which patients require additional medical care. Participants were allowed to skip questions and data from incomplete surveys were included. 
Data were analyzed in the R statistical programming language.\n7\n No financial incentive was provided for participating.", "Participant characteristics Of the 1524 CAEP physician members surveyed, 182 participated. 51.1% reported Ontario as their practice location. 79% practiced emergency medicine exclusively, with the remainder practicing emergency medicine and family medicine, sports medicine, or other specialties.\nOf the 1524 CAEP physician members surveyed, 182 participated. 51.1% reported Ontario as their practice location. 79% practiced emergency medicine exclusively, with the remainder practicing emergency medicine and family medicine, sports medicine, or other specialties.\nImpact on emergency department flow and functioning The majority (75.1%) of respondents answered that their practice environment does not have a sick note policy in place, or that they were unsure about the policy (10.7% of respondents). Only 13% of emergency providers charge patients for a sick note. The fee charged ranges from $5 to 80 (Canadian dollars), with a mean cost of $22.50. Most physicians provide at least one sick note per day (76.4%), with 4.2% reporting that they provide 5 or more notes per day. 89.7% of respondents believe that patients require additional medical care on half of visits or less.\nThe majority (75.1%) of respondents answered that their practice environment does not have a sick note policy in place, or that they were unsure about the policy (10.7% of respondents). Only 13% of emergency providers charge patients for a sick note. The fee charged ranges from $5 to 80 (Canadian dollars), with a mean cost of $22.50. Most physicians provide at least one sick note per day (76.4%), with 4.2% reporting that they provide 5 or more notes per day. 89.7% of respondents believe that patients require additional medical care on half of visits or less.\nAdvice to patients Most respondents answered that they advise patients to remain at home for minor illnesses, however, the duration of exclusion from work varied. For influenza‐like illness, respondents advised patients to remain home from work for a median of 4 days. For an upper respiratory tract infection and for gastroenteritis, respondents advised patients to remain home from work for a median of 2 days. For all conditions, a proportion of respondents did not provide a discrete number of days, rather answered that they advise patients to remain at home until the fever has resolved. The distribution of responses is illustrated in Figure 1. 82.8% of respondents believed that a patient is capable of determining when to return to work “most of the time”, 17.2% believed “sometimes”, and none answered “rarely”.\nDistribution of responses of survey participants to the question of how long they advise patients to stay home when sick for three different illnesses. UnFR, until fever resolves; URTI, upper respiratory tract infection\nMost respondents answered that they advise patients to remain at home for minor illnesses, however, the duration of exclusion from work varied. For influenza‐like illness, respondents advised patients to remain home from work for a median of 4 days. For an upper respiratory tract infection and for gastroenteritis, respondents advised patients to remain home from work for a median of 2 days. For all conditions, a proportion of respondents did not provide a discrete number of days, rather answered that they advise patients to remain at home until the fever has resolved. The distribution of responses is illustrated in Figure 1. 
82.8% of respondents believed that a patient is capable of determining when to return to work “most of the time”, 17.2% believed “sometimes”, and none answered “rarely”.\nDistribution of responses of survey participants to the question of how long they advise patients to stay home when sick for three different illnesses. UnFR, until fever resolves; URTI, upper respiratory tract infection\nWhat knowledge do providers have? 61.2% of participants answered that they were not familiar with the current sick leave legislation in their province, and 18.8% stated that they were unsure. Only 2% of respondents had received training on illness verification during medical school or residency.\n61.2% of participants answered that they were not familiar with the current sick leave legislation in their province, and 18.8% stated that they were unsure. Only 2% of respondents had received training on illness verification during medical school or residency.", "Of the 1524 CAEP physician members surveyed, 182 participated. 51.1% reported Ontario as their practice location. 79% practiced emergency medicine exclusively, with the remainder practicing emergency medicine and family medicine, sports medicine, or other specialties.", "The majority (75.1%) of respondents answered that their practice environment does not have a sick note policy in place, or that they were unsure about the policy (10.7% of respondents). Only 13% of emergency providers charge patients for a sick note. The fee charged ranges from $5 to 80 (Canadian dollars), with a mean cost of $22.50. Most physicians provide at least one sick note per day (76.4%), with 4.2% reporting that they provide 5 or more notes per day. 89.7% of respondents believe that patients require additional medical care on half of visits or less.", "Most respondents answered that they advise patients to remain at home for minor illnesses, however, the duration of exclusion from work varied. For influenza‐like illness, respondents advised patients to remain home from work for a median of 4 days. For an upper respiratory tract infection and for gastroenteritis, respondents advised patients to remain home from work for a median of 2 days. For all conditions, a proportion of respondents did not provide a discrete number of days, rather answered that they advise patients to remain at home until the fever has resolved. The distribution of responses is illustrated in Figure 1. 82.8% of respondents believed that a patient is capable of determining when to return to work “most of the time”, 17.2% believed “sometimes”, and none answered “rarely”.\nDistribution of responses of survey participants to the question of how long they advise patients to stay home when sick for three different illnesses. UnFR, until fever resolves; URTI, upper respiratory tract infection", "61.2% of participants answered that they were not familiar with the current sick leave legislation in their province, and 18.8% stated that they were unsure. Only 2% of respondents had received training on illness verification during medical school or residency.", "Interpretation and implications This survey confirmed our hypothesis that many Emergency Department (ED) physicians are writing sick notes on a daily basis, and that ED providers believe that most of these patients do not require additional care for their viral illness and can safely decide for themselves when to return to work. 
The CAEP position statement on sick notes recommends that governments prevent employers from requesting sick notes; this study improves our understanding of the frequency with which ED providers are required to see patients for this administrative task.\n8\n\n\nDespite the public health implications, emergency physicians are providing varied advice to patients about exclusion from work while unwell. Increased physician education is needed to ensure that providers are aware of the latest public health recommendations for patients, both during and beyond epidemics. In other conditions, physicians guidance to remain off work is associated with the duration of time used for recovery, however, patients also report receiving advice that they cannot adhere to.\n9\n In our study, most physicians reported that they were unfamiliar with local sick leave legislation, meaning that providers are unknowingly advising patients to stay home for a period that could, in fact, threaten job and income loss. Physicians require greater knowledge beyond the biomedical paradigm in order to have informed conversations with their patients about return to work. This education could take place in the form of dedicated teaching on occupational health during residency training programs, increased involvement of occupational health specialists in emergency departments, and improved communication of current medical leave policy by policymakers to front‐line health practitioners.\nThis survey confirmed our hypothesis that many Emergency Department (ED) physicians are writing sick notes on a daily basis, and that ED providers believe that most of these patients do not require additional care for their viral illness and can safely decide for themselves when to return to work. The CAEP position statement on sick notes recommends that governments prevent employers from requesting sick notes; this study improves our understanding of the frequency with which ED providers are required to see patients for this administrative task.\n8\n\n\nDespite the public health implications, emergency physicians are providing varied advice to patients about exclusion from work while unwell. Increased physician education is needed to ensure that providers are aware of the latest public health recommendations for patients, both during and beyond epidemics. In other conditions, physicians guidance to remain off work is associated with the duration of time used for recovery, however, patients also report receiving advice that they cannot adhere to.\n9\n In our study, most physicians reported that they were unfamiliar with local sick leave legislation, meaning that providers are unknowingly advising patients to stay home for a period that could, in fact, threaten job and income loss. Physicians require greater knowledge beyond the biomedical paradigm in order to have informed conversations with their patients about return to work. This education could take place in the form of dedicated teaching on occupational health during residency training programs, increased involvement of occupational health specialists in emergency departments, and improved communication of current medical leave policy by policymakers to front‐line health practitioners.\nLimitations This study included only Canadian emergency physicians, and findings may not be generalizable to primary care settings or in other jurisdictions. Respondents may also represent a biased subset; it is possible that respondents have different sick note provision practices than non‐respondents. 
This survey did not use a previously validated questionnaire, however, open‐ended questions allowed physicians to provide further clarification to accompany their responses, which was reviewed by the authors. Despite the response rate, we believe that these findings provide an important initial understanding of the current situation in Canada.\nThis study included only Canadian emergency physicians, and findings may not be generalizable to primary care settings or in other jurisdictions. Respondents may also represent a biased subset; it is possible that respondents have different sick note provision practices than non‐respondents. This survey did not use a previously validated questionnaire, however, open‐ended questions allowed physicians to provide further clarification to accompany their responses, which was reviewed by the authors. Despite the response rate, we believe that these findings provide an important initial understanding of the current situation in Canada.", "This survey confirmed our hypothesis that many Emergency Department (ED) physicians are writing sick notes on a daily basis, and that ED providers believe that most of these patients do not require additional care for their viral illness and can safely decide for themselves when to return to work. The CAEP position statement on sick notes recommends that governments prevent employers from requesting sick notes; this study improves our understanding of the frequency with which ED providers are required to see patients for this administrative task.\n8\n\n\nDespite the public health implications, emergency physicians are providing varied advice to patients about exclusion from work while unwell. Increased physician education is needed to ensure that providers are aware of the latest public health recommendations for patients, both during and beyond epidemics. In other conditions, physicians guidance to remain off work is associated with the duration of time used for recovery, however, patients also report receiving advice that they cannot adhere to.\n9\n In our study, most physicians reported that they were unfamiliar with local sick leave legislation, meaning that providers are unknowingly advising patients to stay home for a period that could, in fact, threaten job and income loss. Physicians require greater knowledge beyond the biomedical paradigm in order to have informed conversations with their patients about return to work. This education could take place in the form of dedicated teaching on occupational health during residency training programs, increased involvement of occupational health specialists in emergency departments, and improved communication of current medical leave policy by policymakers to front‐line health practitioners.", "This study included only Canadian emergency physicians, and findings may not be generalizable to primary care settings or in other jurisdictions. Respondents may also represent a biased subset; it is possible that respondents have different sick note provision practices than non‐respondents. This survey did not use a previously validated questionnaire, however, open‐ended questions allowed physicians to provide further clarification to accompany their responses, which was reviewed by the authors. 
Despite the response rate, we believe that these findings provide an important initial understanding of the current situation in Canada.", "Providing sick notes is a common practice of Canadian Emergency Physicians, and may have significant impact on departmental functioning, particularly during viral outbreaks. Advice to patients is variable, and physicians have limited knowledge of governmental policy that impacts sick leave. Improved physician education may be one mechanism to provide better return‐to‐work guidance to patients.", "KH, JM, and HS came up with the idea for this study. KH, JM, HS, and CJV all contributed to survey development and conceptualization of the study. CJV helped with the REB submission. DA did the statistical analysis for this study. KH wrote the first draft of the manuscript, and JM, HS, and CJV edited and revised the manuscript.", "\nApproval of the research protocol: This study received ethics approval from the University of Toronto Health Sciences research ethics board. Informed consent: All participants provided informed consent prior to participating in the survey. Registry and the registration no. of the study/trial: N/A. Animal studies: N/A. Conflict of interests: Carolina Jimenez Vanegas is a paid staff member of the Decent Work and Health Network, which is supported by a grant from the Atkinson Foundation. Drs Hayman, Sheikh, and McLaren are steering committee members of the network and receive no compensation (financial or otherwise) for this activity." ]
[ null, "objectives", "methods", "results", null, null, null, null, "discussion", null, null, "conclusions", null, "COI-statement" ]
[ "emergency departments", "illness verification", "sick notes", "viral illness" ]
INTRODUCTION: Public health agencies recommend that patients with minor illnesses stay home to recover and to avoid infecting others. 1 Exclusion from work can be a powerful public health measure during outbreaks 2 ; in the US, workplace spread of influenza‐like illness conferred a population‐attributable risk of 5 million additional cases in a study of H1N1 spread during the 2009 epidemic. 3 Emergency physicians frequently see patients with viral infections in the emergency department and are asked to provide guidance about return to work. In some instances, patients may seek medical care solely for the purpose of obtaining a “sick note”, placing unnecessary burden on emergency care providers and systems. 4 In Canada, provincial legislation determines the duration of time that a patient can stay home from work, whether or not an employer can require a sick note, and if patients are paid for sick days. These standards vary significantly between the different provinces and territories. In Ontario, Canada's most populated province, the current legislation provides three unpaid days of job‐protected leave for personal illness, injury, or medical emergency. Employers may ask for a sick note for illness verification, thereby necessitating the employee to seek medical care. During the COVID‐19 pandemic, the legislation has been amended to provide an unspecified number of days of unpaid, job‐protected infectious disease emergency leave. This leave covers isolation, quarantine, and to provide care for family members due to school and day‐care closures. Employers cannot require employees to provide sick notes for the infectious disease emergency leave. 5 Legislation on emergency medical leave and sick notes impact the ability of patients to adhere to physician recommendations. Despite this, little is known about physician knowledge of these standards. OBJECTIVES: We performed a survey of Canadian emergency physicians to determine: What impacts do “sick notes” for brief illnesses have on patients and the healthcare system?How long do healthcare providers advise patients to remain off work if they are sick with a brief illness?What training and/or policies are in place to help healthcare providers issue sick notes? What impacts do “sick notes” for brief illnesses have on patients and the healthcare system? How long do healthcare providers advise patients to remain off work if they are sick with a brief illness? What training and/or policies are in place to help healthcare providers issue sick notes? METHODS: Following a literature review, the survey was designed by consensus of four authors, and revised following review by an additional physician and a labor policy expert. The survey was distributed in English only via SurveyMonkey 6 (Appendix 1). The Canadian Association of Emergency Physicians (CAEP) administered the survey. The link was distributed by email to all CAEP physician members three times in 2‐week intervals between December 2019 and January 2020. Participation in the survey was voluntary and all responses were anonymous. The survey included multiple‐choice demographic questions, as well multiple‐choice questions and open‐ended, numeric responses to quantify variables such as the duration of time that physicians advise patients to stay home from work, the cost of a sick note, and the frequency with which patients require additional medical care. Participants were allowed to skip questions and data from incomplete surveys were included. 
Data were analyzed in the R statistical programming language. 7 No financial incentive was provided for participating. RESULTS: Participant characteristics Of the 1524 CAEP physician members surveyed, 182 participated. 51.1% reported Ontario as their practice location. 79% practiced emergency medicine exclusively, with the remainder practicing emergency medicine and family medicine, sports medicine, or other specialties. Of the 1524 CAEP physician members surveyed, 182 participated. 51.1% reported Ontario as their practice location. 79% practiced emergency medicine exclusively, with the remainder practicing emergency medicine and family medicine, sports medicine, or other specialties. Impact on emergency department flow and functioning The majority (75.1%) of respondents answered that their practice environment does not have a sick note policy in place, or that they were unsure about the policy (10.7% of respondents). Only 13% of emergency providers charge patients for a sick note. The fee charged ranges from $5 to 80 (Canadian dollars), with a mean cost of $22.50. Most physicians provide at least one sick note per day (76.4%), with 4.2% reporting that they provide 5 or more notes per day. 89.7% of respondents believe that patients require additional medical care on half of visits or less. The majority (75.1%) of respondents answered that their practice environment does not have a sick note policy in place, or that they were unsure about the policy (10.7% of respondents). Only 13% of emergency providers charge patients for a sick note. The fee charged ranges from $5 to 80 (Canadian dollars), with a mean cost of $22.50. Most physicians provide at least one sick note per day (76.4%), with 4.2% reporting that they provide 5 or more notes per day. 89.7% of respondents believe that patients require additional medical care on half of visits or less. Advice to patients Most respondents answered that they advise patients to remain at home for minor illnesses, however, the duration of exclusion from work varied. For influenza‐like illness, respondents advised patients to remain home from work for a median of 4 days. For an upper respiratory tract infection and for gastroenteritis, respondents advised patients to remain home from work for a median of 2 days. For all conditions, a proportion of respondents did not provide a discrete number of days, rather answered that they advise patients to remain at home until the fever has resolved. The distribution of responses is illustrated in Figure 1. 82.8% of respondents believed that a patient is capable of determining when to return to work “most of the time”, 17.2% believed “sometimes”, and none answered “rarely”. Distribution of responses of survey participants to the question of how long they advise patients to stay home when sick for three different illnesses. UnFR, until fever resolves; URTI, upper respiratory tract infection Most respondents answered that they advise patients to remain at home for minor illnesses, however, the duration of exclusion from work varied. For influenza‐like illness, respondents advised patients to remain home from work for a median of 4 days. For an upper respiratory tract infection and for gastroenteritis, respondents advised patients to remain home from work for a median of 2 days. For all conditions, a proportion of respondents did not provide a discrete number of days, rather answered that they advise patients to remain at home until the fever has resolved. The distribution of responses is illustrated in Figure 1. 
82.8% of respondents believed that a patient is capable of determining when to return to work “most of the time”, 17.2% believed “sometimes”, and none answered “rarely”. Distribution of responses of survey participants to the question of how long they advise patients to stay home when sick for three different illnesses. UnFR, until fever resolves; URTI, upper respiratory tract infection What knowledge do providers have? 61.2% of participants answered that they were not familiar with the current sick leave legislation in their province, and 18.8% stated that they were unsure. Only 2% of respondents had received training on illness verification during medical school or residency. 61.2% of participants answered that they were not familiar with the current sick leave legislation in their province, and 18.8% stated that they were unsure. Only 2% of respondents had received training on illness verification during medical school or residency. Participant characteristics: Of the 1524 CAEP physician members surveyed, 182 participated. 51.1% reported Ontario as their practice location. 79% practiced emergency medicine exclusively, with the remainder practicing emergency medicine and family medicine, sports medicine, or other specialties. Impact on emergency department flow and functioning: The majority (75.1%) of respondents answered that their practice environment does not have a sick note policy in place, or that they were unsure about the policy (10.7% of respondents). Only 13% of emergency providers charge patients for a sick note. The fee charged ranges from $5 to 80 (Canadian dollars), with a mean cost of $22.50. Most physicians provide at least one sick note per day (76.4%), with 4.2% reporting that they provide 5 or more notes per day. 89.7% of respondents believe that patients require additional medical care on half of visits or less. Advice to patients: Most respondents answered that they advise patients to remain at home for minor illnesses, however, the duration of exclusion from work varied. For influenza‐like illness, respondents advised patients to remain home from work for a median of 4 days. For an upper respiratory tract infection and for gastroenteritis, respondents advised patients to remain home from work for a median of 2 days. For all conditions, a proportion of respondents did not provide a discrete number of days, rather answered that they advise patients to remain at home until the fever has resolved. The distribution of responses is illustrated in Figure 1. 82.8% of respondents believed that a patient is capable of determining when to return to work “most of the time”, 17.2% believed “sometimes”, and none answered “rarely”. Distribution of responses of survey participants to the question of how long they advise patients to stay home when sick for three different illnesses. UnFR, until fever resolves; URTI, upper respiratory tract infection What knowledge do providers have?: 61.2% of participants answered that they were not familiar with the current sick leave legislation in their province, and 18.8% stated that they were unsure. Only 2% of respondents had received training on illness verification during medical school or residency. DISCUSSION: Interpretation and implications This survey confirmed our hypothesis that many Emergency Department (ED) physicians are writing sick notes on a daily basis, and that ED providers believe that most of these patients do not require additional care for their viral illness and can safely decide for themselves when to return to work. 
The CAEP position statement on sick notes recommends that governments prevent employers from requesting sick notes; this study improves our understanding of the frequency with which ED providers are required to see patients for this administrative task. 8 Despite the public health implications, emergency physicians are providing varied advice to patients about exclusion from work while unwell. Increased physician education is needed to ensure that providers are aware of the latest public health recommendations for patients, both during and beyond epidemics. In other conditions, physicians guidance to remain off work is associated with the duration of time used for recovery, however, patients also report receiving advice that they cannot adhere to. 9 In our study, most physicians reported that they were unfamiliar with local sick leave legislation, meaning that providers are unknowingly advising patients to stay home for a period that could, in fact, threaten job and income loss. Physicians require greater knowledge beyond the biomedical paradigm in order to have informed conversations with their patients about return to work. This education could take place in the form of dedicated teaching on occupational health during residency training programs, increased involvement of occupational health specialists in emergency departments, and improved communication of current medical leave policy by policymakers to front‐line health practitioners. This survey confirmed our hypothesis that many Emergency Department (ED) physicians are writing sick notes on a daily basis, and that ED providers believe that most of these patients do not require additional care for their viral illness and can safely decide for themselves when to return to work. The CAEP position statement on sick notes recommends that governments prevent employers from requesting sick notes; this study improves our understanding of the frequency with which ED providers are required to see patients for this administrative task. 8 Despite the public health implications, emergency physicians are providing varied advice to patients about exclusion from work while unwell. Increased physician education is needed to ensure that providers are aware of the latest public health recommendations for patients, both during and beyond epidemics. In other conditions, physicians guidance to remain off work is associated with the duration of time used for recovery, however, patients also report receiving advice that they cannot adhere to. 9 In our study, most physicians reported that they were unfamiliar with local sick leave legislation, meaning that providers are unknowingly advising patients to stay home for a period that could, in fact, threaten job and income loss. Physicians require greater knowledge beyond the biomedical paradigm in order to have informed conversations with their patients about return to work. This education could take place in the form of dedicated teaching on occupational health during residency training programs, increased involvement of occupational health specialists in emergency departments, and improved communication of current medical leave policy by policymakers to front‐line health practitioners. Limitations This study included only Canadian emergency physicians, and findings may not be generalizable to primary care settings or in other jurisdictions. Respondents may also represent a biased subset; it is possible that respondents have different sick note provision practices than non‐respondents. 
This survey did not use a previously validated questionnaire, however, open‐ended questions allowed physicians to provide further clarification to accompany their responses, which was reviewed by the authors. Despite the response rate, we believe that these findings provide an important initial understanding of the current situation in Canada. This study included only Canadian emergency physicians, and findings may not be generalizable to primary care settings or in other jurisdictions. Respondents may also represent a biased subset; it is possible that respondents have different sick note provision practices than non‐respondents. This survey did not use a previously validated questionnaire, however, open‐ended questions allowed physicians to provide further clarification to accompany their responses, which was reviewed by the authors. Despite the response rate, we believe that these findings provide an important initial understanding of the current situation in Canada. Interpretation and implications: This survey confirmed our hypothesis that many Emergency Department (ED) physicians are writing sick notes on a daily basis, and that ED providers believe that most of these patients do not require additional care for their viral illness and can safely decide for themselves when to return to work. The CAEP position statement on sick notes recommends that governments prevent employers from requesting sick notes; this study improves our understanding of the frequency with which ED providers are required to see patients for this administrative task. 8 Despite the public health implications, emergency physicians are providing varied advice to patients about exclusion from work while unwell. Increased physician education is needed to ensure that providers are aware of the latest public health recommendations for patients, both during and beyond epidemics. In other conditions, physicians guidance to remain off work is associated with the duration of time used for recovery, however, patients also report receiving advice that they cannot adhere to. 9 In our study, most physicians reported that they were unfamiliar with local sick leave legislation, meaning that providers are unknowingly advising patients to stay home for a period that could, in fact, threaten job and income loss. Physicians require greater knowledge beyond the biomedical paradigm in order to have informed conversations with their patients about return to work. This education could take place in the form of dedicated teaching on occupational health during residency training programs, increased involvement of occupational health specialists in emergency departments, and improved communication of current medical leave policy by policymakers to front‐line health practitioners. Limitations: This study included only Canadian emergency physicians, and findings may not be generalizable to primary care settings or in other jurisdictions. Respondents may also represent a biased subset; it is possible that respondents have different sick note provision practices than non‐respondents. This survey did not use a previously validated questionnaire, however, open‐ended questions allowed physicians to provide further clarification to accompany their responses, which was reviewed by the authors. Despite the response rate, we believe that these findings provide an important initial understanding of the current situation in Canada. 
CONCLUSIONS: Providing sick notes is a common practice of Canadian Emergency Physicians, and may have significant impact on departmental functioning, particularly during viral outbreaks. Advice to patients is variable, and physicians have limited knowledge of governmental policy that impacts sick leave. Improved physician education may be one mechanism to provide better return‐to‐work guidance to patients. AUTHOR CONTRIBUTIONS: KH, JM, and HS came up with the idea for this study. KH, JM, HS, and CJV all contributed to survey development and conceptualization of the study. CJV helped with the REB submission. DA did the statistical analysis for this study. KH wrote the first draft of the manuscript, and JM, HS, and CJV edited and revised the manuscript. DISCLOSURES: Approval of the research protocol: This study received ethics approval from the University of Toronto Health Sciences research ethics board. Informed consent: All participants provided informed consent prior to participating in the survey. Registry and the registration no. of the study/trial: N/A. Animal studies: N/A. Conflict of interests: Carolina Jimenez Vanegas is a paid staff member of the Decent Work and Health Network, which is supported by a grant from the Atkinson Foundation. Drs Hayman, Sheikh, and McLaren are steering committee members of the network and receive no compensation (financial or otherwise) for this activity.
Background: Emergency physicians frequently provide care for patients who are experiencing viral illnesses and may be asked to provide verification of the patient's illness (a sick note) for time missed from work. Exclusion from work can be a powerful public health measure during epidemics; both legislation and physician advice contribute to patients' decisions to recover at home. Methods: We surveyed Canadian Association of Emergency Physicians members to determine what impacts sick notes have on patients and the system, the duration of time off work that physicians recommend, and what training and policies are in place to help providers. Descriptive statistics from the survey are reported. Results: A total of 182 of 1524 physicians responded to the survey; 51.1% practice in Ontario. 76.4% of physicians write at least one sick note per day, with 4.2% writing 5 or more sick notes per day. Thirteen percent of physicians charge for a sick note (mean cost $22.50). Patients were advised to stay home for a median of 4 days with influenza and 2 days with gastroenteritis and upper respiratory tract infections. 82.8% of physicians believe that most of the time, patients can determine when to return to work. Advice varied widely between respondents. 61% of respondents were unfamiliar with sick leave legislation in their province and only 2% had received formal training about illness verification. Conclusions: Providing sick notes is a common practice of Canadian Emergency Physicians; return-to-work guidance is variable. Improved physician education about public health recommendations and provincial legislation may strengthen physician advice to patients.
INTRODUCTION: Public health agencies recommend that patients with minor illnesses stay home to recover and to avoid infecting others. 1 Exclusion from work can be a powerful public health measure during outbreaks 2 ; in the US, workplace spread of influenza‐like illness conferred a population‐attributable risk of 5 million additional cases in a study of H1N1 spread during the 2009 epidemic. 3 Emergency physicians frequently see patients with viral infections in the emergency department and are asked to provide guidance about return to work. In some instances, patients may seek medical care solely for the purpose of obtaining a “sick note”, placing unnecessary burden on emergency care providers and systems. 4 In Canada, provincial legislation determines the duration of time that a patient can stay home from work, whether or not an employer can require a sick note, and if patients are paid for sick days. These standards vary significantly between the different provinces and territories. In Ontario, Canada's most populated province, the current legislation provides three unpaid days of job‐protected leave for personal illness, injury, or medical emergency. Employers may ask for a sick note for illness verification, thereby necessitating the employee to seek medical care. During the COVID‐19 pandemic, the legislation has been amended to provide an unspecified number of days of unpaid, job‐protected infectious disease emergency leave. This leave covers isolation, quarantine, and to provide care for family members due to school and day‐care closures. Employers cannot require employees to provide sick notes for the infectious disease emergency leave. 5 Legislation on emergency medical leave and sick notes impact the ability of patients to adhere to physician recommendations. Despite this, little is known about physician knowledge of these standards. CONCLUSIONS: Providing sick notes is a common practice of Canadian Emergency Physicians, and may have significant impact on departmental functioning, particularly during viral outbreaks. Advice to patients is variable, and physicians have limited knowledge of governmental policy that impacts sick leave. Improved physician education may be one mechanism to provide better return‐to‐work guidance to patients.
Background: Emergency physicians frequently provide care for patients who are experiencing viral illnesses and may be asked to provide verification of the patient's illness (a sick note) for time missed from work. Exclusion from work can be a powerful public health measure during epidemics; both legislation and physician advice contribute to patients' decisions to recover at home. Methods: We surveyed Canadian Association of Emergency Physicians members to determine what impacts sick notes have on patients and the system, the duration of time off work that physicians recommend, and what training and policies are in place to help providers. Descriptive statistics from the survey are reported. Results: A total of 182 of 1524 physicians responded to the survey; 51.1% practice in Ontario. 76.4% of physicians write at least one sick note per day, with 4.2% writing 5 or more sick notes per day. Thirteen percent of physicians charge for a sick note (mean cost $22.50). Patients were advised to stay home for a median of 4 days with influenza and 2 days with gastroenteritis and upper respiratory tract infections. 82.8% of physicians believe that most of the time, patients can determine when to return to work. Advice varied widely between respondents. 61% of respondents were unfamiliar with sick leave legislation in their province and only 2% had received formal training about illness verification. Conclusions: Providing sick notes is a common practice of Canadian Emergency Physicians; return-to-work guidance is variable. Improved physician education about public health recommendations and provincial legislation may strengthen physician advice to patients.
3,343
303
[ 325, 44, 120, 191, 46, 288, 99, 72 ]
14
[ "patients", "sick", "respondents", "emergency", "work", "physicians", "providers", "home", "provide", "notes" ]
[ "illnesses duration exclusion", "employers ask sick", "local sick leave", "disease emergency leave", "remain work sick" ]
[CONTENT] emergency departments | illness verification | sick notes | viral illness [SUMMARY]
[CONTENT] emergency departments | illness verification | sick notes | viral illness [SUMMARY]
[CONTENT] emergency departments | illness verification | sick notes | viral illness [SUMMARY]
[CONTENT] emergency departments | illness verification | sick notes | viral illness [SUMMARY]
[CONTENT] emergency departments | illness verification | sick notes | viral illness [SUMMARY]
[CONTENT] emergency departments | illness verification | sick notes | viral illness [SUMMARY]
[CONTENT] Adult | Attitude of Health Personnel | Canada | Decision Making | Emergency Medicine | Female | Health Services Misuse | Humans | Male | Physician-Patient Relations | Physicians | Practice Patterns, Physicians' | Return to Work | Sick Leave | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Attitude of Health Personnel | Canada | Decision Making | Emergency Medicine | Female | Health Services Misuse | Humans | Male | Physician-Patient Relations | Physicians | Practice Patterns, Physicians' | Return to Work | Sick Leave | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Attitude of Health Personnel | Canada | Decision Making | Emergency Medicine | Female | Health Services Misuse | Humans | Male | Physician-Patient Relations | Physicians | Practice Patterns, Physicians' | Return to Work | Sick Leave | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Attitude of Health Personnel | Canada | Decision Making | Emergency Medicine | Female | Health Services Misuse | Humans | Male | Physician-Patient Relations | Physicians | Practice Patterns, Physicians' | Return to Work | Sick Leave | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Attitude of Health Personnel | Canada | Decision Making | Emergency Medicine | Female | Health Services Misuse | Humans | Male | Physician-Patient Relations | Physicians | Practice Patterns, Physicians' | Return to Work | Sick Leave | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Attitude of Health Personnel | Canada | Decision Making | Emergency Medicine | Female | Health Services Misuse | Humans | Male | Physician-Patient Relations | Physicians | Practice Patterns, Physicians' | Return to Work | Sick Leave | Surveys and Questionnaires [SUMMARY]
[CONTENT] illnesses duration exclusion | employers ask sick | local sick leave | disease emergency leave | remain work sick [SUMMARY]
[CONTENT] illnesses duration exclusion | employers ask sick | local sick leave | disease emergency leave | remain work sick [SUMMARY]
[CONTENT] illnesses duration exclusion | employers ask sick | local sick leave | disease emergency leave | remain work sick [SUMMARY]
[CONTENT] illnesses duration exclusion | employers ask sick | local sick leave | disease emergency leave | remain work sick [SUMMARY]
[CONTENT] illnesses duration exclusion | employers ask sick | local sick leave | disease emergency leave | remain work sick [SUMMARY]
[CONTENT] illnesses duration exclusion | employers ask sick | local sick leave | disease emergency leave | remain work sick [SUMMARY]
[CONTENT] patients | sick | respondents | emergency | work | physicians | providers | home | provide | notes [SUMMARY]
[CONTENT] patients | sick | respondents | emergency | work | physicians | providers | home | provide | notes [SUMMARY]
[CONTENT] patients | sick | respondents | emergency | work | physicians | providers | home | provide | notes [SUMMARY]
[CONTENT] patients | sick | respondents | emergency | work | physicians | providers | home | provide | notes [SUMMARY]
[CONTENT] patients | sick | respondents | emergency | work | physicians | providers | home | provide | notes [SUMMARY]
[CONTENT] patients | sick | respondents | emergency | work | physicians | providers | home | provide | notes [SUMMARY]
[CONTENT] emergency | leave | care | legislation | sick | patients | days | medical | provide | standards [SUMMARY]
[CONTENT] survey | questions | review | following | distributed | data | choice | multiple choice | multiple | included [SUMMARY]
[CONTENT] respondents | patients | answered | medicine | patients remain home | remain home | patients remain | home | remain | days [SUMMARY]
[CONTENT] mechanism provide | leave improved physician | particularly | particularly viral | particularly viral outbreaks | particularly viral outbreaks advice | impacts sick leave improved | impacts sick leave | impact departmental functioning particularly | impact departmental functioning [SUMMARY]
[CONTENT] patients | respondents | sick | emergency | physicians | work | medicine | health | provide | providers [SUMMARY]
[CONTENT] patients | respondents | sick | emergency | physicians | work | medicine | health | provide | providers [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Canadian Association of Emergency Physicians ||| [SUMMARY]
[CONTENT] 182 | 1524 | 51.1% | Ontario ||| 76.4% | at least one | 4.2% | 5 ||| Thirteen | 22.50 ||| 4 days | 2 days ||| 82.8% ||| ||| 61% | only 2% [SUMMARY]
[CONTENT] Canadian ||| [SUMMARY]
[CONTENT] ||| ||| Canadian Association of Emergency Physicians ||| ||| 182 | 1524 | 51.1% | Ontario ||| 76.4% | at least one | 4.2% | 5 ||| Thirteen | 22.50 ||| 4 days | 2 days ||| 82.8% ||| ||| 61% | only 2% ||| Canadian ||| [SUMMARY]
[CONTENT] ||| ||| Canadian Association of Emergency Physicians ||| ||| 182 | 1524 | 51.1% | Ontario ||| 76.4% | at least one | 4.2% | 5 ||| Thirteen | 22.50 ||| 4 days | 2 days ||| 82.8% ||| ||| 61% | only 2% ||| Canadian ||| [SUMMARY]
Randomized clinical trial: pharmacokinetics and safety of multimatrix mesalamine for treatment of pediatric ulcerative colitis.
26893546
Limited data are available on mesalamine (5-aminosalicylic acid; 5-ASA) use in pediatric ulcerative colitis (UC).
BACKGROUND
Participants (5-17 years of age; 18-82 kg, stratified by weight) with UC received multimatrix mesalamine 30, 60, or 100 mg/kg/day once daily (up to 4,800 mg/day) for 7 days. Blood samples were collected pre-dose on days 5 and 6. On days 7 and 8, blood and urine samples were collected and safety was evaluated. Plasma and urine concentrations of 5-ASA and its major metabolite, acetyl-5-ASA (Ac-5-ASA), were analyzed by non-compartmental methods and used to develop a population pharmacokinetic model.
METHODS
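The weight-based, capped dosing described in the methods can be expressed as a small calculation. The sketch below is illustrative only: the 4,800 mg/day ceiling and the 300/600/1,200 mg tablet strengths come from the study design text in this record, but the rounding rule and the example weights are assumptions, not the trial's actual dosing algorithm.

```python
def daily_dose_mg(weight_kg: float, mg_per_kg: float, cap_mg: int = 4800) -> int:
    """Weight-based daily dose, capped at the adult maximum and rounded down
    to the nearest 300 mg (the smallest tablet strength used in the study).
    The protocol's weight-stratified randomization rules are not encoded here."""
    raw = min(weight_kg * mg_per_kg, cap_mg)
    return int(raw // 300) * 300

# Illustrative weights only (not study data): daily dose for each cohort.
for weight in (20, 35, 60):
    print(weight, "kg:", [daily_dose_mg(weight, d) for d in (30, 60, 100)])
```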
Fifty-two subjects (21 [30 mg/kg]; 22 [60 mg/kg]; 9 [100 mg/kg]) were randomized. On day 7, systemic exposures of 5-ASA and Ac-5-ASA exhibited a dose-proportional increase between 30 and 60 mg/kg/day cohorts. For 30, 60, and 100 mg/kg/day doses, mean percentages of 5-ASA absorbed were 29.4%, 27.0%, and 22.1%, respectively. Simulated steady-state exposures and variabilities for 5-ASA and Ac-5-ASA (coefficient of variation approximately 50% and 40%-45%, respectively) were similar to those observed previously in adults at comparable doses. Treatment-emergent adverse events were reported by ten subjects. Events were similar among different doses and age groups with no new safety signals identified.
RESULTS
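The absorption percentages quoted above are derived from 0–24 hour urinary recovery using the molecular-weight correction described in this record's non-compartmental methods (0.7847 = 153.14/195.15). A minimal sketch of that arithmetic follows; the recovery amounts in the example are hypothetical, not study data.

```python
MW_RATIO = 153.14 / 195.15  # 5-ASA / Ac-5-ASA molecular weights, ≈0.7847

def percent_dose_absorbed(xu_5asa_mg: float, xu_ac5asa_mg: float, dose_mg: float) -> float:
    """Percent of the oral dose absorbed, estimated from 0-24 h urinary recovery;
    the metabolite amount is converted to 5-ASA equivalents via MW_RATIO."""
    return (xu_5asa_mg + MW_RATIO * xu_ac5asa_mg) / dose_mg * 100.0

# Hypothetical recoveries for a 2,400 mg dose:
print(round(percent_dose_absorbed(150.0, 650.0, 2400.0), 1))  # ≈27.5
```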
Children and adolescents with UC receiving multimatrix mesalamine demonstrated 5-ASA and Ac-5-ASA pharmacokinetic profiles similar to historical adult data. Multimatrix mesalamine was well tolerated across all dose and age groups. ClinicalTrials.gov Identifier: NCT01130844.
CONCLUSION
[ "Adolescent", "Aminosalicylic Acids", "Anti-Inflammatory Agents, Non-Steroidal", "Area Under Curve", "Child", "Child, Preschool", "Colitis, Ulcerative", "Female", "Humans", "Male", "Mesalamine" ]
4745832
Introduction
Ulcerative colitis (UC) is a chronic inflammatory disease of the colon and rectum characterized by cycles of remission and relapse over the life of the subject.1,2 While the incidence of UC peaks around early adulthood, onset of the disorder can occur from early childhood through adulthood.3 UC is one of the more prevalent chronic diseases in children, with an incidence rate of 2.1 per 100,000 children in the United States.4 The primary goals in UC management are induction and maintenance of remission to improve subjects’ health and quality of life.5 Oral 5-aminosalicylic acid (5-ASA) formulations, such as multimatrix mesalamine (Shire US Inc., Wayne, PA, USA), have proven effective in the induction and maintenance of UC remission,6–10 and are recommended first-line therapy for adults with active mild-to-moderate UC.5,6 5-ASAs are also commonly used as standard-of-care first-line therapy in pediatric UC. However, while studies supporting 5-ASA use in adult subjects with UC are abundant,11 evidence on the efficacy and safety of 5-ASA in pediatric UC is less substantial: only a few randomized, controlled clinical studies of 5-ASA for the induction or maintenance of remission have been conducted in pediatric subjects.12,13 To date, no 5-ASA product has been licensed for maintenance of remission of UC in children. As the first step in a program evaluating multimatrix mesalamine in pediatric UC, the primary objective of this randomized Phase I study (ClinicalTrials.gov Identifier: NCT01130844) was to assess the pharmacokinetics of 5-ASA and its major metabolite acetyl-5-ASA (Ac-5-ASA) after administration of once-daily multimatrix mesalamine at three different doses (30, 60, or 100 mg/kg/day) for 7 days in children and adolescents diagnosed with UC. Secondary objectives included examining the safety and tolerability of multimatrix mesalamine at these doses in children and adolescents with UC, and evaluating the extent of absorption of 5-ASA from multimatrix mesalamine at steady state.
Population pharmacokinetic analysis
A population pharmacokinetic model was developed using non-linear mixed effects modeling (NONMEM® program, Version 7.2.0; ICON, Ellicott City, MD, USA) to describe the population variability in 5-ASA/Ac-5-ASA pharmacokinetics and the relationship between pharmacokinetic parameters and potential explanatory covariates (eg, age, weight, and sex). Pharmacokinetic parameters were estimated using Monte-Carlo Importance Sampling Expectation Maximization method with “Mu Referencing”.17 Development of the population pharmacokinetic model consisted of building a base model, followed by development of a covariate model using an interim data cut (40 subjects); the final model was updated with data from an additional 12 subjects from the study. Structural model selection was data driven, based on goodness-of-fit plots (observed vs predicted concentration, conditional weighted residual vs predicted concentration or time, histograms of individual random effects, etc), successful convergence, plausibility and precision of parameter estimates, and the minimum objective function value. Missing drug concentrations and concentrations reported as “not quantifiable” were excluded in the analysis. The final pharmacokinetic model was evaluated using visual predictive check (VPC), and this model was used to simulate the expected 5-ASA and Ac-5-ASA plasma concentration profiles in a broader population of children and adolescents using Trial Simulator (Version 2.2.1, Pharsight Corporation). Data presentation and construction of plots were performed using S-PLUS (Version 8.1; Tibco Software Inc., Palo Alto, CA, USA). The simulated exposures were compared to historical adult exposures.15,16
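The covariate structure described above is not reproduced here, but its central idea, fixed-exponent allometric scaling of clearance and volume plus log-normal between-subject variability, can be sketched in a few lines. This is a simplified illustration, not the NONMEM model itself; the reference clearance is the typical 70 kg estimate reported later in this record, and the omega value is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def allometric(par_ref: float, weight_kg: float, exponent: float, ref_kg: float = 70.0) -> float:
    """Scale a typical parameter value from the 70 kg reference to a given body
    weight (exponent 0.75 for clearances, 1.0 for volumes, as in the model)."""
    return par_ref * (weight_kg / ref_kg) ** exponent

def individual_clearance(weight_kg: float, cl_ref: float = 85.6, omega: float = 0.3) -> float:
    """One simulated individual's apparent metabolic clearance (L/h) with
    log-normal between-subject variability; omega is illustrative only."""
    return allometric(cl_ref, weight_kg, 0.75) * float(np.exp(rng.normal(0.0, omega)))

# A 35 kg child scales to (35/70)**0.75 ≈ 0.59 of the 70 kg typical value.
print(round(allometric(85.6, 35.0, 0.75), 1), "L/h typical;",
      round(individual_clearance(35.0), 1), "L/h for one simulated individual")
```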
Results
Subjects Between October 2010 and June 2013, a total of 52 subjects were screened, randomized, and treated; all completed the study (21 subjects [5 children, 16 adolescents] in the 30 mg/kg/day dose group, 22 [4 children, 18 adolescents] in the 60 mg/kg/day dose group, and 9 [7 children, 2 adolescents] in the 100 mg/kg/day dose group; Figure 1). While the study met the FDA enrollment requirement of a minimum of six subjects per dose level, more patients were enrolled into the 30 and 60 mg/kg/day dose groups than into the 100 mg/kg/day dose group due to difficulties enrolling children who weighed <49 kg. Overall, demographic data and baseline disease characteristics were well balanced between dose groups, although fewer subjects were studied in the 100 mg/kg/day dose group due to enrollment difficulties (Table 1).

Non-compartmental pharmacokinetics Inter-assay accuracy and precision data for 5-ASA and Ac-5-ASA in the QC samples at three concentrations each in human plasma and urine across all analytical batches are shown in Table S1. For 5-ASA, mean plasma concentration–time profiles attained maxima at ~6 or 9 hours post-dose, with a secondary peak at 24 hours post-dose (Figure 2A; Table 2). Median tmax was 6 and 9 hours (range 0–24 hours for all dose levels) post-dose for 30 and 60 mg/kg/day doses, respectively. For 100 mg/kg/day, median tmax was approximately 2 hours post-dose; however, there were only nine subjects at 100 mg/kg/day. Steady-state plasma concentrations for 5-ASA were attained by day 5 for all doses. On day 7, systemic exposure to 5-ASA (mean AUCss and Cmax,ss) increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. Based on urinary recovery, the mean percentages of 5-ASA absorbed from multimatrix mesalamine were 29.4%, 27.0%, and 22.1% for 30, 60, and 100 mg/kg/day doses, respectively. High between-subject variability was noted for 5-ASA AUCss and Cmax,ss, with arithmetic CV values ranging from 36% to 52% and 52% to 60%, respectively. Mean CLR ranged from 5.0 to 6.5 L/h, with a trend toward decreasing with increasing dose. Mean plasma concentration–time profiles of plasma Ac-5-ASA were similar to those of the parent drug 5-ASA (Figure 2B; Table 2), with median tmax of 9, 7.5, and 2 hours post-dose for 30, 60, and 100 mg/kg/day doses, respectively. Steady-state plasma concentrations for Ac-5-ASA were attained by day 5 for all doses. On day 7, as for 5-ASA, systemic exposure to Ac-5-ASA increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. The metabolite Ac-5-ASA was more abundant than the parent drug (Table 2; MRAUCss of Ac-5-ASA:5-ASA), with no apparent trend across dose. Mean CLR ranged from 10.0 to 16.2 L/h, with a trend toward decreasing with increasing dose. Moderate to high between-subject variability was noted for AUCss and Cmax,ss, with arithmetic CV values ranging from 35% to 44% and 40% to 59%, respectively. There was no apparent difference in 5-ASA or Ac-5-ASA systemic exposure, as measured by mean AUCss and Cmax,ss, between children (aged 5–12 years) and adolescents (aged 13–17 years) for this weight-based dosing paradigm (data not shown).

Population pharmacokinetics The pharmacokinetics of 5-ASA and Ac-5-ASA were adequately described by the population pharmacokinetic structural model (Figure S1) that included: first-order absorption from two depot compartments, absorption lag times, and separate central compartments for 5-ASA and Ac-5-ASA with respective urine compartments for renal clearance. Non-renal clearance of 5-ASA was assumed to involve only metabolism to Ac-5-ASA, and all elimination processes were based on the first-order kinetics. Allometric scaling by body weight was applied to all clearance and volume parameters, with the exponents fixed to the theoretical value of 0.75 for clearance and 1 for volume parameters.18 Parameter estimates for the final model are shown in Table S2; following evaluation by VPC, less than 9% of the observed concentrations fell outside the 90% prediction intervals for both analytes, suggesting that the final model adequately described the observed data. For a 70 kg individual, the typical value of 5-ASA apparent renal clearance was estimated to be 1.15 L/h (95% confidence interval [CI]: 1.01–1.32 L/h), and apparent metabolic clearance was estimated to be 85.6 L/h (95% CI: 75.9–96.5 L/h). The typical value of Ac-5-ASA apparent renal clearance was estimated to be 2.54 L/h (95% CI: 2.27–2.86 L/h), and apparent non-renal clearance was estimated to be 74.4 L/h (95% CI: 66.7–83.1 L/h). Typical estimates of the central volume of distribution were 109 L (95% CI: 70.1–169 L) for 5-ASA and 7.10 L (95% CI: 5.31–9.49 L) for Ac-5-ASA. Estimated absorption rates from depot 1 and depot 3 were 0.0334 h−1 (95% CI: 0.0207–0.0539 h−1) and 0.273 h−1 (95% CI: 0.165–0.448 h−1). Corresponding estimated absorption lag times from each depot were 5.10 hours (95% CI: 4.31–6.05 hours) and 15.0 hours (95% CI: 14.0–16.1 hours); the lag time from depot 3 represented the additional lag following delay in absorption from depot 1. The fraction of dose absorbed from depot 1 was estimated to be 0.734 (95% CI: 0.413–1.06), and remaining dose fractions were assumed to be absorbed from depot 3. Goodness-of-fit plots for 5-ASA and Ac-5-ASA plasma concentrations are shown in Figures S2 and S3. The population pharmacokinetic model was used to simulate steady-state profiles for both 5-ASA and Ac-5-ASA for four weight groups (18–23, 24–35, 36–50, and 51–90 kg) at planned high doses (1,800, 2,400, 3,600, and 4,800 mg) and low doses (900, 1,200, 1,800, and 2,400 mg) in 80 subjects with 1,000 replications (Figure 3A–D; Tables S3 and S4). The variability in the predicted steady-state exposures for children and adolescents was approximately 50% (CV%) for 5-ASA AUC (Table S3) and 40%–45% for Ac-5-ASA AUC (Table S4). The proposed low dose of multimatrix mesalamine for each weight category is predicted to provide comparable steady-state exposure for both 5-ASA and Ac-5-ASA to those observed following administration of a fixed 2,400 mg dose in the adult population (Figure 3A–D). The proposed high dose for each weight category is predicted to provide comparable steady-state AUC for both 5-ASA and Ac-5-ASA to those observed following administration of a fixed 4,800 mg dose in the adult population.

Safety The incidence of treatment-emergent adverse events (TEAEs) was 19.2% (ten subjects overall); all TEAEs were mild to moderate (Table 3). Incidence rates were similar among different dose groups, and the most commonly reported TEAEs were abdominal pain, musculoskeletal pain, and headache, each reported in 3.8% of subjects (n=2 each). There were no TEAEs leading to premature discontinuation. Two subjects (3.8%) experienced a TEAE considered by the investigator to be related to the study drug: one subject in the 30 mg/kg/day dose group experienced abdominal pain, dehydration, and vomiting, and one subject in the 60 mg/kg/day dose group experienced moderate upper abdominal pain. No relevant differences between the age groups were observed with regard to the occurrence, severity, or relatedness of TEAEs, and no clinically relevant abnormalities in biochemistry, hematology, urinalysis, or vital sign values were observed. No new safety signals were reported.
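One way to see why the weight-banded pediatric doses are expected to approximate adult exposures is the allometric argument implicit in the model: at steady state AUC is roughly the absorbed dose divided by clearance, and clearance scales with weight to the 0.75 power, so an exposure-matching pediatric dose is approximately the adult dose multiplied by (weight/70)^0.75. The sketch below is a back-of-envelope check under those assumptions (similar fraction absorbed across weights), not the study's simulation; the example weights are illustrative points within the reported bands.

```python
def exposure_matching_dose(adult_dose_mg: float, weight_kg: float, ref_kg: float = 70.0) -> float:
    """Dose expected to give roughly the adult steady-state AUC when clearance
    scales as weight**0.75 (AUC_ss ≈ absorbed dose / clearance)."""
    return adult_dose_mg * (weight_kg / ref_kg) ** 0.75

# Compared against the planned high doses of 1,800 / 2,400 / 3,600 / 4,800 mg:
for weight in (20, 30, 43, 70):
    print(weight, "kg ->", round(exposure_matching_dose(4800, weight)), "mg")
```

Under these assumptions the matching doses come out near the planned high doses for each weight band, which is consistent with the model-based prediction of comparable steady-state AUCs across weight groups.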
null
null
[ "Methods", "Study design", "Study population", "Pharmacokinetic evaluations", "Safety", "Sample size", "Non-compartmental pharmacokinetic analysis", "Non-compartmental pharmacokinetics", "Population pharmacokinetics", "Safety" ]
[ " Study design This Phase I, multicenter, randomized, open-label, three-arm study was conducted in 12 sites across three countries (United States, Poland, and Slovakia). Children and adolescents (aged 5–17 years) with a diagnosis of UC were randomly assigned to receive multimatrix mesalamine 30, 60, or 100 mg/kg once daily (up to a maximum of 4,800 mg/day) each morning for 7 days. To achieve these doses in children, smaller-sized 300 and 600 mg tablets were developed for this study to augment the existing approved 1,200 mg multimatrix mesalamine tablet. Total daily doses for the study ranged from 900 to 4,800 mg/day. Randomization was stratified by body weight (18–24, 25–49, and 50–82 kg); subjects weighing 18–24 kg were only randomized to 60 and 100 mg/kg/day groups to avoid receiving doses less than 900 mg, and subjects weighing 50–82 kg were only randomized to 30 and 60 mg/kg/day groups to avoid receiving doses greater than 4,800 mg, the maximum approved dose in adult subjects. Randomization was achieved via use of a randomization number allocated prior to dosing, once eligibility had been determined, and a randomization schedule was produced by an interactive voice response system vendor. Subjects were dosed with multimatrix mesalamine every morning for 7 days, at home on days 1–4, and on-site on days 5–7. On day 7, blood and urine pharmacokinetic samples were collected and safety assessments were performed until 24 hours post-dose. The study protocol, protocol amendments, informed consent documents, relevant supporting information, and subject recruitment information were submitted to and approved by the respective independent ethics committees, institutional review boards, and regulatory agencies prior to study initiation. The independent ethics committees, institutional review boards, and regulatory agencies are from the following: United States: Arkansas Children’s Hospital, Advanced Clinical Research Institute, University of California, San Francisco, Connecticut Children’s Medical Center, University of Maryland Medical Center for Children, and Mayo Clinic; Australia: Royal Children’s Hospital Melbourne; Poland: Klinika Pediatrii Gastroenterologii i Zywienia, Uniwersytecki Szpital Dzieciecy w Krakowie, Klinika Pediatrii Dzieciecy Szpital Kliniczny im prof Antoniego Gebali, Kliniczny Oddzial Pediatrii z Pododdzialem Neurologii Dzieciecej Szpital Wojewodzki, Klinika Gastroenterolofii Pediatrii, Instytut Centrum Zdrowia Matki Polki, Oddzial Gastroenterologii i Hepatologii, and Instytut Pomnik-Centrum Zdrowia Dziecka; Slovakia: Gastroenterologicka ambulancia, Univerzitna nemocnica Martin, and DFNsP Banska Bystrica; and the United Kingdom: Alder Hey Children’s NHS Foundation Trust, Barts Health NHS Trust/Royal London Hospital, Somers Clinical Research Facility/Great Ormond Street Hospital, Southampton General Hospital. Further details are available at https://clinicaltrials.gov/ct2/show/NCT01130844. All authors had access to the study data, and reviewed and approved the final manuscript.\nThis Phase I, multicenter, randomized, open-label, three-arm study was conducted in 12 sites across three countries (United States, Poland, and Slovakia). Children and adolescents (aged 5–17 years) with a diagnosis of UC were randomly assigned to receive multimatrix mesalamine 30, 60, or 100 mg/kg once daily (up to a maximum of 4,800 mg/day) each morning for 7 days. 
To achieve these doses in children, smaller-sized 300 and 600 mg tablets were developed for this study to augment the existing approved 1,200 mg multimatrix mesalamine tablet. Total daily doses for the study ranged from 900 to 4,800 mg/day. Randomization was stratified by body weight (18–24, 25–49, and 50–82 kg); subjects weighing 18–24 kg were only randomized to 60 and 100 mg/kg/day groups to avoid receiving doses less than 900 mg, and subjects weighing 50–82 kg were only randomized to 30 and 60 mg/kg/day groups to avoid receiving doses greater than 4,800 mg, the maximum approved dose in adult subjects. Randomization was achieved via use of a randomization number allocated prior to dosing, once eligibility had been determined, and a randomization schedule was produced by an interactive voice response system vendor. Subjects were dosed with multimatrix mesalamine every morning for 7 days, at home on days 1–4, and on-site on days 5–7. On day 7, blood and urine pharmacokinetic samples were collected and safety assessments were performed until 24 hours post-dose. The study protocol, protocol amendments, informed consent documents, relevant supporting information, and subject recruitment information were submitted to and approved by the respective independent ethics committees, institutional review boards, and regulatory agencies prior to study initiation. The independent ethics committees, institutional review boards, and regulatory agencies are from the following: United States: Arkansas Children’s Hospital, Advanced Clinical Research Institute, University of California, San Francisco, Connecticut Children’s Medical Center, University of Maryland Medical Center for Children, and Mayo Clinic; Australia: Royal Children’s Hospital Melbourne; Poland: Klinika Pediatrii Gastroenterologii i Zywienia, Uniwersytecki Szpital Dzieciecy w Krakowie, Klinika Pediatrii Dzieciecy Szpital Kliniczny im prof Antoniego Gebali, Kliniczny Oddzial Pediatrii z Pododdzialem Neurologii Dzieciecej Szpital Wojewodzki, Klinika Gastroenterolofii Pediatrii, Instytut Centrum Zdrowia Matki Polki, Oddzial Gastroenterologii i Hepatologii, and Instytut Pomnik-Centrum Zdrowia Dziecka; Slovakia: Gastroenterologicka ambulancia, Univerzitna nemocnica Martin, and DFNsP Banska Bystrica; and the United Kingdom: Alder Hey Children’s NHS Foundation Trust, Barts Health NHS Trust/Royal London Hospital, Somers Clinical Research Facility/Great Ormond Street Hospital, Southampton General Hospital. Further details are available at https://clinicaltrials.gov/ct2/show/NCT01130844. All authors had access to the study data, and reviewed and approved the final manuscript.\n Study population Children (aged 5–12 years) and adolescents (aged 13–17 years), weighing 18–82 kg, with a diagnosis of UC ≥3 months prior to the first dose of study drug were enrolled. Subjects already on a 5-ASA product had to have been on a stable regimen for ≥4 weeks prior to the first dose of study drug. In addition to each subject documenting assent, the subject’s parent or legal representative had to provide informed consent. Subjects with current or recurrent disease (other than UC) that could affect the colon or the action, absorption, or disposition of the study drug were excluded. 
Additional exclusion criteria included: UC confined to the rectum; allergy, hypersensitivity, or poor tolerability to salicylates or aminosalicylates; history of hepatic or renal impairment, pancreatitis, or Reyes syndrome; use of another investigational product ≤30 days prior to the first dose of study drug; serious, severe, or unstable psychiatric or physical illness; positive urine screen for drugs or abuse of alcohol; and pregnancy or lactation in female subjects.\nChildren (aged 5–12 years) and adolescents (aged 13–17 years), weighing 18–82 kg, with a diagnosis of UC ≥3 months prior to the first dose of study drug were enrolled. Subjects already on a 5-ASA product had to have been on a stable regimen for ≥4 weeks prior to the first dose of study drug. In addition to each subject documenting assent, the subject’s parent or legal representative had to provide informed consent. Subjects with current or recurrent disease (other than UC) that could affect the colon or the action, absorption, or disposition of the study drug were excluded. Additional exclusion criteria included: UC confined to the rectum; allergy, hypersensitivity, or poor tolerability to salicylates or aminosalicylates; history of hepatic or renal impairment, pancreatitis, or Reyes syndrome; use of another investigational product ≤30 days prior to the first dose of study drug; serious, severe, or unstable psychiatric or physical illness; positive urine screen for drugs or abuse of alcohol; and pregnancy or lactation in female subjects.\n Pharmacokinetic evaluations Blood samples (2 mL) for the measurement of 5-ASA and Ac-5-ASA plasma concentrations were taken pre-dose on days 5 and 6 and at the following time points on day 7: pre-dose, and 2, 4, 6, 9, 12, 16, and 24 hours after dosing. In addition, a complete 0- to 24-hour urine collection was made for the determination of urinary excretion of 5-ASA and Ac-5-ASA, starting within 30 minutes before the morning meal on day 7 until the final void scheduled for 30 minutes before the morning meal on day 8. Plasma and urine concentrations of 5-ASA and Ac-5-ASA were determined by validated methods based on liquid chromatography with tandem mass spectrometry (LC-MS/MS). The bioanalytical methods used for the quantitation of the plasma and the urine samples were validated, and quality control and calibration standard data were accepted in accordance with the U.S. Food and Drug Administration (FDA) guidance for bioanalytical method validation.14 Pharmacokinetic parameters were determined from the plasma concentration–time data by non-compartmental analysis.\nThe plasma assay ranged from 5 to 5,000 ng/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high quality control (QC) samples were analyzed along with the study samples. The QC sample concentrations of both analytes in plasma were 12.5, 2,500, and 4,000 ng/mL. The inter-day coefficient of variation (CV) values ranged from 5.0% to 7.6% for 5-ASA and from 4.8% to 10.1% for Ac-5-ASA. Accuracy (expressed as percentage of the difference of the mean value for each pool from the theoretical concentration) values ranged from −0.8% to 1.6% for 5-ASA and from 2.4% to 5.2% for Ac-5-ASA. In 12 out of 14 analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in plasma were each 10,000 ng/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control plasma. 
The inter-day CV value was 5.5% for 5-ASA and 6.1% for Ac-5-ASA, and the accuracy values were 4.0% and 4.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from plasma were shown during assay validation to be >95%.\nThe urine assay ranged from 5 to 1,000 µg/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high QC samples were analyzed along with the study samples. The QC sample concentrations of both analytes were 15, 400, and 800 µg/mL. The inter-day CV values ranged from 6.7% to 11.7% for 5-ASA and from 7.6% to 10.2% for Ac-5-ASA. Accuracy values ranged from −5.5% to −1.3% for 5-ASA and from 1.4% to 4.0% for Ac-5-ASA. In three out of nine analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in urine were each 2,000 µg/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control urine. The inter-day CV value was 7.8% for 5-ASA and 7.2% for Ac-5-ASA, and the accuracy values were −2.5% and 2.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from urine were shown during assay validation to be >90%.\nA total of 52 plasma samples and six urine samples were re-analyzed to assess incurred sample reproducibility. Results for 100% of these samples were found to be within ±20% of the mean of the original and re-assay results, meeting the predefined acceptance criteria.\nBlood samples (2 mL) for the measurement of 5-ASA and Ac-5-ASA plasma concentrations were taken pre-dose on days 5 and 6 and at the following time points on day 7: pre-dose, and 2, 4, 6, 9, 12, 16, and 24 hours after dosing. In addition, a complete 0- to 24-hour urine collection was made for the determination of urinary excretion of 5-ASA and Ac-5-ASA, starting within 30 minutes before the morning meal on day 7 until the final void scheduled for 30 minutes before the morning meal on day 8. Plasma and urine concentrations of 5-ASA and Ac-5-ASA were determined by validated methods based on liquid chromatography with tandem mass spectrometry (LC-MS/MS). The bioanalytical methods used for the quantitation of the plasma and the urine samples were validated, and quality control and calibration standard data were accepted in accordance with the U.S. Food and Drug Administration (FDA) guidance for bioanalytical method validation.14 Pharmacokinetic parameters were determined from the plasma concentration–time data by non-compartmental analysis.\nThe plasma assay ranged from 5 to 5,000 ng/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high quality control (QC) samples were analyzed along with the study samples. The QC sample concentrations of both analytes in plasma were 12.5, 2,500, and 4,000 ng/mL. The inter-day coefficient of variation (CV) values ranged from 5.0% to 7.6% for 5-ASA and from 4.8% to 10.1% for Ac-5-ASA. Accuracy (expressed as percentage of the difference of the mean value for each pool from the theoretical concentration) values ranged from −0.8% to 1.6% for 5-ASA and from 2.4% to 5.2% for Ac-5-ASA. In 12 out of 14 analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in plasma were each 10,000 ng/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control plasma. The inter-day CV value was 5.5% for 5-ASA and 6.1% for Ac-5-ASA, and the accuracy values were 4.0% and 4.0%, respectively. 
Recoveries of 5-ASA and Ac-5-ASA from plasma were shown during assay validation to be >95%.\nThe urine assay ranged from 5 to 1,000 µg/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high QC samples were analyzed along with the study samples. The QC sample concentrations of both analytes were 15, 400, and 800 µg/mL. The inter-day CV values ranged from 6.7% to 11.7% for 5-ASA and from 7.6% to 10.2% for Ac-5-ASA. Accuracy values ranged from −5.5% to −1.3% for 5-ASA and from 1.4% to 4.0% for Ac-5-ASA. In three out of nine analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in urine were each 2,000 µg/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control urine. The inter-day CV value was 7.8% for 5-ASA and 7.2% for Ac-5-ASA, and the accuracy values were −2.5% and 2.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from urine were shown during assay validation to be >90%.\nA total of 52 plasma samples and six urine samples were re-analyzed to assess incurred sample reproducibility. Results for 100% of these samples were found to be within ±20% of the mean of the original and re-assay results, meeting the predefined acceptance criteria.\n Safety Safety was evaluated by reported adverse events (AEs) at each study visit and while the subject was on-site, and included assessment of clinical laboratory parameters, physical examination findings, vital signs, and 12-lead electrocardiogram. Safety analyses were performed on all randomized subjects who took ≥1 dose of study drug and had ≥1 post-dose safety assessment (safety analysis set). Safety data were summarized by treatment group and by treatment group stratified by age (5–12 years and 13–17 years).\nSafety was evaluated by reported adverse events (AEs) at each study visit and while the subject was on-site, and included assessment of clinical laboratory parameters, physical examination findings, vital signs, and 12-lead electrocardiogram. Safety analyses were performed on all randomized subjects who took ≥1 dose of study drug and had ≥1 post-dose safety assessment (safety analysis set). Safety data were summarized by treatment group and by treatment group stratified by age (5–12 years and 13–17 years).\n Sample size It was anticipated that up to 60 subjects would be needed for screening to enroll 45 subjects. Thirty subjects were required to complete the study and, per agreement with the FDA, a minimum of six subjects were to be assigned to each age group (5–12 years and 13–17 years), as well as a minimum of six subjects per dose level (30, 60, and 100 mg/kg/day).\nIt was anticipated that up to 60 subjects would be needed for screening to enroll 45 subjects. Thirty subjects were required to complete the study and, per agreement with the FDA, a minimum of six subjects were to be assigned to each age group (5–12 years and 13–17 years), as well as a minimum of six subjects per dose level (30, 60, and 100 mg/kg/day).\n Non-compartmental pharmacokinetic analysis Pharmacokinetic parameters were determined (WinNonlin 5.2; Pharsight Corporation, Mountain View, CA, USA) for 5-ASA and Ac-5-ASA for all subjects in the safety analysis set who generated sufficient plasma samples to allow reliable determination of maximum concentration (Cmax,ss) and area under the curve for the defined interval between doses (AUCss; tau=24 h) at steady state (ie, the pharmacokinetic set). 
All calculations were based on actual sampling times. Pharmacokinetic parameters that were derived based on 5-ASA and/or Ac-5-ASA concentrations, as appropriate, included AUCss, Cmax,ss, time of maximum observed concentration sampled during a dosing interval (tmax), cumulative amount recovered in urine in time interval 0–24 hours (Xu0–24h), clearance from the blood by the kidneys (CLR), metabolic ratio (Ac-5-ASA:5-ASA) calculated using Cmax,ss (MRCmax,ss), metabolic ratio (Ac-5-ASA:5-ASA) calculated using AUCss (MRAUCss), and percentage of the dose absorbed, calculated as:\n% of dose absorbed = (Xu0–24h,5-ASA + [0.7847 × Xu0–24h,Ac-5-ASA]) / Dose × 100, where 0.7847 is the ratio of the molecular weight of 5-ASA (153.14) to the molecular weight of Ac-5-ASA (195.15).\nAUC values were calculated using the linear trapezoidal method when concentrations were increasing, and the logarithmic trapezoidal method when concentrations were decreasing. No inferential statistical analyses were conducted on the pharmacokinetic data. Summary statistics were presented by treatment group and by treatment group stratified by age (5–12 years, 13–17 years) for all pharmacokinetic parameters. Achievement of steady state was assessed by visual inspection of pre-dose plasma concentrations on days 5, 6, and 7. Systemic exposure (Cmax,ss and AUCss) was assessed by comparison with historical data in healthy adult subjects administered multimatrix mesalamine 2,400 or 4,800 mg/day.15,16\nPharmacokinetic parameters were determined (WinNonlin 5.2; Pharsight Corporation, Mountain View, CA, USA) for 5-ASA and Ac-5-ASA for all subjects in the safety analysis set who generated sufficient plasma samples to allow reliable determination of maximum concentration (Cmax,ss) and area under the curve for the defined interval between doses (AUCss; tau=24 h) at steady state (ie, the pharmacokinetic set). All calculations were based on actual sampling times. Pharmacokinetic parameters that were derived based on 5-ASA and/or Ac-5-ASA concentrations, as appropriate, included AUCss, Cmax,ss, time of maximum observed concentration sampled during a dosing interval (tmax), cumulative amount recovered in urine in time interval 0–24 hours (Xu0–24h), clearance from the blood by the kidneys (CLR), metabolic ratio (Ac-5-ASA:5-ASA) calculated using Cmax,ss (MRCmax,ss), metabolic ratio (Ac-5-ASA:5-ASA) calculated using AUCss (MRAUCss), and percentage of the dose absorbed, calculated as:\n% of dose absorbed = (Xu0–24h,5-ASA + [0.7847 × Xu0–24h,Ac-5-ASA]) / Dose × 100, where 0.7847 is the ratio of the molecular weight of 5-ASA (153.14) to the molecular weight of Ac-5-ASA (195.15).\nAUC values were calculated using the linear trapezoidal method when concentrations were increasing, and the logarithmic trapezoidal method when concentrations were decreasing. No inferential statistical analyses were conducted on the pharmacokinetic data. Summary statistics were presented by treatment group and by treatment group stratified by age (5–12 years, 13–17 years) for all pharmacokinetic parameters. Achievement of steady state was assessed by visual inspection of pre-dose plasma concentrations on days 5, 6, and 7. 
Systemic exposure (Cmax,ss and AUCss) was assessed by comparison with historical data in healthy adult subjects administered multimatrix mesalamine 2,400 or 4,800 mg/day.15,16\n Population pharmacokinetic analysis A population pharmacokinetic model was developed using non-linear mixed effects modeling (NONMEM® program, Version 7.2.0; ICON, Ellicott City, MD, USA) to describe the population variability in 5-ASA/Ac-5-ASA pharmacokinetics and the relationship between pharmacokinetic parameters and potential explanatory covariates (eg, age, weight, and sex). Pharmacokinetic parameters were estimated using Monte-Carlo Importance Sampling Expectation Maximization method with “Mu Referencing”.17 Development of the population pharmacokinetic model consisted of building a base model, followed by development of a covariate model using an interim data cut (40 subjects); the final model was updated with data from an additional 12 subjects from the study. Structural model selection was data driven, based on goodness-of-fit plots (observed vs predicted concentration, conditional weighted residual vs predicted concentration or time, histograms of individual random effects, etc), successful convergence, plausibility and precision of parameter estimates, and the minimum objective function value. Missing drug concentrations and concentrations reported as “not quantifiable” were excluded in the analysis. The final pharmacokinetic model was evaluated using visual predictive check (VPC), and this model was used to simulate the expected 5-ASA and Ac-5-ASA plasma concentration profiles in a broader population of children and adolescents using Trial Simulator (Version 2.2.1, Pharsight Corporation). Data presentation and construction of plots were performed using S-PLUS (Version 8.1; Tibco Software Inc., Palo Alto, CA, USA). The simulated exposures were compared to historical adult exposures.15,16\nA population pharmacokinetic model was developed using non-linear mixed effects modeling (NONMEM® program, Version 7.2.0; ICON, Ellicott City, MD, USA) to describe the population variability in 5-ASA/Ac-5-ASA pharmacokinetics and the relationship between pharmacokinetic parameters and potential explanatory covariates (eg, age, weight, and sex). Pharmacokinetic parameters were estimated using Monte-Carlo Importance Sampling Expectation Maximization method with “Mu Referencing”.17 Development of the population pharmacokinetic model consisted of building a base model, followed by development of a covariate model using an interim data cut (40 subjects); the final model was updated with data from an additional 12 subjects from the study. Structural model selection was data driven, based on goodness-of-fit plots (observed vs predicted concentration, conditional weighted residual vs predicted concentration or time, histograms of individual random effects, etc), successful convergence, plausibility and precision of parameter estimates, and the minimum objective function value. Missing drug concentrations and concentrations reported as “not quantifiable” were excluded in the analysis. The final pharmacokinetic model was evaluated using visual predictive check (VPC), and this model was used to simulate the expected 5-ASA and Ac-5-ASA plasma concentration profiles in a broader population of children and adolescents using Trial Simulator (Version 2.2.1, Pharsight Corporation). Data presentation and construction of plots were performed using S-PLUS (Version 8.1; Tibco Software Inc., Palo Alto, CA, USA). 
The simulated exposures were compared to historical adult exposures.15,16", "This Phase I, multicenter, randomized, open-label, three-arm study was conducted in 12 sites across three countries (United States, Poland, and Slovakia). Children and adolescents (aged 5–17 years) with a diagnosis of UC were randomly assigned to receive multimatrix mesalamine 30, 60, or 100 mg/kg once daily (up to a maximum of 4,800 mg/day) each morning for 7 days. To achieve these doses in children, smaller-sized 300 and 600 mg tablets were developed for this study to augment the existing approved 1,200 mg multimatrix mesalamine tablet. Total daily doses for the study ranged from 900 to 4,800 mg/day. Randomization was stratified by body weight (18–24, 25–49, and 50–82 kg); subjects weighing 18–24 kg were only randomized to 60 and 100 mg/kg/day groups to avoid receiving doses less than 900 mg, and subjects weighing 50–82 kg were only randomized to 30 and 60 mg/kg/day groups to avoid receiving doses greater than 4,800 mg, the maximum approved dose in adult subjects. Randomization was achieved via use of a randomization number allocated prior to dosing, once eligibility had been determined, and a randomization schedule was produced by an interactive voice response system vendor. Subjects were dosed with multimatrix mesalamine every morning for 7 days, at home on days 1–4, and on-site on days 5–7. On day 7, blood and urine pharmacokinetic samples were collected and safety assessments were performed until 24 hours post-dose. The study protocol, protocol amendments, informed consent documents, relevant supporting information, and subject recruitment information were submitted to and approved by the respective independent ethics committees, institutional review boards, and regulatory agencies prior to study initiation. The independent ethics committees, institutional review boards, and regulatory agencies are from the following: United States: Arkansas Children’s Hospital, Advanced Clinical Research Institute, University of California, San Francisco, Connecticut Children’s Medical Center, University of Maryland Medical Center for Children, and Mayo Clinic; Australia: Royal Children’s Hospital Melbourne; Poland: Klinika Pediatrii Gastroenterologii i Zywienia, Uniwersytecki Szpital Dzieciecy w Krakowie, Klinika Pediatrii Dzieciecy Szpital Kliniczny im prof Antoniego Gebali, Kliniczny Oddzial Pediatrii z Pododdzialem Neurologii Dzieciecej Szpital Wojewodzki, Klinika Gastroenterolofii Pediatrii, Instytut Centrum Zdrowia Matki Polki, Oddzial Gastroenterologii i Hepatologii, and Instytut Pomnik-Centrum Zdrowia Dziecka; Slovakia: Gastroenterologicka ambulancia, Univerzitna nemocnica Martin, and DFNsP Banska Bystrica; and the United Kingdom: Alder Hey Children’s NHS Foundation Trust, Barts Health NHS Trust/Royal London Hospital, Somers Clinical Research Facility/Great Ormond Street Hospital, Southampton General Hospital. Further details are available at https://clinicaltrials.gov/ct2/show/NCT01130844. All authors had access to the study data, and reviewed and approved the final manuscript.", "Children (aged 5–12 years) and adolescents (aged 13–17 years), weighing 18–82 kg, with a diagnosis of UC ≥3 months prior to the first dose of study drug were enrolled. Subjects already on a 5-ASA product had to have been on a stable regimen for ≥4 weeks prior to the first dose of study drug. In addition to each subject documenting assent, the subject’s parent or legal representative had to provide informed consent. 
Subjects with current or recurrent disease (other than UC) that could affect the colon or the action, absorption, or disposition of the study drug were excluded. Additional exclusion criteria included: UC confined to the rectum; allergy, hypersensitivity, or poor tolerability to salicylates or aminosalicylates; history of hepatic or renal impairment, pancreatitis, or Reyes syndrome; use of another investigational product ≤30 days prior to the first dose of study drug; serious, severe, or unstable psychiatric or physical illness; positive urine screen for drugs or abuse of alcohol; and pregnancy or lactation in female subjects.", "Blood samples (2 mL) for the measurement of 5-ASA and Ac-5-ASA plasma concentrations were taken pre-dose on days 5 and 6 and at the following time points on day 7: pre-dose, and 2, 4, 6, 9, 12, 16, and 24 hours after dosing. In addition, a complete 0- to 24-hour urine collection was made for the determination of urinary excretion of 5-ASA and Ac-5-ASA, starting within 30 minutes before the morning meal on day 7 until the final void scheduled for 30 minutes before the morning meal on day 8. Plasma and urine concentrations of 5-ASA and Ac-5-ASA were determined by validated methods based on liquid chromatography with tandem mass spectrometry (LC-MS/MS). The bioanalytical methods used for the quantitation of the plasma and the urine samples were validated, and quality control and calibration standard data were accepted in accordance with the U.S. Food and Drug Administration (FDA) guidance for bioanalytical method validation.14 Pharmacokinetic parameters were determined from the plasma concentration–time data by non-compartmental analysis.\nThe plasma assay ranged from 5 to 5,000 ng/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high quality control (QC) samples were analyzed along with the study samples. The QC sample concentrations of both analytes in plasma were 12.5, 2,500, and 4,000 ng/mL. The inter-day coefficient of variation (CV) values ranged from 5.0% to 7.6% for 5-ASA and from 4.8% to 10.1% for Ac-5-ASA. Accuracy (expressed as percentage of the difference of the mean value for each pool from the theoretical concentration) values ranged from −0.8% to 1.6% for 5-ASA and from 2.4% to 5.2% for Ac-5-ASA. In 12 out of 14 analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in plasma were each 10,000 ng/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control plasma. The inter-day CV value was 5.5% for 5-ASA and 6.1% for Ac-5-ASA, and the accuracy values were 4.0% and 4.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from plasma were shown during assay validation to be >95%.\nThe urine assay ranged from 5 to 1,000 µg/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high QC samples were analyzed along with the study samples. The QC sample concentrations of both analytes were 15, 400, and 800 µg/mL. The inter-day CV values ranged from 6.7% to 11.7% for 5-ASA and from 7.6% to 10.2% for Ac-5-ASA. Accuracy values ranged from −5.5% to −1.3% for 5-ASA and from 1.4% to 4.0% for Ac-5-ASA. In three out of nine analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in urine were each 2,000 µg/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control urine. 
The inter-day CV value was 7.8% for 5-ASA and 7.2% for Ac-5-ASA, and the accuracy values were −2.5% and 2.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from urine were shown during assay validation to be >90%.\nA total of 52 plasma samples and six urine samples were re-analyzed to assess incurred sample reproducibility. Results for 100% of these samples were found to be within ±20% of the mean of the original and re-assay results, meeting the predefined acceptance criteria.", "Safety was evaluated by reported adverse events (AEs) at each study visit and while the subject was on-site, and included assessment of clinical laboratory parameters, physical examination findings, vital signs, and 12-lead electrocardiogram. Safety analyses were performed on all randomized subjects who took ≥1 dose of study drug and had ≥1 post-dose safety assessment (safety analysis set). Safety data were summarized by treatment group and by treatment group stratified by age (5–12 years and 13–17 years).", "It was anticipated that up to 60 subjects would be needed for screening to enroll 45 subjects. Thirty subjects were required to complete the study and, per agreement with the FDA, a minimum of six subjects were to be assigned to each age group (5–12 years and 13–17 years), as well as a minimum of six subjects per dose level (30, 60, and 100 mg/kg/day).", "Pharmacokinetic parameters were determined (WinNonlin 5.2; Pharsight Corporation, Mountain View, CA, USA) for 5-ASA and Ac-5-ASA for all subjects in the safety analysis set who generated sufficient plasma samples to allow reliable determination of maximum concentration (Cmax,ss) and area under the curve for the defined interval between doses (AUCss; tau=24 h) at steady state (ie, the pharmacokinetic set). All calculations were based on actual sampling times. Pharmacokinetic parameters that were derived based on 5-ASA and/or Ac-5-ASA concentrations, as appropriate, included AUCss, Cmax,ss, time of maximum observed concentration sampled during a dosing interval (tmax), cumulative amount recovered in urine in time interval 0–24 hours (Xu0–24h), clearance from the blood by the kidneys (CLR), metabolic ratio (Ac-5-ASA:5-ASA) calculated using Cmax,ss (MRCmax,ss), metabolic ratio (Ac-5-ASA:5-ASA) calculated using AUCss (MRAUCss), and percentage of the dose absorbed, calculated as:\n% of dose absorbed = ([Xu0–24h(5-ASA) + (0.7847 × Xu0–24h(Ac-5-ASA))] / Dose) × 100, where 0.7847 is the ratio of the molecular weight of 5-ASA (153.14) to the molecular weight of Ac-5-ASA (195.15).\nAUC values were calculated using the linear trapezoidal method when concentrations were increasing, and the logarithmic trapezoidal method when concentrations were decreasing. No inferential statistical analyses were conducted on the pharmacokinetic data. Summary statistics were presented by treatment group and by treatment group stratified by age (5–12 years, 13–17 years) for all pharmacokinetic parameters. Achievement of steady state was assessed by visual inspection of pre-dose plasma concentrations on days 5, 6, and 7. Systemic exposure (Cmax,ss and AUCss) was assessed by comparison with historical data in healthy adult subjects administered multimatrix mesalamine 2,400 or 4,800 mg/day.15,16", "Inter-assay accuracy and precision data for 5-ASA and Ac-5-ASA in the QC samples at three concentrations each in human plasma and urine across all analytical batches are shown in Table S1.
For 5-ASA, mean plasma concentration–time profiles attained maxima at ~6 or 9 hours post-dose, with a secondary peak at 24 hours post-dose (Figure 2A; Table 2). Median tmax was 6 and 9 hours (range 0–24 hours for all dose levels) post-dose for 30 and 60 mg/kg/day doses, respectively. For 100 mg/kg/day, median tmax was approximately 2 hours post-dose; however, there were only nine subjects at 100 mg/kg/day. Steady-state plasma concentrations for 5-ASA were attained by day 5 for all doses. On day 7, systemic exposure to 5-ASA (mean AUCss and Cmax,ss) increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. Based on urinary recovery, the mean percentages of 5-ASA absorbed from multimatrix mesalamine were 29.4%, 27.0%, and 22.1% for 30, 60, and 100 mg/kg/day doses, respectively. High between-subject variability was noted for 5-ASA AUCss and Cmax,ss, with arithmetic CV values ranging from 36% to 52% and 52% to 60%, respectively. Mean CLR ranged from 5.0 to 6.5 L/h, with a trend toward decreasing with increasing dose.\nMean plasma concentration–time profiles of plasma Ac-5-ASA were similar to those of the parent drug 5-ASA (Figure 2B; Table 2), with median tmax of 9, 7.5, and 2 hours post-dose for 30, 60, and 100 mg/kg/day doses, respectively. Steady-state plasma concentrations for Ac-5-ASA were attained by day 5 for all doses. On day 7, as for 5-ASA, systemic exposure to Ac-5-ASA increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. The metabolite Ac-5-ASA was more abundant than the parent drug (Table 2; MRAUCss of Ac-5-ASA:5-ASA), with no apparent trend across dose. Mean CLR ranged from 10.0 to 16.2 L/h, with a trend toward decreasing with increasing dose. Moderate to high between-subject variability was noted for AUCss and Cmax,ss, with arithmetic CV values ranging from 35% to 44% and 40% to 59%, respectively. There was no apparent difference in 5-ASA or Ac-5-ASA systemic exposure, as measured by mean AUCss and Cmax,ss, between children (aged 5–12 years) and adolescents (aged 13–17 years) for this weight-based dosing paradigm (data not shown).", "The pharmacokinetics of 5-ASA and Ac-5-ASA were adequately described by the population pharmacokinetic structural model (Figure S1) that included: first-order absorption from two depot compartments, absorption lag times, and separate central compartments for 5-ASA and Ac-5-ASA with respective urine compartments for renal clearance. Non-renal clearance of 5-ASA was assumed to involve only metabolism to Ac-5-ASA, and all elimination processes were based on the first-order kinetics. Allometric scaling by body weight was applied to all clearance and volume parameters, with the exponents fixed to the theoretical value of 0.75 for clearance and 1 for volume parameters.18 Parameter estimates for the final model are shown in Table S2; following evaluation by VPC, less than 9% of the observed concentrations fell outside the 90% prediction intervals for both analytes, suggesting that the final model adequately described the observed data. For a 70 kg individual, the typical value of 5-ASA apparent renal clearance was estimated to be 1.15 L/h (95% confidence interval [CI]: 1.01–1.32 L/h), and apparent metabolic clearance was estimated to be 85.6 L/h (95% CI: 75.9–96.5 L/h). 
The typical value of Ac-5-ASA apparent renal clearance was estimated to be 2.54 L/h (95% CI: 2.27–2.86 L/h), and apparent non-renal clearance was estimated to be 74.4 L/h (95% CI: 66.7–83.1 L/h). Typical estimates of the central volume of distribution were 109 L (95% CI: 70.1–169 L) for 5-ASA and 7.10 L (95% CI: 5.31–9.49 L) for Ac-5-ASA. Estimated absorption rates from depot 1 and depot 3 were 0.0334 h−1 (95% CI: 0.0207–0.0539 h−1) and 0.273 h−1 (95% CI: 0.165–0.448 h−1). Corresponding estimated absorption lag times from each depot were 5.10 hours (95% CI: 4.31–6.05 hours) and 15.0 hours (95% CI: 14.0–16.1 hours); the lag time from depot 3 represented the additional lag following delay in absorption from depot 1. The fraction of dose absorbed from depot 1 was estimated to be 0.734 (95% CI: 0.413–1.06), and remaining dose fractions were assumed to be absorbed from depot 3. Goodness-of-fit plots for 5-ASA and Ac-5-ASA plasma concentrations are shown in Figures S2 and S3.\nThe population pharmacokinetic model was used to simulate steady-state profiles for both 5-ASA and Ac-5-ASA for four weight groups (18–23, 24–35, 36–50, and 51–90 kg) at planned high doses (1,800, 2,400, 3,600, and 4,800 mg) and low doses (900, 1,200, 1,800, and 2,400 mg) in 80 subjects with 1,000 replications (Figure 3A–D; Tables S3 and S4). The variability in the predicted steady-state exposures for children and adolescents was approximately 50% (CV%) for 5-ASA AUC (Table S3) and 40%–45% for Ac-5-ASA AUC (Table S4). The proposed low dose of multimatrix mesalamine for each weight category is predicted to provide comparable steady-state exposure for both 5-ASA and Ac-5 -ASA to those observed following administration of a fixed 2,400 mg dose in the adult population (Figure 3A–D). The proposed high dose for each weight category is predicted to provide comparable steady-state AUC for both 5-ASA and Ac-5-ASA to those observed following administration of a fixed 4,800 mg dose in the adult population.", "The incidence of treatment-emergent adverse events (TEAEs) was 19.2% (ten subjects overall); all TEAEs were mild to moderate (Table 3). Incidence rates were similar among different dose groups, and the most commonly reported TEAEs were abdominal pain, musculoskeletal pain, and headache, each reported in 3.8% of subjects (n=2 each). There were no TEAEs leading to premature discontinuation. Two subjects (3.8%) experienced a TEAE considered by the investigator to be related to the study drug: one subject in the 30 mg/kg/day dose group experienced abdominal pain, dehydration, and vomiting, and one subject in the 60 mg/kg/day dose group experienced moderate upper abdominal pain. No relevant differences between the age groups were observed with regard to the occurrence, severity, or relatedness of TEAEs, and no clinically relevant abnormalities in biochemistry, hematology, urinalysis, or vital sign values were observed. No new safety signals were reported." ]
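A short illustration of the urinary-recovery arithmetic used in the non-compartmental analysis above: the percentage of the dose absorbed combines the 0–24 hour urinary recoveries of 5-ASA and Ac-5-ASA via the molecular-weight ratio 0.7847, and the metabolic ratio is simply the Ac-5-ASA value over the 5-ASA value. The Python sketch below is illustrative only; the function names and the example recovery values are assumptions, not study data or the WinNonlin workflow actually used.

```python
# Illustrative sketch of the urinary-recovery calculation described above.
# Function names and example values are placeholders, not study outputs.

MW_5ASA = 153.14    # g/mol, 5-aminosalicylic acid
MW_AC5ASA = 195.15  # g/mol, N-acetyl-5-ASA
MW_RATIO = MW_5ASA / MW_AC5ASA  # ~0.7847, converts Ac-5-ASA amounts to 5-ASA equivalents

def percent_dose_absorbed(xu_5asa_mg: float, xu_ac5asa_mg: float, dose_mg: float) -> float:
    """Percentage of the dose absorbed, from 0-24 h urinary recovery (mg)."""
    return (xu_5asa_mg + MW_RATIO * xu_ac5asa_mg) / dose_mg * 100.0

def metabolic_ratio(ac5asa_value: float, parent_value: float) -> float:
    """Ac-5-ASA:5-ASA ratio, applicable to either Cmax,ss or AUCss."""
    return ac5asa_value / parent_value

# Hypothetical example: a 2,400 mg dose with 180 mg 5-ASA and 560 mg Ac-5-ASA recovered in urine
print(round(percent_dose_absorbed(180.0, 560.0, 2400.0), 1))  # ~25.8 (% of dose absorbed)
```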
[ "methods", "methods", "methods", null, null, null, "methods", null, null, null ]
[ "Introduction", "Methods", "Study design", "Study population", "Pharmacokinetic evaluations", "Safety", "Sample size", "Non-compartmental pharmacokinetic analysis", "Population pharmacokinetic analysis", "Results", "Subjects", "Non-compartmental pharmacokinetics", "Population pharmacokinetics", "Safety", "Discussion", "Supplementary materials" ]
[ "Ulcerative colitis (UC) is a chronic inflammatory disease of the colon and rectum distinguished by cycles of remission and relapse over the life of the subject.1,2 While the incidence of UC peaks around early adulthood, onset of the disorder can occur from early childhood through adulthood.3 UC is one of the more prevalent chronic diseases in children, with an incidence rate of 2.1 per 100,000 children within the United States.4\nThe primary goals in UC management are induction and maintenance of remission to improve subjects’ health and quality of life.5 Oral 5-aminosalicylic acid (5-ASA) formulations, such as multimatrix mesalamine (Shire US Inc., Wayne, PA, USA), have proven effective in the induction and maintenance of UC remission,6–10 and are recommended first-line therapy for adults with active mild-to-moderate UC.5,6 5-ASAs are also commonly used as the standard of care first-line therapy in pediatric UC. However, while studies that support 5-ASA use in adult subjects with UC are abundant,11 evidence on the efficacy and safety of 5-ASA in pediatric UC subjects is less substantial, with only a few randomized, controlled clinical studies for the induction or maintenance of remission by 5-ASA in pediatric subjects having been conducted.12,13 To date, no 5-ASA product has been licensed for maintenance of remission of UC in children.\nAs the first step in a program evaluating multimatrix mesalamine in pediatric UC, the primary objective of this randomized Phase I study (ClinicalTrials.gov Identifier: NCT01130844) was to assess the pharmacokinetics of 5-ASA and its major metabolite acetyl-5-ASA (Ac-5-ASA) after administration of once-daily multimatrix mesalamine at three different doses (30, 60, or 100 mg/kg/day) for 7 days in children and adolescents diagnosed with UC. Secondary objectives included examining the safety and tolerability of multimatrix mesalamine at these doses in children and adolescents with UC, and evaluating the extent of absorption of 5-ASA from multimatrix mesalamine at steady state.", " Study design This Phase I, multicenter, randomized, open-label, three-arm study was conducted in 12 sites across three countries (United States, Poland, and Slovakia). Children and adolescents (aged 5–17 years) with a diagnosis of UC were randomly assigned to receive multimatrix mesalamine 30, 60, or 100 mg/kg once daily (up to a maximum of 4,800 mg/day) each morning for 7 days. To achieve these doses in children, smaller-sized 300 and 600 mg tablets were developed for this study to augment the existing approved 1,200 mg multimatrix mesalamine tablet. Total daily doses for the study ranged from 900 to 4,800 mg/day. Randomization was stratified by body weight (18–24, 25–49, and 50–82 kg); subjects weighing 18–24 kg were only randomized to 60 and 100 mg/kg/day groups to avoid receiving doses less than 900 mg, and subjects weighing 50–82 kg were only randomized to 30 and 60 mg/kg/day groups to avoid receiving doses greater than 4,800 mg, the maximum approved dose in adult subjects. Randomization was achieved via use of a randomization number allocated prior to dosing, once eligibility had been determined, and a randomization schedule was produced by an interactive voice response system vendor. Subjects were dosed with multimatrix mesalamine every morning for 7 days, at home on days 1–4, and on-site on days 5–7. On day 7, blood and urine pharmacokinetic samples were collected and safety assessments were performed until 24 hours post-dose. 
The study protocol, protocol amendments, informed consent documents, relevant supporting information, and subject recruitment information were submitted to and approved by the respective independent ethics committees, institutional review boards, and regulatory agencies prior to study initiation. The independent ethics committees, institutional review boards, and regulatory agencies are from the following: United States: Arkansas Children’s Hospital, Advanced Clinical Research Institute, University of California, San Francisco, Connecticut Children’s Medical Center, University of Maryland Medical Center for Children, and Mayo Clinic; Australia: Royal Children’s Hospital Melbourne; Poland: Klinika Pediatrii Gastroenterologii i Zywienia, Uniwersytecki Szpital Dzieciecy w Krakowie, Klinika Pediatrii Dzieciecy Szpital Kliniczny im prof Antoniego Gebali, Kliniczny Oddzial Pediatrii z Pododdzialem Neurologii Dzieciecej Szpital Wojewodzki, Klinika Gastroenterolofii Pediatrii, Instytut Centrum Zdrowia Matki Polki, Oddzial Gastroenterologii i Hepatologii, and Instytut Pomnik-Centrum Zdrowia Dziecka; Slovakia: Gastroenterologicka ambulancia, Univerzitna nemocnica Martin, and DFNsP Banska Bystrica; and the United Kingdom: Alder Hey Children’s NHS Foundation Trust, Barts Health NHS Trust/Royal London Hospital, Somers Clinical Research Facility/Great Ormond Street Hospital, Southampton General Hospital. Further details are available at https://clinicaltrials.gov/ct2/show/NCT01130844. All authors had access to the study data, and reviewed and approved the final manuscript.\nThis Phase I, multicenter, randomized, open-label, three-arm study was conducted in 12 sites across three countries (United States, Poland, and Slovakia). Children and adolescents (aged 5–17 years) with a diagnosis of UC were randomly assigned to receive multimatrix mesalamine 30, 60, or 100 mg/kg once daily (up to a maximum of 4,800 mg/day) each morning for 7 days. To achieve these doses in children, smaller-sized 300 and 600 mg tablets were developed for this study to augment the existing approved 1,200 mg multimatrix mesalamine tablet. Total daily doses for the study ranged from 900 to 4,800 mg/day. Randomization was stratified by body weight (18–24, 25–49, and 50–82 kg); subjects weighing 18–24 kg were only randomized to 60 and 100 mg/kg/day groups to avoid receiving doses less than 900 mg, and subjects weighing 50–82 kg were only randomized to 30 and 60 mg/kg/day groups to avoid receiving doses greater than 4,800 mg, the maximum approved dose in adult subjects. Randomization was achieved via use of a randomization number allocated prior to dosing, once eligibility had been determined, and a randomization schedule was produced by an interactive voice response system vendor. Subjects were dosed with multimatrix mesalamine every morning for 7 days, at home on days 1–4, and on-site on days 5–7. On day 7, blood and urine pharmacokinetic samples were collected and safety assessments were performed until 24 hours post-dose. The study protocol, protocol amendments, informed consent documents, relevant supporting information, and subject recruitment information were submitted to and approved by the respective independent ethics committees, institutional review boards, and regulatory agencies prior to study initiation. 
The independent ethics committees, institutional review boards, and regulatory agencies are from the following: United States: Arkansas Children’s Hospital, Advanced Clinical Research Institute, University of California, San Francisco, Connecticut Children’s Medical Center, University of Maryland Medical Center for Children, and Mayo Clinic; Australia: Royal Children’s Hospital Melbourne; Poland: Klinika Pediatrii Gastroenterologii i Zywienia, Uniwersytecki Szpital Dzieciecy w Krakowie, Klinika Pediatrii Dzieciecy Szpital Kliniczny im prof Antoniego Gebali, Kliniczny Oddzial Pediatrii z Pododdzialem Neurologii Dzieciecej Szpital Wojewodzki, Klinika Gastroenterolofii Pediatrii, Instytut Centrum Zdrowia Matki Polki, Oddzial Gastroenterologii i Hepatologii, and Instytut Pomnik-Centrum Zdrowia Dziecka; Slovakia: Gastroenterologicka ambulancia, Univerzitna nemocnica Martin, and DFNsP Banska Bystrica; and the United Kingdom: Alder Hey Children’s NHS Foundation Trust, Barts Health NHS Trust/Royal London Hospital, Somers Clinical Research Facility/Great Ormond Street Hospital, Southampton General Hospital. Further details are available at https://clinicaltrials.gov/ct2/show/NCT01130844. All authors had access to the study data, and reviewed and approved the final manuscript.\n Study population Children (aged 5–12 years) and adolescents (aged 13–17 years), weighing 18–82 kg, with a diagnosis of UC ≥3 months prior to the first dose of study drug were enrolled. Subjects already on a 5-ASA product had to have been on a stable regimen for ≥4 weeks prior to the first dose of study drug. In addition to each subject documenting assent, the subject’s parent or legal representative had to provide informed consent. Subjects with current or recurrent disease (other than UC) that could affect the colon or the action, absorption, or disposition of the study drug were excluded. Additional exclusion criteria included: UC confined to the rectum; allergy, hypersensitivity, or poor tolerability to salicylates or aminosalicylates; history of hepatic or renal impairment, pancreatitis, or Reyes syndrome; use of another investigational product ≤30 days prior to the first dose of study drug; serious, severe, or unstable psychiatric or physical illness; positive urine screen for drugs or abuse of alcohol; and pregnancy or lactation in female subjects.\nChildren (aged 5–12 years) and adolescents (aged 13–17 years), weighing 18–82 kg, with a diagnosis of UC ≥3 months prior to the first dose of study drug were enrolled. Subjects already on a 5-ASA product had to have been on a stable regimen for ≥4 weeks prior to the first dose of study drug. In addition to each subject documenting assent, the subject’s parent or legal representative had to provide informed consent. Subjects with current or recurrent disease (other than UC) that could affect the colon or the action, absorption, or disposition of the study drug were excluded. 
Additional exclusion criteria included: UC confined to the rectum; allergy, hypersensitivity, or poor tolerability to salicylates or aminosalicylates; history of hepatic or renal impairment, pancreatitis, or Reyes syndrome; use of another investigational product ≤30 days prior to the first dose of study drug; serious, severe, or unstable psychiatric or physical illness; positive urine screen for drugs or abuse of alcohol; and pregnancy or lactation in female subjects.\n Pharmacokinetic evaluations Blood samples (2 mL) for the measurement of 5-ASA and Ac-5-ASA plasma concentrations were taken pre-dose on days 5 and 6 and at the following time points on day 7: pre-dose, and 2, 4, 6, 9, 12, 16, and 24 hours after dosing. In addition, a complete 0- to 24-hour urine collection was made for the determination of urinary excretion of 5-ASA and Ac-5-ASA, starting within 30 minutes before the morning meal on day 7 until the final void scheduled for 30 minutes before the morning meal on day 8. Plasma and urine concentrations of 5-ASA and Ac-5-ASA were determined by validated methods based on liquid chromatography with tandem mass spectrometry (LC-MS/MS). The bioanalytical methods used for the quantitation of the plasma and the urine samples were validated, and quality control and calibration standard data were accepted in accordance with the U.S. Food and Drug Administration (FDA) guidance for bioanalytical method validation.14 Pharmacokinetic parameters were determined from the plasma concentration–time data by non-compartmental analysis.\nThe plasma assay ranged from 5 to 5,000 ng/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high quality control (QC) samples were analyzed along with the study samples. The QC sample concentrations of both analytes in plasma were 12.5, 2,500, and 4,000 ng/mL. The inter-day coefficient of variation (CV) values ranged from 5.0% to 7.6% for 5-ASA and from 4.8% to 10.1% for Ac-5-ASA. Accuracy (expressed as percentage of the difference of the mean value for each pool from the theoretical concentration) values ranged from −0.8% to 1.6% for 5-ASA and from 2.4% to 5.2% for Ac-5-ASA. In 12 out of 14 analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in plasma were each 10,000 ng/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control plasma. The inter-day CV value was 5.5% for 5-ASA and 6.1% for Ac-5-ASA, and the accuracy values were 4.0% and 4.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from plasma were shown during assay validation to be >95%.\nThe urine assay ranged from 5 to 1,000 µg/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high QC samples were analyzed along with the study samples. The QC sample concentrations of both analytes were 15, 400, and 800 µg/mL. The inter-day CV values ranged from 6.7% to 11.7% for 5-ASA and from 7.6% to 10.2% for Ac-5-ASA. Accuracy values ranged from −5.5% to −1.3% for 5-ASA and from 1.4% to 4.0% for Ac-5-ASA. In three out of nine analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in urine were each 2,000 µg/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control urine. The inter-day CV value was 7.8% for 5-ASA and 7.2% for Ac-5-ASA, and the accuracy values were −2.5% and 2.0%, respectively. 
Recoveries of 5-ASA and Ac-5-ASA from urine were shown during assay validation to be >90%.\nA total of 52 plasma samples and six urine samples were re-analyzed to assess incurred sample reproducibility. Results for 100% of these samples were found to be within ±20% of the mean of the original and re-assay results, meeting the predefined acceptance criteria.\n Safety Safety was evaluated by reported adverse events (AEs) at each study visit and while the subject was on-site, and included assessment of clinical laboratory parameters, physical examination findings, vital signs, and 12-lead electrocardiogram. Safety analyses were performed on all randomized subjects who took ≥1 dose of study drug and had ≥1 post-dose safety assessment (safety analysis set). Safety data were summarized by treatment group and by treatment group stratified by age (5–12 years and 13–17 years).\n Sample size It was anticipated that up to 60 subjects would be needed for screening to enroll 45 subjects. Thirty subjects were required to complete the study and, per agreement with the FDA, a minimum of six subjects were to be assigned to each age group (5–12 years and 13–17 years), as well as a minimum of six subjects per dose level (30, 60, and 100 mg/kg/day).\n Non-compartmental pharmacokinetic analysis Pharmacokinetic parameters were determined (WinNonlin 5.2; Pharsight Corporation, Mountain View, CA, USA) for 5-ASA and Ac-5-ASA for all subjects in the safety analysis set who generated sufficient plasma samples to allow reliable determination of maximum concentration (Cmax,ss) and area under the curve for the defined interval between doses (AUCss; tau=24 h) at steady state (ie, the pharmacokinetic set). All calculations were based on actual sampling times. Pharmacokinetic parameters that were derived based on 5-ASA and/or Ac-5-ASA concentrations, as appropriate, included AUCss, Cmax,ss, time of maximum observed concentration sampled during a dosing interval (tmax), cumulative amount recovered in urine in time interval 0–24 hours (Xu0–24h), clearance from the blood by the kidneys (CLR), metabolic ratio (Ac-5-ASA:5-ASA) calculated using Cmax,ss (MRCmax,ss), metabolic ratio (Ac-5-ASA:5-ASA) calculated using AUCss (MRAUCss), and percentage of the dose absorbed, calculated as:\n% of dose absorbed = ([Xu0–24h(5-ASA) + (0.7847 × Xu0–24h(Ac-5-ASA))] / Dose) × 100, where 0.7847 is the ratio of the molecular weight of 5-ASA (153.14) to the molecular weight of Ac-5-ASA (195.15).\nAUC values were calculated using the linear trapezoidal method when concentrations were increasing, and the logarithmic trapezoidal method when concentrations were decreasing.
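The AUC rule just stated (linear trapezoid while concentrations rise, logarithmic trapezoid while they fall) can be made concrete with a short sketch. This is a generic implementation of the method, not the WinNonlin code used in the study; the sampling times mirror the day 7 schedule, but the concentration values are invented for illustration.

```python
import math

def auc_lin_up_log_down(times, concs):
    """AUC by the linear trapezoidal rule on rising segments and the
    logarithmic trapezoidal rule on falling segments (both concentrations > 0)."""
    auc = 0.0
    for (t1, c1), (t2, c2) in zip(zip(times, concs), zip(times[1:], concs[1:])):
        dt = t2 - t1
        if c2 < c1 and c1 > 0 and c2 > 0:
            # falling segment: logarithmic trapezoid
            auc += dt * (c1 - c2) / math.log(c1 / c2)
        else:
            # rising (or flat/zero) segment: linear trapezoid
            auc += dt * (c1 + c2) / 2.0
    return auc

# Hypothetical concentrations (ng/mL) over a 24 h dosing interval, sampled per the day 7 schedule
t = [0, 2, 4, 6, 9, 12, 16, 24]
c = [120, 300, 650, 900, 820, 600, 380, 210]
print(round(auc_lin_up_log_down(t, c)))  # AUCss in ng*h/mL for this illustrative profile
```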
No inferential statistical analyses were conducted on the pharmacokinetic data. Summary statistics were presented by treatment group and by treatment group stratified by age (5–12 years, 13–17 years) for all pharmacokinetic parameters. Achievement of steady state was assessed by visual inspection of pre-dose plasma concentrations on days 5, 6, and 7. Systemic exposure (Cmax,ss and AUCss) was assessed by comparison with historical data in healthy adult subjects administered multimatrix mesalamine 2,400 or 4,800 mg/day.15,16\n Population pharmacokinetic analysis A population pharmacokinetic model was developed using non-linear mixed effects modeling (NONMEM® program, Version 7.2.0; ICON, Ellicott City, MD, USA) to describe the population variability in 5-ASA/Ac-5-ASA pharmacokinetics and the relationship between pharmacokinetic parameters and potential explanatory covariates (eg, age, weight, and sex). Pharmacokinetic parameters were estimated using Monte-Carlo Importance Sampling Expectation Maximization method with "Mu Referencing".17 Development of the population pharmacokinetic model consisted of building a base model, followed by development of a covariate model using an interim data cut (40 subjects); the final model was updated with data from an additional 12 subjects from the study. Structural model selection was data driven, based on goodness-of-fit plots (observed vs predicted concentration, conditional weighted residual vs predicted concentration or time, histograms of individual random effects, etc), successful convergence, plausibility and precision of parameter estimates, and the minimum objective function value. Missing drug concentrations and concentrations reported as "not quantifiable" were excluded in the analysis. The final pharmacokinetic model was evaluated using visual predictive check (VPC), and this model was used to simulate the expected 5-ASA and Ac-5-ASA plasma concentration profiles in a broader population of children and adolescents using Trial Simulator (Version 2.2.1, Pharsight Corporation). Data presentation and construction of plots were performed using S-PLUS (Version 8.1; Tibco Software Inc., Palo Alto, CA, USA). The simulated exposures were compared to historical adult exposures.15,16", "This Phase I, multicenter, randomized, open-label, three-arm study was conducted in 12 sites across three countries (United States, Poland, and Slovakia). Children and adolescents (aged 5–17 years) with a diagnosis of UC were randomly assigned to receive multimatrix mesalamine 30, 60, or 100 mg/kg once daily (up to a maximum of 4,800 mg/day) each morning for 7 days. To achieve these doses in children, smaller-sized 300 and 600 mg tablets were developed for this study to augment the existing approved 1,200 mg multimatrix mesalamine tablet. Total daily doses for the study ranged from 900 to 4,800 mg/day. Randomization was stratified by body weight (18–24, 25–49, and 50–82 kg); subjects weighing 18–24 kg were only randomized to 60 and 100 mg/kg/day groups to avoid receiving doses less than 900 mg, and subjects weighing 50–82 kg were only randomized to 30 and 60 mg/kg/day groups to avoid receiving doses greater than 4,800 mg, the maximum approved dose in adult subjects. Randomization was achieved via use of a randomization number allocated prior to dosing, once eligibility had been determined, and a randomization schedule was produced by an interactive voice response system vendor.
Subjects were dosed with multimatrix mesalamine every morning for 7 days, at home on days 1–4, and on-site on days 5–7. On day 7, blood and urine pharmacokinetic samples were collected and safety assessments were performed until 24 hours post-dose. The study protocol, protocol amendments, informed consent documents, relevant supporting information, and subject recruitment information were submitted to and approved by the respective independent ethics committees, institutional review boards, and regulatory agencies prior to study initiation. The independent ethics committees, institutional review boards, and regulatory agencies are from the following: United States: Arkansas Children’s Hospital, Advanced Clinical Research Institute, University of California, San Francisco, Connecticut Children’s Medical Center, University of Maryland Medical Center for Children, and Mayo Clinic; Australia: Royal Children’s Hospital Melbourne; Poland: Klinika Pediatrii Gastroenterologii i Zywienia, Uniwersytecki Szpital Dzieciecy w Krakowie, Klinika Pediatrii Dzieciecy Szpital Kliniczny im prof Antoniego Gebali, Kliniczny Oddzial Pediatrii z Pododdzialem Neurologii Dzieciecej Szpital Wojewodzki, Klinika Gastroenterolofii Pediatrii, Instytut Centrum Zdrowia Matki Polki, Oddzial Gastroenterologii i Hepatologii, and Instytut Pomnik-Centrum Zdrowia Dziecka; Slovakia: Gastroenterologicka ambulancia, Univerzitna nemocnica Martin, and DFNsP Banska Bystrica; and the United Kingdom: Alder Hey Children’s NHS Foundation Trust, Barts Health NHS Trust/Royal London Hospital, Somers Clinical Research Facility/Great Ormond Street Hospital, Southampton General Hospital. Further details are available at https://clinicaltrials.gov/ct2/show/NCT01130844. All authors had access to the study data, and reviewed and approved the final manuscript.", "Children (aged 5–12 years) and adolescents (aged 13–17 years), weighing 18–82 kg, with a diagnosis of UC ≥3 months prior to the first dose of study drug were enrolled. Subjects already on a 5-ASA product had to have been on a stable regimen for ≥4 weeks prior to the first dose of study drug. In addition to each subject documenting assent, the subject’s parent or legal representative had to provide informed consent. Subjects with current or recurrent disease (other than UC) that could affect the colon or the action, absorption, or disposition of the study drug were excluded. Additional exclusion criteria included: UC confined to the rectum; allergy, hypersensitivity, or poor tolerability to salicylates or aminosalicylates; history of hepatic or renal impairment, pancreatitis, or Reyes syndrome; use of another investigational product ≤30 days prior to the first dose of study drug; serious, severe, or unstable psychiatric or physical illness; positive urine screen for drugs or abuse of alcohol; and pregnancy or lactation in female subjects.", "Blood samples (2 mL) for the measurement of 5-ASA and Ac-5-ASA plasma concentrations were taken pre-dose on days 5 and 6 and at the following time points on day 7: pre-dose, and 2, 4, 6, 9, 12, 16, and 24 hours after dosing. In addition, a complete 0- to 24-hour urine collection was made for the determination of urinary excretion of 5-ASA and Ac-5-ASA, starting within 30 minutes before the morning meal on day 7 until the final void scheduled for 30 minutes before the morning meal on day 8. Plasma and urine concentrations of 5-ASA and Ac-5-ASA were determined by validated methods based on liquid chromatography with tandem mass spectrometry (LC-MS/MS). 
The bioanalytical methods used for the quantitation of the plasma and the urine samples were validated, and quality control and calibration standard data were accepted in accordance with the U.S. Food and Drug Administration (FDA) guidance for bioanalytical method validation.14 Pharmacokinetic parameters were determined from the plasma concentration–time data by non-compartmental analysis.\nThe plasma assay ranged from 5 to 5,000 ng/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high quality control (QC) samples were analyzed along with the study samples. The QC sample concentrations of both analytes in plasma were 12.5, 2,500, and 4,000 ng/mL. The inter-day coefficient of variation (CV) values ranged from 5.0% to 7.6% for 5-ASA and from 4.8% to 10.1% for Ac-5-ASA. Accuracy (expressed as percentage of the difference of the mean value for each pool from the theoretical concentration) values ranged from −0.8% to 1.6% for 5-ASA and from 2.4% to 5.2% for Ac-5-ASA. In 12 out of 14 analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in plasma were each 10,000 ng/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control plasma. The inter-day CV value was 5.5% for 5-ASA and 6.1% for Ac-5-ASA, and the accuracy values were 4.0% and 4.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from plasma were shown during assay validation to be >95%.\nThe urine assay ranged from 5 to 1,000 µg/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high QC samples were analyzed along with the study samples. The QC sample concentrations of both analytes were 15, 400, and 800 µg/mL. The inter-day CV values ranged from 6.7% to 11.7% for 5-ASA and from 7.6% to 10.2% for Ac-5-ASA. Accuracy values ranged from −5.5% to −1.3% for 5-ASA and from 1.4% to 4.0% for Ac-5-ASA. In three out of nine analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in urine were each 2,000 µg/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control urine. The inter-day CV value was 7.8% for 5-ASA and 7.2% for Ac-5-ASA, and the accuracy values were −2.5% and 2.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from urine were shown during assay validation to be >90%.\nA total of 52 plasma samples and six urine samples were re-analyzed to assess incurred sample reproducibility. Results for 100% of these samples were found to be within ±20% of the mean of the original and re-assay results, meeting the predefined acceptance criteria.", "Safety was evaluated by reported adverse events (AEs) at each study visit and while the subject was on-site, and included assessment of clinical laboratory parameters, physical examination findings, vital signs, and 12-lead electrocardiogram. Safety analyses were performed on all randomized subjects who took ≥1 dose of study drug and had ≥1 post-dose safety assessment (safety analysis set). Safety data were summarized by treatment group and by treatment group stratified by age (5–12 years and 13–17 years).", "It was anticipated that up to 60 subjects would be needed for screening to enroll 45 subjects. 
Thirty subjects were required to complete the study and, per agreement with the FDA, a minimum of six subjects were to be assigned to each age group (5–12 years and 13–17 years), as well as a minimum of six subjects per dose level (30, 60, and 100 mg/kg/day).", "Pharmacokinetic parameters were determined (WinNonlin 5.2; Pharsight Corporation, Mountain View, CA, USA) for 5-ASA and Ac-5-ASA for all subjects in the safety analysis set who generated sufficient plasma samples to allow reliable determination of maximum concentration (Cmax,ss) and area under the curve for the defined interval between doses (AUCss; tau=24 h) at steady state (ie, the pharmacokinetic set). All calculations were based on actual sampling times. Pharmacokinetic parameters that were derived based on 5-ASA and/or Ac-5-ASA concentrations, as appropriate, included AUCss, Cmax,ss, time of maximum observed concentration sampled during a dosing interval (tmax), cumulative amount recovered in urine in time interval 0–24 hours (Xu0–24h), clearance from the blood by the kidneys (CLR), metabolic ratio (Ac-5-ASA:5-ASA) calculated using Cmax,ss (MRCmax,ss), metabolic ratio (Ac-5-ASA:5-ASA) calculated using AUCss (MRAUCss), and percentage of the dose absorbed, calculated as:\n%of dose absorbed=(Xu0−24h5-ASA+[0.7847×Xu0−24hAc-5-ASA])Dose×100where 0.7847 is the ratio of the molecular weight of 5-ASA (153.14) to the molecular weight of Ac-5-ASA (195.15).\nAUC values were calculated using the linear trapezoidal method when concentrations were increasing, and the logarithmic trapezoidal method when concentrations were decreasing. No inferential statistical analyses were conducted on the pharmacokinetic data. Summary statistics were presented by treatment group and by treatment group stratified by age (5–12 years, 13–17 years) for all pharmacokinetic parameters. Achievement of steady state was assessed by visual inspection of pre-dose plasma concentrations on days 5, 6, and 7. Systemic exposure (Cmax,ss and AUCss) was assessed by comparison with historical data in healthy adult subjects administered multimatrix mesalamine 2,400 or 4,800 mg/day.15,16", "A population pharmacokinetic model was developed using non-linear mixed effects modeling (NONMEM® program, Version 7.2.0; ICON, Ellicott City, MD, USA) to describe the population variability in 5-ASA/Ac-5-ASA pharmacokinetics and the relationship between pharmacokinetic parameters and potential explanatory covariates (eg, age, weight, and sex). Pharmacokinetic parameters were estimated using Monte-Carlo Importance Sampling Expectation Maximization method with “Mu Referencing”.17 Development of the population pharmacokinetic model consisted of building a base model, followed by development of a covariate model using an interim data cut (40 subjects); the final model was updated with data from an additional 12 subjects from the study. Structural model selection was data driven, based on goodness-of-fit plots (observed vs predicted concentration, conditional weighted residual vs predicted concentration or time, histograms of individual random effects, etc), successful convergence, plausibility and precision of parameter estimates, and the minimum objective function value. Missing drug concentrations and concentrations reported as “not quantifiable” were excluded in the analysis. 
The final pharmacokinetic model was evaluated using visual predictive check (VPC), and this model was used to simulate the expected 5-ASA and Ac-5-ASA plasma concentration profiles in a broader population of children and adolescents using Trial Simulator (Version 2.2.1, Pharsight Corporation). Data presentation and construction of plots were performed using S-PLUS (Version 8.1; Tibco Software Inc., Palo Alto, CA, USA). The simulated exposures were compared to historical adult exposures.15,16", " Subjects Between October 2010 and June 2013, a total of 52 subjects were screened, randomized, and treated; all completed the study (21 subjects [5 children, 16 adolescents] in the 30 mg/kg/day dose group, 22 [4 children, 18 adolescents] in the 60 mg/kg/day dose group, and 9 [7 children, 2 adolescents] in the 100 mg/kg/day dose group; Figure 1). While the study met the FDA enrollment requirement of a minimum of six subjects per dose level, more patients were enrolled into the 30 and 60 mg/kg/day dose groups than into the 100 mg/kg/day dose group due to difficulties enrolling children who weighed <49 kg. Overall, demographic data and baseline disease characteristics were well balanced between dose groups, although fewer subjects were studied in the 100 mg/kg/day dose group due to enrollment difficulties (Table 1).\n Non-compartmental pharmacokinetics Inter-assay accuracy and precision data for 5-ASA and Ac-5-ASA in the QC samples at three concentrations each in human plasma and urine across all analytical batches are shown in Table S1. For 5-ASA, mean plasma concentration–time profiles attained maxima at ~6 or 9 hours post-dose, with a secondary peak at 24 hours post-dose (Figure 2A; Table 2). Median tmax was 6 and 9 hours (range 0–24 hours for all dose levels) post-dose for 30 and 60 mg/kg/day doses, respectively. For 100 mg/kg/day, median tmax was approximately 2 hours post-dose; however, there were only nine subjects at 100 mg/kg/day. Steady-state plasma concentrations for 5-ASA were attained by day 5 for all doses. On day 7, systemic exposure to 5-ASA (mean AUCss and Cmax,ss) increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. Based on urinary recovery, the mean percentages of 5-ASA absorbed from multimatrix mesalamine were 29.4%, 27.0%, and 22.1% for 30, 60, and 100 mg/kg/day doses, respectively. High between-subject variability was noted for 5-ASA AUCss and Cmax,ss, with arithmetic CV values ranging from 36% to 52% and 52% to 60%, respectively.
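The between-subject variability and dose-proportionality statements in this element rest on two simple summaries: the arithmetic CV% of exposure within a dose group and exposure normalized by dose. A minimal sketch of both follows; the AUCss values are hypothetical and are not subject-level data from the study.

```python
from statistics import mean, stdev

def arithmetic_cv_percent(values):
    """Arithmetic coefficient of variation (%), as used to summarize AUCss and Cmax,ss."""
    return stdev(values) / mean(values) * 100.0

def dose_normalized(values, dose_mg_per_kg):
    """Dose-normalized exposures, a simple way to eyeball dose proportionality."""
    return [v / dose_mg_per_kg for v in values]

# Hypothetical 5-ASA AUCss values (ng*h/mL) for a few subjects in the 30 mg/kg/day group
auc_30 = [18000.0, 26000.0, 41000.0, 22000.0, 33000.0]
print(round(arithmetic_cv_percent(auc_30), 1))   # between-subject CV%
print(dose_normalized(auc_30, 30.0))             # AUCss per mg/kg of dose
```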
Mean CLR ranged from 5.0 to 6.5 L/h, with a trend toward decreasing with increasing dose.\nMean plasma concentration–time profiles of plasma Ac-5-ASA were similar to those of the parent drug 5-ASA (Figure 2B; Table 2), with median tmax of 9, 7.5, and 2 hours post-dose for 30, 60, and 100 mg/kg/day doses, respectively. Steady-state plasma concentrations for Ac-5-ASA were attained by day 5 for all doses. On day 7, as for 5-ASA, systemic exposure to Ac-5-ASA increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. The metabolite Ac-5-ASA was more abundant than the parent drug (Table 2; MRAUCss of Ac-5-ASA:5-ASA), with no apparent trend across dose. Mean CLR ranged from 10.0 to 16.2 L/h, with a trend toward decreasing with increasing dose. Moderate to high between-subject variability was noted for AUCss and Cmax,ss, with arithmetic CV values ranging from 35% to 44% and 40% to 59%, respectively. There was no apparent difference in 5-ASA or Ac-5-ASA systemic exposure, as measured by mean AUCss and Cmax,ss, between children (aged 5–12 years) and adolescents (aged 13–17 years) for this weight-based dosing paradigm (data not shown).\nInter-assay accuracy and precision data for 5-ASA and Ac-5-ASA in the QC samples at three concentrations each in human plasma and urine across all analytical batches are shown in Table S1. For 5-ASA, mean plasma concentration–time profiles attained maxima at ~6 or 9 hours post-dose, with a secondary peak at 24 hours post-dose (Figure 2A; Table 2). Median tmax was 6 and 9 hours (range 0–24 hours for all dose levels) post-dose for 30 and 60 mg/kg/day doses, respectively. For 100 mg/kg/day, median tmax was approximately 2 hours post-dose; however, there were only nine subjects at 100 mg/kg/day. Steady-state plasma concentrations for 5-ASA were attained by day 5 for all doses. On day 7, systemic exposure to 5-ASA (mean AUCss and Cmax,ss) increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. Based on urinary recovery, the mean percentages of 5-ASA absorbed from multimatrix mesalamine were 29.4%, 27.0%, and 22.1% for 30, 60, and 100 mg/kg/day doses, respectively. High between-subject variability was noted for 5-ASA AUCss and Cmax,ss, with arithmetic CV values ranging from 36% to 52% and 52% to 60%, respectively. Mean CLR ranged from 5.0 to 6.5 L/h, with a trend toward decreasing with increasing dose.\nMean plasma concentration–time profiles of plasma Ac-5-ASA were similar to those of the parent drug 5-ASA (Figure 2B; Table 2), with median tmax of 9, 7.5, and 2 hours post-dose for 30, 60, and 100 mg/kg/day doses, respectively. Steady-state plasma concentrations for Ac-5-ASA were attained by day 5 for all doses. On day 7, as for 5-ASA, systemic exposure to Ac-5-ASA increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. The metabolite Ac-5-ASA was more abundant than the parent drug (Table 2; MRAUCss of Ac-5-ASA:5-ASA), with no apparent trend across dose. Mean CLR ranged from 10.0 to 16.2 L/h, with a trend toward decreasing with increasing dose. Moderate to high between-subject variability was noted for AUCss and Cmax,ss, with arithmetic CV values ranging from 35% to 44% and 40% to 59%, respectively. 
There was no apparent difference in 5-ASA or Ac-5-ASA systemic exposure, as measured by mean AUCss and Cmax,ss, between children (aged 5–12 years) and adolescents (aged 13–17 years) for this weight-based dosing paradigm (data not shown).\n Population pharmacokinetics The pharmacokinetics of 5-ASA and Ac-5-ASA were adequately described by the population pharmacokinetic structural model (Figure S1) that included: first-order absorption from two depot compartments, absorption lag times, and separate central compartments for 5-ASA and Ac-5-ASA with respective urine compartments for renal clearance. Non-renal clearance of 5-ASA was assumed to involve only metabolism to Ac-5-ASA, and all elimination processes were based on the first-order kinetics. Allometric scaling by body weight was applied to all clearance and volume parameters, with the exponents fixed to the theoretical value of 0.75 for clearance and 1 for volume parameters.18 Parameter estimates for the final model are shown in Table S2; following evaluation by VPC, less than 9% of the observed concentrations fell outside the 90% prediction intervals for both analytes, suggesting that the final model adequately described the observed data. For a 70 kg individual, the typical value of 5-ASA apparent renal clearance was estimated to be 1.15 L/h (95% confidence interval [CI]: 1.01–1.32 L/h), and apparent metabolic clearance was estimated to be 85.6 L/h (95% CI: 75.9–96.5 L/h). The typical value of Ac-5-ASA apparent renal clearance was estimated to be 2.54 L/h (95% CI: 2.27–2.86 L/h), and apparent non-renal clearance was estimated to be 74.4 L/h (95% CI: 66.7–83.1 L/h). Typical estimates of the central volume of distribution were 109 L (95% CI: 70.1–169 L) for 5-ASA and 7.10 L (95% CI: 5.31–9.49 L) for Ac-5-ASA. Estimated absorption rates from depot 1 and depot 3 were 0.0334 h−1 (95% CI: 0.0207–0.0539 h−1) and 0.273 h−1 (95% CI: 0.165–0.448 h−1). Corresponding estimated absorption lag times from each depot were 5.10 hours (95% CI: 4.31–6.05 hours) and 15.0 hours (95% CI: 14.0–16.1 hours); the lag time from depot 3 represented the additional lag following delay in absorption from depot 1. The fraction of dose absorbed from depot 1 was estimated to be 0.734 (95% CI: 0.413–1.06), and remaining dose fractions were assumed to be absorbed from depot 3. Goodness-of-fit plots for 5-ASA and Ac-5-ASA plasma concentrations are shown in Figures S2 and S3.\nThe population pharmacokinetic model was used to simulate steady-state profiles for both 5-ASA and Ac-5-ASA for four weight groups (18–23, 24–35, 36–50, and 51–90 kg) at planned high doses (1,800, 2,400, 3,600, and 4,800 mg) and low doses (900, 1,200, 1,800, and 2,400 mg) in 80 subjects with 1,000 replications (Figure 3A–D; Tables S3 and S4). The variability in the predicted steady-state exposures for children and adolescents was approximately 50% (CV%) for 5-ASA AUC (Table S3) and 40%–45% for Ac-5-ASA AUC (Table S4). The proposed low dose of multimatrix mesalamine for each weight category is predicted to provide comparable steady-state exposure for both 5-ASA and Ac-5 -ASA to those observed following administration of a fixed 2,400 mg dose in the adult population (Figure 3A–D). 
The proposed high dose for each weight category is predicted to provide comparable steady-state AUC for both 5-ASA and Ac-5-ASA to those observed following administration of a fixed 4,800 mg dose in the adult population.\nThe pharmacokinetics of 5-ASA and Ac-5-ASA were adequately described by the population pharmacokinetic structural model (Figure S1) that included: first-order absorption from two depot compartments, absorption lag times, and separate central compartments for 5-ASA and Ac-5-ASA with respective urine compartments for renal clearance. Non-renal clearance of 5-ASA was assumed to involve only metabolism to Ac-5-ASA, and all elimination processes were based on the first-order kinetics. Allometric scaling by body weight was applied to all clearance and volume parameters, with the exponents fixed to the theoretical value of 0.75 for clearance and 1 for volume parameters.18 Parameter estimates for the final model are shown in Table S2; following evaluation by VPC, less than 9% of the observed concentrations fell outside the 90% prediction intervals for both analytes, suggesting that the final model adequately described the observed data. For a 70 kg individual, the typical value of 5-ASA apparent renal clearance was estimated to be 1.15 L/h (95% confidence interval [CI]: 1.01–1.32 L/h), and apparent metabolic clearance was estimated to be 85.6 L/h (95% CI: 75.9–96.5 L/h). The typical value of Ac-5-ASA apparent renal clearance was estimated to be 2.54 L/h (95% CI: 2.27–2.86 L/h), and apparent non-renal clearance was estimated to be 74.4 L/h (95% CI: 66.7–83.1 L/h). Typical estimates of the central volume of distribution were 109 L (95% CI: 70.1–169 L) for 5-ASA and 7.10 L (95% CI: 5.31–9.49 L) for Ac-5-ASA. Estimated absorption rates from depot 1 and depot 3 were 0.0334 h−1 (95% CI: 0.0207–0.0539 h−1) and 0.273 h−1 (95% CI: 0.165–0.448 h−1). Corresponding estimated absorption lag times from each depot were 5.10 hours (95% CI: 4.31–6.05 hours) and 15.0 hours (95% CI: 14.0–16.1 hours); the lag time from depot 3 represented the additional lag following delay in absorption from depot 1. The fraction of dose absorbed from depot 1 was estimated to be 0.734 (95% CI: 0.413–1.06), and remaining dose fractions were assumed to be absorbed from depot 3. Goodness-of-fit plots for 5-ASA and Ac-5-ASA plasma concentrations are shown in Figures S2 and S3.\nThe population pharmacokinetic model was used to simulate steady-state profiles for both 5-ASA and Ac-5-ASA for four weight groups (18–23, 24–35, 36–50, and 51–90 kg) at planned high doses (1,800, 2,400, 3,600, and 4,800 mg) and low doses (900, 1,200, 1,800, and 2,400 mg) in 80 subjects with 1,000 replications (Figure 3A–D; Tables S3 and S4). The variability in the predicted steady-state exposures for children and adolescents was approximately 50% (CV%) for 5-ASA AUC (Table S3) and 40%–45% for Ac-5-ASA AUC (Table S4). The proposed low dose of multimatrix mesalamine for each weight category is predicted to provide comparable steady-state exposure for both 5-ASA and Ac-5 -ASA to those observed following administration of a fixed 2,400 mg dose in the adult population (Figure 3A–D). The proposed high dose for each weight category is predicted to provide comparable steady-state AUC for both 5-ASA and Ac-5-ASA to those observed following administration of a fixed 4,800 mg dose in the adult population.\n Safety The incidence of treatment-emergent adverse events (TEAEs) was 19.2% (ten subjects overall); all TEAEs were mild to moderate (Table 3). 
Incidence rates were similar among different dose groups, and the most commonly reported TEAEs were abdominal pain, musculoskeletal pain, and headache, each reported in 3.8% of subjects (n=2 each). There were no TEAEs leading to premature discontinuation. Two subjects (3.8%) experienced a TEAE considered by the investigator to be related to the study drug: one subject in the 30 mg/kg/day dose group experienced abdominal pain, dehydration, and vomiting, and one subject in the 60 mg/kg/day dose group experienced moderate upper abdominal pain. No relevant differences between the age groups were observed with regard to the occurrence, severity, or relatedness of TEAEs, and no clinically relevant abnormalities in biochemistry, hematology, urinalysis, or vital sign values were observed. No new safety signals were reported.\nThe incidence of treatment-emergent adverse events (TEAEs) was 19.2% (ten subjects overall); all TEAEs were mild to moderate (Table 3). Incidence rates were similar among different dose groups, and the most commonly reported TEAEs were abdominal pain, musculoskeletal pain, and headache, each reported in 3.8% of subjects (n=2 each). There were no TEAEs leading to premature discontinuation. Two subjects (3.8%) experienced a TEAE considered by the investigator to be related to the study drug: one subject in the 30 mg/kg/day dose group experienced abdominal pain, dehydration, and vomiting, and one subject in the 60 mg/kg/day dose group experienced moderate upper abdominal pain. No relevant differences between the age groups were observed with regard to the occurrence, severity, or relatedness of TEAEs, and no clinically relevant abnormalities in biochemistry, hematology, urinalysis, or vital sign values were observed. No new safety signals were reported.", "Between October 2010 and June 2013, a total of 52 subjects were screened, randomized, and treated; all completed the study (21 subjects [5 children, 16 adolescents] in the 30 mg/kg/day dose group, 22 [4 children, 18 adolescents] in the 60 mg/kg/day dose group, and 9 [7 children, 2 adolescents] in the 100 mg/kg/day dose group; Figure 1). While the study met the FDA enrollment requirement of a minimum of six subjects per dose level, more patients were enrolled into the 30 and 60 mg/kg/day dose groups than into the 100 mg/kg/day dose group due to difficulties enrolling children who weighed <49 kg. Overall, demographic data and baseline disease characteristics were well balanced between dose groups, although fewer subjects were studied in the 100 mg/kg/day dose group due to enrollment difficulties (Table 1).", "Inter-assay accuracy and precision data for 5-ASA and Ac-5-ASA in the QC samples at three concentrations each in human plasma and urine across all analytical batches are shown in Table S1. For 5-ASA, mean plasma concentration–time profiles attained maxima at ~6 or 9 hours post-dose, with a secondary peak at 24 hours post-dose (Figure 2A; Table 2). Median tmax was 6 and 9 hours (range 0–24 hours for all dose levels) post-dose for 30 and 60 mg/kg/day doses, respectively. For 100 mg/kg/day, median tmax was approximately 2 hours post-dose; however, there were only nine subjects at 100 mg/kg/day. Steady-state plasma concentrations for 5-ASA were attained by day 5 for all doses. On day 7, systemic exposure to 5-ASA (mean AUCss and Cmax,ss) increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. 
Based on urinary recovery, the mean percentages of 5-ASA absorbed from multimatrix mesalamine were 29.4%, 27.0%, and 22.1% for 30, 60, and 100 mg/kg/day doses, respectively. High between-subject variability was noted for 5-ASA AUCss and Cmax,ss, with arithmetic CV values ranging from 36% to 52% and 52% to 60%, respectively. Mean CLR ranged from 5.0 to 6.5 L/h, with a trend toward decreasing with increasing dose.\nMean plasma concentration–time profiles of plasma Ac-5-ASA were similar to those of the parent drug 5-ASA (Figure 2B; Table 2), with median tmax of 9, 7.5, and 2 hours post-dose for 30, 60, and 100 mg/kg/day doses, respectively. Steady-state plasma concentrations for Ac-5-ASA were attained by day 5 for all doses. On day 7, as for 5-ASA, systemic exposure to Ac-5-ASA increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. The metabolite Ac-5-ASA was more abundant than the parent drug (Table 2; MRAUCss of Ac-5-ASA:5-ASA), with no apparent trend across dose. Mean CLR ranged from 10.0 to 16.2 L/h, with a trend toward decreasing with increasing dose. Moderate to high between-subject variability was noted for AUCss and Cmax,ss, with arithmetic CV values ranging from 35% to 44% and 40% to 59%, respectively. There was no apparent difference in 5-ASA or Ac-5-ASA systemic exposure, as measured by mean AUCss and Cmax,ss, between children (aged 5–12 years) and adolescents (aged 13–17 years) for this weight-based dosing paradigm (data not shown).", "The pharmacokinetics of 5-ASA and Ac-5-ASA were adequately described by the population pharmacokinetic structural model (Figure S1) that included: first-order absorption from two depot compartments, absorption lag times, and separate central compartments for 5-ASA and Ac-5-ASA with respective urine compartments for renal clearance. Non-renal clearance of 5-ASA was assumed to involve only metabolism to Ac-5-ASA, and all elimination processes were based on the first-order kinetics. Allometric scaling by body weight was applied to all clearance and volume parameters, with the exponents fixed to the theoretical value of 0.75 for clearance and 1 for volume parameters.18 Parameter estimates for the final model are shown in Table S2; following evaluation by VPC, less than 9% of the observed concentrations fell outside the 90% prediction intervals for both analytes, suggesting that the final model adequately described the observed data. For a 70 kg individual, the typical value of 5-ASA apparent renal clearance was estimated to be 1.15 L/h (95% confidence interval [CI]: 1.01–1.32 L/h), and apparent metabolic clearance was estimated to be 85.6 L/h (95% CI: 75.9–96.5 L/h). The typical value of Ac-5-ASA apparent renal clearance was estimated to be 2.54 L/h (95% CI: 2.27–2.86 L/h), and apparent non-renal clearance was estimated to be 74.4 L/h (95% CI: 66.7–83.1 L/h). Typical estimates of the central volume of distribution were 109 L (95% CI: 70.1–169 L) for 5-ASA and 7.10 L (95% CI: 5.31–9.49 L) for Ac-5-ASA. Estimated absorption rates from depot 1 and depot 3 were 0.0334 h−1 (95% CI: 0.0207–0.0539 h−1) and 0.273 h−1 (95% CI: 0.165–0.448 h−1). Corresponding estimated absorption lag times from each depot were 5.10 hours (95% CI: 4.31–6.05 hours) and 15.0 hours (95% CI: 14.0–16.1 hours); the lag time from depot 3 represented the additional lag following delay in absorption from depot 1. 
The fraction of dose absorbed from depot 1 was estimated to be 0.734 (95% CI: 0.413–1.06), and remaining dose fractions were assumed to be absorbed from depot 3. Goodness-of-fit plots for 5-ASA and Ac-5-ASA plasma concentrations are shown in Figures S2 and S3.\nThe population pharmacokinetic model was used to simulate steady-state profiles for both 5-ASA and Ac-5-ASA for four weight groups (18–23, 24–35, 36–50, and 51–90 kg) at planned high doses (1,800, 2,400, 3,600, and 4,800 mg) and low doses (900, 1,200, 1,800, and 2,400 mg) in 80 subjects with 1,000 replications (Figure 3A–D; Tables S3 and S4). The variability in the predicted steady-state exposures for children and adolescents was approximately 50% (CV%) for 5-ASA AUC (Table S3) and 40%–45% for Ac-5-ASA AUC (Table S4). The proposed low dose of multimatrix mesalamine for each weight category is predicted to provide comparable steady-state exposure for both 5-ASA and Ac-5 -ASA to those observed following administration of a fixed 2,400 mg dose in the adult population (Figure 3A–D). The proposed high dose for each weight category is predicted to provide comparable steady-state AUC for both 5-ASA and Ac-5-ASA to those observed following administration of a fixed 4,800 mg dose in the adult population.", "The incidence of treatment-emergent adverse events (TEAEs) was 19.2% (ten subjects overall); all TEAEs were mild to moderate (Table 3). Incidence rates were similar among different dose groups, and the most commonly reported TEAEs were abdominal pain, musculoskeletal pain, and headache, each reported in 3.8% of subjects (n=2 each). There were no TEAEs leading to premature discontinuation. Two subjects (3.8%) experienced a TEAE considered by the investigator to be related to the study drug: one subject in the 30 mg/kg/day dose group experienced abdominal pain, dehydration, and vomiting, and one subject in the 60 mg/kg/day dose group experienced moderate upper abdominal pain. No relevant differences between the age groups were observed with regard to the occurrence, severity, or relatedness of TEAEs, and no clinically relevant abnormalities in biochemistry, hematology, urinalysis, or vital sign values were observed. No new safety signals were reported.", "This is the first study to evaluate the safety and pharmacokinetics of multimatrix mesalamine in children and adolescents diagnosed with UC. Multimatrix mesalamine was generally safe and well tolerated. No fatal TEAEs, other serious TEAEs, or TEAEs leading to discontinuation of treatment occurred during the study, and the incidence of TEAEs was comparable between dose and age groups. 
Furthermore, the types and frequencies of clinical laboratory abnormalities and other safety parameters were low, and comparable between dose and age groups.\nThe pharmacokinetic profiles of 5-ASA and Ac-5-ASA at 30 or 60 mg/kg once daily were similar to those observed historically in adults after a 2,400 mg daily dose.15,16 Likewise, the pharmacokinetic profiles of 5-ASA and Ac-5-ASA at 100 mg/kg once daily were also similar to those in adults after a 4,800 mg daily dose.15,16 These conclusions were based on the variable absorption profiles with initial and secondary peaks, the percentage of the 5-ASA absorbed, renal clearance of 5-ASA and Ac-5-ASA, plasma exposure (AUCss and Cmax,ss), and the inter-subject variability in pharmacokinetic parameters (Table 2).15,16 For example, in adults, the total absorption of 5-ASA from multimatrix mesalamine 2.4 or 4.8 g administered once daily for 14 days to healthy volunteers was approximately 21%–22% of the administered dose.15 In the current study, 5-ASA absorption ranged from 22% to 29% across doses.\nThe overall similarity in 5-ASA and Ac-5-ASA exposures between children/adolescents and adults suggests that the novel, smaller 300 and 600 mg multimatrix mesalamine tablets developed for this study performed as intended, delivering 5-ASA to the colon in a similar fashion to the commercial 1,200 mg multimatrix mesalamine tablets. Hence, these new 300 and 600 mg tablets may be suitable for use in future studies.\nFor both 5-ASA and Ac-5-ASA, pharmacokinetic steady state was attained by day 5 for all doses, and systemic exposure of 5-ASA and Ac-5-ASA (measured by mean AUCss and Cmax,ss) on day 7 increased in a dose-proportional manner between 30 and 60 mg/kg/day doses (Table 2). Mean AUCss and Cmax,ss increased subproportionally between 60 and 100 mg/kg/day doses, possibly due to the maximum dose restriction of 4,800 mg/day, with the highest dose of 100 mg/kg only being administered to subjects weighing less than 50 kg. In the 100 mg/kg/day dose group, a mean of 22.1% of the dose was absorbed (similar to the 21%–22% observed in adults),15 whereas the corresponding values for the 30 and 60 mg/kg/day dose groups were higher (29.4% and 27.0%, respectively), suggesting that there also may be differences in the extent of absorption. Nevertheless, these assessments of dose-proportionality should be interpreted with caution due to the large inter-subject pharmacokinetic variability and the small sample size (n=9) receiving 100 mg/kg/day.\nIn addition to the typical non-compartmental pharmacokinetic analysis, a population pharmacokinetic model was developed to describe the pharmacokinetics of 5-ASA and Ac-5-ASA (Figure S1), and was subsequently used to simulate exposures in a broader population of children and adolescents with UC, following repeat oral administration of multimatrix mesalamine (Figures 3A–D; Tables S3 and S4). The variability in the predicted steady-state exposures for children and adolescents was similar to the observed variability in adults for both 5-ASA AUC (CV ≈50% in children and adolescents and 60% in adults) and Ac-5-ASA AUC (CV ≈40%–45% in both populations).15,16 Additionally, absorption from two depots, as described in the final model, is consistent with the multimatrix mesalamine tablet, delivering an initial burst of 5-ASA that occurs from disintegration of the enteric coating, followed by leaching of 5-ASA out of the core as the tablet travels through the terminal ileum and colon. 
Therefore, the absorption data provide support for the drug being delivered in accordance with the design concept. Finally, results from the modeling and simulation in the current analysis suggest that the proposed dosing regimen for examining the safety and efficacy of multimatrix mesalamine in children and adolescents is likely to produce exposures to 5-ASA and Ac-5-ASA within the range of exposures at which safety and efficacy have been established in adults. As a result, these data supported the proposed doses for the Phase III safety and efficacy trial of multimatrix mesalamine in pediatric patients with UC (PACE; NCT02093663).", "Schematic of final population pharmacokinetic structural model.\nNotes: Numbers on the top left-hand corner of the boxes represent the compartment numbers for the population pharmacokinetic model in the NONMEM code. Hence, the two depot compartments were denoted as depot 1 and depot 3.\nAbbreviations: ALAG1, absorption lag time from depot 1; 5-ASA, 5-aminosalicylic acid; Ac-5-ASA, acetyl-5-aminosalicylic acid; Ka1, absorption rate constant from depot 1; F1, fraction of dose absorbed from depot 1; Ka3, absorption rate constant from depot 3; ALAG3, absorption lag time from depot 3 in addition to the lag time from depot 1; CLM, metabolic clearance of 5-ASA; CLNRM, non-renal clearance of Ac-5-ASA; CLR, renal clearance of 5-ASA; CLRM, renal clearance of Ac-5-ASA.\nGoodness-of-fit and diagnostic plots for the final model in children and adolescents: plasma 5-ASA.\nNotes: The dashed line represents the local regression (Loess) smoothing line. (A) Observed versus population predicted concentration; (B) observed versus individual predicted concentration; (C) conditional weighted residual versus population predicted concentration; (D) conditional weighted residual versus time.\nAbbreviation: 5-ASA, 5-aminosalicylic acid.\nGoodness-of-fit and diagnostic plots for the final model in children and adolescents: plasma Ac-5-ASA.\nNotes: The dashed line represents the local regression (Loess) smoothing line. 
(A) Observed versus population predicted concentration; (B) observed versus individual predicted concentration; (C) conditional weighted residual versus population predicted concentration; (D) conditional weighted residual versus time.\nAbbreviation: Ac-5-ASA, acetyl-5-aminosalicylic acid.\nAssay performance of 5-ASA and Ac-5-ASA bioanalytical quality control samples in human plasma and urine\nAbbreviations: Ac-5-ASA, acetyl-5-aminosalicylic acid; 5-ASA, 5-aminosalicylic acid; QC, quality control; CV, coefficient of variation.\nParameter estimates of final 5-ASA/Ac-5-ASA population pharmacokinetic model\n\nNotes:\n\nThe reference population for the pharmacokinetic parameters CLR/F, CLM/F, Vc/F, CLRM/F, CLNRM/F, and VcM/F is an individual weighing 70 kg.\nEstimates and 95% CI back-transformed from loge scale.\nCV of a typical parameter value (TVP), in percent, = 100 × sqrt(exp(ω2P) − 1), when ω2P exceeds 0.15.\nCV of proportional error, in percent, = sqrt(σ2prop) × 100.\nAbbreviations: 5-ASA, 5-aminosalicylic acid; Ac-5-ASA, acetyl-5-aminosalicylic acid; ALAG1, absorption lag time from depot 1; ALAG3, absorption lag time from depot 3 in addition to the lag time from depot 1; CI, 95% confidence interval on the parameter; CLM/F, apparent metabolic clearance of 5-ASA; CLNRM/F, apparent non-renal clearance of Ac-5-ASA; CLR/F, apparent renal clearance of 5-ASA; CLRM/F, apparent renal clearance of Ac-5-ASA; CV, coefficient of variation; F1, fraction of dose absorbed from depot 1; FIX, fixed; Ka1, absorption rate constant from depot 1; Ka3, absorption rate constant from depot 3; NA, not applicable; %RSE, percent relative standard error of the estimate = standard error/parameter estimate ×100; Vc/F, apparent volume of central compartment of 5-ASA; VcM/F, apparent volume of central compartment of Ac-5-ASA; WT, body weight; ω2CLR, variance of random effect of CLR/F; ω2CL, variance of random effect of CLM/F; ω2Vc, variance of random effect of Vc/F; ω2CLRM, variance of random effect of CLRM/F; ω2CLNR, variance of random effect of CLNRM/F; ω2VcM, variance of random effect of VcM/F; ω2Ka1, variance of random effect of Ka1; ω2Ka3, variance of random effect of Ka3; ω2ALAG1, variance of random effect of ALAG1; ω2ALAG3, variance of random effect of ALAG3; ω2F1, variance of random effect of F1; σ2prop, proportional component of the residual error model.\nSimulated steady-state 5-ASA exposures by weight and dose group\n\nNote:\n\nSummarized as median (2.5th percentile, 97.5th percentile).\nAbbreviations: 5-ASA, 5-aminosalicylic acid; AUC, area under the curve; SD, standard deviation; CV, coefficient of variation; PI, percentile.\nSimulated steady-state Ac-5-ASA pharmacokinetic parameters by dose and age group\n\nNote:\n\nSummarized as median (2.5th percentile, 97.5th percentile).\nAbbreviations: Ac-5-ASA, acetyl-5-aminosalicylic acid; AUC, area under the curve; SD, standard deviation; CV, coefficient of variation; PI, percentile." ]
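To make the conventions in the parameter-table notes above concrete: each clearance and volume is reported for a 70 kg reference individual and scaled allometrically by body weight (exponent fixed at 0.75 for clearances and 1 for volumes), and between-subject variability is reported as a CV derived from the log-normal variance ω2. The short sketch below is an illustrative Python rendering of those two calculations, not the NONMEM model itself; the example body weights and the ω2 value of 0.25 are assumptions for demonstration, while the typical clearance values are the reported estimates for a 70 kg individual.

```python
# Illustrative sketch, not the study's NONMEM model. Typical values below are the
# reported estimates for a 70 kg reference individual; the weights and the omega^2
# value used in the example are assumptions for demonstration only.
import math

def scale_allometric(typical_70kg: float, weight_kg: float, exponent: float) -> float:
    """Allometric scaling: P_i = P_70kg * (WT / 70)^exponent (0.75 for CL, 1 for V)."""
    return typical_70kg * (weight_kg / 70.0) ** exponent

def cv_percent_from_omega2(omega_sq: float) -> float:
    """%CV of a log-normally distributed parameter: 100 * sqrt(exp(omega^2) - 1)."""
    return 100.0 * math.sqrt(math.exp(omega_sq) - 1.0)

CLR_5ASA_70KG = 1.15   # L/h, typical apparent renal clearance of 5-ASA
CLM_5ASA_70KG = 85.6   # L/h, typical apparent metabolic clearance of 5-ASA

for wt in (20, 35, 50):  # example pediatric body weights (kg)
    clr = scale_allometric(CLR_5ASA_70KG, wt, 0.75)
    clm = scale_allometric(CLM_5ASA_70KG, wt, 0.75)
    print(f"{wt} kg: CLR/F ~ {clr:.2f} L/h, CLM/F ~ {clm:.1f} L/h")

# Example omega^2 (assumed value, not from the table) converted to a CV
print(f"omega^2 = 0.25 corresponds to a CV of ~ {cv_percent_from_omega2(0.25):.0f}%")
```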
[ "intro", "methods", "methods", "methods", null, null, null, "methods", "methods", "results", "subjects", null, null, null, "discussion", "supplementary-material" ]
[ "ulcerative colitis", "mesalamine", "pharmacology" ]
Introduction: Ulcerative colitis (UC) is a chronic inflammatory disease of the colon and rectum distinguished by cycles of remission and relapse over the life of the subject.1,2 While the incidence of UC peaks around early adulthood, onset of the disorder can occur from early childhood through adulthood.3 UC is one of the more prevalent chronic diseases in children, with an incidence rate of 2.1 per 100,000 children within the United States.4 The primary goals in UC management are induction and maintenance of remission to improve subjects’ health and quality of life.5 Oral 5-aminosalicylic acid (5-ASA) formulations, such as multimatrix mesalamine (Shire US Inc., Wayne, PA, USA), have proven effective in the induction and maintenance of UC remission,6–10 and are recommended first-line therapy for adults with active mild-to-moderate UC.5,6 5-ASAs are also commonly used as the standard of care first-line therapy in pediatric UC. However, while studies that support 5-ASA use in adult subjects with UC are abundant,11 evidence on the efficacy and safety of 5-ASA in pediatric UC subjects is less substantial, with only a few randomized, controlled clinical studies for the induction or maintenance of remission by 5-ASA in pediatric subjects having been conducted.12,13 To date, no 5-ASA product has been licensed for maintenance of remission of UC in children. As the first step in a program evaluating multimatrix mesalamine in pediatric UC, the primary objective of this randomized Phase I study (ClinicalTrials.gov Identifier: NCT01130844) was to assess the pharmacokinetics of 5-ASA and its major metabolite acetyl-5-ASA (Ac-5-ASA) after administration of once-daily multimatrix mesalamine at three different doses (30, 60, or 100 mg/kg/day) for 7 days in children and adolescents diagnosed with UC. Secondary objectives included examining the safety and tolerability of multimatrix mesalamine at these doses in children and adolescents with UC, and evaluating the extent of absorption of 5-ASA from multimatrix mesalamine at steady state. Methods: Study design This Phase I, multicenter, randomized, open-label, three-arm study was conducted in 12 sites across three countries (United States, Poland, and Slovakia). Children and adolescents (aged 5–17 years) with a diagnosis of UC were randomly assigned to receive multimatrix mesalamine 30, 60, or 100 mg/kg once daily (up to a maximum of 4,800 mg/day) each morning for 7 days. To achieve these doses in children, smaller-sized 300 and 600 mg tablets were developed for this study to augment the existing approved 1,200 mg multimatrix mesalamine tablet. Total daily doses for the study ranged from 900 to 4,800 mg/day. Randomization was stratified by body weight (18–24, 25–49, and 50–82 kg); subjects weighing 18–24 kg were only randomized to 60 and 100 mg/kg/day groups to avoid receiving doses less than 900 mg, and subjects weighing 50–82 kg were only randomized to 30 and 60 mg/kg/day groups to avoid receiving doses greater than 4,800 mg, the maximum approved dose in adult subjects. Randomization was achieved via use of a randomization number allocated prior to dosing, once eligibility had been determined, and a randomization schedule was produced by an interactive voice response system vendor. Subjects were dosed with multimatrix mesalamine every morning for 7 days, at home on days 1–4, and on-site on days 5–7. On day 7, blood and urine pharmacokinetic samples were collected and safety assessments were performed until 24 hours post-dose. 
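The weight-based dosing rules above can be made concrete with a small worked example. The sketch below is an illustrative Python rendering (the function names, the inference that the 25–49 kg stratum allowed all three dose levels, and the omission of rounding to the 300/600/1,200 mg tablet strengths are assumptions, not part of the protocol); it encodes only the stated 30/60/100 mg/kg/day levels, the 900 mg/day minimum, the 4,800 mg/day cap, and the weight strata used to restrict randomization.

```python
# Illustrative sketch only: encodes the numeric dosing rules stated above
# (30/60/100 mg/kg/day, 900 mg/day floor, 4,800 mg/day cap, weight strata);
# rounding to the 300/600/1,200 mg tablet strengths is not modeled.

MAX_DAILY_DOSE_MG = 4800   # maximum approved adult dose
MIN_DAILY_DOSE_MG = 900    # lowest once-daily dose used in the study

# Weight strata (kg) and the dose levels (mg/kg/day) each stratum could be randomized to.
STRATUM_DOSE_LEVELS = {
    (18, 24): (60, 100),       # avoids nominal doses below 900 mg/day
    (25, 49): (30, 60, 100),   # assumed: no restriction stated for this stratum
    (50, 82): (30, 60),        # avoids nominal doses far above 4,800 mg/day
}

def assigned_daily_dose_mg(weight_kg: float, mg_per_kg: int) -> float:
    """Once-daily dose implied by weight-based dosing, capped at 4,800 mg/day."""
    return min(weight_kg * mg_per_kg, MAX_DAILY_DOSE_MG)

def dose_levels_for_weight(weight_kg: float):
    """Dose levels a subject of this weight could be randomized to, per the strata above."""
    for (lo, hi), levels in STRATUM_DOSE_LEVELS.items():
        if lo <= weight_kg <= hi:
            return levels
    raise ValueError("weight outside the 18-82 kg study range")

if __name__ == "__main__":
    # Even the lightest eligible subjects stay at or above the 900 mg floor at 60 mg/kg.
    assert assigned_daily_dose_mg(18, 60) >= MIN_DAILY_DOSE_MG
    for w in (20, 35, 60, 82):
        levels = dose_levels_for_weight(w)
        print(w, "kg:", {lvl: assigned_daily_dose_mg(w, lvl) for lvl in levels})
```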
The study protocol, protocol amendments, informed consent documents, relevant supporting information, and subject recruitment information were submitted to and approved by the respective independent ethics committees, institutional review boards, and regulatory agencies prior to study initiation. The independent ethics committees, institutional review boards, and regulatory agencies are from the following: United States: Arkansas Children’s Hospital, Advanced Clinical Research Institute, University of California, San Francisco, Connecticut Children’s Medical Center, University of Maryland Medical Center for Children, and Mayo Clinic; Australia: Royal Children’s Hospital Melbourne; Poland: Klinika Pediatrii Gastroenterologii i Zywienia, Uniwersytecki Szpital Dzieciecy w Krakowie, Klinika Pediatrii Dzieciecy Szpital Kliniczny im prof Antoniego Gebali, Kliniczny Oddzial Pediatrii z Pododdzialem Neurologii Dzieciecej Szpital Wojewodzki, Klinika Gastroenterolofii Pediatrii, Instytut Centrum Zdrowia Matki Polki, Oddzial Gastroenterologii i Hepatologii, and Instytut Pomnik-Centrum Zdrowia Dziecka; Slovakia: Gastroenterologicka ambulancia, Univerzitna nemocnica Martin, and DFNsP Banska Bystrica; and the United Kingdom: Alder Hey Children’s NHS Foundation Trust, Barts Health NHS Trust/Royal London Hospital, Somers Clinical Research Facility/Great Ormond Street Hospital, Southampton General Hospital. Further details are available at https://clinicaltrials.gov/ct2/show/NCT01130844. All authors had access to the study data, and reviewed and approved the final manuscript. This Phase I, multicenter, randomized, open-label, three-arm study was conducted in 12 sites across three countries (United States, Poland, and Slovakia). Children and adolescents (aged 5–17 years) with a diagnosis of UC were randomly assigned to receive multimatrix mesalamine 30, 60, or 100 mg/kg once daily (up to a maximum of 4,800 mg/day) each morning for 7 days. To achieve these doses in children, smaller-sized 300 and 600 mg tablets were developed for this study to augment the existing approved 1,200 mg multimatrix mesalamine tablet. Total daily doses for the study ranged from 900 to 4,800 mg/day. Randomization was stratified by body weight (18–24, 25–49, and 50–82 kg); subjects weighing 18–24 kg were only randomized to 60 and 100 mg/kg/day groups to avoid receiving doses less than 900 mg, and subjects weighing 50–82 kg were only randomized to 30 and 60 mg/kg/day groups to avoid receiving doses greater than 4,800 mg, the maximum approved dose in adult subjects. Randomization was achieved via use of a randomization number allocated prior to dosing, once eligibility had been determined, and a randomization schedule was produced by an interactive voice response system vendor. Subjects were dosed with multimatrix mesalamine every morning for 7 days, at home on days 1–4, and on-site on days 5–7. On day 7, blood and urine pharmacokinetic samples were collected and safety assessments were performed until 24 hours post-dose. The study protocol, protocol amendments, informed consent documents, relevant supporting information, and subject recruitment information were submitted to and approved by the respective independent ethics committees, institutional review boards, and regulatory agencies prior to study initiation. 
The independent ethics committees, institutional review boards, and regulatory agencies are from the following: United States: Arkansas Children’s Hospital, Advanced Clinical Research Institute, University of California, San Francisco, Connecticut Children’s Medical Center, University of Maryland Medical Center for Children, and Mayo Clinic; Australia: Royal Children’s Hospital Melbourne; Poland: Klinika Pediatrii Gastroenterologii i Zywienia, Uniwersytecki Szpital Dzieciecy w Krakowie, Klinika Pediatrii Dzieciecy Szpital Kliniczny im prof Antoniego Gebali, Kliniczny Oddzial Pediatrii z Pododdzialem Neurologii Dzieciecej Szpital Wojewodzki, Klinika Gastroenterolofii Pediatrii, Instytut Centrum Zdrowia Matki Polki, Oddzial Gastroenterologii i Hepatologii, and Instytut Pomnik-Centrum Zdrowia Dziecka; Slovakia: Gastroenterologicka ambulancia, Univerzitna nemocnica Martin, and DFNsP Banska Bystrica; and the United Kingdom: Alder Hey Children’s NHS Foundation Trust, Barts Health NHS Trust/Royal London Hospital, Somers Clinical Research Facility/Great Ormond Street Hospital, Southampton General Hospital. Further details are available at https://clinicaltrials.gov/ct2/show/NCT01130844. All authors had access to the study data, and reviewed and approved the final manuscript. Study population Children (aged 5–12 years) and adolescents (aged 13–17 years), weighing 18–82 kg, with a diagnosis of UC ≥3 months prior to the first dose of study drug were enrolled. Subjects already on a 5-ASA product had to have been on a stable regimen for ≥4 weeks prior to the first dose of study drug. In addition to each subject documenting assent, the subject’s parent or legal representative had to provide informed consent. Subjects with current or recurrent disease (other than UC) that could affect the colon or the action, absorption, or disposition of the study drug were excluded. Additional exclusion criteria included: UC confined to the rectum; allergy, hypersensitivity, or poor tolerability to salicylates or aminosalicylates; history of hepatic or renal impairment, pancreatitis, or Reye’s syndrome; use of another investigational product ≤30 days prior to the first dose of study drug; serious, severe, or unstable psychiatric or physical illness; positive urine screen for drugs or abuse of alcohol; and pregnancy or lactation in female subjects.
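As a compact restatement of the inclusion window described above, the following illustrative Python check encodes the headline criteria (age 5–17 years, weight 18–82 kg, UC diagnosed ≥3 months before first dose, and a stable 5-ASA regimen for ≥4 weeks where applicable); it is not study screening software, and the exclusion criteria are deliberately left out for brevity.

```python
# Illustrative only: encodes the headline inclusion criteria stated above.
# Exclusion criteria are not modeled; this is not the study's screening tool.

def meets_inclusion(age_years: float, weight_kg: float,
                    months_since_uc_diagnosis: float,
                    on_5asa: bool, weeks_on_stable_5asa_regimen: float) -> bool:
    if not (5 <= age_years <= 17):
        return False
    if not (18 <= weight_kg <= 82):
        return False
    if months_since_uc_diagnosis < 3:
        return False
    if on_5asa and weeks_on_stable_5asa_regimen < 4:
        return False
    return True

print(meets_inclusion(12, 40, 8, True, 6))   # True
print(meets_inclusion(12, 40, 2, False, 0))  # False: UC diagnosed <3 months before dosing
```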
Pharmacokinetic evaluations Blood samples (2 mL) for the measurement of 5-ASA and Ac-5-ASA plasma concentrations were taken pre-dose on days 5 and 6 and at the following time points on day 7: pre-dose, and 2, 4, 6, 9, 12, 16, and 24 hours after dosing. In addition, a complete 0- to 24-hour urine collection was made for the determination of urinary excretion of 5-ASA and Ac-5-ASA, starting within 30 minutes before the morning meal on day 7 until the final void scheduled for 30 minutes before the morning meal on day 8. Plasma and urine concentrations of 5-ASA and Ac-5-ASA were determined by validated methods based on liquid chromatography with tandem mass spectrometry (LC-MS/MS). The bioanalytical methods used for the quantitation of the plasma and the urine samples were validated, and quality control and calibration standard data were accepted in accordance with the U.S. Food and Drug Administration (FDA) guidance for bioanalytical method validation.14 Pharmacokinetic parameters were determined from the plasma concentration–time data by non-compartmental analysis. The plasma assay ranged from 5 to 5,000 ng/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high quality control (QC) samples were analyzed along with the study samples. The QC sample concentrations of both analytes in plasma were 12.5, 2,500, and 4,000 ng/mL. The inter-day coefficient of variation (CV) values ranged from 5.0% to 7.6% for 5-ASA and from 4.8% to 10.1% for Ac-5-ASA. Accuracy (expressed as percentage of the difference of the mean value for each pool from the theoretical concentration) values ranged from −0.8% to 1.6% for 5-ASA and from 2.4% to 5.2% for Ac-5-ASA. In 12 out of 14 analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in plasma were each 10,000 ng/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control plasma. The inter-day CV value was 5.5% for 5-ASA and 6.1% for Ac-5-ASA, and the accuracy values were 4.0% and 4.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from plasma were shown during assay validation to be >95%. The urine assay ranged from 5 to 1,000 µg/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high QC samples were analyzed along with the study samples. The QC sample concentrations of both analytes were 15, 400, and 800 µg/mL. The inter-day CV values ranged from 6.7% to 11.7% for 5-ASA and from 7.6% to 10.2% for Ac-5-ASA. Accuracy values ranged from −5.5% to −1.3% for 5-ASA and from 1.4% to 4.0% for Ac-5-ASA. In three out of nine analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in urine were each 2,000 µg/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control urine. The inter-day CV value was 7.8% for 5-ASA and 7.2% for Ac-5-ASA, and the accuracy values were −2.5% and 2.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from urine were shown during assay validation to be >90%. A total of 52 plasma samples and six urine samples were re-analyzed to assess incurred sample reproducibility. Results for 100% of these samples were found to be within ±20% of the mean of the original and re-assay results, meeting the predefined acceptance criteria. 
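The inter-day precision and accuracy figures quoted above follow the usual definitions for bioanalytical QC samples: CV as the between-run standard deviation relative to the mean, and accuracy as the percentage deviation of the mean from the nominal concentration. The sketch below illustrates both calculations in Python with invented replicate values (the numbers are not study data).

```python
# Illustrative sketch of the standard QC metrics reported above; the replicate
# values below are invented for demonstration and are not study data.
from statistics import mean, stdev

def inter_day_cv_percent(measured: list[float]) -> float:
    """Coefficient of variation (%) across analytical runs."""
    return stdev(measured) / mean(measured) * 100

def accuracy_percent(measured: list[float], nominal: float) -> float:
    """Percentage difference of the mean measured concentration from nominal."""
    return (mean(measured) - nominal) / nominal * 100

# Example: a low-concentration plasma QC (nominal 12.5 ng/mL) measured in several runs.
low_qc_runs = [12.1, 12.8, 13.0, 12.4, 12.9]
print(f"CV = {inter_day_cv_percent(low_qc_runs):.1f}%")
print(f"Accuracy = {accuracy_percent(low_qc_runs, nominal=12.5):+.1f}%")
```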
Safety Safety was evaluated by reported adverse events (AEs) at each study visit and while the subject was on-site, and included assessment of clinical laboratory parameters, physical examination findings, vital signs, and 12-lead electrocardiogram. Safety analyses were performed on all randomized subjects who took ≥1 dose of study drug and had ≥1 post-dose safety assessment (safety analysis set). Safety data were summarized by treatment group and by treatment group stratified by age (5–12 years and 13–17 years). Sample size It was anticipated that up to 60 subjects would be needed for screening to enroll 45 subjects. Thirty subjects were required to complete the study and, per agreement with the FDA, a minimum of six subjects were to be assigned to each age group (5–12 years and 13–17 years), as well as a minimum of six subjects per dose level (30, 60, and 100 mg/kg/day). Non-compartmental pharmacokinetic analysis Pharmacokinetic parameters were determined (WinNonlin 5.2; Pharsight Corporation, Mountain View, CA, USA) for 5-ASA and Ac-5-ASA for all subjects in the safety analysis set who generated sufficient plasma samples to allow reliable determination of maximum concentration (Cmax,ss) and area under the curve for the defined interval between doses (AUCss; tau=24 h) at steady state (ie, the pharmacokinetic set). All calculations were based on actual sampling times. Pharmacokinetic parameters that were derived based on 5-ASA and/or Ac-5-ASA concentrations, as appropriate, included AUCss, Cmax,ss, time of maximum observed concentration sampled during a dosing interval (tmax), cumulative amount recovered in urine in time interval 0–24 hours (Xu0–24h), clearance from the blood by the kidneys (CLR), metabolic ratio (Ac-5-ASA:5-ASA) calculated using Cmax,ss (MRCmax,ss), metabolic ratio (Ac-5-ASA:5-ASA) calculated using AUCss (MRAUCss), and percentage of the dose absorbed, calculated as: % of dose absorbed = ([Xu0–24h of 5-ASA + 0.7847 × Xu0–24h of Ac-5-ASA] / Dose) × 100, where 0.7847 is the ratio of the molecular weight of 5-ASA (153.14) to the molecular weight of Ac-5-ASA (195.15). AUC values were calculated using the linear trapezoidal method when concentrations were increasing, and the logarithmic trapezoidal method when concentrations were decreasing. No inferential statistical analyses were conducted on the pharmacokinetic data. Summary statistics were presented by treatment group and by treatment group stratified by age (5–12 years, 13–17 years) for all pharmacokinetic parameters. Achievement of steady state was assessed by visual inspection of pre-dose plasma concentrations on days 5, 6, and 7. Systemic exposure (Cmax,ss and AUCss) was assessed by comparison with historical data in healthy adult subjects administered multimatrix mesalamine 2,400 or 4,800 mg/day.15,16 Population pharmacokinetic analysis A population pharmacokinetic model was developed using non-linear mixed effects modeling (NONMEM® program, Version 7.2.0; ICON, Ellicott City, MD, USA) to describe the population variability in 5-ASA/Ac-5-ASA pharmacokinetics and the relationship between pharmacokinetic parameters and potential explanatory covariates (eg, age, weight, and sex). Pharmacokinetic parameters were estimated using Monte-Carlo Importance Sampling Expectation Maximization method with “Mu Referencing”.17 Development of the population pharmacokinetic model consisted of building a base model, followed by development of a covariate model using an interim data cut (40 subjects); the final model was updated with data from an additional 12 subjects from the study. Structural model selection was data driven, based on goodness-of-fit plots (observed vs predicted concentration, conditional weighted residual vs predicted concentration or time, histograms of individual random effects, etc), successful convergence, plausibility and precision of parameter estimates, and the minimum objective function value. Missing drug concentrations and concentrations reported as “not quantifiable” were excluded in the analysis.
The final pharmacokinetic model was evaluated using visual predictive check (VPC), and this model was used to simulate the expected 5-ASA and Ac-5-ASA plasma concentration profiles in a broader population of children and adolescents using Trial Simulator (Version 2.2.1, Pharsight Corporation). Data presentation and construction of plots were performed using S-PLUS (Version 8.1; Tibco Software Inc., Palo Alto, CA, USA). The simulated exposures were compared to historical adult exposures.15,16 A population pharmacokinetic model was developed using non-linear mixed effects modeling (NONMEM® program, Version 7.2.0; ICON, Ellicott City, MD, USA) to describe the population variability in 5-ASA/Ac-5-ASA pharmacokinetics and the relationship between pharmacokinetic parameters and potential explanatory covariates (eg, age, weight, and sex). Pharmacokinetic parameters were estimated using Monte-Carlo Importance Sampling Expectation Maximization method with “Mu Referencing”.17 Development of the population pharmacokinetic model consisted of building a base model, followed by development of a covariate model using an interim data cut (40 subjects); the final model was updated with data from an additional 12 subjects from the study. Structural model selection was data driven, based on goodness-of-fit plots (observed vs predicted concentration, conditional weighted residual vs predicted concentration or time, histograms of individual random effects, etc), successful convergence, plausibility and precision of parameter estimates, and the minimum objective function value. Missing drug concentrations and concentrations reported as “not quantifiable” were excluded in the analysis. The final pharmacokinetic model was evaluated using visual predictive check (VPC), and this model was used to simulate the expected 5-ASA and Ac-5-ASA plasma concentration profiles in a broader population of children and adolescents using Trial Simulator (Version 2.2.1, Pharsight Corporation). Data presentation and construction of plots were performed using S-PLUS (Version 8.1; Tibco Software Inc., Palo Alto, CA, USA). The simulated exposures were compared to historical adult exposures.15,16 Study design: This Phase I, multicenter, randomized, open-label, three-arm study was conducted in 12 sites across three countries (United States, Poland, and Slovakia). Children and adolescents (aged 5–17 years) with a diagnosis of UC were randomly assigned to receive multimatrix mesalamine 30, 60, or 100 mg/kg once daily (up to a maximum of 4,800 mg/day) each morning for 7 days. To achieve these doses in children, smaller-sized 300 and 600 mg tablets were developed for this study to augment the existing approved 1,200 mg multimatrix mesalamine tablet. Total daily doses for the study ranged from 900 to 4,800 mg/day. Randomization was stratified by body weight (18–24, 25–49, and 50–82 kg); subjects weighing 18–24 kg were only randomized to 60 and 100 mg/kg/day groups to avoid receiving doses less than 900 mg, and subjects weighing 50–82 kg were only randomized to 30 and 60 mg/kg/day groups to avoid receiving doses greater than 4,800 mg, the maximum approved dose in adult subjects. Randomization was achieved via use of a randomization number allocated prior to dosing, once eligibility had been determined, and a randomization schedule was produced by an interactive voice response system vendor. Subjects were dosed with multimatrix mesalamine every morning for 7 days, at home on days 1–4, and on-site on days 5–7. 
On day 7, blood and urine pharmacokinetic samples were collected and safety assessments were performed until 24 hours post-dose. The study protocol, protocol amendments, informed consent documents, relevant supporting information, and subject recruitment information were submitted to and approved by the respective independent ethics committees, institutional review boards, and regulatory agencies prior to study initiation. The independent ethics committees, institutional review boards, and regulatory agencies are from the following: United States: Arkansas Children’s Hospital, Advanced Clinical Research Institute, University of California, San Francisco, Connecticut Children’s Medical Center, University of Maryland Medical Center for Children, and Mayo Clinic; Australia: Royal Children’s Hospital Melbourne; Poland: Klinika Pediatrii Gastroenterologii i Zywienia, Uniwersytecki Szpital Dzieciecy w Krakowie, Klinika Pediatrii Dzieciecy Szpital Kliniczny im prof Antoniego Gebali, Kliniczny Oddzial Pediatrii z Pododdzialem Neurologii Dzieciecej Szpital Wojewodzki, Klinika Gastroenterolofii Pediatrii, Instytut Centrum Zdrowia Matki Polki, Oddzial Gastroenterologii i Hepatologii, and Instytut Pomnik-Centrum Zdrowia Dziecka; Slovakia: Gastroenterologicka ambulancia, Univerzitna nemocnica Martin, and DFNsP Banska Bystrica; and the United Kingdom: Alder Hey Children’s NHS Foundation Trust, Barts Health NHS Trust/Royal London Hospital, Somers Clinical Research Facility/Great Ormond Street Hospital, Southampton General Hospital. Further details are available at https://clinicaltrials.gov/ct2/show/NCT01130844. All authors had access to the study data, and reviewed and approved the final manuscript. Study population: Children (aged 5–12 years) and adolescents (aged 13–17 years), weighing 18–82 kg, with a diagnosis of UC ≥3 months prior to the first dose of study drug were enrolled. Subjects already on a 5-ASA product had to have been on a stable regimen for ≥4 weeks prior to the first dose of study drug. In addition to each subject documenting assent, the subject’s parent or legal representative had to provide informed consent. Subjects with current or recurrent disease (other than UC) that could affect the colon or the action, absorption, or disposition of the study drug were excluded. Additional exclusion criteria included: UC confined to the rectum; allergy, hypersensitivity, or poor tolerability to salicylates or aminosalicylates; history of hepatic or renal impairment, pancreatitis, or Reyes syndrome; use of another investigational product ≤30 days prior to the first dose of study drug; serious, severe, or unstable psychiatric or physical illness; positive urine screen for drugs or abuse of alcohol; and pregnancy or lactation in female subjects. Pharmacokinetic evaluations: Blood samples (2 mL) for the measurement of 5-ASA and Ac-5-ASA plasma concentrations were taken pre-dose on days 5 and 6 and at the following time points on day 7: pre-dose, and 2, 4, 6, 9, 12, 16, and 24 hours after dosing. In addition, a complete 0- to 24-hour urine collection was made for the determination of urinary excretion of 5-ASA and Ac-5-ASA, starting within 30 minutes before the morning meal on day 7 until the final void scheduled for 30 minutes before the morning meal on day 8. Plasma and urine concentrations of 5-ASA and Ac-5-ASA were determined by validated methods based on liquid chromatography with tandem mass spectrometry (LC-MS/MS). 
The bioanalytical methods used for the quantitation of the plasma and the urine samples were validated, and quality control and calibration standard data were accepted in accordance with the U.S. Food and Drug Administration (FDA) guidance for bioanalytical method validation.14 Pharmacokinetic parameters were determined from the plasma concentration–time data by non-compartmental analysis. The plasma assay ranged from 5 to 5,000 ng/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high quality control (QC) samples were analyzed along with the study samples. The QC sample concentrations of both analytes in plasma were 12.5, 2,500, and 4,000 ng/mL. The inter-day coefficient of variation (CV) values ranged from 5.0% to 7.6% for 5-ASA and from 4.8% to 10.1% for Ac-5-ASA. Accuracy (expressed as percentage of the difference of the mean value for each pool from the theoretical concentration) values ranged from −0.8% to 1.6% for 5-ASA and from 2.4% to 5.2% for Ac-5-ASA. In 12 out of 14 analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in plasma were each 10,000 ng/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control plasma. The inter-day CV value was 5.5% for 5-ASA and 6.1% for Ac-5-ASA, and the accuracy values were 4.0% and 4.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from plasma were shown during assay validation to be >95%. The urine assay ranged from 5 to 1,000 µg/mL for both 5-ASA and Ac-5-ASA. In each analytical batch, two low, two mid, and two high QC samples were analyzed along with the study samples. The QC sample concentrations of both analytes were 15, 400, and 800 µg/mL. The inter-day CV values ranged from 6.7% to 11.7% for 5-ASA and from 7.6% to 10.2% for Ac-5-ASA. Accuracy values ranged from −5.5% to −1.3% for 5-ASA and from 1.4% to 4.0% for Ac-5-ASA. In three out of nine analytical batches, three dilution QC samples were analyzed along with the study samples. The dilution QC sample concentrations for both analytes in urine were each 2,000 µg/mL, and the dilution QC samples were analyzed after a 1:5 dilution with control urine. The inter-day CV value was 7.8% for 5-ASA and 7.2% for Ac-5-ASA, and the accuracy values were −2.5% and 2.0%, respectively. Recoveries of 5-ASA and Ac-5-ASA from urine were shown during assay validation to be >90%. A total of 52 plasma samples and six urine samples were re-analyzed to assess incurred sample reproducibility. Results for 100% of these samples were found to be within ±20% of the mean of the original and re-assay results, meeting the predefined acceptance criteria. Safety: Safety was evaluated by reported adverse events (AEs) at each study visit and while the subject was on-site, and included assessment of clinical laboratory parameters, physical examination findings, vital signs, and 12-lead electrocardiogram. Safety analyses were performed on all randomized subjects who took ≥1 dose of study drug and had ≥1 post-dose safety assessment (safety analysis set). Safety data were summarized by treatment group and by treatment group stratified by age (5–12 years and 13–17 years). Sample size: It was anticipated that up to 60 subjects would be needed for screening to enroll 45 subjects. 
Thirty subjects were required to complete the study and, per agreement with the FDA, a minimum of six subjects were to be assigned to each age group (5–12 years and 13–17 years), as well as a minimum of six subjects per dose level (30, 60, and 100 mg/kg/day). Non-compartmental pharmacokinetic analysis: Pharmacokinetic parameters were determined (WinNonlin 5.2; Pharsight Corporation, Mountain View, CA, USA) for 5-ASA and Ac-5-ASA for all subjects in the safety analysis set who generated sufficient plasma samples to allow reliable determination of maximum concentration (Cmax,ss) and area under the curve for the defined interval between doses (AUCss; tau=24 h) at steady state (ie, the pharmacokinetic set). All calculations were based on actual sampling times. Pharmacokinetic parameters that were derived based on 5-ASA and/or Ac-5-ASA concentrations, as appropriate, included AUCss, Cmax,ss, time of maximum observed concentration sampled during a dosing interval (tmax), cumulative amount recovered in urine in time interval 0–24 hours (Xu0–24h), clearance from the blood by the kidneys (CLR), metabolic ratio (Ac-5-ASA:5-ASA) calculated using Cmax,ss (MRCmax,ss), metabolic ratio (Ac-5-ASA:5-ASA) calculated using AUCss (MRAUCss), and percentage of the dose absorbed, calculated as: % of dose absorbed = ([Xu0–24h of 5-ASA + 0.7847 × Xu0–24h of Ac-5-ASA] / Dose) × 100, where 0.7847 is the ratio of the molecular weight of 5-ASA (153.14) to the molecular weight of Ac-5-ASA (195.15). AUC values were calculated using the linear trapezoidal method when concentrations were increasing, and the logarithmic trapezoidal method when concentrations were decreasing. No inferential statistical analyses were conducted on the pharmacokinetic data. Summary statistics were presented by treatment group and by treatment group stratified by age (5–12 years, 13–17 years) for all pharmacokinetic parameters. Achievement of steady state was assessed by visual inspection of pre-dose plasma concentrations on days 5, 6, and 7. Systemic exposure (Cmax,ss and AUCss) was assessed by comparison with historical data in healthy adult subjects administered multimatrix mesalamine 2,400 or 4,800 mg/day.15,16 Population pharmacokinetic analysis: A population pharmacokinetic model was developed using non-linear mixed effects modeling (NONMEM® program, Version 7.2.0; ICON, Ellicott City, MD, USA) to describe the population variability in 5-ASA/Ac-5-ASA pharmacokinetics and the relationship between pharmacokinetic parameters and potential explanatory covariates (eg, age, weight, and sex). Pharmacokinetic parameters were estimated using Monte-Carlo Importance Sampling Expectation Maximization method with “Mu Referencing”.17 Development of the population pharmacokinetic model consisted of building a base model, followed by development of a covariate model using an interim data cut (40 subjects); the final model was updated with data from an additional 12 subjects from the study. Structural model selection was data driven, based on goodness-of-fit plots (observed vs predicted concentration, conditional weighted residual vs predicted concentration or time, histograms of individual random effects, etc), successful convergence, plausibility and precision of parameter estimates, and the minimum objective function value. Missing drug concentrations and concentrations reported as “not quantifiable” were excluded in the analysis.
The final pharmacokinetic model was evaluated using visual predictive check (VPC), and this model was used to simulate the expected 5-ASA and Ac-5-ASA plasma concentration profiles in a broader population of children and adolescents using Trial Simulator (Version 2.2.1, Pharsight Corporation). Data presentation and construction of plots were performed using S-PLUS (Version 8.1; Tibco Software Inc., Palo Alto, CA, USA). The simulated exposures were compared to historical adult exposures.15,16

Results:

Subjects: Between October 2010 and June 2013, a total of 52 subjects were screened, randomized, and treated; all completed the study (21 subjects [5 children, 16 adolescents] in the 30 mg/kg/day dose group, 22 [4 children, 18 adolescents] in the 60 mg/kg/day dose group, and 9 [7 children, 2 adolescents] in the 100 mg/kg/day dose group; Figure 1). While the study met the FDA enrollment requirement of a minimum of six subjects per dose level, more patients were enrolled into the 30 and 60 mg/kg/day dose groups than into the 100 mg/kg/day dose group due to difficulties enrolling children who weighed <49 kg. Overall, demographic data and baseline disease characteristics were well balanced between dose groups, although fewer subjects were studied in the 100 mg/kg/day dose group due to enrollment difficulties (Table 1).

Non-compartmental pharmacokinetics: Inter-assay accuracy and precision data for 5-ASA and Ac-5-ASA in the QC samples at three concentrations each in human plasma and urine across all analytical batches are shown in Table S1. For 5-ASA, mean plasma concentration–time profiles attained maxima at ~6 or 9 hours post-dose, with a secondary peak at 24 hours post-dose (Figure 2A; Table 2). Median tmax was 6 and 9 hours (range 0–24 hours for all dose levels) post-dose for 30 and 60 mg/kg/day doses, respectively. For 100 mg/kg/day, median tmax was approximately 2 hours post-dose; however, there were only nine subjects at 100 mg/kg/day. Steady-state plasma concentrations for 5-ASA were attained by day 5 for all doses. On day 7, systemic exposure to 5-ASA (mean AUCss and Cmax,ss) increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. Based on urinary recovery, the mean percentages of 5-ASA absorbed from multimatrix mesalamine were 29.4%, 27.0%, and 22.1% for 30, 60, and 100 mg/kg/day doses, respectively. High between-subject variability was noted for 5-ASA AUCss and Cmax,ss, with arithmetic CV values ranging from 36% to 52% and 52% to 60%, respectively. Mean CLR ranged from 5.0 to 6.5 L/h, with a trend toward decreasing with increasing dose.
Mean plasma concentration–time profiles of plasma Ac-5-ASA were similar to those of the parent drug 5-ASA (Figure 2B; Table 2), with median tmax of 9, 7.5, and 2 hours post-dose for 30, 60, and 100 mg/kg/day doses, respectively. Steady-state plasma concentrations for Ac-5-ASA were attained by day 5 for all doses. On day 7, as for 5-ASA, systemic exposure to Ac-5-ASA increased in a dose-proportional manner between 30 and 60 mg/kg/day doses, but in a subproportional manner between 60 and 100 mg/kg/day doses. The metabolite Ac-5-ASA was more abundant than the parent drug (Table 2; MRAUCss of Ac-5-ASA:5-ASA), with no apparent trend across dose. Mean CLR ranged from 10.0 to 16.2 L/h, with a trend toward decreasing with increasing dose. Moderate to high between-subject variability was noted for AUCss and Cmax,ss, with arithmetic CV values ranging from 35% to 44% and 40% to 59%, respectively. There was no apparent difference in 5-ASA or Ac-5-ASA systemic exposure, as measured by mean AUCss and Cmax,ss, between children (aged 5–12 years) and adolescents (aged 13–17 years) for this weight-based dosing paradigm (data not shown).
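For orientation, the sketch below applies the standard non-compartmental definitions that underlie the CLR and metabolic-ratio values reported above (renal clearance as urinary recovery over the dosing interval divided by AUCss, and MRAUCss as the metabolite-to-parent AUC ratio). All numbers are hypothetical and are not study data.

```python
# Hedged sketch of two derived parameters, using standard non-compartmental
# definitions (hypothetical values, not study data):
#   CLR       = Xu(0-24 h) / AUCss over the same interval
#   MR_AUCss  = AUCss(Ac-5-ASA) / AUCss(5-ASA)

def renal_clearance_L_per_h(xu_0_24h_mg, auc_ss_mg_h_per_L):
    """Amount excreted unchanged in urine over the dosing interval / AUC over that interval."""
    return xu_0_24h_mg / auc_ss_mg_h_per_L

def metabolic_ratio(auc_metabolite, auc_parent):
    return auc_metabolite / auc_parent

auc_5asa = 14.0     # mg*h/L, hypothetical steady-state AUC of 5-ASA
auc_ac5asa = 45.0   # mg*h/L, hypothetical steady-state AUC of Ac-5-ASA
xu_5asa = 80.0      # mg of 5-ASA recovered in urine over 0-24 h, hypothetical

print("CLR(5-ASA) ~", round(renal_clearance_L_per_h(xu_5asa, auc_5asa), 1), "L/h")
print("MR(AUCss)  ~", round(metabolic_ratio(auc_ac5asa, auc_5asa), 1))
```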
Population pharmacokinetics: The pharmacokinetics of 5-ASA and Ac-5-ASA were adequately described by the population pharmacokinetic structural model (Figure S1) that included: first-order absorption from two depot compartments, absorption lag times, and separate central compartments for 5-ASA and Ac-5-ASA with respective urine compartments for renal clearance. Non-renal clearance of 5-ASA was assumed to involve only metabolism to Ac-5-ASA, and all elimination processes were based on first-order kinetics. Allometric scaling by body weight was applied to all clearance and volume parameters, with the exponents fixed to the theoretical value of 0.75 for clearance and 1 for volume parameters.18 Parameter estimates for the final model are shown in Table S2; following evaluation by VPC, less than 9% of the observed concentrations fell outside the 90% prediction intervals for both analytes, suggesting that the final model adequately described the observed data. For a 70 kg individual, the typical value of 5-ASA apparent renal clearance was estimated to be 1.15 L/h (95% confidence interval [CI]: 1.01–1.32 L/h), and apparent metabolic clearance was estimated to be 85.6 L/h (95% CI: 75.9–96.5 L/h). The typical value of Ac-5-ASA apparent renal clearance was estimated to be 2.54 L/h (95% CI: 2.27–2.86 L/h), and apparent non-renal clearance was estimated to be 74.4 L/h (95% CI: 66.7–83.1 L/h). Typical estimates of the central volume of distribution were 109 L (95% CI: 70.1–169 L) for 5-ASA and 7.10 L (95% CI: 5.31–9.49 L) for Ac-5-ASA. Estimated absorption rates from depot 1 and depot 3 were 0.0334 h−1 (95% CI: 0.0207–0.0539 h−1) and 0.273 h−1 (95% CI: 0.165–0.448 h−1). Corresponding estimated absorption lag times from each depot were 5.10 hours (95% CI: 4.31–6.05 hours) and 15.0 hours (95% CI: 14.0–16.1 hours); the lag time from depot 3 represented the additional lag following the delay in absorption from depot 1. The fraction of dose absorbed from depot 1 was estimated to be 0.734 (95% CI: 0.413–1.06), and the remaining dose fraction was assumed to be absorbed from depot 3. Goodness-of-fit plots for 5-ASA and Ac-5-ASA plasma concentrations are shown in Figures S2 and S3. The population pharmacokinetic model was used to simulate steady-state profiles for both 5-ASA and Ac-5-ASA for four weight groups (18–23, 24–35, 36–50, and 51–90 kg) at planned high doses (1,800, 2,400, 3,600, and 4,800 mg) and low doses (900, 1,200, 1,800, and 2,400 mg) in 80 subjects with 1,000 replications (Figure 3A–D; Tables S3 and S4). The variability in the predicted steady-state exposures for children and adolescents was approximately 50% (CV%) for 5-ASA AUC (Table S3) and 40%–45% for Ac-5-ASA AUC (Table S4). The proposed low dose of multimatrix mesalamine for each weight category is predicted to provide comparable steady-state exposure for both 5-ASA and Ac-5-ASA to those observed following administration of a fixed 2,400 mg dose in the adult population (Figure 3A–D). The proposed high dose for each weight category is predicted to provide comparable steady-state AUC for both 5-ASA and Ac-5-ASA to those observed following administration of a fixed 4,800 mg dose in the adult population.
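To make the allometric-scaling assumption concrete, the short sketch below scales the reported 70 kg typical values to example pediatric body weights using the fixed exponents (0.75 for clearances, 1 for volumes). It is illustrative only and is not the NONMEM model used in the analysis.

```python
# Illustrative sketch of the allometric scaling described above (not the
# NONMEM model itself): typical clearances scale with (WT/70)**0.75 and
# volumes with (WT/70)**1, starting from the reported 70 kg estimates.

REF_WT = 70.0  # kg, reference individual

# Typical values for a 70 kg individual, taken from the text above
typical_70kg = {
    "CLR/F (5-ASA, L/h)": 1.15,
    "CLM/F (5-ASA, L/h)": 85.6,
    "Vc/F (5-ASA, L)": 109.0,
}

def scale(value, weight_kg, exponent):
    """Allometric scaling of a typical parameter value to a given body weight."""
    return value * (weight_kg / REF_WT) ** exponent

for wt in (20, 35, 50):  # example pediatric body weights (kg)
    clr = scale(typical_70kg["CLR/F (5-ASA, L/h)"], wt, 0.75)
    clm = scale(typical_70kg["CLM/F (5-ASA, L/h)"], wt, 0.75)
    vc = scale(typical_70kg["Vc/F (5-ASA, L)"], wt, 1.0)
    print(f"{wt} kg: CLR/F = {clr:.2f} L/h, CLM/F = {clm:.1f} L/h, Vc/F = {vc:.0f} L")
```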
Safety: The incidence of treatment-emergent adverse events (TEAEs) was 19.2% (ten subjects overall); all TEAEs were mild to moderate (Table 3). Incidence rates were similar among different dose groups, and the most commonly reported TEAEs were abdominal pain, musculoskeletal pain, and headache, each reported in 3.8% of subjects (n=2 each).
There were no TEAEs leading to premature discontinuation. Two subjects (3.8%) experienced a TEAE considered by the investigator to be related to the study drug: one subject in the 30 mg/kg/day dose group experienced abdominal pain, dehydration, and vomiting, and one subject in the 60 mg/kg/day dose group experienced moderate upper abdominal pain. No relevant differences between the age groups were observed with regard to the occurrence, severity, or relatedness of TEAEs, and no clinically relevant abnormalities in biochemistry, hematology, urinalysis, or vital sign values were observed. No new safety signals were reported.
Discussion: This is the first study to evaluate the safety and pharmacokinetics of multimatrix mesalamine in children and adolescents diagnosed with UC. Multimatrix mesalamine was generally safe and well tolerated. No fatal TEAEs, other serious TEAEs, or TEAEs leading to discontinuation of treatment occurred during the study, and the incidence of TEAEs was comparable between dose and age groups. Furthermore, the types and frequencies of clinical laboratory abnormalities and other safety parameters were low, and comparable between dose and age groups. The pharmacokinetic profiles of 5-ASA and Ac-5-ASA at 30 or 60 mg/kg once daily were similar to those observed historically in adults after a 2,400 mg daily dose.15,16 Likewise, the pharmacokinetic profiles of 5-ASA and Ac-5-ASA at 100 mg/kg once daily were also similar to those in adults after a 4,800 mg daily dose.15,16 These conclusions were based on the variable absorption profiles with initial and secondary peaks, the percentage of the 5-ASA absorbed, renal clearance of 5-ASA and Ac-5-ASA, plasma exposure (AUCss and Cmax,ss), and the inter-subject variability in pharmacokinetic parameters (Table 2).15,16 For example, in adults, the total absorption of 5-ASA from multimatrix mesalamine 2.4 or 4.8 g administered once daily for 14 days to healthy volunteers was approximately 21%–22% of the administered dose.15 In the current study, 5-ASA absorption ranged from 22% to 29% across doses.
The overall similarity in 5-ASA and Ac-5-ASA exposures between children/adolescents and adults suggests that the novel, smaller 300 and 600 mg multimatrix mesalamine tablets developed for this study performed as intended, delivering 5-ASA to the colon in a similar fashion to the commercial 1,200 mg multimatrix mesalamine tablets. Hence, these new 300 and 600 mg tablets may be suitable for use in future studies. For both 5-ASA and Ac-5-ASA, pharmacokinetic steady state was attained by day 5 for all doses, and systemic exposure of 5-ASA and Ac-5-ASA (measured by mean AUCss and Cmax,ss) on day 7 increased in a dose-proportional manner between 30 and 60 mg/kg/day doses (Table 2). Mean AUCss and Cmax,ss increased subproportionally between 60 and 100 mg/kg/day doses, possibly due to the maximum dose restriction of 4,800 mg/day, with the highest dose of 100 mg/kg only being administered to subjects weighing less than 50 kg. In the 100 mg/kg/day dose group, a mean of 22.1% of the dose was absorbed (similar to the 21%–22% observed in adults),15 whereas the corresponding values for the 30 and 60 mg/kg/day dose groups were higher (29.4% and 27.0%, respectively), suggesting that there also may be differences in the extent of absorption. Nevertheless, these assessments of dose-proportionality should be interpreted with caution due to the large inter-subject pharmacokinetic variability and the small sample size (n=9) receiving 100 mg/kg/day. In addition to the typical non-compartmental pharmacokinetic analysis, a population pharmacokinetic model was developed to describe the pharmacokinetics of 5-ASA and Ac-5-ASA (Figure S1), and was subsequently used to simulate exposures in a broader population of children and adolescents with UC, following repeat oral administration of multimatrix mesalamine (Figures 3A–D; Tables S3 and S4). The variability in the predicted steady-state exposures for children and adolescents was similar to the observed variability in adults for both 5-ASA AUC (CV ≈50% in children and adolescents and 60% in adults) and Ac-5-ASA AUC (CV ≈40%–45% in both populations).15,16 Additionally, absorption from two depots, as described in the final model, is consistent with the multimatrix mesalamine tablet, delivering an initial burst of 5-ASA that occurs from disintegration of the enteric coating, followed by leaching of 5-ASA out of the core as the tablet travels through the terminal ileum and colon. Therefore, the absorption data provide support for the drug being delivered in accordance with the design concept. Finally, results from the modeling and simulation in the current analysis suggest that the proposed dosing regimen for examining the safety and efficacy of multimatrix mesalamine in children and adolescents is likely to produce exposures to 5-ASA and Ac-5-ASA within the range of exposures at which safety and efficacy have been established in adults. As a result, these data supported the proposed doses for the Phase III safety and efficacy trial of multimatrix mesalamine in pediatric patients with UC (PACE; NCT02093663). Supplementary materials: Schematic of final population pharmacokinetic structural model. Notes: Numbers on the top left-hand corner of the boxes represent the compartment numbers for the population pharmacokinetic model in the NONMEM code. Hence, the two depot compartments were denoted as depot 1 and depot 3. 
Abbreviations: ALAG1, absorption lag time from depot 1; 5-ASA, 5-aminosalicylic acid; Ac-5-ASA, acetyl-5-aminosalicylic acid; Ka1, absorption rate constant from depot 1; F1, fraction of dose absorbed from depot 1; Ka3, absorption rate constant from depot 3; ALAG3, absorption lag time from depot 3 in addition to the lag time from depot 1; CLM, metabolic clearance of 5-ASA; CLNRM, non-renal clearance of Ac-5-ASA; CLR, renal clearance of 5-ASA; CLRM, renal clearance of Ac-5-ASA.

Figure S2. Goodness-of-fit and diagnostic plots for the final model in children and adolescents: plasma 5-ASA. Notes: The dashed line represents the local regression (Loess) smoothing line. (A) Observed versus population predicted concentration; (B) observed versus individual predicted concentration; (C) conditional weighted residual versus population predicted concentration; (D) conditional weighted residual versus time. Abbreviation: 5-ASA, 5-aminosalicylic acid.

Figure S3. Goodness-of-fit and diagnostic plots for the final model in children and adolescents: plasma Ac-5-ASA. Notes: The dashed line represents the local regression (Loess) smoothing line. (A) Observed versus population predicted concentration; (B) observed versus individual predicted concentration; (C) conditional weighted residual versus population predicted concentration; (D) conditional weighted residual versus time. Abbreviation: Ac-5-ASA, acetyl-5-aminosalicylic acid.

Table S1. Assay performance of 5-ASA and Ac-5-ASA bioanalytical quality control samples in human plasma and urine. Abbreviations: Ac-5-ASA, acetyl-5-aminosalicylic acid; 5-ASA, 5-aminosalicylic acid; QC, quality control; CV, coefficient of variation.

Table S2. Parameter estimates of the final 5-ASA/Ac-5-ASA population pharmacokinetic model. Notes: The reference population for the pharmacokinetic parameters CLR/F, CLM/F, Vc/F, CLRM/F, CLNRM/F, and VcM/F is an individual weighing 70 kg. Estimates and 95% CIs are back-transformed from the loge scale. CV of a typical parameter value: CVTVP (%) = 100 × √(e^(ω²P) − 1), reported when ω²P exceeds 0.15. CV of the proportional residual error (%) = 100 × √(σ²prop). Abbreviations: 5-ASA, 5-aminosalicylic acid; Ac-5-ASA, acetyl-5-aminosalicylic acid; ALAG1, absorption lag time from depot 1; ALAG3, absorption lag time from depot 3 in addition to the lag time from depot 1; CI, 95% confidence interval on the parameter; CLM/F, apparent metabolic clearance of 5-ASA; CLNRM/F, apparent non-renal clearance of Ac-5-ASA; CLR/F, apparent renal clearance of 5-ASA; CLRM/F, apparent renal clearance of Ac-5-ASA; CV, coefficient of variation; F1, fraction of dose absorbed from depot 1; FIX, fixed; Ka1, absorption rate constant from depot 1; Ka3, absorption rate constant from depot 3; NA, not applicable; %RSE, percent relative standard error of the estimate = standard error/parameter estimate × 100; Vc/F, apparent volume of central compartment of 5-ASA; VcM/F, apparent volume of central compartment of Ac-5-ASA; WT, body weight; ω2CLR, variance of random effect of CLR/F; ω2CL, variance of random effect of CLM/F; ω2Vc, variance of random effect of Vc/F; ω2CLRM, variance of random effect of CLRM/F; ω2CLNR, variance of random effect of CLNRM/F; ω2VcM, variance of random effect of VcM/F; ω2Ka1, variance of random effect of Ka1; ω2Ka3, variance of random effect of Ka3; ω2ALAG1, variance of random effect of ALAG1; ω2ALAG3, variance of random effect of ALAG3; ω2F1, variance of random effect of F1; σ2prop, proportional component of the residual error model.

Table S3. Simulated steady-state 5-ASA exposures by weight and dose group. Note: Summarized as median (2.5th percentile, 97.5th percentile).
Abbreviations: 5-ASA, 5-aminosalicylic acid; AUC, area under the curve; SD, standard deviation; CV, coefficient of variation; PI, percentile.

Table S4. Simulated steady-state Ac-5-ASA pharmacokinetic parameters by dose and age group. Note: Summarized as median (2.5th percentile, 97.5th percentile). Abbreviations: Ac-5-ASA, acetyl-5-aminosalicylic acid; AUC, area under the curve; SD, standard deviation; CV, coefficient of variation; PI, percentile.
Background: Limited data are available on mesalamine (5-aminosalicylic acid; 5-ASA) use in pediatric ulcerative colitis (UC). Methods: Participants (5-17 years of age; 18-82 kg, stratified by weight) with UC received multi-matrix mesalamine 30, 60, or 100 mg/kg/day once daily (to 4,800 mg/day) for 7 days. Blood samples were collected pre-dose on days 5 and 6. On days 7 and 8, blood and urine samples were collected and safety was evaluated. 5-ASA and Ac-5-ASA plasma and urine concentrations were analyzed by non-compartmental methods and used to develop a population pharmacokinetic model. Results: Fifty-two subjects (21 [30 mg/kg]; 22 [60 mg/kg]; 9 [100 mg/kg]) were randomized. On day 7, systemic exposures of 5-ASA and Ac-5-ASA exhibited a dose-proportional increase between 30 and 60 mg/kg/day cohorts. For 30, 60, and 100 mg/kg/day doses, mean percentages of 5-ASA absorbed were 29.4%, 27.0%, and 22.1%, respectively. Simulated steady-state exposures and variabilities for 5-ASA and Ac-5-ASA (coefficient of variation approximately 50% and 40%-45%, respectively) were similar to those observed previously in adults at comparable doses. Treatment-emergent adverse events were reported by ten subjects. Events were similar among different doses and age groups with no new safety signals identified. Conclusions: Children and adolescents with UC receiving multimatrix mesalamine demonstrated 5-ASA and Ac-5-ASA pharmacokinetic profiles similar to historical adult data. Multimatrix mesalamine was well tolerated across all dose and age groups. ClinicalTrials.gov Identifier: NCT01130844.
null
null
14,071
351
[ 4606, 526, 197, 744, 95, 80, 352, 582, 681, 186 ]
16
[ "asa", "dose", "ac", "ac asa", "day", "mg", "kg", "subjects", "asa ac asa", "asa ac" ]
[ "efficacy multimatrix mesalamine", "uc chronic inflammatory", "mesalamine pediatric uc", "colitis", "introduction ulcerative colitis" ]
Lipid and metabolic alteration involvement in physiotherapy for chronic nonspecific low back pain.
36434687
Chronic nonspecific low back pain (cNLBP) is a common health problem worldwide, affecting 65-80% of the population, greatly reducing quality of life and productivity, and causing huge economic losses. Manual therapy (MT) and therapeutic exercise (TE) are effective physiotherapy-based treatment options for cNLBP. However, the underlying mechanisms by which MT or TE promote cNLBP amelioration are incompletely understood.
BACKGROUND
Seventeen recruited subjects were randomly divided into an MT group and a TE group. Subjects in the MT group performed muscular relaxation, myofascial release, and mobilization for 20 min during each treatment session. The treatment lasted for a total of six sessions, once every two days. Subjects in the TE group completed motor control and core stability exercises for 30 min during each treatment session. The motor control exercise included stretching of the trunk and extremity muscles through trunk and hip rotation and flexion training. Stabilization exercises consisted of the (1) bridge exercise, (2) single-leg-lift bridge exercise, (3) side bridge exercise, (4) two-point bird-dog position with an elevated contralateral leg and arm, (5) bear crawl exercise, and (6) dead bug exercise. The treatment lasted for a total of six sessions, with one session every two days. Serum samples were collected from subjects before and after physiotherapy-based treatment for lipidomic and metabolomic measurements.
METHODS
Through lipidomic analysis, we found that the phosphatidylcholine/phosphatidylethanolamine (PC/PE) ratio decreased and the sphingomyelin/ceramide (SM/Cer) ratio increased in cNLBP patients after MT or TE treatment. In addition, eight metabolites enriched in pyrimidine and purine pathways differed significantly in cNLBP patients who received MT treatment. In the metabolomic analysis, a total of nine metabolites enriched in pyrimidine, tyrosine, and galactose pathways differed significantly in cNLBP patients after TE treatment.
RESULTS
Our study was the first to elucidate the lipidomic and metabolomic alterations accompanying physiotherapy-based treatment of cNLBP, and it expands our knowledge of how such treatment may act.
CONCLUSION
[ "Lipids", "Low Back Pain", "Physical Therapy Modalities", "Pyrimidines", "Quality of Life", "Humans" ]
9700977
Introduction
Chronic nonspecific low back pain (cNLBP) is not caused by recognizable pathology; it lasts for more than three months and occurs between the lower rib and the inferior gluteal fold [1]. It is estimated that approximately four out of five people have lower back pain at some time during their lives, and it greatly affects their quality of life, productivity, and ability to work [2]. cNLBP can be caused by many factors, such as lumbar strain, nerve irritation, and bony encroachment. However, the etiology of cNLBP is typically unknown and poorly understood [3]. Medical treatments and physiotherapy are recommended to treat and resolve issues associated with cNLBP [3]. Therapeutic exercise and manual therapy have a lower risk of increasing future back injuries or work absence and are more effective treatment options for chronic pain than medication or surgery, and they can be performed at rehabilitation clinics [4–6]. Exercise therapy is a widely used strategy to cope with low back pain that includes a heterogeneous group of interventions ranging from aerobic exercise or general physical fitness to muscle strengthening and various types of flexibility and stretching exercises [7]. Manual therapy is another effective method to deal with low back pain, in which hands are used to apply a force with a therapeutic intent, including massage, joint mobilization/manipulation, myofascial release, nerve manipulation, strain/counter strain, and acupressure [8]. However, the reasons therapeutic exercise and manual therapy ameliorate cNLBP are still unknown. With the development of lipidomics and metabolomics, many studies have indicated that lipid or metabolite alterations are associated with chronic pain [9, 10]. Lipids, as primary metabolites, are not only structural components of membranes but can also be used as signaling molecules to regulate many physiological activities. For example, fatty acid (FA) chains can be saturated (SFA), monounsaturated (MUFA), or polyunsaturated (PUFA), and the ratio of saturated to unsaturated FAs participates in the regulation of longevity [11]. Phosphatidylcholine (PC) phosphatidylethanolamine (PE) is abundant in membranes. In mammals, cellular PC/PE molar ratios that are out of balance and increase or decrease abnormally can cause diseases [12]. For example, a reduced PC/PE ratio can protect mice against atherosclerosis [13]. Decreasing the PC/PE molar ratio can change the intracellular energy supply by activating the electron transport chain and mitochondrial respiration [14]. Lysophosphatidylcholine (LPC) 16:0 correlated with pain outcomes in a cohort of patients with osteoarthritis [15]. Apart from phospholipids, studies have shown that sphingolipid metabolism also contributes to chronic pain. Increased ceramide and sphingosine-1-phosphate (S1P) are involved in the progression of chronic pain in the nervous system [16]. Previous studies reported that metabolites were also associated with pain. Patients with neuropathic pain showed elevated choline-containing compounds in response to myoinositol [tCho/mI] under magnetic resonance spectroscopy [17]. Flavonoids are the most common secondary plant metabolites used as tranquilizers in folkloric medicine and have been claimed to reduce neuropathic pain [18]. Patients with chest pain and high plasma levels of deoxyuridine, homoserine, and methionine had an increased risk of myocardial infarction [19]. 
Despite the evidence presented above that pain is associated with specific lipids and metabolites, no studies have shown that MT and TE can relieve cNLBP by altering lipids and metabolites. In this article, we compared the lipidomics and metabolomics of patients with cNLBP before and after treatment to explore differences in lipids and metabolites correlated with cNLBP physiotherapy-based treatment. The newly found data will expand our knowledge of cNLBP physiotherapy-based treatment.
null
null
Results
Lipid composition analysis of cNLBP patients before and after manual therapy: We recruited 17 patients with cNLBP whose demographic information is shown in Table 1. The recruited subjects were randomly divided into MT or TE groups, with no significant differences in age, weight, height, BMI, or VAS score between them. We found that MT treatment was effective in alleviating cNLBP (Fig. 1); after treatment, the VAS score decreased in almost the entire MT group. Serum lipidomics were determined after six MT treatment sessions. Since one participant's blood sample could not be collected after treatment, there were eight effective participants in the MT group.

Table 1 Demographic information among the two groups, M ± SEM
                         MT             TE             P
N (male, count)          8              9
Age (years)              28.75 ± 7.26   28.11 ± 7.45   not significantly different at baseline
Height (m)               1.68 ± 0.08    1.65 ± 0.05    not significantly different at baseline
Weight (kg)              60.06 ± 8.80   58.00 ± 10.58  not significantly different at baseline
BMI (kg/m2)              21.25 ± 2.54   21.17 ± 3.08   not significantly different at baseline
VAS (before treatment)   5.76 ± 1.15    5.63 ± 1.88    not significantly different at baseline
VAS (after treatment)    2.80 ± 1.76    3.47 ± 1.83    not significantly different at baseline
Abbreviations: MT, manual therapy; TE, therapeutic exercise; BMI, body mass index; VAS, visual analog scale (0–10; VAS 0 = no pain; VAS 10 = maximal pain)

Fig. 1 Manual therapy and therapeutic exercise were effective in cNLBP amelioration. Seventeen patients were randomly divided into two groups: one group received manual therapy, and the other group received therapeutic exercise. VAS was recorded before and after treatment. Asterisks show a significant difference from patients before treatment using Student's t tests (**P < 0.01).

We completed lipid extraction and performed qualitative analysis. Through lipidomic analysis, we identified 290 lipids, which can be divided into ten subclasses: phosphatidylcholine (PC), phosphatidylethanolamine (PE), lysophosphatidylcholine (LPC), lysophosphatidylethanolamine (LPE), triacylglycerol (TG), phosphatidylinositol (PI), sphingomyelin (SM), ceramide (Cer), hexosylceramide (HexCer), and fatty acid (FA). As a multivariate statistical method, PLS-DA maximizes between-group separation and can reveal the metabolites that differ between groups. We performed PLS-DA with the MetaboAnalyst R software package and found a clear separation between the nontreated group (pink) and the MT-treated group (green), suggesting differential lipidomic profiles in cNLBP patients before and after manual therapy (Fig. 2A). Next, we used Pearson correlation analysis to measure the closeness of different lipids (Fig. 2B). Using volume measurements of lipids, we analyzed the lipidomic composition of cNLBP patients before and after manual therapy and found a decrease in phosphatidylcholine/phosphatidylethanolamine (PC/PE) molar ratios but an increase in sphingomyelin/ceramide (SM/Cer) molar ratios in the patients after manual therapy.
Meanwhile, there were also decreases in the volumes of fatty acids (FAs) and increases in lysophosphatidylcholine (LPC) and lysophosphatidylethanolamine (LPE) when cNLBP patients were treated with MT (Fig. 3A). We also generated a heatmap to present the volume of the lipids in each sample (Fig. 3B).

Fig. 2 Lipidomic profiles in cNLBP patients before and after treatment with MT. (A) PLS-DA analysis of cNLBP patients treated with MT versus the control group; "1" represents the nontreated group, and "2" represents the MT-treated group. (B) Correlation analysis of the significantly different lipids; different colors represent the level of Pearson's correlation coefficient.

Fig. 3 Lipid identification in cNLBP patients before and after treatment with MT. (A) The composition of nontreated samples and MT-treated samples based on the volume of lipids in each lipid category. (B) Hierarchical clustering analysis of the 10 lipids in each sample; for class name, red represents the control group, and green represents the MT-treated group.
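As an illustration of how the class-level ratios above can be derived from summed lipid volumes, the following sketch uses hypothetical per-sample class totals; the column names and values are invented for demonstration and are not study data.

```python
# Sketch (hypothetical numbers): summing species volumes per lipid class and
# forming the PC/PE and SM/Cer ratios discussed in the text.
import pandas as pd

# Hypothetical per-sample volumes, already summed over all species in each class
volumes = pd.DataFrame(
    {"PC": [320, 310], "PE": [80, 95], "SM": [60, 75], "Cer": [12, 10],
     "FA": [40, 31], "LPC": [18, 24], "LPE": [6, 9]},
    index=["pre_MT", "post_MT"],
)

ratios = pd.DataFrame({
    "PC/PE": volumes["PC"] / volumes["PE"],
    "SM/Cer": volumes["SM"] / volumes["Cer"],
})
print(ratios.round(2))   # in this toy example PC/PE falls and SM/Cer rises after treatment
```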
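The study performed PLS-DA in the MetaboAnalyst R package. For readers more familiar with Python, the hedged sketch below shows the same idea (a PLS model regressed on a 0/1 group label, with latent-variable scores used for the separation plot) on synthetic lipid-intensity data; it is an illustration, not the analysis pipeline used here.

```python
# Hedged illustration only: PLS-DA as PLS regression against a 0/1 group
# label, on a small synthetic lipid-intensity matrix.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_per_group, n_lipids = 8, 290          # 8 paired samples, 290 annotated lipids
before = rng.normal(0.0, 1.0, (n_per_group, n_lipids))
after = rng.normal(0.3, 1.0, (n_per_group, n_lipids))   # synthetic treatment shift
X = np.vstack([before, after])
y = np.array([0] * n_per_group + [1] * n_per_group)     # 0 = pre-treatment, 1 = post-treatment

pls = PLSRegression(n_components=2, scale=True)
pls.fit(X, y)
scores = pls.transform(X)               # latent-variable scores used for the 2-D score plot
print("component-1 scores, pre-treatment: ", scores[y == 0, 0].round(2))
print("component-1 scores, post-treatment:", scores[y == 1, 0].round(2))
```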
Lipid composition analysis of cNLBP patients before and after therapeutic exercise (TE): Therapeutic exercise (TE) is another effective method for improving cNLBP [29]. We found that TE treatment was also effective in alleviating cNLBP (Fig. 1); after treatment, patients' VAS scores decreased significantly in the TE-treated group. We performed lipid extraction from cNLBP patients before and after therapeutic exercise and performed qualitative analysis. PLS-DA results indicated a distinct separation between the nontreated group (pink) and the TE-treated group (green) (Fig. 4A). Pearson correlation analysis showed the closeness of different lipids (Fig. 4B). Based on the volume of lipids, PC/PE molar ratios decreased, while SM/Cer molar ratios increased in the patients after therapeutic exercise. The volume of FA also decreased, while the volume of LPC and LPE increased in cNLBP patients after therapeutic exercise, similar to the results of the MT-treated group. Interestingly, the volume of TG (triacylglycerol) increased in the TE-treated group, while it decreased in the MT-treated group (Fig. 5A). A heatmap was produced to indicate the volume of lipids in the nontreated group (red) and the TE-treated group (green) (Fig. 5B).

Fig. 4 Lipidomic profiles in cNLBP patients before and after treatment with TE. (A) PLS-DA analysis of cNLBP patients treated with TE versus the control group; "1" represents the nontreated group, and "2" represents the TE-treated group. (B) Correlation analysis of the significantly different lipids; different colors represent the level of Pearson's correlation coefficient.

Fig. 5 Lipid identification in cNLBP patients before and after treatment with TE. (A) The composition of nontreated samples and TE-treated samples based on the volume of lipids in each lipid category. (B) Hierarchical clustering analysis of the 10 lipids in each sample; for class name, red represents the control group, and green represents the TE-treated group.
Metabolite alterations in cNLBP patients

To further identify therapeutic targets for physiotherapy-based treatment of cNLBP, we analyzed the metabolome of cNLBP patients before and after treatment. In our sample of patients, the metabolomic analysis annotated and quantified 171 metabolites. KEGG-based enrichment analysis showed that these metabolites were enriched in, among others, tryptophan and aspartate metabolism, ammonia recycling, and methionine, glycine and serine metabolism (Fig. 6A). Combining enrichment and topology analysis, pathway analysis was carried out for all patients, and 14 pathways were significantly changed (P < 0.05). These metabolites mainly belonged to aminoacyl-tRNA biosynthesis; arginine biosynthesis; valine, leucine and isoleucine biosynthesis; amino acid metabolism; pyrimidine and purine metabolism; ascorbate and aldarate metabolism; taurine and hypotaurine metabolism; beta-alanine metabolism; and nicotinate and nicotinamide metabolism (Fig. 6B).

Fig. 6 Metabolite alterations in cNLBP patients. A Pathway enrichment analysis revealed different metabolic pathways enriched in cNLBP patients (P value cutoff ≤ 0.05). B Results of the pathway analysis carried out with the web-based tool METPA using the concentrations of metabolites identified in cNLBP patients. Total cmpd, the total number of compounds in the pathway; Hits, the number of compounds matched from the uploaded data; Raw P, the original P value; Impact, the pathway impact value calculated from pathway topology analysis.
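The enrichment results above were produced with MetaboAnalyst/METPA, which combines over-representation testing with pathway topology analysis. As a rough illustration of the over-representation component only, the sketch below applies a hypergeometric test to a single hypothetical pathway; all counts are invented for illustration.

```python
# Minimal over-representation test for one pathway (hypergeometric test),
# one ingredient of MetaboAnalyst/METPA-style enrichment. Counts are placeholders.
from scipy.stats import hypergeom

background = 171         # all annotated metabolites in the dataset
pathway_size = 12        # metabolites belonging to the pathway (hypothetical)
significant = 20         # metabolites flagged as changed (hypothetical)
hits = 5                 # changed metabolites that fall in the pathway (hypothetical)

# P(X >= hits) when drawing `significant` metabolites from the background
p_value = hypergeom.sf(hits - 1, background, pathway_size, significant)
print(f"enrichment P = {p_value:.4f}")
```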
Metabolite profiles of cNLBP patients treated with manual therapy

Serum metabolome analysis was performed on samples collected after MT treatment. Because two participants’ blood samples could not be collected after treatment, seven participants were included in the MT group for the metabolome analysis. Orthogonal PLS-DA was performed to demonstrate the suitability of the system (Fig. 7A). The orthogonal PLS-DA score plot revealed good discrimination between the MT treatment group and the untreated samples, with no outliers (Fig. 7A), indicating that our metabolomic analysis could adequately reflect the metabolic profile alterations produced by MT treatment.
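As a hedged aside that is not part of the study's pipeline: with small groups, one way to check that a score-plot separation such as Fig. 7A generalizes rather than reflecting overfitting is leave-one-out cross-validation of a simple PLS-DA classifier. The sketch below uses placeholder data and a plain PLS model, not the orthogonal PLS-DA implementation used here.

```python
# Illustrative leave-one-out check of PLS-DA group separation (placeholder data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
X = rng.normal(size=(14, 171))     # 7 pre + 7 post samples x 171 metabolites (fake)
y = np.repeat([0, 1], 7)

correct = 0
for train, test in LeaveOneOut().split(X):
    model = PLSRegression(n_components=2).fit(X[train], y[train])
    pred = (model.predict(X[test]).ravel() >= 0.5).astype(int)
    correct += int(pred[0] == y[test][0])

print(f"leave-one-out accuracy: {correct / len(y):.2f}")
```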
Among the 20 metabolites with the highest VIP scores (VIP > 1.5) derived from the orthogonal PLS-DA, we found uridine, guanosine, kynurenic acid, 2’-deoxyadenosine, allantoin, stachydrine, inosine, uridine 5’-monophosphate, nicotinuric acid, 3,4-dihydroxybenzeneacetic acid, 2’-deoxyuridine, 2’-deoxyguanosine, 4-aminohippuric acid, cytidine, pyridoxylamine, oxidized glutathione, desaminotyrosine, L-valine, N-acetyl-5-hydroxytryptamine, and 4-acetamidobutanoic acid (Fig. 7B). Finally, we screened out the metabolites with fold changes > 2 in the MT treatment group compared with the nontreated group: cytidine, uridine 5’-monophosphate, kynurenic acid, guanosine, inosine, 2’-deoxyadenosine, and stachydrine (Fig. 8A). KEGG-based enrichment analysis revealed that these metabolites were significantly enriched in the pyrimidine metabolism and purine metabolism pathways, suggesting that MT treatment relieves pain by altering these two pathways (Fig. 8B).

Fig. 7 Discrimination through orthogonal PLS-DA of patients before and after manual therapy based on metabolomic analysis. A Orthogonal PLS-DA score plots comparing nontreated patients (indicated in the legend as 1) and patients after manual therapy (indicated as 2). B Variable importance in projection (VIP) features for the groups from the orthogonal PLS-DA analysis.

Fig. 8 Manual therapy could alter target metabolites in patients with cNLBP. A Comparison of the volumes of cytidine, uridine 5’-monophosphate, kynurenic acid, guanosine, inosine, 2’-deoxyadenosine, stachydrine, and N-acetyl-5-hydroxytryptamine in patients treated with and without manual therapy for 2 weeks. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that the pyrimidine metabolism and purine metabolism pathways were enriched in patients treated with manual therapy (P value cutoff ≤ 0.05).
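The VIP scores above were obtained from the orthogonal PLS-DA in MetaboAnalyst. For intuition only, the sketch below computes the standard VIP formula from a plain PLS model in scikit-learn; it approximates the idea rather than reproducing the software used in the study, and the data are placeholders.

```python
# Variable importance in projection (VIP) from a fitted PLSRegression model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Standard VIP: sqrt(p * sum_a(SS_a * w_ja^2) / sum_a(SS_a))."""
    t = pls.x_scores_          # (n_samples, n_components) latent scores
    w = pls.x_weights_         # (n_features, n_components) predictor weights
    q = pls.y_loadings_        # (n_targets, n_components) response loadings
    p, a = w.shape
    # variance in y explained by each component
    ss = np.array([(q[0, i] ** 2) * (t[:, i] @ t[:, i]) for i in range(a)])
    w_norm = w / np.linalg.norm(w, axis=0)       # normalise each weight vector
    return np.sqrt(p * ((w_norm ** 2) @ ss) / ss.sum())

rng = np.random.default_rng(2)
X = rng.normal(size=(14, 171))                   # placeholder metabolite matrix
y = np.repeat([0.0, 1.0], 7)                     # pre vs post treatment
pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls)
top20 = np.argsort(vip)[::-1][:20]               # indices of the 20 highest-VIP features
print(top20)
```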
Metabolite profiles of cNLBP patients treated with therapeutic exercise

We also performed metabolite identification in the serum metabolomes pooled from cNLBP patients treated with TE. Because one participant’s blood sample could not be collected after treatment, seven participants were included in the TE group for the metabolome analysis. Orthogonal PLS-DA was performed on the untreated and TE-treated samples.
The orthogonal PLS-DA score plot revealed good discrimination between the TE treatment group and the nontreated samples (Fig. 9A). Among the 20 metabolites with the highest VIP scores (VIP > 1.5) derived from the orthogonal PLS-DA, we found liothyronine, 2’-deoxyadenosine, uridine, L-homocystine, N-acetyl-5-hydroxytryptamine, stachydrine, γ-aminobutyric acid, nicotinuric acid, glutaric acid, 2’-deoxyuridine, adenine, N-acetyl-L-aspartic acid, cinnamic acid, cytidine, uridine 5’-monophosphate, D-(-)-mandelic acid, L-cysteine, 4-aminobenzoic acid, 5’-deoxyadenosine, and D-sorbitol (Fig. 9B). Finally, nine metabolites with fold changes > 2 in the TE treatment group were found: uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid (Fig. 10A). KEGG-based enrichment analysis revealed that these nine metabolites were significantly enriched in the pyrimidine metabolism, tyrosine metabolism, and galactose metabolism pathways, suggesting that TE treatment relieves pain by altering these three pathways (Fig. 10B).

Fig. 9 Discrimination through orthogonal PLS-DA of patients before and after therapeutic exercise examined by metabolomic analysis. A Orthogonal PLS-DA score plots comparing nontreated patients (indicated in the legend as 1) and patients after therapeutic exercise (indicated as 2). B Variable importance in projection (VIP) features for the groups from the orthogonal PLS-DA analysis.

Fig. 10 Therapeutic exercise could alter target metabolites in patients with cNLBP. A Comparison of the volumes of uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid in patients treated with therapeutic exercise for two weeks or without therapeutic treatment. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism, tyrosine metabolism, and galactose metabolism were enriched in patients treated with therapeutic exercise (P value cutoff ≤ 0.05).
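The screening step described above (fold change > 2, with Student's t tests for the group comparisons) can be expressed compactly. The sketch below uses placeholder data and the thresholds reported in the text; it is not the study's exact pipeline (normalization and the MetaboAnalyst processing steps are omitted).

```python
# Sketch of the candidate-metabolite screen: fold change > 2 between post- and
# pre-treatment samples, combined with a Student's t test (P < 0.05). Fake data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
pre  = rng.lognormal(mean=1.0, size=(7, 171))   # 7 pre-treatment samples (placeholder)
post = rng.lognormal(mean=1.0, size=(7, 171))   # 7 post-treatment samples (placeholder)

fold_change = post.mean(axis=0) / pre.mean(axis=0)
p_values = ttest_ind(post, pre, axis=0).pvalue

selected = np.where((fold_change > 2) & (p_values < 0.05))[0]
print(f"{selected.size} metabolites pass fold change > 2 and P < 0.05")
```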
Study strengths and limitations

The main strength of this study is that it points to a possible mechanism for the amelioration of cNLBP by MT or TE from the perspective of lipidomics and metabolomics in cNLBP patients. However, the study only captured alterations in lipids and metabolites, and the deeper mechanisms by which these lipids and metabolites influence physiotherapy-based treatment of cNLBP remain uncertain. Therefore, more evidence is needed.

Conclusions and clinical perspective

MT and TE were effective strategies for alleviating cNLBP. A possible mechanism is that MT or TE treatment alters the lipidome and metabolome of cNLBP patients. This study is the first to show that physiotherapy-based treatment of cNLBP is associated with changes in specific lipids and metabolites. These results indicate that physiotherapy, or agents targeting these lipid and metabolite alterations, might be useful for the treatment of cNLBP.
[ "Participants", "Therapy of subjects", "Lipidomic analysis", "Metabolomic measurement", "Statistical analyses", "Lipid composition analysis of cNLBP patients before and after manual therapy", "Lipid composition analysis of cNLBP patients before and after therapeutic exercise (TE)", "Metabolite alterations in cNLBP patients", "Metabolite profiles of cNLBP patients treated with manual therapy", "Metabolite profiles of cNLBP patients treated with therapeutic exercise", "Study strengths and limitations", "Conclusions and clinical perspective" ]
[ "Patients with cNLBP were recruited through advertising. The inclusion criteria were as follows: (1) patients aged between 18 years and 65 years [20]; (2) patients with pain in the area between the lower rib and the inferior gluteal fold; (3) patients with persistent pain > 3 months or intermittent pain > 6 months and having been clinically diagnosed as having cNLBP by two licensed medical doctors in accordance with the diagnostic guidelines published by the American College of Physicians and the American Pain Society [21, 22]; (4) patients with a minimum score of 2 on the Visual Analog Scale (VAS) in the previous week [23]; (5) patients who were right-hand dominant, with no neurological diseases (e.g., traumatic brain injury, or epilepsy), or intracranial lesions; and (6) patients who did not receive pain treatment within the past 3 months.\nThe exclusion criteria were as follows: (1) patients with radiating pain, menstrual pain, recent/current pregnancy, or postpartum low back pain; (2) patients who suffered known inflammatory disease of the spine, vertebral fracture, severe osteoporosis, autoinflammatory arthritis, and cancer or had significant unexplained weight loss; (3) patients who had cardio-cerebrovascular disease or endocrine disorders; (4) patients with mental illness requiring immediate pharmacotherapy; (5) patients who showed an unwillingness to sign research consent and unwillingness or inability to follow the research protocol; and (6) patients with current alcohol or drug dependence.\nAll participants were assessed for pain intensity using the visual analog scale (VAS), and serum samples for LC‒MS measurements were collected before and after treatment. The First Affiliated Hospital of Sun Yat-sen University approved the ethical approval document of the study (no. [2019] 408). The recruited subjects signed informed consent forms prior to the experiment.", "Seventeen recruited subjects were randomly divided into the MT group and the TE group. Patients in the MT group received manual therapy, and patients in the TE group received therapeutic exercise. Subjects in the manual therapy group were involved in muscular relaxation, myofascial release, and mobilization for 20 min during each session. The treatment lasted for a total of six sessions, once every two days. Subjects in the therapeutic exercise group completed motor control exercise and core stability exercise for 30 min during each session. The motor control exercises included stretching of the trunk and extremity muscles, trunk and hip rotation, and flexion training. Stabilization exercises consisted of the (1) bridge exercise, (2) single-leg-lift bridge exercise, (3) side bridge exercise, (4) two-point bird-dog position elevated contralateral leg and arm, (6) bear crawl exercise, and (7) dead bug exercise. The treatment lasted for a total of six sessions, once every two days.", "Lipid samples were prepared as described by Xuan et al. with some modifications [24]. Briefly, venous blood was collected in heparinization tubes and then centrifuged for 15 min at 2000 g at 4 °C to collect serum. A total of 200 µL serum samples with lipid standards were mixed with 400 µL tert-butyl methyl ether (MTBE) and 80 µL methanol and then vortexed for 30 s. Next, the samples were centrifuged, after which the upper phases were collected, transferred into new tubes, and dried by vacuum evaporation. 
Samples were reconstituted with 100 µL of methylene chloride:methanol (1:1, v/v).\nLipid analysis was carried out with a Shimadzu LC-30 A (Shimadzu, Kyoto, Japan) coupled with a mass spectrometer (QTRAP 6500, AB SCIEX, Framingham, MA, USA). The chromatographic parameters were as follows: chromatographic column: ACQUITY UPLC® BEH C18 column (2.1 × 100 mm, 1.7 μm, Waters, Milford, MA, USA), volume of injection: 5 µl, flow rate: 0.26 mL/min, oven temperature: 55 °C. The mobile phase included reagent A (acetonitrile:ultrapure water = 60:40, v/v, with 10 mM ammonium acetate) and reagent B (isopropanol:acetonitrile = 90:10, v/v, with 10 mM ammonium acetate). A binary gradient was set as follows: 0–1.5 min, mobile phase including 68% reagent A and 32% reagent B; 1.5–15.5 min, mobile phase including 15% reagent A and 85% reagent B; 15.5–15.6 min, mobile phase including 3% reagent A and 97% reagent B; 15.6–18 min, mobile phase including 3% reagent A and 97% reagent B; 18–18.1 min, mobile phase including 68% reagent A and 32% reagent B; 18.1–20 min, mobile phase including 68% reagent A and 32% reagent B. Electrospray ionization (QTRAP 6500, AB SCIEX, Framingham, MA, USA) was used with the following parameters: ion source voltage was − 4500 or 5500 V, ion source temperature was 600 °C, curtain gas was 20 psi, atomizing gas was 60 psi, and auxiliary gas was 60 psi. Scanning was performed through multiple reaction monitoring (MRM). Samples under test conditions were mixed and used as QC samples for LC‒MS analysis every third sample to correct deviations caused by instrumental drift and evaluate the quality of data.", "Metabolomic samples were prepared as described by Wang et al. [25]. Briefly, venous blood was collected in heparinization tubes and then centrifuged at 8000 g at 4 °C to collect serum. A total of 100 µL of serum sample was mixed with 400 µL of solution (methanol:acetonitrile:ultrapure water = 2:2:1, v/v/v) and then sonicated for 10 min in a 4 °C water bath. Next, the samples were incubated for one hour at − 20 °C and then centrifuged. The supernatant was collected and evaporated by vacuum evaporation. Each sample was resuspended in solution (acetonitrile:ultrapure water, 1:1, v/v).\nMetabolomic analysis was carried out with a Shimadzu LC-30 A (Shimadzu, Kyoto, Japan) coupled with a mass spectrometer (QTRAP 4500, AB SCIEX, Framingham, MA, USA). The chromatographic parameters were set as follows: chromatographic column: UPLC BEH Amide column (2.1 × 100 mm, 1.7 μm, Waters, Milford, MA, USA), volume of injection: 5 µl, flow rate: 0.3 mL/min, oven temperature: 55 °C. The mobile phase included reagent A (100% ultrapure water with 0.025 M ammonium hydroxide and 0.025 M ammonium acetate) and reagent B (100% acetonitrile). A binary gradient was set as follows: 0–1 min, mobile phase including of 15% reagent A and 85% reagent B; 1–12 min, mobile phase including of 35% reagent A and 65% reagent B; 12–12.1 min, mobile phase including of 60% reagent A and 40% reagent B; 12.1–15 min, mobile phase including of 60% reagent A and 40% reagent B; 15–15.1 min, mobile phase including of 15% reagent A and 85% reagent B; 15.1–20 min, mobile phase including of 15% reagent A and 85% reagent B. An electrospray ionization (QTRAP 4500, AB SCIEX, Framingham, MA, USA) was used with parameters as follows: ion source voltage was − 4500 or 5500 V, ion source temperature was 600 °C, the ion source voltage was − 4500 or 5500 V, curtain gas was 20 psi, atomizing gas was 60 psi, and auxiliary gas was 60 psi. 
Scanning was performed via multiple reaction monitoring (MRM). Samples under test conditions were mixed and used as QC samples for LC‒MS analysis every third sample to correct deviations caused by instrumental drift and evaluate the quality of data. After the test, raw data were converted to mzXML format with the web-based tool ProteoWizard and then analyzed for peak alignment, retention time correction, and peak area extraction based on XCMS. Metabolite annotation was carried out based on the online human metabolome database (HMDB, http://www.hmdb.ca) using mass-to-charge ratio information and metabolite structures. Metabolite structures were accurately matched using primary and secondary spectrograms (< 25 ppm).", "Lipid and metabolite abundance were determined by peak area. Then, data were processed and normalized based on a reference sample (PQN) following the process outlined on the website https://www.metaboanalyst.ca/, which was mainly designed for raw spectra processing and general statistical and functional analysis of targeted metabolomics data [26–28]. The maximum covariance between nontreated samples and MT- or ET-treated samples in lipidomic analysis was determined with partial least squares-discriminant analysis (PLS-DA). The maximum covariance between nontreated samples and MT- or ET-treated samples in metabolomic analysis was determined using orthogonal partial least-squares discriminant analysis (OPLS-DA). The correlation between lipid molecules was analyzed with correlation heatmaps. The content difference of lipids in each sample was indicated with hierarchical clustering analysis. Pathway analysis was carried out with the web-based tool METPA.\nThe raw data were logarithmically transformed and tested for normality before the means were compared between different groups. If normality was assumed, Student’s t test was applied. To visualize the differentiation between different groups, PLS-DA and OPLS-DA were performed using MetaboAnalyst 5.0 (http://www.metaboanalyst.ca/). Data are presented as the mean ± SEM. GraphPad Prism (version 8, GraphPad Software, San Diego, CA, USA) was used to perform statistical analyses between the nontreatment and physiotherapy-based treatment groups using Student’s t test (P < 0.05).", "We recruited 17 patients with cNLBP whose demographic information is shown in Table 1. The recruited subjects were randomly divided into MT or TE groups, with no significant differences in age, weight, height, BMI, or VAS score between them. We found that MT treatment was effective in alleviating cNLBP (Fig. 1). After treatment, the VAS score decreased in almost the entire MT group (Fig. 1). Serum lipidomics were determined after six MT treatment sessions. 
Since one participant’s blood sample could not be collected after treatment, there were eight effective participants in the MT group.\n\nTable 1Demographic information among two groups, M ± SEMMTTEPN (male, count)89Age (years)28.75 ± 7.2628.11 ± 7.45not significantly different at baselineHeight (m)1.68 ± 0.081.65 ± 0.05not significantly different at baselineWeight (kg)60.06 ± 8.8058.00 ± 10.58not significantly different at baselineBMI (kg/m2)21.25 ± 2.5421.17 ± 3.08not significantly different at baselineVAS (before treatment)5.76 ± 1.155.63 ± 1.88not significantly different at baselineVAS (after treatment)2.80 ± 1.763.47 ± 1.83not significantly different at baselineAbbreviation:\nMT Manual therapy, TE therapeutic exercise, BMI Body mass index, VAS Visual analog scale (0–10; VAS 0 = no pain; VAS 10 = maximal pain)\nDemographic information among two groups, M ± SEM\nAbbreviation:\nMT Manual therapy, TE therapeutic exercise, BMI Body mass index, VAS Visual analog scale (0–10; VAS 0 = no pain; VAS 10 = maximal pain)\n\nFig. 1Manual therapy and therapeutic exercise were effective in cNLBP amelioration Seventeen patients were randomly divided into two groups: one group received manual therapy, and the other group received therapeutic exercise. VAS was recorded before and after treatment. Asterisks show a significant difference from patients before treatment using Student’s t tests (**P < 0.01) \nManual therapy and therapeutic exercise were effective in cNLBP amelioration Seventeen patients were randomly divided into two groups: one group received manual therapy, and the other group received therapeutic exercise. VAS was recorded before and after treatment. Asterisks show a significant difference from patients before treatment using Student’s t tests (**P < 0.01) \nWe completed lipid extraction and performed qualitative analysis. Through lipidomic analysis, we identified 290 lipids, which can be divided into the ten subclasses of phosphatidylcholine (PC), phosphatidylethanolamine (PE), lysophosphatidylcholines (LPC), lysophosphatidylethanolamine (LPE), triacylglycerol (TG), phosphatidylinositol (PI), sphingomyelin (SM), ceramide (Cer), hexosylceramide (HexCer), and fatty acid (FA). As a multivariate statistical analysis, PLS-DA could maximize the distinction and discover different metabolites between groups. We performed PLS-DA analysis with the MetaboAnalyst R software package and found a clear difference in the nontreated group (pink) and MT-treated group (green), suggesting differential lipidomic profiles in cNLBP patients before and after manual therapy (Fig. 2 A). Next, we used Pearson correlation analysis to measure the closeness of different lipids (Fig. 2B). Using volume measurements of lipids, we analyzed the lipidomic composition of cNLBP patients before and after manual therapy and found a decrease in phosphatidylcholine (PC)/phosphatidylethanolamine (PE) molar ratios but an increase in sphingomyelin (SM)/ceramide (Cer) molar ratios in the patients after manual therapy. Meanwhile, there were also decreases in the volumes of fatty acids (FAs) and increases in lysophosphatidylcholine (LPC) and lysophosphatidylethanolamine (LPE) when cNLBP patients were treated with MT (Fig. 3 A). We also generated a heatmap to present the volume of the lipids in each sample (Fig. 3B).\n\nFig. 2Lipidomic profiles in cNLBP patients before and after treatment with MT. A The PLS-DA analysis of cNLBP patients treated with MT versus the control group. 
“1” represents the nontreated group, and “2” represents the MT-treated group. B Correlation analysis of the significantly different lipids. Different colors represent the level of Pearson’s correlation coefficient\nLipidomic profiles in cNLBP patients before and after treatment with MT. A The PLS-DA analysis of cNLBP patients treated with MT versus the control group. “1” represents the nontreated group, and “2” represents the MT-treated group. B Correlation analysis of the significantly different lipids. Different colors represent the level of Pearson’s correlation coefficient\n\nFig. 3Lipid identification in cNLBP patients before and after treatment with MT. A The composition of nontreated samples and MT-treated samples based on the volume of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipids in each sample. For class name, red represents the control group, and green represents the MT-treated group\nLipid identification in cNLBP patients before and after treatment with MT. A The composition of nontreated samples and MT-treated samples based on the volume of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipids in each sample. For class name, red represents the control group, and green represents the MT-treated group", "Therapeutic exercise (TE) is another effective method for improving cNLBP [29]. We found that TE treatment was also effective in alleviating cNLBP (Fig. 1). After treatment, patients’ VAS scores decreased significantly in the TE-treated group (Fig. 1).\nWe performed lipid extraction from cNLBP patients before and after therapeutic exercise and performed qualitative analysis. PLS-DA results indicated a distinct separation between the nontreated group (pink) and the TE-treated group (green) (Fig. 4A). Pearson correlation analysis showed the closeness of different lipids (Fig. 4B). Based on the volume of lipids, PC/PE molar ratios decreased, while SM/Cer molar ratios increased in the patients after therapeutic exercise. The volume of FA also decreased, while the volume of LPC and LPE increased in cNLBP patients after therapeutic exercise, similar to the results of the MT-treated group. Interestingly, the volume of TG (triacylglycerol) increased in the TE-treated group, while it decreased in the MT-treated group (Fig. 5A). A heatmap was produced to indicate the volume of lipids in the nontreated group (red) and the TE-treated group (green) (Fig. 5B).\n\nFig. 4Lipidomic profiles in cNLBP patients before and after treatment with TE. A PLS-DA analysis of cNLBP patients treated with TE versus the control group. “1” represents the nontreated group, and “2” represents the TE-treated group. B Correlation analysis of the significantly different lipids. Different colors represent the level of Pearson’s correlation coefficient\nLipidomic profiles in cNLBP patients before and after treatment with TE. A PLS-DA analysis of cNLBP patients treated with TE versus the control group. “1” represents the nontreated group, and “2” represents the TE-treated group. B Correlation analysis of the significantly different lipids. Different colors represent the level of Pearson’s correlation coefficient\n\nFig. 5Lipid identification in cNLBP patients before and after treatment with TE. A The composition of nontreated samples and TE-treated samples based on the volume of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipids in each sample. 
For class name, red represents the control group, and green represents the TE-treated group\nLipid identification in cNLBP patients before and after treatment with TE. A The composition of nontreated samples and TE-treated samples based on the volume of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipids in each sample. For class name, red represents the control group, and green represents the TE-treated group", "To further identify therapeutic targets for cNLBP physiotherapy-based treatment, we analyzed the metabolome of cNLBP patients before and after treatment. In our sample of patients, the metabolomic analysis annotated and quantified 171 metabolites. Through KEGG-based enrichment analysis, these metabolites were enriched in the metabolism of tryptophan or aspartate, ammonia recycling, the metabolism of methionine or glycine, and serine, among others (Fig. 6A). Combining enrichment and topology analysis, pathway analysis was carried out for all patients. We found a total of 14 pathways that were significantly changed in patients (P value < 0.05). These metabolites mainly belonged to aminoacyl-tRNA biosynthesis; arginine biosynthesis; valine, leucine and isoleucine biosynthesis; amino acid metabolism; pyrimidine and purine metabolism; ascorbate and aldarate metabolism; taurine and hypotaurine metabolism; beta-alanine metabolism; and nicotinate and nicotinamide metabolism (Fig. 6B).\n\nFig. 6Metabolite alteration in cNLBP patients. A Pathway enrichment analysis revealed different metabolic pathways enriched in cNLBP patients (P value cutoff ≤ 0.05). B The results from the pathway analysis carried out with the web-based tool METPA using the concentrations of metabolites identified in cNLBP patients. Total cmpd, the total number of compounds in the pathway. Hits are the matched number from the uploaded data. Raw P is the original P value. Impact is the pathway impact value calculated from pathway topology analysis\nMetabolite alteration in cNLBP patients. A Pathway enrichment analysis revealed different metabolic pathways enriched in cNLBP patients (P value cutoff ≤ 0.05). B The results from the pathway analysis carried out with the web-based tool METPA using the concentrations of metabolites identified in cNLBP patients. Total cmpd, the total number of compounds in the pathway. Hits are the matched number from the uploaded data. Raw P is the original P value. Impact is the pathway impact value calculated from pathway topology analysis", "Serum metabolome analysis was performed on samples collected after MT treatment. Since two participants’ blood samples could not be collected after treatment, there were seven included participants in the MT group for metabolomes. Orthogonal PLS-DA was performed to demonstrate the suitability of the system (Fig. 7A). The orthogonal PLS-DA score plot revealed good discrimination of the MT treatment group against untreated samples (Fig. 7A). MT-treated and nontreated samples were separated with no outliers (Fig. 7A), demonstrating that our metabolomic analysis could sufficiently reflect the metabolic profile alteration of MT treatment. 
The VIP scores derived from orthogonal PLS-DA, based on the first 20 metabolites with a VIP score > 1.5, revealed uridine, guanosine, kynurenic acid, 2’-deoxyadenosine, allantoin, stachydrine, inosine, uridine 5’-monophosphate, nicotinuric acid, 3,4-dihydroxybenzeneacetic acid, 2’-deoxyuridine, 2’-deoxyguanosine, 4-aminohippuric acid, cytidine, pyridoxylamine, glutathione oxidized, desaminotyrosine, L − valine, N-acetyl-5-hydroxytryptamine, and 4-acetamidobutanoic acid with the highest VIP scores for MT treatment (Fig. 7B). Finally, we screened out metabolites (fold changes > 2) in the MT treatment group compared with the nontreated group, which were cytidine, uridine 5’-monophosphate, kynurenic acid, guanosine, inosine, 2’-deoxyadenosine, and stachydrine (Fig. 8A). The KEGG-based enrichment analysis revealed that these metabolites were significantly enriched in pyrimidine metabolism and purine metabolism pathways, demonstrating that MT treatment relieves pain by altering the metabolism of these two pathways (Fig. 8B).\n\nFig. 7Discrimination through orthogonal PLS-DA of patients before and after manual therapy analyzed based on metabolomics analysis. A Orthogonal PLS-DA showing score plots comparing nontreated patients (indicated in the legend as 1) and patients after manual therapy (indicated as 2). B Variable importance of projection (VIP) features for the groups from orthogonal PLS-DA analysis\nDiscrimination through orthogonal PLS-DA of patients before and after manual therapy analyzed based on metabolomics analysis. A Orthogonal PLS-DA showing score plots comparing nontreated patients (indicated in the legend as 1) and patients after manual therapy (indicated as 2). B Variable importance of projection (VIP) features for the groups from orthogonal PLS-DA analysis\n\nFig. 8Manual therapy could alter target metabolites in patients with cNLBP. A Comparison of the volumes of cytidine, uridine 5’-monophosphate, kynurenic acid, guanosine, inosine, 2’-deoxyadenosine, stachydrine, and N-acetyl-5-hydroxytryptamine in patients treated with and without manual therapy for 2 weeks. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism and purine metabolism pathways were enriched in patients treated with manual therapy (P value cutoff ≤ 0.05)\nManual therapy could alter target metabolites in patients with cNLBP. A Comparison of the volumes of cytidine, uridine 5’-monophosphate, kynurenic acid, guanosine, inosine, 2’-deoxyadenosine, stachydrine, and N-acetyl-5-hydroxytryptamine in patients treated with and without manual therapy for 2 weeks. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism and purine metabolism pathways were enriched in patients treated with manual therapy (P value cutoff ≤ 0.05)", "We also performed metabolite identification in the serum metabolomes pooled from cNLBP patients treated with TE. Since one participant’s blood sample could not be collected after treatment, there were seven included participants in the TE group for metabolomes. Orthogonal PLS-DA was performed on untreated samples and TE treatment samples. The orthogonal PLS-DA score plot revealed good discrimination between the TE treatment group and the nontreated samples (Fig. 9A). 
The VIP score > 1.5 derived from orthogonal PLS-DA, based on the first 20 metabolites, revealed liothyronine, 2’-deoxyadenosine, uridine, L − homocystine, N-acetyl-5-hydroxytryptamine, stachydrine, γ-aminobutyric acid, nicotinuric acid, glutaric acid, 2’-deoxyuridine, adenine, N-acetyl-L-aspartic acid, cinnamic acid, cytidine, uridine 5’-monophosphate, D-(-)-mandelic acid, L-cysteine, 4-aminobenzoic acid, 5’-deoxyadenosine, and D-sorbitol with the highest VIP scores for TE treatment (Fig. 9B).\n\nFig. 9Discrimination through orthogonal PLS-DA of patients before and after therapeutic exercise examined by metabolomic analysis. A Orthogonal PLS-DA showing score plots comparing nontreated patients (indicated in the legend with 1) and patients after therapeutic exercise (indicated with 2). B Variable importance of projection (VIP) features for the groups from orthogonal PLS-DA analysis\nDiscrimination through orthogonal PLS-DA of patients before and after therapeutic exercise examined by metabolomic analysis. A Orthogonal PLS-DA showing score plots comparing nontreated patients (indicated in the legend with 1) and patients after therapeutic exercise (indicated with 2). B Variable importance of projection (VIP) features for the groups from orthogonal PLS-DA analysis\nFinally, nine metabolites with fold changes > 2 in the TE treatment group were found, including uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid (Fig. 10A). The KEGG-based enrichment analysis revealed that these nine metabolites were significantly enriched in pyrimidine metabolism, tyrosine metabolism, and galactose metabolism pathways, demonstrating that TE treatment relieves pain by altering the metabolism of these three pathways (Fig. 10B).\n\nFig. 10Therapeutic exercise could alter target metabolites in patients with cNLBP. A Comparison of the volumes of uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid in patients treated with therapeutic exercise for two weeks or without therapeutic treatment. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism, tyrosine metabolism, and galactose metabolism were enriched in patients treated with therapeutic exercise (P value cutoff ≤ 0.05)\nTherapeutic exercise could alter target metabolites in patients with cNLBP. A Comparison of the volumes of uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid in patients treated with therapeutic exercise for two weeks or without therapeutic treatment. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism, tyrosine metabolism, and galactose metabolism were enriched in patients treated with therapeutic exercise (P value cutoff ≤ 0.05)", "The greatest strength of this study is to reveal the possible mechanism of promoting cNLBP amelioration through MT or TE treatment from the perspective of lipidomics and metabolomics in cNLBP patients. 
However, the experiment only involved with alterations in lipids and metabolites, and the deeper mechanisms of these lipids and metabolites affecting cNLBP physiotherapy-based treatment are uncertain. Therefore, more evidences are needed to explore.", "MT or TE treatment were effective strategies in alleviating cNLBP. The possible mechanism is that MT or TE treatment was able to cause alterations in the lipidomics and metabolomics in cNLBP patients. This study was the first to elucidate cNLBP physiotherapy-based treatment was associated with specific lipids and metabolites. These results indicate that physiotherapy or agents targeting these lipids and metabolites alteration might be useful for treatment of cNLBP." ]
[ "Chronic nonspecific low back pain (cNLBP) is not caused by recognizable pathology; it lasts for more than three months and occurs between the lower rib and the inferior gluteal fold [1]. It is estimated that approximately four out of five people have lower back pain at some time during their lives, and it greatly affects their quality of life, productivity, and ability to work [2].\ncNLBP can be caused by many factors, such as lumbar strain, nerve irritation, and bony encroachment. However, the etiology of cNLBP is typically unknown and poorly understood [3]. Medical treatments and physiotherapy are recommended to treat and resolve issues associated with cNLBP [3]. Therapeutic exercise and manual therapy have a lower risk of increasing future back injuries or work absence and are more effective treatment options for chronic pain than medication or surgery, and they can be performed at rehabilitation clinics [4–6]. Exercise therapy is a widely used strategy to cope with low back pain that includes a heterogeneous group of interventions ranging from aerobic exercise or general physical fitness to muscle strengthening and various types of flexibility and stretching exercises [7]. Manual therapy is another effective method to deal with low back pain, in which hands are used to apply a force with a therapeutic intent, including massage, joint mobilization/manipulation, myofascial release, nerve manipulation, strain/counter strain, and acupressure [8]. However, the reasons therapeutic exercise and manual therapy ameliorate cNLBP are still unknown.\nWith the development of lipidomics and metabolomics, many studies have indicated that lipid or metabolite alterations are associated with chronic pain [9, 10]. Lipids, as primary metabolites, are not only structural components of membranes but can also be used as signaling molecules to regulate many physiological activities. For example, fatty acid (FA) chains can be saturated (SFA), monounsaturated (MUFA), or polyunsaturated (PUFA), and the ratio of saturated to unsaturated FAs participates in the regulation of longevity [11]. Phosphatidylcholine (PC) phosphatidylethanolamine (PE) is abundant in membranes. In mammals, cellular PC/PE molar ratios that are out of balance and increase or decrease abnormally can cause diseases [12]. For example, a reduced PC/PE ratio can protect mice against atherosclerosis [13]. Decreasing the PC/PE molar ratio can change the intracellular energy supply by activating the electron transport chain and mitochondrial respiration [14]. Lysophosphatidylcholine (LPC) 16:0 correlated with pain outcomes in a cohort of patients with osteoarthritis [15]. Apart from phospholipids, studies have shown that sphingolipid metabolism also contributes to chronic pain. Increased ceramide and sphingosine-1-phosphate (S1P) are involved in the progression of chronic pain in the nervous system [16]. Previous studies reported that metabolites were also associated with pain. Patients with neuropathic pain showed elevated choline-containing compounds in response to myoinositol [tCho/mI] under magnetic resonance spectroscopy [17]. Flavonoids are the most common secondary plant metabolites used as tranquilizers in folkloric medicine and have been claimed to reduce neuropathic pain [18]. Patients with chest pain and high plasma levels of deoxyuridine, homoserine, and methionine had an increased risk of myocardial infarction [19]. 
Despite the evidence presented above that pain is associated with specific lipids and metabolites, no studies have shown that MT and TE can relieve cNLBP by altering lipids and metabolites.\nIn this article, we compared the lipidomics and metabolomics of patients with cNLBP before and after treatment to explore differences in lipids and metabolites correlated with cNLBP physiotherapy-based treatment. The newly found data will expand our knowledge of cNLBP physiotherapy-based treatment.", "Participants Patients with cNLBP were recruited through advertising. The inclusion criteria were as follows: (1) patients aged between 18 years and 65 years [20]; (2) patients with pain in the area between the lower rib and the inferior gluteal fold; (3) patients with persistent pain > 3 months or intermittent pain > 6 months and having been clinically diagnosed as having cNLBP by two licensed medical doctors in accordance with the diagnostic guidelines published by the American College of Physicians and the American Pain Society [21, 22]; (4) patients with a minimum score of 2 on the Visual Analog Scale (VAS) in the previous week [23]; (5) patients who were right-hand dominant, with no neurological diseases (e.g., traumatic brain injury, or epilepsy), or intracranial lesions; and (6) patients who did not receive pain treatment within the past 3 months.\nThe exclusion criteria were as follows: (1) patients with radiating pain, menstrual pain, recent/current pregnancy, or postpartum low back pain; (2) patients who suffered known inflammatory disease of the spine, vertebral fracture, severe osteoporosis, autoinflammatory arthritis, and cancer or had significant unexplained weight loss; (3) patients who had cardio-cerebrovascular disease or endocrine disorders; (4) patients with mental illness requiring immediate pharmacotherapy; (5) patients who showed an unwillingness to sign research consent and unwillingness or inability to follow the research protocol; and (6) patients with current alcohol or drug dependence.\nAll participants were assessed for pain intensity using the visual analog scale (VAS), and serum samples for LC‒MS measurements were collected before and after treatment. The First Affiliated Hospital of Sun Yat-sen University approved the ethical approval document of the study (no. [2019] 408). The recruited subjects signed informed consent forms prior to the experiment.\nPatients with cNLBP were recruited through advertising. 
The inclusion criteria were as follows: (1) patients aged between 18 years and 65 years [20]; (2) patients with pain in the area between the lower rib and the inferior gluteal fold; (3) patients with persistent pain > 3 months or intermittent pain > 6 months and having been clinically diagnosed as having cNLBP by two licensed medical doctors in accordance with the diagnostic guidelines published by the American College of Physicians and the American Pain Society [21, 22]; (4) patients with a minimum score of 2 on the Visual Analog Scale (VAS) in the previous week [23]; (5) patients who were right-hand dominant, with no neurological diseases (e.g., traumatic brain injury, or epilepsy), or intracranial lesions; and (6) patients who did not receive pain treatment within the past 3 months.\nThe exclusion criteria were as follows: (1) patients with radiating pain, menstrual pain, recent/current pregnancy, or postpartum low back pain; (2) patients who suffered known inflammatory disease of the spine, vertebral fracture, severe osteoporosis, autoinflammatory arthritis, and cancer or had significant unexplained weight loss; (3) patients who had cardio-cerebrovascular disease or endocrine disorders; (4) patients with mental illness requiring immediate pharmacotherapy; (5) patients who showed an unwillingness to sign research consent and unwillingness or inability to follow the research protocol; and (6) patients with current alcohol or drug dependence.\nAll participants were assessed for pain intensity using the visual analog scale (VAS), and serum samples for LC‒MS measurements were collected before and after treatment. The First Affiliated Hospital of Sun Yat-sen University approved the ethical approval document of the study (no. [2019] 408). The recruited subjects signed informed consent forms prior to the experiment.\nTherapy of subjects Seventeen recruited subjects were randomly divided into the MT group and the TE group. Patients in the MT group received manual therapy, and patients in the TE group received therapeutic exercise. Subjects in the manual therapy group were involved in muscular relaxation, myofascial release, and mobilization for 20 min during each session. The treatment lasted for a total of six sessions, once every two days. Subjects in the therapeutic exercise group completed motor control exercise and core stability exercise for 30 min during each session. The motor control exercises included stretching of the trunk and extremity muscles, trunk and hip rotation, and flexion training. Stabilization exercises consisted of the (1) bridge exercise, (2) single-leg-lift bridge exercise, (3) side bridge exercise, (4) two-point bird-dog position elevated contralateral leg and arm, (6) bear crawl exercise, and (7) dead bug exercise. The treatment lasted for a total of six sessions, once every two days.\nSeventeen recruited subjects were randomly divided into the MT group and the TE group. Patients in the MT group received manual therapy, and patients in the TE group received therapeutic exercise. Subjects in the manual therapy group were involved in muscular relaxation, myofascial release, and mobilization for 20 min during each session. The treatment lasted for a total of six sessions, once every two days. Subjects in the therapeutic exercise group completed motor control exercise and core stability exercise for 30 min during each session. The motor control exercises included stretching of the trunk and extremity muscles, trunk and hip rotation, and flexion training. 
Lipidomic analysis

Lipid samples were prepared as described by Xuan et al., with some modifications [24]. Briefly, venous blood was collected in heparinized tubes and centrifuged for 15 min at 2000 g and 4 °C to collect serum. A 200 µL serum sample spiked with lipid standards was mixed with 400 µL tert-butyl methyl ether (MTBE) and 80 µL methanol and vortexed for 30 s. The samples were then centrifuged, and the upper phases were collected, transferred into new tubes, and dried by vacuum evaporation. Samples were reconstituted with 100 µL of methylene chloride:methanol (1:1, v/v).

Lipid analysis was carried out with a Shimadzu LC-30A system (Shimadzu, Kyoto, Japan) coupled to a mass spectrometer (QTRAP 6500, AB SCIEX, Framingham, MA, USA). The chromatographic parameters were as follows: column, ACQUITY UPLC BEH C18 (2.1 × 100 mm, 1.7 μm, Waters, Milford, MA, USA); injection volume, 5 µL; flow rate, 0.26 mL/min; oven temperature, 55 °C. The mobile phase consisted of reagent A (acetonitrile:ultrapure water = 60:40, v/v, with 10 mM ammonium acetate) and reagent B (isopropanol:acetonitrile = 90:10, v/v, with 10 mM ammonium acetate). The binary gradient was as follows: 0–1.5 min, 68% A/32% B; 1.5–15.5 min, 15% A/85% B; 15.5–15.6 min, 3% A/97% B; 15.6–18 min, 3% A/97% B; 18–18.1 min, 68% A/32% B; 18.1–20 min, 68% A/32% B. Electrospray ionization was used with the following parameters: ion source voltage, −4500 or 5500 V; ion source temperature, 600 °C; curtain gas, 20 psi; atomizing gas, 60 psi; auxiliary gas, 60 psi. Scanning was performed through multiple reaction monitoring (MRM). Aliquots of all test samples were pooled and used as quality control (QC) samples, which were injected after every third sample to correct for deviations caused by instrumental drift and to evaluate data quality.
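The binary gradient can be summarized as time-segmented mobile-phase fractions. The sketch below encodes the lipidomic gradient and sanity-checks it; the data structure and check function are illustrative, not vendor acquisition software.

```python
# A minimal sketch of the lipidomic binary gradient described above, expressed as
# (start_min, end_min, %A, %B) segments, with a basic consistency check.
LIPID_GRADIENT = [
    (0.0, 1.5, 68, 32),
    (1.5, 15.5, 15, 85),
    (15.5, 15.6, 3, 97),
    (15.6, 18.0, 3, 97),
    (18.0, 18.1, 68, 32),
    (18.1, 20.0, 68, 32),
]

def check_gradient(segments):
    """Verify that segments are contiguous in time and that %A + %B = 100 in each."""
    for (t0, t1, a, b), nxt in zip(segments, segments[1:] + [None]):
        assert t1 > t0, "each segment must move forward in time"
        assert a + b == 100, "mobile-phase fractions must sum to 100%"
        if nxt is not None:
            assert nxt[0] == t1, "segments must be contiguous"
    return segments[-1][1]  # total run time in minutes

print(f"total run time: {check_gradient(LIPID_GRADIENT)} min")  # 20.0 min
```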
Metabolomic measurement

Metabolomic samples were prepared as described by Wang et al. [25]. Briefly, venous blood was collected in heparinized tubes and centrifuged at 8000 g and 4 °C to collect serum. A 100 µL serum sample was mixed with 400 µL of extraction solution (methanol:acetonitrile:ultrapure water = 2:2:1, v/v/v) and sonicated for 10 min in a 4 °C water bath. The samples were then incubated for one hour at −20 °C and centrifuged. The supernatant was collected and dried by vacuum evaporation, and each sample was resuspended in acetonitrile:ultrapure water (1:1, v/v).

Metabolomic analysis was carried out with a Shimadzu LC-30A system (Shimadzu, Kyoto, Japan) coupled to a mass spectrometer (QTRAP 4500, AB SCIEX, Framingham, MA, USA). The chromatographic parameters were as follows: column, UPLC BEH Amide (2.1 × 100 mm, 1.7 μm, Waters, Milford, MA, USA); injection volume, 5 µL; flow rate, 0.3 mL/min; oven temperature, 55 °C. The mobile phase consisted of reagent A (ultrapure water with 0.025 M ammonium hydroxide and 0.025 M ammonium acetate) and reagent B (acetonitrile). The binary gradient was as follows: 0–1 min, 15% A/85% B; 1–12 min, 35% A/65% B; 12–12.1 min, 60% A/40% B; 12.1–15 min, 60% A/40% B; 15–15.1 min, 15% A/85% B; 15.1–20 min, 15% A/85% B. Electrospray ionization was used with the following parameters: ion source voltage, −4500 or 5500 V; ion source temperature, 600 °C; curtain gas, 20 psi; atomizing gas, 60 psi; auxiliary gas, 60 psi. Scanning was performed via multiple reaction monitoring (MRM). Aliquots of all test samples were pooled and used as quality control (QC) samples, which were injected after every third sample to correct for deviations caused by instrumental drift and to evaluate data quality. After acquisition, raw data were converted to mzXML format with ProteoWizard and then processed with XCMS for peak alignment, retention time correction, and peak area extraction. Metabolite annotation was carried out against the online Human Metabolome Database (HMDB, http://www.hmdb.ca) using mass-to-charge ratio information and metabolite structures; structures were matched using primary (MS1) and secondary (MS2) spectra with a mass tolerance of < 25 ppm.
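To illustrate the < 25 ppm matching criterion, the sketch below computes mass errors in ppm against a small, made-up candidate table; the real annotation was performed against the HMDB with MS/MS spectra, and the masses listed here are approximate monoisotopic values included only for the example.

```python
# A minimal sketch of ppm-tolerance annotation, mirroring the < 25 ppm matching above.
# The candidate table is illustrative, not the HMDB itself; masses are approximate.
CANDIDATES = {
    "uridine": 244.0695,
    "inosine": 268.0808,
    "kynurenic acid": 189.0426,
}

def ppm_error(measured: float, theoretical: float) -> float:
    return (measured - theoretical) / theoretical * 1e6

def annotate(measured_neutral_mass: float, tolerance_ppm: float = 25.0):
    """Return all candidates whose mass matches within the ppm tolerance."""
    return [
        (name, round(ppm_error(measured_neutral_mass, mass), 1))
        for name, mass in CANDIDATES.items()
        if abs(ppm_error(measured_neutral_mass, mass)) <= tolerance_ppm
    ]

print(annotate(244.0710))  # matches uridine at roughly +6 ppm
```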
Statistical analyses

Lipid and metabolite abundances were determined from peak areas. Data were then processed and normalized to a reference sample by probabilistic quotient normalization (PQN) following the workflow outlined at https://www.metaboanalyst.ca/, which is designed for raw spectra processing and general statistical and functional analysis of targeted metabolomics data [26–28]. The maximum covariance between nontreated samples and MT- or TE-treated samples was assessed with partial least squares-discriminant analysis (PLS-DA) for the lipidomic data and with orthogonal partial least squares-discriminant analysis (OPLS-DA) for the metabolomic data. Correlations between lipid molecules were analyzed with correlation heatmaps, differences in lipid content across samples were visualized with hierarchical clustering analysis, and pathway analysis was carried out with the web-based tool MetPA.
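For clarity, a minimal sketch of probabilistic quotient normalization applied to a sample-by-feature peak-area matrix is shown below; MetaboAnalyst's implementation may differ in details such as the choice of reference spectrum.

```python
# A minimal sketch of probabilistic quotient normalization (PQN), assuming rows = samples
# and columns = lipids/metabolites. Toy data only.
import numpy as np

def pqn_normalize(X, reference=None):
    """Divide each sample by the median of its feature-wise quotients to the reference."""
    X = np.asarray(X, dtype=float)
    if reference is None:
        reference = np.median(X, axis=0)       # median spectrum as reference
    quotients = X / reference                  # feature-wise quotients per sample
    dilution = np.median(quotients, axis=1)    # one dilution factor per sample
    return X / dilution[:, None]

# Toy example: sample 2 is a 2x "diluted" copy of sample 1 and is rescaled to match it.
X = np.array([[10.0, 20.0, 30.0],
              [20.0, 40.0, 60.0]])
print(pqn_normalize(X))
```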
The raw data were logarithmically transformed and tested for normality before the means were compared between groups; when normality held, Student's t test was applied. To visualize the separation between groups, PLS-DA and OPLS-DA were performed using MetaboAnalyst 5.0 (http://www.metaboanalyst.ca/). Data are presented as the mean ± SEM. GraphPad Prism (version 8, GraphPad Software, San Diego, CA, USA) was used for statistical comparisons between the nontreatment and physiotherapy-based treatment groups with Student's t test (P < 0.05).
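A minimal sketch of the univariate workflow described above (log transformation, normality check, Student's t test), using made-up peak areas; it is illustrative only and not the study's analysis script.

```python
# Log-transform, check normality, then compare groups with Student's t test.
# The peak areas below are invented for the example.
import numpy as np
from scipy import stats

before = np.array([1200.0, 980.0, 1500.0, 1100.0, 1320.0, 900.0, 1450.0])
after = np.array([700.0, 650.0, 890.0, 760.0, 820.0, 600.0, 905.0])

log_before, log_after = np.log(before), np.log(after)

# Shapiro-Wilk as one possible normality check (the paper only states that normality was tested).
if stats.shapiro(log_before).pvalue > 0.05 and stats.shapiro(log_after).pvalue > 0.05:
    t, p = stats.ttest_ind(log_before, log_after)  # Student's t test, as stated in the paper
    print(f"t = {t:.2f}, P = {p:.4f}")
```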
Lipid composition analysis of cNLBP patients before and after manual therapy

We recruited 17 patients with cNLBP; their demographic information is shown in Table 1. The recruited subjects were randomly divided into the MT and TE groups, with no significant differences in age, weight, height, BMI, or VAS score between them. MT treatment was effective in alleviating cNLBP (Fig. 1): after treatment, the VAS score decreased in almost every patient in the MT group. Serum lipidomics were determined after six MT treatment sessions.
Since one participant's blood sample could not be collected after treatment, eight evaluable participants remained in the MT group.

Table 1 Demographic information of the two groups (mean ± SEM)

Variable                 MT             TE             P
N (male, count)          8              9
Age (years)              28.75 ± 7.26   28.11 ± 7.45   not significantly different at baseline
Height (m)               1.68 ± 0.08    1.65 ± 0.05    not significantly different at baseline
Weight (kg)              60.06 ± 8.80   58.00 ± 10.58  not significantly different at baseline
BMI (kg/m2)              21.25 ± 2.54   21.17 ± 3.08   not significantly different at baseline
VAS (before treatment)   5.76 ± 1.15    5.63 ± 1.88    not significantly different at baseline
VAS (after treatment)    2.80 ± 1.76    3.47 ± 1.83    not significantly different at baseline

Abbreviations: MT, manual therapy; TE, therapeutic exercise; BMI, body mass index; VAS, visual analog scale (0–10; 0 = no pain, 10 = maximal pain).

Fig. 1 Manual therapy and therapeutic exercise were effective in cNLBP amelioration. Seventeen patients were randomly divided into two groups: one group received manual therapy, and the other received therapeutic exercise. VAS was recorded before and after treatment. Asterisks indicate a significant difference from patients before treatment (Student's t test, **P < 0.01).

We completed lipid extraction and performed qualitative analysis. Lipidomic analysis identified 290 lipids, which fell into ten subclasses: phosphatidylcholine (PC), phosphatidylethanolamine (PE), lysophosphatidylcholine (LPC), lysophosphatidylethanolamine (LPE), triacylglycerol (TG), phosphatidylinositol (PI), sphingomyelin (SM), ceramide (Cer), hexosylceramide (HexCer), and fatty acid (FA). As a multivariate statistical method, PLS-DA maximizes the distinction between groups and helps identify differential metabolites. PLS-DA performed with the MetaboAnalyst R package showed a clear separation between the nontreated group (pink) and the MT-treated group (green), suggesting differential lipidomic profiles in cNLBP patients before and after manual therapy (Fig. 2A). Next, we used Pearson correlation analysis to measure the closeness of different lipids (Fig. 2B). Based on the measured lipid abundances, the PC/PE molar ratio decreased while the SM/Cer molar ratio increased after manual therapy. In addition, FA levels decreased, whereas LPC and LPE levels increased after MT treatment (Fig. 3A). A heatmap was generated to present the abundance of the lipids in each sample (Fig. 3B).

Fig. 2 Lipidomic profiles in cNLBP patients before and after treatment with MT. A PLS-DA of cNLBP patients treated with MT versus the control group; "1" represents the nontreated group, and "2" represents the MT-treated group. B Correlation analysis of the significantly different lipids; colors represent the level of Pearson's correlation coefficient.

Fig. 3 Lipid identification in cNLBP patients before and after treatment with MT. A Composition of nontreated and MT-treated samples based on the abundance of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipid classes in each sample; for the class name, red represents the control group and green the MT-treated group.
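As an illustration of how class-level ratios such as PC/PE and SM/Cer can be derived from per-lipid abundances, consider the sketch below; the lipid names and values are made up and stand in for the measured peak areas.

```python
# A minimal sketch of class-level ratios computed from per-lipid abundances (toy data).
import pandas as pd

lipids = pd.DataFrame({
    "lipid": ["PC(34:1)", "PC(36:2)", "PE(36:2)", "SM(d18:1/16:0)", "Cer(d18:1/24:1)"],
    "lipid_class": ["PC", "PC", "PE", "SM", "Cer"],
    "abundance": [120.0, 95.0, 60.0, 80.0, 20.0],
})

class_totals = lipids.groupby("lipid_class")["abundance"].sum()
pc_pe_ratio = class_totals["PC"] / class_totals["PE"]
sm_cer_ratio = class_totals["SM"] / class_totals["Cer"]
print(f"PC/PE = {pc_pe_ratio:.2f}, SM/Cer = {sm_cer_ratio:.2f}")
```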
Lipid composition analysis of cNLBP patients before and after therapeutic exercise (TE)

Therapeutic exercise is another effective method for improving cNLBP [29]. TE treatment was also effective in alleviating cNLBP in this study: after treatment, patients' VAS scores decreased significantly in the TE-treated group (Fig. 1).

We performed lipid extraction from cNLBP patients before and after therapeutic exercise and carried out qualitative analysis. PLS-DA indicated a distinct separation between the nontreated group (pink) and the TE-treated group (green) (Fig. 4A), and Pearson correlation analysis showed the closeness of different lipids (Fig. 4B). Based on the lipid abundances, the PC/PE molar ratio decreased while the SM/Cer molar ratio increased after therapeutic exercise. FA levels also decreased, while LPC and LPE levels increased after therapeutic exercise, similar to the results in the MT-treated group. Interestingly, TG (triacylglycerol) levels increased in the TE-treated group but decreased in the MT-treated group (Fig. 5A). A heatmap was produced to show the lipid abundances in the nontreated group (red) and the TE-treated group (green) (Fig. 5B).

Fig. 4 Lipidomic profiles in cNLBP patients before and after treatment with TE. A PLS-DA of cNLBP patients treated with TE versus the control group; "1" represents the nontreated group, and "2" represents the TE-treated group. B Correlation analysis of the significantly different lipids; colors represent the level of Pearson's correlation coefficient.

Fig. 5 Lipid identification in cNLBP patients before and after treatment with TE. A Composition of nontreated and TE-treated samples based on the abundance of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipid classes in each sample; for the class name, red represents the control group and green the TE-treated group.

Metabolite alterations in cNLBP patients

To further identify therapeutic targets for physiotherapy-based treatment of cNLBP, we analyzed the metabolome of cNLBP patients before and after treatment. The metabolomic analysis annotated and quantified 171 metabolites. KEGG-based enrichment analysis showed that these metabolites were enriched in tryptophan and aspartate metabolism, ammonia recycling, and methionine, glycine, and serine metabolism, among other pathways (Fig. 6A). Combining enrichment and topology analysis, pathway analysis was carried out for all patients, and a total of 14 pathways were significantly changed (P < 0.05). These pathways mainly included aminoacyl-tRNA biosynthesis; arginine biosynthesis; valine, leucine, and isoleucine biosynthesis; amino acid metabolism; pyrimidine and purine metabolism; ascorbate and aldarate metabolism; taurine and hypotaurine metabolism; beta-alanine metabolism; and nicotinate and nicotinamide metabolism (Fig. 6B).

Fig. 6 Metabolite alterations in cNLBP patients. A Pathway enrichment analysis revealed the metabolic pathways enriched in cNLBP patients (P value cutoff ≤ 0.05). B Results of the pathway analysis carried out with the web-based tool MetPA using the concentrations of metabolites identified in cNLBP patients. Total cmpd, total number of compounds in the pathway; Hits, number of matched compounds from the uploaded data; Raw P, original P value; Impact, pathway impact value calculated from pathway topology analysis.
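The "Raw P" values reported by MetPA come from an over-representation test; the sketch below shows the hypergeometric form of such a test. Only the total of 171 annotated metabolites is taken from this study; the pathway size, number of altered metabolites, and hit count are illustrative, and MetPA additionally weights results by pathway topology to obtain the impact value.

```python
# A minimal sketch of a pathway over-representation (enrichment) test.
from scipy.stats import hypergeom

total_annotated = 171   # metabolites quantified in this study
pathway_size = 39       # metabolites in the pathway ("Total cmpd"), illustrative
altered = 20            # significantly changed metabolites, illustrative
hits = 5                # altered metabolites that belong to the pathway, illustrative

# P(X >= hits) when drawing `altered` metabolites from `total_annotated`
raw_p = hypergeom.sf(hits - 1, total_annotated, pathway_size, altered)
print(f"raw P = {raw_p:.4f}")
```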
Metabolite profiles of cNLBP patients treated with manual therapy

Serum metabolome analysis was performed on samples collected after MT treatment. Since two participants' blood samples could not be collected after treatment, seven participants were included in the MT group for the metabolomic analysis. Orthogonal PLS-DA was performed to assess the suitability of the model (Fig. 7A). The OPLS-DA score plot revealed good discrimination between the MT-treated and untreated samples, with no outliers (Fig. 7A), indicating that the metabolomic analysis adequately reflected the metabolic profile alterations associated with MT treatment. The top 20 metabolites with VIP scores > 1.5 derived from the OPLS-DA model were uridine, guanosine, kynurenic acid, 2'-deoxyadenosine, allantoin, stachydrine, inosine, uridine 5'-monophosphate, nicotinuric acid, 3,4-dihydroxybenzeneacetic acid, 2'-deoxyuridine, 2'-deoxyguanosine, 4-aminohippuric acid, cytidine, pyridoxylamine, oxidized glutathione, desaminotyrosine, L-valine, N-acetyl-5-hydroxytryptamine, and 4-acetamidobutanoic acid (Fig. 7B). Finally, we screened out metabolites with fold changes > 2 in the MT treatment group compared with the nontreated group: cytidine, uridine 5'-monophosphate, kynurenic acid, guanosine, inosine, 2'-deoxyadenosine, and stachydrine (Fig. 8A). KEGG-based enrichment analysis revealed that these metabolites were significantly enriched in the pyrimidine metabolism and purine metabolism pathways, suggesting that MT treatment may relieve pain partly by altering these two pathways (Fig. 8B).

Fig. 7 Discrimination of patients before and after manual therapy by orthogonal PLS-DA of the metabolomic data. A OPLS-DA score plot comparing nontreated patients (indicated as 1) and patients after manual therapy (indicated as 2). B Variable importance in projection (VIP) features for the groups from the OPLS-DA analysis.

Fig. 8 Manual therapy altered target metabolites in patients with cNLBP. A Comparison of the abundances of cytidine, uridine 5'-monophosphate, kynurenic acid, guanosine, inosine, 2'-deoxyadenosine, stachydrine, and N-acetyl-5-hydroxytryptamine in patients treated with and without manual therapy for 2 weeks. Different letters indicate a significant difference from nontreated patients (Student's t test, P < 0.05). B Pathway enrichment analysis revealed that the pyrimidine metabolism and purine metabolism pathways were enriched in patients treated with manual therapy (P value cutoff ≤ 0.05).
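A minimal sketch of the fold-change screen described above, retaining metabolites whose mean abundance changes more than two-fold after treatment; the data are simulated, and the metabolite names are reused only as column labels.

```python
# Fold-change screen (> 2 or < 0.5) on simulated pre/post abundance tables.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
metabolites = ["cytidine", "guanosine", "inosine", "stachydrine", "L-valine"]
pre = pd.DataFrame(rng.uniform(50, 100, size=(7, 5)), columns=metabolites)
post = pre * [3.1, 2.4, 2.2, 2.8, 1.1]        # simulated treatment effect

fold_change = post.mean() / pre.mean()
selected = fold_change[(fold_change > 2) | (fold_change < 0.5)]
print(selected.sort_values(ascending=False))
```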
Metabolite profiles of cNLBP patients treated with therapeutic exercise

We also performed metabolite identification in the serum metabolomes pooled from cNLBP patients treated with TE. Since one participant's blood sample could not be collected after treatment, seven participants were included in the TE group for the metabolomic analysis. Orthogonal PLS-DA was performed on the untreated and TE-treated samples, and the score plot revealed good discrimination between the TE treatment group and the nontreated samples (Fig. 9A).
The top 20 metabolites with VIP scores > 1.5 derived from the OPLS-DA model were liothyronine, 2'-deoxyadenosine, uridine, L-homocystine, N-acetyl-5-hydroxytryptamine, stachydrine, γ-aminobutyric acid, nicotinuric acid, glutaric acid, 2'-deoxyuridine, adenine, N-acetyl-L-aspartic acid, cinnamic acid, cytidine, uridine 5'-monophosphate, D-(-)-mandelic acid, L-cysteine, 4-aminobenzoic acid, 5'-deoxyadenosine, and D-sorbitol (Fig. 9B).

Fig. 9 Discrimination of patients before and after therapeutic exercise by orthogonal PLS-DA of the metabolomic data. A OPLS-DA score plot comparing nontreated patients (indicated as 1) and patients after therapeutic exercise (indicated as 2). B Variable importance in projection (VIP) features for the groups from the OPLS-DA analysis.

Finally, nine metabolites with fold changes > 2 in the TE treatment group were identified: uridine 5'-monophosphate, thymidine, 2'-deoxyadenosine, 5'-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid (Fig. 10A). KEGG-based enrichment analysis revealed that these nine metabolites were significantly enriched in the pyrimidine metabolism, tyrosine metabolism, and galactose metabolism pathways, suggesting that TE treatment may relieve pain partly by altering these three pathways (Fig. 10B).

Fig. 10 Therapeutic exercise altered target metabolites in patients with cNLBP. A Comparison of the abundances of uridine 5'-monophosphate, thymidine, 2'-deoxyadenosine, 5'-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid in patients treated with therapeutic exercise for two weeks or without treatment. Different letters indicate a significant difference from nontreated patients (Student's t test, P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism, tyrosine metabolism, and galactose metabolism were enriched in patients treated with therapeutic exercise (P value cutoff ≤ 0.05).
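For readers unfamiliar with VIP scores, the sketch below computes them from a plain PLS model on toy data; this approximates, but is not identical to, the OPLS-DA VIP ranking produced by MetaboAnalyst, and the data dimensions are invented for the example.

```python
# A minimal sketch of variable importance in projection (VIP) scores from a PLS model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """VIP_j = sqrt( p * sum_a SS_a * (w_ja / ||w_a||)^2 / sum_a SS_a )."""
    t = pls.transform(X)                      # component scores of the training samples
    w = pls.x_weights_                        # (n_features, n_components)
    q = pls.y_loadings_                       # (n_targets, n_components)
    p = w.shape[0]
    ss = (t ** 2).sum(axis=0) * (q ** 2).sum(axis=0)   # variance explained per component
    weights = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (weights @ ss) / ss.sum())

rng = np.random.default_rng(1)
X = rng.normal(size=(14, 20))                 # 14 samples x 20 metabolites (toy data)
y = np.repeat([0.0, 1.0], 7)                  # before vs. after treatment
pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls, X)
print(np.flatnonzero(vip > 1.5))              # indices of candidate metabolites
```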
The orthogonal PLS-DA score plot revealed good discrimination between the TE treatment group and the nontreated samples (Fig. 9A). The VIP score > 1.5 derived from orthogonal PLS-DA, based on the first 20 metabolites, revealed liothyronine, 2’-deoxyadenosine, uridine, L − homocystine, N-acetyl-5-hydroxytryptamine, stachydrine, γ-aminobutyric acid, nicotinuric acid, glutaric acid, 2’-deoxyuridine, adenine, N-acetyl-L-aspartic acid, cinnamic acid, cytidine, uridine 5’-monophosphate, D-(-)-mandelic acid, L-cysteine, 4-aminobenzoic acid, 5’-deoxyadenosine, and D-sorbitol with the highest VIP scores for TE treatment (Fig. 9B).\n\nFig. 9Discrimination through orthogonal PLS-DA of patients before and after therapeutic exercise examined by metabolomic analysis. A Orthogonal PLS-DA showing score plots comparing nontreated patients (indicated in the legend with 1) and patients after therapeutic exercise (indicated with 2). B Variable importance of projection (VIP) features for the groups from orthogonal PLS-DA analysis\nDiscrimination through orthogonal PLS-DA of patients before and after therapeutic exercise examined by metabolomic analysis. A Orthogonal PLS-DA showing score plots comparing nontreated patients (indicated in the legend with 1) and patients after therapeutic exercise (indicated with 2). B Variable importance of projection (VIP) features for the groups from orthogonal PLS-DA analysis\nFinally, nine metabolites with fold changes > 2 in the TE treatment group were found, including uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid (Fig. 10A). The KEGG-based enrichment analysis revealed that these nine metabolites were significantly enriched in pyrimidine metabolism, tyrosine metabolism, and galactose metabolism pathways, demonstrating that TE treatment relieves pain by altering the metabolism of these three pathways (Fig. 10B).\n\nFig. 10Therapeutic exercise could alter target metabolites in patients with cNLBP. A Comparison of the volumes of uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid in patients treated with therapeutic exercise for two weeks or without therapeutic treatment. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism, tyrosine metabolism, and galactose metabolism were enriched in patients treated with therapeutic exercise (P value cutoff ≤ 0.05)\nTherapeutic exercise could alter target metabolites in patients with cNLBP. A Comparison of the volumes of uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid in patients treated with therapeutic exercise for two weeks or without therapeutic treatment. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism, tyrosine metabolism, and galactose metabolism were enriched in patients treated with therapeutic exercise (P value cutoff ≤ 0.05)", "We recruited 17 patients with cNLBP whose demographic information is shown in Table 1. 
The recruited subjects were randomly divided into the MT and TE groups, with no significant differences in age, weight, height, BMI, or VAS score between them. We found that MT treatment was effective in alleviating cNLBP (Fig. 1). After treatment, the VAS score decreased in almost the entire MT group (Fig. 1). Serum lipidomics were determined after six MT treatment sessions. Since one participant’s blood sample could not be collected after treatment, there were eight effective participants in the MT group.\n\nTable 1 Demographic information of the two groups (M ± SEM)\nVariable | MT | TE | P\nN (male, count) | 8 | 9 | —\nAge (years) | 28.75 ± 7.26 | 28.11 ± 7.45 | not significantly different at baseline\nHeight (m) | 1.68 ± 0.08 | 1.65 ± 0.05 | not significantly different at baseline\nWeight (kg) | 60.06 ± 8.80 | 58.00 ± 10.58 | not significantly different at baseline\nBMI (kg/m2) | 21.25 ± 2.54 | 21.17 ± 3.08 | not significantly different at baseline\nVAS (before treatment) | 5.76 ± 1.15 | 5.63 ± 1.88 | not significantly different at baseline\nVAS (after treatment) | 2.80 ± 1.76 | 3.47 ± 1.83 | not significantly different at baseline\nAbbreviations: MT Manual therapy, TE therapeutic exercise, BMI Body mass index, VAS Visual analog scale (0–10; VAS 0 = no pain; VAS 10 = maximal pain)\n\nFig. 1 Manual therapy and therapeutic exercise were effective in cNLBP amelioration. Seventeen patients were randomly divided into two groups: one group received manual therapy, and the other group received therapeutic exercise. VAS was recorded before and after treatment. Asterisks show a significant difference from patients before treatment using Student’s t tests (**P < 0.01)\nWe completed lipid extraction and performed qualitative analysis. Through lipidomic analysis, we identified 290 lipids, which can be divided into ten subclasses: phosphatidylcholine (PC), phosphatidylethanolamine (PE), lysophosphatidylcholine (LPC), lysophosphatidylethanolamine (LPE), triacylglycerol (TG), phosphatidylinositol (PI), sphingomyelin (SM), ceramide (Cer), hexosylceramide (HexCer), and fatty acid (FA). As a multivariate statistical analysis, PLS-DA can maximize the distinction between groups and help discover differential metabolites. We performed PLS-DA with the MetaboAnalyst R software package and found a clear separation between the nontreated group (pink) and the MT-treated group (green), suggesting differential lipidomic profiles in cNLBP patients before and after manual therapy (Fig. 2A). Next, we used Pearson correlation analysis to measure the closeness of different lipids (Fig. 2B). Using volume measurements of lipids, we analyzed the lipidomic composition of cNLBP patients before and after manual therapy and found a decrease in phosphatidylcholine (PC)/phosphatidylethanolamine (PE) molar ratios but an increase in sphingomyelin (SM)/ceramide (Cer) molar ratios in the patients after manual therapy.
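The PC/PE and SM/Cer ratios above are class-level quantities derived from the per-lipid measurements. As a rough illustration of how such ratios can be computed from a long-format abundance table (a generic sketch, not the authors' pipeline; the column names 'sample', 'group', 'class', and 'abundance' are hypothetical, and measured abundances only approximate molar amounts unless each class is calibrated):

```python
# Generic sketch (not the authors' pipeline): sum per-lipid abundances into class
# totals for each sample, then form the PC/PE and SM/Cer ratios discussed above.
import pandas as pd

def class_ratios(lipid_table: pd.DataFrame) -> pd.DataFrame:
    """lipid_table columns (hypothetical): sample, group, class, abundance."""
    totals = (lipid_table
              .groupby(["sample", "group", "class"])["abundance"].sum()
              .unstack("class"))
    totals["PC/PE"] = totals["PC"] / totals["PE"]
    totals["SM/Cer"] = totals["SM"] / totals["Cer"]
    return totals[["PC/PE", "SM/Cer"]].reset_index()

# Example use: mean ratio per group (e.g. before vs. after manual therapy)
# print(class_ratios(lipid_table).groupby("group")[["PC/PE", "SM/Cer"]].mean())
```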
Meanwhile, there were also decreases in the volumes of fatty acids (FAs) and increases in lysophosphatidylcholine (LPC) and lysophosphatidylethanolamine (LPE) when cNLBP patients were treated with MT (Fig. 3 A). We also generated a heatmap to present the volume of the lipids in each sample (Fig. 3B).\n\nFig. 2Lipidomic profiles in cNLBP patients before and after treatment with MT. A The PLS-DA analysis of cNLBP patients treated with MT versus the control group. “1” represents the nontreated group, and “2” represents the MT-treated group. B Correlation analysis of the significantly different lipids. Different colors represent the level of Pearson’s correlation coefficient\nLipidomic profiles in cNLBP patients before and after treatment with MT. A The PLS-DA analysis of cNLBP patients treated with MT versus the control group. “1” represents the nontreated group, and “2” represents the MT-treated group. B Correlation analysis of the significantly different lipids. Different colors represent the level of Pearson’s correlation coefficient\n\nFig. 3Lipid identification in cNLBP patients before and after treatment with MT. A The composition of nontreated samples and MT-treated samples based on the volume of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipids in each sample. For class name, red represents the control group, and green represents the MT-treated group\nLipid identification in cNLBP patients before and after treatment with MT. A The composition of nontreated samples and MT-treated samples based on the volume of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipids in each sample. For class name, red represents the control group, and green represents the MT-treated group", "Therapeutic exercise (TE) is another effective method for improving cNLBP [29]. We found that TE treatment was also effective in alleviating cNLBP (Fig. 1). After treatment, patients’ VAS scores decreased significantly in the TE-treated group (Fig. 1).\nWe performed lipid extraction from cNLBP patients before and after therapeutic exercise and performed qualitative analysis. PLS-DA results indicated a distinct separation between the nontreated group (pink) and the TE-treated group (green) (Fig. 4A). Pearson correlation analysis showed the closeness of different lipids (Fig. 4B). Based on the volume of lipids, PC/PE molar ratios decreased, while SM/Cer molar ratios increased in the patients after therapeutic exercise. The volume of FA also decreased, while the volume of LPC and LPE increased in cNLBP patients after therapeutic exercise, similar to the results of the MT-treated group. Interestingly, the volume of TG (triacylglycerol) increased in the TE-treated group, while it decreased in the MT-treated group (Fig. 5A). A heatmap was produced to indicate the volume of lipids in the nontreated group (red) and the TE-treated group (green) (Fig. 5B).\n\nFig. 4Lipidomic profiles in cNLBP patients before and after treatment with TE. A PLS-DA analysis of cNLBP patients treated with TE versus the control group. “1” represents the nontreated group, and “2” represents the TE-treated group. B Correlation analysis of the significantly different lipids. Different colors represent the level of Pearson’s correlation coefficient\nLipidomic profiles in cNLBP patients before and after treatment with TE. A PLS-DA analysis of cNLBP patients treated with TE versus the control group. “1” represents the nontreated group, and “2” represents the TE-treated group. 
B Correlation analysis of the significantly different lipids. Different colors represent the level of Pearson’s correlation coefficient\n\nFig. 5Lipid identification in cNLBP patients before and after treatment with TE. A The composition of nontreated samples and TE-treated samples based on the volume of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipids in each sample. For class name, red represents the control group, and green represents the TE-treated group\nLipid identification in cNLBP patients before and after treatment with TE. A The composition of nontreated samples and TE-treated samples based on the volume of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipids in each sample. For class name, red represents the control group, and green represents the TE-treated group", "To further identify therapeutic targets for cNLBP physiotherapy-based treatment, we analyzed the metabolome of cNLBP patients before and after treatment. In our sample of patients, the metabolomic analysis annotated and quantified 171 metabolites. Through KEGG-based enrichment analysis, these metabolites were enriched in the metabolism of tryptophan or aspartate, ammonia recycling, the metabolism of methionine or glycine, and serine, among others (Fig. 6A). Combining enrichment and topology analysis, pathway analysis was carried out for all patients. We found a total of 14 pathways that were significantly changed in patients (P value < 0.05). These metabolites mainly belonged to aminoacyl-tRNA biosynthesis; arginine biosynthesis; valine, leucine and isoleucine biosynthesis; amino acid metabolism; pyrimidine and purine metabolism; ascorbate and aldarate metabolism; taurine and hypotaurine metabolism; beta-alanine metabolism; and nicotinate and nicotinamide metabolism (Fig. 6B).\n\nFig. 6Metabolite alteration in cNLBP patients. A Pathway enrichment analysis revealed different metabolic pathways enriched in cNLBP patients (P value cutoff ≤ 0.05). B The results from the pathway analysis carried out with the web-based tool METPA using the concentrations of metabolites identified in cNLBP patients. Total cmpd, the total number of compounds in the pathway. Hits are the matched number from the uploaded data. Raw P is the original P value. Impact is the pathway impact value calculated from pathway topology analysis\nMetabolite alteration in cNLBP patients. A Pathway enrichment analysis revealed different metabolic pathways enriched in cNLBP patients (P value cutoff ≤ 0.05). B The results from the pathway analysis carried out with the web-based tool METPA using the concentrations of metabolites identified in cNLBP patients. Total cmpd, the total number of compounds in the pathway. Hits are the matched number from the uploaded data. Raw P is the original P value. Impact is the pathway impact value calculated from pathway topology analysis", "Serum metabolome analysis was performed on samples collected after MT treatment. Since two participants’ blood samples could not be collected after treatment, there were seven included participants in the MT group for metabolomes. Orthogonal PLS-DA was performed to demonstrate the suitability of the system (Fig. 7A). The orthogonal PLS-DA score plot revealed good discrimination of the MT treatment group against untreated samples (Fig. 7A). MT-treated and nontreated samples were separated with no outliers (Fig. 
7A), demonstrating that our metabolomic analysis could sufficiently reflect the metabolic profile alteration of MT treatment. The VIP scores derived from orthogonal PLS-DA, based on the first 20 metabolites with a VIP score > 1.5, revealed uridine, guanosine, kynurenic acid, 2’-deoxyadenosine, allantoin, stachydrine, inosine, uridine 5’-monophosphate, nicotinuric acid, 3,4-dihydroxybenzeneacetic acid, 2’-deoxyuridine, 2’-deoxyguanosine, 4-aminohippuric acid, cytidine, pyridoxylamine, glutathione oxidized, desaminotyrosine, L − valine, N-acetyl-5-hydroxytryptamine, and 4-acetamidobutanoic acid with the highest VIP scores for MT treatment (Fig. 7B). Finally, we screened out metabolites (fold changes > 2) in the MT treatment group compared with the nontreated group, which were cytidine, uridine 5’-monophosphate, kynurenic acid, guanosine, inosine, 2’-deoxyadenosine, and stachydrine (Fig. 8A). The KEGG-based enrichment analysis revealed that these metabolites were significantly enriched in pyrimidine metabolism and purine metabolism pathways, demonstrating that MT treatment relieves pain by altering the metabolism of these two pathways (Fig. 8B).\n\nFig. 7Discrimination through orthogonal PLS-DA of patients before and after manual therapy analyzed based on metabolomics analysis. A Orthogonal PLS-DA showing score plots comparing nontreated patients (indicated in the legend as 1) and patients after manual therapy (indicated as 2). B Variable importance of projection (VIP) features for the groups from orthogonal PLS-DA analysis\nDiscrimination through orthogonal PLS-DA of patients before and after manual therapy analyzed based on metabolomics analysis. A Orthogonal PLS-DA showing score plots comparing nontreated patients (indicated in the legend as 1) and patients after manual therapy (indicated as 2). B Variable importance of projection (VIP) features for the groups from orthogonal PLS-DA analysis\n\nFig. 8Manual therapy could alter target metabolites in patients with cNLBP. A Comparison of the volumes of cytidine, uridine 5’-monophosphate, kynurenic acid, guanosine, inosine, 2’-deoxyadenosine, stachydrine, and N-acetyl-5-hydroxytryptamine in patients treated with and without manual therapy for 2 weeks. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism and purine metabolism pathways were enriched in patients treated with manual therapy (P value cutoff ≤ 0.05)\nManual therapy could alter target metabolites in patients with cNLBP. A Comparison of the volumes of cytidine, uridine 5’-monophosphate, kynurenic acid, guanosine, inosine, 2’-deoxyadenosine, stachydrine, and N-acetyl-5-hydroxytryptamine in patients treated with and without manual therapy for 2 weeks. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism and purine metabolism pathways were enriched in patients treated with manual therapy (P value cutoff ≤ 0.05)", "We also performed metabolite identification in the serum metabolomes pooled from cNLBP patients treated with TE. Since one participant’s blood sample could not be collected after treatment, there were seven included participants in the TE group for metabolomes. Orthogonal PLS-DA was performed on untreated samples and TE treatment samples. 
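The screening just described for the MT group (and repeated below for TE) combines a VIP ranking from the (O)PLS-DA model with a fold-change cutoff and a pathway over-representation test. The VIP values come from the fitted model itself and are not re-derived here, but the other two steps can be sketched as follows; this is a hedged illustration, not the MetaboAnalyst/KEGG implementation, and the metabolite names and pathway membership list are assumed inputs.

```python
# Hedged sketch of the screening steps: fold change > 2 between post- and
# pre-treatment means, then a hypergeometric over-representation test for one pathway.
import numpy as np
from scipy.stats import hypergeom

def fold_change_hits(pre, post, names, threshold=2.0):
    """pre/post: samples x metabolites arrays with matching columns."""
    fc = np.asarray(post).mean(axis=0) / np.asarray(pre).mean(axis=0)
    return [n for n, f in zip(names, fc) if f > threshold or f < 1.0 / threshold]

def enrichment_p(hits, pathway_members, background):
    """P(overlap >= observed) when drawing len(hits) metabolites from the background."""
    overlap = len(set(hits) & set(pathway_members))
    M = len(background)                              # e.g. all annotated metabolites
    n = len(set(pathway_members) & set(background))  # pathway members that were measured
    return hypergeom.sf(overlap - 1, M, n, len(hits))
```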
The orthogonal PLS-DA score plot revealed good discrimination between the TE treatment group and the nontreated samples (Fig. 9A). The VIP score > 1.5 derived from orthogonal PLS-DA, based on the first 20 metabolites, revealed liothyronine, 2’-deoxyadenosine, uridine, L − homocystine, N-acetyl-5-hydroxytryptamine, stachydrine, γ-aminobutyric acid, nicotinuric acid, glutaric acid, 2’-deoxyuridine, adenine, N-acetyl-L-aspartic acid, cinnamic acid, cytidine, uridine 5’-monophosphate, D-(-)-mandelic acid, L-cysteine, 4-aminobenzoic acid, 5’-deoxyadenosine, and D-sorbitol with the highest VIP scores for TE treatment (Fig. 9B).\n\nFig. 9Discrimination through orthogonal PLS-DA of patients before and after therapeutic exercise examined by metabolomic analysis. A Orthogonal PLS-DA showing score plots comparing nontreated patients (indicated in the legend with 1) and patients after therapeutic exercise (indicated with 2). B Variable importance of projection (VIP) features for the groups from orthogonal PLS-DA analysis\nDiscrimination through orthogonal PLS-DA of patients before and after therapeutic exercise examined by metabolomic analysis. A Orthogonal PLS-DA showing score plots comparing nontreated patients (indicated in the legend with 1) and patients after therapeutic exercise (indicated with 2). B Variable importance of projection (VIP) features for the groups from orthogonal PLS-DA analysis\nFinally, nine metabolites with fold changes > 2 in the TE treatment group were found, including uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid (Fig. 10A). The KEGG-based enrichment analysis revealed that these nine metabolites were significantly enriched in pyrimidine metabolism, tyrosine metabolism, and galactose metabolism pathways, demonstrating that TE treatment relieves pain by altering the metabolism of these three pathways (Fig. 10B).\n\nFig. 10Therapeutic exercise could alter target metabolites in patients with cNLBP. A Comparison of the volumes of uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid in patients treated with therapeutic exercise for two weeks or without therapeutic treatment. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism, tyrosine metabolism, and galactose metabolism were enriched in patients treated with therapeutic exercise (P value cutoff ≤ 0.05)\nTherapeutic exercise could alter target metabolites in patients with cNLBP. A Comparison of the volumes of uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid in patients treated with therapeutic exercise for two weeks or without therapeutic treatment. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism, tyrosine metabolism, and galactose metabolism were enriched in patients treated with therapeutic exercise (P value cutoff ≤ 0.05)", "Lipids can act as bioactive compounds that play critical roles in signal transduction. The balance of cellular PC/PE molar ratios is crucial to maintain cell survival and participates in the regulation of many diseases. 
However, PC/PE molar ratios associated with physiotherapy with cNLBP have not been studied. In this study, we found that PC/PE molar ratios decreased in cNLBP patients either treated with MT or treated with TE when compared with control groups, suggesting that PC/PE molar ratios are involved in cNLBP physiotherapy-based treatment. We still do not know the exact reason why decreased PC/PE molar ratios induced by MT or TE can cause cNLBP relief. However, we believe that the most likely explanation is that decreased PC/PE can alter the properties of membranes and inhibit TNFa-induced inflammatory responses significantly, which is an important inducer of sensory nerve growth [30, 31]. Studies have also shown that the growth of sensory nerves into the inner layer of IVDs (intervertebral discs, IVDs) is a potential factor in low back pain [32, 33].\nSphingolipids are another kind of bioactive lipid that can be used as powerful signaling molecules, and dysregulation of sphingolipid metabolism changes is known to have a significant impact on signal transduction [34]. Sphingomyelin (SM) and ceramide (Cer) are the most enriched classes of sphingolipids, and the balance between SM and Cer is associated with human disease. For example, SM/Cer imbalance can promote lipid dysregulation and apoptosis [35]. Studies have shown that altered sphingolipid metabolism causes neuropathic pain in humans [36]. N,N-dimethylsphingosine induces mechanical hypersensitivity, and the SM/Cer ratio is altered in rats with neuropathic pain [37]. There has, however, been no investigation examining whether the changes in the SM/Cer ratio were related to the physiotherapy of cNLBP. In our study, we found that the SM/Cer ratio increased in cNLBP patients treated with MT or TE compared with control groups, suggesting that SM/Cer ratio alteration is involved in cNLBP physiotherapy-based treatment. However, thus far, there has been no study on the mechanism of the SM/Cer ratio in cNLBP. SM can be hydrolyzed to produce biologically active molecules, such as ceramide and sphingosine, which can be used as potent inhibitors of protein kinase C (PKC) [38]. Therefore, SM/Cer ratio alterations can control many signaling pathways related to inflammation through PKC to relieve low back pain, since inflammation is the primary source of low back pain [28, 39]. In addition, SM/Cer ratio alterations can decrease chronic inflammatory responses through ER stress [40].\nWe further performed metabolome analysis to identify the underlying mechanisms in cNLBP physiotherapy-based treatment through MT or TE. We found that pyrimidine metabolism and purine metabolism pathways related to MT caused cNLBP amelioration, while pyrimidine metabolism, tyrosine metabolism, and galactose metabolism pathways were responsible for TE-generated cNLBP amelioration. There is literature demonstrating that pyrimidines and purine have widespread functions in responding to pain therapeutics [41, 42]. For example, the nucleotides cytidine and uridine are helpful for dealing with low back pain [43]. The amount of tyramine sulphate was significantly lower in pain patients than in control patients [41]. Purine antagonists can reduce chronic pain and inflammatory pain. Adenosine and its analogs have the ability to suppress nociception by activating adenosine receptors [44]. In addition to the pyrimidine pathway, tyrosine metabolism is also associated with pain [45]. In headache patients, tyrosine metabolism levels are abnormal [46]. 
Tyrosine can be hydrolyzed to DOPA, dopamine (DA), and noradrenaline (NE), which govern pain and vegetative functions [47]. Galactose is not only used as a primary source of energy but is also considered a candidate for pharmacological applications [48].\nIn comparing the lipidomic and metabolomic profiles of patients with cNLBP before and after treatment, we found that alterations in the PC/PE ratio, the SM/Cer ratio, and target metabolites may underlie cNLBP amelioration by MT or TE. However, the relationship between lipids and target metabolites is still unclear: we do not know whether lipid alteration affects metabolite levels or whether metabolite levels affect lipid alteration. Many studies have demonstrated that lipids can affect gene expression, which can then alter the levels of metabolites [49, 50]. For example, S1P can inhibit the activity of histone deacetylases by binding specifically to HDAC1 and HDAC2, thereby contributing to the epigenetic regulation of gene expression [49]. Lipids can also directly affect the activity of protein kinase C, an important downstream target of Cer, and can modulate pyrimidine biosynthesis [50]. In turn, metabolite alteration can also affect lipid metabolism. For example, prenyloxycoumarin, a secondary metabolite, can be used as a modulator of lipid metabolism [51]. Very-low-density lipoproteins (VLDL) are a risk factor for Modic changes, which can result in low back pain (LBP), and their receptors can enhance lipid metabolism and promote the expression of interleukin-33 (IL-33) [9, 52]. More studies are needed, however, to investigate the relationship between lipid and metabolite changes during the reduction of cNLBP by MT or TE. Our study identified the target lipids and metabolites involved in the improvement of cNLBP treated with MT or TE, which expands our knowledge of cNLBP physiotherapy-based treatment.\nStudy strengths and limitations The greatest strength of this study is that it reveals a possible mechanism of cNLBP amelioration by MT or TE treatment from the perspective of lipidomics and metabolomics in cNLBP patients. However, the experiment only examined alterations in lipids and metabolites, and the deeper mechanisms by which these lipids and metabolites affect cNLBP physiotherapy-based treatment remain uncertain. Therefore, more evidence is needed.", "The greatest strength of this study is that it reveals a possible mechanism of cNLBP amelioration by MT or TE treatment from the perspective of lipidomics and metabolomics in cNLBP patients. However, the experiment only examined alterations in lipids and metabolites, and the deeper mechanisms by which these lipids and metabolites affect cNLBP physiotherapy-based treatment remain uncertain. Therefore, more evidence is needed.", "MT and TE were both effective strategies for alleviating cNLBP. A possible mechanism is that MT or TE treatment caused alterations in the lipidome and metabolome of cNLBP patients. 
This study is the first to show that the effect of cNLBP physiotherapy-based treatment is associated with specific lipids and metabolites. These results indicate that physiotherapy, or agents targeting these lipid and metabolite alterations, might be useful for the treatment of cNLBP." ]
[ "introduction", "materials|methods", null, null, null, null, null, "results", null, null, null, null, null, "discussion", null, null ]
[ "Chronic nonspecific low back pain", "Lipid", "Metabolite", "Manual therapy", "Therapeutic exercise" ]
Introduction: Chronic nonspecific low back pain (cNLBP) is not caused by recognizable pathology; it lasts for more than three months and occurs between the lower rib and the inferior gluteal fold [1]. It is estimated that approximately four out of five people have lower back pain at some time during their lives, and it greatly affects their quality of life, productivity, and ability to work [2]. cNLBP can be caused by many factors, such as lumbar strain, nerve irritation, and bony encroachment. However, the etiology of cNLBP is typically unknown and poorly understood [3]. Medical treatments and physiotherapy are recommended to treat and resolve issues associated with cNLBP [3]. Therapeutic exercise and manual therapy have a lower risk of increasing future back injuries or work absence and are more effective treatment options for chronic pain than medication or surgery, and they can be performed at rehabilitation clinics [4–6]. Exercise therapy is a widely used strategy to cope with low back pain that includes a heterogeneous group of interventions ranging from aerobic exercise or general physical fitness to muscle strengthening and various types of flexibility and stretching exercises [7]. Manual therapy is another effective method to deal with low back pain, in which hands are used to apply a force with a therapeutic intent, including massage, joint mobilization/manipulation, myofascial release, nerve manipulation, strain/counter strain, and acupressure [8]. However, the reasons therapeutic exercise and manual therapy ameliorate cNLBP are still unknown. With the development of lipidomics and metabolomics, many studies have indicated that lipid or metabolite alterations are associated with chronic pain [9, 10]. Lipids, as primary metabolites, are not only structural components of membranes but can also be used as signaling molecules to regulate many physiological activities. For example, fatty acid (FA) chains can be saturated (SFA), monounsaturated (MUFA), or polyunsaturated (PUFA), and the ratio of saturated to unsaturated FAs participates in the regulation of longevity [11]. Phosphatidylcholine (PC) phosphatidylethanolamine (PE) is abundant in membranes. In mammals, cellular PC/PE molar ratios that are out of balance and increase or decrease abnormally can cause diseases [12]. For example, a reduced PC/PE ratio can protect mice against atherosclerosis [13]. Decreasing the PC/PE molar ratio can change the intracellular energy supply by activating the electron transport chain and mitochondrial respiration [14]. Lysophosphatidylcholine (LPC) 16:0 correlated with pain outcomes in a cohort of patients with osteoarthritis [15]. Apart from phospholipids, studies have shown that sphingolipid metabolism also contributes to chronic pain. Increased ceramide and sphingosine-1-phosphate (S1P) are involved in the progression of chronic pain in the nervous system [16]. Previous studies reported that metabolites were also associated with pain. Patients with neuropathic pain showed elevated choline-containing compounds in response to myoinositol [tCho/mI] under magnetic resonance spectroscopy [17]. Flavonoids are the most common secondary plant metabolites used as tranquilizers in folkloric medicine and have been claimed to reduce neuropathic pain [18]. Patients with chest pain and high plasma levels of deoxyuridine, homoserine, and methionine had an increased risk of myocardial infarction [19]. 
Despite the evidence presented above that pain is associated with specific lipids and metabolites, no studies have shown that MT and TE can relieve cNLBP by altering lipids and metabolites. In this article, we compared the lipidomics and metabolomics of patients with cNLBP before and after treatment to explore differences in lipids and metabolites correlated with cNLBP physiotherapy-based treatment. The newly found data will expand our knowledge of cNLBP physiotherapy-based treatment. Material and methods: Participants Patients with cNLBP were recruited through advertising. The inclusion criteria were as follows: (1) patients aged between 18 years and 65 years [20]; (2) patients with pain in the area between the lower rib and the inferior gluteal fold; (3) patients with persistent pain > 3 months or intermittent pain > 6 months and having been clinically diagnosed as having cNLBP by two licensed medical doctors in accordance with the diagnostic guidelines published by the American College of Physicians and the American Pain Society [21, 22]; (4) patients with a minimum score of 2 on the Visual Analog Scale (VAS) in the previous week [23]; (5) patients who were right-hand dominant, with no neurological diseases (e.g., traumatic brain injury, or epilepsy), or intracranial lesions; and (6) patients who did not receive pain treatment within the past 3 months. The exclusion criteria were as follows: (1) patients with radiating pain, menstrual pain, recent/current pregnancy, or postpartum low back pain; (2) patients who suffered known inflammatory disease of the spine, vertebral fracture, severe osteoporosis, autoinflammatory arthritis, and cancer or had significant unexplained weight loss; (3) patients who had cardio-cerebrovascular disease or endocrine disorders; (4) patients with mental illness requiring immediate pharmacotherapy; (5) patients who showed an unwillingness to sign research consent and unwillingness or inability to follow the research protocol; and (6) patients with current alcohol or drug dependence. All participants were assessed for pain intensity using the visual analog scale (VAS), and serum samples for LC‒MS measurements were collected before and after treatment. The First Affiliated Hospital of Sun Yat-sen University approved the ethical approval document of the study (no. [2019] 408). The recruited subjects signed informed consent forms prior to the experiment. Patients with cNLBP were recruited through advertising. The inclusion criteria were as follows: (1) patients aged between 18 years and 65 years [20]; (2) patients with pain in the area between the lower rib and the inferior gluteal fold; (3) patients with persistent pain > 3 months or intermittent pain > 6 months and having been clinically diagnosed as having cNLBP by two licensed medical doctors in accordance with the diagnostic guidelines published by the American College of Physicians and the American Pain Society [21, 22]; (4) patients with a minimum score of 2 on the Visual Analog Scale (VAS) in the previous week [23]; (5) patients who were right-hand dominant, with no neurological diseases (e.g., traumatic brain injury, or epilepsy), or intracranial lesions; and (6) patients who did not receive pain treatment within the past 3 months. 
The exclusion criteria were as follows: (1) patients with radiating pain, menstrual pain, recent/current pregnancy, or postpartum low back pain; (2) patients who suffered known inflammatory disease of the spine, vertebral fracture, severe osteoporosis, autoinflammatory arthritis, and cancer or had significant unexplained weight loss; (3) patients who had cardio-cerebrovascular disease or endocrine disorders; (4) patients with mental illness requiring immediate pharmacotherapy; (5) patients who showed an unwillingness to sign research consent and unwillingness or inability to follow the research protocol; and (6) patients with current alcohol or drug dependence. All participants were assessed for pain intensity using the visual analog scale (VAS), and serum samples for LC‒MS measurements were collected before and after treatment. The First Affiliated Hospital of Sun Yat-sen University approved the ethical approval document of the study (no. [2019] 408). The recruited subjects signed informed consent forms prior to the experiment. Therapy of subjects Seventeen recruited subjects were randomly divided into the MT group and the TE group. Patients in the MT group received manual therapy, and patients in the TE group received therapeutic exercise. Subjects in the manual therapy group were involved in muscular relaxation, myofascial release, and mobilization for 20 min during each session. The treatment lasted for a total of six sessions, once every two days. Subjects in the therapeutic exercise group completed motor control exercise and core stability exercise for 30 min during each session. The motor control exercises included stretching of the trunk and extremity muscles, trunk and hip rotation, and flexion training. Stabilization exercises consisted of the (1) bridge exercise, (2) single-leg-lift bridge exercise, (3) side bridge exercise, (4) two-point bird-dog position elevated contralateral leg and arm, (6) bear crawl exercise, and (7) dead bug exercise. The treatment lasted for a total of six sessions, once every two days. Seventeen recruited subjects were randomly divided into the MT group and the TE group. Patients in the MT group received manual therapy, and patients in the TE group received therapeutic exercise. Subjects in the manual therapy group were involved in muscular relaxation, myofascial release, and mobilization for 20 min during each session. The treatment lasted for a total of six sessions, once every two days. Subjects in the therapeutic exercise group completed motor control exercise and core stability exercise for 30 min during each session. The motor control exercises included stretching of the trunk and extremity muscles, trunk and hip rotation, and flexion training. Stabilization exercises consisted of the (1) bridge exercise, (2) single-leg-lift bridge exercise, (3) side bridge exercise, (4) two-point bird-dog position elevated contralateral leg and arm, (6) bear crawl exercise, and (7) dead bug exercise. The treatment lasted for a total of six sessions, once every two days. Lipidomic analysis Lipid samples were prepared as described by Xuan et al. with some modifications [24]. Briefly, venous blood was collected in heparinization tubes and then centrifuged for 15 min at 2000 g at 4 °C to collect serum. A total of 200 µL serum samples with lipid standards were mixed with 400 µL tert-butyl methyl ether (MTBE) and 80 µL methanol and then vortexed for 30 s. 
Next, the samples were centrifuged, after which the upper phases were collected, transferred into new tubes, and dried by vacuum evaporation. Samples were reconstituted with 100 µL of methylene chloride:methanol (1:1, v/v). Lipid analysis was carried out with a Shimadzu LC-30 A (Shimadzu, Kyoto, Japan) coupled with a mass spectrometer (QTRAP 6500, AB SCIEX, Framingham, MA, USA). The chromatographic parameters were as follows: chromatographic column: ACQUITY UPLC® BEH C18 column (2.1 × 100 mm, 1.7 μm, Waters, Milford, MA, USA), volume of injection: 5 µl, flow rate: 0.26 mL/min, oven temperature: 55 °C. The mobile phase included reagent A (acetonitrile:ultrapure water = 60:40, v/v, with 10 mM ammonium acetate) and reagent B (isopropanol:acetonitrile = 90:10, v/v, with 10 mM ammonium acetate). A binary gradient was set as follows: 0–1.5 min, mobile phase including 68% reagent A and 32% reagent B; 1.5–15.5 min, mobile phase including 15% reagent A and 85% reagent B; 15.5–15.6 min, mobile phase including 3% reagent A and 97% reagent B; 15.6–18 min, mobile phase including 3% reagent A and 97% reagent B; 18–18.1 min, mobile phase including 68% reagent A and 32% reagent B; 18.1–20 min, mobile phase including 68% reagent A and 32% reagent B. Electrospray ionization (QTRAP 6500, AB SCIEX, Framingham, MA, USA) was used with the following parameters: ion source voltage was − 4500 or 5500 V, ion source temperature was 600 °C, curtain gas was 20 psi, atomizing gas was 60 psi, and auxiliary gas was 60 psi. Scanning was performed through multiple reaction monitoring (MRM). Samples under test conditions were mixed and used as QC samples for LC‒MS analysis every third sample to correct deviations caused by instrumental drift and evaluate the quality of data. Lipid samples were prepared as described by Xuan et al. with some modifications [24]. Briefly, venous blood was collected in heparinization tubes and then centrifuged for 15 min at 2000 g at 4 °C to collect serum. A total of 200 µL serum samples with lipid standards were mixed with 400 µL tert-butyl methyl ether (MTBE) and 80 µL methanol and then vortexed for 30 s. Next, the samples were centrifuged, after which the upper phases were collected, transferred into new tubes, and dried by vacuum evaporation. Samples were reconstituted with 100 µL of methylene chloride:methanol (1:1, v/v). Lipid analysis was carried out with a Shimadzu LC-30 A (Shimadzu, Kyoto, Japan) coupled with a mass spectrometer (QTRAP 6500, AB SCIEX, Framingham, MA, USA). The chromatographic parameters were as follows: chromatographic column: ACQUITY UPLC® BEH C18 column (2.1 × 100 mm, 1.7 μm, Waters, Milford, MA, USA), volume of injection: 5 µl, flow rate: 0.26 mL/min, oven temperature: 55 °C. The mobile phase included reagent A (acetonitrile:ultrapure water = 60:40, v/v, with 10 mM ammonium acetate) and reagent B (isopropanol:acetonitrile = 90:10, v/v, with 10 mM ammonium acetate). A binary gradient was set as follows: 0–1.5 min, mobile phase including 68% reagent A and 32% reagent B; 1.5–15.5 min, mobile phase including 15% reagent A and 85% reagent B; 15.5–15.6 min, mobile phase including 3% reagent A and 97% reagent B; 15.6–18 min, mobile phase including 3% reagent A and 97% reagent B; 18–18.1 min, mobile phase including 68% reagent A and 32% reagent B; 18.1–20 min, mobile phase including 68% reagent A and 32% reagent B. 
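For readers re-implementing the acquisition, the binary gradient above can be written down as a small table and sanity-checked programmatically. The sketch below simply encodes the segments listed in the text (times in minutes, percentages of reagents A and B) and verifies that they are contiguous and sum to 100%; it is illustrative only and not instrument-control code.

```python
# Illustrative encoding of the lipidomics binary gradient described above.
GRADIENT = [  # (start_min, end_min, %A, %B)
    (0.0, 1.5, 68, 32),
    (1.5, 15.5, 15, 85),
    (15.5, 15.6, 3, 97),
    (15.6, 18.0, 3, 97),
    (18.0, 18.1, 68, 32),
    (18.1, 20.0, 68, 32),
]

def check_gradient(segments, run_time=20.0):
    """Segments must be contiguous, cover 0..run_time, and keep %A + %B = 100."""
    assert segments[0][0] == 0.0 and segments[-1][1] == run_time
    for i in range(len(segments) - 1):
        _, end, a, b = segments[i]
        start = segments[i + 1][0]
        assert end == start and a + b == 100
    assert segments[-1][2] + segments[-1][3] == 100

check_gradient(GRADIENT)
```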
Electrospray ionization (QTRAP 6500, AB SCIEX, Framingham, MA, USA) was used with the following parameters: ion source voltage was − 4500 or 5500 V, ion source temperature was 600 °C, curtain gas was 20 psi, atomizing gas was 60 psi, and auxiliary gas was 60 psi. Scanning was performed through multiple reaction monitoring (MRM). Samples under test conditions were mixed and used as QC samples for LC‒MS analysis every third sample to correct deviations caused by instrumental drift and evaluate the quality of data. Metabolomic measurement Metabolomic samples were prepared as described by Wang et al. [25]. Briefly, venous blood was collected in heparinization tubes and then centrifuged at 8000 g at 4 °C to collect serum. A total of 100 µL of serum sample was mixed with 400 µL of solution (methanol:acetonitrile:ultrapure water = 2:2:1, v/v/v) and then sonicated for 10 min in a 4 °C water bath. Next, the samples were incubated for one hour at − 20 °C and then centrifuged. The supernatant was collected and evaporated by vacuum evaporation. Each sample was resuspended in solution (acetonitrile:ultrapure water, 1:1, v/v). Metabolomic analysis was carried out with a Shimadzu LC-30 A (Shimadzu, Kyoto, Japan) coupled with a mass spectrometer (QTRAP 4500, AB SCIEX, Framingham, MA, USA). The chromatographic parameters were set as follows: chromatographic column: UPLC BEH Amide column (2.1 × 100 mm, 1.7 μm, Waters, Milford, MA, USA), volume of injection: 5 µl, flow rate: 0.3 mL/min, oven temperature: 55 °C. The mobile phase included reagent A (100% ultrapure water with 0.025 M ammonium hydroxide and 0.025 M ammonium acetate) and reagent B (100% acetonitrile). A binary gradient was set as follows: 0–1 min, mobile phase including of 15% reagent A and 85% reagent B; 1–12 min, mobile phase including of 35% reagent A and 65% reagent B; 12–12.1 min, mobile phase including of 60% reagent A and 40% reagent B; 12.1–15 min, mobile phase including of 60% reagent A and 40% reagent B; 15–15.1 min, mobile phase including of 15% reagent A and 85% reagent B; 15.1–20 min, mobile phase including of 15% reagent A and 85% reagent B. An electrospray ionization (QTRAP 4500, AB SCIEX, Framingham, MA, USA) was used with parameters as follows: ion source voltage was − 4500 or 5500 V, ion source temperature was 600 °C, the ion source voltage was − 4500 or 5500 V, curtain gas was 20 psi, atomizing gas was 60 psi, and auxiliary gas was 60 psi. Scanning was performed via multiple reaction monitoring (MRM). Samples under test conditions were mixed and used as QC samples for LC‒MS analysis every third sample to correct deviations caused by instrumental drift and evaluate the quality of data. After the test, raw data were converted to mzXML format with the web-based tool ProteoWizard and then analyzed for peak alignment, retention time correction, and peak area extraction based on XCMS. Metabolite annotation was carried out based on the online human metabolome database (HMDB, http://www.hmdb.ca) using mass-to-charge ratio information and metabolite structures. Metabolite structures were accurately matched using primary and secondary spectrograms (< 25 ppm). Metabolomic samples were prepared as described by Wang et al. [25]. Briefly, venous blood was collected in heparinization tubes and then centrifuged at 8000 g at 4 °C to collect serum. A total of 100 µL of serum sample was mixed with 400 µL of solution (methanol:acetonitrile:ultrapure water = 2:2:1, v/v/v) and then sonicated for 10 min in a 4 °C water bath. 
Next, the samples were incubated for one hour at − 20 °C and then centrifuged. The supernatant was collected and evaporated by vacuum evaporation. Each sample was resuspended in solution (acetonitrile:ultrapure water, 1:1, v/v). Metabolomic analysis was carried out with a Shimadzu LC-30 A (Shimadzu, Kyoto, Japan) coupled with a mass spectrometer (QTRAP 4500, AB SCIEX, Framingham, MA, USA). The chromatographic parameters were set as follows: chromatographic column: UPLC BEH Amide column (2.1 × 100 mm, 1.7 μm, Waters, Milford, MA, USA), volume of injection: 5 µl, flow rate: 0.3 mL/min, oven temperature: 55 °C. The mobile phase included reagent A (100% ultrapure water with 0.025 M ammonium hydroxide and 0.025 M ammonium acetate) and reagent B (100% acetonitrile). A binary gradient was set as follows: 0–1 min, mobile phase including of 15% reagent A and 85% reagent B; 1–12 min, mobile phase including of 35% reagent A and 65% reagent B; 12–12.1 min, mobile phase including of 60% reagent A and 40% reagent B; 12.1–15 min, mobile phase including of 60% reagent A and 40% reagent B; 15–15.1 min, mobile phase including of 15% reagent A and 85% reagent B; 15.1–20 min, mobile phase including of 15% reagent A and 85% reagent B. An electrospray ionization (QTRAP 4500, AB SCIEX, Framingham, MA, USA) was used with parameters as follows: ion source voltage was − 4500 or 5500 V, ion source temperature was 600 °C, the ion source voltage was − 4500 or 5500 V, curtain gas was 20 psi, atomizing gas was 60 psi, and auxiliary gas was 60 psi. Scanning was performed via multiple reaction monitoring (MRM). Samples under test conditions were mixed and used as QC samples for LC‒MS analysis every third sample to correct deviations caused by instrumental drift and evaluate the quality of data. After the test, raw data were converted to mzXML format with the web-based tool ProteoWizard and then analyzed for peak alignment, retention time correction, and peak area extraction based on XCMS. Metabolite annotation was carried out based on the online human metabolome database (HMDB, http://www.hmdb.ca) using mass-to-charge ratio information and metabolite structures. Metabolite structures were accurately matched using primary and secondary spectrograms (< 25 ppm). Statistical analyses Lipid and metabolite abundance were determined by peak area. Then, data were processed and normalized based on a reference sample (PQN) following the process outlined on the website https://www.metaboanalyst.ca/, which was mainly designed for raw spectra processing and general statistical and functional analysis of targeted metabolomics data [26–28]. The maximum covariance between nontreated samples and MT- or ET-treated samples in lipidomic analysis was determined with partial least squares-discriminant analysis (PLS-DA). The maximum covariance between nontreated samples and MT- or ET-treated samples in metabolomic analysis was determined using orthogonal partial least-squares discriminant analysis (OPLS-DA). The correlation between lipid molecules was analyzed with correlation heatmaps. The content difference of lipids in each sample was indicated with hierarchical clustering analysis. Pathway analysis was carried out with the web-based tool METPA. The raw data were logarithmically transformed and tested for normality before the means were compared between different groups. If normality was assumed, Student’s t test was applied. 
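The univariate step just described (log transformation, a normality check, then Student's t test) can be sketched per feature as below. This is a generic illustration, not the authors' GraphPad/MetaboAnalyst run: the text does not name the normality test, so Shapiro–Wilk is shown as one common choice, the Wilcoxon fallback is an assumption the text does not state, and a paired test is used here because the same patients were measured before and after treatment.

```python
# Generic sketch of the per-feature comparison: log-transform, check normality,
# then apply a (paired) Student's t test; the non-parametric branch is an assumption.
import numpy as np
from scipy import stats

def compare_feature(pre, post, alpha=0.05):
    """pre/post: abundances of one lipid or metabolite in the same patients."""
    log_pre, log_post = np.log(np.asarray(pre)), np.log(np.asarray(post))
    normal = (stats.shapiro(log_pre).pvalue > alpha
              and stats.shapiro(log_post).pvalue > alpha)
    if normal:
        stat, p = stats.ttest_rel(log_pre, log_post)   # paired Student's t test
    else:
        stat, p = stats.wilcoxon(log_pre, log_post)    # fallback, not stated in the text
    return {"normal": normal, "statistic": float(stat), "p_value": float(p)}
```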
To visualize the differentiation between different groups, PLS-DA and OPLS-DA were performed using MetaboAnalyst 5.0 (http://www.metaboanalyst.ca/). Data are presented as the mean ± SEM. GraphPad Prism (version 8, GraphPad Software, San Diego, CA, USA) was used to perform statistical analyses between the nontreatment and physiotherapy-based treatment groups using Student’s t test (P < 0.05). Lipid and metabolite abundance were determined by peak area. Then, data were processed and normalized based on a reference sample (PQN) following the process outlined on the website https://www.metaboanalyst.ca/, which was mainly designed for raw spectra processing and general statistical and functional analysis of targeted metabolomics data [26–28]. The maximum covariance between nontreated samples and MT- or ET-treated samples in lipidomic analysis was determined with partial least squares-discriminant analysis (PLS-DA). The maximum covariance between nontreated samples and MT- or ET-treated samples in metabolomic analysis was determined using orthogonal partial least-squares discriminant analysis (OPLS-DA). The correlation between lipid molecules was analyzed with correlation heatmaps. The content difference of lipids in each sample was indicated with hierarchical clustering analysis. Pathway analysis was carried out with the web-based tool METPA. The raw data were logarithmically transformed and tested for normality before the means were compared between different groups. If normality was assumed, Student’s t test was applied. To visualize the differentiation between different groups, PLS-DA and OPLS-DA were performed using MetaboAnalyst 5.0 (http://www.metaboanalyst.ca/). Data are presented as the mean ± SEM. GraphPad Prism (version 8, GraphPad Software, San Diego, CA, USA) was used to perform statistical analyses between the nontreatment and physiotherapy-based treatment groups using Student’s t test (P < 0.05). Participants: Patients with cNLBP were recruited through advertising. The inclusion criteria were as follows: (1) patients aged between 18 years and 65 years [20]; (2) patients with pain in the area between the lower rib and the inferior gluteal fold; (3) patients with persistent pain > 3 months or intermittent pain > 6 months and having been clinically diagnosed as having cNLBP by two licensed medical doctors in accordance with the diagnostic guidelines published by the American College of Physicians and the American Pain Society [21, 22]; (4) patients with a minimum score of 2 on the Visual Analog Scale (VAS) in the previous week [23]; (5) patients who were right-hand dominant, with no neurological diseases (e.g., traumatic brain injury, or epilepsy), or intracranial lesions; and (6) patients who did not receive pain treatment within the past 3 months. The exclusion criteria were as follows: (1) patients with radiating pain, menstrual pain, recent/current pregnancy, or postpartum low back pain; (2) patients who suffered known inflammatory disease of the spine, vertebral fracture, severe osteoporosis, autoinflammatory arthritis, and cancer or had significant unexplained weight loss; (3) patients who had cardio-cerebrovascular disease or endocrine disorders; (4) patients with mental illness requiring immediate pharmacotherapy; (5) patients who showed an unwillingness to sign research consent and unwillingness or inability to follow the research protocol; and (6) patients with current alcohol or drug dependence. 
All participants were assessed for pain intensity using the visual analog scale (VAS), and serum samples for LC‒MS measurements were collected before and after treatment. The First Affiliated Hospital of Sun Yat-sen University approved the ethical approval document of the study (no. [2019] 408). The recruited subjects signed informed consent forms prior to the experiment. Therapy of subjects: Seventeen recruited subjects were randomly divided into the MT group and the TE group. Patients in the MT group received manual therapy, and patients in the TE group received therapeutic exercise. Subjects in the manual therapy group were involved in muscular relaxation, myofascial release, and mobilization for 20 min during each session. The treatment lasted for a total of six sessions, once every two days. Subjects in the therapeutic exercise group completed motor control exercise and core stability exercise for 30 min during each session. The motor control exercises included stretching of the trunk and extremity muscles, trunk and hip rotation, and flexion training. Stabilization exercises consisted of the (1) bridge exercise, (2) single-leg-lift bridge exercise, (3) side bridge exercise, (4) two-point bird-dog position elevated contralateral leg and arm, (6) bear crawl exercise, and (7) dead bug exercise. The treatment lasted for a total of six sessions, once every two days. Lipidomic analysis: Lipid samples were prepared as described by Xuan et al. with some modifications [24]. Briefly, venous blood was collected in heparinization tubes and then centrifuged for 15 min at 2000 g at 4 °C to collect serum. A total of 200 µL serum samples with lipid standards were mixed with 400 µL tert-butyl methyl ether (MTBE) and 80 µL methanol and then vortexed for 30 s. Next, the samples were centrifuged, after which the upper phases were collected, transferred into new tubes, and dried by vacuum evaporation. Samples were reconstituted with 100 µL of methylene chloride:methanol (1:1, v/v). Lipid analysis was carried out with a Shimadzu LC-30 A (Shimadzu, Kyoto, Japan) coupled with a mass spectrometer (QTRAP 6500, AB SCIEX, Framingham, MA, USA). The chromatographic parameters were as follows: chromatographic column: ACQUITY UPLC® BEH C18 column (2.1 × 100 mm, 1.7 μm, Waters, Milford, MA, USA), volume of injection: 5 µl, flow rate: 0.26 mL/min, oven temperature: 55 °C. The mobile phase included reagent A (acetonitrile:ultrapure water = 60:40, v/v, with 10 mM ammonium acetate) and reagent B (isopropanol:acetonitrile = 90:10, v/v, with 10 mM ammonium acetate). A binary gradient was set as follows: 0–1.5 min, mobile phase including 68% reagent A and 32% reagent B; 1.5–15.5 min, mobile phase including 15% reagent A and 85% reagent B; 15.5–15.6 min, mobile phase including 3% reagent A and 97% reagent B; 15.6–18 min, mobile phase including 3% reagent A and 97% reagent B; 18–18.1 min, mobile phase including 68% reagent A and 32% reagent B; 18.1–20 min, mobile phase including 68% reagent A and 32% reagent B. Electrospray ionization (QTRAP 6500, AB SCIEX, Framingham, MA, USA) was used with the following parameters: ion source voltage was − 4500 or 5500 V, ion source temperature was 600 °C, curtain gas was 20 psi, atomizing gas was 60 psi, and auxiliary gas was 60 psi. Scanning was performed through multiple reaction monitoring (MRM). Samples under test conditions were mixed and used as QC samples for LC‒MS analysis every third sample to correct deviations caused by instrumental drift and evaluate the quality of data. 
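One reading of the QC scheme described above (a pooled QC injected "every third sample") can be expressed as a simple run-order builder; this is only an illustration of the interleaving, not the laboratory's actual worklist.

```python
# Illustrative run-order builder: insert a pooled QC injection after every third
# study sample so instrument drift can be tracked across the batch.
def build_run_order(samples, qc_every=3, qc_label="pooled_QC"):
    order = []
    for i, sample in enumerate(samples, start=1):
        order.append(sample)
        if i % qc_every == 0:
            order.append(qc_label)
    return order

# build_run_order(["S1", "S2", "S3", "S4", "S5"])
# -> ['S1', 'S2', 'S3', 'pooled_QC', 'S4', 'S5']
```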
Metabolomic measurement: Metabolomic samples were prepared as described by Wang et al. [25]. Briefly, venous blood was collected in heparinization tubes and then centrifuged at 8000 g at 4 °C to collect serum. A total of 100 µL of serum sample was mixed with 400 µL of solution (methanol:acetonitrile:ultrapure water = 2:2:1, v/v/v) and then sonicated for 10 min in a 4 °C water bath. Next, the samples were incubated for one hour at − 20 °C and then centrifuged. The supernatant was collected and evaporated by vacuum evaporation. Each sample was resuspended in solution (acetonitrile:ultrapure water, 1:1, v/v). Metabolomic analysis was carried out with a Shimadzu LC-30 A (Shimadzu, Kyoto, Japan) coupled with a mass spectrometer (QTRAP 4500, AB SCIEX, Framingham, MA, USA). The chromatographic parameters were set as follows: chromatographic column: UPLC BEH Amide column (2.1 × 100 mm, 1.7 μm, Waters, Milford, MA, USA), volume of injection: 5 µl, flow rate: 0.3 mL/min, oven temperature: 55 °C. The mobile phase included reagent A (100% ultrapure water with 0.025 M ammonium hydroxide and 0.025 M ammonium acetate) and reagent B (100% acetonitrile). A binary gradient was set as follows: 0–1 min, mobile phase including of 15% reagent A and 85% reagent B; 1–12 min, mobile phase including of 35% reagent A and 65% reagent B; 12–12.1 min, mobile phase including of 60% reagent A and 40% reagent B; 12.1–15 min, mobile phase including of 60% reagent A and 40% reagent B; 15–15.1 min, mobile phase including of 15% reagent A and 85% reagent B; 15.1–20 min, mobile phase including of 15% reagent A and 85% reagent B. An electrospray ionization (QTRAP 4500, AB SCIEX, Framingham, MA, USA) was used with parameters as follows: ion source voltage was − 4500 or 5500 V, ion source temperature was 600 °C, the ion source voltage was − 4500 or 5500 V, curtain gas was 20 psi, atomizing gas was 60 psi, and auxiliary gas was 60 psi. Scanning was performed via multiple reaction monitoring (MRM). Samples under test conditions were mixed and used as QC samples for LC‒MS analysis every third sample to correct deviations caused by instrumental drift and evaluate the quality of data. After the test, raw data were converted to mzXML format with the web-based tool ProteoWizard and then analyzed for peak alignment, retention time correction, and peak area extraction based on XCMS. Metabolite annotation was carried out based on the online human metabolome database (HMDB, http://www.hmdb.ca) using mass-to-charge ratio information and metabolite structures. Metabolite structures were accurately matched using primary and secondary spectrograms (< 25 ppm). Statistical analyses: Lipid and metabolite abundance were determined by peak area. Then, data were processed and normalized based on a reference sample (PQN) following the process outlined on the website https://www.metaboanalyst.ca/, which was mainly designed for raw spectra processing and general statistical and functional analysis of targeted metabolomics data [26–28]. The maximum covariance between nontreated samples and MT- or ET-treated samples in lipidomic analysis was determined with partial least squares-discriminant analysis (PLS-DA). The maximum covariance between nontreated samples and MT- or ET-treated samples in metabolomic analysis was determined using orthogonal partial least-squares discriminant analysis (OPLS-DA). The correlation between lipid molecules was analyzed with correlation heatmaps. 
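A minimal sketch of the lipid–lipid correlation heatmap mentioned above, assuming a samples × lipids abundance DataFrame (the column names and output file name are hypothetical):

```python
# Minimal sketch: Pearson correlation between lipids, drawn as a heatmap.
import pandas as pd
import matplotlib.pyplot as plt

def correlation_heatmap(abundance: pd.DataFrame, out_png="lipid_correlation.png"):
    corr = abundance.corr(method="pearson")          # lipid-by-lipid Pearson r
    fig, ax = plt.subplots(figsize=(6, 5))
    image = ax.imshow(corr.values, vmin=-1, vmax=1, cmap="coolwarm")
    ax.set_xticks(range(len(corr)), labels=corr.columns, rotation=90, fontsize=6)
    ax.set_yticks(range(len(corr)), labels=corr.columns, fontsize=6)
    fig.colorbar(image, ax=ax, label="Pearson r")
    fig.tight_layout()
    fig.savefig(out_png, dpi=300)
    return corr
```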
The content difference of lipids in each sample was indicated with hierarchical clustering analysis. Pathway analysis was carried out with the web-based tool METPA. The raw data were logarithmically transformed and tested for normality before the means were compared between different groups. If normality was assumed, Student’s t test was applied. To visualize the differentiation between different groups, PLS-DA and OPLS-DA were performed using MetaboAnalyst 5.0 (http://www.metaboanalyst.ca/). Data are presented as the mean ± SEM. GraphPad Prism (version 8, GraphPad Software, San Diego, CA, USA) was used to perform statistical analyses between the nontreatment and physiotherapy-based treatment groups using Student’s t test (P < 0.05). Results: Lipid composition analysis of cNLBP patients before and after manual therapy We recruited 17 patients with cNLBP whose demographic information is shown in Table 1. The recruited subjects were randomly divided into MT or TE groups, with no significant differences in age, weight, height, BMI, or VAS score between them. We found that MT treatment was effective in alleviating cNLBP (Fig. 1). After treatment, the VAS score decreased in almost the entire MT group (Fig. 1). Serum lipidomics were determined after six MT treatment sessions. Since one participant’s blood sample could not be collected after treatment, there were eight effective participants in the MT group. Table 1Demographic information among two groups, M ± SEMMTTEPN (male, count)89Age (years)28.75 ± 7.2628.11 ± 7.45not significantly different at baselineHeight (m)1.68 ± 0.081.65 ± 0.05not significantly different at baselineWeight (kg)60.06 ± 8.8058.00 ± 10.58not significantly different at baselineBMI (kg/m2)21.25 ± 2.5421.17 ± 3.08not significantly different at baselineVAS (before treatment)5.76 ± 1.155.63 ± 1.88not significantly different at baselineVAS (after treatment)2.80 ± 1.763.47 ± 1.83not significantly different at baselineAbbreviation: MT Manual therapy, TE therapeutic exercise, BMI Body mass index, VAS Visual analog scale (0–10; VAS 0 = no pain; VAS 10 = maximal pain) Demographic information among two groups, M ± SEM Abbreviation: MT Manual therapy, TE therapeutic exercise, BMI Body mass index, VAS Visual analog scale (0–10; VAS 0 = no pain; VAS 10 = maximal pain) Fig. 1Manual therapy and therapeutic exercise were effective in cNLBP amelioration Seventeen patients were randomly divided into two groups: one group received manual therapy, and the other group received therapeutic exercise. VAS was recorded before and after treatment. Asterisks show a significant difference from patients before treatment using Student’s t tests (**P < 0.01)  Manual therapy and therapeutic exercise were effective in cNLBP amelioration Seventeen patients were randomly divided into two groups: one group received manual therapy, and the other group received therapeutic exercise. VAS was recorded before and after treatment. Asterisks show a significant difference from patients before treatment using Student’s t tests (**P < 0.01)  We completed lipid extraction and performed qualitative analysis. Through lipidomic analysis, we identified 290 lipids, which can be divided into the ten subclasses of phosphatidylcholine (PC), phosphatidylethanolamine (PE), lysophosphatidylcholines (LPC), lysophosphatidylethanolamine (LPE), triacylglycerol (TG), phosphatidylinositol (PI), sphingomyelin (SM), ceramide (Cer), hexosylceramide (HexCer), and fatty acid (FA). 
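The ten subclasses listed above are usually derived from the lipid annotations themselves. As a hedged illustration (the actual annotation format in this dataset is not shown, so the parsing rule is an assumption), individual lipid names can be mapped to a class before computing the class-level composition:

```python
# Hedged sketch: assign each annotated lipid to one of the ten subclasses named
# above using its name prefix (e.g. "PC 34:1" -> "PC"); the name format is assumed.
LIPID_CLASSES = ["PC", "PE", "LPC", "LPE", "TG", "PI", "SM", "Cer", "HexCer", "FA"]

def lipid_class(name: str) -> str:
    prefix = name.split()[0].split("(")[0]
    # Try longer class names first so "LPC" is not swallowed by "PC".
    for cls in sorted(LIPID_CLASSES, key=len, reverse=True):
        if prefix.upper().startswith(cls.upper()):
            return cls
    return "other"

# lipid_class("LPC 18:0") -> "LPC";  lipid_class("Cer(d18:1/16:0)") -> "Cer"
```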
As a multivariate statistical method, PLS-DA can maximize the separation between groups and reveal differential metabolites. We performed PLS-DA with the MetaboAnalyst R package and found a clear separation between the nontreated group (pink) and the MT-treated group (green), suggesting differential lipidomic profiles in cNLBP patients before and after manual therapy (Fig. 2A). Next, we used Pearson correlation analysis to measure the closeness of different lipids (Fig. 2B). Using the volume measurements of lipids, we analyzed the lipidomic composition of cNLBP patients before and after manual therapy and found a decrease in phosphatidylcholine/phosphatidylethanolamine (PC/PE) molar ratios but an increase in sphingomyelin/ceramide (SM/Cer) molar ratios after manual therapy. There were also decreases in the volumes of fatty acids (FAs) and increases in lysophosphatidylcholine (LPC) and lysophosphatidylethanolamine (LPE) when cNLBP patients were treated with MT (Fig. 3A). We also generated a heatmap to present the volume of the lipids in each sample (Fig. 3B).

Fig. 2 Lipidomic profiles in cNLBP patients before and after treatment with MT. A PLS-DA of cNLBP patients treated with MT versus the control group. “1” represents the nontreated group, and “2” represents the MT-treated group. B Correlation analysis of the significantly different lipids. Different colors represent the level of Pearson’s correlation coefficient.

Fig. 3 Lipid identification in cNLBP patients before and after treatment with MT. A The composition of nontreated samples and MT-treated samples based on the volume of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipid classes in each sample. For class name, red represents the control group, and green represents the MT-treated group.
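The PLS-DA above was run in MetaboAnalyst; purely as an illustration of the idea (a PLS regression fitted against a dummy-coded group label, with the latent-variable scores used for the score plot), a minimal scikit-learn sketch is shown below. The sample sizes, group coding, and synthetic data are assumptions, not the study data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in: 16 samples (8 pre-treatment, 8 post-MT) x 290 lipid features
X = rng.normal(size=(16, 290))
y = np.repeat([0.0, 1.0], 8)          # 0 = nontreated, 1 = MT-treated
X[y == 1, :10] += 1.0                 # a few features shifted by "treatment"

# PLS-DA = PLS regression against a dummy-coded class label
X_std = StandardScaler().fit_transform(X)
pls = PLSRegression(n_components=2).fit(X_std, y)
scores = pls.transform(X_std)         # latent-variable scores used in score plots

for label in (0, 1):
    lv1 = scores[y == label, 0]
    print(f"group {int(label)}: mean LV1 score = {lv1.mean():+.2f}")
```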
Lipid composition analysis of cNLBP patients before and after therapeutic exercise (TE)

Therapeutic exercise (TE) is another effective method for improving cNLBP [29]. We found that TE treatment was also effective in alleviating cNLBP (Fig. 1): after treatment, patients’ VAS scores decreased significantly in the TE-treated group. We performed lipid extraction from cNLBP patients before and after therapeutic exercise and carried out qualitative analysis. PLS-DA indicated a distinct separation between the nontreated group (pink) and the TE-treated group (green) (Fig. 4A). Pearson correlation analysis showed the closeness of different lipids (Fig. 4B). Based on the volume of lipids, PC/PE molar ratios decreased, while SM/Cer molar ratios increased after therapeutic exercise. The volume of FA also decreased, while the volumes of LPC and LPE increased in cNLBP patients after therapeutic exercise, similar to the results in the MT-treated group. Interestingly, the volume of TG (triacylglycerol) increased in the TE-treated group, whereas it decreased in the MT-treated group (Fig. 5A). A heatmap was produced to show the volume of lipids in the nontreated group (red) and the TE-treated group (green) (Fig. 5B).

Fig. 4 Lipidomic profiles in cNLBP patients before and after treatment with TE. A PLS-DA of cNLBP patients treated with TE versus the control group. “1” represents the nontreated group, and “2” represents the TE-treated group. B Correlation analysis of the significantly different lipids. Different colors represent the level of Pearson’s correlation coefficient.

Fig. 5 Lipid identification in cNLBP patients before and after treatment with TE. A The composition of nontreated samples and TE-treated samples based on the volume of lipids in each lipid category. B Hierarchical clustering analysis of the 10 lipid classes in each sample. For class name, red represents the control group, and green represents the TE-treated group.
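For readers who want to see how class-level quantities such as the PC/PE and SM/Cer ratios reported above could be derived from a lipid abundance table, the following is a small Python sketch. The long-format layout and the numbers are invented for illustration, and peak-area ratios are used as a stand-in; true molar ratios would require concentration calibration.

```python
import pandas as pd

# Hypothetical long-format lipid table: one row per lipid species per sample,
# with its subclass (PC, PE, SM, Cer, ...) and measured peak area.
lipids = pd.DataFrame({
    "sample":   ["s1", "s1", "s1", "s1", "s2", "s2", "s2", "s2"],
    "subclass": ["PC", "PE", "SM", "Cer", "PC", "PE", "SM", "Cer"],
    "area":     [120.0, 40.0, 30.0, 10.0, 90.0, 45.0, 36.0, 9.0],
})

# Sum peak areas per subclass within each sample
totals = lipids.pivot_table(index="sample", columns="subclass",
                            values="area", aggfunc="sum")

# Class-level ratios analogous to the PC/PE and SM/Cer ratios discussed above
totals["PC_PE_ratio"] = totals["PC"] / totals["PE"]
totals["SM_Cer_ratio"] = totals["SM"] / totals["Cer"]
print(totals[["PC_PE_ratio", "SM_Cer_ratio"]])
```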
Metabolite alterations in cNLBP patients

To further identify therapeutic targets for cNLBP physiotherapy-based treatment, we analyzed the metabolome of cNLBP patients before and after treatment. In our sample of patients, the metabolomic analysis annotated and quantified 171 metabolites. KEGG-based enrichment analysis showed that these metabolites were enriched in tryptophan and aspartate metabolism, ammonia recycling, and methionine, glycine, and serine metabolism, among others (Fig. 6A).
Combining enrichment and topology analysis, pathway analysis was carried out for all patients. We found a total of 14 pathways that were significantly changed in patients (P < 0.05). These metabolites mainly belonged to aminoacyl-tRNA biosynthesis; arginine biosynthesis; valine, leucine and isoleucine biosynthesis; amino acid metabolism; pyrimidine and purine metabolism; ascorbate and aldarate metabolism; taurine and hypotaurine metabolism; beta-alanine metabolism; and nicotinate and nicotinamide metabolism (Fig. 6B).

Fig. 6 Metabolite alterations in cNLBP patients. A Pathway enrichment analysis revealed different metabolic pathways enriched in cNLBP patients (P value cutoff ≤ 0.05). B Results of the pathway analysis carried out with the web-based tool METPA using the concentrations of metabolites identified in cNLBP patients. Total cmpd, the total number of compounds in the pathway; Hits, the number matched from the uploaded data; Raw P, the original P value; Impact, the pathway impact value calculated from pathway topology analysis.
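METPA combines over-representation statistics with topology-based impact values. As a rough illustration of the over-representation half only, a hypergeometric test on pathway membership can be computed as below; the counts are placeholders, not values from this study.

```python
from scipy.stats import hypergeom

# Hypothetical counts: N annotated metabolites in the background, K of them belong
# to the pathway ("Total cmpd"), n were significant overall, and k of the
# significant metabolites fall inside the pathway ("Hits").
N, K, n, k = 171, 39, 20, 6

# P(observing >= k pathway members among n significant metabolites)
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"over-representation p-value: {p_value:.4f}")
```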
Metabolite profiles of cNLBP patients treated with manual therapy

Serum metabolome analysis was performed on samples collected after MT treatment. Since two participants’ blood samples could not be collected after treatment, seven participants were included in the MT group for metabolomics. Orthogonal PLS-DA (OPLS-DA) was performed to demonstrate the suitability of the system (Fig. 7A). The OPLS-DA score plot revealed good discrimination of the MT treatment group against untreated samples, with MT-treated and nontreated samples separated and no outliers (Fig. 7A), demonstrating that our metabolomic analysis could sufficiently reflect the metabolic profile alterations induced by MT treatment. The VIP scores derived from OPLS-DA, based on the first 20 metabolites with a VIP score > 1.5, identified uridine, guanosine, kynurenic acid, 2’-deoxyadenosine, allantoin, stachydrine, inosine, uridine 5’-monophosphate, nicotinuric acid, 3,4-dihydroxybenzeneacetic acid, 2’-deoxyuridine, 2’-deoxyguanosine, 4-aminohippuric acid, cytidine, pyridoxylamine, oxidized glutathione, desaminotyrosine, L-valine, N-acetyl-5-hydroxytryptamine, and 4-acetamidobutanoic acid as having the highest VIP scores for MT treatment (Fig. 7B). Finally, we screened out metabolites with fold changes > 2 in the MT treatment group compared with the nontreated group: cytidine, uridine 5’-monophosphate, kynurenic acid, guanosine, inosine, 2’-deoxyadenosine, and stachydrine (Fig. 8A). KEGG-based enrichment analysis revealed that these metabolites were significantly enriched in pyrimidine metabolism and purine metabolism pathways, suggesting that MT treatment relieves pain by altering the metabolism of these two pathways (Fig. 8B).

Fig. 7 Discrimination through orthogonal PLS-DA of patients before and after manual therapy based on metabolomic analysis. A OPLS-DA score plots comparing nontreated patients (indicated in the legend as 1) and patients after manual therapy (indicated as 2). B Variable importance in projection (VIP) features for the groups from the OPLS-DA analysis.

Fig. 8 Manual therapy could alter target metabolites in patients with cNLBP. A Comparison of the volumes of cytidine, uridine 5’-monophosphate, kynurenic acid, guanosine, inosine, 2’-deoxyadenosine, stachydrine, and N-acetyl-5-hydroxytryptamine in patients treated with and without manual therapy for 2 weeks. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism and purine metabolism pathways were enriched in patients treated with manual therapy (P value cutoff ≤ 0.05).
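The VIP scores above came from MetaboAnalyst’s (O)PLS-DA. For illustration only, the conventional PLS VIP formula can be computed from a fitted scikit-learn model as in the sketch below; this is the standard two-component PLS VIP, not MetaboAnalyst’s exact OPLS-DA implementation, and the data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Variable importance in projection (VIP) for a fitted PLSRegression model."""
    t = pls.transform(X)                # latent-variable scores (n_samples, n_comp)
    w = pls.x_weights_                  # weight vectors (n_features, n_comp)
    q = pls.y_loadings_                 # y loadings (n_targets, n_comp)
    n_features = w.shape[0]
    # Amount of y variance explained by each latent variable
    explained = np.diag(t.T @ t) * (q ** 2).sum(axis=0)
    normalized_w_sq = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(n_features * (normalized_w_sq @ explained) / explained.sum())

# Synthetic stand-in: 14 samples (7 pre-treatment, 7 post-treatment) x 171 metabolites
rng = np.random.default_rng(2)
X = rng.normal(size=(14, 171))
y = np.repeat([0.0, 1.0], 7)
X[y == 1, :3] += 1.5                    # make three features discriminative

pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls, X)
print("features with VIP > 1.5:", np.where(vip > 1.5)[0])
```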
Metabolite profiles of cNLBP patients treated with therapeutic exercise

We also performed metabolite identification in the serum metabolomes pooled from cNLBP patients treated with TE. Since one participant’s blood sample could not be collected after treatment, seven participants were included in the TE group for metabolomics. OPLS-DA was performed on untreated samples and TE treatment samples, and the score plot revealed good discrimination between the TE treatment group and the nontreated samples (Fig. 9A). The VIP scores (> 1.5) derived from OPLS-DA, based on the first 20 metabolites, identified liothyronine, 2’-deoxyadenosine, uridine, L-homocystine, N-acetyl-5-hydroxytryptamine, stachydrine, γ-aminobutyric acid, nicotinuric acid, glutaric acid, 2’-deoxyuridine, adenine, N-acetyl-L-aspartic acid, cinnamic acid, cytidine, uridine 5’-monophosphate, D-(-)-mandelic acid, L-cysteine, 4-aminobenzoic acid, 5’-deoxyadenosine, and D-sorbitol as having the highest VIP scores for TE treatment (Fig. 9B).

Fig. 9 Discrimination through orthogonal PLS-DA of patients before and after therapeutic exercise examined by metabolomic analysis. A OPLS-DA score plots comparing nontreated patients (indicated in the legend with 1) and patients after therapeutic exercise (indicated with 2). B Variable importance in projection (VIP) features for the groups from the OPLS-DA analysis.

Finally, nine metabolites with fold changes > 2 in the TE treatment group were found, including uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid (Fig. 10A). KEGG-based enrichment analysis revealed that these nine metabolites were significantly enriched in pyrimidine metabolism, tyrosine metabolism, and galactose metabolism pathways, suggesting that TE treatment relieves pain by altering the metabolism of these three pathways (Fig. 10B).

Fig. 10 Therapeutic exercise could alter target metabolites in patients with cNLBP. A Comparison of the volumes of uridine 5’-monophosphate, thymidine, 2’-deoxyadenosine, 5’-deoxyadenosine, N-acetyl-5-hydroxytryptamine, stachydrine, inosine, gallic acid, and γ-aminobutyric acid in patients treated with therapeutic exercise for two weeks or without therapeutic treatment. Different letters show a significant difference from nontreated patients using Student’s t test (P < 0.05). B Pathway enrichment analysis revealed that pyrimidine metabolism, tyrosine metabolism, and galactose metabolism were enriched in patients treated with therapeutic exercise (P value cutoff ≤ 0.05).
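A minimal sketch of the fold-change screening used for both treatment groups (fold change > 2 between post- and pre-treatment samples, with a per-metabolite t test) is given below for illustration. The data are synthetic and the unpaired two-sample test is an assumption; a paired test could equally be used for a before/after design.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Synthetic stand-in: 7 pre-treatment and 7 post-treatment samples x 171 metabolites
metabolites = [f"m{i}" for i in range(171)]
pre = pd.DataFrame(rng.lognormal(mean=2.0, sigma=0.3, size=(7, 171)), columns=metabolites)
post = pd.DataFrame(rng.lognormal(mean=2.0, sigma=0.3, size=(7, 171)), columns=metabolites)
post.iloc[:, :5] *= 3.0                       # a few metabolites "respond" to treatment

fold_change = post.mean() / pre.mean()        # group-mean fold change per metabolite
t_stat, p_val = ttest_ind(post, pre, axis=0)  # two-sample t test per metabolite

screened = pd.DataFrame({"fold_change": fold_change, "p_value": p_val},
                        index=metabolites)
hits = screened[(screened["fold_change"] > 2) & (screened["p_value"] < 0.05)]
print(hits.sort_values("fold_change", ascending=False).head())
```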
Discussion: Lipids can act as bioactive compounds that play critical roles in signal transduction. The balance of cellular PC/PE molar ratios is crucial for maintaining cell survival and participates in the regulation of many diseases. However, the association of PC/PE molar ratios with physiotherapy for cNLBP has not been studied. In this study, we found that PC/PE molar ratios decreased in cNLBP patients treated with either MT or TE compared with control groups, suggesting that PC/PE molar ratios are involved in cNLBP physiotherapy-based treatment. We still do not know exactly why the decreased PC/PE molar ratios induced by MT or TE can lead to cNLBP relief. The most likely explanation is that a decreased PC/PE ratio can alter membrane properties and significantly inhibit TNF-α-induced inflammatory responses, an important inducer of sensory nerve growth [30, 31]. Studies have also shown that the growth of sensory nerves into the inner layer of intervertebral discs (IVDs) is a potential factor in low back pain [32, 33]. Sphingolipids are another class of bioactive lipids that act as powerful signaling molecules, and dysregulation of sphingolipid metabolism is known to have a significant impact on signal transduction [34]. Sphingomyelin (SM) and ceramide (Cer) are the most abundant classes of sphingolipids, and the balance between SM and Cer is associated with human disease; for example, SM/Cer imbalance can promote lipid dysregulation and apoptosis [35]. Studies have shown that altered sphingolipid metabolism causes neuropathic pain in humans [36]. N,N-dimethylsphingosine induces mechanical hypersensitivity, and the SM/Cer ratio is altered in rats with neuropathic pain [37]. There has, however, been no investigation of whether changes in the SM/Cer ratio are related to the physiotherapy of cNLBP. In our study, we found that the SM/Cer ratio increased in cNLBP patients treated with MT or TE compared with control groups, suggesting that SM/Cer ratio alteration is involved in cNLBP physiotherapy-based treatment. The mechanism linking the SM/Cer ratio to cNLBP, however, remains to be studied.
SM can be hydrolyzed to produce biologically active molecules, such as ceramide and sphingosine, which can act as potent inhibitors of protein kinase C (PKC) [38]. Therefore, SM/Cer ratio alterations can control many inflammation-related signaling pathways through PKC to relieve low back pain, since inflammation is a primary source of low back pain [28, 39]. In addition, SM/Cer ratio alterations can decrease chronic inflammatory responses through ER stress [40]. We further performed metabolome analysis to identify the mechanisms underlying cNLBP physiotherapy-based treatment through MT or TE. We found that pyrimidine metabolism and purine metabolism pathways were related to MT-induced cNLBP amelioration, while pyrimidine metabolism, tyrosine metabolism, and galactose metabolism pathways were responsible for TE-induced cNLBP amelioration. The literature shows that pyrimidines and purines have widespread functions in pain therapeutics [41, 42]. For example, the nucleosides cytidine and uridine are helpful for dealing with low back pain [43]. The amount of tyramine sulphate was significantly lower in pain patients than in control patients [41]. Purine antagonists can reduce chronic and inflammatory pain, and adenosine and its analogs can suppress nociception by activating adenosine receptors [44]. In addition to the pyrimidine pathway, tyrosine metabolism is also associated with pain [45]. In headache patients, tyrosine metabolism is abnormal [46]. Tyrosine can be converted to DOPA, dopamine (DA), and noradrenaline (NE), which govern pain and vegetative functions [47]. Galactose is not only used as a primary energy source but is also considered a candidate for pharmacological applications [48]. In comparing the lipidomic and metabolomic profiles of patients with cNLBP before and after treatment, we found that alterations in the PC/PE ratio, the SM/Cer ratio, and target metabolites may underlie cNLBP amelioration by MT or TE. However, the relationship between lipids and target metabolites is still unclear: we do not yet know whether lipid alteration affects metabolite levels or whether metabolite levels affect lipid alteration. Many studies have demonstrated that lipids can affect gene expression, which can in turn alter metabolite levels [49, 50]. For example, S1P can specifically bind HDAC1 and HDAC2 and inhibit histone deacetylase activity, thereby contributing to the epigenetic regulation of gene expression [49]. Lipids can also directly affect the activity of protein kinase C, an important downstream target of Cer, and can modulate pyrimidine biosynthesis [50]. In turn, metabolite alterations can also affect lipid metabolism; for example, prenyloxycoumarin is a secondary metabolite that can act as a modulator of lipid metabolism [51]. Very-low-density lipoproteins (VLDL) are a risk factor for Modic changes, which result in low back pain (LBP), and their receptors can enhance lipid metabolism and promote the expression of interleukin-33 (IL-33) [9, 52]. More studies are needed, however, to investigate the relationship between lipid metabolism and metabolite changes during MT or TE treatment of cNLBP. Our study identified the target lipids and metabolites involved in the improvement of cNLBP treated with MT or TE, which expands our knowledge of cNLBP physiotherapy-based treatment.
Study strengths and limitations: The greatest strength of this study is that it reveals a possible mechanism by which MT or TE treatment promotes cNLBP amelioration, from the perspective of lipidomics and metabolomics in cNLBP patients. However, the experiments only captured alterations in lipids and metabolites, and the deeper mechanisms by which these lipids and metabolites affect cNLBP physiotherapy-based treatment remain uncertain. Therefore, more evidence is needed. Conclusions and clinical perspective: MT and TE were effective strategies for alleviating cNLBP. A possible mechanism is that MT or TE treatment causes alterations in the lipidome and metabolome of cNLBP patients. This study is the first to show that cNLBP physiotherapy-based treatment is associated with specific lipids and metabolites. These results indicate that physiotherapy, or agents targeting these lipid and metabolite alterations, might be useful for the treatment of cNLBP.
Background: Chronic nonspecific low back pain (cNLBP) is a common health problem worldwide, affecting 65-80% of the population and greatly affecting people's quality of life and productivity. It also causes huge economic losses. Manual therapy (MT) and therapeutic exercise (TE) are effective treatment options for cNLBP physiotherapy-based treatment. However, the underlying mechanisms that promote cNLBP amelioration by MT or TE are incompletely understood. Methods: Seventeen recruited subjects were randomly divided into an MT group and a TE group. Subjects in the MT group performed muscular relaxation, myofascial release, and mobilization for 20 min during each treatment session. The treatment lasted for a total of six sessions, once every two days. Subjects in the TE group completed motor control and core stability exercises for 30 min during each treatment session. The motor control exercise included stretching of the trunk and extremity muscles through trunk and hip rotation and flexion training. Stabilization exercises consisted of the (1) bridge exercise, (2) single-leg-lift bridge exercise, (3) side bridge exercise, (4) two-point bird-dog position with an elevated contralateral leg and arm, (5) bear crawl exercise, and (6) dead bug exercise. The treatment lasted for a total of six sessions, with one session every two days. Serum samples were collected from subjects before and after physiotherapy-based treatment for lipidomic and metabolomic measurements. Results: Through lipidomic analysis, we found that the phosphatidylcholine/phosphatidylethanolamine (PC/PE) ratio decreased and the sphingomyelin/ceramide (SM/Cer) ratio increased in cNLBP patients after MT or TE treatment. In addition, eight metabolites enriched in pyrimidine and purine differed significantly in cNLBP patients who received MT treatment. A total of nine metabolites enriched in pyrimidine, tyrosine, and galactose pathways differed significantly in cNLBP patients after TE treatment during metabolomics analysis. Conclusions: Our study was the first to elucidate the alterations in the lipidomics and metabolomics of cNLBP physiotherapy-based treatment and can expand our knowledge of cNLBP physiotherapy-based treatment.
Introduction: Chronic nonspecific low back pain (cNLBP) is not caused by recognizable pathology; it lasts for more than three months and occurs between the lower rib and the inferior gluteal fold [1]. It is estimated that approximately four out of five people have lower back pain at some time during their lives, and it greatly affects their quality of life, productivity, and ability to work [2]. cNLBP can be caused by many factors, such as lumbar strain, nerve irritation, and bony encroachment. However, the etiology of cNLBP is typically unknown and poorly understood [3]. Medical treatments and physiotherapy are recommended to treat and resolve issues associated with cNLBP [3]. Therapeutic exercise and manual therapy have a lower risk of increasing future back injuries or work absence and are more effective treatment options for chronic pain than medication or surgery, and they can be performed at rehabilitation clinics [4–6]. Exercise therapy is a widely used strategy for coping with low back pain that includes a heterogeneous group of interventions ranging from aerobic exercise or general physical fitness to muscle strengthening and various types of flexibility and stretching exercises [7]. Manual therapy is another effective method for dealing with low back pain, in which the hands are used to apply force with therapeutic intent, including massage, joint mobilization/manipulation, myofascial release, nerve manipulation, strain/counterstrain, and acupressure [8]. However, the reasons therapeutic exercise and manual therapy ameliorate cNLBP are still unknown. With the development of lipidomics and metabolomics, many studies have indicated that lipid or metabolite alterations are associated with chronic pain [9, 10]. Lipids, as primary metabolites, are not only structural components of membranes but also act as signaling molecules that regulate many physiological activities. For example, fatty acid (FA) chains can be saturated (SFA), monounsaturated (MUFA), or polyunsaturated (PUFA), and the ratio of saturated to unsaturated FAs participates in the regulation of longevity [11]. Phosphatidylcholine (PC) and phosphatidylethanolamine (PE) are abundant in membranes. In mammals, cellular PC/PE molar ratios that are out of balance and increase or decrease abnormally can cause disease [12]. For example, a reduced PC/PE ratio can protect mice against atherosclerosis [13]. Decreasing the PC/PE molar ratio can change the intracellular energy supply by activating the electron transport chain and mitochondrial respiration [14]. Lysophosphatidylcholine (LPC) 16:0 correlated with pain outcomes in a cohort of patients with osteoarthritis [15]. Apart from phospholipids, studies have shown that sphingolipid metabolism also contributes to chronic pain. Increased ceramide and sphingosine-1-phosphate (S1P) are involved in the progression of chronic pain in the nervous system [16]. Previous studies have reported that metabolites are also associated with pain. Patients with neuropathic pain showed an elevated ratio of choline-containing compounds to myoinositol (tCho/mI) on magnetic resonance spectroscopy [17]. Flavonoids are the most common secondary plant metabolites used as tranquilizers in folkloric medicine and have been claimed to reduce neuropathic pain [18]. Patients with chest pain and high plasma levels of deoxyuridine, homoserine, and methionine had an increased risk of myocardial infarction [19]. 
Despite the evidence presented above that pain is associated with specific lipids and metabolites, no studies have shown that MT and TE can relieve cNLBP by altering lipids and metabolites. In this article, we compared the lipidomics and metabolomics of patients with cNLBP before and after treatment to explore the differences in lipids and metabolites correlated with cNLBP physiotherapy-based treatment. These newly generated data will expand our knowledge of cNLBP physiotherapy-based treatment. Conclusions and clinical perspective: MT and TE were effective strategies for alleviating cNLBP. A possible mechanism is that MT or TE treatment altered the lipidomic and metabolomic profiles of cNLBP patients. This study is the first to show that cNLBP physiotherapy-based treatment is associated with specific lipids and metabolites. These results indicate that physiotherapy, or agents targeting these lipid and metabolite alterations, might be useful for the treatment of cNLBP.
Background: Chronic nonspecific low back pain (cNLBP) is a common health problem worldwide, affecting 65-80% of the population and greatly affecting people's quality of life and productivity. It also causes huge economic losses. Manual therapy (MT) and therapeutic exercise (TE) are effective treatment options for cNLBP physiotherapy-based treatment. However, the underlying mechanisms that promote cNLBP amelioration by MT or TE are incompletely understood. Methods: Seventeen recruited subjects were randomly divided into an MT group and a TE group. Subjects in the MT group performed muscular relaxation, myofascial release, and mobilization for 20 min during each treatment session. The treatment lasted for a total of six sessions, once every two days. Subjects in the TE group completed motor control and core stability exercises for 30 min during each treatment session. The motor control exercise included stretching of the trunk and extremity muscles through trunk and hip rotation and flexion training. Stabilization exercises consisted of the (1) bridge exercise, (2) single-leg-lift bridge exercise, (3) side bridge exercise, (4) two-point bird-dog position with an elevated contralateral leg and arm, (5) bear crawl exercise, and (6) dead bug exercise. The treatment lasted for a total of six sessions, with one session every two days. Serum samples were collected from subjects before and after physiotherapy-based treatment for lipidomic and metabolomic measurements. Results: Through lipidomic analysis, we found that the phosphatidylcholine/phosphatidylethanolamine (PC/PE) ratio decreased and the sphingomyelin/ceramide (SM/Cer) ratio increased in cNLBP patients after MT or TE treatment. In addition, eight metabolites enriched in pyrimidine and purine differed significantly in cNLBP patients who received MT treatment. A total of nine metabolites enriched in pyrimidine, tyrosine, and galactose pathways differed significantly in cNLBP patients after TE treatment during metabolomics analysis. Conclusions: Our study was the first to elucidate the alterations in the lipidomics and metabolomics of cNLBP physiotherapy-based treatment and can expand our knowledge of cNLBP physiotherapy-based treatment.
17,782
405
[ 368, 193, 488, 586, 269, 1027, 532, 375, 670, 671, 72, 75 ]
16
[ "patients", "analysis", "cnlbp", "group", "treatment", "mt", "treated", "te", "fig", "exercise" ]
[ "low pain cnlbp", "elucidate cnlbp physiotherapy", "physiotherapy cnlbp study", "affecting cnlbp physiotherapy", "cnlbp therapeutic exercise" ]
null
[CONTENT] Chronic nonspecific low back pain | Lipid | Metabolite | Manual therapy | Therapeutic exercise [SUMMARY]
null
[CONTENT] Chronic nonspecific low back pain | Lipid | Metabolite | Manual therapy | Therapeutic exercise [SUMMARY]
[CONTENT] Chronic nonspecific low back pain | Lipid | Metabolite | Manual therapy | Therapeutic exercise [SUMMARY]
[CONTENT] Chronic nonspecific low back pain | Lipid | Metabolite | Manual therapy | Therapeutic exercise [SUMMARY]
[CONTENT] Chronic nonspecific low back pain | Lipid | Metabolite | Manual therapy | Therapeutic exercise [SUMMARY]
[CONTENT] Lipids | Low Back Pain | Physical Therapy Modalities | Pyrimidines | Quality of Life | Humans [SUMMARY]
null
[CONTENT] Lipids | Low Back Pain | Physical Therapy Modalities | Pyrimidines | Quality of Life | Humans [SUMMARY]
[CONTENT] Lipids | Low Back Pain | Physical Therapy Modalities | Pyrimidines | Quality of Life | Humans [SUMMARY]
[CONTENT] Lipids | Low Back Pain | Physical Therapy Modalities | Pyrimidines | Quality of Life | Humans [SUMMARY]
[CONTENT] Lipids | Low Back Pain | Physical Therapy Modalities | Pyrimidines | Quality of Life | Humans [SUMMARY]
[CONTENT] low pain cnlbp | elucidate cnlbp physiotherapy | physiotherapy cnlbp study | affecting cnlbp physiotherapy | cnlbp therapeutic exercise [SUMMARY]
null
[CONTENT] low pain cnlbp | elucidate cnlbp physiotherapy | physiotherapy cnlbp study | affecting cnlbp physiotherapy | cnlbp therapeutic exercise [SUMMARY]
[CONTENT] low pain cnlbp | elucidate cnlbp physiotherapy | physiotherapy cnlbp study | affecting cnlbp physiotherapy | cnlbp therapeutic exercise [SUMMARY]
[CONTENT] low pain cnlbp | elucidate cnlbp physiotherapy | physiotherapy cnlbp study | affecting cnlbp physiotherapy | cnlbp therapeutic exercise [SUMMARY]
[CONTENT] low pain cnlbp | elucidate cnlbp physiotherapy | physiotherapy cnlbp study | affecting cnlbp physiotherapy | cnlbp therapeutic exercise [SUMMARY]
[CONTENT] patients | analysis | cnlbp | group | treatment | mt | treated | te | fig | exercise [SUMMARY]
null
[CONTENT] patients | analysis | cnlbp | group | treatment | mt | treated | te | fig | exercise [SUMMARY]
[CONTENT] patients | analysis | cnlbp | group | treatment | mt | treated | te | fig | exercise [SUMMARY]
[CONTENT] patients | analysis | cnlbp | group | treatment | mt | treated | te | fig | exercise [SUMMARY]
[CONTENT] patients | analysis | cnlbp | group | treatment | mt | treated | te | fig | exercise [SUMMARY]
[CONTENT] pain | chronic | cnlbp | studies | chronic pain | associated | strain | metabolites | pe | pc [SUMMARY]
null
[CONTENT] patients | group | fig | analysis | treated | cnlbp | metabolism | cnlbp patients | orthogonal pls | orthogonal pls da [SUMMARY]
[CONTENT] cnlbp | mt te treatment | treatment | lipids metabolites | te treatment | mt te | physiotherapy | treatment able cause | metabolites alteration useful treatment | treatment associated specific lipids [SUMMARY]
[CONTENT] patients | cnlbp | reagent | group | analysis | treatment | exercise | mt | te | metabolism [SUMMARY]
[CONTENT] patients | cnlbp | reagent | group | analysis | treatment | exercise | mt | te | metabolism [SUMMARY]
[CONTENT] 65-80% ||| ||| TE ||| MT | TE [SUMMARY]
null
[CONTENT] SM/Cer | MT | TE ||| eight ||| nine | TE [SUMMARY]
[CONTENT] first [SUMMARY]
[CONTENT] 65-80% ||| ||| TE ||| MT | TE ||| Seventeen | MT | TE ||| MT | 20 ||| six | every two days ||| TE | 30 ||| ||| 1 | 2 | 3 | 4 | two | 5 | 6 ||| six | one | every two days ||| ||| SM/Cer | MT | TE ||| eight ||| nine | TE ||| first [SUMMARY]
[CONTENT] 65-80% ||| ||| TE ||| MT | TE ||| Seventeen | MT | TE ||| MT | 20 ||| six | every two days ||| TE | 30 ||| ||| 1 | 2 | 3 | 4 | two | 5 | 6 ||| six | one | every two days ||| ||| SM/Cer | MT | TE ||| eight ||| nine | TE ||| first [SUMMARY]
Dihydropyrimidine dehydrogenase (DPYD) gene c.1627A>G A/G and G/G genotypes are risk factors for lymph node metastasis and distant metastasis of colorectal cancer.
34612540
Dihydropyrimidine dehydrogenase (DPD) is the key enzyme catabolizing pyrimidines and may affect tumor progression. DPYD gene mutations affect DPD activity. The relationship between DPYD IVS14+1G>A, c.1627A>G, and c.85T>C and the lymph node metastasis (LNM) and distant metastasis (DM) of colorectal cancer (CRC) was investigated.
BACKGROUND
A total of 537 CRC patients were enrolled in this study. DPYD polymorphisms were analyzed by polymerase chain reaction (PCR)-Sanger sequencing. The relationships between DPYD genotypes and the clinical features and metastasis of CRC patients were analyzed.
METHODS
For DPYD c.1627A>G, A/A (57.7%) was the most common genotype, followed by the A/G (35.6%) and G/G (6.7%) genotypes. For c.85T>C, the T/T, T/C, and C/C genotypes accounted for 83.6%, 16.0%, and 0.4%, respectively. Logistic regression analysis revealed that the DPYD c.1627A>G A/G and G/G genotypes in the dominant model (A/G + G/G vs. A/A) were significant risk factors for the LNM (p = 0.029, OR 1.506, 95% CI = 1.048-2.165) and DM (p = 0.039, OR 1.588, 95% CI = 1.041-2.423) of CRC. In addition, the DPYD c.1627A>G polymorphism was more common in patients with an abnormal serum carcinoembryonic antigen (CEA) level (>5 ng/ml) (p = 0.003) or carbohydrate antigen 24-2 (CA24-2) level (>20 U/ml) (p = 0.015).
RESULTS
The results suggested that the DPYD c.1627A>G A/G and G/G genotypes are associated with an increased risk of LNM and DM of CRC.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Colorectal Neoplasms", "Dihydrouracil Dehydrogenase (NADP)", "Female", "Genetic Predisposition to Disease", "Humans", "Lymphatic Metastasis", "Male", "Middle Aged", "Polymorphism, Single Nucleotide", "Risk Factors" ]
8605172
INTRODUCTION
With the burden of cancer morbidity and mortality growing rapidly, cancer is a major barrier to increasing life expectancy worldwide. 1 Colorectal cancer (CRC) is one of the most common gastrointestinal malignancies. According to the Global Cancer Statistics 2020 report by the International Agency for Research on Cancer (IARC), CRC is the third most prevalent cancer and the second leading cause of cancer death in the world. 2 In clinical practice, CRC can be treated with endoscopic treatment, surgical resection, chemotherapy drugs, targeted drugs, immunotherapy, and radiation. 3 , 4 The multiple disciplinary team (MDT) model has also improved the level of CRC treatment. 5 However, the recurrence and metastasis of CRC are the major problems affecting patient survival. Metastasis is the process by which cancer cells spread from the primary lesion to distal organs and is the leading cause of cancer mortality. 6 Metastasis of CRC includes lymph node metastasis (LNM) and distant metastasis (DM). 7 Capecitabine is an oral prodrug of 5‐fluorouracil (5‐FU) and has been approved for the treatment of various malignancies. 8 It has been reported that the curative and toxic effects of 5‐FU show noticeable individual differences. 9 After fluorouracil administration, 5‐FU can be transformed in cells into 5‐fluoro‐2'‐deoxyuridine 5'‐monophosphate (FdUMP), 5‐fluoro‐2'‐deoxyuridine 5'‐triphosphate (FdUTP), and 5‐fluorouridine 5'‐triphosphate (FUTP), which are three cytotoxic metabolites. 10 FdUMP inhibits thymine deoxyribonucleotide synthetase (thymidylate synthase), an enzyme necessary for DNA replication and repair, while FdUTP and FUTP disrupt the processing and function of DNA and RNA. 11 Dihydropyrimidine dehydrogenase (DPD) is a rate‐limiting enzyme in the catabolic pathway of fluorouracil. DPD can inactivate up to 85% of 5‐FU to 5,6‐dihydro‐5‐fluorouracil, and this intermediate is further metabolized to β‐alanine or β‐aminoisobutyric acid. 12 These processes increase nucleotide synthesis, which is conducive to DNA synthesis and cell growth. When DPD enzyme activity is decreased, the in vivo clearance rate of fluorouracil decreases, its half‐life is prolonged, and its cytotoxicity is enhanced. 13 DPD enzyme activity is affected by DPYD gene polymorphisms. 14 In addition, DPD is associated with epithelial‐to‐mesenchymal transition (EMT). EMT has been implicated in carcinogenesis and tumor metastasis by enhancing mobility, invasion, and resistance to apoptotic stimuli. 15 DPYD gene polymorphisms may affect the process of EMT by changing the activity of DPD, thus participating in the metastasis of tumor cells. The human DPYD gene is located on chromosome 1p21.3; it is 850 kb in length and encompasses 23 exons. Genetic variations of DPYD lead to changes in DPD enzyme activity, which can result in adverse side effects. The DPYD gene has more than 1700 different genetic variants, and more than 600 are missense variants affecting the DPD protein sequence, according to the GnomAD database (https://gnomad.broadinstitute.org/). To date, the DPYD variants or polymorphisms that have attracted the most attention include DPYD IVS14+1 G>A (rs3918290, DPYD *2A), DPYD c.1627 A>G (rs1801159, DPYD *5A), and DPYD c.85 T>C (rs1801265, DPYD *9A). 16 , 17 Studies have shown that the clinical outcome and survival of CRC are associated with gene polymorphisms and gene expression levels. 18 One study showed that polymorphisms of DPYD have a significant effect on toxicity and clinical outcome in colorectal or gastroesophageal cancer patients receiving capecitabine‐based chemotherapy. 19 Another study showed that the mRNA expression of DPYD is associated with clinicopathological characteristics and may be useful for predicting survival in CRC patients. 20 The relationship between DPYD gene polymorphisms and metastasis of CRC has not been studied. In the present study, the relationships between DPYD gene polymorphisms and the clinical features of CRC patients and the metastasis of CRC (including LNM and DM) were analyzed. It is expected to provide a valuable marker for the prognosis of CRC and a valuable target for the clinical treatment of metastatic CRC. This study may provide a valuable reference for the relationship between gene polymorphisms and the pathological features and metastasis of CRC.
null
null
RESULTS
Population characteristics: A total of 537 CRC patients were enrolled in this study, including 349 (65.0%) men and 188 (35.0%) women. The average age of the patients was 59.34 ± 10.14 years (range 26–85 years); 273 (50.8%) patients were ≤60 years old and 264 (49.2%) were >60 years old. According to the pathological degree of tumor differentiation, 8 (1.5%) samples were well-differentiated tumors, 497 (92.5%) were moderately differentiated tumors, 26 (5.0%) were poorly differentiated tumors, and 6 were unknown. According to tumor stage, 3 (0.6%), 27 (5.0%), 364 (67.8%), and 142 (26.4%) cases were pT1, pT2, pT3, and pT4, respectively; the proportion of higher-stage tumors (pT3 + pT4) was 94.2%. According to lymph node status, 192 (35.8%), 196 (36.5%), 145 (27.0%), and 4 (0.7%) cases were N0, N1, N2, and N3, respectively. In addition, 428 (79.7%) and 109 (20.3%) cases were M0 and M1, respectively (Table 1: Baseline characteristics of study objects).

The frequency of DPYD gene polymorphisms in the patients: In this study, the DPYD IVS14+1 G>A, DPYD c.1627 A>G, and DPYD c.85 T>C genotypes were identified in the patients. For the DPYD IVS14+1G>A variant, there were 537 (100%) cases with the G/G genotype (wild type), 0 (0%) cases with the G/A heterozygote, and 0 (0%) cases with the A/A homozygote; that is, no DPYD IVS14+1G>A mutation was found in the patients in this study. For DPYD c.1627A>G, there were 310 (57.7%) cases with the A/A genotype (wild type), 191 (35.6%) cases with the A/G heterozygote, and 36 (6.7%) cases with the G/G homozygote. For DPYD c.85T>C, there were 449 (83.6%) cases with the T/T genotype (wild type), 86 (16.0%) cases with the T/C heterozygote, and 2 (0.4%) cases with the C/C homozygote. The genotype distributions of DPYD c.1627A>G and DPYD c.85T>C in the CRC patients were consistent with Hardy–Weinberg equilibrium (χ2 = 0.425, p = 0.802 and χ2 = 0.715, p = 0.750, respectively).

Association of DPYD polymorphisms with metastasis of CRC: Logistic regression analysis of the relationship between DPYD genotypes and the LNM status of CRC was performed. The frequency of the DPYD c.1627A>G A/G genotype was markedly higher in the LNM group (39.4%) than in the non-LNM CRC patients (28.6%), indicating that the A/G genotype of DPYD c.1627A>G might increase the risk of LNM in CRC patients (p = 0.016, OR = 1.626, 95% CI = 1.104–2.395). The variants were also analyzed under different genetic models, which showed that the DPYD c.1627A>G A/G and G/G genotypes in the dominant model (DPYD c.1627A>G A/G + G/G vs. DPYD c.1627A>G A/A) were significant risk factors for the LNM of CRC (p = 0.029, OR 1.506, 95% CI = 1.048–2.165) (Table 2: Association of DPYD polymorphisms with metastasis of CRC patients. Abbreviations: CRC, colorectal cancer; DM, distant metastasis; LNM, lymph node metastasis. Bold numbers indicate significant values (p < 0.05)). Logistic regression analysis of the relationship between DPYD genotypes and the DM status of CRC was also performed. The frequency of the DPYD c.1627A>G A/G genotype was markedly higher in the DM group (45.0%) than in the non-DM group (33.2%), indicating that the A/G genotype of DPYD c.1627A>G might increase the risk of DM in CRC patients (p = 0.023, OR = 1.673, 95% CI = 1.079–2.596). In addition, the DPYD c.1627A>G A/G and G/G genotypes in the dominant model (DPYD c.1627A>G A/G + G/G vs. DPYD c.1627A>G A/A) were significant risk factors for the DM of CRC (p = 0.039, OR = 1.588, 95% CI = 1.041–2.423) (Table 2).
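As an illustration of the dominant-model association reported above, the sketch below computes an unadjusted odds ratio with a Wald 95% confidence interval and a chi-square p-value from a 2x2 table (A/G + G/G carriers vs. A/A, by LNM status). The cell counts are hypothetical and the paper's own estimates came from logistic regression, so this is only a simplified analogue of that analysis, not a reproduction of the study's numbers.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts (NOT the study data): dominant model vs. LNM status.
# rows: A/G + G/G carriers, A/A wild type; columns: LNM present, LNM absent
a, b = 160, 85
c, d = 185, 107

# Unadjusted odds ratio and Wald 95% CI on the log scale.
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

# Chi-square test of independence for the same 2x2 table.
chi2_stat, p_value, _, _ = chi2_contingency([[a, b], [c, d]], correction=False)
print(f"OR = {odds_ratio:.3f}, 95% CI = {ci_low:.3f}-{ci_high:.3f}, p = {p_value:.3f}")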
Association of DPYD polymorphisms with clinicopathological parameters in the CRC patients: The association between the DPYD c.1627A>G and c.85T>C polymorphisms and the clinicopathological features of CRC patients was evaluated. The clinical features collected included gender, age, degree of differentiation of the tumor sample, serum tumor marker levels (carcinoembryonic antigen (CEA), carbohydrate antigen 24–2 (CA24‐2), and carbohydrate antigen 19–9 (CA19‐9)), tumor stage, lymph node status, and distant metastasis status. There was no relationship between the DPYD c.1627A>G and c.85T>C polymorphisms and the gender, degree of differentiation of the tumor sample, serum CA19‐9 level, or tumor stage (T stage) of CRC patients. However, the frequency of the DPYD c.1627A>G A/G + G/G genotypes was significantly higher in older patients (>60 years old) than in younger patients (≤60 years old) (p = 0.036). The frequency of the DPYD c.1627A>G A/G + G/G genotypes in patients with an abnormal serum CEA level (>5 ng/ml) or an abnormal serum CA24‐2 level (>20 U/ml) was significantly higher than that in patients with a normal serum CEA level (≤5 ng/ml) (p = 0.003) or a normal serum CA24‐2 level (≤20 U/ml) (p = 0.015), respectively (Table 3: Association of DPYD polymorphisms with clinicopathological parameters in the CRC patients. Abbreviations: CA19‐9, carbohydrate antigen 19–9; CA24‐2, carbohydrate antigen 24–2; CEA, carcinoembryonic antigen. Bold numbers indicate significant values (p < 0.05)).
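The Hardy–Weinberg consistency check mentioned for the genotype distributions above can be outlined as follows, using the DPYD c.1627A>G genotype counts reported in the text (310 A/A, 191 A/G, 36 G/G). This is a generic goodness-of-fit sketch; depending on the degrees of freedom and any correction applied, it will not necessarily reproduce the exact χ2 = 0.425, p = 0.802 reported in the article.

import numpy as np
from scipy.stats import chi2

obs = np.array([310, 191, 36])          # observed A/A, A/G, G/G counts from the text
n = obs.sum()

# Allele frequency of A and Hardy–Weinberg expected genotype counts.
p_a = (2 * obs[0] + obs[1]) / (2 * n)
q_g = 1 - p_a
exp = n * np.array([p_a ** 2, 2 * p_a * q_g, q_g ** 2])

# Chi-square goodness-of-fit; df = 3 genotype classes - 1 - 1 estimated allele frequency.
chi2_stat = float(((obs - exp) ** 2 / exp).sum())
p_value = chi2.sf(chi2_stat, df=1)
print(f"HWE chi-square = {chi2_stat:.3f}, p = {p_value:.3f}")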
CONCLUSION
DPYD c.1627A>G A/G and G/G genotypes are associated with an increased risk of lymph node metastasis and distant metastasis of CRC. Future studies should include more relevant genes in the analysis and assess potential gene-environment interactions. This study may provide a valuable reference for the relationship between gene polymorphisms and the pathological features and metastasis of CRC.
[ "INTRODUCTION", "Subjects", "Genotyping of DPYD gene", "Data collection and statistical analysis", "Population characteristics", "The frequency of DPYD gene polymorphisms in the patients", "Association of DPYD polymorphisms with metastasis of CRC", "Association of DPYD polymorphisms with clinicopathological parameters in the CRC patients", "AUTHORS’ CONTRIBUTIONS" ]
[ "With the burden of cancer morbidity and mortality rapidly growing worldwide, cancer is a major barrier to increasing life expectancy worldwide.\n1\n Colorectal cancer (CRC) is one of the most common gastrointestinal malignancies. According to the Global Cancer Statistics in 2020 by International Agency for Research on Cancer (IARC), CRC is the third most prevalent cancer and the second leading cause of cancer death in the world.\n2\n In clinical treatment, CRC can be treated with endoscopic treatment, surgical resection, chemotherapy drugs, targeted drugs, immunotherapy, and radiation.\n3\n, \n4\n The multiple disciplinary team (MDT) model also improved the treatment level of CRC.\n5\n However, the recurrence and metastasis of CRC are the major problems affecting the survival of the patients. Metastasis is the process by which cancer cells spread from the primary lesion to the distal organs and is the leading cause of cancer mortality.\n6\n Metastasis of CRC includes lymph nodes metastasis (LNM) and distant metastasis (DM).\n7\n\n\nCapecitabine is an oral prodrug of 5‐fluorouracil (5‐FU) and has been approved for the treatment of various malignancies.\n8\n There has been reports that the curative effect and toxic effects of 5‐FU exist noticeable individual differences.\n9\n After fluorouracil administration, 5‐FU can be transformed into 5‐fluoro‐2'‐deoxyuridine 5’ monophosphate (FdUMP), 5‐fluoro‐2'‐deoxyuridine 5'‐triphosphate (FdUTP), and 5‐fluorouridine 5'‐triphosphate (FUTP) in cells, which are three cytotoxic metabolites.\n10\n FdUMP inhibits the thymine ceoxyribonucleotide synthetase, the enzyme is necessary for DNA replication and repair, while FdUTP and FUTP disrupt the processing and function of DNA and RNA.\n11\n Dihydropyrimidine dehydrogenase (DPD) is a rate‐limiting enzyme in the catabolic pathway of fluorouracil. DPD can inactivate up to 85% of 5‐Fu into 5, 6‐dihydro‐5‐fluorouracil, and the intermediate is further metabolized to β‐alanine or β‐aminoisobutyric acid.\n12\n These processes will increase nucleotide synthesis, which is conducive to DNA synthesis and cell growth. While DPD enzyme activity is decreased, fluorouracil clearance rate in vivo is decreased, the half‐life is prolonged and cytotoxicity is enhanced.\n13\n DPD enzyme activity is affected by DPYD gene polymorphisms.\n14\n In addition, DPD is associated with epithelial‐to‐mesenchymal transition (EMT). EMT has been implicated in carcinogenesis and tumor metastasis by enhancing mobility, invasion, and resistance to apoptotic stimuli.\n15\n\nDPYD gene polymorphisms may affect the process of EMT by changing the activity of DPD, thus participating in the metastasis of tumor cells.\nThe human DPYD gene is located on chromosome 1p21.3, it is 850 kb in length encompassing 23 exons. Genetic variations of DPYD lead to changes in DPD enzyme activity, which could result in some adverse side effects. The DPYD gene has more than 1700 different genetic variants, and more than 600 are missense variants impacting on the DPD protein sequence, according to the report in the GnomAD database (https://gnomad.broadinstitute.org/). So far, the variants or polymorphisms of DPYD gene attracted more attention including: DPYD IVS14+1 G>A (rs3918290, DPYD *2A), DPYD c. 1627 A>G (rs1801159, DPYD *5A), DPYD c. 
85 T>C (rs1801265, DPYD *9A).\n16\n, \n17\n\n\nStudies have shown that the clinical outcome, the survival of CRC is associated with gene polymorphisms and gene expression level.\n18\n One study showed that polymorphisms of DPYD have a significant effect on toxicity and clinical outcome in colorectal or gastroesophageal cancer patients receiving capecitabine‐based chemotherapy.\n19\n Another study showed that the mRNA expression of DPYD is associated with clinicopathological characteristics and may be useful for predicting survival in CRC patients.\n20\n The relationship between DPYD gene polymorphisms and metastasis of CRC has not been studied. In the present study, the relationship between DPYD gene polymorphisms and the clinical features of CRC patients, metastasis of CRC (including LNM and DM) was analyzed. It is expected to provide a valuable marker for the prognosis of CRC and a valuable target for the clinical treatment of metastatic CRC. This study may provide a valuable reference for the relationship between gene polymorphism and pathological features and metastasis of CRC.", "A total of 537 CRC patients were recruited from Meizhou People's Hospital, from January 2016 to May 2019. Inclusion criteria: (1) Imaging diagnosis and histologically confirmed diagnosis met the diagnostic criteria for CRC. (2) Patients without serious cardiovascular and cerebrovascular diseases and infectious diseases. Exclusion criteria: (1) Patients without colorectal cancer. (2) Patients with dysfunction of vital organs. (3) Patients who also have other tumors. This study was supported by the Ethics Committee of the Meizhou People's Hospital. The flow chart of the present study is shown in Figure 1.\nThe flow chart of the present study", "Two milliliters of venous blood sample were obtained from each subject. Genomic DNA was extracted using a QIAamp DNA Kit (Qiagen GmbH). DPYD IVS14+1 G>A variant and polymorphisms of DPYD c. 1627 A>G and DPYD c. 85 T>C were analyzed. DPYD Genotyping Test Kit (SINOMD Gene Detection Technology Co. Ltd.) based on Sanger sequencing was used for testing. Polymerase chain reaction (PCR) was performed according to the following procedure: Initial denaturation at 95℃ for 3 min, followed by 45 cycles of denaturation at 94℃ for 15 s, annealing at 63℃ for 1 min, and extension at 72℃ for 1 min. PCR products were purified with ExoSap‐It (ABI PCR Product Cleanup Reagent). DNA sequences determination was detected using ABI Terminator v3.1 Cycle Sequencing kit and performed on ABI 3500 Dx Genetic Analyzer, analyzed with Sequencing Analysis v5.4 (Life Technologies).", "Relevant information and medical records of these participants were collected. Clinical information, including age, gender, histopathological type, degree of tumor differentiation, TNM stage, and tumor grade, was collected. SPSS statistical software version 21.0 (IBM Inc.) was used for the data analysis. The Hardy–Weinberg equilibrium (HWE) of DPYD genotypes was assessed using the χ2 test. Association between DPYD variants status with the clinical features of patients and metastasis of CRC were evaluated by Fisher's exact test. A p value <0.05 was set as statistically significant.", "A total of 537 CRC patients were enrolled in this study, including 349 (65.0%) men and 188 (35.0%) women. The average age of the patients was 59.34 ± 10.14 years (26–85 years), 273 (50.8%) patients with ≤60 years old, and 264 (49.2%) patients with >60 years old. 
According to the pathological degree of tumor differentiation, 8 (1.5%) samples were well‐differentiated tumors, 497 (92.5%) samples were moderately differentiated tumors, 26 (5.0%) samples were poorly differentiated tumors, and 6 samples were unknown. According to the tumor stage, 3 (0.6%), 27 (5.0%), 364 (67.8%), and 142 (26.4%) cases were pT1, pT2, pT3, and pT4 stage, respectively. The proportion of higher stage tumors (pT3+ pT4 categories) was 94.2%. According to the lymph nodes status, 192 (35.8%), 196 (36.5%), 145 (27.0%), and 4 (0.7%) cases were N0, N1, N2, and N3 stage, respectively. In addition, 428 (79.7%) and 109 (20.3%) cases were M0 and M1 stage, respectively (Table 1).\nBaseline characteristics of study objects", "In this study, the DPYD IVS14+1 G>A, DPYD c. 1627 A>G, DPYD c. 85 T>C genotypes in the patients were identified. About the DPYD IVS14+1G>A variant, there were 537 (100%) cases with G/G genotype (wild type), 0 (0%) cases with G/A heterozygous, and 0 (0%) cases with A/A homozygous. That is to say, no DPYD IVS14+1G>A mutation was found in the patients in this study. In the DPYD c.1627A>G, there were 310 (57.7%) cases with A/A genotype (wild type), 191 (35.6%) cases with A/G heterozygous, and 36 (6.7%) cases with G/G homozygous. Among DPYD c.85T>C, there were 449 (83.6%) cases with T/T genotype (wild type), 86 (16.0%) cases with T/C heterozygotes, and 2 (0.4%) cases with C/C homozygous. The genotype distributions of DPYD c.1627A>G, and DPYD c.85T>C in the CRC patients were consistent with Hardy–Weinberg equilibrium (χ2 = 0.425, p = 0.802 and χ2 = 0.715, p = 0.750, respectively).", "Logistic regression analysis of the relationship between the genotype of DPYD polymorphisms and the LNM status of CRC was studied. The frequency of DPYD c.1627A>G A/G genotype (39.4%) in the LNM group was obviously higher than that (28.6%) in the non‐LNM CRC patients. It was demonstrated that the A/G genotype of DPYD c.1627A>G might increase the risk of LNM in CRC patients (p = 0.016, OR = 1.626, 95% CI = 1.104–2.395). The variants were analyzed under different genetic models. It was showed that DPYD c.1627A>G A/G and G/G genotypes in the dominant model (DPYD c.1627A>G A/G + G/G vs. DPYD c.1627A>G A/A) were the significant risk factors (p = 0.029, OR 1.506, 95% CI = 1.048–2.165) for the LNM of CRC (Table 2).\nAssociation of DPYD polymorphisms with metastasis of CRC patients\nAbbreviations: CRC, colorectal cancer; DM, distant metastasis; LNM, lymph node metastasis.\nBold numbers indicate significant values (p < 0.05).\nLogistic regression analysis of the relationship between the genotype of DPYD polymorphisms and DM status of CRC was studied. The frequency of DPYD c.1627A>G A/G genotype (45.0%) in the DM group was obviously higher than that (33.2%) in the non‐DM group. It was demonstrated that the A/G genotype of DPYD c.1627A>G might increase the risk of DM in CRC patients (p = 0.023, OR = 1.673, 95% CI = 1.079–2.596). In addition, DPYD c.1627A>G A/G and G/G genotypes in the dominant model (DPYD c.1627A>G A/G + G/G vs. DPYD c.1627A>G A/A) were the significant risk factors (p = 0.039, OR = 1.588, 95% CI = 1.041–2.423) for the DM of CRC (Table 2).", "The association between DPYD c.1627A>G, c.85T>C polymorphisms, and clinicopathological features of CRC patients have been evaluated. 
The clinical features including gender, age, degree of differentiation of the tumor sample, serum tumor marker levels (carcinoembryonic antigen (CEA), carbohydrate antigen 24–2 (CA24‐2), carbohydrate antigen 19–9 (CA19‐9)), tumor stage, lymph nodes status, and distant metastasis status was collected. There was no relationship between the DPYD c.1627A>G, c.85T>C polymorphisms and gender, degree of differentiation of the tumor sample, serum CA19‐9 level, and tumor stage (T stage) of CRC patients. However, the frequency of DPYD c.1627A>G A/G+G/G genotypes in older patients (>60 years old) was significantly higher than that in the younger patients (≤60 years old) (p = 0.036). The frequency of DPYD c.1627A>G A/G+G/G genotypes in patients with abnormal serum CEA level (>5 ng/ml) and abnormal serum CA24‐2 level (>20 U/ml) was significantly higher than that in the patients with normal serum CEA level (≤5 ng/ml) (p = 0.003) and normal serum CA24‐2 level (≤20 U/ml) (p = 0.015), respectively (Table 3).\nAssociation of DPYD polymorphisms with clinicopathological parameters in the CRC patients\nAbbreviations: CA19‐9, carbohydrate antigen 19–9; CA24‐2, carbohydrate antigen 24–2; CEA, carcinoembryonic antigen.\nBold numbers indicate significant values (p < 0.05).", "Zhixiong Zhong, Heming Wu, and Juanzi Zeng designed the study. Juanzi Zeng, Qingyan Huang, and Zhikang Yu performed the experiments. Juanzi Zeng and Jiaquan Li collected the clinical data. Heming Wu and Juanzi Zeng analyzed the data. Heming Wu and Juanzi Zeng prepared the manuscript. All authors were responsible for critical revisions, and all authors read and approved the final version of this work." ]
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Subjects", "Genotyping of DPYD gene", "Data collection and statistical analysis", "RESULTS", "Population characteristics", "The frequency of DPYD gene polymorphisms in the patients", "Association of DPYD polymorphisms with metastasis of CRC", "Association of DPYD polymorphisms with clinicopathological parameters in the CRC patients", "DISCUSSION", "CONCLUSION", "CONFLICT OF INTEREST", "AUTHORS’ CONTRIBUTIONS" ]
[ "With the burden of cancer morbidity and mortality rapidly growing worldwide, cancer is a major barrier to increasing life expectancy worldwide.\n1\n Colorectal cancer (CRC) is one of the most common gastrointestinal malignancies. According to the Global Cancer Statistics in 2020 by International Agency for Research on Cancer (IARC), CRC is the third most prevalent cancer and the second leading cause of cancer death in the world.\n2\n In clinical treatment, CRC can be treated with endoscopic treatment, surgical resection, chemotherapy drugs, targeted drugs, immunotherapy, and radiation.\n3\n, \n4\n The multiple disciplinary team (MDT) model also improved the treatment level of CRC.\n5\n However, the recurrence and metastasis of CRC are the major problems affecting the survival of the patients. Metastasis is the process by which cancer cells spread from the primary lesion to the distal organs and is the leading cause of cancer mortality.\n6\n Metastasis of CRC includes lymph nodes metastasis (LNM) and distant metastasis (DM).\n7\n\n\nCapecitabine is an oral prodrug of 5‐fluorouracil (5‐FU) and has been approved for the treatment of various malignancies.\n8\n There has been reports that the curative effect and toxic effects of 5‐FU exist noticeable individual differences.\n9\n After fluorouracil administration, 5‐FU can be transformed into 5‐fluoro‐2'‐deoxyuridine 5’ monophosphate (FdUMP), 5‐fluoro‐2'‐deoxyuridine 5'‐triphosphate (FdUTP), and 5‐fluorouridine 5'‐triphosphate (FUTP) in cells, which are three cytotoxic metabolites.\n10\n FdUMP inhibits the thymine ceoxyribonucleotide synthetase, the enzyme is necessary for DNA replication and repair, while FdUTP and FUTP disrupt the processing and function of DNA and RNA.\n11\n Dihydropyrimidine dehydrogenase (DPD) is a rate‐limiting enzyme in the catabolic pathway of fluorouracil. DPD can inactivate up to 85% of 5‐Fu into 5, 6‐dihydro‐5‐fluorouracil, and the intermediate is further metabolized to β‐alanine or β‐aminoisobutyric acid.\n12\n These processes will increase nucleotide synthesis, which is conducive to DNA synthesis and cell growth. While DPD enzyme activity is decreased, fluorouracil clearance rate in vivo is decreased, the half‐life is prolonged and cytotoxicity is enhanced.\n13\n DPD enzyme activity is affected by DPYD gene polymorphisms.\n14\n In addition, DPD is associated with epithelial‐to‐mesenchymal transition (EMT). EMT has been implicated in carcinogenesis and tumor metastasis by enhancing mobility, invasion, and resistance to apoptotic stimuli.\n15\n\nDPYD gene polymorphisms may affect the process of EMT by changing the activity of DPD, thus participating in the metastasis of tumor cells.\nThe human DPYD gene is located on chromosome 1p21.3, it is 850 kb in length encompassing 23 exons. Genetic variations of DPYD lead to changes in DPD enzyme activity, which could result in some adverse side effects. The DPYD gene has more than 1700 different genetic variants, and more than 600 are missense variants impacting on the DPD protein sequence, according to the report in the GnomAD database (https://gnomad.broadinstitute.org/). So far, the variants or polymorphisms of DPYD gene attracted more attention including: DPYD IVS14+1 G>A (rs3918290, DPYD *2A), DPYD c. 1627 A>G (rs1801159, DPYD *5A), DPYD c. 
85 T>C (rs1801265, DPYD *9A).\n16\n, \n17\n\n\nStudies have shown that the clinical outcome, the survival of CRC is associated with gene polymorphisms and gene expression level.\n18\n One study showed that polymorphisms of DPYD have a significant effect on toxicity and clinical outcome in colorectal or gastroesophageal cancer patients receiving capecitabine‐based chemotherapy.\n19\n Another study showed that the mRNA expression of DPYD is associated with clinicopathological characteristics and may be useful for predicting survival in CRC patients.\n20\n The relationship between DPYD gene polymorphisms and metastasis of CRC has not been studied. In the present study, the relationship between DPYD gene polymorphisms and the clinical features of CRC patients, metastasis of CRC (including LNM and DM) was analyzed. It is expected to provide a valuable marker for the prognosis of CRC and a valuable target for the clinical treatment of metastatic CRC. This study may provide a valuable reference for the relationship between gene polymorphism and pathological features and metastasis of CRC.", "Subjects A total of 537 CRC patients were recruited from Meizhou People's Hospital, from January 2016 to May 2019. Inclusion criteria: (1) Imaging diagnosis and histologically confirmed diagnosis met the diagnostic criteria for CRC. (2) Patients without serious cardiovascular and cerebrovascular diseases and infectious diseases. Exclusion criteria: (1) Patients without colorectal cancer. (2) Patients with dysfunction of vital organs. (3) Patients who also have other tumors. This study was supported by the Ethics Committee of the Meizhou People's Hospital. The flow chart of the present study is shown in Figure 1.\nThe flow chart of the present study\nA total of 537 CRC patients were recruited from Meizhou People's Hospital, from January 2016 to May 2019. Inclusion criteria: (1) Imaging diagnosis and histologically confirmed diagnosis met the diagnostic criteria for CRC. (2) Patients without serious cardiovascular and cerebrovascular diseases and infectious diseases. Exclusion criteria: (1) Patients without colorectal cancer. (2) Patients with dysfunction of vital organs. (3) Patients who also have other tumors. This study was supported by the Ethics Committee of the Meizhou People's Hospital. The flow chart of the present study is shown in Figure 1.\nThe flow chart of the present study\nGenotyping of DPYD gene Two milliliters of venous blood sample were obtained from each subject. Genomic DNA was extracted using a QIAamp DNA Kit (Qiagen GmbH). DPYD IVS14+1 G>A variant and polymorphisms of DPYD c. 1627 A>G and DPYD c. 85 T>C were analyzed. DPYD Genotyping Test Kit (SINOMD Gene Detection Technology Co. Ltd.) based on Sanger sequencing was used for testing. Polymerase chain reaction (PCR) was performed according to the following procedure: Initial denaturation at 95℃ for 3 min, followed by 45 cycles of denaturation at 94℃ for 15 s, annealing at 63℃ for 1 min, and extension at 72℃ for 1 min. PCR products were purified with ExoSap‐It (ABI PCR Product Cleanup Reagent). DNA sequences determination was detected using ABI Terminator v3.1 Cycle Sequencing kit and performed on ABI 3500 Dx Genetic Analyzer, analyzed with Sequencing Analysis v5.4 (Life Technologies).\nTwo milliliters of venous blood sample were obtained from each subject. Genomic DNA was extracted using a QIAamp DNA Kit (Qiagen GmbH). DPYD IVS14+1 G>A variant and polymorphisms of DPYD c. 1627 A>G and DPYD c. 85 T>C were analyzed. 
DPYD Genotyping Test Kit (SINOMD Gene Detection Technology Co. Ltd.) based on Sanger sequencing was used for testing. Polymerase chain reaction (PCR) was performed according to the following procedure: Initial denaturation at 95℃ for 3 min, followed by 45 cycles of denaturation at 94℃ for 15 s, annealing at 63℃ for 1 min, and extension at 72℃ for 1 min. PCR products were purified with ExoSap‐It (ABI PCR Product Cleanup Reagent). DNA sequences determination was detected using ABI Terminator v3.1 Cycle Sequencing kit and performed on ABI 3500 Dx Genetic Analyzer, analyzed with Sequencing Analysis v5.4 (Life Technologies).\nData collection and statistical analysis Relevant information and medical records of these participants were collected. Clinical information, including age, gender, histopathological type, degree of tumor differentiation, TNM stage, and tumor grade, was collected. SPSS statistical software version 21.0 (IBM Inc.) was used for the data analysis. The Hardy–Weinberg equilibrium (HWE) of DPYD genotypes was assessed using the χ2 test. Association between DPYD variants status with the clinical features of patients and metastasis of CRC were evaluated by Fisher's exact test. A p value <0.05 was set as statistically significant.\nRelevant information and medical records of these participants were collected. Clinical information, including age, gender, histopathological type, degree of tumor differentiation, TNM stage, and tumor grade, was collected. SPSS statistical software version 21.0 (IBM Inc.) was used for the data analysis. The Hardy–Weinberg equilibrium (HWE) of DPYD genotypes was assessed using the χ2 test. Association between DPYD variants status with the clinical features of patients and metastasis of CRC were evaluated by Fisher's exact test. A p value <0.05 was set as statistically significant.", "A total of 537 CRC patients were recruited from Meizhou People's Hospital, from January 2016 to May 2019. Inclusion criteria: (1) Imaging diagnosis and histologically confirmed diagnosis met the diagnostic criteria for CRC. (2) Patients without serious cardiovascular and cerebrovascular diseases and infectious diseases. Exclusion criteria: (1) Patients without colorectal cancer. (2) Patients with dysfunction of vital organs. (3) Patients who also have other tumors. This study was supported by the Ethics Committee of the Meizhou People's Hospital. The flow chart of the present study is shown in Figure 1.\nThe flow chart of the present study", "Two milliliters of venous blood sample were obtained from each subject. Genomic DNA was extracted using a QIAamp DNA Kit (Qiagen GmbH). DPYD IVS14+1 G>A variant and polymorphisms of DPYD c. 1627 A>G and DPYD c. 85 T>C were analyzed. DPYD Genotyping Test Kit (SINOMD Gene Detection Technology Co. Ltd.) based on Sanger sequencing was used for testing. Polymerase chain reaction (PCR) was performed according to the following procedure: Initial denaturation at 95℃ for 3 min, followed by 45 cycles of denaturation at 94℃ for 15 s, annealing at 63℃ for 1 min, and extension at 72℃ for 1 min. PCR products were purified with ExoSap‐It (ABI PCR Product Cleanup Reagent). DNA sequences determination was detected using ABI Terminator v3.1 Cycle Sequencing kit and performed on ABI 3500 Dx Genetic Analyzer, analyzed with Sequencing Analysis v5.4 (Life Technologies).", "Relevant information and medical records of these participants were collected. 
Clinical information, including age, gender, histopathological type, degree of tumor differentiation, TNM stage, and tumor grade, was collected. SPSS statistical software version 21.0 (IBM Inc.) was used for the data analysis. The Hardy–Weinberg equilibrium (HWE) of DPYD genotypes was assessed using the χ2 test. Associations between DPYD variant status and the clinical features and metastasis of CRC patients were evaluated by Fisher's exact test. A p value <0.05 was considered statistically significant.", "Population characteristics A total of 537 CRC patients were enrolled in this study, including 349 (65.0%) men and 188 (35.0%) women. The average age of the patients was 59.34 ± 10.14 years (range 26–85 years); 273 (50.8%) patients were ≤60 years old and 264 (49.2%) were >60 years old. According to the pathological degree of tumor differentiation, 8 (1.5%) samples were well‐differentiated tumors, 497 (92.5%) were moderately differentiated, 26 (5.0%) were poorly differentiated, and 6 were of unknown differentiation. According to the tumor stage, 3 (0.6%), 27 (5.0%), 364 (67.8%), and 142 (26.4%) cases were pT1, pT2, pT3, and pT4, respectively; higher‐stage tumors (pT3 + pT4) accounted for 94.2%. According to lymph node status, 192 (35.8%), 196 (36.5%), 145 (27.0%), and 4 (0.7%) cases were N0, N1, N2, and N3, respectively. In addition, 428 (79.7%) and 109 (20.3%) cases were M0 and M1, respectively (Table 1).\nBaseline characteristics of study objects\nThe frequency of DPYD gene polymorphisms in the patients In this study, the DPYD IVS14+1G>A, DPYD c.1627A>G, and DPYD c.85T>C genotypes of the patients were determined. For the DPYD IVS14+1G>A variant, all 537 (100%) cases carried the G/G genotype (wild type), and no G/A heterozygous or A/A homozygous cases were found; in other words, no DPYD IVS14+1G>A variant was detected in this cohort. For DPYD c.1627A>G, 310 (57.7%) cases carried the A/A genotype (wild type), 191 (35.6%) were A/G heterozygous, and 36 (6.7%) were G/G homozygous. For DPYD c.85T>C, 449 (83.6%) cases carried the T/T genotype (wild type), 86 (16.0%) were T/C heterozygous, and 2 (0.4%) were C/C homozygous. The genotype distributions of DPYD c.1627A>G and DPYD c.85T>C in the CRC patients were consistent with Hardy–Weinberg equilibrium (χ2 = 0.425, p = 0.802 and χ2 = 0.715, p = 0.750, respectively).\nAssociation of DPYD polymorphisms with metastasis of CRC Logistic regression analysis was used to examine the relationship between DPYD genotypes and the LNM status of CRC. The frequency of the DPYD c.1627A>G A/G genotype was markedly higher in the LNM group (39.4%) than in the non‐LNM group (28.6%), indicating that the A/G genotype of DPYD c.1627A>G may increase the risk of LNM in CRC patients (p = 0.016, OR = 1.626, 95% CI = 1.104–2.395). The variants were also analyzed under different genetic models: DPYD c.1627A>G A/G and G/G genotypes in the dominant model (A/G + G/G vs. A/A) were significant risk factors for LNM of CRC (p = 0.029, OR = 1.506, 95% CI = 1.048–2.165) (Table 2).\nAssociation of DPYD polymorphisms with metastasis of CRC patients\nAbbreviations: CRC, colorectal cancer; DM, distant metastasis; LNM, lymph node metastasis.\nBold numbers indicate significant values (p < 0.05).\nLogistic regression analysis was likewise used to examine the relationship between DPYD genotypes and the DM status of CRC. The frequency of the DPYD c.1627A>G A/G genotype was markedly higher in the DM group (45.0%) than in the non‐DM group (33.2%), indicating that the A/G genotype of DPYD c.1627A>G may increase the risk of DM in CRC patients (p = 0.023, OR = 1.673, 95% CI = 1.079–2.596). In addition, DPYD c.1627A>G A/G and G/G genotypes in the dominant model (A/G + G/G vs. A/A) were significant risk factors for DM of CRC (p = 0.039, OR = 1.588, 95% CI = 1.041–2.423) (Table 2).\nAssociation of DPYD polymorphisms with clinicopathological parameters in the CRC patients The association between DPYD c.1627A>G and c.85T>C polymorphisms and the clinicopathological features of CRC patients was evaluated. The clinical features collected included gender, age, degree of tumor differentiation, serum tumor marker levels (carcinoembryonic antigen (CEA), carbohydrate antigen 24–2 (CA24‐2), and carbohydrate antigen 19–9 (CA19‐9)), tumor stage, lymph node status, and distant metastasis status. There was no relationship between the DPYD c.1627A>G or c.85T>C polymorphisms and gender, degree of tumor differentiation, serum CA19‐9 level, or tumor stage (T stage). However, the frequency of DPYD c.1627A>G A/G+G/G genotypes was significantly higher in older patients (>60 years old) than in younger patients (≤60 years old) (p = 0.036). The frequency of DPYD c.1627A>G A/G+G/G genotypes was also significantly higher in patients with an abnormal serum CEA level (>5 ng/ml) or an abnormal serum CA24‐2 level (>20 U/ml) than in patients with a normal serum CEA level (≤5 ng/ml) (p = 0.003) or a normal serum CA24‐2 level (≤20 U/ml) (p = 0.015), respectively (Table 3).\nAssociation of DPYD polymorphisms with clinicopathological parameters in the CRC patients\nAbbreviations: CA19‐9, carbohydrate antigen 19–9; CA24‐2, carbohydrate antigen 24–2; CEA, carcinoembryonic antigen.\nBold numbers indicate significant values (p < 0.05).", "A total of 537 CRC patients were enrolled in this study, including 349 (65.0%) men and 188 (35.0%) women.
The average age of the patients was 59.34 ± 10.14 years (range 26–85 years); 273 (50.8%) patients were ≤60 years old and 264 (49.2%) were >60 years old. According to the pathological degree of tumor differentiation, 8 (1.5%) samples were well‐differentiated tumors, 497 (92.5%) were moderately differentiated, 26 (5.0%) were poorly differentiated, and 6 were of unknown differentiation. According to the tumor stage, 3 (0.6%), 27 (5.0%), 364 (67.8%), and 142 (26.4%) cases were pT1, pT2, pT3, and pT4, respectively; higher‐stage tumors (pT3 + pT4) accounted for 94.2%. According to lymph node status, 192 (35.8%), 196 (36.5%), 145 (27.0%), and 4 (0.7%) cases were N0, N1, N2, and N3, respectively. In addition, 428 (79.7%) and 109 (20.3%) cases were M0 and M1, respectively (Table 1).\nBaseline characteristics of study objects", "In this study, the DPYD IVS14+1G>A, DPYD c.1627A>G, and DPYD c.85T>C genotypes of the patients were determined. For the DPYD IVS14+1G>A variant, all 537 (100%) cases carried the G/G genotype (wild type), and no G/A heterozygous or A/A homozygous cases were found; in other words, no DPYD IVS14+1G>A variant was detected in this cohort. For DPYD c.1627A>G, 310 (57.7%) cases carried the A/A genotype (wild type), 191 (35.6%) were A/G heterozygous, and 36 (6.7%) were G/G homozygous. For DPYD c.85T>C, 449 (83.6%) cases carried the T/T genotype (wild type), 86 (16.0%) were T/C heterozygous, and 2 (0.4%) were C/C homozygous. The genotype distributions of DPYD c.1627A>G and DPYD c.85T>C in the CRC patients were consistent with Hardy–Weinberg equilibrium (χ2 = 0.425, p = 0.802 and χ2 = 0.715, p = 0.750, respectively).", "Logistic regression analysis was used to examine the relationship between DPYD genotypes and the LNM status of CRC. The frequency of the DPYD c.1627A>G A/G genotype was markedly higher in the LNM group (39.4%) than in the non‐LNM group (28.6%), indicating that the A/G genotype of DPYD c.1627A>G may increase the risk of LNM in CRC patients (p = 0.016, OR = 1.626, 95% CI = 1.104–2.395). The variants were also analyzed under different genetic models: DPYD c.1627A>G A/G and G/G genotypes in the dominant model (A/G + G/G vs. A/A) were significant risk factors for LNM of CRC (p = 0.029, OR = 1.506, 95% CI = 1.048–2.165) (Table 2).\nAssociation of DPYD polymorphisms with metastasis of CRC patients\nAbbreviations: CRC, colorectal cancer; DM, distant metastasis; LNM, lymph node metastasis.\nBold numbers indicate significant values (p < 0.05).\nLogistic regression analysis was likewise used to examine the relationship between DPYD genotypes and the DM status of CRC. The frequency of the DPYD c.1627A>G A/G genotype was markedly higher in the DM group (45.0%) than in the non‐DM group (33.2%), indicating that the A/G genotype of DPYD c.1627A>G may increase the risk of DM in CRC patients (p = 0.023, OR = 1.673, 95% CI = 1.079–2.596). In addition, DPYD c.1627A>G A/G and G/G genotypes in the dominant model (A/G + G/G vs. A/A) were significant risk factors for DM of CRC (p = 0.039, OR = 1.588, 95% CI = 1.041–2.423) (Table 2).", "The association between DPYD c.1627A>G and c.85T>C polymorphisms and the clinicopathological features of CRC patients was evaluated. The clinical features collected included gender, age, degree of tumor differentiation, serum tumor marker levels (carcinoembryonic antigen (CEA), carbohydrate antigen 24–2 (CA24‐2), and carbohydrate antigen 19–9 (CA19‐9)), tumor stage, lymph node status, and distant metastasis status. There was no relationship between the DPYD c.1627A>G or c.85T>C polymorphisms and gender, degree of tumor differentiation, serum CA19‐9 level, or tumor stage (T stage). However, the frequency of DPYD c.1627A>G A/G+G/G genotypes was significantly higher in older patients (>60 years old) than in younger patients (≤60 years old) (p = 0.036). The frequency of DPYD c.1627A>G A/G+G/G genotypes was also significantly higher in patients with an abnormal serum CEA level (>5 ng/ml) or an abnormal serum CA24‐2 level (>20 U/ml) than in patients with a normal serum CEA level (≤5 ng/ml) (p = 0.003) or a normal serum CA24‐2 level (≤20 U/ml) (p = 0.015), respectively (Table 3).\nAssociation of DPYD polymorphisms with clinicopathological parameters in the CRC patients\nAbbreviations: CA19‐9, carbohydrate antigen 19–9; CA24‐2, carbohydrate antigen 24–2; CEA, carcinoembryonic antigen.\nBold numbers indicate significant values (p < 0.05).", "CRC is one of the common malignant tumors of the human digestive tract.\n21\n, \n22\n Metastasis is a biological phenotype of malignant tumors and an important factor affecting their prognosis. Tumor metastasis is a dynamic process in which multiple factors act at multiple stages of development, including the biology of the tumor cells themselves and the interaction between the tumor and its microenvironment.\n23\n, \n24\n At present, research on tumor metastasis mainly focuses on tumor metastasis genes and tumor metastasis suppressor genes, tumor angiogenesis, the extracellular matrix, cell adhesion, and the tumor microenvironment.\n25\n, \n26\n\n\nStudies have shown that certain gene polymorphisms are associated with the metastasis of cancer.
For example, oral cancer patients carrying the A/A genotype of the single nucleotide polymorphism (SNP) rs10399805 or rs6691378 in the chitinase‐3‐like protein 1 (CHI3L1) gene have a lower risk of LNM.\n27\n Polymorphisms in the promoter regions of the matrix metalloproteinase (MMP) 1, 3, 7, and 9 genes are associated with metastasis of head/neck and breast cancer.\n28\n Luminal A and luminal B breast cancer patients with the A/G genotype of the C‐C motif chemokine ligand 4 (CCL4) gene SNP rs10491121 were less likely to develop LNM.\n29\n The SNPs rs1143630, rs1143633, and rs1143643 of the interleukin‐1 beta (IL‐1B) gene showed a relationship with LNM of papillary thyroid carcinoma (PTC).\n30\n The C/T genotype of SNP rs1989839 in the Ras‐association domain family 1 isoform A (RASSF1A) gene increases the risk of lung metastasis of osteosarcoma.\n31\n The transforming growth factor‐β1 (TGFB1) gene promoter −509C/T polymorphism affects the metastasis of CRC.\n32\n Granzyme B (GZMB) gene polymorphisms were not associated with the metastasis of CRC.\n33\n Studies have also shown that DPYD gene polymorphisms are associated with susceptibility to CRC\n12\n and with the toxicity of chemotherapy drugs.\n34\n However, the relationship between DPYD gene polymorphisms and metastasis of CRC has not been studied.\n\nThe DPYD IVS14+1G>A variant was not found in this study, a result similar to those reported in other populations, such as Caucasians, African‐Americans, Egyptians, Turks, and Taiwanese.\n35\n Many studies have reported that CRC patients with the DPYD IVS14+1G>A variant may suffer severe toxicity and even death after 5‐FU administration.\n36\n, \n37\n However, the DPYD IVS14+1G>A variant is rare in most populations. In this study, the DPYD c.1627A>G A/A, A/G, and G/G genotypes accounted for 57.7%, 35.6%, and 6.7%, respectively, in line with another Chinese population study.\n17\n\nThe DPYD c.85T>C T/T, T/C, and C/C genotypes accounted for 83.6%, 16.0%, and 0.4%, respectively, consistent with the previous study.\n17\n A study of a population of mixed racial background reported DPYD c.85T>C T/C and C/C genotype frequencies of 41% and 10%, respectively,\n38\n which are higher than those observed in this study.\nIn this study, DPYD c.1627A>G A/G and G/G genotypes in the dominant model (A/G + G/G vs. A/A) were significant risk factors for the LNM and DM of CRC. DPD activity is associated with the epithelial‐to‐mesenchymal transition (EMT). EMT is a process during which the epithelial features of cancer cells are lost, the cytoskeletal architecture is re‐organized, the cell shape is changed, and some genes are activated, which leads to increased cell motility and dissemination of the tumor to distant metastatic sites.\n39\n EMT results in decreased adhesion and enhanced migration or invasion. Studies have shown that dihydrothymine and dihydrouracil, the metabolites generated by DPD‐mediated catabolism, play an important role in tumor EMT.\n40\n, \n41\n DPD is necessary for cells to acquire mesenchymal characteristics in vitro and for the dissemination of tumorigenic cells; this metabolic activity is essential for the acquisition of metastatic and aggressive cancer cell traits during EMT.\n40\n Mechanistically, DPD may act as a regulator of EMT by targeting the p38/NF‐κB/Snail1 pathway.\n41\n\n\nIn the present study, the frequency of DPYD c.1627A>G A/G+G/G genotypes in patients with abnormal serum CEA levels was significantly higher than that in patients with normal serum CEA levels. Serum CEA levels can be used as biomarkers for diagnosis, postoperative recurrence, or efficacy monitoring of colorectal cancer.\n42\n The CEA gene family belongs to the immunoglobulin (Ig) superfamily and codes for a vast number of glycoproteins that differ greatly in both amino acid composition and function. The CEA family is divided into two groups, the carcinoembryonic antigen‐related cell adhesion molecules (CEA‐CAMs) and the pregnancy‐specific glycoproteins. CEA expression on epithelial cells may directly influence tumor development through CEA‐CEA bridges between tumor cells or tumor‐stromal cells.\n43\n In other words, DPYD gene mutations may affect the process of EMT by changing the activity of DPD, thereby participating in the metastasis of tumor cells. Elevated CEA expression and DPYD gene mutations may therefore be associated with CRC metastasis.\nCA24‐2 is a serum tumor marker and one of the indicators reflecting the number and activity of tumor cells.\n44\n A study has shown that the CA24‐2 level was higher in gastric cancer patients with distant metastasis than in patients without distant metastasis.\n45\n Increased serum CA24‐2 concentrations were significantly associated with the risk of invasiveness of intraductal papillary mucinous neoplasm (IPMN).\n46\n CEA, CA19‐9, CA24‐2, and CA72‐4, examined postoperatively during follow‐up, are useful for detecting early tumor recurrence and metastasis and for evaluating prognosis.\n47\n Tumorigenesis is dependent on the reprogramming of cellular metabolism. A common feature of cancer cell metabolism is the ability to acquire necessary nutrients from a frequently nutrient‐poor environment and to utilize these nutrients both to maintain viability and to build new biomass.\n48\n Some studies have shown that the pantothenate and CoA biosynthesis signaling pathway is significantly altered in tumor cells.\n49\n, \n50\n, \n51\n DPD is a key enzyme in the pantothenate and CoA biosynthesis signaling pathway (https://www.genome.jp/pathway/ko00770+K00207). Thus, DPYD c.1627A>G A/G+G/G genotypes may affect the activity of DPD and influence tumorigenesis through regulation of this signaling pathway during the reprogramming of cellular metabolism, which may be reflected in changes in serum tumor markers.\nTumor invasion and metastasis constitute a dynamic and complex process involving multiple simultaneous steps.
The persistent emergence of populations of cells with different invasion and metastasis capabilities is a barrier to tumor therapy.\n52\n Designing blocking strategies that specifically target key steps of tumor invasion and metastasis has therefore become a research hotspot for preventing tumor spread.\n53\n A deeper understanding of how tumor invasion and metastasis occur can promote the design of and search for effective anti‐tumor drugs, provide new ideas for the treatment of tumors, and help reduce the mortality of tumor patients.\nThis is the first study of the relationship between DPYD gene variants/polymorphisms and lymph node metastasis and distant metastasis of CRC. There are some limitations to this study that should be noted. First, the number of cases included in this study is not large, which may introduce some bias into the results. Second, only a small number of gene polymorphisms were examined. Tumor cell metastasis is affected by tumor metastasis‐related genes and tumor metastasis‐suppressor genes, tumor angiogenesis, extracellular matrix degradation, cell adhesion, the tumor microenvironment, and other factors, so it may be more meaningful to include related genes in a comprehensive analysis. In addition, a tumor is a multifactorial disease caused by genetic and environmental factors; as a retrospective analysis, the limitations of the original data included in this study constrained assessment of potential gene‐environment interactions.", "\nDPYD c.1627A>G A/G and G/G genotypes are associated with an increased risk of lymph node metastasis and distant metastasis of CRC. Future studies need to include more relevant genes for analysis and to assess potential gene‐environment interactions. This study may provide a valuable reference for the relationship between gene polymorphism and the pathological features and metastasis of CRC.", "The authors declare that they have no competing interests.", "Zhixiong Zhong, Heming Wu, and Juanzi Zeng designed the study. Juanzi Zeng, Qingyan Huang, and Zhikang Yu performed the experiments. Juanzi Zeng and Jiaquan Li collected the clinical data. Heming Wu and Juanzi Zeng analyzed the data. Heming Wu and Juanzi Zeng prepared the manuscript. All authors were responsible for critical revisions, and all authors read and approved the final version of this work." ]
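The statistical analysis described in the section texts above evaluates genotype associations with Fisher's exact test. As a purely illustrative sketch (not the authors' analysis code), the following Python snippet runs Fisher's exact test on a hypothetical 2x2 cross-tabulation of dominant-model carrier status (A/G + G/G vs. A/A) against elevated serum CEA; the counts are invented for demonstration and do not come from Table 3.

from scipy.stats import fisher_exact

# Hypothetical 2x2 table (rows: CEA > 5 ng/ml, CEA <= 5 ng/ml;
# columns: A/G + G/G carriers, A/A non-carriers).
# These counts are illustrative only, not the study's data.
table = [[95, 120],
         [132, 190]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact test: OR = {odds_ratio:.3f}, p = {p_value:.4f}")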
[ null, "materials-and-methods", null, null, null, "results", null, null, null, null, "discussion", "conclusions", "COI-statement", null ]
[ "colorectal cancer", "dihydropyrimidine dehydrogenase", "distant metastasis", "\nDPYD\n", "lymph node metastasis" ]
INTRODUCTION: With the burden of cancer morbidity and mortality growing rapidly, cancer is a major barrier to increasing life expectancy worldwide. 1 Colorectal cancer (CRC) is one of the most common gastrointestinal malignancies. According to the Global Cancer Statistics in 2020 by the International Agency for Research on Cancer (IARC), CRC is the third most prevalent cancer and the second leading cause of cancer death in the world. 2 In clinical practice, CRC can be treated with endoscopic treatment, surgical resection, chemotherapy drugs, targeted drugs, immunotherapy, and radiation. 3 , 4 The multidisciplinary team (MDT) model has also improved the level of CRC treatment. 5 However, the recurrence and metastasis of CRC are the major problems affecting the survival of patients. Metastasis is the process by which cancer cells spread from the primary lesion to distant organs and is the leading cause of cancer mortality. 6 Metastasis of CRC includes lymph node metastasis (LNM) and distant metastasis (DM). 7 Capecitabine is an oral prodrug of 5‐fluorouracil (5‐FU) and has been approved for the treatment of various malignancies. 8 There have been reports of noticeable individual differences in the curative and toxic effects of 5‐FU. 9 After fluorouracil administration, 5‐FU can be transformed in cells into three cytotoxic metabolites: 5‐fluoro‐2'‐deoxyuridine 5' monophosphate (FdUMP), 5‐fluoro‐2'‐deoxyuridine 5'‐triphosphate (FdUTP), and 5‐fluorouridine 5'‐triphosphate (FUTP). 10 FdUMP inhibits thymidylate synthase (thymine deoxyribonucleotide synthetase), an enzyme necessary for DNA replication and repair, while FdUTP and FUTP disrupt the processing and function of DNA and RNA. 11 Dihydropyrimidine dehydrogenase (DPD) is a rate‐limiting enzyme in the catabolic pathway of fluorouracil. DPD can inactivate up to 85% of 5‐FU into 5,6‐dihydro‐5‐fluorouracil, and this intermediate is further metabolized to β‐alanine or β‐aminoisobutyric acid. 12 These processes increase nucleotide synthesis, which is conducive to DNA synthesis and cell growth. When DPD enzyme activity is decreased, the in vivo clearance rate of fluorouracil decreases, its half‐life is prolonged, and its cytotoxicity is enhanced. 13 DPD enzyme activity is affected by DPYD gene polymorphisms. 14 In addition, DPD is associated with epithelial‐to‐mesenchymal transition (EMT). EMT has been implicated in carcinogenesis and tumor metastasis by enhancing mobility, invasion, and resistance to apoptotic stimuli. 15 DPYD gene polymorphisms may affect the process of EMT by changing the activity of DPD, thus participating in the metastasis of tumor cells. The human DPYD gene is located on chromosome 1p21.3; it is 850 kb in length and encompasses 23 exons. Genetic variations of DPYD lead to changes in DPD enzyme activity, which could result in adverse side effects. The DPYD gene has more than 1700 different genetic variants, and more than 600 are missense variants impacting the DPD protein sequence, according to the GnomAD database (https://gnomad.broadinstitute.org/). To date, the DPYD variants or polymorphisms that have attracted the most attention include DPYD IVS14+1 G>A (rs3918290, DPYD *2A), DPYD c. 1627 A>G (rs1801159, DPYD *5A), and DPYD c. 85 T>C (rs1801265, DPYD *9A). 16 , 17 Studies have shown that the clinical outcome and survival of CRC are associated with gene polymorphisms and gene expression levels. 
18 One study showed that polymorphisms of DPYD have a significant effect on toxicity and clinical outcome in colorectal or gastroesophageal cancer patients receiving capecitabine‐based chemotherapy. 19 Another study showed that the mRNA expression of DPYD is associated with clinicopathological characteristics and may be useful for predicting survival in CRC patients. 20 The relationship between DPYD gene polymorphisms and metastasis of CRC has not been studied. In the present study, the relationship between DPYD gene polymorphisms and the clinical features of CRC patients, metastasis of CRC (including LNM and DM) was analyzed. It is expected to provide a valuable marker for the prognosis of CRC and a valuable target for the clinical treatment of metastatic CRC. This study may provide a valuable reference for the relationship between gene polymorphism and pathological features and metastasis of CRC. MATERIALS AND METHODS: Subjects A total of 537 CRC patients were recruited from Meizhou People's Hospital, from January 2016 to May 2019. Inclusion criteria: (1) Imaging diagnosis and histologically confirmed diagnosis met the diagnostic criteria for CRC. (2) Patients without serious cardiovascular and cerebrovascular diseases and infectious diseases. Exclusion criteria: (1) Patients without colorectal cancer. (2) Patients with dysfunction of vital organs. (3) Patients who also have other tumors. This study was supported by the Ethics Committee of the Meizhou People's Hospital. The flow chart of the present study is shown in Figure 1. The flow chart of the present study A total of 537 CRC patients were recruited from Meizhou People's Hospital, from January 2016 to May 2019. Inclusion criteria: (1) Imaging diagnosis and histologically confirmed diagnosis met the diagnostic criteria for CRC. (2) Patients without serious cardiovascular and cerebrovascular diseases and infectious diseases. Exclusion criteria: (1) Patients without colorectal cancer. (2) Patients with dysfunction of vital organs. (3) Patients who also have other tumors. This study was supported by the Ethics Committee of the Meizhou People's Hospital. The flow chart of the present study is shown in Figure 1. The flow chart of the present study Genotyping of DPYD gene Two milliliters of venous blood sample were obtained from each subject. Genomic DNA was extracted using a QIAamp DNA Kit (Qiagen GmbH). DPYD IVS14+1 G>A variant and polymorphisms of DPYD c. 1627 A>G and DPYD c. 85 T>C were analyzed. DPYD Genotyping Test Kit (SINOMD Gene Detection Technology Co. Ltd.) based on Sanger sequencing was used for testing. Polymerase chain reaction (PCR) was performed according to the following procedure: Initial denaturation at 95℃ for 3 min, followed by 45 cycles of denaturation at 94℃ for 15 s, annealing at 63℃ for 1 min, and extension at 72℃ for 1 min. PCR products were purified with ExoSap‐It (ABI PCR Product Cleanup Reagent). DNA sequences determination was detected using ABI Terminator v3.1 Cycle Sequencing kit and performed on ABI 3500 Dx Genetic Analyzer, analyzed with Sequencing Analysis v5.4 (Life Technologies). Two milliliters of venous blood sample were obtained from each subject. Genomic DNA was extracted using a QIAamp DNA Kit (Qiagen GmbH). DPYD IVS14+1 G>A variant and polymorphisms of DPYD c. 1627 A>G and DPYD c. 85 T>C were analyzed. DPYD Genotyping Test Kit (SINOMD Gene Detection Technology Co. Ltd.) based on Sanger sequencing was used for testing. 
Polymerase chain reaction (PCR) was performed according to the following procedure: Initial denaturation at 95℃ for 3 min, followed by 45 cycles of denaturation at 94℃ for 15 s, annealing at 63℃ for 1 min, and extension at 72℃ for 1 min. PCR products were purified with ExoSap‐It (ABI PCR Product Cleanup Reagent). DNA sequences determination was detected using ABI Terminator v3.1 Cycle Sequencing kit and performed on ABI 3500 Dx Genetic Analyzer, analyzed with Sequencing Analysis v5.4 (Life Technologies). Data collection and statistical analysis Relevant information and medical records of these participants were collected. Clinical information, including age, gender, histopathological type, degree of tumor differentiation, TNM stage, and tumor grade, was collected. SPSS statistical software version 21.0 (IBM Inc.) was used for the data analysis. The Hardy–Weinberg equilibrium (HWE) of DPYD genotypes was assessed using the χ2 test. Association between DPYD variants status with the clinical features of patients and metastasis of CRC were evaluated by Fisher's exact test. A p value <0.05 was set as statistically significant. Relevant information and medical records of these participants were collected. Clinical information, including age, gender, histopathological type, degree of tumor differentiation, TNM stage, and tumor grade, was collected. SPSS statistical software version 21.0 (IBM Inc.) was used for the data analysis. The Hardy–Weinberg equilibrium (HWE) of DPYD genotypes was assessed using the χ2 test. Association between DPYD variants status with the clinical features of patients and metastasis of CRC were evaluated by Fisher's exact test. A p value <0.05 was set as statistically significant. Subjects: A total of 537 CRC patients were recruited from Meizhou People's Hospital, from January 2016 to May 2019. Inclusion criteria: (1) Imaging diagnosis and histologically confirmed diagnosis met the diagnostic criteria for CRC. (2) Patients without serious cardiovascular and cerebrovascular diseases and infectious diseases. Exclusion criteria: (1) Patients without colorectal cancer. (2) Patients with dysfunction of vital organs. (3) Patients who also have other tumors. This study was supported by the Ethics Committee of the Meizhou People's Hospital. The flow chart of the present study is shown in Figure 1. The flow chart of the present study Genotyping of DPYD gene: Two milliliters of venous blood sample were obtained from each subject. Genomic DNA was extracted using a QIAamp DNA Kit (Qiagen GmbH). DPYD IVS14+1 G>A variant and polymorphisms of DPYD c. 1627 A>G and DPYD c. 85 T>C were analyzed. DPYD Genotyping Test Kit (SINOMD Gene Detection Technology Co. Ltd.) based on Sanger sequencing was used for testing. Polymerase chain reaction (PCR) was performed according to the following procedure: Initial denaturation at 95℃ for 3 min, followed by 45 cycles of denaturation at 94℃ for 15 s, annealing at 63℃ for 1 min, and extension at 72℃ for 1 min. PCR products were purified with ExoSap‐It (ABI PCR Product Cleanup Reagent). DNA sequences determination was detected using ABI Terminator v3.1 Cycle Sequencing kit and performed on ABI 3500 Dx Genetic Analyzer, analyzed with Sequencing Analysis v5.4 (Life Technologies). Data collection and statistical analysis: Relevant information and medical records of these participants were collected. Clinical information, including age, gender, histopathological type, degree of tumor differentiation, TNM stage, and tumor grade, was collected. 
SPSS statistical software version 21.0 (IBM Inc.) was used for the data analysis. The Hardy–Weinberg equilibrium (HWE) of DPYD genotypes was assessed using the χ2 test. Association between DPYD variants status with the clinical features of patients and metastasis of CRC were evaluated by Fisher's exact test. A p value <0.05 was set as statistically significant. RESULTS: Population characteristics A total of 537 CRC patients were enrolled in this study, including 349 (65.0%) men and 188 (35.0%) women. The average age of the patients was 59.34 ± 10.14 years (26–85 years), 273 (50.8%) patients with ≤60 years old, and 264 (49.2%) patients with >60 years old. According to the pathological degree of tumor differentiation, 8 (1.5%) samples were well‐differentiated tumors, 497 (92.5%) samples were moderately differentiated tumors, 26 (5.0%) samples were poorly differentiated tumors, and 6 samples were unknown. According to the tumor stage, 3 (0.6%), 27 (5.0%), 364 (67.8%), and 142 (26.4%) cases were pT1, pT2, pT3, and pT4 stage, respectively. The proportion of higher stage tumors (pT3+ pT4 categories) was 94.2%. According to the lymph nodes status, 192 (35.8%), 196 (36.5%), 145 (27.0%), and 4 (0.7%) cases were N0, N1, N2, and N3 stage, respectively. In addition, 428 (79.7%) and 109 (20.3%) cases were M0 and M1 stage, respectively (Table 1). Baseline characteristics of study objects A total of 537 CRC patients were enrolled in this study, including 349 (65.0%) men and 188 (35.0%) women. The average age of the patients was 59.34 ± 10.14 years (26–85 years), 273 (50.8%) patients with ≤60 years old, and 264 (49.2%) patients with >60 years old. According to the pathological degree of tumor differentiation, 8 (1.5%) samples were well‐differentiated tumors, 497 (92.5%) samples were moderately differentiated tumors, 26 (5.0%) samples were poorly differentiated tumors, and 6 samples were unknown. According to the tumor stage, 3 (0.6%), 27 (5.0%), 364 (67.8%), and 142 (26.4%) cases were pT1, pT2, pT3, and pT4 stage, respectively. The proportion of higher stage tumors (pT3+ pT4 categories) was 94.2%. According to the lymph nodes status, 192 (35.8%), 196 (36.5%), 145 (27.0%), and 4 (0.7%) cases were N0, N1, N2, and N3 stage, respectively. In addition, 428 (79.7%) and 109 (20.3%) cases were M0 and M1 stage, respectively (Table 1). Baseline characteristics of study objects The frequency of DPYD gene polymorphisms in the patients In this study, the DPYD IVS14+1 G>A, DPYD c. 1627 A>G, DPYD c. 85 T>C genotypes in the patients were identified. About the DPYD IVS14+1G>A variant, there were 537 (100%) cases with G/G genotype (wild type), 0 (0%) cases with G/A heterozygous, and 0 (0%) cases with A/A homozygous. That is to say, no DPYD IVS14+1G>A mutation was found in the patients in this study. In the DPYD c.1627A>G, there were 310 (57.7%) cases with A/A genotype (wild type), 191 (35.6%) cases with A/G heterozygous, and 36 (6.7%) cases with G/G homozygous. Among DPYD c.85T>C, there were 449 (83.6%) cases with T/T genotype (wild type), 86 (16.0%) cases with T/C heterozygotes, and 2 (0.4%) cases with C/C homozygous. The genotype distributions of DPYD c.1627A>G, and DPYD c.85T>C in the CRC patients were consistent with Hardy–Weinberg equilibrium (χ2 = 0.425, p = 0.802 and χ2 = 0.715, p = 0.750, respectively). In this study, the DPYD IVS14+1 G>A, DPYD c. 1627 A>G, DPYD c. 85 T>C genotypes in the patients were identified. 
About the DPYD IVS14+1G>A variant, there were 537 (100%) cases with G/G genotype (wild type), 0 (0%) cases with G/A heterozygous, and 0 (0%) cases with A/A homozygous. That is to say, no DPYD IVS14+1G>A mutation was found in the patients in this study. In the DPYD c.1627A>G, there were 310 (57.7%) cases with A/A genotype (wild type), 191 (35.6%) cases with A/G heterozygous, and 36 (6.7%) cases with G/G homozygous. Among DPYD c.85T>C, there were 449 (83.6%) cases with T/T genotype (wild type), 86 (16.0%) cases with T/C heterozygotes, and 2 (0.4%) cases with C/C homozygous. The genotype distributions of DPYD c.1627A>G, and DPYD c.85T>C in the CRC patients were consistent with Hardy–Weinberg equilibrium (χ2 = 0.425, p = 0.802 and χ2 = 0.715, p = 0.750, respectively). Association of DPYD polymorphisms with metastasis of CRC Logistic regression analysis of the relationship between the genotype of DPYD polymorphisms and the LNM status of CRC was studied. The frequency of DPYD c.1627A>G A/G genotype (39.4%) in the LNM group was obviously higher than that (28.6%) in the non‐LNM CRC patients. It was demonstrated that the A/G genotype of DPYD c.1627A>G might increase the risk of LNM in CRC patients (p = 0.016, OR = 1.626, 95% CI = 1.104–2.395). The variants were analyzed under different genetic models. It was showed that DPYD c.1627A>G A/G and G/G genotypes in the dominant model (DPYD c.1627A>G A/G + G/G vs. DPYD c.1627A>G A/A) were the significant risk factors (p = 0.029, OR 1.506, 95% CI = 1.048–2.165) for the LNM of CRC (Table 2). Association of DPYD polymorphisms with metastasis of CRC patients Abbreviations: CRC, colorectal cancer; DM, distant metastasis; LNM, lymph node metastasis. Bold numbers indicate significant values (p < 0.05). Logistic regression analysis of the relationship between the genotype of DPYD polymorphisms and DM status of CRC was studied. The frequency of DPYD c.1627A>G A/G genotype (45.0%) in the DM group was obviously higher than that (33.2%) in the non‐DM group. It was demonstrated that the A/G genotype of DPYD c.1627A>G might increase the risk of DM in CRC patients (p = 0.023, OR = 1.673, 95% CI = 1.079–2.596). In addition, DPYD c.1627A>G A/G and G/G genotypes in the dominant model (DPYD c.1627A>G A/G + G/G vs. DPYD c.1627A>G A/A) were the significant risk factors (p = 0.039, OR = 1.588, 95% CI = 1.041–2.423) for the DM of CRC (Table 2). Logistic regression analysis of the relationship between the genotype of DPYD polymorphisms and the LNM status of CRC was studied. The frequency of DPYD c.1627A>G A/G genotype (39.4%) in the LNM group was obviously higher than that (28.6%) in the non‐LNM CRC patients. It was demonstrated that the A/G genotype of DPYD c.1627A>G might increase the risk of LNM in CRC patients (p = 0.016, OR = 1.626, 95% CI = 1.104–2.395). The variants were analyzed under different genetic models. It was showed that DPYD c.1627A>G A/G and G/G genotypes in the dominant model (DPYD c.1627A>G A/G + G/G vs. DPYD c.1627A>G A/A) were the significant risk factors (p = 0.029, OR 1.506, 95% CI = 1.048–2.165) for the LNM of CRC (Table 2). Association of DPYD polymorphisms with metastasis of CRC patients Abbreviations: CRC, colorectal cancer; DM, distant metastasis; LNM, lymph node metastasis. Bold numbers indicate significant values (p < 0.05). Logistic regression analysis of the relationship between the genotype of DPYD polymorphisms and DM status of CRC was studied. 
The frequency of DPYD c.1627A>G A/G genotype (45.0%) in the DM group was obviously higher than that (33.2%) in the non‐DM group. It was demonstrated that the A/G genotype of DPYD c.1627A>G might increase the risk of DM in CRC patients (p = 0.023, OR = 1.673, 95% CI = 1.079–2.596). In addition, DPYD c.1627A>G A/G and G/G genotypes in the dominant model (DPYD c.1627A>G A/G + G/G vs. DPYD c.1627A>G A/A) were the significant risk factors (p = 0.039, OR = 1.588, 95% CI = 1.041–2.423) for the DM of CRC (Table 2). Association of DPYD polymorphisms with clinicopathological parameters in the CRC patients The association between DPYD c.1627A>G, c.85T>C polymorphisms, and clinicopathological features of CRC patients have been evaluated. The clinical features including gender, age, degree of differentiation of the tumor sample, serum tumor marker levels (carcinoembryonic antigen (CEA), carbohydrate antigen 24–2 (CA24‐2), carbohydrate antigen 19–9 (CA19‐9)), tumor stage, lymph nodes status, and distant metastasis status was collected. There was no relationship between the DPYD c.1627A>G, c.85T>C polymorphisms and gender, degree of differentiation of the tumor sample, serum CA19‐9 level, and tumor stage (T stage) of CRC patients. However, the frequency of DPYD c.1627A>G A/G+G/G genotypes in older patients (>60 years old) was significantly higher than that in the younger patients (≤60 years old) (p = 0.036). The frequency of DPYD c.1627A>G A/G+G/G genotypes in patients with abnormal serum CEA level (>5 ng/ml) and abnormal serum CA24‐2 level (>20 U/ml) was significantly higher than that in the patients with normal serum CEA level (≤5 ng/ml) (p = 0.003) and normal serum CA24‐2 level (≤20 U/ml) (p = 0.015), respectively (Table 3). Association of DPYD polymorphisms with clinicopathological parameters in the CRC patients Abbreviations: CA19‐9, carbohydrate antigen 19–9; CA24‐2, carbohydrate antigen 24–2; CEA, carcinoembryonic antigen. Bold numbers indicate significant values (p < 0.05). The association between DPYD c.1627A>G, c.85T>C polymorphisms, and clinicopathological features of CRC patients have been evaluated. The clinical features including gender, age, degree of differentiation of the tumor sample, serum tumor marker levels (carcinoembryonic antigen (CEA), carbohydrate antigen 24–2 (CA24‐2), carbohydrate antigen 19–9 (CA19‐9)), tumor stage, lymph nodes status, and distant metastasis status was collected. There was no relationship between the DPYD c.1627A>G, c.85T>C polymorphisms and gender, degree of differentiation of the tumor sample, serum CA19‐9 level, and tumor stage (T stage) of CRC patients. However, the frequency of DPYD c.1627A>G A/G+G/G genotypes in older patients (>60 years old) was significantly higher than that in the younger patients (≤60 years old) (p = 0.036). The frequency of DPYD c.1627A>G A/G+G/G genotypes in patients with abnormal serum CEA level (>5 ng/ml) and abnormal serum CA24‐2 level (>20 U/ml) was significantly higher than that in the patients with normal serum CEA level (≤5 ng/ml) (p = 0.003) and normal serum CA24‐2 level (≤20 U/ml) (p = 0.015), respectively (Table 3). Association of DPYD polymorphisms with clinicopathological parameters in the CRC patients Abbreviations: CA19‐9, carbohydrate antigen 19–9; CA24‐2, carbohydrate antigen 24–2; CEA, carcinoembryonic antigen. Bold numbers indicate significant values (p < 0.05). Population characteristics: A total of 537 CRC patients were enrolled in this study, including 349 (65.0%) men and 188 (35.0%) women. 
The average age of the patients was 59.34 ± 10.14 years (26–85 years), 273 (50.8%) patients with ≤60 years old, and 264 (49.2%) patients with >60 years old. According to the pathological degree of tumor differentiation, 8 (1.5%) samples were well‐differentiated tumors, 497 (92.5%) samples were moderately differentiated tumors, 26 (5.0%) samples were poorly differentiated tumors, and 6 samples were unknown. According to the tumor stage, 3 (0.6%), 27 (5.0%), 364 (67.8%), and 142 (26.4%) cases were pT1, pT2, pT3, and pT4 stage, respectively. The proportion of higher stage tumors (pT3+ pT4 categories) was 94.2%. According to the lymph nodes status, 192 (35.8%), 196 (36.5%), 145 (27.0%), and 4 (0.7%) cases were N0, N1, N2, and N3 stage, respectively. In addition, 428 (79.7%) and 109 (20.3%) cases were M0 and M1 stage, respectively (Table 1). Baseline characteristics of study objects The frequency of DPYD gene polymorphisms in the patients: In this study, the DPYD IVS14+1 G>A, DPYD c. 1627 A>G, DPYD c. 85 T>C genotypes in the patients were identified. About the DPYD IVS14+1G>A variant, there were 537 (100%) cases with G/G genotype (wild type), 0 (0%) cases with G/A heterozygous, and 0 (0%) cases with A/A homozygous. That is to say, no DPYD IVS14+1G>A mutation was found in the patients in this study. In the DPYD c.1627A>G, there were 310 (57.7%) cases with A/A genotype (wild type), 191 (35.6%) cases with A/G heterozygous, and 36 (6.7%) cases with G/G homozygous. Among DPYD c.85T>C, there were 449 (83.6%) cases with T/T genotype (wild type), 86 (16.0%) cases with T/C heterozygotes, and 2 (0.4%) cases with C/C homozygous. The genotype distributions of DPYD c.1627A>G, and DPYD c.85T>C in the CRC patients were consistent with Hardy–Weinberg equilibrium (χ2 = 0.425, p = 0.802 and χ2 = 0.715, p = 0.750, respectively). Association of DPYD polymorphisms with metastasis of CRC: Logistic regression analysis of the relationship between the genotype of DPYD polymorphisms and the LNM status of CRC was studied. The frequency of DPYD c.1627A>G A/G genotype (39.4%) in the LNM group was obviously higher than that (28.6%) in the non‐LNM CRC patients. It was demonstrated that the A/G genotype of DPYD c.1627A>G might increase the risk of LNM in CRC patients (p = 0.016, OR = 1.626, 95% CI = 1.104–2.395). The variants were analyzed under different genetic models. It was showed that DPYD c.1627A>G A/G and G/G genotypes in the dominant model (DPYD c.1627A>G A/G + G/G vs. DPYD c.1627A>G A/A) were the significant risk factors (p = 0.029, OR 1.506, 95% CI = 1.048–2.165) for the LNM of CRC (Table 2). Association of DPYD polymorphisms with metastasis of CRC patients Abbreviations: CRC, colorectal cancer; DM, distant metastasis; LNM, lymph node metastasis. Bold numbers indicate significant values (p < 0.05). Logistic regression analysis of the relationship between the genotype of DPYD polymorphisms and DM status of CRC was studied. The frequency of DPYD c.1627A>G A/G genotype (45.0%) in the DM group was obviously higher than that (33.2%) in the non‐DM group. It was demonstrated that the A/G genotype of DPYD c.1627A>G might increase the risk of DM in CRC patients (p = 0.023, OR = 1.673, 95% CI = 1.079–2.596). In addition, DPYD c.1627A>G A/G and G/G genotypes in the dominant model (DPYD c.1627A>G A/G + G/G vs. DPYD c.1627A>G A/A) were the significant risk factors (p = 0.039, OR = 1.588, 95% CI = 1.041–2.423) for the DM of CRC (Table 2). 
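The dominant-model odds ratios above are estimated by logistic regression on genotype carrier status. A minimal sketch of how such an odds ratio and its Wald 95% confidence interval can be computed from a 2x2 cross-tabulation is shown below; the counts used are placeholders rather than the study's actual group sizes, so the output will not reproduce Table 2.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a = carriers with metastasis, b = non-carriers with metastasis,
    # c = carriers without metastasis, d = non-carriers without metastasis
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Placeholder counts (illustrative only, not the study's cross-tabulation)
or_, lower, upper = odds_ratio_ci(a=160, b=185, c=70, d=122)
print(f"OR = {or_:.3f}, 95% CI = {lower:.3f}-{upper:.3f}")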
Association of DPYD polymorphisms with clinicopathological parameters in the CRC patients: The association between DPYD c.1627A>G, c.85T>C polymorphisms, and clinicopathological features of CRC patients have been evaluated. The clinical features including gender, age, degree of differentiation of the tumor sample, serum tumor marker levels (carcinoembryonic antigen (CEA), carbohydrate antigen 24–2 (CA24‐2), carbohydrate antigen 19–9 (CA19‐9)), tumor stage, lymph nodes status, and distant metastasis status was collected. There was no relationship between the DPYD c.1627A>G, c.85T>C polymorphisms and gender, degree of differentiation of the tumor sample, serum CA19‐9 level, and tumor stage (T stage) of CRC patients. However, the frequency of DPYD c.1627A>G A/G+G/G genotypes in older patients (>60 years old) was significantly higher than that in the younger patients (≤60 years old) (p = 0.036). The frequency of DPYD c.1627A>G A/G+G/G genotypes in patients with abnormal serum CEA level (>5 ng/ml) and abnormal serum CA24‐2 level (>20 U/ml) was significantly higher than that in the patients with normal serum CEA level (≤5 ng/ml) (p = 0.003) and normal serum CA24‐2 level (≤20 U/ml) (p = 0.015), respectively (Table 3). Association of DPYD polymorphisms with clinicopathological parameters in the CRC patients Abbreviations: CA19‐9, carbohydrate antigen 19–9; CA24‐2, carbohydrate antigen 24–2; CEA, carcinoembryonic antigen. Bold numbers indicate significant values (p < 0.05). DISCUSSION: CRC is one of the common malignant tumors in human digestive tracts. 21 , 22 Metastasis is a biological phenotype of malignant tumors and an important factor affecting the prognosis of malignant tumors. Tumor metastasis is a dynamic process in which multiple factors are involved in multiple stages of development, including the biology of tumor cells and the interaction between tumor and microenvironment. 23 , 24 At present, the research on tumor metastasis mainly focuses on tumor metastasis genes and tumor metastasis suppressor genes, tumor angiogenesis, extracellular matrix, cell adhesion, tumor microenvironment, and so on. 25 , 26 Studies have shown that some gene polymorphisms were associated with the metastasis of cancer. It is a lower risk of LNM in oral cancer patients carrying A/A genotype of the single nucleotide polymorphism (SNP) rs10399805 or rs6691378 in chitinase‐3‐like protein 1 (CHI3L1) gene. 27 Polymorphisms in the promoter regions of matrix metalloproteinase (MMP)1, 3, 7, and 9 genes are associated with metastasis of head/neck and breast cancer. 28 Luminal A and luminal B breast cancer patients with the A/G genotype of C‐C motif chemokine ligand 4 (CCL4) gene SNP rs10491121 were less likely to develop LNM. 29 The SNPs rs1143630, rs1143633, and rs1143643 of interleukin‐1 beta (IL‐1B) gene showed a relationship with LNM of papillary thyroid carcinoma (PTC). 30 SNP rs1989839 C/T genotype of Ras‐association domain family 1 isoform A (RASSF1A) gene increases the risk of lung metastasis of osteosarcoma. 31 Transforming growth factor‐β1 (TGFB1) gene promoter −509C/T polymorphism affected the metastasis of CRC. 32 Granzyme B (GZMB) gene polymorphisms were not associated with the metastasis of CRC. 33 Studies have shown that DPYD gene polymorphisms were associated with the susceptibility to CRC 12 and the toxicity of chemotherapy drugs. 34 However, the relationship between DPYD gene polymorphisms and metastasis of CRC has not been studied. 
DPYD IVS14+1G>A variant was not found in this study, and this result was similar to those reported in other populations, such as Caucasians, African‐Americans, Egyptians, Turks, and Taiwanese. 35 Many studies have reported that CRC patients with DPYD IVS14+1G>A variant might suffer from severe toxicity and even death after the 5‐FU administration. 36 , 37 However, DPYD IVS14+1G>A variant is rare in most populations. In this study, DPYD c.1627A>G, A/A, A/G, and G/G genotypes accounted for 57.7%, 35.6%, and 6.7%, respectively. The result is in line with those of another Chinese population study. 17 DPYD c.85T>C T/T, T/C, and C/C genotypes accounted for 83.6%, 16.0%, and 0.4%, respectively. The result in this study was consistent with that in the previous study. 17 A study of a population of a mixed racial background showed that DPYD c.85T>C T/C and C/C genotypes were 41% and 10%, respectively. 38 The frequencies of DPYD c.85T>C variants in patients were higher than that in this study. In this study, DPYD c.1627A>G A/G and G/G genotypes in the dominant model (A/G + G/G vs. A/A) were significant risk factors for the LNM and DM of CRC. DPD activity is in association with the epithelial‐to‐mesenchymal transition (EMT). EMT is a process during which the epithelial features of cancer cells are lost, the cytoskeletal architecture is re‐organized, the cell shape is changed, and some genes are activated, which leads to increased cell motility and dissemination of tumor to distant metastatic sites. 39 EMT results in decreased adhesion and enhanced migration or invasion. Studies have shown that dihydrothymine and dihydrouracil, the metabolites catabolized by DPD, play an important role in tumor EMT. 40 , 41 DPD is necessary for cells to acquire mesenchymal characteristics in vitro and tumorigenic cells overflow. It is a metabolic process essential associated with the acquisition of metastatic and aggressive cancer cell traits for the EMT. 40 Mechanistically, DPD may act as a regulator of EMT by targeting the p38/NF‐κB/Snail1 pathway. 41 In the present study, the frequency of DPYD c.1627A>G A/G+G/G genotypes in patients with abnormal serum CEA levels was significantly higher than that in patients with normal serum CEA levels. Serum CEA levels can be used as biomarkers for diagnosis, postoperative recurrence, or efficacy monitoring of colorectal cancer. 42 The CEA gene family belongs to the immunoglobulin (Ig) superfamily and codes for a vast number of glycoproteins that differ greatly both in amino acid composition and function. The CEA family is divided into two groups, the carcinoembryonic antigen‐related cell adhesion molecules (CEA‐CAMs) and the pregnancy‐specific glycoproteins. CEA expression on epithelial cells may directly influence tumor development by CEA‐CEA bridges between tumor cells or tumor‐stromal cells. 43 That is to say, DPYD gene mutations may affect the process of EMT by changing the activity of DPD, thus participating in the metastasis of tumor cells. Elevated CEA expression level and DPYD gene mutations may be associated with CRC metastasis. CA24‐2 is a serum tumor marker, which is one of the indicators reflecting the number and activity of tumor cells. 44 A study has shown that the CA24‐2 level was higher in gastric cancer patients with distant metastasis than in patients without distant metastasis. 45 Increased serum CA24‐2 concentrations were significantly associated with the risk of invasiveness of intraductal papillary mucinous neoplasm (IPMN). 
46 CEA, CA19‐9, CA24‐2, and CA72‐4, examined postoperatively during follow‐up, were useful to find early tumor recurrence and metastasis, and evaluate prognosis. 47 Tumorigenesis is dependent on the reprogramming of cellular metabolism. A common feature of metabolism in the cancer cells is the ability to acquire necessary nutrients from a frequently nutrient‐poor environment and utilize these nutrients to both maintain viability and build new biomass. 48 Some studies have shown that Pantothenate and CoA biosynthesis signaling pathway was significantly altered in tumor cells. 49 , 50 , 51 DPD is a key enzyme in the Pantothenate and CoA biosynthesis signaling pathway (https://www.genome.jp/pathway/ko00770+K00207). So, DPYD c.1627A>G A/G+G/G genotypes may affect the activity of DPD, and regulate tumor cells tumorigenesis through signaling pathway regulation in the reprogramming of cellular metabolism, which is manifested as changes in serum tumor markers. Tumor invasion and metastasis is a dynamic and complex process, including multiple simultaneous steps. The persistent emergence of populations of cells with different invasion and metastasis capabilities is a barrier to tumor therapy. 52 In order to prevent the invasion and metastasis of tumor, it is a hot spot of research to design modulatory blocking methods specifically aiming at some key links in tumor invasion and metastasis. 53 With the deepening understanding of the occurrence and mechanism of tumor invasion and metastasis, it can promote the design and search for effective anti‐tumor drugs, provide new ideas for the treatment of tumors, and have a positive significance to reduce the mortality of tumor patients. This is the first study about the relationship of DPYD gene variants/polymorphisms and lymph node metastasis, distant metastasis of CRC. There are some limitations to this study that should be noted. First of all, the number of cases included in this study is not large, which may lead to some deviations in the results. Second, the number of gene polymorphisms included in this study was relatively single. Tumor cell metastasis is affected by tumor metastasis‐related genes and tumor metastasis‐suppressor genes, tumor angiogenesis, extracellular matrix degradation, cell adhesion, tumor microenvironment, and other factors. It may be more meaningful to include some related genes for comprehensive analysis. In addition, a tumor is a kind of multifactorial disease caused by genetic and environmental factors. As a retrospective analysis, the limitations of the original data included in this study constrained assessment of potential gene‐environment interactions. CONCLUSION: DPYD c.1627A>G A/G and G/G genotypes are associated with the increased risk of lymph node metastasis and distant metastasis of CRC. Future studies need to include more relevant genes for analysis and to assess potential gene‐environment interactions. This study may provide a valuable reference for the relationship between gene polymorphism and pathological features and metastasis of CRC. CONFLICT OF INTEREST: The authors declare that they have no competing interests. AUTHORS’ CONTRIBUTIONS: Zhixiong Zhong, Heming Wu, and Juanzi Zeng designed the study. Juanzi Zeng, Qingyan Huang, and Zhikang Yu performed the experiments. Juanzi Zeng and Jiaquan Li collected the clinical data. Heming Wu and Juanzi Zeng analyzed the data. Heming Wu and Juanzi Zeng prepared the manuscript. 
All authors were responsible for critical revisions, and all authors read and approved the final version of this work.
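As a companion to the Hardy–Weinberg equilibrium check described in the methods and results above, the sketch below computes Hardy–Weinberg expected genotype counts and a χ2 statistic from the reported DPYD c.1627A>G counts (310 A/A, 191 A/G, 36 G/G). This is a generic illustration rather than the authors' analysis code; depending on the degrees-of-freedom convention and software used, the statistic and p value may differ from the values reported in the article.

from scipy.stats import chi2

def hwe_chi_square(n_aa, n_ab, n_bb):
    # Chi-square goodness-of-fit against Hardy-Weinberg expectations
    # for biallelic genotype counts (AA, AB, BB).
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # frequency of the A allele
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(stat, df=1)     # 1 df: 3 genotype classes - 1 - 1 estimated allele frequency
    return stat, p_value

# DPYD c.1627A>G genotype counts reported in this cohort
stat, p_value = hwe_chi_square(310, 191, 36)
print(f"chi-square = {stat:.3f}, p = {p_value:.3f}")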
Background: Dihydropyrimidine dehydrogenase (DPD) is the key enzyme catabolizing pyrimidines and may affect tumor progression. DPYD gene mutations affect DPD activity. The relationship between the DPYD IVS14+1G>A, c.1627A>G, and c.85T>C variants and lymph node metastasis (LNM) and distant metastasis (DM) of colorectal cancer (CRC) was investigated. Methods: A total of 537 CRC patients were enrolled in this study. DPYD polymorphisms were analyzed by polymerase chain reaction (PCR)-Sanger sequencing. The relationship between DPYD genotypes and the clinical features of patients and metastasis of CRC was analyzed. Results: For DPYD c.1627A>G, A/A (57.7%) was the most common genotype, followed by the A/G (35.6%) and G/G (6.7%) genotypes. For c.85T>C, the T/T, T/C, and C/C genotypes accounted for 83.6%, 16.0%, and 0.4%, respectively. Logistic regression analysis revealed that DPYD c.1627A>G A/G and G/G genotypes in the dominant model (A/G + G/G vs. A/A) were significant risk factors for the LNM (p = 0.029, OR = 1.506, 95% CI = 1.048-2.165) and DM (p = 0.039, OR = 1.588, 95% CI = 1.041-2.423) of CRC. In addition, the DPYD c.1627A>G polymorphism was more common in patients with an abnormal serum carcinoembryonic antigen (CEA) level (>5 ng/ml) (p = 0.003) or carbohydrate antigen 24-2 (CA24-2) level (>20 U/ml) (p = 0.015). Conclusions: The results suggest that DPYD c.1627A>G A/G and G/G genotypes are associated with an increased risk of LNM and DM of CRC.
INTRODUCTION: With the burden of cancer morbidity and mortality growing rapidly, cancer is a major barrier to increasing life expectancy worldwide. 1 Colorectal cancer (CRC) is one of the most common gastrointestinal malignancies. According to the Global Cancer Statistics in 2020 by the International Agency for Research on Cancer (IARC), CRC is the third most prevalent cancer and the second leading cause of cancer death in the world. 2 In clinical practice, CRC can be treated with endoscopic treatment, surgical resection, chemotherapy drugs, targeted drugs, immunotherapy, and radiation. 3 , 4 The multidisciplinary team (MDT) model has also improved the level of CRC treatment. 5 However, the recurrence and metastasis of CRC are the major problems affecting the survival of patients. Metastasis is the process by which cancer cells spread from the primary lesion to distant organs and is the leading cause of cancer mortality. 6 Metastasis of CRC includes lymph node metastasis (LNM) and distant metastasis (DM). 7 Capecitabine is an oral prodrug of 5‐fluorouracil (5‐FU) and has been approved for the treatment of various malignancies. 8 There have been reports of noticeable individual differences in the curative and toxic effects of 5‐FU. 9 After fluorouracil administration, 5‐FU can be transformed in cells into three cytotoxic metabolites: 5‐fluoro‐2'‐deoxyuridine 5' monophosphate (FdUMP), 5‐fluoro‐2'‐deoxyuridine 5'‐triphosphate (FdUTP), and 5‐fluorouridine 5'‐triphosphate (FUTP). 10 FdUMP inhibits thymidylate synthase (thymine deoxyribonucleotide synthetase), an enzyme necessary for DNA replication and repair, while FdUTP and FUTP disrupt the processing and function of DNA and RNA. 11 Dihydropyrimidine dehydrogenase (DPD) is a rate‐limiting enzyme in the catabolic pathway of fluorouracil. DPD can inactivate up to 85% of 5‐FU into 5,6‐dihydro‐5‐fluorouracil, and this intermediate is further metabolized to β‐alanine or β‐aminoisobutyric acid. 12 These processes increase nucleotide synthesis, which is conducive to DNA synthesis and cell growth. When DPD enzyme activity is decreased, the in vivo clearance rate of fluorouracil decreases, its half‐life is prolonged, and its cytotoxicity is enhanced. 13 DPD enzyme activity is affected by DPYD gene polymorphisms. 14 In addition, DPD is associated with epithelial‐to‐mesenchymal transition (EMT). EMT has been implicated in carcinogenesis and tumor metastasis by enhancing mobility, invasion, and resistance to apoptotic stimuli. 15 DPYD gene polymorphisms may affect the process of EMT by changing the activity of DPD, thus participating in the metastasis of tumor cells. The human DPYD gene is located on chromosome 1p21.3; it is 850 kb in length and encompasses 23 exons. Genetic variations of DPYD lead to changes in DPD enzyme activity, which could result in adverse side effects. The DPYD gene has more than 1700 different genetic variants, and more than 600 are missense variants impacting the DPD protein sequence, according to the GnomAD database (https://gnomad.broadinstitute.org/). To date, the DPYD variants or polymorphisms that have attracted the most attention include DPYD IVS14+1 G>A (rs3918290, DPYD *2A), DPYD c. 1627 A>G (rs1801159, DPYD *5A), and DPYD c. 85 T>C (rs1801265, DPYD *9A). 16 , 17 Studies have shown that the clinical outcome and survival of CRC are associated with gene polymorphisms and gene expression levels. 
18 One study showed that polymorphisms of DPYD have a significant effect on toxicity and clinical outcome in colorectal or gastroesophageal cancer patients receiving capecitabine‐based chemotherapy. 19 Another study showed that the mRNA expression of DPYD is associated with clinicopathological characteristics and may be useful for predicting survival in CRC patients. 20 The relationship between DPYD gene polymorphisms and metastasis of CRC has not been studied. In the present study, the relationship between DPYD gene polymorphisms and the clinical features of CRC patients and the metastasis of CRC (including LNM and DM) was analyzed. It is expected to provide a valuable marker for the prognosis of CRC and a valuable target for the clinical treatment of metastatic CRC. This study may provide a valuable reference for the relationship between gene polymorphisms and the pathological features and metastasis of CRC. CONCLUSION: The DPYD c.1627A>G A/G and G/G genotypes are associated with an increased risk of lymph node metastasis and distant metastasis of CRC. Future studies need to include more relevant genes for analysis and to assess potential gene‐environment interactions. This study may provide a valuable reference for the relationship between gene polymorphisms and the pathological features and metastasis of CRC.
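The dominant-model association reported for this record (A/G + G/G vs. A/A, with odds ratios from logistic regression) can be illustrated with a small analysis sketch. The following is a minimal, hypothetical Python example: the data frame, the column names (c1627_genotype, lnm, g_carrier), and the toy values are assumptions for illustration only, not the authors' actual code or dataset.

```python
# Minimal sketch (not the authors' code): dominant-model coding of DPYD c.1627A>G
# and an unadjusted logistic regression for lymph node metastasis (LNM, 0/1).
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical patient-level data frame with genotype and LNM status.
df = pd.DataFrame({
    "c1627_genotype": ["A/A", "A/G", "G/G", "A/A", "A/G", "A/A"],
    "lnm": [0, 1, 1, 0, 0, 1],
})

# Dominant model: carriers of the G allele (A/G or G/G) vs. A/A homozygotes.
df["g_carrier"] = (df["c1627_genotype"] != "A/A").astype(int)

X = sm.add_constant(df[["g_carrier"]])
fit = sm.Logit(df["lnm"], X).fit(disp=False)

odds_ratio = np.exp(fit.params["g_carrier"])
ci_low, ci_high = np.exp(fit.conf_int().loc["g_carrier"])
print(f"OR = {odds_ratio:.3f}, 95% CI = {ci_low:.3f}-{ci_high:.3f}")
```

In the study itself, the reported estimates came from logistic regression on 537 patients; the toy data above only show how the dominant-model contrast (A/G + G/G vs. A/A) is typically coded before estimating an odds ratio.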
Background: Dihydropyrimidine dehydrogenase (DPD) is the key enzyme catabolizing pyrimidines and may affect tumor progression. DPYD gene mutations affect DPD activity. The relationship between DPYD IVS14+1G>A, c.1627A>G, and c.85T>C and lymph node metastasis (LNM) and distant metastasis (DM) of colorectal cancer (CRC) was investigated. Methods: A total of 537 CRC patients were enrolled in this study. DPYD polymorphisms were analyzed by polymerase chain reaction (PCR)-Sanger sequencing. The relationship between DPYD genotypes, the clinical features of patients, and the metastasis of CRC was analyzed. Results: For DPYD c.1627A>G, A/A (57.7%) was the most common genotype, followed by the A/G (35.6%) and G/G (6.7%) genotypes. For c.85T>C, the T/T, T/C, and C/C genotypes accounted for 83.6%, 16.0%, and 0.4%, respectively. Logistic regression analysis revealed that the DPYD c.1627A>G A/G and G/G genotypes in the dominant model (A/G + G/G vs. A/A) were significant risk factors for LNM (p = 0.029, OR 1.506, 95% CI = 1.048-2.165) and DM (p = 0.039, OR 1.588, 95% CI = 1.041-2.423) of CRC. In addition, the DPYD c.1627A>G polymorphism was more common in patients with an abnormal serum carcinoembryonic antigen (CEA) (>5 ng/ml) (p = 0.003) or carbohydrate antigen 24-2 (CA24-2) (>20 U/ml) level (p = 0.015). Conclusions: The results suggest that the DPYD c.1627A>G A/G and G/G genotypes are associated with an increased risk of LNM and DM of CRC.
7,567
371
[ 805, 122, 177, 105, 259, 261, 401, 307, 75 ]
14
[ "dpyd", "patients", "crc", "tumor", "metastasis", "1627a", "dpyd 1627a", "study", "polymorphisms", "crc patients" ]
[ "cancer mortality metastasis", "metastasis crc evaluated", "worldwide colorectal cancer", "treatment metastatic crc", "cancer crc" ]
null
[CONTENT] colorectal cancer | dihydropyrimidine dehydrogenase | distant metastasis | DPYD | lymph node metastasis [SUMMARY]
null
[CONTENT] colorectal cancer | dihydropyrimidine dehydrogenase | distant metastasis | DPYD | lymph node metastasis [SUMMARY]
[CONTENT] colorectal cancer | dihydropyrimidine dehydrogenase | distant metastasis | DPYD | lymph node metastasis [SUMMARY]
[CONTENT] colorectal cancer | dihydropyrimidine dehydrogenase | distant metastasis | DPYD | lymph node metastasis [SUMMARY]
[CONTENT] colorectal cancer | dihydropyrimidine dehydrogenase | distant metastasis | DPYD | lymph node metastasis [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Colorectal Neoplasms | Dihydrouracil Dehydrogenase (NADP) | Female | Genetic Predisposition to Disease | Humans | Lymphatic Metastasis | Male | Middle Aged | Polymorphism, Single Nucleotide | Risk Factors [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Colorectal Neoplasms | Dihydrouracil Dehydrogenase (NADP) | Female | Genetic Predisposition to Disease | Humans | Lymphatic Metastasis | Male | Middle Aged | Polymorphism, Single Nucleotide | Risk Factors [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Colorectal Neoplasms | Dihydrouracil Dehydrogenase (NADP) | Female | Genetic Predisposition to Disease | Humans | Lymphatic Metastasis | Male | Middle Aged | Polymorphism, Single Nucleotide | Risk Factors [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Colorectal Neoplasms | Dihydrouracil Dehydrogenase (NADP) | Female | Genetic Predisposition to Disease | Humans | Lymphatic Metastasis | Male | Middle Aged | Polymorphism, Single Nucleotide | Risk Factors [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Colorectal Neoplasms | Dihydrouracil Dehydrogenase (NADP) | Female | Genetic Predisposition to Disease | Humans | Lymphatic Metastasis | Male | Middle Aged | Polymorphism, Single Nucleotide | Risk Factors [SUMMARY]
[CONTENT] cancer mortality metastasis | metastasis crc evaluated | worldwide colorectal cancer | treatment metastatic crc | cancer crc [SUMMARY]
null
[CONTENT] cancer mortality metastasis | metastasis crc evaluated | worldwide colorectal cancer | treatment metastatic crc | cancer crc [SUMMARY]
[CONTENT] cancer mortality metastasis | metastasis crc evaluated | worldwide colorectal cancer | treatment metastatic crc | cancer crc [SUMMARY]
[CONTENT] cancer mortality metastasis | metastasis crc evaluated | worldwide colorectal cancer | treatment metastatic crc | cancer crc [SUMMARY]
[CONTENT] cancer mortality metastasis | metastasis crc evaluated | worldwide colorectal cancer | treatment metastatic crc | cancer crc [SUMMARY]
[CONTENT] dpyd | patients | crc | tumor | metastasis | 1627a | dpyd 1627a | study | polymorphisms | crc patients [SUMMARY]
null
[CONTENT] dpyd | patients | crc | tumor | metastasis | 1627a | dpyd 1627a | study | polymorphisms | crc patients [SUMMARY]
[CONTENT] dpyd | patients | crc | tumor | metastasis | 1627a | dpyd 1627a | study | polymorphisms | crc patients [SUMMARY]
[CONTENT] dpyd | patients | crc | tumor | metastasis | 1627a | dpyd 1627a | study | polymorphisms | crc patients [SUMMARY]
[CONTENT] dpyd | patients | crc | tumor | metastasis | 1627a | dpyd 1627a | study | polymorphisms | crc patients [SUMMARY]
[CONTENT] dpyd | dpd | gene | cancer | crc | metastasis | fluorouracil | dpyd gene | treatment | enzyme [SUMMARY]
null
[CONTENT] dpyd | 1627a | dpyd 1627a | cases | patients | genotype | crc | stage | years | antigen [SUMMARY]
[CONTENT] metastasis | gene | metastasis crc | crc future studies need | increased risk | increased risk lymph | increased risk lymph node | assess | crc future | genotypes associated [SUMMARY]
[CONTENT] dpyd | patients | crc | tumor | dpyd 1627a | 1627a | metastasis | cases | study | genotype [SUMMARY]
[CONTENT] dpyd | patients | crc | tumor | dpyd 1627a | 1627a | metastasis | cases | study | genotype [SUMMARY]
[CONTENT] Dihydropyrimidine ||| DPD ||| DPYD | CRC [SUMMARY]
null
[CONTENT] DPYD | 57.7% | 35.6% | G/G | 6.7% ||| 83.6% | 16.0% | 0.4% ||| DPYD | LNM | 0.029 | 1.506 | 95% | CI | 1.048 | DM | 0.039 | 1.588 | 95% | CI | 1.041 | CRC ||| DPYD | CEA | 5 ng/ml | 0.003 | 24 | 20 | 0.015 [SUMMARY]
[CONTENT] DPYD | LNM | DM of CRC [SUMMARY]
[CONTENT] Dihydropyrimidine ||| DPD ||| DPYD | CRC ||| 537 | CRC ||| ||| DPYD | CRC ||| 57.7% | 35.6% | G/G | 6.7% ||| 83.6% | 16.0% | 0.4% ||| DPYD | LNM | 0.029 | 1.506 | 95% | CI | 1.048 | DM | 0.039 | 1.588 | 95% | CI | 1.041 | CRC ||| DPYD | CEA | 5 ng/ml | 0.003 | 24 | 20 | 0.015 ||| DPYD | LNM | DM of CRC [SUMMARY]
[CONTENT] Dihydropyrimidine ||| DPD ||| DPYD | CRC ||| 537 | CRC ||| ||| DPYD | CRC ||| 57.7% | 35.6% | G/G | 6.7% ||| 83.6% | 16.0% | 0.4% ||| DPYD | LNM | 0.029 | 1.506 | 95% | CI | 1.048 | DM | 0.039 | 1.588 | 95% | CI | 1.041 | CRC ||| DPYD | CEA | 5 ng/ml | 0.003 | 24 | 20 | 0.015 ||| DPYD | LNM | DM of CRC [SUMMARY]
Uptake of intermittent preventive treatment of malaria in pregnancy using sulfadoxine-pyrimethamine (IPTp-SP) in Uganda: a national survey.
36207727
In spite of the missed opportunities of sulfadoxine-pyrimethamine (IPTp-SP) in Uganda, scanty literature exists on malaria in pregnancy. To date, no empirical national study utilizing the 2018-19 Uganda Malaria Indicator Survey has explored predictors of attaining three or more doses of IPTp-SP in the country. This study investigated the factors affecting uptake of three or more IPTp-SP doses as recommended by the World Health Organization.
BACKGROUND
Data from the 2018-2019 Uganda Malaria Indicator Survey (2018-19 UMIS) was analysed. Adequate uptake of intermittent preventive therapy with IPTp-SP was the dependent variable for this study. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables. A three-level multilevel logistic regression was fitted. The Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models.
METHODS
Less than half of the surveyed women had three or more IPTp-SP doses during their last pregnancies (45.3%). Women aged 15-19 had lower odds of receiving at least three IPTp-SP doses compared to those aged 45-49 [aOR = 0.42, Crl = 0.33-0.98]. Poor women [aOR = 0.80, Crl = 0.78-0.91] were less likely to have three or more doses of IPTp-SP relative to rich women. The most disadvantaged regions were associated with a lower likelihood of three or more IPTp-SP doses [aOR = 0.59, CI = 0.48-0.78] compared to the least disadvantaged regions. The variation in uptake of three or more IPTp-SP doses was greater at the community level [σ² = 1.86; Crl = 1.12-2.18] than at the regional level [σ² = 1.13; Crl = 1.06-1.20]. About 18% and 47% of the disparity in IPTp-SP uptake were linked to region- and community-level factors, respectively.
RESULTS
IPTp-SP interventions need to reflect broader community- and region-level factors in order to reduce the high malaria prevalence in Uganda. Contextually responsive behavioural change communication interventions are required to motivate women to achieve the recommended dosage.
CONCLUSION
[ "Antimalarials", "Bayes Theorem", "Dacarbazine", "Drug Combinations", "Female", "Humans", "Malaria", "Patient Acceptance of Health Care", "Pregnancy", "Pregnancy Complications, Parasitic", "Prenatal Care", "Pyrimethamine", "Sulfadoxine", "Uganda" ]
9547429
Background
Malaria infection in pregnancy (MiP) is acknowledged as a weighty public health challenge and poses ample dangers to the pregnant woman and her fetus [1–3]. The symptoms and complications of MiP fluctuate with respect to the intensity of transmission within a defined geographical area as well as a woman’s level of acquired immunity [1, 2]. Nineteen countries within sub-Saharan Africa (SSA), including Uganda, and one Asian country account for 85% of the global malaria burden [1]. In 2018 alone, about US$ 2.7 billion was invested in malaria control and elimination globally, and three-quarters of this was directed to the World Health Organization (WHO) African Region. In spite of this, malaria continues to take a heavy toll on government and household expenses in Uganda [4]. Pregnant women have increased susceptibility to malaria and its associated complications such as maternal anaemia, stillbirth, low birth weight, and, in worse scenarios, infant mortality and morbidity [5, 6]. MiP could be an impediment to the realization of target 3.1 of the Sustainable Development Goals, namely reducing the maternal mortality ratio to less than 70 deaths per 100,000 live births [7]. In order to shield women in moderate-to-high malaria transmission areas in Africa and their newborns from the adverse implications of MiP and its associated imminent problems, the WHO in 2012 revised its anti-malaria policy and recommended that all pregnant women within such regions should receive at least three doses of intermittent preventive treatment in pregnancy with the antimalarial drug sulfadoxine-pyrimethamine (IPTp-SP) [8]. This recommendation was informed by the stagnating IPTp coverage rates and new evidence that reinforced the need for three doses or more [9]. Further, in 2016, the WHO developed new antenatal care (ANC) guidelines, endorsing an increase in the number of ANC contacts between pregnant women and healthcare providers to at least eight as a strategy to enhance prospects of IPTp-SP uptake [2]. IPTp-SP is to be taken by all pregnant women in moderate-to-high malaria transmission areas and should commence as early as possible within the second trimester. The doses are to be administered at least three times, with at least a one-month interval, until childbirth. Generally, IPTp-SP reduces episodes of MiP, neonatal mortality, low birth weight, and placental parasitaemia [2]. Empirical evidence indicates that IPTp contributes to 29%, 38% and 31% reductions in the incidence of low birth weight, severe malaria anaemia and neonatal mortality, respectively [10, 11]. In 2018, across the 36 African countries that reported IPTp-SP coverage rates, about 31% of eligible women of reproductive age received the recommended doses, an increase relative to the rates reported in 2017 (22%) and 2010 (2%). In the case of Uganda, where malaria is endemic in 95% of the country [4], the 2019 World Malaria Report indicated that 30% or fewer of the eligible women had three or more IPTp-SP doses [1]. Resistance of the parasite to SP has been noted; however, IPTp still remains a very cost-effective and promising lifesaving intervention [12, 13]. In spite of the missed IPTp-SP opportunities in Uganda in the wake of high MiP [1, 14, 15], scanty literature exists on MiP in Uganda. The few existing studies have either been limited to some regions of the country [16–20], used relatively old national data [21], or assessed the impact of intermittent preventive treatment during pregnancy [22], among others. 
To date, no empirical national study utilizing the 2018-19 Uganda Malaria Indicator Survey has explored predictors of attaining three or more doses of IPTp-SP in the country. With the aim of invigorating a critical evidence-based discussion on MiP prevention and offering empirical evidence to guide MiP policies, this study investigated the rate of uptake and the factors affecting uptake of three or more IPTp-SP doses as recommended by the WHO.
null
null
Results
Descriptive findings As shown in Table 1, less than half of the surveyed women had three or more IPTp-SP doses during their last pregnancies (45.3%). All the explanatory variables showed significant associations at the 95% level of significance. Nearly half of women aged 15–19 (49.9%) and of the highly educated women (47.1%) received at least three doses. A greater section of rich women had at least three IPTp-SP doses (46.6%), and 50.1% of women who knew that mosquito bites cause malaria did the same. A significant proportion of women who knew that sleeping under an ITN prevents malaria (46.7%) and of those who did not know that destroying mosquito breeding sites prevents malaria (46.0%) received three or more doses of IPTp-SP. Receiving at least three doses of IPTp-SP was most pronounced in refugee settlements (50.9%), the Western zone (47.3%), the most disadvantaged communities (47.5%) and moderately disadvantaged regions (45.8%). Fixed effects Table 2 presents the findings of the fixed effects. The complete and final model (Model V) indicates that women aged 15–19 had lower odds of receiving at least three IPTp-SP doses compared to those aged 45–49 [aOR = 0.42, Crl = 0.33–0.98]. Those who had no formal education were less likely to achieve the minimum recommended doses [aOR = 0.51, Crl = 0.35–0.81]. Similarly, poor women [aOR = 0.80, Crl = 0.78–0.91] and women who knew that mosquito bites can cause malaria [aOR = 0.84, Crl = 0.73–0.96] were less likely to have three or more doses of IPTp-SP relative to rich women and to women who did not know that mosquito bites can cause malaria, respectively. Women who reported that sleeping under an ITN prevents malaria had higher odds of three or more IPTp-SP doses [aOR = 1.22, Crl = 1.04–1.43] relative to those who reported otherwise. The findings further revealed that urban residents were less likely to have at least three doses [aOR = 0.60, CI = 0.46–0.82], as were the most disadvantaged communities [aOR = 0.67, CI = 0.50–0.86], relative to rural residents and the least disadvantaged communities, respectively. Similarly, the most disadvantaged regions were associated with a lower likelihood of three or more IPTp-SP doses [aOR = 0.59, CI = 0.48–0.78] compared to the least disadvantaged regions. 
Table 2Individual, community and region-level predictors of IPTp-SP utilizationModel IModel IIModel IIIModel IVModel V aOR [95% Crl] aOR [95% Crl] aOR [95% Crl] aOR [95% Crl] Fixed effects Individual level   Age   15–190.22** [0.11–0.81]0.42**[0.33–0.98]   20–240.43*[0.24–0.98]0.38*[0.25–0.79]   25–290.99[0.62–1.62]0.81[0.53–1.22]   30–341.01[0.62–1.64]0.81[0.53–1.24]   35–390.94[0.56–1.55]0.76[0.48–1.18]   40–440.99[0.58–1.71]0.81[0.50–1.30]   45–491[1]1[1]   Education   No education0.33**[0.17–0.77]0.51**[0.35–0.81]   Primary0.53*[0.44–0.82]0.62*[0.51–0.91]   Secondary1.09[0.74–1.60]1.06[0.75–1.50]   Higher1[1]1[1]   Wealth quintile   Poor0.81*[0.79–0.94]0.80**[0.78–0.91]   Middle0.89[0.75–1.06]0.88[0.74–1.05]   Rich1[1]1[1]   Mosquito bite causes malaria   No0.85*[0.74–0.97]0.84*[0.73–0.96]   Yes1[1]1[1]   Sleeping under ITN prevents malaria   No1[1]1[1]   Yes1.23**[1.06–1.43]1.22*[1.04–1.43]   Destroying mosquito breeding site prevents malaria   No1.16[0.97–1.39]1.165[0.96–1.41]   Yes1[1]1[1]  Community level factors   Residential status   Urban0.69***[0.31–0.88]0.60***[0.46–0.82]   Rural0.78*[0.68–0.96]0.80[0.61–1.05]   Refugee settlement1[1]1[1]   Zone   Southern1[1]1[1]   Eastern1.05[0.74–1.49]1.07[0.71–1.61]   Northern1.08[0.73–1.61]1.12[0.68–1.83]   Western1.19[0.84–1.68]1.22[0.81–1.84]   Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.87[0.72–1.05]0.91[0.73–1.12]   Tertile 3(most disadvantaged)0.71*[0.65–0.92]0.67**[0.50–0.86]  Region level factor   Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.71**[0.51–0.89]0.68***[0.54–0.82]   Tertile 3(most disadvantaged)0.55**[0.39–0.78]0.59***[0.48–0.78] Random effects  Region level  Variance (SE)1.13[1.06–1.20]1.14[1.04–1.21]1.12[1.05–1.20]1.15[1.07–1.21]1.14[1.06–1.20]  ICC (%)18.00[15.01–22.90]17.72[15.20-19.08]16.33[15.21-9.00]17.10[16.91–18.70]18.22[17.90-19.02]  MOR2.76[2.03–3.42]2.77[2.65–2.86]2.74[2.66–2.84]2.78[2.68–2.86]2.89[2.33–3.51]  Explained variation[1]35.72[29.06–41.20]31.03[27.61–37.31]33.20[27.31–37.28]30.82[26.97–37.91]  Community level  Variance (SE)1.86[1.12–2.18]2.00[1.71–2.31]1.98[1.14–2.20]1.59[1.22–1.98]1.99[1.42–2.36]  ICC (%)47.60[39.8–50.7]49.80[37.98–53.70]48.50[36.90-59.08]48.40[37.00-51.20]49.22[38.99–52.51]  MOR3.67[2.74–4.09]3.85[3.48–4.26]3.83[2.77–4.12]3.33[2.87–3.83]3.84[3.12–4.33]  Explained variation[1]52.00[46.30–59.80]48.99[41.40-55.81]47.89[38.21–52.55]48.70[43.00–52.00]  Model fit statistics  Bayesian DIC58396002599860106012    N  Region level1515151515  Community level340340340340340  Individual42544254425442544254 * p < 0.05, ** p < 0.01, *** p < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference Individual, community and region-level predictors of IPTp-SP utilization * p < 0.05, ** p < 0.01, *** p < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference Table 2 presents the findings of the fixed effects. The complete and final model (Model V) indicates that women aged 15–19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45–49 [aOR = 0.42, Crl = 0.33–0.98]. Those who had no formal education were less likely to achieve the minimum recommended doses [aOR = 0.51, Crl = 0.35–0.81]. 
Similarly, poor women [aOR = 0.80, Crl = 0.78–0.91] and women who knew that mosquito bite can cause malaria [aOR = 0.84, Crl = 0.73–0.96] were less likely to have three or more doses of IPTp-SP relative to the rich women and women who did not know that mosquito bite can cause malaria respectively. Women who reported that sleeping under ITN prevents malaria had higher odds of three or more IPTp-SP doses [aOR = 1.22, Crl = 1.04–1.43] relative to those who reported otherwise. The findings further revealed that urban residents were less probable to have at least three doses [aOR = 0.60, CI = 0.46–0.82] as well as most disadvantaged communities [aOR = 0.67, CI = 0.50–0.86] relative to rural residents and least disadvantaged communities correspondingly. Similarly, most disadvantaged regions were aligned with less likelihood of three or more IPTp-SP uptake [aOR = 0.59, CI = 0.48–0.78] compared to the least disadvantaged regions. Table 2Individual, community and region-level predictors of IPTp-SP utilizationModel IModel IIModel IIIModel IVModel V aOR [95% Crl] aOR [95% Crl] aOR [95% Crl] aOR [95% Crl] Fixed effects Individual level   Age   15–190.22** [0.11–0.81]0.42**[0.33–0.98]   20–240.43*[0.24–0.98]0.38*[0.25–0.79]   25–290.99[0.62–1.62]0.81[0.53–1.22]   30–341.01[0.62–1.64]0.81[0.53–1.24]   35–390.94[0.56–1.55]0.76[0.48–1.18]   40–440.99[0.58–1.71]0.81[0.50–1.30]   45–491[1]1[1]   Education   No education0.33**[0.17–0.77]0.51**[0.35–0.81]   Primary0.53*[0.44–0.82]0.62*[0.51–0.91]   Secondary1.09[0.74–1.60]1.06[0.75–1.50]   Higher1[1]1[1]   Wealth quintile   Poor0.81*[0.79–0.94]0.80**[0.78–0.91]   Middle0.89[0.75–1.06]0.88[0.74–1.05]   Rich1[1]1[1]   Mosquito bite causes malaria   No0.85*[0.74–0.97]0.84*[0.73–0.96]   Yes1[1]1[1]   Sleeping under ITN prevents malaria   No1[1]1[1]   Yes1.23**[1.06–1.43]1.22*[1.04–1.43]   Destroying mosquito breeding site prevents malaria   No1.16[0.97–1.39]1.165[0.96–1.41]   Yes1[1]1[1]  Community level factors   Residential status   Urban0.69***[0.31–0.88]0.60***[0.46–0.82]   Rural0.78*[0.68–0.96]0.80[0.61–1.05]   Refugee settlement1[1]1[1]   Zone   Southern1[1]1[1]   Eastern1.05[0.74–1.49]1.07[0.71–1.61]   Northern1.08[0.73–1.61]1.12[0.68–1.83]   Western1.19[0.84–1.68]1.22[0.81–1.84]   Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.87[0.72–1.05]0.91[0.73–1.12]   Tertile 3(most disadvantaged)0.71*[0.65–0.92]0.67**[0.50–0.86]  Region level factor   Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.71**[0.51–0.89]0.68***[0.54–0.82]   Tertile 3(most disadvantaged)0.55**[0.39–0.78]0.59***[0.48–0.78] Random effects  Region level  Variance (SE)1.13[1.06–1.20]1.14[1.04–1.21]1.12[1.05–1.20]1.15[1.07–1.21]1.14[1.06–1.20]  ICC (%)18.00[15.01–22.90]17.72[15.20-19.08]16.33[15.21-9.00]17.10[16.91–18.70]18.22[17.90-19.02]  MOR2.76[2.03–3.42]2.77[2.65–2.86]2.74[2.66–2.84]2.78[2.68–2.86]2.89[2.33–3.51]  Explained variation[1]35.72[29.06–41.20]31.03[27.61–37.31]33.20[27.31–37.28]30.82[26.97–37.91]  Community level  Variance (SE)1.86[1.12–2.18]2.00[1.71–2.31]1.98[1.14–2.20]1.59[1.22–1.98]1.99[1.42–2.36]  ICC (%)47.60[39.8–50.7]49.80[37.98–53.70]48.50[36.90-59.08]48.40[37.00-51.20]49.22[38.99–52.51]  MOR3.67[2.74–4.09]3.85[3.48–4.26]3.83[2.77–4.12]3.33[2.87–3.83]3.84[3.12–4.33]  Explained variation[1]52.00[46.30–59.80]48.99[41.40-55.81]47.89[38.21–52.55]48.70[43.00–52.00]  Model fit statistics  Bayesian DIC58396002599860106012    N  Region level1515151515  Community level340340340340340  
Individual42544254425442544254 * p < 0.05, ** p < 0.01, *** p < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference Individual, community and region-level predictors of IPTp-SP utilization * p < 0.05, ** p < 0.01, *** p < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference Random effects Outcomes of the random effects were also reported in Table 2. The empty model (Model I) revealed that the variation in uptake of three or more IPTp-SP doses was greater at the community level [σ² = 1.86; Crl = 1.12–2.18] than at the regional level [σ² = 1.13; Crl = 1.06–1.20]. Also, the ICC of Model I indicated that 18% and 47% of the disparity in IPTp-SP uptake were linked to region- and community-level factors, respectively. According to the final model (Model V), the MORs indicate that if a woman moved to a community with a higher likelihood of three or more doses of IPTp-SP, her odds of achieving the recommended doses (i.e. three or more) would, in the median case, be 3.83-fold higher. Similarly, moving to a region with a higher likelihood of three or more IPTp-SP doses is associated with a 2.74-fold increase in the odds of having three or more IPTp-SP doses.
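The intraclass correlation coefficients and median odds ratios quoted above follow the standard latent-variable formulas for multilevel logistic models, with the individual-level variance fixed at π²/3 and MOR = exp(√(2σ²)·Φ⁻¹(0.75)). The sketch below is a minimal Python illustration using only the Model I variances reported in Table 2; it is not the authors' MLwiN/Stata code.

```python
# Minimal sketch: ICC and MOR for a three-level logistic model, computed from
# the region- and community-level variances reported in the empty model (Model I).
import math
from scipy.stats import norm

var_region = 1.13      # region-level random-intercept variance (Model I)
var_community = 1.86   # community-level random-intercept variance (Model I)
var_individual = math.pi ** 2 / 3  # latent individual-level variance for logit models

total = var_region + var_community + var_individual

icc_region = var_region / total                       # share attributable to regions
icc_community = (var_region + var_community) / total  # share attributable to communities (within regions)

def median_odds_ratio(variance: float) -> float:
    """MOR = exp(sqrt(2 * variance) * z_0.75), with z_0.75 = Phi^{-1}(0.75) ~ 0.6745."""
    return math.exp(math.sqrt(2 * variance) * norm.ppf(0.75))

print(f"ICC (region): {icc_region:.1%}")                      # ~18%
print(f"ICC (region + community): {icc_community:.1%}")       # ~47.6%
print(f"MOR (region): {median_odds_ratio(var_region):.2f}")   # ~2.76
print(f"MOR (community): {median_odds_ratio(var_community):.2f}")  # ~3.67
```

Running this reproduces the Model I values in Table 2 (ICCs of roughly 18% and 47.6%, MORs of roughly 2.76 and 3.67), which is consistent with the interpretation given in the text.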
Conclusion
The study revealed that less than half of Ugandan women achieved the recommended IPTp-SP dosage at their last pregnancy preceding the 2018-19 UMIS. Community- and region-level factors are significant predictors of optimal IPTp-SP uptake. All existing IPTp-SP interventions that focus on individual-level factors alone need to be reviewed to reflect broader community- and region-level factors in order to reduce the high malaria prevalence in the country. In particular, augmenting IPTp-SP uptake in the most disadvantaged communities would require close scrutiny of suitable approaches to ensure access by removing all barriers. Also, contextually responsive behavioural change communication interventions could motivate women to achieve the recommended dosage.
[ "Background", "Methods", "Data description", "Measurement of variables", "Dependent variable", "Independent variables", "Statistical analyses", "Model fit and specifications", "Ethics approval", "Descriptive findings", "Fixed effects", "Random effects", "Strengths and limitations" ]
[ "Malaria infection in pregnancy (MiP) is acknowledged as a weighty public health challenge and have ample dangers for the pregnant woman and her fetus [1–3]. The symptoms and complications of MiP fluctuate with respect to intensity of transmission within a defined geographical area as well as a woman’s level of acquired immunity [1, 2]. Nineteen countries within sub-Saharan Africa (SSA), with Uganda inclusive, and one Asian country account for 85% of the global malaria burden [1]. In 2018 alone, about US$ 2.7 billion investment was made into malaria control and elimination globally and three-quarters of this was directed to the World Health Organization (WHO) African Region. In spite of this, malaria continue to take a heavy toll on government and household expenses in Uganda [4]. Pregnant women have increased susceptibility to malaria and its associated complications such as maternal anaemia, stillbirth, low birth weight, and in worse scenarios, infant mortality and morbidity [5, 6]. MiP could be an impediment to the realization of targets 3.1 of the Sustainable Development Goals, thus reducing maternal mortality ratio to less than 70 deaths per 100,000 live births [7].\nIn order to shield women in moderate to high malaria transmission areas in Africa and their newborns from the adverse implications of MiP and its associated imminent problems, the WHO in 2012 revised its anti-malaria policy and recommended that all pregnant women within such regions should receive at least three doses of intermittent preventive treatment in pregnancy with antimalarial drug sulfadoxine-pyrimethamine (IPTp-SP) [8]. This recommendation was informed by the stagnated IPTp coverage rates and new evidence that reinforced the need for three doses or more [9]. Further, in 2016, the WHO developed new antenatal care (ANC) guidelines by indorsing an increase in the number of ANC to at least eight contacts between pregnant women and healthcare providers as a strategy to enhance prospects of IPTp-SP uptake [2].\nIPTp-SP is to be taken by all pregnant women in moderate to high malaria transmission areas and should commence as early as possible within the second trimester. The doses are to be administered at least three times with at least one month interval until childbirth. Generally IPTp-SP declines episodes of MiP, neonatal mortality, low birth weight, and placental parasitaemia [2]. Empirical evidence indicate that IPTp contributes to 29%, 38% and 31% reduction in the incidence of low birth weight, severe malaria anaemia and neonatal mortality respectively [10, 11]. In 2018, out of the 36 African countries that recounted IPTp-SP coverage rates, about 31% of eligible women in the reproductive age received the recommended doses and this signified an increase relative to the rate reported in 2017 (22%) and 2010 (2%). In the case of Uganda, where malaria is endemic in 95% of the country [4], the 2019 World Malaria Report indicated that 30% or lower of the eligible women had three or more IPTp-SP doses [1]. Resistance of the parasite to SP has been noted, however, IPTp still remains a very cost-effective and a promising lifesaving intervention [12, 13].\nIn spite of the missed IPTp-SP opportunities in Uganda in the wake of high MiP [1, 14, 15], scanty literature exist on MiP in Uganda. The few studies have either been limited to some regions of the country [16–20], used relatively old national data [21], assessed the impact of intermittent preventive treatment during pregnancy [22] and among others. 
To date, empirical national study utilizing the 2018-19 Uganda Malaria Indicator Survey that explore predictors of attaining three or more doses of IPTp-SP in the country is non-existent. With the aim of invigorating a critical evidence-based discussion on MiP prevention, and offering empirical evidence to guide MiP policies, this study investigated the rate of uptake and factors affecting uptake of three or more IPTp-SP doses as recommended by the WHO.", "Data description Data from the 2018–2019 Malaria Indicator Survey (2018-19 UMIS) of Uganda was analysed. This is a cross-sectional survey that is executed by the Uganda Bureau of Statistics (UBOS) and the National Malaria Control Division (NMCD), however, technical assistance was granted by the Inner City Fund (ICF) [23]. The survey sampled participants through a two-stage sampling design with the intent of achieving estimation of three essential indicators, thus rural-urban locations, all fifteen administrative regions and national coverage. The sampling commenced with selection of clusters from refugee and non-refugee sample frames [23] from the enumeration areas delineated for the 2014 National Population and Housing Census (NPHC). In all, 320 clusters were selected from non-refugee sample frame (236 and 84 from rural and urban settlements respectively). Urban settlements were oversampled to obtain unbiased estimations for rural and urban settlements. The same procedure was followed to select 22 clusters from the refugee frame. The next sampling phase involved the systematic selection of households and 28 households per cluster were selected resulting in a total of 8,878. Eligible women were those aged 15–49 and were either permanent residents or visitors who joined the household the night before the survey. In all, 8,231 women were successfully interviewed signifying 98% and 99% response rates for non-refugee and refugee settlements respectively. In the present study, however, 4,254 women were eligible for inclusion based on completeness of data.\nData from the 2018–2019 Malaria Indicator Survey (2018-19 UMIS) of Uganda was analysed. This is a cross-sectional survey that is executed by the Uganda Bureau of Statistics (UBOS) and the National Malaria Control Division (NMCD), however, technical assistance was granted by the Inner City Fund (ICF) [23]. The survey sampled participants through a two-stage sampling design with the intent of achieving estimation of three essential indicators, thus rural-urban locations, all fifteen administrative regions and national coverage. The sampling commenced with selection of clusters from refugee and non-refugee sample frames [23] from the enumeration areas delineated for the 2014 National Population and Housing Census (NPHC). In all, 320 clusters were selected from non-refugee sample frame (236 and 84 from rural and urban settlements respectively). Urban settlements were oversampled to obtain unbiased estimations for rural and urban settlements. The same procedure was followed to select 22 clusters from the refugee frame. The next sampling phase involved the systematic selection of households and 28 households per cluster were selected resulting in a total of 8,878. Eligible women were those aged 15–49 and were either permanent residents or visitors who joined the household the night before the survey. In all, 8,231 women were successfully interviewed signifying 98% and 99% response rates for non-refugee and refugee settlements respectively. 
In the present study, however, 4,254 women were eligible for inclusion based on completeness of data.", "Data from the 2018–2019 Malaria Indicator Survey (2018-19 UMIS) of Uganda was analysed. This is a cross-sectional survey that is executed by the Uganda Bureau of Statistics (UBOS) and the National Malaria Control Division (NMCD), however, technical assistance was granted by the Inner City Fund (ICF) [23]. The survey sampled participants through a two-stage sampling design with the intent of achieving estimation of three essential indicators, thus rural-urban locations, all fifteen administrative regions and national coverage. The sampling commenced with selection of clusters from refugee and non-refugee sample frames [23] from the enumeration areas delineated for the 2014 National Population and Housing Census (NPHC). In all, 320 clusters were selected from non-refugee sample frame (236 and 84 from rural and urban settlements respectively). Urban settlements were oversampled to obtain unbiased estimations for rural and urban settlements. The same procedure was followed to select 22 clusters from the refugee frame. The next sampling phase involved the systematic selection of households and 28 households per cluster were selected resulting in a total of 8,878. Eligible women were those aged 15–49 and were either permanent residents or visitors who joined the household the night before the survey. In all, 8,231 women were successfully interviewed signifying 98% and 99% response rates for non-refugee and refugee settlements respectively. In the present study, however, 4,254 women were eligible for inclusion based on completeness of data.", "Dependent variable Adequate uptake of intermittent preventive therapy using IPTp-SP was the dependent variable for this study. During the 2018-19 UMIS, women were asked if they took IPTp-SP (Fansidar™), and the number of times this was taken during their last pregnancy. The current recommendation by the WHO endorses at least three doses for all pregnant women in locations that have moderate to high malaria transmission of which Africa and for that matter Uganda is inclusive [8, 24]. Following this recommendation, all women who revealed that they had at least three doses of IPTp-SP were categorized as having adequate IPTp-SP (coded 1) whilst those who had less than three were categorized otherwise (coded 0).\nAdequate uptake of intermittent preventive therapy using IPTp-SP was the dependent variable for this study. During the 2018-19 UMIS, women were asked if they took IPTp-SP (Fansidar™), and the number of times this was taken during their last pregnancy. The current recommendation by the WHO endorses at least three doses for all pregnant women in locations that have moderate to high malaria transmission of which Africa and for that matter Uganda is inclusive [8, 24]. Following this recommendation, all women who revealed that they had at least three doses of IPTp-SP were categorized as having adequate IPTp-SP (coded 1) whilst those who had less than three were categorized otherwise (coded 0).\nIndependent variables A total of ten (10) independent variables were included and were categorized under individual, community and region-level factors. This was possible due the hierarchical nature of the data set. The individual-level factors comprised age in completed years (15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49) and education measured as highest level of educational attainment (no education, primary, secondary, higher). 
In addition were wealth quintile (poor, middle, rich), whether mosquito bite causes malaria (yes or no), if sleeping under ITN prevents malaria (yes or no) and if malaria can be prevented by destroying mosquito breeding site (yes or no). The community-level factors comprised residential status (urban, rural, refugee settlement), and socio-economic disadvantage at the community level. The sole region-level factor was socio-economic disadvantage at the region level. Several studies using the DHS dataset have followed the same categorization [21, 25, 26].\nA total of ten (10) independent variables were included and were categorized under individual, community and region-level factors. This was possible due the hierarchical nature of the data set. The individual-level factors comprised age in completed years (15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49) and education measured as highest level of educational attainment (no education, primary, secondary, higher). In addition were wealth quintile (poor, middle, rich), whether mosquito bite causes malaria (yes or no), if sleeping under ITN prevents malaria (yes or no) and if malaria can be prevented by destroying mosquito breeding site (yes or no). The community-level factors comprised residential status (urban, rural, refugee settlement), and socio-economic disadvantage at the community level. The sole region-level factor was socio-economic disadvantage at the region level. Several studies using the DHS dataset have followed the same categorization [21, 25, 26].\nStatistical analyses Stata version 13 was utilized for all the analyses. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables (see Table 1). The association of significance between the explanatory variables and adequate IPTp-SP uptake was assessed with chi-square at 5% margin of error. Finally, a three-level multilevel logistic regression was fitted with five models. First of the five models was the empty model without any explanatory variable (Model I). The empty model is an unconditional model, which accounted for the magnitude of variance between community and region levels. This was followed by a model bearing all the individual-level variables (Model II) and subsequently Model III, which accounted for only community-level variables. Model IV featured region-level variables alone whilst the complete and ultimate model included variables of all the aforementioned levels/models (Model V). Output from the models comprised fixed and random effects. The fixed effects were reported as adjusted odds ratios (aORs) at 95% credible intervals (CrIs). However, the random effects reflected as intraclass correlation coefficient (ICC) and median odds ratio (MOR) [27]. With the ICC, it was possible to gauge the extent of variance in the possibility or tendency of adequate IPTp-SP uptake that is explained by community and region level factors. On the order hand, the MOR quantified the community and region variance as odds ratios and in addition estimated the likelihood of adequate IPTp-SP uptake that is influenced by community and region level issues. 
Groups having least observations were set as reference groups in the models.\n\nTable 1Sample by IPTp-SP utilization during last pregnancyVariableAt least 3 doses of IPTp-SP in last Pregnancy\nNon (%)\nYesn (%)\nTotaln (%)\nX\n2; p-value\nIndividual level\n Age19.638; p < 0.05  15–19143(50.1)142(49.9)286(100)  20–24553(52.0)510(48.0)1063(100)  25–29606(55.8)479(44.2)1085(100)  30–34497(56.4)385(43.6)882(100)  35–39318(54.8)262(45.2)580(100)  40–44163(59.6)110(40.4)273(100)  45–4949(58.2)35(41.8)84(100)\n Education45.638; p < 0.01  No education379(55.1)309(44.9)689(100)  Primary1294(55.0)1060(45.0)2354(100)  Secondary537(54.4)450(45.6)987(100)  Higher118(52.9)105(47.1)224(100)\n Wealth quintile97.638; p < 0.001  Poor572(53.8)491(46.2)1063(100)  Middle863(56.9)654(43.1)1517(100)  Rich894(53.4)780(46.6)1675(100)\n Mosquito bite causes malaria18.832; p < 0.01  No1573(57.4)1166(42.6)2739(100)  Yes755(49.9)759(50.1)1515(100)\n Sleeping under ITN prevents malaria24.221; p < 0.01  No553(60.0)368(40.0)920(100)  Yes1776(53.3)1557(46.7)3333(100)\n Destroying mosquito breeding site prevents malaria19.361; p < 0.01  No1909(54.0)1624(46.0)3533(100)  Yes420(58.3)301(41.7)720(100)\nCommunity level factors\n Residential status52.873; p < 0.001  Urban493(52.4)448(47.6)940(100)  Rural1664(56.2)1298(43.8)2962(100)  Refugee settlement172(49.1)179(50.9)351(100)\n Zone45.911; p < 0.001  Southern719(56.0)565(44.0)1284(100)  Eastern617(55.6)493(44.4)1110(100)  Northern465(54.2)392(45.8)858(100)  Western528(52.7)474(47.3)1002(100)\n Socio-economic disadvantage109.111; p < 0.001  Tertile 1(least disadvantaged)935(53.2)822(46.8)1757(100)  Tertile 2855(58.1)615(41.9)1470(100)  Tertile 3(most disadvantaged)539(52.5)487(47.5)1026(100)\nRegion level factor Socio-economic disadvantage59.651; p < 0.001  Tertile 1(least disadvantaged)1048(55.2)851(44.8)1898(100)  Tertile 2860(54.2)725(45.8)1585(100)  Tertile 3(most disadvantaged)421(54.7)349(45.3)770(100) Total2329(54.7)1925(45.3)4254(100)Source: 2018-19 Uganda Malaria Indicator Survey\nSample by IPTp-SP utilization during last pregnancy\nSource: 2018-19 Uganda Malaria Indicator Survey\nStata version 13 was utilized for all the analyses. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables (see Table 1). The association of significance between the explanatory variables and adequate IPTp-SP uptake was assessed with chi-square at 5% margin of error. Finally, a three-level multilevel logistic regression was fitted with five models. First of the five models was the empty model without any explanatory variable (Model I). The empty model is an unconditional model, which accounted for the magnitude of variance between community and region levels. This was followed by a model bearing all the individual-level variables (Model II) and subsequently Model III, which accounted for only community-level variables. Model IV featured region-level variables alone whilst the complete and ultimate model included variables of all the aforementioned levels/models (Model V). Output from the models comprised fixed and random effects. The fixed effects were reported as adjusted odds ratios (aORs) at 95% credible intervals (CrIs). However, the random effects reflected as intraclass correlation coefficient (ICC) and median odds ratio (MOR) [27]. 
With the ICC, it was possible to gauge the extent of variance in the possibility or tendency of adequate IPTp-SP uptake that is explained by community and region level factors. On the order hand, the MOR quantified the community and region variance as odds ratios and in addition estimated the likelihood of adequate IPTp-SP uptake that is influenced by community and region level issues. Groups having least observations were set as reference groups in the models.\n\nTable 1Sample by IPTp-SP utilization during last pregnancyVariableAt least 3 doses of IPTp-SP in last Pregnancy\nNon (%)\nYesn (%)\nTotaln (%)\nX\n2; p-value\nIndividual level\n Age19.638; p < 0.05  15–19143(50.1)142(49.9)286(100)  20–24553(52.0)510(48.0)1063(100)  25–29606(55.8)479(44.2)1085(100)  30–34497(56.4)385(43.6)882(100)  35–39318(54.8)262(45.2)580(100)  40–44163(59.6)110(40.4)273(100)  45–4949(58.2)35(41.8)84(100)\n Education45.638; p < 0.01  No education379(55.1)309(44.9)689(100)  Primary1294(55.0)1060(45.0)2354(100)  Secondary537(54.4)450(45.6)987(100)  Higher118(52.9)105(47.1)224(100)\n Wealth quintile97.638; p < 0.001  Poor572(53.8)491(46.2)1063(100)  Middle863(56.9)654(43.1)1517(100)  Rich894(53.4)780(46.6)1675(100)\n Mosquito bite causes malaria18.832; p < 0.01  No1573(57.4)1166(42.6)2739(100)  Yes755(49.9)759(50.1)1515(100)\n Sleeping under ITN prevents malaria24.221; p < 0.01  No553(60.0)368(40.0)920(100)  Yes1776(53.3)1557(46.7)3333(100)\n Destroying mosquito breeding site prevents malaria19.361; p < 0.01  No1909(54.0)1624(46.0)3533(100)  Yes420(58.3)301(41.7)720(100)\nCommunity level factors\n Residential status52.873; p < 0.001  Urban493(52.4)448(47.6)940(100)  Rural1664(56.2)1298(43.8)2962(100)  Refugee settlement172(49.1)179(50.9)351(100)\n Zone45.911; p < 0.001  Southern719(56.0)565(44.0)1284(100)  Eastern617(55.6)493(44.4)1110(100)  Northern465(54.2)392(45.8)858(100)  Western528(52.7)474(47.3)1002(100)\n Socio-economic disadvantage109.111; p < 0.001  Tertile 1(least disadvantaged)935(53.2)822(46.8)1757(100)  Tertile 2855(58.1)615(41.9)1470(100)  Tertile 3(most disadvantaged)539(52.5)487(47.5)1026(100)\nRegion level factor Socio-economic disadvantage59.651; p < 0.001  Tertile 1(least disadvantaged)1048(55.2)851(44.8)1898(100)  Tertile 2860(54.2)725(45.8)1585(100)  Tertile 3(most disadvantaged)421(54.7)349(45.3)770(100) Total2329(54.7)1925(45.3)4254(100)Source: 2018-19 Uganda Malaria Indicator Survey\nSample by IPTp-SP utilization during last pregnancy\nSource: 2018-19 Uganda Malaria Indicator Survey\nModel fit and specifications First, multicollinearity between the explanatory variables was assessed using the Variance Inflation Factor (VIF) [28] and the results showed that the variables were not highly correlated to warrant a concern (mean VIF = 1.55, minimum VIF = 1.19, maximum VIF = 2.09). Second, the Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models. Third, the Markov Chain Monte Carlo (MCMC) estimation was applied in modelling [29] and all models were specified using 3.05 version of MLwinN package in the Stata software.\nFirst, multicollinearity between the explanatory variables was assessed using the Variance Inflation Factor (VIF) [28] and the results showed that the variables were not highly correlated to warrant a concern (mean VIF = 1.55, minimum VIF = 1.19, maximum VIF = 2.09). Second, the Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models. 
Third, the Markov Chain Monte Carlo (MCMC) estimation was applied in modelling [29] and all models were specified using 3.05 version of MLwinN package in the Stata software.\nEthics approval The 2018–2019 UMIS had approval from the Uganda National Council for Science and Technology (UNCST), the Ethics Committee of the School of Medicine Research and Ethics Committee (SOMREC) of the Makerere University as well as the institutional review board of the ICF. The author applied and was granted access to utilize the dataset for the purpose of this study.\nThe 2018–2019 UMIS had approval from the Uganda National Council for Science and Technology (UNCST), the Ethics Committee of the School of Medicine Research and Ethics Committee (SOMREC) of the Makerere University as well as the institutional review board of the ICF. The author applied and was granted access to utilize the dataset for the purpose of this study.", "Adequate uptake of intermittent preventive therapy using IPTp-SP was the dependent variable for this study. During the 2018-19 UMIS, women were asked if they took IPTp-SP (Fansidar™), and the number of times this was taken during their last pregnancy. The current recommendation by the WHO endorses at least three doses for all pregnant women in locations that have moderate to high malaria transmission of which Africa and for that matter Uganda is inclusive [8, 24]. Following this recommendation, all women who revealed that they had at least three doses of IPTp-SP were categorized as having adequate IPTp-SP (coded 1) whilst those who had less than three were categorized otherwise (coded 0).", "A total of ten (10) independent variables were included and were categorized under individual, community and region-level factors. This was possible due the hierarchical nature of the data set. The individual-level factors comprised age in completed years (15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49) and education measured as highest level of educational attainment (no education, primary, secondary, higher). In addition were wealth quintile (poor, middle, rich), whether mosquito bite causes malaria (yes or no), if sleeping under ITN prevents malaria (yes or no) and if malaria can be prevented by destroying mosquito breeding site (yes or no). The community-level factors comprised residential status (urban, rural, refugee settlement), and socio-economic disadvantage at the community level. The sole region-level factor was socio-economic disadvantage at the region level. Several studies using the DHS dataset have followed the same categorization [21, 25, 26].", "Stata version 13 was utilized for all the analyses. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables (see Table 1). The association of significance between the explanatory variables and adequate IPTp-SP uptake was assessed with chi-square at 5% margin of error. Finally, a three-level multilevel logistic regression was fitted with five models. First of the five models was the empty model without any explanatory variable (Model I). The empty model is an unconditional model, which accounted for the magnitude of variance between community and region levels. This was followed by a model bearing all the individual-level variables (Model II) and subsequently Model III, which accounted for only community-level variables. 
Model IV featured region-level variables alone whilst the complete and ultimate model included variables of all the aforementioned levels/models (Model V). Output from the models comprised fixed and random effects. The fixed effects were reported as adjusted odds ratios (aORs) at 95% credible intervals (CrIs). However, the random effects reflected as intraclass correlation coefficient (ICC) and median odds ratio (MOR) [27]. With the ICC, it was possible to gauge the extent of variance in the possibility or tendency of adequate IPTp-SP uptake that is explained by community and region level factors. On the order hand, the MOR quantified the community and region variance as odds ratios and in addition estimated the likelihood of adequate IPTp-SP uptake that is influenced by community and region level issues. Groups having least observations were set as reference groups in the models.\n\nTable 1Sample by IPTp-SP utilization during last pregnancyVariableAt least 3 doses of IPTp-SP in last Pregnancy\nNon (%)\nYesn (%)\nTotaln (%)\nX\n2; p-value\nIndividual level\n Age19.638; p < 0.05  15–19143(50.1)142(49.9)286(100)  20–24553(52.0)510(48.0)1063(100)  25–29606(55.8)479(44.2)1085(100)  30–34497(56.4)385(43.6)882(100)  35–39318(54.8)262(45.2)580(100)  40–44163(59.6)110(40.4)273(100)  45–4949(58.2)35(41.8)84(100)\n Education45.638; p < 0.01  No education379(55.1)309(44.9)689(100)  Primary1294(55.0)1060(45.0)2354(100)  Secondary537(54.4)450(45.6)987(100)  Higher118(52.9)105(47.1)224(100)\n Wealth quintile97.638; p < 0.001  Poor572(53.8)491(46.2)1063(100)  Middle863(56.9)654(43.1)1517(100)  Rich894(53.4)780(46.6)1675(100)\n Mosquito bite causes malaria18.832; p < 0.01  No1573(57.4)1166(42.6)2739(100)  Yes755(49.9)759(50.1)1515(100)\n Sleeping under ITN prevents malaria24.221; p < 0.01  No553(60.0)368(40.0)920(100)  Yes1776(53.3)1557(46.7)3333(100)\n Destroying mosquito breeding site prevents malaria19.361; p < 0.01  No1909(54.0)1624(46.0)3533(100)  Yes420(58.3)301(41.7)720(100)\nCommunity level factors\n Residential status52.873; p < 0.001  Urban493(52.4)448(47.6)940(100)  Rural1664(56.2)1298(43.8)2962(100)  Refugee settlement172(49.1)179(50.9)351(100)\n Zone45.911; p < 0.001  Southern719(56.0)565(44.0)1284(100)  Eastern617(55.6)493(44.4)1110(100)  Northern465(54.2)392(45.8)858(100)  Western528(52.7)474(47.3)1002(100)\n Socio-economic disadvantage109.111; p < 0.001  Tertile 1(least disadvantaged)935(53.2)822(46.8)1757(100)  Tertile 2855(58.1)615(41.9)1470(100)  Tertile 3(most disadvantaged)539(52.5)487(47.5)1026(100)\nRegion level factor Socio-economic disadvantage59.651; p < 0.001  Tertile 1(least disadvantaged)1048(55.2)851(44.8)1898(100)  Tertile 2860(54.2)725(45.8)1585(100)  Tertile 3(most disadvantaged)421(54.7)349(45.3)770(100) Total2329(54.7)1925(45.3)4254(100)Source: 2018-19 Uganda Malaria Indicator Survey\nSample by IPTp-SP utilization during last pregnancy\nSource: 2018-19 Uganda Malaria Indicator Survey", "First, multicollinearity between the explanatory variables was assessed using the Variance Inflation Factor (VIF) [28] and the results showed that the variables were not highly correlated to warrant a concern (mean VIF = 1.55, minimum VIF = 1.19, maximum VIF = 2.09). Second, the Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models. 
Third, the Markov Chain Monte Carlo (MCMC) estimation was applied in modelling [29] and all models were specified using 3.05 version of MLwinN package in the Stata software.", "The 2018–2019 UMIS had approval from the Uganda National Council for Science and Technology (UNCST), the Ethics Committee of the School of Medicine Research and Ethics Committee (SOMREC) of the Makerere University as well as the institutional review board of the ICF. The author applied and was granted access to utilize the dataset for the purpose of this study.", "As shown in Table 1, less than half of the surveyed women had three or more IPTp-SP doses during their last pregnancies (45.3%). All the explanatory variables showed significant association at 95% level of significance. Nearly half of women aged 15–19 (49.9%) and the highly educated (47.1%) women received at least three doses. A greater section of rich women had at least three IPTp-SP doses (46.6%) and 50.1% of women who knew that mosquito bite causes malaria did the same. A significant proportion of women who knew that sleeping under ITN prevents malaria (46.7%) and those who did not know that destroying mosquito breeding sites prevents malaria (46.0%) received at three or more doses of IPTp-SP. Receiving at least three doses of IPTp-SP was profound in refugee settlements (50.9%), Western zone (47.3%), most disadvantaged communities (47.5%) and moderately disadvantaged regions (45.8%).", "Table 2 presents the findings of the fixed effects. The complete and final model (Model V) indicates that women aged 15–19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45–49 [aOR = 0.42, Crl = 0.33–0.98]. Those who had no formal education were less likely to achieve the minimum recommended doses [aOR = 0.51, Crl = 0.35–0.81]. Similarly, poor women [aOR = 0.80, Crl = 0.78–0.91] and women who knew that mosquito bite can cause malaria [aOR = 0.84, Crl = 0.73–0.96] were less likely to have three or more doses of IPTp-SP relative to the rich women and women who did not know that mosquito bite can cause malaria respectively. Women who reported that sleeping under ITN prevents malaria had higher odds of three or more IPTp-SP doses [aOR = 1.22, Crl = 1.04–1.43] relative to those who reported otherwise. The findings further revealed that urban residents were less probable to have at least three doses [aOR = 0.60, CI = 0.46–0.82] as well as most disadvantaged communities [aOR = 0.67, CI = 0.50–0.86] relative to rural residents and least disadvantaged communities correspondingly. 
Similarly, most disadvantaged regions were aligned with less likelihood of three or more IPTp-SP uptake [aOR = 0.59, CI = 0.48–0.78] compared to the least disadvantaged regions.\n\nTable 2Individual, community and region-level predictors of IPTp-SP utilizationModel IModel IIModel IIIModel IVModel V\naOR [95% Crl]\naOR [95% Crl]\naOR [95% Crl]\naOR [95% Crl]\nFixed effects Individual level\n  Age   15–190.22** [0.11–0.81]0.42**[0.33–0.98]   20–240.43*[0.24–0.98]0.38*[0.25–0.79]   25–290.99[0.62–1.62]0.81[0.53–1.22]   30–341.01[0.62–1.64]0.81[0.53–1.24]   35–390.94[0.56–1.55]0.76[0.48–1.18]   40–440.99[0.58–1.71]0.81[0.50–1.30]   45–491[1]1[1]\n  Education   No education0.33**[0.17–0.77]0.51**[0.35–0.81]   Primary0.53*[0.44–0.82]0.62*[0.51–0.91]   Secondary1.09[0.74–1.60]1.06[0.75–1.50]   Higher1[1]1[1]\n  Wealth quintile   Poor0.81*[0.79–0.94]0.80**[0.78–0.91]   Middle0.89[0.75–1.06]0.88[0.74–1.05]   Rich1[1]1[1]\n  Mosquito bite causes malaria   No0.85*[0.74–0.97]0.84*[0.73–0.96]   Yes1[1]1[1]\n  Sleeping under ITN prevents malaria   No1[1]1[1]   Yes1.23**[1.06–1.43]1.22*[1.04–1.43]\n  Destroying mosquito breeding site prevents malaria   No1.16[0.97–1.39]1.165[0.96–1.41]   Yes1[1]1[1]\n Community level factors\n  Residential status   Urban0.69***[0.31–0.88]0.60***[0.46–0.82]   Rural0.78*[0.68–0.96]0.80[0.61–1.05]   Refugee settlement1[1]1[1]\n  Zone   Southern1[1]1[1]   Eastern1.05[0.74–1.49]1.07[0.71–1.61]   Northern1.08[0.73–1.61]1.12[0.68–1.83]   Western1.19[0.84–1.68]1.22[0.81–1.84]\n  Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.87[0.72–1.05]0.91[0.73–1.12]   Tertile 3(most disadvantaged)0.71*[0.65–0.92]0.67**[0.50–0.86]\n Region level factor\n  Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.71**[0.51–0.89]0.68***[0.54–0.82]   Tertile 3(most disadvantaged)0.55**[0.39–0.78]0.59***[0.48–0.78]\nRandom effects\n Region level  Variance (SE)1.13[1.06–1.20]1.14[1.04–1.21]1.12[1.05–1.20]1.15[1.07–1.21]1.14[1.06–1.20]  ICC (%)18.00[15.01–22.90]17.72[15.20-19.08]16.33[15.21-9.00]17.10[16.91–18.70]18.22[17.90-19.02]  MOR2.76[2.03–3.42]2.77[2.65–2.86]2.74[2.66–2.84]2.78[2.68–2.86]2.89[2.33–3.51]  Explained variation[1]35.72[29.06–41.20]31.03[27.61–37.31]33.20[27.31–37.28]30.82[26.97–37.91]\n Community level  Variance (SE)1.86[1.12–2.18]2.00[1.71–2.31]1.98[1.14–2.20]1.59[1.22–1.98]1.99[1.42–2.36]  ICC (%)47.60[39.8–50.7]49.80[37.98–53.70]48.50[36.90-59.08]48.40[37.00-51.20]49.22[38.99–52.51]  MOR3.67[2.74–4.09]3.85[3.48–4.26]3.83[2.77–4.12]3.33[2.87–3.83]3.84[3.12–4.33]  Explained variation[1]52.00[46.30–59.80]48.99[41.40-55.81]47.89[38.21–52.55]48.70[43.00–52.00]\n Model fit statistics  Bayesian DIC58396002599860106012\n   N  Region level1515151515  Community level340340340340340  Individual42544254425442544254\n*\np < 0.05, **\np < 0.01, ***\np < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference\nIndividual, community and region-level predictors of IPTp-SP utilization\n\n*\np < 0.05, **\np < 0.01, ***\np < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference", "Outcome of the random effects were also reported in Table 2. The empty model (Model I), revealed that the discrepancy in uptake of three or more IPTp-SP doses was substantial at the community level [σ2 = 1. 86; Crl = 11.12–2.18] than regional level [σ2 = 1.13; Crl = 1.06–1.20]. 
Also, the ICCs from Model I indicated that 18% and 47% of the variation in IPTp-SP uptake were attributable to region-level and community-level factors, respectively. According to the final model (Model V), the MORs indicate that if a woman moved to a community with a higher likelihood of three or more IPTp-SP doses, her odds of achieving the recommended doses would, in the median case, increase 3.83-fold. Similarly, moving to a region with a higher likelihood of three or more IPTp-SP doses is associated with a 2.74-fold increase in the odds of receiving them.", "This study draws on the most recent national malaria survey of Uganda. Owing to the sampling procedure and the large sample, it is representative of women aged 15–49 in Uganda. Despite this strength, the study has some noteworthy limitations. It adopted a cross-sectional design, so causal inference is not permissible. Second, because the outcome variable was self-reported, under-reporting or over-reporting of optimal IPTp-SP uptake is plausible. Finally, because the study relied on pre-existing data with no information on health system factors, such factors could not be examined." ]
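The reported clustering statistics can be checked against the level variances in Table 2. The sketch below gives the usual expressions for a three-level logistic model, assuming the standard latent-response formulation with level-1 variance π²/3 (the paper does not spell out the exact formulas used, so this is an assumption):

```latex
% Region variance \sigma^2_R, community variance \sigma^2_C, latent level-1 variance \pi^2/3 \approx 3.29
\[
\mathrm{ICC}_{\text{region}} = \frac{\sigma^2_R}{\sigma^2_R + \sigma^2_C + \pi^2/3}, \qquad
\mathrm{ICC}_{\text{community}} = \frac{\sigma^2_R + \sigma^2_C}{\sigma^2_R + \sigma^2_C + \pi^2/3},
\]
\[
\mathrm{MOR} = \exp\!\left(\sqrt{2\sigma^2}\;\Phi^{-1}(0.75)\right) \approx \exp\!\left(0.954\,\sigma\right).
\]
```

Plugging in the Model I estimates (σ²_R = 1.13, σ²_C = 1.86) gives ICCs of roughly 18% and 47.6% and MORs of about 2.76 (region) and 3.67 (community), which is consistent with the empty-model row of Table 2.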
[ "Background", "Methods", "Data description", "Measurement of variables", "Dependent variable", "Independent variables", "Statistical analyses", "Model fit and specifications", "Ethics approval", "Results", "Descriptive findings", "Fixed effects", "Random effects", "Discussion", "Strengths and limitations", "Conclusion" ]
[ "Malaria infection in pregnancy (MiP) is acknowledged as a weighty public health challenge and have ample dangers for the pregnant woman and her fetus [1–3]. The symptoms and complications of MiP fluctuate with respect to intensity of transmission within a defined geographical area as well as a woman’s level of acquired immunity [1, 2]. Nineteen countries within sub-Saharan Africa (SSA), with Uganda inclusive, and one Asian country account for 85% of the global malaria burden [1]. In 2018 alone, about US$ 2.7 billion investment was made into malaria control and elimination globally and three-quarters of this was directed to the World Health Organization (WHO) African Region. In spite of this, malaria continue to take a heavy toll on government and household expenses in Uganda [4]. Pregnant women have increased susceptibility to malaria and its associated complications such as maternal anaemia, stillbirth, low birth weight, and in worse scenarios, infant mortality and morbidity [5, 6]. MiP could be an impediment to the realization of targets 3.1 of the Sustainable Development Goals, thus reducing maternal mortality ratio to less than 70 deaths per 100,000 live births [7].\nIn order to shield women in moderate to high malaria transmission areas in Africa and their newborns from the adverse implications of MiP and its associated imminent problems, the WHO in 2012 revised its anti-malaria policy and recommended that all pregnant women within such regions should receive at least three doses of intermittent preventive treatment in pregnancy with antimalarial drug sulfadoxine-pyrimethamine (IPTp-SP) [8]. This recommendation was informed by the stagnated IPTp coverage rates and new evidence that reinforced the need for three doses or more [9]. Further, in 2016, the WHO developed new antenatal care (ANC) guidelines by indorsing an increase in the number of ANC to at least eight contacts between pregnant women and healthcare providers as a strategy to enhance prospects of IPTp-SP uptake [2].\nIPTp-SP is to be taken by all pregnant women in moderate to high malaria transmission areas and should commence as early as possible within the second trimester. The doses are to be administered at least three times with at least one month interval until childbirth. Generally IPTp-SP declines episodes of MiP, neonatal mortality, low birth weight, and placental parasitaemia [2]. Empirical evidence indicate that IPTp contributes to 29%, 38% and 31% reduction in the incidence of low birth weight, severe malaria anaemia and neonatal mortality respectively [10, 11]. In 2018, out of the 36 African countries that recounted IPTp-SP coverage rates, about 31% of eligible women in the reproductive age received the recommended doses and this signified an increase relative to the rate reported in 2017 (22%) and 2010 (2%). In the case of Uganda, where malaria is endemic in 95% of the country [4], the 2019 World Malaria Report indicated that 30% or lower of the eligible women had three or more IPTp-SP doses [1]. Resistance of the parasite to SP has been noted, however, IPTp still remains a very cost-effective and a promising lifesaving intervention [12, 13].\nIn spite of the missed IPTp-SP opportunities in Uganda in the wake of high MiP [1, 14, 15], scanty literature exist on MiP in Uganda. The few studies have either been limited to some regions of the country [16–20], used relatively old national data [21], assessed the impact of intermittent preventive treatment during pregnancy [22] and among others. 
To date, empirical national study utilizing the 2018-19 Uganda Malaria Indicator Survey that explore predictors of attaining three or more doses of IPTp-SP in the country is non-existent. With the aim of invigorating a critical evidence-based discussion on MiP prevention, and offering empirical evidence to guide MiP policies, this study investigated the rate of uptake and factors affecting uptake of three or more IPTp-SP doses as recommended by the WHO.", "Data description Data from the 2018–2019 Malaria Indicator Survey (2018-19 UMIS) of Uganda was analysed. This is a cross-sectional survey that is executed by the Uganda Bureau of Statistics (UBOS) and the National Malaria Control Division (NMCD), however, technical assistance was granted by the Inner City Fund (ICF) [23]. The survey sampled participants through a two-stage sampling design with the intent of achieving estimation of three essential indicators, thus rural-urban locations, all fifteen administrative regions and national coverage. The sampling commenced with selection of clusters from refugee and non-refugee sample frames [23] from the enumeration areas delineated for the 2014 National Population and Housing Census (NPHC). In all, 320 clusters were selected from non-refugee sample frame (236 and 84 from rural and urban settlements respectively). Urban settlements were oversampled to obtain unbiased estimations for rural and urban settlements. The same procedure was followed to select 22 clusters from the refugee frame. The next sampling phase involved the systematic selection of households and 28 households per cluster were selected resulting in a total of 8,878. Eligible women were those aged 15–49 and were either permanent residents or visitors who joined the household the night before the survey. In all, 8,231 women were successfully interviewed signifying 98% and 99% response rates for non-refugee and refugee settlements respectively. In the present study, however, 4,254 women were eligible for inclusion based on completeness of data.\nData from the 2018–2019 Malaria Indicator Survey (2018-19 UMIS) of Uganda was analysed. This is a cross-sectional survey that is executed by the Uganda Bureau of Statistics (UBOS) and the National Malaria Control Division (NMCD), however, technical assistance was granted by the Inner City Fund (ICF) [23]. The survey sampled participants through a two-stage sampling design with the intent of achieving estimation of three essential indicators, thus rural-urban locations, all fifteen administrative regions and national coverage. The sampling commenced with selection of clusters from refugee and non-refugee sample frames [23] from the enumeration areas delineated for the 2014 National Population and Housing Census (NPHC). In all, 320 clusters were selected from non-refugee sample frame (236 and 84 from rural and urban settlements respectively). Urban settlements were oversampled to obtain unbiased estimations for rural and urban settlements. The same procedure was followed to select 22 clusters from the refugee frame. The next sampling phase involved the systematic selection of households and 28 households per cluster were selected resulting in a total of 8,878. Eligible women were those aged 15–49 and were either permanent residents or visitors who joined the household the night before the survey. In all, 8,231 women were successfully interviewed signifying 98% and 99% response rates for non-refugee and refugee settlements respectively. 
In the present study, however, 4,254 women were eligible for inclusion based on completeness of data.", "Data from the 2018–2019 Malaria Indicator Survey (2018-19 UMIS) of Uganda was analysed. This is a cross-sectional survey that is executed by the Uganda Bureau of Statistics (UBOS) and the National Malaria Control Division (NMCD), however, technical assistance was granted by the Inner City Fund (ICF) [23]. The survey sampled participants through a two-stage sampling design with the intent of achieving estimation of three essential indicators, thus rural-urban locations, all fifteen administrative regions and national coverage. The sampling commenced with selection of clusters from refugee and non-refugee sample frames [23] from the enumeration areas delineated for the 2014 National Population and Housing Census (NPHC). In all, 320 clusters were selected from non-refugee sample frame (236 and 84 from rural and urban settlements respectively). Urban settlements were oversampled to obtain unbiased estimations for rural and urban settlements. The same procedure was followed to select 22 clusters from the refugee frame. The next sampling phase involved the systematic selection of households and 28 households per cluster were selected resulting in a total of 8,878. Eligible women were those aged 15–49 and were either permanent residents or visitors who joined the household the night before the survey. In all, 8,231 women were successfully interviewed signifying 98% and 99% response rates for non-refugee and refugee settlements respectively. In the present study, however, 4,254 women were eligible for inclusion based on completeness of data.", "Dependent variable Adequate uptake of intermittent preventive therapy using IPTp-SP was the dependent variable for this study. During the 2018-19 UMIS, women were asked if they took IPTp-SP (Fansidar™), and the number of times this was taken during their last pregnancy. The current recommendation by the WHO endorses at least three doses for all pregnant women in locations that have moderate to high malaria transmission of which Africa and for that matter Uganda is inclusive [8, 24]. Following this recommendation, all women who revealed that they had at least three doses of IPTp-SP were categorized as having adequate IPTp-SP (coded 1) whilst those who had less than three were categorized otherwise (coded 0).\nAdequate uptake of intermittent preventive therapy using IPTp-SP was the dependent variable for this study. During the 2018-19 UMIS, women were asked if they took IPTp-SP (Fansidar™), and the number of times this was taken during their last pregnancy. The current recommendation by the WHO endorses at least three doses for all pregnant women in locations that have moderate to high malaria transmission of which Africa and for that matter Uganda is inclusive [8, 24]. Following this recommendation, all women who revealed that they had at least three doses of IPTp-SP were categorized as having adequate IPTp-SP (coded 1) whilst those who had less than three were categorized otherwise (coded 0).\nIndependent variables A total of ten (10) independent variables were included and were categorized under individual, community and region-level factors. This was possible due the hierarchical nature of the data set. The individual-level factors comprised age in completed years (15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49) and education measured as highest level of educational attainment (no education, primary, secondary, higher). 
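(Before the remaining predictors are listed, note that the outcome coding described above, with at least three reported doses counted as adequate, amounts to a one-line recode. A hypothetical Python illustration follows; the variable names are placeholders, not the actual UMIS recode names.)

```python
# Hypothetical recode of the dependent variable from the UMIS women's file.
# "sp_doses_last_pregnancy" is a placeholder, not the real survey variable name.
import pandas as pd

def code_adequate_iptp(doses: pd.Series) -> pd.Series:
    """1 if the woman reported three or more IPTp-SP doses, else 0."""
    return (doses.fillna(0) >= 3).astype(int)

# df["iptp3"] = code_adequate_iptp(df["sp_doses_last_pregnancy"])
```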
In addition were wealth quintile (poor, middle, rich), whether mosquito bite causes malaria (yes or no), if sleeping under ITN prevents malaria (yes or no) and if malaria can be prevented by destroying mosquito breeding site (yes or no). The community-level factors comprised residential status (urban, rural, refugee settlement), and socio-economic disadvantage at the community level. The sole region-level factor was socio-economic disadvantage at the region level. Several studies using the DHS dataset have followed the same categorization [21, 25, 26].\nA total of ten (10) independent variables were included and were categorized under individual, community and region-level factors. This was possible due the hierarchical nature of the data set. The individual-level factors comprised age in completed years (15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49) and education measured as highest level of educational attainment (no education, primary, secondary, higher). In addition were wealth quintile (poor, middle, rich), whether mosquito bite causes malaria (yes or no), if sleeping under ITN prevents malaria (yes or no) and if malaria can be prevented by destroying mosquito breeding site (yes or no). The community-level factors comprised residential status (urban, rural, refugee settlement), and socio-economic disadvantage at the community level. The sole region-level factor was socio-economic disadvantage at the region level. Several studies using the DHS dataset have followed the same categorization [21, 25, 26].\nStatistical analyses Stata version 13 was utilized for all the analyses. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables (see Table 1). The association of significance between the explanatory variables and adequate IPTp-SP uptake was assessed with chi-square at 5% margin of error. Finally, a three-level multilevel logistic regression was fitted with five models. First of the five models was the empty model without any explanatory variable (Model I). The empty model is an unconditional model, which accounted for the magnitude of variance between community and region levels. This was followed by a model bearing all the individual-level variables (Model II) and subsequently Model III, which accounted for only community-level variables. Model IV featured region-level variables alone whilst the complete and ultimate model included variables of all the aforementioned levels/models (Model V). Output from the models comprised fixed and random effects. The fixed effects were reported as adjusted odds ratios (aORs) at 95% credible intervals (CrIs). However, the random effects reflected as intraclass correlation coefficient (ICC) and median odds ratio (MOR) [27]. With the ICC, it was possible to gauge the extent of variance in the possibility or tendency of adequate IPTp-SP uptake that is explained by community and region level factors. On the order hand, the MOR quantified the community and region variance as odds ratios and in addition estimated the likelihood of adequate IPTp-SP uptake that is influenced by community and region level issues. 
Groups having least observations were set as reference groups in the models.\n\nTable 1Sample by IPTp-SP utilization during last pregnancyVariableAt least 3 doses of IPTp-SP in last Pregnancy\nNon (%)\nYesn (%)\nTotaln (%)\nX\n2; p-value\nIndividual level\n Age19.638; p < 0.05  15–19143(50.1)142(49.9)286(100)  20–24553(52.0)510(48.0)1063(100)  25–29606(55.8)479(44.2)1085(100)  30–34497(56.4)385(43.6)882(100)  35–39318(54.8)262(45.2)580(100)  40–44163(59.6)110(40.4)273(100)  45–4949(58.2)35(41.8)84(100)\n Education45.638; p < 0.01  No education379(55.1)309(44.9)689(100)  Primary1294(55.0)1060(45.0)2354(100)  Secondary537(54.4)450(45.6)987(100)  Higher118(52.9)105(47.1)224(100)\n Wealth quintile97.638; p < 0.001  Poor572(53.8)491(46.2)1063(100)  Middle863(56.9)654(43.1)1517(100)  Rich894(53.4)780(46.6)1675(100)\n Mosquito bite causes malaria18.832; p < 0.01  No1573(57.4)1166(42.6)2739(100)  Yes755(49.9)759(50.1)1515(100)\n Sleeping under ITN prevents malaria24.221; p < 0.01  No553(60.0)368(40.0)920(100)  Yes1776(53.3)1557(46.7)3333(100)\n Destroying mosquito breeding site prevents malaria19.361; p < 0.01  No1909(54.0)1624(46.0)3533(100)  Yes420(58.3)301(41.7)720(100)\nCommunity level factors\n Residential status52.873; p < 0.001  Urban493(52.4)448(47.6)940(100)  Rural1664(56.2)1298(43.8)2962(100)  Refugee settlement172(49.1)179(50.9)351(100)\n Zone45.911; p < 0.001  Southern719(56.0)565(44.0)1284(100)  Eastern617(55.6)493(44.4)1110(100)  Northern465(54.2)392(45.8)858(100)  Western528(52.7)474(47.3)1002(100)\n Socio-economic disadvantage109.111; p < 0.001  Tertile 1(least disadvantaged)935(53.2)822(46.8)1757(100)  Tertile 2855(58.1)615(41.9)1470(100)  Tertile 3(most disadvantaged)539(52.5)487(47.5)1026(100)\nRegion level factor Socio-economic disadvantage59.651; p < 0.001  Tertile 1(least disadvantaged)1048(55.2)851(44.8)1898(100)  Tertile 2860(54.2)725(45.8)1585(100)  Tertile 3(most disadvantaged)421(54.7)349(45.3)770(100) Total2329(54.7)1925(45.3)4254(100)Source: 2018-19 Uganda Malaria Indicator Survey\nSample by IPTp-SP utilization during last pregnancy\nSource: 2018-19 Uganda Malaria Indicator Survey\nStata version 13 was utilized for all the analyses. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables (see Table 1). The association of significance between the explanatory variables and adequate IPTp-SP uptake was assessed with chi-square at 5% margin of error. Finally, a three-level multilevel logistic regression was fitted with five models. First of the five models was the empty model without any explanatory variable (Model I). The empty model is an unconditional model, which accounted for the magnitude of variance between community and region levels. This was followed by a model bearing all the individual-level variables (Model II) and subsequently Model III, which accounted for only community-level variables. Model IV featured region-level variables alone whilst the complete and ultimate model included variables of all the aforementioned levels/models (Model V). Output from the models comprised fixed and random effects. The fixed effects were reported as adjusted odds ratios (aORs) at 95% credible intervals (CrIs). However, the random effects reflected as intraclass correlation coefficient (ICC) and median odds ratio (MOR) [27]. 
With the ICC, it was possible to gauge the extent of variance in the possibility or tendency of adequate IPTp-SP uptake that is explained by community and region level factors. On the order hand, the MOR quantified the community and region variance as odds ratios and in addition estimated the likelihood of adequate IPTp-SP uptake that is influenced by community and region level issues. Groups having least observations were set as reference groups in the models.\n\nTable 1Sample by IPTp-SP utilization during last pregnancyVariableAt least 3 doses of IPTp-SP in last Pregnancy\nNon (%)\nYesn (%)\nTotaln (%)\nX\n2; p-value\nIndividual level\n Age19.638; p < 0.05  15–19143(50.1)142(49.9)286(100)  20–24553(52.0)510(48.0)1063(100)  25–29606(55.8)479(44.2)1085(100)  30–34497(56.4)385(43.6)882(100)  35–39318(54.8)262(45.2)580(100)  40–44163(59.6)110(40.4)273(100)  45–4949(58.2)35(41.8)84(100)\n Education45.638; p < 0.01  No education379(55.1)309(44.9)689(100)  Primary1294(55.0)1060(45.0)2354(100)  Secondary537(54.4)450(45.6)987(100)  Higher118(52.9)105(47.1)224(100)\n Wealth quintile97.638; p < 0.001  Poor572(53.8)491(46.2)1063(100)  Middle863(56.9)654(43.1)1517(100)  Rich894(53.4)780(46.6)1675(100)\n Mosquito bite causes malaria18.832; p < 0.01  No1573(57.4)1166(42.6)2739(100)  Yes755(49.9)759(50.1)1515(100)\n Sleeping under ITN prevents malaria24.221; p < 0.01  No553(60.0)368(40.0)920(100)  Yes1776(53.3)1557(46.7)3333(100)\n Destroying mosquito breeding site prevents malaria19.361; p < 0.01  No1909(54.0)1624(46.0)3533(100)  Yes420(58.3)301(41.7)720(100)\nCommunity level factors\n Residential status52.873; p < 0.001  Urban493(52.4)448(47.6)940(100)  Rural1664(56.2)1298(43.8)2962(100)  Refugee settlement172(49.1)179(50.9)351(100)\n Zone45.911; p < 0.001  Southern719(56.0)565(44.0)1284(100)  Eastern617(55.6)493(44.4)1110(100)  Northern465(54.2)392(45.8)858(100)  Western528(52.7)474(47.3)1002(100)\n Socio-economic disadvantage109.111; p < 0.001  Tertile 1(least disadvantaged)935(53.2)822(46.8)1757(100)  Tertile 2855(58.1)615(41.9)1470(100)  Tertile 3(most disadvantaged)539(52.5)487(47.5)1026(100)\nRegion level factor Socio-economic disadvantage59.651; p < 0.001  Tertile 1(least disadvantaged)1048(55.2)851(44.8)1898(100)  Tertile 2860(54.2)725(45.8)1585(100)  Tertile 3(most disadvantaged)421(54.7)349(45.3)770(100) Total2329(54.7)1925(45.3)4254(100)Source: 2018-19 Uganda Malaria Indicator Survey\nSample by IPTp-SP utilization during last pregnancy\nSource: 2018-19 Uganda Malaria Indicator Survey\nModel fit and specifications First, multicollinearity between the explanatory variables was assessed using the Variance Inflation Factor (VIF) [28] and the results showed that the variables were not highly correlated to warrant a concern (mean VIF = 1.55, minimum VIF = 1.19, maximum VIF = 2.09). Second, the Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models. Third, the Markov Chain Monte Carlo (MCMC) estimation was applied in modelling [29] and all models were specified using 3.05 version of MLwinN package in the Stata software.\nFirst, multicollinearity between the explanatory variables was assessed using the Variance Inflation Factor (VIF) [28] and the results showed that the variables were not highly correlated to warrant a concern (mean VIF = 1.55, minimum VIF = 1.19, maximum VIF = 2.09). Second, the Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models. 
Third, the Markov Chain Monte Carlo (MCMC) estimation was applied in modelling [29] and all models were specified using 3.05 version of MLwinN package in the Stata software.\nEthics approval The 2018–2019 UMIS had approval from the Uganda National Council for Science and Technology (UNCST), the Ethics Committee of the School of Medicine Research and Ethics Committee (SOMREC) of the Makerere University as well as the institutional review board of the ICF. The author applied and was granted access to utilize the dataset for the purpose of this study.\nThe 2018–2019 UMIS had approval from the Uganda National Council for Science and Technology (UNCST), the Ethics Committee of the School of Medicine Research and Ethics Committee (SOMREC) of the Makerere University as well as the institutional review board of the ICF. The author applied and was granted access to utilize the dataset for the purpose of this study.", "Adequate uptake of intermittent preventive therapy using IPTp-SP was the dependent variable for this study. During the 2018-19 UMIS, women were asked if they took IPTp-SP (Fansidar™), and the number of times this was taken during their last pregnancy. The current recommendation by the WHO endorses at least three doses for all pregnant women in locations that have moderate to high malaria transmission of which Africa and for that matter Uganda is inclusive [8, 24]. Following this recommendation, all women who revealed that they had at least three doses of IPTp-SP were categorized as having adequate IPTp-SP (coded 1) whilst those who had less than three were categorized otherwise (coded 0).", "A total of ten (10) independent variables were included and were categorized under individual, community and region-level factors. This was possible due the hierarchical nature of the data set. The individual-level factors comprised age in completed years (15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49) and education measured as highest level of educational attainment (no education, primary, secondary, higher). In addition were wealth quintile (poor, middle, rich), whether mosquito bite causes malaria (yes or no), if sleeping under ITN prevents malaria (yes or no) and if malaria can be prevented by destroying mosquito breeding site (yes or no). The community-level factors comprised residential status (urban, rural, refugee settlement), and socio-economic disadvantage at the community level. The sole region-level factor was socio-economic disadvantage at the region level. Several studies using the DHS dataset have followed the same categorization [21, 25, 26].", "Stata version 13 was utilized for all the analyses. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables (see Table 1). The association of significance between the explanatory variables and adequate IPTp-SP uptake was assessed with chi-square at 5% margin of error. Finally, a three-level multilevel logistic regression was fitted with five models. First of the five models was the empty model without any explanatory variable (Model I). The empty model is an unconditional model, which accounted for the magnitude of variance between community and region levels. This was followed by a model bearing all the individual-level variables (Model II) and subsequently Model III, which accounted for only community-level variables. 
Model IV featured region-level variables alone whilst the complete and ultimate model included variables of all the aforementioned levels/models (Model V). Output from the models comprised fixed and random effects. The fixed effects were reported as adjusted odds ratios (aORs) at 95% credible intervals (CrIs). However, the random effects reflected as intraclass correlation coefficient (ICC) and median odds ratio (MOR) [27]. With the ICC, it was possible to gauge the extent of variance in the possibility or tendency of adequate IPTp-SP uptake that is explained by community and region level factors. On the order hand, the MOR quantified the community and region variance as odds ratios and in addition estimated the likelihood of adequate IPTp-SP uptake that is influenced by community and region level issues. Groups having least observations were set as reference groups in the models.\n\nTable 1Sample by IPTp-SP utilization during last pregnancyVariableAt least 3 doses of IPTp-SP in last Pregnancy\nNon (%)\nYesn (%)\nTotaln (%)\nX\n2; p-value\nIndividual level\n Age19.638; p < 0.05  15–19143(50.1)142(49.9)286(100)  20–24553(52.0)510(48.0)1063(100)  25–29606(55.8)479(44.2)1085(100)  30–34497(56.4)385(43.6)882(100)  35–39318(54.8)262(45.2)580(100)  40–44163(59.6)110(40.4)273(100)  45–4949(58.2)35(41.8)84(100)\n Education45.638; p < 0.01  No education379(55.1)309(44.9)689(100)  Primary1294(55.0)1060(45.0)2354(100)  Secondary537(54.4)450(45.6)987(100)  Higher118(52.9)105(47.1)224(100)\n Wealth quintile97.638; p < 0.001  Poor572(53.8)491(46.2)1063(100)  Middle863(56.9)654(43.1)1517(100)  Rich894(53.4)780(46.6)1675(100)\n Mosquito bite causes malaria18.832; p < 0.01  No1573(57.4)1166(42.6)2739(100)  Yes755(49.9)759(50.1)1515(100)\n Sleeping under ITN prevents malaria24.221; p < 0.01  No553(60.0)368(40.0)920(100)  Yes1776(53.3)1557(46.7)3333(100)\n Destroying mosquito breeding site prevents malaria19.361; p < 0.01  No1909(54.0)1624(46.0)3533(100)  Yes420(58.3)301(41.7)720(100)\nCommunity level factors\n Residential status52.873; p < 0.001  Urban493(52.4)448(47.6)940(100)  Rural1664(56.2)1298(43.8)2962(100)  Refugee settlement172(49.1)179(50.9)351(100)\n Zone45.911; p < 0.001  Southern719(56.0)565(44.0)1284(100)  Eastern617(55.6)493(44.4)1110(100)  Northern465(54.2)392(45.8)858(100)  Western528(52.7)474(47.3)1002(100)\n Socio-economic disadvantage109.111; p < 0.001  Tertile 1(least disadvantaged)935(53.2)822(46.8)1757(100)  Tertile 2855(58.1)615(41.9)1470(100)  Tertile 3(most disadvantaged)539(52.5)487(47.5)1026(100)\nRegion level factor Socio-economic disadvantage59.651; p < 0.001  Tertile 1(least disadvantaged)1048(55.2)851(44.8)1898(100)  Tertile 2860(54.2)725(45.8)1585(100)  Tertile 3(most disadvantaged)421(54.7)349(45.3)770(100) Total2329(54.7)1925(45.3)4254(100)Source: 2018-19 Uganda Malaria Indicator Survey\nSample by IPTp-SP utilization during last pregnancy\nSource: 2018-19 Uganda Malaria Indicator Survey", "First, multicollinearity between the explanatory variables was assessed using the Variance Inflation Factor (VIF) [28] and the results showed that the variables were not highly correlated to warrant a concern (mean VIF = 1.55, minimum VIF = 1.19, maximum VIF = 2.09). Second, the Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models. 
Third, the Markov Chain Monte Carlo (MCMC) estimation was applied in modelling [29] and all models were specified using 3.05 version of MLwinN package in the Stata software.", "The 2018–2019 UMIS had approval from the Uganda National Council for Science and Technology (UNCST), the Ethics Committee of the School of Medicine Research and Ethics Committee (SOMREC) of the Makerere University as well as the institutional review board of the ICF. The author applied and was granted access to utilize the dataset for the purpose of this study.", "Descriptive findings As shown in Table 1, less than half of the surveyed women had three or more IPTp-SP doses during their last pregnancies (45.3%). All the explanatory variables showed significant association at 95% level of significance. Nearly half of women aged 15–19 (49.9%) and the highly educated (47.1%) women received at least three doses. A greater section of rich women had at least three IPTp-SP doses (46.6%) and 50.1% of women who knew that mosquito bite causes malaria did the same. A significant proportion of women who knew that sleeping under ITN prevents malaria (46.7%) and those who did not know that destroying mosquito breeding sites prevents malaria (46.0%) received at three or more doses of IPTp-SP. Receiving at least three doses of IPTp-SP was profound in refugee settlements (50.9%), Western zone (47.3%), most disadvantaged communities (47.5%) and moderately disadvantaged regions (45.8%).\nAs shown in Table 1, less than half of the surveyed women had three or more IPTp-SP doses during their last pregnancies (45.3%). All the explanatory variables showed significant association at 95% level of significance. Nearly half of women aged 15–19 (49.9%) and the highly educated (47.1%) women received at least three doses. A greater section of rich women had at least three IPTp-SP doses (46.6%) and 50.1% of women who knew that mosquito bite causes malaria did the same. A significant proportion of women who knew that sleeping under ITN prevents malaria (46.7%) and those who did not know that destroying mosquito breeding sites prevents malaria (46.0%) received at three or more doses of IPTp-SP. Receiving at least three doses of IPTp-SP was profound in refugee settlements (50.9%), Western zone (47.3%), most disadvantaged communities (47.5%) and moderately disadvantaged regions (45.8%).\nFixed effects Table 2 presents the findings of the fixed effects. The complete and final model (Model V) indicates that women aged 15–19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45–49 [aOR = 0.42, Crl = 0.33–0.98]. Those who had no formal education were less likely to achieve the minimum recommended doses [aOR = 0.51, Crl = 0.35–0.81]. Similarly, poor women [aOR = 0.80, Crl = 0.78–0.91] and women who knew that mosquito bite can cause malaria [aOR = 0.84, Crl = 0.73–0.96] were less likely to have three or more doses of IPTp-SP relative to the rich women and women who did not know that mosquito bite can cause malaria respectively. Women who reported that sleeping under ITN prevents malaria had higher odds of three or more IPTp-SP doses [aOR = 1.22, Crl = 1.04–1.43] relative to those who reported otherwise. The findings further revealed that urban residents were less probable to have at least three doses [aOR = 0.60, CI = 0.46–0.82] as well as most disadvantaged communities [aOR = 0.67, CI = 0.50–0.86] relative to rural residents and least disadvantaged communities correspondingly. 
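(The three-level random-intercept logistic specification behind these estimates is not shown as code in the paper. A minimal, illustrative Python analogue is sketched below: it uses a variational-Bayes estimator rather than the MCMC estimation in MLwiN that was actually employed, and every variable name, including iptp3, community_id, region_id and the covariates, is a hypothetical placeholder.)

```python
# Illustrative three-level random-intercept logistic model (an analogue of Model V).
# Assumes a data frame `df` with the binary outcome `iptp3` (1 = three or more
# IPTp-SP doses), individual covariates, and community/region identifiers.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

fixed = (
    "iptp3 ~ C(age_group) + C(education) + C(wealth) + C(bite_causes_malaria)"
    " + C(itn_prevents_malaria) + C(destroy_sites_prevents) + C(residence)"
    " + C(community_disadvantage) + C(region_disadvantage)"
)

# Random intercepts for communities and regions, treated as separate variance
# components (a simple approximation to the nested community-within-region structure).
vc = {
    "community": "0 + C(community_id)",
    "region": "0 + C(region_id)",
}

model = BinomialBayesMixedGLM.from_formula(fixed, vc, df)
result = model.fit_vb()  # fast variational fit; the paper itself used MCMC in MLwiN
print(result.summary())
```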
Similarly, most disadvantaged regions were aligned with less likelihood of three or more IPTp-SP uptake [aOR = 0.59, CI = 0.48–0.78] compared to the least disadvantaged regions.\n\nTable 2Individual, community and region-level predictors of IPTp-SP utilizationModel IModel IIModel IIIModel IVModel V\naOR [95% Crl]\naOR [95% Crl]\naOR [95% Crl]\naOR [95% Crl]\nFixed effects Individual level\n  Age   15–190.22** [0.11–0.81]0.42**[0.33–0.98]   20–240.43*[0.24–0.98]0.38*[0.25–0.79]   25–290.99[0.62–1.62]0.81[0.53–1.22]   30–341.01[0.62–1.64]0.81[0.53–1.24]   35–390.94[0.56–1.55]0.76[0.48–1.18]   40–440.99[0.58–1.71]0.81[0.50–1.30]   45–491[1]1[1]\n  Education   No education0.33**[0.17–0.77]0.51**[0.35–0.81]   Primary0.53*[0.44–0.82]0.62*[0.51–0.91]   Secondary1.09[0.74–1.60]1.06[0.75–1.50]   Higher1[1]1[1]\n  Wealth quintile   Poor0.81*[0.79–0.94]0.80**[0.78–0.91]   Middle0.89[0.75–1.06]0.88[0.74–1.05]   Rich1[1]1[1]\n  Mosquito bite causes malaria   No0.85*[0.74–0.97]0.84*[0.73–0.96]   Yes1[1]1[1]\n  Sleeping under ITN prevents malaria   No1[1]1[1]   Yes1.23**[1.06–1.43]1.22*[1.04–1.43]\n  Destroying mosquito breeding site prevents malaria   No1.16[0.97–1.39]1.165[0.96–1.41]   Yes1[1]1[1]\n Community level factors\n  Residential status   Urban0.69***[0.31–0.88]0.60***[0.46–0.82]   Rural0.78*[0.68–0.96]0.80[0.61–1.05]   Refugee settlement1[1]1[1]\n  Zone   Southern1[1]1[1]   Eastern1.05[0.74–1.49]1.07[0.71–1.61]   Northern1.08[0.73–1.61]1.12[0.68–1.83]   Western1.19[0.84–1.68]1.22[0.81–1.84]\n  Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.87[0.72–1.05]0.91[0.73–1.12]   Tertile 3(most disadvantaged)0.71*[0.65–0.92]0.67**[0.50–0.86]\n Region level factor\n  Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.71**[0.51–0.89]0.68***[0.54–0.82]   Tertile 3(most disadvantaged)0.55**[0.39–0.78]0.59***[0.48–0.78]\nRandom effects\n Region level  Variance (SE)1.13[1.06–1.20]1.14[1.04–1.21]1.12[1.05–1.20]1.15[1.07–1.21]1.14[1.06–1.20]  ICC (%)18.00[15.01–22.90]17.72[15.20-19.08]16.33[15.21-9.00]17.10[16.91–18.70]18.22[17.90-19.02]  MOR2.76[2.03–3.42]2.77[2.65–2.86]2.74[2.66–2.84]2.78[2.68–2.86]2.89[2.33–3.51]  Explained variation[1]35.72[29.06–41.20]31.03[27.61–37.31]33.20[27.31–37.28]30.82[26.97–37.91]\n Community level  Variance (SE)1.86[1.12–2.18]2.00[1.71–2.31]1.98[1.14–2.20]1.59[1.22–1.98]1.99[1.42–2.36]  ICC (%)47.60[39.8–50.7]49.80[37.98–53.70]48.50[36.90-59.08]48.40[37.00-51.20]49.22[38.99–52.51]  MOR3.67[2.74–4.09]3.85[3.48–4.26]3.83[2.77–4.12]3.33[2.87–3.83]3.84[3.12–4.33]  Explained variation[1]52.00[46.30–59.80]48.99[41.40-55.81]47.89[38.21–52.55]48.70[43.00–52.00]\n Model fit statistics  Bayesian DIC58396002599860106012\n   N  Region level1515151515  Community level340340340340340  Individual42544254425442544254\n*\np < 0.05, **\np < 0.01, ***\np < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference\nIndividual, community and region-level predictors of IPTp-SP utilization\n\n*\np < 0.05, **\np < 0.01, ***\np < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference\nTable 2 presents the findings of the fixed effects. The complete and final model (Model V) indicates that women aged 15–19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45–49 [aOR = 0.42, Crl = 0.33–0.98]. 
Those who had no formal education were less likely to achieve the minimum recommended doses [aOR = 0.51, Crl = 0.35–0.81]. Similarly, poor women [aOR = 0.80, Crl = 0.78–0.91] and women who knew that mosquito bite can cause malaria [aOR = 0.84, Crl = 0.73–0.96] were less likely to have three or more doses of IPTp-SP relative to the rich women and women who did not know that mosquito bite can cause malaria respectively. Women who reported that sleeping under ITN prevents malaria had higher odds of three or more IPTp-SP doses [aOR = 1.22, Crl = 1.04–1.43] relative to those who reported otherwise. The findings further revealed that urban residents were less probable to have at least three doses [aOR = 0.60, CI = 0.46–0.82] as well as most disadvantaged communities [aOR = 0.67, CI = 0.50–0.86] relative to rural residents and least disadvantaged communities correspondingly. Similarly, most disadvantaged regions were aligned with less likelihood of three or more IPTp-SP uptake [aOR = 0.59, CI = 0.48–0.78] compared to the least disadvantaged regions.\n\nTable 2Individual, community and region-level predictors of IPTp-SP utilizationModel IModel IIModel IIIModel IVModel V\naOR [95% Crl]\naOR [95% Crl]\naOR [95% Crl]\naOR [95% Crl]\nFixed effects Individual level\n  Age   15–190.22** [0.11–0.81]0.42**[0.33–0.98]   20–240.43*[0.24–0.98]0.38*[0.25–0.79]   25–290.99[0.62–1.62]0.81[0.53–1.22]   30–341.01[0.62–1.64]0.81[0.53–1.24]   35–390.94[0.56–1.55]0.76[0.48–1.18]   40–440.99[0.58–1.71]0.81[0.50–1.30]   45–491[1]1[1]\n  Education   No education0.33**[0.17–0.77]0.51**[0.35–0.81]   Primary0.53*[0.44–0.82]0.62*[0.51–0.91]   Secondary1.09[0.74–1.60]1.06[0.75–1.50]   Higher1[1]1[1]\n  Wealth quintile   Poor0.81*[0.79–0.94]0.80**[0.78–0.91]   Middle0.89[0.75–1.06]0.88[0.74–1.05]   Rich1[1]1[1]\n  Mosquito bite causes malaria   No0.85*[0.74–0.97]0.84*[0.73–0.96]   Yes1[1]1[1]\n  Sleeping under ITN prevents malaria   No1[1]1[1]   Yes1.23**[1.06–1.43]1.22*[1.04–1.43]\n  Destroying mosquito breeding site prevents malaria   No1.16[0.97–1.39]1.165[0.96–1.41]   Yes1[1]1[1]\n Community level factors\n  Residential status   Urban0.69***[0.31–0.88]0.60***[0.46–0.82]   Rural0.78*[0.68–0.96]0.80[0.61–1.05]   Refugee settlement1[1]1[1]\n  Zone   Southern1[1]1[1]   Eastern1.05[0.74–1.49]1.07[0.71–1.61]   Northern1.08[0.73–1.61]1.12[0.68–1.83]   Western1.19[0.84–1.68]1.22[0.81–1.84]\n  Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.87[0.72–1.05]0.91[0.73–1.12]   Tertile 3(most disadvantaged)0.71*[0.65–0.92]0.67**[0.50–0.86]\n Region level factor\n  Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.71**[0.51–0.89]0.68***[0.54–0.82]   Tertile 3(most disadvantaged)0.55**[0.39–0.78]0.59***[0.48–0.78]\nRandom effects\n Region level  Variance (SE)1.13[1.06–1.20]1.14[1.04–1.21]1.12[1.05–1.20]1.15[1.07–1.21]1.14[1.06–1.20]  ICC (%)18.00[15.01–22.90]17.72[15.20-19.08]16.33[15.21-9.00]17.10[16.91–18.70]18.22[17.90-19.02]  MOR2.76[2.03–3.42]2.77[2.65–2.86]2.74[2.66–2.84]2.78[2.68–2.86]2.89[2.33–3.51]  Explained variation[1]35.72[29.06–41.20]31.03[27.61–37.31]33.20[27.31–37.28]30.82[26.97–37.91]\n Community level  Variance (SE)1.86[1.12–2.18]2.00[1.71–2.31]1.98[1.14–2.20]1.59[1.22–1.98]1.99[1.42–2.36]  ICC (%)47.60[39.8–50.7]49.80[37.98–53.70]48.50[36.90-59.08]48.40[37.00-51.20]49.22[38.99–52.51]  MOR3.67[2.74–4.09]3.85[3.48–4.26]3.83[2.77–4.12]3.33[2.87–3.83]3.84[3.12–4.33]  Explained 
variation[1]52.00[46.30–59.80]48.99[41.40-55.81]47.89[38.21–52.55]48.70[43.00–52.00]\n Model fit statistics  Bayesian DIC58396002599860106012\n   N  Region level1515151515  Community level340340340340340  Individual42544254425442544254\n*\np < 0.05, **\np < 0.01, ***\np < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference\nIndividual, community and region-level predictors of IPTp-SP utilization\n\n*\np < 0.05, **\np < 0.01, ***\np < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference\nRandom effects Outcome of the random effects were also reported in Table 2. The empty model (Model I), revealed that the discrepancy in uptake of three or more IPTp-SP doses was substantial at the community level [σ2 = 1. 86; Crl = 11.12–2.18] than regional level [σ2 = 1.13; Crl = 1.06–1.20]. Also, the ICC of Model I indicated that 18% and 47% disparity in IPTp-SP uptake are linked to region and community level factors respectively. According to the final model (Model V), the MORs indicate that when a woman changes her community to a community with higher likelihood of three or more doses of IPTp-SP, she has 3.83 higher chances of achieving the recommended doses (i.e. three or more). Similarly, moving to a region with high likelihood of three or more IPTp-SP doses is associated with 2.74-fold increase in having three or more IPTp-SP doses.\nOutcome of the random effects were also reported in Table 2. The empty model (Model I), revealed that the discrepancy in uptake of three or more IPTp-SP doses was substantial at the community level [σ2 = 1. 86; Crl = 11.12–2.18] than regional level [σ2 = 1.13; Crl = 1.06–1.20]. Also, the ICC of Model I indicated that 18% and 47% disparity in IPTp-SP uptake are linked to region and community level factors respectively. According to the final model (Model V), the MORs indicate that when a woman changes her community to a community with higher likelihood of three or more doses of IPTp-SP, she has 3.83 higher chances of achieving the recommended doses (i.e. three or more). Similarly, moving to a region with high likelihood of three or more IPTp-SP doses is associated with 2.74-fold increase in having three or more IPTp-SP doses.", "As shown in Table 1, less than half of the surveyed women had three or more IPTp-SP doses during their last pregnancies (45.3%). All the explanatory variables showed significant association at 95% level of significance. Nearly half of women aged 15–19 (49.9%) and the highly educated (47.1%) women received at least three doses. A greater section of rich women had at least three IPTp-SP doses (46.6%) and 50.1% of women who knew that mosquito bite causes malaria did the same. A significant proportion of women who knew that sleeping under ITN prevents malaria (46.7%) and those who did not know that destroying mosquito breeding sites prevents malaria (46.0%) received at three or more doses of IPTp-SP. Receiving at least three doses of IPTp-SP was profound in refugee settlements (50.9%), Western zone (47.3%), most disadvantaged communities (47.5%) and moderately disadvantaged regions (45.8%).", "Table 2 presents the findings of the fixed effects. The complete and final model (Model V) indicates that women aged 15–19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45–49 [aOR = 0.42, Crl = 0.33–0.98]. 
Those who had no formal education were less likely to achieve the minimum recommended doses [aOR = 0.51, Crl = 0.35–0.81]. Similarly, poor women [aOR = 0.80, Crl = 0.78–0.91] and women who knew that mosquito bite can cause malaria [aOR = 0.84, Crl = 0.73–0.96] were less likely to have three or more doses of IPTp-SP relative to the rich women and women who did not know that mosquito bite can cause malaria respectively. Women who reported that sleeping under ITN prevents malaria had higher odds of three or more IPTp-SP doses [aOR = 1.22, Crl = 1.04–1.43] relative to those who reported otherwise. The findings further revealed that urban residents were less probable to have at least three doses [aOR = 0.60, CI = 0.46–0.82] as well as most disadvantaged communities [aOR = 0.67, CI = 0.50–0.86] relative to rural residents and least disadvantaged communities correspondingly. Similarly, most disadvantaged regions were aligned with less likelihood of three or more IPTp-SP uptake [aOR = 0.59, CI = 0.48–0.78] compared to the least disadvantaged regions.\n\nTable 2Individual, community and region-level predictors of IPTp-SP utilizationModel IModel IIModel IIIModel IVModel V\naOR [95% Crl]\naOR [95% Crl]\naOR [95% Crl]\naOR [95% Crl]\nFixed effects Individual level\n  Age   15–190.22** [0.11–0.81]0.42**[0.33–0.98]   20–240.43*[0.24–0.98]0.38*[0.25–0.79]   25–290.99[0.62–1.62]0.81[0.53–1.22]   30–341.01[0.62–1.64]0.81[0.53–1.24]   35–390.94[0.56–1.55]0.76[0.48–1.18]   40–440.99[0.58–1.71]0.81[0.50–1.30]   45–491[1]1[1]\n  Education   No education0.33**[0.17–0.77]0.51**[0.35–0.81]   Primary0.53*[0.44–0.82]0.62*[0.51–0.91]   Secondary1.09[0.74–1.60]1.06[0.75–1.50]   Higher1[1]1[1]\n  Wealth quintile   Poor0.81*[0.79–0.94]0.80**[0.78–0.91]   Middle0.89[0.75–1.06]0.88[0.74–1.05]   Rich1[1]1[1]\n  Mosquito bite causes malaria   No0.85*[0.74–0.97]0.84*[0.73–0.96]   Yes1[1]1[1]\n  Sleeping under ITN prevents malaria   No1[1]1[1]   Yes1.23**[1.06–1.43]1.22*[1.04–1.43]\n  Destroying mosquito breeding site prevents malaria   No1.16[0.97–1.39]1.165[0.96–1.41]   Yes1[1]1[1]\n Community level factors\n  Residential status   Urban0.69***[0.31–0.88]0.60***[0.46–0.82]   Rural0.78*[0.68–0.96]0.80[0.61–1.05]   Refugee settlement1[1]1[1]\n  Zone   Southern1[1]1[1]   Eastern1.05[0.74–1.49]1.07[0.71–1.61]   Northern1.08[0.73–1.61]1.12[0.68–1.83]   Western1.19[0.84–1.68]1.22[0.81–1.84]\n  Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.87[0.72–1.05]0.91[0.73–1.12]   Tertile 3(most disadvantaged)0.71*[0.65–0.92]0.67**[0.50–0.86]\n Region level factor\n  Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.71**[0.51–0.89]0.68***[0.54–0.82]   Tertile 3(most disadvantaged)0.55**[0.39–0.78]0.59***[0.48–0.78]\nRandom effects\n Region level  Variance (SE)1.13[1.06–1.20]1.14[1.04–1.21]1.12[1.05–1.20]1.15[1.07–1.21]1.14[1.06–1.20]  ICC (%)18.00[15.01–22.90]17.72[15.20-19.08]16.33[15.21-9.00]17.10[16.91–18.70]18.22[17.90-19.02]  MOR2.76[2.03–3.42]2.77[2.65–2.86]2.74[2.66–2.84]2.78[2.68–2.86]2.89[2.33–3.51]  Explained variation[1]35.72[29.06–41.20]31.03[27.61–37.31]33.20[27.31–37.28]30.82[26.97–37.91]\n Community level  Variance (SE)1.86[1.12–2.18]2.00[1.71–2.31]1.98[1.14–2.20]1.59[1.22–1.98]1.99[1.42–2.36]  ICC (%)47.60[39.8–50.7]49.80[37.98–53.70]48.50[36.90-59.08]48.40[37.00-51.20]49.22[38.99–52.51]  MOR3.67[2.74–4.09]3.85[3.48–4.26]3.83[2.77–4.12]3.33[2.87–3.83]3.84[3.12–4.33]  Explained 
variation[1]52.00[46.30–59.80]48.99[41.40-55.81]47.89[38.21–52.55]48.70[43.00–52.00]\n Model fit statistics  Bayesian DIC58396002599860106012\n   N  Region level1515151515  Community level340340340340340  Individual42544254425442544254\n*\np < 0.05, **\np < 0.01, ***\np < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference\nIndividual, community and region-level predictors of IPTp-SP utilization\n\n*\np < 0.05, **\np < 0.01, ***\np < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference", "Outcome of the random effects were also reported in Table 2. The empty model (Model I), revealed that the discrepancy in uptake of three or more IPTp-SP doses was substantial at the community level [σ2 = 1. 86; Crl = 11.12–2.18] than regional level [σ2 = 1.13; Crl = 1.06–1.20]. Also, the ICC of Model I indicated that 18% and 47% disparity in IPTp-SP uptake are linked to region and community level factors respectively. According to the final model (Model V), the MORs indicate that when a woman changes her community to a community with higher likelihood of three or more doses of IPTp-SP, she has 3.83 higher chances of achieving the recommended doses (i.e. three or more). Similarly, moving to a region with high likelihood of three or more IPTp-SP doses is associated with 2.74-fold increase in having three or more IPTp-SP doses.", "The aim of this study was to investigate the predictors of three or more IPTp-SP doses in Uganda as recommended by the WHO [8, 24]. Less than half of the women met the recommended dosage. This is far below the prevalence in other sub-Sahara African countries such as Ghana (63%) [30] but higher than the proportion of women who obtain at least three doses in Mali (36.7%) [31] and Senegal (37.51%) [32]. The prevalence, however, denotes an appreciation from 16% to 18% as reported by earlier studies based on the 2016 Uganda Demographic and Health Survey dataset [21, 33]. The relative increase in optimal IPTp-SP uptake since 2016 is suggestive that the recent anti-malaria and IPTp-SP initiatives are useful. Yet, for Uganda alone to account for 5% of the global malaria burden [1] is not good enough and more public health sensitization, and behavioural change communication interventions should be intensified. Hitherto, IPTp-SP administration has been dependent on ANC attendance, meanwhile, even where ANC attendance is high, as in the case of Malawi (84.0%) sometimes optimal uptake of IPTp-SP is low (24.8%) [34]. Thus, varied interventions such as adopting technological approaches and mobile phone alerts/reminders could prompt pregnant women to achieve the recommended dosage as a way of augmenting the traditional approach of administering the drug during ANC.\nThe study revealed that women aged 15–19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45–49. Due to stigmatization, cultural and traditional connotations of adolescent pregnancy in Uganda [35], adolescent pregnant women may be less motivated to frequent health facilities, and thereby have an increased likelihood of missing or having lower IPTp-SP uptake. Recent synthesis of evidence on maternal healthcare utilization also noted that adolescents generally have lower maternal healthcare utilization [36]. 
Perhaps, having a secluded and special care for adolescent pregnant women may motivate their likelihood of visiting the health facility for the doses. In the event that the healthcare provider forgets to administer IPTp-SP, it is common knowledge that not all adolescents will feel comfortable and confident to query the healthcare provider for the drug whilst she is in a queue with her mothers’ age mates and possibly feels apprehended.\nThose who had no formal education and poor women were less likely to achieve the minimum recommended doses. Being poor and/or uneducated denotes disempowerment [26] and there is congruence in the literature about the positive impact of empowerment on maternal healthcare utilization [37, 38] and taking charge of one’s holistic wellbeing [39]. This observation points to the need for the Uganda Government and its partner organizations to appreciate that optimising IPTp-SP utilization transcends beyond provision of funds to secure the drugs. Thus, enhancing education opportunities for women and widening their wealth status such that every woman in the reproductive age will be competitive in finding a decent occupation could facilitate uptake of IPTp-SP in the country. Whilst education offers the knowledge for women to appreciate the need for achieving the recommended dosage, enhanced wealth status will offer the financial or economic power required to offset cost which could have hindered them from accessing the recommended dosage.\nWomen who knew that mosquito bite can cause malaria were less likely to have three or more doses of IPTp-SP, however, those who knew that sleeping under ITN can prevent malaria had higher odds of three or more IPTp-SP doses. All things being equal, women who are knowledgeable about possible routes and preventive strategies of malaria are expected to utilize all available opportunities to protect themselves and their newborns [40]. Arguably, a greater section of the women may be unsure of the information they possess about the causative and preventive routes. The content of ANC messages delivered throughout the trimesters could reflect issues pertaining to malaria causation and preventive strategies in order for women to be conscious and be appreciative of the need to achieve the full dosage. This is very critical on the account that malaria is endemic in the over 95% of the country [4].\nThe findings further revealed that urban residents were less probable to have at least three doses. Unlike rural settings, urban centres are usually clean with less hideouts for mosquitoes or limited conducive breeding sites for mosquitos. Consequently, there is a high temptation for urban residents to feel that they are less susceptible to malaria even during pregnancy. However, a rural resident may be concerned about being at increased risk of malaria and hence utilize all means possible to achieve the recommended doses. This finding further indicates that availability of health facilities does not necessarily imply healthcare utilization. Thus, several other factors interrelate to determine utilization. This is because health facilities and health personnel are prevalent in urban settings of Uganda relative to the rural locations [41, 42] and all things being equal, urban residents were expected to have increased chances of achieving the recommended IPTp-SP dosage. 
Health education among urban women through the mass media may be useful to boost IPTp-SP uptake among them.\nMost disadvantaged communities and regions were aligned with less likelihood of three or more IPTp-SP uptake. Being a resident of most disadvantaged communities may be indicative of limited access to IPTp-SP outlets such as health facilities and mobile clinics [42]. As a result, these finding was anticipated. This points to the need for urgent appraisal of existing IPTp-SP administration/interventions and prioritization of the possibilities of increasing access among women in most disadvantageous regions and communities. It is by such approaches that malaria burden in Uganda can be reduced and thereby increase the country’s prospects of achieving SDG target 3.1 [7].", "This study emerged from the most recent national malaria survey of Uganda. Due to the sampling procedure and large sample, it is representative of all women aged 15–49 in Uganda. In spite of this strength, the study has some noteworthy limitations. The study adopted a cross-sectional design and as such causal inference is not permissible. Secondly, since the outcome variable was self-reported, under-reporting or over-reporting of optimal IPTp-SP uptake is plausible. Also, since the study was based on pre-existing data without information on health system factors, I was unable to interrogate health system factors.", "The study revealed that less than half of Ugandan women achieved the recommended IPTp-SP dosage at their last pregnancy preceding the 2018-19 UMIS. Community and region level factors are significant predictors of optimal IPTp-SP uptake. All existing IPTp-SP interventions that focus on individual level factors alone need to be reviewed to reflect broader community and region level factors in order to wane the high malaria prevalence in the country. More especially, augmenting IPTp-SP uptake in most disadvantaged communities would require much scrutiny into suitable approaches to ensure access by obviating all barriers. Also, contextually responsive behavioural change communication interventions could invoke women’s passion to achieve the recommended dosage." ]
[ null, null, null, null, null, null, null, null, null, "results", null, null, null, "discussion", null, "conclusion" ]
[ "Malaria", "Pregnancy", "Public health", "Maternal health", "Uganda" ]
Background: Malaria infection in pregnancy (MiP) is acknowledged as a weighty public health challenge and have ample dangers for the pregnant woman and her fetus [1–3]. The symptoms and complications of MiP fluctuate with respect to intensity of transmission within a defined geographical area as well as a woman’s level of acquired immunity [1, 2]. Nineteen countries within sub-Saharan Africa (SSA), with Uganda inclusive, and one Asian country account for 85% of the global malaria burden [1]. In 2018 alone, about US$ 2.7 billion investment was made into malaria control and elimination globally and three-quarters of this was directed to the World Health Organization (WHO) African Region. In spite of this, malaria continue to take a heavy toll on government and household expenses in Uganda [4]. Pregnant women have increased susceptibility to malaria and its associated complications such as maternal anaemia, stillbirth, low birth weight, and in worse scenarios, infant mortality and morbidity [5, 6]. MiP could be an impediment to the realization of targets 3.1 of the Sustainable Development Goals, thus reducing maternal mortality ratio to less than 70 deaths per 100,000 live births [7]. In order to shield women in moderate to high malaria transmission areas in Africa and their newborns from the adverse implications of MiP and its associated imminent problems, the WHO in 2012 revised its anti-malaria policy and recommended that all pregnant women within such regions should receive at least three doses of intermittent preventive treatment in pregnancy with antimalarial drug sulfadoxine-pyrimethamine (IPTp-SP) [8]. This recommendation was informed by the stagnated IPTp coverage rates and new evidence that reinforced the need for three doses or more [9]. Further, in 2016, the WHO developed new antenatal care (ANC) guidelines by indorsing an increase in the number of ANC to at least eight contacts between pregnant women and healthcare providers as a strategy to enhance prospects of IPTp-SP uptake [2]. IPTp-SP is to be taken by all pregnant women in moderate to high malaria transmission areas and should commence as early as possible within the second trimester. The doses are to be administered at least three times with at least one month interval until childbirth. Generally IPTp-SP declines episodes of MiP, neonatal mortality, low birth weight, and placental parasitaemia [2]. Empirical evidence indicate that IPTp contributes to 29%, 38% and 31% reduction in the incidence of low birth weight, severe malaria anaemia and neonatal mortality respectively [10, 11]. In 2018, out of the 36 African countries that recounted IPTp-SP coverage rates, about 31% of eligible women in the reproductive age received the recommended doses and this signified an increase relative to the rate reported in 2017 (22%) and 2010 (2%). In the case of Uganda, where malaria is endemic in 95% of the country [4], the 2019 World Malaria Report indicated that 30% or lower of the eligible women had three or more IPTp-SP doses [1]. Resistance of the parasite to SP has been noted, however, IPTp still remains a very cost-effective and a promising lifesaving intervention [12, 13]. In spite of the missed IPTp-SP opportunities in Uganda in the wake of high MiP [1, 14, 15], scanty literature exist on MiP in Uganda. The few studies have either been limited to some regions of the country [16–20], used relatively old national data [21], assessed the impact of intermittent preventive treatment during pregnancy [22] and among others. 
To date, empirical national study utilizing the 2018-19 Uganda Malaria Indicator Survey that explore predictors of attaining three or more doses of IPTp-SP in the country is non-existent. With the aim of invigorating a critical evidence-based discussion on MiP prevention, and offering empirical evidence to guide MiP policies, this study investigated the rate of uptake and factors affecting uptake of three or more IPTp-SP doses as recommended by the WHO. Methods: Data description Data from the 2018–2019 Malaria Indicator Survey (2018-19 UMIS) of Uganda was analysed. This is a cross-sectional survey that is executed by the Uganda Bureau of Statistics (UBOS) and the National Malaria Control Division (NMCD), however, technical assistance was granted by the Inner City Fund (ICF) [23]. The survey sampled participants through a two-stage sampling design with the intent of achieving estimation of three essential indicators, thus rural-urban locations, all fifteen administrative regions and national coverage. The sampling commenced with selection of clusters from refugee and non-refugee sample frames [23] from the enumeration areas delineated for the 2014 National Population and Housing Census (NPHC). In all, 320 clusters were selected from non-refugee sample frame (236 and 84 from rural and urban settlements respectively). Urban settlements were oversampled to obtain unbiased estimations for rural and urban settlements. The same procedure was followed to select 22 clusters from the refugee frame. The next sampling phase involved the systematic selection of households and 28 households per cluster were selected resulting in a total of 8,878. Eligible women were those aged 15–49 and were either permanent residents or visitors who joined the household the night before the survey. In all, 8,231 women were successfully interviewed signifying 98% and 99% response rates for non-refugee and refugee settlements respectively. In the present study, however, 4,254 women were eligible for inclusion based on completeness of data. Data from the 2018–2019 Malaria Indicator Survey (2018-19 UMIS) of Uganda was analysed. This is a cross-sectional survey that is executed by the Uganda Bureau of Statistics (UBOS) and the National Malaria Control Division (NMCD), however, technical assistance was granted by the Inner City Fund (ICF) [23]. The survey sampled participants through a two-stage sampling design with the intent of achieving estimation of three essential indicators, thus rural-urban locations, all fifteen administrative regions and national coverage. The sampling commenced with selection of clusters from refugee and non-refugee sample frames [23] from the enumeration areas delineated for the 2014 National Population and Housing Census (NPHC). In all, 320 clusters were selected from non-refugee sample frame (236 and 84 from rural and urban settlements respectively). Urban settlements were oversampled to obtain unbiased estimations for rural and urban settlements. The same procedure was followed to select 22 clusters from the refugee frame. The next sampling phase involved the systematic selection of households and 28 households per cluster were selected resulting in a total of 8,878. Eligible women were those aged 15–49 and were either permanent residents or visitors who joined the household the night before the survey. In all, 8,231 women were successfully interviewed signifying 98% and 99% response rates for non-refugee and refugee settlements respectively. 
In the present study, however, 4,254 women were eligible for inclusion based on completeness of data. Data description: Data from the 2018–2019 Malaria Indicator Survey (2018-19 UMIS) of Uganda was analysed. This is a cross-sectional survey that is executed by the Uganda Bureau of Statistics (UBOS) and the National Malaria Control Division (NMCD), however, technical assistance was granted by the Inner City Fund (ICF) [23]. The survey sampled participants through a two-stage sampling design with the intent of achieving estimation of three essential indicators, thus rural-urban locations, all fifteen administrative regions and national coverage. The sampling commenced with selection of clusters from refugee and non-refugee sample frames [23] from the enumeration areas delineated for the 2014 National Population and Housing Census (NPHC). In all, 320 clusters were selected from non-refugee sample frame (236 and 84 from rural and urban settlements respectively). Urban settlements were oversampled to obtain unbiased estimations for rural and urban settlements. The same procedure was followed to select 22 clusters from the refugee frame. The next sampling phase involved the systematic selection of households and 28 households per cluster were selected resulting in a total of 8,878. Eligible women were those aged 15–49 and were either permanent residents or visitors who joined the household the night before the survey. In all, 8,231 women were successfully interviewed signifying 98% and 99% response rates for non-refugee and refugee settlements respectively. In the present study, however, 4,254 women were eligible for inclusion based on completeness of data. Measurement of variables: Dependent variable Adequate uptake of intermittent preventive therapy using IPTp-SP was the dependent variable for this study. During the 2018-19 UMIS, women were asked if they took IPTp-SP (Fansidar™), and the number of times this was taken during their last pregnancy. The current recommendation by the WHO endorses at least three doses for all pregnant women in locations that have moderate to high malaria transmission of which Africa and for that matter Uganda is inclusive [8, 24]. Following this recommendation, all women who revealed that they had at least three doses of IPTp-SP were categorized as having adequate IPTp-SP (coded 1) whilst those who had less than three were categorized otherwise (coded 0). Adequate uptake of intermittent preventive therapy using IPTp-SP was the dependent variable for this study. During the 2018-19 UMIS, women were asked if they took IPTp-SP (Fansidar™), and the number of times this was taken during their last pregnancy. The current recommendation by the WHO endorses at least three doses for all pregnant women in locations that have moderate to high malaria transmission of which Africa and for that matter Uganda is inclusive [8, 24]. Following this recommendation, all women who revealed that they had at least three doses of IPTp-SP were categorized as having adequate IPTp-SP (coded 1) whilst those who had less than three were categorized otherwise (coded 0). Independent variables A total of ten (10) independent variables were included and were categorized under individual, community and region-level factors. This was possible due the hierarchical nature of the data set. 
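As a concrete illustration of the outcome coding described above, the sketch below recodes a self-reported dose count into the binary adequate-uptake indicator before turning to the covariates. This is a minimal sketch rather than the study's actual code: the use of pandas and the column names (sp_doses, adequate_iptp) are assumptions.

```python
import pandas as pd

# Hypothetical extract of the 2018-19 UMIS women's file: one row per woman,
# 'sp_doses' holding the self-reported number of IPTp-SP (Fansidar) doses
# taken during the last pregnancy.
df = pd.DataFrame({"sp_doses": [0, 1, 2, 3, 4, 2, 5]})

# WHO recommendation: at least three doses counts as adequate uptake (1), else 0.
df["adequate_iptp"] = (df["sp_doses"] >= 3).astype(int)

print(df["adequate_iptp"].value_counts(normalize=True))
```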
The individual-level factors comprised age in completed years (15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49) and education measured as highest level of educational attainment (no education, primary, secondary, higher). In addition were wealth quintile (poor, middle, rich), whether mosquito bite causes malaria (yes or no), if sleeping under ITN prevents malaria (yes or no) and if malaria can be prevented by destroying mosquito breeding site (yes or no). The community-level factors comprised residential status (urban, rural, refugee settlement), and socio-economic disadvantage at the community level. The sole region-level factor was socio-economic disadvantage at the region level. Several studies using the DHS dataset have followed the same categorization [21, 25, 26]. A total of ten (10) independent variables were included and were categorized under individual, community and region-level factors. This was possible due the hierarchical nature of the data set. The individual-level factors comprised age in completed years (15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49) and education measured as highest level of educational attainment (no education, primary, secondary, higher). In addition were wealth quintile (poor, middle, rich), whether mosquito bite causes malaria (yes or no), if sleeping under ITN prevents malaria (yes or no) and if malaria can be prevented by destroying mosquito breeding site (yes or no). The community-level factors comprised residential status (urban, rural, refugee settlement), and socio-economic disadvantage at the community level. The sole region-level factor was socio-economic disadvantage at the region level. Several studies using the DHS dataset have followed the same categorization [21, 25, 26]. Statistical analyses Stata version 13 was utilized for all the analyses. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables (see Table 1). The association of significance between the explanatory variables and adequate IPTp-SP uptake was assessed with chi-square at 5% margin of error. Finally, a three-level multilevel logistic regression was fitted with five models. First of the five models was the empty model without any explanatory variable (Model I). The empty model is an unconditional model, which accounted for the magnitude of variance between community and region levels. This was followed by a model bearing all the individual-level variables (Model II) and subsequently Model III, which accounted for only community-level variables. Model IV featured region-level variables alone whilst the complete and ultimate model included variables of all the aforementioned levels/models (Model V). Output from the models comprised fixed and random effects. The fixed effects were reported as adjusted odds ratios (aORs) at 95% credible intervals (CrIs). However, the random effects reflected as intraclass correlation coefficient (ICC) and median odds ratio (MOR) [27]. With the ICC, it was possible to gauge the extent of variance in the possibility or tendency of adequate IPTp-SP uptake that is explained by community and region level factors. On the order hand, the MOR quantified the community and region variance as odds ratios and in addition estimated the likelihood of adequate IPTp-SP uptake that is influenced by community and region level issues. Groups having least observations were set as reference groups in the models. 
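Written out, the three-level structure just described (women nested in communities nested in regions) corresponds to a random-intercept logistic model of the following generic form. The notation is mine and is only meant to mirror the description above, not the exact MLwiN parameterisation used in the study.

```latex
\log\!\left(\frac{\pi_{ijk}}{1-\pi_{ijk}}\right)
  = \beta_0 + \mathbf{x}_{ijk}^{\top}\boldsymbol{\beta} + u_{jk} + v_{k},
\qquad
u_{jk}\sim N(0,\sigma^{2}_{u}),\quad v_{k}\sim N(0,\sigma^{2}_{v}),
```

Here π_ijk is the probability that woman i in community j of region k received at least three IPTp-SP doses, x_ijk collects the individual-, community- and region-level covariates, and u_jk and v_k are the community- and region-level random intercepts whose variances feed the ICC and MOR reported below; the empty model (Model I) corresponds to this equation with all covariate coefficients set to zero.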
Table 1Sample by IPTp-SP utilization during last pregnancyVariableAt least 3 doses of IPTp-SP in last Pregnancy Non (%) Yesn (%) Totaln (%) X 2; p-value Individual level  Age19.638; p < 0.05  15–19143(50.1)142(49.9)286(100)  20–24553(52.0)510(48.0)1063(100)  25–29606(55.8)479(44.2)1085(100)  30–34497(56.4)385(43.6)882(100)  35–39318(54.8)262(45.2)580(100)  40–44163(59.6)110(40.4)273(100)  45–4949(58.2)35(41.8)84(100)  Education45.638; p < 0.01  No education379(55.1)309(44.9)689(100)  Primary1294(55.0)1060(45.0)2354(100)  Secondary537(54.4)450(45.6)987(100)  Higher118(52.9)105(47.1)224(100)  Wealth quintile97.638; p < 0.001  Poor572(53.8)491(46.2)1063(100)  Middle863(56.9)654(43.1)1517(100)  Rich894(53.4)780(46.6)1675(100)  Mosquito bite causes malaria18.832; p < 0.01  No1573(57.4)1166(42.6)2739(100)  Yes755(49.9)759(50.1)1515(100)  Sleeping under ITN prevents malaria24.221; p < 0.01  No553(60.0)368(40.0)920(100)  Yes1776(53.3)1557(46.7)3333(100)  Destroying mosquito breeding site prevents malaria19.361; p < 0.01  No1909(54.0)1624(46.0)3533(100)  Yes420(58.3)301(41.7)720(100) Community level factors  Residential status52.873; p < 0.001  Urban493(52.4)448(47.6)940(100)  Rural1664(56.2)1298(43.8)2962(100)  Refugee settlement172(49.1)179(50.9)351(100)  Zone45.911; p < 0.001  Southern719(56.0)565(44.0)1284(100)  Eastern617(55.6)493(44.4)1110(100)  Northern465(54.2)392(45.8)858(100)  Western528(52.7)474(47.3)1002(100)  Socio-economic disadvantage109.111; p < 0.001  Tertile 1(least disadvantaged)935(53.2)822(46.8)1757(100)  Tertile 2855(58.1)615(41.9)1470(100)  Tertile 3(most disadvantaged)539(52.5)487(47.5)1026(100) Region level factor Socio-economic disadvantage59.651; p < 0.001  Tertile 1(least disadvantaged)1048(55.2)851(44.8)1898(100)  Tertile 2860(54.2)725(45.8)1585(100)  Tertile 3(most disadvantaged)421(54.7)349(45.3)770(100) Total2329(54.7)1925(45.3)4254(100)Source: 2018-19 Uganda Malaria Indicator Survey Sample by IPTp-SP utilization during last pregnancy Source: 2018-19 Uganda Malaria Indicator Survey Stata version 13 was utilized for all the analyses. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables (see Table 1). The association of significance between the explanatory variables and adequate IPTp-SP uptake was assessed with chi-square at 5% margin of error. Finally, a three-level multilevel logistic regression was fitted with five models. First of the five models was the empty model without any explanatory variable (Model I). The empty model is an unconditional model, which accounted for the magnitude of variance between community and region levels. This was followed by a model bearing all the individual-level variables (Model II) and subsequently Model III, which accounted for only community-level variables. Model IV featured region-level variables alone whilst the complete and ultimate model included variables of all the aforementioned levels/models (Model V). Output from the models comprised fixed and random effects. The fixed effects were reported as adjusted odds ratios (aORs) at 95% credible intervals (CrIs). However, the random effects reflected as intraclass correlation coefficient (ICC) and median odds ratio (MOR) [27]. With the ICC, it was possible to gauge the extent of variance in the possibility or tendency of adequate IPTp-SP uptake that is explained by community and region level factors. 
On the order hand, the MOR quantified the community and region variance as odds ratios and in addition estimated the likelihood of adequate IPTp-SP uptake that is influenced by community and region level issues. Groups having least observations were set as reference groups in the models. Table 1Sample by IPTp-SP utilization during last pregnancyVariableAt least 3 doses of IPTp-SP in last Pregnancy Non (%) Yesn (%) Totaln (%) X 2; p-value Individual level  Age19.638; p < 0.05  15–19143(50.1)142(49.9)286(100)  20–24553(52.0)510(48.0)1063(100)  25–29606(55.8)479(44.2)1085(100)  30–34497(56.4)385(43.6)882(100)  35–39318(54.8)262(45.2)580(100)  40–44163(59.6)110(40.4)273(100)  45–4949(58.2)35(41.8)84(100)  Education45.638; p < 0.01  No education379(55.1)309(44.9)689(100)  Primary1294(55.0)1060(45.0)2354(100)  Secondary537(54.4)450(45.6)987(100)  Higher118(52.9)105(47.1)224(100)  Wealth quintile97.638; p < 0.001  Poor572(53.8)491(46.2)1063(100)  Middle863(56.9)654(43.1)1517(100)  Rich894(53.4)780(46.6)1675(100)  Mosquito bite causes malaria18.832; p < 0.01  No1573(57.4)1166(42.6)2739(100)  Yes755(49.9)759(50.1)1515(100)  Sleeping under ITN prevents malaria24.221; p < 0.01  No553(60.0)368(40.0)920(100)  Yes1776(53.3)1557(46.7)3333(100)  Destroying mosquito breeding site prevents malaria19.361; p < 0.01  No1909(54.0)1624(46.0)3533(100)  Yes420(58.3)301(41.7)720(100) Community level factors  Residential status52.873; p < 0.001  Urban493(52.4)448(47.6)940(100)  Rural1664(56.2)1298(43.8)2962(100)  Refugee settlement172(49.1)179(50.9)351(100)  Zone45.911; p < 0.001  Southern719(56.0)565(44.0)1284(100)  Eastern617(55.6)493(44.4)1110(100)  Northern465(54.2)392(45.8)858(100)  Western528(52.7)474(47.3)1002(100)  Socio-economic disadvantage109.111; p < 0.001  Tertile 1(least disadvantaged)935(53.2)822(46.8)1757(100)  Tertile 2855(58.1)615(41.9)1470(100)  Tertile 3(most disadvantaged)539(52.5)487(47.5)1026(100) Region level factor Socio-economic disadvantage59.651; p < 0.001  Tertile 1(least disadvantaged)1048(55.2)851(44.8)1898(100)  Tertile 2860(54.2)725(45.8)1585(100)  Tertile 3(most disadvantaged)421(54.7)349(45.3)770(100) Total2329(54.7)1925(45.3)4254(100)Source: 2018-19 Uganda Malaria Indicator Survey Sample by IPTp-SP utilization during last pregnancy Source: 2018-19 Uganda Malaria Indicator Survey Model fit and specifications First, multicollinearity between the explanatory variables was assessed using the Variance Inflation Factor (VIF) [28] and the results showed that the variables were not highly correlated to warrant a concern (mean VIF = 1.55, minimum VIF = 1.19, maximum VIF = 2.09). Second, the Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models. Third, the Markov Chain Monte Carlo (MCMC) estimation was applied in modelling [29] and all models were specified using 3.05 version of MLwinN package in the Stata software. First, multicollinearity between the explanatory variables was assessed using the Variance Inflation Factor (VIF) [28] and the results showed that the variables were not highly correlated to warrant a concern (mean VIF = 1.55, minimum VIF = 1.19, maximum VIF = 2.09). Second, the Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models. Third, the Markov Chain Monte Carlo (MCMC) estimation was applied in modelling [29] and all models were specified using 3.05 version of MLwinN package in the Stata software. 
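For the multicollinearity check described above, a variance inflation factor can be computed for each dummy-coded explanatory variable as in the sketch below. This is an illustrative reconstruction with toy data and hypothetical column names, not the study's own code, which was run in Stata.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Toy stand-in for the dummy-coded explanatory variables (names are hypothetical).
X = pd.DataFrame({
    "no_education": [1, 0, 0, 1, 0, 1, 0, 0],
    "poor":         [1, 1, 0, 0, 0, 1, 0, 1],
    "urban":        [0, 0, 1, 1, 0, 0, 1, 0],
    "knows_itn":    [1, 1, 0, 1, 1, 0, 1, 1],
}, dtype=float)
X = add_constant(X)  # VIFs are computed with an intercept in the design matrix

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
).drop("const")
print(vif)         # per-variable VIFs
print(vif.mean())  # the study reports a mean VIF of 1.55 across its covariates
```

Values well below the conventional cut-offs of 5 or 10 would support the conclusion that collinearity is not a concern here.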
Ethics approval The 2018–2019 UMIS had approval from the Uganda National Council for Science and Technology (UNCST), the Ethics Committee of the School of Medicine Research and Ethics Committee (SOMREC) of the Makerere University as well as the institutional review board of the ICF. The author applied and was granted access to utilize the dataset for the purpose of this study. The 2018–2019 UMIS had approval from the Uganda National Council for Science and Technology (UNCST), the Ethics Committee of the School of Medicine Research and Ethics Committee (SOMREC) of the Makerere University as well as the institutional review board of the ICF. The author applied and was granted access to utilize the dataset for the purpose of this study. Dependent variable: Adequate uptake of intermittent preventive therapy using IPTp-SP was the dependent variable for this study. During the 2018-19 UMIS, women were asked if they took IPTp-SP (Fansidar™), and the number of times this was taken during their last pregnancy. The current recommendation by the WHO endorses at least three doses for all pregnant women in locations that have moderate to high malaria transmission of which Africa and for that matter Uganda is inclusive [8, 24]. Following this recommendation, all women who revealed that they had at least three doses of IPTp-SP were categorized as having adequate IPTp-SP (coded 1) whilst those who had less than three were categorized otherwise (coded 0). Independent variables: A total of ten (10) independent variables were included and were categorized under individual, community and region-level factors. This was possible due the hierarchical nature of the data set. The individual-level factors comprised age in completed years (15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49) and education measured as highest level of educational attainment (no education, primary, secondary, higher). In addition were wealth quintile (poor, middle, rich), whether mosquito bite causes malaria (yes or no), if sleeping under ITN prevents malaria (yes or no) and if malaria can be prevented by destroying mosquito breeding site (yes or no). The community-level factors comprised residential status (urban, rural, refugee settlement), and socio-economic disadvantage at the community level. The sole region-level factor was socio-economic disadvantage at the region level. Several studies using the DHS dataset have followed the same categorization [21, 25, 26]. Statistical analyses: Stata version 13 was utilized for all the analyses. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables (see Table 1). The association of significance between the explanatory variables and adequate IPTp-SP uptake was assessed with chi-square at 5% margin of error. Finally, a three-level multilevel logistic regression was fitted with five models. First of the five models was the empty model without any explanatory variable (Model I). The empty model is an unconditional model, which accounted for the magnitude of variance between community and region levels. This was followed by a model bearing all the individual-level variables (Model II) and subsequently Model III, which accounted for only community-level variables. Model IV featured region-level variables alone whilst the complete and ultimate model included variables of all the aforementioned levels/models (Model V). Output from the models comprised fixed and random effects. 
The fixed effects were reported as adjusted odds ratios (aORs) at 95% credible intervals (CrIs). However, the random effects reflected as intraclass correlation coefficient (ICC) and median odds ratio (MOR) [27]. With the ICC, it was possible to gauge the extent of variance in the possibility or tendency of adequate IPTp-SP uptake that is explained by community and region level factors. On the order hand, the MOR quantified the community and region variance as odds ratios and in addition estimated the likelihood of adequate IPTp-SP uptake that is influenced by community and region level issues. Groups having least observations were set as reference groups in the models. Table 1Sample by IPTp-SP utilization during last pregnancyVariableAt least 3 doses of IPTp-SP in last Pregnancy Non (%) Yesn (%) Totaln (%) X 2; p-value Individual level  Age19.638; p < 0.05  15–19143(50.1)142(49.9)286(100)  20–24553(52.0)510(48.0)1063(100)  25–29606(55.8)479(44.2)1085(100)  30–34497(56.4)385(43.6)882(100)  35–39318(54.8)262(45.2)580(100)  40–44163(59.6)110(40.4)273(100)  45–4949(58.2)35(41.8)84(100)  Education45.638; p < 0.01  No education379(55.1)309(44.9)689(100)  Primary1294(55.0)1060(45.0)2354(100)  Secondary537(54.4)450(45.6)987(100)  Higher118(52.9)105(47.1)224(100)  Wealth quintile97.638; p < 0.001  Poor572(53.8)491(46.2)1063(100)  Middle863(56.9)654(43.1)1517(100)  Rich894(53.4)780(46.6)1675(100)  Mosquito bite causes malaria18.832; p < 0.01  No1573(57.4)1166(42.6)2739(100)  Yes755(49.9)759(50.1)1515(100)  Sleeping under ITN prevents malaria24.221; p < 0.01  No553(60.0)368(40.0)920(100)  Yes1776(53.3)1557(46.7)3333(100)  Destroying mosquito breeding site prevents malaria19.361; p < 0.01  No1909(54.0)1624(46.0)3533(100)  Yes420(58.3)301(41.7)720(100) Community level factors  Residential status52.873; p < 0.001  Urban493(52.4)448(47.6)940(100)  Rural1664(56.2)1298(43.8)2962(100)  Refugee settlement172(49.1)179(50.9)351(100)  Zone45.911; p < 0.001  Southern719(56.0)565(44.0)1284(100)  Eastern617(55.6)493(44.4)1110(100)  Northern465(54.2)392(45.8)858(100)  Western528(52.7)474(47.3)1002(100)  Socio-economic disadvantage109.111; p < 0.001  Tertile 1(least disadvantaged)935(53.2)822(46.8)1757(100)  Tertile 2855(58.1)615(41.9)1470(100)  Tertile 3(most disadvantaged)539(52.5)487(47.5)1026(100) Region level factor Socio-economic disadvantage59.651; p < 0.001  Tertile 1(least disadvantaged)1048(55.2)851(44.8)1898(100)  Tertile 2860(54.2)725(45.8)1585(100)  Tertile 3(most disadvantaged)421(54.7)349(45.3)770(100) Total2329(54.7)1925(45.3)4254(100)Source: 2018-19 Uganda Malaria Indicator Survey Sample by IPTp-SP utilization during last pregnancy Source: 2018-19 Uganda Malaria Indicator Survey Model fit and specifications: First, multicollinearity between the explanatory variables was assessed using the Variance Inflation Factor (VIF) [28] and the results showed that the variables were not highly correlated to warrant a concern (mean VIF = 1.55, minimum VIF = 1.19, maximum VIF = 2.09). Second, the Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models. Third, the Markov Chain Monte Carlo (MCMC) estimation was applied in modelling [29] and all models were specified using 3.05 version of MLwinN package in the Stata software. 
Ethics approval: The 2018–2019 UMIS had approval from the Uganda National Council for Science and Technology (UNCST), the Ethics Committee of the School of Medicine Research and Ethics Committee (SOMREC) of the Makerere University as well as the institutional review board of the ICF. The author applied and was granted access to utilize the dataset for the purpose of this study. Results: Descriptive findings As shown in Table 1, less than half of the surveyed women had three or more IPTp-SP doses during their last pregnancies (45.3%). All the explanatory variables showed significant association at 95% level of significance. Nearly half of women aged 15–19 (49.9%) and the highly educated (47.1%) women received at least three doses. A greater section of rich women had at least three IPTp-SP doses (46.6%) and 50.1% of women who knew that mosquito bite causes malaria did the same. A significant proportion of women who knew that sleeping under ITN prevents malaria (46.7%) and those who did not know that destroying mosquito breeding sites prevents malaria (46.0%) received at three or more doses of IPTp-SP. Receiving at least three doses of IPTp-SP was profound in refugee settlements (50.9%), Western zone (47.3%), most disadvantaged communities (47.5%) and moderately disadvantaged regions (45.8%). As shown in Table 1, less than half of the surveyed women had three or more IPTp-SP doses during their last pregnancies (45.3%). All the explanatory variables showed significant association at 95% level of significance. Nearly half of women aged 15–19 (49.9%) and the highly educated (47.1%) women received at least three doses. A greater section of rich women had at least three IPTp-SP doses (46.6%) and 50.1% of women who knew that mosquito bite causes malaria did the same. A significant proportion of women who knew that sleeping under ITN prevents malaria (46.7%) and those who did not know that destroying mosquito breeding sites prevents malaria (46.0%) received at three or more doses of IPTp-SP. Receiving at least three doses of IPTp-SP was profound in refugee settlements (50.9%), Western zone (47.3%), most disadvantaged communities (47.5%) and moderately disadvantaged regions (45.8%). Fixed effects Table 2 presents the findings of the fixed effects. The complete and final model (Model V) indicates that women aged 15–19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45–49 [aOR = 0.42, Crl = 0.33–0.98]. Those who had no formal education were less likely to achieve the minimum recommended doses [aOR = 0.51, Crl = 0.35–0.81]. Similarly, poor women [aOR = 0.80, Crl = 0.78–0.91] and women who knew that mosquito bite can cause malaria [aOR = 0.84, Crl = 0.73–0.96] were less likely to have three or more doses of IPTp-SP relative to the rich women and women who did not know that mosquito bite can cause malaria respectively. Women who reported that sleeping under ITN prevents malaria had higher odds of three or more IPTp-SP doses [aOR = 1.22, Crl = 1.04–1.43] relative to those who reported otherwise. The findings further revealed that urban residents were less probable to have at least three doses [aOR = 0.60, CI = 0.46–0.82] as well as most disadvantaged communities [aOR = 0.67, CI = 0.50–0.86] relative to rural residents and least disadvantaged communities correspondingly. Similarly, most disadvantaged regions were aligned with less likelihood of three or more IPTp-SP uptake [aOR = 0.59, CI = 0.48–0.78] compared to the least disadvantaged regions. 
Table 2Individual, community and region-level predictors of IPTp-SP utilizationModel IModel IIModel IIIModel IVModel V aOR [95% Crl] aOR [95% Crl] aOR [95% Crl] aOR [95% Crl] Fixed effects Individual level   Age   15–190.22** [0.11–0.81]0.42**[0.33–0.98]   20–240.43*[0.24–0.98]0.38*[0.25–0.79]   25–290.99[0.62–1.62]0.81[0.53–1.22]   30–341.01[0.62–1.64]0.81[0.53–1.24]   35–390.94[0.56–1.55]0.76[0.48–1.18]   40–440.99[0.58–1.71]0.81[0.50–1.30]   45–491[1]1[1]   Education   No education0.33**[0.17–0.77]0.51**[0.35–0.81]   Primary0.53*[0.44–0.82]0.62*[0.51–0.91]   Secondary1.09[0.74–1.60]1.06[0.75–1.50]   Higher1[1]1[1]   Wealth quintile   Poor0.81*[0.79–0.94]0.80**[0.78–0.91]   Middle0.89[0.75–1.06]0.88[0.74–1.05]   Rich1[1]1[1]   Mosquito bite causes malaria   No0.85*[0.74–0.97]0.84*[0.73–0.96]   Yes1[1]1[1]   Sleeping under ITN prevents malaria   No1[1]1[1]   Yes1.23**[1.06–1.43]1.22*[1.04–1.43]   Destroying mosquito breeding site prevents malaria   No1.16[0.97–1.39]1.165[0.96–1.41]   Yes1[1]1[1]  Community level factors   Residential status   Urban0.69***[0.31–0.88]0.60***[0.46–0.82]   Rural0.78*[0.68–0.96]0.80[0.61–1.05]   Refugee settlement1[1]1[1]   Zone   Southern1[1]1[1]   Eastern1.05[0.74–1.49]1.07[0.71–1.61]   Northern1.08[0.73–1.61]1.12[0.68–1.83]   Western1.19[0.84–1.68]1.22[0.81–1.84]   Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.87[0.72–1.05]0.91[0.73–1.12]   Tertile 3(most disadvantaged)0.71*[0.65–0.92]0.67**[0.50–0.86]  Region level factor   Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.71**[0.51–0.89]0.68***[0.54–0.82]   Tertile 3(most disadvantaged)0.55**[0.39–0.78]0.59***[0.48–0.78] Random effects  Region level  Variance (SE)1.13[1.06–1.20]1.14[1.04–1.21]1.12[1.05–1.20]1.15[1.07–1.21]1.14[1.06–1.20]  ICC (%)18.00[15.01–22.90]17.72[15.20-19.08]16.33[15.21-9.00]17.10[16.91–18.70]18.22[17.90-19.02]  MOR2.76[2.03–3.42]2.77[2.65–2.86]2.74[2.66–2.84]2.78[2.68–2.86]2.89[2.33–3.51]  Explained variation[1]35.72[29.06–41.20]31.03[27.61–37.31]33.20[27.31–37.28]30.82[26.97–37.91]  Community level  Variance (SE)1.86[1.12–2.18]2.00[1.71–2.31]1.98[1.14–2.20]1.59[1.22–1.98]1.99[1.42–2.36]  ICC (%)47.60[39.8–50.7]49.80[37.98–53.70]48.50[36.90-59.08]48.40[37.00-51.20]49.22[38.99–52.51]  MOR3.67[2.74–4.09]3.85[3.48–4.26]3.83[2.77–4.12]3.33[2.87–3.83]3.84[3.12–4.33]  Explained variation[1]52.00[46.30–59.80]48.99[41.40-55.81]47.89[38.21–52.55]48.70[43.00–52.00]  Model fit statistics  Bayesian DIC58396002599860106012    N  Region level1515151515  Community level340340340340340  Individual42544254425442544254 * p < 0.05, ** p < 0.01, *** p < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference Individual, community and region-level predictors of IPTp-SP utilization * p < 0.05, ** p < 0.01, *** p < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference Table 2 presents the findings of the fixed effects. The complete and final model (Model V) indicates that women aged 15–19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45–49 [aOR = 0.42, Crl = 0.33–0.98]. Those who had no formal education were less likely to achieve the minimum recommended doses [aOR = 0.51, Crl = 0.35–0.81]. 
Similarly, poor women [aOR = 0.80, Crl = 0.78–0.91] and women who knew that mosquito bite can cause malaria [aOR = 0.84, Crl = 0.73–0.96] were less likely to have three or more doses of IPTp-SP relative to the rich women and women who did not know that mosquito bite can cause malaria respectively. Women who reported that sleeping under ITN prevents malaria had higher odds of three or more IPTp-SP doses [aOR = 1.22, Crl = 1.04–1.43] relative to those who reported otherwise. The findings further revealed that urban residents were less probable to have at least three doses [aOR = 0.60, CI = 0.46–0.82] as well as most disadvantaged communities [aOR = 0.67, CI = 0.50–0.86] relative to rural residents and least disadvantaged communities correspondingly. Similarly, most disadvantaged regions were aligned with less likelihood of three or more IPTp-SP uptake [aOR = 0.59, CI = 0.48–0.78] compared to the least disadvantaged regions. Table 2Individual, community and region-level predictors of IPTp-SP utilizationModel IModel IIModel IIIModel IVModel V aOR [95% Crl] aOR [95% Crl] aOR [95% Crl] aOR [95% Crl] Fixed effects Individual level   Age   15–190.22** [0.11–0.81]0.42**[0.33–0.98]   20–240.43*[0.24–0.98]0.38*[0.25–0.79]   25–290.99[0.62–1.62]0.81[0.53–1.22]   30–341.01[0.62–1.64]0.81[0.53–1.24]   35–390.94[0.56–1.55]0.76[0.48–1.18]   40–440.99[0.58–1.71]0.81[0.50–1.30]   45–491[1]1[1]   Education   No education0.33**[0.17–0.77]0.51**[0.35–0.81]   Primary0.53*[0.44–0.82]0.62*[0.51–0.91]   Secondary1.09[0.74–1.60]1.06[0.75–1.50]   Higher1[1]1[1]   Wealth quintile   Poor0.81*[0.79–0.94]0.80**[0.78–0.91]   Middle0.89[0.75–1.06]0.88[0.74–1.05]   Rich1[1]1[1]   Mosquito bite causes malaria   No0.85*[0.74–0.97]0.84*[0.73–0.96]   Yes1[1]1[1]   Sleeping under ITN prevents malaria   No1[1]1[1]   Yes1.23**[1.06–1.43]1.22*[1.04–1.43]   Destroying mosquito breeding site prevents malaria   No1.16[0.97–1.39]1.165[0.96–1.41]   Yes1[1]1[1]  Community level factors   Residential status   Urban0.69***[0.31–0.88]0.60***[0.46–0.82]   Rural0.78*[0.68–0.96]0.80[0.61–1.05]   Refugee settlement1[1]1[1]   Zone   Southern1[1]1[1]   Eastern1.05[0.74–1.49]1.07[0.71–1.61]   Northern1.08[0.73–1.61]1.12[0.68–1.83]   Western1.19[0.84–1.68]1.22[0.81–1.84]   Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.87[0.72–1.05]0.91[0.73–1.12]   Tertile 3(most disadvantaged)0.71*[0.65–0.92]0.67**[0.50–0.86]  Region level factor   Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.71**[0.51–0.89]0.68***[0.54–0.82]   Tertile 3(most disadvantaged)0.55**[0.39–0.78]0.59***[0.48–0.78] Random effects  Region level  Variance (SE)1.13[1.06–1.20]1.14[1.04–1.21]1.12[1.05–1.20]1.15[1.07–1.21]1.14[1.06–1.20]  ICC (%)18.00[15.01–22.90]17.72[15.20-19.08]16.33[15.21-9.00]17.10[16.91–18.70]18.22[17.90-19.02]  MOR2.76[2.03–3.42]2.77[2.65–2.86]2.74[2.66–2.84]2.78[2.68–2.86]2.89[2.33–3.51]  Explained variation[1]35.72[29.06–41.20]31.03[27.61–37.31]33.20[27.31–37.28]30.82[26.97–37.91]  Community level  Variance (SE)1.86[1.12–2.18]2.00[1.71–2.31]1.98[1.14–2.20]1.59[1.22–1.98]1.99[1.42–2.36]  ICC (%)47.60[39.8–50.7]49.80[37.98–53.70]48.50[36.90-59.08]48.40[37.00-51.20]49.22[38.99–52.51]  MOR3.67[2.74–4.09]3.85[3.48–4.26]3.83[2.77–4.12]3.33[2.87–3.83]3.84[3.12–4.33]  Explained variation[1]52.00[46.30–59.80]48.99[41.40-55.81]47.89[38.21–52.55]48.70[43.00–52.00]  Model fit statistics  Bayesian DIC58396002599860106012    N  Region level1515151515  Community level340340340340340  
Individual42544254425442544254 * p < 0.05, ** p < 0.01, *** p < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference Individual, community and region-level predictors of IPTp-SP utilization * p < 0.05, ** p < 0.01, *** p < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference Random effects Outcome of the random effects were also reported in Table 2. The empty model (Model I), revealed that the discrepancy in uptake of three or more IPTp-SP doses was substantial at the community level [σ2 = 1. 86; Crl = 11.12–2.18] than regional level [σ2 = 1.13; Crl = 1.06–1.20]. Also, the ICC of Model I indicated that 18% and 47% disparity in IPTp-SP uptake are linked to region and community level factors respectively. According to the final model (Model V), the MORs indicate that when a woman changes her community to a community with higher likelihood of three or more doses of IPTp-SP, she has 3.83 higher chances of achieving the recommended doses (i.e. three or more). Similarly, moving to a region with high likelihood of three or more IPTp-SP doses is associated with 2.74-fold increase in having three or more IPTp-SP doses. Outcome of the random effects were also reported in Table 2. The empty model (Model I), revealed that the discrepancy in uptake of three or more IPTp-SP doses was substantial at the community level [σ2 = 1. 86; Crl = 11.12–2.18] than regional level [σ2 = 1.13; Crl = 1.06–1.20]. Also, the ICC of Model I indicated that 18% and 47% disparity in IPTp-SP uptake are linked to region and community level factors respectively. According to the final model (Model V), the MORs indicate that when a woman changes her community to a community with higher likelihood of three or more doses of IPTp-SP, she has 3.83 higher chances of achieving the recommended doses (i.e. three or more). Similarly, moving to a region with high likelihood of three or more IPTp-SP doses is associated with 2.74-fold increase in having three or more IPTp-SP doses. Descriptive findings: As shown in Table 1, less than half of the surveyed women had three or more IPTp-SP doses during their last pregnancies (45.3%). All the explanatory variables showed significant association at 95% level of significance. Nearly half of women aged 15–19 (49.9%) and the highly educated (47.1%) women received at least three doses. A greater section of rich women had at least three IPTp-SP doses (46.6%) and 50.1% of women who knew that mosquito bite causes malaria did the same. A significant proportion of women who knew that sleeping under ITN prevents malaria (46.7%) and those who did not know that destroying mosquito breeding sites prevents malaria (46.0%) received at three or more doses of IPTp-SP. Receiving at least three doses of IPTp-SP was profound in refugee settlements (50.9%), Western zone (47.3%), most disadvantaged communities (47.5%) and moderately disadvantaged regions (45.8%). Fixed effects: Table 2 presents the findings of the fixed effects. The complete and final model (Model V) indicates that women aged 15–19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45–49 [aOR = 0.42, Crl = 0.33–0.98]. Those who had no formal education were less likely to achieve the minimum recommended doses [aOR = 0.51, Crl = 0.35–0.81]. 
Similarly, poor women [aOR = 0.80, Crl = 0.78–0.91] and women who knew that mosquito bite can cause malaria [aOR = 0.84, Crl = 0.73–0.96] were less likely to have three or more doses of IPTp-SP relative to the rich women and women who did not know that mosquito bite can cause malaria respectively. Women who reported that sleeping under ITN prevents malaria had higher odds of three or more IPTp-SP doses [aOR = 1.22, Crl = 1.04–1.43] relative to those who reported otherwise. The findings further revealed that urban residents were less probable to have at least three doses [aOR = 0.60, CI = 0.46–0.82] as well as most disadvantaged communities [aOR = 0.67, CI = 0.50–0.86] relative to rural residents and least disadvantaged communities correspondingly. Similarly, most disadvantaged regions were aligned with less likelihood of three or more IPTp-SP uptake [aOR = 0.59, CI = 0.48–0.78] compared to the least disadvantaged regions. Table 2Individual, community and region-level predictors of IPTp-SP utilizationModel IModel IIModel IIIModel IVModel V aOR [95% Crl] aOR [95% Crl] aOR [95% Crl] aOR [95% Crl] Fixed effects Individual level   Age   15–190.22** [0.11–0.81]0.42**[0.33–0.98]   20–240.43*[0.24–0.98]0.38*[0.25–0.79]   25–290.99[0.62–1.62]0.81[0.53–1.22]   30–341.01[0.62–1.64]0.81[0.53–1.24]   35–390.94[0.56–1.55]0.76[0.48–1.18]   40–440.99[0.58–1.71]0.81[0.50–1.30]   45–491[1]1[1]   Education   No education0.33**[0.17–0.77]0.51**[0.35–0.81]   Primary0.53*[0.44–0.82]0.62*[0.51–0.91]   Secondary1.09[0.74–1.60]1.06[0.75–1.50]   Higher1[1]1[1]   Wealth quintile   Poor0.81*[0.79–0.94]0.80**[0.78–0.91]   Middle0.89[0.75–1.06]0.88[0.74–1.05]   Rich1[1]1[1]   Mosquito bite causes malaria   No0.85*[0.74–0.97]0.84*[0.73–0.96]   Yes1[1]1[1]   Sleeping under ITN prevents malaria   No1[1]1[1]   Yes1.23**[1.06–1.43]1.22*[1.04–1.43]   Destroying mosquito breeding site prevents malaria   No1.16[0.97–1.39]1.165[0.96–1.41]   Yes1[1]1[1]  Community level factors   Residential status   Urban0.69***[0.31–0.88]0.60***[0.46–0.82]   Rural0.78*[0.68–0.96]0.80[0.61–1.05]   Refugee settlement1[1]1[1]   Zone   Southern1[1]1[1]   Eastern1.05[0.74–1.49]1.07[0.71–1.61]   Northern1.08[0.73–1.61]1.12[0.68–1.83]   Western1.19[0.84–1.68]1.22[0.81–1.84]   Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.87[0.72–1.05]0.91[0.73–1.12]   Tertile 3(most disadvantaged)0.71*[0.65–0.92]0.67**[0.50–0.86]  Region level factor   Socio-economic disadvantage   Tertile 1(least disadvantaged)1[1]1[1]   Tertile 20.71**[0.51–0.89]0.68***[0.54–0.82]   Tertile 3(most disadvantaged)0.55**[0.39–0.78]0.59***[0.48–0.78] Random effects  Region level  Variance (SE)1.13[1.06–1.20]1.14[1.04–1.21]1.12[1.05–1.20]1.15[1.07–1.21]1.14[1.06–1.20]  ICC (%)18.00[15.01–22.90]17.72[15.20-19.08]16.33[15.21-9.00]17.10[16.91–18.70]18.22[17.90-19.02]  MOR2.76[2.03–3.42]2.77[2.65–2.86]2.74[2.66–2.84]2.78[2.68–2.86]2.89[2.33–3.51]  Explained variation[1]35.72[29.06–41.20]31.03[27.61–37.31]33.20[27.31–37.28]30.82[26.97–37.91]  Community level  Variance (SE)1.86[1.12–2.18]2.00[1.71–2.31]1.98[1.14–2.20]1.59[1.22–1.98]1.99[1.42–2.36]  ICC (%)47.60[39.8–50.7]49.80[37.98–53.70]48.50[36.90-59.08]48.40[37.00-51.20]49.22[38.99–52.51]  MOR3.67[2.74–4.09]3.85[3.48–4.26]3.83[2.77–4.12]3.33[2.87–3.83]3.84[3.12–4.33]  Explained variation[1]52.00[46.30–59.80]48.99[41.40-55.81]47.89[38.21–52.55]48.70[43.00–52.00]  Model fit statistics  Bayesian DIC58396002599860106012    N  Region level1515151515  Community level340340340340340  
Individual42544254425442544254 * p < 0.05, ** p < 0.01, *** p < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference Individual, community and region-level predictors of IPTp-SP utilization * p < 0.05, ** p < 0.01, *** p < 0.001; aOR = adjusted Odds Ratio; CrI = Credible Interval; ICC = Intra-cluster correlation; MOR = Median Odds Ratio; 1 = reference Random effects: Outcome of the random effects were also reported in Table 2. The empty model (Model I), revealed that the discrepancy in uptake of three or more IPTp-SP doses was substantial at the community level [σ2 = 1. 86; Crl = 11.12–2.18] than regional level [σ2 = 1.13; Crl = 1.06–1.20]. Also, the ICC of Model I indicated that 18% and 47% disparity in IPTp-SP uptake are linked to region and community level factors respectively. According to the final model (Model V), the MORs indicate that when a woman changes her community to a community with higher likelihood of three or more doses of IPTp-SP, she has 3.83 higher chances of achieving the recommended doses (i.e. three or more). Similarly, moving to a region with high likelihood of three or more IPTp-SP doses is associated with 2.74-fold increase in having three or more IPTp-SP doses. Discussion: The aim of this study was to investigate the predictors of three or more IPTp-SP doses in Uganda as recommended by the WHO [8, 24]. Less than half of the women met the recommended dosage. This is far below the prevalence in other sub-Sahara African countries such as Ghana (63%) [30] but higher than the proportion of women who obtain at least three doses in Mali (36.7%) [31] and Senegal (37.51%) [32]. The prevalence, however, denotes an appreciation from 16% to 18% as reported by earlier studies based on the 2016 Uganda Demographic and Health Survey dataset [21, 33]. The relative increase in optimal IPTp-SP uptake since 2016 is suggestive that the recent anti-malaria and IPTp-SP initiatives are useful. Yet, for Uganda alone to account for 5% of the global malaria burden [1] is not good enough and more public health sensitization, and behavioural change communication interventions should be intensified. Hitherto, IPTp-SP administration has been dependent on ANC attendance, meanwhile, even where ANC attendance is high, as in the case of Malawi (84.0%) sometimes optimal uptake of IPTp-SP is low (24.8%) [34]. Thus, varied interventions such as adopting technological approaches and mobile phone alerts/reminders could prompt pregnant women to achieve the recommended dosage as a way of augmenting the traditional approach of administering the drug during ANC. The study revealed that women aged 15–19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45–49. Due to stigmatization, cultural and traditional connotations of adolescent pregnancy in Uganda [35], adolescent pregnant women may be less motivated to frequent health facilities, and thereby have an increased likelihood of missing or having lower IPTp-SP uptake. Recent synthesis of evidence on maternal healthcare utilization also noted that adolescents generally have lower maternal healthcare utilization [36]. Perhaps, having a secluded and special care for adolescent pregnant women may motivate their likelihood of visiting the health facility for the doses. 
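For reference, the ICC and MOR quoted above follow the standard latent-response formulas for multilevel logistic models, with the level-1 variance fixed at π²/3 ≈ 3.29. These are the conventional definitions rather than formulas quoted from the paper, and the choice to include the community variance in the community-level ICC is an assumption, although it reproduces the reported figures.

```latex
\mathrm{ICC}_{\text{region}} = \frac{\sigma^{2}_{v}}{\sigma^{2}_{v} + \sigma^{2}_{u} + \pi^{2}/3},
\qquad
\mathrm{ICC}_{\text{community}} = \frac{\sigma^{2}_{v} + \sigma^{2}_{u}}{\sigma^{2}_{v} + \sigma^{2}_{u} + \pi^{2}/3},
\qquad
\mathrm{MOR} = \exp\!\left(\Phi^{-1}(0.75)\sqrt{2\sigma^{2}}\right) \approx \exp\!\left(0.6745\sqrt{2\sigma^{2}}\right).
```

Plugging in the Model I variances (1.13 at the region level and 1.86 at the community level) gives ICCs of approximately 18% and 47.6% and MORs of approximately 2.76 and 3.67, consistent with the values reported for the empty model in Table 2.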
If the healthcare provider forgets to administer IPTp-SP, not every adolescent will feel comfortable and confident enough to ask the provider for the drug while queuing alongside women of her mother's age, and she may feel apprehensive. Women with no formal education and poor women were less likely to achieve the minimum recommended doses. Being poor and/or uneducated denotes disempowerment [26], and the literature is consistent on the positive impact of empowerment on maternal healthcare utilization [37, 38] and on taking charge of one's holistic wellbeing [39]. This observation points to the need for the Uganda Government and its partner organizations to appreciate that optimising IPTp-SP utilization goes beyond providing funds to procure the drug. Enhancing educational opportunities for women and improving their wealth status, so that every woman of reproductive age can compete for decent employment, could therefore facilitate uptake of IPTp-SP in the country. Whilst education offers the knowledge women need to appreciate the recommended dosage, enhanced wealth status provides the financial power to offset costs that could otherwise hinder access to it. Women who knew that mosquito bites can cause malaria were less likely to have three or more doses of IPTp-SP; however, those who knew that sleeping under an ITN can prevent malaria had higher odds of three or more IPTp-SP doses. All things being equal, women who are knowledgeable about the transmission routes and preventive strategies of malaria are expected to use every available opportunity to protect themselves and their newborns [40]. Arguably, a sizeable proportion of women may be unsure of the information they hold about causation and prevention. The content of ANC messages delivered throughout the trimesters could therefore address malaria causation and preventive strategies so that women are conscious of, and appreciate, the need to achieve the full dosage. This is critical given that malaria is endemic in over 95% of the country [4]. The findings further revealed that urban residents were less likely to have at least three doses. Unlike rural settings, urban centres are usually cleaner, with fewer hideouts and conducive breeding sites for mosquitoes. Consequently, urban residents may be tempted to feel that they are less susceptible to malaria, even during pregnancy, whereas a rural resident may be more concerned about her risk of malaria and hence use all available means to achieve the recommended doses. This finding also indicates that the availability of health facilities does not necessarily translate into healthcare utilization; several other factors interact to determine utilization. Health facilities and health personnel are more prevalent in urban settings of Uganda than in rural locations [41, 42], so, all things being equal, urban residents would have been expected to have higher chances of achieving the recommended IPTp-SP dosage. Health education delivered to urban women through the mass media may help boost IPTp-SP uptake in this group. The most disadvantaged communities and regions were associated with a lower likelihood of receiving three or more IPTp-SP doses. 
Residing in the most disadvantaged communities may indicate limited access to IPTp-SP outlets such as health facilities and mobile clinics [42]; this finding was therefore anticipated. It points to the need for an urgent appraisal of existing IPTp-SP administration and interventions, and for prioritising ways of increasing access among women in the most disadvantaged regions and communities. It is through such approaches that the malaria burden in Uganda can be reduced, thereby improving the country's prospects of achieving SDG target 3.1 [7]. Strengths and limitations: This study drew on the most recent national malaria survey of Uganda. Owing to the sampling procedure and the large sample, it is representative of all women aged 15–49 in Uganda. In spite of this strength, the study has some noteworthy limitations. It adopted a cross-sectional design, so causal inference is not permissible. Secondly, since the outcome variable was self-reported, under-reporting or over-reporting of optimal IPTp-SP uptake is plausible. Also, because the study was based on pre-existing data without information on health system factors, I was unable to interrogate such factors. Conclusion: The study revealed that less than half of Ugandan women achieved the recommended IPTp-SP dosage at their last pregnancy preceding the 2018-19 UMIS. Community- and region-level factors are significant predictors of optimal IPTp-SP uptake. Existing IPTp-SP interventions that focus on individual-level factors alone need to be reviewed to reflect broader community- and region-level factors in order to reduce the high malaria prevalence in the country. In particular, augmenting IPTp-SP uptake in the most disadvantaged communities will require careful scrutiny of suitable approaches that guarantee access by removing all barriers. Contextually responsive behavioural change communication interventions could also motivate women to achieve the recommended dosage.
Background: In spite of the missed opportunities of sulfadoxine-pyrimethamine (IPTp-SP) in Uganda, scanty literature exist on malaria in pregnancy. To date, empirical national study utilizing the 2018-19 Uganda Malaria Indicator Survey to explore predictors of attaining three or more doses of IPTp-SP in the country is non-existent. This study investigated the factors affecting uptake of three or more IPTp-SP doses as recommended by the World Health Organization. Methods: Data from the 2018-2019 Uganda Malaria Indicator Survey (2018-19 UMIS) was analysed. Adequate uptake of intermittent preventive therapy with IPTp-SP was the dependent variable for this study. Weighted frequencies and percentages were used to present the proportion of women who had adequate IPTp-SP uptake or otherwise with respect to the independent variables. A three-level multilevel logistic regression was fitted. The Bayesian Deviance Information Criterion (DIC) was used in determining the goodness of fit of all the models. Results: Less than half of the surveyed women had three or more IPTp-SP doses during their last pregnancies (45.3%). Women aged 15-19 had less odds of receiving at least three IPTp-SP doses compared to those aged 45-49 [aOR = 0.42, Crl = 0.33-0.98]. Poor women [aOR = 0.80, Crl = 0.78-0.91] were less likely to have three or more doses of IPTp-SP relative to rich women. Most disadvantaged regions were aligned with less likelihood of three or more IPTp-SP uptake [aOR = 0.59, CI = 0.48-0.78] compared to least disadvantaged regions. The variation in uptake of three or more IPTp-SP doses was substantial at the community level [σ2 = 1. 86; Crl = 11.12-2.18] than regional level [σ2 = 1.13; Crl = 1.06-1.20]. About 18% and 47% disparity in IPTp-SP uptake are linked to region and community level factors respectively. Conclusions: IPTp-SP interventions need to reflect broader community and region level factors in order to wane the high malaria prevalence in Uganda. Contextually responsive behavioural change communication interventions are required to invoke women's passion to achieve the recommended dosage.
Background: Malaria infection in pregnancy (MiP) is acknowledged as a weighty public health challenge and have ample dangers for the pregnant woman and her fetus [1–3]. The symptoms and complications of MiP fluctuate with respect to intensity of transmission within a defined geographical area as well as a woman’s level of acquired immunity [1, 2]. Nineteen countries within sub-Saharan Africa (SSA), with Uganda inclusive, and one Asian country account for 85% of the global malaria burden [1]. In 2018 alone, about US$ 2.7 billion investment was made into malaria control and elimination globally and three-quarters of this was directed to the World Health Organization (WHO) African Region. In spite of this, malaria continue to take a heavy toll on government and household expenses in Uganda [4]. Pregnant women have increased susceptibility to malaria and its associated complications such as maternal anaemia, stillbirth, low birth weight, and in worse scenarios, infant mortality and morbidity [5, 6]. MiP could be an impediment to the realization of targets 3.1 of the Sustainable Development Goals, thus reducing maternal mortality ratio to less than 70 deaths per 100,000 live births [7]. In order to shield women in moderate to high malaria transmission areas in Africa and their newborns from the adverse implications of MiP and its associated imminent problems, the WHO in 2012 revised its anti-malaria policy and recommended that all pregnant women within such regions should receive at least three doses of intermittent preventive treatment in pregnancy with antimalarial drug sulfadoxine-pyrimethamine (IPTp-SP) [8]. This recommendation was informed by the stagnated IPTp coverage rates and new evidence that reinforced the need for three doses or more [9]. Further, in 2016, the WHO developed new antenatal care (ANC) guidelines by indorsing an increase in the number of ANC to at least eight contacts between pregnant women and healthcare providers as a strategy to enhance prospects of IPTp-SP uptake [2]. IPTp-SP is to be taken by all pregnant women in moderate to high malaria transmission areas and should commence as early as possible within the second trimester. The doses are to be administered at least three times with at least one month interval until childbirth. Generally IPTp-SP declines episodes of MiP, neonatal mortality, low birth weight, and placental parasitaemia [2]. Empirical evidence indicate that IPTp contributes to 29%, 38% and 31% reduction in the incidence of low birth weight, severe malaria anaemia and neonatal mortality respectively [10, 11]. In 2018, out of the 36 African countries that recounted IPTp-SP coverage rates, about 31% of eligible women in the reproductive age received the recommended doses and this signified an increase relative to the rate reported in 2017 (22%) and 2010 (2%). In the case of Uganda, where malaria is endemic in 95% of the country [4], the 2019 World Malaria Report indicated that 30% or lower of the eligible women had three or more IPTp-SP doses [1]. Resistance of the parasite to SP has been noted, however, IPTp still remains a very cost-effective and a promising lifesaving intervention [12, 13]. In spite of the missed IPTp-SP opportunities in Uganda in the wake of high MiP [1, 14, 15], scanty literature exist on MiP in Uganda. The few studies have either been limited to some regions of the country [16–20], used relatively old national data [21], assessed the impact of intermittent preventive treatment during pregnancy [22] and among others. 
To date, no empirical national study has utilized the 2018-19 Uganda Malaria Indicator Survey to explore predictors of attaining three or more doses of IPTp-SP in the country. With the aim of invigorating a critical evidence-based discussion on MiP prevention and offering empirical evidence to guide MiP policies, this study investigated the rate of uptake, and the factors affecting uptake, of three or more IPTp-SP doses as recommended by the WHO. Conclusion: The study revealed that less than half of Ugandan women achieved the recommended IPTp-SP dosage during their last pregnancy preceding the 2018-19 UMIS. Community- and region-level factors are significant predictors of optimal IPTp-SP uptake. Existing IPTp-SP interventions that focus on individual-level factors alone need to be reviewed to reflect broader community- and region-level factors in order to reduce the high malaria prevalence in the country. In particular, increasing IPTp-SP uptake in the most disadvantaged communities will require close scrutiny of suitable approaches that remove barriers to access. Contextually responsive behaviour change communication interventions could also motivate women to complete the recommended dosage.
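The WHO dosing rule described in the background (at least three SP doses, started in the second trimester and spaced at least one month apart) can be expressed as a simple eligibility check. The sketch below is purely illustrative: the function name, its inputs, and the 13-week second-trimester cutoff are assumptions made for the example, not variables from the UMIS data or part of the authors' analysis.

```python
from datetime import date, timedelta

SECOND_TRIMESTER_START_WEEK = 13        # assumed cutoff for "second trimester"
MIN_DOSES = 3
MIN_SPACING = timedelta(days=28)        # "at least one month" between doses

def meets_who_iptp_sp_schedule(lmp: date, dose_dates: list) -> bool:
    """Hypothetical check that a woman's SP dose history satisfies the WHO
    recommendation: >= 3 doses, first dose not before the second trimester,
    and consecutive doses at least one month apart."""
    if len(dose_dates) < MIN_DOSES:
        return False
    doses = sorted(dose_dates)
    # First dose should not fall before the second trimester.
    if doses[0] < lmp + timedelta(weeks=SECOND_TRIMESTER_START_WEEK):
        return False
    # Consecutive doses must be at least one month apart.
    return all(b - a >= MIN_SPACING for a, b in zip(doses, doses[1:]))

# Example: doses at 14, 22 and 30 weeks of gestation -> True
lmp = date(2018, 1, 1)
print(meets_who_iptp_sp_schedule(
    lmp, [lmp + timedelta(weeks=w) for w in (14, 22, 30)]))
```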
Background: Despite the missed opportunities for intermittent preventive treatment in pregnancy with sulfadoxine-pyrimethamine (IPTp-SP) in Uganda, literature on malaria in pregnancy remains scanty. To date, no empirical national study has utilized the 2018-19 Uganda Malaria Indicator Survey to explore predictors of attaining three or more doses of IPTp-SP in the country. This study investigated the factors affecting uptake of three or more IPTp-SP doses as recommended by the World Health Organization. Methods: Data from the 2018-19 Uganda Malaria Indicator Survey (2018-19 UMIS) were analysed. Adequate uptake of intermittent preventive therapy with IPTp-SP was the dependent variable. Weighted frequencies and percentages were used to present the proportion of women with adequate IPTp-SP uptake across the independent variables. A three-level multilevel logistic regression was fitted, and the Bayesian Deviance Information Criterion (DIC) was used to assess the goodness of fit of all models. Results: Less than half of the surveyed women (45.3%) had three or more IPTp-SP doses during their last pregnancy. Women aged 15-19 had lower odds of receiving at least three IPTp-SP doses than those aged 45-49 [aOR = 0.42, CrI = 0.33-0.98]. Poor women [aOR = 0.80, CrI = 0.78-0.91] were less likely to have three or more doses of IPTp-SP than rich women. The most disadvantaged regions had a lower likelihood of three or more IPTp-SP doses [aOR = 0.59, CrI = 0.48-0.78] than the least disadvantaged regions. The variation in uptake of three or more IPTp-SP doses was larger at the community level [σ² = 1.86; CrI = 1.12-2.18] than at the regional level [σ² = 1.13; CrI = 1.06-1.20]. About 18% and 47% of the variation in IPTp-SP uptake is attributable to region- and community-level factors, respectively. Conclusions: IPTp-SP interventions need to reflect broader community- and region-level factors in order to reduce the high malaria prevalence in Uganda. Contextually responsive behaviour change communication interventions are required to motivate women to complete the recommended dosage.
9,853
451
[ 790, 563, 280, 2293, 139, 195, 620, 116, 66, 191, 732, 191, 122 ]
16
[ "iptp", "sp", "iptp sp", "100", "level", "women", "malaria", "doses", "community", "model" ]
[ "account global malaria", "risk malaria utilize", "malaria burden uganda", "malaria infection pregnancy", "malaria pregnancy rural" ]
null
[CONTENT] Malaria | Pregnancy | Public health | Maternal health | Uganda [SUMMARY]
null
[CONTENT] Malaria | Pregnancy | Public health | Maternal health | Uganda [SUMMARY]
[CONTENT] Malaria | Pregnancy | Public health | Maternal health | Uganda [SUMMARY]
[CONTENT] Malaria | Pregnancy | Public health | Maternal health | Uganda [SUMMARY]
[CONTENT] Malaria | Pregnancy | Public health | Maternal health | Uganda [SUMMARY]
[CONTENT] Antimalarials | Bayes Theorem | Dacarbazine | Drug Combinations | Female | Humans | Malaria | Patient Acceptance of Health Care | Pregnancy | Pregnancy Complications, Parasitic | Prenatal Care | Pyrimethamine | Sulfadoxine | Uganda [SUMMARY]
null
[CONTENT] Antimalarials | Bayes Theorem | Dacarbazine | Drug Combinations | Female | Humans | Malaria | Patient Acceptance of Health Care | Pregnancy | Pregnancy Complications, Parasitic | Prenatal Care | Pyrimethamine | Sulfadoxine | Uganda [SUMMARY]
[CONTENT] Antimalarials | Bayes Theorem | Dacarbazine | Drug Combinations | Female | Humans | Malaria | Patient Acceptance of Health Care | Pregnancy | Pregnancy Complications, Parasitic | Prenatal Care | Pyrimethamine | Sulfadoxine | Uganda [SUMMARY]
[CONTENT] Antimalarials | Bayes Theorem | Dacarbazine | Drug Combinations | Female | Humans | Malaria | Patient Acceptance of Health Care | Pregnancy | Pregnancy Complications, Parasitic | Prenatal Care | Pyrimethamine | Sulfadoxine | Uganda [SUMMARY]
[CONTENT] Antimalarials | Bayes Theorem | Dacarbazine | Drug Combinations | Female | Humans | Malaria | Patient Acceptance of Health Care | Pregnancy | Pregnancy Complications, Parasitic | Prenatal Care | Pyrimethamine | Sulfadoxine | Uganda [SUMMARY]
[CONTENT] account global malaria | risk malaria utilize | malaria burden uganda | malaria infection pregnancy | malaria pregnancy rural [SUMMARY]
null
[CONTENT] account global malaria | risk malaria utilize | malaria burden uganda | malaria infection pregnancy | malaria pregnancy rural [SUMMARY]
[CONTENT] account global malaria | risk malaria utilize | malaria burden uganda | malaria infection pregnancy | malaria pregnancy rural [SUMMARY]
[CONTENT] account global malaria | risk malaria utilize | malaria burden uganda | malaria infection pregnancy | malaria pregnancy rural [SUMMARY]
[CONTENT] account global malaria | risk malaria utilize | malaria burden uganda | malaria infection pregnancy | malaria pregnancy rural [SUMMARY]
[CONTENT] iptp | sp | iptp sp | 100 | level | women | malaria | doses | community | model [SUMMARY]
null
[CONTENT] iptp | sp | iptp sp | 100 | level | women | malaria | doses | community | model [SUMMARY]
[CONTENT] iptp | sp | iptp sp | 100 | level | women | malaria | doses | community | model [SUMMARY]
[CONTENT] iptp | sp | iptp sp | 100 | level | women | malaria | doses | community | model [SUMMARY]
[CONTENT] iptp | sp | iptp sp | 100 | level | women | malaria | doses | community | model [SUMMARY]
[CONTENT] mip | iptp | malaria | sp | mortality | iptp sp | pregnant | evidence | doses | country [SUMMARY]
null
[CONTENT] aor | crl | 81 | doses | 20 | sp | iptp sp | iptp | 33 | 00 [SUMMARY]
[CONTENT] interventions | dosage | iptp sp | iptp | sp | level factors | level | community region level factors | region level factors | factors [SUMMARY]
[CONTENT] iptp | sp | iptp sp | 100 | level | women | doses | malaria | community | model [SUMMARY]
[CONTENT] iptp | sp | iptp sp | 100 | level | women | doses | malaria | community | model [SUMMARY]
[CONTENT] Uganda ||| 2018- | Uganda | three ||| three | the World Health Organization [SUMMARY]
null
[CONTENT] Less than half | three | 45.3% ||| 15-19 | at least three | 45-49 ||| 0.42 | Crl | 0.33 ||| 0.80 | Crl | 0.78-0.91 | three ||| three | 0.59 | CI | 0.48-0.78 ||| three | 1 | 86 | Crl =  | 11.12-2.18 | 1.13 | Crl | 1.06-1.20 ||| About 18% | 47% [SUMMARY]
[CONTENT] Uganda ||| [SUMMARY]
[CONTENT] Uganda ||| 2018- | Uganda | three ||| three | the World Health Organization ||| 2018-2019 | Uganda | 2018-19 | UMIS ||| ||| ||| three ||| The Bayesian Deviance Information Criterion ||| Less than half | three | 45.3% ||| 15-19 | at least three | 45-49 ||| 0.42 | Crl | 0.33 ||| 0.80 | Crl | 0.78-0.91 | three ||| three | 0.59 | CI | 0.48-0.78 ||| three | 1 | 86 | Crl =  | 11.12-2.18 | 1.13 | Crl | 1.06-1.20 ||| About 18% | 47% ||| Uganda ||| [SUMMARY]
[CONTENT] Uganda ||| 2018- | Uganda | three ||| three | the World Health Organization ||| 2018-2019 | Uganda | 2018-19 | UMIS ||| ||| ||| three ||| The Bayesian Deviance Information Criterion ||| Less than half | three | 45.3% ||| 15-19 | at least three | 45-49 ||| 0.42 | Crl | 0.33 ||| 0.80 | Crl | 0.78-0.91 | three ||| three | 0.59 | CI | 0.48-0.78 ||| three | 1 | 86 | Crl =  | 11.12-2.18 | 1.13 | Crl | 1.06-1.20 ||| About 18% | 47% ||| Uganda ||| [SUMMARY]
Effect of cataract surgery on the progression of age-related macular degeneration.
36343088
Cataracts and age-related macular degeneration (AMD) are common causes of decreased vision and blindness in individuals over age 50. Although surgery is the most effective treatment for cataracts, it may accelerate the progression of AMD, so this study further evaluated the influence of cataract surgery on AMD through a systematic review and meta-analysis.
BACKGROUND
The Cochrane systematic review method was adopted, and computerized searches for cohort studies on the impact of cataract surgery on AMD were conducted in the China National Knowledge Infrastructure (CNKI), Wanfang, VIP, SinoMed, PubMed, SpringerLink, ClinicalKey, Medline, Cochrane Library, Web of Science, OVID, and Embase databases, with search timeframes up to May 2022. Meta-analysis was performed using Stata 12.0.
METHODS
A total of 8 cohort studies were included. Overall, the relative risk (RR) of AMD progression after cataract surgery was not significantly increased (RR 1.194, 95% confidence interval [CI] 0.897-1.591); however, the risk was increased when follow-up exceeded 5 years (RR 1.372, 95% CI 1.062-1.772).
RESULTS
There is a significant positive association between cataract surgery and an increased risk of AMD progression, with faster early-to-late AMD progression observed in cataract surgery patients followed for longer periods.
CONCLUSION
[ "Humans", "Middle Aged", "Cataract Extraction", "Macular Degeneration", "Cataract", "China" ]
9646653
1. Introduction
Cataracts and age-related macular degeneration (AMD) are common causes of decreased vision and blindness in individuals over age 50.[1] By 2020, the number of people with AMD globally was expected to be approximately 200 million, increasing to nearly 300 million by 2040.[2] For the past few decades, vision has been an important criterion for cataract surgery, and surgery can improve vision by removing the opaque lens.[3] Surgery is the most effective intervention for improving visual function in cataract patients, but some researchers suspect that it increases the risk of worsening of underlying AMD and vision.[3,4] This potential problem has remained unresolved for a long time. Prospective and retrospective studies[5,6] have reported a higher frequency of AMD progression in surgical eyes than in nonoperated eyes. However, other studies[7–11] have shown no significant correlation between cataract surgery and AMD. A cross-sectional study[12] also explored this topic, but the evidence remains inconclusive. In practice, cataracts and AMD often coexist, and cataract surgery may adversely affect AMD, yet delaying cataract surgery can also negatively impact a patient’s vision. Therefore, this study further evaluated the influence of cataract surgery on AMD through a systematic review and meta-analysis.
2. Materials and Methods
2.1. Search strategy A meta-analysis was performed according to the PRISMA guidelines.[13] We independently searched PubMed, SpringerLink, ClinicalKey, Medline, the Cochrane Library, Web of Science, OVID, Embase, and SinoMed from their earliest dates up to May 2022. The final search string combined “macular degeneration,” “wet macular degeneration,” “choroidal neovascularization,” “geographic atrophy,” “age-related macular degeneration” and “randomized controlled trial.” 2.2. Inclusion criteria Cohort studies comparing visual acuity in AMD patients with or without cataract surgery. 2.3. Research objects Age-related cataract patients who underwent cataract emulsification surgery or who suffered from both cataracts and AMD were included. Patients whose cataract history might affect postoperative visual acuity were excluded. 2.4. Outcome measures Whether cataract surgery increases the risk of worsening of underlying AMD and vision, expressed as the relative risk (RR). 2.5. Data extraction For each study, 2 reviewers independently evaluated the retrieved literature against the inclusion and exclusion criteria; disagreements were resolved by consultation or with the aid of a third reviewer. To ensure consistency, the following information was extracted from each eligible study into a structured Excel data table: first author, year of publication, study design, number of control and case groups, location, patients’ age, AMD classification, duration of follow-up, and the RR, odds ratio (OR) or hazard ratio (HR) with the corresponding 95% confidence interval (CI). When multiple estimates were reported, the one adjusting for the most variables was used. 2.6. Quality evaluation and statistical analysis The risk of bias of nonrandomized studies was assessed using the Newcastle-Ottawa Scale (NOS).[14] Given the low prevalence of AMD, the distinction between RR, OR and HR can generally be ignored.[15] Stata 12.0 was used to derive pooled effect estimates (risk ratios, RR) under a fixed-effects or random-effects model.[16] Heterogeneity was tested with the χ2 test at an α level of 0.1 and quantified with I2. If there was no substantial statistical heterogeneity between studies (P > .1 and I2 < 50%), a fixed-effects model was used; if P < .1 and I2 > 50% but the research indicators of each group were judged clinically consistent and suitable for pooling, a random-effects model was used to combine the effect values.[15] Sensitivity analyses were used to assess the impact of individual studies on the outcome, funnel plots quantified by Egger and Begg tests were used to examine potential publication bias, and subgroup analyses were performed by region, duration of follow-up, and AMD classification. P < .05 was considered statistically significant.
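The pooling procedure described here (inverse-variance weighting, a χ² heterogeneity test with I², and a random-effects model when heterogeneity is high) can be reproduced outside Stata. The following Python sketch illustrates DerSimonian-Laird random-effects pooling of relative risks under those standard formulas; it is not the authors' code, and the study inputs are placeholders rather than the values in Table 1.

```python
import numpy as np
from scipy import stats

def pool_relative_risks(rr, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of relative risks.
    rr, ci_low, ci_high: per-study RR and 95% CI bounds (placeholders)."""
    y = np.log(rr)                                        # log relative risks
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE recovered from the 95% CI
    w = 1.0 / se**2                                       # fixed-effect (inverse-variance) weights

    # Cochran's Q and I^2 quantify between-study heterogeneity.
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    p_het = stats.chi2.sf(q, df)

    # DerSimonian-Laird between-study variance and random-effects weights.
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_re = 1.0 / (se**2 + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    pooled_rr = np.exp(y_re)
    ci = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
    return pooled_rr, ci, i2, p_het

# Placeholder inputs (illustrative only, not the included studies):
rr = np.array([2.7, 1.3, 1.0, 1.0])
lo = np.array([1.6, 0.9, 0.6, 0.6])
hi = np.array([4.7, 1.8, 1.7, 1.6])
print(pool_relative_risks(rr, lo, hi))
```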
3. Results
3.1. Literature search After inclusion and exclusion screening and removal of reports that repeatedly analysed the same control group and study population, 8 studies were finally included in the meta-analysis. The screening flowchart is shown in Figure 1. Search and screening flowchart. 3.2. Basic characteristics of the included studies Among the 8 studies, 2 studies[16,17] were conducted in Europe, 2 studies[3,18] in Asia, and the others[6,19,20] in the Americas and Oceania. Patients in all studies were older than 42 years and were followed for 3 months to 20 years (Table 1). Basic characteristics of the included literature. NA = not reported. 3.3. Quality assessment Cohort studies were assessed using the cohort study coding manual. If the NOS score was equal to or higher than 8 points, the article was considered high quality. All included articles were high-quality literature (Table 1). 3.4. Efficacy analysis There was no significant publication bias in the studies (Egger test P = .323; Begg test P = .373). The data had high heterogeneity (I2 = 72.7%). In the random-effects model, cataract surgery and progression of AMD were not significantly associated (RR 1.194, 95% CI 0.897-1.591), and the difference was not statistically significant (Z = 1.21, P = .225) (Fig. 2). Relationship between cataract surgery and age-related macular degeneration. 3.5. Sensitivity analysis Sensitivity analysis indicated that the results were relatively stable and credible (Fig. 3): excluding the included studies one by one did not reverse the results or markedly increase heterogeneity. The subgroup analysis by region comprised four groups, namely Asia, Europe, Oceania, and the Americas, with Asia RR 2.855 (95% CI 1.704-4.781), Europe RR 1.271 (95% CI 0.914-1.769), Oceania RR 1.017 (95% CI 0.607-1.703), and the Americas RR 0.997 (95% CI 0.621-1.601) (Fig. 4); cataract surgery in Asia was significantly associated with AMD progression (Fig. 5). Sensitivity analysis. Subgroup analysis (region). Subgroup analysis (age-related macular degeneration type). Grouped by length of follow-up, the RR for follow-up of 5 years or less was 1.011 (95% CI 0.592-1.728), and the RR for follow-up of more than 5 years was 1.372 (95% CI 1.062-1.772); the association between cataract surgery and AMD progression became more pronounced with increasing follow-up time (Fig. 6). Subgroup analysis (follow-up time).
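The publication-bias result reported above (Egger P = .323) comes from a regression of standardized effects on precision. The sketch below illustrates Egger's regression test in Python under that standard formulation; it is not the authors' Stata code, and the per-study inputs are placeholders rather than the eight included studies.

```python
import numpy as np
from scipy import stats

def egger_test(log_rr, se):
    """Egger's regression test for funnel-plot asymmetry: regress
    (effect / SE) on (1 / SE) and test whether the intercept differs
    from zero. Inputs are placeholder log-RRs and standard errors."""
    y = log_rr / se                     # standardized effect sizes
    x = 1.0 / se                        # precision
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)    # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    p = 2 * stats.t.sf(abs(t_intercept), n - 2)
    return beta[0], p

# Placeholder per-study log-RRs and standard errors (illustrative only):
log_rr = np.log(np.array([2.7, 1.3, 1.0, 1.0, 1.4, 0.9, 1.2, 1.1]))
se = np.array([0.27, 0.17, 0.26, 0.24, 0.20, 0.22, 0.18, 0.25])
intercept, p_value = egger_test(log_rr, se)
print(f"Egger intercept = {intercept:.3f}, P = {p_value:.3f}")
```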
null
null
[ "2.1. Search strategy", "2.2. Inclusion criteria", "2.3. Research objects", "2.4. Outcome measures", "2.5. Data extraction", "2.6. Quality evaluation and statistical analysis", "3.1. Literature search", "3.2. Basic characteristics of the included studies", "3.3. Quality assessment", "3.4. Efficacy analysis", "3.5. Sensitivity analysis", "Author contributions" ]
[ "A meta-analysis was performed according to the PRISMA guidelines.[13] We independently searched PubMed, SpringerLink, Clinicalkey, Medline, the Cochrane library, Web of Science, OVID, Embase, and SinoMed from their earliest dates up to May 2022. The final search string was “macular degeneration,” “wet macular degeneration,” “choroidal neovascularization,” “geographic atrophy,” “age-related macular degeneration” and “randomized controlled trial.”", "Cohort study comparing visual acuity in AMD patients with or without cataract surgery.", "Age-related cataract patients who underwent cataract emulsification surgery or suffered from both cataracts and AMD were included. Those who had cataract history may affect postoperative visual acuity and were therefore excluded.", "To describe whether cataract surgery increases the risk of worsening of underlying AMD and vision (relative risk (RR)).", "For each study, 2 reviewers independently evaluated the retrieved literature against the inclusion and exclusion criteria. In case of disagreement, adjudicate by consultation or with the aid of a 3rd commentator. To ensure the consistency of the data collection of each study, we conform to the conditions of the study of the following information into a structured Excel data table: first author, year of publication, study design, number of control and case groups, location, patients’ age, AMD classification, duration of follow-up, RR, odds ratio (OR) and hazard ratio (HR) with corresponding 95% CI [credibility interval (CI)]. The one that adjusts the most variables was used when there were multiple estimates.", "The risk of bias of nonrandomized studies was assessed using the Newcastle-Ottawa scale (NOS).[14] In consideration of the low prevalence of AMD, the distinction between RR/OR/HR can generally be ignored.[15] We used Stata 12.0 to derive pooled effect estimates such as risk ratios (RR) using a fixed-effects model or a random-effects model.[16] Heterogeneity test: The χ2 test was used to analyze the statistical heterogeneity between studies with an α level of 0.1, and the magnitude of heterogeneity was quantitatively estimated according to I2. If there was statistical heterogeneity between studies (P > .1 and I2 < 50%), a fixed effect model was used; if P < .1 and I2 > 50%, but when it was clinically judged that the research indicators of each group were consistent and needed to be merged, a random effect model was selected to merge the effect values.[15] We use sensitivity analyses to assess the impact of individual studies on the outcome, funnel plots quantified by Egger and Begg tests to observe potential publication bias, and region, duration of follow-up, and AMD classification to performed subgroup analyses. P < .05 was considered statistically significant.", "After inclusion and exclusion screening and reducing the repeated synthesis of the same control group and the same study population, 8 studies were finally included in the meta-analysis. The screening flowchart is shown in Figure 1.\nSearch and screening flowchart.", "Among the 8 studies, 2 studies[16,17] were conducted in Europe; 2 studies[3,18] were conducted in Asia; and the others[6,19,20] were from the Americas and Oceania. Patients in all studies were older than 42 years and were followed for 3 months to 20 years (Table 1).\nBasic characteristics of the included literature.\nNA = not reported.", "Cohort studies were assessed using the cohort study coding manual. 
If the NOS score was equal to or higher than 8 points, the article was considered high quality. All articles were high-quality literature (Table 1).", "There was no significant publication bias in the studies (Egger test P = .323; Begg test P = .373). The data had high heterogeneity (I2 = 72.7%). In the random-effects model, cataract surgery and progression of AMD were not significantly associated (RR 1.194, 95% CI 0.897‐1.591), and the difference was not statistically significant (Z = 1.21, P = .225) (Fig. 2).\nRelationship between cataract surgery and age-related macular degeneration.", "Sensitivity analysis indicated that the sensitivity was low, and the results were relatively stable and credible (Fig. 3). We excluded the effects of the included studies on outcomes one by one, and found that none of them would lead to the reversal of results or significant increase in heterogeneity. The subgroup analysis can be divided into four groups by region, namely, Asia, Europe, Oceania, and America. Asia RR 2.855 (95% CI 1.704‐4.781), Europe RR 1.271 (95% CI 0.914‐1.769), Oceania RR 1.017 (95% CI 0.607‐1.703), Americas RR 0.997 (95% CI 0.621‐1.601) (Fig. 4). Cataract surgery in Asia was significantly associated with AMD progression by subgroup analysis (Fig. 5).\nSensitivity analysis.\nSubgroup analysis (region).\nSubgroup analysis (age-related macular degeneration type).\nGrouped by the length of follow-up, the RR of the group less than or equal to 5 years was 1.011 (95% CI 0.592‐1.728), and the RR of the group greater than 5 years was 1.372 (95% CI 1.062‐1.772). The association between cataract surgery and AMD progression became more pronounced with increasing follow-up time (Fig. 6).\nSubgroup analysis (follow-up time).", "ZYC conceived and designed the experiments. YZ and FYT performed the experiments and data analysis. YZ and ZYC provided the reagents, materials and analysis tools. ZYC wrote the manuscript. ZYC revised the work critically for important intellectual content. All authors have read and approved the final version of the manuscript.\nConceptualization: Zhaoyan Chen.\nData curation: Zhaoyan Chen.\nInvestigation: Ya Zeng.\nMethodology: Ya Zeng.\nProject administration: Fangyuan Tian.\nResources: Fangyuan Tian." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Search strategy", "2.2. Inclusion criteria", "2.3. Research objects", "2.4. Outcome measures", "2.5. Data extraction", "2.6. Quality evaluation and statistical analysis", "3. Results", "3.1. Literature search", "3.2. Basic characteristics of the included studies", "3.3. Quality assessment", "3.4. Efficacy analysis", "3.5. Sensitivity analysis", "4. Discussion", "Author contributions" ]
[ "Cataracts and age-related macular degeneration (AMD) are common causes of decreased vision and blindness in individuals over age 50.[1] By 2020, the number of people with AMD globally is expected to be approximately 200 million, increasing to nearly 300 million by 2040.[2] For the past few decades, vision has been an important criterion for cataract surgery, and cataracts can improve vision by removing the opaque lens.[3] Surgery is the most effective operation in cataract patients to improve visual function, but some professor’s suspect it increase the risk of worsening of underlying AMD and vision.[3,4] This potential problem has not been resolved for a long time. Prospective or retrospective studies[5,6] have reported a higher frequency of AMD progression in surgical eyes than in nonoperated eyes. However, some studies[7–11] have also shown that there is no significant correlation between cataract surgery and AMD. A cross-sectional[12] study also explored this topic, but it is still inconclusive. The truth is that cataracts and AMD often coexist, and there may be adverse effects between AMD and cataract surgery. However, delaying cataract surgery can also negatively impact a patient’s vision. Therefore, this study further evaluated the influence of cataract surgery for AMD through a systematic review and meta-analysis.", "2.1. Search strategy A meta-analysis was performed according to the PRISMA guidelines.[13] We independently searched PubMed, SpringerLink, Clinicalkey, Medline, the Cochrane library, Web of Science, OVID, Embase, and SinoMed from their earliest dates up to May 2022. The final search string was “macular degeneration,” “wet macular degeneration,” “choroidal neovascularization,” “geographic atrophy,” “age-related macular degeneration” and “randomized controlled trial.”\nA meta-analysis was performed according to the PRISMA guidelines.[13] We independently searched PubMed, SpringerLink, Clinicalkey, Medline, the Cochrane library, Web of Science, OVID, Embase, and SinoMed from their earliest dates up to May 2022. The final search string was “macular degeneration,” “wet macular degeneration,” “choroidal neovascularization,” “geographic atrophy,” “age-related macular degeneration” and “randomized controlled trial.”\n2.2. Inclusion criteria Cohort study comparing visual acuity in AMD patients with or without cataract surgery.\nCohort study comparing visual acuity in AMD patients with or without cataract surgery.\n2.3. Research objects Age-related cataract patients who underwent cataract emulsification surgery or suffered from both cataracts and AMD were included. Those who had cataract history may affect postoperative visual acuity and were therefore excluded.\nAge-related cataract patients who underwent cataract emulsification surgery or suffered from both cataracts and AMD were included. Those who had cataract history may affect postoperative visual acuity and were therefore excluded.\n2.4. Outcome measures To describe whether cataract surgery increases the risk of worsening of underlying AMD and vision (relative risk (RR)).\nTo describe whether cataract surgery increases the risk of worsening of underlying AMD and vision (relative risk (RR)).\n2.5. Data extraction For each study, 2 reviewers independently evaluated the retrieved literature against the inclusion and exclusion criteria. In case of disagreement, adjudicate by consultation or with the aid of a 3rd commentator. 
To ensure the consistency of the data collection of each study, we conform to the conditions of the study of the following information into a structured Excel data table: first author, year of publication, study design, number of control and case groups, location, patients’ age, AMD classification, duration of follow-up, RR, odds ratio (OR) and hazard ratio (HR) with corresponding 95% CI [credibility interval (CI)]. The one that adjusts the most variables was used when there were multiple estimates.\nFor each study, 2 reviewers independently evaluated the retrieved literature against the inclusion and exclusion criteria. In case of disagreement, adjudicate by consultation or with the aid of a 3rd commentator. To ensure the consistency of the data collection of each study, we conform to the conditions of the study of the following information into a structured Excel data table: first author, year of publication, study design, number of control and case groups, location, patients’ age, AMD classification, duration of follow-up, RR, odds ratio (OR) and hazard ratio (HR) with corresponding 95% CI [credibility interval (CI)]. The one that adjusts the most variables was used when there were multiple estimates.\n2.6. Quality evaluation and statistical analysis The risk of bias of nonrandomized studies was assessed using the Newcastle-Ottawa scale (NOS).[14] In consideration of the low prevalence of AMD, the distinction between RR/OR/HR can generally be ignored.[15] We used Stata 12.0 to derive pooled effect estimates such as risk ratios (RR) using a fixed-effects model or a random-effects model.[16] Heterogeneity test: The χ2 test was used to analyze the statistical heterogeneity between studies with an α level of 0.1, and the magnitude of heterogeneity was quantitatively estimated according to I2. If there was statistical heterogeneity between studies (P > .1 and I2 < 50%), a fixed effect model was used; if P < .1 and I2 > 50%, but when it was clinically judged that the research indicators of each group were consistent and needed to be merged, a random effect model was selected to merge the effect values.[15] We use sensitivity analyses to assess the impact of individual studies on the outcome, funnel plots quantified by Egger and Begg tests to observe potential publication bias, and region, duration of follow-up, and AMD classification to performed subgroup analyses. P < .05 was considered statistically significant.\nThe risk of bias of nonrandomized studies was assessed using the Newcastle-Ottawa scale (NOS).[14] In consideration of the low prevalence of AMD, the distinction between RR/OR/HR can generally be ignored.[15] We used Stata 12.0 to derive pooled effect estimates such as risk ratios (RR) using a fixed-effects model or a random-effects model.[16] Heterogeneity test: The χ2 test was used to analyze the statistical heterogeneity between studies with an α level of 0.1, and the magnitude of heterogeneity was quantitatively estimated according to I2. 
If there was statistical heterogeneity between studies (P > .1 and I2 < 50%), a fixed effect model was used; if P < .1 and I2 > 50%, but when it was clinically judged that the research indicators of each group were consistent and needed to be merged, a random effect model was selected to merge the effect values.[15] We use sensitivity analyses to assess the impact of individual studies on the outcome, funnel plots quantified by Egger and Begg tests to observe potential publication bias, and region, duration of follow-up, and AMD classification to performed subgroup analyses. P < .05 was considered statistically significant.", "A meta-analysis was performed according to the PRISMA guidelines.[13] We independently searched PubMed, SpringerLink, Clinicalkey, Medline, the Cochrane library, Web of Science, OVID, Embase, and SinoMed from their earliest dates up to May 2022. The final search string was “macular degeneration,” “wet macular degeneration,” “choroidal neovascularization,” “geographic atrophy,” “age-related macular degeneration” and “randomized controlled trial.”", "Cohort study comparing visual acuity in AMD patients with or without cataract surgery.", "Age-related cataract patients who underwent cataract emulsification surgery or suffered from both cataracts and AMD were included. Those who had cataract history may affect postoperative visual acuity and were therefore excluded.", "To describe whether cataract surgery increases the risk of worsening of underlying AMD and vision (relative risk (RR)).", "For each study, 2 reviewers independently evaluated the retrieved literature against the inclusion and exclusion criteria. In case of disagreement, adjudicate by consultation or with the aid of a 3rd commentator. To ensure the consistency of the data collection of each study, we conform to the conditions of the study of the following information into a structured Excel data table: first author, year of publication, study design, number of control and case groups, location, patients’ age, AMD classification, duration of follow-up, RR, odds ratio (OR) and hazard ratio (HR) with corresponding 95% CI [credibility interval (CI)]. The one that adjusts the most variables was used when there were multiple estimates.", "The risk of bias of nonrandomized studies was assessed using the Newcastle-Ottawa scale (NOS).[14] In consideration of the low prevalence of AMD, the distinction between RR/OR/HR can generally be ignored.[15] We used Stata 12.0 to derive pooled effect estimates such as risk ratios (RR) using a fixed-effects model or a random-effects model.[16] Heterogeneity test: The χ2 test was used to analyze the statistical heterogeneity between studies with an α level of 0.1, and the magnitude of heterogeneity was quantitatively estimated according to I2. If there was statistical heterogeneity between studies (P > .1 and I2 < 50%), a fixed effect model was used; if P < .1 and I2 > 50%, but when it was clinically judged that the research indicators of each group were consistent and needed to be merged, a random effect model was selected to merge the effect values.[15] We use sensitivity analyses to assess the impact of individual studies on the outcome, funnel plots quantified by Egger and Begg tests to observe potential publication bias, and region, duration of follow-up, and AMD classification to performed subgroup analyses. P < .05 was considered statistically significant.", "3.1. 
Literature search After inclusion and exclusion screening and reducing the repeated synthesis of the same control group and the same study population, 8 studies were finally included in the meta-analysis. The screening flowchart is shown in Figure 1.\nSearch and screening flowchart.\nAfter inclusion and exclusion screening and reducing the repeated synthesis of the same control group and the same study population, 8 studies were finally included in the meta-analysis. The screening flowchart is shown in Figure 1.\nSearch and screening flowchart.\n3.2. Basic characteristics of the included studies Among the 8 studies, 2 studies[16,17] were conducted in Europe; 2 studies[3,18] were conducted in Asia; and the others[6,19,20] were from the Americas and Oceania. Patients in all studies were older than 42 years and were followed for 3 months to 20 years (Table 1).\nBasic characteristics of the included literature.\nNA = not reported.\nAmong the 8 studies, 2 studies[16,17] were conducted in Europe; 2 studies[3,18] were conducted in Asia; and the others[6,19,20] were from the Americas and Oceania. Patients in all studies were older than 42 years and were followed for 3 months to 20 years (Table 1).\nBasic characteristics of the included literature.\nNA = not reported.\n3.3. Quality assessment Cohort studies were assessed using the cohort study coding manual. If the NOS score was equal to or higher than 8 points, the article was considered high quality. All articles were high-quality literature (Table 1).\nCohort studies were assessed using the cohort study coding manual. If the NOS score was equal to or higher than 8 points, the article was considered high quality. All articles were high-quality literature (Table 1).\n3.4. Efficacy analysis There was no significant publication bias in the studies (Egger test P = .323; Begg test P = .373). The data had high heterogeneity (I2 = 72.7%). In the random-effects model, cataract surgery and progression of AMD were not significantly associated (RR 1.194, 95% CI 0.897‐1.591), and the difference was not statistically significant (Z = 1.21, P = .225) (Fig. 2).\nRelationship between cataract surgery and age-related macular degeneration.\nThere was no significant publication bias in the studies (Egger test P = .323; Begg test P = .373). The data had high heterogeneity (I2 = 72.7%). In the random-effects model, cataract surgery and progression of AMD were not significantly associated (RR 1.194, 95% CI 0.897‐1.591), and the difference was not statistically significant (Z = 1.21, P = .225) (Fig. 2).\nRelationship between cataract surgery and age-related macular degeneration.\n3.5. Sensitivity analysis Sensitivity analysis indicated that the sensitivity was low, and the results were relatively stable and credible (Fig. 3). We excluded the effects of the included studies on outcomes one by one, and found that none of them would lead to the reversal of results or significant increase in heterogeneity. The subgroup analysis can be divided into four groups by region, namely, Asia, Europe, Oceania, and America. Asia RR 2.855 (95% CI 1.704‐4.781), Europe RR 1.271 (95% CI 0.914‐1.769), Oceania RR 1.017 (95% CI 0.607‐1.703), Americas RR 0.997 (95% CI 0.621‐1.601) (Fig. 4). Cataract surgery in Asia was significantly associated with AMD progression by subgroup analysis (Fig. 
5).\nSensitivity analysis.\nSubgroup analysis (region).\nSubgroup analysis (age-related macular degeneration type).\nGrouped by the length of follow-up, the RR of the group less than or equal to 5 years was 1.011 (95% CI 0.592‐1.728), and the RR of the group greater than 5 years was 1.372 (95% CI 1.062‐1.772). The association between cataract surgery and AMD progression became more pronounced with increasing follow-up time (Fig. 6).\nSubgroup analysis (follow-up time).\nSensitivity analysis indicated that the sensitivity was low, and the results were relatively stable and credible (Fig. 3). We excluded the effects of the included studies on outcomes one by one, and found that none of them would lead to the reversal of results or significant increase in heterogeneity. The subgroup analysis can be divided into four groups by region, namely, Asia, Europe, Oceania, and America. Asia RR 2.855 (95% CI 1.704‐4.781), Europe RR 1.271 (95% CI 0.914‐1.769), Oceania RR 1.017 (95% CI 0.607‐1.703), Americas RR 0.997 (95% CI 0.621‐1.601) (Fig. 4). Cataract surgery in Asia was significantly associated with AMD progression by subgroup analysis (Fig. 5).\nSensitivity analysis.\nSubgroup analysis (region).\nSubgroup analysis (age-related macular degeneration type).\nGrouped by the length of follow-up, the RR of the group less than or equal to 5 years was 1.011 (95% CI 0.592‐1.728), and the RR of the group greater than 5 years was 1.372 (95% CI 1.062‐1.772). The association between cataract surgery and AMD progression became more pronounced with increasing follow-up time (Fig. 6).\nSubgroup analysis (follow-up time).", "After inclusion and exclusion screening and reducing the repeated synthesis of the same control group and the same study population, 8 studies were finally included in the meta-analysis. The screening flowchart is shown in Figure 1.\nSearch and screening flowchart.", "Among the 8 studies, 2 studies[16,17] were conducted in Europe; 2 studies[3,18] were conducted in Asia; and the others[6,19,20] were from the Americas and Oceania. Patients in all studies were older than 42 years and were followed for 3 months to 20 years (Table 1).\nBasic characteristics of the included literature.\nNA = not reported.", "Cohort studies were assessed using the cohort study coding manual. If the NOS score was equal to or higher than 8 points, the article was considered high quality. All articles were high-quality literature (Table 1).", "There was no significant publication bias in the studies (Egger test P = .323; Begg test P = .373). The data had high heterogeneity (I2 = 72.7%). In the random-effects model, cataract surgery and progression of AMD were not significantly associated (RR 1.194, 95% CI 0.897‐1.591), and the difference was not statistically significant (Z = 1.21, P = .225) (Fig. 2).\nRelationship between cataract surgery and age-related macular degeneration.", "Sensitivity analysis indicated that the sensitivity was low, and the results were relatively stable and credible (Fig. 3). We excluded the effects of the included studies on outcomes one by one, and found that none of them would lead to the reversal of results or significant increase in heterogeneity. The subgroup analysis can be divided into four groups by region, namely, Asia, Europe, Oceania, and America. Asia RR 2.855 (95% CI 1.704‐4.781), Europe RR 1.271 (95% CI 0.914‐1.769), Oceania RR 1.017 (95% CI 0.607‐1.703), Americas RR 0.997 (95% CI 0.621‐1.601) (Fig. 4). 
Cataract surgery in Asia was significantly associated with AMD progression by subgroup analysis (Fig. 5).\nSensitivity analysis.\nSubgroup analysis (region).\nSubgroup analysis (age-related macular degeneration type).\nGrouped by the length of follow-up, the RR of the group less than or equal to 5 years was 1.011 (95% CI 0.592‐1.728), and the RR of the group greater than 5 years was 1.372 (95% CI 1.062‐1.772). The association between cataract surgery and AMD progression became more pronounced with increasing follow-up time (Fig. 6).\nSubgroup analysis (follow-up time).", "Many large-scale epidemiological studies have not clearly identified whether cataract surgery is such an intervention can increase the risk of worsening of AMD progression. Our results show that worsening of AMD progression after cataract surgery are the most common in Asia patients, which is similar to Wang JJ’s study published in 2012,[20] it showed that the incidence of neovascular AMD within 5 years after cataract surgery was 2‐3 times higher than that of nonsurgical patients (HR 2.68; 95% CI 1.55‐4.66; P < .001). In a cross-sectional study of North Africans, Lazreg et al[12] found that patients with cataract surgery were more likely to develop AMD than those without surgery (OR: 2.69; 95% CI 1.96‐3.70; P < .0001). Darker iris color can increase the risk of worsening of AMD progression too.[21]\nWe divided AMD into early AMD and advanced AMD (including wet AMD and dry AMD) thought severity, and it was found that cataract surgery did not exacerbate all types of AMD.\nThe studies included differed significantly in the duration of follow-up, and no selection date was proposed. When the follow-up time was >5 years, there was a significant positive association between cataract surgery and increase the risk of worsening of AMD progression. Studies have found that the risk of neovascular AMD differs in different clinical subtypes of AMD. Combining the findings of Beaver Dam and Blue Mountain, cataract surgery was found to be associated with an increased 5-year incidence of neovascular AMD.[22] Klein et al also found that the OR value of advanced AMD is higher in patients with cataract surgery more than 5 years.[23] Epidemiological studies have provided the incidence of AMD after up to 20 years of follow-up, while clinical trials have been followed for no more than 1 year.\nIn general, the studies included varied widely in study populations and study durations, which were limited in comparability. However, this study included only cohort studies, and the association between cataract surgery and AMD progression became stronger with follow-up beyond 5 years. Many studies with long-term follow-up of AMD patients found that patients who underwent cataract surgery had a higher risk of AMD progression than those who did not.[6,24] In a cross-sectional study combining 3 population studies, those who reported surgery at 5 years or more had a 2.1-fold (95% CI 1.0 vs 4.6) chance of developing advanced AMD compared with those who had surgery less than 5 years. The odds of developing advanced AMD were slightly increased, but the increase was not statistically significant (OR 1.4, 95% CI 0.7 2.6).[25] A population-based cohort of older Australians reported that the long-term (10-year) risk of developing advanced AMD in the eye undergoing surgery was significantly higher than baseline.[6] Ferris et al[26] found that the risk of advanced AMD progression in patients with moderate AMD was as high as 50% within 5 years. 
In conclusion, AMD patients need regular fundus examinations no matter with or without cataract surgery.\nBoth cataracts and AMD are strongly age-related, and multiple studies have found that cataracts and AMD may share the same epidemiological risk factors, but this study did not find that cataract surgery and the risk of worsening of AMD progression are directly related or have a clear connection.[27,28] Although most of the included observational studies did control for different familiar confounders, but cataract status is still residual confounding for the possible interactions of AMD, cataract, and cataract surgery do not all appear to be consistent. However, all of these are insufficient to explain the association between cataract surgery and the risk of worsening of AMD progression in this study.\nThis review has the following limitations: First, Persuading cataract patients to randomize surgery is difficult, so it’s hard to conduct randomized controlled trials (RCTs),which with the highest level of evidence, prospective or retrospective studies can provide the most powerful evidence under the circumstances with a series of obstacles of RCTs.[28] In addition, there are some less standardized studies, the incidence of AMD was artificially reduced for some patients with unclear fundus cataracts cause were excluded. Therefore, all those may affect the statistical analysis of the association of AMD with other parameters.\nIn conclusion, there is still a significant positive correlation between cataract surgery and increase the risk of worsening of AMD progression, and faster progression of early-to-late AMD found in cataract surgery with longer follow-up of patients. However, based on the result of this review, we cannot draw conclusions about the effect of cataract surgery on the risk of worsening of AMD progression. In conclusion, the studies we included were highly heterogeneous in terms of study population and study period, and were highly heterogeneous, so their comparability was limited. more clinical trials (with sufficient statistical power) are needed, which should ideally be able to adequately control for confounding variables such as age and cataract severity, and subgroup analyses can be set up to make the study more precise Regarding the need for more clinical trials (with sufficient statistical power) to demonstrate this hypothesis by adequately controlling for confounding variables such as age and cataract severity.", "ZYC conceived and designed the experiments. YZ and FYT performed the experiments and data analysis. YZ and ZYC provided the reagents, materials and analysis tools. ZYC wrote the manuscript. ZYC revised the work critically for important intellectual content. All authors have read and approved the final version of the manuscript.\nConceptualization: Zhaoyan Chen.\nData curation: Zhaoyan Chen.\nInvestigation: Ya Zeng.\nMethodology: Ya Zeng.\nProject administration: Fangyuan Tian.\nResources: Fangyuan Tian." ]
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", null ]
[ "age-related macular degeneration", "cataract", "meta-analysis", "surgery", "systematic review" ]
1. Introduction: Cataracts and age-related macular degeneration (AMD) are common causes of decreased vision and blindness in individuals over age 50.[1] The number of people with AMD globally was projected to reach approximately 200 million by 2020 and nearly 300 million by 2040.[2] For the past few decades, vision has been an important criterion for cataract surgery, and removal of the opaque lens can improve vision.[3] Surgery is the most effective intervention for improving visual function in cataract patients, but some investigators suspect that it increases the risk of worsening of underlying AMD and vision.[3,4] This question has remained unresolved for a long time. Prospective and retrospective studies[5,6] have reported a higher frequency of AMD progression in operated eyes than in nonoperated eyes, whereas other studies[7–11] have shown no significant association between cataract surgery and AMD. A cross-sectional study[12] also explored this topic, but the evidence remains inconclusive. Cataracts and AMD often coexist, and there may be adverse interactions between AMD and cataract surgery; however, delaying cataract surgery can also negatively impact a patient's vision. Therefore, this study further evaluated the influence of cataract surgery on AMD through a systematic review and meta-analysis.
2. Materials and Methods: 2.1. Search strategy: The meta-analysis was performed according to the PRISMA guidelines.[13] We independently searched PubMed, SpringerLink, Clinicalkey, Medline, the Cochrane Library, Web of Science, OVID, Embase, and SinoMed from their earliest dates up to May 2022. The search terms were "macular degeneration," "wet macular degeneration," "choroidal neovascularization," "geographic atrophy," "age-related macular degeneration," and "randomized controlled trial."
2.2. Inclusion criteria: Cohort studies comparing visual acuity in AMD patients with or without cataract surgery.
2.3. Research objects: Age-related cataract patients who underwent cataract phacoemulsification surgery or who suffered from both cataracts and AMD were included. Patients whose history might affect postoperative visual acuity were excluded.
2.4. Outcome measures: Whether cataract surgery increases the risk of worsening of underlying AMD and vision, expressed as the relative risk (RR).
2.5. Data extraction: For each study, 2 reviewers independently evaluated the retrieved literature against the inclusion and exclusion criteria. Disagreements were resolved by consultation or with the aid of a third reviewer. To ensure consistency of data collection, the following information from each eligible study was entered into a structured Excel data table: first author, year of publication, study design, number of control and case groups, location, patients' age, AMD classification, duration of follow-up, and RR, odds ratio (OR), and hazard ratio (HR) with corresponding 95% confidence intervals (CI). When multiple estimates were reported, the estimate adjusted for the most variables was used.
2.6. Quality evaluation and statistical analysis: The risk of bias of nonrandomized studies was assessed using the Newcastle-Ottawa scale (NOS).[14] Because the prevalence of AMD is low, the distinction between RR, OR, and HR can generally be ignored.[15] We used Stata 12.0 to derive pooled effect estimates (RR) using a fixed-effects or a random-effects model.[16] Heterogeneity test: the χ2 test was used to assess statistical heterogeneity between studies at an α level of 0.1, and the magnitude of heterogeneity was quantified with I2. If there was no substantial statistical heterogeneity between studies (P > .1 and I2 < 50%), a fixed-effects model was used; if P < .1 and I2 > 50% but the research indicators of each group were judged clinically consistent and suitable for pooling, a random-effects model was used to pool the effect estimates.[15] Sensitivity analyses were used to assess the impact of individual studies on the outcome, funnel plots quantified by Egger and Begg tests were used to examine potential publication bias, and subgroup analyses were performed by region, duration of follow-up, and AMD classification. P < .05 was considered statistically significant.
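The pooling procedure described in 2.6 (inverse-variance weighting of log risk ratios, I2 for heterogeneity, and a DerSimonian-Laird random-effects model when heterogeneity is high) can be illustrated with a short sketch. The snippet below is a minimal illustration in Python, not the Stata code used in the review, and the study-level values are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study risk ratios and 95% CIs (illustration only).
rr = np.array([2.68, 1.27, 1.02, 0.99])
ci_low = np.array([1.55, 0.91, 0.61, 0.62])
ci_high = np.array([4.66, 1.77, 1.70, 1.60])

# Work on the log scale; back-calculate standard errors from the CI width.
y = np.log(rr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1 / se**2                               # fixed-effect (inverse-variance) weights

# Cochran's Q and I^2 quantify between-study heterogeneity.
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird estimate of the between-study variance tau^2,
# then random-effects weights and the pooled RR with a 95% CI.
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (se**2 + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
z = y_re / se_re
p = 2 * (1 - stats.norm.cdf(abs(z)))

print(f"I^2 = {i2:.1f}%, pooled RR = {np.exp(y_re):.3f} "
      f"(95% CI {np.exp(y_re - 1.96 * se_re):.3f}-{np.exp(y_re + 1.96 * se_re):.3f}), "
      f"Z = {z:.2f}, P = {p:.3f}")
```

When I2 is below 50% and the heterogeneity test is nonsignificant, the same code with tau2 fixed at zero reduces to the fixed-effects pooling described above.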
3. Results: 3.1. Literature search: After inclusion and exclusion screening and after removing repeated syntheses of the same control group and the same study population, 8 studies were finally included in the meta-analysis. The screening flowchart is shown in Figure 1 (search and screening flowchart).
3.2. Basic characteristics of the included studies: Among the 8 studies, 2 studies[16,17] were conducted in Europe, 2 studies[3,18] in Asia, and the others[6,19,20] in the Americas and Oceania. Patients in all studies were older than 42 years and were followed for 3 months to 20 years (Table 1, basic characteristics of the included literature; NA = not reported).
3.3. Quality assessment: Cohort studies were assessed using the cohort study coding manual. An article was considered high quality if its NOS score was 8 points or higher. All included articles were of high quality (Table 1).
3.4. Efficacy analysis: There was no significant publication bias among the studies (Egger test P = .323; Begg test P = .373). The data had high heterogeneity (I2 = 72.7%). In the random-effects model, cataract surgery and progression of AMD were not significantly associated (RR 1.194, 95% CI 0.897-1.591; Z = 1.21, P = .225) (Fig. 2; relationship between cataract surgery and age-related macular degeneration).
3.5. Sensitivity analysis: Sensitivity analysis indicated that sensitivity was low, and the results were relatively stable and credible (Fig. 3; sensitivity analysis). Excluding the included studies one by one showed that no single study reversed the results or significantly increased the heterogeneity. The subgroup analysis by region comprised four groups: Asia RR 2.855 (95% CI 1.704-4.781), Europe RR 1.271 (95% CI 0.914-1.769), Oceania RR 1.017 (95% CI 0.607-1.703), and the Americas RR 0.997 (95% CI 0.621-1.601) (Fig. 4; subgroup analysis by region). Cataract surgery in Asia was significantly associated with AMD progression in the subgroup analysis (Fig. 5; subgroup analysis by age-related macular degeneration type). Grouped by length of follow-up, the RR for follow-up of 5 years or less was 1.011 (95% CI 0.592-1.728) and the RR for follow-up longer than 5 years was 1.372 (95% CI 1.062-1.772); the association between cataract surgery and AMD progression became more pronounced with increasing follow-up time (Fig. 6; subgroup analysis by follow-up time).
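The leave-one-out check reported in 3.5 can be sketched the same way: drop each study in turn and re-pool the remaining effects, then verify that no omission reverses the pooled estimate. The following is a minimal Python illustration on hypothetical inputs, not the authors' analysis; the helper function repeats the DerSimonian-Laird pooling shown earlier.

```python
import numpy as np

def pool_random_effects(y, se):
    """DerSimonian-Laird random-effects pooling of log risk ratios."""
    w = 1 / se**2
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

# Hypothetical study-level log RRs and standard errors (illustration only).
y = np.log(np.array([2.68, 1.27, 1.02, 0.99]))
se = np.array([0.28, 0.17, 0.26, 0.24])

# Leave-one-out: omit each study in turn and check whether the pooled
# estimate or its CI changes enough to reverse the overall conclusion.
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    rr, lo, hi = pool_random_effects(y[keep], se[keep])
    print(f"omitting study {i + 1}: RR = {rr:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```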
In conclusion, AMD patients need regular fundus examinations no matter with or without cataract surgery. Both cataracts and AMD are strongly age-related, and multiple studies have found that cataracts and AMD may share the same epidemiological risk factors, but this study did not find that cataract surgery and the risk of worsening of AMD progression are directly related or have a clear connection.[27,28] Although most of the included observational studies did control for different familiar confounders, but cataract status is still residual confounding for the possible interactions of AMD, cataract, and cataract surgery do not all appear to be consistent. However, all of these are insufficient to explain the association between cataract surgery and the risk of worsening of AMD progression in this study. This review has the following limitations: First, Persuading cataract patients to randomize surgery is difficult, so it’s hard to conduct randomized controlled trials (RCTs),which with the highest level of evidence, prospective or retrospective studies can provide the most powerful evidence under the circumstances with a series of obstacles of RCTs.[28] In addition, there are some less standardized studies, the incidence of AMD was artificially reduced for some patients with unclear fundus cataracts cause were excluded. Therefore, all those may affect the statistical analysis of the association of AMD with other parameters. In conclusion, there is still a significant positive correlation between cataract surgery and increase the risk of worsening of AMD progression, and faster progression of early-to-late AMD found in cataract surgery with longer follow-up of patients. However, based on the result of this review, we cannot draw conclusions about the effect of cataract surgery on the risk of worsening of AMD progression. In conclusion, the studies we included were highly heterogeneous in terms of study population and study period, and were highly heterogeneous, so their comparability was limited. more clinical trials (with sufficient statistical power) are needed, which should ideally be able to adequately control for confounding variables such as age and cataract severity, and subgroup analyses can be set up to make the study more precise Regarding the need for more clinical trials (with sufficient statistical power) to demonstrate this hypothesis by adequately controlling for confounding variables such as age and cataract severity. Author contributions: ZYC conceived and designed the experiments. YZ and FYT performed the experiments and data analysis. YZ and ZYC provided the reagents, materials and analysis tools. ZYC wrote the manuscript. ZYC revised the work critically for important intellectual content. All authors have read and approved the final version of the manuscript. Conceptualization: Zhaoyan Chen. Data curation: Zhaoyan Chen. Investigation: Ya Zeng. Methodology: Ya Zeng. Project administration: Fangyuan Tian. Resources: Fangyuan Tian.
Background: Cataracts and age-related macular degeneration (AMD) are common causes of decreased vision and blindness in individuals over age 50. Although surgery is the most effective treatment for cataracts, it may accelerate the progression of AMD, so this study further evaluated the influence of cataract surgery on AMD through a systematic review and meta-analysis. Methods: The Cochrane systematic evaluation method was adopted, and computer searches were conducted in the China Knowledge Network, Wanfang, Vipul, SinoMed, PubMed, SpringerLink, Clinicalkey, Medline, Cochrane Library, Web of Science, OVID, and Embase databases for cohort studies on the impact of cataract surgery on AMD, with search timeframes up to May 2022. Meta-analysis was performed using Stata 12.0. Results: A total of 8 cohort studies were included. The relative risk (RR) of AMD progression after cataract surgery was not significantly increased overall, RR 1.194 [95% confidence interval (CI) 0.897-1.591]; however, the risk was increased when follow-up exceeded 5 years after surgery, RR 1.372 (95% CI 1.062-1.772). Conclusions: There is a significant positive correlation between cataract surgery and an increased risk of worsening AMD progression, with faster progression from early to late AMD found after cataract surgery in patients with longer follow-up.
null
null
4,577
263
[ 86, 14, 35, 23, 139, 225, 47, 69, 43, 96, 249, 95 ]
16
[ "amd", "cataract", "studies", "surgery", "cataract surgery", "analysis", "study", "rr", "ci", "95 ci" ]
[ "cataracts amd coexist", "amd years cataract", "amd cataract surgery", "surgery cataracts amd", "effects amd cataract" ]
null
null
[CONTENT] age-related macular degeneration | cataract | meta-analysis | surgery | systematic review [SUMMARY]
[CONTENT] age-related macular degeneration | cataract | meta-analysis | surgery | systematic review [SUMMARY]
[CONTENT] age-related macular degeneration | cataract | meta-analysis | surgery | systematic review [SUMMARY]
null
[CONTENT] age-related macular degeneration | cataract | meta-analysis | surgery | systematic review [SUMMARY]
null
[CONTENT] Humans | Middle Aged | Cataract Extraction | Macular Degeneration | Cataract | China [SUMMARY]
[CONTENT] Humans | Middle Aged | Cataract Extraction | Macular Degeneration | Cataract | China [SUMMARY]
[CONTENT] Humans | Middle Aged | Cataract Extraction | Macular Degeneration | Cataract | China [SUMMARY]
null
[CONTENT] Humans | Middle Aged | Cataract Extraction | Macular Degeneration | Cataract | China [SUMMARY]
null
[CONTENT] cataracts amd coexist | amd years cataract | amd cataract surgery | surgery cataracts amd | effects amd cataract [SUMMARY]
[CONTENT] cataracts amd coexist | amd years cataract | amd cataract surgery | surgery cataracts amd | effects amd cataract [SUMMARY]
[CONTENT] cataracts amd coexist | amd years cataract | amd cataract surgery | surgery cataracts amd | effects amd cataract [SUMMARY]
null
[CONTENT] cataracts amd coexist | amd years cataract | amd cataract surgery | surgery cataracts amd | effects amd cataract [SUMMARY]
null
[CONTENT] amd | cataract | studies | surgery | cataract surgery | analysis | study | rr | ci | 95 ci [SUMMARY]
[CONTENT] amd | cataract | studies | surgery | cataract surgery | analysis | study | rr | ci | 95 ci [SUMMARY]
[CONTENT] amd | cataract | studies | surgery | cataract surgery | analysis | study | rr | ci | 95 ci [SUMMARY]
null
[CONTENT] amd | cataract | studies | surgery | cataract surgery | analysis | study | rr | ci | 95 ci [SUMMARY]
null
[CONTENT] vision | amd | surgery | cataract | cataract surgery | cataracts | improve | eyes | million | surgery amd [SUMMARY]
[CONTENT] effect | model | study | heterogeneity | risk | amd | cataract | rr | i2 | statistical [SUMMARY]
[CONTENT] analysis | subgroup analysis | 95 ci | ci | 95 | rr | fig | studies | subgroup | years [SUMMARY]
null
[CONTENT] cataract | amd | surgery | studies | cataract surgery | study | rr | analysis | risk | ci [SUMMARY]
null
[CONTENT] AMD | age 50 ||| AMD | AMD [SUMMARY]
[CONTENT] the China Knowledge Network | Wanfang | Vipul | SinoMed | PubMed | SpringerLink | Clinicalkey | Medline | Cochrane Library | OVID | Embase | AMD | May 2022 ||| [SUMMARY]
[CONTENT] 8 ||| AMD | 1.194 ||| 95% | CI | 0.897-1.591 | more than 5 years | 1.372 | 95% | CI | 1.062-1.772 [SUMMARY]
null
[CONTENT] AMD | age 50 ||| AMD | AMD ||| the China Knowledge Network | Wanfang | Vipul | SinoMed | PubMed | SpringerLink | Clinicalkey | Medline | Cochrane Library | OVID | Embase | AMD | May 2022 ||| ||| ||| 8 ||| AMD | 1.194 ||| 95% | CI | 0.897-1.591 | more than 5 years | 1.372 | 95% | CI | 1.062-1.772 ||| AMD | AMD [SUMMARY]
null
Comparison of the systemic bioavailability of mometasone furoate after oral inhalation from a mometasone furoate/formoterol fumarate metered-dose inhaler versus a mometasone furoate dry-powder inhaler in patients with chronic obstructive pulmonary disease.
23525511
Coadministration of mometasone furoate (MF) and formoterol fumarate (F) produces additive effects for improving symptoms and lung function and reduces exacerbations in patients with asthma and chronic obstructive pulmonary disease (COPD). The present study assessed the relative systemic exposure to MF and characterized the pharmacokinetics of MF and formoterol in patients with COPD.
BACKGROUND
This was a single-center, randomized, open-label, multiple-dose, three-period, three-treatment crossover study. The following three treatments were self-administered by patients (n = 14) with moderate-to-severe COPD: MF 400 μg/F 10 μg via a metered-dose inhaler (MF/F MDI; DULERA(®)/ZENHALE(®)) without a spacer device, MF/F MDI with a spacer, or MF 400 μg via a dry-powder inhaler (DPI; ASMANEX(®) TWISTHALER(®)) twice daily for 5 days. Plasma samples for MF and formoterol assay were obtained predose and at prespecified time points after the last (morning) dose on day 5 of each period of the crossover. The geometric mean ratio (GMR) as a percent and the corresponding 90% confidence intervals (CI) were calculated for treatment comparisons.
METHODS
Systemic MF exposure was lower (GMR 77%; 90% CI 58, 102) following administration by MF/F MDI compared to MF DPI. Additionally, least squares geometric mean systemic exposures of MF and formoterol were lower (GMR 72%; 90% CI 61, 84) and (GMR 62%; 90% CI 52, 74), respectively, following administration by MF/F MDI in conjunction with a spacer compared to MF/F MDI without a spacer. MF/F MDI had a similar adverse experience profile as that seen with MF DPI. All adverse experiences were either mild or moderate in severity; no serious adverse experience was reported.
RESULTS
Systemic MF exposures were lower following administration by MF/F MDI compared with MF DPI. Additionally, systemic MF and formoterol exposures were lower following administration by MF/F MDI with a spacer versus without a spacer. The magnitude of these differences with respect to systemic exposure was not clinically relevant.
CONCLUSION
[ "Aged", "Anti-Inflammatory Agents", "Bronchodilator Agents", "Cross-Over Studies", "Drug Combinations", "Dry Powder Inhalers", "Ethanolamines", "Female", "Formoterol Fumarate", "Humans", "Male", "Metered Dose Inhalers", "Middle Aged", "Mometasone Furoate", "Pregnadienediols", "Pulmonary Disease, Chronic Obstructive" ]
3595976
Demographic and baseline characteristics
A total of 14 patients (five men and nine women) aged 45 to 72 years (mean, 62.7 years) were treated. Of these, 13 (93%) were white and one (7%) was black/African-American. Subjects had a mean (range) 50 (20–100) pack-year smoking history and a mean (range) predicted post-bronchodilator FEV1 of 58% (42%–73%). All 14 patients completed the study.
Statistical analysis
Summary statistics including means and coefficients of variation were provided for MF concentration data at each time point and the derived pharmacokinetic parameters. The primary objective was to compare the MF AUC0–12 hr and Cmax values for the MF/F MDI combination with those for MF DPI monotherapy (ie, treatment A versus treatment C, respectively). The AUC0–12 hr and Cmax values were log-transformed and analyzed using an appropriate analysis of variance (ANOVA) model for a three-period crossover design extracting sources of variation due to treatment, patient, sequence, and period. As a secondary objective, the MF AUC0–12 hr and Cmax values for the MF/F MDI combination administered without and with a spacer were compared (ie, treatment A versus treatment B). The geometric means of treatment A to treatment C or treatment A to treatment B were expressed as a ratio (GMR), and the corresponding 90% CIs were computed. In addition, the log-transformed AUC0–12 hr and Cmax values for formoterol were analyzed similarly for treatments A and B, and comparisons between treatment A and treatment B were performed using a ratio and the corresponding 90% CI. Assuming an intrapatient variability of 28% (based on the variability observed in a similarly designed crossover study in healthy adults), this study with a targeted sample size of 12 patients should have been able to detect a 30% difference in MF pharmacokinetics between treatments B or C versus A with 80% power and a 90% CI. No inferential analysis of safety data was planned. The number of patients reporting any AEs, the occurrence of specific AEs, and discontinuation due to AEs were tabulated.
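The treatment comparison described above (log-transformed AUC0–12 hr and Cmax, a crossover ANOVA, and a geometric mean ratio with a 90% CI) can be illustrated with a simplified paired analysis on the log scale. The sketch below is a rough Python approximation on hypothetical values; it ignores the sequence and period terms of the full ANOVA model actually used in the study.

```python
import numpy as np
from scipy import stats

# Hypothetical steady-state MF AUC0-12hr values for the same subjects under
# two treatments (MDI vs DPI); illustration only, not study data.
auc_mdi = np.array([210, 180, 260, 150, 300, 195, 170, 240, 205, 160, 280, 220])
auc_dpi = np.array([250, 230, 300, 190, 340, 260, 200, 310, 240, 210, 330, 270])

# Analyze on the log scale: within-subject differences of log(AUC).
d = np.log(auc_mdi) - np.log(auc_dpi)
n = len(d)
mean_d = d.mean()
se_d = d.std(ddof=1) / np.sqrt(n)

# 90% CI on the log-scale difference, back-transformed to a geometric mean ratio.
t_crit = stats.t.ppf(0.95, df=n - 1)
gmr = np.exp(mean_d) * 100
ci = np.exp([mean_d - t_crit * se_d, mean_d + t_crit * se_d]) * 100

print(f"GMR (MDI/DPI) = {gmr:.0f}% (90% CI {ci[0]:.0f}, {ci[1]:.0f})")
```

A full crossover ANOVA would estimate the treatment contrast after removing patient, sequence, and period effects, but the back-transformation of the log-scale estimate and its 90% CI into a GMR percentage proceeds exactly as in this sketch.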
Pharmacokinetic results
Mean plasma concentration-time profiles showed prolonged absorption of MF following administration of MF by MDI (Figure 1). Median MF Tmax values were 3.00, 2.00, and 1.00 hours for treatments A, B, and C, respectively (Table 1). Median Tmax values of formoterol were 1.02 and 0.52 hours for Treatments A and B, respectively (Table 1). For the comparison of the MF exposure following inhalation of the DPI and MDI (primary objective), the ANOVA model included all treatments. Intrapatient variabilities for MF AUC0–12 hr and Cmax of 44% and 47%, respectively, were obtained from the model (Table 2). Despite this large observed variability, MF Cmax values were significantly different between treatments, with the mean Cmax for the MDI being 44% lower than that for DPI (Table 2). Mean MF AUC0–12 hr following inhalation by MDI alone was 23% lower than the mean value following administration of MF by DPI. The large CI reflects the small sample size and the larger than expected intrapatient variability (expected variability was approximately 28%; observed was 44%). A secondary objective of the study was to compare MF and formoterol exposure following inhalation using the MDI with and without a spacer. Following inhalation by MDI with the spacer, MF exposures based on AUC0–12 hr were lower than for MDI alone (Table 2). In the initial analysis with all treatments (Table 2), the ratio estimates for MF AUC0–12 hr and Cmax included 100%. However, because the larger intrapatient variability was related to the DPI treatment group, a reanalysis without treatment C showed that the AUC0–12 hr and Cmax values were 28% and 18% lower, respectively, for MDI with a spacer compared with MDI alone (Table 3). Intrapatient variability for MF AUC0–12 hr and Cmax values in the original three-treatment ANOVA ranged from 44% and 47%, respectively, to 23% and 24%, when comparing only the MDI data. Mean plots of formoterol plasma concentrations showed rapid absorption of F (Figure 2). For formoterol, AUC0–12 hr and Cmax values were 38% and 20% lower, respectively, for MDI with the spacer compared to MDI alone (Table 3).
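The AUC0–12 hr values compared here were obtained noncompartmentally using a linear trapezoidal rule for ascending concentrations and a log trapezoidal rule for descending concentrations (as described under Pharmacokinetic assessments). A minimal sketch of that calculation on hypothetical concentration–time data, not the study's WinNonlin output, is shown below.

```python
import numpy as np

def auc_linear_up_log_down(t, c):
    """AUC by linear trapezoid for rising and log trapezoid for falling segments."""
    auc = 0.0
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        c1, c2 = c[i], c[i + 1]
        if c2 >= c1 or c1 <= 0 or c2 <= 0:
            auc += dt * (c1 + c2) / 2                 # linear trapezoid
        else:
            auc += dt * (c1 - c2) / np.log(c1 / c2)   # log trapezoid
    return auc

# Hypothetical day-5 plasma concentrations (pg/mL) at the MF sampling times
# (predose and 0.5, 1, 2, 4, 8, 12 h); illustration only.
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
c = np.array([6.0, 9.5, 12.0, 14.0, 12.5, 9.0, 6.5])

print(f"AUC0-12hr = {auc_linear_up_log_down(t, c):.1f} pg*hr/mL")
print(f"Cmax = {c.max():.1f} pg/mL at Tmax = {t[c.argmax()]:.1f} h")
```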
Conclusion
This multiple-dose pharmacokinetic study demonstrated that systemic exposure to MF was lower following administration by MF/F MDI compared to MF DPI in patients with moderate-to-severe COPD. Additionally, systemic exposures of MF and formoterol were lower following administration by MF/F MDI with an AeroChamber Plus® Valved Holding Chamber spacer compared to MF/F MDI without this spacer. The magnitude of the differences in systemic exposure to MF seen with MF/F MDI versus MF DPI, as well as with MF/F MDI administered with a spacer versus MF/F MDI without a spacer, was not clinically relevant. There also was no clinically relevant difference in systemic exposure to formoterol seen with MF/F MDI administered in the presence and absence of a spacer. Finally, MF/F delivered twice daily by MDI was generally well tolerated among patients with moderate-to-severe COPD in this short-term study.
[ "Introduction", "Methods", "Patient selection", "Study design", "Pharmacokinetic assessments", "Safety assessments", "Results", "Safety", "Conclusion" ]
[ "Current treatment guidelines for the long-term management of chronic obstructive pulmonary disease (COPD) and asthma recommend, for certain degrees of severity, combination therapy with an inhaled corticosteroid (ICS) and a long-acting β2-agonist.1–5 Clinically, coadministration of an ICS and long-acting β2-agonist has been shown to have additive effects for improving symptoms and lung function and reducing the frequency of disease exacerbations.1–5\nMometasone furoate (MF) is a potent ICS with relatively low potential to cause significant systemic side effects typically associated with oral corticosteroids, such as hypothalamic-pituitary-adrenal axis suppression.6–8 MF has been shown to produce clinical benefit for treating asthma and COPD by reducing symptoms and exacerbations, and improving lung function, with no significant safety risks.6–10 Formoterol fumarate (F) is a potent, selective, long-acting β2-agonist that exerts a preferential effect on β2-adrenergic receptors of bronchial smooth muscle.11 Bronchodilator activity observed in patients with asthma after F inhalation is characterized by a rapid onset (within 3 minutes of inhalation) and long duration (at least 12 hours) of action.11 F is approved for maintenance treatment in patients with asthma and COPD. Merck Sharp & Dohme Corp (Whitehouse Station, NJ, USA) and Novartis (East Hanover, NJ, USA) have jointly developed a fixed-dose combination product combining MF and F in a metered-dose inhaler device (MF/F MDI; marketed as DULERA® in the United States and ZENHALE® in Canada and elsewhere; Merck & Co. Inc., Whitehouse Station, NJ, USA) for the treatment of asthma. MF/F MDI is also in late-stage clinical development for the treatment of patients with COPD. In addition to producing additive beneficial effects on symptoms and lung function, an MF/F combination product is expected to be more convenient for patients with asthma or COPD.\nDue to the effects of decreases in lung function on exposure to inhaled products in patients with COPD, the systemic exposure and pharmacokinetics of MF and formoterol in these patients were expected to differ from healthy volunteers, as has been reported for other ICSs.12,13 A previous pharmacokinetic study showed lower mean (area under the curve [AUC]; 25%; geometric mean ratio [GMR]: 75%; 90% confidence interval [CI]: 61%–91%; mean maximum concentration [Cmax] 39% [GMR: 61%; 90% CI: 49%–75%]) systemic exposure of MF after steady-state dosing from the MDI compared to the dry-powder inhaler (DPI) ASMANEX® TWISTHALER® ([MF] Merck Sharp & Dohme Corp) in healthy patients (data on file, Merck Sharp & Dohme Corp, 2010). Similar differences in systemic drug exposure with MDIs and DPIs have been reported for other ICSs.14,15\nAs part of the clinical development program for MF/F MDI for the treatment of patients with moderate to severe COPD, the present study was conducted primarily to define the systemic exposure of MF when administered using a new combination product containing MF and F in an MDI device (MF/F MDI) versus the approved DPI monotherapy product (MF DPI) for which there is extensive clinical use and safety experience.16 As a secondary objective, this study examined the potential effect of using a spacer device in conjunction with the MF/F MDI on MF and formoterol exposure (versus use of MDI device without a spacer). 
In addition, this study provided descriptive multiple-dose pharmacokinetic data for MF and formoterol in patients with COPD.", " Patient selection This study enrolled men and women between the ages of 40 and 75 years with the following inclusion criteria: moderate to severe COPD (as defined by post-bronchodilator forced expiratory volume in 1 second [FEV1] ≥30% and <80%) within 1 week prior to the baseline visit (day 1); current smoker or ex-smoker with at least 10 pack-years of smoking history; and receiving only albuterol/salbutamol for relief of symptoms for at least 2 weeks prior to randomization. Subjects were excluded from participation in this study based on the following criteria: increase in absolute volume of FEV1 of ≥400 mL within 30 minutes after administration of four inhalations of albuterol/salbutamol (total dose of 360 to 400 μg), or nebulized 2.5 mg albuterol/salbutamol; inability to use the MF/F MDI device or the MF DPI device; female patients who were pregnant, intended to become pregnant (within 3 months of ending the study), or were breastfeeding; history of any infectious disease within 4 weeks prior to drug administration; or tested positive for hepatitis B surface antigen, hepatitis C antibodies, or human immunodeficiency virus.
Study design This was a randomized, open-label, multiple-dose, three-period, three-treatment crossover study conducted at a single study center. Subjects were screened within 21 days prior to dosing.
All patients were trained in the use of the devices and proper inhalation techniques using placebo, MDI, and DPI. If necessary, patients could be retrained in the proper use of these inhaler devices prior to the start of each period. Subjects were instructed by the investigator regarding when and how to take the daily treatment.
Subjects were admitted to the study center on day 1 to confirm continued eligibility and for baseline assessments. The investigator or designee reviewed the inclusion/exclusion criteria and recorded adverse events (AEs) and medications taken within the previous 14 days. A repeat drug and pregnancy screen, laboratory safety tests (hematology, blood chemistry, urinalysis, and electrocardiography), and vital signs also were performed on day 1.
On day 1 of the first treatment period, after a 10-hour overnight fast, each patient was randomized to a crossover treatment sequence according to a computer-generated randomization schedule provided by the sponsor, and then received the first dose. The following three treatments were self-administered by patients: treatment A, MF 400 μg/F 10 μg twice a day (BID) via MDI oral inhalation (two puffs × 200 μg/5 μg MF/F per burst combination product); treatment B, MF 400 μg/F 10 μg BID via MDI oral inhalation and in conjunction with a spacer device (two puffs × 200 μg/5 μg MF/F per burst combination product); and treatment C, MF 400 μg BID via DPI oral inhalation (two puffs × 200 μg MF per oral inhalation from the MF DPI). Subjects self-administered the treatments under observation of the site staff for 4 days every 12 hours between approximately 8 am and 9 am and again between approximately 8 pm and 9 pm, and a single morning dose on day 5. After taking their treatment, subjects were instructed to rinse their mouth with water and then spit it out (not swallow it). Based on a previous pharmacokinetic study where the effective half-life (based on accumulation) of MF after administration from an MDI was approximately 25 hours, and since dosing for five half-lives is typically required to attain steady state conditions, a dosing period of 5 days was chosen for this study. Subjects were confined to the study center for the duration of treatment in each period of the crossover. After a 1-week washout between dosing, patients returned to start confinement for the next treatment period.
MF/F MDIs and placebo MDI devices (for the training of the inhalation technique) were manufactured by 3M Health Care Ltd (Loughborough, Leicestershire, UK, for Schering-Plough Corp [now Merck Sharp & Dohme Corp], Kenilworth, NJ, USA). The spacer device (AeroChamber Plus® Valved Holding Chamber; Monaghan Medical Corp, Plattsburgh, NY, USA) and MF DPI were obtained commercially by the site. Placebo DPI to match the MF DPI was manufactured and supplied by Schering-Plough Corp (now Merck Sharp & Dohme Corp).
The study population was identified from the surrounding urban and suburban communities of Madisonville, KY, USA. Subjects were recruited from a pool of volunteers obtained from the database of Commonwealth Biomedical Research, LLC and by word-of-mouth advertisement. The protocol and informed consent were approved by Independent Investigational Review Board, Plantation, FL, USA, as required by the US Code of Federal Regulations and the internal Standard Operating Procedures of the sponsor. The study was conducted in accordance with good clinical practice and was approved by the appropriate institutional review boards and regulatory agencies. Written consent was obtained from all patients prior to the conduct of any study related procedures.
Pharmacokinetic assessments Pharmacokinetic parameters were calculated via noncompartmental analysis.
Day 5 plasma concentrations and actual sampling times were used to calculate the following pharmacokinetic parameters of MF and formoterol: area under the plasma concentration–time curve from 0 to 12 hours (AUC0–12 hr), the maximum plasma concentration (Cmax), trough plasma concentration (Ctrough), and time to maximum plasma concentration (Tmax). Pharmacokinetic parameters were calculated using WinNonlin® software (v 5.0.1; Pharsight Corporation, Mountain View, CA, USA). AUC0–12 hr was calculated using the linear trapezoidal method for ascending concentrations and log trapezoidal method for descending concentrations. Values for Cmax, Ctrough, and Tmax were obtained by visual inspection of the blood concentration data.\nFor the determination of MF plasma concentration, whole blood was collected in K3-ethylenediaminetetraacetic acid-containing tubes at predose (0 hours) and at 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 1 mL sample aliquot was fortified with 50 μL of internal standard (mometasone furoate-d3). Analytes were isolated through liquid–liquid extraction with 5.0 mL of 15:85 ethyl acetate/hexane, v/v. The extracts were further purified by solid phase extraction with Bond Elut® LRC NH2 cartridges (Agilent Technologies, Strathaven, Scotland). Analytes were eluted with 6.0 mL of 65:35 ethyl acetate/hexane, v/v. The solvent was evaporated under a nitrogen stream at approximately 50°C, and the remaining residue was reconstituted with 125 μL of methanol and 75 μL of 20 μM sodium acetate. The final extract was analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with an electrospray ionization source (AB SCIEX, Framingham, MA, USA) and having a lower limit of quantitation (LLOQ) of 0.250 pg/mL. The LC system employed a Luna (Phenomenex, Torrance, CA, USA) C18 3 × 150 mm (3 μm particle size) column and gradient elution. The retention time for both MF and the internal standard was approximately 14 minutes. The range of the standard curve using a 1.00 mL sample of human plasma was 0.250 to 25.0 pg/mL. At the LLOQ (0.250 pg/mL) for the MF assay, the between-day mean (standard deviation) was 0.253 (0.023) pg/mL, mean % bias was 1.10%, and the mean coefficient of variation was 8.95%.\nFor the determination of the plasma concentration of formoterol, whole blood was collected in tubes containing lithium heparin with eserine (physostigmine) hemisulfate as a preservative at predose (0 hours) and at 0.167 (10 minutes), 0.25, 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 500 μL plasma sample aliquot was fortified with 20 μL of internal standard (formoterol-d6) and 200 μL of 2% ammonium hydroxide. Extraction solvent was added, and the tubes were vortexed and centrifuged. The aqueous layer was frozen and the organic layer was decanted to a clean tube containing keeper solution. The organic solution was evaporated and the remaining residue was reconstituted with 200 μL of reconstitution solution. A 40-μL volume of the final extract was injected and analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with a turbo ion spray source (AB SCIEX) and having an LLOQ of 1.45 pmol/L. 
The LC system employed a 10 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 loading column (Thermo Fisher Scientific, Waltham, MA, USA) and a 50 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 analytical column. Formoterol and the internal standard were separated from the other plasma components using a mobile phase A consisting of 0.1% formic acid in 10 mM ammonium formate and a mobile phase B consisting of a 95:5 acetonitrile:mobile phase A. The retention times for both formoterol and the internal standard were approximately 2.5 minutes. The range of the standard curve using a 500 μL sample of human plasma was 1.45 pmol/L to 727 pmol/L. At the LLOQ (1.45 pmol/L) for the formoterol assay, the between-day mean (standard deviation) was 1.44 (0.135) pmol/L, mean % bias was −1.23%, and the mean % coefficient of variation was 9.37%.
Safety assessments Clinical laboratory tests, vital signs, electrocardiography, and physical examinations were assessed at screening, and clinical laboratory tests and vital signs at prespecified times during the study. Subjects were continually monitored for possible occurrence of AEs. At study conclusion (day 6 of period 3), vital signs, clinical laboratory tests, and physical examinations were repeated.
Statistical analysis Summary statistics including means and coefficients of variation were provided for MF concentration data at each time point and the derived pharmacokinetic parameters. The primary objective was to compare the MF AUC0–12 hr and Cmax values for the MF/F MDI combination with those for MF DPI monotherapy (ie, treatment A versus treatment C, respectively). The AUC0–12 hr and Cmax values were log-transformed and analyzed using an appropriate analysis of variance (ANOVA) model for a three-period crossover design extracting sources of variation due to treatment, patient, sequence, and period.
As a secondary objective, the MF AUC0–12 hr and Cmax values for the MF/F MDI combination administered without and with a spacer were compared (ie, treatment A versus treatment B). The geometric means of treatment A to treatment C or treatment A to treatment B were expressed as a ratio (GMR), and the corresponding 90% CIs were computed. In addition, the log-transformed AUC0–12 hr and Cmax values for formoterol were analyzed similarly for treatments A and B, and comparisons between treatment A and treatment B were performed using a ratio and the corresponding 90% CI.
Assuming an intrapatient variability of 28% (based on the variability observed in a similarly designed crossover study in healthy adults), this study with a targeted sample size of 12 patients should have been able to detect a 30% difference in MF pharmacokinetics between treatments B or C versus A with 80% power and a 90% CI.
No inferential analysis of safety data was planned. The number of patients reporting any AEs, the occurrence of specific AEs, and discontinuation due to AEs were tabulated.", " Demographic and baseline characteristics A total of 14 patients (five men and nine women) aged 45 to 72 years (mean, 62.7 years) were treated. Of these, 13 (93%) were white and one (7%) was black/African-American. Subjects had a mean (range) 50 (20–100) pack-year smoking history and a mean (range) predicted post-bronchodilator FEV1 of 58% (42%–73%). All 14 patients completed the study.
Pharmacokinetic results Mean plasma concentration-time profiles showed prolonged absorption of MF following administration of MF by MDI (Figure 1). Median MF Tmax values were 3.00, 2.00, and 1.00 hours for treatments A, B, and C, respectively (Table 1).
Median Tmax values of formoterol were 1.02 and 0.52 hours for Treatments A and B, respectively (Table 1).\nFor the comparison of the MF exposure following inhalation of the DPI and MDI (primary objective), the ANOVA model included all treatments. Intrapatient variabilities for MF AUC0–12 hr and Cmax of 44% and 47%, respectively, were obtained from the model (Table 2). Despite this large observed variability, MF Cmax values were significantly different between treatments, with the mean Cmax for the MDI being 44% lower than that for DPI (Table 2). Mean MF AUC0–12 hr following inhalation by MDI alone was 23% lower than the mean value following administration of MF by DPI. The large CI reflects the small sample size and the larger than expected intrapatient variability (expected variability was approximately 28%; observed was 44%).\nA secondary objective of the study was to compare MF and formoterol exposure following inhalation using the MDI with and without a spacer. Following inhalation by MDI with the spacer, MF exposures based on AUC0–12 hr were lower than for MDI alone (Table 2). In the initial analysis with all treatments (Table 2), the ratio estimates for MF AUC0–12 hr and Cmax included 100%. However, because the larger intrapatient variability was related to the DPI treatment group, a reanalysis without treatment C showed that the AUC0–12 hr and Cmax values were 28% and 18% lower, respectively, for MDI with a spacer compared with MDI alone (Table 3). Intrapatient variability for MF AUC0–12 hr and Cmax values in the original three-treatment ANOVA ranged from 44% and 47%, respectively, to 23% and 24%, when comparing only the MDI data.\nMean plots of formoterol plasma concentrations showed rapid absorption of F (Figure 2). For formoterol, AUC0–12 hr and Cmax values were 38% and 20% lower, respectively, for MDI with the spacer compared to MDI alone (Table 3).\nMean plasma concentration-time profiles showed prolonged absorption of MF following administration of MF by MDI (Figure 1). Median MF Tmax values were 3.00, 2.00, and 1.00 hours for treatments A, B, and C, respectively (Table 1). Median Tmax values of formoterol were 1.02 and 0.52 hours for Treatments A and B, respectively (Table 1).\nFor the comparison of the MF exposure following inhalation of the DPI and MDI (primary objective), the ANOVA model included all treatments. Intrapatient variabilities for MF AUC0–12 hr and Cmax of 44% and 47%, respectively, were obtained from the model (Table 2). Despite this large observed variability, MF Cmax values were significantly different between treatments, with the mean Cmax for the MDI being 44% lower than that for DPI (Table 2). Mean MF AUC0–12 hr following inhalation by MDI alone was 23% lower than the mean value following administration of MF by DPI. The large CI reflects the small sample size and the larger than expected intrapatient variability (expected variability was approximately 28%; observed was 44%).\nA secondary objective of the study was to compare MF and formoterol exposure following inhalation using the MDI with and without a spacer. Following inhalation by MDI with the spacer, MF exposures based on AUC0–12 hr were lower than for MDI alone (Table 2). In the initial analysis with all treatments (Table 2), the ratio estimates for MF AUC0–12 hr and Cmax included 100%. 
However, because the larger intrapatient variability was related to the DPI treatment group, a reanalysis without treatment C showed that the AUC0–12 hr and Cmax values were 28% and 18% lower, respectively, for MDI with a spacer compared with MDI alone (Table 3). Intrapatient variability for MF AUC0–12 hr and Cmax values in the original three-treatment ANOVA ranged from 44% and 47%, respectively, to 23% and 24%, when comparing only the MDI data.\nMean plots of formoterol plasma concentrations showed rapid absorption of F (Figure 2). For formoterol, AUC0–12 hr and Cmax values were 38% and 20% lower, respectively, for MDI with the spacer compared to MDI alone (Table 3).\n Safety A total of ten (71.4%) patients reported at least one AE during the study: seven (50%) during treatment A (MF 400 μg + F 10 μg via MDI), three (21.4%) during treatment B (MF 400 μg + F 10 μg via MDI with spacer), and four (28.6%) during treatment C (MF 400 μg via DPI). The most common AEs were headache and dyspepsia, occurring in eight (57.1%) and two (14.3%) patients, respectively. All reported AEs were either mild or moderate in severity. No death or serious AEs occurred during the study.\nThere were no clinically significant changes in blood chemistry or hematologic parameters, vital signs, or electrocardiography in any of the treatment groups.", "A total of ten (71.4%) patients reported at least one AE during the study: seven (50%) during treatment A (MF 400 μg + F 10 μg via MDI), three (21.4%) during treatment B (MF 400 μg + F 10 μg via MDI with spacer), and four (28.6%) during treatment C (MF 400 μg via DPI). The most common AEs were headache and dyspepsia, occurring in eight (57.1%) and two (14.3%) patients, respectively. All reported AEs were either mild or moderate in severity. No death or serious AEs occurred during the study.\nThere were no clinically significant changes in blood chemistry or hematologic parameters, vital signs, or electrocardiography in any of the treatment groups.", "This multiple-dose pharmacokinetic study demonstrated that systemic exposure to MF was lower following administration by MF/F MDI compared to MF DPI in patients with moderate-to-severe COPD. Additionally, systemic exposures of MF and formoterol were lower following administration by MF/F MDI with an AeroChamber Plus® Valved Holding Chamber spacer compared to MF/F MDI without this spacer. The magnitude of the differences in systemic exposure to MF seen with MF/F MDI versus MF DPI, as well as with MF/F MDI administered with a spacer versus MF/F MDI without a spacer, was not clinically relevant. There also was no clinically relevant difference in systemic exposure to formoterol seen with MF/F MDI administered in the presence and absence of a spacer. Finally, MF/F delivered twice daily by MDI was generally well tolerated among patients with moderate-to-severe COPD in this short-term study." ]
[ null, "methods", null, "methods", null, null, "results", null, null ]
[ "Introduction", "Methods", "Patient selection", "Study design", "Pharmacokinetic assessments", "Safety assessments", "Statistical analysis", "Results", "Demographic and baseline characteristics", "Pharmacokinetic results", "Safety", "Discussion", "Conclusion" ]
[ "Current treatment guidelines for the long-term management of chronic obstructive pulmonary disease (COPD) and asthma recommend, for certain degrees of severity, combination therapy with an inhaled corticosteroid (ICS) and a long-acting β2-agonist.1–5 Clinically, coadministration of an ICS and long-acting β2-agonist has been shown to have additive effects for improving symptoms and lung function and reducing the frequency of disease exacerbations.1–5\nMometasone furoate (MF) is a potent ICS with relatively low potential to cause significant systemic side effects typically associated with oral corticosteroids, such as hypothalamic-pituitary-adrenal axis suppression.6–8 MF has been shown to produce clinical benefit for treating asthma and COPD by reducing symptoms and exacerbations, and improving lung function, with no significant safety risks.6–10 Formoterol fumarate (F) is a potent, selective, long-acting β2-agonist that exerts a preferential effect on β2-adrenergic receptors of bronchial smooth muscle.11 Bronchodilator activity observed in patients with asthma after F inhalation is characterized by a rapid onset (within 3 minutes of inhalation) and long duration (at least 12 hours) of action.11 F is approved for maintenance treatment in patients with asthma and COPD. Merck Sharp & Dohme Corp (Whitehouse Station, NJ, USA) and Novartis (East Hanover, NJ, USA) have jointly developed a fixed-dose combination product combining MF and F in a metered-dose inhaler device (MF/F MDI; marketed as DULERA® in the United States and ZENHALE® in Canada and elsewhere; Merck & Co. Inc., Whitehouse Station, NJ, USA) for the treatment of asthma. MF/F MDI is also in late-stage clinical development for the treatment of patients with COPD. In addition to producing additive beneficial effects on symptoms and lung function, an MF/F combination product is expected to be more convenient for patients with asthma or COPD.\nDue to the effects of decreases in lung function on exposure to inhaled products in patients with COPD, the systemic exposure and pharmacokinetics of MF and formoterol in these patients were expected to differ from healthy volunteers, as has been reported for other ICSs.12,13 A previous pharmacokinetic study showed lower mean (area under the curve [AUC]; 25%; geometric mean ratio [GMR]: 75%; 90% confidence interval [CI]: 61%–91%; mean maximum concentration [Cmax] 39% [GMR: 61%; 90% CI: 49%–75%]) systemic exposure of MF after steady-state dosing from the MDI compared to the dry-powder inhaler (DPI) ASMANEX® TWISTHALER® ([MF] Merck Sharp & Dohme Corp) in healthy patients (data on file, Merck Sharp & Dohme Corp, 2010). Similar differences in systemic drug exposure with MDIs and DPIs have been reported for other ICSs.14,15\nAs part of the clinical development program for MF/F MDI for the treatment of patients with moderate to severe COPD, the present study was conducted primarily to define the systemic exposure of MF when administered using a new combination product containing MF and F in an MDI device (MF/F MDI) versus the approved DPI monotherapy product (MF DPI) for which there is extensive clinical use and safety experience.16 As a secondary objective, this study examined the potential effect of using a spacer device in conjunction with the MF/F MDI on MF and formoterol exposure (versus use of MDI device without a spacer). 
In addition, this study provided descriptive multiple-dose pharmacokinetic data for MF and formoterol in patients with COPD.", " Patient selection This study enrolled men and women between the ages of 40 and 75 years with the following inclusion criteria: moderate to severe COPD (as defined by post-bronchodilator forced expiratory volume in 1 second [FEV1] ≥30% and <80%) within 1 week prior to the baseline visit (day 1); current smoker or ex-smoker with at least 10 pack-years of smoking history; and receiving only albuterol/salbutamol for relief of symptoms for at least 2 weeks prior to randomization. Subjects were excluded from participation in this study based on the following criteria: increase in absolute volume of FEV1 of ≥400 mL within 30 minutes after administration of four inhalations of albuterol/salbutamol (total dose of 360 to 400 μg), or nebulized 2.5 mg albuterol/salbutamol; inability to use the MF/F MDI device or the MF DPI device; female patients who were pregnant, intended to become pregnant (within 3 months of ending the study), or were breastfeeding; history of any infectious disease within 4 weeks prior to drug administration; or tested positive for hepatitis B surface antigen, hepatitis C antibodies, or human immunodeficiency virus.\nThis study enrolled men and women between the ages of 40 and 75 years with the following inclusion criteria: moderate to severe COPD (as defined by post-bronchodilator forced expiratory volume in 1 second [FEV1] ≥30% and <80%) within 1 week prior to the baseline visit (day 1); current smoker or ex-smoker with at least 10 pack-years of smoking history; and receiving only albuterol/salbutamol for relief of symptoms for at least 2 weeks prior to randomization. Subjects were excluded from participation in this study based on the following criteria: increase in absolute volume of FEV1 of ≥400 mL within 30 minutes after administration of four inhalations of albuterol/salbutamol (total dose of 360 to 400 μg), or nebulized 2.5 mg albuterol/salbutamol; inability to use the MF/F MDI device or the MF DPI device; female patients who were pregnant, intended to become pregnant (within 3 months of ending the study), or were breastfeeding; history of any infectious disease within 4 weeks prior to drug administration; or tested positive for hepatitis B surface antigen, hepatitis C antibodies, or human immunodeficiency virus.\n Study design This was a randomized, open-label, multiple-dose, three-period, three-treatment crossover study conducted at a single study center. Subjects were screened within 21 days prior to dosing.\nAll patients were trained in the use of the devices and proper inhalation techniques using placebo, MDI, and DPI. If necessary, patients could be retrained in the proper use of these inhaler devices prior to the start of each period. Subjects were instructed by the investigator regarding when and how to take the daily treatment.\nSubjects were admitted to the study center on day 1 to confirm continued eligibility and for baseline assessments. The investigator or designee reviewed the inclusion/exclusion criteria and recorded adverse events (AEs) and medications taken within the previous 14 days. A repeat drug and pregnancy screen, laboratory safety tests (hematology, blood chemistry, urinalysis, and electrocardiography), and vital signs also were performed on day 1. 
On day 1 of the first treatment period, after a 10-hour overnight fast, each patient was randomized to a crossover treatment sequence according to a computer-generated randomization schedule provided by the sponsor, and then received the first dose. The following three treatments were self-administered by patients: treatment A, MF 400 μg/F 10 μg twice a day (BID) via MDI oral inhalation (two puffs × 200 μg/5 μg MF/F per burst combination product); treatment B, MF 400 μg/F 10 μg BID via MDI oral inhalation and in conjunction with a spacer device (two puffs × 200 μg/5 μg MF/F per burst combination product); and treatment C, MF 400 μg BID via DPI oral inhalation (two puffs × 200 μg MF per oral inhalation from the MF DPI). Subjects self-administered the treatments under observation of the site staff for 4 days every 12 hours between approximately 8 am and 9 am and again between approximately 8 pm and 9 pm, and a single morning dose on day 5. After taking their treatment, subjects were instructed to rinse their mouth with water and then spit it out (not swallow it). Based on a previous pharmacokinetic study where the effective half-life (based on accumulation) of MF after administration from an MDI was approximately 25 hours, and since dosing for five half-lives is typically required to attain steady state conditions, a dosing period of 5 days was chosen for this study. Subjects were confined to the study center for the duration of treatment in each period of the crossover. After a 1-week washout between dosing, patients returned to start confinement for the next treatment period.\nMF/F MDIs and placebo MDI devices (for the training of the inhalation technique) were manufactured by 3M Health Care Ltd (Loughborough, Leicestershire, UK, for Schering-Plough Corp [now Merck Sharp & Dohme Corp], Kenilworth, NJ, USA). The spacer device (AeroChamber Plus® Valved Holding Chamber; Monaghan Medical Corp, Plattsburgh, NY, USA) and MF DPI were obtained commercially by the site. Placebo DPI to match the MF DPI was manufactured and supplied by Schering-Plough Corp (now Merck Sharp & Dohme Corp).\nThe study population was identified from the surrounding urban and suburban communities of Madisonville, KY, USA. Subjects were recruited from a pool of volunteers obtained from the database of Commonwealth Biomedical Research, LLC and by word-of-mouth advertisement. The protocol and informed consent were approved by Independent Investigational Review Board, Plantation, FL, USA, as required by the US Code of Federal Regulations and the internal Standard Operating Procedures of the sponsor. The study was conducted in accordance with good clinical practice and was approved by the appropriate institutional review boards and regulatory agencies. Written consent was obtained from all patients prior to the conduct of any study related procedures.\nThis was a randomized, open-label, multiple-dose, three-period, three-treatment crossover study conducted at a single study center. Subjects were screened within 21 days prior to dosing.\nAll patients were trained in the use of the devices and proper inhalation techniques using placebo, MDI, and DPI. If necessary, patients could be retrained in the proper use of these inhaler devices prior to the start of each period. Subjects were instructed by the investigator regarding when and how to take the daily treatment.\nSubjects were admitted to the study center on day 1 to confirm continued eligibility and for baseline assessments. 
The investigator or designee reviewed the inclusion/exclusion criteria and recorded adverse events (AEs) and medications taken within the previous 14 days. A repeat drug and pregnancy screen, laboratory safety tests (hematology, blood chemistry, urinalysis, and electrocardiography), and vital signs also were performed on day 1. On day 1 of the first treatment period, after a 10-hour overnight fast, each patient was randomized to a crossover treatment sequence according to a computer-generated randomization schedule provided by the sponsor, and then received the first dose. The following three treatments were self-administered by patients: treatment A, MF 400 μg/F 10 μg twice a day (BID) via MDI oral inhalation (two puffs × 200 μg/5 μg MF/F per burst combination product); treatment B, MF 400 μg/F 10 μg BID via MDI oral inhalation and in conjunction with a spacer device (two puffs × 200 μg/5 μg MF/F per burst combination product); and treatment C, MF 400 μg BID via DPI oral inhalation (two puffs × 200 μg MF per oral inhalation from the MF DPI). Subjects self-administered the treatments under observation of the site staff for 4 days every 12 hours between approximately 8 am and 9 am and again between approximately 8 pm and 9 pm, and a single morning dose on day 5. After taking their treatment, subjects were instructed to rinse their mouth with water and then spit it out (not swallow it). Based on a previous pharmacokinetic study where the effective half-life (based on accumulation) of MF after administration from an MDI was approximately 25 hours, and since dosing for five half-lives is typically required to attain steady state conditions, a dosing period of 5 days was chosen for this study. Subjects were confined to the study center for the duration of treatment in each period of the crossover. After a 1-week washout between dosing, patients returned to start confinement for the next treatment period.\nMF/F MDIs and placebo MDI devices (for the training of the inhalation technique) were manufactured by 3M Health Care Ltd (Loughborough, Leicestershire, UK, for Schering-Plough Corp [now Merck Sharp & Dohme Corp], Kenilworth, NJ, USA). The spacer device (AeroChamber Plus® Valved Holding Chamber; Monaghan Medical Corp, Plattsburgh, NY, USA) and MF DPI were obtained commercially by the site. Placebo DPI to match the MF DPI was manufactured and supplied by Schering-Plough Corp (now Merck Sharp & Dohme Corp).\nThe study population was identified from the surrounding urban and suburban communities of Madisonville, KY, USA. Subjects were recruited from a pool of volunteers obtained from the database of Commonwealth Biomedical Research, LLC and by word-of-mouth advertisement. The protocol and informed consent were approved by Independent Investigational Review Board, Plantation, FL, USA, as required by the US Code of Federal Regulations and the internal Standard Operating Procedures of the sponsor. The study was conducted in accordance with good clinical practice and was approved by the appropriate institutional review boards and regulatory agencies. Written consent was obtained from all patients prior to the conduct of any study related procedures.\n Pharmacokinetic assessments Pharmacokinetic parameters were calculated via noncompartmental analysis. 
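For orientation, the core of this noncompartmental derivation (the linear trapezoidal rule over ascending concentrations and the log trapezoidal rule over descending concentrations, with Cmax and Tmax read directly from the observed profile, as detailed in the remainder of this subsection) can be sketched in a few lines of Python. The actual parameters were computed with WinNonlin; the function and the concentration values below are illustrative only and are not study data.

```python
import numpy as np

def auc_linear_up_log_down(times, conc):
    """AUC by the linear trapezoidal rule where the concentration rises (or is flat)
    and the log trapezoidal rule where it falls and both endpoints are positive."""
    times = np.asarray(times, dtype=float)
    conc = np.asarray(conc, dtype=float)
    auc = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        c1, c2 = conc[i], conc[i + 1]
        if c2 < c1 and c1 > 0 and c2 > 0:
            auc += dt * (c1 - c2) / np.log(c1 / c2)   # log trapezoid on the descending limb
        else:
            auc += dt * (c1 + c2) / 2.0               # linear trapezoid otherwise
    return auc

# Hypothetical steady-state MF concentrations (pg/mL) over one 12-hour dosing interval
times = [0, 0.5, 1, 2, 4, 8, 12]
conc = [4.0, 5.5, 7.0, 8.5, 7.5, 5.0, 4.2]

auc_0_12 = auc_linear_up_log_down(times, conc)
cmax = max(conc)
tmax = times[conc.index(cmax)]
print(f"AUC0-12hr = {auc_0_12:.1f} pg*hr/mL, Cmax = {cmax} pg/mL, Tmax = {tmax} hr")
```

A production implementation would also need rules for predose and below-LLOQ concentrations, which the validated software handles and this sketch omits.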
Day 5 plasma concentrations and actual sampling times were used to calculate the following pharmacokinetic parameters of MF and formoterol: area under the plasma concentration–time curve from 0 to 12 hours (AUC0–12 hr), the maximum plasma concentration (Cmax), trough plasma concentration (Ctrough), and time to maximum plasma concentration (Tmax). Pharmacokinetic parameters were calculated using WinNonlin® software (v 5.0.1; Pharsight Corporation, Mountain View, CA, USA). AUC0–12 hr was calculated using the linear trapezoidal method for ascending concentrations and log trapezoidal method for descending concentrations. Values for Cmax, Ctrough, and Tmax were obtained by visual inspection of the blood concentration data.\nFor the determination of MF plasma concentration, whole blood was collected in K3-ethylenediaminetetraacetic acid-containing tubes at predose (0 hours) and at 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 1 mL sample aliquot was fortified with 50 μL of internal standard (mometasone furoate-d3). Analytes were isolated through liquid–liquid extraction with 5.0 mL of 15:85 ethyl acetate/hexane, v/v. The extracts were further purified by solid phase extraction with Bond Elut® LRC NH2 cartridges (Agilent Technologies, Strathaven, Scotland). Analytes were eluted with 6.0 mL of 65:35 ethyl acetate/hexane, v/v. The solvent was evaporated under a nitrogen stream at approximately 50°C, and the remaining residue was reconstituted with 125 μL of methanol and 75 μL of 20 μM sodium acetate. The final extract was analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with an electrospray ionization source (AB SCIEX, Framingham, MA, USA) and having a lower limit of quantitation (LLOQ) of 0.250 pg/mL. The LC system employed a Luna (Phenomenex, Torrance, CA, USA) C18 3 × 150 mm (3 μm particle size) column and gradient elution. The retention time for both MF and the internal standard was approximately 14 minutes. The range of the standard curve using a 1.00 mL sample of human plasma was 0.250 to 25.0 pg/mL. At the LLOQ (0.250 pg/mL) for the MF assay, the between-day mean (standard deviation) was 0.253 (0.023) pg/mL, mean % bias was 1.10%, and the mean coefficient of variation was 8.95%.\nFor the determination of the plasma concentration of formoterol, whole blood was collected in tubes containing lithium heparin with eserine (physostigmine) hemisulfate as a preservative at predose (0 hours) and at 0.167 (10 minutes), 0.25, 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 500 μL plasma sample aliquot was fortified with 20 μL of internal standard (formoterol-d6) and 200 μL of 2% ammonium hydroxide. Extraction solvent was added, and the tubes were vortexed and centrifuged. The aqueous layer was frozen and the organic layer was decanted to a clean tube containing keeper solution. The organic solution was evaporated and the remaining residue was reconstituted with 200 μL of reconstitution solution. A 40-μL volume of the final extract was injected and analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with a turbo ion spray source (AB SCIEX) and having an LLOQ of 1.45 pmol/L. 
The LC system employed a 10 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 loading column (Thermo Fisher Scientific, Waltham, MA, USA) and a 50 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 analytical column. Formoterol and the internal standard were separated from the other plasma components using a mobile phase A consisting of 0.1% formic acid in 10 mM ammonium formate and a mobile phase B consisting of a 95:5 acetonitrile:mobile Phase A. The retention time for both formoterol and the internal standard were approximately 2.5 minutes. The range of the standard curve using a 500 μL sample of human plasma was 1.45 pmol/L to 727 pmol/L. At the LLOQ (1.45 pmol/L) for the formoterol assay, the between-day mean (standard deviation) was 1.44 (0.135) pmol/L, mean % bias was −1.23%, and the mean % coefficient of variation was 9.37%.\nPharmacokinetic parameters were calculated via noncompartmental analysis. Day 5 plasma concentrations and actual sampling times were used to calculate the following pharmacokinetic parameters of MF and formoterol: area under the plasma concentration–time curve from 0 to 12 hours (AUC0–12 hr), the maximum plasma concentration (Cmax), trough plasma concentration (Ctrough), and time to maximum plasma concentration (Tmax). Pharmacokinetic parameters were calculated using WinNonlin® software (v 5.0.1; Pharsight Corporation, Mountain View, CA, USA). AUC0–12 hr was calculated using the linear trapezoidal method for ascending concentrations and log trapezoidal method for descending concentrations. Values for Cmax, Ctrough, and Tmax were obtained by visual inspection of the blood concentration data.\nFor the determination of MF plasma concentration, whole blood was collected in K3-ethylenediaminetetraacetic acid-containing tubes at predose (0 hours) and at 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 1 mL sample aliquot was fortified with 50 μL of internal standard (mometasone furoate-d3). Analytes were isolated through liquid–liquid extraction with 5.0 mL of 15:85 ethyl acetate/hexane, v/v. The extracts were further purified by solid phase extraction with Bond Elut® LRC NH2 cartridges (Agilent Technologies, Strathaven, Scotland). Analytes were eluted with 6.0 mL of 65:35 ethyl acetate/hexane, v/v. The solvent was evaporated under a nitrogen stream at approximately 50°C, and the remaining residue was reconstituted with 125 μL of methanol and 75 μL of 20 μM sodium acetate. The final extract was analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with an electrospray ionization source (AB SCIEX, Framingham, MA, USA) and having a lower limit of quantitation (LLOQ) of 0.250 pg/mL. The LC system employed a Luna (Phenomenex, Torrance, CA, USA) C18 3 × 150 mm (3 μm particle size) column and gradient elution. The retention time for both MF and the internal standard was approximately 14 minutes. The range of the standard curve using a 1.00 mL sample of human plasma was 0.250 to 25.0 pg/mL. 
At the LLOQ (0.250 pg/mL) for the MF assay, the between-day mean (standard deviation) was 0.253 (0.023) pg/mL, mean % bias was 1.10%, and the mean coefficient of variation was 8.95%.\nFor the determination of the plasma concentration of formoterol, whole blood was collected in tubes containing lithium heparin with eserine (physostigmine) hemisulfate as a preservative at predose (0 hours) and at 0.167 (10 minutes), 0.25, 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 500 μL plasma sample aliquot was fortified with 20 μL of internal standard (formoterol-d6) and 200 μL of 2% ammonium hydroxide. Extraction solvent was added, and the tubes were vortexed and centrifuged. The aqueous layer was frozen and the organic layer was decanted to a clean tube containing keeper solution. The organic solution was evaporated and the remaining residue was reconstituted with 200 μL of reconstitution solution. A 40-μL volume of the final extract was injected and analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with a turbo ion spray source (AB SCIEX) and having an LLOQ of 1.45 pmol/L. The LC system employed a 10 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 loading column (Thermo Fisher Scientific, Waltham, MA, USA) and a 50 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 analytical column. Formoterol and the internal standard were separated from the other plasma components using a mobile phase A consisting of 0.1% formic acid in 10 mM ammonium formate and a mobile phase B consisting of a 95:5 acetonitrile:mobile Phase A. The retention time for both formoterol and the internal standard were approximately 2.5 minutes. The range of the standard curve using a 500 μL sample of human plasma was 1.45 pmol/L to 727 pmol/L. At the LLOQ (1.45 pmol/L) for the formoterol assay, the between-day mean (standard deviation) was 1.44 (0.135) pmol/L, mean % bias was −1.23%, and the mean % coefficient of variation was 9.37%.\n Safety assessments Clinical laboratory tests, vital signs, electrocardiography, and physical examinations were assessed at screening and clinical laboratory tests and vital signs at prespecified times during the study. Subjects were continually monitored for possible occurrence of AEs. At study conclusion (day 6 of period 3), vital signs, clinical laboratory tests, and physical examinations were repeated.\nClinical laboratory tests, vital signs, electrocardiography, and physical examinations were assessed at screening and clinical laboratory tests and vital signs at prespecified times during the study. Subjects were continually monitored for possible occurrence of AEs. At study conclusion (day 6 of period 3), vital signs, clinical laboratory tests, and physical examinations were repeated.\n Statistical analysis Summary statistics including means and coefficients of variation were provided for MF concentration data at each time point and the derived pharmacokinetic parameters. The primary objective was to compare the MF AUC0–12 hr and Cmax values for the MF/F MDI combination with those for MF DPI monotherapy (ie, treatment A versus treatment C, respectively). The AUC0–12 hr and Cmax values were log-transformed and analyzed using an appropriate analysis of variance (ANOVA) model for a three-period crossover design extracting sources of variation due to treatment, patient, sequence, and period. 
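A minimal sketch of the analysis just described, assuming a fixed-effects ANOVA on log-transformed AUC0–12 hr fitted with Python's statsmodels (an illustrative stand-in for the validated software actually used), is shown below. The data frame, treatment sequences, and AUC values are hypothetical; the point is only how the log-scale treatment contrast is back-transformed into a GMR with a 90% CI, and how the residual variance yields an intrapatient CV.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format steady-state data: one AUC0-12 per patient per period.
# Treatments: A = MF/F MDI, B = MF/F MDI + spacer, C = MF DPI (as defined above).
rng = np.random.default_rng(0)
sequences = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
rows = []
for pid in range(1, 13):                                   # 12 patients across 3 sequences
    for period, trt in enumerate(sequences[pid % 3], start=1):
        rows.append({"patient": pid, "period": period, "treatment": trt,
                     "auc": rng.lognormal(mean=3.0, sigma=0.4)})
pk = pd.DataFrame(rows)
pk["log_auc"] = np.log(pk["auc"])

# Fixed-effects ANOVA on the log scale; with patient fitted as a fixed effect the
# sequence term is absorbed by the patient term, so it is not listed separately here.
fit = smf.ols("log_auc ~ C(treatment) + C(period) + C(patient)", data=pk).fit()

def gmr_and_ci(fit, coef_name, alpha=0.10):
    """Back-transform the log-scale coefficient for 'other treatment vs A' into the
    A/other geometric mean ratio with a (1 - alpha) confidence interval."""
    est = fit.params[coef_name]
    lo, hi = fit.conf_int(alpha=alpha).loc[coef_name]
    # coefficient = log(other) - log(A); negate before exponentiating to get A/other
    return np.exp(-est), np.exp(-hi), np.exp(-lo)

print("GMR A/C (MDI vs DPI), 90% CI:", gmr_and_ci(fit, "C(treatment)[T.C]"))
print("GMR A/B (MDI vs MDI+spacer), 90% CI:", gmr_and_ci(fit, "C(treatment)[T.B]"))

# Intrapatient (within-patient) CV recovered from the residual variance on the log scale
print("intrapatient CV:", np.sqrt(np.exp(fit.mse_resid) - 1.0))
```

In the protocol's ANOVA, sequence was listed as a separate source of variation and tested against the between-patient stratum; the simplification above is only for illustration of the GMR and CI arithmetic.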
As a secondary objective, the MF AUC0–12 hr and Cmax values for the MF/F MDI combination administered without and with a spacer were compared (ie, treatment A versus treatment B). The geometric means of treatment A to treatment C or treatment A to treatment B were expressed as a ratio (GMR), and the corresponding 90% CIs were computed. In addition, the log-transformed AUC0–12 hr and Cmax values for formoterol were analyzed similarly for treatments A and B, and comparisons between treatment A and treatment B were performed using a ratio and the corresponding 90% CI.\nAssuming an intrapatient variability of 28% (based on the variability observed in a similarly designed crossover study in healthy adults), this study with a targeted sample size of 12 patients should have been able to detect a 30% difference in MF pharmacokinetics between treatments B or C versus A with 80% power and a 90% CI.\nNo inferential analysis of safety data was planned. The number of patients reporting any AEs, the occurrence of specific AEs, and discontinuation due to AEs were tabulated.\nSummary statistics including means and coefficients of variation were provided for MF concentration data at each time point and the derived pharmacokinetic parameters. The primary objective was to compare the MF AUC0–12 hr and Cmax values for the MF/F MDI combination with those for MF DPI monotherapy (ie, treatment A versus treatment C, respectively). The AUC0–12 hr and Cmax values were log-transformed and analyzed using an appropriate analysis of variance (ANOVA) model for a three-period crossover design extracting sources of variation due to treatment, patient, sequence, and period. As a secondary objective, the MF AUC0–12 hr and Cmax values for the MF/F MDI combination administered without and with a spacer were compared (ie, treatment A versus treatment B). The geometric means of treatment A to treatment C or treatment A to treatment B were expressed as a ratio (GMR), and the corresponding 90% CIs were computed. In addition, the log-transformed AUC0–12 hr and Cmax values for formoterol were analyzed similarly for treatments A and B, and comparisons between treatment A and treatment B were performed using a ratio and the corresponding 90% CI.\nAssuming an intrapatient variability of 28% (based on the variability observed in a similarly designed crossover study in healthy adults), this study with a targeted sample size of 12 patients should have been able to detect a 30% difference in MF pharmacokinetics between treatments B or C versus A with 80% power and a 90% CI.\nNo inferential analysis of safety data was planned. The number of patients reporting any AEs, the occurrence of specific AEs, and discontinuation due to AEs were tabulated.", "This study enrolled men and women between the ages of 40 and 75 years with the following inclusion criteria: moderate to severe COPD (as defined by post-bronchodilator forced expiratory volume in 1 second [FEV1] ≥30% and <80%) within 1 week prior to the baseline visit (day 1); current smoker or ex-smoker with at least 10 pack-years of smoking history; and receiving only albuterol/salbutamol for relief of symptoms for at least 2 weeks prior to randomization. 
Subjects were excluded from participation in this study based on the following criteria: increase in absolute volume of FEV1 of ≥400 mL within 30 minutes after administration of four inhalations of albuterol/salbutamol (total dose of 360 to 400 μg), or nebulized 2.5 mg albuterol/salbutamol; inability to use the MF/F MDI device or the MF DPI device; female patients who were pregnant, intended to become pregnant (within 3 months of ending the study), or were breastfeeding; history of any infectious disease within 4 weeks prior to drug administration; or tested positive for hepatitis B surface antigen, hepatitis C antibodies, or human immunodeficiency virus.", "This was a randomized, open-label, multiple-dose, three-period, three-treatment crossover study conducted at a single study center. Subjects were screened within 21 days prior to dosing.\nAll patients were trained in the use of the devices and proper inhalation techniques using placebo, MDI, and DPI. If necessary, patients could be retrained in the proper use of these inhaler devices prior to the start of each period. Subjects were instructed by the investigator regarding when and how to take the daily treatment.\nSubjects were admitted to the study center on day 1 to confirm continued eligibility and for baseline assessments. The investigator or designee reviewed the inclusion/exclusion criteria and recorded adverse events (AEs) and medications taken within the previous 14 days. A repeat drug and pregnancy screen, laboratory safety tests (hematology, blood chemistry, urinalysis, and electrocardiography), and vital signs also were performed on day 1. On day 1 of the first treatment period, after a 10-hour overnight fast, each patient was randomized to a crossover treatment sequence according to a computer-generated randomization schedule provided by the sponsor, and then received the first dose. The following three treatments were self-administered by patients: treatment A, MF 400 μg/F 10 μg twice a day (BID) via MDI oral inhalation (two puffs × 200 μg/5 μg MF/F per burst combination product); treatment B, MF 400 μg/F 10 μg BID via MDI oral inhalation and in conjunction with a spacer device (two puffs × 200 μg/5 μg MF/F per burst combination product); and treatment C, MF 400 μg BID via DPI oral inhalation (two puffs × 200 μg MF per oral inhalation from the MF DPI). Subjects self-administered the treatments under observation of the site staff for 4 days every 12 hours between approximately 8 am and 9 am and again between approximately 8 pm and 9 pm, and a single morning dose on day 5. After taking their treatment, subjects were instructed to rinse their mouth with water and then spit it out (not swallow it). Based on a previous pharmacokinetic study where the effective half-life (based on accumulation) of MF after administration from an MDI was approximately 25 hours, and since dosing for five half-lives is typically required to attain steady state conditions, a dosing period of 5 days was chosen for this study. Subjects were confined to the study center for the duration of treatment in each period of the crossover. After a 1-week washout between dosing, patients returned to start confinement for the next treatment period.\nMF/F MDIs and placebo MDI devices (for the training of the inhalation technique) were manufactured by 3M Health Care Ltd (Loughborough, Leicestershire, UK, for Schering-Plough Corp [now Merck Sharp & Dohme Corp], Kenilworth, NJ, USA). 
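As an aside on the dosing duration chosen above: the 5-day regimen rests on the rule of thumb that about five half-lives of dosing bring a drug with linear kinetics to roughly 97% of steady state. A quick back-of-the-envelope check, assuming simple first-order accumulation and the approximately 25-hour effective half-life cited for MF, is sketched below; it is illustrative only and not a model fitted to study data.

```python
t_half = 25.0            # cited effective half-life of MF from an MDI, in hours
t_last_dose = 4 * 24     # the day-5 morning dose falls about 96 hours after the first dose

# Fraction of steady state reached under first-order accumulation: 1 - 2^(-t / t_half)
frac_day5 = 1 - 2 ** (-t_last_dose / t_half)
frac_five_half_lives = 1 - 2 ** (-5)
print(f"By the day-5 morning dose (~{t_last_dose} h): ~{100 * frac_day5:.0f}% of steady state")
print(f"After a full five half-lives ({5 * t_half:.0f} h): ~{100 * frac_five_half_lives:.0f}%")
```

Under these assumptions the day-5 profiles are sampled at roughly 93% of steady state, close to the conventional five-half-life (about 97%) benchmark.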
The spacer device (AeroChamber Plus® Valved Holding Chamber; Monaghan Medical Corp, Plattsburgh, NY, USA) and MF DPI were obtained commercially by the site. Placebo DPI to match the MF DPI was manufactured and supplied by Schering-Plough Corp (now Merck Sharp & Dohme Corp).\nThe study population was identified from the surrounding urban and suburban communities of Madisonville, KY, USA. Subjects were recruited from a pool of volunteers obtained from the database of Commonwealth Biomedical Research, LLC and by word-of-mouth advertisement. The protocol and informed consent were approved by Independent Investigational Review Board, Plantation, FL, USA, as required by the US Code of Federal Regulations and the internal Standard Operating Procedures of the sponsor. The study was conducted in accordance with good clinical practice and was approved by the appropriate institutional review boards and regulatory agencies. Written consent was obtained from all patients prior to the conduct of any study related procedures.", "Pharmacokinetic parameters were calculated via noncompartmental analysis. Day 5 plasma concentrations and actual sampling times were used to calculate the following pharmacokinetic parameters of MF and formoterol: area under the plasma concentration–time curve from 0 to 12 hours (AUC0–12 hr), the maximum plasma concentration (Cmax), trough plasma concentration (Ctrough), and time to maximum plasma concentration (Tmax). Pharmacokinetic parameters were calculated using WinNonlin® software (v 5.0.1; Pharsight Corporation, Mountain View, CA, USA). AUC0–12 hr was calculated using the linear trapezoidal method for ascending concentrations and log trapezoidal method for descending concentrations. Values for Cmax, Ctrough, and Tmax were obtained by visual inspection of the blood concentration data.\nFor the determination of MF plasma concentration, whole blood was collected in K3-ethylenediaminetetraacetic acid-containing tubes at predose (0 hours) and at 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 1 mL sample aliquot was fortified with 50 μL of internal standard (mometasone furoate-d3). Analytes were isolated through liquid–liquid extraction with 5.0 mL of 15:85 ethyl acetate/hexane, v/v. The extracts were further purified by solid phase extraction with Bond Elut® LRC NH2 cartridges (Agilent Technologies, Strathaven, Scotland). Analytes were eluted with 6.0 mL of 65:35 ethyl acetate/hexane, v/v. The solvent was evaporated under a nitrogen stream at approximately 50°C, and the remaining residue was reconstituted with 125 μL of methanol and 75 μL of 20 μM sodium acetate. The final extract was analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with an electrospray ionization source (AB SCIEX, Framingham, MA, USA) and having a lower limit of quantitation (LLOQ) of 0.250 pg/mL. The LC system employed a Luna (Phenomenex, Torrance, CA, USA) C18 3 × 150 mm (3 μm particle size) column and gradient elution. The retention time for both MF and the internal standard was approximately 14 minutes. The range of the standard curve using a 1.00 mL sample of human plasma was 0.250 to 25.0 pg/mL. 
At the LLOQ (0.250 pg/mL) for the MF assay, the between-day mean (standard deviation) was 0.253 (0.023) pg/mL, mean % bias was 1.10%, and the mean coefficient of variation was 8.95%.\nFor the determination of the plasma concentration of formoterol, whole blood was collected in tubes containing lithium heparin with eserine (physostigmine) hemisulfate as a preservative at predose (0 hours) and at 0.167 (10 minutes), 0.25, 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 500 μL plasma sample aliquot was fortified with 20 μL of internal standard (formoterol-d6) and 200 μL of 2% ammonium hydroxide. Extraction solvent was added, and the tubes were vortexed and centrifuged. The aqueous layer was frozen and the organic layer was decanted to a clean tube containing keeper solution. The organic solution was evaporated and the remaining residue was reconstituted with 200 μL of reconstitution solution. A 40-μL volume of the final extract was injected and analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with a turbo ion spray source (AB SCIEX) and having an LLOQ of 1.45 pmol/L. The LC system employed a 10 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 loading column (Thermo Fisher Scientific, Waltham, MA, USA) and a 50 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 analytical column. Formoterol and the internal standard were separated from the other plasma components using a mobile phase A consisting of 0.1% formic acid in 10 mM ammonium formate and a mobile phase B consisting of a 95:5 acetonitrile:mobile Phase A. The retention time for both formoterol and the internal standard were approximately 2.5 minutes. The range of the standard curve using a 500 μL sample of human plasma was 1.45 pmol/L to 727 pmol/L. At the LLOQ (1.45 pmol/L) for the formoterol assay, the between-day mean (standard deviation) was 1.44 (0.135) pmol/L, mean % bias was −1.23%, and the mean % coefficient of variation was 9.37%.", "Clinical laboratory tests, vital signs, electrocardiography, and physical examinations were assessed at screening and clinical laboratory tests and vital signs at prespecified times during the study. Subjects were continually monitored for possible occurrence of AEs. At study conclusion (day 6 of period 3), vital signs, clinical laboratory tests, and physical examinations were repeated.", "Summary statistics including means and coefficients of variation were provided for MF concentration data at each time point and the derived pharmacokinetic parameters. The primary objective was to compare the MF AUC0–12 hr and Cmax values for the MF/F MDI combination with those for MF DPI monotherapy (ie, treatment A versus treatment C, respectively). The AUC0–12 hr and Cmax values were log-transformed and analyzed using an appropriate analysis of variance (ANOVA) model for a three-period crossover design extracting sources of variation due to treatment, patient, sequence, and period. As a secondary objective, the MF AUC0–12 hr and Cmax values for the MF/F MDI combination administered without and with a spacer were compared (ie, treatment A versus treatment B). The geometric means of treatment A to treatment C or treatment A to treatment B were expressed as a ratio (GMR), and the corresponding 90% CIs were computed. 
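The sample-size rationale given in this statistical analysis (12 patients, 28% intrapatient variability, 80% power to detect a 30% difference at the 90% CI level) can be checked approximately with a standard log-scale crossover power calculation. The sketch below uses a normal approximation in Python/SciPy and is not the protocol's calculation; in particular, how the "30% difference" is oriented (a GMR of 0.70 versus 1.30) and the use of normal rather than t quantiles are assumptions.

```python
import numpy as np
from scipy import stats

def crossover_power(n, cv_intra, gmr, alpha=0.10):
    """Approximate power to detect a true geometric mean ratio 'gmr' in a within-patient
    comparison analysed on the log scale with a two-sided alpha-level test
    (equivalent to asking that the (1 - alpha) CI exclude a ratio of 1)."""
    sigma_w = np.sqrt(np.log(1.0 + cv_intra ** 2))   # log-scale intrapatient SD
    se = np.sqrt(2.0 * sigma_w ** 2 / n)             # SE of the estimated log-ratio
    z_crit = stats.norm.ppf(1.0 - alpha / 2.0)
    return float(stats.norm.cdf(abs(np.log(gmr)) / se - z_crit))

# Stated assumptions: 12 patients, 28% intrapatient CV, a "30% difference" in MF exposure
for gmr in (0.70, 1.30):
    print(f"true GMR {gmr}: approximate power {crossover_power(12, 0.28, gmr):.2f}")
```

Read as a GMR of 1.30 this approximation gives roughly 76% power, and read as 0.70 roughly 94%, bracketing the 80% figure quoted in the sample-size statement.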
In addition, the log-transformed AUC0–12 hr and Cmax values for formoterol were analyzed similarly for treatments A and B, and comparisons between treatment A and treatment B were performed using a ratio and the corresponding 90% CI.\nAssuming an intrapatient variability of 28% (based on the variability observed in a similarly designed crossover study in healthy adults), this study with a targeted sample size of 12 patients should have been able to detect a 30% difference in MF pharmacokinetics between treatments B or C versus A with 80% power and a 90% CI.\nNo inferential analysis of safety data was planned. The number of patients reporting any AEs, the occurrence of specific AEs, and discontinuation due to AEs were tabulated.", " Demographic and baseline characteristics A total of 14 patients (five men and nine women) aged 45 to 72 years (mean, 62.7 years) were treated. Of these, 13 (93%) were white and one (7%) was black/African-American. Subjects had a mean (range) 50 (20–100) pack-year smoking history and a mean (range) predicted post-bronchodilator FEV1 of 58% (42%–73%). All 14 patients completed the study.\nA total of 14 patients (five men and nine women) aged 45 to 72 years (mean, 62.7 years) were treated. Of these, 13 (93%) were white and one (7%) was black/African-American. Subjects had a mean (range) 50 (20–100) pack-year smoking history and a mean (range) predicted post-bronchodilator FEV1 of 58% (42%–73%). All 14 patients completed the study.\n Pharmacokinetic results Mean plasma concentration-time profiles showed prolonged absorption of MF following administration of MF by MDI (Figure 1). Median MF Tmax values were 3.00, 2.00, and 1.00 hours for treatments A, B, and C, respectively (Table 1). Median Tmax values of formoterol were 1.02 and 0.52 hours for Treatments A and B, respectively (Table 1).\nFor the comparison of the MF exposure following inhalation of the DPI and MDI (primary objective), the ANOVA model included all treatments. Intrapatient variabilities for MF AUC0–12 hr and Cmax of 44% and 47%, respectively, were obtained from the model (Table 2). Despite this large observed variability, MF Cmax values were significantly different between treatments, with the mean Cmax for the MDI being 44% lower than that for DPI (Table 2). Mean MF AUC0–12 hr following inhalation by MDI alone was 23% lower than the mean value following administration of MF by DPI. The large CI reflects the small sample size and the larger than expected intrapatient variability (expected variability was approximately 28%; observed was 44%).\nA secondary objective of the study was to compare MF and formoterol exposure following inhalation using the MDI with and without a spacer. Following inhalation by MDI with the spacer, MF exposures based on AUC0–12 hr were lower than for MDI alone (Table 2). In the initial analysis with all treatments (Table 2), the ratio estimates for MF AUC0–12 hr and Cmax included 100%. However, because the larger intrapatient variability was related to the DPI treatment group, a reanalysis without treatment C showed that the AUC0–12 hr and Cmax values were 28% and 18% lower, respectively, for MDI with a spacer compared with MDI alone (Table 3). Intrapatient variability for MF AUC0–12 hr and Cmax values in the original three-treatment ANOVA ranged from 44% and 47%, respectively, to 23% and 24%, when comparing only the MDI data.\nMean plots of formoterol plasma concentrations showed rapid absorption of F (Figure 2). 
For formoterol, AUC0–12 hr and Cmax values were 38% and 20% lower, respectively, for MDI with the spacer compared to MDI alone (Table 3).\nMean plasma concentration-time profiles showed prolonged absorption of MF following administration of MF by MDI (Figure 1). Median MF Tmax values were 3.00, 2.00, and 1.00 hours for treatments A, B, and C, respectively (Table 1). Median Tmax values of formoterol were 1.02 and 0.52 hours for Treatments A and B, respectively (Table 1).\nFor the comparison of the MF exposure following inhalation of the DPI and MDI (primary objective), the ANOVA model included all treatments. Intrapatient variabilities for MF AUC0–12 hr and Cmax of 44% and 47%, respectively, were obtained from the model (Table 2). Despite this large observed variability, MF Cmax values were significantly different between treatments, with the mean Cmax for the MDI being 44% lower than that for DPI (Table 2). Mean MF AUC0–12 hr following inhalation by MDI alone was 23% lower than the mean value following administration of MF by DPI. The large CI reflects the small sample size and the larger than expected intrapatient variability (expected variability was approximately 28%; observed was 44%).\nA secondary objective of the study was to compare MF and formoterol exposure following inhalation using the MDI with and without a spacer. Following inhalation by MDI with the spacer, MF exposures based on AUC0–12 hr were lower than for MDI alone (Table 2). In the initial analysis with all treatments (Table 2), the ratio estimates for MF AUC0–12 hr and Cmax included 100%. However, because the larger intrapatient variability was related to the DPI treatment group, a reanalysis without treatment C showed that the AUC0–12 hr and Cmax values were 28% and 18% lower, respectively, for MDI with a spacer compared with MDI alone (Table 3). Intrapatient variability for MF AUC0–12 hr and Cmax values in the original three-treatment ANOVA ranged from 44% and 47%, respectively, to 23% and 24%, when comparing only the MDI data.\nMean plots of formoterol plasma concentrations showed rapid absorption of F (Figure 2). For formoterol, AUC0–12 hr and Cmax values were 38% and 20% lower, respectively, for MDI with the spacer compared to MDI alone (Table 3).\n Safety A total of ten (71.4%) patients reported at least one AE during the study: seven (50%) during treatment A (MF 400 μg + F 10 μg via MDI), three (21.4%) during treatment B (MF 400 μg + F 10 μg via MDI with spacer), and four (28.6%) during treatment C (MF 400 μg via DPI). The most common AEs were headache and dyspepsia, occurring in eight (57.1%) and two (14.3%) patients, respectively. All reported AEs were either mild or moderate in severity. No death or serious AEs occurred during the study.\nThere were no clinically significant changes in blood chemistry or hematologic parameters, vital signs, or electrocardiography in any of the treatment groups.\nA total of ten (71.4%) patients reported at least one AE during the study: seven (50%) during treatment A (MF 400 μg + F 10 μg via MDI), three (21.4%) during treatment B (MF 400 μg + F 10 μg via MDI with spacer), and four (28.6%) during treatment C (MF 400 μg via DPI). The most common AEs were headache and dyspepsia, occurring in eight (57.1%) and two (14.3%) patients, respectively. All reported AEs were either mild or moderate in severity. 
Safety:
A total of ten (71.4%) patients reported at least one AE during the study: seven (50%) during treatment A (MF 400 μg + F 10 μg via MDI), three (21.4%) during treatment B (MF 400 μg + F 10 μg via MDI with spacer), and four (28.6%) during treatment C (MF 400 μg via DPI). The most common AEs were headache and dyspepsia, occurring in eight (57.1%) and two (14.3%) patients, respectively. All reported AEs were either mild or moderate in severity. No deaths or serious AEs occurred during the study. There were no clinically significant changes in blood chemistry or hematologic parameters, vital signs, or electrocardiography in any of the treatment groups.

Discussion:
Rapid and sustained relief from bronchoconstriction and improvement in lung function are critical for the long-term management of patients with persistent symptoms and exacerbation of COPD.
Coadministration of MF and F has been shown to produce additive effects in rapidly improving symptoms and lung function and reducing the frequency of exacerbation of asthma and COPD.9,10,17–19 The decreases in lung function seen in patients with COPD may affect systemic exposure to drugs, such as MF/F MDI, which are administered via inhalation, and thereby may alter the pharmacokinetics of MF and formoterol. This has previously been described for the ICS fluticasone propionate in patients with COPD compared to healthy controls.12,13 Therefore, the current study was conducted in patients with moderate-to-severe COPD to assess the pharmacokinetics of MF and formoterol in the intended target population. The rationale for including the MF DPI comparison arm in this study was to assess the relative systemic exposure of MF as administered by the MF/F MDI to the MF DPI, for which there is extensive clinical use and safety experience.16 A previous pharmacokinetic study in healthy subjects showed lower systemic exposure to MF after steady state dosing from the MDI compared to the DPI device (data on file, Merck Sharp & Dohme Corp, 2006). Similar differences in systemic exposure between MDI and DPI devices have been reported for other ICSs. For example, for SYMBICORT® (AstraZeneca, London, UK), an approved fixed-dose combination product containing budesonide and formoterol, the systemic exposure of budesonide was approximately 30% lower in both pediatric and adult patients with asthma after administration from an MDI device compared to the same dose delivered from a DPI device.14,15

The current study demonstrated that mean systemic exposures to MF were 23% lower following administration by MF/F MDI compared to MF DPI (primary objective). Additionally, mean systemic exposures of MF and formoterol were 28% lower and 38% lower, respectively, following administration by MF/F MDI in conjunction with a spacer (AeroChamber Plus® Valved Holding Chamber) compared to MF/F MDI without a spacer (secondary objective). The high intrapatient variability in MF exposures observed in this study may in part be attributed to differences in lung function over time (eg, reduced lung inflammation over time with observed dosing) and/or day-to-day variations in patient inhalation technique. In order to control for variations in inhalation technique, patients were extensively trained on the proper use of inhalation devices at baseline and, if necessary, prior to the start of each treatment period. However, even after allowing for differences in the inhalation techniques between the MDI and DPI for the observed intrapatient variability, MF and formoterol exposures following MDI treatment were still lower than those following DPI administration. These observed differences were not due to differences between the two formulations/devices in the oral deposition and subsequent gastrointestinal absorption of MF, since patients were instructed to rinse their mouths with water and spit it out after treatment administration. Furthermore, after oral administration as a solution, MF has been shown to have very low systemic bioavailability due to extensive first-pass metabolism (unpublished data). The magnitude of the observed differences in systemic exposure between the MDI and DPI is probably due to formulation differences, which may result in differences in regional lung deposition and clearance from the lungs.
The high intrapatient variability was related to the inclusion of data from the MF DPI treatment group (ie, treatment C) in the ANOVA model. A reanalysis of the results excluding treatment C showed lower overall mean exposures when a spacer was used with the MF/F MDI compared to when the MF MDI was used alone.

The present results are in agreement with previous studies showing that spacer devices reduce the systemic absorption of ICS in healthy volunteers.20,21 In those studies, a major factor that contributed to the lower dosage delivery using a spacer was the static charge of the spacer, which attracted medication particles. The authors also reported that multiple actuations and delayed inhalation of the drug after actuation may also cause reductions in dose delivery through a spacer. Application of antistatic material or washing the spacer was useful to reduce the static effect.21 It should be noted, however, that the demonstrated differences in systemic exposure seen in this study in the presence/absence of a spacer device were observed in COPD patients who were trained for good, reproducible inhalation techniques with an MDI device and therefore would be unlikely to benefit from the use of a spacer in clinical practice. Spacer devices are indicated for patients with poor coordination and poor inhalation technique in order to improve drug delivery to the lungs. In clinical practice, pharmacotherapy with an inhalation product, such as MF/F MDI, is individualized, and each patient is titrated to a desired therapeutic response. Therefore, considering that patients who require a spacer device will be dosed with, and if necessary, titrated with a spacer, the use of a spacer device is not expected to have an efficacy implication in the target population.

In this study, MF/F MDI was shown to have a similar AE profile to MF DPI. All reported AEs were either mild or moderate in intensity and no serious AE was reported in this study. These safety findings are in agreement with those of previous studies conducted in asthma patients, which also reported that treatment with MF/F was generally well tolerated.22,23 Nevertheless, the current short-term, multiple-dose pharmacokinetic study does not address the long-term safety and tolerability profile of chronic MF/F MDI therapy in patients with COPD. Two recently published articles demonstrated that treatment with MF/F MDI was well tolerated in patients with moderate-to-severe COPD over 52 weeks.9,10

Considering that this study demonstrated a lower systemic exposure to the MDI product compared with the DPI in patients with COPD, there may be some concern regarding the appropriate interchangeability of the DPI monotherapy for the MDI combination therapy with regard to the comparability of the delivery of the MF dose in each device. While a comparison of the systemic exposure of inhaled drugs is an acceptable way to evaluate the relative risk of ICS with regard to their systemic safety, there is considerable debate as to whether similar systemic exposure reflects comparable localized drug concentrations in the lung. Therefore, the clinical development program for the MF/F fixed-dose combination product has focused on demonstrating the clinical efficacy and safety of the MF/F device compared with the MF MDI device and placebo rather than the DPI reference products.9,10

In this study, the observed mean difference in systemic availability between MDI and DPI formulations was 23%.
However, the difference between formulations in actual lung deposition may be smaller than that noted in systemic exposure and may be due, at least in part, to differences in lung retention. This hypothesis is supported by the apparently longer MF effective half-life (25 hours) after administration from the MDI versus the DPI (effective half-life 13 hours; unpublished data). In addition, the majority of dose-response studies conducted with inhaled corticosteroids have failed to demonstrate clinically meaningful differences even between doubling of doses,24,25 let alone a 25% difference. This point is illustrated by the formoterol/budesonide FDC (SYMBICORT®), where the same delivered doses are approved for both the MDI (aerosol inhaler) and DPI (Turbuhaler®; AstraZeneca) formulations despite the ~30% lower systemic budesonide exposure from the MDI.14,15 Consistent with the relatively flat dose-response relationship of ICS, the current European Medicines Agency therapeutic equivalence guideline for orally inhaled products (CPMP/EWP/4151/00) has expanded equivalence acceptance criteria margins of 0.67 and 1.5,26 which assume a mean difference between treatments of up to 1.5-fold. In view of the aforementioned considerations, the 23% lower systemic exposure to MF from the MDI relative to the DPI formulation is not considered clinically important.

Conclusion:
This multiple-dose pharmacokinetic study demonstrated that systemic exposure to MF was lower following administration by MF/F MDI compared to MF DPI in patients with moderate-to-severe COPD. Additionally, systemic exposures of MF and formoterol were lower following administration by MF/F MDI with an AeroChamber Plus® Valved Holding Chamber spacer compared to MF/F MDI without this spacer. The magnitude of the differences in systemic exposure to MF seen with MF/F MDI versus MF DPI, as well as MF/F MDI administered with a spacer versus MF/F MDI without a spacer, was not clinically relevant. There also was no clinically relevant difference in systemic exposure to formoterol seen with MF/F MDI administered in the presence and absence of a spacer. Finally, MF/F delivered twice daily by MDI was generally well tolerated among patients with moderate-to-severe COPD in this short-term study.
[ "mometasone furoate", "chronic obstructive pulmonary disease", "pharmacokinetics", "systemic exposure", "metered-dose inhaler", "dry-powder inhaler" ]
Introduction:
Current treatment guidelines for the long-term management of chronic obstructive pulmonary disease (COPD) and asthma recommend, for certain degrees of severity, combination therapy with an inhaled corticosteroid (ICS) and a long-acting β2-agonist.1–5 Clinically, coadministration of an ICS and long-acting β2-agonist has been shown to have additive effects for improving symptoms and lung function and reducing the frequency of disease exacerbations.1–5

Mometasone furoate (MF) is a potent ICS with relatively low potential to cause significant systemic side effects typically associated with oral corticosteroids, such as hypothalamic-pituitary-adrenal axis suppression.6–8 MF has been shown to produce clinical benefit for treating asthma and COPD by reducing symptoms and exacerbations, and improving lung function, with no significant safety risks.6–10 Formoterol fumarate (F) is a potent, selective, long-acting β2-agonist that exerts a preferential effect on β2-adrenergic receptors of bronchial smooth muscle.11 Bronchodilator activity observed in patients with asthma after F inhalation is characterized by a rapid onset (within 3 minutes of inhalation) and long duration (at least 12 hours) of action.11 F is approved for maintenance treatment in patients with asthma and COPD.

Merck Sharp & Dohme Corp (Whitehouse Station, NJ, USA) and Novartis (East Hanover, NJ, USA) have jointly developed a fixed-dose combination product combining MF and F in a metered-dose inhaler device (MF/F MDI; marketed as DULERA® in the United States and ZENHALE® in Canada and elsewhere; Merck & Co. Inc., Whitehouse Station, NJ, USA) for the treatment of asthma. MF/F MDI is also in late-stage clinical development for the treatment of patients with COPD. In addition to producing additive beneficial effects on symptoms and lung function, an MF/F combination product is expected to be more convenient for patients with asthma or COPD.

Due to the effects of decreases in lung function on exposure to inhaled products in patients with COPD, the systemic exposure and pharmacokinetics of MF and formoterol in these patients were expected to differ from healthy volunteers, as has been reported for other ICSs.12,13 A previous pharmacokinetic study in healthy subjects showed lower mean systemic exposure of MF after steady-state dosing from the MDI compared to the dry-powder inhaler (DPI) ASMANEX® TWISTHALER® ([MF] Merck Sharp & Dohme Corp): the mean area under the curve (AUC) was 25% lower (geometric mean ratio [GMR]: 75%; 90% confidence interval [CI]: 61%–91%) and the mean maximum concentration (Cmax) was 39% lower (GMR: 61%; 90% CI: 49%–75%) (data on file, Merck Sharp & Dohme Corp, 2010). Similar differences in systemic drug exposure with MDIs and DPIs have been reported for other ICSs.14,15

As part of the clinical development program for MF/F MDI for the treatment of patients with moderate to severe COPD, the present study was conducted primarily to define the systemic exposure of MF when administered using a new combination product containing MF and F in an MDI device (MF/F MDI) versus the approved DPI monotherapy product (MF DPI) for which there is extensive clinical use and safety experience.16 As a secondary objective, this study examined the potential effect of using a spacer device in conjunction with the MF/F MDI on MF and formoterol exposure (versus use of the MDI device without a spacer). In addition, this study provided descriptive multiple-dose pharmacokinetic data for MF and formoterol in patients with COPD.
Methods:

Patient selection:
This study enrolled men and women between the ages of 40 and 75 years with the following inclusion criteria: moderate to severe COPD (as defined by post-bronchodilator forced expiratory volume in 1 second [FEV1] ≥30% and <80%) within 1 week prior to the baseline visit (day 1); current smoker or ex-smoker with at least 10 pack-years of smoking history; and receiving only albuterol/salbutamol for relief of symptoms for at least 2 weeks prior to randomization. Subjects were excluded from participation in this study based on the following criteria: increase in absolute volume of FEV1 of ≥400 mL within 30 minutes after administration of four inhalations of albuterol/salbutamol (total dose of 360 to 400 μg), or nebulized 2.5 mg albuterol/salbutamol; inability to use the MF/F MDI device or the MF DPI device; female patients who were pregnant, intended to become pregnant (within 3 months of ending the study), or were breastfeeding; history of any infectious disease within 4 weeks prior to drug administration; or tested positive for hepatitis B surface antigen, hepatitis C antibodies, or human immunodeficiency virus.

Study design:
This was a randomized, open-label, multiple-dose, three-period, three-treatment crossover study conducted at a single study center. Subjects were screened within 21 days prior to dosing. All patients were trained in the use of the devices and proper inhalation techniques using placebo, MDI, and DPI. If necessary, patients could be retrained in the proper use of these inhaler devices prior to the start of each period. Subjects were instructed by the investigator regarding when and how to take the daily treatment. Subjects were admitted to the study center on day 1 to confirm continued eligibility and for baseline assessments. The investigator or designee reviewed the inclusion/exclusion criteria and recorded adverse events (AEs) and medications taken within the previous 14 days. A repeat drug and pregnancy screen, laboratory safety tests (hematology, blood chemistry, urinalysis, and electrocardiography), and vital signs also were performed on day 1. On day 1 of the first treatment period, after a 10-hour overnight fast, each patient was randomized to a crossover treatment sequence according to a computer-generated randomization schedule provided by the sponsor, and then received the first dose.
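The randomization schedule itself was computer-generated by the sponsor and its algorithm is not described here. Purely as an illustration of how patients can be assigned to a three-treatment, three-period crossover, the sketch below allocates patients to the six possible orderings of treatments A, B, and C in randomly permuted blocks (the seed and block scheme are hypothetical).

```python
# Illustrative only: the sponsor's actual randomization algorithm is not described
# in the paper. This sketch assigns patients to the six possible orderings of
# treatments A, B, and C in randomly permuted blocks of six sequences.
import itertools
import random

SEQUENCES = ["".join(p) for p in itertools.permutations("ABC")]  # ABC, ACB, ..., CBA

def assign_sequences(n_patients, seed=2012):
    rng = random.Random(seed)        # hypothetical seed, for reproducibility only
    schedule = []
    while len(schedule) < n_patients:
        block = SEQUENCES[:]
        rng.shuffle(block)           # one permuted block containing all six sequences
        schedule.extend(block)
    return schedule[:n_patients]

for patient, seq in enumerate(assign_sequences(14), start=1):
    print(f"patient {patient:02d}: periods 1-3 -> treatments {seq}")
```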
The following three treatments were self-administered by patients: treatment A, MF 400 μg/F 10 μg twice a day (BID) via MDI oral inhalation (two puffs × 200 μg/5 μg MF/F per burst combination product); treatment B, MF 400 μg/F 10 μg BID via MDI oral inhalation and in conjunction with a spacer device (two puffs × 200 μg/5 μg MF/F per burst combination product); and treatment C, MF 400 μg BID via DPI oral inhalation (two puffs × 200 μg MF per oral inhalation from the MF DPI). Subjects self-administered the treatments under observation of the site staff for 4 days every 12 hours between approximately 8 am and 9 am and again between approximately 8 pm and 9 pm, and a single morning dose on day 5. After taking their treatment, subjects were instructed to rinse their mouth with water and then spit it out (not swallow it). Based on a previous pharmacokinetic study where the effective half-life (based on accumulation) of MF after administration from an MDI was approximately 25 hours, and since dosing for five half-lives is typically required to attain steady state conditions, a dosing period of 5 days was chosen for this study. Subjects were confined to the study center for the duration of treatment in each period of the crossover. After a 1-week washout between dosing, patients returned to start confinement for the next treatment period. MF/F MDIs and placebo MDI devices (for the training of the inhalation technique) were manufactured by 3M Health Care Ltd (Loughborough, Leicestershire, UK, for Schering-Plough Corp [now Merck Sharp & Dohme Corp], Kenilworth, NJ, USA). The spacer device (AeroChamber Plus® Valved Holding Chamber; Monaghan Medical Corp, Plattsburgh, NY, USA) and MF DPI were obtained commercially by the site. Placebo DPI to match the MF DPI was manufactured and supplied by Schering-Plough Corp (now Merck Sharp & Dohme Corp). The study population was identified from the surrounding urban and suburban communities of Madisonville, KY, USA. Subjects were recruited from a pool of volunteers obtained from the database of Commonwealth Biomedical Research, LLC and by word-of-mouth advertisement. The protocol and informed consent were approved by Independent Investigational Review Board, Plantation, FL, USA, as required by the US Code of Federal Regulations and the internal Standard Operating Procedures of the sponsor. The study was conducted in accordance with good clinical practice and was approved by the appropriate institutional review boards and regulatory agencies. Written consent was obtained from all patients prior to the conduct of any study related procedures.
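The 5-day dosing duration follows from the ~25-hour effective half-life cited above (dosing for about five half-lives to approach steady state). As a quick illustrative check under the usual assumption of linear pharmacokinetics (not a calculation reported in the paper), the fraction of steady state attained after a given dosing duration can be estimated as follows.

```python
# Quick illustrative check of the steady-state rationale described above, assuming
# linear pharmacokinetics and an effective (accumulation) half-life of ~25 hours.
def fraction_of_steady_state(hours_dosed, effective_half_life_hr=25.0):
    return 1.0 - 2.0 ** (-hours_dosed / effective_half_life_hr)

for days in (3, 5, 7):
    frac = fraction_of_steady_state(days * 24)
    print(f"{days} days of dosing -> ~{frac:.0%} of steady state")
```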
Pharmacokinetic assessments:
Pharmacokinetic parameters were calculated via noncompartmental analysis. Day 5 plasma concentrations and actual sampling times were used to calculate the following pharmacokinetic parameters of MF and formoterol: area under the plasma concentration–time curve from 0 to 12 hours (AUC0–12 hr), the maximum plasma concentration (Cmax), trough plasma concentration (Ctrough), and time to maximum plasma concentration (Tmax).
Pharmacokinetic parameters were calculated using WinNonlin® software (v 5.0.1; Pharsight Corporation, Mountain View, CA, USA). AUC0–12 hr was calculated using the linear trapezoidal method for ascending concentrations and log trapezoidal method for descending concentrations. Values for Cmax, Ctrough, and Tmax were obtained by visual inspection of the blood concentration data. For the determination of MF plasma concentration, whole blood was collected in K3-ethylenediaminetetraacetic acid-containing tubes at predose (0 hours) and at 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 1 mL sample aliquot was fortified with 50 μL of internal standard (mometasone furoate-d3). Analytes were isolated through liquid–liquid extraction with 5.0 mL of 15:85 ethyl acetate/hexane, v/v. The extracts were further purified by solid phase extraction with Bond Elut® LRC NH2 cartridges (Agilent Technologies, Strathaven, Scotland). Analytes were eluted with 6.0 mL of 65:35 ethyl acetate/hexane, v/v. The solvent was evaporated under a nitrogen stream at approximately 50°C, and the remaining residue was reconstituted with 125 μL of methanol and 75 μL of 20 μM sodium acetate. The final extract was analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with an electrospray ionization source (AB SCIEX, Framingham, MA, USA) and having a lower limit of quantitation (LLOQ) of 0.250 pg/mL. The LC system employed a Luna (Phenomenex, Torrance, CA, USA) C18 3 × 150 mm (3 μm particle size) column and gradient elution. The retention time for both MF and the internal standard was approximately 14 minutes. The range of the standard curve using a 1.00 mL sample of human plasma was 0.250 to 25.0 pg/mL. At the LLOQ (0.250 pg/mL) for the MF assay, the between-day mean (standard deviation) was 0.253 (0.023) pg/mL, mean % bias was 1.10%, and the mean coefficient of variation was 8.95%. For the determination of the plasma concentration of formoterol, whole blood was collected in tubes containing lithium heparin with eserine (physostigmine) hemisulfate as a preservative at predose (0 hours) and at 0.167 (10 minutes), 0.25, 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 500 μL plasma sample aliquot was fortified with 20 μL of internal standard (formoterol-d6) and 200 μL of 2% ammonium hydroxide. Extraction solvent was added, and the tubes were vortexed and centrifuged. The aqueous layer was frozen and the organic layer was decanted to a clean tube containing keeper solution. The organic solution was evaporated and the remaining residue was reconstituted with 200 μL of reconstitution solution. A 40-μL volume of the final extract was injected and analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with a turbo ion spray source (AB SCIEX) and having an LLOQ of 1.45 pmol/L. The LC system employed a 10 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 loading column (Thermo Fisher Scientific, Waltham, MA, USA) and a 50 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 analytical column. 
Formoterol and the internal standard were separated from the other plasma components using a mobile phase A consisting of 0.1% formic acid in 10 mM ammonium formate and a mobile phase B consisting of 95:5 acetonitrile:mobile phase A. The retention time for both formoterol and the internal standard was approximately 2.5 minutes. The range of the standard curve using a 500 μL sample of human plasma was 1.45 pmol/L to 727 pmol/L. At the LLOQ (1.45 pmol/L) for the formoterol assay, the between-day mean (standard deviation) was 1.44 (0.135) pmol/L, mean % bias was −1.23%, and the mean % coefficient of variation was 9.37%.
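The noncompartmental AUC0–12 hr calculation described in this section (linear trapezoids for rising concentrations, log trapezoids for falling concentrations, with Cmax and Tmax read directly from the profile) can be sketched as follows. The concentration values are invented for illustration; the study derived the actual parameters from the measured day 5 profiles in WinNonlin.

```python
# Generic sketch of the noncompartmental calculation described above: AUC0-12hr by
# the linear-up/log-down trapezoidal rule, with Cmax and Tmax read from the profile.
# The concentrations below are invented; the study used WinNonlin on measured data.
import math

def auc_linear_up_log_down(times, concs):
    auc = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times, concs), zip(times[1:], concs[1:])):
        dt = t1 - t0
        if 0 < c1 < c0:
            auc += dt * (c0 - c1) / math.log(c0 / c1)  # log trapezoid (descending)
        else:
            auc += dt * (c0 + c1) / 2.0                # linear trapezoid (ascending/equal)
    return auc

times = [0, 0.5, 1, 2, 4, 8, 12]                # hours (MF sampling scheme)
concs = [0.8, 1.5, 2.4, 3.1, 2.6, 1.7, 1.1]     # pg/mL, hypothetical values

auc_0_12 = auc_linear_up_log_down(times, concs)
cmax = max(concs)
tmax = times[concs.index(cmax)]
print(f"AUC0-12hr = {auc_0_12:.1f} pg*hr/mL, Cmax = {cmax} pg/mL, Tmax = {tmax} hr")
```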
Safety assessments:
Clinical laboratory tests, vital signs, electrocardiography, and physical examinations were assessed at screening, and clinical laboratory tests and vital signs at prespecified times during the study. Subjects were continually monitored for possible occurrence of AEs. At study conclusion (day 6 of period 3), vital signs, clinical laboratory tests, and physical examinations were repeated.

Statistical analysis:
Summary statistics including means and coefficients of variation were provided for MF concentration data at each time point and the derived pharmacokinetic parameters. The primary objective was to compare the MF AUC0–12 hr and Cmax values for the MF/F MDI combination with those for MF DPI monotherapy (ie, treatment A versus treatment C, respectively). The AUC0–12 hr and Cmax values were log-transformed and analyzed using an appropriate analysis of variance (ANOVA) model for a three-period crossover design extracting sources of variation due to treatment, patient, sequence, and period. As a secondary objective, the MF AUC0–12 hr and Cmax values for the MF/F MDI combination administered without and with a spacer were compared (ie, treatment A versus treatment B). The geometric means of treatment A to treatment C or treatment A to treatment B were expressed as a ratio (GMR), and the corresponding 90% CIs were computed. In addition, the log-transformed AUC0–12 hr and Cmax values for formoterol were analyzed similarly for treatments A and B, and comparisons between treatment A and treatment B were performed using a ratio and the corresponding 90% CI.
Assuming an intrapatient variability of 28% (based on the variability observed in a similarly designed crossover study in healthy adults), this study with a targeted sample size of 12 patients should have been able to detect a 30% difference in MF pharmacokinetics between treatments B or C versus A with 80% power and a 90% CI.

No inferential analysis of safety data was planned. The number of patients reporting any AEs, the occurrence of specific AEs, and discontinuation due to AEs were tabulated.
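As an illustration of the ratio-and-CI computation described in this section, the sketch below derives a GMR and 90% CI from log-transformed AUC values using a simplified paired analysis of invented data. The study itself used the full three-period crossover ANOVA with terms for treatment, patient, sequence, and period, which this sketch does not reproduce.

```python
# Simplified illustration of the GMR and 90% CI computation described above, using
# invented data and a paired analysis of log-transformed AUC values. The study's
# actual model was a three-period crossover ANOVA (treatment, patient, sequence,
# period), which this sketch does not reproduce.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 14
log_auc_dpi = rng.normal(np.log(300.0), 0.45, n)                 # hypothetical treatment C
log_auc_mdi = log_auc_dpi + rng.normal(np.log(0.77), 0.30, n)    # hypothetical treatment A

diff = log_auc_mdi - log_auc_dpi                 # within-patient log-ratios (A versus C)
se = diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, n - 1)                # two-sided 90% CI

gmr = np.exp(diff.mean())
ci_low = np.exp(diff.mean() - t_crit * se)
ci_high = np.exp(diff.mean() + t_crit * se)
print(f"GMR (MDI/DPI) = {gmr:.2f}, 90% CI {ci_low:.2f} to {ci_high:.2f}")
```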
All patients were trained in the use of the devices and proper inhalation techniques using placebo, MDI, and DPI. If necessary, patients could be retrained in the proper use of these inhaler devices prior to the start of each period. Subjects were instructed by the investigator regarding when and how to take the daily treatment. Subjects were admitted to the study center on day 1 to confirm continued eligibility and for baseline assessments. The investigator or designee reviewed the inclusion/exclusion criteria and recorded adverse events (AEs) and medications taken within the previous 14 days. A repeat drug and pregnancy screen, laboratory safety tests (hematology, blood chemistry, urinalysis, and electrocardiography), and vital signs also were performed on day 1. On day 1 of the first treatment period, after a 10-hour overnight fast, each patient was randomized to a crossover treatment sequence according to a computer-generated randomization schedule provided by the sponsor, and then received the first dose. The following three treatments were self-administered by patients: treatment A, MF 400 μg/F 10 μg twice a day (BID) via MDI oral inhalation (two puffs × 200 μg/5 μg MF/F per burst combination product); treatment B, MF 400 μg/F 10 μg BID via MDI oral inhalation and in conjunction with a spacer device (two puffs × 200 μg/5 μg MF/F per burst combination product); and treatment C, MF 400 μg BID via DPI oral inhalation (two puffs × 200 μg MF per oral inhalation from the MF DPI). Subjects self-administered the treatments under observation of the site staff for 4 days every 12 hours between approximately 8 am and 9 am and again between approximately 8 pm and 9 pm, and a single morning dose on day 5. After taking their treatment, subjects were instructed to rinse their mouth with water and then spit it out (not swallow it). Based on a previous pharmacokinetic study where the effective half-life (based on accumulation) of MF after administration from an MDI was approximately 25 hours, and since dosing for five half-lives is typically required to attain steady state conditions, a dosing period of 5 days was chosen for this study. Subjects were confined to the study center for the duration of treatment in each period of the crossover. After a 1-week washout between dosing, patients returned to start confinement for the next treatment period. MF/F MDIs and placebo MDI devices (for the training of the inhalation technique) were manufactured by 3M Health Care Ltd (Loughborough, Leicestershire, UK, for Schering-Plough Corp [now Merck Sharp & Dohme Corp], Kenilworth, NJ, USA). The spacer device (AeroChamber Plus® Valved Holding Chamber; Monaghan Medical Corp, Plattsburgh, NY, USA) and MF DPI were obtained commercially by the site. Placebo DPI to match the MF DPI was manufactured and supplied by Schering-Plough Corp (now Merck Sharp & Dohme Corp). The study population was identified from the surrounding urban and suburban communities of Madisonville, KY, USA. Subjects were recruited from a pool of volunteers obtained from the database of Commonwealth Biomedical Research, LLC and by word-of-mouth advertisement. The protocol and informed consent were approved by Independent Investigational Review Board, Plantation, FL, USA, as required by the US Code of Federal Regulations and the internal Standard Operating Procedures of the sponsor. The study was conducted in accordance with good clinical practice and was approved by the appropriate institutional review boards and regulatory agencies. 
Written consent was obtained from all patients prior to the conduct of any study related procedures. Pharmacokinetic assessments: Pharmacokinetic parameters were calculated via noncompartmental analysis. Day 5 plasma concentrations and actual sampling times were used to calculate the following pharmacokinetic parameters of MF and formoterol: area under the plasma concentration–time curve from 0 to 12 hours (AUC0–12 hr), the maximum plasma concentration (Cmax), trough plasma concentration (Ctrough), and time to maximum plasma concentration (Tmax). Pharmacokinetic parameters were calculated using WinNonlin® software (v 5.0.1; Pharsight Corporation, Mountain View, CA, USA). AUC0–12 hr was calculated using the linear trapezoidal method for ascending concentrations and log trapezoidal method for descending concentrations. Values for Cmax, Ctrough, and Tmax were obtained by visual inspection of the blood concentration data. For the determination of MF plasma concentration, whole blood was collected in K3-ethylenediaminetetraacetic acid-containing tubes at predose (0 hours) and at 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 1 mL sample aliquot was fortified with 50 μL of internal standard (mometasone furoate-d3). Analytes were isolated through liquid–liquid extraction with 5.0 mL of 15:85 ethyl acetate/hexane, v/v. The extracts were further purified by solid phase extraction with Bond Elut® LRC NH2 cartridges (Agilent Technologies, Strathaven, Scotland). Analytes were eluted with 6.0 mL of 65:35 ethyl acetate/hexane, v/v. The solvent was evaporated under a nitrogen stream at approximately 50°C, and the remaining residue was reconstituted with 125 μL of methanol and 75 μL of 20 μM sodium acetate. The final extract was analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with an electrospray ionization source (AB SCIEX, Framingham, MA, USA) and having a lower limit of quantitation (LLOQ) of 0.250 pg/mL. The LC system employed a Luna (Phenomenex, Torrance, CA, USA) C18 3 × 150 mm (3 μm particle size) column and gradient elution. The retention time for both MF and the internal standard was approximately 14 minutes. The range of the standard curve using a 1.00 mL sample of human plasma was 0.250 to 25.0 pg/mL. At the LLOQ (0.250 pg/mL) for the MF assay, the between-day mean (standard deviation) was 0.253 (0.023) pg/mL, mean % bias was 1.10%, and the mean coefficient of variation was 8.95%. For the determination of the plasma concentration of formoterol, whole blood was collected in tubes containing lithium heparin with eserine (physostigmine) hemisulfate as a preservative at predose (0 hours) and at 0.167 (10 minutes), 0.25, 0.5, 1, 2, 4, 8, and 12 hours after the morning dose on day 5 and centrifuged for 15 minutes at 1500 × g. Plasma was removed and stored in a freezer at −20°C. A 500 μL plasma sample aliquot was fortified with 20 μL of internal standard (formoterol-d6) and 200 μL of 2% ammonium hydroxide. Extraction solvent was added, and the tubes were vortexed and centrifuged. The aqueous layer was frozen and the organic layer was decanted to a clean tube containing keeper solution. The organic solution was evaporated and the remaining residue was reconstituted with 200 μL of reconstitution solution. 
A 40-μL volume of the final extract was injected and analyzed using a Sciex API 5000 triple quadrupole liquid chromatography with tandem mass spectrometry system equipped with a turbo ion spray source (AB SCIEX) and having an LLOQ of 1.45 pmol/L. The LC system employed a 10 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 loading column (Thermo Fisher Scientific, Waltham, MA, USA) and a 50 × 3 mm (5 μm particle size) Thermo BETASIL Silica-100 analytical column. Formoterol and the internal standard were separated from the other plasma components using a mobile phase A consisting of 0.1% formic acid in 10 mM ammonium formate and a mobile phase B consisting of a 95:5 acetonitrile:mobile Phase A. The retention time for both formoterol and the internal standard were approximately 2.5 minutes. The range of the standard curve using a 500 μL sample of human plasma was 1.45 pmol/L to 727 pmol/L. At the LLOQ (1.45 pmol/L) for the formoterol assay, the between-day mean (standard deviation) was 1.44 (0.135) pmol/L, mean % bias was −1.23%, and the mean % coefficient of variation was 9.37%. Safety assessments: Clinical laboratory tests, vital signs, electrocardiography, and physical examinations were assessed at screening and clinical laboratory tests and vital signs at prespecified times during the study. Subjects were continually monitored for possible occurrence of AEs. At study conclusion (day 6 of period 3), vital signs, clinical laboratory tests, and physical examinations were repeated. Statistical analysis: Summary statistics including means and coefficients of variation were provided for MF concentration data at each time point and the derived pharmacokinetic parameters. The primary objective was to compare the MF AUC0–12 hr and Cmax values for the MF/F MDI combination with those for MF DPI monotherapy (ie, treatment A versus treatment C, respectively). The AUC0–12 hr and Cmax values were log-transformed and analyzed using an appropriate analysis of variance (ANOVA) model for a three-period crossover design extracting sources of variation due to treatment, patient, sequence, and period. As a secondary objective, the MF AUC0–12 hr and Cmax values for the MF/F MDI combination administered without and with a spacer were compared (ie, treatment A versus treatment B). The geometric means of treatment A to treatment C or treatment A to treatment B were expressed as a ratio (GMR), and the corresponding 90% CIs were computed. In addition, the log-transformed AUC0–12 hr and Cmax values for formoterol were analyzed similarly for treatments A and B, and comparisons between treatment A and treatment B were performed using a ratio and the corresponding 90% CI. Assuming an intrapatient variability of 28% (based on the variability observed in a similarly designed crossover study in healthy adults), this study with a targeted sample size of 12 patients should have been able to detect a 30% difference in MF pharmacokinetics between treatments B or C versus A with 80% power and a 90% CI. No inferential analysis of safety data was planned. The number of patients reporting any AEs, the occurrence of specific AEs, and discontinuation due to AEs were tabulated. Results: Demographic and baseline characteristics A total of 14 patients (five men and nine women) aged 45 to 72 years (mean, 62.7 years) were treated. Of these, 13 (93%) were white and one (7%) was black/African-American. 
Subjects had a mean (range) 50 (20–100) pack-year smoking history and a mean (range) predicted post-bronchodilator FEV1 of 58% (42%–73%). All 14 patients completed the study.
Pharmacokinetic results: Mean plasma concentration-time profiles showed prolonged absorption of MF following administration of MF by MDI (Figure 1). Median MF Tmax values were 3.00, 2.00, and 1.00 hours for treatments A, B, and C, respectively (Table 1). Median formoterol Tmax values were 1.02 and 0.52 hours for treatments A and B, respectively (Table 1). For the comparison of MF exposure following inhalation from the DPI and the MDI (primary objective), the ANOVA model included all treatments. Intrapatient variabilities for MF AUC0–12 hr and Cmax of 44% and 47%, respectively, were obtained from the model (Table 2). Despite this large observed variability, MF Cmax values were significantly different between treatments, with the mean Cmax for the MDI being 44% lower than that for the DPI (Table 2). Mean MF AUC0–12 hr following inhalation by MDI alone was 23% lower than the mean value following administration of MF by DPI. The wide CI reflects the small sample size and the larger-than-expected intrapatient variability (expected, approximately 28%; observed, 44%). A secondary objective of the study was to compare MF and formoterol exposure following inhalation using the MDI with and without a spacer. Following inhalation by MDI with the spacer, MF exposures based on AUC0–12 hr were lower than for MDI alone (Table 2). In the initial analysis with all treatments (Table 2), the 90% CIs of the ratio estimates for MF AUC0–12 hr and Cmax included 100%. However, because the larger intrapatient variability was related to the DPI treatment group, a reanalysis without treatment C showed that AUC0–12 hr and Cmax values were 28% and 18% lower, respectively, for MDI with a spacer compared with MDI alone (Table 3). Intrapatient variability for MF AUC0–12 hr and Cmax decreased from 44% and 47%, respectively, in the original three-treatment ANOVA to 23% and 24% when only the MDI data were compared. Mean plots of formoterol plasma concentrations showed rapid absorption of formoterol (Figure 2). For formoterol, AUC0–12 hr and Cmax values were 38% and 20% lower, respectively, for MDI with the spacer compared with MDI alone (Table 3).
Safety: A total of ten (71.4%) patients reported at least one AE during the study: seven (50%) during treatment A (MF 400 μg + F 10 μg via MDI), three (21.4%) during treatment B (MF 400 μg + F 10 μg via MDI with spacer), and four (28.6%) during treatment C (MF 400 μg via DPI). The most common AEs were headache and dyspepsia, occurring in eight (57.1%) and two (14.3%) patients, respectively. All reported AEs were either mild or moderate in severity. No deaths or serious AEs occurred during the study. There were no clinically significant changes in blood chemistry or hematologic parameters, vital signs, or electrocardiography in any of the treatment groups.
Discussion: Rapid and sustained relief from bronchoconstriction and improvement in lung function are critical for the long-term management of patients with persistent symptoms and exacerbations of COPD. Coadministration of MF and F has been shown to produce additive effects, rapidly improving symptoms and lung function and reducing the frequency of exacerbations in asthma and COPD.9,10,17–19 The decreases in lung function seen in patients with COPD may affect systemic exposure to drugs, such as MF/F MDI, that are administered via inhalation, and thereby may alter the pharmacokinetics of MF and formoterol. This has previously been described for the ICS fluticasone propionate in patients with COPD compared with healthy controls.12,13 Therefore, the current study was conducted in patients with moderate-to-severe COPD to assess the pharmacokinetics of MF and formoterol in the intended target population.
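The exposure metrics compared above (AUC0–12 hr, Cmax, Tmax) are standard noncompartmental quantities. As a rough illustration of how they are derived from a single concentration-time profile, here is a minimal sketch; the sampling times and concentrations are invented for illustration and are not study data, and the article does not state which software performed its noncompartmental analysis.

```python
# Minimal noncompartmental sketch: AUC0-12 hr by the linear trapezoidal rule,
# plus Cmax and Tmax, from one hypothetical concentration-time profile.
times = [0, 0.25, 0.5, 1, 2, 4, 6, 8, 12]                 # hours post-dose
conc = [0.0, 5.2, 9.8, 12.1, 10.3, 7.4, 5.1, 3.6, 1.9]    # pg/mL, invented values

def auc_trapezoid(t, c):
    """Linear trapezoidal AUC over the sampled interval (here 0-12 hours)."""
    return sum((t[i + 1] - t[i]) * (c[i + 1] + c[i]) / 2 for i in range(len(t) - 1))

auc_0_12 = auc_trapezoid(times, conc)
cmax = max(conc)
tmax = times[conc.index(cmax)]
print(f"AUC0-12 = {auc_0_12:.1f} pg*h/mL, Cmax = {cmax} pg/mL, Tmax = {tmax} h")
```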
The rationale for including the MF DPI comparison arm in this study was to assess the systemic exposure of MF administered by the MF/F MDI relative to the MF DPI, for which there is extensive clinical use and safety experience.16 A previous pharmacokinetic study in healthy subjects showed lower systemic exposure to MF after steady-state dosing from the MDI compared to the DPI device (data on file, Merck Sharp & Dohme Corp, 2006). Similar differences in systemic exposure between MDI and DPI devices have been reported for other ICSs. For example, for SYMBICORT® (AstraZeneca, London, UK), an approved fixed-dose combination product containing budesonide and formoterol, the systemic exposure of budesonide was approximately 30% lower in both pediatric and adult patients with asthma after administration from an MDI device compared to the same dose delivered from a DPI device.14,15 The current study demonstrated that mean systemic exposures to MF were 23% lower following administration by MF/F MDI compared to MF DPI (primary objective). Additionally, mean systemic exposures of MF and formoterol were 28% lower and 38% lower, respectively, following administration by MF/F MDI in conjunction with a spacer (AeroChamber Plus® Valved Holding Chamber) compared to MF/F MDI without a spacer (secondary objective). The high intrapatient variability in MF exposures observed in this study may in part be attributed to differences in lung function over time (eg, reduced lung inflammation over time with observed dosing) and/or day-to-day variations in patient inhalation technique. In order to control for variations in inhalation technique, patients were extensively trained on the proper use of the inhalation devices at baseline and, if necessary, prior to the start of each treatment period. However, even after allowing for differences in inhalation technique between the MDI and DPI as a contributor to the observed intrapatient variability, MF exposure following MDI treatment was still lower than that following DPI administration. These observed differences were not due to differences between the two formulations/devices in oral deposition and subsequent gastrointestinal absorption of MF, since patients were instructed to rinse their mouths with water and spit it out after treatment administration. Furthermore, after oral administration as a solution, MF has been shown to have very low systemic bioavailability due to extensive first-pass metabolism (unpublished data). The observed differences in systemic exposure between the MDI and DPI are probably due to formulation differences, which may result in differences in regional lung deposition and clearance from the lungs. The high intrapatient variability was related to the inclusion of data from the MF DPI treatment group (ie, treatment C) in the ANOVA model. A reanalysis of the results excluding treatment C showed lower overall mean exposures when a spacer was used with the MF/F MDI compared with when the MF/F MDI was used alone. The present results are in agreement with previous studies showing that spacer devices reduce the systemic absorption of ICS in healthy volunteers.20,21 In those studies, a major factor contributing to the lower dose delivery through a spacer was the static charge of the spacer, which attracted medication particles. The authors also reported that multiple actuations and delayed inhalation of the drug after actuation can reduce dose delivery through a spacer.
Application of antistatic material or washing the spacer was useful in reducing this static effect.21 It should be noted, however, that the differences in systemic exposure seen in this study in the presence/absence of a spacer device were observed in COPD patients who were trained in good, reproducible inhalation technique with an MDI device and who therefore would be unlikely to benefit from the use of a spacer in clinical practice. Spacer devices are indicated for patients with poor coordination and poor inhalation technique in order to improve drug delivery to the lungs. In clinical practice, pharmacotherapy with an inhalation product, such as MF/F MDI, is individualized, and each patient is titrated to a desired therapeutic response. Therefore, considering that patients who require a spacer device will be dosed with, and if necessary titrated with, a spacer, the use of a spacer device is not expected to have efficacy implications in the target population. In this study, MF/F MDI was shown to have an AE profile similar to that of MF DPI. All reported AEs were either mild or moderate in intensity, and no serious AEs were reported in this study. These safety findings are in agreement with those of previous studies conducted in asthma patients, which also reported that treatment with MF/F was generally well tolerated.22,23 Nevertheless, the current short-term, multiple-dose pharmacokinetic study does not address the long-term safety and tolerability profile of chronic MF/F MDI therapy in patients with COPD. Two recently published articles demonstrated that treatment with MF/F MDI was well tolerated in patients with moderate-to-severe COPD over 52 weeks.9,10 Considering that this study demonstrated lower systemic exposure with the MDI product than with the DPI in patients with COPD, there may be some concern regarding the appropriate interchangeability of the DPI monotherapy and the MDI combination therapy with regard to the comparability of the MF dose delivered by each device. While a comparison of the systemic exposure of inhaled drugs is an acceptable way to evaluate the relative risk of ICSs with regard to their systemic safety, there is considerable debate as to whether similar systemic exposure reflects comparable localized drug concentrations in the lung. Therefore, the clinical development program for the MF/F fixed-dose combination product has focused on demonstrating the clinical efficacy and safety of the MF/F device compared with the MF MDI device and placebo rather than the DPI reference products.9,10 In this study, the observed mean difference in systemic availability between the MDI and DPI formulations was 23%. However, the difference between formulations in actual lung deposition may be smaller than that noted in systemic exposure and may be due, at least in part, to differences in lung retention. This hypothesis is supported by the apparently longer effective MF half-life after administration from the MDI (25 hours) versus the DPI (13 hours; unpublished data). In addition, the majority of dose-response studies conducted with inhaled corticosteroids have failed to demonstrate clinically meaningful differences even between a doubling of doses,24,25 let alone a 25% difference.
This point is illustrated by the formoterol/budesonide fixed-dose combination (SYMBICORT®), for which the same delivered doses are approved for both the MDI (aerosol inhaler) and DPI (Turbuhaler®; AstraZeneca) formulations despite the ~30% lower systemic budesonide exposure from the MDI.14,15 Consistent with the relatively flat dose-response relationship of ICSs, the current European Medicines Agency therapeutic equivalence guideline for orally inhaled products (CPMP/EWP/4151/00) has expanded equivalence acceptance margins of 0.67 and 1.5,26 which allow for a mean difference between treatments of up to 1.5-fold. In view of the aforementioned considerations, the 23% lower systemic exposure to MF from the MDI relative to the DPI formulation is not considered clinically important. Conclusion: This multiple-dose pharmacokinetic study demonstrated that systemic exposure to MF was lower following administration by MF/F MDI compared to MF DPI in patients with moderate-to-severe COPD. Additionally, systemic exposures of MF and formoterol were lower following administration by MF/F MDI with an AeroChamber Plus® Valved Holding Chamber spacer compared to MF/F MDI without this spacer. The magnitude of the differences in systemic exposure to MF seen with MF/F MDI versus MF DPI, as well as with MF/F MDI administered with a spacer versus MF/F MDI without a spacer, was not clinically relevant. There also was no clinically relevant difference in systemic exposure to formoterol seen with MF/F MDI administered in the presence and absence of a spacer. Finally, MF/F delivered twice daily by MDI was generally well tolerated among patients with moderate-to-severe COPD in this short-term study.
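The treatment comparisons reported above are expressed as geometric mean ratios (GMRs) with 90% CIs derived from an ANOVA on log-transformed exposures. A minimal sketch of that calculation for a single paired, two-treatment comparison follows; the AUC values are hypothetical, and the paired t-interval is a simplified stand-in for the study's three-treatment crossover ANOVA.

```python
# Minimal sketch: GMR and 90% CI for a two-treatment crossover comparison,
# computed from paired log-transformed exposures (hypothetical AUC values).
import math
from scipy import stats

auc_test = [120, 95, 150, 80, 110, 130, 90, 105]     # e.g. MDI AUC0-12 hr, invented
auc_ref = [140, 120, 160, 110, 150, 145, 100, 125]   # e.g. DPI AUC0-12 hr, invented

diffs = [math.log(t) - math.log(r) for t, r in zip(auc_test, auc_ref)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
se = sd_d / math.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)                 # two-sided 90% CI

gmr = 100 * math.exp(mean_d)
ci_low = 100 * math.exp(mean_d - t_crit * se)
ci_high = 100 * math.exp(mean_d + t_crit * se)
print(f"GMR = {gmr:.0f}% (90% CI {ci_low:.0f}, {ci_high:.0f})")
```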
Background: Coadministration of mometasone furoate (MF) and formoterol fumarate (F) produces additive effects for improving symptoms and lung function and reduces exacerbations in patients with asthma and chronic obstructive pulmonary disease (COPD). The present study assessed the relative systemic exposure to MF and characterized the pharmacokinetics of MF and formoterol in patients with COPD. Methods: This was a single-center, randomized, open-label, multiple-dose, three-period, three-treatment crossover study. The following three treatments were self-administered by patients (n = 14) with moderate-to-severe COPD: MF 400 μg/F 10 μg via a metered-dose inhaler (MF/F MDI; DULERA(®)/ZENHALE(®)) without a spacer device, MF/F MDI with a spacer, or MF 400 μg via a dry-powder inhaler (DPI; ASMANEX(®) TWISTHALER(®)) twice daily for 5 days. Plasma samples for MF and formoterol assay were obtained predose and at prespecified time points after the last (morning) dose on day 5 of each period of the crossover. The geometric mean ratio (GMR) as a percent and the corresponding 90% confidence intervals (CI) were calculated for treatment comparisons. Results: Systemic MF exposure was lower (GMR 77%; 90% CI 58, 102) following administration by MF/F MDI compared to MF DPI. Additionally, least squares geometric mean systemic exposures of MF and formoterol were lower (GMR 72%; 90% CI 61, 84) and (GMR 62%; 90% CI 52, 74), respectively, following administration by MF/F MDI in conjunction with a spacer compared to MF/F MDI without a spacer. MF/F MDI had a similar adverse experience profile as that seen with MF DPI. All adverse experiences were either mild or moderate in severity; no serious adverse experience was reported. Conclusions: Systemic MF exposures were lower following administration by MF/F MDI compared with MF DPI. Additionally, systemic MF and formoterol exposures were lower following administration by MF/F MDI with a spacer versus without a spacer. The magnitude of these differences with respect to systemic exposure was not clinically relevant.
Demographic and baseline characteristics: A total of 14 patients (five men and nine women) aged 45 to 72 years (mean, 62.7 years) were treated. Of these, 13 (93%) were white and one (7%) was black/African-American. Subjects had a mean (range) 50 (20–100) pack-year smoking history and a mean (range) predicted post-bronchodilator FEV1 of 58% (42%–73%). All 14 patients completed the study. Conclusion: This multiple-dose pharmacokinetic study demonstrated that systemic exposure to MF was lower following administration by MF/F MDI compared to MF DPI in patients with moderate-to-severe COPD. Additionally, systemic exposures of MF and formoterol were lower following administration by MF/F MDI with an AeroChamber Plus® Valved Holding Chamber spacer compared to MF/F MDI without this spacer. The magnitude of the differences in systemic exposure to MF seen with MF/F MDI versus MF DPI, as well as with MF/F MDI administered with a spacer versus MF/F MDI without a spacer, was not clinically relevant. There also was no clinically relevant difference in systemic exposure to formoterol seen with MF/F MDI administered in the presence and absence of a spacer. Finally, MF/F delivered twice daily by MDI was generally well tolerated among patients with moderate-to-severe COPD in this short-term study.
Background: Coadministration of mometasone furoate (MF) and formoterol fumarate (F) produces additive effects for improving symptoms and lung function and reduces exacerbations in patients with asthma and chronic obstructive pulmonary disease (COPD). The present study assessed the relative systemic exposure to MF and characterized the pharmacokinetics of MF and formoterol in patients with COPD. Methods: This was a single-center, randomized, open-label, multiple-dose, three-period, three-treatment crossover study. The following three treatments were self-administered by patients (n = 14) with moderate-to-severe COPD: MF 400 μg/F 10 μg via a metered-dose inhaler (MF/F MDI; DULERA(®)/ZENHALE(®)) without a spacer device, MF/F MDI with a spacer, or MF 400 μg via a dry-powder inhaler (DPI; ASMANEX(®) TWISTHALER(®)) twice daily for 5 days. Plasma samples for MF and formoterol assay were obtained predose and at prespecified time points after the last (morning) dose on day 5 of each period of the crossover. The geometric mean ratio (GMR) as a percent and the corresponding 90% confidence intervals (CI) were calculated for treatment comparisons. Results: Systemic MF exposure was lower (GMR 77%; 90% CI 58, 102) following administration by MF/F MDI compared to MF DPI. Additionally, least squares geometric mean systemic exposures of MF and formoterol were lower (GMR 72%; 90% CI 61, 84) and (GMR 62%; 90% CI 52, 74), respectively, following administration by MF/F MDI in conjunction with a spacer compared to MF/F MDI without a spacer. MF/F MDI had a similar adverse experience profile as that seen with MF DPI. All adverse experiences were either mild or moderate in severity; no serious adverse experience was reported. Conclusions: Systemic MF exposures were lower following administration by MF/F MDI compared with MF DPI. Additionally, systemic MF and formoterol exposures were lower following administration by MF/F MDI with a spacer versus without a spacer. The magnitude of these differences with respect to systemic exposure was not clinically relevant.
11,071
435
[ 671, 4459, 219, 739, 879, 64, 1356, 150, 172 ]
13
[ "mf", "mdi", "treatment", "study", "dpi", "patients", "12", "mean", "μg", "spacer" ]
[ "copd asthma recommend", "inhalation product mf", "adrenergic receptors bronchial", "inhaled corticosteroids", "treatment asthma mf" ]
[CONTENT] mometasone furoate | chronic obstructive pulmonary disease | pharmacokinetics | systemic exposure | metered-dose inhaler | dry-powder inhaler [SUMMARY]
[CONTENT] mometasone furoate | chronic obstructive pulmonary disease | pharmacokinetics | systemic exposure | metered-dose inhaler | dry-powder inhaler [SUMMARY]
[CONTENT] mometasone furoate | chronic obstructive pulmonary disease | pharmacokinetics | systemic exposure | metered-dose inhaler | dry-powder inhaler [SUMMARY]
[CONTENT] mometasone furoate | chronic obstructive pulmonary disease | pharmacokinetics | systemic exposure | metered-dose inhaler | dry-powder inhaler [SUMMARY]
[CONTENT] mometasone furoate | chronic obstructive pulmonary disease | pharmacokinetics | systemic exposure | metered-dose inhaler | dry-powder inhaler [SUMMARY]
[CONTENT] mometasone furoate | chronic obstructive pulmonary disease | pharmacokinetics | systemic exposure | metered-dose inhaler | dry-powder inhaler [SUMMARY]
[CONTENT] Aged | Anti-Inflammatory Agents | Bronchodilator Agents | Cross-Over Studies | Drug Combinations | Dry Powder Inhalers | Ethanolamines | Female | Formoterol Fumarate | Humans | Male | Metered Dose Inhalers | Middle Aged | Mometasone Furoate | Pregnadienediols | Pulmonary Disease, Chronic Obstructive [SUMMARY]
[CONTENT] Aged | Anti-Inflammatory Agents | Bronchodilator Agents | Cross-Over Studies | Drug Combinations | Dry Powder Inhalers | Ethanolamines | Female | Formoterol Fumarate | Humans | Male | Metered Dose Inhalers | Middle Aged | Mometasone Furoate | Pregnadienediols | Pulmonary Disease, Chronic Obstructive [SUMMARY]
[CONTENT] Aged | Anti-Inflammatory Agents | Bronchodilator Agents | Cross-Over Studies | Drug Combinations | Dry Powder Inhalers | Ethanolamines | Female | Formoterol Fumarate | Humans | Male | Metered Dose Inhalers | Middle Aged | Mometasone Furoate | Pregnadienediols | Pulmonary Disease, Chronic Obstructive [SUMMARY]
[CONTENT] Aged | Anti-Inflammatory Agents | Bronchodilator Agents | Cross-Over Studies | Drug Combinations | Dry Powder Inhalers | Ethanolamines | Female | Formoterol Fumarate | Humans | Male | Metered Dose Inhalers | Middle Aged | Mometasone Furoate | Pregnadienediols | Pulmonary Disease, Chronic Obstructive [SUMMARY]
[CONTENT] Aged | Anti-Inflammatory Agents | Bronchodilator Agents | Cross-Over Studies | Drug Combinations | Dry Powder Inhalers | Ethanolamines | Female | Formoterol Fumarate | Humans | Male | Metered Dose Inhalers | Middle Aged | Mometasone Furoate | Pregnadienediols | Pulmonary Disease, Chronic Obstructive [SUMMARY]
[CONTENT] Aged | Anti-Inflammatory Agents | Bronchodilator Agents | Cross-Over Studies | Drug Combinations | Dry Powder Inhalers | Ethanolamines | Female | Formoterol Fumarate | Humans | Male | Metered Dose Inhalers | Middle Aged | Mometasone Furoate | Pregnadienediols | Pulmonary Disease, Chronic Obstructive [SUMMARY]
[CONTENT] copd asthma recommend | inhalation product mf | adrenergic receptors bronchial | inhaled corticosteroids | treatment asthma mf [SUMMARY]
[CONTENT] copd asthma recommend | inhalation product mf | adrenergic receptors bronchial | inhaled corticosteroids | treatment asthma mf [SUMMARY]
[CONTENT] copd asthma recommend | inhalation product mf | adrenergic receptors bronchial | inhaled corticosteroids | treatment asthma mf [SUMMARY]
[CONTENT] copd asthma recommend | inhalation product mf | adrenergic receptors bronchial | inhaled corticosteroids | treatment asthma mf [SUMMARY]
[CONTENT] copd asthma recommend | inhalation product mf | adrenergic receptors bronchial | inhaled corticosteroids | treatment asthma mf [SUMMARY]
[CONTENT] copd asthma recommend | inhalation product mf | adrenergic receptors bronchial | inhaled corticosteroids | treatment asthma mf [SUMMARY]
[CONTENT] mf | mdi | treatment | study | dpi | patients | 12 | mean | μg | spacer [SUMMARY]
[CONTENT] mf | mdi | treatment | study | dpi | patients | 12 | mean | μg | spacer [SUMMARY]
[CONTENT] mf | mdi | treatment | study | dpi | patients | 12 | mean | μg | spacer [SUMMARY]
[CONTENT] mf | mdi | treatment | study | dpi | patients | 12 | mean | μg | spacer [SUMMARY]
[CONTENT] mf | mdi | treatment | study | dpi | patients | 12 | mean | μg | spacer [SUMMARY]
[CONTENT] mf | mdi | treatment | study | dpi | patients | 12 | mean | μg | spacer [SUMMARY]
[CONTENT] mean range | mean | 14 patients | years | range | 14 | 72 years mean 62 | 50 20 100 | 50 20 | american subjects mean range [SUMMARY]
[CONTENT] treatment | treatment treatment | cmax values | 12 hr cmax values | 12 hr cmax | hr cmax | hr cmax values | auc0 12 hr cmax | mf | 12 hr [SUMMARY]
[CONTENT] table | mdi | mf | hr | 12 hr | auc0 | auc0 12 | auc0 12 hr | cmax | values [SUMMARY]
[CONTENT] mf | mf mdi | mdi | systemic | systemic exposure | spacer | seen mf | relevant | clinically relevant | versus mf [SUMMARY]
[CONTENT] mf | mdi | treatment | μg | study | patients | mean | dpi | 12 | systemic [SUMMARY]
[CONTENT] mf | mdi | treatment | μg | study | patients | mean | dpi | 12 | systemic [SUMMARY]
[CONTENT] F ||| MF | COPD [SUMMARY]
[CONTENT] three-period | three ||| three | 14 | 400 | 400 | ASMANEX ||| daily | 5 days ||| Plasma | MF | morning | day 5 ||| GMR | 90% | CI [SUMMARY]
[CONTENT] GMR | 77% | 90% | CI | 58 | 102 | MF/F MDI ||| MF | GMR | 72% | 90% | CI | 61, 84 | GMR | 62% | 90% | CI | 74 | MF/F MDI ||| MF/F MDI ||| [SUMMARY]
[CONTENT] MF/F MDI | MF DPI ||| MF | MF/F MDI ||| [SUMMARY]
[CONTENT] F ||| MF | COPD ||| three-period | three ||| three | 14 | 400 | 400 | ASMANEX ||| daily | 5 days ||| Plasma | MF | morning | day 5 ||| GMR | 90% | CI ||| ||| GMR | 77% | 90% | CI | 58 | 102 | MF/F MDI ||| MF | GMR | 72% | 90% | CI | 61, 84 | GMR | 62% | 90% | CI | 74 | MF/F MDI ||| MF/F MDI ||| ||| MF/F MDI | MF DPI ||| MF | MF/F MDI ||| [SUMMARY]
[CONTENT] F ||| MF | COPD ||| three-period | three ||| three | 14 | 400 | 400 | ASMANEX ||| daily | 5 days ||| Plasma | MF | morning | day 5 ||| GMR | 90% | CI ||| ||| GMR | 77% | 90% | CI | 58 | 102 | MF/F MDI ||| MF | GMR | 72% | 90% | CI | 61, 84 | GMR | 62% | 90% | CI | 74 | MF/F MDI ||| MF/F MDI ||| ||| MF/F MDI | MF DPI ||| MF | MF/F MDI ||| [SUMMARY]
Epidemiological and Genomic Characterization of Campylobacter jejuni Isolates from a Foodborne Outbreak at Hangzhou, China
32344510
Foodborne outbreaks caused by Campylobacter jejuni have become a significant public health problem worldwide. Applying genomic sequencing as a routine part of foodborne outbreak investigation remains in its infancy in China. We applied both traditional PFGE profiling and genomic investigation to understand the cause of a foodborne outbreak in Hangzhou in December 2018.
BACKGROUND
A total of 43 fecal samples, 27 from sick patients and 16 from canteen employees at a high school in Hangzhou city, Zhejiang province, were collected. Routine real-time fluorescent PCR assays were used to screen for potential infectious agents, including viral pathogens (norovirus, rotavirus, adenovirus, and astrovirus) and bacterial pathogens (Salmonella, Shigella, Campylobacter jejuni, Vibrio parahaemolyticus, and Vibrio cholerae). Bacterial selective medium was used to isolate and identify the organisms that tested positive in the molecular assays. Pulsed-field gel electrophoresis (PFGE) and next-generation sequencing (NGS) were applied to the fifteen recovered C. jejuni isolates to further resolve the case linkage of this particular outbreak. Additionally, we retrieved reference genomes from the NCBI database and performed a comparative genomic analysis with the genomes produced in this study.
METHOD
The analyzed samples were negative for the queried viruses. Additionally, Salmonella, Shigella, Vibrio parahaemolyticus, and Vibrio cholerae were not detected. Fifteen C. jejuni strains were identified by the real-time PCR assay and bacterial selective medium. These C. jejuni strains were classified into two genetic profiles by PFGE. Of the fifteen C. jejuni strains, fourteen shared a consistent genotype belonging to ST2988, while the remaining strain belonged to ST8149 and showed only 66.7% similarity to the rest of the strains. Moreover, all fifteen strains harbored blaOXA-61 and tet(O), in addition to a chromosomal mutation in gyrA (T86I). Phylogenomic investigation demonstrated that the fourteen ST2988 strains, which belong to the CC354 clonal complex, showed very little genetic difference between them (3~66 SNPs).
RESULTS
Both genomic investigation and PFGE profiling confirmed that C. jejuni ST2988, a new derivative of CC354, was responsible for the foodborne outbreak described in this study.
CONCLUSION
[ "Bacterial Typing Techniques", "Campylobacter Infections", "Campylobacter jejuni", "China", "Disease Outbreaks", "Electrophoresis, Gel, Pulsed-Field", "Foodborne Diseases", "Genome, Bacterial", "Genomics", "Humans", "Phylogeny", "Sequence Analysis, DNA", "Virulence Factors" ]
7215453
1. Introduction
Campylobacter jejuni is a common foodborne pathogenic bacterium that causes gastroenteritis and, more severely, a neurological disease in humans called Guillain-Barré syndrome [1]. Raw milk, water, and contaminated meat, particularly chicken, are believed to be the main sources of C. jejuni human infections [2,3]. C. jejuni is considered the leading cause of human gastroenteritis [4] and ranks as the second most important cause of foodborne disease in the U.S., with more than 1.5 million illnesses annually according to the Centers for Disease Control and Prevention (CDC). It has also been reported as one of the most commonly described pathogens in humans in the European Union foodborne disease surveillance network since 2005 [5,6]. Recently, there has been a surge in the global incidence of Campylobacter infections and ongoing spread of human cases in North America, Europe, and Australia [7]. Although foodborne disease caused by C. jejuni has become an important public health concern, there is limited knowledge about its role in foodborne disease outbreaks in China. This knowledge gap could be due to Campylobacter infections not being subject to obligatory reporting, with surveillance carried out on a voluntary basis by local and regional laboratories. Pulsed-field gel electrophoresis (PFGE) has been widely used in outbreak investigations for tracking sources of infection and effectively controlling epidemics because of its good reproducibility, high resolution, stable results, and ease of standardization [8]. Next-generation sequencing (NGS) technology is now becoming popular, given its advantages of labor and time savings, high-throughput capacity, high precision, and the abundance of genetic information it provides for extensive studies. As sequencing costs continue to decrease, genomic epidemiology combined with NGS has been increasingly and widely applied to outbreak investigations [9,10]. PFGE and other genotyping approaches, including multi-locus sequence typing (MLST), show that Campylobacter is not a genetically monomorphic organism but comprises highly diverse assemblages with an array of different phenotypes [9,10,11]. Given this complexity, there is sufficient genetic material to link a particular genotype with a certain animal host [2,12]. Nevertheless, few Chinese clinical C. jejuni isolates with genome sequences are available in public genomic databases. The aim of this study was to describe both the epidemiological investigation and the genomic characterization of the C. jejuni strains responsible for the outbreak in a high school in Hangzhou in December 2018, using PFGE and NGS technologies.
null
null
2. Results
2.1. Causative Pathogen Scanning
All forty-three samples were found negative for norovirus, rotavirus, adenovirus, sapovirus and astrovirus. Additionally, Salmonella, Shigella, Vibrio parahaemolyticus, and Vibrio cholerae were not detected in any of the examined patients. Fifteen strains of C. jejuni, from the fifteen sick students (Table 1), were identified by real-time fluorescent PCR and confirmed by traditional microbiological approaches.
2.2. PFGE Profiling Studies
The fifteen C. jejuni strains were classified into two types: PA1 (fourteen strains), with an overall 99.7% similarity, and PA2 (one strain). The similarity between PA1 and PA2 was 66.7% (Figure 1, Table 1).
2.3. Genomic Sequencing
After conducting whole genome sequencing and genomic assembly of the C. jejuni strains, the number of contigs per genome ranged from 12 to 48. Genome sequencing and assembly results and accession numbers are summarized in Table 1. The average genome size of the draft assemblies was 1,650,982 bp. Furthermore, the average N50 was 255,161 bp, with an average GC content of 30.33%. The assembly results were scanned to determine their MLST profiles. Fourteen strains of C. jejuni belonged to ST2988 and only one strain belonged to ST8149. Further analysis showed that all the strains harbored blaOXA-61, which encodes a β-lactamase conferring resistance to β-lactams, and tet(O), which confers resistance to tetracyclines. Additionally, a chromosomal mutation in gyrA (T86I), which might be responsible for resistance to fluoroquinolones, was detected in all fifteen strains. No plasmid replicons were detected in any isolate. Furthermore, Figure 2 shows that all isolates harbored flagellar, motility, chemotaxis and cytolethal distending toxin proteins. Isolate SAMN12388815 harbored the gmhP and porA proteins, which play a vital role in bacterial virulence by enhancing adhesion and invasion. The Cysc, Cj1416c, Cj1417c, Cj1419c and Cj1420c proteins, which are involved in capsular polysaccharide biosynthesis, were detected in only one strain (SAMN12388815). We also identified that the kpsT and kpsC proteins, which are involved in capsular polysaccharide biosynthesis, were present in only three strains (SAMN12388802, SAMN12388803, SAMN12388804; Figure 2).
2.4. Genome Comparison and Phylogenomic Analysis
The phylogenomic tree shows that all fourteen case-patient isolates from this particular outbreak, which belonged to ST2988, are closely related and clustered together in a single clade. The other individual strain belonged to ST8149 (Figure 1). Importantly, the genomes of these fourteen ST2988 isolates differed by fewer than 70 core SNPs and showed high (>99%) similarity (Table S1). ST2988 belongs to CC354, which includes 199 identified sequence types (http://pubmlst.org/campylobacter/). Genomic data for all CC354 strains in the NCBI database were extracted and 303 genomes were obtained (Table S2), spanning 27 sequence types. With ST354 strain RM1221 (GCA_000011865.1) as the reference genome, the SNP loci and phylogenetic tree for the 302 strains in the public database and the 14 ST2988 isolates from this outbreak were obtained (Figure 3). ST354 is the most predominant sequence type in CC354 (Figure S1). Most CC354 isolates were isolated from humans and food samples, and few isolates were retrieved from unknown sources (Figure 3 and Figure S1). Chicken-origin isolates had the highest prevalence among the food isolates (Figure 3 and Table S2). The CC354 strains in the public databases are mainly from the US and the UK (Figure S1), while strains from other countries are scattered. A small difference in distance between phylogenetic branches of CC354 isolates was identified in Figure 3 with a scale bar at 0.001, indicating a very close genetic relationship within the sequence type. We also observed a close relationship with a scale bar at 0.001 among these 14 strains linked with the outbreak in this study, which were also linked with the only available genome (SAMN10485936) in the NCBI database (Figure 4).
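Two of the summary statistics quoted in this section, assembly N50 and pairwise core-SNP distances, can be reproduced with very little code. The sketch below uses toy contig lengths and SNP strings rather than the outbreak data; in the study itself these figures came from QUAST and from the Snippy/Gubbins SNP analysis described in the Methods.

```python
# Minimal sketch of two summary statistics used above: assembly N50 from contig
# lengths, and the SNP distance between two aligned core-SNP sequences.
def n50(contig_lengths):
    """Length of the contig at which the sorted cumulative sum reaches half the assembly size."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length

def snp_distance(seq_a, seq_b):
    """Number of differing positions between two equal-length aligned sequences."""
    assert len(seq_a) == len(seq_b)
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

print(n50([400_000, 300_000, 200_000, 100_000, 50_000]))  # toy contigs -> 300000
print(snp_distance("ACGTTGCA", "ACGATGCA"))                # toy alignment -> 1
```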
5. Conclusions
This analysis sheds light on the potential threat posed by C. jejuni infections. PFGE and NGS technologies provided reliable evidence identifying the pathogen responsible for this outbreak, C. jejuni ST2988. These results suggest that greater attention should be paid to the circulation of this rarely reported sequence type. Advanced NGS technologies are expected to be increasingly valuable for pathogen detection and foodborne disease tracking. To our knowledge, this is the second C. jejuni outbreak described in China to date. Unfortunately, in this event, food samples were not included in the investigation; in future investigations, the collection and testing of food samples should be emphasized to allow a more comprehensive assessment. These data also support the need for authorities to implement systematic surveillance and compulsory notification of Campylobacter infections in humans as well as in different animal species, which is essential for identifying and tracking the source of infection and for rationalizing effective control measures to ensure public health and safety.
[ "2.1. Causative Pathogen Scanning", "2.2. PFGE Profiling Studies", "2.3. Genomic Sequencing", "2.4. Genome Comparison and Phylogenomic Analysis ", "4. Material and Methods", "4.1. Epidemiological Investigation", "4.2. Samples Collection", "4.3. Pathogen Detection", "4.4. Isolation and Identification of Campylobacter spp.", "4.5. Pulsed Field Gel Electrophoresis (PFGE) Testing", "4.6. Genomic Sequencing and Bioinformatic Analysis", "4.7. Ethical Approval", "5. Conclusions" ]
[ "All forty-three samples were found negative for norovirus, rotavirus, adenovirus, sapovirus and astrovirus. Additionally, Salmonella, Shigella, Vibrio parahaemolyticus, and Vibrio cholerae were also not detected in all the examined patients. Fifteen strains of C. jejuni, from the fifteen sick students (Table 1), were identified by real-time fluorescent PCR and confirmed by the traditional microbiological approaches.", "The fifteen C. jejuni strains were classified into two types, PA1 (fourteen strains) with an overall 99.7% similarity, and PA2 (one strain). The similarity between PA1 and PA2 was 66.7% (Figure 1, Table 1).", "After conducting the whole genome sequencing and genomic assembly of the C. jejuni strains, the number of contigs was calculated to be between 12 and 48 contigs. Genome sequencing, assembly results and accession number are summarized in Table 1. The average genome size of draft assemblies was 1,650,982. Furthermore, the average N50 was 255,161 with 30.33% as average of GC%.\nThe assembly results were scanned and identified with their MLST profiles. Fourteen strains of C. jejuni belonged to ST2988 and only one strain belonged to the ST8149 type. Further analysis showed that all the strains harbored blaOXA-61 which encodes resistance to β-lactamases, and tet(O) which confers resistance to tetracyclines. Additionally, a chromosomal mutation in gyrA (T86I), which might be responsible for the resistance to fluoroquinolones, was detected in all fifteen strains. No plasmid replicons were detected in any isolate. Furthermore, Figure 2 shows that all isolates harbored flagellar, motility, chemotaxis and cytolethal toxin proteins. SAMN12388815 isolate harbored gmhP, porA proteins which play a vital role in bacterial virulence by enhancing the adhesion and invasion properties. Cysc, Cj1416c, Cj1417c, Cj1419c, Cj1420c proteins, which are involved in capsule polysaccharide biosynthesis, were only detected in one strain (SAMN12388815). We also identified that both kpsT and kpsC proteins, which were involved in capsule polysaccharide biosynthesis, were only in three strains (SAMN12388802, SAMN12388803, SAMN12388804, in Figure 2).", "The phylogenomic tree shows that all fourteen case-patient isolates from this particular outbreak, which belonged to ST2988, are closely related and clustered together in a single clade. The other individual strain belonged to ST8149 (Figure 1). Importantly, the genomes of these fourteen ST2988 isolates were differed in (< 70) core SNPs, and showed (> 99%) a high similarity (Table S1).\nThe ST2988 belongs to CC354 that includes 199 identified sequence types in (http://pubmlst.org/campylobacter/). Genomic data of all CC354 strains in NCBI database were extracted and 303 genomes were obtained (Table S2), including 27 sequence types. With ST354 strain RM1221 (GCA_000011865.1) as the reference genome, the SNP locus and phylogenetic tree between 302 strains in the public database and 14 ST2988 isolates from this outbreak were obtained (Figure 3). ST354 is the most predominant sequence type in CC354 (Figure S1). Most CC345 isolates were isolated from humans and food samples, and few isolates were retrieved from unknown sources (Figure 3, and Figure S1). Isolates from chicken-origin were identified to have the highest prevalence among the food isolates (Figure 3, and Table S2). The CC354 strains in the public databases are mainly from the US and the UK (Figure S1), while strains from other countries are scattered. 
A small difference in distance between phylogenetic branches of CC354 isolates was identified in Figure 3 with a scale bar at 0.001, indicating a very close genetic relationship within the sequence type. We also observed a close relationship with a scale bar at 0.001 among these 14 strains linked with the outbreak in this study, which were also linked with the only available genome (SAMN10485936) in the NCBI database (Figure 4).", " 4.1. Epidemiological Investigation In December 2018, a series of patients reported foodborne diseases in a high school in Hangzhou, the capital city of Zhejiang province in eastern China. Eighty-four students, in twelve classes from grade one to six, complained of symptoms of food poisoning. No meals were served at the school other than school lunches, which could be the potential source of this foodborne outbreak.\nWe defined a probable case as a patient with diarrhea, vomiting or other symptoms (abdominal pain, fever and so on) and a confirmed case as a patient with any symptoms and a confirmed laboratory diagnosis of C. jejuni.\n 4.2. Samples Collection Local CDC microbiologists collected 43 fecal samples based on the Chinese local regulations, of which 27 were from sick students and 16 from canteen employees, as probable cases for microbiological investigation. Canteen food samples were disposed of by the head of school due to the concerns of further contamination and disease dissemination, so no foods were available in the current investigation.\n 4.3. Pathogen Detection Real-time fluorescent PCR was used to detect norovirus, rotavirus, adenovirus, sapovirus and astrovirus according to a protocol reported earlier [36]. The WS271-2007 diagnostic criteria for infectious diarrhea protocol [37,38] was used for the detection of Salmonella, C. jejuni and Vibrio parahaemolyticus. The WS287-2008 [39] and WS289-2008 [40] protocols were used for the detection of Shigella and Vibrio cholerae, respectively. Briefly, fecal samples were added to an Eppendorf tube with sterile saline to prepare a stool suspension. Total genomic DNA, including bacterial and viral agents, was extracted and purified from the stool suspension using the QIAamp DNA mini Kit (Qiagen, Hilden, Germany, No: 51304), according to the manufacturer's recommended protocols. Real-time fluorescent PCR was performed at 42 °C for 1 h and 95 °C for 15 min, followed by 40 cycles of 94 °C for 60 s, 58 °C for 80 s, and 72 °C for 60 s, with a final extension at 72 °C for 7 min.\n 4.4. Isolation and Identification of Campylobacter spp. The positive Campylobacter samples detected by the real-time fluorescent PCR were pre-enriched with Preston selective broth supplemented with 5% sterile, lysed sheep blood, Campylobacter growth supplement and selective supplement (Oxoid Ltd., Basingstoke, UK). Samples were incubated at 42 ℃ under microaerobic conditions (5% O2, 10% CO2, and 85% N2) for 12–24 h. Two hundred microliter drops of the pre-enrichment were applied to the 0.45-μm pore-size filter and left on the surface of a Columbia blood agar plate. These plates were further incubated at 37 ℃ under microaerobic conditions [41].\n 4.5. Pulsed Field Gel Electrophoresis (PFGE) Testing PFGE molecular typing was performed according to the PFGE protocol for C. jejuni [42,43]. Briefly, restriction digestion was conducted using 40 U SmaI (Takara, Dalian, China), and fragments were run on a CHEF Mapper PFGE system (Bio-Rad Laboratories, Hercules, CA, USA) with SeaKem gold agarose (Lonza, Rockland, MD, USA) in 0.5×Tris-borate-EDTA. Bionumerics v6.6 software was used for the clustering analysis. Similarity greater than 95% was considered to indicate the same genetic group. The similarity between chromosomal fingerprints was scored using the Dice coefficient. The unweighted pair group method with arithmetic means (UPGMA), at the cut-off of 1.5% tolerance and 1.00% optimization, was used to obtain the dendrogram in the PFGE profile.\n 4.6. Genomic Sequencing and Bioinformatic Analysis The genomic DNA library was constructed using the Nextera XT DNA library construction kit (Illumina, USA, No: FC-131-1024), followed by genomic sequencing using the MiSeq Reagent Kit v2 300-cycle kit (Illumina, USA, No: MS-102-2002). High-throughput genome sequencing was accomplished on the Illumina MiSeq sequencing platform, as previously described [44,45,46]. The quality of sequencing and trimming was checked with the FastQC toolkit, while low-quality sequences and adapter sequences were removed with Trimmomatic [47]. The genome assembly was performed with SPAdes 4.0.1 for genomic scaffolds [48], using the "careful correction" option in order to reduce the number of mismatches in the final assembly, with k-mer values chosen automatically by SPAdes. QUAST [49] was used to evaluate the assembled genomes through basic statistics generation, including the total number of contigs, contig length, and N50. Prokka 1.14 [50], with the "default" settings, was used to annotate the assembled genomes. Multilocus sequence typing (MLST) software (http://www.github.com/tseemann/mlst) was applied to assign the sequence type of each isolate using the in-house database. Detection of resistance genes, plasmid replicons and virulence genes was conducted using ABRicate software (http://www.github.com/tseemann/abricate). All the sequence types from the clonal complex (CC) detected using the genome sequences were retrieved from the NCBI assembly database. Using strain RM1221 [51] as the reference genome, we applied two different protocols to conduct the multiple sequence alignment of the genomes in order to build the phylogenomic tree, and both delivered identical results. The first approach used Snippy to search for single nucleotide polymorphism (SNP) loci [52]. The second approach was conducted by Gubbins to produce the consensus sequence, and Mafft was used to make the multiple sequence alignment for the whole genome sequences [52]. The phylogenomic tree was built and projected with RAxML [53] and ITOL [54], respectively.\n 4.7. Ethical Approval All procedures performed in studies involving human participants were officially approved by the Xiacheng CDC at Hangzhou (No. 2019-05, 20190716), which was in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.", "In December 2018, a series of patients reported foodborne diseases in a high school in Hangzhou, the capital city of Zhejiang province in eastern China. Eighty-four students, in twelve classes from grade one to six, complained of symptoms of food poisoning. No meals were served at the school other than school lunches, which could be the potential source of this foodborne outbreak.\nWe defined a probable case as a patient with diarrhea, vomiting or other symptoms (abdominal pain, fever and so on) and a confirmed case as a patient with any symptoms and a confirmed laboratory diagnosis of C. jejuni.", "Local CDC microbiologists collected 43 fecal samples based on the Chinese local regulations, of which 27 were from sick students and 16 from canteen employees, as probable cases for microbiological investigation. Canteen food samples were disposed of by the head of school due to the concerns of further contamination and disease dissemination, so no foods were available in the current investigation.", "Real-time fluorescent PCR was used to detect norovirus, rotavirus, adenovirus, sapovirus and astrovirus according to a protocol reported earlier [36]. The WS271-2007 diagnostic criteria for infectious diarrhea protocol [37,38] was used for the detection of Salmonella, C. jejuni and Vibrio parahaemolyticus. The WS287-2008 [39] and WS289-2008 [40] protocols were used for the detection of Shigella and Vibrio cholerae, respectively. Briefly, fecal samples were added to an Eppendorf tube with sterile saline to prepare a stool suspension. Total genomic DNA, including bacterial and viral agents, was extracted and purified from the stool suspension using the QIAamp DNA mini Kit (Qiagen, Hilden, Germany, No: 51304), according to the manufacturer's recommended protocols. 
Real-time fluorescent PCR was performed at 42 °C for 1 h and 95 °C for 15 min, followed by 40 cycles of 94 °C for 60 s, 58 °C for 80 s, and 72 °C for 60 s, with a final extension at 72 °C for 7 min.", "The positive Campylobacter samples detected by the real-time fluorescent PCR were pre-enriched with Preston selective broth supplemented with 5% sterile, lysed sheep blood, Campylobacter growth supplement and selective supplement (Oxoid Ltd., Basingstoke, UK). Samples were incubated at 42 ℃ under microaerobic conditions (5% O2, 10% CO2, and 85% N2) for 12–24 h. Two hundred microliter drops of the pre-enrichment were applied to the 0.45-μm pore-size filter and left on the surface of a Columbia blood agar plate. These plates were further incubated at 37 ℃ under microaerobic conditions [41].", "PFGE molecular typing was performed according to the PFGE protocol for C. jejuni [42,43]. Briefly, restriction digestion was conducted using 40 U SmaI (Takara, Dalian, China), and fragments were run on a CHEF Mapper PFGE system (Bio-Rad Laboratories, Hercules, CA, USA) with SeaKem gold agarose (Lonza, Rockland, MD, USA) in 0.5×Tris-borate-EDTA. Bionumerics v6.6 software was used for the clustering analysis. Similarity greater than 95% was considered to indicate the same genetic group. The similarity between chromosomal fingerprints was scored using the Dice coefficient. The unweighted pair group method with arithmetic means (UPGMA), at the cut-off of 1.5% tolerance and 1.00% optimization, was used to obtain the dendrogram in the PFGE profile.", "The genomic DNA library was constructed using the Nextera XT DNA library construction kit (Illumina, USA, No: FC-131-1024), followed by genomic sequencing using the MiSeq Reagent Kit v2 300-cycle kit (Illumina, USA, No: MS-102-2002). High-throughput genome sequencing was accomplished on the Illumina MiSeq sequencing platform, as previously described [44,45,46]. The quality of sequencing and trimming was checked with the FastQC toolkit, while low-quality sequences and adapter sequences were removed with Trimmomatic [47]. The genome assembly was performed with SPAdes 4.0.1 for genomic scaffolds [48], using the "careful correction" option in order to reduce the number of mismatches in the final assembly, with k-mer values chosen automatically by SPAdes. QUAST [49] was used to evaluate the assembled genomes through basic statistics generation, including the total number of contigs, contig length, and N50. Prokka 1.14 [50], with the "default" settings, was used to annotate the assembled genomes. Multilocus sequence typing (MLST) software (http://www.github.com/tseemann/mlst) was applied to assign the sequence type of each isolate using the in-house database. Detection of resistance genes, plasmid replicons and virulence genes was conducted using ABRicate software (http://www.github.com/tseemann/abricate). All the sequence types from the clonal complex (CC) detected using the genome sequences were retrieved from the NCBI assembly database. Using strain RM1221 [51] as the reference genome, we applied two different protocols to conduct the multiple sequence alignment of the genomes in order to build the phylogenomic tree, and both delivered identical results. The first approach used Snippy to search for single nucleotide polymorphism (SNP) loci [52]. The second approach was conducted by Gubbins to produce the consensus sequence, and Mafft was used to make the multiple sequence alignment for the whole genome sequences [52]. 
The phylogenomic tree was built and visualized with RAxML [53] and ITOL [54], respectively.", "All procedures performed in studies involving human participants were officially approved by the Xiacheng CDC at Hangzhou (No. 2019-05, 20190716), in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.", "This analysis sheds light on the potential threat posed by C. jejuni infections. PFGE and NGS technologies provided reliable evidence that this outbreak was caused by C. jejuni ST2988. These results suggest that closer attention should be paid to the circulation of this rarely reported sequence type, and that advanced NGS technologies hold considerable promise for pathogen detection and foodborne disease tracking.\nTo our knowledge, this is the second C. jejuni outbreak described in China to date. Unfortunately, food samples were not included in this investigation; in the future, the collection and testing of food samples should be emphasized for a more comprehensive investigation. These data also support the need for authorities to implement systematic surveillance and compulsory notification of Campylobacter infections in humans as well as in different animals, which is essential for identifying and tracking sources of infection and for rationalizing effective control measures to ensure public health and safety." ]
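The assembly and typing workflow described in Section 4.6 is, in practice, a chain of command-line tools (FastQC/Trimmomatic for read QC, SPAdes for assembly, QUAST for assembly metrics, Prokka for annotation, mlst and ABRicate for typing and gene screening). A minimal Python sketch of that chain is given below; the sample name, read file paths and ABRicate database choices are hypothetical, and the flags may need adjusting to the tool versions actually used.

    import subprocess
    from pathlib import Path

    # Hypothetical paired-end reads for one outbreak isolate.
    sample = "CAM19-027"
    r1, r2 = Path(f"{sample}_R1.fastq.gz"), Path(f"{sample}_R2.fastq.gz")
    outdir = Path(sample)
    outdir.mkdir(exist_ok=True)

    def run(cmd):
        """Run one pipeline step and fail loudly if it errors."""
        print("+", " ".join(map(str, cmd)))
        subprocess.run(list(map(str, cmd)), check=True)

    # 1. Read QC and adapter/quality trimming (FastQC + Trimmomatic).
    run(["fastqc", r1, r2, "-o", outdir])
    trimmed = [outdir / f"{sample}_{x}" for x in
               ("R1.trim.fq.gz", "R1.unpaired.fq.gz", "R2.trim.fq.gz", "R2.unpaired.fq.gz")]
    run(["trimmomatic", "PE", r1, r2, *trimmed, "SLIDINGWINDOW:4:20", "MINLEN:36"])

    # 2. De novo assembly with SPAdes in --careful mode (k-mer values chosen automatically).
    run(["spades.py", "--careful", "-1", trimmed[0], "-2", trimmed[2], "-o", outdir / "spades"])
    contigs = outdir / "spades" / "contigs.fasta"

    # 3. Assembly metrics (number of contigs, contig lengths, N50) with QUAST.
    run(["quast.py", contigs, "-o", outdir / "quast"])

    # 4. Annotation with Prokka (default settings).
    run(["prokka", "--outdir", outdir / "prokka", "--prefix", sample, contigs])

    # 5. MLST assignment and resistance/virulence/plasmid screening.
    run(["mlst", contigs])                              # prints scheme, ST and allele calls
    for db in ("resfinder", "vfdb", "plasmidfinder"):   # database names assumed
        run(["abricate", "--db", db, contigs])

A loop over all fifteen isolates would produce the kind of per-isolate assembly statistics summarized in Table 1.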
[ "1. Introduction", "2. Results", "2.1. Causative Pathogen Scanning", "2.2. PFGE Profiling Studies", "2.3. Genomic Sequencing", "2.4. Genome Comparison and Phylogenomic Analysis ", "3. Discussion", "4. Material and Methods", "4.1. Epidemiological Investigation", "4.2. Samples Collection", "4.3. Pathogen Detection", "4.4. Isolation and Identification of Campylobacter spp.", "4.5. Pulsed Field Gel Electrophoresis (PFGE) Testing", "4.6. Genomic Sequencing and Bioinformatic Analysis", "4.7. Ethical Approval", "5. Conclusions" ]
[ "Campylobacter jejuni is a common foodborne pathogenic bacterium which causes gastroenteritis, and more severely, a neural damage disease in humans called Guillain-Barre syndrome [1]. Raw milk, water, and contaminated meat, particularly chicken are believed to be the main sources of C. jejuni human infections [2,3].\nC. jejuni is considered to be the leading cause of human gastroenteritis [4] and ranked as the second important cause for foodborne diseases in the U.S., with more than 1.5 million illness annually according to the Centers for Disease Control and Prevention (CDC), it has also been reported as one of the most commonly described pathogens in humans in the European Union foodborne disease surveillance network since 2005 [5,6]. Recently, there has been a surge in the global incidence of Campylobacter infections, and ongoing spread of human cases in North America, Europe, and Australia [7]. Though foodborne disease caused by C. jejuni has become an important public health concern, there is limited knowledge about its role in foodborne disease outbreaks in China. This knowledge gap could be due to Campylobacter infections not being subjected to obligatory reports and its surveillance being on a voluntary basis by local and regional laboratories.\nThe pulsed field gel electrophoresis (PFGE) has been widely used in outbreak investigations for tracking sources of infection and effectively controlling epidemics due to its good reproducibility, high resolution and stable results, and the ease of standardization [8]. Nowadays, next generation sequencing (NGS) technology is becoming popular, considering advantages of labor- and time-saving, high-throughput capacities, highly precise and abundance of genetic information available for extensive studies. As the sequencing cost continues to decrease, genomic epidemiology combined with NGS has been increasingly and widely applied to outbreak investigations [9,10]. The PFGE technology and other genotyping approaches, including multi-locus sequence typing (MLST), shows that Campylobacter is not a genetically monomorphic organism, but includes highly diverse assemblies with an array of different phenotypes [9,10,11]. Considering this complexity, there are sufficient genetic materials, which could be used to link a particular genotype with a certain animal host [2,12]. Nevertheless, few C. jejuni Chinese clinical isolates with genome sequence are available in the public genomic database. The aim of this study was to describe both the epidemiological investigation and genomic characterization of C. jejuni that was responsible for the outbreak in a high school in Hangzhou in December 2018 using PFGE and NGS technologies.", " 2.1. Causative Pathogen Scanning All forty-three samples were found negative for norovirus, rotavirus, adenovirus, sapovirus and astrovirus. Additionally, Salmonella, Shigella, Vibrio parahaemolyticus, and Vibrio cholerae were also not detected in all the examined patients. Fifteen strains of C. jejuni, from the fifteen sick students (Table 1), were identified by real-time fluorescent PCR and confirmed by the traditional microbiological approaches.\nAll forty-three samples were found negative for norovirus, rotavirus, adenovirus, sapovirus and astrovirus. Additionally, Salmonella, Shigella, Vibrio parahaemolyticus, and Vibrio cholerae were also not detected in all the examined patients. Fifteen strains of C. 
jejuni, from the fifteen sick students (Table 1), were identified by real-time fluorescent PCR and confirmed by the traditional microbiological approaches.\n 2.2. PFGE Profiling Studies The fifteen C. jejuni strains were classified into two types, PA1 (fourteen strains) with an overall 99.7% similarity, and PA2 (one strain). The similarity between PA1 and PA2 was 66.7% (Figure 1, Table 1).\nThe fifteen C. jejuni strains were classified into two types, PA1 (fourteen strains) with an overall 99.7% similarity, and PA2 (one strain). The similarity between PA1 and PA2 was 66.7% (Figure 1, Table 1).\n 2.3. Genomic Sequencing After conducting the whole genome sequencing and genomic assembly of the C. jejuni strains, the number of contigs was calculated to be between 12 and 48 contigs. Genome sequencing, assembly results and accession number are summarized in Table 1. The average genome size of draft assemblies was 1,650,982. Furthermore, the average N50 was 255,161 with 30.33% as average of GC%.\nThe assembly results were scanned and identified with their MLST profiles. Fourteen strains of C. jejuni belonged to ST2988 and only one strain belonged to the ST8149 type. Further analysis showed that all the strains harbored blaOXA-61 which encodes resistance to β-lactamases, and tet(O) which confers resistance to tetracyclines. Additionally, a chromosomal mutation in gyrA (T86I), which might be responsible for the resistance to fluoroquinolones, was detected in all fifteen strains. No plasmid replicons were detected in any isolate. Furthermore, Figure 2 shows that all isolates harbored flagellar, motility, chemotaxis and cytolethal toxin proteins. SAMN12388815 isolate harbored gmhP, porA proteins which play a vital role in bacterial virulence by enhancing the adhesion and invasion properties. Cysc, Cj1416c, Cj1417c, Cj1419c, Cj1420c proteins, which are involved in capsule polysaccharide biosynthesis, were only detected in one strain (SAMN12388815). We also identified that both kpsT and kpsC proteins, which were involved in capsule polysaccharide biosynthesis, were only in three strains (SAMN12388802, SAMN12388803, SAMN12388804, in Figure 2).\nAfter conducting the whole genome sequencing and genomic assembly of the C. jejuni strains, the number of contigs was calculated to be between 12 and 48 contigs. Genome sequencing, assembly results and accession number are summarized in Table 1. The average genome size of draft assemblies was 1,650,982. Furthermore, the average N50 was 255,161 with 30.33% as average of GC%.\nThe assembly results were scanned and identified with their MLST profiles. Fourteen strains of C. jejuni belonged to ST2988 and only one strain belonged to the ST8149 type. Further analysis showed that all the strains harbored blaOXA-61 which encodes resistance to β-lactamases, and tet(O) which confers resistance to tetracyclines. Additionally, a chromosomal mutation in gyrA (T86I), which might be responsible for the resistance to fluoroquinolones, was detected in all fifteen strains. No plasmid replicons were detected in any isolate. Furthermore, Figure 2 shows that all isolates harbored flagellar, motility, chemotaxis and cytolethal toxin proteins. SAMN12388815 isolate harbored gmhP, porA proteins which play a vital role in bacterial virulence by enhancing the adhesion and invasion properties. Cysc, Cj1416c, Cj1417c, Cj1419c, Cj1420c proteins, which are involved in capsule polysaccharide biosynthesis, were only detected in one strain (SAMN12388815). 
We also identified that both kpsT and kpsC proteins, which were involved in capsule polysaccharide biosynthesis, were only in three strains (SAMN12388802, SAMN12388803, SAMN12388804, in Figure 2).\n 2.4. Genome Comparison and Phylogenomic Analysis The phylogenomic tree shows that all fourteen case-patient isolates from this particular outbreak, which belonged to ST2988, are closely related and clustered together in a single clade. The other individual strain belonged to ST8149 (Figure 1). Importantly, the genomes of these fourteen ST2988 isolates were differed in (< 70) core SNPs, and showed (> 99%) a high similarity (Table S1).\nThe ST2988 belongs to CC354 that includes 199 identified sequence types in (http://pubmlst.org/campylobacter/). Genomic data of all CC354 strains in NCBI database were extracted and 303 genomes were obtained (Table S2), including 27 sequence types. With ST354 strain RM1221 (GCA_000011865.1) as the reference genome, the SNP locus and phylogenetic tree between 302 strains in the public database and 14 ST2988 isolates from this outbreak were obtained (Figure 3). ST354 is the most predominant sequence type in CC354 (Figure S1). Most CC345 isolates were isolated from humans and food samples, and few isolates were retrieved from unknown sources (Figure 3, and Figure S1). Isolates from chicken-origin were identified to have the highest prevalence among the food isolates (Figure 3, and Table S2). The CC354 strains in the public databases are mainly from the US and the UK (Figure S1), while strains from other countries are scattered. A small difference in distance between phylogenetic branches of CC345 isolates was identified in Figure 3 with a scale bar at 0.001, indicating a very close genetic relationship within the sequence type. We also observed a close relationship with a scale bar at 0.001 among these 14 strains linked with the outbreak in this study, which were also linked with the only available genome (SAMN10485936) in the NCBI database (Figure 4).\nThe phylogenomic tree shows that all fourteen case-patient isolates from this particular outbreak, which belonged to ST2988, are closely related and clustered together in a single clade. The other individual strain belonged to ST8149 (Figure 1). Importantly, the genomes of these fourteen ST2988 isolates were differed in (< 70) core SNPs, and showed (> 99%) a high similarity (Table S1).\nThe ST2988 belongs to CC354 that includes 199 identified sequence types in (http://pubmlst.org/campylobacter/). Genomic data of all CC354 strains in NCBI database were extracted and 303 genomes were obtained (Table S2), including 27 sequence types. With ST354 strain RM1221 (GCA_000011865.1) as the reference genome, the SNP locus and phylogenetic tree between 302 strains in the public database and 14 ST2988 isolates from this outbreak were obtained (Figure 3). ST354 is the most predominant sequence type in CC354 (Figure S1). Most CC345 isolates were isolated from humans and food samples, and few isolates were retrieved from unknown sources (Figure 3, and Figure S1). Isolates from chicken-origin were identified to have the highest prevalence among the food isolates (Figure 3, and Table S2). The CC354 strains in the public databases are mainly from the US and the UK (Figure S1), while strains from other countries are scattered. A small difference in distance between phylogenetic branches of CC345 isolates was identified in Figure 3 with a scale bar at 0.001, indicating a very close genetic relationship within the sequence type. 
We also observed a close relationship with a scale bar at 0.001 among these 14 strains linked with the outbreak in this study, which were also linked with the only available genome (SAMN10485936) in the NCBI database (Figure 4).", "All forty-three samples were found negative for norovirus, rotavirus, adenovirus, sapovirus and astrovirus. Additionally, Salmonella, Shigella, Vibrio parahaemolyticus, and Vibrio cholerae were also not detected in all the examined patients. Fifteen strains of C. jejuni, from the fifteen sick students (Table 1), were identified by real-time fluorescent PCR and confirmed by the traditional microbiological approaches.", "The fifteen C. jejuni strains were classified into two types, PA1 (fourteen strains) with an overall 99.7% similarity, and PA2 (one strain). The similarity between PA1 and PA2 was 66.7% (Figure 1, Table 1).", "After conducting the whole genome sequencing and genomic assembly of the C. jejuni strains, the number of contigs was calculated to be between 12 and 48 contigs. Genome sequencing, assembly results and accession number are summarized in Table 1. The average genome size of draft assemblies was 1,650,982. Furthermore, the average N50 was 255,161 with 30.33% as average of GC%.\nThe assembly results were scanned and identified with their MLST profiles. Fourteen strains of C. jejuni belonged to ST2988 and only one strain belonged to the ST8149 type. Further analysis showed that all the strains harbored blaOXA-61 which encodes resistance to β-lactamases, and tet(O) which confers resistance to tetracyclines. Additionally, a chromosomal mutation in gyrA (T86I), which might be responsible for the resistance to fluoroquinolones, was detected in all fifteen strains. No plasmid replicons were detected in any isolate. Furthermore, Figure 2 shows that all isolates harbored flagellar, motility, chemotaxis and cytolethal toxin proteins. SAMN12388815 isolate harbored gmhP, porA proteins which play a vital role in bacterial virulence by enhancing the adhesion and invasion properties. Cysc, Cj1416c, Cj1417c, Cj1419c, Cj1420c proteins, which are involved in capsule polysaccharide biosynthesis, were only detected in one strain (SAMN12388815). We also identified that both kpsT and kpsC proteins, which were involved in capsule polysaccharide biosynthesis, were only in three strains (SAMN12388802, SAMN12388803, SAMN12388804, in Figure 2).", "The phylogenomic tree shows that all fourteen case-patient isolates from this particular outbreak, which belonged to ST2988, are closely related and clustered together in a single clade. The other individual strain belonged to ST8149 (Figure 1). Importantly, the genomes of these fourteen ST2988 isolates were differed in (< 70) core SNPs, and showed (> 99%) a high similarity (Table S1).\nThe ST2988 belongs to CC354 that includes 199 identified sequence types in (http://pubmlst.org/campylobacter/). Genomic data of all CC354 strains in NCBI database were extracted and 303 genomes were obtained (Table S2), including 27 sequence types. With ST354 strain RM1221 (GCA_000011865.1) as the reference genome, the SNP locus and phylogenetic tree between 302 strains in the public database and 14 ST2988 isolates from this outbreak were obtained (Figure 3). ST354 is the most predominant sequence type in CC354 (Figure S1). Most CC345 isolates were isolated from humans and food samples, and few isolates were retrieved from unknown sources (Figure 3, and Figure S1). 
Isolates from chicken-origin were identified to have the highest prevalence among the food isolates (Figure 3, and Table S2). The CC354 strains in the public databases are mainly from the US and the UK (Figure S1), while strains from other countries are scattered. A small difference in distance between phylogenetic branches of CC345 isolates was identified in Figure 3 with a scale bar at 0.001, indicating a very close genetic relationship within the sequence type. We also observed a close relationship with a scale bar at 0.001 among these 14 strains linked with the outbreak in this study, which were also linked with the only available genome (SAMN10485936) in the NCBI database (Figure 4).", "Recently, the rate of Campylobacter infections has rapidly increased due to the expansion of the consumption of raw or undercooked chicken, especially in China [13]. In December 2018, a serious case of foodborne disease was reported in a high school, where eighty-four students in twelve classes from grade one to six had diarrhea, vomiting, fever and other foodborne disease-associated symptoms, in Hangzhou. To identify the causative agent of this outbreak, 43 fecal samples were collected from patient students and canteen workers. Nucleic acid of suspected viral or bacterial samples were extracted for laboratory investigation. None of these samples were positive for the suspected viruses. Fifteen strains of C. jejuni were detected and isolated from the samples of fifteen sick students. To the best of our knowledge, this is the second foodborne outbreak of C. jejuni described in China to date. The previous outbreak led to 36 cases of Campylobacter infections that occurred in a high school in Beijing after a trip to another province in Southern China [14].\nIn order to provide more reliable evidence for the outbreak origin, we conducted PFGE profiling and genomic analysis for these fifteen strains of C. jejuni, which is essential for evaluating the clinical isolates from the outbreak and related cases [15]. The results showed that these fourteen strains belonged to the same pattern (PA-1), while the one other strain which had a similarity of 66.7%, belonged to the other pattern (PA-2). By using genomic data for MLST or genotype scanning, it was found that 14 strains were of ST2988 type and one of ST8149 type, which was consistent with PFGE results. These results suggested that the unique ST2988 C. jejuni isolate was responsible for this foodborne outbreak. Scrutiny of the PFGE pattern (PA-1) exhibited an inherent similarity, with some changes in three isolates (CAM19-027, CAM19-028, CAM19-037) belonging to the same MLST (Figure 1), which hints towards a recent evolutionary deviation from a common ancestor. Although these isolates had a slightly deviant PFGE pattern, it was not considered significant enough to exclude them from this outbreak, as the variations in the PFGE patterns can result from a single-nucleotide polymorphism in a restriction site [16]. Thus, a clonal relationship may be found even between strains with dissimilar PFGE profiles. Furthermore, a PFGE profile can change after only a single passage through the host by genomic rearrangement [17]. Such changes may occur at relatively high frequency by the discriminatory power of PFGE, compared with MLST, and do not exclude our conclusion regarding the source of infection [18], considering that genotyping results are always in the context of other results from the outbreak investigation.\nThere are limited epidemiological studies reported on C. 
jejuni ST2988 in China. This particular sequence type has only been reported in three (0.25%) strains from poultry in Jiangsu province, a province close to Zhejiang province in 2014 [19]. Interestingly, there are only two strains belonging to ST2988 from the unknown sources: One strain was in the UK, and the other strain was from the US, as described in the Campylobacter PubMLST database (http://pubmlst.org/campylobacter/), an additional strain GCA_004825105.1 (PNUSAC006969, Biosample: SAMN10485936, in October 2018 from a patient aged 40–49) was also described in the NCBI database. As shown in Figure 4, we found a close relation with the 14 strains isolated in this study and only one available genome (SAMN10485936) in the NCBI database with a scale bar at 0.0001.\nThis ST2988 belonged to CC354, which included 2707 isolates submitted to PubMLST, with a total of 199 different sequence types (http://pubmlst.org/campylobacter/), although only three isolates of C. jejuni ST2988 were found in the public database. The CC354 strains in the public databases are mainly from the US and the UK (Figure 3), while the submitted isolates in other countries are scattered. However, CC354 is frequently associated with human clinical infections (47.9%) and poultry (30.7%) (http://pubmlst.org/campylobacter/), it has also been indicated from wild birds in Spain [20], ducks in South Korea [21] and from cattle and pig carcasses in Poland [22]. Large surveillance data on C. jejuni isolates from humans as well as various other animals could provide additional knowledge of disease ecology and host reservoirs, which might aid in source attribution for this particular outbreak.\nGenome MLST types of a total of 303 strains of high-quality CC354 were retrieved from the NCBI assembly public database and were used to conduct the comparative genomics analysis. We found that there is very limited genetic difference in the distance between the branches of the evolutionary tree of CC345 isolate genomes, indicating an obvious consistency with the sequence type results. This information demonstrates that MLST genotyping based on the housekeeping gene is correlated with their genomic phylogeny.\nThe mechanisms by which Campylobacter species cause diarrhea, and knowledge for the following sequelae are lacking [23]. The genes associated with bacterial motility, invasion and adhesion to epithelial cells, which are critical in the development of Campylobacter infection [24,25], were detected in all isolates. These findings confirmed the evidence that flagellar and adhesion genes are highly conserved among C. jejuni, as previously reported [23,26]. Furthermore, virulence marker determinants included cdtA, cdtB, and cdtC cytotoxin genes, which play an important role in diarrhea by interfering with the division and differentiation of the intestinal crypt cells, were also identified in all examined isolates. As it has been shown in previous investigations, all three subunits are required for full toxin activity [23].\nCampylobacter is a major foodborne pathogen, and its resistance to clinically vital antibiotics is posing a significant health concern [4,27,28]. Particularly, rising fluoroquinolones and tetracyclines resistance in Campylobacter have been reported in many countries [4]. Fluoroquinolones are considered to be the rational drug of choice in treating human campylobacteriosis [12,29], but in certain cases, tetracyclines are used to treat systemic infection caused by Campylobacter [12,27]. 
Genomic analysis in this study indicated that all the tested isolates harbored tet(O) which confer resistance to tetracyclines, and a chromosomal mutation in gyrA (T86I) which confer resistant to fluoroquinolones. Resistance to these two antibiotics were also the most frequently reported in Campylobacter infections in China [30,31,32]. More than 90% of the Campylobacter spp. isolates have been reported to be resistant to quinolones and tetracycline in Shanghai, also in eastern China [33]. Furthermore, C. jejuni strains obtained from retail chicken meat samples have been described with high resistance to ciprofloxacin and tetracycline in central China [34]. As antimicrobial resistance tenders a significance alarm [35], substantial concern should be given to the antimicrobial resistance in C. jejuni. A long-term monitoring system is needed for improved control of infections, epidemics and antimicrobial resistance to crucial antimicrobials for bacterial agents, including C. jejuni.", " 4.1. Epidemiological Investigation In December 2018, a series of patients reported foodborne diseases in a high school in Hangzhou, the capital city of Zhejiang province in eastern China. Eighty-four students, in twelve classes from grade one to six, complained of symptoms of food poisoning. No meals were served at the school other than school lunches, which could be the potential source of this foodborne outbreak.\nWe defined a probable case as a patient with diarrhea, vomiting or other symptoms (abdominal pain, fever and so on) and a confirmed case as a patient with any symptoms and a confirmed laboratory diagnosis of C. jejuni.\nIn December 2018, a series of patients reported foodborne diseases in a high school in Hangzhou, the capital city of Zhejiang province in eastern China. Eighty-four students, in twelve classes from grade one to six, complained of symptoms of food poisoning. No meals were served at the school other than school lunches, which could be the potential source of this foodborne outbreak.\nWe defined a probable case as a patient with diarrhea, vomiting or other symptoms (abdominal pain, fever and so on) and a confirmed case as a patient with any symptoms and a confirmed laboratory diagnosis of C. jejuni.\n 4.2. Samples Collection Local CDC microbiologists collected 43 fecal samples based on the Chinese local regulations, of which 27 were from sick students and 16 from canteen employees, as probable cases for microbiological investigation. Canteen food samples were disposed of by the head of school due to the concerns of further contamination and disease dissemination, so no foods were available in the current investigation.\nLocal CDC microbiologists collected 43 fecal samples based on the Chinese local regulations, of which 27 were from sick students and 16 from canteen employees, as probable cases for microbiological investigation. Canteen food samples were disposed of by the head of school due to the concerns of further contamination and disease dissemination, so no foods were available in the current investigation.\n 4.3. Pathogen Detection Real-time fluorescent PCR was used to detect norovirus, rotavirus, adenovirus, sapovirus and astrovirus according to a protocol reported earlier [36]. WS271-2007 diagnostic criteria for infectious diarrhea protocol [37,38] was used for the detection of Salmonella, C. jejuni and Vibrio parahaemolyticus. WS287-2008 [39] and WS289-2008 [40] protocols were used for detection of Shigella and Vibrio cholerae, respectively. 
Briefly, fecal samples were added to an Eppendorf tube with sterile saline to prepare a stool suspension. Total genomic DNA, including bacterial and viral agents, was extracted and purified from the stool suspension using QIAamp DNA mini Kit (Qiagen, Hilden, Germany, No: 51304), according to the manufacturer’s recommended protocols. Real-time fluorescent PCR was performed at 42 °C for 1 h and 95 °C for 15 min, followed by 40 cycles of 94 °C for 60 s, 58 °C for 80 s, and 72 °C for 60 s, with a final extension at 72 °C for 7 min.\nReal-time fluorescent PCR was used to detect norovirus, rotavirus, adenovirus, sapovirus and astrovirus according to a protocol reported earlier [36]. WS271-2007 diagnostic criteria for infectious diarrhea protocol [37,38] was used for the detection of Salmonella, C. jejuni and Vibrio parahaemolyticus. WS287-2008 [39] and WS289-2008 [40] protocols were used for detection of Shigella and Vibrio cholerae, respectively. Briefly, fecal samples were added to an Eppendorf tube with sterile saline to prepare a stool suspension. Total genomic DNA, including bacterial and viral agents, was extracted and purified from the stool suspension using QIAamp DNA mini Kit (Qiagen, Hilden, Germany, No: 51304), according to the manufacturer’s recommended protocols. Real-time fluorescent PCR was performed at 42 °C for 1 h and 95 °C for 15 min, followed by 40 cycles of 94 °C for 60 s, 58 °C for 80 s, and 72 °C for 60 s, with a final extension at 72 °C for 7 min.\n 4.4. Isolation and Identification of Campylobacter spp. The positive Campylobacter samples detected by the real-time fluorescent PCR were pre-enriched with Preston selective broth supplemented with 5% sterile, lysed sheep blood, Campylobacter growth supplement and selective supplement (Oxoid Ltd., Basingstoke, UK). Samples were incubated at 42 ℃ under microaerobic conditions (5% O2, 10% CO2, and 85% N2) for 12–24 h. Two hundred microliter drops of the pre-enrichment were applied to the 0.45-μm pore-size filter and left on the surface of a Columbia blood agar plate. These plates were further incubated at 37 ℃ under microaerobic conditions [41].\nThe positive Campylobacter samples detected by the real-time fluorescent PCR were pre-enriched with Preston selective broth supplemented with 5% sterile, lysed sheep blood, Campylobacter growth supplement and selective supplement (Oxoid Ltd., Basingstoke, UK). Samples were incubated at 42 ℃ under microaerobic conditions (5% O2, 10% CO2, and 85% N2) for 12–24 h. Two hundred microliter drops of the pre-enrichment were applied to the 0.45-μm pore-size filter and left on the surface of a Columbia blood agar plate. These plates were further incubated at 37 ℃ under microaerobic conditions [41].\n 4.5. Pulsed Field Gel Electrophoresis (PFGE) Testing PFGE molecular typing was performed according to the PFGE protocol for C. jejuni [42,43]. Briefly, restriction digestion was conducted by using 40 U SmaI (Takara, Dalian, China), and run on a CHEF Mapper PFGE system (Bio-Rad Laboratories, Hercules, Canada) for SeaKem gold agarose (Lonza, Rockland, MD, USA) in 0.5×Tris-borate-EDTA. Bionumerics v6.6 software was used for the clustering analysis. Similarity greater than 95% was considered as the same genetic group. The similarity between chromosomal fingerprints was scored using the Dice coefficient. 
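The Dice scoring just mentioned, together with the UPGMA clustering described in the next sentence, can be reproduced outside BioNumerics as a rough cross-check. The sketch below uses hypothetical band presence/absence patterns for three isolates; SciPy's "average" linkage is the UPGMA step.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import squareform

    # Hypothetical SmaI band patterns (1 = band present in that size class).
    profiles = {
        "CAM19-027": np.array([1, 1, 0, 1, 1, 0, 1, 1]),
        "CAM19-028": np.array([1, 1, 0, 1, 1, 0, 1, 0]),
        "CAM19-040": np.array([0, 1, 1, 0, 1, 1, 0, 1]),
    }

    def dice_similarity(a, b):
        """Dice coefficient: 2 * shared bands / (bands in a + bands in b)."""
        shared = np.sum((a == 1) & (b == 1))
        return 2.0 * shared / (a.sum() + b.sum())

    names = list(profiles)
    n = len(names)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = 1.0 - dice_similarity(profiles[names[i]], profiles[names[j]])
            dist[i, j] = dist[j, i] = d

    # UPGMA = average-linkage hierarchical clustering on the Dice distances.
    tree = linkage(squareform(dist), method="average")
    dendrogram(tree, labels=names, no_plot=True)  # set no_plot=False to draw the dendrogram

    for i in range(n):
        for j in range(i + 1, n):
            print(names[i], "vs", names[j], "Dice similarity:", round(1.0 - dist[i, j], 3))

With the 95% cut-off applied above, isolate pairs whose Dice similarity exceeds 0.95 would be placed in the same genetic group.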
The unweighted pair group method, with arithmetic means (UPGMA) at the cut-off of 1.5% tolerance and 1.00% optimization, was used to obtain the dendrogram in the PFGE profile.\nPFGE molecular typing was performed according to the PFGE protocol for C. jejuni [42,43]. Briefly, restriction digestion was conducted by using 40 U SmaI (Takara, Dalian, China), and run on a CHEF Mapper PFGE system (Bio-Rad Laboratories, Hercules, Canada) for SeaKem gold agarose (Lonza, Rockland, MD, USA) in 0.5×Tris-borate-EDTA. Bionumerics v6.6 software was used for the clustering analysis. Similarity greater than 95% was considered as the same genetic group. The similarity between chromosomal fingerprints was scored using the Dice coefficient. The unweighted pair group method, with arithmetic means (UPGMA) at the cut-off of 1.5% tolerance and 1.00% optimization, was used to obtain the dendrogram in the PFGE profile.\n 4.6. Genomic Sequencing and Bioinformatic Analysis The Genomic DNA library was constructed using Nextera XT DNA library construction kit (Illumina, USA, No: FC-131-1024); followed by genomic sequencing using Miseq Reagent Kit v2 300cycle kit (Illumina, USA, No: MS-102-2002). High-throughput genome sequencing was accomplished by the Illumina Miseq sequencing platform, as previously described [44,45,46]. The quality of sequencing and trimming was checked with FastQC toolkit, while low-quality sequences and joint sequences were removed with trimmomatic [47]. The genome assembly was performed with SPAdes 4.0.1 for genomic scaffolds [48], using the “careful correction” option in order to reduce the number of mismatches in the final assembly with automatically choosen k-mer values by SPAdes. QUAST [49] was used to evaluate the assembled genomes through basic statistics generation, including the total number of contigs, contig length, and N50. Prokka 1.14 [50], with the “default” settings was used to annotate the assembled genomes. Multilocus sequence typing (MLST) software (http://www.github.com/tseemann/mlst) was applied for the sequence type of the isolates for the in-house database. Detection of resistance genes, plasmids replicons and virulence genes were conducted using ABRicate software (http://www.github.com/tseemann/abricate). All the sequence types from a clonal complex (CC) detected by using the genome sequence were retrieved from the NCBI assembly database. Considering RM1221 strain [51] as a reference genome, we used two different protocols to conduct the multiple sequence alignment of the genomes in order to build the phylogenomic tree, and both of them delivered the identical results. The first approach was performed using Snippy to search for single nucleotide polymorphism (SNP) locus [52]. The second approach was conducted by Gubbins to produce the consensus sequence, and Mafft was used to make the multiple sequence alignment for the whole genome sequences [52]. The phylogenomic tree was built and projected with RAxML [53] and ITOL [54], respectively.\nThe Genomic DNA library was constructed using Nextera XT DNA library construction kit (Illumina, USA, No: FC-131-1024); followed by genomic sequencing using Miseq Reagent Kit v2 300cycle kit (Illumina, USA, No: MS-102-2002). High-throughput genome sequencing was accomplished by the Illumina Miseq sequencing platform, as previously described [44,45,46]. The quality of sequencing and trimming was checked with FastQC toolkit, while low-quality sequences and joint sequences were removed with trimmomatic [47]. 
The genome assembly was performed with SPAdes 4.0.1 for genomic scaffolds [48], using the “careful correction” option in order to reduce the number of mismatches in the final assembly with automatically choosen k-mer values by SPAdes. QUAST [49] was used to evaluate the assembled genomes through basic statistics generation, including the total number of contigs, contig length, and N50. Prokka 1.14 [50], with the “default” settings was used to annotate the assembled genomes. Multilocus sequence typing (MLST) software (http://www.github.com/tseemann/mlst) was applied for the sequence type of the isolates for the in-house database. Detection of resistance genes, plasmids replicons and virulence genes were conducted using ABRicate software (http://www.github.com/tseemann/abricate). All the sequence types from a clonal complex (CC) detected by using the genome sequence were retrieved from the NCBI assembly database. Considering RM1221 strain [51] as a reference genome, we used two different protocols to conduct the multiple sequence alignment of the genomes in order to build the phylogenomic tree, and both of them delivered the identical results. The first approach was performed using Snippy to search for single nucleotide polymorphism (SNP) locus [52]. The second approach was conducted by Gubbins to produce the consensus sequence, and Mafft was used to make the multiple sequence alignment for the whole genome sequences [52]. The phylogenomic tree was built and projected with RAxML [53] and ITOL [54], respectively.\n 4.7. Ethical Approval All procedures performed in studies involving human participants were officially approved by the Xiacheng CDC at Hangzhou (No. 2019-05, 20190716), which was in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.\nAll procedures performed in studies involving human participants were officially approved by the Xiacheng CDC at Hangzhou (No. 2019-05, 20190716), which was in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.", "In December 2018, a series of patients reported foodborne diseases in a high school in Hangzhou, the capital city of Zhejiang province in eastern China. Eighty-four students, in twelve classes from grade one to six, complained of symptoms of food poisoning. No meals were served at the school other than school lunches, which could be the potential source of this foodborne outbreak.\nWe defined a probable case as a patient with diarrhea, vomiting or other symptoms (abdominal pain, fever and so on) and a confirmed case as a patient with any symptoms and a confirmed laboratory diagnosis of C. jejuni.", "Local CDC microbiologists collected 43 fecal samples based on the Chinese local regulations, of which 27 were from sick students and 16 from canteen employees, as probable cases for microbiological investigation. Canteen food samples were disposed of by the head of school due to the concerns of further contamination and disease dissemination, so no foods were available in the current investigation.", "Real-time fluorescent PCR was used to detect norovirus, rotavirus, adenovirus, sapovirus and astrovirus according to a protocol reported earlier [36]. WS271-2007 diagnostic criteria for infectious diarrhea protocol [37,38] was used for the detection of Salmonella, C. jejuni and Vibrio parahaemolyticus. 
WS287-2008 [39] and WS289-2008 [40] protocols were used for detection of Shigella and Vibrio cholerae, respectively. Briefly, fecal samples were added to an Eppendorf tube with sterile saline to prepare a stool suspension. Total genomic DNA, including bacterial and viral agents, was extracted and purified from the stool suspension using QIAamp DNA mini Kit (Qiagen, Hilden, Germany, No: 51304), according to the manufacturer’s recommended protocols. Real-time fluorescent PCR was performed at 42 °C for 1 h and 95 °C for 15 min, followed by 40 cycles of 94 °C for 60 s, 58 °C for 80 s, and 72 °C for 60 s, with a final extension at 72 °C for 7 min.", "The positive Campylobacter samples detected by the real-time fluorescent PCR were pre-enriched with Preston selective broth supplemented with 5% sterile, lysed sheep blood, Campylobacter growth supplement and selective supplement (Oxoid Ltd., Basingstoke, UK). Samples were incubated at 42 ℃ under microaerobic conditions (5% O2, 10% CO2, and 85% N2) for 12–24 h. Two hundred microliter drops of the pre-enrichment were applied to the 0.45-μm pore-size filter and left on the surface of a Columbia blood agar plate. These plates were further incubated at 37 ℃ under microaerobic conditions [41].", "PFGE molecular typing was performed according to the PFGE protocol for C. jejuni [42,43]. Briefly, restriction digestion was conducted by using 40 U SmaI (Takara, Dalian, China), and run on a CHEF Mapper PFGE system (Bio-Rad Laboratories, Hercules, Canada) for SeaKem gold agarose (Lonza, Rockland, MD, USA) in 0.5×Tris-borate-EDTA. Bionumerics v6.6 software was used for the clustering analysis. Similarity greater than 95% was considered as the same genetic group. The similarity between chromosomal fingerprints was scored using the Dice coefficient. The unweighted pair group method, with arithmetic means (UPGMA) at the cut-off of 1.5% tolerance and 1.00% optimization, was used to obtain the dendrogram in the PFGE profile.", "The Genomic DNA library was constructed using Nextera XT DNA library construction kit (Illumina, USA, No: FC-131-1024); followed by genomic sequencing using Miseq Reagent Kit v2 300cycle kit (Illumina, USA, No: MS-102-2002). High-throughput genome sequencing was accomplished by the Illumina Miseq sequencing platform, as previously described [44,45,46]. The quality of sequencing and trimming was checked with FastQC toolkit, while low-quality sequences and joint sequences were removed with trimmomatic [47]. The genome assembly was performed with SPAdes 4.0.1 for genomic scaffolds [48], using the “careful correction” option in order to reduce the number of mismatches in the final assembly with automatically choosen k-mer values by SPAdes. QUAST [49] was used to evaluate the assembled genomes through basic statistics generation, including the total number of contigs, contig length, and N50. Prokka 1.14 [50], with the “default” settings was used to annotate the assembled genomes. Multilocus sequence typing (MLST) software (http://www.github.com/tseemann/mlst) was applied for the sequence type of the isolates for the in-house database. Detection of resistance genes, plasmids replicons and virulence genes were conducted using ABRicate software (http://www.github.com/tseemann/abricate). All the sequence types from a clonal complex (CC) detected by using the genome sequence were retrieved from the NCBI assembly database. 
Considering RM1221 strain [51] as a reference genome, we used two different protocols to conduct the multiple sequence alignment of the genomes in order to build the phylogenomic tree, and both of them delivered the identical results. The first approach was performed using Snippy to search for single nucleotide polymorphism (SNP) locus [52]. The second approach was conducted by Gubbins to produce the consensus sequence, and Mafft was used to make the multiple sequence alignment for the whole genome sequences [52]. The phylogenomic tree was built and projected with RAxML [53] and ITOL [54], respectively.", "All procedures performed in studies involving human participants were officially approved by the Xiacheng CDC at Hangzhou (No. 2019-05, 20190716), which was in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.", "This analysis sheds light on the possible menace of C. jejuni infections. PFGE and NGS technologies provided reliable evidence for the identification of the pathogens for this outbreak, caused by C. jejuni ST2988. These results suggest that enhanced concerns should be given to the circulation of this rarely reported sequence type. It is expected that the advanced NGS technologies will be promising in pathogen detection and foodborne disease tracking.\nTo our knowledge, this is the second C. jejuni outbreak described in China to date. Unfortunately, in this event, food samples were not included in the investigation. In the future, the collection and testing of food samples should be emphasized for a more comprehensive investigation. These data also endorse that authorities need to implement systematic surveillance and compulsory notification for Campylobacter infections from humans as well as different animals, which is essential for the identification and tracking of the source of infection and the rationalization of effective control measures to ensure public health and safety." ]
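The core-SNP comparison reported in Section 2.4 (fewer than 70 core SNPs among the fourteen ST2988 genomes) reduces to counting pairwise differences across a core-genome alignment such as the one produced by the Snippy step in Section 4.6. A small, self-contained counting sketch follows; the alignment file name is assumed (snippy-core writes core.aln/core.full.aln by default), and gapped or ambiguous positions are simply skipped.

    from itertools import combinations

    def read_fasta(path):
        """Parse a multi-FASTA alignment into {name: sequence}."""
        seqs, name, chunks = {}, None, []
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line.startswith(">"):
                    if name is not None:
                        seqs[name] = "".join(chunks).upper()
                    name, chunks = line[1:].split()[0], []
                elif line:
                    chunks.append(line)
        if name is not None:
            seqs[name] = "".join(chunks).upper()
        return seqs

    def snp_distance(a, b, valid="ACGT"):
        """Count aligned positions where two sequences carry different unambiguous bases."""
        return sum(1 for x, y in zip(a, b) if x != y and x in valid and y in valid)

    aln = read_fasta("core.full.aln")   # assumed snippy-core output name
    for s1, s2 in combinations(sorted(aln), 2):
        print(s1, s2, snp_distance(aln[s1], aln[s2]))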
[ "intro", "results", null, null, null, null, "discussion", null, null, null, null, null, null, null, null, null ]
[ "\nCampylobacter jejuni\n", "foodborne outbreak", "genomic investigation", "pulse field gel electrophoresis", "ST2988" ]
1. Introduction: Campylobacter jejuni is a common foodborne pathogenic bacterium which causes gastroenteritis, and more severely, a neural damage disease in humans called Guillain-Barre syndrome [1]. Raw milk, water, and contaminated meat, particularly chicken are believed to be the main sources of C. jejuni human infections [2,3]. C. jejuni is considered to be the leading cause of human gastroenteritis [4] and ranked as the second important cause for foodborne diseases in the U.S., with more than 1.5 million illness annually according to the Centers for Disease Control and Prevention (CDC), it has also been reported as one of the most commonly described pathogens in humans in the European Union foodborne disease surveillance network since 2005 [5,6]. Recently, there has been a surge in the global incidence of Campylobacter infections, and ongoing spread of human cases in North America, Europe, and Australia [7]. Though foodborne disease caused by C. jejuni has become an important public health concern, there is limited knowledge about its role in foodborne disease outbreaks in China. This knowledge gap could be due to Campylobacter infections not being subjected to obligatory reports and its surveillance being on a voluntary basis by local and regional laboratories. The pulsed field gel electrophoresis (PFGE) has been widely used in outbreak investigations for tracking sources of infection and effectively controlling epidemics due to its good reproducibility, high resolution and stable results, and the ease of standardization [8]. Nowadays, next generation sequencing (NGS) technology is becoming popular, considering advantages of labor- and time-saving, high-throughput capacities, highly precise and abundance of genetic information available for extensive studies. As the sequencing cost continues to decrease, genomic epidemiology combined with NGS has been increasingly and widely applied to outbreak investigations [9,10]. The PFGE technology and other genotyping approaches, including multi-locus sequence typing (MLST), shows that Campylobacter is not a genetically monomorphic organism, but includes highly diverse assemblies with an array of different phenotypes [9,10,11]. Considering this complexity, there are sufficient genetic materials, which could be used to link a particular genotype with a certain animal host [2,12]. Nevertheless, few C. jejuni Chinese clinical isolates with genome sequence are available in the public genomic database. The aim of this study was to describe both the epidemiological investigation and genomic characterization of C. jejuni that was responsible for the outbreak in a high school in Hangzhou in December 2018 using PFGE and NGS technologies. 2. Results: 2.1. Causative Pathogen Scanning All forty-three samples were found negative for norovirus, rotavirus, adenovirus, sapovirus and astrovirus. Additionally, Salmonella, Shigella, Vibrio parahaemolyticus, and Vibrio cholerae were also not detected in all the examined patients. Fifteen strains of C. jejuni, from the fifteen sick students (Table 1), were identified by real-time fluorescent PCR and confirmed by the traditional microbiological approaches. All forty-three samples were found negative for norovirus, rotavirus, adenovirus, sapovirus and astrovirus. Additionally, Salmonella, Shigella, Vibrio parahaemolyticus, and Vibrio cholerae were also not detected in all the examined patients. Fifteen strains of C. 
jejuni, from the fifteen sick students (Table 1), were identified by real-time fluorescent PCR and confirmed by the traditional microbiological approaches. 2.2. PFGE Profiling Studies The fifteen C. jejuni strains were classified into two types, PA1 (fourteen strains) with an overall 99.7% similarity, and PA2 (one strain). The similarity between PA1 and PA2 was 66.7% (Figure 1, Table 1). The fifteen C. jejuni strains were classified into two types, PA1 (fourteen strains) with an overall 99.7% similarity, and PA2 (one strain). The similarity between PA1 and PA2 was 66.7% (Figure 1, Table 1). 2.3. Genomic Sequencing After conducting the whole genome sequencing and genomic assembly of the C. jejuni strains, the number of contigs was calculated to be between 12 and 48 contigs. Genome sequencing, assembly results and accession number are summarized in Table 1. The average genome size of draft assemblies was 1,650,982. Furthermore, the average N50 was 255,161 with 30.33% as average of GC%. The assembly results were scanned and identified with their MLST profiles. Fourteen strains of C. jejuni belonged to ST2988 and only one strain belonged to the ST8149 type. Further analysis showed that all the strains harbored blaOXA-61 which encodes resistance to β-lactamases, and tet(O) which confers resistance to tetracyclines. Additionally, a chromosomal mutation in gyrA (T86I), which might be responsible for the resistance to fluoroquinolones, was detected in all fifteen strains. No plasmid replicons were detected in any isolate. Furthermore, Figure 2 shows that all isolates harbored flagellar, motility, chemotaxis and cytolethal toxin proteins. SAMN12388815 isolate harbored gmhP, porA proteins which play a vital role in bacterial virulence by enhancing the adhesion and invasion properties. Cysc, Cj1416c, Cj1417c, Cj1419c, Cj1420c proteins, which are involved in capsule polysaccharide biosynthesis, were only detected in one strain (SAMN12388815). We also identified that both kpsT and kpsC proteins, which were involved in capsule polysaccharide biosynthesis, were only in three strains (SAMN12388802, SAMN12388803, SAMN12388804, in Figure 2). After conducting the whole genome sequencing and genomic assembly of the C. jejuni strains, the number of contigs was calculated to be between 12 and 48 contigs. Genome sequencing, assembly results and accession number are summarized in Table 1. The average genome size of draft assemblies was 1,650,982. Furthermore, the average N50 was 255,161 with 30.33% as average of GC%. The assembly results were scanned and identified with their MLST profiles. Fourteen strains of C. jejuni belonged to ST2988 and only one strain belonged to the ST8149 type. Further analysis showed that all the strains harbored blaOXA-61 which encodes resistance to β-lactamases, and tet(O) which confers resistance to tetracyclines. Additionally, a chromosomal mutation in gyrA (T86I), which might be responsible for the resistance to fluoroquinolones, was detected in all fifteen strains. No plasmid replicons were detected in any isolate. Furthermore, Figure 2 shows that all isolates harbored flagellar, motility, chemotaxis and cytolethal toxin proteins. SAMN12388815 isolate harbored gmhP, porA proteins which play a vital role in bacterial virulence by enhancing the adhesion and invasion properties. Cysc, Cj1416c, Cj1417c, Cj1419c, Cj1420c proteins, which are involved in capsule polysaccharide biosynthesis, were only detected in one strain (SAMN12388815). 
We also identified that both kpsT and kpsC proteins, which were involved in capsule polysaccharide biosynthesis, were only in three strains (SAMN12388802, SAMN12388803, SAMN12388804, in Figure 2). 2.4. Genome Comparison and Phylogenomic Analysis The phylogenomic tree shows that all fourteen case-patient isolates from this particular outbreak, which belonged to ST2988, are closely related and clustered together in a single clade. The other individual strain belonged to ST8149 (Figure 1). Importantly, the genomes of these fourteen ST2988 isolates were differed in (< 70) core SNPs, and showed (> 99%) a high similarity (Table S1). The ST2988 belongs to CC354 that includes 199 identified sequence types in (http://pubmlst.org/campylobacter/). Genomic data of all CC354 strains in NCBI database were extracted and 303 genomes were obtained (Table S2), including 27 sequence types. With ST354 strain RM1221 (GCA_000011865.1) as the reference genome, the SNP locus and phylogenetic tree between 302 strains in the public database and 14 ST2988 isolates from this outbreak were obtained (Figure 3). ST354 is the most predominant sequence type in CC354 (Figure S1). Most CC345 isolates were isolated from humans and food samples, and few isolates were retrieved from unknown sources (Figure 3, and Figure S1). Isolates from chicken-origin were identified to have the highest prevalence among the food isolates (Figure 3, and Table S2). The CC354 strains in the public databases are mainly from the US and the UK (Figure S1), while strains from other countries are scattered. A small difference in distance between phylogenetic branches of CC345 isolates was identified in Figure 3 with a scale bar at 0.001, indicating a very close genetic relationship within the sequence type. We also observed a close relationship with a scale bar at 0.001 among these 14 strains linked with the outbreak in this study, which were also linked with the only available genome (SAMN10485936) in the NCBI database (Figure 4). The phylogenomic tree shows that all fourteen case-patient isolates from this particular outbreak, which belonged to ST2988, are closely related and clustered together in a single clade. The other individual strain belonged to ST8149 (Figure 1). Importantly, the genomes of these fourteen ST2988 isolates were differed in (< 70) core SNPs, and showed (> 99%) a high similarity (Table S1). The ST2988 belongs to CC354 that includes 199 identified sequence types in (http://pubmlst.org/campylobacter/). Genomic data of all CC354 strains in NCBI database were extracted and 303 genomes were obtained (Table S2), including 27 sequence types. With ST354 strain RM1221 (GCA_000011865.1) as the reference genome, the SNP locus and phylogenetic tree between 302 strains in the public database and 14 ST2988 isolates from this outbreak were obtained (Figure 3). ST354 is the most predominant sequence type in CC354 (Figure S1). Most CC345 isolates were isolated from humans and food samples, and few isolates were retrieved from unknown sources (Figure 3, and Figure S1). Isolates from chicken-origin were identified to have the highest prevalence among the food isolates (Figure 3, and Table S2). The CC354 strains in the public databases are mainly from the US and the UK (Figure S1), while strains from other countries are scattered. A small difference in distance between phylogenetic branches of CC345 isolates was identified in Figure 3 with a scale bar at 0.001, indicating a very close genetic relationship within the sequence type. 
We also observed a close relationship with a scale bar at 0.001 among these 14 strains linked with the outbreak in this study, which were also linked with the only available genome (SAMN10485936) in the NCBI database (Figure 4). 2.1. Causative Pathogen Scanning: All forty-three samples were found negative for norovirus, rotavirus, adenovirus, sapovirus and astrovirus. Additionally, Salmonella, Shigella, Vibrio parahaemolyticus, and Vibrio cholerae were also not detected in all the examined patients. Fifteen strains of C. jejuni, from the fifteen sick students (Table 1), were identified by real-time fluorescent PCR and confirmed by the traditional microbiological approaches. 2.2. PFGE Profiling Studies: The fifteen C. jejuni strains were classified into two types, PA1 (fourteen strains) with an overall 99.7% similarity, and PA2 (one strain). The similarity between PA1 and PA2 was 66.7% (Figure 1, Table 1). 2.3. Genomic Sequencing: After conducting the whole genome sequencing and genomic assembly of the C. jejuni strains, the number of contigs was calculated to be between 12 and 48 contigs. Genome sequencing, assembly results and accession number are summarized in Table 1. The average genome size of draft assemblies was 1,650,982. Furthermore, the average N50 was 255,161 with 30.33% as average of GC%. The assembly results were scanned and identified with their MLST profiles. Fourteen strains of C. jejuni belonged to ST2988 and only one strain belonged to the ST8149 type. Further analysis showed that all the strains harbored blaOXA-61 which encodes resistance to β-lactamases, and tet(O) which confers resistance to tetracyclines. Additionally, a chromosomal mutation in gyrA (T86I), which might be responsible for the resistance to fluoroquinolones, was detected in all fifteen strains. No plasmid replicons were detected in any isolate. Furthermore, Figure 2 shows that all isolates harbored flagellar, motility, chemotaxis and cytolethal toxin proteins. SAMN12388815 isolate harbored gmhP, porA proteins which play a vital role in bacterial virulence by enhancing the adhesion and invasion properties. Cysc, Cj1416c, Cj1417c, Cj1419c, Cj1420c proteins, which are involved in capsule polysaccharide biosynthesis, were only detected in one strain (SAMN12388815). We also identified that both kpsT and kpsC proteins, which were involved in capsule polysaccharide biosynthesis, were only in three strains (SAMN12388802, SAMN12388803, SAMN12388804, in Figure 2). 2.4. Genome Comparison and Phylogenomic Analysis : The phylogenomic tree shows that all fourteen case-patient isolates from this particular outbreak, which belonged to ST2988, are closely related and clustered together in a single clade. The other individual strain belonged to ST8149 (Figure 1). Importantly, the genomes of these fourteen ST2988 isolates were differed in (< 70) core SNPs, and showed (> 99%) a high similarity (Table S1). The ST2988 belongs to CC354 that includes 199 identified sequence types in (http://pubmlst.org/campylobacter/). Genomic data of all CC354 strains in NCBI database were extracted and 303 genomes were obtained (Table S2), including 27 sequence types. With ST354 strain RM1221 (GCA_000011865.1) as the reference genome, the SNP locus and phylogenetic tree between 302 strains in the public database and 14 ST2988 isolates from this outbreak were obtained (Figure 3). ST354 is the most predominant sequence type in CC354 (Figure S1). 
Most CC345 isolates were isolated from humans and food samples, and few isolates were retrieved from unknown sources (Figure 3, and Figure S1). Isolates from chicken-origin were identified to have the highest prevalence among the food isolates (Figure 3, and Table S2). The CC354 strains in the public databases are mainly from the US and the UK (Figure S1), while strains from other countries are scattered. A small difference in distance between phylogenetic branches of CC345 isolates was identified in Figure 3 with a scale bar at 0.001, indicating a very close genetic relationship within the sequence type. We also observed a close relationship with a scale bar at 0.001 among these 14 strains linked with the outbreak in this study, which were also linked with the only available genome (SAMN10485936) in the NCBI database (Figure 4). 3. Discussion: Recently, the rate of Campylobacter infections has rapidly increased due to the expansion of the consumption of raw or undercooked chicken, especially in China [13]. In December 2018, a serious case of foodborne disease was reported in a high school, where eighty-four students in twelve classes from grade one to six had diarrhea, vomiting, fever and other foodborne disease-associated symptoms, in Hangzhou. To identify the causative agent of this outbreak, 43 fecal samples were collected from patient students and canteen workers. Nucleic acid of suspected viral or bacterial samples were extracted for laboratory investigation. None of these samples were positive for the suspected viruses. Fifteen strains of C. jejuni were detected and isolated from the samples of fifteen sick students. To the best of our knowledge, this is the second foodborne outbreak of C. jejuni described in China to date. The previous outbreak led to 36 cases of Campylobacter infections that occurred in a high school in Beijing after a trip to another province in Southern China [14]. In order to provide more reliable evidence for the outbreak origin, we conducted PFGE profiling and genomic analysis for these fifteen strains of C. jejuni, which is essential for evaluating the clinical isolates from the outbreak and related cases [15]. The results showed that these fourteen strains belonged to the same pattern (PA-1), while the one other strain which had a similarity of 66.7%, belonged to the other pattern (PA-2). By using genomic data for MLST or genotype scanning, it was found that 14 strains were of ST2988 type and one of ST8149 type, which was consistent with PFGE results. These results suggested that the unique ST2988 C. jejuni isolate was responsible for this foodborne outbreak. Scrutiny of the PFGE pattern (PA-1) exhibited an inherent similarity, with some changes in three isolates (CAM19-027, CAM19-028, CAM19-037) belonging to the same MLST (Figure 1), which hints towards a recent evolutionary deviation from a common ancestor. Although these isolates had a slightly deviant PFGE pattern, it was not considered significant enough to exclude them from this outbreak, as the variations in the PFGE patterns can result from a single-nucleotide polymorphism in a restriction site [16]. Thus, a clonal relationship may be found even between strains with dissimilar PFGE profiles. Furthermore, a PFGE profile can change after only a single passage through the host by genomic rearrangement [17]. 
Such changes may occur at relatively high frequency because of the greater discriminatory power of PFGE compared with MLST, and they do not undermine our conclusion regarding the source of infection [18], considering that genotyping results are always interpreted in the context of other findings from the outbreak investigation. Epidemiological reports on C. jejuni ST2988 in China are limited. This sequence type has only been reported in three (0.25%) strains from poultry in Jiangsu province, a province close to Zhejiang, in 2014 [19]. Interestingly, only two ST2988 strains from unknown sources are described in the Campylobacter PubMLST database (http://pubmlst.org/campylobacter/): one from the UK and one from the US. An additional strain, GCA_004825105.1 (PNUSAC006969, BioSample SAMN10485936, isolated in October 2018 from a patient aged 40–49 years), is described in the NCBI database. As shown in Figure 4, the 14 strains isolated in this study were closely related to this only available genome (SAMN10485936) in the NCBI database (scale bar, 0.0001). ST2988 belongs to CC354, which comprises 2707 isolates submitted to PubMLST and a total of 199 different sequence types (http://pubmlst.org/campylobacter/), although only three C. jejuni ST2988 isolates were found in the public database. The CC354 strains in the public databases are mainly from the US and the UK (Figure 3), while submissions from other countries are scattered. CC354 is frequently associated with human clinical infections (47.9%) and poultry (30.7%) (http://pubmlst.org/campylobacter/); it has also been reported from wild birds in Spain [20], ducks in South Korea [21], and cattle and pig carcasses in Poland [22]. Larger surveillance datasets on C. jejuni isolates from humans as well as various other animals could provide additional knowledge of disease ecology and host reservoirs, which might aid in source attribution for this particular outbreak. A total of 303 high-quality CC354 genomes were retrieved from the NCBI assembly database and used for the comparative genomics analysis. We found very limited genetic distance between the branches of the phylogenomic tree of the CC354 isolates, which is clearly consistent with the sequence typing results. This demonstrates that MLST genotyping based on housekeeping genes is correlated with genomic phylogeny. The mechanisms by which Campylobacter species cause diarrhea, and knowledge of the subsequent sequelae, remain incomplete [23]. Genes associated with bacterial motility, invasion and adhesion to epithelial cells, which are critical in the development of Campylobacter infection [24,25], were detected in all isolates. These findings confirm that flagellar and adhesion genes are highly conserved among C. jejuni, as previously reported [23,26]. Furthermore, the virulence determinants cdtA, cdtB and cdtC, cytotoxin genes that play an important role in diarrhea by interfering with the division and differentiation of intestinal crypt cells, were also identified in all examined isolates. As shown in previous investigations, all three subunits are required for full toxin activity [23]. Campylobacter is a major foodborne pathogen, and its resistance to clinically vital antibiotics poses a significant health concern [4,27,28].
In particular, rising fluoroquinolone and tetracycline resistance in Campylobacter has been reported in many countries [4]. Fluoroquinolones are considered the rational drugs of choice for treating human campylobacteriosis [12,29], but in certain cases tetracyclines are used to treat systemic infections caused by Campylobacter [12,27]. The genomic analysis in this study indicated that all the tested isolates harbored tet(O), which confers resistance to tetracyclines, and a chromosomal mutation in gyrA (T86I), which confers resistance to fluoroquinolones. Resistance to these two antibiotic classes is also the most frequently reported in Campylobacter infections in China [30,31,32]. More than 90% of Campylobacter spp. isolates in Shanghai, also in eastern China, have been reported to be resistant to quinolones and tetracycline [33]. Furthermore, C. jejuni strains obtained from retail chicken meat samples in central China have been described as highly resistant to ciprofloxacin and tetracycline [34]. As antimicrobial resistance raises significant alarm [35], substantial attention should be given to antimicrobial resistance in C. jejuni, and a long-term monitoring system is needed for improved control of infections, epidemics and resistance to crucial antimicrobials among bacterial agents, including C. jejuni. 4. Material and Methods: 4.1. Epidemiological Investigation: In December 2018, a series of patients reported foodborne disease in a high school in Hangzhou, the capital city of Zhejiang province in eastern China. Eighty-four students, in twelve classes from grades one to six, complained of symptoms of food poisoning. No meals were served at the school other than school lunches, which were therefore considered the potential source of this foodborne outbreak. We defined a probable case as a patient with diarrhea, vomiting or other symptoms (e.g., abdominal pain or fever), and a confirmed case as a patient with any symptoms and a confirmed laboratory diagnosis of C. jejuni. 4.2. Samples Collection: In accordance with Chinese local regulations, local CDC microbiologists collected 43 fecal samples, of which 27 were from sick students and 16 from canteen employees, as probable cases for microbiological investigation. Canteen food samples had been disposed of by the head of the school because of concerns about further contamination and disease dissemination, so no food samples were available for the investigation.
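As a concrete illustration of the case definitions given in Section 4.1, a minimal Python sketch follows; the field names and symptom list are hypothetical and only illustrate the classification logic, not the study protocol itself.

```python
# Minimal sketch of the outbreak case definitions; field names are hypothetical.
SYMPTOMS = {"diarrhea", "vomiting", "abdominal pain", "fever"}

def classify_case(reported_symptoms, lab_confirmed_c_jejuni):
    """Return 'confirmed', 'probable' or 'not a case' for one individual."""
    symptomatic = bool(SYMPTOMS & set(reported_symptoms))
    if symptomatic and lab_confirmed_c_jejuni:
        return "confirmed"
    if symptomatic:
        return "probable"
    return "not a case"

print(classify_case({"diarrhea", "fever"}, lab_confirmed_c_jejuni=False))  # -> probable
```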
4.3. Pathogen Detection: Real-time fluorescent PCR was used to detect norovirus, rotavirus, adenovirus, sapovirus and astrovirus according to a previously reported protocol [36]. The WS271-2007 diagnostic criteria for infectious diarrhea [37,38] were used for the detection of Salmonella, C. jejuni and Vibrio parahaemolyticus, and the WS287-2008 [39] and WS289-2008 [40] protocols were used for the detection of Shigella and Vibrio cholerae, respectively. Briefly, fecal samples were added to an Eppendorf tube with sterile saline to prepare a stool suspension. Total genomic DNA, including that of bacterial and viral agents, was extracted and purified from the stool suspension using the QIAamp DNA mini Kit (Qiagen, Hilden, Germany, No: 51304) according to the manufacturer's recommended protocol. Real-time fluorescent PCR was performed at 42 °C for 1 h and 95 °C for 15 min, followed by 40 cycles of 94 °C for 60 s, 58 °C for 80 s, and 72 °C for 60 s, with a final extension at 72 °C for 7 min. 4.4. Isolation and Identification of Campylobacter spp.: Samples positive for Campylobacter by real-time fluorescent PCR were pre-enriched in Preston selective broth supplemented with 5% sterile lysed sheep blood, Campylobacter growth supplement and selective supplement (Oxoid Ltd., Basingstoke, UK). Samples were incubated at 42 °C under microaerobic conditions (5% O2, 10% CO2, and 85% N2) for 12–24 h. Two-hundred-microliter drops of the pre-enrichment were applied to a 0.45-μm pore-size filter placed on the surface of a Columbia blood agar plate, and the plates were further incubated at 37 °C under microaerobic conditions [41]. 4.5. Pulsed Field Gel Electrophoresis (PFGE) Testing: PFGE molecular typing was performed according to the PFGE protocol for C. jejuni [42,43].
Briefly, restriction digestion was conducted with 40 U of SmaI (Takara, Dalian, China), and the digests were separated on a CHEF Mapper PFGE system (Bio-Rad Laboratories, Hercules, CA, USA) in SeaKem Gold agarose gels (Lonza, Rockland, MD, USA) with 0.5× Tris-borate-EDTA buffer. BioNumerics v6.6 software was used for the clustering analysis, and isolates with greater than 95% similarity were considered to belong to the same genetic group. The similarity between chromosomal fingerprints was scored using the Dice coefficient, and the unweighted pair group method with arithmetic mean (UPGMA), at 1.5% tolerance and 1.00% optimization, was used to obtain the dendrogram from the PFGE profiles.
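As a rough illustration of how Dice-coefficient similarity and UPGMA clustering of band patterns could be reproduced outside BioNumerics, a Python sketch using SciPy is shown below; the band sizes, isolate labels and tolerance handling are invented for illustration and are not the study data.

```python
# Hypothetical sketch: Dice similarity and UPGMA clustering of PFGE band patterns.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

def dice_similarity(bands_a, bands_b, tolerance=0.015):
    """Dice coefficient: 2 x shared bands / (bands in A + bands in B)."""
    shared = sum(any(abs(a - b) / b <= tolerance for b in bands_b) for a in bands_a)
    return 2.0 * shared / (len(bands_a) + len(bands_b))

profiles = {  # hypothetical fragment sizes (kb) for three isolates
    "isolate_A": [48.0, 90.5, 140.2, 210.0, 350.5],
    "isolate_B": [48.0, 90.5, 139.9, 210.0, 351.0],
    "isolate_C": [60.3, 95.1, 150.7, 230.4, 400.2],
}
names = list(profiles)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = 1.0 - dice_similarity(profiles[names[i]], profiles[names[j]])
        dist[i, j] = dist[j, i] = d

condensed = dist[np.triu_indices(n, k=1)]          # condensed distance vector
tree = linkage(condensed, method="average")        # "average" linkage = UPGMA
dendrogram(tree, labels=names, no_plot=True)       # set no_plot=False to draw the dendrogram
```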
4.6. Genomic Sequencing and Bioinformatic Analysis: The genomic DNA library was constructed using the Nextera XT DNA library preparation kit (Illumina, USA, No: FC-131-1024), followed by sequencing with the MiSeq Reagent Kit v2, 300-cycle (Illumina, USA, No: MS-102-2002). High-throughput genome sequencing was accomplished on the Illumina MiSeq sequencing platform, as previously described [44,45,46]. Sequencing quality was checked with the FastQC toolkit, and low-quality reads and adapter sequences were removed with Trimmomatic [47]. Genome assembly into genomic scaffolds was performed with SPAdes 4.0.1 [48], using the "careful correction" option to reduce the number of mismatches in the final assembly, with k-mer values chosen automatically by SPAdes. QUAST [49] was used to evaluate the assembled genomes by generating basic statistics, including the total number of contigs, contig lengths and N50. Prokka 1.14 [50], with the default settings, was used to annotate the assembled genomes. Multilocus sequence typing (MLST) software (http://www.github.com/tseemann/mlst) was used to assign sequence types to the isolates. Resistance genes, plasmid replicons and virulence genes were detected using the ABRicate software (http://www.github.com/tseemann/abricate). Genomes of all sequence types belonging to the clonal complex (CC) identified from the genome sequences were retrieved from the NCBI assembly database. Using strain RM1221 [51] as the reference genome, we applied two different protocols to conduct the multiple sequence alignment of the genomes for building the phylogenomic tree, and both delivered identical results. The first approach used Snippy to identify single nucleotide polymorphism (SNP) loci [52]; the second used Gubbins to produce the consensus sequence, with MAFFT generating the multiple sequence alignment of the whole genome sequences [52]. The phylogenomic tree was built with RAxML [53] and visualized with iTOL [54].
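A sketch of how this command-line workflow could be scripted is given below; the sample name, file paths, Trimmomatic settings and ABRicate database choices are placeholders and assumptions, and only the tools and options named in the text above are taken from the study.

```python
# Sketch of the assembly and typing workflow described above, driven from Python.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

sample = "isolate01"                                   # placeholder sample name
r1, r2 = f"{sample}_R1.fastq.gz", f"{sample}_R2.fastq.gz"

run(["fastqc", r1, r2])                                # read quality check
run(["trimmomatic", "PE", r1, r2,                      # remove low-quality/adapter reads
     f"{sample}_1P.fq.gz", f"{sample}_1U.fq.gz",
     f"{sample}_2P.fq.gz", f"{sample}_2U.fq.gz",
     "SLIDINGWINDOW:4:20", "MINLEN:36"])
run(["spades.py", "--careful",                         # assembly, k-mers chosen automatically
     "-1", f"{sample}_1P.fq.gz", "-2", f"{sample}_2P.fq.gz",
     "-o", f"{sample}_spades"])
assembly = f"{sample}_spades/contigs.fasta"
run(["quast.py", assembly, "-o", f"{sample}_quast"])   # contig counts, lengths, N50
run(["prokka", "--outdir", f"{sample}_prokka", assembly])  # annotation
run(["mlst", assembly])                                # sequence type (e.g., ST2988)
run(["abricate", "--db", "resfinder", assembly])       # resistance genes (tet(O), blaOXA-61)
run(["abricate", "--db", "plasmidfinder", assembly])   # plasmid replicons
run(["abricate", "--db", "vfdb", assembly])            # virulence genes
```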
4.7. Ethical Approval: All procedures performed in studies involving human participants were officially approved by the Xiacheng CDC at Hangzhou (No. 2019-05, 20190716), in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
5. Conclusions: This analysis sheds light on the potential threat of C. jejuni infections. PFGE and NGS technologies provided reliable evidence for identifying the pathogen responsible for this outbreak, C. jejuni ST2988. These results suggest that greater attention should be paid to the circulation of this rarely reported sequence type, and advanced NGS technologies are expected to be valuable for pathogen detection and foodborne disease tracking. To our knowledge, this is the second C. jejuni outbreak described in China to date. Unfortunately, food samples could not be included in this investigation; in the future, the collection and testing of food samples should be emphasized to allow a more comprehensive investigation. These data also support the need for authorities to implement systematic surveillance and compulsory notification of Campylobacter infections in humans as well as different animals, which is essential for identifying and tracking sources of infection and for the design of effective control measures to ensure public health and safety.
Background: Foodborne outbreaks caused by Campylobacter jejuni have become a significant public health problem worldwide. Applying genomic sequencing as a routine part of foodborne outbreak investigation remains in its infancy in China. We applied both traditional PFGE profiling and genomic investigation to understand the cause of a foodborne outbreak in Hangzhou in December 2018. Methods: A total of 43 fecal samples, from 27 sick patients and 16 canteen employees at a high school in Hangzhou city in Zhejiang province, were collected. Routine real-time fluorescent PCR assays were used to screen for potential infectious agents, including viral pathogens (norovirus, rotavirus, adenovirus, and astrovirus) and bacterial pathogens (Salmonella, Shigella, Campylobacter jejuni, Vibrio parahaemolyticus and Vibrio cholerae). Selective bacterial media were used to isolate and identify the bacteria found positive by the molecular tests. Pulsed field gel electrophoresis (PFGE) and next generation sequencing (NGS) were applied to the fifteen recovered C. jejuni isolates to further understand the case linkage of this particular outbreak. Additionally, we retrieved reference genomes from the NCBI database and performed a comparative genomics analysis with the genomes produced in this study. Results: The analyzed samples were negative for the queried viruses, and Salmonella, Shigella, Vibrio parahaemolyticus and Vibrio cholerae were not detected. Fifteen C. jejuni strains were identified by the real-time PCR assay and selective media, and these strains were classified into two genetic profiles by PFGE. Of the fifteen C. jejuni strains, fourteen shared a consistent genotype belonging to ST2988, and the remaining strain belonged to ST8149, with 66.7% similarity to the rest of the strains. Moreover, all fifteen strains harbored blaOXA-61 and tet(O), in addition to a chromosomal mutation in gyrA (T86I). The fourteen examined ST2988 strains from the CC354 clonal complex showed very minimal genetic differences (3–66 SNPs), as demonstrated by the phylogenomic investigation. Conclusions: Both the genomic investigation and PFGE profiling confirmed that C. jejuni ST2988, a new derivative of CC354, was responsible for the foodborne outbreak illustrated in this study.
1. Introduction: Campylobacter jejuni is a common foodborne pathogenic bacterium that causes gastroenteritis and, more severely, a neurological disorder in humans called Guillain-Barre syndrome [1]. Raw milk, water and contaminated meat, particularly chicken, are believed to be the main sources of C. jejuni human infections [2,3]. C. jejuni is considered the leading cause of human gastroenteritis [4] and is ranked as the second most important cause of foodborne disease in the U.S., with more than 1.5 million illnesses annually according to the Centers for Disease Control and Prevention (CDC); it has also been reported as one of the most commonly described pathogens in humans in the European Union foodborne disease surveillance network since 2005 [5,6]. Recently, there has been a surge in the global incidence of Campylobacter infections and ongoing spread of human cases in North America, Europe, and Australia [7]. Although foodborne disease caused by C. jejuni has become an important public health concern, there is limited knowledge about its role in foodborne disease outbreaks in China. This knowledge gap could be due to Campylobacter infections not being subject to obligatory reporting, with surveillance conducted on a voluntary basis by local and regional laboratories. Pulsed field gel electrophoresis (PFGE) has been widely used in outbreak investigations for tracking sources of infection and effectively controlling epidemics, owing to its good reproducibility, high resolution, stable results and ease of standardization [8]. Next generation sequencing (NGS) technology is now becoming popular, given its advantages of labor and time savings, high-throughput capacity, high precision and the abundance of genetic information available for extensive studies. As sequencing costs continue to decrease, genomic epidemiology combined with NGS has been increasingly and widely applied to outbreak investigations [9,10]. PFGE and other genotyping approaches, including multi-locus sequence typing (MLST), show that Campylobacter is not a genetically monomorphic organism but includes highly diverse assemblies with an array of different phenotypes [9,10,11]. Given this diversity, there is sufficient genetic material to link a particular genotype with a certain animal host [2,12]. Nevertheless, genome sequences of few Chinese clinical C. jejuni isolates are available in public genomic databases. The aim of this study was to describe both the epidemiological investigation and the genomic characterization of the C. jejuni responsible for the outbreak in a high school in Hangzhou in December 2018, using PFGE and NGS technologies.
7,586
402
[ 73, 47, 268, 334, 2221, 116, 66, 203, 118, 146, 375, 55, 179 ]
16
[ "strains", "isolates", "jejuni", "figure", "sequence", "genome", "samples", "genomic", "campylobacter", "pfge" ]
[ "jejuni human infections", "jejuni responsible outbreak", "introduction campylobacter jejuni", "jejuni common foodborne", "campylobacter infections china" ]
null
[CONTENT] Campylobacter jejuni | foodborne outbreak | genomic investigation | pulse field gel electrophoresis | ST2988 [SUMMARY]
null
[CONTENT] Campylobacter jejuni | foodborne outbreak | genomic investigation | pulse field gel electrophoresis | ST2988 [SUMMARY]
[CONTENT] Campylobacter jejuni | foodborne outbreak | genomic investigation | pulse field gel electrophoresis | ST2988 [SUMMARY]
[CONTENT] Campylobacter jejuni | foodborne outbreak | genomic investigation | pulse field gel electrophoresis | ST2988 [SUMMARY]
[CONTENT] Campylobacter jejuni | foodborne outbreak | genomic investigation | pulse field gel electrophoresis | ST2988 [SUMMARY]
[CONTENT] Bacterial Typing Techniques | Campylobacter Infections | Campylobacter jejuni | China | Disease Outbreaks | Electrophoresis, Gel, Pulsed-Field | Foodborne Diseases | Genome, Bacterial | Genomics | Humans | Phylogeny | Sequence Analysis, DNA | Virulence Factors [SUMMARY]
null
[CONTENT] Bacterial Typing Techniques | Campylobacter Infections | Campylobacter jejuni | China | Disease Outbreaks | Electrophoresis, Gel, Pulsed-Field | Foodborne Diseases | Genome, Bacterial | Genomics | Humans | Phylogeny | Sequence Analysis, DNA | Virulence Factors [SUMMARY]
[CONTENT] Bacterial Typing Techniques | Campylobacter Infections | Campylobacter jejuni | China | Disease Outbreaks | Electrophoresis, Gel, Pulsed-Field | Foodborne Diseases | Genome, Bacterial | Genomics | Humans | Phylogeny | Sequence Analysis, DNA | Virulence Factors [SUMMARY]
[CONTENT] Bacterial Typing Techniques | Campylobacter Infections | Campylobacter jejuni | China | Disease Outbreaks | Electrophoresis, Gel, Pulsed-Field | Foodborne Diseases | Genome, Bacterial | Genomics | Humans | Phylogeny | Sequence Analysis, DNA | Virulence Factors [SUMMARY]
[CONTENT] Bacterial Typing Techniques | Campylobacter Infections | Campylobacter jejuni | China | Disease Outbreaks | Electrophoresis, Gel, Pulsed-Field | Foodborne Diseases | Genome, Bacterial | Genomics | Humans | Phylogeny | Sequence Analysis, DNA | Virulence Factors [SUMMARY]
[CONTENT] jejuni human infections | jejuni responsible outbreak | introduction campylobacter jejuni | jejuni common foodborne | campylobacter infections china [SUMMARY]
null
[CONTENT] jejuni human infections | jejuni responsible outbreak | introduction campylobacter jejuni | jejuni common foodborne | campylobacter infections china [SUMMARY]
[CONTENT] jejuni human infections | jejuni responsible outbreak | introduction campylobacter jejuni | jejuni common foodborne | campylobacter infections china [SUMMARY]
[CONTENT] jejuni human infections | jejuni responsible outbreak | introduction campylobacter jejuni | jejuni common foodborne | campylobacter infections china [SUMMARY]
[CONTENT] jejuni human infections | jejuni responsible outbreak | introduction campylobacter jejuni | jejuni common foodborne | campylobacter infections china [SUMMARY]
[CONTENT] strains | isolates | jejuni | figure | sequence | genome | samples | genomic | campylobacter | pfge [SUMMARY]
null
[CONTENT] strains | isolates | jejuni | figure | sequence | genome | samples | genomic | campylobacter | pfge [SUMMARY]
[CONTENT] strains | isolates | jejuni | figure | sequence | genome | samples | genomic | campylobacter | pfge [SUMMARY]
[CONTENT] strains | isolates | jejuni | figure | sequence | genome | samples | genomic | campylobacter | pfge [SUMMARY]
[CONTENT] strains | isolates | jejuni | figure | sequence | genome | samples | genomic | campylobacter | pfge [SUMMARY]
[CONTENT] foodborne | disease | ngs | jejuni | foodborne disease | infections | campylobacter | human | technology | gastroenteritis [SUMMARY]
null
[CONTENT] strains | figure | isolates | table | identified | s1 | proteins | st2988 | cc354 | genome [SUMMARY]
[CONTENT] identification | ngs technologies | technologies | ngs | tracking | infections | investigation | food samples | jejuni | food [SUMMARY]
[CONTENT] strains | figure | jejuni | isolates | sequence | samples | genome | pfge | campylobacter | outbreak [SUMMARY]
[CONTENT] strains | figure | jejuni | isolates | sequence | samples | genome | pfge | campylobacter | outbreak [SUMMARY]
[CONTENT] Campylobacter ||| China ||| Hangzhou | December 2018 [SUMMARY]
null
[CONTENT] ||| Salmonella | Shigella | Vibrio | Vibrio cholera ||| Fifteen | PCR ||| C. | two | PFGE ||| fifteen | C. | fourteen | ST8149 | 66.7% ||| fifteen | blaOXA-61 | T86I ||| fourteen | CC354 | 3~66 [SUMMARY]
[CONTENT] C. | CC354 | Illustrated [SUMMARY]
[CONTENT] Campylobacter ||| China ||| Hangzhou | December 2018 ||| 43 | 27 | 16 | Hangzhou | Zhejiang ||| Shigella | Campylobacter | Vibrio | Vibrio cholerae ||| ||| NGS | fifteen | C. ||| NCBI ||| ||| ||| Salmonella | Shigella | Vibrio | Vibrio cholera ||| Fifteen | PCR ||| C. | two | PFGE ||| fifteen | C. | fourteen | ST8149 | 66.7% ||| fifteen | blaOXA-61 | T86I ||| fourteen | CC354 | 3~66 ||| C. | CC354 | Illustrated [SUMMARY]
[CONTENT] Campylobacter ||| China ||| Hangzhou | December 2018 ||| 43 | 27 | 16 | Hangzhou | Zhejiang ||| Shigella | Campylobacter | Vibrio | Vibrio cholerae ||| ||| NGS | fifteen | C. ||| NCBI ||| ||| ||| Salmonella | Shigella | Vibrio | Vibrio cholera ||| Fifteen | PCR ||| C. | two | PFGE ||| fifteen | C. | fourteen | ST8149 | 66.7% ||| fifteen | blaOXA-61 | T86I ||| fourteen | CC354 | 3~66 ||| C. | CC354 | Illustrated [SUMMARY]
Sudden infant death syndrome: a re-examination of temporal trends.
22747916
While the reduction in infants' prone sleeping has led to a temporal decline in Sudden Infant Death Syndrome (SIDS), some aspects of this trend remain unexplained. We assessed whether changes in the gestational age distribution of births also contributed to the temporal reduction in SIDS.
BACKGROUND
SIDS patterns among singleton and twin births in the United States were analysed in 1995-96 and 2004-05. The temporal reduction in SIDS was partitioned using the Kitagawa decomposition method into reductions due to changes in the gestational age distribution and reductions due to changes in gestational age-specific SIDS rates. Both the traditional and the fetuses-at-risk models were used.
METHODS
SIDS rates declined with increasing gestation under the traditional perinatal model. Rates were higher at early gestation among singletons compared with twins, while the reverse was true at later gestation. Under the fetuses-at-risk model, SIDS rates increased with increasing gestation and twins had higher rates of SIDS than singletons at all gestational ages. Between 1995-96 and 2004-05, SIDS declined from 8.3 to 5.6 per 10,000 live births among singletons and from 14.2 to 10.6 per 10,000 live births among twins. Decomposition using the traditional model showed that the SIDS reduction among singletons and twins was entirely due to changes in the gestational age-specific SIDS rate. The fetuses-at-risk model attributed 45% of the SIDS reduction to changes in the gestational age distribution and 55% of the reduction to changes in gestational age-specific SIDS rates among singletons; among twins these proportions were 64% and 36%, respectively.
RESULTS
Changes in the gestational age distribution may have contributed to the recent temporal reduction in SIDS.
CONCLUSION
[ "Adult", "Diseases in Twins", "Female", "Gestational Age", "Health Promotion", "Humans", "Infant", "Male", "Models, Statistical", "Risk Factors", "Sudden Infant Death", "United States", "Young Adult" ]
3437219
Background
Although Sudden Infant Death Syndrome (SIDS) is a leading cause of post-neonatal death in industrialized countries, its etiology is largely unknown [1]. While the reduction in prone sleeping following the back-to-sleep campaign has led to a decline in SIDS in many countries [2-6], there are several puzzling aspects related to this intervention and the epidemiology of SIDS. For instance, the onset of the decline in SIDS preceded the initiation of the back-to-sleep campaign [2-7]. The reduction in SIDS in the United States began in 1989, while the back-to-sleep campaign was initiated in 1994 [6]. Similarly, SIDS rates in the United Kingdom decreased continuously from 1988 onwards, while the back-to-sleep campaign only began in 1991 [7]. Other unexplained epidemiologic features of the temporal reduction in SIDS include the relatively greater reduction in SIDS among term infants, as compared with infants born at preterm gestation. Data from Avon county in England show that term live births among SIDS cases decreased from 88% in 1984–88 to 63% in 1994–98, the period when SIDS rates declined most rapidly. The proportion of term infants among SIDS cases remained stable thereafter (66% in 1999–2003) and SIDS rates did not change dramatically during this period [7]. Also, a larger decline in SIDS was observed among twins as compared with singletons. In England, SIDS among twin live births declined by 71% from 1.4 per 1000 live births in 1993 to 0.4 per 1000 live births in 2003, whereas among singletons, SIDS rates decreased by 50% from 0.6 to 0.3 per 1000 live births during the same period [8,9]. It is notable that births at term and post-term gestation and twin births (subpopulations which experienced relatively larger reductions in SIDS) also experienced the largest increases in early delivery (i.e., increased obstetric intervention through labour induction and cesarean delivery). Perhaps the most intriguing finding related to SIDS is the paradoxical association between plurality and birth weight-specific SIDS rates [8,9]. SIDS rates are higher among twins as compared with singletons among normal birth weight infants (>3,000 g), whereas at lower birth weights the opposite is true. This phenomenon, sometimes referred to as the paradox of intersecting mortality curves, has also been observed when birth weight- and gestational age-specific stillbirth or infant mortality rates are contrasted across plurality, parity, race and other factors [10]. Various explanations [11] have been proposed to explain the paradox of intersecting mortality curves including the fetuses-at-risk approach [12,13]. This model assumes an intrauterine etiology for the outcomes of interest and gestational age-specific mortality rates calculated using the fetuses-at-risk approach do not exhibit the crossover paradox [14]. Since numerous studies have shown that unexplained antepartum fetal death and SIDS have common features (including similar pathologic characteristics at autopsy [15-17], common risk factors [15,18,19] and congruent temporal trends [1-7,20]), there is good justification for using the fetuses-at-risk approach for examining gestational age-specific SIDS rates. Finally, the contrast between the gestational age-specific patterns of SIDS and diseases of prematurity (e.g., retinopathy of prematurity) suggest that SIDS is a late gestation disease whose incidence may have been affected by temporal changes in the gestational age distribution [14]. 
In this study we explored the extent to which changes in gestational age distribution and changes in gestational age-specific SIDS rates contributed to the temporal decline in SIDS among singletons and twins.
Methods
We used population-based data on singleton and twin births in the United States from 1995–96 to 2004–05 from the National Centre for Health Statistics (NCHS). Information in the NCHS birth/death and fetal death files was abstracted from birth certificates and is publicly available [21], with the birth-infant death linkage carried out by the NCHS (period linked birth-infant death file). We included all infants born at ≥22 week gestation, based on the clinical estimate of gestation at birth [22-24]. States that did not report the clinical estimate of gestational age were not included in the analysis (13.5% of all births). We excluded infants weighing less than 500 grams in order to avoid potential bias due to variable birth registration at the borderline of viability [25-27]. International Classification of Diseases (ICD) codes 798.0 and R95 (ICD 9th version, and ICD 10th version, respectively) were used to identify cases of SIDS. Information about maternal and infant risk factors associated with SIDS, including maternal age, education, race, parity, and marital status, was also obtained from the NCHS files. Temporal changes in the frequency of these risk factors were evaluated by contrasting their population prevalence between 1995–96 and 2004–05. Temporal changes in SIDS rates across categories of each risk factor were evaluated using rate ratios (2004–05 vs. 1995–96) and 95% confidence intervals. Gestational age-specific SIDS rates were compared using two different approaches: A) the traditional method which expressed gestational age-specific SIDS rates as the number of SIDS cases at any gestation divided by the number of live births at that gestation; and B) the fetuses-at-risk approach. Under the fetuses-at-risk approach, gestational age-specific SIDS rates were calculated as the number of SIDS cases among infants born at any gestation divided by the number of fetuses in-utero who were at risk of birth (live birth or stillbirth) at that gestation [12,13]. This latter model assumes an intrauterine etiology for SIDS and has been used previously for estimating gestational age-specific rates of stillbirth, neonatal death and cerebral palsy [12,13]. Both approaches were used to examine the temporal trends in SIDS, because they embody different perspectives; the traditional approach models the gestational age-specific risk of SIDS after birth assuming that live births are the appropriate candidates for SIDS, whereas the fetuses-at-risk approach models an in-utero etiology and assumes that fetuses are the appropriate candidates for SIDS. The temporal trend in SIDS was conceptualized as a potential consequence of temporal changes in the gestational age distribution and/or as a consequence of temporal changes in the gestational age-specific SIDS rates (e.g., due to the back to sleep campaign). The relative contribution of each of these two components to the overall reduction in SIDS was estimated by the Kitagawa decomposition method [28]. This method partitions the mortality rate difference between the two time periods into two components: the mortality difference due to the change in the gestational age distribution and the mortality difference due to the change in gestational age-specific mortality. By holding one component constant at its average (e.g., average gestational age–specific SIDS rate), the Kitagawa method estimates the relative contribution of the second component (i.e., the gestational age distribution), and vice versa. 
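To make the two denominators just described concrete, a minimal Python sketch is given below; the gestational age groups and counts are invented for illustration and are not the study data.

```python
# Illustrative sketch of the two denominators for gestational age-specific SIDS rates.
def sids_rates(weeks, live_births, stillbirths, sids_cases, per=10_000):
    """Return (traditional, fetuses_at_risk) SIDS rates per `per` for each week group."""
    total_births = [lb + sb for lb, sb in zip(live_births, stillbirths)]
    traditional, fetuses_at_risk = {}, {}
    for i, week in enumerate(weeks):
        # Traditional: SIDS cases among infants born in week i / live births in week i
        traditional[week] = per * sids_cases[i] / live_births[i]
        # Fetuses-at-risk: SIDS cases among infants born in week i / fetuses still
        # in utero at week i (all live births and stillbirths at week i or later)
        fetuses_at_risk[week] = per * sids_cases[i] / sum(total_births[i:])
    return traditional, fetuses_at_risk

weeks = [34, 37, 40]                                   # invented gestational-age groups
trad, far = sids_rates(weeks, live_births=[5_000, 40_000, 55_000],
                       stillbirths=[50, 80, 70], sids_cases=[15, 40, 30])
print(trad, far)
```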
The Kitagawa decomposition formula is expressed as:

(1) \( N_1 - N_2 = \sum_{i=1}^{n} \frac{R_{1i} + R_{2i}}{2}\left(F_{1i} - F_{2i}\right) + \sum_{i=1}^{n} \frac{F_{1i} + F_{2i}}{2}\left(R_{1i} - R_{2i}\right) \)

where N1 and N2 denote SIDS rates in 2004–05 and 1995–96, respectively, R1i and R2i refer to gestational age-specific SIDS rates in 2004–05 and 1995–96, F1i and F2i represent proportions of live births in gestational age category i for each respective time period, and i denotes gestational age category (in weeks). The first part of the equation represents the relative contribution of changes in the gestational age distribution to the overall difference in SIDS rates, and the latter part of the equation represents the relative contribution of changes in gestational age-specific SIDS rates. For the decomposition using the traditional approach, the gestational age distribution at gestational week i was defined as the number of live births at that gestation expressed as a proportion of all live births; for the decomposition using the fetuses-at-risk approach, the gestational age distribution at gestational week i was expressed as the number of live births at that gestation expressed as a fraction of all fetuses in-utero at that gestation. The Kitagawa decomposition was carried out separately for singletons and twins born at term vs pre-term gestation (≥37 weeks vs 22–36 weeks). New birth certificates were introduced in the United States in 2003 and led to some increases in missing values for a few variables of interest (e.g., educational status, congenital malformations). Sensitivity analyses were carried out to assess how these changes affected results by restricting temporal trends to a period before the introduction of the new birth certificate, i.e., between 1995–96 and 2001–02. All analyses were carried out using SAS version 9.2. Data used in this study were publicly accessible from the National Centre for Health Statistics [21].
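A minimal sketch of this decomposition, with placeholder numbers rather than the study estimates, could look like the following; by construction, the sum of the two components recovers the overall rate difference N1 − N2.

```python
# Minimal sketch of the Kitagawa decomposition in equation (1); the rates (R)
# and gestational age distributions (F) below are placeholders, not study values.
def kitagawa(R1, R2, F1, F2):
    """Split N1 - N2 into a distribution component and a rate component."""
    distribution = sum((r1 + r2) / 2 * (f1 - f2)
                       for r1, r2, f1, f2 in zip(R1, R2, F1, F2))
    rates = sum((f1 + f2) / 2 * (r1 - r2)
                for r1, r2, f1, f2 in zip(R1, R2, F1, F2))
    return distribution, rates

R1, R2 = [20.0, 8.0, 4.0], [30.0, 12.0, 6.0]        # rates per 10,000 (2004-05 vs 1995-96)
F1, F2 = [0.11, 0.30, 0.59], [0.09, 0.21, 0.70]     # gestational age distributions
dist_part, rate_part = kitagawa(R1, R2, F1, F2)
print(dist_part + rate_part)   # equals the overall difference N1 - N2
```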
Results
The rate of SIDS declined from 8.3 to 5.6 per 10,000 live births from 1995–96 to 2004–05 among singletons (rate difference −2.7, 95% CI: −2.4 and −3.0), and from 14.2 to 10.6 per 10,000 live births among twins (rate difference −3.6, 95% CI: −1.4 and −5.9). On a relative scale, SIDS rates declined by 33% (rate ratio 0.67, 95% CI: 0.67-0.67) among singletons and by 25% among twins (rate ratio 0.75, 95% CI: 0.74-0.75). Changes in maternal characteristics between 1995–96 and 2004–05 are shown in Table 1. The proportion of older, Hispanic and unmarried mothers increased, while the proportion of mothers who were less than 20 years old, non-Hispanic white, and those who smoked during pregnancy decreased from 1995–96 to 2004–05. The frequency of twin live births increased, while gestational age at delivery decreased. There was an increase in the proportion of live births at preterm gestation (from 8.6% to 10.6% at <37 weeks) and at early term gestation (from 21.2% to 29.3% at 37–38 weeks), and a decrease in the proportion of live births at late term gestation (from 67.8% to 59.6% at 39–41 weeks) and post-term gestation (from 2.3% to 0.7% at ≥42 weeks). SIDS rates declined during this period across all maternal characteristics (Table 1). Table 1. Changes in maternal and infant characteristics and SIDS rates, United States, 1995–96 and 2004–05. Figure 1a shows the gestational age-specific rates of SIDS among singleton and twin live births between 28 and 40 weeks gestation as calculated under the traditional perinatal model. Rates of SIDS declined with increasing gestational age among both singletons and twins. Rates of SIDS were lower among twins at preterm gestation compared with singleton live births at preterm gestation, but the opposite was true at later gestational ages (paradox of intersecting perinatal mortality curves). Figure 1b shows gestational age-specific rates of SIDS among singletons and twins under the fetuses-at-risk model. Rates of SIDS increased with increasing gestation among both singletons and twins, and SIDS rates were higher among twins at all gestational ages. Figure 1. Gestational age-specific rates of SIDS between 28 and 40 weeks gestation among singleton and twin live births according to the traditional perinatal model (Figure 1a) and according to the fetuses-at-risk model (Figure 1b), United States, 1995–2005. Substantial changes occurred in the gestational age distribution of singleton live births between 1995–96 and 2004–05 (Figure 2a). The proportion of singleton live births at gestational ages up to 39 weeks increased, while the proportion after 39 weeks declined. Under the traditional model, gestational age-specific SIDS rates showed a temporal decline at all gestational ages (Figure 2b), while under the fetuses-at-risk approach, gestational age-specific SIDS rates showed a temporal decline at 39 weeks and later (Figure 2c). Figure 3a shows changes in the gestational age distribution among twins between 1995–96 and 2004–05; the proportion of live births up to 37 weeks increased and there was a decline in the proportion of births at 38 weeks and later. Gestational age-specific SIDS rates among twins showed a temporal decline at all gestations under the traditional model (Figure 3b), while under the fetuses-at-risk model, no decline in rates was evident except at 40 weeks of gestation (Figure 3c).
Figure 2. Changes in the gestational age distribution of singleton live births (Figure 2a), in gestational age-specific rates of SIDS (traditional model, Figure 2b) and in gestational age-specific rates of SIDS (fetuses-at-risk model, Figure 2c) among singletons at 28 to 40 weeks gestation, United States, 1995–96 and 2004–05. Figure 3. Changes in the gestational age distribution of twin live births (Figure 3a), in gestational age-specific rates of SIDS (traditional model, moving average, Figure 3b) and in gestational age-specific rates of SIDS (fetuses-at-risk model, Figure 3c) among twins between 28 and 40 weeks gestation, United States, 1995–96 and 2004–05. Table 2 presents rates of SIDS among singletons and twins at preterm and term gestation in 1995–96 and in 2004–05, with temporal changes expressed as rate differences and rate ratios. Under the traditional model, singletons showed a larger relative decline in SIDS than twins (rate ratio 0.67 vs 0.75), whereas in absolute terms twins showed a larger reduction than singletons (rate difference −3.61 vs −2.72 per 10,000 live births). Reductions in SIDS rates were larger at preterm gestation compared with term gestation in terms of both the ratio and the difference measure. Under the fetuses-at-risk approach, temporal changes were larger at term gestation among both singletons and twins, irrespective of the effect measure (whether ratio or difference). Table 2. SIDS rates, rate ratios and rate differences, by plurality and gestation, United States, 2004–05 vs. 1995–96. 95% CI denotes 95% confidence intervals; SIDS cases were defined by the underlying cause of death 798.0 (ICD-9) in 1995–96 and R95 (ICD-10) in 2004–05. The discrepancy in total SIDS rates under the traditional and fetuses-at-risk approaches arises because stillbirths were included in the denominator for the latter calculation. Rate differences were calculated per 10,000 live births and 10,000 fetuses-at-risk. Among singletons, the traditional Kitagawa decomposition method revealed that the overall temporal reduction in SIDS rates (−2.7 cases per 10,000 live births) was entirely due to the decrease in gestational age-specific SIDS rates (Table 3). In fact, under this model, changes in the gestational age distribution adversely impacted SIDS rates. However, the modified Kitagawa decomposition method, based on the fetuses-at-risk approach, yielded a different partitioning: changes in the gestational age distribution were responsible for 45% of the overall decline in SIDS (−1.2 cases per 10,000 fetuses-at-risk), whereas changes in gestational age-specific rates were responsible for 55% of the overall decline (−1.5 cases per 10,000 fetuses-at-risk, Table 3). Table 3. Relative contribution of changes in the gestational age distribution and in gestational age-specific SIDS rates to the overall reduction in SIDS rates by plurality and gestation, United States, 2004–05 vs. 1995–96. Explanatory note: Between 1995–96 and 2004–05, singletons experienced a decrease in SIDS (−2.72 SIDS cases per 10,000 live births; 100% decrease). Under the traditional model, changes in the gestational age distribution among singletons increased SIDS rates (+0.37 cases per 10,000 live births; 13.6% increase), whereas changes in gestational age-specific SIDS rates decreased SIDS rates (−3.09 cases per 10,000 live births; 113.6% decrease). The decline in SIDS among twins followed a similar pattern.
Under the traditional Kitagawa decomposition, the entire temporal change in SIDS rates (−3.6 cases per 10,000 live births) was due to changes in gestational age-specific mortality (Table 3). Under the fetuses-at-risk approach, however, the temporal shift in the gestational age distribution was responsible for 63% of the decline in SIDS (−2.3 cases per 10,000 fetuses-at-risk), while changes in gestational age-specific SIDS rates were responsible for 37% of the observed SIDS reduction (−1.3 SIDS cases per 10,000 fetuses-at-risk, Table 3). The decomposition of the SIDS decline yielded different results at preterm vs term gestation. Under the traditional model, the change in the gestational age distribution had a relatively larger adverse effect at preterm gestation among singletons, whereas among twins, the change in the gestational age distribution adversely affected SIDS rates at preterm gestation only. Changes in gestational age-specific SIDS rates were responsible for a larger proportion of the SIDS decline at preterm gestation compared to term gestation among both singletons and twins (Table 3). Under the fetuses-at-risk approach, gestational age distribution changes contributed substantially to the SIDS decline at term gestation among both singletons and twins. On the other hand, changes in gestational age-specific SIDS rates contributed more to the SIDS decline at preterm gestation compared to term gestation (Table 3). Results from sensitivity analyses restricted to years prior to the introduction of new birth certificates in 2003 showed that between 1995–96 and 2001–02, the decline in SIDS rates was similar among singletons and twins. SIDS rates declined from 8.3 to 5.9 per 10,000 live births among singletons and from 14.2 to 10.5 per 10,000 live births among twins (rate ratio for singletons 0.72, 95% CI 0.69-0.75 and for twins 0.74, 95% CI 0.62-0.89).
Conclusions
Our study indicates that, in addition to the back-to-sleep campaign, temporal changes in the gestational age distribution may have contributed to the overall reduction in SIDS. This effect of a temporal shift towards earlier gestation at delivery was observed predominantly at term and post-term gestation, and to a larger extent among twins. These findings support the hypothesis that antenatal factors contribute to the origins of SIDS, and endorse the concept of similar causal pathways between unexplained fetal death and SIDS.
[ "Background", "Abbreviations", "Financial disclosure", "Competing of interests", "Authors’ contributions", "Pre-publication history" ]
[ "Although Sudden Infant Death Syndrome (SIDS) is a leading cause of post-neonatal death in industrialized countries, its etiology is largely unknown [1]. While the reduction in prone sleeping following the back-to-sleep campaign has led to a decline in SIDS in many countries [2-6], there are several puzzling aspects related to this intervention and the epidemiology of SIDS. For instance, the onset of the decline in SIDS preceded the initiation of the back-to-sleep campaign [2-7]. The reduction in SIDS in the United States began in 1989, while the back-to-sleep campaign was initiated in 1994 [6]. Similarly, SIDS rates in the United Kingdom decreased continuously from 1988 onwards, while the back-to-sleep campaign only began in 1991 [7].\nOther unexplained epidemiologic features of the temporal reduction in SIDS include the relatively greater reduction in SIDS among term infants, as compared with infants born at preterm gestation. Data from Avon county in England show that term live births among SIDS cases decreased from 88% in 1984–88 to 63% in 1994–98, the period when SIDS rates declined most rapidly. The proportion of term infants among SIDS cases remained stable thereafter (66% in 1999–2003) and SIDS rates did not change dramatically during this period [7]. Also, a larger decline in SIDS was observed among twins as compared with singletons. In England, SIDS among twin live births declined by 71% from 1.4 per 1000 live births in 1993 to 0.4 per 1000 live births in 2003, whereas among singletons, SIDS rates decreased by 50% from 0.6 to 0.3 per 1000 live births during the same period [8,9]. It is notable that births at term and post-term gestation and twin births (subpopulations which experienced relatively larger reductions in SIDS) also experienced the largest increases in early delivery (i.e., increased obstetric intervention through labour induction and cesarean delivery).\nPerhaps the most intriguing finding related to SIDS is the paradoxical association between plurality and birth weight-specific SIDS rates [8,9]. SIDS rates are higher among twins as compared with singletons among normal birth weight infants (>3,000 g), whereas at lower birth weights the opposite is true. This phenomenon, sometimes referred to as the paradox of intersecting mortality curves, has also been observed when birth weight- and gestational age-specific stillbirth or infant mortality rates are contrasted across plurality, parity, race and other factors [10].\nVarious explanations [11] have been proposed to explain the paradox of intersecting mortality curves including the fetuses-at-risk approach [12,13]. This model assumes an intrauterine etiology for the outcomes of interest and gestational age-specific mortality rates calculated using the fetuses-at-risk approach do not exhibit the crossover paradox [14]. Since numerous studies have shown that unexplained antepartum fetal death and SIDS have common features (including similar pathologic characteristics at autopsy [15-17], common risk factors [15,18,19] and congruent temporal trends [1-7,20]), there is good justification for using the fetuses-at-risk approach for examining gestational age-specific SIDS rates. 
Abbreviations
SIDS, Sudden Infant Death Syndrome; NCHS, National Centre for Health Statistics; ICD, International Classification of Diseases; CI, confidence interval.

Financial disclosure
None of the authors have a personal financial relationship relevant to this article.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
SL, JAH and KSJ contributed to the conception and design of the study; SL conducted the data analysis and drafted the manuscript; JAH and KSJ revised the manuscript for intellectual content. All authors approved the final version of the manuscript.

Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2393/12/59/prepub
[ "SIDS", "Temporal trend", "Gestational age" ]
Background: While the reduction in infants' prone sleeping has led to a temporal decline in Sudden Infant Death Syndrome (SIDS), some aspects of this trend remain unexplained. We assessed whether changes in the gestational age distribution of births also contributed to the temporal reduction in SIDS. Methods: SIDS patterns among singleton and twin births in the United States were analysed in 1995-96 and 2004-05. The temporal reduction in SIDS was partitioned using the Kitagawa decomposition method into reductions due to changes in the gestational age distribution and reductions due to changes in gestational age-specific SIDS rates. Both the traditional and the fetuses-at-risk models were used. Results: SIDS rates declined with increasing gestation under the traditional perinatal model. Rates were higher at early gestation among singletons compared with twins, while the reverse was true at later gestation. Under the fetuses-at-risk model, SIDS rates increased with increasing gestation and twins had higher rates of SIDS than singletons at all gestational ages. Between 1995-96 and 2004-05, SIDS declined from 8.3 to 5.6 per 10,000 live births among singletons and from 14.2 to 10.6 per 10,000 live births among twins. Decomposition using the traditional model showed that the SIDS reduction among singletons and twins was entirely due to changes in the gestational age-specific SIDS rate. The fetuses-at-risk model attributed 45% of the SIDS reduction to changes in the gestational age distribution and 55% of the reduction to changes in gestational age-specific SIDS rates among singletons; among twins these proportions were 64% and 36%, respectively. Conclusions: Changes in the gestational age distribution may have contributed to the recent temporal reduction in SIDS.
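The Kitagawa decomposition named in the Methods above partitions the change in the crude SIDS rate between two periods into a component attributable to changes in gestational age-specific rates and a component attributable to changes in the gestational age distribution. The sketch below is an illustration only: the gestational age groups, proportions and rates are hypothetical values, not the study's data.

```python
def kitagawa_decompose(comp1, rates1, comp2, rates2):
    """Split the change in a crude rate (period 2 minus period 1) into a
    rate component and a composition component (Kitagawa decomposition).

    comp1, comp2   : dict, stratum -> proportion of births (each sums to 1)
    rates1, rates2 : dict, stratum -> stratum-specific rate
    Returns (rate_effect, composition_effect); their sum equals the
    difference between the two crude rates.
    """
    strata = list(comp1)
    rate_effect = sum((comp1[g] + comp2[g]) / 2 * (rates2[g] - rates1[g]) for g in strata)
    comp_effect = sum((rates1[g] + rates2[g]) / 2 * (comp2[g] - comp1[g]) for g in strata)
    return rate_effect, comp_effect


# Hypothetical SIDS rates per 10,000 live births by gestational age group.
comp_9596 = {"<34": 0.02, "34-36": 0.08, "37-40": 0.80, ">=41": 0.10}
rate_9596 = {"<34": 30.0, "34-36": 15.0, "37-40": 7.0, ">=41": 6.0}
comp_0405 = {"<34": 0.03, "34-36": 0.09, "37-40": 0.83, ">=41": 0.05}
rate_0405 = {"<34": 22.0, "34-36": 11.0, "37-40": 4.5, ">=41": 4.0}

rate_part, comp_part = kitagawa_decompose(comp_9596, rate_9596, comp_0405, rate_0405)
crude_1 = sum(comp_9596[g] * rate_9596[g] for g in comp_9596)
crude_2 = sum(comp_0405[g] * rate_0405[g] for g in comp_0405)
print(f"crude change: {crude_2 - crude_1:.3f}")
print(f"due to rate changes: {rate_part:.3f}, due to distribution changes: {comp_part:.3f}")
```

The shares reported in the abstract (e.g., 45% vs 55% among singletons) correspond to each component divided by the total crude change.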
Background: Although Sudden Infant Death Syndrome (SIDS) is a leading cause of post-neonatal death in industrialized countries, its etiology is largely unknown [1]. While the reduction in prone sleeping following the back-to-sleep campaign has led to a decline in SIDS in many countries [2-6], there are several puzzling aspects related to this intervention and the epidemiology of SIDS. For instance, the onset of the decline in SIDS preceded the initiation of the back-to-sleep campaign [2-7]. The reduction in SIDS in the United States began in 1989, while the back-to-sleep campaign was initiated in 1994 [6]. Similarly, SIDS rates in the United Kingdom decreased continuously from 1988 onwards, while the back-to-sleep campaign only began in 1991 [7]. Other unexplained epidemiologic features of the temporal reduction in SIDS include the relatively greater reduction in SIDS among term infants, as compared with infants born at preterm gestation. Data from Avon county in England show that term live births among SIDS cases decreased from 88% in 1984–88 to 63% in 1994–98, the period when SIDS rates declined most rapidly. The proportion of term infants among SIDS cases remained stable thereafter (66% in 1999–2003) and SIDS rates did not change dramatically during this period [7]. Also, a larger decline in SIDS was observed among twins as compared with singletons. In England, SIDS among twin live births declined by 71% from 1.4 per 1000 live births in 1993 to 0.4 per 1000 live births in 2003, whereas among singletons, SIDS rates decreased by 50% from 0.6 to 0.3 per 1000 live births during the same period [8,9]. It is notable that births at term and post-term gestation and twin births (subpopulations which experienced relatively larger reductions in SIDS) also experienced the largest increases in early delivery (i.e., increased obstetric intervention through labour induction and cesarean delivery). Perhaps the most intriguing finding related to SIDS is the paradoxical association between plurality and birth weight-specific SIDS rates [8,9]. SIDS rates are higher among twins as compared with singletons among normal birth weight infants (>3,000 g), whereas at lower birth weights the opposite is true. This phenomenon, sometimes referred to as the paradox of intersecting mortality curves, has also been observed when birth weight- and gestational age-specific stillbirth or infant mortality rates are contrasted across plurality, parity, race and other factors [10]. Various explanations [11] have been proposed to explain the paradox of intersecting mortality curves including the fetuses-at-risk approach [12,13]. This model assumes an intrauterine etiology for the outcomes of interest and gestational age-specific mortality rates calculated using the fetuses-at-risk approach do not exhibit the crossover paradox [14]. Since numerous studies have shown that unexplained antepartum fetal death and SIDS have common features (including similar pathologic characteristics at autopsy [15-17], common risk factors [15,18,19] and congruent temporal trends [1-7,20]), there is good justification for using the fetuses-at-risk approach for examining gestational age-specific SIDS rates. Finally, the contrast between the gestational age-specific patterns of SIDS and diseases of prematurity (e.g., retinopathy of prematurity) suggest that SIDS is a late gestation disease whose incidence may have been affected by temporal changes in the gestational age distribution [14]. 
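The contrast between the traditional perinatal model and the fetuses-at-risk model described above comes down to the choice of denominator. The sketch below uses hypothetical counts (not the study's data) to show how the same SIDS deaths yield falling rates when divided by births at each gestational week (traditional) but rising rates when divided by all fetuses still undelivered at that week (fetuses-at-risk).

```python
# births[g]: live births delivered at gestational week g (hypothetical)
# sids[g]  : SIDS deaths among infants born at week g (hypothetical)
births = {34: 1_000, 36: 4_000, 38: 40_000, 40: 50_000, 42: 5_000}
sids   = {34: 3,     36: 6,     38: 28,     40: 30,     42: 4}

weeks = sorted(births)

# Traditional perinatal model: infants born at week g form the denominator.
traditional = {g: 10_000 * sids[g] / births[g] for g in weeks}

# Fetuses-at-risk model: all fetuses still in utero at week g, i.e. those
# delivered at week g or later, form the denominator.
fetuses_at_risk = {
    g: 10_000 * sids[g] / sum(births[h] for h in weeks if h >= g)
    for g in weeks
}

for g in weeks:
    print(f"week {g}: traditional {traditional[g]:.2f}, "
          f"fetuses-at-risk {fetuses_at_risk[g]:.2f} per 10,000")
```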
In this study we explored the extent to which changes in gestational age distribution and changes in gestational age-specific SIDS rates contributed to the temporal decline in SIDS among singletons and twins. Conclusions: In conclusion, our study indicates that in addition to the back-to-sleep campaign, temporal changes in the gestational age distribution may have contributed to the overall reduction in SIDS. This effect of a temporal shift towards earlier gestation at delivery has been observed predominantly at term and post-term gestation, and to a larger extent among twins. These findings support the hypothesis that antenatal factors contribute to the origins of SIDS, and endorse the concept of similar causal pathways between unexplained fetal death and SIDS.
Background: While the reduction in infants' prone sleeping has led to a temporal decline in Sudden Infant Death Syndrome (SIDS), some aspects of this trend remain unexplained. We assessed whether changes in the gestational age distribution of births also contributed to the temporal reduction in SIDS. Methods: SIDS patterns among singleton and twin births in the United States were analysed in 1995-96 and 2004-05. The temporal reduction in SIDS was partitioned using the Kitagawa decomposition method into reductions due to changes in the gestational age distribution and reductions due to changes in gestational age-specific SIDS rates. Both the traditional and the fetuses-at-risk models were used. Results: SIDS rates declined with increasing gestation under the traditional perinatal model. Rates were higher at early gestation among singletons compared with twins, while the reverse was true at later gestation. Under the fetuses-at-risk model, SIDS rates increased with increasing gestation and twins had higher rates of SIDS than singletons at all gestational ages. Between 1995-96 and 2004-05, SIDS declined from 8.3 to 5.6 per 10,000 live births among singletons and from 14.2 to 10.6 per 10,000 live births among twins. Decomposition using the traditional model showed that the SIDS reduction among singletons and twins was entirely due to changes in the gestational age-specific SIDS rate. The fetuses-at-risk model attributed 45% of the SIDS reduction to changes in the gestational age distribution and 55% of the reduction to changes in gestational age-specific SIDS rates among singletons; among twins these proportions were 64% and 36%, respectively. Conclusions: Changes in the gestational age distribution may have contributed to the recent temporal reduction in SIDS.
5,030
329
[ 698, 27, 14, 10, 45, 16 ]
10
[ "sids", "gestational", "age", "rates", "gestational age", "sids rates", "gestation", "risk", "specific", "gestational age specific" ]
[ "term infants sids", "sids sudden infant", "sids sleep campaign", "infants died sids", "rates sids sleep" ]
[CONTENT] SIDS | Temporal trend | Gestational age [SUMMARY]
[CONTENT] SIDS | Temporal trend | Gestational age [SUMMARY]
[CONTENT] SIDS | Temporal trend | Gestational age [SUMMARY]
[CONTENT] SIDS | Temporal trend | Gestational age [SUMMARY]
[CONTENT] SIDS | Temporal trend | Gestational age [SUMMARY]
[CONTENT] SIDS | Temporal trend | Gestational age [SUMMARY]
[CONTENT] Adult | Diseases in Twins | Female | Gestational Age | Health Promotion | Humans | Infant | Male | Models, Statistical | Risk Factors | Sudden Infant Death | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Diseases in Twins | Female | Gestational Age | Health Promotion | Humans | Infant | Male | Models, Statistical | Risk Factors | Sudden Infant Death | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Diseases in Twins | Female | Gestational Age | Health Promotion | Humans | Infant | Male | Models, Statistical | Risk Factors | Sudden Infant Death | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Diseases in Twins | Female | Gestational Age | Health Promotion | Humans | Infant | Male | Models, Statistical | Risk Factors | Sudden Infant Death | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Diseases in Twins | Female | Gestational Age | Health Promotion | Humans | Infant | Male | Models, Statistical | Risk Factors | Sudden Infant Death | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Diseases in Twins | Female | Gestational Age | Health Promotion | Humans | Infant | Male | Models, Statistical | Risk Factors | Sudden Infant Death | United States | Young Adult [SUMMARY]
[CONTENT] term infants sids | sids sudden infant | sids sleep campaign | infants died sids | rates sids sleep [SUMMARY]
[CONTENT] term infants sids | sids sudden infant | sids sleep campaign | infants died sids | rates sids sleep [SUMMARY]
[CONTENT] term infants sids | sids sudden infant | sids sleep campaign | infants died sids | rates sids sleep [SUMMARY]
[CONTENT] term infants sids | sids sudden infant | sids sleep campaign | infants died sids | rates sids sleep [SUMMARY]
[CONTENT] term infants sids | sids sudden infant | sids sleep campaign | infants died sids | rates sids sleep [SUMMARY]
[CONTENT] term infants sids | sids sudden infant | sids sleep campaign | infants died sids | rates sids sleep [SUMMARY]
[CONTENT] sids | gestational | age | rates | gestational age | sids rates | gestation | risk | specific | gestational age specific [SUMMARY]
[CONTENT] sids | gestational | age | rates | gestational age | sids rates | gestation | risk | specific | gestational age specific [SUMMARY]
[CONTENT] sids | gestational | age | rates | gestational age | sids rates | gestation | risk | specific | gestational age specific [SUMMARY]
[CONTENT] sids | gestational | age | rates | gestational age | sids rates | gestation | risk | specific | gestational age specific [SUMMARY]
[CONTENT] sids | gestational | age | rates | gestational age | sids rates | gestation | risk | specific | gestational age specific [SUMMARY]
[CONTENT] sids | gestational | age | rates | gestational age | sids rates | gestation | risk | specific | gestational age specific [SUMMARY]
[CONTENT] sids | rates | sids rates | births | age | gestational age | gestational | specific | live | gestational age specific [SUMMARY]
[CONTENT] gestational | age | gestational age | sids | birth | gestational age specific | specific | rates | age specific | number [SUMMARY]
[CONTENT] sids | rates | gestational | gestational age | age | sids rates | 10 000 | gestation | births | 10 [SUMMARY]
[CONTENT] sids | temporal | term | gestation | addition sleep campaign temporal | larger extent twins findings | larger extent twins | larger extent | twins findings | concept [SUMMARY]
[CONTENT] sids | gestational | rates | age | gestational age | authors | sids rates | gestation | temporal | risk [SUMMARY]
[CONTENT] sids | gestational | rates | age | gestational age | authors | sids rates | gestation | temporal | risk [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] the United States | 1995-96 ||| Kitagawa ||| [SUMMARY]
[CONTENT] ||| twins ||| ||| Between 1995-96 | 8.3 | 5.6 | 10,000 | 14.2 | 10.6 | 10,000 ||| ||| 45% | 55% | 64% | 36% [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| the United States | 1995-96 ||| Kitagawa ||| ||| ||| twins ||| ||| Between 1995-96 | 8.3 | 5.6 | 10,000 | 14.2 | 10.6 | 10,000 ||| ||| 45% | 55% | 64% | 36% ||| [SUMMARY]
[CONTENT] ||| ||| the United States | 1995-96 ||| Kitagawa ||| ||| ||| twins ||| ||| Between 1995-96 | 8.3 | 5.6 | 10,000 | 14.2 | 10.6 | 10,000 ||| ||| 45% | 55% | 64% | 36% ||| [SUMMARY]
Risk factors for wound infection caused by Methicillin Resistant Staphylococcus aureus
34394309
Methicillin Resistant Staphylococcus aureus (MRSA) causes infection in hospitals and communities. The prevalence and risk factors of MRSA infection are not homogeneous across the globe.
BACKGROUND
A cross-sectional case control study was conducted at a tertiary care hospital in India. The risk factors were collected using a checklist from 130 MRSA and 130 Methicillin sensitive Staphylococcus aureus (MSSA) infected patients. The pathogens were isolated from the wound swabs according to Clinical and Laboratory Standards Institute guidelines.
METHODS
Both the groups were comparable in terms of age, gender, diabetic status, undergoing invasive procedures, urinary catheterization and smoking (p>0.05). Multivariate logistic regression revealed surgical treatment (OR 4.355; CI 1.03, 18.328; p=0.045), prolonged hospitalization (OR 0.307; CI 0.11, 0.832; p=0.020), tracheostomy (OR 5.298, CI 1.16, 24.298; p=0.032), pressure/venous ulcer (OR 7.205; CI 1.75, 29.606; p=0.006) and previous hospitalization (OR 2.883; CI 1.25, 6.631; p=0.013) as significant risk factors for MRSA infection.
RESULTS
Surgical treatment, prolonged hospitalization, a history of recent hospitalization, tracheostomy for ventilation and pressure/venous ulcers were the key risk factors. Therefore, special attention has to be given to the preventable risk factors while caring for hospitalized patients to prevent MRSA infection.
CONCLUSION
[ "Adult", "Aged", "Aged, 80 and over", "Anti-Bacterial Agents", "Female", "Hospitalization", "Humans", "India", "Inpatients", "Male", "Methicillin-Resistant Staphylococcus aureus", "Middle Aged", "Staphylococcal Infections", "Wound Infection" ]
8356623
Introduction
Methicillin Resistant Staphylococcus aureus (MRSA) is a Gram-positive pathogen, having the ability to cause hospital associated infection and/or community acquired infection. Hospital associated MRSA infection is one of the major problems affecting both patients and care providers1. MRSA colonization is predominantly present in the nose and skin of humans2. Nasal colonization of Staphylococcus aureus and MRSA are the independent predictors of MRSA infection3. Colonized bacteria may not cause infection. However, it can enter the body through injured skin or mucus membrane and can cause simple skin infection to life threatening bacteremia. The spectrum includes pneumonia, bacteremia, skin and soft tissue infection, pyomyositis, sepsis, osteomyelitis, necrotizing pneumonia and necrotizing fasciitis4. Though MRSA can be isolated from blood, nose, wound, urine, respiratory tract, sputum and other body fluids, the prevalence is high in wounds5. Acquiring MRSA infection is multifactorial, and the risk factors described are prolonged post-operative state, emergency admissions and prior treatment with multiple antibiotics6. Other notable treatment related factors are emergency surgery, prolonged or multiple hospital stays, use of invasive devices (catheters, surgical drains, gastric/endotracheal tubes), repeated surgeries, treatment with multiple broad-spectrum antibiotics, inpatient in a neonatal or surgical ICU and poor infection control practices7–12. The host related factors are age over 65 years, any conditions that suppress immune system function, open wound or injuries, unsanitary or crowded living conditions like dormitories or military barracks, sharing towels or other personal items7–12. Comorbidities such as diabetes mellitus (66%), hypertension (66%) and sickle cell diseases (33%) are also the threat for acquiring MRSA infection13. MRSA contaminates the hands of healthcare professionals (59.6%)7. Even the dress of healthcare professionals can spread MRSA. According to the society for Healthcare Epidemiology of America (SHEA) report (2014), HCPs opine that their attire, including footwear, is important in preventing transmission of infection14. Also, MRSA is found on hospital surfaces, disinfectant areas and reusable equipment15. Though the cleaning of patient surroundings in ICU has shown a significant reduction in MRSA, after 24 hours of cleaning, the risk of MRSA growth in the patient environment remained high16. Although some similar strains of MRSA are seen in many countries depicting international dissemination, the spread is not homogenous around the globe17. Most of the studies have been conducted in developed countries 3,12,18. No published information on risk factors of MRSA infection is traced in India. Therefore, we aimed at identifying the risk factors of MRSA infection in an Indian hospital to institute appropriate preventive measures.
Methods
Study design The study has adopted a cross-sectional case control study design (1:1) with a quantitative approach. Study setting The study was carried out in a tertiary care hospital in South India. The hospital has almost all the super specialties with 2032 beds and provides both in-patient and outpatient healthcare services. It caters to the health needs of a large population. It is a private university hospital meeting the teaching needs of many health science courses such as medical, dental, nursing and other allied health courses. The hospital had more than 80% occupancy during the study period. The hospital is certified by the International Organization for Standardization, (ISO) 14001: 2015 ISO 50001:2011 and accredited by National Accreditation Board for Hospitals & Healthcare Providers (NABH). Participants and sample size Hospitalized patients infected with MRSA were the cases. Patients with Methicillin Sensitive Staphylococcus aureus (MSSA) infections were considered as controls. The sample size for identifying the risk factors of MRSA infection was calculated based on the previous study reports by using the following formula 19. The proportion at baseline was 0.73 and the expected outcome set was 0.53 (based on the previous hospitalization as the risk factor)19. The measured confidence interval was 95% with 80% power, and the calculated sample was 88 in each group. Considering the presence of skin ulcers (baseline 0.33 and expected outcome 0.18)19 the calculated sample size was 129. The study included 130 patients with MRSA infection (cases) and 130 patients with MSSA infection (controls). Hospitalized patients who had MRSA grown in their wound culture were considered as cases, whereas hospitalized patients with MSSA grown in the wound swabs were taken as controls. We recruited both male and female adult patients (18 years and above) of general wards, medical and surgical intensive care units. The wards included were medical, surgical, dermatology, orthopedics, cardiology, and ear, nose and throat (ENT) specialties. The patients who were hospitalized for more than two days (>48 hours) as in-patients were included. Patients who were immunosuppressed owing to human immunodeficiency virus infection, cancer or immunosuppressive therapy were excluded from the study. However, patients with agranulocytosis, leukocytosis and mild autoimmune disorder were not excluded. Risk assessment checklist There was no standardized tool available for identifying the risk factors of MRSA infection. Hence, a checklist of risk factors for MRSA infection was developed after an extensive literature search and discussion with microbiologists and Hospital Infection Control Committee (HICC) members. The checklist had 31 dichotomous items with ‘yes’ or ‘no’ options. The content validity was done by nine experts from different healthcare professions (members of HICC, microbiologists, physicians, faculty of nursing and a policymaker). The content validity index was 0.94. The reliability of the tool was established by the inter-rater method, and the calculated ‘r’ was 0.974. Ethical consideration Ethical permission was obtained from the Institutional Ethics Committee (IEC). The study was registered with the Clinical Trials Registry – India (CTRI/2018/01/011510). Administrative approval was taken from the Medical Superintendent and Chief Operating Officer of the hospital. Informed written consent from study participants was obtained. Data analysis The data were coded and entered in Statistical Package for Social Sciences (SPSS 16.0) version and the analysis was performed using logistic regression. The demographic characteristics are given in frequency and percentage. Data collection procedure We collected the data from June 2017 to May 2018. The hospitalized patients whose wound swab grew MRSA or MSSA were approached as presented in the flow diagram (figure 1). After obtaining the consent, investigators collected information from the patients and the medical records using a risk assessment checklist. A total of 260 (130 MRSA infected and 130 MSSA infected) patients were recruited. Flow diagram of patient recruitment and data collection
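The Methods above cite the sample-size formula to reference 19 without reproducing it. As an assumed reconstruction, a standard two-proportion comparison formula yields the reported figures of 88 and 129 per group from the stated proportions, two-sided 95% confidence and 80% power.

```python
from math import ceil

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Sample size per group for comparing two proportions
    (two-sided 95% confidence and 80% power by default)."""
    return ceil(
        (z_alpha + z_beta) ** 2
        * (p1 * (1 - p1) + p2 * (1 - p2))
        / (p1 - p2) ** 2
    )

# Previous hospitalization as the exposure: 0.73 vs 0.53 -> 88 per group.
print(n_per_group(0.73, 0.53))   # 88
# Presence of skin ulcers: 0.33 vs 0.18 -> 129 per group.
print(n_per_group(0.33, 0.18))   # 129
```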
Results
Both MRSA and MSSA infection groups were comparable in terms of age, gender, admission status, immunity, diabetes mellitus, smoking status, having undergone invasive diagnostic procedures, presence of a catheter, feeding tubes and duration of surgery as shown in Table 1. The mean duration of hospital stay was 9.9 days (range: 1–38 days) for the MRSA infected patients and 9.7 days (range: 1–30 days) for the MSSA infected patients. Comparison of demographic characteristics among both the groups of MRSA and MSSA infected patient Both the MRSA and MSSA infected patients were comparable (Table 1) as the odds ratio was not significant at p<0.05. Hence, the groups were considered for further statistical analysis to identify the risk factors. The risk factors given in Table 2 were considered for multiple logistic regression since the univariate analysis indicated statistical significance. The risk factors along with the odds ratio and 95% confidence interval are given in Table 2. The risk factors through univariate logistic regression Significance considered as p<0.05 level The risk factors for MRSA infection which showed significance such as prolonged hospitalization, undergoing surgical procedures, surgical drain, previous use of antibiotics, presence of open wounds, having endotracheal and tracheostomy tubes, presence of intravenous access, presence of vascular/pressure ulcers and recent previous hospitalization were considered for further multiple logistic regression. Multiple logistic regression was adjusted with the age (advanced age) and gender (female), as these two factors are biologically significant and adjusted odds ratios are given in Table 3. Multiple logistic regression with adjusted Odds Ratio on risk factors of MRSA infection Logistic regression model: log (odds of MRSA) = -2.482 + 1.471 (performing surgery)+ -1.182 (prolonged hospitalization) + 1.667 (presence of tracheostomy tube) + 1.975 (presence of pressure/venous ulcer) + 1.059 (Recent hospitalization) Out of the above risk factors, surgery as a treatment option (OR 4.355; CI 1.03, 18.328; p=0.045), prolonged hospitalization (OR 0.307; CI 0.11, 0.832; p=0.020), presence of tracheostomy tube (OR 5.298, CI 1.16, 24.298; p=0.032), presence of pressure/venous ulcer (OR 7.205; CI 1.75, 29.606; p=0.006) and previous recent hospitalization (OR 2.883; CI 1.25, 6.631; p=0.013) were significant risk factors for causing MRSA infection among hospitalized patients.
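The fitted logistic model reported above can be turned into a predicted probability of MRSA (rather than MSSA) wound infection for a given combination of risk factors. The sketch below uses the published coefficients; the predictor names are illustrative labels for the binary (0/1) variables in Table 3, not the study's variable names. Exponentiating a coefficient recovers the corresponding adjusted odds ratio.

```python
from math import exp

# Coefficients as reported in the fitted model above (binary predictors coded 0/1).
COEF = {
    "intercept": -2.482,
    "surgery": 1.471,
    "prolonged_hospitalization": -1.182,
    "tracheostomy_tube": 1.667,
    "pressure_venous_ulcer": 1.975,
    "recent_hospitalization": 1.059,
}

def mrsa_probability(**predictors):
    """Predicted probability of MRSA (vs MSSA) wound infection from the
    reported logistic model; unspecified predictors default to 0."""
    log_odds = COEF["intercept"] + sum(
        COEF[name] * value for name, value in predictors.items()
    )
    return 1 / (1 + exp(-log_odds))

# exp(coefficient) reproduces the adjusted odds ratio, e.g. surgery ~4.35.
print(round(exp(COEF["surgery"]), 3))
# Illustrative patient: recent surgery, tracheostomy tube and a pressure ulcer.
print(round(mrsa_probability(surgery=1, tracheostomy_tube=1,
                             pressure_venous_ulcer=1), 2))   # ~0.93
```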
Conclusion
We identified that the damage to the skin and mucosal barriers such as undergoing surgical procedures and the existence of pressure or venous ulcers increase the risk of acquiring MRSA infection. Prolonged length of hospital stay and the history of recent hospitalization are the other risk factors. In addition, tracheostomy escalates the threat of MRSA infection in wounds of patients admitted to the hospital. Hence, controlling these risk factors may help in reducing the burden of infection.
[ "Study design", "Study setting", "Participants and sample size", "Risk assessment checklist", "Ethical consideration", "Data analysis", "Data collection procedure", "Limitation" ]
[ "The study has adopted a cross-sectional case control study design (1:1) with a quantitative approach.", "The study was carried out in a tertiary care hospital in South India. The hospital has almost all the super specialties with 2032 beds and provides both in-patient and outpatient healthcare services. It caters to the health needs of a large population. It is a private university hospital meeting the teaching needs of many health science courses such as medical, dental, nursing and other allied health courses. The hospital had more than 80% occupancy during the study period. The hospital is certified by the International Organization for Standardization, (ISO) 14001: 2015 ISO 50001:2011 and accredited by National Accreditation Board for Hospitals & Healthcare Providers (NABH).", "Hospitalized patients infected with MRSA were the cases. Patients with Methicillin Sensitive Staphylococcus aureus (MSSA) infections were considered as controls.\nThe sample size for identifying the risk factors of MRSA infection was calculated based on the previous study reports by using the following formula 19.\nThe proportion at baseline was 0.73 and the expected outcome set was 0.53 (based on the previous hospitalization as the risk factor)19. The measured confidence interval was 95% with 80% power, and the calculated sample was 88 in each group. Considering the presence of skin ulcers (baseline 0.33 and expected outcome 0.18)19 the calculated sample size was 129. The study included 130 patients with MRSA infection (cases) and 130 patients with MSSA infection (controls). Hospitalized patients who had MRSA grown in their wound culture were considered as cases, whereas hospitalized patients with MSSA grown in the wound swabs were taken as controls. We recruited both male and female adult patients (18 years and above) of general wards, medical and surgical intensive care units. The wards included were medical, surgical, dermatology, orthopedics, cardiology, Ear, nose and throat (ENT) specialties. The patients who were hospitalized for more than two days (>48 hours) as in-patients were included. Patients with immunosuppressive with human immunodeficiency virus, cancer and on immunosuppression therapy were excluded from the study. However, patients with agranulocytosis, leukocytosis and mild autoimmune disorder were not excluded.", "There was no standardized tool available for identifying the risk factors of MRSA infection. Hence, a checklist of risk factors for MRSA infection was developed after an extensive literature search and discussion with microbiologists and Hospital Infection Control Committee (HICC) members. The checklist had 31 dichotomous items with ‘yes’ or ‘no’ options. The content validity was done by nine experts from different healthcare professionals (members of HICC, microbiologists, physicians, faculty of nursing and a policymaker). Content validity index was 0.94. The reliability of the tool was established by the raterinter-rater method, and the calculated ‘r’ was 0.974.", "Ethical permission was obtained from the Institutional Ethics Committee (IEC). The study was registered at ‘clinical trail registry – India’ (CTRI/2018/01/011510). Administrative approval was taken from the Medical Superintendent and Chief Operating Officer of the hospital. Informed written consent from study participants was obtained.", "The data were coded and entered in Statistical Package for Social Sciences (SPSS 16.0) version and the analysis was performed using logistic regression. 
The demographic characteristics are given in frequency and percentage.", "We collected the data from June 2017 to May 2018. The hospitalized patients, whose wound swab grew MRSA or MSSA were approached as presented in the flow diagram (figure 1). After obtaining the consent, investigators collected information from the patients and the medical records using a risk assessment checklist. A total of 260 (130 MRSA infected and 130 MSSA infected) patients were recruited.\nFlow diagram of patient recruitment and data collection", "The study conducted at a single center with convenient sampling lacks the generalizability. Perhaps further studies are required covering diverse geographical and clinical areas which may help in developing appropriate guidelines to prevent MRSA infection." ]
[ null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Study design", "Study setting", "Participants and sample size", "Risk assessment checklist", "Ethical consideration", "Data analysis", "Data collection procedure", "Results", "Discussion", "Limitation", "Conclusion" ]
[ "Methicillin Resistant Staphylococcus aureus (MRSA) is a Gram-positive pathogen, having the ability to cause hospital associated infection and/or community acquired infection. Hospital associated MRSA infection is one of the major problems affecting both patients and care providers1. MRSA colonization is predominantly present in the nose and skin of humans2. Nasal colonization of Staphylococcus aureus and MRSA are the independent predictors of MRSA infection3. Colonized bacteria may not cause infection. However, it can enter the body through injured skin or mucus membrane and can cause simple skin infection to life threatening bacteremia. The spectrum includes pneumonia, bacteremia, skin and soft tissue infection, pyomyositis, sepsis, osteomyelitis, necrotizing pneumonia and necrotizing fasciitis4. Though MRSA can be isolated from blood, nose, wound, urine, respiratory tract, sputum and other body fluids, the prevalence is high in wounds5.\nAcquiring MRSA infection is multifactorial, and the risk factors described are prolonged post-operative state, emergency admissions and prior treatment with multiple antibiotics6. Other notable treatment related factors are emergency surgery, prolonged or multiple hospital stays, use of invasive devices (catheters, surgical drains, gastric/endotracheal tubes), repeated surgeries, treatment with multiple broad-spectrum antibiotics, inpatient in a neonatal or surgical ICU and poor infection control practices7–12. The host related factors are age over 65 years, any conditions that suppress immune system function, open wound or injuries, unsanitary or crowded living conditions like dormitories or military barracks, sharing towels or other personal items7–12. Comorbidities such as diabetes mellitus (66%), hypertension (66%) and sickle cell diseases (33%) are also the threat for acquiring MRSA infection13.\nMRSA contaminates the hands of healthcare professionals (59.6%)7. Even the dress of healthcare professionals can spread MRSA. According to the society for Healthcare Epidemiology of America (SHEA) report (2014), HCPs opine that their attire, including footwear, is important in preventing transmission of infection14. Also, MRSA is found on hospital surfaces, disinfectant areas and reusable equipment15. Though the cleaning of patient surroundings in ICU has shown a significant reduction in MRSA, after 24 hours of cleaning, the risk of MRSA growth in the patient environment remained high16.\nAlthough some similar strains of MRSA are seen in many countries depicting international dissemination, the spread is not homogenous around the globe17. Most of the studies have been conducted in developed countries 3,12,18. No published information on risk factors of MRSA infection is traced in India. Therefore, we aimed at identifying the risk factors of MRSA infection in an Indian hospital to institute appropriate preventive measures.", "Study design The study has adopted a cross-sectional case control study design (1:1) with a quantitative approach.\nThe study has adopted a cross-sectional case control study design (1:1) with a quantitative approach.\nStudy setting The study was carried out in a tertiary care hospital in South India. The hospital has almost all the super specialties with 2032 beds and provides both in-patient and outpatient healthcare services. It caters to the health needs of a large population. 
It is a private university hospital meeting the teaching needs of many health science courses such as medical, dental, nursing and other allied health courses. The hospital had more than 80% occupancy during the study period. The hospital is certified by the International Organization for Standardization, (ISO) 14001: 2015 ISO 50001:2011 and accredited by National Accreditation Board for Hospitals & Healthcare Providers (NABH).\nThe study was carried out in a tertiary care hospital in South India. The hospital has almost all the super specialties with 2032 beds and provides both in-patient and outpatient healthcare services. It caters to the health needs of a large population. It is a private university hospital meeting the teaching needs of many health science courses such as medical, dental, nursing and other allied health courses. The hospital had more than 80% occupancy during the study period. The hospital is certified by the International Organization for Standardization, (ISO) 14001: 2015 ISO 50001:2011 and accredited by National Accreditation Board for Hospitals & Healthcare Providers (NABH).\nParticipants and sample size Hospitalized patients infected with MRSA were the cases. Patients with Methicillin Sensitive Staphylococcus aureus (MSSA) infections were considered as controls.\nThe sample size for identifying the risk factors of MRSA infection was calculated based on the previous study reports by using the following formula 19.\nThe proportion at baseline was 0.73 and the expected outcome set was 0.53 (based on the previous hospitalization as the risk factor)19. The measured confidence interval was 95% with 80% power, and the calculated sample was 88 in each group. Considering the presence of skin ulcers (baseline 0.33 and expected outcome 0.18)19 the calculated sample size was 129. The study included 130 patients with MRSA infection (cases) and 130 patients with MSSA infection (controls). Hospitalized patients who had MRSA grown in their wound culture were considered as cases, whereas hospitalized patients with MSSA grown in the wound swabs were taken as controls. We recruited both male and female adult patients (18 years and above) of general wards, medical and surgical intensive care units. The wards included were medical, surgical, dermatology, orthopedics, cardiology, Ear, nose and throat (ENT) specialties. The patients who were hospitalized for more than two days (>48 hours) as in-patients were included. Patients with immunosuppressive with human immunodeficiency virus, cancer and on immunosuppression therapy were excluded from the study. However, patients with agranulocytosis, leukocytosis and mild autoimmune disorder were not excluded.\nHospitalized patients infected with MRSA were the cases. Patients with Methicillin Sensitive Staphylococcus aureus (MSSA) infections were considered as controls.\nThe sample size for identifying the risk factors of MRSA infection was calculated based on the previous study reports by using the following formula 19.\nThe proportion at baseline was 0.73 and the expected outcome set was 0.53 (based on the previous hospitalization as the risk factor)19. The measured confidence interval was 95% with 80% power, and the calculated sample was 88 in each group. Considering the presence of skin ulcers (baseline 0.33 and expected outcome 0.18)19 the calculated sample size was 129. The study included 130 patients with MRSA infection (cases) and 130 patients with MSSA infection (controls). 
Hospitalized patients who had MRSA grown in their wound culture were considered as cases, whereas hospitalized patients with MSSA grown in the wound swabs were taken as controls. We recruited both male and female adult patients (18 years and above) of general wards, medical and surgical intensive care units. The wards included were medical, surgical, dermatology, orthopedics, cardiology, Ear, nose and throat (ENT) specialties. The patients who were hospitalized for more than two days (>48 hours) as in-patients were included. Patients with immunosuppressive with human immunodeficiency virus, cancer and on immunosuppression therapy were excluded from the study. However, patients with agranulocytosis, leukocytosis and mild autoimmune disorder were not excluded.\nRisk assessment checklist There was no standardized tool available for identifying the risk factors of MRSA infection. Hence, a checklist of risk factors for MRSA infection was developed after an extensive literature search and discussion with microbiologists and Hospital Infection Control Committee (HICC) members. The checklist had 31 dichotomous items with ‘yes’ or ‘no’ options. The content validity was done by nine experts from different healthcare professionals (members of HICC, microbiologists, physicians, faculty of nursing and a policymaker). Content validity index was 0.94. The reliability of the tool was established by the raterinter-rater method, and the calculated ‘r’ was 0.974.\nThere was no standardized tool available for identifying the risk factors of MRSA infection. Hence, a checklist of risk factors for MRSA infection was developed after an extensive literature search and discussion with microbiologists and Hospital Infection Control Committee (HICC) members. The checklist had 31 dichotomous items with ‘yes’ or ‘no’ options. The content validity was done by nine experts from different healthcare professionals (members of HICC, microbiologists, physicians, faculty of nursing and a policymaker). Content validity index was 0.94. The reliability of the tool was established by the raterinter-rater method, and the calculated ‘r’ was 0.974.\nEthical consideration Ethical permission was obtained from the Institutional Ethics Committee (IEC). The study was registered at ‘clinical trail registry – India’ (CTRI/2018/01/011510). Administrative approval was taken from the Medical Superintendent and Chief Operating Officer of the hospital. Informed written consent from study participants was obtained.\nEthical permission was obtained from the Institutional Ethics Committee (IEC). The study was registered at ‘clinical trail registry – India’ (CTRI/2018/01/011510). Administrative approval was taken from the Medical Superintendent and Chief Operating Officer of the hospital. Informed written consent from study participants was obtained.\nData analysis The data were coded and entered in Statistical Package for Social Sciences (SPSS 16.0) version and the analysis was performed using logistic regression. The demographic characteristics are given in frequency and percentage.\nThe data were coded and entered in Statistical Package for Social Sciences (SPSS 16.0) version and the analysis was performed using logistic regression. The demographic characteristics are given in frequency and percentage.\nData collection procedure We collected the data from June 2017 to May 2018. The hospitalized patients, whose wound swab grew MRSA or MSSA were approached as presented in the flow diagram (figure 1). 
After obtaining the consent, investigators collected information from the patients and the medical records using a risk assessment checklist. A total of 260 (130 MRSA infected and 130 MSSA infected) patients were recruited.\nFlow diagram of patient recruitment and data collection\nWe collected the data from June 2017 to May 2018. The hospitalized patients, whose wound swab grew MRSA or MSSA were approached as presented in the flow diagram (figure 1). After obtaining the consent, investigators collected information from the patients and the medical records using a risk assessment checklist. A total of 260 (130 MRSA infected and 130 MSSA infected) patients were recruited.\nFlow diagram of patient recruitment and data collection", "The study has adopted a cross-sectional case control study design (1:1) with a quantitative approach.", "The study was carried out in a tertiary care hospital in South India. The hospital has almost all the super specialties with 2032 beds and provides both in-patient and outpatient healthcare services. It caters to the health needs of a large population. It is a private university hospital meeting the teaching needs of many health science courses such as medical, dental, nursing and other allied health courses. The hospital had more than 80% occupancy during the study period. The hospital is certified by the International Organization for Standardization, (ISO) 14001: 2015 ISO 50001:2011 and accredited by National Accreditation Board for Hospitals & Healthcare Providers (NABH).", "Hospitalized patients infected with MRSA were the cases. Patients with Methicillin Sensitive Staphylococcus aureus (MSSA) infections were considered as controls.\nThe sample size for identifying the risk factors of MRSA infection was calculated based on the previous study reports by using the following formula 19.\nThe proportion at baseline was 0.73 and the expected outcome set was 0.53 (based on the previous hospitalization as the risk factor)19. The measured confidence interval was 95% with 80% power, and the calculated sample was 88 in each group. Considering the presence of skin ulcers (baseline 0.33 and expected outcome 0.18)19 the calculated sample size was 129. The study included 130 patients with MRSA infection (cases) and 130 patients with MSSA infection (controls). Hospitalized patients who had MRSA grown in their wound culture were considered as cases, whereas hospitalized patients with MSSA grown in the wound swabs were taken as controls. We recruited both male and female adult patients (18 years and above) of general wards, medical and surgical intensive care units. The wards included were medical, surgical, dermatology, orthopedics, cardiology, Ear, nose and throat (ENT) specialties. The patients who were hospitalized for more than two days (>48 hours) as in-patients were included. Patients with immunosuppressive with human immunodeficiency virus, cancer and on immunosuppression therapy were excluded from the study. However, patients with agranulocytosis, leukocytosis and mild autoimmune disorder were not excluded.", "There was no standardized tool available for identifying the risk factors of MRSA infection. Hence, a checklist of risk factors for MRSA infection was developed after an extensive literature search and discussion with microbiologists and Hospital Infection Control Committee (HICC) members. The checklist had 31 dichotomous items with ‘yes’ or ‘no’ options. 
The content validity was done by nine experts from different healthcare professionals (members of HICC, microbiologists, physicians, faculty of nursing and a policymaker). Content validity index was 0.94. The reliability of the tool was established by the raterinter-rater method, and the calculated ‘r’ was 0.974.", "Ethical permission was obtained from the Institutional Ethics Committee (IEC). The study was registered at ‘clinical trail registry – India’ (CTRI/2018/01/011510). Administrative approval was taken from the Medical Superintendent and Chief Operating Officer of the hospital. Informed written consent from study participants was obtained.", "The data were coded and entered in Statistical Package for Social Sciences (SPSS 16.0) version and the analysis was performed using logistic regression. The demographic characteristics are given in frequency and percentage.", "We collected the data from June 2017 to May 2018. The hospitalized patients, whose wound swab grew MRSA or MSSA were approached as presented in the flow diagram (figure 1). After obtaining the consent, investigators collected information from the patients and the medical records using a risk assessment checklist. A total of 260 (130 MRSA infected and 130 MSSA infected) patients were recruited.\nFlow diagram of patient recruitment and data collection", "Both MRSA and MSSA infection groups were comparable in terms of age, gender, admission status, immunity, diabetes mellitus, smoking status, having undergone invasive diagnostic procedures, presence of a catheter, feeding tubes and duration of surgery as shown in Table 1. The mean duration of hospital stay was 9.9 days (range: 1–38 days) for the MRSA infected patients and 9.7 days (range: 1–30 days) for the MSSA infected patients.\nComparison of demographic characteristics among both the groups of MRSA and MSSA infected patient\nBoth the MRSA and MSSA infected patients were comparable (Table 1) as the odds ratio was not significant at p<0.05. Hence, the groups were considered for further statistical analysis to identify the risk factors.\nThe risk factors given in Table 2 were considered for multiple logistic regression since the univariate analysis indicated statistical significance. The risk factors along with the odds ratio and 95% confidence interval are given in Table 2.\nThe risk factors through univariate logistic regression\nSignificance considered as p<0.05 level\nThe risk factors for MRSA infection which showed significance such as prolonged hospitalization, undergoing surgical procedures, surgical drain, previous use of antibiotics, presence of open wounds, having endotracheal and tracheostomy tubes, presence of intravenous access, presence of vascular/pressure ulcers and recent previous hospitalization were considered for further multiple logistic regression. 
Multiple logistic regression was adjusted with the age (advanced age) and gender (female), as these two factors are biologically significant and adjusted odds ratios are given in Table 3.\nMultiple logistic regression with adjusted Odds Ratio on risk factors of MRSA infection\nLogistic regression model: log (odds of MRSA) = -2.482 + 1.471 (performing surgery)+ -1.182 (prolonged hospitalization) + 1.667 (presence of tracheostomy tube) + 1.975 (presence of pressure/venous ulcer) + 1.059 (Recent hospitalization)\nOut of the above risk factors, surgery as a treatment option (OR 4.355; CI 1.03, 18.328; p=0.045), prolonged hospitalization (OR 0.307; CI 0.11, 0.832; p=0.020), presence of tracheostomy tube (OR 5.298, CI 1.16, 24.298; p=0.032), presence of pressure/venous ulcer (OR 7.205; CI 1.75, 29.606; p=0.006) and previous recent hospitalization (OR 2.883; CI 1.25, 6.631; p=0.013) were significant risk factors for causing MRSA infection among hospitalized patients.", "Staphylococcus aureus remains the most common pathogen causing infection in wounds20. World Health Organization has stressed that MRSA is one of the high priority multidrug-resistant organism21. MRSA infection is high in Asia and the region is considered as ‘hospital associated MRSA endemic area’17.\nIn the present study, undergoing surgery, prolonged hospitalization, presence of tracheostomy tube, pressure/venous ulcer and recent hospitalization were the significant independent risk factors causing MRSA infection among hospitalized patients.\nThe prevalence of MRSA infection is high among emergency admission patients6. In the present study, 34.9% of patients were admitted from the trauma center and have undergone emergency surgeries. Usually, emergency surgeries are not well prepared like elective surgeries. However, emergency admission was not a significant determinant in our study.\nCallejo et al. and Sun et al. reported that the risk factors of MRSA infection were advanced age (above 65 years), traumatic injuries, admitted from a long-term care facility, presence of a urinary catheter, previous antibiotic treatment and skin-soft tissue or post-surgical superficial skin infections12,22. Patients with an open fracture tend to get infected more (14.7%) than a closed fracture (4.2%)23 or open injuries22. In contrast, none of these factors were significant in the present study. Therefore, it can be inferred that the risk factors of MRSA infection differ around the globe.\nPatients who have undergone surgical debridement within one year (adjusted odds ratio, 2.6; 95% CI, 1.4–5.0, p=0.002) and obesity (adjusted OR 3.4, 95% CI 1.4–8.8, p=0.008) were at risk of developing recurrent MRSA infection24. Vascular ulcer increases the risk of MRSA infection25. In agreement to this, the presence of vascular ulcer in the present study was one of the significant risk of causing MRSA infection. Vascular ulcer reduces the blood flow to distal areas. In the absence of oxygen, wound healing is delayed. Non-healing of the ulcer increases the risk of infection.\nIn the current study, bed occupancy was more than 80%. The studies have proven that the occupancy rate in the hospital is directly proportional to the incidence of HAIs26. The previous hospitalization is a proven cause of MRSA bacteremia27. Recent hospitalization (OR 2.883; CI 1.25, 6.631; p=0.013) within a year was a significant cause of MRSA infection. Both prolonged hospitalization and repeated hospitalization increases the risk. 
The previous history of nursing home admission (OR 8.42; 1.06–66.43) is another threat of acquiring MRSA infection9. Hospital is a source of multiple pathogens, and transmission of such pathogens from the hospital to the host is common. MRSA is seen in hospital environmental surface (38.9%) which increases the risk of causing infection17. The ICU environment (67.3%) is an additional well-known risk factor of getting MRSA infection7. MRSA was detected in ventilators (33%)7, ultrasound transducers (17%)28 and stethoscopes used in the hospital29,30. Also, MRSA is detected on the hands of 59.6% healthcare professionals7.\nOld age and nursing home residences are found to be independent risk factors of MRSA infection related death31. Pre-prosthetic infection with MRSA is increasing (44%) among orthopedic surgery patients32 and arthroplasty patients have a higher risk (OR 0.11; 0.02–0.56) than internal fixation9 which also increases the treatment costs. Most of the time, removal of the prosthesis is the treatment for prosthetic infection and this infection indicates the failure of treatment.\nMRSA infection can have an adverse effect on the life of infected patients. The consequence of the infection can be repeated hospitalization, increased healthcare cost, increased mortality and morbidity11. A retrospective study carried out in Texas showed that 21% MRSA infected patients developed recurrent infection22. A two year retrospective study of amputated patients showed 7.3% re-hospitalization due to stump infection. Among the re-admitted patients, MRSA was the leading pathogen causing infection and the most common cause of death33. The occurrence of surgical site infection with MRSA among orthopedic and transplant surgery patients is in late post-operative days compared to general surgical patients34. This indicates that a longer duration of hospitalization is a threat for the development of infection. Longer hospitalization not only causes wound infection but also can result in MRSA bacteremia. In the present study, prolonged hospitalization (OR 0.307; CI 0.11, 0.832; p=0.020) was a significant contributing factor of MRSA infecton. The mean duration of the hospitalized MRSA infected patients was 9.9 days. The duration of the hospitalization differs for each disease condition. However, for patients with minor surgeries, more than three days of hospitalization and more than seven days for major surgeries were considered as prolonged hospitalization. For patients, without surgical procedures (only medical treatment) the duration of hospitalization was compared with our hospital policy.\nIn the present study, undergoing surgery emerged as a risk factor. As surgical procedure disrupts the integrity of the skin, a pathogen can enter into the body easily. It is also noted that, more personnel in the operation room increases the risk of infection35. However, the operating room team is bigger in teaching hospitals as students are posted in the operation room to develop surgical skills. Therefore, additional measures need to be implemented to reduce risk.\nPresence of endotracheal or tracheostomy tubes and vascular ulcers result in infection. Though ulcers can be prevented, managing the patient with endotracheal or tracheostomy is unavoidable in many situations. Therefore, additional emphasis is needed for infection control. These patients need to stay for a longer time in the hospital. A systematic review revealed that the cost of treating MRSA infection is high36. 
Though hospitalization cannot be completely eliminated, the hospital must take necessary measures to reduce the duration of hospitalization and avoid repeated admissions.", "The study, conducted at a single center with convenience sampling, lacks generalizability. Further studies are required covering diverse geographical and clinical areas, which may help in developing appropriate guidelines to prevent MRSA infection.", "We identified that damage to the skin and mucosal barriers, such as from undergoing surgical procedures and the existence of pressure or venous ulcers, increases the risk of acquiring MRSA infection. Prolonged length of hospital stay and a history of recent hospitalization are the other risk factors. In addition, tracheostomy escalates the threat of MRSA infection in wounds of patients admitted to the hospital. Hence, controlling these risk factors may help in reducing the burden of infection." ]
[ "intro", "methods", null, null, null, null, null, null, null, "results", "discussion", null, "conclusions" ]
[ "MRSA", "infection", "India" ]
Introduction: Methicillin Resistant Staphylococcus aureus (MRSA) is a Gram-positive pathogen, having the ability to cause hospital associated infection and/or community acquired infection. Hospital associated MRSA infection is one of the major problems affecting both patients and care providers1. MRSA colonization is predominantly present in the nose and skin of humans2. Nasal colonization of Staphylococcus aureus and MRSA are the independent predictors of MRSA infection3. Colonized bacteria may not cause infection. However, it can enter the body through injured skin or mucus membrane and can cause simple skin infection to life threatening bacteremia. The spectrum includes pneumonia, bacteremia, skin and soft tissue infection, pyomyositis, sepsis, osteomyelitis, necrotizing pneumonia and necrotizing fasciitis4. Though MRSA can be isolated from blood, nose, wound, urine, respiratory tract, sputum and other body fluids, the prevalence is high in wounds5. Acquiring MRSA infection is multifactorial, and the risk factors described are prolonged post-operative state, emergency admissions and prior treatment with multiple antibiotics6. Other notable treatment related factors are emergency surgery, prolonged or multiple hospital stays, use of invasive devices (catheters, surgical drains, gastric/endotracheal tubes), repeated surgeries, treatment with multiple broad-spectrum antibiotics, inpatient in a neonatal or surgical ICU and poor infection control practices7–12. The host related factors are age over 65 years, any conditions that suppress immune system function, open wound or injuries, unsanitary or crowded living conditions like dormitories or military barracks, sharing towels or other personal items7–12. Comorbidities such as diabetes mellitus (66%), hypertension (66%) and sickle cell diseases (33%) are also the threat for acquiring MRSA infection13. MRSA contaminates the hands of healthcare professionals (59.6%)7. Even the dress of healthcare professionals can spread MRSA. According to the society for Healthcare Epidemiology of America (SHEA) report (2014), HCPs opine that their attire, including footwear, is important in preventing transmission of infection14. Also, MRSA is found on hospital surfaces, disinfectant areas and reusable equipment15. Though the cleaning of patient surroundings in ICU has shown a significant reduction in MRSA, after 24 hours of cleaning, the risk of MRSA growth in the patient environment remained high16. Although some similar strains of MRSA are seen in many countries depicting international dissemination, the spread is not homogenous around the globe17. Most of the studies have been conducted in developed countries 3,12,18. No published information on risk factors of MRSA infection is traced in India. Therefore, we aimed at identifying the risk factors of MRSA infection in an Indian hospital to institute appropriate preventive measures. Methods: Study design The study has adopted a cross-sectional case control study design (1:1) with a quantitative approach. The study has adopted a cross-sectional case control study design (1:1) with a quantitative approach. Study setting The study was carried out in a tertiary care hospital in South India. The hospital has almost all the super specialties with 2032 beds and provides both in-patient and outpatient healthcare services. It caters to the health needs of a large population. 
It is a private university hospital meeting the teaching needs of many health science courses such as medical, dental, nursing and other allied health courses. The hospital had more than 80% occupancy during the study period. The hospital is certified by the International Organization for Standardization, (ISO) 14001: 2015 ISO 50001:2011 and accredited by National Accreditation Board for Hospitals & Healthcare Providers (NABH). The study was carried out in a tertiary care hospital in South India. The hospital has almost all the super specialties with 2032 beds and provides both in-patient and outpatient healthcare services. It caters to the health needs of a large population. It is a private university hospital meeting the teaching needs of many health science courses such as medical, dental, nursing and other allied health courses. The hospital had more than 80% occupancy during the study period. The hospital is certified by the International Organization for Standardization, (ISO) 14001: 2015 ISO 50001:2011 and accredited by National Accreditation Board for Hospitals & Healthcare Providers (NABH). Participants and sample size Hospitalized patients infected with MRSA were the cases. Patients with Methicillin Sensitive Staphylococcus aureus (MSSA) infections were considered as controls. The sample size for identifying the risk factors of MRSA infection was calculated based on the previous study reports by using the following formula 19. The proportion at baseline was 0.73 and the expected outcome set was 0.53 (based on the previous hospitalization as the risk factor)19. The measured confidence interval was 95% with 80% power, and the calculated sample was 88 in each group. Considering the presence of skin ulcers (baseline 0.33 and expected outcome 0.18)19 the calculated sample size was 129. The study included 130 patients with MRSA infection (cases) and 130 patients with MSSA infection (controls). Hospitalized patients who had MRSA grown in their wound culture were considered as cases, whereas hospitalized patients with MSSA grown in the wound swabs were taken as controls. We recruited both male and female adult patients (18 years and above) of general wards, medical and surgical intensive care units. The wards included were medical, surgical, dermatology, orthopedics, cardiology, Ear, nose and throat (ENT) specialties. The patients who were hospitalized for more than two days (>48 hours) as in-patients were included. Patients with immunosuppressive with human immunodeficiency virus, cancer and on immunosuppression therapy were excluded from the study. However, patients with agranulocytosis, leukocytosis and mild autoimmune disorder were not excluded. Hospitalized patients infected with MRSA were the cases. Patients with Methicillin Sensitive Staphylococcus aureus (MSSA) infections were considered as controls. The sample size for identifying the risk factors of MRSA infection was calculated based on the previous study reports by using the following formula 19. The proportion at baseline was 0.73 and the expected outcome set was 0.53 (based on the previous hospitalization as the risk factor)19. The measured confidence interval was 95% with 80% power, and the calculated sample was 88 in each group. Considering the presence of skin ulcers (baseline 0.33 and expected outcome 0.18)19 the calculated sample size was 129. The study included 130 patients with MRSA infection (cases) and 130 patients with MSSA infection (controls). 
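The sample size formula cited from reference 19 is not reproduced in the text, but the reported figures (88 per group for previous hospitalization and 129 for skin ulcers) are consistent with the standard formula for comparing two proportions. A minimal sketch, under that assumption, is shown below; it is an illustration rather than the authors' own calculation.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two proportions (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)   # ~1.96 for a two-sided 95% confidence level
    z_b = norm.ppf(power)           # ~0.84 for 80% power
    return ceil((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(n_per_group(0.73, 0.53))  # previous hospitalization: 88 per group
print(n_per_group(0.33, 0.18))  # presence of skin ulcers: 129 per group
```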
Hospitalized patients who had MRSA grown in their wound culture were considered as cases, whereas hospitalized patients with MSSA grown in the wound swabs were taken as controls. We recruited both male and female adult patients (18 years and above) of general wards, medical and surgical intensive care units. The wards included were medical, surgical, dermatology, orthopedics, cardiology, Ear, nose and throat (ENT) specialties. The patients who were hospitalized for more than two days (>48 hours) as in-patients were included. Patients with immunosuppressive with human immunodeficiency virus, cancer and on immunosuppression therapy were excluded from the study. However, patients with agranulocytosis, leukocytosis and mild autoimmune disorder were not excluded. Risk assessment checklist There was no standardized tool available for identifying the risk factors of MRSA infection. Hence, a checklist of risk factors for MRSA infection was developed after an extensive literature search and discussion with microbiologists and Hospital Infection Control Committee (HICC) members. The checklist had 31 dichotomous items with ‘yes’ or ‘no’ options. The content validity was done by nine experts from different healthcare professionals (members of HICC, microbiologists, physicians, faculty of nursing and a policymaker). Content validity index was 0.94. The reliability of the tool was established by the raterinter-rater method, and the calculated ‘r’ was 0.974. There was no standardized tool available for identifying the risk factors of MRSA infection. Hence, a checklist of risk factors for MRSA infection was developed after an extensive literature search and discussion with microbiologists and Hospital Infection Control Committee (HICC) members. The checklist had 31 dichotomous items with ‘yes’ or ‘no’ options. The content validity was done by nine experts from different healthcare professionals (members of HICC, microbiologists, physicians, faculty of nursing and a policymaker). Content validity index was 0.94. The reliability of the tool was established by the raterinter-rater method, and the calculated ‘r’ was 0.974. Ethical consideration Ethical permission was obtained from the Institutional Ethics Committee (IEC). The study was registered at ‘clinical trail registry – India’ (CTRI/2018/01/011510). Administrative approval was taken from the Medical Superintendent and Chief Operating Officer of the hospital. Informed written consent from study participants was obtained. Ethical permission was obtained from the Institutional Ethics Committee (IEC). The study was registered at ‘clinical trail registry – India’ (CTRI/2018/01/011510). Administrative approval was taken from the Medical Superintendent and Chief Operating Officer of the hospital. Informed written consent from study participants was obtained. Data analysis The data were coded and entered in Statistical Package for Social Sciences (SPSS 16.0) version and the analysis was performed using logistic regression. The demographic characteristics are given in frequency and percentage. The data were coded and entered in Statistical Package for Social Sciences (SPSS 16.0) version and the analysis was performed using logistic regression. The demographic characteristics are given in frequency and percentage. Data collection procedure We collected the data from June 2017 to May 2018. The hospitalized patients, whose wound swab grew MRSA or MSSA were approached as presented in the flow diagram (figure 1). 
After obtaining the consent, investigators collected information from the patients and the medical records using a risk assessment checklist. A total of 260 (130 MRSA infected and 130 MSSA infected) patients were recruited. Flow diagram of patient recruitment and data collection We collected the data from June 2017 to May 2018. The hospitalized patients, whose wound swab grew MRSA or MSSA were approached as presented in the flow diagram (figure 1). After obtaining the consent, investigators collected information from the patients and the medical records using a risk assessment checklist. A total of 260 (130 MRSA infected and 130 MSSA infected) patients were recruited. Flow diagram of patient recruitment and data collection Study design: The study has adopted a cross-sectional case control study design (1:1) with a quantitative approach. Study setting: The study was carried out in a tertiary care hospital in South India. The hospital has almost all the super specialties with 2032 beds and provides both in-patient and outpatient healthcare services. It caters to the health needs of a large population. It is a private university hospital meeting the teaching needs of many health science courses such as medical, dental, nursing and other allied health courses. The hospital had more than 80% occupancy during the study period. The hospital is certified by the International Organization for Standardization, (ISO) 14001: 2015 ISO 50001:2011 and accredited by National Accreditation Board for Hospitals & Healthcare Providers (NABH). Participants and sample size: Hospitalized patients infected with MRSA were the cases. Patients with Methicillin Sensitive Staphylococcus aureus (MSSA) infections were considered as controls. The sample size for identifying the risk factors of MRSA infection was calculated based on the previous study reports by using the following formula 19. The proportion at baseline was 0.73 and the expected outcome set was 0.53 (based on the previous hospitalization as the risk factor)19. The measured confidence interval was 95% with 80% power, and the calculated sample was 88 in each group. Considering the presence of skin ulcers (baseline 0.33 and expected outcome 0.18)19 the calculated sample size was 129. The study included 130 patients with MRSA infection (cases) and 130 patients with MSSA infection (controls). Hospitalized patients who had MRSA grown in their wound culture were considered as cases, whereas hospitalized patients with MSSA grown in the wound swabs were taken as controls. We recruited both male and female adult patients (18 years and above) of general wards, medical and surgical intensive care units. The wards included were medical, surgical, dermatology, orthopedics, cardiology, Ear, nose and throat (ENT) specialties. The patients who were hospitalized for more than two days (>48 hours) as in-patients were included. Patients with immunosuppressive with human immunodeficiency virus, cancer and on immunosuppression therapy were excluded from the study. However, patients with agranulocytosis, leukocytosis and mild autoimmune disorder were not excluded. Risk assessment checklist: There was no standardized tool available for identifying the risk factors of MRSA infection. Hence, a checklist of risk factors for MRSA infection was developed after an extensive literature search and discussion with microbiologists and Hospital Infection Control Committee (HICC) members. The checklist had 31 dichotomous items with ‘yes’ or ‘no’ options. 
The content validity was assessed by nine experts from different healthcare professions (members of HICC, microbiologists, physicians, faculty of nursing and a policymaker). The content validity index was 0.94. The reliability of the tool was established by the inter-rater method, and the calculated ‘r’ was 0.974. Ethical consideration: Ethical permission was obtained from the Institutional Ethics Committee (IEC). The study was registered at the Clinical Trials Registry – India (CTRI/2018/01/011510). Administrative approval was obtained from the Medical Superintendent and Chief Operating Officer of the hospital. Informed written consent from study participants was obtained. Data analysis: The data were coded and entered in the Statistical Package for Social Sciences (SPSS, version 16.0), and the analysis was performed using logistic regression. The demographic characteristics are given as frequencies and percentages. Data collection procedure: We collected the data from June 2017 to May 2018. The hospitalized patients whose wound swab grew MRSA or MSSA were approached as presented in the flow diagram (figure 1). After obtaining the consent, investigators collected information from the patients and the medical records using a risk assessment checklist. A total of 260 (130 MRSA infected and 130 MSSA infected) patients were recruited. Flow diagram of patient recruitment and data collection Results: Both MRSA and MSSA infection groups were comparable in terms of age, gender, admission status, immunity, diabetes mellitus, smoking status, having undergone invasive diagnostic procedures, presence of a catheter, feeding tubes and duration of surgery as shown in Table 1. The mean duration of hospital stay was 9.9 days (range: 1–38 days) for the MRSA infected patients and 9.7 days (range: 1–30 days) for the MSSA infected patients. Comparison of demographic characteristics among both the groups of MRSA and MSSA infected patients Both the MRSA and MSSA infected patients were comparable (Table 1) as the odds ratios were not significant at p<0.05. Hence, the groups were considered for further statistical analysis to identify the risk factors. The risk factors given in Table 2 were considered for multiple logistic regression since the univariate analysis indicated statistical significance. The risk factors along with the odds ratios and 95% confidence intervals are given in Table 2. The risk factors through univariate logistic regression Significance considered at the p<0.05 level The risk factors for MRSA infection which showed significance, such as prolonged hospitalization, undergoing surgical procedures, surgical drain, previous use of antibiotics, presence of open wounds, having endotracheal and tracheostomy tubes, presence of intravenous access, presence of vascular/pressure ulcers and recent previous hospitalization, were considered for further multiple logistic regression. Multiple logistic regression was adjusted for age (advanced age) and gender (female), as these two factors are biologically significant, and adjusted odds ratios are given in Table 3. 
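The paper reports running these analyses in SPSS 16.0. For readers who want to reproduce the same kind of output (adjusted odds ratios with 95% confidence intervals) outside SPSS, a hedged sketch using Python's statsmodels is given below; the data file and column names are hypothetical placeholders and not part of the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient; 'mrsa' = 1 for MRSA cases, 0 for MSSA controls.
# Exposure columns are 0/1 indicators (names here are hypothetical).
df = pd.read_csv("risk_factors.csv")

# Multivariable model mirroring the paper's approach: significant univariate
# exposures plus adjustment for advanced age and female gender.
fit = smf.logit(
    "mrsa ~ surgery + prolonged_stay + tracheostomy + ulcer + recent_hosp"
    " + advanced_age + female",
    data=df,
).fit()

# exp(coefficient) is the adjusted OR; exponentiating the coefficient CI gives its 95% CI.
summary = pd.DataFrame({
    "adjusted_OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
    "p_value": fit.pvalues,
})
print(summary.round(3))
```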
Multiple logistic regression with adjusted Odds Ratio on risk factors of MRSA infection Logistic regression model: log (odds of MRSA) = -2.482 + 1.471 (performing surgery)+ -1.182 (prolonged hospitalization) + 1.667 (presence of tracheostomy tube) + 1.975 (presence of pressure/venous ulcer) + 1.059 (Recent hospitalization) Out of the above risk factors, surgery as a treatment option (OR 4.355; CI 1.03, 18.328; p=0.045), prolonged hospitalization (OR 0.307; CI 0.11, 0.832; p=0.020), presence of tracheostomy tube (OR 5.298, CI 1.16, 24.298; p=0.032), presence of pressure/venous ulcer (OR 7.205; CI 1.75, 29.606; p=0.006) and previous recent hospitalization (OR 2.883; CI 1.25, 6.631; p=0.013) were significant risk factors for causing MRSA infection among hospitalized patients. Discussion: Staphylococcus aureus remains the most common pathogen causing infection in wounds20. World Health Organization has stressed that MRSA is one of the high priority multidrug-resistant organism21. MRSA infection is high in Asia and the region is considered as ‘hospital associated MRSA endemic area’17. In the present study, undergoing surgery, prolonged hospitalization, presence of tracheostomy tube, pressure/venous ulcer and recent hospitalization were the significant independent risk factors causing MRSA infection among hospitalized patients. The prevalence of MRSA infection is high among emergency admission patients6. In the present study, 34.9% of patients were admitted from the trauma center and have undergone emergency surgeries. Usually, emergency surgeries are not well prepared like elective surgeries. However, emergency admission was not a significant determinant in our study. Callejo et al. and Sun et al. reported that the risk factors of MRSA infection were advanced age (above 65 years), traumatic injuries, admitted from a long-term care facility, presence of a urinary catheter, previous antibiotic treatment and skin-soft tissue or post-surgical superficial skin infections12,22. Patients with an open fracture tend to get infected more (14.7%) than a closed fracture (4.2%)23 or open injuries22. In contrast, none of these factors were significant in the present study. Therefore, it can be inferred that the risk factors of MRSA infection differ around the globe. Patients who have undergone surgical debridement within one year (adjusted odds ratio, 2.6; 95% CI, 1.4–5.0, p=0.002) and obesity (adjusted OR 3.4, 95% CI 1.4–8.8, p=0.008) were at risk of developing recurrent MRSA infection24. Vascular ulcer increases the risk of MRSA infection25. In agreement to this, the presence of vascular ulcer in the present study was one of the significant risk of causing MRSA infection. Vascular ulcer reduces the blood flow to distal areas. In the absence of oxygen, wound healing is delayed. Non-healing of the ulcer increases the risk of infection. In the current study, bed occupancy was more than 80%. The studies have proven that the occupancy rate in the hospital is directly proportional to the incidence of HAIs26. The previous hospitalization is a proven cause of MRSA bacteremia27. Recent hospitalization (OR 2.883; CI 1.25, 6.631; p=0.013) within a year was a significant cause of MRSA infection. Both prolonged hospitalization and repeated hospitalization increases the risk. The previous history of nursing home admission (OR 8.42; 1.06–66.43) is another threat of acquiring MRSA infection9. Hospital is a source of multiple pathogens, and transmission of such pathogens from the hospital to the host is common. 
MRSA is found on hospital environmental surfaces (38.9%), which increases the risk of infection17. The ICU environment (67.3%) is an additional well-known risk factor for acquiring MRSA infection7. MRSA was detected in ventilators (33%)7, ultrasound transducers (17%)28 and stethoscopes used in the hospital29,30. Also, MRSA is detected on the hands of 59.6% of healthcare professionals7. Old age and nursing home residence are found to be independent risk factors for MRSA infection-related death31. Pre-prosthetic infection with MRSA is increasing (44%) among orthopedic surgery patients32, and arthroplasty patients have a higher risk (OR 0.11; 0.02–0.56) than internal fixation patients9, which also increases treatment costs. Most of the time, removal of the prosthesis is the treatment for prosthetic infection, and this infection indicates the failure of treatment. MRSA infection can have an adverse effect on the life of infected patients. The consequences of the infection can be repeated hospitalization, increased healthcare costs, and increased mortality and morbidity11. A retrospective study carried out in Texas showed that 21% of MRSA infected patients developed recurrent infection22. A two-year retrospective study of amputated patients showed 7.3% re-hospitalization due to stump infection. Among the re-admitted patients, MRSA was the leading pathogen causing infection and the most common cause of death33. Surgical site infection with MRSA among orthopedic and transplant surgery patients occurs in late post-operative days compared to general surgical patients34. This indicates that a longer duration of hospitalization is a threat for the development of infection. Longer hospitalization not only causes wound infection but can also result in MRSA bacteremia. In the present study, prolonged hospitalization (OR 0.307; CI 0.11, 0.832; p=0.020) was a significant contributing factor for MRSA infection. The mean duration of hospitalization of the MRSA infected patients was 9.9 days. The duration of hospitalization differs for each disease condition. However, for patients with minor surgeries, more than three days of hospitalization, and for major surgeries, more than seven days, were considered as prolonged hospitalization. For patients without surgical procedures (only medical treatment), the duration of hospitalization was compared with our hospital policy. In the present study, undergoing surgery emerged as a risk factor. As a surgical procedure disrupts the integrity of the skin, a pathogen can enter the body easily. It has also been noted that more personnel in the operation room increases the risk of infection35. However, the operating room team is larger in teaching hospitals as students are posted in the operation room to develop surgical skills. Therefore, additional measures need to be implemented to reduce risk. The presence of endotracheal or tracheostomy tubes and vascular ulcers results in infection. Though ulcers can be prevented, managing a patient with an endotracheal or tracheostomy tube is unavoidable in many situations. Therefore, additional emphasis is needed for infection control. These patients need to stay for a longer time in the hospital. A systematic review revealed that the cost of treating MRSA infection is high36. Though hospitalization cannot be completely eliminated, the hospital must take necessary measures to reduce the duration of hospitalization and avoid repeated admissions. Limitation: The study, conducted at a single center with convenience sampling, lacks generalizability. 
Perhaps further studies are required covering diverse geographical and clinical areas which may help in developing appropriate guidelines to prevent MRSA infection. Conclusion: We identified that the damage to the skin and mucosal barriers such as undergoing surgical procedures and the existence of pressure or venous ulcers increase the risk of acquiring MRSA infection. Prolonged length of hospital stay and the history of recent hospitalization are the other risk factors. In addition, tracheostomy escalates the threat of MRSA infection in wounds of patients admitted to the hospital. Hence, controlling these risk factors may help in reducing the burden of infection.
Background: Methicillin Resistant Staphylococcus aureus (MRSA) causes infection in hospitals and communities. The prevalence and risk factors of MRSA infection is not homogenous across the globe. Methods: Cross-sectional case control study was conducted at a tertiary care hospital in India. The risk factors were collected using checklist from 130 MRSA and 130 Methicillin sensitive staphylococcus aureus (MSSA) infected patients. The pathogens were isolated from the wound swabs according to Clinical and Laboratory Standards Institute guidelines. Results: Both the groups were comparable in terms of age, gender, diabetic status, undergoing invasive procedures, urinary catheterization and smoking (p>0.05). Multivariate logistic regression revealed surgical treatment (OR 4.355; CI 1.03, 18.328; p=0.045), prolonged hospitalization (OR 0.307; CI 0.11, 0.832; p=0.020), tracheostomy (OR 5.298, CI 1.16, 24.298; p=0.032), pressure/venous ulcer (OR 7.205; CI 1.75, 29.606; p=0.006) and previous hospitalization (OR 2.883; CI 1.25, 6.631; p=0.013) as significant risk factors for MRSA infection. Conclusions: Surgical treatment, prolonged and history of hospitalization, having tracheostomy for ventilation and pressure/venous ulcer were the key risk factors. Therefore, special attention has to be given to the preventable risk factors while caring for hospitalized patients to prevent MRSA infection.
Introduction: Methicillin Resistant Staphylococcus aureus (MRSA) is a Gram-positive pathogen, having the ability to cause hospital associated infection and/or community acquired infection. Hospital associated MRSA infection is one of the major problems affecting both patients and care providers1. MRSA colonization is predominantly present in the nose and skin of humans2. Nasal colonization of Staphylococcus aureus and MRSA are the independent predictors of MRSA infection3. Colonized bacteria may not cause infection. However, it can enter the body through injured skin or mucus membrane and can cause simple skin infection to life threatening bacteremia. The spectrum includes pneumonia, bacteremia, skin and soft tissue infection, pyomyositis, sepsis, osteomyelitis, necrotizing pneumonia and necrotizing fasciitis4. Though MRSA can be isolated from blood, nose, wound, urine, respiratory tract, sputum and other body fluids, the prevalence is high in wounds5. Acquiring MRSA infection is multifactorial, and the risk factors described are prolonged post-operative state, emergency admissions and prior treatment with multiple antibiotics6. Other notable treatment related factors are emergency surgery, prolonged or multiple hospital stays, use of invasive devices (catheters, surgical drains, gastric/endotracheal tubes), repeated surgeries, treatment with multiple broad-spectrum antibiotics, inpatient in a neonatal or surgical ICU and poor infection control practices7–12. The host related factors are age over 65 years, any conditions that suppress immune system function, open wound or injuries, unsanitary or crowded living conditions like dormitories or military barracks, sharing towels or other personal items7–12. Comorbidities such as diabetes mellitus (66%), hypertension (66%) and sickle cell diseases (33%) are also the threat for acquiring MRSA infection13. MRSA contaminates the hands of healthcare professionals (59.6%)7. Even the dress of healthcare professionals can spread MRSA. According to the society for Healthcare Epidemiology of America (SHEA) report (2014), HCPs opine that their attire, including footwear, is important in preventing transmission of infection14. Also, MRSA is found on hospital surfaces, disinfectant areas and reusable equipment15. Though the cleaning of patient surroundings in ICU has shown a significant reduction in MRSA, after 24 hours of cleaning, the risk of MRSA growth in the patient environment remained high16. Although some similar strains of MRSA are seen in many countries depicting international dissemination, the spread is not homogenous around the globe17. Most of the studies have been conducted in developed countries 3,12,18. No published information on risk factors of MRSA infection is traced in India. Therefore, we aimed at identifying the risk factors of MRSA infection in an Indian hospital to institute appropriate preventive measures. Conclusion: We identified that the damage to the skin and mucosal barriers such as undergoing surgical procedures and the existence of pressure or venous ulcers increase the risk of acquiring MRSA infection. Prolonged length of hospital stay and the history of recent hospitalization are the other risk factors. In addition, tracheostomy escalates the threat of MRSA infection in wounds of patients admitted to the hospital. Hence, controlling these risk factors may help in reducing the burden of infection.
Background: Methicillin Resistant Staphylococcus aureus (MRSA) causes infection in hospitals and communities. The prevalence and risk factors of MRSA infection is not homogenous across the globe. Methods: Cross-sectional case control study was conducted at a tertiary care hospital in India. The risk factors were collected using checklist from 130 MRSA and 130 Methicillin sensitive staphylococcus aureus (MSSA) infected patients. The pathogens were isolated from the wound swabs according to Clinical and Laboratory Standards Institute guidelines. Results: Both the groups were comparable in terms of age, gender, diabetic status, undergoing invasive procedures, urinary catheterization and smoking (p>0.05). Multivariate logistic regression revealed surgical treatment (OR 4.355; CI 1.03, 18.328; p=0.045), prolonged hospitalization (OR 0.307; CI 0.11, 0.832; p=0.020), tracheostomy (OR 5.298, CI 1.16, 24.298; p=0.032), pressure/venous ulcer (OR 7.205; CI 1.75, 29.606; p=0.006) and previous hospitalization (OR 2.883; CI 1.25, 6.631; p=0.013) as significant risk factors for MRSA infection. Conclusions: Surgical treatment, prolonged and history of hospitalization, having tracheostomy for ventilation and pressure/venous ulcer were the key risk factors. Therefore, special attention has to be given to the preventable risk factors while caring for hospitalized patients to prevent MRSA infection.
4,368
255
[ 20, 122, 274, 120, 54, 36, 82, 37 ]
13
[ "mrsa", "patients", "infection", "risk", "study", "hospital", "mrsa infection", "factors", "hospitalization", "risk factors" ]
[ "getting mrsa infection7", "mrsa bacteremia", "mrsa bacteremia present", "mrsa infection logistic", "staphylococcus aureus mrsa" ]
[CONTENT] MRSA | infection | India [SUMMARY]
[CONTENT] MRSA | infection | India [SUMMARY]
[CONTENT] MRSA | infection | India [SUMMARY]
[CONTENT] MRSA | infection | India [SUMMARY]
[CONTENT] MRSA | infection | India [SUMMARY]
[CONTENT] MRSA | infection | India [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anti-Bacterial Agents | Female | Hospitalization | Humans | India | Inpatients | Male | Methicillin-Resistant Staphylococcus aureus | Middle Aged | Staphylococcal Infections | Wound Infection [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anti-Bacterial Agents | Female | Hospitalization | Humans | India | Inpatients | Male | Methicillin-Resistant Staphylococcus aureus | Middle Aged | Staphylococcal Infections | Wound Infection [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anti-Bacterial Agents | Female | Hospitalization | Humans | India | Inpatients | Male | Methicillin-Resistant Staphylococcus aureus | Middle Aged | Staphylococcal Infections | Wound Infection [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anti-Bacterial Agents | Female | Hospitalization | Humans | India | Inpatients | Male | Methicillin-Resistant Staphylococcus aureus | Middle Aged | Staphylococcal Infections | Wound Infection [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anti-Bacterial Agents | Female | Hospitalization | Humans | India | Inpatients | Male | Methicillin-Resistant Staphylococcus aureus | Middle Aged | Staphylococcal Infections | Wound Infection [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anti-Bacterial Agents | Female | Hospitalization | Humans | India | Inpatients | Male | Methicillin-Resistant Staphylococcus aureus | Middle Aged | Staphylococcal Infections | Wound Infection [SUMMARY]
[CONTENT] getting mrsa infection7 | mrsa bacteremia | mrsa bacteremia present | mrsa infection logistic | staphylococcus aureus mrsa [SUMMARY]
[CONTENT] getting mrsa infection7 | mrsa bacteremia | mrsa bacteremia present | mrsa infection logistic | staphylococcus aureus mrsa [SUMMARY]
[CONTENT] getting mrsa infection7 | mrsa bacteremia | mrsa bacteremia present | mrsa infection logistic | staphylococcus aureus mrsa [SUMMARY]
[CONTENT] getting mrsa infection7 | mrsa bacteremia | mrsa bacteremia present | mrsa infection logistic | staphylococcus aureus mrsa [SUMMARY]
[CONTENT] getting mrsa infection7 | mrsa bacteremia | mrsa bacteremia present | mrsa infection logistic | staphylococcus aureus mrsa [SUMMARY]
[CONTENT] getting mrsa infection7 | mrsa bacteremia | mrsa bacteremia present | mrsa infection logistic | staphylococcus aureus mrsa [SUMMARY]
[CONTENT] mrsa | patients | infection | risk | study | hospital | mrsa infection | factors | hospitalization | risk factors [SUMMARY]
[CONTENT] mrsa | patients | infection | risk | study | hospital | mrsa infection | factors | hospitalization | risk factors [SUMMARY]
[CONTENT] mrsa | patients | infection | risk | study | hospital | mrsa infection | factors | hospitalization | risk factors [SUMMARY]
[CONTENT] mrsa | patients | infection | risk | study | hospital | mrsa infection | factors | hospitalization | risk factors [SUMMARY]
[CONTENT] mrsa | patients | infection | risk | study | hospital | mrsa infection | factors | hospitalization | risk factors [SUMMARY]
[CONTENT] mrsa | patients | infection | risk | study | hospital | mrsa infection | factors | hospitalization | risk factors [SUMMARY]
[CONTENT] mrsa | infection | 12 | factors | cause | skin | hospital | multiple | treatment | treatment multiple [SUMMARY]
[CONTENT] patients | study | mrsa | mssa | hospital | hospitalized | calculated | 130 | data | sample [SUMMARY]
[CONTENT] presence | table | factors | regression | logistic | logistic regression | odds | ci | risk factors | multiple logistic regression [SUMMARY]
[CONTENT] risk | infection | factors | risk factors | existence | controlling | skin mucosal | patients admitted hospital controlling | patients admitted hospital | controlling risk factors help [SUMMARY]
[CONTENT] mrsa | patients | infection | study | risk | hospital | mrsa infection | factors | risk factors | hospitalization [SUMMARY]
[CONTENT] mrsa | patients | infection | study | risk | hospital | mrsa infection | factors | risk factors | hospitalization [SUMMARY]
[CONTENT] Methicillin Resistant Staphylococcus ||| [SUMMARY]
[CONTENT] tertiary | India ||| 130 | 130 | Methicillin | MSSA ||| Clinical and Laboratory Standards Institute [SUMMARY]
[CONTENT] ||| 4.355 | CI | 1.03 | 18.328 | 0.307 | CI 0.11 | 0.832 | 5.298 | CI | 1.16 | 24.298 | 7.205 | CI | 1.75 | 29.606 | 2.883 | CI 1.25 | 6.631 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Methicillin Resistant Staphylococcus ||| ||| tertiary | India ||| 130 | 130 | Methicillin | MSSA ||| Clinical and Laboratory Standards Institute ||| ||| 4.355 | CI | 1.03 | 18.328 | 0.307 | CI 0.11 | 0.832 | 5.298 | CI | 1.16 | 24.298 | 7.205 | CI | 1.75 | 29.606 | 2.883 | CI 1.25 | 6.631 ||| ||| [SUMMARY]
[CONTENT] Methicillin Resistant Staphylococcus ||| ||| tertiary | India ||| 130 | 130 | Methicillin | MSSA ||| Clinical and Laboratory Standards Institute ||| ||| 4.355 | CI | 1.03 | 18.328 | 0.307 | CI 0.11 | 0.832 | 5.298 | CI | 1.16 | 24.298 | 7.205 | CI | 1.75 | 29.606 | 2.883 | CI 1.25 | 6.631 ||| ||| [SUMMARY]
Can patient-physician interview skills be implemented with peer simulated patients?
35232322
Patient-physician interviewing skills are crucial in health service delivery. It is necessary for effective care and treatment that the physician initiates the interview with the patient, takes anamnesis, collects the required information, and ends the consultation. Different methods are used to improve patient-physician interview skills before encountering actual patients. In the absence of simulated patients, peer simulation is an alternative method for carrying out the training. This study aims to show whether patient-physician interview skills training can be implemented using peer simulation in the absence of the simulated patient.
INTRODUCTION
This is a descriptive quantitative study. This research was conducted in six stages: identification of the research problem and determination of the research question, development of data collection tools, planning, acting, evaluation, and monitoring. The data were collected via the patient-physician interview videos of the students. The research team performed descriptive analysis on quantitative data and thematic analysis on qualitative data.
METHODS
Fifty students participated in the study. When performing peer-assisted simulation applications in the absence of simulated patients, the success rate in patient-physician interviews and peer-simulated patient roles was over 88%. Although the students were less satisfied with playing the peer-simulated patient role, the satisfaction towards the application was between 77.33% and 98%.
RESULTS
In patient-physician interviews, the peer-simulated patient method is an effective learning approach. There may be difficulties finding suitable simulated patients, training them, budgeting to cover the costs, planning, organizing the interviews, and solving potential issues during interviews. Our study offers an affordable solution for students to acquire patient-physician interview skills in faculties that face difficulties in providing simulated patients for training.
DISCUSSION AND CONCLUSION
[ "Clinical Competence", "Communication", "Humans", "Learning", "Patient Simulation", "Peer Group", "Physician-Patient Relations", "Physicians", "Students" ]
8896181
Introduction
Medical students need to practice patient-physician interviews to develop essential clinical communication and clinical reasoning skills and find the necessary space to apply their basic professional skills [1]. Patient-physician interviewing skills have an important place in health service delivery. A good interview is crucial for effective diagnosis and treatment. Medical educators agree that medical students should be humane and have the necessary communication skills for patient-physician interview skills. However, for years, there has been uncertainty about the ways to achieve this learning goal [2]. Having students experience a mock patient-physician interview is considered the easiest method to accomplish this goal [2]. Methods based on small group activities, such as problem-based learning, role-playing, and simulated/standardized patient simulation, are used to improve patient-physician interview skills [2,3]. Today, it is a common and accepted method to conduct patient-physician interviews with simulated/standardized patients [1,4–6]. Simulated patients can be theatre actors, professional actors, trained volunteers (retirees, students, employees, etc.). There is no evidence that the simulated patient has to be a professional actor for the interview to be efficient [4,7]. There are certain advantages and disadvantages to interviewing simulated patients. Simulated patients offer a student-centered educational opportunity that is the closest to reality without time constraints. They can impersonate different patient profiles and conditions, allowing students to experience patients and cases that are difficult to encounter in real life [4,5]. On the other hand, using simulated patients also has disadvantages related to the cost or training requirements [8]. There may be difficulties finding proper simulated patients, training them, budgeting to cover the costs, planning, organizing the interviews, and solving possible issues during interviews [4,5,7–12]. Furthermore, the need to train faculty members` for simulated patient training, the time spent on it, corporate commitments, and, most importantly, the truth that it is not a sustainable method are some other downsides [4,5]. In modern medical education, to improve patient-physician interviewing skills, it has become imperative to use modernized, affordable and sustainable models, instead of teacher-centered and expensive methods with a traditional approach. Peer-assisted learning (PAL) serves this purpose [3,13,14]. One can define PAL as knowledge and skills acquisition through active help and support among peers. Peer trainers (tutors) are non-professional teachers who, by helping their friends, help themselves as well to have a broader understanding of the topic at hand [3,14,15]. Peer-assisted learning (PAL) has long been used informally in medical education by medical educators as an auxiliary tool for learning since its inclusion among the effective models in the literature [3,13,16]. The primary advantage of PAL is economizing resources. Another advantage is that it immensely reduces the burden of the faculty member. It increases the cultivation of a lifelong learning mentality for students, leads to continuous professional development, and enhances interest in an academic career, boosting skills such as leadership, coaching, confidence, and inner motivation [13,14,16,17]. Peer simulation is presented as a new concept that increases the advantages of PAL [5]. 
Peer simulation is a structured form of role-playing in which students train to play the patient role for their peers [5]. Having peer support in peer simulation (peer simulated patient) presents many advantages offered by PAL, and it has a positive effect on learning outcomes. Students learn together and from each other through peer simulation. Peer simulation is an alternative method to using simulated patients in preclinical applications. Playing the patient role in peer simulation is an opportunity to facilitate the development of empathy and culture-sensitive medical practice skills [5]. There are very few examples of professional skills training using peer simulation [5]. According to the literature, there are no examples in Turkey yet. In the medical school, where the study was carried out, patient-physician interview skills training was implemented in the second year. The patient-physician interview skills training goal was to teach students the proper way to start the interview, take and expand the anamnesis, inform the patient, and end the interview. There are no simulated/standardized patients in this medical school. For students to gain skills, a different teaching strategy, which is low cost but meets the same function, is required. In our school, action was planned to solve this problem. Results from action are the solution to the problem. Action research, used to improve and modify educational practices, is a method that helps faculty and students better understand the work carried out in the institution. If the results are not satisfactory, researchers retry [18]. The action process is carried out in six stages (Figure 1). The first stage is ‘diagnosing,’ which means identification of the problem. The second stage is ‘reconnaissance,’ in which data collection tools are developed and the problem is analyzed and interpreted. The third stage contains the development of the action/intervention plan. The acting stage includes the implementation of the action/intervention plan. The fifth stage is the evaluation stage comprising data collection and analyzing the action/intervention. The last stage includes monitoring the data to make revisions and test the action/intervention. Figure 1.Mixed-Method Methodological Framework for Research. Mixed-Method Methodological Framework for Research. This study aims to show whether peer simulated patient-physician interview skills training can be successfully implemented to practice patient-physician interviewing skills of medical students in the absence of simulated patients.
null
null
null
null
Conclusion
In the absence of simulated patients, peer-assisted simulation can be performed to contribute to medical students’ patient-physician interview skills. To obtain better results from peer-assisted patient-physician interviews, making the following arrangements within institutions is recommended: • Organizing additional training to increase students’ ability to give constructive feedback to their peers. • Planning multicenter research studies that evaluate the institutional gains (time, cost, workforce, etc.) obtained through peer-simulated patient usage. • Ensuring the sustainability of the action research cycle by evaluating peer-simulated patient practice in the coming years. Consideration of peer-assisted simulation by educators, students and administrators will ensure that the practice becomes widespread.
[ "Methods", "Data analysis methods", "Planning", "Acting", "Ethics committee", "Results", "Discussion", "Conclusion" ]
[ "This is a descriptive quantitative study. With the descriptive methodological framework, the problem was subjected to a comprehensive initial assessment, and multiple data are collected and integrated. Thus, a more rigorous evaluation of the action was obtained [18–20]. In this study, first, the problem was defined, then data collection tools were developed with the support of literature, remedial action was planned, and finally, the developed training model was applied. The process of this research was carried out in stages and is shown in the figure (Figure 1).\nFigure 1- Descriptive Methodological Framework for Research\nThe method of the research will be presented in accordance with the stages:\nIn the literature review ‘patient-physician interview skills, peer-assisted learning, simulation, peer-simulated patient, peer simulation’ keywords were used. Applications on peer-assisted learning and peer simulation were examined in 51 studies.\nBased on the literature information, data collection tools aimed at obtaining the opinions of different parties have been developed to evaluate peer-assisted patient-physician interview skills.\ni. Physician’s Role Observation Form (PROF).\nUsing the literature, the researchers identified observational headings related to patient-physician interview skills [4, 7, 21], Katharina Eva [22], Katharina Eva [1, 23–26]. After four consecutive meetings, the researchers reached a consensus on the identified headings. An observation form on patient-physician interview skills was created by grouping the agreed items in line with their conceptual similarities.\nPROF consists of three groups (verbal communication, nonverbal communication, questioning of the main complaint) and 54 items. Each answer is rated as “0-no” for missing the objective and “1-yes” for reaching the objective.\nii. Peer Patient Observation Form (PPOF). Using the literature, the researchers identified headings related to the role of simulated patients [4,21,26]. The researchers agreed on PPOF consisting of eight items. Each answer is rated as “0-no”, “1-yes”.\niii. Satisfaction Assessment Form (SAF). The form consists of socio-demographic variables (four items), and items related to the satisfaction with the patient-physician interview (six items), and related to the peer-assisted patient-physician interview (15 items related to the physician’s role, three items related to the peer-simulated patient’s role, and three items related to the observer). All questions except two are closed-ended. Data on whether the peer-assisted patient-physician interview was beneficial was obtained by evaluating the open-ended questions of the SAF.\ni. Physician’s Role Observation Form (PROF).\nUsing the literature, the researchers identified observational headings related to patient-physician interview skills [4, 7, 21], Katharina Eva [22], Katharina Eva [1, 23–26]. After four consecutive meetings, the researchers reached a consensus on the identified headings. An observation form on patient-physician interview skills was created by grouping the agreed items in line with their conceptual similarities.\nPROF consists of three groups (verbal communication, nonverbal communication, questioning of the main complaint) and 54 items. Each answer is rated as “0-no” for missing the objective and “1-yes” for reaching the objective.\nii. Peer Patient Observation Form (PPOF). Using the literature, the researchers identified headings related to the role of simulated patients [4,21,26]. 
The researchers agreed on PPOF consisting of eight items. Each answer is rated as “0-no”, “1-yes”.\niii. Satisfaction Assessment Form (SAF). The form consists of socio-demographic variables (four items), and items related to the satisfaction with the patient-physician interview (six items), and related to the peer-assisted patient-physician interview (15 items related to the physician’s role, three items related to the peer-simulated patient’s role, and three items related to the observer). All questions except two are closed-ended. Data on whether the peer-assisted patient-physician interview was beneficial was obtained by evaluating the open-ended questions of the SAF.\n Data analysis methods \nStudent interview videos were viewed separately by researchers. Each student received grades for their roles as a physician and a patient. Accordingly, a student playing the physician’s role received a minimum score of 0 and a maximum score of 54 from the PROF. The student playing the peer-simulated patient’s role received a minimum score of 0 and a maximum score of 8 from the PPOF. The internal consistency of the scales was evaluated with the Crohnbach’s alpha coefficient. For the analysis of the results from the SAF, descriptive analysis was performed for the answers to two open-ended questions, and frequency values and means were calculated in closed-ended questions. The statistical software SPSS 24 (Statistical Package for Social Sciences for Windows 24.0) was used for calculations.\nIn addition, it is aimed that students can reach all the gains in the expressions specified in the form. Therefore, the success-satisfaction ratio of the items on the form was calculated using the formula “number of successful-satisfied answered items/total number of items*100”. This ratio was calculated for the physician’s role observation form (54 items), peer simulated patient observation form (8 items), and the peer-assisted patient-physician interview satisfaction section (21 items) of the SAF.\nStudent interview videos were viewed separately by researchers. Each student received grades for their roles as a physician and a patient. Accordingly, a student playing the physician’s role received a minimum score of 0 and a maximum score of 54 from the PROF. The student playing the peer-simulated patient’s role received a minimum score of 0 and a maximum score of 8 from the PPOF. The internal consistency of the scales was evaluated with the Crohnbach’s alpha coefficient. For the analysis of the results from the SAF, descriptive analysis was performed for the answers to two open-ended questions, and frequency values and means were calculated in closed-ended questions. The statistical software SPSS 24 (Statistical Package for Social Sciences for Windows 24.0) was used for calculations.\nIn addition, it is aimed that students can reach all the gains in the expressions specified in the form. Therefore, the success-satisfaction ratio of the items on the form was calculated using the formula “number of successful-satisfied answered items/total number of items*100”. This ratio was calculated for the physician’s role observation form (54 items), peer simulated patient observation form (8 items), and the peer-assisted patient-physician interview satisfaction section (21 items) of the SAF.\n\nStudent interview videos were viewed separately by researchers. Each student received grades for their roles as a physician and a patient. 
Accordingly, a student playing the physician’s role received a minimum score of 0 and a maximum score of 54 from the PROF. The student playing the peer-simulated patient’s role received a minimum score of 0 and a maximum score of 8 from the PPOF. The internal consistency of the scales was evaluated with the Crohnbach’s alpha coefficient. For the analysis of the results from the SAF, descriptive analysis was performed for the answers to two open-ended questions, and frequency values and means were calculated in closed-ended questions. The statistical software SPSS 24 (Statistical Package for Social Sciences for Windows 24.0) was used for calculations.\nIn addition, it is aimed that students can reach all the gains in the expressions specified in the form. Therefore, the success-satisfaction ratio of the items on the form was calculated using the formula “number of successful-satisfied answered items/total number of items*100”. This ratio was calculated for the physician’s role observation form (54 items), peer simulated patient observation form (8 items), and the peer-assisted patient-physician interview satisfaction section (21 items) of the SAF.\nStudent interview videos were viewed separately by researchers. Each student received grades for their roles as a physician and a patient. Accordingly, a student playing the physician’s role received a minimum score of 0 and a maximum score of 54 from the PROF. The student playing the peer-simulated patient’s role received a minimum score of 0 and a maximum score of 8 from the PPOF. The internal consistency of the scales was evaluated with the Crohnbach’s alpha coefficient. For the analysis of the results from the SAF, descriptive analysis was performed for the answers to two open-ended questions, and frequency values and means were calculated in closed-ended questions. The statistical software SPSS 24 (Statistical Package for Social Sciences for Windows 24.0) was used for calculations.\nIn addition, it is aimed that students can reach all the gains in the expressions specified in the form. Therefore, the success-satisfaction ratio of the items on the form was calculated using the formula “number of successful-satisfied answered items/total number of items*100”. This ratio was calculated for the physician’s role observation form (54 items), peer simulated patient observation form (8 items), and the peer-assisted patient-physician interview satisfaction section (21 items) of the SAF.\n Planning \na. Preparation of simulated patient scenarios.\nA patient scenario for history taking was created by the researchers using the literature. Scenario creation stages are as follows: the determination of learning objectives and outcomes, determination of context and content (the physician’s and patient’s roles, anamnesis information, physical environment, available source, etc.), evaluation of technical infrastructure (computer, camera, sound system), and preparation of supporting documents [27,28]. The scenario was submitted to the expert opinion and was made ready for application after making the necessary revisions. Patient scenarios, which were finalized with the feedback from expert, were prepared for information sessions with students.\nb. Conducting pilot application.\nThe pilot application was conducted with eight volunteering second-year students who had no experience with interviewing simulated patients. Information sessions were held with the volunteering students, and patient-physician interviews were planned. 
Within the scope of the pilot application, the volunteering students conducted interviews with their peers, playing the physician's and patient's roles; the interviews were video-recorded, and feedback sessions were held with the students. The video recordings were evaluated by the researchers using the data collection forms. Technical problems encountered in the pilot application (internet connection, computer screen resolution, sound quality, etc.) were resolved, and the data collection tools were revised.
c. Setting up the peer-assisted patient-physician interview.
During the 2019–2020 academic year, second-year students at Izmir Katip Çelebi University Faculty of Medicine participated in the peer-assisted patient-physician interviews. Throughout the module, each student had three different responsibilities: playing the physician's role, playing the peer-simulated patient's role, and being a peer observer. Thus, students were able to experience all components of the interview directly. Students conducted interviews, which were video-recorded; after the interview, they filled out a satisfaction form, wrote a self-assessment report, and attended a feedback session. Those playing the patient's role simulated the disease required by the role, monitored the interviewing physician, gave constructive feedback to the physician, and filled out the satisfaction form. Finally, those who acted as observers monitored the physician's performance, gave constructive feedback, and filled out the satisfaction form.
d. Planning a feedback session with students after the interviews.
Students watched the video recording of the interview, wrote the self-assessment report, and participated in the feedback session.
Acting

At this stage, the patient-physician interviews were conducted, information sessions were delivered about student responsibilities, and feedback sessions were held. Before this, second-year students who had taken the basic communication skills, clinical communication skills, and professional skills courses had a patient-physician interview at the student outpatient clinic during appointment hours. The interviews were conducted simultaneously in five outpatient clinics by teams of five people.
In these teams, one student played the physician's role, one played the peer-simulated patient's role, and three participated as observers. In subsequent interviews, the students exchanged roles: each student played the physician's and peer-simulated patient's roles once each, and the observer role three times. The student playing the physician's role was required to prepare the outpatient clinic, start the video recording, meet the patient, take the anamnesis, and make a general assessment of the patient's condition. The student playing the peer-simulated patient's role was informed that they could improvise if the answer to a question was not specified in the scenario. Observing students were required to monitor the interview and give feedback to the interviewing physician at the end. Once the interview was over, the student playing the physician's role took the video recording, wrote the self-evaluation report, and participated in the feedback session held the following week. In the feedback session, the patient-physician interview experience was evaluated using discussion, reflection, and feedback techniques. This stage was completed in March 2020.
The student interview videos were monitored and analyzed by the researchers with the PROF, PPOF, and SAF.
The findings obtained after the analysis of the data were interpreted with triangulation, and a decision was made regarding the continuation of the peer-assisted simulated patient-physician interview. All data obtained through triangulation were combined and interpreted in a table.
Ethics committee

Approval was obtained from the ICU Social Research Ethics Committee in March 2020 with decision number 2020/03–04.
Results

It was aimed to ensure that all second-year students (n = 193) participated in the patient-physician interviews. The interviews were planned to be held over eight weeks according to a schedule in which 25 students participated each week. After the first two weeks, the COVID-19 pandemic was declared by the WHO, so the remaining students were unable to hold their interviews. Thus, the interview videos of a total of 50 students were monitored by the researchers and analyzed with the PROF and PPOF. Cronbach's alpha of the PROF was found to be 0.71.
A total of 50 students (31 males and 19 females) participated in the study. The mean age of the students was 20.56 years (min: 19, max: 23).
a. In the analysis of the data obtained from the patient-physician interview video recordings (n = 50), the total score and success percentage of each student were calculated with the PROF. The mean and standard deviation of the PROF scores were 70.43 ± 9.81 (minimum 40.12, maximum 88.27). Students were expected to score at least 60 points to be considered successful; the rate of students scoring 60 or above on the PROF was 92%. The distribution of achievement scores is presented as a graph (Chart 1: students' performance grade distributions from the PROF).
When evaluating the students playing the physician's role, the headings on the PROF were examined: 96.29% of the 54 items were found to be used effectively during the observation. Students were successful in over 95% of the items on welcoming the patient, asking about the patient's demographic characteristics, making eye contact, listening to the patient's main complaints, observing the patient's profile, and asking about the patient's background. On the other hand, students achieved less than 50% success in summarizing the case, using body language, using an appropriate tone of voice, and using understandable language.
b. Students playing the peer-simulated patient's role were evaluated with the PPOF using the patient-physician interview video recordings (n = 50). Students were more than 90% successful in seven of the eight items. However, only 32% success was achieved in the eighth item, which concerns the peer patient giving feedback to the interviewing physician (Table 1).

Table 1. Peer-Simulated Patient Success Rate
PPOF item: success rate (%)
1. The peer patient focused on the script (good recall, concentrated): 91.33
2. The peer played the role of the patient well: 94.67
3. The peer patient was able to present alternative topics to those highlighted in the scenario: 95.33
4. The oral communication skills of the peer patient were appropriate (clear, understandable, consistent with the script): 99.33
5. The nonverbal communication skills of the peer patient were appropriate (body language, gestures): 99.33
6. The peer patient listened to the interviewing physician effectively: 100.00
7. The peer patient answered the interviewer's questions consistently (credible, reliable): 99.33
8. The peer patient gave effective feedback: 32.00
Total: 88.92
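As a rough illustration only, the sketch below shows how summary figures of the kind reported in this section (the mean and standard deviation of percentage scores, the share of students at or above the 60-point threshold, and per-item success rates) could be derived from 0/1 observation-form ratings. The data are randomly generated and hypothetical; the sketch does not reproduce the study's results.

import numpy as np

rng = np.random.default_rng(1)
prof = rng.integers(0, 2, size=(50, 54))   # hypothetical 0/1 PROF ratings for 50 students
ppof = rng.integers(0, 2, size=(50, 8))    # hypothetical 0/1 PPOF ratings for 50 students

# Each student's PROF result expressed as a percentage of the 54 items achieved.
percent_scores = prof.mean(axis=1) * 100
print(f"PROF mean +/- SD: {percent_scores.mean():.2f} +/- {percent_scores.std(ddof=1):.2f}")
print(f"Students at or above the 60-point threshold: {(percent_scores >= 60).mean() * 100:.0f}%")

# Per-item success rates across students, the kind of figures listed in Table 1.
for item_no, rate in enumerate(ppof.mean(axis=0) * 100, start=1):
    print(f"PPOF item {item_no}: {rate:.2f}%")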
c. Findings regarding satisfaction with the peer-assisted patient-physician interview are presented under the following headings: sociodemographic characteristics of the participants, opinions on satisfaction with the patient-physician interview, and opinions on satisfaction with the peer-assisted patient-physician interview.
After the evaluation of satisfaction with the peer-assisted patient-physician interview, it was determined that 98% (n = 49) of the students were satisfied with the peer-assisted patient-physician interview, and 84% (n = 42) were satisfied with the presence of their peers in the patient role; the remaining 16% (n = 8) stated that they would have preferred a real patient or doctor instead of their peers. It was also determined that 92% of the students wanted to re-experience the peer-assisted patient-physician interview in the coming years, and 96% found the peer-assisted patient-physician interview experience useful.
Regarding their answers to the open-ended questions, the students stated that they found it valuable to have experienced the patient-physician interview early in the undergraduate medical education process. They noted that they realized their weaknesses and what needed to be done about them. They said that it would be useful to repeat this instructive practice, that peer-assisted learning was valuable, and that it was a good opportunity for self-evaluation. On the other hand, negative remarks about the process included inexperience, excitement, personal inadequacies, lack of knowledge, finding role-playing unnecessary, and difficulty communicating with the patient.
When satisfaction with the roles in the peer-assisted patient-physician interviews was evaluated, 77.33% of the students were satisfied overall: 80.53% were satisfied with being the interviewing physician, 56.66% with being the peer-simulated patient, and 82% with being an observer.
d. All the data obtained were combined and interpreted with triangulation in a table. In the triangulation, the students playing the physician's and patient's roles were evaluated together in terms of 'success as a peer-simulated patient' and 'satisfaction with the peer-assisted patient-physician interviews' (Table 2).

Table 2. Triangulation of Patient-Physician Interview Skills Data
Merged data: rate (%)
Success rate of being an interviewing physician: 92.00
Peer-simulated patient success rate: 88.92
Patient-physician interview satisfaction rates:
  Satisfaction with the patient-physician interview: 98.00
  Satisfaction with the simulated patient being a peer: 84.00
  Interest in having a patient-physician interview in the years to come: 92.00
  Finding the patient-physician interview experience helpful: 96.00
  Finding the patient-physician interview experience useful: 77.33
  Satisfaction with being the interviewing physician: 80.53
  Satisfaction with being the peer-simulated patient: 56.66
  Satisfaction with being an observer: 82.00

In the absence of simulated patients, students achieved an over 88% success rate in the patient-physician interview and peer-simulated patient roles.
Although they were less satisfied with playing the peer-simulated patient's role, satisfaction with the peer-assisted patient-physician interviews ranged between 77.33% and 98%.

Discussion

This study was conducted to determine whether medical students' patient-physician interview skills training could be implemented through peer simulation in the absence of simulated patients.
In faculties that have difficulty providing simulated patients for patient-physician interview skills training, a different teaching strategy that serves the same function is needed so that students can gain these skills at low cost. Indeed, in this study, nearly all of the students were successful in patient-physician interviews performed with peer-simulated patients.
Previous work has found that changing a student's role during learning experiences encourages students to learn [26]. In another study conducted with peers, it was determined that patient-physician interviews contributed to the students' ability to take an anamnesis, manage emotional problems, and self-assess [5,23]. Similarly, peer simulation develops communication, empathy, trust, and professional skills [5]. In our study, we observed that students playing the physician's role were successful in starting patient interviews, taking the anamnesis, and using appropriate nonverbal communication skills. These students were evaluated through the PROF, whose Cronbach's alpha reliability coefficient was found to be 0.71. In the literature, a Cronbach's alpha reliability coefficient between 0.70 and 0.90 is interpreted as good [29].
It has also been emphasized that design features such as feedback, planned implementation, the difficulty of the simulation, clinical variation, and individualized learning should be taken into account in simulation training [1,30]. In our study, the students playing the peer-simulated patient's role failed to give effective feedback to those playing the physician's role. Although the students were trained in giving feedback, they were found to be biased. One study emphasized that peers evaluate each other generously in peer evaluation, while another stated that peers may rate each other highly in small groups (small circle collusion) or large groups (pervasive collusion) [31,32].
In studies of patient-physician interviews performed with the peer simulation method, it is reported that students can carry out the training process more easily than with simulated patients, as they themselves play the peer-simulated patient's role [5]. In our study, while playing the physician's and observer's roles was satisfying for the students, playing the peer-simulated patient's role was less so. One can speculate that they had difficulty getting into the role, as the patient-physician interview skills training using the peer-simulation method was being conducted for the first time. Students' satisfaction may increase as they become more familiar with this form of training.
During peer simulation, students contribute to each other's learning 'as patients', not by 'teaching' [5]. It has similarly been stated that students can develop the ability to conduct patient-physician interviews by observing other physicians [7]. In our study, students expressed their satisfaction with the observer's role and its contribution to their learning.
According to a systematic review of studies in which patient-physician interviews are performed with peer-simulated patients, peer simulation is an effective learning approach [5].
In our study, as a result of the evaluation of the action, the patient-physician interviews with peer-simulated patients were completed successfully.
One limitation of this study is the inability to compare the peer simulation technique with standardized patient simulation in practice, owing to the lack of standardized patient simulation at the medical school where the application was carried out. Another limitation is the inability to include all second-year students due to the pandemic.

Conclusion

In the absence of simulated patients, peer-assisted simulation can be performed to contribute to medical students' patient-physician interview skills. To obtain better results from peer-assisted patient-physician interviews, the following arrangements within institutions are recommended:
• Organizing additional training to increase students' ability to give constructive feedback to their peers,
• Planning multicenter research that evaluates the institutional gains (time, cost, workforce, etc.) obtained through the use of peer-simulated patients,
• Ensuring the sustainability of the action research cycle by evaluating peer-simulated patient practice in the coming years.
Consideration of peer-assisted simulation by educators, students, and administrators will help the practice become widespread.
[ "Medical students need to practice patient-physician interviews to develop essential clinical communication and clinical reasoning skills and find the necessary space to apply their basic professional skills [1]. Patient-physician interviewing skills have an important place in health service delivery. A good interview is crucial for effective diagnosis and treatment. Medical educators agree that medical students should be humane and have the necessary communication skills for patient-physician interview skills. However, for years, there has been uncertainty about the ways to achieve this learning goal [2]. Having students experience a mock patient-physician interview is considered the easiest method to accomplish this goal [2]. Methods based on small group activities, such as problem-based learning, role-playing, and simulated/standardized patient simulation, are used to improve patient-physician interview skills [2,3]. Today, it is a common and accepted method to conduct patient-physician interviews with simulated/standardized patients [1,4–6]. Simulated patients can be theatre actors, professional actors, trained volunteers (retirees, students, employees, etc.). There is no evidence that the simulated patient has to be a professional actor for the interview to be efficient [4,7]. There are certain advantages and disadvantages to interviewing simulated patients. Simulated patients offer a student-centered educational opportunity that is the closest to reality without time constraints. They can impersonate different patient profiles and conditions, allowing students to experience patients and cases that are difficult to encounter in real life [4,5]. On the other hand, using simulated patients also has disadvantages related to the cost or training requirements [8]. There may be difficulties finding proper simulated patients, training them, budgeting to cover the costs, planning, organizing the interviews, and solving possible issues during interviews [4,5,7–12]. Furthermore, the need to train faculty members` for simulated patient training, the time spent on it, corporate commitments, and, most importantly, the truth that it is not a sustainable method are some other downsides [4,5].\nIn modern medical education, to improve patient-physician interviewing skills, it has become imperative to use modernized, affordable and sustainable models, instead of teacher-centered and expensive methods with a traditional approach. Peer-assisted learning (PAL) serves this purpose [3,13,14]. One can define PAL as knowledge and skills acquisition through active help and support among peers. Peer trainers (tutors) are non-professional teachers who, by helping their friends, help themselves as well to have a broader understanding of the topic at hand [3,14,15]. Peer-assisted learning (PAL) has long been used informally in medical education by medical educators as an auxiliary tool for learning since its inclusion among the effective models in the literature [3,13,16]. The primary advantage of PAL is economizing resources. Another advantage is that it immensely reduces the burden of the faculty member. It increases the cultivation of a lifelong learning mentality for students, leads to continuous professional development, and enhances interest in an academic career, boosting skills such as leadership, coaching, confidence, and inner motivation [13,14,16,17]. Peer simulation is presented as a new concept that increases the advantages of PAL [5]. 
Peer simulation is a structured form of role-playing in which students train to play the patient role for their peers [5]. Having peer support in peer simulation (peer simulated patient) presents many advantages offered by PAL, and it has a positive effect on learning outcomes. Students learn together and from each other through peer simulation. Peer simulation is an alternative method to using simulated patients in preclinical applications. Playing the patient role in peer simulation is an opportunity to facilitate the development of empathy and culture-sensitive medical practice skills [5]. There are very few examples of professional skills training using peer simulation [5].\nAccording to the literature, there are no examples in Turkey yet. In the medical school, where the study was carried out, patient-physician interview skills training was implemented in the second year. The patient-physician interview skills training goal was to teach students the proper way to start the interview, take and expand the anamnesis, inform the patient, and end the interview. There are no simulated/standardized patients in this medical school. For students to gain skills, a different teaching strategy, which is low cost but meets the same function, is required.\nIn our school, action was planned to solve this problem. Results from action are the solution to the problem. Action research, used to improve and modify educational practices, is a method that helps faculty and students better understand the work carried out in the institution. If the results are not satisfactory, researchers retry [18]. The action process is carried out in six stages (Figure 1). The first stage is ‘diagnosing,’ which means identification of the problem. The second stage is ‘reconnaissance,’ in which data collection tools are developed and the problem is analyzed and interpreted. The third stage contains the development of the action/intervention plan. The acting stage includes the implementation of the action/intervention plan. The fifth stage is the evaluation stage comprising data collection and analyzing the action/intervention. The last stage includes monitoring the data to make revisions and test the action/intervention.\nFigure 1.Mixed-Method Methodological Framework for Research.\nMixed-Method Methodological Framework for Research.\nThis study aims to show whether peer simulated patient-physician interview skills training can be successfully implemented to practice patient-physician interviewing skills of medical students in the absence of simulated patients.", "This is a descriptive quantitative study. With the descriptive methodological framework, the problem was subjected to a comprehensive initial assessment, and multiple data are collected and integrated. Thus, a more rigorous evaluation of the action was obtained [18–20]. In this study, first, the problem was defined, then data collection tools were developed with the support of literature, remedial action was planned, and finally, the developed training model was applied. The process of this research was carried out in stages and is shown in the figure (Figure 1).\nFigure 1- Descriptive Methodological Framework for Research\nThe method of the research will be presented in accordance with the stages:\nIn the literature review ‘patient-physician interview skills, peer-assisted learning, simulation, peer-simulated patient, peer simulation’ keywords were used. 
Applications on peer-assisted learning and peer simulation were examined in 51 studies.\nBased on the literature information, data collection tools aimed at obtaining the opinions of different parties have been developed to evaluate peer-assisted patient-physician interview skills.\ni. Physician’s Role Observation Form (PROF).\nUsing the literature, the researchers identified observational headings related to patient-physician interview skills [4, 7, 21], Katharina Eva [22], Katharina Eva [1, 23–26]. After four consecutive meetings, the researchers reached a consensus on the identified headings. An observation form on patient-physician interview skills was created by grouping the agreed items in line with their conceptual similarities.\nPROF consists of three groups (verbal communication, nonverbal communication, questioning of the main complaint) and 54 items. Each answer is rated as “0-no” for missing the objective and “1-yes” for reaching the objective.\nii. Peer Patient Observation Form (PPOF). Using the literature, the researchers identified headings related to the role of simulated patients [4,21,26]. The researchers agreed on PPOF consisting of eight items. Each answer is rated as “0-no”, “1-yes”.\niii. Satisfaction Assessment Form (SAF). The form consists of socio-demographic variables (four items), and items related to the satisfaction with the patient-physician interview (six items), and related to the peer-assisted patient-physician interview (15 items related to the physician’s role, three items related to the peer-simulated patient’s role, and three items related to the observer). All questions except two are closed-ended. Data on whether the peer-assisted patient-physician interview was beneficial was obtained by evaluating the open-ended questions of the SAF.\ni. Physician’s Role Observation Form (PROF).\nUsing the literature, the researchers identified observational headings related to patient-physician interview skills [4, 7, 21], Katharina Eva [22], Katharina Eva [1, 23–26]. After four consecutive meetings, the researchers reached a consensus on the identified headings. An observation form on patient-physician interview skills was created by grouping the agreed items in line with their conceptual similarities.\nPROF consists of three groups (verbal communication, nonverbal communication, questioning of the main complaint) and 54 items. Each answer is rated as “0-no” for missing the objective and “1-yes” for reaching the objective.\nii. Peer Patient Observation Form (PPOF). Using the literature, the researchers identified headings related to the role of simulated patients [4,21,26]. The researchers agreed on PPOF consisting of eight items. Each answer is rated as “0-no”, “1-yes”.\niii. Satisfaction Assessment Form (SAF). The form consists of socio-demographic variables (four items), and items related to the satisfaction with the patient-physician interview (six items), and related to the peer-assisted patient-physician interview (15 items related to the physician’s role, three items related to the peer-simulated patient’s role, and three items related to the observer). All questions except two are closed-ended. Data on whether the peer-assisted patient-physician interview was beneficial was obtained by evaluating the open-ended questions of the SAF.\n Data analysis methods \nStudent interview videos were viewed separately by researchers. Each student received grades for their roles as a physician and a patient. 
Accordingly, a student playing the physician’s role received a minimum score of 0 and a maximum score of 54 from the PROF. The student playing the peer-simulated patient’s role received a minimum score of 0 and a maximum score of 8 from the PPOF. The internal consistency of the scales was evaluated with the Crohnbach’s alpha coefficient. For the analysis of the results from the SAF, descriptive analysis was performed for the answers to two open-ended questions, and frequency values and means were calculated in closed-ended questions. The statistical software SPSS 24 (Statistical Package for Social Sciences for Windows 24.0) was used for calculations.\nIn addition, it is aimed that students can reach all the gains in the expressions specified in the form. Therefore, the success-satisfaction ratio of the items on the form was calculated using the formula “number of successful-satisfied answered items/total number of items*100”. This ratio was calculated for the physician’s role observation form (54 items), peer simulated patient observation form (8 items), and the peer-assisted patient-physician interview satisfaction section (21 items) of the SAF.\nStudent interview videos were viewed separately by researchers. Each student received grades for their roles as a physician and a patient. Accordingly, a student playing the physician’s role received a minimum score of 0 and a maximum score of 54 from the PROF. The student playing the peer-simulated patient’s role received a minimum score of 0 and a maximum score of 8 from the PPOF. The internal consistency of the scales was evaluated with the Crohnbach’s alpha coefficient. For the analysis of the results from the SAF, descriptive analysis was performed for the answers to two open-ended questions, and frequency values and means were calculated in closed-ended questions. The statistical software SPSS 24 (Statistical Package for Social Sciences for Windows 24.0) was used for calculations.\nIn addition, it is aimed that students can reach all the gains in the expressions specified in the form. Therefore, the success-satisfaction ratio of the items on the form was calculated using the formula “number of successful-satisfied answered items/total number of items*100”. This ratio was calculated for the physician’s role observation form (54 items), peer simulated patient observation form (8 items), and the peer-assisted patient-physician interview satisfaction section (21 items) of the SAF.\n\nStudent interview videos were viewed separately by researchers. Each student received grades for their roles as a physician and a patient. Accordingly, a student playing the physician’s role received a minimum score of 0 and a maximum score of 54 from the PROF. The student playing the peer-simulated patient’s role received a minimum score of 0 and a maximum score of 8 from the PPOF. The internal consistency of the scales was evaluated with the Crohnbach’s alpha coefficient. For the analysis of the results from the SAF, descriptive analysis was performed for the answers to two open-ended questions, and frequency values and means were calculated in closed-ended questions. The statistical software SPSS 24 (Statistical Package for Social Sciences for Windows 24.0) was used for calculations.\nIn addition, it is aimed that students can reach all the gains in the expressions specified in the form. Therefore, the success-satisfaction ratio of the items on the form was calculated using the formula “number of successful-satisfied answered items/total number of items*100”. 
This ratio was calculated for the physician’s role observation form (54 items), peer simulated patient observation form (8 items), and the peer-assisted patient-physician interview satisfaction section (21 items) of the SAF.\nStudent interview videos were viewed separately by researchers. Each student received grades for their roles as a physician and a patient. Accordingly, a student playing the physician’s role received a minimum score of 0 and a maximum score of 54 from the PROF. The student playing the peer-simulated patient’s role received a minimum score of 0 and a maximum score of 8 from the PPOF. The internal consistency of the scales was evaluated with the Crohnbach’s alpha coefficient. For the analysis of the results from the SAF, descriptive analysis was performed for the answers to two open-ended questions, and frequency values and means were calculated in closed-ended questions. The statistical software SPSS 24 (Statistical Package for Social Sciences for Windows 24.0) was used for calculations.\nIn addition, it is aimed that students can reach all the gains in the expressions specified in the form. Therefore, the success-satisfaction ratio of the items on the form was calculated using the formula “number of successful-satisfied answered items/total number of items*100”. This ratio was calculated for the physician’s role observation form (54 items), peer simulated patient observation form (8 items), and the peer-assisted patient-physician interview satisfaction section (21 items) of the SAF.\n Planning \na. Preparation of simulated patient scenarios.\nA patient scenario for history taking was created by the researchers using the literature. Scenario creation stages are as follows: the determination of learning objectives and outcomes, determination of context and content (the physician’s and patient’s roles, anamnesis information, physical environment, available source, etc.), evaluation of technical infrastructure (computer, camera, sound system), and preparation of supporting documents [27,28]. The scenario was submitted to the expert opinion and was made ready for application after making the necessary revisions. Patient scenarios, which were finalized with the feedback from expert, were prepared for information sessions with students.\nb. Conducting pilot application.\nThe pilot application was conducted with eight volunteering second-year students who had no experience with interviewing simulated patients. Information sessions were held with the volunteering students, and patient-physician interviews were planned. Within the scope of the pilot application, volunteering students made interviews with their peers playing the physician’s role, patient’s role, interviews were video-recorded, and feedback sessions were held with students. Video recordings were evaluated by the researchers using data collection forms. Technical problems encountered in the pilot application (internet, computer screen resolution, sound quality, etc.) and data collection tools were fixed.\nc. Setting up the peer-assisted patient-physician interview.\nDuring the 2019–2020 academic year, second-year students, at Izmir Katip Çelebi University Faculty of Medicine participated in the peer-assisted patient-physician interviews. Throughout the module, a student had three different responsibilities: playing the physician’s role, playing the peer-simulated patient’s role, and being the peer observer. Thus, students were able to experience all the components of the interview directly. 
Students made interviews, which were video-recorded. After the interview, they filled out a satisfaction form, wrote a self-assessment report, and attended a feedback session. Those playing the patient’s role simulated the disease required by the role, monitored the interviewing physician, gave constructive feedback to the physician, and filled out the satisfaction form. Finally, those who acted as an observer monitored the physician’s performance, gave constructive feedback, and filled out the satisfaction form.\nd. Planning a feedback session with students after the interviews.\nStudents watched a video recording of the interview, wrote the self-assessment report, and participated in the feedback session\na. Preparation of simulated patient scenarios.\nA patient scenario for history taking was created by the researchers using the literature. Scenario creation stages are as follows: the determination of learning objectives and outcomes, determination of context and content (the physician’s and patient’s roles, anamnesis information, physical environment, available source, etc.), evaluation of technical infrastructure (computer, camera, sound system), and preparation of supporting documents [27,28]. The scenario was submitted to the expert opinion and was made ready for application after making the necessary revisions. Patient scenarios, which were finalized with the feedback from expert, were prepared for information sessions with students.\nb. Conducting pilot application.\nThe pilot application was conducted with eight volunteering second-year students who had no experience with interviewing simulated patients. Information sessions were held with the volunteering students, and patient-physician interviews were planned. Within the scope of the pilot application, volunteering students made interviews with their peers playing the physician’s role, patient’s role, interviews were video-recorded, and feedback sessions were held with students. Video recordings were evaluated by the researchers using data collection forms. Technical problems encountered in the pilot application (internet, computer screen resolution, sound quality, etc.) and data collection tools were fixed.\nc. Setting up the peer-assisted patient-physician interview.\nDuring the 2019–2020 academic year, second-year students, at Izmir Katip Çelebi University Faculty of Medicine participated in the peer-assisted patient-physician interviews. Throughout the module, a student had three different responsibilities: playing the physician’s role, playing the peer-simulated patient’s role, and being the peer observer. Thus, students were able to experience all the components of the interview directly. Students made interviews, which were video-recorded. After the interview, they filled out a satisfaction form, wrote a self-assessment report, and attended a feedback session. Those playing the patient’s role simulated the disease required by the role, monitored the interviewing physician, gave constructive feedback to the physician, and filled out the satisfaction form. Finally, those who acted as an observer monitored the physician’s performance, gave constructive feedback, and filled out the satisfaction form.\nd. Planning a feedback session with students after the interviews.\nStudents watched a video recording of the interview, wrote the self-assessment report, and participated in the feedback session\n\na. 
Acting

At this stage, the patient-physician interviews were conducted, information sessions were delivered about student responsibilities, and feedback sessions were held. Before this, second-year students who had completed the basic communication skills, clinical communication skills, and professional skills courses had a patient-physician interview at the student outpatient clinic during appointment hours. The interviews were conducted simultaneously in five outpatient clinics by teams of five students. In each team, one student played the physician’s role, one played the peer-simulated patient’s role, and three participated as observers. In subsequent interviews, the students exchanged roles: each student played the physician’s and the peer-simulated patient’s roles once each, and the observer’s role three times (one possible rotation is sketched after this paragraph). The student playing the physician’s role was required to prepare the outpatient clinic, initiate the video recording, meet the patient, take the anamnesis, and make a general assessment of the situation. The student playing the peer-simulated patient’s role was informed that they could improvise if the answer to a question was not specified in the scenario. The observing students were required to monitor the interview and give feedback to the interviewing physician at the end. Once the interview was over, the student playing the physician’s role took the video recording, wrote the self-evaluation report, and participated in the feedback session held the following week. In the feedback session, the patient-physician interview experience was evaluated using discussion, reflection, and feedback techniques. This stage was completed in March 2020.
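One way to satisfy the role-rotation constraint described above (each team member is the physician once, the peer-simulated patient once, and an observer three times over five interviews) is a simple cyclic assignment. The pairing below is an assumption for illustration, not the schedule actually used in the study.

```python
# Minimal sketch of a five-student role rotation: over five interviews each
# student is the physician once, the peer-simulated patient once, and an
# observer three times. The specific pairing is illustrative only.
students = ["S1", "S2", "S3", "S4", "S5"]

for round_no in range(len(students)):
    physician = students[round_no]
    patient = students[(round_no + 1) % len(students)]
    observers = [s for s in students if s not in (physician, patient)]
    print(f"Interview {round_no + 1}: physician={physician}, "
          f"patient={patient}, observers={observers}")
```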
Student interview videos were monitored and analyzed by the researchers using the PROF, PPOF, and SAF. The findings obtained from the analysis were interpreted through triangulation, and a decision was made regarding the continuation of the peer-assisted simulated patient-physician interviews. All data obtained through triangulation were combined and interpreted in a single table.

Ethics committee

Approval was obtained from the ICU Social Research Ethics Committee in March 2020 (decision number 2020/03-04).
Results

It was aimed to ensure that all second-year students (n = 193) participated in the patient-physician interviews. The interviews were planned to be held over eight weeks according to a schedule in which 25 students participated each week. After the first two weeks, the COVID-19 pandemic was declared by the WHO, so the remaining students were unable to complete their interviews. Thus, the interview videos of a total of 50 students were watched by the researchers and analyzed with the PROF and PPOF. The Cronbach’s alpha of the PROF was found to be 0.71.

A total of 50 students (31 males and 19 females) participated in the study. The mean age of the students was 20.56 years (min: 19, max: 23).
a. In the analysis of the data obtained from the patient-physician interview video recordings (n = 50), the total score and success percentage for each student were calculated with the PROF. The mean and standard deviation of the PROF success scores were 70.43 ± 9.81 (minimum 40.12, maximum 88.27). Students were expected to score at least 60 points to be considered successful; 92% of the students scored 60 or above on the PROF. The distribution of achievement scores is presented as a graph (Chart 1), and a worked sketch of these summary calculations follows below.

Chart 1. Students’ performance score distributions from the PROF.

When the students playing the physician’s role were evaluated against the headings of the PROF, 96.29% of the 54 items were found to be performed effectively during the observation. Students were successful in over 95% of the items on welcoming the patient, asking questions about the patient’s demographic characteristics, making eye contact, listening to the patient’s main complaints, observing the patient’s profile, and asking questions about the background. On the other hand, students achieved less than 50% success in summarizing the case, using body language, using the proper tone of voice, and using understandable language.
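As a worked illustration of these summary figures, the mean, standard deviation, and pass rate can be computed as below. The scores in the example are invented; only the pass threshold of 60 comes from the text.

```python
from statistics import mean, stdev

# Hypothetical per-student PROF success scores (the study reports a mean of
# 70.43 +/- 9.81 with a pass threshold of 60); these values are made up to
# show the calculation, not the actual data.
prof_scores = [72.2, 64.8, 81.5, 58.9, 70.4, 66.7, 77.8, 61.1]

pass_threshold = 60
pass_rate = sum(score >= pass_threshold for score in prof_scores) / len(prof_scores) * 100

print(f"mean = {mean(prof_scores):.2f}, sd = {stdev(prof_scores):.2f}")
print(f"share of students scoring >= {pass_threshold}: {pass_rate:.0f}%")
```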
b. Students playing the peer-simulated patient’s role were evaluated with the PPOF using the patient-physician interview video recordings (n = 50). Students were found to be more than 90% successful in seven of the eight items of the form; however, only 32% success was achieved in the eighth item, which concerns the peer patient giving feedback to the interviewing physician (Table 1).

Table 1. Peer-Simulated Patient Success Rate (PPOF items, % success)
1. The peer patient focused on the script (good recall, concentrated): 91.33
2. The peer played the role of the patient well: 94.67
3. The peer patient was able to present alternative topics to those highlighted in the scenario: 95.33
4. The oral communication skills of the peer patient were appropriate (clear, understandable, scripted): 99.33
5. The nonverbal communication skills of the peer patient were appropriate (body language, gestures): 99.33
6. The peer patient listened to the physician’s interview topics effectively: 100.00
7. The peer patient answered the questions of the interviewer consistently (credible, reliable): 99.33
8. The peer patient gave effective feedback: 32.00
Total: 88.92

c. Findings regarding satisfaction with the peer-assisted patient-physician interview are presented under the following headings: sociodemographic characteristics of the participants, their opinions on satisfaction with the patient-physician interview, and their opinions on satisfaction with the peer-assisted patient-physician interview.

After the evaluation of satisfaction with the peer-assisted patient-physician interview, 98% (n = 49) of the students were satisfied with the peer-assisted patient-physician interview, and 84% (n = 42) were satisfied with the presence of their peers in the patient role. The remaining 16% (n = 8) stated that they would prefer a real patient or a doctor instead of their peers. It was also determined that 92% of the students wanted to re-experience the peer-assisted patient-physician interview in the coming years, and 96% found the peer-assisted patient-physician interview experience useful.

Regarding their answers to the open-ended questions, the students stated that they found it valuable to have experienced the patient-physician interview early in the pre-graduation medical education process. They noted that they recognized their weaknesses and what needed to be done about them. They said that it would be useful to repeat this instructive practice, that peer-assisted learning was valuable, and that it was a good opportunity for self-evaluation. On the other hand, some of the negative remarks related to the process were inexperience, excitement, personal inadequacies, lack of knowledge, unnecessary role-playing, and difficulty communicating with the patient.

When satisfaction with the roles in the peer-assisted patient-physician interviews was evaluated, 77.33% of the students were satisfied overall: 80.53% were satisfied with being the interviewing physician, 56.66% with being the peer-simulated patient, and 82% with being an observer.

d. All the data obtained were combined through triangulation and interpreted in a single table. In the triangulation, the students playing the physician’s and patient’s roles were evaluated together with ‘success in being simulated patients’ and ‘satisfaction with the peer-assisted patient-physician interviews’ (Table 2).

Table 2. Triangulation of Patient-Physician Interview Skills Data (merged data, %)
Success rate of being an interviewing physician: 92.00
Peer-simulated patient success rate: 88.92
Patient-physician interview satisfaction rate:
  Satisfaction with the patient-physician interview: 98.00
  Satisfaction with the simulated patient being a peer: 84.00
  Interest in having a patient-physician interview in the years to come: 92.00
  Finding the patient-physician interview experience helpful: 96.00
  Finding the patient-physician interview experience useful: 77.33
  Satisfaction with being an interviewing physician: 80.53
  Satisfaction with being a peer-simulated patient: 56.66
  Satisfaction with being an observer: 82.00

In the absence of simulated patients, students achieved an over 88% success rate in the patient-physician interviews and the peer-simulated patient roles. Although they were less satisfied with playing the peer-simulated patient’s role, satisfaction with the peer-assisted patient-physician interviews ranged between 77.33% and 98%.
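As a small, purely illustrative sketch, the triangulated percentages reported above and in Table 2 can be merged into a single summary structure and printed together. The numbers are copied from the results; the grouping and labels are illustrative and do not reproduce the authors’ analysis.

```python
# Merge the triangulated figures from Table 2 into one summary mapping
# (illustrative only; values copied from the reported results).
triangulation = {
    "Success rate of being an interviewing physician": 92.00,
    "Peer-simulated patient success rate": 88.92,
    "Satisfaction with the patient-physician interview": 98.00,
    "Satisfaction with the simulated patient being a peer": 84.00,
    "Interest in a patient-physician interview in the years to come": 92.00,
    "Satisfaction with being an interviewing physician": 80.53,
    "Satisfaction with being a peer-simulated patient": 56.66,
    "Satisfaction with being an observer": 82.00,
}

for label, percentage in triangulation.items():
    print(f"{label:<62} {percentage:6.2f} %")
```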
Discussion

This study was conducted to determine whether medical students’ patient-physician interview skills could be practiced through peer simulation in the absence of simulated patients. In faculties facing difficulties in providing simulated patients for patient-physician interview skills training, a different teaching strategy that serves the same function is needed so that students can gain these skills at a low cost. Indeed, in this study, nearly all of the students were successful in patient-physician interviews performed with peer-simulated patients.

One study found that changing a student’s role during learning experiences encourages students to learn [26]. In another study conducted with peers, it was determined that peer patient-physician interviews contributed to the students’ ability to take an anamnesis, manage emotional problems, and self-assess [5,23]. Similarly, peer simulation develops communication, empathy, trust, and professional skills [5]. In our study, we observed that students playing the physician’s role were successful in starting patient interviews, taking the anamnesis, and using appropriate nonverbal communication skills. These students were evaluated through the PROF, whose Cronbach’s alpha reliability coefficient was 0.71. In the literature, a Cronbach’s alpha reliability coefficient between 0.70 and 0.90 is interpreted as good [29].

Earlier work has emphasized that design features such as feedback, planned implementation, the difficulty of the simulation, clinical variation, and individualized learning should be taken into account in simulation training [1,30]. In our study, it was seen that students playing the peer-simulated patient’s role failed to give feedback to those playing the physician’s role. Moreover, although the students were trained in giving feedback, they were found to be biased. One study emphasized that peers evaluate each other generously in peer evaluation, while another stated that peers may rate each other highly in small groups (small circle collusion) or large groups (pervasive collusion) [31,32].

In studies of patient-physician interviews performed with the peer simulation method, it has been reported that students can carry out the training process more easily than with simulated patients when they play the peer-simulated patient’s role [5]. In our study, while playing the physician’s and observer’s roles was satisfactory for the students, playing the peer-simulated patient’s role was less so. One can speculate that they had difficulty getting into the role, as the patient-physician interview skills training using the peer-simulation method was being conducted for the first time. Students’ satisfaction may increase as they become more familiar with this type of training.

During peer simulation, students contribute to each other’s learning ‘as patients’, not by ‘teaching’ [5]. Others similarly state that students can develop the ability to conduct patient-physician interviews by observing other physicians [7]. In our study, students expressed their satisfaction with, and the contribution to their learning of, playing the observer’s role.

According to a systematic review of studies performing patient-physician interviews with peer-simulated patients, peer simulation is an effective learning approach [5]. In our study, as a result of the evaluation of the action, the patient-physician interviews with peer-simulated patients were completed successfully.

One limitation of this study is the inability to directly compare the peer simulation technique with standardized patient simulation, owing to the lack of standardized patient simulation in the medical school where the application was carried out. Another limitation is the inability to include all second-year students due to the pandemic.

Conclusion

In the absence of simulated patients, peer-assisted simulation can be performed to contribute to medical students’ patient-physician interview skills.
To obtain better results from peer-assisted patient-physician interviews, the following arrangements within institutions are recommended:
• Organizing additional training to increase students’ ability to give constructive feedback to their peers;
• Planning multicenter research to evaluate the institutional gains (time, cost, workforce, etc.) obtained through the use of peer-simulated patients;
• Ensuring the sustainability of the action research cycle by evaluating peer-simulated patient practice in the coming years.
Consideration of peer-assisted simulation by educators, students, and administrators will help the practice become widespread.
[ "intro", null, null, null, null, null, null, null, null ]
[ "Patient-physician interview skills", "peer-assisted learning", "simulation", "peer simulated patient", "peer simulation" ]
Throughout the module, a student had three different responsibilities: playing the physician’s role, playing the peer-simulated patient’s role, and being the peer observer. Thus, students were able to experience all the components of the interview directly. Students made interviews, which were video-recorded. After the interview, they filled out a satisfaction form, wrote a self-assessment report, and attended a feedback session. Those playing the patient’s role simulated the disease required by the role, monitored the interviewing physician, gave constructive feedback to the physician, and filled out the satisfaction form. Finally, those who acted as an observer monitored the physician’s performance, gave constructive feedback, and filled out the satisfaction form. d. Planning a feedback session with students after the interviews. Students watched a video recording of the interview, wrote the self-assessment report, and participated in the feedback session a. Preparation of simulated patient scenarios. A patient scenario for history taking was created by the researchers using the literature. Scenario creation stages are as follows: the determination of learning objectives and outcomes, determination of context and content (the physician’s and patient’s roles, anamnesis information, physical environment, available source, etc.), evaluation of technical infrastructure (computer, camera, sound system), and preparation of supporting documents [27,28]. The scenario was submitted to the expert opinion and was made ready for application after making the necessary revisions. Patient scenarios, which were finalized with the feedback from expert, were prepared for information sessions with students. b. Conducting pilot application. The pilot application was conducted with eight volunteering second-year students who had no experience with interviewing simulated patients. Information sessions were held with the volunteering students, and patient-physician interviews were planned. Within the scope of the pilot application, volunteering students made interviews with their peers playing the physician’s role, patient’s role, interviews were video-recorded, and feedback sessions were held with students. Video recordings were evaluated by the researchers using data collection forms. Technical problems encountered in the pilot application (internet, computer screen resolution, sound quality, etc.) and data collection tools were fixed. c. Setting up the peer-assisted patient-physician interview. During the 2019–2020 academic year, second-year students, at Izmir Katip Çelebi University Faculty of Medicine participated in the peer-assisted patient-physician interviews. Throughout the module, a student had three different responsibilities: playing the physician’s role, playing the peer-simulated patient’s role, and being the peer observer. Thus, students were able to experience all the components of the interview directly. Students made interviews, which were video-recorded. After the interview, they filled out a satisfaction form, wrote a self-assessment report, and attended a feedback session. Those playing the patient’s role simulated the disease required by the role, monitored the interviewing physician, gave constructive feedback to the physician, and filled out the satisfaction form. Finally, those who acted as an observer monitored the physician’s performance, gave constructive feedback, and filled out the satisfaction form. d. Planning a feedback session with students after the interviews. 
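These two calculations are simple enough to reproduce outside SPSS. The sketch below is an illustrative Python version, not the study’s actual analysis code: it computes Cronbach’s alpha and the success-satisfaction ratio from a hypothetical matrix of PPOF item ratings, and the example scores are invented for demonstration only.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) matrix of item scores."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def success_ratio(n_successful_items: int, n_total_items: int) -> float:
    """Success-satisfaction ratio: successful items / total items * 100."""
    return n_successful_items / n_total_items * 100

# Hypothetical 0/1 ratings of five students on the 8 PPOF items (1 = criterion met).
ppof_scores = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [1, 0, 1, 1, 0, 1, 1, 0],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
])

print(f"Cronbach's alpha: {cronbach_alpha(ppof_scores):.2f}")
print(f"Success ratio, student 2: {success_ratio(ppof_scores[1].sum(), 8):.2f}%")
```

The same ratio function applies unchanged to the 54-item PROF and the 21-item satisfaction section of the SAF.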
Results: The aim was for all second-year students (n = 193) to participate in the patient-physician interviews, which were scheduled over eight weeks with 25 students taking part each week. After the first two weeks, the COVID-19 pandemic was declared by the WHO, so the remaining students were unable to complete their interviews. The interview videos of a total of 50 students were therefore monitored by the researchers and analyzed with the PROF and PPOF. The Cronbach’s alpha of the PROF was 0.71. Fifty students (31 males and 19 females) participated in the study; their mean age was 20.56 years (min 19, max 23). a. In the analysis of the patient-physician interview video recordings (n = 50), a total score and success percentage were calculated for each student with the PROF. The mean ± standard deviation of the PROF scores was 70.43 ± 9.81 (minimum 40.12, maximum 88.27). Students were expected to score at least 60 points to be considered successful.
The rate of students scoring 60 or above on the PROF was 92%. The distribution of achievement scores is presented as a graph (Chart 1. Students’ performance grade distributions from the PROF). When the students playing the physician’s role were evaluated against the PROF headings, 96.29% of the 54 items were found to be used effectively during the observation. Students were successful in over 95% of the items on welcoming the patient, asking questions about the patient’s demographic characteristics, making eye contact, listening to the patient’s main complaints, observing the patient’s profile, and asking questions about the background. On the other hand, students achieved less than 50% success in summarizing the case, using body language, using a proper tone of voice, and using understandable language. b. Students playing the peer-simulated patient’s role were evaluated with the PPOF from the patient-physician interview video recordings (n = 50). Students were more than 90% successful in seven of the eight items of the form, but only 32% success was achieved in the eighth item, which concerns the peer patient giving feedback to the interviewing physician (Table 1).

Table 1. Peer-Simulated Patient Success Rate (PPOF items, %):
1. The peer patient focused on the script (good recall, concentrated): 91.33
2. The peer played the patient role well: 94.67
3. The peer patient was able to present alternative topics to those highlighted in the scenario: 95.33
4. The verbal communication skills of the peer patient were appropriate (clear, understandable, on script): 99.33
5. The nonverbal communication skills of the peer patient were appropriate (body language, gestures, facial expressions): 99.33
6. The peer patient listened to the physician’s interview topics effectively: 100.00
7. The peer patient answered the interviewer’s questions consistently (credible, reliable): 99.33
8. The peer patient gave effective feedback: 32.00
Total: 88.92

c. Findings regarding satisfaction with the peer-assisted patient-physician interview are presented under the following headings: sociodemographic characteristics of the participants, opinions on satisfaction with the patient-physician interview, and opinions on satisfaction with the peer-assisted patient-physician interview. It was determined that 98% (n = 49) of the students were satisfied with the peer-assisted patient-physician interview and 84% (n = 42) were satisfied with having a peer in the patient role; the remaining 16% (n = 8) stated that they would prefer a real patient or a doctor instead of a peer. In addition, 92% of the students wanted to re-experience the peer-assisted patient-physician interview in the coming years, and 96% found the experience useful. In their answers to the open-ended questions, the students stated that they found it valuable to experience the patient-physician interview early in undergraduate medical education, and that they recognized their weaknesses and what needed to be done about them.
They said that it would be useful to repeat this instructive practice, that peer-assisted learning was valuable, and that it was a good opportunity for self-evaluation. On the other hand, the negative remarks about the process concerned inexperience, excitement, personal inadequacies, lack of knowledge, unnecessary role-playing, and difficulty communicating with the patient. When satisfaction with the peer-assisted patient-physician interviews was evaluated, 77.33% of the students were satisfied overall: 80.53% were satisfied with being the interviewing physician, 56.66% with being the peer-simulated patient, and 82% with being an observer in the interviews. d. All data were combined and interpreted by triangulation in a table. In the triangulation, the students playing the physician’s and patient’s roles were evaluated together in terms of ‘success in being simulated patients’ and ‘satisfaction with the peer-assisted patient-physician interviews’ (Table 2).

Table 2. Triangulation of Patient-Physician Interview Skills Data (merged data, %):
Success rate of being an interviewing physician: 92.00
Peer-simulated patient success rate: 88.92
Satisfaction with the patient-physician interview: 98.00
Satisfaction with the simulated patient being a peer: 84.00
Interest in having a patient-physician interview in the years to come: 92.00
Finding the patient-physician interview experience helpful: 96.00
Finding the patient-physician interview experience useful: 77.33
Satisfaction with being the interviewing physician: 80.53
Satisfaction with being the peer-simulated patient: 56.66
Satisfaction with being an observer: 82.00

In the absence of simulated patients, students achieved an over 88% success rate in the patient-physician interviews and the peer-simulated patient roles. Although they were less satisfied with playing the peer-simulated patient’s role, satisfaction with the peer-assisted patient-physician interviews ranged from 77.33% to 98%.

Discussion: This study was conducted to determine whether medical students’ patient-physician interview skills training could be implemented with peer simulation in the absence of simulated patients. Faculties that have difficulty providing simulated patients for patient-physician interview skills training need a different, low-cost teaching strategy that serves the same function so that students can still gain these skills. Indeed, in this study, nearly all of the students were successful in the patient-physician interviews performed with peer-simulated patients. One study found that changing a student’s role during learning experiences encourages students to learn [26]. In another study conducted with peers, patient-physician interviews contributed to the students’ ability to take an anamnesis, manage emotional problems, and self-assess [5,23]. Similarly, peer simulation develops communication, empathy, trust, and professional skills [5]. In our study, we observed that students playing the physician’s role were successful in starting patient interviews, taking the anamnesis, and using appropriate nonverbal communication skills. These students were evaluated with the PROF, whose Cronbach’s alpha reliability coefficient was 0.71.
In the literature, a Cronbach’s alpha reliability coefficient between 0.70 and 0.90 is interpreted as good [29]. Previous studies emphasized that design features such as feedback, planned implementation, the difficulty of the simulation, clinical variation, and individualized learning should be taken into account in simulation training [1,30]. In our study, students playing the peer-simulated patient’s role largely failed to give feedback to those playing the physician’s role; although the students had been trained in giving feedback, they were found to be biased. One study emphasized that peers evaluate each other generously in peer evaluation, while another stated that peers may rate each other highly in small groups (small circle collusion) or large groups (pervasive collusion) [31,32]. Studies of patient-physician interviews performed with the peer simulation method report that students can carry out the training process more easily than with simulated patients because they also play the peer-simulated patient’s role [5]. In our study, while playing the physician’s and observer’s roles was satisfactory for the students, playing the peer-simulated patient’s role was less so. One can speculate that they had difficulty getting into the role because the patient-physician interview skills training with the peer-simulation method was being conducted for the first time; students’ satisfaction may increase as they become more familiar with it. During peer simulation, students contribute to each other’s learning ‘as patients’, not by ‘teaching’ [5]. It has similarly been stated that students can develop the ability to conduct patient-physician interviews by observing other physicians [7]. In our study, students expressed their satisfaction with playing the observer’s role and its contribution to their learning. According to a systematic review of studies performing patient-physician interviews with peer-simulated patients, peer simulation is an effective learning approach [5]. In our study, the evaluation of the action showed that the patient-physician interviews with peer-simulated patients were successfully completed. One limitation of this study is that the peer simulation technique could not be compared in practice with standardized patient simulation, because standardized patient simulation is not available in the medical school where the application was carried out. Another limitation is that not all second-year students could be included due to the pandemic.

Conclusion: In the absence of simulated patients, peer-assisted simulation can be performed to contribute to medical students’ patient-physician interview skills. To obtain better results from peer-assisted patient-physician interviews, the following arrangements within institutions are recommended: • Organizing additional training to increase students’ ability to give constructive feedback to their peers, • Planning multicenter research that evaluates the institutional gains (time, cost, workforce, etc.) obtained through peer-simulated patient use, • Ensuring the sustainability of the action research cycle by evaluating peer-simulated patient practice in the coming years. Consideration of peer-assisted simulation by educators, students, and administrators will ensure that the practice becomes widespread.
Background: Patient-physician interviewing skills are crucial in health service delivery. It is necessary for effective care and treatment that the physician initiates the interview with the patient, takes the anamnesis, collects the required information, and ends the consultation. Different methods are used to improve patient-physician interview skills before encountering actual patients. In the absence of simulated patients, peer simulation is an alternative method for carrying out the training. This study aims to show whether patient-physician interview skills training can be implemented using peer simulation in the absence of simulated patients. Methods: This is a descriptive quantitative study. The research was conducted in six stages: identification of the research problem and determination of the research question, development of data collection tools, planning, acting, evaluation, and monitoring. The data were collected via the patient-physician interview videos of the students. The research team performed descriptive analysis on the quantitative data and thematic analysis on the qualitative data. Results: Fifty students participated in the study. When performing peer-assisted simulation in the absence of simulated patients, the success rate in patient-physician interviews and peer-simulated patient roles was over 88%. Although the students were less satisfied with playing the peer-simulated patient role, satisfaction with the application ranged between 77.33% and 98%. Conclusions: In patient-physician interviews, the peer-simulated patient method is an effective learning approach. There may be difficulties finding suitable simulated patients, training them, budgeting to cover the costs, planning, organizing the interviews, and solving potential issues during interviews. Our study offers an affordable way for students to gain patient-physician interview skills in faculties facing difficulties with providing simulated patients for training.
Introduction: Medical students need to practice patient-physician interviews to develop essential clinical communication and clinical reasoning skills and find the necessary space to apply their basic professional skills [1]. Patient-physician interviewing skills have an important place in health service delivery. A good interview is crucial for effective diagnosis and treatment. Medical educators agree that medical students should be humane and have the communication skills necessary for patient-physician interviews. However, for years, there has been uncertainty about the ways to achieve this learning goal [2]. Having students experience a mock patient-physician interview is considered the easiest method to accomplish this goal [2]. Methods based on small group activities, such as problem-based learning, role-playing, and simulated/standardized patient simulation, are used to improve patient-physician interview skills [2,3]. Today, it is a common and accepted method to conduct patient-physician interviews with simulated/standardized patients [1,4–6]. Simulated patients can be theatre actors, professional actors, or trained volunteers (retirees, students, employees, etc.). There is no evidence that the simulated patient has to be a professional actor for the interview to be efficient [4,7]. There are certain advantages and disadvantages to interviewing simulated patients. Simulated patients offer a student-centered educational opportunity that is the closest to reality without time constraints. They can impersonate different patient profiles and conditions, allowing students to experience patients and cases that are difficult to encounter in real life [4,5]. On the other hand, using simulated patients also has disadvantages related to cost and training requirements [8]. There may be difficulties finding proper simulated patients, training them, budgeting to cover the costs, planning, organizing the interviews, and solving possible issues during interviews [4,5,7–12]. Furthermore, the need to train faculty members for simulated patient training, the time spent on it, corporate commitments, and, most importantly, the fact that it is not a sustainable method are some other downsides [4,5]. In modern medical education, to improve patient-physician interviewing skills, it has become imperative to use modernized, affordable, and sustainable models instead of teacher-centered, expensive methods with a traditional approach. Peer-assisted learning (PAL) serves this purpose [3,13,14]. One can define PAL as knowledge and skills acquisition through active help and support among peers. Peer trainers (tutors) are non-professional teachers who, by helping their friends, also help themselves to gain a broader understanding of the topic at hand [3,14,15]. PAL has long been used informally in medical education by medical educators as an auxiliary tool for learning since its inclusion among the effective models in the literature [3,13,16]. The primary advantage of PAL is economizing resources. Another advantage is that it immensely reduces the burden on the faculty member. It cultivates a lifelong learning mentality in students, leads to continuous professional development, and enhances interest in an academic career, boosting skills such as leadership, coaching, confidence, and inner motivation [13,14,16,17]. Peer simulation is presented as a new concept that increases the advantages of PAL [5].
Peer simulation is a structured form of role-playing in which students train to play the patient role for their peers [5]. Having peer support in peer simulation (the peer-simulated patient) offers many of the advantages of PAL and has a positive effect on learning outcomes. Students learn together and from each other through peer simulation. Peer simulation is an alternative to using simulated patients in preclinical applications, and playing the patient role is an opportunity to facilitate the development of empathy and culture-sensitive medical practice skills [5]. There are very few examples of professional skills training using peer simulation [5], and according to the literature, there are no examples in Turkey yet. In the medical school where the study was carried out, patient-physician interview skills training was implemented in the second year. The goal of this training was to teach students the proper way to start the interview, take and expand the anamnesis, inform the patient, and end the interview. There are no simulated/standardized patients in this medical school, so a different teaching strategy, which is low cost but serves the same function, is required for students to gain these skills. In our school, an action was planned to solve this problem, and the results of the action constitute the solution to the problem. Action research, used to improve and modify educational practices, is a method that helps faculty and students better understand the work carried out in the institution; if the results are not satisfactory, researchers retry [18]. The action process is carried out in six stages (Figure 1). The first stage is ‘diagnosing,’ which means identification of the problem. The second stage is ‘reconnaissance,’ in which data collection tools are developed and the problem is analyzed and interpreted. The third stage contains the development of the action/intervention plan. The acting stage includes the implementation of the action/intervention plan. The fifth stage is the evaluation stage, comprising data collection and analysis of the action/intervention. The last stage includes monitoring the data to make revisions and test the action/intervention. Figure 1. Mixed-Method Methodological Framework for Research. This study aims to show whether peer-simulated patient-physician interview skills training can be successfully implemented to practice the patient-physician interviewing skills of medical students in the absence of simulated patients.
9,426
335
[ 4459, 494, 938, 353, 24, 1210, 691, 137 ]
9
[ "patient", "physician", "students", "peer", "interview", "role", "patient physician", "simulated", "interviews", "feedback" ]
[ "interviewing skills medical", "patient role interviews", "physician interview student", "interviewing simulated patients", "physician interviews simulated" ]
null
null
[CONTENT] Patient-physician interview skills | peer-assisted learning | simulation | peer simulated patient | peer simulation [SUMMARY]
null
null
[CONTENT] Patient-physician interview skills | peer-assisted learning | simulation | peer simulated patient | peer simulation [SUMMARY]
[CONTENT] Patient-physician interview skills | peer-assisted learning | simulation | peer simulated patient | peer simulation [SUMMARY]
[CONTENT] Patient-physician interview skills | peer-assisted learning | simulation | peer simulated patient | peer simulation [SUMMARY]
[CONTENT] Clinical Competence | Communication | Humans | Learning | Patient Simulation | Peer Group | Physician-Patient Relations | Physicians | Students [SUMMARY]
null
null
[CONTENT] Clinical Competence | Communication | Humans | Learning | Patient Simulation | Peer Group | Physician-Patient Relations | Physicians | Students [SUMMARY]
[CONTENT] Clinical Competence | Communication | Humans | Learning | Patient Simulation | Peer Group | Physician-Patient Relations | Physicians | Students [SUMMARY]
[CONTENT] Clinical Competence | Communication | Humans | Learning | Patient Simulation | Peer Group | Physician-Patient Relations | Physicians | Students [SUMMARY]
[CONTENT] interviewing skills medical | patient role interviews | physician interview student | interviewing simulated patients | physician interviews simulated [SUMMARY]
null
null
[CONTENT] interviewing skills medical | patient role interviews | physician interview student | interviewing simulated patients | physician interviews simulated [SUMMARY]
[CONTENT] interviewing skills medical | patient role interviews | physician interview student | interviewing simulated patients | physician interviews simulated [SUMMARY]
[CONTENT] interviewing skills medical | patient role interviews | physician interview student | interviewing simulated patients | physician interviews simulated [SUMMARY]
[CONTENT] patient | physician | students | peer | interview | role | patient physician | simulated | interviews | feedback [SUMMARY]
null
null
[CONTENT] patient | physician | students | peer | interview | role | patient physician | simulated | interviews | feedback [SUMMARY]
[CONTENT] patient | physician | students | peer | interview | role | patient physician | simulated | interviews | feedback [SUMMARY]
[CONTENT] patient | physician | students | peer | interview | role | patient physician | simulated | interviews | feedback [SUMMARY]
[CONTENT] skills | patient | medical | pal | simulated | peer | patients | action | simulation | stage [SUMMARY]
null
null
[CONTENT] peer | peer assisted simulation | assisted simulation | patient | practice | assisted | peer assisted | simulation | students | simulated [SUMMARY]
[CONTENT] patient | physician | peer | students | simulated | patient physician | interview | role | interviews | feedback [SUMMARY]
[CONTENT] patient | physician | peer | students | simulated | patient physician | interview | role | interviews | feedback [SUMMARY]
[CONTENT] ||| ||| ||| ||| [SUMMARY]
null
null
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| ||| ||| six ||| ||| ||| ||| Fifty ||| 88% ||| between 77.33% and 98% ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| ||| ||| six ||| ||| ||| ||| Fifty ||| 88% ||| between 77.33% and 98% ||| ||| ||| [SUMMARY]
Feline adipose tissue-derived mesenchymal stem cells pretreated with IFN-γ enhance immunomodulatory effects through the PGE₂ pathway.
33774932
Preconditioning with inflammatory stimuli is used to improve the secretion of anti-inflammatory agents in stem cells from various species such as mouse, human, and dog. However, there are only a few studies on feline stem cells.
BACKGROUND
To assess the interaction of lymphocytes and macrophages with IFN-γ-pretreated fAT-MSCs, mouse splenocytes and RAW 264.7 cells were cultured with the conditioned media from IFN-γ-pretreated MSCs.
METHODS
Pretreatment with IFN-γ increased the gene expression levels of cyclooxygenase-2, indoleamine 2,3-dioxygenase, hepatocyte growth factor, and transforming growth factor-beta 1 in the MSCs. The conditioned media from IFN-γ-pretreated MSCs increased the expression of M2 macrophage markers and regulatory T-cell markers compared with conditioned media from naive MSCs. Further, the prostaglandin E₂ (PGE₂) inhibitor NS-398 attenuated the immunoregulatory potential of the MSCs, suggesting that the increase in PGE₂ induced by IFN-γ stimulation is a crucial factor in the immune regulatory capacity of MSCs pretreated with IFN-γ.
RESULTS
IFN-γ pretreatment improves the immune regulatory profile of fAT-MSCs mainly via the secretion of PGE₂, which induces macrophage polarization and increases regulatory T-cell numbers.
CONCLUSIONS
[ "Animals", "Cats", "Dinoprostone", "Female", "Gene Expression Regulation", "Immunomodulation", "Interferon-gamma", "Mesenchymal Stem Cells", "Mice", "RAW 264.7 Cells" ]
8007449
INTRODUCTION
Feline mesenchymal stem cells (MSCs) have anti-inflammatory effects and immunomodulatory functions that make them attractive as novel cell-based therapeutics for immune-mediated and inflammatory diseases such as gingivostomatitis, chronic kidney disease, and asthma [1,2,3]. In addition, we previously showed the mechanisms underlying the anti-inflammatory functions of feline MSCs [4,5]. However, few studies have focused on enhancing the immunoregulatory effects of feline MSCs and on the mechanisms involved. One strategy to improve the immunoregulatory capacity of MSCs is to pre-treat them with interferon-gamma (IFN-γ) [6]. IFN-γ is a pro-inflammatory cytokine secreted by natural killer cells and T cells which acts on macrophages and lymphocytes [7]. Previous studies suggested that MSCs stimulated with IFN-γ have upregulated immune suppressive functions and show changes in the expression of immunomodulatory factors [8]. Recently, Parys et al. [9] reported that feline MSCs stimulated with IFN-γ showed significantly increased secretion of immunomodulatory factors such as indoleamine 2,3-dioxygenase (IDO), programmed death-ligand 1, interleukin (IL)-6, cyclooxygenase-2 (COX2), and hepatocyte growth factor (HGF). In that study, however, the underlying anti-inflammatory mechanisms were not identified. Therefore, the aim of our study was to evaluate the enhancement of the immunomodulatory effects of feline MSCs pre-treated with IFN-γ. In addition, we assessed the mechanisms by which this immunoregulation is induced.
null
null
RESULTS
Characterization of fAT-MSCs: Cells isolated from feline adipose tissue were characterized by immunophenotyping and multilineage differentiation. Three days after seeding, the cultured cells exhibited a fibroblast-like morphology, and there was no difference in the morphology of fAT-MSCs treated with or without IFN-γ (Fig. 1A). In addition, IFN-γ-treated fAT-MSCs showed normal cell viability (Fig. 1B). Flow cytometric analyses showed that only a few of the naïve or IFN-γ-primed fAT-MSCs expressed the hematopoietic markers CD34 and CD45, whereas more than 95% expressed the known MSC markers CD90, CD44, CD9, and CD105 (Fig. 1C). When specific differentiation media were used, the fAT-MSCs differentiated into osteocytes, adipocytes, and chondrocytes, and the osteogenic and chondrogenic differentiation capacities were enhanced in IFN-γ-treated fAT-MSCs compared with naïve fAT-MSCs (Fig. 1D). Additionally, 48 h of IFN-γ treatment upregulated the expression of the immunomodulatory factors COX2, IDO, TGF-β, and HGF in fAT-MSCs (Fig. 1E). (Figure 1 abbreviations: IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; HGF, hepatocyte growth factor; COX2, cyclooxygenase-2; IDO, indoleamine 2,3-dioxygenase; ns, not significant; TGF-β, transforming growth factor-β. *p < 0.05, **p < 0.01 by one-way analysis of variance.)

Immunomodulatory effects of IFN-γ-primed fAT-MSCs: To determine the immunomodulatory capacity of IFN-γ-treated fAT-MSCs, we quantified the mRNA expression of anti- and pro-inflammatory cytokines secreted by RAW 264.7 cells. Tumor necrosis factor-alpha (TNF-α), IL-1β, IL-6, and inducible nitric oxide synthase (iNOS) were upregulated in RAW 264.7 cells stimulated with LPS compared with unstimulated RAW 264.7 cells. When LPS-stimulated RAW 264.7 cells were cultured with fAT-MSC- or IFN-γ-treated fAT-MSC-conditioned media, TNF-α, IL-1β, IL-6, and iNOS expression levels were reduced. We also measured the mRNA expression of the M2 markers IL-10 and arginase; the levels of these M2 markers were significantly higher in LPS-stimulated RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media than in those cultured in untreated fAT-MSC-conditioned media (Fig. 2A). Additionally, we performed immunofluorescence staining on RAW 264.7 cells to determine whether the M2 macrophage population increased when the cells were cultured in fAT-MSC- and IFN-γ-treated fAT-MSC-conditioned media. Quantitative analysis of CD206+ and CD11b+ RAW 264.7 cells demonstrated a significant increase in CD206+ cells among RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media compared with those cultured in fAT-MSC-conditioned media or NS-398-treated, IFN-γ-stimulated fAT-MSC-conditioned media (Fig. 2B). To determine the ability to induce T-cell regulation, we measured the mRNA expression of inflammatory cytokines secreted by splenocytes. Con A stimulation increased the expression of IL-1β, IFN-γ, and IL-17 in the splenocytes; however, Con A-stimulated splenocytes cultured in IFN-γ-treated fAT-MSC-conditioned media showed decreased expression of IL-1β, IFN-γ, and IL-17 compared with those cultured in untreated fAT-MSC-conditioned media. Conversely, IL-10 and FOXP3 expression levels were increased in Con A-stimulated splenocytes cultured in IFN-γ-treated fAT-MSC-conditioned media compared with those cultured in untreated fAT-MSC-conditioned media (Fig. 2C). (Figure 2 abbreviations: IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; LPS, lipopolysaccharide; TGF-β, transforming growth factor-β; IL, interleukin; Arg, arginase; iNOS, inducible nitric oxide synthase; MSC, mesenchymal stem cell; ns, not significant; FOXP3, forkhead box protein P3. *p < 0.05, **p < 0.01, ***p < 0.001 by one-way analysis of variance.)

PGE2 concentration levels in various fAT-MSC-conditioned media: A PGE2 ELISA kit was used to measure PGE2 levels in the conditioned media obtained from fAT-MSCs treated or untreated with IFN-γ and from NS-398-treated fAT-MSCs stimulated with IFN-γ. First, we confirmed that fAT-MSCs treated with NS-398 (5 μM) show normal cell viability (Supplementary Fig. 1). PGE2 was significantly increased in IFN-γ-treated fAT-MSC-conditioned media compared with untreated fAT-MSC-conditioned media and NS-398-treated fAT-MSC-conditioned media with or without IFN-γ pretreatment (Fig. 3). (Figure 3 abbreviations: PGE2, prostaglandin E2; IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell. ***p < 0.001 by one-way analysis of variance.)

Macrophage polarization and T-cell regulation are associated with PGE2: To determine the role of PGE2 in the immunomodulatory capacity of fAT-MSCs, we compared pro- and anti-inflammatory markers in RAW 264.7 cells and splenocytes. Flow cytometry revealed an increase in CD11c+ RAW 264.7 cells with LPS stimulation, which was decreased by culturing in IFN-γ-treated fAT-MSC-conditioned media, and a decrease in CD206+ RAW 264.7 cells with LPS stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media (Fig. 4). Pre-treating the IFN-γ-treated fAT-MSCs with the PGE2 inhibitor NS-398 attenuated the effect of the conditioned media on RAW 264.7 cells, significantly increasing CD11c+ and decreasing CD206+ cells (Fig. 4). (Figure 4 abbreviations: PGE2, prostaglandin E2; IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; LPS, lipopolysaccharide; MSC, mesenchymal stem cell; ns, not significant; PE, phycoerythrin; FITC, fluorescein isothiocyanate; SSC, sideward scatter; APC, allophycocyanin; Con A, concanavalin A. **p < 0.01, ***p < 0.001 by one-way analysis of variance.)

PGE2 secreted by IFN-γ-treated fAT-MSCs is critical for their immune function: To determine the role of PGE2, we treated the IFN-γ-treated fAT-MSCs with the PGE2 inhibitor NS-398 and then compared the mRNA expression of inflammatory cytokines secreted by RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media or NS-398-treated, IFN-γ-treated fAT-MSC-conditioned media. First, we confirmed that 5 μM NS-398 effectively inhibits PGE2 secretion in fAT-MSCs without affecting cell viability (data not shown). TNF-α, IL-1β, and IL-6 expression levels increased with LPS stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media, while IL-10 levels decreased after LPS stimulation, which was also reversed by the conditioned media. However, pre-treating the fAT-MSCs with NS-398 attenuated these effects of the IFN-γ-treated fAT-MSC-conditioned media (Fig. 5). To confirm the role of PGE2 in T-cell regulation, we determined the mRNA expression of immune cytokines secreted by splenocytes. IL-1β, IL-17, and IFN-γ were increased after Con A stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media; pre-treating the fAT-MSCs with NS-398 attenuated this effect and increased IL-1β, IL-17, and IFN-γ levels. IL-10 was decreased after Con A stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media; pre-treating the IFN-γ-treated fAT-MSCs with NS-398 attenuated this effect and decreased IL-10 levels (Fig. 5). (Figure 5 abbreviations: PGE2, prostaglandin E2; IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; IL, interleukin; TNF-α, tumor necrosis factor-alpha; LPS, lipopolysaccharide. *p < 0.05, **p < 0.01 by one-way analysis of variance.)
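Throughout these results, group differences are assessed with one-way analysis of variance. As a rough illustration only (not the authors' analysis code), a comparison of, say, relative IL-10 expression across three conditioned-media groups could be run as in the sketch below; the replicate values are invented for demonstration.

```python
from scipy import stats

# Hypothetical relative IL-10 expression (arbitrary units) in replicate wells.
lps_only = [0.9, 1.1, 1.0, 0.8]           # LPS-stimulated cells, no conditioned media
naive_msc_cm = [1.6, 1.8, 1.5, 1.7]       # plus naive fAT-MSC-conditioned media
ifn_primed_msc_cm = [2.4, 2.6, 2.3, 2.5]  # plus IFN-γ-primed fAT-MSC-conditioned media

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(lps_only, naive_msc_cm, ifn_primed_msc_cm)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")
# In practice, a post-hoc test (e.g., Tukey's HSD) would follow to identify which groups differ.
```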
null
null
[ "Isolation and characterization of feline adipose-tissue derived (fAT)-MSCs", "IFN-γ stimulation", "Prostaglandin E2 (PGE2) inhibitor, NS-398 treatment", "Immune cells were cultured with conditioned media obtained from fAT-MSCs", "RNA extraction, cDNA synthesis, and RT-qPCR", "Enzyme-linked immunosorbent assay (ELISA)", "Flow cytometric analysis of mouse immune cells", "Immunofluorescence analyses", "Statistical analyses", "Characterization of fAT-MSCs", "Immunomodulatory effects of IFN-γ primed fAT-MSCs", "PGE2 concentration levels in various fAT-MSC-conditioned media", "Macrophage polarization and T-cell regulation are associated with PGE2", "PGE2 secreted by IFN-γ treated fAT-MSCs is critical for their immune function" ]
[ "Feline adipose tissues were obtained from 3 healthy cats during ovariohysterectomy at the Seoul National University Veterinary Medical Teaching Hospital, and their owners provided informed consent for research use. All procedures were approved by the Institutional Animal Care and Use Committee of Seoul National University (protocol No. SNU-190411-10), and the protocols were performed in accordance with approved guidelines. fAT-MSCs were cultured in Dulbecco's Modified Eagle Medium (DMEM; PAN Biotech, Germany) supplemented with 20% ​​fetal bovine serum (FBS; PAN Biotech) and 1% (w/v) penicillin-streptomycin (PS; PAN Biotech) and incubated at 37°C in a 5% CO2 humidified atmosphere. The medium was changed every 2 days until the cells reached a confluence of 70%–80%. Previous studies used MSCs in the third passage or fourth passage, suggesting that early passage cells are more effective [10]. Therefore, we used 3–4 passages for fAT-MSCs. After adhering to culture plates and achieving a fibroblast-like morphology, the fAT-MSCs were characterized by flow cytometry using antibodies against the following proteins–CD44 (antibody clone IM7, fluorescein isothiocyanate [FITC] rat anti-mouse/human, 103021; Biolegend, USA), CD34 (antibody clone 1H6, phycoerythrin [PE] mouse anti-dog, 559369; BD Biosciences, USA), CD45 (antibody clone YKIX716.13, FITC rat anti-dog; eBioscience, USA), CD90 (antibody clone 5E10, PE mouse anti-human, 555596; BD Biosciences), and CD105 (antibody clone SN6, FITC mouse anti-human, MCA1557F; AbD Serotec, USA). Cell fluorescence was analyzed with FACS Aria II (BD Biosciences). In addition, special differentiation kits (Stem Pro osteogenesis differentiation, Stem Pro adipogenesis differentiation, and Stem Pro chondrogenesis differentiation kits; all from Gibco/Life Technologies, USA) were used for evaluating cellular differentiation following the manufacturer's instructions.", "To assess the effects of INF-γ on the fAT-MSCs, 5 × 105 fAT-MSCs were seeded in 12-well plates in DMEM medium supplemented with 20% FBS and 1% PS. After 24 h, the fAT-MSCs were treated with 50 ng/mL INF-γ (feline recombinant protein; Kingfisher Biotech, USA) for 48 h. The cells were then washed 4 times with Dulbecco's phosphate-buffered saline (DPBS) and were then maintained in DMEM supplemented with 20% FBS 1% PS for 3 days. After incubation, the supernatant was collected and centrifuged at 1,000 rpm for 3 min to remove any debris, and stored at −80°C until use. D-Plus CCK Cell Viability Assay Kit (DonginLS, Korea) was used to determine the cell viability of IFN-γ stimulated fAT-MSCs following the manufacturer's instructions.", "To assess the role of PGE2 produced by fAT-MSCs in the anti-inflammatory response, fAT-MSCs were treated with PGE2 inhibitor NS-398 (5 μM; Enzo Life Sciences, USA) for 24 h and then stimulated with INF-γ (50 ng/mL for 48 h). After that, the supernatant was collected and stored as described above.", "RAW 264.7 cells obtained from Korean Cell Line Bank were treated for 24 h with lipopolysaccharide (LPS) (200 ng/mL; Sigma-Aldrich, USA) and then washed 3 times with DPBS. LPS-stimulated RAW 264.7 cells were seeded in 6-well plates (1 × 106 cells/well) and cultured in conditioned media described above.\nIn addition, splenocytes were isolated from mice, as previously described [11]. All procedures were approved by the Seoul National University Institutional Animal Care and Use Committee (approval number: SNU-190304-1). 
Briefly, mouse spleens obtained from 4 mice (C57BL/6, male, 5 weeks old) were mixed and mashed using a 1 mL syringe plunger. The cell suspension was then centrifuged for 3 min at 1,000 rpm. The cell pellet was resuspended in RBC Lysis buffer (Sigma-Aldrich) and washed with phosphate-buffered saline (PBS) 3 times, and the splenocytes were then resuspended in RPMI-1640 supplemented with 10% FBS and 1% PS. The splenocytes were stimulated with 5 μg/mL concanavalin A (Con A; Sigma-Aldrich) for 24 h to determine the mRNA expression of pro- and anti-inflammatory cytokines. The splenocytes were then collected by centrifugation at 3,000 rpm for 10 min, and they were seeded at a density of 1 × 10⁶ cells/well in 6-well plates in conditioned media described above.", "Easy-BLUE Total RNA Extraction Kit (Intron Biotechnology, Korea) was used to extract total RNA from fAT-MSCs, RAW 264.7, and spleen cells following the manufacturer's instructions. The extracted RNA was transformed into cDNA using LaboPass M-MuLV reverse transcriptase (Cosmo Genetech, Korea) according to the supplier's instructions. Cytokine mRNA levels were measured using RT-qPCR. The reaction mixture consisted of 10 μL AMPIGENE RT-qPCR Green Mix Hi-ROX with SYBR green dye (Enzo Life Sciences, Switzerland), 400 nM each of forward and reverse primers (Bionics, Korea) (primers are listed in Table 1), and 1 μL template cDNA. Cytokine mRNA levels were quantified using glyceraldehyde 3-phosphate dehydrogenase as a housekeeping control.", "The concentration of PGE2 in the cell culture supernatant was determined using an ELISA kit (Enzo Life Sciences) following the manufacturer's instructions. Culture supernatants from fAT-MSCs treated and untreated with IFN-γ, and from NS-398-stimulated fAT-MSCs treated and untreated with IFN-γ, were collected after 48 h of stimulation for PGE2 ELISA.", "Flow cytometry was performed using a FACS Aria II system (BD Biosciences) and the data were analyzed using FlowJo software (Tree Star, USA). To determine M2 macrophage polarization, conditioned medium from fAT-MSCs, fAT-MSCs treated with IFN-γ, or NS-398-stimulated fAT-MSCs treated with IFN-γ was used to stimulate RAW 264.7 cells treated with or without LPS. The RAW 264.7 cells were harvested after stimulation and suspended in DPBS. The cells were stained with PE-conjugated anti-CD11c+ antibody (clone N418, 117307; Biolegend) and FITC-conjugated anti-CD206+ antibody (clone MRC1, SC376108; Santa Cruz Biotechnology, USA), and evaluated by flow cytometry.\nTo evaluate PGE2-mediated induction of T-cell regulation by MSCs, conditioned medium from fAT-MSCs, fAT-MSCs treated with IFN-γ, or NS-398-treated fAT-MSCs stimulated with IFN-γ was used to stimulate splenocytes treated with or without Con A. The splenocytes were harvested after stimulation and suspended in DPBS. The cells were then stained with PE-conjugated anti-CD4+ antibody (clone RM4-5, 12-0042-82, eBioscience) and APC-conjugated anti-CD25+ antibody (PC61.5, 17-0251-82, eBioscience) and evaluated by flow cytometry.", "RAW 264.7 cells cultured on coverslips were fixed using 4% paraformaldehyde (in PBS, pH 7.2) at room temperature for 15 min and then washed with DPBS 4 times. The cells were then incubated in a blocking buffer containing 1% bovine serum albumin in PBST for 30 min. The cells were then incubated with antibodies against CD206+ and CD11b+ at 4°C for 12 h. 
The cells were rinsed with DPBS 3 times and were incubated with corresponding fluorescein-conjugated secondary antibodies (1:200; Santa Cruz Biotechnology) or Texas red-conjugated secondary antibodies (1:200; Santa Cruz Biotechnology) for 1 h at room temperature in the dark. The coverslips were then washed 4 times and mounted using VECTASHIELD mounting medium containing 4′,6-diamidino-2-phenylindole (Vector Laboratories, USA). The slides were observed under an EVOS FL microscope (Life Technologies, Germany), and the stained cells in 20 random fields per group were counted.", "All data were analyzed using GraphPad Prism v.6.01 software (GraphPad Software Inc., USA). Student's t-tests or one-way analysis of variance were used to determine statistical significance. p values less than 0.05 were considered statistically significant.", "Cells isolated from feline adipose tissue were characterized by immunophenotyping and multilineage differentiation. Three days after seeding, the cultured cells exhibited a fibroblast-like morphology. There was no difference in the morphology of fAT-MSCs treated with or without IFN-γ (Fig. 1A). In addition, IFN-γ-treated fAT-MSCs showed normal cell viability (Fig. 1B).\nIFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; HGF, hepatocyte growth factor; COX2, cyclooxygenase-2; IDO, indoleamine 2,3-dioxygenase; ns, not significant; TGF-β, transforming growth factor-β.\n*p < 0.05, **p < 0.01 by one-way analysis of variance.\nFlow cytometric analyses showed that only a few of the naïve or IFN-γ-primed fAT-MSCs expressed the known hematopoietic markers CD34 and CD45, but more than 95% expressed the known MSC markers CD90, CD44, and CD105 (Fig. 1C). When specific differentiation media were used, fAT-MSCs differentiated into osteocytes, adipocytes, and chondrocytes. The abilities of osteogenic and chondrogenic differentiation were enhanced in IFN-γ-treated fAT-MSCs compared with naïve fAT-MSCs (Fig. 1D). Additionally, 48 h of treatment with IFN-γ upregulated the secretion of the immunomodulatory factors COX2, IDO, TGF-β, and HGF in fAT-MSCs (Fig. 1E).", "To determine the immunomodulatory capacity of IFN-γ-treated fAT-MSCs, we quantified the mRNA expression of anti- and pro-inflammatory cytokines secreted by RAW 264.7 cells. Tumor necrosis factor-alpha (TNF-α), IL-1β, IL-6, and inducible nitric oxide synthase (iNOS) were upregulated in RAW 264.7 cells stimulated with LPS compared to unstimulated RAW 264.7 cells. When LPS-stimulated RAW 264.7 cells were cultured with fAT-MSC- and IFN-γ-treated fAT-MSC-conditioned media, TNF-α, IL-1β, IL-6, and iNOS expression levels were reduced. We also measured the mRNA expression of the M2 markers IL-10 and arginase. The levels of these M2 markers were significantly increased in LPS-stimulated RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media compared to that in LPS-stimulated RAW 264.7 cells cultured in untreated fAT-MSC-conditioned media (Fig. 
2A).\nIFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; LPS, lipopolysaccharide; TGF-β, transforming growth factor-β; IL, interleukin; Arg, arginase; iNOS, inducible nitric oxide synthase; MSC, mesenchymal stem cell; ns, not significant; FOXP3, forkhead box protein P3.\n*p < 0.05, **p < 0.01, ***p < 0.001 by one-way analysis of variance.\nAdditionally, we performed immunofluorescence staining on RAW 264.7 cells to determine whether the population of M2 macrophages increased when cultured in fAT-MSC- and IFN-γ-treated fAT-MSC-conditioned media. Quantitative analysis of CD206+ and CD11b+ RAW 264.7 cells demonstrated a significant increase in CD206+ cells among the RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media compared to those cultured in fAT-MSC-conditioned media and NS-398-treated, IFN-γ-stimulated fAT-MSC-conditioned media (Fig. 2B).\nTo determine the ability to induce T-cell regulation, we determined the mRNA expression of inflammatory cytokines secreted by splenocytes. Con A stimulation increased the expression of IL-1β, IFN-γ, and IL-17 in the splenocytes. However, Con A-stimulated splenocytes cultured in IFN-γ-treated fAT-MSC-conditioned media showed decreased expression of IL-1β, IFN-γ, and IL-17 compared to Con A-stimulated splenocytes cultured in untreated fAT-MSC-conditioned media. Conversely, IL-10 and FOXP3 expression levels were increased in Con A-stimulated splenocytes cultured in IFN-γ-treated fAT-MSC-conditioned media compared to Con A-stimulated splenocytes cultured in IFN-γ-untreated fAT-MSC-conditioned media (Fig. 2C).", "PGE2 ELISA Kit was used to measure PGE2 levels in the conditioned medium obtained from fAT-MSCs treated or untreated with IFN-γ, and from NS-398-stimulated fAT-MSCs treated with IFN-γ. First, we confirmed that NS-398 (5 μM)-treated fAT-MSCs show normal cell viability (Supplementary Fig. 1). PGE2 was significantly increased in IFN-γ-treated fAT-MSC-conditioned media compared to that in untreated fAT-MSC-conditioned media and in NS-398-stimulated fAT-MSC-conditioned media with or without IFN-γ pretreatment (Fig. 3).\nPGE2, prostaglandin E2; IFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell.\n***p < 0.001 by one-way analysis of variance.", "To determine the role of PGE2 in the immunomodulatory capacity of fAT-MSCs, we compared pro- and anti-inflammatory cytokines secreted by RAW 264.7 cells and splenocytes. Flow cytometry revealed an increase in CD11c+ RAW 264.7 cells with LPS stimulation, which was decreased by culturing in IFN-γ-treated fAT-MSC-conditioned media (Fig. 4). However, there was a decrease in CD206+ RAW 264.7 cells with LPS stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. Pre-treating the IFN-γ-treated fAT-MSCs with the PGE2 inhibitor NS-398 attenuated the effect of the fAT-MSC-conditioned media on RAW 264.7 cells, significantly increasing CD11c+ and decreasing CD206+ RAW 264.7 cells (Fig. 
4).\nPGE2, prostaglandin E2; IFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; LPS, lipopolysaccharide; MSC, mesenchymal stem cell; ns, not significant; PE, phycoerythrin; FITC, fluorescein isothiocyanate; SSC, side scatter; APC, allophycocyanin; Con A, concanavalin A.\n**p < 0.01, ***p < 0.001 by one-way analysis of variance.", "To determine the role of PGE2, we treated IFN-γ-treated fAT-MSCs with the PGE2 inhibitor NS-398 and then compared the mRNA expression of inflammatory cytokines secreted by RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media or in NS-398-treated, IFN-γ-stimulated fAT-MSC-conditioned media. First, we confirmed that 5 μM NS-398 effectively inhibits PGE2 secretion by fAT-MSCs without affecting cell viability (data not shown). TNF-α, IL-1β, and IL-6 expression levels increased with LPS stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. IL-10 levels were decreased after LPS stimulation, which was also reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. However, pre-treating the fAT-MSCs with NS-398 attenuated these effects of the IFN-γ-treated fAT-MSC-conditioned media (Fig. 5).\nPGE2, prostaglandin E2; IFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; IL, interleukin; TNF-α, tumor necrosis factor-alpha; LPS, lipopolysaccharide.\n*p < 0.05, **p < 0.01 by one-way analysis of variance.\nTo confirm the role of PGE2 in T-cell regulation, we determined the mRNA expression of immune cytokines secreted by splenocytes. IL-1β, IL-17, and IFN-γ were increased after Con A stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. Pre-treating the fAT-MSCs with NS-398 attenuated the effect of the IFN-γ-treated fAT-MSC-conditioned media and increased IL-1β, IL-17, and IFN-γ levels. IL-10 was decreased after Con A stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. Pre-treating the IFN-γ-treated fAT-MSCs with NS-398 attenuated the effect of the IFN-γ-treated fAT-MSC-conditioned media and decreased IL-10 levels (Fig. 5)." ]
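The statistical-analyses item above states that group differences were tested with Student's t-tests or one-way analysis of variance in GraphPad Prism, with p < 0.05 considered significant. As a rough illustration of the same comparisons outside Prism, here is a SciPy sketch on made-up replicate values; the group names and numbers are assumptions for demonstration only, and no post-hoc correction is shown because the article does not specify one.

```python
# Minimal sketch (assumption): reproducing the reported test types (Student's
# t-test and one-way ANOVA, alpha = 0.05) with SciPy on illustrative
# placeholder values, not the study's data.
from scipy import stats

# Hypothetical relative-expression values for three groups (n = 3 replicates)
control = [1.00, 1.08, 0.95]
lps     = [4.90, 5.40, 5.10]
lps_cm  = [2.10, 1.90, 2.30]   # LPS + IFN-γ-primed fAT-MSC conditioned media

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(control, lps, lps_cm)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Two-group comparison with Student's t-test
t_stat, p_ttest = stats.ttest_ind(lps, lps_cm)
print(f"LPS vs. LPS + conditioned media: t = {t_stat:.2f}, p = {p_ttest:.4f}")

alpha = 0.05
print("significant" if p_anova < alpha else "not significant")
```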
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Isolation and characterization of feline adipose-tissue derived (fAT)-MSCs", "IFN-γ stimulation", "Prostaglandin E2 (PGE2) inhibitor, NS-398 treatment", "Immune cells were cultured with conditioned media obtained from fAT-MSCs", "RNA extraction, cDNA synthesis, and RT-qPCR", "Enzyme-linked immunosorbent assay (ELISA)", "Flow cytometric analysis of mouse immune cells", "Immunofluorescence analyses", "Statistical analyses", "RESULTS", "Characterization of fAT-MSCs", "Immunomodulatory effects of IFN-γ primed fAT-MSCs", "PGE2 concentration levels in various fAT-MSC-conditioned media", "Macrophage polarization and T-cell regulation are associated with PGE2", "PGE2 secreted by IFN-γ treated fAT-MSCs is critical for their immune function", "DISCUSSION" ]
[ "Feline mesenchymal stem cells (MSCs) have anti-inflammatory effects and immunomodulatory functions that make them attractive as novel cell-based therapeutics for immune-mediated and inflammatory diseases such as gingivostomatitis, chronic kidney disease, and asthma [123]. In addition, we previously showed the mechanisms of anti-inflammatory functions of feline MSCs [45]. However, few studies have focused on the enhancement of immunoregulatory effects and their mechanisms of feline MSCs.\nOne of the strategies to improve the immunoregulatory capacity of MSCs is to pre-treat them with interferon-gamma (IFN-γ) [6]. IFN-γ is a pro-inflammatory cytokine secreted by natural killer cells and T cells which acts on macrophages and lymphocytes [7]. Previous studies suggested that MSCs stimulated with IFN-γ have upregulated immune suppressive functions and show changes in the expression of immunomodulatory factors [8]. Recently, Parys et al. [9] reported that feline MSCs stimulated with IFN-γ showed significantly increased secretion of immunomodulatory factors such as indoleamine 2,3-dioxygenase (IDO), programmed death-ligand 1, interleukin (IL)-6, cyclooxygenase-2 (COX2), and hepatocyte growth factor (HGF). In this study, however, underlying mechanisms of anti-inflammation have not been identified.\nTherefore, the aim of our study was to evaluate enhancement of the immunomodulatory effects of feline MSCs pre-treated with IFN-γ. In addition, we assessed the mechanisms by which the immunoregulation is induced.", "Isolation and characterization of feline adipose-tissue derived (fAT)-MSCs Feline adipose tissues were obtained from 3 healthy cats during ovariohysterectomy at the Seoul National University Veterinary Medical Teaching Hospital, and their owners provided informed consent for research use. All procedures were approved by the Institutional Animal Care and Use Committee of Seoul National University (protocol No. SNU-190411-10), and the protocols were performed in accordance with approved guidelines. fAT-MSCs were cultured in Dulbecco's Modified Eagle Medium (DMEM; PAN Biotech, Germany) supplemented with 20% ​​fetal bovine serum (FBS; PAN Biotech) and 1% (w/v) penicillin-streptomycin (PS; PAN Biotech) and incubated at 37°C in a 5% CO2 humidified atmosphere. The medium was changed every 2 days until the cells reached a confluence of 70%–80%. Previous studies used MSCs in the third passage or fourth passage, suggesting that early passage cells are more effective [10]. Therefore, we used 3–4 passages for fAT-MSCs. After adhering to culture plates and achieving a fibroblast-like morphology, the fAT-MSCs were characterized by flow cytometry using antibodies against the following proteins–CD44 (antibody clone IM7, fluorescein isothiocyanate [FITC] rat anti-mouse/human, 103021; Biolegend, USA), CD34 (antibody clone 1H6, phycoerythrin [PE] mouse anti-dog, 559369; BD Biosciences, USA), CD45 (antibody clone YKIX716.13, FITC rat anti-dog; eBioscience, USA), CD90 (antibody clone 5E10, PE mouse anti-human, 555596; BD Biosciences), and CD105 (antibody clone SN6, FITC mouse anti-human, MCA1557F; AbD Serotec, USA). Cell fluorescence was analyzed with FACS Aria II (BD Biosciences). 
In addition, special differentiation kits (Stem Pro osteogenesis differentiation, Stem Pro adipogenesis differentiation, and Stem Pro chondrogenesis differentiation kits; all from Gibco/Life Technologies, USA) were used for evaluating cellular differentiation following the manufacturer's instructions.\nFeline adipose tissues were obtained from 3 healthy cats during ovariohysterectomy at the Seoul National University Veterinary Medical Teaching Hospital, and their owners provided informed consent for research use. All procedures were approved by the Institutional Animal Care and Use Committee of Seoul National University (protocol No. SNU-190411-10), and the protocols were performed in accordance with approved guidelines. fAT-MSCs were cultured in Dulbecco's Modified Eagle Medium (DMEM; PAN Biotech, Germany) supplemented with 20% ​​fetal bovine serum (FBS; PAN Biotech) and 1% (w/v) penicillin-streptomycin (PS; PAN Biotech) and incubated at 37°C in a 5% CO2 humidified atmosphere. The medium was changed every 2 days until the cells reached a confluence of 70%–80%. Previous studies used MSCs in the third passage or fourth passage, suggesting that early passage cells are more effective [10]. Therefore, we used 3–4 passages for fAT-MSCs. After adhering to culture plates and achieving a fibroblast-like morphology, the fAT-MSCs were characterized by flow cytometry using antibodies against the following proteins–CD44 (antibody clone IM7, fluorescein isothiocyanate [FITC] rat anti-mouse/human, 103021; Biolegend, USA), CD34 (antibody clone 1H6, phycoerythrin [PE] mouse anti-dog, 559369; BD Biosciences, USA), CD45 (antibody clone YKIX716.13, FITC rat anti-dog; eBioscience, USA), CD90 (antibody clone 5E10, PE mouse anti-human, 555596; BD Biosciences), and CD105 (antibody clone SN6, FITC mouse anti-human, MCA1557F; AbD Serotec, USA). Cell fluorescence was analyzed with FACS Aria II (BD Biosciences). In addition, special differentiation kits (Stem Pro osteogenesis differentiation, Stem Pro adipogenesis differentiation, and Stem Pro chondrogenesis differentiation kits; all from Gibco/Life Technologies, USA) were used for evaluating cellular differentiation following the manufacturer's instructions.\nIFN-γ stimulation To assess the effects of INF-γ on the fAT-MSCs, 5 × 105 fAT-MSCs were seeded in 12-well plates in DMEM medium supplemented with 20% FBS and 1% PS. After 24 h, the fAT-MSCs were treated with 50 ng/mL INF-γ (feline recombinant protein; Kingfisher Biotech, USA) for 48 h. The cells were then washed 4 times with Dulbecco's phosphate-buffered saline (DPBS) and were then maintained in DMEM supplemented with 20% FBS 1% PS for 3 days. After incubation, the supernatant was collected and centrifuged at 1,000 rpm for 3 min to remove any debris, and stored at −80°C until use. D-Plus CCK Cell Viability Assay Kit (DonginLS, Korea) was used to determine the cell viability of IFN-γ stimulated fAT-MSCs following the manufacturer's instructions.\nTo assess the effects of INF-γ on the fAT-MSCs, 5 × 105 fAT-MSCs were seeded in 12-well plates in DMEM medium supplemented with 20% FBS and 1% PS. After 24 h, the fAT-MSCs were treated with 50 ng/mL INF-γ (feline recombinant protein; Kingfisher Biotech, USA) for 48 h. The cells were then washed 4 times with Dulbecco's phosphate-buffered saline (DPBS) and were then maintained in DMEM supplemented with 20% FBS 1% PS for 3 days. After incubation, the supernatant was collected and centrifuged at 1,000 rpm for 3 min to remove any debris, and stored at −80°C until use. 
D-Plus CCK Cell Viability Assay Kit (DonginLS, Korea) was used to determine the cell viability of IFN-γ stimulated fAT-MSCs following the manufacturer's instructions.\nProstaglandin E2 (PGE2) inhibitor, NS-398 treatment To assess the role of PGE2 produced by fAT-MSCs in the anti-inflammatory response, fAT-MSCs were treated with PGE2 inhibitor NS-398 (5 μM; Enzo Life Sciences, USA) for 24 h and then stimulated with INF-γ (50 ng/mL for 48 h). After that, the supernatant was collected and stored as described above.\nTo assess the role of PGE2 produced by fAT-MSCs in the anti-inflammatory response, fAT-MSCs were treated with PGE2 inhibitor NS-398 (5 μM; Enzo Life Sciences, USA) for 24 h and then stimulated with INF-γ (50 ng/mL for 48 h). After that, the supernatant was collected and stored as described above.\nImmune cells were cultured with conditioned media obtained from fAT-MSCs RAW 264.7 cells obtained from Korean Cell Line Bank were treated for 24 h with lipopolysaccharide (LPS) (200 ng/mL; Sigma-Aldrich, USA) and then washed 3 times with DPBS. LPS-stimulated RAW 264.7 cells were seeded in 6-well plates (1 × 106 cells/well) and cultured in conditioned media described above.\nIn addition, splenocytes were isolated from mice, as previously described [11]. All procedures were approved by the Seoul National University Institutional Animal Care and Use Committee (approval number: SNU-190304-1). Briefly, mouse spleens obtained from 4 mice (C57BL/6, male, 5 W) were mixed and mashed using a 1 mL syringe plunger. The cell suspension was then centrifuged for 3 min at 1,000 rpm. The cell pellet was resuspended in RBC Lysis buffer (Sigma-Aldrich) and washed with phosphate-buffered saline (PBS) 3 times, and the splenocytes were then resuspended in RPMI-1640 supplemented with 10% FBS and 1% PS. The splenocytes were stimulated with 5 μg/mL concanavalin A (Con A; Sigma-Aldrich) for 24 h to determine the mRNA expression of pro- and anti-inflammatory cytokines. The splenocytes were then collected by centrifugation at 3,000 rpm for 10 min, and they were seeded at a density of 1 × 106 cells/well in 6-well plates in conditioned media described above.\nRAW 264.7 cells obtained from Korean Cell Line Bank were treated for 24 h with lipopolysaccharide (LPS) (200 ng/mL; Sigma-Aldrich, USA) and then washed 3 times with DPBS. LPS-stimulated RAW 264.7 cells were seeded in 6-well plates (1 × 106 cells/well) and cultured in conditioned media described above.\nIn addition, splenocytes were isolated from mice, as previously described [11]. All procedures were approved by the Seoul National University Institutional Animal Care and Use Committee (approval number: SNU-190304-1). Briefly, mouse spleens obtained from 4 mice (C57BL/6, male, 5 W) were mixed and mashed using a 1 mL syringe plunger. The cell suspension was then centrifuged for 3 min at 1,000 rpm. The cell pellet was resuspended in RBC Lysis buffer (Sigma-Aldrich) and washed with phosphate-buffered saline (PBS) 3 times, and the splenocytes were then resuspended in RPMI-1640 supplemented with 10% FBS and 1% PS. The splenocytes were stimulated with 5 μg/mL concanavalin A (Con A; Sigma-Aldrich) for 24 h to determine the mRNA expression of pro- and anti-inflammatory cytokines. 
The splenocytes were then collected by centrifugation at 3,000 rpm for 10 min, and they were seeded at a density of 1 × 106 cells/well in 6-well plates in conditioned media described above.\nRNA extraction, cDNA synthesis, and RT-qPCR Easy-BLUE Total RNA Extraction Kit (Intron Biotechnology, Korea) was used to extract total RNA from fAT-MSCs, RAW 264.7, and spleen cells following the manufacturer's instructions. The extracted RNA was transformed into cDNA using LaboPass M-MuLV reverse transcriptase (Cosmo Genetech, Korea) according to the supplier's instructions. Cytokine mRNA levels were measured using RT-qPCR. The reaction mixture consisted of 10 μL AMPIGENE RT-qPCR Green Mix Hi-ROX with SYBR green dye (Enzo Life Sciences, Switzerland), 400 nM each of forward and reverse primers (Bionics, Korea) (primers are listed in Table 1), and 1 μL template cDNA. Cytokine mRNA levels were quantified using glyceraldehyde 3-phosphate dehydrogenase as a house-keeping control.\nEasy-BLUE Total RNA Extraction Kit (Intron Biotechnology, Korea) was used to extract total RNA from fAT-MSCs, RAW 264.7, and spleen cells following the manufacturer's instructions. The extracted RNA was transformed into cDNA using LaboPass M-MuLV reverse transcriptase (Cosmo Genetech, Korea) according to the supplier's instructions. Cytokine mRNA levels were measured using RT-qPCR. The reaction mixture consisted of 10 μL AMPIGENE RT-qPCR Green Mix Hi-ROX with SYBR green dye (Enzo Life Sciences, Switzerland), 400 nM each of forward and reverse primers (Bionics, Korea) (primers are listed in Table 1), and 1 μL template cDNA. Cytokine mRNA levels were quantified using glyceraldehyde 3-phosphate dehydrogenase as a house-keeping control.\nEnzyme-linked immunosorbent assay (ELISA) The concentration of PGE2 in the cell culture supernatant was determined using an ELISA kit (Enzo Life Sciences) following the manufacturer's instructions. Culture supernatants from fAT-MSCs treated and un-treated with IFN-γ and NS-398 stimulated fAT-MSCs treated and un-treated with IFN-γ were collected after 48 h of stimulation for PGE2 ELISA.\nThe concentration of PGE2 in the cell culture supernatant was determined using an ELISA kit (Enzo Life Sciences) following the manufacturer's instructions. Culture supernatants from fAT-MSCs treated and un-treated with IFN-γ and NS-398 stimulated fAT-MSCs treated and un-treated with IFN-γ were collected after 48 h of stimulation for PGE2 ELISA.\nFlow cytometric analysis of mouse immune cells Flow cytometry was performed using a FACS Aria II system (BD Biosciences) and the data were analyzed using FlowJo software (Tree Star, USA). To determine M2 macrophage polarization, conditioned medium from fAT-MSCs, fAT-MSCs treated with IFN-γ, or NS-398 stimulated fAT-MSCs treated with IFN-γ were used to stimulate RAW 264.7 treated with or without LPS. The RAW 264.7 cells were harvested after stimulation and suspended in DPBS. The cells were stained with PE-conjugated anti-CD11c+ antibody (clone N418, 117307; Biolegend) and FITC-conjugated anti-CD206+ antibody (clone MRC1, SC376108; Santa Cruz Biotechnology, USA), and evaluated by flow cytometry.\nTo evaluate PGE2-mediated induction of T-cell regulation by MSCs, conditioned medium from fAT-MSCs, fAT-MSCs treated with IFN-γ, or NS-398-treated fAT-MSCs stimulated with IFN-γ were used to stimulate splenocytes treated with or without Con A. The splenocytes were harvested after stimulation and suspended in DPBS. 
The cells were then stained with PE-conjugated anti-CD4+ antibody (clone RM4-5, 12-0042-82, eBioscience) and APC-conjugated anti-CD25+ antibody (PC61.5, 17-0251-82, eBioscience) and evaluated by flow cytometry.\nFlow cytometry was performed using a FACS Aria II system (BD Biosciences) and the data were analyzed using FlowJo software (Tree Star, USA). To determine M2 macrophage polarization, conditioned medium from fAT-MSCs, fAT-MSCs treated with IFN-γ, or NS-398 stimulated fAT-MSCs treated with IFN-γ were used to stimulate RAW 264.7 treated with or without LPS. The RAW 264.7 cells were harvested after stimulation and suspended in DPBS. The cells were stained with PE-conjugated anti-CD11c+ antibody (clone N418, 117307; Biolegend) and FITC-conjugated anti-CD206+ antibody (clone MRC1, SC376108; Santa Cruz Biotechnology, USA), and evaluated by flow cytometry.\nTo evaluate PGE2-mediated induction of T-cell regulation by MSCs, conditioned medium from fAT-MSCs, fAT-MSCs treated with IFN-γ, or NS-398-treated fAT-MSCs stimulated with IFN-γ were used to stimulate splenocytes treated with or without Con A. The splenocytes were harvested after stimulation and suspended in DPBS. The cells were then stained with PE-conjugated anti-CD4+ antibody (clone RM4-5, 12-0042-82, eBioscience) and APC-conjugated anti-CD25+ antibody (PC61.5, 17-0251-82, eBioscience) and evaluated by flow cytometry.\nImmunofluorescence analyses RAW 264.7 cells cultured on coverslips were fixed using 4% paraformaldehyde (in PBS, pH 7.2) at room temperature for 15 min and then washed with DPBS 4 times. The cells were then incubated in a blocking buffer containing 1% bovine serum albumin in PBST for 30 min. The cells were then incubated with antibodies against CD206+ and CD11b+ at 4 h for 12 h. The cells were rinsed with DPBS 3 times and were incubated with corresponding fluorescein-conjugated secondary antibodies (1:200; Santa Cruz Biotechnology) or Texas red-conjugated secondary antibodies (1:200; Santa Cruz Biotechnology) for 1 h at room temperature in the dark. The coverslips were then washed 4 times and mounted using VECTASHIELD mounting medium containing 4′,6-diamidino-2-phenylindole (Vector Laboratories, USA). The slides were observed under an EVOS FL microscope (Life Technologies, Germany), and the stained cells in 20 random fields per group were counted.\nRAW 264.7 cells cultured on coverslips were fixed using 4% paraformaldehyde (in PBS, pH 7.2) at room temperature for 15 min and then washed with DPBS 4 times. The cells were then incubated in a blocking buffer containing 1% bovine serum albumin in PBST for 30 min. The cells were then incubated with antibodies against CD206+ and CD11b+ at 4 h for 12 h. The cells were rinsed with DPBS 3 times and were incubated with corresponding fluorescein-conjugated secondary antibodies (1:200; Santa Cruz Biotechnology) or Texas red-conjugated secondary antibodies (1:200; Santa Cruz Biotechnology) for 1 h at room temperature in the dark. The coverslips were then washed 4 times and mounted using VECTASHIELD mounting medium containing 4′,6-diamidino-2-phenylindole (Vector Laboratories, USA). The slides were observed under an EVOS FL microscope (Life Technologies, Germany), and the stained cells in 20 random fields per group were counted.\nStatistical analyses All data were analyzed using GraphPad Prism v.6.01 software (GraphPad Software Inc., USA). Student's t-tests or one-way analysis of variance were used to determine statistical significance. 
The p values less than 0.05 and were considered statistically significant.\nAll data were analyzed using GraphPad Prism v.6.01 software (GraphPad Software Inc., USA). Student's t-tests or one-way analysis of variance were used to determine statistical significance. The p values less than 0.05 and were considered statistically significant.", "Feline adipose tissues were obtained from 3 healthy cats during ovariohysterectomy at the Seoul National University Veterinary Medical Teaching Hospital, and their owners provided informed consent for research use. All procedures were approved by the Institutional Animal Care and Use Committee of Seoul National University (protocol No. SNU-190411-10), and the protocols were performed in accordance with approved guidelines. fAT-MSCs were cultured in Dulbecco's Modified Eagle Medium (DMEM; PAN Biotech, Germany) supplemented with 20% ​​fetal bovine serum (FBS; PAN Biotech) and 1% (w/v) penicillin-streptomycin (PS; PAN Biotech) and incubated at 37°C in a 5% CO2 humidified atmosphere. The medium was changed every 2 days until the cells reached a confluence of 70%–80%. Previous studies used MSCs in the third passage or fourth passage, suggesting that early passage cells are more effective [10]. Therefore, we used 3–4 passages for fAT-MSCs. After adhering to culture plates and achieving a fibroblast-like morphology, the fAT-MSCs were characterized by flow cytometry using antibodies against the following proteins–CD44 (antibody clone IM7, fluorescein isothiocyanate [FITC] rat anti-mouse/human, 103021; Biolegend, USA), CD34 (antibody clone 1H6, phycoerythrin [PE] mouse anti-dog, 559369; BD Biosciences, USA), CD45 (antibody clone YKIX716.13, FITC rat anti-dog; eBioscience, USA), CD90 (antibody clone 5E10, PE mouse anti-human, 555596; BD Biosciences), and CD105 (antibody clone SN6, FITC mouse anti-human, MCA1557F; AbD Serotec, USA). Cell fluorescence was analyzed with FACS Aria II (BD Biosciences). In addition, special differentiation kits (Stem Pro osteogenesis differentiation, Stem Pro adipogenesis differentiation, and Stem Pro chondrogenesis differentiation kits; all from Gibco/Life Technologies, USA) were used for evaluating cellular differentiation following the manufacturer's instructions.", "To assess the effects of INF-γ on the fAT-MSCs, 5 × 105 fAT-MSCs were seeded in 12-well plates in DMEM medium supplemented with 20% FBS and 1% PS. After 24 h, the fAT-MSCs were treated with 50 ng/mL INF-γ (feline recombinant protein; Kingfisher Biotech, USA) for 48 h. The cells were then washed 4 times with Dulbecco's phosphate-buffered saline (DPBS) and were then maintained in DMEM supplemented with 20% FBS 1% PS for 3 days. After incubation, the supernatant was collected and centrifuged at 1,000 rpm for 3 min to remove any debris, and stored at −80°C until use. D-Plus CCK Cell Viability Assay Kit (DonginLS, Korea) was used to determine the cell viability of IFN-γ stimulated fAT-MSCs following the manufacturer's instructions.", "To assess the role of PGE2 produced by fAT-MSCs in the anti-inflammatory response, fAT-MSCs were treated with PGE2 inhibitor NS-398 (5 μM; Enzo Life Sciences, USA) for 24 h and then stimulated with INF-γ (50 ng/mL for 48 h). After that, the supernatant was collected and stored as described above.", "RAW 264.7 cells obtained from Korean Cell Line Bank were treated for 24 h with lipopolysaccharide (LPS) (200 ng/mL; Sigma-Aldrich, USA) and then washed 3 times with DPBS. 
LPS-stimulated RAW 264.7 cells were seeded in 6-well plates (1 × 106 cells/well) and cultured in conditioned media described above.\nIn addition, splenocytes were isolated from mice, as previously described [11]. All procedures were approved by the Seoul National University Institutional Animal Care and Use Committee (approval number: SNU-190304-1). Briefly, mouse spleens obtained from 4 mice (C57BL/6, male, 5 W) were mixed and mashed using a 1 mL syringe plunger. The cell suspension was then centrifuged for 3 min at 1,000 rpm. The cell pellet was resuspended in RBC Lysis buffer (Sigma-Aldrich) and washed with phosphate-buffered saline (PBS) 3 times, and the splenocytes were then resuspended in RPMI-1640 supplemented with 10% FBS and 1% PS. The splenocytes were stimulated with 5 μg/mL concanavalin A (Con A; Sigma-Aldrich) for 24 h to determine the mRNA expression of pro- and anti-inflammatory cytokines. The splenocytes were then collected by centrifugation at 3,000 rpm for 10 min, and they were seeded at a density of 1 × 106 cells/well in 6-well plates in conditioned media described above.", "Easy-BLUE Total RNA Extraction Kit (Intron Biotechnology, Korea) was used to extract total RNA from fAT-MSCs, RAW 264.7, and spleen cells following the manufacturer's instructions. The extracted RNA was transformed into cDNA using LaboPass M-MuLV reverse transcriptase (Cosmo Genetech, Korea) according to the supplier's instructions. Cytokine mRNA levels were measured using RT-qPCR. The reaction mixture consisted of 10 μL AMPIGENE RT-qPCR Green Mix Hi-ROX with SYBR green dye (Enzo Life Sciences, Switzerland), 400 nM each of forward and reverse primers (Bionics, Korea) (primers are listed in Table 1), and 1 μL template cDNA. Cytokine mRNA levels were quantified using glyceraldehyde 3-phosphate dehydrogenase as a house-keeping control.", "The concentration of PGE2 in the cell culture supernatant was determined using an ELISA kit (Enzo Life Sciences) following the manufacturer's instructions. Culture supernatants from fAT-MSCs treated and un-treated with IFN-γ and NS-398 stimulated fAT-MSCs treated and un-treated with IFN-γ were collected after 48 h of stimulation for PGE2 ELISA.", "Flow cytometry was performed using a FACS Aria II system (BD Biosciences) and the data were analyzed using FlowJo software (Tree Star, USA). To determine M2 macrophage polarization, conditioned medium from fAT-MSCs, fAT-MSCs treated with IFN-γ, or NS-398 stimulated fAT-MSCs treated with IFN-γ were used to stimulate RAW 264.7 treated with or without LPS. The RAW 264.7 cells were harvested after stimulation and suspended in DPBS. The cells were stained with PE-conjugated anti-CD11c+ antibody (clone N418, 117307; Biolegend) and FITC-conjugated anti-CD206+ antibody (clone MRC1, SC376108; Santa Cruz Biotechnology, USA), and evaluated by flow cytometry.\nTo evaluate PGE2-mediated induction of T-cell regulation by MSCs, conditioned medium from fAT-MSCs, fAT-MSCs treated with IFN-γ, or NS-398-treated fAT-MSCs stimulated with IFN-γ were used to stimulate splenocytes treated with or without Con A. The splenocytes were harvested after stimulation and suspended in DPBS. The cells were then stained with PE-conjugated anti-CD4+ antibody (clone RM4-5, 12-0042-82, eBioscience) and APC-conjugated anti-CD25+ antibody (PC61.5, 17-0251-82, eBioscience) and evaluated by flow cytometry.", "RAW 264.7 cells cultured on coverslips were fixed using 4% paraformaldehyde (in PBS, pH 7.2) at room temperature for 15 min and then washed with DPBS 4 times. 
The cells were then incubated in a blocking buffer containing 1% bovine serum albumin in PBST for 30 min. The cells were then incubated with antibodies against CD206+ and CD11b+ at 4 h for 12 h. The cells were rinsed with DPBS 3 times and were incubated with corresponding fluorescein-conjugated secondary antibodies (1:200; Santa Cruz Biotechnology) or Texas red-conjugated secondary antibodies (1:200; Santa Cruz Biotechnology) for 1 h at room temperature in the dark. The coverslips were then washed 4 times and mounted using VECTASHIELD mounting medium containing 4′,6-diamidino-2-phenylindole (Vector Laboratories, USA). The slides were observed under an EVOS FL microscope (Life Technologies, Germany), and the stained cells in 20 random fields per group were counted.", "All data were analyzed using GraphPad Prism v.6.01 software (GraphPad Software Inc., USA). Student's t-tests or one-way analysis of variance were used to determine statistical significance. The p values less than 0.05 and were considered statistically significant.", "Characterization of fAT-MSCs Cells isolated from feline adipose tissue were characterized by immunophenotyping and multilineage differentiation. Three days after seeding, the cultured cells exhibited a fibroblast-like morphology. There was no difference in the morphology of fAT-MSCs treated with or without IFN-γ (Fig. 1A). In addition, IFN-γ-treated fAT-MSCs showed normal cell viability (Fig. 1B).\nIFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; HGF, hepatocyte growth factor; COX2, cyclooxygenase-2; IDO, indoleamine 2,3-dioxygenase; ns, not significant; TGF-β, transforming growth factor-β.\n*p < 0.05, **p < 0.01 by one-way analysis of variance analysis.\nFlow cytometric analyses showed that only a few of the naïve or IFN-γ-primed fAT-MSCs expressed the known hematopoietic markers CD34 and CD44, but more than 95% expressed the known MSC markers CD90, CD44, CD9, and CD105 (Fig. 1C). When specific differentiation media were used, fAT-MSCs differentiated into osteocytes, adipocytes, and chondrocytes. The abilities of osteogenic and chondrogenic differentiation were enhanced in IFN-γ-treated fAT-MSCs compared with naïve fAT-MSCs (Fig. 1D). Additionally, 48 h treatment with INF-γ upregulated the secretion of immunomodulation factors COX2, IDO, TGF-β, and HGF in fAT-MSCs (Fig. 1E).\nCells isolated from feline adipose tissue were characterized by immunophenotyping and multilineage differentiation. Three days after seeding, the cultured cells exhibited a fibroblast-like morphology. There was no difference in the morphology of fAT-MSCs treated with or without IFN-γ (Fig. 1A). In addition, IFN-γ-treated fAT-MSCs showed normal cell viability (Fig. 1B).\nIFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; HGF, hepatocyte growth factor; COX2, cyclooxygenase-2; IDO, indoleamine 2,3-dioxygenase; ns, not significant; TGF-β, transforming growth factor-β.\n*p < 0.05, **p < 0.01 by one-way analysis of variance analysis.\nFlow cytometric analyses showed that only a few of the naïve or IFN-γ-primed fAT-MSCs expressed the known hematopoietic markers CD34 and CD44, but more than 95% expressed the known MSC markers CD90, CD44, CD9, and CD105 (Fig. 1C). When specific differentiation media were used, fAT-MSCs differentiated into osteocytes, adipocytes, and chondrocytes. The abilities of osteogenic and chondrogenic differentiation were enhanced in IFN-γ-treated fAT-MSCs compared with naïve fAT-MSCs (Fig. 1D). 
Additionally, 48 h treatment with INF-γ upregulated the secretion of immunomodulation factors COX2, IDO, TGF-β, and HGF in fAT-MSCs (Fig. 1E).\nImmunomodulatory effects of IFN-γ primed fAT-MSCs To determine the immunomodulatory capacity of IFN-γ treated fAT-MSCs, we quantified the mRNA expressions of anti- and pro-inflammatory cytokines secreted by RAW 264.7 cells. Tumor necrosis factor-alpha (TNF-α), IL-1β, IL-6, and inducible nitric oxide synthase (iNOS) were upregulated in RAW 264.7 cells stimulated with LPS compared to that in unstimulated RAW 264.7 cells. When RAW 264.7 cells stimulated by LPS were cultured with fAT-MSC- and IFN-γ-treated fAT-MSC-conditioned media TNF-α, IL-1β, IL-6, and iNOS expression levels were reduced. We also measured the mRNA expressions of M2 markers, IL-10, and arginase. The levels of these M2 markers were significantly increased in LPS-stimulated RAW 264.7 cultured in IFN-γ-treated fAT-MSC-conditioned media compared to that LPS-stimulated RAW 264.7 cultured in untreated fAT-MSC-conditioned media (Fig. 2A).\nIFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; LPS, lipopolysaccharide; TGF-β, transforming growth factor-β; IL, interleukin; Arg, arginase; iNOS, inducible nitric oxide synthase; MSC, mesenchymal stem cell; ns, not significant; FOXP3, forkhead box protein P3.\n*p < 0.05, **p < 0.01, ***p < 0.001 by one-way analysis of variance analysis.\nAdditionally, we performed immunofluorescence staining on RAW 264.7 cells to determine whether the population of M2 macrophage increased when cultured in fAT-MSC- and IFN-γ-treated fAT-MSC-conditioned media. Quantitative analysis of CD206+ and CD11b+ RAW 264.7 cells demonstrated a significant increase in CD206+ cells among the RAW 264.7 cultured in IFN-γ-treated fAT-MSC-conditioned media compared to those cultured in fAT-MSC-conditioned media and NS-398-treated IFN-γ-stimulated-fAT-MSC-conditioned media (Fig. 2B).\nTo determine the ability to induce T-cell regulation, we determined the mRNA expressions of inflammatory cytokines secreted by splenocytes. Con A stimulation increased the expression of IL-1β, IFN-γ, and IL-17 in the splenocytes. However, Con A-stimulated splenocytes cultured in IFN-γ-treated fAT-MSC-conditioned media showed decreased expression of IL-1β, IFN-γ, and IL-17 compared to the Con A-stimulated cultured in untreated fAT-MSC-conditioned media. Conversely, IL-10 and FOXP3 expression levels were increased in Con A-stimulated splenocyte cultured in IFN-γ treated fAT-MSC-conditioned media compared to Con A-stimulated splenocytes cultured in IFN-γ-un-treated fAT-MSC-conditioned media (Fig. 2C).\nTo determine the immunomodulatory capacity of IFN-γ treated fAT-MSCs, we quantified the mRNA expressions of anti- and pro-inflammatory cytokines secreted by RAW 264.7 cells. Tumor necrosis factor-alpha (TNF-α), IL-1β, IL-6, and inducible nitric oxide synthase (iNOS) were upregulated in RAW 264.7 cells stimulated with LPS compared to that in unstimulated RAW 264.7 cells. When RAW 264.7 cells stimulated by LPS were cultured with fAT-MSC- and IFN-γ-treated fAT-MSC-conditioned media TNF-α, IL-1β, IL-6, and iNOS expression levels were reduced. We also measured the mRNA expressions of M2 markers, IL-10, and arginase. The levels of these M2 markers were significantly increased in LPS-stimulated RAW 264.7 cultured in IFN-γ-treated fAT-MSC-conditioned media compared to that LPS-stimulated RAW 264.7 cultured in untreated fAT-MSC-conditioned media (Fig. 
2A).\nIFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; LPS, lipopolysaccharide; TGF-β, transforming growth factor-β; IL, interleukin; Arg, arginase; iNOS, inducible nitric oxide synthase; MSC, mesenchymal stem cell; ns, not significant; FOXP3, forkhead box protein P3.\n*p < 0.05, **p < 0.01, ***p < 0.001 by one-way analysis of variance analysis.\nAdditionally, we performed immunofluorescence staining on RAW 264.7 cells to determine whether the population of M2 macrophage increased when cultured in fAT-MSC- and IFN-γ-treated fAT-MSC-conditioned media. Quantitative analysis of CD206+ and CD11b+ RAW 264.7 cells demonstrated a significant increase in CD206+ cells among the RAW 264.7 cultured in IFN-γ-treated fAT-MSC-conditioned media compared to those cultured in fAT-MSC-conditioned media and NS-398-treated IFN-γ-stimulated-fAT-MSC-conditioned media (Fig. 2B).\nTo determine the ability to induce T-cell regulation, we determined the mRNA expressions of inflammatory cytokines secreted by splenocytes. Con A stimulation increased the expression of IL-1β, IFN-γ, and IL-17 in the splenocytes. However, Con A-stimulated splenocytes cultured in IFN-γ-treated fAT-MSC-conditioned media showed decreased expression of IL-1β, IFN-γ, and IL-17 compared to the Con A-stimulated cultured in untreated fAT-MSC-conditioned media. Conversely, IL-10 and FOXP3 expression levels were increased in Con A-stimulated splenocyte cultured in IFN-γ treated fAT-MSC-conditioned media compared to Con A-stimulated splenocytes cultured in IFN-γ-un-treated fAT-MSC-conditioned media (Fig. 2C).\nPGE2 concentration levels in various fAT-MSC-conditioned media PGE2 ELISA Kit was used to measure PGE2 levels in the conditioned medium obtained from fAT-MSC treated or untreated with IFN-γ, and NS-398 stimulated fAT-MSC treated with IFN-γ. First, we confirmed that NS-398 (5 μM) treated fAT-MSCs show normal cell viability (Supplementary Fig. 1). PGE2 was significantly increased in IFN-γ-treated fAT-MSC-conditioned media treated compared to that in untreated fAT-MSC-conditioned media and NS-398-stimulated fAT-MSC-conditioned media with or without IFN-γ pretreatment (Fig. 3).\nPGE2, prostaglandin E2; IFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell.\n***p < 0.001 by one-way analysis of variance analysis.\nPGE2 ELISA Kit was used to measure PGE2 levels in the conditioned medium obtained from fAT-MSC treated or untreated with IFN-γ, and NS-398 stimulated fAT-MSC treated with IFN-γ. First, we confirmed that NS-398 (5 μM) treated fAT-MSCs show normal cell viability (Supplementary Fig. 1). PGE2 was significantly increased in IFN-γ-treated fAT-MSC-conditioned media treated compared to that in untreated fAT-MSC-conditioned media and NS-398-stimulated fAT-MSC-conditioned media with or without IFN-γ pretreatment (Fig. 3).\nPGE2, prostaglandin E2; IFN-γ, Interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell.\n***p < 0.001 by one-way analysis of variance analysis.\nMacrophage polarization and T-cell regulation are associated with PGE2 To determine the role of PGE2 in the immunomodulatory capacity of fAT-MSCs, we compared pro- and anti-inflammatory cytokines secreted by RAW 264.7 cells and splenocytes. Flow cytometry revealed an increase in CD11c+ RAW 264.7 cells with LPS stimulation, which was decreased by culturing in IFN-γ treated fAT-MSC-conditioned media (Fig. 4). 
However, there was a decrease in CD206+ RAW 264.7 cells with LPS stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. Pre-treating the IFN-γ-treated fAT-MSCs with the PGE2 inhibitor NS-398 attenuated the effect of the conditioned media on RAW 264.7 cells, significantly increasing CD11c+ and decreasing CD206+ RAW 264.7 cells (Fig. 4).\nPGE2, prostaglandin E2; IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; LPS, lipopolysaccharide; MSC, mesenchymal stem cell; ns, not significant; PE, phycoerythrin; FITC, fluorescein isothiocyanate; SSC, side scatter; APC, allophycocyanin; Con A, concanavalin A.\n**p < 0.01, ***p < 0.001 by one-way analysis of variance.\nPGE2 secreted by IFN-γ-treated fAT-MSCs is critical for their immune function We treated IFN-γ-treated fAT-MSCs with the PGE2 inhibitor NS-398 and then compared the mRNA expression of inflammatory cytokines secreted by RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media or in NS-398-treated, IFN-γ-treated fAT-MSC-conditioned media. First, we confirmed that 5 μM NS-398 effectively inhibits PGE2 secretion in fAT-MSCs without affecting cell viability (data not shown). TNF-α, IL-1β, and IL-6 expression levels increased with LPS stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. IL-10 levels decreased after LPS stimulation, which was also reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. However, pre-treating the fAT-MSCs with NS-398 attenuated the effect of the IFN-γ-treated fAT-MSC-conditioned media (Fig. 5).\nPGE2, prostaglandin E2; IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; IL, interleukin; TNF-α, tumor necrosis factor-alpha; LPS, lipopolysaccharide.\n*p < 0.05, **p < 0.01 by one-way analysis of variance.\nTo confirm the role of PGE2 in T-cell regulation, we determined the mRNA expression of immune cytokines secreted by splenocytes. IL-1β, IL-17, and IFN-γ were increased after Con A stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. Pre-treating the fAT-MSCs with NS-398 attenuated this effect and increased IL-1β, IL-17, and IFN-γ levels. IL-10 was decreased after Con A stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media; pre-treating the IFN-γ-treated fAT-MSCs with NS-398 attenuated this effect and decreased IL-10 levels (Fig. 5).", "Cells isolated from feline adipose tissue were characterized by immunophenotyping and multilineage differentiation. Three days after seeding, the cultured cells exhibited a fibroblast-like morphology. There was no difference in the morphology of fAT-MSCs treated with or without IFN-γ (Fig. 1A). In addition, IFN-γ-treated fAT-MSCs showed normal cell viability (Fig. 1B).\nIFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; HGF, hepatocyte growth factor; COX2, cyclooxygenase-2; IDO, indoleamine 2,3-dioxygenase; ns, not significant; TGF-β, transforming growth factor-β.\n*p < 0.05, **p < 0.01 by one-way analysis of variance.\nFlow cytometric analyses showed that only a few of the naïve or IFN-γ-primed fAT-MSCs expressed the known hematopoietic markers CD34 and CD45, whereas more than 95% expressed the known MSC markers CD90, CD44, and CD105 (Fig. 1C). When specific differentiation media were used, fAT-MSCs differentiated into osteocytes, adipocytes, and chondrocytes. The osteogenic and chondrogenic differentiation abilities were enhanced in IFN-γ-treated fAT-MSCs compared with naïve fAT-MSCs (Fig. 1D). Additionally, 48 h of treatment with IFN-γ upregulated the secretion of the immunomodulatory factors COX2, IDO, TGF-β, and HGF in fAT-MSCs (Fig. 1E).", "To determine the immunomodulatory capacity of IFN-γ-treated fAT-MSCs, we quantified the mRNA expression of anti- and pro-inflammatory cytokines secreted by RAW 264.7 cells. Tumor necrosis factor-alpha (TNF-α), IL-1β, IL-6, and inducible nitric oxide synthase (iNOS) were upregulated in RAW 264.7 cells stimulated with LPS compared to unstimulated RAW 264.7 cells. When LPS-stimulated RAW 264.7 cells were cultured with fAT-MSC- or IFN-γ-treated fAT-MSC-conditioned media, TNF-α, IL-1β, IL-6, and iNOS expression levels were reduced. We also measured the mRNA expression of the M2 markers IL-10 and arginase. The levels of these M2 markers were significantly increased in LPS-stimulated RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media compared to LPS-stimulated RAW 264.7 cells cultured in untreated fAT-MSC-conditioned media (Fig. 2A).\nIFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; LPS, lipopolysaccharide; TGF-β, transforming growth factor-β; IL, interleukin; Arg, arginase; iNOS, inducible nitric oxide synthase; MSC, mesenchymal stem cell; ns, not significant; FOXP3, forkhead box protein P3.\n*p < 0.05, **p < 0.01, ***p < 0.001 by one-way analysis of variance.\nAdditionally, we performed immunofluorescence staining of RAW 264.7 cells to determine whether the M2 macrophage population increased when cultured in fAT-MSC- and IFN-γ-treated fAT-MSC-conditioned media. Quantitative analysis of CD206+ and CD11b+ RAW 264.7 cells demonstrated a significant increase in CD206+ cells among RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media compared to those cultured in fAT-MSC-conditioned media or NS-398-treated, IFN-γ-stimulated fAT-MSC-conditioned media (Fig. 2B).\nTo determine the ability to induce T-cell regulation, we measured the mRNA expression of inflammatory cytokines secreted by splenocytes. Con A stimulation increased the expression of IL-1β, IFN-γ, and IL-17 in the splenocytes. However, Con A-stimulated splenocytes cultured in IFN-γ-treated fAT-MSC-conditioned media showed decreased expression of IL-1β, IFN-γ, and IL-17 compared to Con A-stimulated splenocytes cultured in untreated fAT-MSC-conditioned media. Conversely, IL-10 and FOXP3 expression levels were increased in Con A-stimulated splenocytes cultured in IFN-γ-treated fAT-MSC-conditioned media compared to Con A-stimulated splenocytes cultured in IFN-γ-untreated fAT-MSC-conditioned media (Fig. 2C).", "PGE2 ELISA Kit was used to measure PGE2 levels in the conditioned medium obtained from fAT-MSCs treated or untreated with IFN-γ and from NS-398-treated fAT-MSCs stimulated with IFN-γ. First, we confirmed that NS-398 (5 μM)-treated fAT-MSCs show normal cell viability (Supplementary Fig. 1). PGE2 was significantly increased in IFN-γ-treated fAT-MSC-conditioned media compared to untreated fAT-MSC-conditioned media and to NS-398-treated fAT-MSC-conditioned media with or without IFN-γ pretreatment (Fig. 3).\nPGE2, prostaglandin E2; IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell.\n***p < 0.001 by one-way analysis of variance.", "Numerous studies have reported that MSCs preconditioned with pro-inflammatory cytokines have increased immunomodulatory effects [12,13,14,15]. Despite these studies, studies to enhance the immunomodulatory effects of feline MSCs are lacking. Pretreatment of MSCs with IFN-γ has been reported to have a significant effect on the immunoregulatory capacity of the MSCs in humans and mice [8,16,17]. However, the effect of IFN-γ on fAT-MSCs has not been reported. In this study, we determined the mechanism underlying the improvement of the immunomodulatory effects of feline MSCs with IFN-γ stimulation.\nThe concentration of IFN-γ was determined by combining data from previous studies and preliminary experiments. HGF, TGF-β, IDO, and COX2 were significantly higher in conditioned media from fAT-MSCs pre-stimulated with 50 ng/mL IFN-γ. We also investigated the relationship between IFN-γ-preconditioned fAT-MSCs and immune cells to confirm that the immunoregulatory ability of these cells was effectively increased. RAW 264.7, a murine macrophage cell line, was used to confirm the immunomodulatory effects of IFN-γ-treated fAT-MSCs in vitro, as macrophages play an important role in the immune system [18]. Lymphocytes and macrophages are involved in various inflammatory and immune-mediated diseases and have been studied in relation to the immune system [4]. Mouse splenocytes have been used for studying various aspects of the human, dog, and rat immune systems [11,17,19]. Therefore, we used mouse splenocytes to evaluate T-cell regulation by IFN-γ-treated fAT-MSCs.\nAnti-inflammatory T-helper 2 cytokines suppress immune responses and are critical regulators of the immune system. The expression levels of the anti-inflammatory cytokines arginase and IL-10 in RAW 264.7 cells were significantly increased when they were cultured in IFN-γ-treated fAT-MSC-conditioned media. The population of CD206+ RAW 264.7 cells was also increased when cultured in IFN-γ-treated fAT-MSC-conditioned media. Culturing in IFN-γ-treated fAT-MSC-conditioned media also increased the expression of FOXP3 and IL-10 in splenocytes. In addition, stimulating RAW 264.7 cells with LPS and splenocytes with Con A confirmed that activated immunity could be regulated by IFN-γ-treated fAT-MSC-conditioned media.\nMany soluble factors secreted by MSCs, such as TGF-β, HGF, PGE2, IL-6, IL-10, IL-1, iNOS, IDO, Gal-1, and HLA-G, regulate the immune system, modulate immune cell activity, and relieve the inflammatory environment [20,21,22,23,24,25,26,27,28,29]. In particular, previous studies have reported that PGE2 plays a critical role in immune responses [5,30] and is associated with macrophage polarization and T-cell regulation [4]. However, no studies have determined whether increased PGE2 secretion is a key regulator of the immune regulation mediated by IFN-γ-treated fAT-MSCs. Our results confirmed a significant increase of PGE2 in the IFN-γ-treated fAT-MSC-conditioned media. In addition, pretreatment with the PGE2 inhibitor NS-398 significantly decreased the level of PGE2 in the IFN-γ-treated fAT-MSC-conditioned media. Pretreatment with NS-398 reduced PGE2 and thereby decreased the M2 macrophage polarization and T-cell regulation induced by the IFN-γ-treated fAT-MSC-conditioned media.\nThere were some limitations in this study. First, we used xenogeneic immune cells, namely a macrophage cell line and splenocytes. Although mouse immune cells such as RAW 264.7 macrophages and splenocytes are widely used in immunology studies, further research using feline immune cells is needed. Second, we only confirmed the immunomodulatory effects of PGE2 in this study. Although we showed here that PGE2 may be a major contributor to the immunomodulatory potential of the IFN-γ-treated fAT-MSC-conditioned media, further studies should explore the role of other soluble factors such as IDO, TSG-6, and HGF.\nIn conclusion, IFN-γ pretreatment enhanced the ability of fAT-MSCs to induce M2 macrophage polarization and T-cell regulation. It also regulated the secretion of anti- and pro-inflammatory cytokines in activated immune cells. In addition, we found that PGE2 is a major factor contributing to the immunomodulatory function of IFN-γ-pretreated fAT-MSCs. Our results provide a theoretical basis for the use of fAT-MSCs as cell-based therapeutics for autoimmune and inflammatory diseases in the future." ]
[ "intro", "materials|methods", null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, "discussion" ]
[ "Cats", "mesenchymal stem cell", "macrophage", "interferon-gamma", "prostaglandin E2" ]
INTRODUCTION: Feline mesenchymal stem cells (MSCs) have anti-inflammatory effects and immunomodulatory functions that make them attractive as novel cell-based therapeutics for immune-mediated and inflammatory diseases such as gingivostomatitis, chronic kidney disease, and asthma [1,2,3]. In addition, we previously showed the mechanisms of the anti-inflammatory functions of feline MSCs [4,5]. However, few studies have focused on enhancing the immunoregulatory effects of feline MSCs or on the underlying mechanisms. One strategy to improve the immunoregulatory capacity of MSCs is to pre-treat them with interferon-gamma (IFN-γ) [6]. IFN-γ is a pro-inflammatory cytokine secreted by natural killer cells and T cells that acts on macrophages and lymphocytes [7]. Previous studies suggested that MSCs stimulated with IFN-γ have upregulated immune-suppressive functions and show changes in the expression of immunomodulatory factors [8]. Recently, Parys et al. [9] reported that feline MSCs stimulated with IFN-γ showed significantly increased secretion of immunomodulatory factors such as indoleamine 2,3-dioxygenase (IDO), programmed death-ligand 1, interleukin (IL)-6, cyclooxygenase-2 (COX2), and hepatocyte growth factor (HGF). In that study, however, the underlying mechanisms of the anti-inflammatory effect were not identified. Therefore, the aim of our study was to evaluate the enhancement of the immunomodulatory effects of feline MSCs pre-treated with IFN-γ. In addition, we assessed the mechanisms by which this immunoregulation is induced. MATERIALS AND METHODS: Isolation and characterization of feline adipose-tissue derived (fAT)-MSCs Feline adipose tissues were obtained from 3 healthy cats during ovariohysterectomy at the Seoul National University Veterinary Medical Teaching Hospital, and their owners provided informed consent for research use. All procedures were approved by the Institutional Animal Care and Use Committee of Seoul National University (protocol No. SNU-190411-10), and the protocols were performed in accordance with approved guidelines. fAT-MSCs were cultured in Dulbecco's Modified Eagle Medium (DMEM; PAN Biotech, Germany) supplemented with 20% fetal bovine serum (FBS; PAN Biotech) and 1% (w/v) penicillin-streptomycin (PS; PAN Biotech) and incubated at 37°C in a 5% CO2 humidified atmosphere. The medium was changed every 2 days until the cells reached a confluence of 70%–80%. Previous studies used MSCs at the third or fourth passage, suggesting that early-passage cells are more effective [10]; therefore, we used passages 3–4 for fAT-MSCs. After adhering to culture plates and achieving a fibroblast-like morphology, the fAT-MSCs were characterized by flow cytometry using antibodies against the following proteins: CD44 (antibody clone IM7, fluorescein isothiocyanate [FITC] rat anti-mouse/human, 103021; Biolegend, USA), CD34 (antibody clone 1H6, phycoerythrin [PE] mouse anti-dog, 559369; BD Biosciences, USA), CD45 (antibody clone YKIX716.13, FITC rat anti-dog; eBioscience, USA), CD90 (antibody clone 5E10, PE mouse anti-human, 555596; BD Biosciences), and CD105 (antibody clone SN6, FITC mouse anti-human, MCA1557F; AbD Serotec, USA). Cell fluorescence was analyzed with FACS Aria II (BD Biosciences). In addition, special differentiation kits (Stem Pro osteogenesis differentiation, Stem Pro adipogenesis differentiation, and Stem Pro chondrogenesis differentiation kits; all from Gibco/Life Technologies, USA) were used for evaluating cellular differentiation following the manufacturer's instructions. IFN-γ stimulation To assess the effects of IFN-γ on the fAT-MSCs, 5 × 105 fAT-MSCs were seeded in 12-well plates in DMEM supplemented with 20% FBS and 1% PS. After 24 h, the fAT-MSCs were treated with 50 ng/mL IFN-γ (feline recombinant protein; Kingfisher Biotech, USA) for 48 h. The cells were then washed 4 times with Dulbecco's phosphate-buffered saline (DPBS) and maintained in DMEM supplemented with 20% FBS and 1% PS for 3 days. After incubation, the supernatant was collected, centrifuged at 1,000 rpm for 3 min to remove any debris, and stored at −80°C until use. D-Plus CCK Cell Viability Assay Kit (DonginLS, Korea) was used to determine the cell viability of IFN-γ-stimulated fAT-MSCs following the manufacturer's instructions.
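Cell viability after IFN-γ priming was read with a CCK-type colorimetric assay. The kit reports absorbance rather than viability directly, so the usual calculation expresses each reading as a percentage of the untreated control after subtracting a medium-only blank. The short Python sketch below illustrates that calculation with invented optical-density values; it is not the study's data or the kit's own software.

def percent_viability(od_treated, od_control, od_blank):
    # Standard CCK-style calculation: viability relative to the untreated
    # control after subtracting the medium-only blank.
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Hypothetical 450 nm absorbances (triplicate means), for illustration only.
print(percent_viability(od_treated=0.92, od_control=0.95, od_blank=0.08))
# ~96.6%, i.e., IFN-γ priming leaving viability essentially unchanged.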
Prostaglandin E2 (PGE2) inhibitor, NS-398 treatment To assess the role of PGE2 produced by fAT-MSCs in the anti-inflammatory response, fAT-MSCs were treated with the PGE2 inhibitor NS-398 (5 μM; Enzo Life Sciences, USA) for 24 h and then stimulated with IFN-γ (50 ng/mL for 48 h). After that, the supernatant was collected and stored as described above. Immune cells were cultured with conditioned media obtained from fAT-MSCs RAW 264.7 cells obtained from the Korean Cell Line Bank were treated for 24 h with lipopolysaccharide (LPS) (200 ng/mL; Sigma-Aldrich, USA) and then washed 3 times with DPBS. LPS-stimulated RAW 264.7 cells were seeded in 6-well plates (1 × 106 cells/well) and cultured in the conditioned media described above. In addition, splenocytes were isolated from mice, as previously described [11]. All procedures were approved by the Seoul National University Institutional Animal Care and Use Committee (approval number: SNU-190304-1). Briefly, spleens obtained from 4 mice (C57BL/6, male, 5 weeks old) were pooled and mashed using a 1 mL syringe plunger. The cell suspension was then centrifuged for 3 min at 1,000 rpm. The cell pellet was resuspended in RBC Lysis Buffer (Sigma-Aldrich) and washed with phosphate-buffered saline (PBS) 3 times, and the splenocytes were then resuspended in RPMI-1640 supplemented with 10% FBS and 1% PS. The splenocytes were stimulated with 5 μg/mL concanavalin A (Con A; Sigma-Aldrich) for 24 h to determine the mRNA expression of pro- and anti-inflammatory cytokines. The splenocytes were then collected by centrifugation at 3,000 rpm for 10 min and seeded at a density of 1 × 106 cells/well in 6-well plates in the conditioned media described above. RNA extraction, cDNA synthesis, and RT-qPCR Easy-BLUE Total RNA Extraction Kit (Intron Biotechnology, Korea) was used to extract total RNA from fAT-MSCs, RAW 264.7 cells, and splenocytes following the manufacturer's instructions. The extracted RNA was reverse-transcribed into cDNA using LaboPass M-MuLV reverse transcriptase (Cosmo Genetech, Korea) according to the supplier's instructions. Cytokine mRNA levels were measured using RT-qPCR. The reaction mixture consisted of 10 μL AMPIGENE RT-qPCR Green Mix Hi-ROX with SYBR green dye (Enzo Life Sciences, Switzerland), 400 nM each of forward and reverse primers (Bionics, Korea) (primers are listed in Table 1), and 1 μL template cDNA. Cytokine mRNA levels were quantified using glyceraldehyde 3-phosphate dehydrogenase (GAPDH) as a housekeeping control.
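The paper quantifies cytokine mRNA against GAPDH but does not spell out the relative-quantification step; a common choice for SYBR green RT-qPCR data of this kind is the 2^-ΔΔCt method. The sketch below is only an illustration of that calculation, with hypothetical Ct values rather than measurements from this study.

def fold_change_ddct(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    # ΔCt normalizes the gene of interest to GAPDH within each sample;
    # ΔΔCt compares the treated sample with the reference (control) sample.
    d_ct_sample = ct_target - ct_gapdh
    d_ct_ref = ct_target_ref - ct_gapdh_ref
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: a target cytokine in conditioned-media-treated
# cells versus the same cytokine in control cells (values invented).
print(fold_change_ddct(ct_target=24.1, ct_gapdh=17.8,
                       ct_target_ref=26.5, ct_gapdh_ref=17.9))  # about 4.9-fold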
Enzyme-linked immunosorbent assay (ELISA) The concentration of PGE2 in the cell culture supernatant was determined using an ELISA kit (Enzo Life Sciences) following the manufacturer's instructions. Culture supernatants from fAT-MSCs treated or untreated with IFN-γ, and from NS-398-treated fAT-MSCs with or without IFN-γ treatment, were collected after 48 h of stimulation for the PGE2 ELISA. Flow cytometric analysis of mouse immune cells Flow cytometry was performed using a FACS Aria II system (BD Biosciences), and the data were analyzed using FlowJo software (Tree Star, USA). To determine M2 macrophage polarization, conditioned media from fAT-MSCs, fAT-MSCs treated with IFN-γ, or NS-398-treated fAT-MSCs stimulated with IFN-γ were used to culture RAW 264.7 cells treated with or without LPS. The RAW 264.7 cells were harvested after stimulation and suspended in DPBS. The cells were stained with PE-conjugated anti-CD11c antibody (clone N418, 117307; Biolegend) and FITC-conjugated anti-CD206 antibody (clone MRC1, SC376108; Santa Cruz Biotechnology, USA) and evaluated by flow cytometry. To evaluate PGE2-mediated induction of T-cell regulation by MSCs, conditioned media from fAT-MSCs, fAT-MSCs treated with IFN-γ, or NS-398-treated fAT-MSCs stimulated with IFN-γ were used to culture splenocytes treated with or without Con A. The splenocytes were harvested after stimulation and suspended in DPBS. The cells were then stained with PE-conjugated anti-CD4 antibody (clone RM4-5, 12-0042-82; eBioscience) and APC-conjugated anti-CD25 antibody (clone PC61.5, 17-0251-82; eBioscience) and evaluated by flow cytometry. Immunofluorescence analyses RAW 264.7 cells cultured on coverslips were fixed using 4% paraformaldehyde (in PBS, pH 7.2) at room temperature for 15 min and then washed with DPBS 4 times. The cells were incubated in a blocking buffer containing 1% bovine serum albumin in PBST for 30 min and then incubated with antibodies against CD206 and CD11b at 4°C for 12 h. The cells were rinsed with DPBS 3 times and incubated with the corresponding fluorescein-conjugated secondary antibodies (1:200; Santa Cruz Biotechnology) or Texas red-conjugated secondary antibodies (1:200; Santa Cruz Biotechnology) for 1 h at room temperature in the dark. The coverslips were then washed 4 times and mounted using VECTASHIELD mounting medium containing 4′,6-diamidino-2-phenylindole (Vector Laboratories, USA). The slides were observed under an EVOS FL microscope (Life Technologies, Germany), and the stained cells in 20 random fields per group were counted. Statistical analyses All data were analyzed using GraphPad Prism v.6.01 software (GraphPad Software Inc., USA). Student's t-tests or one-way analysis of variance were used to determine statistical significance. P values less than 0.05 were considered statistically significant.
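Group comparisons were run as one-way ANOVA in GraphPad Prism. For readers who prefer a scriptable equivalent, the snippet below performs the same omnibus test in Python with SciPy; the replicate values are made up for illustration, and a post-hoc test (e.g., Tukey's HSD, one of the post-tests Prism offers) would still be needed to identify which pairs of groups differ.

from scipy import stats

# Hypothetical relative-expression replicates for three groups
# (illustrative numbers, not the study's measurements).
control     = [1.00, 0.95, 1.08]
lps         = [3.10, 2.85, 3.40]
lps_msc_ifn = [1.40, 1.55, 1.30]

f_stat, p_value = stats.f_oneway(control, lps, lps_msc_ifn)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")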
RESULTS: Characterization of fAT-MSCs Cells isolated from feline adipose tissue were characterized by immunophenotyping and multilineage differentiation. Three days after seeding, the cultured cells exhibited a fibroblast-like morphology. There was no difference in the morphology of fAT-MSCs treated with or without IFN-γ (Fig. 1A). In addition, IFN-γ-treated fAT-MSCs showed normal cell viability (Fig. 1B). IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; HGF, hepatocyte growth factor; COX2, cyclooxygenase-2; IDO, indoleamine 2,3-dioxygenase; ns, not significant; TGF-β, transforming growth factor-β. *p < 0.05, **p < 0.01 by one-way analysis of variance. Flow cytometric analyses showed that only a few of the naïve or IFN-γ-primed fAT-MSCs expressed the known hematopoietic markers CD34 and CD45, whereas more than 95% expressed the known MSC markers CD90, CD44, and CD105 (Fig. 1C). When specific differentiation media were used, fAT-MSCs differentiated into osteocytes, adipocytes, and chondrocytes. The osteogenic and chondrogenic differentiation abilities were enhanced in IFN-γ-treated fAT-MSCs compared with naïve fAT-MSCs (Fig. 1D). Additionally, 48 h of treatment with IFN-γ upregulated the secretion of the immunomodulatory factors COX2, IDO, TGF-β, and HGF in fAT-MSCs (Fig. 1E).
Immunomodulatory effects of IFN-γ primed fAT-MSCs To determine the immunomodulatory capacity of IFN-γ-treated fAT-MSCs, we quantified the mRNA expression of anti- and pro-inflammatory cytokines secreted by RAW 264.7 cells. Tumor necrosis factor-alpha (TNF-α), IL-1β, IL-6, and inducible nitric oxide synthase (iNOS) were upregulated in RAW 264.7 cells stimulated with LPS compared to unstimulated RAW 264.7 cells. When LPS-stimulated RAW 264.7 cells were cultured with fAT-MSC- or IFN-γ-treated fAT-MSC-conditioned media, TNF-α, IL-1β, IL-6, and iNOS expression levels were reduced. We also measured the mRNA expression of the M2 markers IL-10 and arginase. The levels of these M2 markers were significantly increased in LPS-stimulated RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media compared to LPS-stimulated RAW 264.7 cells cultured in untreated fAT-MSC-conditioned media (Fig. 2A). IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; LPS, lipopolysaccharide; TGF-β, transforming growth factor-β; IL, interleukin; Arg, arginase; iNOS, inducible nitric oxide synthase; MSC, mesenchymal stem cell; ns, not significant; FOXP3, forkhead box protein P3. *p < 0.05, **p < 0.01, ***p < 0.001 by one-way analysis of variance. Additionally, we performed immunofluorescence staining of RAW 264.7 cells to determine whether the M2 macrophage population increased when cultured in fAT-MSC- and IFN-γ-treated fAT-MSC-conditioned media. Quantitative analysis of CD206+ and CD11b+ RAW 264.7 cells demonstrated a significant increase in CD206+ cells among RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media compared to those cultured in fAT-MSC-conditioned media or NS-398-treated, IFN-γ-stimulated fAT-MSC-conditioned media (Fig. 2B). To determine the ability to induce T-cell regulation, we measured the mRNA expression of inflammatory cytokines secreted by splenocytes. Con A stimulation increased the expression of IL-1β, IFN-γ, and IL-17 in the splenocytes. However, Con A-stimulated splenocytes cultured in IFN-γ-treated fAT-MSC-conditioned media showed decreased expression of IL-1β, IFN-γ, and IL-17 compared to Con A-stimulated splenocytes cultured in untreated fAT-MSC-conditioned media. Conversely, IL-10 and FOXP3 expression levels were increased in Con A-stimulated splenocytes cultured in IFN-γ-treated fAT-MSC-conditioned media compared to Con A-stimulated splenocytes cultured in IFN-γ-untreated fAT-MSC-conditioned media (Fig. 2C).
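The CD206+ fractions reported in Fig. 2B come from counting stained cells in 20 random fields per group. A minimal way to aggregate such counts into a per-group percentage is sketched below; the per-field numbers are hypothetical and only illustrate the arithmetic, not the study's counts.

import numpy as np

def percent_positive(positive_counts, total_counts):
    # Per-field percentage of marker-positive cells, summarized as mean and SD.
    pct = 100.0 * np.asarray(positive_counts, float) / np.asarray(total_counts, float)
    return pct.mean(), pct.std(ddof=1)

# Hypothetical counts from a few fields (the study counted 20 fields/group).
cd206_pos = [34, 41, 29, 38]      # CD206+ cells per field
cd11b_pos = [118, 127, 103, 121]  # CD11b+ cells per field (denominator)

mean_pct, sd_pct = percent_positive(cd206_pos, cd11b_pos)
print(f"CD206+/CD11b+ = {mean_pct:.1f}% +/- {sd_pct:.1f}%")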
PGE2 concentration levels in various fAT-MSC-conditioned media PGE2 ELISA Kit was used to measure PGE2 levels in the conditioned medium obtained from fAT-MSCs treated or untreated with IFN-γ and from NS-398-treated fAT-MSCs stimulated with IFN-γ. First, we confirmed that NS-398 (5 μM)-treated fAT-MSCs show normal cell viability (Supplementary Fig. 1). PGE2 was significantly increased in IFN-γ-treated fAT-MSC-conditioned media compared to untreated fAT-MSC-conditioned media and to NS-398-treated fAT-MSC-conditioned media with or without IFN-γ pretreatment (Fig. 3). PGE2, prostaglandin E2; IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell. ***p < 0.001 by one-way analysis of variance.
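Competitive PGE2 ELISAs of this kind report an absorbance that falls as the analyte concentration rises, and concentrations are read off a fitted standard curve; the kit's own software was presumably used here. As an illustration of the underlying calculation, the sketch below fits a four-parameter logistic curve to invented calibration points and inverts it for a sample reading; none of the numbers are from the kit or the study.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # Four-parameter logistic: a = response at zero analyte, d = response at
    # saturation, c = inflection concentration, b = slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standards (pg/mL) and absorbances, for illustration only.
std_conc = np.array([39.0, 78.0, 156.0, 312.0, 625.0, 1250.0, 2500.0])
std_abs  = np.array([1.85, 1.62, 1.31, 0.98, 0.66, 0.42, 0.27])

params, _ = curve_fit(four_pl, std_conc, std_abs, p0=[2.0, 1.0, 300.0, 0.1])

def abs_to_conc(absorbance, a, b, c, d):
    # Invert the fitted 4PL curve to read a sample concentration.
    return c * ((a - d) / (absorbance - d) - 1.0) ** (1.0 / b)

print(abs_to_conc(0.85, *params))  # PGE2 in pg/mL for a hypothetical sample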
Macrophage polarization and T-cell regulation are associated with PGE2 To determine the role of PGE2 in the immunomodulatory capacity of fAT-MSCs, we compared pro- and anti-inflammatory cytokines secreted by RAW 264.7 cells and splenocytes. Flow cytometry revealed an increase in CD11c+ RAW 264.7 cells with LPS stimulation, which was decreased by culturing in IFN-γ-treated fAT-MSC-conditioned media (Fig. 4). However, there was a decrease in CD206+ RAW 264.7 cells with LPS stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. Pre-treating the IFN-γ-treated fAT-MSCs with the PGE2 inhibitor NS-398 attenuated the effect of the conditioned media on RAW 264.7 cells, significantly increasing CD11c+ and decreasing CD206+ RAW 264.7 cells (Fig. 4). PGE2, prostaglandin E2; IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; LPS, lipopolysaccharide; MSC, mesenchymal stem cell; ns, not significant; PE, phycoerythrin; FITC, fluorescein isothiocyanate; SSC, side scatter; APC, allophycocyanin; Con A, concanavalin A. **p < 0.01, ***p < 0.001 by one-way analysis of variance. PGE2 secreted by IFN-γ-treated fAT-MSCs is critical for their immune function We treated IFN-γ-treated fAT-MSCs with the PGE2 inhibitor NS-398 and then compared the mRNA expression of inflammatory cytokines secreted by RAW 264.7 cells cultured in IFN-γ-treated fAT-MSC-conditioned media or in NS-398-treated, IFN-γ-treated fAT-MSC-conditioned media. First, we confirmed that 5 μM NS-398 effectively inhibits PGE2 secretion in fAT-MSCs without affecting cell viability (data not shown). TNF-α, IL-1β, and IL-6 expression levels increased with LPS stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. IL-10 levels decreased after LPS stimulation, which was also reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. However, pre-treating the fAT-MSCs with NS-398 attenuated the effect of the IFN-γ-treated fAT-MSC-conditioned media (Fig. 5). PGE2, prostaglandin E2; IFN-γ, interferon-gamma; fAT-MSC, feline adipose tissue-derived mesenchymal stem cell; IL, interleukin; TNF-α, tumor necrosis factor-alpha; LPS, lipopolysaccharide. *p < 0.05, **p < 0.01 by one-way analysis of variance. To confirm the role of PGE2 in T-cell regulation, we determined the mRNA expression of immune cytokines secreted by splenocytes. IL-1β, IL-17, and IFN-γ were increased after Con A stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media. Pre-treating the fAT-MSCs with NS-398 attenuated this effect and increased IL-1β, IL-17, and IFN-γ levels. IL-10 was decreased after Con A stimulation, which was reversed by culturing in IFN-γ-treated fAT-MSC-conditioned media; pre-treating the IFN-γ-treated fAT-MSCs with NS-398 attenuated this effect and decreased IL-10 levels (Fig. 5). DISCUSSION: Numerous studies have reported that MSCs preconditioned with pro-inflammatory cytokines have increased immunomodulatory effects [12,13,14,15]. Despite these studies, studies to enhance the immunomodulatory effects of feline MSCs are lacking. Pretreatment of MSCs with IFN-γ has been reported to have a significant effect on the immunoregulatory capacity of the MSCs in humans and mice [8,16,17]. However, the effect of IFN-γ on fAT-MSCs has not been reported. In this study, we determined the mechanism underlying the improvement of the immunomodulatory effects of feline MSCs with IFN-γ stimulation. The concentration of IFN-γ was determined by combining data from previous studies and preliminary experiments.
Expression of HGF, TGF-β, IDO, and COX2 was significantly higher in fAT-MSCs pre-stimulated with 50 ng/mL IFN-γ. We also investigated the relationship between IFN-γ-preconditioned fAT-MSCs and immune cells to confirm that the immunoregulatory ability of these cells was effectively increased. RAW 264.7, a murine macrophage cell line, was used to confirm the immunomodulatory effects of IFN-γ-treated fAT-MSCs in vitro, as macrophages play an important role in the immune system [18]. Lymphocytes and macrophages are involved in various inflammatory and immune-mediated diseases and have been studied in relation to the immune system [4]. Mouse splenocytes have been used for studying various aspects of the human, dog, and rat immune systems [111719]. Therefore, we used mouse splenocytes to evaluate T-cell regulation by IFN-γ-treated fAT-MSCs. Anti-inflammatory T-helper 2 cytokines suppress immune responses and are critical regulators of the immune system. The expression levels of the anti-inflammatory cytokines arginase and IL-10 in RAW 264.7 cells were significantly increased when the cells were cultured in IFN-γ-treated fAT-MSC-conditioned media. The population of CD206+ RAW 264.7 cells was also increased when cultured in IFN-γ-treated fAT-MSC-conditioned media. Culturing in IFN-γ-treated fAT-MSC-conditioned media also increased the expression of FOXP3 and IL-10 in splenocytes. In addition, stimulating RAW 264.7 cells with LPS and splenocytes with Con A confirmed that the activated immune response could be regulated by IFN-γ-treated fAT-MSC-conditioned media. Many soluble factors secreted by MSCs, such as TGF-β, HGF, PGE2, IL-6, IL-10, IL-1, iNOS, IDO, Gal-1, and HLA-G, regulate the immune system, modulate immune cell activity, and relieve the inflammatory environment [20212223242526272829]. In particular, previous studies have reported that PGE2 plays a critical role in immune responses [530] and is associated with macrophage polarization and T-cell regulation [4]. However, no studies had determined whether increased PGE2 secretion is a key regulator of the immune regulation mediated by IFN-γ-treated fAT-MSCs. Our results confirmed a significant increase of PGE2 in the IFN-γ-treated fAT-MSC-conditioned media. In addition, pretreatment with the PGE2 inhibitor NS-398 significantly decreased the level of PGE2 in the IFN-γ-treated fAT-MSC-conditioned media. Pretreatment with NS-398 attenuated PGE2 secretion and thus decreased the M2 macrophage polarization and T-cell regulation induced by the IFN-γ-treated fAT-MSC-conditioned media. There were some limitations in this study. First, we used xenogeneic immune cells, namely a macrophage cell line and splenocytes. Although mouse immune cells such as RAW 264.7 macrophages and splenocytes are widely used in immunology studies, further research using feline immune cells is needed. Second, we only confirmed the immunomodulatory effects of PGE2 in this study. Although we showed here that PGE2 may be a major contributor to the immunomodulatory potential of the IFN-γ-treated fAT-MSC-conditioned media, further studies should explore the role of other soluble factors such as IDO, TSG-6, and HGF. In conclusion, IFN-γ pretreatment enhanced the ability of fAT-MSCs to induce M2 macrophage polarization and T-cell regulation. It also regulated the secretion of anti- and pro-inflammatory cytokines in activated immune cells.
In addition, we found that PGE2 is a major factor contributing to the immunomodulatory function of IFN-γ-pretreated fAT-MSCs. Our results provide a theoretical basis for the use of fAT-MSCs as cell-based therapeutics for autoimmune and inflammatory diseases in the future.
Background: Preconditioning with inflammatory stimuli is used to improve the secretion of anti-inflammatory agents in stem cells from various species such as mouse, human, and dog. However, there are only a few studies on feline stem cells. Methods: To assess the interaction of lymphocytes and macrophages with IFN-γ-pretreated fAT-MSCs, mouse splenocytes and RAW 264.7 cells were cultured with the conditioned media from IFN-γ-pretreated MSCs. Results: Pretreatment with IFN-γ increased the gene expression levels of cyclooxygenase-2, indoleamine 2,3-dioxygenase, hepatocyte growth factor, and transforming growth factor-beta 1 in the MSCs. The conditioned media from IFN-γ-pretreated MSCs increased the expression levels of M2 macrophage markers and regulatory T-cell markers compared to those in the conditioned media from naive MSCs. Further, the prostaglandin E₂ (PGE₂) inhibitor NS-398 attenuated the immunoregulatory potential of MSCs, suggesting that the increase in PGE₂ levels induced by IFN-γ stimulation is a crucial factor in the immune regulatory capacity of MSCs pretreated with IFN-γ. Conclusions: IFN-γ pretreatment improves the immune regulatory profile of fAT-MSCs mainly via the secretion of PGE₂, which induces macrophage polarization and increases regulatory T-cell numbers.
null
null
10,982
240
[ 373, 169, 70, 268, 147, 67, 254, 176, 48, 286, 529, 157, 233, 396 ]
18
[ "fat", "ifn", "treated", "mscs", "msc", "fat mscs", "fat msc", "cells", "conditioned", "media" ]
[ "immunoregulatory effect mscs", "mscs anti inflammatory", "feline mscs stimulated", "immunology research feline", "feline immune cells" ]
null
null
null
[CONTENT] Cats | mesenchymal stem cell | macrophage | interferon-gamma | prostaglandin E2 [SUMMARY]
null
[CONTENT] Cats | mesenchymal stem cell | macrophage | interferon-gamma | prostaglandin E2 [SUMMARY]
null
[CONTENT] Cats | mesenchymal stem cell | macrophage | interferon-gamma | prostaglandin E2 [SUMMARY]
null
[CONTENT] Animals | Cats | Dinoprostone | Female | Gene Expression Regulation | Immunomodulation | Interferon-gamma | Mesenchymal Stem Cells | Mice | RAW 264.7 Cells [SUMMARY]
null
[CONTENT] Animals | Cats | Dinoprostone | Female | Gene Expression Regulation | Immunomodulation | Interferon-gamma | Mesenchymal Stem Cells | Mice | RAW 264.7 Cells [SUMMARY]
null
[CONTENT] Animals | Cats | Dinoprostone | Female | Gene Expression Regulation | Immunomodulation | Interferon-gamma | Mesenchymal Stem Cells | Mice | RAW 264.7 Cells [SUMMARY]
null
[CONTENT] immunoregulatory effect mscs | mscs anti inflammatory | feline mscs stimulated | immunology research feline | feline immune cells [SUMMARY]
null
[CONTENT] immunoregulatory effect mscs | mscs anti inflammatory | feline mscs stimulated | immunology research feline | feline immune cells [SUMMARY]
null
[CONTENT] immunoregulatory effect mscs | mscs anti inflammatory | feline mscs stimulated | immunology research feline | feline immune cells [SUMMARY]
null
[CONTENT] fat | ifn | treated | mscs | msc | fat mscs | fat msc | cells | conditioned | media [SUMMARY]
null
[CONTENT] fat | ifn | treated | mscs | msc | fat mscs | fat msc | cells | conditioned | media [SUMMARY]
null
[CONTENT] fat | ifn | treated | mscs | msc | fat mscs | fat msc | cells | conditioned | media [SUMMARY]
null
[CONTENT] mechanisms | feline mscs | functions | immunomodulatory | mscs | feline | inflammatory | ifn | mechanisms anti | mscs pre [SUMMARY]
null
[CONTENT] msc | fat msc | fat | ifn | msc conditioned | fat msc conditioned media | msc conditioned media | fat msc conditioned | conditioned media | media [SUMMARY]
null
[CONTENT] fat | ifn | msc | treated | fat msc | mscs | fat mscs | conditioned | msc conditioned media | fat msc conditioned [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
null
[CONTENT] IFN | 1 ||| IFN | M2 ||| IFN | IFN [SUMMARY]
null
[CONTENT] ||| ||| IFN | RAW 264.7 | IFN ||| IFN | 1 ||| IFN | M2 ||| IFN | IFN | PGE₂ [SUMMARY]
null
Efficacy of ginseng oral administration and ginseng injections on cancer-related fatigue: A meta-analysis.
36401389
Up to 90% of patients who are under the active treatment suffer from cancer-related fatigue (CRF). CRF can persist about 10 years after diagnosis and/or treatment. Accumulating reports support that ginseng and ginseng injections are both potential drugs for the treatment of CRF but few studies put them together for analysis.
BACKGROUND
Two reviewers independently extracted data in 3 databases (PubMed, Cochrane Library and China National Knowledge Infrastructure) from their inception to May 24, 2021. The primary outcome was the effect of ginseng in alleviating CRF. The secondary outcome was ginseng in alleviating emotional or cognitive fatigue. Standardized mean difference (SMD) was employed.
METHODS
Twelve studies were included to evaluate efficacy of ginseng oral administration and ginseng injections on CRF. The pooled SMD was 0.40 (95% confidence Interval [95% CI] [0.29-0.51], P < .00001). Six studies were included to evaluate efficacy of ginseng oral administration on CRF and the SMD was 0.29 (95% CI [0.15-0.42], P < .0001). The order was 2000 mg/d, 3000 mg/d, 1000 mg/d and placebo from high efficacy to low. Ten studies were included to evaluate efficacy of ginseng injections on CRF and the SMD was 0.74 (95% CI [0.59-0.90], P < .00001). Emotional fatigue was reported in 4 studies, ginseng oral administration in 2 and ginseng injections in 2. The pooled SMD was 0.12 (95% CI [-0.04 to 0.29], P = .15). Cognitive fatigue was reported in 4 studies focusing on ginseng injections and the SMD was 0.72 (95% CI [0.48-0.96], P < .00001).
RESULTS
Ginseng can improve CRF. Intravenous injection might be better than oral administration. Ginseng injections may alleviate cognitive fatigue. No evidence was found to support that ginseng could alleviate emotional fatigue.
CONCLUSION
[ "Humans", "Panax", "Fatigue", "Neoplasms", "Injections", "Administration, Oral" ]
9678550
1. Introduction
Cancer-related fatigue (CRF) is one of the most common symptoms in patients with cancer. Up to 90% of patients who are under the active treatment suffer from CRF.[1] It can persist months or even years after treatment ends which interferes with usual functioning. Besides, fatigue is rarely an isolated symptom and most commonly occurs with other symptoms and signs, such as pain, emotional distress, anemia, and sleep disturbances, in symptom clusters.[2] Unlike typical fatigue, CRF cannot be relieved by additional rest, sleep, reducing physical activity, etc. On the contrary, exercise/physical activity is likely to be effective in ameliorating CRF. Psychological/psycho-education, mind/body wellness training, nutritional and dietary supplements may be effective, too. Unfortunately, these interventions yield, at most, moderate benefits in meta-analyses.[2] Until now, evidence to date indicates that synthetic drugs are less effective than non-pharmacologic intervention.[2] Ginseng is the root of plants in the genus Panax, such as Korean ginseng (Panax ginseng C.A. Meyer), Japanese ginseng (Panax ginseng C.A. Meyer), and American ginseng (Panax quinquefolius L.). Red ginseng is a processed product of Asian ginseng (Panax ginseng C.A. Meyer.) by steaming and drying. Ginseng has been used to treat chronic fatigue as early as 2000 years ago in China. Nowadays, ginseng is not only used in China, but also sold and used in more than 35 countries, such as Japan, South Korea, North Korea, America.[3] Preclinical data supports that ginseng may be helpful for fatigue. Animal studies have reported that ginseng can improve the endurance and swimming duration time of mice.[4] Besides, ginseng is mentioned as a dietary supplement for CRF treatment in the NCCN Guidelines based on 1 randomized, double-blind clinical trial using American ginseng which indicated that American ginseng of 2000 mg improved CRF symptom.[5] There are several types of ginseng and they were deemed to have same active pharmaceutical ingredients, for example, Rg1 and Rb1. However, other types of ginseng were not discussed in the Guidelines.[6] In addition to oral administration, several types of injections whose main components are ginseng extractions have been approved by Chinese National Medical Products Administration to be used in clinic. The common ones are Kangai injection, Shenfu injection, Shenmai injection, Shenqi Fuzheng injection. Kangai injection (China Food and Drug Administration approval number Z20026868) consists of the extracts from Astragalus membranaceus (Fisch.) (Bunge), Panax ginseng C.A. Meyer. and Sophora flavescens Aiton. Shenfu injection (China Food and Drug Administration approval number Z20043117) consists of the extracts from Red ginseng and Aconitum wilsonii Stapf ex Veitch.Shenmai injection (China Food and Drug Administration approval number Z2009364) consists of the extracts from Red ginseng and Ophiopogon japonicus (Linn. f.) Ker-Gawl.. Shenqi Fuzheng injection (China Food and Drug Administration approval number Z19990065) consists of the extracts from Codonopsis pilosula (Franch.) Nannf. and Astragalus Membranaceus (Fisch.) Bunge. Some studies have shown that ginseng injections are of great help in improving the quality of life and reducing the side effects of radiotherapy and chemotherapy in cancer patients.[7,8] Therefore, ginseng and ginseng injections are both potential drugs for the treatment of CRF but few studies put them together for analysis. 
Here, we employed standardized mean difference (SMD) to conduct a meta-analysis to evaluate the efficacy of ginseng and ginseng injections in the treatment of CRF and the quality of the evidence. Emotional and cognitive fatigue were evaluated, too. Besides, in our study, subgroup analyses were conducted to compare the efficacy of cancer types, cancer stages, basic strategies for treatment of cancer and so on.
2. Methods
2.1. Study registration The protocol of this meta-analysis is registered in PROSPERO, under the registration number CRD42021228094 on February 18, 2021. It is available at http://www.crd.york.ac.uk/PROSPERO/. The protocol of this meta-analysis is registered in PROSPERO, under the registration number CRD42021228094 on February 18, 2021. It is available at http://www.crd.york.ac.uk/PROSPERO/. 2.2. Literature search PubMed, Cochrane Library, China National Knowledge Infrastructure were systematically searched from the database inception to May 24, 2021. The following keywords were used for the Chinese database search:(shen or renshen or gaolishen or xiyangshen or hongshen or baishen or shuishen or yeshanshen or Kangai zhusheye or Shenqi Fuzheng zhusheye or Shenfu zhusheye or Shenmai zhusheye) and (pilao or pifa or pijuan or pibei or juandai or fali). The following keywords were used for the English database search: (Ginseng or Ginsengs or P. quinquefolius or Panax or ginsenosides or ginsenoside or Kangai injection or Shenqi fuzheng injection or Shenfu injection or Shenmai injection) and (fatigue or lethargy or exhaustion or tiredness or weariness or physical performance or exercise performance) (Table S1, http://links.lww.com/MD/H734 for details). PubMed, Cochrane Library, China National Knowledge Infrastructure were systematically searched from the database inception to May 24, 2021. The following keywords were used for the Chinese database search:(shen or renshen or gaolishen or xiyangshen or hongshen or baishen or shuishen or yeshanshen or Kangai zhusheye or Shenqi Fuzheng zhusheye or Shenfu zhusheye or Shenmai zhusheye) and (pilao or pifa or pijuan or pibei or juandai or fali). The following keywords were used for the English database search: (Ginseng or Ginsengs or P. quinquefolius or Panax or ginsenosides or ginsenoside or Kangai injection or Shenqi fuzheng injection or Shenfu injection or Shenmai injection) and (fatigue or lethargy or exhaustion or tiredness or weariness or physical performance or exercise performance) (Table S1, http://links.lww.com/MD/H734 for details). 2.3. Inclusion/exclusion criteria For inclusion in the review, studies were required to meet the following criteria: Experimental design: randomized controlled trials; Type of participants: subjects with CRF, regardless of age, sex, type of cancer, pathological type, cancer treatment; Type of interventions: drugs with ginseng or ginseng injections; Control: unlimited treatment method; Language types: Chinese and English studies. Studies without enough data (The duration of treatment or fatigue score is unknown) or studies whose participants were subjectively selected were excluded from analysis. For inclusion in the review, studies were required to meet the following criteria: Experimental design: randomized controlled trials; Type of participants: subjects with CRF, regardless of age, sex, type of cancer, pathological type, cancer treatment; Type of interventions: drugs with ginseng or ginseng injections; Control: unlimited treatment method; Language types: Chinese and English studies. Studies without enough data (The duration of treatment or fatigue score is unknown) or studies whose participants were subjectively selected were excluded from analysis. 2.4. Selection of relevant studies and quality assessment Two reviewers independently extracted data based on the predetermined criteria, and discrepancies were resolved by consensus. 
From studies included in the final analysis, the following data were extracted: the name of first author, year of publication, geographic location, types of cancer, basic strategies for treatment of cancer (surgery, chemotherapy or radiotherapy), species and dose of ginseng and ginseng injections in the intervention group, treatment regimen of the control group, duration of treatment, tool name used to assess CRF and type of dimensions used for main outcomes measured. The Cochrane Handbook was used for systematic reviews of interventions to evaluate the quality of the included studies.[9,10] This approach requires studies to be assessed across 6 special domains that were subjected to potential bias, including sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting, and other sources of bias. There are 3 biases of judgment: Yes (Low risk), No (High risk), Not clear (Unclear risk). Two reviewers independently extracted data based on the predetermined criteria, and discrepancies were resolved by consensus. From studies included in the final analysis, the following data were extracted: the name of first author, year of publication, geographic location, types of cancer, basic strategies for treatment of cancer (surgery, chemotherapy or radiotherapy), species and dose of ginseng and ginseng injections in the intervention group, treatment regimen of the control group, duration of treatment, tool name used to assess CRF and type of dimensions used for main outcomes measured. The Cochrane Handbook was used for systematic reviews of interventions to evaluate the quality of the included studies.[9,10] This approach requires studies to be assessed across 6 special domains that were subjected to potential bias, including sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting, and other sources of bias. There are 3 biases of judgment: Yes (Low risk), No (High risk), Not clear (Unclear risk). 2.5. Outcomes of interest The primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. The secondary outcome was emotional or cognitive fatigue alleviated by ginseng and ginseng injections. The primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. The secondary outcome was emotional or cognitive fatigue alleviated by ginseng and ginseng injections. 2.6. Statistical analysis RevMan5.3 (Review Manager (RevMan), Computer program, version 5.3. Cochrane Collaboration, Copenhagen, Denmark) was used for our statistical analysis.[9,10] STATA v.16.0 (College Station, TX) was used for network meta-analysis. Because clinical indices to assess clinical response/remission differed among studies, SMD was used as a main effect size to calculate those differences.[11] The calculation formula of SMD is as follows: Where M1 is the mean of fatigue reduction in the intervention group, M2 is the mean of fatigue reduction in the control group, and pooled SD is a pooled intervention specific standard deviation.[8] If the value of SMD is positive and P < .05, it shows that the effect of the intervention group is better than that of the control group. RevMan5.3 (Review Manager (RevMan), Computer program, version 5.3. Cochrane Collaboration, Copenhagen, Denmark) was used for our statistical analysis.[9,10] STATA v.16.0 (College Station, TX) was used for network meta-analysis. 
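The displayed SMD equation referred to by "The calculation formula of SMD is as follows" appears to have been lost during text extraction. Reconstructed from the definitions given in the surrounding sentence (this is a reconstruction, not text copied from the article), the standard form is:

```latex
\mathrm{SMD} = \frac{M_1 - M_2}{SD_{\mathrm{pooled}}}
```

where $M_1$ and $M_2$ are the mean fatigue reductions in the intervention and control groups, and $SD_{\mathrm{pooled}}$ is the pooled intervention-specific standard deviation.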
Because clinical indices to assess clinical response/remission differed among studies, SMD was used as the main effect size to calculate those differences.[11] The calculation formula of SMD is as follows: SMD = (M1 − M2) / pooled SD, where M1 is the mean of fatigue reduction in the intervention group, M2 is the mean of fatigue reduction in the control group, and pooled SD is a pooled intervention-specific standard deviation.[8] If the value of SMD is positive and P < .05, it shows that the effect of the intervention group is better than that of the control group. 2.7. Assessment of heterogeneity The Chi² test combined with the I² test was used to assess the heterogeneity between studies. If P < .1 or I² > 50%, it suggested significant heterogeneity and we would use a random-effects model for meta-analysis; otherwise, a fixed-effects model would be used. Wherever feasible, a meta-regression analysis would be conducted to explore the source of significant heterogeneity. Sensitivity analyses would be undertaken to assess the robustness of our findings by excluding studies with high risk of bias.[12] The Chi² test combined with the I² test was used to assess the heterogeneity between studies. If P < .1 or I² > 50%, it suggested significant heterogeneity and we would use a random-effects model for meta-analysis; otherwise, a fixed-effects model would be used. Wherever feasible, a meta-regression analysis would be conducted to explore the source of significant heterogeneity. Sensitivity analyses would be undertaken to assess the robustness of our findings by excluding studies with high risk of bias.[12] 2.8. Assessment of reporting biases Funnel plots were performed to assess reporting bias when more than 10 trials were available. Egger's regression intercept was calculated by STATA v.16.0 to test for asymmetry. A 2-tailed P value < .05 was considered statistically significant.[13] Funnel plots were performed to assess reporting bias when more than 10 trials were available. Egger's regression intercept was calculated by STATA v.16.0 to test for asymmetry. A 2-tailed P value < .05 was considered statistically significant.[13] 2.9. Summarizing and interpreting results The GRADE approach was used to interpret findings. We assessed the outcomes with reference to the overall risk of bias of the included studies, the inconsistency of the results, the directness of the evidence, the precision of the estimates, and the risk of publication bias. The quality of the body of evidence for each assessable outcome was categorized as follows: no reason to downgrade the quality of evidence, serious reason (downgraded by one) or very serious reason (downgraded by two).[14] The GRADE approach was used to interpret findings. We assessed the outcomes with reference to the overall risk of bias of the included studies, the inconsistency of the results, the directness of the evidence, the precision of the estimates, and the risk of publication bias. The quality of the body of evidence for each assessable outcome was categorized as follows: no reason to downgrade the quality of evidence, serious reason (downgraded by one) or very serious reason (downgraded by two).[14]
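Sections 2.6 and 2.7 describe pooling SMDs and switching between fixed- and random-effects models according to heterogeneity. The following is a minimal, self-contained sketch of that logic using inverse-variance weighting and a DerSimonian-Laird estimate of between-study variance; the study values are hypothetical, the I²-only switch is a simplification of the paper's rule (which also considers the Q-test P value), and this is not a reimplementation of RevMan 5.3 or STATA.

```python
# Minimal sketch of inverse-variance pooling of SMDs with a fixed/random-effects
# switch, in the spirit of sections 2.6-2.7. Hypothetical inputs only.
import math

studies = [(0.35, 0.10), (0.50, 0.15), (0.20, 0.12), (0.60, 0.20)]  # (SMD, SE)

w_fixed = [1 / se**2 for _, se in studies]
smd_fixed = sum(w * s for (s, _), w in zip(studies, w_fixed)) / sum(w_fixed)

# Cochran's Q and I² quantify between-study heterogeneity
q = sum(w * (s - smd_fixed) ** 2 for (s, _), w in zip(studies, w_fixed))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

if i_squared > 50:  # random-effects (DerSimonian-Laird) when heterogeneity is high
    c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)
    w_re = [1 / (se**2 + tau2) for _, se in studies]
    pooled = sum(w * s for (s, _), w in zip(studies, w_re)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
else:               # fixed-effects model otherwise
    pooled, se_pooled = smd_fixed, math.sqrt(1 / sum(w_fixed))

print(f"I² = {i_squared:.1f}%, pooled SMD = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se_pooled:.2f} to {pooled + 1.96 * se_pooled:.2f})")
```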
3. Results
3.1. Selection and general characteristics of the included studies A total of 1764 studies were identified from the 3 electronic databases. Seventy-one duplicate studies were excluded by using Endnote X9. On review of the title and abstract, 1631 studies were excluded. After further careful review of 62 articles of the full text, a further 40 studies were excluded. Finally, 22 papers with a total of 2086 participants were included. Patients were treated with ginseng oral administration in 7 papers and with ginseng injections in 15 (Fig. 1 and Table 1). They were published between 2010 and 2020 and were conducted in China (n = 16), America (n = 3), Korea (n = 2) and Italy (n = 1). Six studies were randomized, double-blind, placebo-controlled design trials, and the rest of 16 studies used randomized design (Table 1). The detailed information was summarized in Table 1 and Table S2, http://links.lww.com/MD/H735 and 3, http://links.lww.com/MD/H736. Characteristics of studies included in the systematic review and meta-analysis. BFI = brief fatigue inventory, CFS = cancer fatigue scale, FACIT-F = functional assessment of chronic illness therapy-fatigue subscale, FSI = fatigue symptom inventory, MFSI-SF = multidimensional fatigue symptom inventory-short form, PFS = piper fatigue scale. Flow chart. A total of 1764 studies were identified from the 3 electronic databases. Seventy-one duplicate studies were excluded by using Endnote X9. On review of the title and abstract, 1631 studies were excluded. After further careful review of 62 articles of the full text, a further 40 studies were excluded. Finally, 22 papers with a total of 2086 participants were included. Patients were treated with ginseng oral administration in 7 papers and with ginseng injections in 15 (Fig. 1 and Table 1). They were published between 2010 and 2020 and were conducted in China (n = 16), America (n = 3), Korea (n = 2) and Italy (n = 1). Six studies were randomized, double-blind, placebo-controlled design trials, and the rest of 16 studies used randomized design (Table 1). The detailed information was summarized in Table 1 and Table S2, http://links.lww.com/MD/H735 and 3, http://links.lww.com/MD/H736. Characteristics of studies included in the systematic review and meta-analysis. BFI = brief fatigue inventory, CFS = cancer fatigue scale, FACIT-F = functional assessment of chronic illness therapy-fatigue subscale, FSI = fatigue symptom inventory, MFSI-SF = multidimensional fatigue symptom inventory-short form, PFS = piper fatigue scale. Flow chart. 3.2. Methodological quality of studies All included 22 studies were randomized controlled trials. Two were dynamically allocated by computer,[5,15] and 4 used random number table method.[7,16–18] Those 6 were rated as low risk. One study used stratified block randomization allocation and bias risk was not clear.[19] The other 15 studies didn’t describe the method of random sequence generation. Of 6 double-blind studies, only 1 mentioned double-blind method but did not describe the specific implementation process.[20] Two studies were multicenter and 6 used placebo as control.[5,19] All included studies had low risk of bias regarding incomplete outcome data, had low risk of bias regarding selective reporting and none claimed conflict of interest, early termination of the trial (Figs. 2 and 3). Risk of bias graph. Risk of bias summary. All included 22 studies were randomized controlled trials. 
Two were dynamically allocated by computer,[5,15] and 4 used random number table method.[7,16–18] Those 6 were rated as low risk. One study used stratified block randomization allocation and bias risk was not clear.[19] The other 15 studies didn’t describe the method of random sequence generation. Of 6 double-blind studies, only 1 mentioned double-blind method but did not describe the specific implementation process.[20] Two studies were multicenter and 6 used placebo as control.[5,19] All included studies had low risk of bias regarding incomplete outcome data, had low risk of bias regarding selective reporting and none claimed conflict of interest, early termination of the trial (Figs. 2 and 3). Risk of bias graph. Risk of bias summary. 3.3. Outcome of heterogeneity text For the primary outcome, all intervention groups’ data was combined, regardless of the types and stages of cancer and so on. Consequently, our analyses were subject to high potential risk of between-study heterogeneity. A meta-regression was conducted, we found that placebo may be a source of heterogeneity. In 6 studies that used placebo, ginseng oral administration was in 6 and ginseng injections in 0, so analysis was conducted separately. We did not find other sources of heterogeneity (Table 2). Sensitivity analyses were undertaken by excluding studies with high risk of bias. Compare the 2 results, we still ca not explain the source of heterogeneity (Fig. S1, http://links.lww.com/MD/H737;2, http://links.lww.com/MD/H738;3, http://links.lww.com/MD/H739). Therefore, whether heterogeneity taken into account or not, results were all presented and discussed in the manuscript, respectively. The same as the secondary outcome (Table 2). Outcome of meta-regression. CRF = cancer-related fatigue. For the primary outcome, all intervention groups’ data was combined, regardless of the types and stages of cancer and so on. Consequently, our analyses were subject to high potential risk of between-study heterogeneity. A meta-regression was conducted, we found that placebo may be a source of heterogeneity. In 6 studies that used placebo, ginseng oral administration was in 6 and ginseng injections in 0, so analysis was conducted separately. We did not find other sources of heterogeneity (Table 2). Sensitivity analyses were undertaken by excluding studies with high risk of bias. Compare the 2 results, we still ca not explain the source of heterogeneity (Fig. S1, http://links.lww.com/MD/H737;2, http://links.lww.com/MD/H738;3, http://links.lww.com/MD/H739). Therefore, whether heterogeneity taken into account or not, results were all presented and discussed in the manuscript, respectively. The same as the secondary outcome (Table 2). Outcome of meta-regression. CRF = cancer-related fatigue. 3.4. Outcome of publication bias text Egger linear regression was conducted to text symmetry of funnel plots. No publication bias was found in each outcome (Fig. S4, http://links.lww.com/MD/H740;6, http://links.lww.com/MD/H742;7, http://links.lww.com/MD/H744). Egger linear regression was conducted to text symmetry of funnel plots. No publication bias was found in each outcome (Fig. S4, http://links.lww.com/MD/H740;6, http://links.lww.com/MD/H742;7, http://links.lww.com/MD/H744). 3.5. Outcome of GRADE rating Table 3 for details. Outcome of GRADE rating. CI = confidence interval, CRF = cancer-related fatigue, SMD = standardized mean difference. 
There was substantial heterogeneity among studies, meta-regression and sensitivity analyses of each outcome were conducted but we didn’t find the sources of heterogeneity. Small sample size and wide confidence interval. Table 3 for details. Outcome of GRADE rating. CI = confidence interval, CRF = cancer-related fatigue, SMD = standardized mean difference. There was substantial heterogeneity among studies, meta-regression and sensitivity analyses of each outcome were conducted but we didn’t find the sources of heterogeneity. Small sample size and wide confidence interval. 3.6. Efficacy of ginseng oral administration and ginseng injections on CRF Fatigue was reported in 22 studies and 10 were excluded because of heterogeneity. Of included 12 studies, ginseng was used in 5 studies and ginseng injections in 7. The number of patients in the ginseng group is 656 and 646 in control. Efficacy was assessed between 2 weeks and 12 weeks. The pooled SMD was 0.40 (95% confidence interval (95% CI) [0.29–0.51], P < .00001) (Fig. 4). If heterogeneity was not taken into account, the pooled SMD was 0.89 (95% CI [0.60–1.18], P < .00001) (Fig. S4, http://links.lww.com/MD/H740). Those indicate that ginseng can alleviate CRF. Efficacy of ginseng oral administration and ginseng injections on CRF. CRF: cancer-related fatigue. Fatigue was reported in 22 studies and 10 were excluded because of heterogeneity. Of included 12 studies, ginseng was used in 5 studies and ginseng injections in 7. The number of patients in the ginseng group is 656 and 646 in control. Efficacy was assessed between 2 weeks and 12 weeks. The pooled SMD was 0.40 (95% confidence interval (95% CI) [0.29–0.51], P < .00001) (Fig. 4). If heterogeneity was not taken into account, the pooled SMD was 0.89 (95% CI [0.60–1.18], P < .00001) (Fig. S4, http://links.lww.com/MD/H740). Those indicate that ginseng can alleviate CRF. Efficacy of ginseng oral administration and ginseng injections on CRF. CRF: cancer-related fatigue. 3.7. Network meta-analysis between ginseng oral administration and ginseng injections Network meta-analysis to compare the relative efficacy of ginseng oral administration and ginseng injections was done (Fig. 5 and Table 4). The order was ginseng injections, ginseng oral administration and placebo from high efficacy to low. Network meta-analysis between ginseng oral administration and ginseng injections. SMD for comparisons are in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval. SMD = standardized mean difference. Network of 2 types of administration routes on CRF. CRF: cancer-related fatigue. Network meta-analysis to compare the relative efficacy of ginseng oral administration and ginseng injections was done (Fig. 5 and Table 4). The order was ginseng injections, ginseng oral administration and placebo from high efficacy to low. Network meta-analysis between ginseng oral administration and ginseng injections. SMD for comparisons are in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval. SMD = standardized mean difference. Network of 2 types of administration routes on CRF. CRF: cancer-related fatigue. 3.8. 
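As a quick arithmetic cross-check using only the numbers quoted above (not any additional data), a pooled SMD of 0.40 with a 95% CI of [0.29, 0.51] implies a standard error of roughly (0.51 − 0.29) / 3.92 ≈ 0.056 and a z statistic of about 7.1, which is consistent with the reported P < .00001:

```python
# Consistency check based solely on the pooled estimate quoted in the text:
# SMD 0.40 with 95% CI [0.29, 0.51] -> implied SE, z, and two-sided P value.
from scipy import stats

smd, ci_low, ci_high = 0.40, 0.29, 0.51
se = (ci_high - ci_low) / (2 * 1.96)   # ≈ 0.056
z = smd / se                           # ≈ 7.1
p = 2 * stats.norm.sf(z)               # well below .00001
print(f"SE ≈ {se:.3f}, z ≈ {z:.2f}, two-sided P ≈ {p:.1e}")
```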
Efficacy of ginseng oral administration on CRF Seven papers reported the efficacy of ginseng oral administration on CRF and 1 was excluded because of heterogeneity.[16] Six studies included 862 patients, 434 received ginseng and 428 received placebo.[5,15,19–22] Efficacy was assessed between 29 days and 12 weeks. The pooled SMD was 0.29 (95% CI [0.15–0.42], P < .0001) (Fig. 6). If heterogeneity was not taken into account, the pooled SMD was 0.46 (95% CI [0.10–0.82], P = .01) (Fig. S5, http://links.lww.com/MD/H741). Those indicate that ginseng oral administration can alleviate CRF. Forest plot of ginseng oral administration on CRF. CRF: cancer-related fatigue. Efficacies of different doses were explored. Sixteen patients were treated with 1000 mg/day ginseng and 16 with placebo in 1 study.[20] The pooled SMD was -0.29 (95% CI [−0.98 to 0.41], P = .42). Three hundred and forty-seven patients were treated with 2000 mg/d ginseng and 341 with placebo in 3 studies.[5,15,19] The pooled SMD was 0.35 (95% CI [0.19–0.50], P < .00001). Fifteen patients were treated with 3000 mg/d ginseng and 15 with placebo in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Those indicate that 2000 or 3000 mg/d ginseng should be effective to treat CRF. There is no significant difference between 3000 mg/d ginseng group and control group and that might be due to the small sample size (Fig. 7). Network meta-analysis to compare the relative efficacy of different doses was done. The order was 2000 mg/d, 3000 mg/d, placebo and 1000 mg/d from high efficacy to low. But there was no significant difference (Fig. 8 and Table 5). Network meta-analysis of different dose. SMD for comparisons are in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval. SMD = standardized mean difference. Forest plot of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue. Network of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue. Efficacies of different duration were also explored. One hundred and six patients, 52 in ginseng group and 54 in control, were treated for 2 weeks in 1 study.[22] The SMD was 0.10 (95% CI [−0.28 to 0.48], P = .62). Four hundred and twelve patients, 203 in ginseng group and 209 in control, were treated for 4 weeks in 2 study.[5,22] The pooled SMD was 0.20 (95% CI [0.00–0.39], P = .05). Seven hundred and twenty patients, 363 in ginseng group and 357 in control, were treated for 8 weeks in 4 study.[5,15,19,20] The pooled SMD was 0.32 (95% CI [0.17–0.46], P < .0001). Thirty patients, 15 in ginseng group and 15 in control, were treated for 12 weeks in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Three hundred and thirty patients, 161 in ginseng group and 169 in control, were treated for 16 weeks in 1 study. The SMD was 0.24 (95% CI [0.02–0.45], P = .03). It seems that 4 to 8 weeks is enough for ginseng oral administration to alleviate CRF (Fig. 9). Forest plot of different duration of ginseng oral administration on CRF. CRF: cancer-related fatigue. Seven papers reported the efficacy of ginseng oral administration on CRF and 1 was excluded because of heterogeneity.[16] Six studies included 862 patients, 434 received ginseng and 428 received placebo.[5,15,19–22] Efficacy was assessed between 29 days and 12 weeks. The pooled SMD was 0.29 (95% CI [0.15–0.42], P < .0001) (Fig. 6). 
If heterogeneity was not taken into account, the pooled SMD was 0.46 (95% CI [0.10–0.82], P = .01) (Fig. S5, http://links.lww.com/MD/H741). Those indicate that ginseng oral administration can alleviate CRF. Forest plot of ginseng oral administration on CRF. CRF: cancer-related fatigue. Efficacies of different doses were explored. Sixteen patients were treated with 1000 mg/day ginseng and 16 with placebo in 1 study.[20] The pooled SMD was -0.29 (95% CI [−0.98 to 0.41], P = .42). Three hundred and forty-seven patients were treated with 2000 mg/d ginseng and 341 with placebo in 3 studies.[5,15,19] The pooled SMD was 0.35 (95% CI [0.19–0.50], P < .00001). Fifteen patients were treated with 3000 mg/d ginseng and 15 with placebo in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Those indicate that 2000 or 3000 mg/d ginseng should be effective to treat CRF. There is no significant difference between 3000 mg/d ginseng group and control group and that might be due to the small sample size (Fig. 7). Network meta-analysis to compare the relative efficacy of different doses was done. The order was 2000 mg/d, 3000 mg/d, placebo and 1000 mg/d from high efficacy to low. But there was no significant difference (Fig. 8 and Table 5). Network meta-analysis of different dose. SMD for comparisons are in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval. SMD = standardized mean difference. Forest plot of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue. Network of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue. Efficacies of different duration were also explored. One hundred and six patients, 52 in ginseng group and 54 in control, were treated for 2 weeks in 1 study.[22] The SMD was 0.10 (95% CI [−0.28 to 0.48], P = .62). Four hundred and twelve patients, 203 in ginseng group and 209 in control, were treated for 4 weeks in 2 study.[5,22] The pooled SMD was 0.20 (95% CI [0.00–0.39], P = .05). Seven hundred and twenty patients, 363 in ginseng group and 357 in control, were treated for 8 weeks in 4 study.[5,15,19,20] The pooled SMD was 0.32 (95% CI [0.17–0.46], P < .0001). Thirty patients, 15 in ginseng group and 15 in control, were treated for 12 weeks in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Three hundred and thirty patients, 161 in ginseng group and 169 in control, were treated for 16 weeks in 1 study. The SMD was 0.24 (95% CI [0.02–0.45], P = .03). It seems that 4 to 8 weeks is enough for ginseng oral administration to alleviate CRF (Fig. 9). Forest plot of different duration of ginseng oral administration on CRF. CRF: cancer-related fatigue. 3.9. Efficacy of ginseng injections on CRF Four types of injections (Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection) whose main components are ginseng extractions have been approved by Chinese National Medical Products Administration to be used in clinic. Fifteen studies reported these 4 on CRF and 5 papers were excluded because of heterogeneity.[17,18,23–25] Three hundred and forty-four patients were in the ginseng injection group and 338 in the control group. Efficacy was assessed between 2 weeks and 16 weeks. The pooled SMD was 0.74 (95% CI [0.59–0.90], P < .00001) (Fig. 10). If heterogeneity was not taken into account, the pooled SMD was 1.08 (95% CI [0.73–1.44], P < .00001) (Fig. 
S6, http://links.lww.com/MD/H742). Those indicate that ginseng injections can alleviate CRF. Efficacy of ginseng injections on CRF. CRF: cancer-related fatigue. Efficacies of the 4 types of injections were also explored, respectively. The SMD of Kangai injection, Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.12 (95%CI [0.67–1.58], P < .00001), 1.54 (95% CI [−0.79 to 3.87], P = .20), 1.02 (95% CI [0.71–1.33], P < .00001), 1.00 (95% CI [0.42–1.57], P = .0007), respectively (Fig. S6, http://links.lww.com/MD/H742). Those indicate that Kangai injection, Shenmai injection or Shenqi Fuzheng injection can alleviate CRF. Four types of injections (Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection) whose main components are ginseng extractions have been approved by Chinese National Medical Products Administration to be used in clinic. Fifteen studies reported these 4 on CRF and 5 papers were excluded because of heterogeneity.[17,18,23–25] Three hundred and forty-four patients were in the ginseng injection group and 338 in the control group. Efficacy was assessed between 2 weeks and 16 weeks. The pooled SMD was 0.74 (95% CI [0.59–0.90], P < .00001) (Fig. 10). If heterogeneity was not taken into account, the pooled SMD was 1.08 (95% CI [0.73–1.44], P < .00001) (Fig. S6, http://links.lww.com/MD/H742). Those indicate that ginseng injections can alleviate CRF. Efficacy of ginseng injections on CRF. CRF: cancer-related fatigue. Efficacies of the 4 types of injections were also explored, respectively. The SMD of Kangai injection, Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.12 (95%CI [0.67–1.58], P < .00001), 1.54 (95% CI [−0.79 to 3.87], P = .20), 1.02 (95% CI [0.71–1.33], P < .00001), 1.00 (95% CI [0.42–1.57], P = .0007), respectively (Fig. S6, http://links.lww.com/MD/H742). Those indicate that Kangai injection, Shenmai injection or Shenqi Fuzheng injection can alleviate CRF. 3.10. Efficacy of ginseng oral administration and ginseng injections on emotional fatigue Ten studies reported the efficacy of ginseng oral administration or ginseng injections on emotional fatigue and 6 were excluded because of heterogeneity.[7,16,18,23,26,27] The number of patients in the experimental group was 286 and 277 in the control group. The pooled SMD was 0.12 (95% CI [−0.04 to 0.29], P = .15) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was 0.67 (95% CI [0.13–1.21], P = .02) (Fig. S7, http://links.lww.com/MD/H744). It seems that whether ginseng could alleviate emotional fatigue is uncertain. Forest plot of ginseng oral administration and ginseng injections on emotional fatigue. Ginseng oral administration was employed in 3 studies and 1 study was excluded because of heterogeneity.[16] The pooled SMD was 0.10 (95% CI [−0.10 to 0.30], P = .32) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was −0.34 (95% CI [−1.12 to 0.43], P = .38) (Fig. S7, http://links.lww.com/MD/H744). Those indicate that ginseng oral administration may not alleviate emotional fatigue. Seven studies explored efficacies of ginseng injections on emotional fatigue and 3 were excluded because of heterogeneity. The pooled SMD of the 4 studies was 0.79 (95% CI [0.55–1.03], P < .00001) (Fig. 12). If heterogeneity was not taken into account, the pooled SMD was 1.12 (95% CI [0.50–1.74], P = .0004) (Fig. 
S8, http://links.lww.com/MD/H749).[7,18,23,26–29] Those results suggest that ginseng injections may be effective in alleviating emotional fatigue. Forest plot of ginseng injections on emotional fatigue. Efficacies of the 3 types of injections were also explored, respectively. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.16 (95%CI [−0.98 to 3.30], P = .29), 0.84 (95% CI [0.53–1.14], P < .00001), 1.34 (95% CI [0.13–2.55], P = .03), respectively (Fig. S8, http://links.lww.com/MD/H749). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate emotional fatigue. Ten studies reported the efficacy of ginseng oral administration or ginseng injections on emotional fatigue and 6 were excluded because of heterogeneity.[7,16,18,23,26,27] The number of patients in the experimental group was 286 and 277 in the control group. The pooled SMD was 0.12 (95% CI [−0.04 to 0.29], P = .15) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was 0.67 (95% CI [0.13–1.21], P = .02) (Fig. S7, http://links.lww.com/MD/H744). It seems that whether ginseng could alleviate emotional fatigue is uncertain. Forest plot of ginseng oral administration and ginseng injections on emotional fatigue. Ginseng oral administration was employed in 3 studies and 1 study was excluded because of heterogeneity.[16] The pooled SMD was 0.10 (95% CI [−0.10 to 0.30], P = .32) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was −0.34 (95% CI [−1.12 to 0.43], P = .38) (Fig. S7, http://links.lww.com/MD/H744). Those indicate that ginseng oral administration may not alleviate emotional fatigue. Seven studies explored efficacies of ginseng injections on emotional fatigue and 3 were excluded because of heterogeneity. The pooled SMD of the 4 studies was 0.79 (95% CI [0.55–1.03], P < .00001) (Fig. 12). If heterogeneity was not taken into account, the pooled SMD was 1.12 (95% CI [0.50–1.74], P = .0004) (Fig. S8, http://links.lww.com/MD/H749).[7,18,23,26–29] Those results suggest that ginseng injections may be effective in alleviating emotional fatigue. Forest plot of ginseng injections on emotional fatigue. Efficacies of the 3 types of injections were also explored, respectively. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.16 (95%CI [−0.98 to 3.30], P = .29), 0.84 (95% CI [0.53–1.14], P < .00001), 1.34 (95% CI [0.13–2.55], P = .03), respectively (Fig. S8, http://links.lww.com/MD/H749). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate emotional fatigue. 3.11. Efficacy of ginseng oral administration and ginseng injections on cognitive fatigue. Cognitive fatigue is a psychological state characterized by the subjective feelings of tiredness, and impaired ability to think, memorize, and concentrate.[30,31] Cognitive fatigue is strongly associated with CRF. Eight studies reported the efficacy of ginseng oral administration and ginseng injections on cognitive fatigue. Four studies, 1 on ginseng oral administration and 3 on ginseng injections, were excluded because of heterogeneity.[5,18,23,29] Finally, 4 studies on ginseng injections were included. The pooled SMD was 0.72 (95% CI [0.48–0.96], P < .00001) (Fig. 13). If heterogeneity was not taken into account, the pooled SMD of all 8 studies was 0.80 (95% CI [0.31–1.29], P = .001) (Fig. S9, http://links.lww.com/MD/H750) and the pooled SMD of 7 studies on ginseng injections was 0.93 (95% CI [0.44–1.42], P = .0002) (Fig. 
S10, http://links.lww.com/MD/H751)..[7,16,17,23,26,27,29,32] Ginseng oral administration was employed in 1study and the SMD was -0.04 (95% CI [−0.28 to 0.20], P = .76) (Fig. S9, http://links.lww.com/MD/H750). Taking together, those results suggest that ginseng injections may be effective in alleviating cognitive fatigue while ginseng oral administration may not be beneficial to cognitive fatigue relieving. Forest plot of ginseng injections on cognitive fatigue. Efficacies of the 3 types of injections were also explored, respectively. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 0.79 (95%CI [−0.50 to 2.09], P = .23), 0.83 (95% CI [0.53–1.14], P < .00001), 1.13 (95% CI [0.00–2.25], P = .05), respectively (Fig. S10, http://links.lww.com/MD/H751). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate cognitive fatigue. Cognitive fatigue is a psychological state characterized by the subjective feelings of tiredness, and impaired ability to think, memorize, and concentrate.[30,31] Cognitive fatigue is strongly associated with CRF. Eight studies reported the efficacy of ginseng oral administration and ginseng injections on cognitive fatigue. Four studies, 1 on ginseng oral administration and 3 on ginseng injections, were excluded because of heterogeneity.[5,18,23,29] Finally, 4 studies on ginseng injections were included. The pooled SMD was 0.72 (95% CI [0.48–0.96], P < .00001) (Fig. 13). If heterogeneity was not taken into account, the pooled SMD of all 8 studies was 0.80 (95% CI [0.31–1.29], P = .001) (Fig. S9, http://links.lww.com/MD/H750) and the pooled SMD of 7 studies on ginseng injections was 0.93 (95% CI [0.44–1.42], P = .0002) (Fig. S10, http://links.lww.com/MD/H751)..[7,16,17,23,26,27,29,32] Ginseng oral administration was employed in 1study and the SMD was -0.04 (95% CI [−0.28 to 0.20], P = .76) (Fig. S9, http://links.lww.com/MD/H750). Taking together, those results suggest that ginseng injections may be effective in alleviating cognitive fatigue while ginseng oral administration may not be beneficial to cognitive fatigue relieving. Forest plot of ginseng injections on cognitive fatigue. Efficacies of the 3 types of injections were also explored, respectively. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 0.79 (95%CI [−0.50 to 2.09], P = .23), 0.83 (95% CI [0.53–1.14], P < .00001), 1.13 (95% CI [0.00–2.25], P = .05), respectively (Fig. S10, http://links.lww.com/MD/H751). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate cognitive fatigue. 3.12. The effect of cancer types on efficacy of ginseng and ginseng injections on CRF CRF may associate with cancer types.[16] The effect of cancer types on efficacy of ginseng on CRF was explored here. Nine studies evaluated the efficacy of ginseng on the treatment of lung cancer.[7,16,17,23,24,26,27,29,32] Of the total 725 participants, 519 had undergone chemotherapy prior to participation, 245 were non-small cell lung cancer patients, 86 were lung adenocarcinoma patients and 425 were advanced lung cancer patients. If heterogeneity was not taken into account, those results supported the benefit of Red ginseng, Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection on fatigue relief. Those results support CRF improvement of ginseng injections on lung cancer including the pathological types of non-small cell lung cancer, and TNM staging of advanced lung cancer. 
At the same time, ginseng may also benefit patients with colorectal cancer and nasopharyngeal carcinoma, and may have little effect on patients with head and neck cancer (Table 6). Efficacy of ginseng and ginseng injections on CRF alleviating by various factors. CRF may associate with cancer types.[16] The effect of cancer types on efficacy of ginseng on CRF was explored here. Nine studies evaluated the efficacy of ginseng on the treatment of lung cancer.[7,16,17,23,24,26,27,29,32] Of the total 725 participants, 519 had undergone chemotherapy prior to participation, 245 were non-small cell lung cancer patients, 86 were lung adenocarcinoma patients and 425 were advanced lung cancer patients. If heterogeneity was not taken into account, those results supported the benefit of Red ginseng, Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection on fatigue relief. Those results support CRF improvement of ginseng injections on lung cancer including the pathological types of non-small cell lung cancer, and TNM staging of advanced lung cancer. At the same time, ginseng may also benefit patients with colorectal cancer and nasopharyngeal carcinoma, and may have little effect on patients with head and neck cancer (Table 6). Efficacy of ginseng and ginseng injections on CRF alleviating by various factors. 3.13. Incidences of treatment-related adverse events between different drugs and cancer types Adverse events were collected and summarized in Table 7. It seems that ginseng has no discernible adverse reactions. Adverse events. Adverse events were collected and summarized in Table 7. It seems that ginseng has no discernible adverse reactions. Adverse events.
5. Conclusion
Data curation: Tianwen Hou, Jing Huang. Formal analysis: Jing Huang. Methodology: Tianwen Hou, Jing Huang. Project administration: Shijiang Sun. Resources: Xueqi Wang. Software: Huijing Li, Xueqi Wang, Xi Liang. Supervision: Jianming He. Validation: Xi Liang, Haiyan Bai. Visualization: Tianhe Zhao, Jingnan Hu, Jianli Ge. Writing – original draft: Huijing Li. Writing – review & editing: Jianming He.
[ "2.1. Study registration", "2.2. Literature search", "2.3. Inclusion/exclusion criteria", "2.4. Selection of relevant studies and quality assessment", "2.5. Outcomes of interest", "2.6. Statistical analysis", "2.7. Assessment of heterogeneity", "2.8. Assessment of reporting biases", "2.9. Summarizing and interpreting results", "3.1. Selection and general characteristics of the included studies", "3.2. Methodological quality of studies", "3.3. Outcome of heterogeneity text", "3.4. Outcome of publication bias text", "3.5. Outcome of GRADE rating", "3.6. Efficacy of ginseng oral administration and ginseng injections on CRF", "3.7. Network meta-analysis between ginseng oral administration and ginseng injections", "3.8. Efficacy of ginseng oral administration on CRF", "3.9. Efficacy of ginseng injections on CRF", "3.10. Efficacy of ginseng oral administration and ginseng injections on emotional fatigue", "3.11. Efficacy of ginseng oral administration and ginseng injections on cognitive fatigue.", "3.12. The effect of cancer types on efficacy of ginseng and ginseng injections on CRF", "3.13. Incidences of treatment-related adverse events between different drugs and cancer types", "5. Conclusion" ]
[ "The protocol of this meta-analysis is registered in PROSPERO, under the registration number CRD42021228094 on February 18, 2021. It is available at http://www.crd.york.ac.uk/PROSPERO/.", "PubMed, Cochrane Library, China National Knowledge Infrastructure were systematically searched from the database inception to May 24, 2021. The following keywords were used for the Chinese database search:(shen or renshen or gaolishen or xiyangshen or hongshen or baishen or shuishen or yeshanshen or Kangai zhusheye or Shenqi Fuzheng zhusheye or Shenfu zhusheye or Shenmai zhusheye) and (pilao or pifa or pijuan or pibei or juandai or fali). The following keywords were used for the English database search: (Ginseng or Ginsengs or P. quinquefolius or Panax or ginsenosides or ginsenoside or Kangai injection or Shenqi fuzheng injection or Shenfu injection or Shenmai injection) and (fatigue or lethargy or exhaustion or tiredness or weariness or physical performance or exercise performance) (Table S1, http://links.lww.com/MD/H734 for details).", "For inclusion in the review, studies were required to meet the following criteria: Experimental design: randomized controlled trials; Type of participants: subjects with CRF, regardless of age, sex, type of cancer, pathological type, cancer treatment; Type of interventions: drugs with ginseng or ginseng injections; Control: unlimited treatment method; Language types: Chinese and English studies. Studies without enough data (The duration of treatment or fatigue score is unknown) or studies whose participants were subjectively selected were excluded from analysis.", "Two reviewers independently extracted data based on the predetermined criteria, and discrepancies were resolved by consensus. From studies included in the final analysis, the following data were extracted: the name of first author, year of publication, geographic location, types of cancer, basic strategies for treatment of cancer (surgery, chemotherapy or radiotherapy), species and dose of ginseng and ginseng injections in the intervention group, treatment regimen of the control group, duration of treatment, tool name used to assess CRF and type of dimensions used for main outcomes measured. The Cochrane Handbook was used for systematic reviews of interventions to evaluate the quality of the included studies.[9,10] This approach requires studies to be assessed across 6 special domains that were subjected to potential bias, including sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting, and other sources of bias. There are 3 biases of judgment: Yes (Low risk), No (High risk), Not clear (Unclear risk).", "The primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. The secondary outcome was emotional or cognitive fatigue alleviated by ginseng and ginseng injections.", "RevMan5.3 (Review Manager (RevMan), Computer program, version 5.3. Cochrane Collaboration, Copenhagen, Denmark) was used for our statistical analysis.[9,10] STATA v.16.0 (College Station, TX) was used for network meta-analysis. 
Because clinical indices to assess clinical response/remission differed among studies, SMD was used as the main effect size to calculate those differences.[11] The calculation formula of SMD is as follows:\nSMD = (M1 − M2)/pooled SD,\nwhere M1 is the mean of fatigue reduction in the intervention group, M2 is the mean of fatigue reduction in the control group, and pooled SD is the pooled intervention-specific standard deviation.[8] If the value of SMD is positive and P < .05, it shows that the effect of the intervention group is better than that of the control group.", "The Chi² test combined with the I² statistic was used to test the heterogeneity between studies. If P < .1 or I² > 50%, it suggested significant heterogeneity and we would use a random-effects model for meta-analysis; otherwise, a fixed-effects model would be used. Wherever feasible, a meta-regression analysis would be conducted to explore the source of significant heterogeneity. Sensitivity analyses would be undertaken to assess the robustness of our findings by excluding studies with high risk of bias.[12]", "Funnel plots were constructed to assess reporting bias when more than 10 trials were available. Egger’s regression intercept was calculated by STATA v.16.0 to test for asymmetry. A 2-tailed P value < .05 was considered statistically significant.[13]", "The GRADE approach was used to interpret findings. We assessed the outcomes with reference to the overall risk of bias of the included studies, the inconsistency of the results, the directness of the evidence, the precision of the estimates, and the risk of publication bias. The quality of the body of evidence for each assessable outcome was categorized as follows: no reason to downgrade the quality of evidence, serious reason (downgraded by one) or very serious reason (downgraded by two).[14]", "A total of 1764 studies were identified from the 3 electronic databases. Seventy-one duplicate studies were excluded by using Endnote X9. On review of the title and abstract, 1631 studies were excluded. After further careful review of the full text of 62 articles, a further 40 studies were excluded. Finally, 22 papers with a total of 2086 participants were included. Patients were treated with ginseng oral administration in 7 papers and with ginseng injections in 15 (Fig. 1 and Table 1). They were published between 2010 and 2020 and were conducted in China (n = 16), America (n = 3), Korea (n = 2) and Italy (n = 1). Six studies were randomized, double-blind, placebo-controlled trials, and the remaining 16 studies used a randomized design (Table 1). The detailed information is summarized in Table 1, Table S2, http://links.lww.com/MD/H735 and Table S3, http://links.lww.com/MD/H736.\nCharacteristics of studies included in the systematic review and meta-analysis.\nBFI = brief fatigue inventory, CFS = cancer fatigue scale, FACIT-F = functional assessment of chronic illness therapy-fatigue subscale, FSI = fatigue symptom inventory, MFSI-SF = multidimensional fatigue symptom inventory-short form, PFS = piper fatigue scale.\nFlow chart.", "All 22 included studies were randomized controlled trials. Two were dynamically allocated by computer,[5,15] and 4 used the random number table method.[7,16–18] Those 6 were rated as low risk. One study used stratified block randomization and its risk of bias was unclear.[19] The other 15 studies did not describe the method of random sequence generation. 
Of 6 double-blind studies, only 1 mentioned the double-blind method but did not describe the specific implementation process.[20] Two studies were multicenter and 6 used placebo as control.[5,19] All included studies had low risk of bias regarding incomplete outcome data and selective reporting, and none reported a conflict of interest or early termination of the trial (Figs. 2 and 3).\nRisk of bias graph.\nRisk of bias summary.", "For the primary outcome, all intervention groups’ data were combined, regardless of the types and stages of cancer and so on. Consequently, our analyses were subject to a high potential risk of between-study heterogeneity. A meta-regression was conducted, and we found that placebo use may be a source of heterogeneity. In the 6 studies that used placebo, ginseng oral administration was used in all 6 and ginseng injections in none, so the analyses were conducted separately. We did not find other sources of heterogeneity (Table 2). Sensitivity analyses were undertaken by excluding studies with high risk of bias. Comparing the 2 results, we still cannot explain the source of heterogeneity (Fig. S1, http://links.lww.com/MD/H737; S2, http://links.lww.com/MD/H738; S3, http://links.lww.com/MD/H739). Therefore, results with and without heterogeneity taken into account are both presented and discussed in the manuscript. The same applies to the secondary outcome (Table 2).\nOutcome of meta-regression.\nCRF = cancer-related fatigue.", "Egger linear regression was conducted to test the symmetry of funnel plots. No publication bias was found for any outcome (Fig. S4, http://links.lww.com/MD/H740; S6, http://links.lww.com/MD/H742; S7, http://links.lww.com/MD/H744).", "See Table 3 for details.\nOutcome of GRADE rating.\nCI = confidence interval, CRF = cancer-related fatigue, SMD = standardized mean difference.\nThere was substantial heterogeneity among studies; meta-regression and sensitivity analyses of each outcome were conducted, but we did not find the sources of heterogeneity.\nSmall sample size and wide confidence interval.", "Fatigue was reported in 22 studies and 10 were excluded because of heterogeneity. Of the 12 included studies, ginseng was used in 5 studies and ginseng injections in 7. The number of patients was 656 in the ginseng group and 646 in the control group. Efficacy was assessed between 2 weeks and 12 weeks. The pooled SMD was 0.40 (95% confidence interval (95% CI) [0.29–0.51], P < .00001) (Fig. 4). If heterogeneity was not taken into account, the pooled SMD was 0.89 (95% CI [0.60–1.18], P < .00001) (Fig. S4, http://links.lww.com/MD/H740). Those indicate that ginseng can alleviate CRF.\nEfficacy of ginseng oral administration and ginseng injections on CRF. CRF: cancer-related fatigue.", "A network meta-analysis to compare the relative efficacy of ginseng oral administration and ginseng injections was performed (Fig. 5 and Table 4). The order was ginseng injections, ginseng oral administration and placebo from highest efficacy to lowest.\nNetwork meta-analysis between ginseng oral administration and ginseng injections.\nSMDs for comparisons are in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors the row-defining treatment. Numbers in parentheses indicate 95% confidence intervals.\nSMD = standardized mean difference.\nNetwork of 2 types of administration routes on CRF. 
CRF: cancer-related fatigue.", "Seven papers reported the efficacy of ginseng oral administration on CRF and 1 was excluded because of heterogeneity.[16] Six studies included 862 patients, 434 received ginseng and 428 received placebo.[5,15,19–22] Efficacy was assessed between 29 days and 12 weeks. The pooled SMD was 0.29 (95% CI [0.15–0.42], P < .0001) (Fig. 6). If heterogeneity was not taken into account, the pooled SMD was 0.46 (95% CI [0.10–0.82], P = .01) (Fig. S5, http://links.lww.com/MD/H741). Those indicate that ginseng oral administration can alleviate CRF.\nForest plot of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nEfficacies of different doses were explored. Sixteen patients were treated with 1000 mg/day ginseng and 16 with placebo in 1 study.[20] The pooled SMD was -0.29 (95% CI [−0.98 to 0.41], P = .42). Three hundred and forty-seven patients were treated with 2000 mg/d ginseng and 341 with placebo in 3 studies.[5,15,19] The pooled SMD was 0.35 (95% CI [0.19–0.50], P < .00001). Fifteen patients were treated with 3000 mg/d ginseng and 15 with placebo in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Those indicate that 2000 or 3000 mg/d ginseng should be effective to treat CRF. There is no significant difference between 3000 mg/d ginseng group and control group and that might be due to the small sample size (Fig. 7). Network meta-analysis to compare the relative efficacy of different doses was done. The order was 2000 mg/d, 3000 mg/d, placebo and 1000 mg/d from high efficacy to low. But there was no significant difference (Fig. 8 and Table 5).\nNetwork meta-analysis of different dose.\nSMD for comparisons are in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval.\nSMD = standardized mean difference.\nForest plot of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nNetwork of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nEfficacies of different duration were also explored. One hundred and six patients, 52 in ginseng group and 54 in control, were treated for 2 weeks in 1 study.[22] The SMD was 0.10 (95% CI [−0.28 to 0.48], P = .62). Four hundred and twelve patients, 203 in ginseng group and 209 in control, were treated for 4 weeks in 2 study.[5,22] The pooled SMD was 0.20 (95% CI [0.00–0.39], P = .05). Seven hundred and twenty patients, 363 in ginseng group and 357 in control, were treated for 8 weeks in 4 study.[5,15,19,20] The pooled SMD was 0.32 (95% CI [0.17–0.46], P < .0001). Thirty patients, 15 in ginseng group and 15 in control, were treated for 12 weeks in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Three hundred and thirty patients, 161 in ginseng group and 169 in control, were treated for 16 weeks in 1 study. The SMD was 0.24 (95% CI [0.02–0.45], P = .03). It seems that 4 to 8 weeks is enough for ginseng oral administration to alleviate CRF (Fig. 9).\nForest plot of different duration of ginseng oral administration on CRF. CRF: cancer-related fatigue.", "Four types of injections (Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection) whose main components are ginseng extractions have been approved by Chinese National Medical Products Administration to be used in clinic. 
Fifteen studies reported the effects of these 4 injections on CRF and 5 papers were excluded because of heterogeneity.[17,18,23–25] Three hundred and forty-four patients were in the ginseng injection group and 338 in the control group. Efficacy was assessed between 2 weeks and 16 weeks. The pooled SMD was 0.74 (95% CI [0.59–0.90], P < .00001) (Fig. 10). If heterogeneity was not taken into account, the pooled SMD was 1.08 (95% CI [0.73–1.44], P < .00001) (Fig. S6, http://links.lww.com/MD/H742). Those indicate that ginseng injections can alleviate CRF.\nEfficacy of ginseng injections on CRF. CRF: cancer-related fatigue.\nEfficacies of the 4 types of injections were also explored separately. The SMD of Kangai injection, Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.12 (95% CI [0.67–1.58], P < .00001), 1.54 (95% CI [−0.79 to 3.87], P = .20), 1.02 (95% CI [0.71–1.33], P < .00001), 1.00 (95% CI [0.42–1.57], P = .0007), respectively (Fig. S6, http://links.lww.com/MD/H742). Those indicate that Kangai injection, Shenmai injection or Shenqi Fuzheng injection can alleviate CRF.", "Ten studies reported the efficacy of ginseng oral administration or ginseng injections on emotional fatigue and 6 were excluded because of heterogeneity.[7,16,18,23,26,27] The number of patients was 286 in the experimental group and 277 in the control group. The pooled SMD was 0.12 (95% CI [−0.04 to 0.29], P = .15) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was 0.67 (95% CI [0.13–1.21], P = .02) (Fig. S7, http://links.lww.com/MD/H744). It therefore remains uncertain whether ginseng can alleviate emotional fatigue.\nForest plot of ginseng oral administration and ginseng injections on emotional fatigue.\nGinseng oral administration was employed in 3 studies and 1 study was excluded because of heterogeneity.[16] The pooled SMD was 0.10 (95% CI [−0.10 to 0.30], P = .32) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was −0.34 (95% CI [−1.12 to 0.43], P = .38) (Fig. S7, http://links.lww.com/MD/H744). Those indicate that ginseng oral administration may not alleviate emotional fatigue.\nSeven studies explored efficacies of ginseng injections on emotional fatigue and 3 were excluded because of heterogeneity. The pooled SMD of the 4 studies was 0.79 (95% CI [0.55–1.03], P < .00001) (Fig. 12). If heterogeneity was not taken into account, the pooled SMD was 1.12 (95% CI [0.50–1.74], P = .0004) (Fig. S8, http://links.lww.com/MD/H749).[7,18,23,26–29] Those results suggest that ginseng injections may be effective in alleviating emotional fatigue.\nForest plot of ginseng injections on emotional fatigue.\nEfficacies of the 3 types of injections were also explored separately. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.16 (95% CI [−0.98 to 3.30], P = .29), 0.84 (95% CI [0.53–1.14], P < .00001), 1.34 (95% CI [0.13–2.55], P = .03), respectively (Fig. S8, http://links.lww.com/MD/H749). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate emotional fatigue.", "Cognitive fatigue is a psychological state characterized by subjective feelings of tiredness and an impaired ability to think, memorize, and concentrate.[30,31] Cognitive fatigue is strongly associated with CRF. Eight studies reported the efficacy of ginseng oral administration and ginseng injections on cognitive fatigue. 
Four studies, 1 on ginseng oral administration and 3 on ginseng injections, were excluded because of heterogeneity.[5,18,23,29] Finally, 4 studies on ginseng injections were included. The pooled SMD was 0.72 (95% CI [0.48–0.96], P < .00001) (Fig. 13). If heterogeneity was not taken into account, the pooled SMD of all 8 studies was 0.80 (95% CI [0.31–1.29], P = .001) (Fig. S9, http://links.lww.com/MD/H750) and the pooled SMD of 7 studies on ginseng injections was 0.93 (95% CI [0.44–1.42], P = .0002) (Fig. S10, http://links.lww.com/MD/H751).[7,16,17,23,26,27,29,32] Ginseng oral administration was employed in 1 study and the SMD was −0.04 (95% CI [−0.28 to 0.20], P = .76) (Fig. S9, http://links.lww.com/MD/H750). Taken together, those results suggest that ginseng injections may be effective in alleviating cognitive fatigue while ginseng oral administration may not be beneficial in relieving cognitive fatigue.\nForest plot of ginseng injections on cognitive fatigue.\nEfficacies of the 3 types of injections were also explored separately. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 0.79 (95% CI [−0.50 to 2.09], P = .23), 0.83 (95% CI [0.53–1.14], P < .00001), 1.13 (95% CI [0.00–2.25], P = .05), respectively (Fig. S10, http://links.lww.com/MD/H751). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate cognitive fatigue.", "CRF may be associated with cancer type.[16] The effect of cancer types on the efficacy of ginseng on CRF was explored here. Nine studies evaluated the efficacy of ginseng in the treatment of lung cancer.[7,16,17,23,24,26,27,29,32] Of the total 725 participants, 519 had undergone chemotherapy prior to participation, 245 were non-small cell lung cancer patients, 86 were lung adenocarcinoma patients and 425 were advanced lung cancer patients. If heterogeneity was not taken into account, those results supported the benefit of Red ginseng, Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection on fatigue relief. Those results support the improvement of CRF by ginseng injections in lung cancer, including the pathological type of non-small cell lung cancer and the TNM stage of advanced lung cancer. At the same time, ginseng may also benefit patients with colorectal cancer and nasopharyngeal carcinoma, and may have little effect on patients with head and neck cancer (Table 6).\nEfficacy of ginseng and ginseng injections in alleviating CRF by various factors.", "Adverse events were collected and summarized in Table 7. It seems that ginseng has no discernible adverse reactions.\nAdverse events.", "Ginseng, administered orally or as ginseng injections, may improve CRF. Intravenous injection might be better than oral administration. It seems that ginseng injections may alleviate cognitive fatigue. No evidence was found to support that ginseng could alleviate emotional fatigue. More high-quality randomized, double-blind, placebo-controlled studies with homogeneous samples, large sample sizes and fixed protocols are warranted to identify the effectiveness of ginseng on CRF caused by specific types of cancer." ]
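The statistical-analysis entry above (Section 2.6) defines M1, M2 and the pooled SD but the displayed SMD formula was lost in extraction. The LaTeX below is a minimal reconstruction consistent with those definitions; the pooled-SD expression is the conventional two-group form and is an assumption, since the exact estimator used by RevMan 5.3 is not stated in the text.

```latex
% Standardized mean difference as implied by Section 2.6:
% M1, M2 = mean fatigue reduction in the intervention and control groups,
% SD_pooled = pooled standard deviation (conventional two-group form assumed here).
\[
  \mathrm{SMD} = \frac{M_1 - M_2}{SD_{\text{pooled}}},
  \qquad
  SD_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,SD_1^{2} + (n_2 - 1)\,SD_2^{2}}{n_1 + n_2 - 2}}
\]
```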
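Sections 2.6-2.7 describe pooling study-level SMDs with a fixed-effects model unless the heterogeneity test (P < .1) or I² > 50% triggers a random-effects model. The sketch below is an illustrative Python translation of that decision rule, not the authors' RevMan/STATA workflow; the DerSimonian-Laird tau² estimator and the toy input values are assumptions.

```python
# Illustrative sketch of the pooling rule in Sections 2.6-2.7 (not the authors' code).
# Per-study SMDs are combined with a fixed-effects (inverse-variance) model unless
# the heterogeneity test (P < .1) or I^2 > 50% switches to a random-effects model.
import numpy as np
from scipy import stats

def pool_smd(smd, var):
    """Pool standardized mean differences, switching models on heterogeneity."""
    smd, var = np.asarray(smd, float), np.asarray(var, float)
    w = 1.0 / var                                  # fixed-effects weights
    pooled_fixed = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - pooled_fixed) ** 2)      # Cochran's Q
    df = len(smd) - 1
    p_het = stats.chi2.sf(q, df)                   # heterogeneity P value
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    if p_het < 0.1 or i2 > 50.0:                   # random effects (DerSimonian-Laird)
        tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w = 1.0 / (var + tau2)
    pooled = np.sum(w * smd) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    return pooled, ci, i2, p_het

# Toy example with made-up effect sizes and variances (not data from the included trials).
print(pool_smd([0.40, 0.25, 0.90, 0.55], [0.02, 0.03, 0.05, 0.04]))
```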
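Section 2.8 reports Egger's regression intercept as the funnel-plot asymmetry test. The sketch below shows one common way to compute it, regressing the standardized effect (SMD/SE) on precision (1/SE) and testing the intercept; it is an illustration under that assumption, not the STATA routine the authors used, and the example numbers are made up.

```python
# Illustrative sketch of the Egger regression asymmetry test from Section 2.8
# (not the authors' STATA routine): a non-zero intercept suggests funnel-plot
# asymmetry, judged here with a 2-tailed P value.
import numpy as np
import statsmodels.api as sm

def egger_test(smd, se):
    smd, se = np.asarray(smd, float), np.asarray(se, float)
    y = smd / se                      # standardized effect size
    x = sm.add_constant(1.0 / se)     # precision plus an intercept column
    fit = sm.OLS(y, x).fit()
    return fit.params[0], fit.pvalues[0]   # intercept and its 2-tailed P value

# Toy example with made-up SMDs and standard errors (not data from the included trials).
intercept, p_value = egger_test([0.4, 0.3, 0.9, 0.6, 0.2],
                                [0.10, 0.15, 0.30, 0.20, 0.12])
print(intercept, p_value)
```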
[ null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Study registration", "2.2. Literature search", "2.3. Inclusion/exclusion criteria", "2.4. Selection of relevant studies and quality assessment", "2.5. Outcomes of interest", "2.6. Statistical analysis", "2.7. Assessment of heterogeneity", "2.8. Assessment of reporting biases", "2.9. Summarizing and interpreting results", "3. Results", "3.1. Selection and general characteristics of the included studies", "3.2. Methodological quality of studies", "3.3. Outcome of heterogeneity text", "3.4. Outcome of publication bias text", "3.5. Outcome of GRADE rating", "3.6. Efficacy of ginseng oral administration and ginseng injections on CRF", "3.7. Network meta-analysis between ginseng oral administration and ginseng injections", "3.8. Efficacy of ginseng oral administration on CRF", "3.9. Efficacy of ginseng injections on CRF", "3.10. Efficacy of ginseng oral administration and ginseng injections on emotional fatigue", "3.11. Efficacy of ginseng oral administration and ginseng injections on cognitive fatigue.", "3.12. The effect of cancer types on efficacy of ginseng and ginseng injections on CRF", "3.13. Incidences of treatment-related adverse events between different drugs and cancer types", "4. Discussion", "5. Conclusion", "Supplementary Material" ]
[ "Cancer-related fatigue (CRF) is one of the most common symptoms in patients with cancer. Up to 90% of patients who are under the active treatment suffer from CRF.[1] It can persist months or even years after treatment ends which interferes with usual functioning. Besides, fatigue is rarely an isolated symptom and most commonly occurs with other symptoms and signs, such as pain, emotional distress, anemia, and sleep disturbances, in symptom clusters.[2] Unlike typical fatigue, CRF cannot be relieved by additional rest, sleep, reducing physical activity, etc. On the contrary, exercise/physical activity is likely to be effective in ameliorating CRF. Psychological/psycho-education, mind/body wellness training, nutritional and dietary supplements may be effective, too. Unfortunately, these interventions yield, at most, moderate benefits in meta-analyses.[2] Until now, evidence to date indicates that synthetic drugs are less effective than non-pharmacologic intervention.[2]\nGinseng is the root of plants in the genus Panax, such as Korean ginseng (Panax ginseng C.A. Meyer), Japanese ginseng (Panax ginseng C.A. Meyer), and American ginseng (Panax quinquefolius L.). Red ginseng is a processed product of Asian ginseng (Panax ginseng C.A. Meyer.) by steaming and drying. Ginseng has been used to treat chronic fatigue as early as 2000 years ago in China. Nowadays, ginseng is not only used in China, but also sold and used in more than 35 countries, such as Japan, South Korea, North Korea, America.[3] Preclinical data supports that ginseng may be helpful for fatigue. Animal studies have reported that ginseng can improve the endurance and swimming duration time of mice.[4] Besides, ginseng is mentioned as a dietary supplement for CRF treatment in the NCCN Guidelines based on 1 randomized, double-blind clinical trial using American ginseng which indicated that American ginseng of 2000 mg improved CRF symptom.[5] There are several types of ginseng and they were deemed to have same active pharmaceutical ingredients, for example, Rg1 and Rb1. However, other types of ginseng were not discussed in the Guidelines.[6]\nIn addition to oral administration, several types of injections whose main components are ginseng extractions have been approved by Chinese National Medical Products Administration to be used in clinic. The common ones are Kangai injection, Shenfu injection, Shenmai injection, Shenqi Fuzheng injection. Kangai injection (China Food and Drug Administration approval number Z20026868) consists of the extracts from Astragalus membranaceus (Fisch.) (Bunge), Panax ginseng C.A. Meyer. and Sophora flavescens Aiton. Shenfu injection (China Food and Drug Administration approval number Z20043117) consists of the extracts from Red ginseng and Aconitum wilsonii Stapf ex Veitch.Shenmai injection (China Food and Drug Administration approval number Z2009364) consists of the extracts from Red ginseng and Ophiopogon japonicus (Linn. f.) Ker-Gawl.. Shenqi Fuzheng injection (China Food and Drug Administration approval number Z19990065) consists of the extracts from Codonopsis pilosula (Franch.) Nannf. and Astragalus Membranaceus (Fisch.) Bunge. Some studies have shown that ginseng injections are of great help in improving the quality of life and reducing the side effects of radiotherapy and chemotherapy in cancer patients.[7,8] Therefore, ginseng and ginseng injections are both potential drugs for the treatment of CRF but few studies put them together for analysis. 
Here, we employed standardized mean difference (SMD) to conduct a meta-analysis to evaluate the efficacy of ginseng and ginseng injections in the treatment of CRF and the quality of the evidence. Emotional and cognitive fatigue were evaluated, too. Besides, in our study, subgroup analyses were conducted to compare the efficacy of cancer types, cancer stages, basic strategies for treatment of cancer and so on.", "2.1. Study registration The protocol of this meta-analysis is registered in PROSPERO, under the registration number CRD42021228094 on February 18, 2021. It is available at http://www.crd.york.ac.uk/PROSPERO/.\nThe protocol of this meta-analysis is registered in PROSPERO, under the registration number CRD42021228094 on February 18, 2021. It is available at http://www.crd.york.ac.uk/PROSPERO/.\n2.2. Literature search PubMed, Cochrane Library, China National Knowledge Infrastructure were systematically searched from the database inception to May 24, 2021. The following keywords were used for the Chinese database search:(shen or renshen or gaolishen or xiyangshen or hongshen or baishen or shuishen or yeshanshen or Kangai zhusheye or Shenqi Fuzheng zhusheye or Shenfu zhusheye or Shenmai zhusheye) and (pilao or pifa or pijuan or pibei or juandai or fali). The following keywords were used for the English database search: (Ginseng or Ginsengs or P. quinquefolius or Panax or ginsenosides or ginsenoside or Kangai injection or Shenqi fuzheng injection or Shenfu injection or Shenmai injection) and (fatigue or lethargy or exhaustion or tiredness or weariness or physical performance or exercise performance) (Table S1, http://links.lww.com/MD/H734 for details).\nPubMed, Cochrane Library, China National Knowledge Infrastructure were systematically searched from the database inception to May 24, 2021. The following keywords were used for the Chinese database search:(shen or renshen or gaolishen or xiyangshen or hongshen or baishen or shuishen or yeshanshen or Kangai zhusheye or Shenqi Fuzheng zhusheye or Shenfu zhusheye or Shenmai zhusheye) and (pilao or pifa or pijuan or pibei or juandai or fali). The following keywords were used for the English database search: (Ginseng or Ginsengs or P. quinquefolius or Panax or ginsenosides or ginsenoside or Kangai injection or Shenqi fuzheng injection or Shenfu injection or Shenmai injection) and (fatigue or lethargy or exhaustion or tiredness or weariness or physical performance or exercise performance) (Table S1, http://links.lww.com/MD/H734 for details).\n2.3. Inclusion/exclusion criteria For inclusion in the review, studies were required to meet the following criteria: Experimental design: randomized controlled trials; Type of participants: subjects with CRF, regardless of age, sex, type of cancer, pathological type, cancer treatment; Type of interventions: drugs with ginseng or ginseng injections; Control: unlimited treatment method; Language types: Chinese and English studies. Studies without enough data (The duration of treatment or fatigue score is unknown) or studies whose participants were subjectively selected were excluded from analysis.\nFor inclusion in the review, studies were required to meet the following criteria: Experimental design: randomized controlled trials; Type of participants: subjects with CRF, regardless of age, sex, type of cancer, pathological type, cancer treatment; Type of interventions: drugs with ginseng or ginseng injections; Control: unlimited treatment method; Language types: Chinese and English studies. 
Studies without enough data (The duration of treatment or fatigue score is unknown) or studies whose participants were subjectively selected were excluded from analysis.\n2.4. Selection of relevant studies and quality assessment Two reviewers independently extracted data based on the predetermined criteria, and discrepancies were resolved by consensus. From studies included in the final analysis, the following data were extracted: the name of first author, year of publication, geographic location, types of cancer, basic strategies for treatment of cancer (surgery, chemotherapy or radiotherapy), species and dose of ginseng and ginseng injections in the intervention group, treatment regimen of the control group, duration of treatment, tool name used to assess CRF and type of dimensions used for main outcomes measured. The Cochrane Handbook was used for systematic reviews of interventions to evaluate the quality of the included studies.[9,10] This approach requires studies to be assessed across 6 special domains that were subjected to potential bias, including sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting, and other sources of bias. There are 3 biases of judgment: Yes (Low risk), No (High risk), Not clear (Unclear risk).\nTwo reviewers independently extracted data based on the predetermined criteria, and discrepancies were resolved by consensus. From studies included in the final analysis, the following data were extracted: the name of first author, year of publication, geographic location, types of cancer, basic strategies for treatment of cancer (surgery, chemotherapy or radiotherapy), species and dose of ginseng and ginseng injections in the intervention group, treatment regimen of the control group, duration of treatment, tool name used to assess CRF and type of dimensions used for main outcomes measured. The Cochrane Handbook was used for systematic reviews of interventions to evaluate the quality of the included studies.[9,10] This approach requires studies to be assessed across 6 special domains that were subjected to potential bias, including sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting, and other sources of bias. There are 3 biases of judgment: Yes (Low risk), No (High risk), Not clear (Unclear risk).\n2.5. Outcomes of interest The primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. The secondary outcome was emotional or cognitive fatigue alleviated by ginseng and ginseng injections.\nThe primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. The secondary outcome was emotional or cognitive fatigue alleviated by ginseng and ginseng injections.\n2.6. Statistical analysis RevMan5.3 (Review Manager (RevMan), Computer program, version 5.3. Cochrane Collaboration, Copenhagen, Denmark) was used for our statistical analysis.[9,10] STATA v.16.0 (College Station, TX) was used for network meta-analysis. 
Because clinical indices to assess clinical response/remission differed among studies, SMD was used as a main effect size to calculate those differences.[11] The calculation formula of SMD is as follows:\nWhere M1 is the mean of fatigue reduction in the intervention group, M2 is the mean of fatigue reduction in the control group, and pooled SD is a pooled intervention specific standard deviation.[8] If the value of SMD is positive and P < .05, it shows that the effect of the intervention group is better than that of the control group.\nRevMan5.3 (Review Manager (RevMan), Computer program, version 5.3. Cochrane Collaboration, Copenhagen, Denmark) was used for our statistical analysis.[9,10] STATA v.16.0 (College Station, TX) was used for network meta-analysis. Because clinical indices to assess clinical response/remission differed among studies, SMD was used as a main effect size to calculate those differences.[11] The calculation formula of SMD is as follows:\nWhere M1 is the mean of fatigue reduction in the intervention group, M2 is the mean of fatigue reduction in the control group, and pooled SD is a pooled intervention specific standard deviation.[8] If the value of SMD is positive and P < .05, it shows that the effect of the intervention group is better than that of the control group.\n2.7. Assessment of heterogeneity Chi[2] test combined with I²-test was used to test the heterogeneity between studies. If P < .1 or I² > 50%, it suggested significant heterogeneity and we would use random-effects model for meta-analysis; otherwise, a fixed-effects model would be used. Wherever feasible, a meta-regression analysis would be conducted to explore the source of significant heterogeneity. Sensitivity analyses would be undertaken to assess the robustness of our findings by excluding studies with high risk of bias.[12]\nChi[2] test combined with I²-test was used to test the heterogeneity between studies. If P < .1 or I² > 50%, it suggested significant heterogeneity and we would use random-effects model for meta-analysis; otherwise, a fixed-effects model would be used. Wherever feasible, a meta-regression analysis would be conducted to explore the source of significant heterogeneity. Sensitivity analyses would be undertaken to assess the robustness of our findings by excluding studies with high risk of bias.[12]\n2.8. Assessment of reporting biases Funnel plots were performed to assess reporting bias with more than 10 trials. Egger’s regression intercept was calculated by STATA v.16.0 to do a text of asymmetry. A 2-tailed P value < .05 was considered statistically significant.[13]\nFunnel plots were performed to assess reporting bias with more than 10 trials. Egger’s regression intercept was calculated by STATA v.16.0 to do a text of asymmetry. A 2-tailed P value < .05 was considered statistically significant.[13]\n2.9. Summarizing and interpreting results GRADE approach was used to interpret findings. We assessed the outcomes with reference to the overall risk of bias of the included studies, the inconsistency of the results, the directness of the evidence, the precision of the estimates, and the risk of publication bias. The quality of the body of evidence for each assessable outcome were categorized as follows: no reason to downgrade the quality of evidence, serious reason (downgraded by one) or very serious reason (downgraded by two).[14]\nGRADE approach was used to interpret findings. 
We assessed the outcomes with reference to the overall risk of bias of the included studies, the inconsistency of the results, the directness of the evidence, the precision of the estimates, and the risk of publication bias. The quality of the body of evidence for each assessable outcome were categorized as follows: no reason to downgrade the quality of evidence, serious reason (downgraded by one) or very serious reason (downgraded by two).[14]", "The protocol of this meta-analysis is registered in PROSPERO, under the registration number CRD42021228094 on February 18, 2021. It is available at http://www.crd.york.ac.uk/PROSPERO/.", "PubMed, Cochrane Library, China National Knowledge Infrastructure were systematically searched from the database inception to May 24, 2021. The following keywords were used for the Chinese database search:(shen or renshen or gaolishen or xiyangshen or hongshen or baishen or shuishen or yeshanshen or Kangai zhusheye or Shenqi Fuzheng zhusheye or Shenfu zhusheye or Shenmai zhusheye) and (pilao or pifa or pijuan or pibei or juandai or fali). The following keywords were used for the English database search: (Ginseng or Ginsengs or P. quinquefolius or Panax or ginsenosides or ginsenoside or Kangai injection or Shenqi fuzheng injection or Shenfu injection or Shenmai injection) and (fatigue or lethargy or exhaustion or tiredness or weariness or physical performance or exercise performance) (Table S1, http://links.lww.com/MD/H734 for details).", "For inclusion in the review, studies were required to meet the following criteria: Experimental design: randomized controlled trials; Type of participants: subjects with CRF, regardless of age, sex, type of cancer, pathological type, cancer treatment; Type of interventions: drugs with ginseng or ginseng injections; Control: unlimited treatment method; Language types: Chinese and English studies. Studies without enough data (The duration of treatment or fatigue score is unknown) or studies whose participants were subjectively selected were excluded from analysis.", "Two reviewers independently extracted data based on the predetermined criteria, and discrepancies were resolved by consensus. From studies included in the final analysis, the following data were extracted: the name of first author, year of publication, geographic location, types of cancer, basic strategies for treatment of cancer (surgery, chemotherapy or radiotherapy), species and dose of ginseng and ginseng injections in the intervention group, treatment regimen of the control group, duration of treatment, tool name used to assess CRF and type of dimensions used for main outcomes measured. The Cochrane Handbook was used for systematic reviews of interventions to evaluate the quality of the included studies.[9,10] This approach requires studies to be assessed across 6 special domains that were subjected to potential bias, including sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting, and other sources of bias. There are 3 biases of judgment: Yes (Low risk), No (High risk), Not clear (Unclear risk).", "The primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. The secondary outcome was emotional or cognitive fatigue alleviated by ginseng and ginseng injections.", "RevMan5.3 (Review Manager (RevMan), Computer program, version 5.3. 
Cochrane Collaboration, Copenhagen, Denmark) was used for our statistical analysis.[9,10] STATA v.16.0 (College Station, TX) was used for network meta-analysis. Because clinical indices to assess clinical response/remission differed among studies, SMD was used as a main effect size to calculate those differences.[11] The calculation formula of SMD is as follows:\nWhere M1 is the mean of fatigue reduction in the intervention group, M2 is the mean of fatigue reduction in the control group, and pooled SD is a pooled intervention specific standard deviation.[8] If the value of SMD is positive and P < .05, it shows that the effect of the intervention group is better than that of the control group.", "Chi[2] test combined with I²-test was used to test the heterogeneity between studies. If P < .1 or I² > 50%, it suggested significant heterogeneity and we would use random-effects model for meta-analysis; otherwise, a fixed-effects model would be used. Wherever feasible, a meta-regression analysis would be conducted to explore the source of significant heterogeneity. Sensitivity analyses would be undertaken to assess the robustness of our findings by excluding studies with high risk of bias.[12]", "Funnel plots were performed to assess reporting bias with more than 10 trials. Egger’s regression intercept was calculated by STATA v.16.0 to do a text of asymmetry. A 2-tailed P value < .05 was considered statistically significant.[13]", "GRADE approach was used to interpret findings. We assessed the outcomes with reference to the overall risk of bias of the included studies, the inconsistency of the results, the directness of the evidence, the precision of the estimates, and the risk of publication bias. The quality of the body of evidence for each assessable outcome were categorized as follows: no reason to downgrade the quality of evidence, serious reason (downgraded by one) or very serious reason (downgraded by two).[14]", "3.1. Selection and general characteristics of the included studies A total of 1764 studies were identified from the 3 electronic databases. Seventy-one duplicate studies were excluded by using Endnote X9. On review of the title and abstract, 1631 studies were excluded. After further careful review of 62 articles of the full text, a further 40 studies were excluded. Finally, 22 papers with a total of 2086 participants were included. Patients were treated with ginseng oral administration in 7 papers and with ginseng injections in 15 (Fig. 1 and Table 1). They were published between 2010 and 2020 and were conducted in China (n = 16), America (n = 3), Korea (n = 2) and Italy (n = 1). Six studies were randomized, double-blind, placebo-controlled design trials, and the rest of 16 studies used randomized design (Table 1). The detailed information was summarized in Table 1 and Table S2, http://links.lww.com/MD/H735 and 3, http://links.lww.com/MD/H736.\nCharacteristics of studies included in the systematic review and meta-analysis.\nBFI = brief fatigue inventory, CFS = cancer fatigue scale, FACIT-F = functional assessment of chronic illness therapy-fatigue subscale, FSI = fatigue symptom inventory, MFSI-SF = multidimensional fatigue symptom inventory-short form, PFS = piper fatigue scale.\nFlow chart.\nA total of 1764 studies were identified from the 3 electronic databases. Seventy-one duplicate studies were excluded by using Endnote X9. On review of the title and abstract, 1631 studies were excluded. 
After further careful review of 62 articles of the full text, a further 40 studies were excluded. Finally, 22 papers with a total of 2086 participants were included. Patients were treated with ginseng oral administration in 7 papers and with ginseng injections in 15 (Fig. 1 and Table 1). They were published between 2010 and 2020 and were conducted in China (n = 16), America (n = 3), Korea (n = 2) and Italy (n = 1). Six studies were randomized, double-blind, placebo-controlled design trials, and the rest of 16 studies used randomized design (Table 1). The detailed information was summarized in Table 1 and Table S2, http://links.lww.com/MD/H735 and 3, http://links.lww.com/MD/H736.\nCharacteristics of studies included in the systematic review and meta-analysis.\nBFI = brief fatigue inventory, CFS = cancer fatigue scale, FACIT-F = functional assessment of chronic illness therapy-fatigue subscale, FSI = fatigue symptom inventory, MFSI-SF = multidimensional fatigue symptom inventory-short form, PFS = piper fatigue scale.\nFlow chart.\n3.2. Methodological quality of studies All included 22 studies were randomized controlled trials. Two were dynamically allocated by computer,[5,15] and 4 used random number table method.[7,16–18] Those 6 were rated as low risk. One study used stratified block randomization allocation and bias risk was not clear.[19] The other 15 studies didn’t describe the method of random sequence generation. Of 6 double-blind studies, only 1 mentioned double-blind method but did not describe the specific implementation process.[20] Two studies were multicenter and 6 used placebo as control.[5,19] All included studies had low risk of bias regarding incomplete outcome data, had low risk of bias regarding selective reporting and none claimed conflict of interest, early termination of the trial (Figs. 2 and 3).\nRisk of bias graph.\nRisk of bias summary.\nAll included 22 studies were randomized controlled trials. Two were dynamically allocated by computer,[5,15] and 4 used random number table method.[7,16–18] Those 6 were rated as low risk. One study used stratified block randomization allocation and bias risk was not clear.[19] The other 15 studies didn’t describe the method of random sequence generation. Of 6 double-blind studies, only 1 mentioned double-blind method but did not describe the specific implementation process.[20] Two studies were multicenter and 6 used placebo as control.[5,19] All included studies had low risk of bias regarding incomplete outcome data, had low risk of bias regarding selective reporting and none claimed conflict of interest, early termination of the trial (Figs. 2 and 3).\nRisk of bias graph.\nRisk of bias summary.\n3.3. Outcome of heterogeneity text For the primary outcome, all intervention groups’ data was combined, regardless of the types and stages of cancer and so on. Consequently, our analyses were subject to high potential risk of between-study heterogeneity. A meta-regression was conducted, we found that placebo may be a source of heterogeneity. In 6 studies that used placebo, ginseng oral administration was in 6 and ginseng injections in 0, so analysis was conducted separately. We did not find other sources of heterogeneity (Table 2). Sensitivity analyses were undertaken by excluding studies with high risk of bias. Compare the 2 results, we still ca not explain the source of heterogeneity (Fig. S1, http://links.lww.com/MD/H737;2, http://links.lww.com/MD/H738;3, http://links.lww.com/MD/H739). 
Therefore, whether heterogeneity taken into account or not, results were all presented and discussed in the manuscript, respectively. The same as the secondary outcome (Table 2).\nOutcome of meta-regression.\nCRF = cancer-related fatigue.\nFor the primary outcome, all intervention groups’ data was combined, regardless of the types and stages of cancer and so on. Consequently, our analyses were subject to high potential risk of between-study heterogeneity. A meta-regression was conducted, we found that placebo may be a source of heterogeneity. In 6 studies that used placebo, ginseng oral administration was in 6 and ginseng injections in 0, so analysis was conducted separately. We did not find other sources of heterogeneity (Table 2). Sensitivity analyses were undertaken by excluding studies with high risk of bias. Compare the 2 results, we still ca not explain the source of heterogeneity (Fig. S1, http://links.lww.com/MD/H737;2, http://links.lww.com/MD/H738;3, http://links.lww.com/MD/H739). Therefore, whether heterogeneity taken into account or not, results were all presented and discussed in the manuscript, respectively. The same as the secondary outcome (Table 2).\nOutcome of meta-regression.\nCRF = cancer-related fatigue.\n3.4. Outcome of publication bias text Egger linear regression was conducted to text symmetry of funnel plots. No publication bias was found in each outcome (Fig. S4, http://links.lww.com/MD/H740;6, http://links.lww.com/MD/H742;7, http://links.lww.com/MD/H744).\nEgger linear regression was conducted to text symmetry of funnel plots. No publication bias was found in each outcome (Fig. S4, http://links.lww.com/MD/H740;6, http://links.lww.com/MD/H742;7, http://links.lww.com/MD/H744).\n3.5. Outcome of GRADE rating Table 3 for details.\nOutcome of GRADE rating.\nCI = confidence interval, CRF = cancer-related fatigue, SMD = standardized mean difference.\nThere was substantial heterogeneity among studies, meta-regression and sensitivity analyses of each outcome were conducted but we didn’t find the sources of heterogeneity.\nSmall sample size and wide confidence interval.\nTable 3 for details.\nOutcome of GRADE rating.\nCI = confidence interval, CRF = cancer-related fatigue, SMD = standardized mean difference.\nThere was substantial heterogeneity among studies, meta-regression and sensitivity analyses of each outcome were conducted but we didn’t find the sources of heterogeneity.\nSmall sample size and wide confidence interval.\n3.6. Efficacy of ginseng oral administration and ginseng injections on CRF Fatigue was reported in 22 studies and 10 were excluded because of heterogeneity. Of included 12 studies, ginseng was used in 5 studies and ginseng injections in 7. The number of patients in the ginseng group is 656 and 646 in control. Efficacy was assessed between 2 weeks and 12 weeks. The pooled SMD was 0.40 (95% confidence interval (95% CI) [0.29–0.51], P < .00001) (Fig. 4). If heterogeneity was not taken into account, the pooled SMD was 0.89 (95% CI [0.60–1.18], P < .00001) (Fig. S4, http://links.lww.com/MD/H740). Those indicate that ginseng can alleviate CRF.\nEfficacy of ginseng oral administration and ginseng injections on CRF. CRF: cancer-related fatigue.\nFatigue was reported in 22 studies and 10 were excluded because of heterogeneity. Of included 12 studies, ginseng was used in 5 studies and ginseng injections in 7. The number of patients in the ginseng group is 656 and 646 in control. Efficacy was assessed between 2 weeks and 12 weeks. 
The pooled SMD was 0.40 (95% confidence interval (95% CI) [0.29–0.51], P < .00001) (Fig. 4). If heterogeneity was not taken into account, the pooled SMD was 0.89 (95% CI [0.60–1.18], P < .00001) (Fig. S4, http://links.lww.com/MD/H740). Those indicate that ginseng can alleviate CRF.\nEfficacy of ginseng oral administration and ginseng injections on CRF. CRF: cancer-related fatigue.\n3.7. Network meta-analysis between ginseng oral administration and ginseng injections Network meta-analysis to compare the relative efficacy of ginseng oral administration and ginseng injections was done (Fig. 5 and Table 4). The order was ginseng injections, ginseng oral administration and placebo from high efficacy to low.\nNetwork meta-analysis between ginseng oral administration and ginseng injections.\nSMD for comparisons are in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval.\nSMD = standardized mean difference.\nNetwork of 2 types of administration routes on CRF. CRF: cancer-related fatigue.\nNetwork meta-analysis to compare the relative efficacy of ginseng oral administration and ginseng injections was done (Fig. 5 and Table 4). The order was ginseng injections, ginseng oral administration and placebo from high efficacy to low.\nNetwork meta-analysis between ginseng oral administration and ginseng injections.\nSMD for comparisons are in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval.\nSMD = standardized mean difference.\nNetwork of 2 types of administration routes on CRF. CRF: cancer-related fatigue.\n3.8. Efficacy of ginseng oral administration on CRF Seven papers reported the efficacy of ginseng oral administration on CRF and 1 was excluded because of heterogeneity.[16] Six studies included 862 patients, 434 received ginseng and 428 received placebo.[5,15,19–22] Efficacy was assessed between 29 days and 12 weeks. The pooled SMD was 0.29 (95% CI [0.15–0.42], P < .0001) (Fig. 6). If heterogeneity was not taken into account, the pooled SMD was 0.46 (95% CI [0.10–0.82], P = .01) (Fig. S5, http://links.lww.com/MD/H741). Those indicate that ginseng oral administration can alleviate CRF.\nForest plot of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nEfficacies of different doses were explored. Sixteen patients were treated with 1000 mg/day ginseng and 16 with placebo in 1 study.[20] The pooled SMD was -0.29 (95% CI [−0.98 to 0.41], P = .42). Three hundred and forty-seven patients were treated with 2000 mg/d ginseng and 341 with placebo in 3 studies.[5,15,19] The pooled SMD was 0.35 (95% CI [0.19–0.50], P < .00001). Fifteen patients were treated with 3000 mg/d ginseng and 15 with placebo in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Those indicate that 2000 or 3000 mg/d ginseng should be effective to treat CRF. There is no significant difference between 3000 mg/d ginseng group and control group and that might be due to the small sample size (Fig. 7). Network meta-analysis to compare the relative efficacy of different doses was done. The order was 2000 mg/d, 3000 mg/d, placebo and 1000 mg/d from high efficacy to low. But there was no significant difference (Fig. 8 and Table 5).\nNetwork meta-analysis of different dose.\nSMD for comparisons are in the cell in common between the column-defining and row-defining treatment. 
SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval.\nSMD = standardized mean difference.\nForest plot of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nNetwork of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nEfficacies of different duration were also explored. One hundred and six patients, 52 in ginseng group and 54 in control, were treated for 2 weeks in 1 study.[22] The SMD was 0.10 (95% CI [−0.28 to 0.48], P = .62). Four hundred and twelve patients, 203 in ginseng group and 209 in control, were treated for 4 weeks in 2 study.[5,22] The pooled SMD was 0.20 (95% CI [0.00–0.39], P = .05). Seven hundred and twenty patients, 363 in ginseng group and 357 in control, were treated for 8 weeks in 4 study.[5,15,19,20] The pooled SMD was 0.32 (95% CI [0.17–0.46], P < .0001). Thirty patients, 15 in ginseng group and 15 in control, were treated for 12 weeks in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Three hundred and thirty patients, 161 in ginseng group and 169 in control, were treated for 16 weeks in 1 study. The SMD was 0.24 (95% CI [0.02–0.45], P = .03). It seems that 4 to 8 weeks is enough for ginseng oral administration to alleviate CRF (Fig. 9).\nForest plot of different duration of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nSeven papers reported the efficacy of ginseng oral administration on CRF and 1 was excluded because of heterogeneity.[16] Six studies included 862 patients, 434 received ginseng and 428 received placebo.[5,15,19–22] Efficacy was assessed between 29 days and 12 weeks. The pooled SMD was 0.29 (95% CI [0.15–0.42], P < .0001) (Fig. 6). If heterogeneity was not taken into account, the pooled SMD was 0.46 (95% CI [0.10–0.82], P = .01) (Fig. S5, http://links.lww.com/MD/H741). Those indicate that ginseng oral administration can alleviate CRF.\nForest plot of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nEfficacies of different doses were explored. Sixteen patients were treated with 1000 mg/day ginseng and 16 with placebo in 1 study.[20] The pooled SMD was -0.29 (95% CI [−0.98 to 0.41], P = .42). Three hundred and forty-seven patients were treated with 2000 mg/d ginseng and 341 with placebo in 3 studies.[5,15,19] The pooled SMD was 0.35 (95% CI [0.19–0.50], P < .00001). Fifteen patients were treated with 3000 mg/d ginseng and 15 with placebo in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Those indicate that 2000 or 3000 mg/d ginseng should be effective to treat CRF. There is no significant difference between 3000 mg/d ginseng group and control group and that might be due to the small sample size (Fig. 7). Network meta-analysis to compare the relative efficacy of different doses was done. The order was 2000 mg/d, 3000 mg/d, placebo and 1000 mg/d from high efficacy to low. But there was no significant difference (Fig. 8 and Table 5).\nNetwork meta-analysis of different dose.\nSMD for comparisons are in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval.\nSMD = standardized mean difference.\nForest plot of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nNetwork of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nEfficacies of different duration were also explored. 
One hundred and six patients, 52 in ginseng group and 54 in control, were treated for 2 weeks in 1 study.[22] The SMD was 0.10 (95% CI [−0.28 to 0.48], P = .62). Four hundred and twelve patients, 203 in ginseng group and 209 in control, were treated for 4 weeks in 2 study.[5,22] The pooled SMD was 0.20 (95% CI [0.00–0.39], P = .05). Seven hundred and twenty patients, 363 in ginseng group and 357 in control, were treated for 8 weeks in 4 study.[5,15,19,20] The pooled SMD was 0.32 (95% CI [0.17–0.46], P < .0001). Thirty patients, 15 in ginseng group and 15 in control, were treated for 12 weeks in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Three hundred and thirty patients, 161 in ginseng group and 169 in control, were treated for 16 weeks in 1 study. The SMD was 0.24 (95% CI [0.02–0.45], P = .03). It seems that 4 to 8 weeks is enough for ginseng oral administration to alleviate CRF (Fig. 9).\nForest plot of different duration of ginseng oral administration on CRF. CRF: cancer-related fatigue.\n3.9. Efficacy of ginseng injections on CRF Four types of injections (Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection) whose main components are ginseng extractions have been approved by Chinese National Medical Products Administration to be used in clinic. Fifteen studies reported these 4 on CRF and 5 papers were excluded because of heterogeneity.[17,18,23–25] Three hundred and forty-four patients were in the ginseng injection group and 338 in the control group. Efficacy was assessed between 2 weeks and 16 weeks. The pooled SMD was 0.74 (95% CI [0.59–0.90], P < .00001) (Fig. 10). If heterogeneity was not taken into account, the pooled SMD was 1.08 (95% CI [0.73–1.44], P < .00001) (Fig. S6, http://links.lww.com/MD/H742). Those indicate that ginseng injections can alleviate CRF.\nEfficacy of ginseng injections on CRF. CRF: cancer-related fatigue.\nEfficacies of the 4 types of injections were also explored, respectively. The SMD of Kangai injection, Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.12 (95%CI [0.67–1.58], P < .00001), 1.54 (95% CI [−0.79 to 3.87], P = .20), 1.02 (95% CI [0.71–1.33], P < .00001), 1.00 (95% CI [0.42–1.57], P = .0007), respectively (Fig. S6, http://links.lww.com/MD/H742). Those indicate that Kangai injection, Shenmai injection or Shenqi Fuzheng injection can alleviate CRF.\nFour types of injections (Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection) whose main components are ginseng extractions have been approved by Chinese National Medical Products Administration to be used in clinic. Fifteen studies reported these 4 on CRF and 5 papers were excluded because of heterogeneity.[17,18,23–25] Three hundred and forty-four patients were in the ginseng injection group and 338 in the control group. Efficacy was assessed between 2 weeks and 16 weeks. The pooled SMD was 0.74 (95% CI [0.59–0.90], P < .00001) (Fig. 10). If heterogeneity was not taken into account, the pooled SMD was 1.08 (95% CI [0.73–1.44], P < .00001) (Fig. S6, http://links.lww.com/MD/H742). Those indicate that ginseng injections can alleviate CRF.\nEfficacy of ginseng injections on CRF. CRF: cancer-related fatigue.\nEfficacies of the 4 types of injections were also explored, respectively. 
The SMD of Kangai injection, Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.12 (95%CI [0.67–1.58], P < .00001), 1.54 (95% CI [−0.79 to 3.87], P = .20), 1.02 (95% CI [0.71–1.33], P < .00001), 1.00 (95% CI [0.42–1.57], P = .0007), respectively (Fig. S6, http://links.lww.com/MD/H742). Those indicate that Kangai injection, Shenmai injection or Shenqi Fuzheng injection can alleviate CRF.\n3.10. Efficacy of ginseng oral administration and ginseng injections on emotional fatigue Ten studies reported the efficacy of ginseng oral administration or ginseng injections on emotional fatigue and 6 were excluded because of heterogeneity.[7,16,18,23,26,27] The number of patients in the experimental group was 286 and 277 in the control group. The pooled SMD was 0.12 (95% CI [−0.04 to 0.29], P = .15) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was 0.67 (95% CI [0.13–1.21], P = .02) (Fig. S7, http://links.lww.com/MD/H744). It seems that whether ginseng could alleviate emotional fatigue is uncertain.\nForest plot of ginseng oral administration and ginseng injections on emotional fatigue.\nGinseng oral administration was employed in 3 studies and 1 study was excluded because of heterogeneity.[16] The pooled SMD was 0.10 (95% CI [−0.10 to 0.30], P = .32) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was −0.34 (95% CI [−1.12 to 0.43], P = .38) (Fig. S7, http://links.lww.com/MD/H744). Those indicate that ginseng oral administration may not alleviate emotional fatigue.\nSeven studies explored efficacies of ginseng injections on emotional fatigue and 3 were excluded because of heterogeneity. The pooled SMD of the 4 studies was 0.79 (95% CI [0.55–1.03], P < .00001) (Fig. 12). If heterogeneity was not taken into account, the pooled SMD was 1.12 (95% CI [0.50–1.74], P = .0004) (Fig. S8, http://links.lww.com/MD/H749).[7,18,23,26–29] Those results suggest that ginseng injections may be effective in alleviating emotional fatigue.\nForest plot of ginseng injections on emotional fatigue.\nEfficacies of the 3 types of injections were also explored, respectively. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.16 (95%CI [−0.98 to 3.30], P = .29), 0.84 (95% CI [0.53–1.14], P < .00001), 1.34 (95% CI [0.13–2.55], P = .03), respectively (Fig. S8, http://links.lww.com/MD/H749). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate emotional fatigue.\nTen studies reported the efficacy of ginseng oral administration or ginseng injections on emotional fatigue and 6 were excluded because of heterogeneity.[7,16,18,23,26,27] The number of patients in the experimental group was 286 and 277 in the control group. The pooled SMD was 0.12 (95% CI [−0.04 to 0.29], P = .15) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was 0.67 (95% CI [0.13–1.21], P = .02) (Fig. S7, http://links.lww.com/MD/H744). It seems that whether ginseng could alleviate emotional fatigue is uncertain.\nForest plot of ginseng oral administration and ginseng injections on emotional fatigue.\nGinseng oral administration was employed in 3 studies and 1 study was excluded because of heterogeneity.[16] The pooled SMD was 0.10 (95% CI [−0.10 to 0.30], P = .32) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was −0.34 (95% CI [−1.12 to 0.43], P = .38) (Fig. S7, http://links.lww.com/MD/H744). 
Those indicate that ginseng oral administration may not alleviate emotional fatigue.\nSeven studies explored efficacies of ginseng injections on emotional fatigue and 3 were excluded because of heterogeneity. The pooled SMD of the 4 studies was 0.79 (95% CI [0.55–1.03], P < .00001) (Fig. 12). If heterogeneity was not taken into account, the pooled SMD was 1.12 (95% CI [0.50–1.74], P = .0004) (Fig. S8, http://links.lww.com/MD/H749).[7,18,23,26–29] Those results suggest that ginseng injections may be effective in alleviating emotional fatigue.\nForest plot of ginseng injections on emotional fatigue.\nEfficacies of the 3 types of injections were also explored, respectively. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.16 (95%CI [−0.98 to 3.30], P = .29), 0.84 (95% CI [0.53–1.14], P < .00001), 1.34 (95% CI [0.13–2.55], P = .03), respectively (Fig. S8, http://links.lww.com/MD/H749). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate emotional fatigue.\n3.11. Efficacy of ginseng oral administration and ginseng injections on cognitive fatigue. Cognitive fatigue is a psychological state characterized by the subjective feelings of tiredness, and impaired ability to think, memorize, and concentrate.[30,31] Cognitive fatigue is strongly associated with CRF. Eight studies reported the efficacy of ginseng oral administration and ginseng injections on cognitive fatigue. Four studies, 1 on ginseng oral administration and 3 on ginseng injections, were excluded because of heterogeneity.[5,18,23,29] Finally, 4 studies on ginseng injections were included. The pooled SMD was 0.72 (95% CI [0.48–0.96], P < .00001) (Fig. 13). If heterogeneity was not taken into account, the pooled SMD of all 8 studies was 0.80 (95% CI [0.31–1.29], P = .001) (Fig. S9, http://links.lww.com/MD/H750) and the pooled SMD of 7 studies on ginseng injections was 0.93 (95% CI [0.44–1.42], P = .0002) (Fig. S10, http://links.lww.com/MD/H751)..[7,16,17,23,26,27,29,32] Ginseng oral administration was employed in 1study and the SMD was -0.04 (95% CI [−0.28 to 0.20], P = .76) (Fig. S9, http://links.lww.com/MD/H750). Taking together, those results suggest that ginseng injections may be effective in alleviating cognitive fatigue while ginseng oral administration may not be beneficial to cognitive fatigue relieving.\nForest plot of ginseng injections on cognitive fatigue.\nEfficacies of the 3 types of injections were also explored, respectively. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 0.79 (95%CI [−0.50 to 2.09], P = .23), 0.83 (95% CI [0.53–1.14], P < .00001), 1.13 (95% CI [0.00–2.25], P = .05), respectively (Fig. S10, http://links.lww.com/MD/H751). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate cognitive fatigue.\nCognitive fatigue is a psychological state characterized by the subjective feelings of tiredness, and impaired ability to think, memorize, and concentrate.[30,31] Cognitive fatigue is strongly associated with CRF. Eight studies reported the efficacy of ginseng oral administration and ginseng injections on cognitive fatigue. Four studies, 1 on ginseng oral administration and 3 on ginseng injections, were excluded because of heterogeneity.[5,18,23,29] Finally, 4 studies on ginseng injections were included. The pooled SMD was 0.72 (95% CI [0.48–0.96], P < .00001) (Fig. 13). If heterogeneity was not taken into account, the pooled SMD of all 8 studies was 0.80 (95% CI [0.31–1.29], P = .001) (Fig. 
S9, http://links.lww.com/MD/H750) and the pooled SMD of 7 studies on ginseng injections was 0.93 (95% CI [0.44–1.42], P = .0002) (Fig. S10, http://links.lww.com/MD/H751)..[7,16,17,23,26,27,29,32] Ginseng oral administration was employed in 1study and the SMD was -0.04 (95% CI [−0.28 to 0.20], P = .76) (Fig. S9, http://links.lww.com/MD/H750). Taking together, those results suggest that ginseng injections may be effective in alleviating cognitive fatigue while ginseng oral administration may not be beneficial to cognitive fatigue relieving.\nForest plot of ginseng injections on cognitive fatigue.\nEfficacies of the 3 types of injections were also explored, respectively. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 0.79 (95%CI [−0.50 to 2.09], P = .23), 0.83 (95% CI [0.53–1.14], P < .00001), 1.13 (95% CI [0.00–2.25], P = .05), respectively (Fig. S10, http://links.lww.com/MD/H751). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate cognitive fatigue.\n3.12. The effect of cancer types on efficacy of ginseng and ginseng injections on CRF CRF may associate with cancer types.[16] The effect of cancer types on efficacy of ginseng on CRF was explored here. Nine studies evaluated the efficacy of ginseng on the treatment of lung cancer.[7,16,17,23,24,26,27,29,32] Of the total 725 participants, 519 had undergone chemotherapy prior to participation, 245 were non-small cell lung cancer patients, 86 were lung adenocarcinoma patients and 425 were advanced lung cancer patients. If heterogeneity was not taken into account, those results supported the benefit of Red ginseng, Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection on fatigue relief. Those results support CRF improvement of ginseng injections on lung cancer including the pathological types of non-small cell lung cancer, and TNM staging of advanced lung cancer. At the same time, ginseng may also benefit patients with colorectal cancer and nasopharyngeal carcinoma, and may have little effect on patients with head and neck cancer (Table 6).\nEfficacy of ginseng and ginseng injections on CRF alleviating by various factors.\nCRF may associate with cancer types.[16] The effect of cancer types on efficacy of ginseng on CRF was explored here. Nine studies evaluated the efficacy of ginseng on the treatment of lung cancer.[7,16,17,23,24,26,27,29,32] Of the total 725 participants, 519 had undergone chemotherapy prior to participation, 245 were non-small cell lung cancer patients, 86 were lung adenocarcinoma patients and 425 were advanced lung cancer patients. If heterogeneity was not taken into account, those results supported the benefit of Red ginseng, Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection on fatigue relief. Those results support CRF improvement of ginseng injections on lung cancer including the pathological types of non-small cell lung cancer, and TNM staging of advanced lung cancer. At the same time, ginseng may also benefit patients with colorectal cancer and nasopharyngeal carcinoma, and may have little effect on patients with head and neck cancer (Table 6).\nEfficacy of ginseng and ginseng injections on CRF alleviating by various factors.\n3.13. Incidences of treatment-related adverse events between different drugs and cancer types Adverse events were collected and summarized in Table 7. 
It seems that ginseng has no discernible adverse reactions.\nAdverse events.", "A total of 1764 studies were identified from the 3 electronic databases. Seventy-one duplicate studies were excluded by using Endnote X9. On review of the title and abstract, 1631 studies were excluded. After further careful review of 62 articles of the full text, a further 40 studies were excluded. Finally, 22 papers with a total of 2086 participants were included. Patients were treated with ginseng oral administration in 7 papers and with ginseng injections in 15 (Fig. 1 and Table 1). They were published between 2010 and 2020 and were conducted in China (n = 16), America (n = 3), Korea (n = 2) and Italy (n = 1). Six studies were randomized, double-blind, placebo-controlled design trials, and the remaining 16 studies used a randomized design (Table 1). The detailed information was summarized in Table 1 and Table S2, http://links.lww.com/MD/H735 and 3, http://links.lww.com/MD/H736.\nCharacteristics of studies included in the systematic review and meta-analysis.\nBFI = brief fatigue inventory, CFS = cancer fatigue scale, FACIT-F = functional assessment of chronic illness therapy-fatigue subscale, FSI = fatigue symptom inventory, MFSI-SF = multidimensional fatigue symptom inventory-short form, PFS = piper fatigue scale.\nFlow chart.", "All included 22 studies were randomized controlled trials. Two were dynamically allocated by computer,[5,15] and 4 used the random number table method.[7,16–18] Those 6 were rated as low risk. One study used stratified block randomization allocation and bias risk was not clear.[19] The other 15 studies did not describe the method of random sequence generation. Of 6 double-blind studies, only 1 mentioned the double-blind method but did not describe the specific implementation process.[20] Two studies were multicenter and 6 used placebo as control.[5,19] All included studies had low risk of bias regarding incomplete outcome data and selective reporting, and none claimed conflict of interest or early termination of the trial (Figs. 2 and 3).\nRisk of bias graph.\nRisk of bias summary.", "For the primary outcome, all intervention groups’ data was combined, regardless of the types and stages of cancer and so on. Consequently, our analyses were subject to high potential risk of between-study heterogeneity. A meta-regression was conducted, and we found that placebo may be a source of heterogeneity. All 6 studies that used placebo involved ginseng oral administration and none involved ginseng injections, so the analysis was conducted separately. We did not find other sources of heterogeneity (Table 2). Sensitivity analyses were undertaken by excluding studies with high risk of bias. Comparing the 2 results, we still cannot explain the source of heterogeneity (Fig. S1, http://links.lww.com/MD/H737;2, http://links.lww.com/MD/H738;3, http://links.lww.com/MD/H739). Therefore, results are presented and discussed in the manuscript both with and without heterogeneity taken into account. The same applies to the secondary outcome (Table 2).\nOutcome of meta-regression.\nCRF = cancer-related fatigue.", "Egger linear regression was conducted to test the symmetry of funnel plots. No publication bias was found in each outcome (Fig. 
S4, http://links.lww.com/MD/H740;6, http://links.lww.com/MD/H742;7, http://links.lww.com/MD/H744).", "Table 3 for details.\nOutcome of GRADE rating.\nCI = confidence interval, CRF = cancer-related fatigue, SMD = standardized mean difference.\nThere was substantial heterogeneity among studies, meta-regression and sensitivity analyses of each outcome were conducted but we didn’t find the sources of heterogeneity.\nSmall sample size and wide confidence interval.", "Fatigue was reported in 22 studies and 10 were excluded because of heterogeneity. Of included 12 studies, ginseng was used in 5 studies and ginseng injections in 7. The number of patients in the ginseng group is 656 and 646 in control. Efficacy was assessed between 2 weeks and 12 weeks. The pooled SMD was 0.40 (95% confidence interval (95% CI) [0.29–0.51], P < .00001) (Fig. 4). If heterogeneity was not taken into account, the pooled SMD was 0.89 (95% CI [0.60–1.18], P < .00001) (Fig. S4, http://links.lww.com/MD/H740). Those indicate that ginseng can alleviate CRF.\nEfficacy of ginseng oral administration and ginseng injections on CRF. CRF: cancer-related fatigue.", "Network meta-analysis to compare the relative efficacy of ginseng oral administration and ginseng injections was done (Fig. 5 and Table 4). The order was ginseng injections, ginseng oral administration and placebo from high efficacy to low.\nNetwork meta-analysis between ginseng oral administration and ginseng injections.\nSMD for comparisons are in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval.\nSMD = standardized mean difference.\nNetwork of 2 types of administration routes on CRF. CRF: cancer-related fatigue.", "Seven papers reported the efficacy of ginseng oral administration on CRF and 1 was excluded because of heterogeneity.[16] Six studies included 862 patients, 434 received ginseng and 428 received placebo.[5,15,19–22] Efficacy was assessed between 29 days and 12 weeks. The pooled SMD was 0.29 (95% CI [0.15–0.42], P < .0001) (Fig. 6). If heterogeneity was not taken into account, the pooled SMD was 0.46 (95% CI [0.10–0.82], P = .01) (Fig. S5, http://links.lww.com/MD/H741). Those indicate that ginseng oral administration can alleviate CRF.\nForest plot of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nEfficacies of different doses were explored. Sixteen patients were treated with 1000 mg/day ginseng and 16 with placebo in 1 study.[20] The pooled SMD was -0.29 (95% CI [−0.98 to 0.41], P = .42). Three hundred and forty-seven patients were treated with 2000 mg/d ginseng and 341 with placebo in 3 studies.[5,15,19] The pooled SMD was 0.35 (95% CI [0.19–0.50], P < .00001). Fifteen patients were treated with 3000 mg/d ginseng and 15 with placebo in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Those indicate that 2000 or 3000 mg/d ginseng should be effective to treat CRF. There is no significant difference between 3000 mg/d ginseng group and control group and that might be due to the small sample size (Fig. 7). Network meta-analysis to compare the relative efficacy of different doses was done. The order was 2000 mg/d, 3000 mg/d, placebo and 1000 mg/d from high efficacy to low. But there was no significant difference (Fig. 8 and Table 5).\nNetwork meta-analysis of different dose.\nSMD for comparisons are in the cell in common between the column-defining and row-defining treatment. 
SMD < 0 favors row-defining treatment. Numbers in parentheses indicate 95% confidence interval.\nSMD = standardized mean difference.\nForest plot of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nNetwork of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue.\nEfficacies of different duration were also explored. One hundred and six patients, 52 in ginseng group and 54 in control, were treated for 2 weeks in 1 study.[22] The SMD was 0.10 (95% CI [−0.28 to 0.48], P = .62). Four hundred and twelve patients, 203 in ginseng group and 209 in control, were treated for 4 weeks in 2 study.[5,22] The pooled SMD was 0.20 (95% CI [0.00–0.39], P = .05). Seven hundred and twenty patients, 363 in ginseng group and 357 in control, were treated for 8 weeks in 4 study.[5,15,19,20] The pooled SMD was 0.32 (95% CI [0.17–0.46], P < .0001). Thirty patients, 15 in ginseng group and 15 in control, were treated for 12 weeks in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Three hundred and thirty patients, 161 in ginseng group and 169 in control, were treated for 16 weeks in 1 study. The SMD was 0.24 (95% CI [0.02–0.45], P = .03). It seems that 4 to 8 weeks is enough for ginseng oral administration to alleviate CRF (Fig. 9).\nForest plot of different duration of ginseng oral administration on CRF. CRF: cancer-related fatigue.", "Four types of injections (Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection) whose main components are ginseng extractions have been approved by Chinese National Medical Products Administration to be used in clinic. Fifteen studies reported these 4 on CRF and 5 papers were excluded because of heterogeneity.[17,18,23–25] Three hundred and forty-four patients were in the ginseng injection group and 338 in the control group. Efficacy was assessed between 2 weeks and 16 weeks. The pooled SMD was 0.74 (95% CI [0.59–0.90], P < .00001) (Fig. 10). If heterogeneity was not taken into account, the pooled SMD was 1.08 (95% CI [0.73–1.44], P < .00001) (Fig. S6, http://links.lww.com/MD/H742). Those indicate that ginseng injections can alleviate CRF.\nEfficacy of ginseng injections on CRF. CRF: cancer-related fatigue.\nEfficacies of the 4 types of injections were also explored, respectively. The SMD of Kangai injection, Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.12 (95%CI [0.67–1.58], P < .00001), 1.54 (95% CI [−0.79 to 3.87], P = .20), 1.02 (95% CI [0.71–1.33], P < .00001), 1.00 (95% CI [0.42–1.57], P = .0007), respectively (Fig. S6, http://links.lww.com/MD/H742). Those indicate that Kangai injection, Shenmai injection or Shenqi Fuzheng injection can alleviate CRF.", "Ten studies reported the efficacy of ginseng oral administration or ginseng injections on emotional fatigue and 6 were excluded because of heterogeneity.[7,16,18,23,26,27] The number of patients in the experimental group was 286 and 277 in the control group. The pooled SMD was 0.12 (95% CI [−0.04 to 0.29], P = .15) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was 0.67 (95% CI [0.13–1.21], P = .02) (Fig. S7, http://links.lww.com/MD/H744). 
It seems that whether ginseng could alleviate emotional fatigue is uncertain.\nForest plot of ginseng oral administration and ginseng injections on emotional fatigue.\nGinseng oral administration was employed in 3 studies and 1 study was excluded because of heterogeneity.[16] The pooled SMD was 0.10 (95% CI [−0.10 to 0.30], P = .32) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was −0.34 (95% CI [−1.12 to 0.43], P = .38) (Fig. S7, http://links.lww.com/MD/H744). Those indicate that ginseng oral administration may not alleviate emotional fatigue.\nSeven studies explored efficacies of ginseng injections on emotional fatigue and 3 were excluded because of heterogeneity. The pooled SMD of the 4 studies was 0.79 (95% CI [0.55–1.03], P < .00001) (Fig. 12). If heterogeneity was not taken into account, the pooled SMD was 1.12 (95% CI [0.50–1.74], P = .0004) (Fig. S8, http://links.lww.com/MD/H749).[7,18,23,26–29] Those results suggest that ginseng injections may be effective in alleviating emotional fatigue.\nForest plot of ginseng injections on emotional fatigue.\nEfficacies of the 3 types of injections were also explored, respectively. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 1.16 (95%CI [−0.98 to 3.30], P = .29), 0.84 (95% CI [0.53–1.14], P < .00001), 1.34 (95% CI [0.13–2.55], P = .03), respectively (Fig. S8, http://links.lww.com/MD/H749). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate emotional fatigue.", "Cognitive fatigue is a psychological state characterized by the subjective feelings of tiredness, and impaired ability to think, memorize, and concentrate.[30,31] Cognitive fatigue is strongly associated with CRF. Eight studies reported the efficacy of ginseng oral administration and ginseng injections on cognitive fatigue. Four studies, 1 on ginseng oral administration and 3 on ginseng injections, were excluded because of heterogeneity.[5,18,23,29] Finally, 4 studies on ginseng injections were included. The pooled SMD was 0.72 (95% CI [0.48–0.96], P < .00001) (Fig. 13). If heterogeneity was not taken into account, the pooled SMD of all 8 studies was 0.80 (95% CI [0.31–1.29], P = .001) (Fig. S9, http://links.lww.com/MD/H750) and the pooled SMD of 7 studies on ginseng injections was 0.93 (95% CI [0.44–1.42], P = .0002) (Fig. S10, http://links.lww.com/MD/H751)..[7,16,17,23,26,27,29,32] Ginseng oral administration was employed in 1study and the SMD was -0.04 (95% CI [−0.28 to 0.20], P = .76) (Fig. S9, http://links.lww.com/MD/H750). Taking together, those results suggest that ginseng injections may be effective in alleviating cognitive fatigue while ginseng oral administration may not be beneficial to cognitive fatigue relieving.\nForest plot of ginseng injections on cognitive fatigue.\nEfficacies of the 3 types of injections were also explored, respectively. The SMD of Shenfu injection, Shenmai injection or Shenqi Fuzheng injection was 0.79 (95%CI [−0.50 to 2.09], P = .23), 0.83 (95% CI [0.53–1.14], P < .00001), 1.13 (95% CI [0.00–2.25], P = .05), respectively (Fig. S10, http://links.lww.com/MD/H751). Those indicate that Shenmai injection or Shenqi Fuzheng injection can alleviate cognitive fatigue.", "CRF may associate with cancer types.[16] The effect of cancer types on efficacy of ginseng on CRF was explored here. 
Nine studies evaluated the efficacy of ginseng on the treatment of lung cancer.[7,16,17,23,24,26,27,29,32] Of the total 725 participants, 519 had undergone chemotherapy prior to participation, 245 were non-small cell lung cancer patients, 86 were lung adenocarcinoma patients and 425 were advanced lung cancer patients. If heterogeneity was not taken into account, those results supported the benefit of Red ginseng, Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection on fatigue relief. Those results support CRF improvement of ginseng injections on lung cancer including the pathological types of non-small cell lung cancer, and TNM staging of advanced lung cancer. At the same time, ginseng may also benefit patients with colorectal cancer and nasopharyngeal carcinoma, and may have little effect on patients with head and neck cancer (Table 6).\nEfficacy of ginseng and ginseng injections on CRF alleviating by various factors.", "Adverse events were collected and summarized in Table 7. It seems that ginseng has no discernible adverse reactions.\nAdverse events.", "Panax ginseng root is widely used in Asia owing to its therapeutic anti-oxidative, immunomodulatory properties as well as other numerous pharmacologic activities. It has a good safety profile and minor incidence of adverse effects.[33] In China, ginseng has been used to treat chronic fatigue as early as 2000 years ago. Several meta-analyses and systematic reviews focusing on the efficacy of ginseng on fatigue were published recently and they reported that ginseng benefits fatigue, such as chronic fatigue syndrome, idiopathic chronic fatigue, physical fatigue in human beings and animals.[34,35] At the same time, the number of studies focusing on efficacy of ginseng on CRF is growing. Several types of injections whose main components are ginseng extractions have been approved by Chinese National Medical Products Administration to treat patients. Therefore, hundreds of papers focusing on ginseng on CRF were written in Chinese. Here, we conducted a meta-analysis of papers written in Chinese and in English to evaluate the efficacy of ginseng and ginseng injections in the treatment of CRF.\nThe primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. SMD was employed here because clinical indices to assess CRF differed among studies. The data indicate that ginseng treatment, including oral administration and injection, yield benefits CRF whether heterogeneity taken into account (SMD = 0.40; 95% CI [0.29–0.51], P < .00001) or not (SMD = 0.89; 95% CI [0.60–1.18], P < .00001). Several other drugs have been found to be effective in CRF. Methylphenidate may be accepted more widely and be recommend more frequently. But the results of several trials indicated that methylphenidate, compared with placebo, may not improve CRF.[36,37] Dr Moraska et al conducted a randomized, double-blind, placebo-controlled study but did not find evidence that methylphenidate improved the primary end point of CRF.[36] Dr Centeno’s paper indicated that methylphenidate was not more efficient than placebo to treat CRF.[37] Among 22 included studies on ginseng/ginseng injections, results of 21 showed or showed a trend that there was a reduction in CRF in the ginseng/ginseng injections group compared with the control group. Those concordantly indicate that ginseng is effective to treat CRF. 
So, it seems that ginseng is a promising drug to alleviate CRF.\nEmotional and/or cognitive fatigue is an important part of CRF. Ten papers explored the efficacy of ginseng on emotional fatigue. If heterogeneity was not taken into account, the data supported the conclusion that ginseng could alleviate emotional fatigue (SMD = 0.67; 95% CI [0.13–1.21], P = .02). But if heterogeneity was taken into account, the data showed only a trend that ginseng might alleviate emotional fatigue (SMD = 0.12; 95% CI [−0.04 to 0.29], P = .15). The efficacy of ginseng on cognitive fatigue was also explored, and the data indicated that ginseng injections can alleviate cognitive fatigue whether heterogeneity was taken into account (SMD = 0.72; 95% CI [0.48–0.96], P < .00001) or not (SMD = 0.93; 95% CI [0.44–1.42], P = .0002). So, ginseng can alleviate CRF and might be beneficial in treating emotional and/or cognitive fatigue. At the same time, several papers suggested that the combination treatment of ginseng/ginseng injections and methylphenidate or dexamethasone showed potential clinical benefit in CRF without discernible associated toxicities.[23,38]\nTwo types of administration routes were included in this study: oral and intravenous injection. The results of the network meta-analysis showed that intravenous injection was better, perhaps because intravenous injection acts faster than oral administration and because ginseng injections contain other ingredients besides ginseng. But there was no evidence that ginseng oral administration, as compared with placebo, could improve emotional fatigue whether heterogeneity was taken into account (SMD = 0.10; 95% CI [−0.10 to 0.30], P = .32) or not (SMD = −0.34; 95% CI [−1.12 to 0.43], P = .38). The situation is similar for cognitive fatigue (SMD = −0.04; 95% CI [−0.28 to 0.20], P = .76).\nFour types of injections (Kangai injection, Shenqi Fuzheng injection, Shenfu injection and Shenmai injection), whose main components are ginseng extractions, have been used clinically in China. The pooled SMD of the 4 injections indicates that ginseng injections are effective in alleviating CRF and cognitive fatigue whether heterogeneity was taken into account or not. The data supported that each type of injection may alleviate CRF. Besides, Shenmai injection and Shenqi Fuzheng injection may be effective in alleviating both emotional fatigue and cognitive fatigue. Cancer types were taken into account in some of the included papers focusing on ginseng injections, hence the effect of cancer types on efficacy was meta-analyzed here. It seems that ginseng injections could alleviate CRF caused by non-small cell lung cancer, colorectal cancer, malignant melanoma and nasopharyngeal carcinoma (Table 6). Therefore, ginseng injections can alleviate CRF and may be beneficial in treating emotional fatigue and cognitive fatigue, particularly those caused by some types of cancer. Due to limited studies, it is unknown whether ginseng can alleviate CRF caused by other types of cancer. Future rigorous clinical trials and published results will provide deeper insight.\nBesides the inherent limitations of individual trials, there are limitations to our analyses. First, different types of studies were included. There are great differences in doses, duration, routes of administration and types of drugs among studies. These introduce a substantial risk of bias into the meta-analysis. Second, the sample size in each trial is small. Most enrolled about 100 patients. Consequently, confidence intervals were very wide and there was great variability. 
Third, there are great differences in cancer types, stages, basic strategies for treatment of cancer. Some trials even enrolled several types of cancer in different stages. Some patients were at stage III to VI, some were cancer survivors who remained free of disease. Some were treated with chemotherapy, radiotherapy or others, some were not. Because CRF is associated with cancer types, stages and basic strategies for treatment of cancer (for example, chemotherapy, radiotherapy),[2,16] those confounders play a role in CRF and affect the efficacy of ginseng treatment more or less. Fourth, clinical indices to assess clinical response differed among studies. Brief Fatigue Inventory, Multidimensional Fatigue Symptom Inventory–Short Form, Fatigue Symptom Inventory, Piper Fatigue Scale, et al were employed in different trials. Though those correlate with each other and have been well accepted to assess CRF, little variability among studies might be unavoidable. Fifth, the methodological quality and quality of evidence of the literature included in this study were rated low (Table 3). Although our findings support the effectiveness of ginseng and ginseng injections in the treatment of CRF, the GRADE approaches were rated low. Sixth, some database, such as Web of Science, et al, were not included in the databases searched because we do not have access.", "Ginseng, ginseng oral administration or ginseng injections, may improve CRF. Intravenous injection might be better than oral administration. It seems that ginseng injections may alleviate cognitive fatigue. No evidence was found to support that ginseng could alleviate emotional fatigue. More high-quality randomized, double-blind, placebo-controlled studies with homogeneous samples, large sample sizes, fixed protocol are warranted to identify effectiveness of ginseng on CRF caused by specific type of cancer.", "" ]
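The pooled standardized mean differences quoted throughout the Results were produced with RevMan 5.3; the snippet below is only a minimal Python sketch of the underlying arithmetic, namely the SMD formula from Section 2.6 (SMD = (M1 − M2)/pooled SD) combined by inverse-variance fixed-effect weighting. The function names and the study summaries in the example are hypothetical placeholders, not data from the included trials or the authors' code.

```python
import math

def smd(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference: (M1 - M2) / pooled SD (Cohen's d)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Approximate variance of d, used later for inverse-variance weighting.
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

def fixed_effect_pool(effects):
    """Inverse-variance fixed-effect pooling of (SMD, variance) pairs."""
    weights = [1 / v for _, v in effects]
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical fatigue-reduction summaries (mean, SD, n) per arm -- not trial data.
studies = [
    ((2.1, 1.8, 52), (1.5, 1.7, 54)),
    ((1.9, 1.6, 60), (1.2, 1.5, 58)),
]
effects = [smd(*ginseng_arm, *control_arm) for ginseng_arm, control_arm in studies]
pooled, ci = fixed_effect_pool(effects)
print(f"pooled SMD = {pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

A positive pooled SMD favors the ginseng arm, matching the sign convention stated in the Methods.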
[ "intro", "methods", null, null, null, null, null, null, null, null, "results", "results", null, null, null, null, null, null, null, null, null, null, null, null, null, "discussion", null, "supplementary-material" ]
[ "cancer-related fatigue", "ginseng", "ginseng injections", "meta-analysis" ]
1. Introduction: Cancer-related fatigue (CRF) is one of the most common symptoms in patients with cancer. Up to 90% of patients who are under the active treatment suffer from CRF.[1] It can persist months or even years after treatment ends which interferes with usual functioning. Besides, fatigue is rarely an isolated symptom and most commonly occurs with other symptoms and signs, such as pain, emotional distress, anemia, and sleep disturbances, in symptom clusters.[2] Unlike typical fatigue, CRF cannot be relieved by additional rest, sleep, reducing physical activity, etc. On the contrary, exercise/physical activity is likely to be effective in ameliorating CRF. Psychological/psycho-education, mind/body wellness training, nutritional and dietary supplements may be effective, too. Unfortunately, these interventions yield, at most, moderate benefits in meta-analyses.[2] Until now, evidence to date indicates that synthetic drugs are less effective than non-pharmacologic intervention.[2] Ginseng is the root of plants in the genus Panax, such as Korean ginseng (Panax ginseng C.A. Meyer), Japanese ginseng (Panax ginseng C.A. Meyer), and American ginseng (Panax quinquefolius L.). Red ginseng is a processed product of Asian ginseng (Panax ginseng C.A. Meyer.) by steaming and drying. Ginseng has been used to treat chronic fatigue as early as 2000 years ago in China. Nowadays, ginseng is not only used in China, but also sold and used in more than 35 countries, such as Japan, South Korea, North Korea, America.[3] Preclinical data supports that ginseng may be helpful for fatigue. Animal studies have reported that ginseng can improve the endurance and swimming duration time of mice.[4] Besides, ginseng is mentioned as a dietary supplement for CRF treatment in the NCCN Guidelines based on 1 randomized, double-blind clinical trial using American ginseng which indicated that American ginseng of 2000 mg improved CRF symptom.[5] There are several types of ginseng and they were deemed to have same active pharmaceutical ingredients, for example, Rg1 and Rb1. However, other types of ginseng were not discussed in the Guidelines.[6] In addition to oral administration, several types of injections whose main components are ginseng extractions have been approved by Chinese National Medical Products Administration to be used in clinic. The common ones are Kangai injection, Shenfu injection, Shenmai injection, Shenqi Fuzheng injection. Kangai injection (China Food and Drug Administration approval number Z20026868) consists of the extracts from Astragalus membranaceus (Fisch.) (Bunge), Panax ginseng C.A. Meyer. and Sophora flavescens Aiton. Shenfu injection (China Food and Drug Administration approval number Z20043117) consists of the extracts from Red ginseng and Aconitum wilsonii Stapf ex Veitch.Shenmai injection (China Food and Drug Administration approval number Z2009364) consists of the extracts from Red ginseng and Ophiopogon japonicus (Linn. f.) Ker-Gawl.. Shenqi Fuzheng injection (China Food and Drug Administration approval number Z19990065) consists of the extracts from Codonopsis pilosula (Franch.) Nannf. and Astragalus Membranaceus (Fisch.) Bunge. Some studies have shown that ginseng injections are of great help in improving the quality of life and reducing the side effects of radiotherapy and chemotherapy in cancer patients.[7,8] Therefore, ginseng and ginseng injections are both potential drugs for the treatment of CRF but few studies put them together for analysis. 
Here, we employed standardized mean difference (SMD) to conduct a meta-analysis to evaluate the efficacy of ginseng and ginseng injections in the treatment of CRF and the quality of the evidence. Emotional and cognitive fatigue were evaluated, too. Besides, in our study, subgroup analyses were conducted to compare the efficacy of cancer types, cancer stages, basic strategies for treatment of cancer and so on. 2. Methods: 2.1. Study registration The protocol of this meta-analysis is registered in PROSPERO, under the registration number CRD42021228094 on February 18, 2021. It is available at http://www.crd.york.ac.uk/PROSPERO/. The protocol of this meta-analysis is registered in PROSPERO, under the registration number CRD42021228094 on February 18, 2021. It is available at http://www.crd.york.ac.uk/PROSPERO/. 2.2. Literature search PubMed, Cochrane Library, China National Knowledge Infrastructure were systematically searched from the database inception to May 24, 2021. The following keywords were used for the Chinese database search:(shen or renshen or gaolishen or xiyangshen or hongshen or baishen or shuishen or yeshanshen or Kangai zhusheye or Shenqi Fuzheng zhusheye or Shenfu zhusheye or Shenmai zhusheye) and (pilao or pifa or pijuan or pibei or juandai or fali). The following keywords were used for the English database search: (Ginseng or Ginsengs or P. quinquefolius or Panax or ginsenosides or ginsenoside or Kangai injection or Shenqi fuzheng injection or Shenfu injection or Shenmai injection) and (fatigue or lethargy or exhaustion or tiredness or weariness or physical performance or exercise performance) (Table S1, http://links.lww.com/MD/H734 for details). PubMed, Cochrane Library, China National Knowledge Infrastructure were systematically searched from the database inception to May 24, 2021. The following keywords were used for the Chinese database search:(shen or renshen or gaolishen or xiyangshen or hongshen or baishen or shuishen or yeshanshen or Kangai zhusheye or Shenqi Fuzheng zhusheye or Shenfu zhusheye or Shenmai zhusheye) and (pilao or pifa or pijuan or pibei or juandai or fali). The following keywords were used for the English database search: (Ginseng or Ginsengs or P. quinquefolius or Panax or ginsenosides or ginsenoside or Kangai injection or Shenqi fuzheng injection or Shenfu injection or Shenmai injection) and (fatigue or lethargy or exhaustion or tiredness or weariness or physical performance or exercise performance) (Table S1, http://links.lww.com/MD/H734 for details). 2.3. Inclusion/exclusion criteria For inclusion in the review, studies were required to meet the following criteria: Experimental design: randomized controlled trials; Type of participants: subjects with CRF, regardless of age, sex, type of cancer, pathological type, cancer treatment; Type of interventions: drugs with ginseng or ginseng injections; Control: unlimited treatment method; Language types: Chinese and English studies. Studies without enough data (The duration of treatment or fatigue score is unknown) or studies whose participants were subjectively selected were excluded from analysis. For inclusion in the review, studies were required to meet the following criteria: Experimental design: randomized controlled trials; Type of participants: subjects with CRF, regardless of age, sex, type of cancer, pathological type, cancer treatment; Type of interventions: drugs with ginseng or ginseng injections; Control: unlimited treatment method; Language types: Chinese and English studies. 
Studies without enough data (The duration of treatment or fatigue score is unknown) or studies whose participants were subjectively selected were excluded from analysis. 2.4. Selection of relevant studies and quality assessment Two reviewers independently extracted data based on the predetermined criteria, and discrepancies were resolved by consensus. From studies included in the final analysis, the following data were extracted: the name of first author, year of publication, geographic location, types of cancer, basic strategies for treatment of cancer (surgery, chemotherapy or radiotherapy), species and dose of ginseng and ginseng injections in the intervention group, treatment regimen of the control group, duration of treatment, tool name used to assess CRF and type of dimensions used for main outcomes measured. The Cochrane Handbook was used for systematic reviews of interventions to evaluate the quality of the included studies.[9,10] This approach requires studies to be assessed across 6 special domains that were subjected to potential bias, including sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting, and other sources of bias. There are 3 biases of judgment: Yes (Low risk), No (High risk), Not clear (Unclear risk). Two reviewers independently extracted data based on the predetermined criteria, and discrepancies were resolved by consensus. From studies included in the final analysis, the following data were extracted: the name of first author, year of publication, geographic location, types of cancer, basic strategies for treatment of cancer (surgery, chemotherapy or radiotherapy), species and dose of ginseng and ginseng injections in the intervention group, treatment regimen of the control group, duration of treatment, tool name used to assess CRF and type of dimensions used for main outcomes measured. The Cochrane Handbook was used for systematic reviews of interventions to evaluate the quality of the included studies.[9,10] This approach requires studies to be assessed across 6 special domains that were subjected to potential bias, including sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting, and other sources of bias. There are 3 biases of judgment: Yes (Low risk), No (High risk), Not clear (Unclear risk). 2.5. Outcomes of interest The primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. The secondary outcome was emotional or cognitive fatigue alleviated by ginseng and ginseng injections. The primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. The secondary outcome was emotional or cognitive fatigue alleviated by ginseng and ginseng injections. 2.6. Statistical analysis RevMan5.3 (Review Manager (RevMan), Computer program, version 5.3. Cochrane Collaboration, Copenhagen, Denmark) was used for our statistical analysis.[9,10] STATA v.16.0 (College Station, TX) was used for network meta-analysis. 
Because clinical indices to assess clinical response/remission differed among studies, SMD was used as a main effect size to calculate those differences.[11] The calculation formula of SMD is as follows: SMD = (M1 − M2)/pooled SD, where M1 is the mean of fatigue reduction in the intervention group, M2 is the mean of fatigue reduction in the control group, and pooled SD is a pooled intervention-specific standard deviation.[8] If the value of SMD is positive and P < .05, it shows that the effect of the intervention group is better than that of the control group. 2.7. Assessment of heterogeneity Chi² test combined with I² test was used to test the heterogeneity between studies. If P < .1 or I² > 50%, it suggested significant heterogeneity and we would use random-effects model for meta-analysis; otherwise, a fixed-effects model would be used. Wherever feasible, a meta-regression analysis would be conducted to explore the source of significant heterogeneity. Sensitivity analyses would be undertaken to assess the robustness of our findings by excluding studies with high risk of bias.[12] 2.8. Assessment of reporting biases Funnel plots were performed to assess reporting bias with more than 10 trials. Egger’s regression intercept was calculated by STATA v.16.0 to do a test of asymmetry. A 2-tailed P value < .05 was considered statistically significant.[13] 2.9. Summarizing and interpreting results GRADE approach was used to interpret findings. We assessed the outcomes with reference to the overall risk of bias of the included studies, the inconsistency of the results, the directness of the evidence, the precision of the estimates, and the risk of publication bias. The quality of the body of evidence for each assessable outcome was categorized as follows: no reason to downgrade the quality of evidence, serious reason (downgraded by one) or very serious reason (downgraded by two).[14] GRADE approach was used to interpret findings. 
We assessed the outcomes with reference to the overall risk of bias of the included studies, the inconsistency of the results, the directness of the evidence, the precision of the estimates, and the risk of publication bias. The quality of the body of evidence for each assessable outcome were categorized as follows: no reason to downgrade the quality of evidence, serious reason (downgraded by one) or very serious reason (downgraded by two).[14] 2.1. Study registration: The protocol of this meta-analysis is registered in PROSPERO, under the registration number CRD42021228094 on February 18, 2021. It is available at http://www.crd.york.ac.uk/PROSPERO/. 2.2. Literature search: PubMed, Cochrane Library, China National Knowledge Infrastructure were systematically searched from the database inception to May 24, 2021. The following keywords were used for the Chinese database search:(shen or renshen or gaolishen or xiyangshen or hongshen or baishen or shuishen or yeshanshen or Kangai zhusheye or Shenqi Fuzheng zhusheye or Shenfu zhusheye or Shenmai zhusheye) and (pilao or pifa or pijuan or pibei or juandai or fali). The following keywords were used for the English database search: (Ginseng or Ginsengs or P. quinquefolius or Panax or ginsenosides or ginsenoside or Kangai injection or Shenqi fuzheng injection or Shenfu injection or Shenmai injection) and (fatigue or lethargy or exhaustion or tiredness or weariness or physical performance or exercise performance) (Table S1, http://links.lww.com/MD/H734 for details). 2.3. Inclusion/exclusion criteria: For inclusion in the review, studies were required to meet the following criteria: Experimental design: randomized controlled trials; Type of participants: subjects with CRF, regardless of age, sex, type of cancer, pathological type, cancer treatment; Type of interventions: drugs with ginseng or ginseng injections; Control: unlimited treatment method; Language types: Chinese and English studies. Studies without enough data (The duration of treatment or fatigue score is unknown) or studies whose participants were subjectively selected were excluded from analysis. 2.4. Selection of relevant studies and quality assessment: Two reviewers independently extracted data based on the predetermined criteria, and discrepancies were resolved by consensus. From studies included in the final analysis, the following data were extracted: the name of first author, year of publication, geographic location, types of cancer, basic strategies for treatment of cancer (surgery, chemotherapy or radiotherapy), species and dose of ginseng and ginseng injections in the intervention group, treatment regimen of the control group, duration of treatment, tool name used to assess CRF and type of dimensions used for main outcomes measured. The Cochrane Handbook was used for systematic reviews of interventions to evaluate the quality of the included studies.[9,10] This approach requires studies to be assessed across 6 special domains that were subjected to potential bias, including sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting, and other sources of bias. There are 3 biases of judgment: Yes (Low risk), No (High risk), Not clear (Unclear risk). 2.5. Outcomes of interest: The primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. The secondary outcome was emotional or cognitive fatigue alleviated by ginseng and ginseng injections. 2.6. 
Statistical analysis: RevMan5.3 (Review Manager (RevMan), Computer program, version 5.3. Cochrane Collaboration, Copenhagen, Denmark) was used for our statistical analysis.[9,10] STATA v.16.0 (College Station, TX) was used for network meta-analysis. Because clinical indices to assess clinical response/remission differed among studies, SMD was used as a main effect size to calculate those differences.[11] The calculation formula of SMD is as follows: Where M1 is the mean of fatigue reduction in the intervention group, M2 is the mean of fatigue reduction in the control group, and pooled SD is a pooled intervention specific standard deviation.[8] If the value of SMD is positive and P < .05, it shows that the effect of the intervention group is better than that of the control group. 2.7. Assessment of heterogeneity: Chi[2] test combined with I²-test was used to test the heterogeneity between studies. If P < .1 or I² > 50%, it suggested significant heterogeneity and we would use random-effects model for meta-analysis; otherwise, a fixed-effects model would be used. Wherever feasible, a meta-regression analysis would be conducted to explore the source of significant heterogeneity. Sensitivity analyses would be undertaken to assess the robustness of our findings by excluding studies with high risk of bias.[12] 2.8. Assessment of reporting biases: Funnel plots were performed to assess reporting bias with more than 10 trials. Egger’s regression intercept was calculated by STATA v.16.0 to do a text of asymmetry. A 2-tailed P value < .05 was considered statistically significant.[13] 2.9. Summarizing and interpreting results: GRADE approach was used to interpret findings. We assessed the outcomes with reference to the overall risk of bias of the included studies, the inconsistency of the results, the directness of the evidence, the precision of the estimates, and the risk of publication bias. The quality of the body of evidence for each assessable outcome were categorized as follows: no reason to downgrade the quality of evidence, serious reason (downgraded by one) or very serious reason (downgraded by two).[14] 3. Results: 3.1. Selection and general characteristics of the included studies A total of 1764 studies were identified from the 3 electronic databases. Seventy-one duplicate studies were excluded by using Endnote X9. On review of the title and abstract, 1631 studies were excluded. After further careful review of 62 articles of the full text, a further 40 studies were excluded. Finally, 22 papers with a total of 2086 participants were included. Patients were treated with ginseng oral administration in 7 papers and with ginseng injections in 15 (Fig. 1 and Table 1). They were published between 2010 and 2020 and were conducted in China (n = 16), America (n = 3), Korea (n = 2) and Italy (n = 1). Six studies were randomized, double-blind, placebo-controlled design trials, and the rest of 16 studies used randomized design (Table 1). The detailed information was summarized in Table 1 and Table S2, http://links.lww.com/MD/H735 and 3, http://links.lww.com/MD/H736. Characteristics of studies included in the systematic review and meta-analysis. BFI = brief fatigue inventory, CFS = cancer fatigue scale, FACIT-F = functional assessment of chronic illness therapy-fatigue subscale, FSI = fatigue symptom inventory, MFSI-SF = multidimensional fatigue symptom inventory-short form, PFS = piper fatigue scale. Flow chart. A total of 1764 studies were identified from the 3 electronic databases. 
Seventy-one duplicate studies were excluded by using Endnote X9. On review of the title and abstract, 1631 studies were excluded. After further careful review of 62 articles of the full text, a further 40 studies were excluded. Finally, 22 papers with a total of 2086 participants were included. Patients were treated with ginseng oral administration in 7 papers and with ginseng injections in 15 (Fig. 1 and Table 1). They were published between 2010 and 2020 and were conducted in China (n = 16), America (n = 3), Korea (n = 2) and Italy (n = 1). Six studies were randomized, double-blind, placebo-controlled design trials, and the rest of 16 studies used randomized design (Table 1). The detailed information was summarized in Table 1 and Table S2, http://links.lww.com/MD/H735 and 3, http://links.lww.com/MD/H736. Characteristics of studies included in the systematic review and meta-analysis. BFI = brief fatigue inventory, CFS = cancer fatigue scale, FACIT-F = functional assessment of chronic illness therapy-fatigue subscale, FSI = fatigue symptom inventory, MFSI-SF = multidimensional fatigue symptom inventory-short form, PFS = piper fatigue scale. Flow chart. 3.2. Methodological quality of studies All included 22 studies were randomized controlled trials. Two were dynamically allocated by computer,[5,15] and 4 used random number table method.[7,16–18] Those 6 were rated as low risk. One study used stratified block randomization allocation and bias risk was not clear.[19] The other 15 studies didn’t describe the method of random sequence generation. Of 6 double-blind studies, only 1 mentioned double-blind method but did not describe the specific implementation process.[20] Two studies were multicenter and 6 used placebo as control.[5,19] All included studies had low risk of bias regarding incomplete outcome data, had low risk of bias regarding selective reporting and none claimed conflict of interest, early termination of the trial (Figs. 2 and 3). Risk of bias graph. Risk of bias summary. All included 22 studies were randomized controlled trials. Two were dynamically allocated by computer,[5,15] and 4 used random number table method.[7,16–18] Those 6 were rated as low risk. One study used stratified block randomization allocation and bias risk was not clear.[19] The other 15 studies didn’t describe the method of random sequence generation. Of 6 double-blind studies, only 1 mentioned double-blind method but did not describe the specific implementation process.[20] Two studies were multicenter and 6 used placebo as control.[5,19] All included studies had low risk of bias regarding incomplete outcome data, had low risk of bias regarding selective reporting and none claimed conflict of interest, early termination of the trial (Figs. 2 and 3). Risk of bias graph. Risk of bias summary. 3.3. Outcome of heterogeneity text For the primary outcome, all intervention groups’ data was combined, regardless of the types and stages of cancer and so on. Consequently, our analyses were subject to high potential risk of between-study heterogeneity. A meta-regression was conducted, we found that placebo may be a source of heterogeneity. In 6 studies that used placebo, ginseng oral administration was in 6 and ginseng injections in 0, so analysis was conducted separately. We did not find other sources of heterogeneity (Table 2). Sensitivity analyses were undertaken by excluding studies with high risk of bias. Compare the 2 results, we still ca not explain the source of heterogeneity (Fig. 
S1, http://links.lww.com/MD/H737;2, http://links.lww.com/MD/H738;3, http://links.lww.com/MD/H739). Therefore, whether heterogeneity taken into account or not, results were all presented and discussed in the manuscript, respectively. The same as the secondary outcome (Table 2). Outcome of meta-regression. CRF = cancer-related fatigue. For the primary outcome, all intervention groups’ data was combined, regardless of the types and stages of cancer and so on. Consequently, our analyses were subject to high potential risk of between-study heterogeneity. A meta-regression was conducted, we found that placebo may be a source of heterogeneity. In 6 studies that used placebo, ginseng oral administration was in 6 and ginseng injections in 0, so analysis was conducted separately. We did not find other sources of heterogeneity (Table 2). Sensitivity analyses were undertaken by excluding studies with high risk of bias. Compare the 2 results, we still ca not explain the source of heterogeneity (Fig. S1, http://links.lww.com/MD/H737;2, http://links.lww.com/MD/H738;3, http://links.lww.com/MD/H739). Therefore, whether heterogeneity taken into account or not, results were all presented and discussed in the manuscript, respectively. The same as the secondary outcome (Table 2). Outcome of meta-regression. CRF = cancer-related fatigue. 3.4. Outcome of publication bias text Egger linear regression was conducted to text symmetry of funnel plots. No publication bias was found in each outcome (Fig. S4, http://links.lww.com/MD/H740;6, http://links.lww.com/MD/H742;7, http://links.lww.com/MD/H744). Egger linear regression was conducted to text symmetry of funnel plots. No publication bias was found in each outcome (Fig. S4, http://links.lww.com/MD/H740;6, http://links.lww.com/MD/H742;7, http://links.lww.com/MD/H744). 3.5. Outcome of GRADE rating Table 3 for details. Outcome of GRADE rating. CI = confidence interval, CRF = cancer-related fatigue, SMD = standardized mean difference. There was substantial heterogeneity among studies, meta-regression and sensitivity analyses of each outcome were conducted but we didn’t find the sources of heterogeneity. Small sample size and wide confidence interval. Table 3 for details. Outcome of GRADE rating. CI = confidence interval, CRF = cancer-related fatigue, SMD = standardized mean difference. There was substantial heterogeneity among studies, meta-regression and sensitivity analyses of each outcome were conducted but we didn’t find the sources of heterogeneity. Small sample size and wide confidence interval. 3.6. Efficacy of ginseng oral administration and ginseng injections on CRF Fatigue was reported in 22 studies and 10 were excluded because of heterogeneity. Of included 12 studies, ginseng was used in 5 studies and ginseng injections in 7. The number of patients in the ginseng group is 656 and 646 in control. Efficacy was assessed between 2 weeks and 12 weeks. The pooled SMD was 0.40 (95% confidence interval (95% CI) [0.29–0.51], P < .00001) (Fig. 4). If heterogeneity was not taken into account, the pooled SMD was 0.89 (95% CI [0.60–1.18], P < .00001) (Fig. S4, http://links.lww.com/MD/H740). Those indicate that ginseng can alleviate CRF. Efficacy of ginseng oral administration and ginseng injections on CRF. CRF: cancer-related fatigue. Fatigue was reported in 22 studies and 10 were excluded because of heterogeneity. Of included 12 studies, ginseng was used in 5 studies and ginseng injections in 7. The number of patients in the ginseng group is 656 and 646 in control. 
3.7. Network meta-analysis between ginseng oral administration and ginseng injections: A network meta-analysis was performed to compare the relative efficacy of ginseng oral administration and ginseng injections (Fig. 5 and Table 4). From highest to lowest efficacy, the order was ginseng injections, ginseng oral administration and placebo. Network meta-analysis between ginseng oral administration and ginseng injections. The SMD for each comparison is in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors the row-defining treatment. Numbers in parentheses indicate the 95% confidence interval. SMD = standardized mean difference. Network of 2 types of administration routes on CRF. CRF: cancer-related fatigue.
3.8. Efficacy of ginseng oral administration on CRF: Seven papers reported the efficacy of ginseng oral administration on CRF, and 1 was excluded because of heterogeneity.[16] The remaining 6 studies included 862 patients: 434 received ginseng and 428 received placebo.[5,15,19–22] Efficacy was assessed between 29 days and 12 weeks. The pooled SMD was 0.29 (95% CI [0.15–0.42], P < .0001) (Fig. 6). If heterogeneity was not taken into account, the pooled SMD was 0.46 (95% CI [0.10–0.82], P = .01) (Fig. S5, http://links.lww.com/MD/H741). These results indicate that ginseng oral administration can alleviate CRF. Forest plot of ginseng oral administration on CRF. CRF: cancer-related fatigue. The efficacies of different doses were explored. Sixteen patients were treated with 1000 mg/d ginseng and 16 with placebo in 1 study.[20] The SMD was −0.29 (95% CI [−0.98 to 0.41], P = .42). Three hundred and forty-seven patients were treated with 2000 mg/d ginseng and 341 with placebo in 3 studies.[5,15,19] The pooled SMD was 0.35 (95% CI [0.19–0.50], P < .00001). Fifteen patients were treated with 3000 mg/d ginseng and 15 with placebo in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). These results suggest that 2000 or 3000 mg/d ginseng should be effective for CRF; the lack of a significant difference between the 3000 mg/d group and the control group might be due to the small sample size (Fig. 7). A network meta-analysis was performed to compare the relative efficacy of the different doses. From highest to lowest efficacy, the order was 2000 mg/d, 3000 mg/d, placebo and 1000 mg/d, but the differences were not significant (Fig. 8 and Table 5). Network meta-analysis of different doses. The SMD for each comparison is in the cell in common between the column-defining and row-defining treatment. SMD < 0 favors the row-defining treatment. Numbers in parentheses indicate the 95% confidence interval. SMD = standardized mean difference. Forest plot of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue. Network of different doses of ginseng oral administration on CRF. CRF: cancer-related fatigue. The efficacies of different treatment durations were also explored. One hundred and six patients (52 ginseng, 54 control) were treated for 2 weeks in 1 study.[22] The SMD was 0.10 (95% CI [−0.28 to 0.48], P = .62). Four hundred and twelve patients (203 ginseng, 209 control) were treated for 4 weeks in 2 studies.[5,22] The pooled SMD was 0.20 (95% CI [0.00–0.39], P = .05). Seven hundred and twenty patients (363 ginseng, 357 control) were treated for 8 weeks in 4 studies.[5,15,19,20] The pooled SMD was 0.32 (95% CI [0.17–0.46], P < .0001). Thirty patients (15 ginseng, 15 control) were treated for 12 weeks in 1 study.[21] The SMD was 0.32 (95% CI [−0.40 to 1.04], P = .38). Three hundred and thirty patients (161 ginseng, 169 control) were treated for 16 weeks in 1 study. The SMD was 0.24 (95% CI [0.02–0.45], P = .03). It appears that 4 to 8 weeks of ginseng oral administration is enough to alleviate CRF (Fig. 9). Forest plot of different durations of ginseng oral administration on CRF. CRF: cancer-related fatigue.
3.9. Efficacy of ginseng injections on CRF: Four types of injections (Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection) whose main components are ginseng extracts have been approved by the Chinese National Medical Products Administration for clinical use. Fifteen studies reported these 4 injections for CRF, and 5 papers were excluded because of heterogeneity.[17,18,23–25] Three hundred and forty-four patients were in the ginseng injection group and 338 in the control group. Efficacy was assessed between 2 weeks and 16 weeks. The pooled SMD was 0.74 (95% CI [0.59–0.90], P < .00001) (Fig. 10). If heterogeneity was not taken into account, the pooled SMD was 1.08 (95% CI [0.73–1.44], P < .00001) (Fig. S6, http://links.lww.com/MD/H742). These results indicate that ginseng injections can alleviate CRF. Efficacy of ginseng injections on CRF. CRF: cancer-related fatigue. The efficacies of the 4 types of injections were also explored separately. The SMDs of Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection were 1.12 (95% CI [0.67–1.58], P < .00001), 1.54 (95% CI [−0.79 to 3.87], P = .20), 1.02 (95% CI [0.71–1.33], P < .00001) and 1.00 (95% CI [0.42–1.57], P = .0007), respectively (Fig. S6, http://links.lww.com/MD/H742). These results indicate that Kangai injection, Shenmai injection and Shenqi Fuzheng injection can alleviate CRF.
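The network comparisons of administration routes and doses above rest on indirect comparison through a shared comparator arm. A minimal Bucher-style sketch is shown below; the input SMDs and standard errors are placeholders, not the pooled values reported in Tables 4 and 5.

```python
import math

def bucher_indirect(d_ap, se_ap, d_bp, se_bp):
    """Indirect comparison of A vs B through a common comparator P
    (Bucher method): d_AB = d_AP - d_BP."""
    d_ab = d_ap - d_bp
    se_ab = math.sqrt(se_ap**2 + se_bp**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, ci

# Placeholder inputs: SMD of injections vs control (A vs P) and of oral
# ginseng vs placebo (B vs P); illustrative numbers only.
d_ab, ci = bucher_indirect(d_ap=0.70, se_ap=0.08, d_bp=0.30, se_bp=0.07)
print(f"Indirect SMD (injections vs oral) = {d_ab:.2f}, "
      f"95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

Full network meta-analysis software additionally checks consistency between direct and indirect evidence, which a two-step sketch like this cannot do.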
3.10. Efficacy of ginseng oral administration and ginseng injections on emotional fatigue: Ten studies reported the efficacy of ginseng oral administration or ginseng injections on emotional fatigue, and 6 were excluded because of heterogeneity.[7,16,18,23,26,27] There were 286 patients in the experimental group and 277 in the control group. The pooled SMD was 0.12 (95% CI [−0.04 to 0.29], P = .15) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was 0.67 (95% CI [0.13–1.21], P = .02) (Fig. S7, http://links.lww.com/MD/H744). Whether ginseng can alleviate emotional fatigue therefore remains uncertain. Forest plot of ginseng oral administration and ginseng injections on emotional fatigue. Ginseng oral administration was employed in 3 studies, and 1 study was excluded because of heterogeneity.[16] The pooled SMD was 0.10 (95% CI [−0.10 to 0.30], P = .32) (Fig. 11). If heterogeneity was not taken into account, the pooled SMD was −0.34 (95% CI [−1.12 to 0.43], P = .38) (Fig. S7, http://links.lww.com/MD/H744). These results indicate that ginseng oral administration may not alleviate emotional fatigue. Seven studies explored the efficacy of ginseng injections on emotional fatigue, and 3 were excluded because of heterogeneity. The pooled SMD of the 4 included studies was 0.79 (95% CI [0.55–1.03], P < .00001) (Fig. 12). If heterogeneity was not taken into account, the pooled SMD was 1.12 (95% CI [0.50–1.74], P = .0004) (Fig. S8, http://links.lww.com/MD/H749).[7,18,23,26–29] These results suggest that ginseng injections may be effective in alleviating emotional fatigue. Forest plot of ginseng injections on emotional fatigue. The efficacies of the 3 types of injections were also explored separately. The SMDs of Shenfu injection, Shenmai injection and Shenqi Fuzheng injection were 1.16 (95% CI [−0.98 to 3.30], P = .29), 0.84 (95% CI [0.53–1.14], P < .00001) and 1.34 (95% CI [0.13–2.55], P = .03), respectively (Fig. S8, http://links.lww.com/MD/H749). These results indicate that Shenmai injection and Shenqi Fuzheng injection can alleviate emotional fatigue.
3.11. Efficacy of ginseng oral administration and ginseng injections on cognitive fatigue: Cognitive fatigue is a psychological state characterized by subjective feelings of tiredness and an impaired ability to think, memorize and concentrate.[30,31] Cognitive fatigue is strongly associated with CRF. Eight studies reported the efficacy of ginseng oral administration and ginseng injections on cognitive fatigue. Four studies, 1 on ginseng oral administration and 3 on ginseng injections, were excluded because of heterogeneity.[5,18,23,29] Finally, 4 studies on ginseng injections were included. The pooled SMD was 0.72 (95% CI [0.48–0.96], P < .00001) (Fig. 13). If heterogeneity was not taken into account, the pooled SMD of all 8 studies was 0.80 (95% CI [0.31–1.29], P = .001) (Fig. S9, http://links.lww.com/MD/H750) and the pooled SMD of the 7 studies on ginseng injections was 0.93 (95% CI [0.44–1.42], P = .0002) (Fig. S10, http://links.lww.com/MD/H751).[7,16,17,23,26,27,29,32] Ginseng oral administration was employed in 1 study, and the SMD was −0.04 (95% CI [−0.28 to 0.20], P = .76) (Fig. S9, http://links.lww.com/MD/H750). Taken together, these results suggest that ginseng injections may be effective in alleviating cognitive fatigue, whereas ginseng oral administration may not relieve cognitive fatigue. Forest plot of ginseng injections on cognitive fatigue. The efficacies of the 3 types of injections were also explored separately. The SMDs of Shenfu injection, Shenmai injection and Shenqi Fuzheng injection were 0.79 (95% CI [−0.50 to 2.09], P = .23), 0.83 (95% CI [0.53–1.14], P < .00001) and 1.13 (95% CI [0.00–2.25], P = .05), respectively (Fig. S10, http://links.lww.com/MD/H751). These results indicate that Shenmai injection and Shenqi Fuzheng injection can alleviate cognitive fatigue.
3.12. The effect of cancer types on the efficacy of ginseng and ginseng injections on CRF: CRF may be associated with cancer type.[16] The effect of cancer type on the efficacy of ginseng on CRF was therefore explored. Nine studies evaluated the efficacy of ginseng in the treatment of lung cancer.[7,16,17,23,24,26,27,29,32] Of the 725 participants, 519 had undergone chemotherapy prior to participation, 245 were non-small cell lung cancer patients, 86 were lung adenocarcinoma patients and 425 were advanced lung cancer patients. If heterogeneity was not taken into account, the results supported a benefit of Red ginseng, Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection on fatigue relief. These results support CRF improvement with ginseng injections in lung cancer, including the pathological type of non-small cell lung cancer and the TNM stage of advanced lung cancer. At the same time, ginseng may also benefit patients with colorectal cancer and nasopharyngeal carcinoma, and may have little effect on patients with head and neck cancer (Table 6). Efficacy of ginseng and ginseng injections on CRF alleviation by various factors.
3.13. Incidences of treatment-related adverse events between different drugs and cancer types: Adverse events were collected and summarized in Table 7. Ginseng appears to have no discernible adverse reactions. Adverse events.
4. Discussion: Panax ginseng root is widely used in Asia owing to its antioxidative and immunomodulatory properties as well as numerous other pharmacologic activities. It has a good safety profile and a low incidence of adverse effects.[33] In China, ginseng has been used to treat chronic fatigue for about 2000 years. Several meta-analyses and systematic reviews focusing on the efficacy of ginseng on fatigue were published recently, and they reported that ginseng benefits fatigue, including chronic fatigue syndrome, idiopathic chronic fatigue and physical fatigue, in humans and animals.[34,35] At the same time, the number of studies focusing on the efficacy of ginseng for CRF is growing. Several types of injections whose main components are ginseng extracts have been approved by the Chinese National Medical Products Administration to treat patients, and hundreds of papers focusing on ginseng for CRF have been written in Chinese. Here, we conducted a meta-analysis of papers written in Chinese and in English to evaluate the efficacy of ginseng and ginseng injections in the treatment of CRF. The primary outcome was the effect of ginseng and ginseng injections in alleviating CRF. SMD was employed because the clinical indices used to assess CRF differed among studies. The data indicate that ginseng treatment, including oral administration and injection, benefits CRF whether heterogeneity is taken into account (SMD = 0.40; 95% CI [0.29–0.51], P < .00001) or not (SMD = 0.89; 95% CI [0.60–1.18], P < .00001). Several other drugs have been investigated for CRF. Methylphenidate may be the most widely accepted and most frequently recommended, but the results of several trials indicated that methylphenidate, compared with placebo, may not improve CRF.[36,37] Moraska et al conducted a randomized, double-blind, placebo-controlled study but did not find evidence that methylphenidate improved the primary end point of CRF.[36] Centeno et al reported that methylphenidate was not more effective than placebo for CRF.[37] Among the 22 included studies on ginseng/ginseng injections, 21 showed, or showed a trend toward, a reduction in CRF in the ginseng/ginseng injections group compared with the control group. These findings concordantly indicate that ginseng is effective in treating CRF.
Thus, ginseng appears to be a promising drug for alleviating CRF. Emotional fatigue and/or cognitive fatigue is an important part of CRF. Ten papers explored the efficacy of ginseng on emotional fatigue. If heterogeneity was not taken into account, the data supported the conclusion that ginseng could alleviate emotional fatigue (SMD = 0.67; 95% CI [0.13–1.21], P = .02); if heterogeneity was taken into account, the data showed only a trend that ginseng might alleviate emotional fatigue (SMD = 0.12; 95% CI [−0.04 to 0.29], P = .15). The efficacy of ginseng on cognitive fatigue was also explored, and the data indicated that ginseng injections can alleviate cognitive fatigue whether heterogeneity is taken into account (SMD = 0.72; 95% CI [0.48–0.96], P < .00001) or not (SMD = 0.93; 95% CI [0.44–1.42], P = .0002). Therefore, ginseng can alleviate CRF and might be beneficial for emotional fatigue and/or cognitive fatigue. In addition, several papers suggested that combination treatment with ginseng/ginseng injections and methylphenidate or dexamethasone showed potential clinical benefit in CRF without discernible associated toxicities.[23,38] Two administration routes were included in this study: oral and intravenous injection. The results of the network meta-analysis showed that intravenous injection was better, perhaps because the onset of intravenous injection is faster than that of oral administration and because ginseng injections contain other ingredients besides ginseng. However, there was no evidence that ginseng oral administration, compared with placebo, could improve emotional fatigue whether heterogeneity was taken into account (SMD = 0.10; 95% CI [−0.10 to 0.30], P = .32) or not (SMD = −0.34; 95% CI [−1.12 to 0.43], P = .38). The same was true for cognitive fatigue (SMD = −0.04; 95% CI [−0.28 to 0.20], P = .76). Four types of injections, Kangai injection, Shenqi Fuzheng injection, Shenfu injection and Shenmai injection, whose main components are ginseng extracts, have been used clinically in China. The pooled SMD of the 4 injections indicates that ginseng injections are effective in alleviating CRF and cognitive fatigue whether heterogeneity is taken into account or not. The data supported that each type of injection may alleviate CRF. In addition, Shenmai injection and Shenqi Fuzheng injection may be effective in alleviating both emotional fatigue and cognitive fatigue. Cancer types were taken into account in some of the included papers on ginseng injections, so the effect of cancer type on efficacy was also meta-analyzed. Ginseng injections appear to alleviate CRF caused by non-small cell lung cancer, colorectal cancer, malignant melanoma and nasopharyngeal carcinoma (Table 6). Therefore, ginseng injections can alleviate CRF and may be beneficial for emotional fatigue and cognitive fatigue, particularly when caused by some types of cancer. Owing to the limited number of studies, it is unknown whether ginseng can alleviate CRF caused by other types of cancer; future rigorous clinical trials will provide deeper insight. Besides the inherent limitations of the individual trials, there are limitations to our analyses. First, different types of studies were included, with great differences in doses, durations, routes of administration and types of drugs; these create a substantial risk of bias in the meta-analysis. Second, the sample size in each trial was small, mostly around 100 patients; consequently, confidence intervals were very wide and there was great variability.
Third, there were great differences in cancer types, stages and basic cancer treatment strategies. Some trials even enrolled several types of cancer at different stages; some patients were at stage III to IV, while others were cancer survivors who remained free of disease; some were treated with chemotherapy, radiotherapy or other modalities, and some were not. Because CRF is associated with cancer type, stage and basic treatment strategy (for example, chemotherapy or radiotherapy),[2,16] these confounders influence CRF and affect the apparent efficacy of ginseng treatment to some degree. Fourth, the clinical indices used to assess response differed among studies: the Brief Fatigue Inventory, Multidimensional Fatigue Symptom Inventory–Short Form, Fatigue Symptom Inventory, Piper Fatigue Scale and others were employed in different trials. Although these instruments correlate with each other and are well accepted for assessing CRF, some variability among studies is unavoidable. Fifth, the methodological quality and the quality of evidence of the included literature were rated low (Table 3); although our findings support the effectiveness of ginseng and ginseng injections in the treatment of CRF, the GRADE ratings were low. Sixth, some databases, such as Web of Science, were not searched because we did not have access. 5. Conclusion: Ginseng, whether as oral administration or as injections, may improve CRF. Intravenous injection might be better than oral administration. Ginseng injections may alleviate cognitive fatigue. No evidence was found to support that ginseng could alleviate emotional fatigue. More high-quality randomized, double-blind, placebo-controlled studies with homogeneous samples, large sample sizes and fixed protocols are warranted to establish the effectiveness of ginseng for CRF caused by specific types of cancer.
Background: Up to 90% of patients undergoing active treatment suffer from cancer-related fatigue (CRF). CRF can persist for about 10 years after diagnosis and/or treatment. Accumulating reports support that ginseng and ginseng injections are both potential drugs for the treatment of CRF, but few studies have analyzed them together. Methods: Two reviewers independently extracted data from 3 databases (PubMed, Cochrane Library and China National Knowledge Infrastructure) from their inception to May 24, 2021. The primary outcome was the effect of ginseng in alleviating CRF. The secondary outcome was the effect of ginseng in alleviating emotional or cognitive fatigue. Standardized mean difference (SMD) was employed. Results: Twelve studies were included to evaluate the efficacy of ginseng oral administration and ginseng injections on CRF. The pooled SMD was 0.40 (95% confidence interval [95% CI] [0.29-0.51], P < .00001). Six studies were included to evaluate the efficacy of ginseng oral administration on CRF, and the SMD was 0.29 (95% CI [0.15-0.42], P < .0001). From highest to lowest efficacy, the order was 2000 mg/d, 3000 mg/d, 1000 mg/d and placebo. Ten studies were included to evaluate the efficacy of ginseng injections on CRF, and the SMD was 0.74 (95% CI [0.59-0.90], P < .00001). Emotional fatigue was reported in 4 studies, ginseng oral administration in 2 and ginseng injections in 2. The pooled SMD was 0.12 (95% CI [-0.04 to 0.29], P = .15). Cognitive fatigue was reported in 4 studies focusing on ginseng injections, and the SMD was 0.72 (95% CI [0.48-0.96], P < .00001). Conclusions: Ginseng can improve CRF. Intravenous injection might be better than oral administration. Ginseng injections may alleviate cognitive fatigue. No evidence was found to support that ginseng could alleviate emotional fatigue.
1. Introduction: Cancer-related fatigue (CRF) is one of the most common symptoms in patients with cancer. Up to 90% of patients undergoing active treatment suffer from CRF.[1] It can persist for months or even years after treatment ends, interfering with usual functioning. Moreover, fatigue is rarely an isolated symptom; it most commonly occurs with other symptoms and signs, such as pain, emotional distress, anemia, and sleep disturbances, in symptom clusters.[2] Unlike typical fatigue, CRF cannot be relieved by additional rest, sleep, reduced physical activity, etc. On the contrary, exercise/physical activity is likely to be effective in ameliorating CRF. Psychological/psycho-educational interventions, mind/body wellness training, and nutritional and dietary supplements may be effective, too. Unfortunately, these interventions yield, at most, moderate benefits in meta-analyses.[2] Evidence to date indicates that synthetic drugs are less effective than non-pharmacologic interventions.[2] Ginseng is the root of plants in the genus Panax, such as Korean ginseng (Panax ginseng C.A. Meyer), Japanese ginseng (Panax japonicus C.A. Meyer), and American ginseng (Panax quinquefolius L.). Red ginseng is a product of Asian ginseng (Panax ginseng C.A. Meyer) processed by steaming and drying. Ginseng has been used to treat chronic fatigue in China for about 2000 years. Nowadays, ginseng is not only used in China but is also sold and used in more than 35 countries, such as Japan, South Korea, North Korea and the United States.[3] Preclinical data support that ginseng may be helpful for fatigue: animal studies have reported that ginseng can improve the endurance and swimming duration of mice.[4] Ginseng is also mentioned as a dietary supplement for CRF treatment in the NCCN Guidelines, based on 1 randomized, double-blind clinical trial which indicated that 2000 mg of American ginseng improved CRF symptoms.[5] There are several types of ginseng, and they are deemed to have the same active pharmaceutical ingredients, for example Rg1 and Rb1; however, other types of ginseng were not discussed in the Guidelines.[6] In addition to oral administration, several types of injections whose main components are ginseng extracts have been approved by the Chinese National Medical Products Administration for clinical use. The common ones are Kangai injection, Shenfu injection, Shenmai injection and Shenqi Fuzheng injection. Kangai injection (China Food and Drug Administration approval number Z20026868) consists of extracts from Astragalus membranaceus (Fisch.) Bunge, Panax ginseng C.A. Meyer and Sophora flavescens Aiton. Shenfu injection (China Food and Drug Administration approval number Z20043117) consists of extracts from Red ginseng and Aconitum wilsonii Stapf ex Veitch. Shenmai injection (China Food and Drug Administration approval number Z2009364) consists of extracts from Red ginseng and Ophiopogon japonicus (Linn. f.) Ker-Gawl. Shenqi Fuzheng injection (China Food and Drug Administration approval number Z19990065) consists of extracts from Codonopsis pilosula (Franch.) Nannf. and Astragalus membranaceus (Fisch.) Bunge. Some studies have shown that ginseng injections are of great help in improving quality of life and reducing the side effects of radiotherapy and chemotherapy in cancer patients.[7,8] Therefore, ginseng and ginseng injections are both potential drugs for the treatment of CRF, but few studies have analyzed them together.
Here, we employed the standardized mean difference (SMD) to conduct a meta-analysis evaluating the efficacy of ginseng and ginseng injections in the treatment of CRF and the quality of the evidence. Emotional and cognitive fatigue were evaluated, too. In addition, subgroup analyses were conducted to compare efficacy across cancer types, cancer stages, basic strategies for the treatment of cancer and so on. Author contributions: Data curation: Tianwen Hou, Jing Huang. Formal analysis: Jing Huang. Methodology: Tianwen Hou, Jing Huang. Project administration: Shijiang Sun. Resources: Xueqi Wang. Software: Huijing Li, Xueqi Wang, Xi Liang. Supervision: Jianming He. Validation: Xi Liang, Haiyan Bai. Visualization: Tianhe Zhao, Jingnan Hu, Jianli Ge. Writing – original draft: Huijing Li. Writing – review & editing: Jianming He.
14,163
385
[ 28, 140, 98, 189, 30, 148, 95, 45, 91, 258, 147, 184, 32, 68, 149, 123, 736, 288, 426, 348, 185, 24, 85 ]
28
[ "ginseng", "fatigue", "studies", "crf", "injections", "smd", "95", "ci", "95 ci", "injection" ]
[ "cancer fatigue scale", "treat chronic fatigue", "ginseng benefits fatigue", "fatigue cancer", "efficacy ginseng fatigue" ]
[CONTENT] cancer-related fatigue | ginseng | ginseng injections | meta-analysis [SUMMARY]
[CONTENT] cancer-related fatigue | ginseng | ginseng injections | meta-analysis [SUMMARY]
[CONTENT] cancer-related fatigue | ginseng | ginseng injections | meta-analysis [SUMMARY]
[CONTENT] cancer-related fatigue | ginseng | ginseng injections | meta-analysis [SUMMARY]
[CONTENT] cancer-related fatigue | ginseng | ginseng injections | meta-analysis [SUMMARY]
[CONTENT] cancer-related fatigue | ginseng | ginseng injections | meta-analysis [SUMMARY]
[CONTENT] Humans | Panax | Fatigue | Neoplasms | Injections | Administration, Oral [SUMMARY]
[CONTENT] Humans | Panax | Fatigue | Neoplasms | Injections | Administration, Oral [SUMMARY]
[CONTENT] Humans | Panax | Fatigue | Neoplasms | Injections | Administration, Oral [SUMMARY]
[CONTENT] Humans | Panax | Fatigue | Neoplasms | Injections | Administration, Oral [SUMMARY]
[CONTENT] Humans | Panax | Fatigue | Neoplasms | Injections | Administration, Oral [SUMMARY]
[CONTENT] Humans | Panax | Fatigue | Neoplasms | Injections | Administration, Oral [SUMMARY]
[CONTENT] cancer fatigue scale | treat chronic fatigue | ginseng benefits fatigue | fatigue cancer | efficacy ginseng fatigue [SUMMARY]
[CONTENT] cancer fatigue scale | treat chronic fatigue | ginseng benefits fatigue | fatigue cancer | efficacy ginseng fatigue [SUMMARY]
[CONTENT] cancer fatigue scale | treat chronic fatigue | ginseng benefits fatigue | fatigue cancer | efficacy ginseng fatigue [SUMMARY]
[CONTENT] cancer fatigue scale | treat chronic fatigue | ginseng benefits fatigue | fatigue cancer | efficacy ginseng fatigue [SUMMARY]
[CONTENT] cancer fatigue scale | treat chronic fatigue | ginseng benefits fatigue | fatigue cancer | efficacy ginseng fatigue [SUMMARY]
[CONTENT] cancer fatigue scale | treat chronic fatigue | ginseng benefits fatigue | fatigue cancer | efficacy ginseng fatigue [SUMMARY]
[CONTENT] ginseng | fatigue | studies | crf | injections | smd | 95 | ci | 95 ci | injection [SUMMARY]
[CONTENT] ginseng | fatigue | studies | crf | injections | smd | 95 | ci | 95 ci | injection [SUMMARY]
[CONTENT] ginseng | fatigue | studies | crf | injections | smd | 95 | ci | 95 ci | injection [SUMMARY]
[CONTENT] ginseng | fatigue | studies | crf | injections | smd | 95 | ci | 95 ci | injection [SUMMARY]
[CONTENT] ginseng | fatigue | studies | crf | injections | smd | 95 | ci | 95 ci | injection [SUMMARY]
[CONTENT] ginseng | fatigue | studies | crf | injections | smd | 95 | ci | 95 ci | injection [SUMMARY]
[CONTENT] ginseng | panax | injection | extracts | china food | china food drug | china food drug administration | approval number | approval | panax ginseng meyer [SUMMARY]
[CONTENT] studies | zhusheye | risk | analysis | type | group | treatment | ginseng | bias | following [SUMMARY]
[CONTENT] ginseng | 95 | 95 ci | ci | smd | fig | injection | crf | ginseng oral | ginseng oral administration [SUMMARY]
[CONTENT] ginseng | administration ginseng injections | administration ginseng | alleviate | oral administration ginseng | oral administration ginseng injections | oral | oral administration | found support ginseng | studies homogeneous samples [SUMMARY]
[CONTENT] ginseng | studies | injection | crf | fatigue | 95 | injections | ci | smd | 95 ci [SUMMARY]
[CONTENT] ginseng | studies | injection | crf | fatigue | 95 | injections | ci | smd | 95 ci [SUMMARY]
[CONTENT] Up to 90% ||| about 10 years ||| CRF [SUMMARY]
[CONTENT] Two | 3 | PubMed | Cochrane Library | China National Knowledge Infrastructure | May 24, 2021 ||| CRF ||| ||| [SUMMARY]
[CONTENT] Twelve | CRF ||| 0.40 | 95% | Interval ||| 95% | CI ||| 0.29-0.51 ||| Six | CRF | SMD | 0.29 | 95% | CI ||| 0.15 ||| 2000 | 3000 | 1000 ||| Ten | CRF | SMD | 0.74 | 95% | CI ||| 0.59 ||| 4 | 2 | 2 ||| 0.12 | 95% | CI ||| -0.04 to 0.29 ||| 4 | SMD | 0.72 | 95% | CI ||| 0.48 [SUMMARY]
[CONTENT] CRF ||| ||| ||| [SUMMARY]
[CONTENT] Up to 90% ||| about 10 years ||| CRF ||| Two | 3 | PubMed | Cochrane Library | China National Knowledge Infrastructure | May 24, 2021 ||| CRF ||| ||| ||| Twelve | CRF ||| 0.40 | 95% | Interval ||| 95% | CI ||| 0.29-0.51 ||| Six | CRF | SMD | 0.29 | 95% | CI ||| 0.15 ||| 2000 | 3000 | 1000 ||| Ten | CRF | SMD | 0.74 | 95% | CI ||| 0.59 ||| 4 | 2 | 2 ||| 0.12 | 95% | CI ||| -0.04 to 0.29 ||| 4 | SMD | 0.72 | 95% | CI ||| 0.48 ||| CRF ||| ||| ||| [SUMMARY]
[CONTENT] Up to 90% ||| about 10 years ||| CRF ||| Two | 3 | PubMed | Cochrane Library | China National Knowledge Infrastructure | May 24, 2021 ||| CRF ||| ||| ||| Twelve | CRF ||| 0.40 | 95% | Interval ||| 95% | CI ||| 0.29-0.51 ||| Six | CRF | SMD | 0.29 | 95% | CI ||| 0.15 ||| 2000 | 3000 | 1000 ||| Ten | CRF | SMD | 0.74 | 95% | CI ||| 0.59 ||| 4 | 2 | 2 ||| 0.12 | 95% | CI ||| -0.04 to 0.29 ||| 4 | SMD | 0.72 | 95% | CI ||| 0.48 ||| CRF ||| ||| ||| [SUMMARY]
Safety and outcome of thrombolysis in mild stroke: a meta-analysis.
25362481
Whether patients presenting with mild stroke should or should not be treated with intravenous rtPA is still controversial. This systematic review aims to assess the safety and outcome of thrombolysis in these patients.
BACKGROUND
We systematically searched PubMed and Cochrane Central Register of Controlled Trials for studies evaluating intravenous rtPA in patients with mild or rapidly improving symptoms except case reports. Excellent outcome (author reported, mainly mRS 0-1), symptomatic intracranial hemorrhage (sICH) and mortality were analyzed.
MATERIAL/METHODS
Fourteen studies were included (n=1906 patients). Of these, 4 studies were comparative (2 randomized and 2 non-randomized). The remaining were single-arm studies. On the basis of 4 comparative studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64-1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment. Eleven studies involving 1083 patients showed the pooled rate of excellent outcome was 76.1% (95% CI: 69.8-81.5%, I2=42.5). Seven studies involving 378 patients showed the mortality rate was 4.5% (95% CI: 2.6-7.5%, I2=1.4). Twelve studies involving 831 patients showed the pooled rate of sICH was 2.4% (95% CI: 1.5-3.8, I2=0).
RESULTS
Although efficacy is not clearly established, this study reveals that the adverse event rates related to thrombolysis are low in mild stroke. Intravenous rtPA should be considered in these patients until more RCT evidence is available.
CONCLUSIONS
[ "Humans", "Stroke", "Thrombolytic Therapy" ]
4228861
Characteristics of included studies
The mean age of participants ranged from 59 to 70 years. The proportion of male participants was 55.6–78.9% among these trials. Most of the studies enrolled patients treated within 3 hours. All studies except 1 used the NIHSS as the criterion for mild stroke. The usual cut-off to define mild stroke was an NIHSS of 4, 5, or 6. More details are given in Tables 1 and 2.
Statistical methods
For comparative studies, results for dichotomous outcomes were expressed as odds ratios (OR) with 95% confidence intervals (CI) and we also obtained the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies. We considered p-values less than 0.05 to be statistically significant. We evaluated heterogeneity among included studies using the I2 test. We considered a value greater than 50% to indicate substantial heterogeneity. Regardless of the size of heterogeneity, a random effects model was used for statistical analysis. We conducted the meta-analysis using Cochrane RevMan 5.1 software and Meta-analyst (version 3.13beta; Tufts Medical Center) [11].
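To make the pooling procedure concrete, the following is a minimal, self-contained Python sketch of an inverse-variance DerSimonian-Laird random-effects meta-analysis of odds ratios, roughly the kind of calculation performed for dichotomous outcomes under a random-effects model. It is illustrative only: the helper names and the 2x2 counts are hypothetical, not data from the included studies, and RevMan's default binary method may differ (e.g., Mantel-Haenszel weighting).

# Minimal sketch of a DerSimonian-Laird random-effects meta-analysis of odds ratios.
# Illustrative only: the example 2x2 counts below are hypothetical, not study data.
import math

def log_or_and_variance(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table
    (a/b = events/non-events in treated, c/d = events/non-events in control),
    with a 0.5 continuity correction if any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return log_or, var

def dersimonian_laird(effects, variances):
    """Pooled effect, 95% CI, and I^2 under a random-effects model."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical 2x2 tables (events = excellent outcome), for illustration only.
tables = [(40, 20, 45, 18), (120, 60, 110, 65), (30, 12, 28, 15), (200, 95, 190, 100)]
effects, variances = zip(*(log_or_and_variance(*t) for t in tables))
pooled, (lo, hi), i2 = dersimonian_laird(effects, variances)
print(f"OR={math.exp(pooled):.2f}, 95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f}, I2={i2:.1f}%")

The same pooling step applies to any effect measure expressed on an approximately normal scale, which is also how single-arm proportions can be handled (see the sketch after the outcome rates below).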
Results
Studies identified The selection of studies is depicted in Figure 1. The initial literature search identified 461 relevant articles. After reading titles and abstracts, we retained 32 studies for further assessment; of these, we excluded 20 studies [12–31]. Two additional studies were included by reference list screening. Ultimately, 14 studies, containing 1906 patients, were included in this systematic review [8–10,32–42]. Two studies were subgroup analyses from previous RCTs (NINDS 1995 and IST-3) [9,37]. The remaining studies were observational (single-arm) studies, of which 2 had a concurrent control group. Thus, 4 studies (2 randomized and 2 nonrandomized) contributed data to both the rtPA group and the non-rtPA group. The number of participants ranged from 19 to 535 (Table 1). Characteristics of included studies The mean age of participants ranged from 59 to 70 years. The proportion of male participants was 55.6–78.9% among these trials. Most of the studies enrolled patients treated within 3 hours. All studies except 1 used the NIHSS as the criterion for mild stroke. The usual cut-off to define mild stroke was an NIHSS of 4, 5, or 6. More details are given in Tables 1 and 2. Outcome rates Four comparative studies evaluated the effect of IV rtPA on excellent outcome. On the basis of these studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64–1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment (Figure 2). We also calculated the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies in patients with mild stroke receiving IV rtPA. The excellent outcome was available for 11 studies (1083 patients) and was reported to range from 57.6% to 100%. The pooled proportion of excellent outcome was 76.1% (95% CI: 69.8–81.5%, I2=42.5) (Figure 3). Seven studies involving 378 patients reported mortality rates ranging from 0% to 8%, with a pooled 90-day mortality rate of 4.5% (95% CI: 2.6–7.5%, I2=1.4) (Figure 4).
Regarding the definition of sICH, 4 studies defined it as clinical neurological deterioration temporally related to ICH [9,10,33,36], 2 defined it as a ≥4-point increase in NIHSS associated with ICH [8,38], and 1 defined it as clinical neurological deterioration or a ≥4-point increase in NIHSS associated with ICH [41]; the definition was unclear in the remaining studies [32,34,35,40]. The reported risk of sICH ranged from 0% to 5.1%. Twelve studies involving 831 patients showed that the pooled rate of sICH was 2.4% (95% CI: 1.5–3.8, I2=0) (Figure 5).
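A common way to obtain single-arm pooled rates like these is to transform each study's proportion to the logit scale, pool with a random-effects model, and back-transform. The article does not state which transformation Meta-analyst applied, so the following Python sketch is only an illustration under that assumption; the (events, n) pairs are hypothetical placeholders, and the output will not reproduce the reported 76.1%, 4.5%, or 2.4%. It repeats the DerSimonian-Laird pooling from the sketch above so that it runs on its own.

# Minimal sketch of pooling single-arm proportions (e.g., excellent outcome, mortality,
# or sICH rates) on the logit scale under a DerSimonian-Laird random-effects model.
# The (events, n) pairs below are hypothetical, not data from the included studies.
import math

def logit_and_variance(events, n):
    """Logit-transformed proportion and its variance (0.5 correction at the boundaries)."""
    if events == 0 or events == n:
        events, n = events + 0.5, n + 1.0
    p = events / n
    return math.log(p / (1 - p)), 1.0 / (n * p * (1 - p))

def pool_random_effects(effects, variances):
    """DerSimonian-Laird pooled effect and 95% CI on the working (logit) scale."""
    w = [1.0 / v for v in variances]
    mean_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mean_fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c) if c > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

def inverse_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

studies = [(15, 20), (48, 60), (70, 95), (33, 40)]  # hypothetical (events, sample size)
effects, variances = zip(*(logit_and_variance(e, n) for e, n in studies))
pooled, lo, hi = pool_random_effects(effects, variances)
print(f"pooled rate = {inverse_logit(pooled):.1%} "
      f"(95% CI {inverse_logit(lo):.1%} to {inverse_logit(hi):.1%})")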
Conclusions
Although efficacy is not clearly established, this study reveals that the adverse event rates related to thrombolysis are low in mild stroke. Intravenous rtPA should be considered in these patients until more RCT evidence is available.
[ "Background", "Search strategy and Eligibility Studies", "Selection of studies and data extraction", "Studies identified", "Outcome rates" ]
[ "Intravenous thrombolysis with recombinant tissue plasminogen activator (IV rtPA) applied within 3 hours or 4.5 hours is efficacious in acute ischemic stroke patients [1–3]. However, few ischemic stroke patients are treated with IV rtPA due to the narrow time window for treatment [4,5]. However, even patients who would generally be eligible are often not treated because of mild stroke or clinical improvement, perceived protocol exclusions, emergency department referral delay, and significant comorbidity [5]. It is very common to not treat patients with mild or rapidly improving symptoms because of an uncertain risk-benefit ratio. In studies evaluating eligibility for thrombolysis, up to 43% of patients with mild or improving stroke symptoms do not receive thrombolytic therapy [6].\nHowever, according to recent reports, 15–31% of patients with mild or rapidly improving symptoms are dependent or dead during hospital admission without thrombolysis [4–7]. In contrast, some researchers have reported that mild stroke patients also benefited from IV thrombolysis, and up to 94% achieved excellent 3-month outcome (modified Rankin Scale, mRS 0–1) [8–10]. At present, no one has truly tested the effectiveness of IV rtPA in mild stroke versus placebo. Studies evaluating intravenous rtPA in mild stroke patients are limited by small sample sizes and non-controlled comparison groups. Until more RCT evidence is available, a systematic review of all studies can provide useful information on the odds for benefits and risks of IV rtPA in patients with mild or rapidly improving symptoms and help decision-making for individual treatment. We therefore conducted this systematic review to assess the safety and outcome of thrombolysis in these patients.", "We systematically searched PubMed (from its earliest date to April 2013), Embase (1980 to May 2013), and Cochrane Central Register of Controlled Trials (The Cochrane library 2013, issue 3) for studies evaluating thrombolysis in patients with mild or rapidly improving symptoms. The terms ‘Minor stroke’, ‘Mild deficit’, ‘Mild symptom’, ‘Mild stroke’, ‘Stroke with rapidly improving symptoms’, ‘Thrombolysis’, ‘Intravenous tissue plasminogen activator’, and ‘rt-PA’ were combined using ‘And’ or ‘Or’ for searching relevant studies. The bibliographies of relevant articles were screened. Studies were included if the following criteria were fulfilled: (1) we considered both comparative (randomized or nonrandomized) and single-arm studies; (2) all patients had been treated for IV rtPA; (3) at least 10 patients were enrolled; (4) at least 1 of following outcomes was reported: functional outcome, mortality, or sICH. Articles were excluded if they were case reports. In case of multiple publications from the same study population, only the report with the most complete data was included.", "One reviewer independently screened the titles and abstracts of every record. The full articles were obtained when the information given in the title or abstracts conformed to the selection criteria outlined previously. Two reviewers independently performed data extraction and compared the results. The data extraction form included contents as follows: (1) general characteristics of studies and patients, (2) sample size, (3) the diagnostic criteria for mild stroke, (4) outcome measurements (mRS, Mortality, sICH). Articles that met all inclusion criteria but in which specific data extraction was not possible were marked as “NG” (not given). 
Discrepancies were resolved by consensus.", "The selection of studies is depicted in Figure 1. The initial literature search identified 461 relevant articles. After reading titles and abstracts, we retained 32 studies for further assessment; of these, we excluded 20 studies [12–31]. Two additional studies were included by reference list screening. Ultimately, 14 studies, containing 1906 patients, were included in this systematic review [8–10,32–42]. Two studies were subgroup analyses from previous RCTs (NINDS 1995 and IST-3) [9,37]. The remaining studies were observational studies (single-arm), of which 2 studies had a concurrent control group. Thus, 4 studies (2 randomized and 2 nonrandomized) contributed data to both the rtPA group and the non-rtPA group. The number of participants ranged from 19 to 535 (Table 1).", "Four comparative studies evaluated the effect of IV rtPA on excellent outcome. On the basis of these studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64–1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment (Figure 2).\nWe also calculated the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies in patients with mild stroke receiving IV rtPA. The excellent outcome was available for 11 studies (1083 patients). It was reported to range from 57.6% to 100%. The pooled proportion of excellent outcome was 76.1% (95% CI: 69.8–81.5%, I2=42.5) (Figure 3). Seven studies involving 378 patients showed the risk of mortality rate ranged from 0% to 8%, with a pooled 90-day mortality rate of 4.5% (95% CI: 2.6–7.5%, I2=1.4) (Figure 4). Regarding the definition of sICH, 4 studies defined it as clinical neurological deterioration temporally related to ICH [9,10,33,36] and 2 defined it as a ≥4-point increase in NIHSS associated with ICH [8,38]; 1 defined clinical neurological deterioration or a ≥4-point increase in NIHSS associated with ICH [41]; the definition was unclear in the remaining studies [32,34,35,40]. The risk of sICH was reported to range from 0% to 5.1%. Twelve studies involving 831 patients showed the pooled rate of sICH was 2.4% (95% CI: 1.5–3.8, I2=0) (Figure 5)." ]
[ null, null, "methods", null, null ]
[ "Background", "Material and Methods", "Search strategy and Eligibility Studies", "Selection of studies and data extraction", "Statistical methods", "Results", "Studies identified", "Characteristics of included studies", "Outcome rates", "Discussion", "Conclusions" ]
[ "Intravenous thrombolysis with recombinant tissue plasminogen activator (IV rtPA) applied within 3 hours or 4.5 hours is efficacious in acute ischemic stroke patients [1–3]. However, few ischemic stroke patients are treated with IV rtPA due to the narrow time window for treatment [4,5]. However, even patients who would generally be eligible are often not treated because of mild stroke or clinical improvement, perceived protocol exclusions, emergency department referral delay, and significant comorbidity [5]. It is very common to not treat patients with mild or rapidly improving symptoms because of an uncertain risk-benefit ratio. In studies evaluating eligibility for thrombolysis, up to 43% of patients with mild or improving stroke symptoms do not receive thrombolytic therapy [6].\nHowever, according to recent reports, 15–31% of patients with mild or rapidly improving symptoms are dependent or dead during hospital admission without thrombolysis [4–7]. In contrast, some researchers have reported that mild stroke patients also benefited from IV thrombolysis, and up to 94% achieved excellent 3-month outcome (modified Rankin Scale, mRS 0–1) [8–10]. At present, no one has truly tested the effectiveness of IV rtPA in mild stroke versus placebo. Studies evaluating intravenous rtPA in mild stroke patients are limited by small sample sizes and non-controlled comparison groups. Until more RCT evidence is available, a systematic review of all studies can provide useful information on the odds for benefits and risks of IV rtPA in patients with mild or rapidly improving symptoms and help decision-making for individual treatment. We therefore conducted this systematic review to assess the safety and outcome of thrombolysis in these patients.", " Search strategy and Eligibility Studies We systematically searched PubMed (from its earliest date to April 2013), Embase (1980 to May 2013), and Cochrane Central Register of Controlled Trials (The Cochrane library 2013, issue 3) for studies evaluating thrombolysis in patients with mild or rapidly improving symptoms. The terms ‘Minor stroke’, ‘Mild deficit’, ‘Mild symptom’, ‘Mild stroke’, ‘Stroke with rapidly improving symptoms’, ‘Thrombolysis’, ‘Intravenous tissue plasminogen activator’, and ‘rt-PA’ were combined using ‘And’ or ‘Or’ for searching relevant studies. The bibliographies of relevant articles were screened. Studies were included if the following criteria were fulfilled: (1) we considered both comparative (randomized or nonrandomized) and single-arm studies; (2) all patients had been treated for IV rtPA; (3) at least 10 patients were enrolled; (4) at least 1 of following outcomes was reported: functional outcome, mortality, or sICH. Articles were excluded if they were case reports. In case of multiple publications from the same study population, only the report with the most complete data was included.\nWe systematically searched PubMed (from its earliest date to April 2013), Embase (1980 to May 2013), and Cochrane Central Register of Controlled Trials (The Cochrane library 2013, issue 3) for studies evaluating thrombolysis in patients with mild or rapidly improving symptoms. The terms ‘Minor stroke’, ‘Mild deficit’, ‘Mild symptom’, ‘Mild stroke’, ‘Stroke with rapidly improving symptoms’, ‘Thrombolysis’, ‘Intravenous tissue plasminogen activator’, and ‘rt-PA’ were combined using ‘And’ or ‘Or’ for searching relevant studies. The bibliographies of relevant articles were screened. 
Studies were included if the following criteria were fulfilled: (1) we considered both comparative (randomized or nonrandomized) and single-arm studies; (2) all patients had been treated for IV rtPA; (3) at least 10 patients were enrolled; (4) at least 1 of following outcomes was reported: functional outcome, mortality, or sICH. Articles were excluded if they were case reports. In case of multiple publications from the same study population, only the report with the most complete data was included.\n Selection of studies and data extraction One reviewer independently screened the titles and abstracts of every record. The full articles were obtained when the information given in the title or abstracts conformed to the selection criteria outlined previously. Two reviewers independently performed data extraction and compared the results. The data extraction form included contents as follows: (1) general characteristics of studies and patients, (2) sample size, (3) the diagnostic criteria for mild stroke, (4) outcome measurements (mRS, Mortality, sICH). Articles that met all inclusion criteria but in which specific data extraction was not possible were marked as “NG” (not given). Discrepancies were resolved by consensus.\nOne reviewer independently screened the titles and abstracts of every record. The full articles were obtained when the information given in the title or abstracts conformed to the selection criteria outlined previously. Two reviewers independently performed data extraction and compared the results. The data extraction form included contents as follows: (1) general characteristics of studies and patients, (2) sample size, (3) the diagnostic criteria for mild stroke, (4) outcome measurements (mRS, Mortality, sICH). Articles that met all inclusion criteria but in which specific data extraction was not possible were marked as “NG” (not given). Discrepancies were resolved by consensus.\n Statistical methods For comparative studies, results for dichotomous outcomes were expressed as odds ratios (OR) with 95% confidence intervals (CI) and we also obtained the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies. We considered p-values less than 0.05 to be statistically significant.\nWe evaluated heterogeneity among included studies using the I2 test. We considered a value greater than 50% to indicate substantial heterogeneity. Regardless of the size of heterogeneity, a random effects model was used for statistical analysis. We conducted the meta-analysis using Cochrane RevMan 5.1 software and Meta-analyst (version 3.13beta; Tufts Medical Center) [11].\nFor comparative studies, results for dichotomous outcomes were expressed as odds ratios (OR) with 95% confidence intervals (CI) and we also obtained the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies. We considered p-values less than 0.05 to be statistically significant.\nWe evaluated heterogeneity among included studies using the I2 test. We considered a value greater than 50% to indicate substantial heterogeneity. Regardless of the size of heterogeneity, a random effects model was used for statistical analysis. 
We conducted the meta-analysis using Cochrane RevMan 5.1 software and Meta-analyst (version 3.13beta; Tufts Medical Center) [11].", "We systematically searched PubMed (from its earliest date to April 2013), Embase (1980 to May 2013), and Cochrane Central Register of Controlled Trials (The Cochrane library 2013, issue 3) for studies evaluating thrombolysis in patients with mild or rapidly improving symptoms. The terms ‘Minor stroke’, ‘Mild deficit’, ‘Mild symptom’, ‘Mild stroke’, ‘Stroke with rapidly improving symptoms’, ‘Thrombolysis’, ‘Intravenous tissue plasminogen activator’, and ‘rt-PA’ were combined using ‘And’ or ‘Or’ for searching relevant studies. The bibliographies of relevant articles were screened. Studies were included if the following criteria were fulfilled: (1) we considered both comparative (randomized or nonrandomized) and single-arm studies; (2) all patients had been treated for IV rtPA; (3) at least 10 patients were enrolled; (4) at least 1 of following outcomes was reported: functional outcome, mortality, or sICH. Articles were excluded if they were case reports. In case of multiple publications from the same study population, only the report with the most complete data was included.", "One reviewer independently screened the titles and abstracts of every record. The full articles were obtained when the information given in the title or abstracts conformed to the selection criteria outlined previously. Two reviewers independently performed data extraction and compared the results. The data extraction form included contents as follows: (1) general characteristics of studies and patients, (2) sample size, (3) the diagnostic criteria for mild stroke, (4) outcome measurements (mRS, Mortality, sICH). Articles that met all inclusion criteria but in which specific data extraction was not possible were marked as “NG” (not given). Discrepancies were resolved by consensus.", "For comparative studies, results for dichotomous outcomes were expressed as odds ratios (OR) with 95% confidence intervals (CI) and we also obtained the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies. We considered p-values less than 0.05 to be statistically significant.\nWe evaluated heterogeneity among included studies using the I2 test. We considered a value greater than 50% to indicate substantial heterogeneity. Regardless of the size of heterogeneity, a random effects model was used for statistical analysis. We conducted the meta-analysis using Cochrane RevMan 5.1 software and Meta-analyst (version 3.13beta; Tufts Medical Center) [11].", " Studies identified The selection of studies is depicted in Figure 1. The initial literature search identified 461 relevant articles. After reading titles and abstracts, we retained 32 studies for further assessment; of these, we excluded 20 studies [12–31]. Two additional studies were included by reference list screening. Ultimately, 14 studies, containing 1906 patients, were included in this systematic review [8–10,32–42]. Two studies were subgroup analyses from previous RCTs (NINDS 1995 and IST-3) [9,37]. The remaining studies were observational studies (single-arm), of which 2 studies had a concurrent control group. Thus, 4 studies (2 randomized and 2 nonrandomized) contributed data to both the rtPA group and the non-rtPA group. The number of participants ranged from 19 to 535 (Table 1).\nThe selection of studies is depicted in Figure 1. The initial literature search identified 461 relevant articles. 
After reading titles and abstracts, we retained 32 studies for further assessment; of these, we excluded 20 studies [12–31]. Two additional studies were included by reference list screening. Ultimately, 14 studies, containing 1906 patients, were included in this systematic review [8–10,32–42]. Two studies were subgroup analyses from previous RCTs (NINDS 1995 and IST-3) [9,37]. The remaining studies were observational studies (single-arm), of which 2 studies had a concurrent control group. Thus, 4 studies (2 randomized and 2 nonrandomized) contributed data to both the rtPA group and the non-rtPA group. The number of participants ranged from 19 to 535 (Table 1).\n Characteristics of included studies The mean age of participants ranged from 59 to 70 years. The proportion of male participants was 55.6–78.9% among these trials. Most of studies enrolled patients treated within 3 hours. All studies except 1 used NIHSS as criteria for mild stroke. Usual cut-off to define mild stroke was NIHSS 4, 5, or 6. More details are given in Tables 1 and 2.\nThe mean age of participants ranged from 59 to 70 years. The proportion of male participants was 55.6–78.9% among these trials. Most of studies enrolled patients treated within 3 hours. All studies except 1 used NIHSS as criteria for mild stroke. Usual cut-off to define mild stroke was NIHSS 4, 5, or 6. More details are given in Tables 1 and 2.\n Outcome rates Four comparative studies evaluated the effect of IV rtPA on excellent outcome. On the basis of these studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64–1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment (Figure 2).\nWe also calculated the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies in patients with mild stroke receiving IV rtPA. The excellent outcome was available for 11 studies (1083 patients). It was reported to range from 57.6% to 100%. The pooled proportion of excellent outcome was 76.1% (95% CI: 69.8–81.5%, I2=42.5) (Figure 3). Seven studies involving 378 patients showed the risk of mortality rate ranged from 0% to 8%, with a pooled 90-day mortality rate of 4.5% (95% CI: 2.6–7.5%, I2=1.4) (Figure 4). Regarding the definition of sICH, 4 studies defined it as clinical neurological deterioration temporally related to ICH [9,10,33,36] and 2 defined it as a ≥4-point increase in NIHSS associated with ICH [8,38]; 1 defined clinical neurological deterioration or a ≥4-point increase in NIHSS associated with ICH [41]; the definition was unclear in the remaining studies [32,34,35,40]. The risk of sICH was reported to range from 0% to 5.1%. Twelve studies involving 831 patients showed the pooled rate of sICH was 2.4% (95% CI: 1.5–3.8, I2=0) (Figure 5).\nFour comparative studies evaluated the effect of IV rtPA on excellent outcome. On the basis of these studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64–1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment (Figure 2).\nWe also calculated the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies in patients with mild stroke receiving IV rtPA. The excellent outcome was available for 11 studies (1083 patients). It was reported to range from 57.6% to 100%. 
The pooled proportion of excellent outcome was 76.1% (95% CI: 69.8–81.5%, I2=42.5) (Figure 3). Seven studies involving 378 patients showed the risk of mortality rate ranged from 0% to 8%, with a pooled 90-day mortality rate of 4.5% (95% CI: 2.6–7.5%, I2=1.4) (Figure 4). Regarding the definition of sICH, 4 studies defined it as clinical neurological deterioration temporally related to ICH [9,10,33,36] and 2 defined it as a ≥4-point increase in NIHSS associated with ICH [8,38]; 1 defined clinical neurological deterioration or a ≥4-point increase in NIHSS associated with ICH [41]; the definition was unclear in the remaining studies [32,34,35,40]. The risk of sICH was reported to range from 0% to 5.1%. Twelve studies involving 831 patients showed the pooled rate of sICH was 2.4% (95% CI: 1.5–3.8, I2=0) (Figure 5).", "The selection of studies is depicted in Figure 1. The initial literature search identified 461 relevant articles. After reading titles and abstracts, we retained 32 studies for further assessment; of these, we excluded 20 studies [12–31]. Two additional studies were included by reference list screening. Ultimately, 14 studies, containing 1906 patients, were included in this systematic review [8–10,32–42]. Two studies were subgroup analyses from previous RCTs (NINDS 1995 and IST-3) [9,37]. The remaining studies were observational studies (single-arm), of which 2 studies had a concurrent control group. Thus, 4 studies (2 randomized and 2 nonrandomized) contributed data to both the rtPA group and the non-rtPA group. The number of participants ranged from 19 to 535 (Table 1).", "The mean age of participants ranged from 59 to 70 years. The proportion of male participants was 55.6–78.9% among these trials. Most of studies enrolled patients treated within 3 hours. All studies except 1 used NIHSS as criteria for mild stroke. Usual cut-off to define mild stroke was NIHSS 4, 5, or 6. More details are given in Tables 1 and 2.", "Four comparative studies evaluated the effect of IV rtPA on excellent outcome. On the basis of these studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64–1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment (Figure 2).\nWe also calculated the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies in patients with mild stroke receiving IV rtPA. The excellent outcome was available for 11 studies (1083 patients). It was reported to range from 57.6% to 100%. The pooled proportion of excellent outcome was 76.1% (95% CI: 69.8–81.5%, I2=42.5) (Figure 3). Seven studies involving 378 patients showed the risk of mortality rate ranged from 0% to 8%, with a pooled 90-day mortality rate of 4.5% (95% CI: 2.6–7.5%, I2=1.4) (Figure 4). Regarding the definition of sICH, 4 studies defined it as clinical neurological deterioration temporally related to ICH [9,10,33,36] and 2 defined it as a ≥4-point increase in NIHSS associated with ICH [8,38]; 1 defined clinical neurological deterioration or a ≥4-point increase in NIHSS associated with ICH [41]; the definition was unclear in the remaining studies [32,34,35,40]. The risk of sICH was reported to range from 0% to 5.1%. 
Twelve studies involving 831 patients showed the pooled rate of sICH was 2.4% (95% CI: 1.5–3.8, I2=0) (Figure 5).", "Thrombolysis is often withheld in patients with mild symptoms, so little is known about its efficacy and safety in these patients. Our study suggests that there are no significant differences for excellent outcome after 3 months of IV rtPA-treated minor stroke compared with those without rtPA treatment. The pooled estimates associated with IV rtPA were 76.1% for excellent outcome, 4.5% for mortality rate, and 2.4% for sICH.\nIn previous studies, the proportion of poor outcome (mRS 2-6) in mild patients who do not receive IV rtPA varied from 15% to 31%. Our study showed the pooled proportion of excellent outcome (mRS 0-1) was 76.1% for mild patients receiving IV rtPA, which is similar to the results mentioned above. A post hoc subgroup analysis of the NINDS study with small group of patients suggested that the risk-to-benefit ratio for using t-PA in patients with minor stroke favored treatment in eligible patients [9]. However, the subgroup analysis of the IST-3 trial did not show a significant effect of rt-PA in patients with mild stroke [37]. This may be due to the treatment effect being too small to be detected, and would require a very large sample. A second reason why IST may not have shown a benefit of rt-PA in mild strokes is because the treatment window was 6 hours and this was a criterion for inclusion into the trial.\nThe main reason of the exclusion from thrombolysis in patients with mild symptoms is the fear that rtPA will present a potential risk for cerebral hemorrhage. Our results demonstrated that the rate of sICH in IV rt-PA treated patients with mild stroke (2.4%) was similar to the rate of hemorrhage in the control group (1.8%) from a recently updated meta-analysis of rtPA for acute ischemic stroke (12 trials, 7012 patients) and lower than in treated patients (7.7%) [43]. It is also lower than the result of SITS-MOST containing 6483 treated patients, which assessed the safety profile of Alteplase in clinical practice [44].\nThe main limitation of this study is that most of the included studies that described the outcome either used historical controls or no control group and the patient count was low. A further limitation in this combined analysis is lack of adjustment on baseline differences. In addition, there is no consensus definition of minor stroke. The NINDS t-PA study and the ECASS III [1,2] both excluded patients with mild stroke, but they failed to clearly define a threshold for mild stroke. So far, although there are no identical variates for predicting the poor outcome of patients with minor stroke, future studies are needed to focus on how to really identify minor stroke patients with poor outcome by clinical features combined with imaging features. Previous studies found that mild stroke patients with large-vessel occlusion were at high risk for early neurological deterioration or poor outcome [45]. Imaging with advanced MRI is a possibility to guide treatment decision-making in mild stroke [27,34,36]. However, the decision-making process regarding these techniques seems to be rather sophisticated. These issues should be addressed in further randomized controlled clinical trials.", "Although efficacy is not clearly established, this study reveals that the adverse event rates related to thrombolysis are low in mild stroke. Intravenous rtPA should be considered in these patients until more RCT evidence is available." ]
[ null, "materials|methods", null, "methods", "methods", "results", null, "intro", null, "discussion", "conclusions" ]
[ "Intracranial Thrombosis", "Meta-Analysis", "Stroke" ]
Background: Intravenous thrombolysis with recombinant tissue plasminogen activator (IV rtPA) applied within 3 hours or 4.5 hours is efficacious in acute ischemic stroke patients [1–3]. However, few ischemic stroke patients are treated with IV rtPA due to the narrow time window for treatment [4,5]. However, even patients who would generally be eligible are often not treated because of mild stroke or clinical improvement, perceived protocol exclusions, emergency department referral delay, and significant comorbidity [5]. It is very common to not treat patients with mild or rapidly improving symptoms because of an uncertain risk-benefit ratio. In studies evaluating eligibility for thrombolysis, up to 43% of patients with mild or improving stroke symptoms do not receive thrombolytic therapy [6]. However, according to recent reports, 15–31% of patients with mild or rapidly improving symptoms are dependent or dead during hospital admission without thrombolysis [4–7]. In contrast, some researchers have reported that mild stroke patients also benefited from IV thrombolysis, and up to 94% achieved excellent 3-month outcome (modified Rankin Scale, mRS 0–1) [8–10]. At present, no one has truly tested the effectiveness of IV rtPA in mild stroke versus placebo. Studies evaluating intravenous rtPA in mild stroke patients are limited by small sample sizes and non-controlled comparison groups. Until more RCT evidence is available, a systematic review of all studies can provide useful information on the odds for benefits and risks of IV rtPA in patients with mild or rapidly improving symptoms and help decision-making for individual treatment. We therefore conducted this systematic review to assess the safety and outcome of thrombolysis in these patients. Material and Methods: Search strategy and Eligibility Studies We systematically searched PubMed (from its earliest date to April 2013), Embase (1980 to May 2013), and Cochrane Central Register of Controlled Trials (The Cochrane library 2013, issue 3) for studies evaluating thrombolysis in patients with mild or rapidly improving symptoms. The terms ‘Minor stroke’, ‘Mild deficit’, ‘Mild symptom’, ‘Mild stroke’, ‘Stroke with rapidly improving symptoms’, ‘Thrombolysis’, ‘Intravenous tissue plasminogen activator’, and ‘rt-PA’ were combined using ‘And’ or ‘Or’ for searching relevant studies. The bibliographies of relevant articles were screened. Studies were included if the following criteria were fulfilled: (1) we considered both comparative (randomized or nonrandomized) and single-arm studies; (2) all patients had been treated for IV rtPA; (3) at least 10 patients were enrolled; (4) at least 1 of following outcomes was reported: functional outcome, mortality, or sICH. Articles were excluded if they were case reports. In case of multiple publications from the same study population, only the report with the most complete data was included. We systematically searched PubMed (from its earliest date to April 2013), Embase (1980 to May 2013), and Cochrane Central Register of Controlled Trials (The Cochrane library 2013, issue 3) for studies evaluating thrombolysis in patients with mild or rapidly improving symptoms. The terms ‘Minor stroke’, ‘Mild deficit’, ‘Mild symptom’, ‘Mild stroke’, ‘Stroke with rapidly improving symptoms’, ‘Thrombolysis’, ‘Intravenous tissue plasminogen activator’, and ‘rt-PA’ were combined using ‘And’ or ‘Or’ for searching relevant studies. The bibliographies of relevant articles were screened. 
Studies were included if the following criteria were fulfilled: (1) we considered both comparative (randomized or nonrandomized) and single-arm studies; (2) all patients had been treated for IV rtPA; (3) at least 10 patients were enrolled; (4) at least 1 of following outcomes was reported: functional outcome, mortality, or sICH. Articles were excluded if they were case reports. In case of multiple publications from the same study population, only the report with the most complete data was included. Selection of studies and data extraction One reviewer independently screened the titles and abstracts of every record. The full articles were obtained when the information given in the title or abstracts conformed to the selection criteria outlined previously. Two reviewers independently performed data extraction and compared the results. The data extraction form included contents as follows: (1) general characteristics of studies and patients, (2) sample size, (3) the diagnostic criteria for mild stroke, (4) outcome measurements (mRS, Mortality, sICH). Articles that met all inclusion criteria but in which specific data extraction was not possible were marked as “NG” (not given). Discrepancies were resolved by consensus. One reviewer independently screened the titles and abstracts of every record. The full articles were obtained when the information given in the title or abstracts conformed to the selection criteria outlined previously. Two reviewers independently performed data extraction and compared the results. The data extraction form included contents as follows: (1) general characteristics of studies and patients, (2) sample size, (3) the diagnostic criteria for mild stroke, (4) outcome measurements (mRS, Mortality, sICH). Articles that met all inclusion criteria but in which specific data extraction was not possible were marked as “NG” (not given). Discrepancies were resolved by consensus. Statistical methods For comparative studies, results for dichotomous outcomes were expressed as odds ratios (OR) with 95% confidence intervals (CI) and we also obtained the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies. We considered p-values less than 0.05 to be statistically significant. We evaluated heterogeneity among included studies using the I2 test. We considered a value greater than 50% to indicate substantial heterogeneity. Regardless of the size of heterogeneity, a random effects model was used for statistical analysis. We conducted the meta-analysis using Cochrane RevMan 5.1 software and Meta-analyst (version 3.13beta; Tufts Medical Center) [11]. For comparative studies, results for dichotomous outcomes were expressed as odds ratios (OR) with 95% confidence intervals (CI) and we also obtained the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies. We considered p-values less than 0.05 to be statistically significant. We evaluated heterogeneity among included studies using the I2 test. We considered a value greater than 50% to indicate substantial heterogeneity. Regardless of the size of heterogeneity, a random effects model was used for statistical analysis. We conducted the meta-analysis using Cochrane RevMan 5.1 software and Meta-analyst (version 3.13beta; Tufts Medical Center) [11]. 
Search strategy and Eligibility Studies: We systematically searched PubMed (from its earliest date to April 2013), Embase (1980 to May 2013), and Cochrane Central Register of Controlled Trials (The Cochrane library 2013, issue 3) for studies evaluating thrombolysis in patients with mild or rapidly improving symptoms. The terms ‘Minor stroke’, ‘Mild deficit’, ‘Mild symptom’, ‘Mild stroke’, ‘Stroke with rapidly improving symptoms’, ‘Thrombolysis’, ‘Intravenous tissue plasminogen activator’, and ‘rt-PA’ were combined using ‘And’ or ‘Or’ for searching relevant studies. The bibliographies of relevant articles were screened. Studies were included if the following criteria were fulfilled: (1) we considered both comparative (randomized or nonrandomized) and single-arm studies; (2) all patients had been treated for IV rtPA; (3) at least 10 patients were enrolled; (4) at least 1 of following outcomes was reported: functional outcome, mortality, or sICH. Articles were excluded if they were case reports. In case of multiple publications from the same study population, only the report with the most complete data was included. Selection of studies and data extraction: One reviewer independently screened the titles and abstracts of every record. The full articles were obtained when the information given in the title or abstracts conformed to the selection criteria outlined previously. Two reviewers independently performed data extraction and compared the results. The data extraction form included contents as follows: (1) general characteristics of studies and patients, (2) sample size, (3) the diagnostic criteria for mild stroke, (4) outcome measurements (mRS, Mortality, sICH). Articles that met all inclusion criteria but in which specific data extraction was not possible were marked as “NG” (not given). Discrepancies were resolved by consensus. Statistical methods: For comparative studies, results for dichotomous outcomes were expressed as odds ratios (OR) with 95% confidence intervals (CI) and we also obtained the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies. We considered p-values less than 0.05 to be statistically significant. We evaluated heterogeneity among included studies using the I2 test. We considered a value greater than 50% to indicate substantial heterogeneity. Regardless of the size of heterogeneity, a random effects model was used for statistical analysis. We conducted the meta-analysis using Cochrane RevMan 5.1 software and Meta-analyst (version 3.13beta; Tufts Medical Center) [11]. Results: Studies identified The selection of studies is depicted in Figure 1. The initial literature search identified 461 relevant articles. After reading titles and abstracts, we retained 32 studies for further assessment; of these, we excluded 20 studies [12–31]. Two additional studies were included by reference list screening. Ultimately, 14 studies, containing 1906 patients, were included in this systematic review [8–10,32–42]. Two studies were subgroup analyses from previous RCTs (NINDS 1995 and IST-3) [9,37]. The remaining studies were observational studies (single-arm), of which 2 studies had a concurrent control group. Thus, 4 studies (2 randomized and 2 nonrandomized) contributed data to both the rtPA group and the non-rtPA group. The number of participants ranged from 19 to 535 (Table 1). The selection of studies is depicted in Figure 1. The initial literature search identified 461 relevant articles. 
After reading titles and abstracts, we retained 32 studies for further assessment; of these, we excluded 20 studies [12–31]. Two additional studies were included by reference list screening. Ultimately, 14 studies, containing 1906 patients, were included in this systematic review [8–10,32–42]. Two studies were subgroup analyses from previous RCTs (NINDS 1995 and IST-3) [9,37]. The remaining studies were observational studies (single-arm), of which 2 studies had a concurrent control group. Thus, 4 studies (2 randomized and 2 nonrandomized) contributed data to both the rtPA group and the non-rtPA group. The number of participants ranged from 19 to 535 (Table 1). Characteristics of included studies The mean age of participants ranged from 59 to 70 years. The proportion of male participants was 55.6–78.9% among these trials. Most of studies enrolled patients treated within 3 hours. All studies except 1 used NIHSS as criteria for mild stroke. Usual cut-off to define mild stroke was NIHSS 4, 5, or 6. More details are given in Tables 1 and 2. The mean age of participants ranged from 59 to 70 years. The proportion of male participants was 55.6–78.9% among these trials. Most of studies enrolled patients treated within 3 hours. All studies except 1 used NIHSS as criteria for mild stroke. Usual cut-off to define mild stroke was NIHSS 4, 5, or 6. More details are given in Tables 1 and 2. Outcome rates Four comparative studies evaluated the effect of IV rtPA on excellent outcome. On the basis of these studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64–1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment (Figure 2). We also calculated the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies in patients with mild stroke receiving IV rtPA. The excellent outcome was available for 11 studies (1083 patients). It was reported to range from 57.6% to 100%. The pooled proportion of excellent outcome was 76.1% (95% CI: 69.8–81.5%, I2=42.5) (Figure 3). Seven studies involving 378 patients showed the risk of mortality rate ranged from 0% to 8%, with a pooled 90-day mortality rate of 4.5% (95% CI: 2.6–7.5%, I2=1.4) (Figure 4). Regarding the definition of sICH, 4 studies defined it as clinical neurological deterioration temporally related to ICH [9,10,33,36] and 2 defined it as a ≥4-point increase in NIHSS associated with ICH [8,38]; 1 defined clinical neurological deterioration or a ≥4-point increase in NIHSS associated with ICH [41]; the definition was unclear in the remaining studies [32,34,35,40]. The risk of sICH was reported to range from 0% to 5.1%. Twelve studies involving 831 patients showed the pooled rate of sICH was 2.4% (95% CI: 1.5–3.8, I2=0) (Figure 5). Four comparative studies evaluated the effect of IV rtPA on excellent outcome. On the basis of these studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64–1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment (Figure 2). We also calculated the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies in patients with mild stroke receiving IV rtPA. The excellent outcome was available for 11 studies (1083 patients). It was reported to range from 57.6% to 100%. 
The pooled proportion of excellent outcome was 76.1% (95% CI: 69.8–81.5%, I2=42.5) (Figure 3). Seven studies involving 378 patients showed the risk of mortality rate ranged from 0% to 8%, with a pooled 90-day mortality rate of 4.5% (95% CI: 2.6–7.5%, I2=1.4) (Figure 4). Regarding the definition of sICH, 4 studies defined it as clinical neurological deterioration temporally related to ICH [9,10,33,36] and 2 defined it as a ≥4-point increase in NIHSS associated with ICH [8,38]; 1 defined clinical neurological deterioration or a ≥4-point increase in NIHSS associated with ICH [41]; the definition was unclear in the remaining studies [32,34,35,40]. The risk of sICH was reported to range from 0% to 5.1%. Twelve studies involving 831 patients showed the pooled rate of sICH was 2.4% (95% CI: 1.5–3.8, I2=0) (Figure 5). Studies identified: The selection of studies is depicted in Figure 1. The initial literature search identified 461 relevant articles. After reading titles and abstracts, we retained 32 studies for further assessment; of these, we excluded 20 studies [12–31]. Two additional studies were included by reference list screening. Ultimately, 14 studies, containing 1906 patients, were included in this systematic review [8–10,32–42]. Two studies were subgroup analyses from previous RCTs (NINDS 1995 and IST-3) [9,37]. The remaining studies were observational studies (single-arm), of which 2 studies had a concurrent control group. Thus, 4 studies (2 randomized and 2 nonrandomized) contributed data to both the rtPA group and the non-rtPA group. The number of participants ranged from 19 to 535 (Table 1). Characteristics of included studies: The mean age of participants ranged from 59 to 70 years. The proportion of male participants was 55.6–78.9% among these trials. Most of studies enrolled patients treated within 3 hours. All studies except 1 used NIHSS as criteria for mild stroke. Usual cut-off to define mild stroke was NIHSS 4, 5, or 6. More details are given in Tables 1 and 2. Outcome rates: Four comparative studies evaluated the effect of IV rtPA on excellent outcome. On the basis of these studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64–1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment (Figure 2). We also calculated the pooled proportions for excellent outcome, mortality, and sICH, including both comparative and single-arm studies in patients with mild stroke receiving IV rtPA. The excellent outcome was available for 11 studies (1083 patients). It was reported to range from 57.6% to 100%. The pooled proportion of excellent outcome was 76.1% (95% CI: 69.8–81.5%, I2=42.5) (Figure 3). Seven studies involving 378 patients showed the risk of mortality rate ranged from 0% to 8%, with a pooled 90-day mortality rate of 4.5% (95% CI: 2.6–7.5%, I2=1.4) (Figure 4). Regarding the definition of sICH, 4 studies defined it as clinical neurological deterioration temporally related to ICH [9,10,33,36] and 2 defined it as a ≥4-point increase in NIHSS associated with ICH [8,38]; 1 defined clinical neurological deterioration or a ≥4-point increase in NIHSS associated with ICH [41]; the definition was unclear in the remaining studies [32,34,35,40]. The risk of sICH was reported to range from 0% to 5.1%. Twelve studies involving 831 patients showed the pooled rate of sICH was 2.4% (95% CI: 1.5–3.8, I2=0) (Figure 5). 
Discussion: Thrombolysis is often withheld in patients with mild symptoms, so little is known about its efficacy and safety in these patients. Our study suggests that there is no significant difference in excellent outcome at 3 months between minor stroke patients treated with IV rtPA and those without rtPA treatment. The pooled estimates associated with IV rtPA were 76.1% for excellent outcome, 4.5% for mortality, and 2.4% for sICH. In previous studies, the proportion of poor outcome (mRS 2-6) in mild stroke patients who did not receive IV rtPA varied from 15% to 31%. Our study showed that the pooled proportion of excellent outcome (mRS 0-1) was 76.1% for mild stroke patients receiving IV rtPA, which is consistent with the results mentioned above. A post hoc subgroup analysis of the NINDS study, based on a small group of patients, suggested that the risk-to-benefit ratio for using t-PA in patients with minor stroke favored treatment in eligible patients [9]. However, the subgroup analysis of the IST-3 trial did not show a significant effect of rt-PA in patients with mild stroke [37]. This may be because the treatment effect was too small to be detected and would require a very large sample. A second reason why IST-3 may not have shown a benefit of rt-PA in mild strokes is that its treatment window, a criterion for inclusion into the trial, was 6 hours. The main reason for excluding patients with mild symptoms from thrombolysis is the fear that rtPA will present a potential risk of cerebral hemorrhage. Our results demonstrated that the rate of sICH in IV rt-PA-treated patients with mild stroke (2.4%) was similar to the rate of hemorrhage in the control group (1.8%) of a recently updated meta-analysis of rtPA for acute ischemic stroke (12 trials, 7012 patients) and lower than that in treated patients (7.7%) [43]. It is also lower than the result of SITS-MOST, which contained 6483 treated patients and assessed the safety profile of Alteplase in clinical practice [44]. The main limitation of this study is that most of the included studies describing the outcome used either historical controls or no control group, and the patient counts were low. A further limitation of this combined analysis is the lack of adjustment for baseline differences. In addition, there is no consensus definition of minor stroke. The NINDS t-PA study and ECASS III [1,2] both excluded patients with mild stroke, but they failed to clearly define a threshold for mild stroke. Although there are as yet no well-established variables for predicting poor outcome in patients with minor stroke, future studies should focus on how to identify minor stroke patients with poor outcome using clinical features combined with imaging features. Previous studies found that mild stroke patients with large-vessel occlusion were at high risk for early neurological deterioration or poor outcome [45]. Advanced MRI is one possibility for guiding treatment decision-making in mild stroke [27,34,36]. However, the decision-making process around these techniques remains rather sophisticated. These issues should be addressed in further randomized controlled clinical trials. Conclusions: Although efficacy is not clearly established, this study reveals that the adverse event rates related to thrombolysis are low in mild stroke. Intravenous rtPA should be considered in these patients until more RCT evidence is available.
Background: Whether patients presenting with mild stroke should or should not be treated with intravenous rtPA is still controversial. This systematic review aims to assess the safety and outcome of thrombolysis in these patients. Methods: We systematically searched PubMed and Cochrane Central Register of Controlled Trials for studies evaluating intravenous rtPA in patients with mild or rapidly improving symptoms except case reports. Excellent outcome (author reported, mainly mRS 0-1), symptomatic intracranial hemorrhage (sICH) and mortality were analyzed. Results: Fourteen studies were included (n=1906 patients). Of these, 4 studies were comparative (2 randomized and 2 non-randomized). The remaining were single-arm studies. On the basis of 4 comparative studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64-1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment. Eleven studies involving 1083 patients showed the pooled rate of excellent outcome was 76.1% (95% CI: 69.8-81.5%, I2=42.5). Seven studies involving 378 patients showed the mortality rate was 4.5% (95% CI: 2.6-7.5%, I2=1.4). Twelve studies involving 831 patients showed the pooled rate of sICH was 2.4% (95% CI: 1.5-3.8, I2=0). Conclusions: Although efficacy is not clearly established, this study reveals the adverse event rates related to thrombolysis are low in mild stroke. Intravenous rtPA should be considered in these patients until more RCT evidence is available.
Characteristics of included studies: The mean age of participants ranged from 59 to 70 years. The proportion of male participants ranged from 55.6% to 78.9% across these trials. Most studies enrolled patients treated within 3 hours. All studies except one used the NIHSS as the criterion for mild stroke. The usual cut-off to define mild stroke was an NIHSS score of 4, 5, or 6. More details are given in Tables 1 and 2. Conclusions: Although efficacy is not clearly established, this study reveals that the adverse event rates related to thrombolysis are low in mild stroke. Intravenous rtPA should be considered in these patients until more RCT evidence is available.
Background: Whether patients presenting with mild stroke should be treated with intravenous rtPA remains controversial. This systematic review aims to assess the safety and outcome of thrombolysis in these patients. Methods: We systematically searched PubMed and the Cochrane Central Register of Controlled Trials for studies evaluating intravenous rtPA in patients with mild or rapidly improving symptoms, excluding case reports. Excellent outcome (author reported, mainly mRS 0-1), symptomatic intracranial hemorrhage (sICH) and mortality were analyzed. Results: Fourteen studies were included (n=1906 patients). Of these, 4 studies were comparative (2 randomized and 2 non-randomized); the remainder were single-arm studies. On the basis of 4 comparative studies with a total of 1006 patients, the meta-analysis did not identify a significant difference in the odds of excellent outcome (OR=0.86; 95% CI: 0.64-1.15; I2=0) between IV rtPA-treated minor stroke and those without rtPA treatment. Eleven studies involving 1083 patients showed that the pooled rate of excellent outcome was 76.1% (95% CI: 69.8-81.5%, I2=42.5). Seven studies involving 378 patients showed that the mortality rate was 4.5% (95% CI: 2.6-7.5%, I2=1.4). Twelve studies involving 831 patients showed that the pooled rate of sICH was 2.4% (95% CI: 1.5-3.8, I2=0). Conclusions: Although efficacy is not clearly established, this study reveals that the adverse event rates related to thrombolysis are low in mild stroke. Intravenous rtPA should be considered in these patients until more RCT evidence is available.
4,111
309
[ 313, 222, 125, 152, 312 ]
11
[ "studies", "patients", "mild", "stroke", "outcome", "rtpa", "mild stroke", "sich", "excellent", "iv" ]
[ "mild stroke intravenous", "safety outcome thrombolysis", "benefited iv thrombolysis", "thrombolysis patients mild", "evaluating eligibility thrombolysis" ]
[CONTENT] Intracranial Thrombosis | Meta-Analysis | Stroke [SUMMARY]
[CONTENT] Intracranial Thrombosis | Meta-Analysis | Stroke [SUMMARY]
[CONTENT] Intracranial Thrombosis | Meta-Analysis | Stroke [SUMMARY]
[CONTENT] Intracranial Thrombosis | Meta-Analysis | Stroke [SUMMARY]
[CONTENT] Intracranial Thrombosis | Meta-Analysis | Stroke [SUMMARY]
[CONTENT] Intracranial Thrombosis | Meta-Analysis | Stroke [SUMMARY]
[CONTENT] Humans | Stroke | Thrombolytic Therapy [SUMMARY]
[CONTENT] Humans | Stroke | Thrombolytic Therapy [SUMMARY]
[CONTENT] Humans | Stroke | Thrombolytic Therapy [SUMMARY]
[CONTENT] Humans | Stroke | Thrombolytic Therapy [SUMMARY]
[CONTENT] Humans | Stroke | Thrombolytic Therapy [SUMMARY]
[CONTENT] Humans | Stroke | Thrombolytic Therapy [SUMMARY]
[CONTENT] mild stroke intravenous | safety outcome thrombolysis | benefited iv thrombolysis | thrombolysis patients mild | evaluating eligibility thrombolysis [SUMMARY]
[CONTENT] mild stroke intravenous | safety outcome thrombolysis | benefited iv thrombolysis | thrombolysis patients mild | evaluating eligibility thrombolysis [SUMMARY]
[CONTENT] mild stroke intravenous | safety outcome thrombolysis | benefited iv thrombolysis | thrombolysis patients mild | evaluating eligibility thrombolysis [SUMMARY]
[CONTENT] mild stroke intravenous | safety outcome thrombolysis | benefited iv thrombolysis | thrombolysis patients mild | evaluating eligibility thrombolysis [SUMMARY]
[CONTENT] mild stroke intravenous | safety outcome thrombolysis | benefited iv thrombolysis | thrombolysis patients mild | evaluating eligibility thrombolysis [SUMMARY]
[CONTENT] mild stroke intravenous | safety outcome thrombolysis | benefited iv thrombolysis | thrombolysis patients mild | evaluating eligibility thrombolysis [SUMMARY]
[CONTENT] studies | patients | mild | stroke | outcome | rtpa | mild stroke | sich | excellent | iv [SUMMARY]
[CONTENT] studies | patients | mild | stroke | outcome | rtpa | mild stroke | sich | excellent | iv [SUMMARY]
[CONTENT] studies | patients | mild | stroke | outcome | rtpa | mild stroke | sich | excellent | iv [SUMMARY]
[CONTENT] studies | patients | mild | stroke | outcome | rtpa | mild stroke | sich | excellent | iv [SUMMARY]
[CONTENT] studies | patients | mild | stroke | outcome | rtpa | mild stroke | sich | excellent | iv [SUMMARY]
[CONTENT] studies | patients | mild | stroke | outcome | rtpa | mild stroke | sich | excellent | iv [SUMMARY]
[CONTENT] participants | nihss | years proportion male participants | 59 70 | 55 | 55 78 | 55 78 trials | 55 78 trials studies | 59 | 59 70 years [SUMMARY]
[CONTENT] heterogeneity | considered | meta | comparative | analysis | studies | 05 | studies considered | studies considered values | studies considered values 05 [SUMMARY]
[CONTENT] studies | figure | 95 ci | excellent outcome | nihss | rtpa | excellent | patients | ci | i2 [SUMMARY]
[CONTENT] rates related | considered patients | rates related thrombolysis | low mild stroke intravenous | thrombolysis low | thrombolysis low mild | thrombolysis low mild stroke | event rates related thrombolysis | rtpa considered | rtpa considered patients [SUMMARY]
[CONTENT] studies | patients | mild | stroke | rtpa | outcome | mild stroke | iv | criteria | excellent outcome [SUMMARY]
[CONTENT] studies | patients | mild | stroke | rtpa | outcome | mild stroke | iv | criteria | excellent outcome [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] PubMed ||| 0 [SUMMARY]
[CONTENT] Fourteen | n=1906 ||| 4 | 2 | 2 ||| ||| 4 | 1006 | OR=0.86 | 95% | CI | 0.64-1.15 ||| 1083 | 76.1% | 95% | CI | 69.8-81.5% | I2=42.5 ||| Seven | 378 | 4.5% | 95% | CI | 2.6-7.5% | I2=1.4 ||| Twelve | 831 | 2.4% | 95% | CI | 1.5-3.8 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| PubMed ||| 0 ||| ||| Fourteen | n=1906 ||| 4 | 2 | 2 ||| ||| 4 | 1006 | OR=0.86 | 95% | CI | 0.64-1.15 ||| 1083 | 76.1% | 95% | CI | 69.8-81.5% | I2=42.5 ||| Seven | 378 | 4.5% | 95% | CI | 2.6-7.5% | I2=1.4 ||| Twelve | 831 | 2.4% | 95% | CI | 1.5-3.8 ||| ||| [SUMMARY]
[CONTENT] ||| ||| PubMed ||| 0 ||| ||| Fourteen | n=1906 ||| 4 | 2 | 2 ||| ||| 4 | 1006 | OR=0.86 | 95% | CI | 0.64-1.15 ||| 1083 | 76.1% | 95% | CI | 69.8-81.5% | I2=42.5 ||| Seven | 378 | 4.5% | 95% | CI | 2.6-7.5% | I2=1.4 ||| Twelve | 831 | 2.4% | 95% | CI | 1.5-3.8 ||| ||| [SUMMARY]
Comparing complaint-based triage scales and early warning scores for emergency department triage.
35418407
Emergency triage systems are used globally to prioritise care based on patients' needs. These systems are commonly based on patient complaints, while the need for timely interventions on regular hospital wards is usually assessed with early warning scores (EWS). We aim to directly compare the ability of currently used triage scales and EWS scores to recognise patients in need of urgent care in the ED.
BACKGROUND
We performed a retrospective, single-centre study on all patients who presented to the ED of a Dutch Level 1 trauma centre, between 1 September 2018 and 24 June 2020 and for whom a Netherlands Triage System (NTS) score as well as a Modified Early Warning Score (MEWS) was recorded. The performance of these scores was assessed using surrogate markers for true urgency and presented using bar charts, cross tables and a paired area under the curve (AUC).
METHODS
We identified 12 317 unique patient visits where NTS and MEWS scores were documented during triage. A paired comparison of the AUC of these scores showed that the MEWS score had a significantly better AUC than the NTS for predicting the need for hospital admission (0.65 vs 0.60; p<0.001) or 30-day all-cause mortality (0.70 vs 0.60; p<0.001). Furthermore, when non-urgent MEWS scores co-occur with urgent NTS scores, the MEWS score seems to more accurately capture the urgency level that is warranted.
RESULTS
The results of this study suggest that EWSs could potentially be used to replace the current emergency triage systems.
CONCLUSIONS
[ "Early Warning Score", "Emergency Service, Hospital", "Hospital Mortality", "Hospitalization", "Humans", "Retrospective Studies", "Triage" ]
9411919
Introduction
Over the past decades, ED presentation rates have increased worldwide.1 At times of supply and demand mismatches, medical resources should be allocated based on the patients’ needs to ensure patient safety.1 2 Emergency triage systems are used globally to assess these specific needs. The performance of any emergency triage system is dependent on the environment in which it is used. Therefore, most countries use modified international triage systems to fit their particular situation. Commonly known triage scales include the internationally used Emergency Severity Index (ESI), the UK-based Manchester Triage Scale (MTS) and the Canadian Triage and Acuity Scales (CTAS).3 In Holland, the Netherlands Triage System (NTS) is used, which is a modified version of the MTS.4 A common theme among all triage systems is that these are decision trees based on patient complaints. Specific symptoms or high pain scores will result in higher urgency levels. Recently, two large systematic reviews have shown that the performance of triage scores varies considerably and that a significant part of the population may not be designated to the appropriate acuity group.5 6 Furthermore, there has been debate over the impractical complexity of the current triage systems and the need to rethink ED triage.7 The complaint-based approach during emergency triage is noticeably different from the simple early warning scores (EWS) used to detect clinical deterioration and the need for timely intervention in patients admitted to in-hospital wards. In the Netherlands, the Modified Early Warning Score (MEWS) is used in this regard.8 The EWS scores can accurately detect patients at high risk of deterioration and have been studied in numerous settings.9–14 Although EWS scores have been extensively studied for use in ED triage, they were never specifically developed to be triage tools.15–22 Furthermore, EWS scores and triage scales have not been compared head-to-head. In this study, we aim to compare the ability of currently used triage scales and EWS scores to recognise patients in need of urgent care in the ED. These two approaches will be represented by the NTS and MEWS scores, respectively, as they are commonly used in the Netherlands.
Methods and study design
Study setting A retrospective, single-centre study was performed using data from the electronic health records (EHRs) of the Amsterdam UMC, location Vrije Universiteit Medical Center (VUmc). Data recorded between 1 September 2018 and 24 June 2020 were extracted. Data from before September 2018 could not be used since the storage of the NTS form was outsourced until this point in time. The VUmc is a Level 1 trauma centre and teaching hospital with an estimated 29 000 ED presentations annually. The study adheres to the ‘Standards for Reporting Diagnostic Accuracy’ (STARD) guideline.23 A retrospective, single-centre study was performed using data from the electronic health records (EHRs) of the Amsterdam UMC, location Vrije Universiteit Medical Center (VUmc). Data recorded between 1 September 2018 and 24 June 2020 were extracted. Data from before September 2018 could not be used since the storage of the NTS form was outsourced until this point in time. The VUmc is a Level 1 trauma centre and teaching hospital with an estimated 29 000 ED presentations annually. The study adheres to the ‘Standards for Reporting Diagnostic Accuracy’ (STARD) guideline.23 Patient selection We included all patients who presented to the ED of Amsterdam UMC, location VUmc, and for whom an NTS score as well as a MEWS score was documented. Patients under the age of 18 were excluded, as were patients with an NTS score of 0. The NTS score of 0 indicates that the patient was being resuscitated on arrival, which makes triage redundant. We included all patients who presented to the ED of Amsterdam UMC, location VUmc, and for whom an NTS score as well as a MEWS score was documented. Patients under the age of 18 were excluded, as were patients with an NTS score of 0. The NTS score of 0 indicates that the patient was being resuscitated on arrival, which makes triage redundant. NTS and MEWS measurements All patients in the VUmc are triaged by a triage nurse who documents an NTS score. The NTS is a standardised five-level protocol with questions regarding patient complaints and pain levels. Lower numbered urgency levels (eg, NTS 1 or 2) indicate higher urgency4 (online supplemental table 1). The MEWS score is also frequently documented as part of the initial work-up in our hospital’s ED, but is not used to decide on the urgency level and is therefore not mandatory. The MEWS is derived from seven parameters (systolic BP, HR, RR, temperature, peripheral oxygen saturation, level of consciousness and urine production).24 Also, an additional point may be scored when the nurse is particularly worried (online supplemental table 2). The higher the MEWS scores, the more likely a patient is to deteriorate. Prior studies report that MEWS scores of 5 or higher are critical and indicate a high likelihood of deterioration, while Dutch hospitals are prompted to use a cut-off of 3.8 24 25 All patients in the VUmc are triaged by a triage nurse who documents an NTS score. The NTS is a standardised five-level protocol with questions regarding patient complaints and pain levels. Lower numbered urgency levels (eg, NTS 1 or 2) indicate higher urgency4 (online supplemental table 1). The MEWS score is also frequently documented as part of the initial work-up in our hospital’s ED, but is not used to decide on the urgency level and is therefore not mandatory. 
The MEWS is derived from seven parameters (systolic BP, HR, RR, temperature, peripheral oxygen saturation, level of consciousness and urine production).24 Also, an additional point may be scored when the nurse is particularly worried (online supplemental table 2). The higher the MEWS scores, the more likely a patient is to deteriorate. Prior studies report that MEWS scores of 5 or higher are critical and indicate a high likelihood of deterioration, while Dutch hospitals are prompted to use a cut-off of 3.8 24 25 Outcome measures Surrogate outcomes for high urgency were used, as is frequently done with the development and assessment of triage tools, since no gold standard for urgency exist.26 The outcomes we studied were admission rates and 30-day all-cause mortality, since they were clearly defined in the EHR data and are among the most studied surrogates in this regard.4 26 Surrogate outcomes for high urgency were used, as is frequently done with the development and assessment of triage tools, since no gold standard for urgency exist.26 The outcomes we studied were admission rates and 30-day all-cause mortality, since they were clearly defined in the EHR data and are among the most studied surrogates in this regard.4 26 Statistical analysis The characteristics of the study population are presented with descriptive statistics. Categorical variables are presented as counts and percentages. Normality of the data is assessed using histograms and Q-Q plots. Non-normally distributed continuous data are presented with medians and IQRs. NTS and MEWS scores are presented using bar charts and cross-tables. To assess for selection bias, we determined the distribution of NTS scores in the population studied as well as the entire adult population seen in the ED during the study period The predictive performance of both scores for the primary outcomes are visualised using receiver operating characteristics (ROC) curves and corresponding areas under the curve (AUCs). To compare the NTS and MEWS scores, we use the DeLong’s test for the comparison of AUCs of two correlated ROC curves. Data analysis was performed using R V.3.6.3 (R Foundation of Statistical Computing, Vienna, Austria).27 The figures were created using the ‘ggplot2’ package,28 and the paired AUC analysis was done using the ‘pROC’ package.29 The characteristics of the study population are presented with descriptive statistics. Categorical variables are presented as counts and percentages. Normality of the data is assessed using histograms and Q-Q plots. Non-normally distributed continuous data are presented with medians and IQRs. NTS and MEWS scores are presented using bar charts and cross-tables. To assess for selection bias, we determined the distribution of NTS scores in the population studied as well as the entire adult population seen in the ED during the study period The predictive performance of both scores for the primary outcomes are visualised using receiver operating characteristics (ROC) curves and corresponding areas under the curve (AUCs). To compare the NTS and MEWS scores, we use the DeLong’s test for the comparison of AUCs of two correlated ROC curves. Data analysis was performed using R V.3.6.3 (R Foundation of Statistical Computing, Vienna, Austria).27 The figures were created using the ‘ggplot2’ package,28 and the paired AUC analysis was done using the ‘pROC’ package.29 Patient and public involvement No patient involved. No patient involved.
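To make the paired comparison concrete, here is a minimal R sketch of the analysis the methods describe: building ROC curves for MEWS and NTS against an outcome and comparing their AUCs with DeLong's test via the pROC package the authors cite. The data frame and its column names (df, mews, nts, died30d) are synthetic placeholders for illustration, not the study's actual variables.

```r
# Minimal sketch of the paired AUC comparison described in the methods.
library(pROC)

# Synthetic example data; real column names and values will differ.
set.seed(1)
df <- data.frame(
  mews    = rpois(500, 2),                     # MEWS at triage
  nts     = sample(1:5, 500, replace = TRUE),  # NTS urgency level
  died30d = rbinom(500, 1, 0.05)               # 30-day all-cause mortality
)

roc_mews <- roc(df$died30d, df$mews)
roc_nts  <- roc(df$died30d, df$nts)   # NTS is inverse-coded (lower = more urgent);
                                      # pROC picks the direction automatically
                                      # unless 'direction' is set explicitly.

auc(roc_mews)
auc(roc_nts)

# DeLong's test for two correlated (paired) ROC curves
roc.test(roc_mews, roc_nts, method = "delong", paired = TRUE)
```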
Results
Baseline characteristics We identified 55 086 ED visits by 39 907 unique adult patients between 1 September 2018 and 24 June 2020. In 53 106 of these visits, the NTS triage score was recorded. Of these patients, 12 452 patients had a documented MEWS score. After exclusion of patients with an NTS score of 0, the final study population consisted of 12 317 unique visits. Table 1 shows the baseline characteristics of this study population. Baseline characteristics of the study population *Only complaints with a frequency over 500 are presented. We identified 55 086 ED visits by 39 907 unique adult patients between 1 September 2018 and 24 June 2020. In 53 106 of these visits, the NTS triage score was recorded. Of these patients, 12 452 patients had a documented MEWS score. After exclusion of patients with an NTS score of 0, the final study population consisted of 12 317 unique visits. Table 1 shows the baseline characteristics of this study population. Baseline characteristics of the study population *Only complaints with a frequency over 500 are presented. Frequency distributions of NTS and MEWS scores In figure 1A, we present the absolute counts of the various NTS scores. Notably, the NTS scores do not seem to follow any particular distribution; the majority of patients are assigned levels 2 and 3, and the NTS score of 4 is infrequently given. Similar results were seen in the complete population before excluding any patient (online supplemental figure 1). The MEWS scores follow a clear right-skewed distribution (figure 1B). A bar chart of the absolute counts of the various Netherlands Triage System (NTS) scores (A) and Modified Early Warning Scores (MEWS) (B) in the study population. In figure 1A, we present the absolute counts of the various NTS scores. Notably, the NTS scores do not seem to follow any particular distribution; the majority of patients are assigned levels 2 and 3, and the NTS score of 4 is infrequently given. Similar results were seen in the complete population before excluding any patient (online supplemental figure 1). The MEWS scores follow a clear right-skewed distribution (figure 1B). A bar chart of the absolute counts of the various Netherlands Triage System (NTS) scores (A) and Modified Early Warning Scores (MEWS) (B) in the study population. Comparison of NTS and MEWS scores Generally, the proportion of lower (more urgent) NTS scores increases with increasing (more urgent) MEWS scores (figure 2). In table 2, we present the counts of the different combinations of NTS and MEWS scores assigned. Notably, high NTS scores (non-urgent) never co-occur with high (urgent) MEWS scores, while low (more urgent) NTS scores do co-occur with low (non-urgent) MEWS scores. For example, the combination of NTS 1/MEWS 0 is reported in 120/12 317 (1%) instances and the combination of NTS 1/MEWS 2 in 388/12.317 (3.2%). Modified Early Warning Scores (MEWS) and Netherlands Triage System (NTS) scores compared. Frequencies of patients with all different Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) combination In tables 3 and 4, we demonstrate the outcomes of patients with each combination of NTS and MEWS scores. Where the NTS was notably more urgent than the MEWS, the admission and mortality rates are lower than the average in the population. In the above example of an NTS 1 of 1 and MEWS of 0, the admission rate (34%) and mortality rate (2%) are lower than the average admission rate of 40.6% and mortality rate of 3.9%. 
Fraction of patients admitted to the hospital stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score NA, not available. Fraction of patients who died within 30 days stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score NA, not available. Further, for any NTS score, the admission and mortality ranges can vary greatly for different MEWS scores in those same patients. For example, for patients with an NTS score of 2 the admission rate ranged from 29% to 83% depending on the MEWS score. Generally, the proportion of lower (more urgent) NTS scores increases with increasing (more urgent) MEWS scores (figure 2). In table 2, we present the counts of the different combinations of NTS and MEWS scores assigned. Notably, high NTS scores (non-urgent) never co-occur with high (urgent) MEWS scores, while low (more urgent) NTS scores do co-occur with low (non-urgent) MEWS scores. For example, the combination of NTS 1/MEWS 0 is reported in 120/12 317 (1%) instances and the combination of NTS 1/MEWS 2 in 388/12.317 (3.2%). Modified Early Warning Scores (MEWS) and Netherlands Triage System (NTS) scores compared. Frequencies of patients with all different Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) combination In tables 3 and 4, we demonstrate the outcomes of patients with each combination of NTS and MEWS scores. Where the NTS was notably more urgent than the MEWS, the admission and mortality rates are lower than the average in the population. In the above example of an NTS 1 of 1 and MEWS of 0, the admission rate (34%) and mortality rate (2%) are lower than the average admission rate of 40.6% and mortality rate of 3.9%. Fraction of patients admitted to the hospital stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score NA, not available. Fraction of patients who died within 30 days stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score NA, not available. Further, for any NTS score, the admission and mortality ranges can vary greatly for different MEWS scores in those same patients. For example, for patients with an NTS score of 2 the admission rate ranged from 29% to 83% depending on the MEWS score. Paired AUC analysis The ROC curves are presented in figure 3. Figure 3A shows the MEWS score has a higher AUC for predicting 30-day all-cause mortality (0.70; 95% CI=0.67 to 0.72), compared with the NTS score (0.60; 95% CI=0.57 to 0.62) (p<0.001). In figure 3B, we see that the MEWS score also has a higher AUC for hospital admission (0.65; 95% CI=0.65 to 0.66), compared with the NTS score (0.60; 95% CI=0.60 to 0.61). (p<0.001). The receiver operator characteristics (ROC) curves and corresponding area under the curve (AUC) for both the Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) regarding 30-day mortality (A) or hospital admission (B). The ROC curves are presented in figure 3. Figure 3A shows the MEWS score has a higher AUC for predicting 30-day all-cause mortality (0.70; 95% CI=0.67 to 0.72), compared with the NTS score (0.60; 95% CI=0.57 to 0.62) (p<0.001). In figure 3B, we see that the MEWS score also has a higher AUC for hospital admission (0.65; 95% CI=0.65 to 0.66), compared with the NTS score (0.60; 95% CI=0.60 to 0.61). (p<0.001). 
The receiver operator characteristics (ROC) curves and corresponding area under the curve (AUC) for both the Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) regarding 30-day mortality (A) or hospital admission (B).
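A figure like the one captioned above can be drawn directly from the fitted roc objects; the snippet below is a minimal sketch using pROC's ggroc together with ggplot2, assuming the roc_mews and roc_nts objects from the earlier sketch. It is illustrative only, not the authors' plotting code.

```r
# Minimal sketch of an ROC figure for the two scores.
library(pROC)
library(ggplot2)

ggroc(list(MEWS = roc_mews, NTS = roc_nts)) +
  labs(title = "30-day mortality", colour = "Score")
```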
Conclusion
We conclude that EWSs outperform currently used complaint-based ED triage scales in predicting hospitalisation and 30-day mortality. In cases where these approaches yield particularly different urgency scores, the EWS, represented by the MEWS in our study, seems to assess the need for urgent care better than the complaint-based NTS score. The results of this study suggest that EWSs could potentially replace the current emergency triage systems.
[ "What is already known on this topic", "What this study adds", "How this study might affect research, practice or policy", "Study setting", "Patient selection", "NTS and MEWS measurements", "Outcome measures", "Statistical analysis", "Patient and public involvement", "Baseline characteristics", "Frequency distributions of NTS and MEWS scores", "Comparison of NTS and MEWS scores", "Paired AUC analysis", "Strengths and limitations" ]
[ "Complaint-based triage scales are the norm in ED triage. However, their performance has shown to be highly variable and their practicality has been questioned due to their complexity.\nEarly warning scores have been shown to have good predictive value for admission and hospital outcome.", "In this retrospective, single-centre study comparing a complaint-based triage scale with an early warning score, we found that an early warning score was a better discriminator for admission and 30-day mortality than the Netherlands Triage Score.\nIn cases where these approaches yield strikingly different urgency scores, the early warning score was a better predictor.", "This study suggests that early warning scores could potentially replace current emergency triage systems.", "A retrospective, single-centre study was performed using data from the electronic health records (EHRs) of the Amsterdam UMC, location Vrije Universiteit Medical Center (VUmc). Data recorded between 1 September 2018 and 24 June 2020 were extracted. Data from before September 2018 could not be used since the storage of the NTS form was outsourced until this point in time. The VUmc is a Level 1 trauma centre and teaching hospital with an estimated 29 000 ED presentations annually. The study adheres to the ‘Standards for Reporting Diagnostic Accuracy’ (STARD) guideline.23\n", "We included all patients who presented to the ED of Amsterdam UMC, location VUmc, and for whom an NTS score as well as a MEWS score was documented. Patients under the age of 18 were excluded, as were patients with an NTS score of 0. The NTS score of 0 indicates that the patient was being resuscitated on arrival, which makes triage redundant.", "All patients in the VUmc are triaged by a triage nurse who documents an NTS score. The NTS is a standardised five-level protocol with questions regarding patient complaints and pain levels. Lower numbered urgency levels (eg, NTS 1 or 2) indicate higher urgency4 (online supplemental table 1).\n\n\n\nThe MEWS score is also frequently documented as part of the initial work-up in our hospital’s ED, but is not used to decide on the urgency level and is therefore not mandatory. The MEWS is derived from seven parameters (systolic BP, HR, RR, temperature, peripheral oxygen saturation, level of consciousness and urine production).24 Also, an additional point may be scored when the nurse is particularly worried (online supplemental table 2). The higher the MEWS scores, the more likely a patient is to deteriorate. Prior studies report that MEWS scores of 5 or higher are critical and indicate a high likelihood of deterioration, while Dutch hospitals are prompted to use a cut-off of 3.8 24 25\n", "Surrogate outcomes for high urgency were used, as is frequently done with the development and assessment of triage tools, since no gold standard for urgency exist.26 The outcomes we studied were admission rates and 30-day all-cause mortality, since they were clearly defined in the EHR data and are among the most studied surrogates in this regard.4 26\n", "The characteristics of the study population are presented with descriptive statistics. Categorical variables are presented as counts and percentages. Normality of the data is assessed using histograms and Q-Q plots. Non-normally distributed continuous data are presented with medians and IQRs. NTS and MEWS scores are presented using bar charts and cross-tables. 
To assess for selection bias, we determined the distribution of NTS scores in the population studied as well as the entire adult population seen in the ED during the study period\nThe predictive performance of both scores for the primary outcomes are visualised using receiver operating characteristics (ROC) curves and corresponding areas under the curve (AUCs). To compare the NTS and MEWS scores, we use the DeLong’s test for the comparison of AUCs of two correlated ROC curves.\nData analysis was performed using R V.3.6.3 (R Foundation of Statistical Computing, Vienna, Austria).27 The figures were created using the ‘ggplot2’ package,28 and the paired AUC analysis was done using the ‘pROC’ package.29\n", "No patient involved.", "We identified 55 086 ED visits by 39 907 unique adult patients between 1 September 2018 and 24 June 2020. In 53 106 of these visits, the NTS triage score was recorded. Of these patients, 12 452 patients had a documented MEWS score. After exclusion of patients with an NTS score of 0, the final study population consisted of 12 317 unique visits. Table 1 shows the baseline characteristics of this study population.\nBaseline characteristics of the study population\n*Only complaints with a frequency over 500 are presented.", "In figure 1A, we present the absolute counts of the various NTS scores. Notably, the NTS scores do not seem to follow any particular distribution; the majority of patients are assigned levels 2 and 3, and the NTS score of 4 is infrequently given. Similar results were seen in the complete population before excluding any patient (online supplemental figure 1). The MEWS scores follow a clear right-skewed distribution (figure 1B).\nA bar chart of the absolute counts of the various Netherlands Triage System (NTS) scores (A) and Modified Early Warning Scores (MEWS) (B) in the study population.", "Generally, the proportion of lower (more urgent) NTS scores increases with increasing (more urgent) MEWS scores (figure 2). In table 2, we present the counts of the different combinations of NTS and MEWS scores assigned. Notably, high NTS scores (non-urgent) never co-occur with high (urgent) MEWS scores, while low (more urgent) NTS scores do co-occur with low (non-urgent) MEWS scores. For example, the combination of NTS 1/MEWS 0 is reported in 120/12 317 (1%) instances and the combination of NTS 1/MEWS 2 in 388/12.317 (3.2%).\nModified Early Warning Scores (MEWS) and Netherlands Triage System (NTS) scores compared.\nFrequencies of patients with all different Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) combination\nIn tables 3 and 4, we demonstrate the outcomes of patients with each combination of NTS and MEWS scores. Where the NTS was notably more urgent than the MEWS, the admission and mortality rates are lower than the average in the population. In the above example of an NTS 1 of 1 and MEWS of 0, the admission rate (34%) and mortality rate (2%) are lower than the average admission rate of 40.6% and mortality rate of 3.9%.\nFraction of patients admitted to the hospital stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score\nNA, not available.\nFraction of patients who died within 30 days stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score\nNA, not available.\nFurther, for any NTS score, the admission and mortality ranges can vary greatly for different MEWS scores in those same patients. 
For example, for patients with an NTS score of 2 the admission rate ranged from 29% to 83% depending on the MEWS score.", "The ROC curves are presented in figure 3. Figure 3A shows the MEWS score has a higher AUC for predicting 30-day all-cause mortality (0.70; 95% CI=0.67 to 0.72), compared with the NTS score (0.60; 95% CI=0.57 to 0.62) (p<0.001). In figure 3B, we see that the MEWS score also has a higher AUC for hospital admission (0.65; 95% CI=0.65 to 0.66), compared with the NTS score (0.60; 95% CI=0.60 to 0.61). (p<0.001).\nThe receiver operator characteristics (ROC) curves and corresponding area under the curve (AUC) for both the Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) regarding 30-day mortality (A) or hospital admission (B).", "Our study has several strengths that distinguish this work from what has been published before. Through the use of deidentified EHR data, we were able to study a large population of patients which reflects a wide variety of clinical scenarios. The recorded MEWS and NTS scores were measured in the same patients at the same time, which lowers the chance that these results were biased. Other studies have often calculated clinical scores based on separate measurements, while our analysis is based on a structured data field that included a fully recorded MEWS score at the moment of triage.\nSeveral limitations of the current study need to be addressed. As noted above, studies on triage urgency, including this one, are inherently limited by that fact that there is no gold standard for acuity. Our study used surrogate outcomes for urgent need of care, which are more reflective of severity of disease than urgent care needs. Since EWS tools are specifically created to detect poor outcomes, they may do better when we associate them with these surrogate markers rather than with ‘true’ urgency as assessed by an expert panel through criterion validity methods. Nevertheless, the criterion validity approach also has its limitations and subjectivity, as addressed in previous paragraphs.\nAnother limitation of our study is that it is a retrospective study with potential for selection bias. We only examined situations when both MEWS and NTS score were available, which may have resulted in more urgent patients being included. While we had documented NTS scores for 53 106 patients, we only had MEWS scores for 12 452 of those patients. However, we show that the distribution of NTS scores is similar in the complete population compared with the study population of patients who have both scores, indicating that missing MEWS scores occur across the spectrum of disease severity according to NTS. Furthermore, the overall distribution of MEWS scores in our population resembles the distribution in other cohorts.14 20\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "What is already known on this topic", "What this study adds", "How this study might affect research, practice or policy", "Introduction", "Methods and study design", "Study setting", "Patient selection", "NTS and MEWS measurements", "Outcome measures", "Statistical analysis", "Patient and public involvement", "Results", "Baseline characteristics", "Frequency distributions of NTS and MEWS scores", "Comparison of NTS and MEWS scores", "Paired AUC analysis", "Discussion", "Strengths and limitations", "Conclusion" ]
[ "Complaint-based triage scales are the norm in ED triage. However, their performance has shown to be highly variable and their practicality has been questioned due to their complexity.\nEarly warning scores have been shown to have good predictive value for admission and hospital outcome.", "In this retrospective, single-centre study comparing a complaint-based triage scale with an early warning score, we found that an early warning score was a better discriminator for admission and 30-day mortality than the Netherlands Triage Score.\nIn cases where these approaches yield strikingly different urgency scores, the early warning score was a better predictor.", "This study suggests that early warning scores could potentially replace current emergency triage systems.", "Over the past decades, ED presentation rates have increased worldwide.1 At times of supply and demand mismatches, medical resources should be allocated based on the patients’ needs to ensure patient safety.1 2 Emergency triage systems are used globally to assess these specific needs.\nThe performance of any emergency triage system is dependent on the environment in which it is used. Therefore, most countries use modified international triage systems to fit their particular situation. Commonly known triage scales include the internationally used Emergency Severity Index (ESI), the UK-based Manchester Triage Scale (MTS) and the Canadian Triage and Acuity Scales (CTAS).3 In Holland, the Netherlands Triage System (NTS) is used, which is a modified version of the MTS.4 A common theme among all triage systems is that these are decision trees based on patient complaints. Specific symptoms or high pain scores will result in higher urgency levels. Recently, two large systematic reviews have shown that the performance of triage scores varies considerably and that a significant part of the population may not be designated to the appropriate acuity group.5 6 Furthermore, there has been debate over the impractical complexity of the current triage systems and the need to rethink ED triage.7\n\nThe complaint-based approach during emergency triage is noticeably different from the simple early warning scores (EWS) used to detect clinical deterioration and the need for timely intervention in patients admitted to in-hospital wards. In the Netherlands, the Modified Early Warning Score (MEWS) is used in this regard.8 The EWS scores can accurately detect patients at high risk of deterioration and have been studied in numerous settings.9–14 Although EWS scores have been extensively studied for use in ED triage, they were never specifically developed to be triage tools.15–22 Furthermore, EWS scores and triage scales have not been compared head-to-head.\nIn this study, we aim to compare the ability of currently used triage scales and EWS scores to recognise patients in need of urgent care in the ED. These two approaches will be represented by the NTS and MEWS scores, respectively, as they are commonly used in the Netherlands.", "Study setting A retrospective, single-centre study was performed using data from the electronic health records (EHRs) of the Amsterdam UMC, location Vrije Universiteit Medical Center (VUmc). Data recorded between 1 September 2018 and 24 June 2020 were extracted. Data from before September 2018 could not be used since the storage of the NTS form was outsourced until this point in time. The VUmc is a Level 1 trauma centre and teaching hospital with an estimated 29 000 ED presentations annually. 
The study adheres to the ‘Standards for Reporting Diagnostic Accuracy’ (STARD) guideline.23\n\nA retrospective, single-centre study was performed using data from the electronic health records (EHRs) of the Amsterdam UMC, location Vrije Universiteit Medical Center (VUmc). Data recorded between 1 September 2018 and 24 June 2020 were extracted. Data from before September 2018 could not be used since the storage of the NTS form was outsourced until this point in time. The VUmc is a Level 1 trauma centre and teaching hospital with an estimated 29 000 ED presentations annually. The study adheres to the ‘Standards for Reporting Diagnostic Accuracy’ (STARD) guideline.23\n\nPatient selection We included all patients who presented to the ED of Amsterdam UMC, location VUmc, and for whom an NTS score as well as a MEWS score was documented. Patients under the age of 18 were excluded, as were patients with an NTS score of 0. The NTS score of 0 indicates that the patient was being resuscitated on arrival, which makes triage redundant.\nWe included all patients who presented to the ED of Amsterdam UMC, location VUmc, and for whom an NTS score as well as a MEWS score was documented. Patients under the age of 18 were excluded, as were patients with an NTS score of 0. The NTS score of 0 indicates that the patient was being resuscitated on arrival, which makes triage redundant.\nNTS and MEWS measurements All patients in the VUmc are triaged by a triage nurse who documents an NTS score. The NTS is a standardised five-level protocol with questions regarding patient complaints and pain levels. Lower numbered urgency levels (eg, NTS 1 or 2) indicate higher urgency4 (online supplemental table 1).\n\n\n\nThe MEWS score is also frequently documented as part of the initial work-up in our hospital’s ED, but is not used to decide on the urgency level and is therefore not mandatory. The MEWS is derived from seven parameters (systolic BP, HR, RR, temperature, peripheral oxygen saturation, level of consciousness and urine production).24 Also, an additional point may be scored when the nurse is particularly worried (online supplemental table 2). The higher the MEWS scores, the more likely a patient is to deteriorate. Prior studies report that MEWS scores of 5 or higher are critical and indicate a high likelihood of deterioration, while Dutch hospitals are prompted to use a cut-off of 3.8 24 25\n\nAll patients in the VUmc are triaged by a triage nurse who documents an NTS score. The NTS is a standardised five-level protocol with questions regarding patient complaints and pain levels. Lower numbered urgency levels (eg, NTS 1 or 2) indicate higher urgency4 (online supplemental table 1).\n\n\n\nThe MEWS score is also frequently documented as part of the initial work-up in our hospital’s ED, but is not used to decide on the urgency level and is therefore not mandatory. The MEWS is derived from seven parameters (systolic BP, HR, RR, temperature, peripheral oxygen saturation, level of consciousness and urine production).24 Also, an additional point may be scored when the nurse is particularly worried (online supplemental table 2). The higher the MEWS scores, the more likely a patient is to deteriorate. 
Prior studies report that MEWS scores of 5 or higher are critical and indicate a high likelihood of deterioration, while Dutch hospitals are prompted to use a cut-off of 3.8 24 25\n\nOutcome measures Surrogate outcomes for high urgency were used, as is frequently done with the development and assessment of triage tools, since no gold standard for urgency exist.26 The outcomes we studied were admission rates and 30-day all-cause mortality, since they were clearly defined in the EHR data and are among the most studied surrogates in this regard.4 26\n\nSurrogate outcomes for high urgency were used, as is frequently done with the development and assessment of triage tools, since no gold standard for urgency exist.26 The outcomes we studied were admission rates and 30-day all-cause mortality, since they were clearly defined in the EHR data and are among the most studied surrogates in this regard.4 26\n\nStatistical analysis The characteristics of the study population are presented with descriptive statistics. Categorical variables are presented as counts and percentages. Normality of the data is assessed using histograms and Q-Q plots. Non-normally distributed continuous data are presented with medians and IQRs. NTS and MEWS scores are presented using bar charts and cross-tables. To assess for selection bias, we determined the distribution of NTS scores in the population studied as well as the entire adult population seen in the ED during the study period\nThe predictive performance of both scores for the primary outcomes are visualised using receiver operating characteristics (ROC) curves and corresponding areas under the curve (AUCs). To compare the NTS and MEWS scores, we use the DeLong’s test for the comparison of AUCs of two correlated ROC curves.\nData analysis was performed using R V.3.6.3 (R Foundation of Statistical Computing, Vienna, Austria).27 The figures were created using the ‘ggplot2’ package,28 and the paired AUC analysis was done using the ‘pROC’ package.29\n\nThe characteristics of the study population are presented with descriptive statistics. Categorical variables are presented as counts and percentages. Normality of the data is assessed using histograms and Q-Q plots. Non-normally distributed continuous data are presented with medians and IQRs. NTS and MEWS scores are presented using bar charts and cross-tables. To assess for selection bias, we determined the distribution of NTS scores in the population studied as well as the entire adult population seen in the ED during the study period\nThe predictive performance of both scores for the primary outcomes are visualised using receiver operating characteristics (ROC) curves and corresponding areas under the curve (AUCs). To compare the NTS and MEWS scores, we use the DeLong’s test for the comparison of AUCs of two correlated ROC curves.\nData analysis was performed using R V.3.6.3 (R Foundation of Statistical Computing, Vienna, Austria).27 The figures were created using the ‘ggplot2’ package,28 and the paired AUC analysis was done using the ‘pROC’ package.29\n\nPatient and public involvement No patient involved.\nNo patient involved.", "A retrospective, single-centre study was performed using data from the electronic health records (EHRs) of the Amsterdam UMC, location Vrije Universiteit Medical Center (VUmc). Data recorded between 1 September 2018 and 24 June 2020 were extracted. Data from before September 2018 could not be used since the storage of the NTS form was outsourced until this point in time. 
The VUmc is a Level 1 trauma centre and teaching hospital with an estimated 29 000 ED presentations annually. The study adheres to the ‘Standards for Reporting Diagnostic Accuracy’ (STARD) guideline.23\n", "We included all patients who presented to the ED of Amsterdam UMC, location VUmc, and for whom an NTS score as well as a MEWS score was documented. Patients under the age of 18 were excluded, as were patients with an NTS score of 0. The NTS score of 0 indicates that the patient was being resuscitated on arrival, which makes triage redundant.", "All patients in the VUmc are triaged by a triage nurse who documents an NTS score. The NTS is a standardised five-level protocol with questions regarding patient complaints and pain levels. Lower numbered urgency levels (eg, NTS 1 or 2) indicate higher urgency4 (online supplemental table 1).\n\n\n\nThe MEWS score is also frequently documented as part of the initial work-up in our hospital’s ED, but is not used to decide on the urgency level and is therefore not mandatory. The MEWS is derived from seven parameters (systolic BP, HR, RR, temperature, peripheral oxygen saturation, level of consciousness and urine production).24 Also, an additional point may be scored when the nurse is particularly worried (online supplemental table 2). The higher the MEWS scores, the more likely a patient is to deteriorate. Prior studies report that MEWS scores of 5 or higher are critical and indicate a high likelihood of deterioration, while Dutch hospitals are prompted to use a cut-off of 3.8 24 25\n", "Surrogate outcomes for high urgency were used, as is frequently done with the development and assessment of triage tools, since no gold standard for urgency exist.26 The outcomes we studied were admission rates and 30-day all-cause mortality, since they were clearly defined in the EHR data and are among the most studied surrogates in this regard.4 26\n", "The characteristics of the study population are presented with descriptive statistics. Categorical variables are presented as counts and percentages. Normality of the data is assessed using histograms and Q-Q plots. Non-normally distributed continuous data are presented with medians and IQRs. NTS and MEWS scores are presented using bar charts and cross-tables. To assess for selection bias, we determined the distribution of NTS scores in the population studied as well as the entire adult population seen in the ED during the study period\nThe predictive performance of both scores for the primary outcomes are visualised using receiver operating characteristics (ROC) curves and corresponding areas under the curve (AUCs). To compare the NTS and MEWS scores, we use the DeLong’s test for the comparison of AUCs of two correlated ROC curves.\nData analysis was performed using R V.3.6.3 (R Foundation of Statistical Computing, Vienna, Austria).27 The figures were created using the ‘ggplot2’ package,28 and the paired AUC analysis was done using the ‘pROC’ package.29\n", "No patient involved.", "Baseline characteristics We identified 55 086 ED visits by 39 907 unique adult patients between 1 September 2018 and 24 June 2020. In 53 106 of these visits, the NTS triage score was recorded. Of these patients, 12 452 patients had a documented MEWS score. After exclusion of patients with an NTS score of 0, the final study population consisted of 12 317 unique visits. 
Table 1 shows the baseline characteristics of this study population.\nBaseline characteristics of the study population\n*Only complaints with a frequency over 500 are presented.\nWe identified 55 086 ED visits by 39 907 unique adult patients between 1 September 2018 and 24 June 2020. In 53 106 of these visits, the NTS triage score was recorded. Of these patients, 12 452 patients had a documented MEWS score. After exclusion of patients with an NTS score of 0, the final study population consisted of 12 317 unique visits. Table 1 shows the baseline characteristics of this study population.\nBaseline characteristics of the study population\n*Only complaints with a frequency over 500 are presented.\nFrequency distributions of NTS and MEWS scores In figure 1A, we present the absolute counts of the various NTS scores. Notably, the NTS scores do not seem to follow any particular distribution; the majority of patients are assigned levels 2 and 3, and the NTS score of 4 is infrequently given. Similar results were seen in the complete population before excluding any patient (online supplemental figure 1). The MEWS scores follow a clear right-skewed distribution (figure 1B).\nA bar chart of the absolute counts of the various Netherlands Triage System (NTS) scores (A) and Modified Early Warning Scores (MEWS) (B) in the study population.\nIn figure 1A, we present the absolute counts of the various NTS scores. Notably, the NTS scores do not seem to follow any particular distribution; the majority of patients are assigned levels 2 and 3, and the NTS score of 4 is infrequently given. Similar results were seen in the complete population before excluding any patient (online supplemental figure 1). The MEWS scores follow a clear right-skewed distribution (figure 1B).\nA bar chart of the absolute counts of the various Netherlands Triage System (NTS) scores (A) and Modified Early Warning Scores (MEWS) (B) in the study population.\nComparison of NTS and MEWS scores Generally, the proportion of lower (more urgent) NTS scores increases with increasing (more urgent) MEWS scores (figure 2). In table 2, we present the counts of the different combinations of NTS and MEWS scores assigned. Notably, high NTS scores (non-urgent) never co-occur with high (urgent) MEWS scores, while low (more urgent) NTS scores do co-occur with low (non-urgent) MEWS scores. For example, the combination of NTS 1/MEWS 0 is reported in 120/12 317 (1%) instances and the combination of NTS 1/MEWS 2 in 388/12.317 (3.2%).\nModified Early Warning Scores (MEWS) and Netherlands Triage System (NTS) scores compared.\nFrequencies of patients with all different Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) combination\nIn tables 3 and 4, we demonstrate the outcomes of patients with each combination of NTS and MEWS scores. Where the NTS was notably more urgent than the MEWS, the admission and mortality rates are lower than the average in the population. 
In the above example of an NTS 1 of 1 and MEWS of 0, the admission rate (34%) and mortality rate (2%) are lower than the average admission rate of 40.6% and mortality rate of 3.9%.\nFraction of patients admitted to the hospital stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score\nNA, not available.\nFraction of patients who died within 30 days stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score\nNA, not available.\nFurther, for any NTS score, the admission and mortality ranges can vary greatly for different MEWS scores in those same patients. For example, for patients with an NTS score of 2 the admission rate ranged from 29% to 83% depending on the MEWS score.\nGenerally, the proportion of lower (more urgent) NTS scores increases with increasing (more urgent) MEWS scores (figure 2). In table 2, we present the counts of the different combinations of NTS and MEWS scores assigned. Notably, high NTS scores (non-urgent) never co-occur with high (urgent) MEWS scores, while low (more urgent) NTS scores do co-occur with low (non-urgent) MEWS scores. For example, the combination of NTS 1/MEWS 0 is reported in 120/12 317 (1%) instances and the combination of NTS 1/MEWS 2 in 388/12.317 (3.2%).\nModified Early Warning Scores (MEWS) and Netherlands Triage System (NTS) scores compared.\nFrequencies of patients with all different Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) combination\nIn tables 3 and 4, we demonstrate the outcomes of patients with each combination of NTS and MEWS scores. Where the NTS was notably more urgent than the MEWS, the admission and mortality rates are lower than the average in the population. In the above example of an NTS 1 of 1 and MEWS of 0, the admission rate (34%) and mortality rate (2%) are lower than the average admission rate of 40.6% and mortality rate of 3.9%.\nFraction of patients admitted to the hospital stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score\nNA, not available.\nFraction of patients who died within 30 days stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score\nNA, not available.\nFurther, for any NTS score, the admission and mortality ranges can vary greatly for different MEWS scores in those same patients. For example, for patients with an NTS score of 2 the admission rate ranged from 29% to 83% depending on the MEWS score.\nPaired AUC analysis The ROC curves are presented in figure 3. Figure 3A shows the MEWS score has a higher AUC for predicting 30-day all-cause mortality (0.70; 95% CI=0.67 to 0.72), compared with the NTS score (0.60; 95% CI=0.57 to 0.62) (p<0.001). In figure 3B, we see that the MEWS score also has a higher AUC for hospital admission (0.65; 95% CI=0.65 to 0.66), compared with the NTS score (0.60; 95% CI=0.60 to 0.61). (p<0.001).\nThe receiver operator characteristics (ROC) curves and corresponding area under the curve (AUC) for both the Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) regarding 30-day mortality (A) or hospital admission (B).\nThe ROC curves are presented in figure 3. Figure 3A shows the MEWS score has a higher AUC for predicting 30-day all-cause mortality (0.70; 95% CI=0.67 to 0.72), compared with the NTS score (0.60; 95% CI=0.57 to 0.62) (p<0.001). 
In figure 3B, we see that the MEWS score also has a higher AUC for hospital admission (0.65; 95% CI=0.65 to 0.66), compared with the NTS score (0.60; 95% CI=0.60 to 0.61). (p<0.001).\nThe receiver operator characteristics (ROC) curves and corresponding area under the curve (AUC) for both the Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) regarding 30-day mortality (A) or hospital admission (B).", "We identified 55 086 ED visits by 39 907 unique adult patients between 1 September 2018 and 24 June 2020. In 53 106 of these visits, the NTS triage score was recorded. Of these patients, 12 452 patients had a documented MEWS score. After exclusion of patients with an NTS score of 0, the final study population consisted of 12 317 unique visits. Table 1 shows the baseline characteristics of this study population.\nBaseline characteristics of the study population\n*Only complaints with a frequency over 500 are presented.", "In figure 1A, we present the absolute counts of the various NTS scores. Notably, the NTS scores do not seem to follow any particular distribution; the majority of patients are assigned levels 2 and 3, and the NTS score of 4 is infrequently given. Similar results were seen in the complete population before excluding any patient (online supplemental figure 1). The MEWS scores follow a clear right-skewed distribution (figure 1B).\nA bar chart of the absolute counts of the various Netherlands Triage System (NTS) scores (A) and Modified Early Warning Scores (MEWS) (B) in the study population.", "Generally, the proportion of lower (more urgent) NTS scores increases with increasing (more urgent) MEWS scores (figure 2). In table 2, we present the counts of the different combinations of NTS and MEWS scores assigned. Notably, high NTS scores (non-urgent) never co-occur with high (urgent) MEWS scores, while low (more urgent) NTS scores do co-occur with low (non-urgent) MEWS scores. For example, the combination of NTS 1/MEWS 0 is reported in 120/12 317 (1%) instances and the combination of NTS 1/MEWS 2 in 388/12.317 (3.2%).\nModified Early Warning Scores (MEWS) and Netherlands Triage System (NTS) scores compared.\nFrequencies of patients with all different Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) combination\nIn tables 3 and 4, we demonstrate the outcomes of patients with each combination of NTS and MEWS scores. Where the NTS was notably more urgent than the MEWS, the admission and mortality rates are lower than the average in the population. In the above example of an NTS 1 of 1 and MEWS of 0, the admission rate (34%) and mortality rate (2%) are lower than the average admission rate of 40.6% and mortality rate of 3.9%.\nFraction of patients admitted to the hospital stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score\nNA, not available.\nFraction of patients who died within 30 days stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score\nNA, not available.\nFurther, for any NTS score, the admission and mortality ranges can vary greatly for different MEWS scores in those same patients. For example, for patients with an NTS score of 2 the admission rate ranged from 29% to 83% depending on the MEWS score.", "The ROC curves are presented in figure 3. Figure 3A shows the MEWS score has a higher AUC for predicting 30-day all-cause mortality (0.70; 95% CI=0.67 to 0.72), compared with the NTS score (0.60; 95% CI=0.57 to 0.62) (p<0.001). 
In figure 3B, we see that the MEWS score also has a higher AUC for hospital admission (0.65; 95% CI=0.65 to 0.66), compared with the NTS score (0.60; 95% CI=0.60 to 0.61). (p<0.001).\nThe receiver operator characteristics (ROC) curves and corresponding area under the curve (AUC) for both the Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) regarding 30-day mortality (A) or hospital admission (B).", "We compared a traditional complaint-based triage scale and an EWS, represented by the NTS and MEWS score, respectively, on their ability to recognise patients in need of urgent care. The predictive performance of the MEWS score was significantly better than that of the NTS for 30-day mortality (0.70 vs 0.60; p<0.001) and hospital admission (0.65 vs 0.60; p<0.001), which are both well-studied surrogate markers for the urgent need of care. Furthermore, in instances with a particularly large discrepancy between the scores, the MEWS score seems to more accurately capture the urgency level that is warranted. Notably, neither tool reaches an excellent performance. While the MEWS reaches a fair (0.7–0.8) performance for 30-day mortality, all other AUCs can be considered poor (0.6–0.7).9\n\nComplaint-based emergency triage scales such as the ESI, MTS and CTAS have been validated in at least 14 studies.26 A major challenge with the validation of these triage systems is the determination of an appropriate reference standard. The lack of a consensus definition about which patients actually require urgent care makes research in this field inherently difficult and limited in the ability to draw firm conclusions. In general, criterion validity and construct validity are the two main methodologies used to validate triage systems.\nWith criterion validity methods, performance of a triage system is compared with a reference standard, which is usually an expert panel.26 30 These studies report the validity of the triage scale as a function of the inter-rater agreement between the triagists and the expert panel and generally show fair agreement.26 30 Specifically for the NTS score, a recent study showed good agreement between triagists and an expert panel for 41 written cases.31\n\nAlthough criterion validity methods could potentially detect true urgency best, they are labour intensive and cannot capture the full spectrum of clinical scenarios as seen in the ED.26 Given these limitations, and the fact that there is still significant subjectivity involved, researchers have usually opted for a method based on construct validity to validate triage tools, as we also did.26\n\nWith construct validity, surrogate markers that are deemed fair proxies for high urgency are used as outcome measures.26 These surrogates include but are not limited to admission rates, resource use, ED length of stay, overall costs and mortality rates. In the absence of a gold standard, construct validity methods have been named the ‘silver standard’ when it comes to validating triage systems.30 Studies generally show that the complaint-based triage scales like ESI, MTS and CTAS are associated with the surrogate markers for urgency. The most studied marker is hospital admission.26 The original validation study for the NTS score also showed significant associations of the NTS scores with hospital admission and resource use.4 MEWS scores and other EWS tools based on vital signs are actually created to detect the outcomes used as surrogate outcomes for urgent care needs. 
It is therefore no surprise that these models have good to excellent accuracy for detecting these outcomes.9–14\n\nOur study adds to the literature suggesting that EWS tools may have added clinical value in ED triage, either by augmenting or by replacing the current complaint-based triage scales. Several studies have explored the stand-alone use of EWS in ED triage, with the same surrogate endpoints we used.9 18 20–22 For example, Spencer and colleagues found AUCs of EWS scores for hospital admission ranging from 0.54 to 0.70, and Lee et al found AUCs of the MEWS for 30-day mortality of 0.779.18 20 McCabe and colleagues specifically studied the use of an EWS in conjunction with the MTS.19 The study showed the EWS addition led to a more risk-averse triage, but increased the overall ED length of stay, suggesting that these tools may work better separately.\nThe current study performed a direct comparison between a complaint-based triage scale and EWS, represented by the MEWS and NTS scores. Generally, these scores have much overlap and high NTS scores (non-urgent) never co-occur with high (urgent) MEWS scores. However, more urgent NTS scores do occur in combination with non-urgent MEWS scores. In these situations, the MEWS score seems to be more reflective of the urgency since the admission and mortality rates are lower than average in this group. Furthermore, the AUCs of the MEWS were significantly higher than those of the NTS for surrogate markers of urgent care needs.\nBesides the performance of these scores, we believe the MEWS score is less complex and easier to use during triage since it consists of just eight items. Furthermore, from the distribution of the MEWS and NTS scores it appears that MEWS is better able to separate patients with lower from higher urgency. In our study, nearly half of the patients had an NTS score of 1 or 2, indicating the highest urgency. On the other hand, the right-skewed distribution of the MEWS scores shows that the most urgent cases are relatively rare and can be distinguished from lower urgency cases. Finally, using the MEWS score during triage will facilitate a continuous and comparable assessment over the course of hospital stay since it is also used in the hospital.\nOne aspect that favours the complaint-based approaches such as NTS is that they can be used to recognise specific conditions, such as acute angle-closure glaucoma or compartment syndrome, in which a short time-to-treatment is especially beneficial. In these situations, the urgency is not always reflected in a higher MEWS score as vital signs can be normal. However, currently used complaint-based triage systems have rarely been developed or validated in ways to show that these scores actually perform this function.\nStrengths and limitations Our study has several strengths that distinguish this work from what has been published before. Through the use of deidentified EHR data, we were able to study a large population of patients which reflects a wide variety of clinical scenarios. The recorded MEWS and NTS scores were measured in the same patients at the same time, which lowers the chance that these results were biased. Other studies have often calculated clinical scores based on separate measurements, while our analysis is based on a structured data field that included a fully recorded MEWS score at the moment of triage.\nSeveral limitations of the current study need to be addressed. 
As noted above, studies on triage urgency, including this one, are inherently limited by that fact that there is no gold standard for acuity. Our study used surrogate outcomes for urgent need of care, which are more reflective of severity of disease than urgent care needs. Since EWS tools are specifically created to detect poor outcomes, they may do better when we associate them with these surrogate markers rather than with ‘true’ urgency as assessed by an expert panel through criterion validity methods. Nevertheless, the criterion validity approach also has its limitations and subjectivity, as addressed in previous paragraphs.\nAnother limitation of our study is that it is a retrospective study with potential for selection bias. We only examined situations when both MEWS and NTS score were available, which may have resulted in more urgent patients being included. While we had documented NTS scores for 53 106 patients, we only had MEWS scores for 12 452 of those patients. However, we show that the distribution of NTS scores is similar in the complete population compared with the study population of patients who have both scores, indicating that missing MEWS scores occur across the spectrum of disease severity according to NTS. Furthermore, the overall distribution of MEWS scores in our population resembles the distribution in other cohorts.14 20\n\nOur study has several strengths that distinguish this work from what has been published before. Through the use of deidentified EHR data, we were able to study a large population of patients which reflects a wide variety of clinical scenarios. The recorded MEWS and NTS scores were measured in the same patients at the same time, which lowers the chance that these results were biased. Other studies have often calculated clinical scores based on separate measurements, while our analysis is based on a structured data field that included a fully recorded MEWS score at the moment of triage.\nSeveral limitations of the current study need to be addressed. As noted above, studies on triage urgency, including this one, are inherently limited by that fact that there is no gold standard for acuity. Our study used surrogate outcomes for urgent need of care, which are more reflective of severity of disease than urgent care needs. Since EWS tools are specifically created to detect poor outcomes, they may do better when we associate them with these surrogate markers rather than with ‘true’ urgency as assessed by an expert panel through criterion validity methods. Nevertheless, the criterion validity approach also has its limitations and subjectivity, as addressed in previous paragraphs.\nAnother limitation of our study is that it is a retrospective study with potential for selection bias. We only examined situations when both MEWS and NTS score were available, which may have resulted in more urgent patients being included. While we had documented NTS scores for 53 106 patients, we only had MEWS scores for 12 452 of those patients. However, we show that the distribution of NTS scores is similar in the complete population compared with the study population of patients who have both scores, indicating that missing MEWS scores occur across the spectrum of disease severity according to NTS. Furthermore, the overall distribution of MEWS scores in our population resembles the distribution in other cohorts.14 20\n", "Our study has several strengths that distinguish this work from what has been published before. 
Through the use of deidentified EHR data, we were able to study a large population of patients which reflects a wide variety of clinical scenarios. The recorded MEWS and NTS scores were measured in the same patients at the same time, which lowers the chance that these results were biased. Other studies have often calculated clinical scores based on separate measurements, while our analysis is based on a structured data field that included a fully recorded MEWS score at the moment of triage.\nSeveral limitations of the current study need to be addressed. As noted above, studies on triage urgency, including this one, are inherently limited by that fact that there is no gold standard for acuity. Our study used surrogate outcomes for urgent need of care, which are more reflective of severity of disease than urgent care needs. Since EWS tools are specifically created to detect poor outcomes, they may do better when we associate them with these surrogate markers rather than with ‘true’ urgency as assessed by an expert panel through criterion validity methods. Nevertheless, the criterion validity approach also has its limitations and subjectivity, as addressed in previous paragraphs.\nAnother limitation of our study is that it is a retrospective study with potential for selection bias. We only examined situations when both MEWS and NTS score were available, which may have resulted in more urgent patients being included. While we had documented NTS scores for 53 106 patients, we only had MEWS scores for 12 452 of those patients. However, we show that the distribution of NTS scores is similar in the complete population compared with the study population of patients who have both scores, indicating that missing MEWS scores occur across the spectrum of disease severity according to NTS. Furthermore, the overall distribution of MEWS scores in our population resembles the distribution in other cohorts.14 20\n", "We conclude that EWSs outperform currently used ED triage scales based on patient complaints regarding hospitalisation and 30-day mortality. In cases where these approaches yield particularly different urgency scores, the EWS, represented by the MEWS in our study, seems to assess the need for urgent care better than the complaint based NTS score. The results of this study suggest that EWSs could potentially replace the current emergency triage systems." ]
[ null, null, null, "intro", "methods", null, null, null, null, null, null, "results", null, null, null, null, "discussion", null, "conclusions" ]
[ "triage", "emergency department" ]
What is already known on this topic: Complaint-based triage scales are the norm in ED triage. However, their performance has shown to be highly variable and their practicality has been questioned due to their complexity. Early warning scores have been shown to have good predictive value for admission and hospital outcome. What this study adds: In this retrospective, single-centre study comparing a complaint-based triage scale with an early warning score, we found that an early warning score was a better discriminator for admission and 30-day mortality than the Netherlands Triage Score. In cases where these approaches yield strikingly different urgency scores, the early warning score was a better predictor. How this study might affect research, practice or policy: This study suggests that early warning scores could potentially replace current emergency triage systems. Introduction: Over the past decades, ED presentation rates have increased worldwide.1 At times of supply and demand mismatches, medical resources should be allocated based on the patients’ needs to ensure patient safety.1 2 Emergency triage systems are used globally to assess these specific needs. The performance of any emergency triage system is dependent on the environment in which it is used. Therefore, most countries use modified international triage systems to fit their particular situation. Commonly known triage scales include the internationally used Emergency Severity Index (ESI), the UK-based Manchester Triage Scale (MTS) and the Canadian Triage and Acuity Scales (CTAS).3 In Holland, the Netherlands Triage System (NTS) is used, which is a modified version of the MTS.4 A common theme among all triage systems is that these are decision trees based on patient complaints. Specific symptoms or high pain scores will result in higher urgency levels. Recently, two large systematic reviews have shown that the performance of triage scores varies considerably and that a significant part of the population may not be designated to the appropriate acuity group.5 6 Furthermore, there has been debate over the impractical complexity of the current triage systems and the need to rethink ED triage.7 The complaint-based approach during emergency triage is noticeably different from the simple early warning scores (EWS) used to detect clinical deterioration and the need for timely intervention in patients admitted to in-hospital wards. In the Netherlands, the Modified Early Warning Score (MEWS) is used in this regard.8 The EWS scores can accurately detect patients at high risk of deterioration and have been studied in numerous settings.9–14 Although EWS scores have been extensively studied for use in ED triage, they were never specifically developed to be triage tools.15–22 Furthermore, EWS scores and triage scales have not been compared head-to-head. In this study, we aim to compare the ability of currently used triage scales and EWS scores to recognise patients in need of urgent care in the ED. These two approaches will be represented by the NTS and MEWS scores, respectively, as they are commonly used in the Netherlands. Methods and study design: Study setting A retrospective, single-centre study was performed using data from the electronic health records (EHRs) of the Amsterdam UMC, location Vrije Universiteit Medical Center (VUmc). Data recorded between 1 September 2018 and 24 June 2020 were extracted. Data from before September 2018 could not be used since the storage of the NTS form was outsourced until this point in time. 
The VUmc is a Level 1 trauma centre and teaching hospital with an estimated 29 000 ED presentations annually. The study adheres to the ‘Standards for Reporting Diagnostic Accuracy’ (STARD) guideline.23 A retrospective, single-centre study was performed using data from the electronic health records (EHRs) of the Amsterdam UMC, location Vrije Universiteit Medical Center (VUmc). Data recorded between 1 September 2018 and 24 June 2020 were extracted. Data from before September 2018 could not be used since the storage of the NTS form was outsourced until this point in time. The VUmc is a Level 1 trauma centre and teaching hospital with an estimated 29 000 ED presentations annually. The study adheres to the ‘Standards for Reporting Diagnostic Accuracy’ (STARD) guideline.23 Patient selection We included all patients who presented to the ED of Amsterdam UMC, location VUmc, and for whom an NTS score as well as a MEWS score was documented. Patients under the age of 18 were excluded, as were patients with an NTS score of 0. The NTS score of 0 indicates that the patient was being resuscitated on arrival, which makes triage redundant. We included all patients who presented to the ED of Amsterdam UMC, location VUmc, and for whom an NTS score as well as a MEWS score was documented. Patients under the age of 18 were excluded, as were patients with an NTS score of 0. The NTS score of 0 indicates that the patient was being resuscitated on arrival, which makes triage redundant. NTS and MEWS measurements All patients in the VUmc are triaged by a triage nurse who documents an NTS score. The NTS is a standardised five-level protocol with questions regarding patient complaints and pain levels. Lower numbered urgency levels (eg, NTS 1 or 2) indicate higher urgency4 (online supplemental table 1). The MEWS score is also frequently documented as part of the initial work-up in our hospital’s ED, but is not used to decide on the urgency level and is therefore not mandatory. The MEWS is derived from seven parameters (systolic BP, HR, RR, temperature, peripheral oxygen saturation, level of consciousness and urine production).24 Also, an additional point may be scored when the nurse is particularly worried (online supplemental table 2). The higher the MEWS scores, the more likely a patient is to deteriorate. Prior studies report that MEWS scores of 5 or higher are critical and indicate a high likelihood of deterioration, while Dutch hospitals are prompted to use a cut-off of 3.8 24 25 All patients in the VUmc are triaged by a triage nurse who documents an NTS score. The NTS is a standardised five-level protocol with questions regarding patient complaints and pain levels. Lower numbered urgency levels (eg, NTS 1 or 2) indicate higher urgency4 (online supplemental table 1). The MEWS score is also frequently documented as part of the initial work-up in our hospital’s ED, but is not used to decide on the urgency level and is therefore not mandatory. The MEWS is derived from seven parameters (systolic BP, HR, RR, temperature, peripheral oxygen saturation, level of consciousness and urine production).24 Also, an additional point may be scored when the nurse is particularly worried (online supplemental table 2). The higher the MEWS scores, the more likely a patient is to deteriorate. 
Prior studies report that MEWS scores of 5 or higher are critical and indicate a high likelihood of deterioration, while Dutch hospitals are prompted to use a cut-off of 3.8 24 25 Outcome measures Surrogate outcomes for high urgency were used, as is frequently done with the development and assessment of triage tools, since no gold standard for urgency exist.26 The outcomes we studied were admission rates and 30-day all-cause mortality, since they were clearly defined in the EHR data and are among the most studied surrogates in this regard.4 26 Surrogate outcomes for high urgency were used, as is frequently done with the development and assessment of triage tools, since no gold standard for urgency exist.26 The outcomes we studied were admission rates and 30-day all-cause mortality, since they were clearly defined in the EHR data and are among the most studied surrogates in this regard.4 26 Statistical analysis The characteristics of the study population are presented with descriptive statistics. Categorical variables are presented as counts and percentages. Normality of the data is assessed using histograms and Q-Q plots. Non-normally distributed continuous data are presented with medians and IQRs. NTS and MEWS scores are presented using bar charts and cross-tables. To assess for selection bias, we determined the distribution of NTS scores in the population studied as well as the entire adult population seen in the ED during the study period The predictive performance of both scores for the primary outcomes are visualised using receiver operating characteristics (ROC) curves and corresponding areas under the curve (AUCs). To compare the NTS and MEWS scores, we use the DeLong’s test for the comparison of AUCs of two correlated ROC curves. Data analysis was performed using R V.3.6.3 (R Foundation of Statistical Computing, Vienna, Austria).27 The figures were created using the ‘ggplot2’ package,28 and the paired AUC analysis was done using the ‘pROC’ package.29 The characteristics of the study population are presented with descriptive statistics. Categorical variables are presented as counts and percentages. Normality of the data is assessed using histograms and Q-Q plots. Non-normally distributed continuous data are presented with medians and IQRs. NTS and MEWS scores are presented using bar charts and cross-tables. To assess for selection bias, we determined the distribution of NTS scores in the population studied as well as the entire adult population seen in the ED during the study period The predictive performance of both scores for the primary outcomes are visualised using receiver operating characteristics (ROC) curves and corresponding areas under the curve (AUCs). To compare the NTS and MEWS scores, we use the DeLong’s test for the comparison of AUCs of two correlated ROC curves. Data analysis was performed using R V.3.6.3 (R Foundation of Statistical Computing, Vienna, Austria).27 The figures were created using the ‘ggplot2’ package,28 and the paired AUC analysis was done using the ‘pROC’ package.29 Patient and public involvement No patient involved. No patient involved. Study setting: A retrospective, single-centre study was performed using data from the electronic health records (EHRs) of the Amsterdam UMC, location Vrije Universiteit Medical Center (VUmc). Data recorded between 1 September 2018 and 24 June 2020 were extracted. Data from before September 2018 could not be used since the storage of the NTS form was outsourced until this point in time. 
The VUmc is a Level 1 trauma centre and teaching hospital with an estimated 29 000 ED presentations annually. The study adheres to the ‘Standards for Reporting Diagnostic Accuracy’ (STARD) guideline.23 Patient selection: We included all patients who presented to the ED of Amsterdam UMC, location VUmc, and for whom an NTS score as well as a MEWS score was documented. Patients under the age of 18 were excluded, as were patients with an NTS score of 0. The NTS score of 0 indicates that the patient was being resuscitated on arrival, which makes triage redundant. NTS and MEWS measurements: All patients in the VUmc are triaged by a triage nurse who documents an NTS score. The NTS is a standardised five-level protocol with questions regarding patient complaints and pain levels. Lower numbered urgency levels (eg, NTS 1 or 2) indicate higher urgency4 (online supplemental table 1). The MEWS score is also frequently documented as part of the initial work-up in our hospital’s ED, but is not used to decide on the urgency level and is therefore not mandatory. The MEWS is derived from seven parameters (systolic BP, HR, RR, temperature, peripheral oxygen saturation, level of consciousness and urine production).24 Also, an additional point may be scored when the nurse is particularly worried (online supplemental table 2). The higher the MEWS scores, the more likely a patient is to deteriorate. Prior studies report that MEWS scores of 5 or higher are critical and indicate a high likelihood of deterioration, while Dutch hospitals are prompted to use a cut-off of 3.8 24 25 Outcome measures: Surrogate outcomes for high urgency were used, as is frequently done with the development and assessment of triage tools, since no gold standard for urgency exist.26 The outcomes we studied were admission rates and 30-day all-cause mortality, since they were clearly defined in the EHR data and are among the most studied surrogates in this regard.4 26 Statistical analysis: The characteristics of the study population are presented with descriptive statistics. Categorical variables are presented as counts and percentages. Normality of the data is assessed using histograms and Q-Q plots. Non-normally distributed continuous data are presented with medians and IQRs. NTS and MEWS scores are presented using bar charts and cross-tables. To assess for selection bias, we determined the distribution of NTS scores in the population studied as well as the entire adult population seen in the ED during the study period The predictive performance of both scores for the primary outcomes are visualised using receiver operating characteristics (ROC) curves and corresponding areas under the curve (AUCs). To compare the NTS and MEWS scores, we use the DeLong’s test for the comparison of AUCs of two correlated ROC curves. Data analysis was performed using R V.3.6.3 (R Foundation of Statistical Computing, Vienna, Austria).27 The figures were created using the ‘ggplot2’ package,28 and the paired AUC analysis was done using the ‘pROC’ package.29 Patient and public involvement: No patient involved. Results: Baseline characteristics We identified 55 086 ED visits by 39 907 unique adult patients between 1 September 2018 and 24 June 2020. In 53 106 of these visits, the NTS triage score was recorded. Of these patients, 12 452 patients had a documented MEWS score. After exclusion of patients with an NTS score of 0, the final study population consisted of 12 317 unique visits. Table 1 shows the baseline characteristics of this study population. 
Baseline characteristics of the study population *Only complaints with a frequency over 500 are presented. We identified 55 086 ED visits by 39 907 unique adult patients between 1 September 2018 and 24 June 2020. In 53 106 of these visits, the NTS triage score was recorded. Of these patients, 12 452 patients had a documented MEWS score. After exclusion of patients with an NTS score of 0, the final study population consisted of 12 317 unique visits. Table 1 shows the baseline characteristics of this study population. Baseline characteristics of the study population *Only complaints with a frequency over 500 are presented. Frequency distributions of NTS and MEWS scores In figure 1A, we present the absolute counts of the various NTS scores. Notably, the NTS scores do not seem to follow any particular distribution; the majority of patients are assigned levels 2 and 3, and the NTS score of 4 is infrequently given. Similar results were seen in the complete population before excluding any patient (online supplemental figure 1). The MEWS scores follow a clear right-skewed distribution (figure 1B). A bar chart of the absolute counts of the various Netherlands Triage System (NTS) scores (A) and Modified Early Warning Scores (MEWS) (B) in the study population. In figure 1A, we present the absolute counts of the various NTS scores. Notably, the NTS scores do not seem to follow any particular distribution; the majority of patients are assigned levels 2 and 3, and the NTS score of 4 is infrequently given. Similar results were seen in the complete population before excluding any patient (online supplemental figure 1). The MEWS scores follow a clear right-skewed distribution (figure 1B). A bar chart of the absolute counts of the various Netherlands Triage System (NTS) scores (A) and Modified Early Warning Scores (MEWS) (B) in the study population. Comparison of NTS and MEWS scores Generally, the proportion of lower (more urgent) NTS scores increases with increasing (more urgent) MEWS scores (figure 2). In table 2, we present the counts of the different combinations of NTS and MEWS scores assigned. Notably, high NTS scores (non-urgent) never co-occur with high (urgent) MEWS scores, while low (more urgent) NTS scores do co-occur with low (non-urgent) MEWS scores. For example, the combination of NTS 1/MEWS 0 is reported in 120/12 317 (1%) instances and the combination of NTS 1/MEWS 2 in 388/12.317 (3.2%). Modified Early Warning Scores (MEWS) and Netherlands Triage System (NTS) scores compared. Frequencies of patients with all different Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) combination In tables 3 and 4, we demonstrate the outcomes of patients with each combination of NTS and MEWS scores. Where the NTS was notably more urgent than the MEWS, the admission and mortality rates are lower than the average in the population. In the above example of an NTS 1 of 1 and MEWS of 0, the admission rate (34%) and mortality rate (2%) are lower than the average admission rate of 40.6% and mortality rate of 3.9%. Fraction of patients admitted to the hospital stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score NA, not available. Fraction of patients who died within 30 days stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score NA, not available. Further, for any NTS score, the admission and mortality ranges can vary greatly for different MEWS scores in those same patients. 
For example, for patients with an NTS score of 2 the admission rate ranged from 29% to 83% depending on the MEWS score. Generally, the proportion of lower (more urgent) NTS scores increases with increasing (more urgent) MEWS scores (figure 2). In table 2, we present the counts of the different combinations of NTS and MEWS scores assigned. Notably, high NTS scores (non-urgent) never co-occur with high (urgent) MEWS scores, while low (more urgent) NTS scores do co-occur with low (non-urgent) MEWS scores. For example, the combination of NTS 1/MEWS 0 is reported in 120/12 317 (1%) instances and the combination of NTS 1/MEWS 2 in 388/12.317 (3.2%). Modified Early Warning Scores (MEWS) and Netherlands Triage System (NTS) scores compared. Frequencies of patients with all different Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) combination In tables 3 and 4, we demonstrate the outcomes of patients with each combination of NTS and MEWS scores. Where the NTS was notably more urgent than the MEWS, the admission and mortality rates are lower than the average in the population. In the above example of an NTS 1 of 1 and MEWS of 0, the admission rate (34%) and mortality rate (2%) are lower than the average admission rate of 40.6% and mortality rate of 3.9%. Fraction of patients admitted to the hospital stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score NA, not available. Fraction of patients who died within 30 days stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score NA, not available. Further, for any NTS score, the admission and mortality ranges can vary greatly for different MEWS scores in those same patients. For example, for patients with an NTS score of 2 the admission rate ranged from 29% to 83% depending on the MEWS score. Paired AUC analysis The ROC curves are presented in figure 3. Figure 3A shows the MEWS score has a higher AUC for predicting 30-day all-cause mortality (0.70; 95% CI=0.67 to 0.72), compared with the NTS score (0.60; 95% CI=0.57 to 0.62) (p<0.001). In figure 3B, we see that the MEWS score also has a higher AUC for hospital admission (0.65; 95% CI=0.65 to 0.66), compared with the NTS score (0.60; 95% CI=0.60 to 0.61). (p<0.001). The receiver operator characteristics (ROC) curves and corresponding area under the curve (AUC) for both the Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) regarding 30-day mortality (A) or hospital admission (B). The ROC curves are presented in figure 3. Figure 3A shows the MEWS score has a higher AUC for predicting 30-day all-cause mortality (0.70; 95% CI=0.67 to 0.72), compared with the NTS score (0.60; 95% CI=0.57 to 0.62) (p<0.001). In figure 3B, we see that the MEWS score also has a higher AUC for hospital admission (0.65; 95% CI=0.65 to 0.66), compared with the NTS score (0.60; 95% CI=0.60 to 0.61). (p<0.001). The receiver operator characteristics (ROC) curves and corresponding area under the curve (AUC) for both the Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) regarding 30-day mortality (A) or hospital admission (B). Baseline characteristics: We identified 55 086 ED visits by 39 907 unique adult patients between 1 September 2018 and 24 June 2020. In 53 106 of these visits, the NTS triage score was recorded. Of these patients, 12 452 patients had a documented MEWS score. 
After exclusion of patients with an NTS score of 0, the final study population consisted of 12 317 unique visits. Table 1 shows the baseline characteristics of this study population. Baseline characteristics of the study population *Only complaints with a frequency over 500 are presented. Frequency distributions of NTS and MEWS scores: In figure 1A, we present the absolute counts of the various NTS scores. Notably, the NTS scores do not seem to follow any particular distribution; the majority of patients are assigned levels 2 and 3, and the NTS score of 4 is infrequently given. Similar results were seen in the complete population before excluding any patient (online supplemental figure 1). The MEWS scores follow a clear right-skewed distribution (figure 1B). A bar chart of the absolute counts of the various Netherlands Triage System (NTS) scores (A) and Modified Early Warning Scores (MEWS) (B) in the study population. Comparison of NTS and MEWS scores: Generally, the proportion of lower (more urgent) NTS scores increases with increasing (more urgent) MEWS scores (figure 2). In table 2, we present the counts of the different combinations of NTS and MEWS scores assigned. Notably, high NTS scores (non-urgent) never co-occur with high (urgent) MEWS scores, while low (more urgent) NTS scores do co-occur with low (non-urgent) MEWS scores. For example, the combination of NTS 1/MEWS 0 is reported in 120/12 317 (1%) instances and the combination of NTS 1/MEWS 2 in 388/12 317 (3.2%). Modified Early Warning Scores (MEWS) and Netherlands Triage System (NTS) scores compared. Frequencies of patients with all different Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) combinations. In tables 3 and 4, we demonstrate the outcomes of patients with each combination of NTS and MEWS scores. Where the NTS was notably more urgent than the MEWS, the admission and mortality rates are lower than the average in the population. In the above example of an NTS of 1 and MEWS of 0, the admission rate (34%) and mortality rate (2%) are lower than the average admission rate of 40.6% and mortality rate of 3.9%. Fraction of patients admitted to the hospital stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score NA, not available. Fraction of patients who died within 30 days stratified based on their Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) score NA, not available. Further, for any NTS score, the admission and mortality ranges can vary greatly for different MEWS scores in those same patients. For example, for patients with an NTS score of 2, the admission rate ranged from 29% to 83% depending on the MEWS score. Paired AUC analysis: The ROC curves are presented in figure 3. Figure 3A shows the MEWS score has a higher AUC for predicting 30-day all-cause mortality (0.70; 95% CI=0.67 to 0.72), compared with the NTS score (0.60; 95% CI=0.57 to 0.62) (p<0.001). In figure 3B, we see that the MEWS score also has a higher AUC for hospital admission (0.65; 95% CI=0.65 to 0.66), compared with the NTS score (0.60; 95% CI=0.60 to 0.61) (p<0.001). The receiver operating characteristic (ROC) curves and corresponding area under the curve (AUC) for both the Modified Early Warning Score (MEWS) and Netherlands Triage System (NTS) regarding 30-day mortality (A) or hospital admission (B). 
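For readers who want to reproduce this kind of analysis on their own triage data, the sketch below mirrors the steps named in the Methods: a cross-table of NTS by MEWS combinations, outcome fractions per combination, and DeLong's test for two correlated ROC curves via the pROC package. It is a minimal illustration only; the data frame `ed` and the columns `nts`, `mews`, `admitted` and `died30d` are assumed names for this example, not the study's actual variables, and the authors' own analysis code was not published.

```r
# Minimal sketch (assumed variable names), mirroring the analysis described
# in the Methods: cross-tables of score combinations and a paired DeLong
# comparison of the two ROC curves with the pROC package.
library(pROC)

# Counts per NTS/MEWS combination (cf. table 2) and the admission and
# 30-day mortality fractions per combination (cf. tables 3 and 4)
table(ed$nts, ed$mews)
tapply(ed$admitted, list(ed$nts, ed$mews), mean)
tapply(ed$died30d,  list(ed$nts, ed$mews), mean)

# ROC curves for 30-day mortality; pROC detects the direction automatically,
# which matters because low NTS but high MEWS values indicate higher urgency
roc_mews <- roc(response = ed$died30d, predictor = ed$mews)
roc_nts  <- roc(response = ed$died30d, predictor = ed$nts)

auc(roc_mews); ci.auc(roc_mews)   # AUC with 95% CI for MEWS
auc(roc_nts);  ci.auc(roc_nts)    # AUC with 95% CI for NTS

# DeLong's test for two correlated ROC curves (same visits, so paired)
roc.test(roc_mews, roc_nts, method = "delong", paired = TRUE)
```

The same pair of roc objects can be rebuilt with hospital admission as the response for the second comparison, and pROC's ggroc() can draw figure-3-style curves with ggplot2.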
Discussion: We compared a traditional complaint-based triage scale and an EWS, represented by the NTS and MEWS score, respectively, on their ability to recognise patients in need of urgent care. The predictive performance of the MEWS score was significantly better than that of the NTS for 30-day mortality (0.70 vs 0.60; p<0.001) and hospital admission (0.65 vs 0.60; p<0.001), which are both well-studied surrogate markers for the urgent need of care. Furthermore, in instances with a particularly large discrepancy between the scores, the MEWS score seems to more accurately capture the urgency level that is warranted. Notably, neither tool reaches an excellent performance. While the MEWS reaches a fair (0.7–0.8) performance for 30-day mortality, all other AUCs can be considered poor (0.6–0.7).9 Complaint-based emergency triage scales such as the ESI, MTS and CTAS have been validated in at least 14 studies.26 A major challenge with the validation of these triage systems is the determination of an appropriate reference standard. The lack of a consensus definition about which patients actually require urgent care makes research in this field inherently difficult and limited in the ability to draw firm conclusions. In general, criterion validity and construct validity are the two main methodologies used to validate triage systems. With criterion validity methods, performance of a triage system is compared with a reference standard, which is usually an expert panel.26 30 These studies report the validity of the triage scale as a function of the inter-rater agreement between the triagists and the expert panel and generally show fair agreement.26 30 Specifically for the NTS score, a recent study showed good agreement between triagists and an expert panel for 41 written cases.31 Although criterion validity methods could potentially detect true urgency best, they are labour intensive and cannot capture the full spectrum of clinical scenarios as seen in the ED.26 Given these limitations, and the fact that there is still significant subjectivity involved, researchers have usually opted for a method based on construct validity to validate triage tools, as we also did.26 With construct validity, surrogate markers that are deemed fair proxies for high urgency are used as outcome measures.26 These surrogates include but are not limited to admission rates, resource use, ED length of stay, overall costs and mortality rates. In the absence of a gold standard, construct validity methods have been named the ‘silver standard’ when it comes to validating triage systems.30 Studies generally show that the complaint-based triage scales like ESI, MTS and CTAS are associated with the surrogate markers for urgency. The most studied marker is hospital admission.26 The original validation study for the NTS score also showed significant associations of the NTS scores with hospital admission and resource use.4 MEWS scores and other EWS tools based on vital signs are actually created to detect the outcomes used as surrogate outcomes for urgent care needs. It is therefore no surprise that these models have good to excellent accuracy for detecting these outcomes.9–14 Our study adds to literature suggesting that EWS tools may have added clinical value in ED triage, either by augmenting or by replacing the current complaint-based triage scales. 
Several studies have explored the stand-alone use of EWS in ED triage, with the same surrogate endpoints we used.9 18 20–22 For example, Spencer and colleagues found AUCs of EWS scores for hospital admission ranging from 0.54 to 0.70 and Lee et al found AUCs of the MEWS for 30-day mortality of 0.779.18 20McCabe and colleagues specifically studied the use of an EWS in conjunction with the MTS.19 The study showed the EWS addition led to a more risk-adverse triage, but increased the overall ED length of stay, suggesting that these tools may work better separately. The current study performed a direct comparison between a complaint-based triage scale and EWS, represented by the MEWS and NTS scores. Generally, these scores have much overlap and high NTS scores (non-urgent) never co-occur with high (urgent) MEWS. However, more urgent NTS scores do occur in combination with non-urgent MEWS scores. In these situations, the MEWS score seems to be more reflective of the urgency since the admission and mortality rates are lower than average in this group. Furthermore, the AUCs of the MEWS were significantly higher than those of the NTS for surrogate markers of urgent care needs. Besides the performance of these scores, we believe the MEWS score is less complex and easier to use during triage since it consists of just eight items. Furthermore, from the distribution of the MEWS and NTS scores it appears that MEWS is better able to separate patients with lower from higher urgency. In our study, most needed, nearly half of the patients had an NTS score of 1 or 2, indicating the highest urgency. On the other hand, the right-skewed distribution of MEWS score found that most urgent cases are relatively rare and could be distinguished from lower urgency cases. Finally, using the MEWS score during triage will facilitate a continuous and comparable assessment over the course of hospital stay since it is also used in the hospital. One aspect that favours the complaint-based approaches such as NTS is that they can be used to recognise specific conditions, such as acute angle-closure glaucoma or compartment syndrome, in which a short time-to-treatment is especially beneficial. In these situations, the urgency is not always reflected in a higher MEWS score as vital signs can be normal. However, currently used complaint-based triage systems have rarely been developed or validated in ways to show that these scores actually perform this function. Strengths and limitations Our study has several strengths that distinguish this work from what has been published before. Through the use of deidentified EHR data, we were able to study a large population of patients which reflects a wide variety of clinical scenarios. The recorded MEWS and NTS scores were measured in the same patients at the same time, which lowers the chance that these results were biased. Other studies have often calculated clinical scores based on separate measurements, while our analysis is based on a structured data field that included a fully recorded MEWS score at the moment of triage. Several limitations of the current study need to be addressed. As noted above, studies on triage urgency, including this one, are inherently limited by that fact that there is no gold standard for acuity. Our study used surrogate outcomes for urgent need of care, which are more reflective of severity of disease than urgent care needs. 
Since EWS tools are specifically created to detect poor outcomes, they may do better when we associate them with these surrogate markers rather than with ‘true’ urgency as assessed by an expert panel through criterion validity methods. Nevertheless, the criterion validity approach also has its limitations and subjectivity, as addressed in previous paragraphs. Another limitation of our study is that it is a retrospective study with potential for selection bias. We only examined situations when both MEWS and NTS score were available, which may have resulted in more urgent patients being included. While we had documented NTS scores for 53 106 patients, we only had MEWS scores for 12 452 of those patients. However, we show that the distribution of NTS scores is similar in the complete population compared with the study population of patients who have both scores, indicating that missing MEWS scores occur across the spectrum of disease severity according to NTS. Furthermore, the overall distribution of MEWS scores in our population resembles the distribution in other cohorts.14 20 Our study has several strengths that distinguish this work from what has been published before. Through the use of deidentified EHR data, we were able to study a large population of patients which reflects a wide variety of clinical scenarios. The recorded MEWS and NTS scores were measured in the same patients at the same time, which lowers the chance that these results were biased. Other studies have often calculated clinical scores based on separate measurements, while our analysis is based on a structured data field that included a fully recorded MEWS score at the moment of triage. Several limitations of the current study need to be addressed. As noted above, studies on triage urgency, including this one, are inherently limited by that fact that there is no gold standard for acuity. Our study used surrogate outcomes for urgent need of care, which are more reflective of severity of disease than urgent care needs. Since EWS tools are specifically created to detect poor outcomes, they may do better when we associate them with these surrogate markers rather than with ‘true’ urgency as assessed by an expert panel through criterion validity methods. Nevertheless, the criterion validity approach also has its limitations and subjectivity, as addressed in previous paragraphs. Another limitation of our study is that it is a retrospective study with potential for selection bias. We only examined situations when both MEWS and NTS score were available, which may have resulted in more urgent patients being included. While we had documented NTS scores for 53 106 patients, we only had MEWS scores for 12 452 of those patients. However, we show that the distribution of NTS scores is similar in the complete population compared with the study population of patients who have both scores, indicating that missing MEWS scores occur across the spectrum of disease severity according to NTS. Furthermore, the overall distribution of MEWS scores in our population resembles the distribution in other cohorts.14 20 Strengths and limitations: Our study has several strengths that distinguish this work from what has been published before. Through the use of deidentified EHR data, we were able to study a large population of patients which reflects a wide variety of clinical scenarios. The recorded MEWS and NTS scores were measured in the same patients at the same time, which lowers the chance that these results were biased. 
Other studies have often calculated clinical scores based on separate measurements, while our analysis is based on a structured data field that included a fully recorded MEWS score at the moment of triage. Several limitations of the current study need to be addressed. As noted above, studies on triage urgency, including this one, are inherently limited by that fact that there is no gold standard for acuity. Our study used surrogate outcomes for urgent need of care, which are more reflective of severity of disease than urgent care needs. Since EWS tools are specifically created to detect poor outcomes, they may do better when we associate them with these surrogate markers rather than with ‘true’ urgency as assessed by an expert panel through criterion validity methods. Nevertheless, the criterion validity approach also has its limitations and subjectivity, as addressed in previous paragraphs. Another limitation of our study is that it is a retrospective study with potential for selection bias. We only examined situations when both MEWS and NTS score were available, which may have resulted in more urgent patients being included. While we had documented NTS scores for 53 106 patients, we only had MEWS scores for 12 452 of those patients. However, we show that the distribution of NTS scores is similar in the complete population compared with the study population of patients who have both scores, indicating that missing MEWS scores occur across the spectrum of disease severity according to NTS. Furthermore, the overall distribution of MEWS scores in our population resembles the distribution in other cohorts.14 20 Conclusion: We conclude that EWSs outperform currently used ED triage scales based on patient complaints regarding hospitalisation and 30-day mortality. In cases where these approaches yield particularly different urgency scores, the EWS, represented by the MEWS in our study, seems to assess the need for urgent care better than the complaint based NTS score. The results of this study suggest that EWSs could potentially replace the current emergency triage systems.
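As a companion to the scoring logic summarised in the Methods above (seven vital-sign parameters plus an optional point when the nurse is worried, with Dutch hospitals prompted to act at a total of 3), the sketch below shows how such a total could be assembled once each parameter has already been converted into points. The function and argument names are assumptions for illustration; the published per-parameter MEWS weightings are deliberately not reproduced here.

```r
# Minimal sketch (assumed names; not the published MEWS weighting table):
# sum already-scored MEWS components and flag the Dutch action threshold of 3.
mews_total <- function(points, nurse_worried = FALSE) {
  # `points`: one numeric entry per scored parameter (systolic BP, HR, RR,
  # temperature, peripheral oxygen saturation, consciousness, urine output)
  total <- sum(points, na.rm = TRUE) + as.integer(nurse_worried)
  list(total = total, action_needed = total >= 3)
}

# Example: two parameters scoring 1 point each plus a worried nurse
mews_total(points = c(0, 1, 1, 0, 0, 0, 0), nurse_worried = TRUE)
# returns total = 3 and action_needed = TRUE
```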
Background: Emergency triage systems are used globally to prioritise care based on patients' needs. These systems are commonly based on patient complaints, while the need for timely interventions on regular hospital wards is usually assessed with early warning scores (EWS). We aim to directly compare the ability of currently used triage scales and EWS scores to recognise patients in need of urgent care in the ED. Methods: We performed a retrospective, single-centre study on all patients who presented to the ED of a Dutch Level 1 trauma centre, between 1 September 2018 and 24 June 2020 and for whom a Netherlands Triage System (NTS) score as well as a Modified Early Warning Score (MEWS) was recorded. The performance of these scores was assessed using surrogate markers for true urgency and presented using bar charts, cross tables and a paired area under the curve (AUC). Results: We identified 12 317 unique patient visits where NTS and MEWS scores were documented during triage. A paired comparison of the AUC of these scores showed that the MEWS score had a significantly better AUC than the NTS for predicting the need for hospital admission (0.65 vs 0.60; p<0.001) or 30-day all-cause mortality (0.70 vs 0.60; p<0.001). Furthermore, when non-urgent MEWS scores co-occur with urgent NTS scores, the MEWS score seems to more accurately capture the urgency level that is warranted. Conclusions: The results of this study suggest that EWSs could potentially be used to replace the current emergency triage systems.
Introduction: Over the past decades, ED presentation rates have increased worldwide.1 At times of supply and demand mismatches, medical resources should be allocated based on the patients’ needs to ensure patient safety.1 2 Emergency triage systems are used globally to assess these specific needs. The performance of any emergency triage system is dependent on the environment in which it is used. Therefore, most countries use modified international triage systems to fit their particular situation. Commonly known triage scales include the internationally used Emergency Severity Index (ESI), the UK-based Manchester Triage Scale (MTS) and the Canadian Triage and Acuity Scales (CTAS).3 In Holland, the Netherlands Triage System (NTS) is used, which is a modified version of the MTS.4 A common theme among all triage systems is that these are decision trees based on patient complaints. Specific symptoms or high pain scores will result in higher urgency levels. Recently, two large systematic reviews have shown that the performance of triage scores varies considerably and that a significant part of the population may not be designated to the appropriate acuity group.5 6 Furthermore, there has been debate over the impractical complexity of the current triage systems and the need to rethink ED triage.7 The complaint-based approach during emergency triage is noticeably different from the simple early warning scores (EWS) used to detect clinical deterioration and the need for timely intervention in patients admitted to in-hospital wards. In the Netherlands, the Modified Early Warning Score (MEWS) is used in this regard.8 The EWS scores can accurately detect patients at high risk of deterioration and have been studied in numerous settings.9–14 Although EWS scores have been extensively studied for use in ED triage, they were never specifically developed to be triage tools.15–22 Furthermore, EWS scores and triage scales have not been compared head-to-head. In this study, we aim to compare the ability of currently used triage scales and EWS scores to recognise patients in need of urgent care in the ED. These two approaches will be represented by the NTS and MEWS scores, respectively, as they are commonly used in the Netherlands. Conclusion: We conclude that EWSs outperform currently used ED triage scales based on patient complaints regarding hospitalisation and 30-day mortality. In cases where these approaches yield particularly different urgency scores, the EWS, represented by the MEWS in our study, seems to assess the need for urgent care better than the complaint based NTS score. The results of this study suggest that EWSs could potentially replace the current emergency triage systems.
Background: Emergency triage systems are used globally to prioritise care based on patients' needs. These systems are commonly based on patient complaints, while the need for timely interventions on regular hospital wards is usually assessed with early warning scores (EWS). We aim to directly compare the ability of currently used triage scales and EWS scores to recognise patients in need of urgent care in the ED. Methods: We performed a retrospective, single-centre study on all patients who presented to the ED of a Dutch Level 1 trauma centre, between 1 September 2018 and 24 June 2020 and for whom a Netherlands Triage System (NTS) score as well as a Modified Early Warning Score (MEWS) was recorded. The performance of these scores was assessed using surrogate markers for true urgency and presented using bar charts, cross tables and a paired area under the curve (AUC). Results: We identified 12 317 unique patient visits where NTS and MEWS scores were documented during triage. A paired comparison of the AUC of these scores showed that the MEWS score had a significantly better AUC than the NTS for predicting the need for hospital admission (0.65 vs 0.60; p<0.001) or 30-day all-cause mortality (0.70 vs 0.60; p<0.001). Furthermore, when non-urgent MEWS scores co-occur with urgent NTS scores, the MEWS score seems to more accurately capture the urgency level that is warranted. Conclusions: The results of this study suggest that EWSs could potentially be used to replace the current emergency triage systems.
7,073
298
[ 51, 66, 15, 108, 70, 194, 65, 192, 4, 105, 121, 378, 153, 360 ]
19
[ "nts", "mews", "scores", "score", "triage", "patients", "study", "mews scores", "nts score", "urgent" ]
[ "triage scores varies", "emergency triage systems", "performance emergency triage", "emergency triage noticeably", "emergency triage scales" ]
[CONTENT] triage | emergency department [SUMMARY]
[CONTENT] triage | emergency department [SUMMARY]
[CONTENT] triage | emergency department [SUMMARY]
[CONTENT] triage | emergency department [SUMMARY]
[CONTENT] triage | emergency department [SUMMARY]
[CONTENT] triage | emergency department [SUMMARY]
[CONTENT] Early Warning Score | Emergency Service, Hospital | Hospital Mortality | Hospitalization | Humans | Retrospective Studies | Triage [SUMMARY]
[CONTENT] Early Warning Score | Emergency Service, Hospital | Hospital Mortality | Hospitalization | Humans | Retrospective Studies | Triage [SUMMARY]
[CONTENT] Early Warning Score | Emergency Service, Hospital | Hospital Mortality | Hospitalization | Humans | Retrospective Studies | Triage [SUMMARY]
[CONTENT] Early Warning Score | Emergency Service, Hospital | Hospital Mortality | Hospitalization | Humans | Retrospective Studies | Triage [SUMMARY]
[CONTENT] Early Warning Score | Emergency Service, Hospital | Hospital Mortality | Hospitalization | Humans | Retrospective Studies | Triage [SUMMARY]
[CONTENT] Early Warning Score | Emergency Service, Hospital | Hospital Mortality | Hospitalization | Humans | Retrospective Studies | Triage [SUMMARY]
[CONTENT] triage scores varies | emergency triage systems | performance emergency triage | emergency triage noticeably | emergency triage scales [SUMMARY]
[CONTENT] triage scores varies | emergency triage systems | performance emergency triage | emergency triage noticeably | emergency triage scales [SUMMARY]
[CONTENT] triage scores varies | emergency triage systems | performance emergency triage | emergency triage noticeably | emergency triage scales [SUMMARY]
[CONTENT] triage scores varies | emergency triage systems | performance emergency triage | emergency triage noticeably | emergency triage scales [SUMMARY]
[CONTENT] triage scores varies | emergency triage systems | performance emergency triage | emergency triage noticeably | emergency triage scales [SUMMARY]
[CONTENT] triage scores varies | emergency triage systems | performance emergency triage | emergency triage noticeably | emergency triage scales [SUMMARY]
[CONTENT] nts | mews | scores | score | triage | patients | study | mews scores | nts score | urgent [SUMMARY]
[CONTENT] nts | mews | scores | score | triage | patients | study | mews scores | nts score | urgent [SUMMARY]
[CONTENT] nts | mews | scores | score | triage | patients | study | mews scores | nts score | urgent [SUMMARY]
[CONTENT] nts | mews | scores | score | triage | patients | study | mews scores | nts score | urgent [SUMMARY]
[CONTENT] nts | mews | scores | score | triage | patients | study | mews scores | nts score | urgent [SUMMARY]
[CONTENT] nts | mews | scores | score | triage | patients | study | mews scores | nts score | urgent [SUMMARY]
[CONTENT] triage | ews | ews scores | scores | emergency | scales | systems | triage systems | based | need [SUMMARY]
[CONTENT] data | nts | presented | mews | vumc | level | patient | score | scores | urgency [SUMMARY]
[CONTENT] nts | mews | scores | score | patients | figure | rate | urgent | mews scores | nts scores [SUMMARY]
[CONTENT] ewss | based | score results study | study assess need urgent | assess need urgent care | urgent care better | urgent care better complaint | urgency scores ews | complaint based nts | based patient complaints hospitalisation [SUMMARY]
[CONTENT] nts | scores | mews | score | patients | triage | study | patient | patient involved | involved [SUMMARY]
[CONTENT] nts | scores | mews | score | patients | triage | study | patient | patient involved | involved [SUMMARY]
[CONTENT] ||| EWS ||| EWS [SUMMARY]
[CONTENT] Dutch | between 1 September 2018 | 24 June 2020 | NTS ||| [SUMMARY]
[CONTENT] 12 | NTS | MEWS ||| MEWS | AUC | NTS | 0.65 | 0.60 | 30-day | 0.70 | 0.60 ||| NTS | MEWS [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| EWS ||| EWS ||| Dutch | between 1 September 2018 | 24 June 2020 | NTS ||| ||| ||| 12 | NTS | MEWS ||| MEWS | AUC | NTS | 0.65 | 0.60 | 30-day | 0.70 | 0.60 ||| NTS | MEWS ||| [SUMMARY]
[CONTENT] ||| EWS ||| EWS ||| Dutch | between 1 September 2018 | 24 June 2020 | NTS ||| ||| ||| 12 | NTS | MEWS ||| MEWS | AUC | NTS | 0.65 | 0.60 | 30-day | 0.70 | 0.60 ||| NTS | MEWS ||| [SUMMARY]
Severe Phenotype of Non-alcoholic Fatty Liver Disease in Pediatric Patients with Subclinical Hypothyroidism: a Retrospective Multicenter Study from Korea.
34032030
It is uncertain whether non-alcoholic fatty liver disease (NAFLD) is associated with subclinical hypothyroidism (SH) in pediatric patients. The purpose of this study was to investigate the prevalence and related factors of SH in pediatric patients with NAFLD. We also evaluated the association between liver fibrosis and SH.
BACKGROUND
We retrospectively reviewed medical records for patients aged 4 to 18 years who were diagnosed with NAFLD and tested for thyroid function from January 2015 to December 2019 at 10 hospitals in Korea.
METHODS
The study included 428 patients with NAFLD. The prevalence of SH in pediatric NAFLD patients was 13.6%. In multivariate logistic regression, higher levels of steatosis on ultrasound and higher aspartate aminotransferase to platelet count ratio index (APRI) score were associated with increased risk of SH. Using receiver operating characteristic curves, the optimal cutoff value of the APRI score for predicting SH was 0.6012 (area under the curve, 0.67; P < 0.001; sensitivity 72.4%, specificity 61.9%, positive predictive value 23%, and negative predictive value 93.5%).
RESULTS
SH was often observed in patients with NAFLD, more frequently in patients with more severe liver damage. Thyroid function tests should be performed on pediatric NAFLD patients, especially those with higher grades of liver steatosis and fibrosis.
CONCLUSION
[ "Adolescent", "Child", "Fatty Liver", "Female", "Humans", "Hypothyroidism", "Liver Cirrhosis", "Male", "Non-alcoholic Fatty Liver Disease", "Prevalence", "Republic of Korea", "Retrospective Studies", "Thyroid Function Tests" ]
8144595
INTRODUCTION
The prevalence of non-alcoholic fatty liver disease (NAFLD) is significantly increasing in direct relation with the incidence of obesity. One study reported that nearly 30% of children with obesity had NAFLD.1 NAFLD is diagnosed when hepatic steatosis is present in imaging or histology, while excluding secondary causes of hepatic fat accumulation.2 Thyroid hormones are known to play an important role in regulating insulin resistance and lipid metabolism, which are known to affect the pathogenesis of NAFLD.3 Impaired thyroid hormone signaling reduces fatty acid utilization and the glucose-sensing machinery of β-cells in the liver, which contributes to hepatic insulin resistance.4 Other factors, such as oxidative stress, lipid peroxidation, and triglyceride accumulation, are caused by excessive thyroid-stimulating hormone (TSH) and induce liver damage.56 In addition, elevated TSH has a positive association with obesity through the mechanism of increasing the number of adipocytes; one study reported visceral adipose mass was the best predictor for TSH elevation.789 Thus, thyroid hormones may have a close relationship with liver disease, especially the pathogenesis of NAFLD and non-alcoholic steatohepatitis (NASH).10 In adults, subclinical hypothyroidism (SH) has been considered as a risk factor for metabolic syndrome and NAFLD. Furthermore, the possibility of liver steatosis improvement through SH treatment is also raised.1112 However, the current findings regarding the association of NAFLD with thyroid function remain controversial in children.1314 When treating NAFLD, detecting disease stage is important. In children, invasive methods such as liver biopsy can be difficult to perform. Scoring systems such as the fibrosis-4 (FIB-4) index and the aspartate aminotransferase to platelet count ratio index (APRI) can detect advanced fibrosis and disease progression more easily in patients with NAFLD.15 The clinical value of these markers is useful, so if any association were identified between these markers and SH, then TSH level could be used as a new biomarker for disease severity. A previous study reported a close relationship between thyroid function and NAFLD severity; specifically that advanced fibrosis was significantly higher in subjects with low to normal thyroid function and SH than in those with normal thyroid function.16 Therefore, we aimed to evaluate the prevalence of SH in NAFLD patients, and the association between NAFLD and SH in children. The second aim of the study was to assess the relationship between TSH elevation and liver disease severity in pediatric NAFLD patients.
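For reference, the standard FIB-4 formula mentioned alongside APRI is sketched below (Sterling et al. formulation; FIB-4 was developed in adults, so its thresholds do not transfer directly to children). The input values in the example are invented.

```python
import math

def fib4(age_years, ast_iu_l, alt_iu_l, platelets_10e9_l):
    """FIB-4 = (age x AST) / (platelet count x sqrt(ALT)); higher values suggest more fibrosis."""
    return (age_years * ast_iu_l) / (platelets_10e9_l * math.sqrt(alt_iu_l))

# Invented example values for a 14-year-old patient.
print(round(fib4(age_years=14, ast_iu_l=60, alt_iu_l=85, platelets_10e9_l=250), 2))  # -> 0.36
```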
METHODS
Patients and study design: Between January 2015 and December 2019, patients aged 4 to 18 years who had been diagnosed with NAFLD were included in this multicenter retrospective study, which was conducted at the pediatrics departments of 10 centers in Korea: Chungnam National University Hospital, Inje University Haeundae Paik Hospital, Chung-Ang University Hospital, Jeonbuk National University Hospital, Kyungpook National University Children's Hospital, Korea University Anam Hospital, Soonchunhyang University Bucheon Hospital, Nowon Eulji Medical Center, Keimyung University Dongsan Medical Center, and Inje University Ilsan Paik Hospital. The exclusion criteria were as follows: use of medications such as thyroid hormone and antithyroid drugs, or laboratory or clinical evidence suggesting or confirming an underlying chronic liver disease (e.g., viral hepatitis, autoimmune hepatitis, Wilson disease, or other liver disease).

Clinical and laboratory assessments: Body weight and height were measured by a trained technician, and the body mass index (BMI) was calculated by dividing the weight in kilograms by the square of the height in meters. Laboratory tests included the following: TSH, free thyroxine (FT4), alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GTP), total cholesterol, triglyceride, low-density lipoprotein and high-density lipoprotein (HDL) cholesterol, and fasting glucose. The APRI score, a noninvasive marker of liver fibrosis, was calculated as follows: APRI = [AST level (IU/L) / AST upper limit of normal (IU/L)] / platelet count (10⁹/L).1718

Definitions: NAFLD was diagnosed on the basis of bright or hyperechoic lesions on liver imaging and an ALT level ≥ 30 IU/L.19 The degree of steatosis was classified as “mild” (grade I), “moderate” (grade II) and “severe” (grade III).20 After a repeated thyroid function test, SH was defined as a serum TSH level of > 5.00 μU/L with an FT4 level between 0.90 and 1.80 ng/dL.212223 Diabetes mellitus (DM) was defined as a fasting plasma glucose level of ≥ 126 mg/dL or a 2-hour oral glucose tolerance test result of ≥ 200 mg/dL.2425 Hypertension was defined as repeated blood pressure values at three separate visits greater than the 95th percentile for the age, sex, and height of the patient.2627 For detection of cirrhosis, an APRI cutoff score of 2.0 was more specific (91%) but less sensitive (46%). APRI values of ≤ 0.3 and ≤ 0.5 rule out significant fibrosis and cirrhosis, respectively, and a value of ≥ 1.5 rules in significant fibrosis.1828

Statistical analysis: The data are presented as frequency and percentage for categorical variables and as the mean ± standard deviation for continuous variables. Differences in the study participants' characteristics were compared across subgroups using the χ2 test or Fisher's exact test for categorical variables and the independent t-test or Mann-Whitney's U test for continuous variables, as appropriate. Differences were also compared across subgroups using analysis of variance with Scheffe's post hoc test or the Kruskal-Wallis test with Dunn's post hoc test, as appropriate. Normality of the distributions was checked with Shapiro-Wilk's test. Univariate and multivariate logistic regression analyses were performed to identify factors independently related to SH. For the prevalence of SH in pediatric NAFLD patients, the percentage and its Blyth-Still-Casella 95% confidence interval (CI) were calculated. Receiver operating characteristic (ROC) curve analysis was performed to assess the sensitivity and specificity of the APRI for predicting SH. All statistical analyses were carried out using SPSS 24.0 (SPSS Inc., Chicago, IL, USA), and P values less than 0.05 were considered statistically significant.

Ethical statement: This study was approved by the Institutional Review Boards (IRB) of Inje University Haeundae Paik Hospital and all other participating centers, and informed consent was waived due to the retrospective nature of this study (IRB number 2019-12-027).
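A minimal sketch of the APRI calculation and the SH definition used above. One interpretation is made explicit: the cut-offs quoted in the Methods (0.3/0.5/1.5/2.0) correspond to the APRI form that multiplies the AST ratio by 100, which the text's formula omits, so the ×100 factor below is an assumption; the AST upper limit of normal is an example value.

```python
def apri(ast_iu_l, ast_uln_iu_l, platelets_10e9_l):
    """APRI = (AST / AST upper limit of normal) x 100 / platelet count (10^9/L)."""
    return (ast_iu_l / ast_uln_iu_l) * 100 / platelets_10e9_l

def is_subclinical_hypothyroidism(tsh_mu_l, ft4_ng_dl):
    """Study definition: TSH > 5.00 uU/L with FT4 between 0.90 and 1.80 ng/dL on repeat testing."""
    return tsh_mu_l > 5.00 and 0.90 <= ft4_ng_dl <= 1.80

# Example with an assumed AST upper limit of normal of 40 IU/L.
print(round(apri(ast_iu_l=80, ast_uln_iu_l=40, platelets_10e9_l=220), 2))  # -> 0.91
print(is_subclinical_hypothyroidism(tsh_mu_l=6.2, ft4_ng_dl=1.1))          # -> True
```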
RESULTS
Prevalence of SH in pediatric NAFLD patients and baseline characteristics: A total of 428 patients were included, of whom 29.4% were female, and the overall mean age was 12.18 ± 3.14 years. The prevalence of SH in pediatric NAFLD patients was 13.6%. The prevalences of SH according to the steatosis grade on liver sonography were 1.1%, 11.0%, and 55.4% for mild, moderate, and severe steatosis, respectively. The characteristics of the study subjects according to TSH status are shown in Table 1. When compared by TSH status, AST and ALT levels were higher in patients with SH than in those with euthyroidism. Although short stature was not present in either group, a significantly lower height z-score was observed in the SH group. The steatosis grade on liver sonography and the APRI score, a noninvasive marker of liver fibrosis, were also significantly higher in patients with SH than in those with euthyroidism (Fig. 1). However, total cholesterol, triglyceride, HDL-cholesterol, and the presence of comorbidities did not differ between patients with SH and those with euthyroidism. Values are displayed as either frequency with percentage in parentheses or the mean ± standard deviation; Shapiro-Wilk's test was employed to test the assumption of normality. TSH = thyroid-stimulating hormone, SH = subclinical hypothyroidism, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index. (a) P values were derived from Mann-Whitney's U test; (b) P values were derived from Fisher's exact test. A comparison of parameters according to the steatosis grade as measured by ultrasonography is shown in Supplementary Table 1. Higher grades of steatosis on liver ultrasound were associated with higher levels of TSH, AST, and ALT and a higher prevalence of SH. In addition, patients with moderate to severe steatosis had higher BMI and GTP levels than those with mild steatosis, and the rate of diabetes increased.

Related factors of SH in pediatric NAFLD patients: A univariate analysis of factors associated with SH in pediatric NAFLD patients found that SH was associated with AST, ALT, steatosis grade on liver ultrasonography, and the APRI score. In multivariate analysis, SH was positively correlated with steatosis grade on liver ultrasonography and with the APRI score (Table 2). Compared with patients with euthyroidism, the proportion of APRI scores > 1.5 was significantly higher and the proportion of APRI scores < 0.5 was lower in patients with SH (Supplementary Table 2, Supplementary Fig. 1). NAFLD = non-alcoholic fatty liver disease, OR = odds ratio, CI = confidence interval.

Cutoff value of the APRI score for predicting SH: To evaluate the predictive accuracy of the APRI score for SH, area under the curve (AUC) values were calculated using an ROC curve. The APRI score was a significant predictor of SH at values of 0.6012 or higher (AUC, 0.670; P < 0.001) (Fig. 2). Sensitivity, specificity, positive predictive value, and negative predictive value were 72.4%, 61.9%, 23.0% and 93.5%, respectively (Table 3). AUC = area under the curve.
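The reported cut-off of 0.6012 comes from the ROC analysis above. A common way to choose such a threshold is to maximise Youden's J (sensitivity + specificity − 1); whether the authors used exactly this criterion is not stated, so the sketch below is illustrative and the arrays `y_sh` and `apri_scores` are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_optimal_cutoff(y_true, scores):
    """Return (threshold, sensitivity, specificity) at the point maximising Youden's J."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    j = tpr - fpr
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1.0 - fpr[best]

# y_sh: 1 if the patient has SH, 0 otherwise; apri_scores: APRI per patient (placeholder arrays).
# cutoff, sensitivity, specificity = youden_optimal_cutoff(y_sh, apri_scores)
# auc = roc_auc_score(y_sh, apri_scores)
```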
null
null
[ "Patients and study design", "Clinical and laboratory assessments", "Definitions", "Statistical analysis", "Ethical statement", "Prevalence of SH in pediatric NAFLD patients and Baseline characteristics", "Related factors of SH in pediatric NAFLD patients", "Cutoff value of APRI score for predicting SH" ]
[ "Between January 2015 and December 2019, patients aged 4 to 18 years who had been diagnosed with NAFLD were included in this multicenter retrospective study, which was conducted at the pediatrics departments of 10 centers in Korea: Chungnam National University Hospital, Inje University Haeundae Paik Hospital, Chung-Ang University Hospital, Jeonbuk National University Hospital, Kyungpook National University Children's Hospital, Korea University Anam Hospital, Soonchunhyang University Bucheon Hospital, Nowon Eulji Medical Center, Keimyung University Dongsan Medical Center, and Inje University Ilsan Paik Hospital. The exclusion criteria were as follows: use of medications such as thyroid hormone and antithyroid drugs, or laboratory or clinical evidence suggesting or confirming an underlying chronic liver disease (e.g., viral hepatitis, autoimmune hepatitis, Wilson disease, or other liver disease).", "Body weight and height were measured by a trained technician, and the body mass index (BMI) was calculated by dividing the weight in kilograms by the square of the height in meters. Laboratory tests included the following: TSH, free thyroxine (FT4), alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GTP), total cholesterol, triglyceride, low-density lipoprotein and high-density lipoprotein (HDL) cholesterol, and fasting glucose.\nThe APRI score for noninvasive markers of liver fibrosis was calculated as follows: APRI score = AST level (IU/L)/AST upper limit of normal (IU/L)/platelet count (109/L).1718", "NAFLD was diagnosed on the basis of bright or hyperechoic lesions on liver imaging and ALT level ≥ 30 IU/L.19 The degree of steatosis was classified as “mild” (grade I), “moderate” (grade II) and “severe” (grade III).20 After repeated thyroid function test, SH was defined as a serum TSH level of > 5.00 μU/L with an FT4 level between 0.90 and 1.80 ng/Dl.212223 Diabetes mellitus (DM) was defined as a fasting plasma glucose level of ≥ 126 mg/dL or a 2-hour oral glucose tolerance test result of ≥ 200 mg/dL.2425 Hypertension was defined as repeated blood pressure values at three separate visits greater than the 95th percentile for the age, sex, and height of the patient.2627\nFor detection of cirrhosis, using an APRI cutoff score of 2.0 was more specific (91%) but less sensitive (46%). APRI values of ≤ 0.3 and ≤ 0.5 rule out significant fibrosis and cirrhosis, respectively, and a value of ≥ 1.5 rules in significant fibrosis.1828", "The data are presented as frequency and percentage for categorical variables and as the mean ± standard deviation for continuous variables. Differences in the study participants' characteristics were compared across subgroups using the χ2 test or Fisher's exact test for categorical variables and the independent t-test or Mann-Whitney's U test for continuous variables as appropriate. Differences in the study participants' characteristics were also compared across subgroups using the analysis of variance with Scheffe's post hoc test or the Kruskal-Wallis test with Dunn's post hoc test as appropriate. To check if the distribution was normal, we used Shapiro-Wilk's test. Univariate and multivariate analyses using logistic regression were performed to identify prognostic factors that are independently related to SH. For the prevalence of SH in pediatric NAFLD patients, the percentage and its Blyth-Still-Casella 95% confidence interval (CI) were calculated. 
Receiver operating characteristic (ROC) curve analysis was performed to assess the sensitivity and specificity of APRI for predicting SH. All statistical analyses were carried out using SPSS 24.0 (SPSS Inc., Chicago, IL, USA), and P values less than 0.05 were considered statistically significant.", "This study was approved by the Institutional Review Boards (IRB) of Inje University Haeundae Paik Hospital and all other participating centers, and informed consent was waived due to the retrospective nature of this study (IRB number 2019-12-027).", "A total of 428 patients were included, of which 29.4% were female, and the overall mean age was 12.18 ± 3.14 years. The prevalence of SH in pediatric NAFLD patients was 13.6%. The prevalences of SH according to the steatosis grade of liver sonography were 1.1%, 11.0%, and 55.4% in mild, moderate, and severe, respectively.\nThe characteristics of the study subjects according to TSH status are shown in Table 1. In comparison by TSH status, AST and ALT levels were higher in patients with SH than in those with euthyroidism. Although there was no short stature in both groups, significantly lower height z-score was observed in the SH group. The steatosis grade of liver sonography and the APRI score, a noninvasive marker of liver fibrosis, were also significantly higher in patients with SH than in those with euthyroidism (Fig. 1). However, total cholesterol, triglyceride, HDL-cholesterol, and presence of comorbidities were not different between patients with SH and those with euthyroidism.\nValues are displayed as either frequency with percentage in parentheses or the mean ± standard deviation. Shapiro-Wilk's test was employed to test the assumption of normality.\nTSH = thyroid-stimulating hormone, SH = subclinical hypothyroidism, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index.\naP values were derived from Mann-Whitney's U test; bP values were derived by Fisher's exact test.\nAPRI = aspartate aminotransferase to platelet count ratio index, SH = subclinical hypothyroidism.\nA comparison of parameters according to the steatosis grade as measured by ultrasonography is shown in Supplementary Table 1. Higher grades of steatosis confirmed by liver ultrasound were associated with higher levels of TSH, AST, ALT and prevalence of SH. In addition, in patients with moderate to severe grades of steatosis, BMI and GTP levels were higher than in those with mild severity, and the rate of diabetes increased.", "A univariate analysis of factors associated with SH in pediatric NAFLD patients found that SH was associated with AST, ALT, steatosis grade by liver ultrasonography, and the APRI score. In multivariate analysis, SH was positively correlated with steatosis grade by liver ultrasonography and with the APRI score (Table 2). Compared to patients with euthyroidism, the proportion of APRI scores > 1.5 was significantly higher and the proportion of APRI scores < 0.5 was lower in patients with SH (Supplementary Table 2, Supplementary Fig. 
1).\nNAFLD = non-alcoholic fatty liver disease, OR = odds ratio, CI = confidence interval, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index.", "To evaluate the predictive accuracy of SH using the APRI score, area under the curve (AUC) values were calculated using an ROC curve. As a result, the APRI score was found to be significant as a predictor of SH when it was 0.6012 or higher (AUC, 0.670; P < 0.001) (Fig. 2). Sensitivity, specificity, positive predictive value, and negative predictive value were 72.4%, 61.9%, 23.0% and 93.5%, respectively (Table 3).\nAPRI = aspartate aminotransferase to platelet count ratio index.\nAPRI = aspartate aminotransferase to platelet count ratio index, AUC = area under the curve, SH = subclinical hypothyroidism." ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Patients and study design", "Clinical and laboratory assessments", "Definitions", "Statistical analysis", "Ethical statement", "RESULTS", "Prevalence of SH in pediatric NAFLD patients and Baseline characteristics", "Related factors of SH in pediatric NAFLD patients", "Cutoff value of APRI score for predicting SH", "DISCUSSION" ]
[ "The prevalence of non-alcoholic fatty liver disease (NAFLD) is significantly increasing in direct relation with the incidence of obesity. One study reported that nearly 30% of children with obesity had NAFLD.1 NAFLD is diagnosed when hepatic steatosis is present in imaging or histology, while excluding secondary causes of hepatic fat accumulation.2\nThyroid hormones are known to play an important role in regulating insulin resistance and lipid metabolism, which are known to affect the pathogenesis of NAFLD.3 Impaired thyroid hormone signaling reduces fatty acid utilization and the glucose-sensing machinery of β-cells in the liver, which contributes to hepatic insulin resistance.4 Other factors, such as oxidative stress, lipid peroxidation, and triglyceride accumulation, are caused by excessive thyroid-stimulating hormone (TSH) and induce liver damage.56 In addition, elevated TSH has a positive association with obesity through the mechanism of increasing the number of adipocytes; one study reported visceral adipose mass was the best predictor for TSH elevation.789 Thus, thyroid hormones may have a close relationship with liver disease, especially the pathogenesis of NAFLD and non-alcoholic steatohepatitis (NASH).10 In adults, subclinical hypothyroidism (SH) has been considered as a risk factor for metabolic syndrome and NAFLD. Furthermore, the possibility of liver steatosis improvement through SH treatment is also raised.1112 However, the current findings regarding the association of NAFLD with thyroid function remain controversial in children.1314\nWhen treating NAFLD, detecting disease stage is important. In children, invasive methods such as liver biopsy can be difficult to perform. Scoring systems such as the fibrosis-4 (FIB-4) index and the aspartate aminotransferase to platelet count ratio index (APRI) can detect advanced fibrosis and disease progression more easily in patients with NAFLD.15 The clinical value of these markers is useful, so if any association were identified between these markers and SH, then TSH level could be used as a new biomarker for disease severity. A previous study reported a close relationship between thyroid function and NAFLD severity; specifically that advanced fibrosis was significantly higher in subjects with low to normal thyroid function and SH than in those with normal thyroid function.16\nTherefore, we aimed to evaluate the prevalence of SH in NAFLD patients, and the association between NAFLD and SH in children. The second aim of the study was to assess the relationship between TSH elevation and liver disease severity in pediatric NAFLD patients.", "Patients and study design Between January 2015 and December 2019, patients aged 4 to 18 years who had been diagnosed with NAFLD were included in this multicenter retrospective study, which was conducted at the pediatrics departments of 10 centers in Korea: Chungnam National University Hospital, Inje University Haeundae Paik Hospital, Chung-Ang University Hospital, Jeonbuk National University Hospital, Kyungpook National University Children's Hospital, Korea University Anam Hospital, Soonchunhyang University Bucheon Hospital, Nowon Eulji Medical Center, Keimyung University Dongsan Medical Center, and Inje University Ilsan Paik Hospital. 
The exclusion criteria were as follows: use of medications such as thyroid hormone and antithyroid drugs, or laboratory or clinical evidence suggesting or confirming an underlying chronic liver disease (e.g., viral hepatitis, autoimmune hepatitis, Wilson disease, or other liver disease).\nBetween January 2015 and December 2019, patients aged 4 to 18 years who had been diagnosed with NAFLD were included in this multicenter retrospective study, which was conducted at the pediatrics departments of 10 centers in Korea: Chungnam National University Hospital, Inje University Haeundae Paik Hospital, Chung-Ang University Hospital, Jeonbuk National University Hospital, Kyungpook National University Children's Hospital, Korea University Anam Hospital, Soonchunhyang University Bucheon Hospital, Nowon Eulji Medical Center, Keimyung University Dongsan Medical Center, and Inje University Ilsan Paik Hospital. The exclusion criteria were as follows: use of medications such as thyroid hormone and antithyroid drugs, or laboratory or clinical evidence suggesting or confirming an underlying chronic liver disease (e.g., viral hepatitis, autoimmune hepatitis, Wilson disease, or other liver disease).\nClinical and laboratory assessments Body weight and height were measured by a trained technician, and the body mass index (BMI) was calculated by dividing the weight in kilograms by the square of the height in meters. Laboratory tests included the following: TSH, free thyroxine (FT4), alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GTP), total cholesterol, triglyceride, low-density lipoprotein and high-density lipoprotein (HDL) cholesterol, and fasting glucose.\nThe APRI score for noninvasive markers of liver fibrosis was calculated as follows: APRI score = AST level (IU/L)/AST upper limit of normal (IU/L)/platelet count (109/L).1718\nBody weight and height were measured by a trained technician, and the body mass index (BMI) was calculated by dividing the weight in kilograms by the square of the height in meters. Laboratory tests included the following: TSH, free thyroxine (FT4), alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GTP), total cholesterol, triglyceride, low-density lipoprotein and high-density lipoprotein (HDL) cholesterol, and fasting glucose.\nThe APRI score for noninvasive markers of liver fibrosis was calculated as follows: APRI score = AST level (IU/L)/AST upper limit of normal (IU/L)/platelet count (109/L).1718\nDefinitions NAFLD was diagnosed on the basis of bright or hyperechoic lesions on liver imaging and ALT level ≥ 30 IU/L.19 The degree of steatosis was classified as “mild” (grade I), “moderate” (grade II) and “severe” (grade III).20 After repeated thyroid function test, SH was defined as a serum TSH level of > 5.00 μU/L with an FT4 level between 0.90 and 1.80 ng/Dl.212223 Diabetes mellitus (DM) was defined as a fasting plasma glucose level of ≥ 126 mg/dL or a 2-hour oral glucose tolerance test result of ≥ 200 mg/dL.2425 Hypertension was defined as repeated blood pressure values at three separate visits greater than the 95th percentile for the age, sex, and height of the patient.2627\nFor detection of cirrhosis, using an APRI cutoff score of 2.0 was more specific (91%) but less sensitive (46%). 
APRI values of ≤ 0.3 and ≤ 0.5 rule out significant fibrosis and cirrhosis, respectively, and a value of ≥ 1.5 rules in significant fibrosis.1828\nNAFLD was diagnosed on the basis of bright or hyperechoic lesions on liver imaging and ALT level ≥ 30 IU/L.19 The degree of steatosis was classified as “mild” (grade I), “moderate” (grade II) and “severe” (grade III).20 After repeated thyroid function test, SH was defined as a serum TSH level of > 5.00 μU/L with an FT4 level between 0.90 and 1.80 ng/Dl.212223 Diabetes mellitus (DM) was defined as a fasting plasma glucose level of ≥ 126 mg/dL or a 2-hour oral glucose tolerance test result of ≥ 200 mg/dL.2425 Hypertension was defined as repeated blood pressure values at three separate visits greater than the 95th percentile for the age, sex, and height of the patient.2627\nFor detection of cirrhosis, using an APRI cutoff score of 2.0 was more specific (91%) but less sensitive (46%). APRI values of ≤ 0.3 and ≤ 0.5 rule out significant fibrosis and cirrhosis, respectively, and a value of ≥ 1.5 rules in significant fibrosis.1828\nStatistical analysis The data are presented as frequency and percentage for categorical variables and as the mean ± standard deviation for continuous variables. Differences in the study participants' characteristics were compared across subgroups using the χ2 test or Fisher's exact test for categorical variables and the independent t-test or Mann-Whitney's U test for continuous variables as appropriate. Differences in the study participants' characteristics were also compared across subgroups using the analysis of variance with Scheffe's post hoc test or the Kruskal-Wallis test with Dunn's post hoc test as appropriate. To check if the distribution was normal, we used Shapiro-Wilk's test. Univariate and multivariate analyses using logistic regression were performed to identify prognostic factors that are independently related to SH. For the prevalence of SH in pediatric NAFLD patients, the percentage and its Blyth-Still-Casella 95% confidence interval (CI) were calculated. Receiver operating characteristic (ROC) curve analysis was performed to assess the sensitivity and specificity of APRI for predicting SH. All statistical analyses were carried out using SPSS 24.0 (SPSS Inc., Chicago, IL, USA), and P values less than 0.05 were considered statistically significant.\nThe data are presented as frequency and percentage for categorical variables and as the mean ± standard deviation for continuous variables. Differences in the study participants' characteristics were compared across subgroups using the χ2 test or Fisher's exact test for categorical variables and the independent t-test or Mann-Whitney's U test for continuous variables as appropriate. Differences in the study participants' characteristics were also compared across subgroups using the analysis of variance with Scheffe's post hoc test or the Kruskal-Wallis test with Dunn's post hoc test as appropriate. To check if the distribution was normal, we used Shapiro-Wilk's test. Univariate and multivariate analyses using logistic regression were performed to identify prognostic factors that are independently related to SH. For the prevalence of SH in pediatric NAFLD patients, the percentage and its Blyth-Still-Casella 95% confidence interval (CI) were calculated. Receiver operating characteristic (ROC) curve analysis was performed to assess the sensitivity and specificity of APRI for predicting SH. 
All statistical analyses were carried out using SPSS 24.0 (SPSS Inc., Chicago, IL, USA), and P values less than 0.05 were considered statistically significant.\nEthical statement This study was approved by the Institutional Review Boards (IRB) of Inje University Haeundae Paik Hospital and all other participating centers, and informed consent was waived due to the retrospective nature of this study (IRB number 2019-12-027).\nThis study was approved by the Institutional Review Boards (IRB) of Inje University Haeundae Paik Hospital and all other participating centers, and informed consent was waived due to the retrospective nature of this study (IRB number 2019-12-027).", "Between January 2015 and December 2019, patients aged 4 to 18 years who had been diagnosed with NAFLD were included in this multicenter retrospective study, which was conducted at the pediatrics departments of 10 centers in Korea: Chungnam National University Hospital, Inje University Haeundae Paik Hospital, Chung-Ang University Hospital, Jeonbuk National University Hospital, Kyungpook National University Children's Hospital, Korea University Anam Hospital, Soonchunhyang University Bucheon Hospital, Nowon Eulji Medical Center, Keimyung University Dongsan Medical Center, and Inje University Ilsan Paik Hospital. The exclusion criteria were as follows: use of medications such as thyroid hormone and antithyroid drugs, or laboratory or clinical evidence suggesting or confirming an underlying chronic liver disease (e.g., viral hepatitis, autoimmune hepatitis, Wilson disease, or other liver disease).", "Body weight and height were measured by a trained technician, and the body mass index (BMI) was calculated by dividing the weight in kilograms by the square of the height in meters. Laboratory tests included the following: TSH, free thyroxine (FT4), alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GTP), total cholesterol, triglyceride, low-density lipoprotein and high-density lipoprotein (HDL) cholesterol, and fasting glucose.\nThe APRI score for noninvasive markers of liver fibrosis was calculated as follows: APRI score = AST level (IU/L)/AST upper limit of normal (IU/L)/platelet count (109/L).1718", "NAFLD was diagnosed on the basis of bright or hyperechoic lesions on liver imaging and ALT level ≥ 30 IU/L.19 The degree of steatosis was classified as “mild” (grade I), “moderate” (grade II) and “severe” (grade III).20 After repeated thyroid function test, SH was defined as a serum TSH level of > 5.00 μU/L with an FT4 level between 0.90 and 1.80 ng/Dl.212223 Diabetes mellitus (DM) was defined as a fasting plasma glucose level of ≥ 126 mg/dL or a 2-hour oral glucose tolerance test result of ≥ 200 mg/dL.2425 Hypertension was defined as repeated blood pressure values at three separate visits greater than the 95th percentile for the age, sex, and height of the patient.2627\nFor detection of cirrhosis, using an APRI cutoff score of 2.0 was more specific (91%) but less sensitive (46%). APRI values of ≤ 0.3 and ≤ 0.5 rule out significant fibrosis and cirrhosis, respectively, and a value of ≥ 1.5 rules in significant fibrosis.1828", "The data are presented as frequency and percentage for categorical variables and as the mean ± standard deviation for continuous variables. 
Differences in the study participants' characteristics were compared across subgroups using the χ2 test or Fisher's exact test for categorical variables and the independent t-test or Mann-Whitney's U test for continuous variables as appropriate. Differences in the study participants' characteristics were also compared across subgroups using the analysis of variance with Scheffe's post hoc test or the Kruskal-Wallis test with Dunn's post hoc test as appropriate. To check if the distribution was normal, we used Shapiro-Wilk's test. Univariate and multivariate analyses using logistic regression were performed to identify prognostic factors that are independently related to SH. For the prevalence of SH in pediatric NAFLD patients, the percentage and its Blyth-Still-Casella 95% confidence interval (CI) were calculated. Receiver operating characteristic (ROC) curve analysis was performed to assess the sensitivity and specificity of APRI for predicting SH. All statistical analyses were carried out using SPSS 24.0 (SPSS Inc., Chicago, IL, USA), and P values less than 0.05 were considered statistically significant.", "This study was approved by the Institutional Review Boards (IRB) of Inje University Haeundae Paik Hospital and all other participating centers, and informed consent was waived due to the retrospective nature of this study (IRB number 2019-12-027).", "Prevalence of SH in pediatric NAFLD patients and Baseline characteristics A total of 428 patients were included, of which 29.4% were female, and the overall mean age was 12.18 ± 3.14 years. The prevalence of SH in pediatric NAFLD patients was 13.6%. The prevalences of SH according to the steatosis grade of liver sonography were 1.1%, 11.0%, and 55.4% in mild, moderate, and severe, respectively.\nThe characteristics of the study subjects according to TSH status are shown in Table 1. In comparison by TSH status, AST and ALT levels were higher in patients with SH than in those with euthyroidism. Although there was no short stature in both groups, significantly lower height z-score was observed in the SH group. The steatosis grade of liver sonography and the APRI score, a noninvasive marker of liver fibrosis, were also significantly higher in patients with SH than in those with euthyroidism (Fig. 1). However, total cholesterol, triglyceride, HDL-cholesterol, and presence of comorbidities were not different between patients with SH and those with euthyroidism.\nValues are displayed as either frequency with percentage in parentheses or the mean ± standard deviation. Shapiro-Wilk's test was employed to test the assumption of normality.\nTSH = thyroid-stimulating hormone, SH = subclinical hypothyroidism, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index.\naP values were derived from Mann-Whitney's U test; bP values were derived by Fisher's exact test.\nAPRI = aspartate aminotransferase to platelet count ratio index, SH = subclinical hypothyroidism.\nA comparison of parameters according to the steatosis grade as measured by ultrasonography is shown in Supplementary Table 1. Higher grades of steatosis confirmed by liver ultrasound were associated with higher levels of TSH, AST, ALT and prevalence of SH. 
In addition, in patients with moderate to severe grades of steatosis, BMI and GTP levels were higher than in those with mild severity, and the rate of diabetes increased.\nA total of 428 patients were included, of which 29.4% were female, and the overall mean age was 12.18 ± 3.14 years. The prevalence of SH in pediatric NAFLD patients was 13.6%. The prevalences of SH according to the steatosis grade of liver sonography were 1.1%, 11.0%, and 55.4% in mild, moderate, and severe, respectively.\nThe characteristics of the study subjects according to TSH status are shown in Table 1. In comparison by TSH status, AST and ALT levels were higher in patients with SH than in those with euthyroidism. Although there was no short stature in both groups, significantly lower height z-score was observed in the SH group. The steatosis grade of liver sonography and the APRI score, a noninvasive marker of liver fibrosis, were also significantly higher in patients with SH than in those with euthyroidism (Fig. 1). However, total cholesterol, triglyceride, HDL-cholesterol, and presence of comorbidities were not different between patients with SH and those with euthyroidism.\nValues are displayed as either frequency with percentage in parentheses or the mean ± standard deviation. Shapiro-Wilk's test was employed to test the assumption of normality.\nTSH = thyroid-stimulating hormone, SH = subclinical hypothyroidism, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index.\naP values were derived from Mann-Whitney's U test; bP values were derived by Fisher's exact test.\nAPRI = aspartate aminotransferase to platelet count ratio index, SH = subclinical hypothyroidism.\nA comparison of parameters according to the steatosis grade as measured by ultrasonography is shown in Supplementary Table 1. Higher grades of steatosis confirmed by liver ultrasound were associated with higher levels of TSH, AST, ALT and prevalence of SH. In addition, in patients with moderate to severe grades of steatosis, BMI and GTP levels were higher than in those with mild severity, and the rate of diabetes increased.\nRelated factors of SH in pediatric NAFLD patients A univariate analysis of factors associated with SH in pediatric NAFLD patients found that SH was associated with AST, ALT, steatosis grade by liver ultrasonography, and the APRI score. In multivariate analysis, SH was positively correlated with steatosis grade by liver ultrasonography and with the APRI score (Table 2). Compared to patients with euthyroidism, the proportion of APRI scores > 1.5 was significantly higher and the proportion of APRI scores < 0.5 was lower in patients with SH (Supplementary Table 2, Supplementary Fig. 1).\nNAFLD = non-alcoholic fatty liver disease, OR = odds ratio, CI = confidence interval, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index.\nA univariate analysis of factors associated with SH in pediatric NAFLD patients found that SH was associated with AST, ALT, steatosis grade by liver ultrasonography, and the APRI score. 
In multivariate analysis, SH was positively correlated with steatosis grade by liver ultrasonography and with the APRI score (Table 2). Compared to patients with euthyroidism, the proportion of APRI scores > 1.5 was significantly higher and the proportion of APRI scores < 0.5 was lower in patients with SH (Supplementary Table 2, Supplementary Fig. 1).\nNAFLD = non-alcoholic fatty liver disease, OR = odds ratio, CI = confidence interval, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index.\nCutoff value of APRI score for predicting SH To evaluate the predictive accuracy of SH using the APRI score, area under the curve (AUC) values were calculated using an ROC curve. As a result, the APRI score was found to be significant as a predictor of SH when it was 0.6012 or higher (AUC, 0.670; P < 0.001) (Fig. 2). Sensitivity, specificity, positive predictive value, and negative predictive value were 72.4%, 61.9%, 23.0% and 93.5%, respectively (Table 3).\nAPRI = aspartate aminotransferase to platelet count ratio index.\nAPRI = aspartate aminotransferase to platelet count ratio index, AUC = area under the curve, SH = subclinical hypothyroidism.\nTo evaluate the predictive accuracy of SH using the APRI score, area under the curve (AUC) values were calculated using an ROC curve. As a result, the APRI score was found to be significant as a predictor of SH when it was 0.6012 or higher (AUC, 0.670; P < 0.001) (Fig. 2). Sensitivity, specificity, positive predictive value, and negative predictive value were 72.4%, 61.9%, 23.0% and 93.5%, respectively (Table 3).\nAPRI = aspartate aminotransferase to platelet count ratio index.\nAPRI = aspartate aminotransferase to platelet count ratio index, AUC = area under the curve, SH = subclinical hypothyroidism.", "A total of 428 patients were included, of which 29.4% were female, and the overall mean age was 12.18 ± 3.14 years. The prevalence of SH in pediatric NAFLD patients was 13.6%. The prevalences of SH according to the steatosis grade of liver sonography were 1.1%, 11.0%, and 55.4% in mild, moderate, and severe, respectively.\nThe characteristics of the study subjects according to TSH status are shown in Table 1. In comparison by TSH status, AST and ALT levels were higher in patients with SH than in those with euthyroidism. Although there was no short stature in both groups, significantly lower height z-score was observed in the SH group. The steatosis grade of liver sonography and the APRI score, a noninvasive marker of liver fibrosis, were also significantly higher in patients with SH than in those with euthyroidism (Fig. 1). However, total cholesterol, triglyceride, HDL-cholesterol, and presence of comorbidities were not different between patients with SH and those with euthyroidism.\nValues are displayed as either frequency with percentage in parentheses or the mean ± standard deviation. 
Shapiro-Wilk's test was employed to test the assumption of normality.\nTSH = thyroid-stimulating hormone, SH = subclinical hypothyroidism, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index.\naP values were derived from Mann-Whitney's U test; bP values were derived by Fisher's exact test.\nAPRI = aspartate aminotransferase to platelet count ratio index, SH = subclinical hypothyroidism.\nA comparison of parameters according to the steatosis grade as measured by ultrasonography is shown in Supplementary Table 1. Higher grades of steatosis confirmed by liver ultrasound were associated with higher levels of TSH, AST, ALT and prevalence of SH. In addition, in patients with moderate to severe grades of steatosis, BMI and GTP levels were higher than in those with mild severity, and the rate of diabetes increased.", "A univariate analysis of factors associated with SH in pediatric NAFLD patients found that SH was associated with AST, ALT, steatosis grade by liver ultrasonography, and the APRI score. In multivariate analysis, SH was positively correlated with steatosis grade by liver ultrasonography and with the APRI score (Table 2). Compared to patients with euthyroidism, the proportion of APRI scores > 1.5 was significantly higher and the proportion of APRI scores < 0.5 was lower in patients with SH (Supplementary Table 2, Supplementary Fig. 1).\nNAFLD = non-alcoholic fatty liver disease, OR = odds ratio, CI = confidence interval, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index.", "To evaluate the predictive accuracy of SH using the APRI score, area under the curve (AUC) values were calculated using an ROC curve. As a result, the APRI score was found to be significant as a predictor of SH when it was 0.6012 or higher (AUC, 0.670; P < 0.001) (Fig. 2). Sensitivity, specificity, positive predictive value, and negative predictive value were 72.4%, 61.9%, 23.0% and 93.5%, respectively (Table 3).\nAPRI = aspartate aminotransferase to platelet count ratio index.\nAPRI = aspartate aminotransferase to platelet count ratio index, AUC = area under the curve, SH = subclinical hypothyroidism.", "In this study, SH was often shown in pediatric patients with NAFLD, and these subjects had more severe steatosis in ultrasonography and higher liver fibrosis scores.\nThe liver plays an important role in the metabolism of thyroid hormones, and the thyroid hormones are also important to normal hepatic function.29 Previous studies suggested that SH have been thought to be important risk factors for NAFLD.30 Thyroid hormones stimulate lipolysis to generate circulating free fatty acids, and these are the major source of lipids for the liver.31 Elevated TSH stimulates TSH receptors, which are expressed in hepatocytes and which leads to hepatosteatosis via sterol regulatory element-binding protein-1c.32\nA previous study reported a higher incidence of hypothyroidism among patients with NAFLD compared to controls (21% vs. 9.5%, P < 0.01) and especially among patients with NASH (25% vs. 
12.8%, P = 0.03).33 A study of Korean adults3435 found the incidence of SH to be 3.7–5.4% in the general population. Our study showed a higher prevalence of 13.6%, and the higher the steatosis grade, the higher the prevalence of SH was statistically significant.\nRecent studies reported a difference by gender in the risk of NAFLD among patients with SH. The Korean adult study found males with SH to have a higher risk of NAFLD (odds ratio [OR], 2.37; 95% CI, 1.09–5.12; P = 0.029),35 while another study reported that males had a 2.8-fold increased risk of NAFLD compared with females (OR, 2.836; 95% CI, 2.177–3.694).12 However, our study showed no gender difference in the association between euthyroidism and SH.\nThe most important finding of our study was that SH was related to the severity of NAFLD in children. Punekar et al.36 demonstrated that there were significant correlations between the levels of TSH and the severity of liver disease. In that study, patients with liver cirrhosis had significantly higher levels of TSH, compared with the controls. TSH might influence the progression of liver fibrosis, therefore the FIB‐4 index was higher in patients with SH than in those with euthyroidism.37\nSimilar to these studies, SH patients had more severe fatty infiltration in ultrasonography, and an APRI score greater than 0.6012 showed increased possibility of having SH. This finding suggests more severe liver damage is seen in patients with SH and NAFLD.\nThyroid dysfunction can cause metabolic changes by altering glucose and lipid metabolism. This finding is also evident in SH.38 Higher insulin levels and insulin resistance were positively correlated with TSH levels,39 and levels of common cholesterol and triglycerides were higher in cases of NAFLD with SH or overt hypothyroidism than in those with euthyroidism.40 In our study, metabolic profiles such as triglyceride and fasting glucose and comorbid metabolic syndromes such as DM were more frequent in patients with SH than in those with euthyroidism; however, the difference was not significant. Patients with a moderate or severe degree of hepatic steatosis were more likely to have DM.\nThe present study has several limitations. First, the retrospective study design may have affected the analysis variables. Second, liver biopsy was not performed in this study; however, most parents refuse this invasive procedure. Third, due to multicenter retrospective study design, sonography was not performed by a single radiologist, but the degree of steatosis was classified according to the reference guideline, and SH was also defined according to the references in the same unit. Fourth, changes in the sonographic or laboratory findings of NAFLD patients related to the therapeutic effect of SH and follow-up data of thyroid function test were not studied in this study. Further well-designed studies are needed to solve these limitations. Despite these limitations, our study is valuable because, we conducted study with relatively large-scale of pediatric patients and observed significant association between SH and severe steatosis of NAFLD.\nIn conclusion, this multicenter pediatric study showed a close association between NALFD and SH and between more severe hepatic steatosis and the liver fibrosis score in SH. TSH elevation can be taken as a predictor of a severe NAFLD. It is important to perform a thyroid function test in patients with NAFLD and follow-up periodically." ]
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "Non-alcoholic Fatty Liver Disease", "Subclinical Hypothyroidism", "Liver Steatosis", "Liver Fibrosis" ]
INTRODUCTION: The prevalence of non-alcoholic fatty liver disease (NAFLD) is significantly increasing in direct relation with the incidence of obesity. One study reported that nearly 30% of children with obesity had NAFLD.1 NAFLD is diagnosed when hepatic steatosis is present in imaging or histology, while excluding secondary causes of hepatic fat accumulation.2 Thyroid hormones are known to play an important role in regulating insulin resistance and lipid metabolism, which are known to affect the pathogenesis of NAFLD.3 Impaired thyroid hormone signaling reduces fatty acid utilization and the glucose-sensing machinery of β-cells in the liver, which contributes to hepatic insulin resistance.4 Other factors, such as oxidative stress, lipid peroxidation, and triglyceride accumulation, are caused by excessive thyroid-stimulating hormone (TSH) and induce liver damage.56 In addition, elevated TSH has a positive association with obesity through the mechanism of increasing the number of adipocytes; one study reported visceral adipose mass was the best predictor for TSH elevation.789 Thus, thyroid hormones may have a close relationship with liver disease, especially the pathogenesis of NAFLD and non-alcoholic steatohepatitis (NASH).10 In adults, subclinical hypothyroidism (SH) has been considered as a risk factor for metabolic syndrome and NAFLD. Furthermore, the possibility of liver steatosis improvement through SH treatment is also raised.1112 However, the current findings regarding the association of NAFLD with thyroid function remain controversial in children.1314 When treating NAFLD, detecting disease stage is important. In children, invasive methods such as liver biopsy can be difficult to perform. Scoring systems such as the fibrosis-4 (FIB-4) index and the aspartate aminotransferase to platelet count ratio index (APRI) can detect advanced fibrosis and disease progression more easily in patients with NAFLD.15 The clinical value of these markers is useful, so if any association were identified between these markers and SH, then TSH level could be used as a new biomarker for disease severity. A previous study reported a close relationship between thyroid function and NAFLD severity; specifically that advanced fibrosis was significantly higher in subjects with low to normal thyroid function and SH than in those with normal thyroid function.16 Therefore, we aimed to evaluate the prevalence of SH in NAFLD patients, and the association between NAFLD and SH in children. The second aim of the study was to assess the relationship between TSH elevation and liver disease severity in pediatric NAFLD patients. METHODS: Patients and study design Between January 2015 and December 2019, patients aged 4 to 18 years who had been diagnosed with NAFLD were included in this multicenter retrospective study, which was conducted at the pediatrics departments of 10 centers in Korea: Chungnam National University Hospital, Inje University Haeundae Paik Hospital, Chung-Ang University Hospital, Jeonbuk National University Hospital, Kyungpook National University Children's Hospital, Korea University Anam Hospital, Soonchunhyang University Bucheon Hospital, Nowon Eulji Medical Center, Keimyung University Dongsan Medical Center, and Inje University Ilsan Paik Hospital. 
The exclusion criteria were as follows: use of medications such as thyroid hormone and antithyroid drugs, or laboratory or clinical evidence suggesting or confirming an underlying chronic liver disease (e.g., viral hepatitis, autoimmune hepatitis, Wilson disease, or other liver disease). Between January 2015 and December 2019, patients aged 4 to 18 years who had been diagnosed with NAFLD were included in this multicenter retrospective study, which was conducted at the pediatrics departments of 10 centers in Korea: Chungnam National University Hospital, Inje University Haeundae Paik Hospital, Chung-Ang University Hospital, Jeonbuk National University Hospital, Kyungpook National University Children's Hospital, Korea University Anam Hospital, Soonchunhyang University Bucheon Hospital, Nowon Eulji Medical Center, Keimyung University Dongsan Medical Center, and Inje University Ilsan Paik Hospital. The exclusion criteria were as follows: use of medications such as thyroid hormone and antithyroid drugs, or laboratory or clinical evidence suggesting or confirming an underlying chronic liver disease (e.g., viral hepatitis, autoimmune hepatitis, Wilson disease, or other liver disease). Clinical and laboratory assessments Body weight and height were measured by a trained technician, and the body mass index (BMI) was calculated by dividing the weight in kilograms by the square of the height in meters. Laboratory tests included the following: TSH, free thyroxine (FT4), alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GTP), total cholesterol, triglyceride, low-density lipoprotein and high-density lipoprotein (HDL) cholesterol, and fasting glucose. The APRI score for noninvasive markers of liver fibrosis was calculated as follows: APRI score = AST level (IU/L)/AST upper limit of normal (IU/L)/platelet count (109/L).1718 Body weight and height were measured by a trained technician, and the body mass index (BMI) was calculated by dividing the weight in kilograms by the square of the height in meters. Laboratory tests included the following: TSH, free thyroxine (FT4), alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GTP), total cholesterol, triglyceride, low-density lipoprotein and high-density lipoprotein (HDL) cholesterol, and fasting glucose. The APRI score for noninvasive markers of liver fibrosis was calculated as follows: APRI score = AST level (IU/L)/AST upper limit of normal (IU/L)/platelet count (109/L).1718 Definitions NAFLD was diagnosed on the basis of bright or hyperechoic lesions on liver imaging and ALT level ≥ 30 IU/L.19 The degree of steatosis was classified as “mild” (grade I), “moderate” (grade II) and “severe” (grade III).20 After repeated thyroid function test, SH was defined as a serum TSH level of > 5.00 μU/L with an FT4 level between 0.90 and 1.80 ng/Dl.212223 Diabetes mellitus (DM) was defined as a fasting plasma glucose level of ≥ 126 mg/dL or a 2-hour oral glucose tolerance test result of ≥ 200 mg/dL.2425 Hypertension was defined as repeated blood pressure values at three separate visits greater than the 95th percentile for the age, sex, and height of the patient.2627 For detection of cirrhosis, using an APRI cutoff score of 2.0 was more specific (91%) but less sensitive (46%). 
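As an aside, the APRI calculation defined above is straightforward to script. The short Python sketch below is purely illustrative and is not part of the original study (which used SPSS); note that the conventional APRI definition multiplies the AST ratio by 100 before dividing by the platelet count, an assumption made here because it is consistent with the 0.5, 1.5 and 2.0 cut-offs quoted in this paper.

```python
def apri_score(ast_iu_l: float, ast_uln_iu_l: float, platelets_1e9_l: float) -> float:
    """Aspartate aminotransferase to platelet ratio index (APRI).

    Conventional form: (AST / upper limit of normal) * 100 / platelet count (10^9/L).
    The *100 scaling is an assumption here (not spelled out in the text above),
    but it matches the cut-offs of 0.5, 1.5 and 2.0 used in this paper.
    """
    return (ast_iu_l / ast_uln_iu_l) * 100.0 / platelets_1e9_l


# Hypothetical example: AST 62 IU/L with an upper limit of normal of 40 IU/L
# and a platelet count of 250 x 10^9/L.
apri = apri_score(62, 40, 250)
print(f"APRI = {apri:.3f}; suggestive of cirrhosis at >= 2.0: {apri >= 2.0}")
```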
APRI values of ≤ 0.3 and ≤ 0.5 rule out significant fibrosis and cirrhosis, respectively, and a value of ≥ 1.5 rules in significant fibrosis.1828 NAFLD was diagnosed on the basis of bright or hyperechoic lesions on liver imaging and ALT level ≥ 30 IU/L.19 The degree of steatosis was classified as “mild” (grade I), “moderate” (grade II) and “severe” (grade III).20 After repeated thyroid function test, SH was defined as a serum TSH level of > 5.00 μU/L with an FT4 level between 0.90 and 1.80 ng/Dl.212223 Diabetes mellitus (DM) was defined as a fasting plasma glucose level of ≥ 126 mg/dL or a 2-hour oral glucose tolerance test result of ≥ 200 mg/dL.2425 Hypertension was defined as repeated blood pressure values at three separate visits greater than the 95th percentile for the age, sex, and height of the patient.2627 For detection of cirrhosis, using an APRI cutoff score of 2.0 was more specific (91%) but less sensitive (46%). APRI values of ≤ 0.3 and ≤ 0.5 rule out significant fibrosis and cirrhosis, respectively, and a value of ≥ 1.5 rules in significant fibrosis.1828 Statistical analysis The data are presented as frequency and percentage for categorical variables and as the mean ± standard deviation for continuous variables. Differences in the study participants' characteristics were compared across subgroups using the χ2 test or Fisher's exact test for categorical variables and the independent t-test or Mann-Whitney's U test for continuous variables as appropriate. Differences in the study participants' characteristics were also compared across subgroups using the analysis of variance with Scheffe's post hoc test or the Kruskal-Wallis test with Dunn's post hoc test as appropriate. To check if the distribution was normal, we used Shapiro-Wilk's test. Univariate and multivariate analyses using logistic regression were performed to identify prognostic factors that are independently related to SH. For the prevalence of SH in pediatric NAFLD patients, the percentage and its Blyth-Still-Casella 95% confidence interval (CI) were calculated. Receiver operating characteristic (ROC) curve analysis was performed to assess the sensitivity and specificity of APRI for predicting SH. All statistical analyses were carried out using SPSS 24.0 (SPSS Inc., Chicago, IL, USA), and P values less than 0.05 were considered statistically significant. The data are presented as frequency and percentage for categorical variables and as the mean ± standard deviation for continuous variables. Differences in the study participants' characteristics were compared across subgroups using the χ2 test or Fisher's exact test for categorical variables and the independent t-test or Mann-Whitney's U test for continuous variables as appropriate. Differences in the study participants' characteristics were also compared across subgroups using the analysis of variance with Scheffe's post hoc test or the Kruskal-Wallis test with Dunn's post hoc test as appropriate. To check if the distribution was normal, we used Shapiro-Wilk's test. Univariate and multivariate analyses using logistic regression were performed to identify prognostic factors that are independently related to SH. For the prevalence of SH in pediatric NAFLD patients, the percentage and its Blyth-Still-Casella 95% confidence interval (CI) were calculated. Receiver operating characteristic (ROC) curve analysis was performed to assess the sensitivity and specificity of APRI for predicting SH. 
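The ROC step mentioned in the last sentence can be sketched in a few lines of Python. The example below uses simulated data rather than the study data and relies on scikit-learn; the cut-off is chosen by maximising Youden's J, and sensitivity, specificity, PPV and NPV are then read off the resulting 2 x 2 table, analogous to the quantities reported for the 0.6012 cut-off in the Results.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Simulated data (for illustration only): APRI tends to be higher in SH patients.
n_sh, n_eu = 58, 370
apri = np.concatenate([rng.gamma(3.0, 0.35, n_sh), rng.gamma(2.0, 0.25, n_eu)])
sh = np.concatenate([np.ones(n_sh, dtype=int), np.zeros(n_eu, dtype=int)])

fpr, tpr, thresholds = roc_curve(sh, apri)
auc = roc_auc_score(sh, apri)

# Choose the cut-off that maximises Youden's J = sensitivity + specificity - 1.
best = np.argmax(tpr - fpr)
cutoff = thresholds[best]

# 2 x 2 table at that cut-off.
pred = apri >= cutoff
tp = np.sum(pred & (sh == 1)); fp = np.sum(pred & (sh == 0))
fn = np.sum(~pred & (sh == 1)); tn = np.sum(~pred & (sh == 0))

sens, spec = tp / (tp + fn), tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
print(f"AUC={auc:.3f} cutoff={cutoff:.4f} "
      f"sens={sens:.1%} spec={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
```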
All statistical analyses were carried out using SPSS 24.0 (SPSS Inc., Chicago, IL, USA), and P values less than 0.05 were considered statistically significant. Ethical statement This study was approved by the Institutional Review Boards (IRB) of Inje University Haeundae Paik Hospital and all other participating centers, and informed consent was waived due to the retrospective nature of this study (IRB number 2019-12-027). This study was approved by the Institutional Review Boards (IRB) of Inje University Haeundae Paik Hospital and all other participating centers, and informed consent was waived due to the retrospective nature of this study (IRB number 2019-12-027). Patients and study design: Between January 2015 and December 2019, patients aged 4 to 18 years who had been diagnosed with NAFLD were included in this multicenter retrospective study, which was conducted at the pediatrics departments of 10 centers in Korea: Chungnam National University Hospital, Inje University Haeundae Paik Hospital, Chung-Ang University Hospital, Jeonbuk National University Hospital, Kyungpook National University Children's Hospital, Korea University Anam Hospital, Soonchunhyang University Bucheon Hospital, Nowon Eulji Medical Center, Keimyung University Dongsan Medical Center, and Inje University Ilsan Paik Hospital. The exclusion criteria were as follows: use of medications such as thyroid hormone and antithyroid drugs, or laboratory or clinical evidence suggesting or confirming an underlying chronic liver disease (e.g., viral hepatitis, autoimmune hepatitis, Wilson disease, or other liver disease). Clinical and laboratory assessments: Body weight and height were measured by a trained technician, and the body mass index (BMI) was calculated by dividing the weight in kilograms by the square of the height in meters. Laboratory tests included the following: TSH, free thyroxine (FT4), alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GTP), total cholesterol, triglyceride, low-density lipoprotein and high-density lipoprotein (HDL) cholesterol, and fasting glucose. The APRI score for noninvasive markers of liver fibrosis was calculated as follows: APRI score = AST level (IU/L)/AST upper limit of normal (IU/L)/platelet count (109/L).1718 Definitions: NAFLD was diagnosed on the basis of bright or hyperechoic lesions on liver imaging and ALT level ≥ 30 IU/L.19 The degree of steatosis was classified as “mild” (grade I), “moderate” (grade II) and “severe” (grade III).20 After repeated thyroid function test, SH was defined as a serum TSH level of > 5.00 μU/L with an FT4 level between 0.90 and 1.80 ng/Dl.212223 Diabetes mellitus (DM) was defined as a fasting plasma glucose level of ≥ 126 mg/dL or a 2-hour oral glucose tolerance test result of ≥ 200 mg/dL.2425 Hypertension was defined as repeated blood pressure values at three separate visits greater than the 95th percentile for the age, sex, and height of the patient.2627 For detection of cirrhosis, using an APRI cutoff score of 2.0 was more specific (91%) but less sensitive (46%). APRI values of ≤ 0.3 and ≤ 0.5 rule out significant fibrosis and cirrhosis, respectively, and a value of ≥ 1.5 rules in significant fibrosis.1828 Statistical analysis: The data are presented as frequency and percentage for categorical variables and as the mean ± standard deviation for continuous variables. 
Differences in the study participants' characteristics were compared across subgroups using the χ2 test or Fisher's exact test for categorical variables and the independent t-test or Mann-Whitney's U test for continuous variables as appropriate. Differences in the study participants' characteristics were also compared across subgroups using the analysis of variance with Scheffe's post hoc test or the Kruskal-Wallis test with Dunn's post hoc test as appropriate. To check if the distribution was normal, we used Shapiro-Wilk's test. Univariate and multivariate analyses using logistic regression were performed to identify prognostic factors that are independently related to SH. For the prevalence of SH in pediatric NAFLD patients, the percentage and its Blyth-Still-Casella 95% confidence interval (CI) were calculated. Receiver operating characteristic (ROC) curve analysis was performed to assess the sensitivity and specificity of APRI for predicting SH. All statistical analyses were carried out using SPSS 24.0 (SPSS Inc., Chicago, IL, USA), and P values less than 0.05 were considered statistically significant. Ethical statement: This study was approved by the Institutional Review Boards (IRB) of Inje University Haeundae Paik Hospital and all other participating centers, and informed consent was waived due to the retrospective nature of this study (IRB number 2019-12-027). RESULTS: Prevalence of SH in pediatric NAFLD patients and Baseline characteristics A total of 428 patients were included, of which 29.4% were female, and the overall mean age was 12.18 ± 3.14 years. The prevalence of SH in pediatric NAFLD patients was 13.6%. The prevalences of SH according to the steatosis grade of liver sonography were 1.1%, 11.0%, and 55.4% in mild, moderate, and severe, respectively. The characteristics of the study subjects according to TSH status are shown in Table 1. In comparison by TSH status, AST and ALT levels were higher in patients with SH than in those with euthyroidism. Although there was no short stature in both groups, significantly lower height z-score was observed in the SH group. The steatosis grade of liver sonography and the APRI score, a noninvasive marker of liver fibrosis, were also significantly higher in patients with SH than in those with euthyroidism (Fig. 1). However, total cholesterol, triglyceride, HDL-cholesterol, and presence of comorbidities were not different between patients with SH and those with euthyroidism. Values are displayed as either frequency with percentage in parentheses or the mean ± standard deviation. Shapiro-Wilk's test was employed to test the assumption of normality. TSH = thyroid-stimulating hormone, SH = subclinical hypothyroidism, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index. aP values were derived from Mann-Whitney's U test; bP values were derived by Fisher's exact test. APRI = aspartate aminotransferase to platelet count ratio index, SH = subclinical hypothyroidism. A comparison of parameters according to the steatosis grade as measured by ultrasonography is shown in Supplementary Table 1. Higher grades of steatosis confirmed by liver ultrasound were associated with higher levels of TSH, AST, ALT and prevalence of SH. 
In addition, in patients with moderate to severe grades of steatosis, BMI and GTP levels were higher than in those with mild severity, and the rate of diabetes increased. A total of 428 patients were included, of which 29.4% were female, and the overall mean age was 12.18 ± 3.14 years. The prevalence of SH in pediatric NAFLD patients was 13.6%. The prevalences of SH according to the steatosis grade of liver sonography were 1.1%, 11.0%, and 55.4% in mild, moderate, and severe, respectively. The characteristics of the study subjects according to TSH status are shown in Table 1. In comparison by TSH status, AST and ALT levels were higher in patients with SH than in those with euthyroidism. Although there was no short stature in both groups, significantly lower height z-score was observed in the SH group. The steatosis grade of liver sonography and the APRI score, a noninvasive marker of liver fibrosis, were also significantly higher in patients with SH than in those with euthyroidism (Fig. 1). However, total cholesterol, triglyceride, HDL-cholesterol, and presence of comorbidities were not different between patients with SH and those with euthyroidism. Values are displayed as either frequency with percentage in parentheses or the mean ± standard deviation. Shapiro-Wilk's test was employed to test the assumption of normality. TSH = thyroid-stimulating hormone, SH = subclinical hypothyroidism, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index. aP values were derived from Mann-Whitney's U test; bP values were derived by Fisher's exact test. APRI = aspartate aminotransferase to platelet count ratio index, SH = subclinical hypothyroidism. A comparison of parameters according to the steatosis grade as measured by ultrasonography is shown in Supplementary Table 1. Higher grades of steatosis confirmed by liver ultrasound were associated with higher levels of TSH, AST, ALT and prevalence of SH. In addition, in patients with moderate to severe grades of steatosis, BMI and GTP levels were higher than in those with mild severity, and the rate of diabetes increased. Related factors of SH in pediatric NAFLD patients A univariate analysis of factors associated with SH in pediatric NAFLD patients found that SH was associated with AST, ALT, steatosis grade by liver ultrasonography, and the APRI score. In multivariate analysis, SH was positively correlated with steatosis grade by liver ultrasonography and with the APRI score (Table 2). Compared to patients with euthyroidism, the proportion of APRI scores > 1.5 was significantly higher and the proportion of APRI scores < 0.5 was lower in patients with SH (Supplementary Table 2, Supplementary Fig. 1). NAFLD = non-alcoholic fatty liver disease, OR = odds ratio, CI = confidence interval, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index. A univariate analysis of factors associated with SH in pediatric NAFLD patients found that SH was associated with AST, ALT, steatosis grade by liver ultrasonography, and the APRI score. 
In multivariate analysis, SH was positively correlated with steatosis grade by liver ultrasonography and with the APRI score (Table 2). Compared to patients with euthyroidism, the proportion of APRI scores > 1.5 was significantly higher and the proportion of APRI scores < 0.5 was lower in patients with SH (Supplementary Table 2, Supplementary Fig. 1). NAFLD = non-alcoholic fatty liver disease, OR = odds ratio, CI = confidence interval, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index. Cutoff value of APRI score for predicting SH To evaluate the predictive accuracy of SH using the APRI score, area under the curve (AUC) values were calculated using an ROC curve. As a result, the APRI score was found to be significant as a predictor of SH when it was 0.6012 or higher (AUC, 0.670; P < 0.001) (Fig. 2). Sensitivity, specificity, positive predictive value, and negative predictive value were 72.4%, 61.9%, 23.0% and 93.5%, respectively (Table 3). APRI = aspartate aminotransferase to platelet count ratio index. APRI = aspartate aminotransferase to platelet count ratio index, AUC = area under the curve, SH = subclinical hypothyroidism. To evaluate the predictive accuracy of SH using the APRI score, area under the curve (AUC) values were calculated using an ROC curve. As a result, the APRI score was found to be significant as a predictor of SH when it was 0.6012 or higher (AUC, 0.670; P < 0.001) (Fig. 2). Sensitivity, specificity, positive predictive value, and negative predictive value were 72.4%, 61.9%, 23.0% and 93.5%, respectively (Table 3). APRI = aspartate aminotransferase to platelet count ratio index. APRI = aspartate aminotransferase to platelet count ratio index, AUC = area under the curve, SH = subclinical hypothyroidism. Prevalence of SH in pediatric NAFLD patients and Baseline characteristics: A total of 428 patients were included, of which 29.4% were female, and the overall mean age was 12.18 ± 3.14 years. The prevalence of SH in pediatric NAFLD patients was 13.6%. The prevalences of SH according to the steatosis grade of liver sonography were 1.1%, 11.0%, and 55.4% in mild, moderate, and severe, respectively. The characteristics of the study subjects according to TSH status are shown in Table 1. In comparison by TSH status, AST and ALT levels were higher in patients with SH than in those with euthyroidism. Although there was no short stature in both groups, significantly lower height z-score was observed in the SH group. The steatosis grade of liver sonography and the APRI score, a noninvasive marker of liver fibrosis, were also significantly higher in patients with SH than in those with euthyroidism (Fig. 1). However, total cholesterol, triglyceride, HDL-cholesterol, and presence of comorbidities were not different between patients with SH and those with euthyroidism. Values are displayed as either frequency with percentage in parentheses or the mean ± standard deviation. Shapiro-Wilk's test was employed to test the assumption of normality. 
TSH = thyroid-stimulating hormone, SH = subclinical hypothyroidism, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index. aP values were derived from Mann-Whitney's U test; bP values were derived by Fisher's exact test. APRI = aspartate aminotransferase to platelet count ratio index, SH = subclinical hypothyroidism. A comparison of parameters according to the steatosis grade as measured by ultrasonography is shown in Supplementary Table 1. Higher grades of steatosis confirmed by liver ultrasound were associated with higher levels of TSH, AST, ALT and prevalence of SH. In addition, in patients with moderate to severe grades of steatosis, BMI and GTP levels were higher than in those with mild severity, and the rate of diabetes increased. Related factors of SH in pediatric NAFLD patients: A univariate analysis of factors associated with SH in pediatric NAFLD patients found that SH was associated with AST, ALT, steatosis grade by liver ultrasonography, and the APRI score. In multivariate analysis, SH was positively correlated with steatosis grade by liver ultrasonography and with the APRI score (Table 2). Compared to patients with euthyroidism, the proportion of APRI scores > 1.5 was significantly higher and the proportion of APRI scores < 0.5 was lower in patients with SH (Supplementary Table 2, Supplementary Fig. 1). NAFLD = non-alcoholic fatty liver disease, OR = odds ratio, CI = confidence interval, BMI = body mass index, ALT = alanine aminotransferase, AST = aspartate aminotransferase, GTP = gamma-glutamyl transferase, HDL = high-density lipoprotein, LDL = low-density lipoprotein, DM = diabetes mellitus, APRI = aspartate aminotransferase to platelet count ratio index. Cutoff value of APRI score for predicting SH: To evaluate the predictive accuracy of SH using the APRI score, area under the curve (AUC) values were calculated using an ROC curve. As a result, the APRI score was found to be significant as a predictor of SH when it was 0.6012 or higher (AUC, 0.670; P < 0.001) (Fig. 2). Sensitivity, specificity, positive predictive value, and negative predictive value were 72.4%, 61.9%, 23.0% and 93.5%, respectively (Table 3). APRI = aspartate aminotransferase to platelet count ratio index. APRI = aspartate aminotransferase to platelet count ratio index, AUC = area under the curve, SH = subclinical hypothyroidism. DISCUSSION: In this study, SH was often shown in pediatric patients with NAFLD, and these subjects had more severe steatosis in ultrasonography and higher liver fibrosis scores. The liver plays an important role in the metabolism of thyroid hormones, and the thyroid hormones are also important to normal hepatic function.29 Previous studies suggested that SH have been thought to be important risk factors for NAFLD.30 Thyroid hormones stimulate lipolysis to generate circulating free fatty acids, and these are the major source of lipids for the liver.31 Elevated TSH stimulates TSH receptors, which are expressed in hepatocytes and which leads to hepatosteatosis via sterol regulatory element-binding protein-1c.32 A previous study reported a higher incidence of hypothyroidism among patients with NAFLD compared to controls (21% vs. 9.5%, P < 0.01) and especially among patients with NASH (25% vs. 
12.8%, P = 0.03).33 A study of Korean adults3435 found the incidence of SH to be 3.7–5.4% in the general population. Our study showed a higher prevalence of 13.6%, and the prevalence of SH increased significantly with higher steatosis grades. Recent studies reported a difference by gender in the risk of NAFLD among patients with SH. The Korean adult study found males with SH to have a higher risk of NAFLD (odds ratio [OR], 2.37; 95% CI, 1.09–5.12; P = 0.029),35 while another study reported that males had a 2.8-fold increased risk of NAFLD compared with females (OR, 2.836; 95% CI, 2.177–3.694).12 However, our study showed no gender difference in the association between euthyroidism and SH. The most important finding of our study was that SH was related to the severity of NAFLD in children. Punekar et al.36 demonstrated that there were significant correlations between the levels of TSH and the severity of liver disease. In that study, patients with liver cirrhosis had significantly higher levels of TSH, compared with the controls. TSH might influence the progression of liver fibrosis; accordingly, the FIB‐4 index was higher in patients with SH than in those with euthyroidism.37 Similar to these studies, SH patients had more severe fatty infiltration in ultrasonography, and an APRI score greater than 0.6012 was associated with an increased likelihood of having SH. This finding suggests that more severe liver damage is seen in patients with SH and NAFLD. Thyroid dysfunction can cause metabolic changes by altering glucose and lipid metabolism. This finding is also evident in SH.38 Higher insulin levels and insulin resistance were positively correlated with TSH levels,39 and levels of total cholesterol and triglycerides were higher in cases of NAFLD with SH or overt hypothyroidism than in those with euthyroidism.40 In our study, metabolic profiles such as triglyceride and fasting glucose and comorbid metabolic conditions such as DM were more frequent in patients with SH than in those with euthyroidism; however, the difference was not significant. Patients with a moderate or severe degree of hepatic steatosis were more likely to have DM. The present study has several limitations. First, the retrospective study design may have affected the analysis variables. Second, liver biopsy was not performed in this study; however, most parents refuse this invasive procedure. Third, because of the multicenter retrospective design, sonography was not performed by a single radiologist; however, the degree of steatosis was classified according to the reference guideline, and SH was also defined according to the same references and units. Fourth, changes in the sonographic or laboratory findings of NAFLD patients related to the treatment of SH, as well as follow-up data of thyroid function tests, were not studied. Further well-designed studies are needed to address these limitations. Despite these limitations, our study is valuable because it was conducted in a relatively large group of pediatric patients and observed a significant association between SH and severe steatosis in NAFLD. In conclusion, this multicenter pediatric study showed a close association between NAFLD and SH, and between more severe hepatic steatosis and a higher liver fibrosis score in SH. TSH elevation can be taken as a predictor of severe NAFLD. It is important to perform a thyroid function test in patients with NAFLD and to follow up periodically.
Background: It is uncertain whether non-alcoholic fatty liver disease (NAFLD) is associated with subclinical hypothyroidism (SH) in pediatric patients. The purpose of this study was to investigate the prevalence and related factors of SH in pediatric patients with NAFLD. We also evaluated the association between liver fibrosis and SH. Methods: We retrospectively reviewed medical records for patients aged 4 to 18 years who were diagnosed with NAFLD and tested for thyroid function from January 2015 to December 2019 at 10 hospitals in Korea. Results: The study included 428 patients with NAFLD. The prevalence of SH in pediatric NAFLD patients was 13.6%. In multivariate logistic regression, higher levels of steatosis on ultrasound and a higher aspartate aminotransferase to platelet count ratio index (APRI) score were associated with an increased risk of SH. Using receiver operating characteristic curves, the optimal cutoff value of the APRI score for predicting SH was 0.6012 (area under the curve, 0.67; P < 0.001; sensitivity 72.4%, specificity 61.9%, positive predictive value 23%, and negative predictive value 93.5%). Conclusions: SH was often observed in patients with NAFLD, more frequently in patients with more severe liver damage. Thyroid function tests should be performed on pediatric NAFLD patients, especially those with higher grades of liver steatosis and fibrosis.
null
null
5,702
252
[ 146, 130, 200, 220, 47, 410, 170, 130 ]
12
[ "sh", "apri", "patients", "liver", "nafld", "test", "study", "aminotransferase", "higher", "steatosis" ]
[ "impaired thyroid hormone", "role metabolism thyroid", "nafld impaired thyroid", "fatty liver disease", "causes hepatic fat" ]
null
null
[CONTENT] Non-alcoholic Fatty Liver Disease | Subclinical Hypothyroidism | Liver Steatosis | Liver Fibrosis [SUMMARY]
[CONTENT] Non-alcoholic Fatty Liver Disease | Subclinical Hypothyroidism | Liver Steatosis | Liver Fibrosis [SUMMARY]
[CONTENT] Non-alcoholic Fatty Liver Disease | Subclinical Hypothyroidism | Liver Steatosis | Liver Fibrosis [SUMMARY]
null
[CONTENT] Non-alcoholic Fatty Liver Disease | Subclinical Hypothyroidism | Liver Steatosis | Liver Fibrosis [SUMMARY]
null
[CONTENT] Adolescent | Child | Fatty Liver | Female | Humans | Hypothyroidism | Liver Cirrhosis | Male | Non-alcoholic Fatty Liver Disease | Prevalence | Republic of Korea | Retrospective Studies | Thyroid Function Tests [SUMMARY]
[CONTENT] Adolescent | Child | Fatty Liver | Female | Humans | Hypothyroidism | Liver Cirrhosis | Male | Non-alcoholic Fatty Liver Disease | Prevalence | Republic of Korea | Retrospective Studies | Thyroid Function Tests [SUMMARY]
[CONTENT] Adolescent | Child | Fatty Liver | Female | Humans | Hypothyroidism | Liver Cirrhosis | Male | Non-alcoholic Fatty Liver Disease | Prevalence | Republic of Korea | Retrospective Studies | Thyroid Function Tests [SUMMARY]
null
[CONTENT] Adolescent | Child | Fatty Liver | Female | Humans | Hypothyroidism | Liver Cirrhosis | Male | Non-alcoholic Fatty Liver Disease | Prevalence | Republic of Korea | Retrospective Studies | Thyroid Function Tests [SUMMARY]
null
[CONTENT] impaired thyroid hormone | role metabolism thyroid | nafld impaired thyroid | fatty liver disease | causes hepatic fat [SUMMARY]
[CONTENT] impaired thyroid hormone | role metabolism thyroid | nafld impaired thyroid | fatty liver disease | causes hepatic fat [SUMMARY]
[CONTENT] impaired thyroid hormone | role metabolism thyroid | nafld impaired thyroid | fatty liver disease | causes hepatic fat [SUMMARY]
null
[CONTENT] impaired thyroid hormone | role metabolism thyroid | nafld impaired thyroid | fatty liver disease | causes hepatic fat [SUMMARY]
null
[CONTENT] sh | apri | patients | liver | nafld | test | study | aminotransferase | higher | steatosis [SUMMARY]
[CONTENT] sh | apri | patients | liver | nafld | test | study | aminotransferase | higher | steatosis [SUMMARY]
[CONTENT] sh | apri | patients | liver | nafld | test | study | aminotransferase | higher | steatosis [SUMMARY]
null
[CONTENT] sh | apri | patients | liver | nafld | test | study | aminotransferase | higher | steatosis [SUMMARY]
null
[CONTENT] nafld | thyroid | association | disease | liver | obesity | relationship | children | function | thyroid function [SUMMARY]
[CONTENT] university | hospital | test | level | variables | study | university hospital | dl | national | national university [SUMMARY]
[CONTENT] sh | apri | patients | aminotransferase | higher | steatosis | aspartate aminotransferase | aspartate | apri aspartate | apri aspartate aminotransferase [SUMMARY]
null
[CONTENT] sh | university | apri | test | hospital | patients | liver | study | nafld | aminotransferase [SUMMARY]
null
[CONTENT] ||| SH | NAFLD ||| SH [SUMMARY]
[CONTENT] aged 4 to 18 years | NAFLD | January 2015 to December 2019 | 10 | Korea [SUMMARY]
[CONTENT] 428 | NAFLD ||| SH | NAFLD | 13.6% ||| SH ||| APRI | SH | 0.6012 | 0.67 | P < 0.001 | 72.4% | 61.9% | 23% | 93.5% [SUMMARY]
null
[CONTENT] ||| SH | NAFLD ||| SH | aged 4 to 18 years | NAFLD | January 2015 to December 2019 | 10 | Korea ||| ||| 428 | NAFLD ||| SH | NAFLD | 13.6% ||| SH ||| APRI | SH | 0.6012 | 0.67 | P < 0.001 | 72.4% | 61.9% | 23% | 93.5% ||| SH | NAFLD ||| NAFLD [SUMMARY]
null
A new approach to analyse longitudinal epidemiological data with an excess of zeros.
23425202
Within longitudinal epidemiological research, 'count' outcome variables with an excess of zeros frequently occur. Although these outcomes are usually analysed with a linear mixed model or a Poisson mixed model, a two-part mixed model is better suited to analysing outcome variables with an excess of zeros. Therefore, the objective of this paper was to introduce the relatively 'new' method of two-part joint regression modelling in longitudinal data analysis for outcome variables with an excess of zeros, and to compare the performance of this method to current approaches.
BACKGROUND
Within an observational longitudinal dataset, we compared three techniques: two 'standard' approaches (a linear mixed model and a Poisson mixed model) and a two-part joint mixed model (a binomial/Poisson mixed distribution model), including random intercepts and random slopes. Model fit indicators and differences between predicted and observed values were used for comparisons. The analyses were performed with STATA using the GLLAMM procedure.
METHODS
Regarding the random intercept models, the two-part joint mixed model (binomial/Poisson) performed best. Adding random slopes for time to the models changed the sign of the regression coefficient for both the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) and resulted in a much better fit.
RESULTS
This paper showed that a two-part joint mixed model is a more appropriate method for analysing longitudinal data with an excess of zeros than a linear mixed model or a Poisson mixed model. However, in a model with random slopes for time, a Poisson mixed model also performed remarkably well.
CONCLUSION
[ "Binomial Distribution", "Data Interpretation, Statistical", "Diabetes Mellitus, Type 2", "Humans", "Hypoglycemia", "Hypoglycemic Agents", "Insulin Glargine", "Insulin, Long-Acting", "Longitudinal Studies", "Models, Statistical", "Poisson Distribution" ]
3599839
Background
Within longitudinal epidemiological research, ‘count’ outcome variables frequently occur. Nowadays it is possible to analyse longitudinal ‘count’ outcome variables with advanced statistical techniques such as mixed models. Because ‘count’ data often follow a Poisson distribution, these data are mostly analysed with longitudinal Poisson regression. In many situations ‘count’ data does not exactly follow a Poisson distribution; they are often overdispersed, (i.e. the variance of the outcome variable is higher than the mean value). One of the solutions to deal with this overdispersion in count data is to use a negative binomial regression analysis [1]. However, overdispersion in the count variable is mostly caused by an excess of zeros, which cannot completely be controlled by assuming a negative binomial distribution. Examples of data with an excess of zeros (which are also known as ‘semicontinuous’ data) [2] within the field of epidemiology are: the number of hypoglycaemic events in diabetics, the number of hospitalisations in the general population, the number of sports injuries, the number of falls in a group of elderly people and the number of cigarettes smoked. The classical methods to analyse outcome variables with an excess of zeros are to reduce the information in the data to either a dichotomous outcome variable (mostly comparing zero versus non-zero) or a categorical outcome variable (mostly comparing zero versus two groups of non-zero outcomes in which the groups are divided according to the median of the non-zero part). Sometimes, researchers try to transform (with a logarithmic transformation) a Poisson distribution with many zeros into a normally distributed variable. However, zeros cannot be log transformed and other computations such as adding ‘1’ to the ‘count’ outcomes with an excess of zeros before log transforming does not solve the problem either. To properly address the problem of excess of zeros, several so-called two-part statistical models have been developed. These models, which are particularly popular in econometrics, are also known as mixed response or mixed distribution models and they include zero-inflated Poisson (ZIP) regression, zero-inflated negative binomial (ZINB) regression, sample selection methods, and hurdle models [3-16]. The idea behind these two-part approaches is that the outcome variable has a mixed distribution (i.e. a binomial distribution to deal with zero versus non-zero, and a Poisson (or other) distribution to deal with the non-zero part of the distribution). In the standard two-part approaches the two processes are split and for every process different regression coefficients are obtained. This also means that different sets of covariates can be included, one set for the binomial process (zero versus non-zero) and one set for the Poisson process. In a ZIP model, for instance, one regression coefficient reflects the relationship of a certain covariate with zero versus non-zero, while another regression coefficient reflects the relationship with the ‘count’ outcomes above zero. [17,18]. For some research questions (e.g. investigating the determinants of smoking behaviour) this is a nice feature. However, in many situations one regression coefficient for each covariate would be preferable (e.g. the analysis of hypoglycaemic events). 
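To make the excess-of-zeros problem concrete, the following Python sketch (illustrative only; it is not taken from this paper) simulates a count outcome with structural zeros, shows the resulting overdispersion, and fits a cross-sectional zero-inflated Poisson model with statsmodels, i.e. the standard two-part approach with separate coefficient sets for the zero part and the count part described above.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(42)
n = 1000

x = rng.normal(size=n)
p_zero = 1 / (1 + np.exp(-(0.5 - 0.8 * x)))   # probability of a structural zero
mu = np.exp(0.3 + 0.4 * x)                    # Poisson mean for the count part
y = np.where(rng.uniform(size=n) < p_zero, 0, rng.poisson(mu))

# The excess of zeros produces overdispersion: the variance clearly exceeds the mean.
print(f"mean={y.mean():.2f}  variance={y.var():.2f}  share of zeros={np.mean(y == 0):.1%}")

# Standard two-part (ZIP) model: one coefficient set for zero vs. non-zero (logit part),
# another for the counts (Poisson part), as described above.
exog = sm.add_constant(x)
zip_fit = ZeroInflatedPoisson(y, exog, exog_infl=exog, inflation="logit").fit(disp=0, maxiter=500)
print(zip_fit.summary())
```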
Despite the preference of one regression coefficient, it should be realized that this regression coefficient is somewhat difficult to interpret, because it combines a binomial and a Poisson process into one coefficient. Models that provide one set of regression coefficients for the binomial distribution and Poisson (or other) distribution combined are known as two-part joint regression models [19-22]. For longitudinal data analysis these two-part joint regression models are almost never used in epidemiological practice. The objective of this paper is to introduce a relatively ‘new’ method of a two-part joint mixed model (binomial/Poisson) in longitudinal data analysis for ‘count’ outcome variables with an excess of zeros. Furthermore, the performance of this new method will be compared to a linear mixed model and a Poisson mixed model; two models that are frequently used for longitudinal epidemiological data.
Methods
Dataset The observational longitudinal dataset used for the analyses was obtained from the Study of the Psychological Impact in Real care of Initiating insulin glargine Treatment (SPIRIT) conducted between 2005 and 2009. This study aimed to examine the use of insulin glargine (a long acting insulin analog) on general emotional well-being, diabetes symptom distress and worries about hypoglycaemia in Dutch type 2 diabetes patients who previously used oral anti-hyperglycaemic medication. Type 2 diabetes patients who used oral anti-hyperglycaemic agents were recruited from 363 Dutch primary care practices, which were spread across the country. This resulted in a total sample of 889 patients. Measurements were conducted at baseline, after three and after six months. Results from this study have been presented previously [23]. We re-analysed the data in order to assess the development over time in hypoglycaemic events for diabetic patients, and the difference between low and high educated diabetes patients with three different mixed models. The observational longitudinal dataset used for the analyses was obtained from the Study of the Psychological Impact in Real care of Initiating insulin glargine Treatment (SPIRIT) conducted between 2005 and 2009. This study aimed to examine the use of insulin glargine (a long acting insulin analog) on general emotional well-being, diabetes symptom distress and worries about hypoglycaemia in Dutch type 2 diabetes patients who previously used oral anti-hyperglycaemic medication. Type 2 diabetes patients who used oral anti-hyperglycaemic agents were recruited from 363 Dutch primary care practices, which were spread across the country. This resulted in a total sample of 889 patients. Measurements were conducted at baseline, after three and after six months. Results from this study have been presented previously [23]. We re-analysed the data in order to assess the development over time in hypoglycaemic events for diabetic patients, and the difference between low and high educated diabetes patients with three different mixed models. Statistical methods All analyses were performed within the framework of longitudinal mixed models, The general idea behind mixed models for longitudinal data analysis is that an adjustment is made for the correlated outcome observations within individuals over time by estimating either the differences in average values of the outcome and/or the differences in relationships with time-dependent covariates. These differences i.e. variances are known as random effects and can be added to the intercept of the regression model (i.e. random intercept) and/or to the different regression coefficients of time-dependent covariates (i.e. random slopes) [24-26]. In this paper two ‘standard’ approaches, i.e. a linear mixed model treating the outcome variables as normally distributed and a Poisson mixed model treating the outcome variables as Poisson distributed, will be compared with a two-part joint regression model in order to analyse the development over time and to analyse the differences between low and high educated patients. Equation 1a shows the linear mixed model with only a random intercept, while equation 1b shows the linear mixed model with both a random intercept and a random slope for time. 
(1a) $y_{ij} = \beta_1 + \zeta_{1j} + \beta_2 x_{ij} + \epsilon_{ij}$ (1b) $y_{ij} = \beta_1 + \zeta_{1j} + (\beta_2 + \zeta_{2j}) x_{ij} + \epsilon_{ij}$ where $y_{ij}$ is the hypoglycaemic score for the jth patient at the ith time, $x_{ij}$ is the corresponding time, $\beta_1$ the fixed intercept of the patients, $\zeta_{1j}$ the patient-specific random intercept, $\beta_2$ the fixed slope of the patients, $\zeta_{2j}$ the patient-specific random slope, and $\epsilon_{ij}$ is a patient-specific residual error term at the ith time [26]. It was assumed that each of the two variations in the random intercept and the random slope was normally distributed with an average of zero and a variance $\sigma^2$. Furthermore, the Poisson ($\ln(\mu_{ij})$) mixed model can be specified in a similar way. For the two-part joint approach, a binomial/Poisson mixed distribution was used. The general idea behind this mixture is that the outcome variable has a binomial distribution for the zero versus non-zero part and a Poisson distribution for the non-zero part. The binomial distribution is modelled by a logit link function, while the Poisson distribution is modelled by a log link function. The response probability of a longitudinal two-part joint binomial/Poisson regression model can be written down as: (2) $\Pr(y_{ij} \mid x_{ij}) = \pi_1\, g(y_{ij}; \mu_{ij} = 0) + \pi_2\, g(y_{ij}; \mu_{ij} = \exp(x'_{ij}\beta))$ The first part of the equation has a mean of zero and the second part of the equation has a mean that depends on the covariates (time). $\pi_1$ and $\pi_2 = 1 - \pi_1$ are the component weights/latent class probabilities and $g(y_{ij}; \mu_{ij})$ is the Poisson probability for count $y_{ij}$ with mean $\mu_{ij}$ [27]. A full explanation of the mathematical background of the analyses with mixed distribution models can be found in other papers [28-36]. For the two-part joint model, random intercepts and random slopes can be added in a similar fashion as for the linear mixed model. In the present analyses, educational level was modelled as a dichotomous variable distinguishing between low and high education (with low education as reference), time was modelled as a categorical variable (represented by two dummy variables, with baseline as reference). Two model fit parameters were used to compare the three models with each other. Firstly, the Bayesian information criterion (BIC) was used. The BIC is an indicator of model fit, based on the −2 log likelihood, but taking into account the number of parameters estimated [37]. A lower BIC indicates a better performance of the model. Secondly, predicted frequencies (including the random effects) of the outcome variable, obtained when fitting the models, were compared to observed frequencies in hypoglycaemic events to compare the accuracy of the different models. This comparison was graphically presented in scatter plots. In addition, the means of the squared residuals (MSR) were computed for the different models. A lower MSR indicates a better performance of the model. All analyses were performed with Stata (version 11.1) [38]. Estimations were performed with the GLLAMM procedure [26,39], using adaptive quadrature to estimate the random effects. Scatter plots were created within PASW Statistics 18 [40]. All analyses were performed within the framework of longitudinal mixed models, The general idea behind mixed models for longitudinal data analysis is that an adjustment is made for the correlated outcome observations within individuals over time by estimating either the differences in average values of the outcome and/or the differences in relationships with time-dependent covariates. These differences i.e. variances are known as random effects and can be added to the intercept of the regression model (i.e. random intercept) and/or to the different regression coefficients of time-dependent covariates (i.e. random slopes) [24-26]. In this paper two ‘standard’ approaches, i.e. a linear mixed model treating the outcome variables as normally distributed and a Poisson mixed model treating the outcome variables as Poisson distributed, will be compared with a two-part joint regression model in order to analyse the development over time and to analyse the differences between low and high educated patients. Equation 1a shows the linear mixed model with only a random intercept, while equation 1b shows the linear mixed model with both a random intercept and a random slope for time. (1a) $y_{ij} = \beta_1 + \zeta_{1j} + \beta_2 x_{ij} + \epsilon_{ij}$ (1b) $y_{ij} = \beta_1 + \zeta_{1j} + (\beta_2 + \zeta_{2j}) x_{ij} + \epsilon_{ij}$ where $y_{ij}$ is the hypoglycaemic score for the jth patient at the ith time, $x_{ij}$ is the corresponding time, $\beta_1$ the fixed intercept of the patients, $\zeta_{1j}$ the patient-specific random intercept, $\beta_2$ the fixed slope of the patients, $\zeta_{2j}$ the patient-specific random slope, and $\epsilon_{ij}$ is a patient-specific residual error term at the ith time [26]. It was assumed that each of the two variations in the random intercept and the random slope was normally distributed with an average of zero and a variance $\sigma^2$. Furthermore, the Poisson ($\ln(\mu_{ij})$) mixed model can be specified in a similar way. For the two-part joint approach, a binomial/Poisson mixed distribution was used. The general idea behind this mixture is that the outcome variable has a binomial distribution for the zero versus non-zero part and a Poisson distribution for the non-zero part. The binomial distribution is modelled by a logit link function, while the Poisson distribution is modelled by a log link function. The response probability of a longitudinal two-part joint binomial/Poisson regression model can be written down as: (2) $\Pr(y_{ij} \mid x_{ij}) = \pi_1\, g(y_{ij}; \mu_{ij} = 0) + \pi_2\, g(y_{ij}; \mu_{ij} = \exp(x'_{ij}\beta))$ The first part of the equation has a mean of zero and the second part of the equation has a mean that depends on the covariates (time). $\pi_1$ and $\pi_2 = 1 - \pi_1$ are the component weights/latent class probabilities and $g(y_{ij}; \mu_{ij})$ is the Poisson probability for count $y_{ij}$ with mean $\mu_{ij}$ [27]. A full explanation of the mathematical background of the analyses with mixed distribution models can be found in other papers [28-36]. For the two-part joint model, random intercepts and random slopes can be added in a similar fashion as for the linear mixed model. In the present analyses, educational level was modelled as a dichotomous variable distinguishing between low and high education (with low education as reference), time was modelled as a categorical variable (represented by two dummy variables, with baseline as reference). Two model fit parameters were used to compare the three models with each other. Firstly, the Bayesian information criterion (BIC) was used. The BIC is an indicator of model fit, based on the −2 log likelihood, but taking into account the number of parameters estimated [37]. A lower BIC indicates a better performance of the model. Secondly, predicted frequencies (including the random effects) of the outcome variable, obtained when fitting the models, were compared to observed frequencies in hypoglycaemic events to compare the accuracy of the different models. This comparison was graphically presented in scatter plots.
In addition, the means of the squared residuals (MSR) were computed for the different models. A lower MSR indicates a better performance of the model. All analyses were performed with Stata (version 11.1) [38]. Estimations were performed with the GLLAMM procedure [26,39], using adaptive quadrature to estimate the random effects. Scatter plots were created within PASW Statistics 18 [40].
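The GLLAMM models themselves are not reproduced here, but the core of equation (2) can be illustrated compactly. The Python sketch below is an illustration under simplifying assumptions, not a re-implementation of the authors' Stata analyses: it maximises a binomial/Poisson mixture likelihood without random effects and with a single free mixing weight, and then computes the BIC and the mean of the squared residuals used as fit indicators in this paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(7)
n = 800

# Simulated observations (illustration only): time coded as two dummies, as in this paper.
time = rng.integers(0, 3, size=n)                      # 0 = baseline, 1 = 3 months, 2 = 6 months
X = np.column_stack([np.ones(n), time == 1, time == 2]).astype(float)

true_pi, true_beta = 0.55, np.array([1.3, 0.1, 0.2])
y = np.where(rng.uniform(size=n) < true_pi, 0, rng.poisson(np.exp(X @ true_beta)))

def negloglik(params):
    """Mixture of a point mass at zero and a Poisson(exp(X beta)) count part, cf. equation (2)."""
    a, beta = params[0], params[1:]
    pi = 1 / (1 + np.exp(-a))                          # weight of the zero component
    mu = np.exp(X @ beta)
    log_pois = y * np.log(mu) - mu - gammaln(y + 1)    # Poisson log-probability
    lik = pi * (y == 0) + (1 - pi) * np.exp(log_pois)
    return -np.sum(np.log(np.clip(lik, 1e-300, None)))

res = minimize(negloglik, x0=np.zeros(1 + X.shape[1]), method="BFGS")

# Fit indicators used in this paper: BIC (= -2 logL + k ln n) and the mean squared residual.
k = res.x.size
bic = 2 * res.fun + k * np.log(n)
pi_hat = 1 / (1 + np.exp(-res.x[0]))
pred = (1 - pi_hat) * np.exp(X @ res.x[1:])            # expected count under the mixture
msr = np.mean((y - pred) ** 2)
print(f"BIC={bic:.1f}  MSR={msr:.2f}  estimated zero weight={pi_hat:.2f}")
```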
Results
Table 1 shows the number, the proportion, and the median of the patients who have experienced ≥1 hypoglycaemic event for the three measurements over time by education as well as for the total number of patients. The proportion of both lower and higher educated patients that experienced ≥1 hypoglycaemic event increased over time. 34.5% of the lower educated patients experienced ≥1 hypoglycaemic event at baseline and after six months this increased to 37.9%, for the higher educated patients the percentage increased from 43.1% to 50.4%. In contrast, the median number of events for subjects with ≥1 hypoglycaemic event decreased over time. For the lower educated patients the median decreased from 4 to 2 and for the higher educated patients from 4 to 3. Table 2 shows the results of the analyses relating the hypoglycaemic events (dependent variable) to educational level (independent variable) when the number of hypoglycaemic episodes was treated as normal, Poisson or binomial/Poisson (two-part joint) distributed. All three models showed a significant positive relationship between education and the number of hypoglycaemic events. The model fit was best for the two-part joint mixed model (binomial/Poisson) (BIC: 6687.64, MSR: 7.26). Furthermore, Figure 1 depicts the accuracy of the different analyses in scatter plots of observed vs. predicted values. The binomial/Poisson model clearly performed best especially in correctly predicting the number of patients with zero events. The proportion and median of diabetes patients with ≥ 1 hypoglycaemic event by time and educational level* * Having zero hypoglycaemic events is the complement of ≥ hypoglycaemic events. Regression and model fit parameters for the three longitudinal models with a random intercept, evaluating the difference in hypoglycaemic events for education* Abbreviation: Regression coefficients (Coef.), Standard errors (Std. Err.), P-value (P > |z|), Bayesian information criterion (BIC). * Education is time independent, therefore random slopes could not be calculated. Scatter plots of the observed vs. predicted values for the three longitudinal models with a random intercept, evaluating the hypoglycaemic events for education. Table 3 shows the results of the analyses regarding the development over time as independent variable with only a random intercept. In all three models the regression coefficients for time were negative, and the corresponding P-values at T2 were significant. Comparing both the fit indicators (Table 3) and the accuracy (Figure 2), similar results were found as for the analyses comparing higher and lower educated patients. The two-part joint model (binomial/Poisson) had the best model fit (BIC: 7013.64, MSR: 6.56) and was also best in correctly predicting the zero events. However, the models changed considerably once random slopes for time were added to the models (Table 4): The signs of the regression coefficients for the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) changed from negative to positive. The regression coefficients derived from the Poisson mixed model changed from −0.11 (3 months) and −0.26 (6 months) to 0.28 (3 months) and 0.38 (6 months) when random slopes were included. For the two-part joint mixed model (binomial/Poisson) the regression coefficients changed from −0.18(3 months) and −0.27 (6 months) to 0.12 (3 months) and 0.25 (6 months). 
Table 3 shows the results of the analyses with the development over time as the independent variable and only a random intercept. In all three models the regression coefficients for time were negative, and the corresponding P-values at T2 were significant. Comparing both the fit indicators (Table 3) and the accuracy (Figure 2), the results were similar to those of the analyses comparing higher and lower educated patients: the two-part joint model (binomial/Poisson) had the best model fit (BIC: 7013.64, MSR: 6.56) and was also best in correctly predicting the zero events.

Table 3. Regression and model fit parameters for the three longitudinal models with a random intercept, evaluating the difference in development of the hypoglycaemic events over time. Abbreviations: regression coefficient (Coef.), standard error (Std. Err.), P-value (P > |z|), Bayesian information criterion (BIC).

Figure 2. Scatter plots of the observed versus predicted values for the three longitudinal models with only a random intercept, evaluating the difference in development of the hypoglycaemic events over time.

However, the models changed considerably once random slopes for time were added (Table 4): the signs of the regression coefficients for the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) changed from negative to positive. The regression coefficients from the Poisson mixed model changed from −0.11 (3 months) and −0.26 (6 months) to 0.28 (3 months) and 0.38 (6 months) when random slopes were included; for the two-part joint mixed model (binomial/Poisson) they changed from −0.18 (3 months) and −0.27 (6 months) to 0.12 (3 months) and 0.25 (6 months). Adding random slopes also resulted in a much better fit for the Poisson mixed model (BIC: 6774.75, MSR: 0.24) and the two-part joint mixed model (binomial/Poisson) (BIC: 6467.55, MSR: 0.30), and the predicted values for both models (Figure 3) were in close accordance with the observed values. A small difference in the correctly estimated zeros nevertheless remained in favour of the two-part joint mixed model (binomial/Poisson): 89.5% of the zeros were correctly estimated by the Poisson mixed model, compared with 92.8% by the longitudinal two-part joint mixed model.

Table 4. Regression and model fit parameters for the three longitudinal models with a random intercept and random slopes for time, evaluating the difference in development of the hypoglycaemic events over time. Abbreviations: regression coefficient (Coef.), standard error (Std. Err.), P-value (P > |z|), Bayesian information criterion (BIC).

Figure 3. Scatter plots of the observed versus predicted values for the three longitudinal models with a random intercept and random slopes for time, evaluating the difference in development of the hypoglycaemic events over time.
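The ability of the binomial/Poisson model to reproduce the excess of zeros follows directly from its response probability (Equation 2 in the Methods): the probability of a zero count equals π1 plus π2 times the Poisson probability of zero. The Python sketch below is our illustration of that mixture density, with hypothetical parameter values; it is not the authors' estimation code:

    import numpy as np
    from scipy.stats import poisson

    def two_part_prob(y, pi1, mu):
        # Response probability of the two-part joint binomial/Poisson model (Eq. 2):
        #   pi1 * 1[y = 0] + (1 - pi1) * Poisson(y; mu), with mu = exp(x'beta)
        y = np.asarray(y)
        pi2 = 1.0 - pi1
        return pi1 * (y == 0) + pi2 * poisson.pmf(y, mu)

    # hypothetical parameters: 50% structural zeros, mean of 3 events otherwise
    pi1, mu = 0.5, 3.0
    print(two_part_prob(np.array([0, 1, 2, 5]), pi1, mu))
    # the zero probability combines structural and 'sampling' zeros:
    print(pi1 + (1 - pi1) * np.exp(-mu))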
Discussion
This study showed that the two-part joint mixed model (binomial/Poisson) performed much better than the ‘conventional’ mixed models when only a random intercept was added to the models, especially in estimating the excess of zeros. However, when random slopes were added, the performance of the Poisson mixed model increased considerably and became more or less the same as that of the two-part joint mixed model (binomial/Poisson).

It is known from the literature that Poisson regression can handle even a high fraction of zeros [1]. In the present study the percentage of subjects with zero events was relatively high and decreased over time from 61.4% to 56.4%; however, it is not exactly known to what extent the Poisson distribution is able to model such an excess of zeros. In addition, the performance of the Poisson mixed model regarding the number of zeros improved considerably when random slopes were added to the model. Surprisingly, adding random slopes resulted not only in a much better fit, but also in a sign change for the development over time in both the Poisson mixed model and the two-part joint mixed model (binomial/Poisson). Although it is not clear why this sign change occurs, a possible explanation is that in a model with only a random intercept, only the ‘average’ values are allowed to differ between subjects, so the regression coefficient obtained from those analyses reflects only the ‘average’ decrease in the number of events. When random slopes are added, the development over time is also allowed to differ between subjects; an analysis with both a random intercept and random slopes therefore also reflects the increased probability of having an event, which leads to a much better fit and a positive regression coefficient instead of an inverse one.

The interpretation of the regression coefficients of the linear mixed model and the Poisson mixed model is quite straightforward. For the linear mixed model (Table 2), the relation between education and hypoglycaemic events can be interpreted as follows: higher educated diabetic patients have, on average over time, 1.14 more events than lower educated diabetic patients. For the Poisson mixed model (Table 2), exp(0.66) = 1.93, i.e. on average over time, higher educated diabetic patients have a 93% higher prevalence rate of hypoglycaemic events than lower educated diabetic patients. The interpretation of the regression coefficient of the two-part joint mixed model is somewhat more complicated, since the model gives a combined regression coefficient for the binomial process and the Poisson process. However, some researchers have interpreted the regression coefficient of a two-part joint model as the result for the cases that are above the limit ([41] p. 320, [42] p. 503). These cases above the limit would then be interpreted in the same way as for a Poisson model, i.e. higher educated diabetic patients have an 86% higher prevalence rate of hypoglycaemic events than lower educated diabetic patients (exp(0.62) = 1.86). To overcome the problem of interpreting a combined regression coefficient, McDonald and Moffitt [41] developed a decomposition technique for the regression coefficient of a two-part joint binomial/normal (tobit) model. The general idea of their decomposition is that the regression coefficient combines two interpretations: 1) the difference in the outcome variable for cases above the limit, weighted by the probability of being above the limit; and 2) the difference in the probability of being above the limit, weighted by the expected value of the outcome variable if above the limit [43]. In theory, this technique could also be used for two-part joint models that, instead of a normal distribution, use another distribution such as the Poisson distribution for the values above zero.

In the present paper a two-part joint model was used to model the number of hypoglycaemic events, yielding a shared regression coefficient for the binomial and the Poisson distribution combined. An important reason why one regression coefficient is preferred is that the outcome variable in this example (the number of hypoglycaemic events) should be seen as one process that cannot be split into two processes with separate regression coefficients. In contrast, it is sometimes better to analyse the data with a two-part separate model, leading to separate regression coefficients for both parts of the process. An example is the analysis of determinants of smoking behaviour, which can differ between the logistic part and the Poisson part of the analysis: one set of covariates may be needed to model why some people smoke and others do not, and a different set may be needed to model how many cigarettes a person smokes.

Conclusions
This paper showed that the two-part joint mixed model (binomial/Poisson) is a more appropriate method for the analysis of longitudinal data with an excess of zeros when only a random intercept is included in the model. However, in the model with random slopes for time, the Poisson mixed model also performed remarkably well. In addition, more research is needed on the interpretation of the regression coefficients of the longitudinal two-part joint model.
Competing interests
The authors declare that they have no competing interests.

Authors’ contributions
ASS made contributions to the design, conducted the analysis and interpretation of the data, and drafted the manuscript. TRSH made contributions to the acquisition of the data, and reviewed the article critically for important intellectual content. MRB made contributions to the interpretation of the data, and reviewed the article critically for important intellectual content. MWH made contributions to the interpretation of the data, and reviewed the article critically for important intellectual content. JWRT made substantial contributions to the conception and design and the analysis of data, helped to draft the manuscript, supervised the analysis and interpretation of the data, and reviewed the article critically for important intellectual content. All authors read and approved the final manuscript.

Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/13/27/prepub
Keywords: Two-part joint model; Excess of zeros; Count; Mixed modelling; Longitudinal; Statistical methods
Background: Within longitudinal epidemiological research, ‘count’ outcome variables frequently occur. Nowadays it is possible to analyse longitudinal ‘count’ outcome variables with advanced statistical techniques such as mixed models. Because ‘count’ data often follow a Poisson distribution, these data are mostly analysed with longitudinal Poisson regression. In many situations ‘count’ data does not exactly follow a Poisson distribution; they are often overdispersed, (i.e. the variance of the outcome variable is higher than the mean value). One of the solutions to deal with this overdispersion in count data is to use a negative binomial regression analysis [1]. However, overdispersion in the count variable is mostly caused by an excess of zeros, which cannot completely be controlled by assuming a negative binomial distribution. Examples of data with an excess of zeros (which are also known as ‘semicontinuous’ data) [2] within the field of epidemiology are: the number of hypoglycaemic events in diabetics, the number of hospitalisations in the general population, the number of sports injuries, the number of falls in a group of elderly people and the number of cigarettes smoked. The classical methods to analyse outcome variables with an excess of zeros are to reduce the information in the data to either a dichotomous outcome variable (mostly comparing zero versus non-zero) or a categorical outcome variable (mostly comparing zero versus two groups of non-zero outcomes in which the groups are divided according to the median of the non-zero part). Sometimes, researchers try to transform (with a logarithmic transformation) a Poisson distribution with many zeros into a normally distributed variable. However, zeros cannot be log transformed and other computations such as adding ‘1’ to the ‘count’ outcomes with an excess of zeros before log transforming does not solve the problem either. To properly address the problem of excess of zeros, several so-called two-part statistical models have been developed. These models, which are particularly popular in econometrics, are also known as mixed response or mixed distribution models and they include zero-inflated Poisson (ZIP) regression, zero-inflated negative binomial (ZINB) regression, sample selection methods, and hurdle models [3-16]. The idea behind these two-part approaches is that the outcome variable has a mixed distribution (i.e. a binomial distribution to deal with zero versus non-zero, and a Poisson (or other) distribution to deal with the non-zero part of the distribution). In the standard two-part approaches the two processes are split and for every process different regression coefficients are obtained. This also means that different sets of covariates can be included, one set for the binomial process (zero versus non-zero) and one set for the Poisson process. In a ZIP model, for instance, one regression coefficient reflects the relationship of a certain covariate with zero versus non-zero, while another regression coefficient reflects the relationship with the ‘count’ outcomes above zero. [17,18]. For some research questions (e.g. investigating the determinants of smoking behaviour) this is a nice feature. However, in many situations one regression coefficient for each covariate would be preferable (e.g. the analysis of hypoglycaemic events). 
Despite the preference of one regression coefficient, it should be realized that this regression coefficient is somewhat difficult to interpret, because it combines a binomial and a Poisson process into one coefficient. Models that provide one set of regression coefficients for the binomial distribution and Poisson (or other) distribution combined are known as two-part joint regression models [19-22]. For longitudinal data analysis these two-part joint regression models are almost never used in epidemiological practice. The objective of this paper is to introduce a relatively ‘new’ method of a two-part joint mixed model (binomial/Poisson) in longitudinal data analysis for ‘count’ outcome variables with an excess of zeros. Furthermore, the performance of this new method will be compared to a linear mixed model and a Poisson mixed model; two models that are frequently used for longitudinal epidemiological data. Methods: Dataset The observational longitudinal dataset used for the analyses was obtained from the Study of the Psychological Impact in Real care of Initiating insulin glargine Treatment (SPIRIT) conducted between 2005 and 2009. This study aimed to examine the use of insulin glargine (a long acting insulin analog) on general emotional well-being, diabetes symptom distress and worries about hypoglycaemia in Dutch type 2 diabetes patients who previously used oral anti-hyperglycaemic medication. Type 2 diabetes patients who used oral anti-hyperglycaemic agents were recruited from 363 Dutch primary care practices, which were spread across the country. This resulted in a total sample of 889 patients. Measurements were conducted at baseline, after three and after six months. Results from this study have been presented previously [23]. We re-analysed the data in order to assess the development over time in hypoglycaemic events for diabetic patients, and the difference between low and high educated diabetes patients with three different mixed models. The observational longitudinal dataset used for the analyses was obtained from the Study of the Psychological Impact in Real care of Initiating insulin glargine Treatment (SPIRIT) conducted between 2005 and 2009. This study aimed to examine the use of insulin glargine (a long acting insulin analog) on general emotional well-being, diabetes symptom distress and worries about hypoglycaemia in Dutch type 2 diabetes patients who previously used oral anti-hyperglycaemic medication. Type 2 diabetes patients who used oral anti-hyperglycaemic agents were recruited from 363 Dutch primary care practices, which were spread across the country. This resulted in a total sample of 889 patients. Measurements were conducted at baseline, after three and after six months. Results from this study have been presented previously [23]. We re-analysed the data in order to assess the development over time in hypoglycaemic events for diabetic patients, and the difference between low and high educated diabetes patients with three different mixed models. Statistical methods All analyses were performed within the framework of longitudinal mixed models, The general idea behind mixed models for longitudinal data analysis is that an adjustment is made for the correlated outcome observations within individuals over time by estimating either the differences in average values of the outcome and/or the differences in relationships with time-dependent covariates. These differences i.e. 
variances are known as random effects and can be added to the intercept of the regression model (i.e. random intercept) and/or to the different regression coefficients of time-dependent covariates (i.e. random slopes) [24-26]. In this paper two ‘standard’ approaches, i.e. a linear mixed model treating the outcome variables as normally distributed and a Poisson mixed model treating the outcome variables as Poisson distributed, will be compared with a two-part joint regression model in order to analyse the development over time and to analyse the differences between low and high educated patients. Equation 1a shows the linear mixed model with only a random intercept, while equation 1b shows the linear mixed model with both a random intercept and a random slope for time. (1a) y ij = β 1 + ζ 1 j + β 2 x ij + ∈ ij (1b) y ij = β 1 + ζ 1 j + β 2 + ζ 2 j x ij + ∈ ij Where yij is the hypoglycaemic score for the jth patient at the ith time, xij is the corresponding time, β1 the fixed intercept of the patients,ζ1j the patient-specific random intercept, β2 the fixed slope of the patients, ζ2j the patient-specific random slope, and ζij is a patient-specific residual error term at the ith time [26]. It was assumed that each of the two variations in the random intercept and the random slope was normally distributed with an average of zero and a variance σ2. Furthermore, the Poisson (ln(μij)) mixed model can be specified in a similar way. For the two-part joint approach, a binomial/Poisson mixed distribution was used. The general idea behind this mixture is that the outcome variable has a binomial distribution for the zero versus non-zero part and a Poisson distribution for the non-zero part. The binomial distribution is modelled by a logit link function, while the Poisson distribution is modelled by a log link function. The response probability of a longitudinal two-part joint binomial/Poisson regression model can be written down as: (2) Pr y ij | x ij = π 1 g y ij ; μ ij = 0 + π 2 g y ij ; μ ij = exp x ′ ij β The first part of the equation has a mean of zero and the second part of the equation has a mean that depends on the covariates (time). π1 and π2=1−π1 are the component weights/latent class probabilities and g(yij;μij) is the Poisson probability for count yij with mean μij[27]. A full explanation of the mathematical background of the analyses with mixed distribution models can be found in other papers [28-36]. For the two-part joint model, random intercepts and random slopes can be added in a similar fashion as for the linear mixed model. In the present analyses, educational level was modelled as a dichotomous variable distinguishing between low and high education (with low education as reference), time was modelled as a categorical variable (represented by two dummy variables, with baseline as reference). Two model fit parameters were used to compare the three models with each other. Firstly, the Bayesian information criterion (BIC) was used. The BIC is an indicator of model fit, based on the −2 log likelihood, but taking into account the number of parameters estimated [37]. A lower BIC indicates a better performance of the model. Secondly, predicted frequencies (including the random effects) of the outcome variable, obtained when fitting the models, were compared to observed frequencies in hypoglycaemic events to compare the accuracy of the different models. This comparison was graphically presented in scatter plots. 
In addition, the means of the squared residuals (MSR) were computed for the different models. A lower MSR indicates a better performance of the model. All analyses were performed with Stata (version 11.1) [38]. Estimations were performed with the GLLAMM procedure [26,39], using adaptive quadrature to estimate the random effects. Scatter plots were created within PASW Statistics 18 [40]. All analyses were performed within the framework of longitudinal mixed models, The general idea behind mixed models for longitudinal data analysis is that an adjustment is made for the correlated outcome observations within individuals over time by estimating either the differences in average values of the outcome and/or the differences in relationships with time-dependent covariates. These differences i.e. variances are known as random effects and can be added to the intercept of the regression model (i.e. random intercept) and/or to the different regression coefficients of time-dependent covariates (i.e. random slopes) [24-26]. In this paper two ‘standard’ approaches, i.e. a linear mixed model treating the outcome variables as normally distributed and a Poisson mixed model treating the outcome variables as Poisson distributed, will be compared with a two-part joint regression model in order to analyse the development over time and to analyse the differences between low and high educated patients. Equation 1a shows the linear mixed model with only a random intercept, while equation 1b shows the linear mixed model with both a random intercept and a random slope for time. (1a) y ij = β 1 + ζ 1 j + β 2 x ij + ∈ ij (1b) y ij = β 1 + ζ 1 j + β 2 + ζ 2 j x ij + ∈ ij Where yij is the hypoglycaemic score for the jth patient at the ith time, xij is the corresponding time, β1 the fixed intercept of the patients,ζ1j the patient-specific random intercept, β2 the fixed slope of the patients, ζ2j the patient-specific random slope, and ζij is a patient-specific residual error term at the ith time [26]. It was assumed that each of the two variations in the random intercept and the random slope was normally distributed with an average of zero and a variance σ2. Furthermore, the Poisson (ln(μij)) mixed model can be specified in a similar way. For the two-part joint approach, a binomial/Poisson mixed distribution was used. The general idea behind this mixture is that the outcome variable has a binomial distribution for the zero versus non-zero part and a Poisson distribution for the non-zero part. The binomial distribution is modelled by a logit link function, while the Poisson distribution is modelled by a log link function. The response probability of a longitudinal two-part joint binomial/Poisson regression model can be written down as: (2) Pr y ij | x ij = π 1 g y ij ; μ ij = 0 + π 2 g y ij ; μ ij = exp x ′ ij β The first part of the equation has a mean of zero and the second part of the equation has a mean that depends on the covariates (time). π1 and π2=1−π1 are the component weights/latent class probabilities and g(yij;μij) is the Poisson probability for count yij with mean μij[27]. A full explanation of the mathematical background of the analyses with mixed distribution models can be found in other papers [28-36]. For the two-part joint model, random intercepts and random slopes can be added in a similar fashion as for the linear mixed model. 
Dataset: The observational longitudinal dataset used for the analyses was obtained from the Study of the Psychological Impact in Real care of Initiating insulin glargine Treatment (SPIRIT), conducted between 2005 and 2009. This study aimed to examine the use of insulin glargine (a long-acting insulin analog) on general emotional well-being, diabetes symptom distress and worries about hypoglycaemia in Dutch type 2 diabetes patients who previously used oral anti-hyperglycaemic medication. Type 2 diabetes patients who used oral anti-hyperglycaemic agents were recruited from 363 Dutch primary care practices, which were spread across the country. This resulted in a total sample of 889 patients. Measurements were conducted at baseline, after three and after six months. Results from this study have been presented previously [23]. We re-analysed the data in order to assess the development over time in hypoglycaemic events for diabetic patients, and the difference between low and high educated diabetes patients, with three different mixed models.
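Purely as an illustration of the kind of model described above, the sketch below fits a random-intercept linear mixed model (cf. equation 1a) to simulated data with the same layout (patients measured at baseline, 3 and 6 months). It uses Python's statsmodels rather than the GLLAMM procedure actually used in the paper, and all values are simulated assumptions, not the SPIRIT data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, times = 889, [0, 3, 6]   # sample size and measurement times only mirror the text
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), len(times)),
    "time": np.tile(times, n_patients),
    "educ": np.repeat(rng.integers(0, 2, n_patients), len(times)),  # 0 = low, 1 = high education
})
# Simulated count outcome with a patient-specific random intercept (purely illustrative)
intercepts = rng.normal(0, 1, n_patients)
df["events"] = rng.poisson(np.exp(0.5 + intercepts[df["patient"]] + 0.3 * df["educ"]))

# Random-intercept linear mixed model; time as categorical with baseline as reference
model = smf.mixedlm("events ~ C(time) + educ", data=df, groups=df["patient"])
print(model.fit().summary())
```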
Results: Table 1 shows the number, the proportion, and the median of the patients who have experienced ≥1 hypoglycaemic event for the three measurements over time by education as well as for the total number of patients. The proportion of both lower and higher educated patients that experienced ≥1 hypoglycaemic event increased over time.
At baseline, 34.5% of the lower educated patients experienced ≥1 hypoglycaemic event, and after six months this increased to 37.9%; for the higher educated patients the percentage increased from 43.1% to 50.4%. In contrast, the median number of events for subjects with ≥1 hypoglycaemic event decreased over time: for the lower educated patients the median decreased from 4 to 2 and for the higher educated patients from 4 to 3. Table 2 shows the results of the analyses relating the hypoglycaemic events (dependent variable) to educational level (independent variable) when the number of hypoglycaemic episodes was treated as normally, Poisson or binomial/Poisson (two-part joint) distributed. All three models showed a significant positive relationship between education and the number of hypoglycaemic events. The model fit was best for the two-part joint mixed model (binomial/Poisson) (BIC: 6687.64, MSR: 7.26). Furthermore, Figure 1 depicts the accuracy of the different analyses in scatter plots of observed vs. predicted values. The binomial/Poisson model clearly performed best, especially in correctly predicting the number of patients with zero events.

Table 1. The proportion and median of diabetes patients with ≥1 hypoglycaemic event by time and educational level. Having zero hypoglycaemic events is the complement of having ≥1 hypoglycaemic event.

Table 2. Regression and model fit parameters for the three longitudinal models with a random intercept, evaluating the difference in hypoglycaemic events for education. Abbreviations: regression coefficients (Coef.), standard errors (Std. Err.), P-value (P > |z|), Bayesian information criterion (BIC). Education is time-independent; therefore, random slopes could not be calculated.

Figure 1. Scatter plots of the observed vs. predicted values for the three longitudinal models with a random intercept, evaluating the hypoglycaemic events for education.

Table 3 shows the results of the analyses regarding the development over time as independent variable with only a random intercept. In all three models the regression coefficients for time were negative, and the corresponding P-values at T2 were significant. Comparing both the fit indicators (Table 3) and the accuracy (Figure 2), similar results were found as for the analyses comparing higher and lower educated patients. The two-part joint model (binomial/Poisson) had the best model fit (BIC: 7013.64, MSR: 6.56) and was also best in correctly predicting the zero events. However, the models changed considerably once random slopes for time were added (Table 4): the signs of the regression coefficients for the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) changed from negative to positive. The regression coefficients derived from the Poisson mixed model changed from −0.11 (3 months) and −0.26 (6 months) to 0.28 (3 months) and 0.38 (6 months) when random slopes were included. For the two-part joint mixed model (binomial/Poisson) the regression coefficients changed from −0.18 (3 months) and −0.27 (6 months) to 0.12 (3 months) and 0.25 (6 months). Adding random slopes to the models resulted in a much better fit for the Poisson mixed model (BIC: 6774.75, MSR: 0.24) and the two-part joint mixed model (binomial/Poisson) (BIC: 6467.55, MSR: 0.30). Furthermore, the predicted values (Figure 3) for the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) were in close accordance with the observed values.
However, to a small extent there was still a difference in the correctly estimated zeros in favour of the two-part joint mixed model (binomial/Poisson): in total, 89.5% of the zeros were correctly estimated by the Poisson mixed model and 92.8% by the longitudinal two-part joint mixed model.

Table 3. Regression and model fit parameters for the three longitudinal models with a random intercept, evaluating the difference in development of the hypoglycaemic events over time. Abbreviations: regression coefficients (Coef.), standard errors (Std. Err.), P-value (P > |z|), Bayesian information criterion (BIC).

Figure 2. Scatter plots of the observed vs. predicted values for the three longitudinal models with only a random intercept, evaluating the difference in development of the hypoglycaemic events over time.

Table 4. Regression and model fit parameters for the three longitudinal models with a random intercept and random slopes for time, evaluating the difference in development of the hypoglycaemic events over time. Abbreviations: regression coefficients (Coef.), standard errors (Std. Err.), P-value (P > |z|), Bayesian information criterion (BIC).

Figure 3. Scatter plots of the observed vs. predicted values for the three longitudinal models with a random intercept and random slopes for time, evaluating the difference in development of the hypoglycaemic events over time.

Discussion: This study showed that the two-part joint mixed model (binomial/Poisson) performed much better than the 'conventional' mixed models when only a random intercept was added to the models. This was especially the case in estimating the excess of zeros. However, when random slopes were added to the models, the performance of the Poisson mixed model increased considerably and was more or less the same as that of the two-part joint mixed model (binomial/Poisson). It is known from the literature that Poisson regression can handle even a high fraction of zeros [1]. In the present study the percentage of subjects having zero events was relatively high and decreased over time from 61.4% to 56.4%. However, it is not exactly known to what extent the Poisson distribution is able to model the excess of zeros. In addition, the performance of the Poisson mixed model regarding the number of zeros improved considerably when random slopes were added to the model. Surprisingly, adding random slopes to the model resulted not only in a much better fit, but also in a sign change for the development over time in both the Poisson mixed model and the two-part joint mixed model (binomial/Poisson). Although it is not clear why this sign change occurs, a possible explanation is that in a model with only a random intercept, only the 'average' values are allowed to differ between subjects, and therefore the regression coefficient obtained from these analyses only reflects the 'average' decrease in the number of events. When random slopes are added to the models, the development over time is also allowed to differ between subjects. Therefore, an analysis with both a random intercept and random slopes also reflects the increased probability of having an event. This leads to a much better fit and a positive regression coefficient instead of an inverse one. The interpretation of the regression coefficients of the linear mixed model and the Poisson mixed model is quite straightforward.
For example, the relation between education and hypoglycaemic events can be interpreted as follows for the linear mixed model (Table 2): higher educated diabetic patients have, on average over time, 1.14 more events than lower educated diabetic patients. For the Poisson mixed model the regression coefficient can be interpreted as (Table 2): exp(0.66) = 1.93, i.e. on average over time, higher educated diabetic patients have a 93% higher prevalence rate of hypoglycaemic events compared to lower educated diabetic patients. The interpretation of the regression coefficient of the two-part joint mixed model is somewhat more complicated, since the model gives a combined regression coefficient for the binomial process and the Poisson process. However, some researchers have interpreted the regression coefficient of a two-part joint model as being the result for the cases that are above the limit [41] p. 320, [42] p. 503. These cases above the limit would be interpreted in the same way as in a Poisson model, i.e. higher educated diabetic patients have an 86% higher prevalence rate of hypoglycaemic events compared to lower educated diabetic patients (exp(0.62) = 1.86). To overcome the problem of interpreting a combined regression coefficient, McDonald and Moffitt [41] developed a decomposition technique for the regression coefficient of a two-part joint binomial/normal (tobit) model. The general idea of their decomposition is that the regression coefficient combines two interpretations: 1) the difference in the outcome variable for those above the limit, weighted by the probability of being above the limit; and 2) the difference in the probability of being above the limit, weighted by the expected value of the outcome variable if above the limit [43]. In theory, this technique could also be used for two-part joint models that, instead of a normal distribution, use another distribution such as the Poisson distribution for the values above zero. In the present paper a two-part joint model was used to model the number of hypoglycaemic events, obtaining a shared regression coefficient for the binomial and the Poisson distribution combined. An important reason why one regression coefficient is preferred is that the outcome variable in this example (i.e. the number of hypoglycaemic events) should be seen as one process that cannot be split into two processes with separate regression coefficients. In contrast, sometimes it is better to analyse the data with a two-part separate model, leading to separate regression coefficients for both parts of the process. An example could be the analysis of determinants of smoking behaviour, which can differ between the logistic part of the analysis and the Poisson part: the logistic part may need one set of covariates to model why some people smoke and others do not, while a different set of covariates may be needed to model how many cigarettes a person smokes.
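As a small worked check of the coefficient interpretations above (a sketch only; the coefficients 0.66 and 0.62 are the values quoted in the text for Table 2, and the conversion is simply the exponential of the coefficient):

```python
import math

# Poisson mixed model: coefficient for high vs. low education
rate_ratio_poisson = math.exp(0.66)    # about 1.93 -> roughly 93% higher event rate
# Two-part joint mixed model: shared coefficient, read as the 'above the limit' rate ratio
rate_ratio_two_part = math.exp(0.62)   # about 1.86 -> roughly 86% higher event rate
print(round(rate_ratio_poisson, 2), round(rate_ratio_two_part, 2))
```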
Conclusions: This paper showed that the two-part joint mixed model (binomial/Poisson) is a more appropriate method for the analysis of longitudinal data with an excess of zeros when only a random intercept is included in the model. However, in the model with random slopes for time, the Poisson mixed model also performed remarkably well. In addition, more research is needed on the interpretation of the regression coefficients of the longitudinal two-part joint model.
Competing interests: The authors declare that they have no competing interests.
Authors' contributions: ASS made contributions to the design, conducted the analysis and interpretation of the data, and drafted the manuscript. TRSH made contributions to the acquisition of the data, and reviewed the article critically for important intellectual content. MRB made contributions to the interpretation of the data, and reviewed the article critically for important intellectual content. MWH made contributions to the interpretation of the data, and reviewed the article critically for important intellectual content. JWRT made substantial contributions to the conception and design and the analysis of data, helped to draft the manuscript, supervised the analysis and interpretation of the data, and reviewed the article critically for important intellectual content. All authors read and approved the final manuscript.
Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/13/27/prepub
Background: Within longitudinal epidemiological research, 'count' outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model or a Poisson mixed model, a two-part mixed model would be better suited to analysing outcome variables with an excess of zeros. Therefore, the objective of this paper was to introduce the relatively 'new' method of two-part joint regression modelling in longitudinal data analysis for outcome variables with an excess of zeros, and to compare the performance of this method to current approaches. Methods: Within an observational longitudinal dataset, we compared three techniques: two 'standard' approaches (a linear mixed model and a Poisson mixed model) and a two-part joint mixed model (a binomial/Poisson mixed distribution model), including random intercepts and random slopes. Model fit indicators and differences between predicted and observed values were used for comparisons. The analyses were performed with Stata using the GLLAMM procedure. Results: Regarding the random intercept models, the two-part joint mixed model (binomial/Poisson) performed best. Adding random slopes for time to the models changed the sign of the regression coefficient for both the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) and resulted in a much better fit. Conclusions: This paper showed that a two-part joint mixed model is a more appropriate method to analyse longitudinal data with an excess of zeros compared to a linear mixed model and a Poisson mixed model. However, in a model with random slopes for time a Poisson mixed model also performed remarkably well.
Background: Within longitudinal epidemiological research, 'count' outcome variables frequently occur. Nowadays it is possible to analyse longitudinal 'count' outcome variables with advanced statistical techniques such as mixed models. Because 'count' data often follow a Poisson distribution, these data are mostly analysed with longitudinal Poisson regression. In many situations, however, 'count' data do not exactly follow a Poisson distribution; they are often overdispersed (i.e. the variance of the outcome variable is higher than the mean). One of the solutions to deal with this overdispersion in count data is to use a negative binomial regression analysis [1]. However, overdispersion in the count variable is mostly caused by an excess of zeros, which cannot be completely accounted for by assuming a negative binomial distribution. Examples of data with an excess of zeros (also known as 'semicontinuous' data) [2] within the field of epidemiology are: the number of hypoglycaemic events in diabetics, the number of hospitalisations in the general population, the number of sports injuries, the number of falls in a group of elderly people and the number of cigarettes smoked. The classical methods to analyse outcome variables with an excess of zeros are to reduce the information in the data to either a dichotomous outcome variable (mostly comparing zero versus non-zero) or a categorical outcome variable (mostly comparing zero versus two groups of non-zero outcomes, in which the groups are divided according to the median of the non-zero part). Sometimes researchers try to transform (with a logarithmic transformation) a Poisson distribution with many zeros into a normally distributed variable. However, zeros cannot be log transformed, and other computations, such as adding '1' to the 'count' outcomes with an excess of zeros before log transforming, do not solve the problem either. To properly address the problem of an excess of zeros, several so-called two-part statistical models have been developed. These models, which are particularly popular in econometrics, are also known as mixed response or mixed distribution models and they include zero-inflated Poisson (ZIP) regression, zero-inflated negative binomial (ZINB) regression, sample selection methods, and hurdle models [3-16]. The idea behind these two-part approaches is that the outcome variable has a mixed distribution (i.e. a binomial distribution to deal with zero versus non-zero, and a Poisson (or other) distribution to deal with the non-zero part of the distribution). In the standard two-part approaches the two processes are split and different regression coefficients are obtained for every process. This also means that different sets of covariates can be included, one set for the binomial process (zero versus non-zero) and one set for the Poisson process. In a ZIP model, for instance, one regression coefficient reflects the relationship of a certain covariate with zero versus non-zero, while another regression coefficient reflects the relationship with the 'count' outcomes above zero [17,18]. For some research questions (e.g. investigating the determinants of smoking behaviour) this is a nice feature. However, in many situations one regression coefficient for each covariate would be preferable (e.g. the analysis of hypoglycaemic events).
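Before turning to the joint (single-coefficient) variant, the sketch below illustrates the separate-coefficients structure of a standard two-part approach by fitting a cross-sectional zero-inflated Poisson model to simulated data with statsmodels. This is only an illustration of the ZIP idea discussed here; it is not the longitudinal two-part joint model used later in this paper (which was fitted with GLLAMM in Stata), and all data and variable names are simulated assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 1000
educ = rng.integers(0, 2, n)                      # 0 = low, 1 = high education (simulated)
X = sm.add_constant(educ.astype(float))

# Simulate a zero-inflated Poisson outcome: an inflation (structural zero) part and a count part
p_zero = 1 / (1 + np.exp(-(0.5 - 0.8 * educ)))    # probability of a 'structural' zero
counts = rng.poisson(np.exp(0.4 + 0.6 * educ))
y = np.where(rng.random(n) < p_zero, 0, counts)

# One set of coefficients for the inflation (zero vs. non-zero) part,
# another set for the Poisson (count) part
zip_model = ZeroInflatedPoisson(y, X, exog_infl=X, inflation='logit')
result = zip_model.fit(maxiter=200, disp=0)
print(result.summary())
```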
Despite the preference for one regression coefficient, it should be realized that this regression coefficient is somewhat difficult to interpret, because it combines a binomial and a Poisson process into one coefficient. Models that provide one set of regression coefficients for the binomial and Poisson (or other) distributions combined are known as two-part joint regression models [19-22]. For longitudinal data analysis these two-part joint regression models are almost never used in epidemiological practice. The objective of this paper is to introduce a relatively 'new' method, a two-part joint mixed model (binomial/Poisson), in longitudinal data analysis for 'count' outcome variables with an excess of zeros. Furthermore, the performance of this new method is compared to a linear mixed model and a Poisson mixed model, two models that are frequently used for longitudinal epidemiological data.
Hemoperfusion and blood purification strategies in patients with COVID-19: A systematic review.
PMID: 34632596
BACKGROUND: Coronavirus disease-19 (COVID-19) ranges from asymptomatic infection to severe cases requiring admission to the intensive care unit. Together with supportive therapies (ventilation in particular), the suppression of the pro-inflammatory state has been a hypothesized target. Pharmacological therapies with corticosteroids and interleukin-6 (IL-6) receptor antagonists have reduced mortality. The use of extracorporeal cytokine removal, also known as hemoperfusion (HP), could be a promising non-pharmacological approach to decrease the pro-inflammatory state in COVID-19.
METHODS: We conducted a systematic review of the PubMed and EMBASE databases in order to summarize the evidence regarding HP therapy in COVID-19. We included original studies and case series enrolling at least five patients.
RESULTS: We included 11 articles and describe the characteristics of the populations studied from both clinical and biological perspectives. The methodological quality of the included studies was generally low. Only two studies had a control group, one of which included 101 patients in total. The remaining studies included between 10 and 50 patients. There was large variability in the HP techniques implemented and in the clinical and biological outcomes reported. Most studies described decreasing levels of IL-6 after HP treatment.
CONCLUSION: Our review does not support strong conclusions regarding the role of HP in COVID-19. Considering the very low level of clinical evidence detected, starting HP therapies in COVID-19 patients does not seem supported outside of clinical trials. Prospective randomized data are needed.
[ "Adult", "Aged", "Biomarkers", "COVID-19", "Cytokines", "Female", "Hemoperfusion", "Humans", "Inflammation Mediators", "Male", "Middle Aged", "Risk Factors", "Treatment Outcome" ]
PMCID: 8652899
INTRODUCTION
Since December 2019, the virus identified as SARS-CoV-2 has caused the pandemic of coronavirus disease 2019 (COVID-19), which has spread worldwide in a sequence of successive waves. According to data from Johns Hopkins University, as of September 17th, 2021, there have been over 227 million cases worldwide and over 4.6 million deaths [1]. COVID-19 ranges from asymptomatic infection to extremely severe cases requiring hospitalization and possibly admission to the intensive care unit (ICU). The most frequent expression of severe COVID-19 is the occurrence of acute respiratory distress syndrome (ARDS) [2], but SARS-CoV-2 has shown the ability to cause cardiovascular [3] and, eventually, multi-organ dysfunction [4]. COVID-19 generates a pro-inflammatory state with a hypothesized cytokine storm, and high levels of interleukin 6 (IL-6) have been repeatedly observed. Therefore, together with the attempt to control viral replication, to provide supportive therapies (with invasive or non-invasive mechanical ventilation, IMV or NIV, respectively) and, eventually, extracorporeal support [5], the suppression of the pro-inflammatory state has been a target [6, 7]. From a pharmacological perspective, the use of corticosteroids and, more recently, of IL-6 receptor antagonists (i.e., tocilizumab) has shown improvement of prognosis for patients experiencing severe COVID-19 [8, 9]. A possible non-pharmacological approach to limit the pro-inflammatory state induced by severe COVID-19 is the use of extracorporeal cytokine removal, also known as hemoperfusion (HP). Extracorporeal approaches include plasma exchange, direct HP on a polymyxin B-immobilized fiber column (PMX-DHP), continuous hemodiafiltration with a Cytosorb adsorber, and several other methods. These strategies have been previously investigated in other critical illnesses, such as septic shock and ARDS, and also for cases of Middle East Respiratory Syndrome due to coronavirus infection; however, there is no evidence of beneficial effects in these settings. The use of HP in patients with severe COVID-19 is pathophysiologically sound [10] and aims at interrupting the vicious pro-inflammatory circle and the associated coagulopathy, endothelial damage, and organ failure. There have been increasing reports of beneficial effects of such treatment among ICU patients with severe COVID-19, but different methods have been used [11, 12], and several platforms/consoles have been modified to host these HP filters [13]. In order to summarize the evidence regarding the use of HP strategies for cytokine removal, we systematically reviewed the existing literature that evaluates the application of different HP strategies in patients with COVID-19. Our aim was to gather information on the biochemical and clinical outcomes described by the authors. From the overview of these outcomes it might be possible to acquire information that could be considered for future prospective studies.
METHODS
Registration, search strategy, and criteria: We undertook a systematic Web-based advanced literature search through the NHS Library Evidence tool on studies reporting the use of HP in patients with COVID-19. We followed the approach suggested by the PRISMA statement for reporting systematic reviews and meta-analyses, and a PRISMA checklist is provided separately (Supporting Digital Content 1). The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) with assigned number CRD42021253676. Our core search was structured by combining the findings from two groups of terms. The first group included the following: "blood purification" OR "Cytosorb" OR "Cytokine adsor*" OR "Toraymyxin" OR "endotoxin" OR "polymyxin" OR "hemoperfusion"; the second group contained the terms "covid" OR "COVID-19". An initial computerized search of the PubMed and EMBASE databases was conducted from inception until February 4th, 2021, to identify the relevant articles, and a draft of results was written. The update of these systematic searches was performed on May 11th, 2021. Two further searches were performed manually and independently by four authors (GM, CP, GC, GDM), also exploring the lists of references of the findings of the systematic search. Inclusion criteria were pre-specified according to the PICOS approach (Table 1). We excluded experimental and animal studies, reviews, editorials, and letters to the editor. Case series were included if involving at least 5 patients. We preventively decided to describe as Supporting Information case series with fewer than 5 patients, and case reports. Language restrictions were applied: we read the full manuscript only for articles published in English.
Table 1. PICOS criteria. Abbreviations: COVID-19, coronavirus disease 19; HP, hemoperfusion.
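For illustration only, the two term groups above could be combined into a single query string as sketched below (Python). The exact syntax accepted by PubMed and EMBASE differs, so this is a schematic assumption rather than the authors' actual query.

```python
purification_terms = ["blood purification", "Cytosorb", "Cytokine adsor*",
                      "Toraymyxin", "endotoxin", "polymyxin", "hemoperfusion"]
covid_terms = ["covid", "COVID-19"]

# OR within each group, AND between the two groups
group1 = " OR ".join(f'"{t}"' for t in purification_terms)
group2 = " OR ".join(f'"{t}"' for t in covid_terms)
query = f"({group1}) AND ({group2})"
print(query)
```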
Study screening and selection: Study screening for determining the eligibility for inclusion in the systematic review and data extraction were performed independently by four reviewers (FS, GC, GDM, CP). Discordances were resolved involving two senior authors (AA, MA). Despite an expected significant heterogeneity in the techniques of blood purification, as per protocol registration on PROSPERO, we decided to include in our systematic review all the HP strategies adopted to reduce cytokines and pro-inflammatory molecules in the context of COVID-19. In particular, we included artificial liver support (ALS, already applied in cases of influenza A, H7N9 [14]), CytoSorb and HA 440/380/280/230 (which can be used standalone or placed in series with extracorporeal membrane oxygenation and hemodialysis circuits), Toraymyxin, oXiris, and plasmapheresis. Other 'standard' blood purification strategies, such as hemodialysis and continuous renal replacement therapy, were not included, as our focus was to investigate blood purification strategies aimed at reducing cytokine levels. Data were inserted in a password-protected database in Excel by three authors (GC, GDM, CP) and cross-checked by three other authors (FS, GM, LLV).
Analysis of clinical outcomes: From a clinical standpoint, we focused on the description of population characteristics for each study, reporting their hemodynamic support (dose and/or percentage of patients receiving vasopressors), the respiratory variables (oxygenation parameters as well as use of IMV or NIV), the use of other extracorporeal techniques and, finally, the admission to the ICU, the length of stay, and mortality. Whenever possible, information is provided with separation of data before and after the HP treatment.
Biological variables: Regarding the effects from a biological perspective, we reported data on inflammatory markers and on cytokine levels. As for the clinical variables, information is provided with separation of data before and after the HP treatment. We also added information on drug therapies, with particular reference to the use of steroids, IL-6 receptor antagonists, and antibiotics.
RESULTS
Our systematic search identified 292 total findings between PubMed and EMBASE. No further findings were retrieved manually. As shown in the PRISMA flow diagram (Figure 1), after the evaluation of all abstracts, 11 full-text articles [15-25] were included as matching the PICOS criteria, and their characteristics from clinical and biological perspectives are reported in Tables 2 and 3, respectively.

Figure 1. PRISMA flow diagram.

Table 2. Clinical data of the studies included in the systematic review. Abbreviations: ALS, artificial-liver blood-purification system; ECMO, extracorporeal membrane oxygenation; HA, hemoadsorption; HP, hemoperfusion; ICU, intensive care unit; M/F, male/female; MV, mechanical ventilation; NE, norepinephrine; NIV, non-invasive ventilation; RRT, renal replacement therapy; VDI, vasopressor dependency index; VIS, Vasoactive Inotropic Score.

Table 3. Biochemical data of the studies included in the systematic review. Abbreviations: ALS, artificial-liver blood-purification system; CRP, C-reactive protein; HA, hemoadsorption; HCQ, hydroxychloroquine; HP, hemoperfusion; IL, interleukin; RRT, renal replacement therapy; SOFA, Sequential Organ Failure Assessment; TNF, tumor necrosis factor.

As shown in Tables 2 and 3, several HP techniques have been implemented, and in most cases the authors reported large case series; we found only three studies with a control group and no prospective randomized study, so a meta-analysis was not feasible. In total, 226 patients were identified by our systematic review, while another 59 patients functioned as controls in some of the included studies. Of the patients receiving HP therapy, the most frequently used approaches were ALS (n = 62), CytoSorb (n = 55) and oXiris (n = 52), followed by Toraymyxin (n = 24), HA 380/280/230 (n = 18) and plasmapheresis (n = 15). With regard to other non-pharmacological extracorporeal therapies implemented in this population of patients treated with HP, continuous renal replacement therapy was used in 128 patients (57%), while extracorporeal membrane oxygenation was used in only 9 patients (4%). The mortality in the included studies varied from 16% [17] to 58% [18]. The largest study with a control group included 101 patients, 50 of whom were treated with an HP strategy (ALS), the others functioning as controls; none of the included patients needed continuous renal replacement therapy [19]. The second was a three-arm study, with patients receiving HP, continuous renal replacement therapy, or both; in this study the use of HP (±RRT; n = 8) was effective in reducing the norepinephrine infusion compared to the group of patients receiving only RRT (n = 4). Moreover, mortality was halved in those receiving HP versus continuous renal replacement therapy only [20]. The last study was a case-control study of nine patients, 5 treated with Cytosorb HP (survival 80%) and 4 serving as controls (no survivors); none of these patients received continuous renal replacement therapy [23]. The other included studies reported a variable number of patients treated with any HP strategy (range 10 to 50 patients). From a biological perspective, most studies found decreasing levels of IL-6. Considering the studies with a control group, Dai et al [26] found that IL-6 levels decreased significantly in the ALS group, while this was not the case in controls.
Interestingly, the authors analyzed two subgroups of 15 patients each who were deemed to be in the early stage of "cytokine storm". In the early subgroup treated with ALS, all patients improved and were discharged without need for intubation; conversely, 40% (n = 6/15) of patients in the control group at the early stage of "cytokine storm" progressed to critical illness and died. The small case-control series was produced by Rampino et al [23], and from a biological perspective the authors showed that IL-6, IL-8 and TNF-α were reduced in the Cytosorb group. However, it is difficult to draw conclusions, not only because of the small number of patients, but also because patients in the Cytosorb group were on average 8 years younger. A further 27 findings were identified as small case series (2-4 patients, n = 6) and case reports (n = 21); their clinical and biological data are reported in Supporting Digital Content 1.
CONCLUSION
Our systematic review identified several studies that evaluate the role of different HP strategies in COVID‐19. However, all these studies were of low methodological quality, and only a few had a control group. Considering the very low level of clinical evidence reported so far, starting HP therapies in COVID‐19 patients does not seem to be supported by hard evidence. Prospective randomized data are needed to establish the role of HP in COVID‐19 patients.
[ "INTRODUCTION", "Registration, search strategy, and criteria", "Study screening and selection", "Analysis of clinical outcomes", "Biological variables", "AUTHOR CONTRIBUTIONS" ]
[ "Since December 2019, the virus identified as SARS‐CoV‐2 has caused the pandemic of coronavirus disease 2019 (COVID‐19), which has spread worldwide in a sequence of following waves. According to data from Johns Hopkins University, as of September 17th, 2021, there have been over 227 million cases worldwide and over 4.6 million deaths.\n1\n\n\nCOVID‐19 ranges from asymptomatic infection to extremely severe cases requiring hospitalization and possibly admission to the intensive care unit (ICU). The most frequent expression of severe COVID‐19 is the occurrence of acute respiratory distress syndrome (ARDS),\n2\n but the SARS‐CoV‐2 has shown the ability to cause cardiovascular\n3\n and, eventually, multi‐organ dysfunction.\n4\n COVID‐19 generates a pro‐inflammatory state with hypothesized cytokine storm, and high levels of interleukin 6 (IL‐6) have been repeatedly observed.\nTherefore, together with the attempt to control viral replication, and to provide supportive therapies (with invasive or non‐invasive mechanical ventilation—IMV or NIV, respectively), and eventually with extracorporeal support,\n5\n the suppression of the pro‐inflammatory state has been a target.\n6\n, \n7\n From pharmacological perspectives, the use of corticosteroids and more recently of IL‐6 receptor antagonists (i.e., tocilizumab) has shown improvement of prognosis for patients experiencing severe COVID‐19.\n8\n, \n9\n A possible non‐pharmacological approach to limit the pro‐inflammatory state induced by severe COVID‐19 is the use of extracorporeal cytokine removal, also known as hemoperfusion (HP). Extracorporeal approaches include plasma exchange, direct HP on a polymyxin B‐immobilized fiber column (PMX‐DHP), continuous hemodiafiltration with Cytosorb adsorber, and several other methods. These strategies have been previously investigated in other critical illnesses, such as septic shock, ARDS, and also for cases of Middle East Respiratory Syndrome due to coronavirus infection; however, there is no evidence of beneficial effects in these settings.\nThe use of HP in patients with severe COVID‐19 is pathophysiologically sounded\n10\n and aims at interrupting the vicious pro‐inflammatory circle and the associated coagulopathy, endothelial damage, and organ failure. There have been increasing reports of the beneficial effects of such treatment among ICU patients with severe COVID‐19, but different methods have been used,\n11\n, \n12\n and several platforms/consoles have been modified to host these HP filters.\n13\n\n\nIn order to summarize the evidence regarding the use of HP strategies for cytokine removal, we systematically reviewed the existing literature that evaluates the application of different HP strategies in patients with COVID‐19. Our aim was to gather information on biochemical and clinical outcomes described by the authors. From the overview of these outcomes it might be possible to acquire information that could be considered for future prospective studies.", "We undertook a systematic Web‐based advanced literature search through the NHS Library Evidence tool on studies reporting the use of HP in patients with COVID‐19. We followed the approach suggested by the PRISMA statement for reporting systematic reviews and meta‐analyses, and a PRISMA checklist is provided separately (Supporting Digital Content 1). The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) with assigned number CRD42021253676. 
Our core search was structured by combining the findings from two groups of terms. The first group included the following: “blood purification” OR “Cytosorb” OR “Cytokine adsor*” OR “Toraymyxin” OR “endotoxin” OR “polymyxin” OR “hemoperfusion”; the second group contained the terms “covid” OR “COVID‐19”.\nAn initial computerized search of PubMed and EMBASE databases was conducted from inception until February 4th, 2021, to identify the relevant articles, and a draft of results was written. The update of these systematic searches was performed on May 11th, 2021. Two further searches were performed manually and independently by four authors (GM, CP, GC, GDM), also exploring the list of references of the findings of the systematic search.\nInclusion criteria were pre‐specified according to the PICOS approach (Table 1). We excluded experimental and animal studies, reviews, editorials, and letters to editor. Case series were included if involving at least 5 patients. We preventively decided to describe as Supporting Information case series with fewer than 5 patients, and case reports. Language restrictions were applied: we read the full manuscript only for articles published in English.\nPICOS criteria\nAbbreviations: COVID‐19, coronavirus disease 19; HP, hemoperfusion.", "Study screening for determining the eligibility for inclusion in the systematic review and data extraction were performed independently by four reviewers (FS, GC, GDM, CP). Discordances were resolved involving two senior authors (AA, MA).\nDespite an expected significant heterogeneity in the techniques of blood purification, as per protocol registration on PROSPERO, we decided to include in our systematic review all the HP strategies adopted to reduce cytokines and pro‐inflammatory molecules in the context of COVID‐19. In particular, we included artificial liver support (ALS, already applied in cases of influenza A, H7N9\n14\n), CytoSorb and HA 440/380/280/230 (which can be used as standalone or placed in series with extracorporeal membrane oxygenation and hemodialysis circuit), Toraymyxin, oXiris, and plasmapheresis. Other “standard” blood purification strategies, such as hemodialysis and continuous renal replacement therapy were not included as our focus was to investigate blood purification strategies aimed at reducing cytokine levels.\nData were inserted in a password protected database on Excel by three authors (GC, GDM, CP) and cross‐checked by other three authors (FS, GM, LLV).", "From a clinical standpoint, we focused on the description of population characteristics for each study, reporting their hemodynamic support (dose and/or percentage of patients receiving vasopressors), the respiratory variables (oxygenation parameters as well as use of IMV or NIV), the use of other extracorporeal techniques and, finally, the admission to the ICU, the length of stay, and mortality. Whenever possible, information is provided with separation of data before and after the HP treatment.", "Regarding the effects from biological perspectives, we reported data on inflammatory markers and on cytokines levels. As for clinical variables, information is provided with separation of data before and after the HP treatment. We also added information on drug therapies with particular reference to the use of steroids, IL‐6 receptor antagonists, and antibiotics.", "Concept/design: Filippo Sanfilippo, Antonio Arcadipane. Data analysis/interpretation: Carla Pulizzi and Filippo Sanfilippo. Drafting article: Gennaro Martucci. 
Critical revision of article: Luigi La Via and Gennaro Martucci. Approval of article: Marinella Astuto. Data collection: Giorgio Dimarco and Giuseppe Cuttone." ]
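Note on the search strategy described in the methods text above: it combines two groups of terms ("blood purification"/"Cytosorb"/… and "covid"/"COVID-19"). A minimal sketch of how the corresponding Boolean query string could be assembled is shown below; it assumes the two groups were OR-ed internally and intersected with AND, which is the usual reading of "combining the findings from two groups of terms" — the exact database syntax and field tags are not given in the source.

```python
# Sketch only: build the two-concept Boolean query described in the methods.
hp_terms = [
    '"blood purification"', '"Cytosorb"', '"Cytokine adsor*"',
    '"Toraymyxin"', '"endotoxin"', '"polymyxin"', '"hemoperfusion"',
]
covid_terms = ['"covid"', '"COVID-19"']

def or_group(terms):
    """Join a list of terms into a parenthesised OR group."""
    return "(" + " OR ".join(terms) + ")"

# Assumption: the two groups are intersected (AND) to form the final query.
query = or_group(hp_terms) + " AND " + or_group(covid_terms)
print(query)
# ("blood purification" OR "Cytosorb" OR ...) AND ("covid" OR "COVID-19")
```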
[ null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Registration, search strategy, and criteria", "Study screening and selection", "Analysis of clinical outcomes", "Biological variables", "RESULTS", "DISCUSSION", "CONCLUSION", "CONFLICT OF INTEREST", "AUTHOR CONTRIBUTIONS", "Supporting information" ]
[ "Since December 2019, the virus identified as SARS‐CoV‐2 has caused the pandemic of coronavirus disease 2019 (COVID‐19), which has spread worldwide in a sequence of following waves. According to data from Johns Hopkins University, as of September 17th, 2021, there have been over 227 million cases worldwide and over 4.6 million deaths.\n1\n\n\nCOVID‐19 ranges from asymptomatic infection to extremely severe cases requiring hospitalization and possibly admission to the intensive care unit (ICU). The most frequent expression of severe COVID‐19 is the occurrence of acute respiratory distress syndrome (ARDS),\n2\n but the SARS‐CoV‐2 has shown the ability to cause cardiovascular\n3\n and, eventually, multi‐organ dysfunction.\n4\n COVID‐19 generates a pro‐inflammatory state with hypothesized cytokine storm, and high levels of interleukin 6 (IL‐6) have been repeatedly observed.\nTherefore, together with the attempt to control viral replication, and to provide supportive therapies (with invasive or non‐invasive mechanical ventilation—IMV or NIV, respectively), and eventually with extracorporeal support,\n5\n the suppression of the pro‐inflammatory state has been a target.\n6\n, \n7\n From pharmacological perspectives, the use of corticosteroids and more recently of IL‐6 receptor antagonists (i.e., tocilizumab) has shown improvement of prognosis for patients experiencing severe COVID‐19.\n8\n, \n9\n A possible non‐pharmacological approach to limit the pro‐inflammatory state induced by severe COVID‐19 is the use of extracorporeal cytokine removal, also known as hemoperfusion (HP). Extracorporeal approaches include plasma exchange, direct HP on a polymyxin B‐immobilized fiber column (PMX‐DHP), continuous hemodiafiltration with Cytosorb adsorber, and several other methods. These strategies have been previously investigated in other critical illnesses, such as septic shock, ARDS, and also for cases of Middle East Respiratory Syndrome due to coronavirus infection; however, there is no evidence of beneficial effects in these settings.\nThe use of HP in patients with severe COVID‐19 is pathophysiologically sounded\n10\n and aims at interrupting the vicious pro‐inflammatory circle and the associated coagulopathy, endothelial damage, and organ failure. There have been increasing reports of the beneficial effects of such treatment among ICU patients with severe COVID‐19, but different methods have been used,\n11\n, \n12\n and several platforms/consoles have been modified to host these HP filters.\n13\n\n\nIn order to summarize the evidence regarding the use of HP strategies for cytokine removal, we systematically reviewed the existing literature that evaluates the application of different HP strategies in patients with COVID‐19. Our aim was to gather information on biochemical and clinical outcomes described by the authors. From the overview of these outcomes it might be possible to acquire information that could be considered for future prospective studies.", "Registration, search strategy, and criteria We undertook a systematic Web‐based advanced literature search through the NHS Library Evidence tool on studies reporting the use of HP in patients with COVID‐19. We followed the approach suggested by the PRISMA statement for reporting systematic reviews and meta‐analyses, and a PRISMA checklist is provided separately (Supporting Digital Content 1). The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) with assigned number CRD42021253676. 
Our core search was structured by combining the findings from two groups of terms. The first group included the following: “blood purification” OR “Cytosorb” OR “Cytokine adsor*” OR “Toraymyxin” OR “endotoxin” OR “polymyxin” OR “hemoperfusion”; the second group contained the terms “covid” OR “COVID‐19”.\nAn initial computerized search of PubMed and EMBASE databases was conducted from inception until February 4th, 2021, to identify the relevant articles, and a draft of results was written. The update of these systematic searches was performed on May 11th, 2021. Two further searches were performed manually and independently by four authors (GM, CP, GC, GDM), also exploring the list of references of the findings of the systematic search.\nInclusion criteria were pre‐specified according to the PICOS approach (Table 1). We excluded experimental and animal studies, reviews, editorials, and letters to editor. Case series were included if involving at least 5 patients. We preventively decided to describe as Supporting Information case series with fewer than 5 patients, and case reports. Language restrictions were applied: we read the full manuscript only for articles published in English.\nPICOS criteria\nAbbreviations: COVID‐19, coronavirus disease 19; HP, hemoperfusion.\nWe undertook a systematic Web‐based advanced literature search through the NHS Library Evidence tool on studies reporting the use of HP in patients with COVID‐19. We followed the approach suggested by the PRISMA statement for reporting systematic reviews and meta‐analyses, and a PRISMA checklist is provided separately (Supporting Digital Content 1). The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) with assigned number CRD42021253676. Our core search was structured by combining the findings from two groups of terms. The first group included the following: “blood purification” OR “Cytosorb” OR “Cytokine adsor*” OR “Toraymyxin” OR “endotoxin” OR “polymyxin” OR “hemoperfusion”; the second group contained the terms “covid” OR “COVID‐19”.\nAn initial computerized search of PubMed and EMBASE databases was conducted from inception until February 4th, 2021, to identify the relevant articles, and a draft of results was written. The update of these systematic searches was performed on May 11th, 2021. Two further searches were performed manually and independently by four authors (GM, CP, GC, GDM), also exploring the list of references of the findings of the systematic search.\nInclusion criteria were pre‐specified according to the PICOS approach (Table 1). We excluded experimental and animal studies, reviews, editorials, and letters to editor. Case series were included if involving at least 5 patients. We preventively decided to describe as Supporting Information case series with fewer than 5 patients, and case reports. Language restrictions were applied: we read the full manuscript only for articles published in English.\nPICOS criteria\nAbbreviations: COVID‐19, coronavirus disease 19; HP, hemoperfusion.\nStudy screening and selection Study screening for determining the eligibility for inclusion in the systematic review and data extraction were performed independently by four reviewers (FS, GC, GDM, CP). 
Discordances were resolved involving two senior authors (AA, MA).\nDespite an expected significant heterogeneity in the techniques of blood purification, as per protocol registration on PROSPERO, we decided to include in our systematic review all the HP strategies adopted to reduce cytokines and pro‐inflammatory molecules in the context of COVID‐19. In particular, we included artificial liver support (ALS, already applied in cases of influenza A, H7N9\n14\n), CytoSorb and HA 440/380/280/230 (which can be used as standalone or placed in series with extracorporeal membrane oxygenation and hemodialysis circuit), Toraymyxin, oXiris, and plasmapheresis. Other “standard” blood purification strategies, such as hemodialysis and continuous renal replacement therapy were not included as our focus was to investigate blood purification strategies aimed at reducing cytokine levels.\nData were inserted in a password protected database on Excel by three authors (GC, GDM, CP) and cross‐checked by other three authors (FS, GM, LLV).\nStudy screening for determining the eligibility for inclusion in the systematic review and data extraction were performed independently by four reviewers (FS, GC, GDM, CP). Discordances were resolved involving two senior authors (AA, MA).\nDespite an expected significant heterogeneity in the techniques of blood purification, as per protocol registration on PROSPERO, we decided to include in our systematic review all the HP strategies adopted to reduce cytokines and pro‐inflammatory molecules in the context of COVID‐19. In particular, we included artificial liver support (ALS, already applied in cases of influenza A, H7N9\n14\n), CytoSorb and HA 440/380/280/230 (which can be used as standalone or placed in series with extracorporeal membrane oxygenation and hemodialysis circuit), Toraymyxin, oXiris, and plasmapheresis. Other “standard” blood purification strategies, such as hemodialysis and continuous renal replacement therapy were not included as our focus was to investigate blood purification strategies aimed at reducing cytokine levels.\nData were inserted in a password protected database on Excel by three authors (GC, GDM, CP) and cross‐checked by other three authors (FS, GM, LLV).\nAnalysis of clinical outcomes From a clinical standpoint, we focused on the description of population characteristics for each study, reporting their hemodynamic support (dose and/or percentage of patients receiving vasopressors), the respiratory variables (oxygenation parameters as well as use of IMV or NIV), the use of other extracorporeal techniques and, finally, the admission to the ICU, the length of stay, and mortality. Whenever possible, information is provided with separation of data before and after the HP treatment.\nFrom a clinical standpoint, we focused on the description of population characteristics for each study, reporting their hemodynamic support (dose and/or percentage of patients receiving vasopressors), the respiratory variables (oxygenation parameters as well as use of IMV or NIV), the use of other extracorporeal techniques and, finally, the admission to the ICU, the length of stay, and mortality. Whenever possible, information is provided with separation of data before and after the HP treatment.\nBiological variables Regarding the effects from biological perspectives, we reported data on inflammatory markers and on cytokines levels. As for clinical variables, information is provided with separation of data before and after the HP treatment. 
We also added information on drug therapies with particular reference to the use of steroids, IL‐6 receptor antagonists, and antibiotics.\nRegarding the effects from biological perspectives, we reported data on inflammatory markers and on cytokines levels. As for clinical variables, information is provided with separation of data before and after the HP treatment. We also added information on drug therapies with particular reference to the use of steroids, IL‐6 receptor antagonists, and antibiotics.", "We undertook a systematic Web‐based advanced literature search through the NHS Library Evidence tool on studies reporting the use of HP in patients with COVID‐19. We followed the approach suggested by the PRISMA statement for reporting systematic reviews and meta‐analyses, and a PRISMA checklist is provided separately (Supporting Digital Content 1). The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) with assigned number CRD42021253676. Our core search was structured by combining the findings from two groups of terms. The first group included the following: “blood purification” OR “Cytosorb” OR “Cytokine adsor*” OR “Toraymyxin” OR “endotoxin” OR “polymyxin” OR “hemoperfusion”; the second group contained the terms “covid” OR “COVID‐19”.\nAn initial computerized search of PubMed and EMBASE databases was conducted from inception until February 4th, 2021, to identify the relevant articles, and a draft of results was written. The update of these systematic searches was performed on May 11th, 2021. Two further searches were performed manually and independently by four authors (GM, CP, GC, GDM), also exploring the list of references of the findings of the systematic search.\nInclusion criteria were pre‐specified according to the PICOS approach (Table 1). We excluded experimental and animal studies, reviews, editorials, and letters to editor. Case series were included if involving at least 5 patients. We preventively decided to describe as Supporting Information case series with fewer than 5 patients, and case reports. Language restrictions were applied: we read the full manuscript only for articles published in English.\nPICOS criteria\nAbbreviations: COVID‐19, coronavirus disease 19; HP, hemoperfusion.", "Study screening for determining the eligibility for inclusion in the systematic review and data extraction were performed independently by four reviewers (FS, GC, GDM, CP). Discordances were resolved involving two senior authors (AA, MA).\nDespite an expected significant heterogeneity in the techniques of blood purification, as per protocol registration on PROSPERO, we decided to include in our systematic review all the HP strategies adopted to reduce cytokines and pro‐inflammatory molecules in the context of COVID‐19. In particular, we included artificial liver support (ALS, already applied in cases of influenza A, H7N9\n14\n), CytoSorb and HA 440/380/280/230 (which can be used as standalone or placed in series with extracorporeal membrane oxygenation and hemodialysis circuit), Toraymyxin, oXiris, and plasmapheresis. 
Other “standard” blood purification strategies, such as hemodialysis and continuous renal replacement therapy were not included as our focus was to investigate blood purification strategies aimed at reducing cytokine levels.\nData were inserted in a password protected database on Excel by three authors (GC, GDM, CP) and cross‐checked by other three authors (FS, GM, LLV).", "From a clinical standpoint, we focused on the description of population characteristics for each study, reporting their hemodynamic support (dose and/or percentage of patients receiving vasopressors), the respiratory variables (oxygenation parameters as well as use of IMV or NIV), the use of other extracorporeal techniques and, finally, the admission to the ICU, the length of stay, and mortality. Whenever possible, information is provided with separation of data before and after the HP treatment.", "Regarding the effects from biological perspectives, we reported data on inflammatory markers and on cytokines levels. As for clinical variables, information is provided with separation of data before and after the HP treatment. We also added information on drug therapies with particular reference to the use of steroids, IL‐6 receptor antagonists, and antibiotics.", "Our systematic search identified 292 total findings between Pubmed and EMBASE. No further findings were retrieved manually. As shown in the PRISMA flow diagram (Figure 1), after the evaluation of all abstracts, 11 full‐text articles\n15\n, \n16\n, \n17\n, \n18\n, \n19\n, \n20\n, \n21\n, \n22\n, \n23\n, \n24\n, \n25\n were included as matching the PICOS criteria, and their characteristics from clinical and biological perspectives are reported in Tables 2 and 3, respectively.\nPRISMA flow diagram\nClinical data of the studies included in the systematic review\nAbbreviations: ALS, artificial‐liver blood‐purification system; ECMO, extracorporeal membrane oxygenation; HA, hemoadsorption; HP, hemoperfusion; ICU, intensive care unit; M/F, male/female; MV, mechanical ventilation; NE, norepinephrine; NIV, non‐invasive ventilation; RRT, renal replacement therapy; VDI, vasopressor dependency index; VIS, Vasoactive Inotropic Score.\nBiochemical data of the studies included in the systematic review\nAbbreviations: ALS, artificial‐liver blood‐purification system; CRP, reactive protein; HA, hemoadsorption; HCQ, hydroxychloroquine; HP, hemoperfusion; IL, interleukin; RRT, renal replacement therapy; SOFA, Sequential Organ Failure Assessment; TNF, tumor necrosis factor.\nAs shown in Tables 2 and 3, we found that several HP techniques have been implemented, and that in most cases authors have reported large case series, while we found only three studies with a control group and no prospective randomized study; therefore, any chance to perform a meta‐analysis was deemed not feasible.\nIn total, 226 patients were identified by our systematic review, while another 59 patients functioned as controls in some of the included studies. As shown in Tables 2 and 3, different HP strategies were implemented. Of the patients receiving HP therapy, the most frequently used approaches were ALS (n = 62), CytoSorb (n = 55) and oXiris (n = 52), followed by Toraymyxin (n = 24), HA 380/280/230 (n = 18) and plasmapheresis (n = 15).\nWith regard to other non‐pharmacological extracorporeal therapies implemented in this population of patients treated with HP, we found that continuous renal replacement therapy was used in 128 patients (57%), while extracorporeal membrane oxygenation in only 9 patients (4%). 
The mortality in the included studies varied from 16%\n17\n to 58%.\n18\n\n\nThe largest study with a control group included 101 patients, 50 of whom were treated with HP strategy with ALS, the others functioning as controls. None of the included patients needed continuous renal replacement therapy.\n19\n The second one was a three‐arm study, with patients receiving HP, continuous renal replacement therapy, or both; in this study the use of HP (±RRT; n = 8) was effective in reducing the norepinephrine infusion compared to the group of patients receiving only RRT (n = 4). Moreover, mortality was halved in those receiving HP versus continuous renal replacement therapy only.\n20\n The last study was a case‐control study of nine patients, 5 treated with Cytosorb HP (survival 80%), and 4 serving as controls (no survivors); none of these patients received continuous renal replacement therapy.\n23\n The other included studies reported a variable number of patients treated with any HP strategy (range 10 to 50 patients).\nFrom biological perspectives, most studies found decreasing levels of IL‐6. Considering the studies with a control group, Dai et al\n26\n found that IL‐6 levels decreased significantly in the ALS groups, while this was not the case in controls. Interestingly, the authors analyzed two subgroups of 15 patients each who were deemed in the early stage of “cytokine storm”. In the early sub‐group treated with ALS, all patients improved and were discharged without need for intubation; conversely, 40% (n = 6/15) of patients in the control group at the early stage of “cytokine storm” progressed to critical illness, and died. The small case‐control series was produced by Rampino et al,\n23\n and from biological perspectives the authors showed that IL‐6, and IL‐8 TNF‐a were reduced in the Cytosorb group. However, it is difficult to draw conclusions not only for the small number of patients, but also because patients in the Cytosorb group were on average 8 years younger.\nFurther 27 findings were identified as small case series (2–4 patients, n = 6) and case reports (n = 21), and are reported in Supporting Digital Content 1 for their clinical and biological data.", "The potential usefulness of cytokine adsorption therapies for patients with COVID‐19 has been hypothesized because of the dysregulated systemic immune over‐activation.\n27\n Initially, scientists suspected that Covid cytokine storm in patients with COVID‐19 was stronger than in other conditions determining critical illness, though this hypothesis has not been subsequently confirmed\n28\n; moreover, studies on sub‐phenotypes have shown that the systemic concentrations of inflammatory cytokines typical of the “cytokine storms” are lower in COVID‐19 than in patients with other causes of ARDS.\n29\n Nonetheless, it should be noted that the only drugs decreasing mortality in COVID‐19 patients act with properties of immune‐modulation (corticosteroids and IL‐6 receptor antagonists).\n8\n, \n30\n At the beginning of the pandemic a comprehensive review highlighted the potential importance of HP techniques and the lack of evidence in support of this approach. One year later, the scenario has not improved, but these therapeutic options are used in daily practice.\n31\n The aim of our systematic review was to pool data on HP therapies in patients with COVID‐19, trying to gather information on their clinical and immune‐modulator effects. 
The evaluation of a complex therapy (HP) in a disease with a very variable clinical presentation (COVID‐19) certainly warrants strict control for confounding factors. However, our systematic review identified only studies with low methodological quality (no randomized study was found); we summarized the most adopted biological outcomes tested in the current literature with the hope this could be useful for future application (i.e., design of randomized studies).\nIn general, we found large heterogeneity among the studies in clinical and biological outcomes evaluated. From clinical perspectives, information on cardiovascular pharmacological support was not uniformly reported, while more data were provided on oxygenation and on use of IMV or NIV. For instance, Villa et al\n25\n and Guo et al\n19\n reported an improvement of the PaO2/FiO2 ratio, but the lack of a control group hampers any discussion. Regarding the biological outcomes, surely the most reported one was the IL‐6 concentration.\n32\n Most studies reported decreasing levels of IL‐6 after HP treatment, but it should be noted that the values reported were very different, possibly as a result of variable laboratory methods and techniques, as well as timing in the course of the disease. Moreover, when evaluating a decrease in cytokine concentration, one should note that removal has to be contextualized to the initial cytokine concentration. Indeed, cytokine removal is concentration‐dependent, and the baseline values influences the performance of the HP method.\nThe cut‐off to define high level of circulating cytokines is not well‐defined, but it is reasonable that the higher the level, the greater the impact (hopefully positive) of the HP therapy.\n33\n Moreover, the HP methods not only eliminate the cytokines responsible for the hyper‐inflammatory state, but HP will likely remove anti‐inflammatory mediators and many other biological substances as well (up to 55 kDa). The latter are probably not innocent bystanders, but may be crucial for the recovery of the patient. Therefore, focusing only on one or few cytokines may be a misleading and myopic approach. Future studies should also consider the removal of anti‐inflammatory and other biologically relevant molecules eliminated by such “filters.” To add more complexity, knowledge of pharmacokinetics during HP is still on the way, and a potential issue could be the reduction in plasmatic concentration of circulating drugs like corticosteroids and other immune‐modulators, as well as a decrease in the concentration of antibiotics.\n34\n Moreover, COVID‐19 has shown a tendency to cause coagulation disorders with a pro‐thrombotic state and an associated risk of pulmonary embolism. Whether HP therapies influence the pharmacokinetics of drugs regulating the coagulation cascade in patients with COVID‐19 remains to be determined. This should certainly be considered in the context of a higher risk of pulmonary embolism in these patients.\n35\n, \n36\n\n\nAll these open questions and the absence of good‐quality data make the evaluation of HP usefulness in COVID‐19 patients challenging. For all these reasons, the Extracorporeal Life Support Organization's COVID‐19 guidelines do not currently recommend extracorporeal cytokine HP outside the context of clinical trials.\n37\n It is worth mentioning that after our screening process one important study was published. 
This small single‐center randomized study enrolled 34 COVID‐19 patients on extracorporeal membrane oxygenation and randomized them to HP with Cytosorb. Interestingly, the concentration of IL‐6 decreased in both groups (HP and controls) to a similar extent, and 30‐day survival was significantly lower in the group receiving HP (18% vs. 76% in those not receiving HP, p = 0.002).\n22\n These results need external validation by ongoing trials, and it should be noted that patients receiving extracorporeal membrane oxygenation are at very high risk of death, and that HP may not be beneficial in these cases where a very advanced stage of organ dysfunction has already taken place. Therefore, the latter results do not exclude a beneficial effect of HP strategies in severe COVID‐19 patients not requiring extracorporeal membrane oxygenation.\nIt is also worth noting the recent results of another study (not on COVID‐19 patients) in which the authors studied patients with severe refractory septic shock undergoing cytokine removal with CytoSorb, with ongoing continuous renal replacement therapy, and matched them with a historical cohort. The authors found that IL‐6 levels and vasopressor requirements were not reduced in the treatment group. Importantly, patients treated with HP had an increased risk of death. The authors concluded that their results were consonant with recent evidence that suggests avoidance of indiscriminate use of cytokine adsorption outside of investigational trials.\n38\n\n", "Our systematic review identified several studies that evaluate the role of different HP strategies in COVID‐19. However, all these studies were of low methodological quality, and only a few had a control group. Considering the very low level of clinical evidence reported so far, starting HP therapies in COVID‐19 patients does not seem to be supported by hard evidence. Prospective randomized data are needed to establish the role of HP in COVID‐19 patients.", "The authors declare no conflict of interest.", "Concept/design: Filippo Sanfilippo, Antonio Arcadipane. Data analysis/interpretation: Carla Pulizzi and Filippo Sanfilippo. Drafting article: Gennaro Martucci. Critical revision of article: Luigi La Via and Gennaro Martucci. Approval of article: Marinella Astuto. Data collection: Giorgio Dimarco and Giuseppe Cuttone.", "Supplementary Material\nClick here for additional data file." ]
[ null, "methods", null, null, null, null, "results", "discussion", "conclusions", "COI-statement", null, "supplementary-material" ]
[ "artificial liver", "blood purification", "Cytosorb", "hemoadsorption", "inflammation", "interleukin‐6", "mortality", "oXiris", "Toraymyxin" ]
INTRODUCTION: Since December 2019, the virus identified as SARS‐CoV‐2 has caused the pandemic of coronavirus disease 2019 (COVID‐19), which has spread worldwide in a sequence of following waves. According to data from Johns Hopkins University, as of September 17th, 2021, there have been over 227 million cases worldwide and over 4.6 million deaths. 1 COVID‐19 ranges from asymptomatic infection to extremely severe cases requiring hospitalization and possibly admission to the intensive care unit (ICU). The most frequent expression of severe COVID‐19 is the occurrence of acute respiratory distress syndrome (ARDS), 2 but the SARS‐CoV‐2 has shown the ability to cause cardiovascular 3 and, eventually, multi‐organ dysfunction. 4 COVID‐19 generates a pro‐inflammatory state with hypothesized cytokine storm, and high levels of interleukin 6 (IL‐6) have been repeatedly observed. Therefore, together with the attempt to control viral replication, and to provide supportive therapies (with invasive or non‐invasive mechanical ventilation—IMV or NIV, respectively), and eventually with extracorporeal support, 5 the suppression of the pro‐inflammatory state has been a target. 6 , 7 From pharmacological perspectives, the use of corticosteroids and more recently of IL‐6 receptor antagonists (i.e., tocilizumab) has shown improvement of prognosis for patients experiencing severe COVID‐19. 8 , 9 A possible non‐pharmacological approach to limit the pro‐inflammatory state induced by severe COVID‐19 is the use of extracorporeal cytokine removal, also known as hemoperfusion (HP). Extracorporeal approaches include plasma exchange, direct HP on a polymyxin B‐immobilized fiber column (PMX‐DHP), continuous hemodiafiltration with Cytosorb adsorber, and several other methods. These strategies have been previously investigated in other critical illnesses, such as septic shock, ARDS, and also for cases of Middle East Respiratory Syndrome due to coronavirus infection; however, there is no evidence of beneficial effects in these settings. The use of HP in patients with severe COVID‐19 is pathophysiologically sounded 10 and aims at interrupting the vicious pro‐inflammatory circle and the associated coagulopathy, endothelial damage, and organ failure. There have been increasing reports of the beneficial effects of such treatment among ICU patients with severe COVID‐19, but different methods have been used, 11 , 12 and several platforms/consoles have been modified to host these HP filters. 13 In order to summarize the evidence regarding the use of HP strategies for cytokine removal, we systematically reviewed the existing literature that evaluates the application of different HP strategies in patients with COVID‐19. Our aim was to gather information on biochemical and clinical outcomes described by the authors. From the overview of these outcomes it might be possible to acquire information that could be considered for future prospective studies. METHODS: Registration, search strategy, and criteria We undertook a systematic Web‐based advanced literature search through the NHS Library Evidence tool on studies reporting the use of HP in patients with COVID‐19. We followed the approach suggested by the PRISMA statement for reporting systematic reviews and meta‐analyses, and a PRISMA checklist is provided separately (Supporting Digital Content 1). The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) with assigned number CRD42021253676. 
Our core search was structured by combining the findings from two groups of terms. The first group included the following: “blood purification” OR “Cytosorb” OR “Cytokine adsor*” OR “Toraymyxin” OR “endotoxin” OR “polymyxin” OR “hemoperfusion”; the second group contained the terms “covid” OR “COVID‐19”. An initial computerized search of PubMed and EMBASE databases was conducted from inception until February 4th, 2021, to identify the relevant articles, and a draft of results was written. The update of these systematic searches was performed on May 11th, 2021. Two further searches were performed manually and independently by four authors (GM, CP, GC, GDM), also exploring the list of references of the findings of the systematic search. Inclusion criteria were pre‐specified according to the PICOS approach (Table 1). We excluded experimental and animal studies, reviews, editorials, and letters to editor. Case series were included if involving at least 5 patients. We preventively decided to describe as Supporting Information case series with fewer than 5 patients, and case reports. Language restrictions were applied: we read the full manuscript only for articles published in English. PICOS criteria Abbreviations: COVID‐19, coronavirus disease 19; HP, hemoperfusion. We undertook a systematic Web‐based advanced literature search through the NHS Library Evidence tool on studies reporting the use of HP in patients with COVID‐19. We followed the approach suggested by the PRISMA statement for reporting systematic reviews and meta‐analyses, and a PRISMA checklist is provided separately (Supporting Digital Content 1). The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) with assigned number CRD42021253676. Our core search was structured by combining the findings from two groups of terms. The first group included the following: “blood purification” OR “Cytosorb” OR “Cytokine adsor*” OR “Toraymyxin” OR “endotoxin” OR “polymyxin” OR “hemoperfusion”; the second group contained the terms “covid” OR “COVID‐19”. An initial computerized search of PubMed and EMBASE databases was conducted from inception until February 4th, 2021, to identify the relevant articles, and a draft of results was written. The update of these systematic searches was performed on May 11th, 2021. Two further searches were performed manually and independently by four authors (GM, CP, GC, GDM), also exploring the list of references of the findings of the systematic search. Inclusion criteria were pre‐specified according to the PICOS approach (Table 1). We excluded experimental and animal studies, reviews, editorials, and letters to editor. Case series were included if involving at least 5 patients. We preventively decided to describe as Supporting Information case series with fewer than 5 patients, and case reports. Language restrictions were applied: we read the full manuscript only for articles published in English. PICOS criteria Abbreviations: COVID‐19, coronavirus disease 19; HP, hemoperfusion. Study screening and selection Study screening for determining the eligibility for inclusion in the systematic review and data extraction were performed independently by four reviewers (FS, GC, GDM, CP). Discordances were resolved involving two senior authors (AA, MA). 
Despite an expected significant heterogeneity in the techniques of blood purification, as per protocol registration on PROSPERO, we decided to include in our systematic review all the HP strategies adopted to reduce cytokines and pro‐inflammatory molecules in the context of COVID‐19. In particular, we included artificial liver support (ALS, already applied in cases of influenza A, H7N9 14 ), CytoSorb and HA 440/380/280/230 (which can be used as standalone or placed in series with extracorporeal membrane oxygenation and hemodialysis circuit), Toraymyxin, oXiris, and plasmapheresis. Other “standard” blood purification strategies, such as hemodialysis and continuous renal replacement therapy were not included as our focus was to investigate blood purification strategies aimed at reducing cytokine levels. Data were inserted in a password protected database on Excel by three authors (GC, GDM, CP) and cross‐checked by other three authors (FS, GM, LLV). Study screening for determining the eligibility for inclusion in the systematic review and data extraction were performed independently by four reviewers (FS, GC, GDM, CP). Discordances were resolved involving two senior authors (AA, MA). Despite an expected significant heterogeneity in the techniques of blood purification, as per protocol registration on PROSPERO, we decided to include in our systematic review all the HP strategies adopted to reduce cytokines and pro‐inflammatory molecules in the context of COVID‐19. In particular, we included artificial liver support (ALS, already applied in cases of influenza A, H7N9 14 ), CytoSorb and HA 440/380/280/230 (which can be used as standalone or placed in series with extracorporeal membrane oxygenation and hemodialysis circuit), Toraymyxin, oXiris, and plasmapheresis. Other “standard” blood purification strategies, such as hemodialysis and continuous renal replacement therapy were not included as our focus was to investigate blood purification strategies aimed at reducing cytokine levels. Data were inserted in a password protected database on Excel by three authors (GC, GDM, CP) and cross‐checked by other three authors (FS, GM, LLV). Analysis of clinical outcomes From a clinical standpoint, we focused on the description of population characteristics for each study, reporting their hemodynamic support (dose and/or percentage of patients receiving vasopressors), the respiratory variables (oxygenation parameters as well as use of IMV or NIV), the use of other extracorporeal techniques and, finally, the admission to the ICU, the length of stay, and mortality. Whenever possible, information is provided with separation of data before and after the HP treatment. From a clinical standpoint, we focused on the description of population characteristics for each study, reporting their hemodynamic support (dose and/or percentage of patients receiving vasopressors), the respiratory variables (oxygenation parameters as well as use of IMV or NIV), the use of other extracorporeal techniques and, finally, the admission to the ICU, the length of stay, and mortality. Whenever possible, information is provided with separation of data before and after the HP treatment. Biological variables Regarding the effects from biological perspectives, we reported data on inflammatory markers and on cytokines levels. As for clinical variables, information is provided with separation of data before and after the HP treatment. 
We also added information on drug therapies with particular reference to the use of steroids, IL‐6 receptor antagonists, and antibiotics. Regarding the effects from biological perspectives, we reported data on inflammatory markers and on cytokines levels. As for clinical variables, information is provided with separation of data before and after the HP treatment. We also added information on drug therapies with particular reference to the use of steroids, IL‐6 receptor antagonists, and antibiotics. Registration, search strategy, and criteria: We undertook a systematic Web‐based advanced literature search through the NHS Library Evidence tool on studies reporting the use of HP in patients with COVID‐19. We followed the approach suggested by the PRISMA statement for reporting systematic reviews and meta‐analyses, and a PRISMA checklist is provided separately (Supporting Digital Content 1). The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) with assigned number CRD42021253676. Our core search was structured by combining the findings from two groups of terms. The first group included the following: “blood purification” OR “Cytosorb” OR “Cytokine adsor*” OR “Toraymyxin” OR “endotoxin” OR “polymyxin” OR “hemoperfusion”; the second group contained the terms “covid” OR “COVID‐19”. An initial computerized search of PubMed and EMBASE databases was conducted from inception until February 4th, 2021, to identify the relevant articles, and a draft of results was written. The update of these systematic searches was performed on May 11th, 2021. Two further searches were performed manually and independently by four authors (GM, CP, GC, GDM), also exploring the list of references of the findings of the systematic search. Inclusion criteria were pre‐specified according to the PICOS approach (Table 1). We excluded experimental and animal studies, reviews, editorials, and letters to editor. Case series were included if involving at least 5 patients. We preventively decided to describe as Supporting Information case series with fewer than 5 patients, and case reports. Language restrictions were applied: we read the full manuscript only for articles published in English. PICOS criteria Abbreviations: COVID‐19, coronavirus disease 19; HP, hemoperfusion. Study screening and selection: Study screening for determining the eligibility for inclusion in the systematic review and data extraction were performed independently by four reviewers (FS, GC, GDM, CP). Discordances were resolved involving two senior authors (AA, MA). Despite an expected significant heterogeneity in the techniques of blood purification, as per protocol registration on PROSPERO, we decided to include in our systematic review all the HP strategies adopted to reduce cytokines and pro‐inflammatory molecules in the context of COVID‐19. In particular, we included artificial liver support (ALS, already applied in cases of influenza A, H7N9 14 ), CytoSorb and HA 440/380/280/230 (which can be used as standalone or placed in series with extracorporeal membrane oxygenation and hemodialysis circuit), Toraymyxin, oXiris, and plasmapheresis. Other “standard” blood purification strategies, such as hemodialysis and continuous renal replacement therapy were not included as our focus was to investigate blood purification strategies aimed at reducing cytokine levels. Data were inserted in a password protected database on Excel by three authors (GC, GDM, CP) and cross‐checked by other three authors (FS, GM, LLV). 
Analysis of clinical outcomes: From a clinical standpoint, we focused on the description of population characteristics for each study, reporting their hemodynamic support (dose and/or percentage of patients receiving vasopressors), the respiratory variables (oxygenation parameters as well as use of IMV or NIV), the use of other extracorporeal techniques and, finally, the admission to the ICU, the length of stay, and mortality. Whenever possible, information is provided with separation of data before and after the HP treatment. Biological variables: Regarding the effects from biological perspectives, we reported data on inflammatory markers and on cytokines levels. As for clinical variables, information is provided with separation of data before and after the HP treatment. We also added information on drug therapies with particular reference to the use of steroids, IL‐6 receptor antagonists, and antibiotics. RESULTS: Our systematic search identified 292 total findings between Pubmed and EMBASE. No further findings were retrieved manually. As shown in the PRISMA flow diagram (Figure 1), after the evaluation of all abstracts, 11 full‐text articles 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 were included as matching the PICOS criteria, and their characteristics from clinical and biological perspectives are reported in Tables 2 and 3, respectively. PRISMA flow diagram Clinical data of the studies included in the systematic review Abbreviations: ALS, artificial‐liver blood‐purification system; ECMO, extracorporeal membrane oxygenation; HA, hemoadsorption; HP, hemoperfusion; ICU, intensive care unit; M/F, male/female; MV, mechanical ventilation; NE, norepinephrine; NIV, non‐invasive ventilation; RRT, renal replacement therapy; VDI, vasopressor dependency index; VIS, Vasoactive Inotropic Score. Biochemical data of the studies included in the systematic review Abbreviations: ALS, artificial‐liver blood‐purification system; CRP, reactive protein; HA, hemoadsorption; HCQ, hydroxychloroquine; HP, hemoperfusion; IL, interleukin; RRT, renal replacement therapy; SOFA, Sequential Organ Failure Assessment; TNF, tumor necrosis factor. As shown in Tables 2 and 3, we found that several HP techniques have been implemented, and that in most cases authors have reported large case series, while we found only three studies with a control group and no prospective randomized study; therefore, any chance to perform a meta‐analysis was deemed not feasible. In total, 226 patients were identified by our systematic review, while another 59 patients functioned as controls in some of the included studies. As shown in Tables 2 and 3, different HP strategies were implemented. Of the patients receiving HP therapy, the most frequently used approaches were ALS (n = 62), CytoSorb (n = 55) and oXiris (n = 52), followed by Toraymyxin (n = 24), HA 380/280/230 (n = 18) and plasmapheresis (n = 15). With regard to other non‐pharmacological extracorporeal therapies implemented in this population of patients treated with HP, we found that continuous renal replacement therapy was used in 128 patients (57%), while extracorporeal membrane oxygenation in only 9 patients (4%). The mortality in the included studies varied from 16% 17 to 58%. 18 The largest study with a control group included 101 patients, 50 of whom were treated with HP strategy with ALS, the others functioning as controls. None of the included patients needed continuous renal replacement therapy. 
19 The second one was a three‐arm study, with patients receiving HP, continuous renal replacement therapy, or both; in this study the use of HP (±RRT; n = 8) was effective in reducing the norepinephrine infusion compared to the group of patients receiving only RRT (n = 4). Moreover, mortality was halved in those receiving HP versus continuous renal replacement therapy only. 20 The last study was a case‐control study of nine patients, 5 treated with Cytosorb HP (survival 80%), and 4 serving as controls (no survivors); none of these patients received continuous renal replacement therapy. 23 The other included studies reported a variable number of patients treated with any HP strategy (range 10 to 50 patients). From biological perspectives, most studies found decreasing levels of IL‐6. Considering the studies with a control group, Dai et al 26 found that IL‐6 levels decreased significantly in the ALS groups, while this was not the case in controls. Interestingly, the authors analyzed two subgroups of 15 patients each who were deemed in the early stage of “cytokine storm”. In the early sub‐group treated with ALS, all patients improved and were discharged without need for intubation; conversely, 40% (n = 6/15) of patients in the control group at the early stage of “cytokine storm” progressed to critical illness, and died. The small case‐control series was produced by Rampino et al, 23 and from biological perspectives the authors showed that IL‐6, and IL‐8 TNF‐a were reduced in the Cytosorb group. However, it is difficult to draw conclusions not only for the small number of patients, but also because patients in the Cytosorb group were on average 8 years younger. Further 27 findings were identified as small case series (2–4 patients, n = 6) and case reports (n = 21), and are reported in Supporting Digital Content 1 for their clinical and biological data. DISCUSSION: The potential usefulness of cytokine adsorption therapies for patients with COVID‐19 has been hypothesized because of the dysregulated systemic immune over‐activation. 27 Initially, scientists suspected that Covid cytokine storm in patients with COVID‐19 was stronger than in other conditions determining critical illness, though this hypothesis has not been subsequently confirmed 28 ; moreover, studies on sub‐phenotypes have shown that the systemic concentrations of inflammatory cytokines typical of the “cytokine storms” are lower in COVID‐19 than in patients with other causes of ARDS. 29 Nonetheless, it should be noted that the only drugs decreasing mortality in COVID‐19 patients act with properties of immune‐modulation (corticosteroids and IL‐6 receptor antagonists). 8 , 30 At the beginning of the pandemic a comprehensive review highlighted the potential importance of HP techniques and the lack of evidence in support of this approach. One year later, the scenario has not improved, but these therapeutic options are used in daily practice. 31 The aim of our systematic review was to pool data on HP therapies in patients with COVID‐19, trying to gather information on their clinical and immune‐modulator effects. The evaluation of a complex therapy (HP) in a disease with a very variable clinical presentation (COVID‐19) certainly warrants strict control for confounding factors. 
However, our systematic review identified only studies with low methodological quality (no randomized study was found); we summarized the most adopted biological outcomes tested in the current literature with the hope this could be useful for future application (i.e., design of randomized studies). In general, we found large heterogeneity among the studies in clinical and biological outcomes evaluated. From clinical perspectives, information on cardiovascular pharmacological support was not uniformly reported, while more data were provided on oxygenation and on use of IMV or NIV. For instance, Villa et al 25 and Guo et al 19 reported an improvement of the PaO2/FiO2 ratio, but the lack of a control group hampers any discussion. Regarding the biological outcomes, surely the most reported one was the IL‐6 concentration. 32 Most studies reported decreasing levels of IL‐6 after HP treatment, but it should be noted that the values reported were very different, possibly as a result of variable laboratory methods and techniques, as well as timing in the course of the disease. Moreover, when evaluating a decrease in cytokine concentration, one should note that removal has to be contextualized to the initial cytokine concentration. Indeed, cytokine removal is concentration‐dependent, and the baseline values influences the performance of the HP method. The cut‐off to define high level of circulating cytokines is not well‐defined, but it is reasonable that the higher the level, the greater the impact (hopefully positive) of the HP therapy. 33 Moreover, the HP methods not only eliminate the cytokines responsible for the hyper‐inflammatory state, but HP will likely remove anti‐inflammatory mediators and many other biological substances as well (up to 55 kDa). The latter are probably not innocent bystanders, but may be crucial for the recovery of the patient. Therefore, focusing only on one or few cytokines may be a misleading and myopic approach. Future studies should also consider the removal of anti‐inflammatory and other biologically relevant molecules eliminated by such “filters.” To add more complexity, knowledge of pharmacokinetics during HP is still on the way, and a potential issue could be the reduction in plasmatic concentration of circulating drugs like corticosteroids and other immune‐modulators, as well as a decrease in the concentration of antibiotics. 34 Moreover, COVID‐19 has shown a tendency to cause coagulation disorders with a pro‐thrombotic state and an associated risk of pulmonary embolism. Whether HP therapies influence the pharmacokinetics of drugs regulating the coagulation cascade in patients with COVID‐19 remains to be determined. This should certainly be considered in the context of a higher risk of pulmonary embolism in these patients. 35 , 36 All these open questions and the absence of good‐quality data make the evaluation of HP usefulness in COVID‐19 patients challenging. For all these reasons, the Extracorporeal Life Support Organization's COVID‐19 guidelines do not currently recommend extracorporeal cytokine HP outside the context of clinical trials. 37 It is worth mentioning that after our screening process one important study was published. This small single‐center randomized study enrolled 34 COVID‐19 patients on extracorporeal membrane oxygenation and randomized them to HP with Cytosorb. 
Interestingly, the concentration of IL‐6 decreased in both groups (HP and controls) to a similar extent, and 30‐day survival was significantly lower in the group receiving HP (18% vs. 76% in those not receiving HP, p = 0.002). 22 These results need external validation by ongoing trials, and it should be noted that patients receiving extracorporeal membrane oxygenation are at very high risk of death, and that HP may not be beneficial in these cases where a very advanced stage of organ dysfunction has already taken place. Therefore, the latter results do not exclude a beneficial effect of HP strategies in severe COVID‐19 patients not requiring extracorporeal membrane oxygenation. It is also worth noting the recent results of another study (not on COVID‐19 patients) in which the authors studied patients with severe refractory septic shock undergoing cytokine removal with CytoSorb, with ongoing continuous renal replacement therapy, and matched them with a historical cohort. The authors found that IL‐6 levels and vasopressor requirements were not reduced in the treatment group. Importantly, patients treated with HP had an increased risk of death. The authors concluded that their results were consonant with recent evidence that suggests avoidance of indiscriminate use of cytokine adsorption outside of investigational trials. 38 CONCLUSION: Our systematic review identified several studies that evaluate the role of different HP strategies in COVID‐19. However, all these studies were of low methodological quality, and only a few had a control group. Considering the very low level of clinical evidence reported so far, starting HP therapies in COVID‐19 patients does not seem to be supported by hard evidence. Prospective randomized data are needed to establish the role of HP in COVID‐19 patients. CONFLICT OF INTEREST: The authors declare no conflict of interest. AUTHOR CONTRIBUTIONS: Concept/design: Filippo Sanfilippo, Antonio Arcadipane. Data analysis/interpretation: Carla Pulizzi and Filippo Sanfilippo. Drafting article: Gennaro Martucci. Critical revision of article: Luigi La Via and Gennaro Martucci. Approval of article: Marinella Astuto. Data collection: Giorgio Dimarco and Giuseppe Cuttone. Supporting information: Supplementary Material Click here for additional data file.
Background: Coronavirus disease-19 (COVID-19) ranges from asymptomatic infection to severe cases requiring admission to the intensive care unit. Together with supportive therapies (ventilation in particular), the suppression of the pro-inflammatory state has been a hypothesized target. Pharmacological therapies with corticosteroids and interleukin-6 (IL-6) receptor antagonists have reduced mortality. The use of extracorporeal cytokine removal, also known as hemoperfusion (HP), could be a promising non-pharmacological approach to decrease the pro-inflammatory state in COVID-19. Methods: We conducted a systematic review of PubMed and EMBASE databases in order to summarize the evidence regarding HP therapy in COVID-19. We included original studies and case series enrolling at least five patients. Results: We included 11 articles and describe the characteristics of the populations studied from both clinical and biological perspectives. The methodological quality of the included studies was generally low. Only two studies had a control group, one of which included 101 patients in total. The remaining studies had a range between 10 and 50 patients included. There was large variability in the HP techniques implemented and in clinical and biological outcomes reported. Most studies described decreasing levels of IL-6 after HP treatment. Conclusions: Our review does not support strong conclusions regarding the role of HP in COVID-19. Considering the very low level of clinical evidence detected, starting HP therapies in COVID-19 patients does not seem supported outside of clinical trials. Prospective randomized data are needed.
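The abstract field above stores the structured abstract as one string with its section labels inlined. A minimal sketch of how it could be split back into labelled parts (matching the *_abstract and *_abstract_label columns in the schema) is given below; the label set and the assumption that each label occurs once, as a section header followed by a colon, are inferred from this row rather than documented anywhere.

```python
import re

LABELS = ["Background", "Methods", "Results", "Conclusions"]

def split_structured_abstract(text):
    """Split 'Background: ... Methods: ...' into a dict keyed by label.
    Assumes each label appears once, in order, and only as a section header."""
    parts = re.split(r"\b(" + "|".join(LABELS) + r"):\s*", text.strip())
    pairs = iter(parts[1:])          # drop any text before the first label
    return {label: body.strip() for label, body in zip(pairs, pairs)}

# Example with the row above:
# split_structured_abstract(whole_article_abstract)["Conclusions"]
```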
INTRODUCTION: Since December 2019, the virus identified as SARS‐CoV‐2 has caused the pandemic of coronavirus disease 2019 (COVID‐19), which has spread worldwide in a sequence of following waves. According to data from Johns Hopkins University, as of September 17th, 2021, there have been over 227 million cases worldwide and over 4.6 million deaths. 1 COVID‐19 ranges from asymptomatic infection to extremely severe cases requiring hospitalization and possibly admission to the intensive care unit (ICU). The most frequent expression of severe COVID‐19 is the occurrence of acute respiratory distress syndrome (ARDS), 2 but the SARS‐CoV‐2 has shown the ability to cause cardiovascular 3 and, eventually, multi‐organ dysfunction. 4 COVID‐19 generates a pro‐inflammatory state with hypothesized cytokine storm, and high levels of interleukin 6 (IL‐6) have been repeatedly observed. Therefore, together with the attempt to control viral replication, and to provide supportive therapies (with invasive or non‐invasive mechanical ventilation—IMV or NIV, respectively), and eventually with extracorporeal support, 5 the suppression of the pro‐inflammatory state has been a target. 6 , 7 From pharmacological perspectives, the use of corticosteroids and more recently of IL‐6 receptor antagonists (i.e., tocilizumab) has shown improvement of prognosis for patients experiencing severe COVID‐19. 8 , 9 A possible non‐pharmacological approach to limit the pro‐inflammatory state induced by severe COVID‐19 is the use of extracorporeal cytokine removal, also known as hemoperfusion (HP). Extracorporeal approaches include plasma exchange, direct HP on a polymyxin B‐immobilized fiber column (PMX‐DHP), continuous hemodiafiltration with Cytosorb adsorber, and several other methods. These strategies have been previously investigated in other critical illnesses, such as septic shock, ARDS, and also for cases of Middle East Respiratory Syndrome due to coronavirus infection; however, there is no evidence of beneficial effects in these settings. The use of HP in patients with severe COVID‐19 is pathophysiologically sounded 10 and aims at interrupting the vicious pro‐inflammatory circle and the associated coagulopathy, endothelial damage, and organ failure. There have been increasing reports of the beneficial effects of such treatment among ICU patients with severe COVID‐19, but different methods have been used, 11 , 12 and several platforms/consoles have been modified to host these HP filters. 13 In order to summarize the evidence regarding the use of HP strategies for cytokine removal, we systematically reviewed the existing literature that evaluates the application of different HP strategies in patients with COVID‐19. Our aim was to gather information on biochemical and clinical outcomes described by the authors. From the overview of these outcomes it might be possible to acquire information that could be considered for future prospective studies. CONCLUSION: Our systematic review identified several studies that evaluate the role of different HP strategies in COVID‐19. However, all these studies were of low methodological quality, and only a few had a control group. Considering the very low level of clinical evidence reported so far, starting HP therapies in COVID‐19 patients does not seem to be supported by hard evidence. Prospective randomized data are needed to establish the role of HP in COVID‐19 patients.
Background: Coronavirus disease-19 (COVID-19) ranges from asymptomatic infection to severe cases requiring admission to the intensive care unit. Together with supportive therapies (ventilation in particular), the suppression of the pro-inflammatory state has been a hypothesized target. Pharmacological therapies with corticosteroids and interleukin-6 (IL-6) receptor antagonists have reduced mortality. The use of extracorporeal cytokine removal, also known as hemoperfusion (HP), could be a promising non-pharmacological approach to decrease the pro-inflammatory state in COVID-19. Methods: We conducted a systematic review of the PubMed and EMBASE databases in order to summarize the evidence regarding HP therapy in COVID-19. We included original studies and case series enrolling at least five patients. Results: We included 11 articles and describe the characteristics of the populations studied from both clinical and biological perspectives. The methodological quality of the included studies was generally low. Only two studies had a control group, one of which included 101 patients in total. The remaining studies included between 10 and 50 patients each. There was large variability in the HP techniques implemented and in the clinical and biological outcomes reported. Most studies described decreasing levels of IL-6 after HP treatment. Conclusions: Our review does not support strong conclusions regarding the role of HP in COVID-19. Considering the very low level of clinical evidence detected, starting HP therapies in COVID-19 patients does not seem to be supported outside of clinical trials. Prospective randomized data are needed.
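The Methods above describe a PubMed and EMBASE search. A search like the PubMed arm can be scripted for reproducibility; the sketch below is a minimal illustration assuming Biopython's Entrez utilities and an invented query string, not the authors' actual search strategy (EMBASE has no comparable free API).

```python
# Minimal sketch of a scripted PubMed query; Biopython's Entrez wrapper is
# assumed and the query string is invented for illustration -- it is not the
# authors' actual search strategy.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address

query = ('("COVID-19" OR "SARS-CoV-2") AND '
         '(hemoperfusion OR hemoadsorption OR "blood purification")')
handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(record["Count"])    # number of matching records
print(record["IdList"])   # PMIDs to screen against the eligibility criteria
```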
4,795
276
[ 517, 322, 214, 88, 60, 55 ]
12
[ "hp", "patients", "19", "covid", "covid 19", "systematic", "data", "studies", "included", "authors" ]
[ "pandemic coronavirus disease", "covid 19 patients", "19 coronavirus", "severe covid 19", "respiratory syndrome coronavirus" ]
[CONTENT] artificial liver | blood purification | Cytosorb | hemoadsorption | inflammation | interleukin‐6 | mortality | oXiris | Toraymyxin [SUMMARY]
[CONTENT] artificial liver | blood purification | Cytosorb | hemoadsorption | inflammation | interleukin‐6 | mortality | oXiris | Toraymyxin [SUMMARY]
[CONTENT] artificial liver | blood purification | Cytosorb | hemoadsorption | inflammation | interleukin‐6 | mortality | oXiris | Toraymyxin [SUMMARY]
[CONTENT] artificial liver | blood purification | Cytosorb | hemoadsorption | inflammation | interleukin‐6 | mortality | oXiris | Toraymyxin [SUMMARY]
[CONTENT] artificial liver | blood purification | Cytosorb | hemoadsorption | inflammation | interleukin‐6 | mortality | oXiris | Toraymyxin [SUMMARY]
[CONTENT] artificial liver | blood purification | Cytosorb | hemoadsorption | inflammation | interleukin‐6 | mortality | oXiris | Toraymyxin [SUMMARY]
[CONTENT] Adult | Aged | Biomarkers | COVID-19 | Cytokines | Female | Hemoperfusion | Humans | Inflammation Mediators | Male | Middle Aged | Risk Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Biomarkers | COVID-19 | Cytokines | Female | Hemoperfusion | Humans | Inflammation Mediators | Male | Middle Aged | Risk Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Biomarkers | COVID-19 | Cytokines | Female | Hemoperfusion | Humans | Inflammation Mediators | Male | Middle Aged | Risk Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Biomarkers | COVID-19 | Cytokines | Female | Hemoperfusion | Humans | Inflammation Mediators | Male | Middle Aged | Risk Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Biomarkers | COVID-19 | Cytokines | Female | Hemoperfusion | Humans | Inflammation Mediators | Male | Middle Aged | Risk Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Biomarkers | COVID-19 | Cytokines | Female | Hemoperfusion | Humans | Inflammation Mediators | Male | Middle Aged | Risk Factors | Treatment Outcome [SUMMARY]
[CONTENT] pandemic coronavirus disease | covid 19 patients | 19 coronavirus | severe covid 19 | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] pandemic coronavirus disease | covid 19 patients | 19 coronavirus | severe covid 19 | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] pandemic coronavirus disease | covid 19 patients | 19 coronavirus | severe covid 19 | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] pandemic coronavirus disease | covid 19 patients | 19 coronavirus | severe covid 19 | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] pandemic coronavirus disease | covid 19 patients | 19 coronavirus | severe covid 19 | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] pandemic coronavirus disease | covid 19 patients | 19 coronavirus | severe covid 19 | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] hp | patients | 19 | covid | covid 19 | systematic | data | studies | included | authors [SUMMARY]
[CONTENT] hp | patients | 19 | covid | covid 19 | systematic | data | studies | included | authors [SUMMARY]
[CONTENT] hp | patients | 19 | covid | covid 19 | systematic | data | studies | included | authors [SUMMARY]
[CONTENT] hp | patients | 19 | covid | covid 19 | systematic | data | studies | included | authors [SUMMARY]
[CONTENT] hp | patients | 19 | covid | covid 19 | systematic | data | studies | included | authors [SUMMARY]
[CONTENT] hp | patients | 19 | covid | covid 19 | systematic | data | studies | included | authors [SUMMARY]
[CONTENT] severe | covid 19 | covid | 19 | severe covid 19 | severe covid | pro inflammatory | pro inflammatory state | pro | hp [SUMMARY]
[CONTENT] systematic | search | covid | purification | blood purification | blood | included | 19 | reviews | performed [SUMMARY]
[CONTENT] patients | therapy | included | hp | group | renal replacement | renal replacement therapy | renal | replacement | replacement therapy [SUMMARY]
[CONTENT] role | covid 19 patients | 19 patients | low | covid | covid 19 | 19 | hp | evidence | studies [SUMMARY]
[CONTENT] hp | patients | covid | 19 | covid 19 | data | systematic | authors | studies | use [SUMMARY]
[CONTENT] hp | patients | covid | 19 | covid 19 | data | systematic | authors | studies | use [SUMMARY]
[CONTENT] Coronavirus disease-19 | COVID-19 ||| ||| ||| COVID-19 [SUMMARY]
[CONTENT] PubMed | EMBASE | COVID-19 ||| at least five [SUMMARY]
[CONTENT] 11 ||| ||| Only two | one | 101 ||| between 10 and 50 ||| HP ||| IL-6 [SUMMARY]
[CONTENT] COVID-19 ||| COVID-19 ||| [SUMMARY]
[CONTENT] COVID-19 ||| ||| ||| COVID-19 ||| PubMed | EMBASE | COVID-19 ||| at least five ||| ||| 11 ||| ||| Only two | one | 101 ||| between 10 and 50 ||| HP ||| IL-6 ||| COVID-19 ||| COVID-19 ||| [SUMMARY]
[CONTENT] COVID-19 ||| ||| ||| COVID-19 ||| PubMed | EMBASE | COVID-19 ||| at least five ||| ||| 11 ||| ||| Only two | one | 101 ||| between 10 and 50 ||| HP ||| IL-6 ||| COVID-19 ||| COVID-19 ||| [SUMMARY]
A quasi-experimental study to improve health service quality: implementing communication and self-efficacy skills training to primary healthcare workers in two counties in Iran.
34229675
Service satisfaction ratings from clients are a good indicator of service quality. The present study aimed to investigate the impact of communication skills and self-efficacy training for healthcare workers on clients' satisfaction.
BACKGROUND
A quasi-experimental study was conducted in health centers of Saveh University of Medical Sciences in Iran. Primary Healthcare (PHC) workers (N = 105) and service recipients (N = 364) were randomly assigned to intervention and control groups. The intervention group received four 90-min training sessions consisting of lectures, film screening, role-playing, and group discussion. Before and 3 months after the intervention, a multi-part questionnaire (covering demographics, self-efficacy, and communication skills for PHC workers, and a satisfaction questionnaire for service recipients) was completed by participants in both the intervention and control groups.
METHODS
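The full Methods text later in this record states that group sizes were based on an a priori power calculation (80% power, alpha = 0.01, "medium" effect size). A minimal sketch of such a calculation with statsmodels is shown below; how "medium" was operationalised is not reported, so the assumed Cohen's d of 0.5 will not necessarily reproduce the stated N = 44 per group.

```python
# Minimal sketch of an a priori sample-size calculation such as the one the
# Methods describe (80% power, alpha = 0.01, "medium" effect size).  The
# assumed Cohen's d = 0.5 is illustrative only.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,        # assumed operationalisation of a "medium" effect
    alpha=0.01,
    power=0.80,
    ratio=1.0,              # equal allocation to intervention and control
    alternative="two-sided",
)
print(round(n_per_group), "participants per group")
```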
After the educational program, mean self-efficacy and communication skills scores of PHC workers were higher in the intervention group than in the control group (p < 0.05). Mean satisfaction scores of service recipients seen by the intervention group (PHC workers) also increased significantly overall compared with the control group (p < 0.001).
RESULTS
The educational program improved self-efficacy and communication skills in health workers and improved client satisfaction overall. Our results support the application of self-efficacy and communication skills training for other medical groups that wish to improve clients' satisfaction, an important health services outcome.
CONCLUSIONS
[ "Communication", "Health Personnel", "Health Services", "Humans", "Iran", "Primary Health Care", "Self Efficacy" ]
8258999
Introduction
Service satisfaction is affected by service quality, quality of service delivery, and levels of service recipients’ expectation of service quality [1, 2]. Service satisfaction is a good indicator of service quality [2]. Measurement of service recipient satisfaction is a common method for evaluating treatment quality in healthcare organizations [3]. Generally, the concept of satisfaction in providing health services refers to the feeling or attitude of service clients. There is a direct relationship between patient satisfaction and remaining in treatment [4]. Appropriate interpersonal communication between healthcare providers and recipients is an important determinant of clients’ satisfaction and compliance with healthcare guidelines [5]. Proper and effective communication between health personnel and patients has a positive impact on health and medical care and enhances patient satisfaction. Focusing only on technical aspects of health care may lead professionals to use ineffective communication methods (e.g., lack of eye contact, not listening fully to client or patient concerns), and thus key and major problems of patients are not clearly identified [6, 7]. Communication skills (CS) are crucial for professionals who come into direct contact with clients. Such skills convey respect, attention and empathy; they frequently include asking open questions, listening actively, and using intelligible words for patients in order to increase the effectiveness of the medical interview and treatment process as well as patients’ satisfaction [8–10]. Today, health managers and planners around the world, particularly in developing countries, are facing important challenges in responding to the health care needs of the general population [11, 12]. In Iran, Primary Healthcare (PHC) workers are responsible for providing appropriate health education and services for the public [13, 14]. PHC workers prevent patients from being referred to clinics and hospitals by providing primary health care [15]. Therefore, the PHC workers’ ability to communicate effectively with individuals is an essential requirement for satisfaction and engagement of service recipients to promote health [14, 15]. Improving self-efficacy (SE) for communicating may assist in improving CS [6]. SE is a central element of social-cognitive theory and refers to an individual’s belief or judgment about their ability to perform tasks and responsibilities [16]. Therefore, SE is an important factor for successful performance, and for the skills that lead to successful performance [6, 17]. In Iran, primary healthcare coverage is offered to over 95 % of rural areas, but quality of care is the main concern of health policymakers. Since satisfaction is an important index of the quality and performance of health care [13, 15], and given the lack of information on how the CS and SE of health workers affect patient satisfaction, the present study aimed to evaluate the impact of an educational intervention, based on SE and CS, for PHC workers. Of particular interest was the impact on the satisfaction of public health service recipients.
null
null
Results
From 364 service recipients, 358 (180 in the intervention group and 178 in the control group) who completed the post-test underwent the final analysis. The mean age of service recipients was 40.5 ± 14.9 years in the intervention group and 37.7 ± 12.3 years in the control group (p > 0.05). Intervention and control groups were similar on demographic variables (e.g., gender, insurance, education and occupation) and no significant differences were found between groups. Among PHC workers, intervention and control groups were also similar on demographic variables (e.g., gender, work experience and literacy level), with no significant differences between groups. See Tables 1 and 2.

Table 1. Comparison of categorical variables in clients seen by two groups of primary healthcare workers (Behvarz) assigned to Intervention and Control groups

Variable | Intervention (n = 180), n (%) | Control (n = 178), n (%) | P-value
Sex: Male | 79 (43.9) | 82 (46) | 0.67
Sex: Female | 101 (56.1) | 96 (54) |
Education: Illiterate | 15 (8.4) | 11 (6.2) | 0.54
Education: Elementary | 99 (55) | 92 (51.7) |
Education: High school and diploma | 46 (25.6) | 57 (32) |
Education: Academic | 20 (11) | 18 (10.1) |
Job: Student | 8 (4.4) | 10 (5.6) | 0.39
Job: Farmer / Shepherd | 43 (23.9) | 54 (30.3) |
Job: Staff | 7 (3.9) | 5 (2.9) |
Job: Housewife | 90 (50) | 88 (49.4) |
Job: Other | 32 (17.8) | 21 (11.8) |
Insurance: Yes | 169 (93.9) | 170 (95.5) | 0.49
Insurance: No | 11 (6.1) | 8 (4.5) |
Note: Chi-square test used.

Table 2. Comparison of categorical variables in primary healthcare workers (Behvarz) assigned to Intervention and Control

Variable | Intervention (n = 60), n (%) | Control (n = 44), n (%) | P-value
Sex: Male | 25 (41.6) | 16 (36.4) | 0.58
Sex: Female | 35 (58.4) | 28 (63.6) |
Education: Elementary | 8 (13.3) | 6 (13.6) | 0.12
Education: Middle school | 11 (18.3) | 5 (11.4) |
Education: High school and diploma | 33 (55) | 32 (72.7) |
Education: Academic | 8 (13.3) | 1 (2.3) |
Work experience: <10 | 15 (25) | 9 (20.4) | 0.66
Work experience: 10-19 | 19 (31.7) | 12 (27.3) |
Work experience: ≥20 | 26 (43.3) | 23 (52.3) |
Note: Chi-square test used.
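The chi-square comparisons reported in Tables 1 and 2 can be illustrated as follows; scipy is assumed, the counts are the reconstructed sex distribution from Table 1, and whether the authors applied a continuity correction is not stated.

```python
# Minimal sketch of the chi-square comparison used for Tables 1 and 2,
# using the reconstructed sex counts from Table 1 (clients).
from scipy.stats import chi2_contingency

#          intervention  control
counts = [[79, 82],      # male clients
          [101, 96]]     # female clients

chi2, p, dof, expected = chi2_contingency(counts, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.2f}")  # non-significant, close to the reported p = 0.67
```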
According to Table 3, for PHC workers, there was no significant difference between the intervention and control groups before training on SE and all CS constructs except for attending to client perception of referral source (p < 0.05). Following training, paired t-tests indicated that the mean scores of SE and all communication skill constructs significantly increased in the intervention group (p < 0.001), while mean scores in the control group increased on starting a session, decreased on data collection and evidenced no other significant differences.

Table 3. Comparison of communication skills and self-efficacy in primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-months follow-up

Variable | Time | Intervention group, mean ± SD (N = 60) | Control group, mean ± SD (N = 44) | P-value*
Starting the session | Baseline | 2.52 ± 0.62 | 2.38 ± 0.51 | 0.06
Starting the session | 3-months follow-up | 3.79 ± 0.46 | 2.87 ± 0.72 | 0.001
Starting the session | P-value** | 0.001 | 0.001 |
Creating a relationship | Baseline | 8.95 ± 1.57 | 9.0 ± 1.54 | 0.68
Creating a relationship | 3-months follow-up | 11.91 ± 1.78 | 9.75 ± 2.03 | 0.001
Creating a relationship | P-value** | 0.001 | 0.06 |
Data collection | Baseline | 5.0 ± 0.78 | 4.84 ± 0.63 | 0.18
Data collection | 3-months follow-up | 5.51 ± 0.56 | 4.55 ± 0.49 | 0.001
Data collection | P-value** | 0.001 | 0.04 |
Attending to client perception of referral source | Baseline | 3.25 ± 0.89 | 2.84 ± 0.75 | 0.01
Attending to client perception of referral source | 3-months follow-up | 3.78 ± 0.48 | 2.77 ± 0.62 | 0.001
Attending to client perception of referral source | P-value** | 0.001 | 0.39 |
Providing information | Baseline | 4.67 ± 0.75 | 4.59 ± 0.87 | 0.56
Providing information | 3-months follow-up | 5.49 ± 0.64 | 4.69 ± 0.61 | 0.001
Providing information | P-value** | 0.001 | 0.55 |
Mutual agreement | Baseline | 2.87 ± 0.56 | 2.84 ± 0.73 | 0.95
Mutual agreement | 3-months follow-up | 3.21 ± 0.58 | 2.66 ± 0.64 | 0.001
Mutual agreement | P-value** | 0.001 | 0.27 |
Ending the session | Baseline | 5.64 ± 1.0 | 5.37 ± 0.85 | 0.06
Ending the session | 3-months follow-up | 6.81 ± 0.92 | 5.66 ± 1.10 | 0.001
Ending the session | P-value** | 0.001 | 0.17 |
Self-efficacy | Baseline | 31.52 ± 2.91 | 31.32 ± 2.64 | 0.67***
Self-efficacy | 3-months follow-up | 34.25 ± 4.0 | 31.26 ± 4.52 | 0.001***
Self-efficacy | P-value**** | 0.001 | 0.79 |
* Independent t-test; ** Paired t-test (within group, baseline vs. follow-up); *** Mann-Whitney; **** Wilcoxon.

For service recipients (Table 4), there were no significant differences between intervention and control groups before training on components of satisfaction (access to services, continuity of care, humaneness of staff, comprehensiveness of care, provision of health education, effectiveness of service), but mean scores of satisfaction variables generally significantly increased in the intervention group after training (p < 0.001). No significant differences were observed in the control group from pre- to post-training.

Table 4. Comparison of client satisfaction in two groups of primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-months follow-up

Variable | Time | Intervention group, mean ± SD (N = 180) | Control group, mean ± SD (N = 178) | P-value*
Access to services | Baseline | 1.75 ± 0.60 | 1.73 ± 0.47 | 0.59
Access to services | 3-months follow-up | 2.89 ± 1.0 | 1.80 ± 0.56 | 0.001
Access to services | P-value** | 0.001 | 0.32 |
Continuity of care | Baseline | 1.19 ± 0.97 | 1.33 ± 0.87 | 0.28
Continuity of care | 3-months follow-up | 2.72 ± 1.14 | 1.40 ± 1.06 | 0.001
Continuity of care | P-value** | 0.001 | 0.34 |
Humaneness of staff | Baseline | 1.22 ± 0.84 | 1.18 ± 0.79 | 0.63
Humaneness of staff | 3-months follow-up | 2.88 ± 1.17 | 1.23 ± 0.82 | 0.001
Humaneness of staff | P-value** | 0.001 | 0.28 |
Comprehensiveness of care | Baseline | -1.09 ± 0.67 | -1.04 ± 0.60 | 0.14
Comprehensiveness of care | 3-months follow-up | -1.70 ± 1.22 | -0.61 ± 1.14 | 0.001
Comprehensiveness of care | P-value** | 0.001 | 0.11 |
Provision of health education | Baseline | 1.05 ± 0.89 | 1.01 ± 0.95 | 0.69
Provision of health education | 3-months follow-up | 2.25 ± 1.12 | 1.07 ± 0.94 | 0.001
Provision of health education | P-value** | 0.001 | 0.10 |
Effectiveness of services | Baseline | 1.0 ± 0.89 | 1.10 ± 0.90 | 0.85
Effectiveness of services | 3-months follow-up | 2.75 ± 1.0 | 1.18 ± 1.0 | 0.001
Effectiveness of services | P-value** | 0.001 | 0.47 |
* Mann-Whitney (between groups); ** Wilcoxon (within group, baseline vs. follow-up).

Clients were generally dissatisfied with comprehensiveness of care (Table 4). There were no differences between intervention and control groups prior to training.
In the intervention group, clients became more dissatisfied with comprehensiveness of care following training; however, no difference was found from pre- to post-training for clients in the control group. Following training, clients in the intervention group were significantly more dissatisfied with comprehensiveness of care than those in the control group. Medians and interquartile ranges of SE and all satisfaction constructs are reported in Table 5.

Table 5. Comparison of client satisfaction and self-efficacy medians in two groups of primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-months follow-up

Variable | Time | Intervention group, median (IQR) (N = 180) | Control group, median (IQR) (N = 178)
Access to services | Baseline | 2 (2) | 2 (1)
Access to services | 3-months follow-up | 3 (3) | 2 (1)
Continuity of care | Baseline | 1 (2) | 1 (2)
Continuity of care | 3-months follow-up | 2.5 (1.75) | 1 (2)
Humaneness of staff | Baseline | 1 (1) | 1 (1)
Humaneness of staff | 3-months follow-up | 3 (2) | 1.2 (1.5)
Comprehensiveness of care | Baseline | -1 (1) | -1 (2)
Comprehensiveness of care | 3-months follow-up | -2 (2) | -1 (2)
Provision of health education | Baseline | 1 (0) | 1 (0)
Provision of health education | 3-months follow-up | 2 (3) | 1 (0.5)
Effectiveness of services | Baseline | 1 (2) | 1 (2)
Effectiveness of services | 3-months follow-up | 3 (4) | 1 (1.75)
Self-efficacy | Baseline | 30 (10) | 30 (10)
Self-efficacy | 3-months follow-up | 32 (10) | 30 (10)
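Tables 3-5 rest on four tests: independent and paired t-tests for the communication-skill scores, and Mann-Whitney and Wilcoxon tests for the self-efficacy and satisfaction scores. The sketch below illustrates that battery on simulated data, since the raw per-participant scores are not published.

```python
# Illustrative re-creation of the statistical battery behind Tables 3-5,
# on simulated scores (the raw per-participant data are not available).
import numpy as np
from scipy.stats import ttest_ind, ttest_rel, mannwhitneyu, wilcoxon

rng = np.random.default_rng(0)

# Hypothetical communication-skill scores for one construct (cf. Table 3).
cs_int_pre, cs_int_post = rng.normal(9.0, 1.6, 60), rng.normal(11.9, 1.8, 60)
cs_ctrl_post = rng.normal(9.8, 2.0, 44)

print(ttest_rel(cs_int_pre, cs_int_post).pvalue)    # within group: baseline vs. follow-up
print(ttest_ind(cs_int_post, cs_ctrl_post).pvalue)  # between groups at follow-up

# Hypothetical Likert-style satisfaction scores (cf. Tables 4-5).
sat_int_pre = rng.integers(-2, 3, 180).astype(float)
sat_int_post = np.clip(sat_int_pre + rng.integers(0, 3, 180), -2, 2)
sat_ctrl_post = rng.integers(-2, 3, 178).astype(float)

print(wilcoxon(sat_int_post, sat_int_pre).pvalue)        # within group (non-parametric)
print(mannwhitneyu(sat_int_post, sat_ctrl_post).pvalue)  # between groups (non-parametric)

q1, med, q3 = np.percentile(sat_int_post, [25, 50, 75])
print(med, q3 - q1)                                      # median and IQR as in Table 5
```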
Conclusions
Communication skills training improved the self-efficacy of PHC workers to communicate effectively with clients, improved PHC worker communication skills with clients, and improved clients’ satisfaction with services. Findings are encouraging, and such training may be deployed in other practice settings, since it was delivered in only 4 group sessions of 90 min each.
[ "Methods", "Design, procedure and the study sample", "Measures", "Intervention and control groups", "Statistical analysis", "Ethics", "Limitation and future directions" ]
[ "Design, procedure and the study sample The present study, conducted in 2019, was a quasi-experimental intervention study conducted on primary healthcare workers (N = 105) in health centers of Saveh and Zarandieh counties, and patients (N = 364) living in rural areas of Saveh and Zarandieh. Setting power to 80 %, with a medium effect size and alpha = 0.01, the sample size needed for PHC workers was N = 44 per group (N = 88, in total), based on similar previous studies [17]. In anticipation of drop-out, N = 105 PHC workers were approached to participate in the study. One left just before beginning the study, leaving N = 104 PHC workers (N = 60 and N = 44 in intervention and control groups, respectively). Sample size needed for service recipients was calculated at N = 303 based on previous research [18], setting power to 80 %, with medium effect size and alpha = 0.01. Of the N = 364 service recipients screened for eligibility (see below) none were excluded, leaving N = 182 in both intervention and control conditions. Subsequently, N = 2 and N = 4 were lost to follow-up from intervention and control groups, respectively, leaving N = 358 for analyses (N = 180 and N = 178 in intervention and control groups, respectively). See Consort Diagram (Fig. 1). Figure 2 depicts study design.\nFig. 1Consort DiagramFig. 2Study Design. PHC = Primary Healthcare\nConsort Diagram\nStudy Design. PHC = Primary Healthcare\nBecause PHC workers in Zarandieh and Saveh had similar scientific and cultural characteristics, PHC workers in Zarandieh were placed in the control group, and PHC workers in Saveh were placed in the intervention group. This was done by randomizing which site would be placed in control (using flip of a coin). Thereafter, personnel numbers were utilized to randomly sample PHC workers in each site. Service recipients were randomly selected (using random numbers table) from the list of clients seen by PHC workers in the last 3 months, and then were contacted and informed of the research purpose. Appointments took place at their homes where they completed the satisfaction questionnaire.\nInclusion criteria for PHC workers were anticipated continued employment for the next 6 months, at least one year of work experience (both determined through interview) and willingness to participate in the study. PHC workers were excluded if they were absent from two consecutive training sessions (see below). For service recipients, inclusion criteria were residence in Zarandieh or Saveh, receipt of PHC services in the last 3 months, being 15 years or older and willingness to participate in the study.\nThe present study, conducted in 2019, was a quasi-experimental intervention study conducted on primary healthcare workers (N = 105) in health centers of Saveh and Zarandieh counties, and patients (N = 364) living in rural areas of Saveh and Zarandieh. Setting power to 80 %, with a medium effect size and alpha = 0.01, the sample size needed for PHC workers was N = 44 per group (N = 88, in total), based on similar previous studies [17]. In anticipation of drop-out, N = 105 PHC workers were approached to participate in the study. One left just before beginning the study, leaving N = 104 PHC workers (N = 60 and N = 44 in intervention and control groups, respectively). Sample size needed for service recipients was calculated at N = 303 based on previous research [18], setting power to 80 %, with medium effect size and alpha = 0.01. 
Of the N = 364 service recipients screened for eligibility (see below) none were excluded, leaving N = 182 in both intervention and control conditions. Subsequently, N = 2 and N = 4 were lost to follow-up from intervention and control groups, respectively, leaving N = 358 for analyses (N = 180 and N = 178 in intervention and control groups, respectively). See Consort Diagram (Fig. 1). Figure 2 depicts study design.\nFig. 1Consort DiagramFig. 2Study Design. PHC = Primary Healthcare\nConsort Diagram\nStudy Design. PHC = Primary Healthcare\nBecause PHC workers in Zarandieh and Saveh had similar scientific and cultural characteristics, PHC workers in Zarandieh were placed in the control group, and PHC workers in Saveh were placed in the intervention group. This was done by randomizing which site would be placed in control (using flip of a coin). Thereafter, personnel numbers were utilized to randomly sample PHC workers in each site. Service recipients were randomly selected (using random numbers table) from the list of clients seen by PHC workers in the last 3 months, and then were contacted and informed of the research purpose. Appointments took place at their homes where they completed the satisfaction questionnaire.\nInclusion criteria for PHC workers were anticipated continued employment for the next 6 months, at least one year of work experience (both determined through interview) and willingness to participate in the study. PHC workers were excluded if they were absent from two consecutive training sessions (see below). For service recipients, inclusion criteria were residence in Zarandieh or Saveh, receipt of PHC services in the last 3 months, being 15 years or older and willingness to participate in the study.\nMeasures A multi-part assessment included demographic information, and valid/reliable measures of SE, CS and satisfaction [6, 7, 14, 18]. PHC worker SE was assessed with 4 questions [7], with answers on a five-point Likert scale ranging from 5 = “always” to 1 = “never.“ Higher scores indicated higher SE. A study conducted in Iran found Cronbach’s alpha was 0.82 [6]. A checklist was used to assess PHC worker communication performance with clients in seven areas (2 items for starting the session, 6 items for creating a relationship, 3 items for data collection, 2 items for attending to client’s perception of referral source, 3 items for providing information, 2 items for mutual agreement and 4 items for ending the session). Performance of the skill received a score of 2 (yes) whereas not performing the skill was scored 1 (no). Scores on this construct ranged from 22 to 44. Cronbach’s alpha was 0.78 [7] for this measure. The client satisfaction questionnaire [18, 19] consisted of 42 items in 6 domains (8 items for access to services, 6 items for continuity of care, 8 items for humaneness of staff, 5 items for comprehensiveness of care, 5 items for provision of health education, 10 items for effectiveness of service). Responses were evaluated using a 5-point Likert scale from “strongly agree” (= 2) to “strongly disagree” (= -2). Higher and more positive scores indicate more satisfaction. In Iran, face- and content-validity, and reliability were confirmed [14]. Reliability was assessed for SE and CS questionnaires, in 20 health workers; and for service satisfaction questionnaire in 30 clients were similar to the target population in terms of demographic characteristics. 
Cronbach’s alphas were 0.81, 0.79 and 0.73 for SE, CS, and satisfaction questionnaires, respectively, when considering each questionnaire as a whole. This was calculated using standard statistical package for social sciences (SPSS 19).\nData were collected prior to training. PHC workers reported on SE and demographics, whereas trained observers completed the CS checklist while observing interactions between PHC workers and clients. Clients completed the satisfaction questionnaire via self-report; persons with no or low literacy completed the questionnaire via interview. Staff members assisting with observations/interviews were blind to condition, and clients were blind to condition. All data were collected 3 months following training, except for demographics.\nA multi-part assessment included demographic information, and valid/reliable measures of SE, CS and satisfaction [6, 7, 14, 18]. PHC worker SE was assessed with 4 questions [7], with answers on a five-point Likert scale ranging from 5 = “always” to 1 = “never.“ Higher scores indicated higher SE. A study conducted in Iran found Cronbach’s alpha was 0.82 [6]. A checklist was used to assess PHC worker communication performance with clients in seven areas (2 items for starting the session, 6 items for creating a relationship, 3 items for data collection, 2 items for attending to client’s perception of referral source, 3 items for providing information, 2 items for mutual agreement and 4 items for ending the session). Performance of the skill received a score of 2 (yes) whereas not performing the skill was scored 1 (no). Scores on this construct ranged from 22 to 44. Cronbach’s alpha was 0.78 [7] for this measure. The client satisfaction questionnaire [18, 19] consisted of 42 items in 6 domains (8 items for access to services, 6 items for continuity of care, 8 items for humaneness of staff, 5 items for comprehensiveness of care, 5 items for provision of health education, 10 items for effectiveness of service). Responses were evaluated using a 5-point Likert scale from “strongly agree” (= 2) to “strongly disagree” (= -2). Higher and more positive scores indicate more satisfaction. In Iran, face- and content-validity, and reliability were confirmed [14]. Reliability was assessed for SE and CS questionnaires, in 20 health workers; and for service satisfaction questionnaire in 30 clients were similar to the target population in terms of demographic characteristics. Cronbach’s alphas were 0.81, 0.79 and 0.73 for SE, CS, and satisfaction questionnaires, respectively, when considering each questionnaire as a whole. This was calculated using standard statistical package for social sciences (SPSS 19).\nData were collected prior to training. PHC workers reported on SE and demographics, whereas trained observers completed the CS checklist while observing interactions between PHC workers and clients. Clients completed the satisfaction questionnaire via self-report; persons with no or low literacy completed the questionnaire via interview. Staff members assisting with observations/interviews were blind to condition, and clients were blind to condition. All data were collected 3 months following training, except for demographics.\nIntervention and control groups The training program was designed and held for the intervention group in four 90-minute training sessions. 
Training methods included: Lecture and question-and-answer sessions to increase awareness and consolidate learning; film screening; role-playing to enhance SE and improve CS; discussion group to improve SE and CS; instruction booklets; and texting key points of effective communication as reminders. The control group received routine training. Typical training is 2 years consisting of course work, and in-service training. Topics cover general, oral and elderly health; problem solving; collaboration; social factors impacting health; human rights; and cultural beliefs.\nThe training program was designed and held for the intervention group in four 90-minute training sessions. Training methods included: Lecture and question-and-answer sessions to increase awareness and consolidate learning; film screening; role-playing to enhance SE and improve CS; discussion group to improve SE and CS; instruction booklets; and texting key points of effective communication as reminders. The control group received routine training. Typical training is 2 years consisting of course work, and in-service training. Topics cover general, oral and elderly health; problem solving; collaboration; social factors impacting health; human rights; and cultural beliefs.\nStatistical analysis Data were analyzed via SPSS 19 using chi-square tests for categorical variables, independent sample t-tests and paired t-tests. An independent sample t-test was used to compare the mean scores of CS questionnaires between intervention and control groups. Also, a paired t-test was used to compare the mean scores of CS questionnaires before and after training sessions. A Mann-Whitney was used to compare the mean scores of SE, and satisfaction questionnaires between intervention and control groups. Also, a Wilcoxon was used to compare the mean scores of SE, and satisfaction questionnaires before and after training sessions. Data normality was confirmed using the Kolmogorov-Smirnov test, histograms, and normality of residuals.\nData were analyzed via SPSS 19 using chi-square tests for categorical variables, independent sample t-tests and paired t-tests. An independent sample t-test was used to compare the mean scores of CS questionnaires between intervention and control groups. Also, a paired t-test was used to compare the mean scores of CS questionnaires before and after training sessions. A Mann-Whitney was used to compare the mean scores of SE, and satisfaction questionnaires between intervention and control groups. Also, a Wilcoxon was used to compare the mean scores of SE, and satisfaction questionnaires before and after training sessions. Data normality was confirmed using the Kolmogorov-Smirnov test, histograms, and normality of residuals.\nEthics The Research Ethics Committee of the Saveh University of Medical Sciences approved the study protocol (Number: IR.SAVEHUMS. REC1396.16). Also, all participants in this research completed a written informed consent.\n The Research Ethics Committee of the Saveh University of Medical Sciences approved the study protocol (Number: IR.SAVEHUMS. REC1396.16). Also, all participants in this research completed a written informed consent.", "The present study, conducted in 2019, was a quasi-experimental intervention study conducted on primary healthcare workers (N = 105) in health centers of Saveh and Zarandieh counties, and patients (N = 364) living in rural areas of Saveh and Zarandieh. 
Setting power to 80 %, with a medium effect size and alpha = 0.01, the sample size needed for PHC workers was N = 44 per group (N = 88, in total), based on similar previous studies [17]. In anticipation of drop-out, N = 105 PHC workers were approached to participate in the study. One left just before beginning the study, leaving N = 104 PHC workers (N = 60 and N = 44 in intervention and control groups, respectively). Sample size needed for service recipients was calculated at N = 303 based on previous research [18], setting power to 80 %, with medium effect size and alpha = 0.01. Of the N = 364 service recipients screened for eligibility (see below) none were excluded, leaving N = 182 in both intervention and control conditions. Subsequently, N = 2 and N = 4 were lost to follow-up from intervention and control groups, respectively, leaving N = 358 for analyses (N = 180 and N = 178 in intervention and control groups, respectively). See Consort Diagram (Fig. 1). Figure 2 depicts study design.\nFig. 1Consort DiagramFig. 2Study Design. PHC = Primary Healthcare\nConsort Diagram\nStudy Design. PHC = Primary Healthcare\nBecause PHC workers in Zarandieh and Saveh had similar scientific and cultural characteristics, PHC workers in Zarandieh were placed in the control group, and PHC workers in Saveh were placed in the intervention group. This was done by randomizing which site would be placed in control (using flip of a coin). Thereafter, personnel numbers were utilized to randomly sample PHC workers in each site. Service recipients were randomly selected (using random numbers table) from the list of clients seen by PHC workers in the last 3 months, and then were contacted and informed of the research purpose. Appointments took place at their homes where they completed the satisfaction questionnaire.\nInclusion criteria for PHC workers were anticipated continued employment for the next 6 months, at least one year of work experience (both determined through interview) and willingness to participate in the study. PHC workers were excluded if they were absent from two consecutive training sessions (see below). For service recipients, inclusion criteria were residence in Zarandieh or Saveh, receipt of PHC services in the last 3 months, being 15 years or older and willingness to participate in the study.", "A multi-part assessment included demographic information, and valid/reliable measures of SE, CS and satisfaction [6, 7, 14, 18]. PHC worker SE was assessed with 4 questions [7], with answers on a five-point Likert scale ranging from 5 = “always” to 1 = “never.“ Higher scores indicated higher SE. A study conducted in Iran found Cronbach’s alpha was 0.82 [6]. A checklist was used to assess PHC worker communication performance with clients in seven areas (2 items for starting the session, 6 items for creating a relationship, 3 items for data collection, 2 items for attending to client’s perception of referral source, 3 items for providing information, 2 items for mutual agreement and 4 items for ending the session). Performance of the skill received a score of 2 (yes) whereas not performing the skill was scored 1 (no). Scores on this construct ranged from 22 to 44. Cronbach’s alpha was 0.78 [7] for this measure. 
The client satisfaction questionnaire [18, 19] consisted of 42 items in 6 domains (8 items for access to services, 6 items for continuity of care, 8 items for humaneness of staff, 5 items for comprehensiveness of care, 5 items for provision of health education, 10 items for effectiveness of service). Responses were evaluated using a 5-point Likert scale from “strongly agree” (= 2) to “strongly disagree” (= -2). Higher and more positive scores indicate more satisfaction. In Iran, face- and content-validity, and reliability were confirmed [14]. Reliability was assessed for SE and CS questionnaires, in 20 health workers; and for service satisfaction questionnaire in 30 clients were similar to the target population in terms of demographic characteristics. Cronbach’s alphas were 0.81, 0.79 and 0.73 for SE, CS, and satisfaction questionnaires, respectively, when considering each questionnaire as a whole. This was calculated using standard statistical package for social sciences (SPSS 19).\nData were collected prior to training. PHC workers reported on SE and demographics, whereas trained observers completed the CS checklist while observing interactions between PHC workers and clients. Clients completed the satisfaction questionnaire via self-report; persons with no or low literacy completed the questionnaire via interview. Staff members assisting with observations/interviews were blind to condition, and clients were blind to condition. All data were collected 3 months following training, except for demographics.", "The training program was designed and held for the intervention group in four 90-minute training sessions. Training methods included: Lecture and question-and-answer sessions to increase awareness and consolidate learning; film screening; role-playing to enhance SE and improve CS; discussion group to improve SE and CS; instruction booklets; and texting key points of effective communication as reminders. The control group received routine training. Typical training is 2 years consisting of course work, and in-service training. Topics cover general, oral and elderly health; problem solving; collaboration; social factors impacting health; human rights; and cultural beliefs.", "Data were analyzed via SPSS 19 using chi-square tests for categorical variables, independent sample t-tests and paired t-tests. An independent sample t-test was used to compare the mean scores of CS questionnaires between intervention and control groups. Also, a paired t-test was used to compare the mean scores of CS questionnaires before and after training sessions. A Mann-Whitney was used to compare the mean scores of SE, and satisfaction questionnaires between intervention and control groups. Also, a Wilcoxon was used to compare the mean scores of SE, and satisfaction questionnaires before and after training sessions. Data normality was confirmed using the Kolmogorov-Smirnov test, histograms, and normality of residuals.", " The Research Ethics Committee of the Saveh University of Medical Sciences approved the study protocol (Number: IR.SAVEHUMS. REC1396.16). Also, all participants in this research completed a written informed consent.", "The randomized design of this research is a strength, but additional limitations should be considered. Results should be replicated in physicians, nurses, midwives and other health professionals. Statistically, no control was used for factors that may influence outcomes, including PHC worker or client demographics. Nesting within site or PHC worker was also not performed. 
Alphas were not corrected for family-wise error, but given the consistency and magnitude of the expected effects, results are likely replicable. In addition, formal mediational analyses were not performed to ascertain if the impact of training on client satisfaction is mediated by PHC worker communication or efficacy or both. Future studies may wish to conduct follow-up beyond 3 months to determine whether results enhance or diminish over time, and whether booster training may be appropriate. Future work with extended follow-up might determine client outcomes such as symptom reduction, or program outcomes such as staff turn-over and client drop-out." ]
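The Measures subsection above reports Cronbach's alpha values of 0.81, 0.79 and 0.73 from pilot samples of 20 health workers and 30 clients; the sketch below shows how such an internal-consistency coefficient is typically computed, on simulated item scores since the pilot data are not published.

```python
# Sketch of an internal-consistency (Cronbach's alpha) check like the one
# reported in the Measures subsection; the item matrix is simulated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents in rows, questionnaire items in columns."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(20, 1))                      # 20 pilot respondents
items = latent + rng.normal(scale=0.8, size=(20, 4))   # 4 correlated items, like the SE scale
print(round(cronbach_alpha(items), 2))
```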
[ null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Design, procedure and the study sample", "Measures", "Intervention and control groups", "Statistical analysis", "Ethics", "Results", "Discussion", "Limitation and future directions", "Conclusions" ]
[ "Service satisfaction is affected by service quality, quality of service delivery, and levels of service recipients’ expectation of service quality [1, 2]. Service satisfaction is a good indicator of service quality [2]. Measurement of service recipient satisfaction is a common method for evaluating the treatment quality in healthcare organizations [3]. Generally, the concept of satisfaction in providing health services refers to the feeling or attitude of service clients. There is a direct relationship between patient satisfaction and remaining in treatment [4].\nAppropriate interpersonal communication between healthcare providers and recipients is an important determinant of clients satisfaction and compliance with healthcare guidelines [5]. Proper and effective communication between health personnel and patients has a positive impact on health and medical care and enhances patient satisfaction. Focusing only on technical aspects of health care may lead professionals to use ineffective communication methods (e.g., lack of eye contact, not listening fully to client or patient concerns), and thus key and major problems of patients are not clearly identified [6, 7]. Communication skills (CS) are crucial to professionals that come in direct contact with clients. Such skills convey respect, attention and empathy; and frequently include asking open questions, listening actively, and using intelligible words for patients in order to increase the effectiveness of medical interview and treatment process as well as patients’ satisfaction [8–10].\nToday, health managers and planners around the world, particularly in developing countries are facing important challenges in responding the health care needs of the general population [11, 12]. In Iran, Primary Healthcare (PHC) workers are responsible for providing appropriate health education and services for the public [13, 14]. PHC workers prevent patients from being referred to clinics and hospitals by providing primary health care [15]. Therefore, the PHC workers’ ability to communicate effectively with individuals is an essential requirement for satisfaction and engagement of service recipients to promote health [14, 15]. Improving self-efficacy (SE) for communicating may assist in improving CS [6]. SE is the main element of the social-cognitive theory that refers to an individual’s belief or judgment about their ability to perform tasks and responsibilities [16]. Therefore, SE is an important factor for successful performance, and the skills that lead to successful performance [6, 17].\nIn Iran, primary healthcare coverage is offered to over 95 % of rural areas, but quality of care is the main concern of health policymakers. Since satisfaction is an important index of quality and performance of health care [13, 15] and given the lack of information on how CS and SE of health workers affect patient satisfaction, the present study aimed to evaluate the impact of an educational intervention, based on SE and CS, for PHC workers. Of particular interest was impact on the satisfaction of public health service recipients.", "Design, procedure and the study sample The present study, conducted in 2019, was a quasi-experimental intervention study conducted on primary healthcare workers (N = 105) in health centers of Saveh and Zarandieh counties, and patients (N = 364) living in rural areas of Saveh and Zarandieh. 
Setting power to 80 %, with a medium effect size and alpha = 0.01, the sample size needed for PHC workers was N = 44 per group (N = 88, in total), based on similar previous studies [17]. In anticipation of drop-out, N = 105 PHC workers were approached to participate in the study. One left just before beginning the study, leaving N = 104 PHC workers (N = 60 and N = 44 in intervention and control groups, respectively). Sample size needed for service recipients was calculated at N = 303 based on previous research [18], setting power to 80 %, with medium effect size and alpha = 0.01. Of the N = 364 service recipients screened for eligibility (see below) none were excluded, leaving N = 182 in both intervention and control conditions. Subsequently, N = 2 and N = 4 were lost to follow-up from intervention and control groups, respectively, leaving N = 358 for analyses (N = 180 and N = 178 in intervention and control groups, respectively). See Consort Diagram (Fig. 1). Figure 2 depicts study design.\nFig. 1Consort DiagramFig. 2Study Design. PHC = Primary Healthcare\nConsort Diagram\nStudy Design. PHC = Primary Healthcare\nBecause PHC workers in Zarandieh and Saveh had similar scientific and cultural characteristics, PHC workers in Zarandieh were placed in the control group, and PHC workers in Saveh were placed in the intervention group. This was done by randomizing which site would be placed in control (using flip of a coin). Thereafter, personnel numbers were utilized to randomly sample PHC workers in each site. Service recipients were randomly selected (using random numbers table) from the list of clients seen by PHC workers in the last 3 months, and then were contacted and informed of the research purpose. Appointments took place at their homes where they completed the satisfaction questionnaire.\nInclusion criteria for PHC workers were anticipated continued employment for the next 6 months, at least one year of work experience (both determined through interview) and willingness to participate in the study. PHC workers were excluded if they were absent from two consecutive training sessions (see below). For service recipients, inclusion criteria were residence in Zarandieh or Saveh, receipt of PHC services in the last 3 months, being 15 years or older and willingness to participate in the study.\nThe present study, conducted in 2019, was a quasi-experimental intervention study conducted on primary healthcare workers (N = 105) in health centers of Saveh and Zarandieh counties, and patients (N = 364) living in rural areas of Saveh and Zarandieh. Setting power to 80 %, with a medium effect size and alpha = 0.01, the sample size needed for PHC workers was N = 44 per group (N = 88, in total), based on similar previous studies [17]. In anticipation of drop-out, N = 105 PHC workers were approached to participate in the study. One left just before beginning the study, leaving N = 104 PHC workers (N = 60 and N = 44 in intervention and control groups, respectively). Sample size needed for service recipients was calculated at N = 303 based on previous research [18], setting power to 80 %, with medium effect size and alpha = 0.01. Of the N = 364 service recipients screened for eligibility (see below) none were excluded, leaving N = 182 in both intervention and control conditions. Subsequently, N = 2 and N = 4 were lost to follow-up from intervention and control groups, respectively, leaving N = 358 for analyses (N = 180 and N = 178 in intervention and control groups, respectively). See Consort Diagram (Fig. 
1). Figure 2 depicts study design.\nFig. 1Consort DiagramFig. 2Study Design. PHC = Primary Healthcare\nConsort Diagram\nStudy Design. PHC = Primary Healthcare\nBecause PHC workers in Zarandieh and Saveh had similar scientific and cultural characteristics, PHC workers in Zarandieh were placed in the control group, and PHC workers in Saveh were placed in the intervention group. This was done by randomizing which site would be placed in control (using flip of a coin). Thereafter, personnel numbers were utilized to randomly sample PHC workers in each site. Service recipients were randomly selected (using random numbers table) from the list of clients seen by PHC workers in the last 3 months, and then were contacted and informed of the research purpose. Appointments took place at their homes where they completed the satisfaction questionnaire.\nInclusion criteria for PHC workers were anticipated continued employment for the next 6 months, at least one year of work experience (both determined through interview) and willingness to participate in the study. PHC workers were excluded if they were absent from two consecutive training sessions (see below). For service recipients, inclusion criteria were residence in Zarandieh or Saveh, receipt of PHC services in the last 3 months, being 15 years or older and willingness to participate in the study.\nMeasures A multi-part assessment included demographic information, and valid/reliable measures of SE, CS and satisfaction [6, 7, 14, 18]. PHC worker SE was assessed with 4 questions [7], with answers on a five-point Likert scale ranging from 5 = “always” to 1 = “never.“ Higher scores indicated higher SE. A study conducted in Iran found Cronbach’s alpha was 0.82 [6]. A checklist was used to assess PHC worker communication performance with clients in seven areas (2 items for starting the session, 6 items for creating a relationship, 3 items for data collection, 2 items for attending to client’s perception of referral source, 3 items for providing information, 2 items for mutual agreement and 4 items for ending the session). Performance of the skill received a score of 2 (yes) whereas not performing the skill was scored 1 (no). Scores on this construct ranged from 22 to 44. Cronbach’s alpha was 0.78 [7] for this measure. The client satisfaction questionnaire [18, 19] consisted of 42 items in 6 domains (8 items for access to services, 6 items for continuity of care, 8 items for humaneness of staff, 5 items for comprehensiveness of care, 5 items for provision of health education, 10 items for effectiveness of service). Responses were evaluated using a 5-point Likert scale from “strongly agree” (= 2) to “strongly disagree” (= -2). Higher and more positive scores indicate more satisfaction. In Iran, face- and content-validity, and reliability were confirmed [14]. Reliability was assessed for SE and CS questionnaires, in 20 health workers; and for service satisfaction questionnaire in 30 clients were similar to the target population in terms of demographic characteristics. Cronbach’s alphas were 0.81, 0.79 and 0.73 for SE, CS, and satisfaction questionnaires, respectively, when considering each questionnaire as a whole. This was calculated using standard statistical package for social sciences (SPSS 19).\nData were collected prior to training. PHC workers reported on SE and demographics, whereas trained observers completed the CS checklist while observing interactions between PHC workers and clients. 
Clients completed the satisfaction questionnaire via self-report; persons with no or low literacy completed the questionnaire via interview. Staff members assisting with observations/interviews were blind to condition, and clients were blind to condition. All data were collected 3 months following training, except for demographics.\nA multi-part assessment included demographic information, and valid/reliable measures of SE, CS and satisfaction [6, 7, 14, 18]. PHC worker SE was assessed with 4 questions [7], with answers on a five-point Likert scale ranging from 5 = “always” to 1 = “never.“ Higher scores indicated higher SE. A study conducted in Iran found Cronbach’s alpha was 0.82 [6]. A checklist was used to assess PHC worker communication performance with clients in seven areas (2 items for starting the session, 6 items for creating a relationship, 3 items for data collection, 2 items for attending to client’s perception of referral source, 3 items for providing information, 2 items for mutual agreement and 4 items for ending the session). Performance of the skill received a score of 2 (yes) whereas not performing the skill was scored 1 (no). Scores on this construct ranged from 22 to 44. Cronbach’s alpha was 0.78 [7] for this measure. The client satisfaction questionnaire [18, 19] consisted of 42 items in 6 domains (8 items for access to services, 6 items for continuity of care, 8 items for humaneness of staff, 5 items for comprehensiveness of care, 5 items for provision of health education, 10 items for effectiveness of service). Responses were evaluated using a 5-point Likert scale from “strongly agree” (= 2) to “strongly disagree” (= -2). Higher and more positive scores indicate more satisfaction. In Iran, face- and content-validity, and reliability were confirmed [14]. Reliability was assessed for SE and CS questionnaires, in 20 health workers; and for service satisfaction questionnaire in 30 clients were similar to the target population in terms of demographic characteristics. Cronbach’s alphas were 0.81, 0.79 and 0.73 for SE, CS, and satisfaction questionnaires, respectively, when considering each questionnaire as a whole. This was calculated using standard statistical package for social sciences (SPSS 19).\nData were collected prior to training. PHC workers reported on SE and demographics, whereas trained observers completed the CS checklist while observing interactions between PHC workers and clients. Clients completed the satisfaction questionnaire via self-report; persons with no or low literacy completed the questionnaire via interview. Staff members assisting with observations/interviews were blind to condition, and clients were blind to condition. All data were collected 3 months following training, except for demographics.\nIntervention and control groups The training program was designed and held for the intervention group in four 90-minute training sessions. Training methods included: Lecture and question-and-answer sessions to increase awareness and consolidate learning; film screening; role-playing to enhance SE and improve CS; discussion group to improve SE and CS; instruction booklets; and texting key points of effective communication as reminders. The control group received routine training. Typical training is 2 years consisting of course work, and in-service training. 
Topics cover general, oral and elderly health; problem solving; collaboration; social factors impacting health; human rights; and cultural beliefs.\nThe training program was designed and held for the intervention group in four 90-minute training sessions. Training methods included: Lecture and question-and-answer sessions to increase awareness and consolidate learning; film screening; role-playing to enhance SE and improve CS; discussion group to improve SE and CS; instruction booklets; and texting key points of effective communication as reminders. The control group received routine training. Typical training is 2 years consisting of course work, and in-service training. Topics cover general, oral and elderly health; problem solving; collaboration; social factors impacting health; human rights; and cultural beliefs.\nStatistical analysis Data were analyzed via SPSS 19 using chi-square tests for categorical variables, independent sample t-tests and paired t-tests. An independent sample t-test was used to compare the mean scores of CS questionnaires between intervention and control groups. Also, a paired t-test was used to compare the mean scores of CS questionnaires before and after training sessions. A Mann-Whitney was used to compare the mean scores of SE, and satisfaction questionnaires between intervention and control groups. Also, a Wilcoxon was used to compare the mean scores of SE, and satisfaction questionnaires before and after training sessions. Data normality was confirmed using the Kolmogorov-Smirnov test, histograms, and normality of residuals.\nData were analyzed via SPSS 19 using chi-square tests for categorical variables, independent sample t-tests and paired t-tests. An independent sample t-test was used to compare the mean scores of CS questionnaires between intervention and control groups. Also, a paired t-test was used to compare the mean scores of CS questionnaires before and after training sessions. A Mann-Whitney was used to compare the mean scores of SE, and satisfaction questionnaires between intervention and control groups. Also, a Wilcoxon was used to compare the mean scores of SE, and satisfaction questionnaires before and after training sessions. Data normality was confirmed using the Kolmogorov-Smirnov test, histograms, and normality of residuals.\nEthics The Research Ethics Committee of the Saveh University of Medical Sciences approved the study protocol (Number: IR.SAVEHUMS. REC1396.16). Also, all participants in this research completed a written informed consent.\n The Research Ethics Committee of the Saveh University of Medical Sciences approved the study protocol (Number: IR.SAVEHUMS. REC1396.16). Also, all participants in this research completed a written informed consent.", "The present study, conducted in 2019, was a quasi-experimental intervention study conducted on primary healthcare workers (N = 105) in health centers of Saveh and Zarandieh counties, and patients (N = 364) living in rural areas of Saveh and Zarandieh. Setting power to 80 %, with a medium effect size and alpha = 0.01, the sample size needed for PHC workers was N = 44 per group (N = 88, in total), based on similar previous studies [17]. In anticipation of drop-out, N = 105 PHC workers were approached to participate in the study. One left just before beginning the study, leaving N = 104 PHC workers (N = 60 and N = 44 in intervention and control groups, respectively). 
Sample size needed for service recipients was calculated at N = 303 based on previous research [18], setting power to 80 %, with medium effect size and alpha = 0.01. Of the N = 364 service recipients screened for eligibility (see below) none were excluded, leaving N = 182 in both intervention and control conditions. Subsequently, N = 2 and N = 4 were lost to follow-up from intervention and control groups, respectively, leaving N = 358 for analyses (N = 180 and N = 178 in intervention and control groups, respectively). See Consort Diagram (Fig. 1). Figure 2 depicts study design.\nFig. 1Consort DiagramFig. 2Study Design. PHC = Primary Healthcare\nConsort Diagram\nStudy Design. PHC = Primary Healthcare\nBecause PHC workers in Zarandieh and Saveh had similar scientific and cultural characteristics, PHC workers in Zarandieh were placed in the control group, and PHC workers in Saveh were placed in the intervention group. This was done by randomizing which site would be placed in control (using flip of a coin). Thereafter, personnel numbers were utilized to randomly sample PHC workers in each site. Service recipients were randomly selected (using random numbers table) from the list of clients seen by PHC workers in the last 3 months, and then were contacted and informed of the research purpose. Appointments took place at their homes where they completed the satisfaction questionnaire.\nInclusion criteria for PHC workers were anticipated continued employment for the next 6 months, at least one year of work experience (both determined through interview) and willingness to participate in the study. PHC workers were excluded if they were absent from two consecutive training sessions (see below). For service recipients, inclusion criteria were residence in Zarandieh or Saveh, receipt of PHC services in the last 3 months, being 15 years or older and willingness to participate in the study.", "A multi-part assessment included demographic information, and valid/reliable measures of SE, CS and satisfaction [6, 7, 14, 18]. PHC worker SE was assessed with 4 questions [7], with answers on a five-point Likert scale ranging from 5 = “always” to 1 = “never.“ Higher scores indicated higher SE. A study conducted in Iran found Cronbach’s alpha was 0.82 [6]. A checklist was used to assess PHC worker communication performance with clients in seven areas (2 items for starting the session, 6 items for creating a relationship, 3 items for data collection, 2 items for attending to client’s perception of referral source, 3 items for providing information, 2 items for mutual agreement and 4 items for ending the session). Performance of the skill received a score of 2 (yes) whereas not performing the skill was scored 1 (no). Scores on this construct ranged from 22 to 44. Cronbach’s alpha was 0.78 [7] for this measure. The client satisfaction questionnaire [18, 19] consisted of 42 items in 6 domains (8 items for access to services, 6 items for continuity of care, 8 items for humaneness of staff, 5 items for comprehensiveness of care, 5 items for provision of health education, 10 items for effectiveness of service). Responses were evaluated using a 5-point Likert scale from “strongly agree” (= 2) to “strongly disagree” (= -2). Higher and more positive scores indicate more satisfaction. In Iran, face- and content-validity, and reliability were confirmed [14]. 
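The Cronbach's alpha coefficients quoted in this section, including the pilot reliabilities reported next, were computed in SPSS. The following is a generic, illustrative implementation of the coefficient on a hypothetical respondents-by-items matrix, not the authors' code or data.

```python
# Illustrative sketch (not the authors' SPSS procedure): Cronbach's alpha for a
# respondents-by-items matrix of Likert scores.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 20 pilot respondents answering the 4 self-efficacy items
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(20, 4))          # 5-point Likert responses
print(round(cronbach_alpha(demo), 2))
```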
Reliability was assessed for SE and CS questionnaires, in 20 health workers; and for service satisfaction questionnaire in 30 clients were similar to the target population in terms of demographic characteristics. Cronbach’s alphas were 0.81, 0.79 and 0.73 for SE, CS, and satisfaction questionnaires, respectively, when considering each questionnaire as a whole. This was calculated using standard statistical package for social sciences (SPSS 19).\nData were collected prior to training. PHC workers reported on SE and demographics, whereas trained observers completed the CS checklist while observing interactions between PHC workers and clients. Clients completed the satisfaction questionnaire via self-report; persons with no or low literacy completed the questionnaire via interview. Staff members assisting with observations/interviews were blind to condition, and clients were blind to condition. All data were collected 3 months following training, except for demographics.", "The training program was designed and held for the intervention group in four 90-minute training sessions. Training methods included: Lecture and question-and-answer sessions to increase awareness and consolidate learning; film screening; role-playing to enhance SE and improve CS; discussion group to improve SE and CS; instruction booklets; and texting key points of effective communication as reminders. The control group received routine training. Typical training is 2 years consisting of course work, and in-service training. Topics cover general, oral and elderly health; problem solving; collaboration; social factors impacting health; human rights; and cultural beliefs.", "Data were analyzed via SPSS 19 using chi-square tests for categorical variables, independent sample t-tests and paired t-tests. An independent sample t-test was used to compare the mean scores of CS questionnaires between intervention and control groups. Also, a paired t-test was used to compare the mean scores of CS questionnaires before and after training sessions. A Mann-Whitney was used to compare the mean scores of SE, and satisfaction questionnaires between intervention and control groups. Also, a Wilcoxon was used to compare the mean scores of SE, and satisfaction questionnaires before and after training sessions. Data normality was confirmed using the Kolmogorov-Smirnov test, histograms, and normality of residuals.", " The Research Ethics Committee of the Saveh University of Medical Sciences approved the study protocol (Number: IR.SAVEHUMS. REC1396.16). Also, all participants in this research completed a written informed consent.", "From 364 service recipients, 358 (180 in the intervention group and 178 in the control group) who completed the post-test underwent the final analysis. The mean age of service recipients was 40.5 ± 14.9 years in the intervention group, and 37.7 ± 12.3 years in the control group (p > 0.05). Intervention and control groups were similar on demographic variables (e.g., gender, insurance, education and occupation) and no significant differences were found between groups. Among PHC workers, intervention and control groups were similar on demographic variables (e.g., gender, work experience and literacy level) and there were no significant differences between groups. 
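The group comparisons reported here, and the tests named in the Statistical analysis subsection (chi-square for categorical variables, independent and paired t-tests for CS, Mann-Whitney and Wilcoxon for SE and satisfaction, Kolmogorov-Smirnov for normality), were run in SPSS 19. As an illustration only, the same tests map onto SciPy calls as sketched below; the arrays are hypothetical placeholders, not study data.

```python
# Illustrative mapping of the study's named tests onto SciPy; placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cs_int, cs_ctrl = rng.normal(40, 3, 60), rng.normal(36, 3, 44)   # CS totals by group
cs_pre, cs_post = rng.normal(30, 3, 60), rng.normal(38, 3, 60)   # CS pre/post, intervention
se_int, se_ctrl = rng.normal(34, 4, 60), rng.normal(31, 4, 44)   # self-efficacy by group
se_pre, se_post = rng.normal(31, 3, 60), rng.normal(34, 4, 60)   # self-efficacy pre/post
sex_counts = np.array([[79, 82], [101, 96]])                     # 2x2 counts as in Table 1

chi2, p_chi, dof, expected = stats.chi2_contingency(sex_counts)  # chi-square, categorical
print(p_chi)
print(stats.ttest_ind(cs_int, cs_ctrl).pvalue)     # independent t-test, CS between groups
print(stats.ttest_rel(cs_pre, cs_post).pvalue)     # paired t-test, CS before vs after
print(stats.mannwhitneyu(se_int, se_ctrl).pvalue)  # Mann-Whitney U, SE between groups
print(stats.wilcoxon(se_pre, se_post).pvalue)      # Wilcoxon signed-rank, SE before vs after
print(stats.kstest(stats.zscore(cs_int), "norm").pvalue)  # Kolmogorov-Smirnov normality check
```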
See Tables 1 and 2.
Table 1. Comparison of categorical variables in clients seen by the two groups of primary healthcare workers (Behvarz) assigned to Intervention and Control groups.
Variable | Intervention (n = 180), n (%) | Control (n = 178), n (%) | P-value
Sex: Male | 79 (43.9) | 82 (46.0) | 0.67
Sex: Female | 101 (56.1) | 96 (54.0) |
Education: Illiterate | 15 (8.4) | 11 (6.2) | 0.54
Education: Elementary | 99 (55.0) | 92 (51.7) |
Education: High school and diploma | 46 (25.6) | 57 (32.0) |
Education: Academic | 20 (11.0) | 18 (10.1) |
Job: Student | 8 (4.4) | 10 (5.6) | 0.39
Job: Farmer/Shepherd | 43 (23.9) | 54 (30.3) |
Job: Staff | 7 (3.9) | 5 (2.9) |
Job: Housewife | 90 (50.0) | 88 (49.4) |
Job: Other | 32 (17.8) | 21 (11.8) |
Insurance: Yes | 169 (93.9) | 170 (95.5) | 0.49
Insurance: No | 11 (6.1) | 8 (4.5) |
Note: Chi-square test used.
Table 2. Comparison of categorical variables in primary healthcare workers (Behvarz) assigned to Intervention and Control.
Variable | Intervention (n = 60), n (%) | Control (n = 44), n (%) | P-value
Sex: Male | 25 (41.6) | 16 (36.4) | 0.58
Sex: Female | 35 (58.4) | 28 (63.6) |
Education: Elementary | 8 (13.3) | 6 (13.6) | 0.12
Education: Middle school | 11 (18.3) | 5 (11.4) |
Education: High school and diploma | 33 (55.0) | 32 (72.7) |
Education: Academic | 8 (13.3) | 1 (2.3) |
Work experience: <10 years | 15 (25.0) | 9 (20.4) | 0.66
Work experience: 10-19 years | 19 (31.7) | 12 (27.3) |
Work experience: ≥20 years | 26 (43.3) | 23 (52.3) |
Note: Chi-square test used.
According to Table 3, for PHC workers there was no significant difference between the intervention and control groups before training on SE and all CS constructs except for attending to client perception of referral source (p < 0.05). Following training, paired t-tests indicated that the mean scores of SE and all communication skill constructs significantly increased in the intervention group (p < 0.001), while mean scores in the control group increased on starting a session, decreased on data collection and showed no other significant differences.
Table 3. Communication skills and self-efficacy in primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-month follow-up (mean ± SD; Intervention N = 60, Control N = 44).
Variable | Baseline, Intervention / Control (P*) | 3-month follow-up, Intervention / Control (P*) | Within-group P**, Intervention / Control
Starting the session | 2.52 ± 0.62 / 2.38 ± 0.51 (0.06) | 3.79 ± 0.46 / 2.87 ± 0.72 (0.001) | 0.001 / 0.001
Creating a relationship | 8.95 ± 1.57 / 9.0 ± 1.54 (0.68) | 11.91 ± 1.78 / 9.75 ± 2.03 (0.001) | 0.001 / 0.06
Data collection | 5.0 ± 0.78 / 4.84 ± 0.63 (0.18) | 5.51 ± 0.56 / 4.55 ± 0.49 (0.001) | 0.001 / 0.04
Attending to client perception of referral source | 3.25 ± 0.89 / 2.84 ± 0.75 (0.01) | 3.78 ± 0.48 / 2.77 ± 0.62 (0.001) | 0.001 / 0.39
Providing information | 4.67 ± 0.75 / 4.59 ± 0.87 (0.56) | 5.49 ± 0.64 / 4.69 ± 0.61 (0.001) | 0.001 / 0.55
Mutual agreement | 2.87 ± 0.56 / 2.84 ± 0.73 (0.95) | 3.21 ± 0.58 / 2.66 ± 0.64 (0.001) | 0.001 / 0.27
Ending the session | 5.64 ± 1.0 / 5.37 ± 0.85 (0.06) | 6.81 ± 0.92 / 5.66 ± 1.10 (0.001) | 0.001 / 0.17
Self-efficacy | 31.52 ± 2.91 / 31.32 ± 2.64 (0.67) | 34.25 ± 4.0 / 31.26 ± 4.52 (0.001) | 0.001 / 0.79
* Between-group comparison: independent t-test (Mann-Whitney for self-efficacy). ** Within-group, baseline vs follow-up: paired t-test (Wilcoxon for self-efficacy).
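The paper reports group means, SDs and p-values but no standardized effect sizes. As a worked illustration only — not an analysis reported by the authors — a pooled-SD Cohen's d for the 3-month self-efficacy contrast can be computed directly from the values in Table 3 above.

```python
# Worked illustration (not from the paper): pooled-SD Cohen's d for the 3-month
# self-efficacy difference, using the group means, SDs and sizes in Table 3.
import math

m_int, sd_int, n_int = 34.25, 4.0, 60     # intervention group at follow-up
m_ctl, sd_ctl, n_ctl = 31.26, 4.52, 44    # control group at follow-up

pooled_sd = math.sqrt(((n_int - 1) * sd_int**2 + (n_ctl - 1) * sd_ctl**2)
                      / (n_int + n_ctl - 2))
cohens_d = (m_int - m_ctl) / pooled_sd
print(f"Cohen's d ≈ {cohens_d:.2f}")      # roughly a medium-to-large effect
```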
For service recipients (Table 4), there were no significant differences between intervention and control groups before training on the components of satisfaction (access to services, continuity of care, humaneness of staff, comprehensiveness of care, provision of health education, effectiveness of service), but mean scores of the satisfaction variables generally increased significantly in the intervention group after training (p < 0.001). No significant differences were observed in the control group from pre- to post-training.
Table 4. Client satisfaction in the two groups of primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-month follow-up (mean ± SD; Intervention N = 180, Control N = 178).
Variable | Baseline, Intervention / Control (P*) | 3-month follow-up, Intervention / Control (P*) | Within-group P**, Intervention / Control
Access to services | 1.75 ± 0.60 / 1.73 ± 0.47 (0.59) | 2.89 ± 1.0 / 1.80 ± 0.56 (0.001) | 0.001 / 0.32
Continuity of care | 1.19 ± 0.97 / 1.33 ± 0.87 (0.28) | 2.72 ± 1.14 / 1.40 ± 1.06 (0.001) | 0.001 / 0.34
Humaneness of staff | 1.22 ± 0.84 / 1.18 ± 0.79 (0.63) | 2.88 ± 1.17 / 1.23 ± 0.82 (0.001) | 0.001 / 0.28
Comprehensiveness of care | -1.09 ± 0.67 / -1.04 ± 0.60 (0.14) | -1.70 ± 1.22 / -0.61 ± 1.14 (0.001) | 0.001 / 0.11
Provision of health education | 1.05 ± 0.89 / 1.01 ± 0.95 (0.69) | 2.25 ± 1.12 / 1.07 ± 0.94 (0.001) | 0.001 / 0.10
Effectiveness of services | 1.0 ± 0.89 / 1.10 ± 0.90 (0.85) | 2.75 ± 1.0 / 1.18 ± 1.0 (0.001) | 0.001 / 0.47
* Mann-Whitney (between groups). ** Wilcoxon (within group, baseline vs follow-up).
Clients were generally dissatisfied with comprehensiveness of care (Table 4). There were no differences between intervention and control groups prior to training. In the intervention group, clients became more dissatisfied with comprehensiveness of care following training; however, no difference was found from pre- to post-training for clients in the control group. Following training, clients in the intervention group were significantly more dissatisfied with comprehensiveness of care than those in the control group. Medians and interquartile ranges of SE and all satisfaction constructs are reported in Table 5.
Table 5. Client satisfaction and self-efficacy medians in the two groups of primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-month follow-up (median (IQR); Intervention N = 180, Control N = 178).
Variable | Baseline, Intervention / Control | 3-month follow-up, Intervention / Control
Access to services | 2 (2) / 2 (1) | 3 (3) / 2 (1)
Continuity of care | 1 (2) / 1 (2) | 2.5 (1.75) / 1 (2)
Humaneness of staff | 1 (1) / 1 (1) | 3 (2) / 1.2 (1.5)
Comprehensiveness of care | -1 (1) / -1 (2) | -2 (2) / -1 (2)
Provision of health education | 1 (0) / 1 (0) | 2 (3) / 1 (0.5)
Effectiveness of services | 1 (2) / 1 (2) | 3 (4) / 1 (1.75)
Self-efficacy | 30 (10) / 30 (10) | 32 (10) / 30 (10)
", "The present study aimed to evaluate the impact of PHC worker training on: (1) PHC worker CS and SE, and (2) client satisfaction with services. 
Satisfaction among clients of trained PHC workers generally increased from pre- to post-training; and following training, satisfaction generally improved among clients of trained PHC workers as compared to clients of non-trained PHC workers. Similarly, SE and CS increased among trained PHC workers from pre- to post-training; and following training, SE and CS improved among trained PHC workers as compared to non-trained PHC workers. Results indicate training has the potential to improve PHC efficacy and communication skills, and to generally improve client satisfaction with services.\nWhereas training seemed to enhance client satisfaction with services across most subscales (e.g., services access, care continuity, staff humaneness, provision of health education, services effectiveness), following training, clients rated comprehensiveness of services with more dissatisfaction. More research should be done to understand elements of training (targeting clinician communication/ confidence) that may adversely impact ratings on comprehensiveness of care in particular. It may be that attending to new communication processes during client interactions distracts clinicians from attending to the range of client health needs. Such an effect might abate over time as clinicians grow accustomed to deployment of communication skills.\nConsistent with our overall findings, previous studies emphasized the importance of CS in promoting patient health and satisfaction [20, 21]. For instance, in a study by Boissy et al., [22]communication skill training increased patient satisfaction and improved empathy and SE among physicians. In another study by Bank et al., [23] teaching CS to physicians led to a marked improvement in patient satisfaction. Moore et al., [24] also found that CS training was effective in promoting physical and mental health, satisfaction, and quality of life in patients. Barth and Lannen [25] conducted a meta-analysis and concluded that CS training for healthcare workers is essential for changing their communication behavior and attitudes.\nUnlike our findings, a systematic review by Barth and Lannen [25] showed that communication skills of professionals can be improved; yet, patients do not necessarily give higher satisfaction score. In another study by Shilling et al., [26] teaching CS to physicians did not have a significant effect on patient satisfaction. Differences between findings of this study and other studies [26, 27] may be due to differences in client samples. For example, in Schilling et al. [27] service recipients were cancer patients whereas in the present study clients were primary health care recipients.\nIn our study, SE was specifically targeted for enhancement during CS training (e.g., through discussion, use of role-plays). Previous studies emphasized SE as an important factor for successful performance [27–29], and the current study is consistent with those findings. After the CS training, health worker SE increased significantly in the present study. In a study by Nørgaard et al. [30], and consistent with our findings, health worker SE for communicating with patients increased significantly after CS training. In a review study by Berkhof et al., [31] teaching CS to clinical staff improved patient satisfaction, self-esteem, and SE in doctors. There may be a positive and significant association between CS and SE in clinical staff; therefore, designing successful training to enhance patient-professional communication may be facilitated by attention to staff SE [7]. 
A study by Cegala and Lenzmeier indicated that applying effective strategies to enhance self-efficacy in medical staff can lead to satisfaction in both medical staff and their clients [32]. Considering the results of previous studies [28, 29] and our findings, it can be inferred that designing self-efficacy-based interventions to establish effective communication between medical staff and their clients might be of critical importance. This issue should be considered by authorities when developing in-service training for health professionals.\nLimitation and future directions: The randomized design of this research is a strength, but additional limitations should be considered. Results should be replicated in physicians, nurses, midwives and other health professionals. Statistically, no control was used for factors that may influence outcomes, including PHC worker or client demographics. Nesting within site or PHC worker was also not performed. Alphas were not corrected for family-wise error, but given the consistency and magnitude of the expected effects, results are likely replicable. In addition, formal mediational analyses were not performed to ascertain whether the impact of training on client satisfaction is mediated by PHC worker communication, efficacy, or both. Future studies may wish to conduct follow-up beyond 3 months to determine whether results strengthen or diminish over time, and whether booster training may be appropriate. Future work with extended follow-up might also examine client outcomes such as symptom reduction, or program outcomes such as staff turnover and client drop-out.", "The randomized design of this research is a strength, but additional limitations should be considered. Results should be replicated in physicians, nurses, midwives and other health professionals. Statistically, no control was used for factors that may influence outcomes, including PHC worker or client demographics. Nesting within site or PHC worker was also not performed. Alphas were not corrected for family-wise error, but given the consistency and magnitude of the expected effects, results are likely replicable. In addition, formal mediational analyses were not performed to ascertain whether the impact of training on client satisfaction is mediated by PHC worker communication, efficacy, or both. Future studies may wish to conduct follow-up beyond 3 months to determine whether results strengthen or diminish over time, and whether booster training may be appropriate. 
Future work with extended follow-up might determine client outcomes such as symptom reduction, or program outcomes such as staff turnover and client drop-out.", "Communication skills training improved the self-efficacy of PHC workers to communicate effectively with clients, improved PHC workers' communication skills with clients, and improved clients' satisfaction with services. Findings are encouraging, and such training may be deployed in other practice settings, since it was delivered in only four 90-minute group sessions." ]
[ "introduction", null, null, null, null, null, null, "results", "discussion", null, "conclusion" ]
[ "Communication skills", "Self-efficacy", "Primary healthcare", "Client" ]
Introduction: Service satisfaction is affected by service quality, quality of service delivery, and levels of service recipients’ expectation of service quality [1, 2]. Service satisfaction is a good indicator of service quality [2]. Measurement of service recipient satisfaction is a common method for evaluating the treatment quality in healthcare organizations [3]. Generally, the concept of satisfaction in providing health services refers to the feeling or attitude of service clients. There is a direct relationship between patient satisfaction and remaining in treatment [4]. Appropriate interpersonal communication between healthcare providers and recipients is an important determinant of clients satisfaction and compliance with healthcare guidelines [5]. Proper and effective communication between health personnel and patients has a positive impact on health and medical care and enhances patient satisfaction. Focusing only on technical aspects of health care may lead professionals to use ineffective communication methods (e.g., lack of eye contact, not listening fully to client or patient concerns), and thus key and major problems of patients are not clearly identified [6, 7]. Communication skills (CS) are crucial to professionals that come in direct contact with clients. Such skills convey respect, attention and empathy; and frequently include asking open questions, listening actively, and using intelligible words for patients in order to increase the effectiveness of medical interview and treatment process as well as patients’ satisfaction [8–10]. Today, health managers and planners around the world, particularly in developing countries are facing important challenges in responding the health care needs of the general population [11, 12]. In Iran, Primary Healthcare (PHC) workers are responsible for providing appropriate health education and services for the public [13, 14]. PHC workers prevent patients from being referred to clinics and hospitals by providing primary health care [15]. Therefore, the PHC workers’ ability to communicate effectively with individuals is an essential requirement for satisfaction and engagement of service recipients to promote health [14, 15]. Improving self-efficacy (SE) for communicating may assist in improving CS [6]. SE is the main element of the social-cognitive theory that refers to an individual’s belief or judgment about their ability to perform tasks and responsibilities [16]. Therefore, SE is an important factor for successful performance, and the skills that lead to successful performance [6, 17]. In Iran, primary healthcare coverage is offered to over 95 % of rural areas, but quality of care is the main concern of health policymakers. Since satisfaction is an important index of quality and performance of health care [13, 15] and given the lack of information on how CS and SE of health workers affect patient satisfaction, the present study aimed to evaluate the impact of an educational intervention, based on SE and CS, for PHC workers. Of particular interest was impact on the satisfaction of public health service recipients. Methods: Design, procedure and the study sample The present study, conducted in 2019, was a quasi-experimental intervention study conducted on primary healthcare workers (N = 105) in health centers of Saveh and Zarandieh counties, and patients (N = 364) living in rural areas of Saveh and Zarandieh. 
Setting power to 80 %, with a medium effect size and alpha = 0.01, the sample size needed for PHC workers was N = 44 per group (N = 88, in total), based on similar previous studies [17]. In anticipation of drop-out, N = 105 PHC workers were approached to participate in the study. One left just before beginning the study, leaving N = 104 PHC workers (N = 60 and N = 44 in intervention and control groups, respectively). Sample size needed for service recipients was calculated at N = 303 based on previous research [18], setting power to 80 %, with medium effect size and alpha = 0.01. Of the N = 364 service recipients screened for eligibility (see below) none were excluded, leaving N = 182 in both intervention and control conditions. Subsequently, N = 2 and N = 4 were lost to follow-up from intervention and control groups, respectively, leaving N = 358 for analyses (N = 180 and N = 178 in intervention and control groups, respectively). See Consort Diagram (Fig. 1). Figure 2 depicts study design. Fig. 1Consort DiagramFig. 2Study Design. PHC = Primary Healthcare Consort Diagram Study Design. PHC = Primary Healthcare Because PHC workers in Zarandieh and Saveh had similar scientific and cultural characteristics, PHC workers in Zarandieh were placed in the control group, and PHC workers in Saveh were placed in the intervention group. This was done by randomizing which site would be placed in control (using flip of a coin). Thereafter, personnel numbers were utilized to randomly sample PHC workers in each site. Service recipients were randomly selected (using random numbers table) from the list of clients seen by PHC workers in the last 3 months, and then were contacted and informed of the research purpose. Appointments took place at their homes where they completed the satisfaction questionnaire. Inclusion criteria for PHC workers were anticipated continued employment for the next 6 months, at least one year of work experience (both determined through interview) and willingness to participate in the study. PHC workers were excluded if they were absent from two consecutive training sessions (see below). For service recipients, inclusion criteria were residence in Zarandieh or Saveh, receipt of PHC services in the last 3 months, being 15 years or older and willingness to participate in the study. The present study, conducted in 2019, was a quasi-experimental intervention study conducted on primary healthcare workers (N = 105) in health centers of Saveh and Zarandieh counties, and patients (N = 364) living in rural areas of Saveh and Zarandieh. Setting power to 80 %, with a medium effect size and alpha = 0.01, the sample size needed for PHC workers was N = 44 per group (N = 88, in total), based on similar previous studies [17]. In anticipation of drop-out, N = 105 PHC workers were approached to participate in the study. One left just before beginning the study, leaving N = 104 PHC workers (N = 60 and N = 44 in intervention and control groups, respectively). Sample size needed for service recipients was calculated at N = 303 based on previous research [18], setting power to 80 %, with medium effect size and alpha = 0.01. Of the N = 364 service recipients screened for eligibility (see below) none were excluded, leaving N = 182 in both intervention and control conditions. Subsequently, N = 2 and N = 4 were lost to follow-up from intervention and control groups, respectively, leaving N = 358 for analyses (N = 180 and N = 178 in intervention and control groups, respectively). See Consort Diagram (Fig. 1). 
Figure 2 depicts study design. Fig. 1Consort DiagramFig. 2Study Design. PHC = Primary Healthcare Consort Diagram Study Design. PHC = Primary Healthcare Because PHC workers in Zarandieh and Saveh had similar scientific and cultural characteristics, PHC workers in Zarandieh were placed in the control group, and PHC workers in Saveh were placed in the intervention group. This was done by randomizing which site would be placed in control (using flip of a coin). Thereafter, personnel numbers were utilized to randomly sample PHC workers in each site. Service recipients were randomly selected (using random numbers table) from the list of clients seen by PHC workers in the last 3 months, and then were contacted and informed of the research purpose. Appointments took place at their homes where they completed the satisfaction questionnaire. Inclusion criteria for PHC workers were anticipated continued employment for the next 6 months, at least one year of work experience (both determined through interview) and willingness to participate in the study. PHC workers were excluded if they were absent from two consecutive training sessions (see below). For service recipients, inclusion criteria were residence in Zarandieh or Saveh, receipt of PHC services in the last 3 months, being 15 years or older and willingness to participate in the study. Measures A multi-part assessment included demographic information, and valid/reliable measures of SE, CS and satisfaction [6, 7, 14, 18]. PHC worker SE was assessed with 4 questions [7], with answers on a five-point Likert scale ranging from 5 = “always” to 1 = “never.“ Higher scores indicated higher SE. A study conducted in Iran found Cronbach’s alpha was 0.82 [6]. A checklist was used to assess PHC worker communication performance with clients in seven areas (2 items for starting the session, 6 items for creating a relationship, 3 items for data collection, 2 items for attending to client’s perception of referral source, 3 items for providing information, 2 items for mutual agreement and 4 items for ending the session). Performance of the skill received a score of 2 (yes) whereas not performing the skill was scored 1 (no). Scores on this construct ranged from 22 to 44. Cronbach’s alpha was 0.78 [7] for this measure. The client satisfaction questionnaire [18, 19] consisted of 42 items in 6 domains (8 items for access to services, 6 items for continuity of care, 8 items for humaneness of staff, 5 items for comprehensiveness of care, 5 items for provision of health education, 10 items for effectiveness of service). Responses were evaluated using a 5-point Likert scale from “strongly agree” (= 2) to “strongly disagree” (= -2). Higher and more positive scores indicate more satisfaction. In Iran, face- and content-validity, and reliability were confirmed [14]. Reliability was assessed for SE and CS questionnaires, in 20 health workers; and for service satisfaction questionnaire in 30 clients were similar to the target population in terms of demographic characteristics. Cronbach’s alphas were 0.81, 0.79 and 0.73 for SE, CS, and satisfaction questionnaires, respectively, when considering each questionnaire as a whole. This was calculated using standard statistical package for social sciences (SPSS 19). Data were collected prior to training. PHC workers reported on SE and demographics, whereas trained observers completed the CS checklist while observing interactions between PHC workers and clients. 
Clients completed the satisfaction questionnaire via self-report; persons with no or low literacy completed the questionnaire via interview. Staff members assisting with observations/interviews were blind to condition, and clients were blind to condition. All data were collected 3 months following training, except for demographics. Intervention and control groups: The training program was designed and held for the intervention group in four 90-minute training sessions. Training methods included lecture and question-and-answer sessions to increase awareness and consolidate learning; film screening; role-playing to enhance SE and improve CS; discussion groups to improve SE and CS; instruction booklets; and text-message reminders of key points of effective communication. The control group received routine training. Typical training is 2 years, consisting of course work and in-service training. Topics cover general, oral and elderly health; problem solving; collaboration; social factors impacting health; human rights; and cultural beliefs. 
Statistical analysis: Data were analyzed in SPSS 19 using chi-square tests for categorical variables, independent-sample t-tests and paired t-tests. An independent-sample t-test was used to compare mean CS scores between the intervention and control groups, and a paired t-test was used to compare mean CS scores before and after the training sessions. A Mann-Whitney U test was used to compare SE and satisfaction scores between the intervention and control groups, and a Wilcoxon signed-rank test was used to compare SE and satisfaction scores before and after training. Data normality was confirmed using the Kolmogorov-Smirnov test, histograms, and normality of residuals. Ethics: The Research Ethics Committee of the Saveh University of Medical Sciences approved the study protocol (Number: IR.SAVEHUMS.REC1396.16), and all participants provided written informed consent. Design, procedure and the study sample: The present study, conducted in 2019, was a quasi-experimental intervention study conducted on primary healthcare workers (N = 105) in health centers of Saveh and Zarandieh counties, and patients (N = 364) living in rural areas of Saveh and Zarandieh. Setting power to 80 %, with a medium effect size and alpha = 0.01, the sample size needed for PHC workers was N = 44 per group (N = 88 in total), based on similar previous studies [17]. In anticipation of drop-out, N = 105 PHC workers were approached to participate in the study. One left just before beginning the study, leaving N = 104 PHC workers (N = 60 and N = 44 in intervention and control groups, respectively). Sample size needed for service recipients was calculated at N = 303 based on previous research [18], setting power to 80 %, with medium effect size and alpha = 0.01. 
Of the N = 364 service recipients screened for eligibility (see below) none were excluded, leaving N = 182 in both intervention and control conditions. Subsequently, N = 2 and N = 4 were lost to follow-up from intervention and control groups, respectively, leaving N = 358 for analyses (N = 180 and N = 178 in intervention and control groups, respectively). See Consort Diagram (Fig. 1). Figure 2 depicts study design. Fig. 1Consort DiagramFig. 2Study Design. PHC = Primary Healthcare Consort Diagram Study Design. PHC = Primary Healthcare Because PHC workers in Zarandieh and Saveh had similar scientific and cultural characteristics, PHC workers in Zarandieh were placed in the control group, and PHC workers in Saveh were placed in the intervention group. This was done by randomizing which site would be placed in control (using flip of a coin). Thereafter, personnel numbers were utilized to randomly sample PHC workers in each site. Service recipients were randomly selected (using random numbers table) from the list of clients seen by PHC workers in the last 3 months, and then were contacted and informed of the research purpose. Appointments took place at their homes where they completed the satisfaction questionnaire. Inclusion criteria for PHC workers were anticipated continued employment for the next 6 months, at least one year of work experience (both determined through interview) and willingness to participate in the study. PHC workers were excluded if they were absent from two consecutive training sessions (see below). For service recipients, inclusion criteria were residence in Zarandieh or Saveh, receipt of PHC services in the last 3 months, being 15 years or older and willingness to participate in the study. Measures: A multi-part assessment included demographic information, and valid/reliable measures of SE, CS and satisfaction [6, 7, 14, 18]. PHC worker SE was assessed with 4 questions [7], with answers on a five-point Likert scale ranging from 5 = “always” to 1 = “never.“ Higher scores indicated higher SE. A study conducted in Iran found Cronbach’s alpha was 0.82 [6]. A checklist was used to assess PHC worker communication performance with clients in seven areas (2 items for starting the session, 6 items for creating a relationship, 3 items for data collection, 2 items for attending to client’s perception of referral source, 3 items for providing information, 2 items for mutual agreement and 4 items for ending the session). Performance of the skill received a score of 2 (yes) whereas not performing the skill was scored 1 (no). Scores on this construct ranged from 22 to 44. Cronbach’s alpha was 0.78 [7] for this measure. The client satisfaction questionnaire [18, 19] consisted of 42 items in 6 domains (8 items for access to services, 6 items for continuity of care, 8 items for humaneness of staff, 5 items for comprehensiveness of care, 5 items for provision of health education, 10 items for effectiveness of service). Responses were evaluated using a 5-point Likert scale from “strongly agree” (= 2) to “strongly disagree” (= -2). Higher and more positive scores indicate more satisfaction. In Iran, face- and content-validity, and reliability were confirmed [14]. Reliability was assessed for SE and CS questionnaires, in 20 health workers; and for service satisfaction questionnaire in 30 clients were similar to the target population in terms of demographic characteristics. 
Cronbach’s alphas were 0.81, 0.79 and 0.73 for SE, CS, and satisfaction questionnaires, respectively, when considering each questionnaire as a whole. This was calculated using standard statistical package for social sciences (SPSS 19). Data were collected prior to training. PHC workers reported on SE and demographics, whereas trained observers completed the CS checklist while observing interactions between PHC workers and clients. Clients completed the satisfaction questionnaire via self-report; persons with no or low literacy completed the questionnaire via interview. Staff members assisting with observations/interviews were blind to condition, and clients were blind to condition. All data were collected 3 months following training, except for demographics. Intervention and control groups: The training program was designed and held for the intervention group in four 90-minute training sessions. Training methods included: Lecture and question-and-answer sessions to increase awareness and consolidate learning; film screening; role-playing to enhance SE and improve CS; discussion group to improve SE and CS; instruction booklets; and texting key points of effective communication as reminders. The control group received routine training. Typical training is 2 years consisting of course work, and in-service training. Topics cover general, oral and elderly health; problem solving; collaboration; social factors impacting health; human rights; and cultural beliefs. Statistical analysis: Data were analyzed via SPSS 19 using chi-square tests for categorical variables, independent sample t-tests and paired t-tests. An independent sample t-test was used to compare the mean scores of CS questionnaires between intervention and control groups. Also, a paired t-test was used to compare the mean scores of CS questionnaires before and after training sessions. A Mann-Whitney was used to compare the mean scores of SE, and satisfaction questionnaires between intervention and control groups. Also, a Wilcoxon was used to compare the mean scores of SE, and satisfaction questionnaires before and after training sessions. Data normality was confirmed using the Kolmogorov-Smirnov test, histograms, and normality of residuals. Ethics: The Research Ethics Committee of the Saveh University of Medical Sciences approved the study protocol (Number: IR.SAVEHUMS. REC1396.16). Also, all participants in this research completed a written informed consent. Results: From 364 service recipients, 358 (180 in the intervention group and 178 in the control group) who completed the post-test underwent the final analysis. The mean age of service recipients was 40.5 ± 14.9 years in the intervention group, and 37.7 ± 12.3 years in the control group (p > 0.05). Intervention and control groups were similar on demographic variables (e.g., gender, insurance, education and occupation) and no significant differences were found between groups. Among PHC workers, intervention and control groups were similar on demographic variables (e.g., gender, work experience and literacy level) and there were no significant differences between groups. See Tables 1 and 2. 
Table 1Comparison of categorical variables in clients seen by two groups of primary healthcare workers (Behvarz) assigned tobk Intervention and Control groupsVariablesIntervention (n=180)Control (n=178)P-valueNumberPercentage (%)NumberPercentage (%)Sex Male7943.982460.67 Female10156.19654Education Illiterate158.4116.20.54 Elementary99559251.7 High school and diploma4625.65732 Academic20111810.1Job Student84.4105.60.39 Farmer / Shepherd4323.95430.3 Staff73.952.9 Housewife90508849.4 other3217.82111.8Insurance Yes16993.917095.50.49 No116.184.5Note: Chi-square usedTable 2Comparison of categorical variables in primary healthcare workers (Behvarz) assigned to Intervention and ControlVariablesIntervention (n=60)Control (n=44)P-valueNumberPercentage (%)NumberPercentage (%)Sex Male2541.61636.40.58 Female3558.42863.6Education Elementary813.3613.60.12 Middle school1118.3511.4 High school and diploma33553272.7 Academic813.312.3work experience <101525920.40.66 10-191931.71227.3 ≥202643.32352.3Note: Chi-square used Comparison of categorical variables in clients seen by two groups of primary healthcare workers (Behvarz) assigned tobk Intervention and Control groups Note: Chi-square used Comparison of categorical variables in primary healthcare workers (Behvarz) assigned to Intervention and Control Note: Chi-square used According to Table 3, for PHC workers, there was no significant difference between the intervention and control groups before training on SE and all CS constructs except for attending to client perception of referral source (p < 0.05). Following training, paired t-tests indicated that the mean scores of SE and all communication skill constructs significantly increased in the intervention group (p < 0.001), while mean scores in the control group increased on starting a session, decreased on data collection and evidenced no other significant differences. 
Table 3Comparison of communication skills and self-efficacy in primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-months follow-upVariableGroup TimeIntervention group Mean ± SD (N = 60)Control group Mean ± SD (N = 44)P-value*Starting the sessionBaseline2.52 ± 0.622.38 ± 0.510.063-months follow-up3.79 ± 0.462.87 ± 0.720.001P-value**0.0010.001creating a relationshipBaseline8.95 ± 1.579.0 ± 1.540.683-months follow-up11.91 ± 1.789.75 ± 2.030.001P-value**0.0010.06data collectionBaseline5.0 ± 0.784.84 ± 0.630.183-months follow-up5.51 ± 0.564.55 ± 0.490.001P-value**0.0010.04attending to client perception of referral sourceBaseline3.25 ± 0.892.84 ± 0.750.013-months follow-up3.78 ± 0.482.77 ± 0.620.001P-value**0.0010.39providing informationBaseline4.67 ± 0.754.59 ± 0.870.563-months follow-up5.49 ± 0.644.69 ± 0.610.001P-value**0.0010.55mutual agreementBaseline2.87 ± 0.562.84 ± 0.730.953-months follow-up3.21 ± 0.582.66 ± 0.640.001P-value**0.0010.27ending the sessionBaseline5.64 ± 1.05.37 ± 0.850.063-months follow-up6.81 ± 0.925.66 ± 1.100.001P-value**0.0010.17P-value***Self-efficacyBaseline31.52 ± 2.9131.32 ± 2.640.673-months follow-up34.25 ± 4.031.26 ± 4.520.001P-value****0.0010.79* Independent T-test** Paired T-test*** Mann-Whitney**** Wilcoxon Comparison of communication skills and self-efficacy in primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-months follow-up * Independent T-test ** Paired T-test *** Mann-Whitney **** Wilcoxon For service recipients (Table 4), there were no significant differences between intervention and control groups before training on components of satisfaction (access to services, continuity of care, humaneness of staff, comprehensiveness of care, provision of health education, effectiveness of service), but mean scores of satisfaction variables generally significantly increased in the intervention group after training (p < 0.001). No significant differences were observed in the control group from pre- to post-training. Table 4Comparison of client satisfaction in two groups of primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-months follow-upVariableGroup TimeIntervention group Mean ± SD (N = 180)Control group Mean ± SD (N = 178)P-value*Access to servicesBaseline1.75 ± 0.601.73 ± 0.470.593-months follow-up2.89 ± 1.01.80 ± 0.560.001P-value**0.0010.32continuity of careBaseline1.19 ± 0.971.33 ± 0.870.283-months follow-up2.72 ± 1.141.40 ± 1.060.001P-value**0.0010.34humaneness of staffBaseline1.22 ± 0.841.18 ± 0.790.633-months follow-up2.88 ± 1.171.23 ± 0.820.001P-value**0.0010.28comprehensiveness of careBaseline-1.09 ± 0.67-1.04 ± 0.600.143-months follow-up-1.70 ± 1.22-0.61 ± 1.140.001P-value**0.0010.11provision of health educationBaseline1.05 ± 0.891.01 ± 0.950.693-months follow-up2.25 ± 1.121.07 ± 0.940.001P-value**0.0010.10effectiveness of servicesBaseline1.0 ± 0.891.10 ± 0.900.853-months follow-up2.75 ± 1.01.18 ± 1.00.001P-value**0.0010.47* Mann-Whitney** Wilcoxon Comparison of client satisfaction in two groups of primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-months follow-up * Mann-Whitney ** Wilcoxon Clients were generally dissatisfied with comprehensiveness of care (Table 4). There were no differences between intervention and control groups prior to training. 
In the intervention group, clients became more dissatisfied with comprehensiveness of care following training; however, no difference was found from pre- to post-training for clients in the control group. Following training, clients in the intervention group were significantly more dissatisfied with comprehensiveness of care than those in the control group. Medians and Interquartile Range of SE and all satisfaction constructs reported in Table 5. Table 5Comparison of client satisfaction and self-efficacy median in two groups of primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-months follow-upVariableGroup TimeIntervention group Median( IR) (N = 180)Control group Median( IR) (N = 178)Access to servicesBaseline2(2)2(1)3-months follow-up3(3)2(1)continuity of careBaseline1(2)1(2)3-months follow-up2.5(1.75)1(2)humaneness of staffBaseline1(1)1(1)3-months follow-up3(2)1.2(1.5)comprehensiveness of careBaseline-1(1)-1(2)3-months follow-up-2(2)-1(2)provision of health educationBaseline1(0)1(0)3-months follow-up2(3)1(0.5)effectiveness of servicesBaseline1(2)1(2)3-months follow-up3(4)1(1.75)Self-efficacyBaseline30(10)30(10)3-months follow-up32(10)30(10) Comparison of client satisfaction and self-efficacy median in two groups of primary healthcare workers (Behvarz) assigned to Intervention and Control at baseline and 3-months follow-up Discussion: The present study aimed to evaluate the impact of PHC worker training on: (1) PHC worker CS and SE, and (2) client satisfaction with services. Satisfaction among clients of trained PHC workers generally increased from pre- to post-training; and following training, satisfaction generally improved among clients of trained PHC workers as compared to clients of non-trained PHC workers. Similarly, SE and CS increased among trained PHC workers from pre- to post-training; and following training, SE and CS improved among trained PHC workers as compared to non-trained PHC workers. Results indicate training has the potential to improve PHC efficacy and communication skills, and to generally improve client satisfaction with services. Whereas training seemed to enhance client satisfaction with services across most subscales (e.g., services access, care continuity, staff humaneness, provision of health education, services effectiveness), following training, clients rated comprehensiveness of services with more dissatisfaction. More research should be done to understand elements of training (targeting clinician communication/ confidence) that may adversely impact ratings on comprehensiveness of care in particular. It may be that attending to new communication processes during client interactions distracts clinicians from attending to the range of client health needs. Such an effect might abate over time as clinicians grow accustomed to deployment of communication skills. Consistent with our overall findings, previous studies emphasized the importance of CS in promoting patient health and satisfaction [20, 21]. For instance, in a study by Boissy et al., [22]communication skill training increased patient satisfaction and improved empathy and SE among physicians. In another study by Bank et al., [23] teaching CS to physicians led to a marked improvement in patient satisfaction. Moore et al., [24] also found that CS training was effective in promoting physical and mental health, satisfaction, and quality of life in patients. 
Barth and Lannen [25] conducted a meta-analysis and concluded that CS training for healthcare workers is essential for changing their communication behavior and attitudes. Unlike our findings, a systematic review by Barth and Lannen [25] showed that communication skills of professionals can be improved; yet, patients do not necessarily give higher satisfaction score. In another study by Shilling et al., [26] teaching CS to physicians did not have a significant effect on patient satisfaction. Differences between findings of this study and other studies [26, 27] may be due to differences in client samples. For example, in Schilling et al. [27] service recipients were cancer patients whereas in the present study clients were primary health care recipients. In our study, SE was specifically targeted for enhancement during CS training (e.g., through discussion, use of role-plays). Previous studies emphasized SE as an important factor for successful performance [27–29], and the current study is consistent with those findings. After the CS training, health worker SE increased significantly in the present study. In a study by Nørgaard et al. [30], and consistent with our findings, health worker SE for communicating with patients increased significantly after CS training. In a review study by Berkhof et al., [31] teaching CS to clinical staff improved patient satisfaction, self-esteem, and SE in doctors. There may be a positive and significant association between CS and SE in clinical staff; therefore, designing successful training to enhance patient-professional communication may be facilitated by attention to staff SE [7]. A study by Cegala DJ and Lenzmeier indicated that applying effective strategies to enhance self-efficacy in medical staffs could lead to satisfaction in both medical staffs and their clients [32]. Considering the results of previous studies [28, 29], and our findings, this could be inferred that designing self-efficacy-based interventions to establish effective communication between medical staff and their clients might be of critical importance. This issue should be considered in developing in-service training for health professionals by authorities. Limitation and future directions The randomized design of this research is a strength, but additional limitations should be considered. Results should be replicated in physicians, nurses, midwives and other health professionals. Statistically, no control was used for factors that may influence outcomes, including PHC worker or client demographics. Nesting within site or PHC worker was also not performed. Alphas were not corrected for family-wise error, but given the consistency and magnitude of the expected effects, results are likely replicable. In addition, formal mediational analyses were not performed to ascertain if the impact of training on client satisfaction is mediated by PHC worker communication or efficacy or both. Future studies may wish to conduct follow-up beyond 3 months to determine whether results enhance or diminish over time, and whether booster training may be appropriate. Future work with extended follow-up might determine client outcomes such as symptom reduction, or program outcomes such as staff turn-over and client drop-out. The randomized design of this research is a strength, but additional limitations should be considered. Results should be replicated in physicians, nurses, midwives and other health professionals. 
Conclusions: Communication skills training improved the self-efficacy of PHC workers to communicate effectively with clients, improved PHC workers' communication skills with clients, and improved clients' satisfaction with services. Findings are encouraging, and such training may be deployed in other practice settings, since it was delivered in only four group sessions of 90 minutes each.
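As a side note on the summary statistics reported in Table 5 above: each construct is summarized as Median (IQR) by group and time point. The snippet below is a minimal sketch of how such summaries can be produced from a long-format score table; it is not the authors' analysis code, and the column names and values are illustrative placeholders only.

```python
# Minimal sketch of producing Median (IQR) summaries like those in Table 5.
# Not the authors' analysis code; column names and values are illustrative only.
import pandas as pd

scores = pd.DataFrame({
    "group": ["intervention", "intervention", "control", "control"],
    "time": ["baseline", "3-months follow-up", "baseline", "3-months follow-up"],
    "construct": ["access to services"] * 4,
    "score": [2, 3, 2, 2],  # placeholder scores, not study data
})

def median_iqr(s: pd.Series) -> str:
    q1, med, q3 = s.quantile([0.25, 0.5, 0.75])
    return f"{med:g} ({q3 - q1:g})"  # formatted as Median (IQR)

summary = (
    scores.groupby(["construct", "time", "group"])["score"]
    .apply(median_iqr)
    .unstack("group")
)
print(summary)
```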
Background: Service satisfaction ratings from clients are a good indicator of service quality. The present study aimed to investigate the impact of communication skills and self-efficacy training for healthcare workers on clients' satisfaction. Methods: A quasi-experimental study was conducted in health centers of Saveh University of Medical Science in Iran. Primary Healthcare (PHC; N = 105) workers and service recipients (N = 364) were randomly assigned to intervention and control groups. The intervention group received four 90-min training sessions consisting of lectures, film screening, role-playing, and group discussion. Before and 3 months after the intervention, a multi-part questionnaire (covering demographics, self-efficacy, and communication skills for PHC workers, and a satisfaction questionnaire for service recipients) was completed by participants in both intervention and control groups. Results: Mean PHC worker scores for self-efficacy and communication skills after the educational program were higher in the intervention group than in the control group (p < 0.05). Also, mean satisfaction scores among service recipients of intervention-group PHC workers increased significantly compared to the control group (p < 0.001). Conclusions: The educational program improved self-efficacy and communication skills in health workers and improved client satisfaction overall. Our results support the application of self-efficacy and communication skills training for other medical groups who wish to improve client satisfaction as an important health services outcome.
Introduction: Service satisfaction is affected by service quality, the quality of service delivery, and service recipients' expectations of service quality [1, 2]. Service satisfaction is a good indicator of service quality [2], and measurement of service recipient satisfaction is a common method for evaluating treatment quality in healthcare organizations [3]. Generally, the concept of satisfaction in health service provision refers to the feelings or attitudes of service clients, and there is a direct relationship between patient satisfaction and remaining in treatment [4]. Appropriate interpersonal communication between healthcare providers and recipients is an important determinant of clients' satisfaction and compliance with healthcare guidelines [5]. Proper and effective communication between health personnel and patients has a positive impact on health and medical care and enhances patient satisfaction. Focusing only on the technical aspects of health care may lead professionals to use ineffective communication methods (e.g., lack of eye contact, not listening fully to client or patient concerns), so that patients' key problems are not clearly identified [6, 7]. Communication skills (CS) are crucial for professionals who come into direct contact with clients. Such skills convey respect, attention, and empathy, and frequently include asking open questions, listening actively, and using language intelligible to patients in order to increase the effectiveness of the medical interview and treatment process as well as patients' satisfaction [8–10]. Today, health managers and planners around the world, particularly in developing countries, face important challenges in responding to the health care needs of the general population [11, 12]. In Iran, Primary Healthcare (PHC) workers are responsible for providing appropriate health education and services for the public [13, 14]. By providing primary health care, PHC workers reduce the need for patients to be referred to clinics and hospitals [15]. The PHC workers' ability to communicate effectively with individuals is therefore an essential requirement for the satisfaction and engagement of service recipients in promoting health [14, 15]. Improving self-efficacy (SE) for communicating may assist in improving CS [6]. SE, a core element of social-cognitive theory, refers to an individual's belief or judgment about their ability to perform tasks and responsibilities [16]; it is therefore an important factor for successful performance and for the skills that lead to successful performance [6, 17]. In Iran, primary healthcare coverage reaches over 95% of rural areas, but quality of care is a main concern of health policymakers. Since satisfaction is an important index of the quality and performance of health care [13, 15], and given the lack of information on how the CS and SE of health workers affect patient satisfaction, the present study aimed to evaluate the impact of an educational intervention, based on SE and CS, for PHC workers. Of particular interest was the impact on the satisfaction of public health service recipients. Conclusions: Communication skills training improved the self-efficacy of PHC workers to communicate effectively with clients, improved PHC workers' communication skills with clients, and improved clients' satisfaction with services. Findings are encouraging, and such training may be deployed in other practice settings, since it was delivered in only four group sessions of 90 minutes each.
Background: Service satisfaction ratings from clients are a good indicator of service quality. The present study aimed to investigate the impact of communication skills and self-efficacy training for healthcare workers on clients' satisfaction. Methods: A quasi-experimental study was conducted in health centers of Saveh University of Medical Science in Iran. Primary Healthcare (PHC; N = 105) workers and service recipients (N = 364) were randomly assigned to intervention and control groups. The intervention group received four 90-min training sessions consisting of lectures, film screening, role-playing, and group discussion. Before and 3 months after the intervention, a multi-part questionnaire (covering demographics, self-efficacy, and communication skills for PHC workers, and a satisfaction questionnaire for service recipients) was completed by participants in both intervention and control groups. Results: Mean PHC worker scores for self-efficacy and communication skills after the educational program were higher in the intervention group than in the control group (p < 0.05). Also, mean satisfaction scores among service recipients of intervention-group PHC workers increased significantly compared to the control group (p < 0.001). Conclusions: The educational program improved self-efficacy and communication skills in health workers and improved client satisfaction overall. Our results support the application of self-efficacy and communication skills training for other medical groups who wish to improve client satisfaction as an important health services outcome.
7,351
289
[ 2667, 551, 478, 121, 135, 37, 180 ]
11
[ "phc", "training", "workers", "satisfaction", "control", "intervention", "se", "phc workers", "study", "items" ]
[ "patient satisfaction improved", "communicating patients increased", "patient health satisfaction", "interpersonal communication healthcare", "patient professional communication" ]
null
[CONTENT] Communication skills | Self-efficacy | Primary healthcare | Client [SUMMARY]
null
[CONTENT] Communication skills | Self-efficacy | Primary healthcare | Client [SUMMARY]
[CONTENT] Communication skills | Self-efficacy | Primary healthcare | Client [SUMMARY]
[CONTENT] Communication skills | Self-efficacy | Primary healthcare | Client [SUMMARY]
[CONTENT] Communication skills | Self-efficacy | Primary healthcare | Client [SUMMARY]
[CONTENT] Communication | Health Personnel | Health Services | Humans | Iran | Primary Health Care | Self Efficacy [SUMMARY]
null
[CONTENT] Communication | Health Personnel | Health Services | Humans | Iran | Primary Health Care | Self Efficacy [SUMMARY]
[CONTENT] Communication | Health Personnel | Health Services | Humans | Iran | Primary Health Care | Self Efficacy [SUMMARY]
[CONTENT] Communication | Health Personnel | Health Services | Humans | Iran | Primary Health Care | Self Efficacy [SUMMARY]
[CONTENT] Communication | Health Personnel | Health Services | Humans | Iran | Primary Health Care | Self Efficacy [SUMMARY]
[CONTENT] patient satisfaction improved | communicating patients increased | patient health satisfaction | interpersonal communication healthcare | patient professional communication [SUMMARY]
null
[CONTENT] patient satisfaction improved | communicating patients increased | patient health satisfaction | interpersonal communication healthcare | patient professional communication [SUMMARY]
[CONTENT] patient satisfaction improved | communicating patients increased | patient health satisfaction | interpersonal communication healthcare | patient professional communication [SUMMARY]
[CONTENT] patient satisfaction improved | communicating patients increased | patient health satisfaction | interpersonal communication healthcare | patient professional communication [SUMMARY]
[CONTENT] patient satisfaction improved | communicating patients increased | patient health satisfaction | interpersonal communication healthcare | patient professional communication [SUMMARY]
[CONTENT] phc | training | workers | satisfaction | control | intervention | se | phc workers | study | items [SUMMARY]
null
[CONTENT] phc | training | workers | satisfaction | control | intervention | se | phc workers | study | items [SUMMARY]
[CONTENT] phc | training | workers | satisfaction | control | intervention | se | phc workers | study | items [SUMMARY]
[CONTENT] phc | training | workers | satisfaction | control | intervention | se | phc workers | study | items [SUMMARY]
[CONTENT] phc | training | workers | satisfaction | control | intervention | se | phc workers | study | items [SUMMARY]
[CONTENT] health | quality | service | satisfaction | care | patient | important | health care | patients | healthcare [SUMMARY]
null
[CONTENT] months follow | value | follow | months | 0010 | 001p | 001p value | 001p value 0010 | value 0010 | control [SUMMARY]
[CONTENT] improved | clients improved | clients | skills | communication skills | skills training improved | improved clients services | min | services satisfaction findings encouraging | group sessions 90 min [SUMMARY]
[CONTENT] phc | training | items | workers | satisfaction | se | phc workers | study | control | cs [SUMMARY]
[CONTENT] phc | training | items | workers | satisfaction | se | phc workers | study | control | cs [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] PHC | 0.05 ||| PHC | 0.001 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| Saveh University of Medical Science | Iran ||| Primary Healthcare | PHC | N | 105 | 364 ||| four | 90 ||| Before and 3 months | PHC ||| PHC | 0.05 ||| PHC | 0.001 ||| ||| [SUMMARY]
[CONTENT] ||| ||| Saveh University of Medical Science | Iran ||| Primary Healthcare | PHC | N | 105 | 364 ||| four | 90 ||| Before and 3 months | PHC ||| PHC | 0.05 ||| PHC | 0.001 ||| ||| [SUMMARY]
Where are People Dying in Disasters, and Where is it Being Studied? A Mapping Review of Scientific Articles on Tropical Cyclone Mortality in English and Chinese.
35379375
Tropical cyclones are a recurrent, lethal hazard. Climate change, demographic, and development trends contribute to increasing hazards and vulnerability. This mapping review of articles on tropical cyclone mortality assesses geographic publication patterns, research gaps, and priorities for investigation to inform evidence-based risk reduction.
BACKGROUND
A mapping review of published scientific articles on tropical cyclone-related mortality indexed in PubMed and EMBASE (English) and SINOMED and CNKI (Chinese), focusing on research approach, location, and storm information, was conducted. Results were compared with data on historical tropical cyclone disasters.
METHODS
A total of 150 articles were included, 116 in English and 34 in Chinese. Nine cyclones accounted for 61% of specific event analyses. The United States (US) reported 0.76% of fatalities but was studied in 51% of articles, 96% in English and 4% in Chinese. Asian nations reported 90.4% of fatalities but were studied in 39% of articles, 50% in English and 50% in Chinese. Within the US, New York, New Jersey, and Pennsylvania experienced 4.59% of US tropical cyclones but were studied in 24% of US articles. Of the 12 articles where data were collected beyond six months from impact, 11 focused on storms in the US. Climate change was mentioned in 8% of article abstracts.
FINDINGS
Regions that have historically experienced high mortality from tropical cyclones have not been studied as extensively as some regions with lower mortality impacts. Long-term mortality and the implications of climate change have not been extensively studied nor discussed in most settings. Research in highly impacted settings should be prioritized.
INTERPRETATION
[ "China", "Climate Change", "Cyclonic Storms", "Disasters", "Humans", "New York" ]
9118061
Introduction
Tropical cyclones, also known as hurricanes and typhoons, are among the most destructive weather events on earth. While preparedness efforts have helped reduce mortality, 1–4 advances have been uneven and large numbers of fatalities continue to occur. 5–8 Prediction 9–12 and communication 13–16 advances have not been uniformly implemented world-wide, and optimal risk reduction strategies may vary substantially depending on geographic, socioeconomic, and cultural factors. It is currently unclear how well existing research aligns with information needs. Successful interventions such as reversal of traffic flow on highways during evacuations in the United States (US) or use of elevated concrete cyclone shelters in Bangladesh are typically developed, evaluated, and improved through a combination of research and practical knowledge of the setting in question. 1,2,17 Information on human mortality due to cyclones can also catalyze government policies and other interventions. 18–20 A geographically and culturally diverse global research base is thus essential to support timely, situationally appropriate decision making. Geographically diverse research is also necessary because climate change, demographic, and development trends may contribute to increasing hazards and vulnerability, and optimal interventions to address these issues vary widely across the globe. Warming, rising seas mean that tropical cyclones may exhibit more rapid intensification, 21,22 increasing wind intensity and rainfall, 23,24 higher risk of prolonged impacts due to stalling, 25,26 more extreme storm surges, 27–29 and exposure of new regions to cyclones. 24,30 Many affected nations expect substantial population growth; 31 one model suggests that by 2030, approximately 140 million people will be exposed to tropical cyclones annually, many in low- and middle-income countries of Asia and Africa. 32 Migration toward coastal cities, 33–35 settlement of floodplains and steep hillsides, 36–38 loss of protective coastal marshes and mangroves, 3,39,40 and reliance on engineered defenses 41–43 may affect vulnerability. Research on the implications of each of these trends is needed to guide policy. In addition, recent studies show that impacts from tropical cyclones can extend well beyond the date of the storm. Following Hurricane Maria (2017) in Puerto Rico, the official death toll of 64 prompted multiple studies which showed that thousands had lost their lives in the ensuing months. 6,7,19,44 Similar mortality dynamics have been noted in other settings; 6,7,45–49 long-term, all-cause excess mortality may differ substantially from immediate mortality figures based on cause of death. However, these effects are only identified when specifically investigated, 19 and while uniform reporting systems have been proposed, 50–52 analysis of mortality remains challenging. Growing hazards related to climate change, worsening vulnerability related to demographic and development trends, and recent evidence for long-term and indirect mortality effects create an urgent need for research on tropical cyclone mortality that can inform future risk reduction efforts across a wide variety of settings. This mapping review seeks to describe the production of scientific knowledge on tropical cyclone mortality and to identify gaps or biases in the literature with regards to geography, methodology, and content.
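To make the excess-mortality comparisons referenced above concrete, the following is a minimal sketch of the common observed-minus-expected approach, in which expected deaths come from a pre-storm baseline such as the same calendar months in prior years. This is an illustration of the general concept, not any specific study's published method, and all numbers are placeholders.

```python
# Illustrative excess-mortality calculation: observed deaths minus an expected
# baseline for the same period. All values below are placeholders, not real data.
observed = {"month_1": 2900, "month_2": 2700, "month_3": 2500}   # post-storm counts
expected = {"month_1": 2350, "month_2": 2300, "month_3": 2300}   # e.g., prior-year mean

excess = {m: observed[m] - expected[m] for m in observed}
total_excess = sum(excess.values())
print(excess)          # per-month excess deaths
print(total_excess)    # cumulative excess over the window
```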
Methods
This study consisted of a structured mapping review of peer-reviewed scientific literature published in English or Chinese on the topic of human mortality in tropical cyclones. The majority of peer-reviewed literature is published in English, 53–55 but publication volume in Chinese has increased rapidly. 56,57 Structured searches were conducted in PubMed (National Center for Biotechnology Information, National Institutes of Health; Bethesda, Maryland USA [English]); EMBASE (Elsevier; Amsterdam, Netherlands [English]); SINOMED (Sino Medical Sciences Technology Inc.; Tianjin, China [Chinese]); and CNKI (Beijing, China [Chinese]). Searches consisted of (mortality OR death) AND (hurricane OR typhoon OR cyclone OR tropical storm OR natural disaster) in English and (死亡) AND (飓风 OR 台风 OR 气旋 OR 热带风暴 OR 自然灾害) in Chinese. Results were indexed and duplicates removed. Each article was reviewed by two separate native speakers of the language of original publication. Articles were included if they had a title and abstract available in English or Chinese, were published in or after 1985, studied tropical cyclones, hurricanes, or typhoons, and addressed human mortality as a quantitative endpoint or thematic topic. Studies exclusively presenting statistical techniques were excluded (Table 1).
Table 1. Article Inclusion and Exclusion Criteria
Inclusion criteria: published in English or Chinese; focus on tropical cyclones, hurricanes, or typhoons; address human mortality as either a quantitative endpoint or thematic topic; abstract available in English or Chinese; indexed in PubMed, EMBASE, SINOMED, or CNKI.
Exclusion criteria: published in languages other than English or Chinese; exclusive focus on statistical techniques; non-human mortality; published prior to 1985.
Attributes were abstracted by two independent reviewers; fields included publication type, data source, data collection duration, name(s) of hurricanes studied, locations studied, mortality measurement methodology, and whether the paper referenced climate change in the abstract (a replicable proxy for whether climate change featured prominently). Results were supplemented with a global dataset of tropical cyclone disasters 1985-2019 from the Emergency Events Database (EM-DAT; Centre for Research on the Epidemiology of Disasters; Brussels, Belgium) 5 and information on cyclone impacts in US states from the National Oceanic and Atmospheric Administration (NOAA; Washington, DC USA). 58 Analysis was produced using R v3.6.0 (R Foundation for Statistical Computing; Vienna, Austria). 59
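The authors report that the analysis was produced in R; their code is not reproduced here. As a rough illustration of the kind of record-keeping the Methods describe (merging database exports, removing duplicates, and tallying included articles by studied location), here is a hedged Python sketch; the file names, field names, and screening flag are hypothetical.

```python
# Hypothetical sketch of the screening bookkeeping described in the Methods:
# merge exports from the four databases, deduplicate, and tally included
# articles by the country they study. Not the authors' actual R workflow.
import csv
from collections import Counter

records = []
for path in ["pubmed.csv", "embase.csv", "sinomed.csv", "cnki.csv"]:  # hypothetical exports
    with open(path, newline="", encoding="utf-8") as handle:
        records.extend(csv.DictReader(handle))

# Deduplicate on a normalized title; identifiers such as DOI/PMID are more
# robust where available.
unique = list({r["title"].strip().lower(): r for r in records}.values())

# "meets_criteria" stands in for the two-reviewer screening decision.
included = [r for r in unique if r.get("meets_criteria") == "yes"]
print(f"identified={len(records)} unique={len(unique)} included={len(included)}")

by_country = Counter(r["country_studied"] for r in included)
print(by_country.most_common(10))  # article counts to compare against EM-DAT mortality
```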
Results
A total of 2,192 articles were identified in PubMed, EMBASE, SINOMED, or CNKI via structured searches. After removal of duplicates, Chinese translations of English articles, and articles that did not meet inclusion criteria (Table 1), 150 articles were retained for analysis (Figure 1).
Figure 1. Results of Structured Process for Identification, Screening, and Inclusion of Articles for Analysis.
Most articles were recent; 94 (63%) were published in 2010 or later. Original research studies accounted for 108 (72%), with other types accounting for less than 10% each (Table 2). Of 82 studies that reported a data collection timeframe, 70 (85%) collected data for six months or less after cyclone impact (Figure 2). Of the 12 studies (15%) that collected data for six months or more after storm impact, nine studied Hurricane Katrina (2005) and only one studied a location outside the US. Of the 150 studies examined, 12 (8%) referenced climate change in the abstract and 42 (28%) computed excess mortality.
Table 2. Attributes of Articles Included in Analysis (number of articles, percentage of total; total = 150, 100%)
Language: English 116 (77%); Chinese 34 (23%)
Year of publication: 1985 to 2010: 56 (37%); 2010 to 2019: 94 (63%)
Publication type: research studies 108 (72%); agency reports 13 (9%); opinion articles 10 (7%); review articles 8 (5%); situation reports 5 (3%); letters 2 (1%); case reports 2 (1%); meta-analyses 2 (1%)
Data source: total articles that reported data 123 (82%); primary data collection 76 (62% [a]); pre-existing database or repository 40 (33% [a]); review or meta-analysis 4 (3% [a])
Publication content: climate change referenced in abstract 12 (8%); excess mortality calculation 42 (28%)
[a] Out of articles reporting data.
Figure 2. Duration of Data Collection Following Tropical Cyclone Impact in Published Studies for which Information was Available. Note: Five publications with timeframes longer than two years are not plotted.
A total of 46 specific storms were analyzed individually. Some were analyzed in multiple articles and some articles discussed multiple storms; a total of 126 analyses of specific storms were identified. Of these, the top nine storms accounted for 77 analyses (61.1%) and the top 20 storms accounted for 103 (81.7%; Table 3). Twelve out of the 50 deadliest storms in EM-DAT (24%) were the subject of any studies identified in this review (Table 4). 5
Table 3. Tropical Cyclones Analyzed in More than One Article and Associated Mortality, 1985-2019 (name; articles [a]; mortality; region)
Katrina; 25; 1,833; Americas
Sandy; 14; 145; Americas
Maria [b]; 9; 3,058; Americas
Rananim; 7; 188; Asia
Haiyan (Yolanda); 6; 7,375; Asia
Ike; 5; 163; Americas
Andrew; 4; 48; Americas
Gustav; 4; 152; Americas
Harvey; 3; 88; Americas
Charley; 2; 15; Americas
Frances; 2; 49; Americas
Irma; 2; 105; Americas
Ivan; 2; 123; Americas
Ondoy (Ketsana); 2; 716; Asia
Mitch; 2; 18,820; Americas
Nargis; 2; 138,375; Asia
Odisha Super Cyclone (BOB06/O5B); 2; 9,843; Asia
Rammasun; 2; 209; Asia
Rita; 2; 10; Americas
Saomai; 2; 441; Asia
Tropical Storm One (1B) - Bay of Bengal; 2; 15,000; Asia
1991 Bangladesh Cyclone (Gorky/O2B); 2; 138,866; Asia
Abbreviation: EMDAT, Emergency Events Database.
[a] Articles with substantive focus on more than one storm are listed with each storm.
[b] Mortality based on EMDAT and revised official death toll from Govt. of Puerto Rico.
Table 4. Mortality and Articles on Mortality in the 50 Deadliest Tropical Cyclones, 1985-2019 (rank; year; name; mortality; articles [a])
1; 1991; 1991 Bangladesh Cyclone (Gorky/O2B); 138,866; 2
2; 2008; Nargis; 138,375; 2
3; 1998; Mitch; 18,820; 2
4; 1985; Tropical Storm One (1B) - Bay of Bengal; 15,000; 2
5; 1999; Odisha Super Cyclone (BOB06/O5B); 9,843; 2
6; 2013; Haiyan (Yolanda); 7,375; 6
7; 1991; Thelma (Uring); 5,956; 0
8; 2007; Sidr; 4,234; 0
9; 1997; Linda; 3,859; 0
10; 2017; Maria [b]; 3,058; 9
11; 1998; 03A; 2,871; 0
12; 2004; Jeanne; 2,782; 0
13; 2012; Bopha; 1,901; 0
14; 2005; Katrina; 1,833; 25
15; 2005; Stan; 1,629; 0
16; 2005; Winnie; 1,619; 0
17; 2006; Durian (Reming); 1,494; 0
18; 2011; Washi (Sendong); 1,439; 0
19; 2019; Idai; 1,234; 0
20; 1994; Fred; 1,177; 0
21; 1994; Gordon; 1,130; 0
22; 1988; 04B; 1,074; 0
23; 1990; 02B; 957; 0
24; 2017; Ockhi; 911; 0
25; 1987; Nina; 882; 0
26; 1995; Angela; 882; 0
27; 2008; Bilis; 877; 0
28; 1985; Cecil; 798; 0
29; 1989; Cecil; 751; 0
30; 1996; O3; 731; 0
31; 2009; Ondoy (Ketsana); 716; 2
32; 1996; 07B; 708; 0
33; 2009; Morakot (Kiko); 664; 0
34; 2008; Fengshen (Franck); 658; 0
35; 1993; 0304-PAK (EMDAT Desig.); 609; 0
36; 2016; Matthew; 595; 1
37; 1996; Frankie; 585; 0
38; 1998; Georges; 554; 1
39; 1989; Vera; 550; 0
40; 2008; Hanna; 537; 0
41; 1996; 0086-BGC (EMDAT Desig.); 525; 0
42; 1997; 0530-PER (EMDAT Desig.); 518; 0
43; 2009; Pepeng (Parma); 515; 0
44; 1990; Mike (Ruping); 503; 0
45; 1987; Thelma; 483; 0
46; 1989; Gay; 458; 0
47; 1999; 02A; 451; 0
48; 1995; Kent; 445; 0
49; 2006; Saomai; 441; 3
50; 1986; Wayne; 435; 0
Abbreviation: EMDAT, Emergency Events Database.
[a] Articles with substantive focus on more than one storm are listed with each storm.
[b] Mortality based on EMDAT and revised official death toll from Govt. of Puerto Rico.
The number of articles studying tropical cyclone mortality varied by storm impact location; article counts are presented with cyclone mortality from EM-DAT (1985-2019) for context (Figure 3 and Figure 4). The US reported 3,167 fatalities (0.76% of global mortality) during this period 5 but was the subject of 77 published articles (51%), 74 (96%) in English and three (4%) in Chinese. China reported 10,489 fatalities (2.51% of global mortality) and was the subject of 27 articles (18%), five (19%) in English and 22 (81%) in Chinese. 5 Asian nations other than China reported 366,482 fatalities (87.9% of global mortality) but were the focus of 31 articles (21%), 25 (81%) in English and six (19%) in Chinese. 5 Central American and Caribbean nations were the subject of four articles (3%) in English, though they reported 30,706 fatalities (7.36% of global mortality); inclusion of Spanish literature could alter this finding. 5 No studies examined mortality in African nations, although 4,490 fatalities (1.07% of global mortality) were reported in this region during the study timeframe. 5
Figure 3. Tropical Cyclone Mortality and Article Volumes by Nation, 1985-2019.
Figure 4. Global Distribution of (A) Mortality Attributed to Tropical Cyclones and (B) Articles Analyzing Tropical Cyclone Mortality, 1985-2019.
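One way to read the figures above is as a representation ratio, i.e., each region's share of articles divided by its share of reported fatalities; a value near 1 would indicate coverage roughly proportional to mortality. The sketch below simply applies that arithmetic to the percentages quoted in the text; the ratio itself is an interpretive aid added here, not a metric used by the authors.

```python
# Representation ratio = share of articles / share of 1985-2019 fatalities,
# using the percentages quoted in the Results text above.
regions = {
    "United States": (0.51, 0.0076),
    "China": (0.18, 0.0251),
    "Asian nations other than China": (0.21, 0.879),
}
for name, (article_share, mortality_share) in regions.items():
    print(f"{name}: {article_share / mortality_share:.2f}")
# Values far above 1 indicate over-representation relative to mortality;
# values below 1 indicate under-representation.
```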
Disaggregated mortality data were not available for individual US states in a uniform format; 47,60,61 tropical cyclone transits from NOAA (1985-2019) 58 were used to contextualize distribution of the 73 articles on specific US states (Figure 5). Louisiana, Texas, and Florida, sites of multiple recent disasters, experienced 109 cyclone transits (38.5% of the US total) and were the subject of 46 articles (63% of the US total). New York, New Jersey, and Pennsylvania experienced 13 (4.59%) cyclone transits and were the subject of 17 articles (23%), principally regarding Hurricane Sandy (2012). Other US states experienced 161 (56.9%) cyclone transits and were the topic of 10 articles (14%).
Figure 5. Tropical Cyclone Transits and Article Volumes by US State, 1985-2019.
Conclusion
Scientific articles on tropical cyclone mortality disproportionately focus on a limited number of storms. The US and China are over-represented in the global literature relative to historical mortality, while nations in Southeast Asia, Africa, and the Americas outside the US are under-represented. Substantial knowledge gaps persist; long-term mortality effects are unclear, particularly in low-resource settings. Few publications prominently mention climate change, despite its substantial implications. Research addressing mortality related to tropical cyclones in low- and middle-income settings and over extended timeframes should be prioritized.
[]
[]
[]
[ "Introduction", "Methods", "Results", "Discussion", "Limitations", "Conclusion" ]
[ "Tropical cyclones, also known as hurricanes and typhoons, are among the most destructive weather events on earth. While preparedness efforts have helped reduce mortality,\n1–4\n advances have been uneven and large numbers of fatalities continue to occur.\n5–8\n Prediction\n9–12\n and communication\n13–16\n advances have not been uniformly implemented world-wide, and optimal risk reduction strategies may vary substantially depending on geographic, socioeconomic, and cultural factors. It is currently unclear how well existing research aligns with information needs.\nSuccessful interventions such as reversal of traffic flow on highways during evacuations in the United States (US) or use of elevated concrete cyclone shelters in Bangladesh are typically developed, evaluated, and improved through a combination of research and practical knowledge of the setting in question.\n1,2,17\n Information on human mortality due to cyclones can also catalyze government policies and other interventions.\n18–20\n A geographically and culturally diverse global research base is thus essential to support timely, situationally appropriate decision making.\nGeographically diverse research is also necessary because climate change, demographic, and development trends may contribute to increasing hazards and vulnerability, and optimal interventions to address these issues vary widely across the globe. Warming, rising seas mean that tropical cyclones may exhibit more rapid intensification,\n21,22\n increasing wind intensity and rainfall,\n23,24\n higher risk of prolonged impacts due to stalling,\n25,26\n more extreme storm surges,\n27–29\n and exposure of new regions to cyclones.\n24,30\n Many affected nations expect substantial population growth;\n31\n one model suggests that by 2030, approximately 140 million people will be exposed to tropical cyclones annually, many in low- and middle-income countries of Asia and Africa.\n32\n Migration toward coastal cities,\n33–35\n settlement of floodplains and steep hillsides,\n36–38\n loss of protective coastal marshes and mangroves,\n3,39,40\n and reliance on engineered defenses\n41–43\n may affect vulnerability. Research on the implications of each of these trends is needed to guide policy.\nIn addition, recent studies show that impacts from tropical cyclones can extend well beyond the date of the storm. Following Hurricane Maria (2017) in Puerto Rico, the official death toll of 64 prompted multiple studies which showed that thousands had lost their lives in the ensuing months.\n6,7,19,44\n Similar mortality dynamics have been noted in other settings;\n6,7,45–49\n long-term, all-cause excess mortality may differ substantially from immediate mortality figures based on cause of death. However, these effects are only identified when specifically investigated,\n19\n and while uniform reporting systems have been proposed,\n50–52\n analysis of mortality remains challenging.\nGrowing hazards related to climate change, worsening vulnerability related to demographic and development trends, and recent evidence for long-term and indirect mortality effects create an urgent need for research on tropical cyclone mortality that can inform future risk reduction efforts across a wide variety of settings. 
This mapping review seeks to describe the production of scientific knowledge on tropical cyclone mortality and to identify gaps or biases in the literature with regards to geography, methodology, and content.", "This study consisted of a structured mapping review of peer-reviewed scientific literature published in English or Chinese on the topic of human mortality in tropical cyclones. The majority of peer-reviewed literature is published in English,\n53–55\n but publication volume in Chinese has increased rapidly.\n56,57\n Structured searches were conducted in PubMed (National Center for Biotechnology Information, National Institutes of Health; Bethesda, Maryland USA [English]); EMBASE (Elsevier; Amsterdam, Netherlands [English]); SINOMED (Sino Medical Sciences Technology Inc.; Tianjin, China [Chinese]); and CNKI (Beijing, China [Chinese]). Searches consisted of (mortality OR death) AND (hurricane OR typhoon OR cyclone OR tropical storm OR natural disaster) in English and (死亡) AND (飓风 OR 台风 OR 气旋 OR 热带风暴 OR 自然灾害) in Chinese. Results were indexed and duplicates removed. Each article was reviewed by two separate native speakers of the language of original publication. Articles were included if they had a title and abstract available in English or Chinese, were published in or after 1985, studied tropical cyclones, hurricanes, or typhoons, and addressed human mortality as a quantitative endpoint or thematic topic. Studies exclusively presenting statistical techniques were excluded (Table 1).\n\nTable 1.Article Inclusion and Exclusion CriteriaInclusion CriteriaExclusion CriteriaPublished in English or ChinesePublished in languages other than English or ChineseFocus on tropical cyclones, hurricanes, or typhoonsExclusive focus on statistical techniquesAddress human mortality as either a quantitative endpoint or thematic topicNon-human mortalityAbstract available in English or ChinesePublished prior to 1985Indexed in in PubMed, EMBASE, SINOMED, or CNKI\n\nArticle Inclusion and Exclusion Criteria\nAttributes were abstracted by two independent reviewers; fields included publication type, data source, data collection duration, name(s) of hurricanes studied, locations studied, mortality measurement methodology, and whether the paper referenced climate change in the abstract (a replicable proxy for whether climate change featured prominently). Results were supplemented with a global dataset of tropical cyclone disasters 1985-2019 from the Emergency Events Database (EM-DAT; Centre for Research on the Epidemiology of Disasters; Brussels, Belgium)\n5\n and information on cyclone impacts in US states from the National Oceanic and Atmospheric Administration (NOAA; Washington, DC USA).\n58\n Analysis was produced using R v3.6.0 (R Foundation for Statistical Computing; Vienna, Austria).\n59\n\n", "A total of 2,192 articles were identified in PubMed, EMBASE, SINOMED, or CNKI via structured searches. After removal of duplicates, Chinese translations of English articles, and articles that did not meet inclusion criteria (Table 1), 150 articles were retained for analysis (Figure 1).\n\nFigure 1.Results of Structured Process for Identification, Screening, and Inclusion of Articles for Analysis.\n\nResults of Structured Process for Identification, Screening, and Inclusion of Articles for Analysis.\nMost articles were recent; 94 (63%) were published in 2010 or later. Original research studies accounted for 108 (72%) with other types accounting for less than 10% each (Table 2). 
Of 82 studies that reported a data collection timeframe, 70 (85%) collected data for six months or less after cyclone impact (Figure 2). Of the 12 studies (15%) that collected data for six months or more after storm impact, nine studied Hurricane Katrina (2005) and only one studied a location outside the US. Of the 150 studies examined, 12 (8%) referenced climate change in the abstract and 42 (28%) computed excess mortality.\n\nTable 2.Attributes of Articles Included in AnalysisNumber ofArticlesPercentageof TotalTotal150100%\nLanguage\nEnglish11677%Chinese3423%\nYear of Publication\n1985 to 20105637%2010 to 20199463%\nPublication Type\nResearch Studies10872%Agency Reports139%Opinion Articles107%Review Articles85%Situation Reports53%Letters21%Case Reports21%Meta-Analyses21%\nData Source\nTotal Articles that Reported Data12382%Primary Data Collection7662% \na\n\nPre-Existing Database or Repository4033% \na\n\nReview or Meta-Analysis43% \na\n\n\nPublication Content\nClimate Change Referenced in Abstract128%Excess Mortality Calculation4228%\na\nOut of articles reporting data.\n\nAttributes of Articles Included in Analysis\nOut of articles reporting data.\n\nFigure 2.Duration of Data Collection Following Tropical Cyclone Impact in Published Studies for which Information was Available.Note: Five publications with timeframes longer than two years are not plotted.\n\nDuration of Data Collection Following Tropical Cyclone Impact in Published Studies for which Information was Available.\nNote: Five publications with timeframes longer than two years are not plotted.\nA total of 46 specific storms were analyzed individually. Some were analyzed in multiple articles and some articles discussed multiple storms; a total of 126 analyses of specific storms were identified. Of these, the top nine storms accounted for 77 analyses (61.1%) and the top 20 storms accounted for 103 (81.7%; Table 3). Twelve out of the 50 deadliest storms in EM-DAT (24%) were the subject of any studies identified in this review (Table 4).\n5\n\n\n\nTable 3.Tropical Cyclones Analyzed in More than One Article and Associated Mortality,1985-2019NameArticlesaMortalityRegionKatrina251,833AmericasSandy14145AmericasMaria \nb\n\n93,058AmericasRananim7188AsiaHaiyan (Yolanda)67,375AsiaIke5163AmericasAndrew448AmericasGustav4152AmericasHarvey388AmericasCharley215AmericasFrances249AmericasIrma2105AmericasIvan2123AmericasOndoy (Ketsana)2716AsiaMitch218,820AmericasNargis2138,375AsiaOdisha Super Cyclone (BOB06/O5B)29,843AsiaRammasun2209AsiaRita210AmericasSaomai2441AsiaTropical Storm One (1B) - Bay of Bengal215,000Asia1991 Bangladesh Cyclone (Gorky/O2B)2138,866AsiaAbbreviation: EMDAT, Emergency Events Database.\na\nArticles with substantive focus on more than one storm are listed with each storm.\nb\nMortality based on EMDAT and revised official death toll from Govt. of Puerto Rico.\n\nTropical Cyclones Analyzed in More than One Article and Associated Mortality,1985-2019\nAbbreviation: EMDAT, Emergency Events Database.\nArticles with substantive focus on more than one storm are listed with each storm.\nMortality based on EMDAT and revised official death toll from Govt. 
of Puerto Rico.\n\nTable 4.Mortality and Articles on Mortality in the 50 Deadliest Tropical Cyclones, 1985-2019RankYearNameMortalityArticles \na\n\n119911991 Bangladesh Cyclone (Gorky/O2B)138,866222008Nargis138,375231998Mitch18,820241985Tropical Storm One (1B) - Bay of Bengal15,000251999Odisha Super Cyclone (BOB06/O5B)9,843262013Haiyan (Yolanda)7,375671991Thelma (Uring)5,956082007Sidr4,234091997Linda3,8590102017Maria \nb\n\n3,058911199803A2,8710122004Jeanne2,7820132012Bopha1,9010142005Katrina1,83325152005Stan1,6290162005Winnie1,6190172006Durian (Reming)1,4940182011Washi (Sendong)1,4390192019Idai1,2340201994Fred1,1770211994Gordon1,130022198804B1,074023199002B9570242017Ockhi9110251987Nina8820261995Angela8820272008Bilis8770281985Cecil7980291989Cecil7510301996O37310312009Ondoy (Ketsana)716232199607B7080332009Morakot (Kiko)6640342008Fengshen (Franck)65803519930304-PAK (EMDAT Desig.)6090362016Matthew5951371996Frankie5850381998Georges5541391989Vera5500402008Hanna53704119960086-BGC (EMDAT Desig.)52504219970530-PER (EMDAT Desig.)5180432009Pepeng (Parma)5150441990Mike (Ruping)5030451987Thelma4830461989Gay458047199902A4510481995Kent4450492006Saomai4413501986Wayne4350Abbreviation: EMDAT, Emergency Events Database.\na\nArticles with substantive focus on more than one storm are listed with each storm.\nb\nMortality based on EMDAT and revised official death toll from Govt. of Puerto Rico.\n\nMortality and Articles on Mortality in the 50 Deadliest Tropical Cyclones, 1985-2019\nAbbreviation: EMDAT, Emergency Events Database.\nArticles with substantive focus on more than one storm are listed with each storm.\nMortality based on EMDAT and revised official death toll from Govt. of Puerto Rico.\nThe number of articles studying tropical cyclone mortality varied by storm impact location and are presented with cyclone mortality from EM-DAT (1985-2019) for context (Figure 3 and Figure 4). The US reported 3,167 fatalities (0.76% of global mortality) during this period\n5\n but was the subject of 77 published articles (51%), 74 (96%) in English and three (4%) in Chinese. China reported 10,489 fatalities (2.51% of global mortality) and was the subject of 27 articles (18%), five (19%) in English and 22 (81%) in Chinese.\n5\n Asian nations other than China reported 366,482 fatalities (87.9% of global mortality) but were the focus of 31 articles (21%), 25 (81%) in English and six (19%) in Chinese.\n5\n Central American and Caribbean nations were the subject of four articles (3%) in English, though they reported 30,706 fatalities (7.36% of global mortality); inclusion of Spanish literature could alter this finding.\n5\n No studies examined mortality in African nations, although 4,490 fatalities (1.07% of global mortality) were reported in this region during the study timeframe.\n5\n\n\n\nFigure 3.Tropical Cyclone Mortality and Article Volumes by Nation, 1985-2019.\n\nTropical Cyclone Mortality and Article Volumes by Nation, 1985-2019.\n\nFigure 4.Global Distribution of (A) Mortality Attributed to Tropical Cyclones and Global Distribution of (B) Articles Analyzing Tropical Cyclone Mortality, 1985-2019.\n\nGlobal Distribution of (A) Mortality Attributed to Tropical Cyclones and Global Distribution of (B) Articles Analyzing Tropical Cyclone Mortality, 1985-2019.\nDisaggregated mortality data were not available for individual US states in a uniform format;\n47,60,61\n tropical cyclone transits from NOAA (1985-2019)\n58\n were used to contextualize distribution of the 73 articles on specific US states (Figure 5). 
Louisiana, Texas, and Florida, sites of multiple recent disasters, experienced 109 cyclone transits (38.5% of the US total) and were the subject of 46 articles (63% of the US total). New York, New Jersey, and Pennsylvania experienced 13 (4.59%) cyclone transits and were the subject of 17 articles (23%), principally regarding Hurricane Sandy (2012). Other US states experienced 161 (56.9%) cyclone transits and were the topic of 10 articles (14%).\n\nFigure 5.Tropical Cyclone Transits and Article Volumes by US State, 1985-2019.\n\nTropical Cyclone Transits and Article Volumes by US State, 1985-2019.", "This review maps geography, methodology, and content for 150 scientific articles on mortality during and after tropical cyclones. While some situations have been studied in detail, for example mortality in Puerto Rico following Hurricane Maria\n6,7,44\n and in sub-populations following Hurricane Sandy,\n45,47,61–63\n the distribution of existing research is not proportional to historical mortality and key knowledge gaps remain.\nPublished articles largely focus on mortality in the US and China, which together accounted for 68% of the articles identified in this review, despite reporting less than 3.5% of recent tropical cyclone mortality.\n5\n In contrast, Southeast Asia, Africa, Central America, and the Caribbean were comparatively under-represented in the literature despite high mortality. An analogous pattern was noted within the US; a disproportionate number of articles focused on states in the Northeast affected by Hurricane Sandy, while several Southern states that routinely experienced more storms were under-represented.\n58\n Future research will be most useful if conducted in settings that are highly impacted by tropical cyclones and in which findings can maximally contribute to mortality prevention.\nThe articles identified in this review also disproportionately focus on a small number of tropical cyclones that may or may not be representative of mortality dynamics elsewhere. Nine storms accounted for 61% of analyses of specific storms identified in this study; of the 50 deadliest tropical cyclones in EM-DAT from 1985-2019, less than one-quarter were the subject of an article identified in this review. The concentration of articles on a limited sample of individual storms raises questions about the representativeness and generalizability of current knowledge.\nIn addition, the long-term mortality effects of tropical cyclones remain poorly understood. Only 12 studies evaluated effects more than six months after cyclone impact, and only one of these studied a location outside the US. The mechanisms proposed to mediate post-cyclone excess mortality largely involve pre-exiting medical issues, disruptions of infrastructure, and disruptions of medical care.\n6,44,64–66\n It is thus plausible that the degree to which a tropical cyclone affects long-term mortality is related to factors including baseline levels of medical vulnerability, dependence on infrastructure, and infrastructure fragility in the affected area.\n6,47\n As these factors vary widely on both global and national scales, it is unknown whether the long-term impacts identified in existing studies are widely generalizable or describe exceptional circumstances. Additional long-term studies are needed, particularly in settings outside the US.\nFinally, few studies explicitly evaluated the implications of climate change. 
Most articles (92%) identified in this review did not mention climate change or related terms in the abstract, which was used as a replicable proxy for prominent consideration of this topic. Given the implications of climate change for tropical cyclone hazards,\n27,35,67\n consideration of this issue is important; long-term hazard projections provide important context for the study of mortality in tropical cyclones and should be considered in risk reduction strategies.\nFuture years will likely witness rising seas, intensifying tropical cyclones, and worsening vulnerability in affected populations. Research on tropical cyclone mortality should prioritize lower- and middle-income settings with high historical mortality, examination of long-term effects, and evaluation of the implications of climate change. Prevention of future mortality will depend on the development of evidence-based risk reduction programs and their continuous monitoring for effectiveness during future storms. Policymakers should prioritize increased accessibility of mortality records and support for researchers working in highly affected settings.", "This review evaluated articles published in English and Chinese; additional articles may exist in other languages and could affect results. Also, EM-DAT cyclone mortality data included a small number of extra-tropical cyclonic storms.\n5\n\n", "Scientific articles on tropical cyclone mortality disproportionately focus on a limited number of storms. The US and China are over-represented in the global literature relative to historical mortality, while nations in Southeast Asia, Africa, and the Americas outside the US are under-represented. Substantial knowledge gaps persist; long-term mortality effects are unclear, particularly in low-resource settings. Few publications prominently mention climate change, despite its substantial implications. Research addressing mortality related to tropical cyclones in low- and middle-income settings and over extended timeframes should be prioritized." ]
[ "intro", "methods", "results", "discussion", "other", "conclusions" ]
[ "cyclonic storms", "disaster medicine", "mortality" ]
Introduction: Tropical cyclones, also known as hurricanes and typhoons, are among the most destructive weather events on earth. While preparedness efforts have helped reduce mortality, 1–4 advances have been uneven and large numbers of fatalities continue to occur. 5–8 Prediction 9–12 and communication 13–16 advances have not been uniformly implemented world-wide, and optimal risk reduction strategies may vary substantially depending on geographic, socioeconomic, and cultural factors. It is currently unclear how well existing research aligns with information needs. Successful interventions such as reversal of traffic flow on highways during evacuations in the United States (US) or use of elevated concrete cyclone shelters in Bangladesh are typically developed, evaluated, and improved through a combination of research and practical knowledge of the setting in question. 1,2,17 Information on human mortality due to cyclones can also catalyze government policies and other interventions. 18–20 A geographically and culturally diverse global research base is thus essential to support timely, situationally appropriate decision making. Geographically diverse research is also necessary because climate change, demographic, and development trends may contribute to increasing hazards and vulnerability, and optimal interventions to address these issues vary widely across the globe. Warming, rising seas mean that tropical cyclones may exhibit more rapid intensification, 21,22 increasing wind intensity and rainfall, 23,24 higher risk of prolonged impacts due to stalling, 25,26 more extreme storm surges, 27–29 and exposure of new regions to cyclones. 24,30 Many affected nations expect substantial population growth; 31 one model suggests that by 2030, approximately 140 million people will be exposed to tropical cyclones annually, many in low- and middle-income countries of Asia and Africa. 32 Migration toward coastal cities, 33–35 settlement of floodplains and steep hillsides, 36–38 loss of protective coastal marshes and mangroves, 3,39,40 and reliance on engineered defenses 41–43 may affect vulnerability. Research on the implications of each of these trends is needed to guide policy. In addition, recent studies show that impacts from tropical cyclones can extend well beyond the date of the storm. Following Hurricane Maria (2017) in Puerto Rico, the official death toll of 64 prompted multiple studies which showed that thousands had lost their lives in the ensuing months. 6,7,19,44 Similar mortality dynamics have been noted in other settings; 6,7,45–49 long-term, all-cause excess mortality may differ substantially from immediate mortality figures based on cause of death. However, these effects are only identified when specifically investigated, 19 and while uniform reporting systems have been proposed, 50–52 analysis of mortality remains challenging. Growing hazards related to climate change, worsening vulnerability related to demographic and development trends, and recent evidence for long-term and indirect mortality effects create an urgent need for research on tropical cyclone mortality that can inform future risk reduction efforts across a wide variety of settings. This mapping review seeks to describe the production of scientific knowledge on tropical cyclone mortality and to identify gaps or biases in the literature with regards to geography, methodology, and content. 
Methods: This study consisted of a structured mapping review of peer-reviewed scientific literature published in English or Chinese on the topic of human mortality in tropical cyclones. The majority of peer-reviewed literature is published in English, 53–55 but publication volume in Chinese has increased rapidly. 56,57 Structured searches were conducted in PubMed (National Center for Biotechnology Information, National Institutes of Health; Bethesda, Maryland USA [English]); EMBASE (Elsevier; Amsterdam, Netherlands [English]); SINOMED (Sino Medical Sciences Technology Inc.; Tianjin, China [Chinese]); and CNKI (Beijing, China [Chinese]). Searches consisted of (mortality OR death) AND (hurricane OR typhoon OR cyclone OR tropical storm OR natural disaster) in English and (死亡) AND (飓风 OR 台风 OR 气旋 OR 热带风暴 OR 自然灾害) in Chinese. Results were indexed and duplicates removed. Each article was reviewed by two separate native speakers of the language of original publication. Articles were included if they had a title and abstract available in English or Chinese, were published in or after 1985, studied tropical cyclones, hurricanes, or typhoons, and addressed human mortality as a quantitative endpoint or thematic topic. Studies exclusively presenting statistical techniques were excluded (Table 1). Table 1.Article Inclusion and Exclusion CriteriaInclusion CriteriaExclusion CriteriaPublished in English or ChinesePublished in languages other than English or ChineseFocus on tropical cyclones, hurricanes, or typhoonsExclusive focus on statistical techniquesAddress human mortality as either a quantitative endpoint or thematic topicNon-human mortalityAbstract available in English or ChinesePublished prior to 1985Indexed in in PubMed, EMBASE, SINOMED, or CNKI Article Inclusion and Exclusion Criteria Attributes were abstracted by two independent reviewers; fields included publication type, data source, data collection duration, name(s) of hurricanes studied, locations studied, mortality measurement methodology, and whether the paper referenced climate change in the abstract (a replicable proxy for whether climate change featured prominently). Results were supplemented with a global dataset of tropical cyclone disasters 1985-2019 from the Emergency Events Database (EM-DAT; Centre for Research on the Epidemiology of Disasters; Brussels, Belgium) 5 and information on cyclone impacts in US states from the National Oceanic and Atmospheric Administration (NOAA; Washington, DC USA). 58 Analysis was produced using R v3.6.0 (R Foundation for Statistical Computing; Vienna, Austria). 59 Results: A total of 2,192 articles were identified in PubMed, EMBASE, SINOMED, or CNKI via structured searches. After removal of duplicates, Chinese translations of English articles, and articles that did not meet inclusion criteria (Table 1), 150 articles were retained for analysis (Figure 1). Figure 1.Results of Structured Process for Identification, Screening, and Inclusion of Articles for Analysis. Results of Structured Process for Identification, Screening, and Inclusion of Articles for Analysis. Most articles were recent; 94 (63%) were published in 2010 or later. Original research studies accounted for 108 (72%) with other types accounting for less than 10% each (Table 2). Of 82 studies that reported a data collection timeframe, 70 (85%) collected data for six months or less after cyclone impact (Figure 2). 
Of the 12 studies (15%) that collected data for six months or more after storm impact, nine studied Hurricane Katrina (2005) and only one studied a location outside the US. Of the 150 studies examined, 12 (8%) referenced climate change in the abstract and 42 (28%) computed excess mortality. Table 2.Attributes of Articles Included in AnalysisNumber ofArticlesPercentageof TotalTotal150100% Language English11677%Chinese3423% Year of Publication 1985 to 20105637%2010 to 20199463% Publication Type Research Studies10872%Agency Reports139%Opinion Articles107%Review Articles85%Situation Reports53%Letters21%Case Reports21%Meta-Analyses21% Data Source Total Articles that Reported Data12382%Primary Data Collection7662% a Pre-Existing Database or Repository4033% a Review or Meta-Analysis43% a Publication Content Climate Change Referenced in Abstract128%Excess Mortality Calculation4228% a Out of articles reporting data. Attributes of Articles Included in Analysis Out of articles reporting data. Figure 2.Duration of Data Collection Following Tropical Cyclone Impact in Published Studies for which Information was Available.Note: Five publications with timeframes longer than two years are not plotted. Duration of Data Collection Following Tropical Cyclone Impact in Published Studies for which Information was Available. Note: Five publications with timeframes longer than two years are not plotted. A total of 46 specific storms were analyzed individually. Some were analyzed in multiple articles and some articles discussed multiple storms; a total of 126 analyses of specific storms were identified. Of these, the top nine storms accounted for 77 analyses (61.1%) and the top 20 storms accounted for 103 (81.7%; Table 3). Twelve out of the 50 deadliest storms in EM-DAT (24%) were the subject of any studies identified in this review (Table 4). 5 Table 3.Tropical Cyclones Analyzed in More than One Article and Associated Mortality,1985-2019NameArticlesaMortalityRegionKatrina251,833AmericasSandy14145AmericasMaria b 93,058AmericasRananim7188AsiaHaiyan (Yolanda)67,375AsiaIke5163AmericasAndrew448AmericasGustav4152AmericasHarvey388AmericasCharley215AmericasFrances249AmericasIrma2105AmericasIvan2123AmericasOndoy (Ketsana)2716AsiaMitch218,820AmericasNargis2138,375AsiaOdisha Super Cyclone (BOB06/O5B)29,843AsiaRammasun2209AsiaRita210AmericasSaomai2441AsiaTropical Storm One (1B) - Bay of Bengal215,000Asia1991 Bangladesh Cyclone (Gorky/O2B)2138,866AsiaAbbreviation: EMDAT, Emergency Events Database. a Articles with substantive focus on more than one storm are listed with each storm. b Mortality based on EMDAT and revised official death toll from Govt. of Puerto Rico. Tropical Cyclones Analyzed in More than One Article and Associated Mortality,1985-2019 Abbreviation: EMDAT, Emergency Events Database. Articles with substantive focus on more than one storm are listed with each storm. Mortality based on EMDAT and revised official death toll from Govt. of Puerto Rico. 
Table 4. Mortality and Articles on Mortality in the 50 Deadliest Tropical Cyclones, 1985-2019
Rank | Year | Name | Mortality | Articles a
1 | 1991 | 1991 Bangladesh Cyclone (Gorky/O2B) | 138,866 | 2
2 | 2008 | Nargis | 138,375 | 2
3 | 1998 | Mitch | 18,820 | 2
4 | 1985 | Tropical Storm One (1B) - Bay of Bengal | 15,000 | 2
5 | 1999 | Odisha Super Cyclone (BOB06/O5B) | 9,843 | 2
6 | 2013 | Haiyan (Yolanda) | 7,375 | 6
7 | 1991 | Thelma (Uring) | 5,956 | 0
8 | 2007 | Sidr | 4,234 | 0
9 | 1997 | Linda | 3,859 | 0
10 | 2017 | Maria b | 3,058 | 9
11 | 1998 | 03A | 2,871 | 0
12 | 2004 | Jeanne | 2,782 | 0
13 | 2012 | Bopha | 1,901 | 0
14 | 2005 | Katrina | 1,833 | 25
15 | 2005 | Stan | 1,629 | 0
16 | 2005 | Winnie | 1,619 | 0
17 | 2006 | Durian (Reming) | 1,494 | 0
18 | 2011 | Washi (Sendong) | 1,439 | 0
19 | 2019 | Idai | 1,234 | 0
20 | 1994 | Fred | 1,177 | 0
21 | 1994 | Gordon | 1,130 | 0
22 | 1988 | 04B | 1,074 | 0
23 | 1990 | 02B | 957 | 0
24 | 2017 | Ockhi | 911 | 0
25 | 1987 | Nina | 882 | 0
26 | 1995 | Angela | 882 | 0
27 | 2008 | Bilis | 877 | 0
28 | 1985 | Cecil | 798 | 0
29 | 1989 | Cecil | 751 | 0
30 | 1996 | O3 | 731 | 0
31 | 2009 | Ondoy (Ketsana) | 716 | 2
32 | 1996 | 07B | 708 | 0
33 | 2009 | Morakot (Kiko) | 664 | 0
34 | 2008 | Fengshen (Franck) | 658 | 0
35 | 1993 | 0304-PAK (EMDAT Desig.) | 609 | 0
36 | 2016 | Matthew | 595 | 1
37 | 1996 | Frankie | 585 | 0
38 | 1998 | Georges | 554 | 1
39 | 1989 | Vera | 550 | 0
40 | 2008 | Hanna | 537 | 0
41 | 1996 | 0086-BGC (EMDAT Desig.) | 525 | 0
42 | 1997 | 0530-PER (EMDAT Desig.) | 518 | 0
43 | 2009 | Pepeng (Parma) | 515 | 0
44 | 1990 | Mike (Ruping) | 503 | 0
45 | 1987 | Thelma | 483 | 0
46 | 1989 | Gay | 458 | 0
47 | 1999 | 02A | 451 | 0
48 | 1995 | Kent | 445 | 0
49 | 2006 | Saomai | 441 | 3
50 | 1986 | Wayne | 435 | 0
Abbreviation: EMDAT, Emergency Events Database.
a Articles with substantive focus on more than one storm are listed with each storm.
b Mortality based on EMDAT and revised official death toll from Govt. of Puerto Rico.
The number of articles studying tropical cyclone mortality varied by storm impact location and is presented with cyclone mortality from EM-DAT (1985-2019) for context (Figure 3 and Figure 4). The US reported 3,167 fatalities (0.76% of global mortality) during this period 5 but was the subject of 77 published articles (51%), 74 (96%) in English and three (4%) in Chinese. China reported 10,489 fatalities (2.51% of global mortality) and was the subject of 27 articles (18%), five (19%) in English and 22 (81%) in Chinese. 5 Asian nations other than China reported 366,482 fatalities (87.9% of global mortality) but were the focus of 31 articles (21%), 25 (81%) in English and six (19%) in Chinese. 5 Central American and Caribbean nations were the subject of four articles (3%) in English, though they reported 30,706 fatalities (7.36% of global mortality); inclusion of Spanish literature could alter this finding. 5 No studies examined mortality in African nations, although 4,490 fatalities (1.07% of global mortality) were reported in this region during the study timeframe. 5 Figure 3. Tropical Cyclone Mortality and Article Volumes by Nation, 1985-2019. Figure 4. Global Distribution of (A) Mortality Attributed to Tropical Cyclones and Global Distribution of (B) Articles Analyzing Tropical Cyclone Mortality, 1985-2019. Disaggregated mortality data were not available for individual US states in a uniform format; 47,60,61 tropical cyclone transits from NOAA (1985-2019) 58 were used to contextualize distribution of the 73 articles on specific US states (Figure 5).
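The regional mismatch between mortality burden and research volume can be reproduced directly from the counts quoted above. The R sketch below uses only those quoted figures; the shares are computed relative to the listed regions, so they differ marginally from the global percentages in the text, which are based on a total that includes all other regions.

```r
# Region-level comparison of mortality share versus article share, using counts quoted in the text.
regions <- data.frame(
  region   = c("United States", "China", "Other Asia", "Central America/Caribbean", "Africa"),
  deaths   = c(3167, 10489, 366482, 30706, 4490),
  articles = c(77, 27, 31, 4, 0)
)
regions$death_share_pct   <- round(100 * regions$deaths / sum(regions$deaths), 2)
regions$article_share_pct <- round(100 * regions$articles / 150, 1)   # 150 included articles in total
regions[order(-regions$death_share_pct), ]
```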
Louisiana, Texas, and Florida, sites of multiple recent disasters, experienced 109 cyclone transits (38.5% of the US total) and were the subject of 46 articles (63% of the US total). New York, New Jersey, and Pennsylvania experienced 13 (4.59%) cyclone transits and were the subject of 17 articles (23%), principally regarding Hurricane Sandy (2012). Other US states experienced 161 (56.9%) cyclone transits and were the topic of 10 articles (14%). Figure 5. Tropical Cyclone Transits and Article Volumes by US State, 1985-2019. Discussion: This review maps geography, methodology, and content for 150 scientific articles on mortality during and after tropical cyclones. While some situations have been studied in detail, for example mortality in Puerto Rico following Hurricane Maria 6,7,44 and in sub-populations following Hurricane Sandy, 45,47,61–63 the distribution of existing research is not proportional to historical mortality and key knowledge gaps remain. Published articles largely focus on mortality in the US and China, which together accounted for 68% of the articles identified in this review, despite reporting less than 3.5% of recent tropical cyclone mortality. 5 In contrast, Southeast Asia, Africa, Central America, and the Caribbean were comparatively under-represented in the literature despite high mortality. An analogous pattern was noted within the US; a disproportionate number of articles focused on states in the Northeast affected by Hurricane Sandy, while several Southern states that routinely experienced more storms were under-represented. 58 Future research will be most useful if conducted in settings that are highly impacted by tropical cyclones and in which findings can maximally contribute to mortality prevention. The articles identified in this review also disproportionately focus on a small number of tropical cyclones that may or may not be representative of mortality dynamics elsewhere. Nine storms accounted for 61% of analyses of specific storms identified in this study; of the 50 deadliest tropical cyclones in EM-DAT from 1985-2019, less than one-quarter were the subject of an article identified in this review. The concentration of articles on a limited sample of individual storms raises questions about the representativeness and generalizability of current knowledge. In addition, the long-term mortality effects of tropical cyclones remain poorly understood. Only 12 studies evaluated effects more than six months after cyclone impact, and only one of these studied a location outside the US. The mechanisms proposed to mediate post-cyclone excess mortality largely involve pre-existing medical issues, disruptions of infrastructure, and disruptions of medical care. 6,44,64–66 It is thus plausible that the degree to which a tropical cyclone affects long-term mortality is related to factors including baseline levels of medical vulnerability, dependence on infrastructure, and infrastructure fragility in the affected area. 6,47 As these factors vary widely on both global and national scales, it is unknown whether the long-term impacts identified in existing studies are widely generalizable or describe exceptional circumstances. Additional long-term studies are needed, particularly in settings outside the US. Finally, few studies explicitly evaluated the implications of climate change.
Most articles (92%) identified in this review did not mention climate change or related terms in the abstract, which was used as a replicable proxy for prominent consideration of this topic. Given the implications of climate change for tropical cyclone hazards, 27,35,67 consideration of this issue is important; long-term hazard projections provide important context for the study of mortality in tropical cyclones and should be considered in risk reduction strategies. Future years will likely witness rising seas, intensifying tropical cyclones, and worsening vulnerability in affected populations. Research on tropical cyclone mortality should prioritize lower- and middle-income settings with high historical mortality, examination of long-term effects, and evaluation of the implications of climate change. Prevention of future mortality will depend on the development of evidence-based risk reduction programs and their continuous monitoring for effectiveness during future storms. Policymakers should prioritize increased accessibility of mortality records and support for researchers working in highly affected settings. Limitations: This review evaluated articles published in English and Chinese; additional articles may exist in other languages and could affect results. Also, EM-DAT cyclone mortality data included a small number of extra-tropical cyclonic storms. 5 Conclusion: Scientific articles on tropical cyclone mortality disproportionately focus on a limited number of storms. The US and China are over-represented in the global literature relative to historical mortality, while nations in Southeast Asia, Africa, and the Americas outside the US are under-represented. Substantial knowledge gaps persist; long-term mortality effects are unclear, particularly in low-resource settings. Few publications prominently mention climate change, despite its substantial implications. Research addressing mortality related to tropical cyclones in low- and middle-income settings and over extended timeframes should be prioritized.
Background: Tropical cyclones are a recurrent, lethal hazard. Climate change, demographic, and development trends contribute to increasing hazards and vulnerability. This mapping review of articles on tropical cyclone mortality assesses geographic publication patterns, research gaps, and priorities for investigation to inform evidence-based risk reduction. Methods: A mapping review of published scientific articles on tropical cyclone-related mortality indexed in PubMed and EMBASE (English) and SINOMED and CNKI (Chinese), focusing on research approach, location, and storm information, was conducted. Results were compared with data on historical tropical cyclone disasters. Results: A total of 150 articles were included, 116 in English and 34 in Chinese. Nine cyclones accounted for 61% of specific event analyses. The United States (US) reported 0.76% of fatalities but was studied in 51% of articles, 96% in English and four percent in Chinese. Asian nations reported 90.4% of fatalities but were studied in 39% of articles, 50% in English and 50% in Chinese. Within the US, New York, New Jersey, and Pennsylvania experienced 4.59% of US tropical cyclones but were studied in 24% of US articles. Of the 12 articles where data were collected beyond six months from impact, 11 focused on storms in the US. Climate change was mentioned in eight percent of article abstracts. Conclusions: Regions that have historically experienced high mortality from tropical cyclones have not been studied as extensively as some regions with lower mortality impacts. Long-term mortality and the implications of climate change have not been extensively studied nor discussed in most settings. Research in highly impacted settings should be prioritized.
Introduction: Tropical cyclones, also known as hurricanes and typhoons, are among the most destructive weather events on earth. While preparedness efforts have helped reduce mortality, 1–4 advances have been uneven and large numbers of fatalities continue to occur. 5–8 Prediction 9–12 and communication 13–16 advances have not been uniformly implemented world-wide, and optimal risk reduction strategies may vary substantially depending on geographic, socioeconomic, and cultural factors. It is currently unclear how well existing research aligns with information needs. Successful interventions such as reversal of traffic flow on highways during evacuations in the United States (US) or use of elevated concrete cyclone shelters in Bangladesh are typically developed, evaluated, and improved through a combination of research and practical knowledge of the setting in question. 1,2,17 Information on human mortality due to cyclones can also catalyze government policies and other interventions. 18–20 A geographically and culturally diverse global research base is thus essential to support timely, situationally appropriate decision making. Geographically diverse research is also necessary because climate change, demographic, and development trends may contribute to increasing hazards and vulnerability, and optimal interventions to address these issues vary widely across the globe. Warming, rising seas mean that tropical cyclones may exhibit more rapid intensification, 21,22 increasing wind intensity and rainfall, 23,24 higher risk of prolonged impacts due to stalling, 25,26 more extreme storm surges, 27–29 and exposure of new regions to cyclones. 24,30 Many affected nations expect substantial population growth; 31 one model suggests that by 2030, approximately 140 million people will be exposed to tropical cyclones annually, many in low- and middle-income countries of Asia and Africa. 32 Migration toward coastal cities, 33–35 settlement of floodplains and steep hillsides, 36–38 loss of protective coastal marshes and mangroves, 3,39,40 and reliance on engineered defenses 41–43 may affect vulnerability. Research on the implications of each of these trends is needed to guide policy. In addition, recent studies show that impacts from tropical cyclones can extend well beyond the date of the storm. Following Hurricane Maria (2017) in Puerto Rico, the official death toll of 64 prompted multiple studies which showed that thousands had lost their lives in the ensuing months. 6,7,19,44 Similar mortality dynamics have been noted in other settings; 6,7,45–49 long-term, all-cause excess mortality may differ substantially from immediate mortality figures based on cause of death. However, these effects are only identified when specifically investigated, 19 and while uniform reporting systems have been proposed, 50–52 analysis of mortality remains challenging. Growing hazards related to climate change, worsening vulnerability related to demographic and development trends, and recent evidence for long-term and indirect mortality effects create an urgent need for research on tropical cyclone mortality that can inform future risk reduction efforts across a wide variety of settings. This mapping review seeks to describe the production of scientific knowledge on tropical cyclone mortality and to identify gaps or biases in the literature with regards to geography, methodology, and content. Conclusion: Scientific articles on tropical cyclone mortality disproportionately focus on a limited number of storms. 
The US and China are over-represented in the global literature relative to historical mortality, while nations in Southeast Asia, Africa, and the Americas outside the US are under-represented. Substantial knowledge gaps persist; long-term mortality effects are unclear, particularly in low-resource settings. Few publications prominently mention climate change, despite its substantial implications. Research addressing mortality related to tropical cyclones in low- and middle-income settings and over extended timeframes should be prioritized.
Background: Tropical cyclones are a recurrent, lethal hazard. Climate change, demographic, and development trends contribute to increasing hazards and vulnerability. This mapping review of articles on tropical cyclone mortality assesses geographic publication patterns, research gaps, and priorities for investigation to inform evidence-based risk reduction. Methods: A mapping review of published scientific articles on tropical cyclone-related mortality indexed in PubMed and EMBASE (English) and SINOMED and CNKI (Chinese), focusing on research approach, location, and storm information, was conducted. Results were compared with data on historical tropical cyclone disasters. Results: A total of 150 articles were included, 116 in English and 34 in Chinese. Nine cyclones accounted for 61% of specific event analyses. The United States (US) reported 0.76% of fatalities but was studied in 51% of articles, 96% in English and four percent in Chinese. Asian nations reported 90.4% of fatalities but were studied in 39% of articles, 50% in English and 50% in Chinese. Within the US, New York, New Jersey, and Pennsylvania experienced 4.59% of US tropical cyclones but were studied in 24% of US articles. Of the 12 articles where data were collected beyond six months from impact, 11 focused on storms in the US. Climate change was mentioned in eight percent of article abstracts. Conclusions: Regions that have historically experienced high mortality from tropical cyclones have not been studied as extensively as some regions with lower mortality impacts. Long-term mortality and the implications of climate change have not been extensively studied nor discussed in most settings. Research in highly impacted settings should be prioritized.
3,239
321
[]
6
[ "mortality", "articles", "tropical", "cyclone", "cyclones", "tropical cyclones", "tropical cyclone", "1985", "studies", "english" ]
[ "tropical cyclones situations", "tropical cyclone disasters", "cyclone mortality prioritize", "mortality cyclones catalyze", "cyclone shelters bangladesh" ]
[CONTENT] cyclonic storms | disaster medicine | mortality [SUMMARY]
[CONTENT] cyclonic storms | disaster medicine | mortality [SUMMARY]
[CONTENT] cyclonic storms | disaster medicine | mortality [SUMMARY]
[CONTENT] cyclonic storms | disaster medicine | mortality [SUMMARY]
[CONTENT] cyclonic storms | disaster medicine | mortality [SUMMARY]
[CONTENT] cyclonic storms | disaster medicine | mortality [SUMMARY]
[CONTENT] China | Climate Change | Cyclonic Storms | Disasters | Humans | New York [SUMMARY]
[CONTENT] China | Climate Change | Cyclonic Storms | Disasters | Humans | New York [SUMMARY]
[CONTENT] China | Climate Change | Cyclonic Storms | Disasters | Humans | New York [SUMMARY]
[CONTENT] China | Climate Change | Cyclonic Storms | Disasters | Humans | New York [SUMMARY]
[CONTENT] China | Climate Change | Cyclonic Storms | Disasters | Humans | New York [SUMMARY]
[CONTENT] China | Climate Change | Cyclonic Storms | Disasters | Humans | New York [SUMMARY]
[CONTENT] tropical cyclones situations | tropical cyclone disasters | cyclone mortality prioritize | mortality cyclones catalyze | cyclone shelters bangladesh [SUMMARY]
[CONTENT] tropical cyclones situations | tropical cyclone disasters | cyclone mortality prioritize | mortality cyclones catalyze | cyclone shelters bangladesh [SUMMARY]
[CONTENT] tropical cyclones situations | tropical cyclone disasters | cyclone mortality prioritize | mortality cyclones catalyze | cyclone shelters bangladesh [SUMMARY]
[CONTENT] tropical cyclones situations | tropical cyclone disasters | cyclone mortality prioritize | mortality cyclones catalyze | cyclone shelters bangladesh [SUMMARY]
[CONTENT] tropical cyclones situations | tropical cyclone disasters | cyclone mortality prioritize | mortality cyclones catalyze | cyclone shelters bangladesh [SUMMARY]
[CONTENT] tropical cyclones situations | tropical cyclone disasters | cyclone mortality prioritize | mortality cyclones catalyze | cyclone shelters bangladesh [SUMMARY]
[CONTENT] mortality | articles | tropical | cyclone | cyclones | tropical cyclones | tropical cyclone | 1985 | studies | english [SUMMARY]
[CONTENT] mortality | articles | tropical | cyclone | cyclones | tropical cyclones | tropical cyclone | 1985 | studies | english [SUMMARY]
[CONTENT] mortality | articles | tropical | cyclone | cyclones | tropical cyclones | tropical cyclone | 1985 | studies | english [SUMMARY]
[CONTENT] mortality | articles | tropical | cyclone | cyclones | tropical cyclones | tropical cyclone | 1985 | studies | english [SUMMARY]
[CONTENT] mortality | articles | tropical | cyclone | cyclones | tropical cyclones | tropical cyclone | 1985 | studies | english [SUMMARY]
[CONTENT] mortality | articles | tropical | cyclone | cyclones | tropical cyclones | tropical cyclone | 1985 | studies | english [SUMMARY]
[CONTENT] mortality | research | cyclones | trends | interventions | tropical | vulnerability | risk | tropical cyclones | diverse [SUMMARY]
[CONTENT] english | chinese | human | statistical | reviewed | publication | national | human mortality | hurricanes | mortality [SUMMARY]
[CONTENT] articles | mortality | emdat | figure | 1985 | cyclone | storm | tropical | reported | 1985 2019 [SUMMARY]
[CONTENT] mortality | substantial | low | represented | settings | settings extended timeframes | tropical cyclones low | tropical cyclones low middle | extended | extended timeframes [SUMMARY]
[CONTENT] mortality | articles | tropical | cyclone | english | cyclones | tropical cyclones | chinese | storms | data [SUMMARY]
[CONTENT] mortality | articles | tropical | cyclone | english | cyclones | tropical cyclones | chinese | storms | data [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] PubMed | English | SINOMED | CNKI | Chinese ||| [SUMMARY]
[CONTENT] 150 | 116 | English | 34 | Chinese ||| Nine | 61% ||| The United States | US | 0.76% | 51% | 96% | English | four percent | Chinese ||| Asian | 90.4% | 39% | 50% | English | 50% | Chinese ||| US | New York | New Jersey | Pennsylvania | 4.59% | US | 24% | US ||| 12 | six months | 11 | US ||| eight percent [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| PubMed | English | SINOMED | CNKI | Chinese ||| ||| ||| 150 | 116 | English | 34 | Chinese ||| Nine | 61% ||| The United States | US | 0.76% | 51% | 96% | English | four percent | Chinese ||| Asian | 90.4% | 39% | 50% | English | 50% | Chinese ||| US | New York | New Jersey | Pennsylvania | 4.59% | US | 24% | US ||| 12 | six months | 11 | US ||| eight percent ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| PubMed | English | SINOMED | CNKI | Chinese ||| ||| ||| 150 | 116 | English | 34 | Chinese ||| Nine | 61% ||| The United States | US | 0.76% | 51% | 96% | English | four percent | Chinese ||| Asian | 90.4% | 39% | 50% | English | 50% | Chinese ||| US | New York | New Jersey | Pennsylvania | 4.59% | US | 24% | US ||| 12 | six months | 11 | US ||| eight percent ||| ||| ||| [SUMMARY]
The role of the electrocardiogram in the recognition of cardiac transplant rejection: A systematic review and meta-analysis.
35066923
In cardiac transplant recipients, the electrocardiogram (ECG) is a noninvasive measure of early allograft rejection. The ECG can predict an acute cellular rejection, thus shortening the time to recognition of rejection. Earlier diagnosis has the potential to reduce the number and severity of rejection episodes.
BACKGROUND
A systematic literature review was conducted in accordance with the PRISMA guidelines to identify and select original research reports on the use of electrocardiography in diagnosing cardiac transplant rejection. Included studies reported the sensitivity and specificity of ECG readings in heart transplant recipients during the first post-transplant year. Data were analyzed with Review Manager version 5.4, and p-values were used to test for significant differences.
METHODOLOGY
After the removal of duplicates, 98 articles were eligible for screening. After the full-text screening, a total of 17 papers were included in the review based on the above criteria. A meta-analysis of five studies was done.
RESULTS
In heart transplant recipients, a noninvasive measure of early allograft rejection has the potential to reduce the number and severity of rejection episodes by reducing the time and cost of surveillance of rejection and shortening the time to recognition of rejection.
CONCLUSION
[ "Electrocardiography", "Graft Rejection", "Heart Diseases", "Heart Transplantation", "Humans", "Mass Screening" ]
8922543
INTRODUCTION
A well-established treatment for end-stage heart failure is heart transplantation. The median survival of adult patients who received a heart transplant after the year 2000 is over 12 years. 1 A significant cause of early mortality is acute allograft rejection; the prevalence of allograft rejection has been reported to exceed 13% in the first year following adult heart transplantation. Thus, if the patient survives the first year post-transplant, they are expected to survive at least 15 years. 1 According to the 2011 annual United States data released by the International Society for Heart and Lung Transplantation Registry, 26% of heart transplant patients experience at least one rejection episode within the first year post-transplant. The most frequent cause of morbidity and rehospitalization in this patient population remains acute rejection. 2, 3 The electrocardiogram (ECG) is a simple, cost-effective, and noninvasive tool used to evaluate the rhythm and electrical activity of the heart. Sensors attached to the skin record the electrical signals generated by the heart and display them on easy-to-interpret grid paper. 4 Utilizing ECG readings in heart transplant recipients can predict acute cellular rejection, thus shortening the time to recognition of rejection. A recent study examined serial ECGs in 98 patients within the first year after heart transplantation; the most common abnormalities were intraventricular conduction delays, with right bundle branch block (RBBB) being the most prevalent. 5, 6 In cardiac transplant recipients, a noninvasive measure of early allograft rejection can reduce the number and severity of rejection episodes, and the ECG can reduce the time to detection and the cost of surveillance of rejection. 6 In this study, we summarize the diagnostic accuracy and criteria of the ECG in the detection of cardiac transplant rejection.
METHODOLOGY
Selection of studies A systematic literature review was conducted to identify all studies about the detection of graft rejection in heart transplant surgeries per the PRISMA guidelines. 7 The online database: Google Scholar, PubMed, and Cochrane were searched from January 1985 to September 2020. Keywords used in the search included (Heart transplant rejection OR Heart transplantation rejection OR Detection of heart transplant rejection OR Cardiac transplant detection OR Cardiac transplantation detection OR Noninvasive ways of detection of cardiac transplant rejection). The screening was completed by Hashim T. Hashim, and Jaffer Sha, with disagreements being resolved by Joseph Varney. There was no restriction on participant's age, gender, or ethnicity, and no restrictions to language written. The references of selected papers were manually checked for additional relating studies. An analysis of the funnel plot was carried out to determine the possibility of bias in the publication in the Review Manager program version 5.4. Inclusion criteria for meta‐analysis were studies that correlated the Endomyocardial biopsy grading to the ECG features. A systematic literature review was conducted to identify all studies about the detection of graft rejection in heart transplant surgeries per the PRISMA guidelines. 7 The online database: Google Scholar, PubMed, and Cochrane were searched from January 1985 to September 2020. Keywords used in the search included (Heart transplant rejection OR Heart transplantation rejection OR Detection of heart transplant rejection OR Cardiac transplant detection OR Cardiac transplantation detection OR Noninvasive ways of detection of cardiac transplant rejection). The screening was completed by Hashim T. Hashim, and Jaffer Sha, with disagreements being resolved by Joseph Varney. There was no restriction on participant's age, gender, or ethnicity, and no restrictions to language written. The references of selected papers were manually checked for additional relating studies. An analysis of the funnel plot was carried out to determine the possibility of bias in the publication in the Review Manager program version 5.4. Inclusion criteria for meta‐analysis were studies that correlated the Endomyocardial biopsy grading to the ECG features. Data extraction Details of the study design, ECG characteristics, endomyocardial biopsy grading, and outcome data, including the QT interval, QTc, QT dispersion, and QTc dispersion, were extracted. The risk of bias was assessed using the Newcastle‐Ottawa Quality Assessment tool. Details of the study design, ECG characteristics, endomyocardial biopsy grading, and outcome data, including the QT interval, QTc, QT dispersion, and QTc dispersion, were extracted. The risk of bias was assessed using the Newcastle‐Ottawa Quality Assessment tool. Study inclusion After a comprehensive search of the literature, 170 publications resulted and then became 96 after removal of duplicates. Of these, 51 were eligible for full‐text screening. After the full‐text screening, 18 studies were included in the systematic review and meta‐analysis, as shown in (Figure 1). Six studies were included in the meta‐analysis. The included QT (ms), QTc, QT dispersion, and QTc dispersion outcomes in the meta‐analysis were reported in 2, 3, and 5 studies. The summary of the included studies and risk of bias assessment are shown in Tables 1, 2, 3, respectively. 
Flow chart of the study selection process The studies' general information The studies' specific data The ECG's characteristics After a comprehensive search of the literature, 170 publications resulted and then became 96 after removal of duplicates. Of these, 51 were eligible for full‐text screening. After the full‐text screening, 18 studies were included in the systematic review and meta‐analysis, as shown in (Figure 1). Six studies were included in the meta‐analysis. The included QT (ms), QTc, QT dispersion, and QTc dispersion outcomes in the meta‐analysis were reported in 2, 3, and 5 studies. The summary of the included studies and risk of bias assessment are shown in Tables 1, 2, 3, respectively. Flow chart of the study selection process The studies' general information The studies' specific data The ECG's characteristics Analyses The total number of patients included in the meta‐analysis in the no rejection or mild rejection group is 1733 patients, and the total number of patients in the moderate or severe rejection group is 264 patients. The total number of patients included in the meta‐analysis in the no rejection or mild rejection group is 1733 patients, and the total number of patients in the moderate or severe rejection group is 264 patients.
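The included studies report ECG sensitivity and specificity against endomyocardial biopsy, the reference standard. As a hedged illustration of how such figures are derived from a two-by-two comparison (the counts below are invented and are not taken from any included study), in R:

```r
# Hypothetical 2x2 comparison of ECG reading versus endomyocardial biopsy (reference standard).
tp <- 42    # ECG positive, biopsy-confirmed rejection (true positives)
fp <- 15    # ECG positive, no rejection on biopsy (false positives)
fn <- 12    # ECG negative, biopsy-confirmed rejection (false negatives)
tn <- 131   # ECG negative, no rejection on biopsy (true negatives)

sensitivity <- tp / (tp + fn)
specificity <- tn / (tn + fp)
c(sensitivity = round(sensitivity, 2), specificity = round(specificity, 2))
```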
RESULTS
We used random effects due to heterogeneity observed among studies when we used fixed effects. In QT (ms) outcome, the pooled analysis between no or mild rejection and moderate or severe rejection was (MD = 3.80, 95% CI = −18.10 to 25.70, p‐value = .73), we observed heterogeneity that was not solved by random effects, as shown in Figure 1. The pooled analyses between no or mild rejection and moderate or severe rejection in QTc, QT dispersion and QTc dispersions outcomes, were (MD = 18.91, 95% CI = −21.30 to 59.11, p‐value = .36), (MD = −68.54, 95% CI = −195.74 to 58.66, p‐value = .29) and (MD = −41.15, 95% CI = −93.26 to 10.96, p‐value = .12), respectively (Figures S2–S4). We did subgroup analysis based on the duration of the follow‐up. The two subgroups were from 3 to 6 months and from hospital discharge to 7 days, the heterogeneity was not solved by subgroup analysis and leave one out test in the QTC subgroups and the results (MD = 5.19, 95% CI = −15.55 to 25.92, p‐value = .62) and (MD = 34.67, 95% CI = −30.99 to 100.33, p‐value = .30), respectively (Figure S5). In the QTc dispersion outcome, the heterogeneity was not solved by subgroup analysis or leave one out test. The results were in the 3–6 months follow‐up subgroup (MD = −68.77, 95% CI = −195.54 to 58.01, p‐value = .29) and in the hospital discharge to 7 days follow‐up subgroup were (MD = 10.00, 95% CI = − 2.17 to 22.17, p‐value = .11) (Figure S6). No publication bias was observed among included studies, as shown in Figure S7. Table 1 describes the characteristics data for each study individually and their citations. Table 2 shows the data of patients and their characteristics, their ages, the type of study, and the duration of the study (the age and the duration are either mean or median). Table 3 shows the characteristics of the ECG recording and the outcomes of the studies (No. of rejections is the number of patients recognized with ECG). The rejection was diagnosed with histology findings and biopsies and then compared with the findings of the ECG to give the definitive diagnosis (Figure 2). Publication bias Figure S7 shows the risk of biases and applicability concerns among the studies distributed as high risk, low risk, and unclear risk.
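The pooled mean differences above were produced in Review Manager 5.4. An equivalent random-effects analysis, with a subgroup split by follow-up duration and a funnel plot for publication bias, can be sketched in R with the metafor package. The study-level summary statistics below are placeholders, not the values extracted by the authors, so the output will not reproduce the reported MDs or confidence intervals.

```r
# Sketch of a random-effects pooled mean difference with subgroup and funnel-plot checks;
# all study-level numbers are placeholders, not data from the review.
library(metafor)

dat <- data.frame(
  study    = paste("Study", 1:5),
  followup = c("3-6 months", "3-6 months", "discharge to 7 days", "discharge to 7 days", "3-6 months"),
  m1i = c(455, 470, 448, 452, 460), sd1i = c(40, 38, 30, 33, 45), n1i = c(30, 45, 60, 52, 77),     # moderate/severe rejection
  m2i = c(450, 452, 446, 450, 447), sd2i = c(35, 36, 29, 30, 42), n2i = c(250, 300, 380, 360, 443) # no/mild rejection
)

es  <- escalc(measure = "MD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)
res <- rma(yi, vi, data = es, method = "REML")   # random-effects model, as used when heterogeneity is present
summary(res)                                     # pooled MD, 95% CI, and I^2 heterogeneity

rma(yi, vi, data = es, subset = followup == "3-6 months", method = "REML")  # subgroup by follow-up duration
funnel(res)                                      # funnel plot for publication bias (cf. Figure S7)
```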
CONCLUSION
In heart transplant recipients, the ECG is a noninvasive measure of early allograft rejection. It holds the potential to reduce the number and severity of rejection episodes. Its time efficiency and low cost make the ECG a good choice for screening, yet specificity and sensitivity varied widely across the included studies. One possible explanation is user error in lead placement and interpretation. To address this shortfall, an artificial intelligence algorithm for reading the ECGs of cardiac transplant recipients should be developed. Given the exceedingly high cost of a heart transplant, we feel further investigation is warranted. We found no significant association between heart transplant rejection and QT changes on the ECG. Although some studies reported a significant association, others did not. The heterogeneity among the studies included in the meta-analysis means the results are not conclusive. More clinical trials are needed before a final conclusion can be drawn about using the ECG to detect heart transplant rejection.
[ "INTRODUCTION", "Selection of studies", "Data extraction", "Study inclusion", "Analyses", "LIMITATIONS & FUTURE DIRECTION", "CONSENT FOR PUBLICATION", "AUTHOR CONTRIBUTIONS" ]
[ "A well‐established treatment for end‐stage heart failure patients is heart transplantation. The median survival of adult patients who received a heart transplant after the year 2000 is over 12 years\n1\n—a significant cause of early mortality in acute allograft rejection. The prevalence of allograft rejection has been reported to exceed 13% in the first year following adult heart transplantation. Thus, if the patient survives the first‐year post‐transplant, they are expected to survive at least 15 years.\n1\n According to the 2011 annual United States data released by the International Society for Heart Lung Transplantation Registry, 26% of heart transplant patients experience at least one rejection episode within the first‐year post‐transplant. The most frequent cause of morbidity and rehospitalization in this patient population remains acute rejection.\n2\n, \n3\n\n\nThe electrocardiogram (ECG) is a simple, cost‐effective, and noninvasive tool used to evaluate the rhythm and electrical activity of the heart. Sensors attached to the skin are used to display the electrical signals generated by your heart on an easy to interpret grid paper.\n4\n Utilizing ECG readings in heart transplant recipients can predict an acute cellular rejection, thus shortening the time to recognize rejection. A recent study examined serial ECGs in 98 patients within the first‐year post‐heart transplantation. The most common abnormalities were associated with intraventricular conduction delays, with the right bundle branch block (RBBB) being the most prevalent.\n5\n, \n6\n\n\nIn cardiac transplant recipients, a noninvasive measure of early allograft rejection can reduce the number and severity of rejection episodes. ECG can reduce the time to detection and the cost of surveillance of rejection.\n6\n In this study, we summarize the diagnostic accuracy and criteria of the ECG in the detection of cardiac transplant rejection patients.", "A systematic literature review was conducted to identify all studies about the detection of graft rejection in heart transplant surgeries per the PRISMA guidelines.\n7\n The online database: Google Scholar, PubMed, and Cochrane were searched from January 1985 to September 2020. Keywords used in the search included (Heart transplant rejection OR Heart transplantation rejection OR Detection of heart transplant rejection OR Cardiac transplant detection OR Cardiac transplantation detection OR Noninvasive ways of detection of cardiac transplant rejection). The screening was completed by Hashim T. Hashim, and Jaffer Sha, with disagreements being resolved by Joseph Varney. There was no restriction on participant's age, gender, or ethnicity, and no restrictions to language written. The references of selected papers were manually checked for additional relating studies. An analysis of the funnel plot was carried out to determine the possibility of bias in the publication in the Review Manager program version 5.4. Inclusion criteria for meta‐analysis were studies that correlated the Endomyocardial biopsy grading to the ECG features.", "Details of the study design, ECG characteristics, endomyocardial biopsy grading, and outcome data, including the QT interval, QTc, QT dispersion, and QTc dispersion, were extracted. The risk of bias was assessed using the Newcastle‐Ottawa Quality Assessment tool.", "After a comprehensive search of the literature, 170 publications resulted and then became 96 after removal of duplicates. Of these, 51 were eligible for full‐text screening. 
After the full‐text screening, 18 studies were included in the systematic review and meta‐analysis, as shown in (Figure 1). Six studies were included in the meta‐analysis. The included QT (ms), QTc, QT dispersion, and QTc dispersion outcomes in the meta‐analysis were reported in 2, 3, and 5 studies. The summary of the included studies and risk of bias assessment are shown in Tables 1, 2, 3, respectively.\nFlow chart of the study selection process\nThe studies' general information\nThe studies' specific data\nThe ECG's characteristics", "The total number of patients included in the meta‐analysis in the no rejection or mild rejection group is 1733 patients, and the total number of patients in the moderate or severe rejection group is 264 patients.", "There is heterogeneity among included studies due to diversity of study designs in the studies included in the meta‐analysis. Few numbers of studies are included in the meta‐analysis due to few data published. Most of the published data are about QT changes with no interest to the other components of the ECG. ECG abnormalities are less sensitive to the mild forms of rejection that occurs with the currently used immunosuppression medications. The use of the SA‐ECG may be combined with the help of the standard ECG to not miss patients with milder rejection who do not have abnormal ECG features.\n28\n Further studies are needed to determine if the frequency or time domain of the SA‐ECG are better predictors of rejection.", "All the authors provided their consents for publication.", "Hashim T. Hashim and Mustafa A. Ramadhan created the idea, wrote the first draft and supervised the work; Joseph Varney, Jaffer Shah, and Shoaib Ahmad did the data collection and studies selection; Karam Ramadan Motawea, Omneya A. Kandil, and Joseph Varney wrote the final draft and did the analysis." ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODOLOGY", "Selection of studies", "Data extraction", "Study inclusion", "Analyses", "RESULTS", "DISCUSSION", "LIMITATIONS & FUTURE DIRECTION", "CONCLUSION", "CONFLICT OF INTERESTS", "CONSENT FOR PUBLICATION", "AUTHOR CONTRIBUTIONS", "Supporting information" ]
[ "A well‐established treatment for end‐stage heart failure patients is heart transplantation. The median survival of adult patients who received a heart transplant after the year 2000 is over 12 years\n1\n—a significant cause of early mortality in acute allograft rejection. The prevalence of allograft rejection has been reported to exceed 13% in the first year following adult heart transplantation. Thus, if the patient survives the first‐year post‐transplant, they are expected to survive at least 15 years.\n1\n According to the 2011 annual United States data released by the International Society for Heart Lung Transplantation Registry, 26% of heart transplant patients experience at least one rejection episode within the first‐year post‐transplant. The most frequent cause of morbidity and rehospitalization in this patient population remains acute rejection.\n2\n, \n3\n\n\nThe electrocardiogram (ECG) is a simple, cost‐effective, and noninvasive tool used to evaluate the rhythm and electrical activity of the heart. Sensors attached to the skin are used to display the electrical signals generated by your heart on an easy to interpret grid paper.\n4\n Utilizing ECG readings in heart transplant recipients can predict an acute cellular rejection, thus shortening the time to recognize rejection. A recent study examined serial ECGs in 98 patients within the first‐year post‐heart transplantation. The most common abnormalities were associated with intraventricular conduction delays, with the right bundle branch block (RBBB) being the most prevalent.\n5\n, \n6\n\n\nIn cardiac transplant recipients, a noninvasive measure of early allograft rejection can reduce the number and severity of rejection episodes. ECG can reduce the time to detection and the cost of surveillance of rejection.\n6\n In this study, we summarize the diagnostic accuracy and criteria of the ECG in the detection of cardiac transplant rejection patients.", "Selection of studies A systematic literature review was conducted to identify all studies about the detection of graft rejection in heart transplant surgeries per the PRISMA guidelines.\n7\n The online database: Google Scholar, PubMed, and Cochrane were searched from January 1985 to September 2020. Keywords used in the search included (Heart transplant rejection OR Heart transplantation rejection OR Detection of heart transplant rejection OR Cardiac transplant detection OR Cardiac transplantation detection OR Noninvasive ways of detection of cardiac transplant rejection). The screening was completed by Hashim T. Hashim, and Jaffer Sha, with disagreements being resolved by Joseph Varney. There was no restriction on participant's age, gender, or ethnicity, and no restrictions to language written. The references of selected papers were manually checked for additional relating studies. An analysis of the funnel plot was carried out to determine the possibility of bias in the publication in the Review Manager program version 5.4. Inclusion criteria for meta‐analysis were studies that correlated the Endomyocardial biopsy grading to the ECG features.\nA systematic literature review was conducted to identify all studies about the detection of graft rejection in heart transplant surgeries per the PRISMA guidelines.\n7\n The online database: Google Scholar, PubMed, and Cochrane were searched from January 1985 to September 2020. 
Keywords used in the search included (Heart transplant rejection OR Heart transplantation rejection OR Detection of heart transplant rejection OR Cardiac transplant detection OR Cardiac transplantation detection OR Noninvasive ways of detection of cardiac transplant rejection). The screening was completed by Hashim T. Hashim, and Jaffer Sha, with disagreements being resolved by Joseph Varney. There was no restriction on participant's age, gender, or ethnicity, and no restrictions to language written. The references of selected papers were manually checked for additional relating studies. An analysis of the funnel plot was carried out to determine the possibility of bias in the publication in the Review Manager program version 5.4. Inclusion criteria for meta‐analysis were studies that correlated the Endomyocardial biopsy grading to the ECG features.\nData extraction Details of the study design, ECG characteristics, endomyocardial biopsy grading, and outcome data, including the QT interval, QTc, QT dispersion, and QTc dispersion, were extracted. The risk of bias was assessed using the Newcastle‐Ottawa Quality Assessment tool.\nDetails of the study design, ECG characteristics, endomyocardial biopsy grading, and outcome data, including the QT interval, QTc, QT dispersion, and QTc dispersion, were extracted. The risk of bias was assessed using the Newcastle‐Ottawa Quality Assessment tool.\nStudy inclusion After a comprehensive search of the literature, 170 publications resulted and then became 96 after removal of duplicates. Of these, 51 were eligible for full‐text screening. After the full‐text screening, 18 studies were included in the systematic review and meta‐analysis, as shown in (Figure 1). Six studies were included in the meta‐analysis. The included QT (ms), QTc, QT dispersion, and QTc dispersion outcomes in the meta‐analysis were reported in 2, 3, and 5 studies. The summary of the included studies and risk of bias assessment are shown in Tables 1, 2, 3, respectively.\nFlow chart of the study selection process\nThe studies' general information\nThe studies' specific data\nThe ECG's characteristics\nAfter a comprehensive search of the literature, 170 publications resulted and then became 96 after removal of duplicates. Of these, 51 were eligible for full‐text screening. After the full‐text screening, 18 studies were included in the systematic review and meta‐analysis, as shown in (Figure 1). Six studies were included in the meta‐analysis. The included QT (ms), QTc, QT dispersion, and QTc dispersion outcomes in the meta‐analysis were reported in 2, 3, and 5 studies. 
The summary of the included studies and risk of bias assessment are shown in Tables 1, 2, 3, respectively.\nFlow chart of the study selection process\nThe studies' general information\nThe studies' specific data\nThe ECG's characteristics\nAnalyses The total number of patients included in the meta‐analysis in the no rejection or mild rejection group is 1733 patients, and the total number of patients in the moderate or severe rejection group is 264 patients.\nThe total number of patients included in the meta‐analysis in the no rejection or mild rejection group is 1733 patients, and the total number of patients in the moderate or severe rejection group is 264 patients.", "A systematic literature review was conducted to identify all studies about the detection of graft rejection in heart transplant surgeries per the PRISMA guidelines.\n7\n The online database: Google Scholar, PubMed, and Cochrane were searched from January 1985 to September 2020. Keywords used in the search included (Heart transplant rejection OR Heart transplantation rejection OR Detection of heart transplant rejection OR Cardiac transplant detection OR Cardiac transplantation detection OR Noninvasive ways of detection of cardiac transplant rejection). The screening was completed by Hashim T. Hashim, and Jaffer Sha, with disagreements being resolved by Joseph Varney. There was no restriction on participant's age, gender, or ethnicity, and no restrictions to language written. The references of selected papers were manually checked for additional relating studies. An analysis of the funnel plot was carried out to determine the possibility of bias in the publication in the Review Manager program version 5.4. Inclusion criteria for meta‐analysis were studies that correlated the Endomyocardial biopsy grading to the ECG features.", "Details of the study design, ECG characteristics, endomyocardial biopsy grading, and outcome data, including the QT interval, QTc, QT dispersion, and QTc dispersion, were extracted. The risk of bias was assessed using the Newcastle‐Ottawa Quality Assessment tool.", "After a comprehensive search of the literature, 170 publications resulted and then became 96 after removal of duplicates. Of these, 51 were eligible for full‐text screening. After the full‐text screening, 18 studies were included in the systematic review and meta‐analysis, as shown in (Figure 1). Six studies were included in the meta‐analysis. The included QT (ms), QTc, QT dispersion, and QTc dispersion outcomes in the meta‐analysis were reported in 2, 3, and 5 studies. The summary of the included studies and risk of bias assessment are shown in Tables 1, 2, 3, respectively.\nFlow chart of the study selection process\nThe studies' general information\nThe studies' specific data\nThe ECG's characteristics", "The total number of patients included in the meta‐analysis in the no rejection or mild rejection group is 1733 patients, and the total number of patients in the moderate or severe rejection group is 264 patients.", "We used random effects due to heterogeneity observed among studies when we used fixed effects. In QT (ms) outcome, the pooled analysis between no or mild rejection and moderate or severe rejection was (MD = 3.80, 95% CI = −18.10 to 25.70, p‐value = .73), we observed heterogeneity that was not solved by random effects, as shown in Figure 1. 
The pooled analyses between no or mild rejection and moderate or severe rejection in QTc, QT dispersion and QTc dispersions outcomes, were (MD = 18.91, 95% CI = −21.30 to 59.11, p‐value = .36), (MD = −68.54, 95% CI = −195.74 to 58.66, p‐value = .29) and (MD = −41.15, 95% CI = −93.26 to 10.96, p‐value = .12), respectively (Figures S2–S4).\nWe did subgroup analysis based on the duration of the follow‐up. The two subgroups were from 3 to 6 months and from hospital discharge to 7 days, the heterogeneity was not solved by subgroup analysis and leave one out test in the QTC subgroups and the results (MD = 5.19, 95% CI = −15.55 to 25.92, p‐value = .62) and (MD = 34.67, 95% CI = −30.99 to 100.33, p‐value = .30), respectively (Figure S5).\nIn the QTc dispersion outcome, the heterogeneity was not solved by subgroup analysis or leave one out test. The results were in the 3–6 months follow‐up subgroup (MD = −68.77, 95% CI = −195.54 to 58.01, p‐value = .29) and in the hospital discharge to 7 days follow‐up subgroup were (MD = 10.00, 95% CI = − 2.17 to 22.17, p‐value = .11) (Figure S6).\nNo publication bias was observed among included studies, as shown in Figure S7.\nTable 1 describes the characteristics data for each study individually and their citations.\nTable 2 shows the data of patients and their characteristics, their ages, the type of study, and the duration of the study (the age and the duration are either mean or median).\nTable 3 shows the characteristics of the ECG recording and the outcomes of the studies (No. of rejections is the number of patients recognized with ECG).\nThe rejection was diagnosed with histology findings and biopsies and then compared with the findings of the ECG to give the definitive diagnosis (Figure 2).\nPublication bias\nFigure S7 shows the risk of biases and applicability concerns among the studies distributed as high risk, low risk, and unclear risk.", "We found no significant association between heart transplant rejection and QT changes of ECG. The studies included in this review report the rejection of the heart transplant after the surgery with either moderate or severe rejection. The results were assured by the biopsy to compare between the results of the ECG and the histology. A total of 957 patients were identified for heart transplant rejection, with 304 diagnosed by ECG (31.7%). The primary method used for diagnosis was the QRS interval and amplitude (see Table 3). Sensitivity and specificity varied widely between our studies (see Figure S3), potentially showing the user error in ECG placement and reading. To date, the only consistently effective approach available for the diagnosis of cardiac transplant rejection is an endomyocardial biopsy.\nThe utility of ECG in this population may have various utilities. Preclinical advances in cardiac transplantation have shown that ECG may indicate a beneficial corticosteroid response.\n26\n After cardiac transplant, the incidence of conduction disorders is well known, and RBBB is the most frequent of these.\n27\n The occurrence of RBBB within 1 month of cardiac transplant might have different clinical consequences from those with later RBBB incidence. Before the ECG can consistently be used to detect acute allograft rejection, an investigation is still required to assess computerized ECG measurement algorithms.\nGiven that ECGs are carried out regularly, QTc time and QTc dispersion measurements could be used to accurately detect acute rejection at the early stage after heart transplantation. 
If further studies confirm our early current findings, it will be possible to implant QT‐driven rate‐sensitive units with periodic interrogation of these units at the time of transplantation. This could nullify the need for endomyocardial biopsy.", "There is heterogeneity among included studies due to diversity of study designs in the studies included in the meta‐analysis. Few numbers of studies are included in the meta‐analysis due to few data published. Most of the published data are about QT changes with no interest to the other components of the ECG. ECG abnormalities are less sensitive to the mild forms of rejection that occurs with the currently used immunosuppression medications. The use of the SA‐ECG may be combined with the help of the standard ECG to not miss patients with milder rejection who do not have abnormal ECG features.\n28\n Further studies are needed to determine if the frequency or time domain of the SA‐ECG are better predictors of rejection.", "In heart transplant recipients, the ECG is a noninvasive measure of early allograft rejection. It holds the potential to reduce the number and severity of rejection episodes of rejection. Time‐efficient and low cost make the ECG a good choice for screening, yet specificity and sensitivity varied widely throughout the studies chosen. Reasonings behind this could be as simple as user error. To avoid this shortfall, an algorism for artificial intelligence reading cardiac transplant rejection patients should be created. With the exceedingly high cost of a heart transplant, we feel further investigation is warranted.\nWe found no significant association between heart transplant rejection and QT changes of ECG. Although some studies reported significant association, other studies did not. There is heterogeneity among studies included in the meta‐analysis, that does not provide conclusive results. More clinical trials are needed to give final conclusion about using ECG as a measure in detecting heart transplant rejections.", "The authors declare that there are no conflict of interests.", "All the authors provided their consents for publication.", "Hashim T. Hashim and Mustafa A. Ramadhan created the idea, wrote the first draft and supervised the work; Joseph Varney, Jaffer Shah, and Shoaib Ahmad did the data collection and studies selection; Karam Ramadan Motawea, Omneya A. Kandil, and Joseph Varney wrote the final draft and did the analysis.", "Supplementary information.\nClick here for additional data file." ]
[ null, "methods", null, null, null, null, "results", "discussion", null, "conclusions", "COI-statement", null, null, "supplementary-material" ]
[ "cardio transplant rejection", "ECG", "heart transplant rejection", "rejection diagnosis" ]
INTRODUCTION: A well-established treatment for end-stage heart failure patients is heart transplantation. The median survival of adult patients who received a heart transplant after the year 2000 is over 12 years, 1 and acute allograft rejection is a significant cause of early mortality. The prevalence of allograft rejection has been reported to exceed 13% in the first year following adult heart transplantation. Thus, if the patient survives the first year post-transplant, they are expected to survive at least 15 years. 1 According to the 2011 annual United States data released by the International Society for Heart and Lung Transplantation Registry, 26% of heart transplant patients experience at least one rejection episode within the first year post-transplant. Acute rejection remains the most frequent cause of morbidity and rehospitalization in this patient population. 2, 3 The electrocardiogram (ECG) is a simple, cost-effective, and noninvasive tool used to evaluate the rhythm and electrical activity of the heart. Sensors attached to the skin detect the electrical signals generated by the heart and display them on easy-to-interpret grid paper. 4 Utilizing ECG readings in heart transplant recipients can predict acute cellular rejection, thus shortening the time to recognition of rejection. A recent study examined serial ECGs in 98 patients within the first year post-heart transplantation. The most common abnormalities were intraventricular conduction delays, with right bundle branch block (RBBB) being the most prevalent. 5, 6 In cardiac transplant recipients, a noninvasive measure of early allograft rejection can reduce the number and severity of rejection episodes. The ECG can reduce the time to detection and the cost of surveillance of rejection. 6 In this study, we summarize the diagnostic accuracy and criteria of the ECG in the detection of rejection in cardiac transplant patients. METHODOLOGY: Selection of studies: A systematic literature review was conducted, per the PRISMA guidelines, to identify all studies on the detection of graft rejection after heart transplant surgery. 7 The online databases Google Scholar, PubMed, and Cochrane were searched from January 1985 to September 2020. Keywords used in the search included (Heart transplant rejection OR Heart transplantation rejection OR Detection of heart transplant rejection OR Cardiac transplant detection OR Cardiac transplantation detection OR Noninvasive ways of detection of cardiac transplant rejection). Screening was completed by Hashim T. Hashim and Jaffer Shah, with disagreements resolved by Joseph Varney. There was no restriction on participants' age, gender, or ethnicity, and no restriction on language of publication. The references of selected papers were manually checked for additional related studies. A funnel plot analysis was carried out in Review Manager version 5.4 to assess the possibility of publication bias. The inclusion criterion for the meta-analysis was studies that correlated endomyocardial biopsy grading with ECG features. Data extraction: Details of the study design, ECG characteristics, endomyocardial biopsy grading, and outcome data, including the QT interval, QTc, QT dispersion, and QTc dispersion, were extracted. The risk of bias was assessed using the Newcastle-Ottawa Quality Assessment tool. Study inclusion: A comprehensive search of the literature yielded 170 publications, which became 96 after removal of duplicates. Of these, 51 were eligible for full-text screening. After full-text screening, 18 studies were included in the systematic review and meta-analysis, as shown in Figure 1 (flow chart of the study selection process). Six studies were included in the meta-analysis. The QT (ms), QTc, QT dispersion, and QTc dispersion outcomes included in the meta-analysis were reported in 2, 3, and 5 studies. The summary of the included studies and the risk of bias assessment are shown in Table 1 (the studies' general information), Table 2 (the studies' specific data), and Table 3 (the ECG characteristics), respectively. Analyses: The total number of patients included in the meta-analysis in the no rejection or mild rejection group was 1733, and the total number of patients in the moderate or severe rejection group was 264. RESULTS: We used random effects due to the heterogeneity observed among studies when fixed effects were used. In the QT (ms) outcome, the pooled analysis between no or mild rejection and moderate or severe rejection was (MD = 3.80, 95% CI = −18.10 to 25.70, p-value = .73); we observed heterogeneity that was not resolved by random effects, as shown in Figure 1. The pooled analyses between no or mild rejection and moderate or severe rejection in the QTc, QT dispersion, and QTc dispersion outcomes were (MD = 18.91, 95% CI = −21.30 to 59.11, p-value = .36), (MD = −68.54, 95% CI = −195.74 to 58.66, p-value = .29), and (MD = −41.15, 95% CI = −93.26 to 10.96, p-value = .12), respectively (Figures S2–S4). We performed a subgroup analysis based on the duration of follow-up. The two subgroups were 3 to 6 months and hospital discharge to 7 days; the heterogeneity in the QTc subgroups was not resolved by subgroup analysis or the leave-one-out test, and the results were (MD = 5.19, 95% CI = −15.55 to 25.92, p-value = .62) and (MD = 34.67, 95% CI = −30.99 to 100.33, p-value = .30), respectively (Figure S5). In the QTc dispersion outcome, the heterogeneity was not resolved by subgroup analysis or the leave-one-out test.
The results in the 3–6 months follow-up subgroup were (MD = −68.77, 95% CI = −195.54 to 58.01, p-value = .29), and in the hospital discharge to 7 days follow-up subgroup they were (MD = 10.00, 95% CI = −2.17 to 22.17, p-value = .11) (Figure S6). No publication bias was observed among the included studies, as shown in Figure S7. Table 1 describes the characteristics of each study individually and their citations. Table 2 shows the patients' data and characteristics, their ages, the type of study, and the duration of the study (the age and the duration are either mean or median). Table 3 shows the characteristics of the ECG recording and the outcomes of the studies (No. of rejections is the number of patients recognized with ECG). Rejection was diagnosed with histology findings and biopsies and then compared with the findings of the ECG to give the definitive diagnosis (Figure 2). Publication bias: Figure S7 shows the risk of bias and applicability concerns among the studies, distributed as high risk, low risk, and unclear risk. DISCUSSION: We found no significant association between heart transplant rejection and QT changes on the ECG. The studies included in this review report heart transplant rejection after surgery as either moderate or severe. The results were confirmed by biopsy so that the ECG findings could be compared with the histology. A total of 957 patients were identified with heart transplant rejection, with 304 diagnosed by ECG (31.7%). The primary method used for diagnosis was the QRS interval and amplitude (see Table 3). Sensitivity and specificity varied widely between the studies (see Figure S3), potentially reflecting user error in ECG placement and reading. To date, the only consistently effective approach available for the diagnosis of cardiac transplant rejection is endomyocardial biopsy. Nevertheless, the ECG may have several uses in this population. Preclinical advances in cardiac transplantation have shown that the ECG may indicate a beneficial corticosteroid response. 26 After cardiac transplant, the incidence of conduction disorders is well known, and RBBB is the most frequent of these. 27 The occurrence of RBBB within 1 month of cardiac transplant might have different clinical consequences from a later RBBB. Before the ECG can consistently be used to detect acute allograft rejection, further investigation is required to assess computerized ECG measurement algorithms. Given that ECGs are carried out regularly, QTc time and QTc dispersion measurements could be used to accurately detect acute rejection at an early stage after heart transplantation. If further studies confirm our current findings, it will be possible to implant QT-driven rate-sensitive units at the time of transplantation, with periodic interrogation of these units. This could nullify the need for endomyocardial biopsy. LIMITATIONS & FUTURE DIRECTION: There is heterogeneity among the included studies due to the diversity of study designs in the meta-analysis. Few studies are included in the meta-analysis because few data have been published. Most of the published data concern QT changes, with little attention to the other components of the ECG. ECG abnormalities are less sensitive to the mild forms of rejection that occur with the currently used immunosuppression medications. The SA-ECG may be combined with the standard ECG so as not to miss patients with milder rejection who do not have abnormal ECG features.
28 Further studies are needed to determine whether the frequency or time domain of the SA-ECG is the better predictor of rejection. CONCLUSION: In heart transplant recipients, the ECG is a noninvasive measure of early allograft rejection. It holds the potential to reduce the number and severity of rejection episodes. Its time efficiency and low cost make the ECG a good choice for screening, yet specificity and sensitivity varied widely throughout the studies chosen. The reasons behind this could be as simple as user error. To avoid this shortfall, an artificial intelligence algorithm for reading the ECGs of cardiac transplant patients with suspected rejection should be created. With the exceedingly high cost of a heart transplant, we feel further investigation is warranted. We found no significant association between heart transplant rejection and QT changes on the ECG. Although some studies reported a significant association, other studies did not. There is heterogeneity among the studies included in the meta-analysis, which prevents conclusive results. More clinical trials are needed before a final conclusion can be reached about using the ECG to detect heart transplant rejection. CONFLICT OF INTERESTS: The authors declare that there are no conflicts of interest. CONSENT FOR PUBLICATION: All the authors provided their consent for publication. AUTHOR CONTRIBUTIONS: Hashim T. Hashim and Mustafa A. Ramadhan created the idea, wrote the first draft, and supervised the work; Joseph Varney, Jaffer Shah, and Shoaib Ahmad did the data collection and study selection; Karam Ramadan Motawea, Omneya A. Kandil, and Joseph Varney wrote the final draft and did the analysis. Supporting information: Supplementary information file.
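The pooled mean differences and 95% confidence intervals reported in the RESULTS section above follow standard random-effects logic. Below is a minimal illustrative sketch (not the authors' Review Manager analysis) of DerSimonian–Laird pooling of a mean difference between the two rejection groups; the per-study means, SDs, and sample sizes are placeholders, not values from the included studies.

import math

# Per-study summary data: (mean, SD, n) for the "no/mild rejection" group (1)
# and the "moderate/severe rejection" group (2). Placeholder values only.
studies = [
    {"m1": 420.0, "sd1": 30.0, "n1": 250, "m2": 424.0, "sd2": 32.0, "n2": 40},
    {"m1": 415.0, "sd1": 28.0, "n1": 300, "m2": 410.0, "sd2": 35.0, "n2": 55},
    {"m1": 430.0, "sd1": 25.0, "n1": 180, "m2": 440.0, "sd2": 29.0, "n2": 30},
]

# Per-study mean difference and its variance (independent groups).
md = [s["m2"] - s["m1"] for s in studies]
var = [s["sd1"] ** 2 / s["n1"] + s["sd2"] ** 2 / s["n2"] for s in studies]

# Fixed-effect (inverse-variance) weights, used to estimate tau^2.
w = [1.0 / v for v in var]
md_fixed = sum(wi * di for wi, di in zip(w, md)) / sum(w)
q = sum(wi * (di - md_fixed) ** 2 for wi, di in zip(w, md))
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)  # between-study variance (DerSimonian-Laird)

# Random-effects weights, pooled estimate, and 95% CI.
w_re = [1.0 / (v + tau2) for v in var]
md_re = sum(wi * di for wi, di in zip(w_re, md)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))
ci = (md_re - 1.96 * se_re, md_re + 1.96 * se_re)
print(f"Pooled MD = {md_re:.2f}, 95% CI = {ci[0]:.2f} to {ci[1]:.2f}")

With real per-study summaries this reproduces the MD and 95% CI format quoted in the results (e.g., "MD = 3.80, 95% CI = −18.10 to 25.70").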
Background: In cardiac transplant recipients, the electrocardiogram (ECG) is a noninvasive measure of early allograft rejection. The ECG can predict acute cellular rejection, thus shortening the time to recognition of rejection. Earlier diagnosis has the potential to reduce the number and severity of rejection episodes. Methods: A systematic literature review was conducted, in accordance with the PRISMA guidelines, to identify and select original research reports on the use of electrocardiography in diagnosing cardiac transplant rejection. Included studies reported the sensitivity and specificity of ECG readings in heart transplant recipients during the first post-transplant year. Data were analyzed with Review Manager version 5.4; P values were used to test for significant differences. Results: After the removal of duplicates, 98 articles were eligible for screening. After full-text screening, a total of 17 papers were included in the review based on the above criteria. A meta-analysis of five studies was performed. Conclusions: In heart transplant recipients, a noninvasive measure of early allograft rejection has the potential to reduce the number and severity of rejection episodes by reducing the time and cost of rejection surveillance and shortening the time to recognition of rejection.
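The QT-derived outcomes pooled in this review (QTc, QT dispersion, QTc dispersion) are standard ECG measurements. The following is a minimal sketch of how they are typically computed; the use of Bazett's correction is an assumption (the included studies may have used other correction formulas), and the per-lead QT values and RR interval are hypothetical.

import math

def qtc_bazett(qt_ms, rr_ms):
    # Bazett: QTc = QT / sqrt(RR), with RR expressed in seconds.
    return qt_ms / math.sqrt(rr_ms / 1000.0)

# Hypothetical per-lead QT measurements (ms) from one 12-lead ECG and its RR interval (ms).
qt_by_lead = [380, 392, 388, 400, 395, 385, 390, 398, 402, 396, 391, 387]
rr_ms = 820

qtc_by_lead = [qtc_bazett(qt, rr_ms) for qt in qt_by_lead]
qt_dispersion = max(qt_by_lead) - min(qt_by_lead)      # QT dispersion (ms): max - min across leads
qtc_dispersion = max(qtc_by_lead) - min(qtc_by_lead)   # QTc dispersion (ms)
print(round(qt_dispersion), round(qtc_dispersion, 1))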
INTRODUCTION: A well-established treatment for end-stage heart failure patients is heart transplantation. The median survival of adult patients who received a heart transplant after the year 2000 is over 12 years, 1 and acute allograft rejection is a significant cause of early mortality. The prevalence of allograft rejection has been reported to exceed 13% in the first year following adult heart transplantation. Thus, if the patient survives the first year post-transplant, they are expected to survive at least 15 years. 1 According to the 2011 annual United States data released by the International Society for Heart and Lung Transplantation Registry, 26% of heart transplant patients experience at least one rejection episode within the first year post-transplant. Acute rejection remains the most frequent cause of morbidity and rehospitalization in this patient population. 2, 3 The electrocardiogram (ECG) is a simple, cost-effective, and noninvasive tool used to evaluate the rhythm and electrical activity of the heart. Sensors attached to the skin detect the electrical signals generated by the heart and display them on easy-to-interpret grid paper. 4 Utilizing ECG readings in heart transplant recipients can predict acute cellular rejection, thus shortening the time to recognition of rejection. A recent study examined serial ECGs in 98 patients within the first year post-heart transplantation. The most common abnormalities were intraventricular conduction delays, with right bundle branch block (RBBB) being the most prevalent. 5, 6 In cardiac transplant recipients, a noninvasive measure of early allograft rejection can reduce the number and severity of rejection episodes. The ECG can reduce the time to detection and the cost of surveillance of rejection. 6 In this study, we summarize the diagnostic accuracy and criteria of the ECG in the detection of rejection in cardiac transplant patients. CONCLUSION: In heart transplant recipients, the ECG is a noninvasive measure of early allograft rejection. It holds the potential to reduce the number and severity of rejection episodes. Its time efficiency and low cost make the ECG a good choice for screening, yet specificity and sensitivity varied widely throughout the studies chosen. The reasons behind this could be as simple as user error. To avoid this shortfall, an artificial intelligence algorithm for reading the ECGs of cardiac transplant patients with suspected rejection should be created. With the exceedingly high cost of a heart transplant, we feel further investigation is warranted. We found no significant association between heart transplant rejection and QT changes on the ECG. Although some studies reported a significant association, other studies did not. There is heterogeneity among the studies included in the meta-analysis, which prevents conclusive results. More clinical trials are needed before a final conclusion can be reached about using the ECG to detect heart transplant rejection.
Background: In cardiac transplant recipients, the electrocardiogram (ECG) is a noninvasive measure of early allograft rejection. The ECG can predict acute cellular rejection, thus shortening the time to recognition of rejection. Earlier diagnosis has the potential to reduce the number and severity of rejection episodes. Methods: A systematic literature review was conducted, in accordance with the PRISMA guidelines, to identify and select original research reports on the use of electrocardiography in diagnosing cardiac transplant rejection. Included studies reported the sensitivity and specificity of ECG readings in heart transplant recipients during the first post-transplant year. Data were analyzed with Review Manager version 5.4; P values were used to test for significant differences. Results: After the removal of duplicates, 98 articles were eligible for screening. After full-text screening, a total of 17 papers were included in the review based on the above criteria. A meta-analysis of five studies was performed. Conclusions: In heart transplant recipients, a noninvasive measure of early allograft rejection has the potential to reduce the number and severity of rejection episodes by reducing the time and cost of rejection surveillance and shortening the time to recognition of rejection.
2,906
226
[ 336, 185, 47, 142, 38, 129, 9, 57 ]
14
[ "rejection", "studies", "ecg", "transplant", "heart", "analysis", "included", "patients", "heart transplant", "meta analysis" ]
[ "rejection cardiac transplant", "transplantation shown ecg", "detecting heart transplant", "heart transplantation studies", "transplant recipients ecg" ]
[CONTENT] cardio transplant rejection | ECG | heart transplant rejection | rejection diagnosis [SUMMARY]
[CONTENT] cardio transplant rejection | ECG | heart transplant rejection | rejection diagnosis [SUMMARY]
[CONTENT] cardio transplant rejection | ECG | heart transplant rejection | rejection diagnosis [SUMMARY]
[CONTENT] cardio transplant rejection | ECG | heart transplant rejection | rejection diagnosis [SUMMARY]
[CONTENT] cardio transplant rejection | ECG | heart transplant rejection | rejection diagnosis [SUMMARY]
[CONTENT] cardio transplant rejection | ECG | heart transplant rejection | rejection diagnosis [SUMMARY]
[CONTENT] Electrocardiography | Graft Rejection | Heart Diseases | Heart Transplantation | Humans | Mass Screening [SUMMARY]
[CONTENT] Electrocardiography | Graft Rejection | Heart Diseases | Heart Transplantation | Humans | Mass Screening [SUMMARY]
[CONTENT] Electrocardiography | Graft Rejection | Heart Diseases | Heart Transplantation | Humans | Mass Screening [SUMMARY]
[CONTENT] Electrocardiography | Graft Rejection | Heart Diseases | Heart Transplantation | Humans | Mass Screening [SUMMARY]
[CONTENT] Electrocardiography | Graft Rejection | Heart Diseases | Heart Transplantation | Humans | Mass Screening [SUMMARY]
[CONTENT] Electrocardiography | Graft Rejection | Heart Diseases | Heart Transplantation | Humans | Mass Screening [SUMMARY]
[CONTENT] rejection cardiac transplant | transplantation shown ecg | detecting heart transplant | heart transplantation studies | transplant recipients ecg [SUMMARY]
[CONTENT] rejection cardiac transplant | transplantation shown ecg | detecting heart transplant | heart transplantation studies | transplant recipients ecg [SUMMARY]
[CONTENT] rejection cardiac transplant | transplantation shown ecg | detecting heart transplant | heart transplantation studies | transplant recipients ecg [SUMMARY]
[CONTENT] rejection cardiac transplant | transplantation shown ecg | detecting heart transplant | heart transplantation studies | transplant recipients ecg [SUMMARY]
[CONTENT] rejection cardiac transplant | transplantation shown ecg | detecting heart transplant | heart transplantation studies | transplant recipients ecg [SUMMARY]
[CONTENT] rejection cardiac transplant | transplantation shown ecg | detecting heart transplant | heart transplantation studies | transplant recipients ecg [SUMMARY]
[CONTENT] rejection | studies | ecg | transplant | heart | analysis | included | patients | heart transplant | meta analysis [SUMMARY]
[CONTENT] rejection | studies | ecg | transplant | heart | analysis | included | patients | heart transplant | meta analysis [SUMMARY]
[CONTENT] rejection | studies | ecg | transplant | heart | analysis | included | patients | heart transplant | meta analysis [SUMMARY]
[CONTENT] rejection | studies | ecg | transplant | heart | analysis | included | patients | heart transplant | meta analysis [SUMMARY]
[CONTENT] rejection | studies | ecg | transplant | heart | analysis | included | patients | heart transplant | meta analysis [SUMMARY]
[CONTENT] rejection | studies | ecg | transplant | heart | analysis | included | patients | heart transplant | meta analysis [SUMMARY]
[CONTENT] heart | rejection | year | transplant | post | year post | transplantation | patients | acute | allograft rejection [SUMMARY]
[CONTENT] studies | rejection | detection | transplant | analysis | included | meta | meta analysis | qtc | heart [SUMMARY]
[CONTENT] value | 95 | md | 95 ci | ci | subgroup | figure | heterogeneity | subgroup analysis | effects [SUMMARY]
[CONTENT] transplant | heart | heart transplant | rejection | studies | ecg | association | cost | significant association | measure [SUMMARY]
[CONTENT] rejection | studies | transplant | ecg | heart | patients | authors | analysis | included | heart transplant [SUMMARY]
[CONTENT] rejection | studies | transplant | ecg | heart | patients | authors | analysis | included | heart transplant [SUMMARY]
[CONTENT] ECG ||| ECG ||| [SUMMARY]
[CONTENT] PRISMA ||| ECG | first ||| Review | 5.4 ||| [SUMMARY]
[CONTENT] 98 ||| 17 ||| five [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ECG ||| ECG ||| ||| PRISMA ||| ECG | first ||| Review | 5.4 ||| ||| 98 ||| 17 ||| five ||| [SUMMARY]
[CONTENT] ECG ||| ECG ||| ||| PRISMA ||| ECG | first ||| Review | 5.4 ||| ||| 98 ||| 17 ||| five ||| [SUMMARY]
Pain in Canadian Veterans: analysis of data from the Survey on Transition to Civilian Life.
25602711
Little is known about the prevalence of chronic pain among Veterans outside the United States.
BACKGROUND
The 2010 Survey on Transition to Civilian Life included a nationally representative sample of 3154 Canadian Armed Forces Regular Force Veterans released from service between 1998 and 2007. Data from a telephone survey of Veterans were linked with Department of National Defence and Veterans Affairs Canada administrative databases. Pain was defined as constant/reoccurring pain (chronic pain) and as moderate/severe pain interference with activities.
METHODS
Forty-one percent of the population experienced constant chronic pain and 23% experienced intermittent chronic pain. Twenty-five percent reported pain interference. Needing help with tasks of daily living, back problems, arthritis, gastrointestinal conditions and age ≥ 30 years were independently associated with chronic pain. Needing help with tasks of daily living, back problems, arthritis, mental health conditions, age ≥ 30 years, gastrointestinal conditions, low social support and noncommissioned member rank were associated with pain interference.
RESULTS
These findings provide evidence for agencies and those supporting the well-being of Veterans, and inform longitudinal studies to better understand the determinants and life course effects of chronic pain in military Veterans.
CONCLUSIONS
[ "Adult", "Aged", "Canada", "Cross-Sectional Studies", "Female", "Humans", "Male", "Middle Aged", "Pain", "Statistics as Topic", "Surveys and Questionnaires", "Veterans", "Young Adult" ]
4391444
null
null
METHODS
Data for the present analysis came from the Survey on Transition to Civilian Life, a cross-sectional study of Canadian Armed Forces (CAF) Regular Force Veterans who were released from service between January 1, 1998 and December 31, 2007. Only 30.4% of Veterans in the target population are Veterans Affairs Canada (VAC) clients. Therefore, a stratified random sample was drawn that included an oversampling of VAC clients (14). Veterans living in institutions, the Territories or outside Canada, serving outside Canada, or still serving in the military were excluded. Of the 4721 Veterans sampled, the response rate was 71%. Ninety-four percent agreed to share responses with VAC and the Department of National Defense (DND), providing a nationally representative sample of 3154. Sixty-six percent were not clients of VAC. The current analysis is based on the sample of 3154 Veterans, which represents a weighted total population of 32,015. Military characteristics, VAC status and demographic characteristics were obtained through data linkage with DND and VAC administrative databases and self-report. Details of this computer-assisted telephone survey conducted by Statistics Canada and the variables contained and extracted from the DND and VAC administrative databases are reported elsewhere (13). The objectives of the Survey on Transition to Civilian Life were to capture self-reported information on health, disability and determinants of health of former CAF members (14). The questionnaire sought information on multiple factors, limiting the ability to collect detailed data on pain. The questionnaire included three questions about pain: “Do you have any pain or discomfort that is always present? (yes/no)”; “Do you have any pain or discomfort that reoccurs from time to time? (yes/no)”; and “During the past four weeks, how much did pain interfere with your normal work (including work both outside the home and housework)? (not at all/a little bit/moderately/quite a bit/extremely)”. The first two questions are similar to questions from the Health Utility Index, and the third question comes from the Short Form-12 Health Outcomes Survey (2,15–17). For the purpose of the present study, pain was defined in two ways: constant or reoccurring pain or discomfort (responded ‘yes’ for questions 1 and 2; which were referred to as chronic pain) and moderate to severe pain that interfered with activities (selected ‘moderately’, ‘quite a bit’ or ‘extremely’ for question 3; which was referred to as pain interference). Several factors known to be associated with chronic pain in the general population (1,2,17–20) were examined in the present analysis, including age, sex, education, marital status, household income and employment status. Health-related characteristics included tobacco and alcohol use, diagnosed chronic physical health conditions, body mass index (BMI) and diagnosed mental health conditions. Activity limitation was measured as needing help with at least one basic or instrumental activity of daily living. In addition, military characteristics, including years of service and rank, were examined. Continuous variables were categorized as follows: 10-year age intervals, household income quartiles, years of service (<2, 2 to 9, 10 to 19, ≥20) and BMI (<25 kg/m2 = underweight/normal, ≥25 kg/m2 to <30 kg/m2 = over-weight and ≥30 kg/m2 = obese). All physical and health-related data were captured through self-report. Details about questions and responses are reported elsewhere (13). 
Missing data were minimal with the exception of alcohol consumption (13% missing) and income (5.0%). An ‘undisclosed’ category was created for these variables to allow for the inclusion of the missing respondents. Statistics Canada respondent sampling weights were applied to account for VAC client status, age, sex and nonresponse (21,22). Data analysis included frequencies and percentages for all independent variables and the two outcome variables. Mean and SD was also calculated for age. Independent variables were cross-tabulated with pain outcome variables. The cross-tabulations were conducted on the sample to ensure adequate sample size to conduct a robust analysis before applying sampling weights. The χ2 test was used to assess for differences in the distribution of demographic and clinical variables for those who have chronic pain and those who do not. The same analyses were conducted for those who have moderate to severe pain interference and those who do not. Statistically significant variables identified in the bivariate analyses were included in the multivariable logistic regression analyses. Backwards manual stepwise methods were used to eliminate variables until the most parsimonious model remained. At each iteration, the variable contributing the least to the model was removed. Only variables with P<0.05 were retained in the final models. Results are expressed as OR and 95% CIs. All analyses were conducted using Stata 11.1 (www.stata.com); P<0.05 was considered to be statistically significant. Ethics approval was obtained from the Queen’s University Research Ethics Board (Kingston, Ontario).
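The analysis steps described above (cross-tabulation of the unweighted sample with chi-square testing, then backwards stepwise logistic regression retaining only covariates with P<0.05) can be sketched as follows. This is illustrative Python, not the Stata 11.1 code actually used; the file name, variable names, and the use of frequency weights for the regression are assumptions, and a fully design-based survey analysis would estimate variances differently.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

df = pd.read_csv("veterans_survey.csv")  # hypothetical analysis file

# Cross-tabulation on the unweighted sample, with a chi-square test of the distribution.
tab = pd.crosstab(df["back_problems"], df["chronic_pain"])
chi2, p, dof, _ = chi2_contingency(tab)

# Backwards stepwise elimination: drop the least significant covariate until all p < 0.05.
candidates = ["back_problems", "arthritis", "gi_condition", "mental_health",
              "needs_help_adl", "age_30_plus", "low_social_support", "ncm_rank"]
while True:
    X = sm.add_constant(df[candidates].astype(float))
    fit = sm.GLM(df["chronic_pain"], X, family=sm.families.Binomial(),
                 freq_weights=df["sample_weight"]).fit()
    pvals = fit.pvalues.drop("const")
    if pvals.max() < 0.05 or len(candidates) == 1:
        break
    candidates.remove(pvals.idxmax())  # remove the weakest contributor

# Odds ratios and 95% CIs from the final model.
params = fit.params.drop("const")
conf = fit.conf_int().drop("const")
or_table = pd.DataFrame({"OR": np.exp(params),
                         "2.5%": np.exp(conf[0]),
                         "97.5%": np.exp(conf[1])})
print(or_table.round(2))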
RESULTS
The mean (± SD) age of Veterans was 44±10 years (range 20 to 67 years), 88% were men and 60% were married. Forty-seven percent had high school education or less. Fifty-three percent were in the military for ≥20 years, 58% were junior or senior noncommissioned officers, 21% were senior officers and 21% were private/recruits. Twenty-four percent of the population reported at least one mental health condition. The highest prevalence was reported for anxiety or depression (20%), followed by anxiety or mood disorder (12%) and post-traumatic stress disorder (11%). Forty-one percent of Veterans experienced constant pain or discomfort, and 23% experienced reoccurrent pain. The weight-adjusted combined prevalence of constant/reoccurring pain (chronic pain) was 64%. The prevalence of chronic pain ranged from 36% (20 to 29 years of age) to 75% (50 to 59 years of age) among men and 35% (20 to 29 years of age) to 74% (40 to 49 years of age) among women. The prevalence of moderate to severe pain interference in the total population of Veterans was 25%. Pain interference ranged from 8% (20 to 29 years of age) to 36% (40 to 49 years of age) among men and 11% (20 to 29 years of age) to 42% (40 to 49 years of age) among women (Figure 1). The distribution of sociodemographic characteristics according to the presence of chronic pain and moderate to severe pain interference are provided in Table 1. χ2 testing indicated that the distribution of all demographic characteristics differed between those with chronic pain and those who did not report chronic pain (P<0.05). Chronic pain was most common in Veterans with the following characteristics: 10 to 19 years of military service, rank of junior noncommissioned officer, 50 to 59 years of age, unemployed, undisclosed alcohol consumption, less than high school education, daily smoker and male sex. Similar findings were present for pain interference, except women were more likely than men to report pain interference. Chronic pain and pain interference were highly prevalent in Veterans with physical conditions such as arthritis, gastrointestinal issues, back problems and respiratory issues (Table 2). Pain was also highly prevalent in the presence of mental health conditions. Eighty-five percent of Veterans with at least one mental health condition had chronic pain and 55% had pain interference. The prevalence of chronic pain was high in Veterans with depression or anxiety (86%), anxiety or mood disorder (88%), and post-traumatic stress disorder (93%). In addition, 95% of individuals with mental health conditions also had at least one physical health condition. The prevalence of chronic pain was highest in the subgroup of Veterans who were obese, compared with overweight and normal/underweight Veterans. Table 3 reports the findings of the most parsimonious model for chronic pain. The strongest association was found for requiring help with activities, followed by back problems and arthritis. Given that back problems and arthritis may be highly correlated with requiring help, the model was rerun after requiring help with activities was removed. The odds of chronic pain increased for back problems (OR 10.89 [95% CI 8.26 to 14.35]), arthritis (OR 9.23 [95% CI 6.24 to 13.66]) and age (30 to 39 years, OR 1.79 [95% CI 1.24 to 2.58]; 40 to 49 years, OR 3.07 [95% CI 2.20 to 4.28]; 50 to 59 years, OR 3.17 [95% CI 2.23 to 4.52]; ≥60 years, OR 1.65 [95% CI 1.04 to 2.63]). Table 4 reports the findings for the pain interference model. 
Similar to the chronic pain model, requiring help with activities had the highest odds of pain interference. Removal of this variable from the model changed the ORs for some of the other variables but not to the same extent as in the chronic pain model; back problems (OR 3.33 [95% CI 2.68 to 4.13]), arthritis (OR 3.77 [95% CI 3.02 to 4.72]), having a mental health condition (OR 2.73 [95% CI 2.16 to 3.44]), older age (40 to 49 years OR 3.00 [95% CI 1.77 to 5.11]; 50 to 59 years OR 2.23 [95% CI 1.29 to 3.88]), having gastrointestinal problems (OR 1.79 [95% CI 1.30 to 2.46]), low social support (OR 1.48 [95% CI 1.19 to 1.83]), being unemployed (OR 2.3 [95% CI 1.85 to 2.9]) and being a junior noncommissioned member remained nonsignificant (OR 1.02 [95% CI 0.65 to 1.59]); however, the OR increased to being the same as the OR for private/recruit (reference category).
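The odds ratios and 95% CIs quoted above come directly from the logistic-regression coefficients: OR = exp(beta) and CI = exp(beta ± 1.96 × SE). A small worked example follows; the beta and SE values are hypothetical, chosen only to approximately reproduce the back-problems estimate reported above.

import math

beta, se = 2.388, 0.141  # hypothetical log-odds coefficient and standard error
or_point = math.exp(beta)             # about 10.9
ci_low = math.exp(beta - 1.96 * se)   # about 8.3
ci_high = math.exp(beta + 1.96 * se)  # about 14.3
print(f"OR {or_point:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")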
null
null
[]
[]
[]
[ "METHODS", "RESULTS", "DISCUSSION" ]
[ "Data for the present analysis came from the Survey on Transition to Civilian Life, a cross-sectional study of Canadian Armed Forces (CAF) Regular Force Veterans who were released from service between January 1, 1998 and December 31, 2007. Only 30.4% of Veterans in the target population are Veterans Affairs Canada (VAC) clients. Therefore, a stratified random sample was drawn that included an oversampling of VAC clients (14). Veterans living in institutions, the Territories or outside Canada, serving outside Canada, or still serving in the military were excluded. Of the 4721 Veterans sampled, the response rate was 71%. Ninety-four percent agreed to share responses with VAC and the Department of National Defense (DND), providing a nationally representative sample of 3154. Sixty-six percent were not clients of VAC. The current analysis is based on the sample of 3154 Veterans, which represents a weighted total population of 32,015. Military characteristics, VAC status and demographic characteristics were obtained through data linkage with DND and VAC administrative databases and self-report. Details of this computer-assisted telephone survey conducted by Statistics Canada and the variables contained and extracted from the DND and VAC administrative databases are reported elsewhere (13).\nThe objectives of the Survey on Transition to Civilian Life were to capture self-reported information on health, disability and determinants of health of former CAF members (14). The questionnaire sought information on multiple factors, limiting the ability to collect detailed data on pain. The questionnaire included three questions about pain: “Do you have any pain or discomfort that is always present? (yes/no)”; “Do you have any pain or discomfort that reoccurs from time to time? (yes/no)”; and “During the past four weeks, how much did pain interfere with your normal work (including work both outside the home and housework)? (not at all/a little bit/moderately/quite a bit/extremely)”. The first two questions are similar to questions from the Health Utility Index, and the third question comes from the Short Form-12 Health Outcomes Survey (2,15–17). For the purpose of the present study, pain was defined in two ways: constant or reoccurring pain or discomfort (responded ‘yes’ for questions 1 and 2; which were referred to as chronic pain) and moderate to severe pain that interfered with activities (selected ‘moderately’, ‘quite a bit’ or ‘extremely’ for question 3; which was referred to as pain interference).\nSeveral factors known to be associated with chronic pain in the general population (1,2,17–20) were examined in the present analysis, including age, sex, education, marital status, household income and employment status. Health-related characteristics included tobacco and alcohol use, diagnosed chronic physical health conditions, body mass index (BMI) and diagnosed mental health conditions. Activity limitation was measured as needing help with at least one basic or instrumental activity of daily living. In addition, military characteristics, including years of service and rank, were examined. Continuous variables were categorized as follows: 10-year age intervals, household income quartiles, years of service (<2, 2 to 9, 10 to 19, ≥20) and BMI (<25 kg/m2 = underweight/normal, ≥25 kg/m2 to <30 kg/m2 = over-weight and ≥30 kg/m2 = obese).\nAll physical and health-related data were captured through self-report. Details about questions and responses are reported elsewhere (13). 
Missing data were minimal with the exception of alcohol consumption (13% missing) and income (5.0%). An ‘undisclosed’ category was created for these variables to allow for the inclusion of the missing respondents.\nStatistics Canada respondent sampling weights were applied to account for VAC client status, age, sex and nonresponse (21,22). Data analysis included frequencies and percentages for all independent variables and the two outcome variables. Mean and SD was also calculated for age. Independent variables were cross-tabulated with pain outcome variables. The cross-tabulations were conducted on the sample to ensure adequate sample size to conduct a robust analysis before applying sampling weights. The χ2 test was used to assess for differences in the distribution of demographic and clinical variables for those who have chronic pain and those who do not. The same analyses were conducted for those who have moderate to severe pain interference and those who do not. Statistically significant variables identified in the bivariate analyses were included in the multivariable logistic regression analyses. Backwards manual stepwise methods were used to eliminate variables until the most parsimonious model remained. At each iteration, the variable contributing the least to the model was removed. Only variables with P<0.05 were retained in the final models. Results are expressed as OR and 95% CIs. All analyses were conducted using Stata 11.1 (www.stata.com); P<0.05 was considered to be statistically significant. Ethics approval was obtained from the Queen’s University Research Ethics Board (Kingston, Ontario).", "The mean (± SD) age of Veterans was 44±10 years (range 20 to 67 years), 88% were men and 60% were married. Forty-seven percent had high school education or less. Fifty-three percent were in the military for ≥20 years, 58% were junior or senior noncommissioned officers, 21% were senior officers and 21% were private/recruits. Twenty-four percent of the population reported at least one mental health condition. The highest prevalence was reported for anxiety or depression (20%), followed by anxiety or mood disorder (12%) and post-traumatic stress disorder (11%).\nForty-one percent of Veterans experienced constant pain or discomfort, and 23% experienced reoccurrent pain. The weight-adjusted combined prevalence of constant/reoccurring pain (chronic pain) was 64%. The prevalence of chronic pain ranged from 36% (20 to 29 years of age) to 75% (50 to 59 years of age) among men and 35% (20 to 29 years of age) to 74% (40 to 49 years of age) among women. The prevalence of moderate to severe pain interference in the total population of Veterans was 25%. Pain interference ranged from 8% (20 to 29 years of age) to 36% (40 to 49 years of age) among men and 11% (20 to 29 years of age) to 42% (40 to 49 years of age) among women (Figure 1).\nThe distribution of sociodemographic characteristics according to the presence of chronic pain and moderate to severe pain interference are provided in Table 1. χ2 testing indicated that the distribution of all demographic characteristics differed between those with chronic pain and those who did not report chronic pain (P<0.05). Chronic pain was most common in Veterans with the following characteristics: 10 to 19 years of military service, rank of junior noncommissioned officer, 50 to 59 years of age, unemployed, undisclosed alcohol consumption, less than high school education, daily smoker and male sex. 
Similar findings were present for pain interference, except women were more likely than men to report pain interference.\nChronic pain and pain interference were highly prevalent in Veterans with physical conditions such as arthritis, gastrointestinal issues, back problems and respiratory issues (Table 2). Pain was also highly prevalent in the presence of mental health conditions. Eighty-five percent of Veterans with at least one mental health condition had chronic pain and 55% had pain interference. The prevalence of chronic pain was high in Veterans with depression or anxiety (86%), anxiety or mood disorder (88%), and post-traumatic stress disorder (93%). In addition, 95% of individuals with mental health conditions also had at least one physical health condition. The prevalence of chronic pain was highest in the subgroup of Veterans who were obese, compared with overweight and normal/underweight Veterans.\nTable 3 reports the findings of the most parsimonious model for chronic pain. The strongest association was found for requiring help with activities, followed by back problems and arthritis. Given that back problems and arthritis may be highly correlated with requiring help, the model was rerun after requiring help with activities was removed. The odds of chronic pain increased for back problems (OR 10.89 [95% CI 8.26 to 14.35]), arthritis (OR 9.23 [95% CI 6.24 to 13.66]) and age (30 to 39 years, OR 1.79 [95% CI 1.24 to 2.58]; 40 to 49 years, OR 3.07 [95% CI 2.20 to 4.28]; 50 to 59 years, OR 3.17 [95% CI 2.23 to 4.52]; ≥60 years, OR 1.65 [95% CI 1.04 to 2.63]).\nTable 4 reports the findings for the pain interference model. Similar to the chronic pain model, requiring help with activities had the highest odds of pain interference. Removal of this variable from the model changed the ORs for some of the other variables but not to the same extent as in the chronic pain model; back problems (OR 3.33 [95% CI 2.68 to 4.13]), arthritis (OR 3.77 [95% CI 3.02 to 4.72]), having a mental health condition (OR 2.73 [95% CI 2.16 to 3.44]), older age (40 to 49 years OR 3.00 [95% CI 1.77 to 5.11]; 50 to 59 years OR 2.23 [95% CI 1.29 to 3.88]), having gastrointestinal problems (OR 1.79 [95% CI 1.30 to 2.46]), low social support (OR 1.48 [95% CI 1.19 to 1.83]), being unemployed (OR 2.3 [95% CI 1.85 to 2.9]) and being a junior noncommissioned member remained nonsignificant (OR 1.02 [95% CI 0.65 to 1.59]); however, the OR increased to being the same as the OR for private/recruit (reference category).", "The present study was the first to explore the epidemiology of chronic pain in Canadian Veterans and, in addition to a Finnish study, one of the only studies outside of the United States. Sixty-four percent of Canadian Veterans experienced constant or intermittent chronic pain or discomfort, and 25% had moderate to severe interference with activities due to pain. After controlling for several significant covariates, there was a strong association between physical health conditions and the presence of chronic pain and moderate to severe pain interference. Mental health conditions were associated with pain interference but not with the presence of chronic pain.\nThe prevalence of 64% with constant or reoccurring chronic pain in our study is higher than the range reported in a recent systematic review of chronic pain in Veterans (25% to 50%) (5); however, our finding of 41% with constant pain is consistent with the review. 
The definition of chronic pain in our study is closest to the measure used in a convenience sample of Veterans from primary care in the VA Connecticut Healthcare System (10), which obtained a prevalence of 48% (10). Our estimate is higher than international estimates of approximately 30% (23) and Canadian estimates that range from 18% to 35% (2,17,24,25). The Canadian National Population Health Survey definition (18%) is based on constant pain only (ie, “are you usually free of pain or discomfort?”), while our definition is based on constant or reoccurring pain or discomfort. The heterogeneity in prevalence estimates in Veteran and civilian populations is partially due to the variability in sampling techniques, populations studied and measurement tools used. Research involving chronic pain is more advanced in the general populations, in whom more studies have been conducted using more detailed and validated measurement tools. This has likely led to the more refined range of estimates in general populations (2,17,23,26).\nSome physical health conditions were up to twice as prevalent in this Veteran population than in the age- and sex-adjusted Canadian general population: back problems, arthritis, gastrointestinal conditions and obesity (13). Back problems, arthritis and gastrointestinal conditions were associated with chronic pain and pain interference. Musculoskeletal disorders and gastrointestinal disorders are commonly associated with chronic pain and discomfort in other populations (27–32). These findings are supported by the association between physical health-related quality of life and chronic pain reported in a previous study involving this study population (11).\nSeventy-one percent of Veterans were overweight or obese, which is slightly higher than the estimate of 60% in the general population (13,33). The prevalence of chronic pain and pain interference was higher for overweight and obese Veterans; however, after adjusting for other factors, BMI was not retained in the final multivariate model. Obesity has been correlated with chronic pain in other studies (20,34,35) and the relationship is believed to be multifactorial (36). The finding of a strong association between chronic pain and physical health conditions, such as musculoskeletal disorders, suggests that painful health conditions for which obesity is a risk factor could have mediated the effect of obesity, which would account for the lack of association between obesity and chronic pain and pain interference in the final models. Furthermore, musculoskeletal disorders may contribute to activity limitation, which may lead to obesity, and vice versa (20,37–40).\nOther physical health problems, such as heart disease and diabetes, were not associated with chronic pain or pain interference in the adjusted model in our study. Physical health conditions accumulate with age; however, some conditions may be more prevalent at earlier ages. For example, arthritis and back problems are more common earlier in life than heart disease or adult-onset diabetes. This was a relatively young population with a mean age of 44 years; thus, larger sample sizes are likely required to detect associations between heart disease or diabetes and chronic pain.\nIn bivariate analysis, 85% of Veterans with at least one mental health condition had chronic pain and 55% had pain interference compared with 58% and 16%, respectively, of Veterans without a mental health condition. 
Additionally, 95% of respondents with a mental health condition also had a physical health condition. In multivariable regression, mental health conditions were not associated with chronic pain independently of chronic musculoskeletal and gastrointestinal conditions. Respondents may have understood the phrase “pain or discomfort” to relate more to physical than mental health conditions. However, chronic pain is well known to have mental health dimensions (41), which could, in part, explain why physical and mental health conditions were independently associated with interference with activity by pain. The importance of the co-occurrence of physical and mental health conditions in the epidemiology of disability is well established in population studies (42–48), and the findings of our study implicate the role of painful physical health conditions together with mental health status in disability in Veterans.\nIn the present study, being employed and having higher levels of income decreased the odds of pain interference, which is supported by the literature (2,20,25). Military rank reflects socioeconomic factors and was also associated with pain interference. In bivariate analysis, noncommissioned member rank was associated with increased odds of pain interference before controlling for physical and mental health conditions. Noncommissioned members had the highest rates of medical release, physical and mental health conditions and activity limitations in this population (48). It is possible that the disproportionate rate of chronic health conditions in former noncommissioned members, especially painful musculoskeletal conditions, are attributable to higher occupational physical demands. Studies in the United States have reported similar gradients in self-reported health according to rank and higher odds of disability discharge in Veterans with physically demanding occupations (49–52).\nThe prevalence of chronic pain and pain interference was highest in Veterans 40 to 59 years of age and prevalence decreased for those ≥60 years of age. Reports describing the relationship between chronic pain and age are inconsistent; however, the results in the current study support the findings in a large European study of chronic pain in the general population in which prevalence declined in the older age groups, particularly after 60 years of age (1). In a systematic review investigating pain in Veterans, only one study identified age as a correlate of chronic pain and higher rates were also reported in younger Veterans compared with older Veterans (5). Specific age categories were not provided. The finding of lower levels of pain in older age groups may be related to expectations that pain is normal with aging and, therefore, individuals may be less likely to report it. Another factor may be that pain interference decreases, because work-related activity usually decreases with age. The socioemotional selectivity theory may also contribute to decreased pain reports with older age. This theory posits that as time horizons shrink (ie, as people age), individuals increasingly focus on positive thoughts or memories (53).\nThe prevalence of chronic pain was similar for women (63%) and men (65%), while women were more likely to report moderate to severe pain interference (30% of women versus 25% of men). 
These findings are similar to a study involving OEF/OIF Veterans in the United States in which women had a lower prevalence of chronic pain compared with men, even after adjusting for other factors, but they were slightly more likely to report moderate to severe pain (7). The lack of a statistically significant association for sex in the final models may be related to a lack of statistical power owing to the low proportion of women compared with men in the study, or factors that moderate the relationship between sex and pain. However, the low numbers of women in the survey limited the analysis. Future prospective studies in larger cohorts of women Veterans are needed to explore these outcomes further.\nA strength of the present study was the ability to generalize findings to all Regular Force Veterans who transitioned to civilian life from the Canadian military between 1998 and 2007. Random sampling and weighting of the sample to the total population to account for VAC status, age, sex and nonresponse also contribute to the generalizability of the findings (13). The good response rate and high consent-to-share rate reduced the likelihood of response bias. Sociodemographic and military characteristics were captured from DND administrative databases rather than self-report, thereby minimizing recall bias.\nA limitation of the present study was that indicators and determinants of health were captured by self-report. There is potential for under- or over-reporting health conditions when relying on self-report. However, self-report has long been used in Canadian population health studies to study determinants of health. Self-report is also the standard means of capturing pain outcomes given that pain is a subjective experience. The three questions used to capture pain outcomes are general, do not capture the level of detail available from other validated survey instruments and do not include a measure of pain duration. However, the questions are similar to those used to measure chronic pain in several Canadian studies (2,15–17), allowing for comparison of pain in Canadian Veterans to Canadians in the general population. The study sample was limited to regular forces members; therefore, these findings cannot be generalized to reservists. The findings do not necessarily apply to CAF personnel who deployed to Afghanistan because it was conducted before the conclusion of Canada’s Afghanistan combat mission, and most of those who deployed were and even today remain in service and, therefore, were not included in the 2010 survey. The findings cannot be generalized to elderly Veterans, given that the oldest participant in the study sample was 67 years of age. A further limitation is the small sample of women included in the study, which limited the ability to adequately assess the relationship between pain and sex. Due to the cross-sectional nature of the study, causality and directionality of the relationship cannot be determined.\nThe present study identified a group of Veterans who have a high prevalence of chronic pain and discomfort and pain-related activity interference, along with associated chronic health conditions and socioeconomic barriers. These findings add to the growing knowledge about chronic pain in Veterans, and offer useful information for providers and agencies supporting Veterans’ well-being. 
The results support the importance of considering physical health conditions when treating Veterans with mental health conditions and chronic pain because 95% of those with mental health conditions had chronic physical health conditions and many chronic physical health conditions are painful. In addition, chronic pain, mental health conditions and physical health conditions are all highly correlated with disability in this population (48). The results of the present study will inform the design of longitudinal studies of chronic pain and research to identify optimum approaches to mitigate chronic pain and the disabling effects of chronic pain in military Veterans." ]
[ "methods", "results", "discussion" ]
[ "Associated factors", "Chronic pain", "Military", "Prevalence", "Veterans" ]
METHODS: Data for the present analysis came from the Survey on Transition to Civilian Life, a cross-sectional study of Canadian Armed Forces (CAF) Regular Force Veterans who were released from service between January 1, 1998 and December 31, 2007. Only 30.4% of Veterans in the target population are Veterans Affairs Canada (VAC) clients. Therefore, a stratified random sample was drawn that included an oversampling of VAC clients (14). Veterans living in institutions, the Territories or outside Canada, serving outside Canada, or still serving in the military were excluded. Of the 4721 Veterans sampled, the response rate was 71%. Ninety-four percent agreed to share responses with VAC and the Department of National Defense (DND), providing a nationally representative sample of 3154. Sixty-six percent were not clients of VAC. The current analysis is based on the sample of 3154 Veterans, which represents a weighted total population of 32,015. Military characteristics, VAC status and demographic characteristics were obtained through data linkage with DND and VAC administrative databases and self-report. Details of this computer-assisted telephone survey conducted by Statistics Canada and the variables contained and extracted from the DND and VAC administrative databases are reported elsewhere (13). The objectives of the Survey on Transition to Civilian Life were to capture self-reported information on health, disability and determinants of health of former CAF members (14). The questionnaire sought information on multiple factors, limiting the ability to collect detailed data on pain. The questionnaire included three questions about pain: “Do you have any pain or discomfort that is always present? (yes/no)”; “Do you have any pain or discomfort that reoccurs from time to time? (yes/no)”; and “During the past four weeks, how much did pain interfere with your normal work (including work both outside the home and housework)? (not at all/a little bit/moderately/quite a bit/extremely)”. The first two questions are similar to questions from the Health Utility Index, and the third question comes from the Short Form-12 Health Outcomes Survey (2,15–17). For the purpose of the present study, pain was defined in two ways: constant or reoccurring pain or discomfort (responded ‘yes’ for questions 1 and 2; which were referred to as chronic pain) and moderate to severe pain that interfered with activities (selected ‘moderately’, ‘quite a bit’ or ‘extremely’ for question 3; which was referred to as pain interference). Several factors known to be associated with chronic pain in the general population (1,2,17–20) were examined in the present analysis, including age, sex, education, marital status, household income and employment status. Health-related characteristics included tobacco and alcohol use, diagnosed chronic physical health conditions, body mass index (BMI) and diagnosed mental health conditions. Activity limitation was measured as needing help with at least one basic or instrumental activity of daily living. In addition, military characteristics, including years of service and rank, were examined. Continuous variables were categorized as follows: 10-year age intervals, household income quartiles, years of service (<2, 2 to 9, 10 to 19, ≥20) and BMI (<25 kg/m2 = underweight/normal, ≥25 kg/m2 to <30 kg/m2 = over-weight and ≥30 kg/m2 = obese). All physical and health-related data were captured through self-report. Details about questions and responses are reported elsewhere (13). 
Missing data were minimal with the exception of alcohol consumption (13% missing) and income (5.0%). An ‘undisclosed’ category was created for these variables to allow for the inclusion of the missing respondents. Statistics Canada respondent sampling weights were applied to account for VAC client status, age, sex and nonresponse (21,22). Data analysis included frequencies and percentages for all independent variables and the two outcome variables. The mean and SD were also calculated for age. Independent variables were cross-tabulated with the pain outcome variables. The cross-tabulations were conducted on the unweighted sample, before applying sampling weights, to ensure an adequate sample size for a robust analysis. The χ2 test was used to assess differences in the distribution of demographic and clinical variables between those with chronic pain and those without. The same analyses were conducted for those with moderate to severe pain interference and those without. Statistically significant variables identified in the bivariate analyses were included in the multivariable logistic regression analyses. Backwards manual stepwise methods were used to eliminate variables until the most parsimonious model remained. At each iteration, the variable contributing the least to the model was removed. Only variables with P<0.05 were retained in the final models. Results are expressed as ORs and 95% CIs. All analyses were conducted using Stata 11.1 (www.stata.com); P<0.05 was considered to be statistically significant. Ethics approval was obtained from the Queen’s University Research Ethics Board (Kingston, Ontario). RESULTS: The mean (± SD) age of Veterans was 44±10 years (range 20 to 67 years), 88% were men and 60% were married. Forty-seven percent had high school education or less. Fifty-three percent were in the military for ≥20 years, 58% were junior or senior noncommissioned officers, 21% were senior officers and 21% were private/recruits. Twenty-four percent of the population reported at least one mental health condition. The highest prevalence was reported for anxiety or depression (20%), followed by anxiety or mood disorder (12%) and post-traumatic stress disorder (11%). Forty-one percent of Veterans experienced constant pain or discomfort, and 23% experienced reoccurring pain. The weight-adjusted combined prevalence of constant/reoccurring pain (chronic pain) was 64%. The prevalence of chronic pain ranged from 36% (20 to 29 years of age) to 75% (50 to 59 years of age) among men and from 35% (20 to 29 years of age) to 74% (40 to 49 years of age) among women. The prevalence of moderate to severe pain interference in the total population of Veterans was 25%. Pain interference ranged from 8% (20 to 29 years of age) to 36% (40 to 49 years of age) among men and from 11% (20 to 29 years of age) to 42% (40 to 49 years of age) among women (Figure 1). The distribution of sociodemographic characteristics according to the presence of chronic pain and moderate to severe pain interference is provided in Table 1. χ2 testing indicated that the distribution of all demographic characteristics differed between those with chronic pain and those who did not report chronic pain (P<0.05). Chronic pain was most common in Veterans with the following characteristics: 10 to 19 years of military service, rank of junior noncommissioned officer, 50 to 59 years of age, unemployed, undisclosed alcohol consumption, less than high school education, daily smoker and male sex.
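Before turning to the pain-interference results, a rough illustration of the modelling strategy described in the analysis above (a bivariate χ2 screen followed by backwards manual stepwise elimination): this minimal sketch uses Python's scipy and statsmodels rather than Stata, variable names are placeholders, and the Statistics Canada sampling weights are omitted for brevity, so it is not the authors' code.

import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

def chi_square_screen(df, outcome, candidates, alpha=0.05):
    # Keep candidate variables whose distribution differs by outcome status.
    significant = []
    for var in candidates:
        _, p, _, _ = chi2_contingency(pd.crosstab(df[var], df[outcome]))
        if p < alpha:
            significant.append(var)
    return significant

def backward_stepwise_logit(df, outcome, predictors, alpha=0.05):
    # Drop the weakest remaining predictor one at a time until every term has P < alpha.
    predictors = list(predictors)
    while predictors:
        X = sm.add_constant(pd.get_dummies(df[predictors], drop_first=True).astype(float))
        fit = sm.Logit(df[outcome].astype(float), X).fit(disp=False)
        pvals = fit.pvalues.drop("const")
        if pvals.max() < alpha:
            return fit  # most parsimonious model; ORs and CIs via np.exp(fit.params), np.exp(fit.conf_int())
        worst = pvals.idxmax()
        # Map the weakest dummy column back to its source variable before removing it (illustrative only).
        predictors = [v for v in predictors if not worst.startswith(v)]
    return None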
Similar findings were present for pain interference, except women were more likely than men to report pain interference. Chronic pain and pain interference were highly prevalent in Veterans with physical conditions such as arthritis, gastrointestinal issues, back problems and respiratory issues (Table 2). Pain was also highly prevalent in the presence of mental health conditions. Eighty-five percent of Veterans with at least one mental health condition had chronic pain and 55% had pain interference. The prevalence of chronic pain was high in Veterans with depression or anxiety (86%), anxiety or mood disorder (88%), and post-traumatic stress disorder (93%). In addition, 95% of individuals with mental health conditions also had at least one physical health condition. The prevalence of chronic pain was highest in the subgroup of Veterans who were obese, compared with overweight and normal/underweight Veterans. Table 3 reports the findings of the most parsimonious model for chronic pain. The strongest association was found for requiring help with activities, followed by back problems and arthritis. Given that back problems and arthritis may be highly correlated with requiring help, the model was rerun after requiring help with activities was removed. The odds of chronic pain increased for back problems (OR 10.89 [95% CI 8.26 to 14.35]), arthritis (OR 9.23 [95% CI 6.24 to 13.66]) and age (30 to 39 years, OR 1.79 [95% CI 1.24 to 2.58]; 40 to 49 years, OR 3.07 [95% CI 2.20 to 4.28]; 50 to 59 years, OR 3.17 [95% CI 2.23 to 4.52]; ≥60 years, OR 1.65 [95% CI 1.04 to 2.63]). Table 4 reports the findings for the pain interference model. Similar to the chronic pain model, requiring help with activities had the highest odds of pain interference. Removal of this variable from the model changed the ORs for some of the other variables but not to the same extent as in the chronic pain model; back problems (OR 3.33 [95% CI 2.68 to 4.13]), arthritis (OR 3.77 [95% CI 3.02 to 4.72]), having a mental health condition (OR 2.73 [95% CI 2.16 to 3.44]), older age (40 to 49 years OR 3.00 [95% CI 1.77 to 5.11]; 50 to 59 years OR 2.23 [95% CI 1.29 to 3.88]), having gastrointestinal problems (OR 1.79 [95% CI 1.30 to 2.46]), low social support (OR 1.48 [95% CI 1.19 to 1.83]), being unemployed (OR 2.3 [95% CI 1.85 to 2.9]) and being a junior noncommissioned member remained nonsignificant (OR 1.02 [95% CI 0.65 to 1.59]); however, the OR increased to being the same as the OR for private/recruit (reference category). DISCUSSION: The present study was the first to explore the epidemiology of chronic pain in Canadian Veterans and, in addition to a Finnish study, one of the only studies outside of the United States. Sixty-four percent of Canadian Veterans experienced constant or intermittent chronic pain or discomfort, and 25% had moderate to severe interference with activities due to pain. After controlling for several significant covariates, there was a strong association between physical health conditions and the presence of chronic pain and moderate to severe pain interference. Mental health conditions were associated with pain interference but not with the presence of chronic pain. The prevalence of 64% with constant or reoccurring chronic pain in our study is higher than the range reported in a recent systematic review of chronic pain in Veterans (25% to 50%) (5); however, our finding of 41% with constant pain is consistent with the review. 
The definition of chronic pain in our study is closest to the measure used in a convenience sample of Veterans from primary care in the VA Connecticut Healthcare System, which obtained a prevalence of 48% (10). Our estimate is higher than international estimates of approximately 30% (23) and Canadian estimates that range from 18% to 35% (2,17,24,25). The Canadian National Population Health Survey definition (18%) is based on constant pain only (ie, “are you usually free of pain or discomfort?”), while our definition is based on constant or reoccurring pain or discomfort. The heterogeneity in prevalence estimates in Veteran and civilian populations is partially due to the variability in sampling techniques, populations studied and measurement tools used. Research involving chronic pain is more advanced in general populations, in which more studies have been conducted using more detailed and validated measurement tools. This has likely led to the more refined range of estimates in general populations (2,17,23,26). Some physical health conditions were up to twice as prevalent in this Veteran population as in the age- and sex-adjusted Canadian general population: back problems, arthritis, gastrointestinal conditions and obesity (13). Back problems, arthritis and gastrointestinal conditions were associated with chronic pain and pain interference. Musculoskeletal disorders and gastrointestinal disorders are commonly associated with chronic pain and discomfort in other populations (27–32). These findings are supported by the association between physical health-related quality of life and chronic pain reported in a previous study involving this study population (11). Seventy-one percent of Veterans were overweight or obese, which is slightly higher than the estimate of 60% in the general population (13,33). The prevalence of chronic pain and pain interference was higher for overweight and obese Veterans; however, after adjusting for other factors, BMI was not retained in the final multivariable model. Obesity has been correlated with chronic pain in other studies (20,34,35) and the relationship is believed to be multifactorial (36). The finding of a strong association between chronic pain and physical health conditions, such as musculoskeletal disorders, suggests that painful health conditions for which obesity is a risk factor could have mediated the effect of obesity, which would account for the lack of association between obesity and chronic pain and pain interference in the final models. Furthermore, musculoskeletal disorders may contribute to activity limitation, which may lead to obesity, and vice versa (20,37–40). Other physical health problems, such as heart disease and diabetes, were not associated with chronic pain or pain interference in the adjusted model in our study. Physical health conditions accumulate with age; however, some conditions may be more prevalent at earlier ages. For example, arthritis and back problems are more common earlier in life than heart disease or adult-onset diabetes. This was a relatively young population with a mean age of 44 years; thus, larger sample sizes are likely required to detect associations between heart disease or diabetes and chronic pain. In bivariate analysis, 85% of Veterans with at least one mental health condition had chronic pain and 55% had pain interference, compared with 58% and 16%, respectively, of Veterans without a mental health condition.
Additionally, 95% of respondents with a mental health condition also had a physical health condition. In multivariable regression, mental health conditions were not associated with chronic pain independently of chronic musculoskeletal and gastrointestinal conditions. Respondents may have understood the phrase “pain or discomfort” to relate more to physical than to mental health conditions. However, chronic pain is well known to have mental health dimensions (41), which could, in part, explain why physical and mental health conditions were each independently associated with pain-related interference with activity. The importance of the co-occurrence of physical and mental health conditions in the epidemiology of disability is well established in population studies (42–48), and the findings of our study implicate painful physical health conditions, together with mental health status, in disability in Veterans. In the present study, being employed and having higher levels of income decreased the odds of pain interference, which is supported by the literature (2,20,25). Military rank reflects socioeconomic factors and was also associated with pain interference. In bivariate analysis, noncommissioned member rank was associated with increased odds of pain interference before controlling for physical and mental health conditions. Noncommissioned members had the highest rates of medical release, physical and mental health conditions and activity limitations in this population (48). It is possible that the disproportionate rate of chronic health conditions in former noncommissioned members, especially painful musculoskeletal conditions, is attributable to higher occupational physical demands. Studies in the United States have reported similar gradients in self-reported health according to rank and higher odds of disability discharge in Veterans with physically demanding occupations (49–52). The prevalence of chronic pain and pain interference was highest in Veterans 40 to 59 years of age, and prevalence decreased for those ≥60 years of age. Reports describing the relationship between chronic pain and age are inconsistent; however, the results of the current study support the findings of a large European study of chronic pain in the general population, in which prevalence declined in the older age groups, particularly after 60 years of age (1). In a systematic review investigating pain in Veterans, only one study identified age as a correlate of chronic pain, and higher rates were also reported in younger Veterans compared with older Veterans (5). Specific age categories were not provided. The finding of lower levels of pain in older age groups may be related to expectations that pain is normal with aging and, therefore, individuals may be less likely to report it. Another factor may be that pain interference decreases because work-related activity usually decreases with age. The socioemotional selectivity theory may also contribute to decreased pain reports with older age. This theory posits that as time horizons shrink (ie, as people age), individuals increasingly focus on positive thoughts or memories (53). The prevalence of chronic pain was similar for women (63%) and men (65%), while women were more likely to report moderate to severe pain interference (30% of women versus 25% of men).
These findings are similar to a study involving OEF/OIF Veterans in the United States, in which women had a lower prevalence of chronic pain compared with men, even after adjusting for other factors, but were slightly more likely to report moderate to severe pain (7). The lack of a statistically significant association for sex in the final models may be related to a lack of statistical power owing to the low proportion of women compared with men in the study, or to factors that moderate the relationship between sex and pain. The low number of women in the survey also limited the analysis, and future prospective studies in larger cohorts of women Veterans are needed to explore these outcomes further. A strength of the present study was the ability to generalize findings to all Regular Force Veterans who transitioned to civilian life from the Canadian military between 1998 and 2007. Random sampling and weighting of the sample to the total population to account for VAC status, age, sex and nonresponse also contribute to the generalizability of the findings (13). The good response rate and high consent-to-share rate reduced the likelihood of response bias. Sociodemographic and military characteristics were captured from DND administrative databases rather than self-report, thereby minimizing recall bias. A limitation of the present study was that indicators and determinants of health were captured by self-report. There is potential for under- or over-reporting of health conditions when relying on self-report. However, self-report has long been used in Canadian population health studies to study determinants of health. Self-report is also the standard means of capturing pain outcomes, given that pain is a subjective experience. The three questions used to capture pain outcomes are general, do not capture the level of detail available from other validated survey instruments and do not include a measure of pain duration. However, the questions are similar to those used to measure chronic pain in several Canadian studies (2,15–17), allowing for comparison of pain in Canadian Veterans with pain in the Canadian general population. The study sample was limited to Regular Force members; therefore, these findings cannot be generalized to reservists. The findings do not necessarily apply to CAF personnel who deployed to Afghanistan, because the survey was conducted before the conclusion of Canada’s Afghanistan combat mission and most of those who deployed were still serving (many remain in service today) and, therefore, were not included in the 2010 survey. The findings cannot be generalized to elderly Veterans, given that the oldest participant in the study sample was 67 years of age. A further limitation is the small sample of women included in the study, which limited the ability to adequately assess the relationship between pain and sex. Owing to the cross-sectional nature of the study, causality and the directionality of relationships cannot be determined. The present study identified a group of Veterans who have a high prevalence of chronic pain and discomfort and pain-related activity interference, along with associated chronic health conditions and socioeconomic barriers. These findings add to the growing knowledge about chronic pain in Veterans, and offer useful information for providers and agencies supporting Veterans’ well-being.
The results support the importance of considering physical health conditions when treating Veterans with mental health conditions and chronic pain because 95% of those with mental health conditions had chronic physical health conditions and many chronic physical health conditions are painful. In addition, chronic pain, mental health conditions and physical health conditions are all highly correlated with disability in this population (48). The results of the present study will inform the design of longitudinal studies of chronic pain and research to identify optimum approaches to mitigate chronic pain and the disabling effects of chronic pain in military Veterans.
Background: Little is known about the prevalence of chronic pain among Veterans outside the United States. Methods: The 2010 Survey on Transition to Civilian Life included a nationally representative sample of 3154 Canadian Armed Forces Regular Force Veterans released from service between 1998 and 2007. Data from a telephone survey of Veterans were linked with Department of National Defence and Veterans Affairs Canada administrative databases. Pain was defined as constant⁄reoccurring pain (chronic pain) and as moderate/severe pain interference with activities. Results: Forty-one percent of the population experienced constant chronic pain and 23% experienced intermittent chronic pain. Twenty-five percent reported pain interference. Needing help with tasks of daily living, back problems, arthritis, gastrointestinal conditions and age ≥ 30 years were independently associated with chronic pain. Needing help with tasks of daily living, back problems, arthritis, mental health conditions, age ≥ 30 years, gastrointestinal conditions, low social support and noncommissioned member rank were associated with pain interference. Conclusions: These findings provide evidence for agencies and those supporting the well-being of Veterans, and inform longitudinal studies to better understand the determinants and life course effects of chronic pain in military Veterans.
null
null
4,008
229
[]
3
[ "pain", "chronic", "chronic pain", "health", "veterans", "conditions", "age", "health conditions", "interference", "years" ]
[ "canadian veterans addition", "canada serving military", "prevalence estimates veteran", "veteran civilian populations", "veterans affairs canada" ]
null
null
null
null
[CONTENT] Associated factors | Chronic pain | Military | Prevalence | Veterans [SUMMARY]
[CONTENT] Associated factors | Chronic pain | Military | Prevalence | Veterans [SUMMARY]
null
[CONTENT] Associated factors | Chronic pain | Military | Prevalence | Veterans [SUMMARY]
null
null
[CONTENT] Adult | Aged | Canada | Cross-Sectional Studies | Female | Humans | Male | Middle Aged | Pain | Statistics as Topic | Surveys and Questionnaires | Veterans | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Canada | Cross-Sectional Studies | Female | Humans | Male | Middle Aged | Pain | Statistics as Topic | Surveys and Questionnaires | Veterans | Young Adult [SUMMARY]
null
[CONTENT] Adult | Aged | Canada | Cross-Sectional Studies | Female | Humans | Male | Middle Aged | Pain | Statistics as Topic | Surveys and Questionnaires | Veterans | Young Adult [SUMMARY]
null
null
[CONTENT] canadian veterans addition | canada serving military | prevalence estimates veteran | veteran civilian populations | veterans affairs canada [SUMMARY]
[CONTENT] canadian veterans addition | canada serving military | prevalence estimates veteran | veteran civilian populations | veterans affairs canada [SUMMARY]
null
[CONTENT] canadian veterans addition | canada serving military | prevalence estimates veteran | veteran civilian populations | veterans affairs canada [SUMMARY]
null
null
[CONTENT] pain | chronic | chronic pain | health | veterans | conditions | age | health conditions | interference | years [SUMMARY]
[CONTENT] pain | chronic | chronic pain | health | veterans | conditions | age | health conditions | interference | years [SUMMARY]
null
[CONTENT] pain | chronic | chronic pain | health | veterans | conditions | age | health conditions | interference | years [SUMMARY]
null
null
[CONTENT] variables | pain | vac | data | health | analyses | kg m2 | kg | m2 | included [SUMMARY]
[CONTENT] pain | 95 ci | ci | years | 95 | chronic | chronic pain | age | years age | veterans [SUMMARY]
null
[CONTENT] pain | chronic | chronic pain | health | veterans | age | ci | 95 ci | years | conditions [SUMMARY]
null
null
[CONTENT] 2010 | 3154 | Canadian Armed Forces Regular Force Veterans | between 1998 and 2007 ||| Veterans | Department of National Defence and Veterans Affairs Canada ||| [SUMMARY]
[CONTENT] Forty-one percent | 23% ||| Twenty-five percent ||| daily | ≥ | 30 years ||| daily | ≥ | 30 years [SUMMARY]
null
[CONTENT] Veterans | the United States ||| 2010 | 3154 | Canadian Armed Forces Regular Force Veterans | between 1998 and 2007 ||| Veterans | Department of National Defence and Veterans Affairs Canada ||| ||| Forty-one percent | 23% ||| Twenty-five percent ||| daily | ≥ | 30 years ||| daily | ≥ | 30 years ||| Veterans | Veterans [SUMMARY]
null
An observational report of intensive robotic and manual gait training in sub-acute stroke.
22329866
The use of automated electromechanical devices for gait training in neurological patients is increasing, yet the functional outcomes of well-defined training programs using these devices and the characteristics of patients that would most benefit are seldom reported in the literature. In an observational study of functional outcomes, we aimed to provide a benchmark for expected change in gait function in early stroke patients, from an intensive inpatient rehabilitation program including both robotic and manual gait training.
BACKGROUND
We followed 103 sub-acute stroke patients who met the clinical inclusion criteria for Body Weight Supported Robotic Gait Training (BWSRGT). Patients completed an intensive 8-week gait-training program comprising robotic gait training (weeks 0-4) followed by manual gait training (weeks 4-8). A change in clinical function was determined by the following assessments taken at 0, 4 and 8 weeks (baseline, mid-point and end-point respectively): Functional Ambulatory Categories (FAC), 10 m Walking Test (10 MWT), and Tinetti Gait and Balance Scales.
METHODS
Over half of the patients made a clinically meaningful improvement on the Tinetti Gait Scale (> 3 points) and Tinetti Balance Scale (> 5 points), while over 80% of the patients improved by at least 1 point on the FAC scale (0-5) and improved walking speed by more than 0.2 m/s. Patients responded positively in gait function regardless of gender, age, aetiology (hemorrhagic/ischemic) or affected hemisphere. The most robust and significant change was observed for patients in FAC categories two and three. The therapy was well tolerated and no patients withdrew for reasons related to the type or intensity of training.
RESULTS
Eight-weeks of intensive rehabilitation including robotic and manual gait training was well tolerated by early stroke patients, and was associated with significant gains in function. Patients with mid-level gait dysfunction showed the most robust improvement following robotic training.
CONCLUSIONS
[ "Exercise Therapy", "Female", "Gait", "Gait Disorders, Neurologic", "Humans", "Male", "Middle Aged", "Recovery of Function", "Robotics", "Stroke", "Stroke Rehabilitation" ]
3305481
Background
The recovery of independent walking is one of the major goals of rehabilitation after stroke, which remains a leading cause of serious long-term disability[1]. More than 30% of patients who have had a stroke do not achieve a complete motor recovery after the rehabilitation process[2,3]. For this reason, new rehabilitation approaches are needed in order to improve quality of life in stroke patients. There is no single approach to rehabilitation of gait after stroke[4]. From physical therapy interventions (such as Bobath, Perfetti and Proprioceptive Neuromuscular Facilitation - PNF) [5,6] to more technological approaches, including the use of Functional Electrical Stimulation (FES) [7] or Body Weight Support Robotic Gait Training (BWSRGT),[8-10] many therapeutic options have been used alone and in combination to improve motor recovery in stroke. The beneficial effects of treadmill training have been extensively investigated over the last fifteen years in stroke patients,[11] including the greater effects of body weight support training[12,13]. Many studies have reported that electromechanical devices have at least the same efficacy as manual gait training, with less effort by the patient and physiotherapist[14-17]. In a recent study, electromechanically assisted gait training was shown to improve independent walking ability (FAC), but not walking speed, compared with conventional gait training in sub-acute and chronic patients[8]. The clinical characteristics of patients who benefit most from BWSRGT are presently unclear. If clinical variables that predispose to a positive response to BWSRGT could be identified, it might be possible to individually tailor the rehabilitation program and include BWSRGT for those patients who would derive the greatest benefit from it in the rehabilitation process[18-20]. The treatment dose (number of hours of therapy and frequency) has been reported to be an important determinant of outcome,[21] but the optimal dose is still uncertain. Robotic gait training using BWSRGT for 4 weeks (5 ×/week) has previously been shown, in a small number of early stroke patients, to be well tolerated, and is reported to improve gait function[22]. We applied a daily intensive program of robotic gait training in our inpatient setting with sub-acute stroke patients, followed by a period of consolidation using manual gait training. We report our observations of functional gain in a large number of patients, using accepted clinical scales to measure gait speed, assistance in locomotion and balance. The main goal of this study is to assist clinical decision making and to help power future randomized controlled clinical trials.
Methods
Subjects 103 subjects with sub-acute stroke (< 6 months post-stroke) were followed in a prospective observational study from March 2006 to March 2009. Thirty-four patients did not complete the 8-week program due to factors not related to the study, such as hospital discharge or medical complications (such as pneumonia or infections). Of the 69 patients who finished the training protocol (49 men, 20 women, mean age 48 ± 11 years), 36 suffered hemorrhagic stroke and 33 ischemic stroke. According to residual deficits, 34 patients presented right hemiparesis, 28 patients presented left hemiparesis and 7 presented tetraparesis. 85.5% of the patients were non-ambulatory (FAC ≤ 2, n = 59) and 14.5% were ambulatory (FAC ≥ 3, n = 10). Mean post-stroke interval was 72 ± 38 days (Table 1). Patient baseline characteristics. Baseline demographic and functional characteristics of the stroke patients enrolled in the 8-week intensive rehabilitation program. Data were collected from patients who were performing the BWSRGT program at the Neurorehabilitation Hospital Institut Guttmann (Badalona, Spain) following the clinical protocols and according to the local Ethics Committee. All patients gave written informed consent prior to enrolment in the study. Stroke patients were included in the study if they were within 180 days post-stroke, presented clinical hemiparesis or tetraparesis, were aged 25-75, were able to participate voluntarily in the study, were not expected to be discharged in the next 8 weeks and had the ability to perform manual gait training with or without external devices (initial FAC 0-4). They were excluded if they had severe cognitive and/or language deficits that precluded them from following instructions, had a contraindication to physical exercise (unstable cardiac status or another pre-morbid condition precluding rehabilitation), had severe spasticity that interfered with robotic function and/or rigid joint contractures/malformations of the lower limbs (> 10 degrees), or were unable to stand even in parallel bars.
Training Intervention and Inpatient Rehabilitation The robotic and manual gait training intervention was part of a comprehensive intensive rehabilitation program consisting of 5 hours per day/5 days per week. The rehabilitation program included occupational therapy, physical therapy, gait training, fitness, sports therapy, hydrotherapy and other activities oriented to achieving the maximum degree of functional independence (including urban tours or cooking training). Patients performed two contiguous 4-week blocks of the intensive rehabilitation program. The duration of each block period was based on previous literature[23,24]. During the first period, patients performed robotic body weight supported gait training (BWSRGT, described below), while in the second 4-week period the BWSRGT was replaced by manual gait training (MGT). In each case the gait training was performed daily as part of the 5-hour rehabilitation activities (Figure 1). Schematic of the intensive rehabilitation program. The rehabilitation period comprised 8 weeks (5 hrs/day, 5 days/week); the first 4 weeks with Body-Weight-Supported-Robotic-Gait Training (BWSRGT) and the last 4 weeks Manual Gait Training. (*) Depending on patient individual needs and clinical goals. The BWSRGT was performed with a Gait Trainer® (Reha-Stim, Berlin), in which the patient is held by a harness with each foot attached to a footplate that moves to mechanically simulate the stance and swing phases of gait, controlling the ankle angle at push-off and heel strike (Figure 2). The duration of the exercise ranged from 20-40 min depending on the tolerance of the patient (when the patient reported excessive fatigue the training was stopped). The percentage of weight unloaded varied between patients and corresponded to the weight that allowed the patient to stand with complete knee extension. Velocity ranged between 0.28 meters per second (m/s) and 0.42 m/s (to prevent overexertion of the patient). Step length was adjusted to the available range of motion for each patient. An elastic strap was placed on the paretic knee to help extension during the complete stance phase if needed (as a passive support). A sub-acute stroke patient during the Robotic Gait training session. During the Body-Weight-Supported-Robotic-Gait Training (BWSRGT) the body weight is slightly unloaded via the use of a harness, while the fixed foot placement on the device ensures a set pattern that mimics human gait with alternating stance and swing phases. The manual gait training (MGT) consisted of gait training over ground, with technical aids and the support of a physiotherapist as needed. Conventional technical aids for stroke patients are considered unilateral, ie knee-ankle-foot orthosis (KAFO), ankle-foot orthosis (AFO), dynamic ankle-foot orthosis (DAFO) and functional electrical stimulation (FES). To provide more stability, crutches, walkers (in tetraparesic patients) or parallel bars were allowed. Gait training was done under the supervision of a physical therapist, who provided verbal instructions and physical assistance to facilitate the swing phase when needed. Our work intends to provide benchmark data on clinical gains and tolerability from this atypically intensive rehabilitation program of consecutive blocks of robotic and manual gait training.
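The BWSRGT protocol above is essentially a small set of per-session bounds. Purely as an illustration (the study describes no software of this kind), those bounds can be captured in a minimal Python configuration object, with the parameter names being this sketch's own invention:

import dataclasses

@dataclasses.dataclass
class BwsrgtSession:
    duration_min: float        # 20-40 min, stopped early on excessive fatigue
    velocity_m_s: float        # 0.28-0.42 m/s, chosen to avoid overexertion
    unloading_pct: float       # % body weight unloaded; enough to stand with full knee extension
    knee_strap: bool = False   # passive elastic strap assisting paretic-knee extension if needed

    def validate(self) -> None:
        assert 20 <= self.duration_min <= 40, "session duration outside the protocol range"
        assert 0.28 <= self.velocity_m_s <= 0.42, "footplate velocity outside the protocol range"
        assert 0 <= self.unloading_pct < 100, "unloading must be a valid percentage of body weight"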
Functional Outcome Assessment The functional outcome assessment battery comprised the Functional Ambulatory Category (FAC),[25] the Tinetti Balance and Gait Scale[26] and the 10 Meter Walking Test[27]. Each outcome was assessed at baseline, mid-point and the end of the study (0, 4 and 8 weeks) by an experienced physiotherapist. The FAC was performed to assess gait ability and autonomy[28]. This ordinal scale includes 6 levels ranging from 0 to 5: FAC 0 = unable to walk, or needing 2 assistants to walk; FAC 1 = able to walk with the constant attention of one assistant; FAC 2 = able to walk with someone for balance support; FAC 3 = able to walk with one assistant beside them to give them confidence; FAC 4 = independent walking but needing some help with stairs or uneven ground; FAC 5 = independent gait function in any given place. The 10 MWT quantifies walking speed, step length and cadence[29]. Patients were permitted to use technical aids during the test. The test was performed three times per evaluation and the mean speed was calculated. Patient gait was videotaped for extra documentation. The Tinetti Gait and Balance Scale examines gait pattern and balance level[30]. The gait subscale ranges from 0 to 12: zero indicates an inability to walk or to perform any of the events of the gait pattern correctly, and 12 indicates a correct gait pattern. The balance subscale ranges from 0 to 16: zero indicates very poor balance and 16 indicates good control of equilibrium. Our centre considers a clinically meaningful change in function to correspond to approximately 2 points on the FAC, 0.20 m/s in walking speed, 3 points on the Tinetti Gait Scale and 5 points on the Tinetti Balance Scale.
Data analysis In this observational prospective study, categorical variables were described by frequency and percentage, and continuous variables by mean and standard deviation (mean ± SD). The median and inter-quartile range (IQR) were used to explore asymmetry when necessary. The non-parametric Friedman test was used to assess differences between the three assessment time points, while the non-parametric Kruskal-Wallis test was used to assess outcome according to initial functional level. The alpha level was set at p < 0.05.
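A minimal Python sketch of the statistics just described (Friedman test across the three assessments, Kruskal-Wallis test of change by initial FAC, and the centre's clinically meaningful walking-speed threshold); the arrays are illustrative stand-ins rather than study data, and scipy is used in place of whatever package the authors used:

import numpy as np
from scipy.stats import friedmanchisquare, kruskal

# Walking speed (m/s) at baseline, 4 weeks and 8 weeks; one row per (hypothetical) patient.
speed = np.array([[0.10, 0.25, 0.40],
                  [0.05, 0.20, 0.30],
                  [0.30, 0.45, 0.60],
                  [0.12, 0.18, 0.35],
                  [0.22, 0.40, 0.55],
                  [0.08, 0.15, 0.28]])
initial_fac = np.array([0, 1, 3, 1, 2, 0])

# Friedman test for differences across the three repeated assessments.
friedman_stat, friedman_p = friedmanchisquare(speed[:, 0], speed[:, 1], speed[:, 2])

# Kruskal-Wallis test of the 8-week change grouped by initial functional level (FAC).
change = speed[:, 2] - speed[:, 0]
groups = [change[initial_fac == level] for level in np.unique(initial_fac)]
kw_stat, kw_p = kruskal(*groups)

# Proportion of patients exceeding the clinically meaningful threshold of 0.20 m/s.
meaningful = float(np.mean(change > 0.20))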
null
null
Conclusions
LC, UC and EM contributed equally with data acquisition, analysis and interpretation as well as study design and manuscript development. DE and MC contributed with data interpretation, intellectual content and critical review of the manuscript. DL contributed with the revision of the last version of the manuscript. MB and JM contributed with the original study design, and manuscript development. All authors read and approved the final manuscript.
[ "Background", "Subjects", "Training Intervention and Inpatient Rehabilitation", "Functional Outcome Assessment", "Data analysis", "Results and Discussion", "Functional Ambulatory Category", "Walking speed", "Tinetti Gait Scale", "Tinetti Balance Scale", "Patient demographic and initial clinical status", "Discussion", "Conclusions" ]
[ "The recovery of independent walking is one of the major goals of rehabilitation after stroke, remaining as a leading cause of serious long-term disability[1]. More than 30% of patients who have had a stroke do not achieve a complete motor recovery after the rehabilitation process[2,3]. For this reason, new rehabilitation approaches are needed in order to improve quality of life in stroke patients.\nThere is no unique approach for rehabilitation of gait after stroke[4]. From physical therapy interventions (such as Bobath, Perfetti, Propioceptive Neuromuscular Facilitation - PNF) [5,6] to more technological approaches including the use of Functional Electric Stimulation (FES) [7] or Body Weight Support Robotic Gait Training (BWSRGT),[8-10] many therapeutic options have been used alone and combined to improve motor recovery in stroke. The beneficial effects of treadmill training have been extensively investigated in the last fifteen years in stroke patients,[11] including the greater effects of the body weight support training[12,13]. Many studies have reported that electromechanical devices have at least the same efficacy as manual gait training with less effort by the patient and physiotherapist[14-17]. In a recent study, electromechanical assisted gait training has been shown to improve the independent walking ability (FAC) but not the walking speed when compared sub-acute and chronic patients that received conventional gait training[8].\nThe clinical characteristics of patients who benefit most from BWSRGT are presently unclear. If clinical variables that predispose to a positive response to BWSRGT could be identified, it might be possible to individually tailor the rehabilitation program and include BWSRGT for those patients who would have the greatest benefit in the rehabilitation process[18-20].\nThe treatment dose (number of hours of therapy and frequency) has been reported as a valuable variable of outcome,[21] but the optimal dose is still uncertain. Robotic gait training using the BWSGT for 4 weeks (5 ×/week) has previously shown in a small number of early stroke patients to be well tolerated, and is reported to improve gait function[22]. We applied a daily intensive program with robotic gait training in our inpatient setting of sub-acute stroke patients, and then followed with a period of consolidation using manual gait training. We report our observations of functional gain in a large number of patients by using accepted clinical scales to measure gait speed, assistance in locomotion and balance. The main goal of this study is to assist with clinical decision making and to power randomized controlled clinical trials.", "103 subjects with sub-acute stroke (< 6 months post-stroke) were followed in a prospective observational study from March 2006 to March 2009. Thirty-four patients did not complete the 8-week program due to factors not related to the study such as hospital discharge, or medical complications (such as pneumonia or infections). Of the 69 patients who finished the training protocol (49 men, 20 women, mean age 48 ± 11 years), 36 suffered hemorrhagic stroke and 33 ischemic stroke. According to residual deficits, 34 patients presented right hemiparesis, 28 patients presented left hemiparesis and 7 presented tetraparesis. 85.5% of the patients were non-ambulatory (FAC ≤ 2, n = 59) and 14.5% were ambulatory patients (FAC ≥ 3, n = 10). 
Mean post-stroke interval was 72 ± 38 days (Table 1).\nPatient baseline characteristics.\nBaseline demographic and functional characteristics of the stroke patients enrolled in the 8-week intensive rehabilitation program.\nData collection was obtained from patients who were performing BWSRGT program at the Neurorehabilitation Hospital Institut Guttmann (Badalona, Spain) following the clinical protocols and according to the local Ethics Committee. All patients gave written informed consent prior to enrolment in the study.\nStroke patients were included in the study if they were within 180 days post-stroke, presented clinical hemiparesis or tetraparesis, aged 25-75, were able to voluntarily participate into the study, were not expected to be discharged in the next 8-weeks and had the ability to perform manual gait training with or without external devices (initial FAC 0-4).\nThey were excluded if they had severe cognitive and/or language deficits that precluded them from following instructions, had contraindication for physical exercise (unstable cardiac status, or other pre-morbid condition that discard them from rehabilitation), had severe spasticity that interferes with robotic function or/and rigid join contractures/malformations on the lower limbs (> 10 degrees), or had inability to stand even in parallel bars.", "The robotic and manual gait training intervention was part of a comprehensive intensive rehabilitation program consisting of 5 hours per day/5 days per week. The rehabilitation program included occupational therapy, physical therapy, gait training, fitness, sports therapy, hydrotherapy and other activities oriented to achieve the maximum degree of functional independence (including urban tours or cooking training).\nPatients performed two contiguous blocks of 4-weeks of intensive rehabilitation program. The duration of each block period was based on previous literature[23,24]. During the first period, patients performed robotic body weight supported gait training (BWSRGT, described below), while in the second 4-week period the BWSRGT was substituted for manual gait training (MGT). In each case the gait training was performed daily as part of the 5-hour rehabilitation activities (Figure 1).\nSchematic of the intensive rehabilitation program. The rehabilitation period comprised 8 weeks (5 hrs/day, 5 days/week); the first 4 weeks with Body-Weight-Supported-Robotic-Gait Training (BWSRGT) and the last 4 weeks Manual Gait Training. (*) Depending on patient individual needs and clinical goals.\nThe BWSRGT was performed with a Gait Trainer® (Reha-Stim, Berlin), where patient is held by a harness with each foot attached to a footplate that moves to mechanically simulate the stance and swing phase of gait controlling the ankle angle at push off and heel strike (Figure 2). The duration of the exercise ranged from 20-40 min depending on the tolerance of the patients (when patient reported excessive fatigue the training would stop). The percentage of weight unloaded varied between patients and corresponded to the weight that allowed the patient to stand with complete knee extension. Velocity ranged between 0.28 meter per second (m/s) and 0.42 m/s (to prevent overexertion of the patient). Step length was adjusted to the available range-of-motion for each patient. An elastic strap was placed on the paretic knee to help extension during the complete stance phase if needed (as a passive support).\nA sub-acute stroke patient during the Robotic Gait training session. 
During the Body-Weight-Supported-Robotic-Gait Training (BWSRGT) the body weight is slightly unloaded via the use of a harness, while the fixed foot placement on the device ensures a set pattern that mimics human gait with alternate stance and swing phase.\nThe manual gait training (MGT) consisted of gait training over ground, with technical aids and the support of a physiotherapist as needed. Conventional technical aids for stroke patients are considered unilateral, ie knee-ankle-foot orthosis (KAFO), ankle foot orthosis (AFO), dynamic ankle foot orthosis (DAFO) and functional electrical stimulation (FES). To provide more stability, crutches, walkers (in tetraparesic patients), or parallel bars were allowed. Gait training was done under the supervision of a physical therapist, who provided verbal instructions and physical assistance to facilitate the swing phase when needed.\nOur work intends to provide benchmark clinical gains data and tolerability from this atypically intensive rehabilitation program of consecutive blocks of robotic and manual gait training.", "The functional outcome assessment battery comprised: Functional Ambulatory Category (FAC),[25] Tinetti Balance and Gait Scale[26] and 10 Meter Walking Test[27]. Each outcome was assessed at baseline, mid-point, and at the end of the study (0, 4, 8 weeks) by an experienced physiotherapist.\nThe FAC was performed to assess gait ability and autonomy[28]. This ordinary scale includes 6 levels ranging from 0 to 5. FAC = 0 means no ability to walk or needed 2 assistants to help them walk, FAC 1 = able to walk with the constant attention of one assistant, FAC 2 = able to walk with someone for balance support, FAC 3 = able to walk with one assistant beside them to give them confidence, FAC 4 = independent walking but need some help with stairs or uneven ground, FAC 5 = independent for gait function in any given place.\nThe 10 MWT quantifies walking speed, step length and cadence[29]. Patients were permitted to use technical aids during the test. This test was performed three times per evaluation and the mean speed was calculated. Patient gait was videotaped for extra documentation.\nThe Tinetti Gait and Balance Scale examines gait pattern and balance level[30]. The gait subscale ranges from 0 to 12, zero indicates and inability to walk or unable to perform any of the events of the gait pattern correctly, and 12 indicates a correct gait pattern. The balance subscale ranges from 0 to 16, zero indicates very poor balance and 16 good control of the equilibrium.\nOur centre considers a clinically meaningful change in function to correspond with approximately 2 points in the FAC, 0.20 m/s in the walking speed, 3 points in Tinetti Gait Scale and 5 points in the Tinetti balance test.", "In this observational prospective study we used categorical variables described by frequency and percentage, and continuous variables described by mean and standard deviation (mean ± SD). Median and inter-quartile range (IQR) was used to explore asymmetry when necessary. The non-parametric Friedman test was used to assess differences between the 3 assessment measures, while the non-parametric Kruskall-Wallis test was used to assess outcome based on initial functional level. The alpha level was set at p < 0.05.", "We report results from 69 patients who completed the intensive training program, showing significant improvements for each outcome (Figure 3).\nFunctional outcome results. 
Improvement in functional outcome across the robotic and manual training period (mean ± SD). For each outcome: (a) FAC (b) walking speed (c) Tinetti Gait and (d) Tinetti Balance, there was a significant increase following the robotic training, and further consolidation from the follow-up manual gait training.\n Functional Ambulatory Category Across the 8-week intensive rehabilitation period FAC score improved by 45% across the period of robotic training (baseline 1.30 points ± 1.23, mid-point 2.37 ± 1.51, p < 0.001), and improved by a further 31% following the manual training (final assessment 3.14 ± 1.52, p < 0.001). 88% of patients improved by one point or more on the FAC scale across the 8-week intervention period.\nAcross the 8-week intensive rehabilitation period FAC score improved by 45% across the period of robotic training (baseline 1.30 points ± 1.23, mid-point 2.37 ± 1.51, p < 0.001), and improved by a further 31% following the manual training (final assessment 3.14 ± 1.52, p < 0.001). 88% of patients improved by one point or more on the FAC scale across the 8-week intervention period.\n Walking speed Walking speed improved by 46% across the period of robotic training (baseline 0.17 m/s ± 0.22, mid-point 0.33 ± 0.33, p < 0.001), and improved by a further 22% following the manual training (final 0.48 ± 0.41, p < 0.001). Across the full training period 83% of patients improved more than 0.20 m/s.\nWalking speed improved by 46% across the period of robotic training (baseline 0.17 m/s ± 0.22, mid-point 0.33 ± 0.33, p < 0.001), and improved by a further 22% following the manual training (final 0.48 ± 0.41, p < 0.001). Across the full training period 83% of patients improved more than 0.20 m/s.\n Tinetti Gait Scale Tinetti Gait Scale improved by 45% across the period of robotic training (baseline 4.10 ± 3.10, mid-point 7.10 ± 3.16, p < 0.001), and improved by a further 18% following the manual training (final 8.69 ± 3.10, p < 0.001). During the 8-week rehabilitation period 56% of patients improved more than 3 points on Tinetti Gait Scale.\nTinetti Gait Scale improved by 45% across the period of robotic training (baseline 4.10 ± 3.10, mid-point 7.10 ± 3.16, p < 0.001), and improved by a further 18% following the manual training (final 8.69 ± 3.10, p < 0.001). During the 8-week rehabilitation period 56% of patients improved more than 3 points on Tinetti Gait Scale.\n Tinetti Balance Scale The Tinetti Balance Scale score improved by 38% across the period of robotic training (baseline 6.14 ± 3.84, mid-point 9.92 ± 4.13, p < 0.001), and it improved by a further 18% following the manual training (final 12.15 ± 4.19, p < 0.001). Across the 8-week rehabilitation period 65% of patients improved more than 5 points.\nThe Tinetti Balance Scale score improved by 38% across the period of robotic training (baseline 6.14 ± 3.84, mid-point 9.92 ± 4.13, p < 0.001), and it improved by a further 18% following the manual training (final 12.15 ± 4.19, p < 0.001). Across the 8-week rehabilitation period 65% of patients improved more than 5 points.\n Patient demographic and initial clinical status The functional changes according to age, gender, initial FAC, initial Gait and Balance Tinetti and initial walking speed, were analyzed during the 4-week BWSRGT period, the 4-week manual gait training period and at the 8-week total intensive rehabilitation period. 
Patients with an initial FAC of 2 or 3 showed significantly greater changes in walking speed than patients with other initial FAC levels across the 4-week robotic training protocol (Table 2). These changes were maintained over the 4 weeks of manual gait training. The change in each outcome was not significantly influenced by the variables gender, age, aetiology (hemorrhagic/ischemic), affected hemisphere, initial walking speed, initial Tinetti Balance or initial Tinetti Gait.\nChange in outcome measures stratified by initial Functional Ambulatory Category (FAC).\nThe table describes the change in the Tinetti Gait, Tinetti Balance, and Walking Speed during the Body Weight Support Robotic Gait Trainer (BWSRGT) training (0-4 weeks); during the Manual Gait Training (MGT) period (4-8 weeks); and over the 8-week total intensive rehabilitation period. Subjects with initial FAC 2 and 3 showed significant improvement in walking speed across the robotic training period. (*) p < 0.05.\nDuring the 8-week training period the non-ambulatory patients (FAC ≤ 2) went from 85.5% (n = 59) at baseline to 34% (n = 24) at the end-point, and the ambulatory patients (FAC ≥ 3) went from 14.5% (n = 10) at baseline to 65.2% (n = 45) at the end-point (Table 3).\nNumber of patients in each Functional Ambulatory Category (FAC) over the treatment period.\nAt commencement of the study, the majority of patients were in the low FAC levels, with no patients in Category 5. This distribution changed across the weeks to give a more even spread across categories at week 8, including 16 patients in the highest category and only 3 remaining in the lowest category. The trend is highlighted by dividing the FAC into non-ambulatory (FAC ≤ 2) and ambulatory (FAC ≥ 3) groups, as illustrated in the short counting sketch further below: the majority moved from non-ambulatory (85.5% of patients, n = 59) at commencement to ambulatory by completion of the training program (65.2%, n = 45).", 
"The results of this observational study provide evidence that a comprehensive and intensive eight-week rehabilitation program including BWSRGT followed by Manual Gait Training in patients early after stroke can lead to improvement in all functional outcomes, independently of patient demographics or initial functional status. However, the results indicate that patients with an initial FAC level of 2 or 3 obtained the most benefit. The intensive rehabilitation program was well tolerated, and no patients withdrew for factors related to the gait training or the high training dose.\nOther research groups have studied the improvement of walking ability in sub-acute stroke patients by combining methods of gait training, showing that intensive locomotor training on an electromechanical gait trainer plus physiotherapy resulted in significantly better gait ability and daily living competence compared with physiotherapy alone[22,31].\nHowever, many studies have failed to show significant differences in the gain of functional scores when comparing robot-driven gait orthosis training with conventional physiotherapy[32]. In the study by Peurala et al,[33] chronic ambulatory patients regained the same walking ability when they received body weight supported training with or without FES compared with an over-ground walking exercise training program.\nThe dose or intensity of training seems to influence the improvement in walking ability, since our study shows greater gains than studies with only 20 or 30 min of daily therapy for 3 to 4 weeks[32,33]. Higher intensity of gait practice, in line with modern principles of motor learning, probably explains the superior results. The total amount of rehabilitation given to the patients is higher than reported in other studies[22,34] and our results should be considered in the context of high-intensity rehabilitation in sub-acute stroke.\nIn our study, after 8 weeks of intensive rehabilitation we found gait improvements in one or more of the outcome measures in 95.54% of patients. This finding is higher than the recovery reported by other authors[23,35] and may indicate that the higher dose in our rehabilitation program can lead to greater improvement of motor function in sub-acute stroke patients. To determine the magnitude of the improvement attributed to gait training, a comparison with an experimental control group without gait training would need to be done. 
With this in mind, the results of the present study should be used as a benchmark for expected change, to aid clinical decision-making and to power controlled clinical research studies.\nThe selected functional outcome measures were sensitive enough to detect change across patients and may be suitable for use in future studies. Care should be taken when interpreting functional scales that may include the use of assistive technology (as is permitted in the 10 MWT). The underlying factors of patient performance leading to improved scores on each outcome measure are difficult to determine from the present study. For example, the improved score on the Tinetti balance test could be a cause of the improved gait, since various balance functions are known to affect gait[36,37]. The more the patient sways, the worse the balance and, consequently, the gait parameters[38]. According to Kollen et al[39] the recovery of independent gait is highly dependent on improvements in the control of standing balance. These results are in line with our study, where the gait speed and functional ambulatory measures improved in parallel with the balance measures.\nA pertinent finding of our study is that patients with a mid-range FAC at admission obtained the most benefit, which raises the possibility that FAC could be tested as a clinical predictor of recovery, although Masiero et al[20] did not find a correlation between FAC and motor recovery during conventional rehabilitation programs. Other studies have shown that initial level of paresis[40] or trunk control[41] could be used as clinical predictors of balance and gait for rehabilitation in sub-acute patients. Moreover, previous studies have reinforced the excellent reliability, good predictive validity and responsiveness of the FAC in sub-acute stroke patients, and it has been proposed to predict community ambulation with high sensitivity and specificity[28]. We provide evidence suggesting that FAC, but not initial walking speed (as reported by Barbeau[42]), could be useful as a predictor of outcome.\nOne limitation of the present study is that its design does not allow comparison of the robotic gait training with the same amount (60 min) of conventional gait therapy, each combined with an intensive rehabilitation program. The characteristics of the patients who would benefit most from this type of combined gait therapy (robotic + conventional) remain unclear. In our study, patients ranged from the early phase of recovery to 3 months after the injury, when the largest gains are observable;[43-45] however, some studies have found improvements in gait function in late phases of recovery[46]. The optimum dose of therapy is another open question for future studies. Even if daily therapy seems to be a decisive factor in the success of training programs early after stroke,[14,47] results differ on the ideal frequency of training in chronic phases[48,49].", "This study shows that gait training using the BWSRGT is associated with improved walking function in individuals with sub-acute stroke, and that it is feasible and safe to combine with a comprehensive and intensive functional rehabilitation program over 8 weeks. Further studies need to address whether the improved walking parameters after a combined and intensive gait rehabilitation program are maintained over time. 
Moreover, the optimal training dose and characteristics (frequency and duration), as well as the precise gait parameters associated with training responsiveness, need further research." ]
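As a concrete illustration of the statistical approach described in the Data analysis section above (a Friedman test across the three assessment points and a Kruskal-Wallis test by initial functional level), the following is a minimal Python sketch. It is not the authors' code: the patient values and variable names (week0, week4, week8, initial_fac) are invented placeholders, and only the choice of tests and the 0.05 alpha level come from the text.

```python
# Minimal sketch of the analysis described in the Data analysis section:
# a Friedman test across the three assessment points (weeks 0, 4, 8) and a
# Kruskal-Wallis test of the 0-8 week change by initial FAC level.
# All patient values are invented placeholders, not study data.
import numpy as np
from scipy.stats import friedmanchisquare, kruskal

# Hypothetical walking speeds (m/s) for six patients at weeks 0, 4 and 8.
week0 = np.array([0.10, 0.20, 0.05, 0.30, 0.15, 0.25])
week4 = np.array([0.25, 0.38, 0.15, 0.45, 0.30, 0.41])
week8 = np.array([0.40, 0.55, 0.25, 0.62, 0.43, 0.58])

# Friedman test: do the repeated measures differ across the three assessments?
stat, p = friedmanchisquare(week0, week4, week8)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Kruskal-Wallis test: does the 0-8 week change differ by initial FAC group?
change = week8 - week0
initial_fac = np.array([1, 1, 2, 2, 3, 3])  # hypothetical baseline FAC levels
groups = [change[initial_fac == level] for level in np.unique(initial_fac)]
stat, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")

ALPHA = 0.05  # significance threshold used in the study
```

In the study itself the same logic would be applied to each outcome (FAC, walking speed, Tinetti Gait and Tinetti Balance) over the full patient sample.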
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Subjects", "Training Intervention and Inpatient Rehabilitation", "Functional Outcome Assessment", "Data analysis", "Results and Discussion", "Functional Ambulatory Category", "Walking speed", "Tinetti Gait Scale", "Tinetti Balance Scale", "Patient demographic and initial clinical status", "Discussion", "Conclusions" ]
[ "The recovery of independent walking is one of the major goals of rehabilitation after stroke, remaining as a leading cause of serious long-term disability[1]. More than 30% of patients who have had a stroke do not achieve a complete motor recovery after the rehabilitation process[2,3]. For this reason, new rehabilitation approaches are needed in order to improve quality of life in stroke patients.\nThere is no unique approach for rehabilitation of gait after stroke[4]. From physical therapy interventions (such as Bobath, Perfetti, Propioceptive Neuromuscular Facilitation - PNF) [5,6] to more technological approaches including the use of Functional Electric Stimulation (FES) [7] or Body Weight Support Robotic Gait Training (BWSRGT),[8-10] many therapeutic options have been used alone and combined to improve motor recovery in stroke. The beneficial effects of treadmill training have been extensively investigated in the last fifteen years in stroke patients,[11] including the greater effects of the body weight support training[12,13]. Many studies have reported that electromechanical devices have at least the same efficacy as manual gait training with less effort by the patient and physiotherapist[14-17]. In a recent study, electromechanical assisted gait training has been shown to improve the independent walking ability (FAC) but not the walking speed when compared sub-acute and chronic patients that received conventional gait training[8].\nThe clinical characteristics of patients who benefit most from BWSRGT are presently unclear. If clinical variables that predispose to a positive response to BWSRGT could be identified, it might be possible to individually tailor the rehabilitation program and include BWSRGT for those patients who would have the greatest benefit in the rehabilitation process[18-20].\nThe treatment dose (number of hours of therapy and frequency) has been reported as a valuable variable of outcome,[21] but the optimal dose is still uncertain. Robotic gait training using the BWSGT for 4 weeks (5 ×/week) has previously shown in a small number of early stroke patients to be well tolerated, and is reported to improve gait function[22]. We applied a daily intensive program with robotic gait training in our inpatient setting of sub-acute stroke patients, and then followed with a period of consolidation using manual gait training. We report our observations of functional gain in a large number of patients by using accepted clinical scales to measure gait speed, assistance in locomotion and balance. The main goal of this study is to assist with clinical decision making and to power randomized controlled clinical trials.", " Subjects 103 subjects with sub-acute stroke (< 6 months post-stroke) were followed in a prospective observational study from March 2006 to March 2009. Thirty-four patients did not complete the 8-week program due to factors not related to the study such as hospital discharge, or medical complications (such as pneumonia or infections). Of the 69 patients who finished the training protocol (49 men, 20 women, mean age 48 ± 11 years), 36 suffered hemorrhagic stroke and 33 ischemic stroke. According to residual deficits, 34 patients presented right hemiparesis, 28 patients presented left hemiparesis and 7 presented tetraparesis. 85.5% of the patients were non-ambulatory (FAC ≤ 2, n = 59) and 14.5% were ambulatory patients (FAC ≥ 3, n = 10). 
Mean post-stroke interval was 72 ± 38 days (Table 1).\nPatient baseline characteristics.\nBaseline demographic and functional characteristics of the stroke patients enrolled in the 8-week intensive rehabilitation program.\nData were collected from patients performing the BWSRGT program at the Neurorehabilitation Hospital Institut Guttmann (Badalona, Spain), following the clinical protocols and according to the local Ethics Committee. All patients gave written informed consent prior to enrolment in the study.\nStroke patients were included in the study if they were within 180 days post-stroke, presented clinical hemiparesis or tetraparesis, were aged 25-75 years, were able to voluntarily participate in the study, were not expected to be discharged in the next 8 weeks, and had the ability to perform manual gait training with or without external devices (initial FAC 0-4).\nThey were excluded if they had severe cognitive and/or language deficits that precluded them from following instructions, had a contraindication to physical exercise (unstable cardiac status or another pre-morbid condition precluding rehabilitation), had severe spasticity that interfered with robotic function and/or rigid joint contractures/malformations of the lower limbs (> 10 degrees), or were unable to stand even in parallel bars.
\n Training Intervention and Inpatient Rehabilitation The robotic and manual gait training intervention was part of a comprehensive intensive rehabilitation program consisting of 5 hours per day, 5 days per week. The rehabilitation program included occupational therapy, physical therapy, gait training, fitness, sports therapy, hydrotherapy and other activities oriented to achieving the maximum degree of functional independence (including urban tours or cooking training).\nPatients performed two contiguous 4-week blocks of the intensive rehabilitation program. The duration of each block was based on previous literature[23,24]. During the first period, patients performed robotic body weight supported gait training (BWSRGT, described below), while in the second 4-week period the BWSRGT was replaced by manual gait training (MGT). In each case the gait training was performed daily as part of the 5-hour rehabilitation activities (Figure 1).\nSchematic of the intensive rehabilitation program. The rehabilitation period comprised 8 weeks (5 hrs/day, 5 days/week); the first 4 weeks with Body-Weight-Supported-Robotic-Gait Training (BWSRGT) and the last 4 weeks with Manual Gait Training. (*) Depending on individual patient needs and clinical goals.\nThe BWSRGT was performed with a Gait Trainer® (Reha-Stim, Berlin), where the patient is held by a harness with each foot attached to a footplate that moves to mechanically simulate the stance and swing phases of gait, controlling the ankle angle at push-off and heel strike (Figure 2). The duration of the exercise ranged from 20-40 min depending on the tolerance of the patient (training stopped when the patient reported excessive fatigue). The percentage of weight unloaded varied between patients and corresponded to the weight that allowed the patient to stand with complete knee extension. Velocity ranged between 0.28 and 0.42 meters per second (m/s) (to prevent overexertion of the patient). Step length was adjusted to the available range of motion for each patient. An elastic strap was placed on the paretic knee to help extension during the complete stance phase if needed (as a passive support).\nA sub-acute stroke patient during the robotic gait training session. During the Body-Weight-Supported-Robotic-Gait Training (BWSRGT) the body weight is slightly unloaded via the use of a harness, while the fixed foot placement on the device ensures a set pattern that mimics human gait with alternating stance and swing phases.\nThe manual gait training (MGT) consisted of gait training over ground, with technical aids and the support of a physiotherapist as needed. 
Conventional technical aids for stroke patients are typically unilateral, i.e., knee-ankle-foot orthosis (KAFO), ankle-foot orthosis (AFO), dynamic ankle-foot orthosis (DAFO) and functional electrical stimulation (FES). To provide more stability, crutches, walkers (in tetraparetic patients), or parallel bars were allowed. Gait training was done under the supervision of a physical therapist, who provided verbal instructions and physical assistance to facilitate the swing phase when needed.\nOur work intends to provide benchmark data on clinical gains and tolerability from this atypically intensive rehabilitation program of consecutive blocks of robotic and manual gait training.
" ]
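Since the 10 Meter Walking Test speed and the centre's thresholds for a clinically meaningful change are referred to repeatedly above, here is the small worked Python sketch announced earlier. The helper name mwt_speed and all trial times are invented for illustration; only the 10 m distance, the averaging over three trials, and the threshold values (2 FAC points, 0.20 m/s, 3 Tinetti Gait points, 5 Tinetti Balance points) come from the text.

```python
# Sketch of the 10 Meter Walking Test (10 MWT) speed computation and a check
# against the centre's thresholds for a clinically meaningful change.
# Trial times are invented; the 10 m distance, the mean over three trials and
# the threshold values are taken from the text above.

def mwt_speed(trial_times_s, distance_m=10.0):
    """Mean walking speed (m/s) over repeated 10 MWT trials."""
    return sum(distance_m / t for t in trial_times_s) / len(trial_times_s)

baseline_speed = mwt_speed([55.0, 60.0, 58.0])  # roughly 0.17 m/s
week8_speed = mwt_speed([20.0, 21.0, 22.0])     # roughly 0.48 m/s

# Clinically meaningful change thresholds used at the authors' centre.
MCID = {"fac_points": 2, "speed_m_s": 0.20, "tinetti_gait": 3, "tinetti_balance": 5}

delta = week8_speed - baseline_speed
print(f"Walking speed change: {delta:.2f} m/s "
      f"(clinically meaningful: {delta >= MCID['speed_m_s']})")
```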
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Gait training", "stroke", "body weight support", "rehabilitation" ]
Background: The recovery of independent walking is one of the major goals of rehabilitation after stroke, which remains a leading cause of serious long-term disability[1]. More than 30% of patients who have had a stroke do not achieve complete motor recovery after the rehabilitation process[2,3]. For this reason, new rehabilitation approaches are needed in order to improve quality of life in stroke patients. There is no unique approach for rehabilitation of gait after stroke[4]. From physical therapy interventions (such as Bobath, Perfetti, Proprioceptive Neuromuscular Facilitation - PNF) [5,6] to more technological approaches including the use of Functional Electric Stimulation (FES) [7] or Body Weight Support Robotic Gait Training (BWSRGT),[8-10] many therapeutic options have been used alone and combined to improve motor recovery in stroke. The beneficial effects of treadmill training have been extensively investigated in the last fifteen years in stroke patients,[11] including the greater effects of body weight support training[12,13]. Many studies have reported that electromechanical devices have at least the same efficacy as manual gait training, with less effort by the patient and physiotherapist[14-17]. In a recent study, electromechanically assisted gait training was shown to improve independent walking ability (FAC), but not walking speed, in sub-acute and chronic patients when compared with conventional gait training[8]. The clinical characteristics of patients who benefit most from BWSRGT are presently unclear. If clinical variables that predispose to a positive response to BWSRGT could be identified, it might be possible to individually tailor the rehabilitation program and include BWSRGT for those patients who would have the greatest benefit in the rehabilitation process[18-20]. The treatment dose (number of hours of therapy and frequency) has been reported as a valuable variable of outcome,[21] but the optimal dose is still uncertain. Robotic gait training using the BWSRGT for 4 weeks (5 ×/week) has previously been shown, in a small number of early stroke patients, to be well tolerated and is reported to improve gait function[22]. We applied a daily intensive program with robotic gait training in our inpatient setting of sub-acute stroke patients, and then followed it with a period of consolidation using manual gait training. We report our observations of functional gain in a large number of patients using accepted clinical scales to measure gait speed, assistance in locomotion and balance. The main goal of this study is to assist with clinical decision making and to power randomized controlled clinical trials. Methods: Subjects 103 subjects with sub-acute stroke (< 6 months post-stroke) were followed in a prospective observational study from March 2006 to March 2009. Thirty-four patients did not complete the 8-week program due to factors not related to the study, such as hospital discharge or medical complications (such as pneumonia or infections). Of the 69 patients who finished the training protocol (49 men, 20 women, mean age 48 ± 11 years), 36 suffered hemorrhagic stroke and 33 ischemic stroke. According to residual deficits, 34 patients presented right hemiparesis, 28 patients presented left hemiparesis and 7 presented tetraparesis. 85.5% of the patients were non-ambulatory (FAC ≤ 2, n = 59) and 14.5% were ambulatory patients (FAC ≥ 3, n = 10). Mean post-stroke interval was 72 ± 38 days (Table 1). Patient baseline characteristics. 
Baseline demographic and functional characteristics of the stroke patients enrolled in the 8-week intensive rehabilitation program. Data collection was obtained from patients who were performing BWSRGT program at the Neurorehabilitation Hospital Institut Guttmann (Badalona, Spain) following the clinical protocols and according to the local Ethics Committee. All patients gave written informed consent prior to enrolment in the study. Stroke patients were included in the study if they were within 180 days post-stroke, presented clinical hemiparesis or tetraparesis, aged 25-75, were able to voluntarily participate into the study, were not expected to be discharged in the next 8-weeks and had the ability to perform manual gait training with or without external devices (initial FAC 0-4). They were excluded if they had severe cognitive and/or language deficits that precluded them from following instructions, had contraindication for physical exercise (unstable cardiac status, or other pre-morbid condition that discard them from rehabilitation), had severe spasticity that interferes with robotic function or/and rigid join contractures/malformations on the lower limbs (> 10 degrees), or had inability to stand even in parallel bars. 103 subjects with sub-acute stroke (< 6 months post-stroke) were followed in a prospective observational study from March 2006 to March 2009. Thirty-four patients did not complete the 8-week program due to factors not related to the study such as hospital discharge, or medical complications (such as pneumonia or infections). Of the 69 patients who finished the training protocol (49 men, 20 women, mean age 48 ± 11 years), 36 suffered hemorrhagic stroke and 33 ischemic stroke. According to residual deficits, 34 patients presented right hemiparesis, 28 patients presented left hemiparesis and 7 presented tetraparesis. 85.5% of the patients were non-ambulatory (FAC ≤ 2, n = 59) and 14.5% were ambulatory patients (FAC ≥ 3, n = 10). Mean post-stroke interval was 72 ± 38 days (Table 1). Patient baseline characteristics. Baseline demographic and functional characteristics of the stroke patients enrolled in the 8-week intensive rehabilitation program. Data collection was obtained from patients who were performing BWSRGT program at the Neurorehabilitation Hospital Institut Guttmann (Badalona, Spain) following the clinical protocols and according to the local Ethics Committee. All patients gave written informed consent prior to enrolment in the study. Stroke patients were included in the study if they were within 180 days post-stroke, presented clinical hemiparesis or tetraparesis, aged 25-75, were able to voluntarily participate into the study, were not expected to be discharged in the next 8-weeks and had the ability to perform manual gait training with or without external devices (initial FAC 0-4). They were excluded if they had severe cognitive and/or language deficits that precluded them from following instructions, had contraindication for physical exercise (unstable cardiac status, or other pre-morbid condition that discard them from rehabilitation), had severe spasticity that interferes with robotic function or/and rigid join contractures/malformations on the lower limbs (> 10 degrees), or had inability to stand even in parallel bars. Training Intervention and Inpatient Rehabilitation The robotic and manual gait training intervention was part of a comprehensive intensive rehabilitation program consisting of 5 hours per day/5 days per week. 
The rehabilitation program included occupational therapy, physical therapy, gait training, fitness, sports therapy, hydrotherapy and other activities oriented to achieve the maximum degree of functional independence (including urban tours or cooking training). Patients performed two contiguous blocks of 4-weeks of intensive rehabilitation program. The duration of each block period was based on previous literature[23,24]. During the first period, patients performed robotic body weight supported gait training (BWSRGT, described below), while in the second 4-week period the BWSRGT was substituted for manual gait training (MGT). In each case the gait training was performed daily as part of the 5-hour rehabilitation activities (Figure 1). Schematic of the intensive rehabilitation program. The rehabilitation period comprised 8 weeks (5 hrs/day, 5 days/week); the first 4 weeks with Body-Weight-Supported-Robotic-Gait Training (BWSRGT) and the last 4 weeks Manual Gait Training. (*) Depending on patient individual needs and clinical goals. The BWSRGT was performed with a Gait Trainer® (Reha-Stim, Berlin), where patient is held by a harness with each foot attached to a footplate that moves to mechanically simulate the stance and swing phase of gait controlling the ankle angle at push off and heel strike (Figure 2). The duration of the exercise ranged from 20-40 min depending on the tolerance of the patients (when patient reported excessive fatigue the training would stop). The percentage of weight unloaded varied between patients and corresponded to the weight that allowed the patient to stand with complete knee extension. Velocity ranged between 0.28 meter per second (m/s) and 0.42 m/s (to prevent overexertion of the patient). Step length was adjusted to the available range-of-motion for each patient. An elastic strap was placed on the paretic knee to help extension during the complete stance phase if needed (as a passive support). A sub-acute stroke patient during the Robotic Gait training session. During the Body-Weight-Supported-Robotic-Gait Training (BWSRGT) the body weight is slightly unloaded via the use of a harness, while the fixed foot placement on the device ensures a set pattern that mimics human gait with alternate stance and swing phase. The manual gait training (MGT) consisted of gait training over ground, with technical aids and the support of a physiotherapist as needed. Conventional technical aids for stroke patients are considered unilateral, ie knee-ankle-foot orthosis (KAFO), ankle foot orthosis (AFO), dynamic ankle foot orthosis (DAFO) and functional electrical stimulation (FES). To provide more stability, crutches, walkers (in tetraparesic patients), or parallel bars were allowed. Gait training was done under the supervision of a physical therapist, who provided verbal instructions and physical assistance to facilitate the swing phase when needed. Our work intends to provide benchmark clinical gains data and tolerability from this atypically intensive rehabilitation program of consecutive blocks of robotic and manual gait training. The robotic and manual gait training intervention was part of a comprehensive intensive rehabilitation program consisting of 5 hours per day/5 days per week. The rehabilitation program included occupational therapy, physical therapy, gait training, fitness, sports therapy, hydrotherapy and other activities oriented to achieve the maximum degree of functional independence (including urban tours or cooking training). 
Patients performed two contiguous blocks of 4-weeks of intensive rehabilitation program. The duration of each block period was based on previous literature[23,24]. During the first period, patients performed robotic body weight supported gait training (BWSRGT, described below), while in the second 4-week period the BWSRGT was substituted for manual gait training (MGT). In each case the gait training was performed daily as part of the 5-hour rehabilitation activities (Figure 1). Schematic of the intensive rehabilitation program. The rehabilitation period comprised 8 weeks (5 hrs/day, 5 days/week); the first 4 weeks with Body-Weight-Supported-Robotic-Gait Training (BWSRGT) and the last 4 weeks Manual Gait Training. (*) Depending on patient individual needs and clinical goals. The BWSRGT was performed with a Gait Trainer® (Reha-Stim, Berlin), where patient is held by a harness with each foot attached to a footplate that moves to mechanically simulate the stance and swing phase of gait controlling the ankle angle at push off and heel strike (Figure 2). The duration of the exercise ranged from 20-40 min depending on the tolerance of the patients (when patient reported excessive fatigue the training would stop). The percentage of weight unloaded varied between patients and corresponded to the weight that allowed the patient to stand with complete knee extension. Velocity ranged between 0.28 meter per second (m/s) and 0.42 m/s (to prevent overexertion of the patient). Step length was adjusted to the available range-of-motion for each patient. An elastic strap was placed on the paretic knee to help extension during the complete stance phase if needed (as a passive support). A sub-acute stroke patient during the Robotic Gait training session. During the Body-Weight-Supported-Robotic-Gait Training (BWSRGT) the body weight is slightly unloaded via the use of a harness, while the fixed foot placement on the device ensures a set pattern that mimics human gait with alternate stance and swing phase. The manual gait training (MGT) consisted of gait training over ground, with technical aids and the support of a physiotherapist as needed. Conventional technical aids for stroke patients are considered unilateral, ie knee-ankle-foot orthosis (KAFO), ankle foot orthosis (AFO), dynamic ankle foot orthosis (DAFO) and functional electrical stimulation (FES). To provide more stability, crutches, walkers (in tetraparesic patients), or parallel bars were allowed. Gait training was done under the supervision of a physical therapist, who provided verbal instructions and physical assistance to facilitate the swing phase when needed. Our work intends to provide benchmark clinical gains data and tolerability from this atypically intensive rehabilitation program of consecutive blocks of robotic and manual gait training. Functional Outcome Assessment The functional outcome assessment battery comprised: Functional Ambulatory Category (FAC),[25] Tinetti Balance and Gait Scale[26] and 10 Meter Walking Test[27]. Each outcome was assessed at baseline, mid-point, and at the end of the study (0, 4, 8 weeks) by an experienced physiotherapist. The FAC was performed to assess gait ability and autonomy[28]. This ordinary scale includes 6 levels ranging from 0 to 5. 
FAC = 0 means no ability to walk or needed 2 assistants to help them walk, FAC 1 = able to walk with the constant attention of one assistant, FAC 2 = able to walk with someone for balance support, FAC 3 = able to walk with one assistant beside them to give them confidence, FAC 4 = independent walking but needs some help with stairs or uneven ground, FAC 5 = independent for gait function in any given place. The 10 MWT quantifies walking speed, step length and cadence[29]. Patients were permitted to use technical aids during the test. This test was performed three times per evaluation and the mean speed was calculated. Patient gait was videotaped for extra documentation. The Tinetti Gait and Balance Scale examines gait pattern and balance level[30]. The gait subscale ranges from 0 to 12, where zero indicates an inability to walk or to perform any of the events of the gait pattern correctly, and 12 indicates a correct gait pattern. The balance subscale ranges from 0 to 16, where zero indicates very poor balance and 16 indicates good control of equilibrium. Our centre considers a clinically meaningful change in function to correspond with approximately 2 points in the FAC, 0.20 m/s in the walking speed, 3 points in Tinetti Gait Scale and 5 points in the Tinetti balance test. The functional outcome assessment battery comprised: Functional Ambulatory Category (FAC),[25] Tinetti Balance and Gait Scale[26] and 10 Meter Walking Test[27]. Each outcome was assessed at baseline, mid-point, and at the end of the study (0, 4, 8 weeks) by an experienced physiotherapist. The FAC was performed to assess gait ability and autonomy[28]. This ordinal scale includes 6 levels ranging from 0 to 5. FAC = 0 means no ability to walk or needed 2 assistants to help them walk, FAC 1 = able to walk with the constant attention of one assistant, FAC 2 = able to walk with someone for balance support, FAC 3 = able to walk with one assistant beside them to give them confidence, FAC 4 = independent walking but needs some help with stairs or uneven ground, FAC 5 = independent for gait function in any given place. The 10 MWT quantifies walking speed, step length and cadence[29]. Patients were permitted to use technical aids during the test. This test was performed three times per evaluation and the mean speed was calculated. Patient gait was videotaped for extra documentation. The Tinetti Gait and Balance Scale examines gait pattern and balance level[30]. The gait subscale ranges from 0 to 12, where zero indicates an inability to walk or to perform any of the events of the gait pattern correctly, and 12 indicates a correct gait pattern. The balance subscale ranges from 0 to 16, where zero indicates very poor balance and 16 indicates good control of equilibrium. Our centre considers a clinically meaningful change in function to correspond with approximately 2 points in the FAC, 0.20 m/s in the walking speed, 3 points in Tinetti Gait Scale and 5 points in the Tinetti balance test. Data analysis In this observational prospective study we used categorical variables described by frequency and percentage, and continuous variables described by mean and standard deviation (mean ± SD). Median and inter-quartile range (IQR) were used to explore asymmetry when necessary. The non-parametric Friedman test was used to assess differences between the 3 assessment measures, while the non-parametric Kruskal-Wallis test was used to assess outcome based on initial functional level. The alpha level was set at p < 0.05. 
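The following is a minimal sketch, in Python, of the kind of non-parametric testing described in the data analysis paragraph above. The scipy function names are real, but the walking-speed values and groupings are invented for illustration and are not the study's data.

from scipy.stats import friedmanchisquare, kruskal

# Friedman test: related samples from the same patients at the three
# assessment points (baseline, 4 weeks, 8 weeks); values are hypothetical
# walking speeds in m/s.
baseline = [0.10, 0.15, 0.20, 0.05, 0.12]
week_4 = [0.25, 0.30, 0.35, 0.15, 0.28]
week_8 = [0.40, 0.45, 0.50, 0.30, 0.42]
stat, p = friedmanchisquare(baseline, week_4, week_8)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")  # significant if p < 0.05

# Kruskal-Wallis test: change in walking speed grouped by initial
# functional level (groups and values are again hypothetical).
change_fac_0_1 = [0.10, 0.12, 0.08, 0.15]
change_fac_2_3 = [0.35, 0.40, 0.30, 0.38]
change_fac_4 = [0.20, 0.22, 0.18, 0.25]
stat, p = kruskal(change_fac_0_1, change_fac_2_3, change_fac_4)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")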
In this observational prospective study we used categorical variables described by frequency and percentage, and continuous variables described by mean and standard deviation (mean ± SD). Median and inter-quartile range (IQR) was used to explore asymmetry when necessary. The non-parametric Friedman test was used to assess differences between the 3 assessment measures, while the non-parametric Kruskall-Wallis test was used to assess outcome based on initial functional level. The alpha level was set at p < 0.05. Subjects: 103 subjects with sub-acute stroke (< 6 months post-stroke) were followed in a prospective observational study from March 2006 to March 2009. Thirty-four patients did not complete the 8-week program due to factors not related to the study such as hospital discharge, or medical complications (such as pneumonia or infections). Of the 69 patients who finished the training protocol (49 men, 20 women, mean age 48 ± 11 years), 36 suffered hemorrhagic stroke and 33 ischemic stroke. According to residual deficits, 34 patients presented right hemiparesis, 28 patients presented left hemiparesis and 7 presented tetraparesis. 85.5% of the patients were non-ambulatory (FAC ≤ 2, n = 59) and 14.5% were ambulatory patients (FAC ≥ 3, n = 10). Mean post-stroke interval was 72 ± 38 days (Table 1). Patient baseline characteristics. Baseline demographic and functional characteristics of the stroke patients enrolled in the 8-week intensive rehabilitation program. Data collection was obtained from patients who were performing BWSRGT program at the Neurorehabilitation Hospital Institut Guttmann (Badalona, Spain) following the clinical protocols and according to the local Ethics Committee. All patients gave written informed consent prior to enrolment in the study. Stroke patients were included in the study if they were within 180 days post-stroke, presented clinical hemiparesis or tetraparesis, aged 25-75, were able to voluntarily participate into the study, were not expected to be discharged in the next 8-weeks and had the ability to perform manual gait training with or without external devices (initial FAC 0-4). They were excluded if they had severe cognitive and/or language deficits that precluded them from following instructions, had contraindication for physical exercise (unstable cardiac status, or other pre-morbid condition that discard them from rehabilitation), had severe spasticity that interferes with robotic function or/and rigid join contractures/malformations on the lower limbs (> 10 degrees), or had inability to stand even in parallel bars. Training Intervention and Inpatient Rehabilitation: The robotic and manual gait training intervention was part of a comprehensive intensive rehabilitation program consisting of 5 hours per day/5 days per week. The rehabilitation program included occupational therapy, physical therapy, gait training, fitness, sports therapy, hydrotherapy and other activities oriented to achieve the maximum degree of functional independence (including urban tours or cooking training). Patients performed two contiguous blocks of 4-weeks of intensive rehabilitation program. The duration of each block period was based on previous literature[23,24]. During the first period, patients performed robotic body weight supported gait training (BWSRGT, described below), while in the second 4-week period the BWSRGT was substituted for manual gait training (MGT). In each case the gait training was performed daily as part of the 5-hour rehabilitation activities (Figure 1). 
Schematic of the intensive rehabilitation program. The rehabilitation period comprised 8 weeks (5 hrs/day, 5 days/week); the first 4 weeks with Body-Weight-Supported-Robotic-Gait Training (BWSRGT) and the last 4 weeks Manual Gait Training. (*) Depending on patient individual needs and clinical goals. The BWSRGT was performed with a Gait Trainer® (Reha-Stim, Berlin), where patient is held by a harness with each foot attached to a footplate that moves to mechanically simulate the stance and swing phase of gait controlling the ankle angle at push off and heel strike (Figure 2). The duration of the exercise ranged from 20-40 min depending on the tolerance of the patients (when patient reported excessive fatigue the training would stop). The percentage of weight unloaded varied between patients and corresponded to the weight that allowed the patient to stand with complete knee extension. Velocity ranged between 0.28 meter per second (m/s) and 0.42 m/s (to prevent overexertion of the patient). Step length was adjusted to the available range-of-motion for each patient. An elastic strap was placed on the paretic knee to help extension during the complete stance phase if needed (as a passive support). A sub-acute stroke patient during the Robotic Gait training session. During the Body-Weight-Supported-Robotic-Gait Training (BWSRGT) the body weight is slightly unloaded via the use of a harness, while the fixed foot placement on the device ensures a set pattern that mimics human gait with alternate stance and swing phase. The manual gait training (MGT) consisted of gait training over ground, with technical aids and the support of a physiotherapist as needed. Conventional technical aids for stroke patients are considered unilateral, ie knee-ankle-foot orthosis (KAFO), ankle foot orthosis (AFO), dynamic ankle foot orthosis (DAFO) and functional electrical stimulation (FES). To provide more stability, crutches, walkers (in tetraparesic patients), or parallel bars were allowed. Gait training was done under the supervision of a physical therapist, who provided verbal instructions and physical assistance to facilitate the swing phase when needed. Our work intends to provide benchmark clinical gains data and tolerability from this atypically intensive rehabilitation program of consecutive blocks of robotic and manual gait training. Functional Outcome Assessment: The functional outcome assessment battery comprised: Functional Ambulatory Category (FAC),[25] Tinetti Balance and Gait Scale[26] and 10 Meter Walking Test[27]. Each outcome was assessed at baseline, mid-point, and at the end of the study (0, 4, 8 weeks) by an experienced physiotherapist. The FAC was performed to assess gait ability and autonomy[28]. This ordinary scale includes 6 levels ranging from 0 to 5. FAC = 0 means no ability to walk or needed 2 assistants to help them walk, FAC 1 = able to walk with the constant attention of one assistant, FAC 2 = able to walk with someone for balance support, FAC 3 = able to walk with one assistant beside them to give them confidence, FAC 4 = independent walking but need some help with stairs or uneven ground, FAC 5 = independent for gait function in any given place. The 10 MWT quantifies walking speed, step length and cadence[29]. Patients were permitted to use technical aids during the test. This test was performed three times per evaluation and the mean speed was calculated. Patient gait was videotaped for extra documentation. 
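As a simple illustration of the 10 MWT arithmetic described above, the short Python sketch below averages the speed of three timed 10 m trials and compares the change against the centre's 0.20 m/s threshold. The trial times, and the reading of "mean speed" as the average of the per-trial speeds, are assumptions made for illustration rather than values or procedures taken from the study.

DISTANCE_M = 10.0          # length of the walkway
MEANINGFUL_CHANGE = 0.20   # centre's threshold for walking speed (m/s)

def mean_walking_speed(trial_times_s):
    """Mean speed (m/s) over repeated 10 m trials (one plausible reading of 'mean speed')."""
    return sum(DISTANCE_M / t for t in trial_times_s) / len(trial_times_s)

baseline_speed = mean_walking_speed([58.0, 62.0, 60.0])  # about 0.17 m/s
endpoint_speed = mean_walking_speed([20.0, 22.0, 21.0])  # about 0.48 m/s
meaningful = (endpoint_speed - baseline_speed) >= MEANINGFUL_CHANGE
print(f"baseline {baseline_speed:.2f} m/s, end-point {endpoint_speed:.2f} m/s, "
      f"clinically meaningful change: {meaningful}")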
The Tinetti Gait and Balance Scale examines gait pattern and balance level[30]. The gait subscale ranges from 0 to 12, zero indicates and inability to walk or unable to perform any of the events of the gait pattern correctly, and 12 indicates a correct gait pattern. The balance subscale ranges from 0 to 16, zero indicates very poor balance and 16 good control of the equilibrium. Our centre considers a clinically meaningful change in function to correspond with approximately 2 points in the FAC, 0.20 m/s in the walking speed, 3 points in Tinetti Gait Scale and 5 points in the Tinetti balance test. Data analysis: In this observational prospective study we used categorical variables described by frequency and percentage, and continuous variables described by mean and standard deviation (mean ± SD). Median and inter-quartile range (IQR) was used to explore asymmetry when necessary. The non-parametric Friedman test was used to assess differences between the 3 assessment measures, while the non-parametric Kruskall-Wallis test was used to assess outcome based on initial functional level. The alpha level was set at p < 0.05. Results and Discussion: We report results from 69 patients who completed the intensive training program, showing significant improvements for each outcome (Figure 3). Functional outcome results. Improvement in functional outcome across the robotic and manual training period (mean ± SD). For each outcome: (a) FAC (b) walking speed (c) Tinetti Gait and (d) Tinetti Balance, there was a significant increase following the robotic training, and further consolidation from the follow-up manual gait training. Functional Ambulatory Category Across the 8-week intensive rehabilitation period FAC score improved by 45% across the period of robotic training (baseline 1.30 points ± 1.23, mid-point 2.37 ± 1.51, p < 0.001), and improved by a further 31% following the manual training (final assessment 3.14 ± 1.52, p < 0.001). 88% of patients improved by one point or more on the FAC scale across the 8-week intervention period. Across the 8-week intensive rehabilitation period FAC score improved by 45% across the period of robotic training (baseline 1.30 points ± 1.23, mid-point 2.37 ± 1.51, p < 0.001), and improved by a further 31% following the manual training (final assessment 3.14 ± 1.52, p < 0.001). 88% of patients improved by one point or more on the FAC scale across the 8-week intervention period. Walking speed Walking speed improved by 46% across the period of robotic training (baseline 0.17 m/s ± 0.22, mid-point 0.33 ± 0.33, p < 0.001), and improved by a further 22% following the manual training (final 0.48 ± 0.41, p < 0.001). Across the full training period 83% of patients improved more than 0.20 m/s. Walking speed improved by 46% across the period of robotic training (baseline 0.17 m/s ± 0.22, mid-point 0.33 ± 0.33, p < 0.001), and improved by a further 22% following the manual training (final 0.48 ± 0.41, p < 0.001). Across the full training period 83% of patients improved more than 0.20 m/s. Tinetti Gait Scale Tinetti Gait Scale improved by 45% across the period of robotic training (baseline 4.10 ± 3.10, mid-point 7.10 ± 3.16, p < 0.001), and improved by a further 18% following the manual training (final 8.69 ± 3.10, p < 0.001). During the 8-week rehabilitation period 56% of patients improved more than 3 points on Tinetti Gait Scale. 
Tinetti Gait Scale improved by 45% across the period of robotic training (baseline 4.10 ± 3.10, mid-point 7.10 ± 3.16, p < 0.001), and improved by a further 18% following the manual training (final 8.69 ± 3.10, p < 0.001). During the 8-week rehabilitation period 56% of patients improved more than 3 points on Tinetti Gait Scale. Tinetti Balance Scale The Tinetti Balance Scale score improved by 38% across the period of robotic training (baseline 6.14 ± 3.84, mid-point 9.92 ± 4.13, p < 0.001), and it improved by a further 18% following the manual training (final 12.15 ± 4.19, p < 0.001). Across the 8-week rehabilitation period 65% of patients improved more than 5 points. The Tinetti Balance Scale score improved by 38% across the period of robotic training (baseline 6.14 ± 3.84, mid-point 9.92 ± 4.13, p < 0.001), and it improved by a further 18% following the manual training (final 12.15 ± 4.19, p < 0.001). Across the 8-week rehabilitation period 65% of patients improved more than 5 points. Patient demographic and initial clinical status The functional changes according to age, gender, initial FAC, initial Gait and Balance Tinetti and initial walking speed, were analyzed during the 4-week BWSRGT period, the 4-week manual gait training period and at the 8-week total intensive rehabilitation period. Patients with an initial FAC of 2 or 3 showed significantly greater changes in walking speed than patients with other initial FAC across the 4-week robotic training protocol (Table 2). These changes were maintained over 4 weeks of manual gait training. The change in each outcome was not significantly influenced by variables; gender, age, aetiology (hemorrhagic/ischemic), affected hemisphere, initial walking speed, initial Balance Tinetti and initial Gait Tinetti. Change in outcome measures stratified by initial Functional Ambulatory Category (FAC). The table describes the change in the Tinetti Gait, Tinetti Balance, and Walking Speed during the Body Weight Support Robotic Gait Trainer (BWSRGT) training (0-4 weeks); during the Manual Gait Training (MGT) period (4-8 weeks); and the change over the 8-weeks total of intensive rehabilitation period. Subjects with initial FAC 2 and 3 showed significant improvement in walking speed across the robotic training period. (*) p < 0.05. During the 8-week training period the non-ambulatory patients (FAC ≤ 2) went from 85% (n = 59) at baseline to 34% (n = 24) to the end-point; and the ambulatory patients (FAC ≥ 3) went from 14% (n = 10) at baseline to 65% (n = 45) at the endpoint (Table 3). Number of patients in each Functional Ambulatory Category (FAC) over the treatment period. At commencement of the study, the majority of patients were in low FAC levels, with nil patients in Category 5. This trend changed across the weeks to ultimately have a more even spread across categories at week 8, including 16 patients in the highest category and only 3 remaining in the lowest category. This trend is highlighted with division of the FAC into non-ambulatory (FAC ≤ 2) and ambulatory (FAC ≥ 3), showing a movement of majority from non-ambulatory (85% of the patients, n = 59) at commencement, to ambulatory by completion of the training program (65.2%, n = 45). The functional changes according to age, gender, initial FAC, initial Gait and Balance Tinetti and initial walking speed, were analyzed during the 4-week BWSRGT period, the 4-week manual gait training period and at the 8-week total intensive rehabilitation period. 
Patients with an initial FAC of 2 or 3 showed significantly greater changes in walking speed than patients with other initial FAC across the 4-week robotic training protocol (Table 2). These changes were maintained over 4 weeks of manual gait training. The change in each outcome was not significantly influenced by variables; gender, age, aetiology (hemorrhagic/ischemic), affected hemisphere, initial walking speed, initial Balance Tinetti and initial Gait Tinetti. Change in outcome measures stratified by initial Functional Ambulatory Category (FAC). The table describes the change in the Tinetti Gait, Tinetti Balance, and Walking Speed during the Body Weight Support Robotic Gait Trainer (BWSRGT) training (0-4 weeks); during the Manual Gait Training (MGT) period (4-8 weeks); and the change over the 8-weeks total of intensive rehabilitation period. Subjects with initial FAC 2 and 3 showed significant improvement in walking speed across the robotic training period. (*) p < 0.05. During the 8-week training period the non-ambulatory patients (FAC ≤ 2) went from 85% (n = 59) at baseline to 34% (n = 24) to the end-point; and the ambulatory patients (FAC ≥ 3) went from 14% (n = 10) at baseline to 65% (n = 45) at the endpoint (Table 3). Number of patients in each Functional Ambulatory Category (FAC) over the treatment period. At commencement of the study, the majority of patients were in low FAC levels, with nil patients in Category 5. This trend changed across the weeks to ultimately have a more even spread across categories at week 8, including 16 patients in the highest category and only 3 remaining in the lowest category. This trend is highlighted with division of the FAC into non-ambulatory (FAC ≤ 2) and ambulatory (FAC ≥ 3), showing a movement of majority from non-ambulatory (85% of the patients, n = 59) at commencement, to ambulatory by completion of the training program (65.2%, n = 45). Functional Ambulatory Category: Across the 8-week intensive rehabilitation period FAC score improved by 45% across the period of robotic training (baseline 1.30 points ± 1.23, mid-point 2.37 ± 1.51, p < 0.001), and improved by a further 31% following the manual training (final assessment 3.14 ± 1.52, p < 0.001). 88% of patients improved by one point or more on the FAC scale across the 8-week intervention period. Walking speed: Walking speed improved by 46% across the period of robotic training (baseline 0.17 m/s ± 0.22, mid-point 0.33 ± 0.33, p < 0.001), and improved by a further 22% following the manual training (final 0.48 ± 0.41, p < 0.001). Across the full training period 83% of patients improved more than 0.20 m/s. Tinetti Gait Scale: Tinetti Gait Scale improved by 45% across the period of robotic training (baseline 4.10 ± 3.10, mid-point 7.10 ± 3.16, p < 0.001), and improved by a further 18% following the manual training (final 8.69 ± 3.10, p < 0.001). During the 8-week rehabilitation period 56% of patients improved more than 3 points on Tinetti Gait Scale. Tinetti Balance Scale: The Tinetti Balance Scale score improved by 38% across the period of robotic training (baseline 6.14 ± 3.84, mid-point 9.92 ± 4.13, p < 0.001), and it improved by a further 18% following the manual training (final 12.15 ± 4.19, p < 0.001). Across the 8-week rehabilitation period 65% of patients improved more than 5 points. 
Patient demographic and initial clinical status: The functional changes according to age, gender, initial FAC, initial Gait and Balance Tinetti and initial walking speed were analyzed during the 4-week BWSRGT period, the 4-week manual gait training period and at the 8-week total intensive rehabilitation period. Patients with an initial FAC of 2 or 3 showed significantly greater changes in walking speed than patients with other initial FAC across the 4-week robotic training protocol (Table 2). These changes were maintained over 4 weeks of manual gait training. The change in each outcome was not significantly influenced by the variables gender, age, aetiology (hemorrhagic/ischemic), affected hemisphere, initial walking speed, initial Balance Tinetti and initial Gait Tinetti. Change in outcome measures stratified by initial Functional Ambulatory Category (FAC). The table describes the change in the Tinetti Gait, Tinetti Balance, and Walking Speed during the Body Weight Support Robotic Gait Trainer (BWSRGT) training (0-4 weeks); during the Manual Gait Training (MGT) period (4-8 weeks); and the change over the 8-week total intensive rehabilitation period. Subjects with initial FAC 2 and 3 showed significant improvement in walking speed across the robotic training period. (*) p < 0.05. During the 8-week training period the non-ambulatory patients (FAC ≤ 2) went from 85% (n = 59) at baseline to 34% (n = 24) at the end-point; and the ambulatory patients (FAC ≥ 3) went from 14% (n = 10) at baseline to 65% (n = 45) at the endpoint (Table 3). Number of patients in each Functional Ambulatory Category (FAC) over the treatment period. At commencement of the study, the majority of patients were in low FAC levels, with nil patients in Category 5. This trend changed across the weeks to ultimately have a more even spread across categories at week 8, including 16 patients in the highest category and only 3 remaining in the lowest category. This trend is highlighted with division of the FAC into non-ambulatory (FAC ≤ 2) and ambulatory (FAC ≥ 3), showing a shift of the majority from non-ambulatory (85% of the patients, n = 59) at commencement, to ambulatory by completion of the training program (65.2%, n = 45). Discussion: The results of this observational study provide evidence that a comprehensive and intensive eight-week rehabilitation program including BWSRGT followed by Manual Gait Training in patients early after stroke can lead to an improvement in all functional outcomes, independently of patient demographics or initial functional status. However, the results indicate that patients with an initial FAC level of 2 or 3 obtained the most benefit. The intensive rehabilitation program was well tolerated, and no patients withdrew for factors related to the gait training or the high training dose. Other research groups have studied the improvement of walking ability in sub-acute stroke patients by combining methods of gait training, showing that intensive locomotor training on an electromechanical gait trainer plus physiotherapy resulted in significantly better gait ability and daily living competence compared with physiotherapy alone[22,31]. However, many studies have failed to show significant differences in functional score gains when comparing robot-driven gait orthosis training with conventional physiotherapy[32]. 
In the study by Peraula et al,[33] chronic ambulatory patients regained the same walking ability when they received body weight supported training with or without FES compared with an over-ground walking exercise training program. The dose or intensity of the training seems to influence the improvement in walking ability, since our study shows greater improvements than studies with only 20 or 30 min of daily therapy for 3 to 4 weeks[32,33]. Higher intensity of gait practice, in line with modern principles of motor learning, probably explains the superior results. The total amount of rehabilitation given to the patients is higher than reported in other studies[22,34] and our results should be considered in the context of high-intensity rehabilitation in sub-acute stroke. In our study, after 8 weeks of intensive rehabilitation we found gait improvements in one or more of the outcome measures in 95.54% of patients. This finding is higher than the recovery reported by other authors[23,35] and may indicate that the higher dose in our rehabilitation program can lead to greater improvement of motor function in sub-acute stroke patients. To determine the magnitude of the improvement attributable to gait training, a comparison with an experimental control group without gait training would need to be done. With this in mind, the results of the present study should be used as a benchmark for expected change to aid clinical decision-making and to power controlled clinical research studies. The selected functional outcome measures were sensitive enough to detect change across patients and may be suitable for use in future studies. Care should be taken when interpreting functional scales that may include the use of assistive technology (as can be used in the 10 MWT). The underlying factors of patient performance leading to improved scores on each outcome measure are difficult to determine from the present study. For example, the improved score on the Tinetti balance test could be a cause of the improved gait, since various balance functions are known to affect gait[36,37]. The more the patient sways, the worse the balance and, consequently, the gait parameters[38]. According to Kollen et al,[39] the recovery of independent gait is highly dependent on improvements in control of standing balance. These results are in line with our study, where the gait speed and functional ambulatory measures improved in parallel with balance measures. A pertinent finding of our study is that patients with mid-range FAC at admission obtained the most benefit, which raises the possibility that FAC could be tested as a clinical predictor of recovery, although Masiero et al[20] did not find a correlation between FAC and motor recovery during conventional rehabilitation programs. Other studies have shown that the initial level of paresis[40] or trunk control[41] could be used as clinical predictors of balance and gait for rehabilitation in sub-acute patients. Moreover, previous studies have reinforced the excellent reliability of the FAC, its good predictive validity and its responsiveness in sub-acute stroke patients, and it has been proposed to predict community ambulation with high sensitivity and specificity[28]. We provided evidence that suggests that FAC could be useful as a predictor of outcome, but not initial walking speed, as reported by Barbeau[42]. 
One of the limitations of the present study is that its design does not allow the comparison of the robotic gait training with the same amount - 60 min - of conventional gait therapy, both combined with an intensive rehabilitation program. The characteristics of the patients who would benefit the most from this type of combined gait therapy (robotic + conventional) remain unclear. In our study, patients ranged from the early phase of recovery to 3 months after the injury, when the largest gains are observable;[43-45] however, some studies have found improvements in gait function in late phases of recovery[46]. The optimum dose of therapy is another open question for future studies. Although daily therapy seems to be a decisive factor in the success of the training program early after stroke,[14,47] there is a disparity of results regarding the ideal frequency of training in chronic phases[48,49]. Conclusions: This study shows that gait training using the BWSRGT is associated with improved walking function in individuals with sub-acute stroke, and that it is feasible and safe to combine with a comprehensive and intensive functional rehabilitation program over 8 weeks. Further studies need to address whether the improved walking parameters after a combined and intensive gait rehabilitation program are maintained over time. Moreover, the optimal dose of training characteristics (frequency and duration), as well as the precise gait parameters associated with training responsiveness, need further research.
Background: The use of automated electromechanical devices for gait training in neurological patients is increasing, yet the functional outcomes of well-defined training programs using these devices and the characteristics of patients that would most benefit are seldom reported in the literature. In an observational study of functional outcomes, we aimed to provide a benchmark for expected change in gait function in early stroke patients, from an intensive inpatient rehabilitation program including both robotic and manual gait training. Methods: We followed 103 sub-acute stroke patients who met the clinical inclusion criteria for Body Weight Supported Robotic Gait Training (BWSRGT). Patients completed an intensive 8-week gait-training program comprising robotic gait training (weeks 0-4) followed by manual gait training (weeks 4-8). A change in clinical function was determined by the following assessments taken at 0, 4 and 8 weeks (baseline, mid-point and end-point respectively): Functional Ambulatory Categories (FAC), 10 m Walking Test (10 MWT), and Tinetti Gait and Balance Scales. Results: Over half of the patients made a clinically meaningful improvement on the Tinetti Gait Scale (> 3 points) and Tinetti Balance Scale (> 5 points), while over 80% of the patients increased at least 1 point on the FAC scale (0-5) and improved walking speed by more than 0.2 m/s. Patients responded positively in gait function regardless of variables gender, age, aetiology (hemorrhagic/ischemic), and affected hemisphere. The most robust and significant change was observed for patients in the FAC categories two and three. The therapy was well tolerated and no patients withdrew for factors related to the type or intensity of training. Conclusions: Eight-weeks of intensive rehabilitation including robotic and manual gait training was well tolerated by early stroke patients, and was associated with significant gains in function. Patients with mid-level gait dysfunction showed the most robust improvement following robotic training.
Background: The recovery of independent walking is one of the major goals of rehabilitation after stroke, remaining as a leading cause of serious long-term disability[1]. More than 30% of patients who have had a stroke do not achieve a complete motor recovery after the rehabilitation process[2,3]. For this reason, new rehabilitation approaches are needed in order to improve quality of life in stroke patients. There is no unique approach for rehabilitation of gait after stroke[4]. From physical therapy interventions (such as Bobath, Perfetti, Propioceptive Neuromuscular Facilitation - PNF) [5,6] to more technological approaches including the use of Functional Electric Stimulation (FES) [7] or Body Weight Support Robotic Gait Training (BWSRGT),[8-10] many therapeutic options have been used alone and combined to improve motor recovery in stroke. The beneficial effects of treadmill training have been extensively investigated in the last fifteen years in stroke patients,[11] including the greater effects of the body weight support training[12,13]. Many studies have reported that electromechanical devices have at least the same efficacy as manual gait training with less effort by the patient and physiotherapist[14-17]. In a recent study, electromechanical assisted gait training has been shown to improve the independent walking ability (FAC) but not the walking speed when compared sub-acute and chronic patients that received conventional gait training[8]. The clinical characteristics of patients who benefit most from BWSRGT are presently unclear. If clinical variables that predispose to a positive response to BWSRGT could be identified, it might be possible to individually tailor the rehabilitation program and include BWSRGT for those patients who would have the greatest benefit in the rehabilitation process[18-20]. The treatment dose (number of hours of therapy and frequency) has been reported as a valuable variable of outcome,[21] but the optimal dose is still uncertain. Robotic gait training using the BWSGT for 4 weeks (5 ×/week) has previously shown in a small number of early stroke patients to be well tolerated, and is reported to improve gait function[22]. We applied a daily intensive program with robotic gait training in our inpatient setting of sub-acute stroke patients, and then followed with a period of consolidation using manual gait training. We report our observations of functional gain in a large number of patients by using accepted clinical scales to measure gait speed, assistance in locomotion and balance. The main goal of this study is to assist with clinical decision making and to power randomized controlled clinical trials. Conclusions: LC, UC and EM contributed equally with data acquisition, analysis and interpretation as well as study design and manuscript development. DE and MC contributed with data interpretation, intellectual content and critical review of the manuscript. DL contributed with the revision of the last version of the manuscript. MB and JM contributed with the original study design, and manuscript development. All authors read and approved the final manuscript.
Background: The use of automated electromechanical devices for gait training in neurological patients is increasing, yet the functional outcomes of well-defined training programs using these devices and the characteristics of patients that would most benefit are seldom reported in the literature. In an observational study of functional outcomes, we aimed to provide a benchmark for expected change in gait function in early stroke patients, from an intensive inpatient rehabilitation program including both robotic and manual gait training. Methods: We followed 103 sub-acute stroke patients who met the clinical inclusion criteria for Body Weight Supported Robotic Gait Training (BWSRGT). Patients completed an intensive 8-week gait-training program comprising robotic gait training (weeks 0-4) followed by manual gait training (weeks 4-8). A change in clinical function was determined by the following assessments taken at 0, 4 and 8 weeks (baseline, mid-point and end-point respectively): Functional Ambulatory Categories (FAC), 10 m Walking Test (10 MWT), and Tinetti Gait and Balance Scales. Results: Over half of the patients made a clinically meaningful improvement on the Tinetti Gait Scale (> 3 points) and Tinetti Balance Scale (> 5 points), while over 80% of the patients increased at least 1 point on the FAC scale (0-5) and improved walking speed by more than 0.2 m/s. Patients responded positively in gait function regardless of variables gender, age, aetiology (hemorrhagic/ischemic), and affected hemisphere. The most robust and significant change was observed for patients in the FAC categories two and three. The therapy was well tolerated and no patients withdrew for factors related to the type or intensity of training. Conclusions: Eight-weeks of intensive rehabilitation including robotic and manual gait training was well tolerated by early stroke patients, and was associated with significant gains in function. Patients with mid-level gait dysfunction showed the most robust improvement following robotic training.
8,314
378
[ 466, 391, 612, 334, 94, 1657, 84, 71, 74, 73, 464, 935, 98 ]
14
[ "gait", "training", "patients", "fac", "gait training", "period", "rehabilitation", "robotic", "week", "stroke" ]
[ "electromechanical gait trainer", "recovery independent gait", "balance gait rehabilitation", "gait stroke physical", "rehabilitation gait stroke" ]
null
[CONTENT] Gait training | stroke | body weight support | rehabilitation [SUMMARY]
[CONTENT] Gait training | stroke | body weight support | rehabilitation [SUMMARY]
null
[CONTENT] Gait training | stroke | body weight support | rehabilitation [SUMMARY]
[CONTENT] Gait training | stroke | body weight support | rehabilitation [SUMMARY]
[CONTENT] Gait training | stroke | body weight support | rehabilitation [SUMMARY]
[CONTENT] Exercise Therapy | Female | Gait | Gait Disorders, Neurologic | Humans | Male | Middle Aged | Recovery of Function | Robotics | Stroke | Stroke Rehabilitation [SUMMARY]
[CONTENT] Exercise Therapy | Female | Gait | Gait Disorders, Neurologic | Humans | Male | Middle Aged | Recovery of Function | Robotics | Stroke | Stroke Rehabilitation [SUMMARY]
null
[CONTENT] Exercise Therapy | Female | Gait | Gait Disorders, Neurologic | Humans | Male | Middle Aged | Recovery of Function | Robotics | Stroke | Stroke Rehabilitation [SUMMARY]
[CONTENT] Exercise Therapy | Female | Gait | Gait Disorders, Neurologic | Humans | Male | Middle Aged | Recovery of Function | Robotics | Stroke | Stroke Rehabilitation [SUMMARY]
[CONTENT] Exercise Therapy | Female | Gait | Gait Disorders, Neurologic | Humans | Male | Middle Aged | Recovery of Function | Robotics | Stroke | Stroke Rehabilitation [SUMMARY]
[CONTENT] electromechanical gait trainer | recovery independent gait | balance gait rehabilitation | gait stroke physical | rehabilitation gait stroke [SUMMARY]
[CONTENT] electromechanical gait trainer | recovery independent gait | balance gait rehabilitation | gait stroke physical | rehabilitation gait stroke [SUMMARY]
null
[CONTENT] electromechanical gait trainer | recovery independent gait | balance gait rehabilitation | gait stroke physical | rehabilitation gait stroke [SUMMARY]
[CONTENT] electromechanical gait trainer | recovery independent gait | balance gait rehabilitation | gait stroke physical | rehabilitation gait stroke [SUMMARY]
[CONTENT] electromechanical gait trainer | recovery independent gait | balance gait rehabilitation | gait stroke physical | rehabilitation gait stroke [SUMMARY]
[CONTENT] gait | training | patients | fac | gait training | period | rehabilitation | robotic | week | stroke [SUMMARY]
[CONTENT] gait | training | patients | fac | gait training | period | rehabilitation | robotic | week | stroke [SUMMARY]
null
[CONTENT] gait | training | patients | fac | gait training | period | rehabilitation | robotic | week | stroke [SUMMARY]
[CONTENT] gait | training | patients | fac | gait training | period | rehabilitation | robotic | week | stroke [SUMMARY]
[CONTENT] gait | training | patients | fac | gait training | period | rehabilitation | robotic | week | stroke [SUMMARY]
[CONTENT] stroke | gait | gait training | improve | training | patients | clinical | recovery | stroke patients | rehabilitation [SUMMARY]
[CONTENT] gait | gait training | training | patients | fac | stroke | walk | patient | performed | foot [SUMMARY]
null
[CONTENT] associated | improved walking | parameters | need | gait | rehabilitation program | training | improved | program | walking [SUMMARY]
[CONTENT] gait | training | improved | patients | fac | period | 001 | gait training | rehabilitation | stroke [SUMMARY]
[CONTENT] gait | training | improved | patients | fac | period | 001 | gait training | rehabilitation | stroke [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] 103 | Body Weight Supported Robotic Gait Training | BWSRGT ||| 8-week | weeks 0-4 | weeks 4-8) ||| 0 | 4 | 8 weeks | Functional Ambulatory Categories | FAC | 10 | 10 | MWT | Tinetti Gait [SUMMARY]
null
[CONTENT] Eight-weeks ||| [SUMMARY]
[CONTENT] ||| ||| 103 | Body Weight Supported Robotic Gait Training | BWSRGT ||| 8-week | weeks 0-4 | weeks 4-8) ||| 0 | 4 | 8 weeks | Functional Ambulatory Categories | FAC | 10 | 10 | MWT | Tinetti Gait ||| Over half | the Tinetti Gait Scale | 3 | 5 | 80% | at least 1 | FAC | 0-5 | more than 0.2 | hemorrhagic ||| FAC | two | three ||| ||| Eight-weeks ||| [SUMMARY]
[CONTENT] ||| ||| 103 | Body Weight Supported Robotic Gait Training | BWSRGT ||| 8-week | weeks 0-4 | weeks 4-8) ||| 0 | 4 | 8 weeks | Functional Ambulatory Categories | FAC | 10 | 10 | MWT | Tinetti Gait ||| Over half | the Tinetti Gait Scale | 3 | 5 | 80% | at least 1 | FAC | 0-5 | more than 0.2 | hemorrhagic ||| FAC | two | three ||| ||| Eight-weeks ||| [SUMMARY]
Prevalence and trend of hepatitis C virus infection among blood donors in Chinese mainland: a systematic review and meta-analysis.
21477324
Blood transfusion is one of the most common transmission pathways of hepatitis C virus (HCV). This paper aims to provide a comprehensive and reliable tabulation of available data on the epidemiological characteristics and risk factors for HCV infection among blood donors in Chinese mainland, so as to help formulate prevention strategies and guide further research.
BACKGROUND
A systematic review was constructed based on the computerized literature database. Infection rates and 95% confidence intervals (95% CI) were calculated using the approximate normal distribution model. Odds ratios and 95% CI were calculated by fixed or random effects models. Data manipulation and statistical analyses were performed using STATA 10.0 and ArcGIS 9.3 was used for map construction.
METHODS
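As a rough illustration of the approximate normal (Wald) model for an infection rate and its 95% CI mentioned in the Methods above, a small Python sketch follows. The counts are invented, the actual analyses were performed in STATA 10.0, and this single-sample formula does not reproduce the study's meta-analytic pooling across studies.

import math

def prevalence_ci(positives, n, z=1.96):
    """Point estimate and approximate-normal 95% CI for a proportion: p +/- z*sqrt(p(1-p)/n)."""
    p = positives / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical single study: 868 anti-HCV positive donors out of 10,000 screened.
p, lower, upper = prevalence_ci(868, 10000)
print(f"prevalence = {p:.2%} (95% CI: {lower:.2%}-{upper:.2%})")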
Two hundred and sixty-five studies met our inclusion criteria. The pooled prevalence of HCV infection among blood donors in Chinese mainland was 8.68% (95% CI: 8.01%-9.39%), and the epidemic was more severe in North and Central China, especially in Henan and Hebei, while a significantly lower rate was found in Yunnan. Notably, before 1998 the pooled prevalence of HCV infection was 12.87% (95%CI: 11.25%-14.56%) among blood donors, but decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998. No significant difference was found in HCV infection rates between male and female blood donors, or among different blood type donors. The prevalence of HCV infection was found to increase with age. During 1994-1995, the prevalence rate peaked at 15.78% (95%CI: 12.21%-19.75%), and showed a decreasing trend in the following years. A significant difference was found among groups with different blood donation types: plasma donors had a relatively higher prevalence of HCV infection than whole blood donors (33.95% vs 7.9%).
RESULTS
The prevalence of HCV infection has decreased rapidly since 1998 and has remained at a low level in recent years, but some provinces still show a higher prevalence than the general population. Effective measures are urgently needed to prevent secondary HCV transmission and to control progression to chronic disease, and the keys to reducing HCV incidence among blood donors are to encourage truly voluntary blood donation, strictly implement the blood donation law, and avoid cross-infection.
CONCLUSIONS
[ "Adolescent", "Adult", "Blood Donors", "China", "Female", "Hepacivirus", "Hepatitis C", "Humans", "Male", "Middle Aged", "Models, Statistical", "Prevalence", "Risk Factors", "Young Adult" ]
3079653
Background
Chronic infection with hepatitis C virus (HCV) is a major and growing public health problem, which can readily lead to chronic liver disease, cirrhosis, and even hepatocellular carcinoma [1]. The prevention and control of HCV infection are complex and challenging, requiring description of the geographic distribution of HCV infection, determination of its associated risk factors, and evaluation of cofactors that accelerate hepatitis C progression. An estimated 170 million persons are infected with HCV worldwide and more than 3.5 million new cases occur annually [2]. According to the national epidemiological survey of viral hepatitis from 1992 to 1995, the average anti-HCV positive rate was 3.2% in the general Chinese population, amounting to more than 30 million infected individuals [3]. The rapid global spread of HCV is believed to have occurred primarily because of efficient transmission through blood transfusion and parenteral exposure to contaminated equipment [4]. Blood donors, particularly those who rely on blood donation as a source of income, had a very high prevalence of HCV infection [5]. Recent studies have reported that the current residual risk of transfusion-transmitted HCV infection in China is about 1 in 40,000-60,000 donations, higher than that found in more developed countries [6]. With the implementation of the blood donation law in 1998, many blood centers relied on other methods to motivate donors, mostly employer-organized blood collection, but these donors may not have been true volunteers, as they may have been coerced by their employers to some extent. In recent years, true voluntary donors have gradually become the main source of blood donation in many blood centers in China [7]. Among paid blood donors, the HCV prevalence has reached 5.7% or higher [8]. However, among employer-organized donors and volunteer donors, the HCV prevalence was reported at a lower level, between 1.1-2.3% and 0.46%, respectively [7,9]. A large number of studies on HCV infection and its associated risk factors among blood donors have been conducted in the last decade. However, many of them drew incompatible or even contradictory conclusions, and the utility of these statistics is therefore limited. This paper reviews the available studies to provide comprehensive and reliable epidemiological characteristics of HCV infection among blood donors in China, which is expected to help inform prevention strategies and guide further research.
Methods
Literature search
Literature on HCV prevalence among blood donors in China was acquired by searching PubMed, Embase, China National Knowledge Infrastructure (CNKI), and the Wanfang Database from 1990 to 2010. To identify as many related studies as possible, we used combinations of key words including hepatitis C virus or HCV, blood donors, and China or Chinese Mainland.
Selection and data abstraction
All potentially relevant papers were reviewed independently by two investigators, who assessed the eligibility of each article and abstracted data with standardized data-abstraction forms. Disagreements were resolved through discussion. The following information, though not reported in every study, was extracted: first author's name, publication date, study period, province of sample, blood donor recruitment method (paid blood donors, employer-organized donors, or true volunteer donors), type of blood donation (plasma donors or whole blood donors), sample size, the number of subjects infected with HCV, HBV, and HIV or co-infected with two or three of these viruses, gender, age (18-30 years and 31-60 years), and blood type. The inclusion criteria were: (1) studies in the four databases mentioned above with full text, regardless of the language of the original text; (2) studies reporting anti-HCV positive rates among blood donors in the Chinese mainland; (3) studies using anti-HCV as the detection index of HCV. The exclusion criteria were: (1) studies without specific sample origins; (2) studies with overlapping time intervals of sample collection from the same origin; (3) studies with a sample size of less than 50; (4) studies that failed to present data clearly enough or that contained obviously paradoxical data.
Statistical analysis
Random effects models were used for meta-analysis, given the possibility of significant heterogeneity between studies, which was tested with the Q test (P < 0.10 considered indicative of statistically significant heterogeneity) and the I² statistic (values of 25%, 50%, and 75% representing low, medium, and high heterogeneity, respectively). The Freeman-Tukey arcsine transform was applied to stabilize variances; after the meta-analysis, the summary estimate and the CI boundaries were transformed back to proportions using the sine function (the specific conversion details can be seen in reference [10]). Stratified analyses were performed by study location, gender, age, study period, blood donor recruitment method, type of blood donation, and blood type. The Z or χ² test was used to assess differences among the subgroups. Data manipulation and statistical analyses were undertaken using the Statistical Software Package (STATA) 10.0 (STATA Corporation, College Station, TX, USA, 2009), and ArcGIS 9.3 (ESRI, Redlands, California, USA) was used for map construction.
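The statistical analysis above chains several steps: a variance-stabilizing transform of each study's prevalence, random-effects pooling, heterogeneity statistics, and a back-transformation to a proportion. The sketch below illustrates that pipeline under the assumption of a DerSimonian-Laird random-effects model on Freeman-Tukey double-arcsine transformed proportions; the simplified sin-based back-transform stands in for the exact formula from reference [10], and all study counts are invented.

```python
# Minimal sketch of the pooling pipeline described above: Freeman-Tukey
# double-arcsine transform of study prevalences, DerSimonian-Laird
# random-effects pooling, and Q / I^2 heterogeneity statistics.
# Assumption: the simplified sin-based back-transform below stands in for the
# exact formula of reference [10]; all study counts are invented.
import math

def ft_transform(events: int, n: int) -> tuple[float, float]:
    """Freeman-Tukey double-arcsine transform and its approximate variance."""
    t = math.asin(math.sqrt(events / (n + 1))) + math.asin(math.sqrt((events + 1) / (n + 1)))
    return t, 1.0 / (n + 0.5)

def pool_prevalence(studies: list[tuple[int, int]]) -> dict:
    """DerSimonian-Laird random-effects pooling of (events, total) studies."""
    ts, vs = zip(*(ft_transform(x, n) for x, n in studies))
    w = [1.0 / v for v in vs]                            # fixed-effect weights
    t_fixed = sum(wi * ti for wi, ti in zip(w, ts)) / sum(w)
    q = sum(wi * (ti - t_fixed) ** 2 for wi, ti in zip(w, ts))
    df = len(studies) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0      # between-study variance
    w_re = [1.0 / (v + tau2) for v in vs]                # random-effects weights
    t_pooled = sum(wi * ti for wi, ti in zip(w_re, ts)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    back = lambda t: math.sin(max(0.0, t) / 2.0) ** 2    # simplified back-transform
    return {
        "pooled_prevalence": back(t_pooled),
        "ci_95": (back(t_pooled - 1.96 * se), back(t_pooled + 1.96 * se)),
        "Q": q,
        "I2_percent": max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0,
    }

# Hypothetical studies: (anti-HCV positive donors, total donors)
print(pool_prevalence([(120, 1500), (45, 900), (300, 2600), (10, 1200)]))
```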
null
null
Conclusions
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2334/11/88/prepub
[ "Background", "Literature search", "Selection and data abstraction", "Statistical analysis", "Results", "General information of samples", "Prevalence of HCV infection among blood donors in Chinese mainland", "Region", "Gender", "Age", "Time", "Blood type", "Donation type", "Discussion", "Conclusions" ]
[ "Chronic infection with hepatitis C virus (HCV) is a major and growing public health problem, which could easily lead to chronic liver disease, cirrhosis and even hepatocellular carcinoma [1]. The prevention and control of HCV infection showed complexity and challenge in describing geographic distribution of HCV infection, determining its associated risk factors, and evaluating cofactors that accelerate hepatitis C progression. Estimated 170 million persons are infected with HCV worldwide and more than 3.5 million new sufferers occurred annually [2]. According to the national epidemiological survey of viral hepatitis from 1992 to 1995, average anti-HCV positive rate was 3.2% in the general Chinese population, amounting to more than 30 million infected individuals [3].\nThe rapid global spread of HCV is believed to have occurred primarily because of efficient transmission through blood transfusion and parenteral exposures with contaminated equipment [4]. Blood donors, particularly those that rely on blood donation as a source of income, had a very high prevalence of HCV infection [5]. Recent studies have reported that the current residual risk of transfusion-transmitted HCV infection in China is about 1 in 40,000-60,000 donations, higher than that found in more developed countries [6]. With the implemention of blood donation law in 1998, many blood centers relied on other methods to motivate donors, mostly through employer-organized blood collection, but these donors may not have been true volunteers, as they may be coerced by the employer to some extent. In recent years, the true voluntary donors are gradually becoming the main source of blood donation in many blood centers in China [7]. Among paid blood donors, the HCV prevalence has reached 5.7% or higher [8]. However, among employer-organized donors and volunteer donors, the HCV prevalence was reported at lower level between 1.1-2.3%, and 0.46%, respectively [7,9].\nA large amount of studies have been done in the last decade on HCV infection and its associated risk factors among blood donors. However, many of them drew incompatible or even contradictory conclusion and the utilization of these statistics are therefore limited. This paper reviews on the available studies so as to provide comprehensive and reliable epidemiological characteristics of HCV infection among blood donors in China, which is speculated to help make prevention strategies and guide further research.", "Literatures on the HCV prevalence among blood donors in China were acquired through searching PubMed, Embase, China National Knowledge Infrastructure (CNKI), and Wanfang Database from 1990 to 2010. In order to search and include related studies as many as possible, we used combinations of various key words, including hepatitis C virus or HCV, blood donors, and China or Chinese Mainland.", "All the potentially relevant papers were reviewed independently by two investigators through assessing the eligibility of each article and abstracting data with standardized data-abstraction forms. Disagreements were resolved through discussion. 
The following information, though some studies did not contain all of them, were extracted from the literatures: first author's name, publication date, study period, province of sample, blood donor recruitment methods (paid blood donors, employer-organized donors, or true volunteer donors), type of blood donation (categorized as plasma donors and whole blood donors), sampling size, the number of subjects infected with HCV, HBV, and HIV or co-infected with two or three of these viruses, gender, age (18-30 years and 31-60 years), and blood type, etc.\nThe inclusion criteria were: (1) studies in the mentioned four databases with full text, despite the language of original text; (2) studies reporting anti-HCV positive rates among blood donors in Chinese Mainland; (3) studies using anti-HCV as a detection index of HCV. The exclusion criteria were: (1) studies without specific sample origins; (2) studies with overlapping time intervals of sample collection from the same origin; (3) studies with a sample size less than 50; (4) studies that failed to present data clearly enough or with obviously paradoxical data.", "In our review, random effect models were used for meta-analysis, considering the possibility of significant heterogeneity between studies which was tested with the Q test (P < 0.10 was considered indicative of statistically significant heterogeneity) and the I2statistic (values of 25%, 50% and 75% are considered to represent low, medium and high heterogeneity respectively). Freeman-Tukey arcsin transform to stabilize variances, and after the meta-analysis, investigators can transform the summary estimate and the CI boundaries back to proportions using sin function, the specific conversion details can be seen in reference [10]. Stratified analyses were performed by study locations, gender, age, study period, blood donor recruitment methods, type of blood donation, and blood type. The Z or χ2 test was used to assess the differences among the subgroups. Data manipulation and statistical analyses were undertaken using the Statistical Software Package (STATA) 10.0 (STATA Corporation, College Station, TX, USA, 2009), and ArcGIS 9.3 (ESRI, Redlands, California, USA) was used for map construction.", "According to the literature search strategies, 726 studies (90 studies in PubMed, 636 studies in CNKI and Wanfang database) were identified, but 461 studies were excluded based on the inclusion and exclusion criteria (Figure 1). There were 11 studies in English [8,9,11-19] and 254 studies in Chinese [20-273] of the finally adopted 265 studies.\nResults of the systematic literature search.\n General information of samples A total of 4519313 blood donors between the ages of 18 to 60 were included, with a wide range of blood donation frequency from 1 to more than 50 times. Some donors had a duration (the period from the first blood donation to when selected in original research) longer than 15 years. The majority of blood donors were men, approximately 57.72% (101319/175540), while women accounted for 42.28% (74221/175540). The occupation of blood donors was widely distributed. Voluntary blood donors mainly came from college students, health care providers, officials, and military from Chinese People's Liberation Army (PLA), while paid blood donors were mostly from peasants, low-wage workers, and unemployed individuals.\nThe blood samples mainly came from blood banks, hospitals, and Centers for Disease Control and Prevention (CDC). 
The studies of our review involved in the following regions of 29 provinces and cities: Central China (Hunan, Hubei, Henan), North China (Beijing, Hebei, Shanxi, Tianjin, Inner Mongolia), South China(Guangdong, Guangxi), Northwest China(Shanxi, Gansu, Ningxia, Qinghai, Xinjiang), Northeast China (Liaoning, Jilin, Heilongjiang), Southwest China (Yunnan, Guizhou, Sichuan, Chongqing), East China (Shandong, Jiangsu, Anhui, Zhejiang, Fujian, Shanghai, Jiangxi).\nA total of 4519313 blood donors between the ages of 18 to 60 were included, with a wide range of blood donation frequency from 1 to more than 50 times. Some donors had a duration (the period from the first blood donation to when selected in original research) longer than 15 years. The majority of blood donors were men, approximately 57.72% (101319/175540), while women accounted for 42.28% (74221/175540). The occupation of blood donors was widely distributed. Voluntary blood donors mainly came from college students, health care providers, officials, and military from Chinese People's Liberation Army (PLA), while paid blood donors were mostly from peasants, low-wage workers, and unemployed individuals.\nThe blood samples mainly came from blood banks, hospitals, and Centers for Disease Control and Prevention (CDC). The studies of our review involved in the following regions of 29 provinces and cities: Central China (Hunan, Hubei, Henan), North China (Beijing, Hebei, Shanxi, Tianjin, Inner Mongolia), South China(Guangdong, Guangxi), Northwest China(Shanxi, Gansu, Ningxia, Qinghai, Xinjiang), Northeast China (Liaoning, Jilin, Heilongjiang), Southwest China (Yunnan, Guizhou, Sichuan, Chongqing), East China (Shandong, Jiangsu, Anhui, Zhejiang, Fujian, Shanghai, Jiangxi).\n Prevalence of HCV infection among blood donors in Chinese mainland Region As seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\nAs seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. 
The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\n Gender A total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\nA total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\n Age There were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\nThere were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\n Time As presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\nAs presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. 
During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\n Blood type As displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\nAs displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\n Donation type As seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.\nAs seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.\n Region As seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. 
In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\nAs seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\n Gender A total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\nA total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\n Age There were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\nThere were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\n Time As presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. 
During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\nAs presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\n Blood type As displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\nAs displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\n Donation type As seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.\nAs seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.", "A total of 4519313 blood donors between the ages of 18 to 60 were included, with a wide range of blood donation frequency from 1 to more than 50 times. Some donors had a duration (the period from the first blood donation to when selected in original research) longer than 15 years. The majority of blood donors were men, approximately 57.72% (101319/175540), while women accounted for 42.28% (74221/175540). The occupation of blood donors was widely distributed. 
Voluntary blood donors mainly came from college students, health care providers, officials, and military from Chinese People's Liberation Army (PLA), while paid blood donors were mostly from peasants, low-wage workers, and unemployed individuals.\nThe blood samples mainly came from blood banks, hospitals, and Centers for Disease Control and Prevention (CDC). The studies of our review involved in the following regions of 29 provinces and cities: Central China (Hunan, Hubei, Henan), North China (Beijing, Hebei, Shanxi, Tianjin, Inner Mongolia), South China(Guangdong, Guangxi), Northwest China(Shanxi, Gansu, Ningxia, Qinghai, Xinjiang), Northeast China (Liaoning, Jilin, Heilongjiang), Southwest China (Yunnan, Guizhou, Sichuan, Chongqing), East China (Shandong, Jiangsu, Anhui, Zhejiang, Fujian, Shanghai, Jiangxi).", " Region As seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\nAs seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. 
In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\n Gender A total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\nA total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\n Age There were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\nThere were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\n Time As presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\nAs presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\n Blood type As displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. 
There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\nAs displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\n Donation type As seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.\nAs seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.", "As seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.", "A total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). 
There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).", "There were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).", "As presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.", "As displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.", "As seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.", "As a blood-borne pathogen, HCV virus was frequently detected among paid blood donors in China in the early 1990s [9]. To improve the safety of blood supply and reduce the risk of transfusion-transmitted diseases, the Chinese government has outlawed the use of paid blood donors since 1998. As a result, Chinese blood banks now rely on various other methods to recruit blood donors, mostly on employer-organized donors and true voluntary donors [274]. This transition in the blood donor recruitment methods has been associated with a gradual decrease in the prevalence of anti-HCV among donors. In addition, an HCV RNA screening strategy was implemented in 2010 in all Chinese blood banks, which has also contributed to the decline of HCV prevalence rate. This review shows that the pooled prevalence of HCV infection among blood donors was 8.68% (95% CI: 8.01% to 9.39%) from 1990 to 2010, significantly higher than the estimated 3.2% in the general population of China [1]. It is noteworthy that before 1998 the pooled prevalence of HCV infection was 12.87% among blood donors, but dramatically decreased to 1.71% after 1998. In economically developed regions, the decreasing trend was more prominent due to effective control measures.\nOur results showed significant geographic difference of the prevalence of anti-HCV. 
Compared with other regions, North and Central China had relatively higher anti-HCV positive rates among blood donors, accounting for 13.45% and 14.74%. Meanwhile, the lowest epidemic rate was found in South China with a percentage of 2.88%. Notably, during the period through 1990 and 1998, the prevalence rates were commonly at high level among different regions of China. Especially in Henan and Hebei, the rates even reached 35.04% and 29.26% respectively. After 1998, the general epidemic rates have rapidly decreased, benefited from the government's prohibition of using paid blood donors. However, the reduction of HCV prevalence was not that obvious in North and Central China. Possible reasons for this lack of reduction may include larger population migration, poor economic conditions, higher HCV infection rates in the general populations, or limited sampling.\nAccording to the national epidemiological survey of viral hepatitis from 1992 to 1995, HCV infection rate among the general population increased gradually with age, but the prevalence rate had no significant difference between male and female [275], which is consistent with our findings that significant difference was found between two age groups but not between gender. The findings indicated that both male and female had the same susceptibility to HCV infection, while the older had increased chance of being HCV-infected. Since the paid donors were usually driven by economic benefits, the association of age with HCV infection may also be explained by increased exposure chances with a greater number of instances of blood donation times or longer duration of blood donation. It is early for a conclusive explanation until systematic investigations are performed and causal relation underneath is revealed [35,61,65,153,179].\nBy and large, the prevalence rate of HCV infection showed a rising trend among blood donors from 1990 to 1995, but significantly decreased from 1996 to 2010 (Figure 3). Our results revealed an outbreak of HCV infection in blood donors around 1995. As recalled, lots of plasma collection stations were established in different regions before 1995. However, the majority of them were illegal and severe cross-contamination on plasmapheresis frequently occurred due to those commonly existed nonstandard operations, such as neglecting sterilization, lacking accurate detection method of anti-HCV, improperly usage of non-disposable needles. In some places, the prevalence rate of HCV was reported as high as 80% [275]. Given the urgency of the situation, in 1995 the government implemented strict management on blood stations. Besides, the detection technology of anti-HCV got greatly improved with better sensitivity and specificity in the following years [276]. With the implementation of Blood Donation Law of China in 1998, real voluntary blood donors replaced paid donors and became a steady and major source of the blood supply. All these measures lead to the great achievements in control and prevention of HCV infection. Nowadays reports of HCV infection among the true voluntary blood donors could rarely been seen in China.\nOur study showed that among blood type A, B, AB, and O donors, the prevalence of HCV infection were 8.18%, 7.58%, 8.15%, and 7.85%, respectively. No significant difference was found between blood types and the epidemic rate of HCV, indicating that blood type is not associated with susceptibility to HCV infection. This finding was consistent with the studies of Lu KQ [52], Zhou ZD [119], and Pu SF [269]. 
While it was a different story in the study of Ye C [239], in which type O blood donors were reported to have a higher infection rate, and type AB showed relatively lower rate. Uneven distribution with blood type was also seen in Rui ZL's study, in which type A blood donors were reported to have a higher rate [131]. However, the mechanism between blood type and HCV infection remained undefined, which may be related to red cell immune adherence function among persons with different blood types, but it need further study [131].\nNumerous research has showed that paid blood donors are more likely to be infected with HCV than both employer-organized donors or true voluntary donors. Our results confirmed that HCV infection rate in paid blood donors was significantly higher than in voluntary blood donors (15.53% vs 0.97%). Those paid donors who were attracted by high compensation and chose to donate blood in illegal blood stations, also risked a greater risk of cross-contamination. The prevalence rate among plasma donors was significantly higher than among whole blood donors (33.95% vs 7.90%), possibly due to cross-contamination of blood collection equipment by HCV positive plasma donors [77]. The elimination of paid plasma and whole blood donation could contribute to a reduction in HCV infection among blood donors.\nSeveral limitations in our study need to be addressed. First of all, the studies were observational and blood donors were not randomly chosen. Therefore selection bias and confounding seems inevitable. Secondly, many of our data were extracted from studies written in Chinese, which makes it difficult for non-Chinese reviewers, editors, and readers to trace back to the original materials. Thirdly, our ability to assess study quality was limited by the fact that many studies failed to offer detailed information of selected subjects or valid data on important factors. Besides, as with all meta-analyses, this study has potential limitation of publication bias. Negative trials are sometimes less likely to be published. However, we have confidence on our results since the included literatures were mostly from multi-resources and had large sample size, which should reduce publication bias to some extent.", "This meta-analysis provides a comprehensive and reliable data on the prevalence and trend of HCV infection among blood donors. The pooled epidemic rate of HCV infection has rapidly decreased after 1998, though some provinces still showed relatively high prevalence. Achievements and lessons in previous work indicated that long-term, comprehensive and effective interventions and preventions are urgently needed. In particular, implementing and enforcing the \"Blood Donation Law\" and promoting HCV screening, diagnosis, and treatment among blood donors are very important measures to control the transmission of HCV infection. In addition, the key to reduce the incidence of HCV infection among blood donors is to encourage true voluntary blood donation, pay more attention to exclude those high-risk persons from the volunteers, and eliminate cross-infection completely when collecting single plasma." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Literature search", "Selection and data abstraction", "Statistical analysis", "Results", "General information of samples", "Prevalence of HCV infection among blood donors in Chinese mainland", "Region", "Gender", "Age", "Time", "Blood type", "Donation type", "Discussion", "Conclusions" ]
[ "Chronic infection with hepatitis C virus (HCV) is a major and growing public health problem, which could easily lead to chronic liver disease, cirrhosis and even hepatocellular carcinoma [1]. The prevention and control of HCV infection showed complexity and challenge in describing geographic distribution of HCV infection, determining its associated risk factors, and evaluating cofactors that accelerate hepatitis C progression. Estimated 170 million persons are infected with HCV worldwide and more than 3.5 million new sufferers occurred annually [2]. According to the national epidemiological survey of viral hepatitis from 1992 to 1995, average anti-HCV positive rate was 3.2% in the general Chinese population, amounting to more than 30 million infected individuals [3].\nThe rapid global spread of HCV is believed to have occurred primarily because of efficient transmission through blood transfusion and parenteral exposures with contaminated equipment [4]. Blood donors, particularly those that rely on blood donation as a source of income, had a very high prevalence of HCV infection [5]. Recent studies have reported that the current residual risk of transfusion-transmitted HCV infection in China is about 1 in 40,000-60,000 donations, higher than that found in more developed countries [6]. With the implemention of blood donation law in 1998, many blood centers relied on other methods to motivate donors, mostly through employer-organized blood collection, but these donors may not have been true volunteers, as they may be coerced by the employer to some extent. In recent years, the true voluntary donors are gradually becoming the main source of blood donation in many blood centers in China [7]. Among paid blood donors, the HCV prevalence has reached 5.7% or higher [8]. However, among employer-organized donors and volunteer donors, the HCV prevalence was reported at lower level between 1.1-2.3%, and 0.46%, respectively [7,9].\nA large amount of studies have been done in the last decade on HCV infection and its associated risk factors among blood donors. However, many of them drew incompatible or even contradictory conclusion and the utilization of these statistics are therefore limited. This paper reviews on the available studies so as to provide comprehensive and reliable epidemiological characteristics of HCV infection among blood donors in China, which is speculated to help make prevention strategies and guide further research.", " Literature search Literatures on the HCV prevalence among blood donors in China were acquired through searching PubMed, Embase, China National Knowledge Infrastructure (CNKI), and Wanfang Database from 1990 to 2010. In order to search and include related studies as many as possible, we used combinations of various key words, including hepatitis C virus or HCV, blood donors, and China or Chinese Mainland.\nLiteratures on the HCV prevalence among blood donors in China were acquired through searching PubMed, Embase, China National Knowledge Infrastructure (CNKI), and Wanfang Database from 1990 to 2010. In order to search and include related studies as many as possible, we used combinations of various key words, including hepatitis C virus or HCV, blood donors, and China or Chinese Mainland.\n Selection and data abstraction All the potentially relevant papers were reviewed independently by two investigators through assessing the eligibility of each article and abstracting data with standardized data-abstraction forms. Disagreements were resolved through discussion. 
The following information, although some studies did not contain all of it, was extracted from the literature: first author's name, publication date, study period, province of sample, blood donor recruitment method (paid blood donors, employer-organized donors, or true volunteer donors), type of blood donation (categorized as plasma donors and whole blood donors), sample size, the number of subjects infected with HCV, HBV, and HIV or co-infected with two or three of these viruses, gender, age (18-30 years and 31-60 years), and blood type, etc.\nThe inclusion criteria were: (1) studies in the four databases mentioned above with full text, regardless of the language of the original text; (2) studies reporting anti-HCV positive rates among blood donors in Chinese Mainland; (3) studies using anti-HCV as a detection index of HCV. The exclusion criteria were: (1) studies without specific sample origins; (2) studies with overlapping time intervals of sample collection from the same origin; (3) studies with a sample size less than 50; (4) studies that failed to present data clearly enough or with obviously paradoxical data.\n Statistical analysis In our review, random effect models were used for the meta-analysis, given the possibility of significant heterogeneity between studies, which was tested with the Q test (P < 0.10 was considered indicative of statistically significant heterogeneity) and the I2 statistic (values of 25%, 50% and 75% are considered to represent low, medium and high heterogeneity, respectively). The Freeman-Tukey arcsine transform was used to stabilize variances; after the meta-analysis, the summary estimate and the CI boundaries can be transformed back to proportions using the sine function, with the specific conversion details given in reference [10]. Stratified analyses were performed by study location, gender, age, study period, blood donor recruitment method, type of blood donation, and blood type. The Z or χ2 test was used to assess the differences among the subgroups. 
Data manipulation and statistical analyses were undertaken using the Statistical Software Package (STATA) 10.0 (STATA Corporation, College Station, TX, USA, 2009), and ArcGIS 9.3 (ESRI, Redlands, California, USA) was used for map construction.", "Studies on the HCV prevalence among blood donors in China were identified by searching PubMed, Embase, the China National Knowledge Infrastructure (CNKI), and the Wanfang Database from 1990 to 2010. In order to retrieve as many related studies as possible, we used combinations of various key words, including hepatitis C virus or HCV, blood donors, and China or Chinese Mainland.", "All the potentially relevant papers were reviewed independently by two investigators, who assessed the eligibility of each article and abstracted data with standardized data-abstraction forms. Disagreements were resolved through discussion. The following information, although some studies did not contain all of it, was extracted from the literature: first author's name, publication date, study period, province of sample, blood donor recruitment method (paid blood donors, employer-organized donors, or true volunteer donors), type of blood donation (categorized as plasma donors and whole blood donors), sample size, the number of subjects infected with HCV, HBV, and HIV or co-infected with two or three of these viruses, gender, age (18-30 years and 31-60 years), and blood type, etc.\nThe inclusion criteria were: (1) studies in the four databases mentioned above with full text, regardless of the language of the original text; (2) studies reporting anti-HCV positive rates among blood donors in Chinese Mainland; (3) studies using anti-HCV as a detection index of HCV. The exclusion criteria were: (1) studies without specific sample origins; (2) studies with overlapping time intervals of sample collection from the same origin; (3) studies with a sample size less than 50; (4) studies that failed to present data clearly enough or with obviously paradoxical data.", "In our review, random effect models were used for the meta-analysis, given the possibility of significant heterogeneity between studies, which was tested with the Q test (P < 0.10 was considered indicative of statistically significant heterogeneity) and the I2 statistic (values of 25%, 50% and 75% are considered to represent low, medium and high heterogeneity, respectively). 
The Freeman-Tukey arcsine transform was used to stabilize variances; after the meta-analysis, the summary estimate and the CI boundaries can be transformed back to proportions using the sine function, with the specific conversion details given in reference [10]. Stratified analyses were performed by study location, gender, age, study period, blood donor recruitment method, type of blood donation, and blood type. The Z or χ2 test was used to assess the differences among the subgroups. Data manipulation and statistical analyses were undertaken using the Statistical Software Package (STATA) 10.0 (STATA Corporation, College Station, TX, USA, 2009), and ArcGIS 9.3 (ESRI, Redlands, California, USA) was used for map construction.", "According to the literature search strategies, 726 studies (90 studies in PubMed, 636 studies in CNKI and the Wanfang database) were identified, but 461 studies were excluded based on the inclusion and exclusion criteria (Figure 1). Of the 265 studies finally included, 11 were in English [8,9,11-19] and 254 were in Chinese [20-273].\nResults of the systematic literature search.\n General information of samples A total of 4519313 blood donors aged 18 to 60 were included, with blood donation frequencies ranging from 1 to more than 50 times. Some donors had a donation history (the period from the first blood donation to selection in the original research) longer than 15 years. The majority of blood donors were men, approximately 57.72% (101319/175540), while women accounted for 42.28% (74221/175540). The occupations of blood donors were widely distributed. Voluntary blood donors mainly came from college students, health care providers, officials, and military personnel from the Chinese People's Liberation Army (PLA), while paid blood donors were mostly peasants, low-wage workers, and unemployed individuals.\nThe blood samples mainly came from blood banks, hospitals, and Centers for Disease Control and Prevention (CDC). The studies in our review covered 29 provinces and municipalities in the following regions: Central China (Hunan, Hubei, Henan), North China (Beijing, Hebei, Shanxi, Tianjin, Inner Mongolia), South China (Guangdong, Guangxi), Northwest China (Shanxi, Gansu, Ningxia, Qinghai, Xinjiang), Northeast China (Liaoning, Jilin, Heilongjiang), Southwest China (Yunnan, Guizhou, Sichuan, Chongqing), East China (Shandong, Jiangsu, Anhui, Zhejiang, Fujian, Shanghai, Jiangxi).\nA total of 4519313 blood donors aged 18 to 60 were included, with blood donation frequencies ranging from 1 to more than 50 times. Some donors had a donation history (the period from the first blood donation to selection in the original research) longer than 15 years. The majority of blood donors were men, approximately 57.72% (101319/175540), while women accounted for 42.28% (74221/175540). The occupations of blood donors were widely distributed. Voluntary blood donors mainly came from college students, health care providers, officials, and military personnel from the Chinese People's Liberation Army (PLA), while paid blood donors were mostly peasants, low-wage workers, and unemployed individuals.\nThe blood samples mainly came from blood banks, hospitals, and Centers for Disease Control and Prevention (CDC). 
The studies of our review involved in the following regions of 29 provinces and cities: Central China (Hunan, Hubei, Henan), North China (Beijing, Hebei, Shanxi, Tianjin, Inner Mongolia), South China(Guangdong, Guangxi), Northwest China(Shanxi, Gansu, Ningxia, Qinghai, Xinjiang), Northeast China (Liaoning, Jilin, Heilongjiang), Southwest China (Yunnan, Guizhou, Sichuan, Chongqing), East China (Shandong, Jiangsu, Anhui, Zhejiang, Fujian, Shanghai, Jiangxi).\n Prevalence of HCV infection among blood donors in Chinese mainland Region As seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\nAs seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\n Gender A total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. 
HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\nA total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\n Age There were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\nThere were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\n Time As presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\nAs presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\n Blood type As displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\nAs displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. 
There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\n Donation type As seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.\nAs seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.\n Region As seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\nAs seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). 
Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\n Gender A total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\nA total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\n Age There were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\nThere were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\n Time As presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\nAs presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). 
Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\n Blood type As displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\nAs displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\n Donation type As seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.\nAs seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.", "A total of 4519313 blood donors between the ages of 18 to 60 were included, with a wide range of blood donation frequency from 1 to more than 50 times. Some donors had a duration (the period from the first blood donation to when selected in original research) longer than 15 years. The majority of blood donors were men, approximately 57.72% (101319/175540), while women accounted for 42.28% (74221/175540). The occupation of blood donors was widely distributed. Voluntary blood donors mainly came from college students, health care providers, officials, and military from Chinese People's Liberation Army (PLA), while paid blood donors were mostly from peasants, low-wage workers, and unemployed individuals.\nThe blood samples mainly came from blood banks, hospitals, and Centers for Disease Control and Prevention (CDC). 
The studies of our review involved in the following regions of 29 provinces and cities: Central China (Hunan, Hubei, Henan), North China (Beijing, Hebei, Shanxi, Tianjin, Inner Mongolia), South China(Guangdong, Guangxi), Northwest China(Shanxi, Gansu, Ningxia, Qinghai, Xinjiang), Northeast China (Liaoning, Jilin, Heilongjiang), Southwest China (Yunnan, Guizhou, Sichuan, Chongqing), East China (Shandong, Jiangsu, Anhui, Zhejiang, Fujian, Shanghai, Jiangxi).", " Region As seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\nAs seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.\n Gender A total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). 
There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\nA total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).\n Age There were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\nThere were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was significant statistical difference between two groups (Z = 66.02, P < 0.01).\n Time As presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\nAs presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 was drawn on the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached the highest, which was 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates showed a decreasing trend among blood donors, even as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.\n Blood type As displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\nAs displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95%CI: 4.26%-11.73%), 8.15% (95%CI: 4.64%-12.54%), and 7.85% (95%CI: 4.64%-11.82%), respectively. 
There was no significant statistical difference among four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.\n Donation type As seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.\nAs seen in Table 4, HCV infection rate of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%) respectively. There was significant statistical difference (Z = 325.65, P < 0.01). The prevalence of HCV infection differed significantly (Z = 142.22, P < 0.01) among plasma donors and whole blood donors, which was 7.90% (95%CI: 6.44%-9.51%) and 33.95% (95%CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.", "As seen in Table 1 and Figure 2, 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%- 9.39%). Dramatic geographic difference in pooled HCV infection rates among blood donors was observed. The epidemic was severest in North and Central China, where the HCV infection rate were 13.45% (95%CI: 11.41%-15.67%) and 14.74% (95%CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China with the rate of 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection was 12.87% (95% CI: 11.25%-14.56%) among blood donors, with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%), then Hebei (29.26%, 95% CI: 19.63%-39.98%), and then the pooled prevalence decreased to 1.71% (95%CI: 1.43%-1.99%) after 1998.\nPrevalence of HCV infection among blood donors at different regions\na. IM: Inner Mongolia\nb. When calculating the pooled prevalence rate either before or after 1998 respectively, the studies that spanned 1998 were excluded. In that case, number of total literatures is bigger than the sum of literature number before 1998 and after 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998.\nThe regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998.", "A total of 79 studies have investigated the association between the prevalence rate of HCV infection and gender among blood donors. HCV infection rate of male blood donors was 9.87% (95%CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95%CI: 7.88%-11.89%). There was no significant statistical difference between males and females (Z = 0.62, P = 0.54).", "There were 30 studies indicating the association between the prevalence rate of HCV infection and age among blood donors. In most studies blood donors were divided into two groups by the age of 30 years old. 
The HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while the prevalence rate of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%), and there was a statistically significant difference between the two groups (Z = 66.02, P < 0.01).", "As presented in Table 2, 179 studies were divided into 9 groups according to the study period, and Figure 4 shows the prevalence of HCV in each group. During 1994-1995, the prevalence rate peaked at 15.78% (95% CI: 12.21%-19.75%). Since 1995, the rates have shown a decreasing trend among blood donors, falling as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010.\nPrevalence of HCV infection among blood donors at different study period\nREM: random effect model.\nPrevalence of HCV infection among blood donors at different study period.", "As displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95% CI: 4.26%-11.73%), 8.15% (95% CI: 4.64%-12.54%), and 7.85% (95% CI: 4.64%-11.82%), respectively. There was no statistically significant difference among the four blood types (χ2 = 4.97, P = 0.17).\nPrevalence of HCV infection among blood donors of different blood type\nREM: random effect model.", "As seen in Table 4, the HCV infection rates of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%), respectively, a statistically significant difference (Z = 325.65, P < 0.01). The prevalence of HCV infection also differed significantly (Z = 142.22, P < 0.01) between plasma donors and whole blood donors, at 7.90% (95% CI: 6.44%-9.51%) and 33.95% (95% CI: 29.80%-38.17%), respectively.\nPrevalence of HCV infection among blood donors of different donation types\nREM: random effect model.", "As a blood-borne pathogen, HCV was frequently detected among paid blood donors in China in the early 1990s [9]. To improve the safety of the blood supply and reduce the risk of transfusion-transmitted diseases, the Chinese government has outlawed the use of paid blood donors since 1998. As a result, Chinese blood banks now rely on various other methods to recruit blood donors, mostly employer-organized donors and true voluntary donors [274]. This transition in blood donor recruitment methods has been associated with a gradual decrease in the prevalence of anti-HCV among donors. In addition, an HCV RNA screening strategy was implemented in 2010 in all Chinese blood banks, which has also contributed to the decline in the HCV prevalence rate. This review shows that the pooled prevalence of HCV infection among blood donors was 8.68% (95% CI: 8.01% to 9.39%) from 1990 to 2010, significantly higher than the estimated 3.2% in the general population of China [1]. It is noteworthy that before 1998 the pooled prevalence of HCV infection among blood donors was 12.87%, but it decreased dramatically to 1.71% after 1998. In economically developed regions, the decreasing trend was more prominent owing to effective control measures.\nOur results showed significant geographic differences in the prevalence of anti-HCV. Compared with other regions, North and Central China had relatively high anti-HCV positive rates among blood donors, at 13.45% and 14.74%. Meanwhile, the lowest epidemic rate was found in South China, at 2.88%. Notably, between 1990 and 1998 the prevalence rates were commonly high across different regions of China. 
In Henan and Hebei especially, the rates reached 35.04% and 29.26%, respectively. After 1998, the overall epidemic rates decreased rapidly, benefiting from the government's prohibition of paid blood donation. However, the reduction in HCV prevalence was less obvious in North and Central China. Possible reasons for this include greater population migration, poor economic conditions, higher HCV infection rates in the general populations, and limited sampling.\nAccording to the national epidemiological survey of viral hepatitis from 1992 to 1995, the HCV infection rate in the general population increased gradually with age, but the prevalence did not differ significantly between males and females [275], which is consistent with our finding of a significant difference between the two age groups but not between genders. These findings indicate that males and females have the same susceptibility to HCV infection, while older donors have an increased chance of being HCV-infected. Since paid donors were usually driven by economic benefit, the association of age with HCV infection may also be explained by greater cumulative exposure through a larger number of donations or a longer duration of blood donation. It is too early for a conclusive explanation until systematic investigations are performed and the underlying causal relation is revealed [35,61,65,153,179].\nBy and large, the prevalence rate of HCV infection showed a rising trend among blood donors from 1990 to 1995, but decreased significantly from 1996 to 2010 (Figure 3). Our results revealed an outbreak of HCV infection in blood donors around 1995. Many plasma collection stations had been established in different regions before 1995. However, the majority of them were illegal, and severe cross-contamination during plasmapheresis frequently occurred owing to common nonstandard practices, such as neglected sterilization, the lack of accurate anti-HCV detection methods, and improper use of non-disposable needles. In some places, the prevalence rate of HCV was reported to be as high as 80% [275]. Given the urgency of the situation, in 1995 the government implemented strict management of blood stations. In addition, anti-HCV detection technology improved greatly in sensitivity and specificity in the following years [276]. With the implementation of the Blood Donation Law of China in 1998, true voluntary blood donors replaced paid donors and became a steady and major source of the blood supply. All these measures led to great achievements in the control and prevention of HCV infection. Nowadays, reports of HCV infection among true voluntary blood donors are rarely seen in China.\nOur study showed that among blood type A, B, AB, and O donors, the prevalence of HCV infection was 8.18%, 7.58%, 8.15%, and 7.85%, respectively. No significant difference in the HCV epidemic rate was found among blood types, indicating that blood type is not associated with susceptibility to HCV infection. This finding is consistent with the studies of Lu KQ [52], Zhou ZD [119], and Pu SF [269]. In contrast, the study of Ye C [239] reported a higher infection rate in type O blood donors and a relatively lower rate in type AB donors. An uneven distribution across blood types was also seen in Rui ZL's study, in which type A blood donors were reported to have a higher rate [131]. 
However, the mechanism linking blood type and HCV infection remains undefined; it may be related to red cell immune adherence function in persons with different blood types, but further study is needed [131].\nNumerous studies have shown that paid blood donors are more likely to be infected with HCV than either employer-organized donors or true voluntary donors. Our results confirmed that the HCV infection rate in paid blood donors was significantly higher than in voluntary blood donors (15.53% vs 0.97%). Paid donors who were attracted by high compensation and chose to donate blood at illegal blood stations also faced a greater risk of cross-contamination. The prevalence rate among plasma donors was significantly higher than among whole blood donors (33.95% vs 7.90%), possibly due to cross-contamination of blood collection equipment by HCV-positive plasma donors [77]. The elimination of paid plasma and whole blood donation could contribute to a reduction in HCV infection among blood donors.\nSeveral limitations of our study need to be addressed. First, the studies were observational and blood donors were not randomly chosen, so selection bias and confounding seem inevitable. Second, many of our data were extracted from studies written in Chinese, which makes it difficult for non-Chinese reviewers, editors, and readers to trace back to the original materials. Third, our ability to assess study quality was limited by the fact that many studies failed to offer detailed information on the selected subjects or valid data on important factors. In addition, as with all meta-analyses, this study has the potential limitation of publication bias, since negative studies are sometimes less likely to be published. However, we have confidence in our results because the included studies came from multiple sources and had large sample sizes, which should reduce publication bias to some extent.", "This meta-analysis provides comprehensive and reliable data on the prevalence and trend of HCV infection among blood donors. The pooled epidemic rate of HCV infection decreased rapidly after 1998, though some provinces still showed relatively high prevalence. Achievements and lessons from previous work indicate that long-term, comprehensive and effective interventions and prevention measures are urgently needed. In particular, implementing and enforcing the \"Blood Donation Law\" and promoting HCV screening, diagnosis, and treatment among blood donors are very important measures to control the transmission of HCV infection. In addition, the keys to reducing the incidence of HCV infection among blood donors are to encourage true voluntary blood donation, pay more attention to excluding high-risk persons from the volunteer pool, and completely eliminate cross-infection during apheresis plasma collection." ]
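Editor's note: the Freeman-Tukey double arcsine pooling, heterogeneity statistics (Q and I2), and back-transformation described in the Statistical analysis section above can be illustrated with a short sketch. This is not the authors' STATA code; it is a minimal Python re-implementation under standard DerSimonian-Laird random-effects assumptions, the function names (freeman_tukey, dersimonian_laird, back_transform) and the per-study counts are hypothetical, and the back-transform shown is the simplified sin^2(t/2) form rather than the exact harmonic-mean-corrected inverse given in reference [10].

import numpy as np

def freeman_tukey(x, n):
    # Double arcsine transform of x events out of n; variance is approximately 1/(n + 0.5).
    x, n = np.asarray(x, dtype=float), np.asarray(n, dtype=float)
    y = np.arcsin(np.sqrt(x / (n + 1.0))) + np.arcsin(np.sqrt((x + 1.0) / (n + 1.0)))
    v = 1.0 / (n + 0.5)
    return y, v

def dersimonian_laird(y, v):
    # Random-effects pooling with Cochran's Q and the I2 heterogeneity statistic.
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    df = len(y) - 1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, q, i2

def back_transform(t):
    # Simplified inverse of the double arcsine transform; see reference [10] for the exact version.
    return np.sin(t / 2.0) ** 2

# Hypothetical example: anti-HCV positives and donors screened in four studies.
positives = np.array([120, 35, 410, 8])
screened = np.array([1500, 900, 3200, 2100])

y, v = freeman_tukey(positives, screened)
pooled, se, q, i2 = dersimonian_laird(y, v)
low, high = pooled - 1.96 * se, pooled + 1.96 * se
print("Pooled prevalence: {:.2%} (95% CI {:.2%}-{:.2%}); Q = {:.1f}, I2 = {:.0f}%".format(
    back_transform(pooled), back_transform(low), back_transform(high), q, i2))

Stratified estimates of the kind reported in the Results (by region, gender, age, study period, donation type, or blood type) can be obtained by running the same pooling separately on each subgroup of studies.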
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "hepatitis C virus", "infection", "blood donors", "meta-analysis" ]
Background: Chronic infection with hepatitis C virus (HCV) is a major and growing public health problem, which could easily lead to chronic liver disease, cirrhosis and even hepatocellular carcinoma [1]. The prevention and control of HCV infection showed complexity and challenge in describing geographic distribution of HCV infection, determining its associated risk factors, and evaluating cofactors that accelerate hepatitis C progression. Estimated 170 million persons are infected with HCV worldwide and more than 3.5 million new sufferers occurred annually [2]. According to the national epidemiological survey of viral hepatitis from 1992 to 1995, average anti-HCV positive rate was 3.2% in the general Chinese population, amounting to more than 30 million infected individuals [3]. The rapid global spread of HCV is believed to have occurred primarily because of efficient transmission through blood transfusion and parenteral exposures with contaminated equipment [4]. Blood donors, particularly those that rely on blood donation as a source of income, had a very high prevalence of HCV infection [5]. Recent studies have reported that the current residual risk of transfusion-transmitted HCV infection in China is about 1 in 40,000-60,000 donations, higher than that found in more developed countries [6]. With the implemention of blood donation law in 1998, many blood centers relied on other methods to motivate donors, mostly through employer-organized blood collection, but these donors may not have been true volunteers, as they may be coerced by the employer to some extent. In recent years, the true voluntary donors are gradually becoming the main source of blood donation in many blood centers in China [7]. Among paid blood donors, the HCV prevalence has reached 5.7% or higher [8]. However, among employer-organized donors and volunteer donors, the HCV prevalence was reported at lower level between 1.1-2.3%, and 0.46%, respectively [7,9]. A large amount of studies have been done in the last decade on HCV infection and its associated risk factors among blood donors. However, many of them drew incompatible or even contradictory conclusion and the utilization of these statistics are therefore limited. This paper reviews on the available studies so as to provide comprehensive and reliable epidemiological characteristics of HCV infection among blood donors in China, which is speculated to help make prevention strategies and guide further research. Methods: Literature search Literatures on the HCV prevalence among blood donors in China were acquired through searching PubMed, Embase, China National Knowledge Infrastructure (CNKI), and Wanfang Database from 1990 to 2010. In order to search and include related studies as many as possible, we used combinations of various key words, including hepatitis C virus or HCV, blood donors, and China or Chinese Mainland. Literatures on the HCV prevalence among blood donors in China were acquired through searching PubMed, Embase, China National Knowledge Infrastructure (CNKI), and Wanfang Database from 1990 to 2010. In order to search and include related studies as many as possible, we used combinations of various key words, including hepatitis C virus or HCV, blood donors, and China or Chinese Mainland. Selection and data abstraction All the potentially relevant papers were reviewed independently by two investigators through assessing the eligibility of each article and abstracting data with standardized data-abstraction forms. 
Disagreements were resolved through discussion. The following information, though some studies did not contain all of them, were extracted from the literatures: first author's name, publication date, study period, province of sample, blood donor recruitment methods (paid blood donors, employer-organized donors, or true volunteer donors), type of blood donation (categorized as plasma donors and whole blood donors), sampling size, the number of subjects infected with HCV, HBV, and HIV or co-infected with two or three of these viruses, gender, age (18-30 years and 31-60 years), and blood type, etc. The inclusion criteria were: (1) studies in the mentioned four databases with full text, despite the language of original text; (2) studies reporting anti-HCV positive rates among blood donors in Chinese Mainland; (3) studies using anti-HCV as a detection index of HCV. The exclusion criteria were: (1) studies without specific sample origins; (2) studies with overlapping time intervals of sample collection from the same origin; (3) studies with a sample size less than 50; (4) studies that failed to present data clearly enough or with obviously paradoxical data. All the potentially relevant papers were reviewed independently by two investigators through assessing the eligibility of each article and abstracting data with standardized data-abstraction forms. Disagreements were resolved through discussion. The following information, though some studies did not contain all of them, were extracted from the literatures: first author's name, publication date, study period, province of sample, blood donor recruitment methods (paid blood donors, employer-organized donors, or true volunteer donors), type of blood donation (categorized as plasma donors and whole blood donors), sampling size, the number of subjects infected with HCV, HBV, and HIV or co-infected with two or three of these viruses, gender, age (18-30 years and 31-60 years), and blood type, etc. The inclusion criteria were: (1) studies in the mentioned four databases with full text, despite the language of original text; (2) studies reporting anti-HCV positive rates among blood donors in Chinese Mainland; (3) studies using anti-HCV as a detection index of HCV. The exclusion criteria were: (1) studies without specific sample origins; (2) studies with overlapping time intervals of sample collection from the same origin; (3) studies with a sample size less than 50; (4) studies that failed to present data clearly enough or with obviously paradoxical data. Statistical analysis In our review, random effect models were used for meta-analysis, considering the possibility of significant heterogeneity between studies which was tested with the Q test (P < 0.10 was considered indicative of statistically significant heterogeneity) and the I2statistic (values of 25%, 50% and 75% are considered to represent low, medium and high heterogeneity respectively). Freeman-Tukey arcsin transform to stabilize variances, and after the meta-analysis, investigators can transform the summary estimate and the CI boundaries back to proportions using sin function, the specific conversion details can be seen in reference [10]. Stratified analyses were performed by study locations, gender, age, study period, blood donor recruitment methods, type of blood donation, and blood type. The Z or χ2 test was used to assess the differences among the subgroups. 
Data manipulation and statistical analyses were undertaken using the Statistical Software Package (STATA) 10.0 (STATA Corporation, College Station, TX, USA, 2009), and ArcGIS 9.3 (ESRI, Redlands, California, USA) was used for map construction. In our review, random effect models were used for meta-analysis, considering the possibility of significant heterogeneity between studies which was tested with the Q test (P < 0.10 was considered indicative of statistically significant heterogeneity) and the I2statistic (values of 25%, 50% and 75% are considered to represent low, medium and high heterogeneity respectively). Freeman-Tukey arcsin transform to stabilize variances, and after the meta-analysis, investigators can transform the summary estimate and the CI boundaries back to proportions using sin function, the specific conversion details can be seen in reference [10]. Stratified analyses were performed by study locations, gender, age, study period, blood donor recruitment methods, type of blood donation, and blood type. The Z or χ2 test was used to assess the differences among the subgroups. Data manipulation and statistical analyses were undertaken using the Statistical Software Package (STATA) 10.0 (STATA Corporation, College Station, TX, USA, 2009), and ArcGIS 9.3 (ESRI, Redlands, California, USA) was used for map construction. Literature search: Literatures on the HCV prevalence among blood donors in China were acquired through searching PubMed, Embase, China National Knowledge Infrastructure (CNKI), and Wanfang Database from 1990 to 2010. In order to search and include related studies as many as possible, we used combinations of various key words, including hepatitis C virus or HCV, blood donors, and China or Chinese Mainland. Selection and data abstraction: All the potentially relevant papers were reviewed independently by two investigators through assessing the eligibility of each article and abstracting data with standardized data-abstraction forms. Disagreements were resolved through discussion. The following information, though some studies did not contain all of them, were extracted from the literatures: first author's name, publication date, study period, province of sample, blood donor recruitment methods (paid blood donors, employer-organized donors, or true volunteer donors), type of blood donation (categorized as plasma donors and whole blood donors), sampling size, the number of subjects infected with HCV, HBV, and HIV or co-infected with two or three of these viruses, gender, age (18-30 years and 31-60 years), and blood type, etc. The inclusion criteria were: (1) studies in the mentioned four databases with full text, despite the language of original text; (2) studies reporting anti-HCV positive rates among blood donors in Chinese Mainland; (3) studies using anti-HCV as a detection index of HCV. The exclusion criteria were: (1) studies without specific sample origins; (2) studies with overlapping time intervals of sample collection from the same origin; (3) studies with a sample size less than 50; (4) studies that failed to present data clearly enough or with obviously paradoxical data. 
Statistical analysis: In our review, random effect models were used for meta-analysis, considering the possibility of significant heterogeneity between studies which was tested with the Q test (P < 0.10 was considered indicative of statistically significant heterogeneity) and the I2statistic (values of 25%, 50% and 75% are considered to represent low, medium and high heterogeneity respectively). Freeman-Tukey arcsin transform to stabilize variances, and after the meta-analysis, investigators can transform the summary estimate and the CI boundaries back to proportions using sin function, the specific conversion details can be seen in reference [10]. Stratified analyses were performed by study locations, gender, age, study period, blood donor recruitment methods, type of blood donation, and blood type. The Z or χ2 test was used to assess the differences among the subgroups. Data manipulation and statistical analyses were undertaken using the Statistical Software Package (STATA) 10.0 (STATA Corporation, College Station, TX, USA, 2009), and ArcGIS 9.3 (ESRI, Redlands, California, USA) was used for map construction. Results: According to the literature search strategies, 726 studies (90 studies in PubMed, 636 studies in CNKI and Wanfang database) were identified, but 461 studies were excluded based on the inclusion and exclusion criteria (Figure 1). There were 11 studies in English [8,9,11-19] and 254 studies in Chinese [20-273] of the finally adopted 265 studies. Results of the systematic literature search. General information of samples A total of 4519313 blood donors between the ages of 18 to 60 were included, with a wide range of blood donation frequency from 1 to more than 50 times. Some donors had a duration (the period from the first blood donation to when selected in original research) longer than 15 years. The majority of blood donors were men, approximately 57.72% (101319/175540), while women accounted for 42.28% (74221/175540). The occupation of blood donors was widely distributed. Voluntary blood donors mainly came from college students, health care providers, officials, and military from Chinese People's Liberation Army (PLA), while paid blood donors were mostly from peasants, low-wage workers, and unemployed individuals. The blood samples mainly came from blood banks, hospitals, and Centers for Disease Control and Prevention (CDC). The studies of our review involved in the following regions of 29 provinces and cities: Central China (Hunan, Hubei, Henan), North China (Beijing, Hebei, Shanxi, Tianjin, Inner Mongolia), South China(Guangdong, Guangxi), Northwest China(Shanxi, Gansu, Ningxia, Qinghai, Xinjiang), Northeast China (Liaoning, Jilin, Heilongjiang), Southwest China (Yunnan, Guizhou, Sichuan, Chongqing), East China (Shandong, Jiangsu, Anhui, Zhejiang, Fujian, Shanghai, Jiangxi). A total of 4519313 blood donors between the ages of 18 to 60 were included, with a wide range of blood donation frequency from 1 to more than 50 times. Some donors had a duration (the period from the first blood donation to when selected in original research) longer than 15 years. The majority of blood donors were men, approximately 57.72% (101319/175540), while women accounted for 42.28% (74221/175540). The occupation of blood donors was widely distributed. Voluntary blood donors mainly came from college students, health care providers, officials, and military from Chinese People's Liberation Army (PLA), while paid blood donors were mostly from peasants, low-wage workers, and unemployed individuals. 
Prevalence of HCV infection among blood donors in Chinese mainland Region As seen in Table 1 and Figures 2 and 3, the pooled prevalence of HCV infection among blood donors in Chinese mainland from 1990 to 2010 was 8.68% (95% CI: 8.01%-9.39%). Dramatic geographic differences in pooled HCV infection rates among blood donors were observed. The epidemic was most severe in North and Central China, where the HCV infection rates were 13.45% (95% CI: 11.41%-15.67%) and 14.74% (95% CI: 11.06%-18.80%), respectively. The lowest prevalence was in South China, at 2.88% (95% CI: 2.19%-3.64%). Before 1998, the pooled prevalence of HCV infection among blood donors was 12.87% (95% CI: 11.25%-14.56%), with the highest rates found in Henan (35.04%, 95% CI: 23.62%-47.41%) and Hebei (29.26%, 95% CI: 19.63%-39.98%); after 1998 the pooled prevalence decreased to 1.71% (95% CI: 1.43%-1.99%). Prevalence of HCV infection among blood donors in different regions. a. IM: Inner Mongolia. b. When calculating the pooled prevalence rate before or after 1998, studies that spanned 1998 were excluded; as a result, the total number of studies is larger than the sum of the numbers of studies before and after 1998. The regional distribution of pooled prevalence of HCV infection among blood donors in China before 1998. The regional distribution of pooled prevalence of HCV infection among blood donors in China after 1998. Gender A total of 79 studies investigated the association between the prevalence of HCV infection and gender among blood donors.
The HCV infection rate of male blood donors was 9.87% (95% CI: 8.26%-11.63%), while female blood donors had a rate of 9.78% (95% CI: 7.88%-11.89%). There was no statistically significant difference between males and females (Z = 0.62, P = 0.54). Age Thirty studies examined the association between the prevalence of HCV infection and age among blood donors. In most studies, blood donors were divided into two groups at the age of 30 years. The HCV infection rate of individuals aged 18-30 was 4.91% (95% CI: 3.89%-6.05%), while that of individuals aged 31-60 was 8.99% (95% CI: 6.86%-11.37%); the difference between the two groups was statistically significant (Z = 66.02, P < 0.01). Time As presented in Table 2, 179 studies were divided into 9 groups according to study period, and Figure 4 shows the prevalence of HCV in each group. During 1994-1995, the prevalence rate reached its highest level, 15.78% (95% CI: 12.21%-19.75%). After 1995, the rates showed a decreasing trend among blood donors, falling as low as 0.36% (95% CI: 0.09%-0.81%) during 2006-2010. Prevalence of HCV infection among blood donors in different study periods. REM: random effect model. Prevalence of HCV infection among blood donors in different study periods. Blood type As displayed in Table 3, among blood type A, B, AB, and O donors, the HCV infection rates were 8.18% (95% CI: 4.55%-12.77%), 7.58% (95% CI: 4.26%-11.73%), 8.15% (95% CI: 4.64%-12.54%), and 7.85% (95% CI: 4.64%-11.82%), respectively. There was no statistically significant difference among the four blood types (χ2 = 4.97, P = 0.17). Prevalence of HCV infection among blood donors of different blood types. REM: random effect model.
Donation type As seen in Table 4, the HCV infection rates of voluntary blood donors and paid blood donors were 0.97% (95% CI: 0.79%-1.16%) and 15.53% (95% CI: 13.28%-17.91%), respectively, a statistically significant difference (Z = 325.65, P < 0.01). The prevalence of HCV infection also differed significantly (Z = 142.22, P < 0.01) between plasma donors and whole blood donors, at 33.95% (95% CI: 29.80%-38.17%) and 7.90% (95% CI: 6.44%-9.51%), respectively. Prevalence of HCV infection among blood donors of different donation types. REM: random effect model.
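The subgroup contrasts above (gender, age, donation type) were assessed with Z or χ2 tests, as stated in the statistical analysis. The exact form of the test is not spelled out in this section, so the following is only a plausible sketch of how two independent pooled prevalence estimates could be compared from their reported confidence intervals; the numbers are illustrative, and this simple form would not reproduce the Z values reported above, which were presumably computed from the underlying study data.

```python
from math import erf, sqrt

def z_from_pooled(p1, ci1, p2, ci2):
    """Approximate Z test for two independent pooled proportions,
    with standard errors back-calculated from their 95% CIs."""
    se1 = (ci1[1] - ci1[0]) / (2 * 1.96)
    se2 = (ci2[1] - ci2[0]) / (2 * 1.96)
    z = (p1 - p2) / sqrt(se1 ** 2 + se2 ** 2)
    p_two_sided = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p_two_sided

# illustrative comparison: voluntary (0.97%) vs paid (15.53%) donors
z, p = z_from_pooled(0.0097, (0.0079, 0.0116), 0.1553, (0.1328, 0.1791))
print(round(z, 2), p)
```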
Discussion: As a blood-borne pathogen, HCV was frequently detected among paid blood donors in China in the early 1990s [9]. To improve the safety of the blood supply and reduce the risk of transfusion-transmitted diseases, the Chinese government has outlawed the use of paid blood donors since 1998. As a result, Chinese blood banks now rely on various other methods to recruit blood donors, mostly employer-organized donors and true voluntary donors [274]. This transition in blood donor recruitment methods has been associated with a gradual decrease in the prevalence of anti-HCV among donors. In addition, an HCV RNA screening strategy was implemented in 2010 in all Chinese blood banks, which has also contributed to the decline in the HCV prevalence rate. This review shows that the pooled prevalence of HCV infection among blood donors was 8.68% (95% CI: 8.01% to 9.39%) from 1990 to 2010, significantly higher than the estimated 3.2% in the general population of China [1]. It is noteworthy that before 1998 the pooled prevalence of HCV infection among blood donors was 12.87%, but it dramatically decreased to 1.71% after 1998. In economically developed regions, the decreasing trend was more prominent owing to effective control measures. Our results showed significant geographic differences in the prevalence of anti-HCV. Compared with other regions, North and Central China had relatively higher anti-HCV positive rates among blood donors, at 13.45% and 14.74%, respectively. Meanwhile, the lowest rate, 2.88%, was found in South China. Notably, between 1990 and 1998, prevalence rates were generally high across the different regions of China; in Henan and Hebei in particular, the rates reached 35.04% and 29.26%, respectively. After 1998, the overall epidemic rates decreased rapidly, benefiting from the government's prohibition of paid blood donation. However, the reduction in HCV prevalence was less pronounced in North and Central China. Possible reasons for this include larger population migration, poor economic conditions, higher HCV infection rates in the general population, and limited sampling. According to the national epidemiological survey of viral hepatitis from 1992 to 1995, the HCV infection rate in the general population increased gradually with age, but showed no significant difference between males and females [275], which is consistent with our finding of a significant difference between the two age groups but not between genders. These findings indicate that males and females have similar susceptibility to HCV infection, while older donors have a greater chance of being HCV-infected. Since paid donors were usually driven by economic benefits, the association of age with HCV infection may also be explained by increased exposure through a greater number of blood donations or a longer donation history. It is too early for a conclusive explanation until systematic investigations are performed and the underlying causal relation is revealed [35,61,65,153,179].
By and large, the prevalence of HCV infection showed a rising trend among blood donors from 1990 to 1995, but decreased significantly from 1996 to 2010 (Figure 3). Our results revealed an outbreak of HCV infection in blood donors around 1995. Many plasma collection stations had been established in different regions before 1995. However, the majority of them were illegal, and severe cross-contamination during plasmapheresis frequently occurred because of common nonstandard operations, such as neglected sterilization, the lack of accurate anti-HCV detection methods, and improper use of non-disposable needles. In some places, the prevalence of HCV was reported to be as high as 80% [275]. Given the urgency of the situation, in 1995 the government implemented strict management of blood stations. In addition, anti-HCV detection technology improved greatly in sensitivity and specificity in the following years [276]. With the implementation of the Blood Donation Law of China in 1998, real voluntary blood donors replaced paid donors and became a steady and major source of the blood supply. All these measures led to great achievements in the control and prevention of HCV infection. Nowadays, reports of HCV infection among true voluntary blood donors are rarely seen in China. Our study showed that among blood type A, B, AB, and O donors, the prevalence of HCV infection was 8.18%, 7.58%, 8.15%, and 7.85%, respectively. No significant difference in the HCV epidemic rate was found among blood types, indicating that blood type is not associated with susceptibility to HCV infection. This finding is consistent with the studies of Lu KQ [52], Zhou ZD [119], and Pu SF [269]. In contrast, in the study of Ye C [239], type O blood donors were reported to have a higher infection rate and type AB a relatively lower one. An uneven distribution across blood types was also seen in Rui ZL's study, in which type A blood donors were reported to have a higher rate [131]. However, the mechanism linking blood type and HCV infection remains undefined; it may be related to red cell immune adherence function in persons with different blood types, but this needs further study [131]. Numerous studies have shown that paid blood donors are more likely to be infected with HCV than either employer-organized donors or true voluntary donors. Our results confirmed that the HCV infection rate in paid blood donors was significantly higher than in voluntary blood donors (15.53% vs 0.97%). Paid donors attracted by high compensation who chose to donate blood at illegal blood stations also faced a greater risk of cross-contamination. The prevalence among plasma donors was significantly higher than among whole blood donors (33.95% vs 7.90%), possibly owing to cross-contamination of blood collection equipment by HCV-positive plasma donors [77]. The elimination of paid plasma and whole blood donation could contribute to a reduction in HCV infection among blood donors. Several limitations of our study need to be addressed. First of all, the studies were observational and blood donors were not randomly chosen; selection bias and confounding therefore seem inevitable. Secondly, many of our data were extracted from studies written in Chinese, which makes it difficult for non-Chinese reviewers, editors, and readers to trace back to the original materials.
Thirdly, our ability to assess study quality was limited by the fact that many studies failed to offer detailed information on the selected subjects or valid data on important factors. Besides, as with all meta-analyses, this study has the potential limitation of publication bias, since studies with negative results are sometimes less likely to be published. However, we have confidence in our results because the included studies came mostly from multiple sources and had large sample sizes, which should reduce publication bias to some extent. Conclusions: This meta-analysis provides comprehensive and reliable data on the prevalence and trend of HCV infection among blood donors. The pooled epidemic rate of HCV infection decreased rapidly after 1998, though some provinces still showed relatively high prevalence. The achievements and lessons of previous work indicate that long-term, comprehensive and effective intervention and prevention measures are urgently needed. In particular, implementing and enforcing the "Blood Donation Law" and promoting HCV screening, diagnosis, and treatment among blood donors are very important measures to control the transmission of HCV infection. In addition, the keys to reducing the incidence of HCV infection among blood donors are to encourage true voluntary blood donation, pay more attention to excluding high-risk persons from the volunteer pool, and completely eliminate cross-infection during single plasma collection (plasmapheresis).
Background: Blood transfusion is one of the most common transmission pathways of hepatitis C virus (HCV). This paper aims to provide a comprehensive and reliable tabulation of available data on the epidemiological characteristics and risk factors of HCV infection among blood donors in Chinese mainland, so as to help formulate prevention strategies and guide further research. Methods: A systematic review was conducted based on computerized literature databases. Infection rates and 95% confidence intervals (95% CI) were calculated using the approximate normal distribution model. Odds ratios and 95% CI were calculated by fixed or random effects models. Data manipulation and statistical analyses were performed using STATA 10.0, and ArcGIS 9.3 was used for map construction. Results: Two hundred and sixty-five studies met our inclusion criteria. The pooled prevalence of HCV infection among blood donors in Chinese mainland was 8.68% (95% CI: 8.01%-9.39%), and the epidemic was more severe in North and Central China, especially in Henan and Hebei, while a significantly lower rate was found in Yunnan. Notably, before 1998 the pooled prevalence of HCV infection among blood donors was 12.87% (95% CI: 11.25%-14.56%), but it decreased to 1.71% (95% CI: 1.43%-1.99%) after 1998. No significant difference in HCV infection rates was found between male and female blood donors, or among donors of different blood types. The prevalence of HCV infection was found to increase with age. During 1994-1995, the prevalence rate reached its highest level, 15.78% (95% CI: 12.21%-19.75%), and showed a decreasing trend in the following years. A significant difference was found among groups with different blood donation types; plasma donors had a higher prevalence of HCV infection than whole blood donors (33.95% vs 7.9%). Conclusions: The prevalence of HCV infection has decreased rapidly since 1998 and has remained at a low level in recent years, but some provinces still show a higher prevalence than the general population. It is urgent to take effective measures to prevent secondary HCV transmission and control chronic progression, and the keys to reducing HCV incidence among blood donors are to encourage true voluntary blood donation, strictly implement the blood donation law, and avoid cross-infection.
Background: Chronic infection with hepatitis C virus (HCV) is a major and growing public health problem, which can easily lead to chronic liver disease, cirrhosis and even hepatocellular carcinoma [1]. The prevention and control of HCV infection are complex and challenging, requiring description of the geographic distribution of HCV infection, determination of its associated risk factors, and evaluation of cofactors that accelerate hepatitis C progression. An estimated 170 million persons are infected with HCV worldwide, and more than 3.5 million new infections occur annually [2]. According to the national epidemiological survey of viral hepatitis from 1992 to 1995, the average anti-HCV positive rate was 3.2% in the general Chinese population, amounting to more than 30 million infected individuals [3]. The rapid global spread of HCV is believed to have occurred primarily because of efficient transmission through blood transfusion and parenteral exposure to contaminated equipment [4]. Blood donors, particularly those who rely on blood donation as a source of income, have a very high prevalence of HCV infection [5]. Recent studies have reported that the current residual risk of transfusion-transmitted HCV infection in China is about 1 in 40,000-60,000 donations, higher than that found in more developed countries [6]. With the implementation of the blood donation law in 1998, many blood centers relied on other methods to motivate donors, mostly through employer-organized blood collection, but these donors may not have been true volunteers, as they may have been coerced by their employers to some extent. In recent years, true voluntary donors have gradually become the main source of blood donation in many blood centers in China [7]. Among paid blood donors, the HCV prevalence has reached 5.7% or higher [8]. However, among employer-organized donors and volunteer donors, the HCV prevalence has been reported at lower levels of 1.1%-2.3% and 0.46%, respectively [7,9]. A large number of studies on HCV infection and its associated risk factors among blood donors have been conducted in the last decade. However, many of them drew incompatible or even contradictory conclusions, and the utility of these statistics is therefore limited. This paper reviews the available studies so as to provide comprehensive and reliable epidemiological characteristics of HCV infection among blood donors in China, which may help inform prevention strategies and guide further research. Conclusions: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2334/11/88/prepub
10,253
425
[ 443, 71, 269, 206, 3897, 258, 1642, 285, 76, 100, 122, 104, 121, 1319, 150 ]
16
[ "blood", "donors", "hcv", "blood donors", "95", "ci", "infection", "95 ci", "hcv infection", "prevalence" ]
[ "epidemiological characteristics hcv", "hcv prevalence blood", "donors prevalence hcv", "china hcv infection", "hcv infection china" ]
null
[CONTENT] hepatitis C virus | infection | blood donors | meta-analysis [SUMMARY]
[CONTENT] hepatitis C virus | infection | blood donors | meta-analysis [SUMMARY]
null
[CONTENT] hepatitis C virus | infection | blood donors | meta-analysis [SUMMARY]
[CONTENT] hepatitis C virus | infection | blood donors | meta-analysis [SUMMARY]
[CONTENT] hepatitis C virus | infection | blood donors | meta-analysis [SUMMARY]
[CONTENT] Adolescent | Adult | Blood Donors | China | Female | Hepacivirus | Hepatitis C | Humans | Male | Middle Aged | Models, Statistical | Prevalence | Risk Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Blood Donors | China | Female | Hepacivirus | Hepatitis C | Humans | Male | Middle Aged | Models, Statistical | Prevalence | Risk Factors | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Blood Donors | China | Female | Hepacivirus | Hepatitis C | Humans | Male | Middle Aged | Models, Statistical | Prevalence | Risk Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Blood Donors | China | Female | Hepacivirus | Hepatitis C | Humans | Male | Middle Aged | Models, Statistical | Prevalence | Risk Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Blood Donors | China | Female | Hepacivirus | Hepatitis C | Humans | Male | Middle Aged | Models, Statistical | Prevalence | Risk Factors | Young Adult [SUMMARY]
[CONTENT] epidemiological characteristics hcv | hcv prevalence blood | donors prevalence hcv | china hcv infection | hcv infection china [SUMMARY]
[CONTENT] epidemiological characteristics hcv | hcv prevalence blood | donors prevalence hcv | china hcv infection | hcv infection china [SUMMARY]
null
[CONTENT] epidemiological characteristics hcv | hcv prevalence blood | donors prevalence hcv | china hcv infection | hcv infection china [SUMMARY]
[CONTENT] epidemiological characteristics hcv | hcv prevalence blood | donors prevalence hcv | china hcv infection | hcv infection china [SUMMARY]
[CONTENT] epidemiological characteristics hcv | hcv prevalence blood | donors prevalence hcv | china hcv infection | hcv infection china [SUMMARY]
[CONTENT] blood | donors | hcv | blood donors | 95 | ci | infection | 95 ci | hcv infection | prevalence [SUMMARY]
[CONTENT] blood | donors | hcv | blood donors | 95 | ci | infection | 95 ci | hcv infection | prevalence [SUMMARY]
null
[CONTENT] blood | donors | hcv | blood donors | 95 | ci | infection | 95 ci | hcv infection | prevalence [SUMMARY]
[CONTENT] blood | donors | hcv | blood donors | 95 | ci | infection | 95 ci | hcv infection | prevalence [SUMMARY]
[CONTENT] blood | donors | hcv | blood donors | 95 | ci | infection | 95 ci | hcv infection | prevalence [SUMMARY]
[CONTENT] hcv | blood | donors | infection | million | hcv infection | risk | employer | hepatitis | risk factors [SUMMARY]
[CONTENT] studies | data | blood | sample | donors | 10 | heterogeneity | type | hcv | analysis [SUMMARY]
null
[CONTENT] infection | hcv | comprehensive | hcv infection | blood | high | blood donation | donors | blood donors | important measures control transmission [SUMMARY]
[CONTENT] blood | donors | hcv | 95 | blood donors | 95 ci | infection | hcv infection | ci | prevalence [SUMMARY]
[CONTENT] blood | donors | hcv | 95 | blood donors | 95 ci | infection | hcv infection | ci | prevalence [SUMMARY]
[CONTENT] ||| Chinese [SUMMARY]
[CONTENT] ||| 95% | 95% | CI ||| 95% | CI ||| 10.0 | 9.3 [SUMMARY]
null
[CONTENT] 1998 | recent years ||| HCV [SUMMARY]
[CONTENT] ||| Chinese ||| ||| 95% | 95% | CI ||| 95% | CI ||| 10.0 | 9.3 ||| Two hundred and sixty-five ||| Chinese | 8.68% | 95% | CI | 8.01%-9.39% | North and Central China | Henan | Hebei ||| Yunnan ||| 1998 | 12.87% | 11.25%-14.56% | 1.71% | 1.43%-1.99% | 1998 ||| ||| ||| 1994-1995 | 15.78% | 12.21%-19.75% | the following years ||| Plasma | 33.95% | 7.9% ||| 1998 | recent years ||| HCV [SUMMARY]
[CONTENT] ||| Chinese ||| ||| 95% | 95% | CI ||| 95% | CI ||| 10.0 | 9.3 ||| Two hundred and sixty-five ||| Chinese | 8.68% | 95% | CI | 8.01%-9.39% | North and Central China | Henan | Hebei ||| Yunnan ||| 1998 | 12.87% | 11.25%-14.56% | 1.71% | 1.43%-1.99% | 1998 ||| ||| ||| 1994-1995 | 15.78% | 12.21%-19.75% | the following years ||| Plasma | 33.95% | 7.9% ||| 1998 | recent years ||| HCV [SUMMARY]
A smartphone-assisted pressure-measuring-based diagnosis system for acute myocardial infarction diagnosis.
31040668
Acute myocardial infarction (AMI), usually caused by atherosclerosis of the coronary arteries, is the most severe manifestation of coronary artery disease and results in a large number of deaths annually. A new diagnostic approach with high accuracy, high reliability and a short measurement time is essential for quick AMI diagnosis.
BACKGROUND
50 plasma samples from acute myocardial infarction patients were analyzed by the developed Smartphone-Assisted Pressure-Measuring-Based Diagnosis System (SPDS). The concentration of the substrate was first optimized. The effects of antibody labeling and of the matrix solution on the measuring results were then evaluated, and standard curves for cTnI, CK-MB and Myo were built for clinical sample analysis. The measuring results of the 50 clinical samples were finally evaluated by comparison with the results obtained by CLIA.
PATIENTS AND METHODS
The concentration of the substrate H2O2 was first optimized to 30% to increase the measuring signal. A commercial serum matrix was chosen as the matrix solution for diluting biomarkers during standard curve building, to minimize the matrix effect on the accuracy of clinical plasma sample measurement. The standard curves for cTnI, CK-MB and Myo were built, with measuring dynamic ranges of 0-25 ng/mL, 0-33 ng/mL and 0-250 ng/mL, and limits of detection of 0.014 ng/mL, 0.16 ng/mL and 0.85 ng/mL, respectively. The measuring results obtained by the developed system for the three biomarkers in 50 clinical plasma samples matched well with the results obtained by chemiluminescent immunoassay.
RESULTS
Due to its small device size, high sensitivity and accuracy, SPDS showed a bright potential for point-of-care testing (POCT) applications.
CONCLUSION
[ "Antibodies", "Biomarkers", "Blood Pressure", "Catalysis", "Female", "Humans", "Hydrogen Peroxide", "Male", "Middle Aged", "Myocardial Infarction", "Nanoparticles", "Platinum", "Reference Standards", "Reproducibility of Results", "Sensitivity and Specificity", "Smartphone", "Static Electricity" ]
6459154
Introduction
Annually, more than 2.4 million deaths in the US, 4 million deaths in Europe and northern Asia, and more than a third of deaths in developed nations are caused by coronary artery disease (CAD).1–4 Acute myocardial infarction (AMI), usually caused by atherosclerosis of the coronary arteries, is the most severe manifestation of CAD, resulting in high mortality.5 Treatment of AMI is time-critical.6 Early medical and surgical intervention has been widely demonstrated to significantly reduce myocardial damage and mortality.7 To provide accurate medical treatment for AMI, a quick diagnosis during the door-to-balloon time is crucially required. According to current consensus, AMI is mainly defined by physical diagnostic approaches such as electrocardiogram (ECG),8–10 changes in the motion of the heart wall on imaging, and some well-evaluated cardiac biomarkers,11,12 like cardiac troponin I/T (cTnI/cTnT),13–17 MB isoenzyme of creatine kinase (CK-MB),18,19 and myoglobin (Myo).20 ECG, which has long been used for AMI diagnosis, involves the placement of a series of leads on a person’s chest that measure electrical activity associated with the contraction of heart muscle. By measuring ST-T variation or Q-waves, ECG can accurately diagnose AMI. Imaging methods, like chest X-ray, single-photon emission computed tomography/computed tomography scans, and positron emission tomography scans, can also be used for AMI diagnosis.21 In addition to ECG and imaging approaches, some well-evaluated biomarkers are now widely used for AMI diagnosis. These biomarkers include highly specific proteins like cTnI/cTnT, as well as some less-specific biomarkers like CK-MB and Myo. Because of differences in their diagnostic window periods, and to improve the accuracy of AMI diagnosis, these biomarkers are often measured simultaneously. Various detection approaches for these AMI biomarkers, such as chemiluminescence immunoassay (CLIA), ELISA, and lateral immunochromatographic assay (LICA), are currently available in most hospitals.22 Among these approaches, CLIA, assisted by fully automated devices, has shown the highest user-friendliness, sensitivity, reliability, and diagnostic efficiency for quantitative measurement. However, limited by the device size and a rigorous demand for running-environment control to ensure a stable working condition, such highly precise equipment can hardly work outside a well-equipped laboratory. This results in a lack of effective and reasonable diagnosis of AMI in some less developed areas, which cannot support such precise and expensive instruments, as well as in some emergency situations in the wild. As a supplement to CLIA, LICA is widely used under highly unfavorable environments due to its simple operational process. However, most LICA products can only support qualitative rather than quantitative applications, because of significant variation caused by the uncontrollable reaction process that occurs on the nitrocellulose membrane. Therefore, a quantitative detection approach with good portability, high sensitivity, and accuracy under unfavorable environments is highly essential. In recent years, Pressure-based Bioassay (PASS) for biomarker detection has been reported.23–26 Different from traditional detection methods that are based on light, color, electrical activity, magnetic force, heat, or distance, the developed assay transforms the molecular signal into a pressure signal by enzyme- or catalyst (nanoparticle)-linked immunosorbent assay.
Furthermore, a similar system was further applied for analysis at the single-cell level,27,28 drug detection, and analysis of disease biomarkers.29 PASS has shown high sensitivity, high reliability, and good portability in the reported works, which can be attributed to a pressure sensor that is highly sensitive to the pressure variations caused by the immunoassay. However, the developed measuring device is not user-friendly enough and can measure only one sample at a time. Measuring different samples with only a short delay between them may cause unpredictable errors. Besides, owing to the lack of strict control over the manipulation procedure, the detection result may be affected by the operator’s experience. Therefore, PASS is still far from applications that have rigorous requirements for reliability and controllability, such as clinical diagnosis, although the technique still has bright potential. To solve the abovementioned problems, herein we developed the Smartphone-Assisted Pressure-Measuring-Based Diagnosis System (SPDS) for portable and highly sensitive diagnostic applications. SPDS is composed of a pressure measuring device and a smartphone. The size of the pressure measuring device is 115×67×50 mm³, small enough to be pocket-portable. The smartphone is connected to the pressure measuring device via Bluetooth, which helps to analyze the data and give an accurate detection and diagnosis result, with the capability of storing more than 10⁵ detection results. To estimate the sensitivity and reliability of the instrument, SPDS was used to measure clinical plasma samples for AMI diagnosis by cTnI/CK-MB/Myo combined examination. The labeling process was first evaluated, and the matrix effect of biomarkers in different matrix solutions was compared. To minimize the matrix effect and preserve detection accuracy, a commercially available matrix serum was used to replace low-biomarker-concentration plasma for building the standard curve for each biomarker. Finally, 50 clinical plasma samples were tested and compared with the measuring results obtained by CLIA (Abbott Architect i2000SR). The results showed that the concentration values of the three biomarkers in the 50 samples measured by SPDS matched well with those measured by CLIA. SPDS showed sensitivity and reliability comparable to CLIA, but with a much smaller device size and a smartphone-assisted data analysis system, allowing applications under highly unfavorable circumstances, including hospitals in less developed areas and emergency situations in the wild. Moreover, to help provide early medical treatment and surgical intervention for AMI patients, SPDS shows a bright prospect for AMI diagnosis during ambulance transport, much earlier than the traditional door-to-balloon time, which may significantly decrease the damage caused by AMI.
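The introduction states that the smartphone analyzes the pressure data and returns a concentration from the built standard curves. As a rough illustration only, the sketch below converts a pressure reading to a concentration by inverting a four-parameter-logistic calibration; the calibrator values and the choice of a 4PL model are assumptions made for the sketch, not values or procedures taken from the paper, whose own curve-fitting method is described in its methods section.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, a, b, ec50, d):
    """4-parameter logistic: signal as a function of concentration."""
    return d + (a - d) / (1.0 + (c / ec50) ** b)

# hypothetical calibrator points: concentration (ng/mL) vs. pressure change (kPa)
conc   = np.array([0.0, 0.1, 0.5, 1.0, 5.0, 10.0, 25.0])
signal = np.array([0.02, 0.05, 0.18, 0.33, 1.10, 1.80, 2.60])

# fit; clip zero concentration to a tiny value to keep the power term defined
popt, _ = curve_fit(four_pl, np.clip(conc, 1e-3, None), signal,
                    p0=[0.02, 1.0, 5.0, 3.0], maxfev=10000)

def conc_from_signal(y, a, b, ec50, d):
    """Invert the fitted 4PL curve to read a concentration from a measured signal."""
    return ec50 * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# estimated concentration for a hypothetical 0.9 kPa pressure change
print(conc_from_signal(0.9, *popt))
```

Any real implementation would use the instrument's actual calibrators and whichever fitting model was validated for each biomarker.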
null
null
null
null
Conclusion
In summary, to satisfy the demand for a portable system with high detection sensitivity and accuracy for diagnosis under unfavorable environments and detection in field situations, we developed SPDS to solve the problems that currently available approaches and devices face. The catalytic efficiency was first optimized, and the matrix effect was minimized by using a commercially available serum matrix, instead of buffer, to dilute biomarker molecules when building the standard curves. The detection performance and system reliability of SPDS were finally verified by comparing the measuring results for the three biomarkers in 50 clinical plasma samples obtained by SPDS and CLIA. Furthermore, despite its small device size, and without losing sensitivity or accuracy, SPDS showed good potential for POCT applications and emergency diagnosis, bacteria detection in the field, and poison detection at the military front. Compared with the results reported for PASS, SPDS showed much better user-friendliness and reliability owing to its integrated pressure measuring device, optimized pressure-variation measuring method, and smartphone support for data manipulation. SPDS can also measure eight samples simultaneously, which could significantly decrease the errors caused by the tedious and unfriendly manipulation of the PASS device, which measures samples on a one-by-one basis. SPDS has shown a bright prospect for clinical application. However, we also have to admit that SPDS is still a semi-automated system for AMI diagnosis and the immunoassay manipulation is still complicated. In future work, we will convert the device into a fully automated system for POCT. Besides, we will attempt to further optimize the PtNP modification protocol to minimize the effect of modification on nanoparticle catalytic efficiency and further improve the sensitivity of SPDS.
[ "Synthesis of Pt nanoparticles (PtNPs)", "The optimization of H2O2 concentration", "Antibody labeling on PtNPs", "Evaluation of the effect of antibody labeling on PtNP catalysis efficiency", "Coating of magnetic beads with capture antibody", "Matrix effect evaluation", "Standard curve building for cTnI immunoassay", "Standard curve building for CK-MB immunoassay", "Standard curve building for Myo immunoassay", "Determining the limit of detection (LOD) of a biomarker", "Specificity evaluation of SPDS for three biomarkers", "Detection of clinical plasma samples", "Manipulation process of SPDS", "Principle of SPDS", "Optimization of H2O2 concentration", "Effect of antibody labeling on PtNPs catalysis efficiency", "Evaluation of matrix effect", "Standard curves of three AMI biomarkers", "Specificity evaluation of SPDS for three biomarkers", "Clinical sample detection", "Conclusion" ]
[ "To prepare PtNPs, 1 mL of 0.1 M H2PtCl6 solution and 96.9 mL of ultrapure water were added into a round-bottom flask. The solution was heated to 80°C with magnetic stirring at the rate of 700 rpm. About 2.1 mL of 1.4 M ascorbic acid was then added. The mixture was kept at 80°C for 30 minutes. The obtained PtNPs solution was stored in a conical flask at room temperature.", "To optimize the concentration of H2O2, 1 μL of synthesized PtNP solution was added into 50 μL of 10 mM hosphate buffered saline with 1% Twen-20 (PBST, pH 7.0) in an eight-well strip, followed by mixing with pipette tip blowing. About 50 μL of H2O2 at different concentrations (30%, 15%, 7.5%, and 3.75%) was then added and mixed. The strip was placed into the pressure reading device for the real-time monitoring of pressure variation. The results were compared to choose the best H2O2 concentration.", "Five hundred microliters of PtNP solution was centrifuged at 13,000 rpm for 10 minutes to remove the supernatant and resuspended with 500 μL of 1 mM morpholinoethanesulfonic acid (MES) buffer (pH 7.0). For cTnI assay, 100 μL of labeling antibody solution (120 μg/mL diluted in ultrapure water) was added into the resuspended PtNPs solution, followed by a short mixing for 40 seconds to ensure uniform absorption of antibody molecules on the PtNPs surface, followed by 10-minute incubation at room temperature to increase the absorption efficiency. The mixture was then added to 500 μL of blocking buffer (0.25% casein in 100 mM Na2CO3-NaHCO3 buffer with 0.25% Tween-20, 5% sucrose, pH 9.0) and kept for 2 hours at room temperature, followed by centrifugation at 5,000 rpm for 5 minutes to remove the supernatant. The recovered nanoparticles were finally resuspended in 10 mM citrate buffer (pH 7.0) and stored at 4°C. For CK-MB and Myo assay, 110 μL of labeling antibody solution (with 50 μg/mL and 10 μg/mL labeling antibody for each biomarker, respectively) was used and the recovered nanoparticles were stored in 0.1 M PBS (pH 7.4). The labeling process for cTnI was similar to the abovementioned procedure.", "To evaluate the influence of antibody labeling process on PtNP catalysis efficiency, 1 μL of PtNP solution with three antibody-labeled biomarkers was added into 50 μL of 10 mM PBST (pH 7.0) in an eight-well strip, followed by a short mixing. About 50 μL of 30% H2O2 solution was then added and mixed several times by blowing with a pipette tip. Then the strip was placed into the pressure measuring device at 37°C for 5 minutes to measure the pressure value. As a comparison, 1 μL of synthesized PtNP solution was used following the same steps. The obtained pressure values were compared to evaluate the variation in catalysis efficiency. Each point was measured three times to obtain an average value.\nTo demonstrate that the antibody molecules have bound onto the PtNP surface, PtNP solution before and after labeling was diluted with ultrapure water by 50 times, respectively, and zeta-average diameter and zeta-potential were measured using a dynamic light scattering device (Zetasizer Nano ZS90, Malvern, Worcestershire, United Kingdom). The obtained zeta-average diameter and potential for PtNPs before and after labeling were compared.", "Dynabeads M280 Tosylactivated (165 μL, 30 mg/mL) was washed three times with 0.1 M PBS buffer (pH 7.4). About 20 μL of capture antibodies (cTnI 7.5 mg/mL, CK-MB 5 mg/mL, and Myo 5 mg/mL), 150 μL of 0.1 M PBS (pH 7.4), and 100 μL of 3 M ammonium sulfate were added and incubated for 16 hours at 37°C. 
After removing the supernatant, 1 mL of 0.01 M PBS with 0.5% BSA was added to block the leftover binding sites on the beads for 1 hour at 37°C. The magnetic beads were resuspended in 250 μL of PBS with 0.1% BSA and washed three times to achieve a final beads concentration of 20 mg/mL. The obtained beads solution was stored at 4°C for further use.", "To evaluate the matrix effect on plasma sample detection, samples with gradient concentrations of cTnI (0, 0.1, 1, 5, 10, 25, and 50 ng/mL) antigen diluted in 10 mM PBS buffer, commercial serum matrix, and a mix of negative clinical samples (concentration of cTnI <0.01 ng/mL, measured by CLIA), were measured by the developed SPDS which included the following steps:\nHundred microliters of samples with different concentrations of cTnI in different matrix solutions was mixed with 20 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) for 5 minutes to let the antigen molecules in the sample to be captured by the antibody on magnetic beads.\nThe beads were washed three times following the steps of magnetic beads gathering by a permanent magnet for 30 seconds, resuspension in 200 μL of washing buffer (10 mM PBST), and shaking for 30 seconds.\nThe washing buffer was then replaced with 50 μL of antibody-labeled PtNP (diluted 2.5 times with 10 mM PBST) solution, incubated for 5 minutes, and was followed by washing for another three times to remove free antibody-labeled PtNP.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. The eight-well strip was placed into a pressure reader to collect the detection result.\nThe obtained measurement result was finally analyzed and compared. The best matrix for standard substance dilution for standard curve building was chosen considering the balance between acceptable matrix effect and acquiring convenience.\nFor the matrix effect evaluation of CK-MB and Myo assays, the same approach was used, except that the negative clinical serum sample for CK-MB was <0.5 mg/mL and for Myo was <1 mg/mL.\nEach point was measured three times, and the average values and SD were calculated.", "Hundred microliters of sample with a series of cTnI concentrations (0, 0.01, 0.1, 1.0, 5.0, 10.0, 15.0, 20.0, 25.0, and 50.0 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) for 5 minutes to let the antigen molecules in the sample be captured by the antibody on magnetic beads.\nThe beads were repeatedly washed three times with the steps of magnetic beads gathering by a permanent magnet for 30 seconds, resuspension with 200 μL of washing buffer (10 mM PBST), and shaking for 30 seconds.\nThe washing buffer was replaced with 50 μL of antibody-labeled PtNP (diluted 2.5 times with 10 mM PBST) solution, allowing for 5 minutes of incubation, and then followed by washing with 200 μL of washing buffer (10 mM PBST) for another three times to remove free PtNPs.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. 
The eight-well strip was immediately placed on a pressure measuring device to collect detection result.\nEach concentration measurement was repeated for three times, and the average values and SDs were calculated.", "Hundred microliters of samples with different concentrations of CK-MB (0, 0.41, 1.23, 3.69, 11.1, 33.3, 100, and 300 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) for 5 minutes to let the antigen molecules in the sample to be captured by the antibody on magnetic beads.\nThe beads were repeatedly washed three times with steps of magnetic beads gathering by a permanent magnet for 30 seconds, followed by resuspension with 250 μL of washing buffer (10 mM PBST) and shaking for 30 seconds.\nThe washing buffer was replaced with 50 μL of antibody-labeled PtNP (diluted 2.5 times with 10 mM PBST) solution, incubated for 5 minutes, and was followed by washing for another three times to remove free antibody-labeled PtNP.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. The eight-well strip was placed on a pressure measuring device to collect detection result.\nEach concentration measurement was repeated for three times, and the average values and SDs were calculated.", "Twenty microliters of samples with different concentrations of Myo (0, 15.7, 31.3, 62.5, 125, 250, 500, and 1,000 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) in the well of an eight-well strip for 5 minutes to let the antigen molecules in the sample to be captured by the antibody on magnetic beads.\nThe beads were repeatedly washed three times with steps of magnetic beads gathering by a permanent magnet for 30 seconds, followed by resuspension with 250 μL washing buffer (10 mM PBST) and shaking for 30 seconds.\nThe washing buffer was replaced with 50 μL of antibody-labeled PtNP (diluted by 2.5 times with 10 mM PBST) solution, incubated for 5 minutes, which was followed by washing for another three times to remove free antibody-labeled PtNP.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. The eight-well strip was placed on a pressure measuring device to collect detection result.\nEach concentration measurement was repeated for three times, and the average values and SDs were calculated.", "The pressure variation value of each biomarker was found to be linear to the biomarker concentration. The LOD for each biomarker was determined as follows:\nLOD=2sd0/slope,where sd0 is the SD of blank samples and slope is obtained by linear regression for pressure variation to different biomarker concentrations at low level.", "To investigate the specificity of SPDS for three biomarkers, some common proteins found in serum, including thrombin (Thr), hemoglobin (Hb), human serum albumin (HSA), immunoglobulin G (IgG), and a common inflammation bio-marker C-reactive protein (CRP), were used as the negative controls. The concentration of each negative control protein was 1 mg/mL diluted in serum matrix, and the concentrations of the three biomarkers (cTnI, CK-MB, Myo) were 10 ng/mL, 100 ng/mL, and 250 ng/mL, respectively, which acted as the positive controls. The same assay procedure and reagents for three biomarkers were used to test these negative control proteins samples to measure the pressure variation value after immune assay. 
Three runs were taken for each protein test and average value and SD were calculated.", "The concentration of three biomarkers, cTnI, CK-MB, and Myo, in 50 clinical plasma samples was measured with SPDS following the same measuring procedure as standard curve building for each biomarker respectively. The obtained results were compared with CLIA (ARCHITECT i2000SR). Each point was measured three times, and the average values and SD were calculated.", "The eight-well strip with H2O2 and PtNPs-bound magnetic beads after immunoassay was covered with a sealing rubber pad to seal the wells and put inside the adaptor to deliver into pressure measuring device. The screw valve then moved the pressure sensor module down to press onto the rubber pad, fully sealing the gap between pressure sensor module and rubber pad. Afterward, the start button was pressed to start the measurement of pressure.", "SPDS comprises a pressure measuring device (Figure 1A and C) for data collection and transporting pressure data to a smartphone (Figure 1A), and a smartphone for data analysis, storage, and output. The structure profile of developed pressure measuring device is shown in Figure 1B. The major parts of the device include a screw valve to seal the eight-well strip, eight integrated pressure sensors to measure pressure value of each well, a rubber pad for gas sealing, and an adaptor for loading of the strip into the device. The schematic of SPDS working procedure is shown in Figure 1A. To measure the biomarker concentration with SPDS, the sample is first incubated with magnetic beads, which is coated with capture antibody molecules, in a well of an eight-well strip to capture biomarker molecules onto beads surface. After a washing step to remove the sample remnant to avoid non-specific absorption, PtNPs modified with labeling antibody are added into the well to label the biomarkers, generating an antibody-antigen-antibody sandwich structure. After another washing step to remove free labeling PtNPs, H2O2 substrate is added and catalyzed by PtNPs. Then, the strip is immediately placed in a pressure measuring device and sealed with screw valve. A large amount of O2 gas is produced causing significant pressure variation in the wells, which is measured by the pressure sensor and transmitted to a smartphone via Bluetooth. The pressure variation value is finally analyzed by the smartphone and transformed into biomarker concentration value with a previously fed standard curve input pattern. Compared to traditional biomarker detection approaches such as light, heat, magnetic force, etc., SPDS transforms biomarker molecule signal into pressure signal, thus avoiding the inevitable environmental effects caused by light, heat, and magnetic field. Furthermore, by measuring the pressure variation value after and before catalyzation process, instead of only the absolute pressure value after catalyzation, SPDS eliminates the possible error caused by the atmospheric pressure changes due to altitude variation or changes in weather. By using a simple but highly precise integrated pressure sensor, SPDS significantly decreases the device size for highly sensitive monitoring and allows for highly accurate biomarker detection under robust environments. 
Furthermore, assisted by a smartphone via Bluetooth, SPDS can handle complicated data analysis by drawing on the strong computing power of the smartphone,30,31 which helps expand the application field of SPDS, for example to POCT in emergencies and bacterial detection in the field.", "As the substrate of the PtNPs, the concentration of H2O2 may significantly affect the catalytic efficiency and hence the sensitivity. To optimize the catalytic efficiency, we first optimized the H2O2 concentration. The result is shown in Figure S1; as the concentration of H2O2 decreased, the catalytic efficiency of the PtNPs also decreased. The measured pressure variation value for 30% H2O2 was almost ten times the value obtained with 3.75% H2O2. To obtain the highest detection sensitivity for the biomarkers, 30% H2O2 was chosen for the subsequent pressure measuring assays.", "As shown in Figure 2A and B, the zeta-average diameter increased from 126.2 nm before labeling (for profile, see Figure S2A) to 207.3 nm for cTnI (for profile, see Figure S2B), 215.2 nm for CK-MB (for profile, see Figure S2C), and 188.3 nm for Myo (for profile, see Figure S2D). The zeta-potential changed from −27.6 mV to −41.9 mV for cTnI, −40.0 mV for CK-MB, and −38.6 mV for Myo. The increase in zeta-average diameter suggests binding of antibody molecules and of the blocking protein (casein) onto the PtNP surface. The negative shift in zeta-potential can be attributed to the fact that casein (isoelectric point ~4.8) and antibodies (isoelectric point ~6.8) are negatively charged in pure water (pH ~7.0).\nTo evaluate the influence of the labeling process on the catalytic function of the PtNPs, the catalysis efficiency of PtNPs before and after labeling with the different biomarker antibodies was compared. The result is shown in Figure 2C. The catalysis efficiency of PtNPs labeled with cTnI, CK-MB, and Myo antibodies decreased significantly, by 86.9%, 83.6%, and 83.4% after labeling, which was speculated to be caused by occupation of catalytic sites by antibody molecules, surface charge variation of the PtNPs, and a decrease in specific surface area.", "Compared with buffer solution, plasma contains much more complex components, including a large amount of proteins, polysaccharides, ions, and even cells.32 The measurement of biomarker molecules in plasma therefore usually differs from that in buffer, owing to unpredictable non-specific adsorption and to changes in molecular conformation under different environments. To minimize the measuring error caused by this kind of matrix effect, gradient concentrations of biomarker molecules were added into two different matrixes, 10 mM PBS buffer and a commercially available serum matrix. As the control group, a mixture of ten clinically determined low-level plasma samples (cTnI <0.01 ng/mL, CK-MB <0.5 ng/mL, and Myo <1 mg/mL) was used. The measuring results of the three biomarkers in the different matrixes were compared to choose the one showing the smallest matrix effect. As shown in Figure 3A–C, cTnI added to serum matrix and to the plasma mixture showed similar concentration–pressure responses, while a much higher pressure response was observed in the buffer samples. CK-MB added to serum matrix also showed the same result as the low-level plasma samples, while the response of the sample with CK-MB added to buffer was lower than that of the sample added to matrix. 
However, no difference was found among the three kinds of matrix for Myo detection. These results were inferred to be caused by the components, such as proteins, ions, or organic molecules, present in serum matrix and plasma. These molecules may compete with the antigen or adsorb non-specifically onto the magnetic beads and the PtNP surface, decreasing the antibody-antigen binding efficiency and thus lowering the measured value (as observed in the CK-MB assay). On the other hand, conditions such as pH and the type or strength of ions in serum matrix and plasma might increase the antibody binding efficiency, causing a higher measured result (as observed in the cTnI assay). When the biomarker molecule is stable enough to be largely unaffected by the solution environment, there is no significant difference among the results obtained from the different matrixes (as observed in the Myo assay). To minimize the measuring error caused by the matrix effect, commercial serum matrix was chosen to dilute the biomarkers for building the standard curves in further applications.", "Serum matrix solution spiked with different concentrations of cTnI, CK-MB, or Myo was evaluated with the developed assay following the steps described in the Materials and methods section. The resulting standard curves for the three biomarkers are shown in Figure 3D–F. The pressure value for cTnI showed a good linear relationship with antigen concentration in the range from 0 to 25 ng/mL (Figure S3A), with a LOD of 0.014 ng/mL, consistent with the sensitivity of currently available cTnI diagnosis approaches.22 The rate of increase of the pressure variation value declined when the cTnI concentration rose above 20 ng/mL, which was mainly caused by saturation of the antibody binding sites on the magnetic beads. The linear ranges of the pressure response for CK-MB and Myo were 0–33 ng/mL (Figure S3B) and 0–250 ng/mL (Figure S3C), with LODs of 0.16 ng/mL and 0.85 ng/mL, respectively, and a similar flattening of the response was found when the biomarker concentration rose above about 75 ng/mL for CK-MB and 400 ng/mL for Myo. All the coefficients of variation for each concentration of the three biomarkers were smaller than 10%, demonstrating the good repeatability of SPDS.\nTo increase the quantitative accuracy, a four-parameter logistic regression was used to build the standard curve formula. 
The logistic regression formula is given by formula 1 as follows:\nΔP = (A1 − A2) / (1 + (c/c0)^n) + A2\nIn formula 1, A1, A2, c0, and n are parameters obtained by logistic fitting, ΔP is the pressure variation value obtained by the pressure measuring device, and c is the biomarker concentration in the sample.\nThe values of the four parameters were fitted with Origin software, and the fitting results for cTnI, CK-MB, and Myo are given by formula 2, formula 3, and formula 4, respectively:\nΔP = −2,143.61 / (1 + (c/34.35)^1.24) + 2,155.14\nΔP = −796.08 / (1 + (c/69.98)^1.10) + 822.63\nΔP = −573.22 / (1 + (c/329.65)^1.33) + 556.09\nThe fitting results were transferred into the smartphone app for further sample detection and data calculation to translate the pressure signal into a concentration value. (A worked sketch of this fitting and of the LOD calculation is given after this list of section texts.)\nAccording to the built standard curves, SPDS showed higher sensitivity and accuracy (with CV% smaller than 10% across biomarker concentrations) than most POCT products such as LICA (LOD about 0.1 ng/mL for cTnI, CV <15%) and detection performance comparable to CLIA (LOD about 0.02 ng/mL for cTnI, CV <15%). However, with a much smaller device size and better user-friendliness, SPDS can be applied under highly unfavorable conditions where CLIA cannot.", "To investigate the specificity of SPDS for the three biomarkers, some common proteins found in serum, including Thr, Hb, HSA, IgG, and the common inflammation biomarker CRP, were used as the negative controls. The results shown in Figure 3G–I indicate that the developed assays for the three biomarkers had high specificity against these proteins commonly found in human blood. The results demonstrate that SPDS and the developed reagents are highly specific to their biomarkers and are rarely affected by the common proteins present in real blood samples, which we attribute mainly to the high specificity of the chosen antibodies and the well-developed nanoparticle coating procedure.", "To estimate the performance of SPDS, 50 clinical samples, whose concentrations of cTnI, CK-MB, and Myo had been measured by CLIA, were tested by SPDS. The results of the comparison are shown in Figure 4A, C, and E. The linear slopes of the comparison for cTnI, CK-MB, and Myo were fitted as 1.049 (R2=0.9852), 0.9545 (R2=0.9852), and 0.998 (R2=0.9908), demonstrating an excellent match between the two assays. To further analyze the correlation between the results of CLIA and SPDS, Bland–Altman analysis was performed.33 (A Bland–Altman computation sketch is given after this list of section texts.) As shown in Figure 4B, D, and F, almost all of the compared samples were within the 95% CI range, suggesting a good correlation between the developed assay and CLIA. Only 4/50, 3/50, and 1/50 samples for cTnI, CK-MB, and Myo, respectively, fell outside the 95% CI range, which might be caused by biomarker proteolysis during sample storage.\nThe clinical sample detection results demonstrate that SPDS rivals CLIA in detection performance for the three AMI biomarkers, and that it has strong potential for clinical applications that need high sensitivity and accuracy but cannot support a large automated instrument.", "In summary, to meet the demand for a portable system with high detection sensitivity and accuracy for diagnosis in unfavorable environments and detection in the field, we developed SPDS to address the limitations of currently available approaches and devices. The catalytic efficiency was first optimized, and the matrix effect was minimized by using a commercially available serum matrix, instead of buffer, to dilute the biomarker molecules used to build the standard curves. 
The detection performance and reliability of SPDS were then verified by comparing the results for the three biomarkers in 50 clinical plasma samples measured by SPDS and by CLIA. Despite its small size, SPDS lost no sensitivity or accuracy and showed good potential for POCT applications such as emergency diagnosis, bacterial detection in the field, and poison detection at the military front.\nCompared with the reported PASS results, SPDS showed much better user-friendliness and reliability owing to its integrated pressure measuring device, its optimized measurement of the pressure variation value, and smartphone support for data handling. SPDS can also measure eight samples simultaneously, which can significantly reduce the error caused by the tedious one-by-one sample handling required by PASS. SPDS therefore shows a bright prospect for clinical application.\nHowever, SPDS is still a semi-automated system for AMI diagnosis, and the immunoassay manipulation remains complicated. In future work, we will convert the device into a fully automated POCT system and attempt to further optimize the PtNP modification protocol to minimize the effect of modification on the nanoparticle catalytic efficiency and further improve the sensitivity of SPDS." ]
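The standard-curve entries in the list above describe a four-parameter logistic fit performed in Origin and an LOD defined as 2*sd0/slope. The short sketch below reproduces both calculations with common scientific-Python tools; it is illustrative only: the concentration and pressure arrays are placeholders rather than the measured calibration data, and SciPy stands in for Origin.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def four_pl(c, a1, a2, c0, n):
    """Four-parameter logistic standard curve: dP = (A1 - A2) / (1 + (c/c0)**n) + A2."""
    return (a1 - a2) / (1.0 + (c / c0) ** n) + a2

# Placeholder calibration points (concentration in ng/mL vs. mean pressure variation).
conc = np.array([0.01, 0.1, 1.0, 5.0, 10.0, 15.0, 20.0, 25.0])
delta_p = np.array([10.0, 12.0, 41.0, 207.0, 413.0, 594.0, 750.0, 883.0])

# Fit the four parameters; bounds keep c0 and n positive so the model stays defined.
(a1, a2, c0, n), _ = curve_fit(
    four_pl, conc, delta_p,
    p0=[0.0, 2000.0, 30.0, 1.2],
    bounds=([-np.inf, 0.0, 1e-3, 0.1], [np.inf, np.inf, np.inf, 10.0]),
)
print(f"A1={a1:.1f}, A2={a2:.1f}, c0={c0:.2f}, n={n:.2f}")

# LOD = 2 * sd0 / slope: sd0 from blank replicates, slope from a linear fit
# over the low-concentration part of the curve.
sd0 = np.std([4.0, 6.0, 5.5], ddof=1)            # placeholder blank replicates
low = conc <= 5.0
slope = linregress(conc[low], delta_p[low]).slope
print(f"LOD ~ {2.0 * sd0 / slope:.3f} ng/mL")
```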
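The clinical-sample entry above compares SPDS against CLIA by linear regression and Bland–Altman analysis (detailed later under "Results of Bland–Altman analysis of clinical plasma samples"). The sketch below shows the conventional Bland–Altman computation (per-sample difference, per-sample mean, and limits of agreement at Mean ± 1.96*SD) on placeholder paired values; it is not the clinical data, and NumPy replaces SPSS.

```python
import numpy as np

# Placeholder paired concentrations (ng/mL) for one biomarker; not the clinical results.
v_spds = np.array([0.05, 0.82, 2.10, 6.30, 12.40, 24.80])
v_clia = np.array([0.06, 0.71, 2.30, 6.00, 12.90, 23.90])

diff = v_spds - v_clia              # Diff_i: difference between the two methods
mean = (v_spds + v_clia) / 2.0      # mMean_i: mean of the two methods (x-axis)

bias = diff.mean()                  # mean difference (central reference line)
sd = diff.std(ddof=1)
y_up = bias + 1.96 * sd             # upper 95% limit of agreement
y_down = bias - 1.96 * sd           # lower 95% limit of agreement

outside = int(np.sum((diff > y_up) | (diff < y_down)))
print(f"bias = {bias:.3f} ng/mL, 95% limits of agreement = [{y_down:.3f}, {y_up:.3f}]")
print(f"{outside}/{len(diff)} samples outside the limits")
```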
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Chemicals and materials", "Synthesis of Pt nanoparticles (PtNPs)", "The optimization of H2O2 concentration", "Antibody labeling on PtNPs", "Evaluation of the effect of antibody labeling on PtNP catalysis efficiency", "Coating of magnetic beads with capture antibody", "Matrix effect evaluation", "Standard curve building for cTnI immunoassay", "Standard curve building for CK-MB immunoassay", "Standard curve building for Myo immunoassay", "Determining the limit of detection (LOD) of a biomarker", "Specificity evaluation of SPDS for three biomarkers", "Detection of clinical plasma samples", "Results of Bland–Altman analysis of clinical plasma samples", "Manipulation process of SPDS", "Results and discussion", "Principle of SPDS", "Optimization of H2O2 concentration", "Effect of antibody labeling on PtNPs catalysis efficiency", "Evaluation of matrix effect", "Standard curves of three AMI biomarkers", "Specificity evaluation of SPDS for three biomarkers", "Clinical sample detection", "Conclusion", "Supplementary materials" ]
[ "Annually, more than 2.4 million deaths in the US, 4 million deaths in Europe and northern Asia, and more than a third of deaths in developed nations are caused by coronary artery disease (CAD).1–4 Acute myocardial infarction (AMI), usually caused by atherosclerosis of coronary artery, is the most severe manifestation of CAD, resulting in high mortality.5 Treatment of AMI is time-critical.6 Early medical and surgical intervention has been widely demonstrated to be able to significantly reduce the myocardial damage and mortality.7 To provide an accurate medical treatment for AMI, a quick diagnosis of AMI during door-to-balloon time is crucially required. According to current consensus, AMI is majorly defined by some physical diagnostic approaches such as electrocardiogram (ECG),8–10 changes in the motion of the heart wall on imaging, and some well-evaluated cardiac biomarkers,11,12 like cardiac troponin I/T (cTnI/cTnT),13–17 MB isoenzyme of creatine kinase (CK-MB),18,19 and myoglobin (Myo).20 ECG involves the placement of a series of leads on a person’s chest that measure electrical activity associated with the contraction of heart muscle, which has long been used for AMI diagnosis. By measuring ST-T variation or Q-waves, ECG can accurately diagnose AMI. Imaging methods, like chest X-ray, single-photon emission computed tomography/computed tomography scans, and positron emission tomography scans, can also be used for AMI diagnosis.21 Instead of ECG and imaging approaches, some well-evaluated biomarkers are now widely used for AMI diagnosis. These biomarkers include highly specific proteins like cTnI/cTnT, as well as some less-specific biomarkers like CK-MB and Myo. Due to differences in their diagnostic window periods and to improve the accuracy of AMI diagnosis, these biomarkers are often measured simultaneously. Various detection approaches for these AMI biomarkers, such as chemiluminescence immunoassay (CLIA), ELISA, and lateral immunochromatographic assay (LICA), are currently available in most hospitals.22 Among these approaches, CLIA, assisted by fully automated devices, has shown the highest user-friendliness, sensitivity, reliability, and diagnosis efficiency for quantitative measurement. However, limited by the device size and a rigorous demand for running-environment control to ensure a stable working condition, the highly precise equipment can hardly work out of a well-developed laboratory. This results in a lack of effective and reasonable diagnosis of AMI in some less developed areas, which cannot support such precision and expansive instrument, as well as some emergency situations in the wild. As a supplement for CLIA, LICA is widely used under highly unfavorable environments due to its simple operational process. However, most LICA products can only support qualitative purpose instead of quantitative application, because of a significant variation caused by the uncontrollable reaction process that occurs on the nitrocellulose membrane. Therefore, a quantitative detection approach with good portability, high sensitivity, and accuracy under an unfavorable environment is highly essential.\nIn recent years, Pressure-based Bioassay (PASS) for biomarker detection has been reported.23–26 Different from traditional detection methods that are based on light, color, electrical activity, magnetic force, heat, or distance, the developed assay transforms molecular signal into pressure signal by enzyme- or catalyst (nanoparticles)-linked immu-nosorbent assay. 
Furthermore, a similar system was further applied for analysis at the single-cell level,27,28 drug detection, and analysis of disease biomarkers.29 PASS has shown high sensitivity, high reliability, and good portability in the reported works, which can be attributed to pressure sensor that is highly sensitive to pressure variations caused by the immunity assay. However, the developed measuring device is not user-friendly enough and can measure only one sample per time. The measurement between different samples with a short delay may cause unpredictable error. Besides, due to being lack of a strict control of manipulation procedure, the detection result may be affected by operator’s experience. Therefore, PASS is still far from being applied in applications that have a rigorous requirement for reliability and controllability like clinical diagnosis, although the technique still has a bright potential.\nTo solve abovementioned problems, herein, we developed Smartphone-Assisted Pressure-Measuring-Based Diagnosis System (SPDS) for portable and highly sensitive diagnostic applications. SPDS is composed of a pressure measuring device and a smartphone. The size of the pressure measuring device is 115×67×50 mm3, small enough to be pocket-portable. The smartphone is connected to the pressure measuring device via Bluetooth, which helps to analyze the data and give an accurate detection and diagnosis result, with the capability of storing more than 105 detection results. To estimate the sensitivity and reliability of the instrument, SPDS was used to measure clinical plasma samples for AMI diagnosis by cTnI/CK-MB/Myo combined examination. The labeling process was first estimated, and matrix effect of biomarkers in different matrix solutions was compared. To minimize the matrix effect for detection accuracy, a commercially available matrix serum was used to replace low-biomarker-concentration plasma for the building of standard curve for each biomarker. Finally, 50 clinical plasma samples were tested and compared with the measuring results obtained by CLIA (Abbott Architect i2000SR). The results showed that the concentration values of three biomarkers in the 50 samples measured by SPDS matched well with those measured by CLIA. The SPDS showed a comparable sensitivity and reliability with CLIA, while with much smaller device size and assisted by smart-phone for data analysis system, allowing applications under highly unfavorable circumstances, including the hospitals in less developed areas and emergency situations in the wild. Moreover, to help to give early medical treatment and surgical intervention for AMI patient, the SPDS shows bright prospect for AMI diagnosis during ambulance delivery time, much earlier than traditional door-to-balloon time, which may have potential to significantly decrease the damage caused by AMI in the patients.", " Chemicals and materials The antibodies for three biomarkers and matrix serum were supplied by Xiamen Passtech Co., Ltd. H2PtCl6, Tween-20, and casein-Na were obtained from Sigma-Aldrich Co. Magnetic beads were obtained from Thermo Fisher Scientific. Other inorganic salts were obtained from Sinopharm Chemical Reagent Co., Ltd.\nThe antibodies for three biomarkers and matrix serum were supplied by Xiamen Passtech Co., Ltd. H2PtCl6, Tween-20, and casein-Na were obtained from Sigma-Aldrich Co. Magnetic beads were obtained from Thermo Fisher Scientific. 
Other inorganic salts were obtained from Sinopharm Chemical Reagent Co., Ltd.\n Synthesis of Pt nanoparticles (PtNPs) To prepare PtNPs, 1 mL of 0.1 M H2PtCl6 solution and 96.9 mL of ultrapure water were added into a round-bottom flask. The solution was heated to 80°C with magnetic stirring at the rate of 700 rpm. About 2.1 mL of 1.4 M ascorbic acid was then added. The mixture was kept at 80°C for 30 minutes. The obtained PtNPs solution was stored in a conical flask at room temperature.\nTo prepare PtNPs, 1 mL of 0.1 M H2PtCl6 solution and 96.9 mL of ultrapure water were added into a round-bottom flask. The solution was heated to 80°C with magnetic stirring at the rate of 700 rpm. About 2.1 mL of 1.4 M ascorbic acid was then added. The mixture was kept at 80°C for 30 minutes. The obtained PtNPs solution was stored in a conical flask at room temperature.\n The optimization of H2O2 concentration To optimize the concentration of H2O2, 1 μL of synthesized PtNP solution was added into 50 μL of 10 mM hosphate buffered saline with 1% Twen-20 (PBST, pH 7.0) in an eight-well strip, followed by mixing with pipette tip blowing. About 50 μL of H2O2 at different concentrations (30%, 15%, 7.5%, and 3.75%) was then added and mixed. The strip was placed into the pressure reading device for the real-time monitoring of pressure variation. The results were compared to choose the best H2O2 concentration.\nTo optimize the concentration of H2O2, 1 μL of synthesized PtNP solution was added into 50 μL of 10 mM hosphate buffered saline with 1% Twen-20 (PBST, pH 7.0) in an eight-well strip, followed by mixing with pipette tip blowing. About 50 μL of H2O2 at different concentrations (30%, 15%, 7.5%, and 3.75%) was then added and mixed. The strip was placed into the pressure reading device for the real-time monitoring of pressure variation. The results were compared to choose the best H2O2 concentration.\n Antibody labeling on PtNPs Five hundred microliters of PtNP solution was centrifuged at 13,000 rpm for 10 minutes to remove the supernatant and resuspended with 500 μL of 1 mM morpholinoethanesulfonic acid (MES) buffer (pH 7.0). For cTnI assay, 100 μL of labeling antibody solution (120 μg/mL diluted in ultrapure water) was added into the resuspended PtNPs solution, followed by a short mixing for 40 seconds to ensure uniform absorption of antibody molecules on the PtNPs surface, followed by 10-minute incubation at room temperature to increase the absorption efficiency. The mixture was then added to 500 μL of blocking buffer (0.25% casein in 100 mM Na2CO3-NaHCO3 buffer with 0.25% Tween-20, 5% sucrose, pH 9.0) and kept for 2 hours at room temperature, followed by centrifugation at 5,000 rpm for 5 minutes to remove the supernatant. The recovered nanoparticles were finally resuspended in 10 mM citrate buffer (pH 7.0) and stored at 4°C. For CK-MB and Myo assay, 110 μL of labeling antibody solution (with 50 μg/mL and 10 μg/mL labeling antibody for each biomarker, respectively) was used and the recovered nanoparticles were stored in 0.1 M PBS (pH 7.4). The labeling process for cTnI was similar to the abovementioned procedure.\nFive hundred microliters of PtNP solution was centrifuged at 13,000 rpm for 10 minutes to remove the supernatant and resuspended with 500 μL of 1 mM morpholinoethanesulfonic acid (MES) buffer (pH 7.0). 
For cTnI assay, 100 μL of labeling antibody solution (120 μg/mL diluted in ultrapure water) was added into the resuspended PtNPs solution, followed by a short mixing for 40 seconds to ensure uniform absorption of antibody molecules on the PtNPs surface, followed by 10-minute incubation at room temperature to increase the absorption efficiency. The mixture was then added to 500 μL of blocking buffer (0.25% casein in 100 mM Na2CO3-NaHCO3 buffer with 0.25% Tween-20, 5% sucrose, pH 9.0) and kept for 2 hours at room temperature, followed by centrifugation at 5,000 rpm for 5 minutes to remove the supernatant. The recovered nanoparticles were finally resuspended in 10 mM citrate buffer (pH 7.0) and stored at 4°C. For CK-MB and Myo assay, 110 μL of labeling antibody solution (with 50 μg/mL and 10 μg/mL labeling antibody for each biomarker, respectively) was used and the recovered nanoparticles were stored in 0.1 M PBS (pH 7.4). The labeling process for cTnI was similar to the abovementioned procedure.\n Evaluation of the effect of antibody labeling on PtNP catalysis efficiency To evaluate the influence of antibody labeling process on PtNP catalysis efficiency, 1 μL of PtNP solution with three antibody-labeled biomarkers was added into 50 μL of 10 mM PBST (pH 7.0) in an eight-well strip, followed by a short mixing. About 50 μL of 30% H2O2 solution was then added and mixed several times by blowing with a pipette tip. Then the strip was placed into the pressure measuring device at 37°C for 5 minutes to measure the pressure value. As a comparison, 1 μL of synthesized PtNP solution was used following the same steps. The obtained pressure values were compared to evaluate the variation in catalysis efficiency. Each point was measured three times to obtain an average value.\nTo demonstrate that the antibody molecules have bound onto the PtNP surface, PtNP solution before and after labeling was diluted with ultrapure water by 50 times, respectively, and zeta-average diameter and zeta-potential were measured using a dynamic light scattering device (Zetasizer Nano ZS90, Malvern, Worcestershire, United Kingdom). The obtained zeta-average diameter and potential for PtNPs before and after labeling were compared.\nTo evaluate the influence of antibody labeling process on PtNP catalysis efficiency, 1 μL of PtNP solution with three antibody-labeled biomarkers was added into 50 μL of 10 mM PBST (pH 7.0) in an eight-well strip, followed by a short mixing. About 50 μL of 30% H2O2 solution was then added and mixed several times by blowing with a pipette tip. Then the strip was placed into the pressure measuring device at 37°C for 5 minutes to measure the pressure value. As a comparison, 1 μL of synthesized PtNP solution was used following the same steps. The obtained pressure values were compared to evaluate the variation in catalysis efficiency. Each point was measured three times to obtain an average value.\nTo demonstrate that the antibody molecules have bound onto the PtNP surface, PtNP solution before and after labeling was diluted with ultrapure water by 50 times, respectively, and zeta-average diameter and zeta-potential were measured using a dynamic light scattering device (Zetasizer Nano ZS90, Malvern, Worcestershire, United Kingdom). The obtained zeta-average diameter and potential for PtNPs before and after labeling were compared.\n Coating of magnetic beads with capture antibody Dynabeads M280 Tosylactivated (165 μL, 30 mg/mL) was washed three times with 0.1 M PBS buffer (pH 7.4). 
About 20 μL of capture antibodies (cTnI 7.5 mg/mL, CK-MB 5 mg/mL, and Myo 5 mg/mL), 150 μL of 0.1 M PBS (pH 7.4), and 100 μL of 3 M ammonium sulfate were added and incubated for 16 hours at 37°C. After removing the supernatant, 1 mL of 0.01 M PBS with 0.5% BSA was added to block the leftover binding sites on the beads for 1 hour at 37°C. The magnetic beads were resuspended in 250 μL of PBS with 0.1% BSA and washed three times to achieve a final beads concentration of 20 mg/mL. The obtained beads solution was stored at 4°C for further use.\nDynabeads M280 Tosylactivated (165 μL, 30 mg/mL) was washed three times with 0.1 M PBS buffer (pH 7.4). About 20 μL of capture antibodies (cTnI 7.5 mg/mL, CK-MB 5 mg/mL, and Myo 5 mg/mL), 150 μL of 0.1 M PBS (pH 7.4), and 100 μL of 3 M ammonium sulfate were added and incubated for 16 hours at 37°C. After removing the supernatant, 1 mL of 0.01 M PBS with 0.5% BSA was added to block the leftover binding sites on the beads for 1 hour at 37°C. The magnetic beads were resuspended in 250 μL of PBS with 0.1% BSA and washed three times to achieve a final beads concentration of 20 mg/mL. The obtained beads solution was stored at 4°C for further use.\n Matrix effect evaluation To evaluate the matrix effect on plasma sample detection, samples with gradient concentrations of cTnI (0, 0.1, 1, 5, 10, 25, and 50 ng/mL) antigen diluted in 10 mM PBS buffer, commercial serum matrix, and a mix of negative clinical samples (concentration of cTnI <0.01 ng/mL, measured by CLIA), were measured by the developed SPDS which included the following steps:\nHundred microliters of samples with different concentrations of cTnI in different matrix solutions was mixed with 20 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) for 5 minutes to let the antigen molecules in the sample to be captured by the antibody on magnetic beads.\nThe beads were washed three times following the steps of magnetic beads gathering by a permanent magnet for 30 seconds, resuspension in 200 μL of washing buffer (10 mM PBST), and shaking for 30 seconds.\nThe washing buffer was then replaced with 50 μL of antibody-labeled PtNP (diluted 2.5 times with 10 mM PBST) solution, incubated for 5 minutes, and was followed by washing for another three times to remove free antibody-labeled PtNP.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. The eight-well strip was placed into a pressure reader to collect the detection result.\nThe obtained measurement result was finally analyzed and compared. 
The best matrix for standard substance dilution for standard curve building was chosen considering the balance between acceptable matrix effect and acquiring convenience.\nFor the matrix effect evaluation of CK-MB and Myo assays, the same approach was used, except that the negative clinical serum sample for CK-MB was <0.5 mg/mL and for Myo was <1 mg/mL.\nEach point was measured three times, and the average values and SD were calculated.\nTo evaluate the matrix effect on plasma sample detection, samples with gradient concentrations of cTnI (0, 0.1, 1, 5, 10, 25, and 50 ng/mL) antigen diluted in 10 mM PBS buffer, commercial serum matrix, and a mix of negative clinical samples (concentration of cTnI <0.01 ng/mL, measured by CLIA), were measured by the developed SPDS which included the following steps:\nHundred microliters of samples with different concentrations of cTnI in different matrix solutions was mixed with 20 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) for 5 minutes to let the antigen molecules in the sample to be captured by the antibody on magnetic beads.\nThe beads were washed three times following the steps of magnetic beads gathering by a permanent magnet for 30 seconds, resuspension in 200 μL of washing buffer (10 mM PBST), and shaking for 30 seconds.\nThe washing buffer was then replaced with 50 μL of antibody-labeled PtNP (diluted 2.5 times with 10 mM PBST) solution, incubated for 5 minutes, and was followed by washing for another three times to remove free antibody-labeled PtNP.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. The eight-well strip was placed into a pressure reader to collect the detection result.\nThe obtained measurement result was finally analyzed and compared. The best matrix for standard substance dilution for standard curve building was chosen considering the balance between acceptable matrix effect and acquiring convenience.\nFor the matrix effect evaluation of CK-MB and Myo assays, the same approach was used, except that the negative clinical serum sample for CK-MB was <0.5 mg/mL and for Myo was <1 mg/mL.\nEach point was measured three times, and the average values and SD were calculated.\n Standard curve building for cTnI immunoassay Hundred microliters of sample with a series of cTnI concentrations (0, 0.01, 0.1, 1.0, 5.0, 10.0, 15.0, 20.0, 25.0, and 50.0 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) for 5 minutes to let the antigen molecules in the sample be captured by the antibody on magnetic beads.\nThe beads were repeatedly washed three times with the steps of magnetic beads gathering by a permanent magnet for 30 seconds, resuspension with 200 μL of washing buffer (10 mM PBST), and shaking for 30 seconds.\nThe washing buffer was replaced with 50 μL of antibody-labeled PtNP (diluted 2.5 times with 10 mM PBST) solution, allowing for 5 minutes of incubation, and then followed by washing with 200 μL of washing buffer (10 mM PBST) for another three times to remove free PtNPs.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. 
The eight-well strip was immediately placed on a pressure measuring device to collect detection result.\nEach concentration measurement was repeated for three times, and the average values and SDs were calculated.\nHundred microliters of sample with a series of cTnI concentrations (0, 0.01, 0.1, 1.0, 5.0, 10.0, 15.0, 20.0, 25.0, and 50.0 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) for 5 minutes to let the antigen molecules in the sample be captured by the antibody on magnetic beads.\nThe beads were repeatedly washed three times with the steps of magnetic beads gathering by a permanent magnet for 30 seconds, resuspension with 200 μL of washing buffer (10 mM PBST), and shaking for 30 seconds.\nThe washing buffer was replaced with 50 μL of antibody-labeled PtNP (diluted 2.5 times with 10 mM PBST) solution, allowing for 5 minutes of incubation, and then followed by washing with 200 μL of washing buffer (10 mM PBST) for another three times to remove free PtNPs.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. The eight-well strip was immediately placed on a pressure measuring device to collect detection result.\nEach concentration measurement was repeated for three times, and the average values and SDs were calculated.\n Standard curve building for CK-MB immunoassay Hundred microliters of samples with different concentrations of CK-MB (0, 0.41, 1.23, 3.69, 11.1, 33.3, 100, and 300 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) for 5 minutes to let the antigen molecules in the sample to be captured by the antibody on magnetic beads.\nThe beads were repeatedly washed three times with steps of magnetic beads gathering by a permanent magnet for 30 seconds, followed by resuspension with 250 μL of washing buffer (10 mM PBST) and shaking for 30 seconds.\nThe washing buffer was replaced with 50 μL of antibody-labeled PtNP (diluted 2.5 times with 10 mM PBST) solution, incubated for 5 minutes, and was followed by washing for another three times to remove free antibody-labeled PtNP.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. The eight-well strip was placed on a pressure measuring device to collect detection result.\nEach concentration measurement was repeated for three times, and the average values and SDs were calculated.\nHundred microliters of samples with different concentrations of CK-MB (0, 0.41, 1.23, 3.69, 11.1, 33.3, 100, and 300 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) for 5 minutes to let the antigen molecules in the sample to be captured by the antibody on magnetic beads.\nThe beads were repeatedly washed three times with steps of magnetic beads gathering by a permanent magnet for 30 seconds, followed by resuspension with 250 μL of washing buffer (10 mM PBST) and shaking for 30 seconds.\nThe washing buffer was replaced with 50 μL of antibody-labeled PtNP (diluted 2.5 times with 10 mM PBST) solution, incubated for 5 minutes, and was followed by washing for another three times to remove free antibody-labeled PtNP.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. 
The eight-well strip was placed on a pressure measuring device to collect detection result.\nEach concentration measurement was repeated for three times, and the average values and SDs were calculated.\n Standard curve building for Myo immunoassay Twenty microliters of samples with different concentrations of Myo (0, 15.7, 31.3, 62.5, 125, 250, 500, and 1,000 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) in the well of an eight-well strip for 5 minutes to let the antigen molecules in the sample to be captured by the antibody on magnetic beads.\nThe beads were repeatedly washed three times with steps of magnetic beads gathering by a permanent magnet for 30 seconds, followed by resuspension with 250 μL washing buffer (10 mM PBST) and shaking for 30 seconds.\nThe washing buffer was replaced with 50 μL of antibody-labeled PtNP (diluted by 2.5 times with 10 mM PBST) solution, incubated for 5 minutes, which was followed by washing for another three times to remove free antibody-labeled PtNP.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. The eight-well strip was placed on a pressure measuring device to collect detection result.\nEach concentration measurement was repeated for three times, and the average values and SDs were calculated.\nTwenty microliters of samples with different concentrations of Myo (0, 15.7, 31.3, 62.5, 125, 250, 500, and 1,000 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted by reagent diluent) in the well of an eight-well strip for 5 minutes to let the antigen molecules in the sample to be captured by the antibody on magnetic beads.\nThe beads were repeatedly washed three times with steps of magnetic beads gathering by a permanent magnet for 30 seconds, followed by resuspension with 250 μL washing buffer (10 mM PBST) and shaking for 30 seconds.\nThe washing buffer was replaced with 50 μL of antibody-labeled PtNP (diluted by 2.5 times with 10 mM PBST) solution, incubated for 5 minutes, which was followed by washing for another three times to remove free antibody-labeled PtNP.\nHundred microliters of H2O2 was then added into the well to allow O2 molecule generation for 5 minutes. The eight-well strip was placed on a pressure measuring device to collect detection result.\nEach concentration measurement was repeated for three times, and the average values and SDs were calculated.\n Determining the limit of detection (LOD) of a biomarker The pressure variation value of each biomarker was found to be linear to the biomarker concentration. The LOD for each biomarker was determined as follows:\nLOD=2sd0/slope,where sd0 is the SD of blank samples and slope is obtained by linear regression for pressure variation to different biomarker concentrations at low level.\nThe pressure variation value of each biomarker was found to be linear to the biomarker concentration. 
The LOD for each biomarker was determined as follows:\nLOD = 2*sd0/slope,\nwhere sd0 is the SD of the blank samples and the slope is obtained by linear regression of the pressure variation against the biomarker concentration at low concentrations.\n Specificity evaluation of SPDS for three biomarkers To investigate the specificity of SPDS for the three biomarkers, some common proteins found in serum, including thrombin (Thr), hemoglobin (Hb), human serum albumin (HSA), immunoglobulin G (IgG), and the common inflammation biomarker C-reactive protein (CRP), were used as the negative controls. The concentration of each negative control protein was 1 mg/mL diluted in serum matrix, and the concentrations of the three biomarkers (cTnI, CK-MB, Myo) were 10 ng/mL, 100 ng/mL, and 250 ng/mL, respectively, which acted as the positive controls. The same assay procedure and reagents for the three biomarkers were used to test these negative control protein samples and measure the pressure variation value after the immunoassay. Three runs were taken for each protein test, and the average value and SD were calculated.\nTo investigate the specificity of SPDS for the three biomarkers, some common proteins found in serum, including thrombin (Thr), hemoglobin (Hb), human serum albumin (HSA), immunoglobulin G (IgG), and the common inflammation biomarker C-reactive protein (CRP), were used as the negative controls. The concentration of each negative control protein was 1 mg/mL diluted in serum matrix, and the concentrations of the three biomarkers (cTnI, CK-MB, Myo) were 10 ng/mL, 100 ng/mL, and 250 ng/mL, respectively, which acted as the positive controls. The same assay procedure and reagents for the three biomarkers were used to test these negative control protein samples and measure the pressure variation value after the immunoassay. Three runs were taken for each protein test, and the average value and SD were calculated.\n Detection of clinical plasma samples The concentrations of the three biomarkers, cTnI, CK-MB, and Myo, in 50 clinical plasma samples were measured with SPDS following the same measuring procedure as used for building the standard curve of each biomarker. The obtained results were compared with CLIA (ARCHITECT i2000SR). Each point was measured three times, and the average values and SD were calculated.\nThe concentrations of the three biomarkers, cTnI, CK-MB, and Myo, in 50 clinical plasma samples were measured with SPDS following the same measuring procedure as used for building the standard curve of each biomarker. The obtained results were compared with CLIA (ARCHITECT i2000SR). Each point was measured three times, and the average values and SD were calculated.\n Results of Bland–Altman analysis of clinical plasma samples The Bland–Altman analysis of the clinical samples, comparing the results of SPDS with the CLIA method, was performed using SPSS software. Briefly, for the results obtained from the SPDS (Vspds) and CLIA (Vclia) methods for the 50 clinical samples, the variables were calculated as follows:\nDiff_i = Vspds,i − Vclia,i\nmMean_i = (Vspds,i + Vclia,i)/2, i = 1, 2, 3, …, 50\nThen, the average value (Mean) and SD of Diff_i were calculated by the software, and the reference lines of the 95% CI were obtained as follows:\nY_up = Mean + 1.96*SD\nY_down = Mean − 1.96*SD\nFinally, a scatter diagram was built using mMean_i as the x-axis and Diff_i as the y-axis. 
Mean reference line and 95% CI reference lines were added to the scatter diagram to generate the Bland–Altman diagram.\nThe Bland–Altman analysis of the clinical samples, comparing the results of SPDS with the CLIA method, was performed using SPSS software. Briefly, for the results obtained from the SPDS (Vspds) and CLIA (Vclia) methods for the 50 clinical samples, the variables were calculated as follows:\nDiff_i = Vspds,i − Vclia,i\nmMean_i = (Vspds,i + Vclia,i)/2, i = 1, 2, 3, …, 50\nThen, the average value (Mean) and SD of Diff_i were calculated by the software, and the reference lines of the 95% CI were obtained as follows:\nY_up = Mean + 1.96*SD\nY_down = Mean − 1.96*SD\nFinally, a scatter diagram was built using mMean_i as the x-axis and Diff_i as the y-axis. Mean reference line and 95% CI reference lines were added to the scatter diagram to generate the Bland–Altman diagram.\n Manipulation process of SPDS The eight-well strip with H2O2 and PtNP-bound magnetic beads after the immunoassay was covered with a sealing rubber pad to seal the wells and placed inside the adaptor for delivery into the pressure measuring device. The screw valve then moved the pressure sensor module down to press onto the rubber pad, fully sealing the gap between the pressure sensor module and the rubber pad. Afterward, the start button was pressed to start the pressure measurement.\nThe eight-well strip with H2O2 and PtNP-bound magnetic beads after the immunoassay was covered with a sealing rubber pad to seal the wells and placed inside the adaptor for delivery into the pressure measuring device. The screw valve then moved the pressure sensor module down to press onto the rubber pad, fully sealing the gap between the pressure sensor module and the rubber pad. Afterward, the start button was pressed to start the pressure measurement.", "The antibodies for three biomarkers and matrix serum were supplied by Xiamen Passtech Co., Ltd. H2PtCl6, Tween-20, and casein-Na were obtained from Sigma-Aldrich Co. Magnetic beads were obtained from Thermo Fisher Scientific. Other inorganic salts were obtained from Sinopharm Chemical Reagent Co., Ltd.", "To prepare PtNPs, 1 mL of 0.1 M H2PtCl6 solution and 96.9 mL of ultrapure water were added into a round-bottom flask. The solution was heated to 80°C with magnetic stirring at the rate of 700 rpm. About 2.1 mL of 1.4 M ascorbic acid was then added. The mixture was kept at 80°C for 30 minutes. The obtained PtNPs solution was stored in a conical flask at room temperature.", "To optimize the concentration of H2O2, 1 μL of synthesized PtNP solution was added into 50 μL of 10 mM phosphate buffered saline with 1% Tween-20 (PBST, pH 7.0) in an eight-well strip, followed by mixing with pipette tip blowing. About 50 μL of H2O2 at different concentrations (30%, 15%, 7.5%, and 3.75%) was then added and mixed. The strip was placed into the pressure reading device for the real-time monitoring of pressure variation. The results were compared to choose the best H2O2 concentration.", "Five hundred microliters of PtNP solution was centrifuged at 13,000 rpm for 10 minutes to remove the supernatant and resuspended with 500 μL of 1 mM morpholinoethanesulfonic acid (MES) buffer (pH 7.0). For cTnI assay, 100 μL of labeling antibody solution (120 μg/mL diluted in ultrapure water) was added into the resuspended PtNPs solution, followed by a short mixing for 40 seconds to ensure uniform absorption of antibody molecules on the PtNPs surface, followed by 10-minute incubation at room temperature to increase the absorption efficiency. 
Manipulation process of SPDS

After the immunoassay, the eight-well strip containing H2O2 and the PtNP-bound magnetic beads was covered with a sealing rubber pad to seal the wells and placed inside the adaptor for delivery into the pressure measuring device. The screw valve then moved the pressure sensor module down onto the rubber pad, fully sealing the gap between the pressure sensor module and the rubber pad. Afterward, the start button was pressed to start the pressure measurement.

Chemicals and materials

The antibodies for the three biomarkers and the matrix serum were supplied by Xiamen Passtech Co., Ltd. H2PtCl6, Tween-20, and casein-Na were obtained from Sigma-Aldrich Co. Magnetic beads were obtained from Thermo Fisher Scientific. Other inorganic salts were obtained from Sinopharm Chemical Reagent Co., Ltd.

Synthesis of Pt nanoparticles (PtNPs)

To prepare PtNPs, 1 mL of 0.1 M H2PtCl6 solution and 96.9 mL of ultrapure water were added into a round-bottom flask. The solution was heated to 80°C with magnetic stirring at 700 rpm. About 2.1 mL of 1.4 M ascorbic acid was then added, and the mixture was kept at 80°C for 30 minutes. The obtained PtNP solution was stored in a conical flask at room temperature.
The optimization of H2O2 concentration

To optimize the concentration of H2O2, 1 μL of synthesized PtNP solution was added into 50 μL of 10 mM phosphate-buffered saline with 1% Tween-20 (PBST, pH 7.0) in an eight-well strip, followed by mixing by blowing with a pipette tip. About 50 μL of H2O2 at different concentrations (30%, 15%, 7.5%, and 3.75%) was then added and mixed. The strip was placed into the pressure reading device for real-time monitoring of the pressure variation. The results were compared to choose the best H2O2 concentration.

Antibody labeling on PtNPs

Five hundred microliters of PtNP solution was centrifuged at 13,000 rpm for 10 minutes to remove the supernatant and resuspended in 500 μL of 1 mM morpholinoethanesulfonic acid (MES) buffer (pH 7.0). For the cTnI assay, 100 μL of labeling antibody solution (120 μg/mL, diluted in ultrapure water) was added to the resuspended PtNP solution, followed by brief mixing for 40 seconds to ensure uniform adsorption of the antibody molecules onto the PtNP surface and by 10 minutes of incubation at room temperature to increase the adsorption efficiency. The mixture was then added to 500 μL of blocking buffer (0.25% casein in 100 mM Na2CO3-NaHCO3 buffer with 0.25% Tween-20 and 5% sucrose, pH 9.0) and kept for 2 hours at room temperature, followed by centrifugation at 5,000 rpm for 5 minutes to remove the supernatant. The recovered nanoparticles were finally resuspended in 10 mM citrate buffer (pH 7.0) and stored at 4°C. For the CK-MB and Myo assays, 110 μL of labeling antibody solution (with 50 μg/mL and 10 μg/mL labeling antibody, respectively) was used and the recovered nanoparticles were stored in 0.1 M PBS (pH 7.4); the rest of the labeling process was the same as that used for cTnI.

Evaluation of the effect of antibody labeling on PtNP catalysis efficiency

To evaluate the influence of the antibody labeling process on PtNP catalysis efficiency, 1 μL of PtNP solution labeled with the antibody of each of the three biomarkers was added into 50 μL of 10 mM PBST (pH 7.0) in an eight-well strip, followed by brief mixing. About 50 μL of 30% H2O2 solution was then added and mixed several times by blowing with a pipette tip. The strip was then placed into the pressure measuring device at 37°C for 5 minutes to measure the pressure value. As a comparison, 1 μL of as-synthesized PtNP solution was tested following the same steps. The obtained pressure values were compared to evaluate the change in catalysis efficiency. Each point was measured three times to obtain an average value.

To demonstrate that the antibody molecules had bound onto the PtNP surface, PtNP solutions before and after labeling were each diluted 50 times with ultrapure water, and the zeta-average diameter and zeta-potential were measured with a dynamic light scattering device (Zetasizer Nano ZS90, Malvern, Worcestershire, United Kingdom). The zeta-average diameters and potentials of the PtNPs before and after labeling were compared.

Coating of magnetic beads with capture antibody

Dynabeads M280 Tosylactivated (165 μL, 30 mg/mL) was washed three times with 0.1 M PBS buffer (pH 7.4). About 20 μL of capture antibodies (cTnI 7.5 mg/mL, CK-MB 5 mg/mL, and Myo 5 mg/mL), 150 μL of 0.1 M PBS (pH 7.4), and 100 μL of 3 M ammonium sulfate were added and incubated for 16 hours at 37°C. After removal of the supernatant, 1 mL of 0.01 M PBS with 0.5% BSA was added to block the leftover binding sites on the beads for 1 hour at 37°C. The magnetic beads were resuspended in 250 μL of PBS with 0.1% BSA and washed three times to achieve a final bead concentration of 20 mg/mL. The obtained bead solution was stored at 4°C for further use.
Matrix effect evaluation

To evaluate the matrix effect on plasma sample detection, samples with gradient concentrations of cTnI antigen (0, 0.1, 1, 5, 10, 25, and 50 ng/mL) diluted in 10 mM PBS buffer, in commercial serum matrix, and in a mix of negative clinical samples (cTnI concentration <0.01 ng/mL, measured by CLIA) were measured by the developed SPDS with the following steps:

1. One hundred microliters of sample with a given concentration of cTnI in each matrix solution was mixed with 20 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted with reagent diluent) for 5 minutes to let the antigen molecules in the sample be captured by the antibody on the magnetic beads.
2. The beads were washed three times, each wash consisting of gathering the magnetic beads with a permanent magnet for 30 seconds, resuspension in 200 μL of washing buffer (10 mM PBST), and shaking for 30 seconds.
3. The washing buffer was then replaced with 50 μL of antibody-labeled PtNP solution (diluted 2.5 times with 10 mM PBST), incubated for 5 minutes, and followed by another three washes to remove free antibody-labeled PtNPs.
4. One hundred microliters of H2O2 was then added into the well to allow O2 generation for 5 minutes. The eight-well strip was placed into the pressure reader to collect the detection result.

The obtained measurement results were finally analyzed and compared. The best matrix for diluting the standard substance for standard curve building was chosen by balancing an acceptable matrix effect against the convenience of acquisition. For the matrix effect evaluation of the CK-MB and Myo assays, the same approach was used, except that the negative clinical serum sample threshold was <0.5 ng/mL for CK-MB and <1 mg/mL for Myo. Each point was measured three times, and the average values and SD were calculated.

Standard curve measurement for cTnI

1. One hundred microliters of sample with a series of cTnI concentrations (0, 0.01, 0.1, 1.0, 5.0, 10.0, 15.0, 20.0, 25.0, and 50.0 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted with reagent diluent) for 5 minutes to let the antigen molecules in the sample be captured by the antibody on the magnetic beads.
2. The beads were washed three times, each wash consisting of gathering the magnetic beads with a permanent magnet for 30 seconds, resuspension in 200 μL of washing buffer (10 mM PBST), and shaking for 30 seconds.
3. The washing buffer was replaced with 50 μL of antibody-labeled PtNP solution (diluted 2.5 times with 10 mM PBST), incubated for 5 minutes, and followed by another three washes with 200 μL of washing buffer (10 mM PBST) to remove free PtNPs.
4. One hundred microliters of H2O2 was then added into the well to allow O2 generation for 5 minutes. The eight-well strip was immediately placed on the pressure measuring device to collect the detection result.

Each concentration measurement was repeated three times, and the average values and SDs were calculated.

Standard curve measurement for CK-MB

1. One hundred microliters of sample with different concentrations of CK-MB (0, 0.41, 1.23, 3.69, 11.1, 33.3, 100, and 300 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted with reagent diluent) for 5 minutes to let the antigen molecules in the sample be captured by the antibody on the magnetic beads.
2. The beads were washed three times, each wash consisting of gathering the magnetic beads with a permanent magnet for 30 seconds, resuspension in 250 μL of washing buffer (10 mM PBST), and shaking for 30 seconds.
3. The washing buffer was replaced with 50 μL of antibody-labeled PtNP solution (diluted 2.5 times with 10 mM PBST), incubated for 5 minutes, and followed by another three washes to remove free antibody-labeled PtNPs.
4. One hundred microliters of H2O2 was then added into the well to allow O2 generation for 5 minutes. The eight-well strip was placed on the pressure measuring device to collect the detection result.

Each concentration measurement was repeated three times, and the average values and SDs were calculated.

Standard curve measurement for Myo

1. Twenty microliters of sample with different concentrations of Myo (0, 15.7, 31.3, 62.5, 125, 250, 500, and 1,000 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted with reagent diluent) in a well of an eight-well strip for 5 minutes to let the antigen molecules in the sample be captured by the antibody on the magnetic beads.
2. The beads were washed three times, each wash consisting of gathering the magnetic beads with a permanent magnet for 30 seconds, resuspension in 250 μL of washing buffer (10 mM PBST), and shaking for 30 seconds.
3. The washing buffer was replaced with 50 μL of antibody-labeled PtNP solution (diluted 2.5 times with 10 mM PBST), incubated for 5 minutes, and followed by another three washes to remove free antibody-labeled PtNPs.
4. One hundred microliters of H2O2 was then added into the well to allow O2 generation for 5 minutes. The eight-well strip was placed on the pressure measuring device to collect the detection result.

Each concentration measurement was repeated three times, and the average values and SDs were calculated.
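As a minimal illustration of the replicate statistics reported for each calibrator (mean, SD, and CV%), the following sketch uses hypothetical triplicate pressure readings; it is not part of the assay software.

```python
import numpy as np

# Hypothetical triplicate pressure readings (kPa) for a few calibrator levels.
replicates = {
    0.0:  [2.0, 2.2, 1.9],
    1.0:  [85.1, 88.7, 83.9],
    10.0: [640.2, 655.9, 631.4],
}

for conc, values in replicates.items():
    v = np.asarray(values, dtype=float)
    mean, sd = v.mean(), v.std(ddof=1)
    cv = 100 * sd / mean                      # CV% = SD / mean * 100
    print(f"{conc:>5} ng/mL  mean={mean:8.2f}  SD={sd:6.2f}  CV={cv:4.1f}%")
```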
Principle of SPDS

SPDS comprises a pressure measuring device (Figure 1A and C) for collecting pressure data and transmitting them to a smartphone (Figure 1A), and a smartphone for data analysis, storage, and output. The structural profile of the developed pressure measuring device is shown in Figure 1B. The major parts of the device include a screw valve to seal the eight-well strip, eight integrated pressure sensors to measure the pressure in each well, a rubber pad for gas sealing, and an adaptor for loading the strip into the device. The schematic of the SPDS working procedure is shown in Figure 1A. To measure a biomarker concentration with SPDS, the sample is first incubated with magnetic beads coated with capture antibody molecules in a well of an eight-well strip to capture the biomarker molecules onto the bead surface. After a washing step to remove the sample remnant and avoid non-specific adsorption, PtNPs modified with labeling antibody are added into the well to label the biomarkers, generating an antibody-antigen-antibody sandwich structure. After another washing step to remove free labeling PtNPs, the H2O2 substrate is added and catalyzed by the PtNPs. The strip is then immediately placed in the pressure measuring device and sealed with the screw valve. A large amount of O2 gas is produced, causing a significant pressure variation in the wells, which is measured by the pressure sensors and transmitted to a smartphone via Bluetooth. The pressure variation value is finally analyzed by the smartphone and transformed into a biomarker concentration value with a previously stored standard curve. Compared with traditional biomarker detection approaches based on light, heat, magnetic force, etc., SPDS transforms the biomarker molecular signal into a pressure signal, thus avoiding the inevitable environmental effects caused by light, heat, and magnetic fields.
Furthermore, by measuring the pressure variation value after versus before the catalysis process, instead of only the absolute pressure value after catalysis, SPDS eliminates the possible error caused by atmospheric pressure changes due to altitude variation or changes in weather. By using simple but highly precise integrated pressure sensors, SPDS significantly decreases the device size while retaining highly sensitive monitoring, allowing accurate biomarker detection in demanding environments. Furthermore, assisted by a smartphone via Bluetooth, SPDS can handle complicated data analysis using the strong computing power of the smartphone,30,31 which helps expand the application field of SPDS, for example, to POCT in emergencies and bacterial detection in the field.
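A minimal sketch of this differential reading is shown below. The well readings are hypothetical and the device firmware is not reproduced here, but the subtraction of the baseline reading from the post-catalysis reading is the operation described above, which makes the result independent of the ambient pressure.

```python
import numpy as np

# Hypothetical raw readings (kPa) from the eight integrated pressure sensors:
# one baseline reading taken right after sealing and one after the 5-minute
# catalysis window. Values are illustrative only.
baseline = np.array([101.3, 101.4, 101.2, 101.3, 101.3, 101.5, 101.2, 101.4])
after    = np.array([101.5, 109.8, 143.2, 256.7, 512.4, 101.6, 188.9, 730.1])

delta_p = after - baseline   # pressure variation per well, insensitive to ambient pressure
for well, dp in enumerate(delta_p, start=1):
    print(f"well {well}: dP = {dp:6.1f} kPa")
```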
Optimization of H2O2 concentration

As the substrate of the PtNPs, the concentration of H2O2 may significantly affect the catalytic efficiency and thereby the sensitivity. To optimize the catalytic efficiency, we first optimized the H2O2 concentration. The result is shown in Figure S1: when the concentration of H2O2 decreased, the catalytic efficiency of the PtNPs also decreased. The measured pressure variation value for 30% H2O2 was almost ten times the value obtained with 3.75% H2O2. To obtain the highest detection sensitivity for the biomarkers, 30% H2O2 was chosen for the subsequent pressure measuring assays.

Effect of antibody labeling on PtNP catalysis efficiency

As shown in Figure 2A and B, the zeta-average diameter increased from 126.2 nm before labeling (for the profile, see Figure S2A) to 207.3 nm for cTnI (Figure S2B), 215.2 nm for CK-MB (Figure S2C), and 188.3 nm for Myo (Figure S2D). The zeta-potential changed from −27.6 mV to −41.9 mV for cTnI, −40.0 mV for CK-MB, and −38.6 mV for Myo. The increase in zeta-average diameter suggests the binding of antibody molecules and blocking protein molecules (casein) onto the PtNP surface. The negative shift in zeta-potential can be attributed to the fact that casein (isoelectric point ~4.8) and antibodies (isoelectric point ~6.8) are negatively charged in pure water (pH ~7.0).

To evaluate the influence of the labeling process on the catalytic function of the PtNPs, the catalysis efficiency of the PtNPs before and after labeling with the different biomarker antibodies was compared. The result is shown in Figure 2C. The catalysis efficiency of the PtNPs labeled with cTnI, CK-MB, and Myo antibodies decreased significantly, by 86.9%, 83.6%, and 83.4%, after labeling, which was speculated to be caused by occupation of the catalytic sites by antibody molecules, surface charge variation of the PtNPs, and a decrease in specific surface area.
Evaluation of matrix effect

Compared with buffer solution, plasma contains much more complex components, including large amounts of proteins, polysaccharides, ions, and even cells.32 The measurement of biomarker molecules in plasma usually differs from that in buffer, owing to unpredictable non-specific adsorption and variation in molecular conformation under different environments. To minimize the measuring error caused by this kind of matrix effect, gradient concentrations of each biomarker were added to two different matrices, 10 mM PBS buffer and a commercially available serum matrix. As the control group, a mixture of ten clinically determined low-level plasma samples (cTnI <0.01 ng/mL, CK-MB <0.5 ng/mL, and Myo <1 mg/mL) was used. The measurement results for the three biomarkers in the different matrices were compared to choose the one showing the smallest matrix effect. As shown in Figure 3A–C, cTnI added to the serum matrix and to the plasma mixture showed a similar concentration–pressure response, while a much higher pressure response was observed in the buffer samples. CK-MB added to the serum matrix also showed the same result as the low-level plasma samples, while the response of the sample with CK-MB added to buffer was lower than that of the sample added to matrix. However, no difference was found among the three kinds of matrix for Myo detection. This result was inferred to be caused by the contents, such as proteins, ions, or organic molecules, of the serum matrix and plasma. These molecules may compete with the antigen or lead to non-specific adsorption onto the magnetic bead and PtNP surfaces, decreasing the antibody-antigen binding efficiency and thus lowering the measured value (as observed in the CK-MB assay). On the other hand, some conditions, such as pH and the types of ions or the ionic strength in serum matrix and plasma, might increase the antibody binding efficiency, causing a higher measured value (as observed in the cTnI assay). However, when the biomarker molecule is stable enough and is rarely affected by the solution environment, there is no significant difference among the results obtained from the different types of matrix (as observed in the Myo assay). To minimize the measuring error caused by the matrix effect, the commercial serum matrix was chosen for diluting the biomarkers to build the standard curves for further applications.
Standard curves of three AMI biomarkers

Serum matrix solutions spiked with different concentrations of cTnI, CK-MB, or Myo were evaluated with the developed assay following the steps described in the Materials and methods section. The resulting standard curves for the three biomarkers are shown in Figure 3D–F. The pressure value for cTnI showed a good linear relationship with the antigen concentration in the range from 0 to 25 ng/mL (Figure S3A), with a LOD of 0.014 ng/mL, consistent with the sensitivity of currently available cTnI diagnosis approaches.22 The increase in the pressure variation value slowed when the cTnI concentration rose above 20 ng/mL, which was mainly caused by saturation of the antibody binding sites on the magnetic beads. The linear ranges of the pressure response for CK-MB and Myo were 0–33 ng/mL (Figure S3B) and 0–250 ng/mL (Figure S3C), with LODs of 0.16 ng/mL and 0.85 ng/mL, respectively, and a similar decline in the slope of the pressure variation value was found when the biomarker concentration exceeded about 75 ng/mL for CK-MB and 400 ng/mL for Myo. All the coefficients of variation for each concentration of the three biomarkers were smaller than 10%, demonstrating the good repeatability of SPDS.

To increase the quantitative accuracy, a four-parameter logistic regression was used to build the standard curve formula.
The logistic regression formula is given by formula 1 as follows:

ΔP = (A1 − A2)/(1 + (c/c0)^n) + A2   (formula 1)

In formula 1, A1, A2, c0, and n are the parameters obtained by logistic fitting, ΔP is the pressure variation value obtained by the pressure measuring device, and c is the biomarker concentration in the sample.

The values of the four parameters were fitted with Origin software, and the fitting results for cTnI, CK-MB, and Myo are given by formulas 2, 3, and 4, respectively:

ΔP = −2,143.61/(1 + (c/34.35)^1.24) + 2,155.14   (formula 2)
ΔP = −796.08/(1 + (c/69.98)^1.10) + 822.63   (formula 3)
ΔP = −573.22/(1 + (c/329.65)^1.33) + 556.09   (formula 4)

The fitting results were transferred into the smartphone App for subsequent sample detection and data calculation, translating the pressure signal into a concentration value.

According to the built standard curves, SPDS showed higher sensitivity and accuracy (with CV smaller than 10% across the biomarker concentrations) than most POCT products such as LICA (LOD of about 0.1 ng/mL for cTnI, CV <15%), and detection performance comparable with CLIA (LOD of about 0.02 ng/mL for cTnI, CV <15%). However, with a much smaller device size and greater user-friendliness, SPDS can be applied under highly unfavorable conditions where CLIA cannot.
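For illustration, the four-parameter logistic fit and its inversion (the step the smartphone App performs when translating a measured ΔP into a concentration) can be sketched as follows. The calibration values below are hypothetical; Origin was used in the study, and scipy is shown here only as a stand-in for the same fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, a1, a2, c0, n):
    """Four-parameter logistic: dP = (A1 - A2) / (1 + (c/c0)**n) + A2 (formula 1)."""
    return (a1 - a2) / (1.0 + (c / c0) ** n) + a2

def inverse_four_pl(dp, a1, a2, c0, n):
    """Solve formula 1 for the concentration c given a measured pressure variation dP."""
    return c0 * ((a1 - a2) / (dp - a2) - 1.0) ** (1.0 / n)

# Hypothetical cTnI calibration points (ng/mL vs kPa); illustrative only.
conc = np.array([0.01, 0.1, 1.0, 5.0, 10.0, 15.0, 20.0, 25.0, 50.0])
dp   = np.array([3.0, 12.0, 90.0, 420.0, 760.0, 1020.0, 1230.0, 1400.0, 1850.0])

p0 = [dp.min(), dp.max(), np.median(conc), 1.0]      # rough starting guesses for A1, A2, c0, n
params, _ = curve_fit(four_pl, conc, dp, p0=p0, maxfev=10000)

measured_dp = 600.0                                  # pressure variation of a new sample
print("fitted A1, A2, c0, n:", np.round(params, 2))
print("estimated concentration:", round(float(inverse_four_pl(measured_dp, *params)), 3), "ng/mL")
```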
Specificity evaluation of SPDS for three biomarkers

To investigate the specificity of SPDS for the three biomarkers, common proteins found in serum, including Thr, Hb, HSA, and IgG, as well as the common inflammation biomarker CRP, were used as negative controls. The results shown in Figure 3G–I indicate that the developed assays for the three biomarkers had high specificity against these proteins commonly found in human blood. The results demonstrate that SPDS and the developed reagents are highly specific to their biomarkers and are rarely affected by the common proteins present in real blood samples, which we attribute mainly to the high specificity of the chosen antibodies and the well-developed nanoparticle coating procedure.

Clinical sample detection

To estimate the performance of SPDS, 50 clinical samples whose concentrations of cTnI, CK-MB, and Myo had been measured by CLIA were tested by SPDS. The comparison results are shown in Figure 4A, C, and E. The linear slopes of the comparison for cTnI, CK-MB, and Myo were fitted as 1.049 (R2=0.9852), 0.9545 (R2=0.9852), and 0.998 (R2=0.9908), demonstrating an excellent match between the two assays. To further analyze the correlation between the results of CLIA and SPDS, Bland–Altman analysis was performed.33 As shown in Figure 4B, D, and F, almost all of the compared samples were within the 95% CI range, suggesting a good correlation between the developed assay and CLIA.
Only 4/50, 3/50, and 1/50 samples for cTnI, CK-MB, and Myo, respectively, were outside the 95% CI range, which might be caused by biomarker proteolysis during sample storage. The clinical sample detection results demonstrate that SPDS rivals CLIA in detection performance for the three AMI biomarkers, and it has strong potential for clinical applications that need high sensitivity and accuracy but cannot support a large automated CLIA instrument.
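As an illustrative sketch of this method comparison, the snippet below fits a line between paired SPDS and CLIA values and reports the slope and R². The data are hypothetical; the actual comparison used the full 50-sample set.

```python
import numpy as np
from scipy import stats

# Hypothetical paired cTnI results (ng/mL) for a handful of samples; illustrative only.
clia = np.array([0.02, 0.15, 0.9, 2.3, 5.8, 11.4, 19.7])
spds = np.array([0.03, 0.14, 1.0, 2.4, 6.1, 11.0, 20.6])

res = stats.linregress(clia, spds)
print(f"slope = {res.slope:.3f}, intercept = {res.intercept:.3f}, R^2 = {res.rvalue**2:.4f}")
# A slope close to 1 and an R^2 close to 1 indicate good agreement between the two methods.
```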
Conclusion

In summary, to satisfy the demand for a portable system with high detection sensitivity and accuracy for diagnosis under unfavorable environments and detection in the wild, we developed SPDS to solve the problems that currently available approaches and devices are facing. The catalytic efficiency was first optimized, and the matrix effect was minimized by using a commercially available serum matrix, instead of buffer, to dilute the biomarker molecules for building the standard curves.
The detection performance and system reliability of SPDS were finally verified by comparing the results for the three biomarkers in 50 clinical plasma samples measured by SPDS and by CLIA. Furthermore, despite its small device size, and without loss of sensitivity or accuracy, SPDS showed good potential for POCT applications such as diagnosis in emergencies, bacterial detection in the field, and poison detection at the military front.

Compared with the results reported for PASS, SPDS showed much better user-friendliness and reliability owing to its integrated pressure measuring device, its optimized method of measuring the pressure variation value, and the support of a smartphone for data manipulation. SPDS can also measure eight samples simultaneously, which can significantly decrease the error caused by the tedious and unfriendly one-by-one sample measurement required by PASS. SPDS has shown a bright prospect for clinical application.

However, we also have to admit that SPDS is still a semi-automated system for AMI diagnosis, and the manipulation required for the immunoassay is still complicated. In future work, we will convert the device into a fully automated system for POCT. Besides, we will attempt to further optimize the PtNP modification protocol to minimize the effect of modification on nanoparticle catalytic efficiency and further improve the sensitivity of SPDS.

Supplementary materials

Figure S1 Optimization result of H2O2 concentration.
Figure S2 Nanoparticle zeta-average diameter profiles of PtNPs before labeling (A) and after labeling for cTnI (B), CK-MB (C), and Myo (D). Abbreviations: PtNPs, platinum nanoparticles; cTnI, cardiac troponin I; CK-MB, MB isoenzyme of creatine kinase; Myo, myoglobin.
Figure S3 Linear fitting results for cTnI (A), CK-MB (B), and Myo (C) at low concentrations. Abbreviations: cTnI, cardiac troponin I; CK-MB, MB isoenzyme of creatine kinase; Myo, myoglobin.
[ "intro", "materials|methods", "materials", null, null, null, null, null, null, null, null, null, null, null, null, "methods|results", null, "results|discussion", null, null, null, null, null, null, null, null, "supplementary-material" ]
[ "acute myocardial infarction", "diagnosis", "pressure sensor", "smartphone", "Pt nanoparticle" ]
Synthesis of Pt nanoparticles (PtNPs): To prepare PtNPs, 1 mL of 0.1 M H2PtCl6 solution and 96.9 mL of ultrapure water were added into a round-bottom flask. The solution was heated to 80°C with magnetic stirring at 700 rpm. About 2.1 mL of 1.4 M ascorbic acid was then added. The mixture was kept at 80°C for 30 minutes. The obtained PtNP solution was stored in a conical flask at room temperature.
The optimization of H2O2 concentration: To optimize the concentration of H2O2, 1 μL of the synthesized PtNP solution was added into 50 μL of 10 mM phosphate-buffered saline with 1% Tween-20 (PBST, pH 7.0) in an eight-well strip and mixed by blowing with a pipette tip. About 50 μL of H2O2 at different concentrations (30%, 15%, 7.5%, and 3.75%) was then added and mixed. The strip was placed into the pressure reading device for real-time monitoring of the pressure variation. The results were compared to choose the best H2O2 concentration.
Antibody labeling on PtNPs: Five hundred microliters of PtNP solution was centrifuged at 13,000 rpm for 10 minutes to remove the supernatant and resuspended in 500 μL of 1 mM morpholinoethanesulfonic acid (MES) buffer (pH 7.0). For the cTnI assay, 100 μL of labeling antibody solution (120 μg/mL, diluted in ultrapure water) was added into the resuspended PtNP solution and mixed briefly for 40 seconds to ensure uniform adsorption of antibody molecules on the PtNP surface, followed by 10-minute incubation at room temperature to increase the adsorption efficiency. The mixture was then added to 500 μL of blocking buffer (0.25% casein in 100 mM Na2CO3-NaHCO3 buffer with 0.25% Tween-20 and 5% sucrose, pH 9.0) and kept for 2 hours at room temperature, followed by centrifugation at 5,000 rpm for 5 minutes to remove the supernatant. The recovered nanoparticles were finally resuspended in 10 mM citrate buffer (pH 7.0) and stored at 4°C. For the CK-MB and Myo assays, 110 μL of labeling antibody solution (with 50 μg/mL and 10 μg/mL of labeling antibody, respectively) was used and the recovered nanoparticles were stored in 0.1 M PBS (pH 7.4); otherwise, the labeling process was the same as the abovementioned procedure for cTnI.
Evaluation of the effect of antibody labeling on PtNP catalysis efficiency: To evaluate the influence of the antibody labeling process on PtNP catalysis efficiency, 1 μL of PtNP solution labeled with the antibody for each of the three biomarkers was added into 50 μL of 10 mM PBST (pH 7.0) in an eight-well strip, followed by a short mixing. About 50 μL of 30% H2O2 solution was then added and mixed several times by blowing with a pipette tip. The strip was then placed into the pressure measuring device at 37°C for 5 minutes to measure the pressure value. As a comparison, 1 μL of as-synthesized PtNP solution was used following the same steps. The obtained pressure values were compared to evaluate the variation in catalysis efficiency. Each point was measured three times to obtain an average value. To demonstrate that the antibody molecules had bound onto the PtNP surface, PtNP solution before and after labeling was diluted 50 times with ultrapure water, and the zeta-average diameter and zeta-potential were measured using a dynamic light scattering device (Zetasizer Nano ZS90, Malvern, Worcestershire, United Kingdom). The zeta-average diameters and potentials of the PtNPs before and after labeling were compared.
Coating of magnetic beads with capture antibody: Dynabeads M280 Tosylactivated (165 μL, 30 mg/mL) were washed three times with 0.1 M PBS buffer (pH 7.4). About 20 μL of capture antibodies (cTnI 7.5 mg/mL, CK-MB 5 mg/mL, and Myo 5 mg/mL), 150 μL of 0.1 M PBS (pH 7.4), and 100 μL of 3 M ammonium sulfate were added and incubated for 16 hours at 37°C. After removing the supernatant, 1 mL of 0.01 M PBS with 0.5% BSA was added to block the leftover binding sites on the beads for 1 hour at 37°C. The magnetic beads were resuspended in 250 μL of PBS with 0.1% BSA and washed three times to achieve a final bead concentration of 20 mg/mL. The obtained bead solution was stored at 4°C for further use.
Matrix effect evaluation: To evaluate the matrix effect on plasma sample detection, samples with gradient concentrations of cTnI antigen (0, 0.1, 1, 5, 10, 25, and 50 ng/mL) diluted in 10 mM PBS buffer, in commercial serum matrix, and in a mix of negative clinical samples (cTnI concentration <0.01 ng/mL, measured by CLIA) were measured by the developed SPDS with the following steps. One hundred microliters of sample with a given cTnI concentration in a given matrix solution was mixed with 20 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted in reagent diluent) for 5 minutes to let the antigen molecules in the sample be captured by the antibody on the magnetic beads. The beads were washed three times, each wash consisting of gathering the magnetic beads with a permanent magnet for 30 seconds, resuspension in 200 μL of washing buffer (10 mM PBST), and shaking for 30 seconds. The washing buffer was then replaced with 50 μL of antibody-labeled PtNP solution (diluted 2.5 times with 10 mM PBST), incubated for 5 minutes, and followed by another three washes to remove free antibody-labeled PtNPs. One hundred microliters of H2O2 was then added into the well to allow O2 generation for 5 minutes. The eight-well strip was placed into the pressure reader to collect the detection result. The obtained measurement results were finally analyzed and compared. The best matrix for diluting the standard substance for standard curve building was chosen considering the balance between an acceptable matrix effect and ease of acquisition. For the matrix effect evaluation of the CK-MB and Myo assays, the same approach was used, except that the negative clinical serum samples contained <0.5 ng/mL CK-MB and <1 mg/mL Myo, respectively. Each point was measured three times, and the average values and SD were calculated.
Standard curve building for cTnI immunoassay: One hundred microliters of sample with a series of cTnI concentrations (0, 0.01, 0.1, 1.0, 5.0, 10.0, 15.0, 20.0, 25.0, and 50.0 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted in reagent diluent) for 5 minutes to let the antigen molecules in the sample be captured by the antibody on the magnetic beads. The beads were washed three times, each wash consisting of gathering the magnetic beads with a permanent magnet for 30 seconds, resuspension in 200 μL of washing buffer (10 mM PBST), and shaking for 30 seconds. The washing buffer was replaced with 50 μL of antibody-labeled PtNP solution (diluted 2.5 times with 10 mM PBST), incubated for 5 minutes, and followed by three more washes with 200 μL of washing buffer (10 mM PBST) to remove free PtNPs. One hundred microliters of H2O2 was then added into the well to allow O2 generation for 5 minutes. The eight-well strip was immediately placed on the pressure measuring device to collect the detection result. Each concentration measurement was repeated three times, and the average values and SDs were calculated.
Standard curve building for CK-MB immunoassay: One hundred microliters of samples with different concentrations of CK-MB (0, 0.41, 1.23, 3.69, 11.1, 33.3, 100, and 300 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted in reagent diluent) for 5 minutes to let the antigen molecules in the sample be captured by the antibody on the magnetic beads. The beads were washed three times, each wash consisting of gathering the magnetic beads with a permanent magnet for 30 seconds, resuspension in 250 μL of washing buffer (10 mM PBST), and shaking for 30 seconds. The washing buffer was replaced with 50 μL of antibody-labeled PtNP solution (diluted 2.5 times with 10 mM PBST), incubated for 5 minutes, and followed by another three washes to remove free antibody-labeled PtNPs. One hundred microliters of H2O2 was then added into the well to allow O2 generation for 5 minutes. The eight-well strip was placed on the pressure measuring device to collect the detection result. Each concentration measurement was repeated three times, and the average values and SDs were calculated.
Standard curve building for Myo immunoassay: Twenty microliters of samples with different concentrations of Myo (0, 15.7, 31.3, 62.5, 125, 250, 500, and 1,000 ng/mL) in different matrix solutions was mixed with 50 μL of capture antibody-modified magnetic beads (1 mg/mL, diluted in reagent diluent) in the well of an eight-well strip for 5 minutes to let the antigen molecules in the sample be captured by the antibody on the magnetic beads. The beads were washed three times, each wash consisting of gathering the magnetic beads with a permanent magnet for 30 seconds, resuspension in 250 μL of washing buffer (10 mM PBST), and shaking for 30 seconds. The washing buffer was replaced with 50 μL of antibody-labeled PtNP solution (diluted 2.5 times with 10 mM PBST), incubated for 5 minutes, and followed by another three washes to remove free antibody-labeled PtNPs. One hundred microliters of H2O2 was then added into the well to allow O2 generation for 5 minutes. The eight-well strip was placed on the pressure measuring device to collect the detection result. Each concentration measurement was repeated three times, and the average values and SDs were calculated.
Determining the limit of detection (LOD) of a biomarker: The pressure variation value of each biomarker was found to be linear with respect to the biomarker concentration at low concentrations. The LOD for each biomarker was determined as LOD = 2 × sd0/slope, where sd0 is the SD of the blank samples and the slope is obtained by linear regression of the pressure variation against the biomarker concentration at low concentrations.
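As an illustration only, the sketch below computes the LOD defined above from replicate blank readings and a low-concentration linear fit; the concentrations and pressure values are hypothetical placeholders, not data from this work.

```python
import numpy as np

# Illustrative sketch of the LOD rule above (LOD = 2*sd0/slope).
# Concentrations and pressure readings are hypothetical placeholders.
low_conc = np.array([0.0, 0.01, 0.1, 1.0])             # ng/mL, low end of the curve
pressure = np.array([                                   # three replicate readings per level
    [11.2, 12.1, 11.6],
    [13.0, 12.7, 13.4],
    [21.5, 22.3, 21.9],
    [98.0, 101.2, 99.5],
])

sd0 = pressure[0].std(ddof=1)                           # SD of the blank replicates
slope, intercept = np.polyfit(low_conc, pressure.mean(axis=1), 1)  # low-level linear fit

lod = 2.0 * sd0 / slope
print(f"slope = {slope:.1f} per ng/mL, sd0 = {sd0:.2f}, LOD = {lod:.4f} ng/mL")
```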
Specificity evaluation of SPDS for three biomarkers: To investigate the specificity of SPDS for the three biomarkers, some common proteins found in serum, including thrombin (Thr), hemoglobin (Hb), human serum albumin (HSA), and immunoglobulin G (IgG), together with a common inflammation biomarker, C-reactive protein (CRP), were used as the negative controls. The concentration of each negative control protein was 1 mg/mL diluted in serum matrix, and the concentrations of the three biomarkers (cTnI, CK-MB, and Myo) used as the positive controls were 10 ng/mL, 100 ng/mL, and 250 ng/mL, respectively. The same assay procedure and reagents for the three biomarkers were used to test these negative control protein samples and to measure the pressure variation value after the immunoassay. Three runs were performed for each protein, and the average value and SD were calculated.
Detection of clinical plasma samples: The concentrations of the three biomarkers, cTnI, CK-MB, and Myo, in 50 clinical plasma samples were measured with SPDS following the same measuring procedure as the standard curve building for each biomarker. The obtained results were compared with those of CLIA (ARCHITECT i2000SR). Each point was measured three times, and the average values and SD were calculated.
Results of Bland–Altman analysis of clinical plasma samples: The Bland–Altman analysis comparing the clinical sample results of SPDS with the CLIA method was performed using SPSS software. Briefly, for the results obtained by SPDS (Vspds) and CLIA (Vclia) for the 50 clinical samples, the variables were calculated as Diff_i = Vspds,i − Vclia,i and Mean_i = (Vspds,i + Vclia,i)/2, for i = 1, 2, 3, …, 50. Then, the average value (Mean) and SD of Diff_i were calculated by the software, and the reference lines of the 95% CI were obtained as Y_up = Mean + 1.96 × SD and Y_down = Mean − 1.96 × SD. Finally, a scatter diagram was built using Mean_i as the x-axis and Diff_i as the y-axis. The mean reference line and the 95% CI reference lines were added to the scatter diagram to generate the Bland–Altman diagram.
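For readers without SPSS, a minimal sketch of the same Bland–Altman computation in Python is shown below; the paired values are randomly generated stand-ins for the 50 SPDS/CLIA pairs, not the clinical data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of the Bland-Altman computation above, using numpy/matplotlib
# instead of SPSS. The 50 paired values are randomly generated placeholders.
rng = np.random.default_rng(0)
v_clia = rng.uniform(0.02, 25.0, 50)                  # CLIA results (ng/mL)
v_spds = v_clia + rng.normal(0.0, 0.5, 50)            # SPDS results (ng/mL)

diff = v_spds - v_clia                                # Diff_i = Vspds,i - Vclia,i
mean = (v_spds + v_clia) / 2.0                        # Mean_i = (Vspds,i + Vclia,i) / 2

bias = diff.mean()
sd = diff.std(ddof=1)
y_up, y_down = bias + 1.96 * sd, bias - 1.96 * sd     # 95% CI reference lines

plt.scatter(mean, diff, s=15)
for y in (bias, y_up, y_down):
    plt.axhline(y, linestyle="--")
plt.xlabel("Mean of SPDS and CLIA (ng/mL)")
plt.ylabel("SPDS - CLIA (ng/mL)")
plt.show()
```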
Manipulation process of SPDS: The eight-well strip containing H2O2 and the PtNP-bound magnetic beads after the immunoassay was covered with a sealing rubber pad to seal the wells and placed inside the adaptor, which delivers it into the pressure measuring device. The screw valve then moved the pressure sensor module down to press onto the rubber pad, fully sealing the gap between the pressure sensor module and the rubber pad. Afterward, the start button was pressed to start the pressure measurement.
Results and discussion: Principle of SPDS: SPDS comprises a pressure measuring device (Figure 1A and C) for data collection and for transmitting pressure data to a smartphone (Figure 1A), and a smartphone for data analysis, storage, and output. The structure profile of the developed pressure measuring device is shown in Figure 1B.
The major parts of the device include a screw valve to seal the eight-well strip, eight integrated pressure sensors to measure the pressure value of each well, a rubber pad for gas sealing, and an adaptor for loading the strip into the device. The schematic of the SPDS working procedure is shown in Figure 1A. To measure a biomarker concentration with SPDS, the sample is first incubated with magnetic beads coated with capture antibody molecules in a well of an eight-well strip to capture the biomarker molecules onto the bead surface. After a washing step to remove sample remnants and avoid non-specific adsorption, PtNPs modified with labeling antibody are added into the well to label the biomarkers, generating an antibody-antigen-antibody sandwich structure. After another washing step to remove free labeling PtNPs, H2O2 substrate is added and catalyzed by the PtNPs. The strip is then immediately placed in the pressure measuring device and sealed with the screw valve. A large amount of O2 gas is produced, causing a significant pressure variation in the wells, which is measured by the pressure sensor and transmitted to a smartphone via Bluetooth. The pressure variation value is finally analyzed by the smartphone and transformed into a biomarker concentration value using a previously fed standard curve. Compared to traditional biomarker detection approaches based on light, heat, magnetic force, etc., SPDS transforms the biomarker molecule signal into a pressure signal, thus avoiding the inevitable environmental effects caused by light, heat, and magnetic fields. Furthermore, by measuring the pressure variation between the readings after and before the catalysis process, instead of only the absolute pressure value after catalysis, SPDS eliminates the possible error caused by atmospheric pressure changes due to altitude variation or changes in weather. By using a simple but highly precise integrated pressure sensor, SPDS significantly decreases the device size while retaining highly sensitive monitoring and allows for highly accurate biomarker detection under harsh conditions. Furthermore, assisted by a smartphone via Bluetooth, SPDS can handle complicated data analysis using the strong computing power of the smartphone,30,31 which helps expand the application field of SPDS, for example, to POCT in emergencies and bacterial detection in the field.
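As a toy illustration of the baseline-corrected pressure variation described above, the sketch below computes ΔP from a short time series of readings for one well; the trace values and units are hypothetical, and the Bluetooth transfer itself is not shown.

```python
import numpy as np

# Toy illustration of the baseline-corrected pressure variation described
# above: delta P is the change relative to the reading taken right after the
# strip is sealed, so a constant atmospheric offset cancels out. The trace is
# synthetic; real readings arrive from the device over Bluetooth.
trace = np.array([101.3, 101.5, 102.2, 103.0, 103.8, 104.2, 104.3])  # sensor readings over time

baseline = trace[0]               # reading at sealing time
delta_p = trace[-1] - baseline    # pressure variation used for quantification
print(f"delta P = {delta_p:.2f} (sensor units)")
```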
Optimization of H2O2 concentration: As the substrate of the PtNPs, the concentration of H2O2 may significantly affect the catalytic efficiency and thereby the sensitivity. To optimize the catalytic efficiency, we first optimized the H2O2 concentration. The result is shown in Figure S1; when the concentration of H2O2 decreased, the catalytic efficiency of the PtNPs also decreased. The measured pressure variation value for 30% H2O2 was almost ten times the value obtained with 3.75% H2O2. To obtain the highest detection sensitivity for the biomarkers, 30% H2O2 was chosen for the subsequent pressure measuring assays.
Effect of antibody labeling on PtNPs catalysis efficiency: As shown in Figure 2A and B, the zeta-average diameter increased from 126.2 nm before labeling (for profile, see Figure S2A) to 207.3 nm for cTnI (Figure S2B), 215.2 nm for CK-MB (Figure S2C), and 188.3 nm for Myo (Figure S2D). The zeta-potential changed from −27.6 mV to −41.9 mV for cTnI, −40.0 mV for CK-MB, and −38.6 mV for Myo. The increase in zeta-average diameter suggests binding of antibody molecules and blocking by protein molecules (casein) onto the PtNP surface. The negative shift of the zeta-potential can be attributed to the fact that casein (isoelectric point ~4.8) and antibodies (isoelectric point ~6.8) are negatively charged in pure water (pH ~7.0). To evaluate the influence of the labeling process on the catalytic function of the PtNPs, the catalysis efficiency of the PtNPs before and after labeling with the different biomarker antibodies was compared. The result is shown in Figure 2C. The catalysis efficiency of the PtNPs labeled with the cTnI, CK-MB, and Myo antibodies decreased significantly, by 86.9%, 83.6%, and 83.4%, respectively, after labeling, which was speculated to be caused by occupation of the catalytic sites by antibody molecules, surface charge variation of the PtNPs, and a decrease in specific surface area.
Evaluation of matrix effect: Compared with buffer solution, plasma contains much more complex components, including a large amount of proteins, polysaccharides, ions, and even cells.32 The measurement of biomarker molecules in plasma therefore usually differs from that obtained in buffer, owing to unpredictable non-specific adsorption and molecular conformation variation in different environments. To minimize the measuring error caused by this kind of matrix effect, gradient concentrations of the biomarker molecules were added to two different matrixes, 10 mM PBS buffer and commercially available serum matrix. As the control group, a mixture of ten clinically determined low-level plasma samples (cTnI <0.01 ng/mL, CK-MB <0.5 ng/mL, and Myo <1 mg/mL) was used. The measuring results of the three biomarkers in the different matrixes were compared to choose the one showing the smallest matrix effect. As shown in Figure 3A–C, cTnI added to the serum matrix and to the plasma mixture showed a similar concentration–pressure response, while a much higher pressure response was observed in the buffer samples. CK-MB added to the serum matrix also showed the same result as the low-level plasma samples, while the response of the sample with CK-MB added to buffer was lower than that of the sample added to matrix. However, no difference was found among the three kinds of matrix for Myo detection. This result was inferred to be caused by the contents, such as proteins, ions, or organic molecules, in the serum matrix and plasma. These molecules may compete with the antigen or lead to non-specific adsorption onto the magnetic beads and the PtNP surface, decreasing the antibody-antigen binding efficiency and thus resulting in a lower measured value (as happened in the CK-MB assay). On the other hand, some conditions, such as pH and the different types of ions or the ionic strength in the serum matrix and plasma, might increase the antibody binding efficiency, causing a higher measured result (as happened in the cTnI assay). However, when the biomarker molecule is stable enough and is rarely affected by the solution environment, there will be no significant difference among the results obtained from different types of matrix (as happened in the Myo assay). To minimize the measuring error caused by the matrix effect, the commercial serum matrix was chosen to dilute the biomarkers for building the standard curves for further applications.
Standard curves of three AMI biomarkers: Serum matrix solutions spiked with different concentrations of cTnI, CK-MB, or Myo were evaluated with the developed assay following the steps described in the Materials and methods section. The resulting standard curves for the three biomarkers are shown in Figure 3D–F. The pressure value for cTnI showed a good linear relationship with antigen concentration in the range from 0 to 25 ng/mL (Figure S3A), with a LOD of 0.014 ng/mL, consistent with the sensitivity of currently available cTnI diagnosis approaches.22 The rate of increase of the pressure variation value declined when the cTnI concentration exceeded about 20 ng/mL, which was mainly caused by saturation of the antibody binding sites on the magnetic beads. The linear ranges of the pressure response for CK-MB and Myo were 0–33 ng/mL (Figure S3B) and 0–250 ng/mL (Figure S3C), with LODs of 0.16 ng/mL and 0.85 ng/mL, respectively, and a similar flattening of the pressure variation value was found when the biomarker concentration exceeded about 75 ng/mL for CK-MB and 400 ng/mL for Myo. All the coefficients of variation for each concentration of the three biomarkers were smaller than 10%, demonstrating a good repeatability of SPDS. To increase the quantitative accuracy, a four-parameter logistic regression was used to build the standard curve formula. The logistic regression formula is given by formula 1 as follows: ΔP = (A1 − A2)/(1 + (c/c0)^n) + A2. In formula 1, A1, A2, c0, and n are parameters obtained by logistic fitting, ΔP is the pressure variation value obtained by the pressure measuring device, and c is the biomarker concentration in the sample. The values of the four parameters were fitted with Origin software, and the fitting results for cTnI, CK-MB, and Myo are given by formula 2, formula 3, and formula 4, respectively: ΔP = −2,143.61/(1 + (c/34.35)^1.24) + 2,155.14; ΔP = −796.08/(1 + (c/69.98)^1.10) + 822.63; ΔP = −573.22/(1 + (c/329.65)^1.33) + 556.09. The fitting results were transferred into the smartphone App for further sample detection and data calculation to translate the pressure signal into a concentration value. According to the built standard curves, SPDS showed higher sensitivity and accuracy (with CV% smaller than 10% for the different biomarker concentrations) than most POCT products such as LICA (LOD of about 0.1 ng/mL for cTnI, CV <15%) and detection performance comparable to CLIA (LOD of about 0.02 ng/mL for cTnI, CV <15%). However, with a much smaller device size and greater ease of use, SPDS could be applied under highly unfavorable conditions where CLIA cannot.
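To make formula 1 and its use concrete, the following sketch fits the four-parameter logistic model to hypothetical calibrator data and inverts it to recover a concentration from a measured ΔP; the parameters and data are assumptions for illustration and do not reproduce formulas 2–4 or the Origin/App implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the four-parameter logistic fit (formula 1) and of its inversion,
# i.e., how a measured pressure variation could be translated back into a
# concentration. The calibrator data are synthetic placeholders generated from
# assumed parameters, not the values behind formulas 2-4.
def four_pl(c, a1, a2, c0, n):
    # dP = (A1 - A2) / (1 + (c / c0)**n) + A2
    return (a1 - a2) / (1.0 + (c / c0) ** n) + a2

conc = np.array([0.01, 0.1, 1.0, 5.0, 10.0, 15.0, 20.0, 25.0, 50.0])   # ng/mL
dp = four_pl(conc, 10.0, 2000.0, 35.0, 1.2)                             # assumed "measured" dP values

popt, _ = curve_fit(four_pl, conc, dp, p0=[10.0, 1800.0, 30.0, 1.0], bounds=(0.0, np.inf))
a1, a2, c0, n = popt

def to_concentration(dp_measured):
    # Invert formula 1: c = c0 * ((A1 - A2) / (dP - A2) - 1) ** (1 / n)
    return c0 * ((a1 - a2) / (dp_measured - a2) - 1.0) ** (1.0 / n)

print("fitted A1, A2, c0, n:", np.round(popt, 2))
print("concentration at dP = 600:", round(float(to_concentration(600.0)), 2))
```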
Specificity evaluation of SPDS for three biomarkers: To investigate the specificity of SPDS for the three biomarkers, some common proteins found in serum, including Thr, Hb, HSA, and IgG, together with the common inflammation biomarker CRP, were used as the negative controls. The results shown in Figure 3G–I indicate that the developed assays for the three biomarkers had high specificity against these proteins commonly found in human blood. The results demonstrated that SPDS and the developed reagents were highly specific to their biomarkers and were rarely affected by the common proteins present in real blood samples, which we attribute mainly to the high specificity of the chosen antibodies and the well-developed nanoparticle coating procedure.
Clinical sample detection: To estimate the performance of SPDS, 50 clinical samples, whose concentrations of cTnI, CK-MB, and Myo had been measured by CLIA, were tested by SPDS. The results of the comparison are shown in Figure 4A, C, and E. The linear slopes of the comparison for cTnI, CK-MB, and Myo were fitted as 1.049 (R² = 0.9852), 0.9545 (R² = 0.9852), and 0.998 (R² = 0.9908), respectively, demonstrating an excellent match between the two assays. To further analyze the correlation between the results of CLIA and SPDS, Bland–Altman analysis was performed.33 As shown in Figure 4B, D, and F, almost all of the compared samples were within the 95% CI, suggesting a good correlation between the developed assay and CLIA. Only 4/50, 3/50, and 1/50 samples for cTnI, CK-MB, and Myo, respectively, were outside the 95% CI, which might be caused by biomarker proteolysis during sample storage. The clinical sample detection results demonstrated that SPDS rivaled CLIA in the detection performance for the three AMI biomarkers, and it has strong potential for clinical applications that need high sensitivity and accuracy but cannot support a large automated CLIA instrument.
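As a small illustration of the comparison statistics quoted above (slope and R²), the sketch below fits SPDS results against CLIA results with an ordinary linear regression; the paired values are randomly generated placeholders, not the 50 clinical measurements.

```python
import numpy as np
from scipy.stats import linregress

# Sketch of the method-comparison fit reported above (slope and R^2 of SPDS
# against CLIA). The 50 paired values are randomly generated placeholders.
rng = np.random.default_rng(1)
clia = rng.uniform(0.02, 25.0, 50)                   # CLIA concentrations (ng/mL)
spds = 1.05 * clia + rng.normal(0.0, 0.4, 50)        # SPDS concentrations (ng/mL)

fit = linregress(clia, spds)
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.3f}, R^2 = {fit.rvalue ** 2:.4f}")
```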
Compared with traditional biomarker detection approaches based on light, heat, magnetic force, and similar signals, SPDS transforms the biomarker molecular signal into a pressure signal, thus avoiding the inevitable environmental effects caused by light, heat, and magnetic fields. Furthermore, by measuring the pressure variation between before and after the catalysis process, instead of only the absolute pressure value after catalysis, SPDS eliminates the possible error caused by atmospheric pressure changes due to altitude variation or changes in weather. By using a simple but highly precise integrated pressure sensor, SPDS significantly decreases the device size needed for highly sensitive monitoring and allows highly accurate biomarker detection in harsh environments. Furthermore, assisted by a smartphone via Bluetooth, SPDS can handle complicated data analysis using the strong computing power of the smartphone,30,31 which helps expand the application fields of SPDS, such as POCT in emergencies and bacterial detection in the field. Optimization of H2O2 concentration: As the substrate of PtNP catalysis, the concentration of H2O2 may significantly affect the catalytic efficiency and hence the sensitivity. To optimize the catalytic efficiency, we first optimized the H2O2 concentration. The result is shown in Figure S1; as the concentration of H2O2 decreased, the catalytic efficiency of PtNPs also decreased. The measured pressure variation value for 30% H2O2 was almost ten times the value obtained with 3.75% H2O2. To obtain the highest detection sensitivity for the biomarkers, 30% H2O2 was chosen for further pressure measuring assays. Effect of antibody labeling on PtNP catalysis efficiency: As shown in Figure 2A and B, the zeta-average diameter increased from 126.2 nm before labeling (for profile, see Figure S2A) to 207.3 nm for cTnI (Figure S2B), 215.2 nm for CK-MB (Figure S2C), and 188.3 nm for Myo (Figure S2D). The zeta-potential changed from −27.6 mV to −41.9 mV for cTnI, −40.0 mV for CK-MB, and −38.6 mV for Myo. The increase in zeta-average diameter suggests binding of antibody molecules and blocking protein (casein) onto the PtNP surface. The negative shift in zeta-potential can be attributed to the fact that casein (isoelectric point ~4.8) and antibodies (isoelectric point ~6.8) are negatively charged in pure water (pH ~7.0). To evaluate the influence of the labeling process on PtNP catalytic function, the catalysis efficiency of PtNPs before and after labeling with the different biomarker antibodies was compared. The result is shown in Figure 2C. The catalysis efficiency of PtNPs labeled with cTnI, CK-MB, and Myo antibodies decreased significantly, by 86.9%, 83.6%, and 83.4%, respectively, which we speculate was caused by occupation of catalytic sites by antibody molecules, surface charge variation of the PtNPs, and a decrease in specific surface area. Evaluation of matrix effect: Compared with buffer solution, plasma contains far more complex components, including large amounts of proteins, polysaccharides, ions, and even cells.32 The measurement of biomarker molecules in plasma therefore usually differs from that obtained in buffer, owing to unpredictable non-specific adsorption and molecular conformation changes under different environments.
To minimize the measuring error caused by this kind of matrix effect, a gradient of biomarker concentrations was added into two different matrixes, 10 mM PBS buffer and a commercially available serum matrix. As the control group, a mixture of ten clinically determined low-level plasma samples (cTnI <0.01 ng/mL, CK-MB <0.5 ng/mL, and Myo <1 mg/mL) was used. The measuring results for the three biomarkers in the different matrixes were compared to choose the one showing the smallest matrix effect. As shown in Figure 3A–C, cTnI added to the serum matrix and to the plasma mixture showed similar concentration–pressure responses, while a much higher pressure response was observed for the buffer samples. CK-MB added to the serum matrix also showed the same result as the low-level plasma samples, while the response of samples with CK-MB added to buffer was lower than that of samples added to matrix. However, no difference was found among the three matrixes for Myo detection. These results were inferred to be caused by the contents, such as proteins, ions, or organic molecules, in the serum matrix and plasma. These molecules may compete with the antigen or lead to non-specific adsorption onto the magnetic beads and PtNP surfaces, decreasing antibody-antigen binding efficiency and resulting in a lower measured value (as happened in the CK-MB assay). On the other hand, some conditions, such as pH or the type and strength of ions in the serum matrix and plasma, might increase the antibody binding efficiency, causing a higher measured result (as happened in the cTnI assay). When the biomarker molecule is stable enough and is rarely affected by the solution environment, there is no significant difference in the results obtained from the different types of matrix (as happened in the Myo assay). To minimize the measuring error caused by the matrix effect, the commercial serum matrix was chosen to dilute the biomarkers for building the standard curves for further applications. Standard curves of three AMI biomarkers: Serum matrix solution spiked with different concentrations of cTnI, CK-MB, or Myo was evaluated with the developed assay following the steps described in the Materials and methods section. The resulting standard curves for the three biomarkers are shown in Figure 3D–F. The pressure value for cTnI showed a good linear relationship with antigen concentration in the range from 0 to 25 ng/mL (Figure S3A), with an LOD of 0.014 ng/mL, consistent with the sensitivity of currently available cTnI diagnostic approaches.22 The increase in pressure variation value slowed when the cTnI concentration rose beyond about 20 ng/mL, mainly because of saturation of the antibody binding sites on the magnetic beads. The linear ranges of the pressure response to CK-MB and Myo concentration were found to be 0–33 ng/mL (Figure S3B) and 0–250 ng/mL (Figure S3C), with LODs of 0.16 ng/mL and 0.85 ng/mL, respectively; a similar flattening of the pressure variation value was found when the biomarker concentration increased beyond about 75 ng/mL for CK-MB and 400 ng/mL for Myo. All the coefficients of variation for each concentration of the three biomarkers were determined to be smaller than 10%, demonstrating good repeatability of SPDS. To increase the quantitative accuracy, four-parameter logistic regression was used to build a standard curve formula.
The logistic regression formula is given by formula 1 as follows: ΔP = (A1 − A2)/(1 + (c/c0)^n) + A2. In formula 1, A1, A2, c0, and n are parameters obtained by logistic fitting, ΔP is the pressure variation value obtained by the pressure measuring device, and c is the biomarker concentration in the sample. The four parameters were fitted with Origin software, and the fitting results for cTnI, CK-MB, and Myo are given by formula 2, formula 3, and formula 4, respectively: ΔP = −2,143.61/(1 + (c/34.35)^1.24) + 2,155.14; ΔP = −796.08/(1 + (c/69.98)^1.10) + 822.63; ΔP = −573.22/(1 + (c/329.65)^1.33) + 556.09. The fitting results were loaded into the smartphone app for subsequent sample detection and data calculation to translate the pressure signal into a concentration value. According to the built standard curves, SPDS showed higher sensitivity and accuracy (CV smaller than 10% across biomarker concentrations) than most POCT products such as LICA (LOD of about 0.1 ng/mL for cTnI, CV <15%) and detection performance comparable to CLIA (LOD of about 0.02 ng/mL for cTnI, CV <15%). However, with a much smaller device size and greater ease of use, SPDS can be applied under highly unfavorable conditions where CLIA cannot. Specificity evaluation of SPDS for three biomarkers: To investigate the specificity of SPDS for the three biomarkers, some common proteins found in serum, including Thr, Hb, HSA, and IgG, and a common inflammation biomarker, CRP, were used as negative controls. The results shown in Figure 3G–I indicate that the developed assays for the three biomarkers had high specificity against these proteins commonly found in human blood. The results demonstrated that SPDS and the developed reagents were highly specific to their biomarkers and were rarely affected by the common proteins present in real blood samples, which we attribute mainly to the high specificity of the chosen antibodies and the well-developed nanoparticle coating procedure. Clinical sample detection: To estimate the performance of SPDS, 50 clinical samples, whose concentrations of cTnI, CK-MB, and Myo had been measured by CLIA, were tested by SPDS. The results of the comparison are shown in Figure 4A, C, and E. The linear slopes of the comparison for cTnI, CK-MB, and Myo were fitted as 1.049 (R2=0.9852), 0.9545 (R2=0.9852), and 0.998 (R2=0.9908), demonstrating an excellent match between the two assays. To further analyze the correlation between the results of CLIA and SPDS, Bland–Altman analysis was performed.33 As shown in Figure 4B, D, and F, almost all of the compared samples fell within the 95% CI range, suggesting a good correlation between the developed assay and CLIA. Only 4/50, 3/50, and 1/50 samples for cTnI, CK-MB, and Myo, respectively, fell outside the 95% CI range, which might be caused by biomarker proteolysis during sample storage. The clinical sample detection results demonstrated that SPDS rivaled CLIA in detection performance for the three AMI biomarkers, and it has strong potential for clinical applications that require high sensitivity and accuracy but cannot support the large automated instruments needed for CLIA. Conclusion: In summary, to satisfy the demand for a portable system with high detection sensitivity and accuracy for diagnosis under unfavorable environments and detection in the field, we developed SPDS to address the limitations of currently available approaches and devices.
The catalytic efficiency was first optimized, and the matrix effect was minimized by using a commercially available serum matrix, instead of buffer, to dilute the biomarker molecules for building the standard curves. The detection performance and system reliability of SPDS were finally verified by comparing the results for the three biomarkers in 50 clinical plasma samples measured by SPDS and by CLIA. Furthermore, despite its small device size, SPDS loses no sensitivity or accuracy and shows good potential for POCT applications such as emergency diagnosis, bacterial detection in the field, and poison detection at the military front. Compared with the results reported for PASS, SPDS showed much better user-friendliness and reliability owing to its integrated pressure measuring device, its optimized method of measuring the pressure variation value, and smartphone support for data handling. SPDS can also measure eight samples simultaneously, which could significantly decrease the error caused by the tedious, one-by-one sample measurement required by the PASS device. SPDS has shown a bright prospect for clinical application. However, we also have to admit that SPDS is still a semi-automated system for AMI diagnosis, and the manipulation required for the immunoassay is still complicated. In future work, we will convert the device into a fully automated system for POCT. In addition, we will attempt to further optimize the PtNP modification protocol to minimize the effect of modification on nanoparticle catalytic efficiency and further improve the sensitivity of SPDS. Supplementary materials: Optimization result of H2O2 concentration. The nanoparticle zeta-average diameter profiles of PtNPs before labeling (A) and after labeling for cTnI (B), CK-MB (C), and Myo (D). Abbreviations: PtNPs, platinum nanoparticles; cTnI, cardiac troponin I; CK-MB, MB isoenzyme of creatine kinase; Myo, myoglobin. Linear fitting results for cTnI (A), CK-MB (B), and Myo (C) at low concentrations. Abbreviations: cTnI, cardiac troponin I; CK-MB, MB isoenzyme of creatine kinase; Myo, myoglobin.
Background: Acute myocardial infarction (AMI), usually caused by atherosclerosis of the coronary artery, is the most severe manifestation of coronary artery disease and results in a large number of deaths annually. A new diagnostic approach with high accuracy, high reliability and short measuring time is essential for quick AMI diagnosis. Methods: 50 plasma samples from acute myocardial infarction patients were analyzed by the developed Smartphone-Assisted Pressure-Measuring-Based Diagnosis System (SPDS). The concentration of substrate was first optimized. The effects of antibody labeling and matrix solution on the measuring results were then evaluated, and standard curves for cTnI, CK-MB and Myo were built for clinical sample analysis. The measuring results of the 50 clinical samples were finally evaluated by comparison with the results obtained by CLIA. Results: The concentration of the substrate H2O2 was first optimized as 30% to increase the measuring signal. A commercial serum matrix was chosen as the matrix solution to dilute biomarkers for standard curve building, to minimize the matrix effect on the accuracy of clinical plasma sample measurement. The standard curves for cTnI, CK-MB and Myo were built, with measuring dynamic ranges of 0-25 ng/mL, 0-33 ng/mL and 0-250 ng/mL, and limits of detection of 0.014 ng/mL, 0.16 ng/mL and 0.85 ng/mL, respectively. The measuring results obtained by the developed system for the three biomarkers in 50 clinical plasma samples matched well with the results obtained by chemiluminescent immunoassay. Conclusions: Due to its small device size, high sensitivity and accuracy, SPDS showed a bright potential for point-of-care testing (POCT) applications.
Introduction: Annually, more than 2.4 million deaths in the US, 4 million deaths in Europe and northern Asia, and more than a third of deaths in developed nations are caused by coronary artery disease (CAD).1–4 Acute myocardial infarction (AMI), usually caused by atherosclerosis of the coronary artery, is the most severe manifestation of CAD, resulting in high mortality.5 Treatment of AMI is time-critical.6 Early medical and surgical intervention has been widely demonstrated to significantly reduce myocardial damage and mortality.7 To provide accurate medical treatment for AMI, a quick diagnosis of AMI during the door-to-balloon time is crucially required. According to current consensus, AMI is mainly defined by physical diagnostic approaches such as electrocardiogram (ECG),8–10 changes in the motion of the heart wall on imaging, and some well-evaluated cardiac biomarkers,11,12 such as cardiac troponin I/T (cTnI/cTnT),13–17 the MB isoenzyme of creatine kinase (CK-MB),18,19 and myoglobin (Myo).20 ECG involves the placement of a series of leads on a person's chest that measure electrical activity associated with the contraction of heart muscle, and it has long been used for AMI diagnosis. By measuring ST-T variation or Q-waves, ECG can accurately diagnose AMI. Imaging methods, like chest X-ray, single-photon emission computed tomography/computed tomography scans, and positron emission tomography scans, can also be used for AMI diagnosis.21 In addition to ECG and imaging approaches, some well-evaluated biomarkers are now widely used for AMI diagnosis. These biomarkers include highly specific proteins like cTnI/cTnT, as well as some less-specific biomarkers like CK-MB and Myo. Due to differences in their diagnostic window periods and to improve the accuracy of AMI diagnosis, these biomarkers are often measured simultaneously. Various detection approaches for these AMI biomarkers, such as chemiluminescence immunoassay (CLIA), ELISA, and lateral immunochromatographic assay (LICA), are currently available in most hospitals.22 Among these approaches, CLIA, assisted by fully automated devices, has shown the highest user-friendliness, sensitivity, reliability, and diagnostic efficiency for quantitative measurement. However, limited by the device size and a rigorous demand for running-environment control to ensure a stable working condition, this highly precise equipment can hardly work outside a well-developed laboratory. This results in a lack of effective and reliable diagnosis of AMI in some less developed areas, which cannot support such precise and expensive instruments, as well as in some emergency situations in the wild. As a supplement to CLIA, LICA is widely used under highly unfavorable environments owing to its simple operational process. However, most LICA products can only support qualitative rather than quantitative applications, because of the significant variation caused by the uncontrollable reaction process that occurs on the nitrocellulose membrane. Therefore, a quantitative detection approach with good portability, high sensitivity, and accuracy under an unfavorable environment is essential. In recent years, Pressure-based Bioassay (PASS) for biomarker detection has been reported.23–26 Different from traditional detection methods that are based on light, color, electrical activity, magnetic force, heat, or distance, the developed assay transforms the molecular signal into a pressure signal by enzyme- or catalyst (nanoparticle)-linked immunosorbent assay.
Furthermore, a similar system was further applied for analysis at the single-cell level,27,28 drug detection, and analysis of disease biomarkers.29 PASS has shown high sensitivity, high reliability, and good portability in the reported works, which can be attributed to a pressure sensor that is highly sensitive to pressure variations caused by the immunoassay. However, the developed measuring device is not user-friendly enough and can measure only one sample at a time. Measuring different samples with only a short delay between them may cause unpredictable errors. In addition, owing to the lack of strict control over the manipulation procedure, the detection result may be affected by the operator's experience. Therefore, PASS is still far from being applied in applications that have a rigorous requirement for reliability and controllability, like clinical diagnosis, although the technique still has bright potential. To solve the abovementioned problems, herein we developed the Smartphone-Assisted Pressure-Measuring-Based Diagnosis System (SPDS) for portable and highly sensitive diagnostic applications. SPDS is composed of a pressure measuring device and a smartphone. The size of the pressure measuring device is 115×67×50 mm3, small enough to be pocket-portable. The smartphone is connected to the pressure measuring device via Bluetooth, which helps to analyze the data and give an accurate detection and diagnosis result, with the capability of storing more than 10^5 detection results. To estimate the sensitivity and reliability of the instrument, SPDS was used to measure clinical plasma samples for AMI diagnosis by combined cTnI/CK-MB/Myo examination. The labeling process was first evaluated, and the matrix effect of biomarkers in different matrix solutions was compared. To minimize the matrix effect on detection accuracy, a commercially available serum matrix was used to replace low-biomarker-concentration plasma for building the standard curve for each biomarker. Finally, 50 clinical plasma samples were tested and compared with the measuring results obtained by CLIA (Abbott Architect i2000SR). The results showed that the concentration values of the three biomarkers in the 50 samples measured by SPDS matched well with those measured by CLIA. SPDS showed sensitivity and reliability comparable to those of CLIA, but with a much smaller device size and smartphone-assisted data analysis, allowing applications under highly unfavorable circumstances, including hospitals in less developed areas and emergency situations in the wild. Moreover, by helping to provide early medical treatment and surgical intervention for AMI patients, SPDS shows a bright prospect for AMI diagnosis during ambulance transport, much earlier than the traditional door-to-balloon time, which has the potential to significantly decrease the damage caused by AMI. Conclusion: In summary, to satisfy the demand for a portable system with high detection sensitivity and accuracy for diagnosis under unfavorable environments and detection in the field, we developed SPDS to address the limitations of currently available approaches and devices. The catalytic efficiency was first optimized, and the matrix effect was minimized by using a commercially available serum matrix, instead of buffer, to dilute the biomarker molecules for building the standard curves.
The detection performance and system reliability of SPDS were finally verified by comparing the results for the three biomarkers in 50 clinical plasma samples measured by SPDS and by CLIA. Furthermore, despite its small device size, SPDS loses no sensitivity or accuracy and shows good potential for POCT applications such as emergency diagnosis, bacterial detection in the field, and poison detection at the military front. Compared with the results reported for PASS, SPDS showed much better user-friendliness and reliability owing to its integrated pressure measuring device, its optimized method of measuring the pressure variation value, and smartphone support for data handling. SPDS can also measure eight samples simultaneously, which could significantly decrease the error caused by the tedious, one-by-one sample measurement required by the PASS device. SPDS has shown a bright prospect in clinical application. However, we also have to admit that SPDS is still a semi-automated system for AMI diagnosis, and the manipulation required for the immunoassay is still complicated. In future work, we will convert the device into a fully automated system for POCT. In addition, we will attempt to further optimize the PtNP modification protocol to minimize the effect of modification on nanoparticle catalytic efficiency and further improve the sensitivity of SPDS.
Background: Acute myocardial infarction (AMI), usually caused by atherosclerosis of the coronary artery, is the most severe manifestation of coronary artery disease and results in a large number of deaths annually. A new diagnostic approach with high accuracy, high reliability and short measuring time is essential for quick AMI diagnosis. Methods: 50 plasma samples from acute myocardial infarction patients were analyzed by the developed Smartphone-Assisted Pressure-Measuring-Based Diagnosis System (SPDS). The concentration of substrate was first optimized. The effects of antibody labeling and matrix solution on the measuring results were then evaluated, and standard curves for cTnI, CK-MB and Myo were built for clinical sample analysis. The measuring results of the 50 clinical samples were finally evaluated by comparison with the results obtained by CLIA. Results: The concentration of the substrate H2O2 was first optimized as 30% to increase the measuring signal. A commercial serum matrix was chosen as the matrix solution to dilute biomarkers for standard curve building, to minimize the matrix effect on the accuracy of clinical plasma sample measurement. The standard curves for cTnI, CK-MB and Myo were built, with measuring dynamic ranges of 0-25 ng/mL, 0-33 ng/mL and 0-250 ng/mL, and limits of detection of 0.014 ng/mL, 0.16 ng/mL and 0.85 ng/mL, respectively. The measuring results obtained by the developed system for the three biomarkers in 50 clinical plasma samples matched well with the results obtained by chemiluminescent immunoassay. Conclusions: Due to its small device size, high sensitivity and accuracy, SPDS showed a bright potential for point-of-care testing (POCT) applications.
15,639
321
[ 82, 107, 241, 216, 163, 363, 238, 225, 232, 58, 166, 66, 81, 469, 100, 262, 442, 498, 119, 230, 325 ]
27
[ "pressure", "ml", "antibody", "spds", "μl", "ctni", "matrix", "concentration", "beads", "mb" ]
[ "myocardial damage mortality", "evaluated cardiac", "cad acute myocardial", "cardiac biomarkers 11", "myocardial infarction ami" ]
null
null
[CONTENT] acute myocardial infarction | diagnosis | pressure sensor | smartphone | Pt nanoparticle [SUMMARY]
null
null
[CONTENT] acute myocardial infarction | diagnosis | pressure sensor | smartphone | Pt nanoparticle [SUMMARY]
[CONTENT] acute myocardial infarction | diagnosis | pressure sensor | smartphone | Pt nanoparticle [SUMMARY]
[CONTENT] acute myocardial infarction | diagnosis | pressure sensor | smartphone | Pt nanoparticle [SUMMARY]
[CONTENT] Antibodies | Biomarkers | Blood Pressure | Catalysis | Female | Humans | Hydrogen Peroxide | Male | Middle Aged | Myocardial Infarction | Nanoparticles | Platinum | Reference Standards | Reproducibility of Results | Sensitivity and Specificity | Smartphone | Static Electricity [SUMMARY]
null
null
[CONTENT] Antibodies | Biomarkers | Blood Pressure | Catalysis | Female | Humans | Hydrogen Peroxide | Male | Middle Aged | Myocardial Infarction | Nanoparticles | Platinum | Reference Standards | Reproducibility of Results | Sensitivity and Specificity | Smartphone | Static Electricity [SUMMARY]
[CONTENT] Antibodies | Biomarkers | Blood Pressure | Catalysis | Female | Humans | Hydrogen Peroxide | Male | Middle Aged | Myocardial Infarction | Nanoparticles | Platinum | Reference Standards | Reproducibility of Results | Sensitivity and Specificity | Smartphone | Static Electricity [SUMMARY]
[CONTENT] Antibodies | Biomarkers | Blood Pressure | Catalysis | Female | Humans | Hydrogen Peroxide | Male | Middle Aged | Myocardial Infarction | Nanoparticles | Platinum | Reference Standards | Reproducibility of Results | Sensitivity and Specificity | Smartphone | Static Electricity [SUMMARY]
[CONTENT] myocardial damage mortality | evaluated cardiac | cad acute myocardial | cardiac biomarkers 11 | myocardial infarction ami [SUMMARY]
null
null
[CONTENT] myocardial damage mortality | evaluated cardiac | cad acute myocardial | cardiac biomarkers 11 | myocardial infarction ami [SUMMARY]
[CONTENT] myocardial damage mortality | evaluated cardiac | cad acute myocardial | cardiac biomarkers 11 | myocardial infarction ami [SUMMARY]
[CONTENT] myocardial damage mortality | evaluated cardiac | cad acute myocardial | cardiac biomarkers 11 | myocardial infarction ami [SUMMARY]
[CONTENT] pressure | ml | antibody | spds | μl | ctni | matrix | concentration | beads | mb [SUMMARY]
null
null
[CONTENT] pressure | ml | antibody | spds | μl | ctni | matrix | concentration | beads | mb [SUMMARY]
[CONTENT] pressure | ml | antibody | spds | μl | ctni | matrix | concentration | beads | mb [SUMMARY]
[CONTENT] pressure | ml | antibody | spds | μl | ctni | matrix | concentration | beads | mb [SUMMARY]
[CONTENT] ami | diagnosis | ami diagnosis | highly | reliability | detection | time | biomarkers | ecg | developed [SUMMARY]
null
null
[CONTENT] spds | system | manipulation | detection | diagnosis | automated system | modification | sensitivity | device | measuring [SUMMARY]
[CONTENT] ml | μl | pressure | antibody | spds | times | beads | ctni | mb | 50 [SUMMARY]
[CONTENT] ml | μl | pressure | antibody | spds | times | beads | ctni | mb | 50 [SUMMARY]
[CONTENT] AMI | annually ||| AMI [SUMMARY]
null
null
[CONTENT] SPDS [SUMMARY]
[CONTENT] AMI | annually ||| AMI ||| 50 | Smartphone-Assisted Pressure-Measuring-Based Diagnosis System ||| ||| ||| CK-MB | Myo ||| 50 ||| ||| 30% ||| ||| CK-MB | Myo | 0-25 | 0 | 0-250 | 0.014 ng/mL | 0.16 ng/mL | 0.85 ng ||| 50 | three ||| SPDS [SUMMARY]
[CONTENT] AMI | annually ||| AMI ||| 50 | Smartphone-Assisted Pressure-Measuring-Based Diagnosis System ||| ||| ||| CK-MB | Myo ||| 50 ||| ||| 30% ||| ||| CK-MB | Myo | 0-25 | 0 | 0-250 | 0.014 ng/mL | 0.16 ng/mL | 0.85 ng ||| 50 | three ||| SPDS [SUMMARY]
Severe Acute Respiratory Illness Deaths in Sub-Saharan Africa and the Role of Influenza: A Case Series From 8 Countries.
25712970
Data on causes of death due to respiratory illness in Africa are limited.
BACKGROUND
From January to April 2013, 28 African countries were invited to participate in a review of severe acute respiratory illness (SARI)-associated deaths identified from influenza surveillance during 2009-2012.
METHODS
Twenty-three countries (82%) responded, 11 (48%) collect mortality data, and 8 provided data. Data were collected from 37 714 SARI cases, and 3091 (8.2%; range by country, 5.1%-25.9%) tested positive for influenza virus. There were 1073 deaths (2.8%; range by country, 0.1%-5.3%) reported, among which influenza virus was detected in 57 (5.3%). Case-fatality proportion (CFP) was higher among countries with systematic death reporting than among those with sporadic reporting. The influenza-associated CFP was 1.8% (57 of 3091), compared with 2.9% (1016 of 34 623) for influenza virus-negative cases (P < .001). Among 834 deaths (77.7%) tested for other respiratory pathogens, rhinovirus (107 [12.8%]), adenovirus (64 [6.0%]), respiratory syncytial virus (60 [5.6%]), and Streptococcus pneumoniae (57 [5.3%]) were most commonly identified. Among 1073 deaths, 402 (37.5%) involved people aged 0-4 years, 462 (43.1%) involved people aged 5-49 years, and 209 (19.5%) involved people aged ≥50 years.
RESULTS
Few African countries systematically collect data on outcomes of people hospitalized with respiratory illness. Stronger surveillance for deaths due to respiratory illness may identify risk groups for targeted vaccine use and other prevention strategies.
CONCLUSIONS
[ "Adolescent", "Adult", "Africa South of the Sahara", "Age Distribution", "Aged", "Bacterial Infections", "Child", "Child, Preschool", "Humans", "Infant", "Infant, Newborn", "Influenza, Human", "Middle Aged", "Population Surveillance", "Respiratory Tract Infections", "Young Adult" ]
4826902
null
null
METHODS
Between January and April 2013, we sent a standard data template to representatives of 28 (59%) of 46 World Health Organization (WHO) African Region Member States. These included all 24 countries that reported data to the WHO's FluNet in 2012 and countries that we were aware were conducting systematic hospital-based SARI surveillance (Supplementary Table 1). We requested information on all SARI cases and SARI-associated deaths detected by surveillance during 2009–2012. SARI case definitions used by these countries were consistent with WHO recommendations for adults and children <5 years of age [16]; however, some countries used expanded case definitions to include 14 days of symptom duration and/or physician-diagnosed lower respiratory tract infection. WHO-recommended case definitions were updated in 2011, and many countries have recently updated SARI case definitions to reflect these recommendations. The WHO's current SARI case definition for all age groups is an acute respiratory infection with a subjective or measured temperature of ≥38°C and cough with onset in the last 10 days requiring hospitalization [17]. Variables collected in SARI surveillance included the number of subjects enrolled, the number of specimens tested for influenza virus by real-time reverse transcription–polymerase chain reaction (rRT-PCR), influenza virus rRT-PCR test results, age, and outcome of hospitalization (discharge or death). We collected additional variables for SARI-associated deaths, including sex, pregnancy status, underlying medical conditions, and results of testing for other respiratory infections. Nasopharyngeal and oropharyngeal swab specimens or nasal aspirates were collected from enrolled SARI cases to test for influenza virus and other respiratory viruses. Except for Malawi, which conducts internal quality assurance, all countries providing data participate in the WHO's External Quality Assurance Project for influenza diagnosis. Data on Streptococcus pneumoniae from Kenya, South Africa, and Madagascar were from blood culture or blood lytA PCR and are indicative of invasive disease [18, 19]. HIV infection, tuberculosis, and other comorbid conditions were reported according to local diagnostic and surveillance practices and were not independently verified. Data were analyzed using SAS 9.3 (SAS Institute, Cary, North Carolina). The Wilcoxon rank sum test was used to assess statistical significance of differences in nonparametric variables. The Pearson χ2 test was used to test for associations between categorical variables and influenza virus infection among SARI-associated deaths.
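As an illustration only (the study analysis was done in SAS 9.3; this is not the authors' code), the Pearson chi-square comparison of case-fatality proportions reported in the Results (57 deaths among 3091 influenza-positive versus 1016 deaths among 34 623 influenza-negative SARI cases) could be reproduced along these lines in Python:

```python
# Illustrative sketch of the Pearson chi-square test named above, using the
# aggregate counts reported in the Results section.
from scipy.stats import chi2_contingency

#          died    survived
table = [[57,    3091 - 57],       # influenza virus-positive SARI cases
         [1016,  34623 - 1016]]    # influenza virus-negative SARI cases

# correction=False gives the plain Pearson chi-square (no Yates continuity correction)
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"CFP positive={57/3091:.1%}, CFP negative={1016/34623:.1%}, chi2={chi2:.1f}, p={p:.2g}")
```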
RESULTS
Representatives from 23 of 28 countries (82%) provided information about their country's SARI surveillance. Of these 23 countries, 11 (48%) did not collect outcome data on persons hospitalized for SARI, 9 (38%) collected SARI-associated mortality data systematically either prospectively or retrospectively, 2 (8%) received reports of SARI-associated deaths from sentinel sites but not systematically, and 1 (4%) had <1 year of surveillance data. Of the 11 countries that collected mortality data either systematically or via sentinel site reports, the following 8 completed the standard template for this analysis: DRC, Kenya, Madagascar, Malawi, Rwanda, South Africa, Tanzania, and Uganda. Of these, DRC, Tanzania, and Uganda receive reports of SARI-associated deaths from sentinel sites but did not collect these data systematically; Rwanda conducted a retrospective review of medical charts and registers to identify SARI-associated deaths; and the remaining countries (Kenya, Madagascar, Malawi, and South Africa) collect SARI outcome data (including mortality) prospectively from the time of enrollment in SARI surveillance to the time of hospital discharge or death. Community deaths following discharge were not reported by any country. SARI surveillance was primarily conducted in pediatric and/or adult medical wards at surveillance hospitals, and surveillance practices varied by country in terms of the number of cases enrolled per day or the days of enrollment per week. During 2009–2012, the 8 countries that provided data on SARI-associated deaths enrolled 40 355 subjects, of whom 1222 (3.0%) died during hospitalization. Complete data on influenza virus testing and age were available for 37 714 SARI cases (93.5%). Among these cases, 1073 deaths were reported (SARI CFP, 2.8%), ranging widely by country, from 0.1% in DRC to 5.5% in Madagascar. Among the 37 714 SARI cases tested for influenza virus, 3091 (8.2%) tested positive for influenza virus, ranging from 5.1% in Tanzania to 25.9% in Madagascar (Table 1). All-Cause and Influenza-Associated Severe Acute Respiratory Illness (SARI) Cases and Deaths During 2009–2012, by Country and Age Group Abbreviations: CFP, case-fatality proportion; DRC, Democratic Republic of the Congo. Of 1073 SARI cases who died during hospitalization and were tested for influenza virus, 57 (5.3%) tested positive, for an overall influenza-associated CFP of 1.8% (57 of 3091), compared with 2.9% (1016 of 34 623) among influenza virus–negative SARI cases (P < .001). The influenza-associated CFPs ranged from 0 in Malawi, Tanzania, and Uganda to 3.6% in Madagascar. Countries with systematic death reporting had higher all-cause and influenza-associated CFPs than countries with sporadic reporting. Of the 1073 reported deaths, 402 (37.5%) were among children aged 0–4 years, 462 (43.1%) were among children and adults aged 5–49 years, and 209 (19.5%) were among adults aged ≥50 years (Table 1). Among the 57 influenza-associated deaths, 19 (33.3%) were among children aged 0–4 years, 20 (35.1%) were among adults aged 18–49 years, and the remaining 18 (31.5%) were among adults aged ≥50 years (Table 1). The median age among influenza-associated deaths was 32 years (interquartile range [IQR], 1–56 years), compared with 28 years (IQR, 1–45 years) among SARI-associated deaths without influenza virus infection (P = .05). The influenza-associated CFP varied by age group and was highest among adults aged ≥65 years (8.2%) and lowest among children aged 5–17 years (0%; P < .001). 
Persons aged ≥65 years accounted for just 2.5% of SARI cases but 7.2% of SARI-associated deaths and 14.0% of influenza-associated deaths due to SARI. Among 1073 SARI-associated deaths, data on underlying medical conditions were incomplete (Table 2). Despite the limited availability of data, 532 deaths (49.6%) were reported to have an underlying medical condition (including HIV infection and M. tuberculosis infection). Among 1073 deaths, HIV status was reported for 580 (54.1%), and, of these, 419 (72.2%) were infected with HIV. Thirty-three of 57 influenza virus–positive deaths (57.9%) had HIV status information, and, of these, 23 (69.7%) were HIV infected. A total of 547 of 1016 influenza virus–negative deaths (53.8%) had HIV status reported, and 396 (72.4%) were HIV infected (P = .74). Among the 661 deaths (61.6%) with data, tuberculosis was reported for 103 (15.6%), and there was no difference in the prevalence of tuberculosis among influenza virus–positive and influenza virus–negative deaths (P = .34). Information on asthma was available for 1056 SARI-associated deaths (98.4%) but was rarely identified in both influenza virus–positive (3.6%) and influenza virus–negative deaths (1.6%; P = .24). Information on other medical conditions was reported for 720 deaths (67.1%). Among children and adults with a reported underlying medical condition, HIV infection was most commonly reported among all ages (419 of 532 cases [78.8%]), followed by malnutrition (11 of 96 [11.5%]) and tuberculosis (8 of 96 [8.3%]) for children aged 0–4 years and tuberculosis (95 of 436 [21.8%]) and diabetes (25 of 436 [5.7%]) for children and adults aged ≥5 years. Of the 245 deaths among women of childbearing age, pregnancy status was provided for 237 (96.7%), of whom 7 (3.0%) were pregnant at the time of death. None of the 7 pregnant women with SARI identified in routine surveillance who died were infected with influenza virus. Underlying Medical Conditions Reported Among Deaths Due to Severe Acute Respiratory Illness (SARI) Detected by Surveillance During 2009–2012 in 8 African Countries, Overall and by Influenza Virus Status Abbreviation: HIV, human immunodeficiency virus. a For comparison of the prevalence of the condition among influenza virus–positive and influenza virus–negative deaths due to SARI. Among the 57 influenza-associated deaths, 9 (15.8%) were confirmed as involving influenza A(H1N1)pdm09 infection, 19 (33.3%) as involving influenza A(H3N2) infection, and 20 (35.1%) as involving influenza B infection; nonsubtyped influenza A infection was detected in 9 (15.8%). There were no statistically significant differences in the distribution of deaths by influenza virus subtype by age group (P = .72). Other respiratory pathogens were identified by culture or PCR testing only in Kenya, Madagascar, and South Africa. Specimens from 834 deaths in these countries were tested for the presence of at least 1 other respiratory pathogen: 247 deaths (30%) involved children aged 0–4 years, and 587 (70%) involved persons aged ≥5 years. Among the 247 deaths in children aged 0–4 years tested for other pathogens, 145 (59%) tested positive for a respiratory pathogen, with influenza virus detected in 14 (6%) and ≥1 respiratory pathogen other than influenza virus detected in 131 (53%). Among those aged 0–4 years, the most commonly identified respiratory pathogens other than influenza virus were rhinovirus (35 cases [14.2%]), respiratory syncytial virus (33 [13.4%]), and adenovirus (32 [13.0%]; Table 3). 
Among deaths involving children aged 0–4 years, those testing negative for influenza virus were more likely to have another respiratory virus identified than those who tested positive for influenza virus (P = .014). Respiratory Pathogens Observed Among 834 Deaths Due to Severe Acute Respiratory Illness Detected by Surveillance During 2009–2012 in 3 African Countries, by Age and Influenza Virus Status Abbreviation: S. pneumoniae, Streptococcus pneumoniae. a Bocavirus (n = 1), coronavirus (n = 2), enterovirus (n = 4), human metapneumovirus (n = 8), and parainfluenza viruses (1, 2, and not subtyped; n = 8). b P = .014, by the Fisher exact test. c P = .04, by the Fisher exact test. d Bocavirus (n = 1), enterovirus (n = 7), human metapneumovirus (n = 5), and parainfluenza viruses (1, 2, and not subtyped; n = 6). Among the 587 deaths in persons aged ≥5 years who were tested, 243 (41%) tested positive for a respiratory pathogen, with influenza virus detected in 37 (6%) and ≥1 respiratory pathogen other than influenza virus detected in 206 (35%). The most commonly identified respiratory pathogens other than influenza virus in this age group were rhinovirus (72 cases [12.3%]), S. pneumoniae (48 [8.2%]), and adenovirus (32 [5.5%]; Table 3).
null
null
[]
[]
[]
[ "METHODS", "RESULTS", "DISCUSSION", "Supplementary Material" ]
[ "Between January to April 2013, we sent a standard data template to representatives of 28 (59%) of 46 World Health Organization (WHO) African Region Member States. These included all 24 countries who reported data to the WHO's FluNet in 2012 and countries that we were aware were conducting systematic hospital-based SARI surveillance (Supplementary Table 1). We requested information on all SARI cases and SARI-associated deaths detected by surveillance during 2009–2012. SARI case definitions used by these countries were consistent with WHO recommendations for adults and children <5 years of age [16]; however, some countries used expanded case definitions to include 14 days of symptom duration and/or physician-diagnosed lower respiratory tract infection. WHO-recommended case definitions were updated in 2011, and many countries have recently updated SARI case definitions to reflect these recommendations. The WHO's current SARI case definition for all age groups is an acute respiratory infection with a subjective or measured temperature of ≥38°C and cough with onset in the last 10 days requiring hospitalization [17]. Variables collected in SARI surveillance included the number of subjects enrolled, the number of specimens tested for influenza virus by real-time reverse transcription–polymerase chain reaction (rRT-PCR), influenza virus rRT-PCR test results, age, and outcome of hospitalization (discharge or death). We collected additional variables for SARI-associated deaths, including sex, pregnancy status, underlying medical conditions, and results of testing for other respiratory infections. Nasopharyngeal and oropharyngeal swab specimens or nasal aspirates were collected from enrolled SARI cases to test for influenza virus and other respiratory viruses. Except for Malawi, which conducts internal quality assurance, all countries providing data participate in the WHO's External Quality Assurance Project for influenza diagnosis. Data on Streptococcus pneumoniae from Kenya, South Africa, and Madagascar were from blood culture or blood lytA PCR and are indicative of invasive disease [18, 19]. HIV infection, tuberculosis, and other comorbid conditions were reported according to local diagnostic and surveillance practices and were not independently verified.\nData were analyzed using SAS 9.3 (SAS Institute, Cary, North Carolina). The Wilcoxon rank sum test was used to assess statistical significance of differences in nonparametric variables. The Pearson χ2 test was used to test for associations between categorical variables and influenza virus infection among SARI-associated deaths.", "Representatives from 23 of 28 countries (82%) provided information about their country's SARI surveillance. Of these 23 countries, 11 (48%) did not collect outcome data on persons hospitalized for SARI, 9 (38%) collected SARI-associated mortality data systematically either prospectively or retrospectively, 2 (8%) received reports of SARI-associated deaths from sentinel sites but not systematically, and 1 (4%) had <1 year of surveillance data. Of the 11 countries that collected mortality data either systematically or via sentinel site reports, the following 8 completed the standard template for this analysis: DRC, Kenya, Madagascar, Malawi, Rwanda, South Africa, Tanzania, and Uganda. 
Of these, DRC, Tanzania, and Uganda receive reports of SARI-associated deaths from sentinel sites but did not collect these data systematically; Rwanda conducted a retrospective review of medical charts and registers to identify SARI-associated deaths; and the remaining countries (Kenya, Madagascar, Malawi, and South Africa) collect SARI outcome data (including mortality) prospectively from the time of enrollment in SARI surveillance to the time of hospital discharge or death. Community deaths following discharge were not reported by any country. SARI surveillance was primarily conducted in pediatric and/or adult medical wards at surveillance hospitals, and surveillance practices varied by country in terms of the number of cases enrolled per day or the days of enrollment per week.\nDuring 2009–2012, the 8 countries that provided data on SARI-associated deaths enrolled 40 355 subjects, of whom 1222 (3.0%) died during hospitalization. Complete data on influenza virus testing and age were available for 37 714 SARI cases (93.5%). Among these cases, 1073 deaths were reported (SARI CFP, 2.8%), ranging widely by country, from 0.1% in DRC to 5.5% in Madagascar. Among the 37 714 SARI cases tested for influenza virus, 3091 (8.2%) tested positive for influenza virus, ranging from 5.1% in Tanzania to 25.9% in Madagascar (Table 1).\n\nAll-Cause and Influenza-Associated Severe Acute Respiratory Illness (SARI) Cases and Deaths During 2009–2012, by Country and Age Group\nAbbreviations: CFP, case-fatality proportion; DRC, Democratic Republic of the Congo.\nOf 1073 SARI cases who died during hospitalization and were tested for influenza virus, 57 (5.3%) tested positive, for an overall influenza-associated CFP of 1.8% (57 of 3091), compared with 2.9% (1016 of 34 623) among influenza virus–negative SARI cases (P < .001). The influenza-associated CFPs ranged from 0 in Malawi, Tanzania, and Uganda to 3.6% in Madagascar. Countries with systematic death reporting had higher all-cause and influenza-associated CFPs than countries with sporadic reporting. Of the 1073 reported deaths, 402 (37.5%) were among children aged 0–4 years, 462 (43.1%) were among children and adults aged 5–49 years, and 209 (19.5%) were among adults aged ≥50 years (Table 1).\nAmong the 57 influenza-associated deaths, 19 (33.3%) were among children aged 0–4 years, 20 (35.1%) were among adults aged 18–49 years, and the remaining 18 (31.5%) were among adults aged ≥50 years (Table 1). The median age among influenza-associated deaths was 32 years (interquartile range [IQR], 1–56 years), compared with 28 years (IQR, 1–45 years) among SARI-associated deaths without influenza virus infection (P = .05). The influenza-associated CFP varied by age group and was highest among adults aged ≥65 years (8.2%) and lowest among children aged 5–17 years (0%; P < .001). Persons aged ≥65 years accounted for just 2.5% of SARI cases but 7.2% of SARI-associated deaths and 14.0% of influenza-associated deaths due to SARI.\nAmong 1073 SARI-associated deaths, data on underlying medical conditions were incomplete (Table 2). Despite the limited availability of data, 532 deaths (49.6%) were reported to have an underlying medical condition (including HIV infection and M. tuberculosis infection). Among 1073 deaths, HIV status was reported for 580 (54.1%), and, of these, 419 (72.2%) were infected with HIV. Thirty-three of 57 influenza virus–positive deaths (57.9%) had HIV status information, and, of these, 23 (69.7%) were HIV infected. 
A total of 547 of 1016 influenza virus–negative deaths (53.8%) had HIV status reported, and 396 (72.4%) were HIV infected (P = .74). Among the 661 deaths (61.6%) with data, tuberculosis was reported for 103 (15.6%), and there was no difference in the prevalence of tuberculosis among influenza virus–positive and influenza virus–negative deaths (P = .34). Information on asthma was available for 1056 SARI-associated deaths (98.4%) but was rarely identified in both influenza virus–positive (3.6%) and influenza virus–negative deaths (1.6%; P = .24). Information on other medical conditions was reported for 720 deaths (67.1%). Among children and adults with a reported underlying medical condition, HIV infection was most commonly reported among all ages (419 of 532 cases [78.8%]), followed by malnutrition (11 of 96 [11.5%]) and tuberculosis (8 of 96 [8.3%]) for children aged 0–4 years and tuberculosis (95 of 436 [21.8%]) and diabetes (25 of 436 [5.7%]) for children and adults aged ≥5 years. Of the 245 deaths among women of childbearing age, pregnancy status was provided for 237 (96.7%), of whom 7 (3.0%) were pregnant at the time of death. None of the 7 pregnant women with SARI identified in routine surveillance who died were infected with influenza virus.\n\nUnderlying Medical Conditions Reported Among Deaths Due to Severe Acute Respiratory Illness (SARI) Detected by Surveillance During 2009–2012 in 8 African Countries, Overall and by Influenza Virus Status\nAbbreviation: HIV, human immunodeficiency virus.\n\na For comparison of the prevalence of the condition among influenza virus–positive and influenza virus–negative deaths due to SARI.\nAmong the 57 influenza-associated deaths, 9 (15.8%) were confirmed as involving influenza A(H1N1)pdm09 infection, 19 (33.3%) as involving influenza A(H3N2) infection, and 20 (35.1%) as involving influenza B infection; nonsubtyped influenza A infection was detected in 9 (15.8%). There were no statistically significant differences in the distribution of deaths by influenza virus subtype by age group (P = .72).\nOther respiratory pathogens were identified by culture or PCR testing only in Kenya, Madagascar, and South Africa. Specimens from 834 deaths in these countries were tested for the presence of at least 1 other respiratory pathogen: 247 deaths (30%) involved children aged 0–4 years, and 587 (70%) involved persons aged ≥5 years. Among the 247 deaths in children aged 0–4 years tested for other pathogens, 145 (59%) tested positive for a respiratory pathogen, with influenza virus detected in 14 (6%) and ≥1 respiratory pathogen other than influenza virus detected in 131 (53%). Among those aged 0–4 years, the most commonly identified respiratory pathogens other than influenza virus were rhinovirus (35 cases [14.2%]), respiratory syncytial virus (33 [13.4%]), and adenovirus (32 [13.0%]; Table 3). Among deaths involving children aged 0–4 years, those testing negative for influenza virus were more likely to have another respiratory virus identified than those who tested positive for influenza virus (P = .014).\n\nRespiratory Pathogens Observed Among 834 Deaths Due to Severe Acute Respiratory Illness Detected by Surveillance During 2009–2012 in 3 African Countries, by Age and Influenza Virus Status\nAbbreviation: S. 
pneumoniae, Streptococcus pneumoniae.\n\na Bocavirus (n = 1), coronavirus (n = 2), enterovirus (n = 4), human metapneumovirus (n = 8), and parainfluenza viruses (1, 2, and not subtyped; n = 8).\n\nb\nP = .014, by the Fisher exact test.\n\nc\nP = .04, by the Fisher exact test.\n\nd Bocavirus (n = 1), enterovirus (n = 7), human metapneumovirus (n = 5), and parainfluenza viruses (1, 2, and not subtyped; n = 6).\nAmong the 587 deaths in persons aged ≥5 years who were tested, 243 (41%) tested positive for a respiratory pathogen, with influenza virus detected in 37 (6%) and ≥1 respiratory pathogen other than influenza virus detected in 206 (35%). The most commonly identified respiratory pathogens other than influenza virus in this age group were rhinovirus (72 cases [12.3%]), S. pneumoniae (48 [8.2%]), and adenovirus (32 [5.5%]; Table 3).", "Hospital-based surveillance for severe respiratory disease in Africa has expanded dramatically in the last decade, yet data on etiologies of mortality are very sparse. Our aim was to assess the capacity of SARI surveillance to collect mortality data and provide initial insights of the characteristics of SARI and influenza-confirmed deaths, rather than to accurately document the burden of influenza-associated mortality. We found that few countries that conduct influenza surveillance are systematically collecting data on outcomes of hospitalization, and available information on deaths is sparse and incomplete. Likewise, data on the presence of comorbidities were incomplete. We also found that pregnant women were underrepresented in SARI data from all countries. We found a wide range in CFPs for SARI and influenza across countries, which may reflect differences in case definitions, criteria for hospitalization, quality of hospital care, or quality of surveillance, including proper handling of biological specimens and ascertainment of deaths. Countries where deaths are reported from sites sporadically had lower CFPs than those that reported deaths systematically, suggesting that sporadic reporting may fail to capture a large number of SARI-associated deaths. Reports of hospitalizations and deaths from low-income and middle-income settings during the 2009 A(H1N1) pandemic include La Reunion (255 hospitalizations and 6 deaths; CFP, 2.4%), Argentina (11 086 hospitalizations and 580 deaths; CFP, 5.2%), and Chile (1585 hospitalizations and 134 deaths; CFP, 8.5%) [20]. The majority of published data on SARI CFPs from African countries is focused on children. A meta-analysis of 11 African studies in children aged 0–59 months found an average in-hospital CFP of 3.9% (95% CI, 2.7%–5.5%) among children hospitalized with acute lower respiratory tract infections [3]. Comparison with CFP estimates among respiratory disease–associated hospitalizations from the region suggests incomplete death reporting in this analysis.\nDespite the incomplete data, we were able to draw some important conclusions. First, in-hospital respiratory mortality and influenza-associated mortality occur predominantly in very young individuals and those 18–49 years. Second, a viral pathogen could be identified in 38% of deaths tested for influenza virus and other respiratory pathogens. In particular, our data highlight the role that respiratory syncytial virus and adenovirus may play in respiratory mortality in the region. 
Third, HIV infection and tuberculosis are important factors in severe respiratory disease in Africa.\nThe low number of deaths among pregnant women is unexpected, given their increased risk of influenza-associated death during the pandemic [12, 21] and increased risk of severe disease from seasonal influenza [22], and it may reflect inadequate surveillance in antenatal clinics and maternity wards. Owing to high fertility rates, >9% of African women of childbearing age are estimated to be pregnant at any given time [23], suggesting that current surveillance may not adequately identify severe disease in pregnant women.\nFrom the data collected here nearly two thirds of enrolled SARI cases were children 0–4 years of age, while adults aged ≥65 years accounted for just 2.4% of SARI cases. Some but not all of this difference may be explained by population demographic characteristics in Africa; >40% of the population of Africa in 2013 is <15 years of age, and just 3.4% are aged ≥65 years [24]. There may be differences in access to care among elderly persons in Africa that result in fewer elderly individuals with respiratory illness being hospitalized. Our analysis demonstrates that elderly persons who are hospitalized have a greater risk of death from respiratory disease, especially influenza, than young children. A systematic review of the case-fatality risk from the 2009 A(H1N1) pandemic found significant variation in risk by age group, ranging from approximately 1 death per 100 000 symptomatic cases in children aged 0–19 years to approximately 1000 deaths per 100 000 symptomatic cases in persons aged ≥65 years [25]. These estimates support our finding of an increased CFP among hospitalized elderly individuals, although only 15.8% of the influenza-associated deaths reported here involved influenza A(H1N1) infections (Table 1). Despite this, children aged 0–4 years accounted for over one third of all SARI-associated deaths and one-third of influenza-associated deaths due to SARI in this analysis, a finding consistent with prior studies in low-income settings [3, 26]. Likewise, adults aged 18–49 years accounted for over one third of all SARI-associated deaths and one third of influenza-associated deaths due to SARI, which may be explained by the high prevalence of HIV in this age group.\nIn this analysis, HIV status was not reported for nearly half of enrolled SARI-associated deaths. Despite this limitation, these data clearly demonstrate that HIV infection is common among SARI-associated hospitalizations and deaths with and without influenza virus coinfection; however, we are unable to demonstrate a statistical association of HIV infection with death among hospitalized SARI cases. Our coauthors from South Africa have found that HIV-infected patients with influenza were 4 times more likely (95% CI, 1–12) to die than HIV-uninfected patients with influenza [2]. Data on tuberculosis were also missing from many deaths; however, when reported, tuberculosis was common among SARI-associated deaths but was not found more commonly among SARI-associated deaths with influenza.\nThere were several important limitations of this analysis. Because almost 90% of all deaths were reported from Kenya and South Africa, our findings may not be representative. In South Africa, a country with systematic reporting and a relatively high CFP in this analysis, retrospective review of respiratory deaths in sentinel hospitals found that as many as 1 in 3 respiratory deaths were not enrolled in SARI surveillance [27]. 
Some countries in our study reported very few or no SARI-associated deaths, which limits confidence in conclusions drawn from these data. Also, substantial differences existed in how SARI, HIV status, M. tuberculosis coinfection, and other underlying medical conditions were diagnosed or defined for surveillance purposes. It is likely that very severely ill patients would not have been enrolled into surveillance, especially if informed consent was required, as was the case in several countries. This may explain why 149 of 1222 reported deaths (12%) did not have influenza virus test results. Another limitation is the ability to attribute death to any one pathogen. Although influenza viruses are less commonly found in asymptomatic persons, other viruses, such as rhinoviruses and adenoviruses, are frequently isolated from nasopharyngeal or oropharyngeal swab specimens in asymptomatic persons, especially children [28–31]. The relative contribution of these pathogens to mortality may also be affected. Nasopharyngeal or oropharyngeal swab specimens may not be the ideal specimens for detecting some respiratory pathogens [32]. Also, the methods of testing for some non–influenza virus pathogens differed by site. For example, South Africa diagnosed S. pneumoniae infection on the basis of lytA PCR of blood samples, while Kenya used blood cultures to diagnose invasive S. pneumoniae infection. Our study is limited in its ability to assess the true role of pathogens other than influenza virus since data on the number of cases tested for each pathogen were not available.\nSentinel surveillance sites are often limited in their population catchment and may not capture an adequate number of respiratory disease– and influenza-associated deaths. Because of the logistics of specimen collection, transport, and analysis, many countries are only able to support a small number of sites, which may not be representative of the population. Limited access to care in many African settings may further reduce the number of persons hospitalized with respiratory infections, including influenza. Many cases of SARI may be due to secondary complications such as bacterial pneumonia after initial influenza virus infection, when influenza virus shedding has decreased or ceased. Moreover, influenza-associated deaths with a nonrespiratory presentation, including heart attack and stroke, are unlikely to be tested for influenza virus even if hospitalized. Many complications of influenza, especially postinfluenza pneumonia and death, happen >1 week after initial infection [33, 34] and therefore may occur at home after discharge. Vital registration data in South Africa indicate that as many as 50% of respiratory disease–associated deaths occur outside of hospitals [35].\nBecause of these limitations many countries have used other methods to estimate influenza-associated mortality. Some sites have conducted thorough community-based mortality reviews in which persons meeting the case definition of influenza-associated death are counted in a defined population [36, 37]. Excess mortality modeling is commonly used to assess deaths due to influenza in populations with accurate vital statistics or International Classification of Diseases–coded hospitalization data, with or without adjustment for influenza virus circulation [6, 38, 39]; however, few countries in Africa have complete vital registration data, and many countries do not experience clear seasonal peaks in influenza transmission, which may limit the utility of such methods. 
Many countries in Africa have health and demographic surveillance sites where all births and deaths are recorded and where deaths are assessed by verbal autopsy. It is possible that an assessment of trends in mortality at these sites, combined with virological data on influenza virus circulation patterns, may provide more-accurate estimates of influenza-associated deaths when vital statistics are not available.\nIn conclusion, stronger surveillance for respiratory deaths may help to identify risk groups for targeted vaccine use and other prevention strategies. Among those tested, respiratory viruses other than influenza virus and S. pneumoniae were commonly identified in SARI-associated deaths of all ages; however, nasopharyngeal carriage may overestimate mortality from some of these pathogens. Surveillance in antenatal clinics and/or maternity wards should be strengthened to better capture pregnant women, given the WHO's recent decision to prioritize them for influenza vaccination [40]. Sentinel surveillance may provide some information on characteristics of influenza-associated deaths but will likely underestimate influenza-associated mortality. Alternative methods should be used to estimate influenza-associated mortality in Africa, depending on the availability of vital statistics, accurate hospitalization data, and other forms of demographic and health surveillance.", "Click here for additional data file." ]
[ "methods", "results", "discussion", "supplementary-material" ]
[ "influenza, human", "mortality", "Africa South of the Sahara" ]
METHODS: Between January to April 2013, we sent a standard data template to representatives of 28 (59%) of 46 World Health Organization (WHO) African Region Member States. These included all 24 countries who reported data to the WHO's FluNet in 2012 and countries that we were aware were conducting systematic hospital-based SARI surveillance (Supplementary Table 1). We requested information on all SARI cases and SARI-associated deaths detected by surveillance during 2009–2012. SARI case definitions used by these countries were consistent with WHO recommendations for adults and children <5 years of age [16]; however, some countries used expanded case definitions to include 14 days of symptom duration and/or physician-diagnosed lower respiratory tract infection. WHO-recommended case definitions were updated in 2011, and many countries have recently updated SARI case definitions to reflect these recommendations. The WHO's current SARI case definition for all age groups is an acute respiratory infection with a subjective or measured temperature of ≥38°C and cough with onset in the last 10 days requiring hospitalization [17]. Variables collected in SARI surveillance included the number of subjects enrolled, the number of specimens tested for influenza virus by real-time reverse transcription–polymerase chain reaction (rRT-PCR), influenza virus rRT-PCR test results, age, and outcome of hospitalization (discharge or death). We collected additional variables for SARI-associated deaths, including sex, pregnancy status, underlying medical conditions, and results of testing for other respiratory infections. Nasopharyngeal and oropharyngeal swab specimens or nasal aspirates were collected from enrolled SARI cases to test for influenza virus and other respiratory viruses. Except for Malawi, which conducts internal quality assurance, all countries providing data participate in the WHO's External Quality Assurance Project for influenza diagnosis. Data on Streptococcus pneumoniae from Kenya, South Africa, and Madagascar were from blood culture or blood lytA PCR and are indicative of invasive disease [18, 19]. HIV infection, tuberculosis, and other comorbid conditions were reported according to local diagnostic and surveillance practices and were not independently verified. Data were analyzed using SAS 9.3 (SAS Institute, Cary, North Carolina). The Wilcoxon rank sum test was used to assess statistical significance of differences in nonparametric variables. The Pearson χ2 test was used to test for associations between categorical variables and influenza virus infection among SARI-associated deaths. RESULTS: Representatives from 23 of 28 countries (82%) provided information about their country's SARI surveillance. Of these 23 countries, 11 (48%) did not collect outcome data on persons hospitalized for SARI, 9 (38%) collected SARI-associated mortality data systematically either prospectively or retrospectively, 2 (8%) received reports of SARI-associated deaths from sentinel sites but not systematically, and 1 (4%) had <1 year of surveillance data. Of the 11 countries that collected mortality data either systematically or via sentinel site reports, the following 8 completed the standard template for this analysis: DRC, Kenya, Madagascar, Malawi, Rwanda, South Africa, Tanzania, and Uganda. 
Of these, DRC, Tanzania, and Uganda receive reports of SARI-associated deaths from sentinel sites but did not collect these data systematically; Rwanda conducted a retrospective review of medical charts and registers to identify SARI-associated deaths; and the remaining countries (Kenya, Madagascar, Malawi, and South Africa) collect SARI outcome data (including mortality) prospectively from the time of enrollment in SARI surveillance to the time of hospital discharge or death. Community deaths following discharge were not reported by any country. SARI surveillance was primarily conducted in pediatric and/or adult medical wards at surveillance hospitals, and surveillance practices varied by country in terms of the number of cases enrolled per day or the days of enrollment per week. During 2009–2012, the 8 countries that provided data on SARI-associated deaths enrolled 40 355 subjects, of whom 1222 (3.0%) died during hospitalization. Complete data on influenza virus testing and age were available for 37 714 SARI cases (93.5%). Among these cases, 1073 deaths were reported (SARI CFP, 2.8%), ranging widely by country, from 0.1% in DRC to 5.5% in Madagascar. Among the 37 714 SARI cases tested for influenza virus, 3091 (8.2%) tested positive for influenza virus, ranging from 5.1% in Tanzania to 25.9% in Madagascar (Table 1). All-Cause and Influenza-Associated Severe Acute Respiratory Illness (SARI) Cases and Deaths During 2009–2012, by Country and Age Group Abbreviations: CFP, case-fatality proportion; DRC, Democratic Republic of the Congo. Of 1073 SARI cases who died during hospitalization and were tested for influenza virus, 57 (5.3%) tested positive, for an overall influenza-associated CFP of 1.8% (57 of 3091), compared with 2.9% (1016 of 34 623) among influenza virus–negative SARI cases (P < .001). The influenza-associated CFPs ranged from 0 in Malawi, Tanzania, and Uganda to 3.6% in Madagascar. Countries with systematic death reporting had higher all-cause and influenza-associated CFPs than countries with sporadic reporting. Of the 1073 reported deaths, 402 (37.5%) were among children aged 0–4 years, 462 (43.1%) were among children and adults aged 5–49 years, and 209 (19.5%) were among adults aged ≥50 years (Table 1). Among the 57 influenza-associated deaths, 19 (33.3%) were among children aged 0–4 years, 20 (35.1%) were among adults aged 18–49 years, and the remaining 18 (31.5%) were among adults aged ≥50 years (Table 1). The median age among influenza-associated deaths was 32 years (interquartile range [IQR], 1–56 years), compared with 28 years (IQR, 1–45 years) among SARI-associated deaths without influenza virus infection (P = .05). The influenza-associated CFP varied by age group and was highest among adults aged ≥65 years (8.2%) and lowest among children aged 5–17 years (0%; P < .001). Persons aged ≥65 years accounted for just 2.5% of SARI cases but 7.2% of SARI-associated deaths and 14.0% of influenza-associated deaths due to SARI. Among 1073 SARI-associated deaths, data on underlying medical conditions were incomplete (Table 2). Despite the limited availability of data, 532 deaths (49.6%) were reported to have an underlying medical condition (including HIV infection and M. tuberculosis infection). Among 1073 deaths, HIV status was reported for 580 (54.1%), and, of these, 419 (72.2%) were infected with HIV. Thirty-three of 57 influenza virus–positive deaths (57.9%) had HIV status information, and, of these, 23 (69.7%) were HIV infected. 
A total of 547 of 1016 influenza virus–negative deaths (53.8%) had HIV status reported, and 396 (72.4%) were HIV infected (P = .74). Among the 661 deaths (61.6%) with data, tuberculosis was reported for 103 (15.6%), and there was no difference in the prevalence of tuberculosis among influenza virus–positive and influenza virus–negative deaths (P = .34). Information on asthma was available for 1056 SARI-associated deaths (98.4%) but was rarely identified in both influenza virus–positive (3.6%) and influenza virus–negative deaths (1.6%; P = .24). Information on other medical conditions was reported for 720 deaths (67.1%). Among children and adults with a reported underlying medical condition, HIV infection was most commonly reported among all ages (419 of 532 cases [78.8%]), followed by malnutrition (11 of 96 [11.5%]) and tuberculosis (8 of 96 [8.3%]) for children aged 0–4 years and tuberculosis (95 of 436 [21.8%]) and diabetes (25 of 436 [5.7%]) for children and adults aged ≥5 years. Of the 245 deaths among women of childbearing age, pregnancy status was provided for 237 (96.7%), of whom 7 (3.0%) were pregnant at the time of death. None of the 7 pregnant women with SARI identified in routine surveillance who died were infected with influenza virus. Underlying Medical Conditions Reported Among Deaths Due to Severe Acute Respiratory Illness (SARI) Detected by Surveillance During 2009–2012 in 8 African Countries, Overall and by Influenza Virus Status Abbreviation: HIV, human immunodeficiency virus. a For comparison of the prevalence of the condition among influenza virus–positive and influenza virus–negative deaths due to SARI. Among the 57 influenza-associated deaths, 9 (15.8%) were confirmed as involving influenza A(H1N1)pdm09 infection, 19 (33.3%) as involving influenza A(H3N2) infection, and 20 (35.1%) as involving influenza B infection; nonsubtyped influenza A infection was detected in 9 (15.8%). There were no statistically significant differences in the distribution of deaths by influenza virus subtype by age group (P = .72). Other respiratory pathogens were identified by culture or PCR testing only in Kenya, Madagascar, and South Africa. Specimens from 834 deaths in these countries were tested for the presence of at least 1 other respiratory pathogen: 247 deaths (30%) involved children aged 0–4 years, and 587 (70%) involved persons aged ≥5 years. Among the 247 deaths in children aged 0–4 years tested for other pathogens, 145 (59%) tested positive for a respiratory pathogen, with influenza virus detected in 14 (6%) and ≥1 respiratory pathogen other than influenza virus detected in 131 (53%). Among those aged 0–4 years, the most commonly identified respiratory pathogens other than influenza virus were rhinovirus (35 cases [14.2%]), respiratory syncytial virus (33 [13.4%]), and adenovirus (32 [13.0%]; Table 3). Among deaths involving children aged 0–4 years, those testing negative for influenza virus were more likely to have another respiratory virus identified than those who tested positive for influenza virus (P = .014). Respiratory Pathogens Observed Among 834 Deaths Due to Severe Acute Respiratory Illness Detected by Surveillance During 2009–2012 in 3 African Countries, by Age and Influenza Virus Status Abbreviation: S. pneumoniae, Streptococcus pneumoniae. a Bocavirus (n = 1), coronavirus (n = 2), enterovirus (n = 4), human metapneumovirus (n = 8), and parainfluenza viruses (1, 2, and not subtyped; n = 8). b P = .014, by the Fisher exact test. c P = .04, by the Fisher exact test. 
d Bocavirus (n = 1), enterovirus (n = 7), human metapneumovirus (n = 5), and parainfluenza viruses (1, 2, and not subtyped; n = 6). Among the 587 deaths in persons aged ≥5 years who were tested, 243 (41%) tested positive for a respiratory pathogen, with influenza virus detected in 37 (6%) and ≥1 respiratory pathogen other than influenza virus detected in 206 (35%). The most commonly identified respiratory pathogens other than influenza virus in this age group were rhinovirus (72 cases [12.3%]), S. pneumoniae (48 [8.2%]), and adenovirus (32 [5.5%]; Table 3). DISCUSSION: Hospital-based surveillance for severe respiratory disease in Africa has expanded dramatically in the last decade, yet data on etiologies of mortality are very sparse. Our aim was to assess the capacity of SARI surveillance to collect mortality data and provide initial insights of the characteristics of SARI and influenza-confirmed deaths, rather than to accurately document the burden of influenza-associated mortality. We found that few countries that conduct influenza surveillance are systematically collecting data on outcomes of hospitalization, and available information on deaths is sparse and incomplete. Likewise, data on the presence of comorbidities were incomplete. We also found that pregnant women were underrepresented in SARI data from all countries. We found a wide range in CFPs for SARI and influenza across countries, which may reflect differences in case definitions, criteria for hospitalization, quality of hospital care, or quality of surveillance, including proper handling of biological specimens and ascertainment of deaths. Countries where deaths are reported from sites sporadically had lower CFPs than those that reported deaths systematically, suggesting that sporadic reporting may fail to capture a large number of SARI-associated deaths. Reports of hospitalizations and deaths from low-income and middle-income settings during the 2009 A(H1N1) pandemic include La Reunion (255 hospitalizations and 6 deaths; CFP, 2.4%), Argentina (11 086 hospitalizations and 580 deaths; CFP, 5.2%), and Chile (1585 hospitalizations and 134 deaths; CFP, 8.5%) [20]. The majority of published data on SARI CFPs from African countries is focused on children. A meta-analysis of 11 African studies in children aged 0–59 months found an average in-hospital CFP of 3.9% (95% CI, 2.7%–5.5%) among children hospitalized with acute lower respiratory tract infections [3]. Comparison with CFP estimates among respiratory disease–associated hospitalizations from the region suggests incomplete death reporting in this analysis. Despite the incomplete data, we were able to draw some important conclusions. First, in-hospital respiratory mortality and influenza-associated mortality occur predominantly in very young individuals and those 18–49 years. Second, a viral pathogen could be identified in 38% of deaths tested for influenza virus and other respiratory pathogens. In particular, our data highlight the role that respiratory syncytial virus and adenovirus may play in respiratory mortality in the region. Third, HIV infection and tuberculosis are important factors in severe respiratory disease in Africa. The low number of deaths among pregnant women is unexpected, given their increased risk of influenza-associated death during the pandemic [12, 21] and increased risk of severe disease from seasonal influenza [22], and it may reflect inadequate surveillance in antenatal clinics and maternity wards. 
Owing to high fertility rates, >9% of African women of childbearing age are estimated to be pregnant at any given time [23], suggesting that current surveillance may not adequately identify severe disease in pregnant women. From the data collected here nearly two thirds of enrolled SARI cases were children 0–4 years of age, while adults aged ≥65 years accounted for just 2.4% of SARI cases. Some but not all of this difference may be explained by population demographic characteristics in Africa; >40% of the population of Africa in 2013 is <15 years of age, and just 3.4% are aged ≥65 years [24]. There may be differences in access to care among elderly persons in Africa that result in fewer elderly individuals with respiratory illness being hospitalized. Our analysis demonstrates that elderly persons who are hospitalized have a greater risk of death from respiratory disease, especially influenza, than young children. A systematic review of the case-fatality risk from the 2009 A(H1N1) pandemic found significant variation in risk by age group, ranging from approximately 1 death per 100 000 symptomatic cases in children aged 0–19 years to approximately 1000 deaths per 100 000 symptomatic cases in persons aged ≥65 years [25]. These estimates support our finding of an increased CFP among hospitalized elderly individuals, although only 15.8% of the influenza-associated deaths reported here involved influenza A(H1N1) infections (Table 1). Despite this, children aged 0–4 years accounted for over one third of all SARI-associated deaths and one-third of influenza-associated deaths due to SARI in this analysis, a finding consistent with prior studies in low-income settings [3, 26]. Likewise, adults aged 18–49 years accounted for over one third of all SARI-associated deaths and one third of influenza-associated deaths due to SARI, which may be explained by the high prevalence of HIV in this age group. In this analysis, HIV status was not reported for nearly half of enrolled SARI-associated deaths. Despite this limitation, these data clearly demonstrate that HIV infection is common among SARI-associated hospitalizations and deaths with and without influenza virus coinfection; however, we are unable to demonstrate a statistical association of HIV infection with death among hospitalized SARI cases. Our coauthors from South Africa have found that HIV-infected patients with influenza were 4 times more likely (95% CI, 1–12) to die than HIV-uninfected patients with influenza [2]. Data on tuberculosis were also missing from many deaths; however, when reported, tuberculosis was common among SARI-associated deaths but was not found more commonly among SARI-associated deaths with influenza. There were several important limitations of this analysis. Because almost 90% of all deaths were reported from Kenya and South Africa, our findings may not be representative. In South Africa, a country with systematic reporting and a relatively high CFP in this analysis, retrospective review of respiratory deaths in sentinel hospitals found that as many as 1 in 3 respiratory deaths were not enrolled in SARI surveillance [27]. Some countries in our study reported very few or no SARI-associated deaths, which limits confidence in conclusions drawn from these data. Also, substantial differences existed in how SARI, HIV status, M. tuberculosis coinfection, and other underlying medical conditions were diagnosed or defined for surveillance purposes. 
It is likely that very severely ill patients would not have been enrolled into surveillance, especially if informed consent was required, as was the case in several countries. This may explain why 149 of 1222 reported deaths (12%) did not have influenza virus test results. Another limitation is the ability to attribute death to any one pathogen. Although influenza viruses are less commonly found in asymptomatic persons, other viruses, such as rhinoviruses and adenoviruses, are frequently isolated from nasopharyngeal or oropharyngeal swab specimens in asymptomatic persons, especially children [28–31]. The relative contribution of these pathogens to mortality may also be affected. Nasopharyngeal or oropharyngeal swab specimens may not be the ideal specimens for detecting some respiratory pathogens [32]. Also, the methods of testing for some non–influenza virus pathogens differed by site. For example, South Africa diagnosed S. pneumoniae infection on the basis of lytA PCR of blood samples, while Kenya used blood cultures to diagnose invasive S. pneumoniae infection. Our study is limited in its ability to assess the true role of pathogens other than influenza virus since data on the number of cases tested for each pathogen were not available. Sentinel surveillance sites are often limited in their population catchment and may not capture an adequate number of respiratory disease– and influenza-associated deaths. Because of the logistics of specimen collection, transport, and analysis, many countries are only able to support a small number of sites, which may not be representative of the population. Limited access to care in many African settings may further reduce the number of persons hospitalized with respiratory infections, including influenza. Many cases of SARI may be due to secondary complications such as bacterial pneumonia after initial influenza virus infection, when influenza virus shedding has decreased or ceased. Moreover, influenza-associated deaths with a nonrespiratory presentation, including heart attack and stroke, are unlikely to be tested for influenza virus even if hospitalized. Many complications of influenza, especially postinfluenza pneumonia and death, happen >1 week after initial infection [33, 34] and therefore may occur at home after discharge. Vital registration data in South Africa indicate that as many as 50% of respiratory disease–associated deaths occur outside of hospitals [35]. Because of these limitations many countries have used other methods to estimate influenza-associated mortality. Some sites have conducted thorough community-based mortality reviews in which persons meeting the case definition of influenza-associated death are counted in a defined population [36, 37]. Excess mortality modeling is commonly used to assess deaths due to influenza in populations with accurate vital statistics or International Classification of Diseases–coded hospitalization data, with or without adjustment for influenza virus circulation [6, 38, 39]; however, few countries in Africa have complete vital registration data, and many countries do not experience clear seasonal peaks in influenza transmission, which may limit the utility of such methods. Many countries in Africa have health and demographic surveillance sites where all births and deaths are recorded and where deaths are assessed by verbal autopsy. 
It is possible that an assessment of trends in mortality at these sites, combined with virological data on influenza virus circulation patterns, may provide more-accurate estimates of influenza-associated deaths when vital statistics are not available. In conclusion, stronger surveillance for respiratory deaths may help to identify risk groups for targeted vaccine use and other prevention strategies. Among those tested, respiratory viruses other than influenza virus and S. pneumoniae were commonly identified in SARI-associated deaths of all ages; however, nasopharyngeal carriage may overestimate mortality from some of these pathogens. Surveillance in antenatal clinics and/or maternity wards should be strengthened to better capture pregnant women, given the WHO's recent decision to prioritize them for influenza vaccination [40]. Sentinel surveillance may provide some information on characteristics of influenza-associated deaths but will likely underestimate influenza-associated mortality. Alternative methods should be used to estimate influenza-associated mortality in Africa, depending on the availability of vital statistics, accurate hospitalization data, and other forms of demographic and health surveillance. Supplementary Material: Click here for additional data file.
Background: Data on causes of death due to respiratory illness in Africa are limited. Methods: From January to April 2013, 28 African countries were invited to participate in a review of severe acute respiratory illness (SARI)-associated deaths identified from influenza surveillance during 2009-2012. Results: Twenty-three countries (82%) responded, 11 (48%) collect mortality data, and 8 provided data. Data were collected from 37 714 SARI cases, and 3091 (8.2%; range by country, 5.1%-25.9%) tested positive for influenza virus. There were 1073 deaths (2.8%; range by country, 0.1%-5.3%) reported, among which influenza virus was detected in 57 (5.3%). Case-fatality proportion (CFP) was higher among countries with systematic death reporting than among those with sporadic reporting. The influenza-associated CFP was 1.8% (57 of 3091), compared with 2.9% (1016 of 34 623) for influenza virus-negative cases (P < .001). Among 834 deaths (77.7%) tested for other respiratory pathogens, rhinovirus (107 [12.8%]), adenovirus (64 [6.0%]), respiratory syncytial virus (60 [5.6%]), and Streptococcus pneumoniae (57 [5.3%]) were most commonly identified. Among 1073 deaths, 402 (37.5%) involved people aged 0-4 years, 462 (43.1%) involved people aged 5-49 years, and 209 (19.5%) involved people aged ≥50 years. Conclusions: Few African countries systematically collect data on outcomes of people hospitalized with respiratory illness. Stronger surveillance for deaths due to respiratory illness may identify risk groups for targeted vaccine use and other prevention strategies.
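The case-fatality comparison quoted in this abstract (influenza-positive deaths 57 of 3091 vs influenza-negative deaths 1016 of 34 623; P < .001) can be checked with a standard chi-square test on the 2 x 2 table. The authors analyzed their data in SAS, so the Python/SciPy sketch below is only an illustrative re-computation of that single comparison, not the original analysis code.

```python
# Illustrative re-computation of the reported case-fatality comparison
# (influenza-positive vs influenza-negative SARI cases). The original
# analysis was done in SAS; this SciPy version is only a sketch.
from scipy.stats import chi2_contingency

flu_pos_deaths, flu_pos_total = 57, 3091       # influenza virus-positive SARI cases
flu_neg_deaths, flu_neg_total = 1016, 34623    # influenza virus-negative SARI cases

table = [
    [flu_pos_deaths, flu_pos_total - flu_pos_deaths],   # died / survived, influenza-positive
    [flu_neg_deaths, flu_neg_total - flu_neg_deaths],   # died / survived, influenza-negative
]

chi2, p, dof, expected = chi2_contingency(table)

print(f"CFP influenza-positive: {flu_pos_deaths / flu_pos_total:.1%}")   # about 1.8%
print(f"CFP influenza-negative: {flu_neg_deaths / flu_neg_total:.1%}")   # about 2.9%
print(f"chi-square = {chi2:.1f}, p = {p:.2e}")  # p falls below .001, consistent with the report
```

The same call generalizes to the other 2 x 2 comparisons reported in the tables (for example, the Fisher exact tests in Table 3 could be reproduced with scipy.stats.fisher_exact), provided the underlying cell counts are available.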
null
null
4,152
337
[]
4
[ "influenza", "deaths", "sari", "associated", "virus", "influenza virus", "respiratory", "data", "years", "associated deaths" ]
[ "enrolled sari cases", "estimates influenza associated", "respiratory mortality influenza", "sari associated hospitalizations", "sari influenza countries" ]
null
null
null
null
[CONTENT] influenza, human | mortality | Africa South of the Sahara [SUMMARY]
[CONTENT] influenza, human | mortality | Africa South of the Sahara [SUMMARY]
null
[CONTENT] influenza, human | mortality | Africa South of the Sahara [SUMMARY]
null
null
[CONTENT] Adolescent | Adult | Africa South of the Sahara | Age Distribution | Aged | Bacterial Infections | Child | Child, Preschool | Humans | Infant | Infant, Newborn | Influenza, Human | Middle Aged | Population Surveillance | Respiratory Tract Infections | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Africa South of the Sahara | Age Distribution | Aged | Bacterial Infections | Child | Child, Preschool | Humans | Infant | Infant, Newborn | Influenza, Human | Middle Aged | Population Surveillance | Respiratory Tract Infections | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Africa South of the Sahara | Age Distribution | Aged | Bacterial Infections | Child | Child, Preschool | Humans | Infant | Infant, Newborn | Influenza, Human | Middle Aged | Population Surveillance | Respiratory Tract Infections | Young Adult [SUMMARY]
null
null
[CONTENT] enrolled sari cases | estimates influenza associated | respiratory mortality influenza | sari associated hospitalizations | sari influenza countries [SUMMARY]
[CONTENT] enrolled sari cases | estimates influenza associated | respiratory mortality influenza | sari associated hospitalizations | sari influenza countries [SUMMARY]
null
[CONTENT] enrolled sari cases | estimates influenza associated | respiratory mortality influenza | sari associated hospitalizations | sari influenza countries [SUMMARY]
null
null
[CONTENT] influenza | deaths | sari | associated | virus | influenza virus | respiratory | data | years | associated deaths [SUMMARY]
[CONTENT] influenza | deaths | sari | associated | virus | influenza virus | respiratory | data | years | associated deaths [SUMMARY]
null
[CONTENT] influenza | deaths | sari | associated | virus | influenza virus | respiratory | data | years | associated deaths [SUMMARY]
null
null
[CONTENT] sari | variables | countries | test | influenza | case | case definitions | definitions | sari case | data [SUMMARY]
[CONTENT] influenza | deaths | virus | influenza virus | sari | years | aged | associated | positive | respiratory [SUMMARY]
null
[CONTENT] influenza | deaths | sari | data | virus | associated | influenza virus | file | additional data | additional data file [SUMMARY]
null
null
[CONTENT] January to April 2013 | 28 | African | 2009-2012 [SUMMARY]
[CONTENT] Twenty-three | 82% | 11 | 48% | 8 ||| 37 714 | 3091 | 8.2% | 5.1%-25.9% ||| 1073 | 2.8% | 0.1%-5.3% | 57 | 5.3% ||| CFP ||| CFP | 1.8% | 57 | 3091 | 2.9% | 1016 of | 34 623 ||| 834 | 77.7% | 107 | 12.8% | 64 | 6.0% | 60 | 5.6% | Streptococcus | 57 | 5.3% ||| 1073 | 402 | 37.5% | 0-4 years | 462 | 43.1% | 5-49 years | 209 | 19.5% [SUMMARY]
null
[CONTENT] Africa ||| January to April 2013 | 28 | African | 2009-2012 ||| Twenty-three | 82% | 11 | 48% | 8 ||| 37 714 | 3091 | 8.2% | 5.1%-25.9% ||| 1073 | 2.8% | 0.1%-5.3% | 57 | 5.3% ||| CFP ||| CFP | 1.8% | 57 | 3091 | 2.9% | 1016 of | 34 623 ||| 834 | 77.7% | 107 | 12.8% | 64 | 6.0% | 60 | 5.6% | Streptococcus | 57 | 5.3% ||| 1073 | 402 | 37.5% | 0-4 years | 462 | 43.1% | 5-49 years | 209 | 19.5% ||| African ||| [SUMMARY]
null
Diagnostic yield of transbronchial cryobiopsy in non-neoplastic lung disease: a retrospective case series.
25366106
Due to the small amount of alveolar tissue in transbronchial biopsy (TBB) by forceps, the diagnosis of diffuse, parenchymal lung diseases (DPLD) is inherently problematic, with an overall low yield. The use of cryotechnique in bronchoscopy, including TBB by cryoprobe, has revealed new opportunities in the endoscopic diagnosis of malignant and non-malignant lung diseases.
BACKGROUND
To evaluate TBB by cryotechnique for non-neoplastic lung diseases, we analyzed 52 patients (mean age 63 ± 13 years) with unclear DPLD. These individuals underwent bronchoscopy with TBB by cryoprobe. Thereafter histopathological results were compared with the clinically evaluated diagnosis.
METHODS
No major complications were seen. Mean specimen diameter in the histological biopsies was 6.9 ± 4.4 mm (Range 2 - 22 mm). A correlation between clinical and histopathological diagnoses was found in 79% of cases (41/52). In the case of UIP (usual interstitial pneumonia) pattern, the concordance was 10/15 (66%).
RESULTS
Based on these results, TBB by cryotechnique would appear to be a safe and useful method that reveals new perspectives for the endoscopic diagnosis of DPLD.
CONCLUSION
[ "Biopsy", "Bronchoscopy", "Cold Temperature", "Humans", "Idiopathic Pulmonary Fibrosis", "Lung", "Retrospective Studies" ]
4223742
Background
Concerning transbronchial lung biopsy (TBB) by forceps, histopathological results depend on specimen size and quality, artificial changes due to the procedure itself, and the amount of alveolar tissue contained in the sample. TBB by forceps typically delivers one or more 1–2 mm sized specimen, often with underrepresentation of alveolar tissue [1–4]. Due to the large variation of indications, as well as different sizes and locations of pulmonary lesions, diagnostic yield of TBB by forceps is severe to describe. Most recent case series specify a diagnostic accuracy of about 50% to 70% [1, 3, 5–11]. Diffuse parenchymal lung diseases (DPLD) are harder to diagnose by TBB with an overall lower yield. Efficacy variations depend on the underlying disease: Sarcoidosis and cryptogenic organizing pneumonia (COP) render fairly good results, whereas usual interstitial pneumonia (UIP), pneumoconiosis, respiratory bronchiolitis associated interstitial lung disease (RB-ILD), non specific interstitial pneumonia (NSIP) and pulmonary histiocytosis X show poor results [1, 4, 10, 12, 13]. This broad range is explained by the diverging importance of alveolar tissue for the histopathological diagnosis. Due to the small yield of alveoli in TBB by forceps it is not possible to gather further information concerning the histopathological pattern of affected tissue throughout the lung. This problem is well known and has recently been discussed in literature [1, 3, 4, 6, 13–17]. Some papers do show an increase of diagnostic yield depending on the size of the specimen [6, 14], however artificial changes due to TBB by forceps are an important limiting criterion for the pathological diagnosis of DPLD. Cryobiopsy as a tool in bronchology has been introduced on a routine basis in recent years and has been found to be safe in a routine diagnostic setting [18, 19]. Specimen size has been reported to be larger and diagnostically more valuable due to more alveolar tissue and less artificial changes. In a previous paper morphometrical benefits of cryobioptically obtained lung tissue specimen were shown and perspectives of this method in a daily routine were discussed [20]. There is evidence that cryobiopsies increase efficacy concerning histopathological tumor diagnostics in central malignant lesions [19, 21]. The aim of this study was to evaluate TBB by cryotechnique for non-neoplastic diseases. Our focus was sample adequacy for diagnostic purposes, sample size and proportion of alveolar tissue retrieved, as well as the possibility of histopathological diagnosis of DPLD by cryobiopsy in correlation to clinical diagnoses.
Methods
This is a retrospective case series (June 2009 – December 2011) of 52 patients with diffuse, interstitial, non-neoplastic lung diseases who underwent flexible, fiberoptic bronchoscopy with transbronchial cryobiopsy. Additionally, all patients had routine diagnostics including lung function evaluation, chest x-ray and thoracic computed tomography (CT-scan). For the transbronchial cryobiopsy a flexible cryoprobe with a diameter of 1.9 mm was used (flexible Kryosonde diameter 1.9 mm length 900 mm, Erbe Elektromedizin GmbH, Tübingen, Germany), the probe was cooled to a temperature of about – 77°C by carbon dioxide. The flexible bronchoscopy (1 T 160 and 1 T 180, Olympus Corp. Tokyo Japan) was performed under sedation with disoprivan or midazolam and local anaesthesia with lidocaine. The cryoprobe was introduced into the selected area with a distance of approximately 1–2 cm from the thoracic wall under radiological guidance. In this position the cryoprobe was cooled for three to five seconds and then retracted with the attached frozen lung tissue. For each patient one to two specimen were taken and then fixed in 4% buffered formalin. To avoid incomplete sectioning of specimen particles all biopsies were conventionally processed by serial sectioning of at least 12 H & E stained section steps. Concerning quantity, quality (number of artefacts), and the amount of alveolar tissue, the biopsies were rated by two experienced lung pathologists (SG & TM). Mirax Viewer Image Software Ver (1, 6) was used for scanning the Hematoxylin-eosine slides by a ZEISS-MIRAX Midi Slide scanning system (Zeiss Microimaging, Oberkochen, Germany and 3DTech, Budapest, Hungary). The total diameter of the biopsy specimens were measured and expressed in μm. Histopathological changes were rated according to criteria for UIP diagnosis based on the Official ATS/ERS/JRS/ALAT Statement (22). The histological diagnosis of other entities was made by the use of classical criteria for interstial lung diseases (4). Histopathological results and radiographic images were compared to the patients’ medical history, physical examination and data of pulmonary lung function testing. At last, in an interdisciplinary setting (pathologist, radiologist, pneumologist) a diagnosis was found. Furthermore, complications seen during bronchoscopy were rated. Statistically, results were expressed as frequencies or as mean ± SD. Chi-square-test was used to compare proportions. The significance level of the analyses was set to 5%, and exact p values were reported. Results were expressed using descriptive statistics. Statistical software (Statistical Package for Social Sciences, Version 14.0; SPSS, Chicago, IL, USA) was used to analyze and process the data on a Windows XP operating system (Microsoft; Redmond, WA, USA). A waiver for this study was received by the ethics committee of the Charité, Berlin, Germany (“Ethikkommission, Ethikausschuss 1 am Campus Charité – Mitte”) on January 23, 2014.
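The statistical paragraph above reports proportions and chi-square comparisons computed in SPSS. As a hedged illustration of how the headline concordance figure can be summarized outside SPSS, the sketch below recomputes the clinical-histopathological agreement reported in the results (41 of 52 cases, 79%) and adds an exact Clopper-Pearson 95% interval; the interval itself is an addition for illustration and was not reported by the authors.

```python
# Illustrative summary of the reported clinical/histopathological concordance
# (41 of 52 cases). The exact (Clopper-Pearson) 95% CI is added for
# illustration only; the original analysis used SPSS and reported the proportion.
from scipy.stats import beta

def clopper_pearson(successes: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact two-sided binomial confidence interval."""
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, n - successes + 1)
    upper = 1.0 if successes == n else beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lower, upper

matches, total = 41, 52
low, high = clopper_pearson(matches, total)
print(f"Concordance: {matches}/{total} = {matches / total:.0%} "
      f"(95% CI {low:.0%}-{high:.0%})")
```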
Results
Overall, 52 patients with a median age of 63 ± 13 years were analyzed. 36/52 (69%) patients were male, 16/52 (31%) were female. In 41/52 cases (79%) a correlation between clinical and histopathological diagnosis was found. In 11/52 cases (21%) no match could be achieved. Mean specimen diameter in the histological biopsies was 6.9 ± 4.4 mm (range, 2–22 mm). In the specimens, alveolar tissue was found in 48/52 (92%) cases. In 4/52 (8%) cases no alveolar tissue was found. In one of these four cases no histopathological diagnosis could be matched to the clinical diagnosis due to the lack of alveolar tissue. In the other three cases the diagnosis was sarcoidosis, and typical granulomas were found in the bronchial mucosa. The specimens lacking alveolar tissue either contained only bronchial mucosa and sometimes cartilage, or presented themselves as long flat bands of inner bronchial wall lining. No major complications (pneumothorax, major bleeding >3 minutes) with need of further intervention were reported. Table 1 lists the clinically diagnosed lung diseases, the number of matching histopathological findings, and the average diagnostic yield of TBB by forceps reported in the literature. Table 1. Comparison of clinically diagnosed DPLD (number of cases and matching histopathological findings) and average reported diagnostic yield by forceps biopsy: COP, 9 cases, 8/9 (89%) matching, reported forceps yield 65% (10, 27, 28); rheumatoid lung disease, 2 cases, 2/2 (100%) matching; sarcoidosis, 12 cases, 10/12 (83%) matching, reported forceps yield 69% (10, 28, 29); alveolar microlithiasis, 1 case, 1/1 (100%) matching; NSIP, 1 case, 1/1 (100%) matching; medically-induced lung damage, 2 cases, 2/2 (100%) matching; HP, 7 cases, 6/7 (86%) matching, reported forceps yield 95% (10); pulmonary manifestation of scleroderma, 2 cases, 1/2 (50%) matching; histiocytosis, 2 cases, 1/2 (50%) matching; pANCA-positive vasculitis, 1 case, 0/1 (0%) matching; IPF, 13 cases, 9/13 (69%) matching, reported forceps yield 34% (1, 10). The HR-CT images of patients who had the clinical diagnosis of idiopathic pulmonary fibrosis (IPF) or pulmonary manifestation of scleroderma were rated according to the radiological criteria for UIP of the ATS (American Thoracic Society) and ERS (European Respiratory Society) [22, 23]. Of these fifteen cases, fourteen (93%) showed possible or probable UIP pattern and one (7%) was inconsistent with UIP pattern.
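For a single row of Table 1, one may ask whether an observed cryobiopsy yield is compatible with the literature value quoted for forceps biopsy, for example 9 of 13 IPF cases (69%) against the cited 34% forceps yield. The authors did not perform this comparison, so the exact binomial test below is purely a hypothetical illustration, again in Python rather than the SPSS used in the paper.

```python
# Hypothetical illustration (not an analysis from the paper): exact binomial
# test of the observed cryobiopsy yield for IPF (9/13 matching) against the
# 34% average diagnostic yield cited in the literature for forceps biopsy.
from scipy.stats import binomtest

observed_matches, n_cases = 9, 13
literature_forceps_yield = 0.34

result = binomtest(observed_matches, n_cases, p=literature_forceps_yield,
                   alternative="greater")
print(f"Observed cryobiopsy yield: {observed_matches / n_cases:.0%}")
print(f"One-sided exact binomial p-value vs {literature_forceps_yield:.0%}: "
      f"{result.pvalue:.3f}")
```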
Conclusions
Cryobiopsy could improve on the results reported for conventional transbronchial forceps biopsy. Nevertheless, previously reported series are small and prospective comparisons do not exist. Such studies could even reveal that fewer cryobiopsy specimens per patient are necessary compared with transbronchial forceps biopsies, for which most authors recommend four biopsies per bronchoscopy during the workup of diffuse lung disease. In our present series only 1–2 cryobiopsy specimens were sampled as a rule. The high diagnostic yield and the lack of any major complication in our series encourage one to proceed with larger studies and to establish transbronchial cryobiopsy within routine clinical algorithms for the diagnosis of diffuse, parenchymal lung disease.
[ "Background" ]
[ "Concerning transbronchial lung biopsy (TBB) by forceps, histopathological results depend on specimen size and quality, artificial changes due to the procedure itself, and the amount of alveolar tissue contained in the sample. TBB by forceps typically delivers one or more 1–2 mm sized specimen, often with underrepresentation of alveolar tissue [1–4].\nDue to the large variation of indications, as well as different sizes and locations of pulmonary lesions, diagnostic yield of TBB by forceps is severe to describe. Most recent case series specify a diagnostic accuracy of about 50% to 70% [1, 3, 5–11].\nDiffuse parenchymal lung diseases (DPLD) are harder to diagnose by TBB with an overall lower yield. Efficacy variations depend on the underlying disease: Sarcoidosis and cryptogenic organizing pneumonia (COP) render fairly good results, whereas usual interstitial pneumonia (UIP), pneumoconiosis, respiratory bronchiolitis associated interstitial lung disease (RB-ILD), non specific interstitial pneumonia (NSIP) and pulmonary histiocytosis X show poor results [1, 4, 10, 12, 13]. This broad range is explained by the diverging importance of alveolar tissue for the histopathological diagnosis.\nDue to the small yield of alveoli in TBB by forceps it is not possible to gather further information concerning the histopathological pattern of affected tissue throughout the lung. This problem is well known and has recently been discussed in literature [1, 3, 4, 6, 13–17]. Some papers do show an increase of diagnostic yield depending on the size of the specimen [6, 14], however artificial changes due to TBB by forceps are an important limiting criterion for the pathological diagnosis of DPLD.\nCryobiopsy as a tool in bronchology has been introduced on a routine basis in recent years and has been found to be safe in a routine diagnostic setting [18, 19]. Specimen size has been reported to be larger and diagnostically more valuable due to more alveolar tissue and less artificial changes. In a previous paper morphometrical benefits of cryobioptically obtained lung tissue specimen were shown and perspectives of this method in a daily routine were discussed [20]. There is evidence that cryobiopsies increase efficacy concerning histopathological tumor diagnostics in central malignant lesions [19, 21].\nThe aim of this study was to evaluate TBB by cryotechnique for non-neoplastic diseases. Our focus was sample adequacy for diagnostic purposes, sample size and proportion of alveolar tissue retrieved, as well as the possibility of histopathological diagnosis of DPLD by cryobiopsy in correlation to clinical diagnoses." ]
[ null ]
[ "Background", "Methods", "Results", "Discussion", "Conclusions" ]
[ "Concerning transbronchial lung biopsy (TBB) by forceps, histopathological results depend on specimen size and quality, artificial changes due to the procedure itself, and the amount of alveolar tissue contained in the sample. TBB by forceps typically delivers one or more 1–2 mm sized specimen, often with underrepresentation of alveolar tissue [1–4].\nDue to the large variation of indications, as well as different sizes and locations of pulmonary lesions, diagnostic yield of TBB by forceps is severe to describe. Most recent case series specify a diagnostic accuracy of about 50% to 70% [1, 3, 5–11].\nDiffuse parenchymal lung diseases (DPLD) are harder to diagnose by TBB with an overall lower yield. Efficacy variations depend on the underlying disease: Sarcoidosis and cryptogenic organizing pneumonia (COP) render fairly good results, whereas usual interstitial pneumonia (UIP), pneumoconiosis, respiratory bronchiolitis associated interstitial lung disease (RB-ILD), non specific interstitial pneumonia (NSIP) and pulmonary histiocytosis X show poor results [1, 4, 10, 12, 13]. This broad range is explained by the diverging importance of alveolar tissue for the histopathological diagnosis.\nDue to the small yield of alveoli in TBB by forceps it is not possible to gather further information concerning the histopathological pattern of affected tissue throughout the lung. This problem is well known and has recently been discussed in literature [1, 3, 4, 6, 13–17]. Some papers do show an increase of diagnostic yield depending on the size of the specimen [6, 14], however artificial changes due to TBB by forceps are an important limiting criterion for the pathological diagnosis of DPLD.\nCryobiopsy as a tool in bronchology has been introduced on a routine basis in recent years and has been found to be safe in a routine diagnostic setting [18, 19]. Specimen size has been reported to be larger and diagnostically more valuable due to more alveolar tissue and less artificial changes. In a previous paper morphometrical benefits of cryobioptically obtained lung tissue specimen were shown and perspectives of this method in a daily routine were discussed [20]. There is evidence that cryobiopsies increase efficacy concerning histopathological tumor diagnostics in central malignant lesions [19, 21].\nThe aim of this study was to evaluate TBB by cryotechnique for non-neoplastic diseases. Our focus was sample adequacy for diagnostic purposes, sample size and proportion of alveolar tissue retrieved, as well as the possibility of histopathological diagnosis of DPLD by cryobiopsy in correlation to clinical diagnoses.", "This is a retrospective case series (June 2009 – December 2011) of 52 patients with diffuse, interstitial, non-neoplastic lung diseases who underwent flexible, fiberoptic bronchoscopy with transbronchial cryobiopsy. Additionally, all patients had routine diagnostics including lung function evaluation, chest x-ray and thoracic computed tomography (CT-scan).\nFor the transbronchial cryobiopsy a flexible cryoprobe with a diameter of 1.9 mm was used (flexible Kryosonde diameter 1.9 mm length 900 mm, Erbe Elektromedizin GmbH, Tübingen, Germany), the probe was cooled to a temperature of about – 77°C by carbon dioxide. The flexible bronchoscopy (1 T 160 and 1 T 180, Olympus Corp. Tokyo Japan) was performed under sedation with disoprivan or midazolam and local anaesthesia with lidocaine. 
The cryoprobe was introduced into the selected area with a distance of approximately 1–2 cm from the thoracic wall under radiological guidance. In this position the cryoprobe was cooled for three to five seconds and then retracted with the attached frozen lung tissue. For each patient one to two specimen were taken and then fixed in 4% buffered formalin.\nTo avoid incomplete sectioning of specimen particles all biopsies were conventionally processed by serial sectioning of at least 12 H & E stained section steps. Concerning quantity, quality (number of artefacts), and the amount of alveolar tissue, the biopsies were rated by two experienced lung pathologists (SG & TM).\nMirax Viewer Image Software Ver (1, 6) was used for scanning the Hematoxylin-eosine slides by a ZEISS-MIRAX Midi Slide scanning system (Zeiss Microimaging, Oberkochen, Germany and 3DTech, Budapest, Hungary). The total diameter of the biopsy specimens were measured and expressed in μm.\nHistopathological changes were rated according to criteria for UIP diagnosis based on the Official ATS/ERS/JRS/ALAT Statement (22). The histological diagnosis of other entities was made by the use of classical criteria for interstial lung diseases (4).\nHistopathological results and radiographic images were compared to the patients’ medical history, physical examination and data of pulmonary lung function testing. At last, in an interdisciplinary setting (pathologist, radiologist, pneumologist) a diagnosis was found. Furthermore, complications seen during bronchoscopy were rated.\nStatistically, results were expressed as frequencies or as mean ± SD. Chi-square-test was used to compare proportions. The significance level of the analyses was set to 5%, and exact p values were reported. Results were expressed using descriptive statistics.\nStatistical software (Statistical Package for Social Sciences, Version 14.0; SPSS, Chicago, IL, USA) was used to analyze and process the data on a Windows XP operating system (Microsoft; Redmond, WA, USA).\nA waiver for this study was received by the ethics committee of the Charité, Berlin, Germany (“Ethikkommission, Ethikausschuss 1 am Campus Charité – Mitte”) on January 23, 2014.", "Overall, 52 patients with a median age of 63 ± 13 years were analyzed. 36/52 (69%) patients were male, 16/52 (31%) were female. In 41/52 cases (79%) a correlation with clinical and histopathological diagnosis was found. In 11/52 cases (21%) no match could be achieved.\nMean specimen diameter in the histological biopsies was 6.9 ± 4.4 mm (Range 2 – 22 mm). In the specimen, alveolar tissue was found in 48/52 (92%) cases. In 4/52 (8%) cases no alveolar tissue was found. In one of these four cases no histopathological diagnosis could be matched to the clinical diagnosis due to the lack of alveolar tissue. In the other three cases the diagnosis was sarcoidosis, and typical granulomas were found in the bronchial mucosa. 
The specimens lacking alveolar tissue either contained only bronchial mucosa and sometimes cartilage, or presented themselves as long flat bands of inner bronchial wall lining.\nNo major complications (pneumothorax, major bleeding >3 minutes) with need of further intervention were reported.\nTable 1 lists the clinically diagnosed lung diseases, the number of matching histopathological findings, and the average diagnostic yield of TBB by forceps reported in the literature.\nTable 1. Comparison of clinically diagnosed DPLD (number of cases and matching histopathological findings) and average reported diagnostic yield by forceps biopsy\nCOP: 9 cases, 8/9 (89%) matching, reported forceps yield 65% (10, 27, 28)\nRheumatoid lung disease: 2 cases, 2/2 (100%) matching\nSarcoidosis: 12 cases, 10/12 (83%) matching, reported forceps yield 69% (10, 28, 29)\nAlveolar microlithiasis: 1 case, 1/1 (100%) matching\nNSIP: 1 case, 1/1 (100%) matching\nMedically-induced lung damage: 2 cases, 2/2 (100%) matching\nHP: 7 cases, 6/7 (86%) matching, reported forceps yield 95% (10)\nPulmonary manifestation of scleroderma: 2 cases, 1/2 (50%) matching\nHistiocytosis: 2 cases, 1/2 (50%) matching\npANCA-positive vasculitis: 1 case, 0/1 (0%) matching\nIPF: 13 cases, 9/13 (69%) matching, reported forceps yield 34% (1, 10)\n\nThe HR-CT images of patients who had the clinical diagnosis of idiopathic pulmonary fibrosis (IPF) or pulmonary manifestation of scleroderma were rated according to the radiological criteria for UIP of the ATS (American Thoracic Society) and ERS (European Respiratory Society) [22, 23]. Of these fifteen cases, fourteen (93%) showed possible or probable UIP pattern and one (7%) was inconsistent with UIP pattern.", "Cryobiopsy in our series proved to be a sufficient tool in the diagnostic processing of various diffuse, parenchymal lung diseases (DPLD). Specifically, the highest diagnostic yields were achieved in patients with sarcoidosis (83%), COP (89%), and hypersensitivity pneumonia (HP, 86%). Comparable results with the use of transbronchial forceps biopsy have been reported in recent years [1, 3, 5–10] (see Table 1). These good results are probably due to the location of granulomatous or other characteristic changes close to or within the bronchial wall.\nNevertheless, clear distinctions became obvious between diseases that required the recognition of a gross histological pattern (UIP, NSIP, RB-ILD) and all others.\nIt has generally been assumed that transbronchial lung biopsies cannot be used for the diagnosis of UIP [10]. A hallmark characteristic of UIP is the patchy involvement of lung tissue, so that areas of involved parenchyma and unaffected alveoli stand next to each other. Furthermore, UIP is characterized histologically by fibrosis and chronic inflammation, i.e. features that are usually unspecific findings located in the peribronchial tissue.\nIn a study by Berbescu et al. [1], 22 patients with UIP pattern assessed by open lung biopsy were retrospectively analyzed with regard to a preoperatively obtained TBB. This revealed a characteristic histopathological UIP pattern in nine cases. Berbescu et al. concluded [1] that certain characteristic features of UIP, such as the patchwork pattern of involvement by fibrosis and temporal variability with fibroblast foci, collagen, and honeycomb changes, previously thought to be recognizable only on surgical lung biopsy specimens, can sometimes be seen on TBB specimens. The patchwork pattern is typically characterized by normal alveoli in close relation to areas of interstitial fibrosis. 
Its presence helps to distinguish the changes from nonspecific peribronchial fibrosis, where there is a gradual transition from normal to abnormal.\nIn our series of transbronchial cryobiopsies a UIP pattern was diagnosed in two-thirds (10/15, 67%) of the cases (IPF, pulmonary manifestation of scleroderma; see Table 1). This improves the diagnostic yield of TBB for UIP pattern by up to 50% in comparison to data described in the literature (see Table 1).\nCryobiopsy specimens tend to be even larger than transbronchial forceps biopsy specimens, and contain more and larger amounts of alveolar tissue [21]. In a previous study the number of alveolar spaces necessary for an adequate biopsy was defined as 20 [6]. This criterion is likely to be fulfilled in most cryobiopsy specimens ([21]; unpublished data). However, transbronchial cryobiopsy, as well as TBB by forceps, fails to deliver the diagnosis of UIP in a significant proportion of patients. This may be due to the distance frequently seen between the bronchial wall and typical histological changes, such as fibroblast foci, which are located deep in the alveolar parenchyma [1] (Figures 1, 2, 3, 4).\nFigure 1: Comparison of TBB by cryoprobe (left) and forceps (right): significant differences in size and quality.\nFigure 2: Patient with radiological UIP pattern. Overview of transbronchial cryobiopsy: patchy involvement of fibrosing process next to unaffected lung tissue.\nFigure 3: Transbronchial cryobiopsy: architectural distortion of lung tissue with scarring next to normal lung parenchyma.\nFigure 4: Transbronchial cryobiopsy: active ongoing fibrosis (fibroblast focus) as an expression of “temporary variegation”.\nDespite the encouraging findings about cryobiopsy, these results do not yet command a recommendation of transbronchial cryobiopsy as a standard procedure in the processing of suspected pulmonary fibrosis. Nevertheless, the current problem of distinguishing between HP with UIP pattern and IPF (which radiologically speaking cannot be securely discriminated [24]) could be solved by the use of cryobiopsy with its greater specimen size. Furthermore, there is a greater chance for detection of granulomas or other characteristic histopathological features [25–29]. Therefore, larger prospective and comparative series must be evaluated before a general clinical algorithm can be proposed. Meanwhile, in those individual patients where cryobiopsy has revealed the full pattern of UIP, open lung biopsy is unnecessary if histology and clinical data engender a clear diagnosis.", "Cryobiopsy could improve on the results reported for conventional transbronchial forceps biopsy. Nevertheless, previously reported series are small and prospective comparisons do not exist. Such studies could even reveal that fewer cryobiopsy specimens per patient are necessary compared with transbronchial forceps biopsies, for which most authors recommend four biopsies per bronchoscopy during the workup of diffuse lung disease. 
In our present series, only 1–2 cryobiopsy specimens were sampled as a rule. The high diagnostic yield and the absence of any major complication in our series encourage us to proceed with larger studies and to establish transbronchial cryobiopsy within routine clinical algorithms in the diagnostic work-up of diffuse parenchymal lung disease." ]
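The concordance figures reported above (41/52 cases overall and the per-diagnosis values in Table 1) are simple proportions of matching histopathological findings over clinically diagnosed cases. The following Python snippet is not part of the original study; it is a minimal sketch that only illustrates this arithmetic, using the case counts taken from Table 1.

```python
# Illustrative only: concordance (diagnostic yield) per clinical diagnosis,
# using the case counts reported in Table 1 of the cryobiopsy series.
table1 = {
    "COP": (8, 9),
    "Rheumatoid lung disease": (2, 2),
    "Sarcoidosis": (10, 12),
    "Alveolar microlithiasis": (1, 1),
    "NSIP": (1, 1),
    "Medically-induced lung damage": (2, 2),
    "HP": (6, 7),
    "Pulmonary manifestation of scleroderma": (1, 2),
    "Histiocytosis": (1, 2),
    "pANCA-pos. vasculitis": (0, 1),
    "IPF": (9, 13),
}

for diagnosis, (matched, cases) in table1.items():
    print(f"{diagnosis}: {matched}/{cases} = {matched / cases:.0%}")

# Overall concordance across all clinically diagnosed DPLD cases (41/52 = 79%)
total_matched = sum(m for m, _ in table1.values())
total_cases = sum(n for _, n in table1.values())
print(f"Overall: {total_matched}/{total_cases} = {total_matched / total_cases:.0%}")
```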
[ null, "methods", "results", "discussion", "conclusions" ]
[ "Diffuse parenchymal lung disease (DPLD)", "Bronchoscopy", "Transbronchial biopsy (TBB)", "Cryotechnique", "Histopathology", "Usual interstitial pneumonia (UIP)" ]
Background: Concerning transbronchial lung biopsy (TBB) by forceps, histopathological results depend on specimen size and quality, artificial changes due to the procedure itself, and the amount of alveolar tissue contained in the sample. TBB by forceps typically delivers one or more 1–2 mm sized specimen, often with underrepresentation of alveolar tissue [1–4]. Due to the large variation of indications, as well as different sizes and locations of pulmonary lesions, diagnostic yield of TBB by forceps is severe to describe. Most recent case series specify a diagnostic accuracy of about 50% to 70% [1, 3, 5–11]. Diffuse parenchymal lung diseases (DPLD) are harder to diagnose by TBB with an overall lower yield. Efficacy variations depend on the underlying disease: Sarcoidosis and cryptogenic organizing pneumonia (COP) render fairly good results, whereas usual interstitial pneumonia (UIP), pneumoconiosis, respiratory bronchiolitis associated interstitial lung disease (RB-ILD), non specific interstitial pneumonia (NSIP) and pulmonary histiocytosis X show poor results [1, 4, 10, 12, 13]. This broad range is explained by the diverging importance of alveolar tissue for the histopathological diagnosis. Due to the small yield of alveoli in TBB by forceps it is not possible to gather further information concerning the histopathological pattern of affected tissue throughout the lung. This problem is well known and has recently been discussed in literature [1, 3, 4, 6, 13–17]. Some papers do show an increase of diagnostic yield depending on the size of the specimen [6, 14], however artificial changes due to TBB by forceps are an important limiting criterion for the pathological diagnosis of DPLD. Cryobiopsy as a tool in bronchology has been introduced on a routine basis in recent years and has been found to be safe in a routine diagnostic setting [18, 19]. Specimen size has been reported to be larger and diagnostically more valuable due to more alveolar tissue and less artificial changes. In a previous paper morphometrical benefits of cryobioptically obtained lung tissue specimen were shown and perspectives of this method in a daily routine were discussed [20]. There is evidence that cryobiopsies increase efficacy concerning histopathological tumor diagnostics in central malignant lesions [19, 21]. The aim of this study was to evaluate TBB by cryotechnique for non-neoplastic diseases. Our focus was sample adequacy for diagnostic purposes, sample size and proportion of alveolar tissue retrieved, as well as the possibility of histopathological diagnosis of DPLD by cryobiopsy in correlation to clinical diagnoses. Methods: This is a retrospective case series (June 2009 – December 2011) of 52 patients with diffuse, interstitial, non-neoplastic lung diseases who underwent flexible, fiberoptic bronchoscopy with transbronchial cryobiopsy. Additionally, all patients had routine diagnostics including lung function evaluation, chest x-ray and thoracic computed tomography (CT-scan). For the transbronchial cryobiopsy a flexible cryoprobe with a diameter of 1.9 mm was used (flexible Kryosonde diameter 1.9 mm length 900 mm, Erbe Elektromedizin GmbH, Tübingen, Germany), the probe was cooled to a temperature of about – 77°C by carbon dioxide. The flexible bronchoscopy (1 T 160 and 1 T 180, Olympus Corp. Tokyo Japan) was performed under sedation with disoprivan or midazolam and local anaesthesia with lidocaine. 
The cryoprobe was introduced into the selected area with a distance of approximately 1–2 cm from the thoracic wall under radiological guidance. In this position the cryoprobe was cooled for three to five seconds and then retracted with the attached frozen lung tissue. For each patient one to two specimen were taken and then fixed in 4% buffered formalin. To avoid incomplete sectioning of specimen particles all biopsies were conventionally processed by serial sectioning of at least 12 H & E stained section steps. Concerning quantity, quality (number of artefacts), and the amount of alveolar tissue, the biopsies were rated by two experienced lung pathologists (SG & TM). Mirax Viewer Image Software Ver (1, 6) was used for scanning the Hematoxylin-eosine slides by a ZEISS-MIRAX Midi Slide scanning system (Zeiss Microimaging, Oberkochen, Germany and 3DTech, Budapest, Hungary). The total diameter of the biopsy specimens were measured and expressed in μm. Histopathological changes were rated according to criteria for UIP diagnosis based on the Official ATS/ERS/JRS/ALAT Statement (22). The histological diagnosis of other entities was made by the use of classical criteria for interstial lung diseases (4). Histopathological results and radiographic images were compared to the patients’ medical history, physical examination and data of pulmonary lung function testing. At last, in an interdisciplinary setting (pathologist, radiologist, pneumologist) a diagnosis was found. Furthermore, complications seen during bronchoscopy were rated. Statistically, results were expressed as frequencies or as mean ± SD. Chi-square-test was used to compare proportions. The significance level of the analyses was set to 5%, and exact p values were reported. Results were expressed using descriptive statistics. Statistical software (Statistical Package for Social Sciences, Version 14.0; SPSS, Chicago, IL, USA) was used to analyze and process the data on a Windows XP operating system (Microsoft; Redmond, WA, USA). A waiver for this study was received by the ethics committee of the Charité, Berlin, Germany (“Ethikkommission, Ethikausschuss 1 am Campus Charité – Mitte”) on January 23, 2014. Results: Overall, 52 patients with a median age of 63 ± 13 years were analyzed. 36/52 (69%) patients were male, 16/52 (31%) were female. In 41/52 cases (79%) a correlation with clinical and histopathological diagnosis was found. In 11/52 cases (21%) no match could be achieved. Mean specimen diameter in the histological biopsies was 6.9 ± 4.4 mm (Range 2 – 22 mm). In the specimen, alveolar tissue was found in 48/52 (92%) cases. In 4/52 (8%) cases no alveolar tissue was found. In one of these four cases no histopathological diagnosis could be matched to the clinical diagnosis due to the lack of alveolar tissue. In the other three cases the diagnosis was sarcoidosis, and typical granulomas were found in the bronchial mucosa. The specimens lacking alveolar tissue either contained only bronchial mucosa and sometimes cartilage, or presented themselves as long flat bands of inner bronchial wall lining. No major complications (pneumothorax, major bleeding >3 minutes) with need of further intervention were reported. 
Table 1 shows the list of clinically diagnosed lung diseases, the number of matching histopathological findings, and the average diagnostic yield of TBB by forceps reported in the literature.

Table 1 Comparison of clinically diagnosed DPLD (with number of cases and matching histopathological findings) and average reported diagnostic yield by forceps biopsy
Clinical diagnosis | Number of cases | Matching histopathological findings | Average reported diagnostic yield by forceps biopsy
COP | 9 | 8/9 (89%) | 65% (10, 27, 28)
Rheumatoid lung disease | 2 | 2/2 (100%) | -
Sarcoidosis | 12 | 10/12 (83%) | 69% (10, 28, 29)
Alveolar microlithiasis | 1 | 1/1 (100%) | -
NSIP | 1 | 1/1 (100%) | -
Medically-induced lung damage | 2 | 2/2 (100%) | -
HP | 7 | 6/7 (86%) | 95% (10)
Pulmonary manifestation of scleroderma | 2 | 1/2 (50%) | -
Histiocytosis | 2 | 1/2 (50%) | -
pANCA-pos. vasculitis | 1 | 0/1 (0%) | -
IPF | 13 | 9/13 (69%) | 34% (1, 10)

The HR-CT images of patients with a clinical diagnosis of idiopathic pulmonary fibrosis (IPF) or pulmonary manifestation of scleroderma were rated against the radiological criteria for UIP of the ATS (American Thoracic Society) and ERS (European Respiratory Society) [22, 23]. Of these fifteen cases, fourteen (93%) showed a possible or probable UIP pattern and one (7%) was inconsistent with a UIP pattern. Discussion: Cryobiopsy in our series proved to be a sufficient tool in the diagnostic work-up of various diffuse parenchymal lung diseases (DPLD). Specifically, the highest diagnostic yields were achieved in patients with sarcoidosis (83%), COP (89%), and hypersensitivity pneumonia (HP, 86%). Comparable results with the use of transbronchial forceps biopsy have been reported in recent years [1, 3, 5–10] (see Table 1). These good results are probably due to the location of granulomatous or other characteristic changes close to or within the bronchial wall. Nevertheless, clear distinctions became obvious between diseases that require the recognition of a gross histological pattern (UIP, NSIP, RB-ILD) and all others. It has generally been assumed that transbronchial lung biopsies cannot be used for the diagnosis of UIP [10]. A hallmark characteristic of UIP is the patchy involvement of lung tissue, so that areas of involved parenchyma and unaffected alveoli stand next to each other. Furthermore, UIP is characterized histologically by fibrosis and chronic inflammation, i.e. features that are usually unspecific findings located in the peribronchial tissue. In a study by Berbescu et al. [1], 22 patients with a UIP pattern assessed by open lung biopsy were retrospectively analyzed with respect to a preoperatively obtained TBB. This revealed a characteristic histopathological UIP pattern in nine cases. Berbescu et al. concluded [1] that certain characteristic features of UIP, such as the patchwork pattern of involvement by fibrosis and temporal variability with fibroblast foci, collagen, and honeycomb changes, previously thought to be recognizable only on surgical lung biopsy specimens, can sometimes be seen on TBB specimens. The patchwork pattern is typically characterized by normal alveoli in close relation to areas of interstitial fibrosis. Its presence helps to distinguish the changes from nonspecific peribronchial fibrosis, where there is a gradual transition from normal to abnormal. 
In our series of transbronchial cryobiopsies an UIP pattern was diagnosed in two-thirds (10/15, 67%) of the cases (IPF, pulmonary manifestation of scleroderma, see Table 1). This improves the diagnostic yield of TBB for UIP pattern in comparison to data described in literature by up to 50% (see Table 1). Cryobiopsy specimen tend to be even larger than transbronchial forceps biopsy specimen, and contain more and larger amounts of alveolar tissue [21]. In a previous study the number of alveolar spaces necessary for an adequate biopsy was defined as 20 [6]. This criterion is likely to be fulfilled in most of the cryobiopsy specimen ([21]; unpublished data). However, transbronchial cryobiopsy, as well as TBB by forceps, fail to deliver the diagnosis of UIP in a significant proportion of patients. This may be due to the distance seen frequently between the bronchial wall and typical histological changes, such as fibroblast foci, which are located deeply in the alveolar parenchyma [1] (Figures 1, 2, 3, 4).Figure 1 Comparison of TBB by cryoprobe (left) and forceps (right): significant differences in size and quality. Figure 2 Patient with radiological UIP pattern. Overview of transbronchial cryobiopsy: patchy involvement of fibrosing process next to unaffected lung tissue.Figure 3 Transbronchial cryobiopsy: architectural distortion of lung tissue with scaring next to normal lung parenchyma. Figure 4 Transbronchial cryobiopsy: active ongoing fibrosis (fibroblast focus) as an expression of “temporary variegation”. Comparison of TBB by cryoprobe (left) and forceps (right): significant differences in size and quality. Patient with radiological UIP pattern. Overview of transbronchial cryobiopsy: patchy involvement of fibrosing process next to unaffected lung tissue. Transbronchial cryobiopsy: architectural distortion of lung tissue with scaring next to normal lung parenchyma. Transbronchial cryobiopsy: active ongoing fibrosis (fibroblast focus) as an expression of “temporary variegation”. Despite the encouraging findings about cryobiopsy these results do not yet command a recommendation of transbronchial cryobiopsy as a standard procedure in the processing of suspected pulmonary fibrosis. Nevertheless, the current problem of distinguishing between HP with UIP pattern and IPF (which radiologically speaking cannot be securely discriminated [24]) could be solved by the use of cryobiopsy with greater specimen size. Furthermore, there is a greater chance for detection of granulomas or other characteristic histopathological features [25–29]. Therefore larger prospective and comparative series must be evaluated before a general clinical algorithm can be proposed. Meanwhile, in those individual patients where cryobiopsy has revealed the full pattern of UIP, open lung biopsy is unnecessary if histology and clinical data engender a clear diagnosis. Conclusions: Cryobiopsy could improve the results reported on conventional transbronchial forceps biopsy. Nevertheless, previously reported series are small and prospective comparisons do not exist. Such studies could even reveal that eventually less cryobiopsy pieces per patient are necessary as compared to transbronchial forceps biopsies. For the latter most of the authors recommend four biopsies per bronchoscopy during the processing of diffuse lung disease. In our present series only 1–2 cryobiopsy specimen were sampled as a rule. 
The high diagnostic yield and the lack of any major complication in our series encourages one to proceed with larger studies and to establish transbronchial cryobiopsy within routine clinical algorithms in the diagnostic of diffuse, parenchymal lung disease.
Background: Due to the small amount of alveolar tissue in transbronchial biopsy (TBB) by forceps, the diagnosis of diffuse, parenchymal lung diseases (DPLD) is inherently problematic, with an overall low yield. The use of cryotechnique in bronchoscopy, including TBB by cryoprobe, has revealed new opportunities in the endoscopical diagnosis of malignant and non-malignant lung diseases. Methods: To evaluate TBB by cryotechnique for non-neoplastic lung diseases, we analyzed 52 patients (mean age 63 ± 13 years) with unclear DPLD. These individuals underwent bronchoscopy with TBB by cryoprobe. Thereafter histopathological results were compared with the clinically evaluated diagnosis. Results: No major complications were seen. Mean specimen diameter in the histological biopsies was 6.9 ± 4.4 mm (Range 2 - 22 mm). A correlation between clinical and histopathological diagnoses was found in 79% of cases (41/52). In the case of UIP (usual interstitial pneumonia) pattern, the concordance was 10/15 (66%). Conclusions: Based on these results TBB by cryotechnique would appear to be a safe and useful method that reveals new perspectives for the endoscopical diagnosis of DPLD.
Background: Concerning transbronchial lung biopsy (TBB) by forceps, histopathological results depend on specimen size and quality, artificial changes due to the procedure itself, and the amount of alveolar tissue contained in the sample. TBB by forceps typically delivers one or more 1–2 mm sized specimen, often with underrepresentation of alveolar tissue [1–4]. Due to the large variation of indications, as well as different sizes and locations of pulmonary lesions, diagnostic yield of TBB by forceps is severe to describe. Most recent case series specify a diagnostic accuracy of about 50% to 70% [1, 3, 5–11]. Diffuse parenchymal lung diseases (DPLD) are harder to diagnose by TBB with an overall lower yield. Efficacy variations depend on the underlying disease: Sarcoidosis and cryptogenic organizing pneumonia (COP) render fairly good results, whereas usual interstitial pneumonia (UIP), pneumoconiosis, respiratory bronchiolitis associated interstitial lung disease (RB-ILD), non specific interstitial pneumonia (NSIP) and pulmonary histiocytosis X show poor results [1, 4, 10, 12, 13]. This broad range is explained by the diverging importance of alveolar tissue for the histopathological diagnosis. Due to the small yield of alveoli in TBB by forceps it is not possible to gather further information concerning the histopathological pattern of affected tissue throughout the lung. This problem is well known and has recently been discussed in literature [1, 3, 4, 6, 13–17]. Some papers do show an increase of diagnostic yield depending on the size of the specimen [6, 14], however artificial changes due to TBB by forceps are an important limiting criterion for the pathological diagnosis of DPLD. Cryobiopsy as a tool in bronchology has been introduced on a routine basis in recent years and has been found to be safe in a routine diagnostic setting [18, 19]. Specimen size has been reported to be larger and diagnostically more valuable due to more alveolar tissue and less artificial changes. In a previous paper morphometrical benefits of cryobioptically obtained lung tissue specimen were shown and perspectives of this method in a daily routine were discussed [20]. There is evidence that cryobiopsies increase efficacy concerning histopathological tumor diagnostics in central malignant lesions [19, 21]. The aim of this study was to evaluate TBB by cryotechnique for non-neoplastic diseases. Our focus was sample adequacy for diagnostic purposes, sample size and proportion of alveolar tissue retrieved, as well as the possibility of histopathological diagnosis of DPLD by cryobiopsy in correlation to clinical diagnoses. Conclusions: Cryobiopsy could improve the results reported on conventional transbronchial forceps biopsy. Nevertheless, previously reported series are small and prospective comparisons do not exist. Such studies could even reveal that eventually less cryobiopsy pieces per patient are necessary as compared to transbronchial forceps biopsies. For the latter most of the authors recommend four biopsies per bronchoscopy during the processing of diffuse lung disease. In our present series only 1–2 cryobiopsy specimen were sampled as a rule. The high diagnostic yield and the lack of any major complication in our series encourages one to proceed with larger studies and to establish transbronchial cryobiopsy within routine clinical algorithms in the diagnostic of diffuse, parenchymal lung disease.
Background: Due to the small amount of alveolar tissue in transbronchial biopsy (TBB) by forceps, the diagnosis of diffuse, parenchymal lung diseases (DPLD) is inherently problematic, with an overall low yield. The use of cryotechnique in bronchoscopy, including TBB by cryoprobe, has revealed new opportunities in the endoscopical diagnosis of malignant and non-malignant lung diseases. Methods: To evaluate TBB by cryotechnique for non-neoplastic lung diseases, we analyzed 52 patients (mean age 63 ± 13 years) with unclear DPLD. These individuals underwent bronchoscopy with TBB by cryoprobe. Thereafter histopathological results were compared with the clinically evaluated diagnosis. Results: No major complications were seen. Mean specimen diameter in the histological biopsies was 6.9 ± 4.4 mm (Range 2 - 22 mm). A correlation between clinical and histopathological diagnoses was found in 79% of cases (41/52). In the case of UIP (usual interstitial pneumonia) pattern, the concordance was 10/15 (66%). Conclusions: Based on these results TBB by cryotechnique would appear to be a safe and useful method that reveals new perspectives for the endoscopical diagnosis of DPLD.
2,526
229
[ 480 ]
5
[ "lung", "cryobiopsy", "tissue", "uip", "transbronchial", "specimen", "forceps", "diagnosis", "histopathological", "pattern" ]
[ "diagnosis idiopathical lung", "lung parenchyma transbronchial", "transbronchial lung biopsy", "diagnosed lung diseases", "lung biopsy tbb" ]
[CONTENT] Diffuse parenchymal lung disease (DPLD) | Bronchoscopy | Transbronchial biopsy (TBB) | Cryotechnique | Histopathology | Usual interstitial pneumonia (UIP) [SUMMARY]
[CONTENT] Diffuse parenchymal lung disease (DPLD) | Bronchoscopy | Transbronchial biopsy (TBB) | Cryotechnique | Histopathology | Usual interstitial pneumonia (UIP) [SUMMARY]
[CONTENT] Diffuse parenchymal lung disease (DPLD) | Bronchoscopy | Transbronchial biopsy (TBB) | Cryotechnique | Histopathology | Usual interstitial pneumonia (UIP) [SUMMARY]
[CONTENT] Diffuse parenchymal lung disease (DPLD) | Bronchoscopy | Transbronchial biopsy (TBB) | Cryotechnique | Histopathology | Usual interstitial pneumonia (UIP) [SUMMARY]
[CONTENT] Diffuse parenchymal lung disease (DPLD) | Bronchoscopy | Transbronchial biopsy (TBB) | Cryotechnique | Histopathology | Usual interstitial pneumonia (UIP) [SUMMARY]
[CONTENT] Diffuse parenchymal lung disease (DPLD) | Bronchoscopy | Transbronchial biopsy (TBB) | Cryotechnique | Histopathology | Usual interstitial pneumonia (UIP) [SUMMARY]
[CONTENT] Biopsy | Bronchoscopy | Cold Temperature | Humans | Idiopathic Pulmonary Fibrosis | Lung | Retrospective Studies [SUMMARY]
[CONTENT] Biopsy | Bronchoscopy | Cold Temperature | Humans | Idiopathic Pulmonary Fibrosis | Lung | Retrospective Studies [SUMMARY]
[CONTENT] Biopsy | Bronchoscopy | Cold Temperature | Humans | Idiopathic Pulmonary Fibrosis | Lung | Retrospective Studies [SUMMARY]
[CONTENT] Biopsy | Bronchoscopy | Cold Temperature | Humans | Idiopathic Pulmonary Fibrosis | Lung | Retrospective Studies [SUMMARY]
[CONTENT] Biopsy | Bronchoscopy | Cold Temperature | Humans | Idiopathic Pulmonary Fibrosis | Lung | Retrospective Studies [SUMMARY]
[CONTENT] Biopsy | Bronchoscopy | Cold Temperature | Humans | Idiopathic Pulmonary Fibrosis | Lung | Retrospective Studies [SUMMARY]
[CONTENT] diagnosis idiopathical lung | lung parenchyma transbronchial | transbronchial lung biopsy | diagnosed lung diseases | lung biopsy tbb [SUMMARY]
[CONTENT] diagnosis idiopathical lung | lung parenchyma transbronchial | transbronchial lung biopsy | diagnosed lung diseases | lung biopsy tbb [SUMMARY]
[CONTENT] diagnosis idiopathical lung | lung parenchyma transbronchial | transbronchial lung biopsy | diagnosed lung diseases | lung biopsy tbb [SUMMARY]
[CONTENT] diagnosis idiopathical lung | lung parenchyma transbronchial | transbronchial lung biopsy | diagnosed lung diseases | lung biopsy tbb [SUMMARY]
[CONTENT] diagnosis idiopathical lung | lung parenchyma transbronchial | transbronchial lung biopsy | diagnosed lung diseases | lung biopsy tbb [SUMMARY]
[CONTENT] diagnosis idiopathical lung | lung parenchyma transbronchial | transbronchial lung biopsy | diagnosed lung diseases | lung biopsy tbb [SUMMARY]
[CONTENT] lung | cryobiopsy | tissue | uip | transbronchial | specimen | forceps | diagnosis | histopathological | pattern [SUMMARY]
[CONTENT] lung | cryobiopsy | tissue | uip | transbronchial | specimen | forceps | diagnosis | histopathological | pattern [SUMMARY]
[CONTENT] lung | cryobiopsy | tissue | uip | transbronchial | specimen | forceps | diagnosis | histopathological | pattern [SUMMARY]
[CONTENT] lung | cryobiopsy | tissue | uip | transbronchial | specimen | forceps | diagnosis | histopathological | pattern [SUMMARY]
[CONTENT] lung | cryobiopsy | tissue | uip | transbronchial | specimen | forceps | diagnosis | histopathological | pattern [SUMMARY]
[CONTENT] lung | cryobiopsy | tissue | uip | transbronchial | specimen | forceps | diagnosis | histopathological | pattern [SUMMARY]
[CONTENT] tbb | tissue | tbb forceps | size | sample | artificial changes | artificial | alveolar | alveolar tissue | histopathological [SUMMARY]
[CONTENT] flexible | expressed | germany | lung | cryoprobe | diameter | rated | bronchoscopy | mm | patients [SUMMARY]
[CONTENT] cases | 52 | matching | 100 | histopathological findings | matching histopathological findings | matching histopathological | diagnosis | histopathological | findings [SUMMARY]
[CONTENT] cryobiopsy | studies | series | transbronchial | lung disease | transbronchial forceps | disease | diagnostic | forceps | biopsies [SUMMARY]
[CONTENT] lung | cryobiopsy | tissue | transbronchial | forceps | diagnostic | cases | tbb | uip | histopathological [SUMMARY]
[CONTENT] lung | cryobiopsy | tissue | transbronchial | forceps | diagnostic | cases | tbb | uip | histopathological [SUMMARY]
[CONTENT] TBB | DPLD ||| TBB [SUMMARY]
[CONTENT] TBB | 52 | age 63 | 13 years | DPLD ||| TBB ||| [SUMMARY]
[CONTENT] ||| 6.9 | 4.4 mm | Range 2 - 22 ||| 79% ||| UIP | 66% [SUMMARY]
[CONTENT] TBB | DPLD [SUMMARY]
[CONTENT] TBB | DPLD ||| TBB ||| TBB | 52 | age 63 | 13 years | DPLD ||| TBB ||| ||| ||| ||| 6.9 | 4.4 mm | Range 2 - 22 ||| 79% ||| UIP | 66% ||| TBB | DPLD [SUMMARY]
[CONTENT] TBB | DPLD ||| TBB ||| TBB | 52 | age 63 | 13 years | DPLD ||| TBB ||| ||| ||| ||| 6.9 | 4.4 mm | Range 2 - 22 ||| 79% ||| UIP | 66% ||| TBB | DPLD [SUMMARY]
Magnitude of impaired fasting glucose and undiagnosed diabetic mellitus and associated risk factors among adults living in Woreta town, northwest Ethiopia: a community-based cross-sectional study, 2021.
36199073
Impaired fasting glucose (IFG) is an early warning sign that provides prior information with which the future development of DM and diabetes-related problems can be prevented, but early detection of DM is not yet routinely practised in Ethiopia. This study aimed to assess the magnitude of impaired fasting glucose and undiagnosed diabetes mellitus (DM) and associated factors.
BACKGROUND
A community-based, cross-sectional study was conducted from May to June 30, 2021. A structured interviewer-administered questionnaire was used to collect data. Anthropometric measurements were also recorded. Fasting blood sugar (FBS) was assessed from samples taken early in the morning. Epi-Info 7.2.5.0 was used to enter data, which were then exported to SPSS 25 for analysis. To identify factors associated with IFG, logistic regression was used. The level of statistical significance was declared at p < 0.05.
METHODS
Three hundred and twenty-four (324) participants with a mean age of 43.76 ± 17.29 years were enrolled. The overall magnitude of impaired fasting glucose (IFG) and undiagnosed diabetes mellitus (DM) were 43.2% and 10.0%, respectively. Waist circumference (AOR: 1.72, 95% CI 1.23-3.14), hypertension (AOR: 3.48, 95% CI 1.35-8.89), family history of diabetes mellitus (AOR: 2.34, 95% CI 1.37-5.79) and hypertriglyceridemia (AOR: 2.35, 95% CI 1.41-5.43) were found to be independently associated with impaired fasting glucose.
RESULT
Individuals who are overweight, have hypertriglyceridemia, or are hypertensive should have regular checkups and community-based screening.
CONCLUSION
[ "Adult", "Blood Glucose", "Cross-Sectional Studies", "Diabetes Mellitus", "Ethiopia", "Fasting", "Humans", "Hypertension", "Hypertriglyceridemia", "Middle Aged", "Prediabetic State", "Risk Factors" ]
9533517
Background
Diabetes mellitus (DM) is a public health problem characterized by a high blood sugar level in the body. It can be caused by a lack of insulin secretion or by insulin resistance. Its magnitude has been increasing in recent decades: the magnitude in people above the age of 18 increased from 4.7% in 1980 to 8.8% in 2017 [1–3]. It affects more than 425 million people worldwide, with the number expected to rise to 529 million by 2030 [1].
Diabetes mellitus is a major public health problem with serious consequences. Undiagnosed diabetes mellitus (UDM) accounts for 50% of diabetes among people aged 20 to 79 years worldwide and for 69% in Africa, nearly double the proportion in high-income nations (37%). This points to Africa's high burden of sickness and mortality, which may occur at a younger age [2, 3].
Impaired fasting glucose (IFG) is characterized by glycemic levels that are greater than normal but below the diabetes threshold (fasting glucose > 6.0 mmol/L and < 7.0 mmol/L). It is a major risk factor for diabetes and its consequences, including nephropathy, diabetic retinopathy, and an elevated risk of macrovascular disease [4–6]. As a result, understanding IFG is critical for preventing diabetes in the future.
According to studies in Bahir Dar city and rural areas of Dire Dawa, UDM was 10.2% and 6.2%, respectively [7, 8]. A worldwide study projected that 470 million people will be affected by IFG by 2030 [9, 10]. In addition, studies have shown that people diagnosed with IFG are more likely to develop diabetes. According to Tabak et al. [11], patients with IFG have a 5–10% annual risk of developing diabetes. Larson et al. found that 25% of postmenopausal women with impaired fasting glucose (IFG) or impaired glucose tolerance (IGT) tests progressed to type 2 diabetes (T2DM) within 5 years [12]. However, there is evidence that controlling IFG with lifestyle changes, including physical activity and healthy diets [13–15], can prevent or postpone the progression to DM, and another study found that behavioral changes alone lowered the risk of DM by 40–70% [11]. In general, it is critical to identify people with IFG so that early preventive actions can be implemented.
In 2015, the International Diabetes Federation (IDF) reported that the magnitude of DM in Ethiopia was 2.2% [16]. Although IFG is an early warning sign that makes it possible to prevent the future development of DM and diabetes-related problems, data on the prevalence of prediabetes (IFG), undiagnosed DM, and associated risk factors in Ethiopia are inadequate. As a result, this research provides baseline data on the magnitude of IFG, UDM, and associated factors in Ethiopia.
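The glycemic categories referred to above and applied later in the Results (ADA fasting criteria: normal < 100 mg/dl, IFG 100–125 mg/dl, diabetes-range ≥ 126 mg/dl) amount to a simple threshold rule. The snippet below is only an illustrative sketch, not part of the study's analysis; the function name and example values are hypothetical.

```python
def classify_fbs(fbs_mg_dl):
    """Classify a fasting blood sugar value (mg/dl) using the ADA fasting
    criteria cited in this study: normal < 100, IFG 100-125, diabetes >= 126."""
    if fbs_mg_dl < 100:
        return "normal fasting glucose"
    elif fbs_mg_dl <= 125:
        return "impaired fasting glucose (IFG)"
    else:
        return "diabetes-range fasting glucose"

# Hypothetical example values, in mg/dl
for value in (92, 110, 131):
    print(value, "->", classify_fbs(value))
```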
null
null
Results
Sociodemographic characteristics of study participants
A total of 324 participants were incorporated for the final analysis, with a response rate of 100%. The mean age of the participants was 43.76 ± 17.29 years (range 20–80 years). The bulk of the participants, 190 (58.6%), were men; 248 (76.5%) were married; 256 (79.0%) were Orthodox Christians; and 80 (24.7%) had no formal education (Table 1).

Table 1 Sociodemographic characteristics of the study population in Woreta town, Ethiopia, 2021
Variable | Frequency | Percentage
Sex: Male | 190 | 58.6
Sex: Female | 134 | 41.4
Age group (years): 20–30 | 60 | 18.5
Age group (years): 31–40 | 48 | 14.8
Age group (years): 41–50 | 76 | 23.5
Age group (years): 51–60 | 72 | 22.2
Age group (years): ≥ 61 | 68 | 21.0
Marital status: Married | 248 | 76.5
Marital status: Single | 36 | 11.1
Marital status: Divorced | 12 | 3.7
Marital status: Widowed/separated | 28 | 8.6
Level of education: Not able to read and write | 80 | 24.7
Level of education: Primary school | 84 | 25.9
Level of education: High school | 36 | 11.1
Level of education: Higher education | 124 | 38.3
Religion: Orthodox | 256 | 79.0
Religion: Muslim | 56 | 17.3
Religion: Protestant | 12 | 3.7
Occupation: Farmer | 60 | 18.5
Occupation: Housewife | 80 | 24.7
Occupation: Merchant | 64 | 19.8
Occupation: Employed | 120 | 37.0
Smoking habit: Yes | 12 | 3.7
Smoking habit: No | 312 | 96.3
Khat chewing: Yes | 52 | 84.0
Khat chewing: No | 272 | 16.0
Moderate alcohol intake: Yes | 108 | 33.3
Moderate alcohol intake: No | 216 | 66.7
Perform physical exercise: Yes | 16 | 4.9
Perform physical exercise: No | 308 | 95.1

Magnitude of impaired fasting glucose and undiagnosed DM
The magnitude of IFG and UDM among the study subjects were 12% (95% CI 9–16) and 2.3% (95% CI 1.1–4), respectively, according to the American Diabetes Association (ADA) fasting criteria. IFG was found to be prevalent in 19.7% of males and 23.5% of females in our study area (Table 2). 
Table 2 IFG and undiagnosed DM by socio-demographic characteristics of the study participants in Woreta town, Ethiopia, 2021
Parameter | NFG (n = 152, 46.9%) | IFG (n = 140, 43.2%) | UDM (n = 32, 10.0%)
Sex: Male | 96 (29.6) | 64 (19.7) | 32 (10.0)
Sex: Female | 56 (17.3) | 76 (23.5) | 0
Age: 20–30 | 48 (14.8) | 12 (3.7) | 0
Age: 31–40 | 28 (8.6) | 12 (3.7) | 8 (2.5)
Age: 41–50 | 32 (9.9) | 40 (12.3) | 4 (1.2)
Age: 51–60 | 24 (7.4) | 36 (11.1) | 12 (3.7)
Age: ≥ 61 | 20 (6.2) | 40 (12.3) | 8 (2.5)
Occupation: Farmer | 24 (7.4) | 36 (11.1) | 0
Occupation: Housewife | 24 (7.4) | 56 (17.3) | 0
Occupation: Merchant | 36 (11.1) | 20 (6.2) | 8 (2.5)
Occupation: Employed | 56 (17.3) | 40 (12.3) | 24 (7.4)
Marital status: Married | 108 (33.3) | 116 (35.8) | 24 (7.4)
Marital status: Single | 32 (9.9) | 4 (1.2) | 0
Marital status: Divorced | 4 (1.2) | 8 (2.5) | 0
Marital status: Widowed/separated | 8 (2.5) | 12 (3.7) | 8 (2.5)
Level of education: No formal education | 36 (11.1) | 40 (12.3) | 4 (1.2)
Level of education: Primary school | 36 (11.1) | 40 (12.3) | 8 (2.5)
Level of education: High school | 24 (7.4) | 20 (6.2) | 0
Level of education: Higher education | 56 (17.3) | 48 (14.8) | 20 (6.2)
Religion: Orthodox | 124 (38.3) | 108 (33.3) | 24 (7.4)
Religion: Muslim | 28 (8.6) | 24 (7.4) | 4 (1.2)
Religion: Protestant | 0 | 8 (2.5) | 4 (1.2)
IFG impaired fasting glucose, N/n number, NFG normal fasting glucose, UDM undiagnosed diabetes mellitus, % percent

Clinical presentations of study subjects with IFG and UDM
Thirty-six (11.11%) of the subjects had an overweight BMI, whereas 88.9% of the overweight subjects had IFG. About 7.4% of the participants had examined their blood glucose level previously, and 1.2% had screened for DM within the last month. The most commonly elicited symptoms of diabetes were polyuria, polydipsia, polyphagia, and weariness, which were known by eight (2.5%) of the individuals. The prevalence of UDM was reported to be 9.9%, with a 95% confidence interval of 7.5 to 11.7. Furthermore, the magnitude of IFG was 44.14%, with a 95% confidence interval of 19.45 to 64.54 (Table 3). 
Table 3 Clinical presentation and laboratory measurements of study subjects in Woreta town, Ethiopia, 2021
Characteristics | Normal (< 100 mg/dl) | IFG (100–125 mg/dl) | UDM (≥ 126 mg/dl) | Chi-square | p-value
Ever checked your blood sugar level: Yes | 4 (16.7) | 4 (16.7) | 16 (66.6) | 23.48 | 0.000
Ever checked your blood sugar level: No | 148 (49.3) | 136 (45.3) | 16 (5.4)
Checked blood sugar level in the last month: Yes | 0 | 4 (100.0) | 0 | 1.331 | 0.514
Checked blood sugar level in the last month: No | 152 (47.5) | 136 (42.5) | 32 (10.0)
Family history of DM: Yes | 8 (25.0) | 24 (75.0) | 5 | 0.989 | 0.61
Family history of DM: No | 144 (55.4) | 116 (44.6) | 27 (10.9)
Diagnosis of hypertension: Yes | 12 (12.2) | 62 (63.3) | 24 (24.5) | 23.158 | 0.000
Diagnosis of hypertension: No | 140 (61.9) | 78 (34.5) | 8 (3.5)
Know symptoms of DM: Yes | 26 (55.3) | 13 (27.7) | 8 (17.0) | 60.256 | 0.000
Know symptoms of DM: No | 126 (45.5) | 127 (45.8) | 24 (8.7)
Body mass index (BMI), kg/m2: Underweight | 4 (100.0) | 0 | 0 | 23.387 | 0.001
Body mass index (BMI), kg/m2: Normal | 144 (55.4) | 96 (36.9) | 20 (7.7)
Body mass index (BMI), kg/m2: Overweight | 4 (11.1) | 32 (88.9) | 0
Body mass index (BMI), kg/m2: Obese | 0 | 12 (50.0) | 12 (50.0)
Perform physical exercise: Yes | 16 (100.0) | 0 | 0 | 4.761 | 0.092
Perform physical exercise: No | 136 (44.2) | 140 (45.4) | 32 (10.4)
Moderate alcohol intake: Yes | 44 (40.7) | 44 (40.7) | 20 (18.5) | 3.449 | 0.178
Moderate alcohol intake: No | 108 (50.0) | 96 (44.4) | 12 (5.6)
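As an aside before the regression results below: the chi-square statistics and p-values in Table 3 compare the distribution of NFG/IFG/UDM across the levels of each characteristic. The study used SPSS; the snippet below is only an illustrative sketch of how such a test of independence can be computed (SciPy is an assumption, not the authors' tool), shown with the hypertension counts from Table 3. Because of rounding and differences in software and variable handling, the resulting statistic need not match the published value.

```python
# Illustrative only: chi-square test of independence for one row of Table 3
# (hypertension vs. glycemic status), computed with SciPy rather than SPSS.
from scipy.stats import chi2_contingency

# Rows: hypertension Yes / No; columns: NFG, IFG, UDM (counts from Table 3)
observed = [
    [12, 62, 24],   # diagnosed hypertension
    [140, 78, 8],   # no hypertension
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p_value:.4f}")
```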
Factors associated with IFG and UDM
Age, family history of diabetes mellitus, body mass index, waist circumference, hypertriglyceridemia, and hypertension were found to be substantially associated with the prevalence of impaired fasting glucose in the bivariate analysis (Table 4). The results of the multivariate logistic regression analysis revealed that IFG was independently associated with WC, HPN, FHDM and hypertriglyceridemia. The magnitude of IFG was 1.72 times greater in participants with a high-risk WC compared to low-risk WC subjects (AOR = 1.72, 95% CI 1.23–3.14). Prediabetes was 2.35 times more common in people with hypertriglyceridemia than in people with normal TG (AOR = 2.35, 95% CI 1.41–5.43). IFG was 3.48 times more common in people with hypertension compared to people without hypertension (AOR = 3.48, 95% CI 1.35–8.89) (Table 4).

Table 4 Bivariate and multivariate analysis of factors associated with IFG in Woreta town, Ethiopia, 2021
Variable | Category | IFG Yes | IFG No | COR (95% CI) | AOR (95% CI)
Age | 20–30 | 12 | 48 | 1 | 1
Age | 31–40 | 12 | 28 | 1.47 (0.45–4.62) | 0.52 (0.14–2.364)
Age | 41–50 | 32 | 40 | 3.98 (1.36–11.02)* | 2.06 (0.51–7.09)
Age | 51–60 | 24 | 36 | 3.32 (1.09–10.27)* | 1.59 (0.45–5.38)
Age | ≥ 61 | 20 | 40 | 2.46 (0.77–7.18) | 1.47 (0.39–5.18)
BMI (kg/m2) | ≤ 24.9 | 109 | 135 | 1 | 1
BMI (kg/m2) | ≥ 25 | 31 | 17 | 2.26 (1.19–4.30)* | 1.29 (1.19–3.79)*
WC | Low risk | 89 | 125 | 1 | 1
WC | High risk | 51 | 27 | 2.65 (1.55–4.55)** | 1.72 (1.23–3.14)*
TG | Normal | 117 | 139 | 1 | 1
TG | High | 23 | 13 | 2.10 (1.02–4.33)** | 2.35 (1.41–5.43)**
FHDM | Yes | 24 | 8 | 3.72 (1.61–8.6)** | 2.34 (1.37–5.79)**
FHDM | No | 116 | 144 | 1 | 1
HPN | Yes | 62 | 12 | 9.23 (4.71–18.26)** | 3.48 (1.35–8.89)**
HPN | No | 78 | 140 | 1 | 1
AOR adjusted odds ratio, BMI body mass index, CI confidence interval, COR crude odds ratio, DM diabetes mellitus, FHDM family history of diabetes mellitus, IFG impaired fasting glucose, TG triglycerides, WC waist circumference, HPN hypertension
* P < 0.05, ** P < 0.01
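The crude and adjusted odds ratios in Table 4 come from binary logistic regression run in SPSS. The sketch below is not the authors' code; it only illustrates, using statsmodels and a small synthetic data set as stand-ins, how adjusted odds ratios and 95% confidence intervals of this kind are typically derived from a fitted model.

```python
# Illustrative sketch only: obtaining adjusted odds ratios (AOR) from a
# multivariable logistic regression, as reported in Table 4. The data below
# are synthetic placeholders, not the study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "high_wc": rng.integers(0, 2, n),       # 1 = high-risk waist circumference
    "hypertension": rng.integers(0, 2, n),  # 1 = diagnosed hypertension
    "fhdm": rng.integers(0, 2, n),          # 1 = family history of DM
    "high_tg": rng.integers(0, 2, n),       # 1 = hypertriglyceridemia
})
# Synthetic outcome loosely reflecting the direction of the reported associations
linear = -1.0 + 0.5 * df.high_wc + 1.2 * df.hypertension + 0.8 * df.fhdm + 0.9 * df.high_tg
df["ifg"] = rng.binomial(1, 1 / (1 + np.exp(-linear)))

model = smf.logit("ifg ~ high_wc + hypertension + fhdm + high_tg", data=df).fit(disp=False)

# Exponentiated coefficients give the adjusted odds ratios with 95% CIs
aor = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([aor.rename("AOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```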
Conclusion
In conclusion, the magnitude of IFG and UDM was high. A family history of diabetes, hypertension, a high-risk waist circumference, and elevated triglycerides were all associated with IFG. Regular physical activity is a recommended measure for the prevention and control of diabetes mellitus among the population of Ethiopia.
[ "Background", "Methods", "Study area", "Study design and period", "Source population", "Study population", "Sample size", "Sampling procedure", "Data collection procedures", "Variable measurements", "Data quality management", "Data analysis", "Sociodemographic characteristics of study participants", "Magnitude of impaired fasting glucose and undiagnosed DM", "Clinical presentations of study subjects with IFG and UDM", "Factors associated with IFG and UDM" ]
[ "Diabetes mellitus (DM) is a public health problem characterized by a high blood sugar level in the body. It can be caused by a lack of either insulin secretion or insulin resistance. Its magnitude has been increasing in recent decades. The magnitude in people above the age of 18 increased from 4.7% to 1980 to 8.8% in 2017 [1–3]. It affects more than 425million people worldwide, with the number expected to rise to 529million by 2030 [1].\nDiabetes mellitus is a big public health problem with serious consequences. Undiagnosed diabetes mellitus (UDM) affects 50% of people aged 20 to 79 years in the world, 69% in Africa, which is nearly double that of high-income nations (37%). This provides new information on Africa’s high rate of sickness and mortality, which could occur at a younger age [2, 3].\nImpaired fasting glucose (IFG) is characterized by glycemic levels that are greater than normal but below diabetes thresholds (fasting glucose > 60.0 mmol/L and > 70.0 mmol/L). It is a major risk factor for diabetes and its consequences, including nephropathy, diabetic retinopathy, and an elevated risk of macrovascular disease [4–6]. As a result, understanding IFG is critical for avoiding diabetes forecasts in the future.\nAccording to studies in Behar Dar city and rural areas of Dire Dawa, UDM was 10.2% and 6.2%, respectively [7, 8]. A worldwide study by 2030 reported that 470million people will be affected by IFG [9, 10]. In addition, studies have shown that people diagnosed with IFG are more likely to develop diabetes. Diabetes is a 5–10% annual risk for patients with IFG, according to Tabak et al. [11]. Larson et al. found that 25% of postmenopausal women with impaired fasting glucose (IFG) or impaired glucose tolerance (IGT) tests progressed to type 2 diabetes (T2DM) in 5 years [12]. However, there is evidence that controlling IFG with lifestyle changes, including physical activity and healthy diets, [13–15], can prevent or postpone the advancement of DM, and another study found that behavioral changes alone lowered the risk of DM by 40–70%[11]. In general, it is critical to identify people who have an IFG condition so that early preventive actions can be implemented.\nIn 2015, the International Diabetes Federation (IDF) reported that the magnitude of DM in Ethiopia was 2.2%[16]. Despite the fact that IFG is an early warning system that provides a warning to prevent the future development of DM and diabetes-related problems, data on the prevalence of prediabetes (IFG), undiagnosed DM, and associated risk factors in Ethiopia is inadequate. As a result, this research will give baseline data on the magnitude of IFG, UDM, and associated factors in Ethiopia.", "Study area The research was carried out in Wereta, Fogera district. Woreta town is located in the Amhara National Regional State, 561km from Addis Ababa, the capital city of Ethiopia. The town has 18,400 inhabitants, living in five kebeles (the smallest administrative units). There is one government health center and one private hospital in town.\nThe research was carried out in Wereta, Fogera district. Woreta town is located in the Amhara National Regional State, 561km from Addis Ababa, the capital city of Ethiopia. The town has 18,400 inhabitants, living in five kebeles (the smallest administrative units). 
There is one government health center and one private hospital in town.\nStudy design and period A community-based cross-sectional study design was conducted from June to July 2021.\nA community-based cross-sectional study design was conducted from June to July 2021.\nSource population All adults (age over 18 years old) who live in Woreta town.\nAll adults (age over 18 years old) who live in Woreta town.\nStudy population Adults who live in Woreta town (selected kebeles) and fulfill the inclusion criteria.\nAdults who live in Woreta town (selected kebeles) and fulfill the inclusion criteria.\nSample size The sample size was generated using the single population proportion approach, which factored in a 12% prevalence of IFG discovered in a prior study in Koladiba town, northwest Ethiopia [17], a 5% level of significance, and a 5% margin of error. The minimum sample size was 324 as a result of multiplying the estimated result (162) by the design effect of 2.\nThe sample size was generated using the single population proportion approach, which factored in a 12% prevalence of IFG discovered in a prior study in Koladiba town, northwest Ethiopia [17], a 5% level of significance, and a 5% margin of error. The minimum sample size was 324 as a result of multiplying the estimated result (162) by the design effect of 2.\nSampling procedure A multistage sampling procedure was used to choose the participants, and three kebeles (the smallest administrative units with similar demographics) were picked at random from five kebeles. Participants were proportionally assigned to each of the selected kebeles based on the number of households. Thus, 120, 98, and 106 participants were selected from Kebele one, three, and five, respectively, using the systematic random sampling technique. When there were multiple eligible subjects in a family, the lottery method was employed to choose one at random.\nA multistage sampling procedure was used to choose the participants, and three kebeles (the smallest administrative units with similar demographics) were picked at random from five kebeles. Participants were proportionally assigned to each of the selected kebeles based on the number of households. Thus, 120, 98, and 106 participants were selected from Kebele one, three, and five, respectively, using the systematic random sampling technique. When there were multiple eligible subjects in a family, the lottery method was employed to choose one at random.", "The research was carried out in Wereta, Fogera district. Woreta town is located in the Amhara National Regional State, 561km from Addis Ababa, the capital city of Ethiopia. The town has 18,400 inhabitants, living in five kebeles (the smallest administrative units). There is one government health center and one private hospital in town.", "A community-based cross-sectional study design was conducted from June to July 2021.", "All adults (age over 18 years old) who live in Woreta town.", "Adults who live in Woreta town (selected kebeles) and fulfill the inclusion criteria.", "The sample size was generated using the single population proportion approach, which factored in a 12% prevalence of IFG discovered in a prior study in Koladiba town, northwest Ethiopia [17], a 5% level of significance, and a 5% margin of error. 
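The single population proportion calculation just described can be written out explicitly. The snippet below is only an illustrative sketch of that arithmetic; Z = 1.96 for a two-sided 5% significance level is an assumed standard value, not stated in the text.

```python
# Illustrative arithmetic for the single population proportion sample size:
# n = Z^2 * p * (1 - p) / d^2, then multiplied by the design effect.
p = 0.12          # prevalence of IFG from the prior Koladiba study [17]
d = 0.05          # margin of error
z = 1.96          # two-sided 5% significance level (assumed standard value)
design_effect = 2

n_unadjusted = (z ** 2) * p * (1 - p) / (d ** 2)   # ~162.3
n_rounded = round(n_unadjusted)                     # 162, as reported
n_final = n_rounded * design_effect                 # 324

print(n_unadjusted, n_rounded, n_final)
```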
The minimum sample size was 324, obtained by multiplying the estimated result (162) by the design effect of 2.", "A multistage sampling procedure was used to choose the participants: three kebeles (the smallest administrative units, with similar demographics) were picked at random from the town's five kebeles. Participants were proportionally assigned to each of the selected kebeles based on the number of households. Thus, 120, 98, and 106 participants were selected from Kebeles one, three, and five, respectively, using the systematic random sampling technique. When there was more than one eligible subject in a household, the lottery method was employed to choose one at random.", "Before the actual data collection, the data collectors received a 3-day training on interviewing techniques, questionnaire administration, and physical measurement procedures. Data was collected by two health officers and two laboratory technicians.\nThe timing of each participant's last meal was checked to ensure that they had fasted for at least eight hours the night before. A pre-test was conducted in non-study areas to confirm that the data collectors and respondents understood the questions, and the necessary improvements were made prior to the actual data collection on the basis of the pre-test findings.\nThe WHO NCD STEPS tool, which measures NCD risk factors in three steps, was used in this study [18]. First, the study population's core and expanded socio-demographic and behavioral characteristics are gathered. The second step involves taking core and expanded physical measurements, while the third step involves biochemical analysis [18]. The data collection instrument was originally written in English and then translated into Amharic, the language commonly spoken in the study area, by language experts. To guarantee uniformity, a back translation into English was undertaken prior to data collection.\nVariable measurements The International Diabetes Association's (IDA) definition was used to define DM: “fasting blood glucose (FBG) ≥126 mg/dL. Previously undiagnosed DM was defined as participants who had not had their blood sugar tested before, were not taking DM medications during the survey, and had an FBG ≥126 mg/dL” [16].\nAnthropometric measures were taken using standardized methodologies and calibrated equipment. Each study subject was weighed to the nearest 0.1 kg, and height was measured using a portable stadiometer; height and weight were used to calculate BMI (kg/m2). “Underweight was defined as < 18.5 kg/m2, normal as 18.5–24.9 kg/m2, overweight as 25–29.9 kg/m2, and obesity as ≥ 30 kg/m2”. The midpoint between the lower margin of the least palpable rib and the top of the iliac crest was marked to measure the waist circumference (WC). According to the World Health Organization, WC values of more than 102 cm for males and more than 88 cm for females were considered high-risk [18].\nBlood pressure (BP) was measured twice with a mercury sphygmomanometer in a sitting position after 15 min of rest; the second measurement was taken 5 min after the first, and the average of the two readings was used as the BP result. An SBP ≥ 140 mmHg or a DBP ≥ 90 mmHg, or the regular use of antihypertensive medicines, was used to identify hypertension [17].\nA smoking habit was defined as smoking tobacco products on a daily basis during the data collection period. Participants who had consumed alcohol or chewed khat within the 30 days preceding data collection were classified as current alcohol users and khat chewers, respectively.\nData quality management A pretest was conducted to check that the questions were appropriate for the study. An Amharic-translated version of the questionnaire was employed. Physical measurements were taken twice, and in some cases three times, to reduce observer error in measurements and records, and data collectors were rotated to compare values. Before and after each day's data collection, the sphygmomanometer used in the field was checked against the one used in the nearby Debre Tabor comprehensive hospital. For uniformity of reference and test readings, the glucometer device and strips were examined on a regular basis. Every day, the data was cleaned, coded, and recorded.\nData analysis The collected data was double-checked for accuracy. Completed questionnaires were entered into Epi-Info version 7.2.5.0 immediately after data collection and then exported to SPSS version 25 for analysis. Means, percentages, standard deviations, and ranges were calculated using descriptive statistics. A logistic regression was done to identify factors associated with IFG and UDM. The statistical level of significance was declared at p < 0.05.", "The International Diabetes Association's (IDA) definition was used to define DM: “fasting blood glucose (FBG) ≥126 mg/dL. Previously undiagnosed DM was defined as participants who had not had their blood sugar tested before, were not taking DM medications during the survey, and had an FBG ≥126 mg/dL” [16].\nAnthropometric measures were taken using standardized methodologies and calibrated equipment. Each study subject was weighed to the nearest 0.1 kg, and height was measured using a portable stadiometer; height and weight were used to calculate BMI (kg/m2). “Underweight was defined as < 18.5 kg/m2, normal as 18.5–24.9 kg/m2, overweight as 25–29.9 kg/m2, and obesity as ≥ 30 kg/m2”. The midpoint between the lower margin of the least palpable rib and the top of the iliac crest was marked to measure the waist circumference (WC). According to the World Health Organization, WC values of more than 102 cm for males and more than 88 cm for females were considered high-risk [18].\nBlood pressure (BP) was measured twice with a mercury sphygmomanometer in a sitting position after 15 min of rest; the second measurement was taken 5 min after the first, and the average of the two readings was used as the BP result. An SBP ≥ 140 mmHg or a DBP ≥ 90 mmHg, or the regular use of antihypertensive medicines, was used to identify hypertension [17].\nA smoking habit was defined as smoking tobacco products on a daily basis during the data collection period. Participants who had consumed alcohol or chewed khat within the 30 days preceding data collection were classified as current alcohol users and khat chewers, respectively.", "A pretest was conducted to check that the questions were appropriate for the study. An Amharic-translated version of the questionnaire was employed. Physical measurements were taken twice, and in some cases three times, to reduce observer error in measurements and records, and data collectors were rotated to compare values. Before and after each day's data collection, the sphygmomanometer used in the field was checked against the one used in the nearby Debre Tabor comprehensive hospital. For uniformity of reference and test readings, the glucometer device and strips were examined on a regular basis. Every day, the data was cleaned, coded, and recorded.", "The collected data was double-checked for accuracy. Completed questionnaires were entered into Epi-Info version 7.2.5.0 immediately after data collection and then exported to SPSS version 25 for analysis. Means, percentages, standard deviations, and ranges were calculated using descriptive statistics. A logistic regression was done to identify factors associated with IFG and UDM. 
The statistical level of significance was declared at p 0.05.", "A total of 324 participants were incorporated for the final analysis, with a response rate of 100%. The mean age of the participants was 43.76 ± 17.29 years (range 20–80 years). The bulk of the participants, 190 (58.6%) were men; 248 (76.5%) were married; 256 (79.0%) were Orthodox Christians; and 80 (24.7%) had no formal education (Table1).\n\nTable 1Sociodemographic characteristics of the study population in Woreta town, Ethiopia, 2021VariablesFrequencyPercentage\nSex\nMale19058.6Female13441.4\nAge group (years)\n20–306018.531–404814.841–507623.551–607222.2≥ 616821.0\nMarital status\nMarried24876.5Single3611.1Divorced123.7Widowed/separated288.6\nLevel of education\nNot able to read and write8024.7Primary school8425.9High school3611.1Higher educations12438.3\nReligion\nOrthodox25679.0Muslim5617.3Protestant123.7\nOccupation\nFarmer6018.5housewife8024.7Merchant6419.8Employed12037.0\nSmoking habit\nYes123.7No31296.3\nKhat chewing\nYes5284.0No27216.0\nModerate alcohol intake\nYes10833.3No21666.7\nPerform physical exercise\nYes164.9No30895.1\n\nSociodemographic characteristics of the study population in Woreta town, Ethiopia, 2021", "The magnitude of IFG and UDM among the study subjects were 12% (95% CI 9–16) and 2.3% (95% CI 1.1–4), respectively, according to the American Diabetes Association (ADA) fasting criteria. IFG was found to be prevalent in 19.7% of males and 23.5% of females in our study area (Table2).\n\nTable 2IFG and undiagnosed DM by socio-demographic characteristics of the study participants in Woreta town, Ethiopia, 2021ParameterNFG (n = 152, 46.9%)IFG (n = 140, 43.2%)UDM (n = 32, 10.0%)\nSex\nMale96(29.6)64(19.7)32(10.0)Female56(17.3)76(23.5)0\nAge\n20–3048(14.8)12(3.7)031–4028(8.6)12(3.7)8(2.5)41–5032(9.9)40(12.3)4(1.2)51–6024(7.4)36(11.1)12(3.7)≥ 6120(6.2)40(12.3)8(2.5)\nOccupation\nFarmer24(7.4)36(11.1)0housewife24(7.4)56(17.3)0Merchant36(11.1)20(6.2)8(2.5)Employed56(17.3)40(12.3)24(7.4)\nMarital status\nMarried108(33.3)116(35.8)24(7.4)Single32(9.9)4(1.2)0Divorced4(1.2)8(2.5)0Widowed/separated8(2.5)12(3.7)8(2.5)\nLevel of education\nNo formal education36(11.1)40(12.3)4(1.2)Primary school36(11.1)40(12.3)8(2.5)High school24(7.4)20(6.2)0Higher educations56(17.3)48(14.8)20(6.2)\nReligion\nOrthodox124(38.3)108(33.3)24(7.4)Muslim28(8.6)24(7.4)4(1.2)Protestant08(2.5)4(1.2)IFG impaired fasting glucose, N/n number, NFG normal fasting glucose, UDM undiagnosed diabetes mellitus, % percent\n\nIFG and undiagnosed DM by socio-demographic characteristics of the study participants in Woreta town, Ethiopia, 2021\nIFG impaired fasting glucose, N/n number, NFG normal fasting glucose, UDM undiagnosed diabetes mellitus, % percent", "Thirty-six (11.11%) of the subjects had an overweight BMI, whereas 88.9% of the overweight subjects had IFG. About 7.4% of the participants had examined their blood glucose level previously, and 1.2% had screened for DM within the last month. The most commonly elicited symptoms of diabetes were polyuria, polydipsia, polyphagia, and weariness, which were known by eight (2.5%) of the individuals. The prevalence of UDM was reported to be 9.9%, with a 95% confidence interval of 7.5 to 11.7. 
Furthermore, the magnitude of IFG was 44.14%, with a 95% confidence interval of 19.45 to 64.54 (Table3).\n\nTable 3Clinical presentation and laboratory measurements of study subjects in Woreta town, Ethiopia, 2021CharacteristicsNormal(< 100mg/dl)IFG (100–125 mg/dl)UDM(≥ 126 mg/dl)Chi-Squarep-valueEver checked your blood sugar levelYes4(16.7)4(16.7)16(66.6)23.480.000No148(49.3)136(45.3)16(5.4)The checked blood sugar level in the last monthYes04(100.00)01.3310.514No152(47.5)136(42.5)32(10.0)Family history of DMYes8(25.0)24(75.0)50.9890.61No144(55.4)116(44.6)27(10.9)Diagnosis of hypertensionYes12(12.2)62(63.3)24(24.5)23.1580.000No140(61.9)78(34.5)8(3.5)Know symptoms of DMYes26(55.3)13(27.7)8(17.0)60.2560.000No126(45.5)127(45.8)24(8.7)Body mass index (BMI), kg/m2Under Weight4(100.0)0023.3870.001normal144(55.4)96(36.9)20(7.7)Over Weight4(11.1)32(88.9)0Obese012(50.0)12(50.0)Perform physical exerciseYes16(100.0)004.7610.092No136(44.2)140(45.4)32(10.4)Moderate alcohol intakeYes44(40.7)44(40.7)20(18.5)3.4490.178No108(50.0)96(44.4)12(5.6)\n\nClinical presentation and laboratory measurements of study subjects in Woreta town, Ethiopia, 2021", "Age, family history of diabetes mellitus, body mass index, waist circumference, hypertriglyceridemia, and hypertension were found to be substantially associated with the prevalence of impaired fasting glucose in the bivariate analysis (Table4).\nThe results of the multivariate logistic regression analysis revealed that IFG was independently associated with WC, HPN, FHDM and hypertriglyceridemia. The magnitude of IFG was 1.72 times greater in participants with high risk of WC compared to low-risk WC subjects (AOR = 1.72, 95% CI 1.23–3.14). Prediabetes was 2.35 times more common in people with hypertriglyceridemia than in people with normal TG (AOR = 2.35, 95% CI 1.41–5.43). IFG was 3.48 times more common in people with hypertension compared to people without hypertension (AOR = 3.48, 95% CI 1.35–8.89) (Table4).\n\nTable 4Bivariate and multivariate analysis of factors associated with IFG in Woreta town, Ethiopia, 2021VariableCategoryIFGCOR(95%CI)AOR(95%CI)YesNoAge20–3012481131–4012281.47(0.45–4.4.62)0.52(0.14–2.364)41–5032403.98(1.36–11.02) *2.06(0.51–7.09)51–6024363.32(1.09–10.27) *1.59(0.45–5.38)≥ 6120402.46(0.77–7.18)1.47(0.39–5.18)BMI (kg/m2 )≤ 24.910913511≥2531172.26(1.19–4.30) *1.29(1.19–3.79) *WCLow risk8912511High risk51272.65(1.55–4.55) **1.72(1.23–3.14) *TGNormal11713911High23132.10(1.02–4.33) **2.35(1.41–5.43) **FHDMYes2483.72(1.61–8.6) **2.34(1.37–5.79)**No11614411HPNYes62129.23(4.71–18.26) **3.48 (1.35–8.89) **No7814011AOR adjusted odds ratio, BMI body mass index, CI confidence interval, COR crude odds ratio, FHDM family history of diabetic mellitus, DM diabetes mellitus, FHDM family history of diabetes mellitus, IFG impaired fasting glucose, TG triglycerides, WC waist circumference, HPN hypertension* P < 0.05 ** P < 0.01\n\nBivariate and multivariate analysis of factors associated with IFG in Woreta town, Ethiopia, 2021\nAOR adjusted odds ratio, BMI body mass index, CI confidence interval, COR crude odds ratio, FHDM family history of diabetic mellitus, DM diabetes mellitus, FHDM family history of diabetes mellitus, IFG impaired fasting glucose, TG triglycerides, WC waist circumference, HPN hypertension\n* P < 0.05 ** P < 0.01" ]
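The sample-size paragraph in the methods above reports a single population proportion calculation (12% IFG prevalence from the prior Koladiba study, 5% margin of error, 95% confidence) inflated by a design effect of 2. A minimal sketch of that arithmetic, assuming the standard Cochran formula with Z = 1.96 (the paper does not print the formula itself):

```python
# Sketch of the reported sample-size calculation. The formula choice (Cochran's
# single population proportion) and the two-step rounding are assumptions.
Z = 1.96      # 95% confidence level
P = 0.12      # IFG prevalence from the prior Koladiba study [17]
D = 0.05      # margin of error
DEFF = 2      # design effect for the multistage sampling

n0 = (Z ** 2) * P * (1 - P) / (D ** 2)   # base estimate, about 162.3
print(round(n0))          # 162 -> the "estimated result" quoted in the text
print(round(n0) * DEFF)   # 324 -> the reported minimum sample size
```

Rounding the base estimate before applying the design effect reproduces the 324 quoted in the text; rounding only after multiplying would give 325.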
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
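The sampling procedure described above allocates the 324 participants to three kebeles in proportion to household counts and then applies systematic random sampling within each kebele. The household counts are not reported in the paper, so the numbers below are hypothetical, chosen only so that the allocation reproduces the quoted quotas of 120, 98, and 106; the helper functions are an illustrative sketch, not the authors' actual procedure.

```python
import random

def proportional_allocation(total_sample, households):
    """Allocate a total sample across strata in proportion to household counts,
    using largest-remainder rounding so the allocations sum to the total."""
    total_hh = sum(households.values())
    raw = {k: total_sample * n / total_hh for k, n in households.items()}
    alloc = {k: int(v) for k, v in raw.items()}
    leftover = total_sample - sum(alloc.values())
    for k in sorted(raw, key=lambda k: raw[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

def systematic_sample(frame, n):
    """Systematic random sampling: every k-th unit from a random start.
    Assumes the sampling frame has at least n units."""
    k = len(frame) // n
    start = random.randrange(k)
    return frame[start::k][:n]

# Hypothetical household counts (not reported in the paper).
print(proportional_allocation(324, {"kebele 1": 1200, "kebele 3": 980, "kebele 5": 1060}))
# -> {'kebele 1': 120, 'kebele 3': 98, 'kebele 5': 106}
```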
[ "Background", "Methods", "Study area", "Study design and period", "Source population", "Study population", "Sample size", "Sampling procedure", "Data collection procedures", "Variable measurements", "Data quality management", "Data analysis", "Results", "Sociodemographic characteristics of study participants", "Magnitude of impaired fasting glucose and undiagnosed DM", "Clinical presentations of study subjects with IFG and UDM", "Factors associated with IFG and UDM", "Discussion", "Conclusion" ]
[ "Diabetes mellitus (DM) is a public health problem characterized by a high blood sugar level in the body. It can be caused by a lack of either insulin secretion or insulin resistance. Its magnitude has been increasing in recent decades. The magnitude in people above the age of 18 increased from 4.7% to 1980 to 8.8% in 2017 [1–3]. It affects more than 425million people worldwide, with the number expected to rise to 529million by 2030 [1].\nDiabetes mellitus is a big public health problem with serious consequences. Undiagnosed diabetes mellitus (UDM) affects 50% of people aged 20 to 79 years in the world, 69% in Africa, which is nearly double that of high-income nations (37%). This provides new information on Africa’s high rate of sickness and mortality, which could occur at a younger age [2, 3].\nImpaired fasting glucose (IFG) is characterized by glycemic levels that are greater than normal but below diabetes thresholds (fasting glucose > 60.0 mmol/L and > 70.0 mmol/L). It is a major risk factor for diabetes and its consequences, including nephropathy, diabetic retinopathy, and an elevated risk of macrovascular disease [4–6]. As a result, understanding IFG is critical for avoiding diabetes forecasts in the future.\nAccording to studies in Behar Dar city and rural areas of Dire Dawa, UDM was 10.2% and 6.2%, respectively [7, 8]. A worldwide study by 2030 reported that 470million people will be affected by IFG [9, 10]. In addition, studies have shown that people diagnosed with IFG are more likely to develop diabetes. Diabetes is a 5–10% annual risk for patients with IFG, according to Tabak et al. [11]. Larson et al. found that 25% of postmenopausal women with impaired fasting glucose (IFG) or impaired glucose tolerance (IGT) tests progressed to type 2 diabetes (T2DM) in 5 years [12]. However, there is evidence that controlling IFG with lifestyle changes, including physical activity and healthy diets, [13–15], can prevent or postpone the advancement of DM, and another study found that behavioral changes alone lowered the risk of DM by 40–70%[11]. In general, it is critical to identify people who have an IFG condition so that early preventive actions can be implemented.\nIn 2015, the International Diabetes Federation (IDF) reported that the magnitude of DM in Ethiopia was 2.2%[16]. Despite the fact that IFG is an early warning system that provides a warning to prevent the future development of DM and diabetes-related problems, data on the prevalence of prediabetes (IFG), undiagnosed DM, and associated risk factors in Ethiopia is inadequate. As a result, this research will give baseline data on the magnitude of IFG, UDM, and associated factors in Ethiopia.", "Study area The research was carried out in Wereta, Fogera district. Woreta town is located in the Amhara National Regional State, 561km from Addis Ababa, the capital city of Ethiopia. The town has 18,400 inhabitants, living in five kebeles (the smallest administrative units). There is one government health center and one private hospital in town.\nThe research was carried out in Wereta, Fogera district. Woreta town is located in the Amhara National Regional State, 561km from Addis Ababa, the capital city of Ethiopia. The town has 18,400 inhabitants, living in five kebeles (the smallest administrative units). 
There is one government health center and one private hospital in town.\nStudy design and period A community-based cross-sectional study design was conducted from June to July 2021.\nA community-based cross-sectional study design was conducted from June to July 2021.\nSource population All adults (age over 18 years old) who live in Woreta town.\nAll adults (age over 18 years old) who live in Woreta town.\nStudy population Adults who live in Woreta town (selected kebeles) and fulfill the inclusion criteria.\nAdults who live in Woreta town (selected kebeles) and fulfill the inclusion criteria.\nSample size The sample size was generated using the single population proportion approach, which factored in a 12% prevalence of IFG discovered in a prior study in Koladiba town, northwest Ethiopia [17], a 5% level of significance, and a 5% margin of error. The minimum sample size was 324 as a result of multiplying the estimated result (162) by the design effect of 2.\nThe sample size was generated using the single population proportion approach, which factored in a 12% prevalence of IFG discovered in a prior study in Koladiba town, northwest Ethiopia [17], a 5% level of significance, and a 5% margin of error. The minimum sample size was 324 as a result of multiplying the estimated result (162) by the design effect of 2.\nSampling procedure A multistage sampling procedure was used to choose the participants, and three kebeles (the smallest administrative units with similar demographics) were picked at random from five kebeles. Participants were proportionally assigned to each of the selected kebeles based on the number of households. Thus, 120, 98, and 106 participants were selected from Kebele one, three, and five, respectively, using the systematic random sampling technique. When there were multiple eligible subjects in a family, the lottery method was employed to choose one at random.\nA multistage sampling procedure was used to choose the participants, and three kebeles (the smallest administrative units with similar demographics) were picked at random from five kebeles. Participants were proportionally assigned to each of the selected kebeles based on the number of households. Thus, 120, 98, and 106 participants were selected from Kebele one, three, and five, respectively, using the systematic random sampling technique. When there were multiple eligible subjects in a family, the lottery method was employed to choose one at random.", "The research was carried out in Wereta, Fogera district. Woreta town is located in the Amhara National Regional State, 561km from Addis Ababa, the capital city of Ethiopia. The town has 18,400 inhabitants, living in five kebeles (the smallest administrative units). There is one government health center and one private hospital in town.", "A community-based cross-sectional study design was conducted from June to July 2021.", "All adults (age over 18 years old) who live in Woreta town.", "Adults who live in Woreta town (selected kebeles) and fulfill the inclusion criteria.", "The sample size was generated using the single population proportion approach, which factored in a 12% prevalence of IFG discovered in a prior study in Koladiba town, northwest Ethiopia [17], a 5% level of significance, and a 5% margin of error. 
The minimum sample size was 324 as a result of multiplying the estimated result (162) by the design effect of 2.", "A multistage sampling procedure was used to choose the participants, and three kebeles (the smallest administrative units with similar demographics) were picked at random from five kebeles. Participants were proportionally assigned to each of the selected kebeles based on the number of households. Thus, 120, 98, and 106 participants were selected from Kebele one, three, and five, respectively, using the systematic random sampling technique. When there were multiple eligible subjects in a family, the lottery method was employed to choose one at random.", "Before the actual data collection, the data collectors received a 3-day training on interviewing techniques, questionnaire administration, and physical measuring procedures. Data was collected by two health officers and two laboratory technicians.\nThe timing of a participant’s last meal was checked to ensure that they had fasted for at least eight hours the night before. A pre-test was conducted from non-study areas to confirm that the data collectors and responders understood the questions. As a result of the pre-test comments, necessary improvements were made prior to actual data collection.\nThe WHO NCD STEPS tool, which consists of three steps for measuring the risk of NCD risk variables, was used in this study [18]. First, the study population’s core and expanded socio-demographic and behavioral characteristics are gathered. The second step involves taking core and expanded physical measurements, while the third step involves biochemical analysis [18]. The data collection application was originally written in English and then translated into Amharic, the commonly spoken language in the study area, by language experts. To guarantee uniformity, a back translation into English was undertaken prior to data collection.\nVariable measurements The International Diabetes Association’s (IDA) definition was used to define DM: “fasting blood glucose (FBG) ≥126mg/dL. Previously undiagnosed DM was defined as participants who had not had their blood sugar tested before and were not taking DM medications during the survey and had an FBG ≥126mg/dL“[16].\nAnthropometric measures were taken using standardized methodologies and calibrated equipment. Each study subject was weighed to the nearest 0.1kg. Height and weight were measured to calculate their BMI (kg/m2). Height was measured using a portable stadiometer. “Underweight was defined as < 18.5 kg/m2, normal was defined as 18.5–24.9 kg/m2, overweight was defined as 25–29.9 kg/m2, and obesity was defined as ≥ 30 kg/m2”. The midpoint between the lower margin of the least palpable rib and the top of the iliac crest was marked to measure the waist circumference (WC). According to the World Health Organization, WC values of more than 102cm for boys and more than 88cm for females were considered high-risk [18].\nBlood pressure (BP) was measured twice with a mercury sphygmomanometer, first in a sitting position and once after 15min of rest. The second BP measurement was done 5min after the first. The average value was calculated and the BP result was taken. SBP of 140 mmHg or DBP of 90 mmHg, as well as the usage of antihypertensive medicines on a regular basis, were used to identify hypertension [17].\nA smoking habit was defined as smoking tobacco products on a daily basis during the data collection period. 
Participants who consumed alcohol and chewed khat within the previous 30 days of the data collection period were classified as current alcohol users and khat chewers, respectively.\nThe International Diabetes Association’s (IDA) definition was used to define DM: “fasting blood glucose (FBG) ≥126mg/dL. Previously undiagnosed DM was defined as participants who had not had their blood sugar tested before and were not taking DM medications during the survey and had an FBG ≥126mg/dL“[16].\nAnthropometric measures were taken using standardized methodologies and calibrated equipment. Each study subject was weighed to the nearest 0.1kg. Height and weight were measured to calculate their BMI (kg/m2). Height was measured using a portable stadiometer. “Underweight was defined as < 18.5 kg/m2, normal was defined as 18.5–24.9 kg/m2, overweight was defined as 25–29.9 kg/m2, and obesity was defined as ≥ 30 kg/m2”. The midpoint between the lower margin of the least palpable rib and the top of the iliac crest was marked to measure the waist circumference (WC). According to the World Health Organization, WC values of more than 102cm for boys and more than 88cm for females were considered high-risk [18].\nBlood pressure (BP) was measured twice with a mercury sphygmomanometer, first in a sitting position and once after 15min of rest. The second BP measurement was done 5min after the first. The average value was calculated and the BP result was taken. SBP of 140 mmHg or DBP of 90 mmHg, as well as the usage of antihypertensive medicines on a regular basis, were used to identify hypertension [17].\nA smoking habit was defined as smoking tobacco products on a daily basis during the data collection period. Participants who consumed alcohol and chewed khat within the previous 30 days of the data collection period were classified as current alcohol users and khat chewers, respectively.\nData quality management A pretest was conducted to check that the questions were appropriate for the study. An Amharic-translated version of the questionnaire was employed. Physical measurements were taken twice, and in some cases, three times, to reduce observer error in measurements and records, and data collectors were rotated to compare values. Before and after each day’s data collection, the sphygmomanometer used in the field was compared to the one used in the nearby hospital, Debre Tabor comprehensive hospital. For uniformity in reference and test readings, the glucometer gadget and strips were examined on a regular basis. Every day, the data is cleaned, coded, and recorded.\nA pretest was conducted to check that the questions were appropriate for the study. An Amharic-translated version of the questionnaire was employed. Physical measurements were taken twice, and in some cases, three times, to reduce observer error in measurements and records, and data collectors were rotated to compare values. Before and after each day’s data collection, the sphygmomanometer used in the field was compared to the one used in the nearby hospital, Debre Tabor comprehensive hospital. For uniformity in reference and test readings, the glucometer gadget and strips were examined on a regular basis. Every day, the data is cleaned, coded, and recorded.\nData analysis The collected data was double-checked for accuracy. Completed questionnaires were entered into Epi–Info version 7.2.5.0 immediately after data collection and then exported to SPSS version 25 for analysis. 
Means, percentages, standard deviations, and ranges were calculated using descriptive statistics. A logistics regression was done to identify factors associated with IFG and UDM. The statistical level of significance was declared at p 0.05.\nThe collected data was double-checked for accuracy. Completed questionnaires were entered into Epi–Info version 7.2.5.0 immediately after data collection and then exported to SPSS version 25 for analysis. Means, percentages, standard deviations, and ranges were calculated using descriptive statistics. A logistics regression was done to identify factors associated with IFG and UDM. The statistical level of significance was declared at p 0.05.", "The International Diabetes Association’s (IDA) definition was used to define DM: “fasting blood glucose (FBG) ≥126mg/dL. Previously undiagnosed DM was defined as participants who had not had their blood sugar tested before and were not taking DM medications during the survey and had an FBG ≥126mg/dL“[16].\nAnthropometric measures were taken using standardized methodologies and calibrated equipment. Each study subject was weighed to the nearest 0.1kg. Height and weight were measured to calculate their BMI (kg/m2). Height was measured using a portable stadiometer. “Underweight was defined as < 18.5 kg/m2, normal was defined as 18.5–24.9 kg/m2, overweight was defined as 25–29.9 kg/m2, and obesity was defined as ≥ 30 kg/m2”. The midpoint between the lower margin of the least palpable rib and the top of the iliac crest was marked to measure the waist circumference (WC). According to the World Health Organization, WC values of more than 102cm for boys and more than 88cm for females were considered high-risk [18].\nBlood pressure (BP) was measured twice with a mercury sphygmomanometer, first in a sitting position and once after 15min of rest. The second BP measurement was done 5min after the first. The average value was calculated and the BP result was taken. SBP of 140 mmHg or DBP of 90 mmHg, as well as the usage of antihypertensive medicines on a regular basis, were used to identify hypertension [17].\nA smoking habit was defined as smoking tobacco products on a daily basis during the data collection period. Participants who consumed alcohol and chewed khat within the previous 30 days of the data collection period were classified as current alcohol users and khat chewers, respectively.", "A pretest was conducted to check that the questions were appropriate for the study. An Amharic-translated version of the questionnaire was employed. Physical measurements were taken twice, and in some cases, three times, to reduce observer error in measurements and records, and data collectors were rotated to compare values. Before and after each day’s data collection, the sphygmomanometer used in the field was compared to the one used in the nearby hospital, Debre Tabor comprehensive hospital. For uniformity in reference and test readings, the glucometer gadget and strips were examined on a regular basis. Every day, the data is cleaned, coded, and recorded.", "The collected data was double-checked for accuracy. Completed questionnaires were entered into Epi–Info version 7.2.5.0 immediately after data collection and then exported to SPSS version 25 for analysis. Means, percentages, standard deviations, and ranges were calculated using descriptive statistics. A logistics regression was done to identify factors associated with IFG and UDM. 
The statistical level of significance was declared at p 0.05.", "Sociodemographic characteristics of study participants A total of 324 participants were incorporated for the final analysis, with a response rate of 100%. The mean age of the participants was 43.76 ± 17.29 years (range 20–80 years). The bulk of the participants, 190 (58.6%) were men; 248 (76.5%) were married; 256 (79.0%) were Orthodox Christians; and 80 (24.7%) had no formal education (Table1).\n\nTable 1Sociodemographic characteristics of the study population in Woreta town, Ethiopia, 2021VariablesFrequencyPercentage\nSex\nMale19058.6Female13441.4\nAge group (years)\n20–306018.531–404814.841–507623.551–607222.2≥ 616821.0\nMarital status\nMarried24876.5Single3611.1Divorced123.7Widowed/separated288.6\nLevel of education\nNot able to read and write8024.7Primary school8425.9High school3611.1Higher educations12438.3\nReligion\nOrthodox25679.0Muslim5617.3Protestant123.7\nOccupation\nFarmer6018.5housewife8024.7Merchant6419.8Employed12037.0\nSmoking habit\nYes123.7No31296.3\nKhat chewing\nYes5284.0No27216.0\nModerate alcohol intake\nYes10833.3No21666.7\nPerform physical exercise\nYes164.9No30895.1\n\nSociodemographic characteristics of the study population in Woreta town, Ethiopia, 2021\nA total of 324 participants were incorporated for the final analysis, with a response rate of 100%. The mean age of the participants was 43.76 ± 17.29 years (range 20–80 years). The bulk of the participants, 190 (58.6%) were men; 248 (76.5%) were married; 256 (79.0%) were Orthodox Christians; and 80 (24.7%) had no formal education (Table1).\n\nTable 1Sociodemographic characteristics of the study population in Woreta town, Ethiopia, 2021VariablesFrequencyPercentage\nSex\nMale19058.6Female13441.4\nAge group (years)\n20–306018.531–404814.841–507623.551–607222.2≥ 616821.0\nMarital status\nMarried24876.5Single3611.1Divorced123.7Widowed/separated288.6\nLevel of education\nNot able to read and write8024.7Primary school8425.9High school3611.1Higher educations12438.3\nReligion\nOrthodox25679.0Muslim5617.3Protestant123.7\nOccupation\nFarmer6018.5housewife8024.7Merchant6419.8Employed12037.0\nSmoking habit\nYes123.7No31296.3\nKhat chewing\nYes5284.0No27216.0\nModerate alcohol intake\nYes10833.3No21666.7\nPerform physical exercise\nYes164.9No30895.1\n\nSociodemographic characteristics of the study population in Woreta town, Ethiopia, 2021\nMagnitude of impaired fasting glucose and undiagnosed DM The magnitude of IFG and UDM among the study subjects were 12% (95% CI 9–16) and 2.3% (95% CI 1.1–4), respectively, according to the American Diabetes Association (ADA) fasting criteria. 
IFG was found to be prevalent in 19.7% of males and 23.5% of females in our study area (Table2).\n\nTable 2IFG and undiagnosed DM by socio-demographic characteristics of the study participants in Woreta town, Ethiopia, 2021ParameterNFG (n = 152, 46.9%)IFG (n = 140, 43.2%)UDM (n = 32, 10.0%)\nSex\nMale96(29.6)64(19.7)32(10.0)Female56(17.3)76(23.5)0\nAge\n20–3048(14.8)12(3.7)031–4028(8.6)12(3.7)8(2.5)41–5032(9.9)40(12.3)4(1.2)51–6024(7.4)36(11.1)12(3.7)≥ 6120(6.2)40(12.3)8(2.5)\nOccupation\nFarmer24(7.4)36(11.1)0housewife24(7.4)56(17.3)0Merchant36(11.1)20(6.2)8(2.5)Employed56(17.3)40(12.3)24(7.4)\nMarital status\nMarried108(33.3)116(35.8)24(7.4)Single32(9.9)4(1.2)0Divorced4(1.2)8(2.5)0Widowed/separated8(2.5)12(3.7)8(2.5)\nLevel of education\nNo formal education36(11.1)40(12.3)4(1.2)Primary school36(11.1)40(12.3)8(2.5)High school24(7.4)20(6.2)0Higher educations56(17.3)48(14.8)20(6.2)\nReligion\nOrthodox124(38.3)108(33.3)24(7.4)Muslim28(8.6)24(7.4)4(1.2)Protestant08(2.5)4(1.2)IFG impaired fasting glucose, N/n number, NFG normal fasting glucose, UDM undiagnosed diabetes mellitus, % percent\n\nIFG and undiagnosed DM by socio-demographic characteristics of the study participants in Woreta town, Ethiopia, 2021\nIFG impaired fasting glucose, N/n number, NFG normal fasting glucose, UDM undiagnosed diabetes mellitus, % percent\nThe magnitude of IFG and UDM among the study subjects were 12% (95% CI 9–16) and 2.3% (95% CI 1.1–4), respectively, according to the American Diabetes Association (ADA) fasting criteria. IFG was found to be prevalent in 19.7% of males and 23.5% of females in our study area (Table2).\n\nTable 2IFG and undiagnosed DM by socio-demographic characteristics of the study participants in Woreta town, Ethiopia, 2021ParameterNFG (n = 152, 46.9%)IFG (n = 140, 43.2%)UDM (n = 32, 10.0%)\nSex\nMale96(29.6)64(19.7)32(10.0)Female56(17.3)76(23.5)0\nAge\n20–3048(14.8)12(3.7)031–4028(8.6)12(3.7)8(2.5)41–5032(9.9)40(12.3)4(1.2)51–6024(7.4)36(11.1)12(3.7)≥ 6120(6.2)40(12.3)8(2.5)\nOccupation\nFarmer24(7.4)36(11.1)0housewife24(7.4)56(17.3)0Merchant36(11.1)20(6.2)8(2.5)Employed56(17.3)40(12.3)24(7.4)\nMarital status\nMarried108(33.3)116(35.8)24(7.4)Single32(9.9)4(1.2)0Divorced4(1.2)8(2.5)0Widowed/separated8(2.5)12(3.7)8(2.5)\nLevel of education\nNo formal education36(11.1)40(12.3)4(1.2)Primary school36(11.1)40(12.3)8(2.5)High school24(7.4)20(6.2)0Higher educations56(17.3)48(14.8)20(6.2)\nReligion\nOrthodox124(38.3)108(33.3)24(7.4)Muslim28(8.6)24(7.4)4(1.2)Protestant08(2.5)4(1.2)IFG impaired fasting glucose, N/n number, NFG normal fasting glucose, UDM undiagnosed diabetes mellitus, % percent\n\nIFG and undiagnosed DM by socio-demographic characteristics of the study participants in Woreta town, Ethiopia, 2021\nIFG impaired fasting glucose, N/n number, NFG normal fasting glucose, UDM undiagnosed diabetes mellitus, % percent\nClinical presentations of study subjects with IFG and UDM Thirty-six (11.11%) of the subjects had an overweight BMI, whereas 88.9% of the overweight subjects had IFG. About 7.4% of the participants had examined their blood glucose level previously, and 1.2% had screened for DM within the last month. The most commonly elicited symptoms of diabetes were polyuria, polydipsia, polyphagia, and weariness, which were known by eight (2.5%) of the individuals. The prevalence of UDM was reported to be 9.9%, with a 95% confidence interval of 7.5 to 11.7. 
Furthermore, the magnitude of IFG was 44.14%, with a 95% confidence interval of 19.45 to 64.54 (Table3).\n\nTable 3Clinical presentation and laboratory measurements of study subjects in Woreta town, Ethiopia, 2021CharacteristicsNormal(< 100mg/dl)IFG (100–125 mg/dl)UDM(≥ 126 mg/dl)Chi-Squarep-valueEver checked your blood sugar levelYes4(16.7)4(16.7)16(66.6)23.480.000No148(49.3)136(45.3)16(5.4)The checked blood sugar level in the last monthYes04(100.00)01.3310.514No152(47.5)136(42.5)32(10.0)Family history of DMYes8(25.0)24(75.0)50.9890.61No144(55.4)116(44.6)27(10.9)Diagnosis of hypertensionYes12(12.2)62(63.3)24(24.5)23.1580.000No140(61.9)78(34.5)8(3.5)Know symptoms of DMYes26(55.3)13(27.7)8(17.0)60.2560.000No126(45.5)127(45.8)24(8.7)Body mass index (BMI), kg/m2Under Weight4(100.0)0023.3870.001normal144(55.4)96(36.9)20(7.7)Over Weight4(11.1)32(88.9)0Obese012(50.0)12(50.0)Perform physical exerciseYes16(100.0)004.7610.092No136(44.2)140(45.4)32(10.4)Moderate alcohol intakeYes44(40.7)44(40.7)20(18.5)3.4490.178No108(50.0)96(44.4)12(5.6)\n\nClinical presentation and laboratory measurements of study subjects in Woreta town, Ethiopia, 2021\nThirty-six (11.11%) of the subjects had an overweight BMI, whereas 88.9% of the overweight subjects had IFG. About 7.4% of the participants had examined their blood glucose level previously, and 1.2% had screened for DM within the last month. The most commonly elicited symptoms of diabetes were polyuria, polydipsia, polyphagia, and weariness, which were known by eight (2.5%) of the individuals. The prevalence of UDM was reported to be 9.9%, with a 95% confidence interval of 7.5 to 11.7. Furthermore, the magnitude of IFG was 44.14%, with a 95% confidence interval of 19.45 to 64.54 (Table3).\n\nTable 3Clinical presentation and laboratory measurements of study subjects in Woreta town, Ethiopia, 2021CharacteristicsNormal(< 100mg/dl)IFG (100–125 mg/dl)UDM(≥ 126 mg/dl)Chi-Squarep-valueEver checked your blood sugar levelYes4(16.7)4(16.7)16(66.6)23.480.000No148(49.3)136(45.3)16(5.4)The checked blood sugar level in the last monthYes04(100.00)01.3310.514No152(47.5)136(42.5)32(10.0)Family history of DMYes8(25.0)24(75.0)50.9890.61No144(55.4)116(44.6)27(10.9)Diagnosis of hypertensionYes12(12.2)62(63.3)24(24.5)23.1580.000No140(61.9)78(34.5)8(3.5)Know symptoms of DMYes26(55.3)13(27.7)8(17.0)60.2560.000No126(45.5)127(45.8)24(8.7)Body mass index (BMI), kg/m2Under Weight4(100.0)0023.3870.001normal144(55.4)96(36.9)20(7.7)Over Weight4(11.1)32(88.9)0Obese012(50.0)12(50.0)Perform physical exerciseYes16(100.0)004.7610.092No136(44.2)140(45.4)32(10.4)Moderate alcohol intakeYes44(40.7)44(40.7)20(18.5)3.4490.178No108(50.0)96(44.4)12(5.6)\n\nClinical presentation and laboratory measurements of study subjects in Woreta town, Ethiopia, 2021\nFactors associated with IFG and UDM Age, family history of diabetes mellitus, body mass index, waist circumference, hypertriglyceridemia, and hypertension were found to be substantially associated with the prevalence of impaired fasting glucose in the bivariate analysis (Table4).\nThe results of the multivariate logistic regression analysis revealed that IFG was independently associated with WC, HPN, FHDM and hypertriglyceridemia. The magnitude of IFG was 1.72 times greater in participants with high risk of WC compared to low-risk WC subjects (AOR = 1.72, 95% CI 1.23–3.14). Prediabetes was 2.35 times more common in people with hypertriglyceridemia than in people with normal TG (AOR = 2.35, 95% CI 1.41–5.43). 
IFG was 3.48 times more common in people with hypertension compared to people without hypertension (AOR = 3.48, 95% CI 1.35–8.89) (Table4).\n\nTable 4Bivariate and multivariate analysis of factors associated with IFG in Woreta town, Ethiopia, 2021VariableCategoryIFGCOR(95%CI)AOR(95%CI)YesNoAge20–3012481131–4012281.47(0.45–4.4.62)0.52(0.14–2.364)41–5032403.98(1.36–11.02) *2.06(0.51–7.09)51–6024363.32(1.09–10.27) *1.59(0.45–5.38)≥ 6120402.46(0.77–7.18)1.47(0.39–5.18)BMI (kg/m2 )≤ 24.910913511≥2531172.26(1.19–4.30) *1.29(1.19–3.79) *WCLow risk8912511High risk51272.65(1.55–4.55) **1.72(1.23–3.14) *TGNormal11713911High23132.10(1.02–4.33) **2.35(1.41–5.43) **FHDMYes2483.72(1.61–8.6) **2.34(1.37–5.79)**No11614411HPNYes62129.23(4.71–18.26) **3.48 (1.35–8.89) **No7814011AOR adjusted odds ratio, BMI body mass index, CI confidence interval, COR crude odds ratio, FHDM family history of diabetic mellitus, DM diabetes mellitus, FHDM family history of diabetes mellitus, IFG impaired fasting glucose, TG triglycerides, WC waist circumference, HPN hypertension* P < 0.05 ** P < 0.01\n\nBivariate and multivariate analysis of factors associated with IFG in Woreta town, Ethiopia, 2021\nAOR adjusted odds ratio, BMI body mass index, CI confidence interval, COR crude odds ratio, FHDM family history of diabetic mellitus, DM diabetes mellitus, FHDM family history of diabetes mellitus, IFG impaired fasting glucose, TG triglycerides, WC waist circumference, HPN hypertension\n* P < 0.05 ** P < 0.01\nAge, family history of diabetes mellitus, body mass index, waist circumference, hypertriglyceridemia, and hypertension were found to be substantially associated with the prevalence of impaired fasting glucose in the bivariate analysis (Table4).\nThe results of the multivariate logistic regression analysis revealed that IFG was independently associated with WC, HPN, FHDM and hypertriglyceridemia. The magnitude of IFG was 1.72 times greater in participants with high risk of WC compared to low-risk WC subjects (AOR = 1.72, 95% CI 1.23–3.14). Prediabetes was 2.35 times more common in people with hypertriglyceridemia than in people with normal TG (AOR = 2.35, 95% CI 1.41–5.43). 
IFG was 3.48 times more common in people with hypertension compared to people without hypertension (AOR = 3.48, 95% CI 1.35–8.89) (Table4).\n\nTable 4Bivariate and multivariate analysis of factors associated with IFG in Woreta town, Ethiopia, 2021VariableCategoryIFGCOR(95%CI)AOR(95%CI)YesNoAge20–3012481131–4012281.47(0.45–4.4.62)0.52(0.14–2.364)41–5032403.98(1.36–11.02) *2.06(0.51–7.09)51–6024363.32(1.09–10.27) *1.59(0.45–5.38)≥ 6120402.46(0.77–7.18)1.47(0.39–5.18)BMI (kg/m2 )≤ 24.910913511≥2531172.26(1.19–4.30) *1.29(1.19–3.79) *WCLow risk8912511High risk51272.65(1.55–4.55) **1.72(1.23–3.14) *TGNormal11713911High23132.10(1.02–4.33) **2.35(1.41–5.43) **FHDMYes2483.72(1.61–8.6) **2.34(1.37–5.79)**No11614411HPNYes62129.23(4.71–18.26) **3.48 (1.35–8.89) **No7814011AOR adjusted odds ratio, BMI body mass index, CI confidence interval, COR crude odds ratio, FHDM family history of diabetic mellitus, DM diabetes mellitus, FHDM family history of diabetes mellitus, IFG impaired fasting glucose, TG triglycerides, WC waist circumference, HPN hypertension* P < 0.05 ** P < 0.01\n\nBivariate and multivariate analysis of factors associated with IFG in Woreta town, Ethiopia, 2021\nAOR adjusted odds ratio, BMI body mass index, CI confidence interval, COR crude odds ratio, FHDM family history of diabetic mellitus, DM diabetes mellitus, FHDM family history of diabetes mellitus, IFG impaired fasting glucose, TG triglycerides, WC waist circumference, HPN hypertension\n* P < 0.05 ** P < 0.01", "A total of 324 participants were incorporated for the final analysis, with a response rate of 100%. The mean age of the participants was 43.76 ± 17.29 years (range 20–80 years). The bulk of the participants, 190 (58.6%) were men; 248 (76.5%) were married; 256 (79.0%) were Orthodox Christians; and 80 (24.7%) had no formal education (Table1).\n\nTable 1Sociodemographic characteristics of the study population in Woreta town, Ethiopia, 2021VariablesFrequencyPercentage\nSex\nMale19058.6Female13441.4\nAge group (years)\n20–306018.531–404814.841–507623.551–607222.2≥ 616821.0\nMarital status\nMarried24876.5Single3611.1Divorced123.7Widowed/separated288.6\nLevel of education\nNot able to read and write8024.7Primary school8425.9High school3611.1Higher educations12438.3\nReligion\nOrthodox25679.0Muslim5617.3Protestant123.7\nOccupation\nFarmer6018.5housewife8024.7Merchant6419.8Employed12037.0\nSmoking habit\nYes123.7No31296.3\nKhat chewing\nYes5284.0No27216.0\nModerate alcohol intake\nYes10833.3No21666.7\nPerform physical exercise\nYes164.9No30895.1\n\nSociodemographic characteristics of the study population in Woreta town, Ethiopia, 2021", "The magnitude of IFG and UDM among the study subjects were 12% (95% CI 9–16) and 2.3% (95% CI 1.1–4), respectively, according to the American Diabetes Association (ADA) fasting criteria. 
IFG was found to be prevalent in 19.7% of males and 23.5% of females in our study area (Table2).\n\nTable 2IFG and undiagnosed DM by socio-demographic characteristics of the study participants in Woreta town, Ethiopia, 2021ParameterNFG (n = 152, 46.9%)IFG (n = 140, 43.2%)UDM (n = 32, 10.0%)\nSex\nMale96(29.6)64(19.7)32(10.0)Female56(17.3)76(23.5)0\nAge\n20–3048(14.8)12(3.7)031–4028(8.6)12(3.7)8(2.5)41–5032(9.9)40(12.3)4(1.2)51–6024(7.4)36(11.1)12(3.7)≥ 6120(6.2)40(12.3)8(2.5)\nOccupation\nFarmer24(7.4)36(11.1)0housewife24(7.4)56(17.3)0Merchant36(11.1)20(6.2)8(2.5)Employed56(17.3)40(12.3)24(7.4)\nMarital status\nMarried108(33.3)116(35.8)24(7.4)Single32(9.9)4(1.2)0Divorced4(1.2)8(2.5)0Widowed/separated8(2.5)12(3.7)8(2.5)\nLevel of education\nNo formal education36(11.1)40(12.3)4(1.2)Primary school36(11.1)40(12.3)8(2.5)High school24(7.4)20(6.2)0Higher educations56(17.3)48(14.8)20(6.2)\nReligion\nOrthodox124(38.3)108(33.3)24(7.4)Muslim28(8.6)24(7.4)4(1.2)Protestant08(2.5)4(1.2)IFG impaired fasting glucose, N/n number, NFG normal fasting glucose, UDM undiagnosed diabetes mellitus, % percent\n\nIFG and undiagnosed DM by socio-demographic characteristics of the study participants in Woreta town, Ethiopia, 2021\nIFG impaired fasting glucose, N/n number, NFG normal fasting glucose, UDM undiagnosed diabetes mellitus, % percent", "Thirty-six (11.11%) of the subjects had an overweight BMI, whereas 88.9% of the overweight subjects had IFG. About 7.4% of the participants had examined their blood glucose level previously, and 1.2% had screened for DM within the last month. The most commonly elicited symptoms of diabetes were polyuria, polydipsia, polyphagia, and weariness, which were known by eight (2.5%) of the individuals. The prevalence of UDM was reported to be 9.9%, with a 95% confidence interval of 7.5 to 11.7. Furthermore, the magnitude of IFG was 44.14%, with a 95% confidence interval of 19.45 to 64.54 (Table3).\n\nTable 3Clinical presentation and laboratory measurements of study subjects in Woreta town, Ethiopia, 2021CharacteristicsNormal(< 100mg/dl)IFG (100–125 mg/dl)UDM(≥ 126 mg/dl)Chi-Squarep-valueEver checked your blood sugar levelYes4(16.7)4(16.7)16(66.6)23.480.000No148(49.3)136(45.3)16(5.4)The checked blood sugar level in the last monthYes04(100.00)01.3310.514No152(47.5)136(42.5)32(10.0)Family history of DMYes8(25.0)24(75.0)50.9890.61No144(55.4)116(44.6)27(10.9)Diagnosis of hypertensionYes12(12.2)62(63.3)24(24.5)23.1580.000No140(61.9)78(34.5)8(3.5)Know symptoms of DMYes26(55.3)13(27.7)8(17.0)60.2560.000No126(45.5)127(45.8)24(8.7)Body mass index (BMI), kg/m2Under Weight4(100.0)0023.3870.001normal144(55.4)96(36.9)20(7.7)Over Weight4(11.1)32(88.9)0Obese012(50.0)12(50.0)Perform physical exerciseYes16(100.0)004.7610.092No136(44.2)140(45.4)32(10.4)Moderate alcohol intakeYes44(40.7)44(40.7)20(18.5)3.4490.178No108(50.0)96(44.4)12(5.6)\n\nClinical presentation and laboratory measurements of study subjects in Woreta town, Ethiopia, 2021", "Age, family history of diabetes mellitus, body mass index, waist circumference, hypertriglyceridemia, and hypertension were found to be substantially associated with the prevalence of impaired fasting glucose in the bivariate analysis (Table4).\nThe results of the multivariate logistic regression analysis revealed that IFG was independently associated with WC, HPN, FHDM and hypertriglyceridemia. The magnitude of IFG was 1.72 times greater in participants with high risk of WC compared to low-risk WC subjects (AOR = 1.72, 95% CI 1.23–3.14). 
Prediabetes was 2.35 times more common in people with hypertriglyceridemia than in people with normal TG (AOR = 2.35, 95% CI 1.41–5.43). IFG was 3.48 times more common in people with hypertension compared to people without hypertension (AOR = 3.48, 95% CI 1.35–8.89) (Table 4).\n\nTable 4 Bivariate and multivariate analysis of factors associated with IFG in Woreta town, Ethiopia, 2021\nVariable | Category | IFG yes | IFG no | COR (95% CI) | AOR (95% CI)\nAge (years) | 20–30 | 12 | 48 | 1 | 1\nAge (years) | 31–40 | 12 | 28 | 1.47 (0.45–4.4.62) | 0.52 (0.14–2.364)\nAge (years) | 41–50 | 32 | 40 | 3.98 (1.36–11.02)* | 2.06 (0.51–7.09)\nAge (years) | 51–60 | 24 | 36 | 3.32 (1.09–10.27)* | 1.59 (0.45–5.38)\nAge (years) | ≥ 61 | 20 | 40 | 2.46 (0.77–7.18) | 1.47 (0.39–5.18)\nBMI (kg/m2) | ≤ 24.9 | 109 | 135 | 1 | 1\nBMI (kg/m2) | ≥ 25 | 31 | 17 | 2.26 (1.19–4.30)* | 1.29 (1.19–3.79)*\nWC | Low risk | 89 | 125 | 1 | 1\nWC | High risk | 51 | 27 | 2.65 (1.55–4.55)** | 1.72 (1.23–3.14)*\nTG | Normal | 117 | 139 | 1 | 1\nTG | High | 23 | 13 | 2.10 (1.02–4.33)** | 2.35 (1.41–5.43)**\nFHDM | Yes | 24 | 8 | 3.72 (1.61–8.6)** | 2.34 (1.37–5.79)**\nFHDM | No | 116 | 144 | 1 | 1\nHPN | Yes | 62 | 12 | 9.23 (4.71–18.26)** | 3.48 (1.35–8.89)**\nHPN | No | 78 | 140 | 1 | 1\nAOR adjusted odds ratio, BMI body mass index, CI confidence interval, COR crude odds ratio, FHDM family history of diabetes mellitus, DM diabetes mellitus, IFG impaired fasting glucose, TG triglycerides, WC waist circumference, HPN hypertension\n* P < 0.05, ** P < 0.01", "This study revealed that the prevalence of IFG was 43%, indicating that, if appropriate interventions are not made, these individuals might go on to develop diabetes. This figure was higher than those of population-based studies in Qatar (12.5%) [19], Denmark (27.1%) [20], Bukittinggi, Indonesia (32%) [21], Uganda (3.9%) [22], and Kenya (3.1%) [1]. The probable reason could be differences in sociodemographic characteristics and living standards.\nIn this study, the magnitude of UDM was 9.87%. It was associated with obesity, hypertriglyceridemia, a family history of DM, and hypertension. This report was in line with studies in the East Gojjam zone, northwest Ethiopia (11.5%) [23], Bahir Dar city, northwest Ethiopia (10.2%) [7], Germany (8.2%) [24], the USA (11.5%) [25], Italy (10%) [26], and an African urban population (8.68%) [27]. However, the magnitude of UDM in this study was higher than that of studies in Uganda (2.3%) [22] and Kenya (2.4%) [1]. A possible explanation for the difference from the Kenyan study is that the current study was from a single site, while Kenya's sample was nationally representative. The study in Uganda was conducted among municipal communities with better educational status, which would be expected to have better awareness of diabetes mellitus.\nNevertheless, this report was higher than those of studies in Koladiba (2.3%) [17], Bishoftu town (5%) [7], Addis Ababa, Ethiopia (6.6%) [28], Tehran, Iran (5%) [29], and Qatar (5.9%) [19]. Moreover, our figure was lower than the prevalence of UDM among patients with hypertension in India (15%) [30]. The probable reasons for these differences could be different study settings, lifestyles, health-seeking behaviors, and practices in the routine screening of diabetes and other health problems. In particular, this report was significantly higher than that of the previous study in Koladiba, a rural district in northwest Ethiopia [17]. The disparity could be due to differences in sample size.\nFor participants with a high risk of WC, the odds of IFG were twice as high as for those with a low risk of WC. This finding was in line with studies conducted in Koladiba [17]. The probable reason could be the relationship between central obesity and insulin resistance. Similarly, among those who had hypertension, the odds of IFG were three times higher than among those with normal blood pressure.\nThe cross-sectional design of this study does not allow causal associations to be estimated. Self-reported dietary information, social habits, and physical activity data may yield inaccurate estimations. Uncertainty about whether participants truly achieved a fasting state is another possible limitation of a population-based study such as this one, and it might result in an overestimation of the prevalence of impaired fasting glucose and undiagnosed diabetes mellitus.", "In conclusion, the magnitude of IFG and UDM was considerably high. Family history of diabetes, hypertension, WC, and TG were all associated with IFG. Regular physical activity is recommended for the prevention and control of diabetes mellitus among the population of Ethiopia." ]
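The bivariate and multivariate analysis above (Table 4) reports crude and adjusted odds ratios from logistic regression run in SPSS 25. The sketch below shows the same kind of model in Python with statsmodels on synthetic data; the variable names and simulated values are assumptions for illustration, so the output will not reproduce the paper's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 324
# Synthetic binary predictors loosely mirroring the paper's exposure variables.
df = pd.DataFrame({
    "high_wc":      rng.binomial(1, 0.24, n),
    "hypertension": rng.binomial(1, 0.23, n),
    "high_tg":      rng.binomial(1, 0.11, n),
    "fhdm":         rng.binomial(1, 0.10, n),
})
logit_p = -0.6 + 0.5 * df["high_wc"] + 1.2 * df["hypertension"] + 0.8 * df["high_tg"]
df["ifg"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["high_wc", "hypertension", "high_tg", "fhdm"]])
fit = sm.Logit(df["ifg"], X).fit(disp=0)

# Adjusted odds ratios (AOR) and 95% CIs are the exponentiated coefficients.
aor = np.exp(fit.params).rename("AOR")
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor, ci], axis=1).round(2))
```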
[ null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusion" ]
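The anthropometric and blood-pressure definitions in the methods above (BMI categories, sex-specific waist-circumference risk, and averaged BP readings with the 140/90 mmHg rule) translate directly into a few helpers. This is a sketch of those cut-offs as stated in the text; the function names are illustrative, not from the paper.

```python
def bmi_category(weight_kg, height_m):
    """BMI = weight / height**2, with the categories quoted in the paper."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return bmi, "underweight"
    if bmi < 25:
        return bmi, "normal"
    if bmi < 30:
        return bmi, "overweight"
    return bmi, "obese"

def high_risk_waist(wc_cm, sex):
    """High-risk waist circumference: > 102 cm for males, > 88 cm for females."""
    return wc_cm > (102 if sex == "male" else 88)

def hypertensive(sbp_readings, dbp_readings, on_antihypertensives=False):
    """Average the repeated readings; hypertension if mean SBP >= 140 mmHg,
    mean DBP >= 90 mmHg, or regular use of antihypertensive medication."""
    mean_sbp = sum(sbp_readings) / len(sbp_readings)
    mean_dbp = sum(dbp_readings) / len(dbp_readings)
    return on_antihypertensives or mean_sbp >= 140 or mean_dbp >= 90

print(bmi_category(81, 1.70))              # (~28.0, 'overweight')
print(high_risk_waist(95, "female"))       # True
print(hypertensive([138, 144], [88, 92]))  # True (mean SBP 141, mean DBP 90)
```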
[ "Impaired fasting glucose", "Undiagnosed diabetic mellitus", "Risk factors", "Ethiopia" ]
Background: Diabetes mellitus (DM) is a public health problem characterized by a high blood sugar level in the body. It can be caused by a lack of either insulin secretion or insulin resistance. Its magnitude has been increasing in recent decades. The magnitude in people above the age of 18 increased from 4.7% to 1980 to 8.8% in 2017 [1–3]. It affects more than 425million people worldwide, with the number expected to rise to 529million by 2030 [1]. Diabetes mellitus is a big public health problem with serious consequences. Undiagnosed diabetes mellitus (UDM) affects 50% of people aged 20 to 79 years in the world, 69% in Africa, which is nearly double that of high-income nations (37%). This provides new information on Africa’s high rate of sickness and mortality, which could occur at a younger age [2, 3]. Impaired fasting glucose (IFG) is characterized by glycemic levels that are greater than normal but below diabetes thresholds (fasting glucose > 60.0 mmol/L and > 70.0 mmol/L). It is a major risk factor for diabetes and its consequences, including nephropathy, diabetic retinopathy, and an elevated risk of macrovascular disease [4–6]. As a result, understanding IFG is critical for avoiding diabetes forecasts in the future. According to studies in Behar Dar city and rural areas of Dire Dawa, UDM was 10.2% and 6.2%, respectively [7, 8]. A worldwide study by 2030 reported that 470million people will be affected by IFG [9, 10]. In addition, studies have shown that people diagnosed with IFG are more likely to develop diabetes. Diabetes is a 5–10% annual risk for patients with IFG, according to Tabak et al. [11]. Larson et al. found that 25% of postmenopausal women with impaired fasting glucose (IFG) or impaired glucose tolerance (IGT) tests progressed to type 2 diabetes (T2DM) in 5 years [12]. However, there is evidence that controlling IFG with lifestyle changes, including physical activity and healthy diets, [13–15], can prevent or postpone the advancement of DM, and another study found that behavioral changes alone lowered the risk of DM by 40–70%[11]. In general, it is critical to identify people who have an IFG condition so that early preventive actions can be implemented. In 2015, the International Diabetes Federation (IDF) reported that the magnitude of DM in Ethiopia was 2.2%[16]. Despite the fact that IFG is an early warning system that provides a warning to prevent the future development of DM and diabetes-related problems, data on the prevalence of prediabetes (IFG), undiagnosed DM, and associated risk factors in Ethiopia is inadequate. As a result, this research will give baseline data on the magnitude of IFG, UDM, and associated factors in Ethiopia. Methods: Study area The research was carried out in Wereta, Fogera district. Woreta town is located in the Amhara National Regional State, 561km from Addis Ababa, the capital city of Ethiopia. The town has 18,400 inhabitants, living in five kebeles (the smallest administrative units). There is one government health center and one private hospital in town. The research was carried out in Wereta, Fogera district. Woreta town is located in the Amhara National Regional State, 561km from Addis Ababa, the capital city of Ethiopia. The town has 18,400 inhabitants, living in five kebeles (the smallest administrative units). There is one government health center and one private hospital in town. Study design and period A community-based cross-sectional study design was conducted from June to July 2021. 
A community-based cross-sectional study design was conducted from June to July 2021. Source population All adults (age over 18 years old) who live in Woreta town. All adults (age over 18 years old) who live in Woreta town. Study population Adults who live in Woreta town (selected kebeles) and fulfill the inclusion criteria. Adults who live in Woreta town (selected kebeles) and fulfill the inclusion criteria. Sample size The sample size was generated using the single population proportion approach, which factored in a 12% prevalence of IFG discovered in a prior study in Koladiba town, northwest Ethiopia [17], a 5% level of significance, and a 5% margin of error. The minimum sample size was 324 as a result of multiplying the estimated result (162) by the design effect of 2. The sample size was generated using the single population proportion approach, which factored in a 12% prevalence of IFG discovered in a prior study in Koladiba town, northwest Ethiopia [17], a 5% level of significance, and a 5% margin of error. The minimum sample size was 324 as a result of multiplying the estimated result (162) by the design effect of 2. Sampling procedure A multistage sampling procedure was used to choose the participants, and three kebeles (the smallest administrative units with similar demographics) were picked at random from five kebeles. Participants were proportionally assigned to each of the selected kebeles based on the number of households. Thus, 120, 98, and 106 participants were selected from Kebele one, three, and five, respectively, using the systematic random sampling technique. When there were multiple eligible subjects in a family, the lottery method was employed to choose one at random. A multistage sampling procedure was used to choose the participants, and three kebeles (the smallest administrative units with similar demographics) were picked at random from five kebeles. Participants were proportionally assigned to each of the selected kebeles based on the number of households. Thus, 120, 98, and 106 participants were selected from Kebele one, three, and five, respectively, using the systematic random sampling technique. When there were multiple eligible subjects in a family, the lottery method was employed to choose one at random. Study area: The research was carried out in Wereta, Fogera district. Woreta town is located in the Amhara National Regional State, 561km from Addis Ababa, the capital city of Ethiopia. The town has 18,400 inhabitants, living in five kebeles (the smallest administrative units). There is one government health center and one private hospital in town. Study design and period: A community-based cross-sectional study design was conducted from June to July 2021. Source population: All adults (age over 18 years old) who live in Woreta town. Study population: Adults who live in Woreta town (selected kebeles) and fulfill the inclusion criteria. Sample size: The sample size was generated using the single population proportion approach, which factored in a 12% prevalence of IFG discovered in a prior study in Koladiba town, northwest Ethiopia [17], a 5% level of significance, and a 5% margin of error. The minimum sample size was 324 as a result of multiplying the estimated result (162) by the design effect of 2. Sampling procedure: A multistage sampling procedure was used to choose the participants, and three kebeles (the smallest administrative units with similar demographics) were picked at random from five kebeles. 
Sampling procedure: A multistage sampling procedure was used to choose the participants: three kebeles (the smallest administrative units, with similar demographics) were picked at random from the five kebeles. Participants were proportionally allocated to each of the selected kebeles based on the number of households. Thus, 120, 98, and 106 participants were selected from kebeles one, three, and five, respectively, using the systematic random sampling technique. When there were multiple eligible subjects in a household, the lottery method was employed to choose one at random. Data collection procedures: Before the actual data collection, the data collectors received a 3-day training on interviewing techniques, questionnaire administration, and physical measurement procedures. Data were collected by two health officers and two laboratory technicians. The timing of each participant’s last meal was checked to ensure that they had fasted for at least eight hours overnight. A pre-test was conducted in non-study areas to confirm that the data collectors and respondents understood the questions, and the necessary improvements were made based on the pre-test feedback prior to the actual data collection. The WHO NCD STEPS tool, which consists of three steps for measuring NCD risk variables, was used in this study [18]. In the first step, the study population’s core and expanded socio-demographic and behavioral characteristics were gathered. The second step involved taking core and expanded physical measurements, and the third step involved biochemical analysis [18]. The data collection tool was originally written in English and then translated into Amharic, the commonly spoken language in the study area, by language experts. To guarantee uniformity, a back-translation into English was undertaken prior to data collection. Variable measurements: The International Diabetes Association’s (IDA) definition was used to define DM: fasting blood glucose (FBG) ≥ 126 mg/dL. Previously undiagnosed DM was defined as participants who had not had their blood sugar tested before, were not taking DM medications during the survey, and had an FBG ≥ 126 mg/dL [16]. Anthropometric measures were taken using standardized methodologies and calibrated equipment. Each study subject was weighed to the nearest 0.1 kg. Height was measured using a portable stadiometer, and height and weight were used to calculate BMI (kg/m2). Underweight was defined as < 18.5 kg/m2, normal as 18.5–24.9 kg/m2, overweight as 25–29.9 kg/m2, and obesity as ≥ 30 kg/m2. The midpoint between the lower margin of the least palpable rib and the top of the iliac crest was marked to measure the waist circumference (WC). According to the World Health Organization, WC values of more than 102 cm for men and more than 88 cm for women were considered high-risk [18]. Blood pressure (BP) was measured twice with a mercury sphygmomanometer in a sitting position after 15 min of rest, with the second measurement taken 5 min after the first; the average of the two readings was used. An SBP of ≥ 140 mmHg or a DBP of ≥ 90 mmHg, or the regular use of antihypertensive medicines, was used to identify hypertension [17]. A smoking habit was defined as smoking tobacco products on a daily basis during the data collection period. Participants who had consumed alcohol or chewed khat within the 30 days before data collection were classified as current alcohol users and khat chewers, respectively.
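To make the categorical definitions above concrete (the fasting glucose cut-off for DM, the BMI bands, the WHO waist circumference risk thresholds, and the hypertension criterion), a minimal classification sketch is shown below. It is illustrative only; the function and argument names are hypothetical and not taken from the study materials.

def classify_fasting_glucose(fbg_mg_dl):
    # ADA-style fasting categories used later in the Results tables.
    if fbg_mg_dl < 100:
        return "normal"
    if fbg_mg_dl <= 125:
        return "IFG"
    return "DM range (>= 126 mg/dL)"

def classify_bmi(weight_kg, height_m):
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"

def high_risk_waist(wc_cm, sex):
    # WHO high-risk cut-offs: > 102 cm for men, > 88 cm for women.
    return wc_cm > (102 if sex == "male" else 88)

def hypertensive(sbp, dbp, on_antihypertensives=False):
    # SBP >= 140 mmHg or DBP >= 90 mmHg, or regular antihypertensive use.
    return sbp >= 140 or dbp >= 90 or on_antihypertensives

print(classify_fasting_glucose(110), classify_bmi(82, 1.70), high_risk_waist(95, "female"), hypertensive(132, 86))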
Data quality management: A pretest was conducted to check that the questions were appropriate for the study, and an Amharic-translated version of the questionnaire was employed. Physical measurements were taken twice, and in some cases three times, to reduce observer error in measurements and records, and data collectors were rotated to compare values. Before and after each day’s data collection, the sphygmomanometer used in the field was compared to the one used in the nearby Debre Tabor comprehensive hospital. For uniformity in reference and test readings, the glucometer device and strips were examined on a regular basis. The data were cleaned, coded, and recorded every day. Data analysis: The collected data were double-checked for accuracy. Completed questionnaires were entered into Epi-Info version 7.2.5.0 immediately after data collection and then exported to SPSS version 25 for analysis. Means, percentages, standard deviations, and ranges were calculated using descriptive statistics. Logistic regression was done to identify factors associated with IFG and UDM. Statistical significance was declared at p < 0.05.
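The analysis described above was run in Epi-Info and SPSS; purely as an illustration of the same kind of model, the sketch below fits a binary logistic regression with Python's statsmodels on simulated data. The data frame, variable names, and coefficients are hypothetical and are not the study dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Hypothetical binary risk factors coded as in Table 4.
df = pd.DataFrame({
    "high_risk_wc": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "family_hx_dm": rng.integers(0, 2, n),
    "high_tg":      rng.integers(0, 2, n),
})
# Simulate an IFG outcome so that each factor raises the odds.
logit = -1.0 + 0.5 * df["high_risk_wc"] + 1.2 * df["hypertension"] + 0.8 * df["family_hx_dm"] + 0.9 * df["high_tg"]
df["ifg"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["high_risk_wc", "hypertension", "family_hx_dm", "high_tg"]])
fit = sm.Logit(df["ifg"], X).fit(disp=0)

# exp(coefficient) is the adjusted odds ratio (AOR) reported in Table 4;
# exponentiating the confidence limits gives the 95% CI of the AOR.
print(np.exp(fit.params).round(2))
print(np.exp(fit.conf_int()).round(2))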
Results: Sociodemographic characteristics of study participants: A total of 324 participants were included in the final analysis, with a response rate of 100%. The mean age of the participants was 43.76 ± 17.29 years (range 20–80 years).
The majority of the participants, 190 (58.6%), were men; 248 (76.5%) were married; 256 (79.0%) were Orthodox Christians; and 80 (24.7%) had no formal education (Table 1).
Table 1. Sociodemographic characteristics of the study population in Woreta town, Ethiopia, 2021. Values are frequency (percentage).
Sex: Male 190 (58.6); Female 134 (41.4).
Age group (years): 20–30: 60 (18.5); 31–40: 48 (14.8); 41–50: 76 (23.5); 51–60: 72 (22.2); ≥ 61: 68 (21.0).
Marital status: Married 248 (76.5); Single 36 (11.1); Divorced 12 (3.7); Widowed/separated 28 (8.6).
Level of education: Not able to read and write 80 (24.7); Primary school 84 (25.9); High school 36 (11.1); Higher education 124 (38.3).
Religion: Orthodox 256 (79.0); Muslim 56 (17.3); Protestant 12 (3.7).
Occupation: Farmer 60 (18.5); Housewife 80 (24.7); Merchant 64 (19.8); Employed 120 (37.0).
Smoking habit: Yes 12 (3.7); No 312 (96.3).
Khat chewing: Yes 52 (84.0); No 272 (16.0).
Moderate alcohol intake: Yes 108 (33.3); No 216 (66.7).
Perform physical exercise: Yes 16 (4.9); No 308 (95.1).
Magnitude of impaired fasting glucose and undiagnosed DM: The magnitudes of IFG and UDM among the study subjects were 12% (95% CI 9–16) and 2.3% (95% CI 1.1–4), respectively, according to the American Diabetes Association (ADA) fasting criteria. IFG was found in 19.7% of males and 23.5% of females in our study area (Table 2).
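The point estimates with 95% confidence intervals quoted above (for example 12%, 95% CI 9–16) are standard binomial proportion intervals. The sketch below shows one way such an interval could be computed; the counts are only illustrative (12% of 324 is roughly 39 participants), and the exact interval method used by the authors is not stated.

from statsmodels.stats.proportion import proportion_confint

n_total = 324
for label, count in [("IFG", 39), ("UDM", 8)]:
    low, high = proportion_confint(count, n_total, alpha=0.05, method="wilson")
    print(f"{label}: {count / n_total:.1%} (95% CI {low:.1%} to {high:.1%})")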
Table 2. IFG and undiagnosed DM by socio-demographic characteristics of the study participants in Woreta town, Ethiopia, 2021. Values are n (%), given in the order NFG / IFG / UDM, where NFG = normal fasting glucose (n = 152, 46.9%), IFG = impaired fasting glucose (n = 140, 43.2%), and UDM = undiagnosed diabetes mellitus (n = 32, 10.0%).
Sex: Male 96 (29.6) / 64 (19.7) / 32 (10.0); Female 56 (17.3) / 76 (23.5) / 0.
Age (years): 20–30: 48 (14.8) / 12 (3.7) / 0; 31–40: 28 (8.6) / 12 (3.7) / 8 (2.5); 41–50: 32 (9.9) / 40 (12.3) / 4 (1.2); 51–60: 24 (7.4) / 36 (11.1) / 12 (3.7); ≥ 61: 20 (6.2) / 40 (12.3) / 8 (2.5).
Occupation: Farmer 24 (7.4) / 36 (11.1) / 0; Housewife 24 (7.4) / 56 (17.3) / 0; Merchant 36 (11.1) / 20 (6.2) / 8 (2.5); Employed 56 (17.3) / 40 (12.3) / 24 (7.4).
Marital status: Married 108 (33.3) / 116 (35.8) / 24 (7.4); Single 32 (9.9) / 4 (1.2) / 0; Divorced 4 (1.2) / 8 (2.5) / 0; Widowed/separated 8 (2.5) / 12 (3.7) / 8 (2.5).
Level of education: No formal education 36 (11.1) / 40 (12.3) / 4 (1.2); Primary school 36 (11.1) / 40 (12.3) / 8 (2.5); High school 24 (7.4) / 20 (6.2) / 0; Higher education 56 (17.3) / 48 (14.8) / 20 (6.2).
Religion: Orthodox 124 (38.3) / 108 (33.3) / 24 (7.4); Muslim 28 (8.6) / 24 (7.4) / 4 (1.2); Protestant 0 / 8 (2.5) / 4 (1.2).
Clinical presentations of study subjects with IFG and UDM: Thirty-six (11.11%) of the subjects were overweight, and 88.9% of the overweight subjects had IFG. About 7.4% of the participants had had their blood glucose level checked previously, and 1.2% had been screened for DM within the last month. The most commonly known symptoms of diabetes were polyuria, polydipsia, polyphagia, and fatigue, which were known by eight (2.5%) of the individuals. The prevalence of UDM was 9.9%, with a 95% confidence interval of 7.5 to 11.7. Furthermore, the magnitude of IFG was 44.14%, with a 95% confidence interval of 19.45 to 64.54 (Table 3).
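Table 3, presented next, compares these clinical characteristics across the normal, IFG, and UDM groups using chi-square tests. A minimal sketch of such a comparison is shown below; the 2x3 table holds purely illustrative counts, not the study data.

from scipy.stats import chi2_contingency

# Rows: characteristic present / absent; columns: normal, IFG, UDM (illustrative counts only).
observed = [
    [10, 40, 15],
    [120, 100, 20],
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p:.4f}")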
Table 3. Clinical presentation and laboratory measurements of study subjects in Woreta town, Ethiopia, 2021. Values are n (%) in the order Normal (< 100 mg/dl) / IFG (100–125 mg/dl) / UDM (≥ 126 mg/dl), followed by the chi-square statistic and p-value for each characteristic.
Ever checked blood sugar level: Yes 4 (16.7) / 4 (16.7) / 16 (66.6); No 148 (49.3) / 136 (45.3) / 16 (5.4); chi-square 23.48, p = 0.000.
Checked blood sugar level in the last month: Yes 0 / 4 (100.0) / 0; No 152 (47.5) / 136 (42.5) / 32 (10.0); chi-square 1.331, p = 0.514.
Family history of DM: Yes 8 (25.0) / 24 (75.0) / 5; No 144 (55.4) / 116 (44.6) / 27 (10.9); chi-square 0.989, p = 0.61.
Diagnosis of hypertension: Yes 12 (12.2) / 62 (63.3) / 24 (24.5); No 140 (61.9) / 78 (34.5) / 8 (3.5); chi-square 23.158, p = 0.000.
Knows symptoms of DM: Yes 26 (55.3) / 13 (27.7) / 8 (17.0); No 126 (45.5) / 127 (45.8) / 24 (8.7); chi-square 60.256, p = 0.000.
Body mass index (kg/m2): Underweight 4 (100.0) / 0 / 0; Normal 144 (55.4) / 96 (36.9) / 20 (7.7); Overweight 4 (11.1) / 32 (88.9) / 0; Obese 0 / 12 (50.0) / 12 (50.0); chi-square 23.387, p = 0.001.
Performs physical exercise: Yes 16 (100.0) / 0 / 0; No 136 (44.2) / 140 (45.4) / 32 (10.4); chi-square 4.761, p = 0.092.
Moderate alcohol intake: Yes 44 (40.7) / 44 (40.7) / 20 (18.5); No 108 (50.0) / 96 (44.4) / 12 (5.6); chi-square 3.449, p = 0.178.
Factors associated with IFG and UDM: Age, family history of diabetes mellitus, body mass index, waist circumference, hypertriglyceridemia, and hypertension were substantially associated with the prevalence of impaired fasting glucose in the bivariate analysis (Table 4). The multivariate logistic regression analysis revealed that IFG was independently associated with WC, hypertension, family history of DM, and hypertriglyceridemia. The odds of IFG were 1.72 times greater in participants with a high-risk WC than in low-risk WC subjects (AOR = 1.72, 95% CI 1.23–3.14). Prediabetes was 2.35 times more common in people with hypertriglyceridemia than in people with normal TG (AOR = 2.35, 95% CI 1.41–5.43). IFG was 3.48 times more common in people with hypertension than in people without hypertension (AOR = 3.48, 95% CI 1.35–8.89) (Table 4).
Table 4. Bivariate and multivariate analysis of factors associated with IFG in Woreta town, Ethiopia, 2021. For each category, the number of participants with IFG (Yes) and without IFG (No) is given, followed by the crude odds ratio (COR, 95% CI) and the adjusted odds ratio (AOR, 95% CI). Significance: * p < 0.05; ** p < 0.01.
Age 20–30: Yes 12, No 48; COR 1 (reference); AOR 1.
Age 31–40: Yes 12, No 28; COR 1.47 (0.45–4.62); AOR 0.52 (0.14–2.36).
Age 41–50: Yes 32, No 40; COR 3.98 (1.36–11.02)*; AOR 2.06 (0.51–7.09).
Age 51–60: Yes 24, No 36; COR 3.32 (1.09–10.27)*; AOR 1.59 (0.45–5.38).
Age ≥ 61: Yes 20, No 40; COR 2.46 (0.77–7.18); AOR 1.47 (0.39–5.18).
BMI ≤ 24.9 kg/m2: Yes 109, No 135; COR 1; AOR 1.
BMI ≥ 25 kg/m2: Yes 31, No 17; COR 2.26 (1.19–4.30)*; AOR 1.29 (1.19–3.79)*.
WC low risk: Yes 89, No 125; COR 1; AOR 1.
WC high risk: Yes 51, No 27; COR 2.65 (1.55–4.55)**; AOR 1.72 (1.23–3.14)*.
TG normal: Yes 117, No 139; COR 1; AOR 1.
TG high: Yes 23, No 13; COR 2.10 (1.02–4.33)**; AOR 2.35 (1.41–5.43)**.
FHDM yes: Yes 24, No 8; COR 3.72 (1.61–8.6)**; AOR 2.34 (1.37–5.79)**.
FHDM no: Yes 116, No 144; COR 1; AOR 1.
Hypertension yes: Yes 62, No 12; COR 9.23 (4.71–18.26)**; AOR 3.48 (1.35–8.89)**.
Hypertension no: Yes 78, No 140; COR 1; AOR 1.
Abbreviations: AOR, adjusted odds ratio; BMI, body mass index; CI, confidence interval; COR, crude odds ratio; FHDM, family history of diabetes mellitus; IFG, impaired fasting glucose; TG, triglycerides; WC, waist circumference.
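As a rough cross-check on the estimates reported in Table 4, a crude odds ratio can be computed directly from the 2x2 counts; the sketch below uses the hypertension row (62/12 with IFG versus 78/140 without) with a Wald-type confidence interval. It is illustrative only, and small differences from the published COR are expected from rounding and from how the data were tabulated.

import math

def crude_odds_ratio(a, b, c, d):
    # a/b = cases/non-cases among the exposed, c/d = cases/non-cases among the unexposed.
    or_ = (a / b) / (c / d)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_) - 1.96 * se_log_or)
    high = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, low, high

# Hypertension row of Table 4: 62 IFG / 12 without IFG among hypertensives, 78 / 140 among normotensives.
print(tuple(round(v, 2) for v in crude_odds_ratio(62, 12, 78, 140)))   # roughly (9.27, 4.71, 18.26)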
Discussion: This study revealed that the prevalence of IFG was 43%, indicating that, if appropriate interventions are not made, these individuals might develop diabetes. This figure was higher than those reported in population-based studies in Qatar (12.5%) [19], Denmark (27.1%) [20], Bukittinggi, Indonesia (32%) [21], Uganda (3.9%) [22], and Kenya (3.1%) [1]. The probable reason could be differences in sociodemographic characteristics and living standards. In this study, the magnitude of UDM was 9.87%, and it was associated with obesity, hypertriglyceridemia, a family history of DM, and hypertension. This report was in line with studies in the East Gojjam zone, northwest Ethiopia (11.5%) [23], Bahir Dar city, northwest Ethiopia (10.2%) [7], Germany (8.2%) [24], the USA (11.5%) [25], Italy (10%) [26], and an African urban population (8.68%) [27]. However, the magnitude of UDM in this study was higher than that of studies in Uganda (2.3%) [22] and Kenya (2.4%) [1]. A possible explanation for the difference from the Kenyan study is that the current study was from a single center, while Kenya's sample was nationally representative. The study in Uganda was among municipal communities with better educational status, which would be expected to have better awareness of diabetes mellitus. The present report was also higher than studies at Koladiba (2.3%) [17], Bishoftu town (5%) [7], Addis Ababa, Ethiopia (6.6%) [28], Tehran, Iran (5%) [29], and Qatar (5.9%) [19]. Moreover, our report was lower than that of patients with UDM and hypertension in India (15%) [30]. The probable reasons for these differences could be different study settings, lifestyles, health-seeking behavior, and practice in routine screening of diabetes and other health problems. In particular, this report was significantly higher than that of the previous study in Koladiba, a rural district in northwest Ethiopia [17].
The disparity could be due to differences in sample size. For participants with a high-risk WC, the odds of IFG were twice as high as for those with a low-risk WC. This finding was in line with a study conducted in Koladiba [17]. The probable reason could be the relationship between central obesity and insulin resistance. Similarly, among those who had hypertension, the odds of IFG were three times higher than among those with normal blood pressure. Because of the cross-sectional design of this study, no causal association can be estimated. Self-reported dietary information, social habits, and physical activity data may yield inaccurate estimations. Uncertainty about whether participants truly attained a fasting state is another possible limitation of a population-based study such as this one, and it could easily result in an overestimation of the prevalence of impaired fasting glucose and undiagnosed diabetes mellitus. Conclusion: The magnitude of IFG and UDM was significantly high. Family history of diabetes, hypertension, WC, and TG were all associated with IFG. Regular physical activity is recommended for the prevention and control of diabetes mellitus among the population of Ethiopia.
Background: Impaired fasting glucose (IFG) is an early warning sign that provides prior information to prevent the future development of DM and diabetes-related problems, but early detection of DM is not practically applicable in Ethiopia. This study aimed to assess the magnitude of impaired fasting glucose and undiagnosed diabetes mellitus (DM) and associated factors. Methods: A community-based, cross-sectional study was conducted from May to June 30, 2021. A structured interviewer-administered questionnaire was used to collect data. Anthropometric measurements were also recorded. A fasting blood sugar (FBS) test was performed on samples taken early in the morning. Epi-Info 7.2.5.0 was used to enter data, which were then exported to SPSS 25 for analysis. To identify factors associated with IFG, logistic regression was used. The level of statistical significance was declared at p < 0.05. Results: Three hundred and twenty-four (324) participants with a mean age of 43.76 ± 17.29 years were enrolled. The overall magnitudes of impaired fasting glucose (IFG) and undiagnosed diabetes mellitus (DM) were 43.2% and 10.0%, respectively. Waist circumference (AOR: 1.72, 95% CI 1.23-3.14), hypertension (AOR: 3.48, 95% CI 1.35-8.89), family history of diabetes mellitus (AOR: 2.34, 95% CI 1.37-5.79), and hypertriglyceridemia (AOR: 2.35, 95% CI 1.41-5.43) were found to be independently associated with impaired fasting glucose. Conclusions: Individuals who are overweight, hypertriglyceridemic, or hypertensive should have regular checkups and community-based screening.
Background: Diabetes mellitus (DM) is a public health problem characterized by a high blood sugar level in the body. It can be caused by a lack of insulin secretion or by insulin resistance. Its magnitude has been increasing in recent decades: the prevalence among people above the age of 18 rose from 4.7% in 1980 to 8.8% in 2017 [1–3]. It affects more than 425 million people worldwide, and the number is expected to rise to 529 million by 2030 [1]. Diabetes mellitus is a major public health problem with serious consequences. Undiagnosed diabetes mellitus (UDM) affects 50% of people aged 20 to 79 years worldwide and 69% in Africa, nearly double the proportion in high-income nations (37%). This helps explain Africa’s high rate of sickness and mortality, which can occur at a younger age [2, 3]. Impaired fasting glucose (IFG) is characterized by glycemic levels that are greater than normal but below diabetes thresholds (fasting glucose greater than 6.0 mmol/L but below 7.0 mmol/L). It is a major risk factor for diabetes and its consequences, including nephropathy, diabetic retinopathy, and an elevated risk of macrovascular disease [4–6]. As a result, understanding IFG is critical for anticipating and preventing future diabetes. According to studies in Bahir Dar city and rural areas of Dire Dawa, the prevalence of UDM was 10.2% and 6.2%, respectively [7, 8]. A worldwide projection estimated that 470 million people will be affected by IFG by 2030 [9, 10]. In addition, studies have shown that people diagnosed with IFG are more likely to develop diabetes. Patients with IFG carry a 5–10% annual risk of developing diabetes, according to Tabak et al. [11]. Larson et al. found that 25% of postmenopausal women with impaired fasting glucose (IFG) or impaired glucose tolerance (IGT) progressed to type 2 diabetes (T2DM) within 5 years [12]. However, there is evidence that controlling IFG with lifestyle changes, including physical activity and healthy diets [13–15], can prevent or postpone the progression to DM, and another study found that behavioral changes alone lowered the risk of DM by 40–70% [11]. In general, it is critical to identify people with IFG so that early preventive actions can be implemented. In 2015, the International Diabetes Federation (IDF) reported that the magnitude of DM in Ethiopia was 2.2% [16]. Although IFG is an early warning sign that can prompt action to prevent the future development of DM and diabetes-related problems, data on the prevalence of prediabetes (IFG), undiagnosed DM, and associated risk factors in Ethiopia are inadequate. As a result, this research provides baseline data on the magnitude of IFG, UDM, and associated factors in Ethiopia. Conclusion: The magnitude of IFG and UDM was significantly high. Family history of diabetes, hypertension, WC, and TG were all associated with IFG. Regular physical activity is recommended for the prevention and control of diabetes mellitus among the population of Ethiopia.
Background: Impaired fasting glucose (IFG) is an early warning sign that provides prior information to prevent the future development of DM and diabetes-related problems, but early detection of DM is not practically applicable in Ethiopia. This study aimed to assess the magnitude of impaired fasting glucose and undiagnosed diabetes mellitus (DM) and associated factors. Methods: A community-based, cross-sectional study was conducted from May to June 30, 2021. A structured interviewer-administered questionnaire was used to collect data. Anthropometric measurements were also recorded. A fasting blood sugar (FBS) test was performed on samples taken early in the morning. Epi-Info 7.2.5.0 was used to enter data, which were then exported to SPSS 25 for analysis. To identify factors associated with IFG, logistic regression was used. The level of statistical significance was declared at p < 0.05. Results: Three hundred and twenty-four (324) participants with a mean age of 43.76 ± 17.29 years were enrolled. The overall magnitudes of impaired fasting glucose (IFG) and undiagnosed diabetes mellitus (DM) were 43.2% and 10.0%, respectively. Waist circumference (AOR: 1.72, 95% CI 1.23-3.14), hypertension (AOR: 3.48, 95% CI 1.35-8.89), family history of diabetes mellitus (AOR: 2.34, 95% CI 1.37-5.79), and hypertriglyceridemia (AOR: 2.35, 95% CI 1.41-5.43) were found to be independently associated with impaired fasting glucose. Conclusions: Individuals who are overweight, hypertriglyceridemic, or hypertensive should have regular checkups and community-based screening.
7,076
317
[ 555, 595, 64, 17, 15, 16, 75, 98, 1305, 339, 122, 74, 178, 220, 217, 376 ]
19
[ "ifg", "study", "diabetes", "12", "town", "participants", "ethiopia", "data", "11", "glucose" ]
[ "diabetes mellitus dm", "impaired fasting glucose", "diabetes mellitus population", "diabetes consequences including", "background diabetes mellitus" ]
null
[CONTENT] Impaired fasting glucose | Undiagnosed diabetic mellitus | Risk factors | Ethiopia [SUMMARY]
null
[CONTENT] Impaired fasting glucose | Undiagnosed diabetic mellitus | Risk factors | Ethiopia [SUMMARY]
[CONTENT] Impaired fasting glucose | Undiagnosed diabetic mellitus | Risk factors | Ethiopia [SUMMARY]
[CONTENT] Impaired fasting glucose | Undiagnosed diabetic mellitus | Risk factors | Ethiopia [SUMMARY]
[CONTENT] Impaired fasting glucose | Undiagnosed diabetic mellitus | Risk factors | Ethiopia [SUMMARY]
[CONTENT] Adult | Blood Glucose | Cross-Sectional Studies | Diabetes Mellitus | Ethiopia | Fasting | Humans | Hypertension | Hypertriglyceridemia | Middle Aged | Prediabetic State | Risk Factors [SUMMARY]
null
[CONTENT] Adult | Blood Glucose | Cross-Sectional Studies | Diabetes Mellitus | Ethiopia | Fasting | Humans | Hypertension | Hypertriglyceridemia | Middle Aged | Prediabetic State | Risk Factors [SUMMARY]
[CONTENT] Adult | Blood Glucose | Cross-Sectional Studies | Diabetes Mellitus | Ethiopia | Fasting | Humans | Hypertension | Hypertriglyceridemia | Middle Aged | Prediabetic State | Risk Factors [SUMMARY]
[CONTENT] Adult | Blood Glucose | Cross-Sectional Studies | Diabetes Mellitus | Ethiopia | Fasting | Humans | Hypertension | Hypertriglyceridemia | Middle Aged | Prediabetic State | Risk Factors [SUMMARY]
[CONTENT] Adult | Blood Glucose | Cross-Sectional Studies | Diabetes Mellitus | Ethiopia | Fasting | Humans | Hypertension | Hypertriglyceridemia | Middle Aged | Prediabetic State | Risk Factors [SUMMARY]
[CONTENT] diabetes mellitus dm | impaired fasting glucose | diabetes mellitus population | diabetes consequences including | background diabetes mellitus [SUMMARY]
null
[CONTENT] diabetes mellitus dm | impaired fasting glucose | diabetes mellitus population | diabetes consequences including | background diabetes mellitus [SUMMARY]
[CONTENT] diabetes mellitus dm | impaired fasting glucose | diabetes mellitus population | diabetes consequences including | background diabetes mellitus [SUMMARY]
[CONTENT] diabetes mellitus dm | impaired fasting glucose | diabetes mellitus population | diabetes consequences including | background diabetes mellitus [SUMMARY]
[CONTENT] diabetes mellitus dm | impaired fasting glucose | diabetes mellitus population | diabetes consequences including | background diabetes mellitus [SUMMARY]
[CONTENT] ifg | study | diabetes | 12 | town | participants | ethiopia | data | 11 | glucose [SUMMARY]
null
[CONTENT] ifg | study | diabetes | 12 | town | participants | ethiopia | data | 11 | glucose [SUMMARY]
[CONTENT] ifg | study | diabetes | 12 | town | participants | ethiopia | data | 11 | glucose [SUMMARY]
[CONTENT] ifg | study | diabetes | 12 | town | participants | ethiopia | data | 11 | glucose [SUMMARY]
[CONTENT] ifg | study | diabetes | 12 | town | participants | ethiopia | data | 11 | glucose [SUMMARY]
[CONTENT] diabetes | ifg | people | dm | risk | magnitude | glucose | impaired | diabetes mellitus | fasting glucose [SUMMARY]
null
[CONTENT] ifg | 12 | ci | 95 | 11 | mellitus | 95 ci | 45 | 24 | town ethiopia [SUMMARY]
[CONTENT] diabetes | activities recommended measures prevention | conclusion | high family | high family history | high family history diabetes | activities | activities recommended | activities recommended measures | recommended measures [SUMMARY]
[CONTENT] ifg | town | study | kebeles | woreta | woreta town | data | diabetes | 12 | participants [SUMMARY]
[CONTENT] ifg | town | study | kebeles | woreta | woreta town | data | diabetes | 12 | participants [SUMMARY]
[CONTENT] Ethiopia ||| [SUMMARY]
null
[CONTENT] Three hundred and twenty-four | 324 | 43.76 ± | 17.29 years ||| 43.2% | 10.0% ||| 1.72 | 95% | CI | 1.23-3.14 | 3.48 | 95% | CI | 1.35-8.89 | Diabetic | 2.34 | 95% | CI | 1.37-5.79 | 2.35 | 95% | CI | 1.41-5.43 [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] Ethiopia ||| ||| May to June 30, 2021 ||| ||| ||| FBS | early in the morning ||| Epi-Info 7.2.5.0 | SPSS 25 ||| IFG ||| p 0.05 ||| Three hundred and twenty-four | 324 | 43.76 ± | 17.29 years ||| 43.2% | 10.0% ||| 1.72 | 95% | CI | 1.23-3.14 | 3.48 | 95% | CI | 1.35-8.89 | Diabetic | 2.34 | 95% | CI | 1.37-5.79 | 2.35 | 95% | CI | 1.41-5.43 ||| [SUMMARY]
[CONTENT] Ethiopia ||| ||| May to June 30, 2021 ||| ||| ||| FBS | early in the morning ||| Epi-Info 7.2.5.0 | SPSS 25 ||| IFG ||| p 0.05 ||| Three hundred and twenty-four | 324 | 43.76 ± | 17.29 years ||| 43.2% | 10.0% ||| 1.72 | 95% | CI | 1.23-3.14 | 3.48 | 95% | CI | 1.35-8.89 | Diabetic | 2.34 | 95% | CI | 1.37-5.79 | 2.35 | 95% | CI | 1.41-5.43 ||| [SUMMARY]
Differential Expression of Ki-67 and P27 in Cholesteatoma Compared to Skin Tissue Predicts the Prognosis of Adult Acquired Cholesteatoma.
34309550
The aim of this study was to compare the differential Ki-67 and p27 staining properties of acquired cholesteatoma in adult patients for prognostic analysis.
BACKGROUND
Forty-two adult patients with acquired cholesteatoma were enrolled. The cholesteatoma and matched meatal skin tissues of the patients were immunostained with Ki-67 and p27 antibodies. Canal wall down mastoidectomy was performed in all patients. The differential staining properties--positive staining in the cholesteatoma and negative staining in the skin tissue (C+S-), negative staining in the cholesteatoma and positive staining in the skin tissue (C-S+)--were compared for bone erosion scores (BES), stage, and recurrence rates.
METHODS
Isolated findings in the cholesteatoma tissues, without matching with the skin tissues, demonstrated that stage and recurrence rates were not related to findings in the cholesteatoma tissues (P > .05). However, C+S- for Ki-67 and C-S+ for p27 are risk factors for worse prognosis including advanced stage (P < .001 for Ki-67 and P = .008 for p27), BES values (P < .001 for Ki-67 and P = .001 for p27), and recurrence rates (P < .001 for Ki-67 and P = .037 for p27).
RESULTS
This is the first paper assessing the cholesteatoma prognosis according to the differential Ki-67 and p27 staining properties of cholesteatoma and healthy skin tissues. Cellular proliferation rate in the cholesteatoma is important but insufficient by itself for predicting the prognosis of cholesteatoma patients. Patients having lower basal levels of cellular proliferation rate and higher cellular activity in the cholesteatoma tissue are prone to worse prognosis with increased stage, recurrence rates, and degree of bone erosion.
CONCLUSION
[ "Adult", "Cholesteatoma, Middle Ear", "Cyclin-Dependent Kinase Inhibitor p27", "Humans", "Ki-67 Antigen", "Mastoidectomy", "Prognosis", "Recurrence" ]
8975409
Introduction
Cholesteatoma is a progressive hyperplastic keratinized squamous epithelium of the temporal bone characterized by osteoclastic activity and bone resorption. It is composed of 3 compartments: a cystic part, a matrix, and a perimatrix. The central cystic part contains dead keratinocytes. It is surrounded by the matrix, and the most external compartment is the perimatrix. The active part of the cholesteatoma is the matrix, which harbors continuous proliferating undifferentiated keratinocytes. The perimatrix consists of fibroblasts and granulation tissue. As the central cystic part expands with continuous desquamation of the dead keratinocytes from the matrix, osteoclastic cascade reactions result in bone erosion. The bone erosion capability of the cholesteatoma can trigger extensive expansion and complications such as hearing loss, vestibular involvement, facial nerve paralysis, and even brain abscess and death.1,2 To date, the only definitive treatment of the cholesteatoma is surgery. Despite developing technology and the widespread use of endoscopes, recurrence and recidivism after surgery still exist as a major problem and various studies report rates of 0-70%.3 The underlying pathogenesis and molecular mechanisms of cholesteatoma have not been fully understood. Inflammatory cytokines including interleukin (IL)-1, IL-6, TNF-α, matrix metalloproteinases, imbalance between keratinocyte proliferation and apoptosis, Rho kinase pathway, genetic susceptibility, angiogenetic growth factors, platelet-derived growth factor, and chronic proceeding infections have all been reported to have a role in the cholesteatoma development.1,2,4,5 Nuclear antigen Ki-67 is a protein that is expressed in proliferating cells; thus, it is commonly used as a mitotic index for tumor grading.3 Increased Ki-67 expression in the basal and spinous layers of the cholesteatoma was also reported as showing the high proliferation property of keratinocytes in the cholesteatoma.6 Few ­studies3,7-15 have also investigated the Ki-67 labeling index for predicting the prognostic features of cholesteatoma. However, there is such a wide range of Kİ-67 labeling indexes among healthy skin tissues of cholesteatoma patients, ranging from 0.9% to 24%.10,16 Moreover, a stronger expression of Ki-67 was also reported in the healthy skin tissue compared to the cholesteatoma tissue.15 Nonetheless, all these studies3,7-15 compared the Ki-67 labeling index of the cholesteatoma tissues obtained from different patients without matching the results with the healthy skin tissue (control group) of the same patient during comparison to predict the role of Ki-67 on the cholesteatoma prognosis. Cyclin-dependent kinases are also involved in cell cycle and activate cellular proliferation, similar to Ki-67. P27 is the novel cyclin-dependent kinase inhibitor that acts as a tumor suppressor gene, arrests the cell cycle in phase G1, and stops cellular proliferation.17,18 A limited number of studies have focused on the effect of p27 on the cholesteatoma pathogenesis with conflicting results.16-18 Only one study18 investigated the role of p27 on recurrence of the cholesteatoma without matching the results with the skin tissue of the same patient. Additionally, to the best of the authors’ knowledge, the role of p27 on extensiveness and bone erosion degree of the cholesteatoma has not been evaluated, yet. 
In this study, we investigated the effect of Ki-67 and p27 on the extensiveness (stage), recurrence rate, and bone erosion score of adult acquired cholesteatoma by immunohistochemically matching the staining status of these markers in the cholesteatoma and healthy skin tissues of the same patients.
null
null
Results
There were 42 patients in the study group, of whom 27 (64.3%) were male and 15 (35.7%) were female. The mean age of the patients was 40.73 ± 15.28 years (min = 18, max = 65). The mean follow-up period was 66.69 ± 8.11 months (min = 60, max = 96). Eight (19.04%) of the patients had cholesteatoma recurrence during the follow-up period. Ten (23.9%) patients had stage 1, 29 (69%) patients had stage 2, and 3 (7.1%) patients had stage 3 cholesteatoma, according to the intraoperative and CT findings. One patient had a lateral semicircular canal fistula and 2 patients had facial nerve paralysis as intratemporal complications (stage 3). None of the patients had intracranial complications (stage 4 cholesteatoma). Ki-67: Thirty-nine (92.8%) of the patients had Ki-67-positive staining in the cholesteatoma tissue, and 37 (88.1%) of the patients had Ki-67-positive staining in the meatal skin tissue. There was no statistically significant difference between cholesteatoma and skin tissues regarding the number of patients having Ki-67-positive staining (P = .323) (Figure 3). There was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients with positive Ki-67 staining in the cholesteatoma tissue (P = .383). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 1). The proportion of C(+)S(−) patients was significantly higher in the recurrent group (4/8) compared to the non-recurrent group (0/34) (z value: 4.3347, P < .001). There was no statistically significant difference among the stage groups regarding the number of patients having Ki-67-positive staining in the cholesteatoma tissue (P = .19). However, when the differential staining properties of the patients’ tissues were additionally analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 2). The proportion of C(+)S(−) patients was significantly higher in the stage 3 group (3/3) compared to the stage 1 group (0/10) (z value: 3.6056, P = .0003) and the stage 2 group (1/29) (z value: 4.8138, P < .01). The mean BES values of the patients having positive and negative Ki-67 staining in the cholesteatoma tissue were 2 ± 0 and 2.2 ± 1.15, respectively; this difference was not statistically significant (P > .05). When the differential staining properties of the patients’ tissues were further evaluated, the mean BES values of C(+)S(−), C(−)S(+), C(+)S(+), and C(−)S(−) patients were 5 ± 0.81, 2 ± 0, 1.88 ± 0.63, and 2 ± 0, respectively. C(+)S(−) patients had higher BES values compared to the others (P < .001) (Figure 4).
P27: Eleven (26.2%) of the patients had p27-positive staining in the cholesteatoma tissue, and 31 (73.8%) of the patients had p27-positive staining in the meatal skin tissue. The number of patients having p27-positive staining in the meatal skin tissue was significantly higher than the number of patients having p27-positive staining in the cholesteatoma tissue (P = .041) (Figure 3). There was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients with positive p27 staining in the cholesteatoma tissue (P = .657). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 3). The proportion of C(−)S(+) patients was significantly higher in the recurrent group (7/8) compared to the non-recurrent group (13/34) (z value: 2.51, P = .012). There was no statistically significant difference among the stage groups regarding the number of patients having p27-positive staining in the cholesteatoma tissue (P = .108). However, when the differential staining properties of the tissues were analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 4), and the proportion of C(−)S(+) patients was significantly higher in the stage 2 (17/29) (z value: −3.2236, P = .0012) and stage 3 (3/3) (z value: −3.6056, P = .0003) patient groups compared to the stage 1 patient group (0/10). The mean BES values of the patients having positive and negative p27 staining in the cholesteatoma tissues were 1.27 ± 0.46 and 2.54 ± 1.05, respectively.
Patients with negative staining had significantly higher BES values than patients with positive staining (P < .001). According to the differential staining properties of the patients’ tissues, the mean BES values of C(−)S(+), C(−)S(−) and C(+)S(+) patients for p27 staining were 2.75 ± 1.25, 2.18 ± 0.4, 1.27 ± 0.46, respectively. C(−)S(+) patients had significantly higher BES values compared to other groups, and C(−)S(−) patients had significantly higher BES values than C(+)S(+) patients (P = .001) (Figure 5). Eleven (26.2%) of the patients had p27-positive staining in the cholesteatoma tissue, and 31(73.8%) of the patients had p27-positive staining in the meatal skin tissue. The number of patients having p27-positive staining in the meatal skin tissue was significantly higher compared to the number of patients having p27-positive staining in the cholesteatoma tissue (P = .041) (Figure 3). It was observed that there was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients with positive p27-staining in the cholesteatoma tissue (P = .657). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 3). The proportion of C(−)S(+) patients was significantly higher in the recurrent group (7/8) compared to the non-recurrent group (13/34) (z value:2.51, P = .012). There was no statistically significant difference among the stage groups regarding the number of patients having p27-positive staining in the cholesteatoma tissue (P = .108). However, when the differential staining properties of the tissues were analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 4), and the proportion of C(−)S(+) patients was significantly higher in the stage 2 (17/29) (z value: −3.2236, P = .0012) and the stage 3 patient groups (3/3) (z value: −3.6056, P = .0003) compared to the stage 1 patient group (0/10) . The mean BES values of the patients having positive and negative p27 staining in the cholesteatoma tissues were 1.27 ± 0.46 and 2.54 ± 1.05, respectively. Patients with negative staining had significantly higher BES values than patients with positive staining (P < .001). According to the differential staining properties of the patients’ tissues, the mean BES values of C(−)S(+), C(−)S(−) and C(+)S(+) patients for p27 staining were 2.75 ± 1.25, 2.18 ± 0.4, 1.27 ± 0.46, respectively. C(−)S(+) patients had significantly higher BES values compared to other groups, and C(−)S(−) patients had significantly higher BES values than C(+)S(+) patients (P = .001) (Figure 5).
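For readers who want to sanity-check the group comparisons reported above, the sketch below shows how a pooled two-proportion z-test of this kind can be computed. It is an illustrative reconstruction, not the authors' analysis code; the function name two_proportion_z_test is our own. Feeding it the Ki-67 C(+)S(−) counts from the recurrence comparison (4/8 recurrent vs. 0/34 non-recurrent) yields a z value very close to the reported 4.3347.

# Minimal sketch of a pooled two-proportion z-test (assumed implementation).
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Return (z, two-sided p) for comparing proportions x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    p_value = 2 * norm.sf(abs(z))                         # two-sided p-value
    return z, p_value

# Ki-67 C(+)S(-): 4 of 8 recurrent vs. 0 of 34 non-recurrent patients
z, p = two_proportion_z_test(4, 8, 0, 34)
print(f"z = {z:.4f}, p = {p:.4g}")  # approximately z = 4.33, p < .001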
[ "Staging", "Bone Erosion Score (BES)", "Recurrence Rate", "Immunohistochemistry", "Comparison of the Staining Properties of the Tissues", "Differential Staining Properties of the Patients", "Statistical Analysis", "Ki-67", "P27" ]
[ "According to the intraoperative and computerized tomography (CT) findings, all patients were staged according to the 2017 EAONO/JOS cholesteatoma staging system.19\n", "The erosion status of the middle ear ossicles, scutum, facial nerve canal, tegmen tympani, and otic capsule were noted. Each patient was scored with a bone erosion score ranging from 0 to 12.7\n", "All patients were followed-up for a minimum of 5 years. Patients were examined in the third month, sixth month, and first year visits. Afterward, all patients were routinely examined at each 6-month interval for mastoid cavity control. Patients having perforation of the grafts, unexplained decrement in hearing, persistent otorrhea despite appropriate antibiotic usage, having and diffusion restriction on non-echo-planar diffusion-weighted magnetic resonance imaging were re-operated. According to the intraoperative findings and postsurgical histopathological evaluation results, the recurrence status of the patients was noted.", "Paraffin-embedded cholesteatoma and meatal skin tissues were fixed with 10% formalin. Slices of 5 µm thickness were prepared for immunohistochemical analysis. Deparaffinization of the slices was performed by xylene and alcohol solution washouts after overnight incubation at 37°C. Afterward, the slices were immunostained using Ki-67 monoclonal antibody and p27 antibody (Abcam® Ki-67 antibody, ab15580, Abcam® p27 antibody, ab193379).\nThe pathologist examined the cholesteatoma and skin tissues under the light microscope (Olympus BX53®, Olympus, Tokyo, Japan). A result of >25 of nuclear positively staining cells in the epithelium among 500 total cells (>5%) was regarded as positive staining15,17,18,20 (Figures 1and 2).\n", "Firstly, the number of patients having positive and negative staining with Ki-67 and p27 in the cholesteatoma and skin tissues were compared among all patients. Secondly, only the cholesteatoma tissues of the patients were compared regarding the labeling status according to the subgroup of the patients (stage categories, recurrence rate status, and bone erosion scores). Thirdly, different from the previous studies, the differential staining properties of the patients were also considered by matching the cholesteatoma results with the results of the healthy skin tissues of the same patients.", "Each patient was classified into one of the 4 categories according to the staining by Ki-67 and p27 antibodies:\n\na. Cholesteatoma-positive and skin-negative (C(+)S(−)): Patient had positive staining in the cholesteatoma and negative staining in the skin tissue.\n\nb. Cholesteatoma-negative and skin-positive (C(−)S(+)): Patient had negative staining in the cholesteatoma and positive staining in the skin tissue.\n\nc. Cholesteatoma-positive and skin-positive (C(+)S(+)): Patient had positive staining in both the cholesteatoma and the skin tissues.\n\nd. Cholesteatoma-negative and skin-negative (C(−)S(−)): Patient had negative staining in both the cholesteatoma and the skin tissues.", "Statistical analysis was performed using SPSS Version 24.0 (IBM SPSS, New York, USA, 2016). Data were shown as mean ± standard deviation for continuous variables and the number of cases was used for categorical variables. Data were controlled for normal distribution using the Shapiro–Wilk test. The chi-square test was employed for the comparison of the number of positive- and negative- staining tissues between patient groups. 
The Z test was used to compare the proportions of the number of patients according to the differential staining properties. In addition, the Mann–Whitney U-test was used to compare the mean BES of positive- and negative staining in the patients’ cholesteatoma tissues. One-way analysis of variance test was used to compare the mean BES of the patients according to the differential staining status. A P value of < .05 was regarded as statistically significant.", "Thirty-nine (92.8%) of the patients had Ki-67-positive staining in the cholesteatoma tissue, and 37 (88.1%) of the patients had Ki-67-positive staining in the meatal skin tissue. There was no statistically significant difference between cholesteatoma and skin tissues regarding the number of patients having Ki-67-positive staining (P = .323) (Figure 3).\n\nThere was no statistically significant difference between recurrent and non-recurrent patients regarding the number of positive Ki-67 staining in the patients’ cholesteatoma tissue (P = .383). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 1). The proportion of C(+)S(−) patients was significantly higher in the recurrent group (4/8) compared to the non-recurrent group (0/34) (z value: 4.3347, P < .001). \n\nThere was no statistically significant difference among the stage groups regarding the number of patients having Ki-67-positive staining in the cholesteatoma tissue (P = .19). However, when the differential staining properties of the patients’ tissues were additionally analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 2). The proportion of C(+)S(−) patients was significantly higher in the stage 3 group (3/3) compared to the stage 1 group (0/10) (z value: 3.6056, P = .0003) and stage 2 group (1/29) (z value:4.8138, P < .01).\n\nThe mean BES of the patients having positive and negative Ki-67 staining in the cholesteatoma tissue were 2 ± 0 and 2.2 ± 1.15, respectively. There was no statistically significant mean BES difference between positive and negative Ki-67-staining in the patients’ cholesteatoma tissue (P ˃ .05). When the differential staining properties of the patients’ tissues were further evaluated, the mean BES of C(+)S(−), C(−)S(+), C(+)S(+) and C(−)S(−) patients were 5 ± 0.81, 2 ± 0, 1.88 ± 0.63 and 2 ± 0, respectively. C(+)S(−) patients had higher BES values compared to the others (P < .001) (Figure 4).\n", "Eleven (26.2%) of the patients had p27-positive staining in the cholesteatoma tissue, and 31(73.8%) of the patients had p27-positive staining in the meatal skin tissue. The number of patients having p27-positive staining in the meatal skin tissue was significantly higher compared to the number of patients having p27-positive staining in the cholesteatoma tissue (P = .041) (Figure 3).\nIt was observed that there was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients with positive p27-staining in the cholesteatoma tissue (P = .657). 
However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 3). The proportion of C(−)S(+) patients was significantly higher in the recurrent group (7/8) compared to the non-recurrent group (13/34) (z value:2.51, P = .012). \n\nThere was no statistically significant difference among the stage groups regarding the number of patients having p27-positive staining in the cholesteatoma tissue (P = .108). However, when the differential staining properties of the tissues were analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 4), and the proportion of C(−)S(+) patients was significantly higher in the stage 2 (17/29) (z value: −3.2236, P = .0012) and the stage 3 patient groups (3/3) (z value: −3.6056, P = .0003) compared to the stage 1 patient group (0/10) . \n\nThe mean BES values of the patients having positive and negative p27 staining in the cholesteatoma tissues were 1.27 ± 0.46 and 2.54 ± 1.05, respectively. Patients with negative staining had significantly higher BES values than patients with positive staining (P < .001). According to the differential staining properties of the patients’ tissues, the mean BES values of C(−)S(+),\nC(−)S(−) and C(+)S(+) patients for p27 staining were 2.75 ± 1.25, 2.18 ± 0.4, 1.27 ± 0.46, respectively. C(−)S(+) patients had significantly higher BES values compared to other groups, and C(−)S(−) patients had significantly higher BES values than C(+)S(+) patients (P = .001) (Figure 5). \n" ]
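The positivity threshold described above (>25 positively stained nuclei among 500 counted cells, i.e. >5%) and the four differential staining categories amount to a simple decision rule. The sketch below is a hedged illustration with hypothetical helper names (is_positive, differential_category); it is not code from the study.

# Illustrative sketch of the staining threshold and the 4 differential categories.
def is_positive(stained_cells: int, total_cells: int = 500, threshold: float = 0.05) -> bool:
    """A tissue is scored positive when more than 5% of counted nuclei stain."""
    return stained_cells / total_cells > threshold

def differential_category(cholesteatoma_positive: bool, skin_positive: bool) -> str:
    """Map a patient's paired cholesteatoma/skin results to one of the 4 categories."""
    if cholesteatoma_positive and not skin_positive:
        return "C(+)S(-)"
    if not cholesteatoma_positive and skin_positive:
        return "C(-)S(+)"
    if cholesteatoma_positive and skin_positive:
        return "C(+)S(+)"
    return "C(-)S(-)"

# Example: 40/500 Ki-67-positive nuclei in cholesteatoma, 10/500 in meatal skin
print(differential_category(is_positive(40), is_positive(10)))  # -> "C(+)S(-)"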
[ "Introduction", "Materials and Methods", "Staging", "Bone Erosion Score (BES)", "Recurrence Rate", "Immunohistochemistry", "Comparison of the Staining Properties of the Tissues", "Differential Staining Properties of the Patients", "Statistical Analysis", "Results", "Ki-67", "P27", "Discussion" ]
[ "Cholesteatoma is a progressive hyperplastic keratinized squamous epithelium of the temporal bone characterized by osteoclastic activity and bone resorption. It is composed of 3 compartments: a cystic part, a matrix, and a perimatrix. The central cystic part contains dead keratinocytes. It is surrounded by the matrix, and the most external compartment is the perimatrix. The active part of the cholesteatoma is the matrix, which harbors continuous proliferating undifferentiated keratinocytes. The perimatrix consists of fibroblasts and granulation tissue. As the central cystic part expands with continuous desquamation of the dead keratinocytes from the matrix, osteoclastic cascade reactions result in bone erosion. The bone erosion capability of the cholesteatoma can trigger extensive expansion and complications such as hearing loss, vestibular involvement, facial nerve paralysis, and even brain abscess and death.1,2 To date, the only definitive treatment of the cholesteatoma is surgery. Despite developing technology and the widespread use of endoscopes, recurrence and recidivism after surgery still exist as a major problem and various studies report rates of 0-70%.3\n\nThe underlying pathogenesis and molecular mechanisms of cholesteatoma have not been fully understood. Inflammatory cytokines including interleukin (IL)-1, IL-6, TNF-α, matrix metalloproteinases, imbalance between keratinocyte proliferation and apoptosis, Rho kinase pathway, genetic susceptibility, angiogenetic growth factors, platelet-derived growth factor, and chronic proceeding infections have all been reported to have a role in the cholesteatoma development.1,2,4,5 Nuclear antigen Ki-67 is a protein that is expressed in proliferating cells; thus, it is commonly used as a mitotic index for tumor grading.3 Increased Ki-67 expression in the basal and spinous layers of the cholesteatoma was also reported as showing the high proliferation property of keratinocytes in the cholesteatoma.6 Few ­studies3,7-15 have also investigated the Ki-67 labeling index for predicting the prognostic features of cholesteatoma. However, there is such a wide range of Kİ-67 labeling indexes among healthy skin tissues of cholesteatoma patients, ranging from 0.9% to 24%.10,16 Moreover, a stronger expression of Ki-67 was also reported in the healthy skin tissue compared to the cholesteatoma tissue.15 Nonetheless, all these studies3,7-15 compared the Ki-67 labeling index of the cholesteatoma tissues obtained from different patients without matching the results with the healthy skin tissue (control group) of the same patient during comparison to predict the role of Ki-67 on the cholesteatoma prognosis.\nCyclin-dependent kinases are also involved in cell cycle and activate cellular proliferation, similar to Ki-67. P27 is the novel cyclin-dependent kinase inhibitor that acts as a tumor suppressor gene, arrests the cell cycle in phase G1, and stops cellular proliferation.17,18 A limited number of studies have focused on the effect of p27 on the cholesteatoma pathogenesis with conflicting results.16-18 Only one study18 investigated the role of p27 on recurrence of the cholesteatoma without matching the results with the skin tissue of the same patient. 
Additionally, to the best of the authors’ knowledge, the role of p27 on extensiveness and bone erosion degree of the cholesteatoma has not been evaluated, yet.\nIn this study, we investigated the effect of Ki-67 and p27 on the extensiveness (stage), recurrence rate, and bone erosion score of adults’ acquired cholesteatoma by matching the staining status of the markers in the cholesteatoma and healthy skin tissues of the same patients, immunohistochemically.", "Local ethical committee approval was acquired for this prospective study. The power analysis of the study was performed according to the previously published articles investigating the role of Ki-67 on the cholesteatoma pathogenesis. The result of the power analysis was 12 individuals in each group. Forty-two adults (>18 years old) patients with acquired cholesteatoma were enrolled. The diagnosis of the cholesteatoma was made by intraoperative and histopathological findings. All patients were operated under general anesthesia with a canal wall down mastoidectomy technique. Patients having recurrent disease at first admission and patients who had been operated with a canal wall-up technique were excluded. The cholesteatoma tissue and healthy meatal skin tissue (3 × 3 mm.) away from the cholesteatoma were obtained from all patients during the surgery. Healthy skin tissue was obtained as the control group for each patient.\nStaging According to the intraoperative and computerized tomography (CT) findings, all patients were staged according to the 2017 EAONO/JOS cholesteatoma staging system.19\n\nAccording to the intraoperative and computerized tomography (CT) findings, all patients were staged according to the 2017 EAONO/JOS cholesteatoma staging system.19\n\nBone Erosion Score (BES) The erosion status of the middle ear ossicles, scutum, facial nerve canal, tegmen tympani, and otic capsule were noted. Each patient was scored with a bone erosion score ranging from 0 to 12.7\n\nThe erosion status of the middle ear ossicles, scutum, facial nerve canal, tegmen tympani, and otic capsule were noted. Each patient was scored with a bone erosion score ranging from 0 to 12.7\n\nRecurrence Rate All patients were followed-up for a minimum of 5 years. Patients were examined in the third month, sixth month, and first year visits. Afterward, all patients were routinely examined at each 6-month interval for mastoid cavity control. Patients having perforation of the grafts, unexplained decrement in hearing, persistent otorrhea despite appropriate antibiotic usage, having and diffusion restriction on non-echo-planar diffusion-weighted magnetic resonance imaging were re-operated. According to the intraoperative findings and postsurgical histopathological evaluation results, the recurrence status of the patients was noted.\nAll patients were followed-up for a minimum of 5 years. Patients were examined in the third month, sixth month, and first year visits. Afterward, all patients were routinely examined at each 6-month interval for mastoid cavity control. Patients having perforation of the grafts, unexplained decrement in hearing, persistent otorrhea despite appropriate antibiotic usage, having and diffusion restriction on non-echo-planar diffusion-weighted magnetic resonance imaging were re-operated. According to the intraoperative findings and postsurgical histopathological evaluation results, the recurrence status of the patients was noted.\nImmunohistochemistry Paraffin-embedded cholesteatoma and meatal skin tissues were fixed with 10% formalin. 
Slices of 5 µm thickness were prepared for immunohistochemical analysis. Deparaffinization of the slices was performed by xylene and alcohol solution washouts after overnight incubation at 37°C. Afterward, the slices were immunostained using Ki-67 monoclonal antibody and p27 antibody (Abcam® Ki-67 antibody, ab15580, Abcam® p27 antibody, ab193379).\nThe pathologist examined the cholesteatoma and skin tissues under the light microscope (Olympus BX53®, Olympus, Tokyo, Japan). A result of >25 of nuclear positively staining cells in the epithelium among 500 total cells (>5%) was regarded as positive staining15,17,18,20 (Figures 1and 2).\n\nParaffin-embedded cholesteatoma and meatal skin tissues were fixed with 10% formalin. Slices of 5 µm thickness were prepared for immunohistochemical analysis. Deparaffinization of the slices was performed by xylene and alcohol solution washouts after overnight incubation at 37°C. Afterward, the slices were immunostained using Ki-67 monoclonal antibody and p27 antibody (Abcam® Ki-67 antibody, ab15580, Abcam® p27 antibody, ab193379).\nThe pathologist examined the cholesteatoma and skin tissues under the light microscope (Olympus BX53®, Olympus, Tokyo, Japan). A result of >25 of nuclear positively staining cells in the epithelium among 500 total cells (>5%) was regarded as positive staining15,17,18,20 (Figures 1and 2).\n\nComparison of the Staining Properties of the Tissues Firstly, the number of patients having positive and negative staining with Ki-67 and p27 in the cholesteatoma and skin tissues were compared among all patients. Secondly, only the cholesteatoma tissues of the patients were compared regarding the labeling status according to the subgroup of the patients (stage categories, recurrence rate status, and bone erosion scores). Thirdly, different from the previous studies, the differential staining properties of the patients were also considered by matching the cholesteatoma results with the results of the healthy skin tissues of the same patients.\nFirstly, the number of patients having positive and negative staining with Ki-67 and p27 in the cholesteatoma and skin tissues were compared among all patients. Secondly, only the cholesteatoma tissues of the patients were compared regarding the labeling status according to the subgroup of the patients (stage categories, recurrence rate status, and bone erosion scores). Thirdly, different from the previous studies, the differential staining properties of the patients were also considered by matching the cholesteatoma results with the results of the healthy skin tissues of the same patients.\nDifferential Staining Properties of the Patients Each patient was classified into one of the 4 categories according to the staining by Ki-67 and p27 antibodies:\n\na. Cholesteatoma-positive and skin-negative (C(+)S(−)): Patient had positive staining in the cholesteatoma and negative staining in the skin tissue.\n\nb. Cholesteatoma-negative and skin-positive (C(−)S(+)): Patient had negative staining in the cholesteatoma and positive staining in the skin tissue.\n\nc. Cholesteatoma-positive and skin-positive (C(+)S(+)): Patient had positive staining in both the cholesteatoma and the skin tissues.\n\nd. Cholesteatoma-negative and skin-negative (C(−)S(−)): Patient had negative staining in both the cholesteatoma and the skin tissues.\nEach patient was classified into one of the 4 categories according to the staining by Ki-67 and p27 antibodies:\n\na. 
Cholesteatoma-positive and skin-negative (C(+)S(−)): Patient had positive staining in the cholesteatoma and negative staining in the skin tissue.\n\nb. Cholesteatoma-negative and skin-positive (C(−)S(+)): Patient had negative staining in the cholesteatoma and positive staining in the skin tissue.\n\nc. Cholesteatoma-positive and skin-positive (C(+)S(+)): Patient had positive staining in both the cholesteatoma and the skin tissues.\n\nd. Cholesteatoma-negative and skin-negative (C(−)S(−)): Patient had negative staining in both the cholesteatoma and the skin tissues.\nStatistical Analysis Statistical analysis was performed using SPSS Version 24.0 (IBM SPSS, New York, USA, 2016). Data were shown as mean ± standard deviation for continuous variables and the number of cases was used for categorical variables. Data were controlled for normal distribution using the Shapiro–Wilk test. The chi-square test was employed for the comparison of the number of positive- and negative- staining tissues between patient groups. The Z test was used to compare the proportions of the number of patients according to the differential staining properties. In addition, the Mann–Whitney U-test was used to compare the mean BES of positive- and negative staining in the patients’ cholesteatoma tissues. One-way analysis of variance test was used to compare the mean BES of the patients according to the differential staining status. A P value of < .05 was regarded as statistically significant.\nStatistical analysis was performed using SPSS Version 24.0 (IBM SPSS, New York, USA, 2016). Data were shown as mean ± standard deviation for continuous variables and the number of cases was used for categorical variables. Data were controlled for normal distribution using the Shapiro–Wilk test. The chi-square test was employed for the comparison of the number of positive- and negative- staining tissues between patient groups. The Z test was used to compare the proportions of the number of patients according to the differential staining properties. In addition, the Mann–Whitney U-test was used to compare the mean BES of positive- and negative staining in the patients’ cholesteatoma tissues. One-way analysis of variance test was used to compare the mean BES of the patients according to the differential staining status. A P value of < .05 was regarded as statistically significant.", "According to the intraoperative and computerized tomography (CT) findings, all patients were staged according to the 2017 EAONO/JOS cholesteatoma staging system.19\n", "The erosion status of the middle ear ossicles, scutum, facial nerve canal, tegmen tympani, and otic capsule were noted. Each patient was scored with a bone erosion score ranging from 0 to 12.7\n", "All patients were followed-up for a minimum of 5 years. Patients were examined in the third month, sixth month, and first year visits. Afterward, all patients were routinely examined at each 6-month interval for mastoid cavity control. Patients having perforation of the grafts, unexplained decrement in hearing, persistent otorrhea despite appropriate antibiotic usage, having and diffusion restriction on non-echo-planar diffusion-weighted magnetic resonance imaging were re-operated. According to the intraoperative findings and postsurgical histopathological evaluation results, the recurrence status of the patients was noted.", "Paraffin-embedded cholesteatoma and meatal skin tissues were fixed with 10% formalin. Slices of 5 µm thickness were prepared for immunohistochemical analysis. 
Deparaffinization of the slices was performed by xylene and alcohol solution washouts after overnight incubation at 37°C. Afterward, the slices were immunostained using Ki-67 monoclonal antibody and p27 antibody (Abcam® Ki-67 antibody, ab15580, Abcam® p27 antibody, ab193379).\nThe pathologist examined the cholesteatoma and skin tissues under the light microscope (Olympus BX53®, Olympus, Tokyo, Japan). A result of >25 of nuclear positively staining cells in the epithelium among 500 total cells (>5%) was regarded as positive staining15,17,18,20 (Figures 1and 2).\n", "Firstly, the number of patients having positive and negative staining with Ki-67 and p27 in the cholesteatoma and skin tissues were compared among all patients. Secondly, only the cholesteatoma tissues of the patients were compared regarding the labeling status according to the subgroup of the patients (stage categories, recurrence rate status, and bone erosion scores). Thirdly, different from the previous studies, the differential staining properties of the patients were also considered by matching the cholesteatoma results with the results of the healthy skin tissues of the same patients.", "Each patient was classified into one of the 4 categories according to the staining by Ki-67 and p27 antibodies:\n\na. Cholesteatoma-positive and skin-negative (C(+)S(−)): Patient had positive staining in the cholesteatoma and negative staining in the skin tissue.\n\nb. Cholesteatoma-negative and skin-positive (C(−)S(+)): Patient had negative staining in the cholesteatoma and positive staining in the skin tissue.\n\nc. Cholesteatoma-positive and skin-positive (C(+)S(+)): Patient had positive staining in both the cholesteatoma and the skin tissues.\n\nd. Cholesteatoma-negative and skin-negative (C(−)S(−)): Patient had negative staining in both the cholesteatoma and the skin tissues.", "Statistical analysis was performed using SPSS Version 24.0 (IBM SPSS, New York, USA, 2016). Data were shown as mean ± standard deviation for continuous variables and the number of cases was used for categorical variables. Data were controlled for normal distribution using the Shapiro–Wilk test. The chi-square test was employed for the comparison of the number of positive- and negative- staining tissues between patient groups. The Z test was used to compare the proportions of the number of patients according to the differential staining properties. In addition, the Mann–Whitney U-test was used to compare the mean BES of positive- and negative staining in the patients’ cholesteatoma tissues. One-way analysis of variance test was used to compare the mean BES of the patients according to the differential staining status. A P value of < .05 was regarded as statistically significant.", "There were 42 patients in the study group, of whom 27 (64.3%) were male and 15 (35.7%) were female. The mean age of the patients was 40.73 ± 15.28 (min = 18, max = 65). The mean follow-up period was 66.69 ± 8.11 (min = 60, max = 96) months. Eight (19.04%) of the patients had cholesteatoma recurrence during the follow-up period. Ten (23.9%) patients had stage 1, 29 (69%) patients had stage 2, and 3 (7.1%) patients had stage 3 cholesteatoma, according to the intraoperative and CT findings. One patient had lateral semicircular canal fistula and 2 patients had facial nerve paralysis as intratemporal complications (stage 3). 
None of the patients had intracranial complications (stage 4 cholesteatoma).\nKi-67 Thirty-nine (92.8%) of the patients had Ki-67-positive staining in the cholesteatoma tissue, and 37 (88.1%) of the patients had Ki-67-positive staining in the meatal skin tissue. There was no statistically significant difference between cholesteatoma and skin tissues regarding the number of patients having Ki-67-positive staining (P = .323) (Figure 3).\n\nThere was no statistically significant difference between recurrent and non-recurrent patients regarding the number of positive Ki-67 staining in the patients’ cholesteatoma tissue (P = .383). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 1). The proportion of C(+)S(−) patients was significantly higher in the recurrent group (4/8) compared to the non-recurrent group (0/34) (z value: 4.3347, P < .001). \n\nThere was no statistically significant difference among the stage groups regarding the number of patients having Ki-67-positive staining in the cholesteatoma tissue (P = .19). However, when the differential staining properties of the patients’ tissues were additionally analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 2). The proportion of C(+)S(−) patients was significantly higher in the stage 3 group (3/3) compared to the stage 1 group (0/10) (z value: 3.6056, P = .0003) and stage 2 group (1/29) (z value:4.8138, P < .01).\n\nThe mean BES of the patients having positive and negative Ki-67 staining in the cholesteatoma tissue were 2 ± 0 and 2.2 ± 1.15, respectively. There was no statistically significant mean BES difference between positive and negative Ki-67-staining in the patients’ cholesteatoma tissue (P ˃ .05). When the differential staining properties of the patients’ tissues were further evaluated, the mean BES of C(+)S(−), C(−)S(+), C(+)S(+) and C(−)S(−) patients were 5 ± 0.81, 2 ± 0, 1.88 ± 0.63 and 2 ± 0, respectively. C(+)S(−) patients had higher BES values compared to the others (P < .001) (Figure 4).\n\nThirty-nine (92.8%) of the patients had Ki-67-positive staining in the cholesteatoma tissue, and 37 (88.1%) of the patients had Ki-67-positive staining in the meatal skin tissue. There was no statistically significant difference between cholesteatoma and skin tissues regarding the number of patients having Ki-67-positive staining (P = .323) (Figure 3).\n\nThere was no statistically significant difference between recurrent and non-recurrent patients regarding the number of positive Ki-67 staining in the patients’ cholesteatoma tissue (P = .383). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 1). The proportion of C(+)S(−) patients was significantly higher in the recurrent group (4/8) compared to the non-recurrent group (0/34) (z value: 4.3347, P < .001). \n\nThere was no statistically significant difference among the stage groups regarding the number of patients having Ki-67-positive staining in the cholesteatoma tissue (P = .19). 
However, when the differential staining properties of the patients’ tissues were additionally analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 2). The proportion of C(+)S(−) patients was significantly higher in the stage 3 group (3/3) compared to the stage 1 group (0/10) (z value: 3.6056, P = .0003) and stage 2 group (1/29) (z value:4.8138, P < .01).\n\nThe mean BES of the patients having positive and negative Ki-67 staining in the cholesteatoma tissue were 2 ± 0 and 2.2 ± 1.15, respectively. There was no statistically significant mean BES difference between positive and negative Ki-67-staining in the patients’ cholesteatoma tissue (P ˃ .05). When the differential staining properties of the patients’ tissues were further evaluated, the mean BES of C(+)S(−), C(−)S(+), C(+)S(+) and C(−)S(−) patients were 5 ± 0.81, 2 ± 0, 1.88 ± 0.63 and 2 ± 0, respectively. C(+)S(−) patients had higher BES values compared to the others (P < .001) (Figure 4).\n\nP27 Eleven (26.2%) of the patients had p27-positive staining in the cholesteatoma tissue, and 31(73.8%) of the patients had p27-positive staining in the meatal skin tissue. The number of patients having p27-positive staining in the meatal skin tissue was significantly higher compared to the number of patients having p27-positive staining in the cholesteatoma tissue (P = .041) (Figure 3).\nIt was observed that there was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients with positive p27-staining in the cholesteatoma tissue (P = .657). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 3). The proportion of C(−)S(+) patients was significantly higher in the recurrent group (7/8) compared to the non-recurrent group (13/34) (z value:2.51, P = .012). \n\nThere was no statistically significant difference among the stage groups regarding the number of patients having p27-positive staining in the cholesteatoma tissue (P = .108). However, when the differential staining properties of the tissues were analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 4), and the proportion of C(−)S(+) patients was significantly higher in the stage 2 (17/29) (z value: −3.2236, P = .0012) and the stage 3 patient groups (3/3) (z value: −3.6056, P = .0003) compared to the stage 1 patient group (0/10) . \n\nThe mean BES values of the patients having positive and negative p27 staining in the cholesteatoma tissues were 1.27 ± 0.46 and 2.54 ± 1.05, respectively. Patients with negative staining had significantly higher BES values than patients with positive staining (P < .001). According to the differential staining properties of the patients’ tissues, the mean BES values of C(−)S(+),\nC(−)S(−) and C(+)S(+) patients for p27 staining were 2.75 ± 1.25, 2.18 ± 0.4, 1.27 ± 0.46, respectively. C(−)S(+) patients had significantly higher BES values compared to other groups, and C(−)S(−) patients had significantly higher BES values than C(+)S(+) patients (P = .001) (Figure 5). 
\n\nEleven (26.2%) of the patients had p27-positive staining in the cholesteatoma tissue, and 31(73.8%) of the patients had p27-positive staining in the meatal skin tissue. The number of patients having p27-positive staining in the meatal skin tissue was significantly higher compared to the number of patients having p27-positive staining in the cholesteatoma tissue (P = .041) (Figure 3).\nIt was observed that there was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients with positive p27-staining in the cholesteatoma tissue (P = .657). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 3). The proportion of C(−)S(+) patients was significantly higher in the recurrent group (7/8) compared to the non-recurrent group (13/34) (z value:2.51, P = .012). \n\nThere was no statistically significant difference among the stage groups regarding the number of patients having p27-positive staining in the cholesteatoma tissue (P = .108). However, when the differential staining properties of the tissues were analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 4), and the proportion of C(−)S(+) patients was significantly higher in the stage 2 (17/29) (z value: −3.2236, P = .0012) and the stage 3 patient groups (3/3) (z value: −3.6056, P = .0003) compared to the stage 1 patient group (0/10) . \n\nThe mean BES values of the patients having positive and negative p27 staining in the cholesteatoma tissues were 1.27 ± 0.46 and 2.54 ± 1.05, respectively. Patients with negative staining had significantly higher BES values than patients with positive staining (P < .001). According to the differential staining properties of the patients’ tissues, the mean BES values of C(−)S(+),\nC(−)S(−) and C(+)S(+) patients for p27 staining were 2.75 ± 1.25, 2.18 ± 0.4, 1.27 ± 0.46, respectively. C(−)S(+) patients had significantly higher BES values compared to other groups, and C(−)S(−) patients had significantly higher BES values than C(+)S(+) patients (P = .001) (Figure 5). \n", "Thirty-nine (92.8%) of the patients had Ki-67-positive staining in the cholesteatoma tissue, and 37 (88.1%) of the patients had Ki-67-positive staining in the meatal skin tissue. There was no statistically significant difference between cholesteatoma and skin tissues regarding the number of patients having Ki-67-positive staining (P = .323) (Figure 3).\n\nThere was no statistically significant difference between recurrent and non-recurrent patients regarding the number of positive Ki-67 staining in the patients’ cholesteatoma tissue (P = .383). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 1). The proportion of C(+)S(−) patients was significantly higher in the recurrent group (4/8) compared to the non-recurrent group (0/34) (z value: 4.3347, P < .001). \n\nThere was no statistically significant difference among the stage groups regarding the number of patients having Ki-67-positive staining in the cholesteatoma tissue (P = .19). 
However, when the differential staining properties of the patients’ tissues were additionally analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 2). The proportion of C(+)S(−) patients was significantly higher in the stage 3 group (3/3) compared to the stage 1 group (0/10) (z value: 3.6056, P = .0003) and stage 2 group (1/29) (z value:4.8138, P < .01).\n\nThe mean BES of the patients having positive and negative Ki-67 staining in the cholesteatoma tissue were 2 ± 0 and 2.2 ± 1.15, respectively. There was no statistically significant mean BES difference between positive and negative Ki-67-staining in the patients’ cholesteatoma tissue (P ˃ .05). When the differential staining properties of the patients’ tissues were further evaluated, the mean BES of C(+)S(−), C(−)S(+), C(+)S(+) and C(−)S(−) patients were 5 ± 0.81, 2 ± 0, 1.88 ± 0.63 and 2 ± 0, respectively. C(+)S(−) patients had higher BES values compared to the others (P < .001) (Figure 4).\n", "Eleven (26.2%) of the patients had p27-positive staining in the cholesteatoma tissue, and 31(73.8%) of the patients had p27-positive staining in the meatal skin tissue. The number of patients having p27-positive staining in the meatal skin tissue was significantly higher compared to the number of patients having p27-positive staining in the cholesteatoma tissue (P = .041) (Figure 3).\nIt was observed that there was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients with positive p27-staining in the cholesteatoma tissue (P = .657). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 3). The proportion of C(−)S(+) patients was significantly higher in the recurrent group (7/8) compared to the non-recurrent group (13/34) (z value:2.51, P = .012). \n\nThere was no statistically significant difference among the stage groups regarding the number of patients having p27-positive staining in the cholesteatoma tissue (P = .108). However, when the differential staining properties of the tissues were analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 4), and the proportion of C(−)S(+) patients was significantly higher in the stage 2 (17/29) (z value: −3.2236, P = .0012) and the stage 3 patient groups (3/3) (z value: −3.6056, P = .0003) compared to the stage 1 patient group (0/10) . \n\nThe mean BES values of the patients having positive and negative p27 staining in the cholesteatoma tissues were 1.27 ± 0.46 and 2.54 ± 1.05, respectively. Patients with negative staining had significantly higher BES values than patients with positive staining (P < .001). According to the differential staining properties of the patients’ tissues, the mean BES values of C(−)S(+),\nC(−)S(−) and C(+)S(+) patients for p27 staining were 2.75 ± 1.25, 2.18 ± 0.4, 1.27 ± 0.46, respectively. C(−)S(+) patients had significantly higher BES values compared to other groups, and C(−)S(−) patients had significantly higher BES values than C(+)S(+) patients (P = .001) (Figure 5). 
\n", "In this study, we found that the number of patients having negative p27 staining in the cholesteatoma tissue was higher than for the staining in the skin tissue, however there was no significant difference with regard to the Ki-67 staining. The recurrence rates, stage, and BES values of the patients were not related to the Ki-67 staining status in the cholesteatoma tissues of the patients. However, differential expression of Ki-67 in the cholesteatoma tissue compared to the healthy skin tissue was related to a worse prognosis with increased recurrence rate, stage, and BES values. It was also observed that, negative p27 staining in the cholesteatoma tissue was only related to the elevated BES values; stage and recurrence rates of the cholesteatoma patients were not related to the p27 staining status in the cholesteatoma tissue. On the other hand, differential non-expression of p27 in the cholesteatoma tissue compared to the skin tissue was related to worse prognosis for cholesteatoma patients, such as increased recurrence rate, stage, and BES values.\nProliferating undifferentiated keratinocytes in the matrix and activated fibroblasts in the perimatrix are the main cells in the cholesteatoma tissue that enhance progression. Uncontrolled proliferation of the keratinocytes initiates the vicious cycle; proliferating keratinocytes release cytokines that activate the perimatrix fibroblasts. Activated fibroblasts secrete epidermal growth factor and keratinocyte growth factor, which in turn activate the matrix keratinocytes in a vicious cycle. Osteoclastic molecules secreted from the proliferating keratinocytes and activated fibroblasts trigger the bone resorption and in turn the development of complications.1,21 Thus, cellular proliferation rate in the cholesteatoma tissue is an important factor for progression of the cholesteatoma.\nKi-67 nuclear antigen was demonstrated to be expressed in proliferating cells in the late G1, S, G2, and M phases of the cellular cycle. This protein has been widely used as a proliferation marker for tumors and proliferative diseases including cholesteatoma.8 Most studies8-11,13,16,20,22 reported higher expression levels of Ki-67 protein in the cholesteatoma tissue compared to the healthy skin tissue. However, Kuczkowski et al.6 found an insignificant increase in Ki-67 expression in the cholesteatoma, and Kim et al.15 reported a strongly positive Ki-67 expression in the skin tissue compared to focal staining in the cholesteatoma tissue. Our results were also correlated with Kuczkowski et al.,6 with a slightly increased expression of Ki-67 in the cholesteatoma tissue compared to the skin tissue. When the effect of Ki-67 on the prognosis of cholesteatoma was evaluated, challenging results were reported. A higher Ki-67 labeling index in the cholesteatoma tissue was observed to be significantly related to the recurrence rate of the cholesteatomas.7,10 On the contrary, some studies3,14 could not find such a relationship. The bone erosion capacity of the cholesteatoma is an important prognostic factor that enables extension and complication development. Recently, Araz Server et al.3 found a correlation between the Ki-67 labeling index of cholesteatoma tissue and malleus erosion. Additionally, Hamed et al.8 reported a direct correlation between Ki-67 expression in the cholesteatoma and BES. On the other hand, several authors7,9,11,15 could not find a correlation between the Ki-67 expression levels in the cholesteatoma tissue and BES or extent of the disease. 
None of the aforementioned studies considered the Ki-67 skin-labeling status of the patients during prognostic analysis. According to our study results, differential Ki-67 staining in the cholesteatoma tissue compared to the skin tissue (C(+)S(−)) affected the prognosis (recurrence rate, extensiveness-stage, and bone erosion capacity) rather than the isolated staining status in the cholesteatoma tissue. The controversial outcomes in the previous literature may be attributed to the wide range of basal Ki-67 expression in the skin tissue. Unmatched cholesteatoma-skin expression status might have caused conflicting results regarding the effect of Ki-67 on the cholesteatoma prognosis.\nCyclin-dependent kinase inhibitor p27 blocks cyclin D, E, A, and B-dependent kinases. Thus, the decrease in the levels of p27 is related to the increased cyclin-dependent kinase levels and enables ongoing cellular proliferation.16 Lower expression levels of p27 in the cholesteatoma tissue compared to the healthy skin tissue were reported as an evidence of higher cellular activity of the cholesteatoma.17,18 On the other hand, Chae et al.16 demonstrated an increased expression level of p27 in the cholesteatoma tissue. Regarding the prognostic value of p27 expression in the cholesteatoma tissue, Kuczkowski et al.18 reported a lower expression of p27 in the recurrent cases of cholesteatoma than the primary acquired ones, without matching the cholesteatoma-skin values. Additionally, the effect of p27-dependent cellular proliferation on other prognostic factors of cholesteatoma such as extensiveness (stage) and bone erosion levels have not been evaluated, yet. According to our study results, the number of patients having negative p27 staining was greater in the cholesteatoma tissue compared to skin tissue, demonstrating the increased cellular proliferation of the cholesteatoma. Moreover, differential non-expression of p27 in the cholesteatoma tissue compared to skin tissue (C(−)S(+)) was a prognostic factor for cholesteatoma with increased stage, recurrence rate, and BES values. Combining the results of p27 with Ki-67, we can say that every cholesteatoma patient has a basal cellular proliferation activity rate in the meatal skin. Cellular proliferation rate in the cholesteatoma is important but not solely enough for predicting the prognosis of cholesteatoma patients. Patients having lower basal levels of cellular proliferation rate and higher cellular activity in the cholesteatoma tissue are prone to worse prognosis with increased stage, recurrence rates, and bone erosion degrees.\nThe limitation of this study was the limited patient number. Future studies with a higher number of patients should also investigate the role of Ki-67 and p27 on cholesteatoma prognosis by comparing each patient’s cholesteatoma results with the healthy skin tissue results. However, we think that our results are meaningful and can lead future studies with our long interval (minimum 60 months) follow-up period for the detection of cholesteatoma recurrence.\nIn conclusion, the differential expression of Ki-67 and non-expression of p27 in the cholesteatoma tissue compared to the healthy skin ­tissue were associated with worse prognosis, including increased ­otitis media, with increased cholesteatoma stage, recurrence rate, and bone erosion degree for the cholesteatoma patients. With upcoming studies, Ki-67- and p27-targeted topical treatment options may be enhanced to prevent the growing and expansion of the cholesteatoma. 
In future studies evaluating the effect of cellular proliferation on cholesteatoma progression, it would be more appropriate to analyze each patient's basal cellular proliferation rate and to match the cholesteatoma tissue results with the healthy skin tissue results of the same patient when predicting prognosis." ]
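As a rough illustration of the other tests named in the Statistical Analysis section (chi-square for positive/negative staining counts, Mann–Whitney U for BES), the snippet below runs both with scipy. The input numbers are hypothetical placeholders, not the study data, so the printed P values will not match the reported ones.

# Hedged sketch of the group comparisons; counts and BES lists are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# 2x2 table of staining status (rows: recurrent / non-recurrent; cols: positive / negative)
table = np.array([[7, 1],
                  [32, 2]])          # hypothetical counts
chi2, p_chi2, dof, expected = chi2_contingency(table)

# BES values for positively vs. negatively stained cholesteatoma tissue (hypothetical)
bes_positive = [2, 2, 2, 3, 1]
bes_negative = [2, 1, 4, 2]
u_stat, p_mwu = mannwhitneyu(bes_positive, bes_negative, alternative="two-sided")

print(f"chi-square p = {p_chi2:.3f}, Mann-Whitney U p = {p_mwu:.3f}")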
[ "intro", "materials and methods", null, null, null, null, null, null, null, "results", null, null, "discussion" ]
[ "Cholesteatoma", "Ki-67", "p27", "cellular proliferation", "prognosis", "stage", "recurrence", "bone erosion score" ]
Introduction: Cholesteatoma is a progressive hyperplastic keratinized squamous epithelium of the temporal bone characterized by osteoclastic activity and bone resorption. It is composed of 3 compartments: a cystic part, a matrix, and a perimatrix. The central cystic part contains dead keratinocytes. It is surrounded by the matrix, and the most external compartment is the perimatrix. The active part of the cholesteatoma is the matrix, which harbors continuous proliferating undifferentiated keratinocytes. The perimatrix consists of fibroblasts and granulation tissue. As the central cystic part expands with continuous desquamation of the dead keratinocytes from the matrix, osteoclastic cascade reactions result in bone erosion. The bone erosion capability of the cholesteatoma can trigger extensive expansion and complications such as hearing loss, vestibular involvement, facial nerve paralysis, and even brain abscess and death.1,2 To date, the only definitive treatment of the cholesteatoma is surgery. Despite developing technology and the widespread use of endoscopes, recurrence and recidivism after surgery still exist as a major problem and various studies report rates of 0-70%.3 The underlying pathogenesis and molecular mechanisms of cholesteatoma have not been fully understood. Inflammatory cytokines including interleukin (IL)-1, IL-6, TNF-α, matrix metalloproteinases, imbalance between keratinocyte proliferation and apoptosis, Rho kinase pathway, genetic susceptibility, angiogenetic growth factors, platelet-derived growth factor, and chronic proceeding infections have all been reported to have a role in the cholesteatoma development.1,2,4,5 Nuclear antigen Ki-67 is a protein that is expressed in proliferating cells; thus, it is commonly used as a mitotic index for tumor grading.3 Increased Ki-67 expression in the basal and spinous layers of the cholesteatoma was also reported as showing the high proliferation property of keratinocytes in the cholesteatoma.6 Few ­studies3,7-15 have also investigated the Ki-67 labeling index for predicting the prognostic features of cholesteatoma. However, there is such a wide range of Kİ-67 labeling indexes among healthy skin tissues of cholesteatoma patients, ranging from 0.9% to 24%.10,16 Moreover, a stronger expression of Ki-67 was also reported in the healthy skin tissue compared to the cholesteatoma tissue.15 Nonetheless, all these studies3,7-15 compared the Ki-67 labeling index of the cholesteatoma tissues obtained from different patients without matching the results with the healthy skin tissue (control group) of the same patient during comparison to predict the role of Ki-67 on the cholesteatoma prognosis. Cyclin-dependent kinases are also involved in cell cycle and activate cellular proliferation, similar to Ki-67. P27 is the novel cyclin-dependent kinase inhibitor that acts as a tumor suppressor gene, arrests the cell cycle in phase G1, and stops cellular proliferation.17,18 A limited number of studies have focused on the effect of p27 on the cholesteatoma pathogenesis with conflicting results.16-18 Only one study18 investigated the role of p27 on recurrence of the cholesteatoma without matching the results with the skin tissue of the same patient. Additionally, to the best of the authors’ knowledge, the role of p27 on extensiveness and bone erosion degree of the cholesteatoma has not been evaluated, yet. 
In this study, we investigated the effect of Ki-67 and p27 on the extensiveness (stage), recurrence rate, and bone erosion score of adults’ acquired cholesteatoma by matching the staining status of the markers in the cholesteatoma and healthy skin tissues of the same patients, immunohistochemically. Materials and Methods: Local ethical committee approval was acquired for this prospective study. The power analysis of the study was performed according to the previously published articles investigating the role of Ki-67 on the cholesteatoma pathogenesis. The result of the power analysis was 12 individuals in each group. Forty-two adults (>18 years old) patients with acquired cholesteatoma were enrolled. The diagnosis of the cholesteatoma was made by intraoperative and histopathological findings. All patients were operated under general anesthesia with a canal wall down mastoidectomy technique. Patients having recurrent disease at first admission and patients who had been operated with a canal wall-up technique were excluded. The cholesteatoma tissue and healthy meatal skin tissue (3 × 3 mm.) away from the cholesteatoma were obtained from all patients during the surgery. Healthy skin tissue was obtained as the control group for each patient. Staging According to the intraoperative and computerized tomography (CT) findings, all patients were staged according to the 2017 EAONO/JOS cholesteatoma staging system.19 According to the intraoperative and computerized tomography (CT) findings, all patients were staged according to the 2017 EAONO/JOS cholesteatoma staging system.19 Bone Erosion Score (BES) The erosion status of the middle ear ossicles, scutum, facial nerve canal, tegmen tympani, and otic capsule were noted. Each patient was scored with a bone erosion score ranging from 0 to 12.7 The erosion status of the middle ear ossicles, scutum, facial nerve canal, tegmen tympani, and otic capsule were noted. Each patient was scored with a bone erosion score ranging from 0 to 12.7 Recurrence Rate All patients were followed-up for a minimum of 5 years. Patients were examined in the third month, sixth month, and first year visits. Afterward, all patients were routinely examined at each 6-month interval for mastoid cavity control. Patients having perforation of the grafts, unexplained decrement in hearing, persistent otorrhea despite appropriate antibiotic usage, having and diffusion restriction on non-echo-planar diffusion-weighted magnetic resonance imaging were re-operated. According to the intraoperative findings and postsurgical histopathological evaluation results, the recurrence status of the patients was noted. All patients were followed-up for a minimum of 5 years. Patients were examined in the third month, sixth month, and first year visits. Afterward, all patients were routinely examined at each 6-month interval for mastoid cavity control. Patients having perforation of the grafts, unexplained decrement in hearing, persistent otorrhea despite appropriate antibiotic usage, having and diffusion restriction on non-echo-planar diffusion-weighted magnetic resonance imaging were re-operated. According to the intraoperative findings and postsurgical histopathological evaluation results, the recurrence status of the patients was noted. Immunohistochemistry Paraffin-embedded cholesteatoma and meatal skin tissues were fixed with 10% formalin. Slices of 5 µm thickness were prepared for immunohistochemical analysis. 
Deparaffinization of the slices was performed by xylene and alcohol solution washouts after overnight incubation at 37°C. Afterward, the slices were immunostained using a Ki-67 monoclonal antibody and a p27 antibody (Abcam® Ki-67 antibody, ab15580; Abcam® p27 antibody, ab193379). The pathologist examined the cholesteatoma and skin tissues under the light microscope (Olympus BX53®, Olympus, Tokyo, Japan). A count of >25 positively stained nuclei among 500 total epithelial cells (>5%) was regarded as positive staining15,17,18,20 (Figures 1 and 2). Comparison of the Staining Properties of the Tissues Firstly, the number of patients having positive and negative staining with Ki-67 and p27 in the cholesteatoma and skin tissues was compared among all patients. Secondly, only the cholesteatoma tissues of the patients were compared regarding the labeling status according to the subgroups of the patients (stage categories, recurrence status, and bone erosion scores). Thirdly, in contrast to previous studies, the differential staining properties of the patients were also considered by matching the cholesteatoma results with the results of the healthy skin tissues of the same patients. Differential Staining Properties of the Patients Each patient was classified into one of 4 categories according to the staining by Ki-67 and p27 antibodies: a. Cholesteatoma-positive and skin-negative (C(+)S(−)): Patient had positive staining in the cholesteatoma and negative staining in the skin tissue. b. Cholesteatoma-negative and skin-positive (C(−)S(+)): Patient had negative staining in the cholesteatoma and positive staining in the skin tissue. c. Cholesteatoma-positive and skin-positive (C(+)S(+)): Patient had positive staining in both the cholesteatoma and the skin tissues. d. Cholesteatoma-negative and skin-negative (C(−)S(−)): Patient had negative staining in both the cholesteatoma and the skin tissues.
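For clarity, the positivity cutoff and the four differential staining categories described above can be expressed as a small computational rule. The following Python sketch is purely illustrative: the function names and the example counts are not from the study, and the actual scoring was performed manually by the pathologist under a light microscope.

```python
# Illustrative sketch only: the study scored slides manually; these helper
# names and the example nucleus counts are hypothetical.

def is_positive(positive_nuclei, total_cells=500):
    """Positive staining: >25 positively stained nuclei among 500 cells (>5%)."""
    return (positive_nuclei / total_cells) > 0.05

def differential_category(cholesteatoma_positive, skin_positive):
    """Map a patient's paired staining results to the four categories above."""
    labels = {
        (True, False): "C(+)S(-)",
        (False, True): "C(-)S(+)",
        (True, True): "C(+)S(+)",
        (False, False): "C(-)S(-)",
    }
    return labels[(cholesteatoma_positive, skin_positive)]

# Example: 40 positive nuclei in the cholesteatoma slide (8%) and 12 in the
# matched meatal skin slide (2.4%) -> differential category C(+)S(-)
chol = is_positive(40)
skin = is_positive(12)
print(differential_category(chol, skin))
```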
Statistical Analysis Statistical analysis was performed using SPSS Version 24.0 (IBM SPSS, New York, USA, 2016). Data were shown as mean ± standard deviation for continuous variables, and the number of cases was used for categorical variables. Data were controlled for normal distribution using the Shapiro–Wilk test. The chi-square test was employed for the comparison of the number of positive- and negative-staining tissues between patient groups. The Z test was used to compare the proportions of the number of patients according to the differential staining properties. In addition, the Mann–Whitney U-test was used to compare the mean BES of patients with positive and negative staining in the cholesteatoma tissues. A one-way analysis of variance test was used to compare the mean BES of the patients according to the differential staining status. A P value of < .05 was regarded as statistically significant.
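As a rough illustration of the group comparisons listed above, the following Python/SciPy sketch mirrors the chi-square, two-proportion z, Mann–Whitney U, and one-way ANOVA tests. The study itself used SPSS 24.0; all counts and BES values below are invented placeholders, except the 4/8 versus 0/34 C(+)S(−) proportions, which come from the Results.

```python
# Rough Python equivalent of the comparisons described above (the study used
# SPSS 24.0). The contingency table and BES lists are invented placeholders.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu, f_oneway
from statsmodels.stats.proportion import proportions_ztest

# Chi-square: positive/negative staining counts in two patient groups
table = np.array([[7, 1],     # e.g. recurrent: positive, negative
                  [32, 2]])   # non-recurrent: positive, negative
chi2, p_chi2, _, _ = chi2_contingency(table)

# Two-proportion z test: C(+)S(-) patients, recurrent (4/8) vs non-recurrent (0/34)
z, p_z = proportions_ztest(count=[4, 0], nobs=[8, 34])

# Mann-Whitney U: BES in positive- vs negative-staining cholesteatoma tissues
bes_positive = [1, 2, 2, 1, 3]
bes_negative = [2, 3, 4, 2, 3]
u, p_u = mannwhitneyu(bes_positive, bes_negative, alternative="two-sided")

# One-way ANOVA: BES across the four differential staining categories
f, p_f = f_oneway([5, 4, 6], [2, 2, 2], [1, 2, 2], [2, 2, 2])

print(p_chi2, p_z, p_u, p_f)
```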
Results: There were 42 patients in the study group, of whom 27 (64.3%) were male and 15 (35.7%) were female. The mean age of the patients was 40.73 ± 15.28 years (min = 18, max = 65). The mean follow-up period was 66.69 ± 8.11 months (min = 60, max = 96). Eight (19.04%) of the patients had cholesteatoma recurrence during the follow-up period. Ten (23.9%) patients had stage 1, 29 (69%) patients had stage 2, and 3 (7.1%) patients had stage 3 cholesteatoma, according to the intraoperative and CT findings. One patient had lateral semicircular canal fistula and 2 patients had facial nerve paralysis as intratemporal complications (stage 3).
None of the patients had intracranial complications (stage 4 cholesteatoma). Ki-67 Thirty-nine (92.8%) of the patients had Ki-67-positive staining in the cholesteatoma tissue, and 37 (88.1%) of the patients had Ki-67-positive staining in the meatal skin tissue. There was no statistically significant difference between cholesteatoma and skin tissues regarding the number of patients having Ki-67-positive staining (P = .323) (Figure 3). There was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients with positive Ki-67 staining in the cholesteatoma tissue (P = .383). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 1). The proportion of C(+)S(−) patients was significantly higher in the recurrent group (4/8) compared to the non-recurrent group (0/34) (z value: 4.3347, P < .001). There was no statistically significant difference among the stage groups regarding the number of patients having Ki-67-positive staining in the cholesteatoma tissue (P = .19). However, when the differential staining properties of the patients’ tissues were additionally analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 2). The proportion of C(+)S(−) patients was significantly higher in the stage 3 group (3/3) compared to the stage 1 group (0/10) (z value: 3.6056, P = .0003) and stage 2 group (1/29) (z value: 4.8138, P < .01). The mean BES of the patients having positive and negative Ki-67 staining in the cholesteatoma tissue were 2 ± 0 and 2.2 ± 1.15, respectively. There was no statistically significant mean BES difference between positive and negative Ki-67 staining in the patients’ cholesteatoma tissue (P > .05). When the differential staining properties of the patients’ tissues were further evaluated, the mean BES of C(+)S(−), C(−)S(+), C(+)S(+), and C(−)S(−) patients were 5 ± 0.81, 2 ± 0, 1.88 ± 0.63, and 2 ± 0, respectively. C(+)S(−) patients had higher BES values compared to the others (P < .001) (Figure 4).
P27 Eleven (26.2%) of the patients had p27-positive staining in the cholesteatoma tissue, and 31 (73.8%) of the patients had p27-positive staining in the meatal skin tissue. The number of patients having p27-positive staining in the meatal skin tissue was significantly higher compared to the number of patients having p27-positive staining in the cholesteatoma tissue (P = .041) (Figure 3). There was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients with positive p27 staining in the cholesteatoma tissue (P = .657). However, when the differential staining properties of the patients’ tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 3). The proportion of C(−)S(+) patients was significantly higher in the recurrent group (7/8) compared to the non-recurrent group (13/34) (z value: 2.51, P = .012). There was no statistically significant difference among the stage groups regarding the number of patients having p27-positive staining in the cholesteatoma tissue (P = .108). However, when the differential staining properties of the tissues were analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 4), and the proportion of C(−)S(+) patients was significantly higher in the stage 2 (17/29) (z value: −3.2236, P = .0012) and stage 3 patient groups (3/3) (z value: −3.6056, P = .0003) compared to the stage 1 patient group (0/10). The mean BES values of the patients having positive and negative p27 staining in the cholesteatoma tissues were 1.27 ± 0.46 and 2.54 ± 1.05, respectively. Patients with negative staining had significantly higher BES values than patients with positive staining (P < .001). According to the differential staining properties of the patients’ tissues, the mean BES values of C(−)S(+), C(−)S(−), and C(+)S(+) patients for p27 staining were 2.75 ± 1.25, 2.18 ± 0.4, and 1.27 ± 0.46, respectively. C(−)S(+) patients had significantly higher BES values compared to other groups, and C(−)S(−) patients had significantly higher BES values than C(+)S(+) patients (P = .001) (Figure 5).
Discussion: In this study, we found that the number of patients with negative p27 staining in the cholesteatoma tissue was higher than in the skin tissue; however, there was no significant difference with regard to Ki-67 staining. The recurrence rates, stage, and BES values of the patients were not related to the Ki-67 staining status in the cholesteatoma tissues. However, differential expression of Ki-67 in the cholesteatoma tissue compared to the healthy skin tissue was related to a worse prognosis with increased recurrence rate, stage, and BES values. It was also observed that negative p27 staining in the cholesteatoma tissue was only related to elevated BES values; the stage and recurrence rates of the cholesteatoma patients were not related to the p27 staining status in the cholesteatoma tissue. On the other hand, differential non-expression of p27 in the cholesteatoma tissue compared to the skin tissue was related to a worse prognosis for cholesteatoma patients, such as increased recurrence rate, stage, and BES values. Proliferating undifferentiated keratinocytes in the matrix and activated fibroblasts in the perimatrix are the main cells in the cholesteatoma tissue that enhance progression. Uncontrolled proliferation of the keratinocytes initiates a vicious cycle: proliferating keratinocytes release cytokines that activate the perimatrix fibroblasts. Activated fibroblasts secrete epidermal growth factor and keratinocyte growth factor, which in turn activate the matrix keratinocytes, perpetuating the cycle. Osteoclastic molecules secreted from the proliferating keratinocytes and activated fibroblasts trigger bone resorption and, in turn, the development of complications.1,21 Thus, the cellular proliferation rate in the cholesteatoma tissue is an important factor for progression of the cholesteatoma. Ki-67 nuclear antigen was demonstrated to be expressed in proliferating cells in the late G1, S, G2, and M phases of the cell cycle. This protein has been widely used as a proliferation marker for tumors and proliferative diseases including cholesteatoma.8 Most studies8-11,13,16,20,22 reported higher expression levels of Ki-67 protein in the cholesteatoma tissue compared to the healthy skin tissue. However, Kuczkowski et al.6 found an insignificant increase in Ki-67 expression in the cholesteatoma, and Kim et al.15 reported a strongly positive Ki-67 expression in the skin tissue compared to focal staining in the cholesteatoma tissue. Our results were also in line with those of Kuczkowski et al.,6 with a slightly increased expression of Ki-67 in the cholesteatoma tissue compared to the skin tissue. When the effect of Ki-67 on the prognosis of cholesteatoma was evaluated, conflicting results were reported. A higher Ki-67 labeling index in the cholesteatoma tissue was observed to be significantly related to the recurrence rate of cholesteatomas.7,10 On the contrary, some studies3,14 could not find such a relationship. The bone erosion capacity of the cholesteatoma is an important prognostic factor that enables extension and complication development. Recently, Araz Server et al.3 found a correlation between the Ki-67 labeling index of cholesteatoma tissue and malleus erosion. Additionally, Hamed et al.8 reported a direct correlation between Ki-67 expression in the cholesteatoma and BES. On the other hand, several authors7,9,11,15 could not find a correlation between the Ki-67 expression levels in the cholesteatoma tissue and BES or extent of the disease.
None of the aforementioned studies considered the Ki-67 skin-labeling status of the patients during prognostic analysis. According to our study results, differential Ki-67 staining in the cholesteatoma tissue compared to the skin tissue (C(+)S(−)) affected the prognosis (recurrence rate, extensiveness/stage, and bone erosion capacity) rather than the isolated staining status in the cholesteatoma tissue. The controversial outcomes in the previous literature may be attributed to the wide range of basal Ki-67 expression in the skin tissue. Unmatched cholesteatoma-skin expression status might have caused conflicting results regarding the effect of Ki-67 on cholesteatoma prognosis. The cyclin-dependent kinase inhibitor p27 blocks cyclin D-, E-, A-, and B-dependent kinases. Thus, a decrease in p27 levels is related to increased cyclin-dependent kinase activity and enables ongoing cellular proliferation.16 Lower expression levels of p27 in the cholesteatoma tissue compared to the healthy skin tissue were reported as evidence of the higher cellular activity of the cholesteatoma.17,18 On the other hand, Chae et al.16 demonstrated an increased expression level of p27 in the cholesteatoma tissue. Regarding the prognostic value of p27 expression in the cholesteatoma tissue, Kuczkowski et al.18 reported a lower expression of p27 in recurrent cases of cholesteatoma than in primary acquired ones, without matching the cholesteatoma-skin values. Additionally, the effect of p27-dependent cellular proliferation on other prognostic factors of cholesteatoma, such as extensiveness (stage) and bone erosion levels, has not yet been evaluated. According to our study results, the number of patients having negative p27 staining was greater in the cholesteatoma tissue compared to the skin tissue, demonstrating the increased cellular proliferation of the cholesteatoma. Moreover, differential non-expression of p27 in the cholesteatoma tissue compared to skin tissue (C(−)S(+)) was a prognostic factor for cholesteatoma with increased stage, recurrence rate, and BES values. Combining the p27 results with those for Ki-67, we can say that every cholesteatoma patient has a basal cellular proliferation rate in the meatal skin. The cellular proliferation rate in the cholesteatoma is important but not sufficient on its own for predicting the prognosis of cholesteatoma patients. Patients with lower basal cellular proliferation rates and higher cellular activity in the cholesteatoma tissue are prone to a worse prognosis, with increased stage, recurrence rates, and bone erosion degrees. The main limitation of this study was the small number of patients. Future studies with a higher number of patients should also investigate the role of Ki-67 and p27 in cholesteatoma prognosis by comparing each patient’s cholesteatoma results with the healthy skin tissue results. However, we think that our results are meaningful and can guide future studies, given the long follow-up period (minimum 60 months) used for the detection of cholesteatoma recurrence. In conclusion, the differential expression of Ki-67 and non-expression of p27 in the cholesteatoma tissue compared to the healthy skin tissue were associated with a worse prognosis, with increased cholesteatoma stage, recurrence rate, and bone erosion degree for the cholesteatoma patients. With upcoming studies, Ki-67- and p27-targeted topical treatment options may be developed to prevent the growth and expansion of the cholesteatoma.
In future studies evaluating the effect of cellular proliferation on cholesteatoma progression, it would be more appropriate to analyze each patient's basal cellular proliferation rate and to match the results of the cholesteatoma tissue with those of the healthy skin tissue when predicting the prognosis.
Background: The aim of this study was to compare the differential Ki-67 and p27 staining properties of acquired cholesteatoma in adult patients for prognostic analysis. Methods: Forty-two adult patients with acquired cholesteatoma were enrolled. The cholesteatoma and matched meatal skin tissues of the patients were immunostained with Ki-67 and p27 antibodies. Canal wall down mastoidectomy was performed in all patients. The differential staining properties--positive staining in the cholesteatoma and negative staining in the skin tissue (C+S-), negative staining in the cholesteatoma and positive staining in the skin tissue(C-S+)--were compared for bone erosion scores (BES), stage, and recurrence rates. Results: Isolated findings in the cholesteatoma tissues, without matching with the skin tissues, demonstrated that stage and recurrence rates were not related to findings in the cholesteatoma tissues (P > .05). However, C+S- for Ki-67 and C-S+ for p27 are risk factors for worse prognosis including advanced stage (P < .001 for Ki-67 and P = .008 for p27), BES values (P < .001 for Ki-67 and P = .001 for p27), and recurrence rates (P < .001 for Ki-67 and P = .037 for p27). Conclusions: This is the first paper assessing the cholesteatoma prognosis according to the differential Ki-67 and p27 staining properties of cholesteatoma and healthy skin tissues. Cellular proliferation rate in the cholesteatoma is important but insufficient by itself for predicting the prognosis of cholesteatoma patients. Patients having lower basal levels of cellular proliferation rate and higher cellular activity in the cholesteatoma tissue are prone to worse prognosis with increased stage, recurrence rates, and degree of bone erosion.
null
null
7,096
311
[ 27, 39, 107, 134, 99, 128, 164, 448, 471 ]
13
[ "patients", "cholesteatoma", "staining", "tissue", "positive", "skin", "ki", "ki 67", "67", "p27" ]
[ "keratinocytes cholesteatoma", "stage cholesteatoma according", "cholesteatoma tissue kuczkowski", "cholesteatoma matrix harbors", "perimatrix active cholesteatoma" ]
null
null
null
[CONTENT] Cholesteatoma | Ki-67 | p27 | cellular proliferation | prognosis | stage | recurrence | bone erosion score [SUMMARY]
null
[CONTENT] Cholesteatoma | Ki-67 | p27 | cellular proliferation | prognosis | stage | recurrence | bone erosion score [SUMMARY]
null
[CONTENT] Cholesteatoma | Ki-67 | p27 | cellular proliferation | prognosis | stage | recurrence | bone erosion score [SUMMARY]
null
[CONTENT] Adult | Cholesteatoma, Middle Ear | Cyclin-Dependent Kinase Inhibitor p27 | Humans | Ki-67 Antigen | Mastoidectomy | Prognosis | Recurrence [SUMMARY]
null
[CONTENT] Adult | Cholesteatoma, Middle Ear | Cyclin-Dependent Kinase Inhibitor p27 | Humans | Ki-67 Antigen | Mastoidectomy | Prognosis | Recurrence [SUMMARY]
null
[CONTENT] Adult | Cholesteatoma, Middle Ear | Cyclin-Dependent Kinase Inhibitor p27 | Humans | Ki-67 Antigen | Mastoidectomy | Prognosis | Recurrence [SUMMARY]
null
[CONTENT] keratinocytes cholesteatoma | stage cholesteatoma according | cholesteatoma tissue kuczkowski | cholesteatoma matrix harbors | perimatrix active cholesteatoma [SUMMARY]
null
[CONTENT] keratinocytes cholesteatoma | stage cholesteatoma according | cholesteatoma tissue kuczkowski | cholesteatoma matrix harbors | perimatrix active cholesteatoma [SUMMARY]
null
[CONTENT] keratinocytes cholesteatoma | stage cholesteatoma according | cholesteatoma tissue kuczkowski | cholesteatoma matrix harbors | perimatrix active cholesteatoma [SUMMARY]
null
[CONTENT] patients | cholesteatoma | staining | tissue | positive | skin | ki | ki 67 | 67 | p27 [SUMMARY]
null
[CONTENT] patients | cholesteatoma | staining | tissue | positive | skin | ki | ki 67 | 67 | p27 [SUMMARY]
null
[CONTENT] patients | cholesteatoma | staining | tissue | positive | skin | ki | ki 67 | 67 | p27 [SUMMARY]
null
[CONTENT] cholesteatoma | 67 | ki 67 | ki | matrix | bone | keratinocytes | proliferation | role | cystic [SUMMARY]
null
[CONTENT] patients | staining | difference | statistically significant difference | positive | tissue | statistically | positive staining | statistically significant | higher [SUMMARY]
null
[CONTENT] patients | cholesteatoma | staining | positive | skin | tissue | ki 67 | ki | 67 | p27 [SUMMARY]
null
[CONTENT] [SUMMARY]
null
[CONTENT] ||| .008 | BES | .001 | .037 [SUMMARY]
null
[CONTENT] ||| Forty-two ||| ||| ||| BES ||| ||| .008 | BES | .001 | .037 ||| first ||| ||| [SUMMARY]
null
Associations of Daily Walking Time With Pneumonia Mortality Among Elderly Individuals With or Without a Medical History of Myocardial Infarction or Stroke: Findings From the Japan Collaborative Cohort Study.
30249944
The association between daily walking and pneumonia mortality, stratified by the presence of disease conditions, such as myocardial infarction (MI) or stroke, was investigated.
BACKGROUND
The study participants were 22,280 Japanese individuals (9,067 men and 13,213 women) aged 65-79 years. An inverse propensity weighted competing risk model was used to calculate the hazard ratio (HR) and 95% confidence interval (CI) for pneumonia mortality.
METHODS
After a median of 11.9 years of follow-up, 1,203 participants died of pneumonia. Participants who did not have a history of MI or stroke and who walked for 1 hour/day or more were less likely to die from pneumonia (HR 0.90; 95% CI, 0.82-0.98) than those who walked for 0.5 hours/day. A similar inverse association of pneumonia and walking (0.5 hours/day) was observed among participants with a history of MI (HR 0.66; 95% CI, 0.48-0.90). Among the participants with a history of stroke, those who walked for 0.6-0.9 hours/day were less likely to die because of pneumonia (HR 0.65; 95% CI, 0.43-0.98).
RESULTS
Regular walking for ≥1 hour/day may reduce the risk of pneumonia mortality in elderly individuals with or without cardiovascular disease history.
CONCLUSIONS
[ "Aged", "Cause of Death", "Cohort Studies", "Female", "Humans", "Japan", "Male", "Myocardial Infarction", "Pneumonia", "Proportional Hazards Models", "Prospective Studies", "Stroke", "Walking" ]
6522391
null
null
METHODS
Study population and data collection The Japan Collaborative Cohort Study for Evaluation of Cancer Risk (JACC Study), which was established in 1988–1990, has been described in detail elsewhere.12,13 Briefly, 110,585 inhabitants (46,395 men and 64,190 women) aged 40–79 years from 45 areas in Japan were enrolled into the study. In the present study, the overall number of baseline participants (aged 65–79 years) was 29,956 (12,196 men and 17,760 women). Data were collected through a self-administered questionnaire, and a response rate of 83% was observed. Information on the average number of daily walking hours and other lifestyle factors was obtained from the baseline questionnaire. Participants were asked about the average daily time spent walking: “How long on average do you spend walking indoors or outside on a daily basis?”, and the possible responses were: “<0.5 hours/day”, “0.5 hours/day,” “0.6–0.9 hours/day,” and “≥1.0 hours/day.” Of the original cohort members, 5,492 participants in six areas were excluded because the questionnaire that was used did not include data on the average number of daily walking hours. Furthermore, 2,184 participants from other areas with missing data on the average daily walking hours were excluded. Consequently, 22,280 participants (9,067 men and 13,213 women) were included in the present study. Follow-up The date and cause of death were confirmed via death certificates and coded according to the International Classification of Diseases, 10th Revision (ICD-10). The primary outcome of the present study was death due to pneumonia or influenza (J9–18, J69). Participants who moved away from the study area during the study period were treated as censored cases.
Statistical analysis We used four statistical models to calculate multivariate-adjusted hazard ratios (HRs) and confidence intervals (CIs) for pneumonia-related mortality.14,15 First, we conducted multivariate adjustment using an inverse probability weighting (IPW) method based on generalized propensity scores because of the relatively small number of pneumonia deaths.16 This approach is a statistical alternative to propensity score matching that balances the confounders in non-randomized studies. To develop the generalized propensity score, we conducted a multinomial logistic regression analysis for the four walking categories using all the demographic information,17 including age (as a continuous variable), sex (male or female), smoking status (never, former, current smoking, or unknown), alcohol drinking status (never, former, current alcohol drinker, or unknown), body mass index (BMI; <18.5, 18.5–24.9, ≥25.0 kg/m2, or unknown), educational level (school age up to age 15 years, 15–18 years, ≥19 years, or unknown), marital status (single, married, divorced/widowed, or unknown), depressive tendency (defined later; presence, absence, or unknown), sleep duration (<6.5 hours/day, 6.5–8.4 hours/day, ≥8.5 hours/day, or unknown), and a history of cancer (yes or no/unknown), diabetes mellitus (yes or no/unknown), kidney diseases (yes or no/unknown), and asthma (yes or no/unknown). Four psychological or behavioral items from the baseline questionnaire were used to quantify depressive tendency.18 These items are the following: “Do you think your life is meaningful?”, “Do you think you make decisions quickly?”, “Are you enjoying your life?”, and “Do you feel others rely very much on you?”. Participants with two or more psychological or behavioral items were considered to have a depressive tendency. Second, to assess for covariate balance, we showed the propensity score overlap through kernel density plots. Third, we used an IPW Cox proportional hazards model.19 Finally, we utilized a competing risk model,20 in which death caused by factors other than pneumonia was considered as a competing risk. Linear trends in mortality risks were assessed based on the four categories of walking hours per day, which were used as numeric variables. To avoid an inverse causal relationship, we considered the participants who walked for 0.5 hours/day as ambulatory,21 and they were included in the reference group. All statistical analyses were conducted separately among participants with or without a history of MI or stroke. An alpha level of 0.05 was considered statistically significant. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). The graphs were drawn using JMP 12.0.2 (SAS Institute Inc.).
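To make the weighting step above concrete, the sketch below shows one way to derive generalized propensity scores for the walking categories and fit an IPW-weighted Cox model in Python. This is only an illustration under assumptions: the study used SAS 9.4 and JMP, the synthetic data and column names (walking, time, pneumonia_death) are hypothetical, and the competing-risk step of the original analysis is not reproduced here.

```python
# Illustrative only: synthetic data stand in for the JACC variables; the study
# itself used SAS 9.4 (IPW Cox and competing risk models, not shown here).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(65, 80, n),                        # baseline covariates
    "sex": rng.choice(["male", "female"], n),
    "walking": rng.choice(["0.5", "0.6-0.9", ">=1.0"], n),  # exposure categories
    "time": rng.exponential(10.0, n),                       # follow-up, years
    "pneumonia_death": rng.integers(0, 2, n),               # event indicator
})

# Generalized propensity scores: multinomial model of the walking categories
X = pd.get_dummies(df[["age", "sex"]], drop_first=True).astype(float)
gps_model = LogisticRegression(max_iter=1000).fit(X, df["walking"])
probs = gps_model.predict_proba(X)
col_of = {c: i for i, c in enumerate(gps_model.classes_)}
p_observed = probs[np.arange(n), df["walking"].map(col_of).to_numpy()]

# Stabilized inverse probability weights: marginal / conditional probability
marginal = df["walking"].value_counts(normalize=True)
df["ipw"] = df["walking"].map(marginal).to_numpy() / p_observed

# IPW-weighted Cox proportional hazards model for pneumonia mortality,
# with the 0.5 hours/day walkers as the (alphabetically first) reference
exposure = pd.get_dummies(df["walking"], drop_first=True).astype(float)
cox_df = pd.concat([exposure, df[["time", "pneumonia_death", "ipw"]]], axis=1)
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="time", event_col="pneumonia_death",
        weights_col="ipw", robust=True)
cph.print_summary()
```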
RESULTS
Of the 22,280 participants, 1,210, 604, and 80 participants had a history of MI, stroke, and both MI and stroke, respectively. The baseline mean age of the participants was 70.5 (standard deviation [SD], 4.1) years (men, 70.6 [SD, 4.1] years; women, 70.4 [SD, 4.1] years). The baseline characteristics of the participants with or without a history of MI or stroke according to daily walking time are shown in Table 1. About 50.4% of participants without a history of MI and stroke, 41.8% of participants with a history of MI, and 33.9% of the participants with a history of stroke walked >1 hour daily. Among the participants with or without a history of MI or stroke, those who walked >1 hour/day were younger and more likely to be women and current drinkers; these participants were also more likely to have a BMI of 18.5–25.0 kg/m2 and a sleep duration of 6.5–8.4 hours/day. Moreover, they were less likely to have studied in college or obtained a diploma, to have a depressive tendency, or to have a history of diabetes mellitus or cancer compared with those who walked for 0.5 hours/day. The differences of all the potential confounders among the four categories of walking were reduced after weighting (eFigure 1). MI, myocardial infarction. Values are expressed as mean ± standard deviation or percentage. The median follow-up period was 11.9 years. Over 306,578 person-years of follow-up (109,078 person-years for men and 197,500 person-years for women), 1,203 participants (731 men and 472 women) died from pneumonia, 1,367 left the study area, and 9,753 died because of factors other than pneumonia. The HRs of pneumonia mortality according to walking time are shown in Table 2. Participants without a history of MI or stroke who walked for 1 hour/day or more were less likely to die because of pneumonia (HR 0.90; 95% CI, 0.82–0.98; P for trend < 0.001) compared with those who walked for 0.5 hours/day. Similarly, decreased HRs were observed among participants with a history of MI (HR 0.66; 95% CI, 0.48–0.90; P for trend = 0.02). Among the participants with a history of stroke, those who walked for 0.6–0.9 hours/day were less likely to die because of pneumonia compared with those who walked for 0.5 hours/day (HR 0.65; 95% CI, 0.43–0.98). In contrast, walking ≥1 hour/day was not associated with pneumonia mortality in this group (HR 1.15; 95% CI, 0.81–1.63). CI, confidence interval; CVD, cardiovascular disease; HR, hazard ratio; MI, myocardial infarction; PY, person-year. P for trend was calculated across the categories of walking time. The generalized propensity scores were calculated by a multinomial logistic regression analysis for the four walking categories using the demographic information, including age, sex, smoking status, alcohol drinking status, body mass index, educational level, marital status, depressive tendency, sleep duration, and a history of cancer, diabetes mellitus, kidney diseases, and asthma. *P < 0.05. aCox proportional hazard model with inverse propensity weighting. bCompeting risk model with inverse propensity weighting.
Conclusion
This large-scale cohort study demonstrated that walking at least 1 hour/day reduced the risk of pneumonia mortality among elderly individuals with or without a medical history of MI. Our findings suggest that regular walking may be beneficial in reducing the risk of pneumonia mortality in the elderly population.
[ "Study population and data collection", "Follow-up", "Statistical analysis", "Conclusion" ]
[ "The Japan Collaborative Cohort Study for Evaluation of Cancer Risk (JACC Study), which was established in 1988–1990, has been described in detail elsewhere.12,13 Briefly, 110,585 inhabitants (46,395 men and 64,190 women) aged 40–79 years from 45 areas in Japan were enrolled into the study. In the present study, the overall number of baseline participants (aged 65–79 years) was 29,956 (12,196 men and 17,760 women). Data were collected through a self-administered questionnaire, and a response rate of 83% was observed. Information on the average number of daily walking hours and other lifestyle factors was obtained from the baseline questionnaire. Participants were asked about the average daily time spent walking: “How long on average do you spend walking indoors or outside on a daily basis?”, and the possible responses were: “<0.5 hours/day”, “0.5 hours/day,” “0.6–0.9 hours/day,” and “≥1.0 hours/day.”\nOf the original cohort members, 5,492 participants in six areas were excluded because the questionnaire that was used did not include data on the average number of daily walking hours. Furthermore, 2,184 participants from other areas with missing data on the average daily walking hours were excluded. Consequently, 22,280 participants (9,067 men and 13,213 women) were included in the present study.", "The date and cause of death were confirmed via death certificates and coded according to the International Classification of Diseases, 10th Revision (ICD-10). The primary outcome of the present study was death due to pneumonia or influenza (J9–18, J69). Participants who moved away from the study area during the study period were treated as censored cases.", "We used four statistical models to calculate multivariate-adjusted hazard ratios (HRs) and confidence intervals (CIs) for pneumonia-related mortality.14,15 First, we conducted multivariate adjustment using an inverse probability weighting (IPW) method based on generalized propensity scores because of a relatively small number of pneumonia deaths.16 This approach is a statistical alternative to propensity score matching that balances the confounders in non-randomized studies. To develop the generalized propensity score, we conducted a multinomial logistic regression analysis for the four walking categories using all the demographic information,17 including, age (as a continuous variable), sex (male or female), smoking status (never, former, current smoking, or unknown), alcohol drinking status (never, former, current alcohol drinker, or unknown), body mass index (BMI; <18.5, 18.5–24.9, ≥25.0 kg/m2, or unknown), educational level (school age up to age 15 years, 15–18 years, ≥19 years, or unknown), marital status (single, married, divorced/widowed, or unknown), depressive tendency (defined later; presence, absence, or unknown), sleep duration (<6.5 hours/day, 6.5–8.4 hours/day, ≥8.5 hours/day, or unknown), and a history of cancer (yes or no/unknown), diabetes mellitus (yes or no/unknown), kidney diseases (yes or no/unknown), and asthma (yes or no/unknown). Four psychological or behavioral items from the baseline questionnaire were used to quantify depressive tendency.18 These items are the following: “Do you think your life is meaningful?”, “Do you think you make decisions quickly?”, “Are you enjoying your life?”, and “Do you feel others rely very much on you?”. Participants with two or more psychological or behavioral items were considered to have a depressive tendency. 
Second, to assess for covariate balance, we showed the propensity score overlap through kernel density plots. Third, we then used an IPW Cox proportional hazards model.19 Finally, we utilized a competing risk model,20 in which death caused by factors other than pneumonia was considered as a competing risk. Linear trends in mortality risks were assessed based on four categories of walking hours per day, which were used as numeric variables. To avoid an inverse causal relationship, we considered the participants who walked for 0.5 hours/day as ambulatory,21 and they were included in the reference group. All statistical analyses were conducted separately among participants with or without a history of MI or stroke. An alpha level of 0.05 was considered statistically significant. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). The graphs were drawn using JMP 12.0.2 (SAS Institute Inc.).", "This large-scale cohort study demonstrated that walking at least ≥1 hour/day reduced the risk of pneumonia mortality among elderly individuals with or without a medical history of MI. Our findings suggest that regular walking may be beneficial in reducing the risk of pneumonia mortality in the elderly population." ]
[ null, null, null, null ]
[ "BACKGROUND", "METHODS", "Study population and data collection", "Follow-up", "Statistical analysis", "RESULTS", "DISCUSSION", "Conclusion" ]
[ "Pneumonia is one of the leading causes of death in developed countries ranking sixth and eight in England1 and North America,2 respectively, among all underlying causes of death. Similarly, the combination of pneumonia and influenza ranks third among the leading causes of death in the Japanese elderly population, accounting for more than 561 deaths per 100,000 population annually.3 Indeed, several systematic reviews suggested that age ≥65 years is a risk factor of pneumonia.4–6 Previous cohort studies showed that walking7 or high-intensity physical activities8,9 were associated with a decreased risk of pneumonia. However, the elderly population often has underlying chronic diseases, such as myocardial infarction (MI) or stroke, which may prevent them from walking and increase the risk of pneumonia,10,11 and the association of walking with pneumonia may result from a combination of underlying conditions. Therefore, this study aimed to investigate whether daily walking time was associated with pneumonia mortality in Japanese participants aged 65–79 years with or without a medical history of MI or stroke.", " Study population and data collection The Japan Collaborative Cohort Study for Evaluation of Cancer Risk (JACC Study), which was established in 1988–1990, has been described in detail elsewhere.12,13 Briefly, 110,585 inhabitants (46,395 men and 64,190 women) aged 40–79 years from 45 areas in Japan were enrolled into the study. In the present study, the overall number of baseline participants (aged 65–79 years) was 29,956 (12,196 men and 17,760 women). Data were collected through a self-administered questionnaire, and a response rate of 83% was observed. Information on the average number of daily walking hours and other lifestyle factors was obtained from the baseline questionnaire. Participants were asked about the average daily time spent walking: “How long on average do you spend walking indoors or outside on a daily basis?”, and the possible responses were: “<0.5 hours/day”, “0.5 hours/day,” “0.6–0.9 hours/day,” and “≥1.0 hours/day.”\nOf the original cohort members, 5,492 participants in six areas were excluded because the questionnaire that was used did not include data on the average number of daily walking hours. Furthermore, 2,184 participants from other areas with missing data on the average daily walking hours were excluded. Consequently, 22,280 participants (9,067 men and 13,213 women) were included in the present study.\nThe Japan Collaborative Cohort Study for Evaluation of Cancer Risk (JACC Study), which was established in 1988–1990, has been described in detail elsewhere.12,13 Briefly, 110,585 inhabitants (46,395 men and 64,190 women) aged 40–79 years from 45 areas in Japan were enrolled into the study. In the present study, the overall number of baseline participants (aged 65–79 years) was 29,956 (12,196 men and 17,760 women). Data were collected through a self-administered questionnaire, and a response rate of 83% was observed. Information on the average number of daily walking hours and other lifestyle factors was obtained from the baseline questionnaire. 
Participants were asked about the average daily time spent walking: “How long on average do you spend walking indoors or outside on a daily basis?”, and the possible responses were: “<0.5 hours/day”, “0.5 hours/day,” “0.6–0.9 hours/day,” and “≥1.0 hours/day.”\nOf the original cohort members, 5,492 participants in six areas were excluded because the questionnaire that was used did not include data on the average number of daily walking hours. Furthermore, 2,184 participants from other areas with missing data on the average daily walking hours were excluded. Consequently, 22,280 participants (9,067 men and 13,213 women) were included in the present study.\n Follow-up The date and cause of death were confirmed via death certificates and coded according to the International Classification of Diseases, 10th Revision (ICD-10). The primary outcome of the present study was death due to pneumonia or influenza (J9–18, J69). Participants who moved away from the study area during the study period were treated as censored cases.\nThe date and cause of death were confirmed via death certificates and coded according to the International Classification of Diseases, 10th Revision (ICD-10). The primary outcome of the present study was death due to pneumonia or influenza (J9–18, J69). Participants who moved away from the study area during the study period were treated as censored cases.\n Statistical analysis We used four statistical models to calculate multivariate-adjusted hazard ratios (HRs) and confidence intervals (CIs) for pneumonia-related mortality.14,15 First, we conducted multivariate adjustment using an inverse probability weighting (IPW) method based on generalized propensity scores because of a relatively small number of pneumonia deaths.16 This approach is a statistical alternative to propensity score matching that balances the confounders in non-randomized studies. To develop the generalized propensity score, we conducted a multinomial logistic regression analysis for the four walking categories using all the demographic information,17 including, age (as a continuous variable), sex (male or female), smoking status (never, former, current smoking, or unknown), alcohol drinking status (never, former, current alcohol drinker, or unknown), body mass index (BMI; <18.5, 18.5–24.9, ≥25.0 kg/m2, or unknown), educational level (school age up to age 15 years, 15–18 years, ≥19 years, or unknown), marital status (single, married, divorced/widowed, or unknown), depressive tendency (defined later; presence, absence, or unknown), sleep duration (<6.5 hours/day, 6.5–8.4 hours/day, ≥8.5 hours/day, or unknown), and a history of cancer (yes or no/unknown), diabetes mellitus (yes or no/unknown), kidney diseases (yes or no/unknown), and asthma (yes or no/unknown). Four psychological or behavioral items from the baseline questionnaire were used to quantify depressive tendency.18 These items are the following: “Do you think your life is meaningful?”, “Do you think you make decisions quickly?”, “Are you enjoying your life?”, and “Do you feel others rely very much on you?”. Participants with two or more psychological or behavioral items were considered to have a depressive tendency. Second, to assess for covariate balance, we showed the propensity score overlap through kernel density plots. Third, we then used an IPW Cox proportional hazards model.19 Finally, we utilized a competing risk model,20 in which death caused by factors other than pneumonia was considered as a competing risk. 
Linear trends in mortality risks were assessed based on four categories of walking hours per day, which were used as numeric variables. To avoid an inverse causal relationship, we considered the participants who walked for 0.5 hours/day as ambulatory,21 and they were included in the reference group. All statistical analyses were conducted separately among participants with or without a history of MI or stroke. An alpha level of 0.05 was considered statistically significant. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). The graphs were drawn using JMP 12.0.2 (SAS Institute Inc.).\nWe used four statistical models to calculate multivariate-adjusted hazard ratios (HRs) and confidence intervals (CIs) for pneumonia-related mortality.14,15 First, we conducted multivariate adjustment using an inverse probability weighting (IPW) method based on generalized propensity scores because of a relatively small number of pneumonia deaths.16 This approach is a statistical alternative to propensity score matching that balances the confounders in non-randomized studies. To develop the generalized propensity score, we conducted a multinomial logistic regression analysis for the four walking categories using all the demographic information,17 including, age (as a continuous variable), sex (male or female), smoking status (never, former, current smoking, or unknown), alcohol drinking status (never, former, current alcohol drinker, or unknown), body mass index (BMI; <18.5, 18.5–24.9, ≥25.0 kg/m2, or unknown), educational level (school age up to age 15 years, 15–18 years, ≥19 years, or unknown), marital status (single, married, divorced/widowed, or unknown), depressive tendency (defined later; presence, absence, or unknown), sleep duration (<6.5 hours/day, 6.5–8.4 hours/day, ≥8.5 hours/day, or unknown), and a history of cancer (yes or no/unknown), diabetes mellitus (yes or no/unknown), kidney diseases (yes or no/unknown), and asthma (yes or no/unknown). Four psychological or behavioral items from the baseline questionnaire were used to quantify depressive tendency.18 These items are the following: “Do you think your life is meaningful?”, “Do you think you make decisions quickly?”, “Are you enjoying your life?”, and “Do you feel others rely very much on you?”. Participants with two or more psychological or behavioral items were considered to have a depressive tendency. Second, to assess for covariate balance, we showed the propensity score overlap through kernel density plots. Third, we then used an IPW Cox proportional hazards model.19 Finally, we utilized a competing risk model,20 in which death caused by factors other than pneumonia was considered as a competing risk. Linear trends in mortality risks were assessed based on four categories of walking hours per day, which were used as numeric variables. To avoid an inverse causal relationship, we considered the participants who walked for 0.5 hours/day as ambulatory,21 and they were included in the reference group. All statistical analyses were conducted separately among participants with or without a history of MI or stroke. An alpha level of 0.05 was considered statistically significant. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). 
The graphs were drawn using JMP 12.0.2 (SAS Institute Inc.).", "The Japan Collaborative Cohort Study for Evaluation of Cancer Risk (JACC Study), which was established in 1988–1990, has been described in detail elsewhere.12,13 Briefly, 110,585 inhabitants (46,395 men and 64,190 women) aged 40–79 years from 45 areas in Japan were enrolled into the study. In the present study, the overall number of baseline participants (aged 65–79 years) was 29,956 (12,196 men and 17,760 women). Data were collected through a self-administered questionnaire, and a response rate of 83% was observed. Information on the average number of daily walking hours and other lifestyle factors was obtained from the baseline questionnaire. Participants were asked about the average daily time spent walking: “How long on average do you spend walking indoors or outside on a daily basis?”, and the possible responses were: “<0.5 hours/day”, “0.5 hours/day,” “0.6–0.9 hours/day,” and “≥1.0 hours/day.”\nOf the original cohort members, 5,492 participants in six areas were excluded because the questionnaire that was used did not include data on the average number of daily walking hours. Furthermore, 2,184 participants from other areas with missing data on the average daily walking hours were excluded. Consequently, 22,280 participants (9,067 men and 13,213 women) were included in the present study.", "The date and cause of death were confirmed via death certificates and coded according to the International Classification of Diseases, 10th Revision (ICD-10). The primary outcome of the present study was death due to pneumonia or influenza (J9–18, J69). Participants who moved away from the study area during the study period were treated as censored cases.", "We used four statistical models to calculate multivariate-adjusted hazard ratios (HRs) and confidence intervals (CIs) for pneumonia-related mortality.14,15 First, we conducted multivariate adjustment using an inverse probability weighting (IPW) method based on generalized propensity scores because of a relatively small number of pneumonia deaths.16 This approach is a statistical alternative to propensity score matching that balances the confounders in non-randomized studies. To develop the generalized propensity score, we conducted a multinomial logistic regression analysis for the four walking categories using all the demographic information,17 including, age (as a continuous variable), sex (male or female), smoking status (never, former, current smoking, or unknown), alcohol drinking status (never, former, current alcohol drinker, or unknown), body mass index (BMI; <18.5, 18.5–24.9, ≥25.0 kg/m2, or unknown), educational level (school age up to age 15 years, 15–18 years, ≥19 years, or unknown), marital status (single, married, divorced/widowed, or unknown), depressive tendency (defined later; presence, absence, or unknown), sleep duration (<6.5 hours/day, 6.5–8.4 hours/day, ≥8.5 hours/day, or unknown), and a history of cancer (yes or no/unknown), diabetes mellitus (yes or no/unknown), kidney diseases (yes or no/unknown), and asthma (yes or no/unknown). Four psychological or behavioral items from the baseline questionnaire were used to quantify depressive tendency.18 These items are the following: “Do you think your life is meaningful?”, “Do you think you make decisions quickly?”, “Are you enjoying your life?”, and “Do you feel others rely very much on you?”. Participants with two or more psychological or behavioral items were considered to have a depressive tendency. 
Second, to assess for covariate balance, we showed the propensity score overlap through kernel density plots. Third, we then used an IPW Cox proportional hazards model.19 Finally, we utilized a competing risk model,20 in which death caused by factors other than pneumonia was considered as a competing risk. Linear trends in mortality risks were assessed based on four categories of walking hours per day, which were used as numeric variables. To avoid an inverse causal relationship, we considered the participants who walked for 0.5 hours/day as ambulatory,21 and they were included in the reference group. All statistical analyses were conducted separately among participants with or without a history of MI or stroke. An alpha level of 0.05 was considered statistically significant. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). The graphs were drawn using JMP 12.0.2 (SAS Institute Inc.).", "Of the 22,280 participants, 1,210, 604, and 80 participants had a history of MI, stroke, and both MI and stroke, respectively. The baseline mean age of the participants was 70.5 (standard deviation [SD], 4.1) years (men, 70.6 [SD, 4.1] years; women, 70.4 [SD, 4.1] years).\nThe baseline characteristics of the participants with or without a history of MI or stroke according to daily walking time are shown in Table 1. About 50.4% of participants without a history of MI and stroke, 41.8% of participants with a history of MI, and 33.9% of the participants with a history of stroke walked >1 hour daily. Among the participants with or without a history of MI or stroke, those who walked >1 hour/day were younger in age, women, and current drinkers; these participants also had a BMI of 18.5–25.0 kg/m2 and sleep duration of 6.5–8.4 hours/day. Moreover, they were less likely to study in college or obtain a diploma and have a depressive tendency and history of diabetes mellitus and cancer compared with those who walked for 0.5 hours/day. The differences of all the potential confounders among the four categories of walking were reduced after weighting (eFigure 1).\nMI, myocardial infarction.\nValues are expressed as mean ± standard deviation or percentage.\nThe median follow-up period was 11.9 years. For 306,578 person-years of follow-up (109,078 person-years for men and 197,500 person-years for women), 1,203 participants (731 men and 472 women) died from pneumonia, 1,367 left the study area, and 9,753 died because of factors other than pneumonia. The HRs of pneumonia mortality according to walking time are shown in Table 2. Participants without a history of MI or stroke who walked for more than 1 hour/day were less likely to die because of pneumonia (HR 0.90; 95% CI, 0.82–0.98; P for trend < 0.001) compared with those who walked for 0.5 hours/day. Similarly, decreased HRs were observed among participants with a history of MI (HR 0.66; 95% CI, 0.48–0.90; P for trend = 0.02). Among the participants with a history of stroke, those who walked for more than 0.6–0.9 hours/day were less likely to die because of pneumonia compared with those who walked for 0.5 hour/day (HR 0.65; 95% CI, 0.43–0.98). In contrast, the death of individuals who walked ≥1 hour/day was not associated with pneumonia (HR 1.15; 95% CI, 0.81–1.63).\nCI, confidence interval; CVD, cardiovascular disease; HR, hazard ratio; MI, myocardial infarction; PY, person-year.\nP for trend was calculated across the categories of walking time. 
The generalized propensity scores were calculated by a multinomial logistic regression analysis for the four walking categories using the demographic information, including, age, sex, smoking status, alcohol drinking status, body mass index, educational level, marital status, depressive tendency, sleep duration, and a history of cancer, diabetes mellitus, kidney diseases, and asthma.\n*P < 0.05.\naCox proportional hazard model with inverse propensity weighting.\nbCompeting risk model with inverse propensity weighting.", "In this large cohort study, participants without a history of MI or stroke who walked for ≥1 hour/day were less likely to die from pneumonia than those who walked for 0.5 hours/day. Similar results were obtained among participants with a history of MI. For participants with a history of stroke, those who walked for 0.6–0.9 hours/day were less likely to die from pneumonia than those who walked for 0.5 hours/day. However, this inverse association was not observed in participants with a history of stroke who walked ≥1 hour/day.\nParticipants with or without a history of MI or stroke who walk for a longer number of hours per day were less likely to die from pneumonia compared with those who walk for a shorter number of hours per day. Our finding shows a reduced incidence of pneumonia8 and pneumonia mortality7,9 in community residents with moderate physical activities, and this finding is similar to the results obtained from three previous studies. Whether elderly individuals with coronary heart disease are more susceptible to pneumonia than those without coronary heart disease is not clear.22 Two studies, including a cohort study23 and nested case-control study,10 suggested that elderly individuals with chronic cardiovascular disease were more likely to develop pneumonia compared with those without the disease condition (HR 1.46; 95% CI, 1.16–1.84 and HR 1.68; 95% CI, 1.58–1.77, respectively). These individuals may acquire pneumonia because of deteriorated immunological responses.24 Furthermore, since pneumonia itself causes cardiac problems,25 such as left ventricular dysfunction26,27 or increased cardiac arrhythmias,28,29 those who experienced cardiac complications along with pneumonia were more likely to die.30 Therefore, walking may be important for individuals with heart disease because it enhances mucosal immune function in elderly individuals.31 Furthermore, moderate physical activities, such as walking, enhance the immune function by increasing the activities of macrophages, natural killer cells,32 and neutrophils and regulating cytokines.33,34 However, caution must be taken into consideration. Although rare, some patients with coronary artery disease have sudden cardiac death during exercise, particularly habitually sedentary adults.35 Thus, elderly individuals with a history of coronary artery disease should obtain an exercise prescription from physicians to avoid inappropriate physical activities.\nA population-based cohort study suggested that elderly participants with a history of stroke were 1.26 (95% CI, 1.08–1.48) times more likely to be diagnosed with pneumonia than those without stroke after a 3-year follow-up.11 In the present study, participants with a history of stroke who walked for 0.6–0.9 hours/day were less likely to die from pneumonia compared with those who walked for 0.5 hours/day. This benefit was not observed among those who walked ≥1 hour/day. 
However, the mechanisms explaining why a longer walking time had no additional benefits for elderly individuals with stroke was unclear. Although a meta-analysis revealed that aerobic exercise training during the chronic stage of stroke recovery has beneficial effects on cardiorespiratory health,36 the oxygen cost of walking among people with history of stroke is 2-fold higher than that reported for non-stroke participants.37 Furthermore, other biological changes that may negatively affect cardiorespiratory health after stroke include elevated systemic levels of proinflammatory markers, abnormal glucose levels and insulin metabolism, impaired autonomic control, and respiratory dysfunction.38 In the context of age- and disease-related heterogeneity in cardiorespiratory capacity and medical comorbidities, walking duration ≥1 hour/day might be too much for elderly individuals with a history of stroke. However, to clarify an appropriate duration and time of physical activities for elderly individuals with a history of stroke, further epidemiological studies should be conducted.\nThe strengths of the present study include its prospective cohort design, long follow-up period, and an inclusion of participants from Japan. Several limitations should be discussed. First, we obtained information on the daily walking time through self-report, and that information was not validated. Therefore, some misclassifications were possibly included in our results.39 The use of an accelerometer40 can provide more reliable results. Second, the effects of the residual confounding factors were not completely excluded. Third, information on the duration and severity of MI and stroke were not available. These conditions may affect walking ability, capacity, or habit. Fourth, we obtained information retrospectively via medical histories; therefore, some misclassification could be included in our results.\n Conclusion This large-scale cohort study demonstrated that walking at least ≥1 hour/day reduced the risk of pneumonia mortality among elderly individuals with or without a medical history of MI. Our findings suggest that regular walking may be beneficial in reducing the risk of pneumonia mortality in the elderly population.\nThis large-scale cohort study demonstrated that walking at least ≥1 hour/day reduced the risk of pneumonia mortality among elderly individuals with or without a medical history of MI. Our findings suggest that regular walking may be beneficial in reducing the risk of pneumonia mortality in the elderly population.", "This large-scale cohort study demonstrated that walking at least ≥1 hour/day reduced the risk of pneumonia mortality among elderly individuals with or without a medical history of MI. Our findings suggest that regular walking may be beneficial in reducing the risk of pneumonia mortality in the elderly population." ]
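The outcome models named in the Statistical analysis section (an IPW Cox proportional hazards model, a linear trend test across the ordinal walking categories, and a competing risk model treating non-pneumonia deaths as competing events) could be approximated along the following lines. Again, this is a hedged Python sketch using the lifelines package rather than the SAS procedures actually used; the column names (`time`, `pneumonia_death`, `walk_cat`, `ipw`) are assumptions carried over from the previous sketch.

```python
# Minimal sketch of the weighted survival models (assumed column names; the
# study itself used SAS 9.4). `ipw` is the stabilized weight from the PS step.
import pandas as pd
from lifelines import CoxPHFitter

def fit_ipw_cox(df: pd.DataFrame) -> CoxPHFitter:
    # Indicator coding of the walking categories; dropping "walk_1" makes the
    # 0.5 hours/day group the reference (assuming the coding used above).
    design = pd.get_dummies(df["walk_cat"], prefix="walk").astype(float)
    design = design.drop(columns=["walk_1"])
    data = pd.concat([df[["time", "pneumonia_death", "ipw"]], design], axis=1)
    cph = CoxPHFitter()
    # Robust (sandwich) variance is advisable when fitting with IPW weights.
    cph.fit(data, duration_col="time", event_col="pneumonia_death",
            weights_col="ipw", robust=True)
    return cph  # cph.print_summary() shows HRs with 95% CIs

def fit_trend(df: pd.DataFrame) -> CoxPHFitter:
    # Linear trend: the ordinal category entered as a single numeric score.
    data = df[["time", "pneumonia_death", "ipw", "walk_cat"]]
    return CoxPHFitter().fit(data, duration_col="time",
                             event_col="pneumonia_death",
                             weights_col="ipw", robust=True)

# Competing risks: in this cause-specific formulation, deaths from causes other
# than pneumonia are simply censored; the paper additionally reports a dedicated
# competing-risk model, which would require a subdistribution-hazard estimator.
```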
[ "other1", "methods", null, null, null, "results", "discussion", null ]
[ "walking", "pneumonia", "influenza", "motor activity", "epidemiology" ]
BACKGROUND: Pneumonia is one of the leading causes of death in developed countries ranking sixth and eight in England1 and North America,2 respectively, among all underlying causes of death. Similarly, the combination of pneumonia and influenza ranks third among the leading causes of death in the Japanese elderly population, accounting for more than 561 deaths per 100,000 population annually.3 Indeed, several systematic reviews suggested that age ≥65 years is a risk factor of pneumonia.4–6 Previous cohort studies showed that walking7 or high-intensity physical activities8,9 were associated with a decreased risk of pneumonia. However, the elderly population often has underlying chronic diseases, such as myocardial infarction (MI) or stroke, which may prevent them from walking and increase the risk of pneumonia,10,11 and the association of walking with pneumonia may result from a combination of underlying conditions. Therefore, this study aimed to investigate whether daily walking time was associated with pneumonia mortality in Japanese participants aged 65–79 years with or without a medical history of MI or stroke. METHODS: Study population and data collection The Japan Collaborative Cohort Study for Evaluation of Cancer Risk (JACC Study), which was established in 1988–1990, has been described in detail elsewhere.12,13 Briefly, 110,585 inhabitants (46,395 men and 64,190 women) aged 40–79 years from 45 areas in Japan were enrolled into the study. In the present study, the overall number of baseline participants (aged 65–79 years) was 29,956 (12,196 men and 17,760 women). Data were collected through a self-administered questionnaire, and a response rate of 83% was observed. Information on the average number of daily walking hours and other lifestyle factors was obtained from the baseline questionnaire. Participants were asked about the average daily time spent walking: “How long on average do you spend walking indoors or outside on a daily basis?”, and the possible responses were: “<0.5 hours/day”, “0.5 hours/day,” “0.6–0.9 hours/day,” and “≥1.0 hours/day.” Of the original cohort members, 5,492 participants in six areas were excluded because the questionnaire that was used did not include data on the average number of daily walking hours. Furthermore, 2,184 participants from other areas with missing data on the average daily walking hours were excluded. Consequently, 22,280 participants (9,067 men and 13,213 women) were included in the present study. The Japan Collaborative Cohort Study for Evaluation of Cancer Risk (JACC Study), which was established in 1988–1990, has been described in detail elsewhere.12,13 Briefly, 110,585 inhabitants (46,395 men and 64,190 women) aged 40–79 years from 45 areas in Japan were enrolled into the study. In the present study, the overall number of baseline participants (aged 65–79 years) was 29,956 (12,196 men and 17,760 women). Data were collected through a self-administered questionnaire, and a response rate of 83% was observed. Information on the average number of daily walking hours and other lifestyle factors was obtained from the baseline questionnaire. 
Participants were asked about the average daily time spent walking: “How long on average do you spend walking indoors or outside on a daily basis?”, and the possible responses were: “<0.5 hours/day”, “0.5 hours/day,” “0.6–0.9 hours/day,” and “≥1.0 hours/day.” Of the original cohort members, 5,492 participants in six areas were excluded because the questionnaire that was used did not include data on the average number of daily walking hours. Furthermore, 2,184 participants from other areas with missing data on the average daily walking hours were excluded. Consequently, 22,280 participants (9,067 men and 13,213 women) were included in the present study. Follow-up The date and cause of death were confirmed via death certificates and coded according to the International Classification of Diseases, 10th Revision (ICD-10). The primary outcome of the present study was death due to pneumonia or influenza (J9–18, J69). Participants who moved away from the study area during the study period were treated as censored cases. The date and cause of death were confirmed via death certificates and coded according to the International Classification of Diseases, 10th Revision (ICD-10). The primary outcome of the present study was death due to pneumonia or influenza (J9–18, J69). Participants who moved away from the study area during the study period were treated as censored cases. Statistical analysis We used four statistical models to calculate multivariate-adjusted hazard ratios (HRs) and confidence intervals (CIs) for pneumonia-related mortality.14,15 First, we conducted multivariate adjustment using an inverse probability weighting (IPW) method based on generalized propensity scores because of a relatively small number of pneumonia deaths.16 This approach is a statistical alternative to propensity score matching that balances the confounders in non-randomized studies. To develop the generalized propensity score, we conducted a multinomial logistic regression analysis for the four walking categories using all the demographic information,17 including, age (as a continuous variable), sex (male or female), smoking status (never, former, current smoking, or unknown), alcohol drinking status (never, former, current alcohol drinker, or unknown), body mass index (BMI; <18.5, 18.5–24.9, ≥25.0 kg/m2, or unknown), educational level (school age up to age 15 years, 15–18 years, ≥19 years, or unknown), marital status (single, married, divorced/widowed, or unknown), depressive tendency (defined later; presence, absence, or unknown), sleep duration (<6.5 hours/day, 6.5–8.4 hours/day, ≥8.5 hours/day, or unknown), and a history of cancer (yes or no/unknown), diabetes mellitus (yes or no/unknown), kidney diseases (yes or no/unknown), and asthma (yes or no/unknown). Four psychological or behavioral items from the baseline questionnaire were used to quantify depressive tendency.18 These items are the following: “Do you think your life is meaningful?”, “Do you think you make decisions quickly?”, “Are you enjoying your life?”, and “Do you feel others rely very much on you?”. Participants with two or more psychological or behavioral items were considered to have a depressive tendency. Second, to assess for covariate balance, we showed the propensity score overlap through kernel density plots. Third, we then used an IPW Cox proportional hazards model.19 Finally, we utilized a competing risk model,20 in which death caused by factors other than pneumonia was considered as a competing risk. 
Linear trends in mortality risks were assessed based on four categories of walking hours per day, which were used as numeric variables. To avoid an inverse causal relationship, we considered the participants who walked for 0.5 hours/day as ambulatory,21 and they were included in the reference group. All statistical analyses were conducted separately among participants with or without a history of MI or stroke. An alpha level of 0.05 was considered statistically significant. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). The graphs were drawn using JMP 12.0.2 (SAS Institute Inc.). We used four statistical models to calculate multivariate-adjusted hazard ratios (HRs) and confidence intervals (CIs) for pneumonia-related mortality.14,15 First, we conducted multivariate adjustment using an inverse probability weighting (IPW) method based on generalized propensity scores because of a relatively small number of pneumonia deaths.16 This approach is a statistical alternative to propensity score matching that balances the confounders in non-randomized studies. To develop the generalized propensity score, we conducted a multinomial logistic regression analysis for the four walking categories using all the demographic information,17 including, age (as a continuous variable), sex (male or female), smoking status (never, former, current smoking, or unknown), alcohol drinking status (never, former, current alcohol drinker, or unknown), body mass index (BMI; <18.5, 18.5–24.9, ≥25.0 kg/m2, or unknown), educational level (school age up to age 15 years, 15–18 years, ≥19 years, or unknown), marital status (single, married, divorced/widowed, or unknown), depressive tendency (defined later; presence, absence, or unknown), sleep duration (<6.5 hours/day, 6.5–8.4 hours/day, ≥8.5 hours/day, or unknown), and a history of cancer (yes or no/unknown), diabetes mellitus (yes or no/unknown), kidney diseases (yes or no/unknown), and asthma (yes or no/unknown). Four psychological or behavioral items from the baseline questionnaire were used to quantify depressive tendency.18 These items are the following: “Do you think your life is meaningful?”, “Do you think you make decisions quickly?”, “Are you enjoying your life?”, and “Do you feel others rely very much on you?”. Participants with two or more psychological or behavioral items were considered to have a depressive tendency. Second, to assess for covariate balance, we showed the propensity score overlap through kernel density plots. Third, we then used an IPW Cox proportional hazards model.19 Finally, we utilized a competing risk model,20 in which death caused by factors other than pneumonia was considered as a competing risk. Linear trends in mortality risks were assessed based on four categories of walking hours per day, which were used as numeric variables. To avoid an inverse causal relationship, we considered the participants who walked for 0.5 hours/day as ambulatory,21 and they were included in the reference group. All statistical analyses were conducted separately among participants with or without a history of MI or stroke. An alpha level of 0.05 was considered statistically significant. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). The graphs were drawn using JMP 12.0.2 (SAS Institute Inc.). 
Study population and data collection: The Japan Collaborative Cohort Study for Evaluation of Cancer Risk (JACC Study), which was established in 1988–1990, has been described in detail elsewhere.12,13 Briefly, 110,585 inhabitants (46,395 men and 64,190 women) aged 40–79 years from 45 areas in Japan were enrolled into the study. In the present study, the overall number of baseline participants (aged 65–79 years) was 29,956 (12,196 men and 17,760 women). Data were collected through a self-administered questionnaire, and a response rate of 83% was observed. Information on the average number of daily walking hours and other lifestyle factors was obtained from the baseline questionnaire. Participants were asked about the average daily time spent walking: “How long on average do you spend walking indoors or outside on a daily basis?”, and the possible responses were: “<0.5 hours/day”, “0.5 hours/day,” “0.6–0.9 hours/day,” and “≥1.0 hours/day.” Of the original cohort members, 5,492 participants in six areas were excluded because the questionnaire that was used did not include data on the average number of daily walking hours. Furthermore, 2,184 participants from other areas with missing data on the average daily walking hours were excluded. Consequently, 22,280 participants (9,067 men and 13,213 women) were included in the present study. Follow-up: The date and cause of death were confirmed via death certificates and coded according to the International Classification of Diseases, 10th Revision (ICD-10). The primary outcome of the present study was death due to pneumonia or influenza (J9–18, J69). Participants who moved away from the study area during the study period were treated as censored cases. Statistical analysis: We used four statistical models to calculate multivariate-adjusted hazard ratios (HRs) and confidence intervals (CIs) for pneumonia-related mortality.14,15 First, we conducted multivariate adjustment using an inverse probability weighting (IPW) method based on generalized propensity scores because of a relatively small number of pneumonia deaths.16 This approach is a statistical alternative to propensity score matching that balances the confounders in non-randomized studies. To develop the generalized propensity score, we conducted a multinomial logistic regression analysis for the four walking categories using all the demographic information,17 including, age (as a continuous variable), sex (male or female), smoking status (never, former, current smoking, or unknown), alcohol drinking status (never, former, current alcohol drinker, or unknown), body mass index (BMI; <18.5, 18.5–24.9, ≥25.0 kg/m2, or unknown), educational level (school age up to age 15 years, 15–18 years, ≥19 years, or unknown), marital status (single, married, divorced/widowed, or unknown), depressive tendency (defined later; presence, absence, or unknown), sleep duration (<6.5 hours/day, 6.5–8.4 hours/day, ≥8.5 hours/day, or unknown), and a history of cancer (yes or no/unknown), diabetes mellitus (yes or no/unknown), kidney diseases (yes or no/unknown), and asthma (yes or no/unknown). Four psychological or behavioral items from the baseline questionnaire were used to quantify depressive tendency.18 These items are the following: “Do you think your life is meaningful?”, “Do you think you make decisions quickly?”, “Are you enjoying your life?”, and “Do you feel others rely very much on you?”. Participants with two or more psychological or behavioral items were considered to have a depressive tendency. 
Second, to assess for covariate balance, we showed the propensity score overlap through kernel density plots. Third, we then used an IPW Cox proportional hazards model.19 Finally, we utilized a competing risk model,20 in which death caused by factors other than pneumonia was considered as a competing risk. Linear trends in mortality risks were assessed based on four categories of walking hours per day, which were used as numeric variables. To avoid an inverse causal relationship, we considered the participants who walked for 0.5 hours/day as ambulatory,21 and they were included in the reference group. All statistical analyses were conducted separately among participants with or without a history of MI or stroke. An alpha level of 0.05 was considered statistically significant. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). The graphs were drawn using JMP 12.0.2 (SAS Institute Inc.). RESULTS: Of the 22,280 participants, 1,210, 604, and 80 participants had a history of MI, stroke, and both MI and stroke, respectively. The baseline mean age of the participants was 70.5 (standard deviation [SD], 4.1) years (men, 70.6 [SD, 4.1] years; women, 70.4 [SD, 4.1] years). The baseline characteristics of the participants with or without a history of MI or stroke according to daily walking time are shown in Table 1. About 50.4% of participants without a history of MI and stroke, 41.8% of participants with a history of MI, and 33.9% of the participants with a history of stroke walked >1 hour daily. Among the participants with or without a history of MI or stroke, those who walked >1 hour/day were younger in age, women, and current drinkers; these participants also had a BMI of 18.5–25.0 kg/m2 and sleep duration of 6.5–8.4 hours/day. Moreover, they were less likely to study in college or obtain a diploma and have a depressive tendency and history of diabetes mellitus and cancer compared with those who walked for 0.5 hours/day. The differences of all the potential confounders among the four categories of walking were reduced after weighting (eFigure 1). MI, myocardial infarction. Values are expressed as mean ± standard deviation or percentage. The median follow-up period was 11.9 years. For 306,578 person-years of follow-up (109,078 person-years for men and 197,500 person-years for women), 1,203 participants (731 men and 472 women) died from pneumonia, 1,367 left the study area, and 9,753 died because of factors other than pneumonia. The HRs of pneumonia mortality according to walking time are shown in Table 2. Participants without a history of MI or stroke who walked for more than 1 hour/day were less likely to die because of pneumonia (HR 0.90; 95% CI, 0.82–0.98; P for trend < 0.001) compared with those who walked for 0.5 hours/day. Similarly, decreased HRs were observed among participants with a history of MI (HR 0.66; 95% CI, 0.48–0.90; P for trend = 0.02). Among the participants with a history of stroke, those who walked for more than 0.6–0.9 hours/day were less likely to die because of pneumonia compared with those who walked for 0.5 hour/day (HR 0.65; 95% CI, 0.43–0.98). In contrast, the death of individuals who walked ≥1 hour/day was not associated with pneumonia (HR 1.15; 95% CI, 0.81–1.63). CI, confidence interval; CVD, cardiovascular disease; HR, hazard ratio; MI, myocardial infarction; PY, person-year. P for trend was calculated across the categories of walking time. 
The generalized propensity scores were calculated by a multinomial logistic regression analysis for the four walking categories using the demographic information, including, age, sex, smoking status, alcohol drinking status, body mass index, educational level, marital status, depressive tendency, sleep duration, and a history of cancer, diabetes mellitus, kidney diseases, and asthma. *P < 0.05. aCox proportional hazard model with inverse propensity weighting. bCompeting risk model with inverse propensity weighting. DISCUSSION: In this large cohort study, participants without a history of MI or stroke who walked for ≥1 hour/day were less likely to die from pneumonia than those who walked for 0.5 hours/day. Similar results were obtained among participants with a history of MI. For participants with a history of stroke, those who walked for 0.6–0.9 hours/day were less likely to die from pneumonia than those who walked for 0.5 hours/day. However, this inverse association was not observed in participants with a history of stroke who walked ≥1 hour/day. Participants with or without a history of MI or stroke who walk for a longer number of hours per day were less likely to die from pneumonia compared with those who walk for a shorter number of hours per day. Our finding shows a reduced incidence of pneumonia8 and pneumonia mortality7,9 in community residents with moderate physical activities, and this finding is similar to the results obtained from three previous studies. Whether elderly individuals with coronary heart disease are more susceptible to pneumonia than those without coronary heart disease is not clear.22 Two studies, including a cohort study23 and nested case-control study,10 suggested that elderly individuals with chronic cardiovascular disease were more likely to develop pneumonia compared with those without the disease condition (HR 1.46; 95% CI, 1.16–1.84 and HR 1.68; 95% CI, 1.58–1.77, respectively). These individuals may acquire pneumonia because of deteriorated immunological responses.24 Furthermore, since pneumonia itself causes cardiac problems,25 such as left ventricular dysfunction26,27 or increased cardiac arrhythmias,28,29 those who experienced cardiac complications along with pneumonia were more likely to die.30 Therefore, walking may be important for individuals with heart disease because it enhances mucosal immune function in elderly individuals.31 Furthermore, moderate physical activities, such as walking, enhance the immune function by increasing the activities of macrophages, natural killer cells,32 and neutrophils and regulating cytokines.33,34 However, caution must be taken into consideration. Although rare, some patients with coronary artery disease have sudden cardiac death during exercise, particularly habitually sedentary adults.35 Thus, elderly individuals with a history of coronary artery disease should obtain an exercise prescription from physicians to avoid inappropriate physical activities. A population-based cohort study suggested that elderly participants with a history of stroke were 1.26 (95% CI, 1.08–1.48) times more likely to be diagnosed with pneumonia than those without stroke after a 3-year follow-up.11 In the present study, participants with a history of stroke who walked for 0.6–0.9 hours/day were less likely to die from pneumonia compared with those who walked for 0.5 hours/day. This benefit was not observed among those who walked ≥1 hour/day. 
However, the mechanisms explaining why a longer walking time had no additional benefits for elderly individuals with stroke was unclear. Although a meta-analysis revealed that aerobic exercise training during the chronic stage of stroke recovery has beneficial effects on cardiorespiratory health,36 the oxygen cost of walking among people with history of stroke is 2-fold higher than that reported for non-stroke participants.37 Furthermore, other biological changes that may negatively affect cardiorespiratory health after stroke include elevated systemic levels of proinflammatory markers, abnormal glucose levels and insulin metabolism, impaired autonomic control, and respiratory dysfunction.38 In the context of age- and disease-related heterogeneity in cardiorespiratory capacity and medical comorbidities, walking duration ≥1 hour/day might be too much for elderly individuals with a history of stroke. However, to clarify an appropriate duration and time of physical activities for elderly individuals with a history of stroke, further epidemiological studies should be conducted. The strengths of the present study include its prospective cohort design, long follow-up period, and an inclusion of participants from Japan. Several limitations should be discussed. First, we obtained information on the daily walking time through self-report, and that information was not validated. Therefore, some misclassifications were possibly included in our results.39 The use of an accelerometer40 can provide more reliable results. Second, the effects of the residual confounding factors were not completely excluded. Third, information on the duration and severity of MI and stroke were not available. These conditions may affect walking ability, capacity, or habit. Fourth, we obtained information retrospectively via medical histories; therefore, some misclassification could be included in our results. Conclusion This large-scale cohort study demonstrated that walking at least ≥1 hour/day reduced the risk of pneumonia mortality among elderly individuals with or without a medical history of MI. Our findings suggest that regular walking may be beneficial in reducing the risk of pneumonia mortality in the elderly population. This large-scale cohort study demonstrated that walking at least ≥1 hour/day reduced the risk of pneumonia mortality among elderly individuals with or without a medical history of MI. Our findings suggest that regular walking may be beneficial in reducing the risk of pneumonia mortality in the elderly population. Conclusion: This large-scale cohort study demonstrated that walking at least ≥1 hour/day reduced the risk of pneumonia mortality among elderly individuals with or without a medical history of MI. Our findings suggest that regular walking may be beneficial in reducing the risk of pneumonia mortality in the elderly population.
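The balance diagnostics mentioned above (propensity score overlap shown with kernel density plots, eFigure 1) were drawn with JMP in the original work; a rough Python equivalent, reusing the hypothetical column names from the earlier sketches, might look like this.

```python
# Illustrative overlap plot (assumed columns `ps_received` and `walk_cat`);
# the original figures were produced with JMP 12.0.2, not with this code.
import matplotlib.pyplot as plt
import pandas as pd

def plot_ps_overlap(df: pd.DataFrame, ps_col: str = "ps_received",
                    group_col: str = "walk_cat"):
    fig, ax = plt.subplots()
    for cat, grp in df.groupby(group_col):
        # Kernel density of the generalized propensity score within each group;
        # well-overlapping curves support comparability of the four groups.
        grp[ps_col].plot(kind="kde", ax=ax, label=f"walking category {cat}")
    ax.set_xlabel("generalized propensity score of the received category")
    ax.legend()
    return fig
```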
Background: The association between daily walking and pneumonia mortality, stratified by the presence of disease conditions such as myocardial infarction (MI) or stroke, was investigated. Methods: The study participants were 22,280 Japanese individuals (9,067 men and 13,213 women) aged 65-79 years. An inverse propensity weighted competing risk model was used to calculate the hazard ratio (HR) and 95% confidence interval (CI) for pneumonia mortality. Results: After a median of 11.9 years of follow-up, 1,203 participants died of pneumonia. Participants who did not have a history of MI or stroke and who walked for 1 hour/day or more were less likely to die from pneumonia (HR 0.90; 95% CI, 0.82-0.98) than those who walked for 0.5 hours/day. A similar inverse association between walking and pneumonia mortality (vs 0.5 hours/day) was observed among participants with a history of MI (HR 0.66; 95% CI, 0.48-0.90). Among the participants with a history of stroke, those who walked for 0.6-0.9 hours/day were less likely to die because of pneumonia (HR 0.65; 95% CI, 0.43-0.98). Conclusions: Regular walking for ≥1 hour/day may reduce the risk of pneumonia mortality in elderly individuals with or without cardiovascular disease history.
null
null
4,396
259
[ 254, 65, 533, 54 ]
8
[ "participants", "day", "hours", "walking", "pneumonia", "hours day", "study", "unknown", "history", "stroke" ]
[ "mortality japanese participants", "risk pneumonia elderly", "pneumonia walked hours", "association walking pneumonia", "pneumonia mortality japanese" ]
null
null
null
[CONTENT] walking | pneumonia | influenza | motor activity | epidemiology [SUMMARY]
[CONTENT] walking | pneumonia | influenza | motor activity | epidemiology [SUMMARY]
[CONTENT] walking | pneumonia | influenza | motor activity | epidemiology [SUMMARY]
[CONTENT] walking | pneumonia | influenza | motor activity | epidemiology [SUMMARY]
null
null
[CONTENT] Aged | Cause of Death | Cohort Studies | Female | Humans | Japan | Male | Myocardial Infarction | Pneumonia | Proportional Hazards Models | Prospective Studies | Stroke | Walking [SUMMARY]
[CONTENT] Aged | Cause of Death | Cohort Studies | Female | Humans | Japan | Male | Myocardial Infarction | Pneumonia | Proportional Hazards Models | Prospective Studies | Stroke | Walking [SUMMARY]
[CONTENT] Aged | Cause of Death | Cohort Studies | Female | Humans | Japan | Male | Myocardial Infarction | Pneumonia | Proportional Hazards Models | Prospective Studies | Stroke | Walking [SUMMARY]
[CONTENT] Aged | Cause of Death | Cohort Studies | Female | Humans | Japan | Male | Myocardial Infarction | Pneumonia | Proportional Hazards Models | Prospective Studies | Stroke | Walking [SUMMARY]
null
null
[CONTENT] mortality japanese participants | risk pneumonia elderly | pneumonia walked hours | association walking pneumonia | pneumonia mortality japanese [SUMMARY]
[CONTENT] mortality japanese participants | risk pneumonia elderly | pneumonia walked hours | association walking pneumonia | pneumonia mortality japanese [SUMMARY]
[CONTENT] mortality japanese participants | risk pneumonia elderly | pneumonia walked hours | association walking pneumonia | pneumonia mortality japanese [SUMMARY]
[CONTENT] mortality japanese participants | risk pneumonia elderly | pneumonia walked hours | association walking pneumonia | pneumonia mortality japanese [SUMMARY]
null
null
[CONTENT] participants | day | hours | walking | pneumonia | hours day | study | unknown | history | stroke [SUMMARY]
[CONTENT] participants | day | hours | walking | pneumonia | hours day | study | unknown | history | stroke [SUMMARY]
[CONTENT] participants | day | hours | walking | pneumonia | hours day | study | unknown | history | stroke [SUMMARY]
[CONTENT] participants | day | hours | walking | pneumonia | hours day | study | unknown | history | stroke [SUMMARY]
null
null
[CONTENT] unknown | hours | hours day | day | average | participants | study | statistical | day hours | day hours day [SUMMARY]
[CONTENT] participants | participants history | history | walked | mi | stroke | participants history mi | walked hour | hr | ci [SUMMARY]
[CONTENT] mortality elderly | risk pneumonia mortality | risk pneumonia mortality elderly | pneumonia mortality elderly | risk pneumonia | elderly | pneumonia mortality | mortality | pneumonia | risk [SUMMARY]
[CONTENT] pneumonia | hours | day | participants | unknown | walking | study | hours day | history | elderly [SUMMARY]
null
null
[CONTENT] 22,280 | Japanese | 9,067 | 13,213 | 65-79 years ||| 95% | CI [SUMMARY]
[CONTENT] 11.9 years | 1,203 ||| 1 hour | 0.90 | 95% | CI | 0.82 | 0.5 hours ||| 0.5 hours | 0.66 | 95% | CI | 0.48 ||| 0.6-0.9 hours | HR 0.65 | 95% | CI | 0.43-0.98 [SUMMARY]
[CONTENT] ≥1 hour [SUMMARY]
[CONTENT] between daily ||| 22,280 | Japanese | 9,067 | 13,213 | 65-79 years ||| 95% | CI ||| 11.9 years | 1,203 ||| 1 hour | 0.90 | 95% | CI | 0.82 | 0.5 hours ||| 0.5 hours | 0.66 | 95% | CI | 0.48 ||| 0.6-0.9 hours | HR 0.65 | 95% | CI | 0.43-0.98 ||| ≥1 hour [SUMMARY]
null
[Pancreas transplantation-clinic, technique, and histological assessment].
34415383
In Germany, pancreas transplants are performed in only a few selected, specialized centres, usually combined with a kidney transplant. Knowledge of the indications for and techniques of transplantation, as well as of the histopathological assessment of rejection in pancreas and duodenal biopsies, is not widespread.
BACKGROUND
A thorough literature search covering the history, technique, and indications for pancreas transplantation was performed and discussed in the context of the local experience and the technical particularities specific to the transplant centre in Bochum. The occurrence of complications was compared with international reports. Results of pancreas and duodenal biopsies submitted to Erlangen for histological evaluation between 06/2017 and 12/2020, assessed according to the Banff classification, were summarized. For better understanding, key histological findings of pancreas rejection and their differential diagnoses were illustrated and discussed.
MATERIAL AND METHODS
A total of 93 pancreas transplant specimens and 3 duodenal biopsies were included. 34.4% of the pancreas specimens did not contain representative material for a diagnosis. Of the remaining 61 biopsies, 24.6% showed no rejection, 62.3% were diagnosed with acute T-cell-mediated rejection (TCMR), and 8.2% showed signs suspicious for antibody-mediated rejection (ABMR). Acute acinar epithelial injury was seen in 59%, pancreatitis in 8.2%, and allograft fibrosis was reported in as many as 54.1%. Calcineurin-inhibitor toxicity was discussed in only 4.9%.
RESULTS
Pancreas-kidney transplantation and standardized histological assessment of the transplanted pancreas or, more rarely, the duodenum, reported according to the updated Banff classification for pancreas allografts or to previous publications on duodenal rejection, are important mainstays in the management of patients with diabetes.
CONCLUSION
[ "Biopsy", "Graft Rejection", "Humans", "Kidney", "Kidney Transplantation", "Pancreas Transplantation" ]
8390418
null
null
null
null
null
null
Conclusion for clinical practice
Biopsy of the transplanted pancreas or, in rare cases, of the donor duodenum, followed by standardized assessment according to the current internationally accepted Banff classification of pancreas allograft rejection and to the recommendations for evaluating duodenal biopsies, has an established role in the care of pancreas/kidney transplant recipients. As with other transplanted parenchymal organs, the biopsies are taken on clinical indication and are then processed and assessed in a standardized fashion, according to the currently valid classifications, in appropriately equipped pathology departments. This categorization of the morphological changes according to the Banff classification of pancreas transplant rejection is in turn linked to corresponding clinical management recommendations, which have been and are being tested in larger clinical trials and which carry corresponding probabilities of success.
[ "Simultane Pankreas‑/Nierentransplantation (SPK)", "Alleinige Pankreastransplantation (PTA)", "Pankreastransplantation nach erfolgter Nierentransplantation (PAK)", "Indikationen zur Pankreastransplantation", "Spenderkriterien", "Technik der Pankreastransplantation", "Komplikationen nach Pankreastransplantation (PTX)", "Transplantatthrombose", "Transplantatpankreatitis", "Blutung", "Akute Transplantatabstoßung", "Indikation und Technik der Pankreastransplantatbiopsie", "Indikationen", "Technik der Biopsieentnahme", "Ergebnisse und Verlauf nach Pankreastransplantation (Tab. 3)", "Aufarbeitung und Beurteilung von Pankreas- und Duodenal-Abstoßungsbiopsien", "Molekulare Marker der humoralen Pankreasabstoßung", "Häufige histomorphologische Befunde an Pankreastransplantatbiopsien (Abb. 4, 5, 6 und 7)", "Akute T-Zell- und akute antikörpervermittelte Transplantatabstoßung (Tab. 1, 4, 5 und 6)", "Nicht abstoßungsbedingte histomorphologische Veränderungen in Pankreasbiopsien nach Transplantation (Abb. 8)", "Erfahrungen der gemeinsamen PTX-Biopsiediagnostik Bochum-Erlangen (06/2017–12/2020, Tab. 6; Abb. 9)" ]
[ "Die klassische Indikation zur simultanen PNTX stellt der juvenile Typ I‑Diabetes mit terminaler Niereninsuffizienz dar. Eine kombinierte PNTX ist jedoch auch im Stadium der präterminalen Niereninsuffizienz möglich. So kann ein Patient ab Stadium 4 der chronischen Niereninsuffizienz (glomeruläre Filtrationsrate [GFR], < 30 ml/min) in die Warteliste für eine kombinierte PNTX aufgenommen werden. Diese präemptive Transplantation führt zu einer Reduktion der perioperativen Mortalität und verbessert das Langzeitüberleben der Patienten erheblich. In Europa erfolgt ein Großteil (89 %) der PTX zusammen mit einer Nierentransplantation [12]. Dabei stammen beide Organe vom selben Organspender.", "In seltenen Fällen kann bei Patienten mit extrem instabilem Diabetes mellitus Typ I (sog. Brittle-Diabetes) die Indikation zur alleinigen PTX gestellt werden. Hypoglykämie-Wahrnehmungsstörungen, das Vorliegen einer subkutanen Insulinresistenz, aber auch ein frühes Auftreten und rasches Fortschreiten diabetischer Spätschäden können eine alleinige PTX rechtfertigen. In diesen Fällen sollte die Nierenfunktion aufgrund der zu erwartenden Nephrotoxizität der Immunsuppressiva nicht oder nur gering beeinträchtigt sein. Trotz Verbesserungen der Ergebnisse ist die alleinige PTX keine prinzipielle Alternative zu konservativen Therapiemöglichkeiten. Die Indikation sollte streng gestellt werden und nur interdisziplinär, z. B. im Rahmen der Transplantationskonferenz, nachdem wichtige Alternativen (z. B. eine Insulinpumpentherapie) ausgeschöpft und selbstinduzierte Hypoglykämien bei psychischen Störungen ausgeschlossen wurden [13]. Es muss eine sorgfältige Abwägung des Operationsrisikos und der Nebenwirkungen der immunsuppressiven Therapie gegenüber den zu erwartenden Vorteilen einer Transplantation erfolgen.", "Nach stattgehabter Leichennieren- oder Lebendnierentransplantation eines Typ-I-Diabetikers kann eine PTX erfolgen. Bei dieser Art der Transplantation fehlt die immunologische Identität der Transplantate. Von Vorteil ist jedoch, dass die Immunsuppression zum Zeitpunkt der PTX bereits besteht. Weitere Vorteile dieser Transplantationsform sind eine reduzierte Wartezeit sowie eine geringere Mortalität im Vergleich zur simultanen PNTX [14].", "Entsprechend der aktuellen Richtlinie für die Wartelistenführung und die Organvermittlung zur PTX und kombinierten PNTX müssen folgende Kriterien für die Aufnahme in die Warteliste erfüllt werden [15]:\nVorhandensein von Autoantikörpern gegen Glutamatdecarboxylase (GAD) und/oder Inselzellen (ICA) und/oder Tyrosinphosphatase 2 (IA-2) und/oder Zinktransporter 8 (ZnT8) und/oder Insulin (IAA).\nDer Autoantikörpernachweis von GAD, IA‑2, ICA und ZnT8 kann zum Zeitpunkt der Listung oder in der Vergangenheit erfolgt sein. Der positive Befund für IAA ist nur dann akzeptabel, wenn das Nachweisdatum vor Beginn der Insulintherapie liegt. Daher muss für den Nachweis der IAA das Datum der Testung und der Beginn der Insulintherapie an Eurotransplant übermittelt werden. Können keine Autoantikörper detektiert werden, muss eine Betazelldefizienz nachgewiesen werden. Diese wird in der aktuellen Richtlinie definiert als:C‑Peptid vor Stimulation < 0,5 ng/ml mit einem Anstieg nach Stimulation von < 20 %, wenn parallel zu diesem Messzeitpunkt kein Blutzuckerwert vorliegt oderC‑Peptid vor Stimulation < 0,5 ng/ml mit einem gleichzeitig erhobenen Blutzuckerwert ≥ 70 mg/dl (bzw. 
≥ 3,9 mmol/l) oderC‑Peptid nach Stimulation < 0,8 ng/ml, mit einem gleichzeitig einhergehenden Blutzuckeranstieg auf ≥ 100 mg/dl (bzw. ≥ 5,6 mmol/l).\nC‑Peptid vor Stimulation < 0,5 ng/ml mit einem Anstieg nach Stimulation von < 20 %, wenn parallel zu diesem Messzeitpunkt kein Blutzuckerwert vorliegt oder\nC‑Peptid vor Stimulation < 0,5 ng/ml mit einem gleichzeitig erhobenen Blutzuckerwert ≥ 70 mg/dl (bzw. ≥ 3,9 mmol/l) oder\nC‑Peptid nach Stimulation < 0,8 ng/ml, mit einem gleichzeitig einhergehenden Blutzuckeranstieg auf ≥ 100 mg/dl (bzw. ≥ 5,6 mmol/l).\nAls Stimulationstests werden ein oraler Glukosetoleranztest (OGTT), ein Mixed-Meal-Toleranztest (MMTT) oder ein intravenöser oder subkutaner Glukagontest akzeptiert. Zusätzlich können bedrohliche und rasch fortschreitende diabetische Spätfolgen, das Syndrom der unbemerkten schweren Hypoglykämie, ein exzessiver Insulinbedarf oder fehlende Applikationswege für Insulin bei subkutaner Insulinresistenz eine Indikation zur isolierten PTX darstellen. Über die mögliche Aufnahme in die Warteliste entscheidet dann in jedem Einzelfall eine Auditgruppe bei der Vermittlungsstelle (Eurotransplant Pancreas Advisory Committee, EPAC).\nDie PTX bei Typ-2-Diabetikern wird weiterhin kontrovers diskutiert. Der Anteil der pankreastransplantierten Typ-2-Diabetiker liegt weltweit zwischen 1 % und 8 %, ist aber in den letzten Jahren kontinuierlich angestiegen [2]. Schlanke Typ-2-Diabetiker mit geringem Insulinbedarf und Niereninsuffizienz sowie Patienten mit Typ-MODY-Diabetes („maturity onset diabetes of the young“) profitieren in ähnlicher Weise von einer kombinierten PNTX wie Typ-1-Diabetiker [16]. Auch hier entscheidet das EPAC über die Aufnahme auf die Warteliste. Kontraindikationen für eine PTX stellen eine bestehende Malignomerkrankung, eine nicht sanierte akute oder chronische Infektion sowie eine schwere psychische Störung und Non-Adhärenz dar. Ebenfalls können ausgeprägte kardiovaskuläre Erkrankungen, soweit nicht therapierbar, eine Kontraindikation darstellen. Bei Patienten mit einem Body-Mass-Index (BMI) > 30 kg/m2 sollte nur in Ausnahmefällen eine PTX durchgeführt werden. Einerseits besteht eine Assoziation zwischen Übergewicht und der perioperativen Morbidität und Mortalität, andererseits kann der Transplantationserfolg mit Erreichen einer Insulinunabhängigkeit und Normoglykämie durch das bestehende Übergewicht gefährdet werden. Das Patientenalter allein stellt keine Kontraindikation zur PTX dar. Waren in den letzten Jahrzehnten noch viele Zentren sehr zurückhaltend bei der PTX über 50-jähriger Patienten, so konnte in den letzten Jahren, insbesondere in High-volume-Zentren, gezeigt werden, dass auch bei älteren Patienten durchaus gute Ergebnisse nach einer PTX erzielt werden können [17, 18]. Kardiovaskuläre Erkrankungen stellen mit Abstand den größten Komorbiditätsfaktor bei Patienten mit Diabetes mellitus Typ I und Niereninsuffizienz dar. Potenzielle Pankreastransplantatempfänger mit kardialer Anamnese, Auffälligkeiten in der nichtinvasiven Diagnostik (Echokardiografie, Myokardszintigrafie), einem Lebensalter > 50 Jahre oder bereits bestehender Dialysepflichtigkeit sollten vor einer PTX immer einer Koronarangiografie unterzogen werden. Ebenso sorgfältig sollten die peripheren Gefäßverhältnisse, insbesondere die Iliakalgefäße, hinsichtlich ihrer Anastomosierungsfähigkeit untersucht werden.", "Die Spenderkriterien sowie eine optimale Organentnahme und Organkonservierung sind mitentscheidend für den Erfolg einer PTX. 
Eine Vielzahl von Spenderparametern wurde hinsichtlich ihrer Bedeutung für das Ergebnis nach PTX untersucht. Wesentliche Faktoren sind das Spenderalter, BMI, Todesursache (traumatisch oder kardiovaskulär), Verweildauer auf der Intensivstation, erfolgte Reanimation, hypotensive Phasen und der Einsatz von Katecholaminen. Des Weiteren fließen Laborparameter wie Serumamylase und Serumlipase, Vorliegen einer Hypernatriämie, Blutzucker und Serumkreatinin in die Entscheidung über die Transplantabilität eines Spenderpankreas ein [19, 20]. Die Dauer der Ischämiezeit beeinflusst die Rate und Intensität von Pankreatitiden und somit die Funktionsrate nach Transplantation, während der Einfluss verschiedener Konservierungslösungen kontrovers diskutiert wird. Neben dem Vorliegen von Malignomen und Infektionskrankheiten gelten ein Diabetes mellitus, eine akute oder chronische Pankreatitis, vorausgegangene chirurgische Eingriffe am Pankreas und ein höhergradiges Pankreastrauma als Kontraindikationen beim Spender. Von einigen Autoren werden ein chronischer Alkoholabusus, eine intraabdominelle Sepsis sowie ein BMI > 35 kg/m2 ebenfalls als Kontraindikationen angesehen [20]. Viele Transplantationschirurgen akzeptieren keine Spenderpankreata mit Kalzifikationen, fortgeschrittener Organfibrose, infiltrativen Verfettungen, deutlichem Ödem sowie einer ausgeprägten Atherosklerose der Viszeral- und Beckenarterien des Spenders. Ebenso von Bedeutung ist die zu palpierende Konsistenz des Organs. Dabei ist die intraoperative Beurteilung des Pankreas durch einen erfahrenen Transplantationschirurgen von wesentlicher Bedeutung. Diese Kriterien erscheinen etwas subjektiv, sind jedoch in der Allokationsrealität Laborwerten und demografischen Daten des Spenders überlegen. Im Vergleich mit der Akzeptanz anderer Organe zur Transplantation ist die Ablehnungsrate potenzieller Pankreasspender sowohl in Europa als auch in den USA hoch. So wurden z. B. im Jahr 2019 nur 21 % der bei Eurotransplant gemeldeten Spenderpankreata transplantiert [21].", "Die klinische PTX unterlag zahlreichen Veränderungen und Modifikationen hinsichtlich der chirurgischen Technik. Auch heute werden diverse Techniken nebeneinander angewendet, ohne dass letztlich die Überlegenheit der einen oder anderen Methode nachgewiesen werden konnte. Allein die Tatsache, dass in einigen Zentren parallel verschiedene Techniken Verwendung finden, verdeutlicht diese Situation. Nach ihrer Einführung war die PTX mit einer sehr hohen peri- und postoperativen Morbidität und Mortalität behaftet, sodass sie viele Jahre als experimentelles Verfahren angesehen wurde. Vordergründig für die schlechten Ergebnisse zu dieser Zeit war das ungenügende Management der exokrinen Pankreassekretion. Dieses ist auch heute noch von zentraler Bedeutung, da die Freisetzung von aktivierten Pankreasenzymen, ähnlich wie bei der akuten Pankreatitis, zu gravierenden Gewebeschädigungen führen kann. Erst mit der von Sollinger 1983 [22] entwickelten Blasendrainagetechnik und der Einführung der Pankreasduodenaltransplantation durch Nghiem und Corry 1987 ist die PTX im Hinblick auf das Auftreten von Nahtinsuffizienzen und Pankreasfisteln sicherer geworden [23]. Bei dieser Technik wird das gesamte Spenderpankreas mit einem blindverschlossenen, kurzen Duodenalsegment transplantiert. Das exokrine Sekret wird dabei über eine in Seit-zu-Seit-Technik angelegte Duodenozystostomie in die Harnblase abgeleitet. 
Dieses Verfahren erlaubte durch Bestimmung von Amylase und Lipase im Urin ein Transplantatmonitoring, war jedoch durch das Auftreten schwerer metabolischer Azidosen, Dehydratationen, Harnblasen- und Harnröhrenentzündungen sowie Refluxpankreatitiden mit vielen Komplikationen behaftet. Mit der Verfügbarkeit einer besseren Immunsuppression und der damit verbundenen Reduktion immunologisch bedingter Transplantatverluste hat sich seit Mitte der 1990er-Jahre in den meisten Transplantationszentren die enterale Drainage in den Dünndarm durchgesetzt. Hierbei wird das exokrine Pankreassekret durch eine Seit-zu-Seit-Duodenojejunostomie oder eine Duodenoduodenostomie in den Darm abgeleitet. Auch Rekonstruktionen über eine nach Roux‑Y ausgeschaltete Dünndarmschlinge finden Anwendung. In den letzten Jahren wird von einigen Zentren eine retroperitoneale Platzierung des Pankreastransplantates favorisiert [24, 25]. Die endokrine Drainage oder venöse Ableitung des Pankreastransplantates kann systemisch-venös in die V. cava inferior oder portal-venös in die V. mesenterica superior erfolgen. Ob das technisch anspruchsvollere, aber physiologischere Verfahren der portal-venösen Drainage metabolische Vorteile hat, ist bisher nicht bewiesen. Beide Verfahren führen zu vergleichbaren Ergebnissen. Seit 2007 bevorzugen wir die retroperitoneale Positionierung des Transplantates unter Anlage einer Seit-zu-Seit-Duodenoduodenostomie mit portal-venöser oder systemisch-venöser Anastomosierung (Abb. 1). Der Vorteil der Duodenoduodenostomie besteht darin, dass sowohl die Dünndarmanastomose als auch der Pankreaskopf des Transplantates endoskopisch erreicht werden können. Somit wird es möglich, endoskopisch Biopsien zur Abstoßungsdiagnostik zu gewinnen. Des Weiteren kann im Falle einer intestinalen Blutung im Anastomosenbereich eine einfache endoskopische Intervention zur Blutstillung erfolgen. Mögliche Nachteile ergeben sich im Falle einer Anastomoseninsuffizienz oder eines Transplantatverlustes, da die resultierende Leckage im Bereich des Duodenums chirurgisch schwieriger zu versorgen ist. Neben vielen weiteren Vorteilen der retroperitonealen Positionierung ist das Pankreastransplantat besser sonografisch darstellbar und einfach perkutan zu punktieren.", "Transplantatthrombose Die Pankreastransplantatthrombose ist nach wie vor die häufigste Ursache für einen frühen Verlust des Pankreastransplantates. Bei einem plötzlichen Blutzuckeranstieg in der Frühphase nach Transplantation muss nach Ausschluss anderer Ursachen immer an eine Perfusionsstörung des Pankreastransplantates gedacht werden. Der Nachweis erfolgt in der Regel durch eine CT oder MRT des Abdomens. Die farbcodierte Duplexsonografie kann erste Hinweise auf Perfusionsstörungen ergeben. Im eigenen Zentrum hat sich in den letzten Jahren die kontrastmittelunterstützte Sonografie (CEUS) zum Perfusionsnachweis als extrem hilfreich erwiesen, da diese sofort auf Station im Patientenbett durchgeführt werden kann und kein potenziell nephrotoxisches Kontrastmittel appliziert werden muss. Bei Nachweis einer Thrombose im Bereich der Pfortader oder Milzvene sowie Thrombosen im Bereich des arteriellen Zuflusses (Y-Graft) ist eine sofortige Relaparotomie durchzuführen. In einigen Fällen ist es dabei gelungen, erfolgreich eine Thrombektomie durchzuführen und die Organfunktion zu erhalten. Auch Erfolge bei Lysetherapien wurden berichtet. 
Häufig bleibt einem jedoch nur die Entfernung des thrombosierten Transplantates übrig.\nTransplantatpankreatitis Das Entstehen einer Transplantatpankreatitis ist oftmals schon intraoperativ nach Reperfusion zu bemerken. Aber auch nach der Reperfusion unauffällig anmutende Transplantate können im Verlauf schwerwiegende Formen einer Pankreatitis entwickeln. In der Diagnostik ergänzen sich klinischer Befund, Laboruntersuchungen und Bildgebung. Bei der klinischen Untersuchung finden sich meist rechtsseitig betonte Bauchschmerzen ggf. mit Peritonismus. Zusätzlich können das Entstehen eines paralytischen Ileus, Fieber sowie eine Transplantatdysfunktion auf eine Pankreatitis hinweisen. Laborchemisch besteht häufig eine deutliche CRP-Erhöhung, die mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein kann. In der Bildgebung können alle Formen einer Pankreatitis imponieren. Diese reichen von einer leichten ödematösen Form bis hin zur nekrotisierenden Pankreatitis mit Perfusionsausfällen. Bei schweren, lebensbedrohlichen Verlaufsformen einer Transplantatpankreatitis kann die partielle oder komplette Entfernung des Transplantates notwendig werden, auch wenn eine gute Funktion vorliegt.\nBlutung Bei Blutungskomplikationen nach einer PTX muss zwischen intraluminaler, intestinaler Blutung im Bereich der Anastomose und einer Blutung im Bauchraum unterschieden werden. Die Möglichkeit der einfachen endoskopischen Blutstillung im Bereich der Duodenoduodenostomie ist einer der großen Vorteile dieser Anastomosierungstechnik. Aber auch bei Anastomosen im Bereich der 1. und 2. Jejunalschlinge und intraperitonealer Positionierung des Pankreastransplantates ist über eine Push-Intestinoskopie die Anastomose zu erreichen, wobei sich dies technisch schwieriger gestaltet. Intraabdominelle Blutungen nach PTX sind kein seltenes Ereignis, zumal ein Großteil der Patienten eine postoperative Heparintherapie erhält. Kleinere Blutungen können oftmals konservativ beherrscht werden, indem die Antikoagulantientherapie pausiert wird und plasmatische Gerinnungsfaktoren optimiert werden. Hämodynamisch relevante Blutungen und ein rezidivierender Transfusionsbedarf zeigen die Indikation zur Relaparotomie an. Das Spektrum solcher Nachblutungen reicht von kleinen diffusen Blutungen aus dem Transplantatpankreas (häufig an der Mesenterialwurzel) oder kleineren Blutungen an den peritonealen Inzisionslinien bis hin zu Katastrophenblutungen aus der unteren Hohlvene und der Beckenarterie. Bei Arrosionsblutungen im Bereich der Beckenarterie kann in manchen Fällen das endovaskuläre Stenting der Defektzone die rettende Maßnahme für den Patienten sein.\nAkute Transplantatabstoßung Ein Pankreastransplantat kann Ziel der allogenen Immunität (Abstoßung) und/oder der Autoimmunität (Rekurrenz der Grunderkrankung) sein. Amylase- und Lipaseerhöhung im Serum sowie ein Anstieg der Blutzuckerwerte sind häufig die einzigen, aber unspezifischen Zeichen einer Abstoßung. Im Falle einer kombinierten PNTX ist die histologische Diagnose aus dem Nierentransplantat oftmals ausschlaggebend. 
Es konnte aber auch gezeigt werden, dass eine nicht unerhebliche Diskordanz beim Auftreten von Rejektionen und deren Schweregrad in zeitgleich entnommenen Biopsien aus Pankreas- und Nierentransplantat besteht. Uva et al. [26] beschrieben lediglich in 40 % der Fälle mit Abstoßungen ein zeitgleiches Auftreten der Rejektion in beiden Organen. In einer Studie von Parajuli et al. [27] lag die Diskordanzrate für Abstoßungen bei 37,5 %. Bei den 62,5 % der Fälle mit konkordanten Befunden in beiden Organen wurden wiederum unterschiedliche Abstoßungstypen gefunden. Im eigenen Zentrum konnten wir nach zeitgleicher Punktion von Pankreas und Niere eine Konkordanz für das Auftreten bzw. Ausschluss einer Rejektion in beiden Organen von 68,4 % feststellen. Bei einer isolierten PTX oder einer PAK-Transplantation, bei welcher die Spenderorgane nicht HLA-identisch sind, spielt die Pankreasbiopsie eine noch wichtigere Rolle. Die histopathologische Begutachtung ist aufwendig und schwierig. Aufgrund der an sich schon geringen Fallzahlen der PTX und der noch seltener durchgeführten Pankreastransplantatbiopsien ist vielerorts nur begrenzte oder keine Erfahrung vorhanden, sodass wie bei anderen Spezialuntersuchungen die Biopsien meist überregional verschickt und begutachtet werden. Für die Beurteilung von Pankreastransplantatbiopsien existiert, ähnlich wie bei der NTX, seit vielen Jahren ein international anerkanntes und ständig aktualisiertes Banff-Schema zur Graduierung der Abstoßung (Tab. 1; [28–30]).\n1. Normal\n2. Unklar\n3. Akute T‑Zell-vermittelte Abstoßung (TCMR)\nGrad I/mild: aktive septale Entzündung mit Venulitis und/oder Duktitis und/oder fokale azinäre Entzündung (≤ 2 Foci pro Läppchen) ohne oder nur mit minimalem Azinuszellschaden\nGrad II/moderat (erfordert eine Abgrenzung zur ABMR): multifokale (aber nicht konfluente oder diffuse) azinäre Entzündung (≥ 3 Foci pro Läppchen) mit vereinzeltem, individuellem Azinuszellschaden und/oder milde intimale Arteriitis (< 25 % des Lumens)\nGrad III/schwer (erfordert eine Abgrenzung zur ABMR): diffuse azinäre Entzündung mit fokaler oder diffuser, multizellulärer/konfluierender Azinuszellnekrose und/oder moderate oder schwere intimale Arteriitis (> 25 % des Lumens) und/oder transmurale Entzündung – nekrotisierende Arteriitis\n4. 
Akute/aktive Antikörper-vermittelte Abstoßung (ABMR)\na) morphologischer Nachweis eines akuten Gewebeschadens\nGrad I (milde akute ABMR): Gut erhaltene Architektur, milde interazinäre Monozyten‑/Makrophagen- oder gemischte (Monozyten‑/Makrophagen/neutrophile) Infiltrate mit seltenem Azinuszellschaden (Schwellung, Nekrose)\nGrad II (moderate akute ABMR): Insgesamt erhaltene Architektur mit interazinären Monozyten-Makrophagen-Infiltraten oder gemischte (Monozyten-/Makrophagen/neutrophile) Infiltrate, Dilatation der Kapillaren, interazinäre Kapillaritis, intimale Arteriitis, Stauung, multizellulärer Azinuszellverlust und Erythrozytenextravasation\nGrad III (schwere akute ABMR): Architekturstörung, verstreute Entzündungsinfiltrate bei interstitiellen Einblutungen, multifokaler oder konfluierender Parenchymnekrose, arterieller oder venöser Gefäßwandnekrose, transmuraler/nekrotisierender Arteriitis und Thrombose (in Abwesenheit anderer offensichtlicher Ursachen)\nb) C4d-Positivität der interazinären Kapillaren (IAC) ≥ 1 % der Azinusläppchenoberfläche\nc) Bestätigte donorspezifische Antikörper (DSA)\nAbschließende Einordnung:\n1 von 3 diagnostischen Kriterien: benötigt Ausschluss einer ABMR\n2 von 3 diagnostischen Kriterien: ABMR muss erwogen werden\n3 von 3 diagnostischen Kriterien: definitive Diagnose einer ABMR\n5. Chronisch-aktive ABMR\nKategorie 3 und/oder 4 mit chronischer Arteriopathie und/oder Kategorie 6. Spezifikation, ob TCMR, ABMR oder gemischt\n6. Chronische Arteriopathie\nFibrointimale arterielle Verbreiterung mit Lumeneinengung:\ninaktiv: fibrointimal\naktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen)\nGrad 0: keine Lumeneinengung\nGrad 1: mild, ≤ 25 % Lumeneinengung\nGrad 2: moderat, 26–50 % Lumeneinengung\nGrad 3: schwer, ≥ 50 % Lumeneinengung\n7. Chronische Transplantatfibrose\nGrad I (milde Transplantatfibrose): < 30 %\nGrad II (moderate Transplantatfibrose): 30–60 %\nGrad III (schwere Transplantatfibrose): > 60 %\n8. Inselpathologie: Rekurrenz des autoimmunen Diabetes mellitus (Insulitis und/oder selektiver Betazellverlust), Amyloidablagerungen, Calcineurininhibitor-Toxizität der Inselzellen\n9. Andere histologische Veränderungen, welche nicht einer akuten und/oder chronischen Abstoßung zugeordnet werden, wie z. B. CMV-Pankreatitis, posttransplantations-lymphoproliferative Erkrankung\nDie Therapie der akuten Abstoßung eines Pankreastransplantates unterscheidet sich nicht wesentlich von der Abstoßungstherapie anderer transplantierter Organe. Bei milden akuten T‑zellulären Abstoßungen (Banff Grad I) erfolgt auch nach PTX eine Steroidbolustherapie mit z. B. 3 × 500 mg Prednisolon i.v. und eine Intensivierung der Basisimmunsuppression. Bei höhergradigen zellulären Abstoßungen und Rezidivabstoßungen erfolgen Therapien mit T‑Zell-depletierenden Antikörpern (ATG), bei humoraler Abstoßungskomponente auch in Kombination mit einer Anti-B-Zell-Therapie (z. B. Rituximab), Plasmapheresebehandlungen und die Gabe von intravenösen Immunglobulinen (IVIG) [31].
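Zur Veranschaulichung der oben aufgeführten abschließenden Einordnung der akuten/aktiven ABMR (1, 2 bzw. 3 von 3 erfüllten diagnostischen Kriterien) folgt eine rein illustrative Python-Skizze; sie ist kein Bestandteil der Banff-Klassifikation oder eines klinischen Workflows, Funktions- und Parameternamen sind frei gewählt, und der Fall ohne erfülltes Kriterium wurde vereinfachend ergänzt.

def abmr_einordnung(gewebeschaden: bool, c4d_positiv: bool, dsa_bestaetigt: bool) -> str:
    # Illustrative Umsetzung der Banff-Regel "1/2/3 von 3 Kriterien" zur akuten/aktiven ABMR:
    # a) morphologischer Nachweis eines akuten Gewebeschadens,
    # b) C4d-Positivität der interazinären Kapillaren (≥ 1 %),
    # c) bestätigte donorspezifische Antikörper (DSA).
    erfuellt = sum([gewebeschaden, c4d_positiv, dsa_bestaetigt])
    if erfuellt == 3:
        return "definitive Diagnose einer ABMR"
    if erfuellt == 2:
        return "ABMR muss erwogen werden"
    if erfuellt == 1:
        return "benötigt Ausschluss einer ABMR"
    return "kein ABMR-Kriterium erfüllt"  # vereinfachende Ergänzung, nicht Teil von Tab. 1

# Beispiel: C4d-positiv und DSA nachweisbar, morphologischer Schaden (noch) nicht gesichert
print(abmr_einordnung(gewebeschaden=False, c4d_positiv=True, dsa_bestaetigt=True))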
", "Die Pankreastransplantatthrombose ist nach wie vor die häufigste Ursache für einen frühen Verlust des Pankreastransplantates. 
Bei einem plötzlichen Blutzuckeranstieg in der Frühphase nach Transplantation muss nach Ausschluss anderer Ursachen immer an eine Perfusionsstörung des Pankreastransplantates gedacht werden. Der Nachweis erfolgt in der Regel durch eine CT oder MRT des Abdomens. Die farbcodierte Duplexsonografie kann erste Hinweise auf Perfusionsstörungen ergeben. Im eigenen Zentrum hat sich in den letzten Jahren die kontrastmittelunterstützte Sonografie (CEUS) zum Perfusionsnachweis als extrem hilfreich erwiesen, da diese sofort auf Station im Patientenbett durchgeführt werden kann und kein potenziell nephrotoxisches Kontrastmittel appliziert werden muss. Bei Nachweis einer Thrombose im Bereich der Pfortader oder Milzvene sowie Thrombosen im Bereich des arteriellen Zuflusses (Y-Graft) ist eine sofortige Relaparotomie durchzuführen. In einigen Fällen ist es dabei gelungen, erfolgreich eine Thrombektomie durchzuführen und die Organfunktion zu erhalten. Auch Erfolge bei Lysetherapien wurden berichtet. Häufig bleibt einem jedoch nur die Entfernung des thrombosierten Transplantates übrig.", "Das Entstehen einer Transplantatpankreatitis ist oftmals schon intraoperativ nach Reperfusion zu bemerken. Aber auch nach der Reperfusion unauffällig anmutende Transplantate können im Verlauf schwerwiegende Formen einer Pankreatitis entwickeln. In der Diagnostik ergänzen sich klinischer Befund, Laboruntersuchungen und Bildgebung. Bei der klinischen Untersuchung finden sich meist rechtsseitig betonte Bauchschmerzen ggf. mit Peritonismus. Zusätzlich können das Entstehen eines paralytischen Ileus, Fieber sowie eine Transplantatdysfunktion auf eine Pankreatitis hinweisen. Laborchemisch besteht häufig eine deutliche CRP-Erhöhung, die mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein kann. In der Bildgebung können alle Formen einer Pankreatitis imponieren. Diese reichen von einer leichten ödematösen Form bis hin zur nekrotisierenden Pankreatitis mit Perfusionsausfällen. Bei schweren, lebensbedrohlichen Verlaufsformen einer Transplantatpankreatitis kann die partielle oder komplette Entfernung des Transplantates notwendig werden, auch wenn eine gute Funktion vorliegt.", "Bei Blutungskomplikationen nach einer PTX muss zwischen intraluminaler, intestinaler Blutung im Bereich der Anastomose und einer Blutung im Bauchraum unterschieden werden. Die Möglichkeit der einfachen endoskopischen Blutstillung im Bereich der Duodenoduodenostomie ist einer der großen Vorteile dieser Anastomosierungstechnik. Aber auch bei Anastomosen im Bereich der 1. und 2. Jejunalschlinge und intraperitonealer Positionierung des Pankreastransplantates ist über eine Push-Intestinoskopie die Anastomose zu erreichen, wobei sich dies technisch schwieriger gestaltet. Intraabdominelle Blutungen nach PTX sind kein seltenes Ereignis, zumal ein Großteil der Patienten eine postoperative Heparintherapie erhält. Kleinere Blutungen können oftmals konservativ beherrscht werden, indem die Antikoagulantientherapie pausiert wird und plasmatische Gerinnungsfaktoren optimiert werden. Hämodynamisch relevante Blutungen und ein rezidivierender Transfusionsbedarf zeigen die Indikation zur Relaparotomie an. Das Spektrum solcher Nachblutungen reicht von kleinen diffusen Blutungen aus dem Transplantatpankreas (häufig an der Mesenterialwurzel) oder kleineren Blutungen an den peritonealen Inzisionslinien bis hin zu Katastrophenblutungen aus der unteren Hohlvene und der Beckenarterie. 
Bei Arrosionsblutungen im Bereich der Beckenarterie kann in manchen Fällen das endovaskuläre Stenting der Defektzone die rettende Maßnahme für den Patienten sein.", "Ein Pankreastransplantat kann Ziel der allogenen Immunität (Abstoßung) und/oder der Autoimmunität (Rekurrenz der Grunderkrankung) sein. Amylase- und Lipaseerhöhung im Serum sowie ein Anstieg der Blutzuckerwerte sind häufig die einzigen, aber unspezifischen Zeichen einer Abstoßung. Im Falle einer kombinierten PNTX ist die histologische Diagnose aus dem Nierentransplantat oftmals ausschlaggebend. Es konnte aber auch gezeigt werden, dass eine nicht unerhebliche Diskordanz beim Auftreten von Rejektionen und deren Schweregrad in zeitgleich entnommenen Biopsien aus Pankreas- und Nierentransplantat besteht. Uva et al. [26] beschrieben lediglich in 40 % der Fälle mit Abstoßungen ein zeitgleiches Auftreten der Rejektion in beiden Organen. In einer Studie von Parajuli et al. [27] lag die Diskordanzrate für Abstoßungen bei 37,5 %. Bei den 62,5 % der Fälle mit konkordanten Befunden in beiden Organen wurden wiederum unterschiedliche Abstoßungstypen gefunden. Im eigenen Zentrum konnten wir nach zeitgleicher Punktion von Pankreas und Niere eine Konkordanz für das Auftreten bzw. Ausschluss einer Rejektion in beiden Organen von 68,4 % feststellen. Bei einer isolierten PTX oder einer PAK-Transplantation, bei welcher die Spenderorgane nicht HLA-identisch sind, spielt die Pankreasbiopsie eine noch wichtigere Rolle. Die histopathologische Begutachtung ist aufwendig und schwierig. Aufgrund der an sich schon geringen Fallzahlen der PTX und der noch seltener durchgeführten Pankreastransplantatbiopsien ist vielerorts nur begrenzte oder keine Erfahrung vorhanden, sodass wie bei anderen Spezialuntersuchungen die Biopsien meist überregional verschickt und begutachtet werden. Für die Beurteilung von Pankreastransplantatbiopsien existiert, ähnlich wie bei der NTX, seit vielen Jahren ein international anerkanntes und ständig aktualisiertes Banff-Schema zur Graduierung der Abstoßung (Tab. 1; [28–30]).1. Normal2. Unklar3. Akute T‑Zell-vermittelte Abstoßung (TCMR)Grad I/mild: aktive septale Entzündung mit Venulitis und/oder Duktitis und/oder fokale azinäre Entzündung (≤ 2 Foci pro Läppchen) ohne oder nur mit minimalem AzinuszellschadenGrad II/moderat (erfordert eine Abgrenzung zur ABMR): multifokale (aber nicht konfluente oder diffuse) azinäre Entzündung (≥ 3 Foci pro Läppchen) mit vereinzeltem, individuellem Azinuszellschaden und/oder milde intimale Arteriitis (< 25 % des Lumens)Grad III/schwer (erfordert eine Abgrenzung zur ABMR): diffuse azinäre Entzündung mit fokaler oder diffuser, multizellulärer/konfluierender Azinuszellnekrose und/oder moderate oder schwere intimale Arteriitis (> 25 % des Lumens) und/oder transmurale Entzündung – nekrotisierende Arteriitis4. 
Akute/aktive Antikörper-vermittelter Abstoßung (ABMR)a) morphologischer Nachweis eines akuten GewebeschadensGrad I (milde akute ABMR): Gut erhaltene Architektur, milde interazinäre Monozyten‑/Makrophagen- oder gemischte (Monozyten‑/Makrophagen/neutrophile) Infiltrate mit seltenem Azinuszellschaden (Schwellung, Nekrose)Grad II (moderate akute ABMR): Insgesamt erhaltene Architektur mit interazinären Monozyten-Makrophagen-Infiltraten oder gemischte (Monozyten-/Makrophagen/neutrophile) Infiltrate, Dilatation der Kapillaren, interazinäre Kapillaritis, intimale Arteriitis, Stauung, multizellulärer Azinuszellverlust und ErythrozytenextravasationGrad III (schwere akute ABMR): Architekturstörung, verstreute Entzündungsinfiltrate bei interstitiellen Einblutungen, multifokaler oder konfluierender Parenchymnekrose, arterieller oder venöser Gefäßwandnekrose, transmuraler/nekrotisierender Arteriitis und Thrombose (in Abwesenheit anderer offensichtlicher Ursachen)b) C4d-Positivität der interazinären Kapillaren (IAC) ≥ 1 % der Azinusläppchenoberflächec) Bestätigte donorspezifische Antikörper (DSA)Abschließende Einordnung:1 von 3 diagnostischen Kriterien: benötigt Ausschluss einer ABMR2 von 3 diagnostischen Kriterien: ABMR muss erwogen werden3 von 3 diagnostischen Kriterien: definitive Diagnose einer ABMR5. Chronisch-aktive ABMRKategorie 3 und/oder 4 mit chronischer Arteriopathie und/oder Kategorie 6. Spezifikation, ob TCMR, ABMR oder gemischt6. Chronische ArteriopathieFibrointimale arterielle Verbreitung mit Lumeneinengung:inaktiv: fibrointimalaktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen)Grad 0: keine LumeneinengungGrad 1: mild, ≤ 25 % LumeneinengungGrad 2: moderat, 26–50 % LumeneinengungGrad 3: schwer, ≥ 50 % Lumeneinengung7. Chronische TransplantatfibroseGrad I (milde Transplantatfibrose): < 30 %Grad II (moderate Transplantatfibrose): 30–60 %Grad III (schwere Transplantatfibrose): > 60 %8. Inselpathologie: Rekurrenz des autoimmunen Diabetes mellitus (Insulitis und/oder selektiver Betazellverlust), Amyloidablagerungen, Calcineurininhibitor-Toxizität der Inselzellen9. Andere histologische Veränderungen, welche nicht einer akuten und/oder chronischen Abstoßung zugeordnet werden, wie z. B. CMV-Pankreatitis, posttransplantations-lymphoproliferative Erkrankung\n6. Chronische Arteriopathie\nFibrointimale arterielle Verbreitung mit Lumeneinengung:\ninaktiv: fibrointimal\naktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen)\nDie Therapie der akuten Abstoßung eines Pankreastransplantates unterscheidet sich nicht wesentlich von der Abstoßungstherapie anderer transplantierter Organe. Bei milden akuten T‑zellulären Abstoßungen (Banff Grad I) erfolgt auch nach PTX eine Steroidbolustherapie mit z. B. 3 × 500 mg Prednisolon i.v. und eine Intensivierung der Basisimmunsuppression. Bei höhergradigen zellulären Abstoßungen und Rezidivabstoßungen erfolgen Therapien mit T‑Zell-depletierenden Antikörpern (ATG), bei humoraler Abstoßungskomponente auch in Kombination mit einer Anti-B-Zell-Therapie (z. B. Rituximab), Plasmapheresebehandlungen und die Gabe von intravenösen Immunglobulinen (IVIG) [31].", "Indikationen Nach einer PTX kann eine Biopsie aus verschiedenen Indikationen heraus notwendig werden. Neben der Durchführung von Indikationsbiopsien aufgrund einer Störung der Transplantatfunktion oder immunologischen Vorgängen werden von einigen Zentren auch sog. Protokollbiopsien, d. h. 
routinemäßige Entnahmen von Biopsien nach einem definierten Schema entnommen. Störungen der Pankreastransplantatfunktion äußern sich klinisch am häufigsten durch erhöhte Blutzuckerwerte. Diese können mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein. Differenzialdiagnostisch müssen dabei verschiedene Ursachen in Betracht gezogen werden [31–33]: eine primäre Nichtfunktion oder verzögerte Funktionsaufnahme des Transplantates, eine Transplantatpankreatitis, eine Thrombose, eine akute oder chronische Rejektion, eine medikationsbedingte Hyperglykämie (Steroide, Tacrolimus), Infektionen, eine Rekurrenz der Grunderkrankung, eine Toxizität von Calcineurininhibitoren (CNI) oder die Manifestation eines Diabetes mellitus Typ 2. Einen Überblick über häufige Indikationen zur Biopsie des Pankreastransplantates gibt Tab. 2.\nIndikation:\nStörungen der endokrinen Funktion: Hyperglykämie, Gestörte Glukosetoleranz, Erhöhter HbA1c-Wert\nStörungen der exokrinen Funktion: Erhöhte Amylase- und Lipasewerte\nHumorale Abstoßung: Donorspezifische Antikörper (DSA)\nRekurrenz der Grunderkrankung: Diabetes mellitus Typ 1 assoziierte Autoantikörper\nProtokollbiopsien: Protokoll\nKontrollbiopsien: Nach z. B. stattgehabter Abstoßungstherapie\nTechnik der Biopsieentnahme Die Pankreastransplantatbiopsie kann über verschiedene Zugangswege gewonnen werden. Dazu muss zwischen operativen, endoskopischen und perkutanen Techniken unterschieden werden. Bei den operativen Verfahren stellt die Laparotomie die invasivste Variante dar. Biopsien können jedoch im Rahmen von Laparotomien aus anderen Gründen einfach simultan als Punktions- oder Exzisionsbiopsie gewonnen werden (Abb. 2). Bei intraperitonealer Lage des Transplantates und damit häufiger Überlagerung des Transplantates von Dünndarmschlingen wird zunehmend die laparoskopisch assistierte Pankreasbiopsie angewendet. Limitiert wird das laparoskopische Vorgehen in einigen Fällen durch postoperative Verwachsungen, die eine Zugänglichkeit des Transplantates erschweren und das Risiko einer perioperativen Komplikation erhöhen [34]. 
Die früher häufig und heutzutage nur noch selten durchgeführte Blasendrainage des Pankreastransplantates, bei der das Spenderduodenum mit der Harnblase anastomosiert wird, ermöglicht eine einfache zystoskopische Biopsie aus dem Spenderduodenum oder Kopf des Pankreastransplantates [35]. Die histopathologische Aufarbeitung der Biopsien vom Spenderduodenum und Pankreastransplantat ergab jedoch Diskrepanzen im Grad der Abstoßung. Hierbei konnte das Spenderduodenum unabhängig vom Pankreastransplantat eine Abstoßung aufweisen und umgekehrt [36]. Bei der heute hauptsächlich verwendeten Dünndarmdrainagetechnik ist die Entnahme einer Biopsie aus dem Spenderduodenum im Rahmen einer Ösophagogastroduodenoskopie oder Push-Intestinoskopie möglich [36, 37]. Auch hier konnten deutliche Diskordanzen beim histologischen Nachweis einer Abstoßung im Spenderduodenum und Pankreastransplantat beobachtet werden [35, 36]. Mittels Endosonografie lassen sich sowohl aus dem Spenderduodenum als auch aus dem Pankreastransplantat Feinnadelbiopsien entnehmen, wobei die gewonnene Gewebemenge oftmals eine aussagekräftige Diagnose nicht erlaubt.\nIm eigenen Zentrum hat sich im letzten Jahrzehnt die perkutane Pankreastransplantatbiopsie durchgesetzt. Die mit diesem Verfahren gewonnenen Gewebestanzen erlauben, ähnlich wie bei der Nierentransplantatbiopsie, eine aussagekräftige histopathologische Diagnosestellung. Die perkutane Biopsie erfolgt in der Regel in Lokalanästhesie unter sonografischer oder CT-gesteuerter Kontrolle. Bei beiden Methoden wird das Parenchym des Pankreastransplantates mit einer automatisierten 16G- oder 18G-Tru-Cut®-Nadel punktiert. Dabei werden 2–3 Punktionszylinder entnommen. Je nach Anatomie bieten sich hierbei verschiedene Zugangswege an (Abb. 3). Die Schwierigkeit in der genauen Punktionslokalisation liegt bei der CT darin, dass diese nativ durchgeführt wird und sowohl das Parenchym als auch die Transplantatgefäße schlecht visualisiert werden. Im Vorfeld angefertigte CT- oder MRT-Bilder sind daher hilfreich. In der Literatur finden sich Raten von bis zu 30 %, in denen bioptisch kein Pankreasgewebe gewonnen werden konnte [38]. In den letzten Jahren konnten wir im eigenen Vorgehen unter Nutzung der kontrastmittelgestützten Sonografie (CEUS) die Trefferquote erhöhen. In jedem Fall ist die Zusammenarbeit zwischen Transplantationschirurgen und Radiologen essenziell, um die Treffsicherheit zu erhöhen und die Komplikationsrate gering zu halten. Häufigste Komplikation nach Biopsie ist die Transplantatpankreatitis. In seltenen Fällen kann es zu Blutungen sowie Abszess- oder Fistelentstehungen kommen. Die Notwendigkeit eines chirurgischen Eingriffes als Folge einer perkutanen Biopsie ist insgesamt sehr selten [33, 39].
", "Nach einer PTX kann eine Biopsie aus verschiedenen Indikationen heraus notwendig werden. Neben der Durchführung von Indikationsbiopsien aufgrund einer Störung der Transplantatfunktion oder immunologischen Vorgängen werden von einigen Zentren auch sog. Protokollbiopsien, d. h. routinemäßige Entnahmen von Biopsien nach einem definierten Schema entnommen. Störungen der Pankreastransplantatfunktion äußern sich klinisch am häufigsten durch erhöhte Blutzuckerwerte. Diese können mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein. 
Differenzialdiagnostisch müssen dabei verschiedene Ursachen in Betracht gezogen werden [31–33]: eine primäre Nichtfunktion oder verzögerte Funktionsaufnahme des Transplantates, eine Transplantatpankreatitis, eine Thrombose, eine akute oder chronische Rejektion, eine medikationsbedingte Hyperglykämie (Steroide, Tacrolimus), Infektionen, eine Rekurrenz der Grunderkrankung, eine Toxizität von Calcineurininhibitoren (CNI) oder die Manifestation eines Diabetes mellitus Typ 2. Einen Überblick über häufige Indikationen zur Biopsie des Pankreastransplantates gibt Tab. 2.IndikationStörungen der endokrinen FunktionHyperglykämieGestörte GlukosetoleranzErhöhter HbA1c-WertStörungen der exokrinen FunktionErhöhte Amylase- und LipasewerteHumorale AbstoßungDonorspezifische Antikörper (DSA)Rekurrenz der GrunderkrankungDiabetes mellitus Typ 1 assoziierte AutoantikörperProtokollbiopsienProtokollKontrollbiopsienNach z. B. stattgehabter Abstoßungstherapie", "Die Pankreastransplantatbiopsie kann über verschiedene Zugangswege gewonnen werden. Dazu muss zwischen operativen, endoskopischen und perkutanen Techniken unterschieden werden. Bei den operativen Verfahren stellt die Laparotomie die invasivste Variante dar. Biopsien können jedoch im Rahmen von Laparotomien aus anderen Gründen einfach simultan als Punktions- oder Exzisionsbiopsie gewonnen werden (Abb. 2). Bei intraperitonealer Lage des Transplantates und damit häufiger Überlagerung des Transplantates von Dünndarmschlingen wird zunehmend die laparoskopisch assistierte Pankreasbiopsie angewendet. Limitiert wird das laparoskopische Vorgehen in einigen Fällen durch postoperative Verwachsungen, die eine Zugänglichkeit des Transplantates erschweren und das Risiko einer perioperativen Komplikation erhöhen [34]. Die früher häufig und heutzutage nur noch selten durchgeführte Blasendrainage des Pankreastransplantates, bei der das Spenderduodenum mit der Harnblase anastomosiert wird, ermöglicht eine einfache zystoskopische Biopsie aus dem Spenderduodenum oder Kopf des Pankreastransplantates [35]. Die histopathologische Aufarbeitung der Biopsien vom Spenderduodenum und Pankreastransplantat ergaben jedoch Diskrepanzen im Grad der Abstoßung. Hierbei konnte das Spenderduodenum unabhängig vom Pankreastransplantat eine Abstoßung aufweisen und umgekehrt [36]. Bei der heute hauptsächlich verwendeten Dünndarmdrainagetechnik ist die Entnahme einer Biopsie aus dem Spenderduodenum im Rahmen einer Ösophagogastroduodenoskopie oder Push-Intestinoskopie möglich [36, 37]. Auch hier konnten deutliche Diskordanzen beim histologischen Nachweis einer Abstoßung im Spenderduodenum und Pankreastransplantat beobachtet werden [35, 36]. Mittels Endosonografie lassen sich sowohl aus dem Spenderduodenum als auch aus dem Pankreastransplantat Feinnadelbiopsien entnehmen, wobei die gewonnene Gewebemenge oftmals eine aussagekräftige Diagnose nicht erlaubt.\nIm eigenen Zentrum hat sich im letzten Jahrzehnt die perkutane Pankreastransplantatbiopsie durchgesetzt. Die mit diesem Verfahren gewonnenen Gewebestanzen erlauben, ähnlich wie bei der Nierentransplantatbiopsie, eine aussagekräftige histopathologische Diagnosestellung. Die perkutane Biopsie erfolgt in der Regel in Lokalanästhesie unter sonografischer oder CT-gesteuerter Kontrolle. Bei beiden Methoden wird das Parenchym des Pankreastransplantates mit einer automatisierten 16G- oder 18G-Tru-Cut®-Nadel punktiert. Dabei werden 2–3 Punktionszylinder entnommen. Je nach Anatomie bieten sich hierbei verschiedene Zugangswege an (Abb. 3). 
Die Schwierigkeit in der genauen Punktionslokalisation liegt bei der CT darin, dass diese nativ durchgeführt wird und sowohl das Parenchym als auch die Transplantatgefäße schlecht visualisiert werden. Im Vorfeld angefertigte CT- oder MRT-Bilder sind daher hilfreich. In der Literatur finden sich Raten von bis zu 30 %, in denen bioptisch kein Pankreasgewebe gewonnen werden konnte [38]. In den letzten Jahren konnten wir im eigenen Vorgehen unter Nutzung der kontrastmittelgestützten Sonografie (CEUS) die Trefferquote erhöhen. In jedem Fall ist die Zusammenarbeit zwischen Transplantationschirurgen und Radiologen essenziell, um die Treffsicherheit zu erhöhen und die Komplikationsrate gering zu halten. Häufigste Komplikation nach Biopsie ist die Transplantatpankreatitis. In seltenen Fällen kann es zu Blutungen sowie Abszess- oder Fistelentstehungen kommen. Die Notwendigkeit eines chirurgischen Eingriffes als Folge einer perkutanen Biopsie ist insgesamt sehr selten [33, 39].", "Im Vergleich zu den US-amerikanischen Spenderdaten sind deutsche Pankreasspender signifikant älter und der Anteil zerebrovaskulärer Todesursachen ist höher. Trotz dieser Tatsache konnten in den letzten Jahren sehr gute Ergebnisse nach PTX in deutschen Zentren erzielt werden [40]. Mit einem 1‑Jahres-Patienten- und 1‑Jahres-Transplantat-Überleben von 92 % und 83 % stehen diese den internationalen Ergebnissen in nichts nach (Tab. 3). Die Überlebensraten sind für die verschiedenen Formen der PTX nicht direkt vergleichbar, da es sich um Patientenkollektive mit unterschiedlicher Morbidität handelt (z. B. urämische vs. nichturämische Patienten). Die besten Ergebnisse werden dabei nach kombinierter SPK erzielt, mit einem Pankreastransplantatüberleben von 89 %, 71 % und 57 % nach 1, 5 und 10 Jahren. Für die gleichen Zeiträume liegen die Ergebnisse nach alleiniger PTA bei 84 %, 52 % und 38 % [2]. Die Transplantat-Halbwertszeiten für Pankreata (50 % Funktionsrate) werden mit 14 Jahren (SPK) und 7 Jahren (PAK, PTA) angegeben [2]. Durch die langfristige Normalisierung des Glukosestoffwechsels kommt es zu einer signifikanten Senkung der Mortalität, welche bei der SPK deutlich größer ist als bei alleiniger Nierentransplantation bei einem Typ-1-Diabetiker [8].\nQI/Bezeichnung | Referenzbereich (%) | 2016/2017 (%) | 2018/2019 (%)\nSterblichkeit im Krankenhaus | < 5 | 3,57 | 4,76\n1‑Jahres-Überleben bei bekanntem Status | > 90 | 92,96 | 91,36\n2‑Jahres-Überleben bei bekanntem Status | > 80 | 91,52 | 91,23\n3‑Jahres-Überleben bei bekanntem Status | > 75 | 91,10 | 90,20\nQualität der Transplantatfunktion bei Entlassung | > 75 | 78,95 | 83,23\nQualität der Transplantatfunktion (1 Jahr nach Entlassung) | Nicht definiert | 83,07 | 92,16\nQualität der Transplantatfunktion (2 Jahre nach Entlassung) | Nicht definiert | 78,95 | 83,23\nQualität der Transplantatfunktion (3 Jahre nach Entlassung) | Nicht definiert | 74,67 | 75,79", "Wie bei Organbiopsien üblich erfolgt nach Eingang und Prüfung der klinischen Angaben zunächst eine makroskopische Beurteilung der formalinfixierten Proben, wobei hier für eine erste Einschätzung auch ein Durchlichtmikroskop hilfreich sein kann. 
Da eine definitive Einordnung in endo- und exokrines Pankreas allerdings in der Lichtmikroskopie des Feuchtmaterials nicht möglich ist, empfiehlt sich zunächst eine Herstellung von HE- und PAS-Schnitten und die Anfertigung von mindestens 6 Leerschnitten für ergänzende histochemische und immunhistologische Untersuchungen, die nach Prüfung des Gewebes an den HE-/PAS-Schnitten angefordert werden können. Weiterhin empfiehlt es sich, in jedem Fall eine Bindegewebsfaserfärbung (z. B. Siriusrot, Elastica-van-Gieson [EvG] oder Mason-Goldner [MG]) und die 5 folgenden immunhistologischen Färbungen anzufertigen: C4d als Marker einer antikörpervermittelten Transplantatabstoßung (ABMR), CD3 als T‑Zell- und CD68 als Makrophagenmarker sowie Insulin und Glukagon zur Visualisierung der endokrinen Pankreasinseln [28, 38, 39].\nEin entsprechendes Vorgehen – allerdings primär ohne ergänzende immunhistologische Untersuchungen – wird auch für die selten durchgeführten und schwieriger zu beurteilenden Biopsien des Spenderduodenums durchgeführt, die hinsichtlich ihrer Signifikanz für die Pankreasabstoßung insgesamt sehr kontrovers diskutiert werden [41].", "Eine Analyse von 38 ausgewählten Genen (u. a. endotheliale Gene, NK-Zellgene und Entzündungsgene) mittels NanoString-nCounter-Technologie an 52 formalinfixierten Pankreasbiopsien zeigte, dass durch ergänzende molekulare Marker die Pankreasabstoßungsdiagnostik verbessert werden kann [42]. Diese aufwendige und kostenintensive molekulare Zusatzdiagnostik wird derzeit jedoch nicht routinemäßig eingesetzt.", "Für die Beurteilung von transplantatassoziierten Veränderungen ist primär die Kenntnis der normalen Anatomie des Pankreas und der Normalbefunde der verwendeten immunhistologischen Färbungen hilfreich (Abb. 4). Zunächst sollte in der HE- und PAS-Färbung (Abb. 4a, b) geprüft werden, ob exo- und endokrines Pankreasgewebe vorhanden ist und wieviele Läppchen erfasst sind. In der CD3- und der CD68-Färbung zeigen sich üblicherweise nur sehr wenige T‑Zellen und Gewebehistiozyten/Makrophagen (Abb. 4c, d). In der Insulin- und der Glukagon-Immunhistologie lassen sich die endokrinen Inseln und hier speziell die Alpha- und Betazellen meist sehr schön darstellen (Abb. 4e, f).", "Die morphologischen Veränderungen des Pankreas werden gemäß der aktuellen international gültigen Banff-Klassifikation eingeteilt [30]: Bei der häufigen akuten T‑Zell-vermittelten Transplantatabstoßung (TCMR; Tab. 1; Abb. 5) finden sich die folgenden histologischen Veränderungen: septale Entzündung mit aktivierten Lymphozyten- und z. T. Eosinophileninfiltraten, Venulitis und Duktitis, azinäre Entzündung und Endothelialitis (Abb. 5a–f). Das Ausmaß der T‑zellulären oder makrophagozytären Infiltration lässt sich hierbei gut durch ergänzende CD3- (Abb. 5g) bzw. CD68-Färbung (Abb. 5h) visualisieren, wobei hier v. a. das T‑zelluläre Infiltrat eine größere Rolle als das monozytär/makrophagozytäre Infiltrat spielt (Tab. 4). Die TCMR wird entsprechend ihres Schweregrades in 3 Kategorien (mild, moderat, schwer) eingeteilt, wobei die schwere Form mit diffuser azinärer Entzündung, fokaler oder diffuser Azinuszellnekrose und/oder moderater oder schwerer intimaler Arteriitis (> 25 % des Lumens) eine Abgrenzung zur akuten antikörpervermittelten Transplantatabstoßung (ABMR) notwendig macht, sodass hier in jedem Fall nach dem Vorhandensein donorspezifischer Antikörper (DSA) zu fragen ist (Tab. 
Tab. 4 Entzündungszellinfiltrate bei TCMR und ABMR:
Infiltrat | TCMR | ABMR
Septale Infiltrate | +++ | − bis +
Eosinophile Granulozyten | + bis +++ | − bis +
Neutrophile Granulozyten | − bis ++ | ± bis +++
T‑Lymphozyten | ++ bis +++ | ± bis +
Makrophagen | ++ | ++++
Tab. 5 Graduierung der TCMR im Spenderduodenum:
Grad | Wichtigste histologische Befunde
Nicht eindeutig bzw. ausreichend für eine TCMR | Minimales lokalisiertes Entzündungsinfiltrat, minimaler Kryptenepithelschaden, vermehrte epitheliale Apoptosen (< 6 Apoptosekörperchen/10 Krypten), keine oder nur minimale Architekturstörung der Schleimhaut, keine Schleimhautulzerationen und keine eindeutigen Veränderungen einer milden TCMR
Milde TCMR | Mildes lokalisiertes Entzündungsinfiltrat mit aktivierten Lymphozyten, milder Kryptenepithelschaden, vermehrte epitheliale Apoptosen (> 6 Apoptosekörperchen/10 Krypten), milde Architekturstörung der Schleimhaut, keine Schleimhautulzerationen
Moderate TCMR | Diffuses Entzündungsinfiltrat in der Lamina propria, diffuser Kryptenepithelschaden, vermehrt epitheliale Apoptosen mit fokaler Konfluenz, stärkere Schleimhautarchitekturstörung, milde bis mäßige intimale Arteriitis möglich, keine Schleimhautulzerationen
Schwere TCMR | Veränderungen der moderaten TCMR plus mukosale Ulzerationen bzw. auch schwere oder transmurale intimale Arteriitis möglich
TCMR T-Zell-vermittelte Abstoßung
Tab. 6 Ergebnisse der Pankreastransplantatbiopsien (Anzahl der Fälle, Prozentwert %):
Alle Biopsien(a)
Pankreasbiopsien/Duodenalbiopsien | 93 (100)/3
Kein Material (Pankreasbiopsien) | 32 (34,4)
Pankreasbiopsien mit verwertbarem Material(b)
Pankreasbiopsien gesamt | 61 (100)
Keine Abstoßung | 15 (24,6)
Akute TCMR | Indeterminate: 4 (6,6); Grad I: 30 (49,2); Grad II: 8 (13,1)
Aktive ABMR | Definitiv: 0 (0); Erwägen: 3 (4,9); Ausschließen: 2 (3,3)
Akuter Azinusepithelschaden | 36 (59)
Allograftfibrose | Grad I: 29 (47,5); Grad II: 2 (3,3); Grad III: 2 (3,3)
Chronische Allograftarteriopathie | 1 (1,6)
Insulitis | 2 (möglich) (3,3)
CNI-Toxizität | 3 (möglich) (4,9)
Pankreatitis | 5 (8,2)
ABMR antikörpervermittelte Abstoßung, CNI Calcineurin-Inhibitor, TCMR T-Zell-vermittelte Abstoßung
(a) Prozentwerte bezogen auf alle Pankreasbiopsien unabhängig davon, ob verwertbares Material vorhanden war
(b) Prozentwerte bezogen auf Pankreasbiopsien mit verwertbarem Material
Auch im biopsierten Spenderduodenum (Tab. 5; Abb. 6a, b) können sich charakteristische Veränderungen einer TCMR mit unterschiedlich ausgeprägter Vermehrung von Lymphozyten intraepithelial und in der Lamina propria sowie erhöhter epithelialer Apoptoserate finden, ebenso eine Architekturstörung der Schleimhaut (definiert als Verplumpung/Abflachung der Villi in dem am besten orientierten Schnitt) [43]. Im schweren Stadium zeigt sich auch eine ausgeprägte Schleimhautdestruktion mit Kryptenverlust, Verschorfung und neutrophilenreichem Infiltrat (Tab. 5; [43]). Hiervon abzugrenzen sind jedoch ischämische Veränderungen, die im Einzelfall ein ähnliches histologisches Bild induzieren können.
Die ABMR manifestiert sich am transplantierten Pankreas als mikrovaskulärer Endothelzellschaden des exokrinen Parenchyms, interazinäre Entzündung, Azinusepithelschaden, Vaskulitis und Thrombose, kann in der HE-Färbung aber auch relativ blande aussehen (Abb. 7a). 
Neben DSA und der Morphologie ist auch die spezifische C4d-Positivität der interazinären Kapillaren eines der 3 diagnostischen Kriterien der ABMR [29, 30], wofür eine C4d-Färbung notwendig ist (Abb. 7b). Im Gegensatz zur TCMR ist die ABMR v. a. durch ein monozytär/makrophagozytäres Infiltrat charakterisiert (Tab. 4; Abb. 7c). Zur Darstellung der interazinären Kapillaren und speziell zur Beurteilung der Dilatation und Endothelzellschwellung als Zeichen der kapillären Schädigung kann eine CD34-Immunhistochemie hilfreich sein (Abb. 7d).", "Hier sind zum einen anatomische bzw. nicht krankheitswertige Variationen zu nennen, wie z. B. die relativ häufige Vermehrung von Fettzellen im Pankreas (Abb. 8a; [44]). Zum anderen finden sich z. T. transplantationsassoziierte Veränderungen wie tryptische Pankreasnekrosen (Abb. 8b), ischämische Posttransplantationspankreatitis, Peripankreatitis bzw. peripankreatische Flüssigkeitsansammlung/Ödem, CMV-Pankreatitis, posttransplantations-lymphoproliferative Erkrankung (PTLD), bakterielle Infektionen oder Pilzinfektionen, Rekurrenz der Autoimmunerkrankung/des Diabetes mellitus oder eine CNI-Toxizität, die sich meist als Inselzellschaden, z. B. als Vakuolisierung der endokrinen Zellen, zeigt (Abb. 8c; [32]). In einer Studie, die die histologischen Befunde unter den beiden CNIs Cyclosporin A und Tacrolimus untersuchte, war die Vakuolisierung von Inselzellen bei 2 Kontrollfällen nur minimal ausgeprägt; grobe Vakuolisierungen – wie bei CNI-Toxizität – wurden dort nicht beobachtet. Als weitere Merkmale der CNI-Toxizität fanden sich Inselzellschwellungen, Apoptosen und eine verminderte Reaktivität für Insulin in der Immunhistochemie [32].", "Im Zeitraum von Juni 2017 bis Dezember 2020 wurden aus der Chirurgischen Klinik des Universitätsklinikums Knappschaftskrankenhaus Bochum insgesamt 93 Pankreastransplantatbiopsien und 3 Biopsien des Spenderduodenums von 49 Patienten in der Nephropathologie Erlangen untersucht. Die Ergebnisse der Pankreasbiopsien, wie in den Originalbefunden dokumentiert, sind in Tab. 6 zusammengefasst und in Abb. 9 teilweise illustriert. In einem Drittel der Biopsien (34,4 %) wurde kein diagnostisches Material gewonnen. Die bei weitem häufigste Diagnose war eine TCMR. Zur Einordnung von Befunden wie Insulitis, CNI-Toxizität oder einer ABMR ist der klinische Kontext für die abschließende Interpretation von größter Wichtigkeit, sodass allein anhand der Histologie häufig keine definitive Diagnose zu stellen ist. Im gleichen Zeitraum wurden 3 Biopsien des Spenderduodenums eingesandt, von denen eine Zeichen einer TCMR zeigte und 2 weitere keine Abstoßungszeichen." ]
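Ergänzender Hinweis zur oben zitierten Banff-Regel für die ABMR (1, 2 oder 3 erfüllte Kriterien aus morphologischem Gewebeschaden, C4d-Positivität und DSA): Die Entscheidungslogik lässt sich rein schematisch etwa wie in der folgenden Python-Skizze abbilden. Funktions- und Variablennamen sind frei gewählt und dienen nur der Veranschaulichung; es handelt sich nicht um validierte Diagnostiksoftware.

def abmr_einordnung(gewebeschaden: bool, c4d_positiv: bool, dsa_bestaetigt: bool) -> str:
    # Zaehlt die erfuellten Banff-Kriterien: (1) morphologischer Nachweis eines akuten
    # Gewebeschadens, (2) C4d-Positivitaet der interazinaeren Kapillaren (>= 1 % der
    # Laeppchenoberflaeche), (3) bestaetigte donorspezifische Antikoerper (DSA).
    erfuellt = sum([gewebeschaden, c4d_positiv, dsa_bestaetigt])
    if erfuellt == 3:
        return "definitive Diagnose einer ABMR"
    if erfuellt == 2:
        return "ABMR muss erwogen werden"
    if erfuellt == 1:
        return "Ausschluss einer ABMR erforderlich"
    return "keine Kriterien einer ABMR erfuellt"

# Beispiel: C4d-positiv und DSA nachweisbar, aber ohne eindeutigen akuten Gewebeschaden
print(abmr_einordnung(gewebeschaden=False, c4d_positiv=True, dsa_bestaetigt=True))
# Ausgabe: ABMR muss erwogen werden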
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Simultane Pankreas‑/Nierentransplantation (SPK)", "Alleinige Pankreastransplantation (PTA)", "Pankreastransplantation nach erfolgter Nierentransplantation (PAK)", "Indikationen zur Pankreastransplantation", "Spenderkriterien", "Technik der Pankreastransplantation", "Komplikationen nach Pankreastransplantation (PTX)", "Transplantatthrombose", "Transplantatpankreatitis", "Blutung", "Akute Transplantatabstoßung", "Indikation und Technik der Pankreastransplantatbiopsie", "Indikationen", "Technik der Biopsieentnahme", "Ergebnisse und Verlauf nach Pankreastransplantation (Tab. 3)", "Aufarbeitung und Beurteilung von Pankreas- und Duodenal-Abstoßungsbiopsien", "Molekulare Marker der humoralen Pankreasabstoßung", "Häufige histomorphologische Befunde an Pankreastransplantatbiopsien (Abb. 4, 5, 6 und 7)", "Akute T-Zell- und akute antikörpervermittelte Transplantatabstoßung (Tab. 1, 4, 5 und 6)", "Nicht abstoßungsbedingte histomorphologische Veränderungen in Pankreasbiopsien nach Transplantation (Abb. 8)", "Erfahrungen der gemeinsamen PTX-Biopsiediagnostik Bochum-Erlangen (06/2017–12/2020, Tab. 6; Abb. 9)", "Fazit für die Praxis" ]
[ "Die klassische Indikation zur simultanen PNTX stellt der juvenile Typ I‑Diabetes mit terminaler Niereninsuffizienz dar. Eine kombinierte PNTX ist jedoch auch im Stadium der präterminalen Niereninsuffizienz möglich. So kann ein Patient ab Stadium 4 der chronischen Niereninsuffizienz (glomeruläre Filtrationsrate [GFR], < 30 ml/min) in die Warteliste für eine kombinierte PNTX aufgenommen werden. Diese präemptive Transplantation führt zu einer Reduktion der perioperativen Mortalität und verbessert das Langzeitüberleben der Patienten erheblich. In Europa erfolgt ein Großteil (89 %) der PTX zusammen mit einer Nierentransplantation [12]. Dabei stammen beide Organe vom selben Organspender.", "In seltenen Fällen kann bei Patienten mit extrem instabilem Diabetes mellitus Typ I (sog. Brittle-Diabetes) die Indikation zur alleinigen PTX gestellt werden. Hypoglykämie-Wahrnehmungsstörungen, das Vorliegen einer subkutanen Insulinresistenz, aber auch ein frühes Auftreten und rasches Fortschreiten diabetischer Spätschäden können eine alleinige PTX rechtfertigen. In diesen Fällen sollte die Nierenfunktion aufgrund der zu erwartenden Nephrotoxizität der Immunsuppressiva nicht oder nur gering beeinträchtigt sein. Trotz Verbesserungen der Ergebnisse ist die alleinige PTX keine prinzipielle Alternative zu konservativen Therapiemöglichkeiten. Die Indikation sollte streng gestellt werden und nur interdisziplinär, z. B. im Rahmen der Transplantationskonferenz, nachdem wichtige Alternativen (z. B. eine Insulinpumpentherapie) ausgeschöpft und selbstinduzierte Hypoglykämien bei psychischen Störungen ausgeschlossen wurden [13]. Es muss eine sorgfältige Abwägung des Operationsrisikos und der Nebenwirkungen der immunsuppressiven Therapie gegenüber den zu erwartenden Vorteilen einer Transplantation erfolgen.", "Nach stattgehabter Leichennieren- oder Lebendnierentransplantation eines Typ-I-Diabetikers kann eine PTX erfolgen. Bei dieser Art der Transplantation fehlt die immunologische Identität der Transplantate. Von Vorteil ist jedoch, dass die Immunsuppression zum Zeitpunkt der PTX bereits besteht. Weitere Vorteile dieser Transplantationsform sind eine reduzierte Wartezeit sowie eine geringere Mortalität im Vergleich zur simultanen PNTX [14].", "Entsprechend der aktuellen Richtlinie für die Wartelistenführung und die Organvermittlung zur PTX und kombinierten PNTX müssen folgende Kriterien für die Aufnahme in die Warteliste erfüllt werden [15]:\nVorhandensein von Autoantikörpern gegen Glutamatdecarboxylase (GAD) und/oder Inselzellen (ICA) und/oder Tyrosinphosphatase 2 (IA-2) und/oder Zinktransporter 8 (ZnT8) und/oder Insulin (IAA).\nDer Autoantikörpernachweis von GAD, IA‑2, ICA und ZnT8 kann zum Zeitpunkt der Listung oder in der Vergangenheit erfolgt sein. Der positive Befund für IAA ist nur dann akzeptabel, wenn das Nachweisdatum vor Beginn der Insulintherapie liegt. Daher muss für den Nachweis der IAA das Datum der Testung und der Beginn der Insulintherapie an Eurotransplant übermittelt werden. Können keine Autoantikörper detektiert werden, muss eine Betazelldefizienz nachgewiesen werden. Diese wird in der aktuellen Richtlinie definiert als:C‑Peptid vor Stimulation < 0,5 ng/ml mit einem Anstieg nach Stimulation von < 20 %, wenn parallel zu diesem Messzeitpunkt kein Blutzuckerwert vorliegt oderC‑Peptid vor Stimulation < 0,5 ng/ml mit einem gleichzeitig erhobenen Blutzuckerwert ≥ 70 mg/dl (bzw. 
≥ 3,9 mmol/l) oderC‑Peptid nach Stimulation < 0,8 ng/ml, mit einem gleichzeitig einhergehenden Blutzuckeranstieg auf ≥ 100 mg/dl (bzw. ≥ 5,6 mmol/l).\nC‑Peptid vor Stimulation < 0,5 ng/ml mit einem Anstieg nach Stimulation von < 20 %, wenn parallel zu diesem Messzeitpunkt kein Blutzuckerwert vorliegt oder\nC‑Peptid vor Stimulation < 0,5 ng/ml mit einem gleichzeitig erhobenen Blutzuckerwert ≥ 70 mg/dl (bzw. ≥ 3,9 mmol/l) oder\nC‑Peptid nach Stimulation < 0,8 ng/ml, mit einem gleichzeitig einhergehenden Blutzuckeranstieg auf ≥ 100 mg/dl (bzw. ≥ 5,6 mmol/l).\nAls Stimulationstests werden ein oraler Glukosetoleranztest (OGTT), ein Mixed-Meal-Toleranztest (MMTT) oder ein intravenöser oder subkutaner Glukagontest akzeptiert. Zusätzlich können bedrohliche und rasch fortschreitende diabetische Spätfolgen, das Syndrom der unbemerkten schweren Hypoglykämie, ein exzessiver Insulinbedarf oder fehlende Applikationswege für Insulin bei subkutaner Insulinresistenz eine Indikation zur isolierten PTX darstellen. Über die mögliche Aufnahme in die Warteliste entscheidet dann in jedem Einzelfall eine Auditgruppe bei der Vermittlungsstelle (Eurotransplant Pancreas Advisory Committee, EPAC).\nDie PTX bei Typ-2-Diabetikern wird weiterhin kontrovers diskutiert. Der Anteil der pankreastransplantierten Typ-2-Diabetiker liegt weltweit zwischen 1 % und 8 %, ist aber in den letzten Jahren kontinuierlich angestiegen [2]. Schlanke Typ-2-Diabetiker mit geringem Insulinbedarf und Niereninsuffizienz sowie Patienten mit Typ-MODY-Diabetes („maturity onset diabetes of the young“) profitieren in ähnlicher Weise von einer kombinierten PNTX wie Typ-1-Diabetiker [16]. Auch hier entscheidet das EPAC über die Aufnahme auf die Warteliste. Kontraindikationen für eine PTX stellen eine bestehende Malignomerkrankung, eine nicht sanierte akute oder chronische Infektion sowie eine schwere psychische Störung und Non-Adhärenz dar. Ebenfalls können ausgeprägte kardiovaskuläre Erkrankungen, soweit nicht therapierbar, eine Kontraindikation darstellen. Bei Patienten mit einem Body-Mass-Index (BMI) > 30 kg/m2 sollte nur in Ausnahmefällen eine PTX durchgeführt werden. Einerseits besteht eine Assoziation zwischen Übergewicht und der perioperativen Morbidität und Mortalität, andererseits kann der Transplantationserfolg mit Erreichen einer Insulinunabhängigkeit und Normoglykämie durch das bestehende Übergewicht gefährdet werden. Das Patientenalter allein stellt keine Kontraindikation zur PTX dar. Waren in den letzten Jahrzehnten noch viele Zentren sehr zurückhaltend bei der PTX über 50-jähriger Patienten, so konnte in den letzten Jahren, insbesondere in High-volume-Zentren, gezeigt werden, dass auch bei älteren Patienten durchaus gute Ergebnisse nach einer PTX erzielt werden können [17, 18]. Kardiovaskuläre Erkrankungen stellen mit Abstand den größten Komorbiditätsfaktor bei Patienten mit Diabetes mellitus Typ I und Niereninsuffizienz dar. Potenzielle Pankreastransplantatempfänger mit kardialer Anamnese, Auffälligkeiten in der nichtinvasiven Diagnostik (Echokardiografie, Myokardszintigrafie), einem Lebensalter > 50 Jahre oder bereits bestehender Dialysepflichtigkeit sollten vor einer PTX immer einer Koronarangiografie unterzogen werden. Ebenso sorgfältig sollten die peripheren Gefäßverhältnisse, insbesondere die Iliakalgefäße, hinsichtlich ihrer Anastomosierungsfähigkeit untersucht werden.", "Die Spenderkriterien sowie eine optimale Organentnahme und Organkonservierung sind mitentscheidend für den Erfolg einer PTX. 
Eine Vielzahl von Spenderparametern wurde hinsichtlich ihrer Bedeutung für das Ergebnis nach PTX untersucht. Wesentliche Faktoren sind das Spenderalter, BMI, Todesursache (traumatisch oder kardiovaskulär), Verweildauer auf der Intensivstation, erfolgte Reanimation, hypotensive Phasen und der Einsatz von Katecholaminen. Des Weiteren fließen Laborparameter wie Serumamylase und Serumlipase, Vorliegen einer Hypernatriämie, Blutzucker und Serumkreatinin in die Entscheidung über die Transplantabilität eines Spenderpankreas ein [19, 20]. Die Dauer der Ischämiezeit beeinflusst die Rate und Intensität von Pankreatitiden und somit die Funktionsrate nach Transplantation, während der Einfluss verschiedener Konservierungslösungen kontrovers diskutiert wird. Neben dem Vorliegen von Malignomen und Infektionskrankheiten gelten ein Diabetes mellitus, eine akute oder chronische Pankreatitis, vorausgegangene chirurgische Eingriffe am Pankreas und ein höhergradiges Pankreastrauma als Kontraindikationen beim Spender. Von einigen Autoren werden ein chronischer Alkoholabusus, eine intraabdominelle Sepsis sowie ein BMI > 35 kg/m2 ebenfalls als Kontraindikationen angesehen [20]. Viele Transplantationschirurgen akzeptieren keine Spenderpankreata mit Kalzifikationen, fortgeschrittener Organfibrose, infiltrativen Verfettungen, deutlichem Ödem sowie einer ausgeprägten Atherosklerose der Viszeral- und Beckenarterien des Spenders. Ebenso von Bedeutung ist die zu palpierende Konsistenz des Organs. Dabei ist die intraoperative Beurteilung des Pankreas durch einen erfahrenen Transplantationschirurgen von wesentlicher Bedeutung. Diese Kriterien erscheinen etwas subjektiv, sind jedoch in der Allokationsrealität Laborwerten und demografischen Daten des Spenders überlegen. Im Vergleich mit der Akzeptanz anderer Organe zur Transplantation ist die Ablehnungsrate potenzieller Pankreasspender sowohl in Europa als auch in den USA hoch. So wurden z. B. im Jahr 2019 nur 21 % der bei Eurotransplant gemeldeten Spenderpankreata transplantiert [21].", "Die klinische PTX unterlag zahlreichen Veränderungen und Modifikationen hinsichtlich der chirurgischen Technik. Auch heute werden diverse Techniken nebeneinander angewendet, ohne dass letztlich die Überlegenheit der einen oder anderen Methode nachgewiesen werden konnte. Allein die Tatsache, dass in einigen Zentren parallel verschiedene Techniken Verwendung finden, verdeutlicht diese Situation. Nach ihrer Einführung war die PTX mit einer sehr hohen peri- und postoperativen Morbidität und Mortalität behaftet, sodass sie viele Jahre als experimentelles Verfahren angesehen wurde. Vordergründig für die schlechten Ergebnisse zu dieser Zeit war das ungenügende Management der exokrinen Pankreassekretion. Dieses ist auch heute noch von zentraler Bedeutung, da die Freisetzung von aktivierten Pankreasenzymen, ähnlich wie bei der akuten Pankreatitis, zu gravierenden Gewebeschädigungen führen kann. Erst mit der von Sollinger 1983 [22] entwickelten Blasendrainagetechnik und der Einführung der Pankreasduodenaltransplantation durch Nghiem und Corry 1987 ist die PTX im Hinblick auf das Auftreten von Nahtinsuffizienzen und Pankreasfisteln sicherer geworden [23]. Bei dieser Technik wird das gesamte Spenderpankreas mit einem blindverschlossenen, kurzen Duodenalsegment transplantiert. Das exokrine Sekret wird dabei über eine in Seit-zu-Seit-Technik angelegte Duodenozystostomie in die Harnblase abgeleitet. 
Dieses Verfahren erlaubte durch Bestimmung von Amylase und Lipase im Urin ein Transplantatmonitoring, war jedoch durch das Auftreten schwerer metabolischer Azidosen, Dehydratationen, Harnblasen- und Harnröhrenentzündungen sowie Refluxpankreatitiden mit vielen Komplikationen behaftet. Mit der Verfügbarkeit einer besseren Immunsuppression und der damit verbundenen Reduktion immunologisch bedingter Transplantatverluste hat sich seit Mitte der 1990er-Jahre in den meisten Transplantationszentren die enterale Drainage in den Dünndarm durchgesetzt. Hierbei wird das exokrine Pankreassekret durch eine Seit-zu-Seit-Duodenojejunostomie oder eine Duodenoduodenostomie in den Darm abgeleitet. Auch Rekonstruktionen über eine nach Roux‑Y ausgeschaltete Dünndarmschlinge finden Anwendung. In den letzten Jahren wird von einigen Zentren eine retroperitoneale Platzierung des Pankreastransplantates favorisiert [24, 25]. Die endokrine Drainage oder venöse Ableitung des Pankreastransplantates kann systemisch-venös in die V. cava inferior oder portal-venös in die V. mesenterica superior erfolgen. Ob das technisch anspruchsvollere, aber physiologischere Verfahren der portal-venösen Drainage metabolische Vorteile hat, ist bisher nicht bewiesen. Beide Verfahren führen zu vergleichbaren Ergebnissen. Seit 2007 bevorzugen wir die retroperitoneale Positionierung des Transplantates unter Anlage einer Seit-zu-Seit-Duodenoduodenostomie mit portal-venöser oder systemisch-venöser Anastomosierung (Abb. 1). Der Vorteil der Duodenoduodenostomie besteht darin, dass sowohl die Dünndarmanastomose als auch der Pankreaskopf des Transplantates endoskopisch erreicht werden können. Somit wird es möglich, endoskopisch Biopsien zur Abstoßungsdiagnostik zu gewinnen. Des Weiteren kann im Falle einer intestinalen Blutung im Anastomosenbereich eine einfache endoskopische Intervention zur Blutstillung erfolgen. Mögliche Nachteile ergeben sich im Falle einer Anastomoseninsuffizienz oder eines Transplantatverlustes, da die resultierende Leckage im Bereich des Duodenums chirurgisch schwieriger zu versorgen ist. Neben vielen weiteren Vorteilen der retroperitonealen Positionierung ist das Pankreastransplantat besser sonografisch darstellbar und einfach perkutan zu punktieren.", "Transplantatthrombose Die Pankreastransplantatthrombose ist nach wie vor die häufigste Ursache für einen frühen Verlust des Pankreastransplantates. Bei einem plötzlichen Blutzuckeranstieg in der Frühphase nach Transplantation muss nach Ausschluss anderer Ursachen immer an eine Perfusionsstörung des Pankreastransplantates gedacht werden. Der Nachweis erfolgt in der Regel durch eine CT oder MRT des Abdomens. Die farbcodierte Duplexsonografie kann erste Hinweise auf Perfusionsstörungen ergeben. Im eigenen Zentrum hat sich in den letzten Jahren die kontrastmittelunterstützte Sonografie (CEUS) zum Perfusionsnachweis als extrem hilfreich erwiesen, da diese sofort auf Station im Patientenbett durchgeführt werden kann und kein potenziell nephrotoxisches Kontrastmittel appliziert werden muss. Bei Nachweis einer Thrombose im Bereich der Pfortader oder Milzvene sowie Thrombosen im Bereich des arteriellen Zuflusses (Y-Graft) ist eine sofortige Relaparotomie durchzuführen. In einigen Fällen ist es dabei gelungen, erfolgreich eine Thrombektomie durchzuführen und die Organfunktion zu erhalten. Auch Erfolge bei Lysetherapien wurden berichtet. 
Häufig bleibt einem jedoch nur die Entfernung des thrombosierten Transplantates übrig.\nDie Pankreastransplantatthrombose ist nach wie vor die häufigste Ursache für einen frühen Verlust des Pankreastransplantates. Bei einem plötzlichen Blutzuckeranstieg in der Frühphase nach Transplantation muss nach Ausschluss anderer Ursachen immer an eine Perfusionsstörung des Pankreastransplantates gedacht werden. Der Nachweis erfolgt in der Regel durch eine CT oder MRT des Abdomens. Die farbcodierte Duplexsonografie kann erste Hinweise auf Perfusionsstörungen ergeben. Im eigenen Zentrum hat sich in den letzten Jahren die kontrastmittelunterstützte Sonografie (CEUS) zum Perfusionsnachweis als extrem hilfreich erwiesen, da diese sofort auf Station im Patientenbett durchgeführt werden kann und kein potenziell nephrotoxisches Kontrastmittel appliziert werden muss. Bei Nachweis einer Thrombose im Bereich der Pfortader oder Milzvene sowie Thrombosen im Bereich des arteriellen Zuflusses (Y-Graft) ist eine sofortige Relaparotomie durchzuführen. In einigen Fällen ist es dabei gelungen, erfolgreich eine Thrombektomie durchzuführen und die Organfunktion zu erhalten. Auch Erfolge bei Lysetherapien wurden berichtet. Häufig bleibt einem jedoch nur die Entfernung des thrombosierten Transplantates übrig.\nTransplantatpankreatitis Das Entstehen einer Transplantatpankreatitis ist oftmals schon intraoperativ nach Reperfusion zu bemerken. Aber auch nach der Reperfusion unauffällig anmutende Transplantate können im Verlauf schwerwiegende Formen einer Pankreatitis entwickeln. In der Diagnostik ergänzen sich klinischer Befund, Laboruntersuchungen und Bildgebung. Bei der klinischen Untersuchung finden sich meist rechtsseitig betonte Bauchschmerzen ggf. mit Peritonismus. Zusätzlich können das Entstehen eines paralytischen Ileus, Fieber sowie eine Transplantatdysfunktion auf eine Pankreatitis hinweisen. Laborchemisch besteht häufig eine deutliche CRP-Erhöhung, die mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein kann. In der Bildgebung können alle Formen einer Pankreatitis imponieren. Diese reichen von einer leichten ödematösen Form bis hin zur nekrotisierenden Pankreatitis mit Perfusionsausfällen. Bei schweren, lebensbedrohlichen Verlaufsformen einer Transplantatpankreatitis kann die partielle oder komplette Entfernung des Transplantates notwendig werden, auch wenn eine gute Funktion vorliegt.\nDas Entstehen einer Transplantatpankreatitis ist oftmals schon intraoperativ nach Reperfusion zu bemerken. Aber auch nach der Reperfusion unauffällig anmutende Transplantate können im Verlauf schwerwiegende Formen einer Pankreatitis entwickeln. In der Diagnostik ergänzen sich klinischer Befund, Laboruntersuchungen und Bildgebung. Bei der klinischen Untersuchung finden sich meist rechtsseitig betonte Bauchschmerzen ggf. mit Peritonismus. Zusätzlich können das Entstehen eines paralytischen Ileus, Fieber sowie eine Transplantatdysfunktion auf eine Pankreatitis hinweisen. Laborchemisch besteht häufig eine deutliche CRP-Erhöhung, die mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein kann. In der Bildgebung können alle Formen einer Pankreatitis imponieren. Diese reichen von einer leichten ödematösen Form bis hin zur nekrotisierenden Pankreatitis mit Perfusionsausfällen. 
Bei schweren, lebensbedrohlichen Verlaufsformen einer Transplantatpankreatitis kann die partielle oder komplette Entfernung des Transplantates notwendig werden, auch wenn eine gute Funktion vorliegt.\nBlutung Bei Blutungskomplikationen nach einer PTX muss zwischen intraluminaler, intestinaler Blutung im Bereich der Anastomose und einer Blutung im Bauchraum unterschieden werden. Die Möglichkeit der einfachen endoskopischen Blutstillung im Bereich der Duodenoduodenostomie ist einer der großen Vorteile dieser Anastomosierungstechnik. Aber auch bei Anastomosen im Bereich der 1. und 2. Jejunalschlinge und intraperitonealer Positionierung des Pankreastransplantates ist über eine Push-Intestinoskopie die Anastomose zu erreichen, wobei sich dies technisch schwieriger gestaltet. Intraabdominelle Blutungen nach PTX sind kein seltenes Ereignis, zumal ein Großteil der Patienten eine postoperative Heparintherapie erhält. Kleinere Blutungen können oftmals konservativ beherrscht werden, indem die Antikoagulantientherapie pausiert wird und plasmatische Gerinnungsfaktoren optimiert werden. Hämodynamisch relevante Blutungen und ein rezidivierender Transfusionsbedarf zeigen die Indikation zur Relaparotomie an. Das Spektrum solcher Nachblutungen reicht von kleinen diffusen Blutungen aus dem Transplantatpankreas (häufig an der Mesenterialwurzel) oder kleineren Blutungen an den peritonealen Inzisionslinien bis hin zu Katastrophenblutungen aus der unteren Hohlvene und der Beckenarterie. Bei Arrosionsblutungen im Bereich der Beckenarterie kann in manchen Fällen das endovaskuläre Stenting der Defektzone die rettende Maßnahme für den Patienten sein.\nBei Blutungskomplikationen nach einer PTX muss zwischen intraluminaler, intestinaler Blutung im Bereich der Anastomose und einer Blutung im Bauchraum unterschieden werden. Die Möglichkeit der einfachen endoskopischen Blutstillung im Bereich der Duodenoduodenostomie ist einer der großen Vorteile dieser Anastomosierungstechnik. Aber auch bei Anastomosen im Bereich der 1. und 2. Jejunalschlinge und intraperitonealer Positionierung des Pankreastransplantates ist über eine Push-Intestinoskopie die Anastomose zu erreichen, wobei sich dies technisch schwieriger gestaltet. Intraabdominelle Blutungen nach PTX sind kein seltenes Ereignis, zumal ein Großteil der Patienten eine postoperative Heparintherapie erhält. Kleinere Blutungen können oftmals konservativ beherrscht werden, indem die Antikoagulantientherapie pausiert wird und plasmatische Gerinnungsfaktoren optimiert werden. Hämodynamisch relevante Blutungen und ein rezidivierender Transfusionsbedarf zeigen die Indikation zur Relaparotomie an. Das Spektrum solcher Nachblutungen reicht von kleinen diffusen Blutungen aus dem Transplantatpankreas (häufig an der Mesenterialwurzel) oder kleineren Blutungen an den peritonealen Inzisionslinien bis hin zu Katastrophenblutungen aus der unteren Hohlvene und der Beckenarterie. Bei Arrosionsblutungen im Bereich der Beckenarterie kann in manchen Fällen das endovaskuläre Stenting der Defektzone die rettende Maßnahme für den Patienten sein.\nAkute Transplantatabstoßung Ein Pankreastransplantat kann Ziel der allogenen Immunität (Abstoßung) und/oder der Autoimmunität (Rekurrenz der Grunderkrankung) sein. Amylase- und Lipaseerhöhung im Serum sowie ein Anstieg der Blutzuckerwerte sind häufig die einzigen, aber unspezifischen Zeichen einer Abstoßung. Im Falle einer kombinierten PNTX ist die histologische Diagnose aus dem Nierentransplantat oftmals ausschlaggebend. 
Es konnte aber auch gezeigt werden, dass eine nicht unerhebliche Diskordanz beim Auftreten von Rejektionen und deren Schweregrad in zeitgleich entnommenen Biopsien aus Pankreas- und Nierentransplantat besteht. Uva et al. [26] beschrieben lediglich in 40 % der Fälle mit Abstoßungen ein zeitgleiches Auftreten der Rejektion in beiden Organen. In einer Studie von Parajuli et al. [27] lag die Diskordanzrate für Abstoßungen bei 37,5 %. Bei den 62,5 % der Fälle mit konkordanten Befunden in beiden Organen wurden wiederum unterschiedliche Abstoßungstypen gefunden. Im eigenen Zentrum konnten wir nach zeitgleicher Punktion von Pankreas und Niere eine Konkordanz für das Auftreten bzw. Ausschluss einer Rejektion in beiden Organen von 68,4 % feststellen. Bei einer isolierten PTX oder einer PAK-Transplantation, bei welcher die Spenderorgane nicht HLA-identisch sind, spielt die Pankreasbiopsie eine noch wichtigere Rolle. Die histopathologische Begutachtung ist aufwendig und schwierig. Aufgrund der an sich schon geringen Fallzahlen der PTX und der noch seltener durchgeführten Pankreastransplantatbiopsien ist vielerorts nur begrenzte oder keine Erfahrung vorhanden, sodass wie bei anderen Spezialuntersuchungen die Biopsien meist überregional verschickt und begutachtet werden. Für die Beurteilung von Pankreastransplantatbiopsien existiert, ähnlich wie bei der NTX, seit vielen Jahren ein international anerkanntes und ständig aktualisiertes Banff-Schema zur Graduierung der Abstoßung (Tab. 1; [28–30]).1. Normal2. Unklar3. Akute T‑Zell-vermittelte Abstoßung (TCMR)Grad I/mild: aktive septale Entzündung mit Venulitis und/oder Duktitis und/oder fokale azinäre Entzündung (≤ 2 Foci pro Läppchen) ohne oder nur mit minimalem AzinuszellschadenGrad II/moderat (erfordert eine Abgrenzung zur ABMR): multifokale (aber nicht konfluente oder diffuse) azinäre Entzündung (≥ 3 Foci pro Läppchen) mit vereinzeltem, individuellem Azinuszellschaden und/oder milde intimale Arteriitis (< 25 % des Lumens)Grad III/schwer (erfordert eine Abgrenzung zur ABMR): diffuse azinäre Entzündung mit fokaler oder diffuser, multizellulärer/konfluierender Azinuszellnekrose und/oder moderate oder schwere intimale Arteriitis (> 25 % des Lumens) und/oder transmurale Entzündung – nekrotisierende Arteriitis4. 
Akute/aktive Antikörper-vermittelter Abstoßung (ABMR)a) morphologischer Nachweis eines akuten GewebeschadensGrad I (milde akute ABMR): Gut erhaltene Architektur, milde interazinäre Monozyten‑/Makrophagen- oder gemischte (Monozyten‑/Makrophagen/neutrophile) Infiltrate mit seltenem Azinuszellschaden (Schwellung, Nekrose)Grad II (moderate akute ABMR): Insgesamt erhaltene Architektur mit interazinären Monozyten-Makrophagen-Infiltraten oder gemischte (Monozyten-/Makrophagen/neutrophile) Infiltrate, Dilatation der Kapillaren, interazinäre Kapillaritis, intimale Arteriitis, Stauung, multizellulärer Azinuszellverlust und ErythrozytenextravasationGrad III (schwere akute ABMR): Architekturstörung, verstreute Entzündungsinfiltrate bei interstitiellen Einblutungen, multifokaler oder konfluierender Parenchymnekrose, arterieller oder venöser Gefäßwandnekrose, transmuraler/nekrotisierender Arteriitis und Thrombose (in Abwesenheit anderer offensichtlicher Ursachen)b) C4d-Positivität der interazinären Kapillaren (IAC) ≥ 1 % der Azinusläppchenoberflächec) Bestätigte donorspezifische Antikörper (DSA)Abschließende Einordnung:1 von 3 diagnostischen Kriterien: benötigt Ausschluss einer ABMR2 von 3 diagnostischen Kriterien: ABMR muss erwogen werden3 von 3 diagnostischen Kriterien: definitive Diagnose einer ABMR5. Chronisch-aktive ABMRKategorie 3 und/oder 4 mit chronischer Arteriopathie und/oder Kategorie 6. Spezifikation, ob TCMR, ABMR oder gemischt6. Chronische ArteriopathieFibrointimale arterielle Verbreitung mit Lumeneinengung:inaktiv: fibrointimalaktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen)Grad 0: keine LumeneinengungGrad 1: mild, ≤ 25 % LumeneinengungGrad 2: moderat, 26–50 % LumeneinengungGrad 3: schwer, ≥ 50 % Lumeneinengung7. Chronische TransplantatfibroseGrad I (milde Transplantatfibrose): < 30 %Grad II (moderate Transplantatfibrose): 30–60 %Grad III (schwere Transplantatfibrose): > 60 %8. Inselpathologie: Rekurrenz des autoimmunen Diabetes mellitus (Insulitis und/oder selektiver Betazellverlust), Amyloidablagerungen, Calcineurininhibitor-Toxizität der Inselzellen9. Andere histologische Veränderungen, welche nicht einer akuten und/oder chronischen Abstoßung zugeordnet werden, wie z. B. CMV-Pankreatitis, posttransplantations-lymphoproliferative Erkrankung\n6. Chronische Arteriopathie\nFibrointimale arterielle Verbreitung mit Lumeneinengung:\ninaktiv: fibrointimal\naktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen)\nDie Therapie der akuten Abstoßung eines Pankreastransplantates unterscheidet sich nicht wesentlich von der Abstoßungstherapie anderer transplantierter Organe. Bei milden akuten T‑zellulären Abstoßungen (Banff Grad I) erfolgt auch nach PTX eine Steroidbolustherapie mit z. B. 3 × 500 mg Prednisolon i.v. und eine Intensivierung der Basisimmunsuppression. Bei höhergradigen zellulären Abstoßungen und Rezidivabstoßungen erfolgen Therapien mit T‑Zell-depletierenden Antikörpern (ATG), bei humoraler Abstoßungskomponente auch in Kombination mit einer Anti-B-Zell-Therapie (z. B. Rituximab), Plasmapheresebehandlungen und die Gabe von intravenösen Immunglobulinen (IVIG) [31].\nEin Pankreastransplantat kann Ziel der allogenen Immunität (Abstoßung) und/oder der Autoimmunität (Rekurrenz der Grunderkrankung) sein. Amylase- und Lipaseerhöhung im Serum sowie ein Anstieg der Blutzuckerwerte sind häufig die einzigen, aber unspezifischen Zeichen einer Abstoßung. 
Im Falle einer kombinierten PNTX ist die histologische Diagnose aus dem Nierentransplantat oftmals ausschlaggebend. Es konnte aber auch gezeigt werden, dass eine nicht unerhebliche Diskordanz beim Auftreten von Rejektionen und deren Schweregrad in zeitgleich entnommenen Biopsien aus Pankreas- und Nierentransplantat besteht. Uva et al. [26] beschrieben lediglich in 40 % der Fälle mit Abstoßungen ein zeitgleiches Auftreten der Rejektion in beiden Organen. In einer Studie von Parajuli et al. [27] lag die Diskordanzrate für Abstoßungen bei 37,5 %. Bei den 62,5 % der Fälle mit konkordanten Befunden in beiden Organen wurden wiederum unterschiedliche Abstoßungstypen gefunden. Im eigenen Zentrum konnten wir nach zeitgleicher Punktion von Pankreas und Niere eine Konkordanz für das Auftreten bzw. Ausschluss einer Rejektion in beiden Organen von 68,4 % feststellen. Bei einer isolierten PTX oder einer PAK-Transplantation, bei welcher die Spenderorgane nicht HLA-identisch sind, spielt die Pankreasbiopsie eine noch wichtigere Rolle. Die histopathologische Begutachtung ist aufwendig und schwierig. Aufgrund der an sich schon geringen Fallzahlen der PTX und der noch seltener durchgeführten Pankreastransplantatbiopsien ist vielerorts nur begrenzte oder keine Erfahrung vorhanden, sodass wie bei anderen Spezialuntersuchungen die Biopsien meist überregional verschickt und begutachtet werden. Für die Beurteilung von Pankreastransplantatbiopsien existiert, ähnlich wie bei der NTX, seit vielen Jahren ein international anerkanntes und ständig aktualisiertes Banff-Schema zur Graduierung der Abstoßung (Tab. 1; [28–30]).1. Normal2. Unklar3. Akute T‑Zell-vermittelte Abstoßung (TCMR)Grad I/mild: aktive septale Entzündung mit Venulitis und/oder Duktitis und/oder fokale azinäre Entzündung (≤ 2 Foci pro Läppchen) ohne oder nur mit minimalem AzinuszellschadenGrad II/moderat (erfordert eine Abgrenzung zur ABMR): multifokale (aber nicht konfluente oder diffuse) azinäre Entzündung (≥ 3 Foci pro Läppchen) mit vereinzeltem, individuellem Azinuszellschaden und/oder milde intimale Arteriitis (< 25 % des Lumens)Grad III/schwer (erfordert eine Abgrenzung zur ABMR): diffuse azinäre Entzündung mit fokaler oder diffuser, multizellulärer/konfluierender Azinuszellnekrose und/oder moderate oder schwere intimale Arteriitis (> 25 % des Lumens) und/oder transmurale Entzündung – nekrotisierende Arteriitis4. 
Akute/aktive Antikörper-vermittelter Abstoßung (ABMR)a) morphologischer Nachweis eines akuten GewebeschadensGrad I (milde akute ABMR): Gut erhaltene Architektur, milde interazinäre Monozyten‑/Makrophagen- oder gemischte (Monozyten‑/Makrophagen/neutrophile) Infiltrate mit seltenem Azinuszellschaden (Schwellung, Nekrose)Grad II (moderate akute ABMR): Insgesamt erhaltene Architektur mit interazinären Monozyten-Makrophagen-Infiltraten oder gemischte (Monozyten-/Makrophagen/neutrophile) Infiltrate, Dilatation der Kapillaren, interazinäre Kapillaritis, intimale Arteriitis, Stauung, multizellulärer Azinuszellverlust und ErythrozytenextravasationGrad III (schwere akute ABMR): Architekturstörung, verstreute Entzündungsinfiltrate bei interstitiellen Einblutungen, multifokaler oder konfluierender Parenchymnekrose, arterieller oder venöser Gefäßwandnekrose, transmuraler/nekrotisierender Arteriitis und Thrombose (in Abwesenheit anderer offensichtlicher Ursachen)b) C4d-Positivität der interazinären Kapillaren (IAC) ≥ 1 % der Azinusläppchenoberflächec) Bestätigte donorspezifische Antikörper (DSA)Abschließende Einordnung:1 von 3 diagnostischen Kriterien: benötigt Ausschluss einer ABMR2 von 3 diagnostischen Kriterien: ABMR muss erwogen werden3 von 3 diagnostischen Kriterien: definitive Diagnose einer ABMR5. Chronisch-aktive ABMRKategorie 3 und/oder 4 mit chronischer Arteriopathie und/oder Kategorie 6. Spezifikation, ob TCMR, ABMR oder gemischt6. Chronische ArteriopathieFibrointimale arterielle Verbreitung mit Lumeneinengung:inaktiv: fibrointimalaktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen)Grad 0: keine LumeneinengungGrad 1: mild, ≤ 25 % LumeneinengungGrad 2: moderat, 26–50 % LumeneinengungGrad 3: schwer, ≥ 50 % Lumeneinengung7. Chronische TransplantatfibroseGrad I (milde Transplantatfibrose): < 30 %Grad II (moderate Transplantatfibrose): 30–60 %Grad III (schwere Transplantatfibrose): > 60 %8. Inselpathologie: Rekurrenz des autoimmunen Diabetes mellitus (Insulitis und/oder selektiver Betazellverlust), Amyloidablagerungen, Calcineurininhibitor-Toxizität der Inselzellen9. Andere histologische Veränderungen, welche nicht einer akuten und/oder chronischen Abstoßung zugeordnet werden, wie z. B. CMV-Pankreatitis, posttransplantations-lymphoproliferative Erkrankung\n6. Chronische Arteriopathie\nFibrointimale arterielle Verbreitung mit Lumeneinengung:\ninaktiv: fibrointimal\naktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen)\nDie Therapie der akuten Abstoßung eines Pankreastransplantates unterscheidet sich nicht wesentlich von der Abstoßungstherapie anderer transplantierter Organe. Bei milden akuten T‑zellulären Abstoßungen (Banff Grad I) erfolgt auch nach PTX eine Steroidbolustherapie mit z. B. 3 × 500 mg Prednisolon i.v. und eine Intensivierung der Basisimmunsuppression. Bei höhergradigen zellulären Abstoßungen und Rezidivabstoßungen erfolgen Therapien mit T‑Zell-depletierenden Antikörpern (ATG), bei humoraler Abstoßungskomponente auch in Kombination mit einer Anti-B-Zell-Therapie (z. B. Rituximab), Plasmapheresebehandlungen und die Gabe von intravenösen Immunglobulinen (IVIG) [31].", "Die Pankreastransplantatthrombose ist nach wie vor die häufigste Ursache für einen frühen Verlust des Pankreastransplantates. 
Bei einem plötzlichen Blutzuckeranstieg in der Frühphase nach Transplantation muss nach Ausschluss anderer Ursachen immer an eine Perfusionsstörung des Pankreastransplantates gedacht werden. Der Nachweis erfolgt in der Regel durch eine CT oder MRT des Abdomens. Die farbcodierte Duplexsonografie kann erste Hinweise auf Perfusionsstörungen ergeben. Im eigenen Zentrum hat sich in den letzten Jahren die kontrastmittelunterstützte Sonografie (CEUS) zum Perfusionsnachweis als extrem hilfreich erwiesen, da diese sofort auf Station im Patientenbett durchgeführt werden kann und kein potenziell nephrotoxisches Kontrastmittel appliziert werden muss. Bei Nachweis einer Thrombose im Bereich der Pfortader oder Milzvene sowie Thrombosen im Bereich des arteriellen Zuflusses (Y-Graft) ist eine sofortige Relaparotomie durchzuführen. In einigen Fällen ist es dabei gelungen, erfolgreich eine Thrombektomie durchzuführen und die Organfunktion zu erhalten. Auch Erfolge bei Lysetherapien wurden berichtet. Häufig bleibt einem jedoch nur die Entfernung des thrombosierten Transplantates übrig.", "Das Entstehen einer Transplantatpankreatitis ist oftmals schon intraoperativ nach Reperfusion zu bemerken. Aber auch nach der Reperfusion unauffällig anmutende Transplantate können im Verlauf schwerwiegende Formen einer Pankreatitis entwickeln. In der Diagnostik ergänzen sich klinischer Befund, Laboruntersuchungen und Bildgebung. Bei der klinischen Untersuchung finden sich meist rechtsseitig betonte Bauchschmerzen ggf. mit Peritonismus. Zusätzlich können das Entstehen eines paralytischen Ileus, Fieber sowie eine Transplantatdysfunktion auf eine Pankreatitis hinweisen. Laborchemisch besteht häufig eine deutliche CRP-Erhöhung, die mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein kann. In der Bildgebung können alle Formen einer Pankreatitis imponieren. Diese reichen von einer leichten ödematösen Form bis hin zur nekrotisierenden Pankreatitis mit Perfusionsausfällen. Bei schweren, lebensbedrohlichen Verlaufsformen einer Transplantatpankreatitis kann die partielle oder komplette Entfernung des Transplantates notwendig werden, auch wenn eine gute Funktion vorliegt.", "Bei Blutungskomplikationen nach einer PTX muss zwischen intraluminaler, intestinaler Blutung im Bereich der Anastomose und einer Blutung im Bauchraum unterschieden werden. Die Möglichkeit der einfachen endoskopischen Blutstillung im Bereich der Duodenoduodenostomie ist einer der großen Vorteile dieser Anastomosierungstechnik. Aber auch bei Anastomosen im Bereich der 1. und 2. Jejunalschlinge und intraperitonealer Positionierung des Pankreastransplantates ist über eine Push-Intestinoskopie die Anastomose zu erreichen, wobei sich dies technisch schwieriger gestaltet. Intraabdominelle Blutungen nach PTX sind kein seltenes Ereignis, zumal ein Großteil der Patienten eine postoperative Heparintherapie erhält. Kleinere Blutungen können oftmals konservativ beherrscht werden, indem die Antikoagulantientherapie pausiert wird und plasmatische Gerinnungsfaktoren optimiert werden. Hämodynamisch relevante Blutungen und ein rezidivierender Transfusionsbedarf zeigen die Indikation zur Relaparotomie an. Das Spektrum solcher Nachblutungen reicht von kleinen diffusen Blutungen aus dem Transplantatpankreas (häufig an der Mesenterialwurzel) oder kleineren Blutungen an den peritonealen Inzisionslinien bis hin zu Katastrophenblutungen aus der unteren Hohlvene und der Beckenarterie. 
Bei Arrosionsblutungen im Bereich der Beckenarterie kann in manchen Fällen das endovaskuläre Stenting der Defektzone die rettende Maßnahme für den Patienten sein.", "Ein Pankreastransplantat kann Ziel der allogenen Immunität (Abstoßung) und/oder der Autoimmunität (Rekurrenz der Grunderkrankung) sein. Amylase- und Lipaseerhöhung im Serum sowie ein Anstieg der Blutzuckerwerte sind häufig die einzigen, aber unspezifischen Zeichen einer Abstoßung. Im Falle einer kombinierten PNTX ist die histologische Diagnose aus dem Nierentransplantat oftmals ausschlaggebend. Es konnte aber auch gezeigt werden, dass eine nicht unerhebliche Diskordanz beim Auftreten von Rejektionen und deren Schweregrad in zeitgleich entnommenen Biopsien aus Pankreas- und Nierentransplantat besteht. Uva et al. [26] beschrieben lediglich in 40 % der Fälle mit Abstoßungen ein zeitgleiches Auftreten der Rejektion in beiden Organen. In einer Studie von Parajuli et al. [27] lag die Diskordanzrate für Abstoßungen bei 37,5 %. Bei den 62,5 % der Fälle mit konkordanten Befunden in beiden Organen wurden wiederum unterschiedliche Abstoßungstypen gefunden. Im eigenen Zentrum konnten wir nach zeitgleicher Punktion von Pankreas und Niere eine Konkordanz für das Auftreten bzw. Ausschluss einer Rejektion in beiden Organen von 68,4 % feststellen. Bei einer isolierten PTX oder einer PAK-Transplantation, bei welcher die Spenderorgane nicht HLA-identisch sind, spielt die Pankreasbiopsie eine noch wichtigere Rolle. Die histopathologische Begutachtung ist aufwendig und schwierig. Aufgrund der an sich schon geringen Fallzahlen der PTX und der noch seltener durchgeführten Pankreastransplantatbiopsien ist vielerorts nur begrenzte oder keine Erfahrung vorhanden, sodass wie bei anderen Spezialuntersuchungen die Biopsien meist überregional verschickt und begutachtet werden. Für die Beurteilung von Pankreastransplantatbiopsien existiert, ähnlich wie bei der NTX, seit vielen Jahren ein international anerkanntes und ständig aktualisiertes Banff-Schema zur Graduierung der Abstoßung (Tab. 1; [28–30]).1. Normal2. Unklar3. Akute T‑Zell-vermittelte Abstoßung (TCMR)Grad I/mild: aktive septale Entzündung mit Venulitis und/oder Duktitis und/oder fokale azinäre Entzündung (≤ 2 Foci pro Läppchen) ohne oder nur mit minimalem AzinuszellschadenGrad II/moderat (erfordert eine Abgrenzung zur ABMR): multifokale (aber nicht konfluente oder diffuse) azinäre Entzündung (≥ 3 Foci pro Läppchen) mit vereinzeltem, individuellem Azinuszellschaden und/oder milde intimale Arteriitis (< 25 % des Lumens)Grad III/schwer (erfordert eine Abgrenzung zur ABMR): diffuse azinäre Entzündung mit fokaler oder diffuser, multizellulärer/konfluierender Azinuszellnekrose und/oder moderate oder schwere intimale Arteriitis (> 25 % des Lumens) und/oder transmurale Entzündung – nekrotisierende Arteriitis4. 
Akute/aktive Antikörper-vermittelter Abstoßung (ABMR)a) morphologischer Nachweis eines akuten GewebeschadensGrad I (milde akute ABMR): Gut erhaltene Architektur, milde interazinäre Monozyten‑/Makrophagen- oder gemischte (Monozyten‑/Makrophagen/neutrophile) Infiltrate mit seltenem Azinuszellschaden (Schwellung, Nekrose)Grad II (moderate akute ABMR): Insgesamt erhaltene Architektur mit interazinären Monozyten-Makrophagen-Infiltraten oder gemischte (Monozyten-/Makrophagen/neutrophile) Infiltrate, Dilatation der Kapillaren, interazinäre Kapillaritis, intimale Arteriitis, Stauung, multizellulärer Azinuszellverlust und ErythrozytenextravasationGrad III (schwere akute ABMR): Architekturstörung, verstreute Entzündungsinfiltrate bei interstitiellen Einblutungen, multifokaler oder konfluierender Parenchymnekrose, arterieller oder venöser Gefäßwandnekrose, transmuraler/nekrotisierender Arteriitis und Thrombose (in Abwesenheit anderer offensichtlicher Ursachen)b) C4d-Positivität der interazinären Kapillaren (IAC) ≥ 1 % der Azinusläppchenoberflächec) Bestätigte donorspezifische Antikörper (DSA)Abschließende Einordnung:1 von 3 diagnostischen Kriterien: benötigt Ausschluss einer ABMR2 von 3 diagnostischen Kriterien: ABMR muss erwogen werden3 von 3 diagnostischen Kriterien: definitive Diagnose einer ABMR5. Chronisch-aktive ABMRKategorie 3 und/oder 4 mit chronischer Arteriopathie und/oder Kategorie 6. Spezifikation, ob TCMR, ABMR oder gemischt6. Chronische ArteriopathieFibrointimale arterielle Verbreitung mit Lumeneinengung:inaktiv: fibrointimalaktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen)Grad 0: keine LumeneinengungGrad 1: mild, ≤ 25 % LumeneinengungGrad 2: moderat, 26–50 % LumeneinengungGrad 3: schwer, ≥ 50 % Lumeneinengung7. Chronische TransplantatfibroseGrad I (milde Transplantatfibrose): < 30 %Grad II (moderate Transplantatfibrose): 30–60 %Grad III (schwere Transplantatfibrose): > 60 %8. Inselpathologie: Rekurrenz des autoimmunen Diabetes mellitus (Insulitis und/oder selektiver Betazellverlust), Amyloidablagerungen, Calcineurininhibitor-Toxizität der Inselzellen9. Andere histologische Veränderungen, welche nicht einer akuten und/oder chronischen Abstoßung zugeordnet werden, wie z. B. CMV-Pankreatitis, posttransplantations-lymphoproliferative Erkrankung\n6. Chronische Arteriopathie\nFibrointimale arterielle Verbreitung mit Lumeneinengung:\ninaktiv: fibrointimal\naktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen)\nDie Therapie der akuten Abstoßung eines Pankreastransplantates unterscheidet sich nicht wesentlich von der Abstoßungstherapie anderer transplantierter Organe. Bei milden akuten T‑zellulären Abstoßungen (Banff Grad I) erfolgt auch nach PTX eine Steroidbolustherapie mit z. B. 3 × 500 mg Prednisolon i.v. und eine Intensivierung der Basisimmunsuppression. Bei höhergradigen zellulären Abstoßungen und Rezidivabstoßungen erfolgen Therapien mit T‑Zell-depletierenden Antikörpern (ATG), bei humoraler Abstoßungskomponente auch in Kombination mit einer Anti-B-Zell-Therapie (z. B. Rituximab), Plasmapheresebehandlungen und die Gabe von intravenösen Immunglobulinen (IVIG) [31].", "Indikationen Nach einer PTX kann eine Biopsie aus verschiedenen Indikationen heraus notwendig werden. Neben der Durchführung von Indikationsbiopsien aufgrund einer Störung der Transplantatfunktion oder immunologischen Vorgängen werden von einigen Zentren auch sog. Protokollbiopsien, d. h. 
routinemäßige Entnahmen von Biopsien nach einem definierten Schema entnommen. Störungen der Pankreastransplantatfunktion äußern sich klinisch am häufigsten durch erhöhte Blutzuckerwerte. Diese können mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein. Differenzialdiagnostisch müssen dabei verschiedene Ursachen in Betracht gezogen werden [31–33]: eine primäre Nichtfunktion oder verzögerte Funktionsaufnahme des Transplantates, eine Transplantatpankreatitis, eine Thrombose, eine akute oder chronische Rejektion, eine medikationsbedingte Hyperglykämie (Steroide, Tacrolimus), Infektionen, eine Rekurrenz der Grunderkrankung, eine Toxizität von Calcineurininhibitoren (CNI) oder die Manifestation eines Diabetes mellitus Typ 2. Einen Überblick über häufige Indikationen zur Biopsie des Pankreastransplantates gibt Tab. 2.IndikationStörungen der endokrinen FunktionHyperglykämieGestörte GlukosetoleranzErhöhter HbA1c-WertStörungen der exokrinen FunktionErhöhte Amylase- und LipasewerteHumorale AbstoßungDonorspezifische Antikörper (DSA)Rekurrenz der GrunderkrankungDiabetes mellitus Typ 1 assoziierte AutoantikörperProtokollbiopsienProtokollKontrollbiopsienNach z. B. stattgehabter Abstoßungstherapie\nNach einer PTX kann eine Biopsie aus verschiedenen Indikationen heraus notwendig werden. Neben der Durchführung von Indikationsbiopsien aufgrund einer Störung der Transplantatfunktion oder immunologischen Vorgängen werden von einigen Zentren auch sog. Protokollbiopsien, d. h. routinemäßige Entnahmen von Biopsien nach einem definierten Schema entnommen. Störungen der Pankreastransplantatfunktion äußern sich klinisch am häufigsten durch erhöhte Blutzuckerwerte. Diese können mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein. Differenzialdiagnostisch müssen dabei verschiedene Ursachen in Betracht gezogen werden [31–33]: eine primäre Nichtfunktion oder verzögerte Funktionsaufnahme des Transplantates, eine Transplantatpankreatitis, eine Thrombose, eine akute oder chronische Rejektion, eine medikationsbedingte Hyperglykämie (Steroide, Tacrolimus), Infektionen, eine Rekurrenz der Grunderkrankung, eine Toxizität von Calcineurininhibitoren (CNI) oder die Manifestation eines Diabetes mellitus Typ 2. Einen Überblick über häufige Indikationen zur Biopsie des Pankreastransplantates gibt Tab. 2.IndikationStörungen der endokrinen FunktionHyperglykämieGestörte GlukosetoleranzErhöhter HbA1c-WertStörungen der exokrinen FunktionErhöhte Amylase- und LipasewerteHumorale AbstoßungDonorspezifische Antikörper (DSA)Rekurrenz der GrunderkrankungDiabetes mellitus Typ 1 assoziierte AutoantikörperProtokollbiopsienProtokollKontrollbiopsienNach z. B. stattgehabter Abstoßungstherapie\nTechnik der Biopsieentnahme Die Pankreastransplantatbiopsie kann über verschiedene Zugangswege gewonnen werden. Dazu muss zwischen operativen, endoskopischen und perkutanen Techniken unterschieden werden. Bei den operativen Verfahren stellt die Laparotomie die invasivste Variante dar. Biopsien können jedoch im Rahmen von Laparotomien aus anderen Gründen einfach simultan als Punktions- oder Exzisionsbiopsie gewonnen werden (Abb. 2). Bei intraperitonealer Lage des Transplantates und damit häufiger Überlagerung des Transplantates von Dünndarmschlingen wird zunehmend die laparoskopisch assistierte Pankreasbiopsie angewendet. Limitiert wird das laparoskopische Vorgehen in einigen Fällen durch postoperative Verwachsungen, die eine Zugänglichkeit des Transplantates erschweren und das Risiko einer perioperativen Komplikation erhöhen [34]. 
Die früher häufig und heutzutage nur noch selten durchgeführte Blasendrainage des Pankreastransplantates, bei der das Spenderduodenum mit der Harnblase anastomosiert wird, ermöglicht eine einfache zystoskopische Biopsie aus dem Spenderduodenum oder Kopf des Pankreastransplantates [35]. Die histopathologische Aufarbeitung der Biopsien vom Spenderduodenum und Pankreastransplantat ergaben jedoch Diskrepanzen im Grad der Abstoßung. Hierbei konnte das Spenderduodenum unabhängig vom Pankreastransplantat eine Abstoßung aufweisen und umgekehrt [36]. Bei der heute hauptsächlich verwendeten Dünndarmdrainagetechnik ist die Entnahme einer Biopsie aus dem Spenderduodenum im Rahmen einer Ösophagogastroduodenoskopie oder Push-Intestinoskopie möglich [36, 37]. Auch hier konnten deutliche Diskordanzen beim histologischen Nachweis einer Abstoßung im Spenderduodenum und Pankreastransplantat beobachtet werden [35, 36]. Mittels Endosonografie lassen sich sowohl aus dem Spenderduodenum als auch aus dem Pankreastransplantat Feinnadelbiopsien entnehmen, wobei die gewonnene Gewebemenge oftmals eine aussagekräftige Diagnose nicht erlaubt.\nIm eigenen Zentrum hat sich im letzten Jahrzehnt die perkutane Pankreastransplantatbiopsie durchgesetzt. Die mit diesem Verfahren gewonnenen Gewebestanzen erlauben, ähnlich wie bei der Nierentransplantatbiopsie, eine aussagekräftige histopathologische Diagnosestellung. Die perkutane Biopsie erfolgt in der Regel in Lokalanästhesie unter sonografischer oder CT-gesteuerter Kontrolle. Bei beiden Methoden wird das Parenchym des Pankreastransplantates mit einer automatisierten 16G- oder 18G-Tru-Cut®-Nadel punktiert. Dabei werden 2–3 Punktionszylinder entnommen. Je nach Anatomie bieten sich hierbei verschiedene Zugangswege an (Abb. 3). Die Schwierigkeit in der genauen Punktionslokalisation liegt bei der CT darin, dass diese nativ durchgeführt wird und sowohl das Parenchym als auch die Transplantatgefäße schlecht visualisiert werden. Im Vorfeld angefertigte CT- oder MRT-Bilder sind daher hilfreich. In der Literatur finden sich Raten von bis zu 30 %, in denen bioptisch kein Pankreasgewebe gewonnen werden konnte [38]. In den letzten Jahren konnten wir im eigenen Vorgehen unter Nutzung der kontrastmittelgestützten Sonografie (CEUS) die Trefferquote erhöhen. In jedem Fall ist die Zusammenarbeit zwischen Transplantationschirurgen und Radiologen essenziell, um die Treffsicherheit zu erhöhen und die Komplikationsrate gering zu halten. Häufigste Komplikation nach Biopsie ist die Transplantatpankreatitis. In seltenen Fällen kann es zu Blutungen sowie Abszess- oder Fistelentstehungen kommen. Die Notwendigkeit eines chirurgischen Eingriffes als Folge einer perkutanen Biopsie ist insgesamt sehr selten [33, 39].\nDie Pankreastransplantatbiopsie kann über verschiedene Zugangswege gewonnen werden. Dazu muss zwischen operativen, endoskopischen und perkutanen Techniken unterschieden werden. Bei den operativen Verfahren stellt die Laparotomie die invasivste Variante dar. Biopsien können jedoch im Rahmen von Laparotomien aus anderen Gründen einfach simultan als Punktions- oder Exzisionsbiopsie gewonnen werden (Abb. 2). Bei intraperitonealer Lage des Transplantates und damit häufiger Überlagerung des Transplantates von Dünndarmschlingen wird zunehmend die laparoskopisch assistierte Pankreasbiopsie angewendet. 
Limitiert wird das laparoskopische Vorgehen in einigen Fällen durch postoperative Verwachsungen, die eine Zugänglichkeit des Transplantates erschweren und das Risiko einer perioperativen Komplikation erhöhen [34]. Die früher häufig und heutzutage nur noch selten durchgeführte Blasendrainage des Pankreastransplantates, bei der das Spenderduodenum mit der Harnblase anastomosiert wird, ermöglicht eine einfache zystoskopische Biopsie aus dem Spenderduodenum oder Kopf des Pankreastransplantates [35]. Die histopathologische Aufarbeitung der Biopsien vom Spenderduodenum und Pankreastransplantat ergaben jedoch Diskrepanzen im Grad der Abstoßung. Hierbei konnte das Spenderduodenum unabhängig vom Pankreastransplantat eine Abstoßung aufweisen und umgekehrt [36]. Bei der heute hauptsächlich verwendeten Dünndarmdrainagetechnik ist die Entnahme einer Biopsie aus dem Spenderduodenum im Rahmen einer Ösophagogastroduodenoskopie oder Push-Intestinoskopie möglich [36, 37]. Auch hier konnten deutliche Diskordanzen beim histologischen Nachweis einer Abstoßung im Spenderduodenum und Pankreastransplantat beobachtet werden [35, 36]. Mittels Endosonografie lassen sich sowohl aus dem Spenderduodenum als auch aus dem Pankreastransplantat Feinnadelbiopsien entnehmen, wobei die gewonnene Gewebemenge oftmals eine aussagekräftige Diagnose nicht erlaubt.\nIm eigenen Zentrum hat sich im letzten Jahrzehnt die perkutane Pankreastransplantatbiopsie durchgesetzt. Die mit diesem Verfahren gewonnenen Gewebestanzen erlauben, ähnlich wie bei der Nierentransplantatbiopsie, eine aussagekräftige histopathologische Diagnosestellung. Die perkutane Biopsie erfolgt in der Regel in Lokalanästhesie unter sonografischer oder CT-gesteuerter Kontrolle. Bei beiden Methoden wird das Parenchym des Pankreastransplantates mit einer automatisierten 16G- oder 18G-Tru-Cut®-Nadel punktiert. Dabei werden 2–3 Punktionszylinder entnommen. Je nach Anatomie bieten sich hierbei verschiedene Zugangswege an (Abb. 3). Die Schwierigkeit in der genauen Punktionslokalisation liegt bei der CT darin, dass diese nativ durchgeführt wird und sowohl das Parenchym als auch die Transplantatgefäße schlecht visualisiert werden. Im Vorfeld angefertigte CT- oder MRT-Bilder sind daher hilfreich. In der Literatur finden sich Raten von bis zu 30 %, in denen bioptisch kein Pankreasgewebe gewonnen werden konnte [38]. In den letzten Jahren konnten wir im eigenen Vorgehen unter Nutzung der kontrastmittelgestützten Sonografie (CEUS) die Trefferquote erhöhen. In jedem Fall ist die Zusammenarbeit zwischen Transplantationschirurgen und Radiologen essenziell, um die Treffsicherheit zu erhöhen und die Komplikationsrate gering zu halten. Häufigste Komplikation nach Biopsie ist die Transplantatpankreatitis. In seltenen Fällen kann es zu Blutungen sowie Abszess- oder Fistelentstehungen kommen. Die Notwendigkeit eines chirurgischen Eingriffes als Folge einer perkutanen Biopsie ist insgesamt sehr selten [33, 39].", "Nach einer PTX kann eine Biopsie aus verschiedenen Indikationen heraus notwendig werden. Neben der Durchführung von Indikationsbiopsien aufgrund einer Störung der Transplantatfunktion oder immunologischen Vorgängen werden von einigen Zentren auch sog. Protokollbiopsien, d. h. routinemäßige Entnahmen von Biopsien nach einem definierten Schema entnommen. Störungen der Pankreastransplantatfunktion äußern sich klinisch am häufigsten durch erhöhte Blutzuckerwerte. Diese können mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein. 
Differenzialdiagnostisch müssen dabei verschiedene Ursachen in Betracht gezogen werden [31–33]: eine primäre Nichtfunktion oder verzögerte Funktionsaufnahme des Transplantates, eine Transplantatpankreatitis, eine Thrombose, eine akute oder chronische Rejektion, eine medikationsbedingte Hyperglykämie (Steroide, Tacrolimus), Infektionen, eine Rekurrenz der Grunderkrankung, eine Toxizität von Calcineurininhibitoren (CNI) oder die Manifestation eines Diabetes mellitus Typ 2. Einen Überblick über häufige Indikationen zur Biopsie des Pankreastransplantates gibt Tab. 2.

Tab. 2 (Indikationen zur Biopsie des Pankreastransplantates):
– Störungen der endokrinen Funktion: Hyperglykämie, gestörte Glukosetoleranz, erhöhter HbA1c-Wert
– Störungen der exokrinen Funktion: erhöhte Amylase- und Lipasewerte
– Humorale Abstoßung: donorspezifische Antikörper (DSA)
– Rekurrenz der Grunderkrankung: Diabetes-mellitus-Typ-1-assoziierte Autoantikörper
– Protokollbiopsien: Entnahme nach definiertem Protokoll
– Kontrollbiopsien: z. B. nach stattgehabter Abstoßungstherapie
Im Vergleich zu den US-amerikanischen Spenderdaten sind deutsche Pankreasspender signifikant älter, und der Anteil zerebrovaskulärer Todesursachen ist höher. Trotz dieser Tatsache konnten in den letzten Jahren sehr gute Ergebnisse nach PTX in deutschen Zentren erzielt werden [40]. Mit einem 1‑Jahres-Patienten- und 1‑Jahres-Transplantat-Überleben von 92 % bzw. 83 % stehen diese den internationalen Ergebnissen in nichts nach (Tab. 3). Die Überlebensraten sind für die verschiedenen Formen der PTX nicht direkt vergleichbar, da es sich um Patientenkollektive mit unterschiedlicher Morbidität handelt (z. B. urämische vs. nichturämische Patienten). Die besten Ergebnisse werden nach kombinierter SPK erzielt, mit einem Pankreastransplantatüberleben von 89 %, 71 % und 57 % nach 1, 5 und 10 Jahren. Für die gleichen Zeiträume liegen die Ergebnisse nach alleiniger PTA bei 84 %, 52 % und 38 % [2]. Die Transplantat-Halbwertszeiten für Pankreata (50 % Funktionsrate) werden mit 14 Jahren (SPK) bzw. 7 Jahren (PAK, PTA) angegeben [2]. Durch die langfristige Normalisierung des Glukosestoffwechsels kommt es zu einer signifikanten Senkung der Mortalität, die bei der SPK deutlich größer ausfällt als bei alleiniger Nierentransplantation eines Typ-1-Diabetikers [8].

Tab. 3 (Qualitätsindikatoren [QI]; Angaben als Referenzbereich / 2016/2017 / 2018/2019, jeweils in %):
– Sterblichkeit im Krankenhaus: < 5 / 3,57 / 4,76
– 1-Jahres-Überleben bei bekanntem Status: > 90 / 92,96 / 91,36
– 2-Jahres-Überleben bei bekanntem Status: > 80 / 91,52 / 91,23
– 3-Jahres-Überleben bei bekanntem Status: > 75 / 91,10 / 90,20
– Qualität der Transplantatfunktion bei Entlassung: > 75 / 78,95 / 83,23
– Qualität der Transplantatfunktion (1 Jahr nach Entlassung): nicht definiert / 83,07 / 92,16
– Qualität der Transplantatfunktion (2 Jahre nach Entlassung): nicht definiert / 78,95 / 83,23
– Qualität der Transplantatfunktion (3 Jahre nach Entlassung): nicht definiert / 74,67 / 75,79
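Die Lesart von Tab. 3 – beobachtete Werte werden gegen den jeweiligen Referenzbereich geprüft – lässt sich rein illustrativ als kleine Python-Skizze darstellen. Die Skizze stammt vom Bearbeiter, ist kein Bestandteil der Originalarbeit, und die gewählten Namen sind frei erfunden:

# Illustrative Skizze: Prüfung ausgewählter Qualitätsindikatoren aus Tab. 3
# gegen ihren Referenzbereich (Richtung "<" bzw. ">" plus Schwellenwert in %).
qi_referenz = {
    "Sterblichkeit im Krankenhaus": ("<", 5.0),
    "1-Jahres-Überleben bei bekanntem Status": (">", 90.0),
    "Qualität der Transplantatfunktion bei Entlassung": (">", 75.0),
}
qi_2018_2019 = {
    "Sterblichkeit im Krankenhaus": 4.76,
    "1-Jahres-Überleben bei bekanntem Status": 91.36,
    "Qualität der Transplantatfunktion bei Entlassung": 83.23,
}

def referenz_erfuellt(indikator: str, wert: float) -> bool:
    richtung, schwelle = qi_referenz[indikator]
    return wert < schwelle if richtung == "<" else wert > schwelle

for indikator, wert in qi_2018_2019.items():
    status = "im Referenzbereich" if referenz_erfuellt(indikator, wert) else "außerhalb des Referenzbereichs"
    print(f"{indikator}: {wert} % -> {status}")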
Wie bei Organbiopsien üblich erfolgt nach Eingang und Prüfung der klinischen Angaben zunächst eine makroskopische Beurteilung der formalinfixierten Proben, wobei hier für eine erste Einschätzung auch ein Durchlichtmikroskop hilfreich sein kann. Da eine definitive Einordnung in endo- und exokrines Pankreas in der Lichtmikroskopie des Feuchtmaterials allerdings nicht möglich ist, empfiehlt sich zunächst die Herstellung von HE- und PAS-Schnitten sowie die Anfertigung von mindestens 6 Leerschnitten für ergänzende histochemische und immunhistologische Untersuchungen, die nach Prüfung des Gewebes an den HE-/PAS-Schnitten angefordert werden können. Weiterhin empfiehlt es sich, in jedem Fall eine Bindegewebsfaserfärbung (z. B. Siriusrot, Elastica-van-Gieson [EvG] oder Masson-Goldner [MG]) und die 5 folgenden immunhistologischen Färbungen anzufertigen: C4d als Marker einer antikörpervermittelten Transplantatabstoßung (ABMR), CD3 als T‑Zell- und CD68 als Makrophagenmarker sowie Insulin und Glukagon zur Visualisierung der endokrinen Pankreasinseln [28, 38, 39]. Ein entsprechendes Vorgehen – allerdings primär ohne ergänzende immunhistologische Untersuchungen – wird auch bei den selten durchgeführten und schwieriger zu beurteilenden Biopsien des Spenderduodenums angewendet, deren Signifikanz für die Pankreasabstoßung insgesamt sehr kontrovers diskutiert wird [41].

Eine Analyse von 38 ausgewählten Genen (u. a. endotheliale Gene, NK-Zellgene und Entzündungsgene) mittels NanoString-nCounter-Technologie an 52 formalinfixierten Pankreasbiopsien zeigte, dass die Pankreasabstoßungsdiagnostik durch ergänzende molekulare Marker verbessert werden kann [42]. Diese aufwendige und kostenintensive molekulare Zusatzdiagnostik wird derzeit jedoch nicht routinemäßig eingesetzt.

Für die Beurteilung von transplantatassoziierten Veränderungen ist primär die Kenntnis der normalen Anatomie des Pankreas und der Normalbefunde der verwendeten immunhistologischen Färbungen hilfreich (Abb. 4). Zunächst sollte in der HE- und PAS-Färbung (Abb. 4a, b) geprüft werden, ob exo- und endokrines Pankreasgewebe vorhanden ist und wie viele Läppchen erfasst sind. In der CD3- und der CD68-Färbung zeigen sich üblicherweise nur sehr wenige T‑Zellen und Gewebehistiozyten/Makrophagen (Abb. 4c, d). In der Insulin- und der Glukagon-Immunhistologie lassen sich die endokrinen Inseln und hier speziell die Alpha- und Betazellen meist sehr schön darstellen (Abb. 4e, f).

Die morphologischen Veränderungen des Pankreas werden gemäß der aktuellen international gültigen Banff-Klassifikation eingeteilt [30]: Bei der häufigen akuten T‑Zell-vermittelten Transplantatabstoßung (TCMR; Tab. 1; Abb. 5) finden sich die folgenden histologischen Veränderungen: septale Entzündung mit aktivierten Lymphozyten- und z. T. Eosinophileninfiltraten, Venulitis und Duktitis, azinäre Entzündung und Endothelialitis (Abb. 5a–f). Das Ausmaß der T‑zellulären oder makrophagozytären Infiltration lässt sich gut durch eine ergänzende CD3- (Abb. 5g) bzw. CD68-Färbung (Abb. 5h) visualisieren, wobei das T‑zelluläre Infiltrat eine größere Rolle spielt als das monozytär-makrophagozytäre Infiltrat (Tab. 4). Die TCMR wird entsprechend ihrem Schweregrad in 3 Kategorien (mild, moderat, schwer) eingeteilt; die schwere Form mit diffuser azinärer Entzündung, fokaler oder diffuser Azinuszellnekrose und/oder moderater oder schwerer intimaler Arteriitis (> 25 % des Lumens) macht eine Abgrenzung zur akuten antikörpervermittelten Transplantatabstoßung (ABMR) notwendig, sodass hier in jedem Fall nach dem Vorhandensein donorspezifischer Antikörper (DSA) zu fragen ist (Tab. 1).
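Die Banff-Schweregradeinteilung der akuten TCMR (mild/moderat/schwer; vgl. Tab. 1) lässt sich als Entscheidungsregel lesen. Die folgende, stark vereinfachte Python-Skizze des Bearbeiters ist rein illustrativ, kein Bestandteil der Banff-Klassifikation oder der Originalarbeit und ersetzt keine histopathologische Befundung; Funktions- und Parameternamen sind frei gewählt:

def tcmr_schweregrad(azinaere_foci_pro_laeppchen: int,
                     diffuse_azinaere_entzuendung: bool,
                     azinuszellnekrose: bool,
                     intimale_arteriitis_prozent: float) -> str:
    """Stark vereinfachte Skizze der TCMR-Graduierung nach Tab. 1
    (setzt voraus, dass überhaupt eine TCMR vorliegt)."""
    # Grad III/schwer: diffuse azinäre Entzündung mit Azinuszellnekrose
    # und/oder moderate bis schwere intimale Arteriitis (> 25 % des Lumens)
    if (diffuse_azinaere_entzuendung and azinuszellnekrose) or intimale_arteriitis_prozent > 25:
        return "schwer (Grad III) - Abgrenzung zur ABMR erforderlich, DSA bestimmen"
    # Grad II/moderat: multifokale azinäre Entzündung (>= 3 Foci pro Läppchen)
    # und/oder milde intimale Arteriitis (< 25 % des Lumens)
    if azinaere_foci_pro_laeppchen >= 3 or 0 < intimale_arteriitis_prozent <= 25:
        return "moderat (Grad II) - Abgrenzung zur ABMR erforderlich"
    # Grad I/mild: septale Entzündung, allenfalls fokale azinäre Entzündung
    # (<= 2 Foci pro Läppchen) ohne bzw. mit minimalem Azinuszellschaden
    return "mild (Grad I)"

# Beispiel: 4 entzündliche Foci pro Läppchen, keine Nekrose, keine Arteriitis
print(tcmr_schweregrad(4, False, False, 0))  # -> moderat (Grad II) - Abgrenzung zur ABMR erforderlich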
Tab. 4 (semiquantitativer Vergleich der Entzündungsinfiltrate bei TCMR und ABMR):
– Septale Infiltrate: TCMR +++, ABMR − bis +
– Eosinophile Granulozyten: TCMR + bis +++, ABMR − bis +
– Neutrophile Granulozyten: TCMR − bis ++, ABMR ± bis +++
– T‑Lymphozyten: TCMR ++ bis +++, ABMR ± bis +
– Makrophagen: TCMR ++, ABMR ++++

Tab. 5 (Graduierung der TCMR im Spenderduodenum; Grad – wichtigste histologische Befunde):
– Nicht eindeutig bzw. nicht ausreichend für eine TCMR (indeterminate): minimales lokalisiertes Entzündungsinfiltrat, minimaler Kryptenepithelschaden, vermehrte epitheliale Apoptosen (< 6 Apoptosekörperchen/10 Krypten), keine oder nur minimale Architekturstörung der Schleimhaut, keine Schleimhautulzerationen und keine eindeutigen Veränderungen einer milden TCMR
– Milde TCMR: mildes lokalisiertes Entzündungsinfiltrat mit aktivierten Lymphozyten, milder Kryptenepithelschaden, vermehrte epitheliale Apoptosen (> 6 Apoptosekörperchen/10 Krypten), milde Architekturstörung der Schleimhaut, keine Schleimhautulzerationen
– Moderate TCMR: diffuses Entzündungsinfiltrat in der Lamina propria, diffuser Kryptenepithelschaden, vermehrt epitheliale Apoptosen mit fokaler Konfluenz, stärkere Schleimhautarchitekturstörung, milde bis mäßige intimale Arteriitis möglich, keine Schleimhautulzerationen
– Schwere TCMR: Veränderungen der moderaten TCMR plus mukosale Ulzerationen, auch schwere oder transmurale intimale Arteriitis möglich
(TCMR T‑Zell-vermittelte Abstoßung)

Tab. 6 (eigene Ergebnisse; Anzahl der Fälle [Prozentwert in %]):
– Alle Biopsien (a): Pankreasbiopsien/Duodenalbiopsien 93 (100)/3; kein Material (Pankreasbiopsien) 32 (34,4)
– Pankreasbiopsien mit verwertbarem Material (b): Pankreasbiopsien gesamt 61 (100); keine Abstoßung 15 (24,6); akute TCMR: indeterminate 4 (6,6), Grad I 30 (49,2), Grad II 8 (13,1); aktive ABMR: definitiv 0 (0), erwägen 3 (4,9), ausschließen 2 (3,3); akuter Azinusepithelschaden 36 (59); Allograftfibrose: Grad I 29 (47,5), Grad II 2 (3,3), Grad III 2 (3,3); chronische Allograftarteriopathie 1 (1,6); Insulitis 2 (möglich) (3,3); CNI-Toxizität 3 (möglich) (4,9); Pankreatitis 5 (8,2)
– Abkürzungen: ABMR antikörpervermittelte Abstoßung, CNI Calcineurininhibitor, TCMR T‑Zell-vermittelte Abstoßung
– (a) Prozentwerte bezogen auf alle Pankreasbiopsien, unabhängig davon, ob verwertbares Material vorhanden war
– (b) Prozentwerte bezogen auf Pankreasbiopsien mit verwertbarem Material

Auch im biopsierten Spenderduodenum (Tab. 5; Abb. 6a, b) können sich charakteristische Veränderungen einer TCMR finden: eine unterschiedlich ausgeprägte Vermehrung von Lymphozyten intraepithelial und in der Lamina propria, eine erhöhte epitheliale Apoptoserate sowie eine Architekturstörung der Schleimhaut (definiert als Verplumpung/Abflachung der Villi in dem am besten orientierten Schnitt) [43]. Im schweren Stadium zeigt sich auch eine ausgeprägte Schleimhautdestruktion mit Kryptenverlust, Verschorfung und neutrophilenreichem Infiltrat (Tab. 5; [43]). Hiervon abzugrenzen sind jedoch ischämische Veränderungen, die im Einzelfall ein ähnliches histologisches Bild induzieren können.

Die ABMR manifestiert sich am transplantierten Pankreas als mikrovaskulärer Endothelzellschaden des exokrinen Parenchyms, interazinäre Entzündung, Azinusepithelschaden, Vaskulitis und Thrombose, kann in der HE-Färbung aber auch relativ blande aussehen (Abb. 7a).
Neben DSA und der Morphologie ist auch die spezifische C4d-Positivität der interazinären Kapillaren eines der 3 diagnostischen Kriterien der ABMR [29, 30], wofür eine C4d-Färbung notwendig ist (Abb. 7b). Im Gegensatz zur TCMR ist die ABMR v. a. durch ein monozytär-makrophagozytäres Infiltrat charakterisiert (Tab. 4; Abb. 7c). Zur Darstellung der interazinären Kapillaren und speziell zur Beurteilung der Dilatation und Endothelzellschwellung als Zeichen der kapillären Schädigung kann eine CD34-Immunhistochemie hilfreich sein (Abb. 7d).

Differenzialdiagnostisch sind zum einen anatomische bzw. nicht krankheitswertige Variationen zu nennen, wie z. B. die relativ häufige Vermehrung von Fettzellen im Pankreas (Abb. 8a; [44]), zum anderen z. T. transplantationsassoziierte Veränderungen wie tryptische Pankreasnekrosen (Abb. 8b), ischämische Posttransplantationspankreatitis, Peripankreatitis bzw. peripankreatische Flüssigkeitsansammlung/Ödem, CMV-Pankreatitis, posttransplantations-lymphoproliferative Erkrankung (PTLD), bakterielle Infektionen oder Pilzinfektionen, eine Rekurrenz der Autoimmunerkrankung/des Diabetes mellitus oder eine CNI-Toxizität, die sich meist als Inselzellschaden, z. B. als Vakuolisierung der endokrinen Zellen, zeigt (Abb. 8c; [32]). Die Vakuolisierung von Inselzellen war in einer Studie, die die histologischen Befunde unter den beiden CNIs Cyclosporin A und Tacrolimus untersuchte, bei 2 Kontrollfällen nur minimal ausgeprägt; grobe Vakuolisierungen – wie bei CNI-Toxizität – wurden nicht beobachtet. Als weitere Merkmale der CNI-Toxizität fanden sich Inselzellschwellungen, Apoptosen und eine verminderte Reaktivität für Insulin in der Immunhistochemie [32].

Im Zeitraum von Juni 2017 bis Dezember 2020 wurden aus der Chirurgischen Klinik des Universitätsklinikums Knappschaftskrankenhaus Bochum insgesamt 93 Pankreastransplantatbiopsien und 3 Biopsien des Spenderduodenums von 49 Patienten in der Nephropathologie Erlangen untersucht. Die Ergebnisse der Pankreasbiopsien, wie in den Originalbefunden dokumentiert, sind in Tab. 6 zusammengefasst und in Abb. 9 teilweise illustriert. In einem Drittel der Biopsien (34,4 %) wurde kein diagnostisches Material gewonnen. Die bei weitem häufigste Diagnose war eine TCMR. Für die Einordnung von Befunden wie Insulitis, CNI-Toxizität oder einer ABMR ist der klinische Kontext für die abschließende Interpretation von größter Wichtigkeit, sodass allein anhand der Histologie häufig keine definitive Diagnose zu stellen ist.
Im gleichen Zeitraum wurden 3 Biopsien des Spenderduodenums eingesandt, von denen eine Zeichen einer TCMR zeigte und 2 weitere keine Abstoßungszeichen aufwiesen.

Fazit:
Die Biopsie des transplantierten Pankreas oder in seltenen Fällen auch des Spenderduodenums mit anschließender standardisierter Beurteilung entsprechend der aktuellen international gültigen Banff-Klassifikation der Pankreasabstoßung und der Empfehlungen zur Beurteilung von Duodenalbiopsien hat ihren festen Stellenwert in der Behandlung pankreas‑/nierentransplantierter Patienten.
Wie bei anderen transplantierten parenchymatösen Organen werden die Biopsien nach klinischer Indikationsstellung durchgeführt und anschließend in entsprechend ausgerüsteten Pathologien gemäß den aktuell geltenden Einteilungen standardisiert aufgearbeitet und beurteilt.
Diese Einordnung der morphologischen Veränderungen nach der Banff-Klassifikation der Pankreastransplantatabstoßung ist wiederum mit entsprechenden klinischen Handlungsanweisungen verknüpft, die in größeren klinischen Studien erprobt wurden und werden und mit entsprechenden Erfolgswahrscheinlichkeiten einhergehen.
Schlüsselwörter: Biopsie · Diabetes mellitus · Duodenum · Histologie · Nierentransplantation
Keywords: Biopsy · Diabetes mellitus · Duodenum · Histology · Kidney transplantation
Simultane Pankreas‑/Nierentransplantation (SPK): Die klassische Indikation zur simultanen PNTX stellt der juvenile Typ I‑Diabetes mit terminaler Niereninsuffizienz dar. Eine kombinierte PNTX ist jedoch auch im Stadium der präterminalen Niereninsuffizienz möglich. So kann ein Patient ab Stadium 4 der chronischen Niereninsuffizienz (glomeruläre Filtrationsrate [GFR], < 30 ml/min) in die Warteliste für eine kombinierte PNTX aufgenommen werden. Diese präemptive Transplantation führt zu einer Reduktion der perioperativen Mortalität und verbessert das Langzeitüberleben der Patienten erheblich. In Europa erfolgt ein Großteil (89 %) der PTX zusammen mit einer Nierentransplantation [12]. Dabei stammen beide Organe vom selben Organspender. Alleinige Pankreastransplantation (PTA): In seltenen Fällen kann bei Patienten mit extrem instabilem Diabetes mellitus Typ I (sog. Brittle-Diabetes) die Indikation zur alleinigen PTX gestellt werden. Hypoglykämie-Wahrnehmungsstörungen, das Vorliegen einer subkutanen Insulinresistenz, aber auch ein frühes Auftreten und rasches Fortschreiten diabetischer Spätschäden können eine alleinige PTX rechtfertigen. In diesen Fällen sollte die Nierenfunktion aufgrund der zu erwartenden Nephrotoxizität der Immunsuppressiva nicht oder nur gering beeinträchtigt sein. Trotz Verbesserungen der Ergebnisse ist die alleinige PTX keine prinzipielle Alternative zu konservativen Therapiemöglichkeiten. Die Indikation sollte streng gestellt werden und nur interdisziplinär, z. B. im Rahmen der Transplantationskonferenz, nachdem wichtige Alternativen (z. B. eine Insulinpumpentherapie) ausgeschöpft und selbstinduzierte Hypoglykämien bei psychischen Störungen ausgeschlossen wurden [13]. Es muss eine sorgfältige Abwägung des Operationsrisikos und der Nebenwirkungen der immunsuppressiven Therapie gegenüber den zu erwartenden Vorteilen einer Transplantation erfolgen. Pankreastransplantation nach erfolgter Nierentransplantation (PAK): Nach stattgehabter Leichennieren- oder Lebendnierentransplantation eines Typ-I-Diabetikers kann eine PTX erfolgen. Bei dieser Art der Transplantation fehlt die immunologische Identität der Transplantate. Von Vorteil ist jedoch, dass die Immunsuppression zum Zeitpunkt der PTX bereits besteht. Weitere Vorteile dieser Transplantationsform sind eine reduzierte Wartezeit sowie eine geringere Mortalität im Vergleich zur simultanen PNTX [14]. Indikationen zur Pankreastransplantation: Entsprechend der aktuellen Richtlinie für die Wartelistenführung und die Organvermittlung zur PTX und kombinierten PNTX müssen folgende Kriterien für die Aufnahme in die Warteliste erfüllt werden [15]: Vorhandensein von Autoantikörpern gegen Glutamatdecarboxylase (GAD) und/oder Inselzellen (ICA) und/oder Tyrosinphosphatase 2 (IA-2) und/oder Zinktransporter 8 (ZnT8) und/oder Insulin (IAA). Der Autoantikörpernachweis von GAD, IA‑2, ICA und ZnT8 kann zum Zeitpunkt der Listung oder in der Vergangenheit erfolgt sein. Der positive Befund für IAA ist nur dann akzeptabel, wenn das Nachweisdatum vor Beginn der Insulintherapie liegt. Daher muss für den Nachweis der IAA das Datum der Testung und der Beginn der Insulintherapie an Eurotransplant übermittelt werden. Können keine Autoantikörper detektiert werden, muss eine Betazelldefizienz nachgewiesen werden. 
Diese wird in der aktuellen Richtlinie definiert als:
– C‑Peptid vor Stimulation < 0,5 ng/ml mit einem Anstieg nach Stimulation von < 20 %, wenn parallel zu diesem Messzeitpunkt kein Blutzuckerwert vorliegt, oder
– C‑Peptid vor Stimulation < 0,5 ng/ml mit einem gleichzeitig erhobenen Blutzuckerwert ≥ 70 mg/dl (bzw. ≥ 3,9 mmol/l) oder
– C‑Peptid nach Stimulation < 0,8 ng/ml mit einem gleichzeitigen Blutzuckeranstieg auf ≥ 100 mg/dl (bzw. ≥ 5,6 mmol/l).
Als Stimulationstests werden ein oraler Glukosetoleranztest (OGTT), ein Mixed-Meal-Toleranztest (MMTT) oder ein intravenöser oder subkutaner Glukagontest akzeptiert. Zusätzlich können bedrohliche und rasch fortschreitende diabetische Spätfolgen, das Syndrom der unbemerkten schweren Hypoglykämie, ein exzessiver Insulinbedarf oder fehlende Applikationswege für Insulin bei subkutaner Insulinresistenz eine Indikation zur isolierten PTX darstellen. Über die mögliche Aufnahme in die Warteliste entscheidet dann in jedem Einzelfall eine Auditgruppe bei der Vermittlungsstelle (Eurotransplant Pancreas Advisory Committee, EPAC). Die PTX bei Typ-2-Diabetikern wird weiterhin kontrovers diskutiert. Der Anteil der pankreastransplantierten Typ-2-Diabetiker liegt weltweit zwischen 1 % und 8 %, ist aber in den letzten Jahren kontinuierlich angestiegen [2]. Schlanke Typ-2-Diabetiker mit geringem Insulinbedarf und Niereninsuffizienz sowie Patienten mit MODY-Diabetes („maturity onset diabetes of the young“) profitieren in ähnlicher Weise von einer kombinierten PNTX wie Typ-1-Diabetiker [16]. Auch hier entscheidet das EPAC über die Aufnahme auf die Warteliste.
Kontraindikationen für eine PTX stellen eine bestehende Malignomerkrankung, eine nicht sanierte akute oder chronische Infektion sowie eine schwere psychische Störung und Non-Adhärenz dar. Ebenfalls können ausgeprägte kardiovaskuläre Erkrankungen, soweit nicht therapierbar, eine Kontraindikation darstellen. Bei Patienten mit einem Body-Mass-Index (BMI) > 30 kg/m² sollte nur in Ausnahmefällen eine PTX durchgeführt werden. Einerseits besteht eine Assoziation zwischen Übergewicht und der perioperativen Morbidität und Mortalität, andererseits kann der Transplantationserfolg mit Erreichen einer Insulinunabhängigkeit und Normoglykämie durch das bestehende Übergewicht gefährdet werden. Das Patientenalter allein stellt keine Kontraindikation zur PTX dar. Waren in den letzten Jahrzehnten noch viele Zentren sehr zurückhaltend bei der PTX über 50-jähriger Patienten, so konnte in den letzten Jahren, insbesondere in High-volume-Zentren, gezeigt werden, dass auch bei älteren Patienten durchaus gute Ergebnisse nach einer PTX erzielt werden können [17, 18]. Kardiovaskuläre Erkrankungen stellen mit Abstand den größten Komorbiditätsfaktor bei Patienten mit Diabetes mellitus Typ 1 und Niereninsuffizienz dar. Potenzielle Pankreastransplantatempfänger mit kardialer Anamnese, Auffälligkeiten in der nichtinvasiven Diagnostik (Echokardiografie, Myokardszintigrafie), einem Lebensalter > 50 Jahre oder bereits bestehender Dialysepflichtigkeit sollten vor einer PTX immer einer Koronarangiografie unterzogen werden.
Ebenso sorgfältig sollten die peripheren Gefäßverhältnisse, insbesondere die Iliakalgefäße, hinsichtlich ihrer Anastomosierungsfähigkeit untersucht werden. Spenderkriterien: Die Spenderkriterien sowie eine optimale Organentnahme und Organkonservierung sind mitentscheidend für den Erfolg einer PTX. Eine Vielzahl von Spenderparametern wurde hinsichtlich ihrer Bedeutung für das Ergebnis nach PTX untersucht. Wesentliche Faktoren sind das Spenderalter, BMI, Todesursache (traumatisch oder kardiovaskulär), Verweildauer auf der Intensivstation, erfolgte Reanimation, hypotensive Phasen und der Einsatz von Katecholaminen. Des Weiteren fließen Laborparameter wie Serumamylase und Serumlipase, Vorliegen einer Hypernatriämie, Blutzucker und Serumkreatinin in die Entscheidung über die Transplantabilität eines Spenderpankreas ein [19, 20]. Die Dauer der Ischämiezeit beeinflusst die Rate und Intensität von Pankreatitiden und somit die Funktionsrate nach Transplantation, während der Einfluss verschiedener Konservierungslösungen kontrovers diskutiert wird. Neben dem Vorliegen von Malignomen und Infektionskrankheiten gelten ein Diabetes mellitus, eine akute oder chronische Pankreatitis, vorausgegangene chirurgische Eingriffe am Pankreas und ein höhergradiges Pankreastrauma als Kontraindikationen beim Spender. Von einigen Autoren werden ein chronischer Alkoholabusus, eine intraabdominelle Sepsis sowie ein BMI > 35 kg/m2 ebenfalls als Kontraindikationen angesehen [20]. Viele Transplantationschirurgen akzeptieren keine Spenderpankreata mit Kalzifikationen, fortgeschrittener Organfibrose, infiltrativen Verfettungen, deutlichem Ödem sowie einer ausgeprägten Atherosklerose der Viszeral- und Beckenarterien des Spenders. Ebenso von Bedeutung ist die zu palpierende Konsistenz des Organs. Dabei ist die intraoperative Beurteilung des Pankreas durch einen erfahrenen Transplantationschirurgen von wesentlicher Bedeutung. Diese Kriterien erscheinen etwas subjektiv, sind jedoch in der Allokationsrealität Laborwerten und demografischen Daten des Spenders überlegen. Im Vergleich mit der Akzeptanz anderer Organe zur Transplantation ist die Ablehnungsrate potenzieller Pankreasspender sowohl in Europa als auch in den USA hoch. So wurden z. B. im Jahr 2019 nur 21 % der bei Eurotransplant gemeldeten Spenderpankreata transplantiert [21]. Technik der Pankreastransplantation: Die klinische PTX unterlag zahlreichen Veränderungen und Modifikationen hinsichtlich der chirurgischen Technik. Auch heute werden diverse Techniken nebeneinander angewendet, ohne dass letztlich die Überlegenheit der einen oder anderen Methode nachgewiesen werden konnte. Allein die Tatsache, dass in einigen Zentren parallel verschiedene Techniken Verwendung finden, verdeutlicht diese Situation. Nach ihrer Einführung war die PTX mit einer sehr hohen peri- und postoperativen Morbidität und Mortalität behaftet, sodass sie viele Jahre als experimentelles Verfahren angesehen wurde. Vordergründig für die schlechten Ergebnisse zu dieser Zeit war das ungenügende Management der exokrinen Pankreassekretion. Dieses ist auch heute noch von zentraler Bedeutung, da die Freisetzung von aktivierten Pankreasenzymen, ähnlich wie bei der akuten Pankreatitis, zu gravierenden Gewebeschädigungen führen kann. Erst mit der von Sollinger 1983 [22] entwickelten Blasendrainagetechnik und der Einführung der Pankreasduodenaltransplantation durch Nghiem und Corry 1987 ist die PTX im Hinblick auf das Auftreten von Nahtinsuffizienzen und Pankreasfisteln sicherer geworden [23]. 
Bei dieser Technik wird das gesamte Spenderpankreas mit einem blindverschlossenen, kurzen Duodenalsegment transplantiert. Das exokrine Sekret wird dabei über eine in Seit-zu-Seit-Technik angelegte Duodenozystostomie in die Harnblase abgeleitet. Dieses Verfahren erlaubte durch Bestimmung von Amylase und Lipase im Urin ein Transplantatmonitoring, war jedoch durch das Auftreten schwerer metabolischer Azidosen, Dehydratationen, Harnblasen- und Harnröhrenentzündungen sowie Refluxpankreatitiden mit vielen Komplikationen behaftet. Mit der Verfügbarkeit einer besseren Immunsuppression und der damit verbundenen Reduktion immunologisch bedingter Transplantatverluste hat sich seit Mitte der 1990er-Jahre in den meisten Transplantationszentren die enterale Drainage in den Dünndarm durchgesetzt. Hierbei wird das exokrine Pankreassekret durch eine Seit-zu-Seit-Duodenojejunostomie oder eine Duodenoduodenostomie in den Darm abgeleitet. Auch Rekonstruktionen über eine nach Roux‑Y ausgeschaltete Dünndarmschlinge finden Anwendung. In den letzten Jahren wird von einigen Zentren eine retroperitoneale Platzierung des Pankreastransplantates favorisiert [24, 25]. Die endokrine Drainage oder venöse Ableitung des Pankreastransplantates kann systemisch-venös in die V. cava inferior oder portal-venös in die V. mesenterica superior erfolgen. Ob das technisch anspruchsvollere, aber physiologischere Verfahren der portal-venösen Drainage metabolische Vorteile hat, ist bisher nicht bewiesen. Beide Verfahren führen zu vergleichbaren Ergebnissen. Seit 2007 bevorzugen wir die retroperitoneale Positionierung des Transplantates unter Anlage einer Seit-zu-Seit-Duodenoduodenostomie mit portal-venöser oder systemisch-venöser Anastomosierung (Abb. 1). Der Vorteil der Duodenoduodenostomie besteht darin, dass sowohl die Dünndarmanastomose als auch der Pankreaskopf des Transplantates endoskopisch erreicht werden können. Somit wird es möglich, endoskopisch Biopsien zur Abstoßungsdiagnostik zu gewinnen. Des Weiteren kann im Falle einer intestinalen Blutung im Anastomosenbereich eine einfache endoskopische Intervention zur Blutstillung erfolgen. Mögliche Nachteile ergeben sich im Falle einer Anastomoseninsuffizienz oder eines Transplantatverlustes, da die resultierende Leckage im Bereich des Duodenums chirurgisch schwieriger zu versorgen ist. Neben vielen weiteren Vorteilen der retroperitonealen Positionierung ist das Pankreastransplantat besser sonografisch darstellbar und einfach perkutan zu punktieren. Komplikationen nach Pankreastransplantation (PTX): Transplantatthrombose Die Pankreastransplantatthrombose ist nach wie vor die häufigste Ursache für einen frühen Verlust des Pankreastransplantates. Bei einem plötzlichen Blutzuckeranstieg in der Frühphase nach Transplantation muss nach Ausschluss anderer Ursachen immer an eine Perfusionsstörung des Pankreastransplantates gedacht werden. Der Nachweis erfolgt in der Regel durch eine CT oder MRT des Abdomens. Die farbcodierte Duplexsonografie kann erste Hinweise auf Perfusionsstörungen ergeben. Im eigenen Zentrum hat sich in den letzten Jahren die kontrastmittelunterstützte Sonografie (CEUS) zum Perfusionsnachweis als extrem hilfreich erwiesen, da diese sofort auf Station im Patientenbett durchgeführt werden kann und kein potenziell nephrotoxisches Kontrastmittel appliziert werden muss. Bei Nachweis einer Thrombose im Bereich der Pfortader oder Milzvene sowie Thrombosen im Bereich des arteriellen Zuflusses (Y-Graft) ist eine sofortige Relaparotomie durchzuführen. 
In einigen Fällen ist es dabei gelungen, erfolgreich eine Thrombektomie durchzuführen und die Organfunktion zu erhalten. Auch Erfolge bei Lysetherapien wurden berichtet. Häufig bleibt jedoch nur die Entfernung des thrombosierten Transplantates übrig.

Transplantatpankreatitis: Das Entstehen einer Transplantatpankreatitis ist oftmals schon intraoperativ nach Reperfusion zu bemerken. Aber auch nach der Reperfusion unauffällig anmutende Transplantate können im Verlauf schwerwiegende Formen einer Pankreatitis entwickeln. In der Diagnostik ergänzen sich klinischer Befund, Laboruntersuchungen und Bildgebung. Bei der klinischen Untersuchung finden sich meist rechtsseitig betonte Bauchschmerzen, ggf. mit Peritonismus. Zusätzlich können das Entstehen eines paralytischen Ileus, Fieber sowie eine Transplantatdysfunktion auf eine Pankreatitis hinweisen. Laborchemisch besteht häufig eine deutliche CRP-Erhöhung, die mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein kann. In der Bildgebung können alle Formen einer Pankreatitis imponieren; diese reichen von einer leichten ödematösen Form bis hin zur nekrotisierenden Pankreatitis mit Perfusionsausfällen. Bei schweren, lebensbedrohlichen Verlaufsformen einer Transplantatpankreatitis kann die partielle oder komplette Entfernung des Transplantates notwendig werden, auch wenn eine gute Funktion vorliegt.
Blutung: Bei Blutungskomplikationen nach einer PTX muss zwischen einer intraluminalen, intestinalen Blutung im Bereich der Anastomose und einer Blutung im Bauchraum unterschieden werden. Die Möglichkeit der einfachen endoskopischen Blutstillung im Bereich der Duodenoduodenostomie ist einer der großen Vorteile dieser Anastomosierungstechnik. Aber auch bei Anastomosen im Bereich der 1. und 2. Jejunalschlinge und intraperitonealer Positionierung des Pankreastransplantates ist die Anastomose über eine Push-Intestinoskopie zu erreichen, wobei sich dies technisch schwieriger gestaltet. Intraabdominelle Blutungen nach PTX sind kein seltenes Ereignis, zumal ein Großteil der Patienten eine postoperative Heparintherapie erhält. Kleinere Blutungen können oftmals konservativ beherrscht werden, indem die Antikoagulantientherapie pausiert wird und plasmatische Gerinnungsfaktoren optimiert werden. Hämodynamisch relevante Blutungen und ein rezidivierender Transfusionsbedarf zeigen die Indikation zur Relaparotomie an. Das Spektrum solcher Nachblutungen reicht von kleinen diffusen Blutungen aus dem Transplantatpankreas (häufig an der Mesenterialwurzel) oder kleineren Blutungen an den peritonealen Inzisionslinien bis hin zu Katastrophenblutungen aus der unteren Hohlvene und der Beckenarterie. Bei Arrosionsblutungen im Bereich der Beckenarterie kann in manchen Fällen das endovaskuläre Stenting der Defektzone die rettende Maßnahme für den Patienten sein.

Akute Transplantatabstoßung: Ein Pankreastransplantat kann Ziel der allogenen Immunität (Abstoßung) und/oder der Autoimmunität (Rekurrenz der Grunderkrankung) sein. Amylase- und Lipaseerhöhung im Serum sowie ein Anstieg der Blutzuckerwerte sind häufig die einzigen, aber unspezifischen Zeichen einer Abstoßung. Im Falle einer kombinierten PNTX ist die histologische Diagnose aus dem Nierentransplantat oftmals ausschlaggebend.
Es konnte aber auch gezeigt werden, dass eine nicht unerhebliche Diskordanz beim Auftreten von Rejektionen und deren Schweregrad in zeitgleich entnommenen Biopsien aus Pankreas- und Nierentransplantat besteht. Uva et al. [26] beschrieben lediglich in 40 % der Fälle mit Abstoßungen ein zeitgleiches Auftreten der Rejektion in beiden Organen. In einer Studie von Parajuli et al. [27] lag die Diskordanzrate für Abstoßungen bei 37,5 %. Bei den 62,5 % der Fälle mit konkordanten Befunden in beiden Organen wurden wiederum unterschiedliche Abstoßungstypen gefunden. Im eigenen Zentrum konnten wir nach zeitgleicher Punktion von Pankreas und Niere eine Konkordanz für das Auftreten bzw. den Ausschluss einer Rejektion in beiden Organen von 68,4 % feststellen. Bei einer isolierten PTX oder einer PAK-Transplantation, bei welcher die Spenderorgane nicht HLA-identisch sind, spielt die Pankreasbiopsie eine noch wichtigere Rolle. Die histopathologische Begutachtung ist aufwendig und schwierig. Aufgrund der an sich schon geringen Fallzahlen der PTX und der noch seltener durchgeführten Pankreastransplantatbiopsien ist vielerorts nur begrenzte oder keine Erfahrung vorhanden, sodass die Biopsien, wie bei anderen Spezialuntersuchungen, meist überregional verschickt und begutachtet werden. Für die Beurteilung von Pankreastransplantatbiopsien existiert, ähnlich wie bei der NTX, seit vielen Jahren ein international anerkanntes und ständig aktualisiertes Banff-Schema zur Graduierung der Abstoßung (Tab. 1; [28–30]):

Tab. 1 (Banff-Schema der Pankreastransplantatabstoßung):
1. Normal
2. Unklar
3. Akute T‑Zell-vermittelte Abstoßung (TCMR):
– Grad I/mild: aktive septale Entzündung mit Venulitis und/oder Duktitis und/oder fokale azinäre Entzündung (≤ 2 Foci pro Läppchen) ohne oder nur mit minimalem Azinuszellschaden
– Grad II/moderat (erfordert eine Abgrenzung zur ABMR): multifokale (aber nicht konfluente oder diffuse) azinäre Entzündung (≥ 3 Foci pro Läppchen) mit vereinzeltem, individuellem Azinuszellschaden und/oder milde intimale Arteriitis (< 25 % des Lumens)
– Grad III/schwer (erfordert eine Abgrenzung zur ABMR): diffuse azinäre Entzündung mit fokaler oder diffuser, multizellulärer/konfluierender Azinuszellnekrose und/oder moderate oder schwere intimale Arteriitis (> 25 % des Lumens) und/oder transmurale Entzündung – nekrotisierende Arteriitis
4. Akute/aktive antikörpervermittelte Abstoßung (ABMR):
a) morphologischer Nachweis eines akuten Gewebeschadens:
– Grad I (milde akute ABMR): gut erhaltene Architektur, milde interazinäre Monozyten‑/Makrophagen- oder gemischte (Monozyten‑/Makrophagen-/neutrophile) Infiltrate mit seltenem Azinuszellschaden (Schwellung, Nekrose)
– Grad II (moderate akute ABMR): insgesamt erhaltene Architektur mit interazinären Monozyten‑/Makrophagen- oder gemischten (Monozyten‑/Makrophagen-/neutrophilen) Infiltraten, Dilatation der Kapillaren, interazinäre Kapillaritis, intimale Arteriitis, Stauung, multizellulärer Azinuszellverlust und Erythrozytenextravasation
– Grad III (schwere akute ABMR): Architekturstörung, verstreute Entzündungsinfiltrate bei interstitiellen Einblutungen, multifokaler oder konfluierender Parenchymnekrose, arterieller oder venöser Gefäßwandnekrose, transmuraler/nekrotisierender Arteriitis und Thrombose (in Abwesenheit anderer offensichtlicher Ursachen)
b) C4d-Positivität der interazinären Kapillaren (IAC) in ≥ 1 % der Azinusläppchenoberfläche
c) bestätigte donorspezifische Antikörper (DSA)
Abschließende Einordnung: 1 von 3 diagnostischen Kriterien: Ausschluss einer ABMR erforderlich; 2 von 3 diagnostischen Kriterien: ABMR muss erwogen werden; 3 von 3 diagnostischen Kriterien: definitive Diagnose einer ABMR
5. Chronisch-aktive ABMR: Kategorie 3 und/oder 4 mit chronischer Arteriopathie und/oder Kategorie 6; Spezifikation, ob TCMR, ABMR oder gemischt
6. Chronische Arteriopathie: fibrointimale arterielle Verbreiterung mit Lumeneinengung; inaktiv: fibrointimal; aktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T‑Zellen und Makrophagen); Grad 0: keine Lumeneinengung, Grad 1: mild, ≤ 25 % Lumeneinengung, Grad 2: moderat, 26–50 % Lumeneinengung, Grad 3: schwer, ≥ 50 % Lumeneinengung
7. Chronische Transplantatfibrose: Grad I (milde Transplantatfibrose): < 30 %, Grad II (moderate Transplantatfibrose): 30–60 %, Grad III (schwere Transplantatfibrose): > 60 %
8. Inselpathologie: Rekurrenz des autoimmunen Diabetes mellitus (Insulitis und/oder selektiver Betazellverlust), Amyloidablagerungen, Calcineurininhibitor-Toxizität der Inselzellen
9. Andere histologische Veränderungen, die nicht einer akuten und/oder chronischen Abstoßung zugeordnet werden, wie z. B. CMV-Pankreatitis oder posttransplantations-lymphoproliferative Erkrankung

Die Therapie der akuten Abstoßung eines Pankreastransplantates unterscheidet sich nicht wesentlich von der Abstoßungstherapie anderer transplantierter Organe. Bei milden akuten T‑zellulären Abstoßungen (Banff Grad I) erfolgt auch nach PTX eine Steroidbolustherapie mit z. B. 3 × 500 mg Prednisolon i.v. und eine Intensivierung der Basisimmunsuppression. Bei höhergradigen zellulären Abstoßungen und Rezidivabstoßungen erfolgen Therapien mit T‑Zell-depletierenden Antikörpern (ATG), bei humoraler Abstoßungskomponente auch in Kombination mit einer Anti-B-Zell-Therapie (z. B. Rituximab), Plasmapheresebehandlungen und die Gabe von intravenösen Immunglobulinen (IVIG) [31].
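Die abschließende Einordnung der ABMR beruht nach Tab. 1 auf der Zahl der erfüllten diagnostischen Kriterien (morphologischer Gewebeschaden, C4d-Positivität der interazinären Kapillaren, bestätigte DSA). Rein illustrativ und ohne diagnostische Verbindlichkeit lässt sich diese Zählregel wie folgt skizzieren (vereinfachte Python-Skizze des Bearbeiters, kein Bestandteil der Banff-Klassifikation; Namen frei gewählt):

def abmr_einordnung(gewebeschaden_morphologisch: bool,
                    c4d_iac_positiv: bool,
                    dsa_bestaetigt: bool) -> str:
    """Vereinfachte Skizze der abschließenden ABMR-Einordnung nach Tab. 1."""
    erfuellte_kriterien = sum([gewebeschaden_morphologisch, c4d_iac_positiv, dsa_bestaetigt])
    if erfuellte_kriterien == 3:
        return "definitive Diagnose einer ABMR"
    if erfuellte_kriterien == 2:
        return "ABMR muss erwogen werden"
    if erfuellte_kriterien == 1:
        return "Ausschluss einer ABMR erforderlich"
    return "keine Hinweise auf eine ABMR"

# Beispiel: C4d-positiv und DSA nachweisbar, aber (noch) kein eindeutiger Gewebeschaden
print(abmr_einordnung(False, True, True))  # -> ABMR muss erwogen werden

Eine solche Zählregel ersetzt selbstverständlich nicht die integrierte Beurteilung von Morphologie, Immunhistologie, DSA-Befund und klinischem Kontext.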
Im Falle einer kombinierten PNTX ist die histologische Diagnose aus dem Nierentransplantat oftmals ausschlaggebend. Es konnte aber auch gezeigt werden, dass eine nicht unerhebliche Diskordanz beim Auftreten von Rejektionen und deren Schweregrad in zeitgleich entnommenen Biopsien aus Pankreas- und Nierentransplantat besteht. Uva et al. [26] beschrieben lediglich in 40 % der Fälle mit Abstoßungen ein zeitgleiches Auftreten der Rejektion in beiden Organen. In einer Studie von Parajuli et al. [27] lag die Diskordanzrate für Abstoßungen bei 37,5 %. Bei den 62,5 % der Fälle mit konkordanten Befunden in beiden Organen wurden wiederum unterschiedliche Abstoßungstypen gefunden. Im eigenen Zentrum konnten wir nach zeitgleicher Punktion von Pankreas und Niere eine Konkordanz für das Auftreten bzw. Ausschluss einer Rejektion in beiden Organen von 68,4 % feststellen. Bei einer isolierten PTX oder einer PAK-Transplantation, bei welcher die Spenderorgane nicht HLA-identisch sind, spielt die Pankreasbiopsie eine noch wichtigere Rolle. Die histopathologische Begutachtung ist aufwendig und schwierig. Aufgrund der an sich schon geringen Fallzahlen der PTX und der noch seltener durchgeführten Pankreastransplantatbiopsien ist vielerorts nur begrenzte oder keine Erfahrung vorhanden, sodass wie bei anderen Spezialuntersuchungen die Biopsien meist überregional verschickt und begutachtet werden. Für die Beurteilung von Pankreastransplantatbiopsien existiert, ähnlich wie bei der NTX, seit vielen Jahren ein international anerkanntes und ständig aktualisiertes Banff-Schema zur Graduierung der Abstoßung (Tab. 1; [28–30]).1. Normal2. Unklar3. Akute T‑Zell-vermittelte Abstoßung (TCMR)Grad I/mild: aktive septale Entzündung mit Venulitis und/oder Duktitis und/oder fokale azinäre Entzündung (≤ 2 Foci pro Läppchen) ohne oder nur mit minimalem AzinuszellschadenGrad II/moderat (erfordert eine Abgrenzung zur ABMR): multifokale (aber nicht konfluente oder diffuse) azinäre Entzündung (≥ 3 Foci pro Läppchen) mit vereinzeltem, individuellem Azinuszellschaden und/oder milde intimale Arteriitis (< 25 % des Lumens)Grad III/schwer (erfordert eine Abgrenzung zur ABMR): diffuse azinäre Entzündung mit fokaler oder diffuser, multizellulärer/konfluierender Azinuszellnekrose und/oder moderate oder schwere intimale Arteriitis (> 25 % des Lumens) und/oder transmurale Entzündung – nekrotisierende Arteriitis4. 
Akute/aktive Antikörper-vermittelter Abstoßung (ABMR)a) morphologischer Nachweis eines akuten GewebeschadensGrad I (milde akute ABMR): Gut erhaltene Architektur, milde interazinäre Monozyten‑/Makrophagen- oder gemischte (Monozyten‑/Makrophagen/neutrophile) Infiltrate mit seltenem Azinuszellschaden (Schwellung, Nekrose)Grad II (moderate akute ABMR): Insgesamt erhaltene Architektur mit interazinären Monozyten-Makrophagen-Infiltraten oder gemischte (Monozyten-/Makrophagen/neutrophile) Infiltrate, Dilatation der Kapillaren, interazinäre Kapillaritis, intimale Arteriitis, Stauung, multizellulärer Azinuszellverlust und ErythrozytenextravasationGrad III (schwere akute ABMR): Architekturstörung, verstreute Entzündungsinfiltrate bei interstitiellen Einblutungen, multifokaler oder konfluierender Parenchymnekrose, arterieller oder venöser Gefäßwandnekrose, transmuraler/nekrotisierender Arteriitis und Thrombose (in Abwesenheit anderer offensichtlicher Ursachen)b) C4d-Positivität der interazinären Kapillaren (IAC) ≥ 1 % der Azinusläppchenoberflächec) Bestätigte donorspezifische Antikörper (DSA)Abschließende Einordnung:1 von 3 diagnostischen Kriterien: benötigt Ausschluss einer ABMR2 von 3 diagnostischen Kriterien: ABMR muss erwogen werden3 von 3 diagnostischen Kriterien: definitive Diagnose einer ABMR5. Chronisch-aktive ABMRKategorie 3 und/oder 4 mit chronischer Arteriopathie und/oder Kategorie 6. Spezifikation, ob TCMR, ABMR oder gemischt6. Chronische ArteriopathieFibrointimale arterielle Verbreitung mit Lumeneinengung:inaktiv: fibrointimalaktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen)Grad 0: keine LumeneinengungGrad 1: mild, ≤ 25 % LumeneinengungGrad 2: moderat, 26–50 % LumeneinengungGrad 3: schwer, ≥ 50 % Lumeneinengung7. Chronische TransplantatfibroseGrad I (milde Transplantatfibrose): < 30 %Grad II (moderate Transplantatfibrose): 30–60 %Grad III (schwere Transplantatfibrose): > 60 %8. Inselpathologie: Rekurrenz des autoimmunen Diabetes mellitus (Insulitis und/oder selektiver Betazellverlust), Amyloidablagerungen, Calcineurininhibitor-Toxizität der Inselzellen9. Andere histologische Veränderungen, welche nicht einer akuten und/oder chronischen Abstoßung zugeordnet werden, wie z. B. CMV-Pankreatitis, posttransplantations-lymphoproliferative Erkrankung 6. Chronische Arteriopathie Fibrointimale arterielle Verbreitung mit Lumeneinengung: inaktiv: fibrointimal aktiv: Infiltration der subintimalen fibrösen Proliferation durch mononukleäre Zellen (T-Zellen und Makrophagen) Die Therapie der akuten Abstoßung eines Pankreastransplantates unterscheidet sich nicht wesentlich von der Abstoßungstherapie anderer transplantierter Organe. Bei milden akuten T‑zellulären Abstoßungen (Banff Grad I) erfolgt auch nach PTX eine Steroidbolustherapie mit z. B. 3 × 500 mg Prednisolon i.v. und eine Intensivierung der Basisimmunsuppression. Bei höhergradigen zellulären Abstoßungen und Rezidivabstoßungen erfolgen Therapien mit T‑Zell-depletierenden Antikörpern (ATG), bei humoraler Abstoßungskomponente auch in Kombination mit einer Anti-B-Zell-Therapie (z. B. Rituximab), Plasmapheresebehandlungen und die Gabe von intravenösen Immunglobulinen (IVIG) [31]. Transplantatthrombose: Die Pankreastransplantatthrombose ist nach wie vor die häufigste Ursache für einen frühen Verlust des Pankreastransplantates. 
Bei einem plötzlichen Blutzuckeranstieg in der Frühphase nach Transplantation muss nach Ausschluss anderer Ursachen immer an eine Perfusionsstörung des Pankreastransplantates gedacht werden. Der Nachweis erfolgt in der Regel durch eine CT oder MRT des Abdomens. Die farbcodierte Duplexsonografie kann erste Hinweise auf Perfusionsstörungen ergeben. Im eigenen Zentrum hat sich in den letzten Jahren die kontrastmittelunterstützte Sonografie (CEUS) zum Perfusionsnachweis als extrem hilfreich erwiesen, da diese sofort auf Station im Patientenbett durchgeführt werden kann und kein potenziell nephrotoxisches Kontrastmittel appliziert werden muss. Bei Nachweis einer Thrombose im Bereich der Pfortader oder Milzvene sowie Thrombosen im Bereich des arteriellen Zuflusses (Y-Graft) ist eine sofortige Relaparotomie durchzuführen. In einigen Fällen ist es dabei gelungen, erfolgreich eine Thrombektomie durchzuführen und die Organfunktion zu erhalten. Auch Erfolge bei Lysetherapien wurden berichtet. Häufig bleibt einem jedoch nur die Entfernung des thrombosierten Transplantates übrig. Transplantatpankreatitis: Das Entstehen einer Transplantatpankreatitis ist oftmals schon intraoperativ nach Reperfusion zu bemerken. Aber auch nach der Reperfusion unauffällig anmutende Transplantate können im Verlauf schwerwiegende Formen einer Pankreatitis entwickeln. In der Diagnostik ergänzen sich klinischer Befund, Laboruntersuchungen und Bildgebung. Bei der klinischen Untersuchung finden sich meist rechtsseitig betonte Bauchschmerzen ggf. mit Peritonismus. Zusätzlich können das Entstehen eines paralytischen Ileus, Fieber sowie eine Transplantatdysfunktion auf eine Pankreatitis hinweisen. Laborchemisch besteht häufig eine deutliche CRP-Erhöhung, die mit einem Anstieg von Serumamylase und Serumlipase assoziiert sein kann. In der Bildgebung können alle Formen einer Pankreatitis imponieren. Diese reichen von einer leichten ödematösen Form bis hin zur nekrotisierenden Pankreatitis mit Perfusionsausfällen. Bei schweren, lebensbedrohlichen Verlaufsformen einer Transplantatpankreatitis kann die partielle oder komplette Entfernung des Transplantates notwendig werden, auch wenn eine gute Funktion vorliegt. Blutung: Bei Blutungskomplikationen nach einer PTX muss zwischen intraluminaler, intestinaler Blutung im Bereich der Anastomose und einer Blutung im Bauchraum unterschieden werden. Die Möglichkeit der einfachen endoskopischen Blutstillung im Bereich der Duodenoduodenostomie ist einer der großen Vorteile dieser Anastomosierungstechnik. Aber auch bei Anastomosen im Bereich der 1. und 2. Jejunalschlinge und intraperitonealer Positionierung des Pankreastransplantates ist über eine Push-Intestinoskopie die Anastomose zu erreichen, wobei sich dies technisch schwieriger gestaltet. Intraabdominelle Blutungen nach PTX sind kein seltenes Ereignis, zumal ein Großteil der Patienten eine postoperative Heparintherapie erhält. Kleinere Blutungen können oftmals konservativ beherrscht werden, indem die Antikoagulantientherapie pausiert wird und plasmatische Gerinnungsfaktoren optimiert werden. Hämodynamisch relevante Blutungen und ein rezidivierender Transfusionsbedarf zeigen die Indikation zur Relaparotomie an. Das Spektrum solcher Nachblutungen reicht von kleinen diffusen Blutungen aus dem Transplantatpankreas (häufig an der Mesenterialwurzel) oder kleineren Blutungen an den peritonealen Inzisionslinien bis hin zu Katastrophenblutungen aus der unteren Hohlvene und der Beckenarterie. 
Bei Arrosionsblutungen im Bereich der Beckenarterie kann in manchen Fällen das endovaskuläre Stenting der Defektzone die rettende Maßnahme für den Patienten sein. Akute Transplantatabstoßung: Ein Pankreastransplantat kann Ziel der allogenen Immunität (Abstoßung) und/oder der Autoimmunität (Rekurrenz der Grunderkrankung) sein. Amylase- und Lipaseerhöhung im Serum sowie ein Anstieg der Blutzuckerwerte sind häufig die einzigen, aber unspezifischen Zeichen einer Abstoßung. Im Falle einer kombinierten PNTX ist die histologische Diagnose aus dem Nierentransplantat oftmals ausschlaggebend. Es konnte aber auch gezeigt werden, dass eine nicht unerhebliche Diskordanz beim Auftreten von Rejektionen und deren Schweregrad in zeitgleich entnommenen Biopsien aus Pankreas- und Nierentransplantat besteht. Uva et al. [26] beschrieben lediglich in 40 % der Fälle mit Abstoßungen ein zeitgleiches Auftreten der Rejektion in beiden Organen. In einer Studie von Parajuli et al. [27] lag die Diskordanzrate für Abstoßungen bei 37,5 %. Bei den 62,5 % der Fälle mit konkordanten Befunden in beiden Organen wurden wiederum unterschiedliche Abstoßungstypen gefunden. Im eigenen Zentrum konnten wir nach zeitgleicher Punktion von Pankreas und Niere eine Konkordanz für das Auftreten bzw. Ausschluss einer Rejektion in beiden Organen von 68,4 % feststellen. Bei einer isolierten PTX oder einer PAK-Transplantation, bei welcher die Spenderorgane nicht HLA-identisch sind, spielt die Pankreasbiopsie eine noch wichtigere Rolle. Die histopathologische Begutachtung ist aufwendig und schwierig. Aufgrund der an sich schon geringen Fallzahlen der PTX und der noch seltener durchgeführten Pankreastransplantatbiopsien ist vielerorts nur begrenzte oder keine Erfahrung vorhanden, sodass wie bei anderen Spezialuntersuchungen die Biopsien meist überregional verschickt und begutachtet werden. Für die Beurteilung von Pankreastransplantatbiopsien existiert, ähnlich wie bei der NTX, seit vielen Jahren ein international anerkanntes und ständig aktualisiertes Banff-Schema zur Graduierung der Abstoßung (Tab. 1; [28–30]).1. Normal2. Unklar3. Akute T‑Zell-vermittelte Abstoßung (TCMR)Grad I/mild: aktive septale Entzündung mit Venulitis und/oder Duktitis und/oder fokale azinäre Entzündung (≤ 2 Foci pro Läppchen) ohne oder nur mit minimalem AzinuszellschadenGrad II/moderat (erfordert eine Abgrenzung zur ABMR): multifokale (aber nicht konfluente oder diffuse) azinäre Entzündung (≥ 3 Foci pro Läppchen) mit vereinzeltem, individuellem Azinuszellschaden und/oder milde intimale Arteriitis (< 25 % des Lumens)Grad III/schwer (erfordert eine Abgrenzung zur ABMR): diffuse azinäre Entzündung mit fokaler oder diffuser, multizellulärer/konfluierender Azinuszellnekrose und/oder moderate oder schwere intimale Arteriitis (> 25 % des Lumens) und/oder transmurale Entzündung – nekrotisierende Arteriitis4. 
4. Acute/active antibody-mediated rejection (ABMR)
a) Morphological evidence of acute tissue injury:
Grade I (mild acute ABMR): well-preserved architecture, mild inter-acinar monocytic/macrophage or mixed (monocytic/macrophage/neutrophilic) infiltrates with rare acinar cell injury (swelling, necrosis)
Grade II (moderate acute ABMR): overall preserved architecture with inter-acinar monocyte-macrophage or mixed (monocytic/macrophage/neutrophilic) infiltrates, dilatation of the capillaries, inter-acinar capillaritis, intimal arteritis, congestion, multicellular acinar cell loss and extravasation of erythrocytes
Grade III (severe acute ABMR): disrupted architecture, scattered inflammatory infiltrates with interstitial hemorrhage, multifocal or confluent parenchymal necrosis, arterial or venous vessel wall necrosis, transmural/necrotizing arteritis and thrombosis (in the absence of other obvious causes)
b) C4d positivity of the inter-acinar capillaries (IAC) in ≥ 1% of the acinar lobular surface
c) Confirmed donor-specific antibodies (DSA)
Final classification: 1 of 3 diagnostic criteria: requires exclusion of ABMR; 2 of 3 diagnostic criteria: ABMR must be considered; 3 of 3 diagnostic criteria: definitive diagnosis of ABMR
5. Chronic active ABMR: category 3 and/or 4 with chronic arteriopathy and/or category 6; specification of whether TCMR, ABMR or mixed
6. Chronic arteriopathy: fibrointimal arterial thickening with luminal narrowing; inactive: fibrointimal; active: infiltration of the subintimal fibrous proliferation by mononuclear cells (T cells and macrophages); Grade 0: no luminal narrowing; Grade 1: mild, ≤ 25% luminal narrowing; Grade 2: moderate, 26-50% luminal narrowing; Grade 3: severe, ≥ 50% luminal narrowing
7. Chronic graft fibrosis: Grade I (mild graft fibrosis): < 30%; Grade II (moderate graft fibrosis): 30-60%; Grade III (severe graft fibrosis): > 60%
8. Islet pathology: recurrence of autoimmune diabetes mellitus (insulitis and/or selective beta-cell loss), amyloid deposits, calcineurin inhibitor toxicity of the islet cells
9. Other histological changes that cannot be attributed to acute and/or chronic rejection, e.g. CMV pancreatitis, post-transplant lymphoproliferative disorder

The treatment of acute rejection of a pancreas graft does not differ substantially from rejection therapy for other transplanted organs. For mild acute T-cell-mediated rejection (Banff grade I), a steroid bolus therapy (e.g. 3 × 500 mg prednisolone i.v.) and an intensification of the baseline immunosuppression are given after PTX as well. For higher-grade cellular rejection and recurrent rejection, therapy with T-cell-depleting antibodies (ATG) is used; if a humoral rejection component is present, this is combined with anti-B-cell therapy (e.g. rituximab), plasmapheresis treatments and the administration of intravenous immunoglobulins (IVIG) [31].
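Several elements of the Banff scheme above are simple numeric cut-offs (percent luminal narrowing, percent fibrosis) or a count of fulfilled ABMR criteria. Purely as an illustration of how these published thresholds could be captured for consistent structured reporting, the following Python sketch encodes them; the function names are hypothetical, and the sketch is in no way a substitute for full histological grading.

# Illustrative encoding of selected numeric thresholds from the Banff scheme
# quoted above; function names are hypothetical, and grading always requires
# the full histological context, not just these numbers.

def abmr_interpretation(criteria_met: int) -> str:
    """Map the number of fulfilled ABMR criteria (morphology, C4d, DSA)."""
    if criteria_met >= 3:
        return "definitive diagnosis of ABMR"
    if criteria_met == 2:
        return "ABMR must be considered"
    if criteria_met == 1:
        return "requires exclusion of ABMR"
    return "no ABMR criterion fulfilled"

def chronic_arteriopathy_grade(luminal_narrowing_pct: float) -> int:
    """Grade 0-3 by percent luminal narrowing (the published ranges overlap
    at exactly 50%, which is resolved here in favor of grade 3)."""
    if luminal_narrowing_pct <= 0:
        return 0
    if luminal_narrowing_pct <= 25:
        return 1
    if luminal_narrowing_pct < 50:
        return 2
    return 3

def graft_fibrosis_grade(fibrosis_pct: float) -> str:
    """Grade I-III by percentage of fibrotic parenchyma."""
    if fibrosis_pct < 30:
        return "I (mild)"
    if fibrosis_pct <= 60:
        return "II (moderate)"
    return "III (severe)"

# Example: C4d positivity and DSA present, morphology equivocal
print(abmr_interpretation(2))          # -> ABMR must be considered
print(chronic_arteriopathy_grade(40))  # -> 2
print(graft_fibrosis_grade(35))        # -> II (moderate)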
Indication and technique of pancreas graft biopsy. Indications: After a PTX, a biopsy may become necessary for various indications. In addition to indication biopsies performed because of impaired graft function or immunological events, some centers also take so-called protocol biopsies, i.e. routine biopsies according to a defined schedule. Impaired pancreas graft function most frequently presents clinically as elevated blood glucose values, which may be associated with a rise in serum amylase and serum lipase. The differential diagnosis must consider various causes [31-33]: primary non-function or delayed onset of graft function, graft pancreatitis, thrombosis, acute or chronic rejection, medication-induced hyperglycemia (steroids, tacrolimus), infections, recurrence of the underlying disease, calcineurin inhibitor (CNI) toxicity, or the manifestation of type 2 diabetes mellitus. An overview of frequent indications for biopsy of the pancreas graft is given in Table 2:
- Disorders of endocrine function: hyperglycemia, impaired glucose tolerance, elevated HbA1c
- Disorders of exocrine function: elevated amylase and lipase values
- Humoral rejection: donor-specific antibodies (DSA)
- Recurrence of the underlying disease: type 1 diabetes mellitus-associated autoantibodies
- Protocol biopsies: according to protocol
- Control biopsies: e.g. after completed rejection therapy

Technique of biopsy collection: The pancreas graft biopsy can be obtained via different access routes; operative, endoscopic and percutaneous techniques must be distinguished. Among the operative procedures, laparotomy is the most invasive option. However, biopsies can easily be obtained simultaneously, as needle or excisional biopsies, during laparotomies performed for other reasons (Fig. 2). Because of the intraperitoneal position of the graft and the resulting frequent overlay of the graft by small bowel loops, laparoscopically assisted pancreas biopsy is increasingly used.
The laparoscopic approach is limited in some cases by postoperative adhesions, which impede access to the graft and increase the risk of a perioperative complication [34]. Bladder drainage of the pancreas graft, in which the donor duodenum is anastomosed to the urinary bladder, was frequently used in the past but is only rarely performed today; it allows a simple cystoscopic biopsy from the donor duodenum or the head of the pancreas graft [35]. However, the histopathological work-up of biopsies from the donor duodenum and the pancreas graft revealed discrepancies in the grade of rejection: the donor duodenum could show rejection independently of the pancreas graft and vice versa [36]. With the enteric drainage technique mainly used today, a biopsy from the donor duodenum can be obtained during esophagogastroduodenoscopy or push enteroscopy [36, 37]. Here, too, marked discordance between the histological evidence of rejection in the donor duodenum and in the pancreas graft has been observed [35, 36]. Using endoscopic ultrasound, fine-needle biopsies can be taken from both the donor duodenum and the pancreas graft, although the amount of tissue obtained often does not permit a conclusive diagnosis. In our own center, percutaneous pancreas graft biopsy has become established over the last decade. The tissue cores obtained with this technique allow a conclusive histopathological diagnosis, similar to kidney graft biopsy. The percutaneous biopsy is usually performed under local anesthesia with ultrasound or CT guidance. With both methods, the parenchyma of the pancreas graft is punctured with an automated 16G or 18G Tru-Cut® needle and 2-3 cores are obtained. Depending on the anatomy, various access routes are available (Fig. 3). The difficulty of precise needle placement under CT is that the scan is performed without contrast, so that both the parenchyma and the graft vessels are poorly visualized; CT or MRI images obtained beforehand are therefore helpful. Rates of up to 30% of biopsies in which no pancreatic tissue could be obtained have been reported in the literature [38]. In recent years we have been able to increase the yield in our own practice by using contrast-enhanced sonography (CEUS). In any case, close collaboration between transplant surgeons and radiologists is essential to increase accuracy and keep the complication rate low. The most frequent complication after biopsy is graft pancreatitis. In rare cases, bleeding or the development of abscesses or fistulas can occur. The need for surgical intervention as a consequence of a percutaneous biopsy is very rare overall [33, 39].
Results and course after pancreas transplantation (Table 3): Compared with US donor data, German pancreas donors are significantly older and the proportion of cerebrovascular causes of death is higher. Despite this, very good results after PTX have been achieved in German centers in recent years [40]. With 1-year patient and graft survival of 92% and 83%, respectively, these results are on a par with international outcomes (Table 3). The survival rates for the different forms of PTX are not directly comparable, since the patient populations differ in morbidity (e.g. uremic vs. non-uremic patients). The best results are achieved after combined SPK, with pancreas graft survival of 89%, 71% and 57% after 1, 5 and 10 years. For the same intervals, the results after PTA alone are 84%, 52% and 38% [2]. The graft half-lives for pancreata (50% function rate) are reported as 14 years (SPK) and 7 years (PAK, PTA) [2].
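Read purely illustratively, such a half-life can be converted into an implied average annual graft loss rate if, for the sake of the calculation only, a constant yearly loss is assumed; this constant-hazard reading is an assumption of the following sketch and not a claim made in the source, where the half-lives are registry figures.

# Back-of-the-envelope reading of the half-lives quoted above, assuming
# (purely for illustration) a constant annual graft loss rate:
# r = 1 - 0.5**(1 / T_half)

def implied_annual_attrition(half_life_years: float) -> float:
    return 1.0 - 0.5 ** (1.0 / half_life_years)

for label, t_half in [("SPK", 14.0), ("PAK/PTA", 7.0)]:
    print(f"{label}: half-life {t_half:.0f} years -> "
          f"~{implied_annual_attrition(t_half):.1%} graft loss per year")

Under this assumption, the quoted half-lives correspond to roughly 5% (SPK) and 9% (PAK/PTA) graft loss per year, which is only a rough way of making the figures tangible.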
The long-term normalization of glucose metabolism leads to a significant reduction in mortality, which is considerably greater after SPK than after kidney transplantation alone in a patient with type 1 diabetes [8]. Table 3 (quality indicators [QI], values in %):
- In-hospital mortality: reference range < 5; 2016/2017: 3.57; 2018/2019: 4.76
- 1-year survival with known status: reference range > 90; 2016/2017: 92.96; 2018/2019: 91.36
- 2-year survival with known status: reference range > 80; 2016/2017: 91.52; 2018/2019: 91.23
- 3-year survival with known status: reference range > 75; 2016/2017: 91.10; 2018/2019: 90.20
- Quality of graft function at discharge: reference range > 75; 2016/2017: 78.95; 2018/2019: 83.23
- Quality of graft function (1 year after discharge): not defined; 2016/2017: 83.07; 2018/2019: 92.16
- Quality of graft function (2 years after discharge): not defined; 2016/2017: 78.95; 2018/2019: 83.23
- Quality of graft function (3 years after discharge): not defined; 2016/2017: 74.67; 2018/2019: 75.79

Work-up and assessment of pancreas and duodenal rejection biopsies: As is customary for organ biopsies, after receipt and review of the clinical information the formalin-fixed samples are first assessed macroscopically; a transmitted-light microscope can also be helpful for a first appraisal. Since a definitive assignment to endocrine and exocrine pancreas is not possible by light microscopy of the wet material, it is advisable first to prepare H&E and PAS sections and to cut at least 6 blank sections for supplementary histochemical and immunohistochemical investigations, which can be requested after the tissue has been reviewed on the H&E/PAS sections. It is further advisable in every case to prepare a connective tissue fiber stain (e.g. Sirius red, elastica-van-Gieson [EvG] or Masson-Goldner [MG]) and the following 5 immunohistochemical stains: C4d as a marker of antibody-mediated graft rejection (ABMR), CD3 as a T-cell marker and CD68 as a macrophage marker, as well as insulin and glucagon to visualize the endocrine pancreatic islets [28, 38, 39]. A corresponding approach, although primarily without supplementary immunohistochemistry, is also used for the rarely performed and more difficult to assess biopsies of the donor duodenum, whose significance for pancreas rejection is discussed very controversially overall [41].

Molecular markers of humoral pancreas rejection: An analysis of 38 selected genes (including endothelial genes, NK-cell genes and inflammation genes) using NanoString nCounter technology in 52 formalin-fixed pancreas biopsies showed that supplementary molecular markers can improve the diagnostic work-up of pancreas rejection [42]. This laborious and cost-intensive additional molecular diagnostic approach is, however, currently not used routinely.

Frequent histomorphological findings in pancreas graft biopsies (Figs. 4, 5, 6 and 7): For the assessment of transplant-associated changes, knowledge of the normal anatomy of the pancreas and of the normal findings of the immunohistochemical stains used is helpful in the first place (Fig. 4). In the H&E and PAS stains (Fig. 4a, b) it should first be checked whether exocrine and endocrine pancreatic tissue is present and how many lobules are represented. The CD3 and CD68 stains usually show only very few T cells and tissue histiocytes/macrophages (Fig. 4c, d).
In the insulin and glucagon immunohistochemistry, the endocrine islets, and specifically the alpha and beta cells, can usually be displayed very clearly (Fig. 4e, f).

Acute T-cell-mediated and acute antibody-mediated graft rejection (Tables 1, 4, 5 and 6): The morphological changes of the pancreas are classified according to the current internationally valid Banff classification [30]. In the frequent acute T-cell-mediated rejection (TCMR; Table 1; Fig. 5), the following histological changes are found: septal inflammation with activated lymphocytic and in part eosinophilic infiltrates, venulitis and ductitis, acinar inflammation and endothelialitis (Fig. 5a-f). The extent of the T-cell or macrophage infiltration can be visualized well with supplementary CD3 (Fig. 5g) and CD68 staining (Fig. 5h), respectively; here the T-cell infiltrate plays a greater role than the monocytic/macrophage infiltrate (Table 4). TCMR is divided into 3 categories according to severity (mild, moderate, severe); the severe form, with diffuse acinar inflammation, focal or diffuse acinar cell necrosis and/or moderate or severe intimal arteritis (> 25% of the lumen), requires differentiation from acute antibody-mediated rejection (ABMR), so that the presence of donor-specific antibodies (DSA) must be queried in every such case (Table 1).

Table 4 (semiquantitative comparison of infiltrates in TCMR and ABMR):
- Septal infiltrates: TCMR +++; ABMR − to +
- Eosinophilic granulocytes: TCMR + to +++; ABMR − to +
- Neutrophilic granulocytes: TCMR − to ++; ABMR ± to +++
- T lymphocytes: TCMR ++ to +++; ABMR ± to +
- Macrophages: TCMR ++; ABMR +++

Table 5 (grading of TCMR in donor duodenal biopsies; grade with the most important histological findings):
- Not unequivocal or sufficient for TCMR: minimal localized inflammatory infiltrate, minimal crypt epithelial injury, increased epithelial apoptoses (< 6 apoptotic bodies/10 crypts), no or only minimal architectural distortion of the mucosa, no mucosal ulcerations and no unequivocal changes of a mild TCMR
- Mild TCMR: mild localized inflammatory infiltrate with activated lymphocytes, mild crypt epithelial injury, increased epithelial apoptoses (> 6 apoptotic bodies/10 crypts), mild architectural distortion of the mucosa, no mucosal ulcerations
- Moderate TCMR: diffuse inflammatory infiltrate in the lamina propria, diffuse crypt epithelial injury, increased epithelial apoptoses with focal confluence, more pronounced mucosal architectural distortion, mild to moderate intimal arteritis possible, no mucosal ulcerations
- Severe TCMR: changes of moderate TCMR plus mucosal ulcerations; severe or transmural intimal arteritis also possible
(TCMR: T-cell-mediated rejection)
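Most of the duodenal grading above is descriptive; the apoptotic-body count and the presence of ulceration are its only discrete anchors. Purely as an illustration of how such anchors could be recorded in structured reporting, and explicitly not as a stand-alone grading algorithm, a minimal Python sketch with hypothetical field names follows.

# Toy sketch covering only the discrete anchors of the duodenal TCMR rubric
# above (apoptotic bodies per 10 crypts, ulceration, diffuse infiltrate);
# the qualitative criteria of the table are not captured, and final grading
# always requires full histological assessment. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class DuodenalBiopsyFindings:
    apoptotic_bodies_per_10_crypts: int
    mucosal_ulceration: bool
    diffuse_lamina_propria_infiltrate: bool

def tentative_tcmr_category(f: DuodenalBiopsyFindings) -> str:
    if f.mucosal_ulceration:
        return "suspicious for severe TCMR"
    if f.diffuse_lamina_propria_infiltrate:
        return "suspicious for moderate TCMR"
    if f.apoptotic_bodies_per_10_crypts > 6:
        return "suspicious for mild TCMR"
    return "indeterminate / not sufficient for TCMR"

print(tentative_tcmr_category(DuodenalBiopsyFindings(8, False, False)))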
Table 6 (results of pancreas graft biopsies; number of cases (%)):
All biopsies (percentages relative to all pancreas biopsies, regardless of whether evaluable material was present):
- Pancreas biopsies/duodenal biopsies: 93 (100)/3
- No material (pancreas biopsies): 32 (34.4)
Pancreas biopsies with evaluable material (percentages relative to pancreas biopsies with evaluable material):
- Pancreas biopsies in total: 61 (100)
- No rejection: 15 (24.6)
- Acute TCMR: indeterminate 4 (6.6); grade I 30 (49.2); grade II 8 (13.1)
- Active ABMR: definitive 0 (0); to be considered 3 (4.9); to be excluded 2 (3.3)
- Acute acinar epithelial injury: 36 (59)
- Allograft fibrosis: grade I 29 (47.5); grade II 2 (3.3); grade III 2 (3.3)
- Chronic allograft arteriopathy: 1 (1.6)
- Insulitis: 2 (possible) (3.3)
- CNI toxicity: 3 (possible) (4.9)
- Pancreatitis: 5 (8.2)
(ABMR: antibody-mediated rejection; CNI: calcineurin inhibitor; TCMR: T-cell-mediated rejection)

Characteristic changes of a TCMR can also be found in the biopsied donor duodenum (Table 5; Fig. 6a, b), with a variably pronounced increase of lymphocytes intraepithelially and in the lamina propria, an increased epithelial apoptosis rate, and architectural distortion of the mucosa (defined as blunting/flattening of the villi in the best-oriented section) [43]. In the severe stage there is also pronounced mucosal destruction with crypt loss, sloughing and a neutrophil-rich infiltrate (Table 5; [43]). These changes have to be distinguished from ischemic changes, which in individual cases can induce a similar histological picture. In the transplanted pancreas, ABMR manifests as microvascular endothelial cell injury of the exocrine parenchyma, inter-acinar inflammation, acinar epithelial injury, vasculitis and thrombosis, but it can also look relatively bland on H&E staining (Fig. 7a). In addition to DSA and morphology, specific C4d positivity of the inter-acinar capillaries is one of the 3 diagnostic criteria of ABMR [29, 30], for which a C4d stain is necessary (Fig. 7b). In contrast to TCMR, ABMR is characterized above all by a monocytic/macrophage infiltrate (Table 4; Fig. 7c). CD34 immunohistochemistry can be helpful for displaying the inter-acinar capillaries and, in particular, for assessing dilatation and endothelial cell swelling as signs of capillary injury (Fig. 7d).

Non-rejection-related histomorphological changes in pancreas biopsies after transplantation (Fig. 8): These include, on the one hand, anatomical variations without disease value, such as the relatively frequent increase of fat cells in the pancreas (Fig. 8a) [44], and on the other hand in part transplantation-associated changes such as tryptic pancreatic necroses (Fig. 8b), ischemic post-transplantation pancreatitis, peripancreatitis or
peripancreatic fluid collection/edema, CMV pancreatitis, post-transplant lymphoproliferative disorder (PTLD), bacterial or fungal infections, recurrence of the autoimmune disease/diabetes mellitus, and CNI toxicity, which usually presents as islet cell injury, e.g. vacuolization of the endocrine cells (Fig. 8c; [32]). In a study that examined the histological findings with the two CNIs cyclosporine A and tacrolimus, vacuolization of islet cells was only minimally pronounced in 2 control cases, and coarse vacuolization, as seen with CNI toxicity, was not observed. Further features of CNI toxicity were islet cell swelling, apoptoses and reduced immunohistochemical reactivity for insulin [32].

Experience with the joint PTX biopsy diagnostics Bochum-Erlangen (06/2017-12/2020, Table 6; Fig. 9): In the period from June 2017 to December 2020, a total of 93 pancreas graft biopsies and 3 biopsies of the donor duodenum from 49 patients of the Department of Surgery, University Hospital Knappschaftskrankenhaus Bochum, were examined at the Department of Nephropathology in Erlangen. The results of the pancreas biopsies, as documented in the original reports, are summarized in Table 6 and partly illustrated in Fig. 9. In one third of the biopsies (34.4%) no diagnostic material was obtained. By far the most frequent diagnosis was TCMR. For the classification of findings such as insulitis and CNI toxicity, or of an ABMR, the clinical context is always of the utmost importance for the final interpretation, so that a definitive diagnosis frequently cannot be made on the basis of histology alone. In the same period, 3 biopsies of the donor duodenum were submitted, of which one showed signs of TCMR and 2 showed no signs of rejection.

Practical conclusions: Biopsy of the transplanted pancreas, or in rare cases of the donor duodenum, with subsequent standardized assessment according to the current internationally valid Banff classification of pancreas rejection and the recommendations for the assessment of duodenal biopsies, has an established place in the care of pancreas/kidney transplant recipients. As with other transplanted parenchymal organs, the biopsies are performed on clinical indication and are then processed and assessed in a standardized manner, according to the currently valid classifications, in appropriately equipped pathology departments. This classification of the morphological changes according to the Banff classification of pancreas graft rejection is in turn linked to corresponding clinical courses of action, which have been and are being tested in larger clinical studies and which are associated with corresponding probabilities of success.
Background: In Germany pancreas transplants are performed in only a few selected and specialized centres, usually combined with a kidney transplant. Knowledge of the indications for and techniques of transplantation as well as of the histopathological assessment for rejection in pancreas and duodenal biopsies is not very widespread. Methods: A thorough literature search for aspects of the history, technique and indication for pancreas transplantation was performed and discussed in the context of the local experience and technical particularities specific for the transplant centre in Bochum. The occurrence of complications was compared with international reports. Results of pancreas and duodenal biopsies submitted to Erlangen between 06/2017 and 12/2020 for histological evaluation, which were evaluated according to the Banff classification, were summarized. For a better understanding, key histological findings of pancreas rejection and differential diagnoses were illustrated and discussed. Results: A total of 93 pancreas transplant specimens and 3 duodenal biopsies were included. 34.4% of pancreas specimens did not contain representative material for a diagnosis. In the remaining 61 biopsies 24.6% showed no rejection, 62.3% were diagnosed with acute T-cell mediated rejection (TCMR) and 8.2% with signs suspicious of antibody-mediated rejection (ABMR). Acute acinar epithelial injury was seen in 59%, pancreatitis in 8.2% and allograft fibrosis was reported in as many as 54.1%. Calcineurin-inhibitor toxicity was discussed in only 4.9%. Conclusions: Pancreas-kidney transplantation and standardized histological assessment of the transplanted pancreas or rarely duodenum, with reporting according to the updated Banff classification of pancreas transplants or previous reports of duodenal rejection, are important mainstays in the management of patients with diabetes.
null
null
10,793
309
[ 112, 154, 65, 785, 301, 521, 2902, 173, 146, 196, 930, 1399, 183, 513, 349, 212, 60, 127, 889, 194, 154 ]
22
[ "der", "und", "die", "oder", "eine", "mit", "einer", "bei", "von", "werden" ]
[ "und kombinierten pntx", "ptx transplantatthrombose die", "pankreas nierentransplantation", "diese präemptive transplantation", "nierentransplantierter patienten wie" ]
null
null
null
null
null
null
null
[CONTENT] Biopsie | Diabetes mellitus | Duodenum | Histologie | Nierentransplantation | Biopsy | Diabetes mellitus | Duodenum | Histology | Kidney transplantation [SUMMARY]
[CONTENT] Biopsie | Diabetes mellitus | Duodenum | Histologie | Nierentransplantation | Biopsy | Diabetes mellitus | Duodenum | Histology | Kidney transplantation [SUMMARY]
null
null
null
null
[CONTENT] Biopsy | Graft Rejection | Humans | Kidney | Kidney Transplantation | Pancreas Transplantation [SUMMARY]
[CONTENT] Biopsy | Graft Rejection | Humans | Kidney | Kidney Transplantation | Pancreas Transplantation [SUMMARY]
null
null
null
null
[CONTENT] und kombinierten pntx | ptx transplantatthrombose die | pankreas nierentransplantation | diese präemptive transplantation | nierentransplantierter patienten wie [SUMMARY]
[CONTENT] und kombinierten pntx | ptx transplantatthrombose die | pankreas nierentransplantation | diese präemptive transplantation | nierentransplantierter patienten wie [SUMMARY]
null
null
null
null
[CONTENT] der | und | die | oder | eine | mit | einer | bei | von | werden [SUMMARY]
[CONTENT] der | und | die | oder | eine | mit | einer | bei | von | werden [SUMMARY]
null
null
null
null
[CONTENT] der | mit entsprechenden | entsprechenden | banff klassifikation der | klassifikation der | banff klassifikation | transplantierten | klassifikation | entsprechend | und [SUMMARY]
[CONTENT] der | und | die | eine | oder | mit | einer | von | bei | im [SUMMARY]
null
null
null
null
[CONTENT] [SUMMARY]
[CONTENT] Germany ||| ||| Bochum ||| ||| Erlangen | 12/2020 ||| ||| 93 | 3 ||| 34.4% ||| 61 | 24.6% | 62.3% | TCMR | 8.2% ||| Acute | 59% | 8.2% | as many as ||| only 4.9% ||| [SUMMARY]
null
Effects of a temporary suspension of community-based health insurance in Kwara State, North-Central, Nigeria.
35145602
a subsidized community health insurance programme in Kwara State, Nigeria was temporarily suspended in 2016 in anticipation of the roll-out of a state-wide health insurance scheme. This article reports the adverse consequences of the scheme's suspension on enrollees' healthcare utilization.
INTRODUCTION
a mixed-methods study was carried out in Kwara State, Nigeria, in 2018 using a semi-quantitative cross-sectional survey amongst 600 former Kwara community health insurance clients, and in-depth interviews with 24 clients and 29 participating public and private healthcare providers in the program. Both quantitative and qualitative data were analyzed and triangulated.
METHODS
most former enrollees (95.3%) kept utilizing programme facilities after the suspension, mainly because of the high quality of care. However, the majority of the enrollees (95.8%) reverted to out-of-pocket payment while 67% reported constraints in payment for healthcare services after suspension of the program. In the absence of insurance, the most common coping mechanisms for healthcare payment were personal savings (63.3%), donations from friends and families (34.7%) and loans (11.8%). Being a male enrollee (odds ratio=1.61), living in a rural community (odds ratio=1.77), exclusive usage of the Kwara Community Health Insurance Programme (KCHIP) prior to suspension (odds ratio=1.94) and suffering an acute illness (odds ratio=3.38) increased the odds of being financially constrained in accessing healthcare.
RESULTS
after the suspension of the scheme, many enrollees and health facilities experienced financial constraints. These underscore the importance of sustainable health insurance schemes as a risk-pooling mechanism to sustain access to good quality health care and financial protection from catastrophic health expenditures.
CONCLUSION
[ "Community-Based Health Insurance", "Cross-Sectional Studies", "Health Expenditures", "Humans", "Insurance, Health", "Male", "Nigeria" ]
8797047
Introduction
The progress towards Universal Health Coverage (UHC) involves setting ambitious goals for expanding access to quality health services based on establishing a greater reliance on risk-pooling and prepayment mechanisms to finance health, stimulating investments in healthcare infrastructure and quality, and building human resources and skills for health. The World Health Organization (WHO) estimates that more than half the world's population does not have access to the health services they need, and 100 million people suffer financial catastrophe every year due to out-of-pocket (OOP) expenditures for unexpected healthcare [1]. Introduction of a health insurance programme is one of the ways to enhance access to healthcare services and to protect individuals from catastrophic health expenditures [2]. Financing healthcare through a tax-based system (which is also a form of risk-pooling) is difficult as many low- and middle-income countries (LMICs) are struggling to mobilize sufficient resources. As a result, OOP expenditures remain high and, in combination with poor healthcare services, form an important barrier to UHC. How to successfully roll out sustainable health insurance on a large scale and ensure sufficient take-up in LMICs is an outstanding question [3,4]. In Africa, more than half of all healthcare expenses are covered through OOP payments. For example, in Nigeria, the most populous country in Africa with a population of more than 200 million, there are substantial inequalities in access to healthcare, with 72% of health expenses paid OOP and only about 4% of the people, mostly in the formal sector, having access to health insurance today [5]. Nigeria accounts for 2% of the world population but contributes to 14% of maternal deaths and 23% of malaria cases [2]. To address these burdens, Kwara State, one of the poorest states in Nigeria, with the support of PharmAccess and the Netherlands Health Insurance Fund, launched a subsidized Kwara Community Health Insurance Programme (KCHIP) in 2007 [6-8]. By the year 2015, a total of 347,132 people and 42 public and private healthcare facilities participated in KCHIP (Figure 1: major policy milestone, study period and clients' enrolment trend in the Kwara community health insurance programme between 2015 and 2018). The impact of KCHIP has been assessed over time through various studies [6-8], indicating an increase in the use of healthcare (while controlling for additional variables) of up to 90% among enrolled communities [7,9], markedly improved cost-effectiveness, as well as substantial benefits in terms of improved health outcomes concerning chronic diseases like hypertension [10], and maternal and child care [11]. Similarly, OOP expenditures significantly decreased by 50% among enrollees, thus securing more financial protection in the medium run [7,9]. The KCHIP was also found to increase awareness about health status among community members [9,12]. Additionally, it was demonstrated that KCHIP could deliver basic quality healthcare coverage at US$ 28 per person per year, compared to the WHO benchmark of US$ 60 and Nigeria's total health expenditure per capita of US$ 115 [8]. An important feature of KCHIP was an incremental financial commitment to and ownership of the programme by the Kwara State Government over time. The programme was aimed at synergizing with the Nigeria National Health Insurance Programme (NHIS) to attain UHC for the state [13].
In January 2015 (Figure 1), the programme partners signed an agreement to transition KCHIP to the Kwara State Health Insurance Programme (KSHIP). Pending this arrangement, KCHIP enrolment was temporarily suspended while designing a new insurance product and premium to be introduced and deployed on a state-wide level. Whereas in January 2016, KCHIP was active in 11 out of the 16 Local Government Areas (LGAs) in Kwara State and recorded a total enrolment of 139,714 clients, these clients were not renewed throughout 2016. This resulted in a gradual drop-out over the year with no clients insured by January 2017 (Figure 1). Therefore, a unique 'reverse insurance intervention' situation emerged, which was evaluated in this study. This paper describes the consequences of the suspension of KCHIP in Kwara State, Nigeria, in the wake of state-wide health insurance, and analyses the effects on healthcare quality, utilization and financial constraints for healthcare among former enrollees, as well as the consequences for formerly participating KCHIP health facilities.
Methods
Study design and study population: in August 2018, about 2 years after the suspension of KCHIP (Figure 1), a mixed-method study was carried out among KCHIP former enrollees and healthcare providers in Kwara State, Nigeria. Using multi-stage random sampling, we recruited a total of 600 enrollees whose health insurance policy had expired at least 4 months before the end of December 2016. For the quantitative cross-sectional survey we obtained data on socio-demographics, healthcare utilization, enrolment status, health financial constraints and coping strategies since the suspension. Only adults (18 years and above) were included in the study, of whom a purposively selected 400 enrollees had accessed care in a KCHIP healthcare facility in the preceding 12 months. The remaining 200 participants were selected from those uninsured in the past 12 months. Of those 400 participants who had accessed healthcare, half (200), who had in addition to other health conditions been seeking chronic care, maternal care and care for acute conditions, were included in the study. In-depth interviews (IDIs) were performed among 24 purposively selected former enrollees and among 29 managers of participating KCHIP health facilities (19 public, 10 private). The IDIs explored the effects of the programme suspension on healthcare utilization by former enrollees and their coping mechanisms, and on health facilities' service provision. To be selected for the IDI, the participant had to be above 18 years of age and had to have utilized pertinent healthcare in the past 12 months. To obtain healthcare utilization patterns following the programme suspension, health facilities' clinical records were reviewed as part of the observation checklist tool developed for the qualitative data collection.

Sampling and data collection. Quantitative study: multi-stage sampling was used, selecting 5 Local Government Areas (LGAs): two from Kwara South, two from Kwara North and one from Kwara Central senatorial zones. Enrollees were selected randomly with the KCHIP enrollment database serving as sampling frame after allocating LGAs proportionate to constituent population sizes (total enrollment in the 5 LGAs in January 2016 was 73,438). An additional 30% was added from the sample frame for each LGA to cater for non-response and untraceable enrollees. The selected enrollees were traced in the community (with the help of community mobilizers) and interviewed by trained interviewers. The questionnaire captured data on respondents' socio-economic characteristics, morbidity patterns, healthcare access and utilization in the preceding 12 months. Qualitative study: we conducted two rounds of IDIs among former enrollees and facilities' managers. The enrollees' interviews were conducted among 24 purposively selected adults across 9 selected LGAs cutting across the 3 zones of Kwara State. The selection of former enrollees into the IDIs was carried out in and around the health facilities using a pretested interview guide. The facility managers' interviews were conducted in KCHIP facilities among the officers-in-charge (or the medical director). This comprised all 29 Enhanced Community Based Care (ECBC) health facilities (19 public, 10 private) spread across 9 LGAs; 13 health posts providing remote care services were excluded from the study because they were already linked to records of the 29 ECBCs.
Data analysis: the quantitative data entry platform was designed using Open Data Kit® (ODK), while the data was entered using Kobo Toolbox® [14] and later exported to Statistical Package for Social Science (SPSS) version 22 for analysis. Simple logistic regression was used to explore the predictive factors of the financial constraints in the ability to pay for healthcare services after the programme suspension. The level of significance was set at a p-value of < 0.05, complemented with a 95% confidence interval (CI). Recorded qualitative interviews were transcribed and thematic analysis was carried out manually. Mixed results of the qualitative and quantitative data were triangulated and reported together to complement major contextual observations in this study. Ethics approval and consent to participate: written permissions were obtained from the ethics committee of the Kwara State Ministry of Health, Ilorin, Nigeria. Informed consent was obtained from the participants. Confidentiality of the participants' and health facilities' information was maintained.
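As an illustration of the kind of analysis described under data analysis above, the following Python sketch estimates an unadjusted odds ratio with its 95% confidence interval for a binary outcome such as being financially constrained in paying for healthcare. It is not the authors' code (the original analysis was run in SPSS), and all file and column names are hypothetical placeholders for the survey variables.

# Illustrative sketch only; the study's analysis was done in SPSS.
# File and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("kchip_survey.csv")  # hypothetical export of the survey data

# Wealth quintiles from annual per-capita consumption (1 = poorest, 5 = richest)
df["wealth_quintile"] = pd.qcut(df["per_capita_consumption"], 5, labels=False) + 1
# Binary indicator for one predictor of interest
df["male"] = (df["sex"] == "male").astype(int)

# Simple (single-predictor) logistic regression, mirroring the unadjusted
# analysis described in the methods
fit = smf.logit("constrained ~ male", data=df).fit(disp=False)

or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),   # 95% CI by default
    "CI_high": np.exp(fit.conf_int()[1]),
    "p_value": fit.pvalues,
})
print(or_table)  # a CI that excludes 1 corresponds to two-sided p < 0.05

Under these assumptions, exponentiating the fitted coefficient and its confidence bounds yields an odds ratio and 95% CI in the same form as reported in the results below.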
Results
Socio-demography of the enrollees: the enrollees had a median age of 43 years and 74.5% were women (Table 1). Close to half of the enrollees did not have formal education (42.5%); 77.2% were married and 17.2% were widows. About three-quarters of the enrollees (73.8%) lived in semi-urban areas (Table 1). The majority were from the Yoruba (64.5%) and Nupe (32.2%) ethnic groups. Islam was the predominant (83.2%) religion amongst them. The enrollees were equally spread over the wealth quintiles, with wealth calculated as annual per capita consumption of food and non-food items. (Table 1 caption: socio-demographic and socio-economic characteristics of the respondents. JSS: junior secondary school; SSS: senior secondary school.) Consequences of (re)enrolment suspension on households: the survey shows that the majority of former enrollees (95.3%) kept utilizing KCHIP facilities, even after the suspension of the programme (Table 2). The factors responsible for this were explored by the IDIs. Some former enrollees perceived that the KCHIP facilities have very friendly staff and that this would encourage them to keep patronizing the health facilities. However, 74.0% of enrollees reported reverting to OOP payment for healthcare services at the KCHIP facilities. In general, the enrollees had more confidence in the private than the public KCHIP facilities. This is due to full (4.2%) or partial (21.8%) exemption from hospital bills when having financial constraints. According to the IDIs, many patients had established friendships and cordial relationships with the facilities throughout the programme. For some enrollees the KCHIP facilities allowed them to pay in tranches for reasons of empathy and familiarity. (Table 2 caption: preferences and constraints in ability to pay for and accessibility to healthcare after KCHIP suspension. ^Multiple responses.) We also found that after suspension, 67.0% of enrollees experienced financial constraints in the ability to pay for healthcare services; 30.8% of them reported it was due to the suspension, 11.2% to the economic situation of the country, while 25.0% said it was due to both (Table 2). But a third (33.0%) of enrollees reported little or no change in their ability to pay for healthcare services. The IDIs indicated that almost all enrollees had financial constraints in their ability to pay for healthcare services. An enrollee said: “my ability to pay has been hampered seriously by lack of funds”. The most common coping mechanisms reported by the enrollees were personal savings (63.3%), donations from friends and families (34.7%) and borrowing (11.8%). Other coping mechanisms included proceeds from trading and sales (25.8%), the household purse (19.7%) and money from other relatives (4.5%). Most of the enrollees in the qualitative interviews agreed that spending household/family savings to offset healthcare bills was the immediate coping mechanism available to them after the programme suspension. Others narrated that they had to borrow from friends and family members, including seeking assistance from other relatives, including their children, in paying hospital bills. Some prominent social group mechanisms that offered health benefits to members, such as donations during episodes of illness and loan facilities to offset medical bills, were reported from the quantitative data. Examples of these mechanisms were (Table 2): Ajo - a local thrift (47.3%), religious groups (28.5%), community groups (18.7%) and cooperative groups (17.5%).
Factors associated with a constrained ability to pay for healthcare among enrollees after the programme suspension: different factors were found to be associated with the reported financial constraints in ability to pay for healthcare after insurance suspension (Table 3). Constraints were experienced more often by male enrollees (74.5%, p = 0.022), those living in rural locations (75.8%, p = 0.006) and those having an acute illness/injury in the preceding 12 months (74.3%, p<0.001). Besides, ethnic groups other than Yoruba (Nupe 89.6% and Hausa 87.5%) in the study (p<0.001) and those enrollees that patronized KCHIP facilities exclusively before suspension (68.1%, p<0.001) were significantly more likely to be financially constrained in paying for healthcare services after the programme suspension. Wealth was also significantly associated with a constrained ability to pay: being in the lower wealth quintiles was associated with a constrained ability to pay for healthcare. (Table 3 caption: factors associated with constraints in ability to pay for healthcare services after re-enrollment suspension. Statistical significance; X2: chi-square.) Per Table 4, the predictive factors for being financially constrained were being male (OR=1.61, 95% CI=1.069; 2.436) and living in rural communities (OR=1.77, 95% CI=1.171; 2.677). Enrollees of Yoruba ethnicity (OR=0.15, 95% CI=0.091; 0.236) had fewer financial constraints in paying for healthcare services after the programme suspension compared to people from other ethnicities. Those enrollees who depended solely on KCHIP health facilities before suspension had increased odds (OR=1.94, 95% CI=1.032; 3.648) of financial constraints, while those with acute illness or injury in the preceding 12 months also had increased odds (OR=3.38, 95% CI=2.309; 4.939). (Table 4 caption: predictors of constrained ability to pay for healthcare services after the suspension of the program. Significance level (p) < 0.05.) Consequences of the programme suspension on the KCHIP facilities (IDI and hospital records): after the suspension, 24 of 29 health facilities claimed the quality and quantity of services provided remained the same while five confirmed a reduction in service provision. In the past, more than two-thirds of the health facilities claimed they had experienced increased patronage and service utilization due to KCHIP. However, with the suspension of the program, records revealed that all facilities experienced a significant reduction in out-patient loads as there was a gradual decline in healthcare utilization (Figure 2). No appreciable effect was seen on in-patient visits (Figure 3). Of those that reported a reduction in service provision, a facility manager commented that: “at present, just about 5% of those previously registered on the programme is still coming to the health facility for treatment”. Another manager narrated that: “We saw just 2 patients today, compared to the time when the programme was in place whereby we will not have the time to even attend to you to have this interview”. (Figure 2 caption: out-patients' visits by month for the year 2016 across the public and private health facilities. Figure 3 caption: in-patient visits by month for the year 2016 across public and private health facilities.) Seventeen of 29 facilities reported a decrease in revenue after suspension of the program. Nine recorded an increase in their revenue, despite a reduction in patient load; the facilities in this category were all public health facilities. They attributed this increase to the removal of restrictions on billing patients directly during the enrollment period.
A facility manager said: “Our revenue has increased because patients now have to pay out of pocket unlike when the programme was running and drugs and tests were done freely”. Another said: “Our revenue has increased because people coming to the hospital now pay for services”. In addition, 23 out of 29 facilities experienced high staff turnover after the suspension of KCHIP. Almost 90% (26) of the facilities reported a salary cut for their staff after the suspension. One of the facility managers recounted that: “with the suspension of the program, the revenue generated could not cater for the salaries of all the staff so 39 members of staff were laid off and those that remained (31) had a 40% salary cut”. The loss of incentives and the salary cuts reduced staff motivation and productivity, as attested to by a manager: “the salary cut, as well as the loss of incentives, has resulted in low productivity of staff members who claim not to be motivated anymore”. Twenty-one facilities also reported a decrease in drug purchases after the suspension of the programme.
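Illustrative note on the bivariate analysis (Table 3): the associations reported above rest on chi-square tests of the constrained-ability-to-pay outcome against categorical factors, with wealth quintiles derived from annual per capita consumption of food and non-food items. The minimal sketch below shows that two-step logic on simulated data; it is not the study's SPSS workflow, and the column names and values are hypothetical.

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(7)
df = pd.DataFrame({
    # hypothetical annual per capita consumption of food and non-food items
    "per_capita_consumption": rng.lognormal(mean=10.0, sigma=0.5, size=600),
    # hypothetical binary outcome: 1 = constrained ability to pay for healthcare
    "constrained": rng.integers(0, 2, size=600),
})

# rank households into five equal-sized wealth quintiles (Q1 = poorest)
df["wealth_quintile"] = pd.qcut(df["per_capita_consumption"], q=5,
                                labels=["Q1", "Q2", "Q3", "Q4", "Q5"])

# Table 3-style bivariate association: wealth quintile versus constrained ability to pay
observed = pd.crosstab(df["wealth_quintile"], df["constrained"])
chi2, p_value, dof, _ = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")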
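Illustrative note on the regression analysis (Table 4): the odds ratios reported above come from simple (univariable) logistic regression, each odds ratio being the exponentiated coefficient of a single predictor, with the 95% confidence interval taken on the same scale. The sketch below reproduces that computation for a rural-residence predictor on simulated data; the original analysis was performed in SPSS, so the variable names and records here are hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 600                                  # sample size similar to the survey
rural = rng.integers(0, 2, size=n)       # hypothetical predictor: 1 = rural residence
# simulate a binary constrained-ability-to-pay outcome loosely consistent with OR ~ 1.77
p = 1.0 / (1.0 + np.exp(-(-0.2 + np.log(1.77) * rural)))
constrained = rng.binomial(1, p)

X = sm.add_constant(rural.astype(float))          # intercept plus a single predictor
fit = sm.Logit(constrained, X).fit(disp=False)    # simple (univariable) logistic regression

odds_ratio = np.exp(fit.params[1])                # exponentiated coefficient = odds ratio
ci_low, ci_high = np.exp(fit.conf_int()[1])       # 95% CI on the odds-ratio scale
print(f"OR = {odds_ratio:.2f}, 95% CI = {ci_low:.3f}; {ci_high:.3f}")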
Conclusion
After the suspension of the KCHIP health insurance programme in Kwara State, Nigeria, former enrollees still preferred using the KCHIP health facilities and they reverted almost universally to OOP payments. At the same time, out-patient healthcare consumption decreased substantially, with a large proportion of former enrollees unable to afford healthcare services. Belonging to some form of financial/social group proved beneficial in the short term as a coping mechanism. Social capital built through KCHIP between former enrollees and clinics helped alleviate part of the financial burden for the former enrollees, but not for the facilities. The enrollees with the highest probability of suffering adverse consequences of the programme suspension were male enrollees, households in the lower wealth quintiles, those living in rural communities and those reporting recent acute illness. Private health facilities suffered more consequences of the programme suspension than public facilities in terms of reduced financial inflow following changes in revenue and resources. These observations point to the need for designing effective transition processes from community-based health insurance to state insurance in Nigerian states.
Funding: although the PharmAccess Foundation funded this study and one of the authors works at the company, the study was not influenced by his participation in the design, data collection and manuscript writing.
What is known about this topic
It is already known that a community-based health insurance scheme is an important strategy to achieve accessibility to quality healthcare for underserved populations.
The Kwara Community Health Insurance Scheme was adjudged one of the impactful health interventions from sub-Saharan Africa.
What this study adds
This study provides insight into the consequences of suspending an impactful health intervention like the Kwara Community Health Insurance Scheme.
It also highlighted the common challenges experienced and coping strategies adopted by the enrollees and the care providers on the scheme.
[ "Sampling and data collection", "What is known about this topic", "What this study adds" ]
[ "Quantitative study: multi-stage sampling was used, selecting 5 Local Government Areas (LGAs): two from Kwara South, two from Kwara North and one from Kwara Central senatorial zones. Enrollees were selected randomly with the KCHIP enrollment database serving as sampling frame after allocating LGAs proportionate to constituent population sizes (total enrollment in the 5 LGAs in January 2016 was 73,438). An additional 30% was added from the sample frame for each LGA to cater for non-response and untraceable enrollees. The selected enrollees were traced in the community (with the help of community mobilizers) and interviewed by trained interviewers. The questionnaire captured data on respondents´ socio-economic characteristics, morbidity patterns, healthcare access and utilization in the preceding 12 months.\nQualitative study: we conducted two rounds of IDIs among former enrollees and facilities´ managers. The enrollees´ interviews were conducted among 24 purposively selected adults across 9 selected LGAs cutting across the 3 zones of Kwara State. The selection of former enrollees into the IDIs was carried out in and around the health facilities using a pretested interview guide. The facility managers´ interviews were conducted in KCHIP facilities among the officers-in-charge (or the medical director). This comprised all 29 Enhanced Community Based Care (ECBC) health facilities (19 public, 10 private) spread across 9 LGAs; 13 health posts providing remote care services were excluded from the study because they were already linked to records of the 29 ECBCs.\nData analysis:the quantitative data entry platform was designed using Open Data Kit® (ODK), while the data was entered using Kobo Toolbox® [14] and later exported to Statistical Package for Social Science (SPSS) version 22 for analysis. Simple logistic regression was used to explore the predictive factors of the financial constraints in the ability to pay for healthcare services after the programme suspension. The level of significance was set at a p-value of < 0.05 complemented with a 95% confidence interval (CI). Recorded qualitative interviews were transcribed and thematic analysis was carried out manually. Mixed results of the qualitative and quantitative data were triangulated and reported together to complement major contextual observations in this study.\nEthics approval and consent to participate: written permissions were obtained from the ethics committee of the Kwara State Ministry of Health, Ilorin, Nigeria. Informed consent was obtained from the participants. 
Confidentiality of the participants´ and health facilities´ information were maintained.", "\nIt is already known fact that community-based health insurance scheme is an important strategy to achieve accessibility to quality healthcare for underserved population; Kwara Community Health Insurance Scheme was adjudged as one of the impactful health intervention from sub-Saharan Africa.\n\nIt is already known fact that community-based health insurance scheme is an important strategy to achieve accessibility to quality healthcare for underserved population; Kwara Community Health Insurance Scheme was adjudged as one of the impactful health intervention from sub-Saharan Africa.", "\nThis study provides insight into the consequences of suspending an impactful health intervention like Kwara Community Health Insurance Scheme.It also highlighted the common challenges experienced and coping strategies adopted by the enrollees and the care providers on the scheme.\n\nThis study provides insight into the consequences of suspending an impactful health intervention like Kwara Community Health Insurance Scheme.\nIt also highlighted the common challenges experienced and coping strategies adopted by the enrollees and the care providers on the scheme." ]
[ null, null, null ]
[ "Introduction", "Methods", "Sampling and data collection", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds" ]
[ "The progress towards Universal Health Coverage (UHC) involves setting ambitious goals for expanding access to quality health services based on establishing a greater reliance on risk-pooling and prepayment mechanisms to finance health, stimulating investments in healthcare infrastructure and quality, and building human resources and skills for health. The World Health Organization (WHO) estimates that more than half the world´s population does not have access to the health services they need, and 100 million people suffer financial catastrophe every year due to out-of-pocket (OOP) expenditures for unexpected healthcare [1]. Introduction of a health insurance programme is one of the ways to enhance access to healthcare services and to protect individuals from catastrophic health expenditures [2]. Financing healthcare through a tax-based system (which is also a form of risk-pooling) is difficult as many low- and middle-income countries (LMICs) are struggling to mobilize sufficient resources. As a result, OOP expenditures remain high and in combination with poor healthcare services form an important barrier to UHC. How to successfully roll out sustainable health insurance on a large scale and ensure sufficient take-up in LMICs is an outstanding question [3,4].\nIn Africa, more than half of all healthcare expenses are covered through OOP payments. For example, in Nigeria - the most populous country in Africa, with a population of more than 200 million, there are substantial inequalities in access to healthcare with 72% of health expenses paid OOP and only about 4% of the people, mostly in the formal sector, having access to health insurance today [5]. Nigeria accounts for 2% of the world population but contributes to 14% of maternal deaths and 23% of malaria cases [2]. To address these burdens, Kwara State, one of the poorest states in Nigeria, with the support of PharmAccess and the Netherlands Health Insurance Fund launched a subsidized Kwara Community Health Insurance Programme (KCHIP) in 2007 [6-8]. By the year 2015, a total of 347,132 people and 42 public and private healthcare facilities participated in KCHIP (Figure 1).\nmajor policy milestone, study period and clients´ enrolment trend in Kwara community health insurance programme between 2015 and 2018\nThe impact of KCHIP has been assessed over time through various studies [6-8], indicating an increase in the use of healthcare (while controlling for additional variables) of up to 90% among enrolled communities [7,9], markedly improved cost-effectiveness, as well as substantial benefits in terms of improved health outcomes concerning chronic diseases like hypertension; [10], and maternal and child care [11]. Similarly, OOP expenditures significantly decreased by 50% among enrollees, thus securing more financial protection in the medium run [7,9]. The KCHIP was also found to increase awareness about health status among community members [9,12]. Additionally, it was demonstrated KCHIP could deliver a basic quality healthcare coverage at the US $28 per person per year, compared to the WHO benchmark of US $60 and Nigeria´s total health expenditure per capita of US $115 [8].\nAn important feature of KCHIP was as incremental financial commitment and ownership of the programme by the Kwara State Government over time. The programme was aimed at synergizing with the Nigeria National Health Insurance Programme (NHIS) to attain UHC for the state [13]. 
In January 2015 (Figure 1), the programme partners signed an agreement to transition KCHIP to the Kwara State Health Insurance Programme (KSHIP). Pending this arrangement, KCHIP enrolment was temporarily suspended while designing a new insurance product and premium to be introduced and deployed on a state-wide level. Whereas in January 2016, KCHIP was active in 11 out of the 16 Local Government Areas (LGAs) in Kwara State and recorded a total enrolment of 139,714 clients, these clients were not renewed throughout 2016. This resulted in a gradual drop-out over the year with no clients insured by January 2017 (Figure 1). Therefore, a unique ‘reverse insurance intervention´ situation emerged, which was evaluated in this study. This paper describes the consequences of the suspension of KCHIP in Kwara State, Nigeria, in the wake of state-wide health insurance, and analyses the effects on healthcare quality, utilization and financial constraints for healthcare among former enrollees, as well as the consequences for formerly participating KCHIP health facilities.", "Study design and study population: in August 2018, about 2 years after the suspension of KCHIP (Figure 1), a mixed-method study was carried out among KCHIP former enrollees and healthcare providers in Kwara State, Nigeria. Using multi-stage random sampling, we recruited a total of 600 enrollees whose health insurance policy had expired at least 4 months before the end of December 2016. For the quantitative cross-sectional survey we obtain data on socio-demographics, healthcare utilization, enrolment status, health financial constraints and coping strategies since the suspension. Only adults (18 years and above) were included in the study, of whom a purposively selected 400 enrollees had accessed care in a KCHIP healthcare facility in the preceding 12 months. The remaining 200 participants were selected from those uninsured in the past 12 months. Of those 400 participants who have accessed healthcare, half (200) who had in addition to other health conditions been seeking chronic care, maternal care and care for acute conditions were included in the study.\nIn-depth interviews (IDIs) were performed among 24 purposively selected former enrollees and among 29 health facilities´ managers of (19 public, 10 private) participating KCHIP facilities. The IDIs explored the effects of the programme suspension on both healthcare utilization by former enrollees and their coping mechanisms, and health facilities´ service provision. To be selected for the IDI, the participant must be above 18 years of age and must have utilized pertinent healthcare in the past 12 months. To obtain healthcare utilization pattern due to the programme suspension, health facilities´ clinical records were reviewed as part of the observation checklist tool developed for the qualitative data collection.\nSampling and data collection Quantitative study: multi-stage sampling was used, selecting 5 Local Government Areas (LGAs): two from Kwara South, two from Kwara North and one from Kwara Central senatorial zones. Enrollees were selected randomly with the KCHIP enrollment database serving as sampling frame after allocating LGAs proportionate to constituent population sizes (total enrollment in the 5 LGAs in January 2016 was 73,438). An additional 30% was added from the sample frame for each LGA to cater for non-response and untraceable enrollees. The selected enrollees were traced in the community (with the help of community mobilizers) and interviewed by trained interviewers. 
The questionnaire captured data on respondents´ socio-economic characteristics, morbidity patterns, healthcare access and utilization in the preceding 12 months.\nQualitative study: we conducted two rounds of IDIs among former enrollees and facilities´ managers. The enrollees´ interviews were conducted among 24 purposively selected adults across 9 selected LGAs cutting across the 3 zones of Kwara State. The selection of former enrollees into the IDIs was carried out in and around the health facilities using a pretested interview guide. The facility managers´ interviews were conducted in KCHIP facilities among the officers-in-charge (or the medical director). This comprised all 29 Enhanced Community Based Care (ECBC) health facilities (19 public, 10 private) spread across 9 LGAs; 13 health posts providing remote care services were excluded from the study because they were already linked to records of the 29 ECBCs.\nData analysis:the quantitative data entry platform was designed using Open Data Kit® (ODK), while the data was entered using Kobo Toolbox® [14] and later exported to Statistical Package for Social Science (SPSS) version 22 for analysis. Simple logistic regression was used to explore the predictive factors of the financial constraints in the ability to pay for healthcare services after the programme suspension. The level of significance was set at a p-value of < 0.05 complemented with a 95% confidence interval (CI). Recorded qualitative interviews were transcribed and thematic analysis was carried out manually. Mixed results of the qualitative and quantitative data were triangulated and reported together to complement major contextual observations in this study.\nEthics approval and consent to participate: written permissions were obtained from the ethics committee of the Kwara State Ministry of Health, Ilorin, Nigeria. Informed consent was obtained from the participants. Confidentiality of the participants´ and health facilities´ information were maintained.\nQuantitative study: multi-stage sampling was used, selecting 5 Local Government Areas (LGAs): two from Kwara South, two from Kwara North and one from Kwara Central senatorial zones. Enrollees were selected randomly with the KCHIP enrollment database serving as sampling frame after allocating LGAs proportionate to constituent population sizes (total enrollment in the 5 LGAs in January 2016 was 73,438). An additional 30% was added from the sample frame for each LGA to cater for non-response and untraceable enrollees. The selected enrollees were traced in the community (with the help of community mobilizers) and interviewed by trained interviewers. The questionnaire captured data on respondents´ socio-economic characteristics, morbidity patterns, healthcare access and utilization in the preceding 12 months.\nQualitative study: we conducted two rounds of IDIs among former enrollees and facilities´ managers. The enrollees´ interviews were conducted among 24 purposively selected adults across 9 selected LGAs cutting across the 3 zones of Kwara State. The selection of former enrollees into the IDIs was carried out in and around the health facilities using a pretested interview guide. The facility managers´ interviews were conducted in KCHIP facilities among the officers-in-charge (or the medical director). 
This comprised all 29 Enhanced Community Based Care (ECBC) health facilities (19 public, 10 private) spread across 9 LGAs; 13 health posts providing remote care services were excluded from the study because they were already linked to records of the 29 ECBCs.\nData analysis:the quantitative data entry platform was designed using Open Data Kit® (ODK), while the data was entered using Kobo Toolbox® [14] and later exported to Statistical Package for Social Science (SPSS) version 22 for analysis. Simple logistic regression was used to explore the predictive factors of the financial constraints in the ability to pay for healthcare services after the programme suspension. The level of significance was set at a p-value of < 0.05 complemented with a 95% confidence interval (CI). Recorded qualitative interviews were transcribed and thematic analysis was carried out manually. Mixed results of the qualitative and quantitative data were triangulated and reported together to complement major contextual observations in this study.\nEthics approval and consent to participate: written permissions were obtained from the ethics committee of the Kwara State Ministry of Health, Ilorin, Nigeria. Informed consent was obtained from the participants. Confidentiality of the participants´ and health facilities´ information were maintained.", "Quantitative study: multi-stage sampling was used, selecting 5 Local Government Areas (LGAs): two from Kwara South, two from Kwara North and one from Kwara Central senatorial zones. Enrollees were selected randomly with the KCHIP enrollment database serving as sampling frame after allocating LGAs proportionate to constituent population sizes (total enrollment in the 5 LGAs in January 2016 was 73,438). An additional 30% was added from the sample frame for each LGA to cater for non-response and untraceable enrollees. The selected enrollees were traced in the community (with the help of community mobilizers) and interviewed by trained interviewers. The questionnaire captured data on respondents´ socio-economic characteristics, morbidity patterns, healthcare access and utilization in the preceding 12 months.\nQualitative study: we conducted two rounds of IDIs among former enrollees and facilities´ managers. The enrollees´ interviews were conducted among 24 purposively selected adults across 9 selected LGAs cutting across the 3 zones of Kwara State. The selection of former enrollees into the IDIs was carried out in and around the health facilities using a pretested interview guide. The facility managers´ interviews were conducted in KCHIP facilities among the officers-in-charge (or the medical director). This comprised all 29 Enhanced Community Based Care (ECBC) health facilities (19 public, 10 private) spread across 9 LGAs; 13 health posts providing remote care services were excluded from the study because they were already linked to records of the 29 ECBCs.\nData analysis:the quantitative data entry platform was designed using Open Data Kit® (ODK), while the data was entered using Kobo Toolbox® [14] and later exported to Statistical Package for Social Science (SPSS) version 22 for analysis. Simple logistic regression was used to explore the predictive factors of the financial constraints in the ability to pay for healthcare services after the programme suspension. The level of significance was set at a p-value of < 0.05 complemented with a 95% confidence interval (CI). Recorded qualitative interviews were transcribed and thematic analysis was carried out manually. 
Mixed results of the qualitative and quantitative data were triangulated and reported together to complement major contextual observations in this study.\nEthics approval and consent to participate: written permissions were obtained from the ethics committee of the Kwara State Ministry of Health, Ilorin, Nigeria. Informed consent was obtained from the participants. Confidentiality of the participants´ and health facilities´ information were maintained.", "Socio-demography of the enrollees: the enrollees had a median age of 43 years and 74.5% were women (Table 1). Close to half of the enrollees did not have formal education (42.5%); 77.2% were married and 17.2% widows. About three-quarters of the enrollees (73.8%) lived in semi-urban areas (Table 1). The majority were from Yoruba (64.5%) and Nupe (32.2%) ethnic groups. Islam was the predominant (83.2%) religion amongst them. The enrollees were equally spread over the wealth quintiles, with wealth calculated as annual per capita consumption of food and non-food items.\nsocio-demographic and socio-economic characteristics of the respondents\nJSS: junior secondary school; SSS: senior secondary school\nConsequences of (re)enrolment suspension on households: the survey shows that the majority of former enrollees (95.3%) kept utilizing KCHIP facilities, even after the suspension of the programme (Table 2). The factors responsible for this were explored by the IDI. Some former enrollees perceived that the KCHIP facilities have very friendly staff and this would encourage them to keep patronizing the health facilities. However, 74.0% of enrollees reported reverting to OOP payment for healthcare services at the KCHIP facilities. In general, the enrollees had more confidence in the private than the public KCHIP facilities. This is due to full (4.2%) or partial (21.8%) exemption from hospital bill when having financial constraints. According to IDI, many patients had established some friendship and cordial relationships with the facilities throughout the programs. For some enrollees the KCHIP facilities allowed them to pay in tranches for reasons of empathy and familiarity.\npreferences and constraints in ability to pay for and accessibility to healthcare after KCHIP suspension\n^ Multiple responses\nWe also reported that after suspension, 67.0% of enrollees experienced financial constraints in the ability to pay for healthcare services; 30.8% of whom reported it was due to the suspension, 11.2% to the economic situation of the country while 25.0% said it was due to both (Table 2). But a third (33.0%) of enrollees reported little or no change in their ability to pay for healthcare services. The IDIs reported almost all enrollees had financial constraints their ability to pay for healthcare services. An enrollee said; “my ability to pay has been hampered seriously by lack of funds”.\nMost common coping mechanisms reported by the enrollees were personal savings (63.3%), donations from friends and families (34.7%) and borrowing (11.8%). Other coping mechanisms included proceeds trading and sales (25.8%), household purse (19.7%) and money from other relatives (4.5%). Most of the enrollees in the qualitative interview agreed that spending household/family savings to offset healthcare bills were the immediate coping mechanism available to them after the programme suspension. Others narrated that they had to borrow from friends and family members, including seeking assistance from other relatives including the children in paying hospital bills. 
Some prominent social group mechanisms that offered health benefits to members, such as donations during episodes of illness and loan facilities to offset medical bills, were reported in the quantitative data. Examples of these mechanisms were (Table 2): Ajo - a local thrift (47.3%), religious groups (28.5%), community groups (18.7%) and cooperative groups (17.5%).\nFactors associated with a constrained ability to pay for healthcare among enrollees after the programme suspension: several factors were associated with the reported financial constraints in the ability to pay for healthcare after the insurance suspension (Table 3). Constraints were experienced more often by male enrollees (74.5%, p = 0.022), by those living in rural locations (75.8%, p = 0.006) and by those who had an acute illness/injury in the preceding 12 months (74.3%, p<0.001). In addition, ethnic groups other than Yoruba in the study (Nupe 89.6% and Hausa 87.5%) (p<0.001) and enrollees who patronized KCHIP facilities exclusively before the suspension (68.1%, p<0.001) were significantly more likely to be financially constrained in paying for healthcare services after the programme suspension. Wealth was also significantly associated with a constrained ability to pay: being in a lower wealth quintile was associated with a constrained ability to pay for healthcare.\nfactors associated with constraints in ability to pay for healthcare services after re-enrollment suspension\nStatistical significance; X2: chi-square\nPer Table 4, the predictors of being financially constrained were being male (OR=1.61, 95% CI=1.069; 2.436) and living in rural communities (OR=1.77, 95% CI=1.171; 2.677). Enrollees of Yoruba ethnicity (OR=0.15, 95% CI=0.091; 0.236) had lower odds of financial constraints in paying for healthcare services after the programme suspension compared to people from other ethnicities. Enrollees who depended solely on KCHIP health facilities before the suspension had increased odds of financial constraints (OR=1.94, 95% CI=1.032; 3.648), as did those with an acute illness or injury in the preceding 12 months (OR=3.38, 95% CI=2.309; 4.939).\npredictors of constrained ability to pay for healthcare services after the suspension of the program\nSignificance level (p) < 0.05\nConsequences of the programme suspension on the KCHIP facilities (IDI and hospital records): after the suspension, 24 of 29 health facilities claimed that the quality and quantity of services provided remained the same, while five confirmed a reduction in service provision. Before the suspension, more than two-thirds of the health facilities said they had experienced increased patronage and service utilization due to KCHIP. With the suspension of the programme, however, records revealed that all facilities experienced a significant reduction in out-patient loads, with a gradual decline in healthcare utilization (Figure 2). No appreciable effect was seen on in-patient visits (Figure 3). Among those that reported a reduction in service provision, a facility manager commented that: “at present, just about 5% of those previously registered on the programme is still coming to the health facility for treatment”. 
Another manager narrated that: “We saw just 2 patients today, compared to the time when the programme was in place whereby we will not have the time to even attend to you to have this interview”.\nout-patients´ visits by month for the year 2016 across the public and private health facilities\nin-patient visits by month for the year 2016 across public and private health facilities\nSeventeen of 29 facilities reported a decrease in revenue after suspension of the program. Nine recorded an increase in their revenue, despite a reduction in patient load; the facilities in this category were all public health facilities. They attributed this increase to the removal of restrictions on billing patients directly during the enrollment period. A facility manager said: “Our revenue has increased because patients now have to pay out of pocket unlike when the programme was running and drugs and tests were done freely”. Another said: “Our revenue has increased because people coming to the hospital now pay for services”. Similarly, 23 out of 29 facilities experienced a high staff turnover after suspension of KCHIP. Almost 90% (26) of the facilities reported a salary cut for their staff after the suspension. One of the facility managers recounted that: “with the suspension of the program, the revenue generated could not cater for the salaries of all the staff so 39 members of staff were laid off and those that remained (31) had a 40% salary cut”. However, the loss of incentives and salary cut reduced staff motivation and productivity as attested to by a manager: “the salary cut, as well as the loss of incentives, has resulted in low productivity of staff members who claim not to be motivated anymore”. More facilities (21) informed of the decrease in drug purchase after suspension of the program.", "This study assessed the effects of the suspension of a community-based health insurance programme in Kwara State, Nigeria. This suspension was due to the restructuring and repositioning of the scheme for a major government policy change to herald a state-wide health insurance program. This, therefore, necessitated an unusual 'reverse intervention' evaluation of community-based health insurance. This we carried out using a mixed-method to study both the former enrollees in KCHIP and the participating healthcare facilities.\nFirstly, this study reported that despite the suspension of KCHIP, the large majority of former enrollees still preferred to use the KCHIP health facilities. This was mostly due to extended positive experiences and relationships previously established with the KCHIP health facilities and perceived quality of care. The quality upgrades and periodic training drive at the KCHIP facilities before the programme suspension were most likely contributory factors to the quality of care observed by the enrollees. Also, other quality improvement interventions executed by KCHIP included capacity building on protocol and guidelines for treatments, records, laboratory, drug storage and infrastructure [10,11]. Therefore, this improved the quality of care and standard of practice in the KCHIP facilities with basically a few alternative options of similar medical quality available for enrollees in the state. The trust and the continuing usage of the KCHIP facilities reported by the enrollees after the suspension showed the propensity of community-based health insurance, when combined with quality improvement of medical services, to remove barriers to healthcare utilization [4]. 
The trust and cordial relationship shown by the former enrollees towards KCHIP facilities can be harnessed for high uptake of the incoming state-wide health insurance scheme.\nThis study also demonstrated that three-quarters of the former KCHIP enrollees reverted to OOP payment after suspension of the program. Because of the established relationships with KCHIP facilities, the remaining one-quarter of enrollees were treated for free or were allowed to make partial or tranche-wise payments, even with private healthcare providers. This could be an indicator of the build-up of social benefits from the KCHIP, though at a certain cost to the healthcare providers. The high rate of reversal to OOP endangered and eroded the original insurance aspirations and benefits of KCHIP [6,8,10] and it also represents a potential threat, which can plunge enrollees into catastrophic health expenditure [8,15].\nWe show that among the two-thirds of former enrollees who experienced constraints in paying for healthcare services, the suspension, as well as the general economic recession in Nigeria, were mentioned as the most important perceived causes. The economic recession has been reported elsewhere to cause a reduction in individual expenditure and health insurance consumption [16]. We found that the KCHIP suspension had additional and immediate consequences for former enrollees, leading to the adoption of financial coping mechanisms like personal savings, donations and borrowing. Enrollees also reported receiving support from financial or social groups in the form of “Ajo” thrifts and religious, community and social cooperative groups as a way of coping with unexpected healthcare costs after the suspension. These were beneficial to individuals who required funds during sickness. Such financial and social groups are effective coping strategies in terms of improved household income [17]. This underscores the potential of local thrifts and cooperative groups in financing health insurance enrollment fees in Nigerian communities.\nMale enrollees living in rural communities reported more difficulties paying for healthcare services after the programme suspension. This is in line with a study on catastrophic health expenditure in Nigeria, which concluded that female-headed households were less likely to incur catastrophic expenses compared to male-headed households [18]. This reflects lower access to healthcare services and higher foregone formal care among women compared to men [19,20]. The Yoruba ethnic group appeared less constrained to pay for healthcare services after the suspension. Living in rural communities of Nigeria is associated with poverty, poor infrastructure and lack of geographical and financial access to healthcare services [21]. Our findings on the wealth quintiles, which indicated a significant socio-economic gradient in access to healthcare after the suspension, look similar to the inference of another local study [22], which concluded that the richer quintiles indeed experienced less catastrophic health expenditure.\nA previous study in Kwara State on spending for non-communicable chronic disease (NCCD) reported health expenditures relative to the annual consumption of the poorest quintile exceeding those of the highest quintile 2.2-fold, and the poorest quintile exhibiting a higher rate of catastrophic health spending (10.8% among NCCD-affected households) than the three upper quintiles (4.2% to 6.7%) [19]. 
For the state-wide scheme, this finding implies that the lower socio-economic groups are at greater risk of financial constraint. They should have enrollment fees subsidized or paid for through a government social scheme. Enrollees who experienced an acute illness or injury in the 12 months preceding the suspension of enrollment had increased odds of being financially constrained in the ability to pay after the suspension. Several Nigerian studies [22] also reported an increased risk of incurring catastrophic health expenditures for household members with non-chronic illnesses. This finding implies that enrollees with acute illnesses are more likely to be unprepared and could suffer more financial constraints in paying for healthcare services without a health insurance scheme.\nWhile there were no serious consequences concerning the range of service provision, a significant reduction in patient load in (almost) all of the KCHIP facilities was observed. However, there were slight spikes in patient load around the wet months of the year, which supports seasonal patterns of health-seeking behavior in Nigeria, mostly related to the malaria season and harvest time. All health facilities´ revenues dropped considerably as enrollees exited the program. Private health facilities experienced higher drops in revenue after the KCHIP suspension. Public facilities still received stipends from the government to run their services, which cushioned these effects. Some public health facilities even reported an increase in revenue generation because of the removal of the insurance programme's user-charge restrictions on the direct billing of patients. This is also possibly due to some shift of private patients towards the public sector, since some services were free in public health facilities. Private health facilities disproportionately suffered a reduction in staff strength, motivation and productivity. This resulted in the downsizing of staff in many of these facilities. We also observed a downward trend in drug purchases among the private health facilities, which remained unchanged in public facilities that kept benefiting from the supply of essential drugs from the ministry of health. Finally, the suspension of KCHIP was reflected in a clear downward trend in out-patient department visits, but in-patient visits remained the same. These findings revealed that, although the private facilities enjoyed the trust of the enrollees before the suspension, they were unable to cope with service provision and staff retention after the suspension in the way the public facilities, which enjoyed funding from the government, could.\nIn the literature, suspending an impactful health insurance programme is an unusual policy decision. This is probably due to the high political sensitivity and the legislative bureaucracy that such action would cause. In January 2016, the Qatari government suspended a state-financed mandatory national health insurance programme due to an inability to sustain exclusive funding of the programme because of a fall in global oil prices [23]. Experts expected a larger private sector involvement in Qatari healthcare coverage in the short term, and uncertainty regarding payment of Qataris' medical bills and UHC in the long term. The suspension of the Qatari health insurance programme adversely affected hospitals, health centers and patients, which caused a negative outcry among the population [23]. 
Similar observations are made in Kwara concerning deteriorating access to healthcare, which happened much rapidly due to the weaker healthcare infrastructure and poverty status of the population. Shifting from fragmented smaller-scale community-based health insurance programs to a larger state-owned insurance programme is a precarious process. Lessons can be learnt from elsewhere in Africa, like the development of the National Health Insurance Scheme (NHIS) in Ghana [24], the political path to impactful community health insurance in Rwanda [25] and the transition of the improved Community Health Fund (iCHF) into a national iCHF in Tanzania [26]. A common recommendation is the introduction of a transition phase with clearly defined services before the new larger-scale insurance package is introduced, providers are assigned and financial coverage is arranged for instance through tax systems, like value added tax (VAT) such as in Ghana [24].\nThis paper demonstrates that temporary suspension of health insurance in the absence of transitional measures has consequences for enrollees and healthcare providers. It also provides opportunities to learn lessons. For example, it was learnt that transition periods can leverage on previously built social capital, including the network of relations between former enrollees and healthcare providers, as well as the support from particular social groups (religious, community and cooperatives). It was also learnt that refurbishment of health facilities and quality improvement of services during the previous phase of community-based health insurance was appreciated also during the suspension of the KCHIP, with people continuing to visit KCHIP healthcare facilities. At the policy level, Kwara State worked to adopt a law that makes health insurance mandatory for all inhabitants and requires that the state government commits one percent of its revenues to finance health insurance. In addition, during the transition phase, Kwara State started the process of setting up a dedicated state health insurance fund that pools financial contributions from diverse sources, including the State Government, the Federal Government of Nigeria (particularly Ministry of Health and National Health Insurance Program) and individual enrollees. All these should be harnessed for a sustainable and effective health insurance scheme in Kwara State, Nigeria.", "After the suspension of the KCHIP health insurance programme in Kwara State, Nigeria, former enrollees still preferred using the KCHIP health facilities and they reverted almost ubiquitously to OOP payments. At the same time, out-patient healthcare consumption decreased substantially, with a large proportion of former enrollees not being able to afford healthcare services. Belonging to some form of financial/social group proved beneficial in the short term as a coping mechanism. Social capital built through KCHIP between former enrollees and clinics helped alleviate part of the financial burden for the former enrollees, but not for the facilities. Enrollees with the highest probability of suffering adverse consequences of the programme suspension were male enrollees, households in the lower social quintiles, living in rural communities and those reporting recent acute illness. Private health facilities suffered more consequences of the programme suspension than public facilities in terms of reduced financial inflow sequel to change in the revenue and resources. 
These observations point to the need for designing effective transition processes from community-based health insurance to state insurance in Nigerian states.\nFunding: although the PharmAccess Foundation funded this study and one of the authors works at the company, the study was not influenced by his participation in the design, data collection and manuscript writing.\nWhat is known about this topic\nIt is already known that a community-based health insurance scheme is an important strategy to achieve accessibility to quality healthcare for underserved populations.\nThe Kwara Community Health Insurance Scheme was adjudged one of the impactful health interventions from sub-Saharan Africa.\nWhat this study adds\nThis study provides insight into the consequences of suspending an impactful health intervention like the Kwara Community Health Insurance Scheme.\nIt also highlighted the common challenges experienced and coping strategies adopted by the enrollees and the care providers on the scheme.", "\nIt is already known that a community-based health insurance scheme is an important strategy to achieve accessibility to quality healthcare for underserved populations; the Kwara Community Health Insurance Scheme was adjudged one of the impactful health interventions from sub-Saharan Africa.", "\nThis study provides insight into the consequences of suspending an impactful health intervention like the Kwara Community Health Insurance Scheme.\nIt also highlighted the common challenges experienced and coping strategies adopted by the enrollees and the care providers on the scheme." ]
[ "intro", "methods", null, "results", "discussion", "conclusions", null, null ]
[ "Community-based", "health insurance", "suspension", "Kwara" ]
Introduction: The progress towards Universal Health Coverage (UHC) involves setting ambitious goals for expanding access to quality health services based on establishing a greater reliance on risk-pooling and prepayment mechanisms to finance health, stimulating investments in healthcare infrastructure and quality, and building human resources and skills for health. The World Health Organization (WHO) estimates that more than half the world´s population does not have access to the health services they need, and 100 million people suffer financial catastrophe every year due to out-of-pocket (OOP) expenditures for unexpected healthcare [1]. Introduction of a health insurance programme is one of the ways to enhance access to healthcare services and to protect individuals from catastrophic health expenditures [2]. Financing healthcare through a tax-based system (which is also a form of risk-pooling) is difficult as many low- and middle-income countries (LMICs) are struggling to mobilize sufficient resources. As a result, OOP expenditures remain high and in combination with poor healthcare services form an important barrier to UHC. How to successfully roll out sustainable health insurance on a large scale and ensure sufficient take-up in LMICs is an outstanding question [3,4]. In Africa, more than half of all healthcare expenses are covered through OOP payments. For example, in Nigeria - the most populous country in Africa, with a population of more than 200 million, there are substantial inequalities in access to healthcare with 72% of health expenses paid OOP and only about 4% of the people, mostly in the formal sector, having access to health insurance today [5]. Nigeria accounts for 2% of the world population but contributes to 14% of maternal deaths and 23% of malaria cases [2]. To address these burdens, Kwara State, one of the poorest states in Nigeria, with the support of PharmAccess and the Netherlands Health Insurance Fund launched a subsidized Kwara Community Health Insurance Programme (KCHIP) in 2007 [6-8]. By the year 2015, a total of 347,132 people and 42 public and private healthcare facilities participated in KCHIP (Figure 1). major policy milestone, study period and clients´ enrolment trend in Kwara community health insurance programme between 2015 and 2018 The impact of KCHIP has been assessed over time through various studies [6-8], indicating an increase in the use of healthcare (while controlling for additional variables) of up to 90% among enrolled communities [7,9], markedly improved cost-effectiveness, as well as substantial benefits in terms of improved health outcomes concerning chronic diseases like hypertension; [10], and maternal and child care [11]. Similarly, OOP expenditures significantly decreased by 50% among enrollees, thus securing more financial protection in the medium run [7,9]. The KCHIP was also found to increase awareness about health status among community members [9,12]. Additionally, it was demonstrated KCHIP could deliver a basic quality healthcare coverage at the US $28 per person per year, compared to the WHO benchmark of US $60 and Nigeria´s total health expenditure per capita of US $115 [8]. An important feature of KCHIP was as incremental financial commitment and ownership of the programme by the Kwara State Government over time. The programme was aimed at synergizing with the Nigeria National Health Insurance Programme (NHIS) to attain UHC for the state [13]. 
In January 2015 (Figure 1), the programme partners signed an agreement to transition KCHIP to the Kwara State Health Insurance Programme (KSHIP). Pending this arrangement, KCHIP enrolment was temporarily suspended while designing a new insurance product and premium to be introduced and deployed on a state-wide level. Whereas in January 2016, KCHIP was active in 11 out of the 16 Local Government Areas (LGAs) in Kwara State and recorded a total enrolment of 139,714 clients, these clients were not renewed throughout 2016. This resulted in a gradual drop-out over the year with no clients insured by January 2017 (Figure 1). Therefore, a unique ‘reverse insurance intervention´ situation emerged, which was evaluated in this study. This paper describes the consequences of the suspension of KCHIP in Kwara State, Nigeria, in the wake of state-wide health insurance, and analyses the effects on healthcare quality, utilization and financial constraints for healthcare among former enrollees, as well as the consequences for formerly participating KCHIP health facilities. Methods: Study design and study population: in August 2018, about 2 years after the suspension of KCHIP (Figure 1), a mixed-method study was carried out among KCHIP former enrollees and healthcare providers in Kwara State, Nigeria. Using multi-stage random sampling, we recruited a total of 600 enrollees whose health insurance policy had expired at least 4 months before the end of December 2016. For the quantitative cross-sectional survey we obtain data on socio-demographics, healthcare utilization, enrolment status, health financial constraints and coping strategies since the suspension. Only adults (18 years and above) were included in the study, of whom a purposively selected 400 enrollees had accessed care in a KCHIP healthcare facility in the preceding 12 months. The remaining 200 participants were selected from those uninsured in the past 12 months. Of those 400 participants who have accessed healthcare, half (200) who had in addition to other health conditions been seeking chronic care, maternal care and care for acute conditions were included in the study. In-depth interviews (IDIs) were performed among 24 purposively selected former enrollees and among 29 health facilities´ managers of (19 public, 10 private) participating KCHIP facilities. The IDIs explored the effects of the programme suspension on both healthcare utilization by former enrollees and their coping mechanisms, and health facilities´ service provision. To be selected for the IDI, the participant must be above 18 years of age and must have utilized pertinent healthcare in the past 12 months. To obtain healthcare utilization pattern due to the programme suspension, health facilities´ clinical records were reviewed as part of the observation checklist tool developed for the qualitative data collection. Sampling and data collection Quantitative study: multi-stage sampling was used, selecting 5 Local Government Areas (LGAs): two from Kwara South, two from Kwara North and one from Kwara Central senatorial zones. Enrollees were selected randomly with the KCHIP enrollment database serving as sampling frame after allocating LGAs proportionate to constituent population sizes (total enrollment in the 5 LGAs in January 2016 was 73,438). An additional 30% was added from the sample frame for each LGA to cater for non-response and untraceable enrollees. 
The selected enrollees were traced in the community (with the help of community mobilizers) and interviewed by trained interviewers. The questionnaire captured data on respondents´ socio-economic characteristics, morbidity patterns, healthcare access and utilization in the preceding 12 months. Qualitative study: we conducted two rounds of IDIs among former enrollees and facilities´ managers. The enrollees´ interviews were conducted among 24 purposively selected adults across 9 selected LGAs cutting across the 3 zones of Kwara State. The selection of former enrollees into the IDIs was carried out in and around the health facilities using a pretested interview guide. The facility managers´ interviews were conducted in KCHIP facilities among the officers-in-charge (or the medical director). This comprised all 29 Enhanced Community Based Care (ECBC) health facilities (19 public, 10 private) spread across 9 LGAs; 13 health posts providing remote care services were excluded from the study because they were already linked to records of the 29 ECBCs. Data analysis:the quantitative data entry platform was designed using Open Data Kit® (ODK), while the data was entered using Kobo Toolbox® [14] and later exported to Statistical Package for Social Science (SPSS) version 22 for analysis. Simple logistic regression was used to explore the predictive factors of the financial constraints in the ability to pay for healthcare services after the programme suspension. The level of significance was set at a p-value of < 0.05 complemented with a 95% confidence interval (CI). Recorded qualitative interviews were transcribed and thematic analysis was carried out manually. Mixed results of the qualitative and quantitative data were triangulated and reported together to complement major contextual observations in this study. Ethics approval and consent to participate: written permissions were obtained from the ethics committee of the Kwara State Ministry of Health, Ilorin, Nigeria. Informed consent was obtained from the participants. Confidentiality of the participants´ and health facilities´ information were maintained. Quantitative study: multi-stage sampling was used, selecting 5 Local Government Areas (LGAs): two from Kwara South, two from Kwara North and one from Kwara Central senatorial zones. Enrollees were selected randomly with the KCHIP enrollment database serving as sampling frame after allocating LGAs proportionate to constituent population sizes (total enrollment in the 5 LGAs in January 2016 was 73,438). An additional 30% was added from the sample frame for each LGA to cater for non-response and untraceable enrollees. The selected enrollees were traced in the community (with the help of community mobilizers) and interviewed by trained interviewers. The questionnaire captured data on respondents´ socio-economic characteristics, morbidity patterns, healthcare access and utilization in the preceding 12 months. Qualitative study: we conducted two rounds of IDIs among former enrollees and facilities´ managers. The enrollees´ interviews were conducted among 24 purposively selected adults across 9 selected LGAs cutting across the 3 zones of Kwara State. The selection of former enrollees into the IDIs was carried out in and around the health facilities using a pretested interview guide. The facility managers´ interviews were conducted in KCHIP facilities among the officers-in-charge (or the medical director). 
Results: Socio-demography of the enrollees: the enrollees had a median age of 43 years and 74.5% were women (Table 1). Close to half of the enrollees (42.5%) had no formal education; 77.2% were married and 17.2% were widowed. About three-quarters of the enrollees (73.8%) lived in semi-urban areas (Table 1). The majority were from the Yoruba (64.5%) and Nupe (32.2%) ethnic groups. Islam was the predominant religion (83.2%) amongst them. The enrollees were spread evenly over the wealth quintiles, with wealth calculated as annual per capita consumption of food and non-food items. Table 1: socio-demographic and socio-economic characteristics of the respondents (JSS: junior secondary school; SSS: senior secondary school). Consequences of (re)enrolment suspension on households: the survey shows that the majority of former enrollees (95.3%) kept utilizing KCHIP facilities even after the suspension of the programme (Table 2). The factors responsible for this were explored in the IDIs. Some former enrollees perceived the KCHIP facilities as having very friendly staff, which encouraged them to keep patronizing these health facilities. However, 74.0% of enrollees reported reverting to OOP payment for healthcare services at the KCHIP facilities. In general, the enrollees had more confidence in the private than in the public KCHIP facilities, partly because of full (4.2%) or partial (21.8%) exemption from hospital bills when facing financial constraints. According to the IDIs, many patients had established friendships and cordial relationships with the facilities throughout the programme. For some enrollees the KCHIP facilities allowed payment in tranches out of empathy and familiarity. Table 2: preferences and constraints in ability to pay for and accessibility to healthcare after KCHIP suspension (^ multiple responses). We also found that after the suspension, 67.0% of enrollees experienced financial constraints in the ability to pay for healthcare services; 30.8% of them attributed this to the suspension, 11.2% to the economic situation of the country, while 25.0% said it was due to both (Table 2). A third (33.0%) of enrollees reported little or no change in their ability to pay for healthcare services. The IDIs indicated that almost all enrollees had financial constraints in their ability to pay for healthcare services. An enrollee said: “my ability to pay has been hampered seriously by lack of funds”. The most common coping mechanisms reported by the enrollees were personal savings (63.3%), donations from friends and families (34.7%) and borrowing (11.8%). Other coping mechanisms included proceeds from trading and sales (25.8%), the household purse (19.7%) and money from other relatives (4.5%). Most of the enrollees in the qualitative interviews agreed that spending household/family savings to offset healthcare bills was the immediate coping mechanism available to them after the programme suspension.
Others narrated that they had to borrow from friends and family members, including seeking assistance from relatives such as their children to pay hospital bills. The quantitative data also identified social group mechanisms that offered health benefits to members, such as donations during episodes of illness and loan facilities to offset medical bills. Examples of these mechanisms were (Table 2): Ajo - a local thrift (47.3%), religious groups (28.5%), community groups (18.7%) and cooperative groups (17.5%). Factors associated with a constrained ability to pay for healthcare among enrollees after the programme suspension: several factors were found to be associated with reported financial constraints in the ability to pay for healthcare after the insurance suspension (Table 3). Constraints were experienced more often by male enrollees (74.5%, p = 0.022), by those living in rural locations (75.8%, p = 0.006) and by those who had had an acute illness/injury in the preceding 12 months (74.3%, p<0.001). In addition, ethnic groups other than Yoruba (Nupe 89.6% and Hausa 87.5%) in the study (p<0.001) and enrollees who patronized KCHIP facilities exclusively before the suspension (68.1%, p<0.001) were significantly more likely to be financially constrained in paying for healthcare services after the programme suspension. Wealth was also significantly associated with a constrained ability to pay: being in a lower wealth quintile was associated with a constrained ability to pay for healthcare. Table 3: factors associated with constraints in ability to pay for healthcare services after re-enrollment suspension (statistical significance; X2: chi-square). Per Table 4, the predictive factors for being financially constrained were being male (OR=1.61, 95% CI=1.069; 2.436) and living in rural communities (OR=1.77, 95% CI=1.171; 2.677). Enrollees of Yoruba ethnicity (OR=0.15, 95% CI=0.091; 0.236) had fewer financial constraints in paying for healthcare services after the programme suspension compared to people from other ethnicities. Enrollees who depended solely on KCHIP health facilities before the suspension had increased odds (OR=1.94, 95% CI=1.032; 3.648) of financial constraints, while those with an acute illness or injury in the preceding 12 months also had increased odds (OR=3.38, 95% CI=2.309; 4.939). Table 4: predictors of constrained ability to pay for healthcare services after the suspension of the programme (significance level (p) < 0.05). Consequences of the programme suspension on the KCHIP facilities (IDI and hospital records): after the suspension, 24 of 29 health facilities claimed the quality and quantity of services provided remained the same, while five confirmed a reduction in service provision. Previously, more than two-thirds of the health facilities claimed they had experienced increased patronage and service utilization due to KCHIP. With the suspension of the programme, however, records revealed that all facilities experienced a significant reduction in out-patient loads, as there was a gradual decline in healthcare utilization (Figure 2). No appreciable effect was seen on in-patient visits (Figure 3). Among those that reported a reduction in service provision, one facility manager commented that: “at present, just about 5% of those previously registered on the programme is still coming to the health facility for treatment”. Another manager narrated that: “We saw just 2 patients today, compared to the time when the programme was in place whereby we will not have the time to even attend to you to have this interview”.
Figure 2: out-patients' visits by month for the year 2016 across the public and private health facilities. Figure 3: in-patient visits by month for the year 2016 across public and private health facilities. Seventeen of 29 facilities reported a decrease in revenue after the suspension of the programme. Nine recorded an increase in their revenue despite a reduction in patient load; the facilities in this category were all public health facilities. They attributed this increase to the removal of the restrictions on billing patients directly that had applied during the enrollment period. A facility manager said: “Our revenue has increased because patients now have to pay out of pocket unlike when the programme was running and drugs and tests were done freely”. Another said: “Our revenue has increased because people coming to the hospital now pay for services”. Similarly, 23 out of 29 facilities experienced a high staff turnover after the suspension of KCHIP. Almost 90% (26) of the facilities reported a salary cut for their staff after the suspension. One of the facility managers recounted that: “with the suspension of the program, the revenue generated could not cater for the salaries of all the staff so 39 members of staff were laid off and those that remained (31) had a 40% salary cut”. The loss of incentives and the salary cuts reduced staff motivation and productivity, as attested to by a manager: “the salary cut, as well as the loss of incentives, has resulted in low productivity of staff members who claim not to be motivated anymore”. Twenty-one facilities reported a decrease in drug purchases after the suspension of the programme. Discussion: This study assessed the effects of the suspension of a community-based health insurance programme in Kwara State, Nigeria. The suspension was due to the restructuring and repositioning of the scheme in preparation for a major government policy change heralding a state-wide health insurance programme. This necessitated an unusual 'reverse intervention' evaluation of community-based health insurance, which we carried out using a mixed-methods design covering both the former KCHIP enrollees and the participating healthcare facilities. Firstly, this study found that despite the suspension of KCHIP, the large majority of former enrollees still preferred to use the KCHIP health facilities. This was mostly due to sustained positive experiences and relationships previously established with the KCHIP health facilities, and to the perceived quality of care. The quality upgrades and periodic training drives at the KCHIP facilities before the programme suspension most likely contributed to the quality of care observed by the enrollees. Other quality improvement interventions executed by KCHIP included capacity building on protocols and guidelines for treatment, records, laboratory practice, drug storage and infrastructure [10,11]. These improved the quality of care and the standard of practice in the KCHIP facilities, leaving enrollees with few alternative options of similar medical quality in the state. The trust in and continued usage of the KCHIP facilities reported by the enrollees after the suspension shows the propensity of community-based health insurance, when combined with quality improvement of medical services, to remove barriers to healthcare utilization [4]. The trust and cordial relationships shown by the former enrollees towards the KCHIP facilities can be harnessed for high uptake of the incoming state-wide health insurance scheme.
This study also demonstrated that three-quarters of the former KCHIP enrollees reverted to OOP payment after the suspension of the programme. Because of the established relationships with KCHIP facilities, the remaining one-quarter of enrollees were treated for free or were allowed to make partial or tranche-wise payments, even with private healthcare providers. This could be an indicator of the build-up of social benefits from the KCHIP, though at a certain cost to the healthcare providers. The high rate of reversal to OOP endangered and eroded the original insurance aspirations and benefits of KCHIP [6,8,10] and also represents a potential threat that can plunge enrollees into catastrophic health expenditure [8,15]. We show that among the two-thirds of former enrollees who experienced constraints in paying for healthcare services, the suspension as well as the general economic recession in Nigeria were mentioned as the most important perceived causes. The economic recession has been reported elsewhere to cause a reduction in individual expenditure and health insurance consumption [16]. We found that the KCHIP suspension had additional and immediate consequences for former enrollees, leading to the adoption of financial coping mechanisms like personal savings, donations and borrowing. Enrollees also reported receiving support from financial or social groups in the form of “Ajo” thrifts and religious, community and social cooperative groups as a way of coping with unexpected healthcare costs after the suspension. These were beneficial to individuals who required funds during illness. Such financial and social groups are effective coping strategies in terms of improved household income [17]. This underscores the potential of local thrifts and cooperative groups in financing health insurance enrollment fees in Nigerian communities. Male enrollees living in rural communities reported more difficulties paying for healthcare services after the programme suspension. This is in line with a study on catastrophic health expenditure in Nigeria, which concluded that female-headed households were less likely to incur catastrophic expenses compared to male-headed households [18]. It may also reflect lower access to healthcare services and more foregone formal care among women compared to men [19,20]. The Yoruba ethnic group appeared less constrained in paying for healthcare services after the suspension. Living in rural communities of Nigeria is associated with poverty, poor infrastructure and a lack of geographical and financial access to healthcare services [21]. Our findings on the wealth quintiles, which indicated a significant socio-economic gradient in access to healthcare after the suspension, are similar to the inference of another local study [22], which concluded that the richer quintiles indeed experienced less catastrophic health expenditure. A previous study in Kwara State on spending for non-communicable chronic disease (NCCD) reported health expenditures relative to annual consumption in the poorest quintile exceeding those of the highest quintile 2.2-fold, and the poorest quintile exhibiting a higher rate of catastrophic health spending (10.8% among NCCD-affected households) than the three upper quintiles (4.2% to 6.7%) [19]. For the state-wide scheme, this finding implies that lower socio-economic groups are at greater risk of financial constraint; their enrollment fees should be subsidized or paid for through a government social scheme.
Enrollees who experienced an acute illness or injury in the 12 months preceding the suspension of enrollment had increased odds of being financially constrained in the ability to pay after the suspension. Several Nigerian studies [22] have also reported an increased risk of incurring catastrophic health expenditures for household members with non-chronic illnesses. This finding implies that enrollees with acute illnesses are more likely to be unprepared and could suffer greater financial constraints in paying for healthcare services without a health insurance scheme. While there were no serious consequences concerning the range of services provided, a significant reduction in patient load was observed in (almost) all of the KCHIP facilities. However, there were slight spikes in patient load around the wet months of the year, consistent with seasonal patterns of health-seeking behavior in Nigeria, mostly related to the malaria season and harvest time. Most health facilities' revenues dropped considerably as enrollees exited the programme. Private health facilities experienced larger drops in revenue after the KCHIP suspension. Public facilities still received stipends from the government to run their services, which cushioned these effects. Some public health facilities even reported an increase in revenue generation because of the removal of the insurance programme's restrictions on billing patients directly. This is also possibly due to some shift of private patients towards the public sector, since some services were free in public health facilities. Private health facilities disproportionately suffered a reduction in staff strength, motivation and productivity, which resulted in the downsizing of staff in many of these facilities. We also observed a downward trend in drug purchases among the private health facilities, which remained unchanged in public facilities that kept benefiting from the supply of essential drugs from the ministry of health. Finally, the suspension of KCHIP was reflected in a clear downward trend in out-patient department visits, whereas in-patient visits remained the same. These findings reveal that although the private facilities enjoyed the trust of the enrollees before the suspension, they were unable to cope with service provision and staff retention after the suspension, unlike the public facilities, which enjoyed funding from the government. In the literature, suspending an impactful health insurance programme is an unusual policy decision, probably because of the high political sensitivity and legislative bureaucracy that such action would entail. In January 2016, the Qatari government suspended a state-financed mandatory national health insurance programme due to its inability to sustain the exclusive funding of the programme after a fall in global oil prices [23]. Experts expected larger private-sector involvement in Qatari healthcare coverage in the short term, and uncertainty regarding the payment of Qataris' medical bills and UHC in the long term. The suspension of the Qatari health insurance programme adversely affected hospitals, health centers and patients, which caused a negative outcry among the population [23]. Similar observations were made in Kwara concerning deteriorating access to healthcare, which happened much more rapidly due to the weaker healthcare infrastructure and the poverty status of the population.
Shifting from fragmented smaller-scale community-based health insurance programmes to a larger state-owned insurance programme is a precarious process. Lessons can be learnt from elsewhere in Africa, like the development of the National Health Insurance Scheme (NHIS) in Ghana [24], the political path to impactful community health insurance in Rwanda [25] and the transition of the improved Community Health Fund (iCHF) into a national iCHF in Tanzania [26]. A common recommendation is the introduction of a transition phase with clearly defined services before the new larger-scale insurance package is introduced, providers are assigned and financial coverage is arranged, for instance through tax systems such as value added tax (VAT) as in Ghana [24]. This paper demonstrates that temporary suspension of health insurance in the absence of transitional measures has consequences for enrollees and healthcare providers. It also provides opportunities to learn lessons. For example, it was learnt that transition periods can leverage previously built social capital, including the network of relations between former enrollees and healthcare providers, as well as the support from particular social groups (religious, community and cooperative groups). It was also learnt that the refurbishment of health facilities and the quality improvement of services during the previous phase of community-based health insurance continued to be appreciated during the suspension of the KCHIP, with people continuing to visit KCHIP healthcare facilities. At the policy level, Kwara State worked to adopt a law that makes health insurance mandatory for all inhabitants and requires the state government to commit one percent of its revenues to financing health insurance. In addition, during the transition phase, Kwara State started the process of setting up a dedicated state health insurance fund that pools financial contributions from diverse sources, including the State Government, the Federal Government of Nigeria (particularly the Ministry of Health and the National Health Insurance Programme) and individual enrollees. All these should be harnessed for a sustainable and effective health insurance scheme in Kwara State, Nigeria. Conclusion: After the suspension of the KCHIP health insurance programme in Kwara State, Nigeria, former enrollees still preferred using the KCHIP health facilities and they reverted almost ubiquitously to OOP payments. At the same time, out-patient healthcare consumption decreased substantially, with a large proportion of former enrollees not being able to afford healthcare services. Belonging to some form of financial/social group proved beneficial in the short term as a coping mechanism. Social capital built through KCHIP between former enrollees and clinics helped alleviate part of the financial burden for the former enrollees, but not for the facilities. The enrollees with the highest probability of suffering adverse consequences of the programme suspension were male enrollees, households in the lower wealth quintiles, those living in rural communities and those reporting a recent acute illness. Private health facilities suffered more consequences of the programme suspension than public facilities, in terms of reduced financial inflow following the changes in revenue and resources. These observations point to the need for designing effective transition processes from community-based health insurance to state insurance in Nigerian states.
Funding: though the PharmAccess Foundation funded this study and one of the authors works at the company, the study was not influenced by his participation in the design, data collection and manuscript writing. What is known about this topic: It is an established fact that community-based health insurance schemes are an important strategy to achieve access to quality healthcare for underserved populations; the Kwara Community Health Insurance Scheme was adjudged one of the impactful health interventions from sub-Saharan Africa. What this study adds: This study provides insight into the consequences of suspending an impactful health intervention like the Kwara Community Health Insurance Scheme. It also highlights the common challenges experienced and the coping strategies adopted by the enrollees and the care providers on the scheme.
Background: a subsidized community health insurance programme in Kwara State, Nigeria was temporarily suspended in 2016 in anticipation of the roll-out of a state-wide health insurance scheme. This article reports the adverse consequences of the scheme's suspension on enrollees' healthcare utilization. Methods: a mixed-methods study was carried out in Kwara State, Nigeria, in 2018 using a semi-quantitative cross-sectional survey amongst 600 former Kwara community health insurance clients, and in-depth interviews with 24 clients and 29 participating public and private healthcare providers in the programme. Both quantitative and qualitative data were analyzed and triangulated. Results: most former enrollees (95.3%) kept utilizing programme facilities after the suspension, mainly because of the high quality of care. However, the majority of enrollees (95.8%) reverted to out-of-pocket payment, while 67% reported constraints in paying for healthcare services after the suspension of the programme. In the absence of insurance, the most common coping mechanisms for healthcare payment were personal savings (63.3%), donations from friends and families (34.7%) and loans (11.8%). Being a male enrollee (odds ratio=1.61), living in a rural community (odds ratio=1.77), exclusive usage of the Kwara Community Health Insurance Programme (KCHIP) prior to the suspension (odds ratio=1.94) and suffering an acute illness (odds ratio=3.38) increased the odds of being financially constrained in accessing healthcare. Conclusions: after the suspension of the scheme, many enrollees and health facilities experienced financial constraints. These findings underscore the importance of sustainable health insurance schemes as a risk-pooling mechanism to sustain access to good quality health care and financial protection from catastrophic health expenditures.
Introduction: The progress towards Universal Health Coverage (UHC) involves setting ambitious goals for expanding access to quality health services based on establishing a greater reliance on risk-pooling and prepayment mechanisms to finance health, stimulating investments in healthcare infrastructure and quality, and building human resources and skills for health. The World Health Organization (WHO) estimates that more than half the world's population does not have access to the health services they need, and 100 million people suffer financial catastrophe every year due to out-of-pocket (OOP) expenditures for unexpected healthcare [1]. Introduction of a health insurance programme is one of the ways to enhance access to healthcare services and to protect individuals from catastrophic health expenditures [2]. Financing healthcare through a tax-based system (which is also a form of risk-pooling) is difficult, as many low- and middle-income countries (LMICs) are struggling to mobilize sufficient resources. As a result, OOP expenditures remain high and, in combination with poor healthcare services, form an important barrier to UHC. How to successfully roll out sustainable health insurance on a large scale and ensure sufficient take-up in LMICs is an outstanding question [3,4]. In Africa, more than half of all healthcare expenses are covered through OOP payments. For example, in Nigeria - the most populous country in Africa, with a population of more than 200 million - there are substantial inequalities in access to healthcare, with 72% of health expenses paid OOP and only about 4% of the people, mostly in the formal sector, having access to health insurance today [5]. Nigeria accounts for 2% of the world population but contributes 14% of maternal deaths and 23% of malaria cases [2]. To address these burdens, Kwara State, one of the poorest states in Nigeria, with the support of PharmAccess and the Netherlands Health Insurance Fund, launched the subsidized Kwara Community Health Insurance Programme (KCHIP) in 2007 [6-8]. By the year 2015, a total of 347,132 people and 42 public and private healthcare facilities participated in KCHIP (Figure 1). Figure 1: major policy milestones, study period and clients' enrolment trend in the Kwara community health insurance programme between 2015 and 2018. The impact of KCHIP has been assessed over time through various studies [6-8], indicating an increase in the use of healthcare (while controlling for additional variables) of up to 90% among enrolled communities [7,9], markedly improved cost-effectiveness, as well as substantial benefits in terms of improved health outcomes concerning chronic diseases like hypertension [10], and maternal and child care [11]. Similarly, OOP expenditures significantly decreased by 50% among enrollees, thus securing more financial protection in the medium run [7,9]. The KCHIP was also found to increase awareness about health status among community members [9,12]. Additionally, it was demonstrated that KCHIP could deliver basic quality healthcare coverage at US $28 per person per year, compared to the WHO benchmark of US $60 and Nigeria's total health expenditure per capita of US $115 [8]. An important feature of KCHIP was the incremental financial commitment and ownership of the programme by the Kwara State Government over time. The programme was aimed at synergizing with the Nigeria National Health Insurance Programme (NHIS) to attain UHC for the state [13].
6,752
330
[ 463, 94, 87 ]
8
[ "health", "enrollees", "facilities", "healthcare", "insurance", "suspension", "kchip", "health insurance", "kwara", "community" ]
[ "health insurance programme", "catastrophic health expenditure", "healthcare expenses", "universal health coverage", "sustainable health insurance" ]
[CONTENT] Community-based | health insurance | suspension | Kwara [SUMMARY]
[CONTENT] Community-based | health insurance | suspension | Kwara [SUMMARY]
[CONTENT] Community-based | health insurance | suspension | Kwara [SUMMARY]
[CONTENT] Community-based | health insurance | suspension | Kwara [SUMMARY]
[CONTENT] Community-based | health insurance | suspension | Kwara [SUMMARY]
[CONTENT] Community-based | health insurance | suspension | Kwara [SUMMARY]
[CONTENT] Community-Based Health Insurance | Cross-Sectional Studies | Health Expenditures | Humans | Insurance, Health | Male | Nigeria [SUMMARY]
[CONTENT] Community-Based Health Insurance | Cross-Sectional Studies | Health Expenditures | Humans | Insurance, Health | Male | Nigeria [SUMMARY]
[CONTENT] Community-Based Health Insurance | Cross-Sectional Studies | Health Expenditures | Humans | Insurance, Health | Male | Nigeria [SUMMARY]
[CONTENT] Community-Based Health Insurance | Cross-Sectional Studies | Health Expenditures | Humans | Insurance, Health | Male | Nigeria [SUMMARY]
[CONTENT] Community-Based Health Insurance | Cross-Sectional Studies | Health Expenditures | Humans | Insurance, Health | Male | Nigeria [SUMMARY]
[CONTENT] Community-Based Health Insurance | Cross-Sectional Studies | Health Expenditures | Humans | Insurance, Health | Male | Nigeria [SUMMARY]
[CONTENT] health insurance programme | catastrophic health expenditure | healthcare expenses | universal health coverage | sustainable health insurance [SUMMARY]
[CONTENT] health insurance programme | catastrophic health expenditure | healthcare expenses | universal health coverage | sustainable health insurance [SUMMARY]
[CONTENT] health insurance programme | catastrophic health expenditure | healthcare expenses | universal health coverage | sustainable health insurance [SUMMARY]
[CONTENT] health insurance programme | catastrophic health expenditure | healthcare expenses | universal health coverage | sustainable health insurance [SUMMARY]
[CONTENT] health insurance programme | catastrophic health expenditure | healthcare expenses | universal health coverage | sustainable health insurance [SUMMARY]
[CONTENT] health insurance programme | catastrophic health expenditure | healthcare expenses | universal health coverage | sustainable health insurance [SUMMARY]
[CONTENT] health | enrollees | facilities | healthcare | insurance | suspension | kchip | health insurance | kwara | community [SUMMARY]
[CONTENT] health | enrollees | facilities | healthcare | insurance | suspension | kchip | health insurance | kwara | community [SUMMARY]
[CONTENT] health | enrollees | facilities | healthcare | insurance | suspension | kchip | health insurance | kwara | community [SUMMARY]
[CONTENT] health | enrollees | facilities | healthcare | insurance | suspension | kchip | health insurance | kwara | community [SUMMARY]
[CONTENT] health | enrollees | facilities | healthcare | insurance | suspension | kchip | health insurance | kwara | community [SUMMARY]
[CONTENT] health | enrollees | facilities | healthcare | insurance | suspension | kchip | health insurance | kwara | community [SUMMARY]
[CONTENT] health | kchip | healthcare | insurance | health insurance | state | clients | programme | health insurance programme | insurance programme [SUMMARY]
[CONTENT] selected | data | enrollees | lgas | facilities | health | interviews | study | qualitative | quantitative [SUMMARY]
[CONTENT] facilities | suspension | pay | enrollees | table | reported | ability pay | ability | healthcare | staff [SUMMARY]
[CONTENT] scheme | health | health insurance scheme | insurance scheme | health insurance | insurance | community health insurance scheme | health intervention | impactful health intervention | community [SUMMARY]
[CONTENT] health | enrollees | facilities | health insurance | insurance | scheme | healthcare | kchip | kwara | suspension [SUMMARY]
[CONTENT] health | enrollees | facilities | health insurance | insurance | scheme | healthcare | kchip | kwara | suspension [SUMMARY]
[CONTENT] Kwara State | Nigeria | 2016 ||| [SUMMARY]
[CONTENT] Kwara State | Nigeria | 2018 | 600 | Kwara community health insurance clients | 24 | 29 ||| [SUMMARY]
[CONTENT] 95.3% ||| 95.8% | 67% ||| 63.3% | 34.7% | 11.8% ||| 1.77 | Kwara Community Health Insurance Programme | KCHIP [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Kwara State | Nigeria | 2016 ||| ||| Kwara State | Nigeria | 2018 | 600 | Kwara community health insurance clients | 24 | 29 ||| ||| 95.3% ||| 95.8% | 67% ||| 63.3% | 34.7% | 11.8% ||| 1.77 | Kwara Community Health Insurance Programme | KCHIP ||| ||| [SUMMARY]
[CONTENT] Kwara State | Nigeria | 2016 ||| ||| Kwara State | Nigeria | 2018 | 600 | Kwara community health insurance clients | 24 | 29 ||| ||| 95.3% ||| 95.8% | 67% ||| 63.3% | 34.7% | 11.8% ||| 1.77 | Kwara Community Health Insurance Programme | KCHIP ||| ||| [SUMMARY]
Biosynthesized silver and gold nanoparticles are potent antimycotics against opportunistic pathogenic yeasts and dermatophytes.
29440895
Epidemiologic observations indicate that the number of systemic fungal infections has increased significantly during the past decades; in human mycosis, however, cutaneous infections predominate, generating major public health concerns and providing much of the impetus for current attempts to develop novel and efficient agents against cutaneous mycosis-causing species. Innovative, environmentally benign and economic nanotechnology-based approaches have recently emerged, utilizing principally biological sources to produce nano-sized structures with unique antimicrobial properties. In line with this, our aim was to generate silver nanoparticles (AgNPs) and gold nanoparticles (AuNPs) by biological synthesis and to study the effect of the obtained nanoparticles on cutaneous mycosis-causing fungi and on human keratinocytes.
BACKGROUND
Cell-free extract of the red yeast Phaffia rhodozyma proved to be suitable for nanoparticle preparation and the generated AgNPs and AuNPs were characterized by transmission electron microscopy, dynamic light scattering and X-ray powder diffraction.
METHODS
Antifungal studies demonstrated that the biosynthesized silver particles were able to inhibit the growth of several opportunistic Candida or Cryptococcus species and were highly potent against filamentous Microsporum and Trichophyton dermatophytes. Among the tested species only Cryptococcus neoformans was susceptible to both AgNPs and AuNPs. Neither AgNPs nor AuNPs exerted toxicity on human keratinocytes.
RESULTS
Our results emphasize the therapeutic potential of such biosynthesized nanoparticles, since their biocompatibility to skin cells and their outstanding antifungal performance can be exploited for topical treatment and prophylaxis of superficial cutaneous mycosis.
CONCLUSION
[ "Antifungal Agents", "Basidiomycota", "Candida", "Cell Line", "Cell-Free System", "Dermatomycoses", "Drug Evaluation, Preclinical", "Dynamic Light Scattering", "Gold", "Humans", "Keratinocytes", "Metal Nanoparticles", "Microscopy, Electron, Transmission", "Silver", "Trichophyton" ]
5798539
Introduction
Due to social and global environmental changes, the aging population and the growing number of susceptible individuals (eg, patients with predisposing factors), the incidence of superficial mycosis has increased over the past few years, causing health problems world-wide.1,2 Dermatomycosis usually manifests as itchy, sore skin rashes and tinea on the toes, the inner thighs or groin, leading to flaking and blistering of the affected area. Fungi may spread to the scalp or nails, causing hair loss and thickened or deformed fingernails. The etiological agents behind dermatomycosis are mostly filamentous dermatophytes (eg, Microsporum spp., Trichophyton spp. or Epidermophyton spp.); however, several opportunistic yeast species such as Candida or Cryptococcus can also invade keratinized tissues.1,3 In the case of cutaneous candidiasis, the origin is mostly endogenous, as the causative agents (ie, Candida spp.) are resident or transient members of the human dermal microflora. Cryptococcus neoformans can induce meningitis in immunocompromised patients; however, due to systemic dissemination of yeast cells in such patients, skin lesions may also appear.4 In addition to this secondary variant, a primary form of cutaneous cryptococcosis has recently been identified, where the pathogen enters the body directly through skin injury.5 Regardless of the distinct origins of Cr. neoformans infections, this pathogen can cause life-threatening systemic complications in the host.4,5 Therefore, there is an inevitable and urgent medical need to find novel and efficient agents to defeat opportunistic pathogens and cutaneous mycosis-causing species. To achieve this goal, an innovative, environmentally benign and economic approach is required. A solution that fulfills these demands could be based on nanotechnology, since owing to the abundance of recent scientific advancements in this field, it is now possible to design and develop nano-sized structures with unique properties tailored for specific applications (eg, in optics, in electronics, in catalysis, and in household items, as well as in medicine).6 In recent years, gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs),7 in particular, have been in the focus of increasing interest due to their simple synthesis and desirable biological activities (ie, antitumor, antibacterial, antiviral and antifungal effects).8,9 We have reported that the cytotoxic features of AgNPs can be exploited to kill p53 tumor suppressor-deficient10 as well as multidrug-resistant cancer cells,11 providing some further details to the mechanism of AgNP-induced antitumor actions.
In recent years, several studies have also demonstrated that AgNPs, synthesized either by conventional chemical reduction methods, by physical techniques or by different biological entities, exhibit potent inhibitory effects against Gram positive and Gram negative bacterial species and induce cytotoxicity in various pathogenic fungi.8,12 Green alternatives to harsh reducing chemicals and environmentally benign capping agents have also been developed to reduce the costs of nanoparticle production and to minimize the generation of hazardous wastes during the synthetic procedures.13 In a comprehensive study on AgNPs obtained by chemical reduction using coffee and green tea extracts, we proved the remarkable antimicrobial efficiency of such particles,14 and we showed evidence that the green material used for AgNP synthesis can largely define the physical, chemical and biological characteristics of the produced nanomaterial.14 Owing to widespread applications, there is an ever-growing demand for AgNPs and AuNPs, while the challenges of their economical, environmentally safe and sustainable production should also be addressed. A rapidly progressing area of nanobiotechnology is microbe-assisted nanoparticle synthesis,15 where reactions mediated by bacterial strains, yeasts and algae are exploited for nanoparticle generation.16,17 One such study demonstrated that the astaxanthin-containing green alga Chlorella vulgaris is a useful agent for gold nanoparticle synthesis.18 Based on this observation we hypothesized that Phaffia rhodozyma (perfect state Xanthophyllomyces dendrorhous), a basidiomycetous red yeast with high astaxanthin content,19 might also be a potential candidate for microbe-assisted nanoparticle synthesis. Therefore, the aim of this present study was to investigate the suitability of P. rhodozyma cell-free extract for the preparation of AgNPs and AuNPs. Since our synthesis method proved to be successful, the as-prepared nanoparticles were characterized, and a complex biological screening was carried out to determine their biological activity, in which, besides toxicity to human skin-derived cells, the antifungal efficiency of the biosynthesized nanoparticles was also assessed, with a special emphasis on inhibitory features against opportunistic pathogenic yeasts and dermatophytes.
null
null
Results
Characterization of nanoparticles: Particle morphology and size of the synthesized AgNPs and AuNPs were determined by TEM image analysis. According to the obtained TEM micrographs, AgNPs were quasi-spherical, well separated from each other and only minor polydispersity could be observed (Figure 1A). The average size of the AgNPs proved to be 4.1±1.44 nm. Gold particles were almost monodisperse, well separated and spherical, with a narrow size distribution (2.22±0.7 nm). DLS measurements were also carried out to determine hydrodynamic particle sizes (Figure 1B). The average size of AgNPs was between 5 and 9 nm in diameter, whereas the size of the gold nanoparticles was around 4–7 nm. The formation of metal nanoparticles was ascertained by UV-VIS measurements (Figure 1C). The typical surface plasmon resonance (SPR) band of AgNPs appeared around 455 nm. UV-VIS spectra of AuNPs showed an absorption peak maximum at 541 nm characteristic of SPR, which further supported the formation of metallic gold nanoparticles. In order to confirm the crystalline nature of the nanoparticles, XRD analysis was performed. The XRD pattern of the AgNP sample (Figure 1D) exhibited four distinct diffraction peaks appearing at 2θ =38.4°, 44.4°, 64.7° and 77.5°, corresponding to the (111), (200), (220) and (311) planes of the face-centered cubic lattice structure of metallic silver21 (Joint Committee on Powder Diffraction Standards [JCPDS] No 87-0717). Crystallinity of the as-synthesized AuNPs was also investigated. Gold nanocrystals exhibited four distinct peaks at 2θ =39.4°, 44.5°, 65.7° and 78.6°, corresponding to the (111), (200), (220) and (311) Bragg reflections of the face-centered cubic structure of metallic gold (Figure 1D) (JCPDS No 4-0784).22
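As a quick consistency check on the XRD indexing above, each 2θ peak can be converted to a d-spacing with Bragg's law and then to a cubic lattice parameter via a = d·sqrt(h² + k² + l²). The Python sketch below assumes Cu Kα radiation (λ ≈ 1.5406 Å), which the text does not state explicitly, so treat the wavelength as an assumption; the peak positions are the silver values quoted above.

import math

WAVELENGTH = 1.5406  # angstrom; Cu K-alpha assumed (not stated in the text)

# Reported AgNP peaks (2-theta in degrees) and their fcc (hkl) assignments
silver_peaks = [(38.4, (1, 1, 1)), (44.4, (2, 0, 0)), (64.7, (2, 2, 0)), (77.5, (3, 1, 1))]

for two_theta, (h, k, l) in silver_peaks:
    theta = math.radians(two_theta / 2)
    d = WAVELENGTH / (2 * math.sin(theta))   # Bragg's law: lambda = 2*d*sin(theta)
    a = d * math.sqrt(h**2 + k**2 + l**2)    # cubic lattice parameter from d(hkl)
    print(f"2theta={two_theta:5.1f}  (hkl)={h}{k}{l}  d={d:.3f} A  a={a:.3f} A")

# All four peaks give a lattice parameter close to ~4.08 A, the accepted value for
# fcc silver, supporting the (111)/(200)/(220)/(311) assignment.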
Toxicity on human cells: We assessed the cell viability of human skin-derived keratinocyte HaCaT cells treated with the biologically synthesized AgNPs and AuNPs. MTT assays revealed that the nanoparticles, both AgNPs and AuNPs, applied in 0–60 µg/mL concentrations, did not reduce the viability of HaCaT cells (Figure 2A). On the other hand, treatments with the well-known therapeutic agent cisplatin caused considerable cytotoxicity, since significant loss of keratinocyte metabolic activity was observed already at a 5 µg/mL cisplatin concentration (Figure 2A). Cytotoxicity of AgNPs and AuNPs was also analyzed by the crystal violet staining method (Figure 2B). AgNPs and AuNPs generated by P. rhodozyma cell-free extract were non-toxic to the applied human cells in the tested concentration range. Conversely, cisplatin proved to be significantly toxic to human HaCaT cells. These results indicate that these biogenic AgNPs and AuNPs exhibit no toxicity and are therefore both biocompatible to human keratinocytes.
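Viability in MTT-type assays is usually expressed relative to the untreated control after background subtraction. The snippet below is a generic illustration of that normalization, not the authors' analysis pipeline; the absorbance values are invented for the example.

def percent_viability(a_treated, a_untreated, a_blank):
    """Viability (%) relative to the untreated control, after subtracting the blank
    (medium plus reagent without cells)."""
    return 100.0 * (a_treated - a_blank) / (a_untreated - a_blank)

# Invented absorbance readings purely for illustration
blank, untreated = 0.05, 1.20
for label, a in [("AgNP 30 ug/mL", 1.18), ("AuNP 30 ug/mL", 1.16), ("cisplatin 5 ug/mL", 0.55)]:
    print(f"{label}: {percent_viability(a, untreated, blank):.0f}% viability")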
Efficiency against pathogenic yeasts
The antifungal activity of the prepared AgNPs and AuNPs was tested against various Candida and Cryptococcus species. In the agar diffusion assay, AgNPs inhibited the growth of all the examined strains except Candida tropicalis (Figure 3A). Interestingly, Cr. neoformans was sensitive to AuNPs as well, whereas the other species were resistant to AuNPs (Figure 3A). Since Cr. neoformans appeared susceptible to both nanoparticles, the antifungal effect of AgNPs and AuNPs was assessed quantitatively on this strain. Cr. neoformans cells were exposed to different concentrations of AgNPs and AuNPs for 24 hours, and cell viability was then determined by colony forming assay (Figure 3B). Treatment with 1 µg/mL of either AgNPs or AuNPs already caused a significant loss of cell viability, and the number of viable Cr. neoformans cells dropped below the detection limit after treatment with 30 µg/mL of either nanoparticle (Figure 3B). To further test the viability of nanoparticle-treated Cr. neoformans cells, calcein-AM staining was applied. As expected, increasing concentrations of either AgNPs or AuNPs gradually diminished the percentage of calcein-positive cells. After treatment with 10 µg/mL AgNPs, approximately 50% of the cells were alive, while 83% of the examined cells were dead when the AgNP concentration was raised to 30 µg/mL (Figure 3C). A similar pattern was observed when the yeast cells were exposed to AuNPs, as treatment with 30 µg/mL decreased the proportion of viable cells to approximately 50% (Figure 3C). The time dependence of the nanoparticle-induced antifungal effect was also determined. For this purpose, Cr. neoformans cells were exposed to 10 µg/mL AgNPs or AuNPs, and the number of colonies was counted after 1-, 3-, 10- and 24-hour treatments. Exposure to AgNPs for 1 hour caused a 50% decrease in the number of CFU, and 10-hour treatment with either AgNPs or AuNPs reduced the number of viable cells below 10% (Figure 3D).
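The CFU-based viability values in Figure 3B and 3D come from the tenfold dilution series described in the methods, in which 25 µL of each dilution is plated and the surviving fraction is expressed relative to the untreated control. A minimal sketch of that back-calculation is given below; the colony counts and dilution factors are illustrative assumptions, not measured values.

    def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.025):
        """Back-calculate CFU/mL from a countable plate of a tenfold dilution series."""
        return colonies / plated_volume_ml * dilution_factor

    # Illustrative counts from a countable dilution (hypothetical numbers)
    control_cfu = cfu_per_ml(colonies=120, dilution_factor=1e4)   # untreated Cr. neoformans
    treated_cfu = cfu_per_ml(colonies=60,  dilution_factor=1e4)   # e.g. 1 h, 10 ug/mL AgNP

    survival = 100.0 * treated_cfu / control_cfu
    print(f"control: {control_cfu:.2e} CFU/mL, treated: {treated_cfu:.2e} CFU/mL, survival: {survival:.0f}%")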
Anti-dermatophyte activity
All three examined dermatophyte species were sensitive to AgNP treatment in the growth inhibition assay (Figure 4). AgNPs at 30 µg/mL significantly inhibited colony growth: the colony diameters were markedly smaller, corresponding to approximately 80% growth inhibition for each strain (Figure 4). In contrast, none of the dermatophyte species responded to AuNPs applied at 10 or 30 µg/mL. Even when the AuNP concentration was raised to 300 µg/mL, no inhibition was detected (data not shown), indicating that AuNPs are not suitable for inhibiting the growth of the examined cutaneous mycosis-causing dermatophyte fungi.
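The inhibition reported for AgNPs is derived from colony diameters measured on nanoparticle-supplemented PDA plates relative to untreated control plates. The short sketch below illustrates one common way to express such measurements as percent inhibition of radial growth; the diameters are hypothetical, and subtracting the 3 mm inoculum plug is an assumption of this sketch rather than a stated step of the study.

    def percent_growth_inhibition(control_diam_mm, treated_diam_mm, plug_diam_mm=3.0):
        """Percent inhibition of radial growth relative to the untreated control.

        The 3 mm inoculum plug is subtracted from both diameters; whether the
        original study applied this correction is an assumption of this sketch.
        """
        control_growth = control_diam_mm - plug_diam_mm
        treated_growth = treated_diam_mm - plug_diam_mm
        return 100.0 * (control_growth - treated_growth) / control_growth

    # Hypothetical colony diameters after 4 days on PDA (mm)
    print(f"{percent_growth_inhibition(control_diam_mm=43.0, treated_diam_mm=11.0):.0f}% inhibition")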
Conclusion
We found that the cell-free extract of P. rhodozyma is a suitable material for the synthesis of biogenic AgNPs and AuNPs. We demonstrated that the AgNPs are potent against cutaneous mycosis-causing dermatophytes and several opportunistic yeast species, that both particle types inhibit Cr. neoformans, and that neither is toxic to human keratinocytes. Considering our data in the context of current clinical results, the biocompatibility with skin cells and the strong antifungal performance of the biosynthesized nanoparticles could be exploited in the topical treatment of cutaneous mycoses or as a prophylactic measure against recurrence of the infection.
[ "Preparation of P. rhodozyma cell-free extract", "Synthesis of AgNPs and AuNPs", "Characterization of nanoparticles", "Cell culture", "Cell viability and toxicity assays", "Screening of antifungal activity", "Yeast viability assay", "Characterization of nanoparticles", "Toxicity on human cells", "Efficiency against pathogenic yeasts", "Anti-dermatophyte activity", "Conclusion" ]
[ "P. rhodozyma American Type Culture Collection (ATCC) 24203 cells grown on 2× yeast extract peptone dextrose (YPD; Sigma-Aldrich Co., St Louis, MO, USA) (2% D-glucose, 2% peptone, 1% yeast extract and 2% agar) plates at 22°C for 5 days were collected by cell scraper. Ten grams of wet weight biomass were suspended in 200 mL sterile distilled water and the cells were disrupted in Bead Beater (BioSpec Products, Inc., Bartlesville, OK, USA) using glass beads with 0.5–1 mm diameter. Cell debris was removed by centrifugation at 18,000 g for 15 min in Sorvall RC-5B centrifuge (Thermo Fisher Scientific, Waltham, MA, USA) at 4°C. The supernatant was filtered through 0.45 µm pore sized nitrocellulose membrane (Merck KGaA, Darmstadt, Germany). The filtrate was used as a reducing agent and stabilizer for the synthesis of nanoparticles.", "After optimization of the nanoparticle synthesis process, the following protocol was used. For preparation of AgNP, 90 mL cell-free extract (pH 6.7) was mixed with 10 mL of 1 M AgNO3 suspension. Gold nanoparticles were synthesized similarly, by adding 10 mL 1 M HAuCl4 solution to 90 mL cell-free extract. Both suspensions were constantly stirred using an orbital shaker at 22°C for 24 hours. Nanoparticles were pelleted by centrifugation for 15 min at 18,000 g in Sorvall RC-5B centrifuge at 4°C then washed twice with sterile distilled water. The final colloid suspensions were characterized and stored at 4°C.", "The morphological features of the synthesized AgNPs and AuNPs were analyzed by transmission electron microscopy (TEM) using a FEI Tecnai G2 20 microscope (FEI Corporate Headquarters, Hillsboro, OR, USA) at an acceleration voltage of 200 kV. The hydrodynamic particle size distribution of the samples was assessed by dynamic light scattering (DLS) analysis using a Zetasizer Nano Instrument (Malvern Instruments, Malvern, UK). The crystal structures were examined by X-ray powder diffraction (XRD), where the scans were performed with a Rigaku MiniFlex II powder diffractometer (Rigaku Corporation, Tokyo, Japan) using Cu Kα radiation. A scanning rate of 2° min−1 in the 20°–80° 2θ range was used. Finally, the optical properties of the obtained AuNPs and AgNPs were studied using an Ocean Optics 355 DH-2000-BAL ultraviolet-visible (UV-VIS) spectrophotometer (Halma PLC, Largo, FL, USA) in a 10-mm path length quartz cuvette. The absorbance spectra of nanoparticles were recorded within the range of 300–800 nm.", "HaCaT immortalized keratinocyte cell line from adult human skin was purchased from ATCC. HaCaT cells were maintained in Dulbecco’s Modified Eagle’s Medium (DMEM) containing 4.5 g/L glucose, supplemented with 10% fetal bovine serum, 2 mM L-glutamine, 0.01% streptomycin and 0.005% ampicillin. Cells were cultured under standard conditions in a 37°C incubator at 5% CO2 in 95% humidity.", "MTT mitochondrial activity assay (Sigma-Aldrich) was performed to measure cell viability. HaCaT cells were seeded into 96 well plates (10,000/wells) and treated with either AgNPs or AuNPs, or with cisplatin in different concentrations on the following day. After 24-hour treatments cells were washed with phosphate buffered saline (PBS) and incubated with culture medium containing 0.5 mg/mL MTT reagent for 1 hour at 37°C. Formazan crystals were solubilized in DMSO and extinction was measured at 570 nm using a Synergy HTX plate reader (Thermo Fisher Scientific). Absorption corresponding to the untreated control samples was considered as 100%. 
MTT assays were performed at least three times using four independent biological replicates.\nCytotoxicity of the synthesized nanoparticles was assessed by crystal violet staining as well. For this, HaCaT cells were seeded into 24 well plates and were left to grow until they reached confluence. Then cell layers were treated with nanoparticles or with cisplatin for 24 hours. After each treatment, cells were washed three times with PBS and fixed using methanol:acetone 70:30 mixture. Fixed cells were then stained using 0.5% crystal violet dissolved in 25% methanol, washed with distilled water then air-dried. Plates were photographed, and crystal violet was solubilized using 400 µL 10% acetic acid. From each well, 100 µL solution was transferred to 96 well plates then absorbance was determined at 590 nm using a the HTX microplate reader.", "The antifungal activity of the nanoparticles was tested against pathogenic yeasts as well as against various dermatophytes. The examined strains are listed in Table 1.\nFirst, agar diffusion assay was carried out as described previously.20 Briefly, 5 µL of the synthesized AuNP or AgNP suspensions (5 mg/mL) were loaded onto the surface of the plates seeded by the test strains. The plates were incubated at 30°C and the inhibition zones were determined after 24 hours.\nAnti-dermatophyte activity was tested on potato dextrose agar (PDA; VWR, Radnor, PA, USA) plates. Agar plugs of 3 mm diameter were cut from 4-day-old cultures and inoculated upside down onto the surface of PDA plates supplemented with AgNP (in 10 or 30 µg/mL concentration) or with AuNP (first in 10 or 30 µg/mL and later in 300 µg/mL concentrations). Subsequently, the plates were incubated for 4 days at 30°C and the diameter of the colonies was measured.\nAll experiments were carried out at least three times using three biological replicates.", "The number of colony forming units (CFU) was determined to assess the viability of nanoparticle-treated Cr. neoformans cells. Briefly, 4×106 yeast cells in Hanks’ Balanced Salt solution (HBSS) were exposed to either AgNP or AuNP in concentrations of 1, 5, 10, and 30 µg/mL for 24 hours at 30°C. After treatment, cells were washed and were resuspended in 1 mL sterile water. A tenfold dilution series was prepared and 25 µL aliquots from each dilution were spread onto YPD plates in triplicates. Non-treated cells were used as control. Plates were incubated at 30°C for 72 hours and the number of the colonies was counted. The experiments were carried out 3 times.\nFollowing AgNP or AuNP treatments, the viability of Cr. neoformans cells was examined by calcein-AM assay using flow cytometry. Briefly, 4×106 cells were exposed to 5, 10 or 30 µg/mL AgNPs or to 10 and 30 µg/mL AuNPs in HBSS at 30°C for 24 hours. After the treatments, cells were washed with sterile distilled water, suspended in 200 µL HBSS supplemented with 10 µg/mL verapamil (Sigma-Aldrich) and were incubated for 20 min at 30°C to block efflux transporter activity. Subsequently, calcein-AM (Thermo Fisher Scientific) was added in 10 µM concentration and the suspensions were further incubated in the dark at 30°C for 3 hours. Cells were washed with HBSS and the fluorescent intensity was measured by flow cytometer (FlowSight®, Amnis-EMD Millipore) using a 488-nm excitation laser.\nThe time-dependent effect of nanoparticle treatments was investigated by exposing 4×106\nCr. neoformans cells in 1 mL HBSS to AgNP or AuNP in 10 µg/mL concentrations at 30°C. 
After 1-, 3-, 10- and 24-hour treatments, aliquots of 100 µL each were taken, and after a washing step were resuspended in 1 mL sterile water. Then tenfold dilution series was prepared and from each dilution 25 µL aliquots were spread onto the surface of YPD plates. Non-treated cells were used as control. The plates were incubated at 30°C for 72 hours and the number of the colonies was counted. The experiments were carried out three times, in three biological replicates.", "Particle morphology and size of the synthesized AgNPs and AuNPs were determined by TEM image analysis. According to the obtained TEM micrographs, AgNPs were quasi-spherical, well separated from each other and minor polydispersity could be observed (Figure 1A). The average size of the AgNPs proved to be 4.1±1.44 nm. Gold particles were almost monodispersed, well separated and spherical with a narrow size distribution (2.22±0.7 nm). DLS measurements were also carried out to determine hydrodynamic particle sizes (Figure 1B). The average size of AgNPs was between 5 and 9 nm diameter, whereas the size of gold nanoparticles was around 4–7 nm. The formation of metal nanoparticles was ascertained by UV-VIS measurements (Figure 1C). The typical surface plasmon resonance (SPR) band of AgNPs appeared around 455 nm. UV-VIS spectra of AuNPs showed absorption peak maxima at 541 nm characteristic to SPR, which further supported the formation of metallic gold nanoparticles. In order to confirm the crystalline nature of the nanoparticles, XRD analysis was performed. The XRD pattern of the AgNP sample (Figure 1D) exhibited four identical diffraction peaks appearing at 2θ =38.4°, 44.4°, 64.7° and 77.5° corresponding to (111), (200), (220) and (311) planes of the face-centered cubic lattice structure of metallic silver21 (Joint Committee on Powder Diffraction Standards [JCPDS] No 87-0717). Crystallinity of the as-synthesized AuNPs was also investigated. Gold nanocrystals exhibited four distinct peaks at 2θ =39.4°, 44.5°, 65.7° and 78.6° corresponding to (111), (200), (220) and (310) Bragg’s reflection based on the face-centered cubic structure of metallic gold (Figure 1D) (JCPDS No 4-0784).22", "We assessed the cell viability of human skin-derived keratinocyte HaCaT cells treated with biologically synthesized AgNPs and AuNPs. MTT assays revealed that the nanoparticles, both AgNPs and AuNPs, applied in 0–60 µg/mL concentrations, did not reduce the viability of HaCaT cells (Figure 2A). On the other hand, treatments with the well-known therapeutic agent cisplatin caused considerable cytotoxicity, since significant loss of keratinocyte metabolic activity was observed already at 5 µg/mL cisplatin concentration (Figure 2A).\nCytotoxicity of AgNPs and AuNPs was analyzed by the crystal violet staining method (Figure 2B). AgNPs and AuNPs generated by P. rhodozyma cell-free extract were non-toxic to the applied human cells in the tested concentration range. Conversely, cisplatin proved to be significantly toxic to human HaCaT cells. These results indicate that these biogenic AgNPs and AuNPs exhibit no toxicity therefore both are biocompatible to human keratinocytes.", "The antifungal activity of the prepared AgNPs and AuNPs was tested against various Candida and Cryptococcus species. AgNPs inhibited the growth of all the examined strains in agar diffusion assay except that of Candida tropicalis (Figure 3A). Interestingly, Cr. neoformans was sensitive to AuNPs as well, whereas other species exhibited resistance against AuNPs (Figure 3A).\nSince Cr. 
neoformans appeared susceptible to both nanoparticles, the antifungal effect of AgNPs and AuNPs was assessed more quantitatively on this particular strain. Hence, Cr. neoformans cells were subjected to different concentrations of AgNPs and AuNPs for 24 hours then cell viability was determined using colony forming assay (Figure 3B). Treatments with 1 µg/mL of either AgNPs or AuNPs already resulted in a significant loss of cell viability, whereas the number of viable Cr. neoformans cells dropped below the detectable level following 30 µg/mL of AuNP or AgNP administrations (Figure 3B).\nTo further test the viability of nanoparticle-treated Cr. neoformans cells, calcein-AM staining procedure was applied. As expected, increasing concentrations of either AgNPs or AuNPs gradually diminished the percentage of calcein-positive cells. After 10 µg/mL AgNP administration, approximately 50% of the cells were alive, while 83% of the examined cells were dead when AgNP concentration was elevated to 30 µg/mL (Figure 3C). A similar pattern was observed when yeast cells were exposed to AuNPs, since nanoparticle treatments in 30 µg/mL concentrations decreased the percentage of the viable cells to approximately 50% (Figure 3C).\nTime dependence of the nanoparticle-induced antifungal effect was also determined. For this purpose, Cr. neoformans cells were again subjected to 10 µg/mL AgNPs or AuNPs and after 1-, 3-, 10- and 24-hour treatments the number of the colonies was counted. Exposure to AgNPs for 1 hour caused a 50% decrease in the number of the CFU, but 10-hour treatments with either AgNPs or AuNPs reduced the number of viable cells below 10% (Figure 3D).", "All the three examined dermatophyte species were sensitive to AgNP treatments in growth inhibition assay (Figure 4). AgNPs in 30 µg/mL concentration induced significant inhibition of colony growth, as the diameters of the colonies were markedly smaller, indicating an approximately 80% growth inhibition for each strain (Figure 4). On the other hand, none of the dermatophyte species reacted to AuNPs, when applied in 10 or 30 µg/mL concentrations. Therefore, AuNP concentration was elevated up to 300 µg/mL but still no inhibition was detected (data not shown), indicating that AuNPs are not suitable for the growth inhibition of the examined cutaneous mycosis-causing dermatophyte fungi.", "We found that the cell-free extract of P. rhodozyma is a suitable material for the synthesis of biogenic AgNPs and AuNPs. We demonstrated that these nanoparticles are potent against cutaneous mycosis-causing dermatophytes and opportunistic yeast species and are non-toxic to human keratinocytes. Considering our data in the context of the current clinical results, the biocompatibility to skin cells and the outstanding antifungal performance of our biosynthesized nanoparticles might be exploited during topical treatment of cutaneous mycosis or as a prophylactic treatment against the recurrence of the infection." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Preparation of P. rhodozyma cell-free extract", "Synthesis of AgNPs and AuNPs", "Characterization of nanoparticles", "Cell culture", "Cell viability and toxicity assays", "Screening of antifungal activity", "Yeast viability assay", "Results", "Characterization of nanoparticles", "Toxicity on human cells", "Efficiency against pathogenic yeasts", "Anti-dermatophyte activity", "Discussion", "Conclusion" ]
[ "Due to social and global environmental changes, the aging population and the growing number of susceptible individuals (eg, patients with predisposing factors), the incidence of superficial mycosis has increased over the past few years causing health problems world-wide.1,2 Dermatomycosis usually manifests as itchy, sore skin rashes and tinea on the toes, the inner thighs or groin; leading to flaking and blistering of the affected area. Fungi may spread to the scalp or nails causing hair loss and thickened or deformed fingernails. The etiological agents behind dermatomycosis are mostly filamentous dermatophytes (eg, Microsporum spp., Trichophyton spp. or Epidermophyton spp.), however, several opportunistic yeast species such as Candida or Cryptococcus can invade keratinized tissues.1,3 In case of cutaneous candidiasis, the origin is mostly endogenous as the causative agents (ie, Candida spp.) are the resident or transient members of the human dermal microflora. Cryptococcus neoformans can induce meningitis in immunocompromised patients, however, due to systemic dissemination of yeast cells in such patients, skin lesions may also appear.4 In addition to this secondary variant, a primary form of cutaneous cryptococcosis has recently been identified, where the pathogen enters the body directly through skin injury.5 Regardless of the distinct origins of Cr. neoformans infections, this pathogen can cause life-threatening systemic complications in the host.4,5 Therefore, there is an inevitable and urgent medical need to find novel and efficient agents to defeat opportunistic pathogens and cutaneous mycosis-causing species. To achieve this goal, an innovative, environmentally benign and economic approach is required.\nA solution that fulfills these demands could be based on nanotechnology, since owing to the abundance of recent scientific advancements in this field, it is now possible to design and develop nano-sized structures with unique properties tailored for specific applications (eg, in optics, in electronics, in catalysis, and in household items, as well as in medicine).6 In recent years, gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs),7 in particular, have been in the focus of increasing interest due to their simple synthesis and desirable biological activities (ie, antitumor, antibacterial, antiviral and antifungal effects).8,9 We have reported that the cytotoxic features of AgNPs can be exploited to kill p53 tumor suppressor-deficient10 as well as multidrug-resistant cancer cells,11 providing some further details to the mechanism of AgNP-induced antitumor actions. 
In recent years, several studies have also demonstrated that AgNPs, synthesized either by conventional chemical reduction methods, physical techniques or by different biological entities, exhibit potent inhibitory effects against Gram positive and Gram negative bacterial species and induce cytotoxicity in various pathogenic fungi.8,12 Green alternatives to harsh reducing chemicals and environmentally benign capping agents have also been developed to reduce the costs of nanoparticle production and to minimize the generation of hazardous wastes upon the synthetic procedures.13 In a comprehensive study on AgNPs obtained by chemical reduction using coffee and green tea extracts, we proved the remarkable antimicrobial efficiency of such particles,14 and we showed evidence that the green material used for AgNP synthesis can largely define the physical, chemical and biological characteristics of the produced nanomaterial.14\nOwing to widespread applications, there is an ever-growing demand for AgNPs and AuNPs, while the challenges of their economical, environmentally safe and sustainable production should also be addressed. A rapidly progressing area of nanobiotechnology is the microbe-assisted nanoparticle synthesis,15 where bacterial strains, yeast- and alga-mediated reactions are exploited for nanoparticle generation.16,17 One such study demonstrated that the astaxanthin-containing green alga Chlorella vulgaris is a useful agent for gold nanoparticle synthesis.18 Based on this observation we hypothesized that Phaffia rhodozyma (perfect state Xanthophyllomyces dendrorhous), a basidiomycetous red yeast with high astaxanthin content,19 might also be a potential candidate for microbe-assisted nanoparticle synthesis. Therefore, the aim of this present study was to investigate the suitability of P. rhodozyma cell-free extract for the preparation of AgNPs and AuNPs. Since our synthesis method proved to be successful, the as-prepared nanoparticles were characterized, and a complex biological screening was carried out to determine their biological activity, where beside toxicity to human skin-derived cells, the antifungal efficiency of the biosynthesized nanoparticles was also assessed with a special emphasis on inhibitory features against opportunistic pathogenic yeasts and dermatophytes.", " Preparation of P. rhodozyma cell-free extract P. rhodozyma American Type Culture Collection (ATCC) 24203 cells grown on 2× yeast extract peptone dextrose (YPD; Sigma-Aldrich Co., St Louis, MO, USA) (2% D-glucose, 2% peptone, 1% yeast extract and 2% agar) plates at 22°C for 5 days were collected by cell scraper. Ten grams of wet weight biomass were suspended in 200 mL sterile distilled water and the cells were disrupted in Bead Beater (BioSpec Products, Inc., Bartlesville, OK, USA) using glass beads with 0.5–1 mm diameter. Cell debris was removed by centrifugation at 18,000 g for 15 min in Sorvall RC-5B centrifuge (Thermo Fisher Scientific, Waltham, MA, USA) at 4°C. The supernatant was filtered through 0.45 µm pore sized nitrocellulose membrane (Merck KGaA, Darmstadt, Germany). The filtrate was used as a reducing agent and stabilizer for the synthesis of nanoparticles.\nP. rhodozyma American Type Culture Collection (ATCC) 24203 cells grown on 2× yeast extract peptone dextrose (YPD; Sigma-Aldrich Co., St Louis, MO, USA) (2% D-glucose, 2% peptone, 1% yeast extract and 2% agar) plates at 22°C for 5 days were collected by cell scraper. 
Ten grams of wet weight biomass were suspended in 200 mL sterile distilled water and the cells were disrupted in Bead Beater (BioSpec Products, Inc., Bartlesville, OK, USA) using glass beads with 0.5–1 mm diameter. Cell debris was removed by centrifugation at 18,000 g for 15 min in Sorvall RC-5B centrifuge (Thermo Fisher Scientific, Waltham, MA, USA) at 4°C. The supernatant was filtered through 0.45 µm pore sized nitrocellulose membrane (Merck KGaA, Darmstadt, Germany). The filtrate was used as a reducing agent and stabilizer for the synthesis of nanoparticles.\n Synthesis of AgNPs and AuNPs After optimization of the nanoparticle synthesis process, the following protocol was used. For preparation of AgNP, 90 mL cell-free extract (pH 6.7) was mixed with 10 mL of 1 M AgNO3 suspension. Gold nanoparticles were synthesized similarly, by adding 10 mL 1 M HAuCl4 solution to 90 mL cell-free extract. Both suspensions were constantly stirred using an orbital shaker at 22°C for 24 hours. Nanoparticles were pelleted by centrifugation for 15 min at 18,000 g in Sorvall RC-5B centrifuge at 4°C then washed twice with sterile distilled water. The final colloid suspensions were characterized and stored at 4°C.\nAfter optimization of the nanoparticle synthesis process, the following protocol was used. For preparation of AgNP, 90 mL cell-free extract (pH 6.7) was mixed with 10 mL of 1 M AgNO3 suspension. Gold nanoparticles were synthesized similarly, by adding 10 mL 1 M HAuCl4 solution to 90 mL cell-free extract. Both suspensions were constantly stirred using an orbital shaker at 22°C for 24 hours. Nanoparticles were pelleted by centrifugation for 15 min at 18,000 g in Sorvall RC-5B centrifuge at 4°C then washed twice with sterile distilled water. The final colloid suspensions were characterized and stored at 4°C.\n Characterization of nanoparticles The morphological features of the synthesized AgNPs and AuNPs were analyzed by transmission electron microscopy (TEM) using a FEI Tecnai G2 20 microscope (FEI Corporate Headquarters, Hillsboro, OR, USA) at an acceleration voltage of 200 kV. The hydrodynamic particle size distribution of the samples was assessed by dynamic light scattering (DLS) analysis using a Zetasizer Nano Instrument (Malvern Instruments, Malvern, UK). The crystal structures were examined by X-ray powder diffraction (XRD), where the scans were performed with a Rigaku MiniFlex II powder diffractometer (Rigaku Corporation, Tokyo, Japan) using Cu Kα radiation. A scanning rate of 2° min−1 in the 20°–80° 2θ range was used. Finally, the optical properties of the obtained AuNPs and AgNPs were studied using an Ocean Optics 355 DH-2000-BAL ultraviolet-visible (UV-VIS) spectrophotometer (Halma PLC, Largo, FL, USA) in a 10-mm path length quartz cuvette. The absorbance spectra of nanoparticles were recorded within the range of 300–800 nm.\nThe morphological features of the synthesized AgNPs and AuNPs were analyzed by transmission electron microscopy (TEM) using a FEI Tecnai G2 20 microscope (FEI Corporate Headquarters, Hillsboro, OR, USA) at an acceleration voltage of 200 kV. The hydrodynamic particle size distribution of the samples was assessed by dynamic light scattering (DLS) analysis using a Zetasizer Nano Instrument (Malvern Instruments, Malvern, UK). The crystal structures were examined by X-ray powder diffraction (XRD), where the scans were performed with a Rigaku MiniFlex II powder diffractometer (Rigaku Corporation, Tokyo, Japan) using Cu Kα radiation. 
A scanning rate of 2° min−1 in the 20°–80° 2θ range was used. Finally, the optical properties of the obtained AuNPs and AgNPs were studied using an Ocean Optics 355 DH-2000-BAL ultraviolet-visible (UV-VIS) spectrophotometer (Halma PLC, Largo, FL, USA) in a 10-mm path length quartz cuvette. The absorbance spectra of nanoparticles were recorded within the range of 300–800 nm.\n Cell culture HaCaT immortalized keratinocyte cell line from adult human skin was purchased from ATCC. HaCaT cells were maintained in Dulbecco’s Modified Eagle’s Medium (DMEM) containing 4.5 g/L glucose, supplemented with 10% fetal bovine serum, 2 mM L-glutamine, 0.01% streptomycin and 0.005% ampicillin. Cells were cultured under standard conditions in a 37°C incubator at 5% CO2 in 95% humidity.\nHaCaT immortalized keratinocyte cell line from adult human skin was purchased from ATCC. HaCaT cells were maintained in Dulbecco’s Modified Eagle’s Medium (DMEM) containing 4.5 g/L glucose, supplemented with 10% fetal bovine serum, 2 mM L-glutamine, 0.01% streptomycin and 0.005% ampicillin. Cells were cultured under standard conditions in a 37°C incubator at 5% CO2 in 95% humidity.\n Cell viability and toxicity assays MTT mitochondrial activity assay (Sigma-Aldrich) was performed to measure cell viability. HaCaT cells were seeded into 96 well plates (10,000/wells) and treated with either AgNPs or AuNPs, or with cisplatin in different concentrations on the following day. After 24-hour treatments cells were washed with phosphate buffered saline (PBS) and incubated with culture medium containing 0.5 mg/mL MTT reagent for 1 hour at 37°C. Formazan crystals were solubilized in DMSO and extinction was measured at 570 nm using a Synergy HTX plate reader (Thermo Fisher Scientific). Absorption corresponding to the untreated control samples was considered as 100%. MTT assays were performed at least three times using four independent biological replicates.\nCytotoxicity of the synthesized nanoparticles was assessed by crystal violet staining as well. For this, HaCaT cells were seeded into 24 well plates and were left to grow until they reached confluence. Then cell layers were treated with nanoparticles or with cisplatin for 24 hours. After each treatment, cells were washed three times with PBS and fixed using methanol:acetone 70:30 mixture. Fixed cells were then stained using 0.5% crystal violet dissolved in 25% methanol, washed with distilled water then air-dried. Plates were photographed, and crystal violet was solubilized using 400 µL 10% acetic acid. From each well, 100 µL solution was transferred to 96 well plates then absorbance was determined at 590 nm using a the HTX microplate reader.\nMTT mitochondrial activity assay (Sigma-Aldrich) was performed to measure cell viability. HaCaT cells were seeded into 96 well plates (10,000/wells) and treated with either AgNPs or AuNPs, or with cisplatin in different concentrations on the following day. After 24-hour treatments cells were washed with phosphate buffered saline (PBS) and incubated with culture medium containing 0.5 mg/mL MTT reagent for 1 hour at 37°C. Formazan crystals were solubilized in DMSO and extinction was measured at 570 nm using a Synergy HTX plate reader (Thermo Fisher Scientific). Absorption corresponding to the untreated control samples was considered as 100%. MTT assays were performed at least three times using four independent biological replicates.\nCytotoxicity of the synthesized nanoparticles was assessed by crystal violet staining as well. 
For this, HaCaT cells were seeded into 24 well plates and were left to grow until they reached confluence. Then cell layers were treated with nanoparticles or with cisplatin for 24 hours. After each treatment, cells were washed three times with PBS and fixed using methanol:acetone 70:30 mixture. Fixed cells were then stained using 0.5% crystal violet dissolved in 25% methanol, washed with distilled water then air-dried. Plates were photographed, and crystal violet was solubilized using 400 µL 10% acetic acid. From each well, 100 µL solution was transferred to 96 well plates then absorbance was determined at 590 nm using a the HTX microplate reader.\n Screening of antifungal activity The antifungal activity of the nanoparticles was tested against pathogenic yeasts as well as against various dermatophytes. The examined strains are listed in Table 1.\nFirst, agar diffusion assay was carried out as described previously.20 Briefly, 5 µL of the synthesized AuNP or AgNP suspensions (5 mg/mL) were loaded onto the surface of the plates seeded by the test strains. The plates were incubated at 30°C and the inhibition zones were determined after 24 hours.\nAnti-dermatophyte activity was tested on potato dextrose agar (PDA; VWR, Radnor, PA, USA) plates. Agar plugs of 3 mm diameter were cut from 4-day-old cultures and inoculated upside down onto the surface of PDA plates supplemented with AgNP (in 10 or 30 µg/mL concentration) or with AuNP (first in 10 or 30 µg/mL and later in 300 µg/mL concentrations). Subsequently, the plates were incubated for 4 days at 30°C and the diameter of the colonies was measured.\nAll experiments were carried out at least three times using three biological replicates.\nThe antifungal activity of the nanoparticles was tested against pathogenic yeasts as well as against various dermatophytes. The examined strains are listed in Table 1.\nFirst, agar diffusion assay was carried out as described previously.20 Briefly, 5 µL of the synthesized AuNP or AgNP suspensions (5 mg/mL) were loaded onto the surface of the plates seeded by the test strains. The plates were incubated at 30°C and the inhibition zones were determined after 24 hours.\nAnti-dermatophyte activity was tested on potato dextrose agar (PDA; VWR, Radnor, PA, USA) plates. Agar plugs of 3 mm diameter were cut from 4-day-old cultures and inoculated upside down onto the surface of PDA plates supplemented with AgNP (in 10 or 30 µg/mL concentration) or with AuNP (first in 10 or 30 µg/mL and later in 300 µg/mL concentrations). Subsequently, the plates were incubated for 4 days at 30°C and the diameter of the colonies was measured.\nAll experiments were carried out at least three times using three biological replicates.\n Yeast viability assay The number of colony forming units (CFU) was determined to assess the viability of nanoparticle-treated Cr. neoformans cells. Briefly, 4×106 yeast cells in Hanks’ Balanced Salt solution (HBSS) were exposed to either AgNP or AuNP in concentrations of 1, 5, 10, and 30 µg/mL for 24 hours at 30°C. After treatment, cells were washed and were resuspended in 1 mL sterile water. A tenfold dilution series was prepared and 25 µL aliquots from each dilution were spread onto YPD plates in triplicates. Non-treated cells were used as control. Plates were incubated at 30°C for 72 hours and the number of the colonies was counted. The experiments were carried out 3 times.\nFollowing AgNP or AuNP treatments, the viability of Cr. neoformans cells was examined by calcein-AM assay using flow cytometry. 
Briefly, 4×106 cells were exposed to 5, 10 or 30 µg/mL AgNPs or to 10 and 30 µg/mL AuNPs in HBSS at 30°C for 24 hours. After the treatments, cells were washed with sterile distilled water, suspended in 200 µL HBSS supplemented with 10 µg/mL verapamil (Sigma-Aldrich) and were incubated for 20 min at 30°C to block efflux transporter activity. Subsequently, calcein-AM (Thermo Fisher Scientific) was added in 10 µM concentration and the suspensions were further incubated in the dark at 30°C for 3 hours. Cells were washed with HBSS and the fluorescent intensity was measured by flow cytometer (FlowSight®, Amnis-EMD Millipore) using a 488-nm excitation laser.\nThe time-dependent effect of nanoparticle treatments was investigated by exposing 4×106\nCr. neoformans cells in 1 mL HBSS to AgNP or AuNP in 10 µg/mL concentrations at 30°C. After 1-, 3-, 10- and 24-hour treatments, aliquots of 100 µL each were taken, and after a washing step were resuspended in 1 mL sterile water. Then tenfold dilution series was prepared and from each dilution 25 µL aliquots were spread onto the surface of YPD plates. Non-treated cells were used as control. The plates were incubated at 30°C for 72 hours and the number of the colonies was counted. The experiments were carried out three times, in three biological replicates.\nThe number of colony forming units (CFU) was determined to assess the viability of nanoparticle-treated Cr. neoformans cells. Briefly, 4×106 yeast cells in Hanks’ Balanced Salt solution (HBSS) were exposed to either AgNP or AuNP in concentrations of 1, 5, 10, and 30 µg/mL for 24 hours at 30°C. After treatment, cells were washed and were resuspended in 1 mL sterile water. A tenfold dilution series was prepared and 25 µL aliquots from each dilution were spread onto YPD plates in triplicates. Non-treated cells were used as control. Plates were incubated at 30°C for 72 hours and the number of the colonies was counted. The experiments were carried out 3 times.\nFollowing AgNP or AuNP treatments, the viability of Cr. neoformans cells was examined by calcein-AM assay using flow cytometry. Briefly, 4×106 cells were exposed to 5, 10 or 30 µg/mL AgNPs or to 10 and 30 µg/mL AuNPs in HBSS at 30°C for 24 hours. After the treatments, cells were washed with sterile distilled water, suspended in 200 µL HBSS supplemented with 10 µg/mL verapamil (Sigma-Aldrich) and were incubated for 20 min at 30°C to block efflux transporter activity. Subsequently, calcein-AM (Thermo Fisher Scientific) was added in 10 µM concentration and the suspensions were further incubated in the dark at 30°C for 3 hours. Cells were washed with HBSS and the fluorescent intensity was measured by flow cytometer (FlowSight®, Amnis-EMD Millipore) using a 488-nm excitation laser.\nThe time-dependent effect of nanoparticle treatments was investigated by exposing 4×106\nCr. neoformans cells in 1 mL HBSS to AgNP or AuNP in 10 µg/mL concentrations at 30°C. After 1-, 3-, 10- and 24-hour treatments, aliquots of 100 µL each were taken, and after a washing step were resuspended in 1 mL sterile water. Then tenfold dilution series was prepared and from each dilution 25 µL aliquots were spread onto the surface of YPD plates. Non-treated cells were used as control. The plates were incubated at 30°C for 72 hours and the number of the colonies was counted. The experiments were carried out three times, in three biological replicates.", "P. 
rhodozyma American Type Culture Collection (ATCC) 24203 cells grown on 2× yeast extract peptone dextrose (YPD; Sigma-Aldrich Co., St Louis, MO, USA) (2% D-glucose, 2% peptone, 1% yeast extract and 2% agar) plates at 22°C for 5 days were collected by cell scraper. Ten grams of wet weight biomass were suspended in 200 mL sterile distilled water and the cells were disrupted in Bead Beater (BioSpec Products, Inc., Bartlesville, OK, USA) using glass beads with 0.5–1 mm diameter. Cell debris was removed by centrifugation at 18,000 g for 15 min in Sorvall RC-5B centrifuge (Thermo Fisher Scientific, Waltham, MA, USA) at 4°C. The supernatant was filtered through 0.45 µm pore sized nitrocellulose membrane (Merck KGaA, Darmstadt, Germany). The filtrate was used as a reducing agent and stabilizer for the synthesis of nanoparticles.", "After optimization of the nanoparticle synthesis process, the following protocol was used. For preparation of AgNP, 90 mL cell-free extract (pH 6.7) was mixed with 10 mL of 1 M AgNO3 suspension. Gold nanoparticles were synthesized similarly, by adding 10 mL 1 M HAuCl4 solution to 90 mL cell-free extract. Both suspensions were constantly stirred using an orbital shaker at 22°C for 24 hours. Nanoparticles were pelleted by centrifugation for 15 min at 18,000 g in Sorvall RC-5B centrifuge at 4°C then washed twice with sterile distilled water. The final colloid suspensions were characterized and stored at 4°C.", "The morphological features of the synthesized AgNPs and AuNPs were analyzed by transmission electron microscopy (TEM) using a FEI Tecnai G2 20 microscope (FEI Corporate Headquarters, Hillsboro, OR, USA) at an acceleration voltage of 200 kV. The hydrodynamic particle size distribution of the samples was assessed by dynamic light scattering (DLS) analysis using a Zetasizer Nano Instrument (Malvern Instruments, Malvern, UK). The crystal structures were examined by X-ray powder diffraction (XRD), where the scans were performed with a Rigaku MiniFlex II powder diffractometer (Rigaku Corporation, Tokyo, Japan) using Cu Kα radiation. A scanning rate of 2° min−1 in the 20°–80° 2θ range was used. Finally, the optical properties of the obtained AuNPs and AgNPs were studied using an Ocean Optics 355 DH-2000-BAL ultraviolet-visible (UV-VIS) spectrophotometer (Halma PLC, Largo, FL, USA) in a 10-mm path length quartz cuvette. The absorbance spectra of nanoparticles were recorded within the range of 300–800 nm.", "HaCaT immortalized keratinocyte cell line from adult human skin was purchased from ATCC. HaCaT cells were maintained in Dulbecco’s Modified Eagle’s Medium (DMEM) containing 4.5 g/L glucose, supplemented with 10% fetal bovine serum, 2 mM L-glutamine, 0.01% streptomycin and 0.005% ampicillin. Cells were cultured under standard conditions in a 37°C incubator at 5% CO2 in 95% humidity.", "MTT mitochondrial activity assay (Sigma-Aldrich) was performed to measure cell viability. HaCaT cells were seeded into 96 well plates (10,000/wells) and treated with either AgNPs or AuNPs, or with cisplatin in different concentrations on the following day. After 24-hour treatments cells were washed with phosphate buffered saline (PBS) and incubated with culture medium containing 0.5 mg/mL MTT reagent for 1 hour at 37°C. Formazan crystals were solubilized in DMSO and extinction was measured at 570 nm using a Synergy HTX plate reader (Thermo Fisher Scientific). Absorption corresponding to the untreated control samples was considered as 100%. 
MTT assays were performed at least three times using four independent biological replicates.\nCytotoxicity of the synthesized nanoparticles was assessed by crystal violet staining as well. For this, HaCaT cells were seeded into 24 well plates and were left to grow until they reached confluence. Then cell layers were treated with nanoparticles or with cisplatin for 24 hours. After each treatment, cells were washed three times with PBS and fixed using methanol:acetone 70:30 mixture. Fixed cells were then stained using 0.5% crystal violet dissolved in 25% methanol, washed with distilled water then air-dried. Plates were photographed, and crystal violet was solubilized using 400 µL 10% acetic acid. From each well, 100 µL solution was transferred to 96 well plates then absorbance was determined at 590 nm using a the HTX microplate reader.", "The antifungal activity of the nanoparticles was tested against pathogenic yeasts as well as against various dermatophytes. The examined strains are listed in Table 1.\nFirst, agar diffusion assay was carried out as described previously.20 Briefly, 5 µL of the synthesized AuNP or AgNP suspensions (5 mg/mL) were loaded onto the surface of the plates seeded by the test strains. The plates were incubated at 30°C and the inhibition zones were determined after 24 hours.\nAnti-dermatophyte activity was tested on potato dextrose agar (PDA; VWR, Radnor, PA, USA) plates. Agar plugs of 3 mm diameter were cut from 4-day-old cultures and inoculated upside down onto the surface of PDA plates supplemented with AgNP (in 10 or 30 µg/mL concentration) or with AuNP (first in 10 or 30 µg/mL and later in 300 µg/mL concentrations). Subsequently, the plates were incubated for 4 days at 30°C and the diameter of the colonies was measured.\nAll experiments were carried out at least three times using three biological replicates.", "The number of colony forming units (CFU) was determined to assess the viability of nanoparticle-treated Cr. neoformans cells. Briefly, 4×106 yeast cells in Hanks’ Balanced Salt solution (HBSS) were exposed to either AgNP or AuNP in concentrations of 1, 5, 10, and 30 µg/mL for 24 hours at 30°C. After treatment, cells were washed and were resuspended in 1 mL sterile water. A tenfold dilution series was prepared and 25 µL aliquots from each dilution were spread onto YPD plates in triplicates. Non-treated cells were used as control. Plates were incubated at 30°C for 72 hours and the number of the colonies was counted. The experiments were carried out 3 times.\nFollowing AgNP or AuNP treatments, the viability of Cr. neoformans cells was examined by calcein-AM assay using flow cytometry. Briefly, 4×106 cells were exposed to 5, 10 or 30 µg/mL AgNPs or to 10 and 30 µg/mL AuNPs in HBSS at 30°C for 24 hours. After the treatments, cells were washed with sterile distilled water, suspended in 200 µL HBSS supplemented with 10 µg/mL verapamil (Sigma-Aldrich) and were incubated for 20 min at 30°C to block efflux transporter activity. Subsequently, calcein-AM (Thermo Fisher Scientific) was added in 10 µM concentration and the suspensions were further incubated in the dark at 30°C for 3 hours. Cells were washed with HBSS and the fluorescent intensity was measured by flow cytometer (FlowSight®, Amnis-EMD Millipore) using a 488-nm excitation laser.\nThe time-dependent effect of nanoparticle treatments was investigated by exposing 4×106\nCr. neoformans cells in 1 mL HBSS to AgNP or AuNP in 10 µg/mL concentrations at 30°C. 
After 1-, 3-, 10- and 24-hour treatments, aliquots of 100 µL each were taken, and after a washing step were resuspended in 1 mL sterile water. Then tenfold dilution series was prepared and from each dilution 25 µL aliquots were spread onto the surface of YPD plates. Non-treated cells were used as control. The plates were incubated at 30°C for 72 hours and the number of the colonies was counted. The experiments were carried out three times, in three biological replicates.", " Characterization of nanoparticles Particle morphology and size of the synthesized AgNPs and AuNPs were determined by TEM image analysis. According to the obtained TEM micrographs, AgNPs were quasi-spherical, well separated from each other and minor polydispersity could be observed (Figure 1A). The average size of the AgNPs proved to be 4.1±1.44 nm. Gold particles were almost monodispersed, well separated and spherical with a narrow size distribution (2.22±0.7 nm). DLS measurements were also carried out to determine hydrodynamic particle sizes (Figure 1B). The average size of AgNPs was between 5 and 9 nm diameter, whereas the size of gold nanoparticles was around 4–7 nm. The formation of metal nanoparticles was ascertained by UV-VIS measurements (Figure 1C). The typical surface plasmon resonance (SPR) band of AgNPs appeared around 455 nm. UV-VIS spectra of AuNPs showed absorption peak maxima at 541 nm characteristic to SPR, which further supported the formation of metallic gold nanoparticles. In order to confirm the crystalline nature of the nanoparticles, XRD analysis was performed. The XRD pattern of the AgNP sample (Figure 1D) exhibited four identical diffraction peaks appearing at 2θ =38.4°, 44.4°, 64.7° and 77.5° corresponding to (111), (200), (220) and (311) planes of the face-centered cubic lattice structure of metallic silver21 (Joint Committee on Powder Diffraction Standards [JCPDS] No 87-0717). Crystallinity of the as-synthesized AuNPs was also investigated. Gold nanocrystals exhibited four distinct peaks at 2θ =39.4°, 44.5°, 65.7° and 78.6° corresponding to (111), (200), (220) and (310) Bragg’s reflection based on the face-centered cubic structure of metallic gold (Figure 1D) (JCPDS No 4-0784).22\nParticle morphology and size of the synthesized AgNPs and AuNPs were determined by TEM image analysis. According to the obtained TEM micrographs, AgNPs were quasi-spherical, well separated from each other and minor polydispersity could be observed (Figure 1A). The average size of the AgNPs proved to be 4.1±1.44 nm. Gold particles were almost monodispersed, well separated and spherical with a narrow size distribution (2.22±0.7 nm). DLS measurements were also carried out to determine hydrodynamic particle sizes (Figure 1B). The average size of AgNPs was between 5 and 9 nm diameter, whereas the size of gold nanoparticles was around 4–7 nm. The formation of metal nanoparticles was ascertained by UV-VIS measurements (Figure 1C). The typical surface plasmon resonance (SPR) band of AgNPs appeared around 455 nm. UV-VIS spectra of AuNPs showed absorption peak maxima at 541 nm characteristic to SPR, which further supported the formation of metallic gold nanoparticles. In order to confirm the crystalline nature of the nanoparticles, XRD analysis was performed. 
The XRD pattern of the AgNP sample (Figure 1D) exhibited four identical diffraction peaks appearing at 2θ =38.4°, 44.4°, 64.7° and 77.5° corresponding to (111), (200), (220) and (311) planes of the face-centered cubic lattice structure of metallic silver21 (Joint Committee on Powder Diffraction Standards [JCPDS] No 87-0717). Crystallinity of the as-synthesized AuNPs was also investigated. Gold nanocrystals exhibited four distinct peaks at 2θ =39.4°, 44.5°, 65.7° and 78.6° corresponding to (111), (200), (220) and (310) Bragg’s reflection based on the face-centered cubic structure of metallic gold (Figure 1D) (JCPDS No 4-0784).22\n Toxicity on human cells We assessed the cell viability of human skin-derived keratinocyte HaCaT cells treated with biologically synthesized AgNPs and AuNPs. MTT assays revealed that the nanoparticles, both AgNPs and AuNPs, applied in 0–60 µg/mL concentrations, did not reduce the viability of HaCaT cells (Figure 2A). On the other hand, treatments with the well-known therapeutic agent cisplatin caused considerable cytotoxicity, since significant loss of keratinocyte metabolic activity was observed already at 5 µg/mL cisplatin concentration (Figure 2A).\nCytotoxicity of AgNPs and AuNPs was analyzed by the crystal violet staining method (Figure 2B). AgNPs and AuNPs generated by P. rhodozyma cell-free extract were non-toxic to the applied human cells in the tested concentration range. Conversely, cisplatin proved to be significantly toxic to human HaCaT cells. These results indicate that these biogenic AgNPs and AuNPs exhibit no toxicity therefore both are biocompatible to human keratinocytes.\nWe assessed the cell viability of human skin-derived keratinocyte HaCaT cells treated with biologically synthesized AgNPs and AuNPs. MTT assays revealed that the nanoparticles, both AgNPs and AuNPs, applied in 0–60 µg/mL concentrations, did not reduce the viability of HaCaT cells (Figure 2A). On the other hand, treatments with the well-known therapeutic agent cisplatin caused considerable cytotoxicity, since significant loss of keratinocyte metabolic activity was observed already at 5 µg/mL cisplatin concentration (Figure 2A).\nCytotoxicity of AgNPs and AuNPs was analyzed by the crystal violet staining method (Figure 2B). AgNPs and AuNPs generated by P. rhodozyma cell-free extract were non-toxic to the applied human cells in the tested concentration range. Conversely, cisplatin proved to be significantly toxic to human HaCaT cells. These results indicate that these biogenic AgNPs and AuNPs exhibit no toxicity therefore both are biocompatible to human keratinocytes.\n Efficiency against pathogenic yeasts The antifungal activity of the prepared AgNPs and AuNPs was tested against various Candida and Cryptococcus species. AgNPs inhibited the growth of all the examined strains in agar diffusion assay except that of Candida tropicalis (Figure 3A). Interestingly, Cr. neoformans was sensitive to AuNPs as well, whereas other species exhibited resistance against AuNPs (Figure 3A).\nSince Cr. neoformans appeared susceptible to both nanoparticles, the antifungal effect of AgNPs and AuNPs was assessed more quantitatively on this particular strain. Hence, Cr. neoformans cells were subjected to different concentrations of AgNPs and AuNPs for 24 hours then cell viability was determined using colony forming assay (Figure 3B). Treatments with 1 µg/mL of either AgNPs or AuNPs already resulted in a significant loss of cell viability, whereas the number of viable Cr. 
neoformans cells dropped below the detectable level following 30 µg/mL of AuNP or AgNP administrations (Figure 3B).\nTo further test the viability of nanoparticle-treated Cr. neoformans cells, calcein-AM staining procedure was applied. As expected, increasing concentrations of either AgNPs or AuNPs gradually diminished the percentage of calcein-positive cells. After 10 µg/mL AgNP administration, approximately 50% of the cells were alive, while 83% of the examined cells were dead when AgNP concentration was elevated to 30 µg/mL (Figure 3C). A similar pattern was observed when yeast cells were exposed to AuNPs, since nanoparticle treatments in 30 µg/mL concentrations decreased the percentage of the viable cells to approximately 50% (Figure 3C).\nTime dependence of the nanoparticle-induced antifungal effect was also determined. For this purpose, Cr. neoformans cells were again subjected to 10 µg/mL AgNPs or AuNPs and after 1-, 3-, 10- and 24-hour treatments the number of the colonies was counted. Exposure to AgNPs for 1 hour caused a 50% decrease in the number of the CFU, but 10-hour treatments with either AgNPs or AuNPs reduced the number of viable cells below 10% (Figure 3D).\nThe antifungal activity of the prepared AgNPs and AuNPs was tested against various Candida and Cryptococcus species. AgNPs inhibited the growth of all the examined strains in agar diffusion assay except that of Candida tropicalis (Figure 3A). Interestingly, Cr. neoformans was sensitive to AuNPs as well, whereas other species exhibited resistance against AuNPs (Figure 3A).\nSince Cr. neoformans appeared susceptible to both nanoparticles, the antifungal effect of AgNPs and AuNPs was assessed more quantitatively on this particular strain. Hence, Cr. neoformans cells were subjected to different concentrations of AgNPs and AuNPs for 24 hours then cell viability was determined using colony forming assay (Figure 3B). Treatments with 1 µg/mL of either AgNPs or AuNPs already resulted in a significant loss of cell viability, whereas the number of viable Cr. neoformans cells dropped below the detectable level following 30 µg/mL of AuNP or AgNP administrations (Figure 3B).\nTo further test the viability of nanoparticle-treated Cr. neoformans cells, calcein-AM staining procedure was applied. As expected, increasing concentrations of either AgNPs or AuNPs gradually diminished the percentage of calcein-positive cells. After 10 µg/mL AgNP administration, approximately 50% of the cells were alive, while 83% of the examined cells were dead when AgNP concentration was elevated to 30 µg/mL (Figure 3C). A similar pattern was observed when yeast cells were exposed to AuNPs, since nanoparticle treatments in 30 µg/mL concentrations decreased the percentage of the viable cells to approximately 50% (Figure 3C).\nTime dependence of the nanoparticle-induced antifungal effect was also determined. For this purpose, Cr. neoformans cells were again subjected to 10 µg/mL AgNPs or AuNPs and after 1-, 3-, 10- and 24-hour treatments the number of the colonies was counted. Exposure to AgNPs for 1 hour caused a 50% decrease in the number of the CFU, but 10-hour treatments with either AgNPs or AuNPs reduced the number of viable cells below 10% (Figure 3D).\n Anti-dermatophyte activity All the three examined dermatophyte species were sensitive to AgNP treatments in growth inhibition assay (Figure 4). 
AgNPs in 30 µg/mL concentration induced significant inhibition of colony growth, as the diameters of the colonies were markedly smaller, indicating an approximately 80% growth inhibition for each strain (Figure 4). On the other hand, none of the dermatophyte species reacted to AuNPs, when applied in 10 or 30 µg/mL concentrations. Therefore, AuNP concentration was elevated up to 300 µg/mL but still no inhibition was detected (data not shown), indicating that AuNPs are not suitable for the growth inhibition of the examined cutaneous mycosis-causing dermatophyte fungi.\nAll the three examined dermatophyte species were sensitive to AgNP treatments in growth inhibition assay (Figure 4). AgNPs in 30 µg/mL concentration induced significant inhibition of colony growth, as the diameters of the colonies were markedly smaller, indicating an approximately 80% growth inhibition for each strain (Figure 4). On the other hand, none of the dermatophyte species reacted to AuNPs, when applied in 10 or 30 µg/mL concentrations. Therefore, AuNP concentration was elevated up to 300 µg/mL but still no inhibition was detected (data not shown), indicating that AuNPs are not suitable for the growth inhibition of the examined cutaneous mycosis-causing dermatophyte fungi.", "Particle morphology and size of the synthesized AgNPs and AuNPs were determined by TEM image analysis. According to the obtained TEM micrographs, AgNPs were quasi-spherical, well separated from each other and minor polydispersity could be observed (Figure 1A). The average size of the AgNPs proved to be 4.1±1.44 nm. Gold particles were almost monodispersed, well separated and spherical with a narrow size distribution (2.22±0.7 nm). DLS measurements were also carried out to determine hydrodynamic particle sizes (Figure 1B). The average size of AgNPs was between 5 and 9 nm diameter, whereas the size of gold nanoparticles was around 4–7 nm. The formation of metal nanoparticles was ascertained by UV-VIS measurements (Figure 1C). The typical surface plasmon resonance (SPR) band of AgNPs appeared around 455 nm. UV-VIS spectra of AuNPs showed absorption peak maxima at 541 nm characteristic to SPR, which further supported the formation of metallic gold nanoparticles. In order to confirm the crystalline nature of the nanoparticles, XRD analysis was performed. The XRD pattern of the AgNP sample (Figure 1D) exhibited four identical diffraction peaks appearing at 2θ =38.4°, 44.4°, 64.7° and 77.5° corresponding to (111), (200), (220) and (311) planes of the face-centered cubic lattice structure of metallic silver21 (Joint Committee on Powder Diffraction Standards [JCPDS] No 87-0717). Crystallinity of the as-synthesized AuNPs was also investigated. Gold nanocrystals exhibited four distinct peaks at 2θ =39.4°, 44.5°, 65.7° and 78.6° corresponding to (111), (200), (220) and (310) Bragg’s reflection based on the face-centered cubic structure of metallic gold (Figure 1D) (JCPDS No 4-0784).22", "We assessed the cell viability of human skin-derived keratinocyte HaCaT cells treated with biologically synthesized AgNPs and AuNPs. MTT assays revealed that the nanoparticles, both AgNPs and AuNPs, applied in 0–60 µg/mL concentrations, did not reduce the viability of HaCaT cells (Figure 2A). 
On the other hand, treatments with the well-known therapeutic agent cisplatin caused considerable cytotoxicity, since significant loss of keratinocyte metabolic activity was observed already at 5 µg/mL cisplatin concentration (Figure 2A).\nCytotoxicity of AgNPs and AuNPs was analyzed by the crystal violet staining method (Figure 2B). AgNPs and AuNPs generated by P. rhodozyma cell-free extract were non-toxic to the applied human cells in the tested concentration range. Conversely, cisplatin proved to be significantly toxic to human HaCaT cells. These results indicate that these biogenic AgNPs and AuNPs exhibit no toxicity therefore both are biocompatible to human keratinocytes.", "The antifungal activity of the prepared AgNPs and AuNPs was tested against various Candida and Cryptococcus species. AgNPs inhibited the growth of all the examined strains in agar diffusion assay except that of Candida tropicalis (Figure 3A). Interestingly, Cr. neoformans was sensitive to AuNPs as well, whereas other species exhibited resistance against AuNPs (Figure 3A).\nSince Cr. neoformans appeared susceptible to both nanoparticles, the antifungal effect of AgNPs and AuNPs was assessed more quantitatively on this particular strain. Hence, Cr. neoformans cells were subjected to different concentrations of AgNPs and AuNPs for 24 hours then cell viability was determined using colony forming assay (Figure 3B). Treatments with 1 µg/mL of either AgNPs or AuNPs already resulted in a significant loss of cell viability, whereas the number of viable Cr. neoformans cells dropped below the detectable level following 30 µg/mL of AuNP or AgNP administrations (Figure 3B).\nTo further test the viability of nanoparticle-treated Cr. neoformans cells, calcein-AM staining procedure was applied. As expected, increasing concentrations of either AgNPs or AuNPs gradually diminished the percentage of calcein-positive cells. After 10 µg/mL AgNP administration, approximately 50% of the cells were alive, while 83% of the examined cells were dead when AgNP concentration was elevated to 30 µg/mL (Figure 3C). A similar pattern was observed when yeast cells were exposed to AuNPs, since nanoparticle treatments in 30 µg/mL concentrations decreased the percentage of the viable cells to approximately 50% (Figure 3C).\nTime dependence of the nanoparticle-induced antifungal effect was also determined. For this purpose, Cr. neoformans cells were again subjected to 10 µg/mL AgNPs or AuNPs and after 1-, 3-, 10- and 24-hour treatments the number of the colonies was counted. Exposure to AgNPs for 1 hour caused a 50% decrease in the number of the CFU, but 10-hour treatments with either AgNPs or AuNPs reduced the number of viable cells below 10% (Figure 3D).", "All the three examined dermatophyte species were sensitive to AgNP treatments in growth inhibition assay (Figure 4). AgNPs in 30 µg/mL concentration induced significant inhibition of colony growth, as the diameters of the colonies were markedly smaller, indicating an approximately 80% growth inhibition for each strain (Figure 4). On the other hand, none of the dermatophyte species reacted to AuNPs, when applied in 10 or 30 µg/mL concentrations. 
Therefore, AuNP concentration was elevated up to 300 µg/mL but still no inhibition was detected (data not shown), indicating that AuNPs are not suitable for the growth inhibition of the examined cutaneous mycosis-causing dermatophyte fungi.", "Emerging problems associated with antibiotic-resistant microbes demanded new strategies to find novel antimicrobial agents.23,24 Due to their broad spectrum activity, silver-containing materials have been used as antiseptics and disinfectants for decades,25 however the potential application of colloidal gold for medical purposes has only recently been recognized.26 Physical and chemical methods are generally considered as standard approaches for nanoparticle preparation, nevertheless these procedures lead to the accumulation of environmentally hazardous wastes.27 Furthermore, the presence of chemical residues within the nanoparticle colloid solutions limits their biomedical utilization.28 Biological synthesis is regarded as an eco-friendly and cost-effective alternative for the production of biocompatible nanoparticles.29 The numerous publications reporting either living microorganisms, plants or cell-free extracts for the synthesis of nanomaterials highlight this rapidly expanding field of bionanotechnology.15,30\nIn this study, we investigated the suitability of P. rhodozyma cell-free extract for AuNP and AgNP preparation, since this microbe contains an effective antioxidant, astaxanthin in high concentration, which promotes the formation of metal nanoparticles.31 AgNPs and AuNPs were successfully synthesized and the biological activities of the as-prepared nanoparticles were subjected to complex analysis. AgNPs were effective against all the examined mycosis-causing fungal species, except for C. tropicalis. Most importantly, these biosynthesized silver particles were capable of inhibiting the growth of several opportunistic Candida or Cryptococcus species and were highly potent against filamentous Microsporum and Trichophyton dermatophytes. In addition, AgNPs were biocompatible, showing no cytotoxicity to human HaCat keratinocytes, which renders these biosynthesized AgNPs attractive candidates for further biomedical and pharmacological applications. It is also noteworthy that among the tested species only Cr. neoformans was susceptible to both AgNPs and AuNPs. We believe this is a relevant finding, since mainly antibacterial features of AuNPs have been demonstrated so far,32–34 while the scientific literature is short on investigations into the potential antifungal feature of AuNPs.35,36 Importantly, apart from their capacity to inhibit Cr. neoformans, gold nanoparticles exerted no toxicity on human keratinocytes. These results emphasize the potential of such biosynthesized nanoparticles in topical dermatologic therapeutic strategies. Since cutaneous mycosis has the highest incidence among fungal infections,1,2,37 cost-effective, novel treatment modalities should be introduced into the related clinical practice. The symptoms can range from subclinical (eg, itching, rash), to serious inflammatory reactions (eg, blistering, scarring, alopecia), requiring distinct treatment regimens. Furthermore, the infective species can induce severe complications in immunocompromised individuals, including organ transplant, hematology and oncology patients where the infected area can manifest ulceration or even tissue necrosis.3 The severity of these symptoms should be attenuated by complementary management using topical treatment approaches. 
We believe that biosynthesized metal nanoparticles could be implemented to improve the clinical outcome of cutaneous mycosis.", "We found that the cell-free extract of P. rhodozyma is a suitable material for the synthesis of biogenic AgNPs and AuNPs. We demonstrated that these nanoparticles are potent against cutaneous mycosis-causing dermatophytes and opportunistic yeast species and are non-toxic to human keratinocytes. Considering our data in the context of the current clinical results, the biocompatibility to skin cells and the outstanding antifungal performance of our biosynthesized nanoparticles might be exploited during topical treatment of cutaneous mycosis or as a prophylactic treatment against the recurrence of the infection." ]
[ "intro", "materials|methods", null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", null ]
[ "antifungal activity", "biological synthesis", "dermatophytes", "opportunistic pathogenic yeasts", "silver nanoparticles", "toxicity" ]
Introduction: Due to social and global environmental changes, the aging population and the growing number of susceptible individuals (eg, patients with predisposing factors), the incidence of superficial mycosis has increased over the past few years causing health problems world-wide.1,2 Dermatomycosis usually manifests as itchy, sore skin rashes and tinea on the toes, the inner thighs or groin; leading to flaking and blistering of the affected area. Fungi may spread to the scalp or nails causing hair loss and thickened or deformed fingernails. The etiological agents behind dermatomycosis are mostly filamentous dermatophytes (eg, Microsporum spp., Trichophyton spp. or Epidermophyton spp.), however, several opportunistic yeast species such as Candida or Cryptococcus can invade keratinized tissues.1,3 In case of cutaneous candidiasis, the origin is mostly endogenous as the causative agents (ie, Candida spp.) are the resident or transient members of the human dermal microflora. Cryptococcus neoformans can induce meningitis in immunocompromised patients, however, due to systemic dissemination of yeast cells in such patients, skin lesions may also appear.4 In addition to this secondary variant, a primary form of cutaneous cryptococcosis has recently been identified, where the pathogen enters the body directly through skin injury.5 Regardless of the distinct origins of Cr. neoformans infections, this pathogen can cause life-threatening systemic complications in the host.4,5 Therefore, there is an inevitable and urgent medical need to find novel and efficient agents to defeat opportunistic pathogens and cutaneous mycosis-causing species. To achieve this goal, an innovative, environmentally benign and economic approach is required. A solution that fulfills these demands could be based on nanotechnology, since owing to the abundance of recent scientific advancements in this field, it is now possible to design and develop nano-sized structures with unique properties tailored for specific applications (eg, in optics, in electronics, in catalysis, and in household items, as well as in medicine).6 In recent years, gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs),7 in particular, have been in the focus of increasing interest due to their simple synthesis and desirable biological activities (ie, antitumor, antibacterial, antiviral and antifungal effects).8,9 We have reported that the cytotoxic features of AgNPs can be exploited to kill p53 tumor suppressor-deficient10 as well as multidrug-resistant cancer cells,11 providing some further details to the mechanism of AgNP-induced antitumor actions. 
In recent years, several studies have also demonstrated that AgNPs, synthesized either by conventional chemical reduction methods, physical techniques or by different biological entities, exhibit potent inhibitory effects against Gram positive and Gram negative bacterial species and induce cytotoxicity in various pathogenic fungi.8,12 Green alternatives to harsh reducing chemicals and environmentally benign capping agents have also been developed to reduce the costs of nanoparticle production and to minimize the generation of hazardous wastes upon the synthetic procedures.13 In a comprehensive study on AgNPs obtained by chemical reduction using coffee and green tea extracts, we proved the remarkable antimicrobial efficiency of such particles,14 and we showed evidence that the green material used for AgNP synthesis can largely define the physical, chemical and biological characteristics of the produced nanomaterial.14 Owing to widespread applications, there is an ever-growing demand for AgNPs and AuNPs, while the challenges of their economical, environmentally safe and sustainable production should also be addressed. A rapidly progressing area of nanobiotechnology is the microbe-assisted nanoparticle synthesis,15 where bacterial strains, yeast- and alga-mediated reactions are exploited for nanoparticle generation.16,17 One such study demonstrated that the astaxanthin-containing green alga Chlorella vulgaris is a useful agent for gold nanoparticle synthesis.18 Based on this observation we hypothesized that Phaffia rhodozyma (perfect state Xanthophyllomyces dendrorhous), a basidiomycetous red yeast with high astaxanthin content,19 might also be a potential candidate for microbe-assisted nanoparticle synthesis. Therefore, the aim of this present study was to investigate the suitability of P. rhodozyma cell-free extract for the preparation of AgNPs and AuNPs. Since our synthesis method proved to be successful, the as-prepared nanoparticles were characterized, and a complex biological screening was carried out to determine their biological activity, where beside toxicity to human skin-derived cells, the antifungal efficiency of the biosynthesized nanoparticles was also assessed with a special emphasis on inhibitory features against opportunistic pathogenic yeasts and dermatophytes. Materials and methods: Preparation of P. rhodozyma cell-free extract P. rhodozyma American Type Culture Collection (ATCC) 24203 cells grown on 2× yeast extract peptone dextrose (YPD; Sigma-Aldrich Co., St Louis, MO, USA) (2% D-glucose, 2% peptone, 1% yeast extract and 2% agar) plates at 22°C for 5 days were collected by cell scraper. Ten grams of wet weight biomass were suspended in 200 mL sterile distilled water and the cells were disrupted in Bead Beater (BioSpec Products, Inc., Bartlesville, OK, USA) using glass beads with 0.5–1 mm diameter. Cell debris was removed by centrifugation at 18,000 g for 15 min in Sorvall RC-5B centrifuge (Thermo Fisher Scientific, Waltham, MA, USA) at 4°C. The supernatant was filtered through 0.45 µm pore sized nitrocellulose membrane (Merck KGaA, Darmstadt, Germany). The filtrate was used as a reducing agent and stabilizer for the synthesis of nanoparticles. P. rhodozyma American Type Culture Collection (ATCC) 24203 cells grown on 2× yeast extract peptone dextrose (YPD; Sigma-Aldrich Co., St Louis, MO, USA) (2% D-glucose, 2% peptone, 1% yeast extract and 2% agar) plates at 22°C for 5 days were collected by cell scraper. 
Ten grams of wet weight biomass were suspended in 200 mL sterile distilled water and the cells were disrupted in Bead Beater (BioSpec Products, Inc., Bartlesville, OK, USA) using glass beads with 0.5–1 mm diameter. Cell debris was removed by centrifugation at 18,000 g for 15 min in Sorvall RC-5B centrifuge (Thermo Fisher Scientific, Waltham, MA, USA) at 4°C. The supernatant was filtered through 0.45 µm pore sized nitrocellulose membrane (Merck KGaA, Darmstadt, Germany). The filtrate was used as a reducing agent and stabilizer for the synthesis of nanoparticles. Synthesis of AgNPs and AuNPs After optimization of the nanoparticle synthesis process, the following protocol was used. For preparation of AgNP, 90 mL cell-free extract (pH 6.7) was mixed with 10 mL of 1 M AgNO3 suspension. Gold nanoparticles were synthesized similarly, by adding 10 mL 1 M HAuCl4 solution to 90 mL cell-free extract. Both suspensions were constantly stirred using an orbital shaker at 22°C for 24 hours. Nanoparticles were pelleted by centrifugation for 15 min at 18,000 g in Sorvall RC-5B centrifuge at 4°C then washed twice with sterile distilled water. The final colloid suspensions were characterized and stored at 4°C. After optimization of the nanoparticle synthesis process, the following protocol was used. For preparation of AgNP, 90 mL cell-free extract (pH 6.7) was mixed with 10 mL of 1 M AgNO3 suspension. Gold nanoparticles were synthesized similarly, by adding 10 mL 1 M HAuCl4 solution to 90 mL cell-free extract. Both suspensions were constantly stirred using an orbital shaker at 22°C for 24 hours. Nanoparticles were pelleted by centrifugation for 15 min at 18,000 g in Sorvall RC-5B centrifuge at 4°C then washed twice with sterile distilled water. The final colloid suspensions were characterized and stored at 4°C. Characterization of nanoparticles The morphological features of the synthesized AgNPs and AuNPs were analyzed by transmission electron microscopy (TEM) using a FEI Tecnai G2 20 microscope (FEI Corporate Headquarters, Hillsboro, OR, USA) at an acceleration voltage of 200 kV. The hydrodynamic particle size distribution of the samples was assessed by dynamic light scattering (DLS) analysis using a Zetasizer Nano Instrument (Malvern Instruments, Malvern, UK). The crystal structures were examined by X-ray powder diffraction (XRD), where the scans were performed with a Rigaku MiniFlex II powder diffractometer (Rigaku Corporation, Tokyo, Japan) using Cu Kα radiation. A scanning rate of 2° min−1 in the 20°–80° 2θ range was used. Finally, the optical properties of the obtained AuNPs and AgNPs were studied using an Ocean Optics 355 DH-2000-BAL ultraviolet-visible (UV-VIS) spectrophotometer (Halma PLC, Largo, FL, USA) in a 10-mm path length quartz cuvette. The absorbance spectra of nanoparticles were recorded within the range of 300–800 nm. The morphological features of the synthesized AgNPs and AuNPs were analyzed by transmission electron microscopy (TEM) using a FEI Tecnai G2 20 microscope (FEI Corporate Headquarters, Hillsboro, OR, USA) at an acceleration voltage of 200 kV. The hydrodynamic particle size distribution of the samples was assessed by dynamic light scattering (DLS) analysis using a Zetasizer Nano Instrument (Malvern Instruments, Malvern, UK). The crystal structures were examined by X-ray powder diffraction (XRD), where the scans were performed with a Rigaku MiniFlex II powder diffractometer (Rigaku Corporation, Tokyo, Japan) using Cu Kα radiation. 
A scanning rate of 2° min−1 in the 20°–80° 2θ range was used. Finally, the optical properties of the obtained AuNPs and AgNPs were studied using an Ocean Optics 355 DH-2000-BAL ultraviolet-visible (UV-VIS) spectrophotometer (Halma PLC, Largo, FL, USA) in a 10-mm path length quartz cuvette. The absorbance spectra of nanoparticles were recorded within the range of 300–800 nm. Cell culture HaCaT immortalized keratinocyte cell line from adult human skin was purchased from ATCC. HaCaT cells were maintained in Dulbecco’s Modified Eagle’s Medium (DMEM) containing 4.5 g/L glucose, supplemented with 10% fetal bovine serum, 2 mM L-glutamine, 0.01% streptomycin and 0.005% ampicillin. Cells were cultured under standard conditions in a 37°C incubator at 5% CO2 in 95% humidity. HaCaT immortalized keratinocyte cell line from adult human skin was purchased from ATCC. HaCaT cells were maintained in Dulbecco’s Modified Eagle’s Medium (DMEM) containing 4.5 g/L glucose, supplemented with 10% fetal bovine serum, 2 mM L-glutamine, 0.01% streptomycin and 0.005% ampicillin. Cells were cultured under standard conditions in a 37°C incubator at 5% CO2 in 95% humidity. Cell viability and toxicity assays MTT mitochondrial activity assay (Sigma-Aldrich) was performed to measure cell viability. HaCaT cells were seeded into 96 well plates (10,000/wells) and treated with either AgNPs or AuNPs, or with cisplatin in different concentrations on the following day. After 24-hour treatments cells were washed with phosphate buffered saline (PBS) and incubated with culture medium containing 0.5 mg/mL MTT reagent for 1 hour at 37°C. Formazan crystals were solubilized in DMSO and extinction was measured at 570 nm using a Synergy HTX plate reader (Thermo Fisher Scientific). Absorption corresponding to the untreated control samples was considered as 100%. MTT assays were performed at least three times using four independent biological replicates. Cytotoxicity of the synthesized nanoparticles was assessed by crystal violet staining as well. For this, HaCaT cells were seeded into 24 well plates and were left to grow until they reached confluence. Then cell layers were treated with nanoparticles or with cisplatin for 24 hours. After each treatment, cells were washed three times with PBS and fixed using methanol:acetone 70:30 mixture. Fixed cells were then stained using 0.5% crystal violet dissolved in 25% methanol, washed with distilled water then air-dried. Plates were photographed, and crystal violet was solubilized using 400 µL 10% acetic acid. From each well, 100 µL solution was transferred to 96 well plates then absorbance was determined at 590 nm using a the HTX microplate reader. MTT mitochondrial activity assay (Sigma-Aldrich) was performed to measure cell viability. HaCaT cells were seeded into 96 well plates (10,000/wells) and treated with either AgNPs or AuNPs, or with cisplatin in different concentrations on the following day. After 24-hour treatments cells were washed with phosphate buffered saline (PBS) and incubated with culture medium containing 0.5 mg/mL MTT reagent for 1 hour at 37°C. Formazan crystals were solubilized in DMSO and extinction was measured at 570 nm using a Synergy HTX plate reader (Thermo Fisher Scientific). Absorption corresponding to the untreated control samples was considered as 100%. MTT assays were performed at least three times using four independent biological replicates. Cytotoxicity of the synthesized nanoparticles was assessed by crystal violet staining as well. 
For this, HaCaT cells were seeded into 24 well plates and were left to grow until they reached confluence. Then cell layers were treated with nanoparticles or with cisplatin for 24 hours. After each treatment, cells were washed three times with PBS and fixed using methanol:acetone 70:30 mixture. Fixed cells were then stained using 0.5% crystal violet dissolved in 25% methanol, washed with distilled water then air-dried. Plates were photographed, and crystal violet was solubilized using 400 µL 10% acetic acid. From each well, 100 µL solution was transferred to 96 well plates then absorbance was determined at 590 nm using the HTX microplate reader. Screening of antifungal activity The antifungal activity of the nanoparticles was tested against pathogenic yeasts as well as against various dermatophytes. The examined strains are listed in Table 1. First, agar diffusion assay was carried out as described previously.20 Briefly, 5 µL of the synthesized AuNP or AgNP suspensions (5 mg/mL) were loaded onto the surface of the plates seeded by the test strains. The plates were incubated at 30°C and the inhibition zones were determined after 24 hours. Anti-dermatophyte activity was tested on potato dextrose agar (PDA; VWR, Radnor, PA, USA) plates. Agar plugs of 3 mm diameter were cut from 4-day-old cultures and inoculated upside down onto the surface of PDA plates supplemented with AgNP (in 10 or 30 µg/mL concentration) or with AuNP (first in 10 or 30 µg/mL and later in 300 µg/mL concentrations). Subsequently, the plates were incubated for 4 days at 30°C and the diameter of the colonies was measured. All experiments were carried out at least three times using three biological replicates. Yeast viability assay The number of colony forming units (CFU) was determined to assess the viability of nanoparticle-treated Cr. neoformans cells. Briefly, 4×106 yeast cells in Hanks' Balanced Salt solution (HBSS) were exposed to either AgNP or AuNP in concentrations of 1, 5, 10, and 30 µg/mL for 24 hours at 30°C. After treatment, cells were washed and were resuspended in 1 mL sterile water. A tenfold dilution series was prepared and 25 µL aliquots from each dilution were spread onto YPD plates in triplicates. Non-treated cells were used as control. Plates were incubated at 30°C for 72 hours and the number of the colonies was counted. The experiments were carried out 3 times. Following AgNP or AuNP treatments, the viability of Cr. neoformans cells was examined by calcein-AM assay using flow cytometry. 
Briefly, 4×106 cells were exposed to 5, 10 or 30 µg/mL AgNPs or to 10 and 30 µg/mL AuNPs in HBSS at 30°C for 24 hours. After the treatments, cells were washed with sterile distilled water, suspended in 200 µL HBSS supplemented with 10 µg/mL verapamil (Sigma-Aldrich) and were incubated for 20 min at 30°C to block efflux transporter activity. Subsequently, calcein-AM (Thermo Fisher Scientific) was added in 10 µM concentration and the suspensions were further incubated in the dark at 30°C for 3 hours. Cells were washed with HBSS and the fluorescent intensity was measured by flow cytometer (FlowSight®, Amnis-EMD Millipore) using a 488-nm excitation laser. The time-dependent effect of nanoparticle treatments was investigated by exposing 4×106 Cr. neoformans cells in 1 mL HBSS to AgNP or AuNP in 10 µg/mL concentrations at 30°C. After 1-, 3-, 10- and 24-hour treatments, aliquots of 100 µL each were taken, and after a washing step were resuspended in 1 mL sterile water. Then tenfold dilution series was prepared and from each dilution 25 µL aliquots were spread onto the surface of YPD plates. Non-treated cells were used as control. The plates were incubated at 30°C for 72 hours and the number of the colonies was counted. The experiments were carried out three times, in three biological replicates. Results: Characterization of nanoparticles Particle morphology and size of the synthesized AgNPs and AuNPs were determined by TEM image analysis. According to the obtained TEM micrographs, AgNPs were quasi-spherical, well separated from each other and minor polydispersity could be observed (Figure 1A). The average size of the AgNPs proved to be 4.1±1.44 nm. Gold particles were almost monodispersed, well separated and spherical with a narrow size distribution (2.22±0.7 nm). DLS measurements were also carried out to determine hydrodynamic particle sizes (Figure 1B). The average size of AgNPs was between 5 and 9 nm diameter, whereas the size of gold nanoparticles was around 4–7 nm. The formation of metal nanoparticles was ascertained by UV-VIS measurements (Figure 1C). The typical surface plasmon resonance (SPR) band of AgNPs appeared around 455 nm. UV-VIS spectra of AuNPs showed absorption peak maxima at 541 nm characteristic to SPR, which further supported the formation of metallic gold nanoparticles. In order to confirm the crystalline nature of the nanoparticles, XRD analysis was performed. 
The XRD pattern of the AgNP sample (Figure 1D) exhibited four distinct diffraction peaks appearing at 2θ =38.4°, 44.4°, 64.7° and 77.5° corresponding to (111), (200), (220) and (311) planes of the face-centered cubic lattice structure of metallic silver21 (Joint Committee on Powder Diffraction Standards [JCPDS] No 87-0717). Crystallinity of the as-synthesized AuNPs was also investigated. Gold nanocrystals exhibited four distinct peaks at 2θ =39.4°, 44.5°, 65.7° and 78.6° corresponding to the (111), (200), (220) and (311) Bragg reflections of the face-centered cubic structure of metallic gold (Figure 1D) (JCPDS No 4-0784).22 Toxicity on human cells We assessed the cell viability of human skin-derived keratinocyte HaCaT cells treated with biologically synthesized AgNPs and AuNPs. MTT assays revealed that the nanoparticles, both AgNPs and AuNPs, applied in 0–60 µg/mL concentrations, did not reduce the viability of HaCaT cells (Figure 2A). On the other hand, treatments with the well-known therapeutic agent cisplatin caused considerable cytotoxicity, since significant loss of keratinocyte metabolic activity was observed already at 5 µg/mL cisplatin concentration (Figure 2A). Cytotoxicity of AgNPs and AuNPs was analyzed by the crystal violet staining method (Figure 2B). AgNPs and AuNPs generated by P. rhodozyma cell-free extract were non-toxic to the applied human cells in the tested concentration range. Conversely, cisplatin proved to be significantly toxic to human HaCaT cells. These results indicate that these biogenic AgNPs and AuNPs exhibit no toxicity therefore both are biocompatible to human keratinocytes. Efficiency against pathogenic yeasts The antifungal activity of the prepared AgNPs and AuNPs was tested against various Candida and Cryptococcus species. AgNPs inhibited the growth of all the examined strains in agar diffusion assay except that of Candida tropicalis (Figure 3A). Interestingly, Cr. neoformans was sensitive to AuNPs as well, whereas other species exhibited resistance against AuNPs (Figure 3A). Since Cr. neoformans appeared susceptible to both nanoparticles, the antifungal effect of AgNPs and AuNPs was assessed more quantitatively on this particular strain. Hence, Cr. neoformans cells were subjected to different concentrations of AgNPs and AuNPs for 24 hours then cell viability was determined using colony forming assay (Figure 3B). Treatments with 1 µg/mL of either AgNPs or AuNPs already resulted in a significant loss of cell viability, whereas the number of viable Cr. neoformans cells dropped below the detectable level following 30 µg/mL of AuNP or AgNP administrations (Figure 3B). To further test the viability of nanoparticle-treated Cr. neoformans cells, calcein-AM staining procedure was applied. As expected, increasing concentrations of either AgNPs or AuNPs gradually diminished the percentage of calcein-positive cells. After 10 µg/mL AgNP administration, approximately 50% of the cells were alive, while 83% of the examined cells were dead when AgNP concentration was elevated to 30 µg/mL (Figure 3C). A similar pattern was observed when yeast cells were exposed to AuNPs, since nanoparticle treatments in 30 µg/mL concentrations decreased the percentage of the viable cells to approximately 50% (Figure 3C). Time dependence of the nanoparticle-induced antifungal effect was also determined. For this purpose, Cr. neoformans cells were again subjected to 10 µg/mL AgNPs or AuNPs and after 1-, 3-, 10- and 24-hour treatments the number of the colonies was counted. Exposure to AgNPs for 1 hour caused a 50% decrease in the number of the CFU, but 10-hour treatments with either AgNPs or AuNPs reduced the number of viable cells below 10% (Figure 3D). Anti-dermatophyte activity All the three examined dermatophyte species were sensitive to AgNP treatments in growth inhibition assay (Figure 4). AgNPs in 30 µg/mL concentration induced significant inhibition of colony growth, as the diameters of the colonies were markedly smaller, indicating an approximately 80% growth inhibition for each strain (Figure 4). On the other hand, none of the dermatophyte species reacted to AuNPs, when applied in 10 or 30 µg/mL concentrations. Therefore, AuNP concentration was elevated up to 300 µg/mL but still no inhibition was detected (data not shown), indicating that AuNPs are not suitable for the growth inhibition of the examined cutaneous mycosis-causing dermatophyte fungi. 
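The percentages reported in the results above (for example, the roughly 80% inhibition of dermatophyte colony growth and the drop of viable Cr. neoformans counts below 10%) reduce to simple ratios against untreated controls. The sketch below is only illustrative: the colony diameters and CFU counts are hypothetical placeholders, not data from this study, and the diameter-based inhibition formula is one common convention rather than the authors' stated calculation.

```python
# Illustrative only: hypothetical numbers, not measurements from this study.

def growth_inhibition_percent(control_diameter_mm: float, treated_diameter_mm: float) -> float:
    """Percent inhibition of colony growth relative to an untreated control,
    using the common diameter-ratio convention."""
    return 100.0 * (1.0 - treated_diameter_mm / control_diameter_mm)

def cfu_survival_percent(control_cfu: int, treated_cfu: int) -> float:
    """Viable fraction (%) of nanoparticle-treated cells relative to the untreated control."""
    return 100.0 * treated_cfu / control_cfu

# Hypothetical dermatophyte colony diameters after 4 days on PDA plates (mm).
print(growth_inhibition_percent(45.0, 9.0))   # -> 80.0 (% inhibition)

# Hypothetical Cr. neoformans colony counts after a 10-hour nanoparticle exposure.
print(cfu_survival_percent(420, 35))          # -> ~8.3 (% of control, ie, below 10%)
```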
Discussion: Emerging problems associated with antibiotic-resistant microbes have demanded new strategies to find novel antimicrobial agents.23,24 Due to their broad-spectrum activity, silver-containing materials have been used as antiseptics and disinfectants for decades;25 however, the potential application of colloidal gold for medical purposes has only recently been recognized.26 Physical and chemical methods are generally considered standard approaches for nanoparticle preparation; nevertheless, these procedures lead to the accumulation of environmentally hazardous wastes.27 Furthermore, the presence of chemical residues within the nanoparticle colloid solutions limits their biomedical utilization.28 Biological synthesis is regarded as an eco-friendly and cost-effective alternative for the production of biocompatible nanoparticles.29 The numerous publications reporting either living microorganisms, plants or cell-free extracts for the synthesis of nanomaterials highlight this rapidly expanding field of bionanotechnology.15,30 In this study, we investigated the suitability of P. rhodozyma cell-free extract for AuNP and AgNP preparation, since this microbe contains an effective antioxidant, astaxanthin, in high concentration, which promotes the formation of metal nanoparticles.31 AgNPs and AuNPs were successfully synthesized and the biological activities of the as-prepared nanoparticles were subjected to complex analysis. AgNPs were effective against all the examined mycosis-causing fungal species, except for C. tropicalis. Most importantly, these biosynthesized silver particles were capable of inhibiting the growth of several opportunistic Candida or Cryptococcus species and were highly potent against filamentous Microsporum and Trichophyton dermatophytes. In addition, AgNPs were biocompatible, showing no cytotoxicity to human HaCaT keratinocytes, which renders these biosynthesized AgNPs attractive candidates for further biomedical and pharmacological applications. It is also noteworthy that among the tested species only Cr. neoformans was susceptible to both AgNPs and AuNPs. We believe this is a relevant finding, since mainly the antibacterial features of AuNPs have been demonstrated so far,32–34 while the scientific literature is short on investigations into the potential antifungal features of AuNPs.35,36 Importantly, apart from their capacity to inhibit Cr. neoformans, gold nanoparticles exerted no toxicity on human keratinocytes. These results emphasize the potential of such biosynthesized nanoparticles in topical dermatologic therapeutic strategies. Since cutaneous mycosis has the highest incidence among fungal infections,1,2,37 cost-effective, novel treatment modalities should be introduced into the related clinical practice. The symptoms can range from subclinical (eg, itching, rash) to serious inflammatory reactions (eg, blistering, scarring, alopecia), requiring distinct treatment regimens. Furthermore, the infective species can induce severe complications in immunocompromised individuals, including organ transplant, hematology and oncology patients, where the infected area can manifest ulceration or even tissue necrosis.3 The severity of these symptoms should be attenuated by complementary management using topical treatment approaches. We believe that biosynthesized metal nanoparticles could be implemented to improve the clinical outcome of cutaneous mycosis. Conclusion: We found that the cell-free extract of P. 
rhodozyma is a suitable material for the synthesis of biogenic AgNPs and AuNPs. We demonstrated that these nanoparticles are potent against cutaneous mycosis-causing dermatophytes and opportunistic yeast species and are non-toxic to human keratinocytes. Considering our data in the context of the current clinical results, the biocompatibility to skin cells and the outstanding antifungal performance of our biosynthesized nanoparticles might be exploited during topical treatment of cutaneous mycosis or as a prophylactic treatment against the recurrence of the infection.
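As a consistency check on the XRD indexing reported in the article text above, the position of the silver (111) reflection can be reproduced from Bragg's law. The numbers below assume Cu Kα radiation (λ ≈ 1.5406 Å) and the literature lattice constant of face-centered cubic silver (a ≈ 4.086 Å); only the use of Cu Kα and the observed 2θ positions come from the study itself.

```latex
% Cubic lattice spacing and Bragg condition, worked for the Ag (111) reflection
\[
d_{111} = \frac{a}{\sqrt{h^{2}+k^{2}+l^{2}}} = \frac{4.086\ \text{\AA}}{\sqrt{3}} \approx 2.36\ \text{\AA},
\qquad
2\theta_{111} = 2\arcsin\!\left(\frac{\lambda}{2\,d_{111}}\right)
              = 2\arcsin\!\left(\frac{1.5406}{2\times 2.36}\right) \approx 38.1^{\circ}
\]
```

This agrees well with the observed peak at 2θ ≈ 38.4°. The face-centered cubic selection rule (h, k, l all even or all odd) is also why the fourth reflection of both metals is indexed as (311); a mixed index such as (310) is not an allowed fcc reflection.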
Background: Epidemiologic observations indicate that the number of systemic fungal infections has increased significantly during the past decades, however in human mycosis, mainly cutaneous infections predominate, generating major public health concerns and providing much of the impetus for current attempts to develop novel and efficient agents against cutaneous mycosis causing species. Innovative, environmentally benign and economic nanotechnology-based approaches have recently emerged utilizing principally biological sources to produce nano-sized structures with unique antimicrobial properties. In line with this, our aim was to generate silver nanoparticles (AgNPs) and gold nanoparticles (AuNPs) by biological synthesis and to study the effect of the obtained nanoparticles on cutaneous mycosis causing fungi and on human keratinocytes. Methods: Cell-free extract of the red yeast Phaffia rhodozyma proved to be suitable for nanoparticle preparation and the generated AgNPs and AuNPs were characterized by transmission electron microscopy, dynamic light scattering and X-ray powder diffraction. Results: Antifungal studies demonstrated that the biosynthesized silver particles were able to inhibit the growth of several opportunistic Candida or Cryptococcus species and were highly potent against filamentous Microsporum and Trichophyton dermatophytes. Among the tested species only Cryptococcus neoformans was susceptible to both AgNPs and AuNPs. Neither AgNPs nor AuNPs exerted toxicity on human keratinocytes. Conclusions: Our results emphasize the therapeutic potential of such biosynthesized nanoparticles, since their biocompatibility to skin cells and their outstanding antifungal performance can be exploited for topical treatment and prophylaxis of superficial cutaneous mycosis.
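The statement above that neither nanoparticle was toxic to keratinocytes rests on MTT absorbance values expressed as a percentage of the untreated control, which the methods define as 100%. A minimal sketch of that normalization follows; the A570 well readings are hypothetical placeholders, not the measured data, and blank (medium-only) background subtraction, which normally precedes this step, is omitted for brevity.

```python
# Hypothetical MTT normalization: viability (%) relative to untreated control wells.
from statistics import mean

untreated_a570 = [0.82, 0.79, 0.85, 0.81]   # untreated control wells (placeholder values)
treated_a570 = {                            # nanoparticle-treated wells by dose (µg/mL)
    10: [0.80, 0.83, 0.78, 0.81],
    30: [0.79, 0.82, 0.80, 0.77],
    60: [0.81, 0.78, 0.80, 0.82],
}

control_mean = mean(untreated_a570)         # defined as 100% viability
for dose, wells in treated_a570.items():
    viability = 100.0 * mean(wells) / control_mean
    print(f"{dose} µg/mL: {viability:.1f}% of untreated control")
```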
Introduction: Due to social and global environmental changes, the aging population and the growing number of susceptible individuals (eg, patients with predisposing factors), the incidence of superficial mycosis has increased over the past few years causing health problems world-wide.1,2 Dermatomycosis usually manifests as itchy, sore skin rashes and tinea on the toes, the inner thighs or groin; leading to flaking and blistering of the affected area. Fungi may spread to the scalp or nails causing hair loss and thickened or deformed fingernails. The etiological agents behind dermatomycosis are mostly filamentous dermatophytes (eg, Microsporum spp., Trichophyton spp. or Epidermophyton spp.), however, several opportunistic yeast species such as Candida or Cryptococcus can invade keratinized tissues.1,3 In case of cutaneous candidiasis, the origin is mostly endogenous as the causative agents (ie, Candida spp.) are the resident or transient members of the human dermal microflora. Cryptococcus neoformans can induce meningitis in immunocompromised patients, however, due to systemic dissemination of yeast cells in such patients, skin lesions may also appear.4 In addition to this secondary variant, a primary form of cutaneous cryptococcosis has recently been identified, where the pathogen enters the body directly through skin injury.5 Regardless of the distinct origins of Cr. neoformans infections, this pathogen can cause life-threatening systemic complications in the host.4,5 Therefore, there is an inevitable and urgent medical need to find novel and efficient agents to defeat opportunistic pathogens and cutaneous mycosis-causing species. To achieve this goal, an innovative, environmentally benign and economic approach is required. A solution that fulfills these demands could be based on nanotechnology, since owing to the abundance of recent scientific advancements in this field, it is now possible to design and develop nano-sized structures with unique properties tailored for specific applications (eg, in optics, in electronics, in catalysis, and in household items, as well as in medicine).6 In recent years, gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs),7 in particular, have been in the focus of increasing interest due to their simple synthesis and desirable biological activities (ie, antitumor, antibacterial, antiviral and antifungal effects).8,9 We have reported that the cytotoxic features of AgNPs can be exploited to kill p53 tumor suppressor-deficient10 as well as multidrug-resistant cancer cells,11 providing some further details to the mechanism of AgNP-induced antitumor actions. 
In recent years, several studies have also demonstrated that AgNPs, synthesized either by conventional chemical reduction methods, physical techniques or by different biological entities, exhibit potent inhibitory effects against Gram positive and Gram negative bacterial species and induce cytotoxicity in various pathogenic fungi.8,12 Green alternatives to harsh reducing chemicals and environmentally benign capping agents have also been developed to reduce the costs of nanoparticle production and to minimize the generation of hazardous wastes upon the synthetic procedures.13 In a comprehensive study on AgNPs obtained by chemical reduction using coffee and green tea extracts, we proved the remarkable antimicrobial efficiency of such particles,14 and we showed evidence that the green material used for AgNP synthesis can largely define the physical, chemical and biological characteristics of the produced nanomaterial.14 Owing to widespread applications, there is an ever-growing demand for AgNPs and AuNPs, while the challenges of their economical, environmentally safe and sustainable production should also be addressed. A rapidly progressing area of nanobiotechnology is the microbe-assisted nanoparticle synthesis,15 where bacterial strains, yeast- and alga-mediated reactions are exploited for nanoparticle generation.16,17 One such study demonstrated that the astaxanthin-containing green alga Chlorella vulgaris is a useful agent for gold nanoparticle synthesis.18 Based on this observation we hypothesized that Phaffia rhodozyma (perfect state Xanthophyllomyces dendrorhous), a basidiomycetous red yeast with high astaxanthin content,19 might also be a potential candidate for microbe-assisted nanoparticle synthesis. Therefore, the aim of this present study was to investigate the suitability of P. rhodozyma cell-free extract for the preparation of AgNPs and AuNPs. Since our synthesis method proved to be successful, the as-prepared nanoparticles were characterized, and a complex biological screening was carried out to determine their biological activity, where beside toxicity to human skin-derived cells, the antifungal efficiency of the biosynthesized nanoparticles was also assessed with a special emphasis on inhibitory features against opportunistic pathogenic yeasts and dermatophytes. Conclusion: We found that the cell-free extract of P. rhodozyma is a suitable material for the synthesis of biogenic AgNPs and AuNPs. We demonstrated that these nanoparticles are potent against cutaneous mycosis-causing dermatophytes and opportunistic yeast species and are non-toxic to human keratinocytes. Considering our data in the context of the current clinical results, the biocompatibility to skin cells and the outstanding antifungal performance of our biosynthesized nanoparticles might be exploited during topical treatment of cutaneous mycosis or as a prophylactic treatment against the recurrence of the infection.
Background: Epidemiologic observations indicate that the number of systemic fungal infections has increased significantly during the past decades; in human mycosis, however, cutaneous infections predominate, generating major public health concerns and providing much of the impetus for current attempts to develop novel and efficient agents against cutaneous mycosis-causing species. Innovative, environmentally benign and economic nanotechnology-based approaches have recently emerged, utilizing principally biological sources to produce nano-sized structures with unique antimicrobial properties. In line with this, our aim was to generate silver nanoparticles (AgNPs) and gold nanoparticles (AuNPs) by biological synthesis and to study the effect of the obtained nanoparticles on cutaneous mycosis-causing fungi and on human keratinocytes. Methods: Cell-free extract of the red yeast Phaffia rhodozyma proved to be suitable for nanoparticle preparation, and the generated AgNPs and AuNPs were characterized by transmission electron microscopy, dynamic light scattering and X-ray powder diffraction. Results: Antifungal studies demonstrated that the biosynthesized silver particles were able to inhibit the growth of several opportunistic Candida or Cryptococcus species and were highly potent against filamentous Microsporum and Trichophyton dermatophytes. Among the tested species, only Cryptococcus neoformans was susceptible to both AgNPs and AuNPs. Neither AgNPs nor AuNPs exerted toxicity on human keratinocytes. Conclusions: Our results emphasize the therapeutic potential of such biosynthesized nanoparticles, since their biocompatibility to skin cells and their outstanding antifungal performance can be exploited for topical treatment and prophylaxis of superficial cutaneous mycosis.
9,147
272
[ 177, 119, 197, 78, 274, 209, 443, 342, 172, 391, 128, 97 ]
16
[ "cells", "ml", "agnps", "aunps", "10", "30", "µg", "µg ml", "nanoparticles", "agnps aunps" ]
[ "agents dermatomycosis filamentous", "dermatomycosis filamentous", "etiological agents dermatomycosis", "mycosis causing fungal", "mycosis causing dermatophyte" ]
null
[CONTENT] antifungal activity | biological synthesis | dermatophytes | opportunistic pathogenic yeasts | silver nanoparticles | toxicity [SUMMARY]
null
[CONTENT] antifungal activity | biological synthesis | dermatophytes | opportunistic pathogenic yeasts | silver nanoparticles | toxicity [SUMMARY]
[CONTENT] antifungal activity | biological synthesis | dermatophytes | opportunistic pathogenic yeasts | silver nanoparticles | toxicity [SUMMARY]
[CONTENT] antifungal activity | biological synthesis | dermatophytes | opportunistic pathogenic yeasts | silver nanoparticles | toxicity [SUMMARY]
[CONTENT] antifungal activity | biological synthesis | dermatophytes | opportunistic pathogenic yeasts | silver nanoparticles | toxicity [SUMMARY]
[CONTENT] Antifungal Agents | Basidiomycota | Candida | Cell Line | Cell-Free System | Dermatomycoses | Drug Evaluation, Preclinical | Dynamic Light Scattering | Gold | Humans | Keratinocytes | Metal Nanoparticles | Microscopy, Electron, Transmission | Silver | Trichophyton [SUMMARY]
null
[CONTENT] Antifungal Agents | Basidiomycota | Candida | Cell Line | Cell-Free System | Dermatomycoses | Drug Evaluation, Preclinical | Dynamic Light Scattering | Gold | Humans | Keratinocytes | Metal Nanoparticles | Microscopy, Electron, Transmission | Silver | Trichophyton [SUMMARY]
[CONTENT] Antifungal Agents | Basidiomycota | Candida | Cell Line | Cell-Free System | Dermatomycoses | Drug Evaluation, Preclinical | Dynamic Light Scattering | Gold | Humans | Keratinocytes | Metal Nanoparticles | Microscopy, Electron, Transmission | Silver | Trichophyton [SUMMARY]
[CONTENT] Antifungal Agents | Basidiomycota | Candida | Cell Line | Cell-Free System | Dermatomycoses | Drug Evaluation, Preclinical | Dynamic Light Scattering | Gold | Humans | Keratinocytes | Metal Nanoparticles | Microscopy, Electron, Transmission | Silver | Trichophyton [SUMMARY]
[CONTENT] Antifungal Agents | Basidiomycota | Candida | Cell Line | Cell-Free System | Dermatomycoses | Drug Evaluation, Preclinical | Dynamic Light Scattering | Gold | Humans | Keratinocytes | Metal Nanoparticles | Microscopy, Electron, Transmission | Silver | Trichophyton [SUMMARY]
[CONTENT] agents dermatomycosis filamentous | dermatomycosis filamentous | etiological agents dermatomycosis | mycosis causing fungal | mycosis causing dermatophyte [SUMMARY]
null
[CONTENT] agents dermatomycosis filamentous | dermatomycosis filamentous | etiological agents dermatomycosis | mycosis causing fungal | mycosis causing dermatophyte [SUMMARY]
[CONTENT] agents dermatomycosis filamentous | dermatomycosis filamentous | etiological agents dermatomycosis | mycosis causing fungal | mycosis causing dermatophyte [SUMMARY]
[CONTENT] agents dermatomycosis filamentous | dermatomycosis filamentous | etiological agents dermatomycosis | mycosis causing fungal | mycosis causing dermatophyte [SUMMARY]
[CONTENT] agents dermatomycosis filamentous | dermatomycosis filamentous | etiological agents dermatomycosis | mycosis causing fungal | mycosis causing dermatophyte [SUMMARY]
[CONTENT] cells | ml | agnps | aunps | 10 | 30 | µg | µg ml | nanoparticles | agnps aunps [SUMMARY]
null
[CONTENT] cells | ml | agnps | aunps | 10 | 30 | µg | µg ml | nanoparticles | agnps aunps [SUMMARY]
[CONTENT] cells | ml | agnps | aunps | 10 | 30 | µg | µg ml | nanoparticles | agnps aunps [SUMMARY]
[CONTENT] cells | ml | agnps | aunps | 10 | 30 | µg | µg ml | nanoparticles | agnps aunps [SUMMARY]
[CONTENT] cells | ml | agnps | aunps | 10 | 30 | µg | µg ml | nanoparticles | agnps aunps [SUMMARY]
[CONTENT] spp | green | synthesis | agents | biological | years | recent | nanoparticle | chemical | eg [SUMMARY]
null
[CONTENT] figure | agnps | aunps | cells | µg ml | µg | agnps aunps | ml | neoformans | nm [SUMMARY]
[CONTENT] treatment | cutaneous | cutaneous mycosis | mycosis | toxic human keratinocytes considering | agnps aunps demonstrated nanoparticles | found cell | found cell free | found cell free extract | recurrence [SUMMARY]
[CONTENT] cells | ml | agnps | aunps | figure | µg ml | µg | 30 | 10 | plates [SUMMARY]
[CONTENT] cells | ml | agnps | aunps | figure | µg ml | µg | 30 | 10 | plates [SUMMARY]
[CONTENT] the past decades ||| ||| AgNPs [SUMMARY]
null
[CONTENT] Antifungal | Candida | Cryptococcus | Microsporum | Trichophyton ||| Cryptococcus | AgNPs | AuNPs ||| AgNPs | AuNPs [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] the past decades ||| ||| AgNPs ||| Phaffia | AgNPs | AuNPs ||| ||| Antifungal | Candida | Cryptococcus | Microsporum | Trichophyton ||| Cryptococcus | AgNPs | AuNPs ||| AgNPs | AuNPs ||| [SUMMARY]
[CONTENT] the past decades ||| ||| AgNPs ||| Phaffia | AgNPs | AuNPs ||| ||| Antifungal | Candida | Cryptococcus | Microsporum | Trichophyton ||| Cryptococcus | AgNPs | AuNPs ||| AgNPs | AuNPs ||| [SUMMARY]
Interleukin-8 is increased in chronic kidney disease in children, but not related to cardiovascular disease.
33711092
In this study, we aimed to identify the cytokine that is involved in the early stage of chronic kidney disease and is associated with cardiovascular disease.
INTRODUCTION
We included 50 patients diagnosed with predialytic chronic kidney disease and 30 healthy children at the Ege University Medical Faculty Pediatric Clinic, İzmir, Turkey. Interleukin-8 (IL-8), interleukin-10 (IL-10), interleukin-13 (IL-13), and transforming growth factor-β1 (TGF-β1) levels (pg/mL) were measured by ELISA. Carotid-femoral pulse wave velocity (PWV), augmentation index (Aix), carotid intima media thickness (cIMT), and left ventricular mass index (LVMI) were evaluated as markers of cardiovascular disease. The presence of a cardiovascular disease marker was defined as an abnormality in any of these parameters (cIMT, PWV, Aix, or LVMI). The patients were divided into two groups: with and without cardiovascular disease.
METHODS
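The methods describe the composite cardiovascular endpoint only in words: a cardiovascular disease marker is considered present when any one of cIMT, PWV, Aix, or LVMI is abnormal. The minimal sketch below restates that rule; the data-class fields and the reference-limit values a caller would pass in are hypothetical placeholders, since the actual age- and sex-specific norms come from the reference publications cited in the full methods.

from dataclasses import dataclass

@dataclass
class CardiovascularMarkers:
    """Per-patient marker values; units follow the manuscript (mm, m/s, %, g/m^2.7)."""
    cimt_mm: float
    pwv_m_s: float
    aix_percent: float
    lvmi_g_m27: float

@dataclass
class ReferenceLimits:
    """Hypothetical upper normal limits, e.g. age- and sex-specific percentile cut-offs."""
    cimt_mm: float
    pwv_m_s: float
    aix_percent: float
    lvmi_g_m27: float = 51.0  # LVH cut-off of 51 g/m^2.7 as stated in the methods

def has_cvd_marker(m: CardiovascularMarkers, ref: ReferenceLimits) -> bool:
    """A CVD marker is present if ANY parameter exceeds its reference limit."""
    return (
        m.cimt_mm > ref.cimt_mm
        or m.pwv_m_s > ref.pwv_m_s
        or m.aix_percent > ref.aix_percent
        or m.lvmi_g_m27 > ref.lvmi_g_m27
    )

Patients for whom has_cvd_marker returns True would correspond to the group with cardiovascular disease, and the remaining patients to the group without.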
Mean Aix and PWV values were higher in CKD patients than in controls (Aix: CKD 32.8±11.11% vs. healthy subjects 6.74±6.58%; PWV: CKD 7.31±4.34 m/s vs. healthy subjects 3.42±3.01 m/s; p=0.02 and p=0.03, respectively). Serum IL-8 levels in CKD patients were significantly higher than in healthy subjects (568.48±487.35 pg/mL vs. 33.67±47.47 pg/mL; p<0.001). There was no statistically significant difference in IL-8, IL-10, IL-13, or TGF-β1 between CKD patients with and without cardiovascular disease (p>0.05).
RESULTS
Among the cytokines examined (IL-8, IL-10, IL-13, and TGF-β1), IL-8 was the only one that was increased in pediatric patients with chronic kidney disease. However, we could not show that IL-8 is related to the presence of cardiovascular disease.
DISCUSSION
[ "Cardiovascular Diseases", "Carotid Intima-Media Thickness", "Child", "Humans", "Interleukin-8", "Pulse Wave Analysis", "Renal Insufficiency, Chronic" ]
8428641
Introduction
Compared to healthy children, children with chronic kidney disease (CKD) have a shorter life expectancy. Although survival has improved with renal transplantation, cardiovascular disease (CVD) is the most common cause of death in patients with CKD1. The incidence of cardiovascular events in children with CKD increases with age and is reported to be 23.9% in the 10-14 age range and 36.9% in children aged 15-19 years2. There are traditional and uremia-related risk factors for the development of CVD in CKD. Uremia-related risk factors include dysregulation of Ca-P-PTH metabolism and fetuin-A; treatment-related factors include high-dose vitamin D and high-dose phosphate binders; and disease-related factors include hypertension. Traditional risk factors are hypertension, obesity, malnutrition, insulin resistance, and dyslipidemia. Cardiovascular abnormalities begin to develop in early-stage CKD. In contrast to adults, CVD in children is usually asymptomatic. Left ventricular hypertrophy, increased carotid artery intima media thickness, and vascular calcification are the most common early cardiovascular abnormalities in children with CKD. In children, CKD-associated vascular disease begins with early vascular changes caused by endothelial damage. Subsequently, vascular calcification develops due to the inflammatory response and to adhesion molecules, growth factors, and cytokines released from endothelial cells. Inflammation in early-stage CKD is the main cause of CVD due to endothelial damage3. However, the pathogenic mechanism of microinflammation is still unknown. Knowing which cytokines are involved in the pathogenesis of inflammation may allow the development of preventive therapy. Some inflammatory mediators have been shown to be responsible for chronic inflammation in CKD4. It is not known which cytokines play a role in the development of CVD in CKD. We examined whether there was a relationship between CVD and the potent proinflammatory and chemotactic cytokines interleukin-8 (IL-8), interleukin-10 (IL-10), interleukin-13 (IL-13), and transforming growth factor-β1 (TGF-β1) in pediatric patients with CKD.
null
null
Results
Fifty patients with predialytic CKD and 30 healthy children were enrolled in this study. Twenty-seven (54%) of the patients were male and 23 (46%) were female. Seventeen (57%) of the healthy children were female and 13 (43%) were male. The mean age of the patients with CKD was 12.59±4.53 years and of healthy subjects, 13.21±6.02 years. The cause of CKD was reflux/urinary tract infection (n=25), obstructive uropathy (n=3), polycystic kidney disease (n=3), hereditary nephritis (n=3), aplasia/hypoplasia (n=1), metabolic disease (n=2), primary glomerulonephritis (n=1), and unknown (n=12). Of the 50 patients, 6 (12%) were at stage 2, 20 (40%) at stage 3, and 24 (48%) at stage 4 CKD. The mean BMI was 19.42±5.12 kg/m2 in patients and 20.27±3.12 kg/m2 in healthy subjects. Patients with CKD had a lower eGFR than healthy subjects: 32.52±21.42 vs. 121.51±21.42 mL/min/1.73m2, respectively (p=0.001). The mean duration of CKD was 3.64±5.23 years. The SBP and DBP (mean ± SD) of the patients were 119.40±11.03 and 72.40±16.59 mmHg, respectively. All healthy subjects and patients were normotensive. Total cholesterol and triglycerides were significantly higher in the patient group, while HDL was significantly lower (p=0.09, p=0.08, and p=0.09, respectively). There was no difference in demographic characteristics between the two groups (p>0.05) (Table 1). Mean±SD; CKD: chronic kidney disease, BMI: body mass index, SBP: systolic blood pressure, DBP: diastolic blood pressure, CRP: C-reactive protein, ESR: erythrocyte sedimentation rate, HDL: high density lipoprotein, LDL: low density lipoprotein, eGFR: estimated glomerular filtration rate. There was no statistically significant difference between the two groups in hemoglobin, serum Ca, P, and CaxP levels (p>0.05). PTH was found to be higher in CKD patients than in healthy subjects (p=0.001) (Table 2). Mean±SD; CKD: chronic kidney disease, Ca: calcium, P: phosphorus, PTH: parathormone. Mean Aix and PWV values were higher in CKD patients than in healthy subjects (Aix: CKD 32.81±11.11% vs. healthy subjects 6.74±6.58%; PWV: CKD 7.31±4.34 m/s vs. healthy subjects 3.42±3.01 m/s; p=0.02 and p=0.03). Serum IL-8 levels in CKD patients were significantly higher than in healthy subjects (568.48±487.35 pg/mL vs. 33.67±47.47 pg/mL, p<0.001). IL-10, IL-13, and TGF-β1 levels were not different between CKD patients and healthy subjects (p>0.05) (Table 3). Mean±SD; CKD: chronic kidney disease, SVKI: left ventricular mass index, cIMT: carotid intima media thickness, Aix: augmentation index, PWV: pulse wave velocity, TGF-β1: transforming growth factor-β1, IL-8: interleukin-8, IL-10: interleukin-10, IL-13: interleukin-13. Values are expressed as mean ± SD. Abnormalities of CVD markers (an anomaly in any of cIMT, PWV, Aix, and SVKI) were detected in 25 of the 50 CKD patients. There was no statistically significant difference in IL-8, IL-10, IL-13, or TGF-β1 levels between patients with and without CVD (Table 4). Values are expressed as mean ± SD. Group 1: CKD patients without cardiovascular disease. Group 2: CKD patients with cardiovascular disease. TGF-β1: transforming growth factor-β1, IL-8: interleukin-8, IL-10: interleukin-10, IL-13: interleukin-13. There was no correlation of IL-8 or the inflammation markers (ESR, CRP) with the biochemical parameters or eGFR (p>0.005).
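The group comparisons reported above (for example, IL-8 in CKD patients versus healthy subjects, or cytokine levels in patients with versus without CVD) are summarized as mean ± SD with a p-value. The statistical-analysis subsection names chi-square and Pearson correlation tests but does not state which test was used to compare group means, so the sketch below assumes an independent-samples (Welch) t-test purely for illustration; the function name and its arguments are hypothetical.

# Illustrative comparison of one variable between two independent groups.
from scipy import stats
import numpy as np

def compare_groups(values_a, values_b, alpha: float = 0.05):
    """Return mean±SD for each group and the two-sided p-value of a Welch t-test."""
    a = np.asarray(values_a, dtype=float)
    b = np.asarray(values_b, dtype=float)
    t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    return {
        "group_a_mean_sd": (a.mean(), a.std(ddof=1)),
        "group_b_mean_sd": (b.mean(), b.std(ddof=1)),
        "p_value": float(p_value),
        "significant": bool(p_value < alpha),
    }

Passing the measured IL-8 values of the two groups to compare_groups would reproduce the style of summary shown in the tables, under the stated assumption about the test.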
Conclusion
We found that serum IL-8 levels in CKD patients were significantly higher than in healthy subjects, but there was no significant difference in IL-8 levels between patients with and without CVD. Elevated IL-8 levels may therefore not be considered a marker of cardiovascular disease, but probably indicate disease-related inflammation. IL-8 may be a cytokine that increases in CKD, but it cannot serve as a marker of CVD.
[ "Study population, clinical and biochemical characteristics of the\npatients", "Evaluation of cardiovascular parameters", "Measurement of cytokines", "Statistical analyses" ]
[ "This study was carried out between March 2016 and March 2018 in Ege University\nFaculty of Medicine, Pediatric Clinics, İzmir, Turkey. This single center\nclinical study enrolled 50 children with CKD and 30 healthy volunteers. Verbal\nand written consent was obtained from the families and patients. The study was\napproved by the Clinical Research Ethics Committee of Ege University Faculty of\nMedicine (16-11/29). Patients with active systemic vasculitis, active infection,\nrenal vascular anomalies, cardiovascular anomalies, and family history of early\ncardiovascular disease were excluded. None of the patients had active\ninflammatory conditions or taking immunosuppressive drugs. CKD was defined as\npersistence of kidney dysfunction for more than 3 months. Children between the\nages of 5 and 18 years with estimated glomerular filtration rate (eGFR) between\n15 and 60 mL/min/1.73m2 and had not started renal replacement therapy\nwere included in the study5.\nMoreover, we examined 30 healthy volunteers, which were between the same age\nrange, and who went to the clinic for their routine health visits. All\nmeasurements and analyses were done to both healthy subjects and pediatric\npatients with CKD.\neGFR was calculated using the Schwartz formula6. Blood plasma was obtained after 12 hours of fasting and stored at\n-80°C for biochemical tests. Serum urea, creatinine, uric acid, calcium,\nphosphorus, C-reactive protein (CRP), erythrocyte sedimentation rate (ESR),\ncystatin C (CysC), glucose (mmol/L), fasting insulin, lipid parameters high\ndensity lipoprotein (HDL), low density lipoprotein (LDL), total cholesterol,\ntriglyceride, parathormone (PTH), and homocysteine were detected with an\nautomatic biochemical analyzer (Hitachi 7600; Tokyo, Japan). Age, sex, etiology,\nweight, height, body mass index (BMI), and systolic and diastolic blood pressure\nwere measured.", "Arterial stiffness evaluation, carotid-femoral PWV (cfPWV), and augmentation\nindex (Aix) measurements were carried out with a Vicorder (Skidmore Medical\nLimited, Bristol, UK) device. The peripheral and central arterial pulse wave\nforms from radial and carotid arteries were recorded with a Vicorder device. For\nthe measurement of arterial stiffness, the patients fasted for 12 hours, rested\ncomfortably in the supine position for 30 minutes, and withhold all\nantihypertensive drugs prior to PWV and AIx measurement. Mean values of the\ncompound radial waveforms were calculated using the computer program prepared\nsolely for this study. The Aix was calculated as the difference between first\nand second systolic peaks of the central aortic waveforms, and expressed as the\npercentage of pulsation length. Echocardiography evaluations were performed by\nthe same pediatric cardiologist using two-dimensional M-mode echocardiography\nwith a 3.5-MHz transducer (HP SONOS 1000 System, Philips, Best, The\nNetherlands). Measurements consisted of interventricular septal thickness,\nposterior wall thickness, left ventricular (LV) diameter at end diastole, and LV\ndiameter at end systole. The LV mass was calculated using the formula validated\nby Devereux and Reichek7. Carotid artery\nultrasonography was performed to measure carotid intima-media thickness (cIMT)\naccording to a previously described method by an experienced pediatric\ncardiologist8. Measurements were done\n3 times and mean values were recorded.", "IL-8, IL-10, IL-13, and TGF-β1 levels were assayed with high-sensitive ELISA\nmethod. 
Patients with CKD were divided into two groups according to the presence\nof CVD.\nPatients without CVD were classified as group 1, and those with CVD, as group 2,\nbased on reference values of aortic pulse wave velocity set by Aix Reusz et al.\nfor children9. The cIMT norms set by Doyon\nA et al. were used 10. PWV normal values\nfor children according to age and sex determined by Thurn D et al were used11. LVH is defined as LV mass >51\ng/m2.7 or LV mass >115 g per body surface area (BSA) for boys\nand LV mass >95 g/BSA for girls12. The\npresence of CVD was defined as anomaly in any of cIMT, PWV, Aix, and left\nventricular mass index (SVKI); 25 patients had CVD and 25 patients did not.", "The SPSS software (Statistical Package for the Social Sciences, version 25.0, IBM\nCorp, New York, NY, USA) was used for analyses. Numeric variables' suitability\nto the normal distribution was analyzed with Shapiro-Wilk (n<50) and\nKolmogorv-Smirnov (n>=50) tests. Variables are presented as mean ± standard\nerror. Categorical variables are presented as numbers and percentages.\nChi-square test and Pearson correlation test were used. P value less than 0.05\nwas accepted as significance level for all hypotheses." ]
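The methods texts above cite the Schwartz eGFR estimate, the Devereux-Reichek left-ventricular mass calculation, and the LVH cut-offs (>51 g/m2.7, or >115 g/BSA for boys and >95 g/BSA for girls) without writing the formulas out. The sketch below restates them for reference; the bedside Schwartz constant k = 0.413 and the ASE-corrected form of the Devereux formula are assumptions, because the manuscript does not specify which variants were applied.

def egfr_schwartz(height_cm: float, serum_creatinine_mg_dl: float, k: float = 0.413) -> float:
    """Estimated GFR (mL/min/1.73 m^2); k = 0.413 assumes the bedside Schwartz formula."""
    return k * height_cm / serum_creatinine_mg_dl

def lv_mass_devereux(ivsd_cm: float, lvidd_cm: float, pwd_cm: float) -> float:
    """Left ventricular mass (g) from M-mode end-diastolic dimensions (cm).

    Uses the ASE-cube / Devereux-corrected form
    LVM = 0.8 * 1.04 * [(IVSd + LVIDd + PWd)^3 - LVIDd^3] + 0.6;
    whether the original Penn-convention constants were used instead is not
    stated in the manuscript.
    """
    return 0.8 * 1.04 * ((ivsd_cm + lvidd_cm + pwd_cm) ** 3 - lvidd_cm ** 3) + 0.6

def has_lvh(lv_mass_g: float, height_m: float, bsa_m2: float, is_boy: bool) -> bool:
    """LVH if LV mass indexed to height^2.7 exceeds 51 g/m^2.7, or if LV mass per
    body surface area exceeds the sex-specific cut-off (115 g for boys, 95 g for girls)."""
    lvmi_height = lv_mass_g / (height_m ** 2.7)
    lvmi_bsa = lv_mass_g / bsa_m2
    return lvmi_height > 51.0 or lvmi_bsa > (115.0 if is_boy else 95.0)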
[ null, null, null, null ]
[ "Introduction", "Material and Method", "Study population, clinical and biochemical characteristics of the\npatients", "Evaluation of cardiovascular parameters", "Measurement of cytokines", "Statistical analyses", "Results", "Discussion", "Conclusion" ]
[ "Compared to healthy children, children with chronic kidney disease (CKD) have shorter\nlife expectancy. Although survival has improved with renal transplantation,\ncardiovascular disease (CVD) is the most common cause of death in patients with\nCKD1. The incidence of cardiovascular\nevents in children with CKD increases with age and is reported to be 23.9% in the\n10-14 age range and 36.9% in children aged 15-19 years2. There are traditional and uremia-related risk factors for the\ndevelopment of CVD in CKD. Uremia-related risk factors are dysregulation of the\nCa-P-PTH and fetuin-A, treatment-related factors are high dose vitamin D and high\ndose phosphate binders, and disease-related factors (hypertension). Traditional risk\nfactors are hypertension, obesity, malnutrition, insulin resistance, and\ndyslipidemia.\nCardiovascular abnormalities begin to develop in early stage CKD. CVD in children is\nusually asymptomatic in contrast to adults. Left ventricular hypertrophy, increased\ncarotid artery intima media thickness, and vascular calcification are the most\ncommon early cardiovascular abnormalities in children with CKD. CKD begins with\nearly vascular changes in children with endothelial damage. Subsequently, vascular\ncalcification develops due to inflammatory response, adhesion molecules, growth\nfactors, and cytokines from endothelial cells. Inflammation in early stage CKD is\nthe main cause of CVD due to endothelial damage3. However, the pathogenic mechanism of microinflammation is still\nunknown. Knowing which cytokines are involved in the pathogenesis of inflammation\nmay allow the development of preventive therapy. Some inflammatory mediators have\nbeen shown to be responsible for chronic inflammation in CKD4. It is not known which cytokines play a role in the\ndevelopment of CVD in the CKD. We examined whether there was a relationship between\nCVD and potent proinflammatory and chemotactic cytokines, which are interleukin-8\n(IL-8), interleukin-10 (IL-10), interleukin-13 (IL-13), and transforming growth\nfactor-β1 (TGF-β1) in pediatric patients with CKD.", "Study population, clinical and biochemical characteristics of the\npatients This study was carried out between March 2016 and March 2018 in Ege University\nFaculty of Medicine, Pediatric Clinics, İzmir, Turkey. This single center\nclinical study enrolled 50 children with CKD and 30 healthy volunteers. Verbal\nand written consent was obtained from the families and patients. The study was\napproved by the Clinical Research Ethics Committee of Ege University Faculty of\nMedicine (16-11/29). Patients with active systemic vasculitis, active infection,\nrenal vascular anomalies, cardiovascular anomalies, and family history of early\ncardiovascular disease were excluded. None of the patients had active\ninflammatory conditions or taking immunosuppressive drugs. CKD was defined as\npersistence of kidney dysfunction for more than 3 months. Children between the\nages of 5 and 18 years with estimated glomerular filtration rate (eGFR) between\n15 and 60 mL/min/1.73m2 and had not started renal replacement therapy\nwere included in the study5.\nMoreover, we examined 30 healthy volunteers, which were between the same age\nrange, and who went to the clinic for their routine health visits. All\nmeasurements and analyses were done to both healthy subjects and pediatric\npatients with CKD.\neGFR was calculated using the Schwartz formula6. Blood plasma was obtained after 12 hours of fasting and stored at\n-80°C for biochemical tests. 
Serum urea, creatinine, uric acid, calcium,\nphosphorus, C-reactive protein (CRP), erythrocyte sedimentation rate (ESR),\ncystatin C (CysC), glucose (mmol/L), fasting insulin, lipid parameters high\ndensity lipoprotein (HDL), low density lipoprotein (LDL), total cholesterol,\ntriglyceride, parathormone (PTH), and homocysteine were detected with an\nautomatic biochemical analyzer (Hitachi 7600; Tokyo, Japan). Age, sex, etiology,\nweight, height, body mass index (BMI), and systolic and diastolic blood pressure\nwere measured.\nThis study was carried out between March 2016 and March 2018 in Ege University\nFaculty of Medicine, Pediatric Clinics, İzmir, Turkey. This single center\nclinical study enrolled 50 children with CKD and 30 healthy volunteers. Verbal\nand written consent was obtained from the families and patients. The study was\napproved by the Clinical Research Ethics Committee of Ege University Faculty of\nMedicine (16-11/29). Patients with active systemic vasculitis, active infection,\nrenal vascular anomalies, cardiovascular anomalies, and family history of early\ncardiovascular disease were excluded. None of the patients had active\ninflammatory conditions or taking immunosuppressive drugs. CKD was defined as\npersistence of kidney dysfunction for more than 3 months. Children between the\nages of 5 and 18 years with estimated glomerular filtration rate (eGFR) between\n15 and 60 mL/min/1.73m2 and had not started renal replacement therapy\nwere included in the study5.\nMoreover, we examined 30 healthy volunteers, which were between the same age\nrange, and who went to the clinic for their routine health visits. All\nmeasurements and analyses were done to both healthy subjects and pediatric\npatients with CKD.\neGFR was calculated using the Schwartz formula6. Blood plasma was obtained after 12 hours of fasting and stored at\n-80°C for biochemical tests. Serum urea, creatinine, uric acid, calcium,\nphosphorus, C-reactive protein (CRP), erythrocyte sedimentation rate (ESR),\ncystatin C (CysC), glucose (mmol/L), fasting insulin, lipid parameters high\ndensity lipoprotein (HDL), low density lipoprotein (LDL), total cholesterol,\ntriglyceride, parathormone (PTH), and homocysteine were detected with an\nautomatic biochemical analyzer (Hitachi 7600; Tokyo, Japan). Age, sex, etiology,\nweight, height, body mass index (BMI), and systolic and diastolic blood pressure\nwere measured.\nEvaluation of cardiovascular parameters Arterial stiffness evaluation, carotid-femoral PWV (cfPWV), and augmentation\nindex (Aix) measurements were carried out with a Vicorder (Skidmore Medical\nLimited, Bristol, UK) device. The peripheral and central arterial pulse wave\nforms from radial and carotid arteries were recorded with a Vicorder device. For\nthe measurement of arterial stiffness, the patients fasted for 12 hours, rested\ncomfortably in the supine position for 30 minutes, and withhold all\nantihypertensive drugs prior to PWV and AIx measurement. Mean values of the\ncompound radial waveforms were calculated using the computer program prepared\nsolely for this study. The Aix was calculated as the difference between first\nand second systolic peaks of the central aortic waveforms, and expressed as the\npercentage of pulsation length. Echocardiography evaluations were performed by\nthe same pediatric cardiologist using two-dimensional M-mode echocardiography\nwith a 3.5-MHz transducer (HP SONOS 1000 System, Philips, Best, The\nNetherlands). 
Measurements consisted of interventricular septal thickness,\nposterior wall thickness, left ventricular (LV) diameter at end diastole, and LV\ndiameter at end systole. The LV mass was calculated using the formula validated\nby Devereux and Reichek7. Carotid artery\nultrasonography was performed to measure carotid intima-media thickness (cIMT)\naccording to a previously described method by an experienced pediatric\ncardiologist8. Measurements were done\n3 times and mean values were recorded.\nArterial stiffness evaluation, carotid-femoral PWV (cfPWV), and augmentation\nindex (Aix) measurements were carried out with a Vicorder (Skidmore Medical\nLimited, Bristol, UK) device. The peripheral and central arterial pulse wave\nforms from radial and carotid arteries were recorded with a Vicorder device. For\nthe measurement of arterial stiffness, the patients fasted for 12 hours, rested\ncomfortably in the supine position for 30 minutes, and withhold all\nantihypertensive drugs prior to PWV and AIx measurement. Mean values of the\ncompound radial waveforms were calculated using the computer program prepared\nsolely for this study. The Aix was calculated as the difference between first\nand second systolic peaks of the central aortic waveforms, and expressed as the\npercentage of pulsation length. Echocardiography evaluations were performed by\nthe same pediatric cardiologist using two-dimensional M-mode echocardiography\nwith a 3.5-MHz transducer (HP SONOS 1000 System, Philips, Best, The\nNetherlands). Measurements consisted of interventricular septal thickness,\nposterior wall thickness, left ventricular (LV) diameter at end diastole, and LV\ndiameter at end systole. The LV mass was calculated using the formula validated\nby Devereux and Reichek7. Carotid artery\nultrasonography was performed to measure carotid intima-media thickness (cIMT)\naccording to a previously described method by an experienced pediatric\ncardiologist8. Measurements were done\n3 times and mean values were recorded.\nMeasurement of cytokines IL-8, IL-10, IL-13, and TGF-β1 levels were assayed with high-sensitive ELISA\nmethod. Patients with CKD were divided into two groups according to the presence\nof CVD.\nPatients without CVD were classified as group 1, and those with CVD, as group 2,\nbased on reference values of aortic pulse wave velocity set by Aix Reusz et al.\nfor children9. The cIMT norms set by Doyon\nA et al. were used 10. PWV normal values\nfor children according to age and sex determined by Thurn D et al were used11. LVH is defined as LV mass >51\ng/m2.7 or LV mass >115 g per body surface area (BSA) for boys\nand LV mass >95 g/BSA for girls12. The\npresence of CVD was defined as anomaly in any of cIMT, PWV, Aix, and left\nventricular mass index (SVKI); 25 patients had CVD and 25 patients did not.\nIL-8, IL-10, IL-13, and TGF-β1 levels were assayed with high-sensitive ELISA\nmethod. Patients with CKD were divided into two groups according to the presence\nof CVD.\nPatients without CVD were classified as group 1, and those with CVD, as group 2,\nbased on reference values of aortic pulse wave velocity set by Aix Reusz et al.\nfor children9. The cIMT norms set by Doyon\nA et al. were used 10. PWV normal values\nfor children according to age and sex determined by Thurn D et al were used11. LVH is defined as LV mass >51\ng/m2.7 or LV mass >115 g per body surface area (BSA) for boys\nand LV mass >95 g/BSA for girls12. 
The\npresence of CVD was defined as anomaly in any of cIMT, PWV, Aix, and left\nventricular mass index (SVKI); 25 patients had CVD and 25 patients did not.\nStatistical analyses The SPSS software (Statistical Package for the Social Sciences, version 25.0, IBM\nCorp, New York, NY, USA) was used for analyses. Numeric variables' suitability\nto the normal distribution was analyzed with Shapiro-Wilk (n<50) and\nKolmogorv-Smirnov (n>=50) tests. Variables are presented as mean ± standard\nerror. Categorical variables are presented as numbers and percentages.\nChi-square test and Pearson correlation test were used. P value less than 0.05\nwas accepted as significance level for all hypotheses.\nThe SPSS software (Statistical Package for the Social Sciences, version 25.0, IBM\nCorp, New York, NY, USA) was used for analyses. Numeric variables' suitability\nto the normal distribution was analyzed with Shapiro-Wilk (n<50) and\nKolmogorv-Smirnov (n>=50) tests. Variables are presented as mean ± standard\nerror. Categorical variables are presented as numbers and percentages.\nChi-square test and Pearson correlation test were used. P value less than 0.05\nwas accepted as significance level for all hypotheses.", "This study was carried out between March 2016 and March 2018 in Ege University\nFaculty of Medicine, Pediatric Clinics, İzmir, Turkey. This single center\nclinical study enrolled 50 children with CKD and 30 healthy volunteers. Verbal\nand written consent was obtained from the families and patients. The study was\napproved by the Clinical Research Ethics Committee of Ege University Faculty of\nMedicine (16-11/29). Patients with active systemic vasculitis, active infection,\nrenal vascular anomalies, cardiovascular anomalies, and family history of early\ncardiovascular disease were excluded. None of the patients had active\ninflammatory conditions or taking immunosuppressive drugs. CKD was defined as\npersistence of kidney dysfunction for more than 3 months. Children between the\nages of 5 and 18 years with estimated glomerular filtration rate (eGFR) between\n15 and 60 mL/min/1.73m2 and had not started renal replacement therapy\nwere included in the study5.\nMoreover, we examined 30 healthy volunteers, which were between the same age\nrange, and who went to the clinic for their routine health visits. All\nmeasurements and analyses were done to both healthy subjects and pediatric\npatients with CKD.\neGFR was calculated using the Schwartz formula6. Blood plasma was obtained after 12 hours of fasting and stored at\n-80°C for biochemical tests. Serum urea, creatinine, uric acid, calcium,\nphosphorus, C-reactive protein (CRP), erythrocyte sedimentation rate (ESR),\ncystatin C (CysC), glucose (mmol/L), fasting insulin, lipid parameters high\ndensity lipoprotein (HDL), low density lipoprotein (LDL), total cholesterol,\ntriglyceride, parathormone (PTH), and homocysteine were detected with an\nautomatic biochemical analyzer (Hitachi 7600; Tokyo, Japan). Age, sex, etiology,\nweight, height, body mass index (BMI), and systolic and diastolic blood pressure\nwere measured.", "Arterial stiffness evaluation, carotid-femoral PWV (cfPWV), and augmentation\nindex (Aix) measurements were carried out with a Vicorder (Skidmore Medical\nLimited, Bristol, UK) device. The peripheral and central arterial pulse wave\nforms from radial and carotid arteries were recorded with a Vicorder device. 
For\nthe measurement of arterial stiffness, the patients fasted for 12 hours, rested\ncomfortably in the supine position for 30 minutes, and withhold all\nantihypertensive drugs prior to PWV and AIx measurement. Mean values of the\ncompound radial waveforms were calculated using the computer program prepared\nsolely for this study. The Aix was calculated as the difference between first\nand second systolic peaks of the central aortic waveforms, and expressed as the\npercentage of pulsation length. Echocardiography evaluations were performed by\nthe same pediatric cardiologist using two-dimensional M-mode echocardiography\nwith a 3.5-MHz transducer (HP SONOS 1000 System, Philips, Best, The\nNetherlands). Measurements consisted of interventricular septal thickness,\nposterior wall thickness, left ventricular (LV) diameter at end diastole, and LV\ndiameter at end systole. The LV mass was calculated using the formula validated\nby Devereux and Reichek7. Carotid artery\nultrasonography was performed to measure carotid intima-media thickness (cIMT)\naccording to a previously described method by an experienced pediatric\ncardiologist8. Measurements were done\n3 times and mean values were recorded.", "IL-8, IL-10, IL-13, and TGF-β1 levels were assayed with high-sensitive ELISA\nmethod. Patients with CKD were divided into two groups according to the presence\nof CVD.\nPatients without CVD were classified as group 1, and those with CVD, as group 2,\nbased on reference values of aortic pulse wave velocity set by Aix Reusz et al.\nfor children9. The cIMT norms set by Doyon\nA et al. were used 10. PWV normal values\nfor children according to age and sex determined by Thurn D et al were used11. LVH is defined as LV mass >51\ng/m2.7 or LV mass >115 g per body surface area (BSA) for boys\nand LV mass >95 g/BSA for girls12. The\npresence of CVD was defined as anomaly in any of cIMT, PWV, Aix, and left\nventricular mass index (SVKI); 25 patients had CVD and 25 patients did not.", "The SPSS software (Statistical Package for the Social Sciences, version 25.0, IBM\nCorp, New York, NY, USA) was used for analyses. Numeric variables' suitability\nto the normal distribution was analyzed with Shapiro-Wilk (n<50) and\nKolmogorv-Smirnov (n>=50) tests. Variables are presented as mean ± standard\nerror. Categorical variables are presented as numbers and percentages.\nChi-square test and Pearson correlation test were used. P value less than 0.05\nwas accepted as significance level for all hypotheses.", "Fifty patients with predialytic CKD and 30 healthy children were enrolled in this\nstudy. Twenty-seven (54%) of the patients were male and 23 (46%) were female.\nSeventeen (57%) of the healthy children were female and 13 (43%) were male. The mean\nage of the patients with CKD was 12.59±4.53 years and of healthy subjects,\n13.21±6.02 years. The cause of CKD was reflux/urinary tract infection (n=25),\nobstructive uropathy (n=3), polycystic kidney disease (n=3), hereditary nephritis\n(n=3), aplasia/hypoplasia (n=1), metabolic disease (n=2), primary glomerulonephritis\n(n=1), and unknown (n=12). Of all individuals included in the study, 6 (12%) were at\nstage 2, 20 (40%) at stage 3, and 24 (48%) at stage 4 CKD. The mean BMI was\n19.42±5.12 kg/m2 in patients and 20.27±3.12 in healthy subjects. Patients\nwith CKD had lower eGFR than healthy subjects: 32.52±21.42 mL/min/1.73m2,\n121.51±21.42 mL/min/1.73m2, respectively, (p=0.001). The mean duration of\nCKD was 3.64±5.23 years. 
The SPB and DBP (mean ± SD) of the patients was\n119.40±11.03 and 72.40±16.59 mmHg, respectively. All healthy subjects and patients\nwere normotensive. Total cholesterol and triglycerides were significantly higher in\nthe patient group, while HDL was significantly lower (p=0.09, p=0.08, p=0.09)\nrespectively. There was no difference in demographic characteristics between the two\ngroups (p >0.05) (Table 1).\nMean±SD, CKD: Chronic kidney disease, BMI: Body mass index, SBP: systolic\nblood pressure, DBP: diastolic blood pressure, CRP: C- reaktif protein,\nESR: Erythrocyte sedimentation rate, HDL: High Density Lipoprotein, LDL:\nLow Density Lipoprotein, eGFR: Estimated glomerular filtration rate.\nThere was no statistically significant difference between the two groups in\nhemoglobin, serum Ca, P, and CaxP levels (p> 0.05). PTH was found to be higher in\nCKD patients than in healthy subjects (p=0.001) (Table 2).\nMean±SD, CKD: Chronic kidney disease, Ca: calcium, P: phosphorus, PTH:\nparathormone\nMean Aix and PWV values were higher in CKD patients than in healthy subjects (Aix CKD\n32.81±11.11%, healthy subjects 6.74±6.58%, PWV CKD 7.31±4.34m/s, healthy subjects\n3.42±3.01m/s) (p=0.02, p=0.03). The serum IL-8 levels of CKD were significantly\nhigher than of healthy subjects (568.48±487.35 pg/mL vs. 33.67± 47.47 pg/mL, p\n<0.001). IL-10, IL-13, and TGF-β1 levels were not different between CKD and\nhealthy subjects (p> 0.05) (Table 3).\nMean±SD, CKD: Chronic kidney disease, SVKI: left ventricular mass index,\ncIMT: carotid intima media thickness, Aix: augmentation index, PWV:\npulse wave velocity, TGF-β1: transforming grow factor -β1, IL-8:\ninterleukin-8, IL-10: interleukin-10, IL-13: interleukin-13. Values are\nexpressed by mean ± SD.\nAbnormalities of CVD markers (anomaly in any of cIMT, PWV, Aix and SVKI) were\ndetected in 25 of the 50 CKD patients. There was no statistically significant\ndifference between IL-10, IL-13, IL-8, and TGF-β1 levels in patients with and\nwithout CVD (Table 4).\nValues are expressed by mean ± SD. Group 1: CKD patients without\ncardiovascular disease. Group 2: CKD patients with cardiovascular\ndisease. TGF-β1: transforming grow factor -β1, IL-8: interleukin-8,\nIL-10: interleukin-10, IL-13: interleukin-13.\nThere was no correlation among IL-8 and inflammation markers (ESR, CRP) with\nbiochemical parameters and eGFR (p>0.005).", "Although life expectancy decreases in patients with CKD, CVD remains the leading\ncause of death. CVD begins in the early stage of CKD due to endothelial damage13\n,\n14. In the early stages of CKD, traditional\nrisk factors for CVD development are influenced by inflammation and uremic toxin,\nwhile dialysis and drugs are effective in the late stages15. Knowledge of the cytokine involved in chronic inflammation\nin the early stage of CVD formation is important for treatment.\nWe examined the release of IL-8, IL-10, IL-13, and TGF-β1, potent proinflammatory and\nchemotactic cytokines, and their prognostic significance in predicting CVD in\nchildren with CKD. The reason we chose these specific cytokines in this study is\nthat they are considered to be effective in inflammation underlying atherosclerosis.\nPrevious studies showed that cytokine IL-13 may prevent progression of\natherosclerosis16. It is thought to\nsuppress inflammation through the production of anti-inflammatory mediators such as\nIL -10 and TGF-β114. IL-8 was first\ncharacterized in 1987. 
Since then, knowledge regarding its function in leucocyte\ntrafficking and activation has advanced rapidly, especially regarding its role in\natherosclerosis. Several studies have identified IL-8 in sites of vascular injury,\nwhereas others have demonstrated that IL-8 potentially plays a role in various\nstages of atherosclerosis17. None of these\ncytokines have been shown to be associated with reduced glomerular filtration\nrate18.\nPWV and Aix increase in end-stage renal disease (ESRD) in childhood. These anomalies\nhave been accepted as markers of the early, asymptomatic phase of the cardiovascular\nprocess19.\nIn our study, PWV and Aix anomalies were found in children with CKD. This study\ndemonstrated that CVD develops at an early stage because of the presence of these\nanomalies in the case of moderate renal failure. The increase in PWV and Aix in CKD\nchildren without cIMT and SVKI elevations suggests that there are functional changes\nbefore structural changes. The mechanism of formation of arterial stiffness in CKD\nis unclear. Arterial stiffness is associated with dyslipidemia or hypertension20, and is considered to be a sign of CVD\nonset.\nIL-8 is important in the regulation of the acute inflammatory response21. IL-8 level was higher in children with CKD,\nalthough there was no increase in inflammation markers (CRP, ESR). In our study,\nnone of the patients had any condition to cause acute inflammation. Despite this,\nIL-8 was elevated in the patient group. Therefore, we think that IL-8 is not only an\nindicator of acute inflammation but it also increases in chronic inflammation.\nIL-8 is one of the cytokines involved in the pathogenesis of CKD22. IL-8 has been shown to induce endothelial cells (ECs)\ndysfunction and proliferation of vascular smooth muscles cells (VSMCs) by\nstimulating the development of vascular calcification through other risk factors.\nIL-8 has not been shown to be associated with atherosclerosis and coronary heart\ndisease in adult patients, but has been reported to be associated with overall\nmortality23. Panichi et al. evaluated the\neffect of serum IL-8 on ESRD patients. IL-8 has been shown to be a strong indicator\nof CVD and overall mortality in ESRD patients24. The first stage of atherosclerosis is endothelial dysfunction.\nIndicators of endothelial dysfunction are PWV and Aix alteration. Although IL-8 is\nthought to be a cytokine that may be effective at the stage of endothelial\ndysfunction, we could not show it. IL-8 was high in the CKD group, and PWV and Aix,\nwhich are indicators of endothelial dysfunction, were impaired, but this was not\nassociated with IL-8.\nIL-8 can be a marker of inflammation in CKD, but we did not find any relationship\nbetween IL-8 levels and CVD risk factors. IL-8 is not an indicator of CVD or\nendothelial damage in CKD.\nBecause the onset of CVD in CKD patients at the early stage is characterized by\nendothelial dysfunction, in this study, we aimed to find the cytokine involved in\nendothelial dysfunction and to shed light on the treatment using the effective\ncytokine antagonists.\nIn our study, PWV, Aix, and IL-8 levels were significantly increased in CKD compared\nto the healthy subjects. It was shown that cardiovascular changes started at the\nearliest stages of CKD by means of endothelial dysfunction. 
Although IL-8 was\nthought to be the cytokine involved in inflammation in CKD, it was not associated\nwith endothelial dysfunctions in our study.\nLimitation of this study is that the role of these specific cytokines cannot be fully\nexplained. Cardiovascular drugs such as ACE / ARB and statins have been shown to\nhave a pleiotropic anti-inflammatory effect, such as inhibition of cytokine\nproduction in vitro, but these drugs were not evaluate and this is\na limitation of our study.", "We found that serum IL-8 levels of CKD were significantly higher than of healthy\nsubjects but there was no significant difference between IL-8 levels in patients\nwith and without CVD. Elevated levels of IL-8 may not be considered as a marker for\ncardiovascular disease, but probably indicate disease-related inflammation. IL-8 may\nbe a cytokine that can increase CKD, but it cannot be a marker of CVD." ]
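The discussion above treats PWV and Aix as early functional markers of cardiovascular involvement, and the methods define Aix as the difference between the first and second systolic peaks of the central aortic waveform expressed as a percentage of the "pulsation length". The sketch below shows one common reading of that definition, interpreting the denominator as pulse pressure; both the peak-detection step and that interpretation are assumptions, and in the study itself this calculation was handled by the Vicorder system and the study's own software.

import numpy as np
from scipy.signal import find_peaks

def augmentation_index(aortic_waveform: np.ndarray) -> float:
    """Augmentation index (%) from a single central aortic pressure pulse.

    Aix is taken here as (P2 - P1) / pulse pressure * 100, where P1 and P2 are
    the first and second systolic peaks and pulse pressure is the peak-to-trough
    amplitude of the beat.
    """
    peaks, _ = find_peaks(aortic_waveform)
    if len(peaks) < 2:
        raise ValueError("could not identify two systolic peaks in the waveform")
    p1 = aortic_waveform[peaks[0]]
    p2 = aortic_waveform[peaks[1]]
    pulse_pressure = aortic_waveform.max() - aortic_waveform.min()
    return float((p2 - p1) / pulse_pressure * 100.0)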
[ "intro", "materials|methods", null, null, null, null, "results", "discussion", "conclusions" ]
[ "Renal Insufficiency, Chronic", "Child", "Cardiovascular Diseases", "Interleukin-8", "Cytokines", "Insuficiência Renal Crônica", "Criança", "Doenças Cardiovasculares", "Interleucina-8", "Citocinas" ]
Introduction: Compared to healthy children, children with chronic kidney disease (CKD) have shorter life expectancy. Although survival has improved with renal transplantation, cardiovascular disease (CVD) is the most common cause of death in patients with CKD1. The incidence of cardiovascular events in children with CKD increases with age and is reported to be 23.9% in the 10-14 age range and 36.9% in children aged 15-19 years2. There are traditional and uremia-related risk factors for the development of CVD in CKD. Uremia-related risk factors are dysregulation of the Ca-P-PTH and fetuin-A, treatment-related factors are high dose vitamin D and high dose phosphate binders, and disease-related factors (hypertension). Traditional risk factors are hypertension, obesity, malnutrition, insulin resistance, and dyslipidemia. Cardiovascular abnormalities begin to develop in early stage CKD. CVD in children is usually asymptomatic in contrast to adults. Left ventricular hypertrophy, increased carotid artery intima media thickness, and vascular calcification are the most common early cardiovascular abnormalities in children with CKD. CKD begins with early vascular changes in children with endothelial damage. Subsequently, vascular calcification develops due to inflammatory response, adhesion molecules, growth factors, and cytokines from endothelial cells. Inflammation in early stage CKD is the main cause of CVD due to endothelial damage3. However, the pathogenic mechanism of microinflammation is still unknown. Knowing which cytokines are involved in the pathogenesis of inflammation may allow the development of preventive therapy. Some inflammatory mediators have been shown to be responsible for chronic inflammation in CKD4. It is not known which cytokines play a role in the development of CVD in the CKD. We examined whether there was a relationship between CVD and potent proinflammatory and chemotactic cytokines, which are interleukin-8 (IL-8), interleukin-10 (IL-10), interleukin-13 (IL-13), and transforming growth factor-β1 (TGF-β1) in pediatric patients with CKD. Material and Method: Study population, clinical and biochemical characteristics of the patients This study was carried out between March 2016 and March 2018 in Ege University Faculty of Medicine, Pediatric Clinics, İzmir, Turkey. This single center clinical study enrolled 50 children with CKD and 30 healthy volunteers. Verbal and written consent was obtained from the families and patients. The study was approved by the Clinical Research Ethics Committee of Ege University Faculty of Medicine (16-11/29). Patients with active systemic vasculitis, active infection, renal vascular anomalies, cardiovascular anomalies, and family history of early cardiovascular disease were excluded. None of the patients had active inflammatory conditions or taking immunosuppressive drugs. CKD was defined as persistence of kidney dysfunction for more than 3 months. Children between the ages of 5 and 18 years with estimated glomerular filtration rate (eGFR) between 15 and 60 mL/min/1.73m2 and had not started renal replacement therapy were included in the study5. Moreover, we examined 30 healthy volunteers, which were between the same age range, and who went to the clinic for their routine health visits. All measurements and analyses were done to both healthy subjects and pediatric patients with CKD. eGFR was calculated using the Schwartz formula6. Blood plasma was obtained after 12 hours of fasting and stored at -80°C for biochemical tests. 
Serum urea, creatinine, uric acid, calcium, phosphorus, C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), cystatin C (CysC), glucose (mmol/L), fasting insulin, lipid parameters high density lipoprotein (HDL), low density lipoprotein (LDL), total cholesterol, triglyceride, parathormone (PTH), and homocysteine were detected with an automatic biochemical analyzer (Hitachi 7600; Tokyo, Japan). Age, sex, etiology, weight, height, body mass index (BMI), and systolic and diastolic blood pressure were measured. This study was carried out between March 2016 and March 2018 in Ege University Faculty of Medicine, Pediatric Clinics, İzmir, Turkey. This single center clinical study enrolled 50 children with CKD and 30 healthy volunteers. Verbal and written consent was obtained from the families and patients. The study was approved by the Clinical Research Ethics Committee of Ege University Faculty of Medicine (16-11/29). Patients with active systemic vasculitis, active infection, renal vascular anomalies, cardiovascular anomalies, and family history of early cardiovascular disease were excluded. None of the patients had active inflammatory conditions or taking immunosuppressive drugs. CKD was defined as persistence of kidney dysfunction for more than 3 months. Children between the ages of 5 and 18 years with estimated glomerular filtration rate (eGFR) between 15 and 60 mL/min/1.73m2 and had not started renal replacement therapy were included in the study5. Moreover, we examined 30 healthy volunteers, which were between the same age range, and who went to the clinic for their routine health visits. All measurements and analyses were done to both healthy subjects and pediatric patients with CKD. eGFR was calculated using the Schwartz formula6. Blood plasma was obtained after 12 hours of fasting and stored at -80°C for biochemical tests. Serum urea, creatinine, uric acid, calcium, phosphorus, C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), cystatin C (CysC), glucose (mmol/L), fasting insulin, lipid parameters high density lipoprotein (HDL), low density lipoprotein (LDL), total cholesterol, triglyceride, parathormone (PTH), and homocysteine were detected with an automatic biochemical analyzer (Hitachi 7600; Tokyo, Japan). Age, sex, etiology, weight, height, body mass index (BMI), and systolic and diastolic blood pressure were measured. Evaluation of cardiovascular parameters Arterial stiffness evaluation, carotid-femoral PWV (cfPWV), and augmentation index (Aix) measurements were carried out with a Vicorder (Skidmore Medical Limited, Bristol, UK) device. The peripheral and central arterial pulse wave forms from radial and carotid arteries were recorded with a Vicorder device. For the measurement of arterial stiffness, the patients fasted for 12 hours, rested comfortably in the supine position for 30 minutes, and withhold all antihypertensive drugs prior to PWV and AIx measurement. Mean values of the compound radial waveforms were calculated using the computer program prepared solely for this study. The Aix was calculated as the difference between first and second systolic peaks of the central aortic waveforms, and expressed as the percentage of pulsation length. Echocardiography evaluations were performed by the same pediatric cardiologist using two-dimensional M-mode echocardiography with a 3.5-MHz transducer (HP SONOS 1000 System, Philips, Best, The Netherlands). 
Echocardiography evaluations were performed by the same pediatric cardiologist using two-dimensional M-mode echocardiography with a 3.5-MHz transducer (HP SONOS 1000 System, Philips, Best, The Netherlands). Measurements consisted of interventricular septal thickness, posterior wall thickness, left ventricular (LV) diameter at end diastole, and LV diameter at end systole. The LV mass was calculated using the formula validated by Devereux and Reichek7. Carotid artery ultrasonography was performed by an experienced pediatric cardiologist to measure carotid intima-media thickness (cIMT) according to a previously described method8. Measurements were done 3 times and mean values were recorded. Measurement of cytokines: IL-8, IL-10, IL-13, and TGF-β1 levels were assayed with a high-sensitivity ELISA method. Patients with CKD were divided into two groups according to the presence of CVD. Patients without CVD were classified as group 1 and those with CVD as group 2, based on the reference values of aortic pulse wave velocity set by Reusz et al. for children9. The cIMT norms set by Doyon et al. were used10. PWV normal values for children according to age and sex, as determined by Thurn et al., were used11. LVH was defined as LV mass >51 g/m^2.7, or LV mass >115 g per body surface area (BSA) for boys and >95 g/BSA for girls12. The presence of CVD was defined as an abnormality in any of cIMT, PWV, Aix, or left ventricular mass index (SVKI); 25 patients had CVD and 25 did not.
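A sketch of how LV mass and the LVH criteria quoted above could be evaluated. The ASE-corrected cube formula shown here is one widely used form of the Devereux and Reichek equation; the article only cites the reference, so the exact variant, and the helper names, are assumptions.

```python
def lv_mass_devereux(ivsd_cm: float, lvidd_cm: float, pwd_cm: float) -> float:
    """LV mass (g) via the ASE-corrected Devereux cube formula (one common
    variant; the paper cites Devereux and Reichek without reproducing it)."""
    return 0.8 * (1.04 * ((ivsd_cm + lvidd_cm + pwd_cm) ** 3 - lvidd_cm ** 3)) + 0.6

def has_lvh(lv_mass_g: float, height_m: float, bsa_m2: float, male: bool) -> bool:
    """LVH criteria as stated in the text: LV mass index >51 g/m^2.7, or
    LV mass per BSA >115 g (boys) / >95 g (girls)."""
    lvmi_height = lv_mass_g / height_m ** 2.7
    lvmi_bsa = lv_mass_g / bsa_m2
    return lvmi_height > 51 or lvmi_bsa > (115 if male else 95)
```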
Statistical analyses: The SPSS software (Statistical Package for the Social Sciences, version 25.0, IBM Corp, New York, NY, USA) was used for the analyses. The suitability of numeric variables to the normal distribution was assessed with the Shapiro-Wilk (n < 50) and Kolmogorov-Smirnov (n ≥ 50) tests. Variables are presented as mean ± standard error. Categorical variables are presented as numbers and percentages. The chi-square test and Pearson correlation test were used. A p value less than 0.05 was accepted as the significance level for all hypotheses.
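A minimal sketch of the testing scheme described above using scipy; the contingency table and the paired variables below are placeholders, not the study data.

```python
import numpy as np
from scipy import stats

def normality_pvalue(values):
    """Shapiro-Wilk for n < 50, Kolmogorov-Smirnov against a fitted normal
    for n >= 50, mirroring the rule stated above."""
    values = np.asarray(values, dtype=float)
    if values.size < 50:
        return stats.shapiro(values)[1]
    z = (values - values.mean()) / values.std(ddof=1)
    return stats.kstest(z, "norm")[1]

# Chi-square test on a hypothetical 2x2 table (e.g., CVD presence by group)
chi2, p_chi2, dof, expected = stats.chi2_contingency([[12, 13], [8, 17]])

# Pearson correlation between two numeric variables (placeholder arrays)
r, p_r = stats.pearsonr([1.0, 2.0, 3.0, 4.0], [2.1, 1.9, 3.5, 3.9])
```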
Results: Fifty patients with predialytic CKD and 30 healthy children were enrolled in this study. Twenty-seven (54%) of the patients were male and 23 (46%) were female. Seventeen (57%) of the healthy children were female and 13 (43%) were male. The mean age of the patients with CKD was 12.59±4.53 years and that of the healthy subjects was 13.21±6.02 years. The causes of CKD were reflux/urinary tract infection (n=25), obstructive uropathy (n=3), polycystic kidney disease (n=3), hereditary nephritis (n=3), aplasia/hypoplasia (n=1), metabolic disease (n=2), primary glomerulonephritis (n=1), and unknown (n=12). Of the patients, 6 (12%) were at stage 2, 20 (40%) at stage 3, and 24 (48%) at stage 4 CKD. The mean BMI was 19.42±5.12 kg/m² in patients and 20.27±3.12 kg/m² in healthy subjects. Patients with CKD had a lower eGFR than healthy subjects (32.52±21.42 vs. 121.51±21.42 mL/min/1.73 m², respectively; p=0.001). The mean duration of CKD was 3.64±5.23 years.
The SBP and DBP (mean ± SD) of the patients were 119.40±11.03 and 72.40±16.59 mmHg, respectively. All healthy subjects and patients were normotensive. Total cholesterol and triglycerides were higher and HDL was lower in the patient group, but these differences did not reach statistical significance (p=0.09, p=0.08, and p=0.09, respectively). There was no difference in demographic characteristics between the two groups (p>0.05) (Table 1). Mean±SD, CKD: Chronic kidney disease, BMI: Body mass index, SBP: systolic blood pressure, DBP: diastolic blood pressure, CRP: C-reactive protein, ESR: Erythrocyte sedimentation rate, HDL: High Density Lipoprotein, LDL: Low Density Lipoprotein, eGFR: Estimated glomerular filtration rate. There was no statistically significant difference between the two groups in hemoglobin, serum Ca, P, and CaxP levels (p>0.05). PTH was found to be higher in CKD patients than in healthy subjects (p=0.001) (Table 2). Mean±SD, CKD: Chronic kidney disease, Ca: calcium, P: phosphorus, PTH: parathormone. Mean Aix and PWV values were higher in CKD patients than in healthy subjects (Aix: CKD 32.81±11.11% vs. healthy subjects 6.74±6.58%; PWV: CKD 7.31±4.34 m/s vs. healthy subjects 3.42±3.01 m/s; p=0.02 and p=0.03, respectively). The serum IL-8 levels of the CKD patients were significantly higher than those of the healthy subjects (568.48±487.35 pg/mL vs. 33.67±47.47 pg/mL, p<0.001). IL-10, IL-13, and TGF-β1 levels did not differ between the CKD patients and healthy subjects (p>0.05) (Table 3). Mean±SD, CKD: Chronic kidney disease, SVKI: left ventricular mass index, cIMT: carotid intima media thickness, Aix: augmentation index, PWV: pulse wave velocity, TGF-β1: transforming growth factor-β1, IL-8: interleukin-8, IL-10: interleukin-10, IL-13: interleukin-13. Values are expressed as mean ± SD. Abnormalities of CVD markers (an abnormality in any of cIMT, PWV, Aix, and SVKI) were detected in 25 of the 50 CKD patients. There was no statistically significant difference in IL-10, IL-13, IL-8, or TGF-β1 levels between patients with and without CVD (Table 4). Values are expressed as mean ± SD. Group 1: CKD patients without cardiovascular disease. Group 2: CKD patients with cardiovascular disease. TGF-β1: transforming growth factor-β1, IL-8: interleukin-8, IL-10: interleukin-10, IL-13: interleukin-13. There was no correlation of IL-8 or the inflammation markers (ESR, CRP) with the biochemical parameters or eGFR (p>0.05).
Since then, knowledge regarding its function in leucocyte trafficking and activation has advanced rapidly, especially regarding its role in atherosclerosis. Several studies have identified IL-8 in sites of vascular injury, whereas others have demonstrated that IL-8 potentially plays a role in various stages of atherosclerosis17. None of these cytokines has been shown to be associated with reduced glomerular filtration rate18. PWV and Aix increase in end-stage renal disease (ESRD) in childhood. These anomalies have been accepted as markers of the early, asymptomatic phase of the cardiovascular process19. In our study, PWV and Aix anomalies were found in children with CKD. The presence of these anomalies in patients with moderate renal failure indicates that CVD develops at an early stage. The increase in PWV and Aix in children with CKD without cIMT and SVKI elevations suggests that functional changes occur before structural changes. The mechanism of formation of arterial stiffness in CKD is unclear. Arterial stiffness is associated with dyslipidemia or hypertension20, and is considered to be a sign of CVD onset. IL-8 is important in the regulation of the acute inflammatory response21. The IL-8 level was higher in children with CKD, although there was no increase in inflammation markers (CRP, ESR). In our study, none of the patients had any condition causing acute inflammation. Despite this, IL-8 was elevated in the patient group. Therefore, we think that IL-8 is not only an indicator of acute inflammation but also increases in chronic inflammation. IL-8 is one of the cytokines involved in the pathogenesis of CKD22. IL-8 has been shown to induce endothelial cell (EC) dysfunction and proliferation of vascular smooth muscle cells (VSMCs), stimulating the development of vascular calcification together with other risk factors. IL-8 has not been shown to be associated with atherosclerosis and coronary heart disease in adult patients, but has been reported to be associated with overall mortality23. Panichi et al. evaluated the effect of serum IL-8 in ESRD patients, and IL-8 was shown to be a strong indicator of CVD and overall mortality in ESRD patients24. The first stage of atherosclerosis is endothelial dysfunction, and alterations in PWV and Aix are indicators of endothelial dysfunction. Although IL-8 is thought to be a cytokine that may act at the stage of endothelial dysfunction, we could not demonstrate this. IL-8 was high in the CKD group, and PWV and Aix, which are indicators of endothelial dysfunction, were impaired, but these impairments were not associated with IL-8. IL-8 can be a marker of inflammation in CKD, but we did not find any relationship between IL-8 levels and CVD risk factors. IL-8 is not an indicator of CVD or endothelial damage in CKD. Because the onset of CVD in CKD patients at the early stage is characterized by endothelial dysfunction, in this study we aimed to find the cytokine involved in endothelial dysfunction and to shed light on possible treatment with effective cytokine antagonists. In our study, PWV, Aix, and IL-8 levels were significantly increased in CKD compared to the healthy subjects, showing that cardiovascular changes start at the earliest stages of CKD through endothelial dysfunction. Although IL-8 was thought to be the cytokine involved in inflammation in CKD, it was not associated with endothelial dysfunction in our study. A limitation of this study is that the role of these specific cytokines cannot be fully explained.
Cardiovascular drugs such as ACE inhibitors/ARBs and statins have been shown to have pleiotropic anti-inflammatory effects, such as inhibition of cytokine production in vitro, but these drugs were not evaluated, and this is a limitation of our study. Conclusion: We found that the serum IL-8 levels of CKD patients were significantly higher than those of healthy subjects, but there was no significant difference in IL-8 levels between patients with and without CVD. Elevated levels of IL-8 may not be considered a marker for cardiovascular disease, but probably indicate disease-related inflammation. IL-8 may be a cytokine that increases in CKD, but it cannot be a marker of CVD.
Background: In this study, we aimed to detect the cytokine that is involved in the early stage of chronic kidney disease and associated with cardiovascular disease. Methods: We included 50 patients diagnosed with predialytic chronic kidney disease and 30 healthy children at Ege University Medical Faculty Pediatric Clinic, İzmir, Turkey. Interleukin-8 (IL-8), interleukin-10 (IL-10), interleukin-13 (IL-13), and transforming growth factor-β1 (TGF-β1) levels (pg/mL) were measured by ELISA. Carotid-femoral pulse wave velocity (PWV), augmentation index (Aix), carotid intima media thickness (cIMT), and left ventricular mass index (LVMI) were evaluated as markers of cardiovascular disease. The presence of a cardiovascular disease marker was defined as an abnormality in any of these parameters (cIMT, PWV, Aix, and left ventricular mass index (SVKI)). The patient group was divided into two groups: with and without cardiovascular disease. Results: Mean Aix and PWV values were higher in CKD patients than in controls (Aix: CKD 32.8±11.11% vs. healthy subjects 6.74±6.58%; PWV: CKD 7.31±4.34 m/s vs. healthy subjects 3.42±3.01 m/s; p=0.02 and p=0.03, respectively). The serum IL-8 levels of the CKD patients were significantly higher than those of the healthy subjects (568.48±487.35 pg/mL vs. 33.67±47.47 pg/mL, p<0.001). There was no statistically significant difference in IL-8, IL-10, IL-13, or TGF-β1 levels between CKD patients with and without cardiovascular disease (p>0.05). Conclusions: IL-8 is the sole cytokine that increases in pediatric patients with chronic kidney disease among the cytokines studied (IL-10, IL-13, and TGF-β1). However, we did not show that IL-8 is related to the presence of cardiovascular disease.
5,053
340
[ 373, 276, 189, 104 ]
9
[ "ckd", "il", "patients", "cvd", "study", "aix", "healthy", "children", "pwv", "mass" ]
[ "ckd patients early", "ckd increases age", "children ckd increase", "ckd patients cardiovascular", "cvd ckd uremia" ]
null
[CONTENT] Renal Insufficiency, Chronic | Child | Cardiovascular Diseases | Interleukin-8 | Cytokines | Insuficiência Renal Crônica | Criança | Doenças Cardiovasculares | Interleucina-8 | Citocinas [SUMMARY]
null
[CONTENT] Cardiovascular Diseases | Carotid Intima-Media Thickness | Child | Humans | Interleukin-8 | Pulse Wave Analysis | Renal Insufficiency, Chronic [SUMMARY]
null
[CONTENT] ckd patients early | ckd increases age | children ckd increase | ckd patients cardiovascular | cvd ckd uremia [SUMMARY]
null
[CONTENT] ckd | il | patients | cvd | study | aix | healthy | children | pwv | mass [SUMMARY]
null
[CONTENT] factors | ckd | children | related | cvd | cytokines | early | development | endothelial | risk factors [SUMMARY]
null
[CONTENT] ckd | il | mean | healthy | sd | mean sd | healthy subjects | subjects | patients | interleukin [SUMMARY]
[CONTENT] il | levels | marker | il levels | cvd | disease | disease probably indicate | elevated levels | levels il | levels il considered [SUMMARY]
[CONTENT] il | ckd | cvd | patients | lv | aix | healthy | mass | pwv | children [SUMMARY]
[CONTENT] [SUMMARY]
null
[CONTENT] Mean Aix | PWV | CKD | Aix | 32.8±11.11% | 6.74±6.58% | PWV ||| CKD | 33.67±47.47pg ||| IL-10 | TGF-1 | CKD | 0.05 [SUMMARY]
[CONTENT] IL-10 ||| [SUMMARY]
[CONTENT] ||| 50 | 30 | Ege University Medical Faculty Pediatric Clinic | İzmir/Turkey ||| interleukin-10 | IL-10 | interleukin-13 | ELISA ||| PWV | Aix ||| PWV | Aix ||| two ||| Mean Aix | PWV | CKD | Aix | 32.8±11.11% | 6.74±6.58% | PWV ||| CKD | 33.67±47.47pg ||| IL-10 | TGF-1 | CKD | 0.05 ||| IL-10 ||| [SUMMARY]
Panobinostat synergizes with bortezomib to induce endoplasmic reticulum stress and ubiquitinated protein accumulation in renal cancer cells.
25176354
Inducing endoplasmic reticulum (ER) stress is a novel strategy used to treat malignancies. Inhibition of histone deacetylase (HDAC) 6 by the HDAC inhibitor panobinostat hinders the refolding of unfolded proteins by increasing the acetylation of heat shock protein 90. We investigated whether combining panobinostat with the proteasome inhibitor bortezomib would kill cancer cells effectively by inhibiting the degradation of these unfolded proteins, thereby causing ubiquitinated proteins to accumulate and induce ER stress.
BACKGROUND
Caki-1, ACHN, and 769-P cells were treated with panobinostat and/or bortezomib. Cell viability, clonogenicity, and induction of apoptosis were evaluated. The in vivo efficacy of the combination was evaluated using a murine subcutaneous xenograft model. The combination-induced ER stress and ubiquitinated protein accumulation were assessed.
METHODS
The combination of panobinostat and bortezomib induced apoptosis and inhibited renal cancer growth synergistically (combination indexes <1). It also suppressed colony formation significantly (p <0.05). In a murine subcutaneous tumor model, a 10-day treatment was well tolerated and inhibited tumor growth significantly (p <0.05). Enhanced acetylation of the HDAC6 substrate alpha-tubulin was consistent with the suppression of HDAC6 activity by panobinostat, and the combination was shown to induce ER stress and ubiquitinated protein accumulation synergistically.
RESULTS
Panobinostat inhibits renal cancer growth by synergizing with bortezomib to induce ER stress and ubiquitinated protein accumulation. The current study provides a basis for testing the combination in patients with advanced renal cancer.
CONCLUSIONS
[ "Acetylation", "Animals", "Antineoplastic Agents", "Antineoplastic Combined Chemotherapy Protocols", "Apoptosis", "Boronic Acids", "Bortezomib", "Cell Line, Tumor", "Disease Models, Animal", "Drug Synergism", "Endoplasmic Reticulum Stress", "Histone Deacetylase Inhibitors", "Histones", "Humans", "Hydroxamic Acids", "Indoles", "Kidney Neoplasms", "Mice, Inbred BALB C", "Panobinostat", "Pyrazines", "Ubiquitinated Proteins" ]
4153447
Background
A new therapeutic approach to advanced renal cancer is urgently needed because there is presently no curative treatment, and one innovative treatment strategy used against cancer is to induce endoplasmic reticulum (ER) stress and ubiquitinated protein accumulation [1]. Protein unfolding rates that exceed the capacity of protein chaperones cause ER stress, and chronic or unresolved ER stress can lead to apoptosis [2]. On the other hand, unfolded proteins that fail to be repaired by chaperones are then ubiquitinated, and the accumulation of these ubiquitinated proteins is also cytotoxic [3]. Histone deacetylase (HDAC) 6 inhibition acetylates heat shock protein (HSP) 90 and suppresses its function as a molecular chaperone, increasing the amount of unfolded proteins in the cell [4]. Because these unfolded proteins are then ubiquitinated and degraded by the proteasome [5], HDAC6 inhibition alone is thought to cause no or only slight ER stress and ubiquitinated protein accumulation if the proteasome function is normal. We thought that combining an HDAC inhibitor with the proteasome inhibitor bortezomib would cause ER stress and ubiquitinated protein accumulation synergistically because the increased ubiquitinated proteins would not be degraded by the inhibited proteasome. Panobinostat is a novel HDAC inhibitor that has been clinically tested not only in patients with hematological malignancies [6,7] but also in patients with solid tumors, including renal cell carcinoma [8,9]. Bortezomib has been approved by the FDA and is widely used for the treatment of multiple myeloma [10]. In the present study using renal cancer cells, we investigated whether the combination of panobinostat and bortezomib induces ER stress and ubiquitinated protein accumulation, and kills cancer cells effectively in vitro and in vivo.
Methods
Cell lines: Renal cancer cell lines (Caki-1, ACHN, and 769-P) were purchased from the American Type Culture Collection (Rockville, MD). Caki-1 cells were maintained in MEM, ACHN cells in DMEM, and 769-P cells in RPMI medium, all supplemented with 10% fetal bovine serum and 0.3% penicillin-streptomycin (Invitrogen, Carlsbad, CA). Reagents: Panobinostat and bortezomib were obtained from Cayman Chemical (Ann Arbor, MI) and LC Laboratories (Woburn, MA), respectively, dissolved in dimethyl sulfoxide (DMSO), and stored at -20°C until use. Evaluating effect of the combination of panobinostat and bortezomib on cell viability and colony formation: For cell viability assay, 5 × 10³ cells were plated in a 96-well culture plate one day before treatment and treated with panobinostat (25–50 nM) and/or bortezomib (5–15 nM) for 48 hours. Cell viability was evaluated by MTS assay (Promega, Madison, WI) according to the manufacturer’s protocol. For colony formation assay, 1 × 10² cells were plated in 6-well plates one day before treatment and cultured for 48 hours in media containing 50 nM panobinostat and/or 10 nM bortezomib. They were then given fresh media and allowed to grow for 1–2 weeks, depending on the cell line. The number of colonies was then counted after fixing the cells with 100% methanol and staining them with Giemsa’s solution.
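A minimal sketch of how raw MTS absorbances are commonly converted to percent viability for such dose-response readouts; the background-subtraction step, replicate layout, and function names are illustrative assumptions, not details reported for this study.

```python
import numpy as np

def percent_viability(treated_od, vehicle_od, blank_od=0.0):
    """Viability (%) of treated wells relative to vehicle-treated controls,
    after subtracting a medium-only blank from both readings."""
    treated = np.asarray(treated_od, dtype=float) - blank_od
    vehicle = np.asarray(vehicle_od, dtype=float) - blank_od
    return 100.0 * treated.mean() / vehicle.mean()

# Hypothetical absorbance readings for one treatment condition (n = 6 wells)
print(round(percent_viability([0.62, 0.65, 0.60, 0.63, 0.61, 0.64],
                              [1.10, 1.05, 1.12, 1.08, 1.11, 1.06],
                              blank_od=0.08), 1))
```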
Evaluating effect of the combination of panobinostat and bortezomib on induction of apoptosis: 1.5 × 10⁵ cells were plated in a 6-well culture plate one day before being cultured for 48 hours in medium containing 50 nM panobinostat and/or 10 nM bortezomib. Induction of apoptosis was evaluated, using flow cytometry, by annexin-V assay and cell cycle analysis. For annexin-V assay the harvested cells were stained with annexin V according to the manufacturer’s protocol (Beckman Coulter, Marseille, France). For cell cycle analysis the harvested cells were resuspended in citrate buffer and stained with propidium iodide. They were then analyzed by flow cytometry using CellQuest Pro Software (BD Biosciences, San Jose, CA). Murine xenograft model: The animal protocol for this experiment has been approved by the institutional Animal Care and Use Committee of National Defense Medical College. 5-week-old male nude mice (strain BALB/c Slc-nu/nu) were purchased from CLEA (Tokyo, Japan). The animals were housed under pathogen-free conditions and had access to standard food and water ad libitum. 1 × 10⁷ Caki-1 cells were subcutaneously injected into the flank and treatments were initiated 4 days after the injection (day 1), when the tumors became palpable. The mice were divided into 4 groups of 5, the control group receiving intraperitoneal injections of DMSO and the other groups receiving either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. Tumor volume was estimated as one half of the product of the length and the square of the width (i.e., volume = 0.5 × length × width²).
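The tumor-volume estimate above maps directly to a one-line calculation; the caliper readings in the example are hypothetical.

```python
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Tumor volume = 0.5 x length x width^2, as defined in the text."""
    return 0.5 * length_mm * width_mm ** 2

print(tumor_volume_mm3(12.0, 8.0))  # 384.0 mm^3 for a hypothetical 12 x 8 mm tumor
```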
Western blotting: Cells were treated under the indicated conditions for 48 hours and whole-cell lysates were obtained using RIPA buffer. Equal amounts of protein were subjected to SDS-PAGE and transferred onto nitrocellulose membranes that were then probed with antibodies specific for glucose-regulated protein (GRP) 78, ubiquitin (Santa Cruz Biotechnology, Santa Cruz, CA), actin (Millipore, Billerica, MA), HSP70, endoplasmic reticulum resident protein (ERp) 44, endoplasmic oxidoreductin-1-like protein (Ero1-L)α, cleaved poly(ADP-ribose) polymerase (PARP) (Cell Signaling Technology, Danvers, MA), acetylated α-tubulin (Enzo Life Sciences, Farmingdale, NY), and acetylated histone (Abcam, Cambridge, UK). This probing was followed by treatment with horseradish-peroxidase-tagged secondary antibodies (Bio-Rad, Hercules, CA) and visualization by chemiluminescence (ECL, Amersham, Piscataway, NJ). Statistical analyses: The combination indexes were calculated using the Chou-Talalay method and CalcuSyn software (Biosoft, Cambridge, UK). The statistical significance of observed differences between samples was determined using the Mann-Whitney U test (StatView software, SAS Institute, Cary, NC). Differences were considered significant at p <0.05.
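A minimal sketch of the two analyses named above: a Chou-Talalay combination index computed from median-effect parameters, and a Mann-Whitney U comparison. The dose-effect parameters (Dm, m) and the sample arrays are placeholders, and CalcuSyn's exact fitting procedure is not reproduced here.

```python
from scipy.stats import mannwhitneyu

def dose_for_effect(fa: float, dm: float, m: float) -> float:
    """Median-effect equation solved for dose: Dx = Dm * (fa / (1 - fa))**(1/m)."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, dm1, m1, d2, dm2, m2):
    """Chou-Talalay CI at effect level fa for doses d1 and d2 used in combination;
    CI < 1 indicates synergy, CI = 1 additivity, CI > 1 antagonism."""
    return d1 / dose_for_effect(fa, dm1, m1) + d2 / dose_for_effect(fa, dm2, m2)

# Hypothetical parameters: 50 nM panobinostat + 10 nM bortezomib producing fa = 0.7
print(round(combination_index(0.7, 50, 60, 1.2, 10, 14, 1.5), 2))

# Mann-Whitney U test on two placeholder tumor-volume samples
u, p = mannwhitneyu([520, 480, 610, 450, 540], [266, 300, 240, 280, 310])
```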
Results
Combination of panobinostat and bortezomib inhibited renal cancer growth synergistically: We first investigated the combined effect of panobinostat and bortezomib on renal cancer cell viability by MTS assay. Panobinostat and bortezomib each inhibited the growth of renal cancer cells in a dose-dependent fashion, and the combination did so more effectively than either did by itself (Figure 1A). Analysis using the Chou-Talalay method indicated that the effect of the combination was synergistic (combination index <1) in many of the treatment conditions (Table 1). We then investigated whether the combination affects the clonogenic survival of renal cancer cells. Colony formation assay revealed that the combination suppressed colony formation significantly and did so significantly more than did either panobinostat or bortezomib alone (Figure 1B). Figure 1: The combination of panobinostat and bortezomib inhibited renal cancer growth effectively. A, MTS assay results (mean ± SD, n = 6) after cells were treated for 48 hours either with bortezomib or panobinostat alone or with bortezomib and panobinostat together. B, Colony formation assay results (mean ± SD, n = 3) after 1–2 week incubation in control media (C) or media containing 50 nM panobinostat (P) and/or 10 nM bortezomib (B). *p = 0.0495; **p = 0.0463. Table 1: Combination indexes for the combination of panobinostat and bortezomib in renal cancer cells. We also used a subcutaneous xenograft mouse model to test the efficacy of the combination therapy in vivo. A 10-day treatment with panobinostat and bortezomib was well tolerated and suppressed tumor growth significantly (Figure 2). The p values at day 12 were 0.0283 for the control group and combination group, 0.0283 for the bortezomib group and combination group, and 0.0472 for the panobinostat group and combination group. The average tumor size at day 15 was 520 ± 175 mm³ (mean ± SE) in the vehicle-treated mice and was 266 ± 39 mm³ in the combination-treated mice. Thus the combination of panobinostat and bortezomib was shown to be effective for suppressing renal cancer growth both in vitro and in vivo. Figure 2: The combination of panobinostat and bortezomib suppressed tumor growth in vivo. A murine subcutaneous tumor model was made using Caki-1 cells, and the control group received intraperitoneal injections of DMSO, while other groups received either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. The 10-day treatment was well tolerated and suppressed tumor growth significantly (mean ± SE; p = 0.0283 at day 12).
Combination of panobinostat and bortezomib induced apoptosis: The combination increased the annexin-V fluorescence intensity (up to 19.4-fold compared with control vehicle) (Figure 3A) and also increased the number of the cells in the sub-G1 fraction (up to 70.5%) (Figure 3B). Thus the combination of panobinostat and bortezomib was demonstrated to induce apoptosis in renal cancer cells. Figure 3: The combination of panobinostat and bortezomib induced apoptosis in renal cancer cells. Cells were treated for 48 hours with 50 nM panobinostat with or without 10 nM bortezomib. The combination increased the annexin-V-FITC fluorescence intensity (A) and increased the number of the cells in the sub-G1 fraction (B). Relative annexin-V-FITC fluorescence intensity (control = 1) is shown in the insets. White, control; red, treated. The percentage of cells in the sub-G1 fraction is shown in the graph. Representative results are shown.
Combination of panobinostat and bortezomib induced ER stress and ubiquitinated protein accumulation synergistically: The combination induced ER stress synergistically as indicated by the increased expression of ER stress markers such as GRP78, HSP70, ERp44, and (except in 769-P cells) Ero1-Lα (Figure 4A). As expected, the combination induced ubiquitinated protein accumulation synergistically (Figure 4B): in Caki-1 and 769-P cells, 10 nM bortezomib alone did not cause ubiquitinated proteins to accumulate but in combination with 50 nM panobinostat increased the accumulation of ubiquitinated proteins markedly. In ACHN cells, 10 nM bortezomib caused ubiquitinated protein accumulation and the accumulation was synergistically enhanced by 50 nM panobinostat. Acetylation of α-tubulin by panobinostat is consistent with HDAC6 inhibition because α-tubulin is one of the important substrates of HDAC6. Interestingly, the combination also enhanced the acetylation of histone and α-tubulin synergistically in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone. Figure 4: The combination therapy induced ER stress and histone acetylation in renal cancer cells. The 48-hour treatment with the combination of panobinostat and bortezomib induced ER stress synergistically as indicated by the increased expression of GRP78, HSP70, ERp44, and Ero1-Lα (A). It also caused ubiquitinated protein accumulation in all the cell lines synergistically and enhanced histone and also α-tubulin acetylation in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone (B). The dashed lines in the Caki-1 results in parts A and B indicate that the order of the bands has been rearranged from the original gel.
Histone acetylation was a consequence of ubiquitinated protein accumulation: We then investigated the relationship between histone acetylation and ubiquitinated protein accumulation. Panobinostat caused histone acetylation in a dose-dependent fashion in all the cell lines but did not induce ubiquitinated protein accumulation (Figure 5A). Bortezomib, on the other hand, caused both ubiquitinated protein accumulation and histone acetylation in a dose-dependent fashion in Caki-1 and ACHN cells but did not cause histone acetylation in 769-P cells (Figure 5B). This is in accordance with the result that the combination did not enhance histone acetylation in 769-P cells despite inducing ubiquitinated protein accumulation in them (Figure 4B). We inferred from these results that the histone acetylation the combination caused in Caki-1 and ACHN cells was a consequence of ubiquitinated protein accumulation. Figure 5: Histone acetylation was a consequence of ubiquitinated protein accumulation. A, 48-hour treatment with panobinostat caused dose-dependent histone acetylation in all the cell lines but did not cause ubiquitinated protein accumulation. B, 48-hour treatment with bortezomib, on the other hand, caused both histone acetylation and ubiquitinated protein accumulation in Caki-1 and ACHN cells and caused only ubiquitinated protein accumulation in 769-P cells.
Conclusions
Panobinostat inhibits renal cancer growth by synergizing with bortezomib to induce ER stress and ubiquitinated protein accumulation. Histone acetylation may be another important mechanism of action. This is the first study to demonstrate the combination’s effect on renal cancer cells both in vitro and in vivo, and it provides a basis for testing the combination in patients with advanced renal cancer.
[ "Background", "Cell lines", "Reagents", "Evaluating effect of the combination of panobinostat and bortezomib on cell viability and colony formation", "Evaluating effect of the combination of panobinostat and bortezomib on induction of apoptosis", "Murine xenograft model", "Western blotting", "Statistical analyses", "Combination of panobinostat and bortezomib inhibited renal cancer growth synergistically", "Combination of panobinostat and bortezomib induced apoptosis", "Combination of panobinostat and bortezomib induced ER stress and ubiquitinated protein accumulation synergistically", "Histone acetylation was a consequence of ubiquitinated protein accumulation", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "A new therapeutic approach to advanced renal cancer is urgently needed because there is presently no curative treatment, and one innovative treatment strategy used against cancer is to induce endoplasmic reticulum (ER) stress and ubiquitinated protein accumulation [1]. Protein unfolding rates that exceed the capacity of protein chaperones cause ER stress, and chronic or unresolved ER stress can lead to apoptosis [2]. On the other hand, unfolded proteins that fail to be repaired by chaperones are then ubiquitinated and the accumulation of these ubiquitinated proteins is also cytotoxic [3].\nHistone deacetylase (HDAC) 6 inhibition acetylates heat shock protein (HSP) 90 and suppresses its function as a molecular chaperon, increasing the amount of unfolded proteins in the cell [4]. Because these unfolded proteins are then ubiquitinated and degraded by the proteasome [5], HDAC6 inhibition alone is thought to cause no or only slight ER stress and ubiquitinated protein accumulation if the proteasome function is normal. We thought that combining an HDAC inhibitor with the proteasome inhibitor bortezomib would cause ER stress and ubiquitinated protein accumulation synergistically because the increased ubiquitinated proteins would not be degraded by the inhibited proteasome.\nPanobinostat is a novel HDAC inhibitor that has been clinically tested not only in patients with hematological malignancies [6,7] but also patients with solid tumors, including renal cell carcinoma [8,9]. Bortezomib has been approved by the FDA and widely used for the treatment of multiple myeloma [10].\nIn the present study using renal cancer cells, we investigated whether the combination of panobinostat and bortezomib induces ER stress and ubiquitinated protein accumulation, and kills cancer cells effectively in vitro and in vivo.", "Renal cancer cell lines (Caki-1, ACHN, and 769-P) were purchased from the American Type Culture Collection (Rockville, MD). Caki-1 cells were maintained in MEM, ACHN cells in DMEM, and 769-P cells in RPMI medium, all supplemented with 10% fetal bovine serum and 0.3% penicillin-streptomycin (Invitrogen, Carlsbad, CA).", "Panobinostat and bortezomib were obtained from Cayman Chemical (Ann Arbor, MI) and LC Laboratories (Woburn, MA), respectively, dissolved in dimethyl sulfoxide (DMSO), and stored at -20°C until use.", "For cell viability assay, 5 × 103 cells were plated in a 96-well culture plate one day before treatment and treated with panobinostat (25–50 nM) and/or bortezomib (5–15 nM) for 48 hours. Cell viability was evaluated by MTS assay (Promega, Madison, WI) according to the manufacturer’s protocol. For colony formation assay, 1 × 102 cells were plated in 6-well plates one day before treatment and cultured for 48 hours in media containing 50 nM panobinostat and/or 10 nM bortezomib. They were then given fresh media and allowed to grow for 1–2 weeks, depending on the cell line. The number of colonies was then counted after fixing the cells with 100% methanol and staining them with Giemsa’s solution.", "1.5 × 105 cells were plated in a 6-well culture plate one day before being cultured for 48 hours in medium containing 50 nM panobinostat and/or 10 nM bortezomib. Induction of apoptosis was evaluated, using flow cytometry, by annexin-V assay and cell cycle analysis. For annexin-V assay the harvested cells were stained with annexin V according to the manufacturer’s protocol (Beckman Coulter, Marseille, France). 
For cell cycle analysis the harvested cells were resuspended in citrate buffer and stained with propidium iodide. They were then analyzed by flow cytometry using CellQuest Pro Software (BD Biosciences, San Jose, CA).", "The animal protocol for this experiment has been approved by the institutional Animal Care and Use Committee of National Defense Medical College. 5-week-old male nude mice (strain BALB/c Slc-nu/nu) were purchased from CLEA (Tokyo, Japan). The animals were housed under pathogen-free conditions and had access to standard food and water ad libitum. 1 × 107 Caki-1 cells were subcutaneously injected into the flank and treatments were initiated 4 days after the injection (day 1), when the tumors became palpable. The mice were divided into 4 groups of 5, the control group receiving intraperitoneal injections of DMSO and the other groups receiving either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. Tumor volume was estimated as one half of the product of the length and the square of the width (i.e., volume = 0.5 × length × width2).", "Cells were treated under the indicated conditions for 48 hours and whole-cell lysates were obtained using RIPA buffer. Equal amounts of protein were subjected to SDS-PAGE and transferred onto nitrocellulose membranes that were then probed with antibodies specific for glucose-regulated protein (GRP) 78, ubiquitin (Santa Cruz Biotechnology, Santa Cruz, CA), actin (Millipore, Billerica, MA), HSP70, endoplasmic reticulum resident protein (ERp) 44, endoplasmic oxidoreductin-1-like protein (Ero1-L)α, cleaved poly(ADP-ribose) polymerase (PARP) (Cell Signaling Technology, Danvers, MA), acetylated α-tubulin (Enzo Life Sciences, Farmingdale, NY), and acetylated histone (Abcam, Cambridge, UK). This probing was followed by treatment with horseradish-peroxidase-tagged secondary antibodies (Bio-Rad, Hercules, CA) and visualization by chemiluminescence (ECL, Amersham, Piscataway, NJ).", "The combination indexes were calculated using the Chou-Talalay method and CalcuSyn software (Biosoft, Cambridge, UK). The statistical significance of observed differences between samples was determined using the Mann-Whitney U test (StatView software, SAS Institute, Cary, NC). Differences were considered significant at p <0.05.", "We first investigated the combined effect of panobinostat and bortezomib on renal cancer cell viability by MTS assay. Panobinostat and bortezomib each inhibited the growth of renal cancer cells in a dose-dependent fashion, and the combination did so more effectively than either did by itself (Figure 1A). Analysis using the Chou-Talalay method indicated that the effect of the combination was synergistic (combination index <1) in many of the treatment conditions (Table 1). We then investigated whether the combination affects the clonogenic survival of renal cancer cells. Colony formation assay revealed that the combination suppressed colony formation significantly and did so significantly more than did either panobinostat or bortezomib alone (Figure 1B).\nThe combination of panobinostat and bortezomib inhibited renal cancer growth effectively. A, MTS assay results (mean ± SD, n = 6) after cells were treated for 48 hours either with bortezomib or panobinostat alone or with bortezomib and panobinostat together. B, Colony formation assay results (mean ± SD, n = 3) after 1–2 week incubation in control media (C) or media containing 50 nM panobinostat (P) and/or 10 nM bortezomib (B). 
*p = 0.0495; **p = 0.0463.\nCombination indexes for the combination of panobinostat and bortezomib in renal cancer cells\nWe also used a subcutaneous xenograft mouse model to test the efficacy of the combination therapy in vivo. A 10-day treatment with panobinostat and bortezomib was well tolerated and suppressed tumor growth significantly (Figure 2). The p values at day 12 were 0.0283 for the control group and combination group, 0.0283 for the bortezomib group and combination group, and 0.0472 for the panobinostat group and combination group. The average tumor size at day 15 was 520 ± 175 mm3 (mean ± SE) in the vehicle-treated mice and was 266 ± 39 mm3 in the combination-treated mice. Thus the combination of panobinostat and bortezomib was shown to be effective for suppressing renal cancer growth both in vitro and in vivo.\nThe combination of panobinostat and bortezomib suppressed tumor growth in vivo. A murine subcutaneous tumor model was made using Caki-1 cells, and the control group received intraperitoneal injections of DMSO, while other groups received either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. The 10-day treatment was well tolerated and suppressed tumor growth significantly (mean ± SE; p = 0.0283 at day 12).", "The combination increased the annexin-V fluorescence intensity (up to 19.4-fold compared with control vehicle) (Figure 3A) and also increased the number of the cells in the sub-G1 fraction (up to 70.5%) (Figure 3B). Thus the combination of panobinostat and bortezomib was demonstrated to induce apoptosis in renal cancer cells.\nThe combination of panobinostat and bortezomib induced apoptosis in renal cancer cells. Cells were treated for 48 hours with 50 nM panobinostat with or without 10 nM bortezomib. The combination increased the annexin-V-FITC fluorescence intensity (A) and increased the number of the cells in the sub-G1 fraction (B). Relative annexin-V-FITC fluorescence intensity (control = 1) is shown in the insets. White, control; red, treated. The percentage of cells in the sub-G1 fraction is shown in the graph. Representative results are shown.", "The combination induced ER stress synergistically as indicated by the increased expression of ER stress markers such as GRP78, HSP70, ERp44, and (except in 769-P cells) Ero1-Lα (Figure 4A). As expected, the combination induced ubiquitinated protein accumulation synergistically (Figure 4B): in Caki-1 and 769-P cells, 10 nM bortezomib alone did not cause ubiquitinated proteins to accumulate but in combination with 50 nM panobinostat increased the accumulation of ubiquitinated proteins markedly. In ACHN cells, 10 nM bortezomib caused ubiquitinated protein accumulation and the accumulation was synergistically enhanced by 50 nM panobinostat. Acetylation of α-tubulin by panobinostat is consistent with HDAC6 inhibition because α-tubulin is one of the important substrates of HDAC6. Interestingly, the combination also enhanced the acetylation of histone and α-tubulin synergistically in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone.\nThe combination therapy induced ER stress and histone acetylation in renal cancer cells. The 48-hour treatment with the combination of panobinostat and bortezomib induced ER stress synergistically as indicated by the increased expression of GRP78, HSP70, ERp44, and Ero1-Lα (A). 
It also caused ubiquitinated protein accumulation in all the cell lines synergistically and enhanced histone and also α-tubulin acetylation in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone (B). The dashed lines in the Caki-1 results in parts A and B indicate that the order of the bands has been rearranged from the original gel.", "We then investigated the relationship between histone acetylation and ubiquitinated protein accumulation. Panobinostat caused histone acetylation in a dose-dependent fashion in all the cell lines but did not induce ubiquitinated protein accumulation (Figure 5A). Bortezomib, on the other hand, caused both ubiquitinated protein accumulation and histone acetylation in a dose-dependent fashion in Caki-1 and ACHN cells but did not cause histone acetylation in 769-P cells (Figure 5B). This is in accordance with the result that the combination did not enhance histone acetylation in 769-P cells despite inducing ubiquitinated protein accumulation in them (Figure 4B). We inferred from these results that the histone acetylation the combination caused in Caki-1 and ACHN cells was a consequence of ubiquitinated protein accumulation.\nHistone acetylation was a consequence of ubiquitinated protein accumulation. A, 48-hour treatment with panobinostat caused dose-dependent histone acetylation in all the cell lines but did not cause ubiquitinated protein accumulation. B, 48-hour treatment with bortezomib, on the other hand, caused both histone acetylation and ubiquitinated protein accumulation in Caki-1 and ACHN cells and caused only ubiquitinated protein accumulation in 769-P cells.", "The authors declare that they have no competing interests.", "AS contributed to design and interpretation of all experiments, drafting of the manuscript and execution of western blotting, colony formation assay and animal experiments. TA collected and assembled data and performed MTS assay, cell cycle analysis, annexin-V assay and animal experiments. MI participated in the study design, performed statistical analysis and helped to draft the manuscript. KI contributed to design and interpretation of all experiments and drafting of the manuscript. TA participated in the study design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2490/14/71/prepub\n" ]
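Note on the combination-index calculation: the statistical-analyses text above states that combination indexes were obtained with the Chou-Talalay method via CalcuSyn, with CI < 1 read as synergy. The following is only a minimal illustrative sketch of that median-effect arithmetic, not the authors' code; every dose-response number in it is invented, and in practice the fraction affected would come from the normalized MTS viabilities.

```python
import numpy as np

def median_effect_fit(doses, fraction_affected):
    """Fit the median-effect equation fa/fu = (D/Dm)^m by linear regression
    of log10(fa/fu) on log10(D); returns the slope m and median-effect dose Dm."""
    doses = np.asarray(doses, dtype=float)
    fa = np.asarray(fraction_affected, dtype=float)
    x, y = np.log10(doses), np.log10(fa / (1.0 - fa))
    m, intercept = np.polyfit(x, y, 1)
    dm = 10 ** (-intercept / m)          # dose producing a 50% effect
    return m, dm

def dose_for_effect(fa, m, dm):
    """Single-agent dose required to reach a given fraction affected."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

# Hypothetical single-agent data (dose in nM, fraction affected at 48 h).
m_p, dm_p = median_effect_fit([25, 37.5, 50], [0.20, 0.35, 0.55])   # panobinostat
m_b, dm_b = median_effect_fit([5, 10, 15],    [0.15, 0.30, 0.50])   # bortezomib

# One hypothetical combination point: 50 nM panobinostat + 10 nM bortezomib, fa = 0.80.
fa_combo = 0.80
ci = (50.0 / dose_for_effect(fa_combo, m_p, dm_p)
      + 10.0 / dose_for_effect(fa_combo, m_b, dm_b))
print(f"combination index = {ci:.2f} (CI < 1 suggests synergy)")
```

CalcuSyn automates the same fit over all tested dose pairs; here fa is simply 1 minus the viability fraction measured in the MTS assay.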
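Note on the xenograft read-out: the murine-model text above estimates tumor volume as one half of the length times the square of the width and compares groups with the Mann-Whitney U test at p < 0.05. A minimal sketch of that arithmetic, with made-up caliper readings and scipy standing in for the StatView software actually used:

```python
from scipy.stats import mannwhitneyu

def tumor_volume(length_mm, width_mm):
    # volume = 0.5 x length x width^2, as defined in the xenograft methods
    return 0.5 * length_mm * width_mm ** 2

# Hypothetical day-12 caliper measurements (length, width in mm), 5 mice per group.
vehicle = [tumor_volume(l, w) for l, w in [(12, 8), (11, 9), (13, 8), (12, 9), (14, 8)]]
combination = [tumor_volume(l, w) for l, w in [(9, 6), (8, 7), (10, 6), (9, 7), (8, 6)]]

u_stat, p_value = mannwhitneyu(vehicle, combination, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f} (significant if p < 0.05)")
```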
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Cell lines", "Reagents", "Evaluating effect of the combination of panobinostat and bortezomib on cell viability and colony formation", "Evaluating effect of the combination of panobinostat and bortezomib on induction of apoptosis", "Murine xenograft model", "Western blotting", "Statistical analyses", "Results", "Combination of panobinostat and bortezomib inhibited renal cancer growth synergistically", "Combination of panobinostat and bortezomib induced apoptosis", "Combination of panobinostat and bortezomib induced ER stress and ubiquitinated protein accumulation synergistically", "Histone acetylation was a consequence of ubiquitinated protein accumulation", "Discussion", "Conclusions", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "A new therapeutic approach to advanced renal cancer is urgently needed because there is presently no curative treatment, and one innovative treatment strategy used against cancer is to induce endoplasmic reticulum (ER) stress and ubiquitinated protein accumulation [1]. Protein unfolding rates that exceed the capacity of protein chaperones cause ER stress, and chronic or unresolved ER stress can lead to apoptosis [2]. On the other hand, unfolded proteins that fail to be repaired by chaperones are then ubiquitinated and the accumulation of these ubiquitinated proteins is also cytotoxic [3].\nHistone deacetylase (HDAC) 6 inhibition acetylates heat shock protein (HSP) 90 and suppresses its function as a molecular chaperon, increasing the amount of unfolded proteins in the cell [4]. Because these unfolded proteins are then ubiquitinated and degraded by the proteasome [5], HDAC6 inhibition alone is thought to cause no or only slight ER stress and ubiquitinated protein accumulation if the proteasome function is normal. We thought that combining an HDAC inhibitor with the proteasome inhibitor bortezomib would cause ER stress and ubiquitinated protein accumulation synergistically because the increased ubiquitinated proteins would not be degraded by the inhibited proteasome.\nPanobinostat is a novel HDAC inhibitor that has been clinically tested not only in patients with hematological malignancies [6,7] but also patients with solid tumors, including renal cell carcinoma [8,9]. Bortezomib has been approved by the FDA and widely used for the treatment of multiple myeloma [10].\nIn the present study using renal cancer cells, we investigated whether the combination of panobinostat and bortezomib induces ER stress and ubiquitinated protein accumulation, and kills cancer cells effectively in vitro and in vivo.", " Cell lines Renal cancer cell lines (Caki-1, ACHN, and 769-P) were purchased from the American Type Culture Collection (Rockville, MD). Caki-1 cells were maintained in MEM, ACHN cells in DMEM, and 769-P cells in RPMI medium, all supplemented with 10% fetal bovine serum and 0.3% penicillin-streptomycin (Invitrogen, Carlsbad, CA).\nRenal cancer cell lines (Caki-1, ACHN, and 769-P) were purchased from the American Type Culture Collection (Rockville, MD). Caki-1 cells were maintained in MEM, ACHN cells in DMEM, and 769-P cells in RPMI medium, all supplemented with 10% fetal bovine serum and 0.3% penicillin-streptomycin (Invitrogen, Carlsbad, CA).\n Reagents Panobinostat and bortezomib were obtained from Cayman Chemical (Ann Arbor, MI) and LC Laboratories (Woburn, MA), respectively, dissolved in dimethyl sulfoxide (DMSO), and stored at -20°C until use.\nPanobinostat and bortezomib were obtained from Cayman Chemical (Ann Arbor, MI) and LC Laboratories (Woburn, MA), respectively, dissolved in dimethyl sulfoxide (DMSO), and stored at -20°C until use.\n Evaluating effect of the combination of panobinostat and bortezomib on cell viability and colony formation For cell viability assay, 5 × 103 cells were plated in a 96-well culture plate one day before treatment and treated with panobinostat (25–50 nM) and/or bortezomib (5–15 nM) for 48 hours. Cell viability was evaluated by MTS assay (Promega, Madison, WI) according to the manufacturer’s protocol. For colony formation assay, 1 × 102 cells were plated in 6-well plates one day before treatment and cultured for 48 hours in media containing 50 nM panobinostat and/or 10 nM bortezomib. 
They were then given fresh media and allowed to grow for 1–2 weeks, depending on the cell line. The number of colonies was then counted after fixing the cells with 100% methanol and staining them with Giemsa’s solution.\nFor cell viability assay, 5 × 103 cells were plated in a 96-well culture plate one day before treatment and treated with panobinostat (25–50 nM) and/or bortezomib (5–15 nM) for 48 hours. Cell viability was evaluated by MTS assay (Promega, Madison, WI) according to the manufacturer’s protocol. For colony formation assay, 1 × 102 cells were plated in 6-well plates one day before treatment and cultured for 48 hours in media containing 50 nM panobinostat and/or 10 nM bortezomib. They were then given fresh media and allowed to grow for 1–2 weeks, depending on the cell line. The number of colonies was then counted after fixing the cells with 100% methanol and staining them with Giemsa’s solution.\n Evaluating effect of the combination of panobinostat and bortezomib on induction of apoptosis 1.5 × 105 cells were plated in a 6-well culture plate one day before being cultured for 48 hours in medium containing 50 nM panobinostat and/or 10 nM bortezomib. Induction of apoptosis was evaluated, using flow cytometry, by annexin-V assay and cell cycle analysis. For annexin-V assay the harvested cells were stained with annexin V according to the manufacturer’s protocol (Beckman Coulter, Marseille, France). For cell cycle analysis the harvested cells were resuspended in citrate buffer and stained with propidium iodide. They were then analyzed by flow cytometry using CellQuest Pro Software (BD Biosciences, San Jose, CA).\n1.5 × 105 cells were plated in a 6-well culture plate one day before being cultured for 48 hours in medium containing 50 nM panobinostat and/or 10 nM bortezomib. Induction of apoptosis was evaluated, using flow cytometry, by annexin-V assay and cell cycle analysis. For annexin-V assay the harvested cells were stained with annexin V according to the manufacturer’s protocol (Beckman Coulter, Marseille, France). For cell cycle analysis the harvested cells were resuspended in citrate buffer and stained with propidium iodide. They were then analyzed by flow cytometry using CellQuest Pro Software (BD Biosciences, San Jose, CA).\n Murine xenograft model The animal protocol for this experiment has been approved by the institutional Animal Care and Use Committee of National Defense Medical College. 5-week-old male nude mice (strain BALB/c Slc-nu/nu) were purchased from CLEA (Tokyo, Japan). The animals were housed under pathogen-free conditions and had access to standard food and water ad libitum. 1 × 107 Caki-1 cells were subcutaneously injected into the flank and treatments were initiated 4 days after the injection (day 1), when the tumors became palpable. The mice were divided into 4 groups of 5, the control group receiving intraperitoneal injections of DMSO and the other groups receiving either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. Tumor volume was estimated as one half of the product of the length and the square of the width (i.e., volume = 0.5 × length × width2).\nThe animal protocol for this experiment has been approved by the institutional Animal Care and Use Committee of National Defense Medical College. 5-week-old male nude mice (strain BALB/c Slc-nu/nu) were purchased from CLEA (Tokyo, Japan). 
The animals were housed under pathogen-free conditions and had access to standard food and water ad libitum. 1 × 107 Caki-1 cells were subcutaneously injected into the flank and treatments were initiated 4 days after the injection (day 1), when the tumors became palpable. The mice were divided into 4 groups of 5, the control group receiving intraperitoneal injections of DMSO and the other groups receiving either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. Tumor volume was estimated as one half of the product of the length and the square of the width (i.e., volume = 0.5 × length × width2).\n Western blotting Cells were treated under the indicated conditions for 48 hours and whole-cell lysates were obtained using RIPA buffer. Equal amounts of protein were subjected to SDS-PAGE and transferred onto nitrocellulose membranes that were then probed with antibodies specific for glucose-regulated protein (GRP) 78, ubiquitin (Santa Cruz Biotechnology, Santa Cruz, CA), actin (Millipore, Billerica, MA), HSP70, endoplasmic reticulum resident protein (ERp) 44, endoplasmic oxidoreductin-1-like protein (Ero1-L)α, cleaved poly(ADP-ribose) polymerase (PARP) (Cell Signaling Technology, Danvers, MA), acetylated α-tubulin (Enzo Life Sciences, Farmingdale, NY), and acetylated histone (Abcam, Cambridge, UK). This probing was followed by treatment with horseradish-peroxidase-tagged secondary antibodies (Bio-Rad, Hercules, CA) and visualization by chemiluminescence (ECL, Amersham, Piscataway, NJ).\nCells were treated under the indicated conditions for 48 hours and whole-cell lysates were obtained using RIPA buffer. Equal amounts of protein were subjected to SDS-PAGE and transferred onto nitrocellulose membranes that were then probed with antibodies specific for glucose-regulated protein (GRP) 78, ubiquitin (Santa Cruz Biotechnology, Santa Cruz, CA), actin (Millipore, Billerica, MA), HSP70, endoplasmic reticulum resident protein (ERp) 44, endoplasmic oxidoreductin-1-like protein (Ero1-L)α, cleaved poly(ADP-ribose) polymerase (PARP) (Cell Signaling Technology, Danvers, MA), acetylated α-tubulin (Enzo Life Sciences, Farmingdale, NY), and acetylated histone (Abcam, Cambridge, UK). This probing was followed by treatment with horseradish-peroxidase-tagged secondary antibodies (Bio-Rad, Hercules, CA) and visualization by chemiluminescence (ECL, Amersham, Piscataway, NJ).\n Statistical analyses The combination indexes were calculated using the Chou-Talalay method and CalcuSyn software (Biosoft, Cambridge, UK). The statistical significance of observed differences between samples was determined using the Mann-Whitney U test (StatView software, SAS Institute, Cary, NC). Differences were considered significant at p <0.05.\nThe combination indexes were calculated using the Chou-Talalay method and CalcuSyn software (Biosoft, Cambridge, UK). The statistical significance of observed differences between samples was determined using the Mann-Whitney U test (StatView software, SAS Institute, Cary, NC). Differences were considered significant at p <0.05.", "Renal cancer cell lines (Caki-1, ACHN, and 769-P) were purchased from the American Type Culture Collection (Rockville, MD). 
Caki-1 cells were maintained in MEM, ACHN cells in DMEM, and 769-P cells in RPMI medium, all supplemented with 10% fetal bovine serum and 0.3% penicillin-streptomycin (Invitrogen, Carlsbad, CA).", "Panobinostat and bortezomib were obtained from Cayman Chemical (Ann Arbor, MI) and LC Laboratories (Woburn, MA), respectively, dissolved in dimethyl sulfoxide (DMSO), and stored at -20°C until use.", "For cell viability assay, 5 × 103 cells were plated in a 96-well culture plate one day before treatment and treated with panobinostat (25–50 nM) and/or bortezomib (5–15 nM) for 48 hours. Cell viability was evaluated by MTS assay (Promega, Madison, WI) according to the manufacturer’s protocol. For colony formation assay, 1 × 102 cells were plated in 6-well plates one day before treatment and cultured for 48 hours in media containing 50 nM panobinostat and/or 10 nM bortezomib. They were then given fresh media and allowed to grow for 1–2 weeks, depending on the cell line. The number of colonies was then counted after fixing the cells with 100% methanol and staining them with Giemsa’s solution.", "1.5 × 105 cells were plated in a 6-well culture plate one day before being cultured for 48 hours in medium containing 50 nM panobinostat and/or 10 nM bortezomib. Induction of apoptosis was evaluated, using flow cytometry, by annexin-V assay and cell cycle analysis. For annexin-V assay the harvested cells were stained with annexin V according to the manufacturer’s protocol (Beckman Coulter, Marseille, France). For cell cycle analysis the harvested cells were resuspended in citrate buffer and stained with propidium iodide. They were then analyzed by flow cytometry using CellQuest Pro Software (BD Biosciences, San Jose, CA).", "The animal protocol for this experiment has been approved by the institutional Animal Care and Use Committee of National Defense Medical College. 5-week-old male nude mice (strain BALB/c Slc-nu/nu) were purchased from CLEA (Tokyo, Japan). The animals were housed under pathogen-free conditions and had access to standard food and water ad libitum. 1 × 107 Caki-1 cells were subcutaneously injected into the flank and treatments were initiated 4 days after the injection (day 1), when the tumors became palpable. The mice were divided into 4 groups of 5, the control group receiving intraperitoneal injections of DMSO and the other groups receiving either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. Tumor volume was estimated as one half of the product of the length and the square of the width (i.e., volume = 0.5 × length × width2).", "Cells were treated under the indicated conditions for 48 hours and whole-cell lysates were obtained using RIPA buffer. Equal amounts of protein were subjected to SDS-PAGE and transferred onto nitrocellulose membranes that were then probed with antibodies specific for glucose-regulated protein (GRP) 78, ubiquitin (Santa Cruz Biotechnology, Santa Cruz, CA), actin (Millipore, Billerica, MA), HSP70, endoplasmic reticulum resident protein (ERp) 44, endoplasmic oxidoreductin-1-like protein (Ero1-L)α, cleaved poly(ADP-ribose) polymerase (PARP) (Cell Signaling Technology, Danvers, MA), acetylated α-tubulin (Enzo Life Sciences, Farmingdale, NY), and acetylated histone (Abcam, Cambridge, UK). 
This probing was followed by treatment with horseradish-peroxidase-tagged secondary antibodies (Bio-Rad, Hercules, CA) and visualization by chemiluminescence (ECL, Amersham, Piscataway, NJ).", "The combination indexes were calculated using the Chou-Talalay method and CalcuSyn software (Biosoft, Cambridge, UK). The statistical significance of observed differences between samples was determined using the Mann-Whitney U test (StatView software, SAS Institute, Cary, NC). Differences were considered significant at p <0.05.", " Combination of panobinostat and bortezomib inhibited renal cancer growth synergistically We first investigated the combined effect of panobinostat and bortezomib on renal cancer cell viability by MTS assay. Panobinostat and bortezomib each inhibited the growth of renal cancer cells in a dose-dependent fashion, and the combination did so more effectively than either did by itself (Figure 1A). Analysis using the Chou-Talalay method indicated that the effect of the combination was synergistic (combination index <1) in many of the treatment conditions (Table 1). We then investigated whether the combination affects the clonogenic survival of renal cancer cells. Colony formation assay revealed that the combination suppressed colony formation significantly and did so significantly more than did either panobinostat or bortezomib alone (Figure 1B).\nThe combination of panobinostat and bortezomib inhibited renal cancer growth effectively. A, MTS assay results (mean ± SD, n = 6) after cells were treated for 48 hours either with bortezomib or panobinostat alone or with bortezomib and panobinostat together. B, Colony formation assay results (mean ± SD, n = 3) after 1–2 week incubation in control media (C) or media containing 50 nM panobinostat (P) and/or 10 nM bortezomib (B). *p = 0.0495; **p = 0.0463.\nCombination indexes for the combination of panobinostat and bortezomib in renal cancer cells\nWe also used a subcutaneous xenograft mouse model to test the efficacy of the combination therapy in vivo. A 10-day treatment with panobinostat and bortezomib was well tolerated and suppressed tumor growth significantly (Figure 2). The p values at day 12 were 0.0283 for the control group and combination group, 0.0283 for the bortezomib group and combination group, and 0.0472 for the panobinostat group and combination group. The average tumor size at day 15 was 520 ± 175 mm3 (mean ± SE) in the vehicle-treated mice and was 266 ± 39 mm3 in the combination-treated mice. Thus the combination of panobinostat and bortezomib was shown to be effective for suppressing renal cancer growth both in vitro and in vivo.\nThe combination of panobinostat and bortezomib suppressed tumor growth in vivo. A murine subcutaneous tumor model was made using Caki-1 cells, and the control group received intraperitoneal injections of DMSO, while other groups received either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. The 10-day treatment was well tolerated and suppressed tumor growth significantly (mean ± SE; p = 0.0283 at day 12).\nWe first investigated the combined effect of panobinostat and bortezomib on renal cancer cell viability by MTS assay. Panobinostat and bortezomib each inhibited the growth of renal cancer cells in a dose-dependent fashion, and the combination did so more effectively than either did by itself (Figure 1A). 
Analysis using the Chou-Talalay method indicated that the effect of the combination was synergistic (combination index <1) in many of the treatment conditions (Table 1). We then investigated whether the combination affects the clonogenic survival of renal cancer cells. Colony formation assay revealed that the combination suppressed colony formation significantly and did so significantly more than did either panobinostat or bortezomib alone (Figure 1B).\nThe combination of panobinostat and bortezomib inhibited renal cancer growth effectively. A, MTS assay results (mean ± SD, n = 6) after cells were treated for 48 hours either with bortezomib or panobinostat alone or with bortezomib and panobinostat together. B, Colony formation assay results (mean ± SD, n = 3) after 1–2 week incubation in control media (C) or media containing 50 nM panobinostat (P) and/or 10 nM bortezomib (B). *p = 0.0495; **p = 0.0463.\nCombination indexes for the combination of panobinostat and bortezomib in renal cancer cells\nWe also used a subcutaneous xenograft mouse model to test the efficacy of the combination therapy in vivo. A 10-day treatment with panobinostat and bortezomib was well tolerated and suppressed tumor growth significantly (Figure 2). The p values at day 12 were 0.0283 for the control group and combination group, 0.0283 for the bortezomib group and combination group, and 0.0472 for the panobinostat group and combination group. The average tumor size at day 15 was 520 ± 175 mm3 (mean ± SE) in the vehicle-treated mice and was 266 ± 39 mm3 in the combination-treated mice. Thus the combination of panobinostat and bortezomib was shown to be effective for suppressing renal cancer growth both in vitro and in vivo.\nThe combination of panobinostat and bortezomib suppressed tumor growth in vivo. A murine subcutaneous tumor model was made using Caki-1 cells, and the control group received intraperitoneal injections of DMSO, while other groups received either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. The 10-day treatment was well tolerated and suppressed tumor growth significantly (mean ± SE; p = 0.0283 at day 12).\n Combination of panobinostat and bortezomib induced apoptosis The combination increased the annexin-V fluorescence intensity (up to 19.4-fold compared with control vehicle) (Figure 3A) and also increased the number of the cells in the sub-G1 fraction (up to 70.5%) (Figure 3B). Thus the combination of panobinostat and bortezomib was demonstrated to induce apoptosis in renal cancer cells.\nThe combination of panobinostat and bortezomib induced apoptosis in renal cancer cells. Cells were treated for 48 hours with 50 nM panobinostat with or without 10 nM bortezomib. The combination increased the annexin-V-FITC fluorescence intensity (A) and increased the number of the cells in the sub-G1 fraction (B). Relative annexin-V-FITC fluorescence intensity (control = 1) is shown in the insets. White, control; red, treated. The percentage of cells in the sub-G1 fraction is shown in the graph. Representative results are shown.\nThe combination increased the annexin-V fluorescence intensity (up to 19.4-fold compared with control vehicle) (Figure 3A) and also increased the number of the cells in the sub-G1 fraction (up to 70.5%) (Figure 3B). Thus the combination of panobinostat and bortezomib was demonstrated to induce apoptosis in renal cancer cells.\nThe combination of panobinostat and bortezomib induced apoptosis in renal cancer cells. 
Cells were treated for 48 hours with 50 nM panobinostat with or without 10 nM bortezomib. The combination increased the annexin-V-FITC fluorescence intensity (A) and increased the number of the cells in the sub-G1 fraction (B). Relative annexin-V-FITC fluorescence intensity (control = 1) is shown in the insets. White, control; red, treated. The percentage of cells in the sub-G1 fraction is shown in the graph. Representative results are shown.\n Combination of panobinostat and bortezomib induced ER stress and ubiquitinated protein accumulation synergistically The combination induced ER stress synergistically as indicated by the increased expression of ER stress markers such as GRP78, HSP70, ERp44, and (except in 769-P cells) Ero1-Lα (Figure 4A). As expected, the combination induced ubiquitinated protein accumulation synergistically (Figure 4B): in Caki-1 and 769-P cells, 10 nM bortezomib alone did not cause ubiquitinated proteins to accumulate but in combination with 50 nM panobinostat increased the accumulation of ubiquitinated proteins markedly. In ACHN cells, 10 nM bortezomib caused ubiquitinated protein accumulation and the accumulation was synergistically enhanced by 50 nM panobinostat. Acetylation of α-tubulin by panobinostat is consistent with HDAC6 inhibition because α-tubulin is one of the important substrates of HDAC6. Interestingly, the combination also enhanced the acetylation of histone and α-tubulin synergistically in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone.\nThe combination therapy induced ER stress and histone acetylation in renal cancer cells. The 48-hour treatment with the combination of panobinostat and bortezomib induced ER stress synergistically as indicated by the increased expression of GRP78, HSP70, ERp44, and Ero1-Lα (A). It also caused ubiquitinated protein accumulation in all the cell lines synergistically and enhanced histone and also α-tubulin acetylation in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone (B). The dashed lines in the Caki-1 results in parts A and B indicate that the order of the bands has been rearranged from the original gel.\nThe combination induced ER stress synergistically as indicated by the increased expression of ER stress markers such as GRP78, HSP70, ERp44, and (except in 769-P cells) Ero1-Lα (Figure 4A). As expected, the combination induced ubiquitinated protein accumulation synergistically (Figure 4B): in Caki-1 and 769-P cells, 10 nM bortezomib alone did not cause ubiquitinated proteins to accumulate but in combination with 50 nM panobinostat increased the accumulation of ubiquitinated proteins markedly. In ACHN cells, 10 nM bortezomib caused ubiquitinated protein accumulation and the accumulation was synergistically enhanced by 50 nM panobinostat. Acetylation of α-tubulin by panobinostat is consistent with HDAC6 inhibition because α-tubulin is one of the important substrates of HDAC6. Interestingly, the combination also enhanced the acetylation of histone and α-tubulin synergistically in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone.\nThe combination therapy induced ER stress and histone acetylation in renal cancer cells. The 48-hour treatment with the combination of panobinostat and bortezomib induced ER stress synergistically as indicated by the increased expression of GRP78, HSP70, ERp44, and Ero1-Lα (A). 
It also caused ubiquitinated protein accumulation in all the cell lines synergistically and enhanced histone and also α-tubulin acetylation in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone (B). The dashed lines in the Caki-1 results in parts A and B indicate that the order of the bands has been rearranged from the original gel.\n Histone acetylation was a consequence of ubiquitinated protein accumulation We then investigated the relationship between histone acetylation and ubiquitinated protein accumulation. Panobinostat caused histone acetylation in a dose-dependent fashion in all the cell lines but did not induce ubiquitinated protein accumulation (Figure 5A). Bortezomib, on the other hand, caused both ubiquitinated protein accumulation and histone acetylation in a dose-dependent fashion in Caki-1 and ACHN cells but did not cause histone acetylation in 769-P cells (Figure 5B). This is in accordance with the result that the combination did not enhance histone acetylation in 769-P cells despite inducing ubiquitinated protein accumulation in them (Figure 4B). We inferred from these results that the histone acetylation the combination caused in Caki-1 and ACHN cells was a consequence of ubiquitinated protein accumulation.\nHistone acetylation was a consequence of ubiquitinated protein accumulation. A, 48-hour treatment with panobinostat caused dose-dependent histone acetylation in all the cell lines but did not cause ubiquitinated protein accumulation. B, 48-hour treatment with bortezomib, on the other hand, caused both histone acetylation and ubiquitinated protein accumulation in Caki-1 and ACHN cells and caused only ubiquitinated protein accumulation in 769-P cells.\nWe then investigated the relationship between histone acetylation and ubiquitinated protein accumulation. Panobinostat caused histone acetylation in a dose-dependent fashion in all the cell lines but did not induce ubiquitinated protein accumulation (Figure 5A). Bortezomib, on the other hand, caused both ubiquitinated protein accumulation and histone acetylation in a dose-dependent fashion in Caki-1 and ACHN cells but did not cause histone acetylation in 769-P cells (Figure 5B). This is in accordance with the result that the combination did not enhance histone acetylation in 769-P cells despite inducing ubiquitinated protein accumulation in them (Figure 4B). We inferred from these results that the histone acetylation the combination caused in Caki-1 and ACHN cells was a consequence of ubiquitinated protein accumulation.\nHistone acetylation was a consequence of ubiquitinated protein accumulation. A, 48-hour treatment with panobinostat caused dose-dependent histone acetylation in all the cell lines but did not cause ubiquitinated protein accumulation. B, 48-hour treatment with bortezomib, on the other hand, caused both histone acetylation and ubiquitinated protein accumulation in Caki-1 and ACHN cells and caused only ubiquitinated protein accumulation in 769-P cells.", "We first investigated the combined effect of panobinostat and bortezomib on renal cancer cell viability by MTS assay. Panobinostat and bortezomib each inhibited the growth of renal cancer cells in a dose-dependent fashion, and the combination did so more effectively than either did by itself (Figure 1A). Analysis using the Chou-Talalay method indicated that the effect of the combination was synergistic (combination index <1) in many of the treatment conditions (Table 1). 
We then investigated whether the combination affects the clonogenic survival of renal cancer cells. Colony formation assay revealed that the combination suppressed colony formation significantly and did so significantly more than did either panobinostat or bortezomib alone (Figure 1B).\nThe combination of panobinostat and bortezomib inhibited renal cancer growth effectively. A, MTS assay results (mean ± SD, n = 6) after cells were treated for 48 hours either with bortezomib or panobinostat alone or with bortezomib and panobinostat together. B, Colony formation assay results (mean ± SD, n = 3) after 1–2 week incubation in control media (C) or media containing 50 nM panobinostat (P) and/or 10 nM bortezomib (B). *p = 0.0495; **p = 0.0463.\nCombination indexes for the combination of panobinostat and bortezomib in renal cancer cells\nWe also used a subcutaneous xenograft mouse model to test the efficacy of the combination therapy in vivo. A 10-day treatment with panobinostat and bortezomib was well tolerated and suppressed tumor growth significantly (Figure 2). The p values at day 12 were 0.0283 for the control group and combination group, 0.0283 for the bortezomib group and combination group, and 0.0472 for the panobinostat group and combination group. The average tumor size at day 15 was 520 ± 175 mm3 (mean ± SE) in the vehicle-treated mice and was 266 ± 39 mm3 in the combination-treated mice. Thus the combination of panobinostat and bortezomib was shown to be effective for suppressing renal cancer growth both in vitro and in vivo.\nThe combination of panobinostat and bortezomib suppressed tumor growth in vivo. A murine subcutaneous tumor model was made using Caki-1 cells, and the control group received intraperitoneal injections of DMSO, while other groups received either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. The 10-day treatment was well tolerated and suppressed tumor growth significantly (mean ± SE; p = 0.0283 at day 12).", "The combination increased the annexin-V fluorescence intensity (up to 19.4-fold compared with control vehicle) (Figure 3A) and also increased the number of the cells in the sub-G1 fraction (up to 70.5%) (Figure 3B). Thus the combination of panobinostat and bortezomib was demonstrated to induce apoptosis in renal cancer cells.\nThe combination of panobinostat and bortezomib induced apoptosis in renal cancer cells. Cells were treated for 48 hours with 50 nM panobinostat with or without 10 nM bortezomib. The combination increased the annexin-V-FITC fluorescence intensity (A) and increased the number of the cells in the sub-G1 fraction (B). Relative annexin-V-FITC fluorescence intensity (control = 1) is shown in the insets. White, control; red, treated. The percentage of cells in the sub-G1 fraction is shown in the graph. Representative results are shown.", "The combination induced ER stress synergistically as indicated by the increased expression of ER stress markers such as GRP78, HSP70, ERp44, and (except in 769-P cells) Ero1-Lα (Figure 4A). As expected, the combination induced ubiquitinated protein accumulation synergistically (Figure 4B): in Caki-1 and 769-P cells, 10 nM bortezomib alone did not cause ubiquitinated proteins to accumulate but in combination with 50 nM panobinostat increased the accumulation of ubiquitinated proteins markedly. In ACHN cells, 10 nM bortezomib caused ubiquitinated protein accumulation and the accumulation was synergistically enhanced by 50 nM panobinostat. 
Acetylation of α-tubulin by panobinostat is consistent with HDAC6 inhibition because α-tubulin is one of the important substrates of HDAC6. Interestingly, the combination also enhanced the acetylation of histone and α-tubulin synergistically in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone.\nThe combination therapy induced ER stress and histone acetylation in renal cancer cells. The 48-hour treatment with the combination of panobinostat and bortezomib induced ER stress synergistically as indicated by the increased expression of GRP78, HSP70, ERp44, and Ero1-Lα (A). It also caused ubiquitinated protein accumulation in all the cell lines synergistically and enhanced histone and also α-tubulin acetylation in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone (B). The dashed lines in the Caki-1 results in parts A and B indicate that the order of the bands has been rearranged from the original gel.", "We then investigated the relationship between histone acetylation and ubiquitinated protein accumulation. Panobinostat caused histone acetylation in a dose-dependent fashion in all the cell lines but did not induce ubiquitinated protein accumulation (Figure 5A). Bortezomib, on the other hand, caused both ubiquitinated protein accumulation and histone acetylation in a dose-dependent fashion in Caki-1 and ACHN cells but did not cause histone acetylation in 769-P cells (Figure 5B). This is in accordance with the result that the combination did not enhance histone acetylation in 769-P cells despite inducing ubiquitinated protein accumulation in them (Figure 4B). We inferred from these results that the histone acetylation the combination caused in Caki-1 and ACHN cells was a consequence of ubiquitinated protein accumulation.\nHistone acetylation was a consequence of ubiquitinated protein accumulation. A, 48-hour treatment with panobinostat caused dose-dependent histone acetylation in all the cell lines but did not cause ubiquitinated protein accumulation. B, 48-hour treatment with bortezomib, on the other hand, caused both histone acetylation and ubiquitinated protein accumulation in Caki-1 and ACHN cells and caused only ubiquitinated protein accumulation in 769-P cells.", "Inducing ER stress and ubiquitinated protein accumulation is a novel approach to cancer therapy. The combination of an HDAC inhibitor and bortezomib is one of the combinations that might be expected to do it. The combination of panobinostat and bortezomib has recently been investigated mainly in hematological malignancies [11,12]. It has been reported that the combination of bortezomib and the HDAC inhibitor suberoylanilide hydroxamic acid inhibits renal cancer growth by causing accumulation of ubiquitinated proteins and histone acetylation [13], but that study did not show the relationship between ubiquitinated protein accumulation and histone acetylation. In the present study, using panobinostat, a more potent HDAC inhibitor (acting at nanomolar concentrations, whereas suberoylanilide hydroxamic acid acts at micromolar concentrations), we investigated the effect of the bortezomib-panobinostat combination on renal cancer growth as well as further mechanisms of the combination of bortezomib and an HDAC inhibitor.\nInhibition of HDAC6 acetylates HSP90, abrogating its function and increasing the amount of unfolded proteins [4]. 
We think that bortezomib inhibits degradation of unfolded proteins increased by panobinostat, which induces ER stress and ubiquitinated protein accumulation. Accumulation of unfolded proteins, or ER stress, activates a signaling pathway known as the unfolded protein response (UPR), which leads to increased transcription of ER folding and quality-control factors [14]. In the present study we showed the induction of ER stress by detecting the increased expression of UPR-related proteins: GRP78, HSP70, Ero1-Lα, and ERp44. GRP78 is a master regulator for ER stress because of its role as a major ER chaperone as well as its ability to control the activation of UPR signaling [15]. HSP70 is a molecular chaperone localized in the cytoplasm but associated with the regulation of the UPR by forming a stable protein complex with the cytosolic domain of inositol-requiring enzyme 1α [16]. Ero1-Lα regulates oxidative protein folding by selectively oxidizing protein disulfide isomerase [17], one of the key players in the control of disulfide bond formation [18]. ERp44 forms mixed disulfides with Ero1-Lα and may be involved in the control of oxidative protein folding [19]. The increased expression of these ER stress-related proteins thus confirmed that ER stress was induced by the combination of panobinostat and bortezomib.\nAcetylation of α-tubulin, one of the important substrates of HDAC6 [20], is consistent with the inhibition of HDAC6 by panobinostat. Interestingly, panobinostat itself did not cause marked ER stress even though it inhibited HDAC6 function. This may be because the unfolded proteins increased by panobinostat can be degraded immediately by the proteasome if its function is not suppressed. This explanation is consistent with the result that panobinostat induced marked ER stress only when combined with bortezomib.\nThe combination induced ubiquitinated protein accumulation synergistically. This is because panobinostat increased unfolded proteins, which were then ubiquitinated, and bortezomib inhibited their degradation. The ubiquitinated protein accumulation is also in accordance with the above-discussed enhanced ER stress induced by the combination because ER stress is induced by the accumulation of unfolded proteins in the cell, and many of these unfolded proteins are ubiquitinated. Not only are ubiquitinated proteins themselves toxic to tumor cells [3], some of them may be important molecules for cancer cell survival (such as transcription factors and signal transduction molecules) that have lost their function because of unfolding and ubiquitination, presumably leading to the inhibition of multiple signal transduction pathways. Furthermore, the inhibition of NF-kB is thought to play an important role in the combination therapy with panobinostat and bortezomib because of the accumulation of undegraded IkB, a suppressor of NF-kB [21]. Jiang XJ et al. reported that the combination of panobinostat and bortezomib activated caspases and down-regulated antiapoptotic proteins such as XIAP and Bcl-2 through inhibition of the AKT and NF-kB pathways [11]. The combination is thus thought to inhibit cancer growth by diverse mechanisms other than the induction of ER stress and ubiquitinated protein accumulation.\nIn Caki-1 and ACHN cells the combination of panobinostat and bortezomib not only caused ubiquitinated protein accumulation but also enhanced histone acetylation. 
In these cell lines, panobinostat alone caused histone acetylation but not ubiquitinated protein accumulation, whereas bortezomib alone induced both ubiquitinated protein accumulation and histone acetylation. We therefore think the histone acetylation in these cell lines is a consequence of ubiquitinated protein accumulation, which is consistent with the results of a previous study using prostate cancer cells [22]. In 769-P cells, on the other hand, the combination enhanced ubiquitinated protein accumulation but not histone acetylation. This is, however, also in accordance with the result that bortezomib alone did not cause histone acetylation in 769-P cells. In Caki-1 and ACHN cells, HDAC function decreased by ubiquitination may be one explanation. In 769-P cells, bortezomib alone seems to even decrease histone acetylation. Ubiquitination may result in the HDAC activity in 769-P cells being higher than the histone acetyltransferase activity there. However, further study will be needed to clarify the exact mechanism of this decreased histone acetylation.\nThe combination of panobinostat and bortezomib has also been tested clinically, mainly in patients with hematological malignancies. In the most recent phase-II study enrolling 55 patients with relapsed and bortezomib-refractory myeloma [23], the patients were treated with eight 3-week cycles of 20 mg panobinostat three times a week and 1.3 mg/m2 bortezomib twice a week with 20 mg of dexamethasone four times a week on weeks 1 and 2. If the patients showed clinical benefit, then they were treated with 6-week cycles of panobinostat three times a week and bortezomib once a week on weeks 1, 2, and 4 with dexamethasone on the days of and after bortezomib. In that study the overall response rate was 34.5%, the clinical benefit rate was 52.7%, and grade 3 or 4 adverse events were thrombocytopenia (63.6%), fatigue (20.0%), and diarrhea (20.0%). Two limitations of our in-vivo study are that it could not provide information about whether the doses we used in mice were equivalent to those used in humans and that it lacked a proper assessment of side effects. This study is, however, the first to show the beneficial combined effect of panobinostat and bortezomib in renal cancer cells, and it provides a basis for testing the combination in clinical settings.", "Panobinostat inhibits renal cancer growth by synergizing with bortezomib to induce ER stress and ubiquitinated protein accumulation. Histone acetylation may be another important mechanism of action. This is the first study to demonstrate the combination’s effect on renal cancer cells both in vitro and in vivo, and it provides a basis for testing the combination in patients with advanced renal cancer.", "The authors declare that they have no competing interests.", "AS contributed to design and interpretation of all experiments, drafting of the manuscript and execution of western blotting, colony formation assay and animal experiments. TA collected and assembled data and performed MTS assay, cell cycle analysis, annexin-V assay and animal experiments. MI participated in the study design, performed statistical analysis and helped to draft the manuscript. KI contributed to design and interpretation of all experiments and drafting of the manuscript. TA participated in the study design and coordination and helped to draft the manuscript. 
All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2490/14/71/prepub\n" ]
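Note on the apoptosis read-outs: the flow-cytometry results above report the fold change in annexin-V fluorescence relative to vehicle and the percentage of cells in the sub-G1 fraction of the propidium-iodide histogram. Both reduce to per-event arithmetic once events are exported from the cytometer; the sketch below uses synthetic event data, and the sub-G1 gate position is an assumption rather than a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-cell fluorescence values standing in for exported cytometry events.
annexin_control = rng.lognormal(mean=2.0, sigma=0.4, size=10_000)
annexin_treated = rng.lognormal(mean=4.0, sigma=0.6, size=10_000)
pi_dna_content = rng.normal(loc=200.0, scale=60.0, size=10_000)  # propidium-iodide area signal

# Fold change in mean annexin-V fluorescence (vehicle control defined as 1).
fold_change = annexin_treated.mean() / annexin_control.mean()

# Sub-G1 fraction: events whose DNA content falls below the G1 gate.
g1_gate = 100.0  # in practice the gate is set on the untreated control histogram
sub_g1_percent = 100.0 * np.mean(pi_dna_content < g1_gate)

print(f"annexin-V fold change: {fold_change:.1f}")
print(f"sub-G1 fraction: {sub_g1_percent:.1f}%")
```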
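Note on the MTS normalization: the viability results above are reported as mean ± SD of six replicate wells after 48 hours of treatment. A small sketch of how raw absorbances are commonly converted to percent viability before the fraction affected enters the combination-index calculation; the absorbance values are invented, since the paper reports only normalized results.

```python
import numpy as np

def percent_viability(a_treated, a_control, a_blank):
    """Normalize raw MTS absorbances so the vehicle control equals 100% viability."""
    return 100.0 * (np.asarray(a_treated, dtype=float) - a_blank) / (a_control - a_blank)

# Hypothetical 490-nm absorbances (means of six wells each).
a_blank, a_control = 0.08, 1.25
panobinostat_alone = percent_viability([0.95, 0.80, 0.66], a_control, a_blank)  # 25, 37.5, 50 nM
bortezomib_alone = percent_viability([1.02, 0.88, 0.71], a_control, a_blank)    # 5, 10, 15 nM
combination = percent_viability([0.45], a_control, a_blank)                     # 50 nM + 10 nM

fraction_affected = 1.0 - combination / 100.0  # input to the Chou-Talalay calculation
print(panobinostat_alone, bortezomib_alone, combination, fraction_affected)
```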
[ null, "methods", null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusions", null, null, null ]
[ "Panobinostat", "Bortezomib", "Endoplasmic reticulum stress", "Ubiquitinated protein", "Histone acetylation", "Renal cancer", "Combination therapy" ]
Background: A new therapeutic approach to advanced renal cancer is urgently needed because there is presently no curative treatment, and one innovative treatment strategy used against cancer is to induce endoplasmic reticulum (ER) stress and ubiquitinated protein accumulation [1]. Protein unfolding rates that exceed the capacity of protein chaperones cause ER stress, and chronic or unresolved ER stress can lead to apoptosis [2]. On the other hand, unfolded proteins that fail to be repaired by chaperones are then ubiquitinated and the accumulation of these ubiquitinated proteins is also cytotoxic [3]. Histone deacetylase (HDAC) 6 inhibition acetylates heat shock protein (HSP) 90 and suppresses its function as a molecular chaperon, increasing the amount of unfolded proteins in the cell [4]. Because these unfolded proteins are then ubiquitinated and degraded by the proteasome [5], HDAC6 inhibition alone is thought to cause no or only slight ER stress and ubiquitinated protein accumulation if the proteasome function is normal. We thought that combining an HDAC inhibitor with the proteasome inhibitor bortezomib would cause ER stress and ubiquitinated protein accumulation synergistically because the increased ubiquitinated proteins would not be degraded by the inhibited proteasome. Panobinostat is a novel HDAC inhibitor that has been clinically tested not only in patients with hematological malignancies [6,7] but also patients with solid tumors, including renal cell carcinoma [8,9]. Bortezomib has been approved by the FDA and widely used for the treatment of multiple myeloma [10]. In the present study using renal cancer cells, we investigated whether the combination of panobinostat and bortezomib induces ER stress and ubiquitinated protein accumulation, and kills cancer cells effectively in vitro and in vivo. Methods: Cell lines Renal cancer cell lines (Caki-1, ACHN, and 769-P) were purchased from the American Type Culture Collection (Rockville, MD). Caki-1 cells were maintained in MEM, ACHN cells in DMEM, and 769-P cells in RPMI medium, all supplemented with 10% fetal bovine serum and 0.3% penicillin-streptomycin (Invitrogen, Carlsbad, CA). Renal cancer cell lines (Caki-1, ACHN, and 769-P) were purchased from the American Type Culture Collection (Rockville, MD). Caki-1 cells were maintained in MEM, ACHN cells in DMEM, and 769-P cells in RPMI medium, all supplemented with 10% fetal bovine serum and 0.3% penicillin-streptomycin (Invitrogen, Carlsbad, CA). Reagents Panobinostat and bortezomib were obtained from Cayman Chemical (Ann Arbor, MI) and LC Laboratories (Woburn, MA), respectively, dissolved in dimethyl sulfoxide (DMSO), and stored at -20°C until use. Panobinostat and bortezomib were obtained from Cayman Chemical (Ann Arbor, MI) and LC Laboratories (Woburn, MA), respectively, dissolved in dimethyl sulfoxide (DMSO), and stored at -20°C until use. Evaluating effect of the combination of panobinostat and bortezomib on cell viability and colony formation For cell viability assay, 5 × 103 cells were plated in a 96-well culture plate one day before treatment and treated with panobinostat (25–50 nM) and/or bortezomib (5–15 nM) for 48 hours. Cell viability was evaluated by MTS assay (Promega, Madison, WI) according to the manufacturer’s protocol. For colony formation assay, 1 × 102 cells were plated in 6-well plates one day before treatment and cultured for 48 hours in media containing 50 nM panobinostat and/or 10 nM bortezomib. 
They were then given fresh media and allowed to grow for 1–2 weeks, depending on the cell line. The number of colonies was then counted after fixing the cells with 100% methanol and staining them with Giemsa’s solution. Evaluating effect of the combination of panobinostat and bortezomib on induction of apoptosis 1.5 × 105 cells were plated in a 6-well culture plate one day before being cultured for 48 hours in medium containing 50 nM panobinostat and/or 10 nM bortezomib. Induction of apoptosis was evaluated, using flow cytometry, by annexin-V assay and cell cycle analysis. For annexin-V assay the harvested cells were stained with annexin V according to the manufacturer’s protocol (Beckman Coulter, Marseille, France). For cell cycle analysis the harvested cells were resuspended in citrate buffer and stained with propidium iodide. They were then analyzed by flow cytometry using CellQuest Pro Software (BD Biosciences, San Jose, CA). Murine xenograft model The animal protocol for this experiment has been approved by the institutional Animal Care and Use Committee of National Defense Medical College. 5-week-old male nude mice (strain BALB/c Slc-nu/nu) were purchased from CLEA (Tokyo, Japan). The animals were housed under pathogen-free conditions and had access to standard food and water ad libitum. 1 × 107 Caki-1 cells were subcutaneously injected into the flank and treatments were initiated 4 days after the injection (day 1), when the tumors became palpable. The mice were divided into 4 groups of 5, the control group receiving intraperitoneal injections of DMSO and the other groups receiving either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. Tumor volume was estimated as one half of the product of the length and the square of the width (i.e., volume = 0.5 × length × width2).
Western blotting Cells were treated under the indicated conditions for 48 hours and whole-cell lysates were obtained using RIPA buffer. Equal amounts of protein were subjected to SDS-PAGE and transferred onto nitrocellulose membranes that were then probed with antibodies specific for glucose-regulated protein (GRP) 78, ubiquitin (Santa Cruz Biotechnology, Santa Cruz, CA), actin (Millipore, Billerica, MA), HSP70, endoplasmic reticulum resident protein (ERp) 44, endoplasmic oxidoreductin-1-like protein (Ero1-L)α, cleaved poly(ADP-ribose) polymerase (PARP) (Cell Signaling Technology, Danvers, MA), acetylated α-tubulin (Enzo Life Sciences, Farmingdale, NY), and acetylated histone (Abcam, Cambridge, UK). This probing was followed by treatment with horseradish-peroxidase-tagged secondary antibodies (Bio-Rad, Hercules, CA) and visualization by chemiluminescence (ECL, Amersham, Piscataway, NJ). Statistical analyses The combination indexes were calculated using the Chou-Talalay method and CalcuSyn software (Biosoft, Cambridge, UK). The statistical significance of observed differences between samples was determined using the Mann-Whitney U test (StatView software, SAS Institute, Cary, NC). Differences were considered significant at p <0.05. Cell lines: Renal cancer cell lines (Caki-1, ACHN, and 769-P) were purchased from the American Type Culture Collection (Rockville, MD). Caki-1 cells were maintained in MEM, ACHN cells in DMEM, and 769-P cells in RPMI medium, all supplemented with 10% fetal bovine serum and 0.3% penicillin-streptomycin (Invitrogen, Carlsbad, CA).
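As an aside for readers who want to sanity-check the xenograft calculations above, the following Python sketch applies the stated tumor-volume estimate (one half of the length times the square of the width) and the Mann-Whitney U comparison named in the statistical analyses. All caliper values, group sizes, and variable names below are invented for illustration and are not data from the study.

```python
# Illustrative only: the measurements below are made up; they are not study data.
# This simply re-implements volume = 0.5 * length * width**2 (mm^3) and compares
# two hypothetical groups with a Mann-Whitney U test.
from scipy.stats import mannwhitneyu

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Estimate tumor volume (mm^3) as one half of length times width squared."""
    return 0.5 * length_mm * width_mm ** 2

# Hypothetical day-15 caliper measurements (length, width) for 5 mice per group.
vehicle_mm = [(12.0, 9.5), (11.0, 8.0), (13.5, 9.0), (10.5, 8.5), (12.5, 9.2)]
combo_mm = [(9.0, 7.0), (8.5, 6.5), (9.5, 7.2), (8.0, 6.8), (9.2, 7.1)]

vehicle_vol = [tumor_volume(l, w) for l, w in vehicle_mm]
combo_vol = [tumor_volume(l, w) for l, w in combo_mm]

u_stat, p_value = mannwhitneyu(vehicle_vol, combo_vol, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```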
Reagents: Panobinostat and bortezomib were obtained from Cayman Chemical (Ann Arbor, MI) and LC Laboratories (Woburn, MA), respectively, dissolved in dimethyl sulfoxide (DMSO), and stored at -20°C until use. Evaluating effect of the combination of panobinostat and bortezomib on cell viability and colony formation: For cell viability assay, 5 × 103 cells were plated in a 96-well culture plate one day before treatment and treated with panobinostat (25–50 nM) and/or bortezomib (5–15 nM) for 48 hours. Cell viability was evaluated by MTS assay (Promega, Madison, WI) according to the manufacturer’s protocol. For colony formation assay, 1 × 102 cells were plated in 6-well plates one day before treatment and cultured for 48 hours in media containing 50 nM panobinostat and/or 10 nM bortezomib. They were then given fresh media and allowed to grow for 1–2 weeks, depending on the cell line. The number of colonies was then counted after fixing the cells with 100% methanol and staining them with Giemsa’s solution. Evaluating effect of the combination of panobinostat and bortezomib on induction of apoptosis: 1.5 × 105 cells were plated in a 6-well culture plate one day before being cultured for 48 hours in medium containing 50 nM panobinostat and/or 10 nM bortezomib. Induction of apoptosis was evaluated, using flow cytometry, by annexin-V assay and cell cycle analysis. For annexin-V assay the harvested cells were stained with annexin V according to the manufacturer’s protocol (Beckman Coulter, Marseille, France). For cell cycle analysis the harvested cells were resuspended in citrate buffer and stained with propidium iodide. They were then analyzed by flow cytometry using CellQuest Pro Software (BD Biosciences, San Jose, CA). Murine xenograft model: The animal protocol for this experiment has been approved by the institutional Animal Care and Use Committee of National Defense Medical College. 5-week-old male nude mice (strain BALB/c Slc-nu/nu) were purchased from CLEA (Tokyo, Japan). The animals were housed under pathogen-free conditions and had access to standard food and water ad libitum. 1 × 107 Caki-1 cells were subcutaneously injected into the flank and treatments were initiated 4 days after the injection (day 1), when the tumors became palpable. The mice were divided into 4 groups of 5, the control group receiving intraperitoneal injections of DMSO and the other groups receiving either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. Tumor volume was estimated as one half of the product of the length and the square of the width (i.e., volume = 0.5 × length × width2). Western blotting: Cells were treated under the indicated conditions for 48 hours and whole-cell lysates were obtained using RIPA buffer. Equal amounts of protein were subjected to SDS-PAGE and transferred onto nitrocellulose membranes that were then probed with antibodies specific for glucose-regulated protein (GRP) 78, ubiquitin (Santa Cruz Biotechnology, Santa Cruz, CA), actin (Millipore, Billerica, MA), HSP70, endoplasmic reticulum resident protein (ERp) 44, endoplasmic oxidoreductin-1-like protein (Ero1-L)α, cleaved poly(ADP-ribose) polymerase (PARP) (Cell Signaling Technology, Danvers, MA), acetylated α-tubulin (Enzo Life Sciences, Farmingdale, NY), and acetylated histone (Abcam, Cambridge, UK). 
This probing was followed by treatment with horseradish-peroxidase-tagged secondary antibodies (Bio-Rad, Hercules, CA) and visualization by chemiluminescence (ECL, Amersham, Piscataway, NJ). Statistical analyses: The combination indexes were calculated using the Chou-Talalay method and CalcuSyn software (Biosoft, Cambridge, UK). The statistical significance of observed differences between samples was determined using the Mann-Whitney U test (StatView software, SAS Institute, Cary, NC). Differences were considered significant at p <0.05. Results: Combination of panobinostat and bortezomib inhibited renal cancer growth synergistically We first investigated the combined effect of panobinostat and bortezomib on renal cancer cell viability by MTS assay. Panobinostat and bortezomib each inhibited the growth of renal cancer cells in a dose-dependent fashion, and the combination did so more effectively than either did by itself (Figure 1A). Analysis using the Chou-Talalay method indicated that the effect of the combination was synergistic (combination index <1) in many of the treatment conditions (Table 1). We then investigated whether the combination affects the clonogenic survival of renal cancer cells. Colony formation assay revealed that the combination suppressed colony formation significantly and did so significantly more than did either panobinostat or bortezomib alone (Figure 1B). The combination of panobinostat and bortezomib inhibited renal cancer growth effectively. A, MTS assay results (mean ± SD, n = 6) after cells were treated for 48 hours either with bortezomib or panobinostat alone or with bortezomib and panobinostat together. B, Colony formation assay results (mean ± SD, n = 3) after 1–2 week incubation in control media (C) or media containing 50 nM panobinostat (P) and/or 10 nM bortezomib (B). *p = 0.0495; **p = 0.0463. Combination indexes for the combination of panobinostat and bortezomib in renal cancer cells We also used a subcutaneous xenograft mouse model to test the efficacy of the combination therapy in vivo. A 10-day treatment with panobinostat and bortezomib was well tolerated and suppressed tumor growth significantly (Figure 2). The p values at day 12 were 0.0283 for the control group and combination group, 0.0283 for the bortezomib group and combination group, and 0.0472 for the panobinostat group and combination group. The average tumor size at day 15 was 520 ± 175 mm3 (mean ± SE) in the vehicle-treated mice and was 266 ± 39 mm3 in the combination-treated mice. Thus the combination of panobinostat and bortezomib was shown to be effective for suppressing renal cancer growth both in vitro and in vivo. The combination of panobinostat and bortezomib suppressed tumor growth in vivo. A murine subcutaneous tumor model was made using Caki-1 cells, and the control group received intraperitoneal injections of DMSO, while other groups received either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. The 10-day treatment was well tolerated and suppressed tumor growth significantly (mean ± SE; p = 0.0283 at day 12).
Combination of panobinostat and bortezomib induced apoptosis The combination increased the annexin-V fluorescence intensity (up to 19.4-fold compared with control vehicle) (Figure 3A) and also increased the number of the cells in the sub-G1 fraction (up to 70.5%) (Figure 3B). Thus the combination of panobinostat and bortezomib was demonstrated to induce apoptosis in renal cancer cells. The combination of panobinostat and bortezomib induced apoptosis in renal cancer cells. Cells were treated for 48 hours with 50 nM panobinostat with or without 10 nM bortezomib. The combination increased the annexin-V-FITC fluorescence intensity (A) and increased the number of the cells in the sub-G1 fraction (B). Relative annexin-V-FITC fluorescence intensity (control = 1) is shown in the insets. White, control; red, treated. The percentage of cells in the sub-G1 fraction is shown in the graph. Representative results are shown.
Combination of panobinostat and bortezomib induced ER stress and ubiquitinated protein accumulation synergistically The combination induced ER stress synergistically as indicated by the increased expression of ER stress markers such as GRP78, HSP70, ERp44, and (except in 769-P cells) Ero1-Lα (Figure 4A). As expected, the combination induced ubiquitinated protein accumulation synergistically (Figure 4B): in Caki-1 and 769-P cells, 10 nM bortezomib alone did not cause ubiquitinated proteins to accumulate but in combination with 50 nM panobinostat increased the accumulation of ubiquitinated proteins markedly. In ACHN cells, 10 nM bortezomib caused ubiquitinated protein accumulation and the accumulation was synergistically enhanced by 50 nM panobinostat. Acetylation of α-tubulin by panobinostat is consistent with HDAC6 inhibition because α-tubulin is one of the important substrates of HDAC6. Interestingly, the combination also enhanced the acetylation of histone and α-tubulin synergistically in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone. The combination therapy induced ER stress and histone acetylation in renal cancer cells. The 48-hour treatment with the combination of panobinostat and bortezomib induced ER stress synergistically as indicated by the increased expression of GRP78, HSP70, ERp44, and Ero1-Lα (A). It also caused ubiquitinated protein accumulation in all the cell lines synergistically and enhanced histone and also α-tubulin acetylation in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone (B). The dashed lines in the Caki-1 results in parts A and B indicate that the order of the bands has been rearranged from the original gel.
Histone acetylation was a consequence of ubiquitinated protein accumulation We then investigated the relationship between histone acetylation and ubiquitinated protein accumulation. Panobinostat caused histone acetylation in a dose-dependent fashion in all the cell lines but did not induce ubiquitinated protein accumulation (Figure 5A). Bortezomib, on the other hand, caused both ubiquitinated protein accumulation and histone acetylation in a dose-dependent fashion in Caki-1 and ACHN cells but did not cause histone acetylation in 769-P cells (Figure 5B). This is in accordance with the result that the combination did not enhance histone acetylation in 769-P cells despite inducing ubiquitinated protein accumulation in them (Figure 4B). We inferred from these results that the histone acetylation the combination caused in Caki-1 and ACHN cells was a consequence of ubiquitinated protein accumulation. Histone acetylation was a consequence of ubiquitinated protein accumulation. A, 48-hour treatment with panobinostat caused dose-dependent histone acetylation in all the cell lines but did not cause ubiquitinated protein accumulation. B, 48-hour treatment with bortezomib, on the other hand, caused both histone acetylation and ubiquitinated protein accumulation in Caki-1 and ACHN cells and caused only ubiquitinated protein accumulation in 769-P cells. Combination of panobinostat and bortezomib inhibited renal cancer growth synergistically: We first investigated the combined effect of panobinostat and bortezomib on renal cancer cell viability by MTS assay. Panobinostat and bortezomib each inhibited the growth of renal cancer cells in a dose-dependent fashion, and the combination did so more effectively than either did by itself (Figure 1A).
Analysis using the Chou-Talalay method indicated that the effect of the combination was synergistic (combination index <1) in many of the treatment conditions (Table 1). We then investigated whether the combination affects the clonogenic survival of renal cancer cells. Colony formation assay revealed that the combination suppressed colony formation significantly and did so significantly more than did either panobinostat or bortezomib alone (Figure 1B). The combination of panobinostat and bortezomib inhibited renal cancer growth effectively. A, MTS assay results (mean ± SD, n = 6) after cells were treated for 48 hours either with bortezomib or panobinostat alone or with bortezomib and panobinostat together. B, Colony formation assay results (mean ± SD, n = 3) after 1–2 week incubation in control media (C) or media containing 50 nM panobinostat (P) and/or 10 nM bortezomib (B). *p = 0.0495; **p = 0.0463. Combination indexes for the combination of panobinostat and bortezomib in renal cancer cells We also used a subcutaneous xenograft mouse model to test the efficacy of the combination therapy in vivo. A 10-day treatment with panobinostat and bortezomib was well tolerated and suppressed tumor growth significantly (Figure 2). The p values at day 12 were 0.0283 for the control group and combination group, 0.0283 for the bortezomib group and combination group, and 0.0472 for the panobinostat group and combination group. The average tumor size at day 15 was 520 ± 175 mm3 (mean ± SE) in the vehicle-treated mice and was 266 ± 39 mm3 in the combination-treated mice. Thus the combination of panobinostat and bortezomib was shown to be effective for suppressing renal cancer growth both in vitro and in vivo. The combination of panobinostat and bortezomib suppressed tumor growth in vivo. A murine subcutaneous tumor model was made using Caki-1 cells, and the control group received intraperitoneal injections of DMSO, while other groups received either panobinostat (2 mg/kg) or bortezomib (60 μg/kg) or both. Injections were given once a day, 5 days a week, for 2 weeks. The 10-day treatment was well tolerated and suppressed tumor growth significantly (mean ± SE; p = 0.0283 at day 12). Combination of panobinostat and bortezomib induced apoptosis: The combination increased the annexin-V fluorescence intensity (up to 19.4-fold compared with control vehicle) (Figure 3A) and also increased the number of the cells in the sub-G1 fraction (up to 70.5%) (Figure 3B). Thus the combination of panobinostat and bortezomib was demonstrated to induce apoptosis in renal cancer cells. The combination of panobinostat and bortezomib induced apoptosis in renal cancer cells. Cells were treated for 48 hours with 50 nM panobinostat with or without 10 nM bortezomib. The combination increased the annexin-V-FITC fluorescence intensity (A) and increased the number of the cells in the sub-G1 fraction (B). Relative annexin-V-FITC fluorescence intensity (control = 1) is shown in the insets. White, control; red, treated. The percentage of cells in the sub-G1 fraction is shown in the graph. Representative results are shown. Combination of panobinostat and bortezomib induced ER stress and ubiquitinated protein accumulation synergistically: The combination induced ER stress synergistically as indicated by the increased expression of ER stress markers such as GRP78, HSP70, ERp44, and (except in 769-P cells) Ero1-Lα (Figure 4A). 
As expected, the combination induced ubiquitinated protein accumulation synergistically (Figure 4B): in Caki-1 and 769-P cells, 10 nM bortezomib alone did not cause ubiquitinated proteins to accumulate but in combination with 50 nM panobinostat increased the accumulation of ubiquitinated proteins markedly. In ACHN cells, 10 nM bortezomib caused ubiquitinated protein accumulation and the accumulation was synergistically enhanced by 50 nM panobinostat. Acetylation of α-tubulin by panobinostat is consistent with HDAC6 inhibition because α-tubulin is one of the important substrates of HDAC6. Interestingly, the combination also enhanced the acetylation of histone and α-tubulin synergistically in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone. The combination therapy induced ER stress and histone acetylation in renal cancer cells. The 48-hour treatment with the combination of panobinostat and bortezomib induced ER stress synergistically as indicated by the increased expression of GRP78, HSP70, ERp44, and Ero1-Lα (A). It also caused ubiquitinated protein accumulation in all the cell lines synergistically and enhanced histone and also α-tubulin acetylation in Caki-1 and ACHN cells. In 769-P cells, the combination enhanced the acetylation of α-tubulin but not that of histone (B). The dashed lines in the Caki-1 results in parts A and B indicate that the order of the bands has been rearranged from the original gel. Histone acetylation was a consequence of ubiquitinated protein accumulation: We then investigated the relationship between histone acetylation and ubiquitinated protein accumulation. Panobinostat caused histone acetylation in a dose-dependent fashion in all the cell lines but did not induce ubiquitinated protein accumulation (Figure 5A). Bortezomib, on the other hand, caused both ubiquitinated protein accumulation and histone acetylation in a dose-dependent fashion in Caki-1 and ACHN cells but did not cause histone acetylation in 769-P cells (Figure 5B). This is in accordance with the result that the combination did not enhance histone acetylation in 769-P cells despite inducing ubiquitinated protein accumulation in them (Figure 4B). We inferred from these results that the histone acetylation the combination caused in Caki-1 and ACHN cells was a consequence of ubiquitinated protein accumulation. Histone acetylation was a consequence of ubiquitinated protein accumulation. A, 48-hour treatment with panobinostat caused dose-dependent histone acetylation in all the cell lines but did not cause ubiquitinated protein accumulation. B, 48-hour treatment with bortezomib, on the other hand, caused both histone acetylation and ubiquitinated protein accumulation in Caki-1 and ACHN cells and caused only ubiquitinated protein accumulation in 769-P cells. Discussion: Inducing ER stress and ubiquitinated protein accumulation is a novel approach to cancer therapy. The combination of an HDAC inhibitor and bortezomib is one of the combinations that might be expected to do it. The combination of panobinostat and bortezomib has recently been investigated mainly in hematological malignancies [11,12]. It has been reported that the combination of bortezomib and the HDAC inhibitor suberoylanilide hydroxamic acid inhibits renal cancer growth by causing accumulation of ubiquitinated proteins and histone acetylation [13], but that study did not show the relationship between ubiquitinated protein accumulation and histone acetylation. 
In the present study, using panobinostat, a more potent HDAC inhibitor (acting at nanomolar concentrations, whereas suberoylanilide hydroxamic acid acts at micromolar concentrations), we investigated the effect of the bortezomib-panobinostat combination on renal cancer growth as well as further mechanisms of the combination of bortezomib and an HDAC inhibitor. Inhibition of HDAC6 acetylates HSP90, abrogating its function and increasing the amount of unfolded proteins [4]. We think that bortezomib inhibits degradation of unfolded proteins increased by panobinostat, which induces ER stress and ubiquitinated protein accumulation. Accumulation of unfolded proteins, or ER stress, activates a signaling pathway known as the unfolded protein response (UPR), which leads to increased transcription of ER folding and quality-control factors [14]. In the present study we showed the induction of ER stress by detecting the increased expression of UPR-related proteins: GRP78, HSP70, Ero1-Lα, and ERp44. GRP78 is a master regulator for ER stress because of its role as a major ER chaperone as well as its ability to control the activation of UPR signaling [15]. HSP70 is a molecular chaperone localized in the cytoplasm but associated with the regulation of the UPR by forming a stable protein complex with the cytosolic domain of inositol-requiring enzyme 1α [16]. Ero1-Lα regulates oxidative protein folding by selectively oxidizing protein disulfide isomerase [17], one of the key players in the control of disulfide bond formation [18]. ERp44 forms mixed disulfides with Ero1-Lα and may be involved in the control of oxidative protein folding [19]. The increased expression of these ER stress-related proteins thus confirmed that ER stress was induced by the combination of panobinostat and bortezomib. Acetylation of α-tubulin, one of the important substrates of HDAC6 [20], is consistent with the inhibition of HDAC6 by panobinostat. Interestingly, panobinostat itself did not cause marked ER stress even though it inhibited HDAC6 function. This may be because the unfolded proteins increased by panobinostat can be degraded immediately by the proteasome if its function is not suppressed. This explanation is consistent with the result that panobinostat induced marked ER stress only when combined with bortezomib. The combination induced ubiquitinated protein accumulation synergistically. This is because panobinostat increased unfolded proteins, which were then ubiquitinated, and bortezomib inhibited their degradation. The ubiquitinated protein accumulation is also in accordance with the above-discussed enhanced ER stress induced by the combination because ER stress is induced by the accumulation of unfolded proteins in the cell, and many of these unfolded proteins are ubiquitinated. Not only are ubiquitinated proteins themselves toxic to tumor cells [3], some of them may be important molecules for cancer cell survival (such as transcription factors and signal transduction molecules) that have lost their function because of unfolding and ubiquitination, presumably leading to the inhibition of multiple signal transduction pathways. Furthermore, the inhibition of NF-kB is thought to play an important role in the combination therapy with panobinostat and bortezomib because of the accumulation of undegraded IkB, a suppressor of NF-kB [21]. Jiang XJ et al. 
reported that the combination of panobinostat and bortezomib activated caspases and down-regulated antiapoptotic proteins such as XIAP and Bcl-2 through inhibition of the AKT and NF-kB pathways [11]. The combination is thus thought to inhibit cancer growth by diverse mechanisms other than the induction of ER stress and ubiquitinated protein accumulation. In Caki-1 and ACHN cells the combination of panobinostat and bortezomib not only caused ubiquitinated protein accumulation but also enhanced histone acetylation. In these cell lines, panobinostat alone caused histone acetylation but not ubiquitinated protein accumulation, whereas bortezomib alone induced both ubiquitinated protein accumulation and histone acetylation. We therefore think the histone acetylation in these cell lines is a consequence of ubiquitinated protein accumulation, which is consistent with the results of a previous study using prostate cancer cells [22]. In 769-P cells, on the other hand, the combination enhanced ubiquitinated protein accumulation but not histone acetylation. This is, however, also in accordance with the result that bortezomib alone did not cause histone acetylation in 769-P cells. In Caki-1 and ACHN cells, HDAC function decreased by ubiquitination may be one explanation. In 769-P cells, bortezomib alone seems to even decrease histone acetylation. Ubiquitination may result in the HDAC activity in 769-P cells being higher than the histone acetyltransferase activity there. However, further study will be needed to clarify the exact mechanism of this decreased histone acetylation. The combination of panobinostat and bortezomib has also been tested clinically, mainly in patients with hematological malignancies. In the most recent phase-II study enrolling 55 patients with relapsed and bortezomib-refractory myeloma [23], the patients were treated with eight 3-week cycles of 20 mg panobinostat three times a week and 1.3 mg/m2 bortezomib twice a week with 20 mg of dexamethasone four times a week on weeks 1 and 2. If the patients showed clinical benefit, then they were treated with 6-week cycles of panobinostat three times a week and bortezomib once a week on weeks 1, 2, and 4 with dexamethasone on the days of and after bortezomib. In that study the overall response rate was 34.5%, the clinical benefit rate was 52.7%, and grade 3 or 4 adverse events were thrombocytopenia (63.6%), fatigue (20.0%), and diarrhea (20.0%). Two limitations of our in-vivo study are that it could not provide information about whether the doses we used in mice were equivalent to those used in humans and that it lacked a proper assessment of side effects. This study is, however, the first to show the beneficial combined effect of panobinostat and bortezomib in renal cancer cells, and it provides a basis for testing the combination in clinical settings. Conclusions: Panobinostat inhibits renal cancer growth by synergizing with bortezomib to induce ER stress and ubiquitinated protein accumulation. Histone acetylation may be another important mechanism of action. This is the first study to demonstrate the combination’s effect on renal cancer cells both in vitro and in vivo, and it provides a basis for testing the combination in patients with advanced renal cancer. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: AS contributed to design and interpretation of all experiments, drafting of the manuscript and execution of western blotting, colony formation assay and animal experiments. 
TA collected and assembled data and performed MTS assay, cell cycle analysis, annexin-V assay and animal experiments. MI participated in the study design, performed statistical analysis and helped to draft the manuscript. KI contributed to design and interpretation of all experiments and drafting of the manuscript. TA participated in the study design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2490/14/71/prepub
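The synergy analysis above relied on the Chou-Talalay method as implemented in CalcuSyn. As a rough, non-authoritative illustration of what that combination index computes, the sketch below applies the median-effect equation to placeholder single-agent parameters and combination doses; every number, parameter name, and fitted value shown is an assumption for demonstration, not a value reported in the paper.

```python
# Minimal sketch of the Chou-Talalay combination index (CI); CI < 1 is read as
# synergy. Dm (median-effect dose) and m (slope) would normally be fitted from
# each drug's single-agent dose-response data; all numbers here are placeholders.
def dose_for_effect(fa: float, dm: float, m: float) -> float:
    """Single-agent dose needed to reach affected fraction fa under the
    median-effect equation: D = Dm * (fa / (1 - fa)) ** (1 / m)."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, d2, dm1, m1, dm2, m2):
    """CI = d1/Dx1 + d2/Dx2 for doses d1 and d2 used together to reach effect fa."""
    return d1 / dose_for_effect(fa, dm1, m1) + d2 / dose_for_effect(fa, dm2, m2)

# Hypothetical example: 50 nM of drug 1 plus 10 nM of drug 2 producing a 60%
# reduction in viability, with assumed single-agent parameters.
ci = combination_index(fa=0.60, d1=50.0, d2=10.0, dm1=120.0, m1=1.3, dm2=25.0, m2=1.1)
print(f"CI = {ci:.2f} ({'synergistic' if ci < 1 else 'additive or antagonistic'})")
```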
Background: Inducing endoplasmic reticulum (ER) stress is a novel strategy used to treat malignancies. Inhibition of histone deacetylase (HDAC) 6 by the HDAC inhibitor panobinostat hinders the refolding of unfolded proteins by increasing the acetylation of heat shock protein 90. We investigated whether combining panobinostat with the proteasome inhibitor bortezomib would kill cancer cells effectively by inhibiting the degradation of these unfolded proteins, thereby causing ubiquitinated proteins to accumulate and induce ER stress. Methods: Caki-1, ACHN, and 769-P cells were treated with panobinostat and/or bortezomib. Cell viability, clonogenicity, and induction of apoptosis were evaluated. The in vivo efficacy of the combination was evaluated using a murine subcutaneous xenograft model. The combination-induced ER stress and ubiquitinated protein accumulation were assessed. Results: The combination of panobinostat and bortezomib induced apoptosis and inhibited renal cancer growth synergistically (combination indexes <1). It also suppressed colony formation significantly (p <0.05). In a murine subcutaneous tumor model, a 10-day treatment was well tolerated and inhibited tumor growth significantly (p <0.05). Enhanced acetylation of the HDAC6 substrate alpha-tubulin was consistent with the suppression of HDAC6 activity by panobinostat, and the combination was shown to induce ER stress and ubiquitinated protein accumulation synergistically. Conclusions: Panobinostat inhibits renal cancer growth by synergizing with bortezomib to induce ER stress and ubiquitinated protein accumulation. The current study provides a basis for testing the combination in patients with advanced renal cancer.
Background: A new therapeutic approach to advanced renal cancer is urgently needed because there is presently no curative treatment, and one innovative treatment strategy used against cancer is to induce endoplasmic reticulum (ER) stress and ubiquitinated protein accumulation [1]. Protein unfolding rates that exceed the capacity of protein chaperones cause ER stress, and chronic or unresolved ER stress can lead to apoptosis [2]. On the other hand, unfolded proteins that fail to be repaired by chaperones are then ubiquitinated and the accumulation of these ubiquitinated proteins is also cytotoxic [3]. Histone deacetylase (HDAC) 6 inhibition acetylates heat shock protein (HSP) 90 and suppresses its function as a molecular chaperon, increasing the amount of unfolded proteins in the cell [4]. Because these unfolded proteins are then ubiquitinated and degraded by the proteasome [5], HDAC6 inhibition alone is thought to cause no or only slight ER stress and ubiquitinated protein accumulation if the proteasome function is normal. We thought that combining an HDAC inhibitor with the proteasome inhibitor bortezomib would cause ER stress and ubiquitinated protein accumulation synergistically because the increased ubiquitinated proteins would not be degraded by the inhibited proteasome. Panobinostat is a novel HDAC inhibitor that has been clinically tested not only in patients with hematological malignancies [6,7] but also patients with solid tumors, including renal cell carcinoma [8,9]. Bortezomib has been approved by the FDA and widely used for the treatment of multiple myeloma [10]. In the present study using renal cancer cells, we investigated whether the combination of panobinostat and bortezomib induces ER stress and ubiquitinated protein accumulation, and kills cancer cells effectively in vitro and in vivo. Conclusions: Panobinostat inhibits renal cancer growth by synergizing with bortezomib to induce ER stress and ubiquitinated protein accumulation. Histone acetylation may be another important mechanism of action. This is the first study to demonstrate the combination’s effect on renal cancer cells both in vitro and in vivo, and it provides a basis for testing the combination in patients with advanced renal cancer.
Background: Inducing endoplasmic reticulum (ER) stress is a novel strategy used to treat malignancies. Inhibition of histone deacetylase (HDAC) 6 by the HDAC inhibitor panobinostat hinders the refolding of unfolded proteins by increasing the acetylation of heat shock protein 90. We investigated whether combining panobinostat with the proteasome inhibitor bortezomib would kill cancer cells effectively by inhibiting the degradation of these unfolded proteins, thereby causing ubiquitinated proteins to accumulate and induce ER stress. Methods: Caki-1, ACHN, and 769-P cells were treated with panobinostat and/or bortezomib. Cell viability, clonogenicity, and induction of apoptosis were evaluated. The in vivo efficacy of the combination was evaluated using a murine subcutaneous xenograft model. The combination-induced ER stress and ubiquitinated protein accumulation were assessed. Results: The combination of panobinostat and bortezomib induced apoptosis and inhibited renal cancer growth synergistically (combination indexes <1). It also suppressed colony formation significantly (p <0.05). In a murine subcutaneous tumor model, a 10-day treatment was well tolerated and inhibited tumor growth significantly (p <0.05). Enhanced acetylation of the HDAC6 substrate alpha-tubulin was consistent with the suppression of HDAC6 activity by panobinostat, and the combination was shown to induce ER stress and ubiquitinated protein accumulation synergistically. Conclusions: Panobinostat inhibits renal cancer growth by synergizing with bortezomib to induce ER stress and ubiquitinated protein accumulation. The current study provides a basis for testing the combination in patients with advanced renal cancer.
8,093
281
[ 311, 71, 42, 144, 120, 203, 177, 60, 513, 181, 310, 219, 10, 105, 16 ]
19
[ "cells", "combination", "bortezomib", "panobinostat", "protein", "ubiquitinated", "accumulation", "histone", "acetylation", "panobinostat bortezomib" ]
[ "unfolded proteins er", "inhibition hdac6 acetylates", "proteasome hdac6", "unfolded proteins ubiquitinated", "deacetylase hdac inhibition" ]
[CONTENT] Panobinostat | Bortezomib | Endoplasmic reticulum stress | Ubiquitinated protein | Histone acetylation | Renal cancer | Combination therapy [SUMMARY]
[CONTENT] Panobinostat | Bortezomib | Endoplasmic reticulum stress | Ubiquitinated protein | Histone acetylation | Renal cancer | Combination therapy [SUMMARY]
[CONTENT] Panobinostat | Bortezomib | Endoplasmic reticulum stress | Ubiquitinated protein | Histone acetylation | Renal cancer | Combination therapy [SUMMARY]
[CONTENT] Panobinostat | Bortezomib | Endoplasmic reticulum stress | Ubiquitinated protein | Histone acetylation | Renal cancer | Combination therapy [SUMMARY]
[CONTENT] Panobinostat | Bortezomib | Endoplasmic reticulum stress | Ubiquitinated protein | Histone acetylation | Renal cancer | Combination therapy [SUMMARY]
[CONTENT] Panobinostat | Bortezomib | Endoplasmic reticulum stress | Ubiquitinated protein | Histone acetylation | Renal cancer | Combination therapy [SUMMARY]
[CONTENT] Acetylation | Animals | Antineoplastic Agents | Antineoplastic Combined Chemotherapy Protocols | Apoptosis | Boronic Acids | Bortezomib | Cell Line, Tumor | Disease Models, Animal | Drug Synergism | Endoplasmic Reticulum Stress | Histone Deacetylase Inhibitors | Histones | Humans | Hydroxamic Acids | Indoles | Kidney Neoplasms | Mice, Inbred BALB C | Panobinostat | Pyrazines | Ubiquitinated Proteins [SUMMARY]
[CONTENT] Acetylation | Animals | Antineoplastic Agents | Antineoplastic Combined Chemotherapy Protocols | Apoptosis | Boronic Acids | Bortezomib | Cell Line, Tumor | Disease Models, Animal | Drug Synergism | Endoplasmic Reticulum Stress | Histone Deacetylase Inhibitors | Histones | Humans | Hydroxamic Acids | Indoles | Kidney Neoplasms | Mice, Inbred BALB C | Panobinostat | Pyrazines | Ubiquitinated Proteins [SUMMARY]
[CONTENT] Acetylation | Animals | Antineoplastic Agents | Antineoplastic Combined Chemotherapy Protocols | Apoptosis | Boronic Acids | Bortezomib | Cell Line, Tumor | Disease Models, Animal | Drug Synergism | Endoplasmic Reticulum Stress | Histone Deacetylase Inhibitors | Histones | Humans | Hydroxamic Acids | Indoles | Kidney Neoplasms | Mice, Inbred BALB C | Panobinostat | Pyrazines | Ubiquitinated Proteins [SUMMARY]
[CONTENT] Acetylation | Animals | Antineoplastic Agents | Antineoplastic Combined Chemotherapy Protocols | Apoptosis | Boronic Acids | Bortezomib | Cell Line, Tumor | Disease Models, Animal | Drug Synergism | Endoplasmic Reticulum Stress | Histone Deacetylase Inhibitors | Histones | Humans | Hydroxamic Acids | Indoles | Kidney Neoplasms | Mice, Inbred BALB C | Panobinostat | Pyrazines | Ubiquitinated Proteins [SUMMARY]
[CONTENT] Acetylation | Animals | Antineoplastic Agents | Antineoplastic Combined Chemotherapy Protocols | Apoptosis | Boronic Acids | Bortezomib | Cell Line, Tumor | Disease Models, Animal | Drug Synergism | Endoplasmic Reticulum Stress | Histone Deacetylase Inhibitors | Histones | Humans | Hydroxamic Acids | Indoles | Kidney Neoplasms | Mice, Inbred BALB C | Panobinostat | Pyrazines | Ubiquitinated Proteins [SUMMARY]
[CONTENT] Acetylation | Animals | Antineoplastic Agents | Antineoplastic Combined Chemotherapy Protocols | Apoptosis | Boronic Acids | Bortezomib | Cell Line, Tumor | Disease Models, Animal | Drug Synergism | Endoplasmic Reticulum Stress | Histone Deacetylase Inhibitors | Histones | Humans | Hydroxamic Acids | Indoles | Kidney Neoplasms | Mice, Inbred BALB C | Panobinostat | Pyrazines | Ubiquitinated Proteins [SUMMARY]
[CONTENT] unfolded proteins er | inhibition hdac6 acetylates | proteasome hdac6 | unfolded proteins ubiquitinated | deacetylase hdac inhibition [SUMMARY]
[CONTENT] unfolded proteins er | inhibition hdac6 acetylates | proteasome hdac6 | unfolded proteins ubiquitinated | deacetylase hdac inhibition [SUMMARY]
[CONTENT] unfolded proteins er | inhibition hdac6 acetylates | proteasome hdac6 | unfolded proteins ubiquitinated | deacetylase hdac inhibition [SUMMARY]
[CONTENT] unfolded proteins er | inhibition hdac6 acetylates | proteasome hdac6 | unfolded proteins ubiquitinated | deacetylase hdac inhibition [SUMMARY]
[CONTENT] unfolded proteins er | inhibition hdac6 acetylates | proteasome hdac6 | unfolded proteins ubiquitinated | deacetylase hdac inhibition [SUMMARY]
[CONTENT] unfolded proteins er | inhibition hdac6 acetylates | proteasome hdac6 | unfolded proteins ubiquitinated | deacetylase hdac inhibition [SUMMARY]
[CONTENT] cells | combination | bortezomib | panobinostat | protein | ubiquitinated | accumulation | histone | acetylation | panobinostat bortezomib [SUMMARY]
[CONTENT] cells | combination | bortezomib | panobinostat | protein | ubiquitinated | accumulation | histone | acetylation | panobinostat bortezomib [SUMMARY]
[CONTENT] cells | combination | bortezomib | panobinostat | protein | ubiquitinated | accumulation | histone | acetylation | panobinostat bortezomib [SUMMARY]
[CONTENT] cells | combination | bortezomib | panobinostat | protein | ubiquitinated | accumulation | histone | acetylation | panobinostat bortezomib [SUMMARY]
[CONTENT] cells | combination | bortezomib | panobinostat | protein | ubiquitinated | accumulation | histone | acetylation | panobinostat bortezomib [SUMMARY]
[CONTENT] cells | combination | bortezomib | panobinostat | protein | ubiquitinated | accumulation | histone | acetylation | panobinostat bortezomib [SUMMARY]
[CONTENT] ubiquitinated | stress | er | er stress | protein | proteins | proteasome | accumulation | er stress ubiquitinated protein | stress ubiquitinated protein accumulation [SUMMARY]
[CONTENT] cells | cell | nm | assay | day | panobinostat | bortezomib | software | cells plated | ma [SUMMARY]
[CONTENT] combination | acetylation | cells | panobinostat | accumulation | ubiquitinated | bortezomib | protein accumulation | ubiquitinated protein accumulation | ubiquitinated protein [SUMMARY]
[CONTENT] renal | cancer | renal cancer | action study demonstrate | action study | action | demonstrate combination effect renal | demonstrate combination effect | demonstrate combination | demonstrate [SUMMARY]
[CONTENT] cells | combination | bortezomib | panobinostat | protein | ubiquitinated | accumulation | acetylation | ubiquitinated protein | ubiquitinated protein accumulation [SUMMARY]
[CONTENT] cells | combination | bortezomib | panobinostat | protein | ubiquitinated | accumulation | acetylation | ubiquitinated protein | ubiquitinated protein accumulation [SUMMARY]
[CONTENT] ER ||| HDAC | 6 | HDAC | 90 ||| ER [SUMMARY]
[CONTENT] Caki-1 | ACHN | 769 ||| ||| ||| ER [SUMMARY]
[CONTENT] ||| ||| 10-day ||| HDAC6 | HDAC6 | ER [SUMMARY]
[CONTENT] ER ||| [SUMMARY]
[CONTENT] ER ||| HDAC | 6 | HDAC | 90 ||| ER ||| Caki-1 | ACHN | 769 ||| ||| ||| ER ||| ||| ||| ||| 10-day ||| HDAC6 | HDAC6 | ER ||| ER ||| [SUMMARY]
[CONTENT] ER ||| HDAC | 6 | HDAC | 90 ||| ER ||| Caki-1 | ACHN | 769 ||| ||| ||| ER ||| ||| ||| ||| 10-day ||| HDAC6 | HDAC6 | ER ||| ER ||| [SUMMARY]
Investigation of high-resolution computed tomographic (HRCT) outcomes associated with chronic pulmonary microaspiration (CPM) in Tehran and Zahedan, Iran.
34394230
In patients with chronic pulmonary microaspiration (CPM), the recognition of high-resolution computed tomographic (HRCT) findings and their pattern is important.
BACKGROUND
This descriptive study enrolled 100 consecutive patients with CPM who underwent HRCT of the lungs between 2017 and 2018 in Tehran and Zahedan hospitals and private centers. The required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were then reported by two radiologists.
MATERIALS AND METHODS
The most common finding was bronchial thickening (33.6% of cases), followed by ground-glass opacity (12.4%), emphysema (11.1%), and bronchiectasis (8.5%). In addition, the most common HRCT findings were located in the left lower lobe (LLL) (37.1%), followed by the right lower lobe (RLL) (35.9%), right upper lobe (RUL) (6.2%), and left upper lobe (LUL) (6%).
RESULTS
Our data showed that the most common HRCT findings were bronchial thickening, ground-glass opacity, emphysema, and bronchiectasis, and that these findings were predominantly located in the LLL, RLL, RUL, and LUL, indicating a high tendency to involve dependent areas.
CONCLUSION
[ "Bronchiectasis", "Chronic Disease", "Emphysema", "Female", "Humans", "Lung", "Male", "Middle Aged", "Pneumonia, Aspiration", "Retrospective Studies", "Tomography, X-Ray Computed" ]
8351860
Introduction
Pulmonary aspirations (PA) are defined as the inhalation of oropharyngeal secretions or gastric matter toward the larynx and lower respiratory system. Aspiration-induced clinical syndromes, such as aspiration pneumonia and pneumonitis, are related to the kind and volume of aspirated material, the frequency of aspiration, and the host's response to the aspirated substance 1. Chronic aspiration pneumonia can lead to changes associated with microaspiration or macroaspiration of orogastric content over time. Nearly half of adults aspirate a small amount of oropharyngeal secretions during sleep, and co-morbidities such as scleroderma, cerebrovascular disease, and neurodegenerative diseases are associated with an increased risk of microaspiration 1–4. Chronic pulmonary microaspiration (CPM) is one of the complications that has received much attention despite its low incidence (1.4 to 6 per 10,000). Aspiration pneumonia accounts for 5–15% of community-acquired pneumonia cases 1,5–8. Aspiration pneumonia is one of the leading causes of death in subjects with dysphagia, with an estimated 300,000 to 6,000,000 cases annually in the United States 3. Clinical outcomes of CPM range from uncomplicated changes to severe respiratory problems and even death. Depending on the type of aspirated material, the disease ranges from mild pneumonia to severe respiratory distress syndrome, loss of cardiovascular responses, and renal insufficiency. Furthermore, associated risk factors can lead to mortality of up to 76%, and the monetary burden of the disease should also be taken into consideration. As a result, its diagnosis is of great importance and can significantly reduce its complications and treatment costs 9–11. Bronchoalveolar lavage (BAL) is used to evaluate patients suspected of CPM. The use of HRCT has reduced the clinical use of BAL. HRCT identifies specific imaging patterns that are associated with CPM. In the past decade, HRCT has been useful in making a specific diagnosis or limiting the differential diagnosis of pulmonary disease 12. Despite the usefulness of HRCT, BAL is still used in cases where the clinical diagnosis of microaspiration is suspected but HRCT is normal. Even when CPM is diagnosed radiologically in suspected patients, cellular analysis of the BAL is used to confirm HRCT findings, and in cases without typical clinical symptoms, to guide correct management of patients. Of course, BAL is not capable of excluding microscopic abnormalities, and in this situation HRCT may be helpful. Therefore, considering the overlap in the efficacy of these two tests, more studies are needed to clarify the efficiency and indications of HRCT. This study aimed to examine the findings of HRCT in patients with CPM.
Methods
Ethical Committee The study protocol was approved by the Ethics Committee of Zahedan University of Medical Sciences. All procedures performed in accordance with the ethical standards of the institution and/or national research committee and with the 1964 Helsinki Declaration for human participants The study protocol was approved by the Ethics Committee of Zahedan University of Medical Sciences. All procedures performed in accordance with the ethical standards of the institution and/or national research committee and with the 1964 Helsinki Declaration for human participants Patients In this descriptive study, 100 patients with CPM were enrolled on the basis of the pathology archives in Tehran and Zahedan hospitals and private centers from 2017 to 2018. The inclusion criteria were: i) patients with pulmonary microaspiration, ii) all eligible patients (aged ≥18 years) with CPM based on the histology. In addition, exclusion criteria included i) pregnant women, ii) children, iii) coagulation disorders, iv) cardiovascular instability. The required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were then reported by two radiologists, who were blinded to the aspiration (e.g., consolidation, collapse, scar, nodule, etc.). According to Cardasis et al. (2014), the observation of lipoid pneumonia, giant cell, foreign material, and granuloma was considered as CPM 11. In other words, microaspiration was approved if this evidence were present. In this descriptive study, 100 patients with CPM were enrolled on the basis of the pathology archives in Tehran and Zahedan hospitals and private centers from 2017 to 2018. The inclusion criteria were: i) patients with pulmonary microaspiration, ii) all eligible patients (aged ≥18 years) with CPM based on the histology. In addition, exclusion criteria included i) pregnant women, ii) children, iii) coagulation disorders, iv) cardiovascular instability. The required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were then reported by two radiologists, who were blinded to the aspiration (e.g., consolidation, collapse, scar, nodule, etc.). According to Cardasis et al. (2014), the observation of lipoid pneumonia, giant cell, foreign material, and granuloma was considered as CPM 11. In other words, microaspiration was approved if this evidence were present.
Results
In this research, a total of 100 patients were included; the findings presented herein indicated that the most common CT detections were bronchial thickening (33.6%), followed by ground-glass opacity (12.4%), emphysema (11.1%), and bronchiectasis (8.5%) (Fig 1). The frequency distribution of HRCT findings in surveyed patients (BT: Bronchial thickening, EP: Emphysema, GG: Ground-glass, HH: Hiatal hernia, GGO: Ground-glass opacities, BCH: Bronchiolectasis, AT: Atelectasis, CN: Centrilobular nodule, LO: Linear opacities, CS: Consolidation, TB: Traction bronchiectasis, ATP: Air trapping, HC: Honeycombing, DIP: Desquamative interstitial pneumonia, IS: Interface sign, BM: Bronchomalacia, BWT: Bronchial wall thickening, S: Scar, F: Fibrosis, TM: Tracheomalacia, AL: Alveolitis, TI: Thickening of interlobular, and VIP-P: VIP pattern) As shown in figure 2, the highest frequency of HRCT findings was found in the LLL (37.1%), followed by the RLL (35.9%), RUL (6.2%), and LUL (6%). The findings revealed that emphysema was markedly more prevalent in the RUL, but the other findings were more prevalent in the LLL and RLL. The analysis of the findings showed that the frequency of location according to the type of HRCT finding was statistically significant (P = 0.0001; Fig 2). The frequency distribution of findings location in the investigated patients. LLL, left lower lobe; LNG, lingular segment; LUL, left upper lobe; ML, middle lobe; RLL, right lower lobe; RUL, right upper lobe.
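The location-by-finding comparison reported above (P = 0.0001) was performed in SPSS. As a hedged illustration of how such a comparison could be reproduced, the sketch below runs a chi-square test on a finding-by-lobe contingency table; the counts are invented placeholders, not the study data.

```python
# Hedged illustration only: the counts below are made up to show the shape of
# the analysis; the study itself ran its chi-square tests in SPSS version 24.
# Rows are findings, columns are lobes (LLL, RLL, RUL, LUL).
from scipy.stats import chi2_contingency

observed = [
    [30, 28, 4, 5],  # bronchial thickening
    [12, 10, 2, 3],  # ground-glass opacity
    [3, 4, 9, 2],    # emphysema
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```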
Conclusion
In summary, it is concluded that the most common HRCT findings in patients with CPM include bronchial thickening, ground-glass opacity, emphysema, and bronchiectasis, and that their most common locations were the LLL, RLL, RUL, and LUL. Further studies in other centers and hospitals, with comparison against control groups, are needed, and studies of other variables such as age and gender are suggested to achieve more conclusive results.
[ "Ethical Committee", "Patients", "Statistical analysis" ]
[ "The study protocol was approved by the Ethics Committee of Zahedan University of Medical Sciences. All procedures performed in accordance with the ethical standards of the institution and/or national research committee and with the 1964 Helsinki Declaration for human participants", "In this descriptive study, 100 patients with CPM were enrolled on the basis of the pathology archives in Tehran and Zahedan hospitals and private centers from 2017 to 2018. The inclusion criteria were: i) patients with pulmonary microaspiration, ii) all eligible patients (aged ≥18 years) with CPM based on the histology. In addition, exclusion criteria included i) pregnant women, ii) children, iii) coagulation disorders, iv) cardiovascular instability.\nThe required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were then reported by two radiologists, who were blinded to the aspiration (e.g., consolidation, collapse, scar, nodule, etc.). According to Cardasis et al. (2014), the observation of lipoid pneumonia, giant cell, foreign material, and granuloma was considered as CPM 11. In other words, microaspiration was approved if this evidence were present.", "Frequencies are indicated as percentages for qualitative variables. Correlative analysis was conducted with SPSS software version 24. Differences between the frequencies of location (tendency to involve the dependent areas) were determined using χ2 tests for qualitative variables. A p-value <0.05 was considered to be statistically significant." ]
[ null, null, null ]
[ "Introduction", "Methods", "Ethical Committee", "Patients", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ "Pulmonary aspirations (PA) are defined as the inhalation of oropharyngeal secretions or gastric matter toward the larynx and lower respiratory system. Aspiration-induced clinical syndrome, such as aspiration and pneumonitis are defined to be related to the kinds and volume of aspirated material, aspiration frequency, and the host's response to aspirated substance 1. Chronic aspiration pneumonia can lead to changes associated with microaspiration or macroaspiration of orogastric content with time. Nearly half of the adults aspirate a small amount of oropharyngeal secretions during sleep, and an increased risk of microaspiration can be associated with other co-morbidities such as scleroderma, cerebrovascular disease and neurodegenerative diseases, an increased risk of microaspiration 1–4. Chronic pulmonary microaspiration (CPM) is one of the complications which have received much attention despite its low incidence (1.4 to 6 per 10,000). Aspiration pneumonia has been accounted for 5–15% of community-acquired pneumonia cases 1,5–8. Aspiration pneumonia is one of the most leading causes of death in subjects with dysphagia, with an estimated case between 300,000 and 6,000,000 annually in the United States 3\nClinical outcomes of CPM are a range of uncomplicated changes to severe respiratory problems and even death. As a matter of fact, regarding the different types of aspirated materials, the range of the disease is ranged from mild pneumonia to severe respiratory distress syndrome, and lack of cardiovascular responses and renal insufficiency. Furthermore, other associated-risk factors will lead to deaths of up to 76%, as well as monetary burden of the disease should be taken into consideration. As a result, its diagnosis is of great importance and it can significantly reduce its complications and treatment costs 9–11. Bronchoalveolar lavage (BAL) is used to evaluate patients suspected of CPM. The use of HRCT has reduced the clinical use of BAL. HRCT specifies special imaging patterns that are associated with CPM. In the past decade, HRCT has been useful in making a specific diagnosis or limiting the differential diagnosis of pulmonary disease 12. Despite the usefulness of HRCT, BAL is still used in cases, where the clinical diagnosis of microaspiration is suspected, but HRCT is normal. In spite of the radiologic diagnosis of CPM in suspected patients, cellular analysis of the BAL is used to confirm HRCT findings and cases without typical clinical symptoms for correct management of patients.\nOf course, BAL is not capable of excluding microscopic abnormalities, and in this case HRCT may be helpful. Therefore, considering the overlaps in the efficacy of these two tests, more studies are needed to clarify the efficiency and indication of HRCT. This study was aimed to examine the findings of a HRCT in patients with CPM.", "Ethical Committee The study protocol was approved by the Ethics Committee of Zahedan University of Medical Sciences. All procedures performed in accordance with the ethical standards of the institution and/or national research committee and with the 1964 Helsinki Declaration for human participants\nThe study protocol was approved by the Ethics Committee of Zahedan University of Medical Sciences. 
All procedures performed in accordance with the ethical standards of the institution and/or national research committee and with the 1964 Helsinki Declaration for human participants\nPatients In this descriptive study, 100 patients with CPM were enrolled on the basis of the pathology archives in Tehran and Zahedan hospitals and private centers from 2017 to 2018. The inclusion criteria were: i) patients with pulmonary microaspiration, ii) all eligible patients (aged ≥18 years) with CPM based on the histology. In addition, exclusion criteria included i) pregnant women, ii) children, iii) coagulation disorders, iv) cardiovascular instability.\nThe required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were then reported by two radiologists, who were blinded to the aspiration (e.g., consolidation, collapse, scar, nodule, etc.). According to Cardasis et al. (2014), the observation of lipoid pneumonia, giant cell, foreign material, and granuloma was considered as CPM 11. In other words, microaspiration was approved if this evidence were present.\nIn this descriptive study, 100 patients with CPM were enrolled on the basis of the pathology archives in Tehran and Zahedan hospitals and private centers from 2017 to 2018. The inclusion criteria were: i) patients with pulmonary microaspiration, ii) all eligible patients (aged ≥18 years) with CPM based on the histology. In addition, exclusion criteria included i) pregnant women, ii) children, iii) coagulation disorders, iv) cardiovascular instability.\nThe required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were then reported by two radiologists, who were blinded to the aspiration (e.g., consolidation, collapse, scar, nodule, etc.). According to Cardasis et al. (2014), the observation of lipoid pneumonia, giant cell, foreign material, and granuloma was considered as CPM 11. In other words, microaspiration was approved if this evidence were present.", "The study protocol was approved by the Ethics Committee of Zahedan University of Medical Sciences. All procedures performed in accordance with the ethical standards of the institution and/or national research committee and with the 1964 Helsinki Declaration for human participants", "In this descriptive study, 100 patients with CPM were enrolled on the basis of the pathology archives in Tehran and Zahedan hospitals and private centers from 2017 to 2018. The inclusion criteria were: i) patients with pulmonary microaspiration, ii) all eligible patients (aged ≥18 years) with CPM based on the histology. In addition, exclusion criteria included i) pregnant women, ii) children, iii) coagulation disorders, iv) cardiovascular instability.\nThe required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were then reported by two radiologists, who were blinded to the aspiration (e.g., consolidation, collapse, scar, nodule, etc.). According to Cardasis et al. (2014), the observation of lipoid pneumonia, giant cell, foreign material, and granuloma was considered as CPM 11. In other words, microaspiration was approved if this evidence were present.", "Frequencies are indicated as percentages for qualitative variables. Correlative analysis was conducted with SPSS software version 24. 
Differences between the frequencies of location (tendency to involve the dependent areas) were determined using χ2 tests for qualitative variables. A p-value <0.05 was considered to be statistically significant.", "In this research, a total of 100 patients were contained; the finding presented herein indicated that the most common CT detections were bronchial thickening (33.6%), followed by Ground-Glass Opacity (12.4%), Emphysema (11.1%), and Bronchiectasis (8.5%), (Fig 1).\nThe frequency distribution of HRCT findings in surveyed patients (BT: Bronchial thickening, EP: Emphysema, GG: Ground-glass, HH: Hiatal hernia, GGO: Ground-glass opacities, BCH: Bronchiolectasis, AT: Atelectasis, CN: Centrilobular nodule, LO: Linear opacities, CS: consolidation, TB: Traction bronchiectasis, ATP: Air trapping, HC: Honeycombing, DIP: Desquamative interstitial pneumonia, IS: Interface sign, BM: Bronchomalacia, BWT: Bronchial wall thickening, S: SCAR, F: Fibrosis, TM: Tracheomalacia, AL: Alveolitis, TI: Thickening of interlobular, and VIP-P: VIP pattern)\nAs shown in figure 2, the highest HRCT findings were found to be LLL (37.1%), followed by RLL (35.9%), RUL (6.2%), and LUL (6%). The findings revealed that emphysema was highly more prevalent in RUL, but other findings were more prevalent in LLL and RLL. The analysis of the findings showed that the frequency of location according to the type of HRCT findings was statistically significant (P = 0.0001; Fig 2).\nThe frequency distribution of findings location in the investigated patients LLL, left lower lobe; LNG, lingular segment; LUL, left upper lobe; ML, middle lobe; RLL, right lower lobe. RUL: right upper lobe", "Silent microaspiration has been revealed to be involved in different lung diseases (e.g., chronic bronchiolar and interstitial pulmonary disease, lipoid pneumonia, inflammatory pneumonitis, post-transplantation bronchiolitis, etc.,), 13–17. For instance, silent microaspiration of exogenous lipid can be a causative factor of lipoid pneumonia, leading to a chronic inflammatory pneumonitis and fibrosis 15\nHRCT provides is remarkably capable of improving the sensitivity and specificity of clinical and histopathological assessment 18. HRCT can be of great importance in a united method of detection where it can be capable of providing evidence for a various detection and non-invasive definitive detection 19. Despite the radiological diagnosis of CPM in suspected patients, cellular analysis of the BAL is used to confirm the HRCT findings and is also used to manage patients correctly in patients whose clinical symptoms are not typical. Of course, BAL does not rule out the presence of microscopic changes, and in this case HRCT may be capable of showing abnormalities\nIn the present study, among the patients with aspiration, the most common HRCT patterns include bronchial thickening (33.6%), followed by Ground-Glass Opacity (12.4%), Emphysema (11.1%), and Bronchiectasis (8.5%). As shown in figure 2, the highest HRCT findings were found to be LLL (37.1%), followed by RLL (35.9%), RUL (6.2%), and LUL (6%).\nIn a study by Scheeren et al., 2016 demonstrated that centrilobular nodules, consolidation, atelectasis, bronchiolectasis, and ground-glass opacities were predominantly found in patients with chronic aspiration as compared to control group, where a remarkable predilection was found for the lower lobes. 
The aforementioned study indicated that bronchial wall thickness and air trapping were not significantly different among the groups 20.\nAnother study reported that aspiration pneumonia was mostly observed as a bronchopneumonia pattern and a bronchiolitis pattern on CT. Moreover, centrilobular nodules, ground-glass attenuation, atelectasis, and consolidation were among the most common findings of aspiration pneumonia on CT 21. The HRCT findings in our study were more or less consistent with the patterns reported in some studies 7,20,22, although different nonspecific imaging findings have also been described for aspiration 23. Evidence suggests that microaspiration causes chronic pneumonitis and long-term pulmonary fibrosis, as well as post-transplantation bronchiolitis, all of which can be seen on HRCT 1. In our study, however, the most common HRCT findings in patients with CPM were bronchial thickening, ground-glass opacity, emphysema, and bronchiectasis. In a study by Elicker et al. (2010), HRCT was the best modality for microaspiration and showed a distinct crazy-paving appearance, in the form of consolidation with fat attenuation and ground-glass opacities 6, which is consistent with our findings. Pereira-Silva and colleagues evaluated 13 patients with histologic detection of CPM using high-resolution computed tomography (CT), and their main findings were centrilobular nodules and ground-glass opacities. Furthermore, branching opacities, small foci of consolidation, septal lines, and bronchiectasis were other common findings in many patients, which is relatively similar to the results of our research. One reason that other findings are not fully consistent with our research is that HRCT findings can be interpreted differently by different radiologists; for this reason, our results were reviewed by two radiologists.\nIn the study by Oikonomou and Prassopoulos (2013), the following findings were observed on radiological examination of patients with microaspiration: centrilobular nodules, ground-glass opacities, branching opacities, small foci of consolidation, bronchiectasis/bronchiolectasis, and a reticular interstitial pattern 24; similar findings were also obtained in our study.\nThe accuracy of HRCT has previously been assessed for the diagnosis of allergic bronchopulmonary aspergillosis in patients with asthma, in whom bronchiectasis was found in 95% (42 patients), centrilobular nodules in 93% (41 patients), and mucoid impaction in 29.5 (67%). Diagnosis of microaspiration has previously been evaluated with methods such as barium swallow, CT scan, and scintigraphy (gamma scan) 1. In addition, Pereira-Silva et al. (2014) reported that CPM detection has been based on investigations of clinical signs and risk factors 7, with centrilobular nodules and ground-glass opacities demonstrated as CPM outcomes 20. Accumulating evidence indicates that esophagography and CT are useful in assessing aspiration disease associated with tracheoesophageal or tracheopulmonary fistula. CT can increase the low accuracy of chest radiography in aspiration diseases when combined with clinical manifestations and complications 25. Additionally, CT is the modality of choice for detecting asymptomatic aspiration in scans obtained for other indications. 
Chest CT can be particularly useful for detecting early radiographic changes of aspiration, e.g., mild bronchiectasis, pleural thickening, air trapping, consolidation, and atelectasis 26–29", "In summary, the most common HRCT findings in patients with CPM were bronchial thickening, ground-glass opacity, emphysema, and bronchiectasis, and their most common locations were the LLL, RLL, RUL, and LUL. Further studies in other centers and hospitals, including comparisons with control groups and analyses of other variables such as age and gender, are suggested to achieve more conclusive results." ]
[ "intro", "methods", null, null, null, "results", "discussion", "conclusions" ]
[ "Imaging", "high-resolution computed tomographic", "chronic lung microaspiration" ]
Introduction: Pulmonary aspirations (PA) are defined as the inhalation of oropharyngeal secretions or gastric matter toward the larynx and lower respiratory system. Aspiration-induced clinical syndrome, such as aspiration and pneumonitis are defined to be related to the kinds and volume of aspirated material, aspiration frequency, and the host's response to aspirated substance 1. Chronic aspiration pneumonia can lead to changes associated with microaspiration or macroaspiration of orogastric content with time. Nearly half of the adults aspirate a small amount of oropharyngeal secretions during sleep, and an increased risk of microaspiration can be associated with other co-morbidities such as scleroderma, cerebrovascular disease and neurodegenerative diseases, an increased risk of microaspiration 1–4. Chronic pulmonary microaspiration (CPM) is one of the complications which have received much attention despite its low incidence (1.4 to 6 per 10,000). Aspiration pneumonia has been accounted for 5–15% of community-acquired pneumonia cases 1,5–8. Aspiration pneumonia is one of the most leading causes of death in subjects with dysphagia, with an estimated case between 300,000 and 6,000,000 annually in the United States 3 Clinical outcomes of CPM are a range of uncomplicated changes to severe respiratory problems and even death. As a matter of fact, regarding the different types of aspirated materials, the range of the disease is ranged from mild pneumonia to severe respiratory distress syndrome, and lack of cardiovascular responses and renal insufficiency. Furthermore, other associated-risk factors will lead to deaths of up to 76%, as well as monetary burden of the disease should be taken into consideration. As a result, its diagnosis is of great importance and it can significantly reduce its complications and treatment costs 9–11. Bronchoalveolar lavage (BAL) is used to evaluate patients suspected of CPM. The use of HRCT has reduced the clinical use of BAL. HRCT specifies special imaging patterns that are associated with CPM. In the past decade, HRCT has been useful in making a specific diagnosis or limiting the differential diagnosis of pulmonary disease 12. Despite the usefulness of HRCT, BAL is still used in cases, where the clinical diagnosis of microaspiration is suspected, but HRCT is normal. In spite of the radiologic diagnosis of CPM in suspected patients, cellular analysis of the BAL is used to confirm HRCT findings and cases without typical clinical symptoms for correct management of patients. Of course, BAL is not capable of excluding microscopic abnormalities, and in this case HRCT may be helpful. Therefore, considering the overlaps in the efficacy of these two tests, more studies are needed to clarify the efficiency and indication of HRCT. This study was aimed to examine the findings of a HRCT in patients with CPM. Methods: Ethical Committee The study protocol was approved by the Ethics Committee of Zahedan University of Medical Sciences. All procedures performed in accordance with the ethical standards of the institution and/or national research committee and with the 1964 Helsinki Declaration for human participants The study protocol was approved by the Ethics Committee of Zahedan University of Medical Sciences. 
All procedures performed in accordance with the ethical standards of the institution and/or national research committee and with the 1964 Helsinki Declaration for human participants Patients In this descriptive study, 100 patients with CPM were enrolled on the basis of the pathology archives in Tehran and Zahedan hospitals and private centers from 2017 to 2018. The inclusion criteria were: i) patients with pulmonary microaspiration, ii) all eligible patients (aged ≥18 years) with CPM based on the histology. In addition, exclusion criteria included i) pregnant women, ii) children, iii) coagulation disorders, iv) cardiovascular instability. The required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were then reported by two radiologists, who were blinded to the aspiration (e.g., consolidation, collapse, scar, nodule, etc.). According to Cardasis et al. (2014), the observation of lipoid pneumonia, giant cell, foreign material, and granuloma was considered as CPM 11. In other words, microaspiration was approved if this evidence were present. In this descriptive study, 100 patients with CPM were enrolled on the basis of the pathology archives in Tehran and Zahedan hospitals and private centers from 2017 to 2018. The inclusion criteria were: i) patients with pulmonary microaspiration, ii) all eligible patients (aged ≥18 years) with CPM based on the histology. In addition, exclusion criteria included i) pregnant women, ii) children, iii) coagulation disorders, iv) cardiovascular instability. The required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were then reported by two radiologists, who were blinded to the aspiration (e.g., consolidation, collapse, scar, nodule, etc.). According to Cardasis et al. (2014), the observation of lipoid pneumonia, giant cell, foreign material, and granuloma was considered as CPM 11. In other words, microaspiration was approved if this evidence were present. Ethical Committee: The study protocol was approved by the Ethics Committee of Zahedan University of Medical Sciences. All procedures performed in accordance with the ethical standards of the institution and/or national research committee and with the 1964 Helsinki Declaration for human participants Patients: In this descriptive study, 100 patients with CPM were enrolled on the basis of the pathology archives in Tehran and Zahedan hospitals and private centers from 2017 to 2018. The inclusion criteria were: i) patients with pulmonary microaspiration, ii) all eligible patients (aged ≥18 years) with CPM based on the histology. In addition, exclusion criteria included i) pregnant women, ii) children, iii) coagulation disorders, iv) cardiovascular instability. The required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were then reported by two radiologists, who were blinded to the aspiration (e.g., consolidation, collapse, scar, nodule, etc.). According to Cardasis et al. (2014), the observation of lipoid pneumonia, giant cell, foreign material, and granuloma was considered as CPM 11. In other words, microaspiration was approved if this evidence were present. Statistical analysis: Frequencies are indicated as percentages for qualitative variables. Correlative analysis was conducted with SPSS software version 24. 
Differences between the frequencies of location (tendency to involve the dependent areas) were determined using χ2 tests for qualitative variables. A p-value <0.05 was considered to be statistically significant. Results: In this research, a total of 100 patients were contained; the finding presented herein indicated that the most common CT detections were bronchial thickening (33.6%), followed by Ground-Glass Opacity (12.4%), Emphysema (11.1%), and Bronchiectasis (8.5%), (Fig 1). The frequency distribution of HRCT findings in surveyed patients (BT: Bronchial thickening, EP: Emphysema, GG: Ground-glass, HH: Hiatal hernia, GGO: Ground-glass opacities, BCH: Bronchiolectasis, AT: Atelectasis, CN: Centrilobular nodule, LO: Linear opacities, CS: consolidation, TB: Traction bronchiectasis, ATP: Air trapping, HC: Honeycombing, DIP: Desquamative interstitial pneumonia, IS: Interface sign, BM: Bronchomalacia, BWT: Bronchial wall thickening, S: SCAR, F: Fibrosis, TM: Tracheomalacia, AL: Alveolitis, TI: Thickening of interlobular, and VIP-P: VIP pattern) As shown in figure 2, the highest HRCT findings were found to be LLL (37.1%), followed by RLL (35.9%), RUL (6.2%), and LUL (6%). The findings revealed that emphysema was highly more prevalent in RUL, but other findings were more prevalent in LLL and RLL. The analysis of the findings showed that the frequency of location according to the type of HRCT findings was statistically significant (P = 0.0001; Fig 2). The frequency distribution of findings location in the investigated patients LLL, left lower lobe; LNG, lingular segment; LUL, left upper lobe; ML, middle lobe; RLL, right lower lobe. RUL: right upper lobe Discussion: Silent microaspiration has been revealed to be involved in different lung diseases (e.g., chronic bronchiolar and interstitial pulmonary disease, lipoid pneumonia, inflammatory pneumonitis, post-transplantation bronchiolitis, etc.,), 13–17. For instance, silent microaspiration of exogenous lipid can be a causative factor of lipoid pneumonia, leading to a chronic inflammatory pneumonitis and fibrosis 15 HRCT provides is remarkably capable of improving the sensitivity and specificity of clinical and histopathological assessment 18. HRCT can be of great importance in a united method of detection where it can be capable of providing evidence for a various detection and non-invasive definitive detection 19. Despite the radiological diagnosis of CPM in suspected patients, cellular analysis of the BAL is used to confirm the HRCT findings and is also used to manage patients correctly in patients whose clinical symptoms are not typical. Of course, BAL does not rule out the presence of microscopic changes, and in this case HRCT may be capable of showing abnormalities In the present study, among the patients with aspiration, the most common HRCT patterns include bronchial thickening (33.6%), followed by Ground-Glass Opacity (12.4%), Emphysema (11.1%), and Bronchiectasis (8.5%). As shown in figure 2, the highest HRCT findings were found to be LLL (37.1%), followed by RLL (35.9%), RUL (6.2%), and LUL (6%). In a study by Scheeren et al., 2016 demonstrated that centrilobular nodules, consolidation, atelectasis, bronchiolectasis, and ground-glass opacities were predominantly found in patients with chronic aspiration as compared to control group, where a remarkable predilection was found for the lower lobes. 
Aforementioned study indicated that the thickness of the bronchial wall and the air trapping were not significantly different among the groups 20 Another study reported that aspiration pneumonia was mostly observed as a bronchopneumonia pattern and bronchiolitis pattern using CT findings. Moreover, centrilobular nodules, ground-glass attenuation, atelectasis consolidation were among the most common finding of aspiration pneumonia on CT 21. The HRCT findings in our study were more or less consistent with studies in some pattern 7,20,22, although different nonspecific imaging findings have been demonstrated for aspiration23. Evidence suggests that that microaspiration cause chronic pneumonitis and long-term pulmonary fibrosis, in addition post-transplantation bronchiolitis, all of which are found in HRCT 1. But in our study, the most common findings of HRCT in patients with CPM included bronchial thickening, ground-glass opacity, emphysema and bronchiectasis. In a study by Elicker et al. (2010), HRCT was the best modalities for microaspiration and ceated a distinct appearance of crazy paving, which is in the form of consolidation with few attenuation and ground-glass opacities 6, which is consistent with our findings. Pereira-Silva and colleagues evaluated13 patients and histologic detection of CPM under high-resolution computed tomography (CT) and their main finding was centrilobular nodules and groundglass opacities. Furthermore, branching opacities and small foci of consolidation, septal lines, and bronchiectasis were found to be other common finding in many patients, which is relatively similar to the results of our research. The reason that other findings are not consistent with our research is that HRCT findings by different radiologists can be interpreted differently, and for this reason, we examined the results by two radiologists. In the study by Oikonomou and Prassopoulos (2013), the following findings were observed in radiological examination of patients with microaspiration: centrilobular nodules, ground-glass opacities, branching opacities, small foci of consolidation, bronchiectasis / bronchioloectasis, reticular interstitial pattern 24, similar findings are also obtained in our study. The accuracy of HRCT has been previously assessed for the diagnosis of allergic bronchopulmonary aspergillosis in patients with asthma, where Bronchiectasis was found in 95% (42 patients), centrilobular nodules in 93% (41 patients, and mucoid impaction in 29.5 (67%). Diagnosis of microaspiration has been previously evaluated be researchers such as barium swallow, CT scan, and scintigraphy (Gamma scan), 1. Plus, Pereira-Silva et al., 2014 reported that CPM detection has been done by some investigations regarding clinical signs and risk factors7, where centrilobular nodules and ground-glass opacities has been demonstrated as CPM outcomes 20 Accumulating evidence indicated Esophagography and CT have been usefully incapable in assessing aspiration disease associated with tracheoesophageal or tracheopulmonary fistula. CT scan is capable of increasing the low accuracy of chest radiography in aspiration diseases along with clinical manifestations and complications25. Additionally, CT scan is the modality of choice in detecting asymptomatic aspiration for other indications. 
Chest CT can be particularly useful for detecting early radiographic changes of aspiration, e.g., mild bronchiectasis, pleural thickening, air trapping, consolidation, and atelectasis 26–29 Conclusion: In summary, the most common HRCT findings in patients with CPM were bronchial thickening, ground-glass opacity, emphysema, and bronchiectasis, and their most common locations were the LLL, RLL, RUL, and LUL. Further studies in other centers and hospitals, including comparisons with control groups and analyses of other variables such as age and gender, are suggested to achieve more conclusive results.
Background: In patients with chronic pulmonary microaspiration (CPM), the recognition of high-resolution computed tomographic (HRCT) findings and their pattern is important. Methods: This descriptive study enrolled 100 consecutive patients with CPM who underwent HRCT of the lungs between 2017 and 2018 in Tehran and Zahedan hospitals and private centers. The required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were reported by two radiologists. Results: Bronchial thickening was the most common finding (33.6% of cases), followed by ground-glass opacity (12.4%), emphysema (11.1%), and bronchiectasis (8.5%). In addition, the HRCT findings were most commonly located in the left lower lobe (LLL) (37.1%), followed by the right lower lobe (RLL) (35.9%), right upper lobe (RUL) (6.2%), and left upper lobe (LUL) (6%). Conclusions: Our data showed that the most common HRCT findings were bronchial thickening, ground-glass opacity, emphysema, and bronchiectasis, and that these findings were predominantly located in the LLL, RLL, RUL, and LUL, indicating a high tendency to involve the dependent areas.
Introduction: Pulmonary aspiration (PA) is defined as the inhalation of oropharyngeal secretions or gastric matter toward the larynx and lower respiratory system. Aspiration-induced clinical syndromes, such as aspiration pneumonia and pneumonitis, are related to the kind and volume of aspirated material, the frequency of aspiration, and the host's response to the aspirated substance 1. Chronic aspiration pneumonia can lead to changes associated with microaspiration or macroaspiration of orogastric content over time. Nearly half of adults aspirate a small amount of oropharyngeal secretions during sleep, and an increased risk of microaspiration can be associated with co-morbidities such as scleroderma, cerebrovascular disease, and neurodegenerative diseases 1–4. Chronic pulmonary microaspiration (CPM) is one of the complications that have received much attention despite its low incidence (1.4 to 6 per 10,000). Aspiration pneumonia has accounted for 5–15% of community-acquired pneumonia cases 1,5–8. Aspiration pneumonia is one of the leading causes of death in subjects with dysphagia, with an estimated 300,000 to 6,000,000 cases annually in the United States 3. Clinical outcomes of CPM range from uncomplicated changes to severe respiratory problems and even death. In fact, depending on the type of aspirated material, the disease ranges from mild pneumonia to severe respiratory distress syndrome, with cardiovascular compromise and renal insufficiency. Furthermore, associated risk factors can lead to mortality of up to 76%, and the monetary burden of the disease should also be taken into consideration. As a result, its diagnosis is of great importance and can significantly reduce complications and treatment costs 9–11. Bronchoalveolar lavage (BAL) is used to evaluate patients suspected of CPM. The use of HRCT has reduced the clinical use of BAL. HRCT identifies specific imaging patterns that are associated with CPM. In the past decade, HRCT has been useful in making a specific diagnosis or limiting the differential diagnosis of pulmonary disease 12. Despite the usefulness of HRCT, BAL is still used in cases where the clinical diagnosis of microaspiration is suspected but HRCT is normal. In addition to the radiologic diagnosis of CPM in suspected patients, cellular analysis of BAL is used to confirm HRCT findings and to guide correct management in cases without typical clinical symptoms. Of course, BAL is not capable of excluding microscopic abnormalities, and in such cases HRCT may be helpful. Therefore, considering the overlap in the efficacy of these two tests, more studies are needed to clarify the efficiency and indications of HRCT. This study aimed to examine the HRCT findings in patients with CPM. Conclusion: In summary, the most common HRCT findings in patients with CPM were bronchial thickening, ground-glass opacity, emphysema, and bronchiectasis, and their most common locations were the LLL, RLL, RUL, and LUL. Further studies in other centers and hospitals, including comparisons with control groups and analyses of other variables such as age and gender, are suggested to achieve more conclusive results.
Background: In patients with chronic pulmonary microaspiration (CPM), the recognition of high-resolution computed tomographic (HRCT) findings and their pattern is important. Methods: This descriptive study enrolled 100 consecutive patients with CPM who underwent HRCT of the lungs between 2017 and 2018 in Tehran and Zahedan hospitals and private centers. The required variables were recorded for each patient with a questionnaire. Subsequently, HRCT was performed and abnormalities were reported by two radiologists. Results: Bronchial thickening was the most common finding (33.6% of cases), followed by ground-glass opacity (12.4%), emphysema (11.1%), and bronchiectasis (8.5%). In addition, the HRCT findings were most commonly located in the left lower lobe (LLL) (37.1%), followed by the right lower lobe (RLL) (35.9%), right upper lobe (RUL) (6.2%), and left upper lobe (LUL) (6%). Conclusions: Our data showed that the most common HRCT findings were bronchial thickening, ground-glass opacity, emphysema, and bronchiectasis, and that these findings were predominantly located in the LLL, RLL, RUL, and LUL, indicating a high tendency to involve the dependent areas.
2,558
235
[ 41, 176, 55 ]
8
[ "patients", "hrct", "findings", "cpm", "aspiration", "microaspiration", "study", "pneumonia", "glass", "ground" ]
[ "chronic pulmonary microaspiration", "patients pulmonary microaspiration", "aspiration pneumonia leading", "pneumonia cases aspiration", "aspiration pneumonia accounted" ]
[CONTENT] Imaging | high-resolution computed tomographic | chronic lung microaspiration [SUMMARY]
[CONTENT] Imaging | high-resolution computed tomographic | chronic lung microaspiration [SUMMARY]
[CONTENT] Imaging | high-resolution computed tomographic | chronic lung microaspiration [SUMMARY]
[CONTENT] Imaging | high-resolution computed tomographic | chronic lung microaspiration [SUMMARY]
[CONTENT] Imaging | high-resolution computed tomographic | chronic lung microaspiration [SUMMARY]
[CONTENT] Imaging | high-resolution computed tomographic | chronic lung microaspiration [SUMMARY]
[CONTENT] Bronchiectasis | Chronic Disease | Emphysema | Female | Humans | Lung | Male | Middle Aged | Pneumonia, Aspiration | Retrospective Studies | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Bronchiectasis | Chronic Disease | Emphysema | Female | Humans | Lung | Male | Middle Aged | Pneumonia, Aspiration | Retrospective Studies | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Bronchiectasis | Chronic Disease | Emphysema | Female | Humans | Lung | Male | Middle Aged | Pneumonia, Aspiration | Retrospective Studies | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Bronchiectasis | Chronic Disease | Emphysema | Female | Humans | Lung | Male | Middle Aged | Pneumonia, Aspiration | Retrospective Studies | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Bronchiectasis | Chronic Disease | Emphysema | Female | Humans | Lung | Male | Middle Aged | Pneumonia, Aspiration | Retrospective Studies | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Bronchiectasis | Chronic Disease | Emphysema | Female | Humans | Lung | Male | Middle Aged | Pneumonia, Aspiration | Retrospective Studies | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] chronic pulmonary microaspiration | patients pulmonary microaspiration | aspiration pneumonia leading | pneumonia cases aspiration | aspiration pneumonia accounted [SUMMARY]
[CONTENT] chronic pulmonary microaspiration | patients pulmonary microaspiration | aspiration pneumonia leading | pneumonia cases aspiration | aspiration pneumonia accounted [SUMMARY]
[CONTENT] chronic pulmonary microaspiration | patients pulmonary microaspiration | aspiration pneumonia leading | pneumonia cases aspiration | aspiration pneumonia accounted [SUMMARY]
[CONTENT] chronic pulmonary microaspiration | patients pulmonary microaspiration | aspiration pneumonia leading | pneumonia cases aspiration | aspiration pneumonia accounted [SUMMARY]
[CONTENT] chronic pulmonary microaspiration | patients pulmonary microaspiration | aspiration pneumonia leading | pneumonia cases aspiration | aspiration pneumonia accounted [SUMMARY]
[CONTENT] chronic pulmonary microaspiration | patients pulmonary microaspiration | aspiration pneumonia leading | pneumonia cases aspiration | aspiration pneumonia accounted [SUMMARY]
[CONTENT] patients | hrct | findings | cpm | aspiration | microaspiration | study | pneumonia | glass | ground [SUMMARY]
[CONTENT] patients | hrct | findings | cpm | aspiration | microaspiration | study | pneumonia | glass | ground [SUMMARY]
[CONTENT] patients | hrct | findings | cpm | aspiration | microaspiration | study | pneumonia | glass | ground [SUMMARY]
[CONTENT] patients | hrct | findings | cpm | aspiration | microaspiration | study | pneumonia | glass | ground [SUMMARY]
[CONTENT] patients | hrct | findings | cpm | aspiration | microaspiration | study | pneumonia | glass | ground [SUMMARY]
[CONTENT] patients | hrct | findings | cpm | aspiration | microaspiration | study | pneumonia | glass | ground [SUMMARY]
[CONTENT] hrct | bal | diagnosis | clinical | 000 | aspiration | cpm | associated | disease | microaspiration [SUMMARY]
[CONTENT] committee | patients | cpm | criteria | ii | zahedan | approved | performed | microaspiration | ethical [SUMMARY]
[CONTENT] lobe | findings | thickening | frequency | lll | ground glass | rll | ground | hrct findings | bronchial [SUMMARY]
[CONTENT] studies | common | comparison control groups | bronchial bonding ground glass | rll rul lul | rll rul | bronchiectasis common | bronchiectasis common locations | bronchiectasis common locations lll | centers hospitals [SUMMARY]
[CONTENT] patients | hrct | cpm | findings | committee | microaspiration | aspiration | study | variables | ground glass [SUMMARY]
[CONTENT] patients | hrct | cpm | findings | committee | microaspiration | aspiration | study | variables | ground glass [SUMMARY]
[CONTENT] CPM [SUMMARY]
[CONTENT] 100 | CPM | between 2017 and 2018 | Tehran | Zahedan Hospitals ||| ||| HRCT | two [SUMMARY]
[CONTENT] 33.6% | 12.4% | emphysema | 11.1% | 8.5% ||| LLL | 37.1% | RLL | 35.9 % | 6,2% | LUL | 6% [SUMMARY]
[CONTENT] HRCT | emphysema | LLL | RLL | LUL [SUMMARY]
[CONTENT] CPM ||| 100 | CPM | between 2017 and 2018 | Tehran | Zahedan Hospitals ||| ||| HRCT | two ||| 33.6% | 12.4% | emphysema | 11.1% | 8.5% ||| LLL | 37.1% | RLL | 35.9 % | 6,2% | LUL | 6% ||| HRCT | emphysema | LLL | RLL | LUL [SUMMARY]
[CONTENT] CPM ||| 100 | CPM | between 2017 and 2018 | Tehran | Zahedan Hospitals ||| ||| HRCT | two ||| 33.6% | 12.4% | emphysema | 11.1% | 8.5% ||| LLL | 37.1% | RLL | 35.9 % | 6,2% | LUL | 6% ||| HRCT | emphysema | LLL | RLL | LUL [SUMMARY]
Peroral endoscopic myotomy
36156932
Achalasia is a rare benign esophageal motor disorder characterized by incomplete relaxation of the lower esophageal sphincter (LES). The treatment of achalasia is not curative, but rather is aimed at reducing LES pressure. In patients who have failed noninvasive therapy, surgery should be considered. Myotomy with partial fundoplication has been considered the first-line treatment for non-advanced achalasia. Recently, peroral endoscopic myotomy (POEM), a technique that employs the principles of submucosal endoscopy to perform the equivalent of a surgical myotomy, has emerged as a promising minimally invasive technique for the management of this condition.
BACKGROUND
Forty treatment-naive adult patients who had been diagnosed with achalasia based on clinical and manometric criteria (dysphagia score ≥ II and Eckardt score > 3) were randomized to undergo either LM-PF or POEM. The outcome measures were anesthesia time, procedure time, symptom improvement, reflux esophagitis (as determined with the Gastroesophageal Reflux Disease Questionnaire), barium column height at 1 and 5 min (on a barium esophagogram), pressure at the LES, the occurrence of adverse events (AEs), length of stay (LOS), and quality of life (QoL).
METHODS
There were no statistically significant differences between the LM-PF and POEM groups regarding symptom improvement at 1, 6, and 12 mo of follow-up (P = 0.192, P = 0.242, and P = 0.242, respectively). However, the rates of reflux esophagitis at 1, 6, and 12 mo of follow-up were significantly higher in the POEM group (P = 0.014, P < 0.001, and P = 0.002, respectively). There were also no statistical differences regarding the manometry values, the occurrence of AEs, or LOS. Anesthesia time and procedure time were significantly shorter in the POEM group than in the LM-PF group (185.00 ± 56.89 and 95.70 ± 30.47 min vs 296.75 ± 56.13 and 218.75 ± 50.88 min, respectively; P = 0.001 for both). In the POEM group, there were improvements in all domains of the QoL questionnaire, whereas there were improvements in only three domains in the LM-PF group.
RESULTS
POEM and LM-PF appear to be equally effective in controlling the symptoms of achalasia, shortening LOS, and minimizing AEs. Nevertheless, POEM has the advantage of improving all domains of QoL, and shortening anesthesia and procedure times but with a significantly higher rate of gastroesophageal reflux.
CONCLUSION
[ "Adult", "Barium", "Esophageal Achalasia", "Esophageal Sphincter, Lower", "Esophagitis, Peptic", "Esophagoscopy", "Fundoplication", "Gastroesophageal Reflux", "Humans", "Laparoscopy", "Myotomy", "Natural Orifice Endoscopic Surgery", "Quality of Life", "Treatment Outcome" ]
9476850
INTRODUCTION
Achalasia is a rare benign esophageal motor disorder characterized by incomplete relaxation of the lower esophageal sphincter (LES)[1-3]. For primary or idiopathic achalasia, the underlying etiology has yet to be clearly defined; secondary achalasia results from any one of several systemic diseases including infectious, autoimmune, and drug-induced disorders[4-6]. In both cases, the most common symptoms are progressive dysphagia, regurgitation, and weight loss. The symptom intensity and treatment response can be assessed with validated scales such as the Eckardt score[2,7,8]. The diagnosis requires the proper integration between reported symptoms and the interpretation of diagnostic tests such as a barium esophagogram, esophagogastroduodenoscopy (EGD), and manometry—either conventional esophageal manometry (EM) or high-resolution manometry (HRM)[9-12]. The treatment of achalasia is not curative but rather is aimed at reducing LES pressure[13-17]. In patients who have failed noninvasive therapy, surgery should be considered[18]. Myotomy with partial fundoplication has been considered the first-line treatment for non-advanced achalasia[19]. Recently, peroral endoscopic myotomy (POEM), a technique that employs the principles of submucosal endoscopy to perform the equivalent of a surgical myotomy, has emerged as a promising minimally invasive technique for the management of this condition[20]. This technique was first described in 1980 and subsequently modified to create what is now POEM[21,22]. This randomized controlled trial (RCT) compared the efficacy and outcomes of laparoscopic myotomy and partial fundoplication (LM-PF) with those of POEM for the treatment of patients with achalasia of any etiology. We also compared the two procedures in terms of the incidence of reflux esophagitis.
MATERIALS AND METHODS
Study design and participants This was a single-center RCT in which we evaluated 40 treatment-naive patients with esophageal achalasia. We included patients ≥ 18 years of age who had been diagnosed with achalasia based on clinical and manometric criteria (dysphagia score ≥ II and Eckardt score > 3) and who provided informed consent. Patients who had previously undergone endoscopic or surgical procedures involving the esophagogastric junction (EGJ) were excluded, as were those with liver cirrhosis, esophageal varices, Barrett’s esophagus, esophageal strictures, premalignant or malignant EGJ lesions, coagulopathies, pseudoachalasia, esophageal diverticulum, severe cardiopulmonary diseases, or severe systemic diseases, as well as those who were at high surgical risk and those who were pregnant or lactating. This was a single-center RCT in which we evaluated 40 treatment-naive patients with esophageal achalasia. We included patients ≥ 18 years of age who had been diagnosed with achalasia based on clinical and manometric criteria (dysphagia score ≥ II and Eckardt score > 3) and who provided informed consent. Patients who had previously undergone endoscopic or surgical procedures involving the esophagogastric junction (EGJ) were excluded, as were those with liver cirrhosis, esophageal varices, Barrett’s esophagus, esophageal strictures, premalignant or malignant EGJ lesions, coagulopathies, pseudoachalasia, esophageal diverticulum, severe cardiopulmonary diseases, or severe systemic diseases, as well as those who were at high surgical risk and those who were pregnant or lactating. Randomization strategy An investigator who was unaffiliated with the trial created the randomization list. Specific software (www.randomizer.org) was used, and participants were randomly allocated at a 1:1 ratio to the experimental (POEM) group or the comparison (LM-PF) group. An investigator who was unaffiliated with the trial created the randomization list. Specific software (www.randomizer.org) was used, and participants were randomly allocated at a 1:1 ratio to the experimental (POEM) group or the comparison (LM-PF) group. Sample size calculation The sample size was calculated to identify statistical significance between LM-PF and POEM regarding reflux esophagitis rates, which were assumed to be 5% and 40% after LM-PF and POEM, respectively[23]. To achieve a power of 80% with an alpha of 0.05, we estimated the minimum sample size to be 38 (19 patients in each group). Taking potential losses into consideration, we chose to include a total of 40 patients. The sample size was calculated to identify statistical significance between LM-PF and POEM regarding reflux esophagitis rates, which were assumed to be 5% and 40% after LM-PF and POEM, respectively[23]. To achieve a power of 80% with an alpha of 0.05, we estimated the minimum sample size to be 38 (19 patients in each group). Taking potential losses into consideration, we chose to include a total of 40 patients. Techniques POEM: All POEM procedures were performed by a single operator with extensive experience in the technique. Prophylactic intravenous antibiotics and a proton pump inhibitor (PPI) were administered 30 min before intubation and general anesthesia. After the gastroscope was introduced, the esophageal lumen and mucosa were thoroughly cleaned. This was followed by submucosal injection of 10 mL of 0.5% indigo carmine. An incision was made into the mucosa of the posterior wall, between 5 and 6 o’clock, at 10 cm above the EGJ. 
The incision was made with a dual-function submucosal dissection knife (HybridKnife; Erbe, Tübingen, Germany) in Endocut Q mode (effect 2, width 3, and interval 5). Subsequently, spray coagulation (effect 2 at 40 W) was used to create a submucosal tunnel extending 3-4 cm beyond the EGJ into the proximal stomach. In all patients, full-thickness myotomy—including the circular and longitudinal muscle layers—was performed in Endocut Q mode. The myotomy was initiated 2 cm distal from the mucosal entry point and extended 3-4 cm into the proximal stomach. The mucosal incision was closed by using through-the-scope clips (Supplementary Figures 1 and 2). A barium esophagogram was obtained on postoperative day 1. In the absence of complications, the patient was started on a clear liquid diet and subsequently advanced to a full liquid diet for 14 d. LM-PF: All LM-PF procedures were performed by members of the foregut surgery group. After pneumoperitoneum had been established, five trocars were positioned, after which the left hepatic lobe was retracted to access the EGJ (Supplementary Figure 3). That was followed by division of the phrenoesophageal ligament, dissection, and isolation of the distal esophagus from adjacent structures; and anterolateral dislocation of the distal esophagus. The anterior gastric adipose tissue and the anterior vagus nerve were dissected and separated from the esophagus and stomach, after which myotomy of the circular and longitudinal muscle layers was performed, extending from 5-6 cm above the EGJ to 2-3 cm below the EGJ. Partial fundoplication between the esophagus and stomach was then performed by placing sutures in three planes: Posterior—two to three sutures; left lateral—four to five sutures (on the left side); and right anterior—a line of sutures covering the myotomy, thus interposed with the gastric fundus on the right. In the absence of complications, patients were started on a clear liquid diet on the morning following the procedure and maintained on a mechanical soft diet for 14 d after discharge. POEM: All POEM procedures were performed by a single operator with extensive experience in the technique. Prophylactic intravenous antibiotics and a proton pump inhibitor (PPI) were administered 30 min before intubation and general anesthesia. After the gastroscope was introduced, the esophageal lumen and mucosa were thoroughly cleaned. This was followed by submucosal injection of 10 mL of 0.5% indigo carmine. An incision was made into the mucosa of the posterior wall, between 5 and 6 o’clock, at 10 cm above the EGJ. The incision was made with a dual-function submucosal dissection knife (HybridKnife; Erbe, Tübingen, Germany) in Endocut Q mode (effect 2, width 3, and interval 5). Subsequently, spray coagulation (effect 2 at 40 W) was used to create a submucosal tunnel extending 3-4 cm beyond the EGJ into the proximal stomach. In all patients, full-thickness myotomy—including the circular and longitudinal muscle layers—was performed in Endocut Q mode. The myotomy was initiated 2 cm distal from the mucosal entry point and extended 3-4 cm into the proximal stomach. The mucosal incision was closed by using through-the-scope clips (Supplementary Figures 1 and 2). A barium esophagogram was obtained on postoperative day 1. In the absence of complications, the patient was started on a clear liquid diet and subsequently advanced to a full liquid diet for 14 d. LM-PF: All LM-PF procedures were performed by members of the foregut surgery group. 
After pneumoperitoneum had been established, five trocars were positioned, after which the left hepatic lobe was retracted to access the EGJ (Supplementary Figure 3). That was followed by division of the phrenoesophageal ligament, dissection, and isolation of the distal esophagus from adjacent structures; and anterolateral dislocation of the distal esophagus. The anterior gastric adipose tissue and the anterior vagus nerve were dissected and separated from the esophagus and stomach, after which myotomy of the circular and longitudinal muscle layers was performed, extending from 5-6 cm above the EGJ to 2-3 cm below the EGJ. Partial fundoplication between the esophagus and stomach was then performed by placing sutures in three planes: Posterior—two to three sutures; left lateral—four to five sutures (on the left side); and right anterior—a line of sutures covering the myotomy, thus interposed with the gastric fundus on the right. In the absence of complications, patients were started on a clear liquid diet on the morning following the procedure and maintained on a mechanical soft diet for 14 d after discharge. Diagnosis and follow-up Clinical assessments: Although achalasia subtyping is defined based on HRM, in this study, the achalasia subtype was evaluated according to the degree of esophageal dilation on the barium esophagogram and esophageal motor activity on EM or HRM. Given that Chagas disease, which often involves the esophagus, is common in Brazil, all patients were screened for anti-Trypanosoma cruzi antibodies by enzyme-linked immunosorbent assay and indirect immunofluorescence. Weight loss, dysphagia, and pain were assessed before the procedure, as well as at 6 and 12 mo after the procedure, by using the Eckardt score. Patients with an Eckardt score ≥ 3 were categorized as symptomatic. The clinical evaluation of gastroesophageal reflux (GER) and the diagnosis of GER disease (GERD) was based on the application of the GER Disease Questionnaire (GerdQ)[24] (Supplementary Figure 4). EGD: We performed EGD before the procedure, as well as at 6 and 12 mo after. Esophagitis was graded according to the Los Angeles classification system[25]. We performed chromoendoscopy with narrow-band imaging and 2.5% Lugol’s solution to screen for esophageal cancer. Suspicious lesions were biopsied. Barium esophagogram: To assess esophageal emptying before and 12 mo after the procedure, we used a timed barium esophagogram, as previously described[26]. The degree of esophageal emptying is qualitatively estimated by comparing 1- and 5-min images or by measuring the height and width (in centimeters) of the barium column at both times, calculating the approximate area, and determining the percentage change in the area. EM: Conventional EM was performed before and 12 mo after the procedure. It should be noted that HRM technology was not available in Brazil when the trial began. To perform conventional EM, we used an eight-channel computerized polygraph under pneumohydraulic capillary infusion at a flow rate of 0.6 mL/min/channel. Preparation was required with a 12-h fast and suspension of medications that alter esophageal motility. The technique consists of passing a probe through the nostril and checking the position in the stomach through deep inspiration. 
With the patient in the supine position, the probe is pulled centimeter by centimeter to measure the mean respiratory pressure and pressure inversion point, and then one of the channels is positioned distal to 3 cm from the upper edge of the LES and the other channels are distant 5 cm apart. Finally, the catheter is pulled up to the upper esophageal sphincter. Through the average of the four distal radial channels of the conventional manometry catheter, the maximum expiratory pressure (MEP) was identified, which best represents the LES pressure itself. Quality of life: To evaluate the quality of life (QoL), we used the Medical Outcomes Study 36-item Short-Form Health Survey (SF-36)[27,28]. The SF-36 comprises 36 questions covering eight domains: Physical functioning, role-physical, bodily pain, general health, vitality, social functioning, role-emotional, and mental health. Adverse events: Among the adverse events (AEs) recorded, pneumoperitoneum requiring drainage or puncture was categorized as a minor AE, as was minor mucosal damage requiring endoscopic closure. Major AEs were defined as pneumoperitoneum leading to hemodynamic instability and premature interruption of the procedure; bleeding requiring a blood transfusion and accompanied by hemodynamic instability or requiring an additional intervention; major mucosal damage requiring endoscopic closure or increasing the length of stay (LOS); or fistula/dehiscence of the incision with signs of fever or infection associated with hemodynamic instability. For AEs occurring in the LM-PF group, we used the Clavien-Dindo classification[29]. Clinical assessments: Although achalasia subtyping is defined based on HRM, in this study, the achalasia subtype was evaluated according to the degree of esophageal dilation on the barium esophagogram and esophageal motor activity on EM or HRM. Given that Chagas disease, which often involves the esophagus, is common in Brazil, all patients were screened for anti-Trypanosoma cruzi antibodies by enzyme-linked immunosorbent assay and indirect immunofluorescence. Weight loss, dysphagia, and pain were assessed before the procedure, as well as at 6 and 12 mo after the procedure, by using the Eckardt score. Patients with an Eckardt score ≥ 3 were categorized as symptomatic. The clinical evaluation of gastroesophageal reflux (GER) and the diagnosis of GER disease (GERD) was based on the application of the GER Disease Questionnaire (GerdQ)[24] (Supplementary Figure 4). EGD: We performed EGD before the procedure, as well as at 6 and 12 mo after. Esophagitis was graded according to the Los Angeles classification system[25]. We performed chromoendoscopy with narrow-band imaging and 2.5% Lugol’s solution to screen for esophageal cancer. Suspicious lesions were biopsied. Barium esophagogram: To assess esophageal emptying before and 12 mo after the procedure, we used a timed barium esophagogram, as previously described[26]. The degree of esophageal emptying is qualitatively estimated by comparing 1- and 5-min images or by measuring the height and width (in centimeters) of the barium column at both times, calculating the approximate area, and determining the percentage change in the area. EM: Conventional EM was performed before and 12 mo after the procedure. It should be noted that HRM technology was not available in Brazil when the trial began. To perform conventional EM, we used an eight-channel computerized polygraph under pneumohydraulic capillary infusion at a flow rate of 0.6 mL/min/channel. 
Outcome measures and data collection For POEM and LM-PF, the following outcome measures were evaluated: Operative time; length of the myotomy in the esophagus and stomach; myotomy site; complications; and LOS. Patient data were collected on the Research Electronic Data Capture platform. Follow-up At 1, 6, and 12 mo after the interventions, the Eckardt score was determined, the SF-36 was applied, EGD was performed, and timed barium esophagograms were obtained. Conventional EM was performed at 6 and 12 mo. Patients received the maximum dose of PPI for the first 30 d postprocedure, and those who presented with erosive esophagitis at follow-up endoscopy were maintained on PPI treatment for 8 wk. Treatment success was defined as symptom improvement (a reduction in the Eckardt score to ≤ 3), an LES pressure < 15 mmHg[30-32], and a > 50% reduction in the height of the barium column at 1 min. Treatment failure was defined as symptom persistence in patients with an Eckardt score ≥ 3. 
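The composite outcome definitions given above can be expressed as a short decision rule. The sketch below is illustrative only: the thresholds are those stated in the text, but the function and argument names do not come from the trial database.

```python
# Hypothetical encoding of the treatment success/failure criteria described above.
def treatment_success(eckardt_post, les_pressure_mmhg, barium_height_1min_pre, barium_height_1min_post):
    """Success: Eckardt score <= 3, LES pressure < 15 mmHg, and > 50% reduction
    in the 1-min barium column height relative to baseline."""
    symptom_ok = eckardt_post <= 3
    pressure_ok = les_pressure_mmhg < 15
    emptying_ok = (barium_height_1min_pre - barium_height_1min_post) / barium_height_1min_pre > 0.50
    return symptom_ok and pressure_ok and emptying_ok

def treatment_failure(eckardt_post):
    """Failure: persistent symptoms, i.e., an Eckardt score >= 3."""
    return eckardt_post >= 3

# Example: Eckardt 8 -> 1, LES pressure 9 mmHg, barium column 12 cm -> 4 cm at 1 min.
print(treatment_success(1, 9, 12.0, 4.0))  # True
```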
Statistical analysis We performed descriptive analyses of all study variables. Quantitative variables were expressed as means with standard deviations or as medians with interquartile ranges. Qualitative variables were expressed as absolute and relative frequencies. For the comparison of means between the two groups, the Student’s t-test was used. When the assumption of normality was rejected, the nonparametric Mann-Whitney test was used. To test the homogeneity between proportions, the chi-square test or Fisher’s exact test was used. Repeated-measures analysis of variance was used to compare the groups over the course of the study. When the assumption of normality was rejected, the nonparametric Mann-Whitney test and Friedman test were used. The data were processed with the SPSS Statistics software package, version 17.0 for Windows (SPSS Inc., Chicago, IL, United States). P < 0.05 was considered statistically significant.
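The analyses themselves were run in SPSS 17.0; the sketch below merely illustrates, in Python/SciPy, the decision rules described above (parametric test when normality holds, nonparametric otherwise; chi-square or Fisher’s exact test for proportions). The Shapiro-Wilk normality check and the expected-count rule for switching to Fisher’s exact test are assumptions, since the text does not state which criteria were applied.

```python
import numpy as np
from scipy import stats

def compare_continuous(poem, lmpf, alpha=0.05):
    """Student's t-test if both groups pass a normality check; otherwise Mann-Whitney U."""
    poem, lmpf = np.asarray(poem, float), np.asarray(lmpf, float)
    normal = stats.shapiro(poem).pvalue > alpha and stats.shapiro(lmpf).pvalue > alpha
    if normal:
        return "Student t", stats.ttest_ind(poem, lmpf).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(poem, lmpf, alternative="two-sided").pvalue

def compare_proportions(table):
    """Chi-square test of homogeneity; Fisher's exact test for 2 x 2 tables with small expected counts."""
    table = np.asarray(table)
    expected = stats.chi2_contingency(table)[3]
    if table.shape == (2, 2) and expected.min() < 5:
        return "Fisher exact", stats.fisher_exact(table)[1]
    return "Chi-square", stats.chi2_contingency(table)[1]

# Example: reflux esophagitis at 12 mo (rows = POEM, LM-PF; columns = yes, no); counts illustrative.
print(compare_proportions([[11, 7], [2, 16]]))
```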
CONCLUSION
Future research should focus on long-term follow-up and on comparing outcomes across the different techniques. Refinements of the POEM technique may also provide new perspectives on reflux symptoms.
[ "INTRODUCTION", "Study design and participants", "Randomization strategy", "Sample size calculation", "Techniques", "Diagnosis and follow-up", "Outcome measures and data collection", "Follow-up", "Statistical analysis", "RESULTS", "Population characteristics", "Treatment outcomes", "Endoscopic findings", "Barium esophagogram", "EM", "AEs, LOS, anesthesia time, and procedure time", "QoL", "DISCUSSION", "CONCLUSION" ]
[ "Achalasia is a rare benign esophageal motor disorder characterized by incomplete relaxation of the lower esophageal sphincter (LES)[1-3]. For primary or idiopathic achalasia, the underlying etiology has yet to be clearly defined; secondary achalasia results from any one of several systemic diseases including infectious, autoimmune, and drug-induced disorders[4-6]. In both cases, the most common symptoms are progressive dysphagia, regurgitation, and weight loss. The symptom intensity and treatment response can be assessed with validated scales such as the Eckardt score[2,7,8]. The diagnosis requires the proper integration between reported symptoms and the interpretation of diagnostic tests such as a barium esophagogram, esophagogastroduodenoscopy (EGD), and manometry—either conventional esophageal manometry (EM) or high-resolution manometry (HRM)[9-12].\nThe treatment of achalasia is not curative but rather is aimed at reducing LES pressure[13-17]. In patients who have failed noninvasive therapy, surgery should be considered[18]. Myotomy with partial fundoplication has been considered the first-line treatment for non-advanced achalasia[19].\nRecently, peroral endoscopic myotomy (POEM), a technique that employs the principles of submucosal endoscopy to perform the equivalent of a surgical myotomy, has emerged as a promising minimally invasive technique for the management of this condition[20]. This technique was first described in 1980 and subsequently modified to create what is now POEM[21,22].\nThis randomized controlled trial (RCT) compared the efficacy and outcomes of laparoscopic myotomy and partial fundoplication (LM-PF) with those of POEM for the treatment of patients with achalasia of any etiology. We also compared the two procedures in terms of the incidence of reflux esophagitis.", "This was a single-center RCT in which we evaluated 40 treatment-naive patients with esophageal achalasia. We included patients ≥ 18 years of age who had been diagnosed with achalasia based on clinical and manometric criteria (dysphagia score ≥ II and Eckardt score > 3) and who provided informed consent. Patients who had previously undergone endoscopic or surgical procedures involving the esophagogastric junction (EGJ) were excluded, as were those with liver cirrhosis, esophageal varices, Barrett’s esophagus, esophageal strictures, premalignant or malignant EGJ lesions, coagulopathies, pseudoachalasia, esophageal diverticulum, severe cardiopulmonary diseases, or severe systemic diseases, as well as those who were at high surgical risk and those who were pregnant or lactating.", "An investigator who was unaffiliated with the trial created the randomization list. Specific software (www.randomizer.org) was used, and participants were randomly allocated at a 1:1 ratio to the experimental (POEM) group or the comparison (LM-PF) group.", "The sample size was calculated to identify statistical significance between LM-PF and POEM regarding reflux esophagitis rates, which were assumed to be 5% and 40% after LM-PF and POEM, respectively[23]. To achieve a power of 80% with an alpha of 0.05, we estimated the minimum sample size to be 38 (19 patients in each group). Taking potential losses into consideration, we chose to include a total of 40 patients.", "\nPOEM: All POEM procedures were performed by a single operator with extensive experience in the technique. 
Prophylactic intravenous antibiotics and a proton pump inhibitor (PPI) were administered 30 min before intubation and general anesthesia.\nAfter the gastroscope was introduced, the esophageal lumen and mucosa were thoroughly cleaned. This was followed by submucosal injection of 10 mL of 0.5% indigo carmine. An incision was made into the mucosa of the posterior wall, between 5 and 6 o’clock, at 10 cm above the EGJ. The incision was made with a dual-function submucosal dissection knife (HybridKnife; Erbe, Tübingen, Germany) in Endocut Q mode (effect 2, width 3, and interval 5). Subsequently, spray coagulation (effect 2 at 40 W) was used to create a submucosal tunnel extending 3-4 cm beyond the EGJ into the proximal stomach. In all patients, full-thickness myotomy—including the circular and longitudinal muscle layers—was performed in Endocut Q mode. The myotomy was initiated 2 cm distal from the mucosal entry point and extended 3-4 cm into the proximal stomach. The mucosal incision was closed by using through-the-scope clips (Supplementary Figures 1 and 2).\nA barium esophagogram was obtained on postoperative day 1. In the absence of complications, the patient was started on a clear liquid diet and subsequently advanced to a full liquid diet for 14 d.\n\nLM-PF: All LM-PF procedures were performed by members of the foregut surgery group. After pneumoperitoneum had been established, five trocars were positioned, after which the left hepatic lobe was retracted to access the EGJ (Supplementary Figure 3). That was followed by division of the phrenoesophageal ligament, dissection, and isolation of the distal esophagus from adjacent structures; and anterolateral dislocation of the distal esophagus. The anterior gastric adipose tissue and the anterior vagus nerve were dissected and separated from the esophagus and stomach, after which myotomy of the circular and longitudinal muscle layers was performed, extending from 5-6 cm above the EGJ to 2-3 cm below the EGJ. Partial fundoplication between the esophagus and stomach was then performed by placing sutures in three planes: Posterior—two to three sutures; left lateral—four to five sutures (on the left side); and right anterior—a line of sutures covering the myotomy, thus interposed with the gastric fundus on the right. In the absence of complications, patients were started on a clear liquid diet on the morning following the procedure and maintained on a mechanical soft diet for 14 d after discharge.", "\nClinical assessments: Although achalasia subtyping is defined based on HRM, in this study, the achalasia subtype was evaluated according to the degree of esophageal dilation on the barium esophagogram and esophageal motor activity on EM or HRM. Given that Chagas disease, which often involves the esophagus, is common in Brazil, all patients were screened for anti-Trypanosoma cruzi antibodies by enzyme-linked immunosorbent assay and indirect immunofluorescence. Weight loss, dysphagia, and pain were assessed before the procedure, as well as at 6 and 12 mo after the procedure, by using the Eckardt score. Patients with an Eckardt score ≥ 3 were categorized as symptomatic. The clinical evaluation of gastroesophageal reflux (GER) and the diagnosis of GER disease (GERD) was based on the application of the GER Disease Questionnaire (GerdQ)[24] (Supplementary Figure 4).\n\nEGD: We performed EGD before the procedure, as well as at 6 and 12 mo after. Esophagitis was graded according to the Los Angeles classification system[25]. 
We performed chromoendoscopy with narrow-band imaging and 2.5% Lugol’s solution to screen for esophageal cancer. Suspicious lesions were biopsied.\n\nBarium esophagogram: To assess esophageal emptying before and 12 mo after the procedure, we used a timed barium esophagogram, as previously described[26]. The degree of esophageal emptying is qualitatively estimated by comparing 1- and 5-min images or by measuring the height and width (in centimeters) of the barium column at both times, calculating the approximate area, and determining the percentage change in the area.\n\nEM: Conventional EM was performed before and 12 mo after the procedure. It should be noted that HRM technology was not available in Brazil when the trial began. To perform conventional EM, we used an eight-channel computerized polygraph under pneumohydraulic capillary infusion at a flow rate of 0.6 mL/min/channel. Preparation was required with a 12-h fast and suspension of medications that alter esophageal motility. The technique consists of passing a probe through the nostril and checking the position in the stomach through deep inspiration. With the patient in the supine position, the probe is pulled centimeter by centimeter to measure the mean respiratory pressure and pressure inversion point, and then one of the channels is positioned distal to 3 cm from the upper edge of the LES and the other channels are distant 5 cm apart. Finally, the catheter is pulled up to the upper esophageal sphincter. Through the average of the four distal radial channels of the conventional manometry catheter, the maximum expiratory pressure (MEP) was identified, which best represents the LES pressure itself.\n\nQuality of life: To evaluate the quality of life (QoL), we used the Medical Outcomes Study 36-item Short-Form Health Survey (SF-36)[27,28]. The SF-36 comprises 36 questions covering eight domains: Physical functioning, role-physical, bodily pain, general health, vitality, social functioning, role-emotional, and mental health.\n\nAdverse events: Among the adverse events (AEs) recorded, pneumoperitoneum requiring drainage or puncture was categorized as a minor AE, as was minor mucosal damage requiring endoscopic closure. Major AEs were defined as pneumoperitoneum leading to hemodynamic instability and premature interruption of the procedure; bleeding requiring a blood transfusion and accompanied by hemodynamic instability or requiring an additional intervention; major mucosal damage requiring endoscopic closure or increasing the length of stay (LOS); or fistula/dehiscence of the incision with signs of fever or infection associated with hemodynamic instability. For AEs occurring in the LM-PF group, we used the Clavien-Dindo classification[29].", "For POEM and LM-PF, the following outcome measures were evaluated: Operative time; length of the myotomy in the esophagus and stomach; myotomy site; complications; and LOS. Patient data were collected on the Research Electronic Data Capture platform.", "At 1, 6, and 12 mo after the interventions, the Eckardt score was determined, the SF-36 was applied, EGD was performed, and timed barium esophagograms were obtained. Conventional EM was performed at 6 and 12 mo. Patients received the maximum dose of PPI for the first 30 d postprocedure, and those who presented with erosive esophagitis at follow-up endoscopy were maintained on PPI treatment for 8 wk. 
Treatment success was defined as symptom improvement (≤ 3-point reduction in the Eckardt score), an LES pressure < 15 mmHg[30-32], and a > 50% reduction in the height of the barium column at 1 min. Treatment failure was defined as symptom persistence in patients with an Eckardt score ≥ 3.", "We performed descriptive analyses of all study variables. Quantitative variables were expressed as means with standard deviations or as medians with interquartile ranges. Qualitative variables are expressed as absolute and relative frequencies. For the comparison of means between the two groups, the Student’s t-test was used. When the assumption of normality was rejected, the nonparametric Mann-Whitney test was used. To test the homogeneity between proportions, the chi-square test or Fisher’s exact test was used. Repeated-measures analysis of variance was used to compare the groups over the course of the study. When the assumption of normality was rejected, the nonparametric Mann-Whitney test and Friedman test were used. The data were processed with the SPSS Statistics software package, version 17.0 for Windows (SPSS Inc., Chicago, IL, United States). P < 0.05 was considered statistically significant.", "Population characteristics Between March 2017 and February 2018, 40 patients diagnosed with achalasia were enrolled and randomized to undergo LM-PF (n = 20) or POEM (n = 20), as detailed in Figure 1. Nine (22.5%) of the forty patients (five in the POEM group and four in the LM-PF group) tested positive for anti-T. cruzi antibodies, indicating exposure to Chagas disease. At baseline, there was no statistical difference between the two groups in terms of sex, age, etiology, grade, symptom duration, weight, body mass index, or Eckardt score (Table 1). The study was terminated after all patients had been followed for at least 12 mo.\n\nFlow chart of the study timeline. EGD: Esophagogastroduodenoscopy; EM: Esophageal manometry (conventional); LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. \nCharacteristics of the study population\nBMI: Body mass index; IQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nBetween March 2017 and February 2018, 40 patients diagnosed with achalasia were enrolled and randomized to undergo LM-PF (n = 20) or POEM (n = 20), as detailed in Figure 1. Nine (22.5%) of the forty patients (five in the POEM group and four in the LM-PF group) tested positive for anti-T. cruzi antibodies, indicating exposure to Chagas disease. At baseline, there was no statistical difference between the two groups in terms of sex, age, etiology, grade, symptom duration, weight, body mass index, or Eckardt score (Table 1). The study was terminated after all patients had been followed for at least 12 mo.\n\nFlow chart of the study timeline. EGD: Esophagogastroduodenoscopy; EM: Esophageal manometry (conventional); LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. 
\nCharacteristics of the study population\nBMI: Body mass index; IQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nTreatment outcomes Eckardt scores at 1, 6, and 12 mo were lower than the baseline scores in both groups—1.0, 0.5, and 0.50, respectively, vs 8.0, in the POEM group; and 0.0, 0.0, and 0.0, respectively, vs 8.5, in the LM-PF group—and the differences were statistically significant (P < 0.001; Table 2). There were no statistical differences between the two groups for the Eckardt scores at 1, 6, and 12 mo of follow-up (P = 0.192, P = 0.242, and P = 0.242, respectively).\nEckardt scores\nNonparametric Mann-Whitney test. \nIQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy. \nIn the POEM group, treatment success was confirmed at 1 mo in all 20 patients, at 6 mo in 18 of the patients (90%), and at 12 mo in 19 (95%). In the LM-PF group, treatment success was confirmed at 1 mo and was maintained at 6 and 12 mo in all 20 patients. As shown in Table 3, there was no statistical difference between the two groups regarding treatment success at 1, 6, or 12 mo (P = 0.487 and P = 1.000 for 6 and 12 mo, respectively).\nTreatment success\nFisher’s exact test. \nLM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nIn both groups, there were significant postprocedural improvements in dysphagia, although the differences were not significant at 1, 6, or 12 mo (P = 0.602; P = 0.565, and P = 0.547, respectively). However, statistically significant improvements in weight loss, chest pain, and regurgitation were observed in both groups (Supplementary Tables 1 and 2). The postprocedure rate of GER, as assessed with the GerdQ, was higher in the POEM group than in the LM-PF group (64.6% vs 11.1%; P < 0.02).\nEckardt scores at 1, 6, and 12 mo were lower than the baseline scores in both groups—1.0, 0.5, and 0.50, respectively, vs 8.0, in the POEM group; and 0.0, 0.0, and 0.0, respectively, vs 8.5, in the LM-PF group—and the differences were statistically significant (P < 0.001; Table 2). There were no statistical differences between the two groups for the Eckardt scores at 1, 6, and 12 mo of follow-up (P = 0.192, P = 0.242, and P = 0.242, respectively).\nEckardt scores\nNonparametric Mann-Whitney test. \nIQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy. \nIn the POEM group, treatment success was confirmed at 1 mo in all 20 patients, at 6 mo in 18 of the patients (90%), and at 12 mo in 19 (95%). In the LM-PF group, treatment success was confirmed at 1 mo and was maintained at 6 and 12 mo in all 20 patients. As shown in Table 3, there was no statistical difference between the two groups regarding treatment success at 1, 6, or 12 mo (P = 0.487 and P = 1.000 for 6 and 12 mo, respectively).\nTreatment success\nFisher’s exact test. \nLM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nIn both groups, there were significant postprocedural improvements in dysphagia, although the differences were not significant at 1, 6, or 12 mo (P = 0.602; P = 0.565, and P = 0.547, respectively). However, statistically significant improvements in weight loss, chest pain, and regurgitation were observed in both groups (Supplementary Tables 1 and 2). 
The postprocedure rate of GER, as assessed with the GerdQ, was higher in the POEM group than in the LM-PF group (64.6% vs 11.1%; P < 0.02).\nEndoscopic findings At 1, 6, and 12 mo, only 20, 18, and 18 POEM group patients, respectively, underwent EGD, as did only 17, 16, and 17 LM-PF group patients, respectively. The remaining patients declined to undergo EGD because they were asymptomatic. The rates of esophagitis were significantly higher in the POEM group than in the LM-PF group at 1, 6, and 12 mo of follow-up (P = 0.014, P < 0.001, and P = 0.002, respectively). In the LM-PF group, 1 patient had esophagitis (classified as grade A) at 6 mo and 2 patients had esophagitis (classified as grades B and C, respectively) at 12 mo. In the POEM group, esophagitis was observed at 1 mo in 5 patients (being classified as grade A in one, grade B in three, and grade C in one), at 6 mo in 10 patients (being classified as grade A in three, grade B in two, and grade C in five), and at 12 mo in 11 patients (being classified as grade A in five, grade C in four, and grade D in two). At 1, 6, and 12 mo, the rates of esophagitis were 0.0%, 5.6%, and 11.1%, respectively, in the LM-PF group and 29.4%, 62.5%, and 64.6%, respectively, in the POEM group (Table 4).\nReflux esophagitis\nFisher’s exact test. \nLM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nAt 1, 6, and 12 mo, only 20, 18, and 18 POEM group patients, respectively, underwent EGD, as did only 17, 16, and 17 LM-PF group patients, respectively. The remaining patients declined to undergo EGD because they were asymptomatic. The rates of esophagitis were significantly higher in the POEM group than in the LM-PF group at 1, 6, and 12 mo of follow-up (P = 0.014, P < 0.001, and P = 0.002, respectively). In the LM-PF group, 1 patient had esophagitis (classified as grade A) at 6 mo and 2 patients had esophagitis (classified as grades B and C, respectively) at 12 mo. In the POEM group, esophagitis was observed at 1 mo in 5 patients (being classified as grade A in one, grade B in three, and grade C in one), at 6 mo in 10 patients (being classified as grade A in three, grade B in two, and grade C in five), and at 12 mo in 11 patients (being classified as grade A in five, grade C in four, and grade D in two). At 1, 6, and 12 mo, the rates of esophagitis were 0.0%, 5.6%, and 11.1%, respectively, in the LM-PF group and 29.4%, 62.5%, and 64.6%, respectively, in the POEM group (Table 4).\nReflux esophagitis\nFisher’s exact test. \nLM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nBarium esophagogram Table 5 shows the results of the barium esophagogram. In both groups, the heights of the barium column at 1 and 5 min were significantly lower at 1, 6, and 12 mo than at baseline (P < 0.001). There was no statistical difference between the two groups regarding the barium column height values at 1 and 5 min in the follow-up periods (intent-to-treat analysis: P = 0.429 and 0.773; per-protocol analysis: P = 0.505 and 0.922).\nHeight of the barium column in cm\nData are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nTable 5 shows the results of the barium esophagogram. In both groups, the heights of the barium column at 1 and 5 min were significantly lower at 1, 6, and 12 mo than at baseline (P < 0.001). 
There was no statistical difference between the two groups regarding the barium column height values at 1 and 5 min in the follow-up periods (intent-to-treat analysis: P = 0.429 and 0.773; per-protocol analysis: P = 0.505 and 0.922).\nHeight of the barium column in cm\nData are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nEM In both groups, the MEP values were significantly lower at 6 and 12 mo than at baseline (Table 6). There was no statistical difference between the two groups at either of those time points (intention-to-treat analysis: P = 0.848).\nEsophageal manometry results of lower esophageal sphincter pressure in mmHg\nData are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nIn both groups, the MEP values were significantly lower at 6 and 12 mo than at baseline (Table 6). There was no statistical difference between the two groups at either of those time points (intention-to-treat analysis: P = 0.848).\nEsophageal manometry results of lower esophageal sphincter pressure in mmHg\nData are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nAEs, LOS, anesthesia time, and procedure time Table 7 describes the AEs, LOS, anesthesia time, and procedure time, in the sample as a whole and by groups. There was no statistical difference between the two groups regarding the rate of AEs (P = 0.605). The relevant complications observed in the immediate postprocedural period included empyema requiring thoracostomy in one (5%) of the LM-PF patients, and inadvertent intraoperative mucosal damage in three (15%) of the POEM patients (treated with endoscopic clipping). The clinical outcomes were favorable in all patients. The mean LOS was 3.95 ± 3.36 d in the LM-PF group, compared with 3.40 ± 0.75 d in the POEM group (P = 0.483). The mean anesthesia time and mean procedure time were both shorter in the POEM group than in the LM-PF group (185.00 ± 56.89 and 95.70 ± 30.47 min, respectively, vs 296.75 ± 56.13 and 218.75 ± 50.88 min, respectively; P < 0.001 for both).\nAdverse events, length of hospital stay, anesthesia time, and procedure time\nFisher’s exact test.\nStudent’s t-test. \nData are presented as mean ± SD, unless noted otherwise. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nTable 7 describes the AEs, LOS, anesthesia time, and procedure time, in the sample as a whole and by groups. There was no statistical difference between the two groups regarding the rate of AEs (P = 0.605). The relevant complications observed in the immediate postprocedural period included empyema requiring thoracostomy in one (5%) of the LM-PF patients, and inadvertent intraoperative mucosal damage in three (15%) of the POEM patients (treated with endoscopic clipping). The clinical outcomes were favorable in all patients. The mean LOS was 3.95 ± 3.36 d in the LM-PF group, compared with 3.40 ± 0.75 d in the POEM group (P = 0.483). The mean anesthesia time and mean procedure time were both shorter in the POEM group than in the LM-PF group (185.00 ± 56.89 and 95.70 ± 30.47 min, respectively, vs 296.75 ± 56.13 and 218.75 ± 50.88 min, respectively; P < 0.001 for both).\nAdverse events, length of hospital stay, anesthesia time, and procedure time\nFisher’s exact test.\nStudent’s t-test. \nData are presented as mean ± SD, unless noted otherwise. 
LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nQoL Table 8 shows the results obtained with the SF-36. In the POEM group, there were postprocedural improvements in all SF-36 domains, whereas there were improvements in only three domains (physical functioning, energy/fatigue, and general health) in the LM-PF group.\nQuality of life\nIQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. \nTable 8 shows the results obtained with the SF-36. In the POEM group, there were postprocedural improvements in all SF-36 domains, whereas there were improvements in only three domains (physical functioning, energy/fatigue, and general health) in the LM-PF group.\nQuality of life\nIQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. ", "Between March 2017 and February 2018, 40 patients diagnosed with achalasia were enrolled and randomized to undergo LM-PF (n = 20) or POEM (n = 20), as detailed in Figure 1. Nine (22.5%) of the forty patients (five in the POEM group and four in the LM-PF group) tested positive for anti-T. cruzi antibodies, indicating exposure to Chagas disease. At baseline, there was no statistical difference between the two groups in terms of sex, age, etiology, grade, symptom duration, weight, body mass index, or Eckardt score (Table 1). The study was terminated after all patients had been followed for at least 12 mo.\n\nFlow chart of the study timeline. EGD: Esophagogastroduodenoscopy; EM: Esophageal manometry (conventional); LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. \nCharacteristics of the study population\nBMI: Body mass index; IQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.", "Eckardt scores at 1, 6, and 12 mo were lower than the baseline scores in both groups—1.0, 0.5, and 0.50, respectively, vs 8.0, in the POEM group; and 0.0, 0.0, and 0.0, respectively, vs 8.5, in the LM-PF group—and the differences were statistically significant (P < 0.001; Table 2). There were no statistical differences between the two groups for the Eckardt scores at 1, 6, and 12 mo of follow-up (P = 0.192, P = 0.242, and P = 0.242, respectively).\nEckardt scores\nNonparametric Mann-Whitney test. \nIQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy. \nIn the POEM group, treatment success was confirmed at 1 mo in all 20 patients, at 6 mo in 18 of the patients (90%), and at 12 mo in 19 (95%). In the LM-PF group, treatment success was confirmed at 1 mo and was maintained at 6 and 12 mo in all 20 patients. As shown in Table 3, there was no statistical difference between the two groups regarding treatment success at 1, 6, or 12 mo (P = 0.487 and P = 1.000 for 6 and 12 mo, respectively).\nTreatment success\nFisher’s exact test. \nLM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nIn both groups, there were significant postprocedural improvements in dysphagia, although the differences were not significant at 1, 6, or 12 mo (P = 0.602; P = 0.565, and P = 0.547, respectively). 
However, statistically significant improvements in weight loss, chest pain, and regurgitation were observed in both groups (Supplementary Tables 1 and 2). The postprocedure rate of GER, as assessed with the GerdQ, was higher in the POEM group than in the LM-PF group (64.6% vs 11.1%; P < 0.02).", "At 1, 6, and 12 mo, only 20, 18, and 18 POEM group patients, respectively, underwent EGD, as did only 17, 16, and 17 LM-PF group patients, respectively. The remaining patients declined to undergo EGD because they were asymptomatic. The rates of esophagitis were significantly higher in the POEM group than in the LM-PF group at 1, 6, and 12 mo of follow-up (P = 0.014, P < 0.001, and P = 0.002, respectively). In the LM-PF group, 1 patient had esophagitis (classified as grade A) at 6 mo and 2 patients had esophagitis (classified as grades B and C, respectively) at 12 mo. In the POEM group, esophagitis was observed at 1 mo in 5 patients (being classified as grade A in one, grade B in three, and grade C in one), at 6 mo in 10 patients (being classified as grade A in three, grade B in two, and grade C in five), and at 12 mo in 11 patients (being classified as grade A in five, grade C in four, and grade D in two). At 1, 6, and 12 mo, the rates of esophagitis were 0.0%, 5.6%, and 11.1%, respectively, in the LM-PF group and 29.4%, 62.5%, and 64.6%, respectively, in the POEM group (Table 4).\nReflux esophagitis\nFisher’s exact test. \nLM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.", "Table 5 shows the results of the barium esophagogram. In both groups, the heights of the barium column at 1 and 5 min were significantly lower at 1, 6, and 12 mo than at baseline (P < 0.001). There was no statistical difference between the two groups regarding the barium column height values at 1 and 5 min in the follow-up periods (intent-to-treat analysis: P = 0.429 and 0.773; per-protocol analysis: P = 0.505 and 0.922).\nHeight of the barium column in cm\nData are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.", "In both groups, the MEP values were significantly lower at 6 and 12 mo than at baseline (Table 6). There was no statistical difference between the two groups at either of those time points (intention-to-treat analysis: P = 0.848).\nEsophageal manometry results of lower esophageal sphincter pressure in mmHg\nData are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.", "Table 7 describes the AEs, LOS, anesthesia time, and procedure time, in the sample as a whole and by groups. There was no statistical difference between the two groups regarding the rate of AEs (P = 0.605). The relevant complications observed in the immediate postprocedural period included empyema requiring thoracostomy in one (5%) of the LM-PF patients, and inadvertent intraoperative mucosal damage in three (15%) of the POEM patients (treated with endoscopic clipping). The clinical outcomes were favorable in all patients. The mean LOS was 3.95 ± 3.36 d in the LM-PF group, compared with 3.40 ± 0.75 d in the POEM group (P = 0.483). The mean anesthesia time and mean procedure time were both shorter in the POEM group than in the LM-PF group (185.00 ± 56.89 and 95.70 ± 30.47 min, respectively, vs 296.75 ± 56.13 and 218.75 ± 50.88 min, respectively; P < 0.001 for both).\nAdverse events, length of hospital stay, anesthesia time, and procedure time\nFisher’s exact test.\nStudent’s t-test. 
\nData are presented as mean ± SD, unless noted otherwise. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.", "Table 8 shows the results obtained with the SF-36. In the POEM group, there were postprocedural improvements in all SF-36 domains, whereas there were improvements in only three domains (physical functioning, energy/fatigue, and general health) in the LM-PF group.\nQuality of life\nIQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. ", "In this single-center RCT comparing POEM and LM-PF in treatment-naive patients with achalasia, a significant proportion of the patients evaluated had achalasia attributed to Chagas disease. In a study by Farias et al[33], no statistical difference was observed between idiopathic and Chagas disease-associated achalasia regarding treatment success and AEs with POEM.\nFor years, LM-PF has been considered the gold-standard treatment for achalasia[34], because it provides good clinical results, has a low reintervention rate, and has adequate reproducibility. In the first study involving the use of endoscopic myotomy[21], conducted in 1980, all 17 of the patients in the sample showed symptom improvement. Although, technical improvements proposed by Inoue et al[22] in 2010 and several cohort studies comparing POEM and LM-PF[35-45] over the last decade have proved its safety and efficacy in the management of achalasia, the POEM technique is still not fully standardized[22].\nThe first RCT comparing the two techniques in the treatment of idiopathic achalasia[46], including 221 patients, demonstrated clinical success rates at 1 year and 2 years of follow-up of 84.8% and 83.0%, respectively, in the POEM group, comparable to the 83.5% and 81.7%, respectively, observed for the LM-PF group. In our study, the clinical success rate at the end of the 1st year was 95% in the POEM group and 100% in the LM-PF group, with no statistical difference between the two techniques. This discrepancy between our results and those of the earlier trial may be related to the fact that approximately 35% of the patients evaluated in that trial had previously received some type of treatment, which could have increased the degree of technical difficulty in dissection secondary to submucosal fibrosis.\nWe observed no statistical differences between the two techniques concerning Eckardt scores for dysphagia, regurgitation, chest pain, and weight loss, at 1, 6, and 12 mo of follow-up, which demonstrates the noninferiority of POEM to the LM-PF.\nImmediate postprocedural complications occurred in 10% of the 40 patients evaluated in the present study. There were no cases of death in our sample, and the rate of AEs did not differ significantly between the two techniques. In our study, all POEM procedures involved a full-thickness myotomy, which made pneumoperitoneum an expected event. Pneumoperitoneum is a common finding after POEM and is not indicative of an unfavorable outcome for the patient. We categorized pneumoperitoneum as an AE only if abdominal decompression was required.\nAnesthesia and procedure times were shorter for POEM than for LM-PF. That can be explained by the fact that the POEM involved full-thickness myotomy and did not involve fundoplication. 
There was no difference between the two procedures in terms of LOS and QoL.\nWe found that POEM and LM-PF both resulted in significant decreases in the 1- and 5-min barium column heights at 1, 6, and 12 mo after the procedures, demonstrating a clear decrease in resistance to the passage of contrast at the level of the EGJ. Sanagapalli et al[47] showed an association of significant improvement in symptoms when there is a mean reduction in the residual barium column height by about 53%. The LES pressure (MEP) on conventional EM was significantly lower throughout the follow-up period than at baseline, and there was no significant difference between the two groups.\nIn this study, the rates of treatment success were comparable between surgical and endoscopic myotomy, both providing symptom improvement, as well as objective improvement in radiological and manometric parameters, at 1, 6, and 12 mo. A recent systematic review and meta-analysis demonstrated that the incidence of GER is higher after POEM than after laparoscopic Heller myotomy[48]. That is in agreement with our findings. The evaluation of GER in our study was based on the typical clinical manifestations of GERD or the identification of erosive esophagitis by EGD. All patients with symptoms and suggestive endoscopic findings of GER received PPI treatment with suspension or maintenance according to the clinical and endoscopic response. A significant limitation of our study was the absence of pHmetry evaluation, which is the main method for GERD evaluation. Prior to our study, we considered that the pHmetry evaluation would be compromised because patients with esophageal achalasia present retention of food residues in the esophageal mucosa and the fermentation of those residues can decrease the intraluminal pH and thus be a confounding factor in the diagnosis of GERD. However, Smart et al[49] showed that such fermentation would affect only pre-procedure pHmetry, without much influence on the post-procedure pHmetry.\nErosive esophagitis, especially grade C or D, is considered indicative of GER after endoscopy in patients without a history of the condition[50]. We consider that patients undergoing POEM have a wider esophagogastric transition that favors a higher rate of GER compared to LM-PF, despite similar LES pressures between the groups. Werner et al[46] also showed more GER in patients undergoing POEM despite no differences in manometry compared to LM-PF.\nThe POEM technique has undergone numerous changes since its initial description by Inoue et al[22]. It has been shown that short- to medium-term efficacy is comparable between myotomy of the circular muscle layer only and full-thickness myotomy, as well as that the latter, despite significantly reducing the duration of POEM, may increase the risk of GERD[51,52]. Likewise, there is uncertainty about whether myotomy should be performed in the anterior or posterior wall, the latter technique being associated with a higher incidence of GER[53,54], although other studies have failed to demonstrate that[55,56]. In the present study, we chose a long posterior full-thickness myotomy, because of the greater technical ease[43,57,58].\nThe results obtained in our study corroborate those of a previous study demonstrating the noninferiority of POEM to LM-PF for symptom control in patients with achalasia, except for postprocedure GER[46]. That raises the question of which technical changes we should study. 
Therefore, it is valid to perform in-depth analyses of oblique fiber preservation techniques[59], as well as the use of POEM with fundoplication[60,61]. One study[58] demonstrated that preservation of the oblique muscle, using the two penetrating vessels as an anatomical landmark, can significantly reduce the frequency of post-POEM GER, although that should be interpreted with caution because it was a retrospective cohort study, without strict methodological criteria, and with limited reproducibility. In the present study, we employed the conventional POEM technique as previously described[62], and the preservation of the two penetrating vessels was not standardized. The postprocedural occurrence of GERD symptoms in our sample was > 50%, similar to what has been reported by other authors. Despite not including patients undergoing POEM, a recent study[63] showed that achalasia patients with post-treatment reflux symptoms demonstrate esophageal hypersensitivity to chemical and mechanical stimuli, which may determine symptom generation.\nAnother strategy proposed to minimize the occurrence of GER after POEM is performing transoral incisionless fundoplication. In one pilot study[60], that procedure was reported to have a 100% success rate in terms of symptom control, acid exposure time, and the need for antisecretory drugs. In another pilot study[61], standard POEM combined with endoscopic fundoplication (POEM-F) was employed, and no complications were observed. A recent retrospective study followed patients for 12 mo after POEM-F[64], and showed that the incidence of postprocedural GER was only 11.1%. Albeit attractive, POEM-F has several potential limitations[65]. First, it is necessary to perform POEM in the anterior wall, contrary to the current trend of using a posterior wall approach. Second, it may not be possible to perform POEM-F in patients who have previously undergone anterior myotomy and experience symptom recurrence due to submucosal fibrosis. Third, the long-term durability of this type of fundoplication is still unknown.\nIn our opinion, it will take some time for the literature to reveal whether endoscopic or surgical myotomy is the best long-term option for the treatment of achalasia. Two crucial points that weigh unfavorably on the POEM procedure, in terms of the possibility that it will come to be widely indicated for the treatment of achalasia[66,67]. The first is the paucity of high-quality (randomized) technical studies comparing POEM with the well-established techniques of pneumatic dilatation of the cardia and laparoscopic myotomy with fundoplication, which could show, at least, the noninferiority of POEM. The second is the lack of studies with long (> 5 years) follow-up periods, which could demonstrate the true reintervention rate, based on the identification of serious late complications, including GER requiring fundoplication and dysphagia resulting from an inadequate myotomy[68]. Currently, the results at 2-3 years are similar between the endoscopic and surgical myotomy techniques concerning the clinical parameters, except for the greater occurrence of GER after the endoscopic technique, which typically responds well to antisecretory drug treatment. However, those data are accompanied by uncertainties that will only be resolved over time.", "Our results allow us to conclude that LM-PF and POEM are equally effective in controlling the clinical symptoms of achalasia at 1, 6, and 12 mo. 
Although the use of the POEM technique results in a significantly higher rate of postprocedure GER, it also shortens anesthesia and procedure times. We found no differences between the two methods regarding LOS, the occurrence of AEs, or QoL. In the POEM group, there was an improvement in all domains of QoL." ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study design and participants", "Randomization strategy", "Sample size calculation", "Techniques", "Diagnosis and follow-up", "Outcome measures and data collection", "Follow-up", "Statistical analysis", "RESULTS", "Population characteristics", "Treatment outcomes", "Endoscopic findings", "Barium esophagogram", "EM", "AEs, LOS, anesthesia time, and procedure time", "QoL", "DISCUSSION", "CONCLUSION" ]
[ "Achalasia is a rare benign esophageal motor disorder characterized by incomplete relaxation of the lower esophageal sphincter (LES)[1-3]. For primary or idiopathic achalasia, the underlying etiology has yet to be clearly defined; secondary achalasia results from any one of several systemic diseases including infectious, autoimmune, and drug-induced disorders[4-6]. In both cases, the most common symptoms are progressive dysphagia, regurgitation, and weight loss. The symptom intensity and treatment response can be assessed with validated scales such as the Eckardt score[2,7,8]. The diagnosis requires the proper integration between reported symptoms and the interpretation of diagnostic tests such as a barium esophagogram, esophagogastroduodenoscopy (EGD), and manometry—either conventional esophageal manometry (EM) or high-resolution manometry (HRM)[9-12].\nThe treatment of achalasia is not curative but rather is aimed at reducing LES pressure[13-17]. In patients who have failed noninvasive therapy, surgery should be considered[18]. Myotomy with partial fundoplication has been considered the first-line treatment for non-advanced achalasia[19].\nRecently, peroral endoscopic myotomy (POEM), a technique that employs the principles of submucosal endoscopy to perform the equivalent of a surgical myotomy, has emerged as a promising minimally invasive technique for the management of this condition[20]. This technique was first described in 1980 and subsequently modified to create what is now POEM[21,22].\nThis randomized controlled trial (RCT) compared the efficacy and outcomes of laparoscopic myotomy and partial fundoplication (LM-PF) with those of POEM for the treatment of patients with achalasia of any etiology. We also compared the two procedures in terms of the incidence of reflux esophagitis.", "Study design and participants This was a single-center RCT in which we evaluated 40 treatment-naive patients with esophageal achalasia. We included patients ≥ 18 years of age who had been diagnosed with achalasia based on clinical and manometric criteria (dysphagia score ≥ II and Eckardt score > 3) and who provided informed consent. Patients who had previously undergone endoscopic or surgical procedures involving the esophagogastric junction (EGJ) were excluded, as were those with liver cirrhosis, esophageal varices, Barrett’s esophagus, esophageal strictures, premalignant or malignant EGJ lesions, coagulopathies, pseudoachalasia, esophageal diverticulum, severe cardiopulmonary diseases, or severe systemic diseases, as well as those who were at high surgical risk and those who were pregnant or lactating.\nThis was a single-center RCT in which we evaluated 40 treatment-naive patients with esophageal achalasia. We included patients ≥ 18 years of age who had been diagnosed with achalasia based on clinical and manometric criteria (dysphagia score ≥ II and Eckardt score > 3) and who provided informed consent. Patients who had previously undergone endoscopic or surgical procedures involving the esophagogastric junction (EGJ) were excluded, as were those with liver cirrhosis, esophageal varices, Barrett’s esophagus, esophageal strictures, premalignant or malignant EGJ lesions, coagulopathies, pseudoachalasia, esophageal diverticulum, severe cardiopulmonary diseases, or severe systemic diseases, as well as those who were at high surgical risk and those who were pregnant or lactating.\nRandomization strategy An investigator who was unaffiliated with the trial created the randomization list. 
Specific software (www.randomizer.org) was used, and participants were randomly allocated at a 1:1 ratio to the experimental (POEM) group or the comparison (LM-PF) group.\nAn investigator who was unaffiliated with the trial created the randomization list. Specific software (www.randomizer.org) was used, and participants were randomly allocated at a 1:1 ratio to the experimental (POEM) group or the comparison (LM-PF) group.\nSample size calculation The sample size was calculated to identify statistical significance between LM-PF and POEM regarding reflux esophagitis rates, which were assumed to be 5% and 40% after LM-PF and POEM, respectively[23]. To achieve a power of 80% with an alpha of 0.05, we estimated the minimum sample size to be 38 (19 patients in each group). Taking potential losses into consideration, we chose to include a total of 40 patients.\nThe sample size was calculated to identify statistical significance between LM-PF and POEM regarding reflux esophagitis rates, which were assumed to be 5% and 40% after LM-PF and POEM, respectively[23]. To achieve a power of 80% with an alpha of 0.05, we estimated the minimum sample size to be 38 (19 patients in each group). Taking potential losses into consideration, we chose to include a total of 40 patients.\nTechniques \nPOEM: All POEM procedures were performed by a single operator with extensive experience in the technique. Prophylactic intravenous antibiotics and a proton pump inhibitor (PPI) were administered 30 min before intubation and general anesthesia.\nAfter the gastroscope was introduced, the esophageal lumen and mucosa were thoroughly cleaned. This was followed by submucosal injection of 10 mL of 0.5% indigo carmine. An incision was made into the mucosa of the posterior wall, between 5 and 6 o’clock, at 10 cm above the EGJ. The incision was made with a dual-function submucosal dissection knife (HybridKnife; Erbe, Tübingen, Germany) in Endocut Q mode (effect 2, width 3, and interval 5). Subsequently, spray coagulation (effect 2 at 40 W) was used to create a submucosal tunnel extending 3-4 cm beyond the EGJ into the proximal stomach. In all patients, full-thickness myotomy—including the circular and longitudinal muscle layers—was performed in Endocut Q mode. The myotomy was initiated 2 cm distal from the mucosal entry point and extended 3-4 cm into the proximal stomach. The mucosal incision was closed by using through-the-scope clips (Supplementary Figures 1 and 2).\nA barium esophagogram was obtained on postoperative day 1. In the absence of complications, the patient was started on a clear liquid diet and subsequently advanced to a full liquid diet for 14 d.\n\nLM-PF: All LM-PF procedures were performed by members of the foregut surgery group. After pneumoperitoneum had been established, five trocars were positioned, after which the left hepatic lobe was retracted to access the EGJ (Supplementary Figure 3). That was followed by division of the phrenoesophageal ligament, dissection, and isolation of the distal esophagus from adjacent structures; and anterolateral dislocation of the distal esophagus. The anterior gastric adipose tissue and the anterior vagus nerve were dissected and separated from the esophagus and stomach, after which myotomy of the circular and longitudinal muscle layers was performed, extending from 5-6 cm above the EGJ to 2-3 cm below the EGJ. 
Partial fundoplication between the esophagus and stomach was then performed by placing sutures in three planes: Posterior—two to three sutures; left lateral—four to five sutures (on the left side); and right anterior—a line of sutures covering the myotomy, thus interposed with the gastric fundus on the right. In the absence of complications, patients were started on a clear liquid diet on the morning following the procedure and maintained on a mechanical soft diet for 14 d after discharge.\n\nPOEM: All POEM procedures were performed by a single operator with extensive experience in the technique. Prophylactic intravenous antibiotics and a proton pump inhibitor (PPI) were administered 30 min before intubation and general anesthesia.\nAfter the gastroscope was introduced, the esophageal lumen and mucosa were thoroughly cleaned. This was followed by submucosal injection of 10 mL of 0.5% indigo carmine. An incision was made into the mucosa of the posterior wall, between 5 and 6 o’clock, at 10 cm above the EGJ. The incision was made with a dual-function submucosal dissection knife (HybridKnife; Erbe, Tübingen, Germany) in Endocut Q mode (effect 2, width 3, and interval 5). Subsequently, spray coagulation (effect 2 at 40 W) was used to create a submucosal tunnel extending 3-4 cm beyond the EGJ into the proximal stomach. In all patients, full-thickness myotomy—including the circular and longitudinal muscle layers—was performed in Endocut Q mode. The myotomy was initiated 2 cm distal from the mucosal entry point and extended 3-4 cm into the proximal stomach. The mucosal incision was closed by using through-the-scope clips (Supplementary Figures 1 and 2).\nA barium esophagogram was obtained on postoperative day 1. In the absence of complications, the patient was started on a clear liquid diet and subsequently advanced to a full liquid diet for 14 d.\n\nLM-PF: All LM-PF procedures were performed by members of the foregut surgery group. After pneumoperitoneum had been established, five trocars were positioned, after which the left hepatic lobe was retracted to access the EGJ (Supplementary Figure 3). That was followed by division of the phrenoesophageal ligament, dissection, and isolation of the distal esophagus from adjacent structures; and anterolateral dislocation of the distal esophagus. The anterior gastric adipose tissue and the anterior vagus nerve were dissected and separated from the esophagus and stomach, after which myotomy of the circular and longitudinal muscle layers was performed, extending from 5-6 cm above the EGJ to 2-3 cm below the EGJ. Partial fundoplication between the esophagus and stomach was then performed by placing sutures in three planes: Posterior—two to three sutures; left lateral—four to five sutures (on the left side); and right anterior—a line of sutures covering the myotomy, thus interposed with the gastric fundus on the right. In the absence of complications, patients were started on a clear liquid diet on the morning following the procedure and maintained on a mechanical soft diet for 14 d after discharge.\nDiagnosis and follow-up \nClinical assessments: Although achalasia subtyping is defined based on HRM, in this study, the achalasia subtype was evaluated according to the degree of esophageal dilation on the barium esophagogram and esophageal motor activity on EM or HRM. 
Given that Chagas disease, which often involves the esophagus, is common in Brazil, all patients were screened for anti-Trypanosoma cruzi antibodies by enzyme-linked immunosorbent assay and indirect immunofluorescence. Weight loss, dysphagia, and pain were assessed before the procedure, as well as at 6 and 12 mo after the procedure, by using the Eckardt score. Patients with an Eckardt score ≥ 3 were categorized as symptomatic. The clinical evaluation of gastroesophageal reflux (GER) and the diagnosis of GER disease (GERD) were based on the application of the GER Disease Questionnaire (GerdQ)[24] (Supplementary Figure 4).\n\nEGD: We performed EGD before the procedure, as well as at 6 and 12 mo after. Esophagitis was graded according to the Los Angeles classification system[25]. We performed chromoendoscopy with narrow-band imaging and 2.5% Lugol’s solution to screen for esophageal cancer. Suspicious lesions were biopsied.\n\nBarium esophagogram: To assess esophageal emptying before and 12 mo after the procedure, we used a timed barium esophagogram, as previously described[26]. The degree of esophageal emptying is estimated qualitatively by comparing 1- and 5-min images, or quantitatively by measuring the height and width (in centimeters) of the barium column at both times, calculating the approximate area, and determining the percentage change in the area (a short worked sketch follows this subsection).\n\nEM: Conventional EM was performed before and 12 mo after the procedure. It should be noted that HRM technology was not available in Brazil when the trial began. To perform conventional EM, we used an eight-channel computerized polygraph under pneumohydraulic capillary infusion at a flow rate of 0.6 mL/min/channel. Preparation required a 12-h fast and suspension of medications that alter esophageal motility. The technique consists of passing a probe through the nostril and confirming its position in the stomach through deep inspiration. With the patient in the supine position, the probe is withdrawn centimeter by centimeter to measure the mean respiratory pressure and the pressure inversion point; one channel is then positioned 3 cm distal to the upper edge of the LES, with the remaining channels spaced 5 cm apart. Finally, the catheter is pulled up to the upper esophageal sphincter. The maximum expiratory pressure (MEP), which best represents the LES pressure itself, was obtained by averaging the four distal radial channels of the conventional manometry catheter.\n\nQuality of life: To evaluate the quality of life (QoL), we used the Medical Outcomes Study 36-item Short-Form Health Survey (SF-36)[27,28]. The SF-36 comprises 36 questions covering eight domains: Physical functioning, role-physical, bodily pain, general health, vitality, social functioning, role-emotional, and mental health.\n\nAdverse events: Among the adverse events (AEs) recorded, pneumoperitoneum requiring drainage or puncture was categorized as a minor AE, as was minor mucosal damage requiring endoscopic closure. Major AEs were defined as pneumoperitoneum leading to hemodynamic instability and premature interruption of the procedure; bleeding requiring a blood transfusion and accompanied by hemodynamic instability or requiring an additional intervention; major mucosal damage requiring endoscopic closure or increasing the length of stay (LOS); or fistula/dehiscence of the incision with signs of fever or infection associated with hemodynamic instability. For AEs occurring in the LM-PF group, we used the Clavien-Dindo classification[29].
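As a purely illustrative aid, the timed-barium-esophagogram emptying estimate described above can be sketched as follows; the function name and the example numbers are hypothetical and are not taken from the study data.

```python
# Hedged sketch of the timed barium esophagogram estimate described above:
# the barium column area is approximated as height x width, and emptying is
# the percentage change in that area between the 1-min and 5-min films.
def percent_emptying(h1_cm: float, w1_cm: float, h5_cm: float, w5_cm: float) -> float:
    """Approximate % reduction in barium column area from 1 min to 5 min."""
    area_1min = h1_cm * w1_cm
    area_5min = h5_cm * w5_cm
    return 100.0 * (area_1min - area_5min) / area_1min

# Hypothetical example: a 12 x 3 cm column at 1 min shrinking to 6 x 2 cm at 5 min
print(round(percent_emptying(12, 3, 6, 2), 1))  # -> 66.7 (% emptying)
```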
Outcome measures and data collection For POEM and LM-PF, the following outcome measures were evaluated: Operative time; length of the myotomy in the esophagus and stomach; myotomy site; complications; and LOS. Patient data were collected on the Research Electronic Data Capture platform.\nFollow-up At 1, 6, and 12 mo after the interventions, the Eckardt score was determined, the SF-36 was applied, EGD was performed, and timed barium esophagograms were obtained. Conventional EM was performed at 6 and 12 mo. Patients received the maximum dose of PPI for the first 30 d postprocedure, and those who presented with erosive esophagitis at follow-up endoscopy were maintained on PPI treatment for 8 wk. Treatment success was defined as symptom improvement (a reduction in the Eckardt score to ≤ 3), an LES pressure < 15 mmHg[30-32], and a > 50% reduction in the height of the barium column at 1 min. Treatment failure was defined as symptom persistence in patients with an Eckardt score ≥ 3.
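A minimal sketch of the treatment-success definition stated above is given below; the helper function is an assumption for illustration and is not part of the authors' data-collection tooling.

```python
# Illustrative classification of the study's success criteria: post-treatment
# Eckardt score <= 3, LES pressure < 15 mmHg, and > 50% reduction in the
# height of the 1-min barium column relative to baseline.
def treatment_success(eckardt: int,
                      les_pressure_mmhg: float,
                      barium_1min_baseline_cm: float,
                      barium_1min_followup_cm: float) -> bool:
    barium_reduction = ((barium_1min_baseline_cm - barium_1min_followup_cm)
                        / barium_1min_baseline_cm)
    return eckardt <= 3 and les_pressure_mmhg < 15 and barium_reduction > 0.50

# Hypothetical follow-up values
print(treatment_success(eckardt=1, les_pressure_mmhg=10,
                        barium_1min_baseline_cm=14, barium_1min_followup_cm=4))  # -> True
```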
Statistical analysis We performed descriptive analyses of all study variables. Quantitative variables were expressed as means with standard deviations or as medians with interquartile ranges. Qualitative variables were expressed as absolute and relative frequencies. For the comparison of means between the two groups, the Student’s t-test was used. When the assumption of normality was rejected, the nonparametric Mann-Whitney test was used. To test the homogeneity between proportions, the chi-square test or Fisher’s exact test was used. Repeated-measures analysis of variance was used to compare the groups over the course of the study. When the assumption of normality was rejected, the nonparametric Mann-Whitney test and Friedman test were used. The data were processed with the SPSS Statistics software package, version 17.0 for Windows (SPSS Inc., Chicago, IL, United States). P < 0.05 was considered statistically significant."
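The analyses were run in SPSS; the sketch below merely illustrates the decision rule described above (parametric test when normality holds, nonparametric otherwise). The use of the Shapiro-Wilk test is an assumption, since the normality test actually applied is not stated.

```python
# Hypothetical sketch, not the authors' SPSS syntax: pick Student's t-test or
# the Mann-Whitney test for a between-group comparison depending on normality.
from scipy import stats

def compare_groups(poem_values, lmpf_values, alpha=0.05):
    normal = (stats.shapiro(poem_values).pvalue > alpha
              and stats.shapiro(lmpf_values).pvalue > alpha)
    if normal:
        return "Student t-test", stats.ttest_ind(poem_values, lmpf_values).pvalue
    return "Mann-Whitney", stats.mannwhitneyu(poem_values, lmpf_values).pvalue

# Example with made-up Eckardt scores at follow-up
print(compare_groups([1, 0, 2, 1, 0, 1], [0, 0, 1, 0, 0, 1]))
```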
, "This was a single-center RCT in which we evaluated 40 treatment-naive patients with esophageal achalasia. We included patients ≥ 18 years of age who had been diagnosed with achalasia based on clinical and manometric criteria (dysphagia score ≥ II and Eckardt score > 3) and who provided informed consent. Patients who had previously undergone endoscopic or surgical procedures involving the esophagogastric junction (EGJ) were excluded, as were those with liver cirrhosis, esophageal varices, Barrett’s esophagus, esophageal strictures, premalignant or malignant EGJ lesions, coagulopathies, pseudoachalasia, esophageal diverticulum, severe cardiopulmonary diseases, or severe systemic diseases, as well as those who were at high surgical risk and those who were pregnant or lactating.", "An investigator who was unaffiliated with the trial created the randomization list. Specific software (www.randomizer.org) was used, and participants were randomly allocated at a 1:1 ratio to the experimental (POEM) group or the comparison (LM-PF) group.", "The sample size was calculated to identify statistical significance between LM-PF and POEM regarding reflux esophagitis rates, which were assumed to be 5% and 40% after LM-PF and POEM, respectively[23]. To achieve a power of 80% with an alpha of 0.05, we estimated the minimum sample size to be 38 (19 patients in each group). Taking potential losses into consideration, we chose to include a total of 40 patients.", "\nPOEM: All POEM procedures were performed by a single operator with extensive experience in the technique. Prophylactic intravenous antibiotics and a proton pump inhibitor (PPI) were administered 30 min before intubation and general anesthesia.\nAfter the gastroscope was introduced, the esophageal lumen and mucosa were thoroughly cleaned. This was followed by submucosal injection of 10 mL of 0.5% indigo carmine. An incision was made into the mucosa of the posterior wall, between 5 and 6 o’clock, at 10 cm above the EGJ. The incision was made with a dual-function submucosal dissection knife (HybridKnife; Erbe, Tübingen, Germany) in Endocut Q mode (effect 2, width 3, and interval 5). Subsequently, spray coagulation (effect 2 at 40 W) was used to create a submucosal tunnel extending 3-4 cm beyond the EGJ into the proximal stomach. In all patients, full-thickness myotomy—including the circular and longitudinal muscle layers—was performed in Endocut Q mode. The myotomy was initiated 2 cm distal from the mucosal entry point and extended 3-4 cm into the proximal stomach.
The mucosal incision was closed by using through-the-scope clips (Supplementary Figures 1 and 2).\nA barium esophagogram was obtained on postoperative day 1. In the absence of complications, the patient was started on a clear liquid diet and subsequently advanced to a full liquid diet for 14 d.\n\nLM-PF: All LM-PF procedures were performed by members of the foregut surgery group. After pneumoperitoneum had been established, five trocars were positioned, after which the left hepatic lobe was retracted to access the EGJ (Supplementary Figure 3). That was followed by division of the phrenoesophageal ligament, dissection, and isolation of the distal esophagus from adjacent structures; and anterolateral dislocation of the distal esophagus. The anterior gastric adipose tissue and the anterior vagus nerve were dissected and separated from the esophagus and stomach, after which myotomy of the circular and longitudinal muscle layers was performed, extending from 5-6 cm above the EGJ to 2-3 cm below the EGJ. Partial fundoplication between the esophagus and stomach was then performed by placing sutures in three planes: Posterior—two to three sutures; left lateral—four to five sutures (on the left side); and right anterior—a line of sutures covering the myotomy, thus interposed with the gastric fundus on the right. In the absence of complications, patients were started on a clear liquid diet on the morning following the procedure and maintained on a mechanical soft diet for 14 d after discharge.", "\nClinical assessments: Although achalasia subtyping is defined based on HRM, in this study, the achalasia subtype was evaluated according to the degree of esophageal dilation on the barium esophagogram and esophageal motor activity on EM or HRM. Given that Chagas disease, which often involves the esophagus, is common in Brazil, all patients were screened for anti-Trypanosoma cruzi antibodies by enzyme-linked immunosorbent assay and indirect immunofluorescence. Weight loss, dysphagia, and pain were assessed before the procedure, as well as at 6 and 12 mo after the procedure, by using the Eckardt score. Patients with an Eckardt score ≥ 3 were categorized as symptomatic. The clinical evaluation of gastroesophageal reflux (GER) and the diagnosis of GER disease (GERD) was based on the application of the GER Disease Questionnaire (GerdQ)[24] (Supplementary Figure 4).\n\nEGD: We performed EGD before the procedure, as well as at 6 and 12 mo after. Esophagitis was graded according to the Los Angeles classification system[25]. We performed chromoendoscopy with narrow-band imaging and 2.5% Lugol’s solution to screen for esophageal cancer. Suspicious lesions were biopsied.\n\nBarium esophagogram: To assess esophageal emptying before and 12 mo after the procedure, we used a timed barium esophagogram, as previously described[26]. The degree of esophageal emptying is qualitatively estimated by comparing 1- and 5-min images or by measuring the height and width (in centimeters) of the barium column at both times, calculating the approximate area, and determining the percentage change in the area.\n\nEM: Conventional EM was performed before and 12 mo after the procedure. It should be noted that HRM technology was not available in Brazil when the trial began. To perform conventional EM, we used an eight-channel computerized polygraph under pneumohydraulic capillary infusion at a flow rate of 0.6 mL/min/channel. Preparation was required with a 12-h fast and suspension of medications that alter esophageal motility. 
The technique consists of passing a probe through the nostril and checking the position in the stomach through deep inspiration. With the patient in the supine position, the probe is pulled centimeter by centimeter to measure the mean respiratory pressure and pressure inversion point, and then one of the channels is positioned distal to 3 cm from the upper edge of the LES and the other channels are distant 5 cm apart. Finally, the catheter is pulled up to the upper esophageal sphincter. Through the average of the four distal radial channels of the conventional manometry catheter, the maximum expiratory pressure (MEP) was identified, which best represents the LES pressure itself.\n\nQuality of life: To evaluate the quality of life (QoL), we used the Medical Outcomes Study 36-item Short-Form Health Survey (SF-36)[27,28]. The SF-36 comprises 36 questions covering eight domains: Physical functioning, role-physical, bodily pain, general health, vitality, social functioning, role-emotional, and mental health.\n\nAdverse events: Among the adverse events (AEs) recorded, pneumoperitoneum requiring drainage or puncture was categorized as a minor AE, as was minor mucosal damage requiring endoscopic closure. Major AEs were defined as pneumoperitoneum leading to hemodynamic instability and premature interruption of the procedure; bleeding requiring a blood transfusion and accompanied by hemodynamic instability or requiring an additional intervention; major mucosal damage requiring endoscopic closure or increasing the length of stay (LOS); or fistula/dehiscence of the incision with signs of fever or infection associated with hemodynamic instability. For AEs occurring in the LM-PF group, we used the Clavien-Dindo classification[29].", "For POEM and LM-PF, the following outcome measures were evaluated: Operative time; length of the myotomy in the esophagus and stomach; myotomy site; complications; and LOS. Patient data were collected on the Research Electronic Data Capture platform.", "At 1, 6, and 12 mo after the interventions, the Eckardt score was determined, the SF-36 was applied, EGD was performed, and timed barium esophagograms were obtained. Conventional EM was performed at 6 and 12 mo. Patients received the maximum dose of PPI for the first 30 d postprocedure, and those who presented with erosive esophagitis at follow-up endoscopy were maintained on PPI treatment for 8 wk. Treatment success was defined as symptom improvement (≤ 3-point reduction in the Eckardt score), an LES pressure < 15 mmHg[30-32], and a > 50% reduction in the height of the barium column at 1 min. Treatment failure was defined as symptom persistence in patients with an Eckardt score ≥ 3.", "We performed descriptive analyses of all study variables. Quantitative variables were expressed as means with standard deviations or as medians with interquartile ranges. Qualitative variables are expressed as absolute and relative frequencies. For the comparison of means between the two groups, the Student’s t-test was used. When the assumption of normality was rejected, the nonparametric Mann-Whitney test was used. To test the homogeneity between proportions, the chi-square test or Fisher’s exact test was used. Repeated-measures analysis of variance was used to compare the groups over the course of the study. When the assumption of normality was rejected, the nonparametric Mann-Whitney test and Friedman test were used. 
The data were processed with the SPSS Statistics software package, version 17.0 for Windows (SPSS Inc., Chicago, IL, United States). P < 0.05 was considered statistically significant.", "Population characteristics Between March 2017 and February 2018, 40 patients diagnosed with achalasia were enrolled and randomized to undergo LM-PF (n = 20) or POEM (n = 20), as detailed in Figure 1. Nine (22.5%) of the forty patients (five in the POEM group and four in the LM-PF group) tested positive for anti-T. cruzi antibodies, indicating exposure to Chagas disease. At baseline, there was no statistical difference between the two groups in terms of sex, age, etiology, grade, symptom duration, weight, body mass index, or Eckardt score (Table 1). The study was terminated after all patients had been followed for at least 12 mo.\n\nFlow chart of the study timeline. EGD: Esophagogastroduodenoscopy; EM: Esophageal manometry (conventional); LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. \nCharacteristics of the study population\nBMI: Body mass index; IQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nTreatment outcomes Eckardt scores at 1, 6, and 12 mo were lower than the baseline scores in both groups—1.0, 0.5, and 0.50, respectively, vs 8.0, in the POEM group; and 0.0, 0.0, and 0.0, respectively, vs 8.5, in the LM-PF group—and the differences were statistically significant (P < 0.001; Table 2). There were no statistical differences between the two groups for the Eckardt scores at 1, 6, and 12 mo of follow-up (P = 0.192, P = 0.242, and P = 0.242, respectively).\nEckardt scores\nNonparametric Mann-Whitney test. \nIQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy. \nIn the POEM group, treatment success was confirmed at 1 mo in all 20 patients, at 6 mo in 18 of the patients (90%), and at 12 mo in 19 (95%). In the LM-PF group, treatment success was confirmed at 1 mo and was maintained at 6 and 12 mo in all 20 patients. As shown in Table 3, there was no statistical difference between the two groups regarding treatment success at 1, 6, or 12 mo (P = 0.487 and P = 1.000 for 6 and 12 mo, respectively).\nTreatment success\nFisher’s exact test. 
\nLM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nIn both groups, there were significant postprocedural improvements in dysphagia, although the between-group differences were not significant at 1, 6, or 12 mo (P = 0.602, P = 0.565, and P = 0.547, respectively). However, statistically significant improvements in weight loss, chest pain, and regurgitation were observed in both groups (Supplementary Tables 1 and 2). The postprocedure rate of GER, as assessed with the GerdQ, was higher in the POEM group than in the LM-PF group (64.6% vs 11.1%; P < 0.02).\nEndoscopic findings At 1, 6, and 12 mo, only 20, 18, and 18 POEM group patients, respectively, underwent EGD, as did only 17, 16, and 17 LM-PF group patients, respectively. The remaining patients declined to undergo EGD because they were asymptomatic. The rates of esophagitis were significantly higher in the POEM group than in the LM-PF group at 1, 6, and 12 mo of follow-up (P = 0.014, P < 0.001, and P = 0.002, respectively). In the LM-PF group, 1 patient had esophagitis (classified as grade A) at 6 mo and 2 patients had esophagitis (classified as grades B and C, respectively) at 12 mo. In the POEM group, esophagitis was observed at 1 mo in 5 patients (being classified as grade A in one, grade B in three, and grade C in one), at 6 mo in 10 patients (being classified as grade A in three, grade B in two, and grade C in five), and at 12 mo in 11 patients (being classified as grade A in five, grade C in four, and grade D in two). At 1, 6, and 12 mo, the rates of esophagitis were 0.0%, 5.6%, and 11.1%, respectively, in the LM-PF group and 29.4%, 62.5%, and 64.6%, respectively, in the POEM group (Table 4).\nReflux esophagitis\nFisher’s exact test. 
\nLM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nBarium esophagogram Table 5 shows the results of the barium esophagogram. In both groups, the heights of the barium column at 1 and 5 min were significantly lower at 1, 6, and 12 mo than at baseline (P < 0.001). There was no statistical difference between the two groups regarding the barium column height values at 1 and 5 min in the follow-up periods (intent-to-treat analysis: P = 0.429 and 0.773; per-protocol analysis: P = 0.505 and 0.922).\nHeight of the barium column in cm\nData are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nEM In both groups, the MEP values were significantly lower at 6 and 12 mo than at baseline (Table 6). There was no statistical difference between the two groups at either of those time points (intention-to-treat analysis: P = 0.848).\nEsophageal manometry results of lower esophageal sphincter pressure in mmHg\nData are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nAEs, LOS, anesthesia time, and procedure time Table 7 describes the AEs, LOS, anesthesia time, and procedure time, in the sample as a whole and by groups. 
There was no statistical difference between the two groups regarding the rate of AEs (P = 0.605). The relevant complications observed in the immediate postprocedural period included empyema requiring thoracostomy in one (5%) of the LM-PF patients, and inadvertent intraoperative mucosal damage in three (15%) of the POEM patients (treated with endoscopic clipping). The clinical outcomes were favorable in all patients. The mean LOS was 3.95 ± 3.36 d in the LM-PF group, compared with 3.40 ± 0.75 d in the POEM group (P = 0.483). The mean anesthesia time and mean procedure time were both shorter in the POEM group than in the LM-PF group (185.00 ± 56.89 and 95.70 ± 30.47 min, respectively, vs 296.75 ± 56.13 and 218.75 ± 50.88 min, respectively; P < 0.001 for both).\nAdverse events, length of hospital stay, anesthesia time, and procedure time\nFisher’s exact test.\nStudent’s t-test. \nData are presented as mean ± SD, unless noted otherwise. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nQoL Table 8 shows the results obtained with the SF-36. In the POEM group, there were postprocedural improvements in all SF-36 domains, whereas there were improvements in only three domains (physical functioning, energy/fatigue, and general health) in the LM-PF group.\nQuality of life\nIQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. ", "Between March 2017 and February 2018, 40 patients diagnosed with achalasia were enrolled and randomized to undergo LM-PF (n = 20) or POEM (n = 20), as detailed in Figure 1. Nine (22.5%) of the forty patients (five in the POEM group and four in the LM-PF group) tested positive for anti-T. cruzi antibodies, indicating exposure to Chagas disease. 
At baseline, there was no statistical difference between the two groups in terms of sex, age, etiology, grade, symptom duration, weight, body mass index, or Eckardt score (Table 1). The study was terminated after all patients had been followed for at least 12 mo.\n\nFlow chart of the study timeline. EGD: Esophagogastroduodenoscopy; EM: Esophageal manometry (conventional); LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. \nCharacteristics of the study population\nBMI: Body mass index; IQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.", "Eckardt scores at 1, 6, and 12 mo were lower than the baseline scores in both groups—1.0, 0.5, and 0.50, respectively, vs 8.0, in the POEM group; and 0.0, 0.0, and 0.0, respectively, vs 8.5, in the LM-PF group—and the differences were statistically significant (P < 0.001; Table 2). There were no statistical differences between the two groups for the Eckardt scores at 1, 6, and 12 mo of follow-up (P = 0.192, P = 0.242, and P = 0.242, respectively).\nEckardt scores\nNonparametric Mann-Whitney test. \nIQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy. \nIn the POEM group, treatment success was confirmed at 1 mo in all 20 patients, at 6 mo in 18 of the patients (90%), and at 12 mo in 19 (95%). In the LM-PF group, treatment success was confirmed at 1 mo and was maintained at 6 and 12 mo in all 20 patients. As shown in Table 3, there was no statistical difference between the two groups regarding treatment success at 1, 6, or 12 mo (P = 0.487 and P = 1.000 for 6 and 12 mo, respectively).\nTreatment success\nFisher’s exact test. \nLM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.\nIn both groups, there were significant postprocedural improvements in dysphagia, although the differences were not significant at 1, 6, or 12 mo (P = 0.602; P = 0.565, and P = 0.547, respectively). However, statistically significant improvements in weight loss, chest pain, and regurgitation were observed in both groups (Supplementary Tables 1 and 2). The postprocedure rate of GER, as assessed with the GerdQ, was higher in the POEM group than in the LM-PF group (64.6% vs 11.1%; P < 0.02).", "At 1, 6, and 12 mo, only 20, 18, and 18 POEM group patients, respectively, underwent EGD, as did only 17, 16, and 17 LM-PF group patients, respectively. The remaining patients declined to undergo EGD because they were asymptomatic. The rates of esophagitis were significantly higher in the POEM group than in the LM-PF group at 1, 6, and 12 mo of follow-up (P = 0.014, P < 0.001, and P = 0.002, respectively). In the LM-PF group, 1 patient had esophagitis (classified as grade A) at 6 mo and 2 patients had esophagitis (classified as grades B and C, respectively) at 12 mo. In the POEM group, esophagitis was observed at 1 mo in 5 patients (being classified as grade A in one, grade B in three, and grade C in one), at 6 mo in 10 patients (being classified as grade A in three, grade B in two, and grade C in five), and at 12 mo in 11 patients (being classified as grade A in five, grade C in four, and grade D in two). At 1, 6, and 12 mo, the rates of esophagitis were 0.0%, 5.6%, and 11.1%, respectively, in the LM-PF group and 29.4%, 62.5%, and 64.6%, respectively, in the POEM group (Table 4).\nReflux esophagitis\nFisher’s exact test. 
\nLM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.", "Table 5 shows the results of the barium esophagogram. In both groups, the heights of the barium column at 1 and 5 min were significantly lower at 1, 6, and 12 mo than at baseline (P < 0.001). There was no statistical difference between the two groups regarding the barium column height values at 1 and 5 min in the follow-up periods (intent-to-treat analysis: P = 0.429 and 0.773; per-protocol analysis: P = 0.505 and 0.922).\nHeight of the barium column in cm\nData are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.", "In both groups, the MEP values were significantly lower at 6 and 12 mo than at baseline (Table 6). There was no statistical difference between the two groups at either of those time points (intention-to-treat analysis: P = 0.848).\nEsophageal manometry results of lower esophageal sphincter pressure in mmHg\nData are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.", "Table 7 describes the AEs, LOS, anesthesia time, and procedure time, in the sample as a whole and by groups. There was no statistical difference between the two groups regarding the rate of AEs (P = 0.605). The relevant complications observed in the immediate postprocedural period included empyema requiring thoracostomy in one (5%) of the LM-PF patients, and inadvertent intraoperative mucosal damage in three (15%) of the POEM patients (treated with endoscopic clipping). The clinical outcomes were favorable in all patients. The mean LOS was 3.95 ± 3.36 d in the LM-PF group, compared with 3.40 ± 0.75 d in the POEM group (P = 0.483). The mean anesthesia time and mean procedure time were both shorter in the POEM group than in the LM-PF group (185.00 ± 56.89 and 95.70 ± 30.47 min, respectively, vs 296.75 ± 56.13 and 218.75 ± 50.88 min, respectively; P < 0.001 for both).\nAdverse events, length of hospital stay, anesthesia time, and procedure time\nFisher’s exact test.\nStudent’s t-test. \nData are presented as mean ± SD, unless noted otherwise. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.", "Table 8 shows the results obtained with the SF-36. In the POEM group, there were postprocedural improvements in all SF-36 domains, whereas there were improvements in only three domains (physical functioning, energy/fatigue, and general health) in the LM-PF group.\nQuality of life\nIQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. ", "In this single-center RCT comparing POEM and LM-PF in treatment-naive patients with achalasia, a significant proportion of the patients evaluated had achalasia attributed to Chagas disease. In a study by Farias et al[33], no statistical difference was observed between idiopathic and Chagas disease-associated achalasia regarding treatment success and AEs with POEM.\nFor years, LM-PF has been considered the gold-standard treatment for achalasia[34], because it provides good clinical results, has a low reintervention rate, and has adequate reproducibility. In the first study involving the use of endoscopic myotomy[21], conducted in 1980, all 17 of the patients in the sample showed symptom improvement. 
Although technical improvements proposed by Inoue et al[22] in 2010 and several cohort studies comparing POEM and LM-PF[35-45] over the last decade have demonstrated its safety and efficacy in the management of achalasia, the POEM technique is still not fully standardized[22].\nThe first RCT comparing the two techniques in the treatment of idiopathic achalasia[46], including 221 patients, demonstrated clinical success rates at 1 year and 2 years of follow-up of 84.8% and 83.0%, respectively, in the POEM group, comparable to the 83.5% and 81.7%, respectively, observed for the LM-PF group. In our study, the clinical success rate at the end of the 1st year was 95% in the POEM group and 100% in the LM-PF group, with no statistical difference between the two techniques. This discrepancy between our results and those of the earlier trial may be related to the fact that approximately 35% of the patients evaluated in that trial had previously received some type of treatment, which could have increased the degree of technical difficulty in dissection secondary to submucosal fibrosis.\nWe observed no statistical differences between the two techniques concerning Eckardt scores for dysphagia, regurgitation, chest pain, and weight loss, at 1, 6, and 12 mo of follow-up, which demonstrates the noninferiority of POEM to LM-PF.\nImmediate postprocedural complications occurred in 10% of the 40 patients evaluated in the present study. There were no cases of death in our sample, and the rate of AEs did not differ significantly between the two techniques. In our study, all POEM procedures involved a full-thickness myotomy, which made pneumoperitoneum an expected event. Pneumoperitoneum is a common finding after POEM and is not indicative of an unfavorable outcome for the patient. We categorized pneumoperitoneum as an AE only if abdominal decompression was required.\nAnesthesia and procedure times were shorter for POEM than for LM-PF. That can be explained by the fact that POEM involved full-thickness myotomy and did not involve fundoplication. There was no difference between the two procedures in terms of LOS and QoL.\nWe found that POEM and LM-PF both resulted in significant decreases in the 1- and 5-min barium column heights at 1, 6, and 12 mo after the procedures, demonstrating a clear decrease in resistance to the passage of contrast at the level of the EGJ. Sanagapalli et al[47] showed that significant symptom improvement was associated with a mean reduction of approximately 53% in the residual barium column height. The LES pressure (MEP) on conventional EM was significantly lower throughout the follow-up period than at baseline, and there was no significant difference between the two groups.\nIn this study, the rates of treatment success were comparable between surgical and endoscopic myotomy, both providing symptom improvement, as well as objective improvement in radiological and manometric parameters, at 1, 6, and 12 mo. A recent systematic review and meta-analysis demonstrated that the incidence of GER is higher after POEM than after laparoscopic Heller myotomy[48]. That is in agreement with our findings. The evaluation of GER in our study was based on the typical clinical manifestations of GERD or the identification of erosive esophagitis by EGD. All patients with symptoms and suggestive endoscopic findings of GER received PPI treatment, which was suspended or maintained according to the clinical and endoscopic response. 
A significant limitation of our study was the absence of pHmetry evaluation, which is the main method for GERD evaluation. Prior to our study, we considered that the pHmetry evaluation would be compromised because patients with esophageal achalasia present retention of food residues in the esophageal mucosa and the fermentation of those residues can decrease the intraluminal pH and thus be a confounding factor in the diagnosis of GERD. However, Smart et al[49] showed that such fermentation would affect only pre-procedure pHmetry, without much influence on the post-procedure pHmetry.\nErosive esophagitis, especially grade C or D, is considered indicative of GER after endoscopy in patients without a history of the condition[50]. We consider that patients undergoing POEM have a wider esophagogastric transition that favors a higher rate of GER compared to LM-PF, despite similar LES pressures between the groups. Werner et al[46] also showed more GER in patients undergoing POEM despite no differences in manometry compared to LM-PF.\nThe POEM technique has undergone numerous changes since its initial description by Inoue et al[22]. It has been shown that short- to medium-term efficacy is comparable between myotomy of the circular muscle layer only and full-thickness myotomy, as well as that the latter, despite significantly reducing the duration of POEM, may increase the risk of GERD[51,52]. Likewise, there is uncertainty about whether myotomy should be performed in the anterior or posterior wall, the latter technique being associated with a higher incidence of GER[53,54], although other studies have failed to demonstrate that[55,56]. In the present study, we chose a long posterior full-thickness myotomy, because of the greater technical ease[43,57,58].\nThe results obtained in our study corroborate those of a previous study demonstrating the noninferiority of POEM to LM-PF for symptom control in patients with achalasia, except for postprocedure GER[46]. That raises the question of which technical changes we should study. Therefore, it is valid to perform in-depth analyses of oblique fiber preservation techniques[59], as well as the use of POEM with fundoplication[60,61]. One study[58] demonstrated that preservation of the oblique muscle, using the two penetrating vessels as an anatomical landmark, can significantly reduce the frequency of post-POEM GER, although that should be interpreted with caution because it was a retrospective cohort study, without strict methodological criteria, and with limited reproducibility. In the present study, we employed the conventional POEM technique as previously described[62], and the preservation of the two penetrating vessels was not standardized. The postprocedural occurrence of GERD symptoms in our sample was > 50%, similar to what has been reported by other authors. Despite not including patients undergoing POEM, a recent study[63] showed that achalasia patients with post-treatment reflux symptoms demonstrate esophageal hypersensitivity to chemical and mechanical stimuli, which may determine symptom generation.\nAnother strategy proposed to minimize the occurrence of GER after POEM is performing transoral incisionless fundoplication. In one pilot study[60], that procedure was reported to have a 100% success rate in terms of symptom control, acid exposure time, and the need for antisecretory drugs. In another pilot study[61], standard POEM combined with endoscopic fundoplication (POEM-F) was employed, and no complications were observed. 
A recent retrospective study followed patients for 12 mo after POEM-F[64] and showed that the incidence of postprocedural GER was only 11.1%. Albeit attractive, POEM-F has several potential limitations[65]. First, it is necessary to perform POEM in the anterior wall, contrary to the current trend of using a posterior wall approach. Second, it may not be possible to perform POEM-F in patients who have previously undergone anterior myotomy and experience symptom recurrence due to submucosal fibrosis. Third, the long-term durability of this type of fundoplication is still unknown.\nIn our opinion, it will take some time for the literature to reveal whether endoscopic or surgical myotomy is the best long-term option for the treatment of achalasia. Two crucial points weigh unfavorably on the POEM procedure in terms of the possibility that it will come to be widely indicated for the treatment of achalasia[66,67]. The first is the paucity of high-quality (randomized) technical studies comparing POEM with the well-established techniques of pneumatic dilatation of the cardia and laparoscopic myotomy with fundoplication, which could show, at least, the noninferiority of POEM. The second is the lack of studies with long (> 5 years) follow-up periods, which could demonstrate the true reintervention rate, based on the identification of serious late complications, including GER requiring fundoplication and dysphagia resulting from an inadequate myotomy[68]. Currently, the results at 2-3 years are similar between the endoscopic and surgical myotomy techniques concerning the clinical parameters, except for the greater occurrence of GER after the endoscopic technique, which typically responds well to antisecretory drug treatment. However, those data are accompanied by uncertainties that will only be resolved over time.", "Our results allow us to conclude that LM-PF and POEM are equally effective in controlling the clinical symptoms of achalasia at 1, 6, and 12 mo. Although the use of the POEM technique results in a significantly higher rate of postprocedure GER, it also shortens anesthesia and procedure times. We found no differences between the two methods regarding LOS, the occurrence of AEs, or QoL. In the POEM group, there was an improvement in all domains of QoL." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Esophageal achalasia", "Gastroesophageal reflux", "Deglutition disorders", "Heller myotomy", "Fundoplication", "Randomized controlled trial" ]
INTRODUCTION: Achalasia is a rare benign esophageal motor disorder characterized by incomplete relaxation of the lower esophageal sphincter (LES)[1-3]. For primary or idiopathic achalasia, the underlying etiology has yet to be clearly defined; secondary achalasia results from any one of several systemic diseases including infectious, autoimmune, and drug-induced disorders[4-6]. In both cases, the most common symptoms are progressive dysphagia, regurgitation, and weight loss. The symptom intensity and treatment response can be assessed with validated scales such as the Eckardt score[2,7,8]. The diagnosis requires the proper integration between reported symptoms and the interpretation of diagnostic tests such as a barium esophagogram, esophagogastroduodenoscopy (EGD), and manometry—either conventional esophageal manometry (EM) or high-resolution manometry (HRM)[9-12]. The treatment of achalasia is not curative but rather is aimed at reducing LES pressure[13-17]. In patients who have failed noninvasive therapy, surgery should be considered[18]. Myotomy with partial fundoplication has been considered the first-line treatment for non-advanced achalasia[19]. Recently, peroral endoscopic myotomy (POEM), a technique that employs the principles of submucosal endoscopy to perform the equivalent of a surgical myotomy, has emerged as a promising minimally invasive technique for the management of this condition[20]. This technique was first described in 1980 and subsequently modified to create what is now POEM[21,22]. This randomized controlled trial (RCT) compared the efficacy and outcomes of laparoscopic myotomy and partial fundoplication (LM-PF) with those of POEM for the treatment of patients with achalasia of any etiology. We also compared the two procedures in terms of the incidence of reflux esophagitis. MATERIALS AND METHODS: Study design and participants This was a single-center RCT in which we evaluated 40 treatment-naive patients with esophageal achalasia. We included patients ≥ 18 years of age who had been diagnosed with achalasia based on clinical and manometric criteria (dysphagia score ≥ II and Eckardt score > 3) and who provided informed consent. Patients who had previously undergone endoscopic or surgical procedures involving the esophagogastric junction (EGJ) were excluded, as were those with liver cirrhosis, esophageal varices, Barrett’s esophagus, esophageal strictures, premalignant or malignant EGJ lesions, coagulopathies, pseudoachalasia, esophageal diverticulum, severe cardiopulmonary diseases, or severe systemic diseases, as well as those who were at high surgical risk and those who were pregnant or lactating.
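For orientation on the eligibility criteria above: the Eckardt score is the sum of four symptom items, each graded 0 to 3 (dysphagia, regurgitation, retrosternal pain, and weight loss), giving a total of 0 to 12. The sketch below is illustrative only and is not taken from the authors' data-capture forms.

```python
# Illustrative helper (hypothetical, not the study's code): the Eckardt score
# sums four symptom items, each graded 0-3, for a total of 0-12; patients with
# a score > 3 met the symptomatic inclusion criterion used in this trial.
def eckardt_score(dysphagia: int, regurgitation: int, chest_pain: int, weight_loss: int) -> int:
    for item in (dysphagia, regurgitation, chest_pain, weight_loss):
        if not 0 <= item <= 3:
            raise ValueError("each Eckardt item is graded from 0 to 3")
    return dysphagia + regurgitation + chest_pain + weight_loss

print(eckardt_score(dysphagia=3, regurgitation=2, chest_pain=1, weight_loss=2))  # -> 8
```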
Randomization strategy An investigator who was unaffiliated with the trial created the randomization list. Specific software (www.randomizer.org) was used, and participants were randomly allocated at a 1:1 ratio to the experimental (POEM) group or the comparison (LM-PF) group. Sample size calculation The sample size was calculated to identify statistical significance between LM-PF and POEM regarding reflux esophagitis rates, which were assumed to be 5% and 40% after LM-PF and POEM, respectively[23]. To achieve a power of 80% with an alpha of 0.05, we estimated the minimum sample size to be 38 (19 patients in each group). Taking potential losses into consideration, we chose to include a total of 40 patients. Techniques POEM: All POEM procedures were performed by a single operator with extensive experience in the technique. Prophylactic intravenous antibiotics and a proton pump inhibitor (PPI) were administered 30 min before intubation and general anesthesia. After the gastroscope was introduced, the esophageal lumen and mucosa were thoroughly cleaned. This was followed by submucosal injection of 10 mL of 0.5% indigo carmine. An incision was made into the mucosa of the posterior wall, between 5 and 6 o’clock, at 10 cm above the EGJ. The incision was made with a dual-function submucosal dissection knife (HybridKnife; Erbe, Tübingen, Germany) in Endocut Q mode (effect 2, width 3, and interval 5). Subsequently, spray coagulation (effect 2 at 40 W) was used to create a submucosal tunnel extending 3-4 cm beyond the EGJ into the proximal stomach. In all patients, full-thickness myotomy—including the circular and longitudinal muscle layers—was performed in Endocut Q mode. The myotomy was initiated 2 cm distal from the mucosal entry point and extended 3-4 cm into the proximal stomach. The mucosal incision was closed by using through-the-scope clips (Supplementary Figures 1 and 2). A barium esophagogram was obtained on postoperative day 1. In the absence of complications, the patient was started on a clear liquid diet and subsequently advanced to a full liquid diet for 14 d. LM-PF: All LM-PF procedures were performed by members of the foregut surgery group. After pneumoperitoneum had been established, five trocars were positioned, after which the left hepatic lobe was retracted to access the EGJ (Supplementary Figure 3). That was followed by division of the phrenoesophageal ligament, dissection, and isolation of the distal esophagus from adjacent structures; and anterolateral dislocation of the distal esophagus. The anterior gastric adipose tissue and the anterior vagus nerve were dissected and separated from the esophagus and stomach, after which myotomy of the circular and longitudinal muscle layers was performed, extending from 5-6 cm above the EGJ to 2-3 cm below the EGJ. 
Techniques POEM: All POEM procedures were performed by a single operator with extensive experience in the technique. Prophylactic intravenous antibiotics and a proton pump inhibitor (PPI) were administered 30 min before intubation and general anesthesia. After the gastroscope was introduced, the esophageal lumen and mucosa were thoroughly cleaned. This was followed by submucosal injection of 10 mL of 0.5% indigo carmine. An incision was made into the mucosa of the posterior wall, between 5 and 6 o’clock, at 10 cm above the EGJ. The incision was made with a dual-function submucosal dissection knife (HybridKnife; Erbe, Tübingen, Germany) in Endocut Q mode (effect 2, width 3, and interval 5). Subsequently, spray coagulation (effect 2 at 40 W) was used to create a submucosal tunnel extending 3-4 cm beyond the EGJ into the proximal stomach. In all patients, full-thickness myotomy—including the circular and longitudinal muscle layers—was performed in Endocut Q mode. The myotomy was initiated 2 cm distal to the mucosal entry point and extended 3-4 cm into the proximal stomach. The mucosal incision was closed by using through-the-scope clips (Supplementary Figures 1 and 2). A barium esophagogram was obtained on postoperative day 1. In the absence of complications, the patient was started on a clear liquid diet and subsequently advanced to a full liquid diet for 14 d. LM-PF: All LM-PF procedures were performed by members of the foregut surgery group. After pneumoperitoneum had been established, five trocars were positioned, after which the left hepatic lobe was retracted to access the EGJ (Supplementary Figure 3). That was followed by division of the phrenoesophageal ligament; dissection and isolation of the distal esophagus from adjacent structures; and anterolateral dislocation of the distal esophagus. The anterior gastric adipose tissue and the anterior vagus nerve were dissected and separated from the esophagus and stomach, after which myotomy of the circular and longitudinal muscle layers was performed, extending from 5-6 cm above the EGJ to 2-3 cm below the EGJ. Partial fundoplication between the esophagus and stomach was then performed by placing sutures in three planes: Posterior—two to three sutures; left lateral—four to five sutures; and right anterior—a line of sutures covering the myotomy, interposing the gastric fundus on the right. In the absence of complications, patients were started on a clear liquid diet on the morning following the procedure and maintained on a mechanical soft diet for 14 d after discharge. Diagnosis and follow-up Clinical assessments: Although achalasia subtyping is defined based on HRM, in this study, the achalasia subtype was evaluated according to the degree of esophageal dilation on the barium esophagogram and esophageal motor activity on EM or HRM.
Given that Chagas disease, which often involves the esophagus, is common in Brazil, all patients were screened for anti-Trypanosoma cruzi antibodies by enzyme-linked immunosorbent assay and indirect immunofluorescence. Weight loss, dysphagia, and pain were assessed before the procedure, as well as at 6 and 12 mo after the procedure, by using the Eckardt score. Patients with an Eckardt score ≥ 3 were categorized as symptomatic. The clinical evaluation of gastroesophageal reflux (GER) and the diagnosis of GER disease (GERD) were based on the application of the GER Disease Questionnaire (GerdQ)[24] (Supplementary Figure 4). EGD: We performed EGD before the procedure, as well as at 6 and 12 mo after. Esophagitis was graded according to the Los Angeles classification system[25]. We performed chromoendoscopy with narrow-band imaging and 2.5% Lugol’s solution to screen for esophageal cancer. Suspicious lesions were biopsied. Barium esophagogram: To assess esophageal emptying before and 12 mo after the procedure, we used a timed barium esophagogram, as previously described[26]. The degree of esophageal emptying is qualitatively estimated by comparing 1- and 5-min images or by measuring the height and width (in centimeters) of the barium column at both times, calculating the approximate area, and determining the percentage change in the area; a brief sketch of this calculation follows the Adverse events paragraph below. EM: Conventional EM was performed before and 12 mo after the procedure. It should be noted that HRM technology was not available in Brazil when the trial began. To perform conventional EM, we used an eight-channel computerized polygraph with pneumohydraulic capillary infusion at a flow rate of 0.6 mL/min/channel. Preparation required a 12-h fast and suspension of medications that alter esophageal motility. The technique consists of passing a probe through the nostril and confirming its position in the stomach during deep inspiration. With the patient in the supine position, the probe is withdrawn centimeter by centimeter to measure the mean respiratory pressure and the pressure inversion point; one channel is then positioned 3 cm distal to the upper edge of the LES, with the remaining channels spaced 5 cm apart. Finally, the catheter is withdrawn up to the upper esophageal sphincter. The maximum expiratory pressure (MEP), which best represents the LES pressure, was determined from the average of the four distal radial channels of the conventional manometry catheter. Quality of life: To evaluate the quality of life (QoL), we used the Medical Outcomes Study 36-item Short-Form Health Survey (SF-36)[27,28]. The SF-36 comprises 36 questions covering eight domains: Physical functioning, role-physical, bodily pain, general health, vitality, social functioning, role-emotional, and mental health. Adverse events: Among the adverse events (AEs) recorded, pneumoperitoneum requiring drainage or puncture was categorized as a minor AE, as was minor mucosal damage requiring endoscopic closure. Major AEs were defined as pneumoperitoneum leading to hemodynamic instability and premature interruption of the procedure; bleeding requiring a blood transfusion and accompanied by hemodynamic instability or requiring an additional intervention; major mucosal damage requiring endoscopic closure or increasing the length of stay (LOS); or fistula/dehiscence of the incision with signs of fever or infection associated with hemodynamic instability. For AEs occurring in the LM-PF group, we used the Clavien-Dindo classification[29].
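The timed barium esophagogram estimate described above amounts to simple arithmetic: approximate the barium column area as height times width on the 1- and 5-min images and express emptying as the percentage change. The sketch below makes those steps explicit; the rectangular-area approximation and the function name are illustrative assumptions rather than a published image-analysis protocol.

```python
# Sketch of the timed barium esophagogram estimate described above:
# approximate the barium column area as height x width (cm), then express
# emptying as the percentage change between the 1-min and 5-min images.
# The rectangular approximation and function name are illustrative only.

def percent_area_change(height_1min: float, width_1min: float,
                        height_5min: float, width_5min: float) -> float:
    area_1min = height_1min * width_1min
    area_5min = height_5min * width_5min
    return 100.0 * (area_1min - area_5min) / area_1min

# Example: a column of 12 x 3 cm at 1 min that shrinks to 6 x 2 cm at 5 min
# corresponds to a 66.7% reduction in the estimated area.
print(round(percent_area_change(12, 3, 6, 2), 1))
```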
Outcome measures and data collection For POEM and LM-PF, the following outcome measures were evaluated: Operative time; length of the myotomy in the esophagus and stomach; myotomy site; complications; and LOS. Patient data were collected on the Research Electronic Data Capture platform. Follow-up At 1, 6, and 12 mo after the interventions, the Eckardt score was determined, the SF-36 was applied, EGD was performed, and timed barium esophagograms were obtained. Conventional EM was performed at 6 and 12 mo. Patients received the maximum dose of PPI for the first 30 d postprocedure, and those who presented with erosive esophagitis at follow-up endoscopy were maintained on PPI treatment for 8 wk. Treatment success was defined as symptom improvement (a reduction in the Eckardt score to ≤ 3 points), an LES pressure < 15 mmHg[30-32], and a > 50% reduction in the height of the barium column at 1 min; a brief sketch of this composite rule follows the Statistical analysis paragraph below. Treatment failure was defined as symptom persistence in patients with an Eckardt score ≥ 3. Statistical analysis We performed descriptive analyses of all study variables. Quantitative variables were expressed as means with standard deviations or as medians with interquartile ranges. Qualitative variables were expressed as absolute and relative frequencies. For the comparison of means between the two groups, Student’s t-test was used. When the assumption of normality was rejected, the nonparametric Mann-Whitney test was used. To test the homogeneity between proportions, the chi-square test or Fisher’s exact test was used. Repeated-measures analysis of variance was used to compare the groups over the course of the study. When the assumption of normality was rejected, the nonparametric Mann-Whitney test and Friedman test were used. The data were processed with the SPSS Statistics software package, version 17.0 for Windows (SPSS Inc., Chicago, IL, United States). P < 0.05 was considered statistically significant.
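As noted in the Follow-up subsection, treatment success combines a symptom threshold, a manometric threshold, and a radiographic threshold. The sketch below expresses that composite rule; the function and field names are hypothetical, and the Eckardt total is written out as the usual sum of its four components (dysphagia, regurgitation, chest pain, and weight loss, each scored 0-3).

```python
# Hypothetical sketch of the composite treatment-success rule described in
# the Follow-up subsection. Field and function names are illustrative only.

def eckardt_total(dysphagia: int, regurgitation: int, chest_pain: int, weight_loss: int) -> int:
    """Eckardt score: four components, each scored 0-3 (total 0-12)."""
    return dysphagia + regurgitation + chest_pain + weight_loss

def treatment_success(eckardt: int, les_pressure_mmhg: float,
                      barium_height_1min_pre: float, barium_height_1min_post: float) -> bool:
    symptom_ok = eckardt <= 3
    pressure_ok = les_pressure_mmhg < 15
    barium_ok = (barium_height_1min_pre - barium_height_1min_post) > 0.5 * barium_height_1min_pre
    return symptom_ok and pressure_ok and barium_ok

# Example: Eckardt 1, LES pressure 9 mmHg, barium column 14 cm -> 4 cm at 1 min.
print(treatment_success(eckardt_total(1, 0, 0, 0), 9.0, 14.0, 4.0))  # True
```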
RESULTS: Population characteristics Between March 2017 and February 2018, 40 patients diagnosed with achalasia were enrolled and randomized to undergo LM-PF (n = 20) or POEM (n = 20), as detailed in Figure 1. Nine (22.5%) of the forty patients (five in the POEM group and four in the LM-PF group) tested positive for anti-T. cruzi antibodies, indicating exposure to Chagas disease. At baseline, there was no statistical difference between the two groups in terms of sex, age, etiology, grade, symptom duration, weight, body mass index, or Eckardt score (Table 1). The study was terminated after all patients had been followed for at least 12 mo. Flow chart of the study timeline. EGD: Esophagogastroduodenoscopy; EM: Esophageal manometry (conventional); LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey. Characteristics of the study population BMI: Body mass index; IQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy. Treatment outcomes Eckardt scores at 1, 6, and 12 mo were lower than the baseline scores in both groups—1.0, 0.5, and 0.5, respectively, vs 8.0, in the POEM group; and 0.0, 0.0, and 0.0, respectively, vs 8.5, in the LM-PF group—and the differences were statistically significant (P < 0.001; Table 2). There were no statistical differences between the two groups for the Eckardt scores at 1, 6, and 12 mo of follow-up (P = 0.192, P = 0.242, and P = 0.242, respectively). Eckardt scores Nonparametric Mann-Whitney test. IQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy. In the POEM group, treatment success was confirmed at 1 mo in all 20 patients, at 6 mo in 18 of the patients (90%), and at 12 mo in 19 (95%). In the LM-PF group, treatment success was confirmed at 1 mo and was maintained at 6 and 12 mo in all 20 patients. As shown in Table 3, there was no statistical difference between the two groups regarding treatment success at 1, 6, or 12 mo (P = 0.487 and P = 1.000 for 6 and 12 mo, respectively). Treatment success Fisher’s exact test. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.
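The group comparison reported in Table 3 can be reproduced from the counts given above (18/20 successes after POEM vs 20/20 after LM-PF at 6 mo). The authors used SPSS; the SciPy call below is shown only as an illustration that the same contingency table yields the reported P value.

```python
# Reproducing the reported 6-mo treatment-success comparison (18/20 after POEM
# vs 20/20 after LM-PF) with Fisher's exact test. The authors used SPSS; SciPy
# is shown here only as an illustration.
from scipy.stats import fisher_exact

table = [[18, 2],   # POEM: successes, failures at 6 mo
         [20, 0]]   # LM-PF: successes, failures at 6 mo
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(round(p_value, 3))  # -> 0.487, matching the value reported in Table 3
```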
In both groups, there were significant postprocedural improvements in dysphagia, although the between-group differences were not significant at 1, 6, or 12 mo (P = 0.602, P = 0.565, and P = 0.547, respectively). Statistically significant improvements in weight loss, chest pain, and regurgitation were also observed in both groups (Supplementary Tables 1 and 2). The postprocedure rate of GER, as assessed with the GerdQ, was higher in the POEM group than in the LM-PF group (64.6% vs 11.1%; P < 0.02). Endoscopic findings At 1, 6, and 12 mo, only 20, 18, and 18 POEM group patients, respectively, underwent EGD, as did only 17, 16, and 17 LM-PF group patients, respectively. The remaining patients declined to undergo EGD because they were asymptomatic. The rates of esophagitis were significantly higher in the POEM group than in the LM-PF group at 1, 6, and 12 mo of follow-up (P = 0.014, P < 0.001, and P = 0.002, respectively). In the LM-PF group, 1 patient had esophagitis (classified as grade A) at 6 mo and 2 patients had esophagitis (classified as grades B and C, respectively) at 12 mo. In the POEM group, esophagitis was observed at 1 mo in 5 patients (classified as grade A in one, grade B in three, and grade C in one), at 6 mo in 10 patients (classified as grade A in three, grade B in two, and grade C in five), and at 12 mo in 11 patients (classified as grade A in five, grade C in four, and grade D in two). At 1, 6, and 12 mo, the rates of esophagitis were 0.0%, 5.6%, and 11.1%, respectively, in the LM-PF group and 29.4%, 62.5%, and 64.6%, respectively, in the POEM group (Table 4). Reflux esophagitis Fisher’s exact test. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy.
Barium esophagogram Table 5 shows the results of the barium esophagogram. In both groups, the heights of the barium column at 1 and 5 min were significantly lower at 1, 6, and 12 mo than at baseline (P < 0.001). There was no statistical difference between the two groups regarding the barium column height values at 1 and 5 min in the follow-up periods (intention-to-treat analysis: P = 0.429 and 0.773; per-protocol analysis: P = 0.505 and 0.922). Height of the barium column in cm Data are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy. EM In both groups, the MEP values were significantly lower at 6 and 12 mo than at baseline (Table 6). There was no statistical difference between the two groups at either of those time points (intention-to-treat analysis: P = 0.848). Esophageal manometry results of lower esophageal sphincter pressure in mmHg Data are presented as mean ± SD. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy. AEs, LOS, anesthesia time, and procedure time Table 7 describes the AEs, LOS, anesthesia time, and procedure time, in the sample as a whole and by group. There was no statistical difference between the two groups regarding the rate of AEs (P = 0.605).
The relevant complications observed in the immediate postprocedural period included empyema requiring thoracostomy in one (5%) of the LM-PF patients and inadvertent intraoperative mucosal damage in three (15%) of the POEM patients (treated with endoscopic clipping). The clinical outcomes were favorable in all patients. The mean LOS was 3.95 ± 3.36 d in the LM-PF group, compared with 3.40 ± 0.75 d in the POEM group (P = 0.483). The mean anesthesia time and mean procedure time were both shorter in the POEM group than in the LM-PF group (185.00 ± 56.89 and 95.70 ± 30.47 min, respectively, vs 296.75 ± 56.13 and 218.75 ± 50.88 min, respectively; P < 0.001 for both). Adverse events, length of hospital stay, anesthesia time, and procedure time Fisher’s exact test. Student’s t-test. Data are presented as mean ± SD, unless noted otherwise. LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy. QoL Table 8 shows the results obtained with the SF-36. In the POEM group, there were postprocedural improvements in all SF-36 domains, whereas there were improvements in only three domains (physical functioning, energy/fatigue, and general health) in the LM-PF group. Quality of life IQR: Interquartile range; LM-PF: Laparoscopic myotomy and partial fundoplication; POEM: Peroral endoscopic myotomy; SF-36: Medical Outcomes Study 36-item Short-Form Health Survey.
DISCUSSION: In this single-center RCT comparing POEM and LM-PF in treatment-naive patients with achalasia, a significant proportion of the patients evaluated had achalasia attributed to Chagas disease. In a study by Farias et al[33], no statistical difference was observed between idiopathic and Chagas disease-associated achalasia regarding treatment success and AEs with POEM. For years, LM-PF has been considered the gold-standard treatment for achalasia[34], because it provides good clinical results, has a low reintervention rate, and has adequate reproducibility. In the first study involving the use of endoscopic myotomy[21], conducted in 1980, all 17 of the patients in the sample showed symptom improvement.
Although technical improvements proposed by Inoue et al[22] in 2010, together with several cohort studies comparing POEM and LM-PF[35-45] over the last decade, have demonstrated its safety and efficacy in the management of achalasia, the POEM technique is still not fully standardized[22]. The first RCT comparing the two techniques in the treatment of idiopathic achalasia[46], including 221 patients, demonstrated clinical success rates at 1 year and 2 years of follow-up of 84.8% and 83.0%, respectively, in the POEM group, comparable to the 83.5% and 81.7%, respectively, observed for the LM-PF group. In our study, the clinical success rate at the end of the 1st year was 95% in the POEM group and 100% in the LM-PF group, with no statistical difference between the two techniques. This discrepancy between our results and those of the earlier trial may be related to the fact that approximately 35% of the patients evaluated in that trial had previously received some type of treatment, which could have increased the degree of technical difficulty in dissection secondary to submucosal fibrosis. We observed no statistical differences between the two techniques concerning Eckardt scores for dysphagia, regurgitation, chest pain, and weight loss at 1, 6, and 12 mo of follow-up, which demonstrates the noninferiority of POEM to LM-PF. Immediate postprocedural complications occurred in 10% of the 40 patients evaluated in the present study. There were no cases of death in our sample, and the rate of AEs did not differ significantly between the two techniques. In our study, all POEM procedures involved a full-thickness myotomy, which made pneumoperitoneum an expected event. Pneumoperitoneum is a common finding after POEM and is not indicative of an unfavorable outcome for the patient. We categorized pneumoperitoneum as an AE only if abdominal decompression was required. Anesthesia and procedure times were shorter for POEM than for LM-PF. That can be explained by the fact that the POEM procedure involved a full-thickness myotomy and did not include fundoplication. There was no difference between the two procedures in terms of LOS and QoL. We found that POEM and LM-PF both resulted in significant decreases in the 1- and 5-min barium column heights at 1, 6, and 12 mo after the procedures, demonstrating a clear decrease in resistance to the passage of contrast at the level of the EGJ. Sanagapalli et al[47] showed that significant symptom improvement is associated with a mean reduction of approximately 53% in the residual barium column height. The LES pressure (MEP) on conventional EM was significantly lower throughout the follow-up period than at baseline, and there was no significant difference between the two groups. In this study, the rates of treatment success were comparable between surgical and endoscopic myotomy, both providing symptom improvement, as well as objective improvement in radiological and manometric parameters, at 1, 6, and 12 mo. A recent systematic review and meta-analysis demonstrated that the incidence of GER is higher after POEM than after laparoscopic Heller myotomy[48], which is in agreement with our findings. The evaluation of GER in our study was based on the typical clinical manifestations of GERD or the identification of erosive esophagitis by EGD. All patients with symptoms and endoscopic findings suggestive of GER received PPI treatment, which was suspended or maintained according to the clinical and endoscopic response.
A significant limitation of our study was the absence of pH monitoring (pHmetry), which is the principal method for evaluating GERD. Before the study, we considered that pHmetry would be compromised because patients with esophageal achalasia retain food residue in the esophagus, and the fermentation of that residue can lower the intraluminal pH and thus confound the diagnosis of GERD. However, Smart et al[49] showed that such fermentation would affect only pre-procedure pHmetry, with little influence on post-procedure pHmetry. On endoscopy, erosive esophagitis, especially grade C or D, is considered indicative of GER in patients without a prior history of the condition[50]. We consider that patients undergoing POEM are left with a wider esophagogastric transition zone, which favors a higher rate of GER than does LM-PF, despite similar LES pressures between the groups. Werner et al[46] also found more GER in patients undergoing POEM than in those undergoing LM-PF, despite no differences in manometry. The POEM technique has undergone numerous changes since its initial description by Inoue et al[22]. It has been shown that short- to medium-term efficacy is comparable between myotomy of the circular muscle layer only and full-thickness myotomy, and that the latter, despite significantly reducing the duration of POEM, may increase the risk of GERD[51,52]. Likewise, there is uncertainty about whether myotomy should be performed in the anterior or posterior wall, the latter approach having been associated with a higher incidence of GER[53,54], although other studies have failed to demonstrate such an association[55,56]. In the present study, we chose a long posterior full-thickness myotomy because of its greater technical ease[43,57,58]. The results obtained in our study corroborate those of a previous study demonstrating the noninferiority of POEM to LM-PF for symptom control in patients with achalasia, except for postprocedure GER[46]. This raises the question of which technical modifications should be studied; in-depth analyses of oblique fiber preservation techniques[59] and of POEM with fundoplication[60,61] are therefore warranted. One study[58] demonstrated that preservation of the oblique muscle, using the two penetrating vessels as an anatomical landmark, can significantly reduce the frequency of post-POEM GER, although that finding should be interpreted with caution because it came from a retrospective cohort study without strict methodological criteria and with limited reproducibility. In the present study, we employed the conventional POEM technique as previously described[62], and the preservation of the two penetrating vessels was not standardized. The postprocedural occurrence of GERD symptoms in our sample was > 50%, similar to what has been reported by other authors. Although it did not include patients undergoing POEM, a recent study[63] showed that achalasia patients with post-treatment reflux symptoms demonstrate esophageal hypersensitivity to chemical and mechanical stimuli, which may drive symptom generation. Another strategy proposed to minimize the occurrence of GER after POEM is performing transoral incisionless fundoplication. In one pilot study[60], that procedure was reported to have a 100% success rate in terms of symptom control, acid exposure time, and the need for antisecretory drugs. In another pilot study[61], standard POEM combined with endoscopic fundoplication (POEM-F) was employed, and no complications were observed.
A recent retrospective study followed patients for 12 mo after POEM-F[64] and showed that the incidence of postprocedural GER was only 11.1%. Albeit attractive, POEM-F has several potential limitations[65]. First, it is necessary to perform POEM in the anterior wall, contrary to the current trend of using a posterior wall approach. Second, it may not be possible to perform POEM-F in patients who have previously undergone anterior myotomy and experience symptom recurrence due to submucosal fibrosis. Third, the long-term durability of this type of fundoplication is still unknown. In our opinion, it will take some time for the literature to reveal whether endoscopic or surgical myotomy is the best long-term option for the treatment of achalasia. Two crucial points weigh unfavorably on the POEM procedure in terms of the possibility that it will come to be widely indicated for the treatment of achalasia[66,67]. The first is the paucity of high-quality (randomized) technical studies comparing POEM with the well-established techniques of pneumatic dilatation of the cardia and laparoscopic myotomy with fundoplication, which could show, at least, the noninferiority of POEM. The second is the lack of studies with long (> 5 years) follow-up periods, which could demonstrate the true reintervention rate, based on the identification of serious late complications, including GER requiring fundoplication and dysphagia resulting from an inadequate myotomy[68]. Currently, the results at 2-3 years are similar between the endoscopic and surgical myotomy techniques concerning the clinical parameters, except for the greater occurrence of GER after the endoscopic technique, which typically responds well to antisecretory drug treatment. However, those data are accompanied by uncertainties that will only be resolved over time. CONCLUSION: Our results allow us to conclude that LM-PF and POEM are equally effective in controlling the clinical symptoms of achalasia at 1, 6, and 12 mo. Although the use of the POEM technique results in a significantly higher rate of postprocedure GER, it also shortens anesthesia and procedure times. We found no differences between the two methods regarding LOS, the occurrence of AEs, or QoL. In the POEM group, there was an improvement in all domains of QoL. Future research should focus on long-term follow-up and on the outcomes of the different techniques; it is possible that refinements of the POEM technique will bring new perspectives on reflux symptoms.
Background: Achalasia is a rare benign esophageal motor disorder characterized by incomplete relaxation of the lower esophageal sphincter (LES). The treatment of achalasia is not curative, but rather is aimed at reducing LES pressure. In patients who have failed noninvasive therapy, surgery should be considered. Myotomy with partial fundoplication has been considered the first-line treatment for non-advanced achalasia. Recently, peroral endoscopic myotomy (POEM), a technique that employs the principles of submucosal endoscopy to perform the equivalent of a surgical myotomy, has emerged as a promising minimally invasive technique for the management of this condition. Methods: Forty treatment-naive adult patients who had been diagnosed with achalasia based on clinical and manometric criteria (dysphagia score ≥ II and Eckardt score > 3) were randomized to undergo either laparoscopic myotomy with partial fundoplication (LM-PF) or POEM. The outcome measures were anesthesia time, procedure time, symptom improvement, reflux esophagitis (as determined with the Gastroesophageal Reflux Disease Questionnaire), barium column height at 1 and 5 min (on a barium esophagogram), pressure at the LES, the occurrence of adverse events (AEs), length of stay (LOS), and quality of life (QoL). Results: There were no statistically significant differences between the LM-PF and POEM groups regarding symptom improvement at 1, 6, and 12 mo of follow-up (P = 0.192, P = 0.242, and P = 0.242, respectively). However, the rates of reflux esophagitis at 1, 6, and 12 mo of follow-up were significantly higher in the POEM group (P = 0.014, P < 0.001, and P = 0.002, respectively). There were also no statistical differences regarding the manometry values, the occurrence of AEs, or LOS. Anesthesia time and procedure time were significantly shorter in the POEM group than in the LM-PF group (185.00 ± 56.89 and 95.70 ± 30.47 min vs 296.75 ± 56.13 and 218.75 ± 50.88 min, respectively; P < 0.001 for both). In the POEM group, there were improvements in all domains of the QoL questionnaire, whereas there were improvements in only three domains in the LM-PF group. Conclusions: POEM and LM-PF appear to be equally effective in controlling the symptoms of achalasia, shortening LOS, and minimizing AEs. Nevertheless, POEM has the advantage of improving all domains of QoL and shortening anesthesia and procedure times, but with a significantly higher rate of gastroesophageal reflux.
Background: Achalasia is a rare benign esophageal motor disorder characterized by incomplete relaxation of the lower esophageal sphincter (LES). The treatment of achalasia is not curative, but rather is aimed at reducing LES pressure. In patients who have failed noninvasive therapy, surgery should be considered. Myotomy with partial fundoplication has been considered the first-line treatment for non-advanced achalasia. Recently, peroral endoscopic myotomy (POEM), a technique that employs the principles of submucosal endoscopy to perform the equivalent of a surgical myotomy, has emerged as a promising minimally invasive technique for the management of this condition. Methods: Forty treatment-naive adult patients who had been diagnosed with achalasia based on clinical and manometric criteria (dysphagia score ≥ II and Eckardt score > 3) were randomized to undergo either LM-PF or POEM. The outcome measures were anesthesia time, procedure time, symptom improvement, reflux esophagitis (as determined with the Gastroesophageal Reflux Disease Questionnaire), barium column height at 1 and 5 min (on a barium esophagogram), pressure at the LES, the occurrence of adverse events (AEs), length of stay (LOS), and quality of life (QoL). Results: There were no statistically significant differences between the LM-PF and POEM groups regarding symptom improvement at 1, 6, and 12 mo of follow-up (P = 0.192, P = 0.242, and P = 0.242, respectively). However, the rates of reflux esophagitis at 1, 6, and 12 mo of follow-up were significantly higher in the POEM group (P = 0.014, P < 0.001, and P = 0.002, respectively). There were also no statistical differences regarding the manometry values, the occurrence of AEs, or LOS. Anesthesia time and procedure time were significantly shorter in the POEM group than in the LM-PF group (185.00 ± 56.89 and 95.70 ± 30.47 min vs 296.75 ± 56.13 and 218.75 ± 50.88 min, respectively; P = 0.001 for both). In the POEM group, there were improvements in all domains of the QoL questionnaire, whereas there were improvements in only three domains in the LM-PF group. Conclusions: POEM and LM-PF appear to be equally effective in controlling the symptoms of achalasia, shortening LOS, and minimizing AEs. Nevertheless, POEM has the advantage of improving all domains of QoL, and shortening anesthesia and procedure times but with a significantly higher rate of gastroesophageal reflux.
12,092
475
[ 316, 133, 47, 86, 492, 681, 47, 141, 165, 2973, 219, 390, 310, 130, 87, 243, 91, 1721, 88 ]
20
[ "poem", "lm", "lm pf", "pf", "patients", "myotomy", "group", "mo", "12", "12 mo" ]
[ "idiopathic achalasia", "achalasia based clinical", "clinical assessments achalasia", "esophageal achalasia included", "esophageal achalasia present" ]
null
[CONTENT] Esophageal achalasia | Gastroesophageal reflux | Deglutition disorders | Heller myotomy | Fundoplication | Randomized controlled trial [SUMMARY]
[CONTENT] Esophageal achalasia | Gastroesophageal reflux | Deglutition disorders | Heller myotomy | Fundoplication | Randomized controlled trial [SUMMARY]
null
[CONTENT] Esophageal achalasia | Gastroesophageal reflux | Deglutition disorders | Heller myotomy | Fundoplication | Randomized controlled trial [SUMMARY]
[CONTENT] Esophageal achalasia | Gastroesophageal reflux | Deglutition disorders | Heller myotomy | Fundoplication | Randomized controlled trial [SUMMARY]
[CONTENT] Esophageal achalasia | Gastroesophageal reflux | Deglutition disorders | Heller myotomy | Fundoplication | Randomized controlled trial [SUMMARY]
[CONTENT] Adult | Barium | Esophageal Achalasia | Esophageal Sphincter, Lower | Esophagitis, Peptic | Esophagoscopy | Fundoplication | Gastroesophageal Reflux | Humans | Laparoscopy | Myotomy | Natural Orifice Endoscopic Surgery | Quality of Life | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Barium | Esophageal Achalasia | Esophageal Sphincter, Lower | Esophagitis, Peptic | Esophagoscopy | Fundoplication | Gastroesophageal Reflux | Humans | Laparoscopy | Myotomy | Natural Orifice Endoscopic Surgery | Quality of Life | Treatment Outcome [SUMMARY]
null
[CONTENT] Adult | Barium | Esophageal Achalasia | Esophageal Sphincter, Lower | Esophagitis, Peptic | Esophagoscopy | Fundoplication | Gastroesophageal Reflux | Humans | Laparoscopy | Myotomy | Natural Orifice Endoscopic Surgery | Quality of Life | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Barium | Esophageal Achalasia | Esophageal Sphincter, Lower | Esophagitis, Peptic | Esophagoscopy | Fundoplication | Gastroesophageal Reflux | Humans | Laparoscopy | Myotomy | Natural Orifice Endoscopic Surgery | Quality of Life | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Barium | Esophageal Achalasia | Esophageal Sphincter, Lower | Esophagitis, Peptic | Esophagoscopy | Fundoplication | Gastroesophageal Reflux | Humans | Laparoscopy | Myotomy | Natural Orifice Endoscopic Surgery | Quality of Life | Treatment Outcome [SUMMARY]
[CONTENT] idiopathic achalasia | achalasia based clinical | clinical assessments achalasia | esophageal achalasia included | esophageal achalasia present [SUMMARY]
[CONTENT] idiopathic achalasia | achalasia based clinical | clinical assessments achalasia | esophageal achalasia included | esophageal achalasia present [SUMMARY]
null
[CONTENT] idiopathic achalasia | achalasia based clinical | clinical assessments achalasia | esophageal achalasia included | esophageal achalasia present [SUMMARY]
[CONTENT] idiopathic achalasia | achalasia based clinical | clinical assessments achalasia | esophageal achalasia included | esophageal achalasia present [SUMMARY]
[CONTENT] idiopathic achalasia | achalasia based clinical | clinical assessments achalasia | esophageal achalasia included | esophageal achalasia present [SUMMARY]
[CONTENT] poem | lm | lm pf | pf | patients | myotomy | group | mo | 12 | 12 mo [SUMMARY]
[CONTENT] poem | lm | lm pf | pf | patients | myotomy | group | mo | 12 | 12 mo [SUMMARY]
null
[CONTENT] poem | lm | lm pf | pf | patients | myotomy | group | mo | 12 | 12 mo [SUMMARY]
[CONTENT] poem | lm | lm pf | pf | patients | myotomy | group | mo | 12 | 12 mo [SUMMARY]
[CONTENT] poem | lm | lm pf | pf | patients | myotomy | group | mo | 12 | 12 mo [SUMMARY]
[CONTENT] achalasia | treatment | technique | manometry | myotomy | symptoms | etiology | esophageal | compared | considered [SUMMARY]
[CONTENT] performed | esophageal | cm | egj | esophagus | patients | test | stomach | score | procedure [SUMMARY]
null
[CONTENT] qol | results | poem | achalasia 12 mo use | qol poem | rate postprocedure | rate postprocedure ger | rate postprocedure ger shortens | shortens anesthesia procedure times | shortens anesthesia procedure [SUMMARY]
[CONTENT] poem | myotomy | patients | group | lm | lm pf | pf | mo | 12 | 12 mo [SUMMARY]
[CONTENT] poem | myotomy | patients | group | lm | lm pf | pf | mo | 12 | 12 mo [SUMMARY]
[CONTENT] Achalasia | LES ||| achalasia | LES ||| ||| first ||| [SUMMARY]
[CONTENT] Forty | achalasia | dysphagia | ≥ II | Eckardt | 3 ||| anesthesia time | the Gastroesophageal Reflux Disease Questionnaire | 1 | LES | QoL [SUMMARY]
null
[CONTENT] ||| QoL [SUMMARY]
[CONTENT] Achalasia | LES ||| achalasia | LES ||| ||| first ||| ||| achalasia | dysphagia | ≥ II | Eckardt | 3 ||| anesthesia time | the Gastroesophageal Reflux Disease Questionnaire | 1 | LES | QoL ||| 1 | 6 | 12 | 0.192 | 0.242 | 0.242 ||| 1 | 6 | 12 | 0.014 | P < 0.001 | 0.002 ||| ||| Anesthesia | 185.00 | 56.89 | 95.70 | 30.47 | 296.75 | 56.13 | 218.75 | 50.88 | P = | 0.001 ||| only three ||| ||| QoL [SUMMARY]
[CONTENT] Achalasia | LES ||| achalasia | LES ||| ||| first ||| ||| achalasia | dysphagia | ≥ II | Eckardt | 3 ||| anesthesia time | the Gastroesophageal Reflux Disease Questionnaire | 1 | LES | QoL ||| 1 | 6 | 12 | 0.192 | 0.242 | 0.242 ||| 1 | 6 | 12 | 0.014 | P < 0.001 | 0.002 ||| ||| Anesthesia | 185.00 | 56.89 | 95.70 | 30.47 | 296.75 | 56.13 | 218.75 | 50.88 | P = | 0.001 ||| only three ||| ||| QoL [SUMMARY]
Underlying Mechanisms of Memory Deficits Induced by Etomidate Anesthesia in Aged Rat Model: Critical Role of Immediate Early Genes.
26712432
Etomidate (R-1-[1-ethylphenyl] imidazole-5-ethyl ester) is a widely used anesthetic drug that has been reported to contribute to cognitive deficits after general surgery. However, the underlying mechanisms of this effect have not been fully elucidated. In this study, we aimed to explore the neurobiological mechanisms of the cognitive impairments caused by etomidate.
BACKGROUND
A total of 30 Sprague-Dawley rats were used and divided into two groups randomly to receive a single injection of etomidate or vehicle. Then, the rats' spatial memory ability and neuronal survival were evaluated using the Morris water maze test and Nissl staining, respectively. Furthermore, we analyzed levels of oxidative stress, as well as cyclic adenosine 3',5'-monophosphate response element-binding (CREB) protein phosphorylation and immediate early gene (IEG, including Arc, c-fos, and Egr1) expression levels using Western blot analysis.
METHODS
Compared with vehicle-treated rats, the etomidate-treated rats displayed impaired spatial learning (day 4: 27.26 ± 5.33 s vs. 35.52 ± 3.88 s, t = 2.988, P = 0.0068; day 5: 15.84 ± 4.02 s vs. 30.67 ± 4.23 s, t = 3.013, P = 0.0057; day 6: 9.47 ± 2.35 s vs. 25.66 ± 4.16 s, t = 3.567, P = 0.0036) and memory ability (crossing times: 4.40 ± 1.18 vs. 2.06 ± 0.80, t = 2.896, P = 0.0072; duration: 34.00 ± 4.24 s vs. 18.07 ± 4.79 s, t = 3.023, P = 0.0053; total swimming distance: 40.73 ± 3.45 cm vs. 27.40 ± 6.56 cm, t = 2.798, P = 0.0086) but no neuronal death. Furthermore, etomidate did not cause oxidative stress or deficits in CREB phosphorylation. The levels of multiple IEGs (Arc: vehicle-treated rats 100%, etomidate-treated rats 86%, t = 2.876, P = 0.0086; c-fos: vehicle-treated rats 100%, etomidate-treated rats 72%, t = 2.996, P = 0.0076; Egr1: vehicle-treated rats 100%, etomidate-treated rats 58%, t = 3.011, P = 0.0057) were significantly reduced in the hippocampi of etomidate-treated rats.
RESULTS
Our data suggested that etomidate might induce memory impairment in rats via inhibition of IEG expression.
CONCLUSION
[ "Anesthesia", "Animals", "Etomidate", "Hippocampus", "Hypnotics and Sedatives", "Immediate-Early Proteins", "Maze Learning", "Memory Disorders", "Rats", "Rats, Sprague-Dawley" ]
4797542
INTRODUCTION
Postoperative cognitive dysfunction (POCD) is a short-term cognitive decline, especially in memory and executive functions, that may last from a few days to a few weeks after surgery.[1] Currently, the underlying mechanisms mediating the development of cognitive impairment after anesthesia and surgery are not yet fully clear. A previous study proposed that general anesthetics play a causal role in POCD because the duration of anesthesia positively correlates with the incidence of POCD in patients.[2] Moreover, a single exposure to an anesthetic can cause retrograde and anterograde memory deficits that last for days to weeks in rodent models.[3,4] For example, halothane and nitrous oxide anesthesia administered during the perinatal period led to learning deficits and delayed behavioral development,[5] and N-methyl-d-aspartate receptor blockade, which is typical of nitrous oxide, can produce long-lasting memory deficits.[6] However, the mechanisms by which anesthetics cause persistent memory deficits in adults are poorly understood. Etomidate (R-1-[1-ethylphenyl] imidazole-5-ethyl ester) is a unique drug used for the induction of general anesthesia and sedation. Etomidate is the only imidazole among general anesthesia-inducing drugs and has the most favorable therapeutic effect following single bolus administration.[7] It is known that the dominant molecular targets that mediate the anesthetic effects of etomidate in the central nervous system are specific γ-aminobutyric acid type A (GABAA) receptor subtypes, which have been strongly implicated in memory processes.[8] Furthermore, etomidate not only causes memory impairment in vivo but also abolishes long-term potentiation induced by high-frequency stimulation in the hippocampal slices of wild-type but not Gabra5−/− mice.[9] Thus, etomidate anesthesia impairs synaptic plasticity, which in turn causes memory deficits. However, whether neuronal death, oxidative stress, loss of cyclic adenosine 3’,5’-monophosphate (cAMP) response element-binding (CREB) protein, and immediate early genes (IEGs) are involved in the etomidate-induced memory deficits in the elderly is still unknown. Here, using behavioral tests as well as biochemical and immunohistochemical assays, we sought to understand the underlying mechanisms of the cognitive deficits induced by etomidate in vivo.
METHODS
Animals and treatment: A total of 30 Sprague-Dawley rats that were 15–18 months old and weighed 350–400 g were purchased from the Center of Experimental Animal (Zhengzhou University, China) and maintained under standard laboratory conditions (room temperature 22 ± 2°C; relative humidity 60 ± 5%; and 12-h light/dark cycle) with standard rodent food and water ad libitum. All the animal experiments were approved by the Review Committee on Animal Experiments of Zhengzhou University, China. The rats were randomly divided into two groups, namely the vehicle-treated (Con) and etomidate-treated (Eto) groups (n = 15 in each group). The rats in the Eto and Con groups received a dose of 8 mg/kg etomidate or a dose of the vehicle by intraperitoneal injection, respectively. The body weights were measured before induction of anesthesia and after the Morris water maze test.
Morris water maze test: The maze consisted of a round pool (diameter and depth were 180 cm and 50 cm, respectively) filled with warm water (25 ± 2°C) up to 2 cm above a hidden platform in the third quadrant. The rats were habituated in the test room for 1 week before training, which commenced 1 day after the rats had recovered from the anesthesia. During each trial, the rats were placed in the swimming pool facing the wall in a fixed position and allowed 60 s to find the hidden platform in the third quadrant. Rats that did not locate the platform within this time were guided there and allowed to remain for 20 s. All rats underwent four trials every day in four quadrants. After every trial, the rats were wiped dry and kept warm. The rats were trained for 6 consecutive days, and the hidden platform was removed for the probe test on day 7, when memory retention was assessed. A video tracking system was used to record the swimming motions of the rats. The crossing time, duration, and total distance traveled by each rat in the target quadrant were used to evaluate memory retention ability, while the swimming speed was used to evaluate motor ability.
Nissl staining: Five rats were randomly selected from each group and euthanized to obtain tissue samples for Nissl staining using a previously reported method.[10] Briefly, the rats were anesthetized with an overdose of chloral hydrate intraperitoneally and then perfused transcardially with 0.9% sodium chloride at 4°C, followed by 4% paraformaldehyde in 0.1 mol/L phosphate buffer (pH 7.40). Then, the whole brains were removed and postfixed in the same fixative at 4°C for another 24 h. The brains were dehydrated in 30% and 40% sucrose until they sank, rapidly frozen in isopentane, and then coronal sections (25-μm thick) were cut on a cryostat (CM1950, Leica, Heidelberger, Germany). All the sections were Nissl-stained with 0.1% cresyl violet (Sigma-Aldrich, St. Louis, MO, USA) to evaluate hippocampal neuronal damage. The cell counting was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA) as previously reported.[11]
Superoxide dismutase and malondialdehyde measurement: The activity levels of superoxide dismutase (SOD) and malondialdehyde (MDA) in brain tissue were assayed using commercial kits (Nanjing Jiancheng Inc., China).[12] The hippocampal SOD activity was expressed as U/mg protein, while MDA concentrations were expressed as nmol/mg (n = 6 in each group).[13]
Western blot analysis: Protein was extracted from the hippocampi of etomidate- or vehicle-treated rats according to a previously described procedure.[14] Membranes were probed with primary antibodies against CREB, phospho-Ser133-CREB (#9104 and #9191, respectively, Cell Signaling, USA), Arc, c-fos, and Egr1 (ab118929, ab53036, and ab55160, respectively, Abcam, USA). Horseradish peroxidase-conjugated anti-rabbit or anti-mouse secondary antibodies were used and visualized with an enhanced chemiluminescence kit (Thermo Scientific, USA). The density of each band was quantified using ImageJ software (National Institutes of Health, USA). Glyceraldehyde 3-phosphate dehydrogenase (GAPDH, ab1603, Abcam) was used as a loading control.
Statistical analysis: The statistical analyses were carried out using SPSS 14.0 (SPSS Inc., Chicago, IL, USA). All graphic presentations were constructed using SigmaPlot 10.0 (Systat Software, Inc., San Jose, CA, USA). A paired t-test was used to analyze the data, which are shown as mean ± standard error (SE), and P < 0.05 was considered statistically significant.
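The statistical comparison described above can be written out in a few lines. The following is an illustrative sketch only, not code from the article: the per-rat latency values and group sizes are hypothetical, and the only elements taken from the text are the mean ± standard error summary, the paired t-test, and the P < 0.05 threshold.

import numpy as np
from scipy import stats

# Hypothetical day-4 escape latencies in seconds; per-rat values are not given in the text.
con = np.array([27.3, 25.1, 29.8, 26.4, 28.0])  # vehicle-treated (Con) group
eto = np.array([35.5, 33.2, 37.9, 34.8, 36.1])  # etomidate-treated (Eto) group

def mean_se(x):
    # Mean and standard error, matching the "mean ± SE" summary used in the article.
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

# The article states a paired t-test was used; scipy.stats.ttest_rel implements it.
t_stat, p_value = stats.ttest_rel(con, eto)

print("Con: %.2f ± %.2f s" % mean_se(con))
print("Eto: %.2f ± %.2f s" % mean_se(eto))
print("t = %.3f, P = %.4f, significant: %s" % (t_stat, p_value, p_value < 0.05))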
RESULTS
Etomidate-induced memory deficits: To explore the effects of etomidate on the memory of rats, we first examined spatial learning and memory ability using the Morris water maze test. The learning ability of the rats was assessed over 6 consecutive days of training with a hidden platform under the water surface. During the learning process, the etomidate-treated rats displayed a longer latency for locating the platform than the vehicle-treated rats did (day 4: 35.52 ± 3.88 s vs. 27.26 ± 5.33 s, t = 2.988, P = 0.0068; day 5: 30.67 ± 4.23 s vs. 15.84 ± 4.02 s, t = 3.013, P = 0.0057; day 6: 25.66 ± 4.16 s vs. 9.47 ± 2.35 s, t = 3.567, P = 0.0036) [Figure 1a]. Then, the platform was removed for the probe test on day 7, and etomidate anesthesia led to fewer crossings of the platform position (2.06 ± 0.80 vs. 4.40 ± 1.18, t = 2.896, P = 0.0072), as well as a reduction in the time spent in the target quadrant from 34% to 11% and in the total distance traveled there from 41% to 13% (duration: 18.07 ± 4.79 s vs. 34.00 ± 4.24 s, t = 3.023, P = 0.0053; total swimming distance: 27.40 ± 6.56 cm vs. 40.73 ± 3.45 cm, t = 2.798, P = 0.0086) [Figure 1b–1e]. No significant difference was found in the swimming speed [Figure 1f] between the two groups (t = 1.430, P = 0.1360). There was no significant difference in body weight between the Con and Eto groups before the injection (372.89 ± 13.97 g vs. 374.07 ± 17.14 g, t = 0.622, P = 0.460) or after the Morris water maze test (392.35 ± 12.83 g vs. 395.53 ± 17.10 g, t = 0.593, P = 0.650), suggesting that etomidate anesthesia impaired memory without affecting the motor ability and physiology of the rats. Figure 1. Effect of etomidate on learning and memory of aging rats in the Morris water maze test. (a) The representative escape traces (upper) and the escape latencies on learning days 1–6 (lower; day 4: t = 2.988, P = 0.0068; day 5: t = 3.013, P = 0.0057; and day 6: t = 3.567, P = 0.0036). (b) The swim trace on day 9 for the probe test. (c) The crossing times on day 9 for the probe test (t = 2.896, P = 0.0072). (d) The duration in the target quadrant (t = 3.023, P = 0.0053). (e) The total swimming distance in the target quadrant on day 9 (t = 2.798, P = 0.0086). (f) The swimming speed (t = 1.430, P = 0.1360). *Compared with the vehicle-treated group. Con: vehicle-treated group; Eto: etomidate-treated group.
Etomidate did not induce neuronal death: To determine the potential mechanisms mediating the memory impairments induced by etomidate anesthesia, we first performed Nissl staining to evaluate neuronal loss after etomidate anesthesia. However, after careful counting, we did not detect any significant difference in the hippocampal neuronal numbers between the etomidate- and vehicle-treated rats (CA1: 63.45 ± 6.99 vs. 64.70 ± 4.81, t = 0.765, P = 0.4620; CA3: 66.18 ± 5.42 vs. 64.60 ± 6.52, t = 0.833, P = 0.4320; DG: 62.27 ± 5.55 vs. 63.40 ± 5.36, t = 0.678, P = 0.5210) [Figure 2a and 2b]. Thus, the etomidate-induced memory loss may not be due to neuronal loss in the hippocampus. Figure 2. The effect of etomidate on neuronal death by Nissl staining. (a) The representative images in the hippocampus; scale bar = 100 μm. (b) The quantitative analysis between the two groups (CA1: t = 0.765, P = 0.4620; CA3: t = 0.833, P = 0.4320; DG: t = 0.678, P = 0.5210). CA1: region I of hippocampus proper; CA3: region III of hippocampus proper; DG: dentate gyrus; Con: vehicle-treated group; Eto: etomidate-treated group.
Etomidate did not induce oxidative stress: We then asked whether etomidate could induce transient oxidative stress in the hippocampus. We used two commercial kits to determine the levels of SOD and MDA in the two groups and found that rats in the etomidate-treated group did not show any significant differences in the levels of SOD and MDA compared to the rats in the vehicle-treated group (SOD: 105.69 ± 7.40% vs. 100.00 ± 11.59%, t = 0.975, P = 0.3540; MDA: 96.32 ± 6.78% vs. 100.00 ± 8.29%, t = 1.112, P = 0.2690), which ruled out a possible role of oxidative stress in etomidate-induced memory deficits [Figure 3]. Figure 3. Effect of etomidate on oxidative stress. The SOD and MDA levels were measured with commercial kits, and the raw intensities were converted to relative levels by setting the control group to 100% (SOD: t = 0.975, P = 0.3540; MDA: t = 1.112, P = 0.2690). SOD: superoxide dismutase; MDA: malondialdehyde; Con: vehicle-treated group; Eto: etomidate-treated group.
Cyclic adenosine 3’,5’-monophosphate response element-binding phosphorylation was not involved in etomidate-induced memory deficits: Phosphorylation of CREB at the Ser133 site is known to play an important role in regulating the memory process. Therefore, we evaluated the effects of etomidate on CREB phosphorylation using Western blot analysis. However, we did not find significant differences in the protein levels of CREB and phospho-CREB between etomidate- and vehicle-treated rats [Figure 4a and 4b]. These results excluded the involvement of CREB signaling disruption in the etomidate-induced memory deficits. Figure 4. Effect of etomidate on CREB phosphorylation. (a) The representative blots of pCREB at the Ser133 site, total CREB, and the loading control (GAPDH) in the hippocampus. (b) The quantitative analysis between the two groups (pCREB: t = 0.98, P = 0.3430; CREB: t = 1.021, P = 0.2900). CREB: cyclic adenosine 3’,5’-monophosphate response element-binding; pCREB: phospho-cyclic adenosine 3’,5’-monophosphate response element-binding; GAPDH: glyceraldehyde 3-phosphate dehydrogenase; Con: vehicle-treated group; Eto: etomidate-treated group.
Etomidate suppressed immediate early gene expression: Then, we sought to determine whether the failure of expression of essential IEGs, such as Arc, c-fos, and Egr1, was involved in the memory deficits. We found that following treatment with etomidate, the expression of Arc, c-fos, and Egr1 was dramatically reduced [Figure 5a], and densitometry analysis revealed that the intensities were reduced to 86%, 72%, and 58% of normal levels, respectively [Figure 5b]. Furthermore, using quantitative real-time polymerase chain reaction, we also found that the mRNA levels of Arc, c-fos, and Egr1 were reduced following etomidate treatment [Figure 5c]. These data suggested that etomidate anesthesia suppressed IEG expression, which may induce the memory deficits. Figure 5. Effect of etomidate on immediate early gene levels. (a) The representative blots of Arc, c-fos, Egr1, and the loading control GAPDH in the hippocampus. (b) The quantitative analysis between the two groups (Arc: t = 2.876, P = 0.0086; c-fos: t = 2.996, P = 0.0076; Egr1: t = 3.011, P = 0.0057). (c) The quantitative polymerase chain reaction analysis for the detection of mRNAs of Arc, c-fos, and Egr1 (Arc: t = 2.893, P = 0.0082; c-fos: t = 3.213, P = 0.0047; Egr1: t = 3.452, P = 0.0034). *Compared with the Con group. GAPDH: glyceraldehyde 3-phosphate dehydrogenase; Con: vehicle-treated group; Eto: etomidate-treated group.
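The densitometry normalization behind the reported percentages for Arc, c-fos, and Egr1 (86%, 72%, and 58% of control) can be expressed as a short calculation. The sketch below is illustrative only: the band intensities, group sizes, and variable names are hypothetical, and the only steps taken from the text are dividing each target band by its GAPDH loading control and expressing the result relative to the vehicle-group mean set to 100%.

import numpy as np

# Hypothetical ImageJ band intensities (arbitrary units), one value per animal.
arc   = {"con": np.array([1020.0, 980.0, 1055.0]), "eto": np.array([880.0, 845.0, 905.0])}
gapdh = {"con": np.array([1500.0, 1480.0, 1520.0]), "eto": np.array([1495.0, 1510.0, 1490.0])}

def normalized_percent(target, loading):
    # Divide each band by its GAPDH loading control, then express the ratios as a
    # percentage of the vehicle (con) group mean, which is set to 100%.
    ratio = {group: target[group] / loading[group] for group in target}
    baseline = ratio["con"].mean()
    return {group: 100.0 * ratio[group] / baseline for group in ratio}

arc_pct = normalized_percent(arc, gapdh)
print("Arc  Con: %.1f%%  Eto: %.1f%%" % (arc_pct["con"].mean(), arc_pct["eto"].mean()))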
null
null
[ "Animals and treatment", "Morris water maze test", "Nissl staining", "Superoxide dismutase and malondialdehyde measurement", "Western blot analysis", "Statistical analysis", "Etomidate-induced memory deficits", "Etomidate did not induce the neuronal death", "Etomidate did not induce oxidative stress", "Cyclic adenosine 3’,5’- monophosphate response element-binding phosphorylation was not involved in etomidate-induced memory deficits", "Etomidate suppressed the immediate early genes expression", "Financial support and sponsorship", "Conflicts of interest" ]
[ "A total of 30 Sprague-Dawley rats, that were 15–18 months old and weighed 350–400 g, were purchased from the Center of Experimental Animal (Zhengzhou University, China) and maintained under standard laboratory conditions (room temperature 22 ± 2°C; relative humidity 60 ± 5%; and 12-h light/dark cycle) with standard rodent food and water ad libitum. All the animal experiments were approved by the Review Committee on Animal Experiments of Zhengzhou University, China. The rats were randomly divided into two groups namely the vehicle- (Con) and etomidate-treated (Eto) groups (n = 15 in each group). The rats in the Eto and Con groups received a dose of 8 mg/kg etomidate or a dose of the vehicle by intraperitoneal injection, respectively. The body weights were measured before induction of anesthesia and after the Morris water maze test.", "The maze consisted of a round pool (diameter and depth were 180 cm and 50 cm, respectively) filled with warm water (25 ± 2°C) up to 2 cm above a hidden platform in the third quadrant. The rats were habituated in the test room for 1 week before training, which commenced 1 day after the rats had recovered from the anesthesia. During each trial, the rats were placed in the swimming pool facing the wall in a fixed position and allowed 60 s to find the hidden platform in the third quadrant. Rats that did not locate the platform within this time were guided there and allowed to remain for 20 s. All rats underwent four trials every day in four quadrants. After every trial, the rats were wiped dry and kept warm. The rats were trained for 6 days consecutively, and the hidden platform was removed for the probe test on day 7 when the memory was detected in the rats. A video tracking system was used to record the swimming motions of the rats. The crossing time, duration, and total distance traveled by each rat in the target quadrant were used to evaluate memory retention ability while the swimming speed was used to evaluate motor ability.", "Five rats were randomly selected from each group and euthanized to obtain tissue samples for the Nissl staining using a previously reported method.[10] Briefly, the rats were anesthetized with an overdose of chloral hydrate intraperitoneally and then perfused transcardially with 0.9% sodium chloride at 4°C, followed by 4% paraformaldehyde in 0.1 mol/L phosphate buffer (pH 7.40). Then, the whole brains were removed and postfixed in the same fixative at 4°C for another 24 h. The brains were dehydrated in 30% and 40% sucrose until they sank, rapidly frozen in isopentane, and then coronal sections (25-μm thick) were cut on a cryostat (CM1950, Leica, Heidelberger, Germany). All the sections were used Nissl stained with 0.1% cresyl violet (Sigma-Aldrich, St. Louis, MO, USA) to evaluate the hippocampal neuronal damage. 
The cell counting was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA) as previously reported.[11]", "The activity levels of superoxide dismutase (SOD) and malondialdehyde (MDA) in brain tissue were assayed using commercial kits (Nanjing Jiancheng Inc., China).[12] The hippocampal SOD activity was expressed as U/mg protein while MDA concentrations were as nmol/mg (n = 6 in each group).[13]", "Protein was extracted from the hippocampi in rats with etomidate treatment or vehicle according to a previously described procedure.[14] Membranes were probed with primary antibodies against CREB, phospho-Ser133-CREB (#9104 and #9191, respectively, Cell Signaling, USA), Arc, c-fos, and Egr1 (ab118929, ab53036, and ab55160, respectively, Abcam, USA). Horseradish peroxidase conjugated anti-rabbit or anti-mouse secondary antibodies were used and visualized using the enhanced chemiluminescence kit (Thermo Scientific, USA). The density of each band was quantified using the ImageJ software (National Institutes of Health, USA). Glyceraldehyde 3-phosphate dehydrogenase (GAPDH, ab1603, Abcam) was used as a loading control.", "The statistical analyses were carried out using SPSS 14.0 (SPSS Inc., Chicago, IL, USA). All the graphic presentations were constructed using the SigmaPlot 10.0 (Systat Software, Inc. San Jose, CA, USA). A paired t-test was used to analyze the data, which were shown as a mean ± standard error (SE), and A P < 0.05 was considered to be statistically significant.", "To explore the effects of etomidate on the memory of rats, we first examined the spatial learning and memory ability using the Morris water maze test. The learning ability of the rats was assessed using a consecutive 6 days training with a hidden platform under the water surface. During the learning process, the etomidate-treated rats displayed a longer latency period for locating the platform than the vehicle-treated rats did (day 4: 35.52 ± 3.88 s vs. 27.26 ± 5.33 s, t = 2.988, P = 0.0068; day 5: 30.67 ± 4.23 s vs. 15.84 ± 4.02 s, t = 3.013, P = 0.0057; day 6: 25.66 ± 4.16 s vs. 9.47 ± 2.35 s, t = 3.567, P = 0.0036) [Figure 1a]. Then, the platform was removed for the probe test on day 7, and the etomidate anesthesia led to fewer crossing times (2.06 ± 0.80 vs. 4.40 ± 1.18, t = 2.896, P = 0.0072) at the platform position, as well as reduced the time spent in the target quadrant from 34% to 11% and the total distance traveled from 41% to 13% (duration: 18.07 ± 4.79 s vs. 34.00 ± 4.24 s, t = 3.023, P = 0.0053; total swimming distance: 27.40 ± 6.56 cm vs. 40.73 ± 3.45 cm, t = 2.798, P = 0.0086) [Figure 1b–1e]. No significant difference was found in the swimming speed [Figure 1f] between two groups (t = 1.430, P = 0.1360). There was no significant difference in the body weight between Con and Eto groups before the injection (372.89 ± 13.97 g vs. 374.07 ± 17.14 g, t = 0.622, P = 0.460) and after the Morris water maze test (392.35 ± 12.83 g vs. 395.53 ± 17.10 g, t = 0.593, P = 0.650), suggesting that etomidate anesthesia impaired memory without affecting the motor ability and physiology of the rats.\nEffect of etomidate on learning and memory of aging rats in Morris water maze test. (a) The representative escape traces (upper) and the escape latencies in the learning days 1–6 (lower; day 4: t = 2.988, P = 0.0068; day 5: t = 3.013, P = 0.0057; and day 6: t = 3.567, P = 0.0036). (b) The swim trace in day 9 for probe test (c) the crossing times at day 9 for probe test (t = 2.896, P = 0.0072). 
(d) The duration in the target quadrant (t = 3.023, P = 0.0053). (e) The total swimming distance in the target quadrant on day 9 (t = 2.798, P = 0.0086). (f) The swimming speed (t = 1.430, P = 0.1360). *Compared with vehicle-treated group. Con: The vehicle-treated group; Eto: Etomidate-treated group.", "To determine the potential mechanisms mediating the memory impairments induced by etomidate anesthesia, we first performed Nissl staining to evaluate the neuronal loss after etomidate anesthesia. However, after carefully counting, we did not detect any significant difference in the hippocampal neuronal numbers between the etomidate- and vehicle-treated rats (CA1: 63.45 ± 6.99 vs. 64.70 ± 4.81, t = 0.765, P = 0.4620; CA3: 66.18 ± 5.42 vs. 64.60 ± 6.52, t = 0.833, P = 0.4320; 62.27 ± 5.55 vs. 63.40 ± 5.36, t = 0.678, P = 0.5210) [Figure 2a and 2b]. Thus, etomidate inducing memory loss may not be due to neuronal loss in the hippocampus.\nThe effect of etomidate on neuronal death by Nissl staining (a) the representative images in the hippocampus; Scale bar = 100 μm. (b) The quantitative analysis between two groups (CA1: t = 0.765, P = 0.4620; CA3: t = 0.833, P = 0.4320; DG: t = 0.678, P = 0.5210). CA1: Region I of hippocampus proper; CA3: Region III of hippocampus proper; DG: Dentategyrus; Con: The vehicle-treated group; Eto: Etomidate-treated group.", "We then asked whether etomidate could induce transient oxidative stress in the hippocampus. We used two commercial kits to determine the expression of SOD and MDA in two groups and found that rats in etomidate-treated group did not show any significant differences in the levels of SOD and MDA compared to the rats in vehicle-treated group (SOD: 105.69 ± 7.40% vs. 100.00 ± 11.59%, t = 0.975, P = 0.3540; MDA: 96.32 ± 6.78% vs. 100.00 ± 8.29%, t = 1.112, P = 0.2690), which ruled out the possible role of oxidative stress in etomidate-induced memory deficits [Figure 3].\nEffect of etomidate on oxidative stress. The SOD and MDA levels were measured by commercial kits, and the raw intensity was finally transferred to the relative levels by setting the control group as 100% (SOD: t = 0.975, P = 0.3540; MDA: t = 1.112, P = 0.2690). SOD: Superoxide dismutase; MDA: Malondialdehyde; Con: The vehicle-treated group; Eto: Etomidate-treated group.", "Phosphorylation of CREB at the Ser133 site is known to play an important role in regulating the memory process. Therefore, we evaluated the effects of etomidate on CREB phosphorylation using Western blot analysis. However, we did not find significant differences in changes of the protein levels of CREB and phosopho-CREB between etomidate- and vehicle-treated rats [Figure 4a and 4b]. These results excluded the involvement of CREB signaling disruption in the etomidate-induced memory deficits.\nEffect of etomidate on CREB phosphorylation. (a) The representative blots of pCREB at ser133 site, total CREB and the loading control (GAPDH) in the hippocampus. (b) The quantitative analysis between two groups (pCREB: t = 0.98, P = 0.3430; CREB: t = 1.021, P = 0.2900). CREB: Cyclic adenosine 3’,5’-monophosphate response element-binding; pCREB: Phospho-cyclic adenosine 3’,5’-monophosphate response element-binding; GAPDH: Glyceraldehyde 3-phosphate dehydrogenase; Con: The vehicle-treated group; Eto: Etomidate-treated group.", "Then, we sought to determine whether the failure of expression of essential IEGs, such as Arc, c-fos, and Egr1, was involved in memory deficits. 
We found that following treatment with etomidate, the expression of Arc, c-fos, and Egr1 was dramatically reduced [Figure 5a], and densitometry analysis revealed the intensities were reduced to 86%, 72%, and 58% of normal levels, respectively [Figure 5b]. Furthermore, by using quantitative real-time polymerase chain reaction we also found that the mRNAs level of Arc, c-fos, and Egr1 was reduced following etomidate treatment [Figure 5c]. These data suggested that etomidate anesthesia suppressed IEGs expression, which may induce the memory deficits.\nEffect of etomidate on immediate early genes levels. (a) The representative blots of Arc, c-fos, Egr1, and the loading control GAPDH in the hippocampus. (b) The quantitative analysis between two groups (Arc: t = 2.876, P = 0.0086; c-fos: t = 2.996, P = 0.0076; Egr1: t = 3.011, P = 0.0057). (c) The quantitative polymerase chain reaction analysis for the detection of mRNAs of Arc, c-fos, and Egr1 (Arc: t = 2.893, P = 0.0082; c-fos: t = 3.213, P = 0.0047; Egr1: t = 3.452, P = 0.0034). *Compared with Con group. GAPDH: Glyceraldehyde 3-phosphate dehydrogenase; Con: The vehicle-treated group; Eto: Etomidate-treated group.", "This study was supported by grants from Key Scientific and Technological Projects of Henan Province (No. 122102310072), Scientific and Technological Projects of Zhengzhou (No. 121PPTGG492-4 and No. 121PPTGG492-5).", "There are no conflicts of interest." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Animals and treatment", "Morris water maze test", "Nissl staining", "Superoxide dismutase and malondialdehyde measurement", "Western blot analysis", "Statistical analysis", "RESULTS", "Etomidate-induced memory deficits", "Etomidate did not induce the neuronal death", "Etomidate did not induce oxidative stress", "Cyclic adenosine 3’,5’- monophosphate response element-binding phosphorylation was not involved in etomidate-induced memory deficits", "Etomidate suppressed the immediate early genes expression", "DISCUSSION", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Postoperative cognitive dysfunction (POCD) is a short-term cognitive decline, especially in memory and executive functions, that may last from a few days to a few weeks after surgery.[1] Currently, the underlying mechanisms mediating the development of cognitive impairment after anesthesia and surgery are not yet fully clear. A previous study proposed that general anesthetics play a causal role in POCD because the duration of anesthesia positively correlates with the incidence of POCD in patients.[2] Moreover, a single exposure to an anesthetic can cause retrograde and anterograde memory deficits that last for days to weeks in rodent models.[34] For example, halothane and nitrous oxide anesthesia administered during the perinatal period led to learning deficits and delayed behavioral development,[5] as well as N-methyl-d-aspartate receptor blockade, which is typical of nitrous oxide and can produce long lasting memory deficits.[6] However, the mechanisms by which anesthetics cause persistent memory deficits in adults are poorly understood.\nEtomidate (R-1-[1-ethylphenyl] imidazole-5-ethyl ester), is a unique drug used for the induction of general anesthesia and sedation. Etomidate is the only imidazole among general anesthesia inducing drugs and has the most favorable therapeutic effect following single bolus administration.[7] It is known that the dominant molecular targets that mediate the anesthetic effects of etomidate in the central nervous system are specific γ-aminobutyric acid type A (GABAA) receptor subtypes, which had been strongly implicated in memory processes.[8] Furthermore, etomidate not only causes memory impairment in vivo but also abolishes long-term potentiation induced by high-frequency stimulation in the hippocampal slices of wild-type but not Gabra5−/− mice.[9] Thus, etomidate anesthesia impairs synaptic plasticity, which in turn causes memory deficits. However, whether the neuronal death, oxidative stress, loss of cyclic adenosine 3’,5’-monophosphate (cAMP) response element-binding (CREB) protein, and immediate early genes (IEGs) are involved in the etomidate-induced memory deficits in the elderly is still unknown.\nHere, using behavioral tests and biochemical and immunohistochemical assay, we are trying to understand the underlying mechanisms for cognitive deficits induced by etomidate in vivo.", " Animals and treatment A total of 30 Sprague-Dawley rats, that were 15–18 months old and weighed 350–400 g, were purchased from the Center of Experimental Animal (Zhengzhou University, China) and maintained under standard laboratory conditions (room temperature 22 ± 2°C; relative humidity 60 ± 5%; and 12-h light/dark cycle) with standard rodent food and water ad libitum. All the animal experiments were approved by the Review Committee on Animal Experiments of Zhengzhou University, China. The rats were randomly divided into two groups namely the vehicle- (Con) and etomidate-treated (Eto) groups (n = 15 in each group). The rats in the Eto and Con groups received a dose of 8 mg/kg etomidate or a dose of the vehicle by intraperitoneal injection, respectively. 
The body weights were measured before induction of anesthesia and after the Morris water maze test.\nA total of 30 Sprague-Dawley rats, that were 15–18 months old and weighed 350–400 g, were purchased from the Center of Experimental Animal (Zhengzhou University, China) and maintained under standard laboratory conditions (room temperature 22 ± 2°C; relative humidity 60 ± 5%; and 12-h light/dark cycle) with standard rodent food and water ad libitum. All the animal experiments were approved by the Review Committee on Animal Experiments of Zhengzhou University, China. The rats were randomly divided into two groups namely the vehicle- (Con) and etomidate-treated (Eto) groups (n = 15 in each group). The rats in the Eto and Con groups received a dose of 8 mg/kg etomidate or a dose of the vehicle by intraperitoneal injection, respectively. The body weights were measured before induction of anesthesia and after the Morris water maze test.\n Morris water maze test The maze consisted of a round pool (diameter and depth were 180 cm and 50 cm, respectively) filled with warm water (25 ± 2°C) up to 2 cm above a hidden platform in the third quadrant. The rats were habituated in the test room for 1 week before training, which commenced 1 day after the rats had recovered from the anesthesia. During each trial, the rats were placed in the swimming pool facing the wall in a fixed position and allowed 60 s to find the hidden platform in the third quadrant. Rats that did not locate the platform within this time were guided there and allowed to remain for 20 s. All rats underwent four trials every day in four quadrants. After every trial, the rats were wiped dry and kept warm. The rats were trained for 6 days consecutively, and the hidden platform was removed for the probe test on day 7 when the memory was detected in the rats. A video tracking system was used to record the swimming motions of the rats. The crossing time, duration, and total distance traveled by each rat in the target quadrant were used to evaluate memory retention ability while the swimming speed was used to evaluate motor ability.\nThe maze consisted of a round pool (diameter and depth were 180 cm and 50 cm, respectively) filled with warm water (25 ± 2°C) up to 2 cm above a hidden platform in the third quadrant. The rats were habituated in the test room for 1 week before training, which commenced 1 day after the rats had recovered from the anesthesia. During each trial, the rats were placed in the swimming pool facing the wall in a fixed position and allowed 60 s to find the hidden platform in the third quadrant. Rats that did not locate the platform within this time were guided there and allowed to remain for 20 s. All rats underwent four trials every day in four quadrants. After every trial, the rats were wiped dry and kept warm. The rats were trained for 6 days consecutively, and the hidden platform was removed for the probe test on day 7 when the memory was detected in the rats. A video tracking system was used to record the swimming motions of the rats. 
The crossing time, duration, and total distance traveled by each rat in the target quadrant were used to evaluate memory retention ability while the swimming speed was used to evaluate motor ability.\n Nissl staining Five rats were randomly selected from each group and euthanized to obtain tissue samples for the Nissl staining using a previously reported method.[10] Briefly, the rats were anesthetized with an overdose of chloral hydrate intraperitoneally and then perfused transcardially with 0.9% sodium chloride at 4°C, followed by 4% paraformaldehyde in 0.1 mol/L phosphate buffer (pH 7.40). Then, the whole brains were removed and postfixed in the same fixative at 4°C for another 24 h. The brains were dehydrated in 30% and 40% sucrose until they sank, rapidly frozen in isopentane, and then coronal sections (25-μm thick) were cut on a cryostat (CM1950, Leica, Heidelberger, Germany). All the sections were used Nissl stained with 0.1% cresyl violet (Sigma-Aldrich, St. Louis, MO, USA) to evaluate the hippocampal neuronal damage. The cell counting was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA) as previously reported.[11]\nFive rats were randomly selected from each group and euthanized to obtain tissue samples for the Nissl staining using a previously reported method.[10] Briefly, the rats were anesthetized with an overdose of chloral hydrate intraperitoneally and then perfused transcardially with 0.9% sodium chloride at 4°C, followed by 4% paraformaldehyde in 0.1 mol/L phosphate buffer (pH 7.40). Then, the whole brains were removed and postfixed in the same fixative at 4°C for another 24 h. The brains were dehydrated in 30% and 40% sucrose until they sank, rapidly frozen in isopentane, and then coronal sections (25-μm thick) were cut on a cryostat (CM1950, Leica, Heidelberger, Germany). All the sections were used Nissl stained with 0.1% cresyl violet (Sigma-Aldrich, St. Louis, MO, USA) to evaluate the hippocampal neuronal damage. The cell counting was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA) as previously reported.[11]\n Superoxide dismutase and malondialdehyde measurement The activity levels of superoxide dismutase (SOD) and malondialdehyde (MDA) in brain tissue were assayed using commercial kits (Nanjing Jiancheng Inc., China).[12] The hippocampal SOD activity was expressed as U/mg protein while MDA concentrations were as nmol/mg (n = 6 in each group).[13]\nThe activity levels of superoxide dismutase (SOD) and malondialdehyde (MDA) in brain tissue were assayed using commercial kits (Nanjing Jiancheng Inc., China).[12] The hippocampal SOD activity was expressed as U/mg protein while MDA concentrations were as nmol/mg (n = 6 in each group).[13]\n Western blot analysis Protein was extracted from the hippocampi in rats with etomidate treatment or vehicle according to a previously described procedure.[14] Membranes were probed with primary antibodies against CREB, phospho-Ser133-CREB (#9104 and #9191, respectively, Cell Signaling, USA), Arc, c-fos, and Egr1 (ab118929, ab53036, and ab55160, respectively, Abcam, USA). Horseradish peroxidase conjugated anti-rabbit or anti-mouse secondary antibodies were used and visualized using the enhanced chemiluminescence kit (Thermo Scientific, USA). The density of each band was quantified using the ImageJ software (National Institutes of Health, USA). 
Statistical analysis
The statistical analyses were carried out using SPSS 14.0 (SPSS Inc., Chicago, IL, USA), and all graphs were constructed with SigmaPlot 10.0 (Systat Software, Inc., San Jose, CA, USA). Data are presented as mean ± standard error (SE) and were compared with paired t-tests; P < 0.05 was considered statistically significant.
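As a minimal illustration of the comparison just described, the snippet below computes the mean ± SE for two hypothetical sets of escape latencies and runs the same kind of t-test with SciPy. The numbers are invented for demonstration, not data from the study, and scipy.stats is used simply as a stand-in for the SPSS procedure; ttest_rel mirrors the paired test reported by the authors.

```python
import numpy as np
from scipy import stats

# Hypothetical day-6 escape latencies (s); invented values, not study data.
con = np.array([8.9, 10.2, 7.5, 11.0, 9.8, 9.4])
eto = np.array([24.1, 27.5, 22.8, 26.0, 25.3, 28.2])

def mean_se(x):
    """Mean and standard error of the mean."""
    return x.mean(), stats.sem(x)

for name, data in (("Con", con), ("Eto", eto)):
    m, se = mean_se(data)
    print(f"{name}: {m:.2f} ± {se:.2f} s")

# Paired t-test, as stated in the statistical analysis section.
t, p = stats.ttest_rel(eto, con)
print(f"t = {t:.3f}, P = {p:.4f}, significant at P < 0.05: {p < 0.05}")
```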
Etomidate-induced memory deficits
To explore the effects of etomidate on memory, we first examined spatial learning and memory using the Morris water maze test. Learning was assessed over 6 consecutive days of training with a hidden platform below the water surface. During the learning phase, the etomidate-treated rats displayed longer latencies to locate the platform than the vehicle-treated rats (day 4: 35.52 ± 3.88 s vs. 27.26 ± 5.33 s, t = 2.988, P = 0.0068; day 5: 30.67 ± 4.23 s vs. 15.84 ± 4.02 s, t = 3.013, P = 0.0057; day 6: 25.66 ± 4.16 s vs. 9.47 ± 2.35 s, t = 3.567, P = 0.0036) [Figure 1a]. The platform was then removed for the probe test on day 7. Etomidate anesthesia led to fewer crossings of the former platform position (2.06 ± 0.80 vs. 4.40 ± 1.18, t = 2.896, P = 0.0072) and reduced the time spent in the target quadrant from 34% to 11% and the total distance traveled there from 41% to 13% (duration: 18.07 ± 4.79 s vs. 34.00 ± 4.24 s, t = 3.023, P = 0.0053; total swimming distance: 27.40 ± 6.56 cm vs. 40.73 ± 3.45 cm, t = 2.798, P = 0.0086) [Figure 1b–1e]. No significant difference in swimming speed was found between the two groups [Figure 1f] (t = 1.430, P = 0.1360). There was also no significant difference in body weight between the Con and Eto groups before the injection (372.89 ± 13.97 g vs. 374.07 ± 17.14 g, t = 0.622, P = 0.460) or after the Morris water maze test (392.35 ± 12.83 g vs. 395.53 ± 17.10 g, t = 0.593, P = 0.650), suggesting that etomidate anesthesia impaired memory without affecting the motor ability or general physiology of the rats.

Figure 1. Effect of etomidate on learning and memory of aging rats in the Morris water maze test. (a) Representative escape traces (upper) and escape latencies on training days 1–6 (lower; day 4: t = 2.988, P = 0.0068; day 5: t = 3.013, P = 0.0057; day 6: t = 3.567, P = 0.0036). (b) Swim traces in the probe test. (c) Crossing times in the probe test (t = 2.896, P = 0.0072). (d) Duration in the target quadrant (t = 3.023, P = 0.0053). (e) Total swimming distance in the target quadrant (t = 2.798, P = 0.0086). (f) Swimming speed (t = 1.430, P = 0.1360). *Compared with the vehicle-treated group. Con: vehicle-treated group; Eto: etomidate-treated group.
Etomidate did not induce neuronal death
To determine the potential mechanisms mediating the memory impairment induced by etomidate anesthesia, we first performed Nissl staining to evaluate neuronal loss after etomidate anesthesia. However, after careful counting, we did not detect any significant difference in hippocampal neuronal numbers between the etomidate- and vehicle-treated rats (CA1: 63.45 ± 6.99 vs. 64.70 ± 4.81, t = 0.765, P = 0.4620; CA3: 66.18 ± 5.42 vs. 64.60 ± 6.52, t = 0.833, P = 0.4320; DG: 62.27 ± 5.55 vs. 63.40 ± 5.36, t = 0.678, P = 0.5210) [Figure 2a and 2b]. Thus, the memory loss induced by etomidate is unlikely to be due to neuronal loss in the hippocampus.

Figure 2. Effect of etomidate on neuronal death assessed by Nissl staining. (a) Representative images of the hippocampus; scale bar = 100 μm. (b) Quantitative analysis between the two groups (CA1: t = 0.765, P = 0.4620; CA3: t = 0.833, P = 0.4320; DG: t = 0.678, P = 0.5210). CA1: region I of the hippocampus proper; CA3: region III of the hippocampus proper; DG: dentate gyrus; Con: vehicle-treated group; Eto: etomidate-treated group.

Etomidate did not induce oxidative stress
We then asked whether etomidate could induce transient oxidative stress in the hippocampus. We used two commercial kits to determine the levels of SOD and MDA in the two groups and found that the etomidate-treated rats did not show any significant differences in SOD or MDA levels compared with the vehicle-treated rats (SOD: 105.69 ± 7.40% vs. 100.00 ± 11.59%, t = 0.975, P = 0.3540; MDA: 96.32 ± 6.78% vs. 100.00 ± 8.29%, t = 1.112, P = 0.2690), which argues against a role of oxidative stress in the etomidate-induced memory deficits [Figure 3].

Figure 3. Effect of etomidate on oxidative stress. The SOD and MDA levels were measured with commercial kits, and the raw intensities were converted to relative levels by setting the control group to 100% (SOD: t = 0.975, P = 0.3540; MDA: t = 1.112, P = 0.2690). SOD: superoxide dismutase; MDA: malondialdehyde; Con: vehicle-treated group; Eto: etomidate-treated group.

Cyclic adenosine 3’,5’-monophosphate response element-binding phosphorylation was not involved in etomidate-induced memory deficits
Phosphorylation of CREB at the Ser133 site is known to play an important role in regulating the memory process. Therefore, we evaluated the effects of etomidate on CREB phosphorylation using Western blot analysis. However, we did not find significant differences in the protein levels of CREB or phospho-CREB between etomidate- and vehicle-treated rats [Figure 4a and 4b]. These results argue against the involvement of disrupted CREB signaling in the etomidate-induced memory deficits.

Figure 4. Effect of etomidate on CREB phosphorylation. (a) Representative blots of pCREB at the Ser133 site, total CREB, and the loading control (GAPDH) in the hippocampus. (b) Quantitative analysis between the two groups (pCREB: t = 0.98, P = 0.3430; CREB: t = 1.021, P = 0.2900). CREB: cyclic adenosine 3’,5’-monophosphate response element-binding protein; pCREB: phospho-CREB; GAPDH: glyceraldehyde 3-phosphate dehydrogenase; Con: vehicle-treated group; Eto: etomidate-treated group.

Etomidate suppressed immediate early gene expression
We then sought to determine whether a failure of expression of essential IEGs, such as Arc, c-fos, and Egr1, was involved in the memory deficits.
We found that following treatment with etomidate, the expression of Arc, c-fos, and Egr1 was markedly reduced [Figure 5a]; densitometry analysis showed that the band intensities were reduced to 86%, 72%, and 58% of control levels, respectively [Figure 5b]. Furthermore, using quantitative real-time polymerase chain reaction, we found that the mRNA levels of Arc, c-fos, and Egr1 were also reduced following etomidate treatment [Figure 5c]. These data suggest that etomidate anesthesia suppressed IEG expression, which may underlie the memory deficits.

Figure 5. Effect of etomidate on immediate early gene levels. (a) Representative blots of Arc, c-fos, Egr1, and the loading control GAPDH in the hippocampus. (b) Quantitative analysis between the two groups (Arc: t = 2.876, P = 0.0086; c-fos: t = 2.996, P = 0.0076; Egr1: t = 3.011, P = 0.0057). (c) Quantitative polymerase chain reaction analysis of Arc, c-fos, and Egr1 mRNA (Arc: t = 2.893, P = 0.0082; c-fos: t = 3.213, P = 0.0047; Egr1: t = 3.452, P = 0.0034). *Compared with the Con group. GAPDH: glyceraldehyde 3-phosphate dehydrogenase; Con: vehicle-treated group; Eto: etomidate-treated group.
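The article does not describe how the qPCR data in Figure 5c were processed, so the following is only a generic sketch of the widely used 2^-ΔΔCt method for relative mRNA quantification. The Ct values are invented and GAPDH is assumed here as the reference gene; the authors may have used a different normalization scheme.

```python
# Hypothetical Ct values; GAPDH is assumed as the reference gene.
samples = {
    "Con": {"Arc": 24.1, "GAPDH": 18.0},
    "Eto": {"Arc": 25.3, "GAPDH": 18.1},
}

def relative_expression(target_ct, ref_ct, calib_target_ct, calib_ref_ct):
    """2^-ΔΔCt expression relative to the calibrator (here, the Con group)."""
    d_ct = target_ct - ref_ct
    dd_ct = d_ct - (calib_target_ct - calib_ref_ct)
    return 2 ** (-dd_ct)

con = samples["Con"]
for name, s in samples.items():
    fold = relative_expression(s["Arc"], s["GAPDH"], con["Arc"], con["GAPDH"])
    print(f"{name}: Arc expression = {fold:.2f}-fold of control")
```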
Discussion
Previous studies have suggested that anesthesia induces neuronal cell death in the developing rat brain.[15] Specifically, acute anesthesia exposure downregulated B-cell lymphoma-extra large (Bcl-xL), upregulated cytochrome C, and activated caspase-9 in 7-day-old rats, indicating activation of the intrinsic apoptotic pathway; a parallel upregulation of Fas protein and activation of caspase-8 in the same animals indicated activation of the extrinsic apoptotic pathway. Moreover, studies with etomidate have reported mixed results.
Acute etomidate treatment reduced the neuronal loss caused by kainic acid[16] but did not alter hippocampal neuronal loss in rats with traumatic brain injury.[17] In this study, we did not find any significant difference in neuron numbers between etomidate- and vehicle-treated rats, suggesting that etomidate anesthesia did not affect neuronal survival.

Oxidative stress has also been implicated in anesthetic-induced neurotoxicity and memory deficits.[18] The application of oxidative stress blockers in vivo, including the mitochondrial protector L-carnitine[19] and melatonin,[20] and of specific antioxidants in vitro, including the SOD mimetic M40403 and the nitric oxide synthase inhibitor 7-nitroindazole, fully or partially reversed anesthetic-induced neurotoxicity.[21] Recent gene expression studies indicated that anesthetic treatment alters genes of the oxidative stress pathway in developing animals.[21] In this study, however, etomidate did not alter the levels of MDA or SOD, two principal markers of oxidative stress in the brain, further arguing against a role of oxidative stress in the etomidate-induced memory deficits. Previous reports found that etomidate acts on GABAergic neurons to exert its anesthetic activity,[22] and GABAergic neurons are known to be particularly susceptible to oxidative stress.[23] Therefore, the lack of change in the oxidative stress parameters observed here is plausible and consistent with a previous study.[24]

It is well known that regulation of gene transcription by the cAMP-mediated second messenger pathway plays an important role in learning and memory.[25,26] Both cAMP-dependent protein kinase A (PKA) and CREB are activated in the course of spatial learning: training rats in the radial maze produced a significant increase in hippocampal PKA and CREB phosphorylation during spatial learning, which was followed by spatial memory formation. However, we did not find any change in CREB phosphorylation following treatment with etomidate, suggesting that it does not affect the PKA-CREB pathway.

Finally, we found that the levels of multiple IEGs were decreased following treatment with etomidate. Previous studies have provided direct evidence that IEG expression reflects the integration of information processed by hippocampal neurons. Moreover, IEG expression is not only correlated with neural activity but also plays a critical role in stabilizing recent changes in synaptic efficacy.
For example, intrahippocampal administration of antisense oligonucleotides against IEGs (Arc and c-fos) or germline disruption of the IEGs c-fos, tissue plasminogen activator, or Egr1 impairs the consolidation of long-term memory but does not affect learning or short-term memory.[27,28] Here, etomidate decreased the levels of Arc, c-fos, and Egr1 in the hippocampus, which was accompanied by memory deficits, suggesting a possible involvement of IEG disruption in etomidate-induced memory decline.

Because disruption of synaptic plasticity, loss of dendritic spines, and impaired trafficking of membrane receptors are also important for memory formation, further studies are needed to evaluate their roles in etomidate-induced memory deficits.

In summary, we found that etomidate impairs memory by decreasing IEG expression rather than by inducing neuronal loss, oxidative stress, or inhibition of the PKA-CREB pathway.

Financial support and sponsorship
This study was supported by grants from the Key Scientific and Technological Projects of Henan Province (No. 122102310072) and the Scientific and Technological Projects of Zhengzhou (No. 121PPTGG492-4 and No. 121PPTGG492-5).

Conflicts of interest
There are no conflicts of interest.
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", null, null ]
[ "Anesthesia", "Cyclic Adenosine 3’", "5’- Monophosphate Response Element-binding Phosphorylation", "Etomidate", "Immediate Early Genes", "Neuronal Death", "Oxidative Stress" ]
INTRODUCTION: Postoperative cognitive dysfunction (POCD) is a short-term cognitive decline, especially in memory and executive functions, that may last from a few days to a few weeks after surgery.[1] Currently, the underlying mechanisms mediating the development of cognitive impairment after anesthesia and surgery are not yet fully clear. A previous study proposed that general anesthetics play a causal role in POCD because the duration of anesthesia positively correlates with the incidence of POCD in patients.[2] Moreover, a single exposure to an anesthetic can cause retrograde and anterograde memory deficits that last for days to weeks in rodent models.[34] For example, halothane and nitrous oxide anesthesia administered during the perinatal period led to learning deficits and delayed behavioral development,[5] as well as N-methyl-d-aspartate receptor blockade, which is typical of nitrous oxide and can produce long lasting memory deficits.[6] However, the mechanisms by which anesthetics cause persistent memory deficits in adults are poorly understood. Etomidate (R-1-[1-ethylphenyl] imidazole-5-ethyl ester), is a unique drug used for the induction of general anesthesia and sedation. Etomidate is the only imidazole among general anesthesia inducing drugs and has the most favorable therapeutic effect following single bolus administration.[7] It is known that the dominant molecular targets that mediate the anesthetic effects of etomidate in the central nervous system are specific γ-aminobutyric acid type A (GABAA) receptor subtypes, which had been strongly implicated in memory processes.[8] Furthermore, etomidate not only causes memory impairment in vivo but also abolishes long-term potentiation induced by high-frequency stimulation in the hippocampal slices of wild-type but not Gabra5−/− mice.[9] Thus, etomidate anesthesia impairs synaptic plasticity, which in turn causes memory deficits. However, whether the neuronal death, oxidative stress, loss of cyclic adenosine 3’,5’-monophosphate (cAMP) response element-binding (CREB) protein, and immediate early genes (IEGs) are involved in the etomidate-induced memory deficits in the elderly is still unknown. Here, using behavioral tests and biochemical and immunohistochemical assay, we are trying to understand the underlying mechanisms for cognitive deficits induced by etomidate in vivo. METHODS: Animals and treatment A total of 30 Sprague-Dawley rats, that were 15–18 months old and weighed 350–400 g, were purchased from the Center of Experimental Animal (Zhengzhou University, China) and maintained under standard laboratory conditions (room temperature 22 ± 2°C; relative humidity 60 ± 5%; and 12-h light/dark cycle) with standard rodent food and water ad libitum. All the animal experiments were approved by the Review Committee on Animal Experiments of Zhengzhou University, China. The rats were randomly divided into two groups namely the vehicle- (Con) and etomidate-treated (Eto) groups (n = 15 in each group). The rats in the Eto and Con groups received a dose of 8 mg/kg etomidate or a dose of the vehicle by intraperitoneal injection, respectively. The body weights were measured before induction of anesthesia and after the Morris water maze test. 
A total of 30 Sprague-Dawley rats, that were 15–18 months old and weighed 350–400 g, were purchased from the Center of Experimental Animal (Zhengzhou University, China) and maintained under standard laboratory conditions (room temperature 22 ± 2°C; relative humidity 60 ± 5%; and 12-h light/dark cycle) with standard rodent food and water ad libitum. All the animal experiments were approved by the Review Committee on Animal Experiments of Zhengzhou University, China. The rats were randomly divided into two groups namely the vehicle- (Con) and etomidate-treated (Eto) groups (n = 15 in each group). The rats in the Eto and Con groups received a dose of 8 mg/kg etomidate or a dose of the vehicle by intraperitoneal injection, respectively. The body weights were measured before induction of anesthesia and after the Morris water maze test. Morris water maze test The maze consisted of a round pool (diameter and depth were 180 cm and 50 cm, respectively) filled with warm water (25 ± 2°C) up to 2 cm above a hidden platform in the third quadrant. The rats were habituated in the test room for 1 week before training, which commenced 1 day after the rats had recovered from the anesthesia. During each trial, the rats were placed in the swimming pool facing the wall in a fixed position and allowed 60 s to find the hidden platform in the third quadrant. Rats that did not locate the platform within this time were guided there and allowed to remain for 20 s. All rats underwent four trials every day in four quadrants. After every trial, the rats were wiped dry and kept warm. The rats were trained for 6 days consecutively, and the hidden platform was removed for the probe test on day 7 when the memory was detected in the rats. A video tracking system was used to record the swimming motions of the rats. The crossing time, duration, and total distance traveled by each rat in the target quadrant were used to evaluate memory retention ability while the swimming speed was used to evaluate motor ability. The maze consisted of a round pool (diameter and depth were 180 cm and 50 cm, respectively) filled with warm water (25 ± 2°C) up to 2 cm above a hidden platform in the third quadrant. The rats were habituated in the test room for 1 week before training, which commenced 1 day after the rats had recovered from the anesthesia. During each trial, the rats were placed in the swimming pool facing the wall in a fixed position and allowed 60 s to find the hidden platform in the third quadrant. Rats that did not locate the platform within this time were guided there and allowed to remain for 20 s. All rats underwent four trials every day in four quadrants. After every trial, the rats were wiped dry and kept warm. The rats were trained for 6 days consecutively, and the hidden platform was removed for the probe test on day 7 when the memory was detected in the rats. A video tracking system was used to record the swimming motions of the rats. The crossing time, duration, and total distance traveled by each rat in the target quadrant were used to evaluate memory retention ability while the swimming speed was used to evaluate motor ability. Nissl staining Five rats were randomly selected from each group and euthanized to obtain tissue samples for the Nissl staining using a previously reported method.[10] Briefly, the rats were anesthetized with an overdose of chloral hydrate intraperitoneally and then perfused transcardially with 0.9% sodium chloride at 4°C, followed by 4% paraformaldehyde in 0.1 mol/L phosphate buffer (pH 7.40). 
Then, the whole brains were removed and postfixed in the same fixative at 4°C for another 24 h. The brains were dehydrated in 30% and 40% sucrose until they sank, rapidly frozen in isopentane, and then coronal sections (25-μm thick) were cut on a cryostat (CM1950, Leica, Heidelberger, Germany). All the sections were used Nissl stained with 0.1% cresyl violet (Sigma-Aldrich, St. Louis, MO, USA) to evaluate the hippocampal neuronal damage. The cell counting was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA) as previously reported.[11] Five rats were randomly selected from each group and euthanized to obtain tissue samples for the Nissl staining using a previously reported method.[10] Briefly, the rats were anesthetized with an overdose of chloral hydrate intraperitoneally and then perfused transcardially with 0.9% sodium chloride at 4°C, followed by 4% paraformaldehyde in 0.1 mol/L phosphate buffer (pH 7.40). Then, the whole brains were removed and postfixed in the same fixative at 4°C for another 24 h. The brains were dehydrated in 30% and 40% sucrose until they sank, rapidly frozen in isopentane, and then coronal sections (25-μm thick) were cut on a cryostat (CM1950, Leica, Heidelberger, Germany). All the sections were used Nissl stained with 0.1% cresyl violet (Sigma-Aldrich, St. Louis, MO, USA) to evaluate the hippocampal neuronal damage. The cell counting was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA) as previously reported.[11] Superoxide dismutase and malondialdehyde measurement The activity levels of superoxide dismutase (SOD) and malondialdehyde (MDA) in brain tissue were assayed using commercial kits (Nanjing Jiancheng Inc., China).[12] The hippocampal SOD activity was expressed as U/mg protein while MDA concentrations were as nmol/mg (n = 6 in each group).[13] The activity levels of superoxide dismutase (SOD) and malondialdehyde (MDA) in brain tissue were assayed using commercial kits (Nanjing Jiancheng Inc., China).[12] The hippocampal SOD activity was expressed as U/mg protein while MDA concentrations were as nmol/mg (n = 6 in each group).[13] Western blot analysis Protein was extracted from the hippocampi in rats with etomidate treatment or vehicle according to a previously described procedure.[14] Membranes were probed with primary antibodies against CREB, phospho-Ser133-CREB (#9104 and #9191, respectively, Cell Signaling, USA), Arc, c-fos, and Egr1 (ab118929, ab53036, and ab55160, respectively, Abcam, USA). Horseradish peroxidase conjugated anti-rabbit or anti-mouse secondary antibodies were used and visualized using the enhanced chemiluminescence kit (Thermo Scientific, USA). The density of each band was quantified using the ImageJ software (National Institutes of Health, USA). Glyceraldehyde 3-phosphate dehydrogenase (GAPDH, ab1603, Abcam) was used as a loading control. Protein was extracted from the hippocampi in rats with etomidate treatment or vehicle according to a previously described procedure.[14] Membranes were probed with primary antibodies against CREB, phospho-Ser133-CREB (#9104 and #9191, respectively, Cell Signaling, USA), Arc, c-fos, and Egr1 (ab118929, ab53036, and ab55160, respectively, Abcam, USA). Horseradish peroxidase conjugated anti-rabbit or anti-mouse secondary antibodies were used and visualized using the enhanced chemiluminescence kit (Thermo Scientific, USA). The density of each band was quantified using the ImageJ software (National Institutes of Health, USA). 
Glyceraldehyde 3-phosphate dehydrogenase (GAPDH, ab1603, Abcam) was used as a loading control. Statistical analysis The statistical analyses were carried out using SPSS 14.0 (SPSS Inc., Chicago, IL, USA). All the graphic presentations were constructed using the SigmaPlot 10.0 (Systat Software, Inc. San Jose, CA, USA). A paired t-test was used to analyze the data, which were shown as a mean ± standard error (SE), and A P < 0.05 was considered to be statistically significant. The statistical analyses were carried out using SPSS 14.0 (SPSS Inc., Chicago, IL, USA). All the graphic presentations were constructed using the SigmaPlot 10.0 (Systat Software, Inc. San Jose, CA, USA). A paired t-test was used to analyze the data, which were shown as a mean ± standard error (SE), and A P < 0.05 was considered to be statistically significant. Animals and treatment: A total of 30 Sprague-Dawley rats, that were 15–18 months old and weighed 350–400 g, were purchased from the Center of Experimental Animal (Zhengzhou University, China) and maintained under standard laboratory conditions (room temperature 22 ± 2°C; relative humidity 60 ± 5%; and 12-h light/dark cycle) with standard rodent food and water ad libitum. All the animal experiments were approved by the Review Committee on Animal Experiments of Zhengzhou University, China. The rats were randomly divided into two groups namely the vehicle- (Con) and etomidate-treated (Eto) groups (n = 15 in each group). The rats in the Eto and Con groups received a dose of 8 mg/kg etomidate or a dose of the vehicle by intraperitoneal injection, respectively. The body weights were measured before induction of anesthesia and after the Morris water maze test. Morris water maze test: The maze consisted of a round pool (diameter and depth were 180 cm and 50 cm, respectively) filled with warm water (25 ± 2°C) up to 2 cm above a hidden platform in the third quadrant. The rats were habituated in the test room for 1 week before training, which commenced 1 day after the rats had recovered from the anesthesia. During each trial, the rats were placed in the swimming pool facing the wall in a fixed position and allowed 60 s to find the hidden platform in the third quadrant. Rats that did not locate the platform within this time were guided there and allowed to remain for 20 s. All rats underwent four trials every day in four quadrants. After every trial, the rats were wiped dry and kept warm. The rats were trained for 6 days consecutively, and the hidden platform was removed for the probe test on day 7 when the memory was detected in the rats. A video tracking system was used to record the swimming motions of the rats. The crossing time, duration, and total distance traveled by each rat in the target quadrant were used to evaluate memory retention ability while the swimming speed was used to evaluate motor ability. Nissl staining: Five rats were randomly selected from each group and euthanized to obtain tissue samples for the Nissl staining using a previously reported method.[10] Briefly, the rats were anesthetized with an overdose of chloral hydrate intraperitoneally and then perfused transcardially with 0.9% sodium chloride at 4°C, followed by 4% paraformaldehyde in 0.1 mol/L phosphate buffer (pH 7.40). Then, the whole brains were removed and postfixed in the same fixative at 4°C for another 24 h. 
The brains were dehydrated in 30% and 40% sucrose until they sank, rapidly frozen in isopentane, and then coronal sections (25-μm thick) were cut on a cryostat (CM1950, Leica, Heidelberger, Germany). All the sections were used Nissl stained with 0.1% cresyl violet (Sigma-Aldrich, St. Louis, MO, USA) to evaluate the hippocampal neuronal damage. The cell counting was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA) as previously reported.[11] Superoxide dismutase and malondialdehyde measurement: The activity levels of superoxide dismutase (SOD) and malondialdehyde (MDA) in brain tissue were assayed using commercial kits (Nanjing Jiancheng Inc., China).[12] The hippocampal SOD activity was expressed as U/mg protein while MDA concentrations were as nmol/mg (n = 6 in each group).[13] Western blot analysis: Protein was extracted from the hippocampi in rats with etomidate treatment or vehicle according to a previously described procedure.[14] Membranes were probed with primary antibodies against CREB, phospho-Ser133-CREB (#9104 and #9191, respectively, Cell Signaling, USA), Arc, c-fos, and Egr1 (ab118929, ab53036, and ab55160, respectively, Abcam, USA). Horseradish peroxidase conjugated anti-rabbit or anti-mouse secondary antibodies were used and visualized using the enhanced chemiluminescence kit (Thermo Scientific, USA). The density of each band was quantified using the ImageJ software (National Institutes of Health, USA). Glyceraldehyde 3-phosphate dehydrogenase (GAPDH, ab1603, Abcam) was used as a loading control. Statistical analysis: The statistical analyses were carried out using SPSS 14.0 (SPSS Inc., Chicago, IL, USA). All the graphic presentations were constructed using the SigmaPlot 10.0 (Systat Software, Inc. San Jose, CA, USA). A paired t-test was used to analyze the data, which were shown as a mean ± standard error (SE), and A P < 0.05 was considered to be statistically significant. RESULTS: Etomidate-induced memory deficits To explore the effects of etomidate on the memory of rats, we first examined the spatial learning and memory ability using the Morris water maze test. The learning ability of the rats was assessed using a consecutive 6 days training with a hidden platform under the water surface. During the learning process, the etomidate-treated rats displayed a longer latency period for locating the platform than the vehicle-treated rats did (day 4: 35.52 ± 3.88 s vs. 27.26 ± 5.33 s, t = 2.988, P = 0.0068; day 5: 30.67 ± 4.23 s vs. 15.84 ± 4.02 s, t = 3.013, P = 0.0057; day 6: 25.66 ± 4.16 s vs. 9.47 ± 2.35 s, t = 3.567, P = 0.0036) [Figure 1a]. Then, the platform was removed for the probe test on day 7, and the etomidate anesthesia led to fewer crossing times (2.06 ± 0.80 vs. 4.40 ± 1.18, t = 2.896, P = 0.0072) at the platform position, as well as reduced the time spent in the target quadrant from 34% to 11% and the total distance traveled from 41% to 13% (duration: 18.07 ± 4.79 s vs. 34.00 ± 4.24 s, t = 3.023, P = 0.0053; total swimming distance: 27.40 ± 6.56 cm vs. 40.73 ± 3.45 cm, t = 2.798, P = 0.0086) [Figure 1b–1e]. No significant difference was found in the swimming speed [Figure 1f] between two groups (t = 1.430, P = 0.1360). There was no significant difference in the body weight between Con and Eto groups before the injection (372.89 ± 13.97 g vs. 374.07 ± 17.14 g, t = 0.622, P = 0.460) and after the Morris water maze test (392.35 ± 12.83 g vs. 
395.53 ± 17.10 g, t = 0.593, P = 0.650), suggesting that etomidate anesthesia impaired memory without affecting the motor ability and physiology of the rats. Effect of etomidate on learning and memory of aging rats in Morris water maze test. (a) The representative escape traces (upper) and the escape latencies in the learning days 1–6 (lower; day 4: t = 2.988, P = 0.0068; day 5: t = 3.013, P = 0.0057; and day 6: t = 3.567, P = 0.0036). (b) The swim trace in day 9 for probe test (c) the crossing times at day 9 for probe test (t = 2.896, P = 0.0072). (d) The duration in the target quadrant (t = 3.023, P = 0.0053). (e) The total swimming distance in the target quadrant on day 9 (t = 2.798, P = 0.0086). (f) The swimming speed (t = 1.430, P = 0.1360). *Compared with vehicle-treated group. Con: The vehicle-treated group; Eto: Etomidate-treated group. To explore the effects of etomidate on the memory of rats, we first examined the spatial learning and memory ability using the Morris water maze test. The learning ability of the rats was assessed using a consecutive 6 days training with a hidden platform under the water surface. During the learning process, the etomidate-treated rats displayed a longer latency period for locating the platform than the vehicle-treated rats did (day 4: 35.52 ± 3.88 s vs. 27.26 ± 5.33 s, t = 2.988, P = 0.0068; day 5: 30.67 ± 4.23 s vs. 15.84 ± 4.02 s, t = 3.013, P = 0.0057; day 6: 25.66 ± 4.16 s vs. 9.47 ± 2.35 s, t = 3.567, P = 0.0036) [Figure 1a]. Then, the platform was removed for the probe test on day 7, and the etomidate anesthesia led to fewer crossing times (2.06 ± 0.80 vs. 4.40 ± 1.18, t = 2.896, P = 0.0072) at the platform position, as well as reduced the time spent in the target quadrant from 34% to 11% and the total distance traveled from 41% to 13% (duration: 18.07 ± 4.79 s vs. 34.00 ± 4.24 s, t = 3.023, P = 0.0053; total swimming distance: 27.40 ± 6.56 cm vs. 40.73 ± 3.45 cm, t = 2.798, P = 0.0086) [Figure 1b–1e]. No significant difference was found in the swimming speed [Figure 1f] between two groups (t = 1.430, P = 0.1360). There was no significant difference in the body weight between Con and Eto groups before the injection (372.89 ± 13.97 g vs. 374.07 ± 17.14 g, t = 0.622, P = 0.460) and after the Morris water maze test (392.35 ± 12.83 g vs. 395.53 ± 17.10 g, t = 0.593, P = 0.650), suggesting that etomidate anesthesia impaired memory without affecting the motor ability and physiology of the rats. Effect of etomidate on learning and memory of aging rats in Morris water maze test. (a) The representative escape traces (upper) and the escape latencies in the learning days 1–6 (lower; day 4: t = 2.988, P = 0.0068; day 5: t = 3.013, P = 0.0057; and day 6: t = 3.567, P = 0.0036). (b) The swim trace in day 9 for probe test (c) the crossing times at day 9 for probe test (t = 2.896, P = 0.0072). (d) The duration in the target quadrant (t = 3.023, P = 0.0053). (e) The total swimming distance in the target quadrant on day 9 (t = 2.798, P = 0.0086). (f) The swimming speed (t = 1.430, P = 0.1360). *Compared with vehicle-treated group. Con: The vehicle-treated group; Eto: Etomidate-treated group. Etomidate did not induce the neuronal death To determine the potential mechanisms mediating the memory impairments induced by etomidate anesthesia, we first performed Nissl staining to evaluate the neuronal loss after etomidate anesthesia. 
However, after carefully counting, we did not detect any significant difference in the hippocampal neuronal numbers between the etomidate- and vehicle-treated rats (CA1: 63.45 ± 6.99 vs. 64.70 ± 4.81, t = 0.765, P = 0.4620; CA3: 66.18 ± 5.42 vs. 64.60 ± 6.52, t = 0.833, P = 0.4320; 62.27 ± 5.55 vs. 63.40 ± 5.36, t = 0.678, P = 0.5210) [Figure 2a and 2b]. Thus, etomidate inducing memory loss may not be due to neuronal loss in the hippocampus. The effect of etomidate on neuronal death by Nissl staining (a) the representative images in the hippocampus; Scale bar = 100 μm. (b) The quantitative analysis between two groups (CA1: t = 0.765, P = 0.4620; CA3: t = 0.833, P = 0.4320; DG: t = 0.678, P = 0.5210). CA1: Region I of hippocampus proper; CA3: Region III of hippocampus proper; DG: Dentategyrus; Con: The vehicle-treated group; Eto: Etomidate-treated group. To determine the potential mechanisms mediating the memory impairments induced by etomidate anesthesia, we first performed Nissl staining to evaluate the neuronal loss after etomidate anesthesia. However, after carefully counting, we did not detect any significant difference in the hippocampal neuronal numbers between the etomidate- and vehicle-treated rats (CA1: 63.45 ± 6.99 vs. 64.70 ± 4.81, t = 0.765, P = 0.4620; CA3: 66.18 ± 5.42 vs. 64.60 ± 6.52, t = 0.833, P = 0.4320; 62.27 ± 5.55 vs. 63.40 ± 5.36, t = 0.678, P = 0.5210) [Figure 2a and 2b]. Thus, etomidate inducing memory loss may not be due to neuronal loss in the hippocampus. The effect of etomidate on neuronal death by Nissl staining (a) the representative images in the hippocampus; Scale bar = 100 μm. (b) The quantitative analysis between two groups (CA1: t = 0.765, P = 0.4620; CA3: t = 0.833, P = 0.4320; DG: t = 0.678, P = 0.5210). CA1: Region I of hippocampus proper; CA3: Region III of hippocampus proper; DG: Dentategyrus; Con: The vehicle-treated group; Eto: Etomidate-treated group. Etomidate did not induce oxidative stress We then asked whether etomidate could induce transient oxidative stress in the hippocampus. We used two commercial kits to determine the expression of SOD and MDA in two groups and found that rats in etomidate-treated group did not show any significant differences in the levels of SOD and MDA compared to the rats in vehicle-treated group (SOD: 105.69 ± 7.40% vs. 100.00 ± 11.59%, t = 0.975, P = 0.3540; MDA: 96.32 ± 6.78% vs. 100.00 ± 8.29%, t = 1.112, P = 0.2690), which ruled out the possible role of oxidative stress in etomidate-induced memory deficits [Figure 3]. Effect of etomidate on oxidative stress. The SOD and MDA levels were measured by commercial kits, and the raw intensity was finally transferred to the relative levels by setting the control group as 100% (SOD: t = 0.975, P = 0.3540; MDA: t = 1.112, P = 0.2690). SOD: Superoxide dismutase; MDA: Malondialdehyde; Con: The vehicle-treated group; Eto: Etomidate-treated group. We then asked whether etomidate could induce transient oxidative stress in the hippocampus. We used two commercial kits to determine the expression of SOD and MDA in two groups and found that rats in etomidate-treated group did not show any significant differences in the levels of SOD and MDA compared to the rats in vehicle-treated group (SOD: 105.69 ± 7.40% vs. 100.00 ± 11.59%, t = 0.975, P = 0.3540; MDA: 96.32 ± 6.78% vs. 100.00 ± 8.29%, t = 1.112, P = 0.2690), which ruled out the possible role of oxidative stress in etomidate-induced memory deficits [Figure 3]. Effect of etomidate on oxidative stress. 
The SOD and MDA levels were measured by commercial kits, and the raw intensity was finally transferred to the relative levels by setting the control group as 100% (SOD: t = 0.975, P = 0.3540; MDA: t = 1.112, P = 0.2690). SOD: Superoxide dismutase; MDA: Malondialdehyde; Con: The vehicle-treated group; Eto: Etomidate-treated group. Cyclic adenosine 3’,5’- monophosphate response element-binding phosphorylation was not involved in etomidate-induced memory deficits Phosphorylation of CREB at the Ser133 site is known to play an important role in regulating the memory process. Therefore, we evaluated the effects of etomidate on CREB phosphorylation using Western blot analysis. However, we did not find significant differences in changes of the protein levels of CREB and phosopho-CREB between etomidate- and vehicle-treated rats [Figure 4a and 4b]. These results excluded the involvement of CREB signaling disruption in the etomidate-induced memory deficits. Effect of etomidate on CREB phosphorylation. (a) The representative blots of pCREB at ser133 site, total CREB and the loading control (GAPDH) in the hippocampus. (b) The quantitative analysis between two groups (pCREB: t = 0.98, P = 0.3430; CREB: t = 1.021, P = 0.2900). CREB: Cyclic adenosine 3’,5’-monophosphate response element-binding; pCREB: Phospho-cyclic adenosine 3’,5’-monophosphate response element-binding; GAPDH: Glyceraldehyde 3-phosphate dehydrogenase; Con: The vehicle-treated group; Eto: Etomidate-treated group. Phosphorylation of CREB at the Ser133 site is known to play an important role in regulating the memory process. Therefore, we evaluated the effects of etomidate on CREB phosphorylation using Western blot analysis. However, we did not find significant differences in changes of the protein levels of CREB and phosopho-CREB between etomidate- and vehicle-treated rats [Figure 4a and 4b]. These results excluded the involvement of CREB signaling disruption in the etomidate-induced memory deficits. Effect of etomidate on CREB phosphorylation. (a) The representative blots of pCREB at ser133 site, total CREB and the loading control (GAPDH) in the hippocampus. (b) The quantitative analysis between two groups (pCREB: t = 0.98, P = 0.3430; CREB: t = 1.021, P = 0.2900). CREB: Cyclic adenosine 3’,5’-monophosphate response element-binding; pCREB: Phospho-cyclic adenosine 3’,5’-monophosphate response element-binding; GAPDH: Glyceraldehyde 3-phosphate dehydrogenase; Con: The vehicle-treated group; Eto: Etomidate-treated group. Etomidate suppressed the immediate early genes expression Then, we sought to determine whether the failure of expression of essential IEGs, such as Arc, c-fos, and Egr1, was involved in memory deficits. We found that following treatment with etomidate, the expression of Arc, c-fos, and Egr1 was dramatically reduced [Figure 5a], and densitometry analysis revealed the intensities were reduced to 86%, 72%, and 58% of normal levels, respectively [Figure 5b]. Furthermore, by using quantitative real-time polymerase chain reaction we also found that the mRNAs level of Arc, c-fos, and Egr1 was reduced following etomidate treatment [Figure 5c]. These data suggested that etomidate anesthesia suppressed IEGs expression, which may induce the memory deficits. Effect of etomidate on immediate early genes levels. (a) The representative blots of Arc, c-fos, Egr1, and the loading control GAPDH in the hippocampus. 
Etomidate-induced memory deficits
To explore the effects of etomidate on the memory of rats, we first examined spatial learning and memory using the Morris water maze test. Learning ability was assessed over 6 consecutive days of training with a hidden platform under the water surface. During the learning process, the etomidate-treated rats displayed a longer latency to locate the platform than the vehicle-treated rats did (day 4: 35.52 ± 3.88 s vs. 27.26 ± 5.33 s, t = 2.988, P = 0.0068; day 5: 30.67 ± 4.23 s vs. 15.84 ± 4.02 s, t = 3.013, P = 0.0057; day 6: 25.66 ± 4.16 s vs. 9.47 ± 2.35 s, t = 3.567, P = 0.0036) [Figure 1a]. The platform was then removed for the probe test on day 7, and etomidate anesthesia led to fewer crossings of the platform position (2.06 ± 0.80 vs. 4.40 ± 1.18, t = 2.896, P = 0.0072), as well as a reduction in the time spent in the target quadrant from 34% to 11% and in the total distance traveled there from 41% to 13% (duration: 18.07 ± 4.79 s vs. 34.00 ± 4.24 s, t = 3.023, P = 0.0053; total swimming distance: 27.40 ± 6.56 cm vs. 40.73 ± 3.45 cm, t = 2.798, P = 0.0086) [Figure 1b–1e]. No significant difference was found in swimming speed between the two groups (t = 1.430, P = 0.1360) [Figure 1f]. There was no significant difference in body weight between the Con and Eto groups before the injection (372.89 ± 13.97 g vs. 374.07 ± 17.14 g, t = 0.622, P = 0.460) or after the Morris water maze test (392.35 ± 12.83 g vs. 395.53 ± 17.10 g, t = 0.593, P = 0.650), suggesting that etomidate anesthesia impaired memory without affecting the motor ability or physiology of the rats. Effect of etomidate on learning and memory of aging rats in the Morris water maze test.
(a) Representative escape traces (upper) and escape latencies on learning days 1–6 (lower; day 4: t = 2.988, P = 0.0068; day 5: t = 3.013, P = 0.0057; day 6: t = 3.567, P = 0.0036). (b) The swim traces on day 9 of the probe test. (c) The crossing times on day 9 of the probe test (t = 2.896, P = 0.0072). (d) The duration in the target quadrant (t = 3.023, P = 0.0053). (e) The total swimming distance in the target quadrant on day 9 (t = 2.798, P = 0.0086). (f) The swimming speed (t = 1.430, P = 0.1360). *Compared with the vehicle-treated group. Con: The vehicle-treated group; Eto: Etomidate-treated group.
DISCUSSION
Previous studies have suggested that anesthesia induces neuronal cell death in the brains of developing rats.[15] Specifically, acute anesthesia exposure downregulated B-cell lymphoma extra-large, upregulated cytochrome C, and activated caspase-9 in 7-day-old rats, indicating activation of the intrinsic apoptotic pathway. Furthermore, a parallel upregulation of Fas protein and activation of caspase-8 in 7-day-old rats indicated activation of the extrinsic apoptotic pathway. Moreover, various studies have reported different results for experiments with etomidate. Acute etomidate treatment reduced the neuronal loss caused by kainic acid[16] but did not alter the hippocampal neuronal loss in rats with traumatic brain injury.[17] In this study, we did not find any significant difference in neuron numbers between etomidate- and vehicle-treated rats, suggesting that etomidate anesthesia did not affect neuronal survival. Oxidative stress has also been implicated in anesthetic-induced neurotoxicity and memory deficits.[18] The application of oxidative stress blockers in vivo, including the mitochondrial protector L-carnitine[19] and melatonin,[20] and of specific antioxidants in vitro, including the SOD mimetic M40403 and the nitric oxide synthase inhibitor 7-nitroindazole, fully or partially reversed the anesthetic-induced neurotoxicity.[21] Recent gene expression assessments indicated that anesthetic treatment alters genes in the oxidative stress pathway in developing animals.[21] However, in this study, etomidate did not alter the levels of MDA and SOD, the two principal markers of oxidative stress in the brain, further ruling out a potential role of oxidative stress in the etomidate-induced memory deficits.
Previous reports found that etomidate acts on GABAergic neurons to exert its anesthetic activity,[22] and GABAergic neurons are known to be particularly susceptible to oxidative stress.[23] Therefore, the lack of change in the oxidative stress parameters observed here was reasonable and consistent with a previous study.[24] It is well known that the regulation of gene transcription by the cAMP-mediated second messenger pathway plays an important role in learning and memory.[25,26] Both cAMP-dependent protein kinase A (PKA) and CREB are activated in the course of spatial learning. Training of rats in the radial maze resulted in a significant increase in PKA and CREB phosphorylation in the hippocampi during spatial learning, which was followed by spatial memory formation. However, we did not find any increase in phosphorylation of CREB following treatment with etomidate, suggesting that it did not affect the PKA-CREB pathway. Finally, we discovered that the levels of multiple IEGs were decreased following treatment with etomidate. Previous studies have provided direct evidence that IEG expression reflects the integration of information processed by hippocampal neurons. Moreover, IEG expression is not only correlated with neural activity but also plays a critical role in stabilizing recent changes in synaptic efficacy. For example, intrahippocampal administration of antisense IEG oligonucleotides (Arc and c-fos) or germline disruption of the IEGs c-fos, tissue plasminogen activator, or Egr1 impairs the consolidation of long-term memory but does not affect learning or short-term memory.[27,28] Here, etomidate application decreased the levels of Arc, c-fos, and Egr1 in the hippocampus, which was accompanied by memory deficits, suggesting the possible involvement of IEG disruption in etomidate-induced memory decline. As disruption of synaptic plasticity, loss of dendritic spines, and impaired membrane receptor trafficking are also important for memory formation, further studies are needed to evaluate their roles in etomidate-induced memory deficits. In summary, we found that etomidate impairs memory by decreasing IEG expression but not by inducing neuronal loss, oxidative stress, or PKA-CREB pathway inhibition.
Financial support and sponsorship
This study was supported by grants from the Key Scientific and Technological Projects of Henan Province (No. 122102310072) and the Scientific and Technological Projects of Zhengzhou (No. 121PPTGG492-4 and No. 121PPTGG492-5).
Conflicts of interest
There are no conflicts of interest.
Background: Etomidate (R-1-[1-ethylphenyl] imidazole-5-ethyl ester) is a widely used anesthetic drug that has been reported to contribute to cognitive deficits after general surgery. However, its underlying mechanisms have not been fully elucidated. In this study, we aimed to explore the neurobiological mechanisms of the cognitive impairments caused by etomidate. Methods: A total of 30 Sprague-Dawley rats were used and divided randomly into two groups to receive a single injection of etomidate or vehicle. Then, the rats' spatial memory ability and neuronal survival were evaluated using the Morris water maze test and Nissl staining, respectively. Furthermore, we analyzed levels of oxidative stress, as well as cyclic adenosine 3',5'-monophosphate response element-binding (CREB) protein phosphorylation and immediate early gene (IEG, including Arc, c-fos, and Egr1) expression levels using Western blot analysis. Results: Compared with vehicle-treated rats, the etomidate-treated rats displayed impaired spatial learning (day 4: 27.26 ± 5.33 s vs. 35.52 ± 3.88 s, t = 2.988, P = 0.0068; day 5: 15.84 ± 4.02 s vs. 30.67 ± 4.23 s, t = 3.013, P = 0.0057; day 6: 9.47 ± 2.35 s vs. 25.66 ± 4.16 s, t = 3.567, P = 0.0036) and memory ability (crossing times: 4.40 ± 1.18 vs. 2.06 ± 0.80, t = 2.896, P = 0.0072; duration: 34.00 ± 4.24 s vs. 18.07 ± 4.79 s, t = 3.023, P = 0.0053; total swimming distance: 40.73 ± 3.45 cm vs. 27.40 ± 6.56 cm, t = 2.798, P = 0.0086) but no neuronal death. Furthermore, etomidate did not cause oxidative stress or deficits in CREB phosphorylation. The levels of multiple IEGs (Arc: vehicle-treated rats 100%, etomidate-treated rats 86%, t = 2.876, P = 0.0086; c-fos: vehicle-treated rats 100%, etomidate-treated rats 72%, t = 2.996, P = 0.0076; Egr1: vehicle-treated rats 100%, etomidate-treated rats 58%, t = 3.011, P = 0.0057) were significantly reduced in the hippocampi of etomidate-treated rats. Conclusions: Our data suggested that etomidate might induce memory impairment in rats via inhibition of IEG expression.
null
null
8,547
440
[ 169, 229, 189, 56, 139, 80, 563, 234, 207, 196, 304, 41, 7 ]
17
[ "etomidate", "rats", "memory", "treated", "group", "day", "vehicle", "creb", "vs", "treated group" ]
[ "rats recovered anesthesia", "etomidate impairs memory", "anesthesia induces neuronal", "postoperative cognitive dysfunction", "deficits mechanisms anesthetics" ]
null
null
[CONTENT] Anesthesia | Cyclic Adenosine 3’ | 5’- Monophosphate Response Element-binding Phosphorylation | Etomidate | Immediate Early Genes | Neuronal Death | Oxidative Stress [SUMMARY]
[CONTENT] Anesthesia | Cyclic Adenosine 3’ | 5’- Monophosphate Response Element-binding Phosphorylation | Etomidate | Immediate Early Genes | Neuronal Death | Oxidative Stress [SUMMARY]
[CONTENT] Anesthesia | Cyclic Adenosine 3’ | 5’- Monophosphate Response Element-binding Phosphorylation | Etomidate | Immediate Early Genes | Neuronal Death | Oxidative Stress [SUMMARY]
null
[CONTENT] Anesthesia | Cyclic Adenosine 3’ | 5’- Monophosphate Response Element-binding Phosphorylation | Etomidate | Immediate Early Genes | Neuronal Death | Oxidative Stress [SUMMARY]
null
[CONTENT] Anesthesia | Animals | Etomidate | Hippocampus | Hypnotics and Sedatives | Immediate-Early Proteins | Maze Learning | Memory Disorders | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] Anesthesia | Animals | Etomidate | Hippocampus | Hypnotics and Sedatives | Immediate-Early Proteins | Maze Learning | Memory Disorders | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] Anesthesia | Animals | Etomidate | Hippocampus | Hypnotics and Sedatives | Immediate-Early Proteins | Maze Learning | Memory Disorders | Rats | Rats, Sprague-Dawley [SUMMARY]
null
[CONTENT] Anesthesia | Animals | Etomidate | Hippocampus | Hypnotics and Sedatives | Immediate-Early Proteins | Maze Learning | Memory Disorders | Rats | Rats, Sprague-Dawley [SUMMARY]
null
[CONTENT] rats recovered anesthesia | etomidate impairs memory | anesthesia induces neuronal | postoperative cognitive dysfunction | deficits mechanisms anesthetics [SUMMARY]
[CONTENT] rats recovered anesthesia | etomidate impairs memory | anesthesia induces neuronal | postoperative cognitive dysfunction | deficits mechanisms anesthetics [SUMMARY]
[CONTENT] rats recovered anesthesia | etomidate impairs memory | anesthesia induces neuronal | postoperative cognitive dysfunction | deficits mechanisms anesthetics [SUMMARY]
null
[CONTENT] rats recovered anesthesia | etomidate impairs memory | anesthesia induces neuronal | postoperative cognitive dysfunction | deficits mechanisms anesthetics [SUMMARY]
null
[CONTENT] etomidate | rats | memory | treated | group | day | vehicle | creb | vs | treated group [SUMMARY]
[CONTENT] etomidate | rats | memory | treated | group | day | vehicle | creb | vs | treated group [SUMMARY]
[CONTENT] etomidate | rats | memory | treated | group | day | vehicle | creb | vs | treated group [SUMMARY]
null
[CONTENT] etomidate | rats | memory | treated | group | day | vehicle | creb | vs | treated group [SUMMARY]
null
[CONTENT] deficits | cognitive | memory | etomidate | memory deficits | general | pocd | anesthesia | mechanisms | behavioral [SUMMARY]
[CONTENT] rats | usa | platform | test | animal | respectively | inc | china | mg | previously [SUMMARY]
[CONTENT] etomidate | vs | treated | treated group | group | day | vehicle treated | memory | figure | creb [SUMMARY]
null
[CONTENT] etomidate | rats | memory | conflicts | interest | conflicts interest | group | creb | treated | usa [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] 30 | Sprague-Dawley | two ||| Morris | Nissl ||| CREB | Arc | Egr1 | Western [SUMMARY]
[CONTENT] (day 4 | 27.26 | 35.52 | 3.88 | 2.988 | 0.0068 | day 5 | 15.84 | 4.02 | 30.67 | 3.013 | 0.0057 | day 6 | 9.47 | 2.35 | 25.66 | 4.16 | 3.567 | 0.0036 | 4.40 | 1.18 | 2.06 | 0.80 | 2.896 | 0.0072 | 34.00 | 18.07 | 4.79 | 3.023 | 0.0053 | 40.73 | 3.45 cm | 6.56 cm | 2.798 | 0.0086 ||| CREB ||| 100% | 86% | 2.876 | 0.0086 | 100% | 72% | 2.996 | 0.0076 | Egr1 | 100% | 58% | 3.011 | 0.0057 [SUMMARY]
null
[CONTENT] ||| ||| ||| 30 | Sprague-Dawley | two ||| Morris | Nissl ||| CREB | Arc | Egr1 | Western ||| (day 4 | 27.26 | 35.52 | 3.88 | 2.988 | 0.0068 | day 5 | 15.84 | 4.02 | 30.67 | 3.013 | 0.0057 | day 6 | 9.47 | 2.35 | 25.66 | 4.16 | 3.567 | 0.0036 | 4.40 | 1.18 | 2.06 | 0.80 | 2.896 | 0.0072 | 34.00 | 18.07 | 4.79 | 3.023 | 0.0053 | 40.73 | 3.45 cm | 6.56 cm | 2.798 | 0.0086 ||| CREB ||| 100% | 86% | 2.876 | 0.0086 | 100% | 72% | 2.996 | 0.0076 | Egr1 | 100% | 58% | 3.011 | 0.0057 ||| [SUMMARY]
null
Association of colchicine use for acute gout with clinical outcomes in acute decompensated heart failure.
35481608
Gout is a common comorbidity in heart failure (HF) patients and is frequently associated with acute exacerbations during treatment for decompensated HF. Although colchicine is often used to manage acute gout in HF patients, its impact on clinical outcomes when used during acute decompensated HF is unknown.
BACKGROUND
This was a single center, retrospective study of hospitalized patients treated for an acute HF exacerbation with and without acute gout flare between March 2011 and December 2020. We assessed clinical outcomes in patients treated with colchicine for a gout flare compared to those who did not experience a gout flare or receive colchicine. The primary outcome was in-hospital all-cause mortality.
METHODS
Among 1047 patient encounters for acute HF during the study period, there were 237 encounters (22.7%) where the patient also received colchicine for acute gout during admission. In-hospital all-cause mortality was significantly reduced in the colchicine group compared with the control group (2.1% vs. 6.5%, p = .009). The colchicine group had increased length of stay (9.93 vs. 7.96 days, p < .001) but no significant difference in 30-day readmissions (21.5% vs. 19.5%, p = .495). In a Cox proportional hazards model adjusted for age, inpatient colchicine use was associated with improved survival to discharge (hazard ratio [HR] 0.163, 95% confidence interval [CI] 0.051-0.525, p = .002) and a reduced rate of in-hospital CV mortality (HR 0.184, 95% CI 0.044-0.770, p = .021).
RESULTS
Among patients with a HF exacerbation, treatment with colchicine for a gout flare was associated with significantly lower in-hospital mortality compared with those not treated for acute gout.
CONCLUSION
[ "Colchicine", "Gout", "Heart Failure", "Hospitalization", "Humans", "Retrospective Studies", "Symptom Flare Up" ]
9286335
INTRODUCTION
Gout is a common comorbidity in heart failure (HF) patients and is the result of monosodium urate crystal deposition in joints and periarticular tissues.[1] Gout is associated with significant morbidity, mortality, and healthcare costs.[1, 2, 3] Diuretics are known to precipitate hyperuricemia and increase the risk of gout flares through mechanisms related to decreased uric acid secretion and increased uric acid reabsorption.[4, 5] Studies have estimated the prevalence of gout in HF patients to be approximately 16%-40%, and one study found that 56% of hospitalized HF patients had hyperuricemia.[6, 7, 8, 9] The therapeutic agents commonly used for an acute gout flare include colchicine, steroids, and nonsteroidal anti-inflammatory drugs (NSAIDs). However, steroids and NSAIDs are often avoided in HF because of the legitimate concerns of fluid retention and HF exacerbation.[10] In addition to its role in gout, colchicine's anti-inflammatory effects are also highly beneficial in the treatment and prevention of other cardiac conditions such as pericarditis.[11] Colchicine has also recently shown broader cardiovascular (CV) outcomes benefit in high-risk patients, particularly those with coronary artery disease (CAD) or a history of myocardial infarction (MI). However, the impact of colchicine use during gout flares on outcomes in patients with acutely decompensated HF is unknown. The purpose of this study was to assess clinical outcomes in patients treated for an acute HF exacerbation and receiving colchicine for an acute gout flare.
METHODS
Study design
This single center, retrospective cohort study compared clinical outcomes in those receiving colchicine for the treatment of an acute gout flare versus those without a gout flare among patients with an acute HF exacerbation at an academic medical center. Adult patients (age ≥ 18 years) admitted between March 2011 and February 2020 with an acute HF exacerbation who received initial intravenous (IV) diuretics were eligible for inclusion. Patients were identified using ICD 9 and ICD 10 codes for acute HF exacerbation. Patients treated with colchicine during the admission for a documented acute gout flare were included in the treatment group, while those not given colchicine during the admission were presumed not to have had a gout flare and were included in the control group. Patients were excluded if they had end-stage renal disease on hemodialysis, any history of transplantation or underwent transplantation during the admission, and any history of left ventricular assist device (LVAD) or LVAD implantation during admission. Patients receiving colchicine for indications other than acute gout or admitted for reasons other than acute decompensated HF were also excluded. All patients were included for analysis on an intention-to-treat basis. The study was approved by the Institutional Review Board before data collection. Study covariates were collected from the electronic medical record and data warehouse of medical records.
Outcomes
The primary outcome was in-hospital all-cause mortality. Secondary outcomes included hospital length of stay (LOS), 30-day readmissions, and time to death. This study also compared the primary and secondary outcomes between patients with a prior history of gout, a first diagnosis of gout, and the control group. A post hoc analysis was completed to evaluate in-hospital CV mortality and time to CV death. Baseline characteristics included age, gender, and ejection fraction (on the most recent imaging including echocardiogram, nuclear stress test, or angiography), comorbidities, and home HF and gout medications.
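The paper does not describe how these endpoints were derived from the data warehouse; the sketch below is only one plausible way to compute length of stay, an in-hospital death flag, and a 30-day readmission flag from encounter-level records, with entirely hypothetical column names and dates.

```python
import pandas as pd

# Hypothetical encounter-level records; the schema is assumed, not the study's.
df = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "admit_date": pd.to_datetime(["2019-01-02", "2019-01-20", "2019-02-10", "2019-03-05"]),
    "discharge_date": pd.to_datetime(["2019-01-08", "2019-01-25", "2019-02-18", "2019-03-09"]),
    "died_in_hospital": [False, False, True, False],
})

# Hospital length of stay in days.
df["los_days"] = (df["discharge_date"] - df["admit_date"]).dt.days

# Flag a 30-day readmission when the same patient's next admission begins
# within 30 days of the index discharge.
df = df.sort_values(["patient_id", "admit_date"])
next_admit = df.groupby("patient_id")["admit_date"].shift(-1)
df["readmit_30d"] = (next_admit - df["discharge_date"]).dt.days.le(30)

print(df[["patient_id", "los_days", "died_in_hospital", "readmit_30d"]])
```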
Statistical analysis
The primary outcome of in-hospital all-cause mortality and the secondary outcome of 30-day readmissions were compared using the Pearson χ2 test, while LOS was assessed using the Mann-Whitney U test. Reverse Kaplan-Meier curves stratified by colchicine treatment were constructed to evaluate the difference in survival. Time to death and time to CV death during the hospital admission were assessed using bivariable Cox proportional hazards regression with censoring at 4 weeks. The proportional hazards assumption in this case was easily verified by inspection of the Kaplan-Meier survival curves. Other baseline demographics and secondary outcomes compared categorical variables with Pearson χ2 tests, while continuous variables were compared using unpaired two-sample t-tests/analysis of variance (ANOVA) and the Mann-Whitney U/Kruskal-Wallis tests. Multivariable logistic regression was performed to assess the association of in-hospital colchicine with survival to discharge with adjustment for key covariates. Multivariable Cox proportional regression was used to assess differences in time to death with adjustment for age. The α value for all statistical tests was set at .05. R and SPSS statistical software were used in the analysis.
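As an illustration of these comparisons, the sketch below applies a Pearson chi-square test to a 2x2 mortality table reconstructed from the counts reported in this study (5 of 237 colchicine encounters vs. 53 of 810 control encounters) and a Mann-Whitney U test to hypothetical length-of-stay samples. scipy is used here only as a stand-in for the R/SPSS workflow the authors describe, and the choice of no continuity correction is an assumption; with that assumption the reconstructed table gives a p-value close to the reported .009.

```python
import numpy as np
from scipy import stats

# 2x2 table reconstructed from the reported counts: rows = colchicine / control,
# columns = died in hospital / survived to discharge.
mortality_table = np.array([[5, 232],
                            [53, 757]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(mortality_table, correction=False)
print(f"Pearson chi-square = {chi2:.2f}, p = {p_chi2:.4f}")

# Hypothetical length-of-stay samples (days) for the Mann-Whitney U comparison.
los_colchicine = np.array([6, 9, 11, 8, 14, 10, 7, 12])
los_control = np.array([5, 7, 8, 6, 9, 7, 10, 6])
u_stat, p_mwu = stats.mannwhitneyu(los_colchicine, los_control, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mwu:.4f}")
```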
RESULTS
Baseline characteristics
A total of 5109 patient encounters had an ICD 9 or ICD 10 code for acute HF exacerbation during the study period (Figure 1). The final cohort included 1047 patient encounters after exclusions as indicated in Figure 1 and was then stratified into colchicine and control groups as determined a priori. Baseline characteristics were similar between groups, with the exception of age, sex, and comorbidities of gout and alcohol use (Table 1). Patients in the colchicine group were more likely to be male and younger compared to the control group. A majority of the patients had HF with reduced ejection fraction (54.1%), and there was no significant difference in HF classifications between groups. Patients who received colchicine for an acute gout flare were more likely to have a history of gout (50.2% vs. 3.2%, p < .001) and alcohol use (3.4% vs. 1.2%, p = .026) compared with the control group. Admission serum creatinine was significantly higher in the colchicine group than the control group (1.73 vs. 1.44 mg/dl, p < .001). There was no significant difference in the change from admission serum creatinine to discharge serum creatinine between groups. Among the patient encounters with a uric acid level checked during the admission, uric acid was significantly higher in the colchicine group compared with the control group (10.63 vs. 8.26 mg/dl, p = .002). There was no significant difference in admission B-type natriuretic peptide level between groups. CONSORT flow diagram: the CONSORT flow diagram is shown for the cohort. ESRD, end stage renal disease; HF, heart failure; IV, intravenous; LVAD, left ventricular assist device. Baseline characteristics (Table 1). Abbreviations: ACEi, angiotensin converting enzyme inhibitor; ARB, angiotensin receptor blocker; ARNi, angiotensin receptor neprilysin inhibitor; BNP, B-type natriuretic peptide; CAD, coronary artery disease; HFmrEF, heart failure with mid-range ejection fraction; HFpEF, heart failure with preserved ejection fraction; HFrEF, heart failure with reduced ejection fraction; MRA, mineralocorticoid receptor antagonist; Non-DHP CCB, nondihydropyridine calcium channel blocker; SCr, serum creatinine. Plus-minus values are means ± SD. Admission BNP was only available for 846 patients (189 patients in the colchicine group and 657 patients in the control group). Uric acid was only available for 98 patients (71 patients in the colchicine group and 27 patients in the control group). If there were multiple uric acid levels during admission, the first level was recorded.
Primary outcome analysis
A total of 58 patients (5.5%) died during admission, five in the colchicine group and 53 in the control group (2.1% vs. 6.5%, p = .009), that is, a lower in-hospital all-cause mortality in the colchicine group (Table 2). A subgroup analysis was conducted to assess outcomes with the colchicine group stratified based on prior documented history of gout versus de novo gout presentation. In-hospital all-cause mortality was not significantly different between patients with a new gout diagnosis compared to those with a prior history of gout (3.4% vs. 0.8%, p = .213). Primary and secondary outcomes (Table 2). Abbreviations: LOS, length of stay; SD, standard deviation.
Secondary outcomes analysis
The 30-day readmissions were not significantly different between the colchicine and control groups (21.5% vs. 19.5%, p = .495) (Table 2). In the subgroup analysis, 30-day readmissions remained similar when comparing patients with de novo gout to those with a prior history of gout (22.9% vs. 20.2%, p = .611). Mean LOS was significantly increased in the colchicine group compared to the control group (9.93 vs. 7.96 days, p < .001) (Table 2).
In the subgroup analysis, LOS was significantly increased in the de novo presentation of gout group compared to the control group (10.52 vs. 7.96 days, p < .001) and in the prior history of gout group compared to the control group (9.35 vs. 7.96 days, p = .006). There was no significant difference in LOS between those with a first diagnosis of gout and those with a prior history of gout (p = .272). In a post hoc analysis, in-hospital CV mortality censored at 4 weeks was significantly lower in the colchicine group than in the control group (0.89% vs. 3.93%, p = .02).
Stratified Kaplan-Meier analysis and Cox proportional hazards regression during hospital admission
Reverse Kaplan-Meier curves stratified by in-hospital colchicine use and censored at 4 weeks are shown in Figure 2. Inpatient colchicine was associated with reduced rates of both in-hospital all-cause mortality (log rank p = .00026) and in-hospital CV mortality (log rank p = .0063) compared with the control group.
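A minimal sketch of this stratified survival comparison is shown below using the Python lifelines package as a stand-in for the authors' R/SPSS workflow. The data frame, its column names, and the follow-up times are hypothetical, and events are censored at 28 days to mirror the 4-week censoring described in the methods.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-encounter data: days of follow-up (censored at 28), death
# indicator, and in-hospital colchicine exposure. Illustrative only.
df = pd.DataFrame({
    "time_days": [6, 9, 12, 28, 3, 15, 28, 7, 20, 28],
    "died": [0, 0, 1, 0, 1, 0, 0, 1, 0, 0],
    "colchicine": [1, 1, 0, 1, 0, 0, 1, 0, 0, 1],
})

groups = {name: g for name, g in df.groupby("colchicine")}

kmf = KaplanMeierFitter()
for name, g in groups.items():
    kmf.fit(g["time_days"], event_observed=g["died"], label=f"colchicine={name}")
    # kmf.plot_survival_function()  # would draw each stratum's curve if plotting

# Log-rank test between the two strata, analogous to the p-values quoted above.
result = logrank_test(
    groups[1]["time_days"], groups[0]["time_days"],
    event_observed_A=groups[1]["died"], event_observed_B=groups[0]["died"],
)
print(f"log-rank p = {result.p_value:.4f}")
```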
In a Cox proportional hazards model adjusted for age, in-hospital colchicine use was associated with improved survival to discharge (hazard ratio [HR] 0.163, 95% confidence interval [CI] 0.051-0.525, p = .002) and a decreased rate of in-hospital CV mortality (HR 0.184, 95% CI 0.044-0.770, p = .021). Reverse Kaplan-Meier curves stratified by home colchicine use were also generated (Figure 3). Home colchicine use was associated with a reduced rate of in-hospital all-cause mortality (log rank p = .037) but no significant difference in the rate of in-hospital CV mortality (log rank p = .14). Figure 2: Inpatient all-cause and cardiovascular (CV) death by inpatient colchicine use; reverse Kaplan-Meier curves for inpatient all-cause death and inpatient CV death stratified on inpatient colchicine use are shown (p = .00026 and p = .0063, respectively). Figure 3: Inpatient all-cause and CV death by home colchicine use; reverse Kaplan-Meier curves for inpatient all-cause death and inpatient CV death stratified on home colchicine use are shown (p = .037 and p = .14, respectively).
Multivariate logistic regression analysis
A multivariate logistic regression model was performed to evaluate associations of other covariates with in-hospital all-cause mortality. These covariates included in-hospital colchicine use, home beta-blocker use, inotrope use, age, and diabetes mellitus. In-hospital colchicine use given for a gout flare was significantly associated with reduced in-hospital all-cause mortality (OR 0.322, 95% CI 0.105-0.779, p = .02) after adjustment for home beta-blocker use, inotrope use, age, and diabetes mellitus (p < .05 for all in the model).
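The age-adjusted Cox model and the multivariable logistic model reported above could be fit along the following lines, here with lifelines and statsmodels on simulated data rather than the R/SPSS workflow the authors used; all column names are assumptions. Exponentiated Cox coefficients correspond to hazard ratios and exponentiated logistic coefficients to odds ratios.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
# Simulated analysis data set -- illustrative only, not the study's data.
df = pd.DataFrame({
    "time_days": rng.integers(1, 29, n),
    "died": rng.integers(0, 2, n),
    "colchicine": rng.integers(0, 2, n),
    "age": rng.normal(70, 10, n),
    "home_beta_blocker": rng.integers(0, 2, n),
    "inotrope": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
})

# Cox proportional hazards model for time to in-hospital death, adjusted for age.
cph = CoxPHFitter()
cph.fit(df[["time_days", "died", "colchicine", "age"]],
        duration_col="time_days", event_col="died")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])

# Multivariable logistic regression for in-hospital death; exponentiated
# coefficients are odds ratios, as for the OR reported above.
logit = smf.logit("died ~ colchicine + home_beta_blocker + inotrope + age + diabetes",
                  data=df).fit(disp=False)
print(np.exp(logit.params))
```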
CONCLUSION
This study demonstrated that colchicine use for acute gout flare in hospitalized patients with a HF exacerbation was associated with decreased in‐hospital all‐cause mortality and in‐hospital CV mortality compared with the control group. The use of colchicine was also associated with a longer LOS but similar 30‐day readmissions. Additional large multicenter retrospective and prospective randomized studies are needed to more fully understand the association of colchicine use with outcomes in patients undergoing treatment for HF exacerbations and explore its role as a potential treatment option in this population.
[ "INTRODUCTION", "Study design", "Outcomes", "Statistical analysis", "Baseline characteristics", "Primary outcome analysis", "Secondary outcomes analysis", "Stratified Kaplan−Meier analysis and Cox proportional hazards regression during hospital admission", "Multivariate logistic regression analysis", "LIMITATIONS" ]
[ "Gout is a common comorbidity in heart failure (HF) patients and is the result of monosodium urate crystal deposition in joints and periarticular tissues.\n1\n Gout is associated with significant morbidity, mortality, and healthcare costs.\n1\n, \n2\n, \n3\n Diuretics are known to precipitate hyperuricemia and increase the risk of gout flares through mechanisms related to decreased uric acid secretion and increased uric acid reabsorption.\n4\n, \n5\n Studies have estimated the prevalence of gout in HF patients to be approximately 16%−40%, and one study found that 56% of hospitalized HF patients had hyperuricemia.\n6\n, \n7\n, \n8\n, \n9\n\n\nThe therapeutic agents commonly used for an acute gout flare include colchicine, steroids, and nonsteroidal anti‐inflammatory drugs (NSAIDs). However, steroids and NSAIDs are often avoided in HF because of the legitimate concerns of fluid retention and HF exacerbation.\n10\n In addition to its role in gout, colchicine's anti‐inflammatory effects are also highly beneficial in the treatment and prevention of other cardiac conditions such as pericarditis.\n11\n Colchicine has also recently shown broader cardiovascular (CV) outcomes benefit in high‐risk patients, particularly those with coronary artery disease (CAD) or history of myocardial infarction (MI). However, the impact of colchicine use during gout flares on outcomes in patients with acutely decompensated HF is unknown. The purpose of this study was to assess clinical outcomes in patients treated for an acute HF exacerbation and receiving colchicine for an acute gout flare.", "This single center, retrospective cohort study compared clinical outcomes in those receiving colchicine for the treatment of an acute gout flare versus those without a gout flare among patients with an acute HF exacerbation at an academic medical center. Adult patients (age ≥ 18 years) admitted between March 2011 and February 2020 with an acute HF exacerbation who received initial intravenous (IV) diuretics were eligible for inclusion. Patients were identified using ICD 9 and ICD 10 codes for acute HF exacerbation. Patients treated with colchicine during the admission for a documented acute gout flare were included in the treatment group, while those not given colchicine during the admission were presumed not to have had a gout flare and were included in the control group. Patients were excluded if they had end‐stage renal disease on hemodialysis, any history of transplantation or underwent transplantation during the admission, and any history of left ventricular assist device (LVAD) or LVAD implantation during admission. Patients receiving colchicine for indications other than acute gout or admitted for reasons other than acute decompensated HF were also excluded. All patients were included for analysis on an intention‐to‐treat basis. The study was approved by the Institutional Review Board before data collection. Study covariates were collected from the electronic medical record and data warehouse of medical records.", "The primary outcome was in‐hospital all‐cause mortality. Secondary outcomes included hospital length of stay (LOS), 30‐day readmissions, and time to death. This study also compared the primary and secondary outcomes between patients with a prior history of gout, a first diagnosis of gout, and the control group. A post hoc analysis was completed to evaluate in‐hospital CV mortality and time to CV death. 
Baseline characteristics included age, gender, and ejection fraction (on the most recent imaging including echocardiogram, nuclear stress test, or angiography), comorbidities, and home HF and gout medications.", "The primary outcome of in‐hospital all‐cause mortality and secondary outcome of 30‐day readmissions were compared using the Pearson χ\n2 test, while LOS was assessed using the Mann−Whitney U test. Reverse Kaplan−Meier curves stratified by colchicine treatment were constructed to evaluate the difference in survival. Time to death and time to CV death during the hospital admission were assessed using bivariable Cox proportional hazards regression with censoring at 4 weeks. The proportional hazards assumption in this case was easily verified by inspection of the Kaplan−Meier survival curves. Other baseline demographics and secondary outcomes compared categorical variables with Pearson χ\n2 tests, while continuous variables were compared using unpaired two‐sample t‐tests/analysis of variance (ANOVA) and the Mann−Whitney U/Kruskal−Wallis tests. Multivariable logistic regression was performed to assess the association of in‐hospital colchicine with survival to discharge with adjustment for key covariates. Multivariable Cox proportional regression was used to assess differences in time to death with adjustment for age. The α value for all statistical tests was set at .05. R and SPSS statistical software were used in the analysis.", "A total of 5109 patient encounters had an ICD 9 or ICD 10 code for acute HF exacerbation during the study period (Figure 1). The final cohort included 1047 patient encounters after exclusions as indicated in Figure 1 and was then stratified into colchicine and control groups as determined a priori. Baseline characteristics were similar between groups, with the exception of age, sex, and comorbidities of gout and alcohol use (Table 1). Patients in the colchicine group were more likely to be male and younger compared to the control group. A majority of the patients had HF with reduced ejection fraction (54.1%), and there was no significant difference in HF classifications between groups. Patients who received colchicine for an acute gout flare were more likely to have a history of gout (50.2% vs. 3.2%, p < .001) and alcohol use (3.4% vs. 1.2%, p = .026) compared with the control group. Admission serum creatinine was significantly higher in the colchicine group than the control group (1.73 vs. 1.44 mg/dl, p < .001). There was no significant difference in the change from admission serum creatinine to discharge serum creatinine between groups. Among the patient encounters with a uric acid level checked during the admission, uric acid was significantly higher in the colchicine group compared with the control group (10.63 vs. 8.26 mg/dl, p = .002). There was no significant difference in admission B‐type natriuretic peptide level between groups.\nCONSORT flow diagram. The CONSORT flow diagram is shown for the cohort. 
ESRD, end stage renal disease; HF, heart failure; IV, intravenous; LVAD, left ventricular assist device\nBaseline Characteristics\na\n\n\nAbbreviations: ACEi, angiotensin converting enzyme inhibitor; ARB, angiotensin receptor blocker; ARNi, angiotensin receptor neprilysin inhibitor; BNP, B‐type natriuretic peptide; CAD, coronary artery disease; HFmrEF, heart failure with mid‐range ejection fraction; HFpEF, heart failure with preserved ejection fraction; HFrEF, heart failure with reduced ejection fraction; MRA, mineralocorticoid receptor antagonist; Non‐DHP CCB, nondihydropyridine calcium channel blocker; SCr, serum creatinine.\nPlus−minus values are means ± SD.\nAdmission BNP was only available for 846 patients (189 patients in the colchicine group and 657 patients in the control group).\nUric acid was only available for 98 patients (71 patients in the colchicine group and 27 patients in the control group). If there were multiple uric acid levels during admission, the first level was recorded.", "A total of 58 patients (5.5%) died during admission, five in the colchicine group and 53 in the control group (2.1% vs. 6.5%, p = .009), that is, a lower in‐hospital all‐cause mortality in the colchicine group (Table 2). A subgroup analysis was conducted to assess outcomes with the colchicine group stratified based on prior documented history of gout versus de novo gout presentation. In‐hospital all‐cause mortality was not significantly different between patients with a new gout diagnosis compared to those with a prior history of gout (3.4% vs. 0.8%, p = .213).\nPrimary and secondary outcomes\nAbbreviations: LOS, length of stay; SD, standard deviation.", "The 30‐day readmissions were not significantly different between the colchicine and control groups (21.5% vs. 19.5%, p = .495) (Table 2). In the subgroup analysis, 30‐day readmissions remained similar when comparing patients with de novo gout to those with a prior history of gout (22.9% vs. 20.2%, p = .611). Mean LOS was significantly increased in the colchicine group compared to the control group (9.93 vs. 7.96 days, p < .001) (Table 2). In the subgroup analysis, LOS was significantly increased in the de novo presentation of gout group compared to the control group (10.52 vs. 7.96 days, p < .001) and in the prior history of gout group compared to the control group (9.35 vs. 7.96 days, p = .006). There was no significant difference in LOS between those with a first diagnosis of gout and those with a prior history of gout (p = .272). In a post hoc analysis, in‐hospital CV mortality censored at 4 weeks was significantly lower in the colchicine group than the control group (0.89% vs. 3.93%, p = .02).", "Reverse Kaplan−Meier curves stratified by in‐hospital colchicine and censored at 4 weeks are shown in Figure 2. Inpatient colchicine was associated with reduced rates of both in‐hospital all‐cause mortality (log rank p = .00026) and in‐hospital CV mortality (log rank p = .0063) compared with the control group. In a Cox proportional hazards model adjusted for age, in‐hospital colchicine use was associated with improved survival to discharge (hazard ratio [HR] 0.163, 95% confidence interval [CI] 0.051−0.525, p = .002) and a decreased rate of in‐hospital CV mortality (HR 0.184, 95% CI 0.044−0.770, p = .021). Reverse Kaplan–Meier curves stratified by home colchicine use were also generated (Figure 3). 
Home colchicine use was associated with a reduced rate of in‐hospital all‐cause mortality (log rank p = .037) but no significant difference in the rate of in‐hospital CV mortality (log rank p = .14).\nInpatient all‐cause and cardiovascular (CV) death by inpatient colchicine use. Reverse Kaplan−Meier curves for inpatient all‐cause death and inpatient CV death stratified on inpatient colchicine use are shown (p = .00026 and p = .0063, respectively)\nInpatient all‐cause and cardiovascular (CV) death by home colchicine use. Reverse Kaplan−Meier curves for inpatient all‐cause death and inpatient CV death stratified on home colchicine use are shown (p = .037 and p = .14, respectively)", "A multivariate logistic regression model was performed to evaluate associations of other covariates with in‐hospital all‐cause mortality. These covariates included in‐hospital colchicine use, home beta‐blocker use, inotrope use, age, and diabetes mellitus. In‐hospital colchicine use given for a gout flare was significantly associated with reduced in‐hospital all‐cause mortality (OR 0.322, 95% CI 0.105−0.779, p = .02) after adjustment for home beta‐blocker use, inotrope use, age, and diabetes mellitus (p < .05 for all in the model).", "There are some limitations to this study that should be acknowledged. Limitations due to the retrospective design include potential for missing data points, inability to assess diuretic doses received during hospitalization, and reliance on ICD 9 and ICD 10 codes for initial diagnosis of acute HF exacerbation. Readmissions at outside hospitals that are not linked with the institution's electronic medical record may not have been identified. Additionally, in this study the intervention group consisted of patients given colchicine for an acute gout flare, while the control group included those with neither a gout flare nor colchicine treatment. At our institution, most HF patients with acute gout are treated with colchicine, so it would not have been possible to assemble a sufficiently powered control group of acute HF patients who had a gout flare without colchicine treatment. It is possible that acute gout could be a surrogate marker for more effective diuresis or renal dysfunction. However, the existing literature demonstrating worsened outcomes associated with gout and hyperuricemia suggests that the presence of gout in the intervention group would skew the results toward the null hypothesis. As this was an observational study, it is possible that there could still be unmeasured confounders even after statistical adjustment." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study design", "Outcomes", "Statistical analysis", "RESULTS", "Baseline characteristics", "Primary outcome analysis", "Secondary outcomes analysis", "Stratified Kaplan−Meier analysis and Cox proportional hazards regression during hospital admission", "Multivariate logistic regression analysis", "DISCUSSION", "LIMITATIONS", "CONCLUSION", "CONFLICTS OF INTEREST" ]
[ "Gout is a common comorbidity in heart failure (HF) patients and is the result of monosodium urate crystal deposition in joints and periarticular tissues.\n1\n Gout is associated with significant morbidity, mortality, and healthcare costs.\n1\n, \n2\n, \n3\n Diuretics are known to precipitate hyperuricemia and increase the risk of gout flares through mechanisms related to decreased uric acid secretion and increased uric acid reabsorption.\n4\n, \n5\n Studies have estimated the prevalence of gout in HF patients to be approximately 16%−40%, and one study found that 56% of hospitalized HF patients had hyperuricemia.\n6\n, \n7\n, \n8\n, \n9\n\n\nThe therapeutic agents commonly used for an acute gout flare include colchicine, steroids, and nonsteroidal anti‐inflammatory drugs (NSAIDs). However, steroids and NSAIDs are often avoided in HF because of the legitimate concerns of fluid retention and HF exacerbation.\n10\n In addition to its role in gout, colchicine's anti‐inflammatory effects are also highly beneficial in the treatment and prevention of other cardiac conditions such as pericarditis.\n11\n Colchicine has also recently shown broader cardiovascular (CV) outcomes benefit in high‐risk patients, particularly those with coronary artery disease (CAD) or history of myocardial infarction (MI). However, the impact of colchicine use during gout flares on outcomes in patients with acutely decompensated HF is unknown. The purpose of this study was to assess clinical outcomes in patients treated for an acute HF exacerbation and receiving colchicine for an acute gout flare.", "Study design This single center, retrospective cohort study compared clinical outcomes in those receiving colchicine for the treatment of an acute gout flare versus those without a gout flare among patients with an acute HF exacerbation at an academic medical center. Adult patients (age ≥ 18 years) admitted between March 2011 and February 2020 with an acute HF exacerbation who received initial intravenous (IV) diuretics were eligible for inclusion. Patients were identified using ICD 9 and ICD 10 codes for acute HF exacerbation. Patients treated with colchicine during the admission for a documented acute gout flare were included in the treatment group, while those not given colchicine during the admission were presumed not to have had a gout flare and were included in the control group. Patients were excluded if they had end‐stage renal disease on hemodialysis, any history of transplantation or underwent transplantation during the admission, and any history of left ventricular assist device (LVAD) or LVAD implantation during admission. Patients receiving colchicine for indications other than acute gout or admitted for reasons other than acute decompensated HF were also excluded. All patients were included for analysis on an intention‐to‐treat basis. The study was approved by the Institutional Review Board before data collection. Study covariates were collected from the electronic medical record and data warehouse of medical records.\nThis single center, retrospective cohort study compared clinical outcomes in those receiving colchicine for the treatment of an acute gout flare versus those without a gout flare among patients with an acute HF exacerbation at an academic medical center. Adult patients (age ≥ 18 years) admitted between March 2011 and February 2020 with an acute HF exacerbation who received initial intravenous (IV) diuretics were eligible for inclusion. 
Patients were identified using ICD 9 and ICD 10 codes for acute HF exacerbation. Patients treated with colchicine during the admission for a documented acute gout flare were included in the treatment group, while those not given colchicine during the admission were presumed not to have had a gout flare and were included in the control group. Patients were excluded if they had end‐stage renal disease on hemodialysis, any history of transplantation or underwent transplantation during the admission, and any history of left ventricular assist device (LVAD) or LVAD implantation during admission. Patients receiving colchicine for indications other than acute gout or admitted for reasons other than acute decompensated HF were also excluded. All patients were included for analysis on an intention‐to‐treat basis. The study was approved by the Institutional Review Board before data collection. Study covariates were collected from the electronic medical record and data warehouse of medical records.\nOutcomes The primary outcome was in‐hospital all‐cause mortality. Secondary outcomes included hospital length of stay (LOS), 30‐day readmissions, and time to death. This study also compared the primary and secondary outcomes between patients with a prior history of gout, a first diagnosis of gout, and the control group. A post hoc analysis was completed to evaluate in‐hospital CV mortality and time to CV death. Baseline characteristics included age, gender, and ejection fraction (on the most recent imaging including echocardiogram, nuclear stress test, or angiography), comorbidities, and home HF and gout medications.\nThe primary outcome was in‐hospital all‐cause mortality. Secondary outcomes included hospital length of stay (LOS), 30‐day readmissions, and time to death. This study also compared the primary and secondary outcomes between patients with a prior history of gout, a first diagnosis of gout, and the control group. A post hoc analysis was completed to evaluate in‐hospital CV mortality and time to CV death. Baseline characteristics included age, gender, and ejection fraction (on the most recent imaging including echocardiogram, nuclear stress test, or angiography), comorbidities, and home HF and gout medications.\nStatistical analysis The primary outcome of in‐hospital all‐cause mortality and secondary outcome of 30‐day readmissions were compared using the Pearson χ\n2 test, while LOS was assessed using the Mann−Whitney U test. Reverse Kaplan−Meier curves stratified by colchicine treatment were constructed to evaluate the difference in survival. Time to death and time to CV death during the hospital admission were assessed using bivariable Cox proportional hazards regression with censoring at 4 weeks. The proportional hazards assumption in this case was easily verified by inspection of the Kaplan−Meier survival curves. Other baseline demographics and secondary outcomes compared categorical variables with Pearson χ\n2 tests, while continuous variables were compared using unpaired two‐sample t‐tests/analysis of variance (ANOVA) and the Mann−Whitney U/Kruskal−Wallis tests. Multivariable logistic regression was performed to assess the association of in‐hospital colchicine with survival to discharge with adjustment for key covariates. Multivariable Cox proportional regression was used to assess differences in time to death with adjustment for age. The α value for all statistical tests was set at .05. 
R and SPSS statistical software were used in the analysis.\nThe primary outcome of in‐hospital all‐cause mortality and secondary outcome of 30‐day readmissions were compared using the Pearson χ\n2 test, while LOS was assessed using the Mann−Whitney U test. Reverse Kaplan−Meier curves stratified by colchicine treatment were constructed to evaluate the difference in survival. Time to death and time to CV death during the hospital admission were assessed using bivariable Cox proportional hazards regression with censoring at 4 weeks. The proportional hazards assumption in this case was easily verified by inspection of the Kaplan−Meier survival curves. Other baseline demographics and secondary outcomes compared categorical variables with Pearson χ\n2 tests, while continuous variables were compared using unpaired two‐sample t‐tests/analysis of variance (ANOVA) and the Mann−Whitney U/Kruskal−Wallis tests. Multivariable logistic regression was performed to assess the association of in‐hospital colchicine with survival to discharge with adjustment for key covariates. Multivariable Cox proportional regression was used to assess differences in time to death with adjustment for age. The α value for all statistical tests was set at .05. R and SPSS statistical software were used in the analysis.", "This single center, retrospective cohort study compared clinical outcomes in those receiving colchicine for the treatment of an acute gout flare versus those without a gout flare among patients with an acute HF exacerbation at an academic medical center. Adult patients (age ≥ 18 years) admitted between March 2011 and February 2020 with an acute HF exacerbation who received initial intravenous (IV) diuretics were eligible for inclusion. Patients were identified using ICD 9 and ICD 10 codes for acute HF exacerbation. Patients treated with colchicine during the admission for a documented acute gout flare were included in the treatment group, while those not given colchicine during the admission were presumed not to have had a gout flare and were included in the control group. Patients were excluded if they had end‐stage renal disease on hemodialysis, any history of transplantation or underwent transplantation during the admission, and any history of left ventricular assist device (LVAD) or LVAD implantation during admission. Patients receiving colchicine for indications other than acute gout or admitted for reasons other than acute decompensated HF were also excluded. All patients were included for analysis on an intention‐to‐treat basis. The study was approved by the Institutional Review Board before data collection. Study covariates were collected from the electronic medical record and data warehouse of medical records.", "The primary outcome was in‐hospital all‐cause mortality. Secondary outcomes included hospital length of stay (LOS), 30‐day readmissions, and time to death. This study also compared the primary and secondary outcomes between patients with a prior history of gout, a first diagnosis of gout, and the control group. A post hoc analysis was completed to evaluate in‐hospital CV mortality and time to CV death. 
Baseline characteristics included age, gender, and ejection fraction (on the most recent imaging including echocardiogram, nuclear stress test, or angiography), comorbidities, and home HF and gout medications.", "The primary outcome of in‐hospital all‐cause mortality and secondary outcome of 30‐day readmissions were compared using the Pearson χ\n2 test, while LOS was assessed using the Mann−Whitney U test. Reverse Kaplan−Meier curves stratified by colchicine treatment were constructed to evaluate the difference in survival. Time to death and time to CV death during the hospital admission were assessed using bivariable Cox proportional hazards regression with censoring at 4 weeks. The proportional hazards assumption in this case was easily verified by inspection of the Kaplan−Meier survival curves. Other baseline demographics and secondary outcomes compared categorical variables with Pearson χ\n2 tests, while continuous variables were compared using unpaired two‐sample t‐tests/analysis of variance (ANOVA) and the Mann−Whitney U/Kruskal−Wallis tests. Multivariable logistic regression was performed to assess the association of in‐hospital colchicine with survival to discharge with adjustment for key covariates. Multivariable Cox proportional regression was used to assess differences in time to death with adjustment for age. The α value for all statistical tests was set at .05. R and SPSS statistical software were used in the analysis.", "Baseline characteristics A total of 5109 patient encounters had an ICD 9 or ICD 10 code for acute HF exacerbation during the study period (Figure 1). The final cohort included 1047 patient encounters after exclusions as indicated in Figure 1 and was then stratified into colchicine and control groups as determined a priori. Baseline characteristics were similar between groups, with the exception of age, sex, and comorbidities of gout and alcohol use (Table 1). Patients in the colchicine group were more likely to be male and younger compared to the control group. A majority of the patients had HF with reduced ejection fraction (54.1%), and there was no significant difference in HF classifications between groups. Patients who received colchicine for an acute gout flare were more likely to have a history of gout (50.2% vs. 3.2%, p < .001) and alcohol use (3.4% vs. 1.2%, p = .026) compared with the control group. Admission serum creatinine was significantly higher in the colchicine group than the control group (1.73 vs. 1.44 mg/dl, p < .001). There was no significant difference in the change from admission serum creatinine to discharge serum creatinine between groups. Among the patient encounters with a uric acid level checked during the admission, uric acid was significantly higher in the colchicine group compared with the control group (10.63 vs. 8.26 mg/dl, p = .002). There was no significant difference in admission B‐type natriuretic peptide level between groups.\nCONSORT flow diagram. The CONSORT flow diagram is shown for the cohort. 
ESRD, end stage renal disease; HF, heart failure; IV, intravenous; LVAD, left ventricular assist device\nBaseline Characteristics\na\n\n\nAbbreviations: ACEi, angiotensin converting enzyme inhibitor; ARB, angiotensin receptor blocker; ARNi, angiotensin receptor neprilysin inhibitor; BNP, B‐type natriuretic peptide; CAD, coronary artery disease; HFmrEF, heart failure with mid‐range ejection fraction; HFpEF, heart failure with preserved ejection fraction; HFrEF, heart failure with reduced ejection fraction; MRA, mineralocorticoid receptor antagonist; Non‐DHP CCB, nondihydropyridine calcium channel blocker; SCr, serum creatinine.\nPlus−minus values are means ± SD.\nAdmission BNP was only available for 846 patients (189 patients in the colchicine group and 657 patients in the control group).\nUric acid was only available for 98 patients (71 patients in the colchicine group and 27 patients in the control group). If there were multiple uric acid levels during admission, the first level was recorded.\nA total of 5109 patient encounters had an ICD 9 or ICD 10 code for acute HF exacerbation during the study period (Figure 1). The final cohort included 1047 patient encounters after exclusions as indicated in Figure 1 and was then stratified into colchicine and control groups as determined a priori. Baseline characteristics were similar between groups, with the exception of age, sex, and comorbidities of gout and alcohol use (Table 1). Patients in the colchicine group were more likely to be male and younger compared to the control group. A majority of the patients had HF with reduced ejection fraction (54.1%), and there was no significant difference in HF classifications between groups. Patients who received colchicine for an acute gout flare were more likely to have a history of gout (50.2% vs. 3.2%, p < .001) and alcohol use (3.4% vs. 1.2%, p = .026) compared with the control group. Admission serum creatinine was significantly higher in the colchicine group than the control group (1.73 vs. 1.44 mg/dl, p < .001). There was no significant difference in the change from admission serum creatinine to discharge serum creatinine between groups. Among the patient encounters with a uric acid level checked during the admission, uric acid was significantly higher in the colchicine group compared with the control group (10.63 vs. 8.26 mg/dl, p = .002). There was no significant difference in admission B‐type natriuretic peptide level between groups.\nCONSORT flow diagram. The CONSORT flow diagram is shown for the cohort. ESRD, end stage renal disease; HF, heart failure; IV, intravenous; LVAD, left ventricular assist device\nBaseline Characteristics\na\n\n\nAbbreviations: ACEi, angiotensin converting enzyme inhibitor; ARB, angiotensin receptor blocker; ARNi, angiotensin receptor neprilysin inhibitor; BNP, B‐type natriuretic peptide; CAD, coronary artery disease; HFmrEF, heart failure with mid‐range ejection fraction; HFpEF, heart failure with preserved ejection fraction; HFrEF, heart failure with reduced ejection fraction; MRA, mineralocorticoid receptor antagonist; Non‐DHP CCB, nondihydropyridine calcium channel blocker; SCr, serum creatinine.\nPlus−minus values are means ± SD.\nAdmission BNP was only available for 846 patients (189 patients in the colchicine group and 657 patients in the control group).\nUric acid was only available for 98 patients (71 patients in the colchicine group and 27 patients in the control group). 
If there were multiple uric acid levels during admission, the first level was recorded.\nPrimary outcome analysis A total of 58 patients (5.5%) died during admission, five in the colchicine group and 53 in the control group (2.1% vs. 6.5%, p = .009), that is, a lower in‐hospital all‐cause mortality in the colchicine group (Table 2). A subgroup analysis was conducted to assess outcomes with the colchicine group stratified based on prior documented history of gout versus de novo gout presentation. In‐hospital all‐cause mortality was not significantly different between patients with a new gout diagnosis compared to those with a prior history of gout (3.4% vs. 0.8%, p = .213).\nPrimary and secondary outcomes\nAbbreviations: LOS, length of stay; SD, standard deviation.\nA total of 58 patients (5.5%) died during admission, five in the colchicine group and 53 in the control group (2.1% vs. 6.5%, p = .009), that is, a lower in‐hospital all‐cause mortality in the colchicine group (Table 2). A subgroup analysis was conducted to assess outcomes with the colchicine group stratified based on prior documented history of gout versus de novo gout presentation. In‐hospital all‐cause mortality was not significantly different between patients with a new gout diagnosis compared to those with a prior history of gout (3.4% vs. 0.8%, p = .213).\nPrimary and secondary outcomes\nAbbreviations: LOS, length of stay; SD, standard deviation.\nSecondary outcomes analysis The 30‐day readmissions were not significantly different between the colchicine and control groups (21.5% vs. 19.5%, p = .495) (Table 2). In the subgroup analysis, 30‐day readmissions remained similar when comparing patients with de novo gout to those with a prior history of gout (22.9% vs. 20.2%, p = .611). Mean LOS was significantly increased in the colchicine group compared to the control group (9.93 vs. 7.96 days, p < .001) (Table 2). In the subgroup analysis, LOS was significantly increased in the de novo presentation of gout group compared to the control group (10.52 vs. 7.96 days, p < .001) and in the prior history of gout group compared to the control group (9.35 vs. 7.96 days, p = .006). There was no significant difference in LOS between those with a first diagnosis of gout and those with a prior history of gout (p = .272). In a post hoc analysis, in‐hospital CV mortality censored at 4 weeks was significantly lower in the colchicine group than the control group (0.89% vs. 3.93%, p = .02).\nThe 30‐day readmissions were not significantly different between the colchicine and control groups (21.5% vs. 19.5%, p = .495) (Table 2). In the subgroup analysis, 30‐day readmissions remained similar when comparing patients with de novo gout to those with a prior history of gout (22.9% vs. 20.2%, p = .611). Mean LOS was significantly increased in the colchicine group compared to the control group (9.93 vs. 7.96 days, p < .001) (Table 2). In the subgroup analysis, LOS was significantly increased in the de novo presentation of gout group compared to the control group (10.52 vs. 7.96 days, p < .001) and in the prior history of gout group compared to the control group (9.35 vs. 7.96 days, p = .006). There was no significant difference in LOS between those with a first diagnosis of gout and those with a prior history of gout (p = .272). In a post hoc analysis, in‐hospital CV mortality censored at 4 weeks was significantly lower in the colchicine group than the control group (0.89% vs. 
3.93%, p = .02).\nStratified Kaplan−Meier analysis and Cox proportional hazards regression during hospital admission Reverse Kaplan−Meier curves stratified by in‐hospital colchicine and censored at 4 weeks are shown in Figure 2. Inpatient colchicine was associated with reduced rates of both in‐hospital all‐cause mortality (log rank p = .00026) and in‐hospital CV mortality (log rank p = .0063) compared with the control group. In a Cox proportional hazards model adjusted for age, in‐hospital colchicine use was associated with improved survival to discharge (hazard ratio [HR] 0.163, 95% confidence interval [CI] 0.051−0.525, p = .002) and a decreased rate of in‐hospital CV mortality (HR 0.184, 95% CI 0.044−0.770, p = .021). Reverse Kaplan–Meier curves stratified by home colchicine use were also generated (Figure 3). Home colchicine use was associated with a reduced rate of in‐hospital all‐cause mortality (log rank p = .037) but no significant difference in the rate of in‐hospital CV mortality (log rank p = .14).\nInpatient all‐cause and cardiovascular (CV) death by inpatient colchicine use. Reverse Kaplan−Meier curves for inpatient all‐cause death and inpatient CV death stratified on inpatient colchicine use are shown (p = .00026 and p = .0063, respectively)\nInpatient all‐cause and cardiovascular (CV) death by home colchicine use. Reverse Kaplan−Meier curves for inpatient all‐cause death and inpatient CV death stratified on home colchicine use are shown (p = .037 and p = .14, respectively)\nReverse Kaplan−Meier curves stratified by in‐hospital colchicine and censored at 4 weeks are shown in Figure 2. Inpatient colchicine was associated with reduced rates of both in‐hospital all‐cause mortality (log rank p = .00026) and in‐hospital CV mortality (log rank p = .0063) compared with the control group. In a Cox proportional hazards model adjusted for age, in‐hospital colchicine use was associated with improved survival to discharge (hazard ratio [HR] 0.163, 95% confidence interval [CI] 0.051−0.525, p = .002) and a decreased rate of in‐hospital CV mortality (HR 0.184, 95% CI 0.044−0.770, p = .021). Reverse Kaplan–Meier curves stratified by home colchicine use were also generated (Figure 3). Home colchicine use was associated with a reduced rate of in‐hospital all‐cause mortality (log rank p = .037) but no significant difference in the rate of in‐hospital CV mortality (log rank p = .14).\nInpatient all‐cause and cardiovascular (CV) death by inpatient colchicine use. Reverse Kaplan−Meier curves for inpatient all‐cause death and inpatient CV death stratified on inpatient colchicine use are shown (p = .00026 and p = .0063, respectively)\nInpatient all‐cause and cardiovascular (CV) death by home colchicine use. Reverse Kaplan−Meier curves for inpatient all‐cause death and inpatient CV death stratified on home colchicine use are shown (p = .037 and p = .14, respectively)\nMultivariate logistic regression analysis A multivariate logistic regression model was performed to evaluate associations of other covariates with in‐hospital all‐cause mortality. These covariates included in‐hospital colchicine use, home beta‐blocker use, inotrope use, age, and diabetes mellitus. 
In‐hospital colchicine use given for a gout flare was significantly associated with reduced in‐hospital all‐cause mortality (OR 0.322, 95% CI 0.105−0.779, p = .02) after adjustment for home beta‐blocker use, inotrope use, age, and diabetes mellitus (p < .05 for all in the model).\nA multivariate logistic regression model was performed to evaluate associations of other covariates with in‐hospital all‐cause mortality. These covariates included in‐hospital colchicine use, home beta‐blocker use, inotrope use, age, and diabetes mellitus. In‐hospital colchicine use given for a gout flare was significantly associated with reduced in‐hospital all‐cause mortality (OR 0.322, 95% CI 0.105−0.779, p = .02) after adjustment for home beta‐blocker use, inotrope use, age, and diabetes mellitus (p < .05 for all in the model).", "A total of 5109 patient encounters had an ICD 9 or ICD 10 code for acute HF exacerbation during the study period (Figure 1). The final cohort included 1047 patient encounters after exclusions as indicated in Figure 1 and was then stratified into colchicine and control groups as determined a priori. Baseline characteristics were similar between groups, with the exception of age, sex, and comorbidities of gout and alcohol use (Table 1). Patients in the colchicine group were more likely to be male and younger compared to the control group. A majority of the patients had HF with reduced ejection fraction (54.1%), and there was no significant difference in HF classifications between groups. Patients who received colchicine for an acute gout flare were more likely to have a history of gout (50.2% vs. 3.2%, p < .001) and alcohol use (3.4% vs. 1.2%, p = .026) compared with the control group. Admission serum creatinine was significantly higher in the colchicine group than the control group (1.73 vs. 1.44 mg/dl, p < .001). There was no significant difference in the change from admission serum creatinine to discharge serum creatinine between groups. Among the patient encounters with a uric acid level checked during the admission, uric acid was significantly higher in the colchicine group compared with the control group (10.63 vs. 8.26 mg/dl, p = .002). There was no significant difference in admission B‐type natriuretic peptide level between groups.\nCONSORT flow diagram. The CONSORT flow diagram is shown for the cohort. ESRD, end stage renal disease; HF, heart failure; IV, intravenous; LVAD, left ventricular assist device\nBaseline Characteristics\na\n\n\nAbbreviations: ACEi, angiotensin converting enzyme inhibitor; ARB, angiotensin receptor blocker; ARNi, angiotensin receptor neprilysin inhibitor; BNP, B‐type natriuretic peptide; CAD, coronary artery disease; HFmrEF, heart failure with mid‐range ejection fraction; HFpEF, heart failure with preserved ejection fraction; HFrEF, heart failure with reduced ejection fraction; MRA, mineralocorticoid receptor antagonist; Non‐DHP CCB, nondihydropyridine calcium channel blocker; SCr, serum creatinine.\nPlus−minus values are means ± SD.\nAdmission BNP was only available for 846 patients (189 patients in the colchicine group and 657 patients in the control group).\nUric acid was only available for 98 patients (71 patients in the colchicine group and 27 patients in the control group). If there were multiple uric acid levels during admission, the first level was recorded.", "A total of 58 patients (5.5%) died during admission, five in the colchicine group and 53 in the control group (2.1% vs. 
6.5%, p = .009), that is, a lower in‐hospital all‐cause mortality in the colchicine group (Table 2). A subgroup analysis was conducted to assess outcomes with the colchicine group stratified based on prior documented history of gout versus de novo gout presentation. In‐hospital all‐cause mortality was not significantly different between patients with a new gout diagnosis compared to those with a prior history of gout (3.4% vs. 0.8%, p = .213).\nPrimary and secondary outcomes\nAbbreviations: LOS, length of stay; SD, standard deviation.", "The 30‐day readmissions were not significantly different between the colchicine and control groups (21.5% vs. 19.5%, p = .495) (Table 2). In the subgroup analysis, 30‐day readmissions remained similar when comparing patients with de novo gout to those with a prior history of gout (22.9% vs. 20.2%, p = .611). Mean LOS was significantly increased in the colchicine group compared to the control group (9.93 vs. 7.96 days, p < .001) (Table 2). In the subgroup analysis, LOS was significantly increased in the de novo presentation of gout group compared to the control group (10.52 vs. 7.96 days, p < .001) and in the prior history of gout group compared to the control group (9.35 vs. 7.96 days, p = .006). There was no significant difference in LOS between those with a first diagnosis of gout and those with a prior history of gout (p = .272). In a post hoc analysis, in‐hospital CV mortality censored at 4 weeks was significantly lower in the colchicine group than the control group (0.89% vs. 3.93%, p = .02).", "Reverse Kaplan−Meier curves stratified by in‐hospital colchicine and censored at 4 weeks are shown in Figure 2. Inpatient colchicine was associated with reduced rates of both in‐hospital all‐cause mortality (log rank p = .00026) and in‐hospital CV mortality (log rank p = .0063) compared with the control group. In a Cox proportional hazards model adjusted for age, in‐hospital colchicine use was associated with improved survival to discharge (hazard ratio [HR] 0.163, 95% confidence interval [CI] 0.051−0.525, p = .002) and a decreased rate of in‐hospital CV mortality (HR 0.184, 95% CI 0.044−0.770, p = .021). Reverse Kaplan–Meier curves stratified by home colchicine use were also generated (Figure 3). Home colchicine use was associated with a reduced rate of in‐hospital all‐cause mortality (log rank p = .037) but no significant difference in the rate of in‐hospital CV mortality (log rank p = .14).\nInpatient all‐cause and cardiovascular (CV) death by inpatient colchicine use. Reverse Kaplan−Meier curves for inpatient all‐cause death and inpatient CV death stratified on inpatient colchicine use are shown (p = .00026 and p = .0063, respectively)\nInpatient all‐cause and cardiovascular (CV) death by home colchicine use. Reverse Kaplan−Meier curves for inpatient all‐cause death and inpatient CV death stratified on home colchicine use are shown (p = .037 and p = .14, respectively)", "A multivariate logistic regression model was performed to evaluate associations of other covariates with in‐hospital all‐cause mortality. These covariates included in‐hospital colchicine use, home beta‐blocker use, inotrope use, age, and diabetes mellitus. 
In‐hospital colchicine use given for a gout flare was significantly associated with reduced in‐hospital all‐cause mortality (OR 0.322, 95% CI 0.105−0.779, p = .02) after adjustment for home beta‐blocker use, inotrope use, age, and diabetes mellitus (p < .05 for all in the model).", "In this retrospective cohort study, we evaluated the use of colchicine for an acute gout flare during hospitalization for acute decompensated HF. We found that colchicine use during acute HF exacerbation was associated with decreased in‐hospital all‐cause mortality and in‐hospital CV mortality, as well as increased hospital LOS. The incidence of acute gout in this study population was 22.7% of all patient encounters. Although the rate of acute gout while receiving IV diuretics during hospitalization for acute HF is not extensively characterized in the literature, a 2017 study of patients treated with IV bumetanide during hospitalization for acute HF found the incidence of acute gout to be 13.6% over the course of the study.\n12\n Our findings highlight the relative high prevalence of acute gout during treatment with IV diuretics for HF exacerbation.\nColchicine is a potent anti‐inflammatory and antiproliferative drug that has been used for both acute gout treatment as well as prevention.\n10\n Several studies have reported the safety and beneficial outcomes of colchicine in other cardiac conditions. A recent meta‐analysis of patients with a range of CV disease states evaluated the impact of colchicine on a composite CV outcome, which consisted of the primary outcome of each individual trial and included mortality, acute coronary syndrome, MI, cerebrovascular accident, cardiac arrest, or revascularization. The meta‐analysis found that colchicine use was associated with a 56% decrease in the composite CV outcome (p = .0004), as well as a nonsignificant trend toward reduction in all‐cause mortality (relative risk 0.50, p  = .08).\n11\n Several additional retrospective studies have also demonstrated favorable CV outcomes with the use of colchicine.\n13\n, \n14\n\n\nRecently, prospective, randomized, and placebo‐controlled trials have also examined the potential benefit of colchicine in CV patients. The Colchicine Cardiovascular Outcomes Trial (COLCOT) evaluated the use of colchicine within 30 days after MI, and the Low Dose Colchicine 2 Trial (LoDoCo2) evaluated colchicine in patients with stable CAD. In COLCOT and LoDoCo2, colchicine resulted in a significant reduction in the primary outcomes, which were a composite of CV death and other clinical outcomes in both trials.\n25\n, \n26\n Additionally, a recent systematic review and meta‐analysis assessed the impact of colchicine in patients with CAD in 13 randomized trials, which included a total of 13 125 patients.\n15\n The study found that treatment with colchicine significantly reduced the risk of MI as well as stroke or transient ischemic attack when compared to placebo or standard care. However, colchicine was not associated with a significant reduction in all‐cause or CV mortality.\nWhile many of the existing studies have evaluated colchicine use in patients with CAD or prior MI, to our knowledge, only one trial to date has evaluated colchicine's effects in stable HF.\n16\n Investigators randomized stable symptomatic HF patients to receive either colchicine 0.5 mg twice daily or placebo for 6 months. 
The primary end point, which was the proportion of patients achieving at least one‐grade improvement in New York Heart Association (NYHA) functional status classification, was not significantly different between the two groups (p = .365). Colchicine use was associated with a significant decrease in measured inflammatory biomarkers including high‐sensitivity C‐reactive protein and interleukin‐6. There are several notable differences between the aforementioned study and ours. First, investigators excluded patients hospitalized within the previous 3 months, whereas our population consisted exclusively of patients admitted for an acute HF exacerbation. Second, patients were given colchicine regardless of gout status, whereas in our study, patients who were given colchicine received it due to an acute gout flare. Third, investigators only included patients with left ventricular ejection fraction ≤40%, in contrast to our study, which included HF patients regardless of ejection fraction.\nPrior studies have also explored the impact of other gout therapies on HF outcomes. Hyperuricemia has been associated with an increased incidence of HF as well as increased mortality among those with HF. Therefore, uric acid lowering therapies have been considered potential medication candidates for improving HF outcomes. Initial studies demonstrated that allopurinol, a xanthine oxidase inhibitor, was associated with improved endothelial function in HF patients.\n17\n Subsequently, the Effects of Xanthine Oxidase Inhibition on Hyperuricemic Heart Failure Patients (EXACT‐HF) study randomized patients (with primarily NYHA Class II and III HFrEF and hyperuricemia) to allopurinol (target dose 600 mg daily) versus placebo for 24 weeks.\n18\n The primary outcome, a composite clinical end point based on several factors including survival, worsening HF, and patient global assessment, was not significantly different between the allopurinol and placebo groups. While this prior study failed to demonstrate the efficacy of uric acid lowering with allopurinol on HF outcomes, colchicine has an important distinction related to its anti‐inflammatory properties. This anti‐inflammatory effect is what we believe may underlie the positive findings in our study. Additionally, the EXACT‐HF study enrolled patients in the outpatient setting, while this study focused specifically on patients hospitalized with acute decompensated HF.\nThe mechanistic underpinnings of the potential beneficial effects of colchicine on CV events may involve its anti‐inflammatory properties on the CV system.\n19\n It has been postulated that activated neutrophils are present in atherosclerotic plaques and play a key role in the transformation of a stable to an unstable plaque.\n19\n Colchicine's anti‐inflammatory effects and inhibition of neutrophil chemotaxis and activation may play a role in stabilizing plaques and preventing MI or ischemic strokes. Hitherto, the potential utility of colchicine in acute decompensated HF has not been considered. Thus, the underlying mechanistic pathways that could explain the potential benefits of colchicine in the HF population are largely unknown but may be multifactorial. 
It has been well established that an acute HF admission is associated with increased short term mortality as well as other adverse CV events following an index admission.\n20\n Accordingly, a worsening HF event has increasingly been recognized as an end point for enrollment in clinical trials.\n21\n, \n22\n, \n23\n In this sense, acutely decompensated HF represents a distinct vulnerable phenotypic state characterized by multiple neurohormonal perturbations and a heightened proinflammatory milieu.\n24\n It is tempting to surmise that our findings demonstrating the favorable effects of colchicine on HF mortality could potentially be explained by the modulating influence of the anti‐inflammatory effects of colchicine on this distinct vulnerable phenotypic phase in the HF trajectory. If indeed our findings are validated, then consideration could be made for designing clinical trials that incorporate anti‐inflammatory agents such as colchicine targeting this vulnerable phase of worsening HF.", "There are some limitations to this study that should be acknowledged. Limitations due to the retrospective design include potential for missing data points, inability to assess diuretic doses received during hospitalization, and reliance on ICD 9 and ICD 10 codes for initial diagnosis of acute HF exacerbation. Readmissions at outside hospitals that are not linked with the institution's electronic medical record may not have been identified. Additionally, in this study the intervention group consisted of patients given colchicine for an acute gout flare, while the control group included those with neither a gout flare nor colchicine treatment. At our institution, most HF patients with acute gout are treated with colchicine, so it would not have been possible to assemble a sufficiently powered control group of acute HF patients who had a gout flare without colchicine treatment. It is possible that acute gout could be a surrogate marker for more effective diuresis or renal dysfunction. However, the existing literature demonstrating worsened outcomes associated with gout and hyperuricemia suggests that the presence of gout in the intervention group would skew the results toward the null hypothesis. As this was an observational study, it is possible that there could still be unmeasured confounders even after statistical adjustment.", "This study demonstrated that colchicine use for acute gout flare in hospitalized patients with a HF exacerbation was associated with decreased in‐hospital all‐cause mortality and in‐hospital CV mortality compared with the control group. The use of colchicine was also associated with a longer LOS but similar 30‐day readmissions. Additional large multicenter retrospective and prospective randomized studies are needed to more fully understand the association of colchicine use with outcomes in patients undergoing treatment for HF exacerbations and explore its role as a potential treatment option in this population.", "The authors declare no conflicts of interest." ]
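The survival analysis described in the statistical-analysis text above (reverse Kaplan−Meier curves stratified by colchicine treatment, log-rank comparison, and Cox proportional hazards regression censored at 4 weeks) can be illustrated with a minimal Python sketch. The study itself used R and SPSS, so this is not the authors' code; the data file and column names (`followup_days`, `died`, `colchicine`, `age`) are hypothetical placeholders assumed only for illustration.

```python
# Minimal sketch of the reported survival analysis (illustrative only; the
# study used R and SPSS). Assumes a hypothetical DataFrame with:
#   followup_days : days from admission to in-hospital death or censoring
#   died          : 1 if in-hospital death occurred, 0 otherwise
#   colchicine    : 1 if colchicine was given for an acute gout flare, 0 otherwise
#   age           : age in years
import matplotlib.pyplot as plt
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("hf_gout_cohort.csv")  # hypothetical file name

# Censor at 4 weeks (28 days), as described in the methods
df["time"] = df["followup_days"].clip(upper=28)
df["event"] = ((df["died"] == 1) & (df["followup_days"] <= 28)).astype(int)

# Kaplan-Meier estimates stratified by in-hospital colchicine use
# (the paper displays these as reverse curves, i.e., 1 - S(t))
kmf = KaplanMeierFitter()
for label, grp in df.groupby("colchicine"):
    kmf.fit(grp["time"], event_observed=grp["event"], label=f"colchicine={label}")
    kmf.plot_survival_function()
plt.show()

# Log-rank test between the two strata
treated = df[df["colchicine"] == 1]
control = df[df["colchicine"] == 0]
result = logrank_test(treated["time"], control["time"],
                      event_observed_A=treated["event"],
                      event_observed_B=control["event"])
print("log-rank p =", result.p_value)

# Cox proportional hazards model adjusted for age
cph = CoxPHFitter()
cph.fit(df[["time", "event", "colchicine", "age"]],
        duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```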
[ null, "methods", null, null, null, "results", null, null, null, null, null, "discussion", null, "conclusions", "COI-statement" ]
[ "colchicine", "gout", "heart failure", "in‐hospital mortality" ]
INTRODUCTION: Gout is a common comorbidity in heart failure (HF) patients and is the result of monosodium urate crystal deposition in joints and periarticular tissues. 1 Gout is associated with significant morbidity, mortality, and healthcare costs. 1 , 2 , 3 Diuretics are known to precipitate hyperuricemia and increase the risk of gout flares through mechanisms related to decreased uric acid secretion and increased uric acid reabsorption. 4 , 5 Studies have estimated the prevalence of gout in HF patients to be approximately 16%−40%, and one study found that 56% of hospitalized HF patients had hyperuricemia. 6 , 7 , 8 , 9 The therapeutic agents commonly used for an acute gout flare include colchicine, steroids, and nonsteroidal anti‐inflammatory drugs (NSAIDs). However, steroids and NSAIDs are often avoided in HF because of the legitimate concerns of fluid retention and HF exacerbation. 10 In addition to its role in gout, colchicine's anti‐inflammatory effects are also highly beneficial in the treatment and prevention of other cardiac conditions such as pericarditis. 11 Colchicine has also recently shown broader cardiovascular (CV) outcomes benefit in high‐risk patients, particularly those with coronary artery disease (CAD) or history of myocardial infarction (MI). However, the impact of colchicine use during gout flares on outcomes in patients with acutely decompensated HF is unknown. The purpose of this study was to assess clinical outcomes in patients treated for an acute HF exacerbation and receiving colchicine for an acute gout flare. METHODS: Study design This single center, retrospective cohort study compared clinical outcomes in those receiving colchicine for the treatment of an acute gout flare versus those without a gout flare among patients with an acute HF exacerbation at an academic medical center. Adult patients (age ≥ 18 years) admitted between March 2011 and February 2020 with an acute HF exacerbation who received initial intravenous (IV) diuretics were eligible for inclusion. Patients were identified using ICD 9 and ICD 10 codes for acute HF exacerbation. Patients treated with colchicine during the admission for a documented acute gout flare were included in the treatment group, while those not given colchicine during the admission were presumed not to have had a gout flare and were included in the control group. Patients were excluded if they had end‐stage renal disease on hemodialysis, any history of transplantation or underwent transplantation during the admission, and any history of left ventricular assist device (LVAD) or LVAD implantation during admission. Patients receiving colchicine for indications other than acute gout or admitted for reasons other than acute decompensated HF were also excluded. All patients were included for analysis on an intention‐to‐treat basis. The study was approved by the Institutional Review Board before data collection. Study covariates were collected from the electronic medical record and data warehouse of medical records. This single center, retrospective cohort study compared clinical outcomes in those receiving colchicine for the treatment of an acute gout flare versus those without a gout flare among patients with an acute HF exacerbation at an academic medical center. Adult patients (age ≥ 18 years) admitted between March 2011 and February 2020 with an acute HF exacerbation who received initial intravenous (IV) diuretics were eligible for inclusion. Patients were identified using ICD 9 and ICD 10 codes for acute HF exacerbation. 
Patients treated with colchicine during the admission for a documented acute gout flare were included in the treatment group, while those not given colchicine during the admission were presumed not to have had a gout flare and were included in the control group. Patients were excluded if they had end‐stage renal disease on hemodialysis, any history of transplantation or underwent transplantation during the admission, and any history of left ventricular assist device (LVAD) or LVAD implantation during admission. Patients receiving colchicine for indications other than acute gout or admitted for reasons other than acute decompensated HF were also excluded. All patients were included for analysis on an intention‐to‐treat basis. The study was approved by the Institutional Review Board before data collection. Study covariates were collected from the electronic medical record and data warehouse of medical records. Outcomes The primary outcome was in‐hospital all‐cause mortality. Secondary outcomes included hospital length of stay (LOS), 30‐day readmissions, and time to death. This study also compared the primary and secondary outcomes between patients with a prior history of gout, a first diagnosis of gout, and the control group. A post hoc analysis was completed to evaluate in‐hospital CV mortality and time to CV death. Baseline characteristics included age, gender, and ejection fraction (on the most recent imaging including echocardiogram, nuclear stress test, or angiography), comorbidities, and home HF and gout medications. The primary outcome was in‐hospital all‐cause mortality. Secondary outcomes included hospital length of stay (LOS), 30‐day readmissions, and time to death. This study also compared the primary and secondary outcomes between patients with a prior history of gout, a first diagnosis of gout, and the control group. A post hoc analysis was completed to evaluate in‐hospital CV mortality and time to CV death. Baseline characteristics included age, gender, and ejection fraction (on the most recent imaging including echocardiogram, nuclear stress test, or angiography), comorbidities, and home HF and gout medications. Statistical analysis The primary outcome of in‐hospital all‐cause mortality and secondary outcome of 30‐day readmissions were compared using the Pearson χ 2 test, while LOS was assessed using the Mann−Whitney U test. Reverse Kaplan−Meier curves stratified by colchicine treatment were constructed to evaluate the difference in survival. Time to death and time to CV death during the hospital admission were assessed using bivariable Cox proportional hazards regression with censoring at 4 weeks. The proportional hazards assumption in this case was easily verified by inspection of the Kaplan−Meier survival curves. Other baseline demographics and secondary outcomes compared categorical variables with Pearson χ 2 tests, while continuous variables were compared using unpaired two‐sample t‐tests/analysis of variance (ANOVA) and the Mann−Whitney U/Kruskal−Wallis tests. Multivariable logistic regression was performed to assess the association of in‐hospital colchicine with survival to discharge with adjustment for key covariates. Multivariable Cox proportional regression was used to assess differences in time to death with adjustment for age. The α value for all statistical tests was set at .05. R and SPSS statistical software were used in the analysis. 
The primary outcome of in‐hospital all‐cause mortality and secondary outcome of 30‐day readmissions were compared using the Pearson χ 2 test, while LOS was assessed using the Mann−Whitney U test. Reverse Kaplan−Meier curves stratified by colchicine treatment were constructed to evaluate the difference in survival. Time to death and time to CV death during the hospital admission were assessed using bivariable Cox proportional hazards regression with censoring at 4 weeks. The proportional hazards assumption in this case was easily verified by inspection of the Kaplan−Meier survival curves. Other baseline demographics and secondary outcomes compared categorical variables with Pearson χ 2 tests, while continuous variables were compared using unpaired two‐sample t‐tests/analysis of variance (ANOVA) and the Mann−Whitney U/Kruskal−Wallis tests. Multivariable logistic regression was performed to assess the association of in‐hospital colchicine with survival to discharge with adjustment for key covariates. Multivariable Cox proportional regression was used to assess differences in time to death with adjustment for age. The α value for all statistical tests was set at .05. R and SPSS statistical software were used in the analysis. Study design: This single center, retrospective cohort study compared clinical outcomes in those receiving colchicine for the treatment of an acute gout flare versus those without a gout flare among patients with an acute HF exacerbation at an academic medical center. Adult patients (age ≥ 18 years) admitted between March 2011 and February 2020 with an acute HF exacerbation who received initial intravenous (IV) diuretics were eligible for inclusion. Patients were identified using ICD 9 and ICD 10 codes for acute HF exacerbation. Patients treated with colchicine during the admission for a documented acute gout flare were included in the treatment group, while those not given colchicine during the admission were presumed not to have had a gout flare and were included in the control group. Patients were excluded if they had end‐stage renal disease on hemodialysis, any history of transplantation or underwent transplantation during the admission, and any history of left ventricular assist device (LVAD) or LVAD implantation during admission. Patients receiving colchicine for indications other than acute gout or admitted for reasons other than acute decompensated HF were also excluded. All patients were included for analysis on an intention‐to‐treat basis. The study was approved by the Institutional Review Board before data collection. Study covariates were collected from the electronic medical record and data warehouse of medical records. Outcomes: The primary outcome was in‐hospital all‐cause mortality. Secondary outcomes included hospital length of stay (LOS), 30‐day readmissions, and time to death. This study also compared the primary and secondary outcomes between patients with a prior history of gout, a first diagnosis of gout, and the control group. A post hoc analysis was completed to evaluate in‐hospital CV mortality and time to CV death. Baseline characteristics included age, gender, and ejection fraction (on the most recent imaging including echocardiogram, nuclear stress test, or angiography), comorbidities, and home HF and gout medications. Statistical analysis: The primary outcome of in‐hospital all‐cause mortality and secondary outcome of 30‐day readmissions were compared using the Pearson χ 2 test, while LOS was assessed using the Mann−Whitney U test. 
Reverse Kaplan−Meier curves stratified by colchicine treatment were constructed to evaluate the difference in survival. Time to death and time to CV death during the hospital admission were assessed using bivariable Cox proportional hazards regression with censoring at 4 weeks. The proportional hazards assumption in this case was easily verified by inspection of the Kaplan−Meier survival curves. Other baseline demographics and secondary outcomes compared categorical variables with Pearson χ 2 tests, while continuous variables were compared using unpaired two‐sample t‐tests/analysis of variance (ANOVA) and the Mann−Whitney U/Kruskal−Wallis tests. Multivariable logistic regression was performed to assess the association of in‐hospital colchicine with survival to discharge with adjustment for key covariates. Multivariable Cox proportional regression was used to assess differences in time to death with adjustment for age. The α value for all statistical tests was set at .05. R and SPSS statistical software were used in the analysis. RESULTS: Baseline characteristics A total of 5109 patient encounters had an ICD 9 or ICD 10 code for acute HF exacerbation during the study period (Figure 1). The final cohort included 1047 patient encounters after exclusions as indicated in Figure 1 and was then stratified into colchicine and control groups as determined a priori. Baseline characteristics were similar between groups, with the exception of age, sex, and comorbidities of gout and alcohol use (Table 1). Patients in the colchicine group were more likely to be male and younger compared to the control group. A majority of the patients had HF with reduced ejection fraction (54.1%), and there was no significant difference in HF classifications between groups. Patients who received colchicine for an acute gout flare were more likely to have a history of gout (50.2% vs. 3.2%, p < .001) and alcohol use (3.4% vs. 1.2%, p = .026) compared with the control group. Admission serum creatinine was significantly higher in the colchicine group than the control group (1.73 vs. 1.44 mg/dl, p < .001). There was no significant difference in the change from admission serum creatinine to discharge serum creatinine between groups. Among the patient encounters with a uric acid level checked during the admission, uric acid was significantly higher in the colchicine group compared with the control group (10.63 vs. 8.26 mg/dl, p = .002). There was no significant difference in admission B‐type natriuretic peptide level between groups. CONSORT flow diagram. The CONSORT flow diagram is shown for the cohort. ESRD, end stage renal disease; HF, heart failure; IV, intravenous; LVAD, left ventricular assist device Baseline Characteristics a Abbreviations: ACEi, angiotensin converting enzyme inhibitor; ARB, angiotensin receptor blocker; ARNi, angiotensin receptor neprilysin inhibitor; BNP, B‐type natriuretic peptide; CAD, coronary artery disease; HFmrEF, heart failure with mid‐range ejection fraction; HFpEF, heart failure with preserved ejection fraction; HFrEF, heart failure with reduced ejection fraction; MRA, mineralocorticoid receptor antagonist; Non‐DHP CCB, nondihydropyridine calcium channel blocker; SCr, serum creatinine. Plus−minus values are means ± SD. Admission BNP was only available for 846 patients (189 patients in the colchicine group and 657 patients in the control group). Uric acid was only available for 98 patients (71 patients in the colchicine group and 27 patients in the control group). 
If there were multiple uric acid levels during admission, the first level was recorded. A total of 5109 patient encounters had an ICD 9 or ICD 10 code for acute HF exacerbation during the study period (Figure 1). The final cohort included 1047 patient encounters after exclusions as indicated in Figure 1 and was then stratified into colchicine and control groups as determined a priori. Baseline characteristics were similar between groups, with the exception of age, sex, and comorbidities of gout and alcohol use (Table 1). Patients in the colchicine group were more likely to be male and younger compared to the control group. A majority of the patients had HF with reduced ejection fraction (54.1%), and there was no significant difference in HF classifications between groups. Patients who received colchicine for an acute gout flare were more likely to have a history of gout (50.2% vs. 3.2%, p < .001) and alcohol use (3.4% vs. 1.2%, p = .026) compared with the control group. Admission serum creatinine was significantly higher in the colchicine group than the control group (1.73 vs. 1.44 mg/dl, p < .001). There was no significant difference in the change from admission serum creatinine to discharge serum creatinine between groups. Among the patient encounters with a uric acid level checked during the admission, uric acid was significantly higher in the colchicine group compared with the control group (10.63 vs. 8.26 mg/dl, p = .002). There was no significant difference in admission B‐type natriuretic peptide level between groups. CONSORT flow diagram. The CONSORT flow diagram is shown for the cohort. ESRD, end stage renal disease; HF, heart failure; IV, intravenous; LVAD, left ventricular assist device Baseline Characteristics a Abbreviations: ACEi, angiotensin converting enzyme inhibitor; ARB, angiotensin receptor blocker; ARNi, angiotensin receptor neprilysin inhibitor; BNP, B‐type natriuretic peptide; CAD, coronary artery disease; HFmrEF, heart failure with mid‐range ejection fraction; HFpEF, heart failure with preserved ejection fraction; HFrEF, heart failure with reduced ejection fraction; MRA, mineralocorticoid receptor antagonist; Non‐DHP CCB, nondihydropyridine calcium channel blocker; SCr, serum creatinine. Plus−minus values are means ± SD. Admission BNP was only available for 846 patients (189 patients in the colchicine group and 657 patients in the control group). Uric acid was only available for 98 patients (71 patients in the colchicine group and 27 patients in the control group). If there were multiple uric acid levels during admission, the first level was recorded. Primary outcome analysis A total of 58 patients (5.5%) died during admission, five in the colchicine group and 53 in the control group (2.1% vs. 6.5%, p = .009), that is, a lower in‐hospital all‐cause mortality in the colchicine group (Table 2). A subgroup analysis was conducted to assess outcomes with the colchicine group stratified based on prior documented history of gout versus de novo gout presentation. In‐hospital all‐cause mortality was not significantly different between patients with a new gout diagnosis compared to those with a prior history of gout (3.4% vs. 0.8%, p = .213). Primary and secondary outcomes Abbreviations: LOS, length of stay; SD, standard deviation. A total of 58 patients (5.5%) died during admission, five in the colchicine group and 53 in the control group (2.1% vs. 6.5%, p = .009), that is, a lower in‐hospital all‐cause mortality in the colchicine group (Table 2). 
A subgroup analysis was conducted to assess outcomes with the colchicine group stratified based on prior documented history of gout versus de novo gout presentation. In‐hospital all‐cause mortality was not significantly different between patients with a new gout diagnosis compared to those with a prior history of gout (3.4% vs. 0.8%, p = .213). Primary and secondary outcomes Abbreviations: LOS, length of stay; SD, standard deviation. Secondary outcomes analysis The 30‐day readmissions were not significantly different between the colchicine and control groups (21.5% vs. 19.5%, p = .495) (Table 2). In the subgroup analysis, 30‐day readmissions remained similar when comparing patients with de novo gout to those with a prior history of gout (22.9% vs. 20.2%, p = .611). Mean LOS was significantly increased in the colchicine group compared to the control group (9.93 vs. 7.96 days, p < .001) (Table 2). In the subgroup analysis, LOS was significantly increased in the de novo presentation of gout group compared to the control group (10.52 vs. 7.96 days, p < .001) and in the prior history of gout group compared to the control group (9.35 vs. 7.96 days, p = .006). There was no significant difference in LOS between those with a first diagnosis of gout and those with a prior history of gout (p = .272). In a post hoc analysis, in‐hospital CV mortality censored at 4 weeks was significantly lower in the colchicine group than the control group (0.89% vs. 3.93%, p = .02). The 30‐day readmissions were not significantly different between the colchicine and control groups (21.5% vs. 19.5%, p = .495) (Table 2). In the subgroup analysis, 30‐day readmissions remained similar when comparing patients with de novo gout to those with a prior history of gout (22.9% vs. 20.2%, p = .611). Mean LOS was significantly increased in the colchicine group compared to the control group (9.93 vs. 7.96 days, p < .001) (Table 2). In the subgroup analysis, LOS was significantly increased in the de novo presentation of gout group compared to the control group (10.52 vs. 7.96 days, p < .001) and in the prior history of gout group compared to the control group (9.35 vs. 7.96 days, p = .006). There was no significant difference in LOS between those with a first diagnosis of gout and those with a prior history of gout (p = .272). In a post hoc analysis, in‐hospital CV mortality censored at 4 weeks was significantly lower in the colchicine group than the control group (0.89% vs. 3.93%, p = .02). Stratified Kaplan−Meier analysis and Cox proportional hazards regression during hospital admission Reverse Kaplan−Meier curves stratified by in‐hospital colchicine and censored at 4 weeks are shown in Figure 2. Inpatient colchicine was associated with reduced rates of both in‐hospital all‐cause mortality (log rank p = .00026) and in‐hospital CV mortality (log rank p = .0063) compared with the control group. In a Cox proportional hazards model adjusted for age, in‐hospital colchicine use was associated with improved survival to discharge (hazard ratio [HR] 0.163, 95% confidence interval [CI] 0.051−0.525, p = .002) and a decreased rate of in‐hospital CV mortality (HR 0.184, 95% CI 0.044−0.770, p = .021). Reverse Kaplan–Meier curves stratified by home colchicine use were also generated (Figure 3). Home colchicine use was associated with a reduced rate of in‐hospital all‐cause mortality (log rank p = .037) but no significant difference in the rate of in‐hospital CV mortality (log rank p = .14). 
Inpatient all‐cause and cardiovascular (CV) death by inpatient colchicine use. Reverse Kaplan−Meier curves for inpatient all‐cause death and inpatient CV death stratified on inpatient colchicine use are shown (p = .00026 and p = .0063, respectively) Inpatient all‐cause and cardiovascular (CV) death by home colchicine use. Reverse Kaplan−Meier curves for inpatient all‐cause death and inpatient CV death stratified on home colchicine use are shown (p = .037 and p = .14, respectively)
Multivariate logistic regression analysis: A multivariate logistic regression model was performed to evaluate associations of other covariates with in‐hospital all‐cause mortality. These covariates included in‐hospital colchicine use, home beta‐blocker use, inotrope use, age, and diabetes mellitus. In‐hospital colchicine use given for a gout flare was significantly associated with reduced in‐hospital all‐cause mortality (OR 0.322, 95% CI 0.105−0.779, p = .02) after adjustment for home beta‐blocker use, inotrope use, age, and diabetes mellitus (p < .05 for all in the model).
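A model of the kind described above could be fitted as sketched below with statsmodels; the formula and column names (died, colchicine, home_beta_blocker, inotrope, age, diabetes) are illustrative assumptions, not the study's actual variable names.

```python
# Sketch of a multivariable logistic regression for in-hospital all-cause mortality.
# Odds ratios are obtained by exponentiating the fitted log-odds coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hf_cohort.csv")  # hypothetical analysis dataset

model = smf.logit(
    "died ~ colchicine + home_beta_blocker + inotrope + age + diabetes",
    data=df,
).fit()

odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())      # 95% CIs on the odds-ratio scale
summary = pd.concat([odds_ratios, conf_int, model.pvalues], axis=1)
summary.columns = ["OR", "2.5%", "97.5%", "p"]
print(summary)
```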
DISCUSSION: In this retrospective cohort study, we evaluated the use of colchicine for an acute gout flare during hospitalization for acute decompensated HF. We found that colchicine use during acute HF exacerbation was associated with decreased in‐hospital all‐cause mortality and in‐hospital CV mortality, as well as increased hospital LOS. The incidence of acute gout in this study population was 22.7% of all patient encounters. Although the rate of acute gout while receiving IV diuretics during hospitalization for acute HF is not extensively characterized in the literature, a 2017 study of patients treated with IV bumetanide during hospitalization for acute HF found the incidence of acute gout to be 13.6% over the course of the study. 12 Our findings highlight the relative high prevalence of acute gout during treatment with IV diuretics for HF exacerbation. Colchicine is a potent anti‐inflammatory and antiproliferative drug that has been used for both acute gout treatment as well as prevention. 10 Several studies have reported the safety and beneficial outcomes of colchicine in other cardiac conditions.
A recent meta‐analysis of patients with a range of CV disease states evaluated the impact of colchicine on a composite CV outcome, which consisted of the primary outcome of each individual trial and included mortality, acute coronary syndrome, MI, cerebrovascular accident, cardiac arrest, or revascularization. The meta‐analysis found that colchicine use was associated with a 56% decrease in the composite CV outcome (p = .0004), as well as a nonsignificant trend toward reduction in all‐cause mortality (relative risk 0.50, p  = .08). 11 Several additional retrospective studies have also demonstrated favorable CV outcomes with the use of colchicine. 13 , 14 Recently, prospective, randomized, and placebo‐controlled trials have also examined the potential benefit of colchicine in CV patients. The Colchicine Cardiovascular Outcomes Trial (COLCOT) evaluated the use of colchicine within 30 days after MI, and the Low Dose Colchicine 2 Trial (LoDoCo2) evaluated colchicine in patients with stable CAD. In COLCOT and LoDoCo2, colchicine resulted in a significant reduction in the primary outcomes, which were a composite of CV death and other clinical outcomes in both trials. 25 , 26 Additionally, a recent systematic review and meta‐analysis assessed the impact of colchicine in patients with CAD in 13 randomized trials, which included a total of 13 125 patients. 15 The study found that treatment with colchicine significantly reduced the risk of MI as well as stroke or transient ischemic attack when compared to placebo or standard care. However, colchicine was not associated with a significant reduction in all‐cause or CV mortality. While many of the existing studies have evaluated colchicine use in patients with CAD or prior MI, to our knowledge, only one trial to date has evaluated colchicine's effects in stable HF. 16 Investigators randomized stable symptomatic HF patients to receive either colchicine 0.5 mg twice daily or placebo for 6 months. The primary end point, which was the proportion of patients achieving at least one‐grade improvement in New York Heart Association (NYHA) functional status classification, was not significantly different between the two groups (p = .365). Colchicine use was associated with a significant decrease in measured inflammatory biomarkers including high sensitivity C‐reactive protein and interleukin‐6. There are some key differences notable in the aforementioned study compared with ours. First, investigators excluded patients hospitalized within the previous 3 months, whereas our population was comprised exclusively of patients admitted for an acute HF exacerbation. Second, patients were given colchicine regardless of gout status, whereas in our study, patients who were given colchicine received it due to an acute gout flare. Third, investigators only included patients with left ventricular ejection fraction ≤40%, in contrast to our study which included HF patients regardless of ejection fraction. Prior studies have also explored the impact of other gout therapies on HF outcomes. Hyperuricemia has been associated with an increased incidence of HF as well as increased mortality among those with HF. Therefore, uric acid lowering therapies have been considered potential medication candidates for improving HF outcomes. Initial studies demonstrated that allopurinol, a xanthine oxidase inhibitor, was associated with improved endothelial function in HF patients. 
17 Subsequently, the Effects of Xanthine Oxidase Inhibition on Hyperuricemic Heart Failure Patients (EXACT‐HF) study randomized patients (with primarily NYHA Class II and III HFrEF and hyperuricemia) to allopurinol (target dose 600 mg daily) versus placebo for 24 weeks. 18 The primary outcome, a composite clinical end point based on several factors including survival, worsening HF, and patient global assessment, was not significantly different between the allopurinol and placebo groups. While this prior study failed to demonstrate the efficacy of uric acid lowering with allopurinol on HF outcomes, colchicine has an important distinction related to its anti‐inflammatory properties. This anti‐inflammatory effect is what we believe may underlie the positive findings in our study. Additionally, the EXACT‐HF study enrolled patients in the outpatient setting, while this study focused specifically on patients hospitalized with acute decompensated HF. The mechanistic underpinnings of the potential beneficial effects of colchicine on CV events may involve its anti‐inflammatory properties on the CV system. 19 It has been postulated that activated neutrophils are present in atherosclerotic plaques and play a key role in the transformation of a stable to an unstable plaque. 19 Colchicine's anti‐inflammatory effects and inhibition of neutrophil chemotaxis and activation may play a role in stabilizing plaques and preventing MI or ischemic strokes. Hitherto, the potential utility of colchicine in acute decompensated HF has not been considered. Thus, the underlying mechanistic pathways that could explain the potential benefits of colchicine in the HF population are largely unknown but may be multifactorial. It has been well established that an acute HF admission is associated with increased short term mortality as well as other adverse CV events following an index admission. 20 Accordingly, a worsening HF event has increasingly been recognized as an end point for enrollment in clinical trials. 21 , 22 , 23 In this sense, acutely decompensated HF represents a distinct vulnerable phenotypic state characterized by multiple neurohormonal perturbations and a heightened proinflammatory milieu. 24 It is tempting to surmise that our findings demonstrating the favorable effects of colchicine on HF mortality could potentially be explained by the modulating influence of the anti‐inflammatory effects of colchicine on this distinct vulnerable phenotypic phase in the HF trajectory. If indeed our findings are validated, then consideration could be made for designing clinical trials that incorporate anti‐inflammatory agents such as colchicine targeting this vulnerable phase of worsening HF. LIMITATIONS: There are some limitations to this study that should be acknowledged. Limitations due to the retrospective design include potential for missing data points, inability to assess diuretic doses received during hospitalization, and reliance on ICD 9 and ICD 10 codes for initial diagnosis of acute HF exacerbation. Readmissions at outside hospitals that are not linked with the institution's electronic medical record may not have been identified. Additionally, in this study the intervention group consisted of patients given colchicine for an acute gout flare, while the control group included those with neither a gout flare nor colchicine treatment. 
At our institution, most HF patients with acute gout are treated with colchicine, so it would not have been possible to assemble a sufficiently powered control group of acute HF patients who had a gout flare without colchicine treatment. It is possible that acute gout could be a surrogate marker for more effective diuresis or renal dysfunction. However, the existing literature demonstrating worsened outcomes associated with gout and hyperuricemia suggests that the presence of gout in the intervention group would skew the results toward the null hypothesis. As this was an observational study, it is possible that there could still be unmeasured confounders even after statistical adjustment. CONCLUSION: This study demonstrated that colchicine use for acute gout flare in hospitalized patients with a HF exacerbation was associated with decreased in‐hospital all‐cause mortality and in‐hospital CV mortality compared with the control group. The use of colchicine was also associated with a longer LOS but similar 30‐day readmissions. Additional large multicenter retrospective and prospective randomized studies are needed to more fully understand the association of colchicine use with outcomes in patients undergoing treatment for HF exacerbations and explore its role as a potential treatment option in this population. CONFLICTS OF INTEREST: The authors declare no conflicts of interest.
Background: Gout is a common comorbidity in heart failure (HF) patients and is frequently associated with acute exacerbations during treatment for decompensated HF. Although colchicine is often used to manage acute gout in HF patients, its impact on clinical outcomes when used during acute decompensated HF is unknown. Methods: This was a single center, retrospective study of hospitalized patients treated for an acute HF exacerbation with and without acute gout flare between March 2011 and December 2020. We assessed clinical outcomes in patients treated with colchicine for a gout flare compared to those who did not experience a gout flare or receive colchicine. The primary outcome was in-hospital all-cause mortality. Results: Among 1047 patient encounters for acute HF during the study period, there were 237 encounters (22.7%) where the patient also received colchicine for acute gout during admission. In-hospital all-cause mortality was significantly reduced in the colchicine group compared with the control group (2.1% vs. 6.5%, p = .009). The colchicine group had increased length of stay (9.93 vs. 7.96 days, p < .001) but no significant difference in 30-day readmissions (21.5% vs. 19.5%, p = .495). In a Cox proportional hazards model adjusted for age, inpatient colchicine use was associated with improved survival to discharge (hazards ratio [HR] 0.163, 95% confidence interval [CI] 0.051-0.525, p = .002) and a reduced rate of in-hospital CV mortality (HR 0.184, 95% CI 0.044-0.770, p = .021). Conclusions: Among patients with a HF exacerbation, treatment with colchicine for a gout flare was associated with significantly lower in-hospital mortality compared with those not treated for acute gout.
INTRODUCTION: Gout is a common comorbidity in heart failure (HF) patients and is the result of monosodium urate crystal deposition in joints and periarticular tissues. 1 Gout is associated with significant morbidity, mortality, and healthcare costs. 1 , 2 , 3 Diuretics are known to precipitate hyperuricemia and increase the risk of gout flares through mechanisms related to decreased uric acid secretion and increased uric acid reabsorption. 4 , 5 Studies have estimated the prevalence of gout in HF patients to be approximately 16%−40%, and one study found that 56% of hospitalized HF patients had hyperuricemia. 6 , 7 , 8 , 9 The therapeutic agents commonly used for an acute gout flare include colchicine, steroids, and nonsteroidal anti‐inflammatory drugs (NSAIDs). However, steroids and NSAIDs are often avoided in HF because of the legitimate concerns of fluid retention and HF exacerbation. 10 In addition to its role in gout, colchicine's anti‐inflammatory effects are also highly beneficial in the treatment and prevention of other cardiac conditions such as pericarditis. 11 Colchicine has also recently shown broader cardiovascular (CV) outcomes benefit in high‐risk patients, particularly those with coronary artery disease (CAD) or history of myocardial infarction (MI). However, the impact of colchicine use during gout flares on outcomes in patients with acutely decompensated HF is unknown. The purpose of this study was to assess clinical outcomes in patients treated for an acute HF exacerbation and receiving colchicine for an acute gout flare. CONCLUSION: This study demonstrated that colchicine use for acute gout flare in hospitalized patients with a HF exacerbation was associated with decreased in‐hospital all‐cause mortality and in‐hospital CV mortality compared with the control group. The use of colchicine was also associated with a longer LOS but similar 30‐day readmissions. Additional large multicenter retrospective and prospective randomized studies are needed to more fully understand the association of colchicine use with outcomes in patients undergoing treatment for HF exacerbations and explore its role as a potential treatment option in this population.
Background: Gout is a common comorbidity in heart failure (HF) patients and is frequently associated with acute exacerbations during treatment for decompensated HF. Although colchicine is often used to manage acute gout in HF patients, its impact on clinical outcomes when used during acute decompensated HF is unknown. Methods: This was a single center, retrospective study of hospitalized patients treated for an acute HF exacerbation with and without acute gout flare between March 2011 and December 2020. We assessed clinical outcomes in patients treated with colchicine for a gout flare compared to those who did not experience a gout flare or receive colchicine. The primary outcome was in-hospital all-cause mortality. Results: Among 1047 patient encounters for acute HF during the study period, there were 237 encounters (22.7%) where the patient also received colchicine for acute gout during admission. In-hospital all-cause mortality was significantly reduced in the colchicine group compared with the control group (2.1% vs. 6.5%, p = .009). The colchicine group had increased length of stay (9.93 vs. 7.96 days, p < .001) but no significant difference in 30-day readmissions (21.5% vs. 19.5%, p = .495). In a Cox proportional hazards model adjusted for age, inpatient colchicine use was associated with improved survival to discharge (hazards ratio [HR] 0.163, 95% confidence interval [CI] 0.051-0.525, p = .002) and a reduced rate of in-hospital CV mortality (HR 0.184, 95% CI 0.044-0.770, p = .021). Conclusions: Among patients with a HF exacerbation, treatment with colchicine for a gout flare was associated with significantly lower in-hospital mortality compared with those not treated for acute gout.
7,344
350
[ 295, 239, 109, 197, 494, 138, 234, 282, 97, 221 ]
15
[ "colchicine", "gout", "patients", "group", "hospital", "hf", "use", "acute", "control", "mortality" ]
[ "gout flares outcomes", "gout treatment prevention", "patients gout flare", "gout hyperuricemia suggests", "hf gout medications" ]
[CONTENT] colchicine | gout | heart failure | in‐hospital mortality [SUMMARY]
[CONTENT] colchicine | gout | heart failure | in‐hospital mortality [SUMMARY]
[CONTENT] colchicine | gout | heart failure | in‐hospital mortality [SUMMARY]
[CONTENT] colchicine | gout | heart failure | in‐hospital mortality [SUMMARY]
[CONTENT] colchicine | gout | heart failure | in‐hospital mortality [SUMMARY]
[CONTENT] colchicine | gout | heart failure | in‐hospital mortality [SUMMARY]
[CONTENT] Colchicine | Gout | Heart Failure | Hospitalization | Humans | Retrospective Studies | Symptom Flare Up [SUMMARY]
[CONTENT] Colchicine | Gout | Heart Failure | Hospitalization | Humans | Retrospective Studies | Symptom Flare Up [SUMMARY]
[CONTENT] Colchicine | Gout | Heart Failure | Hospitalization | Humans | Retrospective Studies | Symptom Flare Up [SUMMARY]
[CONTENT] Colchicine | Gout | Heart Failure | Hospitalization | Humans | Retrospective Studies | Symptom Flare Up [SUMMARY]
[CONTENT] Colchicine | Gout | Heart Failure | Hospitalization | Humans | Retrospective Studies | Symptom Flare Up [SUMMARY]
[CONTENT] Colchicine | Gout | Heart Failure | Hospitalization | Humans | Retrospective Studies | Symptom Flare Up [SUMMARY]
[CONTENT] gout flares outcomes | gout treatment prevention | patients gout flare | gout hyperuricemia suggests | hf gout medications [SUMMARY]
[CONTENT] gout flares outcomes | gout treatment prevention | patients gout flare | gout hyperuricemia suggests | hf gout medications [SUMMARY]
[CONTENT] gout flares outcomes | gout treatment prevention | patients gout flare | gout hyperuricemia suggests | hf gout medications [SUMMARY]
[CONTENT] gout flares outcomes | gout treatment prevention | patients gout flare | gout hyperuricemia suggests | hf gout medications [SUMMARY]
[CONTENT] gout flares outcomes | gout treatment prevention | patients gout flare | gout hyperuricemia suggests | hf gout medications [SUMMARY]
[CONTENT] gout flares outcomes | gout treatment prevention | patients gout flare | gout hyperuricemia suggests | hf gout medications [SUMMARY]
[CONTENT] colchicine | gout | patients | group | hospital | hf | use | acute | control | mortality [SUMMARY]
[CONTENT] colchicine | gout | patients | group | hospital | hf | use | acute | control | mortality [SUMMARY]
[CONTENT] colchicine | gout | patients | group | hospital | hf | use | acute | control | mortality [SUMMARY]
[CONTENT] colchicine | gout | patients | group | hospital | hf | use | acute | control | mortality [SUMMARY]
[CONTENT] colchicine | gout | patients | group | hospital | hf | use | acute | control | mortality [SUMMARY]
[CONTENT] colchicine | gout | patients | group | hospital | hf | use | acute | control | mortality [SUMMARY]
[CONTENT] hf | gout | patients | hf patients | steroids | gout flares | nsaids | flares | colchicine | inflammatory [SUMMARY]
[CONTENT] time | acute | tests | patients | gout | death | admission | hospital | secondary | included [SUMMARY]
[CONTENT] group | colchicine | vs | use | inpatient | colchicine group | hospital | control | gout | control group [SUMMARY]
[CONTENT] use | colchicine use | colchicine | treatment | associated | colchicine use acute gout | additional large | associated longer los | associated longer los similar | additional large multicenter retrospective [SUMMARY]
[CONTENT] colchicine | gout | patients | group | hf | hospital | use | acute | vs | mortality [SUMMARY]
[CONTENT] colchicine | gout | patients | group | hf | hospital | use | acute | vs | mortality [SUMMARY]
[CONTENT] ||| HF [SUMMARY]
[CONTENT] between March 2011 and December 2020 ||| ||| [SUMMARY]
[CONTENT] 1047 | 237 | 22.7% ||| 2.1% | 6.5% | .009 ||| 9.93 | 7.96 days | .001 | 30-day | 21.5% | 19.5% | .495 ||| Cox | 0.163 | 95% | CI | .002 | CV | 0.184 | 95% | CI | 0.044-0.770 | .021 [SUMMARY]
[CONTENT] HF [SUMMARY]
[CONTENT] ||| HF ||| between March 2011 and December 2020 ||| ||| ||| ||| 1047 | 237 | 22.7% ||| 2.1% | 6.5% | .009 ||| 9.93 | 7.96 days | .001 | 30-day | 21.5% | 19.5% | .495 ||| Cox | 0.163 | 95% | CI | .002 | CV | 0.184 | 95% | CI | 0.044-0.770 | .021 ||| HF [SUMMARY]
[CONTENT] ||| HF ||| between March 2011 and December 2020 ||| ||| ||| ||| 1047 | 237 | 22.7% ||| 2.1% | 6.5% | .009 ||| 9.93 | 7.96 days | .001 | 30-day | 21.5% | 19.5% | .495 ||| Cox | 0.163 | 95% | CI | .002 | CV | 0.184 | 95% | CI | 0.044-0.770 | .021 ||| HF [SUMMARY]
Barriers to pandemic influenza vaccination and uptake of seasonal influenza vaccine in the post-pandemic season in Germany.
23113995
In Germany, annual vaccination against seasonal influenza is recommended for certain target groups (e.g. persons aged ≥60 years, chronically ill persons, healthcare workers (HCW)). In season 2009/10, vaccination against pandemic influenza A(H1N1)pdm09, which was controversially discussed in public, was recommended for the whole population. The objectives of this study were to assess vaccination coverage for seasonal (seasons 2008/09-2010/11) and pandemic influenza (season 2009/10), to identify predictors of and barriers to pandemic vaccine uptake, and to determine whether the controversial discussions on pandemic vaccination have had a negative impact on seasonal influenza vaccine uptake in Germany.
BACKGROUND
We analysed data from the 'German Health Update' (GEDA10) telephone survey (n=22,050) and a smaller GEDA10-follow-up survey (n=2,493), which were both representative of the general population aged ≥18 years living in Germany.
METHODS
Overall only 8.8% of the adult population in Germany received a vaccination against pandemic influenza. High socioeconomic status, having received a seasonal influenza shot in the previous season, and belonging to a target group for seasonal influenza vaccination were independently associated with the uptake of pandemic vaccines. The main reasons for not receiving a pandemic vaccination were 'fear of side effects' and the opinion that 'vaccination was not necessary'. Seasonal influenza vaccine uptake in the pre-pandemic season 2008/09 was 52.8% among persons aged ≥60 years; 30.5% among HCW, and 43.3% among chronically ill persons. A decrease in vaccination coverage was observed across all target groups in the first post-pandemic season 2010/11 (50.6%, 25.8%, and 41.0% vaccination coverage, respectively).
RESULTS
Seasonal influenza vaccination coverage in Germany remains in all target groups below 75%, which is a declared goal of the European Union. Our results suggest that controversial public discussions about safety and the benefits of pandemic influenza vaccination may have contributed to both a very low uptake of pandemic vaccines and a decreased uptake of seasonal influenza vaccines in the first post-pandemic season. In the upcoming years, the uptake of seasonal influenza vaccines should be carefully monitored in all target groups to identify if this trend continues and to guide public health authorities in developing more effective vaccination and communication strategies for seasonal influenza vaccination.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Female", "Follow-Up Studies", "Germany", "Health Care Surveys", "Health Services Accessibility", "Humans", "Influenza Vaccines", "Influenza, Human", "Male", "Middle Aged", "Pandemics", "Seasons", "Young Adult" ]
3527143
Background
In Germany, annual influenza epidemics usually occur during the winter months December to March. In the last decade, an estimated zero to 19,000 excess deaths per year were attributable to influenza virus infections [1]. Moreover, approximately one to six million influenza-related excess physician consultations per season were estimated for Germany [2]. Severe influenza virus infections or influenza-related complications typically occur in the very young and elderly population as well as in persons with underlying chronic medical conditions. Annual vaccination has proven to be an effective method to reduce the burden of influenza disease [3]. In Germany, vaccination against seasonal influenza is recommended by the Standing Committee on Vaccination (STIKO) for individuals who have either an increased risk to develop severe influenza disease (i.e. persons aged ≥60 years, pregnant women, and persons with certain chronic medical conditions) or who are likely to transmit the virus to vulnerable groups (e.g. health care workers (HCW)) [4]. Vaccination is free of charge for the target groups in Germany. During the influenza pandemic 2009/10, STIKO additionally recommended vaccination with a monovalent vaccine against the pandemic influenza virus strain A(H1N1)pdm09 for the whole population. Due to expected limitations in vaccine supplies at the beginning of the vaccination campaign, STIKO defined and ranked priority groups for the pandemic vaccination: 1) HCW, 2) persons with underlying chronic conditions, 3) pregnant women, 4) household contacts of vulnerable persons, 5) all other persons aged 6 months to 24 years, 6) all other persons aged 25–59 years, 7) all other person aged 60 years and above [5]. The pandemic vaccination campaign started in Germany on 26 October 2009 [6]. The AS03-adjuvanted monovalent vaccine Pandemrix® was almost exclusively used and available in sufficient quantities [7]. During the pandemic, vaccination against A(H1N1)pdm09 was subject to controversial discussions, not only in Germany but also in many other countries worldwide [7-9]. Main topics of the debate in the media and among experts and ‘self-proclaimed experts’ were vaccine safety, effectiveness, and concern that there were too little data on the new vaccines or vaccine ingredients (especially new adjuvants) available. Moreover, the general necessity of vaccination in view of the relative mildness of the pandemic influenza disease was called into question [7,9-11]. As a result, compliance with the national recommendations for pandemic vaccination was very poor in Germany. Since Germany has no central immunization registry, information on vaccination coverage (and factors influencing coverage) is only available from telephone and household surveys [12-15]. According to the results of thirteen consecutive cross-sectional telephone surveys (total n=13,010) conducted during the pandemic, only 8.1% of the general population aged ≥14 years living in Germany received a vaccine against pandemic influenza [6]. To develop target group specific communication strategies and to enhance compliance with the official recommendations it is important to monitor vaccine uptake in each of the target groups and to understand factors that influence uptake. This applies not only for annual influenza vaccination campaigns but also for the planning of future vaccination campaigns during a pandemic. 
The influence of the 2009/2010 pandemic situation on seasonal influenza vaccine uptake in Germany in the post-pandemic seasons has so far not been investigated. For this purpose, we utilized data from the large (~22,000 respondents) ‘German Health Update 2010’ (GEDA10) telephone survey and a smaller GEDA10 follow-up survey (~2,500 respondents). The objectives of our study were (1) to assess seasonal influenza vaccination coverage for seasons 2008/09 to 2010/11, (2) to assess pandemic influenza vaccination coverage for season 2009/10, (3) to identify predictors of and barriers to pandemic vaccine uptake, and (4) to detect a potential influence of the pandemic situation on seasonal influenza vaccine uptake in the first post-pandemic season (2010/11).
Methods
Study population and survey design The GEDA survey design has been described previously [12,13,16]. In brief, GEDA is a large annual telephone survey which is conducted by the Robert Koch Institute (RKI) as a part of Germany’s national health monitoring. The study population consists of persons ≥18 years of age who are living in a private household in Germany, have sufficient knowledge of the German language, and can be contacted via landline telephone. The GEDA study protocol was approved by Germany’s federal and regional data-protection commissioners. All data were collected and analysed in an anonymous manner. In this study we present data from GEDA10, which was conducted between 22 September 2009 and 10 July 2010. Since the annual GEDA survey was not conducted in 2010/2011, we conducted a follow-up interview among a subsample of 2,493 GEDA10 respondents (from now on referred to as the GEDA follow-up survey) from 1 April to 2 July 2011 to assess seasonal influenza vaccine uptake for the post-pandemic season 2010/11. Based on a sample size calculation, 385 subjects were needed to estimate a prevalence of 50% (“worst case scenario”) vaccine uptake with a confidence interval of ±5%. Our sample size of ~2,400 subjects for the follow-up survey was based on the aim of estimating a prevalence (i.e. vaccination coverage) of 50% in up to six subgroups (=cells; 6*385≈2400). Since both surveys were conducted by RKI, the ownership of the data lies with RKI and we did not have to obtain permission to use the data for this study. Similar to the previous GEDA survey (GEDA09), a public use file of the GEDA10 dataset will be provided soon. Data from the GEDA follow-up survey will not be openly available. To control for possible selection biases, weighting factors for the GEDA10 sample were constructed by taking age, sex, educational status, geographical region, and household size into consideration. Potential participants of the follow-up survey were sampled disproportionally to their weighting factors in GEDA10. We applied this method to prevent groups that were already underrepresented in GEDA10 from being underrepresented again at the sampling stage of the follow-up sample and thus to avoid further bias. Weighting factors for the follow-up survey were constructed in a first step on the basis of the values calculated for GEDA10 (for age, sex, education, adipositas, smoking status, subjective health, physical activity, employment status), and, in a second step, on population data gathered in the Microcensus 2008 [17], taking geographical region, age, sex, and educational status into account. Information on seasonal influenza vaccination status for season 2008/2009 was collected from all participants of GEDA10. Between 1 January 2010 and 10 July 2010 respondents were additionally asked to provide information on seasonal and pandemic influenza vaccination status for season 2009/10. Unvaccinated survey participants of GEDA10 were additionally asked to state their reasons for not receiving a pandemic influenza vaccination. Information on seasonal vaccination status for season 2010/11 was collected from all respondents of the GEDA follow-up survey. In the follow-up survey, respondents were additionally asked by whom they were vaccinated (general practitioner/other physician in private practice/occupational physician/other) and in which month they received the vaccination against seasonal influenza in season 2010/11.
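The figure of 385 subjects quoted above follows from the usual sample size formula for estimating a proportion, n = z^2 p(1 - p)/d^2; the short check below simply reproduces that arithmetic and is not the authors' original calculation.

```python
# Reproduce the quoted sample size for estimating a proportion to within +/- 5 points.
from math import ceil
from scipy.stats import norm

p = 0.5              # assumed prevalence ("worst case" for the variance p*(1-p))
d = 0.05             # desired precision (half-width of the 95% confidence interval)
z = norm.ppf(0.975)  # ~1.96 for a two-sided 95% confidence interval

n = z**2 * p * (1 - p) / d**2
print(ceil(n))       # 385 subjects per subgroup
print(ceil(n) * 6)   # ~2,310 across six subgroups, in line with the ~2,400 aimed for
```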
One should note that GEDA10 does not cover the paediatric population (aged 0–17 years) for which pandemic vaccination was also recommended in Germany. Information on vaccination coverage in this particular age-group was therefore not available from this data source. We calculated the response for GEDA10 by using Response Rate 3 as defined by the American Association for Public Opinion Research (AAPOR) [18]. Response Rate 3 is the proportion of the number of complete interviews divided by the number of interviews plus the number of non-interviews (refusal and break-off plus non-contacts plus others) plus cases of unknown eligibility. For cases of unknown eligibility Response Rate 3 estimates what proportion of cases of unknown eligibility is actually eligible. This estimation is based on the proportion of eligible households among all numbers for which a definitive determination of status was obtained (hence a very conservative estimate). We additionally calculated the cooperation rate at respondent level, which is defined as the proportion of all respondents interviewed of all respondents ever contacted [18]. Since the GEDA follow-up survey was not a random digit dialling study (as GEDA10), we reported the minimal response rate (Response Rate 1 as defined by AAPOR, [18]) for the follow-up survey.
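The verbal definitions of Response Rate 3 and the cooperation rate given above can be written out as small helpers; the implementation below follows the wording in the text (it omits details of the full AAPOR formula such as partial interviews), and the variable names are generic placeholders.

```python
# Illustrative outcome-rate helpers following the definitions quoted in the text.
def response_rate_3(complete, refusals, noncontacts, other, unknown, e):
    """Complete interviews over interviews + non-interviews + e * unknown-eligibility cases.

    e is the estimated share of unknown-eligibility cases that are actually eligible,
    taken (as described above) from the proportion of eligible households among all
    numbers whose status could be definitively determined."""
    return complete / (complete + refusals + noncontacts + other + e * unknown)

def cooperation_rate(interviewed, ever_contacted):
    """Cooperation rate at respondent level: respondents interviewed / respondents ever contacted."""
    return interviewed / ever_contacted
```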
Definition of variables Socio-economic status levels were created as described by Lampert and Kroll on the basis of self-reported educational, income, and professional status of survey respondents [19]. In accordance with the STIKO-recommendations [4], persons were classified into the target groups for seasonal influenza vaccination in our study if they reported (1) to be ≥60 years of age, (2) to have at least one underlying chronic disease (defined here as having a chronic underlying respiratory, cardiovascular, liver, or renal disease, cancer, or diabetes), or (3) to work as HCW. Since female respondents of child-bearing age were not asked whether they had been pregnant during the last influenza season, it was not possible to include pregnant women as target group in our analysis.
The geographic region category ‘Western Federal States’ (WFS) comprised the federal states Schleswig-Holstein, Bremen, Hamburg, Lower Saxony, Hesse, Rhineland-Palatinate, Saarland, North Rhine-Westphalia, Baden-Württemberg and Bavaria; ‘Eastern Federal States’ (EFS) comprised Mecklenburg-Vorpommern, Brandenburg, Berlin, Saxony-Anhalt, Thuringia and Saxony.
Statistical analysis Data analysis was performed using PASW 18.0 for Windows (SPSS Inc., Chicago, USA). Proportions were calculated by using procedures for the analysis of complex samples. Univariate analyses were conducted to determine associations between pandemic influenza vaccine uptake and socio-demographic, health-related and professional factors. A p-value ≤0.05 was considered to indicate a statistically significant difference. Odds ratios (OR) and 95% confidence intervals (CI) were calculated as appropriate. Multivariable analysis was performed by entering variables potentially associated with vaccine uptake (p-value <0.2 in univariate analysis) into a multivariable logistic regression model in a first step, followed by step-wise backward removal of variables with a p-value >0.05 to produce a final model. Interaction terms were included to account for effect modification between independent variables.
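The model-building strategy described above (enter candidate variables with univariate p < 0.2, then remove terms with p > 0.05 step-wise) could be sketched as follows; the outcome and candidate column names are hypothetical, candidates are assumed to be numeric or 0/1 coded so that term names match column names, and the original analysis was carried out in PASW/SPSS rather than Python.

```python
# Illustrative backward-elimination loop for the multivariable logistic regression
# (candidates assumed to have passed the univariate p < 0.2 screen and to be numeric/binary).
import pandas as pd
import statsmodels.formula.api as smf

def backward_eliminate(df, outcome, candidates, threshold=0.05):
    variables = list(candidates)
    while variables:
        model = smf.logit(f"{outcome} ~ " + " + ".join(variables), data=df).fit(disp=0)
        pvalues = model.pvalues.drop("Intercept")
        worst = pvalues.idxmax()                # term with the largest p-value
        if pvalues[worst] <= threshold:
            return model                        # all remaining terms significant at 0.05
        variables.remove(worst)                 # drop it and refit
    return None

df = pd.read_csv("geda10.csv")                  # hypothetical analysis file
final_model = backward_eliminate(df, "pandemic_vaccinated",
                                 ["age_group", "sex", "ses_high", "chronic_disease",
                                  "hcw", "seasonal_vacc_0809"])
```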
Results
Sample characteristics In total, 22,050 telephone interviews were conducted during the study period of GEDA10 and 2,493 participants were re-interviewed for the follow-up survey. An overview of the survey populations is given in Table 1. The median age was 48.0 years (range 18–99 years) in GEDA10 and 49.7 years (range 19–96 years) in the follow-up survey. Response Rate 3 was 28.9% in GEDA10; the cooperation rate at respondent level was 55.8%. Response Rate 1 was 75.0% in the follow-up survey. Characteristics of participants in the ‘German Health Update Survey’ (GEDA10) and the GEDA10 follow-up survey *Weighted data.
Seasonal influenza vaccination coverage Information on seasonal influenza vaccination status was available for over 99.8% in each of the study samples for the three seasons under investigation (seasons 2008/09-2010/11). Vaccination coverage for the three seasons by sex, age group, place of residence, and target group is presented in Table 2. To allow comparability with international studies, seasonal influenza vaccine uptake among ≥65 year-olds was additionally calculated and revealed a coverage of 56.1% (95% CI: 54.0-58.2) in season 2008/09, 50.2% (95% CI: 47.5-52.9) in season 2009/10, and 54.2% (95% CI: 48.4-59.8) in season 2010/11. For influenza season 2009/10, vaccination coverage was calculated by age-groups in decades (Figure 1). Vaccine uptake in both the target population for seasonal influenza vaccination (defined here as persons who have an underlying chronic disease, or work as HCW) and non-target population increased with age and was highest in persons ≥70 years. The vast majority (97.8%) of vaccinated persons had received their influenza vaccination for season 2010/11 by the end of December 2010. Seasonal influenza vaccine uptake by sex, age group, place of residence and target group, seasons 2008/09 − 2010/11, Germany *Weighted data. Seasonal vaccination coverage by age group and target group, season 2009/10. Figure 2 shows trends in seasonal influenza vaccine uptake among the three different target groups and the non-target group for four consecutive seasons (2007/08 to 2010/11; results for season 2007/08 according to a previously published analysis of GEDA 2009 data [12]). While vaccination coverage slightly decreased in persons aged ≥60 years and in persons with underlying chronic diseases between seasons 2007/08 and 2009/10, there was an increase in vaccine uptake among HCWs from season 2007/08 to 2008/09. In all subgroups under investigation, vaccine uptake for season 2009/10 was higher in the follow-up sample (empty symbols) compared to the GEDA10 sample (filled symbols).
Considering only the results from the follow-up survey (Figure 2), a significant decrease in seasonal influenza vaccination coverage between seasons 2009/10 and 2010/11 was observed for persons with underlying chronic conditions (p=0.04), HCWs (p=0.03), and persons not targeted for seasonal influenza vaccination (p<0.01). For persons ≥60 years of age there was also a decrease, but without reaching statistical significance. In season 2010/11, 83.4% of respondents who received a seasonal influenza shot were vaccinated by their general practitioner, 4.7% by another physician in private practice (e.g. gynaecologist, paediatrician), 9.4% by an occupational physician, 0.4% by a hospital physician, and 2.1% by any other physician. Of those who were vaccinated by a general practitioner or by an occupational physician, 18.4% and 59.4% did not belong to a target group, respectively. Trends in seasonal vaccine uptake in target groups in Germany for seasons 2007/08−2010/11 according to GEDA09 and GEDA10 (filled symbols) and a follow-up sample of GEDA10 (empty symbols). 1 Data source: GEDA09 (n=15,552) [12]; 2 Data source: GEDA10 (n=22,009); 3 Data source: GEDA10 (n=13,040); 4 Data source: GEDA follow-up survey (n=2,492).
Pandemic influenza vaccination coverage Information on pandemic influenza vaccine uptake was available for 99.9% of the respective study population in GEDA 10 (n=13,048). Vaccination coverage by sex, age group, place of residence, socio-economic status and different target groups for seasonal influenza vaccination is presented in Table 3. In total, 8.8% (95% CI: 8.2-9.5) of the general adult population in Germany received a vaccination against pandemic influenza. With 11.2% (95% CI: 10.2-12.3) pandemic vaccine uptake was significantly higher in persons belonging to the target group for seasonal influenza vaccination as compared to the non-seasonal influenza target group (6.4%; 95% CI: 5.7-7.1; p<0.001). Determinants of pandemic influenza vaccine uptake, Germany, season 2009/10 #weighted data; * p<0.05; ** p<0.001; ref=reference category; n.s.= not significant; WFS=Western federal states, EFS=Eastern federal states. p-value for interaction between agegroup*seasonal influenza vaccination status: 0.011. The most frequently reported reasons for not receiving the vaccination were (1) ‘fear of side effects of pandemic vaccines’ (stated by 37.2%; 95% CI: 36.1-38.3), (2) ‘pandemic vaccination is not necessary’ (33.8%; 95% CI: 32.7-34.9), (3) ‘pandemic vaccination not officially recommended for me’ (16.6%; 95% CI: 15.8-17.5), and (4) ‘reject vaccinations in general’ (8.5%; 95% CI: 7.8-9.2).
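As a rough illustration of the uncertainty attached to estimates such as the 8.8% pandemic vaccination coverage, a simple binomial confidence interval can be computed as below; this ignores the survey weights and complex-sample procedures that were actually used, so it only approximates the published interval.

```python
# Approximate 95% CI for the pandemic vaccination coverage estimate, treating the
# sample as simple random (the published CI additionally reflects survey weights
# and the complex sample design, so it is somewhat wider).
from statsmodels.stats.proportion import proportion_confint

n = 13048                      # respondents with information on pandemic vaccine uptake
vaccinated = round(0.088 * n)  # ~1,148 vaccinated respondents implied by the 8.8% estimate

low, high = proportion_confint(vaccinated, n, alpha=0.05, method="wilson")
print(f"8.8% coverage, 95% CI approximately {low:.1%} to {high:.1%}")
```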
The most frequently reported reasons for not receiving the vaccination were (1) ‘fear of side effects of pandemic vaccines’ (stated by 37.2%; 95% CI: 36.1-38.3), (2) ‘pandemic vaccination is not necessary’ (33.8%; 95% CI: 32.7-34.9), (3) ‘pandemic vaccination not officially recommended for me’ (16.6%; 95% CI: 15.8-17.5), and (4) ‘reject vaccinations in general’ (8.5%; 95% CI: 7.8-9.2). Factors associated with pandemic influenza vaccine uptake Results of univariate and multivariable analysis of factors potentially associated with pandemic influenza vaccine uptake are shown in Table 3. Having received a seasonal influenza vaccination in the previous season (season 2008/09) was the strongest independent predictor of pandemic influenza vaccination. However, this effect differed by age group and we therefore included an interaction term in the final model. Additionally, working as HCW, having a chronic disease, high socioeconomic status, and being male were significantly associated with higher uptake in multivariable analysis. Results of univariate and multivariable analysis of factors potentially associated with pandemic influenza vaccine uptake are shown in Table 3. Having received a seasonal influenza vaccination in the previous season (season 2008/09) was the strongest independent predictor of pandemic influenza vaccination. However, this effect differed by age group and we therefore included an interaction term in the final model. Additionally, working as HCW, having a chronic disease, high socioeconomic status, and being male were significantly associated with higher uptake in multivariable analysis.
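The multivariable model described above (a logistic regression with an age group × prior seasonal vaccination interaction term and stepwise backward elimination of variables with p>0.05) was fitted in PASW/SPSS on the survey microdata, which are not reproduced here. The sketch below only illustrates the same modelling steps in Python with statsmodels; the data frame is synthetic and all variable names are assumptions, not the original GEDA10 variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000

# Synthetic stand-in for the survey data (assumed structure, not the GEDA10 microdata).
df = pd.DataFrame({
    "agegroup": rng.choice(["18-39", "40-59", "60plus"], n),
    "seasonal_vacc_0809": rng.integers(0, 2, n),   # seasonal shot in 2008/09 (0/1)
    "hcw": rng.integers(0, 2, n),
    "chronic_disease": rng.integers(0, 2, n),
    "ses": rng.choice(["low", "middle", "high"], n),
    "male": rng.integers(0, 2, n),
    "region_efs": rng.integers(0, 2, n),
})
# Outcome simulated so that prior seasonal vaccination dominates, mimicking the reported pattern.
lin = -3.0 + 1.8 * df["seasonal_vacc_0809"] + 0.5 * df["hcw"] + 0.4 * df["chronic_disease"]
df["pandemic_vacc"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Full model, including the age group x prior seasonal vaccination interaction.
full = smf.logit(
    "pandemic_vacc ~ C(agegroup) * seasonal_vacc_0809 + hcw + chronic_disease"
    " + C(ses) + male + region_efs",
    data=df,
).fit(disp=False)
print(full.summary2().tables[1][["Coef.", "P>|z|"]])

# One illustrative backward-elimination step: drop the socio-economic status and region terms
# and refit; in the real procedure the variable with the largest p-value > 0.05 is removed
# one at a time until all remaining terms have p <= 0.05.
reduced = smf.logit(
    "pandemic_vacc ~ C(agegroup) * seasonal_vacc_0809 + hcw + chronic_disease + male",
    data=df,
).fit(disp=False)
print(np.exp(reduced.params))  # adjusted odds ratios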
Conclusion
In conclusion, poor compliance with the official vaccination recommendation, resulting in low uptake of pandemic influenza vaccines during the pandemic season 2009/10, suggests that public communication strategies and vaccination campaigns during the influenza A(H1N1)pdm09 pandemic in Germany were not successful. In addition, our results raise concerns that controversial discussions about the safety and necessity of pandemic influenza vaccines may have contributed to decreased seasonal influenza vaccine uptake in the first post-pandemic season. It is therefore crucial to develop concerted communication strategies based on the lessons learned from the 2009/10 influenza pandemic and to include them in the national pandemic preparedness plan. This should be done not only with respect to competent handling of pandemic situations but also to avoid a decrease in the acceptance of vaccinations in general. In this respect, communication strategies and different modes of communication to specific target groups should be evaluated and implemented already in non-crisis situations, so that they can be scaled up during an influenza pandemic or other public health crisis. This is of particular importance since seasonal influenza vaccine uptake in the recommended target groups in Germany has stagnated at a low level since 2005 [38] and falls far short of the EU goal of 75% [20]. Further studies should be conducted to monitor trends in seasonal influenza vaccine uptake in Germany in the specific target groups, including pregnant women (a target group for seasonal influenza vaccination since 2010), and to identify precisely the barriers to influenza vaccination in the coming years, which may differ from those in the pandemic and the first post-pandemic season. This information would be crucial to guide public health authorities in developing more effective communication strategies for seasonal influenza vaccination tailored to specific target groups.
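The coverage estimates quoted above are weighted proportions with 95% confidence intervals obtained from complex-sample procedures in PASW/SPSS. As a rough, unweighted plausibility check of that style of estimate, a simple Wald interval and a univariate odds ratio can be reconstructed from illustrative counts; the subgroup sizes below are assumptions back-calculated from the reported percentages, not the original data, and the published intervals are typically somewhat wider because they account for weighting and the complex sample design.

import math

def wald_ci(p_hat: float, n: int, z: float = 1.96):
    # Simple (unweighted) Wald 95% confidence interval for a proportion.
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

def odds_ratio(a: int, b: int, c: int, d: int, z: float = 1.96):
    # OR with 95% CI from a 2x2 table: a/b = vaccinated/unvaccinated in group 1, c/d in group 2.
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, math.exp(math.log(or_) - z * se_log), math.exp(math.log(or_) + z * se_log)

# 8.8% pandemic vaccine uptake among 13,048 adults (point estimate as reported).
print(wald_ci(0.088, 13_048))

# Target group vs non-target group; the group sizes are placeholders for illustration only.
n_target, n_nontarget = 6_000, 7_048
a = round(n_target * 0.112); b = n_target - a
c = round(n_nontarget * 0.064); d = n_nontarget - c
print(odds_ratio(a, b, c, d))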
[ "Background", "Study population and survey design", "Definition of variables", "Statistical analysis", "Sample characteristics", "Seasonal influenza vaccination coverage", "Pandemic influenza vaccination coverage", "Factors associated with pandemic influenza vaccine uptake", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "In Germany, annual influenza epidemics usually occur during the winter months December to March. In the last decade, an estimated zero to 19,000 excess deaths per year were attributable to influenza virus infections [1]. Moreover, approximately one to six million influenza-related excess physician consultations per season were estimated for Germany [2]. Severe influenza virus infections or influenza-related complications typically occur in the very young and elderly population as well as in persons with underlying chronic medical conditions.\nAnnual vaccination has proven to be an effective method to reduce the burden of influenza disease [3]. In Germany, vaccination against seasonal influenza is recommended by the Standing Committee on Vaccination (STIKO) for individuals who have either an increased risk to develop severe influenza disease (i.e. persons aged ≥60 years, pregnant women, and persons with certain chronic medical conditions) or who are likely to transmit the virus to vulnerable groups (e.g. health care workers (HCW)) [4]. Vaccination is free of charge for the target groups in Germany. During the influenza pandemic 2009/10, STIKO additionally recommended vaccination with a monovalent vaccine against the pandemic influenza virus strain A(H1N1)pdm09 for the whole population. Due to expected limitations in vaccine supplies at the beginning of the vaccination campaign, STIKO defined and ranked priority groups for the pandemic vaccination: 1) HCW, 2) persons with underlying chronic conditions, 3) pregnant women, 4) household contacts of vulnerable persons, 5) all other persons aged 6 months to 24 years, 6) all other persons aged 25–59 years, 7) all other person aged 60 years and above [5]. The pandemic vaccination campaign started in Germany on 26 October 2009 [6]. The AS03-adjuvanted monovalent vaccine Pandemrix® was almost exclusively used and available in sufficient quantities [7].\nDuring the pandemic, vaccination against A(H1N1)pdm09 was subject to controversial discussions, not only in Germany but also in many other countries worldwide [7-9]. Main topics of the debate in the media and among experts and ‘self-proclaimed experts’ were vaccine safety, effectiveness, and concern that there were too little data on the new vaccines or vaccine ingredients (especially new adjuvants) available. Moreover, the general necessity of vaccination in view of the relative mildness of the pandemic influenza disease was called into question [7,9-11]. As a result, compliance with the national recommendations for pandemic vaccination was very poor in Germany. Since Germany has no central immunization registry, information on vaccination coverage (and factors influencing coverage) is only available from telephone and household surveys [12-15]. According to the results of thirteen consecutive cross-sectional telephone surveys (total n=13,010) conducted during the pandemic, only 8.1% of the general population aged ≥14 years living in Germany received a vaccine against pandemic influenza [6].\nTo develop target group specific communication strategies and to enhance compliance with the official recommendations it is important to monitor vaccine uptake in each of the target groups and to understand factors that influence uptake. This applies not only for annual influenza vaccination campaigns but also for the planning of future vaccination campaigns during a pandemic. 
The influence of the 2009/2010 pandemic situation on seasonal influenza vaccine uptake in Germany in the post-pandemic seasons has so far not been investigated. For this purpose, we utilized data from the large (~22,000 respondents) ‘German Health Update 2010’ (GEDA10) telephone survey and a smaller GEDA10 follow-up survey (~2,500 respondents). The objectives of our study were (1) to assess seasonal influenza vaccination coverage for seasons 2008/09 to 2010/11, (2) to assess pandemic influenza vaccination coverage for season 2009/10, (3) to identify predictors of and barriers to pandemic vaccine uptake, and (4) to detect a potential influence of the pandemic situation on seasonal influenza vaccine uptake in the first post-pandemic season (2010/11).", "The GEDA survey design has been described previously [12,13,16]. In brief, GEDA is a large annual telephone survey which is conducted by the Robert Koch Institute (RKI) as a part of Germany’s national health monitoring. The study population consists of persons ≥18 years of age who are living in a private household in Germany, have sufficient knowledge of the German language, and can be contacted via landline telephone. The GEDA study protocol was approved by Germany’s federal and regional data-protection commissioners. All data were collected and analysed in an anonymous manner.\nIn this study we present data from GEDA10 which was conducted between 22 September 2009 and 10 July 2010. Since the annual GEDA survey was not conducted in 2010/2011, we conducted a follow-up interview among a subsample of 2.493 GEDA10 respondents (from now on referred to as the GEDA follow-up survey) from 1 April to 2 July 2011 to assess seasonal influenza vaccine uptake for the post pandemic season 2010/11. Based on a sample size calculation, 385 subjects were needed to estimate a prevalence of 50% (“worst case scenario”) vaccine uptake with a confidence interval of +/− 5%. Our sample size of ~2,400 subjects for the follow-up survey was based on the premise to estimate a prevalence (i.e. vaccination coverage) of 50% for up to six subgroups (=cells; 6*385≈2400).\nSince both surveys were conducted by RKI, the ownership of the data lies with RKI and we did not have to obtain permission to use the data for this study. Similar to the previous GEDA survey (GEDA09), a public use file of the GEDA10 dataset will be provided soon. Data from the GEDA follow-up survey will not be openly available.\nTo control for possible selection biases, weighting factors for the GEDA10 sample were constructed by taking age, sex, educational status, geographical region, and household size into consideration. Potential participants of the follow-up survey were sampled disproportionally to their weighting factors in GEDA10. We applied this method to avoid that groups which were already underrepresented in GEDA10 become underrepresented again at the sampling stage in the follow-up sample and thus to prevent further bias. Weighting factors for the follow-up survey were constructed in a first step on the basis of the values calculated for GEDA10 (for age, sex, education, adipositas, smoking status, subjective health, physical activity, employment status), and, in a second step, on population data gathered in the Microcensus 2008 [17], taking geographical region, age, sex, and educational status into account.\nInformation on seasonal influenza vaccination status for season 2008/2009 was collected from all participants of GEDA10. 
Between 1 January 2010 and 10 July 2010 respondents were additionally asked to provide information on seasonal and pandemic influenza vaccination status for season 2009/10. Unvaccinated survey participants of GEDA10 were additionally asked to state their reasons for not receiving a pandemic influenza vaccination. Information on seasonal vaccination status for season 2010/11 was collected from all respondents of the GEDA follow-up survey. In the follow-up survey, respondents were additionally asked by whom they were vaccinated (general practitioner/other physician in private practice/occupational physician/other) and in which month they received the vaccination against seasonal influenza in season 2010/11. One should note that GEDA10 does not cover the paediatric population (aged 0–17 years) for which pandemic vaccination was also recommended in Germany. Information on vaccination coverage in this particular age-group was therefore not available from this data source.\nWe calculated the response for GEDA10 by using Response Rate 3 as defined by the American Association for Public Opinion Research (AAPOR) [18]. Response Rate 3 is the proportion of the number of complete interviews divided by the number of interviews plus the number of non-interviews (refusal and break-off plus non-contacts plus others) plus cases of unknown eligibility. For cases of unknown eligibility Response Rate 3 estimates what proportion of cases of unknown eligibility is actually eligible. This estimation is based on the proportion of eligible households among all numbers for which a definitive determination of status was obtained (hence a very conservative estimate). We additionally calculated the cooperation rate at respondent level, which is defined as the proportion of all respondents interviewed of all respondents ever contacted [18]. Since the GEDA follow-up survey was not a random digit dialling study (as GEDA10), we reported the minimal response rate (Response Rate 1 as defined by AAPOR, [18]) for the follow-up survey.", "Socio-economic status levels were created as described by Lampert and Kroll on the basis of self-reported educational, income, and professional status of survey respondents [19]. In accordance with the STIKO-recommendations [4], persons were classified into the target groups for seasonal influenza vaccination in our study if they reported (1) to be ≥60 years of age, (2) to have at least one underlying chronic disease (defined here as having a chronic underlying respiratory, cardiovascular, liver, or renal disease, cancer, or diabetes), or (3) to work as HCW. Since female respondents of child-bearing age were not asked whether they had been pregnant during the last influenza season, it was not possible to include pregnant women as target group in our analysis. The geographic region category ‘Western Federal States’ (WFS) comprised the federal states Schleswig-Holstein, Bremen, Hamburg, Lower Saxony, Hesse, Rhineland-Palatinate, Saarland, North Rhine-Westphalia, Baden-Württemberg and Bavaria; ‘Eastern Federal States’ (EFS) comprised Mecklenburg-Vorpommern, Brandenburg, Berlin, Saxony-Anhalt, Thuringia and Saxony.", "Data analysis was performed using PASW 18.0 for Windows (SPSS Inc., Chicago, USA). Proportions were calculated by using procedures for the analysis of complex samples. Univariate analyses were conducted to determine associations between pandemic influenza vaccine uptake and socio-demographic, health-related and professional factors. 
A p-value ≤0.05 was considered to indicate a statistically significant difference. Odds ratios (OR) and 95% confidence intervals (CI) were calculated as appropriate. Multivariable analysis was performed by entering variables potentially associated with vaccine uptake (p-value <0.2 in univariate analysis) into a multivariable logistic regression model in a first step, followed by step-wise backward removal of variables with a p-value >0.05 to produce a final model. Interaction terms were included to account for effect modification between independent variables.", "In total, 22,050 telephone interviews were conducted during the study period of GEDA10 and 2,493 participants were re-interviewed for the follow-up survey. An overview of the survey populations is given in Table 1. The median age was 48.0 years (range 18–99 years) in GEDA10 and 49.7 years (range 19–96 years) in the follow-up survey. Response Rate 3 was 28.9% in GEDA10; the cooperation rate at respondent level was 55.8%. Response Rate 1 was 75.0% in the follow up survey.\nCharacteristics of participants in the ‘German Health Update Survey’ (GEDA10) and the GEDA10 follow-up survey\n*Weighted data.", "Information on seasonal influenza vaccination status was available for over 99.8% in each of the study samples for the three seasons under investigation (seasons 2008/09-2010/11). Vaccination coverage for the three seasons by sex, age group, place of residence, and target group is presented in Table 2. To allow comparability with international studies, seasonal influenza vaccine uptake among ≥65 year-olds was additionally calculated and revealed a coverage of 56.1% (95% CI: 54.0-58.2) in season 2008/09, 50.2% (95% CI: 47.5-52.9) in season 2009/10, and 54.2% (95% CI: 48.4-59.8) in season 2010/11. For influenza season 2009/10, vaccination coverage was calculated by age-groups in decades (Figure 1). Vaccine uptake in both the target population for seasonal influenza vaccination (defined here as persons who have an underlying chronic disease, or work as HCW) and non-target population increased with age and was highest in persons ≥70 years. The vast majority (97.8%) of vaccinated persons had received their influenza vaccination for season 2010/11 by the end of December 2010.\nSeasonal influenza vaccine uptake by sex, age group, place of residence and target group, seasons 2008/09 − 2010/11, Germany\n*Weighted data.\nSeasonal vaccination coverage by age group and target group, season 2009/10.\nFigure 2 shows trends in seasonal influenza vaccine uptake among the three different target groups and the non-target group for four consecutive seasons (2007/08 to 2010/11; results for season 2007/08 according to a previously published analysis of GEDA 2009 data [12]). While vaccination coverage slightly decreased in persons aged ≥60 years and in persons with underlying chronic diseases between seasons 2007/08 and 2009/10, there was an increase in vaccine uptake among HCWs from season 2007/08 to 2008/09. In all subgroups under investigation, vaccine uptake for season 2009/10 was higher in the follow-up sample (empty symbols) compared to the GEDA10 sample (filled symbols). Considering only the results from the follow-up survey (Figure 2), a significant decrease in seasonal influenza vaccination coverage between seasons 2009/10 and 2010/11 was observed for persons with underlying chronic conditions (p=0.04), HCWs (p=0.03), and persons not targeted for seasonal influenza vaccination (p<0.01). 
For persons ≥60 years of age there was also a decrease, but without reaching statistical significance. In season 2010/11, 83.4% of respondents who received a seasonal influenza shot were vaccinated by their general practitioner, 4.7% by another physician in private practice (e.g. gynaecologist, paediatrician), 9.4% by an occupational physician, 0.4% by a hospital physician, and 2.1% by any other physician. Of those who were vaccinated by a general practitioner or by an occupational physician, 18.4% and 59.4% did not belong to a target group, respectively.\nTrends in seasonal vaccine uptake in target groups in Germany for seasons 2007/08−2010/11 according to GEDA09 and GEDA10(filled symbols)and a follow-up sample of GEDA10(empty symbols).1 Data source: GEDA09 (n=15,552) [12]; 2 Data source: GEDA10 (n=22,009); 3 Data source: GEDA10 (n=13,040); 4 Data source: GEDA follow-up survey (n=2,492).", "Information on pandemic influenza vaccine uptake was available for 99.9% of the respective study population in GEDA 10 (n=13,048). Vaccination coverage by sex, age group, place of residence, socio-economic status and different target groups for seasonal influenza vaccination is presented in Table 3. In total, 8.8% (95% CI: 8.2-9.5) of the general adult population in Germany received a vaccination against pandemic influenza. With 11.2% (95% CI: 10.2-12.3) pandemic vaccine uptake was significantly higher in persons belonging to the target group for seasonal influenza vaccination as compared to the non-seasonal influenza target group (6.4%; 95% CI: 5.7-7.1; p<0.001).\nDeterminants of pandemic influenza vaccine uptake, Germany, season 2009/10\n#weighted data; * p<0.05; ** p<0.001; ref=reference category; n.s.= not significant; WFS=Western federal states, EFS=Eastern federal states.\np-value for interaction between agegroup*seasonal influenza vaccination status: 0.011.\nThe most frequently reported reasons for not receiving the vaccination were (1) ‘fear of side effects of pandemic vaccines’ (stated by 37.2%; 95% CI: 36.1-38.3), (2) ‘pandemic vaccination is not necessary’ (33.8%; 95% CI: 32.7-34.9), (3) ‘pandemic vaccination not officially recommended for me’ (16.6%; 95% CI: 15.8-17.5), and (4) ‘reject vaccinations in general’ (8.5%; 95% CI: 7.8-9.2).", "Results of univariate and multivariable analysis of factors potentially associated with pandemic influenza vaccine uptake are shown in Table 3. Having received a seasonal influenza vaccination in the previous season (season 2008/09) was the strongest independent predictor of pandemic influenza vaccination. However, this effect differed by age group and we therefore included an interaction term in the final model. Additionally, working as HCW, having a chronic disease, high socioeconomic status, and being male were significantly associated with higher uptake in multivariable analysis.", "The authors have declared no conflict of interest.", "All authors made substantial contributions to the study. SM was involved in the development of the GEDA10 study design and contributed to the Methods section. MMB, DW, and OW developed the study design of the GEDA10 follow-up survey. MMB analysed the data in consultation with DW, GF, OW and GK. MMB wrote the draft version of the manuscript. All authors have read, carefully reviewed and approved the final version of the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/12/938/prepub\n" ]
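The Methods text above derives the follow-up sample size from 385 subjects per subgroup, the number needed to estimate a 50% prevalence with a margin of ±5 percentage points, multiplied over six subgroups (≈2,400 interviews). The short calculation below simply re-derives those quoted figures from the standard sample-size formula for a proportion.

import math

def sample_size_for_proportion(p: float, margin: float, z: float = 1.96) -> int:
    # n needed to estimate a proportion p with +/- `margin` at ~95% confidence.
    return math.ceil(z ** 2 * p * (1.0 - p) / margin ** 2)

n_per_group = sample_size_for_proportion(p=0.5, margin=0.05)  # "worst case" p = 0.5
print(n_per_group)       # 385 subjects per subgroup
print(6 * n_per_group)   # 2,310 in total, in line with the ~2,400 follow-up interviews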
[ "Background", "Methods", "Study population and survey design", "Definition of variables", "Statistical analysis", "Results", "Sample characteristics", "Seasonal influenza vaccination coverage", "Pandemic influenza vaccination coverage", "Factors associated with pandemic influenza vaccine uptake", "Discussion", "Conclusion", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "In Germany, annual influenza epidemics usually occur during the winter months December to March. In the last decade, an estimated zero to 19,000 excess deaths per year were attributable to influenza virus infections [1]. Moreover, approximately one to six million influenza-related excess physician consultations per season were estimated for Germany [2]. Severe influenza virus infections or influenza-related complications typically occur in the very young and elderly population as well as in persons with underlying chronic medical conditions.\nAnnual vaccination has proven to be an effective method to reduce the burden of influenza disease [3]. In Germany, vaccination against seasonal influenza is recommended by the Standing Committee on Vaccination (STIKO) for individuals who have either an increased risk to develop severe influenza disease (i.e. persons aged ≥60 years, pregnant women, and persons with certain chronic medical conditions) or who are likely to transmit the virus to vulnerable groups (e.g. health care workers (HCW)) [4]. Vaccination is free of charge for the target groups in Germany. During the influenza pandemic 2009/10, STIKO additionally recommended vaccination with a monovalent vaccine against the pandemic influenza virus strain A(H1N1)pdm09 for the whole population. Due to expected limitations in vaccine supplies at the beginning of the vaccination campaign, STIKO defined and ranked priority groups for the pandemic vaccination: 1) HCW, 2) persons with underlying chronic conditions, 3) pregnant women, 4) household contacts of vulnerable persons, 5) all other persons aged 6 months to 24 years, 6) all other persons aged 25–59 years, 7) all other person aged 60 years and above [5]. The pandemic vaccination campaign started in Germany on 26 October 2009 [6]. The AS03-adjuvanted monovalent vaccine Pandemrix® was almost exclusively used and available in sufficient quantities [7].\nDuring the pandemic, vaccination against A(H1N1)pdm09 was subject to controversial discussions, not only in Germany but also in many other countries worldwide [7-9]. Main topics of the debate in the media and among experts and ‘self-proclaimed experts’ were vaccine safety, effectiveness, and concern that there were too little data on the new vaccines or vaccine ingredients (especially new adjuvants) available. Moreover, the general necessity of vaccination in view of the relative mildness of the pandemic influenza disease was called into question [7,9-11]. As a result, compliance with the national recommendations for pandemic vaccination was very poor in Germany. Since Germany has no central immunization registry, information on vaccination coverage (and factors influencing coverage) is only available from telephone and household surveys [12-15]. According to the results of thirteen consecutive cross-sectional telephone surveys (total n=13,010) conducted during the pandemic, only 8.1% of the general population aged ≥14 years living in Germany received a vaccine against pandemic influenza [6].\nTo develop target group specific communication strategies and to enhance compliance with the official recommendations it is important to monitor vaccine uptake in each of the target groups and to understand factors that influence uptake. This applies not only for annual influenza vaccination campaigns but also for the planning of future vaccination campaigns during a pandemic. 
The influence of the 2009/2010 pandemic situation on seasonal influenza vaccine uptake in Germany in the post-pandemic seasons has so far not been investigated. For this purpose, we utilized data from the large (~22,000 respondents) ‘German Health Update 2010’ (GEDA10) telephone survey and a smaller GEDA10 follow-up survey (~2,500 respondents). The objectives of our study were (1) to assess seasonal influenza vaccination coverage for seasons 2008/09 to 2010/11, (2) to assess pandemic influenza vaccination coverage for season 2009/10, (3) to identify predictors of and barriers to pandemic vaccine uptake, and (4) to detect a potential influence of the pandemic situation on seasonal influenza vaccine uptake in the first post-pandemic season (2010/11).", " Study population and survey design The GEDA survey design has been described previously [12,13,16]. In brief, GEDA is a large annual telephone survey which is conducted by the Robert Koch Institute (RKI) as a part of Germany’s national health monitoring. The study population consists of persons ≥18 years of age who are living in a private household in Germany, have sufficient knowledge of the German language, and can be contacted via landline telephone. The GEDA study protocol was approved by Germany’s federal and regional data-protection commissioners. All data were collected and analysed in an anonymous manner.\nIn this study we present data from GEDA10 which was conducted between 22 September 2009 and 10 July 2010. Since the annual GEDA survey was not conducted in 2010/2011, we conducted a follow-up interview among a subsample of 2.493 GEDA10 respondents (from now on referred to as the GEDA follow-up survey) from 1 April to 2 July 2011 to assess seasonal influenza vaccine uptake for the post pandemic season 2010/11. Based on a sample size calculation, 385 subjects were needed to estimate a prevalence of 50% (“worst case scenario”) vaccine uptake with a confidence interval of +/− 5%. Our sample size of ~2,400 subjects for the follow-up survey was based on the premise to estimate a prevalence (i.e. vaccination coverage) of 50% for up to six subgroups (=cells; 6*385≈2400).\nSince both surveys were conducted by RKI, the ownership of the data lies with RKI and we did not have to obtain permission to use the data for this study. Similar to the previous GEDA survey (GEDA09), a public use file of the GEDA10 dataset will be provided soon. Data from the GEDA follow-up survey will not be openly available.\nTo control for possible selection biases, weighting factors for the GEDA10 sample were constructed by taking age, sex, educational status, geographical region, and household size into consideration. Potential participants of the follow-up survey were sampled disproportionally to their weighting factors in GEDA10. We applied this method to avoid that groups which were already underrepresented in GEDA10 become underrepresented again at the sampling stage in the follow-up sample and thus to prevent further bias. Weighting factors for the follow-up survey were constructed in a first step on the basis of the values calculated for GEDA10 (for age, sex, education, adipositas, smoking status, subjective health, physical activity, employment status), and, in a second step, on population data gathered in the Microcensus 2008 [17], taking geographical region, age, sex, and educational status into account.\nInformation on seasonal influenza vaccination status for season 2008/2009 was collected from all participants of GEDA10. 
Between 1 January 2010 and 10 July 2010 respondents were additionally asked to provide information on seasonal and pandemic influenza vaccination status for season 2009/10. Unvaccinated survey participants of GEDA10 were additionally asked to state their reasons for not receiving a pandemic influenza vaccination. Information on seasonal vaccination status for season 2010/11 was collected from all respondents of the GEDA follow-up survey. In the follow-up survey, respondents were additionally asked by whom they were vaccinated (general practitioner/other physician in private practice/occupational physician/other) and in which month they received the vaccination against seasonal influenza in season 2010/11. One should note that GEDA10 does not cover the paediatric population (aged 0–17 years) for which pandemic vaccination was also recommended in Germany. Information on vaccination coverage in this particular age-group was therefore not available from this data source.\nWe calculated the response for GEDA10 by using Response Rate 3 as defined by the American Association for Public Opinion Research (AAPOR) [18]. Response Rate 3 is the proportion of the number of complete interviews divided by the number of interviews plus the number of non-interviews (refusal and break-off plus non-contacts plus others) plus cases of unknown eligibility. For cases of unknown eligibility Response Rate 3 estimates what proportion of cases of unknown eligibility is actually eligible. This estimation is based on the proportion of eligible households among all numbers for which a definitive determination of status was obtained (hence a very conservative estimate). We additionally calculated the cooperation rate at respondent level, which is defined as the proportion of all respondents interviewed of all respondents ever contacted [18]. Since the GEDA follow-up survey was not a random digit dialling study (as GEDA10), we reported the minimal response rate (Response Rate 1 as defined by AAPOR, [18]) for the follow-up survey.\n Definition of variables Socio-economic status levels were created as described by Lampert and Kroll on the basis of self-reported educational, income, and professional status of survey respondents [19]. 
In accordance with the STIKO-recommendations [4], persons were classified into the target groups for seasonal influenza vaccination in our study if they reported (1) to be ≥60 years of age, (2) to have at least one underlying chronic disease (defined here as having a chronic underlying respiratory, cardiovascular, liver, or renal disease, cancer, or diabetes), or (3) to work as HCW. Since female respondents of child-bearing age were not asked whether they had been pregnant during the last influenza season, it was not possible to include pregnant women as target group in our analysis. The geographic region category ‘Western Federal States’ (WFS) comprised the federal states Schleswig-Holstein, Bremen, Hamburg, Lower Saxony, Hesse, Rhineland-Palatinate, Saarland, North Rhine-Westphalia, Baden-Württemberg and Bavaria; ‘Eastern Federal States’ (EFS) comprised Mecklenburg-Vorpommern, Brandenburg, Berlin, Saxony-Anhalt, Thuringia and Saxony.\n Statistical analysis Data analysis was performed using PASW 18.0 for Windows (SPSS Inc., Chicago, USA). Proportions were calculated by using procedures for the analysis of complex samples. Univariate analyses were conducted to determine associations between pandemic influenza vaccine uptake and socio-demographic, health-related and professional factors. A p-value ≤0.05 was considered to indicate a statistically significant difference. Odds ratios (OR) and 95% confidence intervals (CI) were calculated as appropriate. Multivariable analysis was performed by entering variables potentially associated with vaccine uptake (p-value <0.2 in univariate analysis) into a multivariable logistic regression model in a first step, followed by step-wise backward removal of variables with a p-value >0.05 to produce a final model. Interaction terms were included to account for effect modification between independent variables.", "The GEDA survey design has been described previously [12,13,16]. In brief, GEDA is a large annual telephone survey which is conducted by the Robert Koch Institute (RKI) as a part of Germany’s national health monitoring. The study population consists of persons ≥18 years of age who are living in a private household in Germany, have sufficient knowledge of the German language, and can be contacted via landline telephone. The GEDA study protocol was approved by Germany’s federal and regional data-protection commissioners. All data were collected and analysed in an anonymous manner.\nIn this study we present data from GEDA10 which was conducted between 22 September 2009 and 10 July 2010. Since the annual GEDA survey was not conducted in 2010/2011, we conducted a follow-up interview among a subsample of 2.493 GEDA10 respondents (from now on referred to as the GEDA follow-up survey) from 1 April to 2 July 2011 to assess seasonal influenza vaccine uptake for the post pandemic season 2010/11. Based on a sample size calculation, 385 subjects were needed to estimate a prevalence of 50% (“worst case scenario”) vaccine uptake with a confidence interval of +/− 5%. Our sample size of ~2,400 subjects for the follow-up survey was based on the premise to estimate a prevalence (i.e. vaccination coverage) of 50% for up to six subgroups (=cells; 6*385≈2400).\nSince both surveys were conducted by RKI, the ownership of the data lies with RKI and we did not have to obtain permission to use the data for this study. Similar to the previous GEDA survey (GEDA09), a public use file of the GEDA10 dataset will be provided soon. Data from the GEDA follow-up survey will not be openly available.\nTo control for possible selection biases, weighting factors for the GEDA10 sample were constructed by taking age, sex, educational status, geographical region, and household size into consideration. Potential participants of the follow-up survey were sampled disproportionally to their weighting factors in GEDA10. We applied this method to avoid that groups which were already underrepresented in GEDA10 become underrepresented again at the sampling stage in the follow-up sample and thus to prevent further bias. Weighting factors for the follow-up survey were constructed in a first step on the basis of the values calculated for GEDA10 (for age, sex, education, adipositas, smoking status, subjective health, physical activity, employment status), and, in a second step, on population data gathered in the Microcensus 2008 [17], taking geographical region, age, sex, and educational status into account.\nInformation on seasonal influenza vaccination status for season 2008/2009 was collected from all participants of GEDA10. Between 1 January 2010 and 10 July 2010 respondents were additionally asked to provide information on seasonal and pandemic influenza vaccination status for season 2009/10. Unvaccinated survey participants of GEDA10 were additionally asked to state their reasons for not receiving a pandemic influenza vaccination. 
Information on seasonal vaccination status for season 2010/11 was collected from all respondents of the GEDA follow-up survey. In the follow-up survey, respondents were additionally asked by whom they were vaccinated (general practitioner/other physician in private practice/occupational physician/other) and in which month they received the vaccination against seasonal influenza in season 2010/11. One should note that GEDA10 does not cover the paediatric population (aged 0–17 years) for which pandemic vaccination was also recommended in Germany. Information on vaccination coverage in this particular age-group was therefore not available from this data source.\nWe calculated the response for GEDA10 by using Response Rate 3 as defined by the American Association for Public Opinion Research (AAPOR) [18]. Response Rate 3 is the proportion of the number of complete interviews divided by the number of interviews plus the number of non-interviews (refusal and break-off plus non-contacts plus others) plus cases of unknown eligibility. For cases of unknown eligibility Response Rate 3 estimates what proportion of cases of unknown eligibility is actually eligible. This estimation is based on the proportion of eligible households among all numbers for which a definitive determination of status was obtained (hence a very conservative estimate). We additionally calculated the cooperation rate at respondent level, which is defined as the proportion of all respondents interviewed of all respondents ever contacted [18]. Since the GEDA follow-up survey was not a random digit dialling study (as GEDA10), we reported the minimal response rate (Response Rate 1 as defined by AAPOR, [18]) for the follow-up survey.", "Socio-economic status levels were created as described by Lampert and Kroll on the basis of self-reported educational, income, and professional status of survey respondents [19]. In accordance with the STIKO-recommendations [4], persons were classified into the target groups for seasonal influenza vaccination in our study if they reported (1) to be ≥60 years of age, (2) to have at least one underlying chronic disease (defined here as having a chronic underlying respiratory, cardiovascular, liver, or renal disease, cancer, or diabetes), or (3) to work as HCW. Since female respondents of child-bearing age were not asked whether they had been pregnant during the last influenza season, it was not possible to include pregnant women as target group in our analysis. The geographic region category ‘Western Federal States’ (WFS) comprised the federal states Schleswig-Holstein, Bremen, Hamburg, Lower Saxony, Hesse, Rhineland-Palatinate, Saarland, North Rhine-Westphalia, Baden-Württemberg and Bavaria; ‘Eastern Federal States’ (EFS) comprised Mecklenburg-Vorpommern, Brandenburg, Berlin, Saxony-Anhalt, Thuringia and Saxony.", "Data analysis was performed using PASW 18.0 for Windows (SPSS Inc., Chicago, USA). Proportions were calculated by using procedures for the analysis of complex samples. Univariate analyses were conducted to determine associations between pandemic influenza vaccine uptake and socio-demographic, health-related and professional factors. A p-value ≤0.05 was considered to indicate a statistically significant difference. Odds ratios (OR) and 95% confidence intervals (CI) were calculated as appropriate. 
Multivariable analysis was performed by entering variables potentially associated with vaccine uptake (p-value <0.2 in univariate analysis) into a multivariable logistic regression model in a first step, followed by step-wise backward removal of variables with a p-value >0.05 to produce a final model. Interaction terms were included to account for effect modification between independent variables.", " Sample characteristics In total, 22,050 telephone interviews were conducted during the study period of GEDA10 and 2,493 participants were re-interviewed for the follow-up survey. An overview of the survey populations is given in Table 1. The median age was 48.0 years (range 18–99 years) in GEDA10 and 49.7 years (range 19–96 years) in the follow-up survey. Response Rate 3 was 28.9% in GEDA10; the cooperation rate at respondent level was 55.8%. Response Rate 1 was 75.0% in the follow up survey.\nCharacteristics of participants in the ‘German Health Update Survey’ (GEDA10) and the GEDA10 follow-up survey\n*Weighted data.\n Seasonal influenza vaccination coverage Information on seasonal influenza vaccination status was available for over 99.8% in each of the study samples for the three seasons under investigation (seasons 2008/09-2010/11). Vaccination coverage for the three seasons by sex, age group, place of residence, and target group is presented in Table 2. To allow comparability with international studies, seasonal influenza vaccine uptake among ≥65 year-olds was additionally calculated and revealed a coverage of 56.1% (95% CI: 54.0-58.2) in season 2008/09, 50.2% (95% CI: 47.5-52.9) in season 2009/10, and 54.2% (95% CI: 48.4-59.8) in season 2010/11. For influenza season 2009/10, vaccination coverage was calculated by age-groups in decades (Figure 1). Vaccine uptake in both the target population for seasonal influenza vaccination (defined here as persons who have an underlying chronic disease, or work as HCW) and non-target population increased with age and was highest in persons ≥70 years. The vast majority (97.8%) of vaccinated persons had received their influenza vaccination for season 2010/11 by the end of December 2010.\nSeasonal influenza vaccine uptake by sex, age group, place of residence and target group, seasons 2008/09 − 2010/11, Germany\n*Weighted data.\nSeasonal vaccination coverage by age group and target group, season 2009/10.\nFigure 2 shows trends in seasonal influenza vaccine uptake among the three different target groups and the non-target group for four consecutive seasons (2007/08 to 2010/11; results for season 2007/08 according to a previously published analysis of GEDA 2009 data [12]). While vaccination coverage slightly decreased in persons aged ≥60 years and in persons with underlying chronic diseases between seasons 2007/08 and 2009/10, there was an increase in vaccine uptake among HCWs from season 2007/08 to 2008/09. In all subgroups under investigation, vaccine uptake for season 2009/10 was higher in the follow-up sample (empty symbols) compared to the GEDA10 sample (filled symbols). Considering only the results from the follow-up survey (Figure 2), a significant decrease in seasonal influenza vaccination coverage between seasons 2009/10 and 2010/11 was observed for persons with underlying chronic conditions (p=0.04), HCWs (p=0.03), and persons not targeted for seasonal influenza vaccination (p<0.01). For persons ≥60 years of age there was also a decrease, but without reaching statistical significance. In season 2010/11, 83.4% of respondents who received a seasonal influenza shot were vaccinated by their general practitioner, 4.7% by another physician in private practice (e.g. gynaecologist, paediatrician), 9.4% by an occupational physician, 0.4% by a hospital physician, and 2.1% by any other physician. Of those who were vaccinated by a general practitioner or by an occupational physician, 18.4% and 59.4% did not belong to a target group, respectively.\nTrends in seasonal vaccine uptake in target groups in Germany for seasons 2007/08−2010/11 according to GEDA09 and GEDA10(filled symbols)and a follow-up sample of GEDA10(empty symbols).1 Data source: GEDA09 (n=15,552) [12]; 2 Data source: GEDA10 (n=22,009); 3 Data source: GEDA10 (n=13,040); 4 Data source: GEDA follow-up survey (n=2,492).\n Pandemic influenza vaccination coverage Information on pandemic influenza vaccine uptake was available for 99.9% of the respective study population in GEDA 10 (n=13,048). Vaccination coverage by sex, age group, place of residence, socio-economic status and different target groups for seasonal influenza vaccination is presented in Table 3. In total, 8.8% (95% CI: 8.2-9.5) of the general adult population in Germany received a vaccination against pandemic influenza. With 11.2% (95% CI: 10.2-12.3) pandemic vaccine uptake was significantly higher in persons belonging to the target group for seasonal influenza vaccination as compared to the non-seasonal influenza target group (6.4%; 95% CI: 5.7-7.1; p<0.001).\nDeterminants of pandemic influenza vaccine uptake, Germany, season 2009/10\n#weighted data; * p<0.05; ** p<0.001; ref=reference category; n.s.= not significant; WFS=Western federal states, EFS=Eastern federal states.\np-value for interaction between agegroup*seasonal influenza vaccination status: 0.011.\nThe most frequently reported reasons for not receiving the vaccination were (1) ‘fear of side effects of pandemic vaccines’ (stated by 37.2%; 95% CI: 36.1-38.3), (2) ‘pandemic vaccination is not necessary’ (33.8%; 95% CI: 32.7-34.9), (3) ‘pandemic vaccination not officially recommended for me’ (16.6%; 95% CI: 15.8-17.5), and (4) ‘reject vaccinations in general’ (8.5%; 95% CI: 7.8-9.2).\n Factors associated with pandemic influenza vaccine uptake Results of univariate and multivariable analysis of factors potentially associated with pandemic influenza vaccine uptake are shown in Table 3. Having received a seasonal influenza vaccination in the previous season (season 2008/09) was the strongest independent predictor of pandemic influenza vaccination. However, this effect differed by age group and we therefore included an interaction term in the final model. Additionally, working as HCW, having a chronic disease, high socioeconomic status, and being male were significantly associated with higher uptake in multivariable analysis.", "In total, 22,050 telephone interviews were conducted during the study period of GEDA10 and 2,493 participants were re-interviewed for the follow-up survey. An overview of the survey populations is given in Table 1. The median age was 48.0 years (range 18–99 years) in GEDA10 and 49.7 years (range 19–96 years) in the follow-up survey. Response Rate 3 was 28.9% in GEDA10; the cooperation rate at respondent level was 55.8%. Response Rate 1 was 75.0% in the follow up survey.\nCharacteristics of participants in the ‘German Health Update Survey’ (GEDA10) and the GEDA10 follow-up survey\n*Weighted data.", "Information on seasonal influenza vaccination status was available for over 99.8% in each of the study samples for the three seasons under investigation (seasons 2008/09-2010/11). Vaccination coverage for the three seasons by sex, age group, place of residence, and target group is presented in Table 2. To allow comparability with international studies, seasonal influenza vaccine uptake among ≥65 year-olds was additionally calculated and revealed a coverage of 56.1% (95% CI: 54.0-58.2) in season 2008/09, 50.2% (95% CI: 47.5-52.9) in season 2009/10, and 54.2% (95% CI: 48.4-59.8) in season 2010/11. For influenza season 2009/10, vaccination coverage was calculated by age-groups in decades (Figure 1). 
Vaccine uptake in both the target population for seasonal influenza vaccination (defined here as persons who have an underlying chronic disease, or work as HCW) and non-target population increased with age and was highest in persons ≥70 years. The vast majority (97.8%) of vaccinated persons had received their influenza vaccination for season 2010/11 by the end of December 2010.\nSeasonal influenza vaccine uptake by sex, age group, place of residence and target group, seasons 2008/09 − 2010/11, Germany\n*Weighted data.\nSeasonal vaccination coverage by age group and target group, season 2009/10.\nFigure 2 shows trends in seasonal influenza vaccine uptake among the three different target groups and the non-target group for four consecutive seasons (2007/08 to 2010/11; results for season 2007/08 according to a previously published analysis of GEDA 2009 data [12]). While vaccination coverage slightly decreased in persons aged ≥60 years and in persons with underlying chronic diseases between seasons 2007/08 and 2009/10, there was an increase in vaccine uptake among HCWs from season 2007/08 to 2008/09. In all subgroups under investigation, vaccine uptake for season 2009/10 was higher in the follow-up sample (empty symbols) compared to the GEDA10 sample (filled symbols). Considering only the results from the follow-up survey (Figure 2), a significant decrease in seasonal influenza vaccination coverage between seasons 2009/10 and 2010/11 was observed for persons with underlying chronic conditions (p=0.04), HCWs (p=0.03), and persons not targeted for seasonal influenza vaccination (p<0.01). For persons ≥60 years of age there was also a decrease, but without reaching statistical significance. In season 2010/11, 83.4% of respondents who received a seasonal influenza shot were vaccinated by their general practitioner, 4.7% by another physician in private practice (e.g. gynaecologist, paediatrician), 9.4% by an occupational physician, 0.4% by a hospital physician, and 2.1% by any other physician. Of those who were vaccinated by a general practitioner or by an occupational physician, 18.4% and 59.4% did not belong to a target group, respectively.\nTrends in seasonal vaccine uptake in target groups in Germany for seasons 2007/08−2010/11 according to GEDA09 and GEDA10(filled symbols)and a follow-up sample of GEDA10(empty symbols).1 Data source: GEDA09 (n=15,552) [12]; 2 Data source: GEDA10 (n=22,009); 3 Data source: GEDA10 (n=13,040); 4 Data source: GEDA follow-up survey (n=2,492).", "Information on pandemic influenza vaccine uptake was available for 99.9% of the respective study population in GEDA 10 (n=13,048). Vaccination coverage by sex, age group, place of residence, socio-economic status and different target groups for seasonal influenza vaccination is presented in Table 3. In total, 8.8% (95% CI: 8.2-9.5) of the general adult population in Germany received a vaccination against pandemic influenza. 
With 11.2% (95% CI: 10.2-12.3) pandemic vaccine uptake was significantly higher in persons belonging to the target group for seasonal influenza vaccination as compared to the non-seasonal influenza target group (6.4%; 95% CI: 5.7-7.1; p<0.001).\nDeterminants of pandemic influenza vaccine uptake, Germany, season 2009/10\n#weighted data; * p<0.05; ** p<0.001; ref=reference category; n.s.= not significant; WFS=Western federal states, EFS=Eastern federal states.\np-value for interaction between agegroup*seasonal influenza vaccination status: 0.011.\nThe most frequently reported reasons for not receiving the vaccination were (1) ‘fear of side effects of pandemic vaccines’ (stated by 37.2%; 95% CI: 36.1-38.3), (2) ‘pandemic vaccination is not necessary’ (33.8%; 95% CI: 32.7-34.9), (3) ‘pandemic vaccination not officially recommended for me’ (16.6%; 95% CI: 15.8-17.5), and (4) ‘reject vaccinations in general’ (8.5%; 95% CI: 7.8-9.2).", "Results of univariate and multivariable analysis of factors potentially associated with pandemic influenza vaccine uptake are shown in Table 3. Having received a seasonal influenza vaccination in the previous season (season 2008/09) was the strongest independent predictor of pandemic influenza vaccination. However, this effect differed by age group and we therefore included an interaction term in the final model. Additionally, working as HCW, having a chronic disease, high socioeconomic status, and being male were significantly associated with higher uptake in multivariable analysis.", "The aim of this study was to assess the uptake of seasonal influenza vaccines in specific target groups for seasons 2008/09 and 2009/10, as well as for pandemic influenza vaccines during the pandemic season 2009/10 in Germany in the total adult population by using data from a large population-representative telephone survey. By using data from a smaller follow-up survey, our study moreover provides the only so far available data on seasonal influenza vaccination coverage in Germany for the post-pandemic season 2010/11. Overall, only 8.8% of the adult population in Germany followed the official recommendation and received a vaccination against pandemic influenza in season 2009/10. The follow-up survey revealed a decrease in seasonal influenza vaccine uptake in the first post-pandemic season across all target groups when compared to the pre-pandemic season 2008/09, most prominent among HCW. With an average coverage of 50% in the elderly, 41% in the chronically ill, and 28% in HCW in seasons 2008/09 to 2010/11, the EU goal of reaching a seasonal influenza vaccination coverage of at least 75% in the target groups [20] has not yet been achieved in Germany.\nHaving received a seasonal influenza shot in the pre-pandemic season was the strongest predictor for receiving pandemic influenza vaccination in our study. The high correlation between seasonal and pandemic influenza vaccine uptake highlights the significance of habitual behaviour with regard to influenza vaccination decisions. In addition and independent from this factor, persons belonging to at least one of the recommended target groups for seasonal influenza vaccination were significantly more likely to receive a pandemic influenza vaccination than persons not belonging to a target group. Our results are broadly in line with the findings of a prospective monitoring survey on pandemic influenza vaccination in Germany [6] and two reviews investigating determinants of pandemic vaccine uptake [21,22]. 
Prior seasonal vaccination was not only found to be positively associated with the intention to receive the pandemic vaccination among adults in several industrialized countries (e.g. in the UK [23], France [24], Australia [25], and the US [26]) but also with the actual receipt of the vaccination (e.g. [6,27,28]). When developing vaccination strategies for future pandemic situations one should therefore consider targeted strategies for enhancing coverage among those who do not fall within the target groups for seasonal influenza vaccination and thus do not regularly receive a seasonal influenza shot. A further opportunity to enhance compliance with national recommendations and therefore vaccination coverage in a future pandemic situation could be to increase seasonal vaccine uptake in the target groups. However, major reasons for not being vaccinated were the perception that vaccination was not necessary or not safe. It can be assumed that both reasons will not be barriers to high pandemic vaccine uptake in a future pandemic setting if the mortality is much higher than during the 2009/10 pandemic.\nA higher uptake of seasonal influenza vaccines in season 2009/10 was observed in the follow-up survey population when compared to the total GEDA10 study population in 2009/10. It is therefore very likely that the point estimates for seasonal influenza vaccination coverage for the 2010/11 season, which were based on data from the same follow-up survey, were also overestimated. Taking into consideration that acceptance of seasonal influenza vaccination was higher in the follow-up survey population, our study results suggest that seasonal influenza vaccine uptake in the recommended target groups in Germany has decreased in the post-pandemic season 2010/11, not only in comparison to season 2008/09 but also to the pandemic season 2009/10 (compare Figure 2, empty symbols). Hence, our findings are discordant with observations made in several other industrialised countries. For instance, seasonal influenza vaccination coverage in the UK remained stable between seasons 2009/10 and 2010/11 among at risk persons under 65 years of age (51.6% vs. 50.4% vaccination coverage) as well as among persons aged ≥65 years (72.4% vs. 72.8%) [29]. Among high-risk persons aged 18–64 years living in the US, seasonal influenza vaccine uptake was 46.2% in the pandemic and 46.7% in the post-pandemic season [30]. However, in both countries acceptance and uptake of pandemic influenza vaccination was higher as compared to Germany (UK: 37.6% vaccination coverage in clinical risk groups [29]; US: 41.2% among all persons aged ≥6 month [31]). In France, despite the poor uptake of pandemic influenza vaccines (11.1%), an increase in seasonal influenza vaccine uptake was observed in the post-pandemic season among persons aged ≥65 years with underlying chronic conditions (62.6% in season 2009/10 [27] vs. 71.0% in season 2010/11 [32]). In the upcoming years, the uptake of seasonal influenza vaccines should be carefully monitored in Germany in all target groups to identify if this trend continues. 
Especially the strong decrease in vaccination coverage among HCW is of concern, and communication activities should be strengthened especially for this target group not only to achieve individual protection of this target group but also to protect vulnerable patients managed by HCW.\nIn our study ‘fear of side effects’ was found to be the most frequently stated reason for rejecting pandemic vaccination, thereby confirming findings of 13 smaller consecutive surveys carried out during the pandemic in Germany [33]. Conversely, believing that the pandemic vaccine is safe was significantly associated with the receipt of the pandemic vaccine in many countries worldwide [21]. Appropriate addressing of vaccine safety concerns by public health authorities may be an important factor to maintain public trust in national vaccination recommendations and beyond that to enhance vaccine uptake in future pandemic situations [22,33].\nIn our study, 8.5% of those who did not receive a pandemic influenza vaccination stated that they reject vaccinations in general. This translates into a proportion of 7.7% for the total study population. Little is known about the exact proportion of vaccination opponents among the general adult population in Germany. In a recent survey performed by the German Federal Centre for Health Education (BZgA) among 3,002 parents of children aged 0–13 years, 35% stated that they reject particular vaccinations for their children, but only 1% of parents reject vaccinations in general [34]. Since our study did not focus on the general rejection of vaccinations in the population, we did not ask further detailed questions related to this topic to verify this attitude and the underlying reasons. Therefore, this figure must be interpreted with caution.\nOur study has some limitations that need to be acknowledged. Calculation of influenza vaccination coverage was based on self-reported vaccination status and may therefore be prone to recall problems. However, it was shown in several studies that self-report of influenza vaccination status has an adequate degree of validity [35,36]. Furthermore, the response rate in GEDA10 was comparatively low at 29%. However, it should be noted that the chosen method of calculating the response rate (namely Response Rate 3 as defined by AAPOR [18]) is a very conservative approach and that our response rate is comparable to studies using the same approach (e.g. CDC-Behavioral Risk Factor Surveillance Rates Report [37]). Finally, it cannot be ruled out that other reasons than the controversial discussions on the pandemic vaccination have also contributed to the observed drop in seasonal influenza vaccination coverage in 2010/11.", "In conclusion, poor compliance with official vaccination recommendation resulting in low uptake of pandemic influenza vaccines during the pandemic season 2009/10 suggests that public communication strategies and vaccination campaigns during the influenza A(H1N1)pdm09 pandemic in Germany were not successful. In addition, our results raise concerns that controversial discussions about the safety and necessity of pandemic influenza vaccines may have contributed to decreased seasonal influenza vaccine uptake in the first post-pandemic season. It is therefore crucial to develop concerted communication strategies based on the lessons learned from the 2009/10 influenza pandemic and to include them in the national pandemic preparedness plan. 
This should be done, not only with respect to a competent handling of pandemic situations but also to avoid a decrease in the acceptance of vaccinations in general. In this respect, communication strategies and different modes of communication to specific target groups should be evaluated and implemented already in non-crisis situations, to be enhanced during a pandemic influenza situation or other public health crisis. This is of particular importance, since seasonal influenza vaccine uptake in the recommended target groups in Germany stagnated at a low level since 2005 [38] and does by far not meet the EU goal of 75% [20]. Further studies should be conducted to monitor the trends of seasonal influenza vaccine uptake in Germany in the specific target groups including pregnant women (which is a target group for seasonal influenza vaccination since 2010) and to precisely identify barriers to influenza vaccination in the upcoming years which might differ from the pandemic and the first post-pandemic season. This information would be crucial to guide public health authorities in developing more effective communication strategies for seasonal influenza vaccination tailored to specific target groups.", "The authors have declared no conflict of interest.", "All authors made substantial contributions to the study. SM was involved in the development of the GEDA10 study design and contributed to the Methods section. MMB, DW, and OW developed the study design of the GEDA10 follow-up survey. MMB analysed the data in consultation with DW, GF, OW and GK. MMB wrote the draft version of the manuscript. All authors have read, carefully reviewed and approved the final version of the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/12/938/prepub\n" ]
[ null, "methods", null, null, null, "results", null, null, null, null, "discussion", "conclusions", null, null, null ]
[ "Vaccination", "Influenza", "Coverage", "Pandemic", "Germany" ]
Background: In Germany, annual influenza epidemics usually occur during the winter months December to March. In the last decade, an estimated zero to 19,000 excess deaths per year were attributable to influenza virus infections [1]. Moreover, approximately one to six million influenza-related excess physician consultations per season were estimated for Germany [2]. Severe influenza virus infections or influenza-related complications typically occur in the very young and elderly population as well as in persons with underlying chronic medical conditions. Annual vaccination has proven to be an effective method to reduce the burden of influenza disease [3]. In Germany, vaccination against seasonal influenza is recommended by the Standing Committee on Vaccination (STIKO) for individuals who have either an increased risk to develop severe influenza disease (i.e. persons aged ≥60 years, pregnant women, and persons with certain chronic medical conditions) or who are likely to transmit the virus to vulnerable groups (e.g. health care workers (HCW)) [4]. Vaccination is free of charge for the target groups in Germany. During the influenza pandemic 2009/10, STIKO additionally recommended vaccination with a monovalent vaccine against the pandemic influenza virus strain A(H1N1)pdm09 for the whole population. Due to expected limitations in vaccine supplies at the beginning of the vaccination campaign, STIKO defined and ranked priority groups for the pandemic vaccination: 1) HCW, 2) persons with underlying chronic conditions, 3) pregnant women, 4) household contacts of vulnerable persons, 5) all other persons aged 6 months to 24 years, 6) all other persons aged 25–59 years, 7) all other person aged 60 years and above [5]. The pandemic vaccination campaign started in Germany on 26 October 2009 [6]. The AS03-adjuvanted monovalent vaccine Pandemrix® was almost exclusively used and available in sufficient quantities [7]. During the pandemic, vaccination against A(H1N1)pdm09 was subject to controversial discussions, not only in Germany but also in many other countries worldwide [7-9]. Main topics of the debate in the media and among experts and ‘self-proclaimed experts’ were vaccine safety, effectiveness, and concern that there were too little data on the new vaccines or vaccine ingredients (especially new adjuvants) available. Moreover, the general necessity of vaccination in view of the relative mildness of the pandemic influenza disease was called into question [7,9-11]. As a result, compliance with the national recommendations for pandemic vaccination was very poor in Germany. Since Germany has no central immunization registry, information on vaccination coverage (and factors influencing coverage) is only available from telephone and household surveys [12-15]. According to the results of thirteen consecutive cross-sectional telephone surveys (total n=13,010) conducted during the pandemic, only 8.1% of the general population aged ≥14 years living in Germany received a vaccine against pandemic influenza [6]. To develop target group specific communication strategies and to enhance compliance with the official recommendations it is important to monitor vaccine uptake in each of the target groups and to understand factors that influence uptake. This applies not only for annual influenza vaccination campaigns but also for the planning of future vaccination campaigns during a pandemic. 
The influence of the 2009/2010 pandemic situation on seasonal influenza vaccine uptake in Germany in the post-pandemic seasons has so far not been investigated. For this purpose, we utilized data from the large (~22,000 respondents) ‘German Health Update 2010’ (GEDA10) telephone survey and a smaller GEDA10 follow-up survey (~2,500 respondents). The objectives of our study were (1) to assess seasonal influenza vaccination coverage for seasons 2008/09 to 2010/11, (2) to assess pandemic influenza vaccination coverage for season 2009/10, (3) to identify predictors of and barriers to pandemic vaccine uptake, and (4) to detect a potential influence of the pandemic situation on seasonal influenza vaccine uptake in the first post-pandemic season (2010/11). Methods: Study population and survey design The GEDA survey design has been described previously [12,13,16]. In brief, GEDA is a large annual telephone survey which is conducted by the Robert Koch Institute (RKI) as a part of Germany’s national health monitoring. The study population consists of persons ≥18 years of age who are living in a private household in Germany, have sufficient knowledge of the German language, and can be contacted via landline telephone. The GEDA study protocol was approved by Germany’s federal and regional data-protection commissioners. All data were collected and analysed in an anonymous manner. In this study we present data from GEDA10 which was conducted between 22 September 2009 and 10 July 2010. Since the annual GEDA survey was not conducted in 2010/2011, we conducted a follow-up interview among a subsample of 2.493 GEDA10 respondents (from now on referred to as the GEDA follow-up survey) from 1 April to 2 July 2011 to assess seasonal influenza vaccine uptake for the post pandemic season 2010/11. Based on a sample size calculation, 385 subjects were needed to estimate a prevalence of 50% (“worst case scenario”) vaccine uptake with a confidence interval of +/− 5%. Our sample size of ~2,400 subjects for the follow-up survey was based on the premise to estimate a prevalence (i.e. vaccination coverage) of 50% for up to six subgroups (=cells; 6*385≈2400). Since both surveys were conducted by RKI, the ownership of the data lies with RKI and we did not have to obtain permission to use the data for this study. Similar to the previous GEDA survey (GEDA09), a public use file of the GEDA10 dataset will be provided soon. Data from the GEDA follow-up survey will not be openly available. To control for possible selection biases, weighting factors for the GEDA10 sample were constructed by taking age, sex, educational status, geographical region, and household size into consideration. Potential participants of the follow-up survey were sampled disproportionally to their weighting factors in GEDA10. We applied this method to avoid that groups which were already underrepresented in GEDA10 become underrepresented again at the sampling stage in the follow-up sample and thus to prevent further bias. Weighting factors for the follow-up survey were constructed in a first step on the basis of the values calculated for GEDA10 (for age, sex, education, adipositas, smoking status, subjective health, physical activity, employment status), and, in a second step, on population data gathered in the Microcensus 2008 [17], taking geographical region, age, sex, and educational status into account. Information on seasonal influenza vaccination status for season 2008/2009 was collected from all participants of GEDA10. 
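The sample-size premise quoted earlier in this paragraph (385 subjects to estimate a 50% prevalence within ±5 percentage points, scaled to six subgroups) follows the standard formula for estimating a proportion, n = z²·p·(1−p)/d². A quick check:

```python
# Sample-size check for the "worst case" prevalence assumption described above.
from math import ceil

z = 1.96   # 95% confidence level
p = 0.50   # assumed prevalence (worst case for the variance)
d = 0.05   # desired half-width of the confidence interval (+/- 5 percentage points)

n_per_cell = ceil(z**2 * p * (1 - p) / d**2)   # 384.16 -> 385 subjects per subgroup
print(n_per_cell, 6 * n_per_cell)              # 385 and 2,310, i.e. roughly the ~2,400 quoted
```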
Between 1 January 2010 and 10 July 2010 respondents were additionally asked to provide information on seasonal and pandemic influenza vaccination status for season 2009/10. Unvaccinated survey participants of GEDA10 were additionally asked to state their reasons for not receiving a pandemic influenza vaccination. Information on seasonal vaccination status for season 2010/11 was collected from all respondents of the GEDA follow-up survey. In the follow-up survey, respondents were additionally asked by whom they were vaccinated (general practitioner/other physician in private practice/occupational physician/other) and in which month they received the vaccination against seasonal influenza in season 2010/11. One should note that GEDA10 does not cover the paediatric population (aged 0–17 years) for which pandemic vaccination was also recommended in Germany. Information on vaccination coverage in this particular age-group was therefore not available from this data source. We calculated the response for GEDA10 by using Response Rate 3 as defined by the American Association for Public Opinion Research (AAPOR) [18]. Response Rate 3 is the proportion of the number of complete interviews divided by the number of interviews plus the number of non-interviews (refusal and break-off plus non-contacts plus others) plus cases of unknown eligibility. For cases of unknown eligibility Response Rate 3 estimates what proportion of cases of unknown eligibility is actually eligible. This estimation is based on the proportion of eligible households among all numbers for which a definitive determination of status was obtained (hence a very conservative estimate). We additionally calculated the cooperation rate at respondent level, which is defined as the proportion of all respondents interviewed of all respondents ever contacted [18]. Since the GEDA follow-up survey was not a random digit dialling study (as GEDA10), we reported the minimal response rate (Response Rate 1 as defined by AAPOR, [18]) for the follow-up survey.
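Written out as arithmetic, the response-rate definitions given above take the following form. The counts in this sketch are invented placeholders (not the GEDA10 case counts); the formulas paraphrase the AAPOR Standard Definitions as summarised in the text.

```python
# AAPOR-style response-rate arithmetic with placeholder counts (not GEDA10 figures).
I  = 22_050     # complete interviews
R  = 30_000     # refusals and break-offs
NC = 15_000     # non-contacts
O  = 2_000      # other non-interviews
U  = 10_000     # cases of unknown eligibility
e  = 0.80       # estimated share of unknown-eligibility cases that are actually eligible

rr3 = I / (I + R + NC + O + e * U)        # Response Rate 3 (eligibility-adjusted)
rr1 = I / (I + R + NC + O + U)            # Response Rate 1, the minimal response rate
coop = I / (I + R + O)                    # cooperation rate: interviews / respondents ever contacted
print(f"RR3 = {rr3:.1%}, RR1 = {rr1:.1%}, cooperation rate = {coop:.1%}")
```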
Definition of variables Socio-economic status levels were created as described by Lampert and Kroll on the basis of self-reported educational, income, and professional status of survey respondents [19]. In accordance with the STIKO-recommendations [4], persons were classified into the target groups for seasonal influenza vaccination in our study if they reported (1) to be ≥60 years of age, (2) to have at least one underlying chronic disease (defined here as having a chronic underlying respiratory, cardiovascular, liver, or renal disease, cancer, or diabetes), or (3) to work as HCW. Since female respondents of child-bearing age were not asked whether they had been pregnant during the last influenza season, it was not possible to include pregnant women as target group in our analysis. The geographic region category ‘Western Federal States’ (WFS) comprised the federal states Schleswig-Holstein, Bremen, Hamburg, Lower Saxony, Hesse, Rhineland-Palatinate, Saarland, North Rhine-Westphalia, Baden-Württemberg and Bavaria; ‘Eastern Federal States’ (EFS) comprised Mecklenburg-Vorpommern, Brandenburg, Berlin, Saxony-Anhalt, Thuringia and Saxony. Statistical analysis Data analysis was performed using PASW 18.0 for Windows (SPSS Inc., Chicago, USA). Proportions were calculated by using procedures for the analysis of complex samples. Univariate analyses were conducted to determine associations between pandemic influenza vaccine uptake and socio-demographic, health-related and professional factors. A p-value ≤0.05 was considered to indicate a statistically significant difference. Odds ratios (OR) and 95% confidence intervals (CI) were calculated as appropriate.
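For a single dichotomous factor, the univariate odds ratio and its 95% confidence interval mentioned above can be obtained from a 2×2 table of vaccinated versus unvaccinated counts. The counts below are invented for illustration only; they are not GEDA10 results.

```python
# Odds ratio with a log-scale (Woolf) 95% confidence interval from a 2x2 table.
# All counts are illustrative placeholders, not GEDA10 data.
import math

a, b = 120, 880     # factor present (e.g. HCW): vaccinated / not vaccinated
c, d = 300, 5_700   # factor absent: vaccinated / not vaccinated

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```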
Multivariable analysis was performed by entering variables potentially associated with vaccine uptake (p-value <0.2 in univariate analysis) into a multivariable logistic regression model in a first step, followed by step-wise backward removal of variables with a p-value >0.05 to produce a final model. Interaction terms were included to account for effect modification between independent variables.
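The modelling strategy just described (logistic regression with an interaction term, followed by step-wise backward elimination at p>0.05) can be sketched as follows. This is an illustrative re-implementation on synthetic data with hypothetical variable names, not the original PASW/SPSS analysis, and the elimination loop is deliberately simplified.

```python
# Hedged sketch of the modelling strategy described above; synthetic data, hypothetical
# variable names, and a simplified backward-elimination loop (not the original SPSS code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "seasonal_vacc_0809": rng.integers(0, 2, n),   # prior seasonal vaccination (0/1)
    "age_group": rng.integers(0, 4, n),            # e.g. 18-34, 35-49, 50-64, 65+
    "hcw": rng.integers(0, 2, n),
    "chronic_disease": rng.integers(0, 2, n),
    "high_ses": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
})
linpred = -2.5 + 1.5 * df.seasonal_vacc_0809 + 0.5 * df.hcw + 0.4 * df.chronic_disease
df["pandemic_vacc"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

def fit(terms):
    return smf.logit("pandemic_vacc ~ " + " + ".join(terms), data=df).fit(disp=False)

# The interaction term is kept throughout (effect modification by age group);
# the remaining main effects are screened by backward elimination at p > 0.05.
terms = ["C(age_group) * seasonal_vacc_0809", "hcw", "chronic_disease", "high_ses", "male"]
model = fit(terms)
while True:
    droppable = [t for t in terms if "*" not in t]
    if not droppable:
        break
    pvals = {t: float(model.pvalues[t]) for t in droppable}
    worst = max(pvals, key=pvals.get)
    if pvals[worst] <= 0.05:
        break
    terms.remove(worst)
    model = fit(terms)

print(np.exp(model.params))   # odds ratios of the final model
```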
Results: Sample characteristics In total, 22,050 telephone interviews were conducted during the study period of GEDA10 and 2,493 participants were re-interviewed for the follow-up survey. An overview of the survey populations is given in Table 1. The median age was 48.0 years (range 18–99 years) in GEDA10 and 49.7 years (range 19–96 years) in the follow-up survey. Response Rate 3 was 28.9% in GEDA10; the cooperation rate at respondent level was 55.8%. Response Rate 1 was 75.0% in the follow-up survey. Characteristics of participants in the ‘German Health Update Survey’ (GEDA10) and the GEDA10 follow-up survey *Weighted data. Seasonal influenza vaccination coverage Information on seasonal influenza vaccination status was available for over 99.8% in each of the study samples for the three seasons under investigation (seasons 2008/09-2010/11). Vaccination coverage for the three seasons by sex, age group, place of residence, and target group is presented in Table 2. To allow comparability with international studies, seasonal influenza vaccine uptake among ≥65 year-olds was additionally calculated and revealed a coverage of 56.1% (95% CI: 54.0-58.2) in season 2008/09, 50.2% (95% CI: 47.5-52.9) in season 2009/10, and 54.2% (95% CI: 48.4-59.8) in season 2010/11. For influenza season 2009/10, vaccination coverage was calculated by age-groups in decades (Figure 1). Vaccine uptake in both the target population for seasonal influenza vaccination (defined here as persons who have an underlying chronic disease, or work as HCW) and non-target population increased with age and was highest in persons ≥70 years. The vast majority (97.8%) of vaccinated persons had received their influenza vaccination for season 2010/11 by the end of December 2010. Seasonal influenza vaccine uptake by sex, age group, place of residence and target group, seasons 2008/09 − 2010/11, Germany *Weighted data. Seasonal vaccination coverage by age group and target group, season 2009/10. Figure 2 shows trends in seasonal influenza vaccine uptake among the three different target groups and the non-target group for four consecutive seasons (2007/08 to 2010/11; results for season 2007/08 according to a previously published analysis of GEDA 2009 data [12]). While vaccination coverage slightly decreased in persons aged ≥60 years and in persons with underlying chronic diseases between seasons 2007/08 and 2009/10, there was an increase in vaccine uptake among HCWs from season 2007/08 to 2008/09.
In all subgroups under investigation, vaccine uptake for season 2009/10 was higher in the follow-up sample (empty symbols) compared to the GEDA10 sample (filled symbols). Considering only the results from the follow-up survey (Figure 2), a significant decrease in seasonal influenza vaccination coverage between seasons 2009/10 and 2010/11 was observed for persons with underlying chronic conditions (p=0.04), HCWs (p=0.03), and persons not targeted for seasonal influenza vaccination (p<0.01). For persons ≥60 years of age there was also a decrease, but without reaching statistical significance. In season 2010/11, 83.4% of respondents who received a seasonal influenza shot were vaccinated by their general practitioner, 4.7% by another physician in private practice (e.g. gynaecologist, paediatrician), 9.4% by an occupational physician, 0.4% by a hospital physician, and 2.1% by any other physician. Of those who were vaccinated by a general practitioner or by an occupational physician, 18.4% and 59.4% did not belong to a target group, respectively. Trends in seasonal vaccine uptake in target groups in Germany for seasons 2007/08−2010/11 according to GEDA09 and GEDA10 (filled symbols) and a follow-up sample of GEDA10 (empty symbols). 1 Data source: GEDA09 (n=15,552) [12]; 2 Data source: GEDA10 (n=22,009); 3 Data source: GEDA10 (n=13,040); 4 Data source: GEDA follow-up survey (n=2,492). Pandemic influenza vaccination coverage Information on pandemic influenza vaccine uptake was available for 99.9% of the respective study population in GEDA10 (n=13,048). Vaccination coverage by sex, age group, place of residence, socio-economic status and different target groups for seasonal influenza vaccination is presented in Table 3. In total, 8.8% (95% CI: 8.2-9.5) of the general adult population in Germany received a vaccination against pandemic influenza. At 11.2% (95% CI: 10.2-12.3), pandemic vaccine uptake was significantly higher in persons belonging to the target group for seasonal influenza vaccination than in the non-seasonal influenza target group (6.4%; 95% CI: 5.7-7.1; p<0.001). Determinants of pandemic influenza vaccine uptake, Germany, season 2009/10 # weighted data; * p<0.05; ** p<0.001; ref=reference category; n.s.=not significant; WFS=Western federal states, EFS=Eastern federal states. p-value for the interaction between age group and seasonal influenza vaccination status: 0.011. The most frequently reported reasons for not receiving the vaccination were (1) ‘fear of side effects of pandemic vaccines’ (stated by 37.2%; 95% CI: 36.1-38.3), (2) ‘pandemic vaccination is not necessary’ (33.8%; 95% CI: 32.7-34.9), (3) ‘pandemic vaccination not officially recommended for me’ (16.6%; 95% CI: 15.8-17.5), and (4) ‘reject vaccinations in general’ (8.5%; 95% CI: 7.8-9.2).
Factors associated with pandemic influenza vaccine uptake: Results of univariate and multivariable analysis of factors potentially associated with pandemic influenza vaccine uptake are shown in Table 3. Having received a seasonal influenza vaccination in the previous season (season 2008/09) was the strongest independent predictor of pandemic influenza vaccination. However, this effect differed by age group and we therefore included an interaction term in the final model. Additionally, working as HCW, having a chronic disease, high socioeconomic status, and being male were significantly associated with higher uptake in multivariable analysis. Discussion: The aim of this study was to assess the uptake of seasonal influenza vaccines in specific target groups for seasons 2008/09 and 2009/10, as well as for pandemic influenza vaccines during the pandemic season 2009/10 in Germany in the total adult population by using data from a large population-representative telephone survey. By using data from a smaller follow-up survey, our study moreover provides the only so far available data on seasonal influenza vaccination coverage in Germany for the post-pandemic season 2010/11. Overall, only 8.8% of the adult population in Germany followed the official recommendation and received a vaccination against pandemic influenza in season 2009/10. The follow-up survey revealed a decrease in seasonal influenza vaccine uptake in the first post-pandemic season across all target groups when compared to the pre-pandemic season 2008/09, most prominent among HCW. With an average coverage of 50% in the elderly, 41% in the chronically ill, and 28% in HCW in seasons 2008/09 to 2010/11, the EU goal of reaching a seasonal influenza vaccination coverage of at least 75% in the target groups [20] has not yet been achieved in Germany. Having received a seasonal influenza shot in the pre-pandemic season was the strongest predictor for receiving pandemic influenza vaccination in our study. The high correlation between seasonal and pandemic influenza vaccine uptake highlights the significance of habitual behaviour with regard to influenza vaccination decisions. In addition and independent from this factor, persons belonging to at least one of the recommended target groups for seasonal influenza vaccination were significantly more likely to receive a pandemic influenza vaccination than persons not belonging to a target group. Our results are broadly in line with the findings of a prospective monitoring survey on pandemic influenza vaccination in Germany [6] and two reviews investigating determinants of pandemic vaccine uptake [21,22]. Prior seasonal vaccination was not only found to be positively associated with the intention to receive the pandemic vaccination among adults in several industrialized countries (e.g. in the UK [23], France [24], Australia [25], and the US [26]) but also with the actual receipt of the vaccination (e.g. [6,27,28]). When developing vaccination strategies for future pandemic situations one should therefore consider targeted strategies for enhancing coverage among those who do not fall within the target groups for seasonal influenza vaccination and thus do not regularly receive a seasonal influenza shot. A further opportunity to enhance compliance with national recommendations and therefore vaccination coverage in a future pandemic situation could be to increase seasonal vaccine uptake in the target groups. 
However, the major reasons given for not being vaccinated were the perceptions that vaccination was not necessary or not safe. It can be assumed that neither reason would be a barrier to high pandemic vaccine uptake in a future pandemic setting with substantially higher mortality than the 2009/10 pandemic. A higher uptake of seasonal influenza vaccines in season 2009/10 was observed in the follow-up survey population than in the total GEDA10 study population. It is therefore very likely that the point estimates for seasonal influenza vaccination coverage in the 2010/11 season, which were based on data from the same follow-up survey, were also overestimated. Taking into consideration that acceptance of seasonal influenza vaccination was higher in the follow-up survey population, our study results suggest that seasonal influenza vaccine uptake in the recommended target groups in Germany decreased in the post-pandemic season 2010/11, not only in comparison to season 2008/09 but also to the pandemic season 2009/10 (compare Figure 2, empty symbols). Hence, our findings are discordant with observations made in several other industrialised countries. For instance, seasonal influenza vaccination coverage in the UK remained stable between seasons 2009/10 and 2010/11, both among at-risk persons under 65 years of age (51.6% vs. 50.4% vaccination coverage) and among persons aged ≥65 years (72.4% vs. 72.8%) [29]. Among high-risk persons aged 18–64 years living in the US, seasonal influenza vaccine uptake was 46.2% in the pandemic and 46.7% in the post-pandemic season [30]. However, in both countries acceptance and uptake of pandemic influenza vaccination were higher than in Germany (UK: 37.6% vaccination coverage in clinical risk groups [29]; US: 41.2% among all persons aged ≥6 months [31]). In France, despite the poor uptake of pandemic influenza vaccines (11.1%), an increase in seasonal influenza vaccine uptake was observed in the post-pandemic season among persons aged ≥65 years with underlying chronic conditions (62.6% in season 2009/10 [27] vs. 71.0% in season 2010/11 [32]). In the upcoming years, the uptake of seasonal influenza vaccines should be carefully monitored in Germany in all target groups to determine whether this trend continues. The strong decrease in vaccination coverage among HCW is of particular concern; communication activities should be strengthened for this target group, not only to achieve individual protection but also to protect the vulnerable patients managed by HCW. In our study, ‘fear of side effects’ was the most frequently stated reason for rejecting pandemic vaccination, confirming the findings of 13 smaller consecutive surveys carried out during the pandemic in Germany [33]. Conversely, believing that the pandemic vaccine is safe was significantly associated with receipt of the pandemic vaccine in many countries worldwide [21]. Appropriately addressing vaccine safety concerns may be an important factor for public health authorities in maintaining public trust in national vaccination recommendations and, beyond that, in enhancing vaccine uptake in future pandemic situations [22,33]. In our study, 8.5% of those who did not receive a pandemic influenza vaccination stated that they reject vaccinations in general. This translates into a proportion of 7.7% of the total study population. Little is known about the exact proportion of vaccination opponents among the general adult population in Germany.
In a recent survey performed by the German Federal Centre for Health Education (BZgA) among 3,002 parents of children aged 0–13 years, 35% stated that they reject particular vaccinations for their children, but only 1% of parents rejected vaccinations in general [34]. Since our study did not focus on the general rejection of vaccinations in the population, we did not ask further detailed questions on this topic to verify this attitude and the underlying reasons. This figure must therefore be interpreted with caution. Our study has some limitations that need to be acknowledged. Calculation of influenza vaccination coverage was based on self-reported vaccination status and may therefore be prone to recall problems. However, several studies have shown that self-reported influenza vaccination status has an adequate degree of validity [35,36]. Furthermore, the response rate in GEDA10 was comparatively low at 29%. However, it should be noted that the chosen method of calculating the response rate (Response Rate 3 as defined by AAPOR [18]) is a very conservative approach and that our response rate is comparable to that of studies using the same approach (e.g. CDC-Behavioral Risk Factor Surveillance Rates Report [37]). Finally, it cannot be ruled out that factors other than the controversial discussions on pandemic vaccination also contributed to the observed drop in seasonal influenza vaccination coverage in 2010/11. Conclusion: In conclusion, poor compliance with the official vaccination recommendation, resulting in low uptake of pandemic influenza vaccines during the pandemic season 2009/10, suggests that public communication strategies and vaccination campaigns during the influenza A(H1N1)pdm09 pandemic in Germany were not successful. In addition, our results raise concerns that controversial discussions about the safety and necessity of pandemic influenza vaccines may have contributed to the decreased seasonal influenza vaccine uptake in the first post-pandemic season. It is therefore crucial to develop concerted communication strategies based on the lessons learned from the 2009/10 influenza pandemic and to include them in the national pandemic preparedness plan. This should be done not only to ensure competent handling of pandemic situations but also to avoid a decline in the acceptance of vaccinations in general. In this respect, communication strategies and different modes of communication with specific target groups should be evaluated and implemented already in non-crisis situations, so that they can be enhanced during an influenza pandemic or other public health crisis. This is of particular importance, since seasonal influenza vaccine uptake in the recommended target groups in Germany has stagnated at a low level since 2005 [38] and falls far short of the EU goal of 75% [20]. Further studies should be conducted to monitor trends in seasonal influenza vaccine uptake in Germany in the specific target groups, including pregnant women (a target group for seasonal influenza vaccination since 2010), and to precisely identify barriers to influenza vaccination in the upcoming years, which might differ from those in the pandemic and the first post-pandemic season. This information would be crucial to guide public health authorities in developing more effective communication strategies for seasonal influenza vaccination tailored to specific target groups. Competing interests: The authors have declared no conflict of interest.
Authors’ contributions: All authors made substantial contributions to the study. SM was involved in the development of the GEDA10 study design and contributed to the Methods section. MMB, DW, and OW developed the study design of the GEDA10 follow-up survey. MMB analysed the data in consultation with DW, GF, OW and GK. MMB wrote the draft version of the manuscript. All authors have read, carefully reviewed and approved the final version of the manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/12/938/prepub
Background: In Germany, annual vaccination against seasonal influenza is recommended for certain target groups (e.g. persons aged ≥60 years, chronically ill persons, healthcare workers (HCW)). In season 2009/10, vaccination against pandemic influenza A(H1N1)pdm09, which was controversially discussed in the public, was recommended for the whole population. The objectives of this study were to assess vaccination coverage for seasonal (seasons 2008/09-2010/11) and pandemic influenza (season 2009/10), to identify predictors of and barriers to pandemic vaccine uptake, and to determine whether the controversial discussions on pandemic vaccination have had a negative impact on seasonal influenza vaccine uptake in Germany. Methods: We analysed data from the 'German Health Update' (GEDA10) telephone survey (n=22,050) and a smaller GEDA10-follow-up survey (n=2,493), which were both representative of the general population aged ≥18 years living in Germany. Results: Overall, only 8.8% of the adult population in Germany received a vaccination against pandemic influenza. High socioeconomic status, having received a seasonal influenza shot in the previous season, and belonging to a target group for seasonal influenza vaccination were independently associated with the uptake of pandemic vaccines. The main reasons for not receiving a pandemic vaccination were 'fear of side effects' and the opinion that 'vaccination was not necessary'. Seasonal influenza vaccine uptake in the pre-pandemic season 2008/09 was 52.8% among persons aged ≥60 years, 30.5% among HCW, and 43.3% among chronically ill persons. A decrease in vaccination coverage was observed across all target groups in the first post-pandemic season 2010/11 (50.6%, 25.8%, and 41.0% vaccination coverage, respectively). Conclusions: Seasonal influenza vaccination coverage in Germany remains below the declared European Union goal of 75% in all target groups. Our results suggest that controversial public discussions about the safety and benefits of pandemic influenza vaccination may have contributed to both a very low uptake of pandemic vaccines and a decreased uptake of seasonal influenza vaccines in the first post-pandemic season. In the upcoming years, the uptake of seasonal influenza vaccines should be carefully monitored in all target groups to identify whether this trend continues and to guide public health authorities in developing more effective vaccination and communication strategies for seasonal influenza vaccination.
Background: In Germany, annual influenza epidemics usually occur during the winter months of December to March. In the last decade, an estimated zero to 19,000 excess deaths per year were attributable to influenza virus infections [1]. Moreover, approximately one to six million influenza-related excess physician consultations per season were estimated for Germany [2]. Severe influenza virus infections and influenza-related complications typically occur in the very young and the elderly as well as in persons with underlying chronic medical conditions. Annual vaccination has proven to be an effective method to reduce the burden of influenza disease [3]. In Germany, vaccination against seasonal influenza is recommended by the Standing Committee on Vaccination (STIKO) for individuals who have either an increased risk of developing severe influenza disease (i.e. persons aged ≥60 years, pregnant women, and persons with certain chronic medical conditions) or who are likely to transmit the virus to vulnerable groups (e.g. health care workers (HCW)) [4]. Vaccination is free of charge for the target groups in Germany. During the influenza pandemic 2009/10, STIKO additionally recommended vaccination with a monovalent vaccine against the pandemic influenza virus strain A(H1N1)pdm09 for the whole population. Due to expected limitations in vaccine supplies at the beginning of the vaccination campaign, STIKO defined and ranked priority groups for the pandemic vaccination: 1) HCW, 2) persons with underlying chronic conditions, 3) pregnant women, 4) household contacts of vulnerable persons, 5) all other persons aged 6 months to 24 years, 6) all other persons aged 25–59 years, 7) all other persons aged 60 years and above [5]. The pandemic vaccination campaign started in Germany on 26 October 2009 [6]. The AS03-adjuvanted monovalent vaccine Pandemrix® was used almost exclusively and was available in sufficient quantities [7]. During the pandemic, vaccination against A(H1N1)pdm09 was subject to controversial discussions, not only in Germany but also in many other countries worldwide [7-9]. The main topics of the debate in the media and among experts and ‘self-proclaimed experts’ were vaccine safety, effectiveness, and the concern that too few data were available on the new vaccines and vaccine ingredients (especially the new adjuvants). Moreover, the general necessity of vaccination, in view of the relative mildness of the pandemic influenza disease, was called into question [7,9-11]. As a result, compliance with the national recommendations for pandemic vaccination was very poor in Germany. Since Germany has no central immunization registry, information on vaccination coverage (and factors influencing coverage) is only available from telephone and household surveys [12-15]. According to the results of thirteen consecutive cross-sectional telephone surveys (total n=13,010) conducted during the pandemic, only 8.1% of the general population aged ≥14 years living in Germany received a vaccine against pandemic influenza [6]. To develop target-group-specific communication strategies and to enhance compliance with the official recommendations, it is important to monitor vaccine uptake in each of the target groups and to understand the factors that influence uptake. This applies not only to annual influenza vaccination campaigns but also to the planning of future vaccination campaigns during a pandemic.
The influence of the 2009/10 pandemic situation on seasonal influenza vaccine uptake in Germany in the post-pandemic seasons has so far not been investigated. To address this question, we utilized data from the large (~22,000 respondents) ‘German Health Update 2010’ (GEDA10) telephone survey and a smaller GEDA10 follow-up survey (~2,500 respondents). The objectives of our study were (1) to assess seasonal influenza vaccination coverage for seasons 2008/09 to 2010/11, (2) to assess pandemic influenza vaccination coverage for season 2009/10, (3) to identify predictors of and barriers to pandemic vaccine uptake, and (4) to detect a potential influence of the pandemic situation on seasonal influenza vaccine uptake in the first post-pandemic season (2010/11). Conclusion: In conclusion, poor compliance with the official vaccination recommendation, resulting in low uptake of pandemic influenza vaccines during the pandemic season 2009/10, suggests that public communication strategies and vaccination campaigns during the influenza A(H1N1)pdm09 pandemic in Germany were not successful. In addition, our results raise concerns that controversial discussions about the safety and necessity of pandemic influenza vaccines may have contributed to the decreased seasonal influenza vaccine uptake in the first post-pandemic season. It is therefore crucial to develop concerted communication strategies based on the lessons learned from the 2009/10 influenza pandemic and to include them in the national pandemic preparedness plan. This should be done not only to ensure competent handling of pandemic situations but also to avoid a decline in the acceptance of vaccinations in general. In this respect, communication strategies and different modes of communication with specific target groups should be evaluated and implemented already in non-crisis situations, so that they can be enhanced during an influenza pandemic or other public health crisis. This is of particular importance, since seasonal influenza vaccine uptake in the recommended target groups in Germany has stagnated at a low level since 2005 [38] and falls far short of the EU goal of 75% [20]. Further studies should be conducted to monitor trends in seasonal influenza vaccine uptake in Germany in the specific target groups, including pregnant women (a target group for seasonal influenza vaccination since 2010), and to precisely identify barriers to influenza vaccination in the upcoming years, which might differ from those in the pandemic and the first post-pandemic season. This information would be crucial to guide public health authorities in developing more effective communication strategies for seasonal influenza vaccination tailored to specific target groups.
Background: In Germany, annual vaccination against seasonal influenza is recommended for certain target groups (e.g. persons aged ≥60 years, chronically ill persons, healthcare workers (HCW)). In season 2009/10, vaccination against pandemic influenza A(H1N1)pdm09, which was controversially discussed in the public, was recommended for the whole population. The objectives of this study were to assess vaccination coverage for seasonal (seasons 2008/09-2010/11) and pandemic influenza (season 2009/10), to identify predictors of and barriers to pandemic vaccine uptake, and to determine whether the controversial discussions on pandemic vaccination have had a negative impact on seasonal influenza vaccine uptake in Germany. Methods: We analysed data from the 'German Health Update' (GEDA10) telephone survey (n=22,050) and a smaller GEDA10-follow-up survey (n=2,493), which were both representative of the general population aged ≥18 years living in Germany. Results: Overall, only 8.8% of the adult population in Germany received a vaccination against pandemic influenza. High socioeconomic status, having received a seasonal influenza shot in the previous season, and belonging to a target group for seasonal influenza vaccination were independently associated with the uptake of pandemic vaccines. The main reasons for not receiving a pandemic vaccination were 'fear of side effects' and the opinion that 'vaccination was not necessary'. Seasonal influenza vaccine uptake in the pre-pandemic season 2008/09 was 52.8% among persons aged ≥60 years, 30.5% among HCW, and 43.3% among chronically ill persons. A decrease in vaccination coverage was observed across all target groups in the first post-pandemic season 2010/11 (50.6%, 25.8%, and 41.0% vaccination coverage, respectively). Conclusions: Seasonal influenza vaccination coverage in Germany remains below the declared European Union goal of 75% in all target groups. Our results suggest that controversial public discussions about the safety and benefits of pandemic influenza vaccination may have contributed to both a very low uptake of pandemic vaccines and a decreased uptake of seasonal influenza vaccines in the first post-pandemic season. In the upcoming years, the uptake of seasonal influenza vaccines should be carefully monitored in all target groups to identify whether this trend continues and to guide public health authorities in developing more effective vaccination and communication strategies for seasonal influenza vaccination.
9,847
435
[ 746, 874, 224, 155, 126, 612, 304, 93, 9, 85, 16 ]
15
[ "influenza", "vaccination", "pandemic", "seasonal", "survey", "seasonal influenza", "season", "uptake", "vaccine", "geda10" ]
[ "influenza vaccination decisions", "influenza vaccination coverage", "seasonal influenza vaccination", "germany severe influenza", "germany annual influenza" ]
[CONTENT] Vaccination | Influenza | Coverage | Pandemic | Germany [SUMMARY]
[CONTENT] Vaccination | Influenza | Coverage | Pandemic | Germany [SUMMARY]
[CONTENT] Vaccination | Influenza | Coverage | Pandemic | Germany [SUMMARY]
[CONTENT] Vaccination | Influenza | Coverage | Pandemic | Germany [SUMMARY]
[CONTENT] Vaccination | Influenza | Coverage | Pandemic | Germany [SUMMARY]
[CONTENT] Vaccination | Influenza | Coverage | Pandemic | Germany [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Female | Follow-Up Studies | Germany | Health Care Surveys | Health Services Accessibility | Humans | Influenza Vaccines | Influenza, Human | Male | Middle Aged | Pandemics | Seasons | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Female | Follow-Up Studies | Germany | Health Care Surveys | Health Services Accessibility | Humans | Influenza Vaccines | Influenza, Human | Male | Middle Aged | Pandemics | Seasons | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Female | Follow-Up Studies | Germany | Health Care Surveys | Health Services Accessibility | Humans | Influenza Vaccines | Influenza, Human | Male | Middle Aged | Pandemics | Seasons | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Female | Follow-Up Studies | Germany | Health Care Surveys | Health Services Accessibility | Humans | Influenza Vaccines | Influenza, Human | Male | Middle Aged | Pandemics | Seasons | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Female | Follow-Up Studies | Germany | Health Care Surveys | Health Services Accessibility | Humans | Influenza Vaccines | Influenza, Human | Male | Middle Aged | Pandemics | Seasons | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Female | Follow-Up Studies | Germany | Health Care Surveys | Health Services Accessibility | Humans | Influenza Vaccines | Influenza, Human | Male | Middle Aged | Pandemics | Seasons | Young Adult [SUMMARY]
[CONTENT] influenza vaccination decisions | influenza vaccination coverage | seasonal influenza vaccination | germany severe influenza | germany annual influenza [SUMMARY]
[CONTENT] influenza vaccination decisions | influenza vaccination coverage | seasonal influenza vaccination | germany severe influenza | germany annual influenza [SUMMARY]
[CONTENT] influenza vaccination decisions | influenza vaccination coverage | seasonal influenza vaccination | germany severe influenza | germany annual influenza [SUMMARY]
[CONTENT] influenza vaccination decisions | influenza vaccination coverage | seasonal influenza vaccination | germany severe influenza | germany annual influenza [SUMMARY]
[CONTENT] influenza vaccination decisions | influenza vaccination coverage | seasonal influenza vaccination | germany severe influenza | germany annual influenza [SUMMARY]
[CONTENT] influenza vaccination decisions | influenza vaccination coverage | seasonal influenza vaccination | germany severe influenza | germany annual influenza [SUMMARY]
[CONTENT] influenza | vaccination | pandemic | seasonal | survey | seasonal influenza | season | uptake | vaccine | geda10 [SUMMARY]
[CONTENT] influenza | vaccination | pandemic | seasonal | survey | seasonal influenza | season | uptake | vaccine | geda10 [SUMMARY]
[CONTENT] influenza | vaccination | pandemic | seasonal | survey | seasonal influenza | season | uptake | vaccine | geda10 [SUMMARY]
[CONTENT] influenza | vaccination | pandemic | seasonal | survey | seasonal influenza | season | uptake | vaccine | geda10 [SUMMARY]
[CONTENT] influenza | vaccination | pandemic | seasonal | survey | seasonal influenza | season | uptake | vaccine | geda10 [SUMMARY]
[CONTENT] influenza | vaccination | pandemic | seasonal | survey | seasonal influenza | season | uptake | vaccine | geda10 [SUMMARY]
[CONTENT] pandemic | vaccination | influenza | germany | vaccine | virus | persons | influenza disease | influenza virus | influence [SUMMARY]
[CONTENT] survey | geda10 | geda | follow | status | respondents | follow survey | vaccination | data | response [SUMMARY]
[CONTENT] influenza | vaccination | 95 ci | seasonal | 95 | ci | target | seasonal influenza | season | seasons [SUMMARY]
[CONTENT] pandemic | influenza | communication | communication strategies | strategies | specific target | specific target groups | target | specific | vaccination [SUMMARY]
[CONTENT] influenza | vaccination | pandemic | seasonal | survey | geda10 | seasonal influenza | uptake | season | vaccine [SUMMARY]
[CONTENT] influenza | vaccination | pandemic | seasonal | survey | geda10 | seasonal influenza | uptake | season | vaccine [SUMMARY]
[CONTENT] Germany | annual | influenza | ≥60 years ||| season 2009/10 ||| season 2009/10 | Germany [SUMMARY]
[CONTENT] the 'German Health Update' | GEDA10 | years | Germany [SUMMARY]
[CONTENT] only 8.8% | Germany ||| the previous season ||| ||| Seasonal | influenza vaccine | 2008/09 | 52.8% | ≥60 years | 30.5% | HCW | 43.3% ||| first | 50.6% | 25.8% | 41.0% [SUMMARY]
[CONTENT] Seasonal | Germany | below 75% | the European Union ||| first ||| the upcoming years [SUMMARY]
[CONTENT] Germany | annual | influenza | ≥60 years ||| season 2009/10 ||| season 2009/10 | Germany ||| the 'German Health Update' | GEDA10 | years | Germany ||| ||| only 8.8% | Germany ||| the previous season ||| ||| Seasonal | influenza vaccine | 2008/09 | 52.8% | ≥60 years | 30.5% | HCW | 43.3% ||| first | 50.6% | 25.8% | 41.0% ||| Germany | below 75% | the European Union ||| first ||| the upcoming years [SUMMARY]
[CONTENT] Germany | annual | influenza | ≥60 years ||| season 2009/10 ||| season 2009/10 | Germany ||| the 'German Health Update' | GEDA10 | years | Germany ||| ||| only 8.8% | Germany ||| the previous season ||| ||| Seasonal | influenza vaccine | 2008/09 | 52.8% | ≥60 years | 30.5% | HCW | 43.3% ||| first | 50.6% | 25.8% | 41.0% ||| Germany | below 75% | the European Union ||| first ||| the upcoming years [SUMMARY]
Disease burden and demographic characteristics of mucormycosis: A nationwide population-based study in Taiwan, 2006-2017.
35713608
Epidemiological knowledge of mucormycosis obtained from national population-based databases is scarce.
BACKGROUND
Data from patients with either mucormycosis or aspergillosis from 2006 to 2017 identified with the International Classification of Diseases (ICD) codes were extracted from the NHIRD. The incidence, demographics and clinical data of both diseases were analysed.
METHODS
A total of 204 patients with mucormycosis and 2270 patients with aspergillosis who were hospitalised and treated with mould-active antifungals between 2006 and 2017 were identified. The average annual incidence of aspergillosis (0.81 cases per 100,000 population [0.81/100,000]) was 11-fold higher than that of mucormycosis (0.07/100,000). A significant increase in incidence was observed for aspergillosis (from 0.48/100,000 in 2006 to 1.19/100,000 in 2017, p < .0001) but not for mucormycosis (from 0.04/100,000 in 2006 to 0.11/100,000 in 2017, p = .07). The major underlying disease identified was diabetes mellitus (60.8%) for mucormycosis and malignant neoplasms (45.9%) for aspergillosis. The all-cause 90-day mortality rate was similar between mucormycosis and aspergillosis patients (39% vs. 37%, p = .60). For mucormycosis patients, multivariate analysis revealed that posaconazole use was associated with lower in-hospital mortality (aOR 0.38; 95% CI 0.15-0.97; p = .04).
RESULTS
Mucormycosis is an uncommon fungal disease in Taiwan, occurring mostly in diabetic patients. However, the incidence might be underestimated due to limited diagnostics. Continuous surveillance might aid in delineating the evolving features of mucormycosis.
CONCLUSIONS
[ "Antifungal Agents", "Aspergillosis", "Cost of Illness", "Hospital Mortality", "Humans", "Mucormycosis", "Taiwan" ]
9796055
INTRODUCTION
Mucormycosis is the most common invasive mould disease after aspergillosis. It is caused by members of the order Mucorales, which are thermotolerant fungi ubiquitously distributed in the environment, such as in decaying organic materials and soils. Patients usually acquire infection via inhalation, ingestion or direct inoculation of fungal spores. Those with diabetes mellitus, haematological malignancy, solid organ transplantation, corticosteroid use, iron overload and major trauma are at particular risk for developing mucormycosis. 1 , 2 Mucormycosis is characterised by its angioinvasive nature and is associated with substantial mortality. Rhino‐orbital‐cerebral and pulmonary forms are the most common manifestation of mucormycosis, followed by cutaneous, gastrointestinal and renal infection, and localised infection might progress to disseminated infection in severe cases. 1 , 2 Epidemiological studies have demonstrated varied incidences and features of mucormycosis across different countries. 1 , 2 India was reported to have the highest estimated incidence of mucormycosis (14 cases per 100,000 population), with uncontrolled diabetes as the major risk factor, whereas lower incidences (0.06–0.2/100,000) were reported in most of the European countries, with haematological malignancy and transplantation as the major risk factors. 2 Nevertheless, the disease burden of mucormycosis described in the literature was mostly deduced from estimations based on available epidemiological data and modelling. 2 The currently available nationwide population‐based incidence data were mainly from a French study, which used data extraction codes according to the International Classification of Diseases (ICD) from the French hospital information system. 3 It revealed an increasing trend in the incidence of mucormycosis from 0.07/100,000 in 1997 to 0.12/100,000 in 2006, with an average of 0.09/100,000 over the 10 years. This French incidence rate (0.09–1.2/100,000) was thus often used as a reference value for estimating the burden of mucormycosis in other countries. 4 , 5 , 6 A comprehensive estimation of disease burden and identifying population at risk aids in determining public health measures and the diagnostic and therapeutic needs of the population. However, large‐scale epidemiological research of mucormycosis in Asia is scarce and mostly limited to reports from single‐centre or multicentre studies in India. 7 Taiwan is situated in East Asia, which belongs to the tropical and subtropical climate zone. It launched the National Health Insurance (NHI) in 1995, which covers more than 99% of the 23 million people who compose the Taiwanese population; therefore, Taiwan's National Health Insurance Research Database (NHIRD) could serve as a population‐based database for large‐scale, healthcare research. 8 Using the NHIRD, studies revealed an increase in the immunocompromised population and in diabetes prevalence over the years and a significant increase in the incidence of invasive pulmonary aspergillosis from 0.94 cases in 2002 to 2.06 cases in 2011 per million population (p < .0001), 9 , 10 , 11 , 12 , 13 while the incidence of mucormycosis remained undescribed. Therefore, this study aimed to delineate the temporal trend of disease burden and demographic characteristics of mucormycosis in Taiwan during 2006–2017 based on the NHIRD using those of aspergillosis as comparators.
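The incidence figures quoted in this study (cases per 100,000 population per year) and the trend p-values reported in the Results are derived from annual case counts and the covered population. The sketch below is an illustration only, with hypothetical counts, not NHIRD data; the paper states that "the trend test" was used without further detail, so Poisson regression with a population offset is shown here merely as one common way to assess such a temporal trend.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "year": list(range(2006, 2018)),
    "cases": [9, 10, 11, 12, 13, 15, 16, 18, 20, 22, 24, 26],     # hypothetical counts
    "population": [22.4e6 + i * 0.12e6 for i in range(12)],        # hypothetical population
})
df["incidence_per_100k"] = df["cases"] / df["population"] * 1e5

# Poisson regression of counts on calendar year with log(population) as offset:
# exp(coefficient) is the estimated year-on-year rate ratio.
trend = smf.glm(
    "cases ~ year",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["population"]),
).fit()

print(df[["year", "incidence_per_100k"]].round(3))
print("annual rate ratio:", round(float(np.exp(trend.params["year"])), 3),
      "p-value:", round(float(trend.pvalues["year"]), 4))

A Cochran–Armitage-type test on annual proportions would be an alternative way to implement the trend test; the choice does not affect the illustration.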
null
null
RESULTS
Over the 12‐year study period, the population covered by Taiwan's NHI increased from 22.4 million in 2006 to 23.8 million in 2017, exceeding 99% of the total population in Taiwan. A total of 204 patients with mucormycosis and 2270 patients with aspergillosis who were hospitalised and treated with mould‐active antifungals from 2006 to 2017 were identified and extracted from the NHIRD for further analyses (Figure 1). Overall, the annual incidence of mucormycosis (range 0.04–0.11/100,000) was lower than that of aspergillosis (range 0.40–1.25/100,000), and the average annual incidence of aspergillosis (0.81/100,000) was approximately 11‐fold higher than that of mucormycosis (0.07/100,000) (Figure 2). Notably, the incidence of aspergillosis increased significantly over the years, from 0.48/100,000 in 2006 to 1.19/100,000 in 2017 (p < .0001). Although the incidence of mucormycosis also increased, from 0.04/100,000 in 2006 to 0.11/100,000 in 2017, the increase did not reach statistical significance (p = .07).
Figure 2: Annual case numbers and incidence of mucormycosis compared with those of aspergillosis during 2006–2017.
Patients with mucormycosis and patients with aspergillosis were predominantly older adults (mean age 59.4 and 57.6 years, respectively), and most patients were male (67.2% and 64.3%, respectively). Diabetes mellitus (60.8% vs. 23.8%; adjusted odds ratio (aOR) 4.30, 95% confidence interval (CI) 3.17–5.82, p < .0001) was more commonly associated with mucormycosis, whereas malignant neoplasms (22.1% vs. 45.9%, p < .0001), including haematological malignancy (16.7% vs. 34.2%) and solid cancer (7.8% vs. 16.3%), as well as COPD (14.2% vs. 23.9%, p = .0002), were more commonly associated with aspergillosis (Table 1). HIV infection was identified in fewer than three (<1%) patients with mucormycosis and in 32 (1.4%) patients with aspergillosis.
Table 1: Demographic and clinical data of patients with mucormycosis and patients with aspergillosis from 2006 to 2017. Abbreviations: aOR, adjusted odds ratio; CCI, Charlson Comorbidity Index; CI, confidence interval; COPD, chronic obstructive pulmonary disease; ESRD, end‐stage renal disease; SD, standard deviation.
The all‐cause in‐hospital, 30‐day and 90‐day mortality rates did not differ significantly between mucormycosis and aspergillosis (22.1% vs. 22.3%, p = .93; 29.9% vs. 26.9%, p = .35; 38.7% vs. 37.1%, p = .64) (Table 1). Kaplan–Meier survival curves revealed similar survival probability for both diseases at 90 days after discharge (p = .60) (Figure 3A). Of the 204 patients with mucormycosis, 187 (91.6%) had received Mucorales‐active agents (either amphotericin B or posaconazole). In the multivariate analysis, which examined gender, age, underlying diseases, chemotherapy, corticosteroid use and prescribed antifungal agents, posaconazole use was associated with lower in‐hospital mortality (aOR 0.38; 95% CI 0.15–0.97; p = .04), whereas voriconazole use was associated with in‐hospital mortality (aOR 3.31; 95% CI 1.48–7.39; p = .003) (Table 2). Kaplan–Meier survival analysis also demonstrated that posaconazole use during hospitalisation was associated with a higher survival probability at 90 days after discharge (p = .04), whereas voriconazole use was associated with a lower survival probability (p = .02) (Figure 3B,C). Of the 2270 patients with aspergillosis, all had received mould‐active agents. Multivariate analysis revealed that COPD, haematological cancer and antineoplastic chemotherapy were associated with lower in‐hospital mortality, whereas age ≥65 years, CCI ≥3 and use of amphotericin B or corticosteroids were associated with in‐hospital mortality (Table S3).
Figure 3: Kaplan–Meier survival curves at 90 days after discharge according to (A) diagnosis (mucormycosis vs. aspergillosis), (B) posaconazole use in mucormycosis and (C) voriconazole use in mucormycosis (na, data reporting not allowed).
Table 2: Clinical data and factors associated with all‐cause in‐hospital mortality among patients with mucormycosis from 2006 to 2017. Abbreviations: aOR, adjusted odds ratio; CCI, Charlson Comorbidity Index; CI, confidence interval; COPD, chronic obstructive pulmonary disease; na, nonapplicable; SD, standard deviation. The number of patients receiving haemodialysis or transplantation was less than three in some cells, and thus data regarding end‐stage renal disease and transplantation are not provided. Anti‐Aspergillus‐only agents include itraconazole, voriconazole, anidulafungin, caspofungin and micafungin; Mucorales‐active agents include posaconazole and amphotericin B; mould‐active agents include amphotericin B, itraconazole, voriconazole, posaconazole, anidulafungin, caspofungin and micafungin.
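The 90-day survival comparisons above rely on Kaplan–Meier estimation and log-rank tests. The sketch below illustrates that type of analysis in Python with the lifelines library, using synthetic data and a hypothetical posaconazole indicator; it is not the authors' SAS analysis, and the numbers it produces have no relation to the actual cohort.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 200
posaconazole = rng.integers(0, 2, n).astype(bool)
# Synthetic survival times in days, censored at the 90-day follow-up window.
raw_time = np.where(posaconazole, rng.exponential(160.0, n), rng.exponential(110.0, n))
event = raw_time <= 90                 # death observed within 90 days of discharge
duration = np.minimum(raw_time, 90.0)  # administrative censoring at day 90

kmf = KaplanMeierFitter()
for label, mask in [("posaconazole", posaconazole), ("no posaconazole", ~posaconazole)]:
    kmf.fit(duration[mask], event_observed=event[mask], label=label)
    print(label, "estimated 90-day survival:",
          round(float(kmf.survival_function_.iloc[-1, 0]), 3))

res = logrank_test(duration[posaconazole], duration[~posaconazole],
                   event_observed_A=event[posaconazole],
                   event_observed_B=event[~posaconazole])
print("log-rank p-value:", round(float(res.p_value), 4))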
null
null
[ "INTRODUCTION", "Study design and database", "Study subjects and definitions", "Statistical analysis", "AUTHOR CONTRIBUTIONS" ]
[ "Mucormycosis is the most common invasive mould disease after aspergillosis. It is caused by members of the order Mucorales, which are thermotolerant fungi ubiquitously distributed in the environment, such as in decaying organic materials and soils. Patients usually acquire infection via inhalation, ingestion or direct inoculation of fungal spores. Those with diabetes mellitus, haematological malignancy, solid organ transplantation, corticosteroid use, iron overload and major trauma are at particular risk for developing mucormycosis.\n1\n, \n2\n Mucormycosis is characterised by its angioinvasive nature and is associated with substantial mortality. Rhino‐orbital‐cerebral and pulmonary forms are the most common manifestation of mucormycosis, followed by cutaneous, gastrointestinal and renal infection, and localised infection might progress to disseminated infection in severe cases.\n1\n, \n2\n\n\nEpidemiological studies have demonstrated varied incidences and features of mucormycosis across different countries.\n1\n, \n2\n India was reported to have the highest estimated incidence of mucormycosis (14 cases per 100,000 population), with uncontrolled diabetes as the major risk factor, whereas lower incidences (0.06–0.2/100,000) were reported in most of the European countries, with haematological malignancy and transplantation as the major risk factors.\n2\n Nevertheless, the disease burden of mucormycosis described in the literature was mostly deduced from estimations based on available epidemiological data and modelling.\n2\n The currently available nationwide population‐based incidence data were mainly from a French study, which used data extraction codes according to the International Classification of Diseases (ICD) from the French hospital information system.\n3\n It revealed an increasing trend in the incidence of mucormycosis from 0.07/100,000 in 1997 to 0.12/100,000 in 2006, with an average of 0.09/100,000 over the 10 years. This French incidence rate (0.09–1.2/100,000) was thus often used as a reference value for estimating the burden of mucormycosis in other countries.\n4\n, \n5\n, \n6\n\n\nA comprehensive estimation of disease burden and identifying population at risk aids in determining public health measures and the diagnostic and therapeutic needs of the population. However, large‐scale epidemiological research of mucormycosis in Asia is scarce and mostly limited to reports from single‐centre or multicentre studies in India.\n7\n Taiwan is situated in East Asia, which belongs to the tropical and subtropical climate zone. It launched the National Health Insurance (NHI) in 1995, which covers more than 99% of the 23 million people who compose the Taiwanese population; therefore, Taiwan's National Health Insurance Research Database (NHIRD) could serve as a population‐based database for large‐scale, healthcare research.\n8\n Using the NHIRD, studies revealed an increase in the immunocompromised population and in diabetes prevalence over the years and a significant increase in the incidence of invasive pulmonary aspergillosis from 0.94 cases in 2002 to 2.06 cases in 2011 per million population (p < .0001),\n9\n, \n10\n, \n11\n, \n12\n, \n13\n while the incidence of mucormycosis remained undescribed. 
Therefore, this study aimed to delineate the temporal trend of disease burden and demographic characteristics of mucormycosis in Taiwan during 2006–2017 based on the NHIRD using those of aspergillosis as comparators.", "We conducted a nationwide, population‐based retrospective cohort study that investigated the incidence, demographic and clinical characteristics of mucormycosis and aspergillosis that occurred between 2006 and 2017 in Taiwan by analysing data from the NHIRD. The NHIRD comprises secondary data released for research purposes, which includes information on patient demographics, outpatient visits, hospital admission, prescribed medication, interventional procedures and up to four diagnoses for outpatients and up to five discharge diagnoses for inpatients. Diseases are coded based on the ICD, Ninth Revision, Clinical Modification (ICD‐9‐CM) code from 2006 to 2015, and ICD‐10‐CM code from 2016 to 2017. Personal information was deidentified by the Taiwan Health and Welfare Data Science Centre (HWDC) but could be anonymously linked to the official Taiwan Death Registry dataset, which was also assessed in this study to obtain the patient outcome (mortality). According to the regulation of HWDC, case numbers lower than three were not allowed to be reported in the analyses to avoid reidentification.", "Patients with either mucormycosis or aspergillosis from 2006 to 2017 were identified if they had received an ICD code for either disease (not restricted to primary and secondary diagnosis), and the ICD codes for both diseases are provided in Table Table S1. To ensure the accuracy of disease diagnosis, only patients who were hospitalised and treated with either oral and/or intravenous mould‐active drugs, including amphotericin B (both deoxycholate and lipid formulation), itraconazole, voriconazole, posaconazole, anidulafungin, micafungin and caspofungin, were enrolled. The flowchart in Figure 1 summarises the study selection process. The index date was defined as the first date of hospital admission during which mucormycosis or aspergillosis was diagnosed. The annual case number, demographics, comorbidities, medical treatment and all‐cause mortality of patients were assessed. Comorbidities that were coded in at least two outpatient visits or one discharge diagnosis for inpatients within 1 year preceding the index date were included for analyses, which included autoimmune disease, chronic heart failure, chronic obstructive pulmonary disease (COPD), diabetes mellitus, human immunodeficiency virus (HIV) infection, liver cirrhosis and malignant neoplasms, including both haematological cancer and solid cancer. End‐stage renal disease (ESRD) was coded if the patients received regular dialysis within 90 days preceding the index admission. The medical treatment assessed included antineoplastic chemotherapy and corticosteroids within 90 days preceding and during the index admission, transplantation (bone marrow transplantation and organ transplantation) within 1 year preceding and during the index admission and mould‐active antifungals during the index admission. 
The severity of comorbidity was scored based on the Charlson Comorbidity Index (CCI).\n14\n The ICD codes for comorbidities and medical treatment are provided in Table Table S1, and the Anatomical Therapeutic Chemical (ATC) Classification codes for mould‐active antifungals and corticosteroids are provided in Table S2.\nFlowchart summarising the study selection process in this study\nAll‐cause in‐hospital mortality was defined as death during the index admission or within 72 hours after discharge. All‐cause 30‐day and 90‐day mortality was defined as death at 30 days and 90 days after discharge, respectively.", "All statistical analyses were performed with SAS 9.4 software (SAS Institute). The temporal trend of disease incidence was examined using the trend test. Demographic and clinical data of mucormycosis and aspergillosis were compared, and factors associated with mortality in either disease were examined by both univariate analysis and multivariate analyses using the backward elimination method. The chi‐square test was used for categorical variables, and Student's t test was used for continuous variables. Survival probability was estimated for each group using the Kaplan–Meier method and compared using the log‐rank test. All statistical tests were 2‐sided, and p values <.05 were considered statistically significant.\nThis study was approved by the institutional review board (IRB) of National Chen‐Kung University Hospital and National Health Research Institute (registration numbers: A‐ER‐108‐479 and EC1070104‐E) and the requirement to obtain informed consent was waived.", "Shih H.I. and Wu C.J. involved in conceptualization, funding acquisition and writing—review and editing. Huang Y.T. and Shih H.I. involved in data curation. Huang Y.T. involved in Formal analysis. Shih H.I., Huang Y.T. and Wu C.J. involved methodology and validation. Shih H.I., Huang Y.T. and Wu C.J. involved in writing—original draft." ]
[ null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study design and database", "Study subjects and definitions", "Statistical analysis", "RESULTS", "DISCUSSION", "AUTHOR CONTRIBUTIONS", "CONFLICT OF INTEREST", "Supporting information" ]
[ "Mucormycosis is the most common invasive mould disease after aspergillosis. It is caused by members of the order Mucorales, which are thermotolerant fungi ubiquitously distributed in the environment, such as in decaying organic materials and soils. Patients usually acquire infection via inhalation, ingestion or direct inoculation of fungal spores. Those with diabetes mellitus, haematological malignancy, solid organ transplantation, corticosteroid use, iron overload and major trauma are at particular risk for developing mucormycosis.\n1\n, \n2\n Mucormycosis is characterised by its angioinvasive nature and is associated with substantial mortality. Rhino‐orbital‐cerebral and pulmonary forms are the most common manifestation of mucormycosis, followed by cutaneous, gastrointestinal and renal infection, and localised infection might progress to disseminated infection in severe cases.\n1\n, \n2\n\n\nEpidemiological studies have demonstrated varied incidences and features of mucormycosis across different countries.\n1\n, \n2\n India was reported to have the highest estimated incidence of mucormycosis (14 cases per 100,000 population), with uncontrolled diabetes as the major risk factor, whereas lower incidences (0.06–0.2/100,000) were reported in most of the European countries, with haematological malignancy and transplantation as the major risk factors.\n2\n Nevertheless, the disease burden of mucormycosis described in the literature was mostly deduced from estimations based on available epidemiological data and modelling.\n2\n The currently available nationwide population‐based incidence data were mainly from a French study, which used data extraction codes according to the International Classification of Diseases (ICD) from the French hospital information system.\n3\n It revealed an increasing trend in the incidence of mucormycosis from 0.07/100,000 in 1997 to 0.12/100,000 in 2006, with an average of 0.09/100,000 over the 10 years. This French incidence rate (0.09–1.2/100,000) was thus often used as a reference value for estimating the burden of mucormycosis in other countries.\n4\n, \n5\n, \n6\n\n\nA comprehensive estimation of disease burden and identifying population at risk aids in determining public health measures and the diagnostic and therapeutic needs of the population. However, large‐scale epidemiological research of mucormycosis in Asia is scarce and mostly limited to reports from single‐centre or multicentre studies in India.\n7\n Taiwan is situated in East Asia, which belongs to the tropical and subtropical climate zone. It launched the National Health Insurance (NHI) in 1995, which covers more than 99% of the 23 million people who compose the Taiwanese population; therefore, Taiwan's National Health Insurance Research Database (NHIRD) could serve as a population‐based database for large‐scale, healthcare research.\n8\n Using the NHIRD, studies revealed an increase in the immunocompromised population and in diabetes prevalence over the years and a significant increase in the incidence of invasive pulmonary aspergillosis from 0.94 cases in 2002 to 2.06 cases in 2011 per million population (p < .0001),\n9\n, \n10\n, \n11\n, \n12\n, \n13\n while the incidence of mucormycosis remained undescribed. 
Therefore, this study aimed to delineate the temporal trend of disease burden and demographic characteristics of mucormycosis in Taiwan during 2006–2017 based on the NHIRD using those of aspergillosis as comparators.", "Study design and database We conducted a nationwide, population‐based retrospective cohort study that investigated the incidence, demographic and clinical characteristics of mucormycosis and aspergillosis that occurred between 2006 and 2017 in Taiwan by analysing data from the NHIRD. The NHIRD comprises secondary data released for research purposes, which includes information on patient demographics, outpatient visits, hospital admission, prescribed medication, interventional procedures and up to four diagnoses for outpatients and up to five discharge diagnoses for inpatients. Diseases are coded based on the ICD, Ninth Revision, Clinical Modification (ICD‐9‐CM) code from 2006 to 2015, and ICD‐10‐CM code from 2016 to 2017. Personal information was deidentified by the Taiwan Health and Welfare Data Science Centre (HWDC) but could be anonymously linked to the official Taiwan Death Registry dataset, which was also assessed in this study to obtain the patient outcome (mortality). According to the regulation of HWDC, case numbers lower than three were not allowed to be reported in the analyses to avoid reidentification.\nWe conducted a nationwide, population‐based retrospective cohort study that investigated the incidence, demographic and clinical characteristics of mucormycosis and aspergillosis that occurred between 2006 and 2017 in Taiwan by analysing data from the NHIRD. The NHIRD comprises secondary data released for research purposes, which includes information on patient demographics, outpatient visits, hospital admission, prescribed medication, interventional procedures and up to four diagnoses for outpatients and up to five discharge diagnoses for inpatients. Diseases are coded based on the ICD, Ninth Revision, Clinical Modification (ICD‐9‐CM) code from 2006 to 2015, and ICD‐10‐CM code from 2016 to 2017. Personal information was deidentified by the Taiwan Health and Welfare Data Science Centre (HWDC) but could be anonymously linked to the official Taiwan Death Registry dataset, which was also assessed in this study to obtain the patient outcome (mortality). According to the regulation of HWDC, case numbers lower than three were not allowed to be reported in the analyses to avoid reidentification.\nStudy subjects and definitions Patients with either mucormycosis or aspergillosis from 2006 to 2017 were identified if they had received an ICD code for either disease (not restricted to primary and secondary diagnosis), and the ICD codes for both diseases are provided in Table Table S1. To ensure the accuracy of disease diagnosis, only patients who were hospitalised and treated with either oral and/or intravenous mould‐active drugs, including amphotericin B (both deoxycholate and lipid formulation), itraconazole, voriconazole, posaconazole, anidulafungin, micafungin and caspofungin, were enrolled. The flowchart in Figure 1 summarises the study selection process. The index date was defined as the first date of hospital admission during which mucormycosis or aspergillosis was diagnosed. The annual case number, demographics, comorbidities, medical treatment and all‐cause mortality of patients were assessed. 
Comorbidities that were coded in at least two outpatient visits or one discharge diagnosis for inpatients within 1 year preceding the index date were included for analyses, which included autoimmune disease, chronic heart failure, chronic obstructive pulmonary disease (COPD), diabetes mellitus, human immunodeficiency virus (HIV) infection, liver cirrhosis and malignant neoplasms, including both haematological cancer and solid cancer. End‐stage renal disease (ESRD) was coded if the patients received regular dialysis within 90 days preceding the index admission. The medical treatment assessed included antineoplastic chemotherapy and corticosteroids within 90 days preceding and during the index admission, transplantation (bone marrow transplantation and organ transplantation) within 1 year preceding and during the index admission and mould‐active antifungals during the index admission. The severity of comorbidity was scored based on the Charlson Comorbidity Index (CCI).\n14\n The ICD codes for comorbidities and medical treatment are provided in Table Table S1, and the Anatomical Therapeutic Chemical (ATC) Classification codes for mould‐active antifungals and corticosteroids are provided in Table S2.\nFlowchart summarising the study selection process in this study\nAll‐cause in‐hospital mortality was defined as death during the index admission or within 72 hours after discharge. All‐cause 30‐day and 90‐day mortality was defined as death at 30 days and 90 days after discharge, respectively.\nPatients with either mucormycosis or aspergillosis from 2006 to 2017 were identified if they had received an ICD code for either disease (not restricted to primary and secondary diagnosis), and the ICD codes for both diseases are provided in Table Table S1. To ensure the accuracy of disease diagnosis, only patients who were hospitalised and treated with either oral and/or intravenous mould‐active drugs, including amphotericin B (both deoxycholate and lipid formulation), itraconazole, voriconazole, posaconazole, anidulafungin, micafungin and caspofungin, were enrolled. The flowchart in Figure 1 summarises the study selection process. The index date was defined as the first date of hospital admission during which mucormycosis or aspergillosis was diagnosed. The annual case number, demographics, comorbidities, medical treatment and all‐cause mortality of patients were assessed. Comorbidities that were coded in at least two outpatient visits or one discharge diagnosis for inpatients within 1 year preceding the index date were included for analyses, which included autoimmune disease, chronic heart failure, chronic obstructive pulmonary disease (COPD), diabetes mellitus, human immunodeficiency virus (HIV) infection, liver cirrhosis and malignant neoplasms, including both haematological cancer and solid cancer. End‐stage renal disease (ESRD) was coded if the patients received regular dialysis within 90 days preceding the index admission. The medical treatment assessed included antineoplastic chemotherapy and corticosteroids within 90 days preceding and during the index admission, transplantation (bone marrow transplantation and organ transplantation) within 1 year preceding and during the index admission and mould‐active antifungals during the index admission. 
The severity of comorbidity was scored based on the Charlson Comorbidity Index (CCI).\n14\n The ICD codes for comorbidities and medical treatment are provided in Table Table S1, and the Anatomical Therapeutic Chemical (ATC) Classification codes for mould‐active antifungals and corticosteroids are provided in Table S2.\nFlowchart summarising the study selection process in this study\nAll‐cause in‐hospital mortality was defined as death during the index admission or within 72 hours after discharge. All‐cause 30‐day and 90‐day mortality was defined as death at 30 days and 90 days after discharge, respectively.\nStatistical analysis All statistical analyses were performed with SAS 9.4 software (SAS Institute). The temporal trend of disease incidence was examined using the trend test. Demographic and clinical data of mucormycosis and aspergillosis were compared, and factors associated with mortality in either disease were examined by both univariate analysis and multivariate analyses using the backward elimination method. The chi‐square test was used for categorical variables, and Student's t test was used for continuous variables. Survival probability was estimated for each group using the Kaplan–Meier method and compared using the log‐rank test. All statistical tests were 2‐sided, and p values <.05 were considered statistically significant.\nThis study was approved by the institutional review board (IRB) of National Chen‐Kung University Hospital and National Health Research Institute (registration numbers: A‐ER‐108‐479 and EC1070104‐E) and the requirement to obtain informed consent was waived.\nAll statistical analyses were performed with SAS 9.4 software (SAS Institute). The temporal trend of disease incidence was examined using the trend test. Demographic and clinical data of mucormycosis and aspergillosis were compared, and factors associated with mortality in either disease were examined by both univariate analysis and multivariate analyses using the backward elimination method. The chi‐square test was used for categorical variables, and Student's t test was used for continuous variables. Survival probability was estimated for each group using the Kaplan–Meier method and compared using the log‐rank test. All statistical tests were 2‐sided, and p values <.05 were considered statistically significant.\nThis study was approved by the institutional review board (IRB) of National Chen‐Kung University Hospital and National Health Research Institute (registration numbers: A‐ER‐108‐479 and EC1070104‐E) and the requirement to obtain informed consent was waived.", "We conducted a nationwide, population‐based retrospective cohort study that investigated the incidence, demographic and clinical characteristics of mucormycosis and aspergillosis that occurred between 2006 and 2017 in Taiwan by analysing data from the NHIRD. The NHIRD comprises secondary data released for research purposes, which includes information on patient demographics, outpatient visits, hospital admission, prescribed medication, interventional procedures and up to four diagnoses for outpatients and up to five discharge diagnoses for inpatients. Diseases are coded based on the ICD, Ninth Revision, Clinical Modification (ICD‐9‐CM) code from 2006 to 2015, and ICD‐10‐CM code from 2016 to 2017. 
Personal information was deidentified by the Taiwan Health and Welfare Data Science Centre (HWDC) but could be anonymously linked to the official Taiwan Death Registry dataset, which was also assessed in this study to obtain the patient outcome (mortality). According to the regulation of HWDC, case numbers lower than three were not allowed to be reported in the analyses to avoid reidentification.", "Patients with either mucormycosis or aspergillosis from 2006 to 2017 were identified if they had received an ICD code for either disease (not restricted to primary and secondary diagnosis), and the ICD codes for both diseases are provided in Table Table S1. To ensure the accuracy of disease diagnosis, only patients who were hospitalised and treated with either oral and/or intravenous mould‐active drugs, including amphotericin B (both deoxycholate and lipid formulation), itraconazole, voriconazole, posaconazole, anidulafungin, micafungin and caspofungin, were enrolled. The flowchart in Figure 1 summarises the study selection process. The index date was defined as the first date of hospital admission during which mucormycosis or aspergillosis was diagnosed. The annual case number, demographics, comorbidities, medical treatment and all‐cause mortality of patients were assessed. Comorbidities that were coded in at least two outpatient visits or one discharge diagnosis for inpatients within 1 year preceding the index date were included for analyses, which included autoimmune disease, chronic heart failure, chronic obstructive pulmonary disease (COPD), diabetes mellitus, human immunodeficiency virus (HIV) infection, liver cirrhosis and malignant neoplasms, including both haematological cancer and solid cancer. End‐stage renal disease (ESRD) was coded if the patients received regular dialysis within 90 days preceding the index admission. The medical treatment assessed included antineoplastic chemotherapy and corticosteroids within 90 days preceding and during the index admission, transplantation (bone marrow transplantation and organ transplantation) within 1 year preceding and during the index admission and mould‐active antifungals during the index admission. The severity of comorbidity was scored based on the Charlson Comorbidity Index (CCI).\n14\n The ICD codes for comorbidities and medical treatment are provided in Table Table S1, and the Anatomical Therapeutic Chemical (ATC) Classification codes for mould‐active antifungals and corticosteroids are provided in Table S2.\nFlowchart summarising the study selection process in this study\nAll‐cause in‐hospital mortality was defined as death during the index admission or within 72 hours after discharge. All‐cause 30‐day and 90‐day mortality was defined as death at 30 days and 90 days after discharge, respectively.", "All statistical analyses were performed with SAS 9.4 software (SAS Institute). The temporal trend of disease incidence was examined using the trend test. Demographic and clinical data of mucormycosis and aspergillosis were compared, and factors associated with mortality in either disease were examined by both univariate analysis and multivariate analyses using the backward elimination method. The chi‐square test was used for categorical variables, and Student's t test was used for continuous variables. Survival probability was estimated for each group using the Kaplan–Meier method and compared using the log‐rank test. 
All statistical tests were 2‐sided, and p values <.05 were considered statistically significant.\nThis study was approved by the institutional review board (IRB) of National Chen‐Kung University Hospital and National Health Research Institute (registration numbers: A‐ER‐108‐479 and EC1070104‐E) and the requirement to obtain informed consent was waived.", "Over the 12‐year study period, the population covered by Taiwan's NHI increased from 22.4 million in 2006 to 23.8 million in 2017, exceeding 99% of the total population in Taiwan. A total of 204 patients with mucormycosis and 2270 patients with aspergillosis who were hospitalised and treated with mould‐active antifungals from 2006 to 2017 were identified and extracted from the NHIRD for further analyses (Figure 1\n). Overall, the annual incidence of mucormycosis (range 0.04–0.11/100,000) was lower than that of aspergillosis (range 0.40–1.25/100,000), and the average annual incidence of aspergillosis (0.81/100,000) was approximately 11‐fold higher than that of mucormycosis (0.07/100,000) (Figure 2). Notably, the incidence of aspergillosis increased significantly over the years, from 0.48/100,000 in 2006 to 1.19/100,000 in 2017 (p < .0001). Although the incidence of mucormycosis also increased from 0.04/100,000 in 2006 to 0.11/100,000 in 2017, the increase did not reach statistical significance (p = .07).\nAnnual case numbers and incidence of mucormycosis compared with those of aspergillosis during 2006–2017\nPatients with mucormycosis and patients with aspergillosis were old‐aged (mean age 59.4‐ and 57.6‐year‐old, respectively), and most patients were male (67.2% and 64.3%, respectively). Diabetes mellitus (60.8% vs. 23.8%; adjusted odds ratio (aOR) 4.30, 95% confidence interval (CI) 3.17–5.82, p < .0001) was more commonly associated with mucormycosis, whereas malignant neoplasms (22.1% vs. 45.9%, p < .0001), including haematological malignancy (16.7% vs. 34.2%), solid cancer (7.8% vs. 16.3%) and COPD (14.2% vs. 23.9%, p = .0002), were more commonly associated with aspergillosis (Table 1). HIV infection was identified in less than three (<1%) patients with mucormycosis and in 32 (1.4%) patients with aspergillosis.\nDemographic and clinical data of patients with mucormycosis and patients with aspergillosis from 2006 to 2017\nAbbreviations: aOR, adjusted odds ratio; CCI, Charlson Comorbidity Index; CI, confidence interval; COPD, chronic obstructive pulmonary disease; ESRD, end‐stage renal disease; SD, standard deviation.\nThe all‐cause in‐hospital, 30‐day and 90‐day mortality rates did not differ significantly between mucormycosis and aspergillosis (22.1% vs. 22.3%, p = .93; 29.9% vs. 26.9%, p = .35; 38.7% vs. 37.1%, p = .64) (Table 1). Kaplan–Meier survival curves revealed similar survival probability for both diseases at 90 days after discharge (p = .60) (Figure 3A). Of 204 patients with mucormycosis, 187 (91.6%) patients had received Mucorales‐active agents with either amphotericin B or posaconazole. In the multivariate analysis which examined factors of gender, age, underlying diseases, chemotherapy, corticosteroids use and prescribed antifungal agents, posaconazole use was found to be associated with lower in‐hospital mortality (aOR 0.38; 95% CI 0.15–0.97; p = .04), whereas voriconazole use was associated with in‐hospital mortality (aOR 3.31; 95% CI 1.48–7.39; p = .003) (Table 2). 
Kaplan–Meier survival analysis also demonstrated that posaconazole use during hospitalisation was associated with a higher survival probability at 90 days after discharge (p = .04), whereas voriconazole use was associated with a lower survival probability (p = .02) (Figure 3B,C). Of 2270 patients with aspergillosis, all patients had received mould‐active agents. Multivariate analysis revealed that COPD, haematological cancer and antineoplastic chemotherapy were associated with lower in‐hospital mortality, whereas age ≥ 65 y.o., CCI ≥3 and use of amphotericin B or corticosteroids were associated with in‐hospital mortality (Table S3).\nKaplan–Meier survival curves at 90 days after discharge according to (A) diagnosis (mucormycosis vs. aspergillosis), (B) posaconazole use in mucormycosis and (C) voriconazole use in mucormycosis (na, data reporting not allowed)\nClinical data and factors associated with all‐cause in‐hospital mortality among patients with mucormycosis from 2006 to 2017\nAbbreviations: aOR, adjusted odds ratio; CCI, Charlson Comorbidity Index; CI, confidence interval; COPD, chronic obstructive pulmonary disease; na, nonapplicable; SD, standard deviation.\nThe number of patients receiving haemodialysis or transplantation was less than three in some cells, and thus the data regarding end‐stage renal disease and transplantation was not provided.\nAnti‐Aspergillus‐only agents include itraconazole, voriconazole, anidulafungin, caspofungin and micafungin; Mucorales‐active agents include posaconazole and amphotericin B; mould‐active agents include amphotericin B, itraconazole, voriconazole, posaconazole, anidulafungin, caspofungin and micafungin.", "Our study hereby presented the temporal trend of disease burden and demographic characteristics of mucormycosis during a 12‐year period (2006–2017) in Taiwan by analysing nationwide, population‐based data. With aspergillosis as a comparator, four major findings were revealed: (1) a 12‐year average incidence of 0.07/100,000 and an incidence of 0.11/100,000 in 2017 for mucormycosis were documented; (2) the major underlying disease identified was diabetes mellitus for mucormycosis and malignant neoplasm for aspergillosis; (3) a significant increase in incidence was observed for aspergillosis but was not evident for mucormycosis and (4) posaconazole use was associated with a better survival probability in mucormycosis.\nThe incidence of mucormycosis in Taiwan (0.11/100,000 in 2017) is similar to that reported in France in 2006 (0.12/100,000), both based on nationwide population‐based data.\n3\n As national data on fungal infection were not available in most countries, the Leading International Fungal Education (LIFE) program led by David Denning (http://www.LIFE‐worldwide.org) has made great efforts to estimate the global burden of serious fungal infection, including mucormycosis, by using the available epidemiological data (the population at risk, published literature) and modelling. 
In addition to the French incidence (0.09–0.12/100,000), an incidence rate of 0.2/100,000 was also used as a general literature value for incidence estimation of mucormycosis in some countries.\n15\n, \n16\n, \n17\n, \n18\n The estimated incidence rates in different countries, mostly from the data of the LIFE program, have been comprehensively summarised in a review article by Prakash et al.\n2\n In this review, the incidences of mucormycosis in most countries were within the range of 0.06–0.2/100,000 and were as follows: Europe (Greece 0.06, United Kingdom 0.09, Norway 0.1 and Ireland 0.2), North America (Canada 0.12 and USA 0.3), Asia (Korea 0.14, Japan 0.2 and Thailand 0.2), Africa (Algeria, Kenya, Malawi, Nigeria 0.2), Russia (0.16) and Australia (0.06), whereas India (14), Pakistan (14) and Portugal (9.5) ranked as the countries with the highest incidences. The incidence of mucormycosis in Taiwan was also in line with estimated incidences in most countries within the same period, and hence provided the second and latest nationwide data, after the French study, for estimating the disease burden of mucormycosis.\nRisk factors and underlying comorbidities for mucormycosis vary considerably with geographical area.\n1\n Diabetes was the leading underlying condition in India (57–74%), in contrast to haematological malignancies in Europe (44%), including France (50%) and Italy (62%), and Australia (49%).\n1\n As observed in India, we found in Taiwan that diabetes (61%) predominated over malignant neoplasms (22%), including haematological malignancy (17%), as the major underlying disease in mucormycosis. This feature was further contrasted with aspergillosis, for which underlying malignant neoplasms (46%), including haematological malignancies (34%), predominated over diabetes (24%). According to the IDF Diabetes Atlas in 2021, India (9.6%) and Taiwan (9.7%) have higher age‐adjusted comparative diabetes prevalence in adults 20–79 years (though India might have a number of undiagnosed diabetes) than the European countries (7.0%) and Australia (6.4%).\n19\n On the contrary, the Global Burden of Disease (GBD) 2017 study revealed that Taiwan is one of the countries with higher leukaemia prevalence and shared similar prevalence with France, whereas India has lower leukaemia prevalence.\n20\n Experts hypothesized one of the possible reasons for the high occurrence of mucormycosis in India to be the abundance of thermotolerant and thermophilic Mucorales in the environment due to a hot and humid climate.\n7\n Taiwan has a tropical and subtropical monsoon climate with wet and humid summers, which might allow ubiquitous growth of Mucorales in the environment. Collectively, it is presumed that community residents with diabetes in Taiwan would have a higher probability of inhaling spores of pathogenic Mucorales, leading to invasive disease when immune responses compromise. Further environmental surveillance would be helpful to elucidate this hypothesis. 
From a clinical perspective, these findings suggested that medical staff in Taiwan must remain vigilant of the occurrence of mucormycosis in diabetic patients without haematological malignancy, especially those presenting invasive sinusitis or pneumonia not responding to antibacterial therapy.\nOur study, along with reports elsewhere, demonstrated that the incidences of aspergillosis were generally 10–30 times higher than those of mucormycosis.\n2\n Notably, the incidence of mucormycosis in Taiwan remained stable over the years, in contrast to an apparent rising trend in aspergillosis. The galactomannan (GM) antigen test was approved by the United States Food and Drug Association (FDA) for diagnosing Aspergillus infection and was introduced into Taiwan in 2004. Based on Taiwan's NHIRD, studies revealed that the increase in the incidence of invasive pulmonary aspergillosis was positively correlated with the increase in the use of GM testing during 2002–2011.\n13\n Moreover, in Taiwan, the incidences for all cancers doubled from 1988 to 2016, the crude annual incidence of acute myeloid leukaemia increased from 2.78/100,000 in 2006 to 3.21/100,000 in 2015, and the annual number of haematopoietic stem cell transplants performed increased from 381 in 2006 to 521 in 2015.\n9\n, \n10\n, \n11\n Therefore, the rising incidence of aspergillosis in Taiwan might be in part attributed to the clinical application of new diagnostics and an expanded susceptible population. In addition, the prevalence of diabetes, the major underlying disease of mucormycosis, in patients aged 20–79 years increased by 41% from 2005 to 2014.\n12\n Given that there were increasing susceptible hosts (diabetes and malignancy) in Taiwan and observed rising trends of mucormycosis in France and India,\n2\n, \n3\n we initially expected a similar upwards trend of mucormycosis to be revealed in this study, but it turned out that our data did not support this assumption. From 2006 to 2017 in Taiwan, the diagnosis of mucormycosis mainly depended on conventional culture methods and histopathological examination of biopsy tissue from infected sites demonstrating broad‐based, pauciseptate ribbon‐like fungal elements consistent with Mucorales. However, their diagnostic sensitivity or feasibility are suboptimal for diagnosing mucormycosis. A multicentre study from Taiwan found that the culture yield rate was only 37% for invasive mucormycosis compared with 67% for invasive aspergillosis.\n21\n Furthermore, patients with mucormycosis might have a rapidly deteriorated and fulminant fatal course, which makes timely diagnosis by fungal culture or tissue biopsy less likely or makes laboratory reports available only postmortem. Even in critical patients with a less fulminant course, collecting samples from deep tissue is also challenging. Additionally, there is no commercially available antigen testing to detect Mucorales in blood or respiratory samples as a noninvasive diagnostic approach so far. Taken together, the incidence of mucormycosis here might probably be underestimated due to limited diagnostic tools, which further underlines the unmet need for improved diagnostics of mucormycosis.\nAmphotericin B, posaconazole and the new triazole isavuconazole are recommended as antifungals for the treatment of aspergillosis and mucormycosis, while voriconazole is a recommended drug for aspergillosis but not mucormycosis, as it is inactive against Mucorales.\n22\n, \n23\n Posaconazole was approved for clinical use by the U.S. 
FDA in 2006 and has been available in Taiwan since 2010. An association of posaconazole use with a higher survival probability demonstrated here supported its clinical efficacy in treating mucormycosis. However, confounding bias might exist. For instance, patients who had the chance to receive posaconazole might have a less fulminant disease course or lower disease severity, which allowed obtaining clinical samples for fungal culture or histopathological examination to achieve accurate aetiological diagnosis and prescribe targeted antifungal therapy accordingly. Isavuconazole was approved for clinical use by the U.S. FDA in 2015 and available in Taiwan since 2021, and hence, its impact on clinical outcome could not be assessed here. However, future studies comparing the clinical efficacy of posaconazole and isavuconazole would be appealing, as they exhibit different in vitro activities against certain members of Mucorales, such as Cunninghamella bertholletiae.\n24\n\n\nThis study has some limitations. First, given the inherent limitations of the ICD system and lack of detailed clinical, laboratory and radiological data in the NHRID, the accuracy of diagnosis was suboptimal, and disease classification as proven and probable invasive fungal disease based on the European Organisation for Research and Treatment of Cancer and the Mycoses Study Group Education and Research Consortium (EORTC/MSG) definitions could not be made.\n25\n In addition, ICD‐9 did not include assignment of infected organ yet, neither for mucormycosis nor for aspergillosis, and thus, correlation between severity of disease and outcome could not be investigated in this study. To confirm diagnosis, we enrolled only patients who had ever received mould‐active antifungal(s), indicating high clinical suspicion of invasive fungal disease. It is well understood that by this restriction, patients diagnosed post‐mortem without systemic antifungal treatment were also excluded from disease burden calculation. Second, the clinical efficacy of antifungal prescription evaluated from different aspects (aetiological agents, time to initiation of antifungal therapy, monotherapy vs. combination therapy and antifungal duration) could not be assessed due to limitations of the data structure. A multicentre study enrolling a large number of patients with detailed clinical information is needed to elucidate these issues.\nIn conclusion, the nationwide population‐based database revealed that mucormycosis is an uncommon fungal disease in Taiwan that occurs mostly in diabetic patients. An increasing trend in incidence was not evident, but the incidence might be underestimated due to limited diagnostics for mucormycosis. With increasing number of susceptible hosts, availability of isavuconazole and global and domestic efforts to raise clinical awareness, continuous surveillance might aid in delineating the evolving features of mucormycosis.", "Shih H.I. and Wu C.J. involved in conceptualization, funding acquisition and writing—review and editing. Huang Y.T. and Shih H.I. involved in data curation. Huang Y.T. involved in Formal analysis. Shih H.I., Huang Y.T. and Wu C.J. involved methodology and validation. Shih H.I., Huang Y.T. and Wu C.J. involved in writing—original draft.", "None to declare.", "\nTable S1\n\nClick here for additional data file." ]
[ null, "materials-and-methods", null, null, null, "results", "discussion", null, "COI-statement", "supplementary-material" ]
[ "aspergillosis", "diabetes", "epidemiology", "haematological malignancy", "incidence", "mucormycosis", "posaconazole", "Taiwan" ]
INTRODUCTION: Mucormycosis is the most common invasive mould disease after aspergillosis. It is caused by members of the order Mucorales, which are thermotolerant fungi ubiquitously distributed in the environment, such as in decaying organic materials and soils. Patients usually acquire infection via inhalation, ingestion or direct inoculation of fungal spores. Those with diabetes mellitus, haematological malignancy, solid organ transplantation, corticosteroid use, iron overload and major trauma are at particular risk for developing mucormycosis. 1 , 2 Mucormycosis is characterised by its angioinvasive nature and is associated with substantial mortality. Rhino‐orbital‐cerebral and pulmonary forms are the most common manifestations of mucormycosis, followed by cutaneous, gastrointestinal and renal infection, and localised infection might progress to disseminated infection in severe cases. 1 , 2 Epidemiological studies have demonstrated varied incidences and features of mucormycosis across different countries. 1 , 2 India was reported to have the highest estimated incidence of mucormycosis (14 cases per 100,000 population), with uncontrolled diabetes as the major risk factor, whereas lower incidences (0.06–0.2/100,000) were reported in most of the European countries, with haematological malignancy and transplantation as the major risk factors. 2 Nevertheless, the disease burden of mucormycosis described in the literature was mostly deduced from estimations based on available epidemiological data and modelling. 2 The currently available nationwide population‐based incidence data were mainly from a French study, which used data extraction codes according to the International Classification of Diseases (ICD) from the French hospital information system. 3 It revealed an increasing trend in the incidence of mucormycosis from 0.07/100,000 in 1997 to 0.12/100,000 in 2006, with an average of 0.09/100,000 over the 10 years. This French incidence rate (0.09–0.12/100,000) was thus often used as a reference value for estimating the burden of mucormycosis in other countries. 4 , 5 , 6 A comprehensive estimation of disease burden and identification of the population at risk aid in determining public health measures and the diagnostic and therapeutic needs of the population. However, large‐scale epidemiological research of mucormycosis in Asia is scarce and mostly limited to reports from single‐centre or multicentre studies in India. 7 Taiwan is situated in East Asia, which belongs to the tropical and subtropical climate zone. It launched the National Health Insurance (NHI) in 1995, which covers more than 99% of the 23 million people who compose the Taiwanese population; therefore, Taiwan's National Health Insurance Research Database (NHIRD) could serve as a population‐based database for large‐scale healthcare research. 8 Using the NHIRD, studies revealed an increase in the immunocompromised population and in diabetes prevalence over the years and a significant increase in the incidence of invasive pulmonary aspergillosis from 0.94 cases in 2002 to 2.06 cases in 2011 per million population (p < .0001), 9 , 10 , 11 , 12 , 13 , while the incidence of mucormycosis remained undescribed. Therefore, this study aimed to delineate the temporal trend of disease burden and demographic characteristics of mucormycosis in Taiwan during 2006–2017 based on the NHIRD using those of aspergillosis as comparators.
MATERIALS AND METHODS: Study design and database We conducted a nationwide, population‐based retrospective cohort study that investigated the incidence, demographic and clinical characteristics of mucormycosis and aspergillosis that occurred between 2006 and 2017 in Taiwan by analysing data from the NHIRD. The NHIRD comprises secondary data released for research purposes, which includes information on patient demographics, outpatient visits, hospital admission, prescribed medication, interventional procedures and up to four diagnoses for outpatients and up to five discharge diagnoses for inpatients. Diseases are coded based on the ICD, Ninth Revision, Clinical Modification (ICD‐9‐CM) code from 2006 to 2015, and ICD‐10‐CM code from 2016 to 2017. Personal information was deidentified by the Taiwan Health and Welfare Data Science Centre (HWDC) but could be anonymously linked to the official Taiwan Death Registry dataset, which was also assessed in this study to obtain the patient outcome (mortality). According to the regulation of HWDC, case numbers lower than three were not allowed to be reported in the analyses to avoid reidentification. We conducted a nationwide, population‐based retrospective cohort study that investigated the incidence, demographic and clinical characteristics of mucormycosis and aspergillosis that occurred between 2006 and 2017 in Taiwan by analysing data from the NHIRD. The NHIRD comprises secondary data released for research purposes, which includes information on patient demographics, outpatient visits, hospital admission, prescribed medication, interventional procedures and up to four diagnoses for outpatients and up to five discharge diagnoses for inpatients. Diseases are coded based on the ICD, Ninth Revision, Clinical Modification (ICD‐9‐CM) code from 2006 to 2015, and ICD‐10‐CM code from 2016 to 2017. Personal information was deidentified by the Taiwan Health and Welfare Data Science Centre (HWDC) but could be anonymously linked to the official Taiwan Death Registry dataset, which was also assessed in this study to obtain the patient outcome (mortality). According to the regulation of HWDC, case numbers lower than three were not allowed to be reported in the analyses to avoid reidentification. Study subjects and definitions Patients with either mucormycosis or aspergillosis from 2006 to 2017 were identified if they had received an ICD code for either disease (not restricted to primary and secondary diagnosis), and the ICD codes for both diseases are provided in Table Table S1. To ensure the accuracy of disease diagnosis, only patients who were hospitalised and treated with either oral and/or intravenous mould‐active drugs, including amphotericin B (both deoxycholate and lipid formulation), itraconazole, voriconazole, posaconazole, anidulafungin, micafungin and caspofungin, were enrolled. The flowchart in Figure 1 summarises the study selection process. The index date was defined as the first date of hospital admission during which mucormycosis or aspergillosis was diagnosed. The annual case number, demographics, comorbidities, medical treatment and all‐cause mortality of patients were assessed. 
Comorbidities that were coded in at least two outpatient visits or one discharge diagnosis for inpatients within 1 year preceding the index date were included for analyses, which included autoimmune disease, chronic heart failure, chronic obstructive pulmonary disease (COPD), diabetes mellitus, human immunodeficiency virus (HIV) infection, liver cirrhosis and malignant neoplasms, including both haematological cancer and solid cancer. End‐stage renal disease (ESRD) was coded if the patients received regular dialysis within 90 days preceding the index admission. The medical treatment assessed included antineoplastic chemotherapy and corticosteroids within 90 days preceding and during the index admission, transplantation (bone marrow transplantation and organ transplantation) within 1 year preceding and during the index admission and mould‐active antifungals during the index admission. The severity of comorbidity was scored based on the Charlson Comorbidity Index (CCI). 14 The ICD codes for comorbidities and medical treatment are provided in Table Table S1, and the Anatomical Therapeutic Chemical (ATC) Classification codes for mould‐active antifungals and corticosteroids are provided in Table S2. Flowchart summarising the study selection process in this study All‐cause in‐hospital mortality was defined as death during the index admission or within 72 hours after discharge. All‐cause 30‐day and 90‐day mortality was defined as death at 30 days and 90 days after discharge, respectively. Patients with either mucormycosis or aspergillosis from 2006 to 2017 were identified if they had received an ICD code for either disease (not restricted to primary and secondary diagnosis), and the ICD codes for both diseases are provided in Table Table S1. To ensure the accuracy of disease diagnosis, only patients who were hospitalised and treated with either oral and/or intravenous mould‐active drugs, including amphotericin B (both deoxycholate and lipid formulation), itraconazole, voriconazole, posaconazole, anidulafungin, micafungin and caspofungin, were enrolled. The flowchart in Figure 1 summarises the study selection process. The index date was defined as the first date of hospital admission during which mucormycosis or aspergillosis was diagnosed. The annual case number, demographics, comorbidities, medical treatment and all‐cause mortality of patients were assessed. Comorbidities that were coded in at least two outpatient visits or one discharge diagnosis for inpatients within 1 year preceding the index date were included for analyses, which included autoimmune disease, chronic heart failure, chronic obstructive pulmonary disease (COPD), diabetes mellitus, human immunodeficiency virus (HIV) infection, liver cirrhosis and malignant neoplasms, including both haematological cancer and solid cancer. End‐stage renal disease (ESRD) was coded if the patients received regular dialysis within 90 days preceding the index admission. The medical treatment assessed included antineoplastic chemotherapy and corticosteroids within 90 days preceding and during the index admission, transplantation (bone marrow transplantation and organ transplantation) within 1 year preceding and during the index admission and mould‐active antifungals during the index admission. The severity of comorbidity was scored based on the Charlson Comorbidity Index (CCI). 
14 The ICD codes for comorbidities and medical treatment are provided in Table Table S1, and the Anatomical Therapeutic Chemical (ATC) Classification codes for mould‐active antifungals and corticosteroids are provided in Table S2. Flowchart summarising the study selection process in this study All‐cause in‐hospital mortality was defined as death during the index admission or within 72 hours after discharge. All‐cause 30‐day and 90‐day mortality was defined as death at 30 days and 90 days after discharge, respectively. Statistical analysis All statistical analyses were performed with SAS 9.4 software (SAS Institute). The temporal trend of disease incidence was examined using the trend test. Demographic and clinical data of mucormycosis and aspergillosis were compared, and factors associated with mortality in either disease were examined by both univariate analysis and multivariate analyses using the backward elimination method. The chi‐square test was used for categorical variables, and Student's t test was used for continuous variables. Survival probability was estimated for each group using the Kaplan–Meier method and compared using the log‐rank test. All statistical tests were 2‐sided, and p values <.05 were considered statistically significant. This study was approved by the institutional review board (IRB) of National Chen‐Kung University Hospital and National Health Research Institute (registration numbers: A‐ER‐108‐479 and EC1070104‐E) and the requirement to obtain informed consent was waived. All statistical analyses were performed with SAS 9.4 software (SAS Institute). The temporal trend of disease incidence was examined using the trend test. Demographic and clinical data of mucormycosis and aspergillosis were compared, and factors associated with mortality in either disease were examined by both univariate analysis and multivariate analyses using the backward elimination method. The chi‐square test was used for categorical variables, and Student's t test was used for continuous variables. Survival probability was estimated for each group using the Kaplan–Meier method and compared using the log‐rank test. All statistical tests were 2‐sided, and p values <.05 were considered statistically significant. This study was approved by the institutional review board (IRB) of National Chen‐Kung University Hospital and National Health Research Institute (registration numbers: A‐ER‐108‐479 and EC1070104‐E) and the requirement to obtain informed consent was waived. Study design and database: We conducted a nationwide, population‐based retrospective cohort study that investigated the incidence, demographic and clinical characteristics of mucormycosis and aspergillosis that occurred between 2006 and 2017 in Taiwan by analysing data from the NHIRD. The NHIRD comprises secondary data released for research purposes, which includes information on patient demographics, outpatient visits, hospital admission, prescribed medication, interventional procedures and up to four diagnoses for outpatients and up to five discharge diagnoses for inpatients. Diseases are coded based on the ICD, Ninth Revision, Clinical Modification (ICD‐9‐CM) code from 2006 to 2015, and ICD‐10‐CM code from 2016 to 2017. Personal information was deidentified by the Taiwan Health and Welfare Data Science Centre (HWDC) but could be anonymously linked to the official Taiwan Death Registry dataset, which was also assessed in this study to obtain the patient outcome (mortality). 
According to the regulation of HWDC, case numbers lower than three were not allowed to be reported in the analyses to avoid reidentification. Study subjects and definitions: Patients with either mucormycosis or aspergillosis from 2006 to 2017 were identified if they had received an ICD code for either disease (not restricted to primary and secondary diagnosis), and the ICD codes for both diseases are provided in Table Table S1. To ensure the accuracy of disease diagnosis, only patients who were hospitalised and treated with either oral and/or intravenous mould‐active drugs, including amphotericin B (both deoxycholate and lipid formulation), itraconazole, voriconazole, posaconazole, anidulafungin, micafungin and caspofungin, were enrolled. The flowchart in Figure 1 summarises the study selection process. The index date was defined as the first date of hospital admission during which mucormycosis or aspergillosis was diagnosed. The annual case number, demographics, comorbidities, medical treatment and all‐cause mortality of patients were assessed. Comorbidities that were coded in at least two outpatient visits or one discharge diagnosis for inpatients within 1 year preceding the index date were included for analyses, which included autoimmune disease, chronic heart failure, chronic obstructive pulmonary disease (COPD), diabetes mellitus, human immunodeficiency virus (HIV) infection, liver cirrhosis and malignant neoplasms, including both haematological cancer and solid cancer. End‐stage renal disease (ESRD) was coded if the patients received regular dialysis within 90 days preceding the index admission. The medical treatment assessed included antineoplastic chemotherapy and corticosteroids within 90 days preceding and during the index admission, transplantation (bone marrow transplantation and organ transplantation) within 1 year preceding and during the index admission and mould‐active antifungals during the index admission. The severity of comorbidity was scored based on the Charlson Comorbidity Index (CCI). 14 The ICD codes for comorbidities and medical treatment are provided in Table Table S1, and the Anatomical Therapeutic Chemical (ATC) Classification codes for mould‐active antifungals and corticosteroids are provided in Table S2. Flowchart summarising the study selection process in this study All‐cause in‐hospital mortality was defined as death during the index admission or within 72 hours after discharge. All‐cause 30‐day and 90‐day mortality was defined as death at 30 days and 90 days after discharge, respectively. Statistical analysis: All statistical analyses were performed with SAS 9.4 software (SAS Institute). The temporal trend of disease incidence was examined using the trend test. Demographic and clinical data of mucormycosis and aspergillosis were compared, and factors associated with mortality in either disease were examined by both univariate analysis and multivariate analyses using the backward elimination method. The chi‐square test was used for categorical variables, and Student's t test was used for continuous variables. Survival probability was estimated for each group using the Kaplan–Meier method and compared using the log‐rank test. All statistical tests were 2‐sided, and p values <.05 were considered statistically significant. 
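The analyses were performed in SAS 9.4; purely for illustration, the same classes of tests can be sketched in Python (scipy, lifelines) on a small synthetic patient table whose column names are hypothetical — this is not the study's actual code:

```python
# Illustrative sketch only. The study used SAS 9.4; this re-creates the same
# classes of tests (chi-square for categorical variables, Student's t test for
# continuous variables, Kaplan-Meier estimation with a log-rank comparison)
# on a tiny synthetic table. Column names are assumptions, not the NHIRD schema.
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "disease":       ["mucor"] * 4 + ["asper"] * 4,
    "age":           [63, 71, 58, 66, 55, 60, 72, 49],
    "diabetes":      [1, 1, 0, 1, 0, 0, 1, 0],
    "days_to_death": [20, 90, 90, 45, 90, 30, 90, 90],  # follow-up capped at 90 days
    "died":          [1, 0, 0, 1, 0, 1, 0, 0],
})
muc, asp = df[df.disease == "mucor"], df[df.disease == "asper"]

# Categorical comparison: chi-square test on a 2x2 contingency table.
chi2, p_cat, dof, expected = chi2_contingency(pd.crosstab(df.disease, df.diabetes))

# Continuous comparison: Student's t test.
t_stat, p_cont = ttest_ind(muc.age, asp.age)

# Survival: Kaplan-Meier estimate per group, compared with the log-rank test.
kmf_m, kmf_a = KaplanMeierFitter(), KaplanMeierFitter()
kmf_m.fit(muc.days_to_death, event_observed=muc.died, label="mucormycosis")
kmf_a.fit(asp.days_to_death, event_observed=asp.died, label="aspergillosis")
p_surv = logrank_test(muc.days_to_death, asp.days_to_death,
                      event_observed_A=muc.died,
                      event_observed_B=asp.died).p_value

print(f"chi-square p={p_cat:.3f}, t-test p={p_cont:.3f}, log-rank p={p_surv:.3f}")
```

The multivariate models for in‐hospital mortality (logistic regression with backward elimination) would follow the same pattern with a regression package, but are omitted from this sketch.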
This study was approved by the institutional review board (IRB) of National Chen‐Kung University Hospital and National Health Research Institute (registration numbers: A‐ER‐108‐479 and EC1070104‐E) and the requirement to obtain informed consent was waived. RESULTS: Over the 12‐year study period, the population covered by Taiwan's NHI increased from 22.4 million in 2006 to 23.8 million in 2017, exceeding 99% of the total population in Taiwan. A total of 204 patients with mucormycosis and 2270 patients with aspergillosis who were hospitalised and treated with mould‐active antifungals from 2006 to 2017 were identified and extracted from the NHIRD for further analyses (Figure 1 ). Overall, the annual incidence of mucormycosis (range 0.04–0.11/100,000) was lower than that of aspergillosis (range 0.40–1.25/100,000), and the average annual incidence of aspergillosis (0.81/100,000) was approximately 11‐fold higher than that of mucormycosis (0.07/100,000) (Figure 2). Notably, the incidence of aspergillosis increased significantly over the years, from 0.48/100,000 in 2006 to 1.19/100,000 in 2017 (p < .0001). Although the incidence of mucormycosis also increased from 0.04/100,000 in 2006 to 0.11/100,000 in 2017, the increase did not reach statistical significance (p = .07). Annual case numbers and incidence of mucormycosis compared with those of aspergillosis during 2006–2017 Patients with mucormycosis and patients with aspergillosis were old‐aged (mean age 59.4‐ and 57.6‐year‐old, respectively), and most patients were male (67.2% and 64.3%, respectively). Diabetes mellitus (60.8% vs. 23.8%; adjusted odds ratio (aOR) 4.30, 95% confidence interval (CI) 3.17–5.82, p < .0001) was more commonly associated with mucormycosis, whereas malignant neoplasms (22.1% vs. 45.9%, p < .0001), including haematological malignancy (16.7% vs. 34.2%), solid cancer (7.8% vs. 16.3%) and COPD (14.2% vs. 23.9%, p = .0002), were more commonly associated with aspergillosis (Table 1). HIV infection was identified in less than three (<1%) patients with mucormycosis and in 32 (1.4%) patients with aspergillosis. Demographic and clinical data of patients with mucormycosis and patients with aspergillosis from 2006 to 2017 Abbreviations: aOR, adjusted odds ratio; CCI, Charlson Comorbidity Index; CI, confidence interval; COPD, chronic obstructive pulmonary disease; ESRD, end‐stage renal disease; SD, standard deviation. The all‐cause in‐hospital, 30‐day and 90‐day mortality rates did not differ significantly between mucormycosis and aspergillosis (22.1% vs. 22.3%, p = .93; 29.9% vs. 26.9%, p = .35; 38.7% vs. 37.1%, p = .64) (Table 1). Kaplan–Meier survival curves revealed similar survival probability for both diseases at 90 days after discharge (p = .60) (Figure 3A). Of 204 patients with mucormycosis, 187 (91.6%) patients had received Mucorales‐active agents with either amphotericin B or posaconazole. In the multivariate analysis which examined factors of gender, age, underlying diseases, chemotherapy, corticosteroids use and prescribed antifungal agents, posaconazole use was found to be associated with lower in‐hospital mortality (aOR 0.38; 95% CI 0.15–0.97; p = .04), whereas voriconazole use was associated with in‐hospital mortality (aOR 3.31; 95% CI 1.48–7.39; p = .003) (Table 2). 
Kaplan–Meier survival analysis also demonstrated that posaconazole use during hospitalisation was associated with a higher survival probability at 90 days after discharge (p = .04), whereas voriconazole use was associated with a lower survival probability (p = .02) (Figure 3B,C). Of 2270 patients with aspergillosis, all patients had received mould‐active agents. Multivariate analysis revealed that COPD, haematological cancer and antineoplastic chemotherapy were associated with lower in‐hospital mortality, whereas age ≥ 65 y.o., CCI ≥3 and use of amphotericin B or corticosteroids were associated with in‐hospital mortality (Table S3). Kaplan–Meier survival curves at 90 days after discharge according to (A) diagnosis (mucormycosis vs. aspergillosis), (B) posaconazole use in mucormycosis and (C) voriconazole use in mucormycosis (na, data reporting not allowed) Clinical data and factors associated with all‐cause in‐hospital mortality among patients with mucormycosis from 2006 to 2017 Abbreviations: aOR, adjusted odds ratio; CCI, Charlson Comorbidity Index; CI, confidence interval; COPD, chronic obstructive pulmonary disease; na, nonapplicable; SD, standard deviation. The number of patients receiving haemodialysis or transplantation was less than three in some cells, and thus the data regarding end‐stage renal disease and transplantation was not provided. Anti‐Aspergillus‐only agents include itraconazole, voriconazole, anidulafungin, caspofungin and micafungin; Mucorales‐active agents include posaconazole and amphotericin B; mould‐active agents include amphotericin B, itraconazole, voriconazole, posaconazole, anidulafungin, caspofungin and micafungin. DISCUSSION: Our study hereby presented the temporal trend of disease burden and demographic characteristics of mucormycosis during a 12‐year period (2006–2017) in Taiwan by analysing nationwide, population‐based data. With aspergillosis as a comparator, four major findings were revealed: (1) a 12‐year average incidence of 0.07/100,000 and an incidence of 0.11/100,000 in 2017 for mucormycosis were documented; (2) the major underlying disease identified was diabetes mellitus for mucormycosis and malignant neoplasm for aspergillosis; (3) a significant increase in incidence was observed for aspergillosis but was not evident for mucormycosis and (4) posaconazole use was associated with a better survival probability in mucormycosis. The incidence of mucormycosis in Taiwan (0.11/100,000 in 2017) is similar to that reported in France in 2006 (0.12/100,000), both based on nationwide population‐based data. 3 As national data on fungal infection were not available in most countries, the Leading International Fungal Education (LIFE) program led by David Denning (http://www.LIFE‐worldwide.org) has made great efforts to estimate the global burden of serious fungal infection, including mucormycosis, by using the available epidemiological data (the population at risk, published literature) and modelling. In addition to the French incidence (0.09–0.12/100,000), an incidence rate of 0.2/100,000 was also used as a general literature value for incidence estimation of mucormycosis in some countries. 15 , 16 , 17 , 18 The estimated incidence rates in different countries, mostly from the data of the LIFE program, have been comprehensively summarised in a review article by Prakash et al. 
2 In this review, the incidences of mucormycosis in most countries were within the range of 0.06–0.2/100,000 and were as follows: Europe (Greece 0.06, United Kingdom 0.09, Norway 0.1 and Ireland 0.2), North America (Canada 0.12 and USA 0.3), Asia (Korea 0.14, Japan 0.2 and Thailand 0.2), Africa (Algeria, Kenya, Malawi, Nigeria 0.2), Russia (0.16) and Australia (0.06), whereas India (14), Pakistan (14) and Portugal (9.5) ranked as the countries with the highest incidences. The incidence of mucormycosis in Taiwan was also in line with estimated incidences in most countries within the same period, and hence provided the second and latest nationwide data, after the French study, for estimating the disease burden of mucormycosis. Risk factors and underlying comorbidities for mucormycosis vary considerably with geographical area. 1 Diabetes was the leading underlying condition in India (57–74%), in contrast to haematological malignancies in Europe (44%), including France (50%) and Italy (62%), and Australia (49%). 1  As observed in India, we found in Taiwan that diabetes (61%) predominated over malignant neoplasms (22%), including haematological malignancy (17%), as the major underlying disease in mucormycosis. This feature was further contrasted with aspergillosis, for which underlying malignant neoplasms (46%), including haematological malignancies (34%), predominated over diabetes (24%). According to the IDF Diabetes Atlas in 2021, India (9.6%) and Taiwan (9.7%) have higher age‐adjusted comparative diabetes prevalence in adults 20–79 years (though India might have a number of undiagnosed diabetes) than the European countries (7.0%) and Australia (6.4%). 19 On the contrary, the Global Burden of Disease (GBD) 2017 study revealed that Taiwan is one of the countries with higher leukaemia prevalence and shared similar prevalence with France, whereas India has lower leukaemia prevalence. 20 Experts hypothesized one of the possible reasons for the high occurrence of mucormycosis in India to be the abundance of thermotolerant and thermophilic Mucorales in the environment due to a hot and humid climate. 7 Taiwan has a tropical and subtropical monsoon climate with wet and humid summers, which might allow ubiquitous growth of Mucorales in the environment. Collectively, it is presumed that community residents with diabetes in Taiwan would have a higher probability of inhaling spores of pathogenic Mucorales, leading to invasive disease when immune responses compromise. Further environmental surveillance would be helpful to elucidate this hypothesis. From a clinical perspective, these findings suggested that medical staff in Taiwan must remain vigilant of the occurrence of mucormycosis in diabetic patients without haematological malignancy, especially those presenting invasive sinusitis or pneumonia not responding to antibacterial therapy. Our study, along with reports elsewhere, demonstrated that the incidences of aspergillosis were generally 10–30 times higher than those of mucormycosis. 2 Notably, the incidence of mucormycosis in Taiwan remained stable over the years, in contrast to an apparent rising trend in aspergillosis. The galactomannan (GM) antigen test was approved by the United States Food and Drug Association (FDA) for diagnosing Aspergillus infection and was introduced into Taiwan in 2004. Based on Taiwan's NHIRD, studies revealed that the increase in the incidence of invasive pulmonary aspergillosis was positively correlated with the increase in the use of GM testing during 2002–2011. 
13 Moreover, in Taiwan, the incidences for all cancers doubled from 1988 to 2016, the crude annual incidence of acute myeloid leukaemia increased from 2.78/100,000 in 2006 to 3.21/100,000 in 2015, and the annual number of haematopoietic stem cell transplants performed increased from 381 in 2006 to 521 in 2015. 9 , 10 , 11 Therefore, the rising incidence of aspergillosis in Taiwan might be in part attributed to the clinical application of new diagnostics and an expanded susceptible population. In addition, the prevalence of diabetes, the major underlying disease of mucormycosis, in patients aged 20–79 years increased by 41% from 2005 to 2014. 12 Given that there were increasing susceptible hosts (diabetes and malignancy) in Taiwan and observed rising trends of mucormycosis in France and India, 2 , 3 we initially expected a similar upwards trend of mucormycosis to be revealed in this study, but it turned out that our data did not support this assumption. From 2006 to 2017 in Taiwan, the diagnosis of mucormycosis mainly depended on conventional culture methods and histopathological examination of biopsy tissue from infected sites demonstrating broad‐based, pauciseptate ribbon‐like fungal elements consistent with Mucorales. However, their diagnostic sensitivity or feasibility are suboptimal for diagnosing mucormycosis. A multicentre study from Taiwan found that the culture yield rate was only 37% for invasive mucormycosis compared with 67% for invasive aspergillosis. 21 Furthermore, patients with mucormycosis might have a rapidly deteriorated and fulminant fatal course, which makes timely diagnosis by fungal culture or tissue biopsy less likely or makes laboratory reports available only postmortem. Even in critical patients with a less fulminant course, collecting samples from deep tissue is also challenging. Additionally, there is no commercially available antigen testing to detect Mucorales in blood or respiratory samples as a noninvasive diagnostic approach so far. Taken together, the incidence of mucormycosis here might probably be underestimated due to limited diagnostic tools, which further underlines the unmet need for improved diagnostics of mucormycosis. Amphotericin B, posaconazole and the new triazole isavuconazole are recommended as antifungals for the treatment of aspergillosis and mucormycosis, while voriconazole is a recommended drug for aspergillosis but not mucormycosis, as it is inactive against Mucorales. 22 , 23 Posaconazole was approved for clinical use by the U.S. FDA in 2006 and has been available in Taiwan since 2010. An association of posaconazole use with a higher survival probability demonstrated here supported its clinical efficacy in treating mucormycosis. However, confounding bias might exist. For instance, patients who had the chance to receive posaconazole might have a less fulminant disease course or lower disease severity, which allowed obtaining clinical samples for fungal culture or histopathological examination to achieve accurate aetiological diagnosis and prescribe targeted antifungal therapy accordingly. Isavuconazole was approved for clinical use by the U.S. FDA in 2015 and available in Taiwan since 2021, and hence, its impact on clinical outcome could not be assessed here. However, future studies comparing the clinical efficacy of posaconazole and isavuconazole would be appealing, as they exhibit different in vitro activities against certain members of Mucorales, such as Cunninghamella bertholletiae. 24 This study has some limitations. 
First, given the inherent limitations of the ICD system and lack of detailed clinical, laboratory and radiological data in the NHRID, the accuracy of diagnosis was suboptimal, and disease classification as proven and probable invasive fungal disease based on the European Organisation for Research and Treatment of Cancer and the Mycoses Study Group Education and Research Consortium (EORTC/MSG) definitions could not be made. 25 In addition, ICD‐9 did not include assignment of infected organ yet, neither for mucormycosis nor for aspergillosis, and thus, correlation between severity of disease and outcome could not be investigated in this study. To confirm diagnosis, we enrolled only patients who had ever received mould‐active antifungal(s), indicating high clinical suspicion of invasive fungal disease. It is well understood that by this restriction, patients diagnosed post‐mortem without systemic antifungal treatment were also excluded from disease burden calculation. Second, the clinical efficacy of antifungal prescription evaluated from different aspects (aetiological agents, time to initiation of antifungal therapy, monotherapy vs. combination therapy and antifungal duration) could not be assessed due to limitations of the data structure. A multicentre study enrolling a large number of patients with detailed clinical information is needed to elucidate these issues. In conclusion, the nationwide population‐based database revealed that mucormycosis is an uncommon fungal disease in Taiwan that occurs mostly in diabetic patients. An increasing trend in incidence was not evident, but the incidence might be underestimated due to limited diagnostics for mucormycosis. With increasing number of susceptible hosts, availability of isavuconazole and global and domestic efforts to raise clinical awareness, continuous surveillance might aid in delineating the evolving features of mucormycosis. AUTHOR CONTRIBUTIONS: Shih H.I. and Wu C.J. involved in conceptualization, funding acquisition and writing—review and editing. Huang Y.T. and Shih H.I. involved in data curation. Huang Y.T. involved in Formal analysis. Shih H.I., Huang Y.T. and Wu C.J. involved methodology and validation. Shih H.I., Huang Y.T. and Wu C.J. involved in writing—original draft. CONFLICT OF INTEREST: None to declare. Supporting information: Table S1 Click here for additional data file.
Background: Epidemiological knowledge of mucormycosis obtained from national population-based databases is scarce. Methods: Data from patients with either mucormycosis or aspergillosis from 2006 to 2017 identified with the International Classification of Diseases (ICD) codes were extracted from the NHIRD. The incidence, demographics and clinical data of both diseases were analysed. Results: A total of 204 patients with mucormycosis and 2270 patients with aspergillosis who were hospitalised and treated with mould-active antifungals between 2006 and 2017 were identified. The average annual incidence of aspergillosis (0.81 cases per 100,000 population [0.81/100,000]) was 11-fold higher than that of mucormycosis (0.07/100,000). A significant increase in incidence was observed for aspergillosis (from 0.48/100,000 in 2006 to 1.19/100,000 in 2017, p < .0001) but not for mucormycosis (from 0.04/100,000 in 2006 to 0.11/100,000 in 2017, p = .07). The major underlying disease identified was diabetes mellitus (60.8%) for mucormycosis and malignant neoplasms (45.9%) for aspergillosis. The all-cause 90-day mortality rate was similar between mucormycosis and aspergillosis patients (39% vs. 37%, p = .60). For mucormycosis patients, multivariate analysis revealed that posaconazole use was associated with lower in-hospital mortality (aOR 0.38; 95% CI 0.15-0.97; p = .04). Conclusions: Mucormycosis is an uncommon fungal disease in Taiwan, occurring mostly in diabetic patients. However, the incidence might be underestimated due to limited diagnostics. Continuous surveillance might aid in delineating the evolving features of mucormycosis.
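As a rough back‐of‐envelope consistency check of the incidence figures above — assuming an average NHI‐covered population of about 23.1 million across the 12 study years (the covered population grew from 22.4 to 23.8 million) — the reported average annual rates can be reproduced as follows:

```python
# Back-of-envelope check of the reported average annual incidences, assuming an
# average covered population of ~23.1 million over the 12 study years.
population = 23_100_000
years = 12
for disease, cases in [("mucormycosis", 204), ("aspergillosis", 2270)]:
    incidence = cases / years / population * 100_000
    print(f"{disease}: {incidence:.2f} per 100,000 per year")
# -> roughly 0.07 and 0.82 per 100,000 per year, an ~11-fold difference,
#    consistent with the reported 0.07 vs. 0.81 per 100,000.
```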
null
null
5,698
304
[ 592, 181, 396, 158, 63 ]
10
[ "mucormycosis", "disease", "aspergillosis", "patients", "study", "taiwan", "incidence", "data", "index", "2006" ]
[ "patients mucormycosis aspergillosis", "mucormycosis vs aspergillosis", "aspergillosis evident mucormycosis", "mucormycosis compared aspergillosis", "mucormycosis aspergillosis occurred" ]
null
null
null
[CONTENT] aspergillosis | diabetes | epidemiology | haematological malignancy | incidence | mucormycosis | posaconazole | Taiwan [SUMMARY]
null
[CONTENT] aspergillosis | diabetes | epidemiology | haematological malignancy | incidence | mucormycosis | posaconazole | Taiwan [SUMMARY]
null
[CONTENT] aspergillosis | diabetes | epidemiology | haematological malignancy | incidence | mucormycosis | posaconazole | Taiwan [SUMMARY]
null
[CONTENT] Antifungal Agents | Aspergillosis | Cost of Illness | Hospital Mortality | Humans | Mucormycosis | Taiwan [SUMMARY]
null
[CONTENT] Antifungal Agents | Aspergillosis | Cost of Illness | Hospital Mortality | Humans | Mucormycosis | Taiwan [SUMMARY]
null
[CONTENT] Antifungal Agents | Aspergillosis | Cost of Illness | Hospital Mortality | Humans | Mucormycosis | Taiwan [SUMMARY]
null
[CONTENT] patients mucormycosis aspergillosis | mucormycosis vs aspergillosis | aspergillosis evident mucormycosis | mucormycosis compared aspergillosis | mucormycosis aspergillosis occurred [SUMMARY]
null
[CONTENT] patients mucormycosis aspergillosis | mucormycosis vs aspergillosis | aspergillosis evident mucormycosis | mucormycosis compared aspergillosis | mucormycosis aspergillosis occurred [SUMMARY]
null
[CONTENT] patients mucormycosis aspergillosis | mucormycosis vs aspergillosis | aspergillosis evident mucormycosis | mucormycosis compared aspergillosis | mucormycosis aspergillosis occurred [SUMMARY]
null
[CONTENT] mucormycosis | disease | aspergillosis | patients | study | taiwan | incidence | data | index | 2006 [SUMMARY]
null
[CONTENT] mucormycosis | disease | aspergillosis | patients | study | taiwan | incidence | data | index | 2006 [SUMMARY]
null
[CONTENT] mucormycosis | disease | aspergillosis | patients | study | taiwan | incidence | data | index | 2006 [SUMMARY]
null
[CONTENT] mucormycosis | population | 000 | 100 000 | 100 | cases | burden | risk | incidence | major [SUMMARY]
null
[CONTENT] patients | vs | mucormycosis | 000 | 100 000 | 100 | use | aspergillosis | associated | agents [SUMMARY]
null
[CONTENT] declare | mucormycosis | disease | data | table | patients | index | taiwan | aspergillosis | involved [SUMMARY]
null
[CONTENT] [SUMMARY]
null
[CONTENT] 204 | 2270 | between 2006 and 2017 ||| annual | 0.81 | 100,000 | 11-fold ||| 0.48/100,000 | 2006 | 1.19/100,000 | 2017 | 2006 | 0.11/100,000 | 2017 ||| 60.8% | 45.9% ||| 90-day | 39% | 37% ||| aOR 0.38 | 95% | CI | 0.15-0.97 [SUMMARY]
null
[CONTENT] ||| 2006 to 2017 | the International Classification of Diseases | ICD | NHIRD ||| ||| 204 | 2270 | between 2006 and 2017 ||| annual | 0.81 | 100,000 | 11-fold ||| 0.48/100,000 | 2006 | 1.19/100,000 | 2017 | 2006 | 0.11/100,000 | 2017 ||| 60.8% | 45.9% ||| 90-day | 39% | 37% ||| aOR 0.38 | 95% | CI | 0.15-0.97 ||| Taiwan ||| ||| Continuous [SUMMARY]
null
A Steep Early Learning Curve for Endoscopic Submucosal Dissection in the Live Porcine Model.
34915487
Endoscopic submucosal dissection (ESD) is a demanding procedure requiring high level of expertise. ESD training programs incorporate procedures with live animal models. This study aimed to assess the early learning curve for performing ESD on live porcine models by endoscopists without any or with limited previous ESD experience.
BACKGROUND
In a live porcine model ESD workshop, number of resections, completeness of the resections, en bloc resections, adverse events, tutor intervention, type of knife, ESD time and size of resected specimens were recorded. ESD speed was calculated.
METHODS
A total of 70 procedures were carried out by 17 trainees. The complete resection rate, the en bloc resection rate and the ESD speed increased from the first to the latest procedures (88.2%-100%, 76.5%-100% and 8.6-31.4 mm2/min, respectively). The number of procedures in which a trainee needed tutor intervention and the number of adverse events also decreased throughout the procedures (4 to 0 and 6 to 0, respectively). During the workshop, when participants changed to a different type of knife, ESD speed slightly decreased (18.5 mm2/min to 17.0 mm2/min) and adverse events increased again (from 0 to 2).
RESULTS
Through successive procedures, complete resections, en bloc resections, and ESD speed improve whereas adverse events decrease, supporting the role of the live porcine model in the preclinical learning phase. Changing ESD knives has a momentary negative impact on the learning curve.
CONCLUSIONS
[ "Swine", "Humans", "Animals", "Endoscopic Mucosal Resection", "Learning Curve", "Dissection", "Models, Animal" ]
9808771
Introduction
Endoscopic submucosal dissection (ESD) is a demanding technique that enables high en bloc resection rates, R0 resections and low rate of local recurrences regardless of lesion size as well as an accurate tumor histopathological staging and grading with precise evaluation of resection margins [1, 2, 3, 4, 5, 6, 7, 8]. On the other hand, it is associated with long procedure times and considerable risk of adverse events (such as bleeding and perforation) [9, 10, 11]. In addition, ESD has a difficult and prolonged learning curve [5, 9, 12, 13]. ESD training programs include acquiring basic knowledge, practicing on animal models (both ex vivo and in vivo models), visiting centers with a high ESD volume, attendance of hands-on training ESD workshops and only then proceeding to clinical practice, ideally under supervision of an expert during the first cases [14, 15, 16, 17, 18]. This is particularly relevant in Western settings where the number of ESD experts is limited and the possibility of starting ESD in superficial gastric neoplasms, which are the easiest and safest procedures, is limited due to the low prevalence of such lesions [5, 9, 12, 13]. Workshops using animal models are organized in training centers with the potential to improve the learning process and achieve initial competence in ESD [19, 20, 21, 22]. This study aimed to assess the early learning curve for performing ESD on live porcine models by endoscopists without any or with limited previous ESD experience.
Methods
Study Design This prospective study was conducted during a workshop for ESD training, co-organized by the European Association for Gastroenterology, Endoscopy and Nutrition (EAGEN) and the European Society of Gastrointestinal Endoscopy (ESGE). It included theoretical lectures and hands-on practice with live porcine models. The workshop had duration of 2 days and took place in the training center of the Erasmus School of Endoscopy at the Erasmus University Medical Center in Rotterdam, the Netherlands. This prospective study was conducted during a workshop for ESD training, co-organized by the European Association for Gastroenterology, Endoscopy and Nutrition (EAGEN) and the European Society of Gastrointestinal Endoscopy (ESGE). It included theoretical lectures and hands-on practice with live porcine models. The workshop had duration of 2 days and took place in the training center of the Erasmus School of Endoscopy at the Erasmus University Medical Center in Rotterdam, the Netherlands. Animal Model The use of the live porcine model for training purposes in the workshop was approved by local Ethical Committee for the welfare of animals in medical training. It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Animal welfare and regulatory compliance were overseen by local Animal Welfare Body and government inspectors on behalf of the authorities (project license NL license # AVD1010020173286). Procedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines [23]. Live pigs (Sus scrofus domesticus), weighing between 30 and 40 kg were used. Animals were given a liquid diet for 3 days and fasted for 8 h before the procedures. According to local protocol, with the support of the veterinarian staff throughout the course, general anesthesia with endotracheal intubation and mechanical ventilation was performed. Drugs were employed in the following sequence: premedication − ketamine 25–35 mg/kg IM and midazolam 1 mg/kg IM; induction − propofol 0.5–1 mg/kg IV; maintenance − isoflurane 1.5–2.5% ET PO, sufentanil 0.3–0.5 µg/kg IV, NaCl 0.9% 10 g/kg IV or gelofusine, propofol, and sufentanil as required. Animals were euthanized with pentobarbital overdose. The use of the live porcine model for training purposes in the workshop was approved by local Ethical Committee for the welfare of animals in medical training. It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Animal welfare and regulatory compliance were overseen by local Animal Welfare Body and government inspectors on behalf of the authorities (project license NL license # AVD1010020173286). Procedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines [23]. Live pigs (Sus scrofus domesticus), weighing between 30 and 40 kg were used. Animals were given a liquid diet for 3 days and fasted for 8 h before the procedures. According to local protocol, with the support of the veterinarian staff throughout the course, general anesthesia with endotracheal intubation and mechanical ventilation was performed. 
Drugs were employed in the following sequence: premedication − ketamine 25–35 mg/kg IM and midazolam 1 mg/kg IM; induction − propofol 0.5–1 mg/kg IV; maintenance − isoflurane 1.5–2.5% ET PO, sufentanil 0.3–0.5 µg/kg IV, NaCl 0.9% 10 g/kg IV or gelofusine, propofol, and sufentanil as required. Animals were euthanized with pentobarbital overdose. Participants Trainees attending the workshop for ESD training were evaluated. Demographic data and previous endoscopic experience were collected. Identity protection and confidentiality of the collected data were guaranteed. Only the ones who wished to do so responded willingly and all had the possibility of not doing it if they wished. Accordingly, participant's acceptance/consent was implicit, and written informed consent was not necessary. Trainees attending the workshop for ESD training were evaluated. Demographic data and previous endoscopic experience were collected. Identity protection and confidentiality of the collected data were guaranteed. Only the ones who wished to do so responded willingly and all had the possibility of not doing it if they wished. Accordingly, participant's acceptance/consent was implicit, and written informed consent was not necessary. Procedures Standard Pentax® (Tokyo, Japan) flexible endoscopes and ERBE® (Tübingen, Germany) VIO 200s and VIO3 electrosurgical units were used. A soft, straight distal attachment cap from Olympus® (Tokyo, Japan) was mounted on the tip of the endoscope. Direct assistance was assured by one of the participants, who acted according to the instructions of the endoscopist performing the procedure. International experts supervised this assistance and all the steps of the procedure, giving verbal instructions and intervening when deemed necessary (Fig. 1). All of them were highly experienced in clinical ESD (more than 10 years of continuous clinical practice) and in supervising ESD training courses (more than 5 years of continuous tutor practice). Gastric ESD was conducted using ERBE® HybridKnives® (Tübingen, Germany) coupled with a high-pressure injection system, ERBERJET 2® (Tübingen, Germany) which allows for cutting, coagulation and submucosal injection with the same knife. The injection solution was physiological saline with a few drops of indigo carmine. During the first procedures, the HybridKnife® I-type (which is a noninsulated needle-type knife) was used and during following procedures the HybridKnife® O-type (which is a partially insulated tip knife). Trainees were instructed to choose a place in the gastric mucosa, mark the outer margins of a pseudo-lesion with the tip of the knife, perform submucosal injection, and create a circumferential incision, followed by submucosal dissection and re-lifting the pseudo-lesion when needed. Bleedings were treated using submucosal injection, by coagulation with the knife, by using a bipolar hemostatic forceps, HemoStat-Y®, Pentax® (Tokyo, Japan) or by clip application, Instinct clip®, Cook Medical® (Winston-Salem, NC, USA). Resected specimens were retrieved at the end of each procedure, carefully stretched to in vivo size, and measured by an independent observer (RKM). Procedures were performed over the 2 days of the workshop. Standard Pentax® (Tokyo, Japan) flexible endoscopes and ERBE® (Tübingen, Germany) VIO 200s and VIO3 electrosurgical units were used. A soft, straight distal attachment cap from Olympus® (Tokyo, Japan) was mounted on the tip of the endoscope. 
Direct assistance was assured by one of the participants, who acted according to the instructions of the endoscopist performing the procedure. International experts supervised this assistance and all the steps of the procedure, giving verbal instructions and intervening when deemed necessary (Fig. 1). All of them were highly experienced in clinical ESD (more than 10 years of continuous clinical practice) and in supervising ESD training courses (more than 5 years of continuous tutor practice). Gastric ESD was conducted using ERBE® HybridKnives® (Tübingen, Germany) coupled with a high-pressure injection system, ERBERJET 2® (Tübingen, Germany) which allows for cutting, coagulation and submucosal injection with the same knife. The injection solution was physiological saline with a few drops of indigo carmine. During the first procedures, the HybridKnife® I-type (which is a noninsulated needle-type knife) was used and during following procedures the HybridKnife® O-type (which is a partially insulated tip knife). Trainees were instructed to choose a place in the gastric mucosa, mark the outer margins of a pseudo-lesion with the tip of the knife, perform submucosal injection, and create a circumferential incision, followed by submucosal dissection and re-lifting the pseudo-lesion when needed. Bleedings were treated using submucosal injection, by coagulation with the knife, by using a bipolar hemostatic forceps, HemoStat-Y®, Pentax® (Tokyo, Japan) or by clip application, Instinct clip®, Cook Medical® (Winston-Salem, NC, USA). Resected specimens were retrieved at the end of each procedure, carefully stretched to in vivo size, and measured by an independent observer (RKM). Procedures were performed over the 2 days of the workshop. Assessment Parameters related to gastric ESD performance like number of resections, completeness of the resection (ability to completely resect the lesion), en bloc resection (ability to completely resect the lesion en bloc with all markings visible on the specimen), adverse events (intraprocedural bleeding and perforation), tutor intervention, procedure time (from the first submucosal injection after marking to the complete removal of the pseudo lesion), and size of resected specimens were recorded for each procedure. Only significant continued bleeding that did not subside spontaneously or with coagulation from the tip of the ESD knife and required the use of an hemostatic forceps or endoclip was considered an intraprocedural bleeding. Tutor intervention was defined as a short hands-on assistance in procedures in which the tutor deemed it necessary, after which the endoscope was handed back to the trainee. Verbal guidance was allowed at any time during the workshop and was not recorded. The type of knife used was documented. The surface area of the resected specimen was calculated using a pre-specified formula: surface area (mm2) = largest diameter of specimen (mm) × smallest diameter of specimen (mm) × 0.25 × π. ESD speed was calculated accordingly: speed (mm2/min) = surface area (mm2) of resected specimen/time of procedure (min). 
Parameters related to gastric ESD performance like number of resections, completeness of the resection (ability to completely resect the lesion), en bloc resection (ability to completely resect the lesion en bloc with all markings visible on the specimen), adverse events (intraprocedural bleeding and perforation), tutor intervention, procedure time (from the first submucosal injection after marking to the complete removal of the pseudo lesion), and size of resected specimens were recorded for each procedure. Only significant continued bleeding that did not subside spontaneously or with coagulation from the tip of the ESD knife and required the use of an hemostatic forceps or endoclip was considered an intraprocedural bleeding. Tutor intervention was defined as a short hands-on assistance in procedures in which the tutor deemed it necessary, after which the endoscope was handed back to the trainee. Verbal guidance was allowed at any time during the workshop and was not recorded. The type of knife used was documented. The surface area of the resected specimen was calculated using a pre-specified formula: surface area (mm2) = largest diameter of specimen (mm) × smallest diameter of specimen (mm) × 0.25 × π. ESD speed was calculated accordingly: speed (mm2/min) = surface area (mm2) of resected specimen/time of procedure (min). Statistical Analysis The IBM Statistical Package for Social Sciences (SPSS Version 25.0.0; SPSS Inc, Chicago, IL, USA) was used for data analysis. Descriptive statistics were determined for all measures according to type of variables. Proportions were reported for dichotomous and ordinal variables. For continuous variables, the medians (with interquartile range [IQR] 25–75) or means (with standard error) were described. Nonparametric tests were used to assess statistical differences (Wilcoxon signed-rank test). Proportions were compared with the χ2 test. Significance level was defined as p < 0.05. The IBM Statistical Package for Social Sciences (SPSS Version 25.0.0; SPSS Inc, Chicago, IL, USA) was used for data analysis. Descriptive statistics were determined for all measures according to type of variables. Proportions were reported for dichotomous and ordinal variables. For continuous variables, the medians (with interquartile range [IQR] 25–75) or means (with standard error) were described. Nonparametric tests were used to assess statistical differences (Wilcoxon signed-rank test). Proportions were compared with the χ2 test. Significance level was defined as p < 0.05.
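The Assessment paragraph above gives the exact formulas for specimen surface area and ESD speed, and the Statistical Analysis paragraph names the tests applied. The Python sketch below is a minimal illustration of that arithmetic together with a Wilcoxon signed-rank comparison; the specimen measurements and the paired speed values are invented for demonstration and are not data from the workshop.

```python
import math

from scipy.stats import wilcoxon  # paired, nonparametric test named in the Statistical Analysis section

def specimen_area_mm2(largest_mm: float, smallest_mm: float) -> float:
    """Ellipse approximation from the paper: area = largest diameter x smallest diameter x 0.25 x pi."""
    return largest_mm * smallest_mm * 0.25 * math.pi

def esd_speed_mm2_per_min(area_mm2: float, minutes: float) -> float:
    """ESD speed = surface area of the resected specimen divided by procedure time."""
    return area_mm2 / minutes

# Hypothetical specimen: 30 x 20 mm, resected in 25 minutes.
area = specimen_area_mm2(30, 20)
speed = esd_speed_mm2_per_min(area, 25)
print(f"area = {area:.1f} mm2, speed = {speed:.1f} mm2/min")

# Hypothetical paired speeds (mm2/min) for the same trainees in procedure 1 vs. procedure 2,
# compared with the Wilcoxon signed-rank test as described in the paper.
speed_p1 = [6.2, 8.6, 9.1, 7.4, 10.3, 8.0, 9.8]
speed_p2 = [12.1, 14.8, 13.0, 11.6, 16.2, 13.4, 15.0]
statistic, p_value = wilcoxon(speed_p1, speed_p2)
print(f"Wilcoxon signed-rank: statistic = {statistic}, p = {p_value:.3f}")
```

A per-procedure speed computed this way is the quantity whose increase across successive procedures is reported in the Results.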
Results
From the 17 participants, background characteristics and endoscopic experience were obtained for 15 and are summarized in Table 1. Their median age was 40 years (IQR, 36–47). A total of 70 procedures (P) were performed by 17 trainees (Table 2). Each trainee performed a median of 4 procedures (IQR, 3.5–5.0). The percentage of complete resections was 88.2% in the first 2 procedures and 100% in the following ones. En bloc resections improved from 76.5% and 88.2% in the first 2 procedures, respectively, to 100% during the subsequent procedures (P1 vs. P3, p < 0.042). The number of procedures in which a trainee needed hands-on intervention from the tutor also decreased throughout the procedures (4 in P1 and 2 in P2 to 0 in P5 and P6; P1 vs. P3, p < 0.042). Median ESD speed increased from the first to the latest procedures: 8.6 mm2/min in P1 to 14.8 mm2/min in P2 (P1 vs. P2, p = 0.006), 24.7 mm2/min in P5 (P4 vs. P5, p = 0.028) and 31.4 mm2/min in P6. In contrast, ESD time decreased: 39 min in P1 to 34 min in P2, 21 min in P3 (P1 vs. P3, p = 0.002; P2 vs. P3, p = 0.01) and 9 min in P6. No significant variation in specimen area occurred when comparing successive procedures. The first procedures were all performed with the noninsulated-tip HybridKnife I-type. After 1 to 3 procedures, all trainees switched to the HybridKnife O-type. Adverse events decreased throughout the procedures (3 bleedings and 3 perforations in P1 to 0 in P5 and P6). In total, there were 7 perforations (10%) and 4 bleedings (5.7%), managed with endoclips and hemostatic forceps. Perforations in P1 and P2 occurred with the I-type knife, while perforations in P4 occurred with the O-type. When a large number of participants (n = 8) changed to a different type of knife (P4), ESD speed temporarily decreased slightly (18.5 mm2/min in P3 to 17.0 mm2/min in P4) and the number of perforations increased again (0 in P3 to 2 in P4) (Fig. 2).
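The Results trace the learning curve by aggregating per-procedure outcomes (median speed, complete and en bloc resection rates, adverse events) by procedure number. As a hedged illustration of that bookkeeping, the snippet below builds such a summary from a handful of invented per-procedure records; the trainee IDs, values and column names are assumptions for demonstration, not the study data in Tables 1-2.

```python
import pandas as pd

# Invented per-procedure records; the real data live in Tables 1-2 of the paper.
records = [
    {"trainee": "A", "procedure": 1, "speed_mm2_min": 7.9,  "complete": True,  "en_bloc": False, "perforation": True},
    {"trainee": "A", "procedure": 2, "speed_mm2_min": 13.5, "complete": True,  "en_bloc": True,  "perforation": False},
    {"trainee": "B", "procedure": 1, "speed_mm2_min": 9.2,  "complete": False, "en_bloc": False, "perforation": False},
    {"trainee": "B", "procedure": 2, "speed_mm2_min": 15.8, "complete": True,  "en_bloc": True,  "perforation": False},
]
df = pd.DataFrame(records)

# Learning curve: one row per procedure number, mirroring the kind of summary reported in the Results.
curve = df.groupby("procedure").agg(
    n=("trainee", "size"),
    median_speed=("speed_mm2_min", "median"),
    complete_rate=("complete", "mean"),
    en_bloc_rate=("en_bloc", "mean"),
    perforations=("perforation", "sum"),
)
print(curve)
```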
null
null
[ "Study Design", "Animal Model", "Participants", "Procedures", "Assessment", "Statistical Analysis", "Statement of Ethics", "Funding Sources", "Author Contributions" ]
[ "This prospective study was conducted during a workshop for ESD training, co-organized by the European Association for Gastroenterology, Endoscopy and Nutrition (EAGEN) and the European Society of Gastrointestinal Endoscopy (ESGE). It included theoretical lectures and hands-on practice with live porcine models. The workshop had duration of 2 days and took place in the training center of the Erasmus School of Endoscopy at the Erasmus University Medical Center in Rotterdam, the Netherlands.", "The use of the live porcine model for training purposes in the workshop was approved by local Ethical Committee for the welfare of animals in medical training. It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Animal welfare and regulatory compliance were overseen by local Animal Welfare Body and government inspectors on behalf of the authorities (project license NL license # AVD1010020173286).\nProcedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines [23]. Live pigs (Sus scrofus domesticus), weighing between 30 and 40 kg were used. Animals were given a liquid diet for 3 days and fasted for 8 h before the procedures. According to local protocol, with the support of the veterinarian staff throughout the course, general anesthesia with endotracheal intubation and mechanical ventilation was performed. Drugs were employed in the following sequence: premedication − ketamine 25–35 mg/kg IM and midazolam 1 mg/kg IM; induction − propofol 0.5–1 mg/kg IV; maintenance − isoflurane 1.5–2.5% ET PO, sufentanil 0.3–0.5 µg/kg IV, NaCl 0.9% 10 g/kg IV or gelofusine, propofol, and sufentanil as required. Animals were euthanized with pentobarbital overdose.", "Trainees attending the workshop for ESD training were evaluated. Demographic data and previous endoscopic experience were collected. Identity protection and confidentiality of the collected data were guaranteed. Only the ones who wished to do so responded willingly and all had the possibility of not doing it if they wished. Accordingly, participant's acceptance/consent was implicit, and written informed consent was not necessary.", "Standard Pentax® (Tokyo, Japan) flexible endoscopes and ERBE® (Tübingen, Germany) VIO 200s and VIO3 electrosurgical units were used. A soft, straight distal attachment cap from Olympus® (Tokyo, Japan) was mounted on the tip of the endoscope.\nDirect assistance was assured by one of the participants, who acted according to the instructions of the endoscopist performing the procedure. International experts supervised this assistance and all the steps of the procedure, giving verbal instructions and intervening when deemed necessary (Fig. 1). All of them were highly experienced in clinical ESD (more than 10 years of continuous clinical practice) and in supervising ESD training courses (more than 5 years of continuous tutor practice).\nGastric ESD was conducted using ERBE® HybridKnives® (Tübingen, Germany) coupled with a high-pressure injection system, ERBERJET 2® (Tübingen, Germany) which allows for cutting, coagulation and submucosal injection with the same knife. The injection solution was physiological saline with a few drops of indigo carmine.\nDuring the first procedures, the HybridKnife® I-type (which is a noninsulated needle-type knife) was used and during following procedures the HybridKnife® O-type (which is a partially insulated tip knife). 
Trainees were instructed to choose a place in the gastric mucosa, mark the outer margins of a pseudo-lesion with the tip of the knife, perform submucosal injection, and create a circumferential incision, followed by submucosal dissection and re-lifting the pseudo-lesion when needed. Bleedings were treated using submucosal injection, by coagulation with the knife, by using a bipolar hemostatic forceps, HemoStat-Y®, Pentax® (Tokyo, Japan) or by clip application, Instinct clip®, Cook Medical® (Winston-Salem, NC, USA). Resected specimens were retrieved at the end of each procedure, carefully stretched to in vivo size, and measured by an independent observer (RKM). Procedures were performed over the 2 days of the workshop.", "Parameters related to gastric ESD performance like number of resections, completeness of the resection (ability to completely resect the lesion), en bloc resection (ability to completely resect the lesion en bloc with all markings visible on the specimen), adverse events (intraprocedural bleeding and perforation), tutor intervention, procedure time (from the first submucosal injection after marking to the complete removal of the pseudo lesion), and size of resected specimens were recorded for each procedure. Only significant continued bleeding that did not subside spontaneously or with coagulation from the tip of the ESD knife and required the use of an hemostatic forceps or endoclip was considered an intraprocedural bleeding. Tutor intervention was defined as a short hands-on assistance in procedures in which the tutor deemed it necessary, after which the endoscope was handed back to the trainee. Verbal guidance was allowed at any time during the workshop and was not recorded.\nThe type of knife used was documented. The surface area of the resected specimen was calculated using a pre-specified formula: surface area (mm2) = largest diameter of specimen (mm) × smallest diameter of specimen (mm) × 0.25 × π. ESD speed was calculated accordingly: speed (mm2/min) = surface area (mm2) of resected specimen/time of procedure (min).", "The IBM Statistical Package for Social Sciences (SPSS Version 25.0.0; SPSS Inc, Chicago, IL, USA) was used for data analysis. Descriptive statistics were determined for all measures according to type of variables. Proportions were reported for dichotomous and ordinal variables. For continuous variables, the medians (with interquartile range [IQR] 25–75) or means (with standard error) were described. Nonparametric tests were used to assess statistical differences (Wilcoxon signed-rank test). Proportions were compared with the χ2 test. Significance level was defined as p < 0.05.", "The workshop was approved by the local Ethics Committee of Erasmus MC, University Medical Center, Rotterdam, The Netherlands, for the welfare of animals in medical training (Ref. No. AVD1010020173286). It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Procedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines. Only the endoscopists who wished to participate did it willingly and all had the possibility of not doing it if they wished. Invitees were informed that data would be analyzed and participation was voluntary. Identity protection and confidentiality of the collected data were guaranteed according to General Data Protection Regulation. 
Accordingly and in agreement with the Ethics Committee, completing the questionnaire implied the participant's implicit acceptance/consent. Therefore, the requirement for written informed consent was waived.", "This research did not receive any specific grant from funding agencies in the public, commercial, or nonprofit sectors. The workshop that allowed the present study was unrestrictedly sponsored by Pentax® (Tokyo, Japan), ERBE® (Tübingen, Germany), and Cook Medical® (Winston-Salem, NC, USA) which provided flexible endoscopes, electrosurgical units, and endoscopic devices. The sponsors had no influence on the scientific content of the workshop neither on this study.", "All authors have contributed and agreed on the content of the manuscript. Ricardo Küttner-Magalhães contributed to the study conception, study design, data acquisition, data analysis, data interpretation, manuscript writing, and critical revision. Mário Dinis-Ribeiro contributed to the study conception, study design, study supervision, data analysis, data interpretation, and critical revision of the manuscript. Marco J. Bruno contributed to the study conception, study design, and critical revision of the manuscript. Ricardo Marcos-Pinto contributed to data interpretation and critical revision of the manuscript. Carla Rolanda contributed to the study conception and design, data interpretation, and critical revision of the manuscript. Arjun D. Kock contributed to the study conception and design, study supervision, data analysis, data interpretation, and critical revision of the manuscript. All authors read and approved the final version of the manuscript." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Study Design", "Animal Model", "Participants", "Procedures", "Assessment", "Statistical Analysis", "Results", "Discussion", "Statement of Ethics", "Conflict of Interest Statement", "Funding Sources", "Author Contributions", "Data Availability Statement" ]
[ "Endoscopic submucosal dissection (ESD) is a demanding technique that enables high en bloc resection rates, R0 resections and low rate of local recurrences regardless of lesion size as well as an accurate tumor histopathological staging and grading with precise evaluation of resection margins [1, 2, 3, 4, 5, 6, 7, 8]. On the other hand, it is associated with long procedure times and considerable risk of adverse events (such as bleeding and perforation) [9, 10, 11]. In addition, ESD has a difficult and prolonged learning curve [5, 9, 12, 13]. ESD training programs include acquiring basic knowledge, practicing on animal models (both ex vivo and in vivo models), visiting centers with a high ESD volume, attendance of hands-on training ESD workshops and only then proceeding to clinical practice, ideally under supervision of an expert during the first cases [14, 15, 16, 17, 18].\nThis is particularly relevant in Western settings where the number of ESD experts is limited and the possibility of starting ESD in superficial gastric neoplasms, which are the easiest and safest procedures, is limited due to the low prevalence of such lesions [5, 9, 12, 13]. Workshops using animal models are organized in training centers with the potential to improve the learning process and achieve initial competence in ESD [19, 20, 21, 22]. This study aimed to assess the early learning curve for performing ESD on live porcine models by endoscopists without any or with limited previous ESD experience.", "Study Design This prospective study was conducted during a workshop for ESD training, co-organized by the European Association for Gastroenterology, Endoscopy and Nutrition (EAGEN) and the European Society of Gastrointestinal Endoscopy (ESGE). It included theoretical lectures and hands-on practice with live porcine models. The workshop had duration of 2 days and took place in the training center of the Erasmus School of Endoscopy at the Erasmus University Medical Center in Rotterdam, the Netherlands.\nThis prospective study was conducted during a workshop for ESD training, co-organized by the European Association for Gastroenterology, Endoscopy and Nutrition (EAGEN) and the European Society of Gastrointestinal Endoscopy (ESGE). It included theoretical lectures and hands-on practice with live porcine models. The workshop had duration of 2 days and took place in the training center of the Erasmus School of Endoscopy at the Erasmus University Medical Center in Rotterdam, the Netherlands.\nAnimal Model The use of the live porcine model for training purposes in the workshop was approved by local Ethical Committee for the welfare of animals in medical training. It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Animal welfare and regulatory compliance were overseen by local Animal Welfare Body and government inspectors on behalf of the authorities (project license NL license # AVD1010020173286).\nProcedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines [23]. Live pigs (Sus scrofus domesticus), weighing between 30 and 40 kg were used. Animals were given a liquid diet for 3 days and fasted for 8 h before the procedures. According to local protocol, with the support of the veterinarian staff throughout the course, general anesthesia with endotracheal intubation and mechanical ventilation was performed. 
Drugs were employed in the following sequence: premedication − ketamine 25–35 mg/kg IM and midazolam 1 mg/kg IM; induction − propofol 0.5–1 mg/kg IV; maintenance − isoflurane 1.5–2.5% ET PO, sufentanil 0.3–0.5 µg/kg IV, NaCl 0.9% 10 g/kg IV or gelofusine, propofol, and sufentanil as required. Animals were euthanized with pentobarbital overdose.\nThe use of the live porcine model for training purposes in the workshop was approved by local Ethical Committee for the welfare of animals in medical training. It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Animal welfare and regulatory compliance were overseen by local Animal Welfare Body and government inspectors on behalf of the authorities (project license NL license # AVD1010020173286).\nProcedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines [23]. Live pigs (Sus scrofus domesticus), weighing between 30 and 40 kg were used. Animals were given a liquid diet for 3 days and fasted for 8 h before the procedures. According to local protocol, with the support of the veterinarian staff throughout the course, general anesthesia with endotracheal intubation and mechanical ventilation was performed. Drugs were employed in the following sequence: premedication − ketamine 25–35 mg/kg IM and midazolam 1 mg/kg IM; induction − propofol 0.5–1 mg/kg IV; maintenance − isoflurane 1.5–2.5% ET PO, sufentanil 0.3–0.5 µg/kg IV, NaCl 0.9% 10 g/kg IV or gelofusine, propofol, and sufentanil as required. Animals were euthanized with pentobarbital overdose.\nParticipants Trainees attending the workshop for ESD training were evaluated. Demographic data and previous endoscopic experience were collected. Identity protection and confidentiality of the collected data were guaranteed. Only the ones who wished to do so responded willingly and all had the possibility of not doing it if they wished. Accordingly, participant's acceptance/consent was implicit, and written informed consent was not necessary.\nTrainees attending the workshop for ESD training were evaluated. Demographic data and previous endoscopic experience were collected. Identity protection and confidentiality of the collected data were guaranteed. Only the ones who wished to do so responded willingly and all had the possibility of not doing it if they wished. Accordingly, participant's acceptance/consent was implicit, and written informed consent was not necessary.\nProcedures Standard Pentax® (Tokyo, Japan) flexible endoscopes and ERBE® (Tübingen, Germany) VIO 200s and VIO3 electrosurgical units were used. A soft, straight distal attachment cap from Olympus® (Tokyo, Japan) was mounted on the tip of the endoscope.\nDirect assistance was assured by one of the participants, who acted according to the instructions of the endoscopist performing the procedure. International experts supervised this assistance and all the steps of the procedure, giving verbal instructions and intervening when deemed necessary (Fig. 1). All of them were highly experienced in clinical ESD (more than 10 years of continuous clinical practice) and in supervising ESD training courses (more than 5 years of continuous tutor practice).\nGastric ESD was conducted using ERBE® HybridKnives® (Tübingen, Germany) coupled with a high-pressure injection system, ERBERJET 2® (Tübingen, Germany) which allows for cutting, coagulation and submucosal injection with the same knife. 
The injection solution was physiological saline with a few drops of indigo carmine.\nDuring the first procedures, the HybridKnife® I-type (which is a noninsulated needle-type knife) was used and during following procedures the HybridKnife® O-type (which is a partially insulated tip knife). Trainees were instructed to choose a place in the gastric mucosa, mark the outer margins of a pseudo-lesion with the tip of the knife, perform submucosal injection, and create a circumferential incision, followed by submucosal dissection and re-lifting the pseudo-lesion when needed. Bleedings were treated using submucosal injection, by coagulation with the knife, by using a bipolar hemostatic forceps, HemoStat-Y®, Pentax® (Tokyo, Japan) or by clip application, Instinct clip®, Cook Medical® (Winston-Salem, NC, USA). Resected specimens were retrieved at the end of each procedure, carefully stretched to in vivo size, and measured by an independent observer (RKM). Procedures were performed over the 2 days of the workshop.\nStandard Pentax® (Tokyo, Japan) flexible endoscopes and ERBE® (Tübingen, Germany) VIO 200s and VIO3 electrosurgical units were used. A soft, straight distal attachment cap from Olympus® (Tokyo, Japan) was mounted on the tip of the endoscope.\nDirect assistance was assured by one of the participants, who acted according to the instructions of the endoscopist performing the procedure. International experts supervised this assistance and all the steps of the procedure, giving verbal instructions and intervening when deemed necessary (Fig. 1). All of them were highly experienced in clinical ESD (more than 10 years of continuous clinical practice) and in supervising ESD training courses (more than 5 years of continuous tutor practice).\nGastric ESD was conducted using ERBE® HybridKnives® (Tübingen, Germany) coupled with a high-pressure injection system, ERBERJET 2® (Tübingen, Germany) which allows for cutting, coagulation and submucosal injection with the same knife. The injection solution was physiological saline with a few drops of indigo carmine.\nDuring the first procedures, the HybridKnife® I-type (which is a noninsulated needle-type knife) was used and during following procedures the HybridKnife® O-type (which is a partially insulated tip knife). Trainees were instructed to choose a place in the gastric mucosa, mark the outer margins of a pseudo-lesion with the tip of the knife, perform submucosal injection, and create a circumferential incision, followed by submucosal dissection and re-lifting the pseudo-lesion when needed. Bleedings were treated using submucosal injection, by coagulation with the knife, by using a bipolar hemostatic forceps, HemoStat-Y®, Pentax® (Tokyo, Japan) or by clip application, Instinct clip®, Cook Medical® (Winston-Salem, NC, USA). Resected specimens were retrieved at the end of each procedure, carefully stretched to in vivo size, and measured by an independent observer (RKM). Procedures were performed over the 2 days of the workshop.\nAssessment Parameters related to gastric ESD performance like number of resections, completeness of the resection (ability to completely resect the lesion), en bloc resection (ability to completely resect the lesion en bloc with all markings visible on the specimen), adverse events (intraprocedural bleeding and perforation), tutor intervention, procedure time (from the first submucosal injection after marking to the complete removal of the pseudo lesion), and size of resected specimens were recorded for each procedure. 
Only significant continued bleeding that did not subside spontaneously or with coagulation from the tip of the ESD knife and required the use of an hemostatic forceps or endoclip was considered an intraprocedural bleeding. Tutor intervention was defined as a short hands-on assistance in procedures in which the tutor deemed it necessary, after which the endoscope was handed back to the trainee. Verbal guidance was allowed at any time during the workshop and was not recorded.\nThe type of knife used was documented. The surface area of the resected specimen was calculated using a pre-specified formula: surface area (mm2) = largest diameter of specimen (mm) × smallest diameter of specimen (mm) × 0.25 × π. ESD speed was calculated accordingly: speed (mm2/min) = surface area (mm2) of resected specimen/time of procedure (min).\nParameters related to gastric ESD performance like number of resections, completeness of the resection (ability to completely resect the lesion), en bloc resection (ability to completely resect the lesion en bloc with all markings visible on the specimen), adverse events (intraprocedural bleeding and perforation), tutor intervention, procedure time (from the first submucosal injection after marking to the complete removal of the pseudo lesion), and size of resected specimens were recorded for each procedure. Only significant continued bleeding that did not subside spontaneously or with coagulation from the tip of the ESD knife and required the use of an hemostatic forceps or endoclip was considered an intraprocedural bleeding. Tutor intervention was defined as a short hands-on assistance in procedures in which the tutor deemed it necessary, after which the endoscope was handed back to the trainee. Verbal guidance was allowed at any time during the workshop and was not recorded.\nThe type of knife used was documented. The surface area of the resected specimen was calculated using a pre-specified formula: surface area (mm2) = largest diameter of specimen (mm) × smallest diameter of specimen (mm) × 0.25 × π. ESD speed was calculated accordingly: speed (mm2/min) = surface area (mm2) of resected specimen/time of procedure (min).\nStatistical Analysis The IBM Statistical Package for Social Sciences (SPSS Version 25.0.0; SPSS Inc, Chicago, IL, USA) was used for data analysis. Descriptive statistics were determined for all measures according to type of variables. Proportions were reported for dichotomous and ordinal variables. For continuous variables, the medians (with interquartile range [IQR] 25–75) or means (with standard error) were described. Nonparametric tests were used to assess statistical differences (Wilcoxon signed-rank test). Proportions were compared with the χ2 test. Significance level was defined as p < 0.05.\nThe IBM Statistical Package for Social Sciences (SPSS Version 25.0.0; SPSS Inc, Chicago, IL, USA) was used for data analysis. Descriptive statistics were determined for all measures according to type of variables. Proportions were reported for dichotomous and ordinal variables. For continuous variables, the medians (with interquartile range [IQR] 25–75) or means (with standard error) were described. Nonparametric tests were used to assess statistical differences (Wilcoxon signed-rank test). Proportions were compared with the χ2 test. 
Significance level was defined as p < 0.05.", "This prospective study was conducted during a workshop for ESD training, co-organized by the European Association for Gastroenterology, Endoscopy and Nutrition (EAGEN) and the European Society of Gastrointestinal Endoscopy (ESGE). It included theoretical lectures and hands-on practice with live porcine models. The workshop had duration of 2 days and took place in the training center of the Erasmus School of Endoscopy at the Erasmus University Medical Center in Rotterdam, the Netherlands.", "The use of the live porcine model for training purposes in the workshop was approved by local Ethical Committee for the welfare of animals in medical training. It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Animal welfare and regulatory compliance were overseen by local Animal Welfare Body and government inspectors on behalf of the authorities (project license NL license # AVD1010020173286).\nProcedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines [23]. Live pigs (Sus scrofus domesticus), weighing between 30 and 40 kg were used. Animals were given a liquid diet for 3 days and fasted for 8 h before the procedures. According to local protocol, with the support of the veterinarian staff throughout the course, general anesthesia with endotracheal intubation and mechanical ventilation was performed. Drugs were employed in the following sequence: premedication − ketamine 25–35 mg/kg IM and midazolam 1 mg/kg IM; induction − propofol 0.5–1 mg/kg IV; maintenance − isoflurane 1.5–2.5% ET PO, sufentanil 0.3–0.5 µg/kg IV, NaCl 0.9% 10 g/kg IV or gelofusine, propofol, and sufentanil as required. Animals were euthanized with pentobarbital overdose.", "Trainees attending the workshop for ESD training were evaluated. Demographic data and previous endoscopic experience were collected. Identity protection and confidentiality of the collected data were guaranteed. Only the ones who wished to do so responded willingly and all had the possibility of not doing it if they wished. Accordingly, participant's acceptance/consent was implicit, and written informed consent was not necessary.", "Standard Pentax® (Tokyo, Japan) flexible endoscopes and ERBE® (Tübingen, Germany) VIO 200s and VIO3 electrosurgical units were used. A soft, straight distal attachment cap from Olympus® (Tokyo, Japan) was mounted on the tip of the endoscope.\nDirect assistance was assured by one of the participants, who acted according to the instructions of the endoscopist performing the procedure. International experts supervised this assistance and all the steps of the procedure, giving verbal instructions and intervening when deemed necessary (Fig. 1). All of them were highly experienced in clinical ESD (more than 10 years of continuous clinical practice) and in supervising ESD training courses (more than 5 years of continuous tutor practice).\nGastric ESD was conducted using ERBE® HybridKnives® (Tübingen, Germany) coupled with a high-pressure injection system, ERBERJET 2® (Tübingen, Germany) which allows for cutting, coagulation and submucosal injection with the same knife. 
The injection solution was physiological saline with a few drops of indigo carmine.\nDuring the first procedures, the HybridKnife® I-type (which is a noninsulated needle-type knife) was used and during following procedures the HybridKnife® O-type (which is a partially insulated tip knife). Trainees were instructed to choose a place in the gastric mucosa, mark the outer margins of a pseudo-lesion with the tip of the knife, perform submucosal injection, and create a circumferential incision, followed by submucosal dissection and re-lifting the pseudo-lesion when needed. Bleedings were treated using submucosal injection, by coagulation with the knife, by using a bipolar hemostatic forceps, HemoStat-Y®, Pentax® (Tokyo, Japan) or by clip application, Instinct clip®, Cook Medical® (Winston-Salem, NC, USA). Resected specimens were retrieved at the end of each procedure, carefully stretched to in vivo size, and measured by an independent observer (RKM). Procedures were performed over the 2 days of the workshop.", "Parameters related to gastric ESD performance like number of resections, completeness of the resection (ability to completely resect the lesion), en bloc resection (ability to completely resect the lesion en bloc with all markings visible on the specimen), adverse events (intraprocedural bleeding and perforation), tutor intervention, procedure time (from the first submucosal injection after marking to the complete removal of the pseudo lesion), and size of resected specimens were recorded for each procedure. Only significant continued bleeding that did not subside spontaneously or with coagulation from the tip of the ESD knife and required the use of an hemostatic forceps or endoclip was considered an intraprocedural bleeding. Tutor intervention was defined as a short hands-on assistance in procedures in which the tutor deemed it necessary, after which the endoscope was handed back to the trainee. Verbal guidance was allowed at any time during the workshop and was not recorded.\nThe type of knife used was documented. The surface area of the resected specimen was calculated using a pre-specified formula: surface area (mm2) = largest diameter of specimen (mm) × smallest diameter of specimen (mm) × 0.25 × π. ESD speed was calculated accordingly: speed (mm2/min) = surface area (mm2) of resected specimen/time of procedure (min).", "The IBM Statistical Package for Social Sciences (SPSS Version 25.0.0; SPSS Inc, Chicago, IL, USA) was used for data analysis. Descriptive statistics were determined for all measures according to type of variables. Proportions were reported for dichotomous and ordinal variables. For continuous variables, the medians (with interquartile range [IQR] 25–75) or means (with standard error) were described. Nonparametric tests were used to assess statistical differences (Wilcoxon signed-rank test). Proportions were compared with the χ2 test. Significance level was defined as p < 0.05.", "From the 17 participants, background characteristics and endoscopic experience were obtained for 15 and are summarized in Table 1. Their median age was 40 years (IQR, 36–47). A total of 70 procedures (P) were performed by 17 trainees (Table 2). Each trainee performed a median of 4 procedures (IQR, 3.5–5.0).\nThe percentage of complete resections was 88.2% in the first 2 procedures and 100% in the following ones. En bloc resections improved from 76.5% and 88.2% in the first 2 procedures respectively, to 100% during the subsequent procedures (P1 vs. P3, p < 0.042). 
The number of procedures in which a trainee needed hands-on intervention from the tutor also decreased throughout the procedures (4 in P1, 2 in P2 to 0 in P5 and P6; P1 vs. P3, p < 0.042). Median ESD speed increased from the first to the latest procedures: 8.6 mm2/min in P1 to 14.8 mm2/min in P2 (P1 vs. P2, p = 0.006), 24.7 mm2/min in P5 (P4 vs. P5, p = 0.028) and 31.4 mm2/min in P6. In contrast, ESD time decreased: 39 min in P1 to 34 min in P2, 21 min in P3 (P1 vs. P3, p = 0.002; P2 vs. P3, p = 0.01) and 9 min in P6. No significant variation in specimen area occurred when comparing successive procedures.\nThe first procedures were all performed with the noninsulated-tip HybridKnife I-type. After 1 to 3 procedures, all trainees switched to the HybridKnife O-type.\nAdverse events decreased throughout the procedures (3 bleedings and 3 perforations in P1 to 0 in P5 and P6). In total, there were 7 perforations (10%) and 4 bleedings (5.7%), managed with endoclips and hemostatic forceps. Perforations in P1 and P2 occurred with the I-type knife, while perforations in P4 occurred with the O-type. When a large number of participants (n = 8) changed to a different type of knife (P4), ESD speed temporarily decreased slightly (18.5 mm2/min in P3 to 17.0 mm2/min in P4) and the number of perforations increased again (0 in P3 to 2 in P4) (Fig. 2).", "In this study, we have demonstrated that through successive procedures within a hands-on training program, en bloc resection rates and ESD speed increase whereas adverse events decrease. Changing to a different ESD knife has a momentary negative impact on the learning curve.\nESD training programs include hands-on practice in live animal models [14, 15, 16, 17]. Face, expert, and content validity of the live porcine model in training for endoscopic mucosal resection and ESD have been established in a previous study. This model is considered very realistic and procedures accurately resemble human cases. Moreover, the simulation process is highly appreciated as a learning tool [19]. Current virtual reality and mechanical simulators are not suitable for training advanced endoscopic resection due to the inability to sufficiently reproduce tissue properties like elasticity and tactile feedback resembling human tissue [24].\nIn general, the skill-lab setting with animal models provides the opportunity to train team coordination between the endoscopist and assistant, to gain familiarity with the devices and electrosurgical units, to adequately handle and position the scope and to rehearse the kind of movements typical of and crucial to ESD, improving tip control [20, 25].\nSome authors support the idea that ex vivo training is not sufficient to simulate clinical practice and that in vivo training is essential [26, 27]. Live animal models can achieve those characteristics and add breathing movements, heartbeats, peristalsis, intraluminal secretions and tissue reaction to injection and electrocautery, which makes them closely related to the human setting. In addition, dealing with intraprocedural complications, such as bleedings and perforations, can be trained as close as it gets to the human situation [15, 19, 25, 27, 28]. On the downside, the availability of and the need to sacrifice animals, dedicated facilities, equipment, veterinarian support, and costs represent the disadvantages of the model [15, 25, 27, 28, 29]. 
For these reasons, ex vivo models are probably adequate in the initial learning phase and animal models should be reserved only for the subsequent stage of preclinical training.\nA previous study demonstrated that the mean resection time was significantly diminished for porcine gastric ESDs in the second half of the cases, although all procedures were carried out by a single endoscopist and both ex vivo and in vivo models were used [25]. In an ex vivo model, performing 30 gastric ESDs led to decreased procedure times, a lower perforation rate and improvement in en bloc resection rate [30]. In ex vivo stomach pig models, to achieve a predefined level of competence, trainees needed 23–25 procedures in one study [31]. In a bovine ex vivo model, an ESD goal was achieved by 71% of trainees in a software learning group (84% at the 30th procedure) and 61% in the control group (50% at the 30th procedure) [32]. In a bovine ex vivo model, complete resections were achieved in 92%, with 89% en bloc. Perforations occurred in 6% and the mean procedure time was 33 min with an average dissection speed of 5.2 mm2/min [33]. A study on ex vivo porcine colonic model, with an overall en bloc resection of 94.4% and perforation of 14.4%, demonstrated an inflexion of the learning curve at the 9th ESD procedure [34]. A summary of the studies is presented in Table 3.\nIn our study, we highlight that we addressed ESD speed (which increased throughout the procedures), instead of time or surface area alone. We find that the increment in speed better reflects the acquisition of skills or learning curve of the trainee. The use of only a single parameter-like procedure time or surface area is a poor measure for improved performance because these measures are highly related to each other. It will take more time to resect a larger lesion and vice versa. Another important observation is that the rate of adverse events did not increase when trainees started performing faster ESDs. The number of adverse events decreased as performance and speed improved during the learning phase.\nESD speed in live models varies between studies from means of 5–10 mm2/min [35] in the first procedures to 22–30 mm2/min [21, 22] in the later ones. This is in line with our results that varied from median 8.6 mm2/min in the first procedure to 24.7 mm2/min and 31.4 mm2/min in the last ones.\nWe observed a momentarily decrease in ESD speed and a slight increase in perforation number when trainees switched to a different knife with different properties. Complete and en bloc resection rates were unaffected. ESD performance improved up to that point and introducing a new type of knife immediately had a negative impact on ESD performance. This finding supports the idea that each individual has to get used to a particular type of knife and changing it leads to an adaptive phase. This phenomenon was observed despite the fact that we switched to a potentially “safer” knife because of its partial insulation at the tip. Therefore, we collected supporting evidence for one key informal advice that ESD experts often provide, which is to maintain experience with the same knife and not change frequently the type of knife used.\nThere were 7 perforations which correspond to 10% of the procedures. 
This denotes the difficulty of the technique and is in line with other published studies: 5.2–62.5% [21, 22, 36] in live animal models and 4.6–14.5% in ex vivo models [30, 31, 32, 33, 34].\nThe fact that a plateau phase was not reached in this study is in all likelihood related to the limited number of ESD procedures performed by each endoscopist and supports the need and rationale for further training in the animal model. A minimum of 10–30 ESD procedures in the animal model is suggested, before moving to human cases [14, 30, 37]. This study supports this conclusion and the importance of using live porcine models to train endoscopists in ESD.\nConcerning the strengths of this study, we highlighted its prospective design, the objective nature of the parameters that were assessed by an independent observer and the diverse origin and background of the trainees. This supports the generalization of the results. It is also important to emphasize that ESD performance improved dramatically in a 2 days course and that each knife is operated differently.\nWith regard of limitations, we stress that trainees can have different background characteristics and distinct skills and learning curves. We underline the limited number of procedures, the fact that locations on the pig stomach were not assessed and that the porcine colon was not included. Tutors varied from different groups, although presenting similar characteristics. Also, procedures were performed on healthy tissue with no pathological findings, as opposed to clinical practice, where neoplastic changes might affect submucosal tissue and thereby hampering submucosal dissection. Nevertheless, this is a known limitation of training in porcine model, but still experts agree that it represents the closest resemblance possible to the human setting, as was previously demonstrated [19].\nIn conclusion, training in live animal models improves ESD performance measures supporting its role in teaching programs. Distinct ESD knives are operated differently.", "The workshop was approved by the local Ethics Committee of Erasmus MC, University Medical Center, Rotterdam, The Netherlands, for the welfare of animals in medical training (Ref. No. AVD1010020173286). It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Procedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines. Only the endoscopists who wished to participate did it willingly and all had the possibility of not doing it if they wished. Invitees were informed that data would be analyzed and participation was voluntary. Identity protection and confidentiality of the collected data were guaranteed according to General Data Protection Regulation. Accordingly and in agreement with the Ethics Committee, completing the questionnaire implied the participant's implicit acceptance/consent. Therefore, the requirement for written informed consent was waived.", "Ricardo Küttner-Magalhães, Ricardo Marcos-Pinto, and Carla Rolanda have no conflicts of interest or financial ties to disclose. Mário Dinis-Ribeiro reports research grants from Olympus and Fujifilm. Marco J. Bruno reports consultant support for industry and investigator-initiated studies from Boston Scientific and Cook Medical, consultant support for investigator-initiated studies from Pentax Medical, Mylan, ChiRoStim, and 3M. Arjun D. 
Koch reports speaker fees from Cook Medical, ERBE, Pentax and Boston Scientific.", "This research did not receive any specific grant from funding agencies in the public, commercial, or nonprofit sectors. The workshop that allowed the present study was unrestrictedly sponsored by Pentax® (Tokyo, Japan), ERBE® (Tübingen, Germany), and Cook Medical® (Winston-Salem, NC, USA) which provided flexible endoscopes, electrosurgical units, and endoscopic devices. The sponsors had no influence on the scientific content of the workshop nor on this study.", "All authors have contributed and agreed on the content of the manuscript. Ricardo Küttner-Magalhães contributed to the study conception, study design, data acquisition, data analysis, data interpretation, manuscript writing, and critical revision. Mário Dinis-Ribeiro contributed to the study conception, study design, study supervision, data analysis, data interpretation, and critical revision of the manuscript. Marco J. Bruno contributed to the study conception, study design, and critical revision of the manuscript. Ricardo Marcos-Pinto contributed to data interpretation and critical revision of the manuscript. Carla Rolanda contributed to the study conception and design, data interpretation, and critical revision of the manuscript. Arjun D. Koch contributed to the study conception and design, study supervision, data analysis, data interpretation, and critical revision of the manuscript. All authors read and approved the final version of the manuscript.", "All data generated or analyzed during this study are included in this article. Further inquiries can be directed to the corresponding author." ]
[ "intro", "methods", null, null, null, null, null, null, "results", "discussion", null, "COI-statement", null, null, "data-availability" ]
[ "Endoscopic submucosal dissection", "Gastrointestinal endoscopy", "Learning curve", "Animal models", "Simulation", "Training" ]
Introduction: Endoscopic submucosal dissection (ESD) is a demanding technique that enables high en bloc resection rates, R0 resections and low rate of local recurrences regardless of lesion size as well as an accurate tumor histopathological staging and grading with precise evaluation of resection margins [1, 2, 3, 4, 5, 6, 7, 8]. On the other hand, it is associated with long procedure times and considerable risk of adverse events (such as bleeding and perforation) [9, 10, 11]. In addition, ESD has a difficult and prolonged learning curve [5, 9, 12, 13]. ESD training programs include acquiring basic knowledge, practicing on animal models (both ex vivo and in vivo models), visiting centers with a high ESD volume, attendance of hands-on training ESD workshops and only then proceeding to clinical practice, ideally under supervision of an expert during the first cases [14, 15, 16, 17, 18]. This is particularly relevant in Western settings where the number of ESD experts is limited and the possibility of starting ESD in superficial gastric neoplasms, which are the easiest and safest procedures, is limited due to the low prevalence of such lesions [5, 9, 12, 13]. Workshops using animal models are organized in training centers with the potential to improve the learning process and achieve initial competence in ESD [19, 20, 21, 22]. This study aimed to assess the early learning curve for performing ESD on live porcine models by endoscopists without any or with limited previous ESD experience. Methods: Study Design This prospective study was conducted during a workshop for ESD training, co-organized by the European Association for Gastroenterology, Endoscopy and Nutrition (EAGEN) and the European Society of Gastrointestinal Endoscopy (ESGE). It included theoretical lectures and hands-on practice with live porcine models. The workshop had duration of 2 days and took place in the training center of the Erasmus School of Endoscopy at the Erasmus University Medical Center in Rotterdam, the Netherlands. This prospective study was conducted during a workshop for ESD training, co-organized by the European Association for Gastroenterology, Endoscopy and Nutrition (EAGEN) and the European Society of Gastrointestinal Endoscopy (ESGE). It included theoretical lectures and hands-on practice with live porcine models. The workshop had duration of 2 days and took place in the training center of the Erasmus School of Endoscopy at the Erasmus University Medical Center in Rotterdam, the Netherlands. Animal Model The use of the live porcine model for training purposes in the workshop was approved by local Ethical Committee for the welfare of animals in medical training. It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Animal welfare and regulatory compliance were overseen by local Animal Welfare Body and government inspectors on behalf of the authorities (project license NL license # AVD1010020173286). Procedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines [23]. Live pigs (Sus scrofus domesticus), weighing between 30 and 40 kg were used. Animals were given a liquid diet for 3 days and fasted for 8 h before the procedures. According to local protocol, with the support of the veterinarian staff throughout the course, general anesthesia with endotracheal intubation and mechanical ventilation was performed. 
Drugs were employed in the following sequence: premedication − ketamine 25–35 mg/kg IM and midazolam 1 mg/kg IM; induction − propofol 0.5–1 mg/kg IV; maintenance − isoflurane 1.5–2.5% ET PO, sufentanil 0.3–0.5 µg/kg IV, NaCl 0.9% 10 g/kg IV or gelofusine, propofol, and sufentanil as required. Animals were euthanized with pentobarbital overdose. The use of the live porcine model for training purposes in the workshop was approved by local Ethical Committee for the welfare of animals in medical training. It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Animal welfare and regulatory compliance were overseen by local Animal Welfare Body and government inspectors on behalf of the authorities (project license NL license # AVD1010020173286). Procedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines [23]. Live pigs (Sus scrofus domesticus), weighing between 30 and 40 kg were used. Animals were given a liquid diet for 3 days and fasted for 8 h before the procedures. According to local protocol, with the support of the veterinarian staff throughout the course, general anesthesia with endotracheal intubation and mechanical ventilation was performed. Drugs were employed in the following sequence: premedication − ketamine 25–35 mg/kg IM and midazolam 1 mg/kg IM; induction − propofol 0.5–1 mg/kg IV; maintenance − isoflurane 1.5–2.5% ET PO, sufentanil 0.3–0.5 µg/kg IV, NaCl 0.9% 10 g/kg IV or gelofusine, propofol, and sufentanil as required. Animals were euthanized with pentobarbital overdose. Participants Trainees attending the workshop for ESD training were evaluated. Demographic data and previous endoscopic experience were collected. Identity protection and confidentiality of the collected data were guaranteed. Only the ones who wished to do so responded willingly and all had the possibility of not doing it if they wished. Accordingly, participant's acceptance/consent was implicit, and written informed consent was not necessary. Trainees attending the workshop for ESD training were evaluated. Demographic data and previous endoscopic experience were collected. Identity protection and confidentiality of the collected data were guaranteed. Only the ones who wished to do so responded willingly and all had the possibility of not doing it if they wished. Accordingly, participant's acceptance/consent was implicit, and written informed consent was not necessary. Procedures Standard Pentax® (Tokyo, Japan) flexible endoscopes and ERBE® (Tübingen, Germany) VIO 200s and VIO3 electrosurgical units were used. A soft, straight distal attachment cap from Olympus® (Tokyo, Japan) was mounted on the tip of the endoscope. Direct assistance was assured by one of the participants, who acted according to the instructions of the endoscopist performing the procedure. International experts supervised this assistance and all the steps of the procedure, giving verbal instructions and intervening when deemed necessary (Fig. 1). All of them were highly experienced in clinical ESD (more than 10 years of continuous clinical practice) and in supervising ESD training courses (more than 5 years of continuous tutor practice). Gastric ESD was conducted using ERBE® HybridKnives® (Tübingen, Germany) coupled with a high-pressure injection system, ERBERJET 2® (Tübingen, Germany) which allows for cutting, coagulation and submucosal injection with the same knife. 
The injection solution was physiological saline with a few drops of indigo carmine. During the first procedures, the HybridKnife® I-type (which is a noninsulated needle-type knife) was used and during following procedures the HybridKnife® O-type (which is a partially insulated tip knife). Trainees were instructed to choose a place in the gastric mucosa, mark the outer margins of a pseudo-lesion with the tip of the knife, perform submucosal injection, and create a circumferential incision, followed by submucosal dissection and re-lifting the pseudo-lesion when needed. Bleedings were treated using submucosal injection, by coagulation with the knife, by using a bipolar hemostatic forceps, HemoStat-Y®, Pentax® (Tokyo, Japan) or by clip application, Instinct clip®, Cook Medical® (Winston-Salem, NC, USA). Resected specimens were retrieved at the end of each procedure, carefully stretched to in vivo size, and measured by an independent observer (RKM). Procedures were performed over the 2 days of the workshop. Standard Pentax® (Tokyo, Japan) flexible endoscopes and ERBE® (Tübingen, Germany) VIO 200s and VIO3 electrosurgical units were used. A soft, straight distal attachment cap from Olympus® (Tokyo, Japan) was mounted on the tip of the endoscope. Direct assistance was assured by one of the participants, who acted according to the instructions of the endoscopist performing the procedure. International experts supervised this assistance and all the steps of the procedure, giving verbal instructions and intervening when deemed necessary (Fig. 1). All of them were highly experienced in clinical ESD (more than 10 years of continuous clinical practice) and in supervising ESD training courses (more than 5 years of continuous tutor practice). Gastric ESD was conducted using ERBE® HybridKnives® (Tübingen, Germany) coupled with a high-pressure injection system, ERBERJET 2® (Tübingen, Germany) which allows for cutting, coagulation and submucosal injection with the same knife. The injection solution was physiological saline with a few drops of indigo carmine. During the first procedures, the HybridKnife® I-type (which is a noninsulated needle-type knife) was used and during following procedures the HybridKnife® O-type (which is a partially insulated tip knife). Trainees were instructed to choose a place in the gastric mucosa, mark the outer margins of a pseudo-lesion with the tip of the knife, perform submucosal injection, and create a circumferential incision, followed by submucosal dissection and re-lifting the pseudo-lesion when needed. Bleedings were treated using submucosal injection, by coagulation with the knife, by using a bipolar hemostatic forceps, HemoStat-Y®, Pentax® (Tokyo, Japan) or by clip application, Instinct clip®, Cook Medical® (Winston-Salem, NC, USA). Resected specimens were retrieved at the end of each procedure, carefully stretched to in vivo size, and measured by an independent observer (RKM). Procedures were performed over the 2 days of the workshop. Assessment Parameters related to gastric ESD performance like number of resections, completeness of the resection (ability to completely resect the lesion), en bloc resection (ability to completely resect the lesion en bloc with all markings visible on the specimen), adverse events (intraprocedural bleeding and perforation), tutor intervention, procedure time (from the first submucosal injection after marking to the complete removal of the pseudo lesion), and size of resected specimens were recorded for each procedure. 
Only significant continued bleeding that did not subside spontaneously or with coagulation from the tip of the ESD knife and required the use of an hemostatic forceps or endoclip was considered an intraprocedural bleeding. Tutor intervention was defined as a short hands-on assistance in procedures in which the tutor deemed it necessary, after which the endoscope was handed back to the trainee. Verbal guidance was allowed at any time during the workshop and was not recorded. The type of knife used was documented. The surface area of the resected specimen was calculated using a pre-specified formula: surface area (mm2) = largest diameter of specimen (mm) × smallest diameter of specimen (mm) × 0.25 × π. ESD speed was calculated accordingly: speed (mm2/min) = surface area (mm2) of resected specimen/time of procedure (min). Parameters related to gastric ESD performance like number of resections, completeness of the resection (ability to completely resect the lesion), en bloc resection (ability to completely resect the lesion en bloc with all markings visible on the specimen), adverse events (intraprocedural bleeding and perforation), tutor intervention, procedure time (from the first submucosal injection after marking to the complete removal of the pseudo lesion), and size of resected specimens were recorded for each procedure. Only significant continued bleeding that did not subside spontaneously or with coagulation from the tip of the ESD knife and required the use of an hemostatic forceps or endoclip was considered an intraprocedural bleeding. Tutor intervention was defined as a short hands-on assistance in procedures in which the tutor deemed it necessary, after which the endoscope was handed back to the trainee. Verbal guidance was allowed at any time during the workshop and was not recorded. The type of knife used was documented. The surface area of the resected specimen was calculated using a pre-specified formula: surface area (mm2) = largest diameter of specimen (mm) × smallest diameter of specimen (mm) × 0.25 × π. ESD speed was calculated accordingly: speed (mm2/min) = surface area (mm2) of resected specimen/time of procedure (min). Statistical Analysis The IBM Statistical Package for Social Sciences (SPSS Version 25.0.0; SPSS Inc, Chicago, IL, USA) was used for data analysis. Descriptive statistics were determined for all measures according to type of variables. Proportions were reported for dichotomous and ordinal variables. For continuous variables, the medians (with interquartile range [IQR] 25–75) or means (with standard error) were described. Nonparametric tests were used to assess statistical differences (Wilcoxon signed-rank test). Proportions were compared with the χ2 test. Significance level was defined as p < 0.05. The IBM Statistical Package for Social Sciences (SPSS Version 25.0.0; SPSS Inc, Chicago, IL, USA) was used for data analysis. Descriptive statistics were determined for all measures according to type of variables. Proportions were reported for dichotomous and ordinal variables. For continuous variables, the medians (with interquartile range [IQR] 25–75) or means (with standard error) were described. Nonparametric tests were used to assess statistical differences (Wilcoxon signed-rank test). Proportions were compared with the χ2 test. Significance level was defined as p < 0.05. 
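As a concrete illustration of the two formulas above and of the nonparametric comparison described in the statistical analysis, the following minimal Python sketch computes the specimen area, the ESD speed and a Wilcoxon signed-rank test. It is not part of the study; the specimen diameters, procedure time and per-trainee speeds in it are hypothetical values chosen only for the example, and the function names are illustrative.

```python
# Illustrative sketch only: specimen area and ESD speed as defined above, plus a
# Wilcoxon signed-rank comparison between two successive procedures. All numbers
# below are hypothetical and are not data from the workshop.
import math
from scipy.stats import wilcoxon

def specimen_area_mm2(largest_mm: float, smallest_mm: float) -> float:
    """Ellipse approximation: largest diameter x smallest diameter x 0.25 x pi."""
    return largest_mm * smallest_mm * 0.25 * math.pi

def esd_speed_mm2_per_min(area_mm2: float, minutes: float) -> float:
    """Dissection speed = resected surface area / procedure time."""
    return area_mm2 / minutes

area = specimen_area_mm2(35.0, 25.0)        # about 687 mm2
speed = esd_speed_mm2_per_min(area, 39.0)   # about 17.6 mm2/min

# Paired speeds of the same trainees in procedure 1 vs. procedure 2 (hypothetical).
speeds_p1 = [6.2, 9.1, 8.6, 7.4, 10.3]
speeds_p2 = [12.0, 15.5, 14.8, 11.9, 16.2]
statistic, p_value = wilcoxon(speeds_p1, speeds_p2)

print(f"area = {area:.0f} mm2, speed = {speed:.1f} mm2/min, Wilcoxon p = {p_value:.3f}")
```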
Study Design: This prospective study was conducted during a workshop for ESD training, co-organized by the European Association for Gastroenterology, Endoscopy and Nutrition (EAGEN) and the European Society of Gastrointestinal Endoscopy (ESGE). It included theoretical lectures and hands-on practice with live porcine models. The workshop had duration of 2 days and took place in the training center of the Erasmus School of Endoscopy at the Erasmus University Medical Center in Rotterdam, the Netherlands. Animal Model: The use of the live porcine model for training purposes in the workshop was approved by local Ethical Committee for the welfare of animals in medical training. It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Animal welfare and regulatory compliance were overseen by local Animal Welfare Body and government inspectors on behalf of the authorities (project license NL license # AVD1010020173286). Procedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines [23]. Live pigs (Sus scrofus domesticus), weighing between 30 and 40 kg were used. Animals were given a liquid diet for 3 days and fasted for 8 h before the procedures. According to local protocol, with the support of the veterinarian staff throughout the course, general anesthesia with endotracheal intubation and mechanical ventilation was performed. Drugs were employed in the following sequence: premedication − ketamine 25–35 mg/kg IM and midazolam 1 mg/kg IM; induction − propofol 0.5–1 mg/kg IV; maintenance − isoflurane 1.5–2.5% ET PO, sufentanil 0.3–0.5 µg/kg IV, NaCl 0.9% 10 g/kg IV or gelofusine, propofol, and sufentanil as required. Animals were euthanized with pentobarbital overdose. Participants: Trainees attending the workshop for ESD training were evaluated. Demographic data and previous endoscopic experience were collected. Identity protection and confidentiality of the collected data were guaranteed. Only the ones who wished to do so responded willingly and all had the possibility of not doing it if they wished. Accordingly, participant's acceptance/consent was implicit, and written informed consent was not necessary. Procedures: Standard Pentax® (Tokyo, Japan) flexible endoscopes and ERBE® (Tübingen, Germany) VIO 200s and VIO3 electrosurgical units were used. A soft, straight distal attachment cap from Olympus® (Tokyo, Japan) was mounted on the tip of the endoscope. Direct assistance was assured by one of the participants, who acted according to the instructions of the endoscopist performing the procedure. International experts supervised this assistance and all the steps of the procedure, giving verbal instructions and intervening when deemed necessary (Fig. 1). All of them were highly experienced in clinical ESD (more than 10 years of continuous clinical practice) and in supervising ESD training courses (more than 5 years of continuous tutor practice). Gastric ESD was conducted using ERBE® HybridKnives® (Tübingen, Germany) coupled with a high-pressure injection system, ERBERJET 2® (Tübingen, Germany) which allows for cutting, coagulation and submucosal injection with the same knife. The injection solution was physiological saline with a few drops of indigo carmine. 
During the first procedures, the HybridKnife® I-type (which is a noninsulated needle-type knife) was used and during following procedures the HybridKnife® O-type (which is a partially insulated tip knife). Trainees were instructed to choose a place in the gastric mucosa, mark the outer margins of a pseudo-lesion with the tip of the knife, perform submucosal injection, and create a circumferential incision, followed by submucosal dissection and re-lifting the pseudo-lesion when needed. Bleedings were treated using submucosal injection, by coagulation with the knife, by using a bipolar hemostatic forceps, HemoStat-Y®, Pentax® (Tokyo, Japan) or by clip application, Instinct clip®, Cook Medical® (Winston-Salem, NC, USA). Resected specimens were retrieved at the end of each procedure, carefully stretched to in vivo size, and measured by an independent observer (RKM). Procedures were performed over the 2 days of the workshop. Assessment: Parameters related to gastric ESD performance like number of resections, completeness of the resection (ability to completely resect the lesion), en bloc resection (ability to completely resect the lesion en bloc with all markings visible on the specimen), adverse events (intraprocedural bleeding and perforation), tutor intervention, procedure time (from the first submucosal injection after marking to the complete removal of the pseudo lesion), and size of resected specimens were recorded for each procedure. Only significant continued bleeding that did not subside spontaneously or with coagulation from the tip of the ESD knife and required the use of an hemostatic forceps or endoclip was considered an intraprocedural bleeding. Tutor intervention was defined as a short hands-on assistance in procedures in which the tutor deemed it necessary, after which the endoscope was handed back to the trainee. Verbal guidance was allowed at any time during the workshop and was not recorded. The type of knife used was documented. The surface area of the resected specimen was calculated using a pre-specified formula: surface area (mm2) = largest diameter of specimen (mm) × smallest diameter of specimen (mm) × 0.25 × π. ESD speed was calculated accordingly: speed (mm2/min) = surface area (mm2) of resected specimen/time of procedure (min). Statistical Analysis: The IBM Statistical Package for Social Sciences (SPSS Version 25.0.0; SPSS Inc, Chicago, IL, USA) was used for data analysis. Descriptive statistics were determined for all measures according to type of variables. Proportions were reported for dichotomous and ordinal variables. For continuous variables, the medians (with interquartile range [IQR] 25–75) or means (with standard error) were described. Nonparametric tests were used to assess statistical differences (Wilcoxon signed-rank test). Proportions were compared with the χ2 test. Significance level was defined as p < 0.05. Results: From the 17 participants, background characteristics and endoscopic experience were obtained for 15 and are summarized in Table 1. Their median age was 40 years (IQR, 36–47). A total of 70 procedures (P) were performed by 17 trainees (Table 2). Each trainee performed a median of 4 procedures (IQR, 3.5–5.0). The percentage of complete resections was 88.2% in the first 2 procedures and 100% in the following ones. En bloc resections improved from 76.5% and 88.2% in the first 2 procedures respectively, to 100% during the subsequent procedures (P1 vs. P3, p < 0.042). 
The number of procedures in which a trainee needed hands-on intervention from the tutor also decreased throughout the procedures (4 in P1, 2 in P2 to 0 in P5 and P6; P1 vs. P3, p < 0.042). Median ESD speed increased from the first to the latest procedures: 8.6 mm2/min in P1 to 14.8 mm2/min in P2 (P1 vs. P2, p = 0.006), 24.7 mm2/min in P5 (P4 vs. P5, p = 0.028) and 31.4 mm2/min in P6. In contrast, ESD time decreased: 39 min in P1 to 34 min in P2, 21 min in P3 (P1 vs. P3, p = 0.002; P2 vs. P3, p = 0.01) and 9 min in P6. No significant variation in specimens' area occurred comparing successive procedures. The first procedures were all performed with the noninsulated tip HybridKnife I-type. After 1 to 3 procedures, all trainees switched to the HybridKnife O-type. Adverse events decreased throughout procedures (3 bleedings and 3 perforations in P1 to 0 in P5 and P6). In total, there were 7 perforations (10%) and 4 bleedings (5.7%), managed with endoclips and hemostatic forceps. Perforations in P1 and P2 occurred with the I-type while perforations in P4 occurred with the O-type. When a large number of participants (n = 8) changed to a different type of knife (P4), ESD speed temporarily decreased slightly (18.5 mm2/min on P3 to 17.0 mm2/min on P4) and the number of perforations increased again (0 on P3 to 2 on P4) (Fig. 2). Discussion: In this study, we have demonstrated that through successive procedures within a hands-on training program, en bloc resection rates and ESD speed increase whereas adverse events decrease. Changing to a different ESD knife has a momentary negative impact on the learning curve. ESD training programs include hands-on practice in live animal models [14, 15, 16, 17]. Face, expert, and content validity of the live porcine model in training for endoscopic mucosal resection and ESD have been established in a previous study. This model is considered very realistic and procedures accurately resemble human cases. Moreover, the simulation process is highly appreciated as a learning tool [19]. Current virtual reality and mechanical simulators are not suitable for training advanced endoscopic resection due to the inability to sufficiently reproduce tissue properties like elasticity and tactile feedback resembling human tissue [24]. In general, the skill-lab setting with animal models provides the opportunity to train team coordination between the endoscopist and assistant, to gain familiarity with the devices and electrosurgical units, to adequately handle and position the scope and to rehearse the kind of movements typical and crucial to ESD, improving tip control [20, 25]. Some authors support the idea that ex vivo training is not sufficient to simulate clinical practice and that in vivo training is essential [26, 27]. Live animal models can achieve those characteristics and add breathing movements, heartbeats, peristalsis, intraluminal secretions and tissue reaction to injection and electrocautery, which makes them closely related to the human setting. In addition, dealing with intraprocedural complications, such as bleedings and perforations, can be trained as close as it gets to the human situation [15, 19, 25, 27, 28]. On the downside, the availability of and the need to sacrifice animals, dedicated facilities, equipment, veterinarian support, and costs represent the disadvantages of the model [15, 25, 27, 28, 29]. 
For these reasons, ex vivo models are probably adequate in the initial learning phase and animal models should be reserved only for the subsequent stage of preclinical training. A previous study demonstrated that the mean resection time was significantly diminished for porcine gastric ESDs in the second half of the cases, although all procedures were carried out by a single endoscopist and both ex vivo and in vivo models were used [25]. In an ex vivo model, performing 30 gastric ESDs led to decreased procedure times, a lower perforation rate and improvement in en bloc resection rate [30]. In ex vivo stomach pig models, to achieve a predefined level of competence, trainees needed 23–25 procedures in one study [31]. In a bovine ex vivo model, an ESD goal was achieved by 71% of trainees in a software learning group (84% at the 30th procedure) and 61% in the control group (50% at the 30th procedure) [32]. In a bovine ex vivo model, complete resections were achieved in 92%, with 89% en bloc. Perforations occurred in 6% and the mean procedure time was 33 min with an average dissection speed of 5.2 mm2/min [33]. A study on ex vivo porcine colonic model, with an overall en bloc resection of 94.4% and perforation of 14.4%, demonstrated an inflexion of the learning curve at the 9th ESD procedure [34]. A summary of the studies is presented in Table 3. In our study, we highlight that we addressed ESD speed (which increased throughout the procedures), instead of time or surface area alone. We find that the increment in speed better reflects the acquisition of skills or learning curve of the trainee. The use of only a single parameter-like procedure time or surface area is a poor measure for improved performance because these measures are highly related to each other. It will take more time to resect a larger lesion and vice versa. Another important observation is that the rate of adverse events did not increase when trainees started performing faster ESDs. The number of adverse events decreased as performance and speed improved during the learning phase. ESD speed in live models varies between studies from means of 5–10 mm2/min [35] in the first procedures to 22–30 mm2/min [21, 22] in the later ones. This is in line with our results that varied from median 8.6 mm2/min in the first procedure to 24.7 mm2/min and 31.4 mm2/min in the last ones. We observed a momentarily decrease in ESD speed and a slight increase in perforation number when trainees switched to a different knife with different properties. Complete and en bloc resection rates were unaffected. ESD performance improved up to that point and introducing a new type of knife immediately had a negative impact on ESD performance. This finding supports the idea that each individual has to get used to a particular type of knife and changing it leads to an adaptive phase. This phenomenon was observed despite the fact that we switched to a potentially “safer” knife because of its partial insulation at the tip. Therefore, we collected supporting evidence for one key informal advice that ESD experts often provide, which is to maintain experience with the same knife and not change frequently the type of knife used. There were 7 perforations which correspond to 10% of the procedures. This denotes the difficulty of the technique and is in line with other published studies: 5.2–62.5% [21, 22, 36] in live animal models and 4.6–14.5% in ex vivo models [30, 31, 32, 33, 34]. 
The fact that a plateau phase was not reached in this study is in all likelihood related to the limited number of ESD procedures performed by each endoscopist and supports the need and rationale for further training in the animal model. A minimum of 10–30 ESD procedures in the animal model is suggested, before moving to human cases [14, 30, 37]. This study supports this conclusion and the importance of using live porcine models to train endoscopists in ESD. Concerning the strengths of this study, we highlighted its prospective design, the objective nature of the parameters that were assessed by an independent observer and the diverse origin and background of the trainees. This supports the generalization of the results. It is also important to emphasize that ESD performance improved dramatically in a 2 days course and that each knife is operated differently. With regard of limitations, we stress that trainees can have different background characteristics and distinct skills and learning curves. We underline the limited number of procedures, the fact that locations on the pig stomach were not assessed and that the porcine colon was not included. Tutors varied from different groups, although presenting similar characteristics. Also, procedures were performed on healthy tissue with no pathological findings, as opposed to clinical practice, where neoplastic changes might affect submucosal tissue and thereby hampering submucosal dissection. Nevertheless, this is a known limitation of training in porcine model, but still experts agree that it represents the closest resemblance possible to the human setting, as was previously demonstrated [19]. In conclusion, training in live animal models improves ESD performance measures supporting its role in teaching programs. Distinct ESD knives are operated differently. Statement of Ethics: The workshop was approved by the local Ethics Committee of Erasmus MC, University Medical Center, Rotterdam, The Netherlands, for the welfare of animals in medical training (Ref. No. AVD1010020173286). It was conducted under the National Experiments on Animals Act, revised in 2014 to implement the European Directive 2010/63 protecting animals used in research. Procedures were conducted in accordance with the “Animal Research: Reporting of in vivo Experiments” (ARRIVE) Guidelines. Only the endoscopists who wished to participate did it willingly and all had the possibility of not doing it if they wished. Invitees were informed that data would be analyzed and participation was voluntary. Identity protection and confidentiality of the collected data were guaranteed according to General Data Protection Regulation. Accordingly and in agreement with the Ethics Committee, completing the questionnaire implied the participant's implicit acceptance/consent. Therefore, the requirement for written informed consent was waived. Conflict of Interest Statement: Ricardo Küttner-Magalhães, Ricardo Marcos-Pinto, and Carla Rolanda have no conflicts of interest or financial ties to disclose. Mário Dinis-Ribeiro reports research grants from Olympus and Fujifilm. Marco J. Bruno reports consultant support for industry and investigator-initiated studies from Boston Scientific and Cook Medical, consultant support for investigator-initiated studies from Pentax Medical, Mylan, ChiRoStim, and 3M. Arjun D. Koch reports speaker fees from Cook Medical, ERBE, Pentax and Boston Scientific. 
Funding Sources: This research did not receive any specific grant from funding agencies in the public, commercial, or nonprofit sectors. The workshop that allowed the present study was unrestrictedly sponsored by Pentax® (Tokyo, Japan), ERBE® (Tübingen, Germany), and Cook Medical® (Winston-Salem, NC, USA) which provided flexible endoscopes, electrosurgical units, and endoscopic devices. The sponsors had no influence on the scientific content of the workshop nor on this study. Author Contributions: All authors have contributed and agreed on the content of the manuscript. Ricardo Küttner-Magalhães contributed to the study conception, study design, data acquisition, data analysis, data interpretation, manuscript writing, and critical revision. Mário Dinis-Ribeiro contributed to the study conception, study design, study supervision, data analysis, data interpretation, and critical revision of the manuscript. Marco J. Bruno contributed to the study conception, study design, and critical revision of the manuscript. Ricardo Marcos-Pinto contributed to data interpretation and critical revision of the manuscript. Carla Rolanda contributed to the study conception and design, data interpretation, and critical revision of the manuscript. Arjun D. Koch contributed to the study conception and design, study supervision, data analysis, data interpretation, and critical revision of the manuscript. All authors read and approved the final version of the manuscript. Data Availability Statement: All data generated or analyzed during this study are included in this article. Further inquiries can be directed to the corresponding author.
Background: Endoscopic submucosal dissection (ESD) is a demanding procedure requiring a high level of expertise. ESD training programs incorporate procedures with live animal models. This study aimed to assess the early learning curve for performing ESD on live porcine models by endoscopists without any or with limited previous ESD experience. Methods: In a live porcine model ESD workshop, number of resections, completeness of the resections, en bloc resections, adverse events, tutor intervention, type of knife, ESD time and size of resected specimens were recorded. ESD speed was calculated. Results: A total of 70 procedures were carried out by 17 trainees. The percentage of complete resections, en bloc resections and ESD speed increased from the first to the latest procedures (88.2%-100%, 76.5%-100%, 8.6-31.4 mm2/min, respectively). The number of procedures in which a trainee needed tutor intervention and the number of adverse events also decreased throughout the procedures (4 to 0 and 6 to 0, respectively). During the workshop, when participants changed to a different type of knife, ESD speed slightly decreased (18.5 mm2/min to 17.0 mm2/min) and adverse events increased again (0 to 2). Conclusions: Through successive procedures, complete resections, en bloc resections, and ESD speed improve whereas adverse events decrease, supporting the role of the live porcine model in the preclinical learning phase. Changing ESD knives has a momentary negative impact on the learning curve.
null
null
6,192
284
[ 85, 244, 71, 388, 250, 107, 171, 90, 164 ]
15
[ "esd", "procedures", "training", "knife", "study", "procedure", "min", "type", "data", "animal" ]
[ "training endoscopic", "resection esd established", "gastric neoplasms easiest", "gastric esd performance", "submucosal dissection esd" ]
null
null
[CONTENT] Endoscopic submucosal dissection | Gastrointestinal endoscopy | Learning curve | Animal models | Simulation | Training [SUMMARY]
[CONTENT] Endoscopic submucosal dissection | Gastrointestinal endoscopy | Learning curve | Animal models | Simulation | Training [SUMMARY]
[CONTENT] Endoscopic submucosal dissection | Gastrointestinal endoscopy | Learning curve | Animal models | Simulation | Training [SUMMARY]
null
[CONTENT] Endoscopic submucosal dissection | Gastrointestinal endoscopy | Learning curve | Animal models | Simulation | Training [SUMMARY]
null
[CONTENT] Swine | Humans | Animals | Endoscopic Mucosal Resection | Learning Curve | Dissection | Models, Animal [SUMMARY]
[CONTENT] Swine | Humans | Animals | Endoscopic Mucosal Resection | Learning Curve | Dissection | Models, Animal [SUMMARY]
[CONTENT] Swine | Humans | Animals | Endoscopic Mucosal Resection | Learning Curve | Dissection | Models, Animal [SUMMARY]
null
[CONTENT] Swine | Humans | Animals | Endoscopic Mucosal Resection | Learning Curve | Dissection | Models, Animal [SUMMARY]
null
[CONTENT] training endoscopic | resection esd established | gastric neoplasms easiest | gastric esd performance | submucosal dissection esd [SUMMARY]
[CONTENT] training endoscopic | resection esd established | gastric neoplasms easiest | gastric esd performance | submucosal dissection esd [SUMMARY]
[CONTENT] training endoscopic | resection esd established | gastric neoplasms easiest | gastric esd performance | submucosal dissection esd [SUMMARY]
null
[CONTENT] training endoscopic | resection esd established | gastric neoplasms easiest | gastric esd performance | submucosal dissection esd [SUMMARY]
null
[CONTENT] esd | procedures | training | knife | study | procedure | min | type | data | animal [SUMMARY]
[CONTENT] esd | procedures | training | knife | study | procedure | min | type | data | animal [SUMMARY]
[CONTENT] esd | procedures | training | knife | study | procedure | min | type | data | animal [SUMMARY]
null
[CONTENT] esd | procedures | training | knife | study | procedure | min | type | data | animal [SUMMARY]
null
[CONTENT] esd | models | limited | learning | workshops | 12 | 12 13 | centers | 13 | low [SUMMARY]
[CONTENT] kg | knife | injection | esd | specimen | procedure | animals | procedures | lesion | submucosal [SUMMARY]
[CONTENT] p1 | min | p3 | procedures | p2 | vs | p4 | perforations | mm2 min | mm2 [SUMMARY]
null
[CONTENT] esd | data | study | procedures | training | workshop | knife | animals | min | kg [SUMMARY]
null
[CONTENT] ESD ||| ESD ||| ESD | ESD [SUMMARY]
[CONTENT] ESD | ESD ||| ESD [SUMMARY]
[CONTENT] 70 | 17 ||| ESD | first | 88.2%-100% | 76.5%-100% | 8.6-31.4 ||| 4 | 6 ||| ESD | 18.5 mm2/min | 17.0 mm2/min | 0-2 [SUMMARY]
null
[CONTENT] ESD ||| ESD ||| ESD | ESD ||| ESD | ESD ||| ESD ||| ||| 70 | 17 ||| ESD | first | 88.2%-100% | 76.5%-100% | 8.6-31.4 ||| 4 | 6 ||| ESD | 18.5 mm2/min | 17.0 mm2/min | 0-2 ||| ESD ||| [SUMMARY]
null
Evaluation of the Inter and Intra-Observer Reliability of the AO Classification of Intertrochanteric Fractures and the Device Choice (DHS, PFNA, and DCS) of Fixations.
33911837
Arbeitsgemeinschaft für Osteosynthesefragen (AO) classification is the most frequently used tool to classify intertrochanteric fractures. However, there is limited evidence regarding its reliability. Therefore, this study was designed to evaluate inter-observer and intra-observer reliability of the AO-2018 intertrochanteric fracture classification.
BACKGROUND
A retrospective study was conducted in Imam Khomeini Hospital Complex on radiographs of patients who presented with intertrochanteric fractures from March 21, 2018, to March 19, 2019. Four orthopedic trauma surgeons assessed 96 anteroposterior pelvic radiographs of intertrochanteric fractures and classified them using the 2018 AO intertrochanteric fracture classification. The reading and review of the radiographs were performed on 2 separate occasions at a 1-month interval. The inter-observer and intra-observer reliability was assessed using kappa statistics.
METHOD
The levels of both mean inter-observer (K = 0.322; 95% CI: 0.321–0.323) and intra-observer agreement (K = 0.317; 95% CI: 0.314–0.320) in AO intertrochanteric fracture classification subgrouping were not satisfactory. The inter-observer (K = 0.61; 95% CI: 0.608–0.611) and intra-observer (K = 0.560; 95% CI: 0.544–0.566) reliability in AO main groupings showed moderate agreement.
RESULT
The AO classification does not show adequate and acceptable inter-observer and intra-observer reliability and reproducibility. Therefore, it is difficult to base treatment protocols on the AO classification.
CONCLUSION
[ "Hip Fractures", "Humans", "Observer Variation", "Radiography", "Reproducibility of Results", "Retrospective Studies" ]
8047276
Introduction
Intertrochanteric fracture is the fracture that occurs in the region between greater and lesser trochanters of the proximal femur. It is extracapsular where the vascularity of the femoral head is rarely affected (1). Intertrochanteric fracture makes about 50% of the hip fractures which is caused by low energy mechanisms such as falls (2). It can occur in both the elderly and the young. However, it is more common in the elderly population with osteoporosis due to low energy mechanisms (3). Generally, 6 million hip fractures are estimated to occur by 2050 (4). Most of the patients present with the absence of weight-bearing, painful shortened, and externally rotated lower limbs (5). For the evaluation and diagnosis of intertrochanteric fractures, standard X-ray of the pelvis and femur can be used. The radiological finding can also help to measure the width of the medullary cavity and assessment of the diaphyseal morphology. Thus, adequate radiological evaluation is required to understand fracture type, and for preoperative planning (6). The primary goal of intertrochanteric fracture treatment is the early mobilization and avoidance of secondary complications which can be achieved by appropriate reduction and fixations through different fixation devices (7). There are a number of fixation devices available for the treatments. Each has its indications, advantages, and disadvantages. The selection of the devices depends on the type of fracture. However, no implant fully satisfies all fixation requirements of intertrochanteric fractures (8). Implant selection and placement are important factors that can determine and predict the failure of fracture after fixation (9). Identifying the presence of atypical fractures or unstable fracture patterns is important for fracture management (10). The classification system should be valid and reliable and should have a prognostic value that can assist us to plan treatment protocols (11). As the AO classification is the most commonly used classification that is utilized to base our protocols of choosing appropriate fixation devices, it is worth much to assess the reliability, to optimize the treatment outcome. Therefore, this study aimed to evaluate the inter-observer and intra-observer reliability of the AO classification system in intertrochanteric fractures.
Methods
This retrospective study was conducted in Imam Khomeini Hospital Complex, Tehran, Iran. Patients with an intertrochanteric fracture who were admitted to the hospital from March 21, 2018, to March 19, 2019, were included in this study. All adult patients (age ≥ 18 years) with new intertrochanteric fractures were included. However, patients with pathological fractures, periprosthetic fractures, and subtrochanteric and neck fractures were excluded. Initially, the radiographs of 136 patients were identified, but 96 of the 136 radiographs met our inclusion criteria and were enrolled in this study. Four orthopedic trauma surgeons evaluated and read the radiographic (X-ray) findings and provided their classifications twice, at a one-month interval. Data sources and collection: The health information system (HIS) of Imam Khomeini Hospital Complex was used to identify patients with an intertrochanteric fracture in the data collection period. The demographic characteristics of the patients, such as age and sex, and the radiologic image (X-ray) were obtained from the HIS. The X-rays were matched and coded with the structured questionnaires. Four experienced orthopedic trauma surgeons, who independently perform on average 4–7 intertrochanteric fracture fixations per month and who had 3 to 10 years of experience, reviewed the radiographs and suggested the type of fixation device. The radiographs were reviewed at 2 different times with a one-month interval between the readings. The reviewers were not told that there would be a second reading. For the first round, the observers classified the fracture according to the AO intertrochanteric fracture classification-2018 and suggested their choice of treatment. One month later, the observers were provided with the same set of radiographs rearranged in a different order, along with the same questionnaires and a chart of the AO intertrochanteric fracture classification-2018, and were asked to classify the fracture and select the suitable treatment fixation of choice. Data analysis: Kappa statistics were calculated with SPSS version 24 to assess inter-observer and intra-observer reliability. Interobserver agreement was evaluated by comparing the responses of 4 different observers in 2 different readings, while the intra-observer reliability was evaluated by comparing each observer's reading on 2 different occasions. The kappa value ranges from −1.0 (complete disagreement) through 0 (chance agreement) to 1.0 (complete agreement). Interpretation of the strength of agreement determined with the kappa values was given by adopting the criteria of Landis and Koch (12). Landis et al. classify the level of agreement into six groups: perfect agreement (K > 0.80), substantial agreement (K = 0.61–0.80), moderate agreement (K = 0.41–0.60), fair agreement (K = 0.21–0.40), slight agreement (K = 0–0.20) and poor agreement (K < 0). The level of significance was set at P-value < 0.05. Ethical consideration: This study was ethically approved by the research ethics board of Tehran University of Medical Sciences and obtained approval number ID IR.TUMS.IKHC.REC.1397.257.
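As an illustration of the agreement analysis described above, the following minimal Python sketch computes Cohen's kappa and maps the value to the Landis and Koch categories listed in the Data analysis paragraph. It is not part of the study (which used SPSS): the AO subgroup labels and helper names are hypothetical, and averaging the pairwise kappas across the four observers is shown only as one simple option, since the exact multi-rater procedure is not specified in the text.

```python
# Illustrative sketch only: Cohen's kappa and the Landis-Koch interpretation
# cited in this paper. Ratings below are hypothetical AO subgroup labels, not
# study data; the study itself computed kappa statistics in SPSS.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def landis_koch(kappa: float) -> str:
    """Map a kappa value to the agreement categories cited from Landis and Koch."""
    if kappa < 0:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "perfect"

# One observer's two readings of the same radiographs (intra-observer agreement).
reading_1 = ["A1.2", "A2.2", "A3.1", "A2.3", "A1.3", "A2.2", "A3.2", "A1.2"]
reading_2 = ["A1.2", "A2.3", "A3.1", "A2.2", "A1.3", "A2.2", "A3.3", "A1.2"]
intra = cohen_kappa_score(reading_1, reading_2)
print(f"intra-observer kappa = {intra:.2f} ({landis_koch(intra)})")

# Four observers' classifications of the same radiographs: inter-observer
# agreement estimated here as the mean of pairwise Cohen's kappas.
observers = {
    "obs1": ["A1.2", "A2.2", "A3.1", "A2.3", "A1.3", "A2.2", "A3.2", "A1.2"],
    "obs2": ["A1.2", "A2.3", "A3.1", "A2.3", "A1.2", "A2.2", "A3.2", "A1.3"],
    "obs3": ["A1.3", "A2.2", "A3.2", "A2.3", "A1.3", "A2.1", "A3.2", "A1.2"],
    "obs4": ["A1.2", "A2.2", "A3.1", "A2.2", "A1.3", "A2.2", "A3.3", "A1.2"],
}
pairwise = [cohen_kappa_score(observers[a], observers[b])
            for a, b in combinations(observers, 2)]
inter = sum(pairwise) / len(pairwise)
print(f"mean inter-observer kappa = {inter:.2f} ({landis_koch(inter)})")
```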
Result
Patients' characteristics: A total of 96 confirmed radiological cases of intertrochanteric fractures of both sexes were included in this study. Of these, 52 patients were males. The mean (±SD) age of the participants was 70.5 (±17.3) years, with an age range of 20–97 years. Of the total patients, 47 were in the age category of 60 to 79 years (Table 1). Sex and age distribution of patients Interobserver reliability: In the first evaluation, interobserver agreement across the 4 observers in AO subgrouping classifications was fair [K = 0.352; 95% CI: (0.351–0.353)] (Table 2). The reviewers' agreement regarding AO main grouping in the first round was substantial [K = 0.625; 95% CI: (0.623–0.626)]. In the second evaluation, the results showed fair and moderate agreement for AO subgrouping [K = 0.292; 95% CI: (0.291–0.293)] and AO main grouping [K = 0.595; 95% CI: (0.593–0.597)], respectively. The level of agreement among the observers based on the fixation device choice was moderate in both the first evaluation [K = 0.560; 95% CI: (0.558–0.563)] and the second evaluation [K = 0.490; 95% CI: (0.488–0.493)] (Table 2). Interobserver variation among orthopedic trauma surgeons in AO classification and device fixation of intertrochanteric fractures Intra-observer reliability: The intra-observer agreement and reliability were assessed and analyzed by comparing the data of the first observation with the data of the second observation. The mean intra-observer agreement for AO subgrouping for all observers was fair [K = 0.317; 95% CI: (0.314–0.320)] (Table 3). The mean intra-observer agreement of AO main grouping for observers showed moderate agreement [K = 0.560; 95% CI: (0.544–0.566)] (Table 3). Mean intra-observer variation among orthopedic trauma surgeons in the AO classification of intertrochanteric fractures Inter-observer reliability based on the treatment of choices: The interobserver agreement based on the choice of fixation devices showed moderate agreement at the first and second observations (first observation, [K = 0.560; 95% CI: (0.558–0.563)]; second observation, [K = 0.490; 95% CI: (0.488–0.493)]). The mean intraobserver agreement based on the choice of fixation devices also showed a moderate level of agreement [K = 0.560; 95% CI: (0.544–0.566)].
null
null
[]
[]
[]
[ "Introduction", "Methods", "Result", "Discussion" ]
[ "Intertrochanteric fracture is the fracture that occurs in the region between greater and lesser trochanters of the proximal femur. It is extracapsular where the vascularity of the femoral head is rarely affected (1). Intertrochanteric fracture makes about 50% of the hip fractures which is caused by low energy mechanisms such as falls (2). It can occur in both the elderly and the young. However, it is more common in the elderly population with osteoporosis due to low energy mechanisms (3). Generally, 6 million hip fractures are estimated to occur by 2050 (4).\nMost of the patients present with the absence of weight-bearing, painful shortened, and externally rotated lower limbs (5). For the evaluation and diagnosis of intertrochanteric fractures, standard X-ray of the pelvis and femur can be used. The radiological finding can also help to measure the width of the medullary cavity and assessment of the diaphyseal morphology. Thus, adequate radiological evaluation is required to understand fracture type, and for preoperative planning (6).\nThe primary goal of intertrochanteric fracture treatment is the early mobilization and avoidance of secondary complications which can be achieved by appropriate reduction and fixations through different fixation devices (7). There are a number of fixation devices available for the treatments. Each has its indications, advantages, and disadvantages. The selection of the devices depends on the type of fracture. However, no implant fully satisfies all fixation requirements of intertrochanteric fractures (8). Implant selection and placement are important factors that can determine and predict the failure of fracture after fixation (9). Identifying the presence of atypical fractures or unstable fracture patterns is important for fracture management (10). The classification system should be valid and reliable and should have a prognostic value that can assist us to plan treatment protocols (11).\nAs the AO classification is the most commonly used classification that is utilized to base our protocols of choosing appropriate fixation devices, it is worth much to assess the reliability, to optimize the treatment outcome. Therefore, this study aimed to evaluate the inter-observer and intra-observer reliability of the AO classification system in intertrochanteric fractures.", "This retrospective study was conducted in Imam Khomeini Hospital Complex, Tehran, Iran. Patients with an intertrochanteric fracture who were admitted to the hospital from March 21, 2018, to March 19, 2019, were included in this study. All adult patients (age ≥ 18 years) with new intertrochanteric fractures were included. However, patients with pathological fractures, periprosthetic fractures, and subtrochanteric and neck fractures were excluded. Initially, the radiographs of 136 patients were identified, but 96 of the 136 radiographs met our inclusion criteria and were enrolled in this study. Four orthopedic trauma surgeons had evaluated and read the radiographs (x-ray) findings and provided their classifications twice in the onemonth interval.\nData sources and collection: The health information system (HIS) of Imam Khomeini Hospital Complex was used to identify patients with an intertrochanteric fracture in the data collection period. The demographic characteristics of the patients such as age and sex, and the radiologic image (X-ray) were obtained from the HIS. The X-rays were matched and coded with the structured questionnaires. 
Four experienced orthopedic trauma surgeons who conduct on an average per month 4–7 intertrochanteric fracture fixations independently and who had 3 to 10 years of experience reviewed the radiographs and suggested fixation devices type. The radiographs were reviewed at 2 different times with a one-month interval between the readings. The reviewers were not told that there would be a second time reading. For the first round, the observers classified the fracture according to the AO intertrochanteric fracture classification-2018 and suggested their choice of treatment. One month later, the observers were provided with the same set of radiographs rearranged in a different order along with the same questionnaires and a chart of AO Intertrochanteric fracture classification -2018 were asked to classify the fracture and select the suitable treatment fixation of choice.\nData analysis: Kappa statistics was performed by SPSS version 24 to assess inter-observer and intra-observer reliability. Interobserver agreement was evaluated by comparing the responses of 4 different observers in 2 different readings, while the intra-observer reliability was evaluated by comparing each observer's reading on 2 different occasions. The kappa value indicates −1.0 (complete disagreement), 0 (chance agreement) and 1.0 (complete agreement). Interpretation of the strength of agreement determined with the kappa values was given by adopting the criteria of Landis and Koch (12). Landis et al classify the level of agreements into six groups: perfect agreement (K ≥ 0.80), substantial agreement (K = 0.61–0.80), moderate agreement (K = 0.41–0.61), fair agreement (K = 0.21–0.41), slight agreement (K = 0–0.21) and poor agreement (K < 0). The level of significance was set at P-value < 0.05.\nEthical consideration: This study was ethically approved by the research ethics board of Tehran University of Medical Sciences and obtained approval number ID IR.TUMS.IKHC.REC.1397.257.", "Patients' characteristics: A total of 96 confirmed radiological cases of intertrochanteric fractures of both sexes were included in this study. Of these, 52 patients were males. The mean (±SD) age of the participants was 70.5 (±17.3) years with the age range between 20–97 years old. Of the total patients, 47 were in the age category of 60 to 79 years (Table 1).\nSex and age distribution of patients\nInterobserver reliability: In the first evaluation, interobserver agreement across the 4 observers in AO subgrouping classifications was fair agreement [K= 0.352; 95% CI: (0.351 – 0.353)] (Table 2). The reviewers' agreement regarding AO main grouping in the first round was substantial agreement [K=0.625; 95% CI: (0.623–0.626)]. In the second evaluation, the result showed fair and moderate agreement for both AO subgrouping [K= 0.292; 95% CI: (0.291–0.293)] and AO main grouping [K=0.595; 95% CI: (0.593–0.597)]. The level of agreement among the observers based on the fixation device choice for the first evaluation was moderate agreement [K = 0.560; 95% CI: [0.558– 0.563)], and in the second evaluation, it was fair agreement [K =0.490; 95% CI: (0.488–0.493)] (Table 2).\nInterobserver variation among orthopedic trauma surgeons in AO classification and device fixation of intertrochanteric fractures\nIntra-observer reliability: The intra-observer agreement and reliability were assessed and analyzed by comparing the data of the first observation with the data of the second observation. 
The mean intra-observer agreement for AO subgrouping for all observers was fair [K =0.317; 95% CI:(0.314–0.320)], (Table 3). The mean intra-observer agreement of AO main grouping for observers showed moderate agreement [K=0.560; 95% CI:(0.544–0.566)] (Table 3).\nMean intra-observer variation among orthopedic trauma surgeons in the AO classification of intertrochanteric fractures\nInter-observer reliability based on the treatment of choices: The interobserver agreement based on choice of fixation devices showed moderate agreement at first and second observations (first observation, [K= 0.560; 95% CI: (0.558–0.563)] and (second observation [K= 0.490; 95% CI: (0.488–0.493)]. The mean intraobserver agreement based on the choice of fixation devices also showed a moderate level of agreement [K=0.560; 95% CI:(0.544–0.566)].", "In this study, several attending orthopedic trauma surgeons who had different levels of experience in terms of intertrochanteric fracture management participated to evaluate the reliability of the AO 2018 intertrochanteric classification. Classification of intertrochanteric fracture serves as a guideline for treatment and helps to predict the result (13) or provides a reasonable estimation of the likely outcome (14). Therefore, the reliability of the fracture classification depends on the inter-observer and intra-observer agreement. A low level of agreement among and between observers can limit the use of classification systems in decision making (15). If the preoperative classification is not correct, the usefulness of the prognosis will also be limited (14). However, there is limited evidence in the reliability of fracture classification using the AO-2018 classification criteria in the study area. Therefore, this study was intended to determine whether the reliability of the fracture classification depends on the inter-observer and intra-observer agreement.\nIn this study, the inter-observer reliability in AO intertrochanteric fracture classification for the subgroup analyses of the first and second observations was fair. However, the interobserver reliability in AO intertrochanteric fracture classification for the main group at the first and second observations was moderate. The interobserver agreement based on the choice of fixation devices had also shown moderate agreement at the first and second observations. The intra-observer agreements in the sub and main groupings had shown lower agreement compared to interobserver agreements. The agreements were fair for the subgrouping and moderate for the main groupings.\nA previous study reported by Schipper et al (16) which used the AO classification system to classify trochanteric fractures of 20 X-rays indicated a mean intra-observer kappa value of 0.48 and interobserver kappa values of 0.33 and 0.34 in sub-grouping. However, for the main grouping classifications, intra-observer kappa value was 0.78, while interobserver kappa values were 0.67 and 0.63. These findings are in agreement with our results. However, the intra-observer agreement of our study was slightly lower than the interobserver agreement in comparison to the above study (15). 
In addition, our study evaluated the agreement among observers on the choice of fixation device, which showed a moderate level of agreement.\nA study by Pervez et al (13), in which 88 sets of radiographs were reviewed using the AO classification and the Jensen modification of the Evans classification, reported mean intra-observer agreements of K = 0.42 for sub-grouping and K = 0.72 for main grouping. Similarly, the mean interobserver agreements were K = 0.33 for sub-grouping and K = 0.62 for main grouping. Moreover, a study by De Boeck (17) also found the AO classification unreliable. Our results are in agreement with that study, as we did not find adequate reliability.\nThe study by Newey et al (18) found that the AO intertrochanteric fracture classification system is unnecessarily complicated and falls short of playing a useful role in the management of intertrochanteric fractures. Since the classification system is intended to indicate the nature of the injury and to provide a rationale for treatment (18), and most orthopedic surgeons use this classification when choosing appropriate fixation devices, there is a need for modified criteria or a classification system that can help surgeons make appropriate clinical decisions. The main limitation of this study was the use of X-rays that were not equally standardized.\nIn conclusion, this study of the AO intertrochanteric classification did not show adequate and acceptable interobserver and intraobserver reliability and reproducibility. Therefore, based on the findings of this study and of other studies, the AO intertrochanteric classification may not be able to support exact treatment selection protocols, since the agreement was not reliably strong. Finally, it is advisable to have a backup fixation device available (DHS+PFNA or DHS+DCS) during the operation because, based on the above results, the fracture encountered intraoperatively may not match the one seen on the radiograph." ]
[ "intro", "methods", "results", "discussion" ]
[ "Osteosynthesefragen classification", "Intertrochanteric fracture", "Reliability", "Orthopedic trauma" ]
Introduction: Intertrochanteric fracture is the fracture that occurs in the region between greater and lesser trochanters of the proximal femur. It is extracapsular where the vascularity of the femoral head is rarely affected (1). Intertrochanteric fracture makes about 50% of the hip fractures which is caused by low energy mechanisms such as falls (2). It can occur in both the elderly and the young. However, it is more common in the elderly population with osteoporosis due to low energy mechanisms (3). Generally, 6 million hip fractures are estimated to occur by 2050 (4). Most of the patients present with the absence of weight-bearing, painful shortened, and externally rotated lower limbs (5). For the evaluation and diagnosis of intertrochanteric fractures, standard X-ray of the pelvis and femur can be used. The radiological finding can also help to measure the width of the medullary cavity and assessment of the diaphyseal morphology. Thus, adequate radiological evaluation is required to understand fracture type, and for preoperative planning (6). The primary goal of intertrochanteric fracture treatment is the early mobilization and avoidance of secondary complications which can be achieved by appropriate reduction and fixations through different fixation devices (7). There are a number of fixation devices available for the treatments. Each has its indications, advantages, and disadvantages. The selection of the devices depends on the type of fracture. However, no implant fully satisfies all fixation requirements of intertrochanteric fractures (8). Implant selection and placement are important factors that can determine and predict the failure of fracture after fixation (9). Identifying the presence of atypical fractures or unstable fracture patterns is important for fracture management (10). The classification system should be valid and reliable and should have a prognostic value that can assist us to plan treatment protocols (11). As the AO classification is the most commonly used classification that is utilized to base our protocols of choosing appropriate fixation devices, it is worth much to assess the reliability, to optimize the treatment outcome. Therefore, this study aimed to evaluate the inter-observer and intra-observer reliability of the AO classification system in intertrochanteric fractures. Methods: This retrospective study was conducted in Imam Khomeini Hospital Complex, Tehran, Iran. Patients with an intertrochanteric fracture who were admitted to the hospital from March 21, 2018, to March 19, 2019, were included in this study. All adult patients (age ≥ 18 years) with new intertrochanteric fractures were included. However, patients with pathological fractures, periprosthetic fractures, and subtrochanteric and neck fractures were excluded. Initially, the radiographs of 136 patients were identified, but 96 of the 136 radiographs met our inclusion criteria and were enrolled in this study. Four orthopedic trauma surgeons had evaluated and read the radiographs (x-ray) findings and provided their classifications twice in the onemonth interval. Data sources and collection: The health information system (HIS) of Imam Khomeini Hospital Complex was used to identify patients with an intertrochanteric fracture in the data collection period. The demographic characteristics of the patients such as age and sex, and the radiologic image (X-ray) were obtained from the HIS. The X-rays were matched and coded with the structured questionnaires. 
Four experienced orthopedic trauma surgeons who conduct on an average per month 4–7 intertrochanteric fracture fixations independently and who had 3 to 10 years of experience reviewed the radiographs and suggested fixation devices type. The radiographs were reviewed at 2 different times with a one-month interval between the readings. The reviewers were not told that there would be a second time reading. For the first round, the observers classified the fracture according to the AO intertrochanteric fracture classification-2018 and suggested their choice of treatment. One month later, the observers were provided with the same set of radiographs rearranged in a different order along with the same questionnaires and a chart of AO Intertrochanteric fracture classification -2018 were asked to classify the fracture and select the suitable treatment fixation of choice. Data analysis: Kappa statistics was performed by SPSS version 24 to assess inter-observer and intra-observer reliability. Interobserver agreement was evaluated by comparing the responses of 4 different observers in 2 different readings, while the intra-observer reliability was evaluated by comparing each observer's reading on 2 different occasions. The kappa value indicates −1.0 (complete disagreement), 0 (chance agreement) and 1.0 (complete agreement). Interpretation of the strength of agreement determined with the kappa values was given by adopting the criteria of Landis and Koch (12). Landis et al classify the level of agreements into six groups: perfect agreement (K ≥ 0.80), substantial agreement (K = 0.61–0.80), moderate agreement (K = 0.41–0.61), fair agreement (K = 0.21–0.41), slight agreement (K = 0–0.21) and poor agreement (K < 0). The level of significance was set at P-value < 0.05. Ethical consideration: This study was ethically approved by the research ethics board of Tehran University of Medical Sciences and obtained approval number ID IR.TUMS.IKHC.REC.1397.257. Result: Patients' characteristics: A total of 96 confirmed radiological cases of intertrochanteric fractures of both sexes were included in this study. Of these, 52 patients were males. The mean (±SD) age of the participants was 70.5 (±17.3) years with the age range between 20–97 years old. Of the total patients, 47 were in the age category of 60 to 79 years (Table 1). Sex and age distribution of patients Interobserver reliability: In the first evaluation, interobserver agreement across the 4 observers in AO subgrouping classifications was fair agreement [K= 0.352; 95% CI: (0.351 – 0.353)] (Table 2). The reviewers' agreement regarding AO main grouping in the first round was substantial agreement [K=0.625; 95% CI: (0.623–0.626)]. In the second evaluation, the result showed fair and moderate agreement for both AO subgrouping [K= 0.292; 95% CI: (0.291–0.293)] and AO main grouping [K=0.595; 95% CI: (0.593–0.597)]. The level of agreement among the observers based on the fixation device choice for the first evaluation was moderate agreement [K = 0.560; 95% CI: [0.558– 0.563)], and in the second evaluation, it was fair agreement [K =0.490; 95% CI: (0.488–0.493)] (Table 2). Interobserver variation among orthopedic trauma surgeons in AO classification and device fixation of intertrochanteric fractures Intra-observer reliability: The intra-observer agreement and reliability were assessed and analyzed by comparing the data of the first observation with the data of the second observation. 
The mean intra-observer agreement for AO subgrouping for all observers was fair [K =0.317; 95% CI:(0.314–0.320)], (Table 3). The mean intra-observer agreement of AO main grouping for observers showed moderate agreement [K=0.560; 95% CI:(0.544–0.566)] (Table 3). Mean intra-observer variation among orthopedic trauma surgeons in the AO classification of intertrochanteric fractures Inter-observer reliability based on the treatment of choices: The interobserver agreement based on choice of fixation devices showed moderate agreement at first and second observations (first observation, [K= 0.560; 95% CI: (0.558–0.563)] and (second observation [K= 0.490; 95% CI: (0.488–0.493)]. The mean intraobserver agreement based on the choice of fixation devices also showed a moderate level of agreement [K=0.560; 95% CI:(0.544–0.566)]. Discussion: In this study, several attending orthopedic trauma surgeons who had different levels of experience in terms of intertrochanteric fracture management participated to evaluate the reliability of the AO 2018 intertrochanteric classification. Classification of intertrochanteric fracture serves as a guideline for treatment and helps to predict the result (13) or provides a reasonable estimation of the likely outcome (14). Therefore, the reliability of the fracture classification depends on the inter-observer and intra-observer agreement. A low level of agreement among and between observers can limit the use of classification systems in decision making (15). If the preoperative classification is not correct, the usefulness of the prognosis will also be limited (14). However, there is limited evidence in the reliability of fracture classification using the AO-2018 classification criteria in the study area. Therefore, this study was intended to determine whether the reliability of the fracture classification depends on the inter-observer and intra-observer agreement. In this study, the inter-observer reliability in AO intertrochanteric fracture classification for the subgroup analyses of the first and second observations was fair. However, the interobserver reliability in AO intertrochanteric fracture classification for the main group at the first and second observations was moderate. The interobserver agreement based on the choice of fixation devices had also shown moderate agreement at the first and second observations. The intra-observer agreements in the sub and main groupings had shown lower agreement compared to interobserver agreements. The agreements were fair for the subgrouping and moderate for the main groupings. A previous study reported by Schipper et al (16) which used the AO classification system to classify trochanteric fractures of 20 X-rays indicated a mean intra-observer kappa value of 0.48 and interobserver kappa values of 0.33 and 0.34 in sub-grouping. However, for the main grouping classifications, intra-observer kappa value was 0.78, while interobserver kappa values were 0.67 and 0.63. These findings are in agreement with our results. However, the intra-observer agreement of our study was slightly lower than the interobserver agreement in comparison to the above study (15). Besides, our study evaluated the agreement among observers based on device choice of fixations which showed a moderate level of agreement. 
A study reported by Pervez et al (13) in which 88 sets of radiographs were observed by using AO classifications and Jensen modification of the Evans indicated that the mean intra-observer agreements were K = 0.42 for sub-grouping and K = 0.72 for main grouping. Similarly, mean interobserver agreements were K = 0.33 for sub-grouping and K = 0.62 for main groupings. Moreover, a study reported by De Boeck (17) was also found the AO classification unreliable. Our results are in agreement with this study as there is no adequate reliability. The study reported by Newey et al (18) found that the AO intertrochanteric fracture classification system is unnecessarily complicated and falls short of playing a useful role in the management of intertrochanteric fractures. Since the classification system intends to indicate the nature of the injury and provides a rationale for treatment (18) and most of the orthopedic surgeons use this classification for choosing appropriate fixations or devices, there is the need for modified criteria or classification system which can help the surgeons to make appropriate clinical decisions. This study's main limitation was the use of X-rays which were not equally standardized. In conclusion, this study of AO intertrochanteric classification did not show adequate acceptable interobserver and intraobserver reliability and reproducibility. Therefore, based on the findings of this study and that of other studies, there is a probability that AO intertrochanteric classification cannot help to support the exact treatment selection protocols since the results were not reliably strong. Finally, it is better to have back up of one extra fixation device (DHS+PFNA or DHS+DCS) during the operation because based on the above results during the operation, a fracture may not become the one which was seen in the radiographic X-ray.
Background: The Arbeitsgemeinschaft für Osteosynthesefragen (AO) classification is the most frequently used tool to classify intertrochanteric fractures. However, there is limited evidence regarding its reliability. Therefore, this study was designed to evaluate the inter-observer and intra-observer reliability of the AO-2018 intertrochanteric fracture classification. Methods: A retrospective study was conducted at Imam Khomeini Hospital Complex on the radiographs of patients who presented with intertrochanteric fractures from March 21, 2018, to March 19, 2019. Four orthopedic trauma surgeons assessed 96 anteroposterior pelvic radiographs of intertrochanteric fractures and classified them using the 2018 AO intertrochanteric fracture classification. The reading and review of the radiographs were performed on 2 separate occasions at a 1-month interval. Inter-observer and intra-observer reliability was assessed using kappa statistics. Results: The mean inter-observer (K = 0.322; 95% CI: 0.321-0.323) and intra-observer (K = 0.317; 95% CI: 0.314-0.320) agreement for AO intertrochanteric fracture classification subgrouping was not satisfactory. The inter-observer (K = 0.61; 95% CI: 0.608-0.611) and intra-observer (K = 0.560; 95% CI: 0.544-0.566) reliability for the AO main groupings showed moderate agreement. Conclusions: The AO classification did not show adequate and acceptable inter-observer and intra-observer reliability and reproducibility. Therefore, it will be difficult to base treatment protocols on the AO classification.
null
null
2,201
272
[]
4
[ "agreement", "classification", "fracture", "intertrochanteric", "observer", "ao", "study", "reliability", "intra observer", "intra" ]
[ "intertrochanteric fractures classification", "requirements intertrochanteric fractures", "new intertrochanteric fractures", "hip fractures estimated", "hip fractures" ]
null
null
[CONTENT] Osteosynthesefragen classification | Intertrochanteric fracture | Reliability | Orthopedic trauma [SUMMARY]
[CONTENT] Osteosynthesefragen classification | Intertrochanteric fracture | Reliability | Orthopedic trauma [SUMMARY]
[CONTENT] Osteosynthesefragen classification | Intertrochanteric fracture | Reliability | Orthopedic trauma [SUMMARY]
null
[CONTENT] Osteosynthesefragen classification | Intertrochanteric fracture | Reliability | Orthopedic trauma [SUMMARY]
null
[CONTENT] Hip Fractures | Humans | Observer Variation | Radiography | Reproducibility of Results | Retrospective Studies [SUMMARY]
[CONTENT] Hip Fractures | Humans | Observer Variation | Radiography | Reproducibility of Results | Retrospective Studies [SUMMARY]
[CONTENT] Hip Fractures | Humans | Observer Variation | Radiography | Reproducibility of Results | Retrospective Studies [SUMMARY]
null
[CONTENT] Hip Fractures | Humans | Observer Variation | Radiography | Reproducibility of Results | Retrospective Studies [SUMMARY]
null
[CONTENT] intertrochanteric fractures classification | requirements intertrochanteric fractures | new intertrochanteric fractures | hip fractures estimated | hip fractures [SUMMARY]
[CONTENT] intertrochanteric fractures classification | requirements intertrochanteric fractures | new intertrochanteric fractures | hip fractures estimated | hip fractures [SUMMARY]
[CONTENT] intertrochanteric fractures classification | requirements intertrochanteric fractures | new intertrochanteric fractures | hip fractures estimated | hip fractures [SUMMARY]
null
[CONTENT] intertrochanteric fractures classification | requirements intertrochanteric fractures | new intertrochanteric fractures | hip fractures estimated | hip fractures [SUMMARY]
null
[CONTENT] agreement | classification | fracture | intertrochanteric | observer | ao | study | reliability | intra observer | intra [SUMMARY]
[CONTENT] agreement | classification | fracture | intertrochanteric | observer | ao | study | reliability | intra observer | intra [SUMMARY]
[CONTENT] agreement | classification | fracture | intertrochanteric | observer | ao | study | reliability | intra observer | intra [SUMMARY]
null
[CONTENT] agreement | classification | fracture | intertrochanteric | observer | ao | study | reliability | intra observer | intra [SUMMARY]
null
[CONTENT] fracture | intertrochanteric | fractures | fixation | classification | devices | femur | elderly | mechanisms | energy mechanisms [SUMMARY]
[CONTENT] agreement | radiographs | fracture | patients | intertrochanteric fracture | different | intertrochanteric | hospital | month | 21 [SUMMARY]
[CONTENT] 95 ci | 95 | ci | agreement | table | ao | 560 95 ci | 560 95 | 560 | observation [SUMMARY]
null
[CONTENT] agreement | fracture | intertrochanteric | classification | observer | ao | ci | 95 ci | 95 | study [SUMMARY]
null
[CONTENT] ArbeitsgemeinschaftfürOsteosynthesefragen | AO ||| ||| AO-2018 [SUMMARY]
[CONTENT] Imam Khomeini Hospital Complex | March 21, 2018 | March 19, 2019 ||| Four | 96 | AO | 2018 ||| 2 | 1-month ||| [SUMMARY]
[CONTENT] 0.322 | 0.321-0.323 | 0.317 | 0.314 | AO ||| 0.61 | 0.608-0.611 | 0.544 | AO [SUMMARY]
null
[CONTENT] ArbeitsgemeinschaftfürOsteosynthesefragen | AO ||| ||| AO-2018 ||| Imam Khomeini Hospital Complex | March 21, 2018 | March 19, 2019 ||| Four | 96 | AO | 2018 ||| 2 | 1-month ||| ||| 0.322 | 0.321-0.323 | 0.317 | 0.314 | AO ||| 0.61 | 0.608-0.611 | 0.544 | AO ||| AO ||| AO [SUMMARY]
null
Association between ERCC1 and TS mRNA levels and disease free survival in colorectal cancer patients receiving oxaliplatin and fluorouracil (5-FU) adjuvant chemotherapy.
25175730
The aim was to explore the association of ERCC1 and TS mRNA levels with disease free survival (DFS) in Chinese colorectal cancer (CRC) patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy.
BACKGROUND
A total of 112 Chinese stage II-III CRC patients were treated with one of four different chemotherapy regimens after curative operation. The TS and ERCC1 mRNA levels in the primary tumor were measured by real-time RT-PCR. Kaplan-Meier curves and log-rank tests were used for DFS analysis. The Cox proportional hazards model was used for prognostic analysis.
METHODS
In univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated no significant association of DFS with the TS and ERCC1 mRNA levels. In multivariate analyses, tumor stage (IIIc: reference, P = 0.083; IIb: HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; IIc: HR < 0.0001, P = 0.977; IIIa: HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) was confirmed to be the independent prognostic factor for DFS. Moreover, the Kaplan-Meier DFS curves showed that TS and ERCC1 mRNA levels were not significantly associated with the DFS (TS: P = 0.264; ERCC1: P = 0.484).
RESULTS
The mRNA expression levels of ERCC1 and TS were not applicable for predicting the DFS of Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy.
CONCLUSION
[ "Adult", "Aged", "Antineoplastic Combined Chemotherapy Protocols", "Chemotherapy, Adjuvant", "Colectomy", "Colorectal Neoplasms", "DNA-Binding Proteins", "Disease-Free Survival", "Endonucleases", "Female", "Fluorouracil", "Gene Expression Regulation, Neoplastic", "Humans", "Male", "Middle Aged", "Neoplasm Staging", "Organoplatinum Compounds", "Oxaliplatin", "Prognosis", "Proportional Hazards Models", "RNA, Messenger", "Real-Time Polymerase Chain Reaction", "Thymidylate Synthase", "Treatment Outcome" ]
4156636
Background
Colorectal cancer (CRC) is the third most common cancer worldwide and has a high mortality rate [1]. About 608,000 deaths from CRC are estimated worldwide, accounting for 8% of all cancer deaths [2, 3]. Surgery is the most common treatment for CRC, yet prognosis remains poor [4]. As a result, considerable interest has concentrated on chemotherapy after surgery, such as oxaliplatin or 5-fluorouracil (5-FU)-based adjuvant chemotherapy [5, 6]. The 5-FU is an analogue of uracil with a fluorine atom at the C-5 position in place of hydrogen, which disrupts RNA synthesis and the action of thymidylate synthase by converting to several active metabolites: fluorodeoxyuridine monophosphate (FdUMP), fluorodeoxyuridine triphosphate (FdUTP) and fluorouridine triphosphate (FUTP) [7]. It was reported that 5-FU-based chemotherapy was a safe and effective treatment for elderly patients with advanced CRC [8]. Oxaliplatin is a platinum-based drug that has demonstrated antitumor activities in CRC both in vitro and in vivo [9]. It was reported that oxaliplatin based chemotherapy could significantly increase the progression-free survival in patients with metastatic CRC [10]. Moreover, the better efficacy and safety of oxaliplatin combined with 5-FU as first-line chemotherapy for elderly patients with metastatic CRC has been proved [11]. However, there is no predictive factor for efficacy of these treatments. The ERCC1 (encodes excision cross-complementing 1) gene codes for a nucleotide excision repair protein involved in the repair of radiation and chemotherapy-induced DNA damage [12]. It has been reported that the gene polymorphism of ERCC1 at codon 118 was a predictive factor for the tumor response to oxaliplatin/5-FU combination chemotherapy in patients with advanced CRC [13]. Furthermore, thymidylate synthase (TS), as a target enzyme of 5-FU, is associated with response to 5-FU in human colorectal and gastric tumors [14, 15]. It was reported that TS genotyping could be of help in predicting toxicity to 5-FU-based chemotherapy in CRC patients [16]. However, little is known about the association between mRNA expression levels of ERCC1 and TS and clinical outcomes of oxaliplatin and 5-FU based adjuvant chemotherapy in Chinese people with CRC. In this study, we investigated the association of ERCC1 and TS mRNA levels with the disease free survival (DFS) in Chinese CRC patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy.
Methods
Patients This study was carried out with the Institutional Ethics Committees approval and following the Chinese Medical Research Council guidelines. All participants gave their written informed consent prior to entering the study. This is a prospective study. A total of 112 Chinese CRC patients who were treated at Jiangsu Tumor Hospital, China, from May 2005 to January 2010 were investigated in this study. Eligibility criterion was histological confirmation of stage II-III CRC after surgery according to the AJCC TNM classification [17]. This study was carried out with the Institutional Ethics Committees approval and following the Chinese Medical Research Council guidelines. All participants gave their written informed consent prior to entering the study. This is a prospective study. A total of 112 Chinese CRC patients who were treated at Jiangsu Tumor Hospital, China, from May 2005 to January 2010 were investigated in this study. Eligibility criterion was histological confirmation of stage II-III CRC after surgery according to the AJCC TNM classification [17]. Chemotherapy treatment All the patients were treated with chemotherapy after curative operation. Four types of chemotherapy regimens were used for the treatment of CRC patients: i) the first one was the standard FOLFOX-4 consisting of 2-hour intravenous infusion of oxaliplatin (85 mg/m2) on day 1, and 2-hour intravenous drip infusion of calcium folinate (200 mg/m2) on days 1–2, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (600 mg/m2) lasting 22 h on days 1–2, every 2 weeks; ii) the second one was the modified FOLFOX consisting of intravenous infusion of oxaliplatin (130 mg/m2) and 2-hour intravenous drip infusion of folinate calcium (200 mg/m2) on day 1, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (1000 mg/m2) over 24 h on days 1 to 3, every 3 weeks; iii) the third one was oral XELOX consisting of 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 plus oral capecitabine 850 mg/m2 twice daily for 2 weeks in a 3-week cycle; iv) the fourth one was a conventional intravenous drip infusion including 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 and continuous infusion of 5-FU (750 mg/m2) lasting 4 h on days 1 to 5, every 3 weeks. All the chemotherapy regimens were performed by a trained nurse. The selection of chemotherapy regimens for each patient was according to the recommendation of an experienced expert. After treatment, the clinical outcomes were obtained by telephone follow-up or a return visit with the deadline of January 2014. DFS, which defined as the time from the end of chemotherapy to the first event of either recurrent disease or death, was calculated according to follow-up data. All the patients were treated with chemotherapy after curative operation. 
Four types of chemotherapy regimens were used for the treatment of CRC patients: i) the first one was the standard FOLFOX-4 consisting of 2-hour intravenous infusion of oxaliplatin (85 mg/m2) on day 1, and 2-hour intravenous drip infusion of calcium folinate (200 mg/m2) on days 1–2, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (600 mg/m2) lasting 22 h on days 1–2, every 2 weeks; ii) the second one was the modified FOLFOX consisting of intravenous infusion of oxaliplatin (130 mg/m2) and 2-hour intravenous drip infusion of folinate calcium (200 mg/m2) on day 1, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (1000 mg/m2) over 24 h on days 1 to 3, every 3 weeks; iii) the third one was oral XELOX consisting of 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 plus oral capecitabine 850 mg/m2 twice daily for 2 weeks in a 3-week cycle; iv) the fourth one was a conventional intravenous drip infusion including 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 and continuous infusion of 5-FU (750 mg/m2) lasting 4 h on days 1 to 5, every 3 weeks. All the chemotherapy regimens were performed by a trained nurse. The selection of chemotherapy regimens for each patient was according to the recommendation of an experienced expert. After treatment, the clinical outcomes were obtained by telephone follow-up or a return visit with the deadline of January 2014. DFS, which defined as the time from the end of chemotherapy to the first event of either recurrent disease or death, was calculated according to follow-up data. RNA extraction and real-time RT-PCR RNA was extracted and purified from formalin-fixed paraffin-embedded (FFPE) tissue samples of surgically resected primary CRC using an RNeasy mini kit (Qiagen, Inc.) according to the manufacturer’s instructions [18]. The cDNA of ERCC1 and TS was prepared by reverse transcription from RNA [19]. The ABI PRISM 7700 Sequence Detection System (Perkin-Elmer Applied Biosystems, Foster City, CA) was used to perform TaqMan probe-based real-time PCR reactions as previously described [20–22]. Relative levels of mRNA transcripts were calculated according to the comparative Ct method using β-actin as an endogenous control [23]. RNA was extracted and purified from formalin-fixed paraffin-embedded (FFPE) tissue samples of surgically resected primary CRC using an RNeasy mini kit (Qiagen, Inc.) according to the manufacturer’s instructions [18]. The cDNA of ERCC1 and TS was prepared by reverse transcription from RNA [19]. The ABI PRISM 7700 Sequence Detection System (Perkin-Elmer Applied Biosystems, Foster City, CA) was used to perform TaqMan probe-based real-time PCR reactions as previously described [20–22]. Relative levels of mRNA transcripts were calculated according to the comparative Ct method using β-actin as an endogenous control [23]. Statistical analysis The Cox proportional hazards model was used for univariate and multivariate analysis of prognostic factors. The variables included six continuous variables (age, duration of chemotherapy courses, interval between surgery and chemotherapy, cumulated dosage of oxaliplatin and mRNA expression levels of ERCC1 and TS) and eight categorical variables (sex, primary tumor location, tumor stage, tumor differentiation, lymph node staging, nerve invasion, vascular invasion and chemotherapy regimens). 
The logarithms of the TS and ERCC1 mRNA levels (logTS, logERCC1) were calculated for fitting normal distribution as the requirement of analysis. Dummy variables were considered for all the categorical variables. The chemotherapy regimens were used as a stratification variable in all the analyses. The backward stepwise method was used in the multivariate analysis base on the likelihood ratio statistics. Kaplan–Meier curves and log-rank tests were used for DFS analysis. Hazards ratios were used to calculate the relative risks of recurrence or death. All tests were two-sided, and p < 0.05 was considered as statistically significant. Analyses were performed using SAS version 9.1 (Institute, Cary, NC) and SPSS 19.0 (IBM, Armonk, New York). Power was calculated using the PS Power and Sample Size Calculation, version 3.0.43 (Vanderbilt University, Nashville, TN, USA). The Cox proportional hazards model was used for univariate and multivariate analysis of prognostic factors. The variables included six continuous variables (age, duration of chemotherapy courses, interval between surgery and chemotherapy, cumulated dosage of oxaliplatin and mRNA expression levels of ERCC1 and TS) and eight categorical variables (sex, primary tumor location, tumor stage, tumor differentiation, lymph node staging, nerve invasion, vascular invasion and chemotherapy regimens). The logarithms of the TS and ERCC1 mRNA levels (logTS, logERCC1) were calculated for fitting normal distribution as the requirement of analysis. Dummy variables were considered for all the categorical variables. The chemotherapy regimens were used as a stratification variable in all the analyses. The backward stepwise method was used in the multivariate analysis base on the likelihood ratio statistics. Kaplan–Meier curves and log-rank tests were used for DFS analysis. Hazards ratios were used to calculate the relative risks of recurrence or death. All tests were two-sided, and p < 0.05 was considered as statistically significant. Analyses were performed using SAS version 9.1 (Institute, Cary, NC) and SPSS 19.0 (IBM, Armonk, New York). Power was calculated using the PS Power and Sample Size Calculation, version 3.0.43 (Vanderbilt University, Nashville, TN, USA).
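The comparative Ct quantification described in the RNA extraction and real-time RT-PCR paragraph above can be illustrated with a short worked example. This is a minimal sketch, not the authors' pipeline; the Ct values are hypothetical and were chosen only so the resulting ratios land near the medians reported in the Results:

```python
# Illustrative comparative-Ct calculation for target expression relative to
# beta-actin (the endogenous control). All Ct values below are hypothetical.

def relative_expression(ct_target: float, ct_actin: float) -> float:
    """Return 2**-(Ct_target - Ct_actin), i.e. target level relative to beta-actin."""
    delta_ct = ct_target - ct_actin
    return 2.0 ** (-delta_ct)

# Hypothetical mean Ct values from one FFPE tumor sample.
ct_beta_actin = 22.0
ct_ts = 23.8     # TS crosses threshold ~1.8 cycles after beta-actin
ct_ercc1 = 31.2  # ERCC1 crosses threshold ~9.2 cycles after beta-actin

ts_rel = relative_expression(ct_ts, ct_beta_actin)
ercc1_rel = relative_expression(ct_ercc1, ct_beta_actin)

print(f"TS / beta-actin    = {ts_rel:.3e}")    # ~2.9e-01, same order as the reported median
print(f"ERCC1 / beta-actin = {ercc1_rel:.3e}") # ~1.7e-03, same order as the reported median

# A sample would then be classed as high- or low-expressing relative to the
# cohort median (2.86e-01 for TS, 1.7e-03 for ERCC1 in this study).
```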
Results
Characteristics of patients and follow-up results Demographic details on the patients investigated in this study are shown in Table  1. A total of 112 Chinese patients (40 females and 72 males) aged from 32 to 75 years old (average, 52.75) were analyzed in this study. There were 61 rectum cancer patients and 38 colon cancer patients. All patients (stage IIa, 24; stage IIb, 1; stage IIIa, 3; stage IIIb, 53; stage IIIc, 31) underwent curative operation and then received 4 different chemotherapy regimens, respectively, including the standard FOLFOX-4 (20 cases), modified FOLFOX (15 cases), oral XELOX (19 cases) and conventional intravenous drip infusion (58 cases). The median follow-up duration was 36 months (ranged from 1.2 to 78 months). Relapse occurred in forty-four patients (39.3%) and eleven patients (9.8%) died of disease. The median DFS was 36 months (minimum: 3 months; maximum: more than 77 months) (Table  1).Table 1 Demographic and clinical parameters of patients (n = 112)CharacteristicsPatientsNo.% Age (years) Mean52.72 Range32-75 Sex  Female4035.70% Male7264.30% Lymph node staging  N02522.32% N14943.75% N23833.93% Tumor stage  Stage IIa2421.40% Stage IIb10.90% Stage IIIa32.70% Stage IIIb5347% Stage IIIc3127.60% Primary tumor location  Rectum6154.46% Colon3833.93% Vascular invasion  Positive1513.39% Negative9786.61% Nerve invasion  Positive2421.43% Negative8878.57% Chemotherapy regimen  Standard FOLFOX-42017.80% Modified FOLFOX1513.40% Oral XELOX1917% Conventional intravenous drip infusion5851.70% Interval of chemotherapy and surgery  Within 28 days3632.14% More than 28 days7667.86% Duration of chemotherapy course  1-6 weeks2118.75% 7-12 weeks2522.32% 13-18 weeks5851.80% 19-24 weeks87.14% Tumor differentiation  High10.8% High or medium21.7% Medium6457.1% Medium or low2925.9% Low1614.3% Follow-up  Median36 months Range1.2-78 months Relapse4439.3% Death119.8% Disease free survival  Median36 months Range3-77 months Demographic and clinical parameters of patients (n = 112) Demographic details on the patients investigated in this study are shown in Table  1. A total of 112 Chinese patients (40 females and 72 males) aged from 32 to 75 years old (average, 52.75) were analyzed in this study. There were 61 rectum cancer patients and 38 colon cancer patients. All patients (stage IIa, 24; stage IIb, 1; stage IIIa, 3; stage IIIb, 53; stage IIIc, 31) underwent curative operation and then received 4 different chemotherapy regimens, respectively, including the standard FOLFOX-4 (20 cases), modified FOLFOX (15 cases), oral XELOX (19 cases) and conventional intravenous drip infusion (58 cases). The median follow-up duration was 36 months (ranged from 1.2 to 78 months). Relapse occurred in forty-four patients (39.3%) and eleven patients (9.8%) died of disease. 
The median DFS was 36 months (minimum: 3 months; maximum: more than 77 months) (Table  1).Table 1 Demographic and clinical parameters of patients (n = 112)CharacteristicsPatientsNo.% Age (years) Mean52.72 Range32-75 Sex  Female4035.70% Male7264.30% Lymph node staging  N02522.32% N14943.75% N23833.93% Tumor stage  Stage IIa2421.40% Stage IIb10.90% Stage IIIa32.70% Stage IIIb5347% Stage IIIc3127.60% Primary tumor location  Rectum6154.46% Colon3833.93% Vascular invasion  Positive1513.39% Negative9786.61% Nerve invasion  Positive2421.43% Negative8878.57% Chemotherapy regimen  Standard FOLFOX-42017.80% Modified FOLFOX1513.40% Oral XELOX1917% Conventional intravenous drip infusion5851.70% Interval of chemotherapy and surgery  Within 28 days3632.14% More than 28 days7667.86% Duration of chemotherapy course  1-6 weeks2118.75% 7-12 weeks2522.32% 13-18 weeks5851.80% 19-24 weeks87.14% Tumor differentiation  High10.8% High or medium21.7% Medium6457.1% Medium or low2925.9% Low1614.3% Follow-up  Median36 months Range1.2-78 months Relapse4439.3% Death119.8% Disease free survival  Median36 months Range3-77 months Demographic and clinical parameters of patients (n = 112) The mRNA expression levels of TS and ERCC1 The median mRNA expression level of TS, relative to the housekeeping gene β-actin, was 2.86 × 10-1 (minimum expression, 2.7 × 10-2; maximum expression, 6.31). The median mRNA expression level of ERCC1, relative to the housekeeping gene β-actin, was 1.7 × 10-3 (minimum, 8.57 × 10-5; maximum, 6.7 × 10-2). In addition, when analyzed by sex and age, no significant association between the TS or ERCC1 mRNA levels and these parameters was found (P > 0.05). The median mRNA expression level of TS, relative to the housekeeping gene β-actin, was 2.86 × 10-1 (minimum expression, 2.7 × 10-2; maximum expression, 6.31). The median mRNA expression level of ERCC1, relative to the housekeeping gene β-actin, was 1.7 × 10-3 (minimum, 8.57 × 10-5; maximum, 6.7 × 10-2). In addition, when analyzed by sex and age, no significant association between the TS or ERCC1 mRNA levels and these parameters was found (P > 0.05). Association of DFS with TS and ERCC1 mRNA levels For the univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated that there was no significant association of DFS with the mRNA expression levels of TS and ERCC1 in Chinese CRC patients treated with oxaliplatin and 5-FU based adjuvant chemotherapy (Table  2).Table 2 Univariate analyses of disease free survival according to the cox regression model VariableHazard ratio95% confidence interval P logTS0.8200.602-1.1180.210logERCC11.0520.851-1.3020.638logTS: the logarithms of the expression level of TS; logERCC1: the logarithms of the expression level of ERCC1. Univariate analyses of disease free survival according to the cox regression model logTS: the logarithms of the expression level of TS; logERCC1: the logarithms of the expression level of ERCC1. 
Factors considered in the multivariate analyses included age, sex (male, female; reference category: female), tumor stage (stage IIa, stage IIb, stage IIIa, stage IIIb, stage IIIc; reference category: stage IIIc), tumor differentiation (high, high or medium, medium, medium or low, low; reference category: low), primary tumor location (rectum, colon; reference category: rectum), lymph node staging (N0, N1, N2; reference category: N2), vascular invasion (positive, negative; reference category: positive), nerve invasion (positive, negative; reference category: positive), interval between surgery and chemotherapy, duration of chemotherapy course, cumulated dosage of oxaliplatin as well as the mRNA expression levels of TS and ERCC1. It is clearly showed that only the tumor stage (tumor stage IIIc, reference, P = 0.083; tumor stage IIb, HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; tumor stage IIc, HR < 0.0001, P = 0.977; Tumor stage IIIa, HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) entered the model in the final step and was confirmed to be the independent prognostic factor for DFS. The results indicated that the DFS in the patients with tumor stage IIb was significantly longer than that in the patients with tumor stage IIIc. In addition, there was no evidence to prove the association between the DFS and the TS and ERCC1 mRNA levels (Table  3).Table 3 Cox regression analysis for multivariate analysis VariableDisease free survivalHazard ratio95% confidence interval P Tumor stage (reference category: tumor stage IIIc)0.083Tumor stage IIb0.2400.080 - 0.7240.011Tumor stage IIc< 0.0001--0.977Tumor stage IIIa0.1790.012 - 2.5930.207For the tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software. Cox regression analysis for multivariate analysis For the tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software. The patients were divided into two groups based on the median mRNA expression levels of TS (high expression group: > 2.86 × 10-1; low expression group: ≤ 2.86 × 10-1) and ERCC1 (high expression group: > 1.7 × 10-3; low expression group: ≤ 1.7 × 10-3). The Kaplan–Meier DFS curves according to the mRNA expression levels of TS and ERCC1 all showed no significant difference between high and low expression group (TS: P = 0.264; ERCC1: P = 0.484), suggesting that the mRNA expression levels of TS and ERCC1 was not significantly associated with the DFS (Figure  1).Figure 1 Disease free survival curves according to the expression level of TS and ERCC1 . A: disease free survival curves according to the expression level of TS. B: Disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group. Disease free survival curves according to the expression level of TS and ERCC1 . A: disease free survival curves according to the expression level of TS. B: Disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group. 
For the univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated that there was no significant association of DFS with the mRNA expression levels of TS and ERCC1 in Chinese CRC patients treated with oxaliplatin and 5-FU based adjuvant chemotherapy (Table  2).Table 2 Univariate analyses of disease free survival according to the cox regression model VariableHazard ratio95% confidence interval P logTS0.8200.602-1.1180.210logERCC11.0520.851-1.3020.638logTS: the logarithms of the expression level of TS; logERCC1: the logarithms of the expression level of ERCC1. Univariate analyses of disease free survival according to the cox regression model logTS: the logarithms of the expression level of TS; logERCC1: the logarithms of the expression level of ERCC1. Factors considered in the multivariate analyses included age, sex (male, female; reference category: female), tumor stage (stage IIa, stage IIb, stage IIIa, stage IIIb, stage IIIc; reference category: stage IIIc), tumor differentiation (high, high or medium, medium, medium or low, low; reference category: low), primary tumor location (rectum, colon; reference category: rectum), lymph node staging (N0, N1, N2; reference category: N2), vascular invasion (positive, negative; reference category: positive), nerve invasion (positive, negative; reference category: positive), interval between surgery and chemotherapy, duration of chemotherapy course, cumulated dosage of oxaliplatin as well as the mRNA expression levels of TS and ERCC1. It is clearly showed that only the tumor stage (tumor stage IIIc, reference, P = 0.083; tumor stage IIb, HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; tumor stage IIc, HR < 0.0001, P = 0.977; Tumor stage IIIa, HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) entered the model in the final step and was confirmed to be the independent prognostic factor for DFS. The results indicated that the DFS in the patients with tumor stage IIb was significantly longer than that in the patients with tumor stage IIIc. In addition, there was no evidence to prove the association between the DFS and the TS and ERCC1 mRNA levels (Table  3).Table 3 Cox regression analysis for multivariate analysis VariableDisease free survivalHazard ratio95% confidence interval P Tumor stage (reference category: tumor stage IIIc)0.083Tumor stage IIb0.2400.080 - 0.7240.011Tumor stage IIc< 0.0001--0.977Tumor stage IIIa0.1790.012 - 2.5930.207For the tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software. Cox regression analysis for multivariate analysis For the tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software. The patients were divided into two groups based on the median mRNA expression levels of TS (high expression group: > 2.86 × 10-1; low expression group: ≤ 2.86 × 10-1) and ERCC1 (high expression group: > 1.7 × 10-3; low expression group: ≤ 1.7 × 10-3). The Kaplan–Meier DFS curves according to the mRNA expression levels of TS and ERCC1 all showed no significant difference between high and low expression group (TS: P = 0.264; ERCC1: P = 0.484), suggesting that the mRNA expression levels of TS and ERCC1 was not significantly associated with the DFS (Figure  1).Figure 1 Disease free survival curves according to the expression level of TS and ERCC1 . 
A: disease free survival curves according to the expression level of TS. B: Disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group. Disease free survival curves according to the expression level of TS and ERCC1 . A: disease free survival curves according to the expression level of TS. B: Disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group.
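The DFS analyses reported above (a univariate Cox model on the log-transformed expression level, and a Kaplan–Meier comparison of median-split expression groups with a log-rank test) can be sketched as follows. This is a minimal illustration using the Python lifelines package rather than the SAS/SPSS workflow of the study; the data frame is synthetic, and the stratification by chemotherapy regimen and the other covariates are omitted for brevity:

```python
# Illustrative DFS analysis mirroring the approach described above (toy data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic stand-in for the 112-patient cohort.
rng = np.random.default_rng(0)
n = 112
df = pd.DataFrame({
    "dfs_months": rng.uniform(3, 77, n),          # follow-up time
    "event": rng.integers(0, 2, n),               # 1 = relapse/death, 0 = censored
    "ts_mrna": rng.lognormal(np.log(0.286), 1.0, n),
    "ercc1_mrna": rng.lognormal(np.log(0.0017), 1.0, n),
})

# Univariate Cox model on the log-transformed TS level (logTS in the text).
df["logTS"] = np.log10(df["ts_mrna"])
cph = CoxPHFitter()
cph.fit(df[["dfs_months", "event", "logTS"]],
        duration_col="dfs_months", event_col="event")
cph.print_summary()  # hazard ratio and 95% CI for logTS

# Median split into low/high TS expression, Kaplan-Meier curves, log-rank test.
high = df["ts_mrna"] > df["ts_mrna"].median()
km = KaplanMeierFitter()
for label, grp in (("low TS", df[~high]), ("high TS", df[high])):
    km.fit(grp["dfs_months"], grp["event"], label=label)
    print(label, "median DFS:", km.median_survival_time_)

result = logrank_test(
    df.loc[~high, "dfs_months"], df.loc[high, "dfs_months"],
    event_observed_A=df.loc[~high, "event"], event_observed_B=df.loc[high, "event"],
)
print("log-rank p-value:", result.p_value)
```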
Conclusions
In conclusion, our data demonstrated that the mRNA expression levels of ERCC1 and TS were not significantly correlated with the DFS of Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy. This suggests that the mRNA expression levels of ERCC1 and TS are not applicable as predictive factors for DFS in these patients. Further investigations to clearly define the role of ERCC1 and TS gene expression in this setting are needed.
[ "Background", "Patients", "Chemotherapy treatment", "RNA extraction and real-time RT-PCR", "Statistical analysis", "Characteristics of patients and follow-up results", "The mRNA expression levels of TS and ERCC1", "Association of DFS with TS and ERCC1 mRNA levels" ]
[ "Colorectal cancer (CRC) is the third most common cancer worldwide and has a high mortality rate\n[1]. About 608,000 deaths from CRC are estimated worldwide, accounting for 8% of all cancer deaths\n[2, 3]. Surgery is the most common treatment for CRC, yet prognosis remains poor\n[4]. As a result, considerable interest has concentrated on chemotherapy after surgery, such as oxaliplatin or 5-fluorouracil (5-FU)-based adjuvant chemotherapy\n[5, 6]. The 5-FU is an analogue of uracil with a fluorine atom at the C-5 position in place of hydrogen, which disrupts RNA synthesis and the action of thymidylate synthase by converting to several active metabolites: fluorodeoxyuridine monophosphate (FdUMP), fluorodeoxyuridine triphosphate (FdUTP) and fluorouridine triphosphate (FUTP)\n[7]. It was reported that 5-FU-based chemotherapy was a safe and effective treatment for elderly patients with advanced CRC\n[8]. Oxaliplatin is a platinum-based drug that has demonstrated antitumor activities in CRC both in vitro and in vivo\n[9]. It was reported that oxaliplatin based chemotherapy could significantly increase the progression-free survival in patients with metastatic CRC\n[10]. Moreover, the better efficacy and safety of oxaliplatin combined with 5-FU as first-line chemotherapy for elderly patients with metastatic CRC has been proved\n[11]. However, there is no predictive factor for efficacy of these treatments.\nThe ERCC1 (encodes excision cross-complementing 1) gene codes for a nucleotide excision repair protein involved in the repair of radiation and chemotherapy-induced DNA damage\n[12]. It has been reported that the gene polymorphism of ERCC1 at codon 118 was a predictive factor for the tumor response to oxaliplatin/5-FU combination chemotherapy in patients with advanced CRC\n[13]. Furthermore, thymidylate synthase (TS), as a target enzyme of 5-FU, is associated with response to 5-FU in human colorectal and gastric tumors\n[14, 15]. It was reported that TS genotyping could be of help in predicting toxicity to 5-FU-based chemotherapy in CRC patients\n[16]. However, little is known about the association between mRNA expression levels of ERCC1 and TS and clinical outcomes of oxaliplatin and 5-FU based adjuvant chemotherapy in Chinese people with CRC.\nIn this study, we investigated the association of ERCC1 and TS mRNA levels with the disease free survival (DFS) in Chinese CRC patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy.", "This study was carried out with the Institutional Ethics Committees approval and following the Chinese Medical Research Council guidelines. All participants gave their written informed consent prior to entering the study.\nThis is a prospective study. A total of 112 Chinese CRC patients who were treated at Jiangsu Tumor Hospital, China, from May 2005 to January 2010 were investigated in this study. Eligibility criterion was histological confirmation of stage II-III CRC after surgery according to the AJCC TNM classification\n[17].", "All the patients were treated with chemotherapy after curative operation. 
Four types of chemotherapy regimens were used for the treatment of CRC patients: i) the first one was the standard FOLFOX-4 consisting of 2-hour intravenous infusion of oxaliplatin (85 mg/m2) on day 1, and 2-hour intravenous drip infusion of calcium folinate (200 mg/m2) on days 1–2, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (600 mg/m2) lasting 22 h on days 1–2, every 2 weeks; ii) the second one was the modified FOLFOX consisting of intravenous infusion of oxaliplatin (130 mg/m2) and 2-hour intravenous drip infusion of folinate calcium (200 mg/m2) on day 1, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (1000 mg/m2) over 24 h on days 1 to 3, every 3 weeks; iii) the third one was oral XELOX consisting of 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 plus oral capecitabine 850 mg/m2 twice daily for 2 weeks in a 3-week cycle; iv) the fourth one was a conventional intravenous drip infusion including 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 and continuous infusion of 5-FU (750 mg/m2) lasting 4 h on days 1 to 5, every 3 weeks. All the chemotherapy regimens were performed by a trained nurse. The selection of chemotherapy regimens for each patient was according to the recommendation of an experienced expert.\nAfter treatment, the clinical outcomes were obtained by telephone follow-up or a return visit with the deadline of January 2014. DFS, which defined as the time from the end of chemotherapy to the first event of either recurrent disease or death, was calculated according to follow-up data.", "RNA was extracted and purified from formalin-fixed paraffin-embedded (FFPE) tissue samples of surgically resected primary CRC using an RNeasy mini kit (Qiagen, Inc.) according to the manufacturer’s instructions\n[18]. The cDNA of ERCC1 and TS was prepared by reverse transcription from RNA\n[19]. The ABI PRISM 7700 Sequence Detection System (Perkin-Elmer Applied Biosystems, Foster City, CA) was used to perform TaqMan probe-based real-time PCR reactions as previously described\n[20–22]. Relative levels of mRNA transcripts were calculated according to the comparative Ct method using β-actin as an endogenous control\n[23].", "The Cox proportional hazards model was used for univariate and multivariate analysis of prognostic factors. The variables included six continuous variables (age, duration of chemotherapy courses, interval between surgery and chemotherapy, cumulated dosage of oxaliplatin and mRNA expression levels of ERCC1 and TS) and eight categorical variables (sex, primary tumor location, tumor stage, tumor differentiation, lymph node staging, nerve invasion, vascular invasion and chemotherapy regimens). The logarithms of the TS and ERCC1 mRNA levels (logTS, logERCC1) were calculated for fitting normal distribution as the requirement of analysis. Dummy variables were considered for all the categorical variables. The chemotherapy regimens were used as a stratification variable in all the analyses. The backward stepwise method was used in the multivariate analysis base on the likelihood ratio statistics. Kaplan–Meier curves and log-rank tests were used for DFS analysis. Hazards ratios were used to calculate the relative risks of recurrence or death. All tests were two-sided, and p < 0.05 was considered as statistically significant. Analyses were performed using SAS version 9.1 (Institute, Cary, NC) and SPSS 19.0 (IBM, Armonk, New York). 
Power was calculated using the PS Power and Sample Size Calculation program, version 3.0.43 (Vanderbilt University, Nashville, TN, USA).", "Demographic details on the patients investigated in this study are shown in Table 1. A total of 112 Chinese patients (40 females and 72 males) aged from 32 to 75 years (average, 52.75) were analyzed in this study. There were 61 rectal cancer patients and 38 colon cancer patients. All patients (stage IIa, 24; stage IIb, 1; stage IIIa, 3; stage IIIb, 53; stage IIIc, 31) underwent curative operation and then received one of 4 chemotherapy regimens: the standard FOLFOX-4 (20 cases), modified FOLFOX (15 cases), oral XELOX (19 cases) or conventional intravenous drip infusion (58 cases). The median follow-up duration was 36 months (range, 1.2 to 78 months). Relapse occurred in forty-four patients (39.3%) and eleven patients (9.8%) died of disease. The median DFS was 36 months (minimum: 3 months; maximum: more than 77 months) (Table 1).\nTable 1. Demographic and clinical parameters of patients (n = 112):\nAge (years): mean 52.72; range 32-75\nSex: female 40 (35.70%); male 72 (64.30%)\nLymph node staging: N0 25 (22.32%); N1 49 (43.75%); N2 38 (33.93%)\nTumor stage: stage IIa 24 (21.40%); stage IIb 1 (0.90%); stage IIIa 3 (2.70%); stage IIIb 53 (47%); stage IIIc 31 (27.60%)\nPrimary tumor location: rectum 61 (54.46%); colon 38 (33.93%)\nVascular invasion: positive 15 (13.39%); negative 97 (86.61%)\nNerve invasion: positive 24 (21.43%); negative 88 (78.57%)\nChemotherapy regimen: standard FOLFOX-4 20 (17.80%); modified FOLFOX 15 (13.40%); oral XELOX 19 (17%); conventional intravenous drip infusion 58 (51.70%)\nInterval of chemotherapy and surgery: within 28 days 36 (32.14%); more than 28 days 76 (67.86%)\nDuration of chemotherapy course: 1-6 weeks 21 (18.75%); 7-12 weeks 25 (22.32%); 13-18 weeks 58 (51.80%); 19-24 weeks 8 (7.14%)\nTumor differentiation: high 1 (0.8%); high or medium 2 (1.7%); medium 64 (57.1%); medium or low 29 (25.9%); low 16 (14.3%)\nFollow-up: median 36 months; range 1.2-78 months; relapse 44 (39.3%); death 11 (9.8%)\nDisease free survival: median 36 months; range 3-77 months", "The median mRNA expression level of TS, relative to the housekeeping gene β-actin, was 2.86 × 10^-1 (minimum expression, 2.7 × 10^-2; maximum expression, 6.31). The median mRNA expression level of ERCC1, relative to the housekeeping gene β-actin, was 1.7 × 10^-3 (minimum, 8.57 × 10^-5; maximum, 6.7 × 10^-2). In addition, when analyzed by sex and age, no significant association between the TS or ERCC1 mRNA levels and these parameters was found (P > 0.05).", "For the univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated that there was no significant association of DFS with the mRNA expression levels of TS and ERCC1 in Chinese CRC patients treated with oxaliplatin and 5-FU based adjuvant chemotherapy (Table 2).\nTable 2. Univariate analyses of disease free survival according to the Cox regression model:\nlogTS: hazard ratio 0.820; 95% confidence interval 0.602-1.118; P = 0.210\nlogERCC1: hazard ratio 1.052; 95% confidence interval 0.851-1.302; P = 0.638\nlogTS: the logarithm of the expression level of TS; logERCC1: the logarithm of the expression level of ERCC1.\nFactors considered in the multivariate analyses included age, sex (male, female; reference category: female), tumor stage (stage IIa, stage IIb, stage IIIa, stage IIIb, stage IIIc; reference category: stage IIIc), tumor differentiation (high, high or medium, medium, medium or low, low; reference category: low), primary tumor location (rectum, colon; reference category: rectum), lymph node staging (N0, N1, N2; reference category: N2), vascular invasion (positive, negative; reference category: positive), nerve invasion (positive, negative; reference category: positive), interval between surgery and chemotherapy, duration of chemotherapy course, cumulated dosage of oxaliplatin, as well as the mRNA expression levels of TS and ERCC1. The analysis showed that only tumor stage (tumor stage IIIc, reference, P = 0.083; tumor stage IIb, HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; tumor stage IIc, HR < 0.0001, P = 0.977; tumor stage IIIa, HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) entered the model in the final step and was confirmed to be an independent prognostic factor for DFS. The results indicated that the DFS in the patients with tumor stage IIb was significantly longer than that in the patients with tumor stage IIIc. In addition, there was no evidence of an association between the DFS and the TS and ERCC1 mRNA levels (Table 3).\nTable 3. Cox regression analysis for multivariate analysis of disease free survival:\nTumor stage (reference category: tumor stage IIIc): P = 0.083\nTumor stage IIb: hazard ratio 0.240; 95% confidence interval 0.080-0.724; P = 0.011\nTumor stage IIc: hazard ratio < 0.0001; 95% confidence interval not displayed; P = 0.977\nTumor stage IIIa: hazard ratio 0.179; 95% confidence interval 0.012-2.593; P = 0.207\nFor tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software.\nThe patients were divided into two groups based on the median mRNA expression levels of TS (high expression group: > 2.86 × 10^-1; low expression group: ≤ 2.86 × 10^-1) and ERCC1 (high expression group: > 1.7 × 10^-3; low expression group: ≤ 1.7 × 10^-3). The Kaplan–Meier DFS curves according to the mRNA expression levels of TS and ERCC1 showed no significant difference between the high and low expression groups (TS: P = 0.264; ERCC1: P = 0.484), suggesting that the mRNA expression levels of TS and ERCC1 were not significantly associated with the DFS (Figure 1).\nFigure 1. Disease free survival curves according to the expression level of TS and ERCC1. A: disease free survival curves according to the expression level of TS. B: disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group." ]
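For readers who want to reproduce the kind of Kaplan–Meier comparison described above, the sketch below shows one way to dichotomize expression at the cohort median and run a log-rank test in Python with the lifelines package. This is not the authors' code (the study used SPSS and SAS), and the file name and column names (crc_cohort.csv, dfs_months, event, ts_mrna) are hypothetical.

```python
# Minimal sketch: Kaplan-Meier curves and a log-rank test for DFS after
# dichotomizing TS mRNA expression at the cohort median (assumed data layout).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("crc_cohort.csv")  # hypothetical: one row per patient
df["ts_group"] = (df["ts_mrna"] > df["ts_mrna"].median()).map({True: "high", False: "low"})

kmf = KaplanMeierFitter()
for name, grp in df.groupby("ts_group"):
    # event = 1 for relapse or death, 0 for censored follow-up
    kmf.fit(grp["dfs_months"], event_observed=grp["event"], label=f"TS {name}")
    kmf.plot_survival_function()  # analogous in spirit to Figure 1A

high, low = df[df["ts_group"] == "high"], df[df["ts_group"] == "low"]
result = logrank_test(high["dfs_months"], low["dfs_months"],
                      event_observed_A=high["event"],
                      event_observed_B=low["event"])
print(result.p_value)  # the paper reports P = 0.264 for TS and P = 0.484 for ERCC1
```

The same median dichotomization applied to ERCC1 would mirror panel B of Figure 1.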
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients", "Chemotherapy treatment", "RNA extraction and real-time RT-PCR", "Statistical analysis", "Results", "Characteristics of patients and follow-up results", "The mRNA expression levels of TS and ERCC1", "Association of DFS with TS and ERCC1 mRNA levels", "Discussion", "Conclusions" ]
[ "Colorectal cancer (CRC) is the third most common cancer worldwide and has a high mortality rate\n[1]. About 608,000 deaths from CRC are estimated worldwide, accounting for 8% of all cancer deaths\n[2, 3]. Surgery is the most common treatment for CRC, yet prognosis remains poor\n[4]. As a result, considerable interest has concentrated on chemotherapy after surgery, such as oxaliplatin or 5-fluorouracil (5-FU)-based adjuvant chemotherapy\n[5, 6]. The 5-FU is an analogue of uracil with a fluorine atom at the C-5 position in place of hydrogen, which disrupts RNA synthesis and the action of thymidylate synthase by converting to several active metabolites: fluorodeoxyuridine monophosphate (FdUMP), fluorodeoxyuridine triphosphate (FdUTP) and fluorouridine triphosphate (FUTP)\n[7]. It was reported that 5-FU-based chemotherapy was a safe and effective treatment for elderly patients with advanced CRC\n[8]. Oxaliplatin is a platinum-based drug that has demonstrated antitumor activities in CRC both in vitro and in vivo\n[9]. It was reported that oxaliplatin based chemotherapy could significantly increase the progression-free survival in patients with metastatic CRC\n[10]. Moreover, the better efficacy and safety of oxaliplatin combined with 5-FU as first-line chemotherapy for elderly patients with metastatic CRC has been proved\n[11]. However, there is no predictive factor for efficacy of these treatments.\nThe ERCC1 (encodes excision cross-complementing 1) gene codes for a nucleotide excision repair protein involved in the repair of radiation and chemotherapy-induced DNA damage\n[12]. It has been reported that the gene polymorphism of ERCC1 at codon 118 was a predictive factor for the tumor response to oxaliplatin/5-FU combination chemotherapy in patients with advanced CRC\n[13]. Furthermore, thymidylate synthase (TS), as a target enzyme of 5-FU, is associated with response to 5-FU in human colorectal and gastric tumors\n[14, 15]. It was reported that TS genotyping could be of help in predicting toxicity to 5-FU-based chemotherapy in CRC patients\n[16]. However, little is known about the association between mRNA expression levels of ERCC1 and TS and clinical outcomes of oxaliplatin and 5-FU based adjuvant chemotherapy in Chinese people with CRC.\nIn this study, we investigated the association of ERCC1 and TS mRNA levels with the disease free survival (DFS) in Chinese CRC patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy.", " Patients This study was carried out with the Institutional Ethics Committees approval and following the Chinese Medical Research Council guidelines. All participants gave their written informed consent prior to entering the study.\nThis is a prospective study. A total of 112 Chinese CRC patients who were treated at Jiangsu Tumor Hospital, China, from May 2005 to January 2010 were investigated in this study. Eligibility criterion was histological confirmation of stage II-III CRC after surgery according to the AJCC TNM classification\n[17].\nThis study was carried out with the Institutional Ethics Committees approval and following the Chinese Medical Research Council guidelines. All participants gave their written informed consent prior to entering the study.\nThis is a prospective study. A total of 112 Chinese CRC patients who were treated at Jiangsu Tumor Hospital, China, from May 2005 to January 2010 were investigated in this study. 
Eligibility criterion was histological confirmation of stage II-III CRC after surgery according to the AJCC TNM classification [17].\nChemotherapy treatment: All the patients were treated with chemotherapy after curative operation. Four types of chemotherapy regimens were used for the treatment of CRC patients: i) the first was the standard FOLFOX-4, consisting of 2-hour intravenous infusion of oxaliplatin (85 mg/m2) on day 1 and 2-hour intravenous drip infusion of calcium folinate (200 mg/m2) on days 1–2, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (600 mg/m2) lasting 22 h on days 1–2, every 2 weeks; ii) the second was the modified FOLFOX, consisting of intravenous infusion of oxaliplatin (130 mg/m2) and 2-hour intravenous drip infusion of folinate calcium (200 mg/m2) on day 1, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (1000 mg/m2) over 24 h on days 1 to 3, every 3 weeks; iii) the third was oral XELOX, consisting of 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 plus oral capecitabine 850 mg/m2 twice daily for 2 weeks in a 3-week cycle; iv) the fourth was a conventional intravenous drip infusion including 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 and continuous infusion of 5-FU (750 mg/m2) lasting 4 h on days 1 to 5, every 3 weeks. All the chemotherapy regimens were administered by a trained nurse. The chemotherapy regimen for each patient was selected according to the recommendation of an experienced expert.\nAfter treatment, the clinical outcomes were obtained by telephone follow-up or a return visit, with a deadline of January 2014. DFS, which was defined as the time from the end of chemotherapy to the first event of either recurrent disease or death, was calculated according to the follow-up data.\nRNA extraction and real-time RT-PCR: RNA was extracted and purified from formalin-fixed paraffin-embedded (FFPE) tissue samples of surgically resected primary CRC using an RNeasy mini kit (Qiagen, Inc.) according to the manufacturer’s instructions [18]. The cDNA of ERCC1 and TS was prepared by reverse transcription from RNA [19]. The ABI PRISM 7700 Sequence Detection System (Perkin-Elmer Applied Biosystems, Foster City, CA) was used to perform TaqMan probe-based real-time PCR reactions as previously described [20–22]. Relative levels of mRNA transcripts were calculated according to the comparative Ct method using β-actin as an endogenous control [23].\nStatistical analysis: The Cox proportional hazards model was used for univariate and multivariate analysis of prognostic factors. The variables included six continuous variables (age, duration of chemotherapy courses, interval between surgery and chemotherapy, cumulated dosage of oxaliplatin and mRNA expression levels of ERCC1 and TS) and eight categorical variables (sex, primary tumor location, tumor stage, tumor differentiation, lymph node staging, nerve invasion, vascular invasion and chemotherapy regimens). The logarithms of the TS and ERCC1 mRNA levels (logTS, logERCC1) were calculated so that the values fitted a normal distribution, as required by the analysis. Dummy variables were considered for all the categorical variables. The chemotherapy regimens were used as a stratification variable in all the analyses. The backward stepwise method, based on the likelihood ratio statistic, was used in the multivariate analysis. Kaplan–Meier curves and log-rank tests were used for DFS analysis. Hazard ratios were used to calculate the relative risks of recurrence or death. All tests were two-sided, and p < 0.05 was considered statistically significant. Analyses were performed using SAS version 9.1 (SAS Institute, Cary, NC) and SPSS 19.0 (IBM, Armonk, New York). Power was calculated using the PS Power and Sample Size Calculation program, version 3.0.43 (Vanderbilt University, Nashville, TN, USA).", "This study was carried out with the Institutional Ethics Committees approval and following the Chinese Medical Research Council guidelines. All participants gave their written informed consent prior to entering the study.\nThis is a prospective study. A total of 112 Chinese CRC patients who were treated at Jiangsu Tumor Hospital, China, from May 2005 to January 2010 were investigated in this study. Eligibility criterion was histological confirmation of stage II-III CRC after surgery according to the AJCC TNM classification [17].", "All the patients were treated with chemotherapy after curative operation. Four types of chemotherapy regimens were used for the treatment of CRC patients: i) the first was the standard FOLFOX-4, consisting of 2-hour intravenous infusion of oxaliplatin (85 mg/m2) on day 1 and 2-hour intravenous drip infusion of calcium folinate (200 mg/m2) on days 1–2, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (600 mg/m2) lasting 22 h on days 1–2, every 2 weeks; ii) the second was the modified FOLFOX, consisting of intravenous infusion of oxaliplatin (130 mg/m2) and 2-hour intravenous drip infusion of folinate calcium (200 mg/m2) on day 1, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (1000 mg/m2) over 24 h on days 1 to 3, every 3 weeks; iii) the third was oral XELOX, consisting of 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 plus oral capecitabine 850 mg/m2 twice daily for 2 weeks in a 3-week cycle; iv) the fourth was a conventional intravenous drip infusion including 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 and continuous infusion of 5-FU (750 mg/m2) lasting 4 h on days 1 to 5, every 3 weeks. All the chemotherapy regimens were administered by a trained nurse. The chemotherapy regimen for each patient was selected according to the recommendation of an experienced expert.\nAfter treatment, the clinical outcomes were obtained by telephone follow-up or a return visit, with a deadline of January 2014. DFS, which was defined as the time from the end of chemotherapy to the first event of either recurrent disease or death, was calculated according to the follow-up data.", "RNA was extracted and purified from formalin-fixed paraffin-embedded (FFPE) tissue samples of surgically resected primary CRC using an RNeasy mini kit (Qiagen, Inc.) according to the manufacturer’s instructions [18]. The cDNA of ERCC1 and TS was prepared by reverse transcription from RNA [19]. The ABI PRISM 7700 Sequence Detection System (Perkin-Elmer Applied Biosystems, Foster City, CA) was used to perform TaqMan probe-based real-time PCR reactions as previously described [20–22]. Relative levels of mRNA transcripts were calculated according to the comparative Ct method using β-actin as an endogenous control [23].",
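The comparative Ct calculation referenced above reduces to 2^-ΔCt with β-actin as the reference gene. A minimal, self-contained sketch is shown below; the Ct values are illustrative only, not data from the study.

```python
# Comparative Ct method: target mRNA level relative to an endogenous control.
def relative_expression(ct_target: float, ct_reference: float) -> float:
    """Return the relative mRNA level as 2 ** -(Ct_target - Ct_reference)."""
    return 2.0 ** -(ct_target - ct_reference)

# Illustrative values: a target amplifying ~1.8 cycles after beta-actin gives
# about 0.29 relative units, the same order of magnitude as the median TS
# level reported in this study (2.86 x 10^-1).
print(relative_expression(ct_target=26.8, ct_reference=25.0))
```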
"The Cox proportional hazards model was used for univariate and multivariate analysis of prognostic factors. The variables included six continuous variables (age, duration of chemotherapy courses, interval between surgery and chemotherapy, cumulated dosage of oxaliplatin and mRNA expression levels of ERCC1 and TS) and eight categorical variables (sex, primary tumor location, tumor stage, tumor differentiation, lymph node staging, nerve invasion, vascular invasion and chemotherapy regimens). The logarithms of the TS and ERCC1 mRNA levels (logTS, logERCC1) were calculated so that the values fitted a normal distribution, as required by the analysis. Dummy variables were considered for all the categorical variables. The chemotherapy regimens were used as a stratification variable in all the analyses. The backward stepwise method, based on the likelihood ratio statistic, was used in the multivariate analysis. Kaplan–Meier curves and log-rank tests were used for DFS analysis. Hazard ratios were used to calculate the relative risks of recurrence or death. All tests were two-sided, and p < 0.05 was considered statistically significant. Analyses were performed using SAS version 9.1 (SAS Institute, Cary, NC) and SPSS 19.0 (IBM, Armonk, New York). Power was calculated using the PS Power and Sample Size Calculation program, version 3.0.43 (Vanderbilt University, Nashville, TN, USA).",
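As an illustration of the univariate Cox model described in the statistical analysis (stratified by chemotherapy regimen, with log-transformed expression), the hedged sketch below uses the Python lifelines package rather than SAS/SPSS; the column names are hypothetical and the log base is assumed to be 10, which the paper does not state.

```python
# Minimal sketch of a regimen-stratified univariate Cox model for logTS.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("crc_cohort.csv")                  # hypothetical per-patient data
df["logTS"] = np.log10(df["ts_mrna"])               # log base 10 assumed

cph = CoxPHFitter()
cph.fit(df[["dfs_months", "event", "logTS", "regimen"]],
        duration_col="dfs_months",
        event_col="event",
        strata=["regimen"])                         # regimen as stratification variable
cph.print_summary()   # exp(coef) is the hazard ratio for logTS, with its 95% CI
```

A multivariate model with backward selection would add the dummy-coded clinical covariates and drop them stepwise by likelihood ratio; lifelines does not automate that selection, so it would need to be scripted around repeated fits.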
"Characteristics of patients and follow-up results: Demographic details on the patients investigated in this study are shown in Table 1. A total of 112 Chinese patients (40 females and 72 males) aged from 32 to 75 years (average, 52.75) were analyzed in this study. There were 61 rectal cancer patients and 38 colon cancer patients. All patients (stage IIa, 24; stage IIb, 1; stage IIIa, 3; stage IIIb, 53; stage IIIc, 31) underwent curative operation and then received one of 4 chemotherapy regimens: the standard FOLFOX-4 (20 cases), modified FOLFOX (15 cases), oral XELOX (19 cases) or conventional intravenous drip infusion (58 cases). The median follow-up duration was 36 months (range, 1.2 to 78 months). Relapse occurred in forty-four patients (39.3%) and eleven patients (9.8%) died of disease. The median DFS was 36 months (minimum: 3 months; maximum: more than 77 months) (Table 1).\nTable 1. Demographic and clinical parameters of patients (n = 112):\nAge (years): mean 52.72; range 32-75\nSex: female 40 (35.70%); male 72 (64.30%)\nLymph node staging: N0 25 (22.32%); N1 49 (43.75%); N2 38 (33.93%)\nTumor stage: stage IIa 24 (21.40%); stage IIb 1 (0.90%); stage IIIa 3 (2.70%); stage IIIb 53 (47%); stage IIIc 31 (27.60%)\nPrimary tumor location: rectum 61 (54.46%); colon 38 (33.93%)\nVascular invasion: positive 15 (13.39%); negative 97 (86.61%)\nNerve invasion: positive 24 (21.43%); negative 88 (78.57%)\nChemotherapy regimen: standard FOLFOX-4 20 (17.80%); modified FOLFOX 15 (13.40%); oral XELOX 19 (17%); conventional intravenous drip infusion 58 (51.70%)\nInterval of chemotherapy and surgery: within 28 days 36 (32.14%); more than 28 days 76 (67.86%)\nDuration of chemotherapy course: 1-6 weeks 21 (18.75%); 7-12 weeks 25 (22.32%); 13-18 weeks 58 (51.80%); 19-24 weeks 8 (7.14%)\nTumor differentiation: high 1 (0.8%); high or medium 2 (1.7%); medium 64 (57.1%); medium or low 29 (25.9%); low 16 (14.3%)\nFollow-up: median 36 months; range 1.2-78 months; relapse 44 (39.3%); death 11 (9.8%)\nDisease free survival: median 36 months; range 3-77 months\nThe mRNA expression levels of TS and ERCC1: The median mRNA expression level of TS, relative to the housekeeping gene β-actin, was 2.86 × 10^-1 (minimum expression, 2.7 × 10^-2; maximum expression, 6.31). The median mRNA expression level of ERCC1, relative to the housekeeping gene β-actin, was 1.7 × 10^-3 (minimum, 8.57 × 10^-5; maximum, 6.7 × 10^-2). In addition, when analyzed by sex and age, no significant association between the TS or ERCC1 mRNA levels and these parameters was found (P > 0.05).\nAssociation of DFS with TS and ERCC1 mRNA levels: For the univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated that there was no significant association of DFS with the mRNA expression levels of TS and ERCC1 in Chinese CRC patients treated with oxaliplatin and 5-FU based adjuvant chemotherapy (Table 2).\nTable 2. Univariate analyses of disease free survival according to the Cox regression model:\nlogTS: hazard ratio 0.820; 95% confidence interval 0.602-1.118; P = 0.210\nlogERCC1: hazard ratio 1.052; 95% confidence interval 0.851-1.302; P = 0.638\nlogTS: the logarithm of the expression level of TS; logERCC1: the logarithm of the expression level of ERCC1.\nFactors considered in the multivariate analyses included age, sex (male, female; reference category: female), tumor stage (stage IIa, stage IIb, stage IIIa, stage IIIb, stage IIIc; reference category: stage IIIc), tumor differentiation (high, high or medium, medium, medium or low, low; reference category: low), primary tumor location (rectum, colon; reference category: rectum), lymph node staging (N0, N1, N2; reference category: N2), vascular invasion (positive, negative; reference category: positive), nerve invasion (positive, negative; reference category: positive), interval between surgery and chemotherapy, duration of chemotherapy course, cumulated dosage of oxaliplatin, as well as the mRNA expression levels of TS and ERCC1. The analysis showed that only tumor stage (tumor stage IIIc, reference, P = 0.083; tumor stage IIb, HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; tumor stage IIc, HR < 0.0001, P = 0.977; tumor stage IIIa, HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) entered the model in the final step and was confirmed to be an independent prognostic factor for DFS. The results indicated that the DFS in the patients with tumor stage IIb was significantly longer than that in the patients with tumor stage IIIc. In addition, there was no evidence of an association between the DFS and the TS and ERCC1 mRNA levels (Table 3).\nTable 3. Cox regression analysis for multivariate analysis of disease free survival:\nTumor stage (reference category: tumor stage IIIc): P = 0.083\nTumor stage IIb: hazard ratio 0.240; 95% confidence interval 0.080-0.724; P = 0.011\nTumor stage IIc: hazard ratio < 0.0001; 95% confidence interval not displayed; P = 0.977\nTumor stage IIIa: hazard ratio 0.179; 95% confidence interval 0.012-2.593; P = 0.207\nFor tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software.\nThe patients were divided into two groups based on the median mRNA expression levels of TS (high expression group: > 2.86 × 10^-1; low expression group: ≤ 2.86 × 10^-1) and ERCC1 (high expression group: > 1.7 × 10^-3; low expression group: ≤ 1.7 × 10^-3). The Kaplan–Meier DFS curves according to the mRNA expression levels of TS and ERCC1 showed no significant difference between the high and low expression groups (TS: P = 0.264; ERCC1: P = 0.484), suggesting that the mRNA expression levels of TS and ERCC1 were not significantly associated with the DFS (Figure 1).\nFigure 1. Disease free survival curves according to the expression level of TS and ERCC1. A: disease free survival curves according to the expression level of TS. B: disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group.", "Demographic details on the patients investigated in this study are shown in Table 1. A total of 112 Chinese patients (40 females and 72 males) aged from 32 to 75 years (average, 52.75) were analyzed in this study. There were 61 rectal cancer patients and 38 colon cancer patients. All patients (stage IIa, 24; stage IIb, 1; stage IIIa, 3; stage IIIb, 53; stage IIIc, 31) underwent curative operation and then received one of 4 chemotherapy regimens: the standard FOLFOX-4 (20 cases), modified FOLFOX (15 cases), oral XELOX (19 cases) or conventional intravenous drip infusion (58 cases). The median follow-up duration was 36 months (range, 1.2 to 78 months). Relapse occurred in forty-four patients (39.3%) and eleven patients (9.8%) died of disease.
The median DFS was 36 months (minimum: 3 months; maximum: more than 77 months) (Table 1).\nTable 1. Demographic and clinical parameters of patients (n = 112):\nAge (years): mean 52.72; range 32-75\nSex: female 40 (35.70%); male 72 (64.30%)\nLymph node staging: N0 25 (22.32%); N1 49 (43.75%); N2 38 (33.93%)\nTumor stage: stage IIa 24 (21.40%); stage IIb 1 (0.90%); stage IIIa 3 (2.70%); stage IIIb 53 (47%); stage IIIc 31 (27.60%)\nPrimary tumor location: rectum 61 (54.46%); colon 38 (33.93%)\nVascular invasion: positive 15 (13.39%); negative 97 (86.61%)\nNerve invasion: positive 24 (21.43%); negative 88 (78.57%)\nChemotherapy regimen: standard FOLFOX-4 20 (17.80%); modified FOLFOX 15 (13.40%); oral XELOX 19 (17%); conventional intravenous drip infusion 58 (51.70%)\nInterval of chemotherapy and surgery: within 28 days 36 (32.14%); more than 28 days 76 (67.86%)\nDuration of chemotherapy course: 1-6 weeks 21 (18.75%); 7-12 weeks 25 (22.32%); 13-18 weeks 58 (51.80%); 19-24 weeks 8 (7.14%)\nTumor differentiation: high 1 (0.8%); high or medium 2 (1.7%); medium 64 (57.1%); medium or low 29 (25.9%); low 16 (14.3%)\nFollow-up: median 36 months; range 1.2-78 months; relapse 44 (39.3%); death 11 (9.8%)\nDisease free survival: median 36 months; range 3-77 months", "The median mRNA expression level of TS, relative to the housekeeping gene β-actin, was 2.86 × 10^-1 (minimum expression, 2.7 × 10^-2; maximum expression, 6.31). The median mRNA expression level of ERCC1, relative to the housekeeping gene β-actin, was 1.7 × 10^-3 (minimum, 8.57 × 10^-5; maximum, 6.7 × 10^-2). In addition, when analyzed by sex and age, no significant association between the TS or ERCC1 mRNA levels and these parameters was found (P > 0.05).", "For the univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated that there was no significant association of DFS with the mRNA expression levels of TS and ERCC1 in Chinese CRC patients treated with oxaliplatin and 5-FU based adjuvant chemotherapy (Table 2).\nTable 2. Univariate analyses of disease free survival according to the Cox regression model:\nlogTS: hazard ratio 0.820; 95% confidence interval 0.602-1.118; P = 0.210\nlogERCC1: hazard ratio 1.052; 95% confidence interval 0.851-1.302; P = 0.638\nlogTS: the logarithm of the expression level of TS; logERCC1: the logarithm of the expression level of ERCC1.\nFactors considered in the multivariate analyses included age, sex (male, female; reference category: female), tumor stage (stage IIa, stage IIb, stage IIIa, stage IIIb, stage IIIc; reference category: stage IIIc), tumor differentiation (high, high or medium, medium, medium or low, low; reference category: low), primary tumor location (rectum, colon; reference category: rectum), lymph node staging (N0, N1, N2; reference category: N2), vascular invasion (positive, negative; reference category: positive), nerve invasion (positive, negative; reference category: positive), interval between surgery and chemotherapy, duration of chemotherapy course, cumulated dosage of oxaliplatin, as well as the mRNA expression levels of TS and ERCC1.
The analysis showed that only tumor stage (tumor stage IIIc, reference, P = 0.083; tumor stage IIb, HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; tumor stage IIc, HR < 0.0001, P = 0.977; tumor stage IIIa, HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) entered the model in the final step and was confirmed to be an independent prognostic factor for DFS. The results indicated that the DFS in the patients with tumor stage IIb was significantly longer than that in the patients with tumor stage IIIc. In addition, there was no evidence of an association between the DFS and the TS and ERCC1 mRNA levels (Table 3).\nTable 3. Cox regression analysis for multivariate analysis of disease free survival:\nTumor stage (reference category: tumor stage IIIc): P = 0.083\nTumor stage IIb: hazard ratio 0.240; 95% confidence interval 0.080-0.724; P = 0.011\nTumor stage IIc: hazard ratio < 0.0001; 95% confidence interval not displayed; P = 0.977\nTumor stage IIIa: hazard ratio 0.179; 95% confidence interval 0.012-2.593; P = 0.207\nFor tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software.\nThe patients were divided into two groups based on the median mRNA expression levels of TS (high expression group: > 2.86 × 10^-1; low expression group: ≤ 2.86 × 10^-1) and ERCC1 (high expression group: > 1.7 × 10^-3; low expression group: ≤ 1.7 × 10^-3). The Kaplan–Meier DFS curves according to the mRNA expression levels of TS and ERCC1 showed no significant difference between the high and low expression groups (TS: P = 0.264; ERCC1: P = 0.484), suggesting that the mRNA expression levels of TS and ERCC1 were not significantly associated with the DFS (Figure 1).\nFigure 1. Disease free survival curves according to the expression level of TS and ERCC1. A: disease free survival curves according to the expression level of TS. B: disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group.", "The expression of ERCC1 and TS has been reported to be related to the clinical outcomes of patients treated with oxaliplatin or 5-FU-based adjuvant chemotherapy [24, 25]. Nevertheless, there was not enough evidence to establish the prognostic role of ERCC1 and TS expression in CRC patients treated with oxaliplatin and 5-FU-based adjuvant chemotherapy. Therefore, we analyzed the association of the mRNA expression levels of ERCC1 and TS with DFS in Chinese patients with stage II-III CRC in this study. The results indicated no significant association between DFS and the mRNA expression levels of ERCC1 and TS, suggesting that the expression of ERCC1 and TS is not applicable as a predictive factor for DFS in Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy.\nHowever, the mRNA levels of ERCC1 and TS have been reported by Shirota Y et al. to be associated with survival in CRC patients receiving 5-FU and oxaliplatin adjuvant chemotherapy [20]. This discrepancy may be due to differences in cancer stage and in the ethnicity of the patients. In this study, the patients had stage II-III CRC.
However, the patients in the study of Shirota Y et al. [20] were at stage IV. The mRNA expression levels of ERCC1 and TS may vary with the stage of cancer. In addition, the patients were all Chinese in this study but American in the study of Shirota Y et al. Gene expression profiles differ among ethnic groups [26]. Therefore, we inferred that the response of the ERCC1 and TS genes to oxaliplatin and 5-FU based adjuvant chemotherapy might be different between Chinese and American patients.\nSome previous studies have reported that ERCC1 expression is a predictive factor for survival after chemotherapy in advanced non-small cell lung cancer [27], bladder cancer [28] and gastric cancer [24]. However, there was no evidence of an association between the mRNA expression of ERCC1 and the DFS of stage II-III CRC patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy in this study. This indicates that ERCC1 expression may predict the clinical outcomes of chemotherapy in cancers such as non-small cell lung cancer, bladder cancer and gastric cancer, but not in stage II-III CRC.\nMoreover, in this study, we found that tumor stage was a significant prognostic factor of DFS in CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy. It has been reported that the survival of advanced/recurrent rectal cancers treated with 5-FU based chemotherapy was significantly associated with tumor stage [29]. Meanwhile, another study reported that pathologic stage significantly influenced the DFS of locally advanced rectal cancer patients after preoperative chemoradiation (5-FU or oxaliplatin) [30]. Therefore, tumor stage must be considered in further studies for the prognostic analysis of 5-FU and oxaliplatin based adjuvant chemotherapy for CRC patients.\nThere were some notable limitations of this study. First, the power calculation (α = 0.05; TS: power 1 - β = 0.511; ERCC1: power 1 - β = 0.656) showed that the sample size was too small to reliably assess the association between TS or ERCC1 expression and DFS; thus, more studies with larger sample sizes are required. Second, the follow-up duration was short, so further studies are needed to verify the results of this study. Third, there were four chemotherapy regimens in this study, which may be a limitation for identifying an association between TS or ERCC1 expression and DFS. In addition, the dose of 5-FU may also affect the clinical outcomes of chemotherapy, and this potentially prognostic factor should be investigated in further studies.", "In conclusion, our data demonstrated that the mRNA expression levels of ERCC1 and TS were not significantly correlated with the DFS of Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy. This suggests that the mRNA expression levels of ERCC1 and TS are not applicable as predictive factors for DFS in this patient group. Further investigations to clearly define the role of ERCC1 and TS gene expression in this setting are needed." ]
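The Discussion quotes post hoc power values (1 - β = 0.511 for TS and 0.656 for ERCC1) obtained from the PS Power and Sample Size program. A rough, hedged way to approximate this kind of number in code is Schoenfeld's formula for the log-rank/Cox setting, which depends only on the number of events, the group split and the detectable hazard ratio. The inputs below are illustrative and will not reproduce the authors' exact figures.

```python
# Approximate power of a two-sided log-rank test (Schoenfeld's approximation).
import math
from scipy.stats import norm

def logrank_power(n_events: int, hazard_ratio: float,
                  prop_group1: float = 0.5, alpha: float = 0.05) -> float:
    z_alpha = norm.ppf(1 - alpha / 2)
    effect = math.sqrt(n_events * prop_group1 * (1 - prop_group1)) * abs(math.log(hazard_ratio))
    return float(norm.cdf(effect - z_alpha))

# 44 relapse events split roughly half-and-half by the median expression cut-off,
# and a hypothetical detectable hazard ratio of 2.0:
print(round(logrank_power(n_events=44, hazard_ratio=2.0), 3))
```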
[ null, "methods", null, null, null, null, "results", null, null, null, "discussion", "conclusions" ]
[ "Colorectal cancer", "ERCC1", "TS", "Real-time PCR", "Adjuvant chemotherapy" ]
Background: Colorectal cancer (CRC) is the third most common cancer worldwide and has a high mortality rate [1]. About 608,000 deaths from CRC are estimated worldwide, accounting for 8% of all cancer deaths [2, 3]. Surgery is the most common treatment for CRC, yet prognosis remains poor [4]. As a result, considerable interest has concentrated on chemotherapy after surgery, such as oxaliplatin or 5-fluorouracil (5-FU)-based adjuvant chemotherapy [5, 6]. The 5-FU is an analogue of uracil with a fluorine atom at the C-5 position in place of hydrogen, which disrupts RNA synthesis and the action of thymidylate synthase by converting to several active metabolites: fluorodeoxyuridine monophosphate (FdUMP), fluorodeoxyuridine triphosphate (FdUTP) and fluorouridine triphosphate (FUTP) [7]. It was reported that 5-FU-based chemotherapy was a safe and effective treatment for elderly patients with advanced CRC [8]. Oxaliplatin is a platinum-based drug that has demonstrated antitumor activities in CRC both in vitro and in vivo [9]. It was reported that oxaliplatin based chemotherapy could significantly increase the progression-free survival in patients with metastatic CRC [10]. Moreover, the better efficacy and safety of oxaliplatin combined with 5-FU as first-line chemotherapy for elderly patients with metastatic CRC has been proved [11]. However, there is no predictive factor for efficacy of these treatments. The ERCC1 (encodes excision cross-complementing 1) gene codes for a nucleotide excision repair protein involved in the repair of radiation and chemotherapy-induced DNA damage [12]. It has been reported that the gene polymorphism of ERCC1 at codon 118 was a predictive factor for the tumor response to oxaliplatin/5-FU combination chemotherapy in patients with advanced CRC [13]. Furthermore, thymidylate synthase (TS), as a target enzyme of 5-FU, is associated with response to 5-FU in human colorectal and gastric tumors [14, 15]. It was reported that TS genotyping could be of help in predicting toxicity to 5-FU-based chemotherapy in CRC patients [16]. However, little is known about the association between mRNA expression levels of ERCC1 and TS and clinical outcomes of oxaliplatin and 5-FU based adjuvant chemotherapy in Chinese people with CRC. In this study, we investigated the association of ERCC1 and TS mRNA levels with the disease free survival (DFS) in Chinese CRC patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy. Methods: Patients This study was carried out with the Institutional Ethics Committees approval and following the Chinese Medical Research Council guidelines. All participants gave their written informed consent prior to entering the study. This is a prospective study. A total of 112 Chinese CRC patients who were treated at Jiangsu Tumor Hospital, China, from May 2005 to January 2010 were investigated in this study. Eligibility criterion was histological confirmation of stage II-III CRC after surgery according to the AJCC TNM classification [17]. This study was carried out with the Institutional Ethics Committees approval and following the Chinese Medical Research Council guidelines. All participants gave their written informed consent prior to entering the study. This is a prospective study. A total of 112 Chinese CRC patients who were treated at Jiangsu Tumor Hospital, China, from May 2005 to January 2010 were investigated in this study. Eligibility criterion was histological confirmation of stage II-III CRC after surgery according to the AJCC TNM classification [17]. 
Chemotherapy treatment: All the patients were treated with chemotherapy after curative operation. Four types of chemotherapy regimens were used for the treatment of CRC patients: i) the first was the standard FOLFOX-4, consisting of 2-hour intravenous infusion of oxaliplatin (85 mg/m2) on day 1 and 2-hour intravenous drip infusion of calcium folinate (200 mg/m2) on days 1–2, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (600 mg/m2) lasting 22 h on days 1–2, every 2 weeks; ii) the second was the modified FOLFOX, consisting of intravenous infusion of oxaliplatin (130 mg/m2) and 2-hour intravenous drip infusion of folinate calcium (200 mg/m2) on day 1, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (1000 mg/m2) over 24 h on days 1 to 3, every 3 weeks; iii) the third was oral XELOX, consisting of 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 plus oral capecitabine 850 mg/m2 twice daily for 2 weeks in a 3-week cycle; iv) the fourth was a conventional intravenous drip infusion including 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 and continuous infusion of 5-FU (750 mg/m2) lasting 4 h on days 1 to 5, every 3 weeks. All the chemotherapy regimens were administered by a trained nurse. The chemotherapy regimen for each patient was selected according to the recommendation of an experienced expert. After treatment, the clinical outcomes were obtained by telephone follow-up or a return visit, with a deadline of January 2014. DFS, which was defined as the time from the end of chemotherapy to the first event of either recurrent disease or death, was calculated according to the follow-up data. RNA extraction and real-time RT-PCR: RNA was extracted and purified from formalin-fixed paraffin-embedded (FFPE) tissue samples of surgically resected primary CRC using an RNeasy mini kit (Qiagen, Inc.) according to the manufacturer’s instructions [18]. The cDNA of ERCC1 and TS was prepared by reverse transcription from RNA [19]. The ABI PRISM 7700 Sequence Detection System (Perkin-Elmer Applied Biosystems, Foster City, CA) was used to perform TaqMan probe-based real-time PCR reactions as previously described [20–22]. Relative levels of mRNA transcripts were calculated according to the comparative Ct method using β-actin as an endogenous control [23]. Statistical analysis: The Cox proportional hazards model was used for univariate and multivariate analysis of prognostic factors. The variables included six continuous variables (age, duration of chemotherapy courses, interval between surgery and chemotherapy, cumulated dosage of oxaliplatin and mRNA expression levels of ERCC1 and TS) and eight categorical variables (sex, primary tumor location, tumor stage, tumor differentiation, lymph node staging, nerve invasion, vascular invasion and chemotherapy regimens). The logarithms of the TS and ERCC1 mRNA levels (logTS, logERCC1) were calculated so that the values fitted a normal distribution, as required by the analysis. Dummy variables were considered for all the categorical variables. The chemotherapy regimens were used as a stratification variable in all the analyses. The backward stepwise method, based on the likelihood ratio statistic, was used in the multivariate analysis. Kaplan–Meier curves and log-rank tests were used for DFS analysis. Hazard ratios were used to calculate the relative risks of recurrence or death. All tests were two-sided, and p < 0.05 was considered statistically significant. Analyses were performed using SAS version 9.1 (SAS Institute, Cary, NC) and SPSS 19.0 (IBM, Armonk, New York). Power was calculated using the PS Power and Sample Size Calculation program, version 3.0.43 (Vanderbilt University, Nashville, TN, USA). Patients: This study was carried out with the Institutional Ethics Committees approval and following the Chinese Medical Research Council guidelines. All participants gave their written informed consent prior to entering the study. This is a prospective study. A total of 112 Chinese CRC patients who were treated at Jiangsu Tumor Hospital, China, from May 2005 to January 2010 were investigated in this study. Eligibility criterion was histological confirmation of stage II-III CRC after surgery according to the AJCC TNM classification [17]. Chemotherapy treatment: All the patients were treated with chemotherapy after curative operation. Four types of chemotherapy regimens were used for the treatment of CRC patients: i) the first was the standard FOLFOX-4, consisting of 2-hour intravenous infusion of oxaliplatin (85 mg/m2) on day 1 and 2-hour intravenous drip infusion of calcium folinate (200 mg/m2) on days 1–2, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (600 mg/m2) lasting 22 h on days 1–2, every 2 weeks; ii) the second was the modified FOLFOX, consisting of intravenous infusion of oxaliplatin (130 mg/m2) and 2-hour intravenous drip infusion of folinate calcium (200 mg/m2) on day 1, followed by intravenous injection of 5-FU (400 mg/m2) and continuous infusion of 5-FU (1000 mg/m2) over 24 h on days 1 to 3, every 3 weeks; iii) the third was oral XELOX, consisting of 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 plus oral capecitabine 850 mg/m2 twice daily for 2 weeks in a 3-week cycle; iv) the fourth was a conventional intravenous drip infusion including 2-hour intravenous infusion of oxaliplatin 130 mg/m2 on day 1 and continuous infusion of 5-FU (750 mg/m2) lasting 4 h on days 1 to 5, every 3 weeks. All the chemotherapy regimens were administered by a trained nurse. The chemotherapy regimen for each patient was selected according to the recommendation of an experienced expert. After treatment, the clinical outcomes were obtained by telephone follow-up or a return visit, with a deadline of January 2014. DFS, which was defined as the time from the end of chemotherapy to the first event of either recurrent disease or death, was calculated according to the follow-up data. RNA extraction and real-time RT-PCR: RNA was extracted and purified from formalin-fixed paraffin-embedded (FFPE) tissue samples of surgically resected primary CRC using an RNeasy mini kit (Qiagen, Inc.) according to the manufacturer’s instructions [18]. The cDNA of ERCC1 and TS was prepared by reverse transcription from RNA [19]. The ABI PRISM 7700 Sequence Detection System (Perkin-Elmer Applied Biosystems, Foster City, CA) was used to perform TaqMan probe-based real-time PCR reactions as previously described [20–22]. Relative levels of mRNA transcripts were calculated according to the comparative Ct method using β-actin as an endogenous control [23].
Statistical analysis: The Cox proportional hazards model was used for univariate and multivariate analysis of prognostic factors. The variables included six continuous variables (age, duration of chemotherapy courses, interval between surgery and chemotherapy, cumulated dosage of oxaliplatin and mRNA expression levels of ERCC1 and TS) and eight categorical variables (sex, primary tumor location, tumor stage, tumor differentiation, lymph node staging, nerve invasion, vascular invasion and chemotherapy regimens). The logarithms of the TS and ERCC1 mRNA levels (logTS, logERCC1) were calculated so that the values fitted a normal distribution, as required by the analysis. Dummy variables were considered for all the categorical variables. The chemotherapy regimens were used as a stratification variable in all the analyses. The backward stepwise method, based on the likelihood ratio statistic, was used in the multivariate analysis. Kaplan–Meier curves and log-rank tests were used for DFS analysis. Hazard ratios were used to calculate the relative risks of recurrence or death. All tests were two-sided, and p < 0.05 was considered statistically significant. Analyses were performed using SAS version 9.1 (SAS Institute, Cary, NC) and SPSS 19.0 (IBM, Armonk, New York). Power was calculated using the PS Power and Sample Size Calculation program, version 3.0.43 (Vanderbilt University, Nashville, TN, USA).
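The preprocessing steps listed in this statistical analysis (log transformation of TS and ERCC1, dummy coding of the categorical covariates, and the median cut-offs used later for the Kaplan–Meier comparison) can be expressed compactly in pandas. A hedged sketch with hypothetical column names follows; it is not the authors' code.

```python
# Minimal sketch of the data-preparation steps described above (assumed schema).
import numpy as np
import pandas as pd

df = pd.read_csv("crc_cohort.csv")                    # hypothetical per-patient table
df["logTS"] = np.log10(df["ts_mrna"])                 # log base assumed; not stated in the paper
df["logERCC1"] = np.log10(df["ercc1_mrna"])

# Dummy variables for the categorical covariates. drop_first drops the first
# level alphabetically; the paper's own reference categories differ.
categorical = ["sex", "tumor_stage", "differentiation", "location",
               "lymph_nodes", "vascular_invasion", "nerve_invasion"]
design = pd.get_dummies(df, columns=categorical, drop_first=True)

# Median cut-offs (2.86 x 10^-1 for TS and 1.7 x 10^-3 for ERCC1 in this cohort).
design["TS_high"] = (df["ts_mrna"] > df["ts_mrna"].median()).astype(int)
design["ERCC1_high"] = (df["ercc1_mrna"] > df["ercc1_mrna"].median()).astype(int)
print(design.head())
```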
A total of 112 Chinese patients (40 females and 72 males) aged from 32 to 75 years old (average, 52.75) were analyzed in this study. There were 61 rectum cancer patients and 38 colon cancer patients. All patients (stage IIa, 24; stage IIb, 1; stage IIIa, 3; stage IIIb, 53; stage IIIc, 31) underwent curative operation and then received 4 different chemotherapy regimens, respectively, including the standard FOLFOX-4 (20 cases), modified FOLFOX (15 cases), oral XELOX (19 cases) and conventional intravenous drip infusion (58 cases). The median follow-up duration was 36 months (ranged from 1.2 to 78 months). Relapse occurred in forty-four patients (39.3%) and eleven patients (9.8%) died of disease. The median DFS was 36 months (minimum: 3 months; maximum: more than 77 months) (Table  1).Table 1 Demographic and clinical parameters of patients (n = 112)CharacteristicsPatientsNo.% Age (years) Mean52.72 Range32-75 Sex  Female4035.70% Male7264.30% Lymph node staging  N02522.32% N14943.75% N23833.93% Tumor stage  Stage IIa2421.40% Stage IIb10.90% Stage IIIa32.70% Stage IIIb5347% Stage IIIc3127.60% Primary tumor location  Rectum6154.46% Colon3833.93% Vascular invasion  Positive1513.39% Negative9786.61% Nerve invasion  Positive2421.43% Negative8878.57% Chemotherapy regimen  Standard FOLFOX-42017.80% Modified FOLFOX1513.40% Oral XELOX1917% Conventional intravenous drip infusion5851.70% Interval of chemotherapy and surgery  Within 28 days3632.14% More than 28 days7667.86% Duration of chemotherapy course  1-6 weeks2118.75% 7-12 weeks2522.32% 13-18 weeks5851.80% 19-24 weeks87.14% Tumor differentiation  High10.8% High or medium21.7% Medium6457.1% Medium or low2925.9% Low1614.3% Follow-up  Median36 months Range1.2-78 months Relapse4439.3% Death119.8% Disease free survival  Median36 months Range3-77 months Demographic and clinical parameters of patients (n = 112) The mRNA expression levels of TS and ERCC1 The median mRNA expression level of TS, relative to the housekeeping gene β-actin, was 2.86 × 10-1 (minimum expression, 2.7 × 10-2; maximum expression, 6.31). The median mRNA expression level of ERCC1, relative to the housekeeping gene β-actin, was 1.7 × 10-3 (minimum, 8.57 × 10-5; maximum, 6.7 × 10-2). In addition, when analyzed by sex and age, no significant association between the TS or ERCC1 mRNA levels and these parameters was found (P > 0.05). The median mRNA expression level of TS, relative to the housekeeping gene β-actin, was 2.86 × 10-1 (minimum expression, 2.7 × 10-2; maximum expression, 6.31). The median mRNA expression level of ERCC1, relative to the housekeeping gene β-actin, was 1.7 × 10-3 (minimum, 8.57 × 10-5; maximum, 6.7 × 10-2). In addition, when analyzed by sex and age, no significant association between the TS or ERCC1 mRNA levels and these parameters was found (P > 0.05). 
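The survival analyses reported in the following subsections fit a Cox proportional hazards model for the log-transformed expression levels (stratified by chemotherapy regimen) and compare Kaplan–Meier DFS curves between median-split expression groups with a log-rank test. A minimal sketch of that workflow, assuming the Python lifelines package and using synthetic data with hypothetical column names (not the study data), is:

```python
# A sketch only: synthetic data stand in for the unavailable patient-level
# dataset, and all column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 112
df = pd.DataFrame({
    "dfs_months": rng.exponential(36.0, n),   # time to relapse/death or censoring
    "event": rng.integers(0, 2, n),           # 1 = relapse or death observed
    "ts_mrna": rng.lognormal(-1.25, 1.0, n),  # TS level relative to beta-actin
    "regimen": rng.integers(0, 4, n),         # four chemotherapy regimens
})
df["log_ts"] = np.log(df["ts_mrna"])

# Univariate Cox model for logTS, stratified by chemotherapy regimen.
cph = CoxPHFitter()
cph.fit(df[["dfs_months", "event", "log_ts", "regimen"]],
        duration_col="dfs_months", event_col="event", strata=["regimen"])
cph.print_summary()  # hazard ratio and 95% CI for log_ts

# Median split of TS expression and a log-rank comparison of DFS.
high = df["ts_mrna"] > df["ts_mrna"].median()
result = logrank_test(df.loc[high, "dfs_months"], df.loc[~high, "dfs_months"],
                      event_observed_A=df.loc[high, "event"],
                      event_observed_B=df.loc[~high, "event"])
print(result.p_value)
```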
Association of DFS with TS and ERCC1 mRNA levels For the univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated that there was no significant association of DFS with the mRNA expression levels of TS and ERCC1 in Chinese CRC patients treated with oxaliplatin and 5-FU based adjuvant chemotherapy (Table  2).Table 2 Univariate analyses of disease free survival according to the cox regression model VariableHazard ratio95% confidence interval P logTS0.8200.602-1.1180.210logERCC11.0520.851-1.3020.638logTS: the logarithms of the expression level of TS; logERCC1: the logarithms of the expression level of ERCC1. Univariate analyses of disease free survival according to the cox regression model logTS: the logarithms of the expression level of TS; logERCC1: the logarithms of the expression level of ERCC1. Factors considered in the multivariate analyses included age, sex (male, female; reference category: female), tumor stage (stage IIa, stage IIb, stage IIIa, stage IIIb, stage IIIc; reference category: stage IIIc), tumor differentiation (high, high or medium, medium, medium or low, low; reference category: low), primary tumor location (rectum, colon; reference category: rectum), lymph node staging (N0, N1, N2; reference category: N2), vascular invasion (positive, negative; reference category: positive), nerve invasion (positive, negative; reference category: positive), interval between surgery and chemotherapy, duration of chemotherapy course, cumulated dosage of oxaliplatin as well as the mRNA expression levels of TS and ERCC1. It is clearly showed that only the tumor stage (tumor stage IIIc, reference, P = 0.083; tumor stage IIb, HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; tumor stage IIc, HR < 0.0001, P = 0.977; Tumor stage IIIa, HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) entered the model in the final step and was confirmed to be the independent prognostic factor for DFS. The results indicated that the DFS in the patients with tumor stage IIb was significantly longer than that in the patients with tumor stage IIIc. In addition, there was no evidence to prove the association between the DFS and the TS and ERCC1 mRNA levels (Table  3).Table 3 Cox regression analysis for multivariate analysis VariableDisease free survivalHazard ratio95% confidence interval P Tumor stage (reference category: tumor stage IIIc)0.083Tumor stage IIb0.2400.080 - 0.7240.011Tumor stage IIc< 0.0001--0.977Tumor stage IIIa0.1790.012 - 2.5930.207For the tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software. Cox regression analysis for multivariate analysis For the tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software. The patients were divided into two groups based on the median mRNA expression levels of TS (high expression group: > 2.86 × 10-1; low expression group: ≤ 2.86 × 10-1) and ERCC1 (high expression group: > 1.7 × 10-3; low expression group: ≤ 1.7 × 10-3). 
The Kaplan–Meier DFS curves according to the mRNA expression levels of TS and ERCC1 all showed no significant difference between high and low expression group (TS: P = 0.264; ERCC1: P = 0.484), suggesting that the mRNA expression levels of TS and ERCC1 was not significantly associated with the DFS (Figure  1).Figure 1 Disease free survival curves according to the expression level of TS and ERCC1 . A: disease free survival curves according to the expression level of TS. B: Disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group. Disease free survival curves according to the expression level of TS and ERCC1 . A: disease free survival curves according to the expression level of TS. B: Disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group. For the univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated that there was no significant association of DFS with the mRNA expression levels of TS and ERCC1 in Chinese CRC patients treated with oxaliplatin and 5-FU based adjuvant chemotherapy (Table  2).Table 2 Univariate analyses of disease free survival according to the cox regression model VariableHazard ratio95% confidence interval P logTS0.8200.602-1.1180.210logERCC11.0520.851-1.3020.638logTS: the logarithms of the expression level of TS; logERCC1: the logarithms of the expression level of ERCC1. Univariate analyses of disease free survival according to the cox regression model logTS: the logarithms of the expression level of TS; logERCC1: the logarithms of the expression level of ERCC1. Factors considered in the multivariate analyses included age, sex (male, female; reference category: female), tumor stage (stage IIa, stage IIb, stage IIIa, stage IIIb, stage IIIc; reference category: stage IIIc), tumor differentiation (high, high or medium, medium, medium or low, low; reference category: low), primary tumor location (rectum, colon; reference category: rectum), lymph node staging (N0, N1, N2; reference category: N2), vascular invasion (positive, negative; reference category: positive), nerve invasion (positive, negative; reference category: positive), interval between surgery and chemotherapy, duration of chemotherapy course, cumulated dosage of oxaliplatin as well as the mRNA expression levels of TS and ERCC1. It is clearly showed that only the tumor stage (tumor stage IIIc, reference, P = 0.083; tumor stage IIb, HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; tumor stage IIc, HR < 0.0001, P = 0.977; Tumor stage IIIa, HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) entered the model in the final step and was confirmed to be the independent prognostic factor for DFS. The results indicated that the DFS in the patients with tumor stage IIb was significantly longer than that in the patients with tumor stage IIIc. 
In addition, there was no evidence to prove the association between the DFS and the TS and ERCC1 mRNA levels (Table  3).Table 3 Cox regression analysis for multivariate analysis VariableDisease free survivalHazard ratio95% confidence interval P Tumor stage (reference category: tumor stage IIIc)0.083Tumor stage IIb0.2400.080 - 0.7240.011Tumor stage IIc< 0.0001--0.977Tumor stage IIIa0.1790.012 - 2.5930.207For the tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software. Cox regression analysis for multivariate analysis For the tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software. The patients were divided into two groups based on the median mRNA expression levels of TS (high expression group: > 2.86 × 10-1; low expression group: ≤ 2.86 × 10-1) and ERCC1 (high expression group: > 1.7 × 10-3; low expression group: ≤ 1.7 × 10-3). The Kaplan–Meier DFS curves according to the mRNA expression levels of TS and ERCC1 all showed no significant difference between high and low expression group (TS: P = 0.264; ERCC1: P = 0.484), suggesting that the mRNA expression levels of TS and ERCC1 was not significantly associated with the DFS (Figure  1).Figure 1 Disease free survival curves according to the expression level of TS and ERCC1 . A: disease free survival curves according to the expression level of TS. B: Disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group. Disease free survival curves according to the expression level of TS and ERCC1 . A: disease free survival curves according to the expression level of TS. B: Disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group. Characteristics of patients and follow-up results: Demographic details on the patients investigated in this study are shown in Table  1. A total of 112 Chinese patients (40 females and 72 males) aged from 32 to 75 years old (average, 52.75) were analyzed in this study. There were 61 rectum cancer patients and 38 colon cancer patients. All patients (stage IIa, 24; stage IIb, 1; stage IIIa, 3; stage IIIb, 53; stage IIIc, 31) underwent curative operation and then received 4 different chemotherapy regimens, respectively, including the standard FOLFOX-4 (20 cases), modified FOLFOX (15 cases), oral XELOX (19 cases) and conventional intravenous drip infusion (58 cases). The median follow-up duration was 36 months (ranged from 1.2 to 78 months). Relapse occurred in forty-four patients (39.3%) and eleven patients (9.8%) died of disease. 
The median DFS was 36 months (minimum: 3 months; maximum: more than 77 months) (Table 1).

Table 1. Demographic and clinical parameters of patients (n = 112)
Age (years): mean 52.72; range 32-75
Sex: female 40 (35.70%); male 72 (64.30%)
Lymph node staging: N0 25 (22.32%); N1 49 (43.75%); N2 38 (33.93%)
Tumor stage: stage IIa 24 (21.40%); stage IIb 1 (0.90%); stage IIIa 3 (2.70%); stage IIIb 53 (47%); stage IIIc 31 (27.60%)
Primary tumor location: rectum 61 (54.46%); colon 38 (33.93%)
Vascular invasion: positive 15 (13.39%); negative 97 (86.61%)
Nerve invasion: positive 24 (21.43%); negative 88 (78.57%)
Chemotherapy regimen: standard FOLFOX-4 20 (17.80%); modified FOLFOX 15 (13.40%); oral XELOX 19 (17%); conventional intravenous drip infusion 58 (51.70%)
Interval of chemotherapy and surgery: within 28 days 36 (32.14%); more than 28 days 76 (67.86%)
Duration of chemotherapy course: 1-6 weeks 21 (18.75%); 7-12 weeks 25 (22.32%); 13-18 weeks 58 (51.80%); 19-24 weeks 8 (7.14%)
Tumor differentiation: high 1 (0.8%); high or medium 2 (1.7%); medium 64 (57.1%); medium or low 29 (25.9%); low 16 (14.3%)
Follow-up: median 36 months; range 1.2-78 months; relapse 44 (39.3%); death 11 (9.8%)
Disease free survival: median 36 months; range 3-77 months

The mRNA expression levels of TS and ERCC1: The median mRNA expression level of TS, relative to the housekeeping gene β-actin, was 2.86 × 10^-1 (minimum expression, 2.7 × 10^-2; maximum expression, 6.31). The median mRNA expression level of ERCC1, relative to the housekeeping gene β-actin, was 1.7 × 10^-3 (minimum, 8.57 × 10^-5; maximum, 6.7 × 10^-2). In addition, when analyzed by sex and age, no significant association between the TS or ERCC1 mRNA levels and these parameters was found (P > 0.05). Association of DFS with TS and ERCC1 mRNA levels: For the univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated that there was no significant association of DFS with the mRNA expression levels of TS and ERCC1 in Chinese CRC patients treated with oxaliplatin and 5-FU based adjuvant chemotherapy (Table 2).

Table 2. Univariate analyses of disease free survival according to the Cox regression model
logTS: hazard ratio 0.820; 95% confidence interval 0.602-1.118; P = 0.210
logERCC1: hazard ratio 1.052; 95% confidence interval 0.851-1.302; P = 0.638
logTS: the logarithms of the expression level of TS; logERCC1: the logarithms of the expression level of ERCC1.

Factors considered in the multivariate analyses included age, sex (male, female; reference category: female), tumor stage (stage IIa, stage IIb, stage IIIa, stage IIIb, stage IIIc; reference category: stage IIIc), tumor differentiation (high, high or medium, medium, medium or low, low; reference category: low), primary tumor location (rectum, colon; reference category: rectum), lymph node staging (N0, N1, N2; reference category: N2), vascular invasion (positive, negative; reference category: positive), nerve invasion (positive, negative; reference category: positive), interval between surgery and chemotherapy, duration of chemotherapy course, cumulated dosage of oxaliplatin as well as the mRNA expression levels of TS and ERCC1.
Only tumor stage (tumor stage IIIc, reference, P = 0.083; tumor stage IIb, HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; tumor stage IIc, HR < 0.0001, P = 0.977; tumor stage IIIa, HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) entered the model in the final step and was confirmed to be an independent prognostic factor for DFS. The results indicated that DFS in patients with tumor stage IIb was significantly longer than in patients with tumor stage IIIc. In addition, there was no evidence of an association between DFS and the TS and ERCC1 mRNA levels (Table 3).

Table 3. Cox regression analysis for multivariate analysis of disease free survival
Tumor stage (reference category: tumor stage IIIc): P = 0.083
Tumor stage IIb: HR = 0.240; 95% CI = 0.080 - 0.724; P = 0.011
Tumor stage IIc: HR < 0.0001; 95% CI --; P = 0.977
Tumor stage IIIa: HR = 0.179; 95% CI = 0.012 - 2.593; P = 0.207
For tumor stage IIc, the hazard ratio was so small that the 95% confidence interval could not be displayed by the software.

The patients were divided into two groups based on the median mRNA expression levels of TS (high expression group: > 2.86 × 10^-1; low expression group: ≤ 2.86 × 10^-1) and ERCC1 (high expression group: > 1.7 × 10^-3; low expression group: ≤ 1.7 × 10^-3). The Kaplan–Meier DFS curves according to the mRNA expression levels of TS and ERCC1 showed no significant difference between the high and low expression groups (TS: P = 0.264; ERCC1: P = 0.484), suggesting that the mRNA expression levels of TS and ERCC1 were not significantly associated with DFS (Figure 1).

Figure 1. Disease free survival curves according to the expression level of TS and ERCC1. A: disease free survival curves according to the expression level of TS. B: disease free survival curves according to the expression level of ERCC1. 0: low expression group; 1: high expression group.

Discussion: The expression of ERCC1 and TS has been reported to be related to the clinical outcomes of patients treated with oxaliplatin- or 5-FU-based adjuvant chemotherapy [24, 25]. Nevertheless, there was not enough evidence to establish a prognostic role of ERCC1 and TS expression in CRC patients treated with oxaliplatin and 5-FU-based adjuvant chemotherapy. Therefore, we analyzed the association of the mRNA expression levels of ERCC1 and TS with DFS in Chinese patients with stage II-III CRC in this study. The results indicated no significant association between DFS and the mRNA expression levels of ERCC1 and TS, suggesting that the expression of ERCC1 and TS is not applicable as a predictive factor for DFS in Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy. However, the mRNA levels of ERCC1 and TS have been reported by Shirota Y et al. to be associated with survival after 5-FU and oxaliplatin adjuvant chemotherapy in CRC patients [20]. This may be due to differences in cancer stage and patient ethnicity. In this study, the patients had stage II-III CRC. However, the patients in the study of Shirota Y et al.
[20] were at stage IV. The mRNA expression levels of ERCC1 and TS may vary with cancer stage. In addition, the patients were all Chinese in this study but American in the study of Shirota Y et al. Gene expression profiles differ among ethnic groups [26]. Therefore, we inferred that the response of the ERCC1 and TS genes to oxaliplatin and 5-FU based adjuvant chemotherapy might differ between Chinese and American patients. Some previous studies have reported that ERCC1 expression is a predictive factor for survival after chemotherapy in advanced non-small cell lung cancer [27], bladder cancer [28], and gastric cancer [24]. However, in this study there was no evidence of an association between ERCC1 mRNA expression and the DFS of stage II-III CRC patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy. This suggests that ERCC1 expression may predict clinical outcomes of chemotherapy in cancers such as non-small cell lung cancer, bladder cancer, and gastric cancer but not in stage II-III CRC. Moreover, in this study, we found that tumor stage was a significant prognostic factor for DFS in CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy. It has been reported that the survival of advanced/recurrent rectal cancers treated with 5-FU based chemotherapy was significantly associated with tumor stage [29]. Meanwhile, another study reported that pathologic stage significantly influenced the DFS of locally advanced rectal cancer patients after preoperative chemoradiation (5-FU or oxaliplatin) [30]. Therefore, tumor stage must be considered in further prognostic studies of 5-FU and oxaliplatin based adjuvant chemotherapy for CRC patients. There were some notable limitations to this study. First, power calculations (α = 0.05; TS: power 1 - β = 0.511; ERCC1: power 1 - β = 0.656) showed that the sample size was too small to reliably assess the association between TS or ERCC1 expression and DFS; studies with larger sample sizes are therefore required. Second, the follow-up duration was short, so further studies are needed to verify these results. Third, four chemotherapy regimens were used, which may limit the ability to identify an association between TS or ERCC1 expression and DFS. In addition, the dose of 5-FU may also affect the clinical outcomes of chemotherapy, and this potential prognostic factor should be investigated in further studies. Conclusions: In conclusion, our data demonstrated that the mRNA expression levels of ERCC1 and TS were not significantly correlated with DFS in Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy. This suggests that the mRNA expression levels of ERCC1 and TS are not applicable as predictive factors for DFS in Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy. Further investigations to clearly define the role of ERCC1 and TS gene expression in this setting are needed.
Background: Aim was to explore the association of ERCC1 and TS mRNA levels with the disease free survival (DFS) in Chinese colorectal cancer (CRC) patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy. Methods: Total 112 Chinese stage II-III CRC patients were respectively treated by four different chemotherapy regimens after curative operation. The TS and ERCC1 mRNA levels in primary tumor were measured by real-time RT-PCR. Kaplan-Meier curves and log-rank tests were used for DFS analysis. The Cox proportional hazards model was used for prognostic analysis. Results: In univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated no significant association of DFS with the TS and ERCC1 mRNA levels. In multivariate analyses, tumor stage (IIIc: reference, P = 0.083; IIb: HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; IIc: HR < 0.0001, P = 0.977; IIIa: HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) was confirmed to be the independent prognostic factor for DFS. Moreover, the Kaplan-Meier DFS curves showed that TS and ERCC1 mRNA levels were not significantly associated with the DFS (TS: P = 0.264; ERCC1: P = 0.484). Conclusions: The mRNA expression of ERCC1 and TS were not applicable to predict the DFS of Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy.
Background: Colorectal cancer (CRC) is the third most common cancer worldwide and has a high mortality rate [1]. About 608,000 deaths from CRC are estimated worldwide, accounting for 8% of all cancer deaths [2, 3]. Surgery is the most common treatment for CRC, yet prognosis remains poor [4]. As a result, considerable interest has concentrated on chemotherapy after surgery, such as oxaliplatin or 5-fluorouracil (5-FU)-based adjuvant chemotherapy [5, 6]. The 5-FU is an analogue of uracil with a fluorine atom at the C-5 position in place of hydrogen, which disrupts RNA synthesis and the action of thymidylate synthase by converting to several active metabolites: fluorodeoxyuridine monophosphate (FdUMP), fluorodeoxyuridine triphosphate (FdUTP) and fluorouridine triphosphate (FUTP) [7]. It was reported that 5-FU-based chemotherapy was a safe and effective treatment for elderly patients with advanced CRC [8]. Oxaliplatin is a platinum-based drug that has demonstrated antitumor activities in CRC both in vitro and in vivo [9]. It was reported that oxaliplatin based chemotherapy could significantly increase the progression-free survival in patients with metastatic CRC [10]. Moreover, the better efficacy and safety of oxaliplatin combined with 5-FU as first-line chemotherapy for elderly patients with metastatic CRC has been proved [11]. However, there is no predictive factor for efficacy of these treatments. The ERCC1 (encodes excision cross-complementing 1) gene codes for a nucleotide excision repair protein involved in the repair of radiation and chemotherapy-induced DNA damage [12]. It has been reported that the gene polymorphism of ERCC1 at codon 118 was a predictive factor for the tumor response to oxaliplatin/5-FU combination chemotherapy in patients with advanced CRC [13]. Furthermore, thymidylate synthase (TS), as a target enzyme of 5-FU, is associated with response to 5-FU in human colorectal and gastric tumors [14, 15]. It was reported that TS genotyping could be of help in predicting toxicity to 5-FU-based chemotherapy in CRC patients [16]. However, little is known about the association between mRNA expression levels of ERCC1 and TS and clinical outcomes of oxaliplatin and 5-FU based adjuvant chemotherapy in Chinese people with CRC. In this study, we investigated the association of ERCC1 and TS mRNA levels with the disease free survival (DFS) in Chinese CRC patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy. Conclusions: In conclusion, our data demonstrated that mRNA expression levels of ERCC1 and TS were not significantly correlated with the DFS of Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy. It suggested that the mRNA expression levels of ERCC1 and TS were not applicable as the predictive factors for DFS in Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy. Further investigations to clearly define the role of ERCC1 and TS gene expression in this setting are needed.
Background: Aim was to explore the association of ERCC1 and TS mRNA levels with the disease free survival (DFS) in Chinese colorectal cancer (CRC) patients receiving oxaliplatin and 5-FU based adjuvant chemotherapy. Methods: Total 112 Chinese stage II-III CRC patients were respectively treated by four different chemotherapy regimens after curative operation. The TS and ERCC1 mRNA levels in primary tumor were measured by real-time RT-PCR. Kaplan-Meier curves and log-rank tests were used for DFS analysis. The Cox proportional hazards model was used for prognostic analysis. Results: In univariate analysis, the hazard ratio (HR) for the mRNA expression levels of TS and ERCC1 (logTS: HR = 0.820, 95% CI = 0.600 - 1.117, P = 0.210; logERCC1: HR = 1.054, 95% CI = 0.852 - 1.304, P = 0.638) indicated no significant association of DFS with the TS and ERCC1 mRNA levels. In multivariate analyses, tumor stage (IIIc: reference, P = 0.083; IIb: HR = 0.240, 95% CI = 0.080 - 0.724, P = 0.011; IIc: HR < 0.0001, P = 0.977; IIIa: HR = 0.179, 95% CI = 0.012 - 2.593, P = 0.207) was confirmed to be the independent prognostic factor for DFS. Moreover, the Kaplan-Meier DFS curves showed that TS and ERCC1 mRNA levels were not significantly associated with the DFS (TS: P = 0.264; ERCC1: P = 0.484). Conclusions: The mRNA expression of ERCC1 and TS were not applicable to predict the DFS of Chinese stage II-III CRC patients receiving 5-FU and oxaliplatin based adjuvant chemotherapy.
8,284
329
[ 492, 93, 392, 127, 246, 435, 123, 860 ]
12
[ "stage", "expression", "ercc1", "ts", "chemotherapy", "patients", "tumor", "mrna", "tumor stage", "levels" ]
[ "fu oxaliplatin adjuvant", "oxaliplatin adjuvant chemotherapy", "fu combination chemotherapy", "chemotherapy fu analogue", "surgery oxaliplatin fluorouracil" ]
[CONTENT] Colorectal cancer | ERCC1 | TS | Real-time PCR | Adjuvant chemotherapy [SUMMARY]
[CONTENT] Colorectal cancer | ERCC1 | TS | Real-time PCR | Adjuvant chemotherapy [SUMMARY]
[CONTENT] Colorectal cancer | ERCC1 | TS | Real-time PCR | Adjuvant chemotherapy [SUMMARY]
[CONTENT] Colorectal cancer | ERCC1 | TS | Real-time PCR | Adjuvant chemotherapy [SUMMARY]
[CONTENT] Colorectal cancer | ERCC1 | TS | Real-time PCR | Adjuvant chemotherapy [SUMMARY]
[CONTENT] Colorectal cancer | ERCC1 | TS | Real-time PCR | Adjuvant chemotherapy [SUMMARY]
[CONTENT] Adult | Aged | Antineoplastic Combined Chemotherapy Protocols | Chemotherapy, Adjuvant | Colectomy | Colorectal Neoplasms | DNA-Binding Proteins | Disease-Free Survival | Endonucleases | Female | Fluorouracil | Gene Expression Regulation, Neoplastic | Humans | Male | Middle Aged | Neoplasm Staging | Organoplatinum Compounds | Oxaliplatin | Prognosis | Proportional Hazards Models | RNA, Messenger | Real-Time Polymerase Chain Reaction | Thymidylate Synthase | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Antineoplastic Combined Chemotherapy Protocols | Chemotherapy, Adjuvant | Colectomy | Colorectal Neoplasms | DNA-Binding Proteins | Disease-Free Survival | Endonucleases | Female | Fluorouracil | Gene Expression Regulation, Neoplastic | Humans | Male | Middle Aged | Neoplasm Staging | Organoplatinum Compounds | Oxaliplatin | Prognosis | Proportional Hazards Models | RNA, Messenger | Real-Time Polymerase Chain Reaction | Thymidylate Synthase | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Antineoplastic Combined Chemotherapy Protocols | Chemotherapy, Adjuvant | Colectomy | Colorectal Neoplasms | DNA-Binding Proteins | Disease-Free Survival | Endonucleases | Female | Fluorouracil | Gene Expression Regulation, Neoplastic | Humans | Male | Middle Aged | Neoplasm Staging | Organoplatinum Compounds | Oxaliplatin | Prognosis | Proportional Hazards Models | RNA, Messenger | Real-Time Polymerase Chain Reaction | Thymidylate Synthase | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Antineoplastic Combined Chemotherapy Protocols | Chemotherapy, Adjuvant | Colectomy | Colorectal Neoplasms | DNA-Binding Proteins | Disease-Free Survival | Endonucleases | Female | Fluorouracil | Gene Expression Regulation, Neoplastic | Humans | Male | Middle Aged | Neoplasm Staging | Organoplatinum Compounds | Oxaliplatin | Prognosis | Proportional Hazards Models | RNA, Messenger | Real-Time Polymerase Chain Reaction | Thymidylate Synthase | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Antineoplastic Combined Chemotherapy Protocols | Chemotherapy, Adjuvant | Colectomy | Colorectal Neoplasms | DNA-Binding Proteins | Disease-Free Survival | Endonucleases | Female | Fluorouracil | Gene Expression Regulation, Neoplastic | Humans | Male | Middle Aged | Neoplasm Staging | Organoplatinum Compounds | Oxaliplatin | Prognosis | Proportional Hazards Models | RNA, Messenger | Real-Time Polymerase Chain Reaction | Thymidylate Synthase | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Antineoplastic Combined Chemotherapy Protocols | Chemotherapy, Adjuvant | Colectomy | Colorectal Neoplasms | DNA-Binding Proteins | Disease-Free Survival | Endonucleases | Female | Fluorouracil | Gene Expression Regulation, Neoplastic | Humans | Male | Middle Aged | Neoplasm Staging | Organoplatinum Compounds | Oxaliplatin | Prognosis | Proportional Hazards Models | RNA, Messenger | Real-Time Polymerase Chain Reaction | Thymidylate Synthase | Treatment Outcome [SUMMARY]
[CONTENT] fu oxaliplatin adjuvant | oxaliplatin adjuvant chemotherapy | fu combination chemotherapy | chemotherapy fu analogue | surgery oxaliplatin fluorouracil [SUMMARY]
[CONTENT] fu oxaliplatin adjuvant | oxaliplatin adjuvant chemotherapy | fu combination chemotherapy | chemotherapy fu analogue | surgery oxaliplatin fluorouracil [SUMMARY]
[CONTENT] fu oxaliplatin adjuvant | oxaliplatin adjuvant chemotherapy | fu combination chemotherapy | chemotherapy fu analogue | surgery oxaliplatin fluorouracil [SUMMARY]
[CONTENT] fu oxaliplatin adjuvant | oxaliplatin adjuvant chemotherapy | fu combination chemotherapy | chemotherapy fu analogue | surgery oxaliplatin fluorouracil [SUMMARY]
[CONTENT] fu oxaliplatin adjuvant | oxaliplatin adjuvant chemotherapy | fu combination chemotherapy | chemotherapy fu analogue | surgery oxaliplatin fluorouracil [SUMMARY]
[CONTENT] fu oxaliplatin adjuvant | oxaliplatin adjuvant chemotherapy | fu combination chemotherapy | chemotherapy fu analogue | surgery oxaliplatin fluorouracil [SUMMARY]
[CONTENT] stage | expression | ercc1 | ts | chemotherapy | patients | tumor | mrna | tumor stage | levels [SUMMARY]
[CONTENT] stage | expression | ercc1 | ts | chemotherapy | patients | tumor | mrna | tumor stage | levels [SUMMARY]
[CONTENT] stage | expression | ercc1 | ts | chemotherapy | patients | tumor | mrna | tumor stage | levels [SUMMARY]
[CONTENT] stage | expression | ercc1 | ts | chemotherapy | patients | tumor | mrna | tumor stage | levels [SUMMARY]
[CONTENT] stage | expression | ercc1 | ts | chemotherapy | patients | tumor | mrna | tumor stage | levels [SUMMARY]
[CONTENT] stage | expression | ercc1 | ts | chemotherapy | patients | tumor | mrna | tumor stage | levels [SUMMARY]
[CONTENT] fu | crc | chemotherapy | based | reported | fu based | oxaliplatin | patients | based chemotherapy | fu based adjuvant chemotherapy [SUMMARY]
[CONTENT] mg | m2 | mg m2 | infusion | intravenous | hour | variables | hour intravenous | chemotherapy | days [SUMMARY]
[CONTENT] stage | expression | expression level | level | tumor | months | expression group | reference | group | ercc1 [SUMMARY]
[CONTENT] oxaliplatin based adjuvant chemotherapy | iii crc patients | chinese stage | chinese stage ii | chinese stage ii iii | fu oxaliplatin | oxaliplatin based adjuvant | patients receiving fu | fu oxaliplatin based adjuvant | receiving fu oxaliplatin based [SUMMARY]
[CONTENT] stage | expression | chemotherapy | ercc1 | ts | patients | mg | mg m2 | m2 | fu [SUMMARY]
[CONTENT] stage | expression | chemotherapy | ercc1 | ts | patients | mg | mg m2 | m2 | fu [SUMMARY]
[CONTENT] Aim | ERCC1 | Chinese | CRC | 5-FU [SUMMARY]
[CONTENT] 112 | Chinese | II-III | four ||| TS | ERCC1 | RT-PCR ||| Kaplan-Meier ||| Cox [SUMMARY]
[CONTENT] TS and ERCC1 | 0.820 | 95% | CI | 0.600 | 0.210 | 1.054 | 95% | CI | 0.852 | 0.638 | TS | ERCC1 ||| 0.083 | 0.240 | 95% | CI | 0.080 | 0.011 | 0.0001 | 0.977 | 0.179 | 95% | CI | 0.012 | 0.207 ||| the Kaplan-Meier DFS | TS | ERCC1 | 0.264 | ERCC1 | 0.484 [SUMMARY]
[CONTENT] ERCC1 and TS | Chinese | 5 [SUMMARY]
[CONTENT] Aim | ERCC1 | Chinese | CRC | 5-FU ||| 112 | Chinese | II-III | four ||| TS | ERCC1 | RT-PCR ||| Kaplan-Meier ||| Cox ||| TS and ERCC1 | 0.820 | 95% | CI | 0.600 | 0.210 | 1.054 | 95% | CI | 0.852 | 0.638 | TS | ERCC1 ||| 0.083 | 0.240 | 95% | CI | 0.080 | 0.011 | 0.0001 | 0.977 | 0.179 | 95% | CI | 0.012 | 0.207 ||| the Kaplan-Meier DFS | TS | ERCC1 | 0.264 | ERCC1 | 0.484 ||| ERCC1 and TS | Chinese | 5 [SUMMARY]
[CONTENT] Aim | ERCC1 | Chinese | CRC | 5-FU ||| 112 | Chinese | II-III | four ||| TS | ERCC1 | RT-PCR ||| Kaplan-Meier ||| Cox ||| TS and ERCC1 | 0.820 | 95% | CI | 0.600 | 0.210 | 1.054 | 95% | CI | 0.852 | 0.638 | TS | ERCC1 ||| 0.083 | 0.240 | 95% | CI | 0.080 | 0.011 | 0.0001 | 0.977 | 0.179 | 95% | CI | 0.012 | 0.207 ||| the Kaplan-Meier DFS | TS | ERCC1 | 0.264 | ERCC1 | 0.484 ||| ERCC1 and TS | Chinese | 5 [SUMMARY]
Cardiac Myxoma among Patients Undergoing Cardiac Surgery in a Tertiary Care Center: A Descriptive Cross-sectional Study.
35210647
Heart neoplasms are rare tumors. Myxoma is the commonest primary benign tumor of the heart presenting with features of obstruction, arrhythmia, and embolism. Surgical excision of the tumor is the gold standard of treatment. The aim of the study is to find out the prevalence of cardiac myxoma among all cardiac surgeries operated during the study period.
INTRODUCTION
A descriptive cross-sectional study was done among 3800 patients undergoing surgery for cardiac tumors in a tertiary care center after obtaining approval from the Institutional Review Committee (Reference number- 36/(6-11)E2/077/078). The data was collected retrospectively from August 2012 to August 2020 using convenience sampling method. Statistical analysis was performed using Microsoft Excel 2016. Point estimate at 95% Confidence Interval was calculated along with frequency, percentage, mean and standard deviation.
METHODS
There were 26 (0.68%) (0.42-0.94 at 95% Confidence Interval) myxoma among 3800 cardiac surgeries performed over eight years. The mean age of the patients was 54.76±14.31 (range 17-75) years. Twenty (76.92%) patients were females. The commonest presenting symptom was shortness of breath in 19 (73.07%) patients. En masse excision with the closure of the atrial septal defect was the principal surgical technique. The mean Intensive Care Unit stay and hospital stays were 2.92±1.29 and 6.26±2.61 days respectively. There was no perioperative mortality.
RESULTS
Cardiac myxoma was the most common cardiac tumor encountered as in other studies.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Cardiac Surgical Procedures", "Cross-Sectional Studies", "Female", "Heart Neoplasms", "Humans", "Middle Aged", "Myxoma", "Retrospective Studies", "Tertiary Care Centers", "Young Adult" ]
9199997
INTRODUCTION
Cardiac tumors are rare, with a prevalence of 0.0017 to 0.28%.1 Metastatic tumors of the heart are more common than benign tumors, with prevalences of 1% and 0.1%, respectively.2 The ratio of occurrence of primary benign to malignant cardiac tumors is 2:1.3 Myxoma and sarcoma are the commonest benign and malignant cardiac tumors, respectively.4 Myxoma affects women in the 3rd-6th decades of life more frequently.5 Such patients present with features of valvular obstruction, heart failure, arrhythmia, embolization, and constitutional symptoms.6 Heart neoplasms are diagnosed by echocardiogram, computed tomography, and magnetic resonance imaging.7 Benign tumors are treated with complete surgical resection, whereas malignant tumors need to be excised as completely as possible and treated with adjuvant chemotherapy and radiotherapy.8 Research on cardiac tumors from Nepal is scarce. This study aimed to determine the prevalence of cardiac myxoma among all cardiac tumors operated on in our center during the study period.
METHODS
This was a descriptive cross-sectional study conducted in the Department of Cardiothoracic Vascular Surgery in Manmohan Cardiothoracic Vascular and Transplant Center, Maharajgunj, Kathmandu, Nepal. The data was collected retrospectively from August 2012 to August 2020 through medical records. Ethical approval was obtained from the Institutional Review Committee (IRC) of Institute of Medicine (Reference number: 36/ (6-11)E2/077/078). A convenience sampling technique was used and the sample size was calculated according to the formula:

n = Z^2 × p × q / e^2
  = (1.96)^2 × 0.5 × (1 - 0.5) / (0.02)^2
  = 2401

where n = minimum required sample size, Z = 1.96 at the 95% Confidence Interval (CI), p = prevalence taken as 50% for maximum sample size, q = 1 - p, and e = margin of error (2%). Adding a 10% non-response rate, the calculated sample size was 2641. However, 3800 cases were taken.

Data collected from patient files included patient characteristics (age, gender, New York Heart Association (NYHA) class), presentation of cardiac tumors, intraoperative tumor size, cardiopulmonary bypass time, aortic cross-clamp time, postoperative hours of ventilation, mediastinal drainage, postoperative complications, duration of hospital and Intensive Care Unit (ICU) stay, histopathology of the tumor, and in-hospital and long-term survival. Long-term survival was studied by reviewing the follow-up record and by telephonic interview. Data were recorded and descriptive statistics such as frequency, percentage, mean and standard deviation were calculated using Microsoft Excel 2016. Point estimate at 95% Confidence Interval was also calculated.
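The sample-size expression above is the standard formula for estimating a proportion. A short sketch reproducing the calculation (the 10% non-response inflation gives approximately the 2641 reported, with small rounding differences) is:

```python
def minimum_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.02,
                        non_response: float = 0.10):
    """Sample size n = Z^2 * p * (1 - p) / e^2, then inflated for an
    anticipated non-response rate."""
    n = (z ** 2) * p * (1 - p) / (e ** 2)
    return n, n * (1 + non_response)

n, n_adjusted = minimum_sample_size()
print(n, n_adjusted)  # 2401.0 and ~2641.1, matching the reported 2401 and 2641
```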
RESULTS
Among 3800 cardiac surgeries, 26 (0.68%) (0.42-0.94 at 95% Confidence Interval) were performed for cardiac myxoma. The mean age of the patients with myxoma was 54.76±14.31 years (range 17-75). Eleven (42.30%) of the patients were in the 50-59 years group. The majority of the patients were females, with a female to male ratio of 3.3:1. The mean duration of symptoms was 119.84±134.11 days. Two cases were diagnosed incidentally while being worked up for hypertension. The presenting symptoms were shortness of breath, palpitation, cough, anorexia with weight loss, and features of embolism (Table 1). Only six (23.07%) patients with cardiac myxoma had a history of smoking. The most common tumor location was the left atrium followed by the right atrium. Tricuspid regurgitation was the commonest associated valvular dysfunction followed by mitral regurgitation (Table 1). One patient had functional mitral stenosis whereas another one had an organic stenotic lesion. All the patients with myxoma underwent complete excision of the tumor. A separate right and left atriotomy (Bicameral approach) was the most commonly employed approach (Table 2). A total of 14 (53.84%) patients had myxoma with a broad base whereas the remaining tumors were pedunculated. Piecemeal excision of the very friable tumors was performed. The majority of the tumors were excised with a wide rim of interatrial septal attachment followed by the repair of the interatrial septum either with a pericardial patch or direct closure (Table 2). Aortic cross clamp time was 30.46±14.52 minutes and cardiopulmonary bypass time was 46.19±18.19 minutes. Adjunct procedures performed included tricuspid valve repair, mitral valve repair, mitral valve replacement, and coronary artery bypass grafting. The majority of the patients were extubated the same day, had insignificant mediastinal drainage and stayed in ICU and hospital for an average period of 2.92±1.29 and 6.26±2.61 days respectively (Table 3). Seven (26.92%) of the patients had arrhythmias requiring intervention (Table 4). One patient developed hemiparesis in the postoperative period which got improved by the time of discharge. The CT head of the patient did not reveal ischemia or hemorrhage. There was no perioperative mortality. The patients followed up for a mean period of 1.49 years (range 7 days to 6.5 years). A review of medical records and telephonic interviews identified five late deaths.
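The prevalence and interval reported at the start of this section (26 of 3800; 0.42-0.94 at 95% Confidence Interval) correspond to a simple normal-approximation interval for a proportion. A small sketch of that calculation, reproducing the published figures up to rounding, is:

```python
from math import sqrt

def prevalence_ci(events: int, total: int, z: float = 1.96):
    """Point prevalence with a normal-approximation (Wald) 95% CI."""
    p = events / total
    se = sqrt(p * (1 - p) / total)
    return p, p - z * se, p + z * se

p, lo, hi = prevalence_ci(26, 3800)
# ~0.68% (95% CI ~0.42-0.95%); the paper reports 0.42-0.94 at 95% CI.
print(f"{100 * p:.2f}% (95% CI {100 * lo:.2f}-{100 * hi:.2f}%)")
```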
CONCLUSIONS
Cardiac myxoma was the most common cardiac tumor in our study as in other published studies performed. Proper diagnosis and management of cardiac myxoma could result in complete and safe excision with low perioperative morbidity and mortality.
[]
[]
[]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION", "CONCLUSIONS" ]
[ "Cardiac tumors are rare tumors with a prevalence of 0.0017 to 0.28%.1 Metastatic tumor of the heart is commoner than benign tumors with a prevalence of 1% and 0.1%, respectively.2 The ratio of occurrence of primary benign to malignant cardiac tumors is 2:1.3 Myxoma and sarcoma are the commonest benign and malignant cardiac tumors respectively.4\nMyxoma affects women in the 3rd-6th decades of life more frequently.5 Such patients present with features of valvular obstruction, heart failure, arrhythmia, embolization, and constitutional symptoms.6 Heart neoplasms are diagnosed by echocardiogram, computed tomography, magnetic resonance imaging.7 Benign tumors are treated with complete surgical resection whereas malignant tumors need to be completely excised as far as possible and treated with adjuvant chemotherapy and radiotherapy.8\nResearches on the cardiac tumor from Nepal are scarce. This study is intended to find out the prevalence of cardiac myxoma among all cardiac tumors operated in our center during the study period.", "This was a descriptive cross-sectional study conducted in the Department of Cardiothoracic Vascular Surgery in Manmohan Cardiothoracic Vascular and Transplant Center, Maharajgunj, Kathmandu, Nepal. The data was collected retrospectively from August 2012 to August 2020 through medical records. Ethical approval was obtained from the Institutional Review Committee (IRC) of Institute of Medicine (Reference number: 36/ (6-11)E2/077/078). Convenience sampling technique was used and sample size was calculated according to the formula:\nn = Z2 × p × q / e2\n  = (1.96)2 × 0.5 × (1-0.5) / (0.02)2\n  = 2401\nWhere,\nn = minimum required sample size\nZ = 1.96 at 95% of Confidence Interval (CI)\np = prevalence taken as 50 % for maximum sample size\nq = 1-p\ne = margin of error, 2%\nAdding a 10% non-response rate, the calculated sample size was 2641. However, 3800 cases were taken.\nData collected from patient files included patient characteristics-age, gender, New York Heart Association (NYHA) class, presentation of cardiac tumors, intraoperative tumor size, cardiopulmonary bypass time, aortic cross-clamp time, postoperative hours of ventilation, mediastinal drainage, postoperative complications, duration of hospital and Intensive Care Unit (ICU) stay, histopathology of a tumor, in-hospital and long-term survival. Long-term survival was studied by reviewing the follow-up record and telephonic interview.\nData were recorded and descriptive statistics like frequency, percentage, mean and standard deviaton were calculated using Microsoft Excel 2016. Point estimate at 95% Confidence Interval was also calculated.", "Among 3800 cardiac surgeries, 26 (0.68%) (0.42-0.94 at 95% Confidence Interval) were performed for cardiac myxoma. The mean age of the patients with myxoma was 54.76±14.31 years (range 17-75). Eleven (42.30%) of the patients were in the 50-59 years group. The majority of the patients were females, with a female to male ratio of 3.3:1. The mean duration of symptoms was 119.84±134.11 days. Two cases were diagnosed incidentally while being worked up for hypertension. The presenting symptoms were shortness of breath, palpitation, cough, anorexia with weight loss, and features of embolism (Table 1). Only six (23.07%) patients with cardiac myxoma had a history of smoking. The most common tumor location was the left atrium followed by the right atrium. Tricuspid regurgitation was the commonest associated valvular dysfunction followed by mitral regurgitation (Table 1). 
One patient had functional mitral stenosis whereas another one had an organic stenotic lesion.\nAll the patients with myxoma underwent complete excision of the tumor. A separate right and left atriotomy (Bicameral approach) was the most commonly employed approach (Table 2). A total of 14 (53.84%) patients had myxoma with a broad base whereas the remaining tumors were pedunculated. Piecemeal excision of the very friable tumors was performed. The majority of the tumors were excised with a wide rim of interatrial septal attachment followed by the repair of the interatrial septum either with a pericardial patch or direct closure (Table 2). Aortic cross clamp time was 30.46±14.52 minutes and cardiopulmonary bypass time was 46.19±18.19 minutes. Adjunct procedures performed included tricuspid valve repair, mitral valve repair, mitral valve replacement, and coronary artery bypass grafting.\nThe majority of the patients were extubated the same day, had insignificant mediastinal drainage and stayed in ICU and hospital for an average period of 2.92±1.29 and 6.26±2.61 days respectively (Table 3). Seven (26.92%) of the patients had arrhythmias requiring intervention (Table 4). One patient developed hemiparesis in the postoperative period which got improved by the time of discharge. The CT head of the patient did not reveal ischemia or hemorrhage. There was no perioperative mortality.\nThe patients followed up for a mean period of 1.49 years (range 7 days to 6.5 years). A review of medical records and telephonic interviews identified five late deaths.", "In our study 89% of surgically treated cardiac tumors was myxoma similar to Keeling IM, et al. who found that 86% of surgically treated cardiac tumors were cardiac myxomas.9 The incidence of cardiac myxoma is 0.3% in patients undergoing on-pump cardiac surgeries.10 Cardiac myxoma excision contributed to 0.68% of cardiac surgeries in our centre. Of the primary cardiac tumors, cardiac myxoma were 92.85% and 7.14% were malignant tumors. The majority of the patients, 20 (76.92%) with myxoma were females and the mean age was found to be 54.76±14.31 years. In a study by Mandal sc, et al. the female to male ratio of myxoma occurrence ratio was 1.9:1 and (43%) of patients were in the 4th to 5th decade of life.11 The mean age of the patients was 55 years (range 22-79 years) in a study by Keeling IM, et al.9 The mean duration of onset of symptoms to diagnosis was 119.84±134.11 in our population.\nThe most common presenting complaint of the patients included shortness of breath followed by palpitation and 16 (61.52%) patients were in NYHA class II, in our study which is similar to that reported in other studies. Fifty-one percent 51.0% presented with dyspnea and 65.78% were in NYHA Class II.10,12 Two (7.69%) patients had a preoperative stroke and two (7.69%) had femoral artery embolism in our center. In another study from Pakistan, 50% of patients had neurological symptoms at presentation including stroke and transient ischemic attack, and 14% had limb ischemia secondary to embolism.13 Similarly preoperative atrial fibrillation was seen in 3.84% in our study. Cianciulli TF, et al. found 9.3% patients had atrial fibrillation.5 Similar to our findings, 82.4%-86.6% of the myxoma was located in the left atrium attached to the interatrial septum.11,14 Protrusion of the tumor through the mitral/ tricuspid valve can give rise to symptoms of valvular obstruction, increased pulmonary artery pressure, and tricuspid regurgitation. 
Fourteen (53.84%) of our patients had myxoma protrusion through the mitral valve whereas eight (30.76%) patients had varying grades of tricuspid insufficiency.\nVarious operative approaches to myxoma excision have been described in literature like minimally invasive, endoscopic, and robotically-assisted techniques, however, median sternotomy remains the most commonly performed one.2 All our patients underwent median sternotomy. The tumor size ranged from 2cm to 8cm, with a mean size of 4.13±1.39cm. The majority of the tumors 53.84% were broad-based. Tumor size ranged from 1cm to 12cm and 67.6% to 85.48% of the tumors were pedunculated in other studies.14,15 Complete excision of the tumor is of utmost importance in managing cardiac tumors. Hence the commonest approach for left atrial myxoma excision was the bicameral approach in 22 (84.61%) patients. 21 (80.79%) of our patients underwent en masse excision of the tumor and 13 (50%) underwent excision of the tumor along with a portion of the interatrial septum and the defect closure with an autologous pericardial patch. Similar approaches and techniques were followed by other surgeons.16 Various studies have shown patients with myxoma excision may need concomitant procedures like mitral valve and tricuspid valve repair or replacements, coronary artery bypass grafting.16 Tricuspid repair was done by Kay's annuloplasty in one patient and ring annuloplasty in another patient. One patient underwent mitral valve replacement with excision of the tumor and another patient underwent coronary artery bypass grafting to a distal right coronary artery. The mean cardiopulmonary bypass time and aortic cross-clamp time were found to be 46.19±18.19 minutes and 30.46±14.52 minutes in our study. However longer mean cardiopulmonary bypass time (80.7±39.0 minutes) and mean aortic cross-clamping time (51.3±27.5 minutes) have also been reported.17\nThe postoperative duration of ventilation was 7.91±6.27 hours in our patients. The mean mediastinal drainage was 272.50±192.66ml and the drains were placed for a mean duration of 1.57±0.64 days. However, packed cells transfusion was required in two patients only. One patient had hemiparesis; seven patients had arrhythmia in the post-operative period. Two patients had supraventricular tachycardia, two had atrial fibrillation with fast ventricular rate, managed by chemical cardioversion and two patients had ventricular fibrillation requiring electrical cardioversion. One patient had a run of ventricular tachycardia abolished by Lidocaine. One patient with postoperative cardiac failure with renal impairment responded well to medical management and three sessions of dialysis. None of the patients had re-exploration for bleeding or tamponade, sepsis, and surgical site infection.\nThe mean postoperative intensive care unit stay and hospital stays were 2.92±1.29 days and 6.26±2.61 days respectively. In the hospital, perioperative mortality was not seen. Most of our patients were from remote areas and hence most of them did not follow up for a long period. The mean follow-up period was 1.49 years. Long-term survival was seen in 21 (80.76%) patients. There were two late in-hospital mortalities. One patient presented to the Emergency room with acute right lower limb ischemia of 24 hours duration and cardiac failure, one year after myxoma excision. The echocardiographic evaluation revealed recurrence of left atrial myxomatous tumor approximately 2cm in size. 
However, the patient crashed in an emergency and succumbed to the disease before she could be shifted to the operating theatre. Another patient had expired following tricuspid valve replacement for severe tricuspid regurgitation, three years after right atrial myxoma excision. In a telephonic interview, it was found that three patients (above 72 years of age) died at home.\nIn a study from India, 7.89% of patients had supraventricular arrhythmias, 5.26% of patients had an atrioventricular block requiring temporary pacing and the early death rate was 5.88%. 15.78% of patients were lost to follow-up. There were no late deaths and recurrence.12 In a Greek study of 153 patients, the mean ICU stay and hospital stay were 2.0±0.9 days and 8.02±2.8 days respectively. Postoperatively, atrial fibrillation was seen in 5.9%, permanent pacemaker insertion was required in 5.2% and in-hospital mortality due to sepsis was 0.7%. Recurrence was found in 3.3% of patients in a mean follow-up of 3.7±4.3 years.14\nThe limitations of the study were that the small sample size and it was a single-center study.", "Cardiac myxoma was the most common cardiac tumor in our study as in other published studies performed. Proper diagnosis and management of cardiac myxoma could result in complete and safe excision with low perioperative morbidity and mortality." ]
[ "intro", "methods", "results", "discussion", "conclusions" ]
[ "\nembolism\n", "\nheart neoplasms\n", "\nmyxoma\n" ]
INTRODUCTION: Cardiac tumors are rare tumors with a prevalence of 0.0017 to 0.28%.1 Metastatic tumor of the heart is commoner than benign tumors with a prevalence of 1% and 0.1%, respectively.2 The ratio of occurrence of primary benign to malignant cardiac tumors is 2:1.3 Myxoma and sarcoma are the commonest benign and malignant cardiac tumors respectively.4 Myxoma affects women in the 3rd-6th decades of life more frequently.5 Such patients present with features of valvular obstruction, heart failure, arrhythmia, embolization, and constitutional symptoms.6 Heart neoplasms are diagnosed by echocardiogram, computed tomography, magnetic resonance imaging.7 Benign tumors are treated with complete surgical resection whereas malignant tumors need to be completely excised as far as possible and treated with adjuvant chemotherapy and radiotherapy.8 Researches on the cardiac tumor from Nepal are scarce. This study is intended to find out the prevalence of cardiac myxoma among all cardiac tumors operated in our center during the study period. METHODS: This was a descriptive cross-sectional study conducted in the Department of Cardiothoracic Vascular Surgery in Manmohan Cardiothoracic Vascular and Transplant Center, Maharajgunj, Kathmandu, Nepal. The data was collected retrospectively from August 2012 to August 2020 through medical records. Ethical approval was obtained from the Institutional Review Committee (IRC) of Institute of Medicine (Reference number: 36/ (6-11)E2/077/078). Convenience sampling technique was used and sample size was calculated according to the formula: n = Z2 × p × q / e2   = (1.96)2 × 0.5 × (1-0.5) / (0.02)2   = 2401 Where, n = minimum required sample size Z = 1.96 at 95% of Confidence Interval (CI) p = prevalence taken as 50 % for maximum sample size q = 1-p e = margin of error, 2% Adding a 10% non-response rate, the calculated sample size was 2641. However, 3800 cases were taken. Data collected from patient files included patient characteristics-age, gender, New York Heart Association (NYHA) class, presentation of cardiac tumors, intraoperative tumor size, cardiopulmonary bypass time, aortic cross-clamp time, postoperative hours of ventilation, mediastinal drainage, postoperative complications, duration of hospital and Intensive Care Unit (ICU) stay, histopathology of a tumor, in-hospital and long-term survival. Long-term survival was studied by reviewing the follow-up record and telephonic interview. Data were recorded and descriptive statistics like frequency, percentage, mean and standard deviaton were calculated using Microsoft Excel 2016. Point estimate at 95% Confidence Interval was also calculated. RESULTS: Among 3800 cardiac surgeries, 26 (0.68%) (0.42-0.94 at 95% Confidence Interval) were performed for cardiac myxoma. The mean age of the patients with myxoma was 54.76±14.31 years (range 17-75). Eleven (42.30%) of the patients were in the 50-59 years group. The majority of the patients were females, with a female to male ratio of 3.3:1. The mean duration of symptoms was 119.84±134.11 days. Two cases were diagnosed incidentally while being worked up for hypertension. The presenting symptoms were shortness of breath, palpitation, cough, anorexia with weight loss, and features of embolism (Table 1). Only six (23.07%) patients with cardiac myxoma had a history of smoking. The most common tumor location was the left atrium followed by the right atrium. Tricuspid regurgitation was the commonest associated valvular dysfunction followed by mitral regurgitation (Table 1). 
One patient had functional mitral stenosis, whereas another had an organic stenotic lesion. All the patients with myxoma underwent complete excision of the tumor. Separate right and left atriotomies (bicameral approach) were the most commonly employed approach (Table 2). A total of 14 (53.84%) patients had myxoma with a broad base, whereas the remaining tumors were pedunculated. Piecemeal excision of the very friable tumors was performed. The majority of the tumors were excised with a wide rim of interatrial septal attachment, followed by repair of the interatrial septum with either a pericardial patch or direct closure (Table 2). Aortic cross-clamp time was 30.46±14.52 minutes and cardiopulmonary bypass time was 46.19±18.19 minutes. Adjunct procedures performed included tricuspid valve repair, mitral valve repair, mitral valve replacement, and coronary artery bypass grafting. The majority of the patients were extubated the same day, had insignificant mediastinal drainage, and stayed in the ICU and hospital for an average of 2.92±1.29 and 6.26±2.61 days, respectively (Table 3). Seven (26.92%) of the patients had arrhythmias requiring intervention (Table 4). One patient developed hemiparesis in the postoperative period, which improved by the time of discharge. A head CT of the patient did not reveal ischemia or hemorrhage. There was no perioperative mortality. The patients were followed up for a mean period of 1.49 years (range 7 days to 6.5 years). A review of medical records and telephonic interviews identified five late deaths. DISCUSSION: In our study, 89% of surgically treated cardiac tumors were myxomas, similar to Keeling IM, et al., who found that 86% of surgically treated cardiac tumors were cardiac myxomas.9 The incidence of cardiac myxoma is 0.3% in patients undergoing on-pump cardiac surgeries.10 Cardiac myxoma excision contributed to 0.68% of cardiac surgeries in our center. Of the primary cardiac tumors, 92.85% were cardiac myxomas and 7.14% were malignant tumors. The majority of the patients with myxoma, 20 (76.92%), were females, and the mean age was found to be 54.76±14.31 years. In a study by Mandal SC, et al., the female to male ratio of myxoma occurrence was 1.9:1, and 43% of patients were in the 4th to 5th decade of life.11 The mean age of the patients was 55 years (range 22-79 years) in a study by Keeling IM, et al.9 The mean duration from onset of symptoms to diagnosis was 119.84±134.11 days in our population. The most common presenting complaints of the patients included shortness of breath followed by palpitation, and 16 (61.52%) patients were in NYHA class II in our study, which is similar to that reported in other studies, where 51.0% presented with dyspnea and 65.78% were in NYHA Class II.10,12 Two (7.69%) patients had a preoperative stroke and two (7.69%) had femoral artery embolism in our center. In another study from Pakistan, 50% of patients had neurological symptoms at presentation, including stroke and transient ischemic attack, and 14% had limb ischemia secondary to embolism.13 Similarly, preoperative atrial fibrillation was seen in 3.84% in our study; Cianciulli TF, et al. found that 9.3% of patients had atrial fibrillation.5 Similar to our findings, 82.4%-86.6% of myxomas were located in the left atrium attached to the interatrial septum.11,14 Protrusion of the tumor through the mitral/tricuspid valve can give rise to symptoms of valvular obstruction, increased pulmonary artery pressure, and tricuspid regurgitation.
Fourteen (53.84%) of our patients had myxoma protrusion through the mitral valve, whereas eight (30.76%) patients had varying grades of tricuspid insufficiency. Various operative approaches to myxoma excision have been described in the literature, such as minimally invasive, endoscopic, and robotically assisted techniques; however, median sternotomy remains the most commonly performed one.2 All our patients underwent median sternotomy. The tumor size ranged from 2 cm to 8 cm, with a mean size of 4.13±1.39 cm. The majority of the tumors (53.84%) were broad-based. In other studies, tumor size ranged from 1 cm to 12 cm and 67.6% to 85.48% of the tumors were pedunculated.14,15 Complete excision of the tumor is of utmost importance in managing cardiac tumors. Hence, the commonest approach for left atrial myxoma excision was the bicameral approach, in 22 (84.61%) patients. Twenty-one (80.79%) of our patients underwent en masse excision of the tumor, and 13 (50%) underwent excision of the tumor along with a portion of the interatrial septum and closure of the defect with an autologous pericardial patch. Similar approaches and techniques were followed by other surgeons.16 Various studies have shown that patients undergoing myxoma excision may need concomitant procedures such as mitral and tricuspid valve repair or replacement, or coronary artery bypass grafting.16 Tricuspid repair was done by Kay's annuloplasty in one patient and ring annuloplasty in another patient. One patient underwent mitral valve replacement with excision of the tumor, and another patient underwent coronary artery bypass grafting to a distal right coronary artery. The mean cardiopulmonary bypass time and aortic cross-clamp time were found to be 46.19±18.19 minutes and 30.46±14.52 minutes in our study. However, longer mean cardiopulmonary bypass time (80.7±39.0 minutes) and mean aortic cross-clamping time (51.3±27.5 minutes) have also been reported.17 The postoperative duration of ventilation was 7.91±6.27 hours in our patients. The mean mediastinal drainage was 272.50±192.66 ml, and the drains were placed for a mean duration of 1.57±0.64 days. Packed cell transfusion was required in only two patients. One patient had hemiparesis; seven patients had arrhythmia in the postoperative period. Two patients had supraventricular tachycardia, two had atrial fibrillation with a fast ventricular rate managed by chemical cardioversion, and two patients had ventricular fibrillation requiring electrical cardioversion. One patient had a run of ventricular tachycardia abolished by lidocaine. One patient with postoperative cardiac failure and renal impairment responded well to medical management and three sessions of dialysis. None of the patients required re-exploration for bleeding or tamponade, and there was no sepsis or surgical site infection. The mean postoperative intensive care unit and hospital stays were 2.92±1.29 days and 6.26±2.61 days, respectively. In-hospital perioperative mortality was not seen. Most of our patients were from remote areas, and hence most of them did not follow up for a long period. The mean follow-up period was 1.49 years. Long-term survival was seen in 21 (80.76%) patients. There were two late in-hospital mortalities. One patient presented to the emergency room with acute right lower limb ischemia of 24 hours' duration and cardiac failure, one year after myxoma excision. Echocardiographic evaluation revealed recurrence of a left atrial myxomatous tumor approximately 2 cm in size.
However, the patient deteriorated rapidly in the emergency room and succumbed to the disease before she could be shifted to the operating theatre. Another patient expired following tricuspid valve replacement for severe tricuspid regurgitation, three years after right atrial myxoma excision. In a telephonic interview, it was found that three patients (above 72 years of age) had died at home. In a study from India, 7.89% of patients had supraventricular arrhythmias, 5.26% of patients had an atrioventricular block requiring temporary pacing, and the early death rate was 5.88%; 15.78% of patients were lost to follow-up, and there were no late deaths or recurrences.12 In a Greek study of 153 patients, the mean ICU stay and hospital stay were 2.0±0.9 days and 8.02±2.8 days, respectively. Postoperatively, atrial fibrillation was seen in 5.9%, permanent pacemaker insertion was required in 5.2%, and in-hospital mortality due to sepsis was 0.7%. Recurrence was found in 3.3% of patients over a mean follow-up of 3.7±4.3 years.14 The limitations of the study were the small sample size and its single-center design. CONCLUSIONS: Cardiac myxoma was the most common cardiac tumor in our study, as in other published studies. Proper diagnosis and management of cardiac myxoma could result in complete and safe excision with low perioperative morbidity and mortality.
Background: Heart neoplasms are rare tumors. Myxoma is the commonest primary benign tumor of the heart, presenting with features of obstruction, arrhythmia, and embolism. Surgical excision of the tumor is the gold standard of treatment. The aim of the study was to find out the prevalence of cardiac myxoma among all cardiac surgeries performed during the study period. Methods: A descriptive cross-sectional study was done among 3800 patients undergoing cardiac surgery in a tertiary care center after obtaining approval from the Institutional Review Committee (Reference number: 36/(6-11)E2/077/078). The data were collected retrospectively from August 2012 to August 2020 using a convenience sampling method. Statistical analysis was performed using Microsoft Excel 2016. The point estimate at 95% Confidence Interval was calculated along with frequency, percentage, mean, and standard deviation. Results: There were 26 (0.68%; 0.42-0.94 at 95% Confidence Interval) myxomas among 3800 cardiac surgeries performed over eight years. The mean age of the patients was 54.76±14.31 (range 17-75) years. Twenty (76.92%) patients were females. The commonest presenting symptom was shortness of breath, in 19 (73.07%) patients. En masse excision with closure of the atrial septal defect was the principal surgical technique. The mean Intensive Care Unit and hospital stays were 2.92±1.29 and 6.26±2.61 days, respectively. There was no perioperative mortality. Conclusions: Cardiac myxoma was the most common cardiac tumor encountered, as in other studies.
INTRODUCTION: Cardiac tumors are rare tumors with a prevalence of 0.0017 to 0.28%.1 Metastatic tumors of the heart are commoner than benign tumors, with prevalences of 1% and 0.1%, respectively.2 The ratio of occurrence of primary benign to malignant cardiac tumors is 2:1.3 Myxoma and sarcoma are the commonest benign and malignant cardiac tumors, respectively.4 Myxoma most frequently affects women in the 3rd-6th decades of life.5 Such patients present with features of valvular obstruction, heart failure, arrhythmia, embolization, and constitutional symptoms.6 Heart neoplasms are diagnosed by echocardiography, computed tomography, and magnetic resonance imaging.7 Benign tumors are treated with complete surgical resection, whereas malignant tumors need to be excised as completely as possible and treated with adjuvant chemotherapy and radiotherapy.8 Research on cardiac tumors from Nepal is scarce. This study was intended to find out the prevalence of cardiac myxoma among all cardiac tumors operated on in our center during the study period. CONCLUSIONS: Cardiac myxoma was the most common cardiac tumor in our study, as in other published studies. Proper diagnosis and management of cardiac myxoma could result in complete and safe excision with low perioperative morbidity and mortality.
Background: Heart neoplasms are rare tumors. Myxoma is the commonest primary benign tumor of the heart, presenting with features of obstruction, arrhythmia, and embolism. Surgical excision of the tumor is the gold standard of treatment. The aim of the study was to find out the prevalence of cardiac myxoma among all cardiac surgeries performed during the study period. Methods: A descriptive cross-sectional study was done among 3800 patients undergoing cardiac surgery in a tertiary care center after obtaining approval from the Institutional Review Committee (Reference number: 36/(6-11)E2/077/078). The data were collected retrospectively from August 2012 to August 2020 using a convenience sampling method. Statistical analysis was performed using Microsoft Excel 2016. The point estimate at 95% Confidence Interval was calculated along with frequency, percentage, mean, and standard deviation. Results: There were 26 (0.68%; 0.42-0.94 at 95% Confidence Interval) myxomas among 3800 cardiac surgeries performed over eight years. The mean age of the patients was 54.76±14.31 (range 17-75) years. Twenty (76.92%) patients were females. The commonest presenting symptom was shortness of breath, in 19 (73.07%) patients. En masse excision with closure of the atrial septal defect was the principal surgical technique. The mean Intensive Care Unit and hospital stays were 2.92±1.29 and 6.26±2.61 days, respectively. There was no perioperative mortality. Conclusions: Cardiac myxoma was the most common cardiac tumor encountered, as in other studies.
2,176
281
[]
5
[ "patients", "cardiac", "myxoma", "tumors", "mean", "patient", "tumor", "study", "excision", "years" ]
[ "cardiac tumors rare", "cardiac tumor study", "malignant cardiac tumors", "prevalence cardiac myxoma", "tumors cardiac myxomas" ]
[CONTENT] embolism | heart neoplasms | myxoma [SUMMARY]
[CONTENT] embolism | heart neoplasms | myxoma [SUMMARY]
[CONTENT] embolism | heart neoplasms | myxoma [SUMMARY]
[CONTENT] embolism | heart neoplasms | myxoma [SUMMARY]
[CONTENT] embolism | heart neoplasms | myxoma [SUMMARY]
[CONTENT] embolism | heart neoplasms | myxoma [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cardiac Surgical Procedures | Cross-Sectional Studies | Female | Heart Neoplasms | Humans | Middle Aged | Myxoma | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cardiac Surgical Procedures | Cross-Sectional Studies | Female | Heart Neoplasms | Humans | Middle Aged | Myxoma | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cardiac Surgical Procedures | Cross-Sectional Studies | Female | Heart Neoplasms | Humans | Middle Aged | Myxoma | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cardiac Surgical Procedures | Cross-Sectional Studies | Female | Heart Neoplasms | Humans | Middle Aged | Myxoma | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cardiac Surgical Procedures | Cross-Sectional Studies | Female | Heart Neoplasms | Humans | Middle Aged | Myxoma | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cardiac Surgical Procedures | Cross-Sectional Studies | Female | Heart Neoplasms | Humans | Middle Aged | Myxoma | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] cardiac tumors rare | cardiac tumor study | malignant cardiac tumors | prevalence cardiac myxoma | tumors cardiac myxomas [SUMMARY]
[CONTENT] cardiac tumors rare | cardiac tumor study | malignant cardiac tumors | prevalence cardiac myxoma | tumors cardiac myxomas [SUMMARY]
[CONTENT] cardiac tumors rare | cardiac tumor study | malignant cardiac tumors | prevalence cardiac myxoma | tumors cardiac myxomas [SUMMARY]
[CONTENT] cardiac tumors rare | cardiac tumor study | malignant cardiac tumors | prevalence cardiac myxoma | tumors cardiac myxomas [SUMMARY]
[CONTENT] cardiac tumors rare | cardiac tumor study | malignant cardiac tumors | prevalence cardiac myxoma | tumors cardiac myxomas [SUMMARY]
[CONTENT] cardiac tumors rare | cardiac tumor study | malignant cardiac tumors | prevalence cardiac myxoma | tumors cardiac myxomas [SUMMARY]
[CONTENT] patients | cardiac | myxoma | tumors | mean | patient | tumor | study | excision | years [SUMMARY]
[CONTENT] patients | cardiac | myxoma | tumors | mean | patient | tumor | study | excision | years [SUMMARY]
[CONTENT] patients | cardiac | myxoma | tumors | mean | patient | tumor | study | excision | years [SUMMARY]
[CONTENT] patients | cardiac | myxoma | tumors | mean | patient | tumor | study | excision | years [SUMMARY]
[CONTENT] patients | cardiac | myxoma | tumors | mean | patient | tumor | study | excision | years [SUMMARY]
[CONTENT] patients | cardiac | myxoma | tumors | mean | patient | tumor | study | excision | years [SUMMARY]
[CONTENT] tumors | benign | cardiac | cardiac tumors | malignant | prevalence | heart | malignant cardiac | malignant cardiac tumors | benign malignant [SUMMARY]
[CONTENT] size | calculated | sample | sample size | data | data collected | august | collected | e2 | cardiothoracic vascular [SUMMARY]
[CONTENT] patients | table | followed | mitral | years | myxoma | patients myxoma | days | 26 | majority [SUMMARY]
[CONTENT] cardiac | myxoma | cardiac myxoma | performed proper diagnosis | low perioperative | low | proper diagnosis | proper diagnosis management | proper diagnosis management cardiac | safe [SUMMARY]
[CONTENT] patients | cardiac | myxoma | tumors | study | mean | tumor | excision | patient | cardiac myxoma [SUMMARY]
[CONTENT] patients | cardiac | myxoma | tumors | study | mean | tumor | excision | patient | cardiac myxoma [SUMMARY]
[CONTENT] ||| Myxoma ||| ||| [SUMMARY]
[CONTENT] 3800 | tertiary | the Institutional Review Committee | 36/(6-11)E2/077/078 ||| August 2012 to August 2020 ||| Microsoft | Excel | 2016 ||| Point | 95% [SUMMARY]
[CONTENT] 26 | 0.68% | 0.42-0.94 | 95% | 3800 | eight years ||| 17-75) years ||| Twenty | 76.92% ||| ||| 19 | 73.07% ||| ||| Intensive Care Unit | 2.92±1.29 and 6.26±2.61 days ||| [SUMMARY]
[CONTENT] Cardiac [SUMMARY]
[CONTENT] ||| Myxoma ||| ||| ||| 3800 | tertiary | the Institutional Review Committee | 36/(6-11)E2/077/078 ||| August 2012 to August 2020 ||| Microsoft | Excel | 2016 ||| Point | 95% ||| 26 | 0.68% | 0.42-0.94 | 95% | 3800 | eight years ||| 17-75) years ||| Twenty | 76.92% ||| ||| 19 | 73.07% ||| ||| Intensive Care Unit | 2.92±1.29 and 6.26±2.61 days ||| ||| [SUMMARY]
[CONTENT] ||| Myxoma ||| ||| ||| 3800 | tertiary | the Institutional Review Committee | 36/(6-11)E2/077/078 ||| August 2012 to August 2020 ||| Microsoft | Excel | 2016 ||| Point | 95% ||| 26 | 0.68% | 0.42-0.94 | 95% | 3800 | eight years ||| 17-75) years ||| Twenty | 76.92% ||| ||| 19 | 73.07% ||| ||| Intensive Care Unit | 2.92±1.29 and 6.26±2.61 days ||| ||| [SUMMARY]
Molecular profiling of metastatic colorectal tumors using next-generation sequencing: a single-institution experience.
28178681
Recent molecular characterization of colorectal tumors has identified several molecular alterations of interest that are considered targetable in metastatic colorectal cancer (mCRC).
BACKGROUND
We conducted a single-institution, retrospective study based on comprehensive genomic profiling of tumors from 138 patients with mCRC using next-generation sequencing (NGS) via FoundationOne.
METHODS
Overall, RAS mutations were present in 51.4% and RAF mutations were seen in 7.2% of mCRC patients. We found a novel KRAS R68S1 mutation associated with an aggressive phenotype. RAS amplifications (1.4% KRAS and 0.7% NRAS), MET amplifications (2.2%), BRAF L597R alterations (0.7%), ARAF S214F alterations (0.7%), concurrent RAS+RAF (1.4%) and BRAF+RAF1 (0.7%) alterations, and rare PTEN-PIK3CA-AKT pathway mutations were identified and were predominantly associated with poor prognosis. ERBB2 (HER2) amplified tumors were identified in 5.1% and all arose from the rectosigmoid colon. Three cases (2.2%) were associated with a hypermutated profile that was corroborated by findings of high tumor mutational burden (TMB): 2 cases with MSI-H and 1 case with a POLE mutation.
RESULTS
Comprehensive genomic profiling can uncover alterations beyond the well-characterized RAS/RAF mutations associated with anti-EGFR resistance. ERBB2 amplified tumors commonly originate from the rectosigmoid colon, are predominantly RAS/BRAF wild-type, and may predict benefit to HER2-directed therapy. Hypermutant tumors or tumors with high TMB correlate with MSI-H status or POLE mutations and may predict a benefit from anti-PD-1 therapy.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Biomarkers, Tumor", "Colorectal Neoplasms", "Female", "Gene Amplification", "Gene Expression Profiling", "Genetic Predisposition to Disease", "Genetic Variation", "High-Throughput Nucleotide Sequencing", "Humans", "Male", "Middle Aged", "Mutation", "Neoplasm Metastasis", "Neoplasm Staging", "Proto-Oncogene Proteins", "Retrospective Studies" ]
5522060
INTRODUCTION
Colorectal cancer (CRC) remains the third leading cause of cancer death in both men and women in the United States with an estimated 134,490 new cases and 49,190 deaths in 2016 [1]. Recent advances in the treatment of metastatic CRC (mCRC) have identified improved outcomes with the addition of epidermal growth factor receptor (EGFR)-targeting agents to conventional combination cytotoxic therapy in patients with extended RAS wild-type tumors. In contrast, activating mutations in the RAS gene (KRAS or NRAS, present in approximately 50% of cases of mCRC) and BRAF gene (present in about 5% of mCRC patients) have been associated with lack of clinically meaningful benefit, or harm, when anti-EGFR therapy is employed [2]. The identification of candidates for anti-EGFR therapy through the exclusion of RAS and BRAF mutations in mCRC serves as a model of selecting optimal therapy based on patient genomic profiles and molecular phenotypes. Several decades of genomic studies, including the use of more recent next-generation sequencing (NGS), have expedited the search for genetic alterations suitable for potential therapeutic targeting in CRC [3, 4]. Recently, comprehensive molecular characterization of 224 colorectal tumors was performed by The Cancer Genome Atlas (TCGA) Network [5]. Sixteen percent of colorectal tumors were found to be hypermutated and were more commonly found in the right colon, with 75% of these cases demonstrating expectedly high microsatellite instability (MSI-H). Twenty-four genes were identified to have significant mutations of interest including APC, SMAD4, TP53, PIK3CA, and KRAS mutations, as expected. Interestingly, mutations, deletions, or amplifications of the ERBB gene family were found in 19% of tumors. In sum, this genomic analysis identified several molecular alterations that are considered targetable, including mediators of dysregulated WNT, RAS, and PI3K pathways such as ERBB2, ERBB3, MEK, AKT, MTOR, IGF2, and IGFR. The recent identification of gene mutations and amplifications of potential significance for therapeutic purposes has led us to investigate the genomic profiles of mCRC patients using NGS (FoundationOne). Here, we describe a single-institution experience in reporting results from comprehensive genomic analysis of tumors from 138 mCRC patients. We aim to characterize genetic alterations present in our study population that have known correlates to prognosis, therapeutic resistance, and potential therapeutic targets in mCRC. In this study, we also report the existence of concurrent gene mutations rarely described in the literature, as well as novel mutations and amplifications that can lead to targeting outside of National Comprehensive Cancer Network (NCCN) standard treatments.
MATERIALS AND METHODS
Study patients and tumor samples Patients with advanced or metastatic (stage IV) colorectal cancer treated at the Gastrointestinal Medical Oncology Clinic at City of Hope National Medical Center (Duarte, CA) between April 2013 and February 2016 were screened for this study. Eligibility criteria was limited to those who underwent expanded genomic tumor analysis by FoundationOne. There were no exclusions to tumor histology, medical comorbidities, previous treatment or lines of prior therapy, or performance status. Comprehensive genomic profiling was conducted through NGS via FoundationOne (Foundation Medicine, Inc., Cambridge, MA) with reports generated from April 2013 to February 2016. The study was approved by the Institutional Review Board (IRB). Patients with advanced or metastatic (stage IV) colorectal cancer treated at the Gastrointestinal Medical Oncology Clinic at City of Hope National Medical Center (Duarte, CA) between April 2013 and February 2016 were screened for this study. Eligibility criteria was limited to those who underwent expanded genomic tumor analysis by FoundationOne. There were no exclusions to tumor histology, medical comorbidities, previous treatment or lines of prior therapy, or performance status. Comprehensive genomic profiling was conducted through NGS via FoundationOne (Foundation Medicine, Inc., Cambridge, MA) with reports generated from April 2013 to February 2016. The study was approved by the Institutional Review Board (IRB). Next-generation sequencing Comprehensive genomic analysis was conducted on tumor samples (formalin-fixed paraffin-embedded) retrieved from surgical resection, core needle biopsies, or excisional biopsies and delivered to Foundation Medicine, Inc. The NGS assay performed by FoundationOne has been previously described and validated [45]. The initial whole-genome shotgun library construction and hybridization-based capture of 4,557 exons from 287 cancer-related genes and 47 introns from 19 genes with frequent DNA rearrangements has since been expanded to identify genetic alterations across the coding regions of 315 cancer-related genes and introns from 28 genes commonly rearranged in solid cancers. Comprehensive genomic analysis was conducted on tumor samples (formalin-fixed paraffin-embedded) retrieved from surgical resection, core needle biopsies, or excisional biopsies and delivered to Foundation Medicine, Inc. The NGS assay performed by FoundationOne has been previously described and validated [45]. The initial whole-genome shotgun library construction and hybridization-based capture of 4,557 exons from 287 cancer-related genes and 47 introns from 19 genes with frequent DNA rearrangements has since been expanded to identify genetic alterations across the coding regions of 315 cancer-related genes and introns from 28 genes commonly rearranged in solid cancers. Study design Retrospective analysis of genetic mutations, amplifications, or alterations present in our cohort of 138 patients with mCRC was performed through test results provided in an integrative report available via FoundationICE (Interactive Cancer Explorer). Patient demographics including age, sex, ethnicity, site of primary, stage at diagnosis, and number of previous treatments were obtained from chart abstraction of each patient's electronic medical record (EMR). 
Microsatellite instability classified as stable (MSS), low (MSI-L), or high (MSI-H) were abstracted from pathology reports and response to anti-EGFR therapy, when available, was described according to Response Evaluation Criteria in Solid Tumors (RECIST) criteria and obtained from medical records [46]. The total number of clinically significant alterations for each patient was determined by tallying the sum of alterations included in the panel of clinically significant variants provided by FoundationICE reports and arbitrarily categorized into 3 groups (<9, 9-16, and > 16 total number of alterations). We defined hypermutant tumors as those in the highest number of mutations group that were also found to have high TMB as validated by FoundationOne (high >23.1 mutations/MB, intermediate 3.2-23.1 mutations/MB, and low <3.2 mutations/MB) [34]. Retrospective analysis of genetic mutations, amplifications, or alterations present in our cohort of 138 patients with mCRC was performed through test results provided in an integrative report available via FoundationICE (Interactive Cancer Explorer). Patient demographics including age, sex, ethnicity, site of primary, stage at diagnosis, and number of previous treatments were obtained from chart abstraction of each patient's electronic medical record (EMR). Microsatellite instability classified as stable (MSS), low (MSI-L), or high (MSI-H) were abstracted from pathology reports and response to anti-EGFR therapy, when available, was described according to Response Evaluation Criteria in Solid Tumors (RECIST) criteria and obtained from medical records [46]. The total number of clinically significant alterations for each patient was determined by tallying the sum of alterations included in the panel of clinically significant variants provided by FoundationICE reports and arbitrarily categorized into 3 groups (<9, 9-16, and > 16 total number of alterations). We defined hypermutant tumors as those in the highest number of mutations group that were also found to have high TMB as validated by FoundationOne (high >23.1 mutations/MB, intermediate 3.2-23.1 mutations/MB, and low <3.2 mutations/MB) [34]. Statistical analyses All statistical analyses performed were descriptive and no formal statistical hypotheses were assessed. The sample size was determined by the total number of mCRC patients with FoundationOne results available. All descriptive statistics were conducted in Excel with associated formulas and functions. All statistical analyses performed were descriptive and no formal statistical hypotheses were assessed. The sample size was determined by the total number of mCRC patients with FoundationOne results available. All descriptive statistics were conducted in Excel with associated formulas and functions.
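The grouping rules described in this study design lend themselves to a short illustration. The Python sketch below is not the authors' or FoundationOne's code; it simply encodes the stated cut-offs (alteration groups of <9, 9-16, and >16, and TMB bands of <3.2, 3.2-23.1, and >23.1 mutations/Mb). The example alteration counts are hypothetical, while the TMB values echo the three high-TMB cases reported later in the Results.

```python
def alteration_group(n_alterations: int) -> str:
    """Study-design grouping of clinically significant alterations: <9, 9-16, >16."""
    if n_alterations < 9:
        return "<9"
    if n_alterations <= 16:
        return "9-16"
    return ">16"

def tmb_category(mutations_per_mb: float) -> str:
    """TMB bands quoted above: low <3.2, intermediate 3.2-23.1, high >23.1 mutations/Mb."""
    if mutations_per_mb < 3.2:
        return "low"
    if mutations_per_mb <= 23.1:
        return "intermediate"
    return "high"

def is_hypermutant(n_alterations: int, mutations_per_mb: float) -> bool:
    """Hypermutant per the definition in the text: highest alteration group AND high TMB."""
    return alteration_group(n_alterations) == ">16" and tmb_category(mutations_per_mb) == "high"

# Hypothetical alteration counts; TMB values (33, 31, 122 mutations/Mb) echo the Results.
for alts, tmb in [(20, 33.0), (18, 31.0), (25, 122.0)]:
    print(alts, tmb, alteration_group(alts), tmb_category(tmb), is_hypermutant(alts, tmb))
```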
null
null
DISCUSSION
All statistical analyses performed were descriptive and no formal statistical hypotheses were assessed. The sample size was determined by the total number of mCRC patients with FoundationOne results available. All descriptive statistics were conducted in Excel with associated formulas and functions.
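Because the analysis was purely descriptive, the reported frequencies can be re-derived directly from the raw counts given in the Results. The snippet below is an illustrative re-computation in Python (the authors used Excel); the counts are transcribed from the cohort summary of 138 patients, and only a subset of alteration types is shown.

```python
# Counts transcribed from the cohort summary (n = 138); subset of alterations only.
cohort_size = 138
alteration_counts = {
    "KRAS mutation": 68,
    "BRAF mutation": 9,
    "NRAS mutation": 3,
    "ERBB2 (HER2) amplification": 7,
    "PIK3CA mutation": 25,
    "PTEN mutation": 15,
    "AKT mutation": 4,
    "MET amplification": 3,
}
for name, count in alteration_counts.items():
    print(f"{name}: {count}/{cohort_size} = {100 * count / cohort_size:.1f}%")
```

Running this reproduces the percentages quoted in the study population summary (for example, 68/138 = 49.3% for KRAS and 7/138 = 5.1% for ERBB2 amplification).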
[ "INTRODUCTION", "RESULTS", "Study population", "RAS mutations", "RAF mutations", "ERBB2 amplifications", "AKT1/2 mutations", "PIK3CA and PTEN mutations", "MET amplifications", "Hypermutant status", "DISCUSSION" ]
[ "Colorectal cancer (CRC) remains the third leading cause of cancer death in both men and women in the United States with an estimated 134,490 new cases and 49,190 deaths in 2016 [1]. Recent advances in the treatment of metastatic CRC (mCRC) have identified improved outcomes with the addition of epidermal growth factor receptor (EGFR)-targeting agents to conventional combination cytotoxic therapy in patients with extended RAS wild-type tumors. In contrast, activating mutations in the RAS gene (KRAS or NRAS, present in approximately 50% of cases of mCRC) and BRAF gene (present in about 5% of mCRC patients) have been associated with lack of clinically meaningful benefit or harm when anti-EGFR therapy is employed [2]. The identification of candidates for anti-EGFR therapy through the exclusion of RAS and BRAF mutations in mCRC serves as a model of selecting optimal therapy based on patient genomic profiles and molecular phenotypes.\nSeveral decades of genomic studies, including the use of more recent next-generation sequencing (NGS), have expedited the search of genetic alterations for potential therapeutic targeting in CRC [3, 4]. Recently, comprehensive molecular characterization of 224 colorectal tumors was performed by The Cancer Genome Atlas (TCGA) Network [5]. Sixteen percent of colorectal tumors were found to be hypermutated and more commonly found in the right colon with 75% of these cases demonstrating expectedly high microsatellite instability (MSI-H). Twenty-four genes were identified to have significant mutations of interest including APC, SMAD4, TP53, PIK3CA, and KRAS mutations, as expected. Interestingly, mutations, deletions, or amplifications of the ERRB gene family were found in 19% of tumors. In sum, this genomic analysis identified several molecular alterations that are considered targetable, including mediators of dysregulated WNT, RAS, and PI3K pathways such as ERRB2, ERRB3, MEK, AKT, MTOR, IGF2, and IGFR.\nThe recent identification of gene mutations and amplifications of potential significance for therapeutic purposes has led us to investigate the genomic profiles of mCRC patients using NGS (FoundationOne). Here, we describe a single-institution experience in reporting results from comprehensive genomic analysis of tumors from 138 mCRC patients. We aim to characterize genetic alterations present in our study population that have known correlates to prognosis, therapeutic resistance, and potential therapeutic targets in mCRC. In this study, we also report the existence of concurrent gene mutations rarely described in the literature and novel mutations and amplifications that can lead to targeting outside of National Comprehensive Cancer Network (NCCN) standard treatments.", " Study population The molecular results from FoundationOne testing of tumors from 138 mCRC patients are summarized in Table 1. The median age of our study group was 56 years (range 27-88) with 59.4% (82) males and 40.6% (56) females. The most common ethnicity was White (85, 61.6%) followed by Asian (29, 21.0%). The most common sites of primary were sigmoid colon (33.3%), rectum (19.6%), and cecum (15.2%). 
Sixty-eight patients (49.3%) had KRAS mutations, 9 patients (6.5%) had BRAF mutations, 3 (2.2%) had NRAS mutations, 1 (0.7%) had an ARAF mutation, 1 (0.7%) had a RAF-1 mutation, 7 (5.1%) had ERRB2 amplifications, 25 (18.1%) had PIK3CA mutations, 15 (10.9%) had PTEN mutations, 4 (2.9%) had AKT mutations, and 3 (2.2%) had MET amplifications.\nNOS: Not otherwise specified; MSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high.\nThe molecular results from FoundationOne testing of tumors from 138 mCRC patients are summarized in Table 1. The median age of our study group was 56 years (range 27-88) with 59.4% (82) males and 40.6% (56) females. The most common ethnicity was White (85, 61.6%) followed by Asian (29, 21.0%). The most common sites of primary were sigmoid colon (33.3%), rectum (19.6%), and cecum (15.2%). Sixty-eight patients (49.3%) had KRAS mutations, 9 patients (6.5%) had BRAF mutations, 3 (2.2%) had NRAS mutations, 1 (0.7%) had an ARAF mutation, 1 (0.7%) had a RAF-1 mutation, 7 (5.1%) had ERRB2 amplifications, 25 (18.1%) had PIK3CA mutations, 15 (10.9%) had PTEN mutations, 4 (2.9%) had AKT mutations, and 3 (2.2%) had MET amplifications.\nNOS: Not otherwise specified; MSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high.\n RAS mutations Overall, RAS mutations were present in 51.4% of our mCRC patients, RAF mutations were seen in 7.2%, of which RAS+RAF concurrent mutations were seen in 1.4%. The remainder (42.8%) were RAS/RAF wild type (Figure 1). The most common RAS mutations were KRAS mutations of exon 2 (codons 12 and 13) including G12D (32.4%), G13D (14.1%), G12V (11.3%), G12S (9.9%), G12C (8.5%), and G12A (2.8%, Figure 2). Beyond the well-established point mutations in codons 12 and 13 of exon 2 of KRAS, we identified mutations in codon 61 of exon 3 (Q61H, 1.4%; Q61K, 1.4%; Q61L, 1.4%), codon 117 of exon 4 (K117N, 1.4%), and codon 146 of exon 4 (A146V, 1.4%; A146V^sub, 1.4%; A146T, 4.2%). Two mutations (2.8%) in codon 61 (exon 3) of NRAS were also detected. Altogether, these non-KRAS exon 2 mutations constitute 15.5% of RAS mutations.\nRAS+RAF mutations are not included in RAS or RAF percentages.\nArrows denote common mutations of exon 2 (codon 12-13). Brackets denote panel of extended RAS mutations or novel RAS mutation.\nIn our patient population, 2 KRAS amplifications (2.8%) and 1 NRAS amplification (1.4%) were identified. One patient was a 51-year-old female with KRAS amplified rectal cancer with synchronous diffuse metastases (lung and liver). Her best overall response to standard first-line combination chemotherapy (5-fluorouracil (5-FU) and irinotecan or FOLFIRI) plus anti-EGFR therapy (panitumumab) was stable disease (SD) for 6 months. The other patient with KRAS amplification was a 51-year-old male diagnosed with right-sided colon cancer and synchronous metastases to the liver and peritoneum who had rapid progression on first- and second-line non-anti-EGFR based therapies. Our 74-year-old male patient with NRAS amplification presented with poorly differentiated rectosigmoid adenocarcinoma and synchronous diffuse metastases (liver, mesentery, and bones) and experienced progressive disease (PD) at 2 months on second-line FOLFIRI + cetuximab. 
Notably, a novel KRASR68S1 alteration (Figure 2) was identified in a 41-year-old female (1.4%) with rectal cancer and synchronous metastases to the liver and retroperitoneal and supraclavicular lymph nodes who experienced PD at 2 months on anti-EGFR therapy with second-line irinotecan + cetuximab.\nOverall, RAS mutations were present in 51.4% of our mCRC patients, RAF mutations were seen in 7.2%, of which RAS+RAF concurrent mutations were seen in 1.4%. The remainder (42.8%) were RAS/RAF wild type (Figure 1). The most common RAS mutations were KRAS mutations of exon 2 (codons 12 and 13) including G12D (32.4%), G13D (14.1%), G12V (11.3%), G12S (9.9%), G12C (8.5%), and G12A (2.8%, Figure 2). Beyond the well-established point mutations in codons 12 and 13 of exon 2 of KRAS, we identified mutations in codon 61 of exon 3 (Q61H, 1.4%; Q61K, 1.4%; Q61L, 1.4%), codon 117 of exon 4 (K117N, 1.4%), and codon 146 of exon 4 (A146V, 1.4%; A146V^sub, 1.4%; A146T, 4.2%). Two mutations (2.8%) in codon 61 (exon 3) of NRAS were also detected. Altogether, these non-KRAS exon 2 mutations constitute 15.5% of RAS mutations.\nRAS+RAF mutations are not included in RAS or RAF percentages.\nArrows denote common mutations of exon 2 (codon 12-13). Brackets denote panel of extended RAS mutations or novel RAS mutation.\nIn our patient population, 2 KRAS amplifications (2.8%) and 1 NRAS amplification (1.4%) were identified. One patient was a 51-year-old female with KRAS amplified rectal cancer with synchronous diffuse metastases (lung and liver). Her best overall response to standard first-line combination chemotherapy (5-fluorouracil (5-FU) and irinotecan or FOLFIRI) plus anti-EGFR therapy (panitumumab) was stable disease (SD) for 6 months. The other patient with KRAS amplification was a 51-year-old male diagnosed with right-sided colon cancer and synchronous metastases to the liver and peritoneum who had rapid progression on first- and second-line non-anti-EGFR based therapies. Our 74-year-old male patient with NRAS amplification presented with poorly differentiated rectosigmoid adenocarcinoma and synchronous diffuse metastases (liver, mesentery, and bones) and experienced progressive disease (PD) at 2 months on second-line FOLFIRI + cetuximab. Notably, a novel KRASR68S1 alteration (Figure 2) was identified in a 41-year-old female (1.4%) with rectal cancer and synchronous metastases to the liver and retroperitoneal and supraclavicular lymph nodes who experienced PD at 2 months on anti-EGFR therapy with second-line irinotecan + cetuximab.\n RAF mutations A total of 11 RAF mutations (1 concurrent BRAF+RAF1 mutation) were found in 7.2% of our patients (Figure 3). Of these, BRAFV600E activating mutations (exon 15) were the most common single mutations present (40.0%). One activating BRAFL597Ralteration (exon 15) was identified (10.0%) in a 56-year-old male with bulky rectal adenocarcinoma with synchronous metastases that progressed through 9 months of first-line anti-EGFR therapy. One activating ARAFS214F alteration was also identified (10.0%) in our series of RAF mutations. This 60-year-old male patient developed multiple recurrences of rectal adenocarcinoma including, most recently, metastatic disease to the lung treated with neoadjuvant 5-FU, oxaliplatin, and irinotecan (FOLFOXIRI) followed by metastatectomy; he remains in clinical remission. 
A dual BRAFV600E+KRASA164V^subalterationwas present (10.0%) in an elderly male (age 72) with poorly differentiated right-sided colon cancer with synchronous metastases on first-line systemic combination therapy without anti-EGFR agents. Here an oncogenic RAS alteration was paired with a known activating BRAF mutation.\nArrows denote known activating mutations. Brackets denote known deactivating mutations.\nDeactivating mutations in BRAFD594G(10.0%), BRAFG466V concurrent with KRASG12S (10.0%), and BRAFG469E concurrent with RAF1S257L (10.0%) were also identified. Our patient with a deactivating BRAFD594G mutation was a 59-year-old male with moderately-poorly differentiated right-sided colon cancer with diffuse metastases that was refractory to all standard of care chemotherapy, including FOLFIRI + cetuximab, and ultimately died from progressive disease. Interestingly, he was noted to have a concurrent MET amplification. The patient with a deactivating BRAFG466V mutation concurrent with an activating KRASG12S mutation was a 51-year-old male with right-sided colon cancer with diffuse metastases that progressed with carcinomatosis while on FOLFOX and immediately following salvage debulking surgery with hyperthermic chemotherapy. Notably, he currently has achieved ongoing partial response (PR) on third-line FOLFIRI and bevacizumab (41+ cycles). Our 66-year-old female with dual deactivating BRAFG469E and activating RAF1S257L mutation presented with a right colon cancer with synchronous metastases to bone, liver, lung, and peritoneum. Her disease was refractory to FOLFOX + bevacizumab and is currently on second-line FOLFIRI + bevacizumab with a clinical benefit.\nA total of 11 RAF mutations (1 concurrent BRAF+RAF1 mutation) were found in 7.2% of our patients (Figure 3). Of these, BRAFV600E activating mutations (exon 15) were the most common single mutations present (40.0%). One activating BRAFL597Ralteration (exon 15) was identified (10.0%) in a 56-year-old male with bulky rectal adenocarcinoma with synchronous metastases that progressed through 9 months of first-line anti-EGFR therapy. One activating ARAFS214F alteration was also identified (10.0%) in our series of RAF mutations. This 60-year-old male patient developed multiple recurrences of rectal adenocarcinoma including, most recently, metastatic disease to the lung treated with neoadjuvant 5-FU, oxaliplatin, and irinotecan (FOLFOXIRI) followed by metastatectomy; he remains in clinical remission. A dual BRAFV600E+KRASA164V^subalterationwas present (10.0%) in an elderly male (age 72) with poorly differentiated right-sided colon cancer with synchronous metastases on first-line systemic combination therapy without anti-EGFR agents. Here an oncogenic RAS alteration was paired with a known activating BRAF mutation.\nArrows denote known activating mutations. Brackets denote known deactivating mutations.\nDeactivating mutations in BRAFD594G(10.0%), BRAFG466V concurrent with KRASG12S (10.0%), and BRAFG469E concurrent with RAF1S257L (10.0%) were also identified. Our patient with a deactivating BRAFD594G mutation was a 59-year-old male with moderately-poorly differentiated right-sided colon cancer with diffuse metastases that was refractory to all standard of care chemotherapy, including FOLFIRI + cetuximab, and ultimately died from progressive disease. Interestingly, he was noted to have a concurrent MET amplification. 
The patient with a deactivating BRAFG466V mutation concurrent with an activating KRASG12S mutation was a 51-year-old male with right-sided colon cancer with diffuse metastases that progressed with carcinomatosis while on FOLFOX and immediately following salvage debulking surgery with hyperthermic chemotherapy. Notably, he currently has achieved ongoing partial response (PR) on third-line FOLFIRI and bevacizumab (41+ cycles). Our 66-year-old female with dual deactivating BRAFG469E and activating RAF1S257L mutation presented with a right colon cancer with synchronous metastases to bone, liver, lung, and peritoneum. Her disease was refractory to FOLFOX + bevacizumab and is currently on second-line FOLFIRI + bevacizumab with a clinical benefit.\n ERBB2 amplifications Seven patients (5.1%) were found to have ERRB2 amplified tumors with one having a concurrent KRASG12D mutation (Figure 4). The majority of these tumors were MSS (87.5%) with HER2 copy numbers that ranged from 9-190 (Table 2). Notably, all ERRB2 amplified tumors were located in the rectosigmoid colon as its primary disease site. Four patients with RAS wild-type ERBB2 amplification received anti-EGFR therapy, 3 experienced SD ≥ 4 months (2 first-line and 1 second-line) and 1 (second-line) experienced a PR lasting for 5 months as their best overall response to anti-EGFR therapy. The concurrent ERRB2 amplified and KRASG12D mutated tumor was found in a 58-year-old male with moderately differentiated rectal adenocarcinoma with synchronous solitary liver metastasis treated with neoadjuvant 5-FU, oxaliplatin (FOLFOX) followed by hepatic resection and resection of primary – he is currently under surveillance and without evidence of disease.\nEGFR: Epidermal growth factor receptor; MSI: Microsatellite instability; SD: Stable disease; NR: Not reported; MSS: Microsatellite stable; PR: Partial response.\nSeven patients (5.1%) were found to have ERRB2 amplified tumors with one having a concurrent KRASG12D mutation (Figure 4). The majority of these tumors were MSS (87.5%) with HER2 copy numbers that ranged from 9-190 (Table 2). Notably, all ERRB2 amplified tumors were located in the rectosigmoid colon as its primary disease site. Four patients with RAS wild-type ERBB2 amplification received anti-EGFR therapy, 3 experienced SD ≥ 4 months (2 first-line and 1 second-line) and 1 (second-line) experienced a PR lasting for 5 months as their best overall response to anti-EGFR therapy. The concurrent ERRB2 amplified and KRASG12D mutated tumor was found in a 58-year-old male with moderately differentiated rectal adenocarcinoma with synchronous solitary liver metastasis treated with neoadjuvant 5-FU, oxaliplatin (FOLFOX) followed by hepatic resection and resection of primary – he is currently under surveillance and without evidence of disease.\nEGFR: Epidermal growth factor receptor; MSI: Microsatellite instability; SD: Stable disease; NR: Not reported; MSS: Microsatellite stable; PR: Partial response.\n AKT1/2 mutations Three patients (2.2%) had AKT1E17K mutations while 1 patient (0.7%) had an AKT2E17K mutation (Table 3). Of these, a majority had concurrent mutations (75%) and tumors located in the right colon (75%). One AKT1E17K mutated tumor was found to have concurrent BRAFV600E +KRASA164V^subalterations with phenotype described above. 
One AKT1E17K mutated tumor had concurrent alterations in KRASA146T+PIK3CAG106V and was found in a 61-year-old male with initial right-sided colon cancer that recurred with metastases to the liver showing moderately differentiated colon adenocarcinoma. His tumor was characterized by aggressive features, including metastatic disease recurrence following a diagnosis of stage I disease, and development of bony metastases within the first year of recurrence. A concurrent AKT2E17K+KRASG12C altered tumor was found in a 57-year-old female with originally moderately differentiated sigmoid adenocarcinoma that was resected but recurred with metastases to the retroperitoneal lymph nodes currently on first-line FOLFIRI + bevacizumab.\n* For concurrent RAS and BRAF mutations only, NOS: Not otherwise specified; MSI: Microsatellite instability; NR: Not reported; MSS: Microsatellite stable.\nThree patients (2.2%) had AKT1E17K mutations while 1 patient (0.7%) had an AKT2E17K mutation (Table 3). Of these, a majority had concurrent mutations (75%) and tumors located in the right colon (75%). One AKT1E17K mutated tumor was found to have concurrent BRAFV600E +KRASA164V^subalterations with phenotype described above. One AKT1E17K mutated tumor had concurrent alterations in KRASA146T+PIK3CAG106V and was found in a 61-year-old male with initial right-sided colon cancer that recurred with metastases to the liver showing moderately differentiated colon adenocarcinoma. His tumor was characterized by aggressive features, including metastatic disease recurrence following a diagnosis of stage I disease, and development of bony metastases within the first year of recurrence. A concurrent AKT2E17K+KRASG12C altered tumor was found in a 57-year-old female with originally moderately differentiated sigmoid adenocarcinoma that was resected but recurred with metastases to the retroperitoneal lymph nodes currently on first-line FOLFIRI + bevacizumab.\n* For concurrent RAS and BRAF mutations only, NOS: Not otherwise specified; MSI: Microsatellite instability; NR: Not reported; MSS: Microsatellite stable.\n PIK3CA and PTEN mutations In total, we identified 25 patients (18.1%) with PIK3CA alterations in our cohort (Table 4). The most common primary disease sites included cecum (36.0%), sigmoid colon (12.0%), and rectum (12.0%). Notably, right-sided colon cancers comprised nearly half (48.0%) of tumors with PIK3CA alterations. The most commonly identified variants were E545K (24.0%, exon 9), E542K (12.0% exon 9), E110del (8.0%), and Q546K (8.0%). Tumors with PIK3CA alterations frequently had concurrent mutations in the RAS-RAF-MAPK signaling pathway. A majority (19 or 76.0%) had concurrent mutations in KRAS (G12D 36.0%, G12S 12.0%, G13D 8.0%, and A146T 8.0%). Two patients (8.0%) with PIK3CA tumors were found to have concurrent deactivating BRAF mutations (G466V and G469E). Notably, these 2 patients had additional alterations in KRASG12S and RAF1S257L, respectively, with phenotypes described above. We also identified additional alterations in the PTEN-PIK3CA-AKT signaling pathway in our group of PIK3CA altered tumors (Figure 5). Five patients (20.0%) with PIK3CA altered tumors also had PTEN mutations, while 1 patient (4.0%) had a dual PIK3CA and AKT1 mutated tumor. 
Of note, 1 female patient (age 55) with a dual PIK3CA and PTEN mutation had a rectal tumor demonstrating MSI-H and developed a solitary liver metastasis that has since been resected and treated with adjuvant FOLFOX – she is currently in remission.\nMSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high.\nValues in parentheses represent numbers and not percentages.\nIn total, we identified 25 patients (18.1%) with PIK3CA alterations in our cohort (Table 4). The most common primary disease sites included cecum (36.0%), sigmoid colon (12.0%), and rectum (12.0%). Notably, right-sided colon cancers comprised nearly half (48.0%) of tumors with PIK3CA alterations. The most commonly identified variants were E545K (24.0%, exon 9), E542K (12.0% exon 9), E110del (8.0%), and Q546K (8.0%). Tumors with PIK3CA alterations frequently had concurrent mutations in the RAS-RAF-MAPK signaling pathway. A majority (19 or 76.0%) had concurrent mutations in KRAS (G12D 36.0%, G12S 12.0%, G13D 8.0%, and A146T 8.0%). Two patients (8.0%) with PIK3CA tumors were found to have concurrent deactivating BRAF mutations (G466V and G469E). Notably, these 2 patients had additional alterations in KRASG12S and RAF1S257L, respectively, with phenotypes described above. We also identified additional alterations in the PTEN-PIK3CA-AKT signaling pathway in our group of PIK3CA altered tumors (Figure 5). Five patients (20.0%) with PIK3CA altered tumors also had PTEN mutations, while 1 patient (4.0%) had a dual PIK3CA and AKT1 mutated tumor. Of note, 1 female patient (age 55) with a dual PIK3CA and PTEN mutation had a rectal tumor demonstrating MSI-H and developed a solitary liver metastasis that has since been resected and treated with adjuvant FOLFOX – she is currently in remission.\nMSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high.\nValues in parentheses represent numbers and not percentages.\n MET amplifications Three patients (2.2%) in our series had MET amplifications (Table 5). Two-thirds of these tumors were MSS, located in the right colon, and associated with concurrent mutations in RAS or RAF genes. One 67-year-old male was initially diagnosed with right-sided colon cancer (KRASG13D+MET alterations present) and synchronous liver metastases. His course has been punctuated by recurrent metastases to the liver and lungs despite several systemic and regional therapies. Another right-sided colon cancer was identified with both a deactivating BRAFD594G mutation and MET amplification with aggressive phenotype described above. A third patient was a 27-year-old male with primary rectal adenocarcinoma that recurred with metastases to the liver and retroperitoneal lymph nodes and refractory to capecitabine + irinotecan + cetuximab. In particular, 2 of 3 patients with MET amplications and RAS/BRAFV600E wild-type tumors were refractory to anti-EGFR-based therapies.\n* For concurrent RAS and BRAF mutations only, MSI: Microsatellite instability; NOS: Not otherwise specified; NR: Not reported; MSS: Microsatellite stable.\nThree patients (2.2%) in our series had MET amplifications (Table 5). Two-thirds of these tumors were MSS, located in the right colon, and associated with concurrent mutations in RAS or RAF genes. One 67-year-old male was initially diagnosed with right-sided colon cancer (KRASG13D+MET alterations present) and synchronous liver metastases. 
His course has been punctuated by recurrent metastases to the liver and lungs despite several systemic and regional therapies. Another right-sided colon cancer was identified with both a deactivating BRAFD594G mutation and MET amplification with aggressive phenotype described above. A third patient was a 27-year-old male with primary rectal adenocarcinoma that recurred with metastases to the liver and retroperitoneal lymph nodes and refractory to capecitabine + irinotecan + cetuximab. In particular, 2 of 3 patients with MET amplications and RAS/BRAFV600E wild-type tumors were refractory to anti-EGFR-based therapies.\n* For concurrent RAS and BRAF mutations only, MSI: Microsatellite instability; NOS: Not otherwise specified; NR: Not reported; MSS: Microsatellite stable.\n Hypermutant status The majority of our 138 patients with mCRC had tumors with <9 clinically significant alterations (121 or 87.7%) as described by FoundationOne reports (Table 6). The majority of these tumors were located in the left colon and all were MSS. Fourteen patients (10.1%) had 9-16 total alterations while only 3 patients (2.2%) were allocated to the highest number of clinically significant alterations category (17-25). Notably, 2 patients with MSI-H tumors were identified in the highest number of alterations group. One 55-year-old female patient was found to have a dual PIK3CA and PTEN mutated rectal tumor with phenotype described previously. Tumor mutational burden (TMB) from FoundationOne report showed a high TMB of 33 mutations per megabase (Mb). The other was a 47-year-old female with KRASG12V mutated metastatic rectal cancer that has progressed through 3 lines of systemic therapy and currently on anti-PD-1 therapy with pembrolizumab with a clinical response. Again, TMB corroborated her findings of a relatively hypermutated tumor with a TMB of 31 mutations/Mb. The third patient with a hypermutant FoundationOne profile had a MSS tumor with an associated POLEV411L mutation. Interestingly, this patient was elderly (age 80), had a right colon tumor, and had a recurrence pattern consistent with locoregional recurrence. This patient demonstrated a TMB of 122 mutations/Mb, which was the highest among the cohort.\nMSI: Microsatellite instability; NOS: Not otherwise specified; MSS: Microsatellite stable; NR: Not reported; MSI-H: MSI high.\nThe majority of our 138 patients with mCRC had tumors with <9 clinically significant alterations (121 or 87.7%) as described by FoundationOne reports (Table 6). The majority of these tumors were located in the left colon and all were MSS. Fourteen patients (10.1%) had 9-16 total alterations while only 3 patients (2.2%) were allocated to the highest number of clinically significant alterations category (17-25). Notably, 2 patients with MSI-H tumors were identified in the highest number of alterations group. One 55-year-old female patient was found to have a dual PIK3CA and PTEN mutated rectal tumor with phenotype described previously. Tumor mutational burden (TMB) from FoundationOne report showed a high TMB of 33 mutations per megabase (Mb). The other was a 47-year-old female with KRASG12V mutated metastatic rectal cancer that has progressed through 3 lines of systemic therapy and currently on anti-PD-1 therapy with pembrolizumab with a clinical response. Again, TMB corroborated her findings of a relatively hypermutated tumor with a TMB of 31 mutations/Mb. The third patient with a hypermutant FoundationOne profile had a MSS tumor with an associated POLEV411L mutation. 
Interestingly, this patient was elderly (age 80), had a right colon tumor, and had a recurrence pattern consistent with locoregional recurrence. This patient demonstrated a TMB of 122 mutations/Mb, which was the highest among the cohort.\nMSI: Microsatellite instability; NOS: Not otherwise specified; MSS: Microsatellite stable; NR: Not reported; MSI-H: MSI high.", "The molecular results from FoundationOne testing of tumors from 138 mCRC patients are summarized in Table 1. The median age of our study group was 56 years (range 27-88) with 59.4% (82) males and 40.6% (56) females. The most common ethnicity was White (85, 61.6%) followed by Asian (29, 21.0%). The most common sites of primary were sigmoid colon (33.3%), rectum (19.6%), and cecum (15.2%). Sixty-eight patients (49.3%) had KRAS mutations, 9 patients (6.5%) had BRAF mutations, 3 (2.2%) had NRAS mutations, 1 (0.7%) had an ARAF mutation, 1 (0.7%) had a RAF-1 mutation, 7 (5.1%) had ERRB2 amplifications, 25 (18.1%) had PIK3CA mutations, 15 (10.9%) had PTEN mutations, 4 (2.9%) had AKT mutations, and 3 (2.2%) had MET amplifications.\nNOS: Not otherwise specified; MSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high.", "Overall, RAS mutations were present in 51.4% of our mCRC patients, RAF mutations were seen in 7.2%, of which RAS+RAF concurrent mutations were seen in 1.4%. The remainder (42.8%) were RAS/RAF wild type (Figure 1). The most common RAS mutations were KRAS mutations of exon 2 (codons 12 and 13) including G12D (32.4%), G13D (14.1%), G12V (11.3%), G12S (9.9%), G12C (8.5%), and G12A (2.8%, Figure 2). Beyond the well-established point mutations in codons 12 and 13 of exon 2 of KRAS, we identified mutations in codon 61 of exon 3 (Q61H, 1.4%; Q61K, 1.4%; Q61L, 1.4%), codon 117 of exon 4 (K117N, 1.4%), and codon 146 of exon 4 (A146V, 1.4%; A146V^sub, 1.4%; A146T, 4.2%). Two mutations (2.8%) in codon 61 (exon 3) of NRAS were also detected. Altogether, these non-KRAS exon 2 mutations constitute 15.5% of RAS mutations.\nRAS+RAF mutations are not included in RAS or RAF percentages.\nArrows denote common mutations of exon 2 (codon 12-13). Brackets denote panel of extended RAS mutations or novel RAS mutation.\nIn our patient population, 2 KRAS amplifications (2.8%) and 1 NRAS amplification (1.4%) were identified. One patient was a 51-year-old female with KRAS amplified rectal cancer with synchronous diffuse metastases (lung and liver). Her best overall response to standard first-line combination chemotherapy (5-fluorouracil (5-FU) and irinotecan or FOLFIRI) plus anti-EGFR therapy (panitumumab) was stable disease (SD) for 6 months. The other patient with KRAS amplification was a 51-year-old male diagnosed with right-sided colon cancer and synchronous metastases to the liver and peritoneum who had rapid progression on first- and second-line non-anti-EGFR based therapies. Our 74-year-old male patient with NRAS amplification presented with poorly differentiated rectosigmoid adenocarcinoma and synchronous diffuse metastases (liver, mesentery, and bones) and experienced progressive disease (PD) at 2 months on second-line FOLFIRI + cetuximab. 
Notably, a novel KRASR68S1 alteration (Figure 2) was identified in a 41-year-old female (1.4%) with rectal cancer and synchronous metastases to the liver and retroperitoneal and supraclavicular lymph nodes who experienced PD at 2 months on anti-EGFR therapy with second-line irinotecan + cetuximab.", "A total of 11 RAF mutations (1 concurrent BRAF+RAF1 mutation) were found in 7.2% of our patients (Figure 3). Of these, BRAFV600E activating mutations (exon 15) were the most common single mutations present (40.0%). One activating BRAFL597Ralteration (exon 15) was identified (10.0%) in a 56-year-old male with bulky rectal adenocarcinoma with synchronous metastases that progressed through 9 months of first-line anti-EGFR therapy. One activating ARAFS214F alteration was also identified (10.0%) in our series of RAF mutations. This 60-year-old male patient developed multiple recurrences of rectal adenocarcinoma including, most recently, metastatic disease to the lung treated with neoadjuvant 5-FU, oxaliplatin, and irinotecan (FOLFOXIRI) followed by metastatectomy; he remains in clinical remission. A dual BRAFV600E+KRASA164V^subalterationwas present (10.0%) in an elderly male (age 72) with poorly differentiated right-sided colon cancer with synchronous metastases on first-line systemic combination therapy without anti-EGFR agents. Here an oncogenic RAS alteration was paired with a known activating BRAF mutation.\nArrows denote known activating mutations. Brackets denote known deactivating mutations.\nDeactivating mutations in BRAFD594G(10.0%), BRAFG466V concurrent with KRASG12S (10.0%), and BRAFG469E concurrent with RAF1S257L (10.0%) were also identified. Our patient with a deactivating BRAFD594G mutation was a 59-year-old male with moderately-poorly differentiated right-sided colon cancer with diffuse metastases that was refractory to all standard of care chemotherapy, including FOLFIRI + cetuximab, and ultimately died from progressive disease. Interestingly, he was noted to have a concurrent MET amplification. The patient with a deactivating BRAFG466V mutation concurrent with an activating KRASG12S mutation was a 51-year-old male with right-sided colon cancer with diffuse metastases that progressed with carcinomatosis while on FOLFOX and immediately following salvage debulking surgery with hyperthermic chemotherapy. Notably, he currently has achieved ongoing partial response (PR) on third-line FOLFIRI and bevacizumab (41+ cycles). Our 66-year-old female with dual deactivating BRAFG469E and activating RAF1S257L mutation presented with a right colon cancer with synchronous metastases to bone, liver, lung, and peritoneum. Her disease was refractory to FOLFOX + bevacizumab and is currently on second-line FOLFIRI + bevacizumab with a clinical benefit.", "Seven patients (5.1%) were found to have ERRB2 amplified tumors with one having a concurrent KRASG12D mutation (Figure 4). The majority of these tumors were MSS (87.5%) with HER2 copy numbers that ranged from 9-190 (Table 2). Notably, all ERRB2 amplified tumors were located in the rectosigmoid colon as its primary disease site. Four patients with RAS wild-type ERBB2 amplification received anti-EGFR therapy, 3 experienced SD ≥ 4 months (2 first-line and 1 second-line) and 1 (second-line) experienced a PR lasting for 5 months as their best overall response to anti-EGFR therapy. 
The concurrent ERRB2 amplified and KRASG12D mutated tumor was found in a 58-year-old male with moderately differentiated rectal adenocarcinoma with synchronous solitary liver metastasis treated with neoadjuvant 5-FU, oxaliplatin (FOLFOX) followed by hepatic resection and resection of primary – he is currently under surveillance and without evidence of disease.\nEGFR: Epidermal growth factor receptor; MSI: Microsatellite instability; SD: Stable disease; NR: Not reported; MSS: Microsatellite stable; PR: Partial response.", "Three patients (2.2%) had AKT1E17K mutations while 1 patient (0.7%) had an AKT2E17K mutation (Table 3). Of these, a majority had concurrent mutations (75%) and tumors located in the right colon (75%). One AKT1E17K mutated tumor was found to have concurrent BRAFV600E +KRASA164V^subalterations with phenotype described above. One AKT1E17K mutated tumor had concurrent alterations in KRASA146T+PIK3CAG106V and was found in a 61-year-old male with initial right-sided colon cancer that recurred with metastases to the liver showing moderately differentiated colon adenocarcinoma. His tumor was characterized by aggressive features, including metastatic disease recurrence following a diagnosis of stage I disease, and development of bony metastases within the first year of recurrence. A concurrent AKT2E17K+KRASG12C altered tumor was found in a 57-year-old female with originally moderately differentiated sigmoid adenocarcinoma that was resected but recurred with metastases to the retroperitoneal lymph nodes currently on first-line FOLFIRI + bevacizumab.\n* For concurrent RAS and BRAF mutations only, NOS: Not otherwise specified; MSI: Microsatellite instability; NR: Not reported; MSS: Microsatellite stable.", "In total, we identified 25 patients (18.1%) with PIK3CA alterations in our cohort (Table 4). The most common primary disease sites included cecum (36.0%), sigmoid colon (12.0%), and rectum (12.0%). Notably, right-sided colon cancers comprised nearly half (48.0%) of tumors with PIK3CA alterations. The most commonly identified variants were E545K (24.0%, exon 9), E542K (12.0% exon 9), E110del (8.0%), and Q546K (8.0%). Tumors with PIK3CA alterations frequently had concurrent mutations in the RAS-RAF-MAPK signaling pathway. A majority (19 or 76.0%) had concurrent mutations in KRAS (G12D 36.0%, G12S 12.0%, G13D 8.0%, and A146T 8.0%). Two patients (8.0%) with PIK3CA tumors were found to have concurrent deactivating BRAF mutations (G466V and G469E). Notably, these 2 patients had additional alterations in KRASG12S and RAF1S257L, respectively, with phenotypes described above. We also identified additional alterations in the PTEN-PIK3CA-AKT signaling pathway in our group of PIK3CA altered tumors (Figure 5). Five patients (20.0%) with PIK3CA altered tumors also had PTEN mutations, while 1 patient (4.0%) had a dual PIK3CA and AKT1 mutated tumor. Of note, 1 female patient (age 55) with a dual PIK3CA and PTEN mutation had a rectal tumor demonstrating MSI-H and developed a solitary liver metastasis that has since been resected and treated with adjuvant FOLFOX – she is currently in remission.\nMSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high.\nValues in parentheses represent numbers and not percentages.", "Three patients (2.2%) in our series had MET amplifications (Table 5). Two-thirds of these tumors were MSS, located in the right colon, and associated with concurrent mutations in RAS or RAF genes. 
One 67-year-old male was initially diagnosed with right-sided colon cancer (KRASG13D+MET alterations present) and synchronous liver metastases. His course has been punctuated by recurrent metastases to the liver and lungs despite several systemic and regional therapies. Another right-sided colon cancer was identified with both a deactivating BRAFD594G mutation and MET amplification with aggressive phenotype described above. A third patient was a 27-year-old male with primary rectal adenocarcinoma that recurred with metastases to the liver and retroperitoneal lymph nodes and refractory to capecitabine + irinotecan + cetuximab. In particular, 2 of 3 patients with MET amplications and RAS/BRAFV600E wild-type tumors were refractory to anti-EGFR-based therapies.\n* For concurrent RAS and BRAF mutations only, MSI: Microsatellite instability; NOS: Not otherwise specified; NR: Not reported; MSS: Microsatellite stable.", "The majority of our 138 patients with mCRC had tumors with <9 clinically significant alterations (121 or 87.7%) as described by FoundationOne reports (Table 6). The majority of these tumors were located in the left colon and all were MSS. Fourteen patients (10.1%) had 9-16 total alterations while only 3 patients (2.2%) were allocated to the highest number of clinically significant alterations category (17-25). Notably, 2 patients with MSI-H tumors were identified in the highest number of alterations group. One 55-year-old female patient was found to have a dual PIK3CA and PTEN mutated rectal tumor with phenotype described previously. Tumor mutational burden (TMB) from FoundationOne report showed a high TMB of 33 mutations per megabase (Mb). The other was a 47-year-old female with KRASG12V mutated metastatic rectal cancer that has progressed through 3 lines of systemic therapy and currently on anti-PD-1 therapy with pembrolizumab with a clinical response. Again, TMB corroborated her findings of a relatively hypermutated tumor with a TMB of 31 mutations/Mb. The third patient with a hypermutant FoundationOne profile had a MSS tumor with an associated POLEV411L mutation. Interestingly, this patient was elderly (age 80), had a right colon tumor, and had a recurrence pattern consistent with locoregional recurrence. This patient demonstrated a TMB of 122 mutations/Mb, which was the highest among the cohort.\nMSI: Microsatellite instability; NOS: Not otherwise specified; MSS: Microsatellite stable; NR: Not reported; MSI-H: MSI high.", "Comprehensive molecular characterization of 138 tumors from patients with mCRC was performed via NGS (FoundationOne) in this single-institution retrospective study. Overall, 51.4% and 7.2% of our patients with mCRC were shown to carry RAS and RAF mutations, respectively, which is concordant with frequencies historically reported in mCRC [2]. The majority of our RAS mutations were KRAS mutations of exon 2 (codons 12 and 13), which represent those identified in initial phase III trials that predicted lack of benefit from anti-EGFR therapy in mCRC [6, 7]. We also found that 15.5% of all RAS mutations in our population comprised a panel of extended RAS mutations. This is also consistent with recent data from the PRIME and CRYSTAL clinical trials, where exon 3 and 4 KRAS and exons 2, 3, and 4 NRAS mutations reflected 14-17% of RAS mutations [8, 9]. 
Identifying these rare RAS mutations has major clinical significance, given their association with anti-EGFR resistance [10].\nNotably, we identified 2 KRAS amplifications and 1 NRAS amplification, which are extremely rare and poorly characterized. These were found in 3 patients with diffusely metastatic CRC that progressed through several lines of systemic therapy including anti-EGFR therapy.\nPutative high-level amplifications of NRAS were observed in <1% of cases in the TCGA dataset, though their significance in CRC remains poorly described [5]. KRAS amplifications have been associated with acquired resistance to the EGFR inhibitors cetuximab or panitumumab in CRC preclinical models [11]. To our knowledge, we are the first to report a novel KRASR68S1 alteration, which was associated with a particularly aggressive phenotype and PD at 2 months on anti-EGFR therapy with cetuximab.\nThe majority of RAF mutations found in our population were BRAFV600E activating mutations (exon 15), which have been historically associated with poorer survival, resistance to chemotherapy, and lack of clinical benefit with anti-EGFR therapy in mCRC [12–15]. We also identified a lone BRAFL597R alteration (exon 15), which is poorly described in CRC but has been shown to similarly activate RAF-MEK-ERK signaling in melanoma in vitro [16]. Of note, this patient received 9 months of first-line anti-EGFR therapy, though our sample size of 1 precludes any meaningful generalizations. One ARAFS214F alteration was also identified in a patient whose course has been characterized by multiple recurrences of rectal cancer. Mutations in ARAF have been linked as oncogenic drivers in lung adenocarcinoma, are exceedingly rare in CRC, and comprise approximately 2% of cases in the CRC dataset from TCGA [5, 17]. Treatment with the oral RAF inhibitor sorafenib has demonstrated a prolonged response in a case of refractory non-small-cell lung cancer and rapid responses in patients with refractory histiocytic neoplasms bearing somatic ARAF mutations [17, 18].\nDespite a previous conception that KRAS and BRAF mutations are mutually exclusive, we found 1 dual BRAFV600E+KRASA164V^sub mutated tumor that, in our case, was associated with poor prognostic features [19]. One case of a concurrent BRAFG466V+KRASG12S mutation and one patient with a concurrent BRAFG469E+RAF1S257L mutation were present in our cohort. BRAF mutants G466V and G469E have been shown to represent variants with impaired or complete loss of kinase activity in vitro [20, 21]. Nevertheless, it has been shown that tumorigenesis is promoted in the presence of deactivating BRAF mutations through oncogenic RAS mutation and/or CRAF (or RAF-1) signaling [21, 22]. In our study, one deactivating BRAFG466V mutation was paired with an oncogenic KRASG12S mutation, and one deactivating BRAFG469E mutation was paired with an oncogenic RAF1S257L alteration, supporting the notion of an evolutionary adaptation in the cancer genome to overcome BRAF mutations with impaired function. In both cases there were associated features of poor prognosis, though the dual BRAFG466V+KRASG12S mutated tumor has recently seen disease control on 41 cycles of FOLFIRI and bevacizumab, which may argue for varying degrees of relative contribution from each mutation to tumor phenotype. Interestingly, one patient with a deactivating BRAFD594G mutation was refractory to all lines of treatment, including anti-EGFR therapy, and ultimately died of aggressive disease.
This is at odds with recent reports suggesting that the BRAFD594G mutation may be an indicator of good prognosis [23]. It is unclear whether this patient's concurrent MET amplification may have contributed to his overall poor prognosis and therapeutic resistance.\nERBB2 (HER2/neu) amplifications were found in 5.1% of our mCRC patients, the majority in KRAS wild-type tumors (except for 1 with a concurrent ERBB2+KRASG12D alteration). Another FoundationOne analysis of >10,000 cases of gastrointestinal malignancies identified HER2 amplifications and mutations in 3.0% and 4.8%, respectively, of cases from the CRC cohort [24]. Our patients with HER2 amplified tumors appeared to have shortened clinical benefit with anti-EGFR therapy, which is consistent with the recent phase II HERACLES trial, where none of the patients with HER2 amplified, RAS/BRAF wild-type metastatic colorectal tumors had a response to anti-EGFR therapy [25]. Similar to the preponderance of left colon primary tumors in the HERACLES trial, all of our HER2 amplified tumors were located in the rectosigmoid colon. In short, the identification of HER2 amplifications in patients with RAS/BRAF wild-type metastatic colorectal tumors is of major significance given the clinical benefit derived from dual HER2-directed therapy including trastuzumab + lapatinib (HERACLES) or trastuzumab + pertuzumab (MyPathway) [25, 26].\nPIK3CA, PTEN, and AKT mutations were identified in 18.1% (25), 10.9% (15), and 2.9% (4) of our mCRC patients, respectively. Many of these patients had metastatic tumors associated with aggressive features. In addition, 75% of AKT mutated tumors were located in the right colon, almost half (48.0%) of PIK3CA mutated tumors were right-sided colon cancers, and concurrent mutations in RAS-RAF-MAPK or PTEN-PIK3CA-AKT signaling were common. For example, 19 patients (76.0%) with PIK3CA mutations also had concurrent KRAS mutations, while 5 (20.0%) and 1 (4.0%) with PIK3CA altered tumors also had concurrent PTEN and AKT1 mutations, respectively. Mutations in mediators of the PTEN-PIK3CA-AKT signaling pathway in CRC have been associated with poorer prognosis and lack of clinical response to anti-EGFR therapy [27, 28]. For PIK3CA mutations in particular, prior studies have demonstrated that exon 9 mutations had no effect while exon 20 mutations were associated with resistance to anti-EGFR therapy [29]. However, this differential effect by exon has not been supported by a recent meta-analysis [30]. Given the high rate of concurrent RAS mutations seen with PIK3CA and related pathway mutations, a definitive association between resistance to EGFR inhibition and PTEN-PIK3CA-AKT pathway mutations is difficult to make. Further studies are needed to resolve this issue.\nThree patients (2.2%) demonstrated MET amplifications associated with poor prognostic features. MET amplification and increased c-MET expression have also been associated with an aggressive phenotype and therapeutic resistance, particularly to MEK inhibition, in mCRC [31, 32]. Interestingly, we observed anti-EGFR refractoriness in 2 of our patients with MET amplifications despite the presence of a RAS wild-type phenotype and lack of activating BRAF mutations. This is consistent with preclinical data suggesting MET activation as a mechanism of resistance to anti-EGFR therapy [33].\nLastly, we identified 3 patients (2.2%) with tumors categorized in the highest number of clinically significant alterations group (17-25) that also demonstrated high TMB as per FoundationOne.
TMB categories per FoundationOne testing have been validated in melanoma patients treated with PD-1 blockade [34]. Response to PD-1 inhibitors was significantly superior in patients with high TMB (>23.1 mutations/MB) compared to intermediate or low TMB (3.2-23.1 mutations/MB and <3.2 mutations/MB, respectively). Furthermore, a recent phase II study showed that patients with advanced urothelial cancer who responded to the programmed death ligand 1 (PD-L1) inhibitor atezolizumab had a significantly higher TMB (median 12.4 mutations/Mb) than non-responders (median 6.4 mutations/Mb, p < 0.0001) [35]. Two patients had MSI-H tumors while 1 hypermutant tumor was MSS and harbored a POLE mutation. Interestingly, 42.9% of tumors with 9-16 clinically significant alterations were located in the right colon while one-third of tumors with 17-25 total alterations were located in the right colon; tumors with <9 number of alterations were predominantly located in the left colon. In the CRC dataset from TCGA, 75% of hypermutated tumors arose from the right colon yet not all of them were MSI-H [5]. Mutations in polymerase ε or POLE were found among 25% of hypermutated tumors in this cohort. Mutations in POLE have been shown to contribute to an ultramutated yet MSS phenotype in colorectal tumors [36]. A recent NGS study confirmed that increasing mutational load correlated with MSI yet colorectal tumors with the highest mutational burden that were distinct from MSI tumors all harbored POLE mutations [37]. Furthermore, mismatch repair-deficiency or MSI has recently been shown to predict clinical benefit to immune checkpoint blockade with anti-PD-1 therapy in mCRC [38]. The characterization of mutational load in CRC may serve as a better indicator than MSI status in determining a hypermutant profile that could predict benefit from immunotherapy. Our findings are hypothesis generating and offer support to consider molecular analysis of tumors to determine the total number of alterations as a potential correlate to MSI and candidacy for anti-PD-1 therapy in mCRC.\nFuture studies of larger size and, ideally, prospective design will be helpful in corroborating associations between molecular alterations of interest described in our study and prognosis, resistance to EGFR inhibition, and/or ability to be targeted for therapy in mCRC. Comparative genomic analyses have identified a high level of concordance particularly for RAS, BRAF, and PIK3CA mutations between colorectal primary and metastatic tumors [39, 40]. However, other molecular alterations may differ based on the site of tumor and/or exposure to chemotherapy [41–44]. Although such mixed results are likely dependent on the specific mutation that is profiled, other factors including specimen integrity and sampling method may also contribute to heterogeneity. Indeed, further analyses are needed to describe the concordance or discordance of other mutations across tumor sites and treatment effects in mCRC, and careful consideration in design will be needed in order to account for confounding factors as described above.\nIn conclusion, comprehensive genomic profiling can uncover gene alterations beyond conventional RAS or RAF mutant subtypes that predict resistance to anti-EGFR therapy and in identifying potential therapeutic targets outside of NCCN standard treatments in mCRC. ERBB2 amplified tumors commonly originate from the rectosigmoid colon, are predominantly RAS/BRAF wild-type, and may predict benefit to HER2-directed therapy. 
Hypermutant tumors or tumors with POLE mutations may predict benefit to anti-PD-1 therapy. Our findings are hypothesis generating and warrant further investigation in larger datasets and in prospective settings." ]
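The RAS/RAF breakdown reported above treats tumors carrying both a RAS and a RAF mutation as their own category; per the figure legend, RAS+RAF mutated tumors are not counted in the RAS-only or RAF-only percentages. As a minimal sketch of that bookkeeping (the gene groupings follow the genes named in this study, but the helper names and per-tumor records are hypothetical and not drawn from the cohort data):

```python
from collections import Counter

RAS_GENES = {"KRAS", "NRAS"}          # RAS-family genes reported in this cohort
RAF_GENES = {"BRAF", "ARAF", "RAF1"}  # RAF-family genes reported in this cohort

def classify_ras_raf(mutated_genes):
    """Assign a tumor to one mutually exclusive RAS/RAF category."""
    has_ras = bool(RAS_GENES & set(mutated_genes))
    has_raf = bool(RAF_GENES & set(mutated_genes))
    if has_ras and has_raf:
        return "RAS+RAF"  # tallied separately, as in the figure legend above
    if has_ras:
        return "RAS"
    if has_raf:
        return "RAF"
    return "RAS/RAF wild-type"

# Hypothetical per-tumor gene lists, for illustration only (not cohort data).
cohort = [["KRAS", "PIK3CA"], ["BRAF"], ["KRAS", "BRAF"], []]
counts = Counter(classify_ras_raf(genes) for genes in cohort)
for category, n in counts.most_common():
    print(f"{category}: {100 * n / len(cohort):.1f}%")
```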
[ "INTRODUCTION", "RESULTS", "Study population", "RAS mutations", "RAF mutations", "ERBB2 amplifications", "AKT1/2 mutations", "PIK3CA and PTEN mutations", "MET amplifications", "Hypermutant status", "DISCUSSION", "MATERIALS AND METHODS" ]
[ "Colorectal cancer (CRC) remains the third leading cause of cancer death in both men and women in the United States with an estimated 134,490 new cases and 49,190 deaths in 2016 [1]. Recent advances in the treatment of metastatic CRC (mCRC) have identified improved outcomes with the addition of epidermal growth factor receptor (EGFR)-targeting agents to conventional combination cytotoxic therapy in patients with extended RAS wild-type tumors. In contrast, activating mutations in the RAS gene (KRAS or NRAS, present in approximately 50% of cases of mCRC) and BRAF gene (present in about 5% of mCRC patients) have been associated with lack of clinically meaningful benefit or harm when anti-EGFR therapy is employed [2]. The identification of candidates for anti-EGFR therapy through the exclusion of RAS and BRAF mutations in mCRC serves as a model of selecting optimal therapy based on patient genomic profiles and molecular phenotypes.\nSeveral decades of genomic studies, including the use of more recent next-generation sequencing (NGS), have expedited the search of genetic alterations for potential therapeutic targeting in CRC [3, 4]. Recently, comprehensive molecular characterization of 224 colorectal tumors was performed by The Cancer Genome Atlas (TCGA) Network [5]. Sixteen percent of colorectal tumors were found to be hypermutated and more commonly found in the right colon with 75% of these cases demonstrating expectedly high microsatellite instability (MSI-H). Twenty-four genes were identified to have significant mutations of interest including APC, SMAD4, TP53, PIK3CA, and KRAS mutations, as expected. Interestingly, mutations, deletions, or amplifications of the ERRB gene family were found in 19% of tumors. In sum, this genomic analysis identified several molecular alterations that are considered targetable, including mediators of dysregulated WNT, RAS, and PI3K pathways such as ERRB2, ERRB3, MEK, AKT, MTOR, IGF2, and IGFR.\nThe recent identification of gene mutations and amplifications of potential significance for therapeutic purposes has led us to investigate the genomic profiles of mCRC patients using NGS (FoundationOne). Here, we describe a single-institution experience in reporting results from comprehensive genomic analysis of tumors from 138 mCRC patients. We aim to characterize genetic alterations present in our study population that have known correlates to prognosis, therapeutic resistance, and potential therapeutic targets in mCRC. In this study, we also report the existence of concurrent gene mutations rarely described in the literature and novel mutations and amplifications that can lead to targeting outside of National Comprehensive Cancer Network (NCCN) standard treatments.", " Study population The molecular results from FoundationOne testing of tumors from 138 mCRC patients are summarized in Table 1. The median age of our study group was 56 years (range 27-88) with 59.4% (82) males and 40.6% (56) females. The most common ethnicity was White (85, 61.6%) followed by Asian (29, 21.0%). The most common sites of primary were sigmoid colon (33.3%), rectum (19.6%), and cecum (15.2%). 
Sixty-eight patients (49.3%) had KRAS mutations, 9 patients (6.5%) had BRAF mutations, 3 (2.2%) had NRAS mutations, 1 (0.7%) had an ARAF mutation, 1 (0.7%) had a RAF-1 mutation, 7 (5.1%) had ERBB2 amplifications, 25 (18.1%) had PIK3CA mutations, 15 (10.9%) had PTEN mutations, 4 (2.9%) had AKT mutations, and 3 (2.2%) had MET amplifications.\nNOS: Not otherwise specified; MSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high.\n RAS mutations Overall, RAS mutations were present in 51.4% of our mCRC patients and RAF mutations were seen in 7.2%, of which RAS+RAF concurrent mutations were seen in 1.4%. The remainder (42.8%) were RAS/RAF wild type (Figure 1). The most common RAS mutations were KRAS mutations of exon 2 (codons 12 and 13) including G12D (32.4%), G13D (14.1%), G12V (11.3%), G12S (9.9%), G12C (8.5%), and G12A (2.8%, Figure 2). Beyond the well-established point mutations in codons 12 and 13 of exon 2 of KRAS, we identified mutations in codon 61 of exon 3 (Q61H, 1.4%; Q61K, 1.4%; Q61L, 1.4%), codon 117 of exon 4 (K117N, 1.4%), and codon 146 of exon 4 (A146V, 1.4%; A146V^sub, 1.4%; A146T, 4.2%). Two mutations (2.8%) in codon 61 (exon 3) of NRAS were also detected. Altogether, these mutations outside of KRAS exon 2 constitute 15.5% of RAS mutations.\nRAS+RAF mutations are not included in RAS or RAF percentages.\nArrows denote common mutations of exon 2 (codon 12-13). Brackets denote panel of extended RAS mutations or novel RAS mutation.\nIn our patient population, 2 KRAS amplifications (2.8%) and 1 NRAS amplification (1.4%) were identified. One patient was a 51-year-old female with KRAS amplified rectal cancer with synchronous diffuse metastases (lung and liver). Her best overall response to standard first-line combination chemotherapy (5-fluorouracil (5-FU) and irinotecan, or FOLFIRI) plus anti-EGFR therapy (panitumumab) was stable disease (SD) for 6 months. The other patient with KRAS amplification was a 51-year-old male diagnosed with right-sided colon cancer and synchronous metastases to the liver and peritoneum who had rapid progression on first- and second-line non-anti-EGFR based therapies. Our 74-year-old male patient with NRAS amplification presented with poorly differentiated rectosigmoid adenocarcinoma and synchronous diffuse metastases (liver, mesentery, and bones) and experienced progressive disease (PD) at 2 months on second-line FOLFIRI + cetuximab. Notably, a novel KRASR68S1 alteration (Figure 2) was identified in a 41-year-old female (1.4%) with rectal cancer and synchronous metastases to the liver and retroperitoneal and supraclavicular lymph nodes who experienced PD at 2 months on anti-EGFR therapy with second-line irinotecan + cetuximab.
\n RAF mutations A total of 11 RAF mutations (1 concurrent BRAF+RAF1 mutation) were found in 7.2% of our patients (Figure 3). Of these, BRAFV600E activating mutations (exon 15) were the most common single mutations present (40.0%). One activating BRAFL597R alteration (exon 15) was identified (10.0%) in a 56-year-old male with bulky rectal adenocarcinoma with synchronous metastases that progressed through 9 months of first-line anti-EGFR therapy. One activating ARAFS214F alteration was also identified (10.0%) in our series of RAF mutations. This 60-year-old male patient developed multiple recurrences of rectal adenocarcinoma including, most recently, metastatic disease to the lung treated with neoadjuvant 5-FU, oxaliplatin, and irinotecan (FOLFOXIRI) followed by metastasectomy; he remains in clinical remission. A dual BRAFV600E+KRASA164V^sub alteration was present (10.0%) in an elderly male (age 72) with poorly differentiated right-sided colon cancer with synchronous metastases on first-line systemic combination therapy without anti-EGFR agents. Here an oncogenic RAS alteration was paired with a known activating BRAF mutation.\nArrows denote known activating mutations. Brackets denote known deactivating mutations.\nDeactivating mutations in BRAFD594G (10.0%), BRAFG466V concurrent with KRASG12S (10.0%), and BRAFG469E concurrent with RAF1S257L (10.0%) were also identified. Our patient with a deactivating BRAFD594G mutation was a 59-year-old male with moderately-poorly differentiated right-sided colon cancer with diffuse metastases that was refractory to all standard of care chemotherapy, including FOLFIRI + cetuximab, and he ultimately died from progressive disease. Interestingly, he was noted to have a concurrent MET amplification. The patient with a deactivating BRAFG466V mutation concurrent with an activating KRASG12S mutation was a 51-year-old male with right-sided colon cancer with diffuse metastases that progressed with carcinomatosis while on FOLFOX and immediately following salvage debulking surgery with hyperthermic chemotherapy. Notably, he has achieved an ongoing partial response (PR) on third-line FOLFIRI and bevacizumab (41+ cycles). Our 66-year-old female with dual deactivating BRAFG469E and activating RAF1S257L mutations presented with a right colon cancer with synchronous metastases to bone, liver, lung, and peritoneum. Her disease was refractory to FOLFOX + bevacizumab and she is currently on second-line FOLFIRI + bevacizumab with a clinical benefit.
\n ERBB2 amplifications Seven patients (5.1%) were found to have ERBB2 amplified tumors, with one having a concurrent KRASG12D mutation (Figure 4). The majority of these tumors were MSS (87.5%) with HER2 copy numbers that ranged from 9-190 (Table 2). Notably, all ERBB2 amplified tumors had the rectosigmoid colon as their primary disease site. Four patients with RAS wild-type ERBB2 amplification received anti-EGFR therapy; 3 experienced SD ≥ 4 months (2 first-line and 1 second-line) and 1 (second-line) experienced a PR lasting for 5 months as their best overall response to anti-EGFR therapy. The concurrent ERBB2 amplified and KRASG12D mutated tumor was found in a 58-year-old male with moderately differentiated rectal adenocarcinoma with a synchronous solitary liver metastasis treated with neoadjuvant 5-FU and oxaliplatin (FOLFOX) followed by hepatic resection and resection of the primary – he is currently under surveillance and without evidence of disease.\nEGFR: Epidermal growth factor receptor; MSI: Microsatellite instability; SD: Stable disease; NR: Not reported; MSS: Microsatellite stable; PR: Partial response.
\n AKT1/2 mutations Three patients (2.2%) had AKT1E17K mutations while 1 patient (0.7%) had an AKT2E17K mutation (Table 3). Of these, a majority had concurrent mutations (75%) and tumors located in the right colon (75%). One AKT1E17K mutated tumor was found to have concurrent BRAFV600E+KRASA164V^sub alterations with the phenotype described above. One AKT1E17K mutated tumor had concurrent alterations in KRASA146T+PIK3CAG106V and was found in a 61-year-old male with initial right-sided colon cancer that recurred with metastases to the liver showing moderately differentiated colon adenocarcinoma. His tumor was characterized by aggressive features, including metastatic disease recurrence following a diagnosis of stage I disease, and development of bony metastases within the first year of recurrence. A concurrent AKT2E17K+KRASG12C altered tumor was found in a 57-year-old female with originally moderately differentiated sigmoid adenocarcinoma that was resected but recurred with metastases to the retroperitoneal lymph nodes; she is currently on first-line FOLFIRI + bevacizumab.\n* For concurrent RAS and BRAF mutations only, NOS: Not otherwise specified; MSI: Microsatellite instability; NR: Not reported; MSS: Microsatellite stable.
\n PIK3CA and PTEN mutations In total, we identified 25 patients (18.1%) with PIK3CA alterations in our cohort (Table 4). The most common primary disease sites included cecum (36.0%), sigmoid colon (12.0%), and rectum (12.0%). Notably, right-sided colon cancers comprised nearly half (48.0%) of tumors with PIK3CA alterations. The most commonly identified variants were E545K (24.0%, exon 9), E542K (12.0%, exon 9), E110del (8.0%), and Q546K (8.0%). Tumors with PIK3CA alterations frequently had concurrent mutations in the RAS-RAF-MAPK signaling pathway. A majority (19 or 76.0%) had concurrent mutations in KRAS (G12D 36.0%, G12S 12.0%, G13D 8.0%, and A146T 8.0%). Two patients (8.0%) with PIK3CA tumors were found to have concurrent deactivating BRAF mutations (G466V and G469E). Notably, these 2 patients had additional alterations in KRASG12S and RAF1S257L, respectively, with phenotypes described above. We also identified additional alterations in the PTEN-PIK3CA-AKT signaling pathway in our group of PIK3CA altered tumors (Figure 5). Five patients (20.0%) with PIK3CA altered tumors also had PTEN mutations, while 1 patient (4.0%) had a dual PIK3CA and AKT1 mutated tumor. Of note, 1 female patient (age 55) with a dual PIK3CA and PTEN mutation had a rectal tumor demonstrating MSI-H and developed a solitary liver metastasis that has since been resected and treated with adjuvant FOLFOX – she is currently in remission.\nMSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high.\nValues in parentheses represent numbers and not percentages.
\n MET amplifications Three patients (2.2%) in our series had MET amplifications (Table 5). Two-thirds of these tumors were MSS, located in the right colon, and associated with concurrent mutations in RAS or RAF genes. One 67-year-old male was initially diagnosed with right-sided colon cancer (KRASG13D+MET alterations present) and synchronous liver metastases. His course has been punctuated by recurrent metastases to the liver and lungs despite several systemic and regional therapies. Another right-sided colon cancer was identified with both a deactivating BRAFD594G mutation and MET amplification, with the aggressive phenotype described above. A third patient was a 27-year-old male with primary rectal adenocarcinoma that recurred with metastases to the liver and retroperitoneal lymph nodes and was refractory to capecitabine + irinotecan + cetuximab. In particular, 2 of 3 patients with MET amplifications and RAS/BRAFV600E wild-type tumors were refractory to anti-EGFR-based therapies.\n* For concurrent RAS and BRAF mutations only, MSI: Microsatellite instability; NOS: Not otherwise specified; NR: Not reported; MSS: Microsatellite stable.
\n Hypermutant status The majority of our 138 patients with mCRC had tumors with <9 clinically significant alterations (121 or 87.7%) as described by FoundationOne reports (Table 6). The majority of these tumors were located in the left colon and all were MSS. Fourteen patients (10.1%) had 9-16 total alterations while only 3 patients (2.2%) were allocated to the highest number of clinically significant alterations category (17-25). Notably, 2 patients with MSI-H tumors were identified in the highest number of alterations group. One 55-year-old female patient was found to have a dual PIK3CA and PTEN mutated rectal tumor with the phenotype described previously. Tumor mutational burden (TMB) from the FoundationOne report showed a high TMB of 33 mutations per megabase (Mb). The other was a 47-year-old female with KRASG12V mutated metastatic rectal cancer that has progressed through 3 lines of systemic therapy and is currently on anti-PD-1 therapy with pembrolizumab with a clinical response. Again, TMB corroborated her findings of a relatively hypermutated tumor with a TMB of 31 mutations/Mb. The third patient with a hypermutant FoundationOne profile had a MSS tumor with an associated POLEV411L mutation.
Interestingly, this patient was elderly (age 80), had a right colon tumor, and had a recurrence pattern consistent with locoregional recurrence. This patient demonstrated a TMB of 122 mutations/Mb, which was the highest among the cohort.\nMSI: Microsatellite instability; NOS: Not otherwise specified; MSS: Microsatellite stable; NR: Not reported; MSI-H: MSI high.", "The molecular results from FoundationOne testing of tumors from 138 mCRC patients are summarized in Table 1. The median age of our study group was 56 years (range 27-88) with 59.4% (82) males and 40.6% (56) females. The most common ethnicity was White (85, 61.6%) followed by Asian (29, 21.0%). The most common sites of primary were sigmoid colon (33.3%), rectum (19.6%), and cecum (15.2%). Sixty-eight patients (49.3%) had KRAS mutations, 9 patients (6.5%) had BRAF mutations, 3 (2.2%) had NRAS mutations, 1 (0.7%) had an ARAF mutation, 1 (0.7%) had a RAF-1 mutation, 7 (5.1%) had ERRB2 amplifications, 25 (18.1%) had PIK3CA mutations, 15 (10.9%) had PTEN mutations, 4 (2.9%) had AKT mutations, and 3 (2.2%) had MET amplifications.\nNOS: Not otherwise specified; MSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high.", "Overall, RAS mutations were present in 51.4% of our mCRC patients, RAF mutations were seen in 7.2%, of which RAS+RAF concurrent mutations were seen in 1.4%. The remainder (42.8%) were RAS/RAF wild type (Figure 1). The most common RAS mutations were KRAS mutations of exon 2 (codons 12 and 13) including G12D (32.4%), G13D (14.1%), G12V (11.3%), G12S (9.9%), G12C (8.5%), and G12A (2.8%, Figure 2). Beyond the well-established point mutations in codons 12 and 13 of exon 2 of KRAS, we identified mutations in codon 61 of exon 3 (Q61H, 1.4%; Q61K, 1.4%; Q61L, 1.4%), codon 117 of exon 4 (K117N, 1.4%), and codon 146 of exon 4 (A146V, 1.4%; A146V^sub, 1.4%; A146T, 4.2%). Two mutations (2.8%) in codon 61 (exon 3) of NRAS were also detected. Altogether, these non-KRAS exon 2 mutations constitute 15.5% of RAS mutations.\nRAS+RAF mutations are not included in RAS or RAF percentages.\nArrows denote common mutations of exon 2 (codon 12-13). Brackets denote panel of extended RAS mutations or novel RAS mutation.\nIn our patient population, 2 KRAS amplifications (2.8%) and 1 NRAS amplification (1.4%) were identified. One patient was a 51-year-old female with KRAS amplified rectal cancer with synchronous diffuse metastases (lung and liver). Her best overall response to standard first-line combination chemotherapy (5-fluorouracil (5-FU) and irinotecan or FOLFIRI) plus anti-EGFR therapy (panitumumab) was stable disease (SD) for 6 months. The other patient with KRAS amplification was a 51-year-old male diagnosed with right-sided colon cancer and synchronous metastases to the liver and peritoneum who had rapid progression on first- and second-line non-anti-EGFR based therapies. Our 74-year-old male patient with NRAS amplification presented with poorly differentiated rectosigmoid adenocarcinoma and synchronous diffuse metastases (liver, mesentery, and bones) and experienced progressive disease (PD) at 2 months on second-line FOLFIRI + cetuximab. 
Notably, a novel KRASR68S1 alteration (Figure 2) was identified in a 41-year-old female (1.4%) with rectal cancer and synchronous metastases to the liver and retroperitoneal and supraclavicular lymph nodes who experienced PD at 2 months on anti-EGFR therapy with second-line irinotecan + cetuximab.", "A total of 11 RAF mutations (1 concurrent BRAF+RAF1 mutation) were found in 7.2% of our patients (Figure 3). Of these, BRAFV600E activating mutations (exon 15) were the most common single mutations present (40.0%). One activating BRAFL597Ralteration (exon 15) was identified (10.0%) in a 56-year-old male with bulky rectal adenocarcinoma with synchronous metastases that progressed through 9 months of first-line anti-EGFR therapy. One activating ARAFS214F alteration was also identified (10.0%) in our series of RAF mutations. This 60-year-old male patient developed multiple recurrences of rectal adenocarcinoma including, most recently, metastatic disease to the lung treated with neoadjuvant 5-FU, oxaliplatin, and irinotecan (FOLFOXIRI) followed by metastatectomy; he remains in clinical remission. A dual BRAFV600E+KRASA164V^subalterationwas present (10.0%) in an elderly male (age 72) with poorly differentiated right-sided colon cancer with synchronous metastases on first-line systemic combination therapy without anti-EGFR agents. Here an oncogenic RAS alteration was paired with a known activating BRAF mutation.\nArrows denote known activating mutations. Brackets denote known deactivating mutations.\nDeactivating mutations in BRAFD594G(10.0%), BRAFG466V concurrent with KRASG12S (10.0%), and BRAFG469E concurrent with RAF1S257L (10.0%) were also identified. Our patient with a deactivating BRAFD594G mutation was a 59-year-old male with moderately-poorly differentiated right-sided colon cancer with diffuse metastases that was refractory to all standard of care chemotherapy, including FOLFIRI + cetuximab, and ultimately died from progressive disease. Interestingly, he was noted to have a concurrent MET amplification. The patient with a deactivating BRAFG466V mutation concurrent with an activating KRASG12S mutation was a 51-year-old male with right-sided colon cancer with diffuse metastases that progressed with carcinomatosis while on FOLFOX and immediately following salvage debulking surgery with hyperthermic chemotherapy. Notably, he currently has achieved ongoing partial response (PR) on third-line FOLFIRI and bevacizumab (41+ cycles). Our 66-year-old female with dual deactivating BRAFG469E and activating RAF1S257L mutation presented with a right colon cancer with synchronous metastases to bone, liver, lung, and peritoneum. Her disease was refractory to FOLFOX + bevacizumab and is currently on second-line FOLFIRI + bevacizumab with a clinical benefit.", "Seven patients (5.1%) were found to have ERRB2 amplified tumors with one having a concurrent KRASG12D mutation (Figure 4). The majority of these tumors were MSS (87.5%) with HER2 copy numbers that ranged from 9-190 (Table 2). Notably, all ERRB2 amplified tumors were located in the rectosigmoid colon as its primary disease site. Four patients with RAS wild-type ERBB2 amplification received anti-EGFR therapy, 3 experienced SD ≥ 4 months (2 first-line and 1 second-line) and 1 (second-line) experienced a PR lasting for 5 months as their best overall response to anti-EGFR therapy. 
The concurrent ERRB2 amplified and KRASG12D mutated tumor was found in a 58-year-old male with moderately differentiated rectal adenocarcinoma with synchronous solitary liver metastasis treated with neoadjuvant 5-FU, oxaliplatin (FOLFOX) followed by hepatic resection and resection of primary – he is currently under surveillance and without evidence of disease.\nEGFR: Epidermal growth factor receptor; MSI: Microsatellite instability; SD: Stable disease; NR: Not reported; MSS: Microsatellite stable; PR: Partial response.", "Three patients (2.2%) had AKT1E17K mutations while 1 patient (0.7%) had an AKT2E17K mutation (Table 3). Of these, a majority had concurrent mutations (75%) and tumors located in the right colon (75%). One AKT1E17K mutated tumor was found to have concurrent BRAFV600E +KRASA164V^subalterations with phenotype described above. One AKT1E17K mutated tumor had concurrent alterations in KRASA146T+PIK3CAG106V and was found in a 61-year-old male with initial right-sided colon cancer that recurred with metastases to the liver showing moderately differentiated colon adenocarcinoma. His tumor was characterized by aggressive features, including metastatic disease recurrence following a diagnosis of stage I disease, and development of bony metastases within the first year of recurrence. A concurrent AKT2E17K+KRASG12C altered tumor was found in a 57-year-old female with originally moderately differentiated sigmoid adenocarcinoma that was resected but recurred with metastases to the retroperitoneal lymph nodes currently on first-line FOLFIRI + bevacizumab.\n* For concurrent RAS and BRAF mutations only, NOS: Not otherwise specified; MSI: Microsatellite instability; NR: Not reported; MSS: Microsatellite stable.", "In total, we identified 25 patients (18.1%) with PIK3CA alterations in our cohort (Table 4). The most common primary disease sites included cecum (36.0%), sigmoid colon (12.0%), and rectum (12.0%). Notably, right-sided colon cancers comprised nearly half (48.0%) of tumors with PIK3CA alterations. The most commonly identified variants were E545K (24.0%, exon 9), E542K (12.0% exon 9), E110del (8.0%), and Q546K (8.0%). Tumors with PIK3CA alterations frequently had concurrent mutations in the RAS-RAF-MAPK signaling pathway. A majority (19 or 76.0%) had concurrent mutations in KRAS (G12D 36.0%, G12S 12.0%, G13D 8.0%, and A146T 8.0%). Two patients (8.0%) with PIK3CA tumors were found to have concurrent deactivating BRAF mutations (G466V and G469E). Notably, these 2 patients had additional alterations in KRASG12S and RAF1S257L, respectively, with phenotypes described above. We also identified additional alterations in the PTEN-PIK3CA-AKT signaling pathway in our group of PIK3CA altered tumors (Figure 5). Five patients (20.0%) with PIK3CA altered tumors also had PTEN mutations, while 1 patient (4.0%) had a dual PIK3CA and AKT1 mutated tumor. Of note, 1 female patient (age 55) with a dual PIK3CA and PTEN mutation had a rectal tumor demonstrating MSI-H and developed a solitary liver metastasis that has since been resected and treated with adjuvant FOLFOX – she is currently in remission.\nMSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high.\nValues in parentheses represent numbers and not percentages.", "Three patients (2.2%) in our series had MET amplifications (Table 5). Two-thirds of these tumors were MSS, located in the right colon, and associated with concurrent mutations in RAS or RAF genes. 
One 67-year-old male was initially diagnosed with right-sided colon cancer (KRASG13D+MET alterations present) and synchronous liver metastases. His course has been punctuated by recurrent metastases to the liver and lungs despite several systemic and regional therapies. Another right-sided colon cancer was identified with both a deactivating BRAFD594G mutation and MET amplification with the aggressive phenotype described above. A third patient was a 27-year-old male with primary rectal adenocarcinoma that recurred with metastases to the liver and retroperitoneal lymph nodes and was refractory to capecitabine + irinotecan + cetuximab. In particular, 2 of 3 patients with MET amplifications and RAS/BRAFV600E wild-type tumors were refractory to anti-EGFR-based therapies.\n* For concurrent RAS and BRAF mutations only, MSI: Microsatellite instability; NOS: Not otherwise specified; NR: Not reported; MSS: Microsatellite stable.", "The majority of our 138 patients with mCRC had tumors with <9 clinically significant alterations (121 or 87.7%) as described by FoundationOne reports (Table 6). The majority of these tumors were located in the left colon and all were MSS. Fourteen patients (10.1%) had 9-16 total alterations while only 3 patients (2.2%) were allocated to the highest number of clinically significant alterations category (17-25). Notably, 2 patients with MSI-H tumors were identified in the highest number of alterations group. One 55-year-old female patient was found to have a dual PIK3CA and PTEN mutated rectal tumor with the phenotype described previously. Tumor mutational burden (TMB) from the FoundationOne report showed a high TMB of 33 mutations per megabase (Mb). The other was a 47-year-old female with KRASG12V mutated metastatic rectal cancer that has progressed through 3 lines of systemic therapy and is currently on anti-PD-1 therapy with pembrolizumab with a clinical response. Again, TMB corroborated the finding of a relatively hypermutated tumor with a TMB of 31 mutations/Mb. The third patient with a hypermutant FoundationOne profile had an MSS tumor with an associated POLEV411L mutation. Interestingly, this patient was elderly (age 80), had a right colon tumor, and had a recurrence pattern consistent with locoregional recurrence. This patient demonstrated a TMB of 122 mutations/Mb, which was the highest among the cohort.\nMSI: Microsatellite instability; NOS: Not otherwise specified; MSS: Microsatellite stable; NR: Not reported; MSI-H: MSI high.", "Comprehensive molecular characterization of 138 tumors from patients with mCRC was performed via NGS (FoundationOne) in this single-institution retrospective study. Overall, 51.4% and 7.2% of our patients with mCRC were shown to carry RAS and RAF mutations, respectively, which is concordant with frequencies historically reported in mCRC [2]. The majority of our RAS mutations were KRAS mutations of exon 2 (codons 12 and 13), which represent those identified in initial phase III trials that predicted lack of benefit from anti-EGFR therapy in mCRC [6, 7]. We also found that 15.5% of all RAS mutations in our population comprised a panel of extended RAS mutations. This is also consistent with recent data from the PRIME and CRYSTAL clinical trials, where exon 3 and 4 KRAS and exons 2, 3, and 4 NRAS mutations reflected 14-17% of RAS mutations [8, 9]. 
Identifying these rare RAS mutations has major clinical significance, given their association with anti-EGFR resistance [10].\nNotably, we identified 2 KRAS amplifications and 1 NRAS amplification that are extremely rare and poorly characterized. These were found in 3 patients with diffusely metastatic CRC that progressed through several lines of systemic therapy, including anti-EGFR therapy.\nPutative high-level amplifications of NRAS were observed in <1% of cases in the TCGA dataset, though their significance in CRC remains poorly described [5]. KRAS amplifications have been associated with acquired resistance to the EGFR inhibitors cetuximab or panitumumab in CRC preclinical models [11]. To our knowledge, we are the first to report a novel KRASR68S1 alteration that was associated with a particularly aggressive phenotype and PD at 2 months on anti-EGFR therapy with cetuximab.\nThe majority of RAF mutations found in our population were BRAFV600E activating mutations (exon 15), which have been historically associated with poorer survival, resistance to chemotherapy, and lack of clinical benefit with anti-EGFR therapy in mCRC [12–15]. We also identified a lone BRAFL597R alteration (exon 15), which is poorly described in CRC but has been shown to similarly activate RAF-MEK-ERK signaling in melanoma in vitro [16]. Of note, this patient received 9 months of first-line anti-EGFR therapy though our sample size of 1 precludes any meaningful generalizations. One ARAFS214F alteration was also identified in a patient whose course has been characterized by multiple recurrences of rectal cancer. Mutations in ARAF have been linked as oncogenic drivers in lung adenocarcinoma; they are exceedingly rare in CRC and comprise approximately 2% of cases in the CRC dataset from TCGA [5, 17]. Treatment with the oral RAF inhibitor sorafenib has demonstrated a prolonged response in a case of refractory non-small-cell lung cancer and rapid responses in patients with refractory histiocytic neoplasms bearing somatic ARAF mutations [17, 18].\nDespite the previous conception that KRAS and BRAF mutations are mutually exclusive, we found 1 dual BRAFV600E+KRASA164V^sub mutated tumor that, in our case, was associated with poor prognostic features [19]. One case of a concurrent BRAFG466V+KRASG12S mutation and one case of a concurrent BRAFG469E+RAF1S257L mutation were present in our cohort. BRAF mutants G466V and G469E have been shown to represent variants with impaired or complete loss of kinase activity in vitro [20, 21]. Nevertheless, it has been shown that tumorigenesis is promoted in the presence of deactivating BRAF mutations through oncogenic RAS mutation and/or CRAF (or RAF-1) signaling [21, 22]. In our study, one deactivating BRAFG466V mutation was paired with an oncogenic KRASG12S mutation, and one deactivating BRAFG469E mutation was paired with an oncogenic RAF1S257L alteration, supporting the notion of an evolutionary adaptation in the cancer genome to overcome BRAF mutations with impaired function. In both cases, there were associated features of poor prognosis though the dual BRAFG466V+KRASG12S mutated tumor has seen disease control recently on 41 cycles of FOLFIRI and bevacizumab, which may argue for varying degrees of relative contribution from each mutation to tumor phenotype. Interestingly, one patient with a deactivating BRAFD594G mutation was refractory to all lines of treatment, including anti-EGFR therapy, and ultimately died of aggressive disease. 
This is at odds with recent reports suggesting that a BRAFD594G mutation may be an indicator of good prognosis [23]. It is unclear whether this patient's concurrent MET amplification may have contributed to his overall poor prognosis and therapeutic resistance.\nERBB2 (HER2/neu) amplifications were found in 5.1% of our mCRC patients with the majority in KRAS wild-type tumors (except for 1 with a concurrent ERBB2+KRASG12D alteration). Another FoundationOne analysis of >10,000 cases of gastrointestinal malignancies identified HER2 amplifications and mutations in 3.0% and 4.8%, respectively, of cases from the CRC cohort [24]. Our patients with HER2 amplified tumors appeared to have shortened clinical benefit with anti-EGFR therapy, which is consistent with the recent phase II HERACLES trial where none of the patients with HER2 amplified, RAS/BRAF wild-type metastatic colorectal tumors had a response to anti-EGFR therapy [25]. Similar to the preponderance of left colon primary tumors in the HERACLES trial, all of our HER2 amplified tumors were located in the rectosigmoid colon. In short, the identification of HER2 amplifications in patients with RAS/BRAF wild-type metastatic colorectal tumors is of major significance given the clinical benefit derived from dual HER2-directed therapy including trastuzumab + lapatinib (HERACLES) or trastuzumab + pertuzumab (MyPathway) [25, 26].\nPIK3CA, PTEN, and AKT mutations were identified in 18.1% (25), 10.9% (15), and 2.9% (4) of our mCRC patients, respectively. Many of these patients had metastatic tumors associated with aggressive features. In addition, 75% of AKT mutated tumors were located in the right colon, almost half (48.0%) of PIK3CA mutated tumors were right-sided colon cancers, and concurrent mutations in RAS-RAF-MAPK or PTEN-PIK3CA-AKT signaling were common. For example, 19 patients (76.0%) with PIK3CA mutations also had concurrent KRAS mutations while 5 (20.0%) and 1 (4.0%) with PIK3CA altered tumors also had concurrent PTEN and AKT1 mutations, respectively. Mutations in mediators of the PTEN-PIK3CA-AKT signaling pathway in CRC have been associated with poorer prognosis and lack of clinical response to anti-EGFR therapy [27, 28]. For PIK3CA mutations, in particular, prior studies have demonstrated that exon 9 mutations had no effect while exon 20 mutations were associated with resistance to anti-EGFR therapy [29]. However, this differential effect by exon has not been supported by a recent meta-analysis [30]. Given the high rate of concurrent RAS mutations seen with PIK3CA and related pathway mutations, a definitive association between resistance to EGFR inhibition and PTEN-PIK3CA-AKT pathway mutations is difficult to make. Further studies are needed to resolve this issue.\nThree patients (2.2%) demonstrated MET amplifications associated with poor prognostic features. MET amplification and increased c-MET expression have also been associated with an aggressive phenotype and therapeutic resistance, particularly to MEK inhibition, in mCRC [31, 32]. Interestingly, we have observed anti-EGFR refractoriness in 2 of our patients with MET amplifications despite the presence of a RAS wild-type phenotype and lack of activating BRAF mutations. This is consistent with preclinical data suggesting MET activation as a mechanism of resistance to anti-EGFR therapy [33].\nWe lastly identified 3 patients (2.2%) with tumors categorized in the highest number of clinically significant alterations group (17-25) that also demonstrated high TMB as per FoundationOne. 
TMB categories per FoundationOne testing have been validated in melanoma patients treated with PD-1 blockade [34]. Response to PD-1 inhibitors was significantly superior in patients with high TMB (>23.1 mutations/Mb) compared to intermediate or low TMB (3.2-23.1 mutations/Mb and <3.2 mutations/Mb, respectively). Furthermore, a recent phase II study showed that patients with advanced urothelial cancer who responded to the programmed death ligand 1 (PD-L1) inhibitor atezolizumab had a significantly higher TMB (median 12.4 mutations/Mb) than non-responders (median 6.4 mutations/Mb, p < 0.0001) [35]. Two patients had MSI-H tumors while 1 hypermutant tumor was MSS and harbored a POLE mutation. Interestingly, 42.9% of tumors with 9-16 clinically significant alterations were located in the right colon while one-third of tumors with 17-25 total alterations were located in the right colon; tumors with <9 alterations were predominantly located in the left colon. In the CRC dataset from TCGA, 75% of hypermutated tumors arose from the right colon yet not all of them were MSI-H [5]. Mutations in polymerase ε or POLE were found among 25% of hypermutated tumors in that cohort. Mutations in POLE have been shown to contribute to an ultramutated yet MSS phenotype in colorectal tumors [36]. A recent NGS study confirmed that increasing mutational load correlated with MSI, yet colorectal tumors with the highest mutational burden that were distinct from MSI tumors all harbored POLE mutations [37]. Furthermore, mismatch repair deficiency or MSI has recently been shown to predict clinical benefit from immune checkpoint blockade with anti-PD-1 therapy in mCRC [38]. The characterization of mutational load in CRC may serve as a better indicator than MSI status in determining a hypermutant profile that could predict benefit from immunotherapy. Our findings are hypothesis generating and offer support to consider molecular analysis of tumors to determine the total number of alterations as a potential correlate to MSI and candidacy for anti-PD-1 therapy in mCRC.\nFuture studies of larger size and, ideally, prospective design will be helpful in corroborating associations between molecular alterations of interest described in our study and prognosis, resistance to EGFR inhibition, and/or ability to be targeted for therapy in mCRC. Comparative genomic analyses have identified a high level of concordance particularly for RAS, BRAF, and PIK3CA mutations between colorectal primary and metastatic tumors [39, 40]. However, other molecular alterations may differ based on the site of tumor and/or exposure to chemotherapy [41–44]. Although such mixed results are likely dependent on the specific mutation that is profiled, other factors including specimen integrity and sampling method may also contribute to heterogeneity. Indeed, further analyses are needed to describe the concordance or discordance of other mutations across tumor sites and treatment effects in mCRC, and careful consideration in design will be needed in order to account for confounding factors as described above.\nIn conclusion, comprehensive genomic profiling can uncover gene alterations beyond conventional RAS or RAF mutant subtypes that predict resistance to anti-EGFR therapy and can identify potential therapeutic targets outside of NCCN standard treatments in mCRC. ERBB2 amplified tumors commonly originate from the rectosigmoid colon, are predominantly RAS/BRAF wild-type, and may predict benefit to HER2-directed therapy. 
Hypermutant tumors or tumors with POLE mutations may predict benefit to anti-PD-1 therapy. Our findings are hypothesis generating and warrant further investigation in larger datasets and in prospective settings.", " Study patients and tumor samples Patients with advanced or metastatic (stage IV) colorectal cancer treated at the Gastrointestinal Medical Oncology Clinic at City of Hope National Medical Center (Duarte, CA) between April 2013 and February 2016 were screened for this study. Eligibility criteria were limited to those who underwent expanded genomic tumor analysis by FoundationOne. There were no exclusions for tumor histology, medical comorbidities, previous treatment or lines of prior therapy, or performance status. Comprehensive genomic profiling was conducted through NGS via FoundationOne (Foundation Medicine, Inc., Cambridge, MA) with reports generated from April 2013 to February 2016. The study was approved by the Institutional Review Board (IRB).\n Next-generation sequencing Comprehensive genomic analysis was conducted on tumor samples (formalin-fixed paraffin-embedded) retrieved from surgical resection, core needle biopsies, or excisional biopsies and delivered to Foundation Medicine, Inc. The NGS assay performed by FoundationOne has been previously described and validated [45]. The initial whole-genome shotgun library construction and hybridization-based capture of 4,557 exons from 287 cancer-related genes and 47 introns from 19 genes with frequent DNA rearrangements has since been expanded to identify genetic alterations across the coding regions of 315 cancer-related genes and introns from 28 genes commonly rearranged in solid cancers.\n Study design Retrospective analysis of genetic mutations, amplifications, or alterations present in our cohort of 138 patients with mCRC was performed through test results provided in an integrative report available via FoundationICE (Interactive Cancer Explorer). Patient demographics including age, sex, ethnicity, site of primary, stage at diagnosis, and number of previous treatments were obtained from chart abstraction of each patient's electronic medical record (EMR). Microsatellite instability, classified as stable (MSS), low (MSI-L), or high (MSI-H), was abstracted from pathology reports, and response to anti-EGFR therapy, when available, was described according to Response Evaluation Criteria in Solid Tumors (RECIST) criteria and obtained from medical records [46]. The total number of clinically significant alterations for each patient was determined by tallying the sum of alterations included in the panel of clinically significant variants provided by FoundationICE reports and arbitrarily categorized into 3 groups (<9, 9-16, and >16 total number of alterations). We defined hypermutant tumors as those in the highest number of mutations group that were also found to have high TMB as validated by FoundationOne (high >23.1 mutations/Mb, intermediate 3.2-23.1 mutations/Mb, and low <3.2 mutations/Mb) [34].\n Statistical analyses All statistical analyses performed were descriptive and no formal statistical hypotheses were assessed. The sample size was determined by the total number of mCRC patients with FoundationOne results available. All descriptive statistics were conducted in Excel with associated formulas and functions." ]
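The alteration-count grouping and TMB cutoffs in the study design above reduce to two threshold rules. The sketch below is only a minimal illustration of those rules, not the authors' implementation (the paper states that all descriptive statistics were done in Excel); the function names and the example values are hypothetical.

```python
# Illustrative only: threshold rules as stated in the study design.
# Function names and the example at the bottom are hypothetical.

def alteration_group(n_alterations: int) -> str:
    """Bin the total number of clinically significant alterations
    into the three categories used in the study (<9, 9-16, >16)."""
    if n_alterations < 9:
        return "<9"
    if n_alterations <= 16:
        return "9-16"
    return ">16"  # reported as 17-25 in this cohort


def tmb_category(mutations_per_mb: float) -> str:
    """Classify tumor mutational burden (mutations/Mb) using the
    FoundationOne cutoffs cited in the text."""
    if mutations_per_mb > 23.1:
        return "high"
    if mutations_per_mb >= 3.2:
        return "intermediate"
    return "low"


def is_hypermutant(n_alterations: int, mutations_per_mb: float) -> bool:
    """Working definition used in the study: highest alteration-count
    group combined with high TMB."""
    return alteration_group(n_alterations) == ">16" and tmb_category(mutations_per_mb) == "high"


# Hypothetical example: a tumor in the 17-25 group with 122 mutations/Mb
# (122 mutations/Mb was the highest TMB reported in the cohort).
print(alteration_group(20), tmb_category(122.0), is_hypermutant(20, 122.0))
```

The open bounds mirror the cutoffs as written: high TMB is strictly >23.1 mutations/Mb and low TMB is strictly <3.2 mutations/Mb.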
[ null, null, null, null, null, null, null, null, null, null, null, "methods" ]
[ "metastatic colorectal cancer", "comprehensive genomic profiling", "next-generation sequencing", "FoundationOne", "retrospective" ]
INTRODUCTION: Colorectal cancer (CRC) remains the third leading cause of cancer death in both men and women in the United States with an estimated 134,490 new cases and 49,190 deaths in 2016 [1]. Recent advances in the treatment of metastatic CRC (mCRC) have identified improved outcomes with the addition of epidermal growth factor receptor (EGFR)-targeting agents to conventional combination cytotoxic therapy in patients with extended RAS wild-type tumors. In contrast, activating mutations in the RAS gene (KRAS or NRAS, present in approximately 50% of cases of mCRC) and BRAF gene (present in about 5% of mCRC patients) have been associated with lack of clinically meaningful benefit or harm when anti-EGFR therapy is employed [2]. The identification of candidates for anti-EGFR therapy through the exclusion of RAS and BRAF mutations in mCRC serves as a model of selecting optimal therapy based on patient genomic profiles and molecular phenotypes. Several decades of genomic studies, including the use of more recent next-generation sequencing (NGS), have expedited the search for genetic alterations for potential therapeutic targeting in CRC [3, 4]. Recently, comprehensive molecular characterization of 224 colorectal tumors was performed by The Cancer Genome Atlas (TCGA) Network [5]. Sixteen percent of colorectal tumors were found to be hypermutated and were more commonly found in the right colon, with 75% of these cases demonstrating expectedly high microsatellite instability (MSI-H). Twenty-four genes were identified to have significant mutations of interest including APC, SMAD4, TP53, PIK3CA, and KRAS mutations, as expected. Interestingly, mutations, deletions, or amplifications of the ERBB gene family were found in 19% of tumors. In sum, this genomic analysis identified several molecular alterations that are considered targetable, including mediators of dysregulated WNT, RAS, and PI3K pathways such as ERBB2, ERBB3, MEK, AKT, MTOR, IGF2, and IGFR. The recent identification of gene mutations and amplifications of potential significance for therapeutic purposes has led us to investigate the genomic profiles of mCRC patients using NGS (FoundationOne). Here, we describe a single-institution experience in reporting results from comprehensive genomic analysis of tumors from 138 mCRC patients. We aim to characterize genetic alterations present in our study population that have known correlates to prognosis, therapeutic resistance, and potential therapeutic targets in mCRC. In this study, we also report the existence of concurrent gene mutations rarely described in the literature and novel mutations and amplifications that can lead to targeting outside of National Comprehensive Cancer Network (NCCN) standard treatments. RESULTS: 
Study population: The molecular results from FoundationOne testing of tumors from 138 mCRC patients are summarized in Table 1. The median age of our study group was 56 years (range 27-88) with 59.4% (82) males and 40.6% (56) females. The most common ethnicity was White (85, 61.6%) followed by Asian (29, 21.0%). The most common sites of primary were sigmoid colon (33.3%), rectum (19.6%), and cecum (15.2%). 
Sixty-eight patients (49.3%) had KRAS mutations, 9 patients (6.5%) had BRAF mutations, 3 (2.2%) had NRAS mutations, 1 (0.7%) had an ARAF mutation, 1 (0.7%) had a RAF-1 mutation, 7 (5.1%) had ERRB2 amplifications, 25 (18.1%) had PIK3CA mutations, 15 (10.9%) had PTEN mutations, 4 (2.9%) had AKT mutations, and 3 (2.2%) had MET amplifications. NOS: Not otherwise specified; MSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high. RAS mutations: Overall, RAS mutations were present in 51.4% of our mCRC patients, RAF mutations were seen in 7.2%, of which RAS+RAF concurrent mutations were seen in 1.4%. The remainder (42.8%) were RAS/RAF wild type (Figure 1). The most common RAS mutations were KRAS mutations of exon 2 (codons 12 and 13) including G12D (32.4%), G13D (14.1%), G12V (11.3%), G12S (9.9%), G12C (8.5%), and G12A (2.8%, Figure 2). Beyond the well-established point mutations in codons 12 and 13 of exon 2 of KRAS, we identified mutations in codon 61 of exon 3 (Q61H, 1.4%; Q61K, 1.4%; Q61L, 1.4%), codon 117 of exon 4 (K117N, 1.4%), and codon 146 of exon 4 (A146V, 1.4%; A146V^sub, 1.4%; A146T, 4.2%). Two mutations (2.8%) in codon 61 (exon 3) of NRAS were also detected. Altogether, these non-KRAS exon 2 mutations constitute 15.5% of RAS mutations. RAS+RAF mutations are not included in RAS or RAF percentages. Arrows denote common mutations of exon 2 (codon 12-13). Brackets denote panel of extended RAS mutations or novel RAS mutation. In our patient population, 2 KRAS amplifications (2.8%) and 1 NRAS amplification (1.4%) were identified. One patient was a 51-year-old female with KRAS amplified rectal cancer with synchronous diffuse metastases (lung and liver). Her best overall response to standard first-line combination chemotherapy (5-fluorouracil (5-FU) and irinotecan or FOLFIRI) plus anti-EGFR therapy (panitumumab) was stable disease (SD) for 6 months. The other patient with KRAS amplification was a 51-year-old male diagnosed with right-sided colon cancer and synchronous metastases to the liver and peritoneum who had rapid progression on first- and second-line non-anti-EGFR based therapies. Our 74-year-old male patient with NRAS amplification presented with poorly differentiated rectosigmoid adenocarcinoma and synchronous diffuse metastases (liver, mesentery, and bones) and experienced progressive disease (PD) at 2 months on second-line FOLFIRI + cetuximab. Notably, a novel KRASR68S1 alteration (Figure 2) was identified in a 41-year-old female (1.4%) with rectal cancer and synchronous metastases to the liver and retroperitoneal and supraclavicular lymph nodes who experienced PD at 2 months on anti-EGFR therapy with second-line irinotecan + cetuximab. RAF mutations: A total of 11 RAF mutations (1 concurrent BRAF+RAF1 mutation) were found in 7.2% of our patients (Figure 3). Of these, BRAFV600E activating mutations (exon 15) were the most common single mutations present (40.0%). One activating BRAFL597Ralteration (exon 15) was identified (10.0%) in a 56-year-old male with bulky rectal adenocarcinoma with synchronous metastases that progressed through 9 months of first-line anti-EGFR therapy. One activating ARAFS214F alteration was also identified (10.0%) in our series of RAF mutations. This 60-year-old male patient developed multiple recurrences of rectal adenocarcinoma including, most recently, metastatic disease to the lung treated with neoadjuvant 5-FU, oxaliplatin, and irinotecan (FOLFOXIRI) followed by metastatectomy; he remains in clinical remission. 
A dual BRAFV600E+KRASA164V^subalterationwas present (10.0%) in an elderly male (age 72) with poorly differentiated right-sided colon cancer with synchronous metastases on first-line systemic combination therapy without anti-EGFR agents. Here an oncogenic RAS alteration was paired with a known activating BRAF mutation. Arrows denote known activating mutations. Brackets denote known deactivating mutations. Deactivating mutations in BRAFD594G(10.0%), BRAFG466V concurrent with KRASG12S (10.0%), and BRAFG469E concurrent with RAF1S257L (10.0%) were also identified. Our patient with a deactivating BRAFD594G mutation was a 59-year-old male with moderately-poorly differentiated right-sided colon cancer with diffuse metastases that was refractory to all standard of care chemotherapy, including FOLFIRI + cetuximab, and ultimately died from progressive disease. Interestingly, he was noted to have a concurrent MET amplification. The patient with a deactivating BRAFG466V mutation concurrent with an activating KRASG12S mutation was a 51-year-old male with right-sided colon cancer with diffuse metastases that progressed with carcinomatosis while on FOLFOX and immediately following salvage debulking surgery with hyperthermic chemotherapy. Notably, he currently has achieved ongoing partial response (PR) on third-line FOLFIRI and bevacizumab (41+ cycles). Our 66-year-old female with dual deactivating BRAFG469E and activating RAF1S257L mutation presented with a right colon cancer with synchronous metastases to bone, liver, lung, and peritoneum. Her disease was refractory to FOLFOX + bevacizumab and is currently on second-line FOLFIRI + bevacizumab with a clinical benefit. ERBB2 amplifications: Seven patients (5.1%) were found to have ERRB2 amplified tumors with one having a concurrent KRASG12D mutation (Figure 4). The majority of these tumors were MSS (87.5%) with HER2 copy numbers that ranged from 9-190 (Table 2). Notably, all ERRB2 amplified tumors were located in the rectosigmoid colon as its primary disease site. Four patients with RAS wild-type ERBB2 amplification received anti-EGFR therapy, 3 experienced SD ≥ 4 months (2 first-line and 1 second-line) and 1 (second-line) experienced a PR lasting for 5 months as their best overall response to anti-EGFR therapy. The concurrent ERRB2 amplified and KRASG12D mutated tumor was found in a 58-year-old male with moderately differentiated rectal adenocarcinoma with synchronous solitary liver metastasis treated with neoadjuvant 5-FU, oxaliplatin (FOLFOX) followed by hepatic resection and resection of primary – he is currently under surveillance and without evidence of disease. EGFR: Epidermal growth factor receptor; MSI: Microsatellite instability; SD: Stable disease; NR: Not reported; MSS: Microsatellite stable; PR: Partial response. AKT1/2 mutations: Three patients (2.2%) had AKT1E17K mutations while 1 patient (0.7%) had an AKT2E17K mutation (Table 3). Of these, a majority had concurrent mutations (75%) and tumors located in the right colon (75%). One AKT1E17K mutated tumor was found to have concurrent BRAFV600E +KRASA164V^subalterations with phenotype described above. One AKT1E17K mutated tumor had concurrent alterations in KRASA146T+PIK3CAG106V and was found in a 61-year-old male with initial right-sided colon cancer that recurred with metastases to the liver showing moderately differentiated colon adenocarcinoma. 
His tumor was characterized by aggressive features, including metastatic disease recurrence following a diagnosis of stage I disease, and development of bony metastases within the first year of recurrence. A concurrent AKT2E17K+KRASG12C altered tumor was found in a 57-year-old female with originally moderately differentiated sigmoid adenocarcinoma that was resected but recurred with metastases to the retroperitoneal lymph nodes currently on first-line FOLFIRI + bevacizumab. * For concurrent RAS and BRAF mutations only, NOS: Not otherwise specified; MSI: Microsatellite instability; NR: Not reported; MSS: Microsatellite stable. PIK3CA and PTEN mutations: In total, we identified 25 patients (18.1%) with PIK3CA alterations in our cohort (Table 4). The most common primary disease sites included cecum (36.0%), sigmoid colon (12.0%), and rectum (12.0%). Notably, right-sided colon cancers comprised nearly half (48.0%) of tumors with PIK3CA alterations. The most commonly identified variants were E545K (24.0%, exon 9), E542K (12.0% exon 9), E110del (8.0%), and Q546K (8.0%). Tumors with PIK3CA alterations frequently had concurrent mutations in the RAS-RAF-MAPK signaling pathway. A majority (19 or 76.0%) had concurrent mutations in KRAS (G12D 36.0%, G12S 12.0%, G13D 8.0%, and A146T 8.0%). Two patients (8.0%) with PIK3CA tumors were found to have concurrent deactivating BRAF mutations (G466V and G469E). Notably, these 2 patients had additional alterations in KRASG12S and RAF1S257L, respectively, with phenotypes described above. We also identified additional alterations in the PTEN-PIK3CA-AKT signaling pathway in our group of PIK3CA altered tumors (Figure 5). Five patients (20.0%) with PIK3CA altered tumors also had PTEN mutations, while 1 patient (4.0%) had a dual PIK3CA and AKT1 mutated tumor. Of note, 1 female patient (age 55) with a dual PIK3CA and PTEN mutation had a rectal tumor demonstrating MSI-H and developed a solitary liver metastasis that has since been resected and treated with adjuvant FOLFOX – she is currently in remission. MSI: Microsatellite instability; MSS: Microsatellite stable; MSI-L: MSI low; MSI-H: MSI high. Values in parentheses represent numbers and not percentages. MET amplifications: Three patients (2.2%) in our series had MET amplifications (Table 5). Two-thirds of these tumors were MSS, located in the right colon, and associated with concurrent mutations in RAS or RAF genes. One 67-year-old male was initially diagnosed with right-sided colon cancer (KRASG13D+MET alterations present) and synchronous liver metastases. His course has been punctuated by recurrent metastases to the liver and lungs despite several systemic and regional therapies. Another right-sided colon cancer was identified with both a deactivating BRAFD594G mutation and MET amplification with aggressive phenotype described above. A third patient was a 27-year-old male with primary rectal adenocarcinoma that recurred with metastases to the liver and retroperitoneal lymph nodes and refractory to capecitabine + irinotecan + cetuximab. In particular, 2 of 3 patients with MET amplications and RAS/BRAFV600E wild-type tumors were refractory to anti-EGFR-based therapies. * For concurrent RAS and BRAF mutations only, MSI: Microsatellite instability; NOS: Not otherwise specified; NR: Not reported; MSS: Microsatellite stable. Hypermutant status: The majority of our 138 patients with mCRC had tumors with <9 clinically significant alterations (121 or 87.7%) as described by FoundationOne reports (Table 6). 
The majority of these tumors were located in the left colon and all were MSS. Fourteen patients (10.1%) had 9-16 total alterations, while only 3 patients (2.2%) were allocated to the highest category of clinically significant alterations (17-25). Notably, 2 patients with MSI-H tumors were identified in this highest alterations group. One 55-year-old female patient was found to have a dual PIK3CA and PTEN mutated rectal tumor with the phenotype described previously. Tumor mutational burden (TMB) from the FoundationOne report showed a high TMB of 33 mutations per megabase (Mb). The other was a 47-year-old female with KRASG12V mutated metastatic rectal cancer that has progressed through 3 lines of systemic therapy and is currently on anti-PD-1 therapy with pembrolizumab with a clinical response. Again, TMB corroborated the finding of a relatively hypermutated tumor, with a TMB of 31 mutations/Mb. The third patient with a hypermutant FoundationOne profile had an MSS tumor with an associated POLEV411L mutation. Interestingly, this patient was elderly (age 80), had a right colon tumor, and had a recurrence pattern consistent with locoregional recurrence. This patient demonstrated a TMB of 122 mutations/Mb, which was the highest among the cohort. (Table abbreviations: MSI: Microsatellite instability; NOS: Not otherwise specified; MSS: Microsatellite stable; NR: Not reported; MSI-H: MSI high.) DISCUSSION: Comprehensive molecular characterization of 138 tumors from patients with mCRC was performed via NGS (FoundationOne) in this single-institution retrospective study. Overall, 51.4% and 7.2% of our patients with mCRC were shown to carry RAS and RAF mutations, respectively, which is concordant with frequencies historically reported in mCRC [2]. The majority of our RAS mutations were KRAS mutations of exon 2 (codons 12 and 13), which represent those identified in initial phase III trials that predicted lack of benefit from anti-EGFR therapy in mCRC [6, 7]. We also found that 15.5% of all RAS mutations in our population comprised a panel of extended RAS mutations. This is also consistent with recent data from the PRIME and CRYSTAL clinical trials, where exon 3 and 4 KRAS and exons 2, 3, and 4 NRAS mutations represented 14-17% of RAS mutations [8, 9]. Identifying these rare RAS mutations has major clinical significance, given their association with anti-EGFR resistance [10]. Notably, we identified 2 KRAS amplifications and 1 NRAS amplification that are extremely rare and poorly characterized. These were found in 3 patients with diffusely metastatic CRC that progressed through several lines of systemic therapy including anti-EGFR therapy. Putative high-level amplifications of NRAS were observed in <1% of cases in the TCGA dataset, though their significance in CRC remains poorly described [5]. KRAS amplifications have been associated with acquired resistance to the EGFR inhibitors cetuximab and panitumumab in CRC preclinical models [11]. To our knowledge, we are the first to report a novel KRASR68S1 alteration, which was associated with a particularly aggressive phenotype and progressive disease (PD) at 2 months on anti-EGFR therapy with cetuximab. The majority of RAF mutations found in our population were BRAFV600E activating mutations (exon 15), which have been historically associated with poorer survival, resistance to chemotherapy, and lack of clinical benefit with anti-EGFR therapy in mCRC [12–15]. 
We also identified a lone BRAFL597R alteration (exon 15), which is poorly described in CRC but has been shown to similarly activate RAF-MEK-ERK signaling in melanoma in vitro [16]. Of note, this patient received 9 months of first-line anti-EGFR therapy, though our sample size of 1 precludes any meaningful generalizations. One ARAFS214F alteration was also identified in a patient whose course has been characterized by multiple recurrences of rectal cancer. Mutations in ARAF have been linked as oncogenic drivers in lung adenocarcinoma; they are exceedingly rare in CRC, comprising approximately 2% of cases in the CRC dataset from TCGA [5, 17]. Treatment with the oral RAF inhibitor sorafenib has demonstrated a prolonged response in a case of refractory non-small-cell lung cancer and rapid responses in patients with refractory histiocytic neoplasms bearing somatic ARAF mutations [17, 18]. Despite a previous conception that KRAS and BRAF mutations are mutually exclusive, we found 1 dual BRAFV600E+KRASA164V mutated tumor that, in our case, was associated with poor prognostic features [19]. One case of a concurrent BRAFG466V+KRASG12S mutation and one case of a concurrent BRAFG469E+RAF1S257L mutation were present in our cohort. BRAF mutants G466V and G469E have been shown to represent variants with impaired or complete loss of kinase activity in vitro [20, 21]. Nevertheless, it has been shown that tumorigenesis is promoted in the presence of deactivating BRAF mutations through oncogenic RAS mutation and/or CRAF (or RAF-1) signaling [21, 22]. In our study, one deactivating BRAFG466V mutation was paired with an oncogenic KRASG12S mutation, and one deactivating BRAFG469E mutation was paired with an oncogenic RAF1S257L alteration, supporting the notion of an evolutionary adaptation in the cancer genome to overcome BRAF mutations with impaired function. In both cases, there were associated features of poor prognosis, though the dual BRAFG466V+KRASG12S mutated tumor has seen disease control recently on 41 cycles of FOLFIRI and bevacizumab, which may argue for varying degrees of relative contribution from each mutation to tumor phenotype. Interestingly, one patient with a deactivating BRAFD594G mutation was refractory to all lines of treatment, including anti-EGFR therapy, and ultimately died of aggressive disease. This is at odds with recent reports suggesting that the BRAFD594G mutation may be an indicator of good prognosis [23]. It is unclear whether this patient's concurrent MET amplification may have contributed to his overall poor prognosis and therapeutic resistance. ERBB2 (HER2/neu) amplifications were found in 5.1% of our mCRC patients, with the majority in KRAS wild-type tumors (except for 1 with a concurrent ERBB2+KRASG12D alteration). Another FoundationOne analysis of >10,000 cases of gastrointestinal malignancies identified HER2 amplifications and mutations in 3.0% and 4.8%, respectively, of cases from the CRC cohort [24]. Our patients with HER2 amplified tumors appeared to have shortened clinical benefit with anti-EGFR therapy, which is consistent with the recent phase II HERACLES trial, where none of the patients with HER2 amplified, RAS/BRAF wild-type metastatic colorectal tumors had a response to anti-EGFR therapy [25]. Similar to the preponderance of left colon primary tumors in the HERACLES trial, all of our HER2 amplified tumors were located in the rectosigmoid colon. 
In short, the identification of HER2 amplifications in patients with RAS/BRAF wild-type metastatic colorectal tumors is of major significance given the clinical benefit derived from dual HER2-directed therapy, including trastuzumab + lapatinib (HERACLES) or trastuzumab + pertuzumab (MyPathway) [25, 26]. PIK3CA, PTEN, and AKT mutations were identified in 18.1% (25), 10.9% (15), and 2.9% (4) of our mCRC patients, respectively. Many of these patients had metastatic tumors associated with aggressive features. In addition, 75% of AKT mutated tumors were located in the right colon, almost half (48.0%) of PIK3CA mutated tumors were right-sided colon cancers, and concurrent mutations in RAS-RAF-MAPK or PTEN-PIK3CA-AKT signaling were common. For example, 19 patients (76.0%) with PIK3CA mutations also had concurrent KRAS mutations, while 5 (20.0%) and 1 (4.0%) with PIK3CA altered tumors also had concurrent PTEN and AKT1 mutations, respectively. Mutations in mediators of the PTEN-PIK3CA-AKT signaling pathway in CRC have been associated with poorer prognosis and lack of clinical response to anti-EGFR therapy [27, 28]. For PIK3CA mutations in particular, prior studies have demonstrated that exon 9 mutations had no effect on response to anti-EGFR therapy, while exon 20 mutations were associated with resistance [29]. However, this differential effect by exon has not been supported by a recent meta-analysis [30]. Given the high rate of concurrent RAS mutations seen with PIK3CA and related pathway mutations, a definitive association between resistance to EGFR inhibition and PTEN-PIK3CA-AKT pathway mutations is difficult to establish. Further studies are needed to resolve this issue. Three patients (2.2%) demonstrated MET amplifications associated with poor prognostic features. MET amplification and increased c-MET expression have also been associated with an aggressive phenotype and therapeutic resistance, particularly to MEK inhibition, in mCRC [31, 32]. Interestingly, we have observed anti-EGFR refractoriness in 2 of our patients with MET amplifications despite the presence of a RAS wild-type phenotype and lack of activating BRAF mutations. This is consistent with preclinical data suggesting MET activation as a mechanism of resistance to anti-EGFR therapy [33]. We lastly identified 3 patients (2.2%) with tumors categorized in the highest category of clinically significant alterations (17-25) that also demonstrated high TMB as per FoundationOne. TMB categories per FoundationOne testing have been validated in melanoma patients treated with PD-1 blockade [34]. Response to PD-1 inhibitors was significantly superior in patients with high TMB (>23.1 mutations/Mb) compared to intermediate or low TMB (3.2-23.1 mutations/Mb and <3.2 mutations/Mb, respectively). Furthermore, a recent phase II study showed that patients with advanced urothelial cancer who responded to the programmed death ligand 1 (PD-L1) inhibitor atezolizumab had a significantly higher TMB (median 12.4 mutations/Mb) than non-responders (median 6.4 mutations/Mb, p < 0.0001) [35]. Two patients had MSI-H tumors, while 1 hypermutant tumor was MSS and harbored a POLE mutation. Interestingly, 42.9% of tumors with 9-16 clinically significant alterations were located in the right colon, while one-third of tumors with 17-25 total alterations were located in the right colon; tumors with <9 alterations were predominantly located in the left colon. In the CRC dataset from TCGA, 75% of hypermutated tumors arose from the right colon, yet not all of them were MSI-H [5]. 
Mutations in polymerase ε (POLE) were found in 25% of hypermutated tumors in that cohort. Mutations in POLE have been shown to contribute to an ultramutated yet MSS phenotype in colorectal tumors [36]. A recent NGS study confirmed that increasing mutational load correlated with MSI, yet colorectal tumors with the highest mutational burden that were distinct from MSI tumors all harbored POLE mutations [37]. Furthermore, mismatch repair deficiency or MSI has recently been shown to predict clinical benefit from immune checkpoint blockade with anti-PD-1 therapy in mCRC [38]. The characterization of mutational load in CRC may serve as a better indicator than MSI status in determining a hypermutant profile that could predict benefit from immunotherapy. Our findings are hypothesis-generating and offer support to consider molecular analysis of tumors to determine the total number of alterations as a potential correlate to MSI and candidacy for anti-PD-1 therapy in mCRC. Future studies of larger size and, ideally, prospective design will be helpful in corroborating associations between molecular alterations of interest described in our study and prognosis, resistance to EGFR inhibition, and/or ability to be targeted for therapy in mCRC. Comparative genomic analyses have identified a high level of concordance, particularly for RAS, BRAF, and PIK3CA mutations, between colorectal primary and metastatic tumors [39, 40]. However, other molecular alterations may differ based on the site of the tumor and/or exposure to chemotherapy [41–44]. Although such mixed results are likely dependent on the specific mutation that is profiled, other factors including specimen integrity and sampling method may also contribute to heterogeneity. Indeed, further analyses are needed to describe the concordance or discordance of other mutations across tumor sites and treatment effects in mCRC, and careful consideration in design will be needed in order to account for confounding factors as described above. In conclusion, comprehensive genomic profiling can uncover gene alterations beyond the conventional RAS or RAF mutant subtypes that predict resistance to anti-EGFR therapy and can identify potential therapeutic targets outside of NCCN standard treatments in mCRC. ERBB2 amplified tumors commonly originate from the rectosigmoid colon, are predominantly RAS/BRAF wild-type, and may predict benefit from HER2-directed therapy. Hypermutant tumors or tumors with POLE mutations may predict benefit from anti-PD-1 therapy. Our findings are hypothesis-generating and warrant further investigation in larger datasets and in prospective settings. MATERIALS AND METHODS: Study patients and tumor samples Patients with advanced or metastatic (stage IV) colorectal cancer treated at the Gastrointestinal Medical Oncology Clinic at City of Hope National Medical Center (Duarte, CA) between April 2013 and February 2016 were screened for this study. Eligibility was limited to those who underwent expanded genomic tumor analysis by FoundationOne. There were no exclusions based on tumor histology, medical comorbidities, previous treatment or lines of prior therapy, or performance status. Comprehensive genomic profiling was conducted through NGS via FoundationOne (Foundation Medicine, Inc., Cambridge, MA) with reports generated from April 2013 to February 2016. The study was approved by the Institutional Review Board (IRB). 
Next-generation sequencing Comprehensive genomic analysis was conducted on tumor samples (formalin-fixed paraffin-embedded) retrieved from surgical resection, core needle biopsies, or excisional biopsies and delivered to Foundation Medicine, Inc. The NGS assay performed by FoundationOne has been previously described and validated [45]. The initial whole-genome shotgun library construction and hybridization-based capture of 4,557 exons from 287 cancer-related genes and 47 introns from 19 genes with frequent DNA rearrangements has since been expanded to identify genetic alterations across the coding regions of 315 cancer-related genes and introns from 28 genes commonly rearranged in solid cancers. Study design Retrospective analysis of genetic mutations, amplifications, or alterations present in our cohort of 138 patients with mCRC was performed through test results provided in an integrative report available via FoundationICE (Interactive Cancer Explorer). Patient demographics including age, sex, ethnicity, site of primary, stage at diagnosis, and number of previous treatments were obtained from chart abstraction of each patient's electronic medical record (EMR). Microsatellite instability classified as stable (MSS), low (MSI-L), or high (MSI-H) was abstracted from pathology reports, and response to anti-EGFR therapy, when available, was described according to Response Evaluation Criteria in Solid Tumors (RECIST) and obtained from medical records [46]. The total number of clinically significant alterations for each patient was determined by tallying the sum of alterations included in the panel of clinically significant variants provided by FoundationICE reports and arbitrarily categorized into 3 groups (<9, 9-16, and >16 total number of alterations). We defined hypermutant tumors as those in the highest number of mutations group that were also found to have high TMB as validated by FoundationOne (high >23.1 mutations/Mb, intermediate 3.2-23.1 mutations/Mb, and low <3.2 mutations/Mb) [34]. 
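To make the categorization above concrete, the following is a minimal illustrative sketch of how the alteration-count groups and TMB categories described in the study design could be assigned; it is not the authors' actual workflow (their descriptive analyses were done in Excel), and the function names are hypothetical.

```python
# Illustrative sketch only: assigns the alteration-count groups (<9, 9-16, >16)
# and TMB categories (low <3.2, intermediate 3.2-23.1, high >23.1 mutations/Mb)
# described in the Study design paragraph above. Function names are hypothetical.

def alteration_group(n_alterations: int) -> str:
    """Categorize the total number of clinically significant alterations."""
    if n_alterations < 9:
        return "<9"
    elif n_alterations <= 16:
        return "9-16"
    return ">16"

def tmb_category(tmb_per_mb: float) -> str:
    """Categorize tumor mutational burden (mutations per megabase)."""
    if tmb_per_mb < 3.2:
        return "low"
    elif tmb_per_mb <= 23.1:
        return "intermediate"
    return "high"

def is_hypermutant(n_alterations: int, tmb_per_mb: float) -> bool:
    """Hypermutant = highest alteration-count group AND high TMB, per the definition above."""
    return alteration_group(n_alterations) == ">16" and tmb_category(tmb_per_mb) == "high"

# Example: a tumor with 20 clinically significant alterations and TMB 122 mutations/Mb
print(is_hypermutant(20, 122))  # True
```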
Statistical analyses All statistical analyses performed were descriptive and no formal statistical hypotheses were assessed. The sample size was determined by the total number of mCRC patients with FoundationOne results available. All descriptive statistics were conducted in Excel with associated formulas and functions.
Background: Recent molecular characterization of colorectal tumors has identified several molecular alterations of interest that are considered targetable in metastatic colorectal cancer (mCRC). Methods: We conducted a single-institution, retrospective study based on comprehensive genomic profiling of tumors from 138 patients with mCRC using next-generation sequencing (NGS) via FoundationOne. Results: Overall, RAS mutations were present in 51.4% and RAF mutations were seen in 7.2% of mCRC patients. We found a novel KRASR68S1 mutation associated with an aggressive phenotype. RAS amplifications (1.4% KRAS and 0.7% NRAS), MET amplifications (2.2%), BRAFL597R alterations (0.7%), ARAFS214F alterations (0.7%), and concurrent RAS+RAF (1.4%), BRAF+RAF1 (0.7%), and rare PTEN-PIK3CA-AKT pathway mutations were identified and predominantly associated with poor prognosis. ERBB2 (HER2) amplified tumors were identified in 5.1% and all arose from the rectosigmoid colon. Three cases (2.2%) were associated with a hypermutated profile that was corroborated with findings of high tumor mutational burden (TMB): 2 cases with MSI-H and 1 case with a POLE mutation. Conclusions: Comprehensive genomic profiling can uncover alterations beyond the well-characterized RAS/RAF mutations associated with anti-EGFR resistance. ERBB2 amplified tumors commonly originate from the rectosigmoid colon, are predominantly RAS/BRAF wild-type, and may predict benefit to HER2-directed therapy. Hypermutant tumors or tumors with high TMB correlate with MSI-H status or POLE mutations and may predict a benefit from anti-PD-1 therapy.
INTRODUCTION: Colorectal cancer (CRC) remains the third leading cause of cancer death in both men and women in the United States, with an estimated 134,490 new cases and 49,190 deaths in 2016 [1]. Recent advances in the treatment of metastatic CRC (mCRC) have identified improved outcomes with the addition of epidermal growth factor receptor (EGFR)-targeting agents to conventional combination cytotoxic therapy in patients with extended RAS wild-type tumors. In contrast, activating mutations in the RAS gene (KRAS or NRAS, present in approximately 50% of cases of mCRC) and BRAF gene (present in about 5% of mCRC patients) have been associated with lack of clinically meaningful benefit or harm when anti-EGFR therapy is employed [2]. The identification of candidates for anti-EGFR therapy through the exclusion of RAS and BRAF mutations in mCRC serves as a model for selecting optimal therapy based on patient genomic profiles and molecular phenotypes. Several decades of genomic studies, including the use of more recent next-generation sequencing (NGS), have expedited the search for genetic alterations for potential therapeutic targeting in CRC [3, 4]. Recently, comprehensive molecular characterization of 224 colorectal tumors was performed by The Cancer Genome Atlas (TCGA) Network [5]. Sixteen percent of colorectal tumors were found to be hypermutated and were more commonly found in the right colon, with 75% of these cases demonstrating expectedly high microsatellite instability (MSI-H). Twenty-four genes were identified as having significant mutations of interest, including, as expected, APC, SMAD4, TP53, PIK3CA, and KRAS. Interestingly, mutations, deletions, or amplifications of the ERBB gene family were found in 19% of tumors. In sum, this genomic analysis identified several molecular alterations that are considered targetable, including mediators of dysregulated WNT, RAS, and PI3K pathways such as ERBB2, ERBB3, MEK, AKT, MTOR, IGF2, and IGFR. The recent identification of gene mutations and amplifications of potential significance for therapeutic purposes has led us to investigate the genomic profiles of mCRC patients using NGS (FoundationOne). Here, we describe a single-institution experience in reporting results from comprehensive genomic analysis of tumors from 138 mCRC patients. We aim to characterize genetic alterations present in our study population that have known correlates to prognosis, therapeutic resistance, and potential therapeutic targets in mCRC. In this study, we also report the existence of concurrent gene mutations rarely described in the literature and novel mutations and amplifications that can lead to targeting outside of National Comprehensive Cancer Network (NCCN) standard treatments. DISCUSSION: All statistical analyses performed were descriptive and no formal statistical hypotheses were assessed. The sample size was determined by the total number of mCRC patients with FoundationOne results available. All descriptive statistics were conducted in Excel with associated formulas and functions.
Background: Recent molecular characterization of colorectal tumors has identified several molecular alterations of interest that are considered targetable in metastatic colorectal cancer (mCRC). Methods: We conducted a single-institution, retrospective study based on comprehensive genomic profiling of tumors from 138 patients with mCRC using next-generation sequencing (NGS) via FoundationOne. Results: Overall, RAS mutations were present in 51.4% and RAF mutations were seen in 7.2% of mCRC patients. We found a novel KRASR68S1 mutation associated with an aggressive phenotype. RAS amplifications (1.4% KRAS and 0.7% NRAS), MET amplifications (2.2%), BRAFL597R alterations (0.7%), ARAFS214F alterations (0.7%), and concurrent RAS+RAF (1.4%), BRAF+RAF1 (0.7%), and rare PTEN-PIK3CA-AKT pathway mutations were identified and predominantly associated with poor prognosis. ERBB2 (HER2) amplified tumors were identified in 5.1% and all arose from the rectosigmoid colon. Three cases (2.2%) were associated with a hypermutated profile that was corroborated with findings of high tumor mutational burden (TMB): 2 cases with MSI-H and 1 case with a POLE mutation. Conclusions: Comprehensive genomic profiling can uncover alterations beyond the well-characterized RAS/RAF mutations associated with anti-EGFR resistance. ERBB2 amplified tumors commonly originate from the rectosigmoid colon, are predominantly RAS/BRAF wild-type, and may predict benefit to HER2-directed therapy. Hypermutant tumors or tumors with high TMB correlate with MSI-H status or POLE mutations and may predict a benefit from anti-PD-1 therapy.
11,165
305
[ 488, 4966, 226, 507, 444, 218, 215, 342, 207, 303, 2122 ]
12
[ "mutations", "patients", "tumors", "ras", "msi", "concurrent", "colon", "patient", "alterations", "mutation" ]
[ "egfr inhibitors", "colorectal cancer crc", "mutations oncogenic ras", "prognosis resistance egfr", "egfr therapy mcrc" ]
null
[CONTENT] metastatic colorectal cancer | comprehensive genomic profiling | next-generation sequencing | FoundationOne | retrospective [SUMMARY]
[CONTENT] metastatic colorectal cancer | comprehensive genomic profiling | next-generation sequencing | FoundationOne | retrospective [SUMMARY]
null
[CONTENT] metastatic colorectal cancer | comprehensive genomic profiling | next-generation sequencing | FoundationOne | retrospective [SUMMARY]
[CONTENT] metastatic colorectal cancer | comprehensive genomic profiling | next-generation sequencing | FoundationOne | retrospective [SUMMARY]
[CONTENT] metastatic colorectal cancer | comprehensive genomic profiling | next-generation sequencing | FoundationOne | retrospective [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Colorectal Neoplasms | Female | Gene Amplification | Gene Expression Profiling | Genetic Predisposition to Disease | Genetic Variation | High-Throughput Nucleotide Sequencing | Humans | Male | Middle Aged | Mutation | Neoplasm Metastasis | Neoplasm Staging | Proto-Oncogene Proteins | Retrospective Studies [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Colorectal Neoplasms | Female | Gene Amplification | Gene Expression Profiling | Genetic Predisposition to Disease | Genetic Variation | High-Throughput Nucleotide Sequencing | Humans | Male | Middle Aged | Mutation | Neoplasm Metastasis | Neoplasm Staging | Proto-Oncogene Proteins | Retrospective Studies [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Colorectal Neoplasms | Female | Gene Amplification | Gene Expression Profiling | Genetic Predisposition to Disease | Genetic Variation | High-Throughput Nucleotide Sequencing | Humans | Male | Middle Aged | Mutation | Neoplasm Metastasis | Neoplasm Staging | Proto-Oncogene Proteins | Retrospective Studies [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Colorectal Neoplasms | Female | Gene Amplification | Gene Expression Profiling | Genetic Predisposition to Disease | Genetic Variation | High-Throughput Nucleotide Sequencing | Humans | Male | Middle Aged | Mutation | Neoplasm Metastasis | Neoplasm Staging | Proto-Oncogene Proteins | Retrospective Studies [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Colorectal Neoplasms | Female | Gene Amplification | Gene Expression Profiling | Genetic Predisposition to Disease | Genetic Variation | High-Throughput Nucleotide Sequencing | Humans | Male | Middle Aged | Mutation | Neoplasm Metastasis | Neoplasm Staging | Proto-Oncogene Proteins | Retrospective Studies [SUMMARY]
[CONTENT] egfr inhibitors | colorectal cancer crc | mutations oncogenic ras | prognosis resistance egfr | egfr therapy mcrc [SUMMARY]
[CONTENT] egfr inhibitors | colorectal cancer crc | mutations oncogenic ras | prognosis resistance egfr | egfr therapy mcrc [SUMMARY]
null
[CONTENT] egfr inhibitors | colorectal cancer crc | mutations oncogenic ras | prognosis resistance egfr | egfr therapy mcrc [SUMMARY]
[CONTENT] egfr inhibitors | colorectal cancer crc | mutations oncogenic ras | prognosis resistance egfr | egfr therapy mcrc [SUMMARY]
[CONTENT] egfr inhibitors | colorectal cancer crc | mutations oncogenic ras | prognosis resistance egfr | egfr therapy mcrc [SUMMARY]
[CONTENT] mutations | patients | tumors | ras | msi | concurrent | colon | patient | alterations | mutation [SUMMARY]
[CONTENT] mutations | patients | tumors | ras | msi | concurrent | colon | patient | alterations | mutation [SUMMARY]
null
[CONTENT] mutations | patients | tumors | ras | msi | concurrent | colon | patient | alterations | mutation [SUMMARY]
[CONTENT] mutations | patients | tumors | ras | msi | concurrent | colon | patient | alterations | mutation [SUMMARY]
[CONTENT] mutations | patients | tumors | ras | msi | concurrent | colon | patient | alterations | mutation [SUMMARY]
[CONTENT] gene | genomic | mcrc | therapeutic | mutations | targeting | cases | potential | crc | recent [SUMMARY]
[CONTENT] medical | number | available | conducted | criteria | foundationone | genes | total number | statistical | alterations [SUMMARY]
null
[CONTENT] mutations | tumors | therapy | crc | resistance | egfr | anti | mcrc | anti egfr | ras [SUMMARY]
[CONTENT] mutations | tumors | patients | msi | concurrent | ras | metastases | year | tumor | alterations [SUMMARY]
[CONTENT] mutations | tumors | patients | msi | concurrent | ras | metastases | year | tumor | alterations [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] 138 | mCRC | NGS | FoundationOne [SUMMARY]
null
[CONTENT] RAS | RAF ||| RAS/BRAF ||| [SUMMARY]
[CONTENT] ||| ||| 138 | mCRC | NGS | FoundationOne ||| RAS | 51.4% | RAF | 7.2% | mCRC ||| ||| RAS | 1.4% | KRAS | 0.7% | NRAS | MET | 2.2% | 0.7% | 0.7% | 1.4% | 0.7% ||| ERBB2 | 5.1% ||| Three | 2.2% | TMB | 2 | 1 | POLE ||| RAS | RAF ||| RAS/BRAF ||| [SUMMARY]
[CONTENT] ||| ||| 138 | mCRC | NGS | FoundationOne ||| RAS | 51.4% | RAF | 7.2% | mCRC ||| ||| RAS | 1.4% | KRAS | 0.7% | NRAS | MET | 2.2% | 0.7% | 0.7% | 1.4% | 0.7% ||| ERBB2 | 5.1% ||| Three | 2.2% | TMB | 2 | 1 | POLE ||| RAS | RAF ||| RAS/BRAF ||| [SUMMARY]
The Incidence of Psychotic Disorders and Area-level Marginalization in Ontario, Canada: A Population-based Retrospective Cohort Study.
33896210
There is limited Canadian evidence on the impact of socio-environmental factors on psychosis risk. We sought to examine the relationship between area-level indicators of marginalization and the incidence of psychotic disorders in Ontario.
BACKGROUND
We conducted a retrospective cohort study of all people aged 14 to 40 years living in Ontario in 1999 using health administrative data and identified incident cases of psychotic disorders over a 10-year follow-up period. Age-standardized incidence rates were estimated for census metropolitan areas (CMAs). Poisson regression models adjusting for age and sex were used to calculate incidence rate ratios (IRRs) based on CMA and area-level marginalization indices.
METHODS
There is variation in the incidence of psychotic disorders across the CMAs. Our findings suggest a higher rate of psychotic disorders in areas with the highest levels of residential instability (IRR = 1.26, 95% confidence interval [CI], 1.18 to 1.35), material deprivation (IRR = 1.30, 95% CI, 1.16 to 1.45), ethnic concentration (IRR = 1.61, 95% CI, 1.38 to 1.89), and dependency (IRR = 1.35, 95% CI, 1.18 to 1.54) when compared to areas with the lowest levels of marginalization. Marginalization attenuates the risk in some CMAs.
RESULTS
There is geographic variation in the incidence of psychotic disorders across the province of Ontario. Areas with greater levels of marginalization have a higher incidence of psychotic disorders, and marginalization attenuates the differences in risk across geographic location. With further study, replication, and the use of the most up-to-date data, a case may be made to consider social policy interventions as preventative measures and to direct services to areas with the highest risk. Future research should examine how marginalization may interact with other social factors including ethnicity and immigration.
CONCLUSIONS
[ "Cohort Studies", "Humans", "Incidence", "Ontario", "Psychotic Disorders", "Retrospective Studies" ]
8935600
Introduction
The seminal work of Faris and Dunham conducted in Chicago in the 1930s provided empirical evidence that the incidence of non-affective psychotic disorders varied based on geographical and neighbourhood-level sociodemographic factors. It was observed that in neighbourhoods with increasing levels of social disorganization, there was a higher incidence of schizophrenia. 1 In the intervening years since this early work, and with the advent of improved epidemiological methods, many studies have examined this association and similarly found area-level social factors to be associated with the risk of developing a psychotic disorder. 2–4 Most of the research examining the social causes of psychotic disorders has been conducted in Europe. 5 The international research has highlighted that people living in the most deprived neighbourhoods are at higher risk of having a psychotic disorder. 2 The most recent epidemiological work from several European countries has also highlighted large differences in the incidence of psychotic disorders between different cities and urban contexts. 6 In Canada, there is a small but growing body of research on the role of social factors that may influence both the incidence and course of psychotic disorders. 7–11 This is an important area of study considering that prior work conducted in Ontario has found that people with schizophrenia have a 3-fold increase in all-cause mortality when compared to the general population 12 and high levels of ongoing health service use. 13 In Ontario, the largest province in Canada, we know that where people live impacts how they use services. People who live in more deprived areas use more mental health services. 14 In Toronto, the largest and most diverse city in the province, presentation to emergency mental health services for psychosis differs based on the level of marginalization of the neighbourhood in which people reside. 10 Although there is prior Canadian research on health service use in this clinical population, 13,15–17 there has been limited study of the role of social factors in the risk of developing a psychotic disorder in the Canadian context. One study in Ontario has looked at the risk of developing a psychotic disorder in immigrant and refugee groups, finding that some migrant groups have an elevated risk whereas others have a lower risk. 8 In Quebec, the second largest province, health administrative data have been used to examine the role of socio-environmental and geographical factors in the risk of developing first-episode psychosis. Similar to international work on this topic, there was a higher incidence of psychosis in the most deprived areas in Montreal. 7 Differences in the incidence rates of schizophrenia between Quebec City and Montréal, 2 of the main metropolitan centres in the province, and between urban and rural areas have also been found. 18 The aim of the study was to examine the geographical distribution of, and the role of area-level marginalization indicators in, the incidence of schizophrenia spectrum psychotic disorders in Ontario. We hypothesize that (i) there will be variation in incidence between major metropolitan centres and (ii) there will be a higher incidence in areas with the highest levels of marginalization.
Methods
Study Design, Setting, and Population We constructed a retrospective cohort that included all Ontario residents aged 14 to 40 years as of April 1, 1999, using data from the universal public health insurance plan, which has been described in detail previously. 8 The cohort was followed for 10 years to ascertain incident cases of psychotic disorders. These ages were used as they would allow for a 10-year follow-up period beyond the maximum age of some of the early psychosis intervention programmes in Ontario. The cohort was constructed from the administrative data holdings at ICES (formerly known as the Institute for Clinical Evaluative Sciences), which enables linkage of individual records from multiple health administrative databases across the province of Ontario. At the time of cohort inception, approximately 11.5 million people resided in Ontario. 19 All individuals included in the cohort were eligible for the Ontario Health Insurance Plan (OHIP), the provincially administered health insurance plan, in the 5 years prior to cohort inception. All long-term residents who primarily reside in Ontario are eligible for OHIP. Person-time of follow-up was calculated for each person in the cohort from the time of cohort inception until an index episode of a psychotic disorder, death, or the end of the follow-up period. Those who had a history of contact with health services in Ontario for a psychotic disorder up to 20 years prior to the cohort start date, dependent on the databases, were removed to ensure incident cases were identified. This lookback window is in keeping with the optimal lookback period described in the literature. 20 All covariates were defined at the time of cohort inception. 
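Purely as an illustrative aid (this is not the authors' code, and the dates and names below are assumptions), the censoring rule just described, with follow-up from cohort inception until the earliest of an index psychotic episode, death, or the end of follow-up, could be expressed as follows.

```python
# Illustrative sketch of the person-time rule described above; the follow-up end
# date and field names are assumptions for the example, not ICES specifications.
from datetime import date
from typing import Optional

COHORT_START = date(1999, 4, 1)
FOLLOW_UP_END = date(2009, 3, 31)  # assumed end of the 10-year follow-up window

def person_years(index_episode: Optional[date], death: Optional[date]) -> float:
    """Person-time from cohort inception to the earliest censoring event."""
    candidates = [d for d in (index_episode, death, FOLLOW_UP_END) if d is not None]
    exit_date = min(candidates)
    return (exit_date - COHORT_START).days / 365.25

# Example: a person with an index episode on 2003-06-15 contributes about 4.2 person-years.
print(round(person_years(date(2003, 6, 15), None), 1))
```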
Data Sources Sources of data included the Registered Persons Database which is a central population registry containing basic demographic information that enables linkage across administrative data by identifying all Ontario residents insured by OHIP; the Ontario Mental Health Reporting System containing data on hospitalizations to adult psychiatry beds; the Canadian Institute for Health Information Discharge Abstract Database containing data on all other acute care hospitalizations and inpatient psychiatric hospitalizations prior to 2005; the National Ambulatory Care Reporting System containing information on emergency department visits; data on outpatient physician billings from OHIP; and the Ontario Marginalization Index (ON-Marg) containing area-level deprivation indices based on census data. Case Ascertainment Incident cases of psychotic disorders were identified over the 10-year follow-up window of 1999 to 2008 inclusive. Incident cases of psychotic disorders were based on either (i) a primary discharge diagnosis from a general hospital bed with a diagnosis of schizophrenia or schizoaffective disorder based on International Classification of Diseases (ICD)-9 code 295.x, or ICD-10 code F20 or F25, (ii) a primary discharge diagnosis of schizophrenia or schizoaffective disorder from a psychiatric hospital bed based on Diagnostic and Statistical Manual of Mental Disorders, fourth edition code 295.x, or (iii) a minimum of 2 OHIP billing claims or emergency department visits with a diagnostic code for schizophrenia or schizoaffective disorder (ICD-9 code 295.x, or ICD-10 code F20 or F25) in a 12-month period. Previous research has validated a similar algorithm for case ascertainment against medical chart diagnoses and found high sensitivity (91.6%) and moderate specificity (67.4%). 21 
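The case definition above is essentially a rule applied over linked administrative records. The sketch below is an illustrative rendering of that rule, not the validated ICES algorithm itself; the record layout and field names are assumptions made for the example.

```python
# Illustrative sketch of the three-part case definition described above.
# Field names and record layout are hypothetical; this is not the ICES algorithm.
from datetime import date, timedelta

SCHIZ_CODES = ("295", "F20", "F25")  # ICD-9 295.x, ICD-10 F20/F25 (DSM-IV 295.x for psychiatric beds)

def has_code(diagnosis: str) -> bool:
    """True if a diagnosis string carries a schizophrenia/schizoaffective code."""
    return any(diagnosis.startswith(code) for code in SCHIZ_CODES)

def is_incident_case(inpatient_dx: list[str], claim_records: list[tuple[date, str]]) -> bool:
    """Apply the case definition.

    inpatient_dx: primary discharge diagnoses from general or psychiatric hospital beds.
    claim_records: (service_date, diagnosis) pairs from OHIP billings or emergency department visits.
    """
    # Criteria (i) and (ii): any qualifying primary discharge diagnosis.
    if any(has_code(dx) for dx in inpatient_dx):
        return True
    # Criterion (iii): at least 2 qualifying claims/ED visits within a 12-month period.
    dates = sorted(d for d, dx in claim_records if has_code(dx))
    return any(later - earlier <= timedelta(days=365) for earlier, later in zip(dates, dates[1:]))

# Example: two qualifying outpatient claims 8 months apart satisfy criterion (iii).
print(is_incident_case([], [(date(2001, 2, 1), "F20.0"), (date(2001, 10, 1), "295.3")]))  # True
```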
Covariates and Exposure Classification The socio-environmental exposures of interest included (i) where people in the cohort reside in the province based on census metropolitan area (CMA) and (ii) area-level indicators of marginalization. (i) CMAs The CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core. 22 Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres with those who reside in all other non-urban areas and smaller population centres. (ii) Area-level indicators of marginalization Exposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and comprises 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas. Ontario Marginalization Index Dimensions and Census Indicators. 
23,24 
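As an illustration of the exposure assignment just described (postal-code linkage to ON-Marg dimension scores, then quintiles per dimension), a minimal pandas sketch follows; the postal codes, scores, and column names are fabricated for the example and do not reflect the actual ON-Marg or ICES data structures.

```python
# Illustrative sketch of the ON-Marg linkage and quintile assignment described above.
# All values and column names are fabricated; this is not the ON-Marg itself.
import pandas as pd

# Hypothetical area-level scores for one marginalization dimension (higher = more marginalized).
onmarg = pd.DataFrame({
    "postal_code": ["A1A", "B2B", "C3C", "D4D", "E5E", "F6F", "G7G", "H8H", "J9J", "K1K"],
    "material_deprivation": [-1.2, -0.8, -0.3, -0.1, 0.0, 0.2, 0.5, 0.9, 1.4, 2.1],
})

# Quintile cut-points from the provincial (area-level) distribution of scores;
# quintile 1 = least marginalized, quintile 5 = most marginalized.
edges = onmarg["material_deprivation"].quantile([0, 0.2, 0.4, 0.6, 0.8, 1]).values
onmarg["material_deprivation_q"] = pd.cut(
    onmarg["material_deprivation"], bins=edges, labels=[1, 2, 3, 4, 5], include_lowest=True
)

# Cohort members are linked by postal code at cohort entry; members who cannot be
# linked (missing postal code) are excluded, mirroring the complete case analyses.
cohort = pd.DataFrame({"person_id": [1, 2, 3], "postal_code": ["A1A", "J9J", None]})
linked = (cohort.merge(onmarg, on="postal_code", how="left")
                .dropna(subset=["material_deprivation_q"]))
print(linked[["person_id", "material_deprivation_q"]])
```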
Statistical Analyses We summarized baseline characteristics of the cohort using descriptive statistics, specifically means and standard deviations (SD) for continuous data and proportions for categorical data. Age- and sex-standardized incidence rates were calculated per 100,000 person-years for the entire province and for the CMAs, using the 1996 population of Canada 25 as the standard population to facilitate comparison across geographies by adjusting to the age structure of the standard population. The 1996 census was used as this was the last census prior to cohort entry. Sex-stratified age-standardized rates were also calculated, as the risk of psychotic disorders differs between males and females. 26 We used multivariable Poisson regression models to obtain adjusted incidence rate ratios (IRRs) with 95% confidence intervals (CIs). Complete case analyses were used for all regression models. We first fit a model for the incidence in each CMA, relative to people not living in a CMA, while adjusting for age and sex. 
We then proceeded to fit a model that accounted for exposure to area-level marginalization, in addition to CMA, while adjusting for age and sex. Poisson regression models were compared to negative binomial models; given that the model estimates were similar and the data were not overdispersed, the results of the Poisson regression are presented. All statistical analyses were conducted using Stata version 16.1 27 and are presented as incidence rates or IRRs with 95% CIs; confidence intervals that do not include unity are considered statistically significant. Mapping of incidence estimates was conducted using QGIS version 3.6. 28 Ethics Approval Ethics approval for this study was obtained from the research ethics board at the Centre for Addiction and Mental Health, Toronto, Ontario, Canada.
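The Poisson modelling step described in the Statistical Analyses subsection above was carried out in Stata. Purely as a hedged illustration of the same kind of model, the sketch below fits an adjusted Poisson regression on aggregated counts with a person-time offset and exponentiates the coefficients into IRRs; the data frame and variable names are invented for the example and are not the study's data.

```python
# Illustrative sketch of an age- and sex-adjusted Poisson model for IRRs with a
# person-time offset. The counts below are fabricated; the authors used Stata.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per stratum: incident cases and person-years at risk (fabricated values).
df = pd.DataFrame({
    "cases":         [10, 28, 18, 45, 8, 22, 15, 38],
    "person_years":  [20000.0, 50000.0, 22000.0, 52000.0, 18000.0, 40000.0, 21000.0, 43000.0],
    "age_group":     ["14-24", "25-40", "14-24", "25-40", "14-24", "25-40", "14-24", "25-40"],
    "sex":           ["F", "F", "M", "M", "F", "F", "M", "M"],
    "instability_q": [1, 1, 1, 1, 5, 5, 5, 5],  # marginalization quintile (1 = least marginalized)
})

model = smf.glm(
    "cases ~ C(age_group) + C(sex) + C(instability_q)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()

# Exponentiated coefficients are adjusted incidence rate ratios with 95% CIs.
irr = np.exp(model.params)
ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([irr.rename("IRR"), ci], axis=1))
```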
Results
The cohort included 4,284,694 people, of whom 50% (n = 2,158,166) were male. Of the total cohort, 0.7% (n = 32,017) of people could not be linked to the ON-Marg database due to missing postal code information and were excluded from the analyses. Baseline characteristics of the cohort are presented in Table 2. There were 25,686 incident cases of psychotic disorder, of whom 62% (n = 15,809) were male and 38% (n = 9,877) were female. The mean age at the time of cohort entry was 28.0 years (SD = 7.9), and the mean age at the time of index diagnosis was 32.5 years (SD = 8.6). (Table 2. Sociodemographic Characteristics of the Cohort Aged 14 to 40 Years Living in Ontario as of April 1, 1999.) The age- and sex-standardized incidence rate of psychotic disorders among the entire cohort was 54.9 (95% CI, 53.6 to 56.3) per 100,000 person-years. Incidence rates by CMA are visualized in Figure 1 and presented in Online Appendix 1. Across the province, there was a higher incidence among males, with an age-standardized incidence rate of 67.4 (95% CI, 65.4 to 69.6) per 100,000 person-years, compared to an age-standardized incidence rate of 42.4 (95% CI, 40.8 to 44.1) per 100,000 person-years in females. Incidence rates varied between CMAs across the province. For the entire cohort, standardized incidence rates ranged from 51.4 (95% CI, 50.1 to 52.7) per 100,000 person-years in people residing outside of CMAs to 74.5 (95% CI, 73.0 to 76.1) per 100,000 person-years in Kingston. (Figure 1. Maps of the age-adjusted incidence rates of psychotic disorders in Ontario for the entire cohort, males, and females, per 100,000 person-years.) We found the risk of developing a psychotic disorder was higher in specific CMAs and was associated with area-level marginalization (Table 3). The rates of psychotic disorder were significantly elevated in Kingston, Belleville, Peterborough, Toronto, Hamilton, St. Catharines, Brantford, Guelph, London, Windsor, Sarnia, and Sudbury when compared to those not residing in a CMA, before area-level marginalization was taken into account. The highest risk was observed in Kingston (IRR = 1.48, 95% CI, 1.27 to 1.62) when compared to non-CMAs. (Table 3. Age- and Sex-adjusted Incidence Rate Ratios by Census Metropolitan Areas (CMAs) Compared to Non-CMAs in Ontario and Model with Marginalization Factors. Notes: IRR = incidence rate ratio; CI = confidence interval; Ref. = reference category; * unless otherwise indicated; statistically significant results bolded.) Marginalization attenuated the IRR when added to the model, whereby previously significant IRRs in many of the CMAs were no longer statistically significant when compared to non-CMA areas. When area-level marginalization was taken into account, the elevated risk persisted in Kingston (IRR = 1.20, 95% CI, 1.05 to 1.37), Guelph (IRR = 1.23, 95% CI, 1.06 to 1.41), and Sarnia (IRR = 1.24, 95% CI, 1.05 to 1.46), with a marginally elevated risk in Toronto (IRR = 1.04, 95% CI, 1.00 to 1.08). With these additional factors accounted for, we found a lower risk of developing a psychotic disorder in Hamilton (IRR = 0.86, 95% CI, 0.80 to 0.92) and Windsor (IRR = 0.90, 95% CI, 0.82 to 0.99) when compared to non-CMA areas. We found a higher risk of psychotic disorders in areas with higher levels of marginalization for each of the 4 indicators, when compared to areas with the lowest levels of marginalization. 
The risk was higher in areas with the highest levels of residential instability (IRR = 1.26, 95% CI, 1.18 to 1.35), material deprivation (IRR = 1.30, 95% CI, 1.16 to 1.45), ethnic concentration (IRR = 1.61, 95% CI, 1.38 to 1.89), and dependency (IRR = 1.35, 95% CI, 1.18 to 1.54), compared with areas with the lowest levels of marginalization on these indicators.
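As a rough check on the order of magnitude of the rates reported above, the totals can be combined directly. This is only a crude approximation: it assumes every linked cohort member contributed a full 10 years of follow-up and applies no age or sex standardization, so it is not expected to reproduce the reported standardized rate of 54.9 per 100,000 person-years exactly.

```python
# Crude, back-of-the-envelope incidence rate using only the totals reported above.
# Assumes a full 10 years of follow-up per person (ignoring censoring at diagnosis,
# death, or end of eligibility) and no age/sex standardization, so it differs from
# the reported standardized rate.
linked_cohort = 4_284_694 - 32_017        # people successfully linked to ON-Marg
incident_cases = 25_686
approx_person_years = linked_cohort * 10  # overestimates true person-time
crude_rate_per_100k = incident_cases / approx_person_years * 100_000
print(round(crude_rate_per_100k, 1))      # about 60 per 100,000 person-years
```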
Conclusion
We found geographic variation in the incidence of psychotic disorders across the province of Ontario, and the incidence was associated with contextual socio-environmental factors. Rates of psychotic disorders were elevated in some of the province's major metropolitan areas compared with areas outside of these metropolitan areas. Area-level marginalization appears to attenuate the risk associated with geographic location. Future research should account for important individual-level factors and examine how they may influence the risk of developing a psychotic disorder, particularly in relation to area-level factors. With further replication, the use of up-to-date data, and further study of socio-environmental exposures, this line of work may help inform social policy interventions as preventive measures and the planning of service delivery. It is particularly important to target services for people experiencing a first episode of psychosis, to ensure adequate resource allocation across the province, and to direct services to areas with elevated rates of psychotic disorders.
[ "Study Design, Setting, and Population", "Data Sources", "Case Ascertainment", "Covariates and Exposure Classification", "(i) CMAs", "(ii) Area-level indicators of marginalization", "Statistical Analyses", "Ethics Approval", "Limitations" ]
[ "We constructed a retrospective cohort that included all Ontario residents aged 14 to 40 years as of April 1, 1999, using data from the universal public health insurance plan, which has been described in detail previously.\n8\n The cohort was followed for 10 years to ascertain incident cases of psychotic disorders. These ages were used as it would allow for a 10-year follow-up period beyond the maximum age of some of the early psychosis intervention programme in Ontario. The cohort was constructed from the administrative data holdings at ICES (formerly known as the Institute for Clinical Evaluative Sciences), which enables linkage of individual records from multiple health administrative databases across the province of Ontario.\nAt the time of cohort inception, approximately 11.5 million people resided in Ontario.\n19\n All individuals included in the cohort were eligible for the Ontario Health Insurance Plan (OHIP), the provincially administered health insurance plan, in the 5 years prior to cohort inception. All long-term residents who primarily reside in Ontario are eligible for OHIP.\nPerson-time of follow-up was calculated for each person in the cohort from the time of cohort inception until an index episode of a psychotic disorder, death, or the end of the follow-up period. Those who had a history of contact with health services in Ontario for a psychotic disorder up to 20 years prior to the cohort start date, dependent on the databases, were removed to ensure incident cases were identified. This lookback window is in keeping with the optimal lookback period described in the literature.\n20\n All covariates were defined at the time of cohort inception.", "Sources of data included the Registered Persons Database which is a central population registry containing basic demographic information that enables linkage across administrative data by identifying all Ontario residents insured by OHIP; the Ontario Mental Health Reporting System containing data on hospitalizations to adult psychiatry beds; the Canadian Institute for Health Information Discharge Abstract Database containing data on all other acute care hospitalizations and inpatient psychiatric hospitalizations prior to 2005; the National Ambulatory Care Reporting System containing information on emergency department visits; data on outpatient physician billings from OHIP; and the Ontario Marginalization Index (ON-Marg) containing area-level deprivation indices based on census data.", "Incident cases of psychotic disorders were identified over the 10-year follow-up window of 1999 to 2008 inclusive. Incident cases of psychotic disorders were based on either (i) a primary discharge diagnosis from a general hospital bed with a diagnosis of schizophrenia or schizoaffective disorder based on International Classification of Diseases (ICD)-9 code 295.x, or ICD-10 code F20 or F25, (ii) a primary discharge diagnosis of schizophrenia or schizoaffective disorder from a psychiatric hospital bed based on Diagnostic and Statistical Manual of Mental Disorders, fourth edition code 295.x, or (iii) a minimum of 2 OHIP billing claims or emergency department visits with a diagnostic code for schizophrenia or schizoaffective disorder (ICD-9 code 295.x, or ICD-10 code F20 or F25) in a 12-month period. 
Previous research has validated a similar algorithm for case ascertainment against medical chart diagnoses and found high sensitivity (91.6%) and moderate specificity (67.4%).\n21\n\n", "The socio-environmental exposures of interest included (i) where people in the cohort reside in the province based on census metropolitan area (CMA) and (ii) area-level indicators of marginalization.\n(i) CMAs The CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core.\n22\n Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.\nThe CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core.\n22\n Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.\n(ii) Area-level indicators of marginalization Exposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. 
For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas.\nOntario Marginalization Index Dimensions and Census Indicators.\n23,24\n\n\nExposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas.\nOntario Marginalization Index Dimensions and Census Indicators.\n23,24\n\n", "The CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core.\n22\n Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.", "Exposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. 
For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas.\nOntario Marginalization Index Dimensions and Census Indicators.\n23,24\n\n", "We summarized baseline characteristics of the cohort using descriptive statistics, specifically means and standard deviations (SD) for continuous data and proportions for categorical data. Age- and sex-standardized incidence rates were calculated per 100,000 person-years for the entire province and for the CMAs, using the 1996 population of Canada\n25\n as the standard population to facilitate comparison across geographies by adjusting to the age structure of the standard population. The 1996 census was used as this was the last census prior to cohort entry. Sex-stratified age-standardized rates were also calculated, as the risk of psychotic disorders differs between males and females.\n26\n\n\nWe used multivariable Poisson regression models to obtain adjusted incidence rate ratios (IRRs) with 95% confidence intervals (CIs). Complete case analyses were used for all regression models. We first fit a model for the incidence in each CMA, relative to people not living in a CMA, while adjusting for age and sex. We then proceeded to fit a model that accounted for exposure to area-level marginalization, in addition to CMA, while adjusting for age and sex. Poisson regression models were compared to negative binomial models—given that the model estimates were similar, the data were not overdispersed, and the results of the Poisson regression were presented.\nAll statistical analyses were conducted using Stata version 16.1\n27\n and presented as incidence rates or IRRs with 95% CIs, and confidence intervals that do not include unity are considered statistically significant. Mapping of incidence estimates was conducted using QGIS version 3.6.\n28\n\n", "Ethics approval for this study was obtained from the research ethics board at the Centre for Addiction and Mental Health, Toronto, Ontario, Canada.", "This study is limited by the fact that exposure to area-level marginalization was defined at the time of cohort entry, and we did not account for changes in exposure during the follow-up period. Furthermore, any movement between geographical areas was not accounted for and may have influenced exposure and risk. Prior research suggests that following the first episode of psychosis, people may move to both areas of higher and lower marginalization.\n32,33\n This study only accounts for marginalization at the area level, and it is important to highlight that individual-level data on sociodemographic factors were not available, including individual-level immigration history. Previous research has found that neighbourhood-level factors moderate the role of individual-level social factors in relation to psychosis risk.\n29,30\n\n\nThis study used administrative health data that were not collected to specifically answer the research questions we posed. To reduce potential misclassification, we used a validated algorithm to identify incident cases.\n21\n The algorithm was created to identify cases of chronic schizophrenia; however, in this study, we are using it to identify incident cases of psychotic disorder, and it may therefore have different psychometric properties. 
The algorithm has a positive predictive value of 67.4%, which suggests that some cases identified in this cohort may be false positives.\nFurthermore, the data used for the current study and the previous validation study are over 10 years old and warrant replication. Beyond replication, there is also an opportunity to use up-to-date socio-environmental and clinical data for predictive modelling to forecast service use and resource allocation as recently been done in the United Kingdom.\n34,35\n\n\nThis study was not designed to make causal inferences and does not account for all factors that may be part of a causal pathway. Given that environmental factors only explain a portion of the risk, it is also important to consider genetic and other individual biological factors that impact a person’s risk of developing psychosis.\n36\n Both family history, genetic factors\n37\n and unobservable familial selections factors (e.g., relatives who also have an increased risk of developing a psychotic disorder who may be more likely to reside in marginalized or urban areas)\n38\n as well as patterns of substance use may be associated with socio-environmental and geographical factors that are examined in this study. Due to limitations of the data sources used, we were not able to account for substance-use patterns at the individual and area levels nor genetic and familial factors in this study. Future research should build on these limitations and further examine how the socio-environmental factors examined in this study are impacted by known biological risk factors and substance use patterns\n39\n using spatial approaches that explore synergistic risk.\n37\n Beyond biological factors, further attention should also be given to the role of other socio-environmental factors including immigration, ethnicity, and additional contextual factors including social capital, which may have important roles in moderating risk. These factors, in addition to geography and socio-environmental factors, should be examined in relation to the incidence of psychotic disorders as well as in relation to health service utilization and care outcomes." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Study Design, Setting, and Population", "Data Sources", "Case Ascertainment", "Covariates and Exposure Classification", "(i) CMAs", "(ii) Area-level indicators of marginalization", "Statistical Analyses", "Ethics Approval", "Results", "Discussion", "Limitations", "Conclusion", "Supplemental Material" ]
[ "The seminal work of Faris and Dunham conducted in Chicago in the 1930s provided empirical evidence that the incidence of non-affective psychotic disorders varied based on geographical and neighbourhood-level sociodemographic factors. It was observed that in neighbourhoods with increasing levels of social disorganization, there was a higher incidence of schizophrenia.\n1\n In the intervening years since this early work, and with the advent of improved epidemiological methods, many studies have examined this association and similarly found area-level social factors to be associated with the risk of developing a psychotic disorder.\n2\n–4\n Most of the research examining the social causes of psychotic disorders have been conducted in Europe.\n5\n The international research has highlighted that people living in the most deprived neighbourhoods are at higher risk of having a psychotic disorder.\n2\n The most recent epidemiological work from several European countries has also highlighted large differences in the incidence of psychotic disorders between different cities and urban contexts.\n6\n\n\nIn Canada, there is a small but growing body of research on the role of social factors which may influence both the incidence and course of psychotic disorders.\n7\n\n\n–11\n This is an important area of study considering prior work conducted in Ontario has found people with schizophrenia have a 3-fold increase in all-cause mortality when compared to the general population\n12\n and high levels of ongoing health service use.\n13\n\n\nIn Ontario, the largest province in Canada, we know that where people live impacts how they use services. People who live in more deprived areas use more mental health services.\n14\n In Toronto, the largest and most diverse city in the province, presentation to emergency mental health services for psychosis differs based on the level of marginalization of the neighbourhood in which people reside.\n10\n Although there is prior Canadian research on health service use in this clinical population,\n13,15\n–17\n there has been limited study of the role of social factors in the risk of developing a psychotic disorder in the Canadian context. One study in Ontario has looked at the risk of developing a psychotic disorder in immigrant and refugee groups, finding that some migrant groups have an elevated risk whereas others have a lower risk.\n8\n\n\nIn Quebec, the second largest province, health administrative data have been used to examine the role of socio-environmental and geographical factors in the risk of developing first-episode psychosis. Similar to international work on this topic, there was a higher incidence of psychosis in the most deprived areas in Montreal.\n7\n Differences in the incidence rates of schizophrenia between Quebec City and Montréal, 2 of the main metropolitan centres in the province, and between urban and rural areas have also been found.\n18\n\n\nThe aim of the study was to examine the geographical distribution and the role of area-level marginalization indicators on the incidence of schizophrenia spectrum psychotic disorders in Ontario. 
We hypothesize that (i) there will be variation in incidence between major metropolitan centres and (ii) there will be a higher incidence in areas with the highest levels of marginalization.", "Study Design, Setting, and Population We constructed a retrospective cohort that included all Ontario residents aged 14 to 40 years as of April 1, 1999, using data from the universal public health insurance plan, which has been described in detail previously.\n8\n The cohort was followed for 10 years to ascertain incident cases of psychotic disorders. These ages were used as it would allow for a 10-year follow-up period beyond the maximum age of some of the early psychosis intervention programme in Ontario. The cohort was constructed from the administrative data holdings at ICES (formerly known as the Institute for Clinical Evaluative Sciences), which enables linkage of individual records from multiple health administrative databases across the province of Ontario.\nAt the time of cohort inception, approximately 11.5 million people resided in Ontario.\n19\n All individuals included in the cohort were eligible for the Ontario Health Insurance Plan (OHIP), the provincially administered health insurance plan, in the 5 years prior to cohort inception. All long-term residents who primarily reside in Ontario are eligible for OHIP.\nPerson-time of follow-up was calculated for each person in the cohort from the time of cohort inception until an index episode of a psychotic disorder, death, or the end of the follow-up period. Those who had a history of contact with health services in Ontario for a psychotic disorder up to 20 years prior to the cohort start date, dependent on the databases, were removed to ensure incident cases were identified. This lookback window is in keeping with the optimal lookback period described in the literature.\n20\n All covariates were defined at the time of cohort inception.\nWe constructed a retrospective cohort that included all Ontario residents aged 14 to 40 years as of April 1, 1999, using data from the universal public health insurance plan, which has been described in detail previously.\n8\n The cohort was followed for 10 years to ascertain incident cases of psychotic disorders. These ages were used as it would allow for a 10-year follow-up period beyond the maximum age of some of the early psychosis intervention programme in Ontario. The cohort was constructed from the administrative data holdings at ICES (formerly known as the Institute for Clinical Evaluative Sciences), which enables linkage of individual records from multiple health administrative databases across the province of Ontario.\nAt the time of cohort inception, approximately 11.5 million people resided in Ontario.\n19\n All individuals included in the cohort were eligible for the Ontario Health Insurance Plan (OHIP), the provincially administered health insurance plan, in the 5 years prior to cohort inception. All long-term residents who primarily reside in Ontario are eligible for OHIP.\nPerson-time of follow-up was calculated for each person in the cohort from the time of cohort inception until an index episode of a psychotic disorder, death, or the end of the follow-up period. Those who had a history of contact with health services in Ontario for a psychotic disorder up to 20 years prior to the cohort start date, dependent on the databases, were removed to ensure incident cases were identified. 
This lookback window is in keeping with the optimal lookback period described in the literature.\n20\n All covariates were defined at the time of cohort inception.\nData Sources Sources of data included the Registered Persons Database which is a central population registry containing basic demographic information that enables linkage across administrative data by identifying all Ontario residents insured by OHIP; the Ontario Mental Health Reporting System containing data on hospitalizations to adult psychiatry beds; the Canadian Institute for Health Information Discharge Abstract Database containing data on all other acute care hospitalizations and inpatient psychiatric hospitalizations prior to 2005; the National Ambulatory Care Reporting System containing information on emergency department visits; data on outpatient physician billings from OHIP; and the Ontario Marginalization Index (ON-Marg) containing area-level deprivation indices based on census data.\nSources of data included the Registered Persons Database which is a central population registry containing basic demographic information that enables linkage across administrative data by identifying all Ontario residents insured by OHIP; the Ontario Mental Health Reporting System containing data on hospitalizations to adult psychiatry beds; the Canadian Institute for Health Information Discharge Abstract Database containing data on all other acute care hospitalizations and inpatient psychiatric hospitalizations prior to 2005; the National Ambulatory Care Reporting System containing information on emergency department visits; data on outpatient physician billings from OHIP; and the Ontario Marginalization Index (ON-Marg) containing area-level deprivation indices based on census data.\nCase Ascertainment Incident cases of psychotic disorders were identified over the 10-year follow-up window of 1999 to 2008 inclusive. Incident cases of psychotic disorders were based on either (i) a primary discharge diagnosis from a general hospital bed with a diagnosis of schizophrenia or schizoaffective disorder based on International Classification of Diseases (ICD)-9 code 295.x, or ICD-10 code F20 or F25, (ii) a primary discharge diagnosis of schizophrenia or schizoaffective disorder from a psychiatric hospital bed based on Diagnostic and Statistical Manual of Mental Disorders, fourth edition code 295.x, or (iii) a minimum of 2 OHIP billing claims or emergency department visits with a diagnostic code for schizophrenia or schizoaffective disorder (ICD-9 code 295.x, or ICD-10 code F20 or F25) in a 12-month period. Previous research has validated a similar algorithm for case ascertainment against medical chart diagnoses and found high sensitivity (91.6%) and moderate specificity (67.4%).\n21\n\n\nIncident cases of psychotic disorders were identified over the 10-year follow-up window of 1999 to 2008 inclusive. 
Incident cases of psychotic disorders were based on either (i) a primary discharge diagnosis from a general hospital bed with a diagnosis of schizophrenia or schizoaffective disorder based on International Classification of Diseases (ICD)-9 code 295.x, or ICD-10 code F20 or F25, (ii) a primary discharge diagnosis of schizophrenia or schizoaffective disorder from a psychiatric hospital bed based on Diagnostic and Statistical Manual of Mental Disorders, fourth edition code 295.x, or (iii) a minimum of 2 OHIP billing claims or emergency department visits with a diagnostic code for schizophrenia or schizoaffective disorder (ICD-9 code 295.x, or ICD-10 code F20 or F25) in a 12-month period. Previous research has validated a similar algorithm for case ascertainment against medical chart diagnoses and found high sensitivity (91.6%) and moderate specificity (67.4%).\n21\n\n\nCovariates and Exposure Classification The socio-environmental exposures of interest included (i) where people in the cohort reside in the province based on census metropolitan area (CMA) and (ii) area-level indicators of marginalization.\n(i) CMAs The CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core.\n22\n Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.\nThe CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core.\n22\n Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.\n(ii) Area-level indicators of marginalization Exposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. 
The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas.\nOntario Marginalization Index Dimensions and Census Indicators.\n23,24\n\n\nExposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas.\nOntario Marginalization Index Dimensions and Census Indicators.\n23,24\n\n\nThe socio-environmental exposures of interest included (i) where people in the cohort reside in the province based on census metropolitan area (CMA) and (ii) area-level indicators of marginalization.\n(i) CMAs The CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core.\n22\n Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.\nThe CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. 
All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core.\n22\n Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.\n(ii) Area-level indicators of marginalization Exposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas.\nOntario Marginalization Index Dimensions and Census Indicators.\n23,24\n\n\nExposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. 
For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas.\nOntario Marginalization Index Dimensions and Census Indicators.\n23,24\n\n\nStatistical Analyses We summarized baseline characteristics of the cohort using descriptive statistics, specifically means and standard deviations (SD) for continuous data and proportions for categorical data. Age- and sex-standardized incidence rates were calculated per 100,000 person-years for the entire province and for the CMAs, using the 1996 population of Canada\n25\n as the standard population to facilitate comparison across geographies by adjusting to the age structure of the standard population. The 1996 census was used as this was the last census prior to cohort entry. Sex-stratified age-standardized rates were also calculated, as the risk of psychotic disorders differs between males and females.\n26\n\n\nWe used multivariable Poisson regression models to obtain adjusted incidence rate ratios (IRRs) with 95% confidence intervals (CIs). Complete case analyses were used for all regression models. We first fit a model for the incidence in each CMA, relative to people not living in a CMA, while adjusting for age and sex. We then proceeded to fit a model that accounted for exposure to area-level marginalization, in addition to CMA, while adjusting for age and sex. Poisson regression models were compared to negative binomial models—given that the model estimates were similar, the data were not overdispersed, and the results of the Poisson regression were presented.\nAll statistical analyses were conducted using Stata version 16.1\n27\n and presented as incidence rates or IRRs with 95% CIs, and confidence intervals that do not include unity are considered statistically significant. Mapping of incidence estimates was conducted using QGIS version 3.6.\n28\n\n\nWe summarized baseline characteristics of the cohort using descriptive statistics, specifically means and standard deviations (SD) for continuous data and proportions for categorical data. Age- and sex-standardized incidence rates were calculated per 100,000 person-years for the entire province and for the CMAs, using the 1996 population of Canada\n25\n as the standard population to facilitate comparison across geographies by adjusting to the age structure of the standard population. The 1996 census was used as this was the last census prior to cohort entry. Sex-stratified age-standardized rates were also calculated, as the risk of psychotic disorders differs between males and females.\n26\n\n\nWe used multivariable Poisson regression models to obtain adjusted incidence rate ratios (IRRs) with 95% confidence intervals (CIs). Complete case analyses were used for all regression models. We first fit a model for the incidence in each CMA, relative to people not living in a CMA, while adjusting for age and sex. We then proceeded to fit a model that accounted for exposure to area-level marginalization, in addition to CMA, while adjusting for age and sex. 
Poisson regression models were compared to negative binomial models—given that the model estimates were similar, the data were not overdispersed, and the results of the Poisson regression were presented.\nAll statistical analyses were conducted using Stata version 16.1\n27\n and presented as incidence rates or IRRs with 95% CIs, and confidence intervals that do not include unity are considered statistically significant. Mapping of incidence estimates was conducted using QGIS version 3.6.\n28\n\n\nEthics Approval Ethics approval for this study was obtained from the research ethics board at the Centre for Addiction and Mental Health, Toronto, Ontario, Canada.\nEthics approval for this study was obtained from the research ethics board at the Centre for Addiction and Mental Health, Toronto, Ontario, Canada.", "We constructed a retrospective cohort that included all Ontario residents aged 14 to 40 years as of April 1, 1999, using data from the universal public health insurance plan, which has been described in detail previously.\n8\n The cohort was followed for 10 years to ascertain incident cases of psychotic disorders. These ages were used as it would allow for a 10-year follow-up period beyond the maximum age of some of the early psychosis intervention programme in Ontario. The cohort was constructed from the administrative data holdings at ICES (formerly known as the Institute for Clinical Evaluative Sciences), which enables linkage of individual records from multiple health administrative databases across the province of Ontario.\nAt the time of cohort inception, approximately 11.5 million people resided in Ontario.\n19\n All individuals included in the cohort were eligible for the Ontario Health Insurance Plan (OHIP), the provincially administered health insurance plan, in the 5 years prior to cohort inception. All long-term residents who primarily reside in Ontario are eligible for OHIP.\nPerson-time of follow-up was calculated for each person in the cohort from the time of cohort inception until an index episode of a psychotic disorder, death, or the end of the follow-up period. Those who had a history of contact with health services in Ontario for a psychotic disorder up to 20 years prior to the cohort start date, dependent on the databases, were removed to ensure incident cases were identified. This lookback window is in keeping with the optimal lookback period described in the literature.\n20\n All covariates were defined at the time of cohort inception.", "Sources of data included the Registered Persons Database which is a central population registry containing basic demographic information that enables linkage across administrative data by identifying all Ontario residents insured by OHIP; the Ontario Mental Health Reporting System containing data on hospitalizations to adult psychiatry beds; the Canadian Institute for Health Information Discharge Abstract Database containing data on all other acute care hospitalizations and inpatient psychiatric hospitalizations prior to 2005; the National Ambulatory Care Reporting System containing information on emergency department visits; data on outpatient physician billings from OHIP; and the Ontario Marginalization Index (ON-Marg) containing area-level deprivation indices based on census data.", "Incident cases of psychotic disorders were identified over the 10-year follow-up window of 1999 to 2008 inclusive. 
Incident cases of psychotic disorders were based on either (i) a primary discharge diagnosis from a general hospital bed with a diagnosis of schizophrenia or schizoaffective disorder based on International Classification of Diseases (ICD)-9 code 295.x, or ICD-10 code F20 or F25, (ii) a primary discharge diagnosis of schizophrenia or schizoaffective disorder from a psychiatric hospital bed based on Diagnostic and Statistical Manual of Mental Disorders, fourth edition code 295.x, or (iii) a minimum of 2 OHIP billing claims or emergency department visits with a diagnostic code for schizophrenia or schizoaffective disorder (ICD-9 code 295.x, or ICD-10 code F20 or F25) in a 12-month period. Previous research has validated a similar algorithm for case ascertainment against medical chart diagnoses and found high sensitivity (91.6%) and moderate specificity (67.4%).\n21\n\n", "The socio-environmental exposures of interest included (i) where people in the cohort reside in the province based on census metropolitan area (CMA) and (ii) area-level indicators of marginalization.\n(i) CMAs The CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core.\n22\n Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.\nThe CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core.\n22\n Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.\n(ii) Area-level indicators of marginalization Exposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. 
The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas.\nOntario Marginalization Index Dimensions and Census Indicators.\n23,24\n\n\nExposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas.\nOntario Marginalization Index Dimensions and Census Indicators.\n23,24\n\n", "The CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core.\n22\n Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.", "Exposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. 
The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas.\nOntario Marginalization Index Dimensions and Census Indicators.\n23,24\n\n", "We summarized baseline characteristics of the cohort using descriptive statistics, specifically means and standard deviations (SD) for continuous data and proportions for categorical data. Age- and sex-standardized incidence rates were calculated per 100,000 person-years for the entire province and for the CMAs, using the 1996 population of Canada\n25\n as the standard population to facilitate comparison across geographies by adjusting to the age structure of the standard population. The 1996 census was used as this was the last census prior to cohort entry. Sex-stratified age-standardized rates were also calculated, as the risk of psychotic disorders differs between males and females.\n26\n\n\nWe used multivariable Poisson regression models to obtain adjusted incidence rate ratios (IRRs) with 95% confidence intervals (CIs). Complete case analyses were used for all regression models. We first fit a model for the incidence in each CMA, relative to people not living in a CMA, while adjusting for age and sex. We then proceeded to fit a model that accounted for exposure to area-level marginalization, in addition to CMA, while adjusting for age and sex. Poisson regression models were compared to negative binomial models—given that the model estimates were similar, the data were not overdispersed, and the results of the Poisson regression were presented.\nAll statistical analyses were conducted using Stata version 16.1\n27\n and presented as incidence rates or IRRs with 95% CIs, and confidence intervals that do not include unity are considered statistically significant. Mapping of incidence estimates was conducted using QGIS version 3.6.\n28\n\n", "Ethics approval for this study was obtained from the research ethics board at the Centre for Addiction and Mental Health, Toronto, Ontario, Canada.", "The cohort included 4,284,694 people, of whom 50% (n = 2,158,166) were male. Of the total cohort, 0.7% (n = 32,017) people were unable to be linked to the ON-Marg database due to missing postal code information and were excluded from the analyses. Baseline characteristics of the cohort are presented in Table 2. There were 25,686 incident cases of psychotic disorder, of whom 62% (n = 15,809) were male and 38% (n = 9,877) were female. The mean age at the time of cohort entry was 28.0 years (SD = 7.9), and the mean age at the time of index diagnosis was 32.5 years (SD = 8.6).\nSociodemographic Characteristics of the Cohort Aged 14 to 40 Years Living in Ontario as of April 1, 1999.\nThe age- and sex-standardized incidence rate of psychotic disorders among the entire cohort was 54.9 (95% CI, 53.6 to 56.3) per 100,000 person-years. 
Incidence rates by CMA are visualized in Figure 1 and presented in Online Appendix 1. Across the province, there was a higher incidence among males, with an age-standardized incidence rate of 67.4 (95% CI, 65.4 to 69.6) per 100,000 person-years, compared to an age-standardized incidence rate of 42.4 (95% CI, 40.8 to 44.1) per 100,000 person-years in females. Incidence rates varied between CMAs across the province. For the entire cohort, standardized incidence rates ranged from 51.4 (95% CI, 50.1 to 52.7) per 100,000 person-years in people residing outside of CMAs to 74.5 (95% CI, 73.0 to 76.1) per 100,000 person-years in Kingston.\nMaps of the age-adjusted incidence rates of psychotic disorders in Ontario for the entire cohort, males and females per 100,000 person years.\nWe found the risk of developing a psychotic disorder was higher in specific CMAs and was associated with area-level marginalization (Table 3). The rates of psychotic disorder were significantly elevated in Kingston, Belleville, Peterborough, Toronto, Hamilton, St. Catharines, Brantford, Guelph, London, Windsor, Sarnia, and Sudbury, when compared to those who were not residing in a CMA and without area-level marginalization being taken into account. The highest risk was observed in Kingston (IRR = 1.48, 95% CI, 1.27 to 1.62) when compared to non-CMAs.\nAge- and Sex-adjusted Incidence Rate Ratios by Census Metropolitan Areas (CMAs) Compared to Non-CMAs in Ontario and Model with Marginalization Factors.\n\nNotes. IRR = incidence rate ratio.\nCI = confidence interval, Ref. = reference category, *Unless otherwise indicated; statistically significant results bolded.\nMarginalization attenuated the IRR when added to the model, whereby previously significant IRRs in many of the CMAs are no longer statistically significant when compared to non-CMA areas. When area-level marginalization is taken into account, the elevated risk persists in Kingston (IRR = 1.20, 95% CI, 1.05 to 1.37), Guelph (IRR = 1.23, 95% CI, 1.06 to 1.41), Sarnia (IRR = 1.24, 95% CI, 1.05 to 1.46), and a marginally elevated risk in Toronto (IRR = 1.04, 95% CI, 1.00 to 1.08). With these additional factors accounted for, we found a lower risk of developing a psychotic disorder in Hamilton (IRR = 0.86, 95% CI, 0.80 to 0.92) and Windsor (IRR = 0.90, 95% CI, 0.82 to 0.99) when compared to non-CMA areas.\nWe found higher risk of psychotic disorders in areas with higher levels of marginalization for each of the 4 indicators, when compared to areas with the lowest levels of marginalization. There is higher risk in areas with the highest levels of instability (IRR = 1.26, 95% CI, 1.18 to 1.35), deprivation (IRR = 1.30, 95% CI, 1.16 to 1.45), ethnic concentration (IRR = 1.61, 95% CI, 1.38 to 1.89), and dependency (IRR = 1.35, 95% CI, 1.18 to 1.54) when compared to areas with the lowest levels of marginalization on these indicators.", "In this study, we found differences in the incidence rates of psychotic disorders across geographic areas in the province of Ontario. There are differences in incidence rates between males and females, and differences between major metropolitan areas, when compared to areas outside of metropolitan areas. Approximately 40% of incident cases occurred outside of the major metropolitan areas in the province. Some geographical differences remain when area-level marginalization is considered, although the effects are attenuated. 
", "Ethics approval for this study was obtained from the research ethics board at the Centre for Addiction and Mental Health, Toronto, Ontario, Canada.", "The cohort included 4,284,694 people, of whom 50% (n = 2,158,166) were male. Of the total cohort, 0.7% (n = 32,017) were unable to be linked to the ON-Marg database due to missing postal code information and were excluded from the analyses. Baseline characteristics of the cohort are presented in Table 2. There were 25,686 incident cases of psychotic disorder, of whom 62% (n = 15,809) were male and 38% (n = 9,877) were female. The mean age at the time of cohort entry was 28.0 years (SD = 7.9), and the mean age at the time of index diagnosis was 32.5 years (SD = 8.6).
Table 2. Sociodemographic Characteristics of the Cohort Aged 14 to 40 Years Living in Ontario as of April 1, 1999.
The age- and sex-standardized incidence rate of psychotic disorders among the entire cohort was 54.9 (95% CI, 53.6 to 56.3) per 100,000 person-years. Incidence rates by CMA are visualized in Figure 1 and presented in Online Appendix 1. Across the province, there was a higher incidence among males, with an age-standardized incidence rate of 67.4 (95% CI, 65.4 to 69.6) per 100,000 person-years, compared to an age-standardized rate of 42.4 (95% CI, 40.8 to 44.1) per 100,000 person-years in females. Incidence rates varied between CMAs across the province. For the entire cohort, standardized incidence rates ranged from 51.4 (95% CI, 50.1 to 52.7) per 100,000 person-years in people residing outside of CMAs to 74.5 (95% CI, 73.0 to 76.1) per 100,000 person-years in Kingston.
Figure 1. Maps of the age-adjusted incidence rates of psychotic disorders in Ontario for the entire cohort, males, and females, per 100,000 person-years.
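The standardized rates reported above come from direct standardization: stratum-specific rates are weighted by the age and sex structure of the 1996 Canadian standard population. A minimal sketch of that calculation follows; the stratum counts and standard population figures are hypothetical, not the study's data.

```python
import pandas as pd

# Hypothetical cases and person-years by age group and sex for one geography
strata = pd.DataFrame({
    "age_group":    ["14-19", "20-29", "30-40"] * 2,
    "sex":          ["M"] * 3 + ["F"] * 3,
    "cases":        [150, 950, 720, 70, 460, 510],
    "person_years": [4.1e5, 1.5e6, 1.9e6, 3.9e5, 1.45e6, 1.95e6],
})

# Hypothetical 1996 standard population counts for the same strata
standard = pd.DataFrame({
    "age_group": ["14-19", "20-29", "30-40"] * 2,
    "sex":       ["M"] * 3 + ["F"] * 3,
    "std_pop":   [1.2e6, 2.0e6, 2.4e6, 1.15e6, 1.95e6, 2.45e6],
})

df = strata.merge(standard, on=["age_group", "sex"])
df["rate"] = df["cases"] / df["person_years"]     # stratum-specific rates
weights = df["std_pop"] / df["std_pop"].sum()     # standard-population weights
standardized = (df["rate"] * weights).sum() * 100_000

print(f"Age- and sex-standardized rate: {standardized:.1f} per 100,000 person-years")
```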
We found the risk of developing a psychotic disorder was higher in specific CMAs and was associated with area-level marginalization (Table 3). The rates of psychotic disorder were significantly elevated in Kingston, Belleville, Peterborough, Toronto, Hamilton, St. Catharines, Brantford, Guelph, London, Windsor, Sarnia, and Sudbury when compared to those not residing in a CMA, before area-level marginalization was taken into account. The highest risk was observed in Kingston (IRR = 1.48, 95% CI, 1.27 to 1.62) when compared to non-CMAs.
Table 3. Age- and Sex-adjusted Incidence Rate Ratios by Census Metropolitan Areas (CMAs) Compared to Non-CMAs in Ontario and Model with Marginalization Factors.
Notes. IRR = incidence rate ratio; CI = confidence interval; Ref. = reference category (*unless otherwise indicated). Statistically significant results are bolded.
Marginalization attenuated the IRRs when added to the model, whereby previously significant IRRs in many of the CMAs were no longer statistically significant when compared to non-CMA areas. When area-level marginalization was taken into account, the elevated risk persisted in Kingston (IRR = 1.20, 95% CI, 1.05 to 1.37), Guelph (IRR = 1.23, 95% CI, 1.06 to 1.41), and Sarnia (IRR = 1.24, 95% CI, 1.05 to 1.46), with a marginally elevated risk in Toronto (IRR = 1.04, 95% CI, 1.00 to 1.08). With these additional factors accounted for, we found a lower risk of developing a psychotic disorder in Hamilton (IRR = 0.86, 95% CI, 0.80 to 0.92) and Windsor (IRR = 0.90, 95% CI, 0.82 to 0.99) when compared to non-CMA areas.
We found a higher risk of psychotic disorders in areas with higher levels of marginalization for each of the 4 indicators, when compared to areas with the lowest levels of marginalization. Risk was higher in areas with the highest levels of instability (IRR = 1.26, 95% CI, 1.18 to 1.35), deprivation (IRR = 1.30, 95% CI, 1.16 to 1.45), ethnic concentration (IRR = 1.61, 95% CI, 1.38 to 1.89), and dependency (IRR = 1.35, 95% CI, 1.18 to 1.54) than in areas with the lowest levels of marginalization on these indicators.", "In this study, we found differences in the incidence rates of psychotic disorders across geographic areas in the province of Ontario. There were differences in incidence rates between males and females, and between major metropolitan areas and areas outside of metropolitan areas. Approximately 40% of incident cases occurred outside of the major metropolitan areas in the province. Some geographical differences remained when area-level marginalization was considered, although the effects were attenuated. Greater levels of marginalization on each of the 4 factors were associated with a higher incidence of psychotic disorders.
Before accounting for marginalization, we observed significantly elevated rates of psychotic disorder in smaller metropolitan areas in Southeastern Ontario, specifically in Kingston, Belleville, and Peterborough. There were also elevated rates in Southwestern Ontario in Guelph, London, Windsor, and Sarnia. In Northern Ontario, there were elevated rates in Sudbury. Toronto, the largest city in Ontario, also had an elevated rate of psychotic disorders compared to non-metropolitan areas.
Marginalization attenuates differences in risk across metropolitan areas. We found lower rates in Windsor and Hamilton when area-level marginalization was accounted for. The rates in Kingston, Guelph, and Sarnia remained elevated, albeit with attenuated effects. This suggests that area-level marginalization may play an important role in explaining geographic differences in the risk of developing a psychotic illness.
Previous literature has highlighted elevated rates of psychotic illness in urban areas. 29 In the current study, we looked at rates among people residing in major metropolitan areas, which include urban areas and surrounding areas that are integrated with the urban core. This suggests that contextual factors associated with geography are important to consider. Some of the marginalization factors examined in this study may be present at different levels in metropolitan and non-metropolitan areas, which can increase the risk of developing a psychotic disorder. Marginalized areas with elevated risk may therefore exist in both metropolitan and non-metropolitan settings. Although greater levels of ethnic concentration may be largely located in urban environments, areas with high levels of material deprivation and dependency are present in rural regions of the province.
In the current study, areas with the highest levels of ethnic concentration had the highest risk of psychotic disorders. Previous work looking at the risk of developing psychotic disorders among immigrant and refugee populations in Ontario has found elevated rates of psychosis in some migrant groups and lower rates in others. 8 The current study does not account for individual-level immigration status, which would be important for further understanding the role of area-level ethnic concentration, which takes into account both visible minority status and immigrant concentration. Previous work in Europe has found that ethnic density, which looks at the concentration of people of a similar ethnic background in an area, moderates the risk of psychosis in ethnic minority groups, who may have an elevated baseline risk compared to the general population. 30,31 Although there may be some similarities between ethnic concentration and ethnic density, the two are not the same measure, as the latter implies same-group ethnic density, and we do not know the ethnic backgrounds of people in the current study nor the specific ethnic breakdown of the areas in which they reside. There has yet to be work conducted in Ontario that examines the incidence of psychotic disorder in relation to ethnic minority status or racialized identity.
Limitations: This study is limited by the fact that exposure to area-level marginalization was defined at the time of cohort entry, and we did not account for changes in exposure during the follow-up period. Furthermore, any movement between geographical areas was not accounted for and may have influenced exposure and risk. Prior research suggests that following the first episode of psychosis, people may move to both areas of higher and lower marginalization. 32,33 This study only accounts for marginalization at the area level, and it is important to highlight that individual-level data on sociodemographic factors were not available, including individual-level immigration history. Previous research has found that neighbourhood-level factors moderate the role of individual-level social factors in relation to psychosis risk. 29,30
This study used administrative health data that were not collected to specifically answer the research questions we posed. To reduce potential misclassification, we used a validated algorithm to identify incident cases. 21 The algorithm was created to identify cases of chronic schizophrenia; however, in this study, we are using it to identify incident cases of psychotic disorder, and it may therefore have different psychometric properties. The algorithm has a positive predictive value of 67.4%, which suggests that some cases identified in this cohort may be false positives. Furthermore, the data used for the current study and the previous validation study are over 10 years old and warrant replication. Beyond replication, there is also an opportunity to use up-to-date socio-environmental and clinical data for predictive modelling to forecast service use and resource allocation, as has recently been done in the United Kingdom. 34,35
This study was not designed to make causal inferences and does not account for all factors that may be part of a causal pathway. Given that environmental factors only explain a portion of the risk, it is also important to consider genetic and other individual biological factors that impact a person’s risk of developing psychosis. 36 Family history and genetic factors, 37 unobservable familial selection factors (e.g., relatives who also have an increased risk of developing a psychotic disorder and who may be more likely to reside in marginalized or urban areas), 38 and patterns of substance use may all be associated with the socio-environmental and geographical factors examined in this study. Due to limitations of the data sources used, we were not able to account for substance-use patterns at the individual and area levels, nor for genetic and familial factors. Future research should build on these limitations and further examine how the socio-environmental factors examined in this study are impacted by known biological risk factors and substance use patterns, 39 using spatial approaches that explore synergistic risk. 37 Beyond biological factors, further attention should also be given to the role of other socio-environmental factors, including immigration and ethnicity, and additional contextual factors, including social capital, which may have important roles in moderating risk. These factors, in addition to geography and socio-environmental factors, should be examined in relation to the incidence of psychotic disorders as well as in relation to health service utilization and care outcomes.", "We found geographic variation in the incidence of psychotic disorders across the province of Ontario, and incidence was associated with contextual socio-environmental factors. There were elevated rates of psychotic disorders in some of the major metropolitan areas in the province when compared to areas outside of these metropolitan areas.
Area-level marginalization appears to attenuate the risk associated with geographical location. Future research should account for important individual-level factors and examine how they may influence the risk of developing a psychotic disorder, particularly in relation to area-level factors. With further replication, the use of the most up-to-date data, and further study of socio-environmental exposures, future work may be useful in informing social policy interventions as preventative measures and in planning the delivery of services. It is particularly important to target services for people with a first episode of psychosis, to ensure adequate resource allocation across the province, and to direct services to areas with elevated rates of psychotic disorders.", "Supplemental Material, sj-docx-1-cpa-10.1177_07067437211011852 for The Incidence of Psychotic Disorders and Area-level Marginalization in Ontario, Canada: A Population-based Retrospective Cohort Study by Martin Rotenberg, Andrew Tuck, Kelly K. Anderson and Kwame McKenzie in The Canadian Journal of Psychiatry
Supplemental Material, sj-docx-2-cpa-10.1177_07067437211011852 for The Incidence of Psychotic Disorders and Area-level Marginalization in Ontario, Canada: A Population-based Retrospective Cohort Study by Martin Rotenberg, Andrew Tuck, Kelly K. Anderson and Kwame McKenzie in The Canadian Journal of Psychiatry" ]
[ "intro", "methods", null, null, null, null, null, null, null, null, "results", "discussion", null, "conclusions", "supplementary-material" ]
[ "epidemiology", "incidence", "geography", "marginalization", "psychosis", "schizophrenia", "social determinants", "socio-environmental" ]
Introduction: The seminal work of Faris and Dunham conducted in Chicago in the 1930s provided empirical evidence that the incidence of non-affective psychotic disorders varied based on geographical and neighbourhood-level sociodemographic factors. It was observed that in neighbourhoods with increasing levels of social disorganization, there was a higher incidence of schizophrenia. 1 In the intervening years since this early work, and with the advent of improved epidemiological methods, many studies have examined this association and similarly found area-level social factors to be associated with the risk of developing a psychotic disorder. 2 –4 Most of the research examining the social causes of psychotic disorders have been conducted in Europe. 5 The international research has highlighted that people living in the most deprived neighbourhoods are at higher risk of having a psychotic disorder. 2 The most recent epidemiological work from several European countries has also highlighted large differences in the incidence of psychotic disorders between different cities and urban contexts. 6 In Canada, there is a small but growing body of research on the role of social factors which may influence both the incidence and course of psychotic disorders. 7 –11 This is an important area of study considering prior work conducted in Ontario has found people with schizophrenia have a 3-fold increase in all-cause mortality when compared to the general population 12 and high levels of ongoing health service use. 13 In Ontario, the largest province in Canada, we know that where people live impacts how they use services. People who live in more deprived areas use more mental health services. 14 In Toronto, the largest and most diverse city in the province, presentation to emergency mental health services for psychosis differs based on the level of marginalization of the neighbourhood in which people reside. 10 Although there is prior Canadian research on health service use in this clinical population, 13,15 –17 there has been limited study of the role of social factors in the risk of developing a psychotic disorder in the Canadian context. One study in Ontario has looked at the risk of developing a psychotic disorder in immigrant and refugee groups, finding that some migrant groups have an elevated risk whereas others have a lower risk. 8 In Quebec, the second largest province, health administrative data have been used to examine the role of socio-environmental and geographical factors in the risk of developing first-episode psychosis. Similar to international work on this topic, there was a higher incidence of psychosis in the most deprived areas in Montreal. 7 Differences in the incidence rates of schizophrenia between Quebec City and Montréal, 2 of the main metropolitan centres in the province, and between urban and rural areas have also been found. 18 The aim of the study was to examine the geographical distribution and the role of area-level marginalization indicators on the incidence of schizophrenia spectrum psychotic disorders in Ontario. We hypothesize that (i) there will be variation in incidence between major metropolitan centres and (ii) there will be a higher incidence in areas with the highest levels of marginalization. Methods: Study Design, Setting, and Population We constructed a retrospective cohort that included all Ontario residents aged 14 to 40 years as of April 1, 1999, using data from the universal public health insurance plan, which has been described in detail previously. 
8 The cohort was followed for 10 years to ascertain incident cases of psychotic disorders. These ages were used as this would allow for a 10-year follow-up period beyond the maximum age of some of the early psychosis intervention programmes in Ontario. The cohort was constructed from the administrative data holdings at ICES (formerly known as the Institute for Clinical Evaluative Sciences), which enables linkage of individual records from multiple health administrative databases across the province of Ontario. At the time of cohort inception, approximately 11.5 million people resided in Ontario. 19 All individuals included in the cohort were eligible for the Ontario Health Insurance Plan (OHIP), the provincially administered health insurance plan, in the 5 years prior to cohort inception. All long-term residents who primarily reside in Ontario are eligible for OHIP. Person-time of follow-up was calculated for each person in the cohort from the time of cohort inception until an index episode of a psychotic disorder, death, or the end of the follow-up period. Those who had a history of contact with health services in Ontario for a psychotic disorder up to 20 years prior to the cohort start date, dependent on the databases, were removed to ensure incident cases were identified. This lookback window is in keeping with the optimal lookback period described in the literature. 20 All covariates were defined at the time of cohort inception.
Data Sources: Sources of data included the Registered Persons Database, which is a central population registry containing basic demographic information that enables linkage across administrative data by identifying all Ontario residents insured by OHIP; the Ontario Mental Health Reporting System, containing data on hospitalizations to adult psychiatry beds; the Canadian Institute for Health Information Discharge Abstract Database, containing data on all other acute care hospitalizations and inpatient psychiatric hospitalizations prior to 2005; the National Ambulatory Care Reporting System, containing information on emergency department visits; data on outpatient physician billings from OHIP; and the Ontario Marginalization Index (ON-Marg), containing area-level deprivation indices based on census data.
Case Ascertainment: Incident cases of psychotic disorders were identified over the 10-year follow-up window of 1999 to 2008 inclusive. Incident cases of psychotic disorders were based on either (i) a primary discharge diagnosis from a general hospital bed with a diagnosis of schizophrenia or schizoaffective disorder based on International Classification of Diseases (ICD)-9 code 295.x, or ICD-10 code F20 or F25, (ii) a primary discharge diagnosis of schizophrenia or schizoaffective disorder from a psychiatric hospital bed based on Diagnostic and Statistical Manual of Mental Disorders, fourth edition code 295.x, or (iii) a minimum of 2 OHIP billing claims or emergency department visits with a diagnostic code for schizophrenia or schizoaffective disorder (ICD-9 code 295.x, or ICD-10 code F20 or F25) in a 12-month period. Previous research has validated a similar algorithm for case ascertainment against medical chart diagnoses and found high sensitivity (91.6%) and moderate specificity (67.4%). 21
Covariates and Exposure Classification: The socio-environmental exposures of interest included (i) where people in the cohort reside in the province based on census metropolitan area (CMA) and (ii) area-level indicators of marginalization.
(i) CMAs: The CMA of each cohort member was identified based on postal code linkage at the time of cohort entry. A CMA is a census geography that consists of 1 or more municipalities situated around a core urban area. All CMAs have a total population of at least 100,000 people, of which at least 50,000 live in an urban core. The areas surrounding the urban core have a high degree of integration with the core. 22 Of note, some areas within the CMA that are outside the urban core may be classified as rural and described as the rural fringe; however, these areas have a high degree of integration and exposure to the urban population centre. For the purpose of this study, we are comparing people who reside in each of the province’s largest metropolitan population centres relative to those who reside in all other non-urban areas and smaller population centres.
(ii) Area-level indicators of marginalization: Exposure to area-level marginalization was captured by linking postal codes for all cohort members at the time of cohort entry to marginalization data from the ON-Marg. The ON-Marg is based on census data and is comprised of 4 factors (constructed from principal component factor analysis) and 18 census indicators presented in Table 1. The index is updated at regular intervals with the most recent census data available; for the current study, the 2006 indicators were used. The factors cover 4 distinct dimensions of marginalization: (i) material deprivation, an indicator of area levels of poverty and inability to access and attain basic material needs; (ii) residential instability, an indicator of housing or family instability; (iii) dependency, an indicator of the concentration of people who do not have income from employment or may not be compensated for their work; and (iv) ethnic concentration, an indicator of the concentration of people who are immigrants and/or self-identify as belonging to a visible minority group. For each dimension, scores were divided into quintiles based on the provincial distribution, with the first quintile representing the least marginalized areas and the fifth quintile representing the most marginalized areas. Table 1. Ontario Marginalization Index Dimensions and Census Indicators. 23,24
Statistical Analyses: We summarized baseline characteristics of the cohort using descriptive statistics, specifically means and standard deviations (SD) for continuous data and proportions for categorical data. Age- and sex-standardized incidence rates were calculated per 100,000 person-years for the entire province and for the CMAs, using the 1996 population of Canada 25 as the standard population to facilitate comparison across geographies by adjusting to the age structure of the standard population. The 1996 census was used as this was the last census prior to cohort entry. Sex-stratified age-standardized rates were also calculated, as the risk of psychotic disorders differs between males and females. 26 We used multivariable Poisson regression models to obtain adjusted incidence rate ratios (IRRs) with 95% confidence intervals (CIs). Complete case analyses were used for all regression models. We first fit a model for the incidence in each CMA, relative to people not living in a CMA, while adjusting for age and sex. We then fit a model that accounted for exposure to area-level marginalization in addition to CMA, while adjusting for age and sex. Poisson regression models were compared to negative binomial models; given that the model estimates were similar and the data were not overdispersed, the results of the Poisson regression are presented. All statistical analyses were conducted using Stata version 16.1. 27 Results are presented as incidence rates or IRRs with 95% CIs, and confidence intervals that do not include unity were considered statistically significant. Mapping of incidence estimates was conducted using QGIS version 3.6. 28
Ethics Approval: Ethics approval for this study was obtained from the research ethics board at the Centre for Addiction and Mental Health, Toronto, Ontario, Canada.
Results: The cohort included 4,284,694 people, of whom 50% (n = 2,158,166) were male. Of the total cohort, 0.7% (n = 32,017) were unable to be linked to the ON-Marg database due to missing postal code information and were excluded from the analyses. Baseline characteristics of the cohort are presented in Table 2. There were 25,686 incident cases of psychotic disorder, of whom 62% (n = 15,809) were male and 38% (n = 9,877) were female. The mean age at the time of cohort entry was 28.0 years (SD = 7.9), and the mean age at the time of index diagnosis was 32.5 years (SD = 8.6). Table 2. Sociodemographic Characteristics of the Cohort Aged 14 to 40 Years Living in Ontario as of April 1, 1999. The age- and sex-standardized incidence rate of psychotic disorders among the entire cohort was 54.9 (95% CI, 53.6 to 56.3) per 100,000 person-years. Incidence rates by CMA are visualized in Figure 1 and presented in Online Appendix 1. Across the province, there was a higher incidence among males, with an age-standardized incidence rate of 67.4 (95% CI, 65.4 to 69.6) per 100,000 person-years, compared to an age-standardized rate of 42.4 (95% CI, 40.8 to 44.1) per 100,000 person-years in females. Incidence rates varied between CMAs across the province. For the entire cohort, standardized incidence rates ranged from 51.4 (95% CI, 50.1 to 52.7) per 100,000 person-years in people residing outside of CMAs to 74.5 (95% CI, 73.0 to 76.1) per 100,000 person-years in Kingston. Figure 1. Maps of the age-adjusted incidence rates of psychotic disorders in Ontario for the entire cohort, males, and females, per 100,000 person-years. We found the risk of developing a psychotic disorder was higher in specific CMAs and was associated with area-level marginalization (Table 3). The rates of psychotic disorder were significantly elevated in Kingston, Belleville, Peterborough, Toronto, Hamilton, St.
Catharines, Brantford, Guelph, London, Windsor, Sarnia, and Sudbury, when compared to those who were not residing in a CMA and without area-level marginalization being taken into account. The highest risk was observed in Kingston (IRR = 1.48, 95% CI, 1.27 to 1.62) when compared to non-CMAs. Age- and Sex-adjusted Incidence Rate Ratios by Census Metropolitan Areas (CMAs) Compared to Non-CMAs in Ontario and Model with Marginalization Factors. Notes. IRR = incidence rate ratio. CI = confidence interval, Ref. = reference category, *Unless otherwise indicated; statistically significant results bolded. Marginalization attenuated the IRR when added to the model, whereby previously significant IRRs in many of the CMAs are no longer statistically significant when compared to non-CMA areas. When area-level marginalization is taken into account, the elevated risk persists in Kingston (IRR = 1.20, 95% CI, 1.05 to 1.37), Guelph (IRR = 1.23, 95% CI, 1.06 to 1.41), Sarnia (IRR = 1.24, 95% CI, 1.05 to 1.46), and a marginally elevated risk in Toronto (IRR = 1.04, 95% CI, 1.00 to 1.08). With these additional factors accounted for, we found a lower risk of developing a psychotic disorder in Hamilton (IRR = 0.86, 95% CI, 0.80 to 0.92) and Windsor (IRR = 0.90, 95% CI, 0.82 to 0.99) when compared to non-CMA areas. We found higher risk of psychotic disorders in areas with higher levels of marginalization for each of the 4 indicators, when compared to areas with the lowest levels of marginalization. There is higher risk in areas with the highest levels of instability (IRR = 1.26, 95% CI, 1.18 to 1.35), deprivation (IRR = 1.30, 95% CI, 1.16 to 1.45), ethnic concentration (IRR = 1.61, 95% CI, 1.38 to 1.89), and dependency (IRR = 1.35, 95% CI, 1.18 to 1.54) when compared to areas with the lowest levels of marginalization on these indicators. Discussion: In this study, we found differences in the incidence rates of psychotic disorders across geographic areas in the province of Ontario. There are differences in incidence rates between males and females, and differences between major metropolitan areas, when compared to areas outside of metropolitan areas. Approximately 40% of incident cases occurred outside of the major metropolitan areas in the province. Some geographical differences remain when area-level marginalization is considered, although the effects are attenuated. Greater levels of marginalization on each of the 4 factors were associated with a higher incidence of psychotic disorders. Before accounting for marginalization, we observed significantly elevated rates of psychotic disorder in smaller metropolitan areas in Southeastern Ontario, specifically in Kingston, Belleville, and Peterborough. There were also elevated rates in South Western Ontario in Guelph, London, Windsor, and Sarnia. In Northern Ontario, there were elevated rates in Sudbury. Toronto, the largest city in Ontario, also has an elevated rate of psychotic disorders, compared to non-metropolitan areas. Marginalization attenuates differences in risk across metropolitan areas. We found lower rates in Windsor and Hamilton when area-level marginalization was accounted for. The rates in Kingston, Guelph, and Sarnia remain elevated, albeit with attenuated effects. This suggests that area-level marginalization may play an important role in explaining geographic differences in the risk of developing a psychotic illness. Previous literature has highlighted elevated rates of psychotic illness in urban areas. 
29 In the current study, we looked at rates among people residing in major metropolitan areas, which include urban areas and surrounding areas that are integrated with the urban core. This suggests that contextual factors associated with geography are important to consider. Some of the marginalization factors examined in this study may be present at different levels in metropolitan and non-metropolitan areas, which can increase the risk of developing a psychotic disorder. Therefore, there may be marginalized areas in both metropolitan and non-metropolitan areas which may have elevated risk. Although greater levels of ethnic concentration may be largely located in urban environments, areas with high levels of material deprivation and dependency are present in rural regions of the province. In the current study, areas with the highest levels of ethnic concentration had the highest risk of psychotic disorders. Previous work looking at the risk of developing psychotic disorders among immigrant and refugee populations in Ontario has found that there are elevated rates of psychosis in some migrant groups and lower rates in others. 8 The current study does not account for individual-level immigration status, which would be important to further understand the role of area-level ethnic concentration, which takes into account both visible minority status and immigrant concentration. Previous work in Europe has found that ethnic density, which looks at the concentration of people of a similar ethnic background in an area, moderates risk of psychosis in ethnic minority groups, who may have an elevated baseline risk compared to the general population. 30,31 Although there may be some similarities between ethnic concentration and ethnic density, it is not the same measure, as the latter implies same-group ethnic density, and we do not know the ethnic backgrounds of people in the current study nor the specific ethnic breakdown in the areas in which they reside. There has yet to be work conducted in Ontario that looks at the incidence of psychotic disorder in relation to ethnic minority status or racialized identity. Limitations This study is limited by the fact that exposure to area-level marginalization was defined at the time of cohort entry, and we did not account for changes in exposure during the follow-up period. Furthermore, any movement between geographical areas was not accounted for and may have influenced exposure and risk. Prior research suggests that following the first episode of psychosis, people may move to both areas of higher and lower marginalization. 32,33 This study only accounts for marginalization at the area level, and it is important to highlight that individual-level data on sociodemographic factors were not available, including individual-level immigration history. Previous research has found that neighbourhood-level factors moderate the role of individual-level social factors in relation to psychosis risk. 29,30 This study used administrative health data that were not collected to specifically answer the research questions we posed. To reduce potential misclassification, we used a validated algorithm to identify incident cases. 21 The algorithm was created to identify cases of chronic schizophrenia; however, in this study, we are using it to identify incident cases of psychotic disorder, and it may therefore have different psychometric properties. 
The algorithm has a positive predictive value of 67.4%, which suggests that some cases identified in this cohort may be false positives. Furthermore, the data used for the current study and the previous validation study are over 10 years old and warrant replication. Beyond replication, there is also an opportunity to use up-to-date socio-environmental and clinical data for predictive modelling to forecast service use and resource allocation, as has recently been done in the United Kingdom. 34,35 This study was not designed to make causal inferences and does not account for all factors that may be part of a causal pathway. Given that environmental factors only explain a portion of the risk, it is also important to consider genetic and other individual biological factors that impact a person’s risk of developing psychosis. 36 Family history and genetic factors, 37 unobservable familial selection factors (e.g., relatives who also have an increased risk of developing a psychotic disorder and who may be more likely to reside in marginalized or urban areas), 38 and patterns of substance use may all be associated with the socio-environmental and geographical factors that are examined in this study. Due to limitations of the data sources used, we were not able to account for substance-use patterns at the individual and area levels nor genetic and familial factors in this study. Future research should build on these limitations and further examine how the socio-environmental factors examined in this study are impacted by known biological risk factors and substance-use patterns 39 using spatial approaches that explore synergistic risk. 37 Beyond biological factors, further attention should also be given to the role of other socio-environmental factors including immigration, ethnicity, and additional contextual factors including social capital, which may have important roles in moderating risk. These factors, in addition to geography and socio-environmental factors, should be examined in relation to the incidence of psychotic disorders as well as in relation to health service utilization and care outcomes. This study is limited by the fact that exposure to area-level marginalization was defined at the time of cohort entry, and we did not account for changes in exposure during the follow-up period. Furthermore, any movement between geographical areas was not accounted for and may have influenced exposure and risk. Prior research suggests that following the first episode of psychosis, people may move to both areas of higher and lower marginalization. 32,33 This study only accounts for marginalization at the area level, and it is important to highlight that individual-level data on sociodemographic factors were not available, including individual-level immigration history. Previous research has found that neighbourhood-level factors moderate the role of individual-level social factors in relation to psychosis risk. 29,30 This study used administrative health data that were not collected to specifically answer the research questions we posed. To reduce potential misclassification, we used a validated algorithm to identify incident cases. 21 The algorithm was created to identify cases of chronic schizophrenia; however, in this study, we are using it to identify incident cases of psychotic disorder, and it may therefore have different psychometric properties. The algorithm has a positive predictive value of 67.4%, which suggests that some cases identified in this cohort may be false positives. 
Furthermore, the data used for the current study and the previous validation study are over 10 years old and warrant replication. Beyond replication, there is also an opportunity to use up-to-date socio-environmental and clinical data for predictive modelling to forecast service use and resource allocation as recently been done in the United Kingdom. 34,35 This study was not designed to make causal inferences and does not account for all factors that may be part of a causal pathway. Given that environmental factors only explain a portion of the risk, it is also important to consider genetic and other individual biological factors that impact a person’s risk of developing psychosis. 36 Both family history, genetic factors 37 and unobservable familial selections factors (e.g., relatives who also have an increased risk of developing a psychotic disorder who may be more likely to reside in marginalized or urban areas) 38 as well as patterns of substance use may be associated with socio-environmental and geographical factors that are examined in this study. Due to limitations of the data sources used, we were not able to account for substance-use patterns at the individual and area levels nor genetic and familial factors in this study. Future research should build on these limitations and further examine how the socio-environmental factors examined in this study are impacted by known biological risk factors and substance use patterns 39 using spatial approaches that explore synergistic risk. 37 Beyond biological factors, further attention should also be given to the role of other socio-environmental factors including immigration, ethnicity, and additional contextual factors including social capital, which may have important roles in moderating risk. These factors, in addition to geography and socio-environmental factors, should be examined in relation to the incidence of psychotic disorders as well as in relation to health service utilization and care outcomes. Limitations: This study is limited by the fact that exposure to area-level marginalization was defined at the time of cohort entry, and we did not account for changes in exposure during the follow-up period. Furthermore, any movement between geographical areas was not accounted for and may have influenced exposure and risk. Prior research suggests that following the first episode of psychosis, people may move to both areas of higher and lower marginalization. 32,33 This study only accounts for marginalization at the area level, and it is important to highlight that individual-level data on sociodemographic factors were not available, including individual-level immigration history. Previous research has found that neighbourhood-level factors moderate the role of individual-level social factors in relation to psychosis risk. 29,30 This study used administrative health data that were not collected to specifically answer the research questions we posed. To reduce potential misclassification, we used a validated algorithm to identify incident cases. 21 The algorithm was created to identify cases of chronic schizophrenia; however, in this study, we are using it to identify incident cases of psychotic disorder, and it may therefore have different psychometric properties. The algorithm has a positive predictive value of 67.4%, which suggests that some cases identified in this cohort may be false positives. Furthermore, the data used for the current study and the previous validation study are over 10 years old and warrant replication. 
Beyond replication, there is also an opportunity to use up-to-date socio-environmental and clinical data for predictive modelling to forecast service use and resource allocation as recently been done in the United Kingdom. 34,35 This study was not designed to make causal inferences and does not account for all factors that may be part of a causal pathway. Given that environmental factors only explain a portion of the risk, it is also important to consider genetic and other individual biological factors that impact a person’s risk of developing psychosis. 36 Both family history, genetic factors 37 and unobservable familial selections factors (e.g., relatives who also have an increased risk of developing a psychotic disorder who may be more likely to reside in marginalized or urban areas) 38 as well as patterns of substance use may be associated with socio-environmental and geographical factors that are examined in this study. Due to limitations of the data sources used, we were not able to account for substance-use patterns at the individual and area levels nor genetic and familial factors in this study. Future research should build on these limitations and further examine how the socio-environmental factors examined in this study are impacted by known biological risk factors and substance use patterns 39 using spatial approaches that explore synergistic risk. 37 Beyond biological factors, further attention should also be given to the role of other socio-environmental factors including immigration, ethnicity, and additional contextual factors including social capital, which may have important roles in moderating risk. These factors, in addition to geography and socio-environmental factors, should be examined in relation to the incidence of psychotic disorders as well as in relation to health service utilization and care outcomes. Conclusion: We found geographic variation in the incidence of psychotic disorders across the province of Ontario, and incidence was associated with contextual socio-environmental factors. There were elevated rates of psychotic disorders in some of the major metropolitan areas in the province when compared to areas outside of these metropolitan areas. Area-level marginalization appears to attenuate the risk associated with geographical location. Future research should account for important individual-level factors and examine how they may influence the risk of developing a psychotic disorder, particularly in relation to area-level factors. With further replication, use of the most up-to-date data and further study of socio-environmental exposures future work may be useful in informing social policy interventions as preventative measures and planning delivery of services. It is particularly important to target services for people with the first episode of psychosis, to ensure adequate resource allocation across the province, and to direct services to areas with elevated rates of psychotic disorders. Supplemental Material: Click here for additional data file. Supplemental Material, sj-docx-1-cpa-10.1177_07067437211011852 for The Incidence of Psychotic Disorders and Area-level Marginalization in Ontario, Canada: A Population-based Retrospective Cohort Study by Martin Rotenberg, Andrew Tuck, Kelly K. Anderson and Kwame McKenzie in The Canadian Journal of Psychiatry Click here for additional data file. 
Supplemental Material, sj-docx-2-cpa-10.1177_07067437211011852 for The Incidence of Psychotic Disorders and Area-level Marginalization in Ontario, Canada: A Population-based Retrospective Cohort Study by Martin Rotenberg, Andrew Tuck, Kelly K. Anderson and Kwame McKenzie in The Canadian Journal of Psychiatry
Background: There is limited Canadian evidence on the impact of socio-environmental factors on psychosis risk. We sought to examine the relationship between area-level indicators of marginalization and the incidence of psychotic disorders in Ontario. Methods: We conducted a retrospective cohort study of all people aged 14 to 40 years living in Ontario in 1999 using health administrative data and identified incident cases of psychotic disorders over a 10-year follow-up period. Age-standardized incidence rates were estimated for census metropolitan areas (CMAs). Poisson regression models adjusting for age and sex were used to calculate incidence rate ratios (IRRs) based on CMA and area-level marginalization indices. Results: There is variation in the incidence of psychotic disorders across the CMAs. Our findings suggest a higher rate of psychotic disorders in areas with the highest levels of residential instability (IRR = 1.26, 95% confidence interval [CI], 1.18 to 1.35), material deprivation (IRR = 1.30, 95% CI, 1.16 to 1.45), ethnic concentration (IRR = 1.61, 95% CI, 1.38 to 1.89), and dependency (IRR = 1.35, 95% CI, 1.18 to 1.54) when compared to areas with the lowest levels of marginalization. Marginalization attenuates the risk in some CMAs. Conclusions: There is geographic variation in the incidence of psychotic disorders across the province of Ontario. Areas with greater levels of marginalization have a higher incidence of psychotic disorders, and marginalization attenuates the differences in risk across geographic location. With further study, replication, and the use of the most up-to-date data, a case may be made to consider social policy interventions as preventative measures and to direct services to areas with the highest risk. Future research should examine how marginalization may interact with other social factors including ethnicity and immigration.
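The Methods in the abstract above report age-standardized incidence rates for each census metropolitan area. For readers who want to see the mechanics, the following is a minimal sketch of direct age standardization in Python; the data frame, age bands, and standard-population weights are hypothetical placeholders rather than the study's actual data or code.

```python
import pandas as pd

# Hypothetical case counts and person-years by CMA and age band (illustrative only).
df = pd.DataFrame({
    "cma":          ["Kingston"] * 3 + ["Toronto"] * 3,
    "age_band":     ["14-24", "25-34", "35-40"] * 2,
    "cases":        [40, 55, 20, 900, 1100, 450],
    "person_years": [90_000, 110_000, 60_000, 2_100_000, 2_600_000, 1_400_000],
})

# Hypothetical standard-population weights for the same age bands (must sum to 1).
std_weights = {"14-24": 0.45, "25-34": 0.35, "35-40": 0.20}

def directly_standardized_rate(group: pd.DataFrame) -> float:
    """Weight each age-specific rate by its standard-population share."""
    age_specific = group["cases"] / group["person_years"]
    weights = group["age_band"].map(std_weights)
    return float((age_specific * weights).sum() * 100_000)  # per 100,000 person-years

print(df.groupby("cma").apply(directly_standardized_rate))
```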
Introduction: The seminal work of Faris and Dunham conducted in Chicago in the 1930s provided empirical evidence that the incidence of non-affective psychotic disorders varied based on geographical and neighbourhood-level sociodemographic factors. It was observed that in neighbourhoods with increasing levels of social disorganization, there was a higher incidence of schizophrenia. 1 In the intervening years since this early work, and with the advent of improved epidemiological methods, many studies have examined this association and similarly found area-level social factors to be associated with the risk of developing a psychotic disorder. 2 –4 Most of the research examining the social causes of psychotic disorders have been conducted in Europe. 5 The international research has highlighted that people living in the most deprived neighbourhoods are at higher risk of having a psychotic disorder. 2 The most recent epidemiological work from several European countries has also highlighted large differences in the incidence of psychotic disorders between different cities and urban contexts. 6 In Canada, there is a small but growing body of research on the role of social factors which may influence both the incidence and course of psychotic disorders. 7 –11 This is an important area of study considering prior work conducted in Ontario has found people with schizophrenia have a 3-fold increase in all-cause mortality when compared to the general population 12 and high levels of ongoing health service use. 13 In Ontario, the largest province in Canada, we know that where people live impacts how they use services. People who live in more deprived areas use more mental health services. 14 In Toronto, the largest and most diverse city in the province, presentation to emergency mental health services for psychosis differs based on the level of marginalization of the neighbourhood in which people reside. 10 Although there is prior Canadian research on health service use in this clinical population, 13,15 –17 there has been limited study of the role of social factors in the risk of developing a psychotic disorder in the Canadian context. One study in Ontario has looked at the risk of developing a psychotic disorder in immigrant and refugee groups, finding that some migrant groups have an elevated risk whereas others have a lower risk. 8 In Quebec, the second largest province, health administrative data have been used to examine the role of socio-environmental and geographical factors in the risk of developing first-episode psychosis. Similar to international work on this topic, there was a higher incidence of psychosis in the most deprived areas in Montreal. 7 Differences in the incidence rates of schizophrenia between Quebec City and Montréal, 2 of the main metropolitan centres in the province, and between urban and rural areas have also been found. 18 The aim of the study was to examine the geographical distribution and the role of area-level marginalization indicators on the incidence of schizophrenia spectrum psychotic disorders in Ontario. We hypothesize that (i) there will be variation in incidence between major metropolitan centres and (ii) there will be a higher incidence in areas with the highest levels of marginalization. Conclusion: We found geographic variation in the incidence of psychotic disorders across the province of Ontario, and incidence was associated with contextual socio-environmental factors. 
There were elevated rates of psychotic disorders in some of the major metropolitan areas in the province when compared to areas outside of these metropolitan areas. Area-level marginalization appears to attenuate the risk associated with geographical location. Future research should account for important individual-level factors and examine how they may influence the risk of developing a psychotic disorder, particularly in relation to area-level factors. With further replication, the use of the most up-to-date data, and further study of socio-environmental exposures, future work may be useful in informing social policy interventions as preventative measures and in planning the delivery of services. It is particularly important to target services for people experiencing a first episode of psychosis, to ensure adequate resource allocation across the province, and to direct services to areas with elevated rates of psychotic disorders.
Background: There is limited Canadian evidence on the impact of socio-environmental factors on psychosis risk. We sought to examine the relationship between area-level indicators of marginalization and the incidence of psychotic disorders in Ontario. Methods: We conducted a retrospective cohort study of all people aged 14 to 40 years living in Ontario in 1999 using health administrative data and identified incident cases of psychotic disorders over a 10-year follow-up period. Age-standardized incidence rates were estimated for census metropolitan areas (CMAs). Poisson regression models adjusting for age and sex were used to calculate incidence rate ratios (IRRs) based on CMA and area-level marginalization indices. Results: There is variation in the incidence of psychotic disorders across the CMAs. Our findings suggest a higher rate of psychotic disorders in areas with the highest levels of residential instability (IRR = 1.26, 95% confidence interval [CI], 1.18 to 1.35), material deprivation (IRR = 1.30, 95% CI, 1.16 to 1.45), ethnic concentration (IRR = 1.61, 95% CI, 1.38 to 1.89), and dependency (IRR = 1.35, 95% CI, 1.18 to 1.54) when compared to areas with the lowest levels of marginalization. Marginalization attenuates the risk in some CMAs. Conclusions: There is geographic variation in the incidence of psychotic disorders across the province of Ontario. Areas with greater levels of marginalization have a higher incidence of psychotic disorders, and marginalization attenuates the differences in risk across geographic location. With further study, replication, and the use of the most up-to-date data, a case may be made to consider social policy interventions as preventative measures and to direct services to areas with the highest risk. Future research should examine how marginalization may interact with other social factors including ethnicity and immigration.
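The abstract above also describes Poisson regression models, adjusted for age and sex, used to estimate incidence rate ratios (IRRs) by CMA and by area-level marginalization quintile. The sketch below shows one plausible way to fit such a model with statsmodels; the input file, variable names, and quintile construction are assumptions made for illustration and are not the authors' analysis code. Exponentiating the coefficients and their confidence limits yields IRRs comparable in form to those reported above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical stratum-level data: case counts, person-years of follow-up,
# age group, sex, CMA, and a continuous area-level deprivation score.
df = pd.read_csv("cohort_strata.csv")  # placeholder file name

# Assign marginalization quintiles (Q1 = least marginalized, Q5 = most marginalized).
df["deprivation_q"] = pd.qcut(df["deprivation_score"], 5,
                              labels=["Q1", "Q2", "Q3", "Q4", "Q5"])

# Poisson model with a log person-years offset; age group and sex as covariates.
model = smf.glm(
    "cases ~ C(cma, Treatment(reference='non-CMA')) + C(deprivation_q) + C(age_group) + sex",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
)
result = model.fit()

# Exponentiated coefficients are incidence rate ratios with 95% confidence intervals.
irr = np.exp(result.params)
ci = np.exp(result.conf_int())
print(pd.concat([irr.rename("IRR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```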
10,051
353
[ 312, 115, 177, 863, 166, 238, 299, 27, 603 ]
15
[ "areas", "cohort", "factors", "data", "marginalization", "study", "area", "psychotic", "ontario", "level" ]
[ "incidence rates psychotic", "social causes psychotic", "ontario psychotic disorder", "rates psychotic disorders", "psychotic disorders province" ]
[CONTENT] epidemiology | incidence | geography | marginalization | psychosis | schizophrenia | social determinants | socio-environmental [SUMMARY]
[CONTENT] epidemiology | incidence | geography | marginalization | psychosis | schizophrenia | social determinants | socio-environmental [SUMMARY]
[CONTENT] epidemiology | incidence | geography | marginalization | psychosis | schizophrenia | social determinants | socio-environmental [SUMMARY]
[CONTENT] epidemiology | incidence | geography | marginalization | psychosis | schizophrenia | social determinants | socio-environmental [SUMMARY]
[CONTENT] epidemiology | incidence | geography | marginalization | psychosis | schizophrenia | social determinants | socio-environmental [SUMMARY]
[CONTENT] epidemiology | incidence | geography | marginalization | psychosis | schizophrenia | social determinants | socio-environmental [SUMMARY]
[CONTENT] Cohort Studies | Humans | Incidence | Ontario | Psychotic Disorders | Retrospective Studies [SUMMARY]
[CONTENT] Cohort Studies | Humans | Incidence | Ontario | Psychotic Disorders | Retrospective Studies [SUMMARY]
[CONTENT] Cohort Studies | Humans | Incidence | Ontario | Psychotic Disorders | Retrospective Studies [SUMMARY]
[CONTENT] Cohort Studies | Humans | Incidence | Ontario | Psychotic Disorders | Retrospective Studies [SUMMARY]
[CONTENT] Cohort Studies | Humans | Incidence | Ontario | Psychotic Disorders | Retrospective Studies [SUMMARY]
[CONTENT] Cohort Studies | Humans | Incidence | Ontario | Psychotic Disorders | Retrospective Studies [SUMMARY]
[CONTENT] incidence rates psychotic | social causes psychotic | ontario psychotic disorder | rates psychotic disorders | psychotic disorders province [SUMMARY]
[CONTENT] incidence rates psychotic | social causes psychotic | ontario psychotic disorder | rates psychotic disorders | psychotic disorders province [SUMMARY]
[CONTENT] incidence rates psychotic | social causes psychotic | ontario psychotic disorder | rates psychotic disorders | psychotic disorders province [SUMMARY]
[CONTENT] incidence rates psychotic | social causes psychotic | ontario psychotic disorder | rates psychotic disorders | psychotic disorders province [SUMMARY]
[CONTENT] incidence rates psychotic | social causes psychotic | ontario psychotic disorder | rates psychotic disorders | psychotic disorders province [SUMMARY]
[CONTENT] incidence rates psychotic | social causes psychotic | ontario psychotic disorder | rates psychotic disorders | psychotic disorders province [SUMMARY]
[CONTENT] areas | cohort | factors | data | marginalization | study | area | psychotic | ontario | level [SUMMARY]
[CONTENT] areas | cohort | factors | data | marginalization | study | area | psychotic | ontario | level [SUMMARY]
[CONTENT] areas | cohort | factors | data | marginalization | study | area | psychotic | ontario | level [SUMMARY]
[CONTENT] areas | cohort | factors | data | marginalization | study | area | psychotic | ontario | level [SUMMARY]
[CONTENT] areas | cohort | factors | data | marginalization | study | area | psychotic | ontario | level [SUMMARY]
[CONTENT] areas | cohort | factors | data | marginalization | study | area | psychotic | ontario | level [SUMMARY]
[CONTENT] incidence | psychotic | risk | social | role | deprived | work | higher | use | health [SUMMARY]
[CONTENT] cohort | census | data | urban | core | cma | population | indicator | areas | based [SUMMARY]
[CONTENT] ci | 95 ci | irr | 95 | incidence | years | age | compared | 100 000 person years | person years [SUMMARY]
[CONTENT] services | particularly | elevated rates psychotic disorders | areas | psychotic | elevated rates | elevated rates psychotic | future | level factors | rates psychotic [SUMMARY]
[CONTENT] factors | areas | cohort | data | psychotic | study | urban | risk | marginalization | ontario [SUMMARY]
[CONTENT] factors | areas | cohort | data | psychotic | study | urban | risk | marginalization | ontario [SUMMARY]
[CONTENT] Canadian ||| Ontario [SUMMARY]
[CONTENT] 14 to 40 years | Ontario | 1999 | 10-year ||| ||| CMA [SUMMARY]
[CONTENT] ||| IRR | 1.26 | 95% ||| CI] | 1.18 | 1.35 | IRR | 1.30 | 95% | CI | 1.16 | 1.45 | IRR | 1.61 | 95% | CI | 1.38 | 1.89 | IRR | 1.35 | 95% | CI | 1.18 | 1.54 ||| [SUMMARY]
[CONTENT] Ontario ||| ||| ||| [SUMMARY]
[CONTENT] Canadian ||| Ontario ||| 14 to 40 years | Ontario | 1999 | 10-year ||| ||| CMA ||| ||| IRR | 1.26 | 95% ||| CI] | 1.18 | 1.35 | IRR | 1.30 | 95% | CI | 1.16 | 1.45 | IRR | 1.61 | 95% | CI | 1.38 | 1.89 | IRR | 1.35 | 95% | CI | 1.18 | 1.54 ||| ||| Ontario ||| ||| ||| [SUMMARY]
[CONTENT] Canadian ||| Ontario ||| 14 to 40 years | Ontario | 1999 | 10-year ||| ||| CMA ||| ||| IRR | 1.26 | 95% ||| CI] | 1.18 | 1.35 | IRR | 1.30 | 95% | CI | 1.16 | 1.45 | IRR | 1.61 | 95% | CI | 1.38 | 1.89 | IRR | 1.35 | 95% | CI | 1.18 | 1.54 ||| ||| Ontario ||| ||| ||| [SUMMARY]
Comparison of Early and Late Intubation in COVID-19 and Its Effect on Mortality.
35270767
Best practices for management of COVID-19 patients with acute respiratory failure continue to evolve. Initial debate existed over whether patients should be intubated in the emergency department or trialed on noninvasive methods prior to intubation outside the emergency department.
BACKGROUND
We conducted a retrospective observational chart review of patients who had a confirmed positive COVID-19 test and required endotracheal intubation during their hospital course between 1 March 2020 and 1 June 2020. Patients were divided into two groups based on location of intubation: early intubation in the emergency department or late intubation performed outside the emergency department. Clinical and demographic information was collected including comorbid medical conditions, qSOFA score, and patient mortality.
METHODS
Of the 131 COVID-19-positive patients requiring intubation, 30 (22.9%) patients were intubated in the emergency department. No statistically significant difference existed in age, gender, ethnicity, or smoking status between the two groups at baseline. Patients in the early intubation cohort had a greater number of existing comorbidities (2.5, p = 0.06) and a higher median qSOFA score (3, p ≤ 0.001). Patients managed with early intubation had a statistically significant higher mortality rate (19/30, 63.3%) compared to the late intubation group (42/101, 41.6%).
RESULTS
COVID-19 patients intubated in the emergency department had a higher qSOFA score and a greater number of pre-existing comorbidities. All-cause mortality in COVID-19 was greater in patients intubated in the emergency department compared to patients intubated outside the emergency department.
CONCLUSION
[ "COVID-19", "Humans", "Intubation, Intratracheal", "Records", "Retrospective Studies", "SARS-CoV-2" ]
8910588
1. Introduction
Over the span of a few months in early 2020, the coronavirus disease 2019 (COVID-19) pandemic spread from an isolated outbreak in the city of Wuhan, China to a global catastrophe of unrivaled proportions during this century [1,2,3]. As the novel virus spread globally, the medical community was forced to manage critically ill patients without data-based evidence for best practices. Many of the early treatment protocols for SARS-CoV-2 were devised from trial and error, theoretical calculations, or from extrapolating from previously successful therapies for treating similar pathologies [4]. The understanding of the pathophysiology and management of critically ill COVID-19 patients continues to evolve. Previously reported data from the initial outbreak in Wuhan, China demonstrate that older patients (age > 65 years old), patients with underlying medical conditions, and patients who develop acute respiratory distress syndrome (ARDS) have higher mortality from SARS-CoV-2 pneumonia [5]. For critically ill patients who progress to COVID-19-related respiratory failure, the timing and threshold for escalating from noninvasive oxygen supplementation to mechanical ventilation is unclear. The emergency department management of COVID-19 patients presents a unique dilemma for clinicians. In particular, the decision to perform endotracheal intubation or utilize noninvasive forms of oxygenation is a point of debate. The urge to promptly improve oxygenation and the work of breathing is weighed against complications such as ventilator-associated pneumonia, ventilator-induced lung injury, hemodynamic changes, and complications related to prolonged sedation and immobilization which are inherent to intubated patients [6]. The risk of peri-intubation hypoxemia (SpO2 < 80%), previously reported as 10% of intubations, is thought to be greater in COVID-19 patients [7]. The aim of our study is to expand on these data comparing patients managed with endotracheal intubation for COVID-19-related respiratory failure. To the best of our knowledge, this is the first study comparing outcomes in COVID-19 patients intubated in the emergency department and in the intensive care unit. We hypothesize that, counter to current practice guidelines recommending early endotracheal intubation of critically ill COVID-19 patients, no mortality benefit exists in patients who are intubated emergently in the emergency department versus patients who are intubated in an intensive care unit (ICU), operating room, or in other more controlled settings.
null
null
3. Results
A total of 131 COVID-19-positive patients requiring intubation during their hospitalization were analyzed. Thirty (22.9%) of these patients were intubated in the ED and were designated as early intubation (EI), while the remaining 101 (77.1%) were intubated elsewhere in the hospital and were designated as late intubation (LI). Overall, these two cohorts had similar baseline demographic characteristics (Table 1). The mean age for those intubated in the ED was 68.5 ± 12.1, and it was 64.4 ± 13.6 for patients intubated elsewhere (p = 0.12). In total, 33.3% of the EI group and 35.6% of the LI group were female (p = 0.82), and 41.4% and 54.1% of the EI and LI groups, respectively, were non-white individuals (p = 0.23). Fifty percent of the EI and 56.4% of the LI cohort were identified as never having smoked, with the remainder of patients either being current smokers or past smokers (p = 0.53). Lastly, both groups had similar prior existing comorbidities, with a median of 2.5 and 2 in the EI and LI groups, respectively (p = 0.06). Patients intubated in the ED had a higher mortality rate during hospitalization, in contrast to those intubated elsewhere in the hospital and later during their admission after leaving the emergency department (Table 2). The death rate in the EI group was 63.3%, in contrast to 41.6% in the LI group, which is statistically significant (p = 0.04). However, it should be noted that there was a statistically significant difference in the baseline quick sequential organ failure assessment (qSOFA) scores between the cohorts, with ED intubations having a higher median qSOFA score (p < 0.0001). Patients in the EI cohort had a median qSOFA score of 2.53 and patients in the LI cohort had a median qSOFA score of 1.43.
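The study analyzed these comparisons in SPSS; as an illustration only, the same unadjusted tests (chi-square for categorical outcomes such as mortality, Mann-Whitney U for ordinal measures such as the qSOFA score) can be sketched in Python with SciPy. The mortality counts below are taken from the reported results; the patient-level qSOFA values are simulated placeholders because only summary statistics are published.

```python
import numpy as np
from scipy import stats

# Mortality: 19/30 deaths with early intubation (EI) vs. 42/101 with late intubation (LI).
table = np.array([[19, 30 - 19],
                  [42, 101 - 42]])
chi2, p_mortality, dof, _ = stats.chi2_contingency(table)
print(f"mortality chi-square p = {p_mortality:.3f}")

# qSOFA: simulated per-patient scores (the paper reports only summaries), compared with Mann-Whitney U.
rng = np.random.default_rng(42)
qsofa_ei = rng.integers(1, 4, size=30)    # placeholder values for the EI cohort
qsofa_li = rng.integers(0, 3, size=101)   # placeholder values for the LI cohort
_, p_qsofa = stats.mannwhitneyu(qsofa_ei, qsofa_li, alternative="two-sided")
print(f"qSOFA Mann-Whitney p = {p_qsofa:.4f}")

# The comorbidity score described in the Methods is one point per condition present, e.g.
# df["comorbidity_score"] = df[["chf", "diabetes", "htn", "copd", "ckd"]].sum(axis=1)
```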
6. Conclusions
In reviewing 131 COVID-19 patients intubated between 1 March 2020 and 1 June 2020, we found that patients intubated in the emergency department had a higher qSOFA score and a greater number of pre-existing comorbidities than patients intubated outside the emergency department. Patients intubated in the emergency department had an in-hospital mortality rate of 63.3%, compared to 41.6% for patients intubated outside the emergency department.
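For context, the qSOFA score referenced throughout is the Sepsis-3 quick SOFA: one point each for respiratory rate ≥ 22 breaths/min, systolic blood pressure ≤ 100 mmHg, and altered mentation (Glasgow Coma Scale < 15). A small illustrative helper, not part of the study's tooling, might look like this:

```python
def qsofa(respiratory_rate: float, systolic_bp: float, gcs: int) -> int:
    """Quick SOFA (0-3): one point per criterion met."""
    score = 0
    if respiratory_rate >= 22:   # tachypnea
        score += 1
    if systolic_bp <= 100:       # hypotension
        score += 1
    if gcs < 15:                 # altered mentation
        score += 1
    return score

# Example: RR 24, SBP 95, GCS 14 -> qSOFA of 3
print(qsofa(24, 95, 14))
```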
[ "2. Materials and Methods", "5. Limitations", "7. Article Summary" ]
[ "This was a retrospective observational chart review study without any patient interventions. Data were collected from a large healthcare network comprising 12 hospitals in eastern Pennsylvania and western New Jersey. Due to the nature of the study, informed consent was waived. Data were provided by the study hospital via the electronic medical record (EMR). Patients were identified via the EMR as having a confirmed positive COVID-19 test result that was ordered between 1 March 2020 and 1 June 2020. Furthermore, from this list, all patients who had received endotracheal intubation were identified and extracted to their own data set, and the location and timing of their intubation was determined via manual chart review. Demographic information and clinical outcomes were documented and recorded, also via chart review. In order to compare the populations of the two groups, we collected past medical conditions as presented in the initial-encounter H&P to look for the comorbidities of congestive heart failure, diabetes, hypertension, hyperlipidemia, malignancy, immunocompromised state, CAD, COPD, asthma, and CKD, and assigned each comorbidity present on admission 1 point to give a total comorbidity score.\nInclusion criteria were adult patients who had a confirmed positive COVID-19 polymerase chain reaction (PCR) test and went on to require endotracheal intubation within the hospital network between 1 March 2020 and 1 June 2020. Exclusion criteria were patients under the age of 18, patients that did not require endotracheal intubation, and patients who may have had symptoms where COVID-19 was suspected but who had a negative COVID-19 PCR test result. We defined early intubation (EI) as patients who were intubated in the ED and late intubation (LI) as patients who were intubated in a more controlled setting such as the ICU or operating room. There was no specific time which defined EI vs. LI, rather patients were stratified based on the disposition and location of intubation.\nResults were analyzed by conducting separate unadjusted comparisons between intubated and non-intubated ED patients using independent samples tests or Mann–Whitney rank sum tests for our continuous outcomes, as appropriate, and chi-square tests for our categorical outcomes. We analyzed data using SPSS version 25 (IBM Corp., Armonk, NY, USA), with p < 0.05 denoting statistical significance and no adjustment for the multiple comparisons.\nThis study was approved by the IRB and was found to be in accordance with the tenets of the Declaration of Helsinki.", "This study does have several limitations. The first is that the decision by providers to utilize noninvasive oxygenation versus performing endotracheal intubation was left to provider discretion. No standard criteria were utilized to determine the need for endotracheal intubation. Secondly, this collection of data was obtained early in the COVID-19 pandemic. Since this time, many adjuvant therapies and recommendations have been published which reflect more comfort with noninvasive modes of oxygenation before performing intubation. In addition, noninvasive oxygenation technology has evolved to minimize viral particle aerosolization. The third limitation is that a selection bias may exist. Patients who were intubated earlier in their hospital course were likely to be sicker at presentation, and thus would be expected to have a higher mortality rate. This is evidenced by the higher qSOFA score in the EI cohort versus the LI cohort in this study. 
Also, the qSOFA score has been proven to be of little utility in the evaluation of patients with COVID-19 since septic shock occurs in a late phase of the disease. We could have used other well-known prediction scores specific to COVID-19, but these were developed after our study was conducted. We also considered running a propensity score-matched analysis to evaluate whether the difference in mortality persisted despite the difference in qSOFA scores; however, our sample size was too small for this analysis. The retrospective nature and small sample size of this study are additional limitations. Future research would benefit from a large, prospective trial comparing EI versus LI.\nDespite the limitations mentioned above, these data add to the current and developing literature suggesting that no mortality benefit exists with earlier intubation of SARS-CoV-2 patients. In fact, these data suggest that early intubation may be detrimental.", "Why is this topic important?\nDetermining the timing of intubation in COVID-19 hypoxic respiratory failure is a topic of debate. Studies are conflicting regarding the optimal timing of intubation in COVID-19, with regard to its effect on lung injury and mortality.\nWhat does this study attempt to show?\nEarly intubation in COVID-19 patients with hypoxic respiratory failure may be detrimental.\nWhat are the key findings?\nPatients with higher qSOFA scores and hypoxic respiratory failure secondary to COVID-19 have higher mortality when intubated earlier, in the emergency department.\nHow is patient care impacted?\nBased on the findings, we would recommend delaying intubation in COVID-19 hypoxic respiratory failure in favor of other noninvasive modes of oxygenation." ]
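The limitations above mention that a propensity score-matched analysis was considered but was not feasible at this sample size. For orientation, the following is a generic sketch of 1:1 nearest-neighbour propensity score matching (with replacement) in Python; the file name, covariates, and outcome column are hypothetical, and this is not the analysis the authors performed.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical patient-level data: treatment = 1 for early intubation, 0 for late.
df = pd.read_csv("intubation_cohort.csv")           # placeholder file name
covariates = ["age", "qsofa", "comorbidity_score"]  # assumed confounders

# 1. Estimate the propensity score: P(early intubation | covariates).
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treatment"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Match each treated patient to the nearest control by propensity score (1:1, with replacement).
treated = df[df["treatment"] == 1]
controls = df[df["treatment"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = controls.iloc[idx.ravel()]

# 3. Compare the outcome (e.g., in-hospital death) in the matched sample.
print("Treated mortality:", treated["died"].mean())
print("Matched-control mortality:", matched_controls["died"].mean())
```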
[ null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "3. Results", "4. Discussion", "5. Limitations", "6. Conclusions", "7. Article Summary" ]
[ "Over the span of a few months in early 2020, the coronavirus disease 2019 (COVID-19) pandemic spread from an isolated outbreak in the city of Wuhan, China to a global catastrophe of unrivaled proportions during this century [1,2,3]. As the novel virus spread globally, the medical community was forced to manage critically ill patients without data-based evidence for best practices. Many of the early treatment protocols for SARS-CoV-2 were devised from trial and error, theoretical calculations, or from extrapolating from previously successful therapies for treating similar pathologies [4]. The understanding of the pathophysiology and management of critically ill COVID-19 patients continues to evolve. Previously reported data from the initial outbreak in Wuhan, China demonstrate that older patients (age > 65 years old), patients with underlying medical conditions, and patients who develop acute respiratory distress syndrome (ARDS) have higher mortality from SARS-CoV-2 pneumonia [5]. For critically ill patients who progress to COVID-19-related respiratory failure, the timing and threshold for escalating from noninvasive oxygen supplementation to mechanical ventilation is unclear.\nThe emergency department management of COVID-19 patients presents a unique dilemma for clinicians. In particular, the decision to perform endotracheal intubation or utilize noninvasive forms of oxygenation is a point of debate. The urge to promptly improve oxygenation and the work of breathing is weighed against complications such as ventilator-associated pneumonia, ventilator-induced lung injury, hemodynamic changes, and complications related to prolonged sedation and immobilization which are inherent to intubated patients [6]. The risk of peri-intubation hypoxemia (SpO2 < 80%), previously reported as 10% of intubations, is thought to be greater in COVID-19 patients [7].\nThe aim of our study is to expand on these data comparing patients managed with endotracheal intubation for COVID-19-related respiratory failure. To the best of our knowledge, this is the first study comparing outcomes in COVID-19 patients intubated in the emergency department and in the intensive care unit. We hypothesize that, counter to current practice guidelines recommending early endotracheal intubation of critically ill COVID-19 patients, no mortality benefit exists in patients who are intubated emergently in the emergency department versus patients who are intubated in an intensive care unit (ICU), operating room, or in other more controlled settings.", "This was a retrospective observational chart review study without any patient interventions. Data were collected from a large healthcare network comprising 12 hospitals in eastern Pennsylvania and western New Jersey. Due to the nature of the study, informed consent was waived. Data were provided by the study hospital via the electronic medical record (EMR). Patients were identified via the EMR as having a confirmed positive COVID-19 test result that was ordered between 1 March 2020 and 1 June 2020. Furthermore, from this list, all patients who had received endotracheal intubation were identified and extracted to their own data set, and the location and timing of their intubation was determined via manual chart review. Demographic information and clinical outcomes were documented and recorded, also via chart review. 
In order to compare the populations of the two groups, we collected past medical conditions as presented in the initial-encounter H&P to look for the comorbidities of congestive heart failure, diabetes, hypertension, hyperlipidemia, malignancy, immunocompromised state, CAD, COPD, asthma, and CKD, and assigned each comorbidity present on admission 1 point to give a total comorbidity score.\nInclusion criteria were adult patients who had a confirmed positive COVID-19 polymerase chain reaction (PCR) test and went on to require endotracheal intubation within the hospital network between 1 March 2020 and 1 June 2020. Exclusion criteria were patients under the age of 18, patients that did not require endotracheal intubation, and patients who may have had symptoms where COVID-19 was suspected but who had a negative COVID-19 PCR test result. We defined early intubation (EI) as patients who were intubated in the ED and late intubation (LI) as patients who were intubated in a more controlled setting such as the ICU or operating room. There was no specific time which defined EI vs. LI, rather patients were stratified based on the disposition and location of intubation.\nResults were analyzed by conducting separate unadjusted comparisons between intubated and non-intubated ED patients using independent samples tests or Mann–Whitney rank sum tests for our continuous outcomes, as appropriate, and chi-square tests for our categorical outcomes. We analyzed data using SPSS version 25 (IBM Corp., Armonk, NY, USA), with p < 0.05 denoting statistical significance and no adjustment for the multiple comparisons.\nThis study was approved by the IRB and was found to be in accordance with the tenets of the Declaration of Helsinki.", "A total of 131 COVID-19-positive patients requiring intubation during their hospitalization were analyzed. Thirty (22.9%) of these patients were intubated in the ED and were designated as early intubation (EI), while the remaining 101 (77.1%) were intubated elsewhere in the hospital and were designated as late intubation (LI). Overall, these two cohorts had similar baseline demographic characteristics (Table 1). The mean age for those intubated in the ED was 68.5 + 12.1, and it was 64.4 + 13.6 for patients intubated elsewhere (p = 0.12). In total, 33.3% of the EI group and 35.6% of the LI group were female (p = 0.82), and 41.4% and 54.1% of the EI and LI groups, respectively, were non-white individuals (p = 0.23). Fifty percent of the EI and 56.4% of the LI cohort were identified as never having smoked, with the remainder of patients either being current smokers or past smokers (p = 0.53). Lastly, both groups had similar prior existing comorbidities, with a median of 2.5 and 2 in the EI and LI groups, respectively, (p = 0.06).\nPatients intubated in the ED had a higher mortality rate during hospitalization, in contrast to those intubated elsewhere in the hospital and later during their admission after leaving the emergency department (Table 2). The death rate in the EI group was 63.3%, in contrast to 41.6% in the LI group, which is statistically significant (p = 0.04). However, it should be noted that there was a statistically significant difference in the baseline quick sequential organ failure assessment (qSOFA) scores between the cohorts, with ED intubations having a higher median qSOFA score (p < 0.0001). 
Patients in the EI cohort had a median qSOFA score of 2.53 and patients in the LI cohort had a median qSOFA score of 1.43.", "We hypothesized that no mortality benefit existed for patients that were intubated emergently in the emergency department (EI) versus patients who were intubated later during their hospital course (LI). The data demonstrated that patients who were intubated emergently in the emergency department in fact had a higher in-hospital mortality rate (63.3% versus 41.6%) than those who were intubated later in their admission.\nWhile the overall demographics of age, gender, smoking status, and race did not show significant differences between the two groups, there was a large, though not statistically significant, difference between the total number of comorbidities (2.6 versus 2.0, p = 0.06). The EI patients also had a higher initial acuity based on their qSOFA scores (3 versus 1). Prior to the COVID-19 pandemic, it was believed that early intubation for patients presenting with hypoxic respiratory failure provided the best patient outcomes.\nReflecting on several months of managing critically ill COVID-19 patients, the Chinese Society of Anesthesiology Task Force provided recommendations in February 2020 for criteria for intubating patients with respiratory failure due to COVID-19 [8]. These criteria include critically ill patients who, after two hours of noninvasive oxygen supplementation, remain hypoxemic, remain in respiratory distress, or have unresolved tachypnea. This recommendation was based on the theory that early intubation may be physiologically protective by reducing a phenomenon known as self-induced lung injury [9]. Expert opinion articles comment that increased respiratory effort may lead to self-induced lung injury (SILI). It is thought that intubation and mechanical ventilation have protective effects by diminishing inspiratory effort and tidal volumes, thus limiting the effect of SILI [9,10]. Some of the critical care providers who initially cared for COVID-19 patients in Wuhan, China lamented that patients developed an oxygen debt and had been intubated too late in their disease course [11]. This led to an early hypothesis-driven advisory that COVID-19 patients should be ventilated early in the disease process to prevent lung injury.\nHowever, new data regarding COVID-19 management and outcomes have called this paradigm into question [6,12,13,14,15]. The pathophysiology of COVID-19 differs from more traditional acute respiratory distress illness, and it is more likely to respond to noninvasive forms of oxygenation (e.g. high-flow nasal cannula) [12]. As such, early management of COVID-19-induced hypoxemia revolves around noninvasive forms of oxygenation [13]. Some authors caution against performing early intubation in COVID-19 patients by citing a lack of clinical evidence for this practice, and they note that the proposed hypothesis of COVID-19-patient-induced SILI is more theoretical [6,14]. Literature studies regarding the timing of intubation of COVID-19 patients are limited, though small studies have demonstrated conflicting mortality rates in patients intubated earlier in their clinical course [15,16].\nIt should be noted that some literature studies recommended against utilizing noninvasive ventilation for fear this would aerosolize COVID-19 particles [17]. 
It is possible that some emergency department providers bypassed noninvasive oxygenation and performed intubation for this reason.\nWe believe our data are in support of delayed intubation in cases of COVID-19. There has been conflicting evidence regarding timing of intubation, and this has evolved as more has become known about this disease. As our data show, the sicker patients, as evidenced by higher qSOFA scores, were intubated earlier; however, this did increase mortality. This is likely to be due to the increased lung injury that occurred due to mechanical ventilation. We believe that further studies would elucidate the reasons why other strategies in severe COVID-19 would be more beneficial in improving mortality than early intubation. These would include high-flow nasal cannula, noninvasive ventilation, proning, and other therapeutic interventions such as glucocorticoids. Early intubation, as mentioned, is likely to be causing more harm due to worsening lung injury earlier in the disease progression, thus causing worsening hypoxemia and increased multi-organ failure and leading to a higher mortality. We believe our data confirm this hypothesis.\nAs understanding of COVID-19 pathophysiology evolved, many authors noted differences in COVID-19 lung injury compared to acute respiratory distress syndrome from other etiologies [18]. Typical ARDS is associated with decreased lung compliance, which is managed with lung-protective ventilation strategies [19]. However, mechanical ventilation in COVID-19 patients is associated with a high mortality rate and may actually worsen acute lung injury [20,21].\nToday, the interventions recommended for preventing respiratory failure in COVID-19 are centered around delaying endotracheal intubation and utilizing noninvasive modes of oxygenation [22]. These modes include mid-flow nasal cannula, high-flow nasal cannula, CPAP or BIPAP, and self-proning strategies [23,24,25,26]. At present, endotracheal intubation is viewed as a last resort for refractory hypoxia. Providers should tolerate lower oxygen saturations so long as patients do not exhibit changes in mental status or respiratory fatigue. The data presented here add to the growing body of literature supporting delayed intubation for COVID-19 patients.", "This study does have several limitations. The first is that the decision by providers to utilize noninvasive oxygenation versus performing endotracheal intubation was left to provider discretion. No standard criteria were utilized to determine the need for endotracheal intubation. Secondly, this collection of data was obtained early in the COVID-19 pandemic. Since this time, many adjuvant therapies and recommendations have been published which reflect more comfort with noninvasive modes of oxygenation before performing intubation. In addition, noninvasive oxygenation technology has evolved to minimize viral particle aerosolization. The third limitation is that a selection bias may exist. Patients who were intubated earlier in their hospital course were likely to be sicker at presentation, and thus would be expected to have a higher mortality rate. This is evidenced by the higher qSOFA score in the EI cohort versus the LI cohort in this study. Also, the qSOFA score has been proven to be of little utility in the evaluation of patients with COVID-19 since the septic shock occurs in a late phase of the disease. We could have used other well known prediction scores specific to COVID, but these were developed after our study was conducted. 
We also thought to run a Propensity Score Matched analysis on the sample, to evaluate if the difference in mortality still exists despite the difference in qSOFA score, however, our sample size was too small to run this analysis. The retrospective nature and small sample size of this study are also added limitations. Future research would benefit from a large, prospective trial comparing EI versus LI.\nDespite the limitations mentioned above, these data add to the current and developing literature suggesting that no mortality benefit exists with earlier intubation of SARS-CoV-2 patients. In fact, these data suggest that early intubation may be detrimental.", "In reviewing 131 COVID-19 patients intubated between 1 March 2020 and 1 June 2020, we found that patients intubated in the emergency department had a higher qSOFA score and a greater number of pre-existing comorbidities than patients intubated outside the emergency department. Patients intubated in the emergency department had an in-hospital mortality rate of 63.3%, compared to 41.6% for patients intubated outside the emergency department.", "Why is this topic important?\nDetermining the timing of intubation in COVID-19 hypoxic respiratory failure is a topic of debate. Studies are conflicting regarding the optimal timing of intubation in COVID-19, with regard to its effect on lung injury and mortality.\nWhat does this study attempt to show?\nEarly intubation in COVID-19 patients with hypoxic respiratory failure may be detrimental.\nWhat are the key findings?\nPatients with higher qSOFA scores and hypoxic respiratory failure secondary to COVID-19 have higher mortality when intubated earlier, in the emergency department.\nHow is patient care impacted?\nBased on the findings, we would recommend delaying intubation in COVID-19 hypoxic respiratory failure in favor of other noninvasive modes of oxygenation." ]
[ "intro", null, "results", "discussion", null, "conclusions", null ]
[ "COVID-19", "intubation", "emergency department", "qSOFA" ]
1. Introduction: Over the span of a few months in early 2020, the coronavirus disease 2019 (COVID-19) pandemic spread from an isolated outbreak in the city of Wuhan, China to a global catastrophe of unrivaled proportions during this century [1,2,3]. As the novel virus spread globally, the medical community was forced to manage critically ill patients without data-based evidence for best practices. Many of the early treatment protocols for SARS-CoV-2 were devised from trial and error, theoretical calculations, or from extrapolating from previously successful therapies for treating similar pathologies [4]. The understanding of the pathophysiology and management of critically ill COVID-19 patients continues to evolve. Previously reported data from the initial outbreak in Wuhan, China demonstrate that older patients (age > 65 years old), patients with underlying medical conditions, and patients who develop acute respiratory distress syndrome (ARDS) have higher mortality from SARS-CoV-2 pneumonia [5]. For critically ill patients who progress to COVID-19-related respiratory failure, the timing and threshold for escalating from noninvasive oxygen supplementation to mechanical ventilation is unclear. The emergency department management of COVID-19 patients presents a unique dilemma for clinicians. In particular, the decision to perform endotracheal intubation or utilize noninvasive forms of oxygenation is a point of debate. The urge to promptly improve oxygenation and the work of breathing is weighed against complications such as ventilator-associated pneumonia, ventilator-induced lung injury, hemodynamic changes, and complications related to prolonged sedation and immobilization which are inherent to intubated patients [6]. The risk of peri-intubation hypoxemia (SpO2 < 80%), previously reported as 10% of intubations, is thought to be greater in COVID-19 patients [7]. The aim of our study is to expand on these data comparing patients managed with endotracheal intubation for COVID-19-related respiratory failure. To the best of our knowledge, this is the first study comparing outcomes in COVID-19 patients intubated in the emergency department and in the intensive care unit. We hypothesize that, counter to current practice guidelines recommending early endotracheal intubation of critically ill COVID-19 patients, no mortality benefit exists in patients who are intubated emergently in the emergency department versus patients who are intubated in an intensive care unit (ICU), operating room, or in other more controlled settings. 2. Materials and Methods: This was a retrospective observational chart review study without any patient interventions. Data were collected from a large healthcare network comprising 12 hospitals in eastern Pennsylvania and western New Jersey. Due to the nature of the study, informed consent was waived. Data were provided by the study hospital via the electronic medical record (EMR). Patients were identified via the EMR as having a confirmed positive COVID-19 test result that was ordered between 1 March 2020 and 1 June 2020. Furthermore, from this list, all patients who had received endotracheal intubation were identified and extracted to their own data set, and the location and timing of their intubation was determined via manual chart review. Demographic information and clinical outcomes were documented and recorded, also via chart review. 
In order to compare the populations of the two groups, we collected past medical conditions as presented in the initial-encounter H&P to look for the comorbidities of congestive heart failure, diabetes, hypertension, hyperlipidemia, malignancy, immunocompromised state, CAD, COPD, asthma, and CKD, and assigned each comorbidity present on admission 1 point to give a total comorbidity score. Inclusion criteria were adult patients who had a confirmed positive COVID-19 polymerase chain reaction (PCR) test and went on to require endotracheal intubation within the hospital network between 1 March 2020 and 1 June 2020. Exclusion criteria were patients under the age of 18, patients that did not require endotracheal intubation, and patients who may have had symptoms where COVID-19 was suspected but who had a negative COVID-19 PCR test result. We defined early intubation (EI) as patients who were intubated in the ED and late intubation (LI) as patients who were intubated in a more controlled setting such as the ICU or operating room. There was no specific time which defined EI vs. LI, rather patients were stratified based on the disposition and location of intubation. Results were analyzed by conducting separate unadjusted comparisons between intubated and non-intubated ED patients using independent samples tests or Mann–Whitney rank sum tests for our continuous outcomes, as appropriate, and chi-square tests for our categorical outcomes. We analyzed data using SPSS version 25 (IBM Corp., Armonk, NY, USA), with p < 0.05 denoting statistical significance and no adjustment for the multiple comparisons. This study was approved by the IRB and was found to be in accordance with the tenets of the Declaration of Helsinki. 3. Results: A total of 131 COVID-19-positive patients requiring intubation during their hospitalization were analyzed. Thirty (22.9%) of these patients were intubated in the ED and were designated as early intubation (EI), while the remaining 101 (77.1%) were intubated elsewhere in the hospital and were designated as late intubation (LI). Overall, these two cohorts had similar baseline demographic characteristics (Table 1). The mean age for those intubated in the ED was 68.5 + 12.1, and it was 64.4 + 13.6 for patients intubated elsewhere (p = 0.12). In total, 33.3% of the EI group and 35.6% of the LI group were female (p = 0.82), and 41.4% and 54.1% of the EI and LI groups, respectively, were non-white individuals (p = 0.23). Fifty percent of the EI and 56.4% of the LI cohort were identified as never having smoked, with the remainder of patients either being current smokers or past smokers (p = 0.53). Lastly, both groups had similar prior existing comorbidities, with a median of 2.5 and 2 in the EI and LI groups, respectively, (p = 0.06). Patients intubated in the ED had a higher mortality rate during hospitalization, in contrast to those intubated elsewhere in the hospital and later during their admission after leaving the emergency department (Table 2). The death rate in the EI group was 63.3%, in contrast to 41.6% in the LI group, which is statistically significant (p = 0.04). However, it should be noted that there was a statistically significant difference in the baseline quick sequential organ failure assessment (qSOFA) scores between the cohorts, with ED intubations having a higher median qSOFA score (p < 0.0001). Patients in the EI cohort had a median qSOFA score of 2.53 and patients in the LI cohort had a median qSOFA score of 1.43. 4. 
4. Discussion: We hypothesized that no mortality benefit existed for patients who were intubated emergently in the emergency department (EI) versus patients who were intubated later during their hospital course (LI). The data demonstrated that patients who were intubated emergently in the emergency department in fact had a higher in-hospital mortality rate (63.3% versus 41.6%) than those who were intubated later in their admission. While the overall demographics of age, gender, smoking status, and race did not show significant differences between the two groups, there was a large, though not statistically significant, difference in the total number of comorbidities (2.6 versus 2.0, p = 0.06). The EI patients also had a higher initial acuity based on their qSOFA scores (3 versus 1). Prior to the COVID-19 pandemic, it was believed that early intubation for patients presenting with hypoxic respiratory failure provided the best patient outcomes. Reflecting on several months of managing critically ill COVID-19 patients, the Chinese Society of Anesthesiology Task Force provided recommendations in February 2020 on criteria for intubating patients with respiratory failure due to COVID-19 [8]. These criteria include critically ill patients who, after two hours of noninvasive oxygen supplementation, remain hypoxemic, remain in respiratory distress, or have unresolved tachypnea. This recommendation was based on the theory that early intubation may be physiologically protective by reducing a phenomenon known as self-induced lung injury (SILI) [9]. Expert opinion articles comment that increased respiratory effort may lead to SILI, and it is thought that intubation and mechanical ventilation have protective effects by diminishing inspiratory effort and tidal volumes, thus limiting the effect of SILI [9,10]. Some of the critical care providers who initially cared for COVID-19 patients in Wuhan, China lamented that patients developed an oxygen debt and had been intubated too late in their disease course [11]. This led to an early hypothesis-driven advisory that COVID-19 patients should be ventilated early in the disease process to prevent lung injury. However, new data regarding COVID-19 management and outcomes have called this paradigm into question [6,12,13,14,15]. The pathophysiology of COVID-19 differs from that of typical acute respiratory distress syndrome, and it is more likely to respond to noninvasive forms of oxygenation (e.g., high-flow nasal cannula) [12]. As such, early management of COVID-19-induced hypoxemia revolves around noninvasive forms of oxygenation [13]. Some authors caution against performing early intubation in COVID-19 patients by citing a lack of clinical evidence for this practice, and they note that the proposed hypothesis of SILI in COVID-19 patients remains largely theoretical [6,14]. Studies on the timing of intubation in COVID-19 patients are limited, though small studies have demonstrated conflicting mortality rates in patients intubated earlier in their clinical course [15,16]. It should be noted that some studies recommended against utilizing noninvasive ventilation for fear this would aerosolize COVID-19 particles [17]. It is possible that some emergency department providers bypassed noninvasive oxygenation and performed intubation for this reason. We believe our data support delayed intubation in cases of COVID-19.
There has been conflicting evidence regarding the timing of intubation, and this evidence has evolved as more has become known about the disease. As our data show, the sicker patients, as evidenced by higher qSOFA scores, were intubated earlier; however, this was associated with increased mortality. This is likely due to the increased lung injury caused by mechanical ventilation. We believe that further studies would elucidate why other strategies in severe COVID-19 may be more beneficial than early intubation in improving mortality. These would include high-flow nasal cannula, noninvasive ventilation, proning, and other therapeutic interventions such as glucocorticoids. Early intubation, as mentioned, likely causes more harm by worsening lung injury earlier in the disease course, thereby worsening hypoxemia, increasing multi-organ failure, and leading to higher mortality. We believe our data support this hypothesis. As understanding of COVID-19 pathophysiology evolved, many authors noted differences in COVID-19 lung injury compared to acute respiratory distress syndrome from other etiologies [18]. Typical ARDS is associated with decreased lung compliance, which is managed with lung-protective ventilation strategies [19]. However, mechanical ventilation in COVID-19 patients is associated with a high mortality rate and may actually worsen acute lung injury [20,21]. Today, the interventions recommended for managing respiratory failure in COVID-19 are centered around delaying endotracheal intubation and utilizing noninvasive modes of oxygenation [22]. These modes include mid-flow nasal cannula, high-flow nasal cannula, CPAP or BIPAP, and self-proning strategies [23,24,25,26]. At present, endotracheal intubation is viewed as a last resort for refractory hypoxia. Providers should tolerate lower oxygen saturations so long as patients do not exhibit changes in mental status or respiratory fatigue. The data presented here add to the growing body of literature supporting delayed intubation for COVID-19 patients. 5. Limitations: This study does have several limitations. The first is that the decision to utilize noninvasive oxygenation versus perform endotracheal intubation was left to provider discretion; no standard criteria were used to determine the need for endotracheal intubation. Secondly, these data were collected early in the COVID-19 pandemic. Since that time, many adjuvant therapies and recommendations have been published which reflect greater comfort with noninvasive modes of oxygenation before intubation. In addition, noninvasive oxygenation technology has evolved to minimize viral particle aerosolization. The third limitation is that a selection bias may exist. Patients who were intubated earlier in their hospital course were likely sicker at presentation and thus would be expected to have a higher mortality rate; this is evidenced by the higher qSOFA score in the EI cohort versus the LI cohort in this study. Furthermore, the qSOFA score has been shown to be of limited utility in the evaluation of patients with COVID-19, since septic shock occurs in a late phase of the disease. Other well-known prediction scores specific to COVID-19 could have been used, but these were developed after our study was conducted.
We also considered performing a propensity score-matched analysis on the sample to evaluate whether the difference in mortality persisted despite the difference in qSOFA scores; however, our sample size was too small for this analysis. The retrospective nature and small sample size of this study are additional limitations. Future research would benefit from a large, prospective trial comparing EI versus LI. Despite the limitations mentioned above, these data add to the current and developing literature suggesting that no mortality benefit exists with earlier intubation of SARS-CoV-2 patients. In fact, these data suggest that early intubation may be detrimental. 6. Conclusions: In reviewing 131 COVID-19 patients intubated between 1 March 2020 and 1 June 2020, we found that patients intubated in the emergency department had a higher qSOFA score and a greater number of pre-existing comorbidities than patients intubated outside the emergency department. Patients intubated in the emergency department had an in-hospital mortality rate of 63.3%, compared to 41.6% for patients intubated outside the emergency department. 7. Article Summary: Why is this topic important? Determining the timing of intubation in COVID-19 hypoxic respiratory failure is a topic of debate. Studies are conflicting regarding the optimal timing of intubation in COVID-19 with regard to its effect on lung injury and mortality. What does this study attempt to show? Early intubation in COVID-19 patients with hypoxic respiratory failure may be detrimental. What are the key findings? Patients with higher qSOFA scores and hypoxic respiratory failure secondary to COVID-19 have higher mortality when intubated earlier, in the emergency department. How is patient care impacted? Based on the findings, we would recommend delaying intubation in COVID-19 hypoxic respiratory failure in favor of noninvasive modes of oxygenation.
Background: Best practices for management of COVID-19 patients with acute respiratory failure continue to evolve. Initial debate existed over whether patients should be intubated in the emergency department or trialed on noninvasive methods prior to intubation outside the emergency department. Methods: We conducted a retrospective observational chart review of patients who had a confirmed positive COVID-19 test and required endotracheal intubation during their hospital course between 1 March 2020 and 1 June 2020. Patients were divided into two groups based on location of intubation: early intubation in the emergency department or late intubation performed outside the emergency department. Clinical and demographic information was collected including comorbid medical conditions, qSOFA score, and patient mortality. Results: Of the 131 COVID-19-positive patients requiring intubation, 30 (22.9%) patients were intubated in the emergency department. No statistically significant difference existed in age, gender, ethnicity, or smoking status between the two groups at baseline. Patients in the early intubation cohort had a greater number of existing comorbidities (2.5, p = 0.06) and a higher median qSOFA score (3, p ≤ 0.001). Patients managed with early intubation had a statistically significant higher mortality rate (19/30, 63.3%) compared to the late intubation group (42/101, 41.6%). Conclusions: COVID-19 patients intubated in the emergency department had a higher qSOFA score and a greater number of pre-existing comorbidities. All-cause mortality in COVID-19 was greater in patients intubated in the emergency department compared to patients intubated outside the emergency department.
1. Introduction: Over the span of a few months in early 2020, the coronavirus disease 2019 (COVID-19) pandemic spread from an isolated outbreak in the city of Wuhan, China to a global catastrophe of unrivaled proportions during this century [1,2,3]. As the novel virus spread globally, the medical community was forced to manage critically ill patients without data-based evidence for best practices. Many of the early treatment protocols for SARS-CoV-2 were devised from trial and error, theoretical calculations, or from extrapolating from previously successful therapies for treating similar pathologies [4]. The understanding of the pathophysiology and management of critically ill COVID-19 patients continues to evolve. Previously reported data from the initial outbreak in Wuhan, China demonstrate that older patients (age > 65 years old), patients with underlying medical conditions, and patients who develop acute respiratory distress syndrome (ARDS) have higher mortality from SARS-CoV-2 pneumonia [5]. For critically ill patients who progress to COVID-19-related respiratory failure, the timing and threshold for escalating from noninvasive oxygen supplementation to mechanical ventilation is unclear. The emergency department management of COVID-19 patients presents a unique dilemma for clinicians. In particular, the decision to perform endotracheal intubation or utilize noninvasive forms of oxygenation is a point of debate. The urge to promptly improve oxygenation and the work of breathing is weighed against complications such as ventilator-associated pneumonia, ventilator-induced lung injury, hemodynamic changes, and complications related to prolonged sedation and immobilization which are inherent to intubated patients [6]. The risk of peri-intubation hypoxemia (SpO2 < 80%), previously reported as 10% of intubations, is thought to be greater in COVID-19 patients [7]. The aim of our study is to expand on these data comparing patients managed with endotracheal intubation for COVID-19-related respiratory failure. To the best of our knowledge, this is the first study comparing outcomes in COVID-19 patients intubated in the emergency department and in the intensive care unit. We hypothesize that, counter to current practice guidelines recommending early endotracheal intubation of critically ill COVID-19 patients, no mortality benefit exists in patients who are intubated emergently in the emergency department versus patients who are intubated in an intensive care unit (ICU), operating room, or in other more controlled settings. 6. Conclusions: In reviewing 131 COVID-19 patients intubated between 1 March 2020 and 1 June 2020, we found that patients intubated in the emergency department had a higher qSOFA score and a greater number of pre-existing comorbidities than patients intubated outside the emergency department. Patients intubated in the emergency department had an in-hospital mortality rate of 63.3%, compared to 41.6% for patients intubated outside the emergency department.
Background: Best practices for management of COVID-19 patients with acute respiratory failure continue to evolve. Initial debate existed over whether patients should be intubated in the emergency department or trialed on noninvasive methods prior to intubation outside the emergency department. Methods: We conducted a retrospective observational chart review of patients who had a confirmed positive COVID-19 test and required endotracheal intubation during their hospital course between 1 March 2020 and 1 June 2020. Patients were divided into two groups based on location of intubation: early intubation in the emergency department or late intubation performed outside the emergency department. Clinical and demographic information was collected including comorbid medical conditions, qSOFA score, and patient mortality. Results: Of the 131 COVID-19-positive patients requiring intubation, 30 (22.9%) patients were intubated in the emergency department. No statistically significant difference existed in age, gender, ethnicity, or smoking status between the two groups at baseline. Patients in the early intubation cohort had a greater number of existing comorbidities (2.5, p = 0.06) and a higher median qSOFA score (3, p ≤ 0.001). Patients managed with early intubation had a statistically significant higher mortality rate (19/30, 63.3%) compared to the late intubation group (42/101, 41.6%). Conclusions: COVID-19 patients intubated in the emergency department had a higher qSOFA score and a greater number of pre-existing comorbidities. All-cause mortality in COVID-19 was greater in patients intubated in the emergency department compared to patients intubated outside the emergency department.
2,775
290
[ 454, 328, 133 ]
7
[ "patients", "19", "covid", "covid 19", "intubation", "intubated", "patients intubated", "data", "mortality", "early" ]
[ "sars cov pneumonia", "intubation covid 19", "coronavirus disease 2019", "cov pneumonia critically", "respiratory failure covid" ]
null
[CONTENT] COVID-19 | intubation | emergency department | qSOFA [SUMMARY]
null
[CONTENT] COVID-19 | intubation | emergency department | qSOFA [SUMMARY]
[CONTENT] COVID-19 | intubation | emergency department | qSOFA [SUMMARY]
[CONTENT] COVID-19 | intubation | emergency department | qSOFA [SUMMARY]
[CONTENT] COVID-19 | intubation | emergency department | qSOFA [SUMMARY]
[CONTENT] COVID-19 | Humans | Intubation, Intratracheal | Records | Retrospective Studies | SARS-CoV-2 [SUMMARY]
null
[CONTENT] COVID-19 | Humans | Intubation, Intratracheal | Records | Retrospective Studies | SARS-CoV-2 [SUMMARY]
[CONTENT] COVID-19 | Humans | Intubation, Intratracheal | Records | Retrospective Studies | SARS-CoV-2 [SUMMARY]
[CONTENT] COVID-19 | Humans | Intubation, Intratracheal | Records | Retrospective Studies | SARS-CoV-2 [SUMMARY]
[CONTENT] COVID-19 | Humans | Intubation, Intratracheal | Records | Retrospective Studies | SARS-CoV-2 [SUMMARY]
[CONTENT] sars cov pneumonia | intubation covid 19 | coronavirus disease 2019 | cov pneumonia critically | respiratory failure covid [SUMMARY]
null
[CONTENT] sars cov pneumonia | intubation covid 19 | coronavirus disease 2019 | cov pneumonia critically | respiratory failure covid [SUMMARY]
[CONTENT] sars cov pneumonia | intubation covid 19 | coronavirus disease 2019 | cov pneumonia critically | respiratory failure covid [SUMMARY]
[CONTENT] sars cov pneumonia | intubation covid 19 | coronavirus disease 2019 | cov pneumonia critically | respiratory failure covid [SUMMARY]
[CONTENT] sars cov pneumonia | intubation covid 19 | coronavirus disease 2019 | cov pneumonia critically | respiratory failure covid [SUMMARY]
[CONTENT] patients | 19 | covid | covid 19 | intubation | intubated | patients intubated | data | mortality | early [SUMMARY]
null
[CONTENT] patients | 19 | covid | covid 19 | intubation | intubated | patients intubated | data | mortality | early [SUMMARY]
[CONTENT] patients | 19 | covid | covid 19 | intubation | intubated | patients intubated | data | mortality | early [SUMMARY]
[CONTENT] patients | 19 | covid | covid 19 | intubation | intubated | patients intubated | data | mortality | early [SUMMARY]
[CONTENT] patients | 19 | covid | covid 19 | intubation | intubated | patients intubated | data | mortality | early [SUMMARY]
[CONTENT] patients | 19 | covid | covid 19 | critically | ill | critically ill | 19 patients | covid 19 patients | previously [SUMMARY]
null
[CONTENT] li | ei | group | median | ed | median qsofa score | median qsofa | patients | intubated | intubated ed [SUMMARY]
[CONTENT] patients intubated | department | emergency department | emergency | intubated | patients | outside | patients intubated outside | patients intubated outside emergency | intubated outside emergency department [SUMMARY]
[CONTENT] patients | intubation | covid | 19 | covid 19 | intubated | patients intubated | respiratory | department | emergency department [SUMMARY]
[CONTENT] patients | intubation | covid | 19 | covid 19 | intubated | patients intubated | respiratory | department | emergency department [SUMMARY]
[CONTENT] COVID-19 ||| [SUMMARY]
null
[CONTENT] 131 | COVID-19 | 30 | 22.9% ||| two ||| 2.5 | 0.06 | 3 ||| 19/30 | 63.3% | 42/101 | 41.6% [SUMMARY]
[CONTENT] ||| COVID-19 [SUMMARY]
[CONTENT] COVID-19 ||| ||| COVID-19 | between 1 March 2020 | June 2020 ||| two ||| qSOFA ||| ||| 131 | COVID-19 | 30 | 22.9% ||| two ||| 2.5 | 0.06 | 3 ||| 19/30 | 63.3% | 42/101 | 41.6% ||| ||| COVID-19 [SUMMARY]
[CONTENT] COVID-19 ||| ||| COVID-19 | between 1 March 2020 | June 2020 ||| two ||| qSOFA ||| ||| 131 | COVID-19 | 30 | 22.9% ||| two ||| 2.5 | 0.06 | 3 ||| 19/30 | 63.3% | 42/101 | 41.6% ||| ||| COVID-19 [SUMMARY]
Clinical educators' attitudes towards the use of technology in the clinical teaching environment. A mixed methods study.
31006997
In healthcare, there is ongoing flux in expectations for students and practitioners. Establishing integrated systems of monitoring and evidencing students' development is imperative. With current trends towards the use of technology in tertiary education, online learning environments (OLEs) could constitute more effective evidencing of student progress in the clinical environment. However, there is little research exploring clinical educators' experiences with implementing technology in clinical education. The research aimed to: Examine clinical educators' attitudes towards technology and its use in clinical education. Explore clinical educators' experiences of implementing technologies in a clinical environment.
INTRODUCTION
A mixed methods approach was taken to explore the aims. A previously validated technology attitude survey (TAS) was used with slight modifications, as well as open-ended qualitative responses. These explored clinical educators' experiences of the implementation of one specific OLE (PebblePad™) in their clinical environments. The survey was sent to clinical educators involved in the supervision of Medical Imaging students on clinical placement.
METHODS
Clinical educators play pivotal roles in students' professional development and, given current trends in tertiary education, are under increasing pressure to utilise OLEs. This poses particular challenges in clinical environments. Irrespective of the challenges, successful implementation of technology in any environment is dependent on the attitudes of the users.
RESULTS
Clinical environments have specific challenges when implementing technology such as access to computers and time constraints on practitioners. Even with positive attitudes towards technology, a change in pedagogical outlook when using technology in clinical teaching is necessary.
CONCLUSIONS
[ "Attitude to Computers", "Documentation", "Education, Medical", "Humans", "Information Technology", "Surveys and Questionnaires", "Technology" ]
6545477
Introduction
In healthcare, there is ongoing flux in expectations for students and practitioners. There has been an explosion of technology and the volume of medical knowledge has increased exponentially.1, 2, 3 In Australia, the Medical Radiation Practice Board of Australia (MRPBA) professional capabilities framework4 obliges radiographers to embrace contemporary conceptualisations of competence. They must keep abreast of shifting expectations5, 6 while the profession struggles to shed the perception that it is fulfilling the role of technical operatives.1 Due to these shifting expectations and to assure the MRPBA that students meet registration requirements on graduation; it is imperative to establish integrated systems of monitoring and evidencing students’ development. With trends towards the use of technology in tertiary education7, 8, 9 online learning environments (OLEs) could fulfil this need. Within radiography, while course content is increasingly facilitated online10, 11, recording radiography students’ development during clinical rotations remains traditional.10 However with technological trends, clinical educators are increasingly pressured to utilise OLEs.12, 13 Introducing technology in clinical environments poses challenges for clinical educators as much as it does for students and teachers.12, 14 These range from; cost, security of the devices and the data on them and pedagogical challenges.13 Negative attitudes towards technology is a barrier to successful implementation of technology in clinical education.14, 15, 16, 17 While the attitudes of academic staff, clinical staff in non‐teaching roles and students to technology has been explored18, 19, 20, 21 much less is known about clinical educators’ perceptions of technology.13, 22 The Bachelor of Radiography and Medical Imaging (Honours) at Monash University is a 4‐year integrated academic and clinical course. Intake is approximately 80 students yearly. Students complete clinical placements each year. The degree provides a qualification allowing students to seek employment in Australia and worldwide. In 2014, PebblePad™ was introduced as a contemporary learning platform for clinical studies, replacing paper workbooks. PebblePad™ is a web‐based platform offering an array of tools to help students record evidence of their learning and reflect on their clinical experiences. PebblePad™ and its ePortfolio functionality provides students with a holistic and integrated learning experience with a focus on preparation for professional life. Students can continue to access the platform after graduation. The implementation of PebblePad™ requires significant input from clinical educators. Aims of the study The research aimed to: Examine clinical educators’ attitudes towards technology generally.Examine clinical educators’ perceptions of the use of technology in clinical education.Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. Examine clinical educators’ attitudes towards technology generally. Examine clinical educators’ perceptions of the use of technology in clinical education. Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. For the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool. 
The research aimed to: Examine clinical educators’ attitudes towards technology generally.Examine clinical educators’ perceptions of the use of technology in clinical education.Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. Examine clinical educators’ attitudes towards technology generally. Examine clinical educators’ perceptions of the use of technology in clinical education. Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. For the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool.
Methods
This mixed methods study reports on clinical supervisors, educators and tutors responsible for the supervision of Medical Imaging students on clinical placement. Herein, they will be referred to as ‘clinical educators’. Quantitative data meant we could measure the prevalence of the dimensions of the phenomenon we were exploring. Qualitative questions allow deeper exploration of a phenomenon where little is known about it, which was the case in this instance.23 An email invitation was sent by the Clinical Support Officer to clinical educators to participate. It included an explanatory statement and a link to the survey. Sites with minimal student engagement, for example only taking students occasionally, were excluded. This was done to keep the data clean, only collecting responses from educators who are au‐fait with course expectations. Data collection Data collection was conducted between May and July 2017 using an online survey through Qualtrics™. The decision to complete the anonymous questionnaire ultimately rested with respondents. The first section of the survey collected demographic data. The second section provided quantitative data. This was minimally modified from a validated Technology Attitude Survey (TAS). Mc Farlane et al24 tested the TAS for reliability and validity to appraise teachers’ attitudes towards technology. The TAS was modified by Maag17 for the student nursing setting and likewise found adequate reliability for the tool. Permission was granted from Maag to modify the TAS for this study. The TAS was adapted to ensure it was fit for purpose. One statement, ‘Technology makes me feel stupid’, was removed as it was irrelevant. ‘Nursing student’ was replaced by ‘clinical educator’ to represent the audience. The modified TAS contained 14 items with 5‐point Likert scale responses. Questions were based on positively and negatively geared statements portraying positive and negative attitudes towards technology. This ‘reverse wording’ changes the direction of the scale by asking the same or similar questions in a positive and negative voice and adds to the validity of responses, reducing response sets and bias.25 The third section of the survey, modelled by the researchers, evaluated clinical educators’ experiences with the use of PebblePad™ in the clinical setting. There were two quantitative and two qualitative questions. The quantitative questions evaluated clinical educators’ experiences with using PebblePad™ in the clinical setting. These appraised how easy the clinical educators’ found it to learn the platform as well as how difficult the implementation was for them. One of the qualitative questions evaluated for perceived challenges using OLEs in a clinical setting. The other explored what clinical educators consider the advantages of OLEs over paper‐based documentation. Data collection was conducted between May and July 2017 using an online survey through Qualtrics™. The decision to complete the anonymous questionnaire ultimately rested with respondents. The first section of the survey collected demographic data. The second section provided quantitative data. This was minimally modified from a validated Technology Attitude Survey (TAS). Mc Farlane et al24 tested the TAS for reliability and validity to appraise teachers’ attitudes towards technology. The TAS was modified by Maag17 for the student nursing setting and likewise found adequate reliability for the tool. Permission was granted from Maag to modify the TAS for this study. 
The TAS was adapted to ensure it was fit for purpose. One statement, ‘Technology makes me feel stupid’, was removed as it was irrelevant. ‘Nursing student’ was replaced by ‘clinical educator’ to represent the audience. The modified TAS contained 14 items with 5‐point Likert scale responses. Questions were based on positively and negatively geared statements portraying positive and negative attitudes towards technology. This ‘reverse wording’ changes the direction of the scale by asking the same or similar questions in a positive and negative voice and adds to the validity of responses, reducing response sets and bias.25 The third section of the survey, modelled by the researchers, evaluated clinical educators’ experiences with the use of PebblePad™ in the clinical setting. There were two quantitative and two qualitative questions. The quantitative questions evaluated clinical educators’ experiences with using PebblePad™ in the clinical setting. These appraised how easy the clinical educators’ found it to learn the platform as well as how difficult the implementation was for them. One of the qualitative questions evaluated for perceived challenges using OLEs in a clinical setting. The other explored what clinical educators consider the advantages of OLEs over paper‐based documentation. Data analysis Quantitative data were initially explored using statistical descriptive analyses in SPSS™. For the purpose of data analysis the ordinal scale responses to positively geared question response were scored (5‐Strongly agree, 4‐Agree, 3‐Neither agree nor disagree, 2‐Disagree, 1‐Strongly disagree). Negatively geared statements were reverse scored. An independent samples t‐test was used to determine whether location, rural or remote, was significant in influencing attitudes towards technology. All the quantitative data, the TAS and PebblePad™ questions, were included in this analysis. A Pearson's’ correlation coefficient was computed on the TAS questions to assess the relationship between clinical educators’ attitudes towards technology generally and their perceptions of its use in their role as educators. A scatter plot summarises the results to illuminate the interplay between the two variables. To conduct this analysis the eight TAS questions relating to technology generally were considered a single variable vs. the six TAS questions relating to the perception of the use of technology in clinical education. The sum of mean scores were calculated, computed and plotted in SPSS™. Thematic analysis, using Braun and Clarke's26 method, was used to interpret the qualitative data. It provides a rich and detailed interpretation and there is an emphasis on reflexive dialogue which the researcher engages in throughout the process before reporting the themes. It involves a six‐step approach (Table 1) and the researcher moves forward and back between the steps as many times as required to make sense of the data.26 Braun and Clarke's six phases for thematic analysis.26 Quantitative data were initially explored using statistical descriptive analyses in SPSS™. For the purpose of data analysis the ordinal scale responses to positively geared question response were scored (5‐Strongly agree, 4‐Agree, 3‐Neither agree nor disagree, 2‐Disagree, 1‐Strongly disagree). Negatively geared statements were reverse scored. An independent samples t‐test was used to determine whether location, rural or remote, was significant in influencing attitudes towards technology. 
All the quantitative data, the TAS and PebblePad™ questions, were included in this analysis. A Pearson correlation coefficient was computed on the TAS questions to assess the relationship between clinical educators' attitudes towards technology generally and their perceptions of its use in their role as educators. A scatter plot summarises the results to illuminate the interplay between the two variables. To conduct this analysis the eight TAS questions relating to technology generally were considered a single variable vs. the six TAS questions relating to the perception of the use of technology in clinical education. The sums of the mean scores were calculated and plotted in SPSS™. Thematic analysis, using Braun and Clarke's26 method, was used to interpret the qualitative data. It provides a rich and detailed interpretation and there is an emphasis on reflexive dialogue which the researcher engages in throughout the process before reporting the themes. It involves a six‐step approach (Table 1) and the researcher moves forward and back between the steps as many times as required to make sense of the data.26 Braun and Clarke's six phases for thematic analysis.26 Ethics approval: Ethics approval was granted by the Monash University Human Research Ethics Committee, project number 0197.
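The scoring and quantitative analysis described above (reverse-scoring the negatively geared Likert items, forming composite scores for the eight general-attitude items and the six clinical-education items, correlating the two, and testing the effect of location) can be expressed compactly in code. The sketch below is a minimal illustration under stated assumptions rather than the authors' SPSS workflow: the file name, the location coding, and in particular which item numbers are negatively worded or belong to each sub-scale are hypothetical, and the composite here is a simple per-respondent mean rather than the paper's sum of mean scores.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey export: one row per clinical educator, columns q1-q14 holding
# 5-point Likert responses (5 = strongly agree ... 1 = strongly disagree), plus a
# location column coded "metropolitan" or "rural". All names below are assumptions.
tas = pd.read_csv("tas_responses.csv")

NEGATIVE_ITEMS = ["q2", "q5", "q9", "q12"]        # assumed reverse-worded items
GENERAL_ITEMS = [f"q{i}" for i in range(1, 9)]    # 8 items: attitudes to technology generally
CLINICAL_ITEMS = [f"q{i}" for i in range(9, 15)]  # 6 items: technology in clinical education

# Reverse-score the negatively geared statements (1<->5, 2<->4, 3 unchanged)
tas[NEGATIVE_ITEMS] = 6 - tas[NEGATIVE_ITEMS]

# Composite score per respondent for each sub-scale (illustrative: mean of the items)
tas["general_score"] = tas[GENERAL_ITEMS].mean(axis=1)
tas["clinical_score"] = tas[CLINICAL_ITEMS].mean(axis=1)

# Pearson correlation: general attitudes vs. perceived usefulness in clinical education
r, p_corr = stats.pearsonr(tas["general_score"], tas["clinical_score"])

# Independent-samples t-test: does metropolitan vs. rural location influence attitudes?
metro = tas.loc[tas["location"] == "metropolitan", "general_score"]
rural = tas.loc[tas["location"] == "rural", "general_score"]
t_stat, p_loc = stats.ttest_ind(metro, rural)

print(f"Pearson r = {r:.2f} (p = {p_corr:.3f}); location t-test p = {p_loc:.3f}")
```

The qualitative responses were analysed thematically using Braun and Clarke's six-phase method and have no counterpart in this sketch.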
Results
From a pool of 74 possible participants, 49 surveys were returned. One survey was incomplete and excluded from the data analysis. This constitutes a response rate of 48/74, 65%. There was a 100% response rate within the surveys to both the qualitative and quantitative components. The majority of respondents worked in Metropolitan sites (37/48, 77.1%). 39/48, 81.3% worked at a single clinical site. Medium sized sites were best represented 24/48, 50% with a relatively even mix of large 14/48, 29.2% and small 15/48, 31.3% sites represented. A majority of respondents worked in private imaging 29/48, 60.4% (Table 2). Participant demographic characteristics Multiple responses possible. Using the modified TAS, participants were asked eight questions based on their attitudes towards technology generally (Table 3). These appraise personal feelings when using technology such as confidence, nervousness and perceived importance and level of difficulty in learning new technologies. Six questions were based on their perceptions of the use of technology in clinical education (Table 4). A general positive attitude towards technology was evident. A Pearson's r showed a mildly positive correlation between attitudes towards technology and an appreciation for the benefit of technology to the role of clinical educator (Table 5). A scatter plot illustrates that the relationship between them is not absolute (Fig. 1). Descriptive analysis, attitudes to technology Descriptive analysis, perceived use of technology in clinical education Pearson's correlation: attitudes to technology and its perceived use in clinical education Correlation of attitudes towards technology versus the perceived usefulness in clinical education. Two questions related specifically to the PebblePad™ platform (Q19 and 20). These were aimed at appraising how easy the clinical educators found it to learn how to use the platform and how difficult the implementation of it in a clinical environment proved. There was a large spread in these responses (Table 6). 33.2% found PebblePad™ easy to learn how to use with 47.9% finding that it was not easy. Furthermore implementing the OLE in the clinical site was challenging for 35.4% of respondents, with only 37.5% finding it easy. This correlates with what Nairn et al12 cite as challenges facing clinical educators surrounding OLEs. Learning how to use PebblePad™ and implementation of PebblePad™ in a clinical environment No significant difference was noted between rural or remote locations (Table 7). TAS and rural and metropolitan location In keeping with Braun and Clarke's26 thematic analysis, the qualitative data were coded with five themes identified. Theme 1: availability of IT resources (hardware) The strongest theme was lack of access and availability of IT resources for using OLEs. Respondent 5 quoted the ‘lack of availability of portable devices with multiple users’ at their site. This was compounded by the fact that ‘Access to computer time can be limited in a clinical practice that utilises computers for clinical work’ Respondent 27. The strongest theme was lack of access and availability of IT resources for using OLEs. Respondent 5 quoted the ‘lack of availability of portable devices with multiple users’ at their site. This was compounded by the fact that ‘Access to computer time can be limited in a clinical practice that utilises computers for clinical work’ Respondent 27. 
Theme 2: time resource in clinical environment A large proportion of participants reported that lack of time is a major factor when using OLEs in clinical sites. Respondent 24 mentioned that the ‘time to concentrate on filling out reports without interruption’ is difficult to find. Respondent 8 commented that ‘It is still easier to have a piece of paper in front of your when you are in the process of assessing a student’. Other time factors specific to OLEs was the time required to retrieve passwords and the time it takes to login. A large proportion of participants reported that lack of time is a major factor when using OLEs in clinical sites. Respondent 24 mentioned that the ‘time to concentrate on filling out reports without interruption’ is difficult to find. Respondent 8 commented that ‘It is still easier to have a piece of paper in front of your when you are in the process of assessing a student’. Other time factors specific to OLEs was the time required to retrieve passwords and the time it takes to login. Theme 3: platform design Respondents reported both favourably and unfavourably under this theme. Respondent 12 mentioned that ‘Difficult to navigate systems make documentation difficult’. Five respondents commented that the design of PebblePad™ was not intuitive to use. Respondent 6 suggested that it was ‘not designed for the purpose and so is very unintuitive to use’. Conversely four respondents mentioned the ease of navigating the platform over paper. Respondents 21 and 8 found it much easier ‘Not having to flick through pages of student books to find the correct page’. This equivocal response was reflected in the quantitative data where 47.9% of respondents agreed with the statement ‘I found PebblePad™ easy to learn how to use’, with a further 18.7% saying they neither agreed nor disagreed with the statement. Several respondents mentioned the capability for multiple users at different locations to be able to access the student's work as a significant advantage of the online platform. Respondent 11 mentioned that it was ‘Easy for all parties to access the online documents at the same time or at any time that it is convenient’. Respondent 41 furthered this, mentioning that ‘It is easier for me to check on the work at my own leisure without needing to chase anyone up or be told they left their book at home’. Respondents reported both favourably and unfavourably under this theme. Respondent 12 mentioned that ‘Difficult to navigate systems make documentation difficult’. Five respondents commented that the design of PebblePad™ was not intuitive to use. Respondent 6 suggested that it was ‘not designed for the purpose and so is very unintuitive to use’. Conversely four respondents mentioned the ease of navigating the platform over paper. Respondents 21 and 8 found it much easier ‘Not having to flick through pages of student books to find the correct page’. This equivocal response was reflected in the quantitative data where 47.9% of respondents agreed with the statement ‘I found PebblePad™ easy to learn how to use’, with a further 18.7% saying they neither agreed nor disagreed with the statement. Several respondents mentioned the capability for multiple users at different locations to be able to access the student's work as a significant advantage of the online platform. Respondent 11 mentioned that it was ‘Easy for all parties to access the online documents at the same time or at any time that it is convenient’. 
Respondent 41 furthered this, mentioning that ‘It is easier for me to check on the work at my own leisure without needing to chase anyone up or be told they left their book at home’. Theme 4: tracking of progress/documentation There was a diversity of responses under this theme. This major theme has been broken down into two ‘sub’ themes for clarity. 13/48, 27.1% of respondents mentioned one or other of these sub themes Tracking of the documentation and;Tracking of student progress. Tracking of the documentation and; Tracking of student progress. Tracking documentation Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Tracking of student progress 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ There was a diversity of responses under this theme. 
This major theme has been broken down into two ‘sub’ themes for clarity. 13/48, 27.1% of respondents mentioned one or other of these sub themes Tracking of the documentation and;Tracking of student progress. Tracking of the documentation and; Tracking of student progress. Tracking documentation Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Tracking of student progress 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ Theme 5: increased security Clinical educators were cognisant that OLEs are a more secure environment which is ‘Tamper proof’ (Respondent 41). 
Respondent 42 said that ‘my signature can't be forged … their work can't be tampered with and more importantly that they aren't able to take advantage of staff members who can't be bothered to check their work and sign everything off for them’. Clinical educators were cognisant that OLEs are a more secure environment which is ‘Tamper proof’ (Respondent 41). Respondent 42 said that ‘my signature can't be forged … their work can't be tampered with and more importantly that they aren't able to take advantage of staff members who can't be bothered to check their work and sign everything off for them’.
Conclusion
Monash University offered the first undergraduate radiography course in Australia to implement an OLE for all aspects of clinical placements. Clinical education is crucial in the education of radiography students, and educators assume a key role in this. PebblePad™ is at the heart of the clinical programme, requiring significant input from clinical educators. This study serves to give voice to this important but underrepresented group, those clinical educators tasked with implementation of OLEs in clinical environments. It is imperative to understand the perceptions of clinical educators within their environments. As evidenced by the study, even with a positive attitude to technology, clinical environments have particular challenges. With technology taking a more central role in education, it is imperative that we understand how to ensure it is utilised to its full potential in all domains of education. Therefore, enculturating positive attitudes towards technology and associated pedagogical change is important. Training and support specific to OLEs is crucial for successful implementation. Partaking in this study affords clinical educators an opportunity to reflect on their own experiences with technology in their roles and on implementation strategies for future technological advances.
[ "Introduction", "Aims of the study", "Data collection", "Data analysis", "Ethics approval", "Theme 1: availability of IT resources (hardware)", "Theme 2: time resource in clinical environment", "Theme 3: platform design", "Theme 4: tracking of progress/documentation", "Tracking documentation", "Tracking of student progress", "Theme 5: increased security", "Implications for future research" ]
[ "In healthcare, there is ongoing flux in expectations for students and practitioners. There has been an explosion of technology and the volume of medical knowledge has increased exponentially.1, 2, 3\n\nIn Australia, the Medical Radiation Practice Board of Australia (MRPBA) professional capabilities framework4 obliges radiographers to embrace contemporary conceptualisations of competence. They must keep abreast of shifting expectations5, 6 while the profession struggles to shed the perception that it is fulfilling the role of technical operatives.1 Due to these shifting expectations and to assure the MRPBA that students meet registration requirements on graduation; it is imperative to establish integrated systems of monitoring and evidencing students’ development. With trends towards the use of technology in tertiary education7, 8, 9 online learning environments (OLEs) could fulfil this need.\nWithin radiography, while course content is increasingly facilitated online10, 11, recording radiography students’ development during clinical rotations remains traditional.10 However with technological trends, clinical educators are increasingly pressured to utilise OLEs.12, 13 Introducing technology in clinical environments poses challenges for clinical educators as much as it does for students and teachers.12, 14 These range from; cost, security of the devices and the data on them and pedagogical challenges.13 Negative attitudes towards technology is a barrier to successful implementation of technology in clinical education.14, 15, 16, 17\n\nWhile the attitudes of academic staff, clinical staff in non‐teaching roles and students to technology has been explored18, 19, 20, 21 much less is known about clinical educators’ perceptions of technology.13, 22\n\nThe Bachelor of Radiography and Medical Imaging (Honours) at Monash University is a 4‐year integrated academic and clinical course. Intake is approximately 80 students yearly. Students complete clinical placements each year. The degree provides a qualification allowing students to seek employment in Australia and worldwide. In 2014, PebblePad™ was introduced as a contemporary learning platform for clinical studies, replacing paper workbooks.\nPebblePad™ is a web‐based platform offering an array of tools to help students record evidence of their learning and reflect on their clinical experiences. PebblePad™ and its ePortfolio functionality provides students with a holistic and integrated learning experience with a focus on preparation for professional life. Students can continue to access the platform after graduation. 
The implementation of PebblePad™ requires significant input from clinical educators.\n Aims of the study The research aimed to:\nExamine clinical educators’ attitudes towards technology generally.Examine clinical educators’ perceptions of the use of technology in clinical education.Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment.\n\nExamine clinical educators’ attitudes towards technology generally.\nExamine clinical educators’ perceptions of the use of technology in clinical education.\nExplore clinical educators’ experiences of implementing PebblePad™ in their clinical environment.\nFor the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool.\nThe research aimed to:\nExamine clinical educators’ attitudes towards technology generally.Examine clinical educators’ perceptions of the use of technology in clinical education.Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment.\n\nExamine clinical educators’ attitudes towards technology generally.\nExamine clinical educators’ perceptions of the use of technology in clinical education.\nExplore clinical educators’ experiences of implementing PebblePad™ in their clinical environment.\nFor the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool.", "The research aimed to:\nExamine clinical educators’ attitudes towards technology generally.Examine clinical educators’ perceptions of the use of technology in clinical education.Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment.\n\nExamine clinical educators’ attitudes towards technology generally.\nExamine clinical educators’ perceptions of the use of technology in clinical education.\nExplore clinical educators’ experiences of implementing PebblePad™ in their clinical environment.\nFor the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool.", "Data collection was conducted between May and July 2017 using an online survey through Qualtrics™. The decision to complete the anonymous questionnaire ultimately rested with respondents.\nThe first section of the survey collected demographic data.\nThe second section provided quantitative data. This was minimally modified from a validated Technology Attitude Survey (TAS). Mc Farlane et al24 tested the TAS for reliability and validity to appraise teachers’ attitudes towards technology. The TAS was modified by Maag17 for the student nursing setting and likewise found adequate reliability for the tool. Permission was granted from Maag to modify the TAS for this study.\nThe TAS was adapted to ensure it was fit for purpose. One statement, ‘Technology makes me feel stupid’, was removed as it was irrelevant. ‘Nursing student’ was replaced by ‘clinical educator’ to represent the audience.\nThe modified TAS contained 14 items with 5‐point Likert scale responses. Questions were based on positively and negatively geared statements portraying positive and negative attitudes towards technology. 
This ‘reverse wording’ changes the direction of the scale by asking the same or similar questions in a positive and negative voice and adds to the validity of responses, reducing response sets and bias.25\n\nThe third section of the survey, modelled by the researchers, evaluated clinical educators’ experiences with the use of PebblePad™ in the clinical setting. There were two quantitative and two qualitative questions. The quantitative questions evaluated clinical educators’ experiences with using PebblePad™ in the clinical setting. These appraised how easy the clinical educators’ found it to learn the platform as well as how difficult the implementation was for them. One of the qualitative questions evaluated for perceived challenges using OLEs in a clinical setting. The other explored what clinical educators consider the advantages of OLEs over paper‐based documentation.", "Quantitative data were initially explored using statistical descriptive analyses in SPSS™. For the purpose of data analysis the ordinal scale responses to positively geared question response were scored (5‐Strongly agree, 4‐Agree, 3‐Neither agree nor disagree, 2‐Disagree, 1‐Strongly disagree). Negatively geared statements were reverse scored.\nAn independent samples t‐test was used to determine whether location, rural or remote, was significant in influencing attitudes towards technology. All the quantitative data, the TAS and PebblePad™ questions, were included in this analysis.\nA Pearson's’ correlation coefficient was computed on the TAS questions to assess the relationship between clinical educators’ attitudes towards technology generally and their perceptions of its use in their role as educators. A scatter plot summarises the results to illuminate the interplay between the two variables. To conduct this analysis the eight TAS questions relating to technology generally were considered a single variable vs. the six TAS questions relating to the perception of the use of technology in clinical education. The sum of mean scores were calculated, computed and plotted in SPSS™.\nThematic analysis, using Braun and Clarke's26 method, was used to interpret the qualitative data. It provides a rich and detailed interpretation and there is an emphasis on reflexive dialogue which the researcher engages in throughout the process before reporting the themes. It involves a six‐step approach (Table 1) and the researcher moves forward and back between the steps as many times as required to make sense of the data.26\n\nBraun and Clarke's six phases for thematic analysis.26\n", "Ethics approval was granted by the Monash University Human Research Ethics Committee, project number 0197.", "The strongest theme was lack of access and availability of IT resources for using OLEs. Respondent 5 quoted the ‘lack of availability of portable devices with multiple users’ at their site. This was compounded by the fact that ‘Access to computer time can be limited in a clinical practice that utilises computers for clinical work’ Respondent 27.", "A large proportion of participants reported that lack of time is a major factor when using OLEs in clinical sites. Respondent 24 mentioned that the ‘time to concentrate on filling out reports without interruption’ is difficult to find. Respondent 8 commented that ‘It is still easier to have a piece of paper in front of your when you are in the process of assessing a student’. 
Other time factors specific to OLEs was the time required to retrieve passwords and the time it takes to login.", "Respondents reported both favourably and unfavourably under this theme. Respondent 12 mentioned that ‘Difficult to navigate systems make documentation difficult’. Five respondents commented that the design of PebblePad™ was not intuitive to use. Respondent 6 suggested that it was ‘not designed for the purpose and so is very unintuitive to use’. Conversely four respondents mentioned the ease of navigating the platform over paper. Respondents 21 and 8 found it much easier ‘Not having to flick through pages of student books to find the correct page’.\nThis equivocal response was reflected in the quantitative data where 47.9% of respondents agreed with the statement ‘I found PebblePad™ easy to learn how to use’, with a further 18.7% saying they neither agreed nor disagreed with the statement.\nSeveral respondents mentioned the capability for multiple users at different locations to be able to access the student's work as a significant advantage of the online platform. Respondent 11 mentioned that it was ‘Easy for all parties to access the online documents at the same time or at any time that it is convenient’. Respondent 41 furthered this, mentioning that ‘It is easier for me to check on the work at my own leisure without needing to chase anyone up or be told they left their book at home’.", "There was a diversity of responses under this theme. This major theme has been broken down into two ‘sub’ themes for clarity. 13/48, 27.1% of respondents mentioned one or other of these sub themes\nTracking of the documentation and;Tracking of student progress.\n\nTracking of the documentation and;\nTracking of student progress.\n Tracking documentation Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33).\nParticipants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33).\n Tracking of student progress 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25).\nThere was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). 
Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’\n24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25).\nThere was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’", "Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33).", "24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25).\nThere was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’", "Clinical educators were cognisant that OLEs are a more secure environment which is ‘Tamper proof’ (Respondent 41). Respondent 42 said that ‘my signature can't be forged … their work can't be tampered with and more importantly that they aren't able to take advantage of staff members who can't be bothered to check their work and sign everything off for them’.", "This pilot study establishes a baseline but more importantly paves the way for further research in this crucial area. Incongruence exists between attitudes to technology generally and perceptions of the usefulness of technology for clinical education. A larger study should be completed to fully examine the significance of this finding. 
Furthermore, while the research question aimed to examine the attitudes and experiences of clinical educators, it would be of great benefit to gain an understanding of the factors that influence clinical educators’ attitudes and experiences with technology in their environments. Factors such as the setting where the educators work and the nature of their appointments would be important to consider; however, these were beyond the scope of this study. Educators have varied supervisory arrangements and appointments in different locations, and it is feasible that these factors could influence their experiences and understanding of OLEs." ]
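As a concrete illustration of the quantitative procedure described in the Data analysis section above (reverse scoring of negatively geared Likert items, descriptive statistics, an independent samples t-test by location, and a Pearson correlation between the two groups of TAS items), the sketch below shows one way this could be reproduced. It is a minimal sketch only: the study itself used SPSS™, and the file name, the column names q1–q14 and location, and the particular items treated as negatively geared are assumptions for illustration, not details taken from the published instrument.

```python
# Minimal, illustrative re-analysis sketch; the study itself used SPSS(TM).
# File name, column names (q1-q14, location) and the reverse-scored item list
# are assumptions for illustration, not taken from the published survey.
import pandas as pd
from scipy import stats

df = pd.read_csv("tas_responses.csv")  # hypothetical export of the Qualtrics survey

LIKERT = {"Strongly agree": 5, "Agree": 4, "Neither agree nor disagree": 3,
          "Disagree": 2, "Strongly disagree": 1}
items = [f"q{i}" for i in range(1, 15)]      # the 14 modified TAS items
reverse_items = ["q2", "q5", "q9", "q12"]    # negatively geared items (assumed)

scored = df[items].replace(LIKERT).astype(float)
scored[reverse_items] = 6 - scored[reverse_items]   # reverse score on a 1-5 scale
print(scored.describe())                            # descriptive statistics

# Independent samples t-test: metropolitan vs. rural respondents,
# using each respondent's mean score across all scored items
metro = scored[df["location"] == "metropolitan"].mean(axis=1)
rural = scored[df["location"] == "rural"].mean(axis=1)
ttest = stats.ttest_ind(metro, rural, equal_var=False)
print(f"location t = {ttest.statistic:.2f}, p = {ttest.pvalue:.3f}")

# Pearson correlation: general technology attitude (8 items, assumed q1-q8)
# vs. perceived use in clinical education (6 items, assumed q9-q14)
general = scored[[f"q{i}" for i in range(1, 9)]].sum(axis=1)
clinical = scored[[f"q{i}" for i in range(9, 15)]].sum(axis=1)
r, p_r = stats.pearsonr(general, clinical)
print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")
```

The reverse-scoring step (6 minus the raw score) mirrors the reverse wording described for the TAS, so that higher values consistently indicate a more positive attitude before the sub-scores are summed and correlated.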
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Aims of the study", "Methods", "Data collection", "Data analysis", "Ethics approval", "Results", "Theme 1: availability of IT resources (hardware)", "Theme 2: time resource in clinical environment", "Theme 3: platform design", "Theme 4: tracking of progress/documentation", "Tracking documentation", "Tracking of student progress", "Theme 5: increased security", "Discussion", "Implications for future research", "Conclusion", "Conflict of Interest" ]
[ "In healthcare, there is ongoing flux in expectations for students and practitioners. There has been an explosion of technology and the volume of medical knowledge has increased exponentially.1, 2, 3\n\nIn Australia, the Medical Radiation Practice Board of Australia (MRPBA) professional capabilities framework4 obliges radiographers to embrace contemporary conceptualisations of competence. They must keep abreast of shifting expectations5, 6 while the profession struggles to shed the perception that it is fulfilling the role of technical operatives.1 Due to these shifting expectations and to assure the MRPBA that students meet registration requirements on graduation; it is imperative to establish integrated systems of monitoring and evidencing students’ development. With trends towards the use of technology in tertiary education7, 8, 9 online learning environments (OLEs) could fulfil this need.\nWithin radiography, while course content is increasingly facilitated online10, 11, recording radiography students’ development during clinical rotations remains traditional.10 However with technological trends, clinical educators are increasingly pressured to utilise OLEs.12, 13 Introducing technology in clinical environments poses challenges for clinical educators as much as it does for students and teachers.12, 14 These range from; cost, security of the devices and the data on them and pedagogical challenges.13 Negative attitudes towards technology is a barrier to successful implementation of technology in clinical education.14, 15, 16, 17\n\nWhile the attitudes of academic staff, clinical staff in non‐teaching roles and students to technology has been explored18, 19, 20, 21 much less is known about clinical educators’ perceptions of technology.13, 22\n\nThe Bachelor of Radiography and Medical Imaging (Honours) at Monash University is a 4‐year integrated academic and clinical course. Intake is approximately 80 students yearly. Students complete clinical placements each year. The degree provides a qualification allowing students to seek employment in Australia and worldwide. In 2014, PebblePad™ was introduced as a contemporary learning platform for clinical studies, replacing paper workbooks.\nPebblePad™ is a web‐based platform offering an array of tools to help students record evidence of their learning and reflect on their clinical experiences. PebblePad™ and its ePortfolio functionality provides students with a holistic and integrated learning experience with a focus on preparation for professional life. Students can continue to access the platform after graduation. 
The implementation of PebblePad™ requires significant input from clinical educators.\n Aims of the study The research aimed to:\nExamine clinical educators’ attitudes towards technology generally.\nExamine clinical educators’ perceptions of the use of technology in clinical education.\nExplore clinical educators’ experiences of implementing PebblePad™ in their clinical environment.\nFor the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool.", "The research aimed to:\nExamine clinical educators’ attitudes towards technology generally.\nExamine clinical educators’ perceptions of the use of technology in clinical education.\nExplore clinical educators’ experiences of implementing PebblePad™ in their clinical environment.\nFor the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool.", "This mixed methods study reports on clinical supervisors, educators and tutors responsible for the supervision of Medical Imaging students on clinical placement. Herein, they will be referred to as ‘clinical educators’. Quantitative data meant we could measure the prevalence of the dimensions of the phenomenon we were exploring. Qualitative questions allow deeper exploration of a phenomenon where little is known about it, which was the case in this instance.23\n\nAn email invitation was sent by the Clinical Support Officer to clinical educators to participate. It included an explanatory statement and a link to the survey. Sites with minimal student engagement, for example only taking students occasionally, were excluded. This was done to keep the data clean, only collecting responses from educators who are au‐fait with course expectations.\n Data collection Data collection was conducted between May and July 2017 using an online survey through Qualtrics™. The decision to complete the anonymous questionnaire ultimately rested with respondents.\nThe first section of the survey collected demographic data.\nThe second section provided quantitative data. This was minimally modified from a validated Technology Attitude Survey (TAS). 
Mc Farlane et al24 tested the TAS for reliability and validity to appraise teachers’ attitudes towards technology. The TAS was modified by Maag17 for the student nursing setting and likewise found adequate reliability for the tool. Permission was granted from Maag to modify the TAS for this study.\nThe TAS was adapted to ensure it was fit for purpose. One statement, ‘Technology makes me feel stupid’, was removed as it was irrelevant. ‘Nursing student’ was replaced by ‘clinical educator’ to represent the audience.\nThe modified TAS contained 14 items with 5‐point Likert scale responses. Questions were based on positively and negatively geared statements portraying positive and negative attitudes towards technology. This ‘reverse wording’ changes the direction of the scale by asking the same or similar questions in a positive and negative voice and adds to the validity of responses, reducing response sets and bias.25\n\nThe third section of the survey, modelled by the researchers, evaluated clinical educators’ experiences with the use of PebblePad™ in the clinical setting. There were two quantitative and two qualitative questions. The quantitative questions evaluated clinical educators’ experiences with using PebblePad™ in the clinical setting. These appraised how easy the clinical educators’ found it to learn the platform as well as how difficult the implementation was for them. One of the qualitative questions evaluated for perceived challenges using OLEs in a clinical setting. The other explored what clinical educators consider the advantages of OLEs over paper‐based documentation.\nData collection was conducted between May and July 2017 using an online survey through Qualtrics™. The decision to complete the anonymous questionnaire ultimately rested with respondents.\nThe first section of the survey collected demographic data.\nThe second section provided quantitative data. This was minimally modified from a validated Technology Attitude Survey (TAS). Mc Farlane et al24 tested the TAS for reliability and validity to appraise teachers’ attitudes towards technology. The TAS was modified by Maag17 for the student nursing setting and likewise found adequate reliability for the tool. Permission was granted from Maag to modify the TAS for this study.\nThe TAS was adapted to ensure it was fit for purpose. One statement, ‘Technology makes me feel stupid’, was removed as it was irrelevant. ‘Nursing student’ was replaced by ‘clinical educator’ to represent the audience.\nThe modified TAS contained 14 items with 5‐point Likert scale responses. Questions were based on positively and negatively geared statements portraying positive and negative attitudes towards technology. This ‘reverse wording’ changes the direction of the scale by asking the same or similar questions in a positive and negative voice and adds to the validity of responses, reducing response sets and bias.25\n\nThe third section of the survey, modelled by the researchers, evaluated clinical educators’ experiences with the use of PebblePad™ in the clinical setting. There were two quantitative and two qualitative questions. The quantitative questions evaluated clinical educators’ experiences with using PebblePad™ in the clinical setting. These appraised how easy the clinical educators’ found it to learn the platform as well as how difficult the implementation was for them. One of the qualitative questions evaluated for perceived challenges using OLEs in a clinical setting. 
The other explored what clinical educators consider the advantages of OLEs over paper‐based documentation.\n Data analysis Quantitative data were initially explored using statistical descriptive analyses in SPSS™. For the purpose of data analysis the ordinal scale responses to positively geared question response were scored (5‐Strongly agree, 4‐Agree, 3‐Neither agree nor disagree, 2‐Disagree, 1‐Strongly disagree). Negatively geared statements were reverse scored.\nAn independent samples t‐test was used to determine whether location, rural or remote, was significant in influencing attitudes towards technology. All the quantitative data, the TAS and PebblePad™ questions, were included in this analysis.\nA Pearson's’ correlation coefficient was computed on the TAS questions to assess the relationship between clinical educators’ attitudes towards technology generally and their perceptions of its use in their role as educators. A scatter plot summarises the results to illuminate the interplay between the two variables. To conduct this analysis the eight TAS questions relating to technology generally were considered a single variable vs. the six TAS questions relating to the perception of the use of technology in clinical education. The sum of mean scores were calculated, computed and plotted in SPSS™.\nThematic analysis, using Braun and Clarke's26 method, was used to interpret the qualitative data. It provides a rich and detailed interpretation and there is an emphasis on reflexive dialogue which the researcher engages in throughout the process before reporting the themes. It involves a six‐step approach (Table 1) and the researcher moves forward and back between the steps as many times as required to make sense of the data.26\n\nBraun and Clarke's six phases for thematic analysis.26\n\nQuantitative data were initially explored using statistical descriptive analyses in SPSS™. For the purpose of data analysis the ordinal scale responses to positively geared question response were scored (5‐Strongly agree, 4‐Agree, 3‐Neither agree nor disagree, 2‐Disagree, 1‐Strongly disagree). Negatively geared statements were reverse scored.\nAn independent samples t‐test was used to determine whether location, rural or remote, was significant in influencing attitudes towards technology. All the quantitative data, the TAS and PebblePad™ questions, were included in this analysis.\nA Pearson's’ correlation coefficient was computed on the TAS questions to assess the relationship between clinical educators’ attitudes towards technology generally and their perceptions of its use in their role as educators. A scatter plot summarises the results to illuminate the interplay between the two variables. To conduct this analysis the eight TAS questions relating to technology generally were considered a single variable vs. the six TAS questions relating to the perception of the use of technology in clinical education. The sum of mean scores were calculated, computed and plotted in SPSS™.\nThematic analysis, using Braun and Clarke's26 method, was used to interpret the qualitative data. It provides a rich and detailed interpretation and there is an emphasis on reflexive dialogue which the researcher engages in throughout the process before reporting the themes. 
It involves a six‐step approach (Table 1) and the researcher moves forward and back between the steps as many times as required to make sense of the data.26\n\nBraun and Clarke's six phases for thematic analysis.26\n\n Ethics approval Ethics approval was granted by the Monash University Human Research Ethics Committee, project number 0197.\nEthics approval was granted by the Monash University Human Research Ethics Committee, project number 0197.", "Data collection was conducted between May and July 2017 using an online survey through Qualtrics™. The decision to complete the anonymous questionnaire ultimately rested with respondents.\nThe first section of the survey collected demographic data.\nThe second section provided quantitative data. This was minimally modified from a validated Technology Attitude Survey (TAS). Mc Farlane et al24 tested the TAS for reliability and validity to appraise teachers’ attitudes towards technology. The TAS was modified by Maag17 for the student nursing setting and likewise found adequate reliability for the tool. Permission was granted from Maag to modify the TAS for this study.\nThe TAS was adapted to ensure it was fit for purpose. One statement, ‘Technology makes me feel stupid’, was removed as it was irrelevant. ‘Nursing student’ was replaced by ‘clinical educator’ to represent the audience.\nThe modified TAS contained 14 items with 5‐point Likert scale responses. Questions were based on positively and negatively geared statements portraying positive and negative attitudes towards technology. This ‘reverse wording’ changes the direction of the scale by asking the same or similar questions in a positive and negative voice and adds to the validity of responses, reducing response sets and bias.25\n\nThe third section of the survey, modelled by the researchers, evaluated clinical educators’ experiences with the use of PebblePad™ in the clinical setting. There were two quantitative and two qualitative questions. The quantitative questions evaluated clinical educators’ experiences with using PebblePad™ in the clinical setting. These appraised how easy the clinical educators’ found it to learn the platform as well as how difficult the implementation was for them. One of the qualitative questions evaluated for perceived challenges using OLEs in a clinical setting. The other explored what clinical educators consider the advantages of OLEs over paper‐based documentation.", "Quantitative data were initially explored using statistical descriptive analyses in SPSS™. For the purpose of data analysis the ordinal scale responses to positively geared question response were scored (5‐Strongly agree, 4‐Agree, 3‐Neither agree nor disagree, 2‐Disagree, 1‐Strongly disagree). Negatively geared statements were reverse scored.\nAn independent samples t‐test was used to determine whether location, rural or remote, was significant in influencing attitudes towards technology. All the quantitative data, the TAS and PebblePad™ questions, were included in this analysis.\nA Pearson's’ correlation coefficient was computed on the TAS questions to assess the relationship between clinical educators’ attitudes towards technology generally and their perceptions of its use in their role as educators. A scatter plot summarises the results to illuminate the interplay between the two variables. To conduct this analysis the eight TAS questions relating to technology generally were considered a single variable vs. 
the six TAS questions relating to the perception of the use of technology in clinical education. The sum of mean scores were calculated, computed and plotted in SPSS™.\nThematic analysis, using Braun and Clarke's26 method, was used to interpret the qualitative data. It provides a rich and detailed interpretation and there is an emphasis on reflexive dialogue which the researcher engages in throughout the process before reporting the themes. It involves a six‐step approach (Table 1) and the researcher moves forward and back between the steps as many times as required to make sense of the data.26\n\nBraun and Clarke's six phases for thematic analysis.26\n", "Ethics approval was granted by the Monash University Human Research Ethics Committee, project number 0197.", "From a pool of 74 possible participants, 49 surveys were returned. One survey was incomplete and excluded from the data analysis. This constitutes a response rate of 48/74, 65%. There was a 100% response rate within the surveys to both the qualitative and quantitative components.\nThe majority of respondents worked in Metropolitan sites (37/48, 77.1%). 39/48, 81.3% worked at a single clinical site. Medium sized sites were best represented 24/48, 50% with a relatively even mix of large 14/48, 29.2% and small 15/48, 31.3% sites represented. A majority of respondents worked in private imaging 29/48, 60.4% (Table 2).\nParticipant demographic characteristics\nMultiple responses possible.\nUsing the modified TAS, participants were asked eight questions based on their attitudes towards technology generally (Table 3). These appraise personal feelings when using technology such as confidence, nervousness and perceived importance and level of difficulty in learning new technologies. Six questions were based on their perceptions of the use of technology in clinical education (Table 4). A general positive attitude towards technology was evident. A Pearson's r showed a mildly positive correlation between attitudes towards technology and an appreciation for the benefit of technology to the role of clinical educator (Table 5). A scatter plot illustrates that the relationship between them is not absolute (Fig. 1).\nDescriptive analysis, attitudes to technology\nDescriptive analysis, perceived use of technology in clinical education\nPearson's correlation: attitudes to technology and its perceived use in clinical education\nCorrelation of attitudes towards technology versus the perceived usefulness in clinical education.\nTwo questions related specifically to the PebblePad™ platform (Q19 and 20). These were aimed at appraising how easy the clinical educators found it to learn how to use the platform and how difficult the implementation of it in a clinical environment proved. There was a large spread in these responses (Table 6). 33.2% found PebblePad™ easy to learn how to use with 47.9% finding that it was not easy. Furthermore implementing the OLE in the clinical site was challenging for 35.4% of respondents, with only 37.5% finding it easy. 
This correlates with what Nairn et al12 cite as challenges facing clinical educators surrounding OLEs.\nLearning how to use PebblePad™ and implementation of PebblePad™ in a clinical environment\nNo significant difference was noted between rural or remote locations (Table 7).\nTAS and rural and metropolitan location\nIn keeping with Braun and Clarke's26 thematic analysis, the qualitative data were coded with five themes identified.\n Theme 1: availability of IT resources (hardware) The strongest theme was lack of access and availability of IT resources for using OLEs. Respondent 5 quoted the ‘lack of availability of portable devices with multiple users’ at their site. This was compounded by the fact that ‘Access to computer time can be limited in a clinical practice that utilises computers for clinical work’ Respondent 27.\nThe strongest theme was lack of access and availability of IT resources for using OLEs. Respondent 5 quoted the ‘lack of availability of portable devices with multiple users’ at their site. This was compounded by the fact that ‘Access to computer time can be limited in a clinical practice that utilises computers for clinical work’ Respondent 27.\n Theme 2: time resource in clinical environment A large proportion of participants reported that lack of time is a major factor when using OLEs in clinical sites. Respondent 24 mentioned that the ‘time to concentrate on filling out reports without interruption’ is difficult to find. Respondent 8 commented that ‘It is still easier to have a piece of paper in front of your when you are in the process of assessing a student’. Other time factors specific to OLEs was the time required to retrieve passwords and the time it takes to login.\nA large proportion of participants reported that lack of time is a major factor when using OLEs in clinical sites. Respondent 24 mentioned that the ‘time to concentrate on filling out reports without interruption’ is difficult to find. Respondent 8 commented that ‘It is still easier to have a piece of paper in front of your when you are in the process of assessing a student’. Other time factors specific to OLEs was the time required to retrieve passwords and the time it takes to login.\n Theme 3: platform design Respondents reported both favourably and unfavourably under this theme. Respondent 12 mentioned that ‘Difficult to navigate systems make documentation difficult’. Five respondents commented that the design of PebblePad™ was not intuitive to use. Respondent 6 suggested that it was ‘not designed for the purpose and so is very unintuitive to use’. Conversely four respondents mentioned the ease of navigating the platform over paper. Respondents 21 and 8 found it much easier ‘Not having to flick through pages of student books to find the correct page’.\nThis equivocal response was reflected in the quantitative data where 47.9% of respondents agreed with the statement ‘I found PebblePad™ easy to learn how to use’, with a further 18.7% saying they neither agreed nor disagreed with the statement.\nSeveral respondents mentioned the capability for multiple users at different locations to be able to access the student's work as a significant advantage of the online platform. Respondent 11 mentioned that it was ‘Easy for all parties to access the online documents at the same time or at any time that it is convenient’. 
Respondent 41 furthered this, mentioning that ‘It is easier for me to check on the work at my own leisure without needing to chase anyone up or be told they left their book at home’.\nRespondents reported both favourably and unfavourably under this theme. Respondent 12 mentioned that ‘Difficult to navigate systems make documentation difficult’. Five respondents commented that the design of PebblePad™ was not intuitive to use. Respondent 6 suggested that it was ‘not designed for the purpose and so is very unintuitive to use’. Conversely four respondents mentioned the ease of navigating the platform over paper. Respondents 21 and 8 found it much easier ‘Not having to flick through pages of student books to find the correct page’.\nThis equivocal response was reflected in the quantitative data where 47.9% of respondents agreed with the statement ‘I found PebblePad™ easy to learn how to use’, with a further 18.7% saying they neither agreed nor disagreed with the statement.\nSeveral respondents mentioned the capability for multiple users at different locations to be able to access the student's work as a significant advantage of the online platform. Respondent 11 mentioned that it was ‘Easy for all parties to access the online documents at the same time or at any time that it is convenient’. Respondent 41 furthered this, mentioning that ‘It is easier for me to check on the work at my own leisure without needing to chase anyone up or be told they left their book at home’.\n Theme 4: tracking of progress/documentation There was a diversity of responses under this theme. This major theme has been broken down into two ‘sub’ themes for clarity. 13/48, 27.1% of respondents mentioned one or other of these sub themes\nTracking of the documentation and;Tracking of student progress.\n\nTracking of the documentation and;\nTracking of student progress.\n Tracking documentation Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33).\nParticipants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33).\n Tracking of student progress 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25).\nThere was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). 
‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’\n24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25).\nThere was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’\nThere was a diversity of responses under this theme. This major theme has been broken down into two ‘sub’ themes for clarity. 13/48, 27.1% of respondents mentioned one or other of these sub themes\nTracking of the documentation and;Tracking of student progress.\n\nTracking of the documentation and;\nTracking of student progress.\n Tracking documentation Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33).\nParticipants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33).\n Tracking of student progress 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25).\nThere was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). 
Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’\n24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25).\nThere was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’\n Theme 5: increased security Clinical educators were cognisant that OLEs are a more secure environment which is ‘Tamper proof’ (Respondent 41). Respondent 42 said that ‘my signature can't be forged … their work can't be tampered with and more importantly that they aren't able to take advantage of staff members who can't be bothered to check their work and sign everything off for them’.\nClinical educators were cognisant that OLEs are a more secure environment which is ‘Tamper proof’ (Respondent 41). Respondent 42 said that ‘my signature can't be forged … their work can't be tampered with and more importantly that they aren't able to take advantage of staff members who can't be bothered to check their work and sign everything off for them’.", "The strongest theme was lack of access and availability of IT resources for using OLEs. Respondent 5 quoted the ‘lack of availability of portable devices with multiple users’ at their site. This was compounded by the fact that ‘Access to computer time can be limited in a clinical practice that utilises computers for clinical work’ Respondent 27.", "A large proportion of participants reported that lack of time is a major factor when using OLEs in clinical sites. Respondent 24 mentioned that the ‘time to concentrate on filling out reports without interruption’ is difficult to find. Respondent 8 commented that ‘It is still easier to have a piece of paper in front of your when you are in the process of assessing a student’. Other time factors specific to OLEs was the time required to retrieve passwords and the time it takes to login.", "Respondents reported both favourably and unfavourably under this theme. Respondent 12 mentioned that ‘Difficult to navigate systems make documentation difficult’. Five respondents commented that the design of PebblePad™ was not intuitive to use. Respondent 6 suggested that it was ‘not designed for the purpose and so is very unintuitive to use’. Conversely four respondents mentioned the ease of navigating the platform over paper. 
Respondents 21 and 8 found it much easier ‘Not having to flick through pages of student books to find the correct page’.\nThis equivocal response was reflected in the quantitative data where 47.9% of respondents agreed with the statement ‘I found PebblePad™ easy to learn how to use’, with a further 18.7% saying they neither agreed nor disagreed with the statement.\nSeveral respondents mentioned the capability for multiple users at different locations to be able to access the student's work as a significant advantage of the online platform. Respondent 11 mentioned that it was ‘Easy for all parties to access the online documents at the same time or at any time that it is convenient’. Respondent 41 furthered this, mentioning that ‘It is easier for me to check on the work at my own leisure without needing to chase anyone up or be told they left their book at home’.", "There was a diversity of responses under this theme. This major theme has been broken down into two ‘sub’ themes for clarity. 13/48, 27.1% of respondents mentioned one or other of these sub themes\nTracking of the documentation and;Tracking of student progress.\n\nTracking of the documentation and;\nTracking of student progress.\n Tracking documentation Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33).\nParticipants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33).\n Tracking of student progress 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25).\nThere was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’\n24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). 
This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25).\nThere was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’", "Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33).", "24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25).\nThere was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’", "Clinical educators were cognisant that OLEs are a more secure environment which is ‘Tamper proof’ (Respondent 41). Respondent 42 said that ‘my signature can't be forged … their work can't be tampered with and more importantly that they aren't able to take advantage of staff members who can't be bothered to check their work and sign everything off for them’.", "Due to current trends in tertiary education, clinical educators are under pressure to utilise OLEs, posing challenges for clinical mentors.8, 12 This aspect of OLEs, that is in clinical environments, therefore requires attention. This study provides insight into clinical educators’ attitudes towards technology, both in general and specifically in clinical education. It explores some of the challenges faced by clinical educators when implementing such technology in the clinical environment.\nIn the current era, it can be taken for granted that computers are almost ubiquitously available. The research highlighted that this is not the case in clinical environments. The extent to which it seems computer allocations are so sparse was surprising, it should be explored whether there are differences between metropolitan and rural sites in this regard. 
While it was acknowledged that access to computers might be an obstacle, during the PebblePad™ roll out (2014–2015) clinical educators were offered tablet computers to address this challenge. Only 12 clinical sites took up the offer at the time. Thus further strategies should be considered to allow clinical educators with educational roles access to computer resources. This could involve protected time on computers, allocation of computers for the purpose etc. This would need to be considered in close collaboration with the clinical partners and their institutions.\nBusy practitioners find themselves with time pressures and competing priorities when fulfilling teaching duties. There was mixed opinion whether the online environment aided efficient time management. For some, working online was quicker but for others, the necessity for passwords added to the time required making it more arduous than using paper versions. Disparity was also noted in how easy the educators found it to learn using PebblePad™ itself. There were some findings that could explain these differences. A ‘Lack of basic knowledge or skills’ (Respondent 41) and ‘My lack of IT skills as a mature radiographer’ (Respondent 33) were factors in adapting to OLEs. As Respondent 18 reported ‘Online can be more efficient when all involved have a certain level of competence’. This reinforces Chow, Herold, Choo & Chan's27 findings that training and support specific to technologies used in teaching is crucial for successful implementation. While some respondents mentioned that face to face training would be preferential, ‘I think that if you have not had any immediate instruction on how to use it, it can be a little difficult’ (Respondent 18), the geographical spread of clinical sites is challenging in this regard. Not all educators can attend hands on workshops on campus and the University staff required for in‐services is infeasible. It was reassuring that a higher proportion of respondents found they could implement PebblePad™ reasonably easily at their clinical sites (Table 6). This suggests that, despite the difficulties learning a new platform, educators were able to mitigate these during implementation.\nThe tracking of students’ progress within placements and over time was an important correlation with the vision for transitioning online. With paper workbooks, providing formative feedback on work in progress was difficult and it was also difficult to appreciate longitudinal development of skills and self‐efficacy in the students’ learning journey. The results from the study indicate that the objectives and vision for moving online has had some traction. As respondent 42 mentions, ‘Being able to actually view their work allows me to give them additional feedback in regards to their thoughts and what they write down’. There was an appreciation for the ability to track students’ progress over time rather than within single placements. This can help identify at risk students and assist educators to devise individual learning plans for those students while allowing appropriate follow‐up on student's progress.\nClinical educators found the online system more secure than paper workbooks. This is in keeping with what has been our own experience. Supervisors were confident that it was not possible for the students to tamper with the entries in the online environment. 
This is a positive of having password protection which can however add to the time pressure when using OLEs.\nRespondents displayed a positive attitude towards technology in general. However responses to statements addressing the perceptions of using technology for clinical learning suggest that the link between knowing about technology and using it for a specific purpose is not absolute. While 100% of respondents said that learning about technology is worthwhile and 91.7% ‘like using technology’, there was a much more lukewarm response as to whether they saw the value of using technology in their roles as clinical educators. This begs the question of what clinical partners perceive as the role of technology in clinical teaching and how to successfully use it in supporting students on clinical placement. As Klenowski28 pointed out, clinical teachers need to change their teaching styles to balance traditional didactic approaches with contemporary conceptualisations of learning using technology. Klenowski28 further states that if the pedagogical approach to technology for clinical teachers is not addressed, technical reductionist approaches that trivialise the process of learning can make the learning superficial.\nA further corroboration was that some respondents mentioned that they ‘see the students taking notes and then doubling up time by entering the information online’ Respondent 8. However this is a trend which evolved with paper workbooks and is not in keeping with the pedagogy of reflective practice. While the workbooks are the students’ primary submission they are expected to only carry a notebook with them on clinical rotations and following reflection on action, decide which cases will form the basis of their clinical portfolio (See * below).\n*‘Take notes/collect information continuously throughout placements in notebook or similar. Any new information regarding particular examinations should be written down in this book as it is effectively a learning outcome. When completing examinations on this region you can then implement this learning outcome and include in case write‐up’. \n(Excerpt from Clinical Studies Manual for Students,\n29\n2016, pp 38)\n\n\n\n\n*‘Take notes/collect information continuously throughout placements in notebook or similar. Any new information regarding particular examinations should be written down in this book as it is effectively a learning outcome. When completing examinations on this region you can then implement this learning outcome and include in case write‐up’. \n(Excerpt from Clinical Studies Manual for Students,\n29\n2016, pp 38)\n\n\nOne other finding worthy of mention was a comment from respondent 26, ‘a student may be at computer undertaking clinical documentation but is perceived by staff as being disinterested in clinical work’. This suggests that staff perceptions about the use of technology is manifest. As trends towards using technology in tertiary education accelerate this is a key finding. It is important for those who drive change to be aware of the landscape and perceptions of those at the coal face. These were deemed significant findings as it may be that there is a need to enculturate acceptance of technology, as well as teaching students about the appropriate use of technology on clinical placements.", "This pilot study establishes a baseline but more importantly paves the way for further research in this crucial area. 
Incongruence exists between attitudes to technology generally and perceptions of the usefulness of technology for clinical education. A larger study should be completed to fully examine the significance of this finding. Furthermore, while the research question aimed to examine the attitudes and experiences of clinical educators, it would be of great benefit to gain an understanding of the factors that influence clinical educators’ attitudes and experiences with technology in their environments. Factors such as the setting where the educators work and the nature of their appointments would be important to consider; however, these were beyond the scope of this study. Educators have varied supervisory arrangements and appointments in different locations, and it is feasible that these factors could influence their experiences and understanding of OLEs.", "Monash University was the first undergraduate radiography course in Australia to implement an OLE for all aspects of clinical placements. Clinical education is crucial in the education of radiography students and educators assume a key role in this. PebblePad™ is at the heart of the clinical programme, requiring significant input from clinical educators. This study serves to give voice to this important but underrepresented group, those clinical educators tasked with implementation of OLEs in clinical environments. It is imperative to understand the perceptions of clinical educators within their environments. As evidenced by the study, even with a positive attitude to technology, clinical environments have particular challenges. With technology taking a more central role in education, it is imperative that we understand how to ensure it is utilised to its full potential in all domains of education. Therefore, enculturating positive attitudes towards technology and associated pedagogical change is important. Training and support specific to OLEs is crucial for successful implementation. Partaking in this study will afford clinical educators an opportunity to reflect on their own experiences with technology in their roles and implementation strategies for future technological advances.", "The authors declare no conflict of interest." ]
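The scatter plot described in the Data analysis and Results sections above, plotting the sum of the eight general-attitude TAS items against the six clinical-education items, could be produced along the following lines. Again this is only a sketch under assumed names: the authors worked in SPSS™, and the file tas_scored.csv, the item columns and the mapping of item numbers to the two sub-scales are illustrative stand-ins rather than the study's actual variables.

```python
# Illustrative sketch of the sub-score scatter plot; the study used SPSS(TM).
# The file name, item columns and item-to-subscale mapping are assumptions.
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

scored = pd.read_csv("tas_scored.csv")  # hypothetical file of already reverse-scored items

general = scored[[f"q{i}" for i in range(1, 9)]].sum(axis=1)    # 8 general-attitude items
clinical = scored[[f"q{i}" for i in range(9, 15)]].sum(axis=1)  # 6 clinical-education items
r, p = stats.pearsonr(general, clinical)

fig, ax = plt.subplots(figsize=(5, 4))
ax.scatter(general, clinical, alpha=0.7)
ax.set_xlabel("Attitude towards technology generally (sum of 8 TAS items)")
ax.set_ylabel("Perceived use in clinical education (sum of 6 TAS items)")
ax.set_title(f"Pearson r = {r:.2f} (p = {p:.3f})")
fig.tight_layout()
fig.savefig("tas_scatter.png", dpi=150)
```

A plot of this kind makes the reported finding visible at a glance: a mildly positive correlation between the two sub-scores, but with enough scatter that a generally positive attitude to technology does not guarantee a positive view of its use in clinical education.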
[ null, null, "methods", null, null, null, "results", null, null, null, null, null, null, null, "discussion", null, "conclusions", "COI-statement" ]
[ "Attitude", "education", "educational technology", "medical", "radiography" ]
Introduction: In healthcare, there is ongoing flux in expectations for students and practitioners. There has been an explosion of technology and the volume of medical knowledge has increased exponentially.1, 2, 3 In Australia, the Medical Radiation Practice Board of Australia (MRPBA) professional capabilities framework4 obliges radiographers to embrace contemporary conceptualisations of competence. They must keep abreast of shifting expectations5, 6 while the profession struggles to shed the perception that it is fulfilling the role of technical operatives.1 Due to these shifting expectations and to assure the MRPBA that students meet registration requirements on graduation; it is imperative to establish integrated systems of monitoring and evidencing students’ development. With trends towards the use of technology in tertiary education7, 8, 9 online learning environments (OLEs) could fulfil this need. Within radiography, while course content is increasingly facilitated online10, 11, recording radiography students’ development during clinical rotations remains traditional.10 However with technological trends, clinical educators are increasingly pressured to utilise OLEs.12, 13 Introducing technology in clinical environments poses challenges for clinical educators as much as it does for students and teachers.12, 14 These range from; cost, security of the devices and the data on them and pedagogical challenges.13 Negative attitudes towards technology is a barrier to successful implementation of technology in clinical education.14, 15, 16, 17 While the attitudes of academic staff, clinical staff in non‐teaching roles and students to technology has been explored18, 19, 20, 21 much less is known about clinical educators’ perceptions of technology.13, 22 The Bachelor of Radiography and Medical Imaging (Honours) at Monash University is a 4‐year integrated academic and clinical course. Intake is approximately 80 students yearly. Students complete clinical placements each year. The degree provides a qualification allowing students to seek employment in Australia and worldwide. In 2014, PebblePad™ was introduced as a contemporary learning platform for clinical studies, replacing paper workbooks. PebblePad™ is a web‐based platform offering an array of tools to help students record evidence of their learning and reflect on their clinical experiences. PebblePad™ and its ePortfolio functionality provides students with a holistic and integrated learning experience with a focus on preparation for professional life. Students can continue to access the platform after graduation. The implementation of PebblePad™ requires significant input from clinical educators. Aims of the study The research aimed to: Examine clinical educators’ attitudes towards technology generally.Examine clinical educators’ perceptions of the use of technology in clinical education.Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. Examine clinical educators’ attitudes towards technology generally. Examine clinical educators’ perceptions of the use of technology in clinical education. Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. For the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool. 
The research aimed to: Examine clinical educators’ attitudes towards technology generally. Examine clinical educators’ perceptions of the use of technology in clinical education. Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. For the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool. Aims of the study: The research aimed to: Examine clinical educators’ attitudes towards technology generally. Examine clinical educators’ perceptions of the use of technology in clinical education. Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. For the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool. Methods: This mixed methods study reports on clinical supervisors, educators and tutors responsible for the supervision of Medical Imaging students on clinical placement. Herein, they will be referred to as ‘clinical educators’. Quantitative data meant we could measure the prevalence of the dimensions of the phenomenon we were exploring. Qualitative questions allow deeper exploration of a phenomenon where little is known about it, which was the case in this instance.23 An email invitation was sent by the Clinical Support Officer to clinical educators to participate. It included an explanatory statement and a link to the survey. Sites with minimal student engagement, for example only taking students occasionally, were excluded. This was done to keep the data clean, only collecting responses from educators who are au‐fait with course expectations.
This ‘reverse wording’ changes the direction of the scale by asking the same or similar questions in a positive and negative voice and adds to the validity of responses, reducing response sets and bias.25 The third section of the survey, modelled by the researchers, evaluated clinical educators’ experiences with the use of PebblePad™ in the clinical setting. There were two quantitative and two qualitative questions. The quantitative questions evaluated clinical educators’ experiences with using PebblePad™ in the clinical setting. These appraised how easy the clinical educators’ found it to learn the platform as well as how difficult the implementation was for them. One of the qualitative questions evaluated for perceived challenges using OLEs in a clinical setting. The other explored what clinical educators consider the advantages of OLEs over paper‐based documentation. Data collection was conducted between May and July 2017 using an online survey through Qualtrics™. The decision to complete the anonymous questionnaire ultimately rested with respondents. The first section of the survey collected demographic data. The second section provided quantitative data. This was minimally modified from a validated Technology Attitude Survey (TAS). Mc Farlane et al24 tested the TAS for reliability and validity to appraise teachers’ attitudes towards technology. The TAS was modified by Maag17 for the student nursing setting and likewise found adequate reliability for the tool. Permission was granted from Maag to modify the TAS for this study. The TAS was adapted to ensure it was fit for purpose. One statement, ‘Technology makes me feel stupid’, was removed as it was irrelevant. ‘Nursing student’ was replaced by ‘clinical educator’ to represent the audience. The modified TAS contained 14 items with 5‐point Likert scale responses. Questions were based on positively and negatively geared statements portraying positive and negative attitudes towards technology. This ‘reverse wording’ changes the direction of the scale by asking the same or similar questions in a positive and negative voice and adds to the validity of responses, reducing response sets and bias.25 The third section of the survey, modelled by the researchers, evaluated clinical educators’ experiences with the use of PebblePad™ in the clinical setting. There were two quantitative and two qualitative questions. The quantitative questions evaluated clinical educators’ experiences with using PebblePad™ in the clinical setting. These appraised how easy the clinical educators’ found it to learn the platform as well as how difficult the implementation was for them. One of the qualitative questions evaluated for perceived challenges using OLEs in a clinical setting. The other explored what clinical educators consider the advantages of OLEs over paper‐based documentation. Data analysis Quantitative data were initially explored using statistical descriptive analyses in SPSS™. For the purpose of data analysis the ordinal scale responses to positively geared question response were scored (5‐Strongly agree, 4‐Agree, 3‐Neither agree nor disagree, 2‐Disagree, 1‐Strongly disagree). Negatively geared statements were reverse scored. An independent samples t‐test was used to determine whether location, rural or remote, was significant in influencing attitudes towards technology. All the quantitative data, the TAS and PebblePad™ questions, were included in this analysis. 
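To make the scoring procedure just described concrete, the following is a minimal sketch of how the 5-point Likert responses could be scored, reverse scored and compared by location in Python with pandas and SciPy. It is an illustrative reconstruction only, not the authors' SPSS workflow: the column names (item_1 … item_14, location) and the choice of which items are negatively geared are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical column names: item_1 ... item_14 hold raw 1-5 Likert responses
# (1 = Strongly disagree ... 5 = Strongly agree); 'location' is coded
# 'metropolitan' or 'rural'. Which items are negatively geared is assumed
# here purely for illustration.
NEGATIVE_ITEMS = ["item_3", "item_7", "item_11"]  # assumed, not from the paper

def score_tas(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy with negatively geared items reverse scored (1<->5, 2<->4)."""
    scored = df.copy()
    for col in NEGATIVE_ITEMS:
        scored[col] = 6 - scored[col]  # reverse the 5-point scale
    return scored

def compare_locations(scored: pd.DataFrame):
    """Independent samples t-test on total TAS score by location."""
    item_cols = [c for c in scored.columns if c.startswith("item_")]
    totals = scored[item_cols].sum(axis=1)
    metro = totals[scored["location"] == "metropolitan"]
    rural = totals[scored["location"] == "rural"]
    # Welch's variant shown; the paper does not state which variance assumption was used.
    t_stat, p_value = stats.ttest_ind(metro, rural, equal_var=False)
    return t_stat, p_value
```

Reverse scoring with 6 − x maps 1↔5 and 2↔4 on the 5-point scale, so after scoring a higher total consistently indicates a more positive attitude regardless of how the item was worded.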
A Pearson's’ correlation coefficient was computed on the TAS questions to assess the relationship between clinical educators’ attitudes towards technology generally and their perceptions of its use in their role as educators. A scatter plot summarises the results to illuminate the interplay between the two variables. To conduct this analysis the eight TAS questions relating to technology generally were considered a single variable vs. the six TAS questions relating to the perception of the use of technology in clinical education. The sum of mean scores were calculated, computed and plotted in SPSS™. Thematic analysis, using Braun and Clarke's26 method, was used to interpret the qualitative data. It provides a rich and detailed interpretation and there is an emphasis on reflexive dialogue which the researcher engages in throughout the process before reporting the themes. It involves a six‐step approach (Table 1) and the researcher moves forward and back between the steps as many times as required to make sense of the data.26 Braun and Clarke's six phases for thematic analysis.26 Quantitative data were initially explored using statistical descriptive analyses in SPSS™. For the purpose of data analysis the ordinal scale responses to positively geared question response were scored (5‐Strongly agree, 4‐Agree, 3‐Neither agree nor disagree, 2‐Disagree, 1‐Strongly disagree). Negatively geared statements were reverse scored. An independent samples t‐test was used to determine whether location, rural or remote, was significant in influencing attitudes towards technology. All the quantitative data, the TAS and PebblePad™ questions, were included in this analysis. A Pearson's’ correlation coefficient was computed on the TAS questions to assess the relationship between clinical educators’ attitudes towards technology generally and their perceptions of its use in their role as educators. A scatter plot summarises the results to illuminate the interplay between the two variables. To conduct this analysis the eight TAS questions relating to technology generally were considered a single variable vs. the six TAS questions relating to the perception of the use of technology in clinical education. The sum of mean scores were calculated, computed and plotted in SPSS™. Thematic analysis, using Braun and Clarke's26 method, was used to interpret the qualitative data. It provides a rich and detailed interpretation and there is an emphasis on reflexive dialogue which the researcher engages in throughout the process before reporting the themes. It involves a six‐step approach (Table 1) and the researcher moves forward and back between the steps as many times as required to make sense of the data.26 Braun and Clarke's six phases for thematic analysis.26 Ethics approval Ethics approval was granted by the Monash University Human Research Ethics Committee, project number 0197. Ethics approval was granted by the Monash University Human Research Ethics Committee, project number 0197. Data collection: Data collection was conducted between May and July 2017 using an online survey through Qualtrics™. The decision to complete the anonymous questionnaire ultimately rested with respondents. The first section of the survey collected demographic data. The second section provided quantitative data. This was minimally modified from a validated Technology Attitude Survey (TAS). Mc Farlane et al24 tested the TAS for reliability and validity to appraise teachers’ attitudes towards technology. 
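The correlation step described in the Data analysis passage above — combining the eight general-attitude items and the six clinical-education items into two composite scores, computing Pearson's r and plotting the pair — could be sketched as follows. Again this is only an illustration under assumed item groupings and column names, not the study's actual SPSS procedure.

```python
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

# Assumed grouping of the 14 TAS items into the two composites described in
# the text: eight items about technology generally and six about its use in
# clinical education. The actual item numbers are not specified here.
GENERAL_ITEMS = [f"item_{i}" for i in range(1, 9)]    # assumed items 1-8
CLINICAL_ITEMS = [f"item_{i}" for i in range(9, 15)]  # assumed items 9-14

def correlate_composites(scored: pd.DataFrame) -> float:
    """Pearson's r between the two composite scores, with a scatter plot."""
    general = scored[GENERAL_ITEMS].mean(axis=1)
    clinical = scored[CLINICAL_ITEMS].mean(axis=1)
    r, p = stats.pearsonr(general, clinical)

    plt.scatter(general, clinical)
    plt.xlabel("Attitude to technology generally (mean score)")
    plt.ylabel("Perceived use in clinical education (mean score)")
    plt.title(f"Pearson's r = {r:.2f} (p = {p:.3f})")
    plt.show()
    return r
```

Using the mean of each item group rather than the raw sum keeps both composites on the same 1–5 scale, which makes the scatter plot easier to read; Pearson's r itself is unaffected by this linear rescaling.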
The TAS was modified by Maag17 for the student nursing setting and likewise found adequate reliability for the tool. Permission was granted from Maag to modify the TAS for this study. The TAS was adapted to ensure it was fit for purpose. One statement, ‘Technology makes me feel stupid’, was removed as it was irrelevant. ‘Nursing student’ was replaced by ‘clinical educator’ to represent the audience. The modified TAS contained 14 items with 5‐point Likert scale responses. Questions were based on positively and negatively geared statements portraying positive and negative attitudes towards technology. This ‘reverse wording’ changes the direction of the scale by asking the same or similar questions in a positive and negative voice and adds to the validity of responses, reducing response sets and bias.25 The third section of the survey, modelled by the researchers, evaluated clinical educators’ experiences with the use of PebblePad™ in the clinical setting. There were two quantitative and two qualitative questions. The quantitative questions evaluated clinical educators’ experiences with using PebblePad™ in the clinical setting. These appraised how easy the clinical educators’ found it to learn the platform as well as how difficult the implementation was for them. One of the qualitative questions evaluated for perceived challenges using OLEs in a clinical setting. The other explored what clinical educators consider the advantages of OLEs over paper‐based documentation. Data analysis: Quantitative data were initially explored using statistical descriptive analyses in SPSS™. For the purpose of data analysis the ordinal scale responses to positively geared question response were scored (5‐Strongly agree, 4‐Agree, 3‐Neither agree nor disagree, 2‐Disagree, 1‐Strongly disagree). Negatively geared statements were reverse scored. An independent samples t‐test was used to determine whether location, rural or remote, was significant in influencing attitudes towards technology. All the quantitative data, the TAS and PebblePad™ questions, were included in this analysis. A Pearson's’ correlation coefficient was computed on the TAS questions to assess the relationship between clinical educators’ attitudes towards technology generally and their perceptions of its use in their role as educators. A scatter plot summarises the results to illuminate the interplay between the two variables. To conduct this analysis the eight TAS questions relating to technology generally were considered a single variable vs. the six TAS questions relating to the perception of the use of technology in clinical education. The sum of mean scores were calculated, computed and plotted in SPSS™. Thematic analysis, using Braun and Clarke's26 method, was used to interpret the qualitative data. It provides a rich and detailed interpretation and there is an emphasis on reflexive dialogue which the researcher engages in throughout the process before reporting the themes. It involves a six‐step approach (Table 1) and the researcher moves forward and back between the steps as many times as required to make sense of the data.26 Braun and Clarke's six phases for thematic analysis.26 Ethics approval: Ethics approval was granted by the Monash University Human Research Ethics Committee, project number 0197. Results: From a pool of 74 possible participants, 49 surveys were returned. One survey was incomplete and excluded from the data analysis. This constitutes a response rate of 48/74, 65%. 
There was a 100% response rate within the surveys to both the qualitative and quantitative components. The majority of respondents worked in Metropolitan sites (37/48, 77.1%). 39/48, 81.3% worked at a single clinical site. Medium sized sites were best represented 24/48, 50% with a relatively even mix of large 14/48, 29.2% and small 15/48, 31.3% sites represented. A majority of respondents worked in private imaging 29/48, 60.4% (Table 2). Participant demographic characteristics Multiple responses possible. Using the modified TAS, participants were asked eight questions based on their attitudes towards technology generally (Table 3). These appraise personal feelings when using technology such as confidence, nervousness and perceived importance and level of difficulty in learning new technologies. Six questions were based on their perceptions of the use of technology in clinical education (Table 4). A general positive attitude towards technology was evident. A Pearson's r showed a mildly positive correlation between attitudes towards technology and an appreciation for the benefit of technology to the role of clinical educator (Table 5). A scatter plot illustrates that the relationship between them is not absolute (Fig. 1). Descriptive analysis, attitudes to technology Descriptive analysis, perceived use of technology in clinical education Pearson's correlation: attitudes to technology and its perceived use in clinical education Correlation of attitudes towards technology versus the perceived usefulness in clinical education. Two questions related specifically to the PebblePad™ platform (Q19 and 20). These were aimed at appraising how easy the clinical educators found it to learn how to use the platform and how difficult the implementation of it in a clinical environment proved. There was a large spread in these responses (Table 6). 33.2% found PebblePad™ easy to learn how to use with 47.9% finding that it was not easy. Furthermore implementing the OLE in the clinical site was challenging for 35.4% of respondents, with only 37.5% finding it easy. This correlates with what Nairn et al12 cite as challenges facing clinical educators surrounding OLEs. Learning how to use PebblePad™ and implementation of PebblePad™ in a clinical environment No significant difference was noted between rural or remote locations (Table 7). TAS and rural and metropolitan location In keeping with Braun and Clarke's26 thematic analysis, the qualitative data were coded with five themes identified. Theme 1: availability of IT resources (hardware) The strongest theme was lack of access and availability of IT resources for using OLEs. Respondent 5 quoted the ‘lack of availability of portable devices with multiple users’ at their site. This was compounded by the fact that ‘Access to computer time can be limited in a clinical practice that utilises computers for clinical work’ Respondent 27. The strongest theme was lack of access and availability of IT resources for using OLEs. Respondent 5 quoted the ‘lack of availability of portable devices with multiple users’ at their site. This was compounded by the fact that ‘Access to computer time can be limited in a clinical practice that utilises computers for clinical work’ Respondent 27. Theme 2: time resource in clinical environment A large proportion of participants reported that lack of time is a major factor when using OLEs in clinical sites. Respondent 24 mentioned that the ‘time to concentrate on filling out reports without interruption’ is difficult to find. 
Respondent 8 commented that ‘It is still easier to have a piece of paper in front of your when you are in the process of assessing a student’. Other time factors specific to OLEs was the time required to retrieve passwords and the time it takes to login. A large proportion of participants reported that lack of time is a major factor when using OLEs in clinical sites. Respondent 24 mentioned that the ‘time to concentrate on filling out reports without interruption’ is difficult to find. Respondent 8 commented that ‘It is still easier to have a piece of paper in front of your when you are in the process of assessing a student’. Other time factors specific to OLEs was the time required to retrieve passwords and the time it takes to login. Theme 3: platform design Respondents reported both favourably and unfavourably under this theme. Respondent 12 mentioned that ‘Difficult to navigate systems make documentation difficult’. Five respondents commented that the design of PebblePad™ was not intuitive to use. Respondent 6 suggested that it was ‘not designed for the purpose and so is very unintuitive to use’. Conversely four respondents mentioned the ease of navigating the platform over paper. Respondents 21 and 8 found it much easier ‘Not having to flick through pages of student books to find the correct page’. This equivocal response was reflected in the quantitative data where 47.9% of respondents agreed with the statement ‘I found PebblePad™ easy to learn how to use’, with a further 18.7% saying they neither agreed nor disagreed with the statement. Several respondents mentioned the capability for multiple users at different locations to be able to access the student's work as a significant advantage of the online platform. Respondent 11 mentioned that it was ‘Easy for all parties to access the online documents at the same time or at any time that it is convenient’. Respondent 41 furthered this, mentioning that ‘It is easier for me to check on the work at my own leisure without needing to chase anyone up or be told they left their book at home’. Respondents reported both favourably and unfavourably under this theme. Respondent 12 mentioned that ‘Difficult to navigate systems make documentation difficult’. Five respondents commented that the design of PebblePad™ was not intuitive to use. Respondent 6 suggested that it was ‘not designed for the purpose and so is very unintuitive to use’. Conversely four respondents mentioned the ease of navigating the platform over paper. Respondents 21 and 8 found it much easier ‘Not having to flick through pages of student books to find the correct page’. This equivocal response was reflected in the quantitative data where 47.9% of respondents agreed with the statement ‘I found PebblePad™ easy to learn how to use’, with a further 18.7% saying they neither agreed nor disagreed with the statement. Several respondents mentioned the capability for multiple users at different locations to be able to access the student's work as a significant advantage of the online platform. Respondent 11 mentioned that it was ‘Easy for all parties to access the online documents at the same time or at any time that it is convenient’. Respondent 41 furthered this, mentioning that ‘It is easier for me to check on the work at my own leisure without needing to chase anyone up or be told they left their book at home’. Theme 4: tracking of progress/documentation There was a diversity of responses under this theme. This major theme has been broken down into two ‘sub’ themes for clarity. 
13/48, 27.1% of respondents mentioned one or other of these sub themes Tracking of the documentation and;Tracking of student progress. Tracking of the documentation and; Tracking of student progress. Tracking documentation Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Tracking of student progress 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ There was a diversity of responses under this theme. This major theme has been broken down into two ‘sub’ themes for clarity. 13/48, 27.1% of respondents mentioned one or other of these sub themes Tracking of the documentation and;Tracking of student progress. Tracking of the documentation and; Tracking of student progress. 
Tracking documentation Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Tracking of student progress 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ Theme 5: increased security Clinical educators were cognisant that OLEs are a more secure environment which is ‘Tamper proof’ (Respondent 41). Respondent 42 said that ‘my signature can't be forged … their work can't be tampered with and more importantly that they aren't able to take advantage of staff members who can't be bothered to check their work and sign everything off for them’. Clinical educators were cognisant that OLEs are a more secure environment which is ‘Tamper proof’ (Respondent 41). 
Respondent 42 said that ‘my signature can't be forged … their work can't be tampered with and more importantly that they aren't able to take advantage of staff members who can't be bothered to check their work and sign everything off for them’. Theme 1: availability of IT resources (hardware): The strongest theme was lack of access and availability of IT resources for using OLEs. Respondent 5 quoted the ‘lack of availability of portable devices with multiple users’ at their site. This was compounded by the fact that ‘Access to computer time can be limited in a clinical practice that utilises computers for clinical work’ Respondent 27. Theme 2: time resource in clinical environment: A large proportion of participants reported that lack of time is a major factor when using OLEs in clinical sites. Respondent 24 mentioned that the ‘time to concentrate on filling out reports without interruption’ is difficult to find. Respondent 8 commented that ‘It is still easier to have a piece of paper in front of your when you are in the process of assessing a student’. Other time factors specific to OLEs was the time required to retrieve passwords and the time it takes to login. Theme 3: platform design: Respondents reported both favourably and unfavourably under this theme. Respondent 12 mentioned that ‘Difficult to navigate systems make documentation difficult’. Five respondents commented that the design of PebblePad™ was not intuitive to use. Respondent 6 suggested that it was ‘not designed for the purpose and so is very unintuitive to use’. Conversely four respondents mentioned the ease of navigating the platform over paper. Respondents 21 and 8 found it much easier ‘Not having to flick through pages of student books to find the correct page’. This equivocal response was reflected in the quantitative data where 47.9% of respondents agreed with the statement ‘I found PebblePad™ easy to learn how to use’, with a further 18.7% saying they neither agreed nor disagreed with the statement. Several respondents mentioned the capability for multiple users at different locations to be able to access the student's work as a significant advantage of the online platform. Respondent 11 mentioned that it was ‘Easy for all parties to access the online documents at the same time or at any time that it is convenient’. Respondent 41 furthered this, mentioning that ‘It is easier for me to check on the work at my own leisure without needing to chase anyone up or be told they left their book at home’. Theme 4: tracking of progress/documentation: There was a diversity of responses under this theme. This major theme has been broken down into two ‘sub’ themes for clarity. 13/48, 27.1% of respondents mentioned one or other of these sub themes Tracking of the documentation and;Tracking of student progress. Tracking of the documentation and; Tracking of student progress. Tracking documentation Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). 
Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Tracking of student progress 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). ‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ Tracking documentation: Participants mentioned that with an OLE one ‘can track documentation’ (Respondent 12) with ‘No misfiling or misplacement of paper documents’ (Respondent 39). Respondent 7 mentioned that having students’ work stored online allows ‘Easy storage and retrievals’ of documentation. Others mentioned that important documentation such as ‘Assessment and course structure guidelines for tutors are more available online’ (Respondent 33). Tracking of student progress: 24/48, 50% of respondents suggested that ‘there is a record of when and where assessments were performed and also no loss of information’ (Respondent 23). This allowed for ‘Real time checking of student progress’ (Respondent 26), stakeholders have ‘immediate access to the student's progress and my reports and can see quickly if the student is entering all the clinical requirements at the appropriate times’ (Respondent 25). There was also a strong sense for longitudinal development. The online system made it ‘Easy to review and compare to previous work where appropriate’ (Respondent 40). 
‘Sharing information across sites/placements allows for more adequate follow‐up on student's progress’ (Respondent 46) allowing for ‘more permanent record of progress in time relevance – make progress tracking easier’ (Respondent 47). Respondent 27 went on to record that the online version is a ‘Permanent record’ which could actually ‘be maintained and continually updated throughout [a radiographer's] career’ Theme 5: increased security: Clinical educators were cognisant that OLEs are a more secure environment which is ‘Tamper proof’ (Respondent 41). Respondent 42 said that ‘my signature can't be forged … their work can't be tampered with and more importantly that they aren't able to take advantage of staff members who can't be bothered to check their work and sign everything off for them’. Discussion: Due to current trends in tertiary education, clinical educators are under pressure to utilise OLEs, posing challenges for clinical mentors.8, 12 This aspect of OLEs, that is in clinical environments, therefore requires attention. This study provides insight into clinical educators’ attitudes towards technology, both in general and specifically in clinical education. It explores some of the challenges faced by clinical educators when implementing such technology in the clinical environment. In the current era, it can be taken for granted that computers are almost ubiquitously available. The research highlighted that this is not the case in clinical environments. The extent to which it seems computer allocations are so sparse was surprising, it should be explored whether there are differences between metropolitan and rural sites in this regard. While it was acknowledged that access to computers might be an obstacle, during the PebblePad™ roll out (2014–2015) clinical educators were offered tablet computers to address this challenge. Only 12 clinical sites took up the offer at the time. Thus further strategies should be considered to allow clinical educators with educational roles access to computer resources. This could involve protected time on computers, allocation of computers for the purpose etc. This would need to be considered in close collaboration with the clinical partners and their institutions. Busy practitioners find themselves with time pressures and competing priorities when fulfilling teaching duties. There was mixed opinion whether the online environment aided efficient time management. For some, working online was quicker but for others, the necessity for passwords added to the time required making it more arduous than using paper versions. Disparity was also noted in how easy the educators found it to learn using PebblePad™ itself. There were some findings that could explain these differences. A ‘Lack of basic knowledge or skills’ (Respondent 41) and ‘My lack of IT skills as a mature radiographer’ (Respondent 33) were factors in adapting to OLEs. As Respondent 18 reported ‘Online can be more efficient when all involved have a certain level of competence’. This reinforces Chow, Herold, Choo & Chan's27 findings that training and support specific to technologies used in teaching is crucial for successful implementation. While some respondents mentioned that face to face training would be preferential, ‘I think that if you have not had any immediate instruction on how to use it, it can be a little difficult’ (Respondent 18), the geographical spread of clinical sites is challenging in this regard. 
Not all educators can attend hands on workshops on campus and the University staff required for in‐services is infeasible. It was reassuring that a higher proportion of respondents found they could implement PebblePad™ reasonably easily at their clinical sites (Table 6). This suggests that, despite the difficulties learning a new platform, educators were able to mitigate these during implementation. The tracking of students’ progress within placements and over time was an important correlation with the vision for transitioning online. With paper workbooks, providing formative feedback on work in progress was difficult and it was also difficult to appreciate longitudinal development of skills and self‐efficacy in the students’ learning journey. The results from the study indicate that the objectives and vision for moving online has had some traction. As respondent 42 mentions, ‘Being able to actually view their work allows me to give them additional feedback in regards to their thoughts and what they write down’. There was an appreciation for the ability to track students’ progress over time rather than within single placements. This can help identify at risk students and assist educators to devise individual learning plans for those students while allowing appropriate follow‐up on student's progress. Clinical educators found the online system more secure than paper workbooks. This is in keeping with what has been our own experience. Supervisors were confident that it was not possible for the students to tamper with the entries in the online environment. This is a positive of having password protection which can however add to the time pressure when using OLEs. Respondents displayed a positive attitude towards technology in general. However responses to statements addressing the perceptions of using technology for clinical learning suggest that the link between knowing about technology and using it for a specific purpose is not absolute. While 100% of respondents said that learning about technology is worthwhile and 91.7% ‘like using technology’, there was a much more lukewarm response as to whether they saw the value of using technology in their roles as clinical educators. This begs the question of what clinical partners perceive as the role of technology in clinical teaching and how to successfully use it in supporting students on clinical placement. As Klenowski28 pointed out, clinical teachers need to change their teaching styles to balance traditional didactic approaches with contemporary conceptualisations of learning using technology. Klenowski28 further states that if the pedagogical approach to technology for clinical teachers is not addressed, technical reductionist approaches that trivialise the process of learning can make the learning superficial. A further corroboration was that some respondents mentioned that they ‘see the students taking notes and then doubling up time by entering the information online’ Respondent 8. However this is a trend which evolved with paper workbooks and is not in keeping with the pedagogy of reflective practice. While the workbooks are the students’ primary submission they are expected to only carry a notebook with them on clinical rotations and following reflection on action, decide which cases will form the basis of their clinical portfolio (See * below). *‘Take notes/collect information continuously throughout placements in notebook or similar. 
Any new information regarding particular examinations should be written down in this book as it is effectively a learning outcome. When completing examinations on this region you can then implement this learning outcome and include in case write‐up’. (Excerpt from Clinical Studies Manual for Students, 29 2016, pp 38) One other finding worthy of mention was a comment from respondent 26, ‘a student may be at computer undertaking clinical documentation but is perceived by staff as being disinterested in clinical work’. This suggests that staff perceptions about the use of technology are manifest. As trends towards using technology in tertiary education accelerate, this is a key finding. It is important for those who drive change to be aware of the landscape and perceptions of those at the coal face. These were deemed significant findings, as there may be a need to enculturate acceptance of technology, as well as teaching students about the appropriate use of technology on clinical placements. Implications for future research: This pilot study establishes a baseline but more importantly paves the way for further research in this crucial area. Incongruence exists between attitudes to technology generally and perceptions of the usefulness of technology for clinical education. A larger study should be completed to fully examine the significance of this finding. Furthermore, while the research question aimed to examine the attitudes and experiences of clinical educators, it would be of great benefit to gain an understanding of the factors that influence clinical educators’ attitudes and experiences with technology in their environments. Factors such as the setting where the educators work and the nature of their appointments would be important to consider; however, they were beyond the scope of this study. Educators have varied supervisory arrangements and appointments in different locations, and it is feasible that these factors could influence their experiences and understanding of OLEs. Conclusion: Monash University was the first undergraduate radiography course in Australia to implement an OLE for all aspects of clinical placements. Clinical education is crucial in the education of Radiography students, and educators assume a key role in this. PebblePad™ is at the heart of the clinical programme, requiring significant input from clinical educators. This study serves to give voice to this important but under-represented group, those clinical educators tasked with implementation of OLEs in clinical environments. It is imperative to understand the perceptions of clinical educators within their environments. As evidenced by the study, even with a positive attitude to technology, clinical environments have particular challenges. With technology taking a more central role in education, it is imperative that we understand how to ensure it is utilised to its full potential in all domains of education. Therefore, enculturating positive attitudes towards technology and associated pedagogical change is important. Training and support specific to OLEs is crucial for successful implementation. 
Partaking in this study will afford clinical educators an opportunity to reflect on their own experiences with technology in their roles and on implementation strategies for future technological advances. Conflict of Interest: The authors declare no conflict of interest.
Background: In healthcare, there is ongoing flux in expectations for students and practitioners. Establishing integrated systems of monitoring and evidencing students' development is imperative. With current trends towards the use of technology in tertiary education, online learning environments (OLEs) could constitute more effective evidencing of student progress in the clinical environment. However, there is little research exploring clinical educators' experiences with implementing technology in clinical education. The research aimed to: Examine clinical educators' attitudes towards technology and its use in clinical education. Explore clinical educators' experiences of implementing technologies in a clinical environment. Methods: A mixed methods approach was taken to explore the aims. A previously validated technology attitude survey (TAS) was used with slight modifications, as well as open-ended qualitative responses. These explored clinical educators' experiences of the implementation of one specific OLE (PebblePad™) in their clinical environments. The survey was sent to clinical educators involved in the supervision of Medical Imaging students on clinical placement. Results: Clinical educators play pivotal roles in students' professional development and, given current trends in tertiary education, are under increasing pressure to utilise OLEs. This poses particular challenges in clinical environments. Irrespective of the challenges, successful implementation of technology in any environment is dependent on the attitudes of the users. Conclusions: Clinical environments have specific challenges when implementing technology such as access to computers and time constraints on practitioners. Even with positive attitudes towards technology, a change in pedagogical outlook when using technology in clinical teaching is necessary.
Introduction: In healthcare, there is ongoing flux in expectations for students and practitioners. There has been an explosion of technology and the volume of medical knowledge has increased exponentially.1, 2, 3 In Australia, the Medical Radiation Practice Board of Australia (MRPBA) professional capabilities framework4 obliges radiographers to embrace contemporary conceptualisations of competence. They must keep abreast of shifting expectations5, 6 while the profession struggles to shed the perception that it is fulfilling the role of technical operatives.1 Due to these shifting expectations and to assure the MRPBA that students meet registration requirements on graduation; it is imperative to establish integrated systems of monitoring and evidencing students’ development. With trends towards the use of technology in tertiary education7, 8, 9 online learning environments (OLEs) could fulfil this need. Within radiography, while course content is increasingly facilitated online10, 11, recording radiography students’ development during clinical rotations remains traditional.10 However with technological trends, clinical educators are increasingly pressured to utilise OLEs.12, 13 Introducing technology in clinical environments poses challenges for clinical educators as much as it does for students and teachers.12, 14 These range from; cost, security of the devices and the data on them and pedagogical challenges.13 Negative attitudes towards technology is a barrier to successful implementation of technology in clinical education.14, 15, 16, 17 While the attitudes of academic staff, clinical staff in non‐teaching roles and students to technology has been explored18, 19, 20, 21 much less is known about clinical educators’ perceptions of technology.13, 22 The Bachelor of Radiography and Medical Imaging (Honours) at Monash University is a 4‐year integrated academic and clinical course. Intake is approximately 80 students yearly. Students complete clinical placements each year. The degree provides a qualification allowing students to seek employment in Australia and worldwide. In 2014, PebblePad™ was introduced as a contemporary learning platform for clinical studies, replacing paper workbooks. PebblePad™ is a web‐based platform offering an array of tools to help students record evidence of their learning and reflect on their clinical experiences. PebblePad™ and its ePortfolio functionality provides students with a holistic and integrated learning experience with a focus on preparation for professional life. Students can continue to access the platform after graduation. The implementation of PebblePad™ requires significant input from clinical educators. Aims of the study The research aimed to: Examine clinical educators’ attitudes towards technology generally.Examine clinical educators’ perceptions of the use of technology in clinical education.Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. Examine clinical educators’ attitudes towards technology generally. Examine clinical educators’ perceptions of the use of technology in clinical education. Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. For the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool. 
The research aimed to: Examine clinical educators’ attitudes towards technology generally. Examine clinical educators’ perceptions of the use of technology in clinical education. Explore clinical educators’ experiences of implementing PebblePad™ in their clinical environment. For the purpose of this study, ‘technology’ refers to computers, software, hardware and use of the Internet, in keeping with Maag's study17 using a similar survey tool. Conclusion: Monash University was the first undergraduate radiography course in Australia to implement an OLE for all aspects of clinical placements. Clinical education is crucial in the education of Radiography students, and educators assume a key role in this. PebblePad™ is at the heart of the clinical programme, requiring significant input from clinical educators. This study serves to give voice to this important but under-represented group, those clinical educators tasked with implementation of OLEs in clinical environments. It is imperative to understand the perceptions of clinical educators within their environments. As evidenced by the study, even with a positive attitude to technology, clinical environments have particular challenges. With technology taking a more central role in education, it is imperative that we understand how to ensure it is utilised to its full potential in all domains of education. Therefore, enculturating positive attitudes towards technology and associated pedagogical change is important. Training and support specific to OLEs is crucial for successful implementation. Partaking in this study will afford clinical educators an opportunity to reflect on their own experiences with technology in their roles and on implementation strategies for future technological advances.
Background: In healthcare, there is ongoing flux in expectations for students and practitioners. Establishing integrated systems of monitoring and evidencing students' development is imperative. With current trends towards the use of technology in tertiary education, online learning environments (OLEs) could constitute more effective evidencing of student progress in the clinical environment. However, there is little research exploring clinical educators' experiences with implementing technology in clinical education. The research aimed to: Examine clinical educators' attitudes towards technology and its use in clinical education. Explore clinical educators' experiences of implementing technologies in a clinical environment. Methods: A mixed methods approach was taken to explore the aims. A previously validated technology attitude survey (TAS) was used with slight modifications, as well as open-ended qualitative responses. These explored clinical educators' experiences of the implementation of one specific OLE (PebblePad™) in their clinical environments. The survey was sent to clinical educators involved in the supervision of Medical Imaging students on clinical placement. Results: Clinical educators play pivotal roles in students' professional development and, given current trends in tertiary education, are under increasing pressure to utilise OLEs. This poses particular challenges in clinical environments. Irrespective of the challenges, successful implementation of technology in any environment is dependent on the attitudes of the users. Conclusions: Clinical environments have specific challenges when implementing technology such as access to computers and time constraints on practitioners. Even with positive attitudes towards technology, a change in pedagogical outlook when using technology in clinical teaching is necessary.
8,731
295
[ 672, 119, 340, 286, 17, 64, 94, 244, 613, 76, 193, 72, 151 ]
18
[ "clinical", "respondent", "technology", "educators", "clinical educators", "student", "progress", "time", "online", "mentioned" ]
[ "radiography course australia", "recording radiography students", "imaging students clinical", "obliges radiographers embrace", "technology clinical learning" ]
[CONTENT] Attitude | education | educational technology | medical | radiography [SUMMARY]
[CONTENT] Attitude | education | educational technology | medical | radiography [SUMMARY]
[CONTENT] Attitude | education | educational technology | medical | radiography [SUMMARY]
[CONTENT] Attitude | education | educational technology | medical | radiography [SUMMARY]
[CONTENT] Attitude | education | educational technology | medical | radiography [SUMMARY]
[CONTENT] Attitude | education | educational technology | medical | radiography [SUMMARY]
[CONTENT] Attitude to Computers | Documentation | Education, Medical | Humans | Information Technology | Surveys and Questionnaires | Technology [SUMMARY]
[CONTENT] Attitude to Computers | Documentation | Education, Medical | Humans | Information Technology | Surveys and Questionnaires | Technology [SUMMARY]
[CONTENT] Attitude to Computers | Documentation | Education, Medical | Humans | Information Technology | Surveys and Questionnaires | Technology [SUMMARY]
[CONTENT] Attitude to Computers | Documentation | Education, Medical | Humans | Information Technology | Surveys and Questionnaires | Technology [SUMMARY]
[CONTENT] Attitude to Computers | Documentation | Education, Medical | Humans | Information Technology | Surveys and Questionnaires | Technology [SUMMARY]
[CONTENT] Attitude to Computers | Documentation | Education, Medical | Humans | Information Technology | Surveys and Questionnaires | Technology [SUMMARY]
[CONTENT] radiography course australia | recording radiography students | imaging students clinical | obliges radiographers embrace | technology clinical learning [SUMMARY]
[CONTENT] radiography course australia | recording radiography students | imaging students clinical | obliges radiographers embrace | technology clinical learning [SUMMARY]
[CONTENT] radiography course australia | recording radiography students | imaging students clinical | obliges radiographers embrace | technology clinical learning [SUMMARY]
[CONTENT] radiography course australia | recording radiography students | imaging students clinical | obliges radiographers embrace | technology clinical learning [SUMMARY]
[CONTENT] radiography course australia | recording radiography students | imaging students clinical | obliges radiographers embrace | technology clinical learning [SUMMARY]
[CONTENT] radiography course australia | recording radiography students | imaging students clinical | obliges radiographers embrace | technology clinical learning [SUMMARY]
[CONTENT] clinical | respondent | technology | educators | clinical educators | student | progress | time | online | mentioned [SUMMARY]
[CONTENT] clinical | respondent | technology | educators | clinical educators | student | progress | time | online | mentioned [SUMMARY]
[CONTENT] clinical | respondent | technology | educators | clinical educators | student | progress | time | online | mentioned [SUMMARY]
[CONTENT] clinical | respondent | technology | educators | clinical educators | student | progress | time | online | mentioned [SUMMARY]
[CONTENT] clinical | respondent | technology | educators | clinical educators | student | progress | time | online | mentioned [SUMMARY]
[CONTENT] clinical | respondent | technology | educators | clinical educators | student | progress | time | online | mentioned [SUMMARY]
[CONTENT] clinical | technology | clinical educators | educators | examine clinical | examine clinical educators | students | examine | clinical educators perceptions | educators perceptions [SUMMARY]
[CONTENT] tas | questions | data | clinical | analysis | technology | educators | quantitative | setting | survey [SUMMARY]
[CONTENT] respondent | progress | student | mentioned | time | student progress | tracking | documentation | respondents | record [SUMMARY]
[CONTENT] clinical | educators | environments | education | technology | understand | imperative understand | clinical educators | implementation | imperative [SUMMARY]
[CONTENT] respondent | clinical | technology | educators | clinical educators | time | progress | mentioned | student | online [SUMMARY]
[CONTENT] respondent | clinical | technology | educators | clinical educators | time | progress | mentioned | student | online [SUMMARY]
[CONTENT] ||| ||| tertiary ||| ||| ||| [SUMMARY]
[CONTENT] ||| TAS ||| one | OLE | PebblePad ||| Medical Imaging [SUMMARY]
[CONTENT] tertiary ||| ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| tertiary ||| ||| ||| ||| ||| TAS ||| one | OLE | PebblePad ||| Medical Imaging ||| ||| tertiary ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| tertiary ||| ||| ||| ||| ||| TAS ||| one | OLE | PebblePad ||| Medical Imaging ||| ||| tertiary ||| ||| ||| ||| [SUMMARY]
New classification of Herlyn-Werner-Wunderlich syndrome.
25591566
Uterus didelphys and blind hemivagina associated with ipsilateral renal agenesis are collectively known as Herlyn-Werner-Wunderlich syndrome (HWWS). In the literature, the syndrome often appears as a single case report or as a small series. In our study, we reviewed the characteristics of all HWWS patients at Peking Union Medical College Hospital (PUMCH) and suggested a new classification for this syndrome because the clinical characteristics differed significantly between the completely and incompletely obstructed vaginal septum. This new classification allows for earlier diagnosis and treatment.
BACKGROUND
From January 1986 to March 2013, all diagnosed cases of HWWS at PUMCH were reviewed. A retrospective long-term follow-up study of the clinical presentation, surgical prognosis, and pregnancy outcomes was performed. Statistical analyses were performed using SPSS, version 15.0 (IBM, Armonk, NY, USA). Between-group comparisons were performed using the χ2 test, Fisher's exact test, and the t-test. The significance level for all analyses was set at P < 0.05.
METHODS
The clinical data from 79 patients with HWWS were analyzed until March 31, 2013. According to our newly identified characteristics, we recommend that the syndrome be classified by the complete or incomplete obstruction of the hemivagina as follows: Classification 1, a completely obstructed hemivagina and Classification 2, an incompletely obstructed hemivagina. The clinical details associated with these two types are distinctly different.
RESULTS
HWWS patients should be differentiated according to these two classifications. The two classifications could be generalized by gynecologists worldwide.
CONCLUSIONS
[ "Adolescent", "Child", "Congenital Abnormalities", "Female", "Humans", "Male", "Retrospective Studies", "Urogenital Abnormalities", "Uterus", "Vagina" ]
4837842
INTRODUCTION
Uterus didelphys and blind hemivagina associated with ipsilateral renal agenesis are collectively known as Herlyn-Werner-Wunderlich syndrome (HWWS), a rare congenital anomaly. The exact etiology of HWWS is still unknown, but it may be caused by the abnormal development of Müllerian and Wolffian ducts.[12] Its estimated occurrence is 0.1%–3.8%.[1] Herlyn-Werner syndrome (i.e., renal agenesis and an ipsilateral blind hemivagina) was initially described in 1971 by Herlyn and Werner.[3] In 1976, Wunderlich described an association of right renal aplasia with a bicornuate uterus and simple vagina in the presence of an isolated hematocervix.[4] Since that time, the anomaly of HWWS has appeared as a single case report or as a small series in the literature. Recent reports regarding uterus didelphys and blind hemivagina associated with ipsilateral renal agenesis have involved eight cases of HWWS and their appropriate interventions in 2004, twelve cases of pediatric HWWS patients in 2006, one case of HWWS and ectopic ureter presenting with vulvodynia and recurrent fever in 2010, and 36 cases of HWWS, with long-term follow-ups, in 1997.[5678] In 2013, 70 patients with confirmed diagnoses of HWWS who were admitted to the Peking Union Medical College Hospital (PUMCH) between January 1995 and December 2010 were retrospectively reviewed by Tong et al.[9] According to the data provided, as well as a literature search, this is the largest case series of HWWS to date. As the largest referral center for complex gynecologic disease in China, a country with a population of 1.4 billion, PUMCH has diagnosed and treated numerous patients with various female genital malformations over the years. In March 2013, based on Tong's study,[9] we updated relevant information regarding HWWS patients at our hospital.
METHODS
From January 1986 to March 2013, 2238 cases of female genital malformations were treated at PUMCH. Of these 2238 cases, 79 patients (3.53%) were diagnosed with HWWS. A retrospective long-term follow-up study of the clinical presentation, surgical prognosis, and pregnancy outcomes was performed. All patients were preoperatively diagnosed by ultrasonography and pelvic examination. The anatomical variations among patients were confirmed with intraoperative findings. Data were collected and analyzed. Statistical analyses were performed using SPSS, version 15.0 (IBM, Armonk, NY, USA). Between-group comparisons were performed using the χ2 test, Fisher's exact test, and the t-test. The significance level for all analyses was set at P < 0.05. Study approval was provided by the Ethics Committee of PUMCH. All patients provided consent for chart reviews and follow-ups.
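The between-group comparisons described above were run in SPSS 15.0. As a rough, non-authoritative sketch only (not the authors' actual analysis code), the same kinds of tests can be reproduced with scipy; the 2x2 table below reuses the pelvic-endometriosis counts reported in the Results (9/24 in Classification 1 vs 7/55 in Classification 2), while the age vectors are invented placeholders, not patient data.

```python
# Illustrative sketch only: the study used SPSS 15.0; this is NOT the authors' code.
# The 2x2 table reuses endometriosis counts reported in the Results section
# (9/24 for Classification 1 vs 7/55 for Classification 2); the age arrays are
# invented placeholders, not patient data.
from scipy import stats

ALPHA = 0.05  # significance level used throughout the study

# Chi-square test on a 2x2 contingency table: [cases with endometriosis, cases without]
table = [[9, 24 - 9],   # Classification 1
         [7, 55 - 7]]   # Classification 2
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test, preferred when expected cell counts are small
odds_ratio, p_fisher = stats.fisher_exact(table)

# Two-sample t-test, e.g. comparing mean age at diagnosis between the two groups
# (values below are hypothetical; the paper reports only that the difference was significant)
age_class1 = [13.5, 14.0, 12.8, 15.1, 13.9]
age_class2 = [18.2, 19.5, 17.8, 20.1, 18.9]
t_stat, p_ttest = stats.ttest_ind(age_class1, age_class2)

for name, p in [("chi-square", p_chi2), ("Fisher exact", p_fisher), ("t-test", p_ttest)]:
    print(f"{name}: P = {p:.3f} ({'significant' if p < ALPHA else 'not significant'})")
```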
RESULTS
From January 1986 to March 31, 2013, the clinical data, long-term follow-up information and pregnancy data were collected from a total of 79 patients with HWWS. An analysis of the medical records from these patients revealed several important clinical characteristics of HWWS that were not previously reported in the literature [Table 1]. According to these newly identified characteristics, we recommend that HWWS be classified according to the complete or incomplete obstruction of the hemivagina as follows: Classification 1, patients with a completely obstructed hemivagina, and Classification 2, patients with an incompletely obstructed hemivagina. Clinical characteristics of patients with completely or incompletely obstructed hemivagina According to our analysis, 24 patients were categorized as Classification 1 and 55 were categorized as Classification 2. The clinical manifestations of patients with complete and those with incomplete obstruction of the hemivagina are distinctly different [Table 1]. The mean age at symptom onset of Classification 1 was significantly younger than the mean age at diagnosis of Classification 2 (P < 0.05). The mean age at diagnosis of Classification 1 was also significantly younger than the mean age at diagnosis of Classification 2 (P < 0.05). The median time between menarche and the onset of cyclic pelvic pain was 0.3 year for Classification 1 and 3 years for Classification 2. The incidences of dysmenorrhea, intermittent mucopurulent discharge and irregular vaginal hemorrhage for Classification 1 were all significantly lower than those for Classification 2 (P < 0.05). The occurrence of pelvic endometriosis was significantly higher in patients with Classification 1 (38%, 9/24) than in those with Classification 2 (13%, 7/55; P < 0.05). Conversely, the occurrence of acute pelvic inflammation was significantly lower in patients with Classification 1 (4%, 1/24) than in those with Classification 2 (27%, 15/55; P < 0.05). In addition, patients with Classification 1 are more prone to hematometra, hematosalpinx and hemoperitoneum, especially in some more severely affected patients. Acute onset of abdominal pain, fever, and vomiting are common symptoms several months after menarche. Endometriosis is a common complication; if not treated in time, the condition can progress to secondary endometriosis, pelvic adhesion, pyosalpinx, and even pyocolpos. In contrast, patients with Classification 2 mainly complain of purulent or bloody vaginal discharge and ascending genital system infection years after menarche. Most patients with Classification 2 have a normal menstrual cycle but longer menstrual periods, with illness attacks occurring years after menarche. The follow-up period ranged from 1 to 120 months. The median follow-up period was 17 months. The renal agenesis favored the right side in 45 (57%) and the left in 34 (43%) patients. All patients underwent resection of the vaginal septum and drainage of hematocolpos. Eleven (14%) patients underwent abdominal exploration via laparotomy or laparoscopy. In total, 40 women were married and sexually active. There were 52 pregnancies among 28 (85%) of the 33 women who wished to conceive. Pregnancy occurred in the uterus ipsilateral to the hemivaginal septum in 19 (37%) cases, and in the uterus contralateral to the hemivaginal septum in 33 (64%) cases. Eight women experienced separate pregnancies in each of the bilateral uteri. There were no pathologic pregnancies or pregnancy complications.
Full resection of the vaginal septum resulted in good outcomes and fertility.
null
null
[ "Classification 1, completely obstructed hemivagina", "Classification 2, incompletely obstructed hemivagina", "DIAGNOSIS", "TREATMENT", "PROGNOSIS" ]
[ "\nClassification 1.1, with blind hemivagina\n\nIn this classification, the hemivagina is completely obstructed; the uterus behind the septum is completely isolated from the contralateral uterus, and no communication is present between the duplicated uterus and vagina. Hematocolpos may occur only a few months after menarche. Hematometra and hematosalpinx occurred in some more severely affected patients, as well as bleeding in the periadnexal and peritoneal space. Patients with this classification have an earlier age of onset, with a short time from menarche to attack. The presenting symptoms may include the acute onset of abdominal pain, fever, and vomiting. Hemoperitoneum, due to bleeding from the fallopian tube, can be found at surgery.[510] Endometriosis can result from blood reflux into the abdominal cavity and may have dire consequences. If not treated in time, the condition can progress to secondary endometriosis, pelvic adhesion, pyosalpinx, and even pyocolpos [Figure 1].[67]\nClassification 1.1, with blind hemivagina.\n\nClassification 1.2, cervicovaginal atresia without communicating uteri\n\nIn this classification, the hemivagina is completely obstructed; the cervix behind the septum is maldeveloped or atresic, and menses from the uterus behind the septum cannot outflow through the atresic cervix. Patients with this classification have similar clinical features as patients with Classification 1.1 [Figure 2].\nClassification 1.2, cervicovaginal atresia, without communicating uteri.", "\nClassification 2.1, partial reabsorption of the vaginal septum\n\nIn this classification, a small communication exists between the two vaginas, which make the vaginal cavity behind the septum incompletely obstructed. The uterus behind the septum is completely isolated from the contralateral uterus. The menses can outflow through the small communication, but the drainage is impeded. These patients have a later age of onset. The attack often comes years after menarche. Purulent or bloody vaginal discharge can be the chief complaints. Patients often have ascending genital system infection [Figure 3].\nClassification 2.1, partial reabsorption of the vaginal septum.\n\nClassification 2.2, with communicating uteri\n\nIn this classification, the hemivagina is completely obstructed, and a small communication exists between the duplicated cervices. Menses from the uterus behind the septum can outflow through the communication to the offside contralateral cervix. Because the communication is small, the drainage is still impeded [Figure 4].\nClassification 2.2, incompletely obstructed hemivagina with communicating uteri.", "Sonography and magnetic resonance imaging (MRI) are extremely useful in diagnosing and classifying Müllerian duct anomalies.[10] The ultrasonography features of these conditions include uterine anomalies (didelphic/bicornuate uterus), with or without uterine effusion; an echo-free area below one cervix, sometimes with intensive dot-like hyperechoic regions in the no-echo area; and ipsilateral renal agenesis with compensatory hypertrophy of the contralateral kidney. Centesis of the paravaginal mass indicates accumulated pus or blood. 
MRI with multiplanar image acquisition provides more detailed information.[11] For patients with Classification 2.2, hysterosalpingography showed that iodine oil passed through the communication between the duplicated cervices to the contralateral uterus and then the cavity behind the septum.", "To alleviate symptoms and retain fertility in these patients, the most effective treatment is surgery. Resection of as much of the obstructing vaginal septum as possible is the optimal surgery for patients with Classifications 1.1, 2.1, and 2.2. Most patients can recover completely after resection of the vaginal septum. The best time for surgery in these patients is approximately at the time of menstruation, particularly in patients with Classification 1.1, as a large distended hematocolpos is easy to visualize and palpate, which aids in resection. Hur et al. suggested that laparoscopic evaluation should not be omitted in patients who have an obstructed vaginal septum, which may inevitably result in massive menstrual regurgitation or even endometriosis and pelvic adhesions, which cannot be detected by ultrasonography or MRI.[12] Treatment for patients with Classification 1.2 differs from the treatment of patients with other classifications. Cervical agenesis is difficult to correct surgically. After being diagnosed with renal agenesis or renal malformation by imaging studies, laparoscopic or the transabdominal resection of the atresic uterus is suggested.", "In conclusion, the prognosis of HWWS is good with early diagnosis and early treatment, except for patients with Classification 1.2. In cases complicated by cervical atresia, ipsilateral hysterectomy is suggested because resection of the septum would not relieve obstructed symptoms. The onset of clinical manifestations was much earlier and more serious in patients with completely obstructed hemivaginal septa compared with those with incomplete obstructions. This new classification of HWWS can help to provide clinicians with earlier diagnoses and treatments to prevent secondary pelvic endometriosis and pelvic inflammation." ]
[ null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION", "Classification 1, completely obstructed hemivagina", "Classification 2, incompletely obstructed hemivagina", "DIAGNOSIS", "TREATMENT", "PROGNOSIS" ]
[ "Uterus didelphys and blind hemivagina associated with ipsilateral renal agenesis are collectively known as Herlyn-Werner-Wunderlich syndrome (HWWS), a rare congenital anomaly. The exact etiology of HWWS is still unknown, but it may be caused by the abnormal development of Müllerian and Wolffian ducts.[12] Its estimated occurrence is 0.1%–3.8%.[1]\nHerlyn-Werner syndrome (i.e., renal agenesis and an ipsilateral blind hemivagina) was initially described in 1971 by Herlyn and Werner.[3] In 1976, Wunderlich described an association of right renal aplasia with a bicornuate uterus and simple vagina in the presence of an isolated hematocervix.[4] Since that time, the anomaly of HWWS has appeared as a single case report or as a small series in the literature. Recent reports regarding uterus didelphys and blind hemivagina associated with ipsilateral renal agenesis have involved eight cases of HWWS and their appropriate interventions in 2004, twelve cases of pediatric HWWS patients in 2006, one case of HWWS and ectopic ureter presenting with vulvodynia and recurrent fever in 2010, and 36 cases of HWWS, with long-term follow-ups, in 1997.[5678] In 2013, 70 patients with confirmed diagnoses of HWWS who were admitted to the Peking Union Medical College Hospital (PUMCH) between January 1995 and December 2010 were retrospectively reviewed by Tong et al.[9] According to the data provided, as well as a literature search, this is the largest case series of HWWS to date.\nWith a population of 1.4 billion and the largest referral center for complex gynecologic disease in China, at PUMCH, we have diagnosed and treated numerous patients with various female genital malformations over the years. In March 2013, based on Tong’ study,[9] we updated relevant information regarding HWWS patients at our hospital.", "From January 1986 to March 2013, 2238 cases of female genital malformations were treated at PUMCH. Of these 2238 cases, 79 patients (3.53%) were diagnosed with HWWS. A retrospective long-term follow-up study of the clinical presentation, surgical prognosis, and pregnancy outcomes was performed. All patients were preoperatively diagnosed by ultrasonography and pelvic examination. The anatomical variations among patients were confirmed with intraoperative findings. Data were collected and analyzed. Statistical analyses were performed using SPSS, version 15.0 (IBM, Armonk, NY, USA). Between-group comparisons were performed using the χ2 test, Fisher's exact test, and the t-test. The significance level for all analyses was set at P < 0.05. Study approval was provided by the Ethics Committee of PUMCH. All patients provided consent for chart reviews and follow-ups.", "From January 1986 to March 31, 2013 the clinical data, long-term follow-up information and pregnancy data were collected from a total of 79 patients with HWWS. An analysis of the medical records from these patients revealed several important clinical characteristics of HWWS that were not previously reported in the literature [Table 1]. 
According to these newly identified characteristics, we recommend that HWWS be classified according to the complete or incomplete obstruction of the hemivagina as follows: Classification 1, patients with a completely obstructed hemivagina, and Classification 2, patients with an incompletely obstructed hemivagina.\nClinical characteristics of patients with completely or incompletely obstructed hemivagina\nAccording to our analysis, 24 patients were categorized as Classification 1 and 55 were categorized as Classification 2. The clinical manifestations of patients with complete and those with incomplete obstruction of hemivagina are distinctively different [Table 1]. The mean age at symptoms onset of Classification 1 was significantly younger than the mean age at diagnosis of Classification 2 (P < 0.05). The mean age at diagnosis of Classification 1 was also significantly younger than the mean age at diagnosis of Classification 2 (P < 0.05). The median time between menarche and the onset of cyclic pelvic pain was 0.3 year for Classification 1 and 3 years for Classification 2. The incidence of dysmenorrhea, intermittent mucopurulent discharge and irregular vaginal hemorrhage for Classification 1 were all significantly lower than Classification 2 (P < 0.05). The occurrence of pelvic endometriosis was significantly higher in patients with Classification 1 (38%, 9/24) than in those with Classification 2 (13%, 7/55; P < 0.05). Besides, the occurrence of acute pelvic inflammation was significantly lower in patients with Classification 1 (4%, 1/24) than in those with Classification 2 (27%, 15/55; P < 0.05).\nIn addition, patients with Classification 1 are more prone to hematometros, hematosalpinx and hemoperitoneum, especially in some more severely affected patients. Acute onset of abdominal pain, fever, and vomiting are common symptoms seveal months after menarche. Endometriosis is common complications, if not treated in time; the condition can progress to secondary endometriosis, pelvic adhesion, pyosalpinx, and even pyocolpos. Although, patients with Classification 2 mainly complaints of purulent or bloody vaginal discharge and ascending genital system infection years after menarche. Most patients with Classification 2 have normal menstrual cycle, but longer menstrual periods, illness attacks years after menarche.\nThe follow-up period ranged from 1 to 120 months. The median follow-up period was 17 months. The renal agenesis favored the right side in 45 (57%) and the left in 34 (43%) patients. All patients underwent resection of the vaginal septum and drainage of hematocolpos. Eleven (14%) patients underwent abdominal exploration via laparotomy or laparoscopy. In total, 40 women were married and sexually active. There were 52 pregnancies among 28 (85%) of the 33 women who wished to conceive. Pregnancy occurred in the uterus ipsilateral to the hemivaginal septum in 19 (37%) cases, and in the uterus contralateral to the hemivaginal septum in 33 (64%) cases. Eight women experienced separate pregnancies in each of the bilateral uteri. There were no pathologic pregnancies or pregnancy complications. Full resection of the vaginal septum resulted in good outcomes and fertility.", "Based on our study of HWWS patients at PUMCH, HWWS can be classified into two new types. 
Below, we describe the characteristics of each classification in detail.\n Classification 1, completely obstructed hemivagina \nClassification 1.1, with blind hemivagina\n\nIn this classification, the hemivagina is completely obstructed; the uterus behind the septum is completely isolated from the contralateral uterus, and no communication is present between the duplicated uterus and vagina. Hematocolpos may occur only a few months after menarche. Hematometra and hematosalpinx occurred in some more severely affected patients, as well as bleeding in the periadnexal and peritoneal space. Patients with this classification have an earlier age of onset, with a short time from menarche to attack. The presenting symptoms may include the acute onset of abdominal pain, fever, and vomiting. Hemoperitoneum, due to bleeding from the fallopian tube, can be found at surgery.[510] Endometriosis can result from blood reflux into the abdominal cavity and may have dire consequences. If not treated in time, the condition can progress to secondary endometriosis, pelvic adhesion, pyosalpinx, and even pyocolpos [Figure 1].[67]\nClassification 1.1, with blind hemivagina.\n\nClassification 1.2, cervicovaginal atresia without communicating uteri\n\nIn this classification, the hemivagina is completely obstructed; the cervix behind the septum is maldeveloped or atresic, and menses from the uterus behind the septum cannot outflow through the atresic cervix. Patients with this classification have similar clinical features as patients with Classification 1.1 [Figure 2].\nClassification 1.2, cervicovaginal atresia, without communicating uteri.\n\nClassification 1.1, with blind hemivagina\n\nIn this classification, the hemivagina is completely obstructed; the uterus behind the septum is completely isolated from the contralateral uterus, and no communication is present between the duplicated uterus and vagina. Hematocolpos may occur only a few months after menarche. Hematometra and hematosalpinx occurred in some more severely affected patients, as well as bleeding in the periadnexal and peritoneal space. Patients with this classification have an earlier age of onset, with a short time from menarche to attack. The presenting symptoms may include the acute onset of abdominal pain, fever, and vomiting. Hemoperitoneum, due to bleeding from the fallopian tube, can be found at surgery.[510] Endometriosis can result from blood reflux into the abdominal cavity and may have dire consequences. If not treated in time, the condition can progress to secondary endometriosis, pelvic adhesion, pyosalpinx, and even pyocolpos [Figure 1].[67]\nClassification 1.1, with blind hemivagina.\n\nClassification 1.2, cervicovaginal atresia without communicating uteri\n\nIn this classification, the hemivagina is completely obstructed; the cervix behind the septum is maldeveloped or atresic, and menses from the uterus behind the septum cannot outflow through the atresic cervix. Patients with this classification have similar clinical features as patients with Classification 1.1 [Figure 2].\nClassification 1.2, cervicovaginal atresia, without communicating uteri.\n Classification 2, incompletely obstructed hemivagina \nClassification 2.1, partial reabsorption of the vaginal septum\n\nIn this classification, a small communication exists between the two vaginas, which make the vaginal cavity behind the septum incompletely obstructed. The uterus behind the septum is completely isolated from the contralateral uterus. 
The menses can outflow through the small communication, but the drainage is impeded. These patients have a later age of onset. The attack often comes years after menarche. Purulent or bloody vaginal discharge can be the chief complaints. Patients often have ascending genital system infection [Figure 3].\nClassification 2.1, partial reabsorption of the vaginal septum.\n\nClassification 2.2, with communicating uteri\n\nIn this classification, the hemivagina is completely obstructed, and a small communication exists between the duplicated cervices. Menses from the uterus behind the septum can outflow through the communication to the offside contralateral cervix. Because the communication is small, the drainage is still impeded [Figure 4].\nClassification 2.2, incompletely obstructed hemivagina with communicating uteri.\n\nClassification 2.1, partial reabsorption of the vaginal septum\n\nIn this classification, a small communication exists between the two vaginas, which make the vaginal cavity behind the septum incompletely obstructed. The uterus behind the septum is completely isolated from the contralateral uterus. The menses can outflow through the small communication, but the drainage is impeded. These patients have a later age of onset. The attack often comes years after menarche. Purulent or bloody vaginal discharge can be the chief complaints. Patients often have ascending genital system infection [Figure 3].\nClassification 2.1, partial reabsorption of the vaginal septum.\n\nClassification 2.2, with communicating uteri\n\nIn this classification, the hemivagina is completely obstructed, and a small communication exists between the duplicated cervices. Menses from the uterus behind the septum can outflow through the communication to the offside contralateral cervix. Because the communication is small, the drainage is still impeded [Figure 4].\nClassification 2.2, incompletely obstructed hemivagina with communicating uteri.", "\nClassification 1.1, with blind hemivagina\n\nIn this classification, the hemivagina is completely obstructed; the uterus behind the septum is completely isolated from the contralateral uterus, and no communication is present between the duplicated uterus and vagina. Hematocolpos may occur only a few months after menarche. Hematometra and hematosalpinx occurred in some more severely affected patients, as well as bleeding in the periadnexal and peritoneal space. Patients with this classification have an earlier age of onset, with a short time from menarche to attack. The presenting symptoms may include the acute onset of abdominal pain, fever, and vomiting. Hemoperitoneum, due to bleeding from the fallopian tube, can be found at surgery.[510] Endometriosis can result from blood reflux into the abdominal cavity and may have dire consequences. If not treated in time, the condition can progress to secondary endometriosis, pelvic adhesion, pyosalpinx, and even pyocolpos [Figure 1].[67]\nClassification 1.1, with blind hemivagina.\n\nClassification 1.2, cervicovaginal atresia without communicating uteri\n\nIn this classification, the hemivagina is completely obstructed; the cervix behind the septum is maldeveloped or atresic, and menses from the uterus behind the septum cannot outflow through the atresic cervix. 
Patients with this classification have similar clinical features as patients with Classification 1.1 [Figure 2].\nClassification 1.2, cervicovaginal atresia, without communicating uteri.", "\nClassification 2.1, partial reabsorption of the vaginal septum\n\nIn this classification, a small communication exists between the two vaginas, which make the vaginal cavity behind the septum incompletely obstructed. The uterus behind the septum is completely isolated from the contralateral uterus. The menses can outflow through the small communication, but the drainage is impeded. These patients have a later age of onset. The attack often comes years after menarche. Purulent or bloody vaginal discharge can be the chief complaints. Patients often have ascending genital system infection [Figure 3].\nClassification 2.1, partial reabsorption of the vaginal septum.\n\nClassification 2.2, with communicating uteri\n\nIn this classification, the hemivagina is completely obstructed, and a small communication exists between the duplicated cervices. Menses from the uterus behind the septum can outflow through the communication to the offside contralateral cervix. Because the communication is small, the drainage is still impeded [Figure 4].\nClassification 2.2, incompletely obstructed hemivagina with communicating uteri.", "Sonography and magnetic resonance imaging (MRI) are extremely useful in diagnosing and classifying Müllerian duct anomalies.[10] The ultrasonography features of these conditions include uterine anomalies (didelphic/bicornuate uterus), with or without uterine effusion; an echo-free area below one cervix, sometimes with intensive dot-like hyperechoic regions in the no-echo area; and ipsilateral renal agenesis with compensatory hypertrophy of the contralateral kidney. Centesis of the paravaginal mass indicates accumulated pus or blood. MRI with multiplanar image acquisition provides more detailed information.[11] For patients with Classification 2.2, hysterosalpingography showed that iodine oil passed through the communication between the duplicated cervices to the contralateral uterus and then the cavity behind the septum.", "To alleviate symptoms and retain fertility in these patients, the most effective treatment is surgery. Resection of as much of the obstructing vaginal septum as possible is the optimal surgery for patients with Classifications 1.1, 2.1, and 2.2. Most patients can recover completely after resection of the vaginal septum. The best time for surgery in these patients is approximately at the time of menstruation, particularly in patients with Classification 1.1, as a large distended hematocolpos is easy to visualize and palpate, which aids in resection. Hur et al. suggested that laparoscopic evaluation should not be omitted in patients who have an obstructed vaginal septum, which may inevitably result in massive menstrual regurgitation or even endometriosis and pelvic adhesions, which cannot be detected by ultrasonography or MRI.[12] Treatment for patients with Classification 1.2 differs from the treatment of patients with other classifications. Cervical agenesis is difficult to correct surgically. After being diagnosed with renal agenesis or renal malformation by imaging studies, laparoscopic or the transabdominal resection of the atresic uterus is suggested.", "In conclusion, the prognosis of HWWS is good with early diagnosis and early treatment, except for patients with Classification 1.2. 
In cases complicated by cervical atresia, ipsilateral hysterectomy is suggested because resection of the septum would not relieve obstructed symptoms. The onset of clinical manifestations was much earlier and more serious in patients with completely obstructed hemivaginal septa compared with those with incomplete obstructions. This new classification of HWWS can help to provide clinicians with earlier diagnoses and treatments to prevent secondary pelvic endometriosis and pelvic inflammation." ]
[ "intro", "methods", "results", "discussion", null, null, null, null, null ]
[ "Classification", "Diagnosis", "Therapy" ]
INTRODUCTION: Uterus didelphys and blind hemivagina associated with ipsilateral renal agenesis are collectively known as Herlyn-Werner-Wunderlich syndrome (HWWS), a rare congenital anomaly. The exact etiology of HWWS is still unknown, but it may be caused by the abnormal development of Müllerian and Wolffian ducts.[12] Its estimated occurrence is 0.1%–3.8%.[1] Herlyn-Werner syndrome (i.e., renal agenesis and an ipsilateral blind hemivagina) was initially described in 1971 by Herlyn and Werner.[3] In 1976, Wunderlich described an association of right renal aplasia with a bicornuate uterus and simple vagina in the presence of an isolated hematocervix.[4] Since that time, the anomaly of HWWS has appeared as a single case report or as a small series in the literature. Recent reports regarding uterus didelphys and blind hemivagina associated with ipsilateral renal agenesis have involved eight cases of HWWS and their appropriate interventions in 2004, twelve cases of pediatric HWWS patients in 2006, one case of HWWS and ectopic ureter presenting with vulvodynia and recurrent fever in 2010, and 36 cases of HWWS, with long-term follow-ups, in 1997.[5678] In 2013, 70 patients with confirmed diagnoses of HWWS who were admitted to the Peking Union Medical College Hospital (PUMCH) between January 1995 and December 2010 were retrospectively reviewed by Tong et al.[9] According to the data provided, as well as a literature search, this is the largest case series of HWWS to date. With a population of 1.4 billion and the largest referral center for complex gynecologic disease in China, at PUMCH, we have diagnosed and treated numerous patients with various female genital malformations over the years. In March 2013, based on Tong’ study,[9] we updated relevant information regarding HWWS patients at our hospital. METHODS: From January 1986 to March 2013, 2238 cases of female genital malformations were treated at PUMCH. Of these 2238 cases, 79 patients (3.53%) were diagnosed with HWWS. A retrospective long-term follow-up study of the clinical presentation, surgical prognosis, and pregnancy outcomes was performed. All patients were preoperatively diagnosed by ultrasonography and pelvic examination. The anatomical variations among patients were confirmed with intraoperative findings. Data were collected and analyzed. Statistical analyses were performed using SPSS, version 15.0 (IBM, Armonk, NY, USA). Between-group comparisons were performed using the χ2 test, Fisher's exact test, and the t-test. The significance level for all analyses was set at P < 0.05. Study approval was provided by the Ethics Committee of PUMCH. All patients provided consent for chart reviews and follow-ups. RESULTS: From January 1986 to March 31, 2013 the clinical data, long-term follow-up information and pregnancy data were collected from a total of 79 patients with HWWS. An analysis of the medical records from these patients revealed several important clinical characteristics of HWWS that were not previously reported in the literature [Table 1]. According to these newly identified characteristics, we recommend that HWWS be classified according to the complete or incomplete obstruction of the hemivagina as follows: Classification 1, patients with a completely obstructed hemivagina, and Classification 2, patients with an incompletely obstructed hemivagina. 
Clinical characteristics of patients with completely or incompletely obstructed hemivagina According to our analysis, 24 patients were categorized as Classification 1 and 55 were categorized as Classification 2. The clinical manifestations of patients with complete and those with incomplete obstruction of hemivagina are distinctively different [Table 1]. The mean age at symptoms onset of Classification 1 was significantly younger than the mean age at diagnosis of Classification 2 (P < 0.05). The mean age at diagnosis of Classification 1 was also significantly younger than the mean age at diagnosis of Classification 2 (P < 0.05). The median time between menarche and the onset of cyclic pelvic pain was 0.3 year for Classification 1 and 3 years for Classification 2. The incidence of dysmenorrhea, intermittent mucopurulent discharge and irregular vaginal hemorrhage for Classification 1 were all significantly lower than Classification 2 (P < 0.05). The occurrence of pelvic endometriosis was significantly higher in patients with Classification 1 (38%, 9/24) than in those with Classification 2 (13%, 7/55; P < 0.05). Besides, the occurrence of acute pelvic inflammation was significantly lower in patients with Classification 1 (4%, 1/24) than in those with Classification 2 (27%, 15/55; P < 0.05). In addition, patients with Classification 1 are more prone to hematometros, hematosalpinx and hemoperitoneum, especially in some more severely affected patients. Acute onset of abdominal pain, fever, and vomiting are common symptoms seveal months after menarche. Endometriosis is common complications, if not treated in time; the condition can progress to secondary endometriosis, pelvic adhesion, pyosalpinx, and even pyocolpos. Although, patients with Classification 2 mainly complaints of purulent or bloody vaginal discharge and ascending genital system infection years after menarche. Most patients with Classification 2 have normal menstrual cycle, but longer menstrual periods, illness attacks years after menarche. The follow-up period ranged from 1 to 120 months. The median follow-up period was 17 months. The renal agenesis favored the right side in 45 (57%) and the left in 34 (43%) patients. All patients underwent resection of the vaginal septum and drainage of hematocolpos. Eleven (14%) patients underwent abdominal exploration via laparotomy or laparoscopy. In total, 40 women were married and sexually active. There were 52 pregnancies among 28 (85%) of the 33 women who wished to conceive. Pregnancy occurred in the uterus ipsilateral to the hemivaginal septum in 19 (37%) cases, and in the uterus contralateral to the hemivaginal septum in 33 (64%) cases. Eight women experienced separate pregnancies in each of the bilateral uteri. There were no pathologic pregnancies or pregnancy complications. Full resection of the vaginal septum resulted in good outcomes and fertility. DISCUSSION: Based on our study of HWWS patients at PUMCH, HWWS can be classified into two new types. Below, we describe the characteristics of each classification in detail. Classification 1, completely obstructed hemivagina Classification 1.1, with blind hemivagina In this classification, the hemivagina is completely obstructed; the uterus behind the septum is completely isolated from the contralateral uterus, and no communication is present between the duplicated uterus and vagina. Hematocolpos may occur only a few months after menarche. 
Hematometra and hematosalpinx occurred in some more severely affected patients, as well as bleeding in the periadnexal and peritoneal space. Patients with this classification have an earlier age of onset, with a short time from menarche to attack. The presenting symptoms may include the acute onset of abdominal pain, fever, and vomiting. Hemoperitoneum, due to bleeding from the fallopian tube, can be found at surgery.[510] Endometriosis can result from blood reflux into the abdominal cavity and may have dire consequences. If not treated in time, the condition can progress to secondary endometriosis, pelvic adhesion, pyosalpinx, and even pyocolpos [Figure 1].[67] Classification 1.1, with blind hemivagina. Classification 1.2, cervicovaginal atresia without communicating uteri In this classification, the hemivagina is completely obstructed; the cervix behind the septum is maldeveloped or atresic, and menses from the uterus behind the septum cannot outflow through the atresic cervix. Patients with this classification have similar clinical features as patients with Classification 1.1 [Figure 2]. Classification 1.2, cervicovaginal atresia, without communicating uteri. Classification 1.1, with blind hemivagina In this classification, the hemivagina is completely obstructed; the uterus behind the septum is completely isolated from the contralateral uterus, and no communication is present between the duplicated uterus and vagina. Hematocolpos may occur only a few months after menarche. Hematometra and hematosalpinx occurred in some more severely affected patients, as well as bleeding in the periadnexal and peritoneal space. Patients with this classification have an earlier age of onset, with a short time from menarche to attack. The presenting symptoms may include the acute onset of abdominal pain, fever, and vomiting. Hemoperitoneum, due to bleeding from the fallopian tube, can be found at surgery.[510] Endometriosis can result from blood reflux into the abdominal cavity and may have dire consequences. If not treated in time, the condition can progress to secondary endometriosis, pelvic adhesion, pyosalpinx, and even pyocolpos [Figure 1].[67] Classification 1.1, with blind hemivagina. Classification 1.2, cervicovaginal atresia without communicating uteri In this classification, the hemivagina is completely obstructed; the cervix behind the septum is maldeveloped or atresic, and menses from the uterus behind the septum cannot outflow through the atresic cervix. Patients with this classification have similar clinical features as patients with Classification 1.1 [Figure 2]. Classification 1.2, cervicovaginal atresia, without communicating uteri. Classification 2, incompletely obstructed hemivagina Classification 2.1, partial reabsorption of the vaginal septum In this classification, a small communication exists between the two vaginas, which make the vaginal cavity behind the septum incompletely obstructed. The uterus behind the septum is completely isolated from the contralateral uterus. The menses can outflow through the small communication, but the drainage is impeded. These patients have a later age of onset. The attack often comes years after menarche. Purulent or bloody vaginal discharge can be the chief complaints. Patients often have ascending genital system infection [Figure 3]. Classification 2.1, partial reabsorption of the vaginal septum. 
Classification 2.2, with communicating uteri In this classification, the hemivagina is completely obstructed, and a small communication exists between the duplicated cervices. Menses from the uterus behind the septum can outflow through the communication to the offside contralateral cervix. Because the communication is small, the drainage is still impeded [Figure 4]. Classification 2.2, incompletely obstructed hemivagina with communicating uteri. Classification 2.1, partial reabsorption of the vaginal septum In this classification, a small communication exists between the two vaginas, which make the vaginal cavity behind the septum incompletely obstructed. The uterus behind the septum is completely isolated from the contralateral uterus. The menses can outflow through the small communication, but the drainage is impeded. These patients have a later age of onset. The attack often comes years after menarche. Purulent or bloody vaginal discharge can be the chief complaints. Patients often have ascending genital system infection [Figure 3]. Classification 2.1, partial reabsorption of the vaginal septum. Classification 2.2, with communicating uteri In this classification, the hemivagina is completely obstructed, and a small communication exists between the duplicated cervices. Menses from the uterus behind the septum can outflow through the communication to the offside contralateral cervix. Because the communication is small, the drainage is still impeded [Figure 4]. Classification 2.2, incompletely obstructed hemivagina with communicating uteri. Classification 1, completely obstructed hemivagina: Classification 1.1, with blind hemivagina In this classification, the hemivagina is completely obstructed; the uterus behind the septum is completely isolated from the contralateral uterus, and no communication is present between the duplicated uterus and vagina. Hematocolpos may occur only a few months after menarche. Hematometra and hematosalpinx occurred in some more severely affected patients, as well as bleeding in the periadnexal and peritoneal space. Patients with this classification have an earlier age of onset, with a short time from menarche to attack. The presenting symptoms may include the acute onset of abdominal pain, fever, and vomiting. Hemoperitoneum, due to bleeding from the fallopian tube, can be found at surgery.[510] Endometriosis can result from blood reflux into the abdominal cavity and may have dire consequences. If not treated in time, the condition can progress to secondary endometriosis, pelvic adhesion, pyosalpinx, and even pyocolpos [Figure 1].[67] Classification 1.1, with blind hemivagina. Classification 1.2, cervicovaginal atresia without communicating uteri In this classification, the hemivagina is completely obstructed; the cervix behind the septum is maldeveloped or atresic, and menses from the uterus behind the septum cannot outflow through the atresic cervix. Patients with this classification have similar clinical features as patients with Classification 1.1 [Figure 2]. Classification 1.2, cervicovaginal atresia, without communicating uteri. Classification 2, incompletely obstructed hemivagina: Classification 2.1, partial reabsorption of the vaginal septum In this classification, a small communication exists between the two vaginas, which make the vaginal cavity behind the septum incompletely obstructed. The uterus behind the septum is completely isolated from the contralateral uterus. 
The menses can outflow through the small communication, but the drainage is impeded. These patients have a later age of onset. The attack often comes years after menarche. Purulent or bloody vaginal discharge can be the chief complaints. Patients often have ascending genital system infection [Figure 3]. Classification 2.1, partial reabsorption of the vaginal septum. Classification 2.2, with communicating uteri In this classification, the hemivagina is completely obstructed, and a small communication exists between the duplicated cervices. Menses from the uterus behind the septum can outflow through the communication to the offside contralateral cervix. Because the communication is small, the drainage is still impeded [Figure 4]. Classification 2.2, incompletely obstructed hemivagina with communicating uteri. DIAGNOSIS: Sonography and magnetic resonance imaging (MRI) are extremely useful in diagnosing and classifying Müllerian duct anomalies.[10] The ultrasonography features of these conditions include uterine anomalies (didelphic/bicornuate uterus), with or without uterine effusion; an echo-free area below one cervix, sometimes with intensive dot-like hyperechoic regions in the no-echo area; and ipsilateral renal agenesis with compensatory hypertrophy of the contralateral kidney. Centesis of the paravaginal mass indicates accumulated pus or blood. MRI with multiplanar image acquisition provides more detailed information.[11] For patients with Classification 2.2, hysterosalpingography showed that iodine oil passed through the communication between the duplicated cervices to the contralateral uterus and then the cavity behind the septum. TREATMENT: To alleviate symptoms and retain fertility in these patients, the most effective treatment is surgery. Resection of as much of the obstructing vaginal septum as possible is the optimal surgery for patients with Classifications 1.1, 2.1, and 2.2. Most patients can recover completely after resection of the vaginal septum. The best time for surgery in these patients is approximately at the time of menstruation, particularly in patients with Classification 1.1, as a large distended hematocolpos is easy to visualize and palpate, which aids in resection. Hur et al. suggested that laparoscopic evaluation should not be omitted in patients who have an obstructed vaginal septum, which may inevitably result in massive menstrual regurgitation or even endometriosis and pelvic adhesions, which cannot be detected by ultrasonography or MRI.[12] Treatment for patients with Classification 1.2 differs from the treatment of patients with other classifications. Cervical agenesis is difficult to correct surgically. After being diagnosed with renal agenesis or renal malformation by imaging studies, laparoscopic or the transabdominal resection of the atresic uterus is suggested. PROGNOSIS: In conclusion, the prognosis of HWWS is good with early diagnosis and early treatment, except for patients with Classification 1.2. In cases complicated by cervical atresia, ipsilateral hysterectomy is suggested because resection of the septum would not relieve obstructed symptoms. The onset of clinical manifestations was much earlier and more serious in patients with completely obstructed hemivaginal septa compared with those with incomplete obstructions. This new classification of HWWS can help to provide clinicians with earlier diagnoses and treatments to prevent secondary pelvic endometriosis and pelvic inflammation.
Background: Uterus didelphys and blind hemivagina associated with ipsilateral renal agenesis are collectively known as Herlyn-Werner-Wunderlich syndrome (HWWS). In the literature, the syndrome often appears as a single case report or as a small series. In our study, we reviewed the characteristics of all HWWS patients at Peking Union Medical College Hospital (PUMCH) and suggested a new classification for this syndrome because the clinical characteristics differed significantly between the completely and incompletely obstructed vaginal septum. This new classification allows for earlier diagnosis and treatment. Methods: From January 1986 to March 2013, all diagnosed cases of HWWS at PUMCH were reviewed. A retrospective long-term follow-up study of the clinical presentation, surgical prognosis, and pregnancy outcomes was performed. Statistical analyses were performed using SPSS, version 15.0 (IBM, Armonk, NY, USA). Between-group comparisons were performed using the χ2 test, Fisher's exact test, and the t-test. The significance level for all analyses was set at P < 0.05. Results: The clinical data from 79 patients with HWWS were analyzed until March 31, 2013. According to our newly identified characteristics, we recommend that the syndrome be classified by the complete or incomplete obstruction of the hemivagina as follows: Classification 1, a completely obstructed hemivagina and Classification 2, an incompletely obstructed hemivagina. The clinical details associated with these two types are distinctly different. Conclusions: HWWS patients should be differentiated according to these two classifications. The two classifications could be generalized by gynecologists world-wide.
null
null
2,941
299
[ 253, 187, 129, 190, 94 ]
9
[ "classification", "patients", "septum", "hemivagina", "uterus", "obstructed", "completely", "vaginal", "communication", "hwws" ]
[ "cases pediatric hwws", "uterus didelphys", "wunderlich syndrome hwws", "uterus didelphys blind", "werner syndrome renal" ]
null
null
[CONTENT] Classification | Diagnosis | Therapy [SUMMARY]
[CONTENT] Classification | Diagnosis | Therapy [SUMMARY]
[CONTENT] Classification | Diagnosis | Therapy [SUMMARY]
null
[CONTENT] Classification | Diagnosis | Therapy [SUMMARY]
null
[CONTENT] Adolescent | Child | Congenital Abnormalities | Female | Humans | Male | Retrospective Studies | Urogenital Abnormalities | Uterus | Vagina [SUMMARY]
[CONTENT] Adolescent | Child | Congenital Abnormalities | Female | Humans | Male | Retrospective Studies | Urogenital Abnormalities | Uterus | Vagina [SUMMARY]
[CONTENT] Adolescent | Child | Congenital Abnormalities | Female | Humans | Male | Retrospective Studies | Urogenital Abnormalities | Uterus | Vagina [SUMMARY]
null
[CONTENT] Adolescent | Child | Congenital Abnormalities | Female | Humans | Male | Retrospective Studies | Urogenital Abnormalities | Uterus | Vagina [SUMMARY]
null
[CONTENT] cases pediatric hwws | uterus didelphys | wunderlich syndrome hwws | uterus didelphys blind | werner syndrome renal [SUMMARY]
[CONTENT] cases pediatric hwws | uterus didelphys | wunderlich syndrome hwws | uterus didelphys blind | werner syndrome renal [SUMMARY]
[CONTENT] cases pediatric hwws | uterus didelphys | wunderlich syndrome hwws | uterus didelphys blind | werner syndrome renal [SUMMARY]
null
[CONTENT] cases pediatric hwws | uterus didelphys | wunderlich syndrome hwws | uterus didelphys blind | werner syndrome renal [SUMMARY]
null
[CONTENT] classification | patients | septum | hemivagina | uterus | obstructed | completely | vaginal | communication | hwws [SUMMARY]
[CONTENT] classification | patients | septum | hemivagina | uterus | obstructed | completely | vaginal | communication | hwws [SUMMARY]
[CONTENT] classification | patients | septum | hemivagina | uterus | obstructed | completely | vaginal | communication | hwws [SUMMARY]
null
[CONTENT] classification | patients | septum | hemivagina | uterus | obstructed | completely | vaginal | communication | hwws [SUMMARY]
null
[CONTENT] hwws | herlyn | case | herlyn werner | werner | renal | blind hemivagina | blind | tong | cases hwws [SUMMARY]
[CONTENT] test | performed | analyses | 2238 cases | 2238 | provided | patients | follow | pumch | diagnosed [SUMMARY]
[CONTENT] classification | patients | significantly | 05 | mean | mean age | classification 05 | age diagnosis classification | age diagnosis | 55 [SUMMARY]
null
[CONTENT] classification | patients | septum | hemivagina | uterus | vaginal | obstructed | communication | hwws | completely [SUMMARY]
null
[CONTENT] Herlyn-Werner-Wunderlich ||| ||| Peking Union Medical College Hospital | PUMCH ||| [SUMMARY]
[CONTENT] January 1986 to March 2013 | PUMCH ||| ||| SPSS | 15.0 | IBM | Armonk | NY | USA ||| χ2 | Fisher ||| P < 0.05 [SUMMARY]
[CONTENT] 79 | March 31, 2013 ||| Classification 1 | Classification 2 ||| two [SUMMARY]
null
[CONTENT] Herlyn-Werner-Wunderlich ||| ||| Peking Union Medical College Hospital | PUMCH ||| ||| January 1986 to March 2013 | PUMCH ||| ||| SPSS | 15.0 | IBM | Armonk | NY | USA ||| χ2 | Fisher ||| P < 0.05 ||| ||| 79 | March 31, 2013 ||| Classification 1 | Classification 2 ||| two ||| two ||| two [SUMMARY]
null
Added value of advanced over conventional magnetic resonance imaging in grading gliomas and other primary brain tumors.
25608821
Although conventional MR imaging (MRI) is the most widely used non-invasive technique for brain tumor grading, its accuracy has been reported to be relatively low. Advanced MR techniques, such as perfusion-weighted imaging (PWI), diffusion-weighted imaging (DWI), and magnetic resonance spectroscopy (MRS), could predict neoplastic histology, but their added value over conventional MRI is still open to debate.
BACKGROUND
We prospectively analyzed 129 patients diagnosed with primary brain tumors (118 gliomas) classified as low-grade in 30 cases and high-grade in 99 cases.
METHODS
Significant differences were obtained in high-grade tumors for conventional MRI variables (necrosis, enhancement, edema, hemorrhage, and neovascularization); high relative cerebral blood volume values (rCBV), low relative apparent diffusion coefficients (rADC), high ratio of N-acetyl-aspartate/creatine at short echo time (TE) and high choline/creatine at long TE. Among conventional MRI variables, the presence of enhancement and necrosis were demonstrated to be the best predictors of high grade in primary brain tumors (sensitivity 95.9%; specificity 70%). The best results in primary brain tumors were obtained for enhancement, necrosis, and rADC (sensitivity 98.9%; specificity 75.9%). Necrosis and enhancement were the only predictors of high grade in gliomas (sensitivity 97.6%; specificity 76%) when all the magnetic resonance variables were combined.
RESULTS
MRI is highly accurate in the assessment of tumor grade. Combining conventional MRI features with advanced MR variables improved tumor grading in primary brain tumors only when rADC was added to the conventional MRI variables.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Brain Neoplasms", "Child", "Diffusion Magnetic Resonance Imaging", "Female", "Glioma", "Humans", "Magnetic Resonance Imaging", "Magnetic Resonance Spectroscopy", "Male", "Middle Aged", "Neoplasm Grading", "Prospective Studies" ]
4300038
Background
Primary brain tumors constitute a heterogeneous group that can be classified according to their histological type and grade of malignancy. The World Health Organization (WHO) classifies primary brain tumors into four different grades of malignancy [1]. Histological tumor grading has several drawbacks, one of which is the need for stereotactic biopsy, an invasive procedure with a certain risk of morbidity and mortality. In addition, this approach is subject to sampling error, and its results depend upon the neuropathologist’s experience [2]. These limitations lend support to research into non-invasive imaging techniques. Although conventional Magnetic Resonance Imaging (MRI) is an established technique for the characterization of brain tumors, it is not completely reliable [3]. Perfusion Weighted Imaging (PWI), diffusion-weighted imaging (DWI), Magnetic Resonance Spectroscopy (MRS) could provide additional information to conventional MRI, as they better reflect histopathology findings [3,4]. The feasibility of PWI, DWI, and MRS for tumor grading has been clearly proved [5–7]. However, their additional value, separately or in different combinations, over conventional MRI has not yet been quantified. The results obtained with different MR techniques are contradictory, as shown for MRS and PWI with respect to diagnostic accuracy in grading tumors [3,4,8–11]. Furthermore, no significant differences have been found in the assessment of tumor grade using advanced techniques such as PWI [12], MRS [13,14], and DWI [9,15]. Although a small number of studies compared these techniques, to our knowledge only one published study has combined all four in a single center [16]. We hypothesized conventional MRI could accurately evaluate the grade of intraaxial brain tumors, and the added value of other MRI techniques is very small. Our aim was to quantify the improvement in diagnostic accuracy resulting from the combination of conventional MRI with PWI, DWI, and MRS.
null
null
Results
Univariate analysis

The rCBV ratio could not be obtained in five patients owing to magnetic susceptibility artifacts resulting from an extensive tumor hemorrhage or a poor adjustment to the gamma curve. Values of rADC were not calculated in five patients because of the presence of extensive necrosis; in this situation, there was not enough solid area to place the ROIs without partial volume effects in the necrotic region. Quantitative MRS results were not taken into consideration in 20 cases, owing to the poor quality of the spectra. In 24 patients, at least one of the metabolic ratios was missed, because the internal references (Cho or H2O) or the metabolite peaks of Cho and/or NAA could not be measured.

The presence of the MRI features was significantly greater in high-grade tumors (p < 0.0001). Statistically significant differences were found in rCBV (p < 0.0001), rADC (p < 0.0001), and the NAA/Cr ratio (p = 0.005) at a short TE and in the Cho/Cr ratio (p = 0.008) at a long TE (Table 2). The Lip peak was significantly present in high-grade tumors (p < 0.0001) (Table 2). The variables enhancement and necrosis showed the highest odds ratio (OR) for classifying high-grade tumors (55.42 and 23.82, respectively) (Table 3).

Table 2. Comparison of perfusion-weighted, diffusion-weighted, and magnetic resonance spectroscopy variables between low-grade and high-grade tumor groups (low-grade: n, range, mean, SD; high-grade: n, range, mean, SD; P value):
- rCBV: 29, 0.00-13.61, 2.09, 3.17; 95, 0.51-19.18, 5.75, 4.10; <0.0001
- rADC: 30, 0.81-2.55, 1.74, 0.44; 94, 0.43-2.63, 1.17, 0.38; <0.0001
- NAA/Cr(a): 23, 0.00-11.73, 2.39, 3.00; 64, 0.00-42.26, 7.28, 9.12; 0.005
- Cho/Cr(b): 24, 0.95-21.43, 3.56, 5.34; 70, 0.81-77.46, 7.95, 14.88; 0.008
- Lipids: 26, N/A, N/A, N/A; 83, N/A, N/A, N/A; <0.0001
- NAA/Cr(b): 24, 0.00-11.61, 1.53, 2.28; 71, 0.00-20.36, 1.49, 3.15; NS
- Cho/Cr(a): 23, 0.07-9.30, 2.00, 2.13; 65, 0.00-30.15, 3.24, 4.91; NS
- Cho/H2O(a): 19, 3.02×10−4-5.88×10−3, 6.60×10−4, 1.29×10−3; 63, 0.00-6.18×10−3, 4.21×10−3, 8.03×10−4; NS
- NAA/H2O(a): 20, 0.00-3.28×10−3, 6.67×10−4, 7.85×10−4; 63, 0.00-1.73×10−2, 1.00×10−3, 2.20×10−3; NS
- Cho/H2O(b): 17, 2.10×10−4-3.98×10−3, 8.39×10−4, 9.00×10−3; 64, 4.74×10−4-8.27×10−3, 8.34×10−4, 1.23×10−3; NS
- NAA/H2O(b): 18, 0.00-1.01×10−3, 3.12×10−4, 2.74×10−3; 67, 0.00-6.86×10−3, 3.98×10−4, 1.18×10−3; NS
- Lactate: 26, N/A, N/A, N/A; 83, N/A, N/A, N/A; NS
Cho, choline; Cr, creatine; NAA, N-acetyl-aspartate; N/A, not available (qualitative variables); NS, not significant; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume. (a) TE = 23 ms. (b) TE = 144 ms.

Table 3. Odds ratios obtained from the univariate analysis of the magnetic resonance variables with significant differences between low-grade and high-grade tumor groups (OR; 95% CI; P value):
- Enhancement: 55.42; 15.58-197.15; <0.0001
- Necrosis: 23.82; 8.43-67.35; <0.0001
- Neovascularization: 7.80; 2.53-24.02; <0.0001
- Edema: 7.43; 3.02-18.26; <0.0001
- Hemorrhage: 13.85; 3.93-48.77; <0.0001
- rCBV: 1.68; 1.23-2.19; <0.0001
- rADC: 0.05; 0.01-0.16; <0.0001
- NAA/Cr(a): 1.21; 1.03-1.42; 0.02
- Cho/Cr(b): 1.05; 0.97-1.14; 0.22
- Lipids: 9.80; 3.56-26.96; <0.0001
CI, confidence interval; OR, odds ratio; Cho, choline; Cr, creatine; NAA, N-acetyl-aspartate; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume. (a) TE = 23 ms. (b) TE = 144 ms.

Combination of the variables obtained by conventional MRI

The imaging features identified as independent predictors of tumor grade were enhancement (OR, 23.37; 95% CI, 5.85-93.25) and necrosis (OR, 9.04; 95% CI, 2.61-31.25). The sensitivity and specificity for the identification of high-grade tumors with these two features combined were 95.9% and 70%, respectively (Table 4). The area under the receiver operating characteristic curve (AUC) was 0.890 (Figure 4).

Table 4. Variables with independent predictive value obtained from combining different magnetic resonance techniques (multivariate logistic regression) in primary brain tumors:
- MRI (N = 129): enhancement OR 23.37 (95% CI 5.85-93.25, P < 0.0001, β 3.15, SE 0.71); necrosis OR 9.04 (95% CI 2.61-31.25, P = 0.001, β 2.20, SE 0.63); AUC 0.890, sensitivity 95.9%, specificity 70%, PPV 91.3%, NPV 84%.
- MRI + PWI + DWI + MRS (N = 71): enhancement OR 11.63 (95% CI 2.35-58.82, P = 0.003, β 2.46, SE 0.81); necrosis OR 8.48 (95% CI 1.95-37.04, P = 0.005, β 2.13, SE 0.77); AUC 0.871, sensitivity 98.2%, specificity 46.7%, PPV 87.3%, NPV 87.5%.
- MRI + PWI + DWI (N = 120): enhancement OR 16.95 (95% CI 3.89-71.43, P < 0.0001, β 2.83, SE 0.75); necrosis OR 5.40 (95% CI 1.32-21.74, P = 0.019, β 1.69, SE 0.72); rADC OR 0.22 (95% CI 0.50-0.97, P = 0.045, β −1.51, SE 0.75); AUC 0.923, sensitivity 98.9%, specificity 75.9%, PPV 92.8%, NPV 95.6%.
AUC, area under the receiver operating characteristic curve; CI, confidence interval; DWI, diffusion-weighted imaging; MRI, magnetic resonance imaging; MRS, MR spectroscopy; N, number of cases from which each classifier was constructed; PWI, perfusion-weighted imaging; rADC, relative apparent diffusion coefficient; OR, odds ratio; PPV, positive predictive value; NPV, negative predictive value.

Figure 4. Receiver operating characteristic curves of the combination of MR imaging features alone (a) or associated with PWI and DWI parameters (b) for grading of primary brain tumors.

Combination of the variables obtained by conventional MRI, PWI, DWI, and MRS

This multivariate logistic regression analysis identified only enhancement (OR, 11.63; 95% CI, 2.35-58.82) and necrosis (OR, 8.48; 95% CI, 1.95-37.04) as independent predictors (Table 4). The sensitivity and specificity of these variables in grading brain tumors were 98.2% and 46.7%, respectively, and the AUC was 0.871. Thus, advanced variables seemed not to provide additional predictive value. However, it is noteworthy that one or more variables were missing in 58 cases, thus considerably reducing the sample size and the power of the analysis.

Combination of the variables obtained by conventional MRI, PWI, and DWI

As most missing data corresponded to MRS, we conducted another analysis excluding the MRS-related variables. In this case, 120 cases remained, and the variables identified as independent predictors of high-grade tumors were enhancement (OR, 16.95; 95% CI, 3.89-71.43), necrosis (OR, 5.40; 95% CI, 1.32-21.74), and low rADC values (OR, 0.22; 95% CI, 0.50-0.97) (Table 4). The sensitivity and specificity obtained with these three variables were 98.9% and 75.9%, respectively. The AUC was 0.923 (Figure 4).

Multivariate analysis of gliomas

As most tumors in our series (91.5%) were gliomas (astrocytoma and oligodendroglioma), the multivariate logistic regression analysis was repeated in this histologic subtype. Independently of the variables included in the analysis, the only predictors of high grade were necrosis and enhancement, with a sensitivity of 97.6% and a specificity of 76% when the conventional MRI, PWI, and DWI variables were combined (Table 5).

Table 5. Variables with independent predictive value obtained from combining different magnetic resonance techniques (multivariate logistic regression) in gliomas:
- MRI (N = 118): enhancement OR 58.82 (95% CI 9.35-333.33, P < 0.0001, β 4.06, SE 0.93); necrosis OR 13.89 (95% CI 2.91-66.67, P = 0.001, β 2.63, SE 0.80); AUC 0.940, sensitivity 97.8%, specificity 76.9%, PPV 93.7%, NPV 90.9%.
- MRI + PWI + DWI + MRS (N = 67): enhancement OR 25 (95% CI 3.60-166.67, P = 0.001, β 2.46, SE 0.99); necrosis OR 8.26 (95% CI 1.53-43.48, P = 0.014, β 2.11, SE 0.86); AUC 0.940, sensitivity 96.2%, specificity 64.3%, PPV 91.1%, NPV 81.8%.
- MRI + PWI + DWI (N = 110): enhancement OR 50 (95% CI 8.13-333.33, P < 0.0001, β 3.94, SE 0.94); necrosis OR 13.89 (95% CI 2.91-66.67, P = 0.001, β 2.64, SE 0.80); AUC 0.940, sensitivity 97.6%, specificity 76%, PPV 93.3%, NPV 90.5%.
AUC, area under the receiver operating characteristic curve; CI, confidence interval; DWI, diffusion-weighted imaging; MRI, magnetic resonance imaging; MRS, MR spectroscopy; N, number of cases from which each classifier was constructed; PWI, perfusion-weighted imaging; OR, odds ratio; PPV, positive predictive value; NPV, negative predictive value.
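The multivariate analyses above fit logistic regression models on imaging predictors (enhancement, necrosis, rADC) and report odds ratios, AUC, sensitivity, and specificity at a 0.5 cutoff. A minimal sketch of that kind of model using scikit-learn on synthetic data; it does not reproduce the study's SPSS forward stepwise selection, and all values are randomly generated placeholders rather than the patient data.

```python
# Minimal sketch of a logistic-regression tumor-grade classifier of the kind
# described above (predictors: enhancement, necrosis, rADC; outcome: high grade).
# The data below are randomly generated placeholders, NOT the study's patients.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
n = 120
enhancement = rng.integers(0, 2, n)          # 0/1 imaging feature
necrosis = rng.integers(0, 2, n)             # 0/1 imaging feature
radc = rng.normal(1.4, 0.4, n)               # relative ADC ratio
# Synthetic outcome loosely tied to the predictors, just so the example runs end to end
logit = -1.0 + 2.5 * enhancement + 1.8 * necrosis - 1.2 * radc
high_grade = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([enhancement, necrosis, radc])
model = LogisticRegression().fit(X, high_grade)

prob = model.predict_proba(X)[:, 1]
pred = (prob >= 0.5).astype(int)             # the study used a cutoff value of 0.5
tn, fp, fn, tp = confusion_matrix(high_grade, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(high_grade, prob)
odds_ratios = np.exp(model.coef_[0])         # per-variable ORs, cf. Tables 4 and 5

print(f"AUC={auc:.3f}  sens={sensitivity:.3f}  spec={specificity:.3f}  ORs={odds_ratios.round(2)}")
```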
Conclusions
Preoperative diagnosis of tumor grade by MRI could assist in treatment planning, which is essential in cases where a histological diagnosis cannot be made. Our work focuses on different types of primary brain tumors, since, in clinical practice, tumor grade is analyzed using MRI with no previous knowledge of histological type. An appropriate analysis of conventional MRI features enables primary brain tumors to be graded with high accuracy. The best results for the prediction of high-grade tumors were obtained by combining the variables enhancement, necrosis, and rADC. Only a slight improvement over conventional MRI criteria was obtained when they were combined with the only advanced MRI variable found to be predictive (rADC). No advanced MR variables seem to add value to conventional MRI alone in the determination of grade in gliomas.
[ "Tumor histology", "Conventional MRI", "Dynamic contrast-enhanced PWI", "DWI", "MRS", "Definition of image variables", "Statistical analysis", "Univariate analysis", "Combination of the variables obtained by conventional MRI", "Combination of the variables obtained by conventional MRI, PWI, DWI, and MRS", "Combination of the variables obtained by conventional MRI, PWI, and DWI", "Multivariate analysis of gliomas" ]
[ "The histology specimen was obtained by surgical resection in 119 cases and by stereotactic biopsy in 10 cases and analyzed by an expert neuropathologist with more than 30 years of experience blinded to radiological assessment.\nBrain tumors were classified as aggressive high-grade tumors (WHO grades III and IV) in 99 cases and low-grade tumors (WHO grades I and II) in the remaining 30 patients (Table 1). We considered grades I and II as low-grade tumors and grades III and IV as high-grade tumors according to the differences in treatment and survival between groups: in general, high-grade tumors present lower survival rates and need complementary therapy after surgery (usually chemotherapy and radiotherapy), whereas in low-grade tumors, survival is higher and the use of complementary therapy remains open to debate.Table 1\nFrequency distribution of histological subtypes of brain tumors\n\nHistological subtype\n\nWHO grade\n\nNo.\n\nPercentage\nAstrocytomaI182.9%II16III20IV70OligodendrogliomaII98.5%III2PNETIV75.4%DNETI21.6%HemangioblastomaI10.8%NeurocytomaII10.8%DNET, dysembryoplastic neuroepithelial tumor; PNET, primitive neuroectodermal tumor.\n\nFrequency distribution of histological subtypes of brain tumors\n\nDNET, dysembryoplastic neuroepithelial tumor; PNET, primitive neuroectodermal tumor.\nIn order to detect possible misclassifications of histological grading, three months after the end of recruitment medical records were reviewed to determine survival, defined as the time elapsed between MRI diagnosis and death or the last admission to our institution. No case of disagreement was found between survival and tumoral grade that indicates histological misclassifications.", "All patients were prospectively examined using a 1.5T MRI scanner (Intera or Achieva, Philips Healthcare, The Netherlands). Sagittal T1-weighted images (647/15 ms [TR/TE] SE) and coronal T2-weighted images (TR 4742/TE, 100 ms; Turbo SE; Fluid-Attenuated Inversion Recovery (FLAIR), 11000/140/2800 ms [TR/TE/TI]) were obtained with a 230-mm, Field Of View (FOV) and matrix size of 512 × 512.\nAfter intravenous administration of a double dose (0.2 mmol/kg) of gadobutrol 1.0 mmol/ml (Gadovist, Bayer Schering Farma, Berlin, Germany), an axial T1-weighted 3D fast-field echo sequence was acquired (TR 16/TE 4.6 ms) with a flip angle of 8°, FOV of 256 × 256 mm, and matrix size of 176 × 288.", "PWI was performed using a dynamic contrast-enhanced T2*-weighted gradient echo. EPI- Echo planar images -EPI- (single shot [TR 1678/TE 30 ms] with an EPI factor of 61, flip angle of 40°, matrix size of 128 × 128, and FOV of 230 mm) were acquired during the first pass of the gadobutrol bolus, which was injected intravenously using a 20-gauge needle at a rate of 4.0 ml/sec.\nA series of 40 acquisitions were performed at 1.7-second intervals. The first acquisition was acquired before injection to establish baseline (precontrast) intensity.\nThe results were transferred to a PC workstation for processing (ViewForum Workstation, release 5.1V1L2; Philips Healthcare). The possible effect of tracer recirculation or leakage due to the disruption of the blood–brain barrier was considered in the mathematical model by fitting a gamma-variate function to the observed 1/T2* relaxation rate curve. 
This gamma-variate function was automatically implemented by the workstation.", "Diffusion-weighted images were obtained with axial multislice single-shot EPI SE sequences as follows: TR 3745 ms/TE 120 ms; EPI factor, 61; matrix size, 128 × 128; FOV, 230 mm; and diffusion gradient encoding in three orthogonal directions. The images and Apparent Diffusion Coefficient (ADC) maps were calculated using b values of 0 and 2500 s/mm2. ADC values were quantified using the PC workstation mentioned above.", "Single-voxel proton MRS was performed in 117 patients. Twelve patients refused to undergo this technique because of the additional examination time involved. The technique used was point-resolved spectroscopy (PRESS) with a TR of 2000 ms and two different TEs (23/144 ms). The measurement of each spectrum was repeated 128 times with a cycling-phase of 16 to improve the signal-to-noise ratio.\nThe size (mean 8.34 cc, range 5.6 – 18.2 cc) and location of the voxels of interest were established in order to position the largest possible voxel within the solid tumor area, with minimal contamination from the surrounding non-tumor tissue and avoiding areas of necrosis, cysts, or hemorrhage as much as possible. We selected single-voxel MRS owing to its lower time requirements, which enabled all the MR sequences to be performed in a single session.\nSpectra were analyzed using custom-designed software [17]. Signal intensity of metabolic peaks, spectral positions, and decay constants were taken into consideration in coupled metabolite peaks. Signals of Choline (Cho), N-acetyl-aspartate (NAA), Creatine (Cr), Lipids (Lip), and Lactate (Lact) were quantified. The same quantification procedure was followed to analyze the water peak, although, in this case, Hankel singular value decomposition was not performed to suppress the water signal.", "The five features evaluated by conventional MRI were as follows: 1) Enhancement, defined as an increased signal in T1-weighted sequences in the tumor after administration of gadolinium; 2) Necrosis (and cystic necrosis), identified as areas within the neoplasm with a signal in T1- and T2-weighted images similar to that of cerebrospinal fluid (on FLAIR images these areas may be hyperintense, owing to excess protein content); 3) Edema, defined as an area of homogenous high signal on FLAIR sequences surrounding the tumor; 4) Neovascularization, defined as the presence of tubular structures within the tumor showing flow-void patterns on T2-weighted images representing abnormal tumor vessels; and 5) Hemorrhage, characterized by an area of magnetic field distortion due to the paramagnetic blood breakdown products on the EPI T2*-weighted images, which were obtained in the first series of PWI before gadolinium reached the cerebral parenchyma (Figures 1 and 2). All five features were assessed dichotomously (presence or absence).Figure 1\nExample of MR images from a patient with glioblastoma (grade IV). a, Contrast-enhanced axial T1-weighted image shows an enhancing mass in the right frontal region. b, Axial FLAIR image shows peritumoral edema. c, Coronal turbo SE T2-weighted image revealing abnormal macroscopic vessels (arrows) within the tumor (neovascularization).Figure 2\nExample of MR images of glioblastoma (grade IV). a-c, Signs of necrosis, identified as regions within the neoplasm with hypointensity on contrast-enhanced axial T1-weighted images (a) and hyperintensity on T2-weighted images (b) and FLAIR images (c). 
d, Signs of hemorrhage seen as an area of hypointensity (arrow) due the paramagnetic blood breakdown products on the axial EPI T2*-weighted images.\n\nExample of MR images from a patient with glioblastoma (grade IV). a, Contrast-enhanced axial T1-weighted image shows an enhancing mass in the right frontal region. b, Axial FLAIR image shows peritumoral edema. c, Coronal turbo SE T2-weighted image revealing abnormal macroscopic vessels (arrows) within the tumor (neovascularization).\n\nExample of MR images of glioblastoma (grade IV). a-c, Signs of necrosis, identified as regions within the neoplasm with hypointensity on contrast-enhanced axial T1-weighted images (a) and hyperintensity on T2-weighted images (b) and FLAIR images (c). d, Signs of hemorrhage seen as an area of hypointensity (arrow) due the paramagnetic blood breakdown products on the axial EPI T2*-weighted images.\nThe relative Cerebral Blood Volume (rCBV) was calculated using PWI (Figure 3) based on a region of interest (ROI) centered on the highest tumor rCBV value in the parametric map. This ROI was drawn as large as possible in an attempt to include all voxels with the highest and similar values of CBV. Unprocessed perfusion images and conventional post-gadolinium T1-weighted MRI images were used to ensure that ROIs were not placed over blood vessels. Tumor CBV was normalized to contralateral white matter CBV, on which an ROI of the same dimensions was drawn.Figure 3\nExample of PWI and DWI assessment in cases of glioblastomas (grade IV). a, Calculation of the rCBV ratio. The figure shows the ROI location covering the maximal values of CBV (T) in the parametric map. A similar ROI was placed in the contralateral white matter to normalize the image (N). This image corresponds to the conventional MRI images of Figure 1. b, Calculation of rADC. Example of five different ROIs placed within the solid area of the tumor in the ADC map (represented by number 1) and in the contralateral healthy area (represented by number 2). The MRI features of this case are shown in Figure 2.\n\nExample of PWI and DWI assessment in cases of glioblastomas (grade IV). a, Calculation of the rCBV ratio. The figure shows the ROI location covering the maximal values of CBV (T) in the parametric map. A similar ROI was placed in the contralateral white matter to normalize the image (N). This image corresponds to the conventional MRI images of Figure 1. b, Calculation of rADC. Example of five different ROIs placed within the solid area of the tumor in the ADC map (represented by number 1) and in the contralateral healthy area (represented by number 2). The MRI features of this case are shown in Figure 2.\nThe relative Apparent Diffusion Coefficient (rADC) was calculated using DWI (Figure 3). Five different round-shaped ROIs ranging from 9.1 mm2 to 9.7 mm2 were placed in the solid tumor area. A further five ROIs with the same dimensions were placed in the contralateral normal cerebral area. The rADC was defined as the ratio of averaged ADCs between tumors and normal areas [18].\nThe Cho/Cr, NAA/Cr, Cho/H2O, and NAA/H2O ratios and the presence or absence of Lip or Lact were measured by MRS at long and short TE. We included the water peak as an internal reference following previously published data that report this approach to be a robust method for standardization [19]. Metabolic peak positions were assigned as follows: Cho, 3.22 ppm; Cr, 3.02 ppm; NAA, 2.02 ppm; Lip, 0.5-1.5 ppm. 
Lact (1.33 ppm) was identified as an inverted doublet at 144 ppm.\nAll variables obtained were assessed by consensus of two expert neuroradiologists with more than 10 years of experience (JG and PF).", "In the univariate analysis, continuous variables were assessed using the Mann–Whitney test and qualitative variables using a two-tailed Fisher exact test.\nA multivariate logistic regression model was applied to assess the combined and independent values of predictor variables. We used a forward stepwise selection procedure with p-to-enter and p-to-remove value thresholds of p < 0.05 and p > 0.01 and a cutoff value of 0.5. Sensitivity, specificity, positive predictive value, negative predictive value, and a Receiver Operating Characteristic (ROC) curve were obtained for the predictor variables.\nStatistical procedures were performed with SPSS version 13.0 (SPSS Inc, Chicago, Illinois, USA).", "The rCBV ratio could not be obtained in five patients owing to magnetic susceptibility artifacts resulting from an extensive tumor hemorrhage or a poor adjustment to the gamma curve.\nValues of rADC were not calculated in five patients because of the presence of extensive necrosis. In this situation, there was not enough solid area to place the ROIs without partial volume effects in the necrotic region. Quantitative MRS results were not taken into consideration in 20 cases, owing to the poor quality of the spectra. In 24 patients, at least one of the metabolic ratios was missed, because the internal references (Cho or H2O) or the metabolite peaks of Cho and/or NAA could not be measured.\nThe presence of the MRI features was significantly greater in high-grade tumors (p < 0.0001). Statistically significant differences were found in rCBV (p < 0.0001), rADC (p < 0.0001), and the NAA/Cr (p = 0.005) ratio at a short TE and in the Cho/Cr ratio (p = 0.008) at a long TE (Table 2). The Lip peak was significantly present in high-grade tumors (p < 0.0001) (Table 2). 
The variables enhancement and necrosis showed the highest Odds Ratio (OR) for classifying high-grade tumors (55.42 and 23.82, respectively) (Table 3).Table 2\nComparison of perfusion-weighted, diffusion-weighted, and magnetic resonance spectroscopy variables between low-grade and high-grade tumor groups\n\nLow-grade tumor\n\nHigh-grade tumor\n\nVariable\n\nn\n\nRange\n\nMean\n\nSD\n\nn\n\nRange\n\nMean\n\nSD\n\nP\nValue\nrCBV290.00-13.612.093.17950.51-19.185.754.10<0.0001rADC300.81-2.551.740.44940.43-2.631.170.38<0.0001NAA/Cra\n230.00-11.732.393.00640.00-42.267.289.120.005Cho/Crb\n240.95-21.433.565.34700.81-77.467.9514.880.008Lipids26N/AN/AN/A83N/AN/AN/A<0.0001NAA/Crb\n240.00-11.611.532.28710.00-20.361.493.15NSCho/Cra\n230.07-9.302.002,13650.00-30.153.244.91NSCho/H2Oa\n193.02×10−4-5.88×10−3\n6.60×10−4\n1.29×10−3\n630.00-6.18×10−3\n4.21×10−3\n8.03×10−4\nNSNAA/H2Oa\n200.00-3.28×10−3\n6.67×10−4\n7.85×10−4\n630.00-1.73×10−2\n1.00×10−3\n2.20×10−3\nNSCho/H2Ob\n172.10x10−4-3.98×10−3\n8.39×10−4\n9.00×10−3\n644.74×10−4-8.27×10−3\n8.34×10−4\n1.23×10−3\nNSNAA/H2Ob\n180.00-1.01×10−3\n3.12×10−4\n2.74×10−3\n670.00-6.86×10−3\n3.98×10−4\n1,18×10−3\nNSLactate26N/AN/AN/A83N/AN/AN/ANSCho, choline; Cr, creatine; NAA, N-acetyl-aspartate; N/A, not available (qualitative variables); NS, not significant; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume.\naTE = 23 ms.\nbTE = 144 ms.Table 3\nOdds ratios obtained from the univariate analysis of the magnetic resonance variables with significant differences between low-grade and high-grade tumor groups\n\nVariable\n\nOR\n\n95% CI\n\nP\nvalue\nEnhancement55.4215.58-197.15<0.0001Necrosis23.828.43-67.35<0.0001Neovascularization7.802.53-24.02<0.0001Edema7.433.02-18.26<0.0001Hemorrhage13.853.93-48.77<0.0001rCBV1.681.23-2.19<0.0001rADC0.050.01-0.16<0.0001NAA/Cra\n1.211.03-1.420.02Cho/Crb\n1.050.97-1.140.22Lipids9.803.56-26.96<0.0001CI, confidence interval; OR, odds ratio; Cho, choline; Cr, creatine; NAA, N-acetyl-aspartate; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume.\naTE = 23 ms.\nbTE = 144 ms.\n\nComparison of perfusion-weighted, diffusion-weighted, and magnetic resonance spectroscopy variables between low-grade and high-grade tumor groups\n\nCho, choline; Cr, creatine; NAA, N-acetyl-aspartate; N/A, not available (qualitative variables); NS, not significant; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume.\n\naTE = 23 ms.\n\nbTE = 144 ms.\n\nOdds ratios obtained from the univariate analysis of the magnetic resonance variables with significant differences between low-grade and high-grade tumor groups\n\nCI, confidence interval; OR, odds ratio; Cho, choline; Cr, creatine; NAA, N-acetyl-aspartate; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume.\n\naTE = 23 ms.\n\nbTE = 144 ms.", "The imaging features identified as independent predictors of tumor grade were enhancement (OR, 23.37; 95% Confidence Interval -CI, 5.85-93.25) and necrosis (OR, 9.04; 95% CI, 2.61 -31.25). The sensitivity and specificity for the identification of high-grade tumors with these two features combined were 95.9% and 70%, respectively (Table 4). 
The Area Under the receiver operating characteristic Curve (AUC) was 0.890 (Figure 4).Table 4\nVariables with independent predictive value obtained from combining different magnetic resonance techniques (multivariate logistic regression) in primary brain tumors\n\nMR Technique\n\nN\n\nPredictor variables\n\nOR\n\n95% CI\n\nP\n\nCoefficient (β)\n\nStandard error\n\nAUC\n\nSensitivity (%)\n\nSpecificity (%)\n\nPPV (%)\n\nNPV (%)\nMRI129Enhancement23.375.85-93.25<0.00013.150.710.89095.97091.384Necrosis9.042.61-31.250.0012,200.63MRI71Enhancement11.632.35-58.820.0032.460.810.87198.246.787.387.5PWIDWINecrosis8.481.95-37.040.0052.130.77MRSMRIEnhancement16.953.89-71.43<0.00012.830.75PWI120Necrosis5.401.32-21.740.0191.690.720.92398.975.992.895.6DWIrADC0.220.50-0.970.045−1.510.75AUC, area under the receiver operating characteristic curve; CI, confidence interval; DWI, diffusion-weighted imaging; MRI, magnetic resonance imaging; MRS, MR spectroscopy; N, number of cases from which each classifier was constructed; PWI, perfusion-weighted imaging; rADC, relative apparent diffusion coefficient; OR, odds ratio; PPV, positive predictive value; NPV, negative predictive value.Figure 4\nReceiving operating characteristic curves of the combination of MR imaging features alone (a) or associated with PWI and DWI parameters (b) for grading of primary brain tumors.\n\n\nVariables with independent predictive value obtained from combining different magnetic resonance techniques (multivariate logistic regression) in primary brain tumors\n\nAUC, area under the receiver operating characteristic curve; CI, confidence interval; DWI, diffusion-weighted imaging; MRI, magnetic resonance imaging; MRS, MR spectroscopy; N, number of cases from which each classifier was constructed; PWI, perfusion-weighted imaging; rADC, relative apparent diffusion coefficient; OR, odds ratio; PPV, positive predictive value; NPV, negative predictive value.\n\nReceiving operating characteristic curves of the combination of MR imaging features alone (a) or associated with PWI and DWI parameters (b) for grading of primary brain tumors.\n", "This multivariate logistic regression analysis identified only enhancement (OR, 11.63; 95% CI, 2.35-58.82) and necrosis (OR, 8.48; 95% CI, 1.95-37.04) as independent predictors (Table 4). The sensitivity and specificity of these variables in grading brain tumors was 98.2% and 46.7%, respectively, and the AUC was 0.871. Thus, advanced variables seemed not to provide additional predictive value. However, it is noteworthy that one or more variables were missing in 58 cases, thus considerably reducing the sample size and the power of the analysis.", "As most missing data corresponded to MRS, we conducted another analysis excluding the MRS–related variables. In this case, the number of cases remaining was 120, and the variables identified as independent predictors of high-grade tumors were enhancement (OR, 16.95; 95% CI, 3.89-71.43), necrosis (OR, 5.40; 95% CI, 1.32-21.74), and low rADC values (OR,0.22; 95% CI, 0.50-0.97) (Table 4). The sensitivity and specificity obtained with these three variables were 98.9% and 75.9%, respectively. The AUC was 0.923 (Figure 4).", "As most tumors in our series (91.5%) were gliomas (astrocytoma and oligodendroglioma), the multivariate logistic regression analysis was repeated in this histologic subtype. 
Independently of the variables included in the analysis, the only predictors of high grade were necrosis and enhancement, with a sensitivity of 97.6% and specificity of 76% when the conventional MRI, PWI, and DWI variables were combined (Table 5).Table 5\nVariables with independent predictive value obtained from combining different magnetic resonance techniques (multivariate logistic regression) in gliomas\n\nTechnique\n\nN\n\nPredictor variables\n\nOR\n\n95% CI\n\nP\n\nCoefficient (β)\n\nStandard error\n\nAUC\n\nSensitivity (%)\n\nSpecificity (%)\n\nPPV (%)\n\nNPV (%)\nMRI118Enhancement58.829.35-333.33<0.00014.060.930.94097.876.993.790.9Necrosis13.892.91-66.670.0012,630.80MRI67Enhancement253.60-166.670.0012.460.990.94096.264.391.181.8PWIDWINecrosis8.261.53-43.480.0142.110.86MRSMRI110Enhancement508.13-333.33<0.00013.940.940.94097.67693.390.5PWIDWINecrosis13.892.91- 66.670.0012.640.80AUC, area under the receiver operating characteristic curve; CI, confidence interval; DWI, diffusion-weighted imaging; MRI, magnetic resonance imaging; MRS, MR spectroscopy; N, number of cases from which each classifier was constructed; PWI, perfusion-weighted imaging; rADC, relative apparent diffusion coefficient; OR, odds ratio; PPV, positive predictive value; NPV, negative predictive value.\n\nVariables with independent predictive value obtained from combining different magnetic resonance techniques (multivariate logistic regression) in gliomas\n\nAUC, area under the receiver operating characteristic curve; CI, confidence interval; DWI, diffusion-weighted imaging; MRI, magnetic resonance imaging; MRS, MR spectroscopy; N, number of cases from which each classifier was constructed; PWI, perfusion-weighted imaging; rADC, relative apparent diffusion coefficient; OR, odds ratio; PPV, positive predictive value; NPV, negative predictive value." ]
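The methods texts in the array above define rCBV as the tumor CBV normalized to a same-sized ROI in contralateral white matter, and rADC as the ratio of averaged ADC values from five tumor ROIs to five contralateral ROIs. A minimal NumPy sketch of those ratio calculations; the ROI values are invented placeholders, not patient measurements.

```python
# Sketch of the ROI-ratio normalizations described in the methods texts:
# rCBV = CBV in the hottest tumor ROI / same-size ROI in contralateral white matter,
# rADC = mean ADC of 5 tumor ROIs / mean ADC of 5 contralateral ROIs.
# Values are invented placeholders for illustration only.
import numpy as np

tumor_cbv_roi = np.array([8.2, 7.9, 8.5, 8.1])            # voxel CBV values in the tumor ROI
contralateral_cbv_roi = np.array([1.4, 1.5, 1.3, 1.4])     # matched ROI in normal white matter
rcbv = tumor_cbv_roi.mean() / contralateral_cbv_roi.mean()

tumor_adc_rois = np.array([0.74, 0.69, 0.71, 0.76, 0.70])          # x10^-3 mm^2/s, 5 tumor ROIs
contralateral_adc_rois = np.array([0.81, 0.79, 0.83, 0.80, 0.82])  # 5 contralateral ROIs
radc = tumor_adc_rois.mean() / contralateral_adc_rois.mean()

print(f"rCBV = {rcbv:.2f}, rADC = {radc:.2f}")   # low rADC favored high grade in this series
```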
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Tumor histology", "Conventional MRI", "Dynamic contrast-enhanced PWI", "DWI", "MRS", "Definition of image variables", "Statistical analysis", "Results", "Univariate analysis", "Combination of the variables obtained by conventional MRI", "Combination of the variables obtained by conventional MRI, PWI, DWI, and MRS", "Combination of the variables obtained by conventional MRI, PWI, and DWI", "Multivariate analysis of gliomas", "Discussion", "Conclusions" ]
[ "Primary brain tumors constitute a heterogeneous group that can be classified according to their histological type and grade of malignancy. The World Health Organization (WHO) classifies primary brain tumors into four different grades of malignancy [1]. Histological tumor grading has several drawbacks, one of which is the need for stereotactic biopsy, an invasive procedure with a certain risk of morbidity and mortality. In addition, this approach is subject to sampling error, and its results depend upon the neuropathologist’s experience [2]. These limitations lend support to research into non-invasive imaging techniques.\nAlthough conventional Magnetic Resonance Imaging (MRI) is an established technique for the characterization of brain tumors, it is not completely reliable [3]. Perfusion Weighted Imaging (PWI), diffusion-weighted imaging (DWI), Magnetic Resonance Spectroscopy (MRS) could provide additional information to conventional MRI, as they better reflect histopathology findings [3,4].\nThe feasibility of PWI, DWI, and MRS for tumor grading has been clearly proved [5–7]. However, their additional value, separately or in different combinations, over conventional MRI has not yet been quantified.\nThe results obtained with different MR techniques are contradictory, as shown for MRS and PWI with respect to diagnostic accuracy in grading tumors [3,4,8–11]. Furthermore, no significant differences have been found in the assessment of tumor grade using advanced techniques such as PWI [12], MRS [13,14], and DWI [9,15]. Although a small number of studies compared these techniques, to our knowledge only one published study has combined all four in a single center [16].\nWe hypothesized conventional MRI could accurately evaluate the grade of intraaxial brain tumors, and the added value of other MRI techniques is very small. Our aim was to quantify the improvement in diagnostic accuracy resulting from the combination of conventional MRI with PWI, DWI, and MRS.", "The study population comprised 129 patients (71 men and 58 women; mean age 52.7 years, range 11 to 84 years) diagnosed with primary brain tumors (the only inclusion criterion) who were consecutively recruited between February 2004 and April 2009 at our institution. The exclusion criteria were as follows: presence of non-neoplastic brain masses; absence of histopathology data; extensive hemorrhage that prevented evaluation by PWI, DWI, and MRS; and previous surgical intervention, chemotherapy, or radiotherapy.\nThe institutional research and ethics boards of Hospital General Universitario Gregorio Marañón approved the study, and all the patients gave their written informed consent.\n Tumor histology The histology specimen was obtained by surgical resection in 119 cases and by stereotactic biopsy in 10 cases and analyzed by an expert neuropathologist with more than 30 years of experience blinded to radiological assessment.\nBrain tumors were classified as aggressive high-grade tumors (WHO grades III and IV) in 99 cases and low-grade tumors (WHO grades I and II) in the remaining 30 patients (Table 1). 
We considered grades I and II as low-grade tumors and grades III and IV as high-grade tumors according to the differences in treatment and survival between groups: in general, high-grade tumors present lower survival rates and need complementary therapy after surgery (usually chemotherapy and radiotherapy), whereas in low-grade tumors, survival is higher and the use of complementary therapy remains open to debate.Table 1\nFrequency distribution of histological subtypes of brain tumors\n\nHistological subtype\n\nWHO grade\n\nNo.\n\nPercentage\nAstrocytomaI182.9%II16III20IV70OligodendrogliomaII98.5%III2PNETIV75.4%DNETI21.6%HemangioblastomaI10.8%NeurocytomaII10.8%DNET, dysembryoplastic neuroepithelial tumor; PNET, primitive neuroectodermal tumor.\n\nFrequency distribution of histological subtypes of brain tumors\n\nDNET, dysembryoplastic neuroepithelial tumor; PNET, primitive neuroectodermal tumor.\nIn order to detect possible misclassifications of histological grading, three months after the end of recruitment medical records were reviewed to determine survival, defined as the time elapsed between MRI diagnosis and death or the last admission to our institution. No case of disagreement was found between survival and tumoral grade that indicates histological misclassifications.\nThe histology specimen was obtained by surgical resection in 119 cases and by stereotactic biopsy in 10 cases and analyzed by an expert neuropathologist with more than 30 years of experience blinded to radiological assessment.\nBrain tumors were classified as aggressive high-grade tumors (WHO grades III and IV) in 99 cases and low-grade tumors (WHO grades I and II) in the remaining 30 patients (Table 1). We considered grades I and II as low-grade tumors and grades III and IV as high-grade tumors according to the differences in treatment and survival between groups: in general, high-grade tumors present lower survival rates and need complementary therapy after surgery (usually chemotherapy and radiotherapy), whereas in low-grade tumors, survival is higher and the use of complementary therapy remains open to debate.Table 1\nFrequency distribution of histological subtypes of brain tumors\n\nHistological subtype\n\nWHO grade\n\nNo.\n\nPercentage\nAstrocytomaI182.9%II16III20IV70OligodendrogliomaII98.5%III2PNETIV75.4%DNETI21.6%HemangioblastomaI10.8%NeurocytomaII10.8%DNET, dysembryoplastic neuroepithelial tumor; PNET, primitive neuroectodermal tumor.\n\nFrequency distribution of histological subtypes of brain tumors\n\nDNET, dysembryoplastic neuroepithelial tumor; PNET, primitive neuroectodermal tumor.\nIn order to detect possible misclassifications of histological grading, three months after the end of recruitment medical records were reviewed to determine survival, defined as the time elapsed between MRI diagnosis and death or the last admission to our institution. No case of disagreement was found between survival and tumoral grade that indicates histological misclassifications.\n Conventional MRI All patients were prospectively examined using a 1.5T MRI scanner (Intera or Achieva, Philips Healthcare, The Netherlands). 
Sagittal T1-weighted images (647/15 ms [TR/TE] SE) and coronal T2-weighted images (TR 4742/TE, 100 ms; Turbo SE; Fluid-Attenuated Inversion Recovery (FLAIR), 11000/140/2800 ms [TR/TE/TI]) were obtained with a 230-mm, Field Of View (FOV) and matrix size of 512 × 512.\nAfter intravenous administration of a double dose (0.2 mmol/kg) of gadobutrol 1.0 mmol/ml (Gadovist, Bayer Schering Farma, Berlin, Germany), an axial T1-weighted 3D fast-field echo sequence was acquired (TR 16/TE 4.6 ms) with a flip angle of 8°, FOV of 256 × 256 mm, and matrix size of 176 × 288.\nAll patients were prospectively examined using a 1.5T MRI scanner (Intera or Achieva, Philips Healthcare, The Netherlands). Sagittal T1-weighted images (647/15 ms [TR/TE] SE) and coronal T2-weighted images (TR 4742/TE, 100 ms; Turbo SE; Fluid-Attenuated Inversion Recovery (FLAIR), 11000/140/2800 ms [TR/TE/TI]) were obtained with a 230-mm, Field Of View (FOV) and matrix size of 512 × 512.\nAfter intravenous administration of a double dose (0.2 mmol/kg) of gadobutrol 1.0 mmol/ml (Gadovist, Bayer Schering Farma, Berlin, Germany), an axial T1-weighted 3D fast-field echo sequence was acquired (TR 16/TE 4.6 ms) with a flip angle of 8°, FOV of 256 × 256 mm, and matrix size of 176 × 288.\n Dynamic contrast-enhanced PWI PWI was performed using a dynamic contrast-enhanced T2*-weighted gradient echo. EPI- Echo planar images -EPI- (single shot [TR 1678/TE 30 ms] with an EPI factor of 61, flip angle of 40°, matrix size of 128 × 128, and FOV of 230 mm) were acquired during the first pass of the gadobutrol bolus, which was injected intravenously using a 20-gauge needle at a rate of 4.0 ml/sec.\nA series of 40 acquisitions were performed at 1.7-second intervals. The first acquisition was acquired before injection to establish baseline (precontrast) intensity.\nThe results were transferred to a PC workstation for processing (ViewForum Workstation, release 5.1V1L2; Philips Healthcare). The possible effect of tracer recirculation or leakage due to the disruption of the blood–brain barrier was considered in the mathematical model by fitting a gamma-variate function to the observed 1/T2* relaxation rate curve. This gamma-variate function was automatically implemented by the workstation.\nPWI was performed using a dynamic contrast-enhanced T2*-weighted gradient echo. EPI- Echo planar images -EPI- (single shot [TR 1678/TE 30 ms] with an EPI factor of 61, flip angle of 40°, matrix size of 128 × 128, and FOV of 230 mm) were acquired during the first pass of the gadobutrol bolus, which was injected intravenously using a 20-gauge needle at a rate of 4.0 ml/sec.\nA series of 40 acquisitions were performed at 1.7-second intervals. The first acquisition was acquired before injection to establish baseline (precontrast) intensity.\nThe results were transferred to a PC workstation for processing (ViewForum Workstation, release 5.1V1L2; Philips Healthcare). The possible effect of tracer recirculation or leakage due to the disruption of the blood–brain barrier was considered in the mathematical model by fitting a gamma-variate function to the observed 1/T2* relaxation rate curve. This gamma-variate function was automatically implemented by the workstation.\n DWI Diffusion-weighted images were obtained with axial multislice single-shot EPI SE sequences as follows: TR 3745 ms/TE 120 ms; EPI factor, 61; matrix size, 128 × 128; FOV, 230 mm; and diffusion gradient encoding in three orthogonal directions. 
The images and Apparent Diffusion Coefficient (ADC) maps were calculated using b values of 0 and 2500 s/mm2. ADC values were quantified using the PC workstation mentioned above.\nDiffusion-weighted images were obtained with axial multislice single-shot EPI SE sequences as follows: TR 3745 ms/TE 120 ms; EPI factor, 61; matrix size, 128 × 128; FOV, 230 mm; and diffusion gradient encoding in three orthogonal directions. The images and Apparent Diffusion Coefficient (ADC) maps were calculated using b values of 0 and 2500 s/mm2. ADC values were quantified using the PC workstation mentioned above.\n MRS Single-voxel proton MRS was performed in 117 patients. Twelve patients refused to undergo this technique because of the additional examination time involved. The technique used was point-resolved spectroscopy (PRESS) with a TR of 2000 ms and two different TEs (23/144 ms). The measurement of each spectrum was repeated 128 times with a cycling-phase of 16 to improve the signal-to-noise ratio.\nThe size (mean 8.34 cc, range 5.6 – 18.2 cc) and location of the voxels of interest were established in order to position the largest possible voxel within the solid tumor area, with minimal contamination from the surrounding non-tumor tissue and avoiding areas of necrosis, cysts, or hemorrhage as much as possible. We selected single-voxel MRS owing to its lower time requirements, which enabled all the MR sequences to be performed in a single session.\nSpectra were analyzed using custom-designed software [17]. Signal intensity of metabolic peaks, spectral positions, and decay constants were taken into consideration in coupled metabolite peaks. Signals of Choline (Cho), N-acetyl-aspartate (NAA), Creatine (Cr), Lipids (Lip), and Lactate (Lact) were quantified. The same quantification procedure was followed to analyze the water peak, although, in this case, Hankel singular value decomposition was not performed to suppress the water signal.\nSingle-voxel proton MRS was performed in 117 patients. Twelve patients refused to undergo this technique because of the additional examination time involved. The technique used was point-resolved spectroscopy (PRESS) with a TR of 2000 ms and two different TEs (23/144 ms). The measurement of each spectrum was repeated 128 times with a cycling-phase of 16 to improve the signal-to-noise ratio.\nThe size (mean 8.34 cc, range 5.6 – 18.2 cc) and location of the voxels of interest were established in order to position the largest possible voxel within the solid tumor area, with minimal contamination from the surrounding non-tumor tissue and avoiding areas of necrosis, cysts, or hemorrhage as much as possible. We selected single-voxel MRS owing to its lower time requirements, which enabled all the MR sequences to be performed in a single session.\nSpectra were analyzed using custom-designed software [17]. Signal intensity of metabolic peaks, spectral positions, and decay constants were taken into consideration in coupled metabolite peaks. Signals of Choline (Cho), N-acetyl-aspartate (NAA), Creatine (Cr), Lipids (Lip), and Lactate (Lact) were quantified. 
 MRS\nSingle-voxel proton MRS was performed in 117 patients. Twelve patients refused to undergo this technique because of the additional examination time involved. The technique used was point-resolved spectroscopy (PRESS) with a TR of 2000 ms and two different TEs (23/144 ms). The measurement of each spectrum was repeated 128 times with a 16-step phase cycle to improve the signal-to-noise ratio.\nThe size (mean 8.34 cc, range 5.6 – 18.2 cc) and location of the voxels of interest were chosen so as to position the largest possible voxel within the solid tumor area, with minimal contamination from the surrounding non-tumor tissue and avoiding areas of necrosis, cysts, or hemorrhage as much as possible. We selected single-voxel MRS owing to its lower time requirements, which enabled all the MR sequences to be performed in a single session.\nSpectra were analyzed using custom-designed software [17]. Signal intensity of metabolic peaks, spectral positions, and decay constants were taken into consideration for coupled metabolite peaks. Signals of Choline (Cho), N-acetyl-aspartate (NAA), Creatine (Cr), Lipids (Lip), and Lactate (Lact) were quantified. The same quantification procedure was followed to analyze the water peak, although, in this case, Hankel singular value decomposition was not performed to suppress the water signal.
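The custom quantification software [17] is not reproduced here; the toy sketch below only illustrates how peak areas around the assigned chemical-shift positions translate into the metabolite ratios used later in the analysis. The synthetic spectrum, peak widths, and integration windows are assumptions.

```python
# Hedged sketch: integrate synthetic metabolite peaks and form the ratios used in this study.
import numpy as np

ppm = np.linspace(0.0, 4.5, 4096)
dppm = ppm[1] - ppm[0]

def peak(center, amplitude, width=0.04):
    """Gaussian stand-in for a metabolite resonance at a given chemical shift."""
    return amplitude * np.exp(-((ppm - center) ** 2) / (2.0 * width ** 2))

# Synthetic long-TE spectrum with peaks at the positions assigned in this study.
spectrum = peak(3.22, 1.8) + peak(3.02, 1.1) + peak(2.02, 0.9)

def area(center, half_window=0.10):
    """Integrate the spectrum over a small window around the peak position."""
    mask = np.abs(ppm - center) < half_window
    return float(spectrum[mask].sum() * dppm)

cho, cr, naa = area(3.22), area(3.02), area(2.02)
print({"Cho/Cr": cho / cr, "NAA/Cr": naa / cr})
```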
 Definition of image variables\nThe five features evaluated by conventional MRI were as follows: 1) Enhancement, defined as an increased signal in T1-weighted sequences in the tumor after administration of gadolinium; 2) Necrosis (and cystic necrosis), identified as areas within the neoplasm with a signal in T1- and T2-weighted images similar to that of cerebrospinal fluid (on FLAIR images these areas may be hyperintense, owing to excess protein content); 3) Edema, defined as an area of homogeneous high signal on FLAIR sequences surrounding the tumor; 4) Neovascularization, defined as the presence of tubular structures within the tumor showing flow-void patterns on T2-weighted images, representing abnormal tumor vessels; and 5) Hemorrhage, characterized by an area of magnetic field distortion due to the paramagnetic blood breakdown products on the EPI T2*-weighted images, which were obtained in the first series of PWI before gadolinium reached the cerebral parenchyma (Figures 1 and 2). All five features were assessed dichotomously (presence or absence).\nFigure 1\nExample of MR images from a patient with glioblastoma (grade IV). a, Contrast-enhanced axial T1-weighted image shows an enhancing mass in the right frontal region. b, Axial FLAIR image shows peritumoral edema. c, Coronal turbo SE T2-weighted image revealing abnormal macroscopic vessels (arrows) within the tumor (neovascularization).\nFigure 2\nExample of MR images of glioblastoma (grade IV). a-c, Signs of necrosis, identified as regions within the neoplasm with hypointensity on contrast-enhanced axial T1-weighted images (a) and hyperintensity on T2-weighted images (b) and FLAIR images (c). d, Signs of hemorrhage seen as an area of hypointensity (arrow) due to the paramagnetic blood breakdown products on the axial EPI T2*-weighted images.
The relative Cerebral Blood Volume (rCBV) was calculated using PWI (Figure 3) based on a region of interest (ROI) centered on the highest tumor rCBV value in the parametric map. This ROI was drawn as large as possible in an attempt to include all voxels with the highest and similar values of CBV. Unprocessed perfusion images and conventional post-gadolinium T1-weighted MRI images were used to ensure that ROIs were not placed over blood vessels. Tumor CBV was normalized to contralateral white matter CBV, on which an ROI of the same dimensions was drawn.\nFigure 3\nExample of PWI and DWI assessment in cases of glioblastomas (grade IV). a, Calculation of the rCBV ratio. The figure shows the ROI location covering the maximal values of CBV (T) in the parametric map. A similar ROI was placed in the contralateral white matter to normalize the image (N). This image corresponds to the conventional MRI images of Figure 1. b, Calculation of rADC. Example of five different ROIs placed within the solid area of the tumor in the ADC map (represented by number 1) and in the contralateral healthy area (represented by number 2). The MRI features of this case are shown in Figure 2.\nThe relative Apparent Diffusion Coefficient (rADC) was calculated using DWI (Figure 3). Five different round-shaped ROIs ranging from 9.1 mm2 to 9.7 mm2 were placed in the solid tumor area. A further five ROIs with the same dimensions were placed in the contralateral normal cerebral area. The rADC was defined as the ratio of averaged ADCs between tumors and normal areas [18].
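As a schematic of the two normalizations just described (not the workstation's actual ROI tooling), the following sketch computes an rCBV ratio from one tumor and one contralateral ROI mean, and an rADC ratio from five ROI means per side; all numbers are made-up placeholders.

```python
# Hedged sketch of the rCBV and rADC ratio calculations; ROI means are placeholder numbers.
import numpy as np

# rCBV: single ROI over the hottest tumor CBV region, normalized to a same-sized
# ROI in contralateral white matter.
tumor_cbv_roi_mean = 7.9          # arbitrary units from the CBV parametric map
contralateral_cbv_roi_mean = 1.4
rcbv_ratio = tumor_cbv_roi_mean / contralateral_cbv_roi_mean

# rADC: five ROIs in the solid tumor and five in contralateral normal tissue;
# the ratio of the averaged ADCs defines rADC [18].
tumor_adc_rois = np.array([0.92, 0.88, 1.01, 0.95, 0.90])          # x10^-3 mm^2/s
contralateral_adc_rois = np.array([0.78, 0.80, 0.76, 0.79, 0.81])  # x10^-3 mm^2/s
radc_ratio = tumor_adc_rois.mean() / contralateral_adc_rois.mean()

print(round(rcbv_ratio, 2), round(radc_ratio, 2))
```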
The Cho/Cr, NAA/Cr, Cho/H2O, and NAA/H2O ratios and the presence or absence of Lip or Lact were measured by MRS at long and short TE. We included the water peak as an internal reference, following previously published data reporting this approach to be a robust method for standardization [19]. Metabolic peak positions were assigned as follows: Cho, 3.22 ppm; Cr, 3.02 ppm; NAA, 2.02 ppm; Lip, 0.5-1.5 ppm. Lact (1.33 ppm) was identified as an inverted doublet at the long TE of 144 ms.\nAll variables obtained were assessed by consensus of two expert neuroradiologists with more than 10 years of experience (JG and PF).\n Statistical analysis\nIn the univariate analysis, continuous variables were assessed using the Mann–Whitney test and qualitative variables using a two-tailed Fisher exact test.\nA multivariate logistic regression model was applied to assess the combined and independent value of the predictor variables. We used a forward stepwise selection procedure with p-to-enter and p-to-remove thresholds of p < 0.05 and p > 0.01, and a cutoff value of 0.5. Sensitivity, specificity, positive predictive value, negative predictive value, and a Receiver Operating Characteristic (ROC) curve were obtained for the predictor variables.\nStatistical procedures were performed with SPSS version 13.0 (SPSS Inc, Chicago, Illinois, USA).
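The analyses were run in SPSS; the sketch below is a rough Python analogue of the same workflow on synthetic data (Mann–Whitney and Fisher exact tests, a two-predictor logistic model with a 0.5 probability cutoff, and ROC-based metrics). The forward stepwise selection step is deliberately omitted, and nothing here reproduces the SPSS implementation.

```python
# Hedged sketch of the statistical workflow; data are randomly generated and purely illustrative.
import numpy as np
from scipy.stats import mannwhitneyu, fisher_exact
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(42)
n_low, n_high = 30, 99
grade = np.r_[np.zeros(n_low, dtype=int), np.ones(n_high, dtype=int)]  # 0 = low, 1 = high

# Continuous variable (e.g. rCBV): Mann-Whitney test between grade groups.
rcbv = np.r_[rng.normal(2.1, 1.0, n_low), rng.normal(5.7, 2.0, n_high)]
u_stat, p_rcbv = mannwhitneyu(rcbv[grade == 0], rcbv[grade == 1], alternative="two-sided")

# Dichotomous variable (e.g. enhancement): two-tailed Fisher exact test on the 2x2 table.
enhancement = np.r_[rng.binomial(1, 0.3, n_low), rng.binomial(1, 0.9, n_high)]
table = [[np.sum((enhancement == 1) & (grade == 1)), np.sum((enhancement == 1) & (grade == 0))],
         [np.sum((enhancement == 0) & (grade == 1)), np.sum((enhancement == 0) & (grade == 0))]]
odds_ratio, p_enh = fisher_exact(table, alternative="two-sided")

# Multivariate logistic regression on two predictors (stepwise selection omitted),
# with a 0.5 cutoff, sensitivity/specificity, and the area under the ROC curve.
necrosis = np.r_[rng.binomial(1, 0.2, n_low), rng.binomial(1, 0.8, n_high)]
X = np.column_stack([enhancement, necrosis])
model = LogisticRegression().fit(X, grade)
prob_high = model.predict_proba(X)[:, 1]
pred = (prob_high >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(grade, pred).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
auc = roc_auc_score(grade, prob_high)
print(p_rcbv, p_enh, sensitivity, specificity, auc)
```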
", "The histology specimen was obtained by surgical resection in 119 cases and by stereotactic biopsy in 10 cases and was analyzed by an expert neuropathologist with more than 30 years of experience blinded to the radiological assessment.\nBrain tumors were classified as aggressive high-grade tumors (WHO grades III and IV) in 99 cases and low-grade tumors (WHO grades I and II) in the remaining 30 patients (Table 1). We considered grades I and II as low-grade tumors and grades III and IV as high-grade tumors according to the differences in treatment and survival between groups: in general, high-grade tumors present lower survival rates and need complementary therapy after surgery (usually chemotherapy and radiotherapy), whereas in low-grade tumors survival is higher and the use of complementary therapy remains open to debate.\nTable 1 Frequency distribution of histological subtypes of brain tumors (columns: histological subtype | WHO grade | No. | percentage of series)
Astrocytoma | I | 1 | 82.9%
Astrocytoma | II | 16 |
Astrocytoma | III | 20 |
Astrocytoma | IV | 70 |
Oligodendroglioma | II | 9 | 8.5%
Oligodendroglioma | III | 2 |
PNET | IV | 7 | 5.4%
DNET | I | 2 | 1.6%
Hemangioblastoma | I | 1 | 0.8%
Neurocytoma | II | 1 | 0.8%
DNET, dysembryoplastic neuroepithelial tumor; PNET, primitive neuroectodermal tumor.\nIn order to detect possible misclassifications of histological grading, medical records were reviewed three months after the end of recruitment to determine survival, defined as the time elapsed between MRI diagnosis and death or the last admission to our institution. No disagreement between survival and tumor grade suggestive of a histological misclassification was found.
", " Univariate analysis\nThe rCBV ratio could not be obtained in five patients owing to magnetic susceptibility artifacts resulting from an extensive tumor hemorrhage or a poor fit to the gamma-variate curve.\nValues of rADC were not calculated in five patients because of the presence of extensive necrosis. In this situation, there was not enough solid area in which to place the ROIs without partial volume effects from the necrotic region. Quantitative MRS results were not taken into consideration in 20 cases, owing to the poor quality of the spectra. In 24 patients, at least one of the metabolic ratios was missing because the internal references (Cho or H2O) or the metabolite peaks of Cho and/or NAA could not be measured.\nThe presence of the MRI features was significantly greater in high-grade tumors (p < 0.0001).
Statistically significant differences were found in rCBV (p < 0.0001), rADC (p < 0.0001), and the NAA/Cr ratio (p = 0.005) at short TE, and in the Cho/Cr ratio (p = 0.008) at long TE (Table 2). The Lip peak was significantly more frequent in high-grade tumors (p < 0.0001) (Table 2). The variables enhancement and necrosis showed the highest Odds Ratios (OR) for classifying high-grade tumors (55.42 and 23.82, respectively) (Table 3).\nTable 2 Comparison of perfusion-weighted, diffusion-weighted, and magnetic resonance spectroscopy variables between low-grade and high-grade tumor groups (columns: variable | low-grade n / range / mean / SD | high-grade n / range / mean / SD | P value)
rCBV | 29 / 0.00-13.61 / 2.09 / 3.17 | 95 / 0.51-19.18 / 5.75 / 4.10 | <0.0001
rADC | 30 / 0.81-2.55 / 1.74 / 0.44 | 94 / 0.43-2.63 / 1.17 / 0.38 | <0.0001
NAA/Cr (a) | 23 / 0.00-11.73 / 2.39 / 3.00 | 64 / 0.00-42.26 / 7.28 / 9.12 | 0.005
Cho/Cr (b) | 24 / 0.95-21.43 / 3.56 / 5.34 | 70 / 0.81-77.46 / 7.95 / 14.88 | 0.008
Lipids | 26 / N/A / N/A / N/A | 83 / N/A / N/A / N/A | <0.0001
NAA/Cr (b) | 24 / 0.00-11.61 / 1.53 / 2.28 | 71 / 0.00-20.36 / 1.49 / 3.15 | NS
Cho/Cr (a) | 23 / 0.07-9.30 / 2.00 / 2.13 | 65 / 0.00-30.15 / 3.24 / 4.91 | NS
Cho/H2O (a) | 19 / 3.02×10−4-5.88×10−3 / 6.60×10−4 / 1.29×10−3 | 63 / 0.00-6.18×10−3 / 4.21×10−3 / 8.03×10−4 | NS
NAA/H2O (a) | 20 / 0.00-3.28×10−3 / 6.67×10−4 / 7.85×10−4 | 63 / 0.00-1.73×10−2 / 1.00×10−3 / 2.20×10−3 | NS
Cho/H2O (b) | 17 / 2.10×10−4-3.98×10−3 / 8.39×10−4 / 9.00×10−3 | 64 / 4.74×10−4-8.27×10−3 / 8.34×10−4 / 1.23×10−3 | NS
NAA/H2O (b) | 18 / 0.00-1.01×10−3 / 3.12×10−4 / 2.74×10−3 | 67 / 0.00-6.86×10−3 / 3.98×10−4 / 1.18×10−3 | NS
Lactate | 26 / N/A / N/A / N/A | 83 / N/A / N/A / N/A | NS
Cho, choline; Cr, creatine; NAA, N-acetyl-aspartate; N/A, not available (qualitative variables); NS, not significant; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume. (a) TE = 23 ms. (b) TE = 144 ms.\nTable 3 Odds ratios obtained from the univariate analysis of the magnetic resonance variables with significant differences between low-grade and high-grade tumor groups (columns: variable | OR | 95% CI | P value)
Enhancement | 55.42 | 15.58-197.15 | <0.0001
Necrosis | 23.82 | 8.43-67.35 | <0.0001
Neovascularization | 7.80 | 2.53-24.02 | <0.0001
Edema | 7.43 | 3.02-18.26 | <0.0001
Hemorrhage | 13.85 | 3.93-48.77 | <0.0001
rCBV | 1.68 | 1.23-2.19 | <0.0001
rADC | 0.05 | 0.01-0.16 | <0.0001
NAA/Cr (a) | 1.21 | 1.03-1.42 | 0.02
Cho/Cr (b) | 1.05 | 0.97-1.14 | 0.22
Lipids | 9.80 | 3.56-26.96 | <0.0001
CI, confidence interval; OR, odds ratio; Cho, choline; Cr, creatine; NAA, N-acetyl-aspartate; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume. (a) TE = 23 ms. (b) TE = 144 ms.
 Combination of the variables obtained by conventional MRI\nThe imaging features identified as independent predictors of tumor grade were enhancement (OR, 23.37; 95% Confidence Interval [CI], 5.85-93.25) and necrosis (OR, 9.04; 95% CI, 2.61-31.25). The sensitivity and specificity for the identification of high-grade tumors with these two features combined were 95.9% and 70%, respectively (Table 4).
The Area Under the receiver operating characteristic Curve (AUC) was 0.890 (Figure 4).\nTable 4 Variables with independent predictive value obtained from combining different magnetic resonance techniques (multivariate logistic regression) in primary brain tumors (columns: MR technique (N) | predictor variable | OR | 95% CI | P | coefficient (β) | standard error | AUC | sensitivity (%) | specificity (%) | PPV (%) | NPV (%))
MRI (129) | Enhancement | 23.37 | 5.85-93.25 | <0.0001 | 3.15 | 0.71 | 0.890 | 95.9 | 70 | 91.3 | 84
MRI (129) | Necrosis | 9.04 | 2.61-31.25 | 0.001 | 2.20 | 0.63 | | | | |
MRI+PWI+DWI+MRS (71) | Enhancement | 11.63 | 2.35-58.82 | 0.003 | 2.46 | 0.81 | 0.871 | 98.2 | 46.7 | 87.3 | 87.5
MRI+PWI+DWI+MRS (71) | Necrosis | 8.48 | 1.95-37.04 | 0.005 | 2.13 | 0.77 | | | | |
MRI+PWI+DWI (120) | Enhancement | 16.95 | 3.89-71.43 | <0.0001 | 2.83 | 0.75 | 0.923 | 98.9 | 75.9 | 92.8 | 95.6
MRI+PWI+DWI (120) | Necrosis | 5.40 | 1.32-21.74 | 0.019 | 1.69 | 0.72 | | | | |
MRI+PWI+DWI (120) | rADC | 0.22 | 0.50-0.97 | 0.045 | −1.51 | 0.75 | | | | |
AUC, area under the receiver operating characteristic curve; CI, confidence interval; DWI, diffusion-weighted imaging; MRI, magnetic resonance imaging; MRS, MR spectroscopy; N, number of cases from which each classifier was constructed; PWI, perfusion-weighted imaging; rADC, relative apparent diffusion coefficient; OR, odds ratio; PPV, positive predictive value; NPV, negative predictive value.\nFigure 4\nReceiver operating characteristic curves of the combination of MR imaging features alone (a) or associated with PWI and DWI parameters (b) for grading of primary brain tumors.
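As a quick arithmetic check, the odds ratios and confidence intervals reported in Table 4 for the conventional-MRI model follow, up to rounding, from the listed coefficients and standard errors via OR = exp(β) and CI = exp(β ± 1.96 × SE); the snippet below verifies this for enhancement and necrosis. This is only a consistency check on the published numbers, not part of the original analysis.

```python
# Hedged arithmetic check of Table 4: OR = exp(beta), CI = exp(beta +/- 1.96*SE).
import math

def or_with_ci(beta, se):
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

print(or_with_ci(3.15, 0.71))  # Enhancement: ~ (23.3, 5.8, 93.9) vs reported 23.37 (5.85-93.25)
print(or_with_ci(2.20, 0.63))  # Necrosis:    ~ (9.0, 2.6, 31.0)  vs reported 9.04 (2.61-31.25)
```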
 Combination of the variables obtained by conventional MRI, PWI, DWI, and MRS\nThis multivariate logistic regression analysis identified only enhancement (OR, 11.63; 95% CI, 2.35-58.82) and necrosis (OR, 8.48; 95% CI, 1.95-37.04) as independent predictors (Table 4). The sensitivity and specificity of these variables in grading brain tumors were 98.2% and 46.7%, respectively, and the AUC was 0.871. Thus, the advanced variables did not seem to provide additional predictive value. However, it is noteworthy that one or more variables were missing in 58 cases, thus considerably reducing the sample size and the power of the analysis.\n Combination of the variables obtained by conventional MRI, PWI, and DWI\nAs most of the missing data corresponded to MRS, we conducted another analysis excluding the MRS-related variables. In this case, 120 cases remained, and the variables identified as independent predictors of high-grade tumors were enhancement (OR, 16.95; 95% CI, 3.89-71.43), necrosis (OR, 5.40; 95% CI, 1.32-21.74), and low rADC values (OR, 0.22; 95% CI, 0.50-0.97) (Table 4). The sensitivity and specificity obtained with these three variables were 98.9% and 75.9%, respectively. The AUC was 0.923 (Figure 4).\n Multivariate analysis of gliomas\nAs most tumors in our series (91.5%) were gliomas (astrocytoma and oligodendroglioma), the multivariate logistic regression analysis was repeated in this histological subtype.
Independently of the variables included in the analysis, the only independent predictors of high grade were necrosis and enhancement, with a sensitivity of 97.6% and a specificity of 76% when the conventional MRI, PWI, and DWI variables were combined (Table 5).\nTable 5 Variables with independent predictive value obtained from combining different magnetic resonance techniques (multivariate logistic regression) in gliomas (columns: technique (N) | predictor variable | OR | 95% CI | P | coefficient (β) | standard error | AUC | sensitivity (%) | specificity (%) | PPV (%) | NPV (%))
MRI (118) | Enhancement | 58.82 | 9.35-333.33 | <0.0001 | 4.06 | 0.93 | 0.940 | 97.8 | 76.9 | 93.7 | 90.9
MRI (118) | Necrosis | 13.89 | 2.91-66.67 | 0.001 | 2.63 | 0.80 | | | | |
MRI+PWI+DWI+MRS (67) | Enhancement | 25 | 3.60-166.67 | 0.001 | 2.46 | 0.99 | 0.940 | 96.2 | 64.3 | 91.1 | 81.8
MRI+PWI+DWI+MRS (67) | Necrosis | 8.26 | 1.53-43.48 | 0.014 | 2.11 | 0.86 | | | | |
MRI+PWI+DWI (110) | Enhancement | 50 | 8.13-333.33 | <0.0001 | 3.94 | 0.94 | 0.940 | 97.6 | 76 | 93.3 | 90.5
MRI+PWI+DWI (110) | Necrosis | 13.89 | 2.91-66.67 | 0.001 | 2.64 | 0.80 | | | | |
AUC, area under the receiver operating characteristic curve; CI, confidence interval; DWI, diffusion-weighted imaging; MRI, magnetic resonance imaging; MRS, MR spectroscopy; N, number of cases from which each classifier was constructed; PWI, perfusion-weighted imaging; rADC, relative apparent diffusion coefficient; OR, odds ratio; PPV, positive predictive value; NPV, negative predictive value.
", "Conventional MRI is the most widely used MRI technique in the assessment of primary brain tumors. However, greater accuracy is needed when grading brain tumors [3,8]. Advanced MR techniques provide additional information related to histological features of the tumor, such as the degree of neovascularization, cellularity, and mitotic index [10,20,21]. The validity of conventional and advanced MR in grading tumors has been widely reported in the medical literature [22–26]. The significant differences we found for the variables obtained with all the MR techniques analyzed (conventional MRI variables, rCBV, rADC, NAA/Cr at short TE, Cho/Cr at long TE, and presence of lipids) were consistent with those reported in the literature.\nFew attempts have been made to combine different MRI techniques in grading tumors [3,4,8,27] and, to our knowledge, only the study by Yoon et al. [16] has combined conventional MRI with PWI, DWI, and MRS in a group of patients diagnosed with cerebral gliomas. That study showed no significant differences in the diagnostic performance of any of those MR imaging techniques. Recently, Caulo et al. [28] also analyzed the information provided by these advanced MR techniques in the assessment of tumor grade in gliomas but, in their analysis, ADC maps were used only to define different tumoral regions in order to guide ROI placement. Thus, ADC values were not used to differentiate the degree of aggressiveness.
In our prospective study, we analyzed a series of variables obtained with conventional MRI and advanced techniques (PWI, DWI, and MRS) to determine whether a combination of techniques was better than conventional MRI alone and, unlike Caulo et al. [28], we used ADC values to assess tumor grade. We found that two conventional MRI variables, enhancement and necrosis, were the only predictors of grade in primary brain tumors.\nThe fact that data were missing from our study (at least one variable in 58 cases) could affect its statistical power. However, excluding these cases (and including only patients with all the variables) could lead to a selection bias. In clinical practice, the radiologist still has to assess brain tumors whose histological features (extensive necrosis or hemorrhage) impair evaluation with advanced MRI techniques. For example, when necrosis was extensive, the solid area was much too small for advanced MR parameters to be calculated [29,30]. Most published articles sidestep this situation by excluding cases in which any of the data obtained with the different MRI techniques are missing [3,4,8]. As the missing data corresponded to MRS in most cases (56 cases), we performed an analysis combining all the variables except the MRS data and found that at least one data item was then missing in only 9 patients. As a result, enhancement, necrosis, and low rADC were predictors of high tumor grade, and these variables provided higher accuracy (sensitivity 98.9%; specificity 75.9%) than the other combinations analyzed (only MRI variables, or MRI, PWI, DWI, and MRS variables). However, the improvement was no more than modest compared with the results obtained by combining only MRI variables (sensitivity 95.9%, specificity 70%).\nPrevious studies that analyzed differences between tumor grades were limited to gliomas [3,4,8,31–33]. However, we included all primary brain tumors in order to mimic the conditions of clinical practice, in which the radiologist has to provide a presumptive diagnosis of malignancy before surgery, regardless of the histological type. In our series, 8.5% of tumors were non-gliomas, reflecting the lower frequency of these tumor types. Nevertheless, we repeated the multivariate analysis including only gliomas, since these were the most frequent histological subtype in our series. Unlike other authors, we were unable to demonstrate any additional value of advanced MR over conventional MRI [3,8,34], possibly because of our approach to assessing MRI. We found high sensitivity and specificity (97.8% and 76.9%, respectively) for necrosis and enhancement, which were the best MRI predictor variables in grading gliomas based on the results of the multivariate analysis combining conventional MRI variables. Using conventional MRI criteria, other authors obtained lower values (sensitivity of 42.1%-93.3% and specificity of 60%-75.0%), possibly as a result of different selection criteria for high-grade MRI features [3,4,8,31,32]. For example, signal heterogeneity of the lesions could be related to other variables, such as the presence of hemorrhage or necrosis. Mass effect is inherent to any tumor and is not necessarily associated with the histological grade. Furthermore, the existence of ill-defined borders is not useful in certain cases, such as glioblastomas, which can show well-defined borders on MRI, and low-grade gliomas, which tend to have an infiltrating appearance [35].
Some authors globally assessed the MRI features without specifying the diagnostic value of each of these imaging variables [3,4,22]. In addition, it is important to note that studies with negative results are less likely to be published, despite being well designed and conducted [36], thus leading to publication bias and an overestimation of the value of advanced MR techniques in previous studies.\nOur study has several limitations. We performed single-voxel MRS instead of multivoxel MRS, which more accurately assesses tumor heterogeneity [37]. However, single-voxel studies have certain advantages, such as low time requirements, quicker post-processing, and better field homogeneity in the volume of interest [25]. The fact that most missing data were spectroscopic variables could lead us to underestimate the added value of MRS over conventional MRI in grading brain tumors. The rADC and rCBV ratios were calculated by selecting ROIs. As in the case of single-voxel MRS, this approach may be prone to sampling error, thus reinforcing the importance of careful placement of the ROIs. We did not administer a preload of contrast agent to reduce T1 leakage effects; although the preload method seems to be the most robust for the evaluation of brain tumors, a statistical validation has not been provided [38]. In addition, to minimize T1 and/or T2 leakage effects [38], we applied a gamma-variate function as a correction algorithm, and in no case in our series did the analysis of the MR signal-intensity curves show a rise of the postbolus signal above the prebolus baseline, which would indicate a T1 leakage effect.\nIn our study, we excluded patients with massive hemorrhage. This criterion may seem contradictory, because we analyzed the presence of tumoral hemorrhage on conventional MRI. However, we decided to exclude only those cases that could be interpreted as brain hematomas secondary to brain tumors, since assessment with the advanced MRI techniques was not possible in these patients owing to magnetic susceptibility artifacts caused by the large quantity of blood products.\nThe limited size of our sample prevented us from dividing it into training and validation sets, which would have been a more suitable design for our analysis. Consequently, the accuracy we report in grading brain tumors could be somewhat optimistic. Nevertheless, the conclusions obtained by combining different MRI techniques should remain unaffected. Our approach is widely reported, thus making our results comparable [3,4,8,11,39].", "Preoperative diagnosis of tumor grade by MRI could assist in treatment planning, which is essential in cases where a histological diagnosis cannot be made. Our work focuses on different types of primary brain tumors, since, in clinical practice, tumor grade is analyzed using MRI with no previous knowledge of the histological type. An appropriate analysis of conventional MRI features enables primary brain tumors to be graded with high accuracy. The best results for the prediction of high-grade tumors were obtained by combining the variables enhancement, necrosis, and rADC. However, adding the only advanced MRI variable with independent predictive value (rADC) to the conventional MRI criteria produced only a slight improvement. No advanced MR variables seem to add value to conventional MRI alone in the determination of grade in gliomas." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusion" ]
[ "Brain neoplasms", "Magnetic resonance imaging", "Magnetic resonance spectroscopy", "Diffusion-weighted MRI", "Perfusion-weighted MRI" ]
Background: Primary brain tumors constitute a heterogeneous group that can be classified according to their histological type and grade of malignancy. The World Health Organization (WHO) classifies primary brain tumors into four different grades of malignancy [1]. Histological tumor grading has several drawbacks, one of which is the need for stereotactic biopsy, an invasive procedure with a certain risk of morbidity and mortality. In addition, this approach is subject to sampling error, and its results depend upon the neuropathologist’s experience [2]. These limitations lend support to research into non-invasive imaging techniques. Although conventional Magnetic Resonance Imaging (MRI) is an established technique for the characterization of brain tumors, it is not completely reliable [3]. Perfusion Weighted Imaging (PWI), Diffusion-Weighted Imaging (DWI), and Magnetic Resonance Spectroscopy (MRS) could provide additional information to conventional MRI, as they better reflect histopathology findings [3,4]. The feasibility of PWI, DWI, and MRS for tumor grading has been clearly proved [5–7]. However, their additional value, separately or in different combinations, over conventional MRI has not yet been quantified. The results obtained with different MR techniques are contradictory, as shown for MRS and PWI with respect to diagnostic accuracy in grading tumors [3,4,8–11]. Furthermore, no significant differences have been found in the assessment of tumor grade using advanced techniques such as PWI [12], MRS [13,14], and DWI [9,15]. Although a small number of studies compared these techniques, to our knowledge only one published study has combined all four in a single center [16]. We hypothesized that conventional MRI could accurately evaluate the grade of intraaxial brain tumors and that the added value of other MRI techniques would be very small. Our aim was to quantify the improvement in diagnostic accuracy resulting from the combination of conventional MRI with PWI, DWI, and MRS. Methods: The study population comprised 129 patients (71 men and 58 women; mean age 52.7 years, range 11 to 84 years) diagnosed with primary brain tumors (the only inclusion criterion) who were consecutively recruited between February 2004 and April 2009 at our institution. The exclusion criteria were as follows: presence of non-neoplastic brain masses; absence of histopathology data; extensive hemorrhage that prevented evaluation by PWI, DWI, and MRS; and previous surgical intervention, chemotherapy, or radiotherapy. The institutional research and ethics boards of Hospital General Universitario Gregorio Marañón approved the study, and all the patients gave their written informed consent. Tumor histology The histology specimen was obtained by surgical resection in 119 cases and by stereotactic biopsy in 10 cases and analyzed by an expert neuropathologist with more than 30 years of experience blinded to radiological assessment. Brain tumors were classified as aggressive high-grade tumors (WHO grades III and IV) in 99 cases and low-grade tumors (WHO grades I and II) in the remaining 30 patients (Table 1).
We considered grades I and II as low-grade tumors and grades III and IV as high-grade tumors according to the differences in treatment and survival between groups: in general, high-grade tumors present lower survival rates and need complementary therapy after surgery (usually chemotherapy and radiotherapy), whereas in low-grade tumors, survival is higher and the use of complementary therapy remains open to debate.
Table 1. Frequency distribution of histological subtypes of brain tumors
Histological subtype | WHO grade | No. | Percentage
Astrocytoma | I | 1 | 82.9%
Astrocytoma | II | 16
Astrocytoma | III | 20
Astrocytoma | IV | 70
Oligodendroglioma | II | 9 | 8.5%
Oligodendroglioma | III | 2
PNET | IV | 7 | 5.4%
DNET | I | 2 | 1.6%
Hemangioblastoma | I | 1 | 0.8%
Neurocytoma | II | 1 | 0.8%
DNET, dysembryoplastic neuroepithelial tumor; PNET, primitive neuroectodermal tumor. Percentages refer to each histological subtype as a whole.
In order to detect possible misclassifications of histological grading, medical records were reviewed three months after the end of recruitment to determine survival, defined as the time elapsed between MRI diagnosis and death or the last admission to our institution. No disagreement between survival and tumor grade was found that would indicate a histological misclassification. Conventional MRI All patients were prospectively examined using a 1.5T MRI scanner (Intera or Achieva, Philips Healthcare, The Netherlands). Sagittal T1-weighted images (647/15 ms [TR/TE] SE) and coronal T2-weighted images (TR 4742/TE 100 ms; Turbo SE; Fluid-Attenuated Inversion Recovery (FLAIR), 11000/140/2800 ms [TR/TE/TI]) were obtained with a 230-mm Field Of View (FOV) and a matrix size of 512 × 512.
After intravenous administration of a double dose (0.2 mmol/kg) of gadobutrol 1.0 mmol/ml (Gadovist, Bayer Schering Farma, Berlin, Germany), an axial T1-weighted 3D fast-field echo sequence was acquired (TR 16/TE 4.6 ms) with a flip angle of 8°, FOV of 256 × 256 mm, and matrix size of 176 × 288. Dynamic contrast-enhanced PWI PWI was performed using a dynamic contrast-enhanced T2*-weighted gradient echo. Echo Planar Images (EPI) (single shot [TR 1678/TE 30 ms] with an EPI factor of 61, flip angle of 40°, matrix size of 128 × 128, and FOV of 230 mm) were acquired during the first pass of the gadobutrol bolus, which was injected intravenously using a 20-gauge needle at a rate of 4.0 ml/sec. A series of 40 acquisitions were performed at 1.7-second intervals. The first acquisition was obtained before injection to establish the baseline (precontrast) intensity. The results were transferred to a PC workstation for processing (ViewForum Workstation, release 5.1V1L2; Philips Healthcare). The possible effect of tracer recirculation or leakage due to the disruption of the blood–brain barrier was considered in the mathematical model by fitting a gamma-variate function to the observed 1/T2* relaxation rate curve. This gamma-variate function was automatically implemented by the workstation. DWI Diffusion-weighted images were obtained with axial multislice single-shot EPI SE sequences as follows: TR 3745 ms/TE 120 ms; EPI factor, 61; matrix size, 128 × 128; FOV, 230 mm; and diffusion gradient encoding in three orthogonal directions. The images and Apparent Diffusion Coefficient (ADC) maps were calculated using b values of 0 and 2500 s/mm2. ADC values were quantified using the PC workstation mentioned above.
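The ADC maps referred to above follow from the standard mono-exponential diffusion model, ADC = ln(S_b0 / S_b) / b, applied voxel-wise to the b = 0 and b = 2500 s/mm2 images. The short sketch below (Python/NumPy) only illustrates that formula; the array names, the masking of non-positive voxels, and the synthetic check are assumptions and do not describe the vendor workstation processing used in the study.

```python
import numpy as np

def adc_map(s_b0: np.ndarray, s_b: np.ndarray, b: float = 2500.0) -> np.ndarray:
    """ADC from two diffusion weightings via the mono-exponential model
    S_b = S_0 * exp(-b * ADC), i.e. ADC = ln(S_0 / S_b) / b (mm^2/s when b is in s/mm^2)."""
    s_b0 = np.asarray(s_b0, dtype=float)
    s_b = np.asarray(s_b, dtype=float)
    adc = np.zeros_like(s_b0)
    valid = (s_b0 > 0) & (s_b > 0)  # skip background / zero-signal voxels
    adc[valid] = np.log(s_b0[valid] / s_b[valid]) / b
    return adc

# Quick synthetic check: a voxel whose true ADC is 1.0e-3 mm^2/s is recovered.
s0 = np.array([[1000.0]])
sb = s0 * np.exp(-2500.0 * 1.0e-3)
print(adc_map(s0, sb))  # -> approximately [[0.001]]
```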
MRS Single-voxel proton MRS was performed in 117 patients. Twelve patients refused to undergo this technique because of the additional examination time involved. The technique used was point-resolved spectroscopy (PRESS) with a TR of 2000 ms and two different TEs (23/144 ms). The measurement of each spectrum was repeated 128 times with a cycling-phase of 16 to improve the signal-to-noise ratio. The size (mean 8.34 cc, range 5.6 – 18.2 cc) and location of the voxels of interest were established in order to position the largest possible voxel within the solid tumor area, with minimal contamination from the surrounding non-tumor tissue and avoiding areas of necrosis, cysts, or hemorrhage as much as possible. We selected single-voxel MRS owing to its lower time requirements, which enabled all the MR sequences to be performed in a single session. Spectra were analyzed using custom-designed software [17]. Signal intensity of metabolic peaks, spectral positions, and decay constants were taken into consideration in coupled metabolite peaks. Signals of Choline (Cho), N-acetyl-aspartate (NAA), Creatine (Cr), Lipids (Lip), and Lactate (Lact) were quantified. The same quantification procedure was followed to analyze the water peak, although, in this case, Hankel singular value decomposition was not performed to suppress the water signal.
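The metabolite signals themselves were quantified with custom-designed software [17], whose implementation is not described here. Purely as a hedged illustration of the general idea, the sketch below fits Lorentzian lines at the nominal chemical shifts used in this study (Cho 3.22 ppm, Cr 3.02 ppm, NAA 2.02 ppm; listed in the next subsection) and derives metabolite ratios from the fitted amplitudes. The model, function names, and synthetic example are assumptions, not the authors' method.

```python
import numpy as np
from scipy.optimize import curve_fit

# Nominal chemical shifts (ppm); a sum of Lorentzians with a shared linewidth
# is only an illustrative stand-in for the custom quantification software [17].
PPM = {"Cho": 3.22, "Cr": 3.02, "NAA": 2.02}

def spectrum_model(ppm, a_cho, a_cr, a_naa, width):
    """Sum of three Lorentzian lines at fixed chemical shifts."""
    out = np.zeros_like(ppm)
    for amplitude, center in zip((a_cho, a_cr, a_naa), PPM.values()):
        out += amplitude * (width / 2) ** 2 / ((ppm - center) ** 2 + (width / 2) ** 2)
    return out

def metabolite_ratios(ppm_axis, signal):
    """Fit the peak amplitudes and return the Cho/Cr and NAA/Cr ratios."""
    p0 = [signal.max(), signal.max(), signal.max(), 0.05]
    (a_cho, a_cr, a_naa, _), _ = curve_fit(spectrum_model, ppm_axis, signal, p0=p0)
    return {"Cho/Cr": a_cho / a_cr, "NAA/Cr": a_naa / a_cr}

# Synthetic check: a noiseless spectrum built with known amplitudes is recovered.
ppm_axis = np.linspace(0.5, 4.0, 2048)
synthetic = spectrum_model(ppm_axis, 2.0, 1.0, 3.0, 0.05)
print(metabolite_ratios(ppm_axis, synthetic))  # Cho/Cr ~ 2.0, NAA/Cr ~ 3.0
```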
Definition of image variables The five features evaluated by conventional MRI were as follows: 1) Enhancement, defined as an increased signal in T1-weighted sequences in the tumor after administration of gadolinium; 2) Necrosis (and cystic necrosis), identified as areas within the neoplasm with a signal in T1- and T2-weighted images similar to that of cerebrospinal fluid (on FLAIR images these areas may be hyperintense, owing to excess protein content); 3) Edema, defined as an area of homogeneous high signal on FLAIR sequences surrounding the tumor; 4) Neovascularization, defined as the presence of tubular structures within the tumor showing flow-void patterns on T2-weighted images representing abnormal tumor vessels; and 5) Hemorrhage, characterized by an area of magnetic field distortion due to the paramagnetic blood breakdown products on the EPI T2*-weighted images, which were obtained in the first series of PWI before gadolinium reached the cerebral parenchyma (Figures 1 and 2). All five features were assessed dichotomously (presence or absence).
Figure 1. Example of MR images from a patient with glioblastoma (grade IV). a, Contrast-enhanced axial T1-weighted image shows an enhancing mass in the right frontal region. b, Axial FLAIR image shows peritumoral edema. c, Coronal turbo SE T2-weighted image revealing abnormal macroscopic vessels (arrows) within the tumor (neovascularization).
Figure 2. Example of MR images of glioblastoma (grade IV). a-c, Signs of necrosis, identified as regions within the neoplasm with hypointensity on contrast-enhanced axial T1-weighted images (a) and hyperintensity on T2-weighted images (b) and FLAIR images (c). d, Signs of hemorrhage seen as an area of hypointensity (arrow) due to the paramagnetic blood breakdown products on the axial EPI T2*-weighted images.
The relative Cerebral Blood Volume (rCBV) was calculated using PWI (Figure 3) based on a region of interest (ROI) centered on the highest tumor rCBV value in the parametric map. This ROI was drawn as large as possible in an attempt to include all voxels with the highest and similar values of CBV. Unprocessed perfusion images and conventional post-gadolinium T1-weighted MRI images were used to ensure that ROIs were not placed over blood vessels. Tumor CBV was normalized to contralateral white matter CBV, on which an ROI of the same dimensions was drawn.
Figure 3. Example of PWI and DWI assessment in cases of glioblastomas (grade IV). a, Calculation of the rCBV ratio. The figure shows the ROI location covering the maximal values of CBV (T) in the parametric map. A similar ROI was placed in the contralateral white matter to normalize the image (N). This image corresponds to the conventional MRI images of Figure 1. b, Calculation of rADC.
Example of five different ROIs placed within the solid area of the tumor in the ADC map (represented by number 1) and in the contralateral healthy area (represented by number 2). The MRI features of this case are shown in Figure 2.
The relative Apparent Diffusion Coefficient (rADC) was calculated using DWI (Figure 3). Five different round-shaped ROIs ranging from 9.1 mm2 to 9.7 mm2 were placed in the solid tumor area. A further five ROIs with the same dimensions were placed in the contralateral normal cerebral area. The rADC was defined as the ratio of averaged ADCs between tumors and normal areas [18]. The Cho/Cr, NAA/Cr, Cho/H2O, and NAA/H2O ratios and the presence or absence of Lip or Lact were measured by MRS at long and short TE. We included the water peak as an internal reference following previously published data that report this approach to be a robust method for standardization [19]. Metabolic peak positions were assigned as follows: Cho, 3.22 ppm; Cr, 3.02 ppm; NAA, 2.02 ppm; Lip, 0.5-1.5 ppm. Lact (1.33 ppm) was identified as an inverted doublet at a TE of 144 ms. All variables obtained were assessed by consensus of two expert neuroradiologists with more than 10 years of experience (JG and PF).
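Both normalized parameters defined above are simple ratios of ROI means: tumor rCBV over contralateral white-matter rCBV, and the mean ADC of the five tumor ROIs over the mean ADC of the five contralateral ROIs. A minimal sketch is given below; the map arrays and boolean ROI masks are hypothetical inputs, and the functions merely mirror these definitions rather than the workstation software actually used.

```python
import numpy as np

def roi_mean(parametric_map: np.ndarray, roi_mask: np.ndarray) -> float:
    """Mean value of a parametric map inside a boolean ROI mask."""
    return float(parametric_map[roi_mask].mean())

def rcbv_ratio(cbv_map, tumor_roi, contralateral_wm_roi):
    """Tumor CBV normalized to a same-sized ROI in contralateral white matter."""
    return roi_mean(cbv_map, tumor_roi) / roi_mean(cbv_map, contralateral_wm_roi)

def radc_ratio(adc_map, tumor_rois, contralateral_rois):
    """Ratio of averaged ADCs: the mean over the five tumor ROIs divided by
    the mean over the five contralateral ROIs, as defined in the text."""
    tumor_mean = np.mean([roi_mean(adc_map, roi) for roi in tumor_rois])
    normal_mean = np.mean([roi_mean(adc_map, roi) for roi in contralateral_rois])
    return tumor_mean / normal_mean
```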
Statistical analysis In the univariate analysis, continuous variables were assessed using the Mann–Whitney test and qualitative variables using a two-tailed Fisher exact test. A multivariate logistic regression model was applied to assess the combined and independent values of predictor variables. We used a forward stepwise selection procedure with p-to-enter and p-to-remove value thresholds of p < 0.05 and p > 0.01 and a cutoff value of 0.5. Sensitivity, specificity, positive predictive value, negative predictive value, and a Receiver Operating Characteristic (ROC) curve were obtained for the predictor variables. Statistical procedures were performed with SPSS version 13.0 (SPSS Inc, Chicago, Illinois, USA).
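The statistical analysis itself was carried out in SPSS. As an illustration of the same steps with open-source tools, the sketch below runs Mann–Whitney and Fisher exact tests for the univariate comparisons, a forward-only logistic regression selection (a simplified approximation of the stepwise procedure, using a p-to-enter threshold only), and the performance measures at a 0.5 probability cutoff. The data frame, column names, and selection details are assumptions for illustration.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import mannwhitneyu, fisher_exact
from sklearn.metrics import confusion_matrix, roc_auc_score

def univariate_tests(df, outcome, continuous, binary):
    """Mann-Whitney U for continuous variables, Fisher exact for 0/1 variables;
    `outcome` is a 0/1 column (0 = low grade, 1 = high grade)."""
    pvalues = {}
    for var in continuous:
        low = df.loc[df[outcome] == 0, var].dropna()
        high = df.loc[df[outcome] == 1, var].dropna()
        pvalues[var] = mannwhitneyu(low, high, alternative="two-sided").pvalue
    for var in binary:
        table = pd.crosstab(df[var], df[outcome])  # 2 x 2 contingency table
        pvalues[var] = fisher_exact(table.values)[1]
    return pvalues

def forward_logistic_selection(df, outcome, candidates, p_enter=0.05):
    """Forward-only selection on Wald p-values (a simplified stand-in for the
    stepwise procedure described above)."""
    data = df[[outcome] + candidates].dropna().astype(float)
    selected = []
    while True:
        trial_p = {}
        for var in [c for c in candidates if c not in selected]:
            X = sm.add_constant(data[selected + [var]])
            trial_p[var] = sm.Logit(data[outcome], X).fit(disp=0).pvalues[var]
        if not trial_p or min(trial_p.values()) >= p_enter:
            break
        selected.append(min(trial_p, key=trial_p.get))
    model = sm.Logit(data[outcome], sm.add_constant(data[selected])).fit(disp=0)
    return selected, model

def performance(y_true, prob, cutoff=0.5):
    """Sensitivity, specificity, PPV, NPV and AUC at a fixed probability cutoff."""
    pred = (prob >= cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, pred).ravel()
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn),
            "auc": roc_auc_score(y_true, prob)}
```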
Results: Univariate analysis The rCBV ratio could not be obtained in five patients owing to magnetic susceptibility artifacts resulting from an extensive tumor hemorrhage or a poor adjustment to the gamma curve. Values of rADC were not calculated in five patients because of the presence of extensive necrosis. In this situation, there was not enough solid area to place the ROIs without partial volume effects in the necrotic region. Quantitative MRS results were not taken into consideration in 20 cases, owing to the poor quality of the spectra. In 24 patients, at least one of the metabolic ratios was missing, because the internal references (Cho or H2O) or the metabolite peaks of Cho and/or NAA could not be measured. The presence of the MRI features was significantly greater in high-grade tumors (p < 0.0001).
Statistically significant differences were found in rCBV (p < 0.0001), rADC (p < 0.0001), and the NAA/Cr (p = 0.005) ratio at a short TE and in the Cho/Cr ratio (p = 0.008) at a long TE (Table 2). The Lip peak was significantly present in high-grade tumors (p < 0.0001) (Table 2). The variables enhancement and necrosis showed the highest Odds Ratio (OR) for classifying high-grade tumors (55.42 and 23.82, respectively) (Table 3).
Table 2. Comparison of perfusion-weighted, diffusion-weighted, and magnetic resonance spectroscopy variables between low-grade and high-grade tumor groups
Variable | Low-grade n | Low-grade range | Low-grade mean | Low-grade SD | High-grade n | High-grade range | High-grade mean | High-grade SD | P value
rCBV | 29 | 0.00-13.61 | 2.09 | 3.17 | 95 | 0.51-19.18 | 5.75 | 4.10 | <0.0001
rADC | 30 | 0.81-2.55 | 1.74 | 0.44 | 94 | 0.43-2.63 | 1.17 | 0.38 | <0.0001
NAA/Cr (a) | 23 | 0.00-11.73 | 2.39 | 3.00 | 64 | 0.00-42.26 | 7.28 | 9.12 | 0.005
Cho/Cr (b) | 24 | 0.95-21.43 | 3.56 | 5.34 | 70 | 0.81-77.46 | 7.95 | 14.88 | 0.008
Lipids | 26 | N/A | N/A | N/A | 83 | N/A | N/A | N/A | <0.0001
NAA/Cr (b) | 24 | 0.00-11.61 | 1.53 | 2.28 | 71 | 0.00-20.36 | 1.49 | 3.15 | NS
Cho/Cr (a) | 23 | 0.07-9.30 | 2.00 | 2.13 | 65 | 0.00-30.15 | 3.24 | 4.91 | NS
Cho/H2O (a) | 19 | 3.02×10−4-5.88×10−3 | 6.60×10−4 | 1.29×10−3 | 63 | 0.00-6.18×10−3 | 4.21×10−3 | 8.03×10−4 | NS
NAA/H2O (a) | 20 | 0.00-3.28×10−3 | 6.67×10−4 | 7.85×10−4 | 63 | 0.00-1.73×10−2 | 1.00×10−3 | 2.20×10−3 | NS
Cho/H2O (b) | 17 | 2.10×10−4-3.98×10−3 | 8.39×10−4 | 9.00×10−3 | 64 | 4.74×10−4-8.27×10−3 | 8.34×10−4 | 1.23×10−3 | NS
NAA/H2O (b) | 18 | 0.00-1.01×10−3 | 3.12×10−4 | 2.74×10−3 | 67 | 0.00-6.86×10−3 | 3.98×10−4 | 1.18×10−3 | NS
Lactate | 26 | N/A | N/A | N/A | 83 | N/A | N/A | N/A | NS
Cho, choline; Cr, creatine; NAA, N-acetyl-aspartate; N/A, not available (qualitative variables); NS, not significant; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume. (a) TE = 23 ms. (b) TE = 144 ms.
Table 3. Odds ratios obtained from the univariate analysis of the magnetic resonance variables with significant differences between low-grade and high-grade tumor groups
Variable | OR | 95% CI | P value
Enhancement | 55.42 | 15.58-197.15 | <0.0001
Necrosis | 23.82 | 8.43-67.35 | <0.0001
Neovascularization | 7.80 | 2.53-24.02 | <0.0001
Edema | 7.43 | 3.02-18.26 | <0.0001
Hemorrhage | 13.85 | 3.93-48.77 | <0.0001
rCBV | 1.68 | 1.23-2.19 | <0.0001
rADC | 0.05 | 0.01-0.16 | <0.0001
NAA/Cr (a) | 1.21 | 1.03-1.42 | 0.02
Cho/Cr (b) | 1.05 | 0.97-1.14 | 0.22
Lipids | 9.80 | 3.56-26.96 | <0.0001
CI, confidence interval; OR, odds ratio; Cho, choline; Cr, creatine; NAA, N-acetyl-aspartate; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume. (a) TE = 23 ms. (b) TE = 144 ms.
Combination of the variables obtained by conventional MRI The imaging features identified as independent predictors of tumor grade were enhancement (OR, 23.37; 95% Confidence Interval [CI], 5.85-93.25) and necrosis (OR, 9.04; 95% CI, 2.61-31.25). The sensitivity and specificity for the identification of high-grade tumors with these two features combined were 95.9% and 70%, respectively (Table 4). The Area Under the receiver operating characteristic Curve (AUC) was 0.890 (Figure 4).
Table 4. Variables with independent predictive value obtained from combining different magnetic resonance techniques (multivariate logistic regression) in primary brain tumors
MR Technique | N | Predictor variables | OR | 95% CI | P | Coefficient (β) | Standard error | AUC | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%)
MRI | 129 | Enhancement | 23.37 | 5.85-93.25 | <0.0001 | 3.15 | 0.71 | 0.890 | 95.9 | 70 | 91.3 | 84
MRI | 129 | Necrosis | 9.04 | 2.61-31.25 | 0.001 | 2.20 | 0.63
MRI, PWI, DWI, MRS | 71 | Enhancement | 11.63 | 2.35-58.82 | 0.003 | 2.46 | 0.81 | 0.871 | 98.2 | 46.7 | 87.3 | 87.5
MRI, PWI, DWI, MRS | 71 | Necrosis | 8.48 | 1.95-37.04 | 0.005 | 2.13 | 0.77
MRI, PWI, DWI | 120 | Enhancement | 16.95 | 3.89-71.43 | <0.0001 | 2.83 | 0.75 | 0.923 | 98.9 | 75.9 | 92.8 | 95.6
MRI, PWI, DWI | 120 | Necrosis | 5.40 | 1.32-21.74 | 0.019 | 1.69 | 0.72
MRI, PWI, DWI | 120 | rADC | 0.22 | 0.50-0.97 | 0.045 | −1.51 | 0.75
AUC, area under the receiver operating characteristic curve; CI, confidence interval; DWI, diffusion-weighted imaging; MRI, magnetic resonance imaging; MRS, MR spectroscopy; N, number of cases from which each classifier was constructed; PWI, perfusion-weighted imaging; rADC, relative apparent diffusion coefficient; OR, odds ratio; PPV, positive predictive value; NPV, negative predictive value. Model-level AUC, sensitivity, specificity, PPV, and NPV are shown on the first row of each technique group.
Figure 4. Receiver operating characteristic curves of the combination of MR imaging features alone (a) or associated with PWI and DWI parameters (b) for grading of primary brain tumors.
Combination of the variables obtained by conventional MRI, PWI, DWI, and MRS This multivariate logistic regression analysis identified only enhancement (OR, 11.63; 95% CI, 2.35-58.82) and necrosis (OR, 8.48; 95% CI, 1.95-37.04) as independent predictors (Table 4). The sensitivity and specificity of these variables in grading brain tumors were 98.2% and 46.7%, respectively, and the AUC was 0.871. Thus, the advanced variables seemed not to provide additional predictive value. However, it is noteworthy that one or more variables were missing in 58 cases, thus considerably reducing the sample size and the power of the analysis. Combination of the variables obtained by conventional MRI, PWI, and DWI As most missing data corresponded to MRS, we conducted another analysis excluding the MRS–related variables.
In this case, the number of cases remaining was 120, and the variables identified as independent predictors of high-grade tumors were enhancement (OR, 16.95; 95% CI, 3.89-71.43), necrosis (OR, 5.40; 95% CI, 1.32-21.74), and low rADC values (OR, 0.22; 95% CI, 0.50-0.97) (Table 4). The sensitivity and specificity obtained with these three variables were 98.9% and 75.9%, respectively. The AUC was 0.923 (Figure 4). Multivariate analysis of gliomas As most tumors in our series (91.5%) were gliomas (astrocytoma and oligodendroglioma), the multivariate logistic regression analysis was repeated in this histologic subtype. Independently of the variables included in the analysis, the only predictors of high grade were necrosis and enhancement, with a sensitivity of 97.6% and a specificity of 76% when the conventional MRI, PWI, and DWI variables were combined (Table 5).
Table 5. Variables with independent predictive value obtained from combining different magnetic resonance techniques (multivariate logistic regression) in gliomas
Technique | N | Predictor variables | OR | 95% CI | P | Coefficient (β) | Standard error | AUC | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%)
MRI | 118 | Enhancement | 58.82 | 9.35-333.33 | <0.0001 | 4.06 | 0.93 | 0.940 | 97.8 | 76.9 | 93.7 | 90.9
MRI | 118 | Necrosis | 13.89 | 2.91-66.67 | 0.001 | 2.63 | 0.80
MRI, PWI, DWI, MRS | 67 | Enhancement | 25 | 3.60-166.67 | 0.001 | 2.46 | 0.99 | 0.940 | 96.2 | 64.3 | 91.1 | 81.8
MRI, PWI, DWI, MRS | 67 | Necrosis | 8.26 | 1.53-43.48 | 0.014 | 2.11 | 0.86
MRI, PWI, DWI | 110 | Enhancement | 50 | 8.13-333.33 | <0.0001 | 3.94 | 0.94 | 0.940 | 97.6 | 76 | 93.3 | 90.5
MRI, PWI, DWI | 110 | Necrosis | 13.89 | 2.91-66.67 | 0.001 | 2.64 | 0.80
AUC, area under the receiver operating characteristic curve; CI, confidence interval; DWI, diffusion-weighted imaging; MRI, magnetic resonance imaging; MRS, MR spectroscopy; N, number of cases from which each classifier was constructed; PWI, perfusion-weighted imaging; rADC, relative apparent diffusion coefficient; OR, odds ratio; PPV, positive predictive value; NPV, negative predictive value. Model-level AUC, sensitivity, specificity, PPV, and NPV are shown on the first row of each technique group.
Univariate analysis: The rCBV ratio could not be obtained in five patients owing to magnetic susceptibility artifacts resulting from an extensive tumor hemorrhage or a poor adjustment to the gamma curve. Values of rADC were not calculated in five patients because of the presence of extensive necrosis. In this situation, there was not enough solid area to place the ROIs without partial volume effects in the necrotic region. Quantitative MRS results were not taken into consideration in 20 cases, owing to the poor quality of the spectra. In 24 patients, at least one of the metabolic ratios was missing, because the internal references (Cho or H2O) or the metabolite peaks of Cho and/or NAA could not be measured. The presence of the MRI features was significantly greater in high-grade tumors (p < 0.0001). Statistically significant differences were found in rCBV (p < 0.0001), rADC (p < 0.0001), and the NAA/Cr ratio (p = 0.005) at a short TE, and in the Cho/Cr ratio (p = 0.008) at a long TE (Table 2). The Lip peak was present significantly more often in high-grade tumors (p < 0.0001) (Table 2).
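The group differences reported above (for example, for rCBV and rADC) are two-sample comparisons between low-grade and high-grade tumors. A minimal sketch of such a comparison is given below; the test shown (Mann-Whitney U) and the simulated values are placeholders, since this excerpt does not restate which statistical test the authors actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical rCBV ratios; the actual group summaries are those in Table 2.
rcbv_low_grade = rng.normal(2.1, 1.0, size=29).clip(min=0.0)
rcbv_high_grade = rng.normal(5.8, 2.0, size=95).clip(min=0.0)

stat, p_value = stats.mannwhitneyu(rcbv_low_grade, rcbv_high_grade, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p_value:.2g}")
```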
The variables enhancement and necrosis showed the highest Odds Ratio (OR) for classifying high-grade tumors (55.42 and 23.82, respectively) (Table 3).

Table 2 Comparison of perfusion-weighted, diffusion-weighted, and magnetic resonance spectroscopy variables between low-grade and high-grade tumor groups (values are n, range, mean, and SD for each group, followed by the P value)
- rCBV: low-grade n = 29, 0.00-13.61, mean 2.09, SD 3.17; high-grade n = 95, 0.51-19.18, mean 5.75, SD 4.10; P < 0.0001
- rADC: low-grade n = 30, 0.81-2.55, mean 1.74, SD 0.44; high-grade n = 94, 0.43-2.63, mean 1.17, SD 0.38; P < 0.0001
- NAA/Cr (a): low-grade n = 23, 0.00-11.73, mean 2.39, SD 3.00; high-grade n = 64, 0.00-42.26, mean 7.28, SD 9.12; P = 0.005
- Cho/Cr (b): low-grade n = 24, 0.95-21.43, mean 3.56, SD 5.34; high-grade n = 70, 0.81-77.46, mean 7.95, SD 14.88; P = 0.008
- Lipids: low-grade n = 26, N/A; high-grade n = 83, N/A; P < 0.0001
- NAA/Cr (b): low-grade n = 24, 0.00-11.61, mean 1.53, SD 2.28; high-grade n = 71, 0.00-20.36, mean 1.49, SD 3.15; NS
- Cho/Cr (a): low-grade n = 23, 0.07-9.30, mean 2.00, SD 2.13; high-grade n = 65, 0.00-30.15, mean 3.24, SD 4.91; NS
- Cho/H2O (a): low-grade n = 19, 3.02×10−4-5.88×10−3, mean 6.60×10−4, SD 1.29×10−3; high-grade n = 63, 0.00-6.18×10−3, mean 4.21×10−3, SD 8.03×10−4; NS
- NAA/H2O (a): low-grade n = 20, 0.00-3.28×10−3, mean 6.67×10−4, SD 7.85×10−4; high-grade n = 63, 0.00-1.73×10−2, mean 1.00×10−3, SD 2.20×10−3; NS
- Cho/H2O (b): low-grade n = 17, 2.10×10−4-3.98×10−3, mean 8.39×10−4, SD 9.00×10−3; high-grade n = 64, 4.74×10−4-8.27×10−3, mean 8.34×10−4, SD 1.23×10−3; NS
- NAA/H2O (b): low-grade n = 18, 0.00-1.01×10−3, mean 3.12×10−4, SD 2.74×10−3; high-grade n = 67, 0.00-6.86×10−3, mean 3.98×10−4, SD 1.18×10−3; NS
- Lactate: low-grade n = 26, N/A; high-grade n = 83, N/A; NS
Cho, choline; Cr, creatine; NAA, N-acetyl-aspartate; N/A, not available (qualitative variables); NS, not significant; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume. (a) TE = 23 ms. (b) TE = 144 ms.

Table 3 Odds ratios obtained from the univariate analysis of the magnetic resonance variables with significant differences between low-grade and high-grade tumor groups
- Enhancement: OR 55.42 (95% CI 15.58-197.15), P < 0.0001
- Necrosis: OR 23.82 (95% CI 8.43-67.35), P < 0.0001
- Neovascularization: OR 7.80 (95% CI 2.53-24.02), P < 0.0001
- Edema: OR 7.43 (95% CI 3.02-18.26), P < 0.0001
- Hemorrhage: OR 13.85 (95% CI 3.93-48.77), P < 0.0001
- rCBV: OR 1.68 (95% CI 1.23-2.19), P < 0.0001
- rADC: OR 0.05 (95% CI 0.01-0.16), P < 0.0001
- NAA/Cr (a): OR 1.21 (95% CI 1.03-1.42), P = 0.02
- Cho/Cr (b): OR 1.05 (95% CI 0.97-1.14), P = 0.22
- Lipids: OR 9.80 (95% CI 3.56-26.96), P < 0.0001
CI, confidence interval; OR, odds ratio; Cho, choline; Cr, creatine; NAA, N-acetyl-aspartate; rADC, relative apparent diffusion coefficient; rCBV, relative cerebral blood volume. (a) TE = 23 ms. (b) TE = 144 ms.

Combination of the variables obtained by conventional MRI: The imaging features identified as independent predictors of tumor grade were enhancement (OR, 23.37; 95% Confidence Interval (CI), 5.85-93.25) and necrosis (OR, 9.04; 95% CI, 2.61-31.25). The sensitivity and specificity for the identification of high-grade tumors with these two features combined were 95.9% and 70%, respectively (Table 4).
The Area Under the receiver operating characteristic Curve (AUC) was 0.890 (Table 4; Figure 4).

Combination of the variables obtained by conventional MRI, PWI, DWI, and MRS: This multivariate logistic regression analysis identified only enhancement (OR, 11.63; 95% CI, 2.35-58.82) and necrosis (OR, 8.48; 95% CI, 1.95-37.04) as independent predictors (Table 4). The sensitivity and specificity of these variables in grading brain tumors were 98.2% and 46.7%, respectively, and the AUC was 0.871. Thus, the advanced variables seemed not to provide additional predictive value. However, it is noteworthy that one or more variables were missing in 58 cases, thus considerably reducing the sample size and the power of the analysis.

Combination of the variables obtained by conventional MRI, PWI, and DWI: As most missing data corresponded to MRS, we conducted another analysis excluding the MRS-related variables. In this case, the number of cases remaining was 120, and the variables identified as independent predictors of high-grade tumors were enhancement (OR, 16.95; 95% CI, 3.89-71.43), necrosis (OR, 5.40; 95% CI, 1.32-21.74), and low rADC values (OR, 0.22; 95% CI, 0.50-0.97) (Table 4). The sensitivity and specificity obtained with these three variables were 98.9% and 75.9%, respectively. The AUC was 0.923 (Figure 4).
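For each combination of variables, the AUC, sensitivity, specificity, PPV, and NPV quoted above can be computed from the predicted probabilities of the fitted classifier and the histological labels. The sketch below shows that general procedure with standard tools on synthetic data; the predictors are placeholders standing in for enhancement, necrosis, and rADC, and the numbers it prints are not the study results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(1)
n = 120
# Hypothetical predictors: two binary imaging features and one continuous ratio.
X = np.column_stack([
    rng.integers(0, 2, n),    # enhancement present / absent
    rng.integers(0, 2, n),    # necrosis present / absent
    rng.normal(1.3, 0.4, n),  # rADC
])
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(0, 1, n) > 0).astype(int)  # 1 = high grade

model = LogisticRegression().fit(X, y)
prob_high_grade = model.predict_proba(X)[:, 1]
predicted = (prob_high_grade >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y, predicted).ravel()
print("AUC:        ", round(roc_auc_score(y, prob_high_grade), 3))
print("Sensitivity:", round(tp / (tp + fn), 3))
print("Specificity:", round(tn / (tn + fp), 3))
print("PPV:        ", round(tp / (tp + fp), 3))
print("NPV:        ", round(tn / (tn + fn), 3))
```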
Multivariate analysis of gliomas: As most tumors in our series (91.5%) were gliomas (astrocytoma and oligodendroglioma), the multivariate logistic regression analysis was repeated in this histologic subtype. Independently of the variables included in the analysis, the only predictors of high grade were necrosis and enhancement, with a sensitivity of 97.6% and specificity of 76% when the conventional MRI, PWI, and DWI variables were combined (Table 5).

Discussion: Conventional MRI is the most widely used MRI technique in the assessment of primary brain tumors. However, higher accuracy is necessary when grading brain tumors [3,8]. Advanced MR techniques provide additional information related to histological features of the tumor, such as grade of neovascularization, cellularity, and mitotic index [10,20,21]. The validity of conventional and advanced MR in grading tumors has been widely reported in the medical literature [22–26]. The significant differences we found for the variables using all the MR techniques analyzed (conventional MRI variables, rCBV, rADC, NAA/Cr at short TE, Cho/Cr at long TE, and presence of lipids) were consistent with those reported in the literature. Few attempts have been made to combine different MRI techniques in grading tumors [3,4,8,27] and, to our knowledge, only the study by Yoon et al. [16] has combined conventional MRI with PWI, DWI, and MRS in a group of patients diagnosed with cerebral gliomas. That study showed that there were no significant differences in the diagnostic performances of any of those MR imaging techniques. Recently, Caulo et al. [28] have also analyzed the information provided by these advanced MR techniques in the assessment of tumor grade in gliomas but, in their analysis, ADC maps were only used to define different tumoral regions in order to guide placement of the ROIs. Thus, the ADC values were not calculated to differentiate grade of aggressiveness. In our prospective study, we analyzed a series of variables for conventional MRI and advanced techniques (PWI, DWI, and MRS) to determine whether a combination of techniques was better than conventional MRI alone and, unlike in Caulo et al.
[28], we used ADC values to assess tumor grade. We found that two conventional MRI variables, enhancement and necrosis, were the only predictors of grade in primary brain tumors. The fact that data were missing from our study (at least one variable in 58 cases) could affect the statistical power. However, excluding these cases, and including only patients with all the variables, could lead to a selection bias. In practice, the radiologist still has to assess brain tumors with histological features (extensive necrosis or hemorrhage) that impair evaluation with advanced MRI techniques. For example, when necrosis was extensive, the area of the solid part was much too small to calculate the advanced MR parameters [29,30]. Most published articles sidestep this situation by excluding cases with at least one missing item of data among the different MRI techniques [3,4,8]. As the missing data corresponded to MRS in most cases (56 cases), we performed an analysis combining all the variables except the MRS data and found that at least one item of data was missing in only 9 patients. As a result, enhancement, necrosis, and low rADC were predictors of high tumor grade, and these variables provided higher accuracy (sensitivity 98.9%; specificity 75.9%) than that obtained with the other combinations analyzed (only MRI variables, or MRI, PWI, DWI, and MRS variables). However, the improvement was no more than modest compared with the results obtained by combining only the MRI variables (sensitivity 95.9%, specificity 70%). Previous studies that analyzed differences between tumor grades were limited to gliomas [3,4,8,31–33]. However, we included all the primary brain tumors in order to mimic the conditions of clinical practice, in which the radiologist has to provide a presumptive diagnosis of malignancy before surgery, regardless of the histological type. In our series, only 8.5% of tumors were non-gliomas, reflecting the lower frequency of these tumor types. Nevertheless, we repeated the multivariate analysis including only gliomas, since these were the most frequent histological subtype in our series. Unlike other authors, we were unable to demonstrate any additional value of advanced MR over conventional MRI [3,8,34], possibly because of our approach to assessing MRI. We found high sensitivity and specificity (97.8% and 76.9%, respectively) for necrosis and enhancement, which were the best predictor variables of MRI in grading gliomas, based on the results of the multivariate analysis combining the conventional MRI variables. Using conventional MRI criteria, other authors obtained lower values (sensitivity of 42.1%-93.3% and specificity of 60%-75.0%), possibly as a result of different criteria for defining high grade on MRI [3,4,8,31,32]. For example, signal heterogeneity of the lesions could be related to other variables, such as the presence of hemorrhage or necrosis. Mass effect is inherent to any tumor and is not necessarily associated with the histological grade. Furthermore, the existence of ill-defined borders is not useful in certain cases, such as glioblastomas, which can show well-defined borders on MRI, and low-grade gliomas, which tend to have an infiltrating appearance [35]. Some authors globally assessed the MRI features without specifying the diagnostic value of each of these imaging variables [3,4,22].
In addition, it is important to note that studies with negative results are less likely to be published, despite being well designed and conducted [36], thus leading to publication bias and to overestimation of the value of advanced MR techniques in previous studies.

Our study has several limitations. We performed single-voxel MRS instead of multivoxel MRS, which more accurately assesses tumor heterogeneity [37]. However, single-voxel studies have certain advantages, such as low time requirements, quicker post-processing, and better field homogeneity in the volume of interest [25]. The fact that most missing data were spectroscopic variables could lead us to underestimate the added value of MRS over conventional MRI in grading brain tumors. The rADC and rCBV ratios were calculated by selecting ROIs. As in the case of single-voxel MRS, this approach may be prone to sampling error, thus reinforcing the importance of careful placement of the ROIs. To reduce the T1 leakage effects, we did not administer a preload of contrast agent. Although this method seems to be the most robust for the evaluation of brain tumors, a statistical validation has not been provided [38]. In addition, to minimize the T1 and/or T2 leakage effects [38], we applied a gamma-variate function as a correction algorithm, and the analysis of the MR signal-intensity curves did not show, in any case in our series, a rise of the post-bolus signal above the pre-bolus baseline, which would indicate a T1-leakage effect. In our study, we excluded patients with massive hemorrhage. This criterion may be interpreted as contradictory, because we analyzed the presence of tumoral hemorrhage on conventional MRI. However, we decided to exclude only those cases that could be interpreted as brain hematomas secondary to brain tumors, since the assessment of all the advanced MRI techniques was not possible in these patients owing to magnetic susceptibility artifacts caused by the large quantity of blood products. The restricted size of our sample prevented us from dividing it into training and validation sets, which would constitute a more suitable design for our analysis. Consequently, the accuracy we report in grading brain tumors could be somewhat optimistic. Nevertheless, the conclusions obtained by combining different MRI techniques should remain unaffected. Our approach is widely reported, thus making our results comparable [3,4,8,11,39].

Conclusions: Preoperative diagnosis of tumor grade by MRI could assist in treatment planning, which is essential in cases where a histological diagnosis cannot be made. Our work focuses on different types of primary brain tumors, since, in clinical practice, tumor grade is analyzed using MRI with no previous knowledge of the histological type. An appropriate analysis of conventional MRI features enables primary brain tumors to be graded with high accuracy. The best results for the prediction of high-grade tumors were obtained by combining the variables enhancement, necrosis, and rADC. This represented only a slight improvement over the conventional MRI criteria alone, with rADC being the only advanced MRI variable with independent predictive value. No advanced MR variables seem to add value to conventional MRI alone in the determination of grade in gliomas.
Background: Although conventional MR imaging (MRI) is the most widely used non-invasive technique for brain tumor grading, its accuracy has been reported to be relatively low. Advanced MR techniques, such as perfusion-weighted imaging (PWI), diffusion-weighted imaging (DWI), and magnetic resonance spectroscopy (MRS), could predict neoplastic histology, but their added value over conventional MRI is still open to debate. Methods: We prospectively analyzed 129 patients diagnosed with primary brain tumors (118 gliomas) classified as low-grade in 30 cases and high-grade in 99 cases. Results: Significant differences were obtained in high-grade tumors for conventional MRI variables (necrosis, enhancement, edema, hemorrhage, and neovascularization); high relative cerebral blood volume values (rCBV), low relative apparent diffusion coefficients (rADC), high ratio of N-acetyl-aspartate/creatine at short echo time (TE) and high choline/creatine at long TE. Among conventional MRI variables, the presence of enhancement and necrosis were demonstrated to be the best predictors of high grade in primary brain tumors (sensitivity 95.9%; specificity 70%). The best results in primary brain tumors were obtained for enhancement, necrosis, and rADC (sensitivity 98.9%; specificity 75.9%). Necrosis and enhancement were the only predictors of high grade in gliomas (sensitivity 97.6%; specificity 76%) when all the magnetic resonance variables were combined. Conclusions: MRI is highly accurate in the assessment of tumor grade. The combination of conventional MRI features with advanced MR variables showed only improved tumor grading by adding rADC to conventional MRI variables in primary brain tumors.
Background: Primary brain tumors constitute a heterogeneous group that can be classified according to their histological type and grade of malignancy. The World Health Organization (WHO) classifies primary brain tumors into four different grades of malignancy [1]. Histological tumor grading has several drawbacks, one of which is the need for stereotactic biopsy, an invasive procedure with a certain risk of morbidity and mortality. In addition, this approach is subject to sampling error, and its results depend upon the neuropathologist’s experience [2]. These limitations lend support to research into non-invasive imaging techniques. Although conventional Magnetic Resonance Imaging (MRI) is an established technique for the characterization of brain tumors, it is not completely reliable [3]. Perfusion Weighted Imaging (PWI), diffusion-weighted imaging (DWI), Magnetic Resonance Spectroscopy (MRS) could provide additional information to conventional MRI, as they better reflect histopathology findings [3,4]. The feasibility of PWI, DWI, and MRS for tumor grading has been clearly proved [5–7]. However, their additional value, separately or in different combinations, over conventional MRI has not yet been quantified. The results obtained with different MR techniques are contradictory, as shown for MRS and PWI with respect to diagnostic accuracy in grading tumors [3,4,8–11]. Furthermore, no significant differences have been found in the assessment of tumor grade using advanced techniques such as PWI [12], MRS [13,14], and DWI [9,15]. Although a small number of studies compared these techniques, to our knowledge only one published study has combined all four in a single center [16]. We hypothesized conventional MRI could accurately evaluate the grade of intraaxial brain tumors, and the added value of other MRI techniques is very small. Our aim was to quantify the improvement in diagnostic accuracy resulting from the combination of conventional MRI with PWI, DWI, and MRS. Conclusions: Preoperative diagnosis of tumor grade by MRI could assist in treatment planning, which is essential in cases were a histological diagnosis cannot be made. Our work focuses on different types of primary brain tumors, since, in clinical practice, tumor grade is analyzed using MRI with no previous knowledge of histological type. An appropriate analysis of conventional MRI features enables primary brain tumors to be graded with high accuracy. The best results for the prediction of high-grade tumors were obtained by combining the variables enhancement, necrosis, and rADC. Only a slight improvement was obtained with respect to conventional MRI criteria combined with the only advanced MRI variable considered as predictive (rADC). No advanced MR variables seem to add value to conventional MRI alone in the determination of grade in gliomas.
Background: Although conventional MR imaging (MRI) is the most widely used non-invasive technique for brain tumor grading, its accuracy has been reported to be relatively low. Advanced MR techniques, such as perfusion-weighted imaging (PWI), diffusion-weighted imaging (DWI), and magnetic resonance spectroscopy (MRS), could predict neoplastic histology, but their added value over conventional MRI is still open to debate. Methods: We prospectively analyzed 129 patients diagnosed with primary brain tumors (118 gliomas) classified as low-grade in 30 cases and high-grade in 99 cases. Results: Significant differences were obtained in high-grade tumors for conventional MRI variables (necrosis, enhancement, edema, hemorrhage, and neovascularization); high relative cerebral blood volume values (rCBV), low relative apparent diffusion coefficients (rADC), high ratio of N-acetyl-aspartate/creatine at short echo time (TE) and high choline/creatine at long TE. Among conventional MRI variables, the presence of enhancement and necrosis were demonstrated to be the best predictors of high grade in primary brain tumors (sensitivity 95.9%; specificity 70%). The best results in primary brain tumors were obtained for enhancement, necrosis, and rADC (sensitivity 98.9%; specificity 75.9%). Necrosis and enhancement were the only predictors of high grade in gliomas (sensitivity 97.6%; specificity 76%) when all the magnetic resonance variables were combined. Conclusions: MRI is highly accurate in the assessment of tumor grade. The combination of conventional MRI features with advanced MR variables showed only improved tumor grading by adding rADC to conventional MRI variables in primary brain tumors.
14,255
324
[ 274, 173, 192, 86, 266, 1080, 134, 833, 418, 112, 123, 328 ]
17
[ "grade", "variables", "weighted", "tumor", "10", "mri", "tumors", "images", "value", "obtained" ]
[ "analyze brain tumors", "evaluation brain tumors", "mri grading brain", "conventional mri grading", "tumor grade mri" ]
null
[CONTENT] Brain neoplasms | Magnetic resonance imaging | Magnetic resonance spectroscopy | Diffusion-weighted MRI | Perfusion-weighted MRI [SUMMARY]
null
[CONTENT] Brain neoplasms | Magnetic resonance imaging | Magnetic resonance spectroscopy | Diffusion-weighted MRI | Perfusion-weighted MRI [SUMMARY]
[CONTENT] Brain neoplasms | Magnetic resonance imaging | Magnetic resonance spectroscopy | Diffusion-weighted MRI | Perfusion-weighted MRI [SUMMARY]
[CONTENT] Brain neoplasms | Magnetic resonance imaging | Magnetic resonance spectroscopy | Diffusion-weighted MRI | Perfusion-weighted MRI [SUMMARY]
[CONTENT] Brain neoplasms | Magnetic resonance imaging | Magnetic resonance spectroscopy | Diffusion-weighted MRI | Perfusion-weighted MRI [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brain Neoplasms | Child | Diffusion Magnetic Resonance Imaging | Female | Glioma | Humans | Magnetic Resonance Imaging | Magnetic Resonance Spectroscopy | Male | Middle Aged | Neoplasm Grading | Prospective Studies [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brain Neoplasms | Child | Diffusion Magnetic Resonance Imaging | Female | Glioma | Humans | Magnetic Resonance Imaging | Magnetic Resonance Spectroscopy | Male | Middle Aged | Neoplasm Grading | Prospective Studies [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brain Neoplasms | Child | Diffusion Magnetic Resonance Imaging | Female | Glioma | Humans | Magnetic Resonance Imaging | Magnetic Resonance Spectroscopy | Male | Middle Aged | Neoplasm Grading | Prospective Studies [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brain Neoplasms | Child | Diffusion Magnetic Resonance Imaging | Female | Glioma | Humans | Magnetic Resonance Imaging | Magnetic Resonance Spectroscopy | Male | Middle Aged | Neoplasm Grading | Prospective Studies [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brain Neoplasms | Child | Diffusion Magnetic Resonance Imaging | Female | Glioma | Humans | Magnetic Resonance Imaging | Magnetic Resonance Spectroscopy | Male | Middle Aged | Neoplasm Grading | Prospective Studies [SUMMARY]
[CONTENT] analyze brain tumors | evaluation brain tumors | mri grading brain | conventional mri grading | tumor grade mri [SUMMARY]
null
[CONTENT] analyze brain tumors | evaluation brain tumors | mri grading brain | conventional mri grading | tumor grade mri [SUMMARY]
[CONTENT] analyze brain tumors | evaluation brain tumors | mri grading brain | conventional mri grading | tumor grade mri [SUMMARY]
[CONTENT] analyze brain tumors | evaluation brain tumors | mri grading brain | conventional mri grading | tumor grade mri [SUMMARY]
[CONTENT] analyze brain tumors | evaluation brain tumors | mri grading brain | conventional mri grading | tumor grade mri [SUMMARY]
[CONTENT] grade | variables | weighted | tumor | 10 | mri | tumors | images | value | obtained [SUMMARY]
null
[CONTENT] grade | variables | weighted | tumor | 10 | mri | tumors | images | value | obtained [SUMMARY]
[CONTENT] grade | variables | weighted | tumor | 10 | mri | tumors | images | value | obtained [SUMMARY]
[CONTENT] grade | variables | weighted | tumor | 10 | mri | tumors | images | value | obtained [SUMMARY]
[CONTENT] grade | variables | weighted | tumor | 10 | mri | tumors | images | value | obtained [SUMMARY]
[CONTENT] techniques | conventional | pwi | mri | imaging | mrs | conventional mri | dwi | brain tumors | tumors [SUMMARY]
null
[CONTENT] 10 | 00 | variables | imaging | ci | 95 | predictive value | magnetic resonance | resonance | grade [SUMMARY]
[CONTENT] mri | grade | conventional | conventional mri | diagnosis | histological | advanced | tumor grade | tumors | primary brain tumors [SUMMARY]
[CONTENT] variables | grade | mri | tumors | images | weighted | tumor | 95 | 10 | value [SUMMARY]
[CONTENT] variables | grade | mri | tumors | images | weighted | tumor | 95 | 10 | value [SUMMARY]
[CONTENT] ||| DWI [SUMMARY]
null
[CONTENT] TE | TE ||| 95.9% | 70% ||| 98.9% | 75.9% ||| 97.6% | 76% [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| DWI ||| 129 | 118 | 30 | 99 ||| ||| TE | TE ||| 95.9% | 70% ||| 98.9% | 75.9% ||| 97.6% | 76% ||| ||| [SUMMARY]
[CONTENT] ||| DWI ||| 129 | 118 | 30 | 99 ||| ||| TE | TE ||| 95.9% | 70% ||| 98.9% | 75.9% ||| 97.6% | 76% ||| ||| [SUMMARY]
Distribution of birth weight for gestational age in Japanese infants delivered by cesarean section.
21478642
Neonatal anthropometric charts of the distribution of measurements, mainly birth weight, taken at different gestational ages are widely used by obstetricians and pediatricians. However, the relationship between delivery mode and neonatal anthropometric data has not been investigated in Japan or other countries.
BACKGROUND
The subjects were selected from the registration database of the Japan Society of Obstetrics and Gynecology (2003-2005). Tenth centile, median, and 90th centile of birth weight by sex, birth order, and delivery mode were observed by gestational age from 22 to 42 weeks among eligible singleton births.
METHODS
After excluding 248 outliers and 5243 births that did not satisfy the inclusion criteria, 144,980 births were included in the analysis. The distribution of 10th centile curves was skewed toward lower birth weights during the preterm period among both first live births and second and later live births delivered by cesarean section. More than 40% of both male and female live births were delivered by cesarean section at 37 weeks or earlier.
RESULTS
The large proportion of cesarean sections influenced the skewness of the birth weight distribution in the preterm period.
CONCLUSIONS
[ "Anthropometry", "Birth Order", "Birth Weight", "Cesarean Section", "Delivery, Obstetric", "Female", "Gestational Age", "Humans", "Infant, Newborn", "Infant, Premature", "Japan", "Male", "Pregnancy", "Reference Values" ]
3899412
INTRODUCTION
Neonatal anthropometric charts are based on the distribution of measurements, mainly birth weight, of neonates at different gestational ages.1 The Japanese neonatal anthropometric charts, which were revised in 1995,2 are widely distributed to Japanese obstetricians and pediatricians for managing pregnancy and newborns. Because more than 10 years had passed since publishing the revised charts, the research committee of the Ministry of Health, Welfare, and Labour for Multicenter Benchmark Research on Neonatal Outcomes in Japan attempted to develop new anthropometric charts. Due to the small sample size, the 1995 charts only contained data classified by sex and birth order. Using the registration database of the Japan Society of Obstetrics and Gynecology (JSOG), which includes a large number of pregnant women and their babies, we attempted to construct charts by mode of delivery, ie, vaginal delivery and cesarean section, as well as sex and birth order. This delivery mode-specific chart is unique to Japan, as no such chart exists in other countries.3–7 In this study, we describe the different birth-weight distributions by gestational age and mode of delivery and discuss the factors that influenced this distribution.
METHODS
JSOG manages a registration system for pregnant women and their infants. To construct new neonatal anthropometric charts, we collected data from 2003 to 2005 on gestational age, birth weight, sex, birth order, and information on complications of singleton births from this database. Because JSOG approved the use of their database for the purpose of creating new neonatal anthropometric charts, this study was not subject to institutional review. Stillborn infants and those with severe asphyxia (Apgar score of 0 at 1 and 5 minutes after delivery), hydrops, or malformations were excluded from the analysis. Infants with missing information on sex or gestational age were also excluded.

Regarding mode of delivery, 6 modes were reported in the registration database: natural vaginal delivery, vacuum-assisted vaginal delivery, forceps-assisted vaginal delivery, elective cesarean section, emergency cesarean section, and others. Natural vaginal delivery, vacuum-assisted vaginal delivery, and forceps-assisted vaginal delivery were defined as vaginal delivery, and elective and emergency cesarean sections were defined as cesarean delivery in this study. Because more than 80% of births delivered by elective cesarean section were delivered from 37 to 41 gestational weeks and approximately 60% of those delivered by emergency cesarean section were delivered at 36 weeks or earlier, we combined these modes of delivery in the analysis. Pregnant women for whom mode of delivery was classified as “others” were excluded from this analysis.

First, 10th centile, median, and 90th centile of birth weight by sex and birth order (first live births or second and later live births) were observed by gestational age from 22 to 42 weeks among all eligible births. Then, a similar observation was made by delivery mode. The values obtained were then plotted and fitted to cubic curves using the least squares method.
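As a rough illustration of the centile-and-curve procedure described above (not the authors' code), the sketch below computes the 10th, 50th, and 90th centiles of birth weight at each gestational week and fits each centile series to a cubic polynomial by ordinary least squares. The data and column names are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Hypothetical records: gestational age (weeks) and birth weight (grams).
ga_weeks = rng.integers(22, 43, size=5000)
birth_weight_g = 3000 - 110 * (40 - ga_weeks) + rng.normal(0, 350, size=5000)
births = pd.DataFrame({"ga_weeks": ga_weeks, "birth_weight_g": birth_weight_g})

# 10th centile, median, and 90th centile of birth weight for each gestational week.
centiles = (births.groupby("ga_weeks")["birth_weight_g"]
                  .quantile([0.10, 0.50, 0.90])
                  .unstack())

# Fit each centile series to a cubic curve by the least squares method.
weeks = centiles.index.to_numpy(dtype=float)
cubic_fits = {q: np.polyfit(weeks, centiles[q].to_numpy(), deg=3) for q in (0.10, 0.50, 0.90)}
for q, coeffs in cubic_fits.items():
    print(f"{int(q * 100)}th centile coefficients:", np.round(coeffs, 2))
```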
RESULTS
During the study period, 147 medical facilities participated in the JSOG registration system, and 150 471 singleton births were reported to the registration database. A total of 5243 births were excluded from the analysis; thus, the study population comprised 145 228 births. Then, an additional 248 clinical outliers were excluded from this population. Consequently, 144 980 singleton births (74 740 boys and 70 240 girls) were included in the analysis (Table 1). Among the 74 740 boys, 39 707 were first live births and 35 033 were second or later live births. Among the 70 240 girls, 36 827 and 33 413 were first live births and second or later live births, respectively.

Figure 1 shows the birth weight distribution of singleton male infants by gestational age and birth order. The 10th centile curves of first live births and second and later live births were skewed to lower birth weights in the preterm period. When the birth weight distributions are classified by delivery mode, the 10th centile curves were skewed to lower birth weights among both first live births and second and later live births delivered by cesarean section (Table 2, Figures 2 and 3). Coefficients of determination of all fitted curves were higher than 0.98, and the skewness was similar in the 10th centile curves of birth weight of female infants who were delivered by cesarean section (data not shown).

(a) There were no eligible first live male births born by cesarean section in gestational week 22. Values were plotted and fitted to cubic curves by using the least squares method in Figures 2 and 3.

The proportion of first live births delivered by cesarean section by gestational age is shown in Figure 4. More than 40% of male and female births were delivered by cesarean section at 37 weeks or earlier. From 26 to 29 weeks, more than 70% of births were delivered by cesarean section.
null
null
[ "INTRODUCTION" ]
[ "Neonatal anthropometric charts are based on the distribution of measurements, mainly birth weight, of neonates at different gestational ages.1 The Japanese neonatal anthropometric charts, which were revised in 1995,2 are widely distributed to Japanese obstetricians and pediatricians for managing pregnancy and newborns.\nBecause more than 10 years had passed since publishing the revised charts, the research committee of the Ministry of Health, Welfare, and Labour for Multicenter Benchmark Research on Neonatal Outcomes in Japan attempted to develop new anthropometric charts. Due to the small sample size, the 1995 charts only contained data classified by sex and birth order. Using the registration database of the Japan Society of Obstetrics and Gynecology (JSOG), which includes a large number of pregnant women and their babies, we attempted to construct charts by mode of delivery, ie, vaginal delivery and cesarean section, as well as sex and birth order. This delivery mode-specific chart is unique to Japan, as no such chart exists in other countries.3–7 In this study, we describe the different birth-weight distributions by gestational age and mode of delivery and discuss the factors that influenced this distribution." ]
[ null ]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION" ]
[ "Neonatal anthropometric charts are based on the distribution of measurements, mainly birth weight, of neonates at different gestational ages.1 The Japanese neonatal anthropometric charts, which were revised in 1995,2 are widely distributed to Japanese obstetricians and pediatricians for managing pregnancy and newborns.\nBecause more than 10 years had passed since publishing the revised charts, the research committee of the Ministry of Health, Welfare, and Labour for Multicenter Benchmark Research on Neonatal Outcomes in Japan attempted to develop new anthropometric charts. Due to the small sample size, the 1995 charts only contained data classified by sex and birth order. Using the registration database of the Japan Society of Obstetrics and Gynecology (JSOG), which includes a large number of pregnant women and their babies, we attempted to construct charts by mode of delivery, ie, vaginal delivery and cesarean section, as well as sex and birth order. This delivery mode-specific chart is unique to Japan, as no such chart exists in other countries.3–7 In this study, we describe the different birth-weight distributions by gestational age and mode of delivery and discuss the factors that influenced this distribution.", "JSOG manages a registration system for pregnant women and their infants. To construct new neonatal anthropometric charts, we collected data from 2003 to 2005 on gestational age, birth weight, sex, birth order, and information on complications of singleton births from this database. Because JSOG approved the use of their database for the purpose of creating new neonatal anthropometric charts, this study was not subject to institutional review. Stillborn infants and those with severe asphyxia (Apgar score of 0 at 1 and 5 minutes after delivery), hydrops, or malformations were excluded from the analysis. Infants with missing information on sex or gestational age were also excluded.\nRegarding mode of delivery, 6 modes were reported in the registration database: natural vaginal delivery, vacuum-assisted vaginal delivery, forceps-assisted vaginal delivery, elective cesarean section, emergency cesarean section, and others. Natural vaginal delivery, vacuum-assisted vaginal delivery, and forceps-assisted vaginal delivery were defined as vaginal delivery, and elective and emergency cesarean sections were defined as cesarean delivery in this study. Because more than 80% of births delivered by elective cesarean section were delivered from 37 to 41 gestational weeks and approximately 60% of those delivered by emergency cesarean section were delivered at 36 week or earlier, we combined these modes of delivery in the analysis. Pregnant women for whom mode of delivery was classified as “others” were excluded from this analysis.\nFirst, 10th centile, median, and 90th centile of birth weight by sex and birth order (first live births or second and later live births) were observed by gestational age from 22 to 42 weeks among all eligible births. Then, a similar observation was made by delivery mode. The values obtained were then plotted and fitted to cubic curves using the least squares method.", "During the study period, 147 medical facilities participated in the JSOG registration system, and 150 471 singleton births were reported to the registration database. A total of 5243 births were excluded from the analysis; thus, the study population comprised 145 228 births. Then, an additional 248 clinical outliers were excluded from this population. 
Consequently, 144 980 singleton births (74 740 boys and 70 240 girls) were included in the analysis (Table 1). Among the 74 740 boys, 39 707 were first live births and 35 033 were second or later live births. Among the 70 240 girls, 36 827 and 33 413 were first live births and second or later live births, respectively.\nFigure 1 shows the birth weight distribution of singleton male infants by gestational age and birth order. The 10th centile curves of first live births and second and later live births were skewed to lower birth weights in the preterm period. When the birth weight distributions are classified by delivery mode, the 10th centile curves were skewed to lower birth weights among both first live births and second and later live births delivered by cesarean section (Table 2, Figures 2 and 3). Coefficients of determination of all fitted curves were higher than 0.98, and the skewness was similar in 10th centile curves of birth weight of female infants who were delivered by cesarean section (data not shown).\naThere were no eligible first live male births born by cesarean section in gestational week 22.\nValues were plotted and fitted to cubic curves by using the least squares method in Figures 2 and 3.\nThe proportion of first live births delivered by cesarean section by gestational age is shown in Figure 4. More than 40% of male and female births were delivered by cesarean section at 37 weeks or earlier. From 26 to 29 weeks, more than 70% of births were delivered by cesarean section.", "The 10th centile birth weight curves of Japanese singleton infants by gestational age were skewed toward low values during the preterm period. Cesarean section influenced this distribution because the proportion of births delivered by cesarean section was large during the preterm period, especially from 26 to 29 weeks. As curves for 24 to 26 gestational weeks appeared to be markedly skewed toward low values, there was a difference in gestational period between the area of the curves with the most skewness and that representing the largest proportion of cesarean sections. We were unable to determine the reason for this, as no country has included delivery mode in neonatal anthropometric charts.2–7 Due to this uncertainty, the research committee for creating new neonatal anthropometric charts in Japan decided to eliminate cesarean deliveries from the charts. The new Japanese neonatal anthropometric chart will thus include only the birth weight of singleton infants born by vaginal delivery as standard curves, which are created after excluding factors related to fetal growth. We used the least squares methods to calculate the distribution of birth weights in this study because it was also employed in the revised charts in 1995.2 The LMS (λ, μ, σ) method, however, will be used to create the new Japanese charts.8\nMore than 40% of preterm infants were delivered by cesarean section in Japan. The proportion of cesarean sections was reported to be increasing among preterm infants in the United States.9–11 The reasons for cesarean section are not available in the JSOG registration database; however, one known reason is fetal growth restriction (FGR), which is a decrease in the fetal growth rate that inhibits an infant from obtaining its complete genetic growth potential. 
FGR is caused by placental dysfunction or maternal complications such as pre-eclampsia.12,13 It is associated with increased perinatal mortality and morbidity, as well as with increased risk of long-term complications such as impaired neurodevelopment, adult type 2 diabetes, and hypertension.13 Ultrasonography techniques, including the non-stress test, biophysical profile scoring, and pulse Doppler methods, enable obstetricians to carefully evaluate fetal growth.14 Due to these methods of fetal management, especially observation of growth in fetal head circumference, obstetricians are more likely to deliver fetuses with FGR during preterm in the event of non-reassuring fetal status. Indeed, approximately 80% of fetuses with FGR were delivered by cesarean section in European countries.15 Cesarean section is also likely to be selected in cases of preterm premature rupture of membranes.16\nBecause the JSOG database mainly includes tertiary hospitals, low birth weight infants were overrepresented in our study population as compared with the general population. It has been reported that whereas 8.5% of male births and 10.8% of female births were less than 2500 grams in the general population, approximately 25% of births were less than 2500 grams in some tertiary hospitals.17–19 In addition, pregnant women with complications might be more likely to be admitted to, and undergo cesarean section in, tertiary hospitals. Due to this selection bias, 10th centile birth weights of cesarean section births may be less than those of the general population. The reliability of gestational age is the most important issue in creating neonatal anthropometric charts. We were unable to confirm whether gestational age was assessed by ultrasonography during first trimester among pregnant women registered in the JSOG system. Many Japanese clinics and hospitals that treat pregnant women have ultrasonography equipment. However, because estimation of gestational age by ultrasonography was not mentioned in Japanese guidelines for obstetrical practice, some facilities may have calculated gestational age by asking pregnant women about their last menstrual period.20\nIn conclusion, the 10th centile birth weight curves of Japanese singleton infants delivered by cesarean section by gestational age were skewed toward low values during the preterm period. This might reflect the fact that fetuses with FGR were more likely to be delivered by cesarean section to prevent worsening fetal growth. Thus, the birth weights of singleton infants born by vaginal delivery were used as standard curves to develop new Japanese neonatal anthropometric charts." ]
[ null, "methods", "results", "discussion" ]
[ "birth weight", "distribution", "gestational age", "cesarean section", "preterm" ]
INTRODUCTION: Neonatal anthropometric charts are based on the distribution of measurements, mainly birth weight, of neonates at different gestational ages.1 The Japanese neonatal anthropometric charts, which were revised in 1995,2 are widely distributed to Japanese obstetricians and pediatricians for managing pregnancy and newborns. Because more than 10 years had passed since publishing the revised charts, the research committee of the Ministry of Health, Welfare, and Labour for Multicenter Benchmark Research on Neonatal Outcomes in Japan attempted to develop new anthropometric charts. Due to the small sample size, the 1995 charts only contained data classified by sex and birth order. Using the registration database of the Japan Society of Obstetrics and Gynecology (JSOG), which includes a large number of pregnant women and their babies, we attempted to construct charts by mode of delivery, ie, vaginal delivery and cesarean section, as well as sex and birth order. This delivery mode-specific chart is unique to Japan, as no such chart exists in other countries.3–7 In this study, we describe the different birth-weight distributions by gestational age and mode of delivery and discuss the factors that influenced this distribution. METHODS: JSOG manages a registration system for pregnant women and their infants. To construct new neonatal anthropometric charts, we collected data from 2003 to 2005 on gestational age, birth weight, sex, birth order, and information on complications of singleton births from this database. Because JSOG approved the use of their database for the purpose of creating new neonatal anthropometric charts, this study was not subject to institutional review. Stillborn infants and those with severe asphyxia (Apgar score of 0 at 1 and 5 minutes after delivery), hydrops, or malformations were excluded from the analysis. Infants with missing information on sex or gestational age were also excluded. Regarding mode of delivery, 6 modes were reported in the registration database: natural vaginal delivery, vacuum-assisted vaginal delivery, forceps-assisted vaginal delivery, elective cesarean section, emergency cesarean section, and others. Natural vaginal delivery, vacuum-assisted vaginal delivery, and forceps-assisted vaginal delivery were defined as vaginal delivery, and elective and emergency cesarean sections were defined as cesarean delivery in this study. Because more than 80% of births delivered by elective cesarean section were delivered from 37 to 41 gestational weeks and approximately 60% of those delivered by emergency cesarean section were delivered at 36 week or earlier, we combined these modes of delivery in the analysis. Pregnant women for whom mode of delivery was classified as “others” were excluded from this analysis. First, 10th centile, median, and 90th centile of birth weight by sex and birth order (first live births or second and later live births) were observed by gestational age from 22 to 42 weeks among all eligible births. Then, a similar observation was made by delivery mode. The values obtained were then plotted and fitted to cubic curves using the least squares method. RESULTS: During the study period, 147 medical facilities participated in the JSOG registration system, and 150 471 singleton births were reported to the registration database. A total of 5243 births were excluded from the analysis; thus, the study population comprised 145 228 births. Then, an additional 248 clinical outliers were excluded from this population. 
Consequently, 144 980 singleton births (74 740 boys and 70 240 girls) were included in the analysis (Table 1). Among the 74 740 boys, 39 707 were first live births and 35 033 were second or later live births. Among the 70 240 girls, 36 827 and 33 413 were first live births and second or later live births, respectively. Figure 1 shows the birth weight distribution of singleton male infants by gestational age and birth order. The 10th centile curves of first live births and second and later live births were skewed to lower birth weights in the preterm period. When the birth weight distributions are classified by delivery mode, the 10th centile curves were skewed to lower birth weights among both first live births and second and later live births delivered by cesarean section (Table 2, Figures 2 and 3). Coefficients of determination of all fitted curves were higher than 0.98, and the skewness was similar in 10th centile curves of birth weight of female infants who were delivered by cesarean section (data not shown). aThere were no eligible first live male births born by cesarean section in gestational week 22. Values were plotted and fitted to cubic curves by using the least squares method in Figures 2 and 3. The proportion of first live births delivered by cesarean section by gestational age is shown in Figure 4. More than 40% of male and female births were delivered by cesarean section at 37 weeks or earlier. From 26 to 29 weeks, more than 70% of births were delivered by cesarean section. DISCUSSION: The 10th centile birth weight curves of Japanese singleton infants by gestational age were skewed toward low values during the preterm period. Cesarean section influenced this distribution because the proportion of births delivered by cesarean section was large during the preterm period, especially from 26 to 29 weeks. As curves for 24 to 26 gestational weeks appeared to be markedly skewed toward low values, there was a difference in gestational period between the area of the curves with the most skewness and that representing the largest proportion of cesarean sections. We were unable to determine the reason for this, as no country has included delivery mode in neonatal anthropometric charts.2–7 Due to this uncertainty, the research committee for creating new neonatal anthropometric charts in Japan decided to eliminate cesarean deliveries from the charts. The new Japanese neonatal anthropometric chart will thus include only the birth weight of singleton infants born by vaginal delivery as standard curves, which are created after excluding factors related to fetal growth. We used the least squares methods to calculate the distribution of birth weights in this study because it was also employed in the revised charts in 1995.2 The LMS (λ, μ, σ) method, however, will be used to create the new Japanese charts.8 More than 40% of preterm infants were delivered by cesarean section in Japan. The proportion of cesarean sections was reported to be increasing among preterm infants in the United States.9–11 The reasons for cesarean section are not available in the JSOG registration database; however, one known reason is fetal growth restriction (FGR), which is a decrease in the fetal growth rate that inhibits an infant from obtaining its complete genetic growth potential. 
FGR is caused by placental dysfunction or maternal complications such as pre-eclampsia.12,13 It is associated with increased perinatal mortality and morbidity, as well as with increased risk of long-term complications such as impaired neurodevelopment, adult type 2 diabetes, and hypertension.13 Ultrasonography techniques, including the non-stress test, biophysical profile scoring, and pulse Doppler methods, enable obstetricians to carefully evaluate fetal growth.14 Due to these methods of fetal management, especially observation of growth in fetal head circumference, obstetricians are more likely to deliver fetuses with FGR during preterm in the event of non-reassuring fetal status. Indeed, approximately 80% of fetuses with FGR were delivered by cesarean section in European countries.15 Cesarean section is also likely to be selected in cases of preterm premature rupture of membranes.16 Because the JSOG database mainly includes tertiary hospitals, low birth weight infants were overrepresented in our study population as compared with the general population. It has been reported that whereas 8.5% of male births and 10.8% of female births were less than 2500 grams in the general population, approximately 25% of births were less than 2500 grams in some tertiary hospitals.17–19 In addition, pregnant women with complications might be more likely to be admitted to, and undergo cesarean section in, tertiary hospitals. Due to this selection bias, 10th centile birth weights of cesarean section births may be less than those of the general population. The reliability of gestational age is the most important issue in creating neonatal anthropometric charts. We were unable to confirm whether gestational age was assessed by ultrasonography during first trimester among pregnant women registered in the JSOG system. Many Japanese clinics and hospitals that treat pregnant women have ultrasonography equipment. However, because estimation of gestational age by ultrasonography was not mentioned in Japanese guidelines for obstetrical practice, some facilities may have calculated gestational age by asking pregnant women about their last menstrual period.20 In conclusion, the 10th centile birth weight curves of Japanese singleton infants delivered by cesarean section by gestational age were skewed toward low values during the preterm period. This might reflect the fact that fetuses with FGR were more likely to be delivered by cesarean section to prevent worsening fetal growth. Thus, the birth weights of singleton infants born by vaginal delivery were used as standard curves to develop new Japanese neonatal anthropometric charts.
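The Discussion notes that the new Japanese charts will be built with the LMS (λ, μ, σ) method rather than the least-squares cubic fits used here. For reference, the LMS centile curves are commonly written (in Cole's formulation, stated here as general background rather than taken from this article) as

\[
C_{\alpha}(t) =
\begin{cases}
M(t)\left[1 + L(t)\,S(t)\,z_{\alpha}\right]^{1/L(t)}, & L(t) \neq 0,\\
M(t)\exp\left(S(t)\,z_{\alpha}\right), & L(t) = 0,
\end{cases}
\]

where M(t), S(t), and L(t) are the median, coefficient of variation, and Box-Cox power at gestational age t, and z_α is the standard normal deviate of the desired centile (for the 10th centile, z ≈ −1.28).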
Background: Neonatal anthropometric charts of the distribution of measurements, mainly birth weight, taken at different gestational ages are widely used by obstetricians and pediatricians. However, the relationship between delivery mode and neonatal anthropometric data has not been investigated in Japan or other countries. Methods: The subjects were selected from the registration database of the Japan Society of Obstetrics and Gynecology (2003-2005). Tenth centile, median, and 90th centile of birth weight by sex, birth order, and delivery mode were observed by gestational age from 22 to 42 weeks among eligible singleton births. Results: After excluding 248 outliers and 5243 births that did not satisfy the inclusion criteria, 144,980 births were included in the analysis. The distribution of 10th centile curves was skewed toward lower birth weights during the preterm period among both first live births and second and later live births delivered by cesarean section. More than 40% of both male and female live births were delivered by cesarean section at 37 weeks or earlier. Conclusions: The large proportion of cesarean sections influenced the skewness of the birth weight distribution in the preterm period.
null
null
1,660
215
[ 210 ]
4
[ "births", "cesarean", "birth", "delivery", "cesarean section", "section", "gestational", "charts", "delivered", "gestational age" ]
[ "mode neonatal anthropometric", "new neonatal anthropometric", "japanese guidelines obstetrical", "neonatal outcomes japan", "anthropometric charts japan" ]
null
null
[CONTENT] birth weight | distribution | gestational age | cesarean section | preterm [SUMMARY]
[CONTENT] birth weight | distribution | gestational age | cesarean section | preterm [SUMMARY]
[CONTENT] birth weight | distribution | gestational age | cesarean section | preterm [SUMMARY]
null
[CONTENT] birth weight | distribution | gestational age | cesarean section | preterm [SUMMARY]
null
[CONTENT] Anthropometry | Birth Order | Birth Weight | Cesarean Section | Delivery, Obstetric | Female | Gestational Age | Humans | Infant, Newborn | Infant, Premature | Japan | Male | Pregnancy | Reference Values [SUMMARY]
[CONTENT] Anthropometry | Birth Order | Birth Weight | Cesarean Section | Delivery, Obstetric | Female | Gestational Age | Humans | Infant, Newborn | Infant, Premature | Japan | Male | Pregnancy | Reference Values [SUMMARY]
[CONTENT] Anthropometry | Birth Order | Birth Weight | Cesarean Section | Delivery, Obstetric | Female | Gestational Age | Humans | Infant, Newborn | Infant, Premature | Japan | Male | Pregnancy | Reference Values [SUMMARY]
null
[CONTENT] Anthropometry | Birth Order | Birth Weight | Cesarean Section | Delivery, Obstetric | Female | Gestational Age | Humans | Infant, Newborn | Infant, Premature | Japan | Male | Pregnancy | Reference Values [SUMMARY]
null
[CONTENT] mode neonatal anthropometric | new neonatal anthropometric | japanese guidelines obstetrical | neonatal outcomes japan | anthropometric charts japan [SUMMARY]
[CONTENT] mode neonatal anthropometric | new neonatal anthropometric | japanese guidelines obstetrical | neonatal outcomes japan | anthropometric charts japan [SUMMARY]
[CONTENT] mode neonatal anthropometric | new neonatal anthropometric | japanese guidelines obstetrical | neonatal outcomes japan | anthropometric charts japan [SUMMARY]
null
[CONTENT] mode neonatal anthropometric | new neonatal anthropometric | japanese guidelines obstetrical | neonatal outcomes japan | anthropometric charts japan [SUMMARY]
null
[CONTENT] births | cesarean | birth | delivery | cesarean section | section | gestational | charts | delivered | gestational age [SUMMARY]
[CONTENT] births | cesarean | birth | delivery | cesarean section | section | gestational | charts | delivered | gestational age [SUMMARY]
[CONTENT] births | cesarean | birth | delivery | cesarean section | section | gestational | charts | delivered | gestational age [SUMMARY]
null
[CONTENT] births | cesarean | birth | delivery | cesarean section | section | gestational | charts | delivered | gestational age [SUMMARY]
null
[CONTENT] charts | japan | birth | delivery | different | attempted | anthropometric | anthropometric charts | neonatal | revised [SUMMARY]
[CONTENT] delivery | vaginal | vaginal delivery | assisted | assisted vaginal | assisted vaginal delivery | births | cesarean | elective | emergency [SUMMARY]
[CONTENT] births | live | live births | delivered cesarean section | delivered cesarean | curves | delivered | later live | second later live births | second later live [SUMMARY]
null
[CONTENT] births | delivery | cesarean | birth | charts | cesarean section | section | live | live births | gestational [SUMMARY]
null
[CONTENT] ||| Japan [SUMMARY]
[CONTENT] the Japan Society of Obstetrics and Gynecology | 2003-2005 ||| Tenth | 90th | 22 to 42 weeks [SUMMARY]
[CONTENT] 248 | 5243 | 144,980 ||| 10th | first | second ||| More than 40% | 37 weeks [SUMMARY]
null
[CONTENT] ||| Japan ||| the Japan Society of Obstetrics and Gynecology | 2003-2005 ||| Tenth | 90th | 22 to 42 weeks ||| 248 | 5243 | 144,980 ||| 10th | first | second ||| More than 40% | 37 weeks ||| [SUMMARY]
null
[The influence of occupational activity on diseases of the musculoskeletal system of the upper extremity].
34939146
Diseases of the musculoskeletal system of the upper extremity account for an increasing amount of sickness-related absenteeism among the working population.
INTRODUCTION
We included 1070 patients who underwent surgical rotator cuff (RC) reconstruction for an RC lesion between 2016 and 2019. The relevant data were documented retrospectively from the hospital information system. The patients' occupations were classified according to the Classification of Occupations 2010 (KldB 2010) and compared with routinely recorded, anonymized, freely available data (Federal Statistical Office, Federal Employment Agency).
MATERIALS AND METHODS
Of the 1070 patients, 844 were of working age. The age structure of the individual areas showed no significant differences. Based on the comparisons of patient data with the population, significantly higher RC injury rates were found in agriculture, forestry, animal husbandry and horticulture (p = 0.003); construction, architecture, surveying and building services engineering (p < 0.001); transport, logistics, protection and security (p < 0.001); and business organization, accounting, law and administration (p < 0.001). There was a significantly reduced risk in science, geography and computer science (p = 0.015); commercial services, goods trade, distribution, hotel and tourism (p < 0.001); and health, social affairs, teaching and education (p < 0.001).
RESULTS
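The branch-specific p-values above come from comparing the occupational distribution of the operated patients with that of the employed population recorded under the KldB 2010 (chi-square test, as named in the methods reproduced later in this record). The exact construction of the test is not stated, so the sketch below shows only one plausible per-branch comparison; the function name and interface are invented for illustration, and the example counts are taken from the text (construction: 133 of 589 working patients versus roughly 9% of the employed population).

```python
# Illustrative sketch only (not the authors' code): per-branch chi-square test of the
# patient cohort's occupational distribution against the employed population's
# KldB 2010 sector shares.
from scipy.stats import chisquare

def sector_p_value(patients_in_branch: int, patients_total: int,
                   population_share: float) -> float:
    # Observed in/out split of patients versus the split expected from the
    # branch's share of the working population (1 degree of freedom).
    observed = [patients_in_branch, patients_total - patients_in_branch]
    expected = [patients_total * population_share,
                patients_total * (1.0 - population_share)]
    return chisquare(f_obs=observed, f_exp=expected).pvalue

# Example (construction, architecture, surveying and building services engineering):
# sector_p_value(133, 589, 0.09)  # strongly significant, consistent with p < 0.001
```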
The prevalence of RC lesions shows a statistical correlation with the occupation performed, varying across occupational branches. In addition to occupation, gender-specific work factors play a role. Shoulder pain in gainfully employed patients should therefore be assessed in a more differentiated way, so that preventive measures can be targeted appropriately.
CONCLUSION
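The results above also state that the age structure showed no significant differences between occupational branches; according to the methods section reproduced later in this record, this comparison was made with a Kruskal-Wallis test (reported there as p = 0.493). A minimal sketch under that assumption, with a hypothetical per-branch mapping of patient ages:

```python
# Illustrative sketch only: Kruskal-Wallis comparison of patient age across the
# KldB 2010 occupational branches. `ages_by_branch` is a hypothetical mapping
# from branch name to a list of patient ages.
from scipy.stats import kruskal

def age_difference_p_value(ages_by_branch: dict) -> float:
    # Drop empty branches, then test whether the age distributions differ
    groups = [ages for ages in ages_by_branch.values() if ages]
    return kruskal(*groups).pvalue

# Example:
# age_difference_p_value({"construction": [55, 58, 61], "logistics": [54, 57, 60]})
```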
[ "Humans", "Musculoskeletal Diseases", "Musculoskeletal System", "Occupational Diseases", "Retrospective Studies", "Upper Extremity" ]
9352613
null
null
null
null
null
null
Conclusion for practice
The occupation performed can influence the development of rotator cuff (RC) lesions. Specific secondary health-related risks can likewise influence the development of RC lesions. The known associations between strenuous activities of the upper extremity and the epidemiological distribution of RC lesions across occupational branches found in the present study suggest a clustering of shoulder-straining activities. Occupational sectors with shoulder-straining activities should be examined more closely in order to define predisposing factors and, where appropriate, to target prevention or even the definition of an occupational disease. Upper-extremity symptoms in patients working in predisposed occupational branches should be assessed in a differentiated way.
[ "Kurze Hinführung zum Thema", "Einleitung", "Methodik", "Die KldB 2010", "Daten für Deutschland", "Studienvergleichsparameter", "Datenerhebung", "Ergebnisse", "Epidemiologische Daten", "Altersverteilung", "Vergleich Altersverteilung Studienpopulation und KldB 2010 (2020)", "Erwerbsstatus und Geschlecht", "Erwerbsstatus, Geschlecht, RM-Läsion im Vergleich zur erwerbstätigen Gesamtbevölkerung nach KldB 2010 (2020)", "RM-Läsionen in der Studienpopulation im Vergleich zur Gesamtbevölkerung nach KldB 2010 (2020)", "Einfluss von Geschlecht in Abhängigkeit von KldB 2010 (2020)", "RM-Läsionen im GUV-Verfahren", "Diskussion", "Limitation" ]
[ "Einen entscheidenden Einfluss auf die Entstehung von Erkrankungen des Bewegungsapparates der oberen Extremität hat der aktuell ausgeübte Beruf. Der Einfluss des Berufs resultiert dabei aus einer Reihe von Faktoren. Naheliegend ist zunächst die Annahme von berufsspezifisch unterschiedlichen gesundheitsbezogenen Risiken als Folge der Belastung am Arbeitsplatz. Bisherige Erkenntnisse in diesem Zusammenhang weisen darauf hin, dass bestimmte Berufszweige einem höheren Erkrankungsrisiko am Arbeitsplatz ausgesetzt sind als andere. Kann die berufliche Belastung so groß sein, dass sie zu einer Krankheit führt, oder spielen dabei andere Aspekte außerhalb des Arbeitslebens eine größere Rolle für die Krankheitsentstehung?", "Das Schultergelenk mit multidirektionaler Beweglichkeit und hoher Bedeutung für die Arm- und Handfunktion spielt eine wichtige Rolle für die Arbeitsfähigkeit [1, 2]. Läsionen der Rotatorenmanschette (RM) sind eine häufige muskuloskelettale Verletzung der oberen Extremität in der arbeitenden Bevölkerung. Hiervon sind 6,60 % der Männer und 8,50 % der Frauen von einer betroffen [3]. Die Folge sind lange Abwesenheitsperioden vom Arbeitsplatz [4, 5]. Rotatorenmanschettenpathologien und Schmerzen der Schulter sind in Finnland die häufigste Ursache für krankheitsbedingte Abwesenheit vom Arbeitsplatz [6, 7]. Hierbei wurde die krankheitsbedingte Abwesenheit anhand des Auftretens und der Dauer der Abwesenheit bei den krankheitsbedingten Fehlzeiten aufgrund von Muskel-Skelett-Erkrankungen definiert. Die krankheitsbedingte Abwesenheit hing in der Arbeit von Pekkala et al. von den geleisteten Arbeitsjahren ab, mit einem Hauptanteil zwischen dem 45. und 65. Lebensjahr. Je höher die Anzahl der geleisteten Arbeitsjahre war, umso länger war die krankheitsbedingte Abwesenheit von Erwerbstätigen aufgrund einer RM-Läsion [6]. Einen entscheidenden Einfluss auf die Entstehung von Erkrankungen des Bewegungsapparates hat der aktuell ausgeübte Beruf. Der Einfluss des Berufs resultiert dabei aus einer Reihe von Faktoren. Naheliegend ist zunächst die Annahme von berufsspezifisch unterschiedlichen gesundheitsbezogenen Risiken als Folge der Belastung am Arbeitsplatz. Zu den arbeitsassoziierten Faktoren gehört u. a. das häufige Tragen von Gewichten, stark wiederholende Arbeiten und Überkopfarbeiten [8]. In der Literatur finden sich wenig Daten, die den Einfluss der beruflichen Tätigkeit auf die Entstehung von RM-Läsionen darstellen [9]. Bisherige Erkenntnisse in diesem Zusammenhang weisen darauf hin, dass bestimmte Berufszweige einem höheren Erkrankungsrisiko am Arbeitsplatz ausgesetzt sind als andere [10–12]. Ziel dieser Studie ist es, den Einfluss der Berufsabhängigkeit auf die Entstehung von RM-Läsionen zu untersuchen und neben berufsspezifischen Faktoren, gesundheitsbezogene Risiken darzustellen.", "Die zuständige Ethikkommission der Universität Jena wurde informiert und hatte keine Einwände gegen die retrospektive, monozentrische Auswertung der Studie mit dem Namen PAMO-NUNK-Shoulder-Study (Reg.-Nr.: 2018-1165-Daten). In dieser Studie wurden alle Patienten eingeschlossen, welche eine arthroskopische Rekonstruktion der RM in den Jahren 2016 bis 2019 erhalten haben. Die Patientenauswahl erfolgte anhand der internationalen Klassifikation der Behandlungsmethoden in der Medizin mit ICD-Codes: M75.1 (Läsionen der RM) und S46.0 (Verletzung der Muskeln und der Sehnen der RM). 
Anhand der dokumentierten Anamnese und Operationsberichte wurden alle Patienten insbesondere im Hinblick auf ihre berufliche Tätigkeit ausgewertet. Es konnten 1070 Patienten eingeschlossen werden. Bei allen Patienten wurde der aktuelle Beruf dokumentiert, sowie Berentung oder Arbeitslosigkeit, Alter und Geschlecht. Weiterhin wurde die gegebenenfalls zugehörige Berufsgenossenschaft (BG) von der BG anerkannten Verletzungen erhoben.\nDie KldB 2010 Die Klassifikation der Berufe 2010, kurz KldB 2010, wurde von der Bundesagentur für Arbeit und dem Institut für Arbeitsmarkt- und Berufsforschung unter Beteiligung des Statistischen Bundesamtes und den betroffenen Bundesministerien sowie Experten der berufskundlichen und empirischen Forschung entwickelt. Diese wurde im Jahr 2011 eingeführt. Die KldB 2010 wurde vollständig neu entwickelt und löst die Klassifizierung der Berufe aus den Jahren 1988 und 1992 ab. Die KldB 2010 bildet die aktuelle Berufslandschaft in Deutschland realitätsnah ab und bietet zugleich eine hohe Kompatibilität zu anderen internationalen Berufsklassifikationen. Dabei werden Berufe in zehn übergeordneten Berufszweigen sektorenorientiert zusammengefasst (Tab. 1). Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt.BerufsbereichAktuelle Zahlen 03/2020Gesamt:33.648.183Land‑, Forst- und Tierwirtschaft und Gartenbau500.654Rohstoffgewinnung, Produktion und Fertigung7.218.186Bau, Architektur, Vermessung und Gebäudetechnik2.019.667Naturwissenschaft, Geografie und Informatik1.357.948Verkehr, Logistik, Schutz und Sicherheit4.499.345Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus3.971.079Unternehmensorganisation, Buchhaltung, Recht und Verwaltung6.819.708Gesundheit, Soziales, Lehre und Erziehung6.181.809Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung892.605Keine Angabe187.182\nDie Klassifikation der Berufe 2010, kurz KldB 2010, wurde von der Bundesagentur für Arbeit und dem Institut für Arbeitsmarkt- und Berufsforschung unter Beteiligung des Statistischen Bundesamtes und den betroffenen Bundesministerien sowie Experten der berufskundlichen und empirischen Forschung entwickelt. Diese wurde im Jahr 2011 eingeführt. Die KldB 2010 wurde vollständig neu entwickelt und löst die Klassifizierung der Berufe aus den Jahren 1988 und 1992 ab. Die KldB 2010 bildet die aktuelle Berufslandschaft in Deutschland realitätsnah ab und bietet zugleich eine hohe Kompatibilität zu anderen internationalen Berufsklassifikationen. Dabei werden Berufe in zehn übergeordneten Berufszweigen sektorenorientiert zusammengefasst (Tab. 1). Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt.BerufsbereichAktuelle Zahlen 03/2020Gesamt:33.648.183Land‑, Forst- und Tierwirtschaft und Gartenbau500.654Rohstoffgewinnung, Produktion und Fertigung7.218.186Bau, Architektur, Vermessung und Gebäudetechnik2.019.667Naturwissenschaft, Geografie und Informatik1.357.948Verkehr, Logistik, Schutz und Sicherheit4.499.345Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus3.971.079Unternehmensorganisation, Buchhaltung, Recht und Verwaltung6.819.708Gesundheit, Soziales, Lehre und Erziehung6.181.809Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung892.605Keine Angabe187.182\nDaten für Deutschland Nach Aussage des statistischen Bundesamtes belief sich die Einwohnerzahl in Deutschland im März 2020 auf 83 Mio. 
Einwohner, wovon 54 Mio. im erwerbsfähigen Alter (15–65 Jahre) waren, 33 Mio. gingen einer sozialversicherungspflichtigen Tätigkeit nach [13]. Bei der Betrachtung der Wirtschaftssektoren befanden sich 2017 lediglich 1,4 % der Erwerbstätigen im primären Sektor (Land- und Forstwirtschaft, Fischerei), wohingegen im sekundären Sektor (produzierendes Gewerbe) 24,1 % eine Beschäftigung fanden. Mit 74,5 % war 2017 der Dienstleistungssektor am stärksten vertreten. Die Gewichtung dieser Bereiche lässt sich durch den strukturellen Wandel der Gesellschaft, welcher sich durch beispielsweise veränderte Nachfrage und zunehmende Automatisierung erklärt, verstehen. Im Dienstleistungssektor ist der Bereich öffentliche Dienstleister, Erziehung, Gesundheit mit 10,9 Mio. Erwerbstätigen (24,7 %) am stärksten gewichtet. Ähnlich viele Personen (10,1 Mio./22,8 %) finden im Wirtschaftsbereich Handel, Verkehr und Gastgewerbe eine Beschäftigung. Nicht zu vernachlässigen ist das Gebiet der Finanzierung, Immobilien, Unternehmensdienstleister mit immerhin 17,4 % aller Erwerbstätigen [14]. Die Bundesagentur für Arbeit gruppierte die Erwerbstätigen in Deutschland anhand der Klassifikation der Berufe 2010. Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt [15, 16].\nNach Aussage des statistischen Bundesamtes belief sich die Einwohnerzahl in Deutschland im März 2020 auf 83 Mio. Einwohner, wovon 54 Mio. im erwerbsfähigen Alter (15–65 Jahre) waren, 33 Mio. gingen einer sozialversicherungspflichtigen Tätigkeit nach [13]. Bei der Betrachtung der Wirtschaftssektoren befanden sich 2017 lediglich 1,4 % der Erwerbstätigen im primären Sektor (Land- und Forstwirtschaft, Fischerei), wohingegen im sekundären Sektor (produzierendes Gewerbe) 24,1 % eine Beschäftigung fanden. Mit 74,5 % war 2017 der Dienstleistungssektor am stärksten vertreten. Die Gewichtung dieser Bereiche lässt sich durch den strukturellen Wandel der Gesellschaft, welcher sich durch beispielsweise veränderte Nachfrage und zunehmende Automatisierung erklärt, verstehen. Im Dienstleistungssektor ist der Bereich öffentliche Dienstleister, Erziehung, Gesundheit mit 10,9 Mio. Erwerbstätigen (24,7 %) am stärksten gewichtet. Ähnlich viele Personen (10,1 Mio./22,8 %) finden im Wirtschaftsbereich Handel, Verkehr und Gastgewerbe eine Beschäftigung. Nicht zu vernachlässigen ist das Gebiet der Finanzierung, Immobilien, Unternehmensdienstleister mit immerhin 17,4 % aller Erwerbstätigen [14]. Die Bundesagentur für Arbeit gruppierte die Erwerbstätigen in Deutschland anhand der Klassifikation der Berufe 2010. Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt [15, 16].\nStudienvergleichsparameter Anhand der KldB 2010 wurden die dokumentierten Berufe der Patienten eingeteilt. Hierdurch wird eine einheitliche Berufsklassifikation innerhalb dieser Arbeit geschaffen, die statistische Vergleiche zur Gesamtbevölkerung ermöglichen. Diese Daten dienen als Vergleichsgrundlage der Studienergebnisse mit denen von der KldB 2010 aus dem Jahr 2020.\nAnhand der KldB 2010 wurden die dokumentierten Berufe der Patienten eingeteilt. Hierdurch wird eine einheitliche Berufsklassifikation innerhalb dieser Arbeit geschaffen, die statistische Vergleiche zur Gesamtbevölkerung ermöglichen. Diese Daten dienen als Vergleichsgrundlage der Studienergebnisse mit denen von der KldB 2010 aus dem Jahr 2020.\nDatenerhebung Die Datenerhebung wurde mittels Microsoft Excel 365 (Fa. 
Microsoft, Redmond, WA, USA) und die statistische Analyse mittels des Statistikprogramms SPSS (Version 22.0, SPSS Inc., Chicago, IL, USA) durchgeführt. Die in der Studie dokumentierten Datensätze wurden mit routinemäßig erfassten und anonymisierten, frei verfügbaren Daten des Statistischen Bundesamtes verglichen. Diese Datensätze sind digital und frei verfügbar auf der Internetseite zugänglich unter der Kategorie: Themen/Arbeit/Arbeitsmarkt/Erwerbstätigkeit [17]. Die deskriptiven Statistiken umfassen Mengen, Prozentsätze, Medianwerte und Bereiche für ordinäre Variablen. Die Verteilung der Daten wurden auf die Normalverteilung untersucht.\nDer Chi-Quadrattest wurde für die Analyse von Einflussparametern genutzt und zum Vergleich der Studiendaten mit den Daten des statistischen Bundesamtes eingesetzt. Der Kruskal-Wallis-Test wurde spezifisch zur Erhebung der Altersverteilung in den Berufsgruppen verwendet. Der p-Wert von weniger als 0,05 wurde als statistisch signifikant angesehen.\nDie Datenerhebung wurde mittels Microsoft Excel 365 (Fa. Microsoft, Redmond, WA, USA) und die statistische Analyse mittels des Statistikprogramms SPSS (Version 22.0, SPSS Inc., Chicago, IL, USA) durchgeführt. Die in der Studie dokumentierten Datensätze wurden mit routinemäßig erfassten und anonymisierten, frei verfügbaren Daten des Statistischen Bundesamtes verglichen. Diese Datensätze sind digital und frei verfügbar auf der Internetseite zugänglich unter der Kategorie: Themen/Arbeit/Arbeitsmarkt/Erwerbstätigkeit [17]. Die deskriptiven Statistiken umfassen Mengen, Prozentsätze, Medianwerte und Bereiche für ordinäre Variablen. Die Verteilung der Daten wurden auf die Normalverteilung untersucht.\nDer Chi-Quadrattest wurde für die Analyse von Einflussparametern genutzt und zum Vergleich der Studiendaten mit den Daten des statistischen Bundesamtes eingesetzt. Der Kruskal-Wallis-Test wurde spezifisch zur Erhebung der Altersverteilung in den Berufsgruppen verwendet. Der p-Wert von weniger als 0,05 wurde als statistisch signifikant angesehen.", "Die Klassifikation der Berufe 2010, kurz KldB 2010, wurde von der Bundesagentur für Arbeit und dem Institut für Arbeitsmarkt- und Berufsforschung unter Beteiligung des Statistischen Bundesamtes und den betroffenen Bundesministerien sowie Experten der berufskundlichen und empirischen Forschung entwickelt. Diese wurde im Jahr 2011 eingeführt. Die KldB 2010 wurde vollständig neu entwickelt und löst die Klassifizierung der Berufe aus den Jahren 1988 und 1992 ab. Die KldB 2010 bildet die aktuelle Berufslandschaft in Deutschland realitätsnah ab und bietet zugleich eine hohe Kompatibilität zu anderen internationalen Berufsklassifikationen. Dabei werden Berufe in zehn übergeordneten Berufszweigen sektorenorientiert zusammengefasst (Tab. 1). Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 
1 dargestellt.BerufsbereichAktuelle Zahlen 03/2020Gesamt:33.648.183Land‑, Forst- und Tierwirtschaft und Gartenbau500.654Rohstoffgewinnung, Produktion und Fertigung7.218.186Bau, Architektur, Vermessung und Gebäudetechnik2.019.667Naturwissenschaft, Geografie und Informatik1.357.948Verkehr, Logistik, Schutz und Sicherheit4.499.345Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus3.971.079Unternehmensorganisation, Buchhaltung, Recht und Verwaltung6.819.708Gesundheit, Soziales, Lehre und Erziehung6.181.809Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung892.605Keine Angabe187.182", "Nach Aussage des statistischen Bundesamtes belief sich die Einwohnerzahl in Deutschland im März 2020 auf 83 Mio. Einwohner, wovon 54 Mio. im erwerbsfähigen Alter (15–65 Jahre) waren, 33 Mio. gingen einer sozialversicherungspflichtigen Tätigkeit nach [13]. Bei der Betrachtung der Wirtschaftssektoren befanden sich 2017 lediglich 1,4 % der Erwerbstätigen im primären Sektor (Land- und Forstwirtschaft, Fischerei), wohingegen im sekundären Sektor (produzierendes Gewerbe) 24,1 % eine Beschäftigung fanden. Mit 74,5 % war 2017 der Dienstleistungssektor am stärksten vertreten. Die Gewichtung dieser Bereiche lässt sich durch den strukturellen Wandel der Gesellschaft, welcher sich durch beispielsweise veränderte Nachfrage und zunehmende Automatisierung erklärt, verstehen. Im Dienstleistungssektor ist der Bereich öffentliche Dienstleister, Erziehung, Gesundheit mit 10,9 Mio. Erwerbstätigen (24,7 %) am stärksten gewichtet. Ähnlich viele Personen (10,1 Mio./22,8 %) finden im Wirtschaftsbereich Handel, Verkehr und Gastgewerbe eine Beschäftigung. Nicht zu vernachlässigen ist das Gebiet der Finanzierung, Immobilien, Unternehmensdienstleister mit immerhin 17,4 % aller Erwerbstätigen [14]. Die Bundesagentur für Arbeit gruppierte die Erwerbstätigen in Deutschland anhand der Klassifikation der Berufe 2010. Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt [15, 16].", "Anhand der KldB 2010 wurden die dokumentierten Berufe der Patienten eingeteilt. Hierdurch wird eine einheitliche Berufsklassifikation innerhalb dieser Arbeit geschaffen, die statistische Vergleiche zur Gesamtbevölkerung ermöglichen. Diese Daten dienen als Vergleichsgrundlage der Studienergebnisse mit denen von der KldB 2010 aus dem Jahr 2020.", "Die Datenerhebung wurde mittels Microsoft Excel 365 (Fa. Microsoft, Redmond, WA, USA) und die statistische Analyse mittels des Statistikprogramms SPSS (Version 22.0, SPSS Inc., Chicago, IL, USA) durchgeführt. Die in der Studie dokumentierten Datensätze wurden mit routinemäßig erfassten und anonymisierten, frei verfügbaren Daten des Statistischen Bundesamtes verglichen. Diese Datensätze sind digital und frei verfügbar auf der Internetseite zugänglich unter der Kategorie: Themen/Arbeit/Arbeitsmarkt/Erwerbstätigkeit [17]. Die deskriptiven Statistiken umfassen Mengen, Prozentsätze, Medianwerte und Bereiche für ordinäre Variablen. Die Verteilung der Daten wurden auf die Normalverteilung untersucht.\nDer Chi-Quadrattest wurde für die Analyse von Einflussparametern genutzt und zum Vergleich der Studiendaten mit den Daten des statistischen Bundesamtes eingesetzt. Der Kruskal-Wallis-Test wurde spezifisch zur Erhebung der Altersverteilung in den Berufsgruppen verwendet. 
Der p-Wert von weniger als 0,05 wurde als statistisch signifikant angesehen.", "Epidemiologische Daten Die Probanden setzten sich aus 662 (61,9 %) Männer und 408 (38,1 %) Frauen zusammen (Abb. 1).\nDie Probanden setzten sich aus 662 (61,9 %) Männer und 408 (38,1 %) Frauen zusammen (Abb. 1).\nAltersverteilung Im Patientenkollektiv waren 26 % älter als 65 Jahre. Im Alter zwischen 50 und 65 waren 61 % und 10 % waren zwischen 40 und 49 Jahre alt. Lediglich 3 % der Patienten, welche eine RM-Rekonstruktion erhielten, waren jünger als 40 Jahre (Abb. 2).\nDas Durchschnittsalter lag bei 59,52 Jahren (18–97 Jahre, Median 60 Jahre). Der Altersdurchschnitt betrug für die Männer 59 (±8,33) Jahre und für die Frauen 61 (±8,15) Jahre.\nIm Patientenkollektiv waren 26 % älter als 65 Jahre. Im Alter zwischen 50 und 65 waren 61 % und 10 % waren zwischen 40 und 49 Jahre alt. Lediglich 3 % der Patienten, welche eine RM-Rekonstruktion erhielten, waren jünger als 40 Jahre (Abb. 2).\nDas Durchschnittsalter lag bei 59,52 Jahren (18–97 Jahre, Median 60 Jahre). Der Altersdurchschnitt betrug für die Männer 59 (±8,33) Jahre und für die Frauen 61 (±8,15) Jahre.\nVergleich Altersverteilung Studienpopulation und KldB 2010 (2020) Bei der Altersstruktur zwischen den einzelnen Bereichen unserer Studienpopulation nach den aktuellen Daten der KldB 2010 (2020) ließ sich kein signifikanter Unterschied nachweisen (Abb. 3).\nBei der Altersstruktur zwischen den einzelnen Bereichen unserer Studienpopulation nach den aktuellen Daten der KldB 2010 (2020) ließ sich kein signifikanter Unterschied nachweisen (Abb. 3).\nErwerbsstatus und Geschlecht Es waren 844 Patienten im arbeitsfähigen Alter (< 65 Jahre), hiervon gingen 597 einer Beschäftigung nach. Aufgrund von Elternzeit, Arbeitslosigkeit oder EU-Rente gingen 247 keiner Tätigkeit nach. Unter den Berufstätigen fanden sich rund 65 % Männer und 35 % Frauen. Der Altersdurchschnitt der aktuell Beschäftigten lag bei Männern bei 54 Jahren und 55 Jahren bei Frauen. Acht Befragte machten widersprüchliche Angaben, welchen Beruf sie ausübten und konnten nicht eingeschlossen werden (Tab. 2).AnzahlAlterLand‑, Forst- und Tierwirtschaft und Gartenbau3155Rohstoffgewinnung, Produktion und Fertigung11253Bau, Architektur, Vermessung und Gebäudetechnik13355Naturwissenschaft, Geografie und Informatik1056Verkehr, Logistik, Schutz und Sicherheit10555Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus6655Unternehmensorganisation, Buchhaltung, Recht und Verwaltung4254Gesundheit, Soziales, Lehre und Erziehung7955Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung1158Militär00\nEs waren 844 Patienten im arbeitsfähigen Alter (< 65 Jahre), hiervon gingen 597 einer Beschäftigung nach. Aufgrund von Elternzeit, Arbeitslosigkeit oder EU-Rente gingen 247 keiner Tätigkeit nach. Unter den Berufstätigen fanden sich rund 65 % Männer und 35 % Frauen. Der Altersdurchschnitt der aktuell Beschäftigten lag bei Männern bei 54 Jahren und 55 Jahren bei Frauen. Acht Befragte machten widersprüchliche Angaben, welchen Beruf sie ausübten und konnten nicht eingeschlossen werden (Tab. 
2).AnzahlAlterLand‑, Forst- und Tierwirtschaft und Gartenbau3155Rohstoffgewinnung, Produktion und Fertigung11253Bau, Architektur, Vermessung und Gebäudetechnik13355Naturwissenschaft, Geografie und Informatik1056Verkehr, Logistik, Schutz und Sicherheit10555Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus6655Unternehmensorganisation, Buchhaltung, Recht und Verwaltung4254Gesundheit, Soziales, Lehre und Erziehung7955Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung1158Militär00\nErwerbsstatus, Geschlecht, RM-Läsion im Vergleich zur erwerbstätigen Gesamtbevölkerung nach KldB 2010 (2020) In der Patientenklientel zeigte sich eine signifikante Häufung von Männern (m = 382, w = 207, p < 0,001). Nach der Klassifikation der Berufe entfielen auf den Bereich Bau, Architektur, Vermessung und Gebäudetechnik die meisten Arbeitnehmer mit RM-Läsionen mit 23 % (133/589) und den meisten männlichen Arbeitenden mit 32 % (120/382), gefolgt von Rohstoffgewinnung, Produktion und Fertigung mit 19 % (n = 112/589) und dem zweithöchsten Männeranteil mit 23 % (89/382), auf Platz 3 folgte der Bereich Verkehr, Logistik, Schutz und Sicherheit mit 18 % (105/589) und dem dritthöchsten Männeranteil mit 21 % (80/382). Der höchste Frauenanteil mit RM-Läsion fand sich im Bereich Gesundheit, Soziales, Lehre und Erziehung. Die Altersstruktur der Gruppen unterteilt nach Klassifikation der Berufe zeigte keine signifikanten Unterschiede (p = 0,493). Eine Auflistung wird in Tab. 2 und Abb. 4 dargestellt.\nIn der Patientenklientel zeigte sich eine signifikante Häufung von Männern (m = 382, w = 207, p < 0,001). Nach der Klassifikation der Berufe entfielen auf den Bereich Bau, Architektur, Vermessung und Gebäudetechnik die meisten Arbeitnehmer mit RM-Läsionen mit 23 % (133/589) und den meisten männlichen Arbeitenden mit 32 % (120/382), gefolgt von Rohstoffgewinnung, Produktion und Fertigung mit 19 % (n = 112/589) und dem zweithöchsten Männeranteil mit 23 % (89/382), auf Platz 3 folgte der Bereich Verkehr, Logistik, Schutz und Sicherheit mit 18 % (105/589) und dem dritthöchsten Männeranteil mit 21 % (80/382). Der höchste Frauenanteil mit RM-Läsion fand sich im Bereich Gesundheit, Soziales, Lehre und Erziehung. Die Altersstruktur der Gruppen unterteilt nach Klassifikation der Berufe zeigte keine signifikanten Unterschiede (p = 0,493). Eine Auflistung wird in Tab. 2 und Abb. 4 dargestellt.\nRM-Läsionen in der Studienpopulation im Vergleich zur Gesamtbevölkerung nach KldB 2010 (2020) Verglichen mit der Gesamtbevölkerung (Tab. 1) zeigen sich Unterschiede in der Häufigkeit von RM-Läsionen in Bezug auf die Häufigkeit der entsprechenden Berufsklassifikationen (Tab. 3). 25 % der Erwerbstätigen mit sozialversicherungspflichtigem Beruf gingen einer Tätigkeit aus dem Bereich kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus nach, hingegen zeigten in unserem Patientenklientel nur 11 % eine RM-Läsion. In den Berufen Bau, Architektur, Vermessung und Gebäudetechnik arbeiten 9 % der Bevölkerung, allerdings entfielen 23 % der RM-Läsionen unseres Patientenklientels auf diese Berufszweige. Anhand der Vergleiche der Patienten mit der Bevölkerung ergab sich eine signifikant höhere Erkrankungsrate in den Bereichen Land‑, Forst- und Tierwirtschaft und Gartenbau, Bau, Architektur, Vermessung und Gebäudetechnik, Verkehr, Logistik, Schutz und Sicherheit und Unternehmensorganisation, Buchhaltung, Recht und Verwaltung. 
Ein signifikant reduziertes Risiko bestand in den Berufszweigen Naturwissenshaft, Geografie und Informatik, kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus, Gesundheit, Soziales, Lehre und Erziehung und Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung.Gesamte Bevölkerung in %Mit Rotatorenmanschettenläsion in %p-WertLand‑, Forst- und Tierwirtschaft und Gartenbau35p = 0,003*Rohstoffgewinnung, Produktion und Fertigung2119p = 0,312Bau, Architektur, Vermessung und Gebäudetechnik923p < 0,001*Naturwissenschaft, Geografie und Informatik42p = 0,015*Verkehr, Logistik, Schutz und Sicherheit1018p < 0,001*Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus2511p < 0,001*Unternehmensorganisation, Buchhaltung, Recht und Verwaltung57p < 0,001*Gesundheit, Soziales, Lehre und Erziehung1913p < 0,001*Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung32p = 0,091*Militär10p = 0,212*Signifikanter Unterschied\n*Signifikanter Unterschied\nVerglichen mit der Gesamtbevölkerung (Tab. 1) zeigen sich Unterschiede in der Häufigkeit von RM-Läsionen in Bezug auf die Häufigkeit der entsprechenden Berufsklassifikationen (Tab. 3). 25 % der Erwerbstätigen mit sozialversicherungspflichtigem Beruf gingen einer Tätigkeit aus dem Bereich kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus nach, hingegen zeigten in unserem Patientenklientel nur 11 % eine RM-Läsion. In den Berufen Bau, Architektur, Vermessung und Gebäudetechnik arbeiten 9 % der Bevölkerung, allerdings entfielen 23 % der RM-Läsionen unseres Patientenklientels auf diese Berufszweige. Anhand der Vergleiche der Patienten mit der Bevölkerung ergab sich eine signifikant höhere Erkrankungsrate in den Bereichen Land‑, Forst- und Tierwirtschaft und Gartenbau, Bau, Architektur, Vermessung und Gebäudetechnik, Verkehr, Logistik, Schutz und Sicherheit und Unternehmensorganisation, Buchhaltung, Recht und Verwaltung. Ein signifikant reduziertes Risiko bestand in den Berufszweigen Naturwissenshaft, Geografie und Informatik, kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus, Gesundheit, Soziales, Lehre und Erziehung und Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung.Gesamte Bevölkerung in %Mit Rotatorenmanschettenläsion in %p-WertLand‑, Forst- und Tierwirtschaft und Gartenbau35p = 0,003*Rohstoffgewinnung, Produktion und Fertigung2119p = 0,312Bau, Architektur, Vermessung und Gebäudetechnik923p < 0,001*Naturwissenschaft, Geografie und Informatik42p = 0,015*Verkehr, Logistik, Schutz und Sicherheit1018p < 0,001*Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus2511p < 0,001*Unternehmensorganisation, Buchhaltung, Recht und Verwaltung57p < 0,001*Gesundheit, Soziales, Lehre und Erziehung1913p < 0,001*Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung32p = 0,091*Militär10p = 0,212*Signifikanter Unterschied\n*Signifikanter Unterschied\nEinfluss von Geschlecht in Abhängigkeit von KldB 2010 (2020) Bei der Bildung von Untergruppen nach dem Geschlecht zeigten sich für Männer in den Berufszweigen Bau, Architektur, Vermessung und Gebäudetechnik sowie Verkehr, Logistik, Schutz und Sicherheit signifikant höhere Erkrankungsraten. 
Im Gegensatz dazu arbeiteten Frauen häufiger in den Berufsgruppen kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus sowie im Bereich Gesundheit, Soziales, Lehre und Erziehung, welche ein signifikant geringeres Risiko für RM-Läsionen aufweisen.\nBei der Bildung von Untergruppen nach dem Geschlecht zeigten sich für Männer in den Berufszweigen Bau, Architektur, Vermessung und Gebäudetechnik sowie Verkehr, Logistik, Schutz und Sicherheit signifikant höhere Erkrankungsraten. Im Gegensatz dazu arbeiteten Frauen häufiger in den Berufsgruppen kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus sowie im Bereich Gesundheit, Soziales, Lehre und Erziehung, welche ein signifikant geringeres Risiko für RM-Läsionen aufweisen.\nRM-Läsionen im GUV-Verfahren Von den 589 Patienten mit RM-Läsionen wurden 10 % (61 Fälle) von der zugehörigen Berufsgenossenschaft anerkannt, 84 % der anerkannten Fälle waren männliche Arbeitende. Betroffen waren, abgestuft nach Häufigkeit, die Berufsgruppen Handel und Verkehr (31 %; 19/61), Baugewerbe (25 %; 15/61) und sonstige Dienstleistungen (20 %; 12/61).\nVon den 589 Patienten mit RM-Läsionen wurden 10 % (61 Fälle) von der zugehörigen Berufsgenossenschaft anerkannt, 84 % der anerkannten Fälle waren männliche Arbeitende. Betroffen waren, abgestuft nach Häufigkeit, die Berufsgruppen Handel und Verkehr (31 %; 19/61), Baugewerbe (25 %; 15/61) und sonstige Dienstleistungen (20 %; 12/61).", "Die Probanden setzten sich aus 662 (61,9 %) Männer und 408 (38,1 %) Frauen zusammen (Abb. 1).", "Im Patientenkollektiv waren 26 % älter als 65 Jahre. Im Alter zwischen 50 und 65 waren 61 % und 10 % waren zwischen 40 und 49 Jahre alt. Lediglich 3 % der Patienten, welche eine RM-Rekonstruktion erhielten, waren jünger als 40 Jahre (Abb. 2).\nDas Durchschnittsalter lag bei 59,52 Jahren (18–97 Jahre, Median 60 Jahre). Der Altersdurchschnitt betrug für die Männer 59 (±8,33) Jahre und für die Frauen 61 (±8,15) Jahre.", "Bei der Altersstruktur zwischen den einzelnen Bereichen unserer Studienpopulation nach den aktuellen Daten der KldB 2010 (2020) ließ sich kein signifikanter Unterschied nachweisen (Abb. 3).", "Es waren 844 Patienten im arbeitsfähigen Alter (< 65 Jahre), hiervon gingen 597 einer Beschäftigung nach. Aufgrund von Elternzeit, Arbeitslosigkeit oder EU-Rente gingen 247 keiner Tätigkeit nach. Unter den Berufstätigen fanden sich rund 65 % Männer und 35 % Frauen. Der Altersdurchschnitt der aktuell Beschäftigten lag bei Männern bei 54 Jahren und 55 Jahren bei Frauen. Acht Befragte machten widersprüchliche Angaben, welchen Beruf sie ausübten und konnten nicht eingeschlossen werden (Tab. 2).AnzahlAlterLand‑, Forst- und Tierwirtschaft und Gartenbau3155Rohstoffgewinnung, Produktion und Fertigung11253Bau, Architektur, Vermessung und Gebäudetechnik13355Naturwissenschaft, Geografie und Informatik1056Verkehr, Logistik, Schutz und Sicherheit10555Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus6655Unternehmensorganisation, Buchhaltung, Recht und Verwaltung4254Gesundheit, Soziales, Lehre und Erziehung7955Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung1158Militär00", "In der Patientenklientel zeigte sich eine signifikante Häufung von Männern (m = 382, w = 207, p < 0,001). 
Nach der Klassifikation der Berufe entfielen auf den Bereich Bau, Architektur, Vermessung und Gebäudetechnik die meisten Arbeitnehmer mit RM-Läsionen mit 23 % (133/589) und den meisten männlichen Arbeitenden mit 32 % (120/382), gefolgt von Rohstoffgewinnung, Produktion und Fertigung mit 19 % (n = 112/589) und dem zweithöchsten Männeranteil mit 23 % (89/382), auf Platz 3 folgte der Bereich Verkehr, Logistik, Schutz und Sicherheit mit 18 % (105/589) und dem dritthöchsten Männeranteil mit 21 % (80/382). Der höchste Frauenanteil mit RM-Läsion fand sich im Bereich Gesundheit, Soziales, Lehre und Erziehung. Die Altersstruktur der Gruppen unterteilt nach Klassifikation der Berufe zeigte keine signifikanten Unterschiede (p = 0,493). Eine Auflistung wird in Tab. 2 und Abb. 4 dargestellt.", "Verglichen mit der Gesamtbevölkerung (Tab. 1) zeigen sich Unterschiede in der Häufigkeit von RM-Läsionen in Bezug auf die Häufigkeit der entsprechenden Berufsklassifikationen (Tab. 3). 25 % der Erwerbstätigen mit sozialversicherungspflichtigem Beruf gingen einer Tätigkeit aus dem Bereich kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus nach, hingegen zeigten in unserem Patientenklientel nur 11 % eine RM-Läsion. In den Berufen Bau, Architektur, Vermessung und Gebäudetechnik arbeiten 9 % der Bevölkerung, allerdings entfielen 23 % der RM-Läsionen unseres Patientenklientels auf diese Berufszweige. Anhand der Vergleiche der Patienten mit der Bevölkerung ergab sich eine signifikant höhere Erkrankungsrate in den Bereichen Land‑, Forst- und Tierwirtschaft und Gartenbau, Bau, Architektur, Vermessung und Gebäudetechnik, Verkehr, Logistik, Schutz und Sicherheit und Unternehmensorganisation, Buchhaltung, Recht und Verwaltung. Ein signifikant reduziertes Risiko bestand in den Berufszweigen Naturwissenshaft, Geografie und Informatik, kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus, Gesundheit, Soziales, Lehre und Erziehung und Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung.Gesamte Bevölkerung in %Mit Rotatorenmanschettenläsion in %p-WertLand‑, Forst- und Tierwirtschaft und Gartenbau35p = 0,003*Rohstoffgewinnung, Produktion und Fertigung2119p = 0,312Bau, Architektur, Vermessung und Gebäudetechnik923p < 0,001*Naturwissenschaft, Geografie und Informatik42p = 0,015*Verkehr, Logistik, Schutz und Sicherheit1018p < 0,001*Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus2511p < 0,001*Unternehmensorganisation, Buchhaltung, Recht und Verwaltung57p < 0,001*Gesundheit, Soziales, Lehre und Erziehung1913p < 0,001*Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung32p = 0,091*Militär10p = 0,212*Signifikanter Unterschied\n*Signifikanter Unterschied", "Bei der Bildung von Untergruppen nach dem Geschlecht zeigten sich für Männer in den Berufszweigen Bau, Architektur, Vermessung und Gebäudetechnik sowie Verkehr, Logistik, Schutz und Sicherheit signifikant höhere Erkrankungsraten. Im Gegensatz dazu arbeiteten Frauen häufiger in den Berufsgruppen kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus sowie im Bereich Gesundheit, Soziales, Lehre und Erziehung, welche ein signifikant geringeres Risiko für RM-Läsionen aufweisen.", "Von den 589 Patienten mit RM-Läsionen wurden 10 % (61 Fälle) von der zugehörigen Berufsgenossenschaft anerkannt, 84 % der anerkannten Fälle waren männliche Arbeitende. 
Betroffen waren, abgestuft nach Häufigkeit, die Berufsgruppen Handel und Verkehr (31 %; 19/61), Baugewerbe (25 %; 15/61) und sonstige Dienstleistungen (20 %; 12/61).", "Die Berufsabhängigkeit kann auf die Entstehung von Erkrankungen des Bewegungsapparates der oberen Extremität einen Einfluss haben. Neben berufsspezifischen Faktoren lassen sich gesundheitsbezogene Risiken darstellen. Das Geschlecht eines Erwerbtätigen kann innerhalb einer Berufsgruppe Auswirkungen für das Ausmaß von Erkrankungen des Bewegungsapparates der oberen Extremität und die Reaktionen des Körpers auf bestimmte Arbeitsabläufe und somit das Erkrankungsrisiko haben. Es existieren einige Studien mit der Frage, wann nach einer RM-Läsion oder deren operativen Versorgung eine Rückkehr an den Arbeitsplatz möglich ist [18, 19]. Analysen unter Berücksichtigung anerkannter Berufsklassifikationssysteme zu arbeitsassoziierten Faktoren, welche zu einer RM-Läsion führen oder epidemiologische Untersuchungen zum Auftreten von RM-Läsionen und dem ausgeübten Beruf sind in der Literatur wenig beschrieben.\nIn der Literatur zum Thema arbeitsassoziierte Faktoren und muskuloskelettale Erkrankungen der oberen Extremität sind meist verschiedene Erkrankungen zusammengefasst, bei denen die RM-Läsion einen hohen Stellenwert hat [20, 21]. In dem Vergleich der Häufigkeiten von Berufsgruppen innerhalb der erwerbsfähigen Bevölkerung und der Häufigkeit der Berufe in der Patientengruppe mit RM-Läsion im erwerbsfähigen Alter zeigt sich eine erhöhte Prävalenz für bestimmte berufliche Belastungen. Aus der Literatur war der Zusammenhang von RM-Läsionen zu Zwangshaltungen mit erhobenen Armen und Überkopfarbeiten [22], für repetitive monotone Arbeiten in Flexion und Abduktion mit Werkzeugen [23], für das Anheben mittlerer und schwerer Lasten über Schulterniveau, für das Ziehen oder Schieben von Gewichten mit den Armen und Vibrationen [24] bekannt. Somit ist bei Berufszweigen mit häufigem Ausführen schulterbelastender Tätigkeiten eine höhere Erkrankungsrate von RM-Läsionen zu erwarten. Dies korreliert stark mit den Ergebnissen, dass die häufigsten RM-Läsionen im Vergleich zur Häufigkeit des Berufszweiges in den körperlich schwereren Tätigkeitsbereichen Bau, Architektur, Vermessung und Gebäudetechnik sowie Verkehr, Logistik, Schutz und Sicherheit zu verzeichnen waren. In der KldB 2010 (2020) waren die Berufszweige erfasst und liefern Anhaltspunkte, dass RM-Verletzungen auch über den allgemeinen degenerativen Prozess durch die Berufstätigkeit beeinflusst werden können. Die hier durchgeführte retrospektive Studie deckt signifikante Zusammenhänge zu bestimmten Berufszweigen auf.\nDer Faktor Alter spielte in der vorliegenden Studie als Einflussparameter keine Rolle, da es keine signifikanten Unterschiede im Alter in den Berufszweigen gab. Dies zeigte sich in vorangegangenen internationalen Studien divergent [25–27].\nDas Geschlecht als gesundheitsbezogenes Risiko im Kontext zu RM-Läsionen in diesem Zusammenhang spielte bisher in wissenschaftlichen Fragestellungen eine untergeordnete Rolle. Rolf et al. untersuchten mit ihrer Studie, ob Läsionen der RM eine Berufserkrankung seien [6]. Die Daten dieser Studie deuteten darauf hin, dass die berufsbedingte Exposition das Risiko einer RM-Läsion erhöht oder zu einer klinischen RM-Ruptur führen kann. Alleinig wurden hierfür 472 Männer eingeschlossen, sodass im weiterführenden Kontext geschlechtsspezifische Faktoren im berufsspezifischen Kontext in dieser Studie keine Rolle spielten. 
Weitere internationale Studien zu diesem Thema zeigen ähnliche Inhalte [28, 29]. In einer Studie von van Dyck et al. wurden geschlechtsspezifische Faktoren im beruflichen Kontext bei der Entstehung von RM-Läsionen beschrieben [31]. Van Dyck et al. zeigten auf, dass es im Gesundheitssektor bei den berufsfeldspezifischen Krankenständen aufgrund einer RM-Läsion unter Frauen ein höheres Niveau gab als bei den männlichen Kollegen. Aus diesen Daten resultiert die Annahme, dass neben der Berufsabhängigkeit auch weitere gesundheitsspezifische Risiken bei der Entstehung von RM-Läsionen einen Einfluss haben können [32].\nNaheliegend ist die Annahme von berufsspezifisch unterschiedlichen gesundheitsbezogenen Risiken als Folge der Belastung am Arbeitsplatz. So sind Erwerbstätige bestimmter Berufssektoren einem höheren Verletzungsrisiko am Arbeitsplatz ausgesetzt als andere [33]. Der Beruf spielt aber auch insofern eine Rolle, als dass die Tätigkeitsausübung bei ein und derselben gesundheitlichen Einschränkung berufsabhängig unterschiedlich stark beeinträchtigen und somit spezifische sekundäre gesundheitsbezogene Risiken nach sich ziehen kann [26]. So zeigen sich bei Personen mit einer RM-Läsion, die in einem Berufssektor mit wenig Belastung der oberen Extremität tätig sind, eine geringere Gesamtdauer der Arbeitsunfähigkeit im Vergleich zu Personen, deren berufliche Ausübung mit langen oder starken Belastungen der oberen Extremität verbunden ist. Hierbei liegt die Differenz der Dauer der Arbeitsunfähigkeit bei mehreren Wochen [10]. \nWeitere, zum Teil in unterschiedliche Richtungen und nicht ausschließlich berufsgruppenspezifisch wirkende Einflüsse entstehen durch Selektionseffekte oder durch nur mittelbar gesundheitsrelevante Berufsbedingungen. Dazu gehört der sogenannte „healthy worker effect“. Bei Anstellung von körperlich überdurchschnittlich gesunden Personen für besonders belastende Tätigkeiten, können trotz hoher Belastung in bestimmten Berufsgruppen geringe Erkrankungsraten resultieren. Zudem können durch Möglichkeiten zur vorzeitigen Berentung und Einflüsse von tariflich unterschiedlich vereinbarten Entgeltfortzahlungen im Krankheitsfall Selektionseffekte entstehen [34]. Aber auch arbeitnehmerabhängige interindividuelle Faktoren, wie persönliche Kompetenz und Verantwortlichkeit im ausgeübten Beruf, können einen Einfluss auf die Entstehung von Erkrankungen der oberen Extremität haben. Berufs- und zeitabhängig unterschiedlich wahrgenommene Gefahren des Arbeitsplatzverlusts sowie die Berufszufriedenheit und das Arbeitsklima können weiter auf die Entstehung von RM-Läsionen einen Einfluss haben [35]. Die in der Literatur beschriebenen Zusammenhänge aus belastenden Tätigkeiten der oberen Extremität und die epidemiologische Verteilung der vorliegenden Studie zu RM-Läsionen in den Berufszweigen führt zu der Vermutung einer Häufung der schulterbelastenden Tätigkeiten insbesondere im Berufszweig Bau, Architektur, Vermessung und Gebäudetechnik sowie Verkehr, Logistik, Schutz und Sicherheit. Diese Bereiche sollten auf die schulterbelastenden Tätigkeiten genauer untersucht werden, um prädisponierende Faktoren zu definieren und gegebenenfalls auf Prävention oder auch die Definition einer Berufserkrankung abzielen zu können.", "Limitierend waren die fehlenden Informationen, wie lange die berufliche Tätigkeit ausgeführt wurde, und die Historie zur beruflichen Tätigkeit der Patienten. 
Weiterhin fehlten die Informationen zu den einzelnen belastenden Tätigkeiten pro Arbeitstag, wie zum Beispiel Überkopfarbeiten, repetitive Arbeiten oder die Belastung durch Vibrationen. Ein Selektionsbias dieser Studie liegt insofern vor, dass Patienten mit nicht mehr rekonstruierbarer oder konservativ behandelter RM-Läsion nicht in die Analyse einbezogen werden konnten. In der Studie erfolgte keine weitere Unterteilung der RM-Läsionen nach Grad der Verfettung oder nach beteiligten Sehnen und Rissarten. Weiterhin wurde die Histologie nicht erfasst, um Rückschlüsse auf die Pathologie der Läsion zu ziehen. Eine vollständige Diskussion der berufsgruppenspezifischen Berufsabhängigkeit auf die Entstehung von Erkrankungen des Bewegungsapparates der oberen Extremität muss all diese Einflussmöglichkeiten abwägen. Allerdings zeigen sich bei einer Betrachtung von entsprechenden Auswertungsergebnissen Muster, die sich auch ohne den Anspruch einer vollständigen Diskussion sinnvoll interpretieren lassen können. Diese wissenschaftliche Arbeit behandelt eine interessante sozioökonomische Fragestellung in Bezug auf die Entstehung von RM-Läsionen." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Kurze Hinführung zum Thema", "Einleitung", "Methodik", "Die KldB 2010", "Daten für Deutschland", "Studienvergleichsparameter", "Datenerhebung", "Ergebnisse", "Epidemiologische Daten", "Altersverteilung", "Vergleich Altersverteilung Studienpopulation und KldB 2010 (2020)", "Erwerbsstatus und Geschlecht", "Erwerbsstatus, Geschlecht, RM-Läsion im Vergleich zur erwerbstätigen Gesamtbevölkerung nach KldB 2010 (2020)", "RM-Läsionen in der Studienpopulation im Vergleich zur Gesamtbevölkerung nach KldB 2010 (2020)", "Einfluss von Geschlecht in Abhängigkeit von KldB 2010 (2020)", "RM-Läsionen im GUV-Verfahren", "Diskussion", "Limitation", "Fazit für die Praxis" ]
[ "Einen entscheidenden Einfluss auf die Entstehung von Erkrankungen des Bewegungsapparates der oberen Extremität hat der aktuell ausgeübte Beruf. Der Einfluss des Berufs resultiert dabei aus einer Reihe von Faktoren. Naheliegend ist zunächst die Annahme von berufsspezifisch unterschiedlichen gesundheitsbezogenen Risiken als Folge der Belastung am Arbeitsplatz. Bisherige Erkenntnisse in diesem Zusammenhang weisen darauf hin, dass bestimmte Berufszweige einem höheren Erkrankungsrisiko am Arbeitsplatz ausgesetzt sind als andere. Kann die berufliche Belastung so groß sein, dass sie zu einer Krankheit führt, oder spielen dabei andere Aspekte außerhalb des Arbeitslebens eine größere Rolle für die Krankheitsentstehung?", "Das Schultergelenk mit multidirektionaler Beweglichkeit und hoher Bedeutung für die Arm- und Handfunktion spielt eine wichtige Rolle für die Arbeitsfähigkeit [1, 2]. Läsionen der Rotatorenmanschette (RM) sind eine häufige muskuloskelettale Verletzung der oberen Extremität in der arbeitenden Bevölkerung. Hiervon sind 6,60 % der Männer und 8,50 % der Frauen von einer betroffen [3]. Die Folge sind lange Abwesenheitsperioden vom Arbeitsplatz [4, 5]. Rotatorenmanschettenpathologien und Schmerzen der Schulter sind in Finnland die häufigste Ursache für krankheitsbedingte Abwesenheit vom Arbeitsplatz [6, 7]. Hierbei wurde die krankheitsbedingte Abwesenheit anhand des Auftretens und der Dauer der Abwesenheit bei den krankheitsbedingten Fehlzeiten aufgrund von Muskel-Skelett-Erkrankungen definiert. Die krankheitsbedingte Abwesenheit hing in der Arbeit von Pekkala et al. von den geleisteten Arbeitsjahren ab, mit einem Hauptanteil zwischen dem 45. und 65. Lebensjahr. Je höher die Anzahl der geleisteten Arbeitsjahre war, umso länger war die krankheitsbedingte Abwesenheit von Erwerbstätigen aufgrund einer RM-Läsion [6]. Einen entscheidenden Einfluss auf die Entstehung von Erkrankungen des Bewegungsapparates hat der aktuell ausgeübte Beruf. Der Einfluss des Berufs resultiert dabei aus einer Reihe von Faktoren. Naheliegend ist zunächst die Annahme von berufsspezifisch unterschiedlichen gesundheitsbezogenen Risiken als Folge der Belastung am Arbeitsplatz. Zu den arbeitsassoziierten Faktoren gehört u. a. das häufige Tragen von Gewichten, stark wiederholende Arbeiten und Überkopfarbeiten [8]. In der Literatur finden sich wenig Daten, die den Einfluss der beruflichen Tätigkeit auf die Entstehung von RM-Läsionen darstellen [9]. Bisherige Erkenntnisse in diesem Zusammenhang weisen darauf hin, dass bestimmte Berufszweige einem höheren Erkrankungsrisiko am Arbeitsplatz ausgesetzt sind als andere [10–12]. Ziel dieser Studie ist es, den Einfluss der Berufsabhängigkeit auf die Entstehung von RM-Läsionen zu untersuchen und neben berufsspezifischen Faktoren, gesundheitsbezogene Risiken darzustellen.", "Die zuständige Ethikkommission der Universität Jena wurde informiert und hatte keine Einwände gegen die retrospektive, monozentrische Auswertung der Studie mit dem Namen PAMO-NUNK-Shoulder-Study (Reg.-Nr.: 2018-1165-Daten). In dieser Studie wurden alle Patienten eingeschlossen, welche eine arthroskopische Rekonstruktion der RM in den Jahren 2016 bis 2019 erhalten haben. Die Patientenauswahl erfolgte anhand der internationalen Klassifikation der Behandlungsmethoden in der Medizin mit ICD-Codes: M75.1 (Läsionen der RM) und S46.0 (Verletzung der Muskeln und der Sehnen der RM). 
Anhand der dokumentierten Anamnese und Operationsberichte wurden alle Patienten insbesondere im Hinblick auf ihre berufliche Tätigkeit ausgewertet. Es konnten 1070 Patienten eingeschlossen werden. Bei allen Patienten wurden der aktuelle Beruf sowie Berentung oder Arbeitslosigkeit, Alter und Geschlecht dokumentiert. Weiterhin wurden die gegebenenfalls zuständige Berufsgenossenschaft (BG) und die von der BG anerkannten Verletzungen erhoben.\nDie KldB 2010: Die Klassifikation der Berufe 2010, kurz KldB 2010, wurde von der Bundesagentur für Arbeit und dem Institut für Arbeitsmarkt- und Berufsforschung unter Beteiligung des Statistischen Bundesamtes und den betroffenen Bundesministerien sowie Experten der berufskundlichen und empirischen Forschung entwickelt. Diese wurde im Jahr 2011 eingeführt. Die KldB 2010 wurde vollständig neu entwickelt und löst die Klassifizierung der Berufe aus den Jahren 1988 und 1992 ab. Die KldB 2010 bildet die aktuelle Berufslandschaft in Deutschland realitätsnah ab und bietet zugleich eine hohe Kompatibilität zu anderen internationalen Berufsklassifikationen. Dabei werden Berufe in zehn übergeordneten Berufszweigen sektorenorientiert zusammengefasst (Tab. 1). Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt.BerufsbereichAktuelle Zahlen 03/2020Gesamt:33.648.183Land‑, Forst- und Tierwirtschaft und Gartenbau500.654Rohstoffgewinnung, Produktion und Fertigung7.218.186Bau, Architektur, Vermessung und Gebäudetechnik2.019.667Naturwissenschaft, Geografie und Informatik1.357.948Verkehr, Logistik, Schutz und Sicherheit4.499.345Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus3.971.079Unternehmensorganisation, Buchhaltung, Recht und Verwaltung6.819.708Gesundheit, Soziales, Lehre und Erziehung6.181.809Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung892.605Keine Angabe187.182\nDaten für Deutschland: Nach Aussage des Statistischen Bundesamtes belief sich die Einwohnerzahl in Deutschland im März 2020 auf 83 Mio. Einwohner, wovon 54 Mio. im erwerbsfähigen Alter (15–65 Jahre) waren; 33 Mio. gingen einer sozialversicherungspflichtigen Tätigkeit nach [13]. Bei der Betrachtung der Wirtschaftssektoren befanden sich 2017 lediglich 1,4 % der Erwerbstätigen im primären Sektor (Land- und Forstwirtschaft, Fischerei), wohingegen im sekundären Sektor (produzierendes Gewerbe) 24,1 % eine Beschäftigung fanden. Mit 74,5 % war 2017 der Dienstleistungssektor am stärksten vertreten. Die Gewichtung dieser Bereiche lässt sich durch den strukturellen Wandel der Gesellschaft verstehen, der sich beispielsweise durch veränderte Nachfrage und zunehmende Automatisierung erklärt. Im Dienstleistungssektor ist der Bereich öffentliche Dienstleister, Erziehung, Gesundheit mit 10,9 Mio. Erwerbstätigen (24,7 %) am stärksten gewichtet. Ähnlich viele Personen (10,1 Mio./22,8 %) finden im Wirtschaftsbereich Handel, Verkehr und Gastgewerbe eine Beschäftigung. Nicht zu vernachlässigen ist das Gebiet der Finanzierung, Immobilien, Unternehmensdienstleister mit immerhin 17,4 % aller Erwerbstätigen [14]. Die Bundesagentur für Arbeit gruppierte die Erwerbstätigen in Deutschland anhand der Klassifikation der Berufe 2010. Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt [15, 16].\nStudienvergleichsparameter: Anhand der KldB 2010 wurden die dokumentierten Berufe der Patienten eingeteilt. Hierdurch wird eine einheitliche Berufsklassifikation innerhalb dieser Arbeit geschaffen, die statistische Vergleiche zur Gesamtbevölkerung ermöglicht. Diese Daten dienen als Vergleichsgrundlage der Studienergebnisse mit denen der KldB 2010 aus dem Jahr 2020.\nDatenerhebung: Die Datenerhebung wurde mittels Microsoft Excel 365 (Fa. Microsoft, Redmond, WA, USA) und die statistische Analyse mittels des Statistikprogramms SPSS (Version 22.0, SPSS Inc., Chicago, IL, USA) durchgeführt. Die in der Studie dokumentierten Datensätze wurden mit routinemäßig erfassten und anonymisierten, frei verfügbaren Daten des Statistischen Bundesamtes verglichen. Diese Datensätze sind digital frei verfügbar und auf der Internetseite des Statistischen Bundesamtes unter der Kategorie Themen/Arbeit/Arbeitsmarkt/Erwerbstätigkeit zugänglich [17]. Die deskriptiven Statistiken umfassen Mengen, Prozentsätze, Medianwerte und Bereiche für ordinale Variablen. Die Verteilung der Daten wurde auf Normalverteilung untersucht.\nDer Chi-Quadrattest wurde für die Analyse von Einflussparametern genutzt und zum Vergleich der Studiendaten mit den Daten des Statistischen Bundesamtes eingesetzt. Der Kruskal-Wallis-Test wurde spezifisch zur Erhebung der Altersverteilung in den Berufsgruppen verwendet. Der p-Wert von weniger als 0,05 wurde als statistisch signifikant angesehen.", "Die Klassifikation der Berufe 2010, kurz KldB 2010, wurde von der Bundesagentur für Arbeit und dem Institut für Arbeitsmarkt- und Berufsforschung unter Beteiligung des Statistischen Bundesamtes und den betroffenen Bundesministerien sowie Experten der berufskundlichen und empirischen Forschung entwickelt. Diese wurde im Jahr 2011 eingeführt. Die KldB 2010 wurde vollständig neu entwickelt und löst die Klassifizierung der Berufe aus den Jahren 1988 und 1992 ab. Die KldB 2010 bildet die aktuelle Berufslandschaft in Deutschland realitätsnah ab und bietet zugleich eine hohe Kompatibilität zu anderen internationalen Berufsklassifikationen. Dabei werden Berufe in zehn übergeordneten Berufszweigen sektorenorientiert zusammengefasst (Tab. 1). Die Daten der deutschen Bevölkerung von März 2020 sind in Tab.
1 dargestellt.BerufsbereichAktuelle Zahlen 03/2020Gesamt:33.648.183Land‑, Forst- und Tierwirtschaft und Gartenbau500.654Rohstoffgewinnung, Produktion und Fertigung7.218.186Bau, Architektur, Vermessung und Gebäudetechnik2.019.667Naturwissenschaft, Geografie und Informatik1.357.948Verkehr, Logistik, Schutz und Sicherheit4.499.345Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus3.971.079Unternehmensorganisation, Buchhaltung, Recht und Verwaltung6.819.708Gesundheit, Soziales, Lehre und Erziehung6.181.809Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung892.605Keine Angabe187.182", "Nach Aussage des statistischen Bundesamtes belief sich die Einwohnerzahl in Deutschland im März 2020 auf 83 Mio. Einwohner, wovon 54 Mio. im erwerbsfähigen Alter (15–65 Jahre) waren, 33 Mio. gingen einer sozialversicherungspflichtigen Tätigkeit nach [13]. Bei der Betrachtung der Wirtschaftssektoren befanden sich 2017 lediglich 1,4 % der Erwerbstätigen im primären Sektor (Land- und Forstwirtschaft, Fischerei), wohingegen im sekundären Sektor (produzierendes Gewerbe) 24,1 % eine Beschäftigung fanden. Mit 74,5 % war 2017 der Dienstleistungssektor am stärksten vertreten. Die Gewichtung dieser Bereiche lässt sich durch den strukturellen Wandel der Gesellschaft, welcher sich durch beispielsweise veränderte Nachfrage und zunehmende Automatisierung erklärt, verstehen. Im Dienstleistungssektor ist der Bereich öffentliche Dienstleister, Erziehung, Gesundheit mit 10,9 Mio. Erwerbstätigen (24,7 %) am stärksten gewichtet. Ähnlich viele Personen (10,1 Mio./22,8 %) finden im Wirtschaftsbereich Handel, Verkehr und Gastgewerbe eine Beschäftigung. Nicht zu vernachlässigen ist das Gebiet der Finanzierung, Immobilien, Unternehmensdienstleister mit immerhin 17,4 % aller Erwerbstätigen [14]. Die Bundesagentur für Arbeit gruppierte die Erwerbstätigen in Deutschland anhand der Klassifikation der Berufe 2010. Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt [15, 16].", "Anhand der KldB 2010 wurden die dokumentierten Berufe der Patienten eingeteilt. Hierdurch wird eine einheitliche Berufsklassifikation innerhalb dieser Arbeit geschaffen, die statistische Vergleiche zur Gesamtbevölkerung ermöglichen. Diese Daten dienen als Vergleichsgrundlage der Studienergebnisse mit denen von der KldB 2010 aus dem Jahr 2020.", "Die Datenerhebung wurde mittels Microsoft Excel 365 (Fa. Microsoft, Redmond, WA, USA) und die statistische Analyse mittels des Statistikprogramms SPSS (Version 22.0, SPSS Inc., Chicago, IL, USA) durchgeführt. Die in der Studie dokumentierten Datensätze wurden mit routinemäßig erfassten und anonymisierten, frei verfügbaren Daten des Statistischen Bundesamtes verglichen. Diese Datensätze sind digital und frei verfügbar auf der Internetseite zugänglich unter der Kategorie: Themen/Arbeit/Arbeitsmarkt/Erwerbstätigkeit [17]. Die deskriptiven Statistiken umfassen Mengen, Prozentsätze, Medianwerte und Bereiche für ordinäre Variablen. Die Verteilung der Daten wurden auf die Normalverteilung untersucht.\nDer Chi-Quadrattest wurde für die Analyse von Einflussparametern genutzt und zum Vergleich der Studiendaten mit den Daten des statistischen Bundesamtes eingesetzt. Der Kruskal-Wallis-Test wurde spezifisch zur Erhebung der Altersverteilung in den Berufsgruppen verwendet. 
Der p-Wert von weniger als 0,05 wurde als statistisch signifikant angesehen.", "Epidemiologische Daten: Die Probanden setzten sich aus 662 (61,9 %) Männern und 408 (38,1 %) Frauen zusammen (Abb. 1).\nAltersverteilung: Im Patientenkollektiv waren 26 % älter als 65 Jahre. Im Alter zwischen 50 und 65 Jahren waren 61 %, und 10 % waren zwischen 40 und 49 Jahre alt. Lediglich 3 % der Patienten, welche eine RM-Rekonstruktion erhielten, waren jünger als 40 Jahre (Abb. 2). Das Durchschnittsalter lag bei 59,52 Jahren (18–97 Jahre, Median 60 Jahre). Der Altersdurchschnitt betrug für die Männer 59 (±8,33) Jahre und für die Frauen 61 (±8,15) Jahre.\nVergleich Altersverteilung Studienpopulation und KldB 2010 (2020): Bei der Altersstruktur zwischen den einzelnen Bereichen unserer Studienpopulation nach den aktuellen Daten der KldB 2010 (2020) ließ sich kein signifikanter Unterschied nachweisen (Abb. 3).\nErwerbsstatus und Geschlecht: Es waren 844 Patienten im arbeitsfähigen Alter (< 65 Jahre), hiervon gingen 597 einer Beschäftigung nach. Aufgrund von Elternzeit, Arbeitslosigkeit oder EU-Rente gingen 247 keiner Tätigkeit nach. Unter den Berufstätigen fanden sich rund 65 % Männer und 35 % Frauen. Der Altersdurchschnitt der aktuell Beschäftigten lag bei Männern bei 54 Jahren und bei Frauen bei 55 Jahren. Acht Befragte machten widersprüchliche Angaben, welchen Beruf sie ausübten, und konnten nicht eingeschlossen werden (Tab. 2). Tab. 2 (Berufsbereich: Anzahl/Durchschnittsalter): Land‑, Forst- und Tierwirtschaft und Gartenbau 31/55; Rohstoffgewinnung, Produktion und Fertigung 112/53; Bau, Architektur, Vermessung und Gebäudetechnik 133/55; Naturwissenschaft, Geografie und Informatik 10/56; Verkehr, Logistik, Schutz und Sicherheit 105/55; Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus 66/55; Unternehmensorganisation, Buchhaltung, Recht und Verwaltung 42/54; Gesundheit, Soziales, Lehre und Erziehung 79/55; Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung 11/58; Militär 0/0.\nErwerbsstatus, Geschlecht, RM-Läsion im Vergleich zur erwerbstätigen Gesamtbevölkerung nach KldB 2010 (2020): In der Patientenklientel zeigte sich eine signifikante Häufung von Männern (m = 382, w = 207, p < 0,001). Nach der Klassifikation der Berufe entfielen auf den Bereich Bau, Architektur, Vermessung und Gebäudetechnik die meisten Arbeitnehmer mit RM-Läsionen mit 23 % (133/589) und der höchste Anteil männlicher Arbeitender mit 32 % (120/382), gefolgt von Rohstoffgewinnung, Produktion und Fertigung mit 19 % (n = 112/589) und dem zweithöchsten Männeranteil mit 23 % (89/382); auf Platz 3 folgte der Bereich Verkehr, Logistik, Schutz und Sicherheit mit 18 % (105/589) und dem dritthöchsten Männeranteil mit 21 % (80/382). Der höchste Frauenanteil mit RM-Läsion fand sich im Bereich Gesundheit, Soziales, Lehre und Erziehung. Die Altersstruktur der Gruppen, unterteilt nach Klassifikation der Berufe, zeigte keine signifikanten Unterschiede (p = 0,493). Eine Auflistung wird in Tab. 2 und Abb. 4 dargestellt.\nRM-Läsionen in der Studienpopulation im Vergleich zur Gesamtbevölkerung nach KldB 2010 (2020): Verglichen mit der Gesamtbevölkerung (Tab. 1) zeigen sich Unterschiede in der Häufigkeit von RM-Läsionen in Bezug auf die Häufigkeit der entsprechenden Berufsklassifikationen (Tab. 3). 25 % der Erwerbstätigen mit sozialversicherungspflichtigem Beruf gingen einer Tätigkeit aus dem Bereich kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus nach, hingegen zeigten in unserem Patientenklientel nur 11 % eine RM-Läsion. In den Berufen Bau, Architektur, Vermessung und Gebäudetechnik arbeiten 9 % der Bevölkerung, allerdings entfielen 23 % der RM-Läsionen unseres Patientenklientels auf diese Berufszweige. Anhand der Vergleiche der Patienten mit der Bevölkerung ergab sich eine signifikant höhere Erkrankungsrate in den Bereichen Land‑, Forst- und Tierwirtschaft und Gartenbau, Bau, Architektur, Vermessung und Gebäudetechnik, Verkehr, Logistik, Schutz und Sicherheit und Unternehmensorganisation, Buchhaltung, Recht und Verwaltung.
Ein signifikant reduziertes Risiko bestand in den Berufszweigen Naturwissenschaft, Geografie und Informatik, kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus, Gesundheit, Soziales, Lehre und Erziehung sowie Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung. Tab. 3 (Berufsbereich: gesamte Bevölkerung in %/mit Rotatorenmanschettenläsion in %/p-Wert): Land‑, Forst- und Tierwirtschaft und Gartenbau 3/5/p = 0,003*; Rohstoffgewinnung, Produktion und Fertigung 21/19/p = 0,312; Bau, Architektur, Vermessung und Gebäudetechnik 9/23/p < 0,001*; Naturwissenschaft, Geografie und Informatik 4/2/p = 0,015*; Verkehr, Logistik, Schutz und Sicherheit 10/18/p < 0,001*; Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus 25/11/p < 0,001*; Unternehmensorganisation, Buchhaltung, Recht und Verwaltung 5/7/p < 0,001*; Gesundheit, Soziales, Lehre und Erziehung 19/13/p < 0,001*; Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung 3/2/p = 0,091*; Militär 1/0/p = 0,212. *Signifikanter Unterschied\nEinfluss von Geschlecht in Abhängigkeit von KldB 2010 (2020): Bei der Bildung von Untergruppen nach dem Geschlecht zeigten sich für Männer in den Berufszweigen Bau, Architektur, Vermessung und Gebäudetechnik sowie Verkehr, Logistik, Schutz und Sicherheit signifikant höhere Erkrankungsraten.
Im Gegensatz dazu arbeiteten Frauen häufiger in den Berufsgruppen kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus sowie im Bereich Gesundheit, Soziales, Lehre und Erziehung, welche ein signifikant geringeres Risiko für RM-Läsionen aufweisen.\nRM-Läsionen im GUV-Verfahren: Von den 589 Patienten mit RM-Läsionen wurden 10 % (61 Fälle) von der zugehörigen Berufsgenossenschaft anerkannt; 84 % der anerkannten Fälle waren männliche Arbeitende. Betroffen waren, abgestuft nach Häufigkeit, die Berufsgruppen Handel und Verkehr (31 %; 19/61), Baugewerbe (25 %; 15/61) und sonstige Dienstleistungen (20 %; 12/61).", "Die Probanden setzten sich aus 662 (61,9 %) Männern und 408 (38,1 %) Frauen zusammen (Abb. 1).", "Im Patientenkollektiv waren 26 % älter als 65 Jahre. Im Alter zwischen 50 und 65 Jahren waren 61 %, und 10 % waren zwischen 40 und 49 Jahre alt. Lediglich 3 % der Patienten, welche eine RM-Rekonstruktion erhielten, waren jünger als 40 Jahre (Abb. 2).\nDas Durchschnittsalter lag bei 59,52 Jahren (18–97 Jahre, Median 60 Jahre). Der Altersdurchschnitt betrug für die Männer 59 (±8,33) Jahre und für die Frauen 61 (±8,15) Jahre.", "Bei der Altersstruktur zwischen den einzelnen Bereichen unserer Studienpopulation nach den aktuellen Daten der KldB 2010 (2020) ließ sich kein signifikanter Unterschied nachweisen (Abb. 3).", "Es waren 844 Patienten im arbeitsfähigen Alter (< 65 Jahre), hiervon gingen 597 einer Beschäftigung nach. Aufgrund von Elternzeit, Arbeitslosigkeit oder EU-Rente gingen 247 keiner Tätigkeit nach. Unter den Berufstätigen fanden sich rund 65 % Männer und 35 % Frauen. Der Altersdurchschnitt der aktuell Beschäftigten lag bei Männern bei 54 Jahren und bei Frauen bei 55 Jahren. Acht Befragte machten widersprüchliche Angaben, welchen Beruf sie ausübten, und konnten nicht eingeschlossen werden (Tab. 2). Tab. 2 (Berufsbereich: Anzahl/Durchschnittsalter): Land‑, Forst- und Tierwirtschaft und Gartenbau 31/55; Rohstoffgewinnung, Produktion und Fertigung 112/53; Bau, Architektur, Vermessung und Gebäudetechnik 133/55; Naturwissenschaft, Geografie und Informatik 10/56; Verkehr, Logistik, Schutz und Sicherheit 105/55; Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus 66/55; Unternehmensorganisation, Buchhaltung, Recht und Verwaltung 42/54; Gesundheit, Soziales, Lehre und Erziehung 79/55; Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung 11/58; Militär 0/0.", "In der Patientenklientel zeigte sich eine signifikante Häufung von Männern (m = 382, w = 207, p < 0,001). Nach der Klassifikation der Berufe entfielen auf den Bereich Bau, Architektur, Vermessung und Gebäudetechnik die meisten Arbeitnehmer mit RM-Läsionen mit 23 % (133/589) und der höchste Anteil männlicher Arbeitender mit 32 % (120/382), gefolgt von Rohstoffgewinnung, Produktion und Fertigung mit 19 % (n = 112/589) und dem zweithöchsten Männeranteil mit 23 % (89/382); auf Platz 3 folgte der Bereich Verkehr, Logistik, Schutz und Sicherheit mit 18 % (105/589) und dem dritthöchsten Männeranteil mit 21 % (80/382). Der höchste Frauenanteil mit RM-Läsion fand sich im Bereich Gesundheit, Soziales, Lehre und Erziehung. Die Altersstruktur der Gruppen, unterteilt nach Klassifikation der Berufe, zeigte keine signifikanten Unterschiede (p = 0,493). Eine Auflistung wird in Tab. 2 und Abb. 4 dargestellt.", "Verglichen mit der Gesamtbevölkerung (Tab. 1) zeigen sich Unterschiede in der Häufigkeit von RM-Läsionen in Bezug auf die Häufigkeit der entsprechenden Berufsklassifikationen (Tab. 3). 25 % der Erwerbstätigen mit sozialversicherungspflichtigem Beruf gingen einer Tätigkeit aus dem Bereich kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus nach, hingegen zeigten in unserem Patientenklientel nur 11 % eine RM-Läsion. In den Berufen Bau, Architektur, Vermessung und Gebäudetechnik arbeiten 9 % der Bevölkerung, allerdings entfielen 23 % der RM-Läsionen unseres Patientenklientels auf diese Berufszweige. Anhand der Vergleiche der Patienten mit der Bevölkerung ergab sich eine signifikant höhere Erkrankungsrate in den Bereichen Land‑, Forst- und Tierwirtschaft und Gartenbau, Bau, Architektur, Vermessung und Gebäudetechnik, Verkehr, Logistik, Schutz und Sicherheit und Unternehmensorganisation, Buchhaltung, Recht und Verwaltung. Ein signifikant reduziertes Risiko bestand in den Berufszweigen Naturwissenschaft, Geografie und Informatik, kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus, Gesundheit, Soziales, Lehre und Erziehung sowie Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung. Tab. 3 (Berufsbereich: gesamte Bevölkerung in %/mit Rotatorenmanschettenläsion in %/p-Wert): Land‑, Forst- und Tierwirtschaft und Gartenbau 3/5/p = 0,003*; Rohstoffgewinnung, Produktion und Fertigung 21/19/p = 0,312; Bau, Architektur, Vermessung und Gebäudetechnik 9/23/p < 0,001*; Naturwissenschaft, Geografie und Informatik 4/2/p = 0,015*; Verkehr, Logistik, Schutz und Sicherheit 10/18/p < 0,001*; Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus 25/11/p < 0,001*; Unternehmensorganisation, Buchhaltung, Recht und Verwaltung 5/7/p < 0,001*; Gesundheit, Soziales, Lehre und Erziehung 19/13/p < 0,001*; Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung 3/2/p = 0,091*; Militär 1/0/p = 0,212. *Signifikanter Unterschied", "Bei der Bildung von Untergruppen nach dem Geschlecht zeigten sich für Männer in den Berufszweigen Bau, Architektur, Vermessung und Gebäudetechnik sowie Verkehr, Logistik, Schutz und Sicherheit signifikant höhere Erkrankungsraten. Im Gegensatz dazu arbeiteten Frauen häufiger in den Berufsgruppen kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus sowie im Bereich Gesundheit, Soziales, Lehre und Erziehung, welche ein signifikant geringeres Risiko für RM-Läsionen aufweisen.", "Von den 589 Patienten mit RM-Läsionen wurden 10 % (61 Fälle) von der zugehörigen Berufsgenossenschaft anerkannt; 84 % der anerkannten Fälle waren männliche Arbeitende.
Betroffen waren, abgestuft nach Häufigkeit, die Berufsgruppen Handel und Verkehr (31 %; 19/61), Baugewerbe (25 %; 15/61) und sonstige Dienstleistungen (20 %; 12/61).", "Die Berufsabhängigkeit kann auf die Entstehung von Erkrankungen des Bewegungsapparates der oberen Extremität einen Einfluss haben. Neben berufsspezifischen Faktoren lassen sich gesundheitsbezogene Risiken darstellen. Das Geschlecht eines Erwerbtätigen kann innerhalb einer Berufsgruppe Auswirkungen für das Ausmaß von Erkrankungen des Bewegungsapparates der oberen Extremität und die Reaktionen des Körpers auf bestimmte Arbeitsabläufe und somit das Erkrankungsrisiko haben. Es existieren einige Studien mit der Frage, wann nach einer RM-Läsion oder deren operativen Versorgung eine Rückkehr an den Arbeitsplatz möglich ist [18, 19]. Analysen unter Berücksichtigung anerkannter Berufsklassifikationssysteme zu arbeitsassoziierten Faktoren, welche zu einer RM-Läsion führen oder epidemiologische Untersuchungen zum Auftreten von RM-Läsionen und dem ausgeübten Beruf sind in der Literatur wenig beschrieben.\nIn der Literatur zum Thema arbeitsassoziierte Faktoren und muskuloskelettale Erkrankungen der oberen Extremität sind meist verschiedene Erkrankungen zusammengefasst, bei denen die RM-Läsion einen hohen Stellenwert hat [20, 21]. In dem Vergleich der Häufigkeiten von Berufsgruppen innerhalb der erwerbsfähigen Bevölkerung und der Häufigkeit der Berufe in der Patientengruppe mit RM-Läsion im erwerbsfähigen Alter zeigt sich eine erhöhte Prävalenz für bestimmte berufliche Belastungen. Aus der Literatur war der Zusammenhang von RM-Läsionen zu Zwangshaltungen mit erhobenen Armen und Überkopfarbeiten [22], für repetitive monotone Arbeiten in Flexion und Abduktion mit Werkzeugen [23], für das Anheben mittlerer und schwerer Lasten über Schulterniveau, für das Ziehen oder Schieben von Gewichten mit den Armen und Vibrationen [24] bekannt. Somit ist bei Berufszweigen mit häufigem Ausführen schulterbelastender Tätigkeiten eine höhere Erkrankungsrate von RM-Läsionen zu erwarten. Dies korreliert stark mit den Ergebnissen, dass die häufigsten RM-Läsionen im Vergleich zur Häufigkeit des Berufszweiges in den körperlich schwereren Tätigkeitsbereichen Bau, Architektur, Vermessung und Gebäudetechnik sowie Verkehr, Logistik, Schutz und Sicherheit zu verzeichnen waren. In der KldB 2010 (2020) waren die Berufszweige erfasst und liefern Anhaltspunkte, dass RM-Verletzungen auch über den allgemeinen degenerativen Prozess durch die Berufstätigkeit beeinflusst werden können. Die hier durchgeführte retrospektive Studie deckt signifikante Zusammenhänge zu bestimmten Berufszweigen auf.\nDer Faktor Alter spielte in der vorliegenden Studie als Einflussparameter keine Rolle, da es keine signifikanten Unterschiede im Alter in den Berufszweigen gab. Dies zeigte sich in vorangegangenen internationalen Studien divergent [25–27].\nDas Geschlecht als gesundheitsbezogenes Risiko im Kontext zu RM-Läsionen in diesem Zusammenhang spielte bisher in wissenschaftlichen Fragestellungen eine untergeordnete Rolle. Rolf et al. untersuchten mit ihrer Studie, ob Läsionen der RM eine Berufserkrankung seien [6]. Die Daten dieser Studie deuteten darauf hin, dass die berufsbedingte Exposition das Risiko einer RM-Läsion erhöht oder zu einer klinischen RM-Ruptur führen kann. Alleinig wurden hierfür 472 Männer eingeschlossen, sodass im weiterführenden Kontext geschlechtsspezifische Faktoren im berufsspezifischen Kontext in dieser Studie keine Rolle spielten. 
Weitere internationale Studien zu diesem Thema zeigen ähnliche Inhalte [28, 29]. In einer Studie von van Dyck et al. wurden geschlechtsspezifische Faktoren im beruflichen Kontext bei der Entstehung von RM-Läsionen beschrieben [31]. Van Dyck et al. zeigten auf, dass es im Gesundheitssektor bei den berufsfeldspezifischen Krankenständen aufgrund einer RM-Läsion unter Frauen ein höheres Niveau gab als bei den männlichen Kollegen. Aus diesen Daten resultiert die Annahme, dass neben der Berufsabhängigkeit auch weitere gesundheitsspezifische Risiken bei der Entstehung von RM-Läsionen einen Einfluss haben können [32].\nNaheliegend ist die Annahme von berufsspezifisch unterschiedlichen gesundheitsbezogenen Risiken als Folge der Belastung am Arbeitsplatz. So sind Erwerbstätige bestimmter Berufssektoren einem höheren Verletzungsrisiko am Arbeitsplatz ausgesetzt als andere [33]. Der Beruf spielt aber auch insofern eine Rolle, als dass die Tätigkeitsausübung bei ein und derselben gesundheitlichen Einschränkung berufsabhängig unterschiedlich stark beeinträchtigen und somit spezifische sekundäre gesundheitsbezogene Risiken nach sich ziehen kann [26]. So zeigen sich bei Personen mit einer RM-Läsion, die in einem Berufssektor mit wenig Belastung der oberen Extremität tätig sind, eine geringere Gesamtdauer der Arbeitsunfähigkeit im Vergleich zu Personen, deren berufliche Ausübung mit langen oder starken Belastungen der oberen Extremität verbunden ist. Hierbei liegt die Differenz der Dauer der Arbeitsunfähigkeit bei mehreren Wochen [10]. \nWeitere, zum Teil in unterschiedliche Richtungen und nicht ausschließlich berufsgruppenspezifisch wirkende Einflüsse entstehen durch Selektionseffekte oder durch nur mittelbar gesundheitsrelevante Berufsbedingungen. Dazu gehört der sogenannte „healthy worker effect“. Bei Anstellung von körperlich überdurchschnittlich gesunden Personen für besonders belastende Tätigkeiten, können trotz hoher Belastung in bestimmten Berufsgruppen geringe Erkrankungsraten resultieren. Zudem können durch Möglichkeiten zur vorzeitigen Berentung und Einflüsse von tariflich unterschiedlich vereinbarten Entgeltfortzahlungen im Krankheitsfall Selektionseffekte entstehen [34]. Aber auch arbeitnehmerabhängige interindividuelle Faktoren, wie persönliche Kompetenz und Verantwortlichkeit im ausgeübten Beruf, können einen Einfluss auf die Entstehung von Erkrankungen der oberen Extremität haben. Berufs- und zeitabhängig unterschiedlich wahrgenommene Gefahren des Arbeitsplatzverlusts sowie die Berufszufriedenheit und das Arbeitsklima können weiter auf die Entstehung von RM-Läsionen einen Einfluss haben [35]. Die in der Literatur beschriebenen Zusammenhänge aus belastenden Tätigkeiten der oberen Extremität und die epidemiologische Verteilung der vorliegenden Studie zu RM-Läsionen in den Berufszweigen führt zu der Vermutung einer Häufung der schulterbelastenden Tätigkeiten insbesondere im Berufszweig Bau, Architektur, Vermessung und Gebäudetechnik sowie Verkehr, Logistik, Schutz und Sicherheit. Diese Bereiche sollten auf die schulterbelastenden Tätigkeiten genauer untersucht werden, um prädisponierende Faktoren zu definieren und gegebenenfalls auf Prävention oder auch die Definition einer Berufserkrankung abzielen zu können.", "Limitierend waren die fehlenden Informationen, wie lange die berufliche Tätigkeit ausgeführt wurde, und die Historie zur beruflichen Tätigkeit der Patienten. 
Weiterhin fehlten die Informationen zu den einzelnen belastenden Tätigkeiten pro Arbeitstag, wie zum Beispiel Überkopfarbeiten, repetitive Arbeiten oder die Belastung durch Vibrationen. Ein Selektionsbias dieser Studie liegt insofern vor, als Patienten mit nicht mehr rekonstruierbarer oder konservativ behandelter RM-Läsion nicht in die Analyse einbezogen werden konnten. In der Studie erfolgte keine weitere Unterteilung der RM-Läsionen nach Grad der Verfettung oder nach beteiligten Sehnen und Rissarten. Weiterhin wurde keine Histologie erfasst, die Rückschlüsse auf die Pathologie der Läsion erlaubt hätte. Eine vollständige Diskussion des Einflusses der berufsgruppenspezifischen Berufsabhängigkeit auf die Entstehung von Erkrankungen des Bewegungsapparates der oberen Extremität muss all diese Einflussmöglichkeiten abwägen. Allerdings zeigen sich bei einer Betrachtung der entsprechenden Auswertungsergebnisse Muster, die sich auch ohne den Anspruch einer vollständigen Diskussion sinnvoll interpretieren lassen. Diese wissenschaftliche Arbeit behandelt eine interessante sozioökonomische Fragestellung in Bezug auf die Entstehung von RM-Läsionen.", "\nDer Beruf kann einen Einfluss auf die Entstehung von Rotatorenmanschetten(RM)-Läsionen haben.\nSpezifische sekundäre gesundheitsbezogene Risiken können ebenfalls einen Einfluss auf die Entstehung von RM-Läsionen haben.\nZusammenhänge aus den belastenden Tätigkeiten der oberen Extremität und die epidemiologische Verteilung der vorliegenden Studie zu RM-Läsionen in den Berufszweigen führen zu der Vermutung einer Häufung der schulterbelastenden Tätigkeiten.\nBerufssektoren mit schulterbelastenden Tätigkeiten sollten genauer untersucht werden, um prädisponierende Faktoren zu definieren und gegebenenfalls auf die Prävention oder auch die Definition einer Berufserkrankung abzielen zu können.\nSymptome der oberen Extremität bei Betroffenen, die in prädestinierten Berufszweigen arbeiten, sollten differenziert betrachtet werden." ]
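Zur Veranschaulichung der im Methodikteil beschriebenen Vorgehensweise (Einteilung der dokumentierten Berufe in die zehn KldB-2010-Berufsbereiche und Ableitung von Bevölkerungsanteilen aus Tab. 1 als Vergleichsbasis) folgt eine minimale, rein illustrative Python-Skizze. Die Beschäftigtenzahlen entsprechen Tab. 1; die Einträge in BEISPIEL_BERUFE sowie alle Funktions- und Variablennamen sind frei gewählte Annahmen und stammen nicht aus der Studie.

from collections import Counter

# Tab. 1: sozialversicherungspflichtig Beschäftigte je KldB-2010-Berufsbereich (03/2020)
KLDB_2010_BESCHAEFTIGTE = {
    "Land-, Forst- und Tierwirtschaft und Gartenbau": 500_654,
    "Rohstoffgewinnung, Produktion und Fertigung": 7_218_186,
    "Bau, Architektur, Vermessung und Gebäudetechnik": 2_019_667,
    "Naturwissenschaft, Geografie und Informatik": 1_357_948,
    "Verkehr, Logistik, Schutz und Sicherheit": 4_499_345,
    "Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus": 3_971_079,
    "Unternehmensorganisation, Buchhaltung, Recht und Verwaltung": 6_819_708,
    "Gesundheit, Soziales, Lehre und Erziehung": 6_181_809,
    "Sprach-, Literatur-, Geistes-, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung": 892_605,
    "Keine Angabe": 187_182,
}

# Hypothetische Zuordnung einzelner dokumentierter Berufe zu den Berufsbereichen
# (Beispielwerte, nicht aus der Studie entnommen)
BEISPIEL_BERUFE = {
    "Maurer": "Bau, Architektur, Vermessung und Gebäudetechnik",
    "Berufskraftfahrer": "Verkehr, Logistik, Schutz und Sicherheit",
    "Krankenpfleger": "Gesundheit, Soziales, Lehre und Erziehung",
}

def bevoelkerungsanteile(beschaeftigte: dict) -> dict:
    """Anteil jedes Berufsbereichs an allen Beschäftigten (Vergleichsbasis aus Tab. 1)."""
    gesamt = sum(beschaeftigte.values())  # 33.648.183 laut Tab. 1
    return {bereich: anzahl / gesamt for bereich, anzahl in beschaeftigte.items()}

def verteilung_der_patienten(berufe: list) -> Counter:
    """Zählt, wie viele dokumentierte Patientenberufe auf jeden Berufsbereich entfallen."""
    return Counter(BEISPIEL_BERUFE.get(b, "Keine Angabe") for b in berufe)

if __name__ == "__main__":
    anteile = bevoelkerungsanteile(KLDB_2010_BESCHAEFTIGTE)
    patienten = verteilung_der_patienten(["Maurer", "Maurer", "Krankenpfleger", "Berufskraftfahrer"])
    for bereich, anzahl in patienten.items():
        print(f"{bereich}: {anzahl} Patienten, Bevölkerungsanteil {anteile[bereich]:.1%}")

Wie die Autoren aus diesen Beschäftigtenzahlen die in Tab. 3 berichteten Bevölkerungsanteile im Einzelnen abgeleitet haben (etwa bezogen auf eine andere Grundgesamtheit), geht aus dem Methodikteil nicht eindeutig hervor; die Skizze zeigt lediglich das Prinzip der Vergleichsbasis.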
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "conclusion" ]
[ "Beruf", "Prävalenz", "Retrospektive Studie", "Rotatorenmanschette", "Schulterschmerzen", "Occupation", "Prevalence", "Retrospective studies", "Rotator cuff", "Shoulder pain" ]
Kurze Hinführung zum Thema: Einen entscheidenden Einfluss auf die Entstehung von Erkrankungen des Bewegungsapparates der oberen Extremität hat der aktuell ausgeübte Beruf. Der Einfluss des Berufs resultiert dabei aus einer Reihe von Faktoren. Naheliegend ist zunächst die Annahme von berufsspezifisch unterschiedlichen gesundheitsbezogenen Risiken als Folge der Belastung am Arbeitsplatz. Bisherige Erkenntnisse in diesem Zusammenhang weisen darauf hin, dass bestimmte Berufszweige einem höheren Erkrankungsrisiko am Arbeitsplatz ausgesetzt sind als andere. Kann die berufliche Belastung so groß sein, dass sie zu einer Krankheit führt, oder spielen dabei andere Aspekte außerhalb des Arbeitslebens eine größere Rolle für die Krankheitsentstehung? Einleitung: Das Schultergelenk mit multidirektionaler Beweglichkeit und hoher Bedeutung für die Arm- und Handfunktion spielt eine wichtige Rolle für die Arbeitsfähigkeit [1, 2]. Läsionen der Rotatorenmanschette (RM) sind eine häufige muskuloskelettale Verletzung der oberen Extremität in der arbeitenden Bevölkerung. Hiervon sind 6,60 % der Männer und 8,50 % der Frauen von einer betroffen [3]. Die Folge sind lange Abwesenheitsperioden vom Arbeitsplatz [4, 5]. Rotatorenmanschettenpathologien und Schmerzen der Schulter sind in Finnland die häufigste Ursache für krankheitsbedingte Abwesenheit vom Arbeitsplatz [6, 7]. Hierbei wurde die krankheitsbedingte Abwesenheit anhand des Auftretens und der Dauer der Abwesenheit bei den krankheitsbedingten Fehlzeiten aufgrund von Muskel-Skelett-Erkrankungen definiert. Die krankheitsbedingte Abwesenheit hing in der Arbeit von Pekkala et al. von den geleisteten Arbeitsjahren ab, mit einem Hauptanteil zwischen dem 45. und 65. Lebensjahr. Je höher die Anzahl der geleisteten Arbeitsjahre war, umso länger war die krankheitsbedingte Abwesenheit von Erwerbstätigen aufgrund einer RM-Läsion [6]. Einen entscheidenden Einfluss auf die Entstehung von Erkrankungen des Bewegungsapparates hat der aktuell ausgeübte Beruf. Der Einfluss des Berufs resultiert dabei aus einer Reihe von Faktoren. Naheliegend ist zunächst die Annahme von berufsspezifisch unterschiedlichen gesundheitsbezogenen Risiken als Folge der Belastung am Arbeitsplatz. Zu den arbeitsassoziierten Faktoren gehört u. a. das häufige Tragen von Gewichten, stark wiederholende Arbeiten und Überkopfarbeiten [8]. In der Literatur finden sich wenig Daten, die den Einfluss der beruflichen Tätigkeit auf die Entstehung von RM-Läsionen darstellen [9]. Bisherige Erkenntnisse in diesem Zusammenhang weisen darauf hin, dass bestimmte Berufszweige einem höheren Erkrankungsrisiko am Arbeitsplatz ausgesetzt sind als andere [10–12]. Ziel dieser Studie ist es, den Einfluss der Berufsabhängigkeit auf die Entstehung von RM-Läsionen zu untersuchen und neben berufsspezifischen Faktoren, gesundheitsbezogene Risiken darzustellen. Methodik: Die zuständige Ethikkommission der Universität Jena wurde informiert und hatte keine Einwände gegen die retrospektive, monozentrische Auswertung der Studie mit dem Namen PAMO-NUNK-Shoulder-Study (Reg.-Nr.: 2018-1165-Daten). In dieser Studie wurden alle Patienten eingeschlossen, welche eine arthroskopische Rekonstruktion der RM in den Jahren 2016 bis 2019 erhalten haben. Die Patientenauswahl erfolgte anhand der internationalen Klassifikation der Behandlungsmethoden in der Medizin mit ICD-Codes: M75.1 (Läsionen der RM) und S46.0 (Verletzung der Muskeln und der Sehnen der RM). 
Anhand der dokumentierten Anamnese und Operationsberichte wurden alle Patienten insbesondere im Hinblick auf ihre berufliche Tätigkeit ausgewertet. Es konnten 1070 Patienten eingeschlossen werden. Bei allen Patienten wurde der aktuelle Beruf dokumentiert, sowie Berentung oder Arbeitslosigkeit, Alter und Geschlecht. Weiterhin wurde die gegebenenfalls zugehörige Berufsgenossenschaft (BG) von der BG anerkannten Verletzungen erhoben. Die KldB 2010 Die Klassifikation der Berufe 2010, kurz KldB 2010, wurde von der Bundesagentur für Arbeit und dem Institut für Arbeitsmarkt- und Berufsforschung unter Beteiligung des Statistischen Bundesamtes und den betroffenen Bundesministerien sowie Experten der berufskundlichen und empirischen Forschung entwickelt. Diese wurde im Jahr 2011 eingeführt. Die KldB 2010 wurde vollständig neu entwickelt und löst die Klassifizierung der Berufe aus den Jahren 1988 und 1992 ab. Die KldB 2010 bildet die aktuelle Berufslandschaft in Deutschland realitätsnah ab und bietet zugleich eine hohe Kompatibilität zu anderen internationalen Berufsklassifikationen. Dabei werden Berufe in zehn übergeordneten Berufszweigen sektorenorientiert zusammengefasst (Tab. 1). Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt.BerufsbereichAktuelle Zahlen 03/2020Gesamt:33.648.183Land‑, Forst- und Tierwirtschaft und Gartenbau500.654Rohstoffgewinnung, Produktion und Fertigung7.218.186Bau, Architektur, Vermessung und Gebäudetechnik2.019.667Naturwissenschaft, Geografie und Informatik1.357.948Verkehr, Logistik, Schutz und Sicherheit4.499.345Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus3.971.079Unternehmensorganisation, Buchhaltung, Recht und Verwaltung6.819.708Gesundheit, Soziales, Lehre und Erziehung6.181.809Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung892.605Keine Angabe187.182 Die Klassifikation der Berufe 2010, kurz KldB 2010, wurde von der Bundesagentur für Arbeit und dem Institut für Arbeitsmarkt- und Berufsforschung unter Beteiligung des Statistischen Bundesamtes und den betroffenen Bundesministerien sowie Experten der berufskundlichen und empirischen Forschung entwickelt. Diese wurde im Jahr 2011 eingeführt. Die KldB 2010 wurde vollständig neu entwickelt und löst die Klassifizierung der Berufe aus den Jahren 1988 und 1992 ab. Die KldB 2010 bildet die aktuelle Berufslandschaft in Deutschland realitätsnah ab und bietet zugleich eine hohe Kompatibilität zu anderen internationalen Berufsklassifikationen. Dabei werden Berufe in zehn übergeordneten Berufszweigen sektorenorientiert zusammengefasst (Tab. 1). Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt.BerufsbereichAktuelle Zahlen 03/2020Gesamt:33.648.183Land‑, Forst- und Tierwirtschaft und Gartenbau500.654Rohstoffgewinnung, Produktion und Fertigung7.218.186Bau, Architektur, Vermessung und Gebäudetechnik2.019.667Naturwissenschaft, Geografie und Informatik1.357.948Verkehr, Logistik, Schutz und Sicherheit4.499.345Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus3.971.079Unternehmensorganisation, Buchhaltung, Recht und Verwaltung6.819.708Gesundheit, Soziales, Lehre und Erziehung6.181.809Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung892.605Keine Angabe187.182 Daten für Deutschland Nach Aussage des statistischen Bundesamtes belief sich die Einwohnerzahl in Deutschland im März 2020 auf 83 Mio. 
Einwohner, wovon 54 Mio. im erwerbsfähigen Alter (15–65 Jahre) waren, 33 Mio. gingen einer sozialversicherungspflichtigen Tätigkeit nach [13]. Bei der Betrachtung der Wirtschaftssektoren befanden sich 2017 lediglich 1,4 % der Erwerbstätigen im primären Sektor (Land- und Forstwirtschaft, Fischerei), wohingegen im sekundären Sektor (produzierendes Gewerbe) 24,1 % eine Beschäftigung fanden. Mit 74,5 % war 2017 der Dienstleistungssektor am stärksten vertreten. Die Gewichtung dieser Bereiche lässt sich durch den strukturellen Wandel der Gesellschaft, welcher sich durch beispielsweise veränderte Nachfrage und zunehmende Automatisierung erklärt, verstehen. Im Dienstleistungssektor ist der Bereich öffentliche Dienstleister, Erziehung, Gesundheit mit 10,9 Mio. Erwerbstätigen (24,7 %) am stärksten gewichtet. Ähnlich viele Personen (10,1 Mio./22,8 %) finden im Wirtschaftsbereich Handel, Verkehr und Gastgewerbe eine Beschäftigung. Nicht zu vernachlässigen ist das Gebiet der Finanzierung, Immobilien, Unternehmensdienstleister mit immerhin 17,4 % aller Erwerbstätigen [14]. Die Bundesagentur für Arbeit gruppierte die Erwerbstätigen in Deutschland anhand der Klassifikation der Berufe 2010. Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt [15, 16]. Nach Aussage des statistischen Bundesamtes belief sich die Einwohnerzahl in Deutschland im März 2020 auf 83 Mio. Einwohner, wovon 54 Mio. im erwerbsfähigen Alter (15–65 Jahre) waren, 33 Mio. gingen einer sozialversicherungspflichtigen Tätigkeit nach [13]. Bei der Betrachtung der Wirtschaftssektoren befanden sich 2017 lediglich 1,4 % der Erwerbstätigen im primären Sektor (Land- und Forstwirtschaft, Fischerei), wohingegen im sekundären Sektor (produzierendes Gewerbe) 24,1 % eine Beschäftigung fanden. Mit 74,5 % war 2017 der Dienstleistungssektor am stärksten vertreten. Die Gewichtung dieser Bereiche lässt sich durch den strukturellen Wandel der Gesellschaft, welcher sich durch beispielsweise veränderte Nachfrage und zunehmende Automatisierung erklärt, verstehen. Im Dienstleistungssektor ist der Bereich öffentliche Dienstleister, Erziehung, Gesundheit mit 10,9 Mio. Erwerbstätigen (24,7 %) am stärksten gewichtet. Ähnlich viele Personen (10,1 Mio./22,8 %) finden im Wirtschaftsbereich Handel, Verkehr und Gastgewerbe eine Beschäftigung. Nicht zu vernachlässigen ist das Gebiet der Finanzierung, Immobilien, Unternehmensdienstleister mit immerhin 17,4 % aller Erwerbstätigen [14]. Die Bundesagentur für Arbeit gruppierte die Erwerbstätigen in Deutschland anhand der Klassifikation der Berufe 2010. Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt [15, 16]. Studienvergleichsparameter Anhand der KldB 2010 wurden die dokumentierten Berufe der Patienten eingeteilt. Hierdurch wird eine einheitliche Berufsklassifikation innerhalb dieser Arbeit geschaffen, die statistische Vergleiche zur Gesamtbevölkerung ermöglichen. Diese Daten dienen als Vergleichsgrundlage der Studienergebnisse mit denen von der KldB 2010 aus dem Jahr 2020. Anhand der KldB 2010 wurden die dokumentierten Berufe der Patienten eingeteilt. Hierdurch wird eine einheitliche Berufsklassifikation innerhalb dieser Arbeit geschaffen, die statistische Vergleiche zur Gesamtbevölkerung ermöglichen. Diese Daten dienen als Vergleichsgrundlage der Studienergebnisse mit denen von der KldB 2010 aus dem Jahr 2020. Datenerhebung Die Datenerhebung wurde mittels Microsoft Excel 365 (Fa. 
Microsoft, Redmond, WA, USA) und die statistische Analyse mittels des Statistikprogramms SPSS (Version 22.0, SPSS Inc., Chicago, IL, USA) durchgeführt. Die in der Studie dokumentierten Datensätze wurden mit routinemäßig erfassten und anonymisierten, frei verfügbaren Daten des Statistischen Bundesamtes verglichen. Diese Datensätze sind digital und frei verfügbar auf der Internetseite zugänglich unter der Kategorie: Themen/Arbeit/Arbeitsmarkt/Erwerbstätigkeit [17]. Die deskriptiven Statistiken umfassen Mengen, Prozentsätze, Medianwerte und Bereiche für ordinäre Variablen. Die Verteilung der Daten wurden auf die Normalverteilung untersucht. Der Chi-Quadrattest wurde für die Analyse von Einflussparametern genutzt und zum Vergleich der Studiendaten mit den Daten des statistischen Bundesamtes eingesetzt. Der Kruskal-Wallis-Test wurde spezifisch zur Erhebung der Altersverteilung in den Berufsgruppen verwendet. Der p-Wert von weniger als 0,05 wurde als statistisch signifikant angesehen. Die Datenerhebung wurde mittels Microsoft Excel 365 (Fa. Microsoft, Redmond, WA, USA) und die statistische Analyse mittels des Statistikprogramms SPSS (Version 22.0, SPSS Inc., Chicago, IL, USA) durchgeführt. Die in der Studie dokumentierten Datensätze wurden mit routinemäßig erfassten und anonymisierten, frei verfügbaren Daten des Statistischen Bundesamtes verglichen. Diese Datensätze sind digital und frei verfügbar auf der Internetseite zugänglich unter der Kategorie: Themen/Arbeit/Arbeitsmarkt/Erwerbstätigkeit [17]. Die deskriptiven Statistiken umfassen Mengen, Prozentsätze, Medianwerte und Bereiche für ordinäre Variablen. Die Verteilung der Daten wurden auf die Normalverteilung untersucht. Der Chi-Quadrattest wurde für die Analyse von Einflussparametern genutzt und zum Vergleich der Studiendaten mit den Daten des statistischen Bundesamtes eingesetzt. Der Kruskal-Wallis-Test wurde spezifisch zur Erhebung der Altersverteilung in den Berufsgruppen verwendet. Der p-Wert von weniger als 0,05 wurde als statistisch signifikant angesehen. Die KldB 2010: Die Klassifikation der Berufe 2010, kurz KldB 2010, wurde von der Bundesagentur für Arbeit und dem Institut für Arbeitsmarkt- und Berufsforschung unter Beteiligung des Statistischen Bundesamtes und den betroffenen Bundesministerien sowie Experten der berufskundlichen und empirischen Forschung entwickelt. Diese wurde im Jahr 2011 eingeführt. Die KldB 2010 wurde vollständig neu entwickelt und löst die Klassifizierung der Berufe aus den Jahren 1988 und 1992 ab. Die KldB 2010 bildet die aktuelle Berufslandschaft in Deutschland realitätsnah ab und bietet zugleich eine hohe Kompatibilität zu anderen internationalen Berufsklassifikationen. Dabei werden Berufe in zehn übergeordneten Berufszweigen sektorenorientiert zusammengefasst (Tab. 1). Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 
1 dargestellt.BerufsbereichAktuelle Zahlen 03/2020Gesamt:33.648.183Land‑, Forst- und Tierwirtschaft und Gartenbau500.654Rohstoffgewinnung, Produktion und Fertigung7.218.186Bau, Architektur, Vermessung und Gebäudetechnik2.019.667Naturwissenschaft, Geografie und Informatik1.357.948Verkehr, Logistik, Schutz und Sicherheit4.499.345Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus3.971.079Unternehmensorganisation, Buchhaltung, Recht und Verwaltung6.819.708Gesundheit, Soziales, Lehre und Erziehung6.181.809Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung892.605Keine Angabe187.182 Daten für Deutschland: Nach Aussage des statistischen Bundesamtes belief sich die Einwohnerzahl in Deutschland im März 2020 auf 83 Mio. Einwohner, wovon 54 Mio. im erwerbsfähigen Alter (15–65 Jahre) waren, 33 Mio. gingen einer sozialversicherungspflichtigen Tätigkeit nach [13]. Bei der Betrachtung der Wirtschaftssektoren befanden sich 2017 lediglich 1,4 % der Erwerbstätigen im primären Sektor (Land- und Forstwirtschaft, Fischerei), wohingegen im sekundären Sektor (produzierendes Gewerbe) 24,1 % eine Beschäftigung fanden. Mit 74,5 % war 2017 der Dienstleistungssektor am stärksten vertreten. Die Gewichtung dieser Bereiche lässt sich durch den strukturellen Wandel der Gesellschaft, welcher sich durch beispielsweise veränderte Nachfrage und zunehmende Automatisierung erklärt, verstehen. Im Dienstleistungssektor ist der Bereich öffentliche Dienstleister, Erziehung, Gesundheit mit 10,9 Mio. Erwerbstätigen (24,7 %) am stärksten gewichtet. Ähnlich viele Personen (10,1 Mio./22,8 %) finden im Wirtschaftsbereich Handel, Verkehr und Gastgewerbe eine Beschäftigung. Nicht zu vernachlässigen ist das Gebiet der Finanzierung, Immobilien, Unternehmensdienstleister mit immerhin 17,4 % aller Erwerbstätigen [14]. Die Bundesagentur für Arbeit gruppierte die Erwerbstätigen in Deutschland anhand der Klassifikation der Berufe 2010. Die Daten der deutschen Bevölkerung von März 2020 sind in Tab. 1 dargestellt [15, 16]. Studienvergleichsparameter: Anhand der KldB 2010 wurden die dokumentierten Berufe der Patienten eingeteilt. Hierdurch wird eine einheitliche Berufsklassifikation innerhalb dieser Arbeit geschaffen, die statistische Vergleiche zur Gesamtbevölkerung ermöglichen. Diese Daten dienen als Vergleichsgrundlage der Studienergebnisse mit denen von der KldB 2010 aus dem Jahr 2020. Datenerhebung: Die Datenerhebung wurde mittels Microsoft Excel 365 (Fa. Microsoft, Redmond, WA, USA) und die statistische Analyse mittels des Statistikprogramms SPSS (Version 22.0, SPSS Inc., Chicago, IL, USA) durchgeführt. Die in der Studie dokumentierten Datensätze wurden mit routinemäßig erfassten und anonymisierten, frei verfügbaren Daten des Statistischen Bundesamtes verglichen. Diese Datensätze sind digital und frei verfügbar auf der Internetseite zugänglich unter der Kategorie: Themen/Arbeit/Arbeitsmarkt/Erwerbstätigkeit [17]. Die deskriptiven Statistiken umfassen Mengen, Prozentsätze, Medianwerte und Bereiche für ordinäre Variablen. Die Verteilung der Daten wurden auf die Normalverteilung untersucht. Der Chi-Quadrattest wurde für die Analyse von Einflussparametern genutzt und zum Vergleich der Studiendaten mit den Daten des statistischen Bundesamtes eingesetzt. Der Kruskal-Wallis-Test wurde spezifisch zur Erhebung der Altersverteilung in den Berufsgruppen verwendet. Der p-Wert von weniger als 0,05 wurde als statistisch signifikant angesehen. 
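Ergänzend eine kleine, rein illustrative Skizze zu den im Abschnitt Datenerhebung genannten Tests. Annahme (in der Publikation nicht im Detail beschrieben): je Berufsbereich wird die beobachtete Verteilung der Patienten per Chi-Quadrat-Test gegen den erwarteten Bevölkerungsanteil geprüft; die Alterswerte für den Kruskal-Wallis-Test sind frei erfundene Platzhalter, lediglich die Fallzahlen und Prozentwerte stammen aus Tab. 3.

from scipy import stats

# Beispiel nach Tab. 3: Bau, Architektur, Vermessung und Gebäudetechnik stellt 9 % der
# Erwerbstätigen, aber 133 von 589 erwerbstätigen Patienten mit RM-Läsion.
n_patienten = 589
beobachtet = [133, n_patienten - 133]                 # im Bereich / außerhalb des Bereichs
anteil_bevoelkerung = 0.09
erwartet = [n_patienten * anteil_bevoelkerung,        # ca. 53 erwartete Fälle
            n_patienten * (1 - anteil_bevoelkerung)]

chi2, p = stats.chisquare(f_obs=beobachtet, f_exp=erwartet)
print(f"Chi-Quadrat = {chi2:.1f}, p = {p:.2e}")       # p < 0,001, d. h. signifikant bei Alpha = 0,05

# Kruskal-Wallis-Test über die Altersverteilung mehrerer Berufsgruppen (Platzhalterdaten)
alter_bau = [52, 55, 58, 60, 61]
alter_verkehr = [50, 54, 55, 57, 59]
alter_gesundheit = [53, 55, 56, 58, 62]
h, p_alter = stats.kruskal(alter_bau, alter_verkehr, alter_gesundheit)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_alter:.3f}")

Ob die Autoren genau dieses Testdesign verwendet haben, lässt sich dem Methodikteil nicht entnehmen; die Skizze dient nur der Einordnung der berichteten p-Werte (z. B. p < 0,001 für den Bereich Bau, Architektur, Vermessung und Gebäudetechnik).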
Ergebnisse: Epidemiologische Daten Die Probanden setzten sich aus 662 (61,9 %) Männer und 408 (38,1 %) Frauen zusammen (Abb. 1). Die Probanden setzten sich aus 662 (61,9 %) Männer und 408 (38,1 %) Frauen zusammen (Abb. 1). Altersverteilung Im Patientenkollektiv waren 26 % älter als 65 Jahre. Im Alter zwischen 50 und 65 waren 61 % und 10 % waren zwischen 40 und 49 Jahre alt. Lediglich 3 % der Patienten, welche eine RM-Rekonstruktion erhielten, waren jünger als 40 Jahre (Abb. 2). Das Durchschnittsalter lag bei 59,52 Jahren (18–97 Jahre, Median 60 Jahre). Der Altersdurchschnitt betrug für die Männer 59 (±8,33) Jahre und für die Frauen 61 (±8,15) Jahre. Im Patientenkollektiv waren 26 % älter als 65 Jahre. Im Alter zwischen 50 und 65 waren 61 % und 10 % waren zwischen 40 und 49 Jahre alt. Lediglich 3 % der Patienten, welche eine RM-Rekonstruktion erhielten, waren jünger als 40 Jahre (Abb. 2). Das Durchschnittsalter lag bei 59,52 Jahren (18–97 Jahre, Median 60 Jahre). Der Altersdurchschnitt betrug für die Männer 59 (±8,33) Jahre und für die Frauen 61 (±8,15) Jahre. Vergleich Altersverteilung Studienpopulation und KldB 2010 (2020) Bei der Altersstruktur zwischen den einzelnen Bereichen unserer Studienpopulation nach den aktuellen Daten der KldB 2010 (2020) ließ sich kein signifikanter Unterschied nachweisen (Abb. 3). Bei der Altersstruktur zwischen den einzelnen Bereichen unserer Studienpopulation nach den aktuellen Daten der KldB 2010 (2020) ließ sich kein signifikanter Unterschied nachweisen (Abb. 3). Erwerbsstatus und Geschlecht Es waren 844 Patienten im arbeitsfähigen Alter (< 65 Jahre), hiervon gingen 597 einer Beschäftigung nach. Aufgrund von Elternzeit, Arbeitslosigkeit oder EU-Rente gingen 247 keiner Tätigkeit nach. Unter den Berufstätigen fanden sich rund 65 % Männer und 35 % Frauen. Der Altersdurchschnitt der aktuell Beschäftigten lag bei Männern bei 54 Jahren und 55 Jahren bei Frauen. Acht Befragte machten widersprüchliche Angaben, welchen Beruf sie ausübten und konnten nicht eingeschlossen werden (Tab. 2).AnzahlAlterLand‑, Forst- und Tierwirtschaft und Gartenbau3155Rohstoffgewinnung, Produktion und Fertigung11253Bau, Architektur, Vermessung und Gebäudetechnik13355Naturwissenschaft, Geografie und Informatik1056Verkehr, Logistik, Schutz und Sicherheit10555Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus6655Unternehmensorganisation, Buchhaltung, Recht und Verwaltung4254Gesundheit, Soziales, Lehre und Erziehung7955Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung1158Militär00 Es waren 844 Patienten im arbeitsfähigen Alter (< 65 Jahre), hiervon gingen 597 einer Beschäftigung nach. Aufgrund von Elternzeit, Arbeitslosigkeit oder EU-Rente gingen 247 keiner Tätigkeit nach. Unter den Berufstätigen fanden sich rund 65 % Männer und 35 % Frauen. Der Altersdurchschnitt der aktuell Beschäftigten lag bei Männern bei 54 Jahren und 55 Jahren bei Frauen. Acht Befragte machten widersprüchliche Angaben, welchen Beruf sie ausübten und konnten nicht eingeschlossen werden (Tab. 
2).AnzahlAlterLand‑, Forst- und Tierwirtschaft und Gartenbau3155Rohstoffgewinnung, Produktion und Fertigung11253Bau, Architektur, Vermessung und Gebäudetechnik13355Naturwissenschaft, Geografie und Informatik1056Verkehr, Logistik, Schutz und Sicherheit10555Kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus6655Unternehmensorganisation, Buchhaltung, Recht und Verwaltung4254Gesundheit, Soziales, Lehre und Erziehung7955Sprach‑, Literatur‑, Geistes‑, Gesellschafts- und Wirtschaftswissenschaften, Medien, Kunst, Kultur und Gestaltung1158Militär00 Erwerbsstatus, Geschlecht, RM-Läsion im Vergleich zur erwerbstätigen Gesamtbevölkerung nach KldB 2010 (2020) In der Patientenklientel zeigte sich eine signifikante Häufung von Männern (m = 382, w = 207, p < 0,001). Nach der Klassifikation der Berufe entfielen auf den Bereich Bau, Architektur, Vermessung und Gebäudetechnik die meisten Arbeitnehmer mit RM-Läsionen mit 23 % (133/589) und den meisten männlichen Arbeitenden mit 32 % (120/382), gefolgt von Rohstoffgewinnung, Produktion und Fertigung mit 19 % (n = 112/589) und dem zweithöchsten Männeranteil mit 23 % (89/382), auf Platz 3 folgte der Bereich Verkehr, Logistik, Schutz und Sicherheit mit 18 % (105/589) und dem dritthöchsten Männeranteil mit 21 % (80/382). Der höchste Frauenanteil mit RM-Läsion fand sich im Bereich Gesundheit, Soziales, Lehre und Erziehung. Die Altersstruktur der Gruppen unterteilt nach Klassifikation der Berufe zeigte keine signifikanten Unterschiede (p = 0,493). Eine Auflistung wird in Tab. 2 und Abb. 4 dargestellt. In der Patientenklientel zeigte sich eine signifikante Häufung von Männern (m = 382, w = 207, p < 0,001). Nach der Klassifikation der Berufe entfielen auf den Bereich Bau, Architektur, Vermessung und Gebäudetechnik die meisten Arbeitnehmer mit RM-Läsionen mit 23 % (133/589) und den meisten männlichen Arbeitenden mit 32 % (120/382), gefolgt von Rohstoffgewinnung, Produktion und Fertigung mit 19 % (n = 112/589) und dem zweithöchsten Männeranteil mit 23 % (89/382), auf Platz 3 folgte der Bereich Verkehr, Logistik, Schutz und Sicherheit mit 18 % (105/589) und dem dritthöchsten Männeranteil mit 21 % (80/382). Der höchste Frauenanteil mit RM-Läsion fand sich im Bereich Gesundheit, Soziales, Lehre und Erziehung. Die Altersstruktur der Gruppen unterteilt nach Klassifikation der Berufe zeigte keine signifikanten Unterschiede (p = 0,493). Eine Auflistung wird in Tab. 2 und Abb. 4 dargestellt. RM-Läsionen in der Studienpopulation im Vergleich zur Gesamtbevölkerung nach KldB 2010 (2020) Verglichen mit der Gesamtbevölkerung (Tab. 1) zeigen sich Unterschiede in der Häufigkeit von RM-Läsionen in Bezug auf die Häufigkeit der entsprechenden Berufsklassifikationen (Tab. 3). 25 % der Erwerbstätigen mit sozialversicherungspflichtigem Beruf gingen einer Tätigkeit aus dem Bereich kaufmännische Dienstleistungen, Warenhandel, Vertrieb, Hotel und Tourismus nach, hingegen zeigten in unserem Patientenklientel nur 11 % eine RM-Läsion. In den Berufen Bau, Architektur, Vermessung und Gebäudetechnik arbeiten 9 % der Bevölkerung, allerdings entfielen 23 % der RM-Läsionen unseres Patientenklientels auf diese Berufszweige. Anhand der Vergleiche der Patienten mit der Bevölkerung ergab sich eine signifikant höhere Erkrankungsrate in den Bereichen Land‑, Forst- und Tierwirtschaft und Gartenbau, Bau, Architektur, Vermessung und Gebäudetechnik, Verkehr, Logistik, Schutz und Sicherheit und Unternehmensorganisation, Buchhaltung, Recht und Verwaltung. 
A significantly reduced risk was found in science, geography and computer science; commercial services, goods trade, distribution, hotel and tourism; health, social affairs, teaching and education; and language, literature, humanities, social sciences and economics, media, art, culture and design.
Tab. 3: Share of the employed general population vs share of the cohort with a rotator cuff lesion per occupational area (* significant difference):
Agriculture, forestry, animal husbandry and horticulture: 3% vs 5%, p = 0.003*
Raw materials extraction, production and manufacturing: 21% vs 19%, p = 0.312
Construction, architecture, surveying and building services engineering: 9% vs 23%, p < 0.001*
Science, geography and computer science: 4% vs 2%, p = 0.015*
Transport, logistics, protection and security: 10% vs 18%, p < 0.001*
Commercial services, goods trade, distribution, hotel and tourism: 25% vs 11%, p < 0.001*
Business organization, accounting, law and administration: 5% vs 7%, p < 0.001*
Health, social affairs, teaching and education: 19% vs 13%, p < 0.001*
Language, literature, humanities, social sciences and economics, media, art, culture and design: 3% vs 2%, p = 0.091*
Military: 1% vs 0%, p = 0.212
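The exact statistical test behind the p-values in Tab. 3 is not restated in this excerpt. As a minimal sketch of one plausible way to reproduce such a cohort-versus-population comparison, the snippet below runs a two-sided binomial test of each area's patient count (Tab. 2, out of the 589 classifiable employed patients) against that area's share of the employed general population (Tab. 3). The choice of test, and the variable names, are assumptions for illustration only.

```python
# Sketch: cohort occupational distribution vs general-population shares.
# Assumption: a two-sided binomial test per area; the article's own statistical
# procedure may differ (e.g., a chi-square test).
from scipy.stats import binomtest

N_EMPLOYED = 589  # employed patients with an RC lesion and a codable occupation

# occupational area: (patients in cohort, share of employed general population)
areas = {
    "Agriculture, forestry, animal husbandry and horticulture": (31, 0.03),
    "Raw materials extraction, production and manufacturing": (112, 0.21),
    "Construction, architecture, surveying and building services": (133, 0.09),
    "Transport, logistics, protection and security": (105, 0.10),
    "Commercial services, goods trade, distribution, hotel and tourism": (66, 0.25),
}

for name, (k, p_pop) in areas.items():
    res = binomtest(k, N_EMPLOYED, p_pop, alternative="two-sided")
    print(f"{name}: cohort {k / N_EMPLOYED:.0%} vs population {p_pop:.0%}, "
          f"p = {res.pvalue:.4f}")
```

With these inputs the construction area, for example, comes out far below p = 0.001 (133 observed vs roughly 53 expected), in line with the direction of the results in Tab. 3.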
Influence of gender in relation to the KldB 2010 (2020): When subgroups were formed by gender, men showed significantly higher disease rates in construction, architecture, surveying and building services engineering and in transport, logistics, protection and security. In contrast, women more often worked in commercial services, goods trade, distribution, hotel and tourism and in health, social affairs, teaching and education, occupational groups that carry a significantly lower risk of RC lesions.
RC lesions in the statutory accident insurance (GUV) procedure: Of the 589 patients with RC lesions, 10% (61 cases) were recognized by the responsible employers' liability insurance association (Berufsgenossenschaft); 84% of the recognized cases were male workers. In descending order of frequency, the affected occupational groups were trade and transport (31%; 19/61), construction (25%; 15/61), and other services (20%; 12/61).
Discussion: Occupation can influence the development of musculoskeletal disorders of the upper extremity. In addition to occupation-specific factors, health-related risks can be identified. Within an occupational group, a worker's gender can affect the extent of musculoskeletal disorders of the upper extremity and the body's response to particular work processes, and thus the risk of disease. Several studies have examined when a return to work is possible after an RC lesion or its surgical treatment [18, 19]. Analyses of work-related factors leading to RC lesions that use recognized occupational classification systems, or epidemiological studies of the occurrence of RC lesions in relation to the occupation performed, are rarely described in the literature. The literature on work-related factors and musculoskeletal disorders of the upper extremity usually pools different conditions, among which the RC lesion is prominent [20, 21]. Comparing the frequency of occupational groups within the working-age population with the frequency of those occupations among working-age patients with RC lesions reveals an increased prevalence for certain occupational exposures.
From the literature, RC lesions are known to be associated with constrained postures with raised arms and overhead work [22], with repetitive monotonous work in flexion and abduction using tools [23], and with lifting medium and heavy loads above shoulder level, pulling or pushing weights with the arms, and vibration [24]. A higher rate of RC lesions is therefore to be expected in occupational branches in which shoulder-loading activities are performed frequently. This corresponds closely with our finding that, relative to the frequency of the occupational branch, RC lesions were most common in the physically more demanding areas of construction, architecture, surveying and building services engineering and of transport, logistics, protection and security. These occupational branches are captured in the KldB 2010 (2020) and provide evidence that RC injuries can be influenced by occupational activity over and above the general degenerative process. The retrospective study presented here reveals significant associations with specific occupational branches. Age played no role as an influencing parameter in the present study, as there were no significant age differences between the occupational branches; previous international studies have been divergent on this point [25–27]. Gender as a health-related risk in the context of RC lesions has so far played a subordinate role in scientific investigations. Rolf et al. examined whether RC lesions are an occupational disease [6]; their data suggested that occupational exposure increases the risk of an RC lesion or can lead to a clinical RC rupture. However, only 472 men were included, so gender-specific factors in the occupational context played no role in that study. Other international studies on this topic report similar findings [28, 29]. A study by van Dyck et al. described gender-specific factors in the occupational context in the development of RC lesions [31]; they showed that, in the health sector, occupation-specific sickness absence due to RC lesions was higher among women than among their male colleagues. From these data follows the assumption that, in addition to occupational dependency, further health-specific risks can influence the development of RC lesions [32]. It is plausible to assume occupation-specific differences in health-related risks as a consequence of workplace exposure: employees in certain occupational sectors face a higher risk of workplace injury than others [33]. Occupation also matters in that, for one and the same health impairment, job performance can be restricted to different degrees depending on the occupation, entailing specific secondary health-related risks [26]. Thus, persons with an RC lesion who work in an occupational sector with little load on the upper extremity show a shorter total duration of incapacity for work than persons whose occupation involves prolonged or heavy loading of the upper extremity; the difference in the duration of incapacity amounts to several weeks [10].
Further influences, some acting in different directions and not exclusively specific to occupational groups, arise from selection effects or from working conditions that are only indirectly relevant to health. These include the so-called "healthy worker effect": when people of above-average physical health are hired for particularly demanding jobs, low disease rates can result in certain occupational groups despite high exposure. Selection effects can also arise from options for early retirement and from differences in collectively agreed continued pay during illness [34]. Worker-dependent interindividual factors, such as personal competence and responsibility in the occupation performed, can also influence the development of disorders of the upper extremity, as can the perceived threat of job loss, which varies by occupation and over time, as well as job satisfaction and the working climate [35]. The associations described in the literature between loading activities of the upper extremity and the epidemiological distribution of RC lesions across occupational branches in the present study suggest a concentration of shoulder-loading activities particularly in construction, architecture, surveying and building services engineering and in transport, logistics, protection and security. These areas should be examined more closely with respect to shoulder-loading activities in order to define predisposing factors and, where appropriate, to target prevention or the definition of an occupational disease.
Limitations: Limiting factors were the missing information on how long the occupational activity had been performed and on the patients' occupational history. Information on the individual loading activities per working day, such as overhead work, repetitive work, or exposure to vibration, was also lacking. A selection bias is present insofar as patients with an RC lesion that was no longer reconstructable, or that was treated conservatively, could not be included in the analysis. The study did not further subdivide RC lesions by degree of fatty infiltration or by the tendons involved and tear types, and histology was not recorded to draw conclusions about the pathology of the lesion. A complete discussion of the occupation-group-specific influence on the development of musculoskeletal disorders of the upper extremity would have to weigh all of these possible influences. Nevertheless, the corresponding analyses reveal patterns that can be meaningfully interpreted even without claiming a complete discussion. This work addresses an interesting socioeconomic question regarding the development of RC lesions.
Practical conclusion:
Occupation can influence the development of rotator cuff (RC) lesions.
Specific secondary health-related risks can also influence the development of RC lesions.
The associations between loading activities of the upper extremity and the epidemiological distribution of RC lesions across occupational branches in the present study suggest a concentration of shoulder-loading activities.
Occupational sectors with shoulder-loading activities should be examined more closely in order to define predisposing factors and, where appropriate, to target prevention or the definition of an occupational disease.
Symptoms of the upper extremity in persons working in predisposed occupational branches should be assessed in a differentiated manner.
Background: Diseases of the musculoskeletal system of the upper extremity are the reason for increasing sickness-related absenteeism among the working population. Methods: We included 1070 patients who underwent surgical rotator cuff (RC) reconstruction for an RC lesion between 2016 and 2019. The relevant data were retrospectively documented from the hospital information system. The patients' occupations were classified according to the Classification of Occupations 2010 (KldB 2010) and compared with routinely recorded and anonymized freely available data (Federal Statistical Office, Federal Employment Agency). Results: Of the 1070 patients, 844 were of working age. The age structure of the individual areas showed no significant differences. Based on the comparisons of patient data with the population, significantly higher RC injury rates were found in agriculture, forestry, animal husbandry and horticulture (p = 0.003); construction, architecture, surveying and building services engineering (p < 0.001); transport, logistics, protection, and security (p < 0.001) and business organization, accounting, law, and administration (p < 0.001). There was a significantly reduced risk in science, geography and computer science (p = 0.015); commercial services, goods trade, distribution, hotel and tourism (p < 0.001); and health, social affairs, teaching and education (p < 0.001). Conclusions: The prevalence of RC lesions shows a statistical correlation with the occupation performed, depending on the occupational branch. In addition to occupational dependency, gender-specific work factors play a role. Shoulder pain in gainful employment should be considered in a more differentiated way. This should enable preventive measures to be taken in a targeted manner.
null
null
7,123
342
[ 100, 331, 1473, 191, 240, 47, 170, 2061, 28, 104, 32, 157, 191, 326, 77, 73, 977, 180 ]
19
[ "und", "der", "die", "von", "mit", "den", "rm", "im", "sich", "auf" ]
[ "arbeitsplatz bisherige erkenntnisse", "der berufsgruppenspezifischen berufsabhängigkeit", "einschränkung berufsabhängig", "beruflichen tätigkeit auf", "berufsklassifikationssysteme zu arbeitsassoziierten" ]
null
null
null
null
null
null
null
[CONTENT] Beruf | Prävalenz | Retrospektive Studie | Rotatorenmanschette | Schulterschmerzen | Occupation | Prevalence | Retrospective studies | Rotator cuff | Shoulder pain [SUMMARY]
[CONTENT] Beruf | Prävalenz | Retrospektive Studie | Rotatorenmanschette | Schulterschmerzen | Occupation | Prevalence | Retrospective studies | Rotator cuff | Shoulder pain [SUMMARY]
null
null
null
null
[CONTENT] Humans | Musculoskeletal Diseases | Musculoskeletal System | Occupational Diseases | Retrospective Studies | Upper Extremity [SUMMARY]
[CONTENT] Humans | Musculoskeletal Diseases | Musculoskeletal System | Occupational Diseases | Retrospective Studies | Upper Extremity [SUMMARY]
null
null
null
null
[CONTENT] arbeitsplatz bisherige erkenntnisse | der berufsgruppenspezifischen berufsabhängigkeit | einschränkung berufsabhängig | beruflichen tätigkeit auf | berufsklassifikationssysteme zu arbeitsassoziierten [SUMMARY]
[CONTENT] arbeitsplatz bisherige erkenntnisse | der berufsgruppenspezifischen berufsabhängigkeit | einschränkung berufsabhängig | beruflichen tätigkeit auf | berufsklassifikationssysteme zu arbeitsassoziierten [SUMMARY]
null
null
null
null
[CONTENT] und | der | die | von | mit | den | rm | im | sich | auf [SUMMARY]
[CONTENT] und | der | die | von | mit | den | rm | im | sich | auf [SUMMARY]
null
null
null
null
[CONTENT] tätigkeiten | zu | die | läsionen haben | rm läsionen haben | der | einen einfluss auf die | schulterbelastenden | sollten | einen einfluss auf [SUMMARY]
[CONTENT] und | der | die | mit | von | den | rm | im | 61 | sich [SUMMARY]
null
null
null
null
[CONTENT] RC ||| ||| ||| [SUMMARY]
[CONTENT] ||| 1070 | between 2016 and 2019 ||| ||| the Classification of Occupations 2010 | 2010 | Federal Statistical Office | Federal Employment Agency ||| 1070 | 844 ||| ||| 0,003 | 0,001 | 0.001 | 0,001 ||| 0.015 | 0,001 | 0,001 ||| RC ||| ||| ||| [SUMMARY]
null
Estrogen augmented visceral pain and colonic neuron modulation in a double-hit model of prenatal and adult stress.
34497435
Chronic stress during pregnancy may increase visceral hyperalgesia in the offspring in a sex-dependent way, and adding adult stress in the offspring is expected to further increase this sensitivity. Based on evidence implicating estrogen in exacerbating visceral hypersensitivity in female rodents in preclinical models, we predicted that chronic prenatal stress (CPS) combined with chronic adult stress (CAS) would maximize visceral hyperalgesia, and that estrogen plays an important role in colonic hyperalgesia.
BACKGROUND
We established a CPS plus CAS rodent model in which a balloon was used to distend the colorectum. Single-fiber recording in vivo and patch-clamp experiments in vitro were used to monitor colonic neuron activity. Reverse transcription-polymerase chain reaction, western blot, and immunofluorescence were used to study the effects of CPS and CAS on colon primary afferent sensitivity. We used ovariectomy and letrozole, respectively, to reduce estrogen levels in female rats in order to assess the role of estrogen in female-specific enhanced primary afferent sensitization.
METHODS
Spontaneous activity and single-fiber activity were significantly greater in females than in males. The enhanced sensitization in female rats came mainly from low-threshold neurons. CPS significantly increased single-unit afferent fiber activity in L6-S2 dorsal roots in response to colorectal distention, and activity was further enhanced by CAS. In addition, the excitability of colon-projecting dorsal root ganglion (DRG) neurons increased in CPS + CAS rats and was associated with a decrease in transient A-type K+ currents. Compared with ovariectomy, treatment with the aromatase inhibitor letrozole significantly reduced estrogen levels in female rats, confirming the gender difference. Moreover, mice treated with letrozole had decreased colonic DRG neuron excitability. The intrathecal infusion of estrogen increased brain-derived neurotrophic factor (BDNF) protein levels and contributed to the response to visceral pain. Western blotting showed that nerve growth factor protein was upregulated in CPS + CAS mice.
RESULTS
This study adds to the evidence that estrogen-dependent sensitization of primary afferent colon neurons is involved in the development of chronic stress-induced visceral hypersensitivity in female rats.
CONCLUSION
[ "Animals", "Colon", "Estrogens", "Female", "Ganglia, Spinal", "Hyperalgesia", "Male", "Mice", "Neurons", "Pregnancy", "Rats", "Rats, Sprague-Dawley", "Visceral Pain" ]
8384739
INTRODUCTION
Visceral pain of colonic origin is the most prominent symptom in irritable bowel syndrome (IBS) patients[1]. Female IBS patients report more severe pain that occurs more frequently and with longer episodes than in male patients[1,2]. The ratio of female to male IBS is about 2:1 among patients seen in medical clinics[3]. Moreover, females have a higher prevalence of IBS co-morbidities such as anxiety and depression[4,5] and are more vulnerable to stress-induced exacerbation of IBS symptoms compared with males[3,6,7]. Clinical studies show that early life adverse experiences are risk factors for the development of IBS symptoms, including visceral pain and ongoing chronic stress, especially abdominal pain[8-10]. These factors contribute to the development of visceral hypersensitivity, a key component of the IBS symptom complex and one that may be responsible for symptoms of pain[11,12]. Our previous research found that the female offspring of mothers subjected to chronic prenatal stress (CPS) had a markedly greater visceral sensitivity than their male littermates following challenge by another chronic adult stress (CAS) protocol. A critical molecular event in the development of this female-enhanced visceral hypersensitivity is upregulation of brain-derived neurotrophic factor (BDNF) expression in the lumbar-sacral spinal cord of female CPS + CAS rats[13]. However, the neurophysiological changes underlying the enhanced female-specific visceral hypersensitivity and the role of hormone in the development of stress-induced visceral hypersensitivity are not well understood. Visceral hypersensitivity in IBS involves abnormal changes in neurophysiology throughout the brain-gut axis. In IBS, there is evidence for sensitization of primary afferents to jejunal distention and electrical stimulation[14], and there is evidence for increased sensitivity of lumbar splanchnic afferents[15,16]. In animal models of either early life adverse events or adult stress-induced visceral hypersensitivity[17], there is evidence of colon primary afferent sensitization. However, the studies were performed in male rodents. Therefore, in this study, we established a CPS and CAS rodent model to analyze the impact on female colon afferent neuron function and the role of estrogen. Our hypothesis was that female CPS offspring subjected to chronic stress as adults would exhibit greater colonic dorsal root ganglion (DRG) neuron sensitization compared with their male littermates, and that the enhanced visceral sensitization and primary afferent sensitization in females was estrogen dependent.
MATERIALS AND METHODS
Animals: The Institutional Animal Care and Use Committee of the University of Texas Medical Branch at Galveston, TX approved all animal procedures. Experiments were performed on pregnant Sprague Dawley rats and their 8-wk-old to 16-wk-old male and female offspring. Rats were housed in individual cages with access to food and water in a room with controlled conditions (22 ± 2 °C, relative humidity of 50% ± 5%), and a 12 h light/12 h dark cycle.
CPS and CAS models: Pregnant dams were subjected to a CPS protocol that consisted of a random sequence of twice-daily applications of one of three stress sessions, a 1-h water-avoidance, 45-min cold-restraint, or a 20-min forced swim starting on day 6 and continuing until delivery on day 21. Male and female offspring of the stressed dams were designated CPS rats. Control dams received sham stress and their offspring were designated control rats. As adults at 8-16 wk of age, control and prenatally stressed offspring were challenged by the same CAS protocol for 9 d. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily letrozole treatment was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol. A schematic diagram of the study procedures is shown in Figure 1A.
Figure 1: Primary afferent responses to colorectal distention. A: Chronic prenatal stress (CPS) plus chronic adult stress (CAS) model. Pregnant dams were subjected to prenatal stress from day 11 of gestation. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily Letrozole was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol; B: Spontaneous activity (SA) of single afferent units in male and female control rats (n = 70 fibers in 6 rats in each group, t-test, aP < 0.05); C: Average response to graded colorectal distention (CRD) of 56 afferent fibers in 6 male and 70 afferent fibers in 6 female control rats; two-way analysis of variance (ANOVA; aP < 0.05 vs the same pressure male group); D: Responses of low-threshold (LT) fibers to CRD in 42 fibers in 6 male rats and 40 fibers in 6 female control rats (ANOVA, aP < 0.05 vs the same pressure male group); E: Responses of high-threshold (HT) afferent fibers to CRD in 14 fibers in 6 male and 29 fibers in 6 female control rats; F: Effects of CAS on afferent fiber responses to CRD from 59 fibers in 6 control and 99 fibers in 6 CPS female rats; (two-way ANOVA, aP < 0.05 vs the same pressure control group, bP < 0.05 vs the same pressure CPS group); G: Effects of CAS on afferent fiber responses to CRD in control and CPS male rats (n = 6 rats, 57 fibers for control and 95 fibers for CPS female group; two-way ANOVA, aP < 0.05 vs the same pressure-control group).
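To make the stress schedule described above easier to follow, the snippet below encodes the CPS protocol (a random sequence of twice-daily sessions drawn from water avoidance, cold restraint, and forced swim, applied from the protocol's start day until delivery on day 21) as a small generator. The session names and the independent uniform draw per session are assumptions for illustration; the paper states only that the sequence was random, and the methods text gives gestational day 6 as the start while the figure legend gives day 11.

```python
# Illustrative sketch of the chronic prenatal stress (CPS) schedule described above.
# Assumes an independent uniform draw per session; not the authors' actual code.
import random

SESSIONS = {
    "water avoidance": "1 h",
    "cold restraint": "45 min",
    "forced swim": "20 min",
}

def cps_schedule(first_day=6, last_day=21, sessions_per_day=2, seed=0):
    # first_day=6 follows the methods text; the figure legend gives day 11.
    rng = random.Random(seed)
    return {day: [rng.choice(list(SESSIONS)) for _ in range(sessions_per_day)]
            for day in range(first_day, last_day + 1)}

for day, picks in list(cps_schedule().items())[:3]:
    print(f"gestational day {day}: " + ", ".join(f"{s} ({SESSIONS[s]})" for s in picks))
```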
Rat treatment: Before OVX or letrozole treatment, vaginal smears were used to identify the estrus cycle phase. OVX or sham surgery was performed on female prenatal-stress offspring on day 56. The aromatase inhibitor letrozole [4,4’-(1H-1,2,4-triazol-l-yl-methylene)-bis-benzonitrile] (Novartis), 1.0 mg/kg, was orally administered in the experimental group and vehicle (hydroxypropyl cellulose 0.3% in water) was given in the control group once daily for 14 d. Direct transcutaneous intrathecal injections of estrogen and letrozole were performed as described by Mestre et al[18].
In vivo single fiber recording of L6-S2 DRG rootlets: Multiunit afferent discharges were recorded from the distal ends of L6-S2 dorsal rootlets decentralized close to their entry into the spinal cord. A bundle of multiunit fibers was distinguished into 2-6 single units off-line using wave mark template matching in Spike 2 software that differentiates spikes by shape and amplitude. Colonic afferent fibers were identified by their response to graded colorectal distention (CRD). A balloon was used to distend the colorectum. Isoflurane, 2.5%, followed by 50 mg/kg intraperitoneal sodium pentobarbital induced general anesthesia that was maintained by infusing a mixture of pentobarbital sodium + pancuronium bromide + saline by intravenous infusion through the tail vein. The adequacy of anesthesia was confirmed by the absence of corneal and pupillary reflexes and stability of the end-tidal CO2 level. A tracheotomy tube connected to a ventilator system provided a mixture of room air and oxygen. Expired CO2 was monitored and maintained at 3.5%. Body temperature was monitored and maintained at 37 °C by a servo-controlled heating blanket. A laminectomy from T12 to S2 exposed the spinal cord. The head was stabilized in a stereotaxic frame.
In vitro patch clamp recordings in colonic DRG neurons:
Retrograde fluorescence label injections: Labeling of colon-projecting DRG neurons was performed as previously described[13]. Under general 2% isoflurane anesthesia, the lipid soluble fluorescent dye, 1,1’-dioleyl-3,3,3’,3’-tetramethylindocarbocyanine methane-sulfonate (9-DiI, Invitrogen, Carlsbad, CA) was injected (50 mg/mL) into the muscularis externa on the exposed distal colon in 8 to 10 sites (2 μL each site). To prevent leakage, the needle was kept in place for 1 min following each injection.
Dissociation and culture of DRG neurons: Rats were deeply anesthetized with isoflurane followed by decapitation. Lumbosacral (L6–S2) DRGs were collected in ice cold and oxygenated dissecting solution, containing (in mM) 130 NaCl, 5 KCl, 2 KH2PO4, 1.5 CaCl2, 6 MgSO4, 10 glucose, and 10 HEPES, pH 7.2 (305 mOsm). After removal of the connective tissue, the ganglia were transferred to a 5 mL dissecting solution containing collagenase D (1.8 mg/mL; Roche) and trypsin (1.0 mg/mL; Sigma, St Louis, MO), and incubated for 1.5 h at 34.5 °C. DRGs were then taken from the enzyme solution, washed, and put in 0.5-2 mL of the dissecting solution containing DNase (0.5 mg/mL; Sigma). Cells were subsequently dissociated by gentle trituration 10 to 15 times with fire-polished glass pipettes and placed on acid-cleaned glass coverslips. The dissociated DRG neurons were kept in 1 mL DMEM (with 10% FBS) in an incubator (95% O2/5% CO2) at 37 °C overnight.
Whole-cell patch clamp recordings from dissociated DRG neurons: Before each experiment, a glass coverslip with DRG neurons was transferred to a recording chamber perfused (1.5 mL/min) with external solution containing (in mM): 130 NaCl, 5 KCl, 2 KH2PO4, 2.5 CaCl2, 1 MgCl2, 10 HEPES, and 10 glucose, pH adjusted to 7.4 with NaOH (300 mOsm) at room temperature. Recording pipettes, pulled from borosilicate glass tubing, with resistance of 1-5 MΩ, were filled with solution containing (in mM): 100 KMeSO3, 40 KCl, and 10 HEPES, pH 7.25 adjusted with KOH (290 mOsm). DiI-labeled neurons were identified by fluorescence microscopy. Whole-cell currents and voltage were recorded from DiI-labeled neurons using a Dagan 3911 patch clamp amplifier. Data were acquired and analyzed by pCLAMP 9.2 (Molecular Devices, Sunnyvale, CA). The currents were filtered at 2–5 kHz and sampled at 50 or 100 s per point. While still under voltage clamp, the Clampex Membrane Test program (Molecular Devices) was used to determine membrane capacitance (Cm) and membrane resistance (Rm), during a 10 ms, 5 mV depolarizing pulse from a holding potential of −60 mV. The configuration was then switched to current clamp (0 pA) to determine other electrophysiological properties. After stabilizing for 2–3 min, the resting membrane potential was measured. The minimum acceptable resting membrane potential was −40 mV. Spontaneous activity (SA) was then recorded over two 30 s periods separated by 60 s without recording, as described by Bedi et al[19].
Transient A-type K+ current (IA) recording method in patch studies: To record voltage-gated K+ current (Kv), Na+ in control external solution was replaced with equimolar choline and the Ca2+ concentration was reduced to 0.03 mM to suppress Ca2+ currents and to prevent Ca2+ channels becoming Na+ conducting. The reduced external Ca2+ would also be expected to suppress Ca2+-activated K+ currents. The current traces of Kv in DRG neurons were measured at different holding potentials. The membrane potential was held at −100 mV and voltage steps were from −40 to +30 mV to record the total Kv. The membrane potential was held at −50 mV to record the sustained Kv. The IA currents were calculated by subtracting the sustained current from the total current. The current density (in pA/pF) was calculated by dividing the current amplitude by cell membrane capacitance.
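As a worked illustration of the subtraction protocol just described, the sketch below computes IA as the difference between the total and sustained K+ currents and converts amplitudes to current densities. The array names and example values are hypothetical; only the arithmetic (IA = total − sustained, density = current / Cm) follows the text.

```python
# Minimal sketch of the IA isolation arithmetic described above.
# Hypothetical example values; units: current in pA, capacitance in pF.
import numpy as np

total_kv = np.array([850.0, 1210.0, 1630.0])    # peak current, held at -100 mV
sustained_kv = np.array([390.0, 640.0, 905.0])  # sustained current, held at -50 mV
cm = 28.5                                       # membrane capacitance of the cell (pF)

ia = total_kv - sustained_kv                    # transient A-type component (pA)
ia_density = ia / cm                            # current density (pA/pF)

for amp_pa, dens in zip(ia, ia_density):
    print(f"IA = {amp_pa:.0f} pA -> {dens:.1f} pA/pF")
```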
Real time reverse transcription-polymerase chain reaction (RT-PCR): Total RNA was extracted using RNeasy Mini Kits (QIAGEN, Valencia, CA). One microgram of total RNA was reverse-transcribed using the SuperScriptTM III First-Strand Synthesis System. PCR assays were performed on a StepOnePlus thermal cycler with 18S as the normalizer, using Applied Biosystems primer/probe set Rn02531967_s1 directed against the translated exon IX. Fold-change relative to control was calculated using the ΔΔCt method (Applied Biosystems).
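For readers unfamiliar with the ΔΔCt calculation mentioned above, a minimal sketch is given below. It assumes the standard Livak formulation (fold change = 2^−ΔΔCt) with 18S as the reference; the Ct values are hypothetical and not taken from the study.

```python
# Sketch of the comparative Ct (ΔΔCt) fold-change calculation, assuming the
# standard Livak method: fold_change = 2 ** -(ΔCt_sample - ΔCt_control).
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample       # ΔCt of the treated sample
    d_ct_control = ct_target_control - ct_ref_control    # ΔCt of the control sample
    dd_ct = d_ct_sample - d_ct_control                    # ΔΔCt
    return 2 ** (-dd_ct)

# Hypothetical Ct values (target gene vs 18S reference):
print(fold_change(24.1, 12.0, 25.6, 12.1))  # ≈ 2.6-fold increase vs control
```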
Western blot: Samples were lysed in RIPA buffer containing protease inhibitor cocktail and phenylmethanesulfonyl fluoride. Lysates were incubated for 30 min on ice and then centrifuged at 10 000 × g for 10 min at 4 °C. The protein concentration in the supernatant was determined using bicinchoninic acid (BCA) assay kits with bovine serum albumin as a standard. Equal amounts of protein (30 μg per lane) were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to nitrocellulose membranes (Bio-Rad, United States). The membranes were blocked in Li-Cor blocking buffer for 1 h at room temperature and then incubated with primary antibodies. BDNF antibody (Santa Cruz Biotechnologies, Santa Cruz, CA) was used at 1:200 dilution; nerve growth factor (NGF) antibody (Abcam, MA) was used at 1:1000 dilution; β-actin antibody (Sigma Aldrich, St Louis, MO) was used at 1:5000 dilution. The secondary antibodies were donkey anti-rabbit Alexa Fluor 680 (Invitrogen) and goat anti-mouse IRDye 800 (Rockland). Images were acquired and band intensities measured using a Li-Cor Odyssey system (Li-Cor, Lincoln, NE).
Immunofluorescence: Frozen sections of colon tissue from control, CAS, CPS and CPS + CAS female rats were mounted on glass slides, and rehydrated in phosphate buffered saline at room temperature. The slides were treated for antigen retrieval and blocked with 10% normal goat serum diluted in 0.3% phosphate buffered saline-Triton for 1 h, and then incubated with NGF primary antibody in antibody diluent (Renoir Red, Biocare Medical, Concord, CA) at 4 °C overnight. The slides were exposed to fluorescent dye-conjugated secondary antibody for 2 h at room temperature, counterstained with 4',6-diamidino-2-phenylindole and coverslipped. Images were taken in fluorescence mode on an Olympus laser scanning confocal microscope and the average signal intensity was calculated by the bundled software.
Serum estradiol and norepinephrine levels: Serum estradiol, adrenocorticotropic hormone (ACTH), and norepinephrine levels were measured using specific enzyme-linked immunosorbent assay kits for each analyte (CSB-E05110r, CSB-E06875r, CSB-E07022, Cusabio Bioteck CO., United States) following the manufacturer’s instructions.
Data analysis: Single fiber responses (impulses/second) to CRD were calculated by subtracting SA from the mean 30 s maximal activity during distension. Fibers were considered responsive if CRD increased their activity to 30% greater than the baseline value. Mechanosensitive single units were classified as high threshold (> 20 mmHg) or low threshold (≤ 20 mmHg) on the basis of their response threshold and profile during CRD. Single fiber activity data were analyzed by analysis of variance with repeated measures; CRD intensity was the repeated factor and the experimental group was the between-group factor. If significant main effects were present, the individual means were compared using the Fisher post-hoc test. All authors had access to the study data and reviewed and approved the final manuscript.
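The response and threshold criteria in the data-analysis paragraph translate directly into a small amount of arithmetic. The sketch below applies them to hypothetical firing rates; the function and variable names are illustrative and not from the original analysis code.

```python
# Sketch of the single-fiber response criteria described above (illustrative values).
def classify_fiber(sa_hz, rates_by_pressure_hz):
    """sa_hz: spontaneous activity; rates_by_pressure_hz: {distention mmHg: mean 30 s rate}."""
    # Response = distention-evoked rate minus spontaneous activity (impulses/second).
    responses = {p: round(rate - sa_hz, 2) for p, rate in rates_by_pressure_hz.items()}
    # Responsive if CRD raised activity to at least 30% above baseline.
    responsive_pressures = [p for p, rate in rates_by_pressure_hz.items()
                            if rate > 1.3 * sa_hz]
    if not responsive_pressures:
        return responses, "non-responsive"
    threshold = min(responsive_pressures)
    fiber_class = "low threshold" if threshold <= 20 else "high threshold"
    return responses, fiber_class

responses, fiber_class = classify_fiber(2.0, {20: 3.1, 40: 6.4, 60: 9.8})
print(fiber_class, responses)  # -> low threshold {20: 1.1, 40: 4.4, 60: 7.8}
```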
null
null
CONCLUSION
The authors would like to acknowledge the essential intellectual contributions of our recently deceased mentor, Dr. Sushil K Sarna. His guidance was essential to the successful completion of these studies. Thanks to Dr. Usman Ali for assistance in preparing and modifying the manuscript.
[ "INTRODUCTION", "Animals", "CPS and CAS models", "Rat treatment", "In vivo single fiber recording of L6-S2 DRG rootlets", "In vitro patch clamp recordings in colonic DRG neurons", "Real time reverse transcription-polymerase chain reaction (RT-PCR)", "Western blot", "Immunofluorescence", "Serum estradiol and norepinephrine levels", "Data analysis", "RESULTS", "Effects of CPS plus CAS on primary afferent responses to CRD in male and female rats", "Increase in excitability of colon-projecting lumbosacral DRG neurons in female CPS + CAS rats", "Effects of CPS and/or CAS on plasma estrogen concentration", "Effects of Letrozole treatment on colon DRG neuron excitability", "Spinal cord BDNF levels regulated by estrogen", "Peripheral NGF level increased in CPS + CAS female rats", "DISCUSSION", "CONCLUSION" ]
[ "Visceral pain of colonic origin is the most prominent symptom in irritable bowel syndrome (IBS) patients[1]. Female IBS patients report more severe pain that occurs more frequently and with longer episodes than in male patients[1,2]. The ratio of female to male IBS is about 2:1 among patients seen in medical clinics[3]. Moreover, females have a higher prevalence of IBS co-morbidities such as anxiety and depression[4,5] and are more vulnerable to stress-induced exacerbation of IBS symptoms compared with males[3,6,7].\nClinical studies show that early life adverse experiences are risk factors for the development of IBS symptoms, including visceral pain and ongoing chronic stress, especially abdominal pain[8-10]. These factors contribute to the development of visceral hypersensitivity, a key component of the IBS symptom complex and one that may be responsible for symptoms of pain[11,12]. Our previous research found that the female offspring of mothers subjected to chronic prenatal stress (CPS) had a markedly greater visceral sensitivity than their male littermates following challenge by another chronic adult stress (CAS) protocol. A critical molecular event in the development of this female-enhanced visceral hypersensitivity is upregulation of brain-derived neurotrophic factor (BDNF) expression in the lumbar-sacral spinal cord of female CPS + CAS rats[13]. However, the neurophysiological changes underlying the enhanced female-specific visceral hypersensitivity and the role of hormone in the development of stress-induced visceral hypersensitivity are not well understood.\nVisceral hypersensitivity in IBS involves abnormal changes in neurophysiology throughout the brain-gut axis. In IBS, there is evidence for sensitization of primary afferents to jejunal distention and electrical stimulation[14], and there is evidence for increased sensitivity of lumbar splanchnic afferents[15,16]. In animal models of either early life adverse events or adult stress-induced visceral hypersensitivity[17], there is evidence of colon primary afferent sensitization. However, the studies were performed in male rodents. Therefore, in this study, we established a CPS and CAS rodent model to analyze the impact on female colon afferent neuron function and the role of estrogen. Our hypothesis was that female CPS offspring subjected to chronic stress as adults would exhibit greater colonic dorsal root ganglion (DRG) neuron sensitization compared with their male littermates, and that the enhanced visceral sensitization and primary afferent sensitization in females was estrogen dependent.", "The Institutional Animal Care and Use Committee of the University of Texas Medical Branch at Galveston, TX approved all animal procedures. Experiments were performed on pregnant Sprague Dawley rats and their 8-wk-old to 16-wk-old male and female offspring. Rats were housed individual cages with access to food and water in a room with controlled conditions (22 ± 2 °C, relative humidity of 50% ± 5%), and a 12 h light/12 h dark cycle.", "Pregnant dams were subjected to a CPS protocol that consisted of a random sequence of twice-daily applications of one of three stress sessions, a 1-h water-avoidance, 45-min cold-restraint, or a 20-min forced swim starting on day 6 and continuing until delivery on day 21. Male and female offspring of the stressed dams were designated CPS rats. Control dams received sham stress and their offspring were designated control rats. 
As adults at 8-16 wk of age, control and prenatally stressed offspring were challenged by the same CAS protocol for 9 d. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily letrozole treatment was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol. A schematic diagram of the study procedures is shown in Figure 1A.\nPrimary afferent responses to colorectal distention. A: Chronic prenatal stress (CPS) plus chronic adult stress (CAS) model. Pregnant dams were subjected to prenatal stress from on day 11 of gestation. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily Letrozole was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol; B: Spontaneous activity (SA) of single afferent units in male and female control rats (n = 70 fibers in 6 rats in each group, t-test, aP < 0.05); C: Average response to graded colorectal distention (CRD) of 56 afferent fibers in 6 male and 70 afferent fibers in 6 female control rats; two-way analysis of variance (ANOVA; aP < 0.05 vs the same pressure male group); D: Responses of low-threshold (LT) fibers to CRD in 42 fibers in 6 male rats and 40 fibers in 6 female control rats (ANOVA, aP < 0.05 vs the same pressure male group); E: Responses of high-threshold (HT) afferent fibers to CRD in 14 fibers in 6 male and 29 fibers in 6 female control rats; F: Effects of CAS on afferent fiber responses to CRD from 59 fibers in 6 control and 99 fibers in 6 CPS female rats; (two-way ANOVA, aP < 0.05 vs the same pressure control group, bP < 0.05 vs the same pressure CPS group); G: Effects of CAS on afferent fiber responses to CRD in control and CPS male rats (n = 6 rats, 57 fibers for control and 95 fibers for CPS female group; two-way ANOVA, aP < 0.05 vs the same pressure-control group). ", "Before OVX or letrozole treatment, vaginal smears were used to identify the estrus cycle phase. OVX or sham surgery was performed on female prenatal-stress offspring on day 56. The aromatase inhibitor letrozole [4,4’-(1H-1,2,4-triazol-l-yl-methylene)-bis-benzonitrile], (Novartis) 1.0 mg/kg was orally administered in the experimental group and vehicle (hydroxypropyl cellulose 0.3% in water) was given in the control group once daily for 14 d. Direct transcutaneous intrathecal injections of estrogen and letrozole were performed as described by Mestre et al[18].", "Multiunit afferent discharges were recorded from the distal ends of L6-S2 dorsal rootlets decentralized close to their entry into the spinal cord. A bundle of multiunit fibers was distinguished into 2-6 single units off-line using wave mark template matching in Spike 2 software that differentiates spikes by shape and amplitude. Colonic afferent fibers were identified by their response to graded colorectal distention (CRD). A balloon was used to distend the colorectum. Isoflurane, 2.5%, followed by 50 mg/kg intraperitoneal sodium pentobarbital induced general anesthesia that was maintained by infusing a mixture of pentobarbital sodium + pancuronium bromide + saline by intravenous infusion through the tail vein. The adequacy of anesthesia was confirmed by the absence of corneal and pupillary reflexes and stability of the end-tidal CO2 level. A tracheotomy tube connected to a ventilator system provided a mixture of room air and oxygen. Expired CO2 was monitored and maintained at 3.5%. 
Body temperature was monitored and maintained at 37 °C by a servo-controlled heating blanket. A laminectomy from T12 to S2 exposed the spinal cord. The head was stabilized in a stereotaxic frame.", "Retrograde fluorescence label injections: Labeling of colon-projecting DRG neurons was performed as previously described[13]. Under general 2% isoflurane anesthesia, the lipid-soluble fluorescent dye 1,1’-dioleyl-3,3,3’,3’-tetramethylindocarbocyanine methane-sulfonate (9-DiI, Invitrogen, Carlsbad, CA) was injected (50 mg/mL) into the muscularis externa of the exposed distal colon at 8 to 10 sites (2 μL per site). To prevent leakage, the needle was kept in place for 1 min after each injection.\nDissociation and culture of DRG neurons: Rats were deeply anesthetized with isoflurane followed by decapitation. Lumbosacral (L6–S2) DRGs were collected in ice-cold, oxygenated dissecting solution containing (in mM) 130 NaCl, 5 KCl, 2 KH2PO4, 1.5 CaCl2, 6 MgSO4, 10 glucose, and 10 HEPES, pH 7.2 (305 mOsm). After removal of the connective tissue, the ganglia were transferred to 5 mL of dissecting solution containing collagenase D (1.8 mg/mL; Roche) and trypsin (1.0 mg/mL; Sigma, St Louis, MO) and incubated for 1.5 h at 34.5 °C. DRGs were then taken from the enzyme solution, washed, and placed in 0.5-2 mL of the dissecting solution containing DNase (0.5 mg/mL; Sigma). Cells were subsequently dissociated by gentle trituration 10 to 15 times with fire-polished glass pipettes and placed on acid-cleaned glass coverslips. The dissociated DRG neurons were kept in 1 mL DMEM (with 10% FBS) in an incubator (95% O2/5% CO2) at 37 °C overnight.\nWhole-cell patch clamp recordings from dissociated DRG neurons: Before each experiment, a glass coverslip with DRG neurons was transferred to a recording chamber perfused (1.5 mL/min) with external solution containing (in mM): 130 NaCl, 5 KCl, 2 KH2PO4, 2.5 CaCl2, 1 MgCl2, 10 HEPES, and 10 glucose, pH adjusted to 7.4 with NaOH (300 mOsm) at room temperature. Recording pipettes, pulled from borosilicate glass tubing to a resistance of 1-5 MΩ, were filled with solution containing (in mM): 100 KMeSO3, 40 KCl, and 10 HEPES, pH 7.25 adjusted with KOH (290 mOsm). DiI-labeled neurons were identified by fluorescence microscopy. Whole-cell currents and voltages were recorded from DiI-labeled neurons using a Dagan 3911 patch clamp amplifier. Data were acquired and analyzed with pCLAMP 9.2 (Molecular Devices, Sunnyvale, CA). Currents were filtered at 2–5 kHz and sampled at 50 or 100 μs per point. While still under voltage clamp, the Clampex Membrane Test program (Molecular Devices) was used to determine membrane capacitance (Cm) and membrane resistance (Rm) during a 10 ms, 5 mV depolarizing pulse from a holding potential of −60 mV. The configuration was then switched to current clamp (0 pA) to determine other electrophysiological properties. After stabilizing for 2–3 min, the resting membrane potential was measured. The minimum acceptable resting membrane potential was −40 mV. Spontaneous activity (SA) was then recorded over two 30 s periods separated by 60 s without recording, as described by Bedi et al[19].\nTransient A-type K+ current (IA) recording method in patch studies: To record voltage-gated K+ current (Kv), Na+ in the control external solution was replaced with equimolar choline and the Ca2+ concentration was reduced to 0.03 mM to suppress Ca2+ currents and to prevent Ca2+ channels from becoming Na+ conducting. 
The reduced external Ca2+ would also be expected to suppress Ca2+-activated K+ currents. The current traces of Kv in DRG neurons were measured at different holding potentials. The membrane potential was held at −100 mV and voltage steps were applied from −40 to +30 mV to record the total Kv. The membrane potential was held at −50 mV to record the sustained Kv. The IA currents were calculated by subtracting the sustained current from the total current. The current density (in pA/pF) was calculated by dividing the current amplitude by the cell membrane capacitance.", "Total RNA was extracted using RNeasy Mini Kits (QIAGEN, Valencia, CA). One microgram of total RNA was reverse-transcribed using the SuperScript III First-Strand Synthesis System. PCR assays were performed on a StepOnePlus thermal cycler with 18S rRNA as the normalizer, using Applied Biosystems primer/probe set Rn02531967_s1 directed against the translated exon IX. Fold-change relative to control was calculated using the ΔΔCt method (Applied Biosystems).", "Samples were lysed in RIPA buffer containing protease inhibitor cocktail and phenylmethanesulfonyl fluoride. Lysates were incubated for 30 min on ice and then centrifuged at 10 000 × g for 10 min at 4 °C. The protein concentration in the supernatant was determined using bicinchoninic acid (BCA) assay kits with bovine serum albumin as a standard. Equal amounts of protein (30 μg per lane) were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to nitrocellulose membranes (Bio-Rad, United States). The membranes were blocked in Li-Cor blocking buffer for 1 h at room temperature and then incubated with primary antibodies. BDNF antibody (Santa Cruz Biotechnology, Santa Cruz, CA) was used at 1:200 dilution; nerve growth factor (NGF) antibody (Abcam, MA) was used at 1:1000 dilution; β-actin antibody (Sigma Aldrich, St Louis, MO) was used at 1:5000 dilution. The secondary antibodies were donkey anti-rabbit Alexa Fluor 680 (Invitrogen) and goat anti-mouse IRDye 800 (Rockland). Images were acquired and band intensities measured using a Li-Cor Odyssey system (Li-Cor, Lincoln, NE).", "Frozen sections of colon tissue from control, CAS, CPS and CPS + CAS female rats were mounted on glass slides and rehydrated in phosphate buffered saline at room temperature. The slides were treated for antigen retrieval, blocked with 10% normal goat serum diluted in 0.3% phosphate buffered saline-Triton for 1 h, and then incubated with NGF primary antibody in antibody diluent (Renoir Red, Biocare Medical, Concord, CA) at 4 °C overnight. The slides were exposed to fluorescent dye-conjugated secondary antibody for 2 h at room temperature, counterstained with 4',6-diamidino-2-phenylindole, and coverslipped. Images were taken in fluorescence mode on an Olympus laser scanning confocal microscope and the average signal intensity was calculated with the bundled software.", "Serum estradiol, adrenocorticotropic hormone (ACTH), and norepinephrine levels were measured using specific enzyme-linked immunosorbent assay kits for each analyte (CSB-E05110r, CSB-E06875r, CSB-E07022, Cusabio Biotech Co., United States) following the manufacturer’s instructions.", "Single fiber responses (impulses/second) to CRD were calculated by subtracting SA from the mean 30 s maximal activity during distension. Fibers were considered responsive if CRD increased their activity to 30% greater than the baseline value. 
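A minimal numeric sketch of the potassium-current arithmetic described above may help: IA is obtained by subtracting the sustained current (recorded from a holding potential of −50 mV) from the total current (recorded from −100 mV), and current density is the amplitude divided by membrane capacitance. The function name and the example values are hypothetical; real traces would come from the pCLAMP exports.

```python
import numpy as np

def isolate_ia(total_current_pa, sustained_current_pa, capacitance_pf):
    """Subtract the sustained Kv component from the total Kv current to obtain
    the transient A-type current (IA), then express the currents as densities
    (pA/pF) by dividing by membrane capacitance, as described in the methods."""
    total = np.asarray(total_current_pa, dtype=float)
    sustained = np.asarray(sustained_current_pa, dtype=float)
    ia = total - sustained
    return {
        "IA_pA": ia,
        "IA_density_pA_per_pF": ia / capacitance_pf,
        "IK_density_pA_per_pF": sustained / capacitance_pf,
    }

if __name__ == "__main__":
    # Hypothetical peak amplitudes at two test potentials, Cm = 25 pF.
    out = isolate_ia([850.0, 900.0], [500.0, 520.0], capacitance_pf=25.0)
    print(out["IA_density_pA_per_pF"])  # [14.  15.2]
```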
Mechanosensitive single units were classified as high threshold (> 20 mmHg) or low threshold (≤ 20 mmHg) on the basis of their response threshold and profile during CRD. Single fiber activity data were analyzed by analysis of variance with repeated measures; CRD intensity was the repeated factor and the experimental group was the between-group factor. If significant main effects were present, the individual means were compared using the Fisher post-hoc test. All authors had access to the study data and reviewed and approved the final manuscript.", " Effects of CPS plus CAS on primary afferent responses to CRD in male and female rats The basal activity of a spinal afferent fiber was defined as the average number of action potentials per second (impulses/sec) in the 60 s period before the onset of a distention stimulus. In male controls, 66% of the afferent fibers under study displayed SA and SA was significantly higher in female controls than in male controls (0.71 ± 0.21 vs 1.24 ± 0.20 imp/sec; Figure 1B). The average single fiber activity in response to CRD was significantly higher in female control rats compared with male controls (Figure 1C). We found that the enhanced sensitization in female rats mainly came from the low-threshold fibers (Figure 1D and E).\nTo assess the effects of CPS + CAS on colon afferent fiber activities, we compared average single colon afferent fiber activities projecting from dorsal roots S1-L6 in response to CRD in male and female control, CPS, control + CAS and CPS + CAS rats recorded approximately 24 h after the last stressor. In females, CPS significantly increased single-unit afferent activity in response to CRD vs control female rats (Figure 1F). CAS alone enhanced single-unit activity compared with control. The increase in average afferent responses after CAS in prenatally stressed female rats (44.0%) was significantly greater than the increase in female control rats (39.3%). In males, CPS had no significant effect on primary afferent responses (Figure 1G). When we compared males to females within each experimental group, we found that the average single fiber activity was significantly higher in female compared with male CPS + CAS rats (Figure 1F, G). The increased activity may contribute to the enhanced female visceral hypersensitivity previously reported in this model. Average single-fiber activities were significantly greater in control and CPS and CAS females than in their corresponding male experimental groups (Figure 1F and G). Both CAS and CPS + CAS rats had significantly increased primary afferent responses compared with control and CPS rats. Thus, our CPS and CAS protocols sensitized colon-projecting primary afferent fibers, with the greatest effects produced by the combination of CPS + CAS in both males and females.\nThe basal activity of a spinal afferent fiber was defined as the average number of action potentials per second (impulses/sec) in the 60 s period before the onset of a distention stimulus. In male controls, 66% of the afferent fibers under study displayed SA and SA was significantly higher in female controls than in male controls (0.71 ± 0.21 vs 1.24 ± 0.20 imp/sec; Figure 1B). The average single fiber activity in response to CRD was significantly higher in female control rats compared with male controls (Figure 1C). 
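The single-fiber analysis rules laid out in the Data analysis section (response equals the mean 30-s maximal firing during distension minus spontaneous activity, a fiber is responsive if distension raises activity at least 30% above baseline, and units are split into low- and high-threshold classes at 20 mmHg) can be summarized in a small helper. This is a sketch only; the function and field names are hypothetical, and the spike rates are assumed to come from the off-line sorted units.

```python
def classify_fiber(sa_imp_per_s, max_30s_imp_per_s, response_threshold_mmhg):
    """Apply the single-fiber analysis rules described in the Data analysis
    section to one unit's spontaneous activity (SA), peak 30-s firing during
    colorectal distension, and pressure response threshold."""
    response = max_30s_imp_per_s - sa_imp_per_s            # impulses/second
    responsive = max_30s_imp_per_s >= 1.3 * sa_imp_per_s   # >= 30% above baseline
    fiber_class = "high-threshold" if response_threshold_mmhg > 20 else "low-threshold"
    return {"response_imp_per_s": response,
            "responsive": responsive,
            "class": fiber_class}

if __name__ == "__main__":
    print(classify_fiber(sa_imp_per_s=1.2, max_30s_imp_per_s=6.0,
                         response_threshold_mmhg=10.0))
```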
We found that the enhanced sensitization in female rats mainly came from the low-threshold fibers (Figure 1D and E).\nTo assess the effects of CPS + CAS on colon afferent fiber activities, we compared average single colon afferent fiber activities projecting from dorsal roots S1-L6 in response to CRD in male and female control, CPS, control + CAS and CPS + CAS rats recorded approximately 24 h after the last stressor. In females, CPS significantly increased single-unit afferent activity in response to CRD vs control female rats (Figure 1F). CAS alone enhanced single-unit activity compared with control. The increase in average afferent responses after CAS in prenatally stressed female rats (44.0%) was significantly greater than the increase in female control rats (39.3%). In males, CPS had no significant effect on primary afferent responses (Figure 1G). When we compared males to females within each experimental group, we found that the average single fiber activity was significantly higher in female compared with male CPS + CAS rats (Figure 1F, G). The increased activity may contribute to the enhanced female visceral hypersensitivity previously reported in this model. Average single-fiber activities were significantly greater in control and CPS and CAS females than in their corresponding male experimental groups (Figure 1F and G). Both CAS and CPS + CAS rats had significantly increased primary afferent responses compared with control and CPS rats. Thus, our CPS and CAS protocols sensitized colon-projecting primary afferent fibers, with the greatest effects produced by the combination of CPS + CAS in both males and females.\n Increase in excitability of colon-projecting lumbosacral DRG neurons in female CPS + CAS rats To elucidate the electrophysiological basis of enhanced stress-induced primary afferent activity in female rats, we performed patch clamp studies on acutely dissociated retrograde-labeled colon-projecting neurons from the L6-S2 DRGs in control, prenatal stress, adult stress only, and CPS + CAS female rats isolated 24 h after the last adult stressor (Figure 2A). Input resistance (Figure 2B) and rheobase (Figure 2C) were significantly decreased in neurons from CPS + CAS rats compared with the other three groups. The number of action potentials elicited at either 2 × or 3 × the rheobase were significantly greater in adult stress and CPS + CAS neurons compared with control and to CPS neurons (Figure 2D, E). CAS significantly increased action potential overshoot with or without CPS (Figure 2F), but it did not significantly alter other electrophysiological characteristics, such as number of spontaneous spikes, membrane capacitance (pF), resting membrane potential, cell diameter, time constant, and DRG neuron action potential amplitude and duration (Table 1).\nPatch clamp recording in colonic dorsal root ganglion neurons from female rats. A: Patch clamp process of cell labeling. Under isoflurane anesthesia, the lipid soluble fluorescent dye 9-DiI was injected into muscularis externa of the exposed distal colon (left figure). Lumbosacral (L6–S2) dorsal root ganglions (upper photograph) were isolated and DiI-labeled neurons were identified by fluorescence microscopy (lower photograph). 
Electrophysiological properties of each neuron were measured using whole-cell current and voltage clamp protocols (right figure); B: Rheobase from all four experimental groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control or bP < 0.05 vs CAS); C: Representative action potentials (APs) elicited by current injection at 2 × the rheobase in neurons from control, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS female rats; D: Membrane input resistance from all four groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); E: Number of APs elicited by current injection at either 2 × and 3 × the rheobase in all four experimental groups (two-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: AP overshoot recorded from all four experimental groups (aP < 0.05 vs control); G: The proportion of neurons from each experimental group exhibiting spontaneous APs. Red numbers represent spontaneous AP firing cells; black numbers represent total cells; H: Representative total, IK and IA current tracings and average values of potassium currents: Itotal, IK and IA are shown in female CPS + CAS, CAS, CPS (n = 15 neurons, from 5 rats in each group), and control groups (n = 12 neurons from 5 rats); two-way ANOVA, aP < 0.05 vs each control group. \nElectrophysiological characteristics of colon related DRG neuron\nValues means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. CAS: Chronic adult stress; CPS: Chronic prenatal stress.\nThe percentage of neurons with SA in was significantly greater in CPS + CAS rats than in control or CPS only rats (Figure 2G). Under voltage clamp conditions (Figure 2H), neurons from female CPS + CAS, CAS, CPS and control groups had IA and sustained outward rectifier K+ currents (IK). Compared with the other three groups, DRG neurons from CPS + CAS rats had significantly reduced average IA (P < 0.05). The average IK density was decreased but the change was not significant.\nTo elucidate the electrophysiological basis of enhanced stress-induced primary afferent activity in female rats, we performed patch clamp studies on acutely dissociated retrograde-labeled colon-projecting neurons from the L6-S2 DRGs in control, prenatal stress, adult stress only, and CPS + CAS female rats isolated 24 h after the last adult stressor (Figure 2A). Input resistance (Figure 2B) and rheobase (Figure 2C) were significantly decreased in neurons from CPS + CAS rats compared with the other three groups. The number of action potentials elicited at either 2 × or 3 × the rheobase were significantly greater in adult stress and CPS + CAS neurons compared with control and to CPS neurons (Figure 2D, E). CAS significantly increased action potential overshoot with or without CPS (Figure 2F), but it did not significantly alter other electrophysiological characteristics, such as number of spontaneous spikes, membrane capacitance (pF), resting membrane potential, cell diameter, time constant, and DRG neuron action potential amplitude and duration (Table 1).\nPatch clamp recording in colonic dorsal root ganglion neurons from female rats. A: Patch clamp process of cell labeling. Under isoflurane anesthesia, the lipid soluble fluorescent dye 9-DiI was injected into muscularis externa of the exposed distal colon (left figure). Lumbosacral (L6–S2) dorsal root ganglions (upper photograph) were isolated and DiI-labeled neurons were identified by fluorescence microscopy (lower photograph). 
Electrophysiological properties of each neuron were measured using whole-cell current and voltage clamp protocols (right figure); B: Rheobase from all four experimental groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control or bP < 0.05 vs CAS); C: Representative action potentials (APs) elicited by current injection at 2 × the rheobase in neurons from control, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS female rats; D: Membrane input resistance from all four groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); E: Number of APs elicited by current injection at either 2 × and 3 × the rheobase in all four experimental groups (two-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: AP overshoot recorded from all four experimental groups (aP < 0.05 vs control); G: The proportion of neurons from each experimental group exhibiting spontaneous APs. Red numbers represent spontaneous AP firing cells; black numbers represent total cells; H: Representative total, IK and IA current tracings and average values of potassium currents: Itotal, IK and IA are shown in female CPS + CAS, CAS, CPS (n = 15 neurons, from 5 rats in each group), and control groups (n = 12 neurons from 5 rats); two-way ANOVA, aP < 0.05 vs each control group. \nElectrophysiological characteristics of colon related DRG neuron\nValues means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. CAS: Chronic adult stress; CPS: Chronic prenatal stress.\nThe percentage of neurons with SA in was significantly greater in CPS + CAS rats than in control or CPS only rats (Figure 2G). Under voltage clamp conditions (Figure 2H), neurons from female CPS + CAS, CAS, CPS and control groups had IA and sustained outward rectifier K+ currents (IK). Compared with the other three groups, DRG neurons from CPS + CAS rats had significantly reduced average IA (P < 0.05). The average IK density was decreased but the change was not significant.\n Effects of CPS and/or CAS on plasma estrogen concentration We did a vaginal smear test to identify the estrus cycle phases by identifying the vaginal cytological cell types. Estrogen concentration was significantly higher in the CPS proestrus/estrus phase compared with control diestrus, control proestrus/estrus, and CPS diestrus proestrus (P < 0.05; Figure 3A). Comparison of the plasma estrogen concentrations in control, CAS, CPS, CPS + CAS showed that CPS significantly increased plasma estrogen levels compared with the control rats and that CAS increased plasma estrogen level compared with the control and CPS rats (Figure 3B).\nEffects of chronic prenatal stress, chronic adult stress, ovariectomy, and letrozole treatment on plasma estrogen levels in female rats. 
A: Plasma estrogen level in control and chronic prenatal stress (CPS) rats by estrus cycle phase (n = 8 rats, one-way ANOVA, aP < 0.05 vs control proestrus/estrus (P-E) phase; bP < 0.05 vs CPS diestrus (D) phase); B: Plasma estrogen levels increased in CPS rats and following chronic adult stress (CAS) 24 h after the last adult stressor (n = 8 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); C: Ovariectomy (OVX) significantly reduced CPS female rat plasma estrogen levels before and after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs sham group); D: Letrozole treatment significantly reduced CPS female rat plasma estrogen levels before or after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs vehicle group; cP < 0.0001); E: Plasma norepinephrine levels from control, CAS, CPS and CPS + CAS group female rats (n = 5 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: Plasma adrenocorticotropic hormone (ACTH) levels from control and CPS + CAS group female rats (n = 5 rats, t-test, aP < 0.05 vs control). \nTo determine whether estrogen contributed to stress-induced visceral hypersensitivity in prenatal stressed females, we reduced plasma estrogen levels by either OVX or letrozole treatment. OVX significantly lowered serum estradiol levels before and after CAS (Figure 3C). Treatment was continued throughout CAS. After treatment with letrozole, serum estradiol levels were significantly reduced (Figure 3D). To study the effects of gender and stress on norepinephrine and ACTH levels, we measured plasma norepinephrine levels in female rats from all four experimental groups. CAS alone significantly increased plasma norepinephrine levels compared with both the controls and with CPS alone (Figure 3E) Plasma norepinephrine levels were significantly increased in CPS + CAS rats compared with CAS alone as well as with controls and CPS. Plasma ACTH levels were significantly increased in CPS + CAS rats compared with controls. (Figure 3F).\nWe did a vaginal smear test to identify the estrus cycle phases by identifying the vaginal cytological cell types. Estrogen concentration was significantly higher in the CPS proestrus/estrus phase compared with control diestrus, control proestrus/estrus, and CPS diestrus proestrus (P < 0.05; Figure 3A). Comparison of the plasma estrogen concentrations in control, CAS, CPS, CPS + CAS showed that CPS significantly increased plasma estrogen levels compared with the control rats and that CAS increased plasma estrogen level compared with the control and CPS rats (Figure 3B).\nEffects of chronic prenatal stress, chronic adult stress, ovariectomy, and letrozole treatment on plasma estrogen levels in female rats. 
A: Plasma estrogen level in control and chronic prenatal stress (CPS) rats by estrus cycle phase (n = 8 rats, one-way ANOVA, aP < 0.05 vs control proestrus/estrus (P-E) phase; bP < 0.05 vs CPS diestrus (D) phase); B: Plasma estrogen levels increased in CPS rats and following chronic adult stress (CAS) 24 h after the last adult stressor (n = 8 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); C: Ovariectomy (OVX) significantly reduced CPS female rat plasma estrogen levels before and after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs sham group); D: Letrozole treatment significantly reduced CPS female rat plasma estrogen levels before or after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs vehicle group; cP < 0.0001); E: Plasma norepinephrine levels from control, CAS, CPS and CPS + CAS group female rats (n = 5 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: Plasma adrenocorticotropic hormone (ACTH) levels from control and CPS + CAS group female rats (n = 5 rats, t-test, aP < 0.05 vs control). \nTo determine whether estrogen contributed to stress-induced visceral hypersensitivity in prenatal stressed females, we reduced plasma estrogen levels by either OVX or letrozole treatment. OVX significantly lowered serum estradiol levels before and after CAS (Figure 3C). Treatment was continued throughout CAS. After treatment with letrozole, serum estradiol levels were significantly reduced (Figure 3D). To study the effects of gender and stress on norepinephrine and ACTH levels, we measured plasma norepinephrine levels in female rats from all four experimental groups. CAS alone significantly increased plasma norepinephrine levels compared with both the controls and with CPS alone (Figure 3E) Plasma norepinephrine levels were significantly increased in CPS + CAS rats compared with CAS alone as well as with controls and CPS. Plasma ACTH levels were significantly increased in CPS + CAS rats compared with controls. (Figure 3F).\n Effects of Letrozole treatment on colon DRG neuron excitability We performed patch clamp experiments on acutely isolated retrograde-labeled DRG neurons from CPS + CAS females with or without letrozole treatment 24 h after the last adult stressor. Letrozole treatment significantly increased rheobase (Figure 4A), and significantly reduced input resistance (Figure 4B). Action potential overshoot (Figure 4C) and the number of action potentials elicited by a current injection at either 2 × or 3 × rheobase were significantly reduced by letrozole treatment (Figure 4D). Other electrophysiological properties were not significantly altered (Table 2). We also recorded electromyographic activity to determine whether the reduction in visceral sensitivity in female CPS + CAS rats caused by OVX or systemic letrozole treatment reduced visceromotor responses. The findings demonstrated a significant decrease in excitability of colon-projecting L6-S2 neurons.\nEffects of Letrozole treatment on colon dorsal root ganglion neuron excitability. A: Rheobase (n = 45 cells in 6 rats in each group, t-test, cP < 0.001 vs Veh. + chronic adult stress [CAS] + chronic prenatal stress [CPS]); B: Membrane input resistance (RIn) (t-test, aP < 0.05); C: Action potential (AP) overshoot (t-test, aP < 0.05); D: Number of APs elicited by current injection at 2 × and 3 × rheobase (two-way ANOVA, aP < 0.05; bP < 0.01). \nElectrophysiological characteristics of colon related DRG neuron after Letrozole treatment\nValues are means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. 
CAS: Chronic adult stress; CPS: Chronic prenatal stress.\nWe performed patch clamp experiments on acutely isolated retrograde-labeled DRG neurons from CPS + CAS females with or without letrozole treatment 24 h after the last adult stressor. Letrozole treatment significantly increased rheobase (Figure 4A), and significantly reduced input resistance (Figure 4B). Action potential overshoot (Figure 4C) and the number of action potentials elicited by a current injection at either 2 × or 3 × rheobase were significantly reduced by letrozole treatment (Figure 4D). Other electrophysiological properties were not significantly altered (Table 2). We also recorded electromyographic activity to determine whether the reduction in visceral sensitivity in female CPS + CAS rats caused by OVX or systemic letrozole treatment reduced visceromotor responses. The findings demonstrated a significant decrease in excitability of colon-projecting L6-S2 neurons.\nEffects of Letrozole treatment on colon dorsal root ganglion neuron excitability. A: Rheobase (n = 45 cells in 6 rats in each group, t-test, cP < 0.001 vs Veh. + chronic adult stress [CAS] + chronic prenatal stress [CPS]); B: Membrane input resistance (RIn) (t-test, aP < 0.05); C: Action potential (AP) overshoot (t-test, aP < 0.05); D: Number of APs elicited by current injection at 2 × and 3 × rheobase (two-way ANOVA, aP < 0.05; bP < 0.01). \nElectrophysiological characteristics of colon related DRG neuron after Letrozole treatment\nValues are means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. CAS: Chronic adult stress; CPS: Chronic prenatal stress.\n Spinal cord BDNF levels regulated by estrogen To investigate the effect of estrogen on BDNF expression, we measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and Sham CPS + CAS female rats. Systemic estradiol administration to naïve cycling females produced significant increases in plasma estrogen (Figure 5A), lumbar-sacral spinal cord BDNF mRNA (Figure 5B), and protein (Figure 5C). We also measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and Sham CPS + CAS female rats. BDNF mRNA and protein expression were significantly suppressed by OVX compared with sham rats. Another experiment showed that intrathecal infusion of estrogen into naïve female rats significantly increased BDNF protein levels, which proved that estrogen reversed the experimental results and contributed to the response to visceral pain[13].\nBrain-derived neurotrophic factor expression in lumbar-sacral spinal cord is regulated by estrogen. A: Plasma estrogen levels in cycling females that received a bolus estradiol (E2) infusion on day 1; B: Lumbar-sacral spinal cord brain-derived neurotrophic factor (BDNF) mRNA following bolus estrogen infusion; C: Lumbar-sacral spinal cord BDNF protein following bolus estrogen infusion. (n = 8 rats in each group, two-way ANOVA, aP < 0.05 vs vehicle group). \nTo investigate the effect of estrogen on BDNF expression, we measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and Sham CPS + CAS female rats. Systemic estradiol administration to naïve cycling females produced significant increases in plasma estrogen (Figure 5A), lumbar-sacral spinal cord BDNF mRNA (Figure 5B), and protein (Figure 5C). We also measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and Sham CPS + CAS female rats. BDNF mRNA and protein expression were significantly suppressed by OVX compared with sham rats. 
Another experiment showed that intrathecal infusion of estrogen into naïve female rats significantly increased BDNF protein levels, which proved that estrogen reversed the experimental results and contributed to the response to visceral pain[13].\nBrain-derived neurotrophic factor expression in lumbar-sacral spinal cord is regulated by estrogen. A: Plasma estrogen levels in cycling females that received a bolus estradiol (E2) infusion on day 1; B: Lumbar-sacral spinal cord brain-derived neurotrophic factor (BDNF) mRNA following bolus estrogen infusion; C: Lumbar-sacral spinal cord BDNF protein following bolus estrogen infusion. (n = 8 rats in each group, two-way ANOVA, aP < 0.05 vs vehicle group). \n Peripheral NGF level increased in CPS + CAS female rats We examined NGF expression in the colons of females from all four experimental groups by immunohistochemistry (Figure 6A). Morphometric analysis showed that CAS and CPS + CAS significantly increased NGF levels in the colon wall, with the increase in CPS + CAS significantly greater that of CAS alone (Figure 6B). Western blotting showed that NGF protein was significantly upregulated in CPS + CAS rats compared with controls (Figure 6C).\nNerve growth factor expression level in the colon wall. A: Immunohistochemical staining of nerve growth factor (NGF; green) was detected with nuclear counterstaining staining (blue) in controls, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS group female rat colon walls. × 400 magnification representative pictures were shown; B: Quantification of NGF levels from colon wall in immunohistochemistry (IHC) (n = 4 rats in each group, one-way ANOVA, aP < 0.05 vs control group; bP < 0.05 vs CPS group); C: Western blots of NGF protein from control and CPS + CAS female rats colon wall tissue (n = 6 rats in each group, t-test, aP < 0.05 vs control group). \nWe examined NGF expression in the colons of females from all four experimental groups by immunohistochemistry (Figure 6A). Morphometric analysis showed that CAS and CPS + CAS significantly increased NGF levels in the colon wall, with the increase in CPS + CAS significantly greater that of CAS alone (Figure 6B). Western blotting showed that NGF protein was significantly upregulated in CPS + CAS rats compared with controls (Figure 6C).\nNerve growth factor expression level in the colon wall. A: Immunohistochemical staining of nerve growth factor (NGF; green) was detected with nuclear counterstaining staining (blue) in controls, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS group female rat colon walls. × 400 magnification representative pictures were shown; B: Quantification of NGF levels from colon wall in immunohistochemistry (IHC) (n = 4 rats in each group, one-way ANOVA, aP < 0.05 vs control group; bP < 0.05 vs CPS group); C: Western blots of NGF protein from control and CPS + CAS female rats colon wall tissue (n = 6 rats in each group, t-test, aP < 0.05 vs control group). ", "The basal activity of a spinal afferent fiber was defined as the average number of action potentials per second (impulses/sec) in the 60 s period before the onset of a distention stimulus. In male controls, 66% of the afferent fibers under study displayed SA and SA was significantly higher in female controls than in male controls (0.71 ± 0.21 vs 1.24 ± 0.20 imp/sec; Figure 1B). The average single fiber activity in response to CRD was significantly higher in female control rats compared with male controls (Figure 1C). 
We found that the enhanced sensitization in female rats mainly came from the low-threshold fibers (Figure 1D and E).\nTo assess the effects of CPS + CAS on colon afferent fiber activities, we compared average single colon afferent fiber activities projecting from dorsal roots S1-L6 in response to CRD in male and female control, CPS, control + CAS and CPS + CAS rats recorded approximately 24 h after the last stressor. In females, CPS significantly increased single-unit afferent activity in response to CRD vs control female rats (Figure 1F). CAS alone enhanced single-unit activity compared with control. The increase in average afferent responses after CAS in prenatally stressed female rats (44.0%) was significantly greater than the increase in female control rats (39.3%). In males, CPS had no significant effect on primary afferent responses (Figure 1G). When we compared males to females within each experimental group, we found that the average single fiber activity was significantly higher in female compared with male CPS + CAS rats (Figure 1F, G). The increased activity may contribute to the enhanced female visceral hypersensitivity previously reported in this model. Average single-fiber activities were significantly greater in control and CPS and CAS females than in their corresponding male experimental groups (Figure 1F and G). Both CAS and CPS + CAS rats had significantly increased primary afferent responses compared with control and CPS rats. Thus, our CPS and CAS protocols sensitized colon-projecting primary afferent fibers, with the greatest effects produced by the combination of CPS + CAS in both males and females.", "To elucidate the electrophysiological basis of enhanced stress-induced primary afferent activity in female rats, we performed patch clamp studies on acutely dissociated retrograde-labeled colon-projecting neurons from the L6-S2 DRGs in control, prenatal stress, adult stress only, and CPS + CAS female rats isolated 24 h after the last adult stressor (Figure 2A). Input resistance (Figure 2B) and rheobase (Figure 2C) were significantly decreased in neurons from CPS + CAS rats compared with the other three groups. The number of action potentials elicited at either 2 × or 3 × the rheobase were significantly greater in adult stress and CPS + CAS neurons compared with control and to CPS neurons (Figure 2D, E). CAS significantly increased action potential overshoot with or without CPS (Figure 2F), but it did not significantly alter other electrophysiological characteristics, such as number of spontaneous spikes, membrane capacitance (pF), resting membrane potential, cell diameter, time constant, and DRG neuron action potential amplitude and duration (Table 1).\nPatch clamp recording in colonic dorsal root ganglion neurons from female rats. A: Patch clamp process of cell labeling. Under isoflurane anesthesia, the lipid soluble fluorescent dye 9-DiI was injected into muscularis externa of the exposed distal colon (left figure). Lumbosacral (L6–S2) dorsal root ganglions (upper photograph) were isolated and DiI-labeled neurons were identified by fluorescence microscopy (lower photograph). 
Electrophysiological properties of each neuron were measured using whole-cell current and voltage clamp protocols (right figure); B: Rheobase from all four experimental groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control or bP < 0.05 vs CAS); C: Representative action potentials (APs) elicited by current injection at 2 × the rheobase in neurons from control, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS female rats; D: Membrane input resistance from all four groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); E: Number of APs elicited by current injection at either 2 × and 3 × the rheobase in all four experimental groups (two-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: AP overshoot recorded from all four experimental groups (aP < 0.05 vs control); G: The proportion of neurons from each experimental group exhibiting spontaneous APs. Red numbers represent spontaneous AP firing cells; black numbers represent total cells; H: Representative total, IK and IA current tracings and average values of potassium currents: Itotal, IK and IA are shown in female CPS + CAS, CAS, CPS (n = 15 neurons, from 5 rats in each group), and control groups (n = 12 neurons from 5 rats); two-way ANOVA, aP < 0.05 vs each control group. \nElectrophysiological characteristics of colon related DRG neuron\nValues means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. CAS: Chronic adult stress; CPS: Chronic prenatal stress.\nThe percentage of neurons with SA in was significantly greater in CPS + CAS rats than in control or CPS only rats (Figure 2G). Under voltage clamp conditions (Figure 2H), neurons from female CPS + CAS, CAS, CPS and control groups had IA and sustained outward rectifier K+ currents (IK). Compared with the other three groups, DRG neurons from CPS + CAS rats had significantly reduced average IA (P < 0.05). The average IK density was decreased but the change was not significant.", "We did a vaginal smear test to identify the estrus cycle phases by identifying the vaginal cytological cell types. Estrogen concentration was significantly higher in the CPS proestrus/estrus phase compared with control diestrus, control proestrus/estrus, and CPS diestrus proestrus (P < 0.05; Figure 3A). Comparison of the plasma estrogen concentrations in control, CAS, CPS, CPS + CAS showed that CPS significantly increased plasma estrogen levels compared with the control rats and that CAS increased plasma estrogen level compared with the control and CPS rats (Figure 3B).\nEffects of chronic prenatal stress, chronic adult stress, ovariectomy, and letrozole treatment on plasma estrogen levels in female rats. 
A: Plasma estrogen level in control and chronic prenatal stress (CPS) rats by estrus cycle phase (n = 8 rats, one-way ANOVA, aP < 0.05 vs control proestrus/estrus (P-E) phase; bP < 0.05 vs CPS diestrus (D) phase); B: Plasma estrogen levels increased in CPS rats and following chronic adult stress (CAS) 24 h after the last adult stressor (n = 8 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); C: Ovariectomy (OVX) significantly reduced CPS female rat plasma estrogen levels before and after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs sham group); D: Letrozole treatment significantly reduced CPS female rat plasma estrogen levels before or after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs vehicle group; cP < 0.0001); E: Plasma norepinephrine levels from control, CAS, CPS and CPS + CAS group female rats (n = 5 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: Plasma adrenocorticotropic hormone (ACTH) levels from control and CPS + CAS group female rats (n = 5 rats, t-test, aP < 0.05 vs control). \nTo determine whether estrogen contributed to stress-induced visceral hypersensitivity in prenatal stressed females, we reduced plasma estrogen levels by either OVX or letrozole treatment. OVX significantly lowered serum estradiol levels before and after CAS (Figure 3C). Treatment was continued throughout CAS. After treatment with letrozole, serum estradiol levels were significantly reduced (Figure 3D). To study the effects of gender and stress on norepinephrine and ACTH levels, we measured plasma norepinephrine levels in female rats from all four experimental groups. CAS alone significantly increased plasma norepinephrine levels compared with both the controls and with CPS alone (Figure 3E) Plasma norepinephrine levels were significantly increased in CPS + CAS rats compared with CAS alone as well as with controls and CPS. Plasma ACTH levels were significantly increased in CPS + CAS rats compared with controls. (Figure 3F).", "We performed patch clamp experiments on acutely isolated retrograde-labeled DRG neurons from CPS + CAS females with or without letrozole treatment 24 h after the last adult stressor. Letrozole treatment significantly increased rheobase (Figure 4A), and significantly reduced input resistance (Figure 4B). Action potential overshoot (Figure 4C) and the number of action potentials elicited by a current injection at either 2 × or 3 × rheobase were significantly reduced by letrozole treatment (Figure 4D). Other electrophysiological properties were not significantly altered (Table 2). We also recorded electromyographic activity to determine whether the reduction in visceral sensitivity in female CPS + CAS rats caused by OVX or systemic letrozole treatment reduced visceromotor responses. The findings demonstrated a significant decrease in excitability of colon-projecting L6-S2 neurons.\nEffects of Letrozole treatment on colon dorsal root ganglion neuron excitability. A: Rheobase (n = 45 cells in 6 rats in each group, t-test, cP < 0.001 vs Veh. + chronic adult stress [CAS] + chronic prenatal stress [CPS]); B: Membrane input resistance (RIn) (t-test, aP < 0.05); C: Action potential (AP) overshoot (t-test, aP < 0.05); D: Number of APs elicited by current injection at 2 × and 3 × rheobase (two-way ANOVA, aP < 0.05; bP < 0.01). \nElectrophysiological characteristics of colon related DRG neuron after Letrozole treatment\nValues are means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. 
CAS: Chronic adult stress; CPS: Chronic prenatal stress.", "To investigate the effect of estrogen on BDNF expression, we first administered estradiol systemically to naïve cycling females, which produced significant increases in plasma estrogen (Figure 5A), lumbar-sacral spinal cord BDNF mRNA (Figure 5B), and protein (Figure 5C). We also measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and sham CPS + CAS female rats; BDNF mRNA and protein expression were significantly suppressed by OVX compared with sham rats. In addition, intrathecal infusion of estrogen into naïve female rats significantly increased BDNF protein levels, supporting a direct spinal action of estrogen on BDNF expression and its contribution to the visceral pain response[13].\nBrain-derived neurotrophic factor expression in lumbar-sacral spinal cord is regulated by estrogen. A: Plasma estrogen levels in cycling females that received a bolus estradiol (E2) infusion on day 1; B: Lumbar-sacral spinal cord brain-derived neurotrophic factor (BDNF) mRNA following bolus estrogen infusion; C: Lumbar-sacral spinal cord BDNF protein following bolus estrogen infusion (n = 8 rats in each group, two-way ANOVA, aP < 0.05 vs vehicle group). ", "We examined NGF expression in the colons of females from all four experimental groups by immunohistochemistry (Figure 6A). Morphometric analysis showed that CAS and CPS + CAS significantly increased NGF levels in the colon wall, with the increase in CPS + CAS significantly greater than that of CAS alone (Figure 6B). Western blotting showed that NGF protein was significantly upregulated in CPS + CAS rats compared with controls (Figure 6C).\nNerve growth factor expression level in the colon wall. A: Immunohistochemical staining of nerve growth factor (NGF; green) with nuclear counterstaining (blue) in colon walls from control, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS female rats; representative images at × 400 magnification are shown; B: Quantification of NGF levels in the colon wall by immunohistochemistry (IHC) (n = 4 rats in each group, one-way ANOVA, aP < 0.05 vs control group; bP < 0.05 vs CPS group); C: Western blots of NGF protein from control and CPS + CAS female rat colon wall tissue (n = 6 rats in each group, t-test, aP < 0.05 vs control group). ", "Enhanced stress-induced visceral hypersensitivity in female rats was associated with increased responses of lumbosacral nerve fibers to CRD; CPS plus CAS sensitized these fibers in both male and female offspring, with the largest increase in females. These findings are further supported by patch clamp data showing increased excitability of colon-projecting DRG neurons from females. The magnitude of the sensitization was greatest in female CPS + CAS rats, suggesting that it made a major contribution to the enhanced female visceral hypersensitivity observed in our model.\nChronic stress is known to increase the excitability of colon-projecting DRG neurons in rats and mice. In adult male Sprague Dawley rats, colon DRG neuron sensitization was shown to be driven by increases in NGF expression in the colon muscularis externa[13]. In our model, we also observed a significant increase in colon NGF, but its potential role in primary afferent sensitization and visceral hypersensitivity was not investigated here. 
Other studies in male mice showed that stress, in the form of water avoidance, significantly increased the excitability of colon-projecting DRG neurons and that the combined activity of the stress mediators corticosterone and norepinephrine increased DRG neuron excitability in vitro[20,21]. We previously found significant increases in the serum levels of norepinephrine in CPS + CAS females. However, daily systemic treatment with adrenergic antagonists during the adult stress protocol failed to reduce visceral hypersensitivity in female neonatal + adult stress rats[13], suggesting that norepinephrine did not play a major role in the acquisition of enhanced female visceral hypersensitivity or primary afferent sensitization in our model.\nIn patch clamp studies of dissociated lumbosacral colon-projecting neurons, we found significant decreases in transient A-type potassium (IA) currents in neurons isolated from CPS + CAS females compared with the other three experimental groups. Declines in A-type Kv currents in DRG neurons have been associated with persistent pain in multiple chronic pain models[22]. Whether the decline was caused by changes in channel properties or channel expression was not investigated in this study. However, another study demonstrated that estrogen significantly shifted the activation curve of IA currents in the hyperpolarizing direction and inhibited Kv channels in mouse DRG neurons through a membrane estrogen receptor-activated nongenomic pathway[23].\nOur results showed that the excitability of colon-projecting neurons in CPS + CAS females was significantly reduced by systemic letrozole treatment, suggesting that estrogen contributed to the sensitization process. Previous studies show that estrogen receptors expressed on primary afferent neurons contributed to enhanced sensitivity in various pain models[24-26]. One study found no decline in the responses of colon-projecting nerve fibers to CRD following OVX and no detectable estrogen receptor alpha immunoreactivity in colon-projecting DRG neurons[27]. The reasons for the differing results are not clear, but local production of estrogen in DRG neurons could be sufficient to sustain sensitization.\nNGF and its receptors also play important roles in visceral pain and hyperalgesia in women. For example, endometriosis is a commonly diagnosed estrogen-dependent disorder whose main symptoms are various types of pelvic pain that seriously affect physical and mental health, yet the mechanisms of the abdominal pain remain unclear. Studies have shown NGF to be an inflammatory mediator and modulator of pain in adulthood[28].", "In this study, we examined sex differences and the effects of estrogen on the acquisition of enhanced visceral hypersensitivity in rat offspring in a model of prenatal plus adult stress, as summarized in Figure 7. Our study shows that estrogen acted in the spinal cord and in primary afferent neurons to enhance visceral nociception. Acute blockade of the endogenous synthesis of estrogens in the rat spinal cord significantly reduced visceral hypersensitivity, suggesting that locally produced estrogen in the central nervous system can regulate nociceptive neurons to modulate visceral hypersensitivity. A chronic stress-estrogen-BDNF axis underlies the visceral hypersensitivity in the offspring of females subjected to CPS. The development of chronic stress-induced visceral hypersensitivity in female rats was estrogen dependent. 
A key component of this hypersensitivity was estrogen-dependent sensitization of primary afferent colon neurons. Our findings provide preclinical evidence supporting the development of gender-based treatments for abdominal pain in IBS.\nSummary diagram of estrogen-enhanced visceral hyperalgesia investigated in chronic prenatal stress plus chronic adult stress models. BDNF: Brain-derived neurotrophic factor; NGF: Nerve growth factor." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Animals", "CPS and CAS models", "Rat treatment", "In vivo single fiber recording of L6-S2 DRG rootlets", "In vitro patch clamp recordings in colonic DRG neurons", "Real time reverse transcription-polymerase chain reaction (RT-PCR)", "Western blot", "Immunofluorescence", "Serum estradiol and norepinephrine levels", "Data analysis", "RESULTS", "Effects of CPS plus CAS on primary afferent responses to CRD in male and female rats", "Increase in excitability of colon-projecting lumbosacral DRG neurons in female CPS + CAS rats", "Effects of CPS and/or CAS on plasma estrogen concentration", "Effects of Letrozole treatment on colon DRG neuron excitability", "Spinal cord BDNF levels regulated by estrogen", "Peripheral NGF level increased in CPS + CAS female rats", "DISCUSSION", "CONCLUSION" ]
[ "Visceral pain of colonic origin is the most prominent symptom in irritable bowel syndrome (IBS) patients[1]. Female IBS patients report more severe pain that occurs more frequently and with longer episodes than in male patients[1,2]. The ratio of female to male IBS is about 2:1 among patients seen in medical clinics[3]. Moreover, females have a higher prevalence of IBS co-morbidities such as anxiety and depression[4,5] and are more vulnerable to stress-induced exacerbation of IBS symptoms compared with males[3,6,7].\nClinical studies show that early life adverse experiences are risk factors for the development of IBS symptoms, including visceral pain and ongoing chronic stress, especially abdominal pain[8-10]. These factors contribute to the development of visceral hypersensitivity, a key component of the IBS symptom complex and one that may be responsible for symptoms of pain[11,12]. Our previous research found that the female offspring of mothers subjected to chronic prenatal stress (CPS) had a markedly greater visceral sensitivity than their male littermates following challenge by another chronic adult stress (CAS) protocol. A critical molecular event in the development of this female-enhanced visceral hypersensitivity is upregulation of brain-derived neurotrophic factor (BDNF) expression in the lumbar-sacral spinal cord of female CPS + CAS rats[13]. However, the neurophysiological changes underlying the enhanced female-specific visceral hypersensitivity and the role of hormone in the development of stress-induced visceral hypersensitivity are not well understood.\nVisceral hypersensitivity in IBS involves abnormal changes in neurophysiology throughout the brain-gut axis. In IBS, there is evidence for sensitization of primary afferents to jejunal distention and electrical stimulation[14], and there is evidence for increased sensitivity of lumbar splanchnic afferents[15,16]. In animal models of either early life adverse events or adult stress-induced visceral hypersensitivity[17], there is evidence of colon primary afferent sensitization. However, the studies were performed in male rodents. Therefore, in this study, we established a CPS and CAS rodent model to analyze the impact on female colon afferent neuron function and the role of estrogen. Our hypothesis was that female CPS offspring subjected to chronic stress as adults would exhibit greater colonic dorsal root ganglion (DRG) neuron sensitization compared with their male littermates, and that the enhanced visceral sensitization and primary afferent sensitization in females was estrogen dependent.", " Animals The Institutional Animal Care and Use Committee of the University of Texas Medical Branch at Galveston, TX approved all animal procedures. Experiments were performed on pregnant Sprague Dawley rats and their 8-wk-old to 16-wk-old male and female offspring. Rats were housed individual cages with access to food and water in a room with controlled conditions (22 ± 2 °C, relative humidity of 50% ± 5%), and a 12 h light/12 h dark cycle.\nThe Institutional Animal Care and Use Committee of the University of Texas Medical Branch at Galveston, TX approved all animal procedures. Experiments were performed on pregnant Sprague Dawley rats and their 8-wk-old to 16-wk-old male and female offspring. 
Rats were housed individual cages with access to food and water in a room with controlled conditions (22 ± 2 °C, relative humidity of 50% ± 5%), and a 12 h light/12 h dark cycle.\n CPS and CAS models Pregnant dams were subjected to a CPS protocol that consisted of a random sequence of twice-daily applications of one of three stress sessions, a 1-h water-avoidance, 45-min cold-restraint, or a 20-min forced swim starting on day 6 and continuing until delivery on day 21. Male and female offspring of the stressed dams were designated CPS rats. Control dams received sham stress and their offspring were designated control rats. As adults at 8-16 wk of age, control and prenatally stressed offspring were challenged by the same CAS protocol for 9 d. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily letrozole treatment was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol. A schematic diagram of the study procedures is shown in Figure 1A.\nPrimary afferent responses to colorectal distention. A: Chronic prenatal stress (CPS) plus chronic adult stress (CAS) model. Pregnant dams were subjected to prenatal stress from on day 11 of gestation. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily Letrozole was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol; B: Spontaneous activity (SA) of single afferent units in male and female control rats (n = 70 fibers in 6 rats in each group, t-test, aP < 0.05); C: Average response to graded colorectal distention (CRD) of 56 afferent fibers in 6 male and 70 afferent fibers in 6 female control rats; two-way analysis of variance (ANOVA; aP < 0.05 vs the same pressure male group); D: Responses of low-threshold (LT) fibers to CRD in 42 fibers in 6 male rats and 40 fibers in 6 female control rats (ANOVA, aP < 0.05 vs the same pressure male group); E: Responses of high-threshold (HT) afferent fibers to CRD in 14 fibers in 6 male and 29 fibers in 6 female control rats; F: Effects of CAS on afferent fiber responses to CRD from 59 fibers in 6 control and 99 fibers in 6 CPS female rats; (two-way ANOVA, aP < 0.05 vs the same pressure control group, bP < 0.05 vs the same pressure CPS group); G: Effects of CAS on afferent fiber responses to CRD in control and CPS male rats (n = 6 rats, 57 fibers for control and 95 fibers for CPS female group; two-way ANOVA, aP < 0.05 vs the same pressure-control group). \nPregnant dams were subjected to a CPS protocol that consisted of a random sequence of twice-daily applications of one of three stress sessions, a 1-h water-avoidance, 45-min cold-restraint, or a 20-min forced swim starting on day 6 and continuing until delivery on day 21. Male and female offspring of the stressed dams were designated CPS rats. Control dams received sham stress and their offspring were designated control rats. As adults at 8-16 wk of age, control and prenatally stressed offspring were challenged by the same CAS protocol for 9 d. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily letrozole treatment was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol. A schematic diagram of the study procedures is shown in Figure 1A.\nPrimary afferent responses to colorectal distention. 
A: Chronic prenatal stress (CPS) plus chronic adult stress (CAS) model. Pregnant dams were subjected to prenatal stress from on day 11 of gestation. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily Letrozole was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol; B: Spontaneous activity (SA) of single afferent units in male and female control rats (n = 70 fibers in 6 rats in each group, t-test, aP < 0.05); C: Average response to graded colorectal distention (CRD) of 56 afferent fibers in 6 male and 70 afferent fibers in 6 female control rats; two-way analysis of variance (ANOVA; aP < 0.05 vs the same pressure male group); D: Responses of low-threshold (LT) fibers to CRD in 42 fibers in 6 male rats and 40 fibers in 6 female control rats (ANOVA, aP < 0.05 vs the same pressure male group); E: Responses of high-threshold (HT) afferent fibers to CRD in 14 fibers in 6 male and 29 fibers in 6 female control rats; F: Effects of CAS on afferent fiber responses to CRD from 59 fibers in 6 control and 99 fibers in 6 CPS female rats; (two-way ANOVA, aP < 0.05 vs the same pressure control group, bP < 0.05 vs the same pressure CPS group); G: Effects of CAS on afferent fiber responses to CRD in control and CPS male rats (n = 6 rats, 57 fibers for control and 95 fibers for CPS female group; two-way ANOVA, aP < 0.05 vs the same pressure-control group). \n Rat treatment Before OVX or letrozole treatment, vaginal smears were used to identify the estrus cycle phase. OVX or sham surgery was performed on female prenatal-stress offspring on day 56. The aromatase inhibitor letrozole [4,4’-(1H-1,2,4-triazol-l-yl-methylene)-bis-benzonitrile], (Novartis) 1.0 mg/kg was orally administered in the experimental group and vehicle (hydroxypropyl cellulose 0.3% in water) was given in the control group once daily for 14 d. Direct transcutaneous intrathecal injections of estrogen and letrozole were performed as described by Mestre et al[18].\nBefore OVX or letrozole treatment, vaginal smears were used to identify the estrus cycle phase. OVX or sham surgery was performed on female prenatal-stress offspring on day 56. The aromatase inhibitor letrozole [4,4’-(1H-1,2,4-triazol-l-yl-methylene)-bis-benzonitrile], (Novartis) 1.0 mg/kg was orally administered in the experimental group and vehicle (hydroxypropyl cellulose 0.3% in water) was given in the control group once daily for 14 d. Direct transcutaneous intrathecal injections of estrogen and letrozole were performed as described by Mestre et al[18].\n In vivo single fiber recording of L6-S2 DRG rootlets Multiunit afferent discharges were recorded from the distal ends of L6-S2 dorsal rootlets decentralized close to their entry into the spinal cord. A bundle of multiunit fibers was distinguished into 2-6 single units off-line using wave mark template matching in Spike 2 software that differentiates spikes by shape and amplitude. Colonic afferent fibers were identified by their response to graded colorectal distention (CRD). A balloon was used to distend the colorectum. Isoflurane, 2.5%, followed by 50 mg/kg intraperitoneal sodium pentobarbital induced general anesthesia that was maintained by infusing a mixture of pentobarbital sodium + pancuronium bromide + saline by intravenous infusion through the tail vein. The adequacy of anesthesia was confirmed by the absence of corneal and pupillary reflexes and stability of the end-tidal CO2 level. 
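As a purely illustrative aside (the actual discrimination of single units was done with Spike 2 wave-mark templates, not with this code), matching spike snippets to unit templates by shape and amplitude can be sketched in a few lines of Python; every array name and threshold below is hypothetical:

```python
import numpy as np

def assign_units(spike_waveforms, templates, max_dist=0.5):
    """Assign each detected spike snippet to the nearest unit template.

    spike_waveforms : (n_spikes, n_samples) array of aligned waveform snippets.
    templates       : (n_units, n_samples) array of mean unit waveforms.
    Returns an integer unit label per spike, or -1 if no template is close enough.
    """
    labels = np.full(len(spike_waveforms), -1, dtype=int)
    for i, waveform in enumerate(spike_waveforms):
        # Euclidean distance in waveform space is sensitive to both shape and amplitude
        dist = np.linalg.norm(templates - waveform, axis=1) / np.sqrt(templates.shape[1])
        if dist.min() < max_dist:
            labels[i] = int(np.argmin(dist))
    return labels
```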
A tracheotomy tube connected to a ventilator system provided a mixture of room air and oxygen. Expired CO2 was monitored and maintained at 3.5%. Body temperature was monitored and maintained at 37 °C by a servo-controlled heating blanket. A laminectomy from T12 to S2 exposed the spinal cord. The head was stabilized in a stereotaxic frame.\nMultiunit afferent discharges were recorded from the distal ends of L6-S2 dorsal rootlets decentralized close to their entry into the spinal cord. A bundle of multiunit fibers was distinguished into 2-6 single units off-line using wave mark template matching in Spike 2 software that differentiates spikes by shape and amplitude. Colonic afferent fibers were identified by their response to graded colorectal distention (CRD). A balloon was used to distend the colorectum. Isoflurane, 2.5%, followed by 50 mg/kg intraperitoneal sodium pentobarbital induced general anesthesia that was maintained by infusing a mixture of pentobarbital sodium + pancuronium bromide + saline by intravenous infusion through the tail vein. The adequacy of anesthesia was confirmed by the absence of corneal and pupillary reflexes and stability of the end-tidal CO2 level. A tracheotomy tube connected to a ventilator system provided a mixture of room air and oxygen. Expired CO2 was monitored and maintained at 3.5%. Body temperature was monitored and maintained at 37 °C by a servo-controlled heating blanket. A laminectomy from T12 to S2 exposed the spinal cord. The head was stabilized in a stereotaxic frame.\n In vitro patch clamp recordings in colonic DRG neurons Retrograde fluorescence label injections: Labeling of colon-projecting DRG neurons was performed as previously described[13]. Under general 2% isoflurane anesthesia, the lipid soluble fluorescent dye, 1,1’-dioleyl-3,3,3’,3’-tetramethylindocarbocyanine methane-sulfonate (9-DiI, Invitrogen, Carlsbad, CA) was injected (50 mg/mL) into the muscularis externa on the exposed distal colon in 8 to 10 sites (2 μL each site). To prevent leakage, the needle was kept in place for 1 min following each injection.\nDissociation and culture of DRG neurons: Rats were deeply anesthetized with isoflurane followed by decapitation. Lumbosacral (L6–S2) DRGs were collected in ice cold and oxygenated dissecting solution, containing (in mM) 130 NaCl, 5 KCl, 2 KH2PO4, 1.5 CaCl2, 6 MgSO4, 10 glucose, and 10 HEPES, pH 7.2 (305 mOsm). After removal of the connective tissue, the ganglia were transferred to a 5 mL dissecting solution containing collagenase D (1.8 mg/mL; Roche) and trypsin (1.0 mg/mL; Sigma, St Louis, MO), and incubated for 1.5 h at 34.5 °C. DRGs were then taken from the enzyme solution, washed, and put in 0.5-2 mL of the dissecting solution containing DNase (0.5 mg/mL; Sigma). Cells were subsequently dissociated by gentle trituration 10 to 15 times with fire-polished glass pipettes and placed on acid-cleaned glass coverslips. The dissociated DRG neurons were kept in 1 mL DMEM (with 10% FBS) in an incubator (95% O2/5% CO2) at 37 °C overnight.\nWhole-cell patch clamp recordings from dissociated DRG neurons: Before each experiment, a glass coverslip with DRG neurons was transferred to a recording chamber perfused (1.5 mL/min) with external solution containing (10 mM): 130 NaCl, 5 KCl, 2 KH2PO4, 2.5 CaCl2, 1 MgCl2, 10 HEPES, and 10 glucose, pH adjusted to pH 7.4 with NaOH (300 mOsm) at room temperature. 
Recording pipettes, pulled from borosilicate glass tubing, with resistance of 1-5 MΩ, were filled with solution containing (in mM): 100 KMeSO3, 40 KCl, and 10 HEPES, pH 7.25 adjusted with KOH (290 mOsm). DiI-labeled neurons were identified by fluorescence microscopy. Whole-cell currents and voltage were recorded from DiI-labeled neurons using a Dagan 3911 patch clamp amplifier. Data were acquired and analyzed by pCLAMP 9.2 (Molecular Devices, Sunnyvale, CA). The currents were filtered at 2–5 kHz and sampled at 50 or 100 s per point. While still under voltage clamp, the Clampex Membrane Test program (Molecular Devices) was used to determine membrane capacitance (Cm) and membrane resistance (Rm), during a 10 ms, 5 mV depolarizing pulse form a holding potential of −60 mV. The configuration was then switched to current clamp (0 pA) to determine other electrophysiological properties. After stabilizing for 2–3 min, the resting membrane potential was measured. The minimum acceptable resting membrane potential was −40 mV. Spontaneous activity (SA) was then recorded over two 30 s periods separated by 60 s without recording, as described by Bedi et al[19].\nTransient A-type K+ current (IA) recording method in patch studies: To record voltage-gated K+ current (Kv), Na+ in control external solution was replaced with equimolar choline and the Ca2+ concentration was reduced to 0.03 mM to suppress Ca2+ currents and to prevent Ca2+ channels becoming Na+ conducting. The reduced external Ca2+ would also be expected to suppress Ca2+-activated K+ currents. The current traces of Kv in DRG neurons were measured at different holding potentials. The membrane potential was held at −100 mV and voltage steps were from −40 to +30 mV to record the total Kv. The membrane potential was held at −50 mV to record the sustained Kv. The IA currents were calculated by subtracting the sustained current from the total current. The current density (in pA/pF) was calculated by dividing the current amplitude by cell membrane capacitance.\nRetrograde fluorescence label injections: Labeling of colon-projecting DRG neurons was performed as previously described[13]. Under general 2% isoflurane anesthesia, the lipid soluble fluorescent dye, 1,1’-dioleyl-3,3,3’,3’-tetramethylindocarbocyanine methane-sulfonate (9-DiI, Invitrogen, Carlsbad, CA) was injected (50 mg/mL) into the muscularis externa on the exposed distal colon in 8 to 10 sites (2 μL each site). To prevent leakage, the needle was kept in place for 1 min following each injection.\nDissociation and culture of DRG neurons: Rats were deeply anesthetized with isoflurane followed by decapitation. Lumbosacral (L6–S2) DRGs were collected in ice cold and oxygenated dissecting solution, containing (in mM) 130 NaCl, 5 KCl, 2 KH2PO4, 1.5 CaCl2, 6 MgSO4, 10 glucose, and 10 HEPES, pH 7.2 (305 mOsm). After removal of the connective tissue, the ganglia were transferred to a 5 mL dissecting solution containing collagenase D (1.8 mg/mL; Roche) and trypsin (1.0 mg/mL; Sigma, St Louis, MO), and incubated for 1.5 h at 34.5 °C. DRGs were then taken from the enzyme solution, washed, and put in 0.5-2 mL of the dissecting solution containing DNase (0.5 mg/mL; Sigma). Cells were subsequently dissociated by gentle trituration 10 to 15 times with fire-polished glass pipettes and placed on acid-cleaned glass coverslips. 
The dissociated DRG neurons were kept in 1 mL DMEM (with 10% FBS) in an incubator (95% O2/5% CO2) at 37 °C overnight.\nWhole-cell patch clamp recordings from dissociated DRG neurons: Before each experiment, a glass coverslip with DRG neurons was transferred to a recording chamber perfused (1.5 mL/min) with external solution containing (10 mM): 130 NaCl, 5 KCl, 2 KH2PO4, 2.5 CaCl2, 1 MgCl2, 10 HEPES, and 10 glucose, pH adjusted to pH 7.4 with NaOH (300 mOsm) at room temperature. Recording pipettes, pulled from borosilicate glass tubing, with resistance of 1-5 MΩ, were filled with solution containing (in mM): 100 KMeSO3, 40 KCl, and 10 HEPES, pH 7.25 adjusted with KOH (290 mOsm). DiI-labeled neurons were identified by fluorescence microscopy. Whole-cell currents and voltage were recorded from DiI-labeled neurons using a Dagan 3911 patch clamp amplifier. Data were acquired and analyzed by pCLAMP 9.2 (Molecular Devices, Sunnyvale, CA). The currents were filtered at 2–5 kHz and sampled at 50 or 100 s per point. While still under voltage clamp, the Clampex Membrane Test program (Molecular Devices) was used to determine membrane capacitance (Cm) and membrane resistance (Rm), during a 10 ms, 5 mV depolarizing pulse form a holding potential of −60 mV. The configuration was then switched to current clamp (0 pA) to determine other electrophysiological properties. After stabilizing for 2–3 min, the resting membrane potential was measured. The minimum acceptable resting membrane potential was −40 mV. Spontaneous activity (SA) was then recorded over two 30 s periods separated by 60 s without recording, as described by Bedi et al[19].\nTransient A-type K+ current (IA) recording method in patch studies: To record voltage-gated K+ current (Kv), Na+ in control external solution was replaced with equimolar choline and the Ca2+ concentration was reduced to 0.03 mM to suppress Ca2+ currents and to prevent Ca2+ channels becoming Na+ conducting. The reduced external Ca2+ would also be expected to suppress Ca2+-activated K+ currents. The current traces of Kv in DRG neurons were measured at different holding potentials. The membrane potential was held at −100 mV and voltage steps were from −40 to +30 mV to record the total Kv. The membrane potential was held at −50 mV to record the sustained Kv. The IA currents were calculated by subtracting the sustained current from the total current. The current density (in pA/pF) was calculated by dividing the current amplitude by cell membrane capacitance.\n Real time reverse transcription-polymerase chain reaction (RT-PCR) Total RNA was extracted using RNeasy Mini Kits (QIAGEN, Valencia, CA). One microgram of total RNA was reverse-transcribed using the SuperScriptTM III First-Strand Synthesis System. PCR assays were performed on a StepOnePlus thermal cycler with 18 s as the normalizer using Applied Biosystems primer/probe set Rn02531967_s1 directed against the translated exon IX. Fold-change relative to control was calculated using the ΔΔCt method (Applied Biosystems).\nTotal RNA was extracted using RNeasy Mini Kits (QIAGEN, Valencia, CA). One microgram of total RNA was reverse-transcribed using the SuperScriptTM III First-Strand Synthesis System. PCR assays were performed on a StepOnePlus thermal cycler with 18 s as the normalizer using Applied Biosystems primer/probe set Rn02531967_s1 directed against the translated exon IX. 
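For readers unfamiliar with the ΔΔCt calculation used for relative quantification against the 18S normalizer, a minimal sketch is shown below; the Ct arrays and group means are hypothetical placeholders, not study data:

```python
import numpy as np

def ddct_fold_change(ct_target, ct_18s, mean_ct_target_ctrl, mean_ct_18s_ctrl):
    """Fold-change relative to control by the delta-delta-Ct method."""
    dct = np.asarray(ct_target, float) - np.asarray(ct_18s, float)  # normalize to 18S
    dct_ctrl = mean_ct_target_ctrl - mean_ct_18s_ctrl               # control baseline
    ddct = dct - dct_ctrl
    return 2.0 ** (-ddct)                                           # fold change vs control

# Example with made-up Ct values:
# ddct_fold_change([24.1, 23.8], [12.0, 11.9], 25.0, 12.1)  ->  approximately [1.74, 2.0]
```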
Fold-change relative to control was calculated using the ΔΔCt method (Applied Biosystems).\n Western blot Samples were lysed in RIPA buffer containing protease inhibitor cocktail and phenylmethanesulfonyl fluoride. Lysates were incubated for 30 min on ice and then centrifuged at 10 000 × g for 10 min at 4 °C. The protein concentration in the supernatant was determined using bicinchoninic acid (BCA) assay kits with bovine serum albumin as a standard. Equal amounts of protein (30 μg per lane) were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to nitrocellulose membranes (Bio-Rad, United States). The membranes were blocked in Li-Cor blocking buffer for 1 h at room temperature and then incubated with primary antibodies. BDNF antibody (Santa Cruz Biotechnologies, Santa Cruz, CA) was used at 1:200 dilution; nerve growth factor (NGF) antibody (Abcam, MA) was used at 1:1000 dilution; β-actin antibody (Sigma Aldrich, St Louis, MO) was used at 1:5000 dilution. The secondary antibodies were donkey anti-rabbit Alexa Fluor 680 (Invitrogen) and goat anti-mouse IRDye 800 (Rockland). Images were acquired and band intensities measured using a Li-Cor Odyssey system (Li-Cor, Lincoln, NE).\nSamples were lysed in RIPA buffer containing protease inhibitor cocktail and phenylmethanesulfonyl fluoride. Lysates were incubated for 30 min on ice and then centrifuged at 10 000 × g for 10 min at 4 °C. The protein concentration in the supernatant was determined using bicinchoninic acid (BCA) assay kits with bovine serum albumin as a standard. Equal amounts of protein (30 μg per lane) were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to nitrocellulose membranes (Bio-Rad, United States). The membranes were blocked in Li-Cor blocking buffer for 1 h at room temperature and then incubated with primary antibodies. BDNF antibody (Santa Cruz Biotechnologies, Santa Cruz, CA) was used at 1:200 dilution; nerve growth factor (NGF) antibody (Abcam, MA) was used at 1:1000 dilution; β-actin antibody (Sigma Aldrich, St Louis, MO) was used at 1:5000 dilution. The secondary antibodies were donkey anti-rabbit Alexa Fluor 680 (Invitrogen) and goat anti-mouse IRDye 800 (Rockland). Images were acquired and band intensities measured using a Li-Cor Odyssey system (Li-Cor, Lincoln, NE).\n Immunofluorescence Frozen sections of colon tissue from control, CAS, CPS and CPS + CAS female rats were mounted on glass slides, and rehydrated in phosphate buffered saline at room temperature. The slides were treated for antigen retrieval and blocked with 10% normal goat serum diluted in 0.3% phosphate buffered saline-Triton for 1 h, and then incubated with NGF primary antibody in antibody diluent (Renoir Red, Biocare Medical, Concord, CA) at 4 °C overnight. The slides were exposed to fluorescent dye-conjugated secondary antibody for 2 h at room temperature, counterstained with 4',6-diamidino-2-phenylindole and coverslipped. Images were taken in fluorescence mode on an Olympus laser scanning confocal microscope and the average signal intensity was calculated by the bundled software.\nFrozen sections of colon tissue from control, CAS, CPS and CPS + CAS female rats were mounted on glass slides, and rehydrated in phosphate buffered saline at room temperature. 
The slides were treated for antigen retrieval and blocked with 10% normal goat serum diluted in 0.3% phosphate buffered saline-Triton for 1 h, and then incubated with NGF primary antibody in antibody diluent (Renoir Red, Biocare Medical, Concord, CA) at 4 °C overnight. The slides were exposed to fluorescent dye-conjugated secondary antibody for 2 h at room temperature, counterstained with 4',6-diamidino-2-phenylindole and coverslipped. Images were taken in fluorescence mode on an Olympus laser scanning confocal microscope and the average signal intensity was calculated by the bundled software.\n Serum estradiol and norepinephrine levels Serum estradiol, adrenocorticotropic hormone (ACTH), and norepinephrine levels were measured using specific enzyme-linked immunosorbent assay kits for each analyte (CSB-E05110r, CSB-E06875r, CSB-E07022, Cusabio Bioteck CO., United States) following the manufacturer’s instructions.\nSerum estradiol, adrenocorticotropic hormone (ACTH), and norepinephrine levels were measured using specific enzyme-linked immunosorbent assay kits for each analyte (CSB-E05110r, CSB-E06875r, CSB-E07022, Cusabio Bioteck CO., United States) following the manufacturer’s instructions.\n Data analysis Single fiber responses (impulses/second) to CRD were calculated by subtracting SA from the mean 30 s maximal activity during distension. Fibers were considered responsive if CRD increased their activity to 30% greater than the baseline value. Mechanosensitive single units were classified as high threshold (> 20 mmHg) or low threshold (≤ 20 mmHg) on the basis of their response threshold and profile during CRD. Single fiber activity data were analyzed by analysis of variance with repeated measures; CRD intensity was the repeated factor and the experimental group was the between-group factor. If significant main effects were present, the individual means were compared using the Fisher post-hoc test. All authors had access to the study data and reviewed and approved the final manuscript.\nSingle fiber responses (impulses/second) to CRD were calculated by subtracting SA from the mean 30 s maximal activity during distension. Fibers were considered responsive if CRD increased their activity to 30% greater than the baseline value. Mechanosensitive single units were classified as high threshold (> 20 mmHg) or low threshold (≤ 20 mmHg) on the basis of their response threshold and profile during CRD. Single fiber activity data were analyzed by analysis of variance with repeated measures; CRD intensity was the repeated factor and the experimental group was the between-group factor. If significant main effects were present, the individual means were compared using the Fisher post-hoc test. All authors had access to the study data and reviewed and approved the final manuscript.", "The Institutional Animal Care and Use Committee of the University of Texas Medical Branch at Galveston, TX approved all animal procedures. Experiments were performed on pregnant Sprague Dawley rats and their 8-wk-old to 16-wk-old male and female offspring. Rats were housed individual cages with access to food and water in a room with controlled conditions (22 ± 2 °C, relative humidity of 50% ± 5%), and a 12 h light/12 h dark cycle.", "Pregnant dams were subjected to a CPS protocol that consisted of a random sequence of twice-daily applications of one of three stress sessions, a 1-h water-avoidance, 45-min cold-restraint, or a 20-min forced swim starting on day 6 and continuing until delivery on day 21. 
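The exact randomization of the twice-daily prenatal stress sessions is not specified; as one hedged illustration of how such a schedule could be drawn (session names and durations taken from the protocol above, everything else assumed):

```python
import random

STRESSORS = {"water avoidance": 60, "cold restraint": 45, "forced swim": 20}  # minutes

def cps_schedule(start_day=6, end_day=21, sessions_per_day=2, seed=0):
    """Draw a random twice-daily stressor sequence for gestation days 6-21."""
    rng = random.Random(seed)
    schedule = []
    for day in range(start_day, end_day + 1):
        for session in range(1, sessions_per_day + 1):
            stressor = rng.choice(sorted(STRESSORS))
            schedule.append((day, session, stressor, STRESSORS[stressor]))
    return schedule

for entry in cps_schedule()[:4]:
    print(entry)  # (gestation day, session number, stressor, duration in minutes)
```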
Male and female offspring of the stressed dams were designated CPS rats. Control dams received sham stress and their offspring were designated control rats. As adults at 8-16 wk of age, control and prenatally stressed offspring were challenged by the same CAS protocol for 9 d. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily letrozole treatment was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol. A schematic diagram of the study procedures is shown in Figure 1A.\nPrimary afferent responses to colorectal distention. A: Chronic prenatal stress (CPS) plus chronic adult stress (CAS) model. Pregnant dams were subjected to prenatal stress from on day 11 of gestation. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily Letrozole was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol; B: Spontaneous activity (SA) of single afferent units in male and female control rats (n = 70 fibers in 6 rats in each group, t-test, aP < 0.05); C: Average response to graded colorectal distention (CRD) of 56 afferent fibers in 6 male and 70 afferent fibers in 6 female control rats; two-way analysis of variance (ANOVA; aP < 0.05 vs the same pressure male group); D: Responses of low-threshold (LT) fibers to CRD in 42 fibers in 6 male rats and 40 fibers in 6 female control rats (ANOVA, aP < 0.05 vs the same pressure male group); E: Responses of high-threshold (HT) afferent fibers to CRD in 14 fibers in 6 male and 29 fibers in 6 female control rats; F: Effects of CAS on afferent fiber responses to CRD from 59 fibers in 6 control and 99 fibers in 6 CPS female rats; (two-way ANOVA, aP < 0.05 vs the same pressure control group, bP < 0.05 vs the same pressure CPS group); G: Effects of CAS on afferent fiber responses to CRD in control and CPS male rats (n = 6 rats, 57 fibers for control and 95 fibers for CPS female group; two-way ANOVA, aP < 0.05 vs the same pressure-control group). ", "Before OVX or letrozole treatment, vaginal smears were used to identify the estrus cycle phase. OVX or sham surgery was performed on female prenatal-stress offspring on day 56. The aromatase inhibitor letrozole [4,4’-(1H-1,2,4-triazol-l-yl-methylene)-bis-benzonitrile], (Novartis) 1.0 mg/kg was orally administered in the experimental group and vehicle (hydroxypropyl cellulose 0.3% in water) was given in the control group once daily for 14 d. Direct transcutaneous intrathecal injections of estrogen and letrozole were performed as described by Mestre et al[18].", "Multiunit afferent discharges were recorded from the distal ends of L6-S2 dorsal rootlets decentralized close to their entry into the spinal cord. A bundle of multiunit fibers was distinguished into 2-6 single units off-line using wave mark template matching in Spike 2 software that differentiates spikes by shape and amplitude. Colonic afferent fibers were identified by their response to graded colorectal distention (CRD). A balloon was used to distend the colorectum. Isoflurane, 2.5%, followed by 50 mg/kg intraperitoneal sodium pentobarbital induced general anesthesia that was maintained by infusing a mixture of pentobarbital sodium + pancuronium bromide + saline by intravenous infusion through the tail vein. The adequacy of anesthesia was confirmed by the absence of corneal and pupillary reflexes and stability of the end-tidal CO2 level. 
A tracheotomy tube connected to a ventilator system provided a mixture of room air and oxygen. Expired CO2 was monitored and maintained at 3.5%. Body temperature was monitored and maintained at 37 °C by a servo-controlled heating blanket. A laminectomy from T12 to S2 exposed the spinal cord. The head was stabilized in a stereotaxic frame.", "Retrograde fluorescence label injections: Labeling of colon-projecting DRG neurons was performed as previously described[13]. Under general 2% isoflurane anesthesia, the lipid soluble fluorescent dye, 1,1’-dioleyl-3,3,3’,3’-tetramethylindocarbocyanine methane-sulfonate (9-DiI, Invitrogen, Carlsbad, CA) was injected (50 mg/mL) into the muscularis externa on the exposed distal colon in 8 to 10 sites (2 μL each site). To prevent leakage, the needle was kept in place for 1 min following each injection.\nDissociation and culture of DRG neurons: Rats were deeply anesthetized with isoflurane followed by decapitation. Lumbosacral (L6–S2) DRGs were collected in ice cold and oxygenated dissecting solution, containing (in mM) 130 NaCl, 5 KCl, 2 KH2PO4, 1.5 CaCl2, 6 MgSO4, 10 glucose, and 10 HEPES, pH 7.2 (305 mOsm). After removal of the connective tissue, the ganglia were transferred to a 5 mL dissecting solution containing collagenase D (1.8 mg/mL; Roche) and trypsin (1.0 mg/mL; Sigma, St Louis, MO), and incubated for 1.5 h at 34.5 °C. DRGs were then taken from the enzyme solution, washed, and put in 0.5-2 mL of the dissecting solution containing DNase (0.5 mg/mL; Sigma). Cells were subsequently dissociated by gentle trituration 10 to 15 times with fire-polished glass pipettes and placed on acid-cleaned glass coverslips. The dissociated DRG neurons were kept in 1 mL DMEM (with 10% FBS) in an incubator (95% O2/5% CO2) at 37 °C overnight.\nWhole-cell patch clamp recordings from dissociated DRG neurons: Before each experiment, a glass coverslip with DRG neurons was transferred to a recording chamber perfused (1.5 mL/min) with external solution containing (10 mM): 130 NaCl, 5 KCl, 2 KH2PO4, 2.5 CaCl2, 1 MgCl2, 10 HEPES, and 10 glucose, pH adjusted to pH 7.4 with NaOH (300 mOsm) at room temperature. Recording pipettes, pulled from borosilicate glass tubing, with resistance of 1-5 MΩ, were filled with solution containing (in mM): 100 KMeSO3, 40 KCl, and 10 HEPES, pH 7.25 adjusted with KOH (290 mOsm). DiI-labeled neurons were identified by fluorescence microscopy. Whole-cell currents and voltage were recorded from DiI-labeled neurons using a Dagan 3911 patch clamp amplifier. Data were acquired and analyzed by pCLAMP 9.2 (Molecular Devices, Sunnyvale, CA). The currents were filtered at 2–5 kHz and sampled at 50 or 100 s per point. While still under voltage clamp, the Clampex Membrane Test program (Molecular Devices) was used to determine membrane capacitance (Cm) and membrane resistance (Rm), during a 10 ms, 5 mV depolarizing pulse form a holding potential of −60 mV. The configuration was then switched to current clamp (0 pA) to determine other electrophysiological properties. After stabilizing for 2–3 min, the resting membrane potential was measured. The minimum acceptable resting membrane potential was −40 mV. 
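Membrane capacitance and input resistance were read out from the Clampex Membrane Test; conceptually, both quantities can be estimated from the current response to the 10 ms, 5 mV test pulse, as in the rough sketch below (trace arrays are hypothetical and series resistance is ignored):

```python
import numpy as np

def passive_properties(current_pA, time_ms, v_step_mV=5.0, step_ms=10.0):
    """Estimate input resistance (MOhm) and capacitance (pF) from a small voltage step.

    current_pA : current trace (pA) around the depolarizing test pulse.
    time_ms    : matching time base (ms), with t = 0 at step onset.
    """
    i = np.asarray(current_pA, float)
    t = np.asarray(time_ms, float)
    i_hold = i[t < 0].mean()                                   # holding current before the step
    i_steady = i[(t > step_ms - 2.0) & (t < step_ms)].mean()   # steady state near step end
    r_in_mohm = v_step_mV / (i_steady - i_hold) * 1e3          # mV/pA gives GOhm; x1e3 -> MOhm
    during = (t >= 0) & (t <= step_ms)
    q_fC = np.trapz(i[during] - i_steady, t[during])           # capacitive charge, pA*ms = fC
    c_pF = q_fC / v_step_mV                                    # fC/mV = pF
    return r_in_mohm, c_pF
```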
Spontaneous activity (SA) was then recorded over two 30 s periods separated by 60 s without recording, as described by Bedi et al[19].\nTransient A-type K+ current (IA) recording method in patch studies: To record voltage-gated K+ current (Kv), Na+ in control external solution was replaced with equimolar choline and the Ca2+ concentration was reduced to 0.03 mM to suppress Ca2+ currents and to prevent Ca2+ channels becoming Na+ conducting. The reduced external Ca2+ would also be expected to suppress Ca2+-activated K+ currents. The current traces of Kv in DRG neurons were measured at different holding potentials. The membrane potential was held at −100 mV and voltage steps were from −40 to +30 mV to record the total Kv. The membrane potential was held at −50 mV to record the sustained Kv. The IA currents were calculated by subtracting the sustained current from the total current. The current density (in pA/pF) was calculated by dividing the current amplitude by cell membrane capacitance.", "Total RNA was extracted using RNeasy Mini Kits (QIAGEN, Valencia, CA). One microgram of total RNA was reverse-transcribed using the SuperScriptTM III First-Strand Synthesis System. PCR assays were performed on a StepOnePlus thermal cycler with 18 s as the normalizer using Applied Biosystems primer/probe set Rn02531967_s1 directed against the translated exon IX. Fold-change relative to control was calculated using the ΔΔCt method (Applied Biosystems).", "Samples were lysed in RIPA buffer containing protease inhibitor cocktail and phenylmethanesulfonyl fluoride. Lysates were incubated for 30 min on ice and then centrifuged at 10 000 × g for 10 min at 4 °C. The protein concentration in the supernatant was determined using bicinchoninic acid (BCA) assay kits with bovine serum albumin as a standard. Equal amounts of protein (30 μg per lane) were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to nitrocellulose membranes (Bio-Rad, United States). The membranes were blocked in Li-Cor blocking buffer for 1 h at room temperature and then incubated with primary antibodies. BDNF antibody (Santa Cruz Biotechnologies, Santa Cruz, CA) was used at 1:200 dilution; nerve growth factor (NGF) antibody (Abcam, MA) was used at 1:1000 dilution; β-actin antibody (Sigma Aldrich, St Louis, MO) was used at 1:5000 dilution. The secondary antibodies were donkey anti-rabbit Alexa Fluor 680 (Invitrogen) and goat anti-mouse IRDye 800 (Rockland). Images were acquired and band intensities measured using a Li-Cor Odyssey system (Li-Cor, Lincoln, NE).", "Frozen sections of colon tissue from control, CAS, CPS and CPS + CAS female rats were mounted on glass slides, and rehydrated in phosphate buffered saline at room temperature. The slides were treated for antigen retrieval and blocked with 10% normal goat serum diluted in 0.3% phosphate buffered saline-Triton for 1 h, and then incubated with NGF primary antibody in antibody diluent (Renoir Red, Biocare Medical, Concord, CA) at 4 °C overnight. The slides were exposed to fluorescent dye-conjugated secondary antibody for 2 h at room temperature, counterstained with 4',6-diamidino-2-phenylindole and coverslipped. 
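Returning briefly to the voltage-clamp protocol described above: the A-type current is obtained as a simple subtraction of the sustained current (evoked from −50 mV) from the total current (evoked from −100 mV), then normalized to cell size. A minimal sketch with hypothetical arrays:

```python
import numpy as np

def isolate_ia(i_total_pA, i_sustained_pA, cm_pF):
    """Isolate the transient A-type K+ current (IA) and express it as current density.

    i_total_pA     : peak K+ currents evoked from a -100 mV holding potential, per voltage step.
    i_sustained_pA : currents evoked from -50 mV, where IA is largely inactivated.
    cm_pF          : membrane capacitance of the same neuron.
    """
    i_total = np.asarray(i_total_pA, float)
    i_sustained = np.asarray(i_sustained_pA, float)
    i_a = i_total - i_sustained                       # transient component, step by step
    return {
        "IA_pA": i_a,
        "IA_density_pA_per_pF": i_a / cm_pF,          # current density used for comparison
        "IK_density_pA_per_pF": i_sustained / cm_pF,
    }
```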
Images were taken in fluorescence mode on an Olympus laser scanning confocal microscope and the average signal intensity was calculated by the bundled software.", "Serum estradiol, adrenocorticotropic hormone (ACTH), and norepinephrine levels were measured using specific enzyme-linked immunosorbent assay kits for each analyte (CSB-E05110r, CSB-E06875r, CSB-E07022, Cusabio Bioteck CO., United States) following the manufacturer’s instructions.", "Single fiber responses (impulses/second) to CRD were calculated by subtracting SA from the mean 30 s maximal activity during distension. Fibers were considered responsive if CRD increased their activity to 30% greater than the baseline value. Mechanosensitive single units were classified as high threshold (> 20 mmHg) or low threshold (≤ 20 mmHg) on the basis of their response threshold and profile during CRD. Single fiber activity data were analyzed by analysis of variance with repeated measures; CRD intensity was the repeated factor and the experimental group was the between-group factor. If significant main effects were present, the individual means were compared using the Fisher post-hoc test. All authors had access to the study data and reviewed and approved the final manuscript.", " Effects of CPS plus CAS on primary afferent responses to CRD in male and female rats The basal activity of a spinal afferent fiber was defined as the average number of action potentials per second (impulses/sec) in the 60 s period before the onset of a distention stimulus. In male controls, 66% of the afferent fibers under study displayed SA and SA was significantly higher in female controls than in male controls (0.71 ± 0.21 vs 1.24 ± 0.20 imp/sec; Figure 1B). The average single fiber activity in response to CRD was significantly higher in female control rats compared with male controls (Figure 1C). We found that the enhanced sensitization in female rats mainly came from the low-threshold fibers (Figure 1D and E).\nTo assess the effects of CPS + CAS on colon afferent fiber activities, we compared average single colon afferent fiber activities projecting from dorsal roots S1-L6 in response to CRD in male and female control, CPS, control + CAS and CPS + CAS rats recorded approximately 24 h after the last stressor. In females, CPS significantly increased single-unit afferent activity in response to CRD vs control female rats (Figure 1F). CAS alone enhanced single-unit activity compared with control. The increase in average afferent responses after CAS in prenatally stressed female rats (44.0%) was significantly greater than the increase in female control rats (39.3%). In males, CPS had no significant effect on primary afferent responses (Figure 1G). When we compared males to females within each experimental group, we found that the average single fiber activity was significantly higher in female compared with male CPS + CAS rats (Figure 1F, G). The increased activity may contribute to the enhanced female visceral hypersensitivity previously reported in this model. Average single-fiber activities were significantly greater in control and CPS and CAS females than in their corresponding male experimental groups (Figure 1F and G). Both CAS and CPS + CAS rats had significantly increased primary afferent responses compared with control and CPS rats. 
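As a compact restatement of the single-fiber quantification described in the Data analysis section, which underlies the fiber responses reported below (response = distention firing minus spontaneous activity; responsiveness at ≥ 30% above baseline; low- versus high-threshold split at 20 mmHg), the following sketch uses hypothetical array names:

```python
import numpy as np

def crd_response(rate_during_hz, spontaneous_hz, thresholds_mmHg):
    """Quantify and classify single-fiber responses to colorectal distention (CRD).

    rate_during_hz  : mean firing over the 30 s of maximal activity during distention.
    spontaneous_hz  : spontaneous activity (impulses/s) before distention.
    thresholds_mmHg : lowest distention pressure that activates each fiber.
    """
    rate = np.asarray(rate_during_hz, float)
    sa = np.asarray(spontaneous_hz, float)
    response = rate - sa                              # distention-evoked activity
    responsive = rate > 1.3 * sa                      # at least 30% above baseline
    fiber_class = np.where(np.asarray(thresholds_mmHg) <= 20, "LT", "HT")
    return response, responsive, fiber_class
```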
Thus, our CPS and CAS protocols sensitized colon-projecting primary afferent fibers, with the greatest effects produced by the combination of CPS + CAS in both males and females.\nThe basal activity of a spinal afferent fiber was defined as the average number of action potentials per second (impulses/sec) in the 60 s period before the onset of a distention stimulus. In male controls, 66% of the afferent fibers under study displayed SA and SA was significantly higher in female controls than in male controls (0.71 ± 0.21 vs 1.24 ± 0.20 imp/sec; Figure 1B). The average single fiber activity in response to CRD was significantly higher in female control rats compared with male controls (Figure 1C). We found that the enhanced sensitization in female rats mainly came from the low-threshold fibers (Figure 1D and E).\nTo assess the effects of CPS + CAS on colon afferent fiber activities, we compared average single colon afferent fiber activities projecting from dorsal roots S1-L6 in response to CRD in male and female control, CPS, control + CAS and CPS + CAS rats recorded approximately 24 h after the last stressor. In females, CPS significantly increased single-unit afferent activity in response to CRD vs control female rats (Figure 1F). CAS alone enhanced single-unit activity compared with control. The increase in average afferent responses after CAS in prenatally stressed female rats (44.0%) was significantly greater than the increase in female control rats (39.3%). In males, CPS had no significant effect on primary afferent responses (Figure 1G). When we compared males to females within each experimental group, we found that the average single fiber activity was significantly higher in female compared with male CPS + CAS rats (Figure 1F, G). The increased activity may contribute to the enhanced female visceral hypersensitivity previously reported in this model. Average single-fiber activities were significantly greater in control and CPS and CAS females than in their corresponding male experimental groups (Figure 1F and G). Both CAS and CPS + CAS rats had significantly increased primary afferent responses compared with control and CPS rats. Thus, our CPS and CAS protocols sensitized colon-projecting primary afferent fibers, with the greatest effects produced by the combination of CPS + CAS in both males and females.\n Increase in excitability of colon-projecting lumbosacral DRG neurons in female CPS + CAS rats To elucidate the electrophysiological basis of enhanced stress-induced primary afferent activity in female rats, we performed patch clamp studies on acutely dissociated retrograde-labeled colon-projecting neurons from the L6-S2 DRGs in control, prenatal stress, adult stress only, and CPS + CAS female rats isolated 24 h after the last adult stressor (Figure 2A). Input resistance (Figure 2B) and rheobase (Figure 2C) were significantly decreased in neurons from CPS + CAS rats compared with the other three groups. The number of action potentials elicited at either 2 × or 3 × the rheobase were significantly greater in adult stress and CPS + CAS neurons compared with control and to CPS neurons (Figure 2D, E). 
CAS significantly increased action potential overshoot with or without CPS (Figure 2F), but it did not significantly alter other electrophysiological characteristics, such as number of spontaneous spikes, membrane capacitance (pF), resting membrane potential, cell diameter, time constant, and DRG neuron action potential amplitude and duration (Table 1).\nPatch clamp recording in colonic dorsal root ganglion neurons from female rats. A: Patch clamp process of cell labeling. Under isoflurane anesthesia, the lipid soluble fluorescent dye 9-DiI was injected into muscularis externa of the exposed distal colon (left figure). Lumbosacral (L6–S2) dorsal root ganglions (upper photograph) were isolated and DiI-labeled neurons were identified by fluorescence microscopy (lower photograph). Electrophysiological properties of each neuron were measured using whole-cell current and voltage clamp protocols (right figure); B: Rheobase from all four experimental groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control or bP < 0.05 vs CAS); C: Representative action potentials (APs) elicited by current injection at 2 × the rheobase in neurons from control, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS female rats; D: Membrane input resistance from all four groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); E: Number of APs elicited by current injection at either 2 × and 3 × the rheobase in all four experimental groups (two-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: AP overshoot recorded from all four experimental groups (aP < 0.05 vs control); G: The proportion of neurons from each experimental group exhibiting spontaneous APs. Red numbers represent spontaneous AP firing cells; black numbers represent total cells; H: Representative total, IK and IA current tracings and average values of potassium currents: Itotal, IK and IA are shown in female CPS + CAS, CAS, CPS (n = 15 neurons, from 5 rats in each group), and control groups (n = 12 neurons from 5 rats); two-way ANOVA, aP < 0.05 vs each control group. \nElectrophysiological characteristics of colon related DRG neuron\nValues means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. CAS: Chronic adult stress; CPS: Chronic prenatal stress.\nThe percentage of neurons with SA in was significantly greater in CPS + CAS rats than in control or CPS only rats (Figure 2G). Under voltage clamp conditions (Figure 2H), neurons from female CPS + CAS, CAS, CPS and control groups had IA and sustained outward rectifier K+ currents (IK). Compared with the other three groups, DRG neurons from CPS + CAS rats had significantly reduced average IA (P < 0.05). The average IK density was decreased but the change was not significant.\nTo elucidate the electrophysiological basis of enhanced stress-induced primary afferent activity in female rats, we performed patch clamp studies on acutely dissociated retrograde-labeled colon-projecting neurons from the L6-S2 DRGs in control, prenatal stress, adult stress only, and CPS + CAS female rats isolated 24 h after the last adult stressor (Figure 2A). Input resistance (Figure 2B) and rheobase (Figure 2C) were significantly decreased in neurons from CPS + CAS rats compared with the other three groups. The number of action potentials elicited at either 2 × or 3 × the rheobase were significantly greater in adult stress and CPS + CAS neurons compared with control and to CPS neurons (Figure 2D, E). 
CAS significantly increased action potential overshoot with or without CPS (Figure 2F), but it did not significantly alter other electrophysiological characteristics, such as number of spontaneous spikes, membrane capacitance (pF), resting membrane potential, cell diameter, time constant, and DRG neuron action potential amplitude and duration (Table 1).\nPatch clamp recording in colonic dorsal root ganglion neurons from female rats. A: Patch clamp process of cell labeling. Under isoflurane anesthesia, the lipid soluble fluorescent dye 9-DiI was injected into muscularis externa of the exposed distal colon (left figure). Lumbosacral (L6–S2) dorsal root ganglions (upper photograph) were isolated and DiI-labeled neurons were identified by fluorescence microscopy (lower photograph). Electrophysiological properties of each neuron were measured using whole-cell current and voltage clamp protocols (right figure); B: Rheobase from all four experimental groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control or bP < 0.05 vs CAS); C: Representative action potentials (APs) elicited by current injection at 2 × the rheobase in neurons from control, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS female rats; D: Membrane input resistance from all four groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); E: Number of APs elicited by current injection at either 2 × and 3 × the rheobase in all four experimental groups (two-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: AP overshoot recorded from all four experimental groups (aP < 0.05 vs control); G: The proportion of neurons from each experimental group exhibiting spontaneous APs. Red numbers represent spontaneous AP firing cells; black numbers represent total cells; H: Representative total, IK and IA current tracings and average values of potassium currents: Itotal, IK and IA are shown in female CPS + CAS, CAS, CPS (n = 15 neurons, from 5 rats in each group), and control groups (n = 12 neurons from 5 rats); two-way ANOVA, aP < 0.05 vs each control group. \nElectrophysiological characteristics of colon related DRG neuron\nValues means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. CAS: Chronic adult stress; CPS: Chronic prenatal stress.\nThe percentage of neurons with SA in was significantly greater in CPS + CAS rats than in control or CPS only rats (Figure 2G). Under voltage clamp conditions (Figure 2H), neurons from female CPS + CAS, CAS, CPS and control groups had IA and sustained outward rectifier K+ currents (IK). Compared with the other three groups, DRG neurons from CPS + CAS rats had significantly reduced average IA (P < 0.05). The average IK density was decreased but the change was not significant.\n Effects of CPS and/or CAS on plasma estrogen concentration We did a vaginal smear test to identify the estrus cycle phases by identifying the vaginal cytological cell types. Estrogen concentration was significantly higher in the CPS proestrus/estrus phase compared with control diestrus, control proestrus/estrus, and CPS diestrus proestrus (P < 0.05; Figure 3A). 
Comparison of the plasma estrogen concentrations in control, CAS, CPS, and CPS + CAS rats showed that CPS significantly increased plasma estrogen levels compared with the control rats and that CAS increased plasma estrogen levels compared with the control and CPS rats (Figure 3B).\nEffects of chronic prenatal stress, chronic adult stress, ovariectomy, and letrozole treatment on plasma estrogen levels in female rats. A: Plasma estrogen level in control and chronic prenatal stress (CPS) rats by estrus cycle phase (n = 8 rats, one-way ANOVA, aP < 0.05 vs control proestrus/estrus (P-E) phase; bP < 0.05 vs CPS diestrus (D) phase); B: Plasma estrogen levels increased in CPS rats and following chronic adult stress (CAS) 24 h after the last adult stressor (n = 8 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); C: Ovariectomy (OVX) significantly reduced CPS female rat plasma estrogen levels before and after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs sham group); D: Letrozole treatment significantly reduced CPS female rat plasma estrogen levels before or after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs vehicle group; cP < 0.0001); E: Plasma norepinephrine levels from control, CAS, CPS and CPS + CAS group female rats (n = 5 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: Plasma adrenocorticotropic hormone (ACTH) levels from control and CPS + CAS group female rats (n = 5 rats, t-test, aP < 0.05 vs control). \nTo determine whether estrogen contributed to stress-induced visceral hypersensitivity in prenatally stressed females, we reduced plasma estrogen levels by either OVX or letrozole treatment. OVX significantly lowered serum estradiol levels before and after CAS (Figure 3C). Treatment was continued throughout CAS. After treatment with letrozole, serum estradiol levels were significantly reduced (Figure 3D). To study the effects of sex and stress on norepinephrine and ACTH levels, we measured plasma norepinephrine levels in female rats from all four experimental groups. CAS alone significantly increased plasma norepinephrine levels compared with both the controls and with CPS alone (Figure 3E). Plasma norepinephrine levels were significantly increased in CPS + CAS rats compared with CAS alone as well as with controls and CPS. Plasma ACTH levels were significantly increased in CPS + CAS rats compared with controls (Figure 3F).\nWe used vaginal smears to identify the estrus cycle phases based on vaginal cytology. Estrogen concentration was significantly higher in the CPS proestrus/estrus phase compared with the control diestrus, control proestrus/estrus, and CPS diestrus phases (P < 0.05; Figure 3A). Comparison of the plasma estrogen concentrations in control, CAS, CPS, and CPS + CAS rats showed that CPS significantly increased plasma estrogen levels compared with the control rats and that CAS increased plasma estrogen levels compared with the control and CPS rats (Figure 3B).\nEffects of chronic prenatal stress, chronic adult stress, ovariectomy, and letrozole treatment on plasma estrogen levels in female rats. 
A: Plasma estrogen level in control and chronic prenatal stress (CPS) rats by estrus cycle phase (n = 8 rats, one-way ANOVA, aP < 0.05 vs control proestrus/estrus (P-E) phase; bP < 0.05 vs CPS diestrus (D) phase); B: Plasma estrogen levels increased in CPS rats and following chronic adult stress (CAS) 24 h after the last adult stressor (n = 8 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); C: Ovariectomy (OVX) significantly reduced CPS female rat plasma estrogen levels before and after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs sham group); D: Letrozole treatment significantly reduced CPS female rat plasma estrogen levels before or after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs vehicle group; cP < 0.0001); E: Plasma norepinephrine levels from control, CAS, CPS and CPS + CAS group female rats (n = 5 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: Plasma adrenocorticotropic hormone (ACTH) levels from control and CPS + CAS group female rats (n = 5 rats, t-test, aP < 0.05 vs control). \nTo determine whether estrogen contributed to stress-induced visceral hypersensitivity in prenatal stressed females, we reduced plasma estrogen levels by either OVX or letrozole treatment. OVX significantly lowered serum estradiol levels before and after CAS (Figure 3C). Treatment was continued throughout CAS. After treatment with letrozole, serum estradiol levels were significantly reduced (Figure 3D). To study the effects of gender and stress on norepinephrine and ACTH levels, we measured plasma norepinephrine levels in female rats from all four experimental groups. CAS alone significantly increased plasma norepinephrine levels compared with both the controls and with CPS alone (Figure 3E) Plasma norepinephrine levels were significantly increased in CPS + CAS rats compared with CAS alone as well as with controls and CPS. Plasma ACTH levels were significantly increased in CPS + CAS rats compared with controls. (Figure 3F).\n Effects of Letrozole treatment on colon DRG neuron excitability We performed patch clamp experiments on acutely isolated retrograde-labeled DRG neurons from CPS + CAS females with or without letrozole treatment 24 h after the last adult stressor. Letrozole treatment significantly increased rheobase (Figure 4A), and significantly reduced input resistance (Figure 4B). Action potential overshoot (Figure 4C) and the number of action potentials elicited by a current injection at either 2 × or 3 × rheobase were significantly reduced by letrozole treatment (Figure 4D). Other electrophysiological properties were not significantly altered (Table 2). We also recorded electromyographic activity to determine whether the reduction in visceral sensitivity in female CPS + CAS rats caused by OVX or systemic letrozole treatment reduced visceromotor responses. The findings demonstrated a significant decrease in excitability of colon-projecting L6-S2 neurons.\nEffects of Letrozole treatment on colon dorsal root ganglion neuron excitability. A: Rheobase (n = 45 cells in 6 rats in each group, t-test, cP < 0.001 vs Veh. + chronic adult stress [CAS] + chronic prenatal stress [CPS]); B: Membrane input resistance (RIn) (t-test, aP < 0.05); C: Action potential (AP) overshoot (t-test, aP < 0.05); D: Number of APs elicited by current injection at 2 × and 3 × rheobase (two-way ANOVA, aP < 0.05; bP < 0.01). \nElectrophysiological characteristics of colon related DRG neuron after Letrozole treatment\nValues are means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. 
CAS: Chronic adult stress; CPS: Chronic prenatal stress.\nWe performed patch clamp experiments on acutely isolated retrograde-labeled DRG neurons from CPS + CAS females with or without letrozole treatment 24 h after the last adult stressor. Letrozole treatment significantly increased rheobase (Figure 4A), and significantly reduced input resistance (Figure 4B). Action potential overshoot (Figure 4C) and the number of action potentials elicited by a current injection at either 2 × or 3 × rheobase were significantly reduced by letrozole treatment (Figure 4D). Other electrophysiological properties were not significantly altered (Table 2). We also recorded electromyographic activity to determine whether the reduction in visceral sensitivity in female CPS + CAS rats caused by OVX or systemic letrozole treatment reduced visceromotor responses. The findings demonstrated a significant decrease in excitability of colon-projecting L6-S2 neurons.\nEffects of Letrozole treatment on colon dorsal root ganglion neuron excitability. A: Rheobase (n = 45 cells in 6 rats in each group, t-test, cP < 0.001 vs Veh. + chronic adult stress [CAS] + chronic prenatal stress [CPS]); B: Membrane input resistance (RIn) (t-test, aP < 0.05); C: Action potential (AP) overshoot (t-test, aP < 0.05); D: Number of APs elicited by current injection at 2 × and 3 × rheobase (two-way ANOVA, aP < 0.05; bP < 0.01). \nElectrophysiological characteristics of colon related DRG neuron after Letrozole treatment\nValues are means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. CAS: Chronic adult stress; CPS: Chronic prenatal stress.\n Spinal cord BDNF levels regulated by estrogen To investigate the effect of estrogen on BDNF expression, we measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and Sham CPS + CAS female rats. Systemic estradiol administration to naïve cycling females produced significant increases in plasma estrogen (Figure 5A), lumbar-sacral spinal cord BDNF mRNA (Figure 5B), and protein (Figure 5C). We also measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and Sham CPS + CAS female rats. BDNF mRNA and protein expression were significantly suppressed by OVX compared with sham rats. Another experiment showed that intrathecal infusion of estrogen into naïve female rats significantly increased BDNF protein levels, which proved that estrogen reversed the experimental results and contributed to the response to visceral pain[13].\nBrain-derived neurotrophic factor expression in lumbar-sacral spinal cord is regulated by estrogen. A: Plasma estrogen levels in cycling females that received a bolus estradiol (E2) infusion on day 1; B: Lumbar-sacral spinal cord brain-derived neurotrophic factor (BDNF) mRNA following bolus estrogen infusion; C: Lumbar-sacral spinal cord BDNF protein following bolus estrogen infusion. (n = 8 rats in each group, two-way ANOVA, aP < 0.05 vs vehicle group). \nTo investigate the effect of estrogen on BDNF expression, we measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and Sham CPS + CAS female rats. Systemic estradiol administration to naïve cycling females produced significant increases in plasma estrogen (Figure 5A), lumbar-sacral spinal cord BDNF mRNA (Figure 5B), and protein (Figure 5C). We also measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and Sham CPS + CAS female rats. BDNF mRNA and protein expression were significantly suppressed by OVX compared with sham rats. 
\n Peripheral NGF level increased in CPS + CAS female rats We examined NGF expression in the colons of females from all four experimental groups by immunohistochemistry (Figure 6A). Morphometric analysis showed that CAS and CPS + CAS significantly increased NGF levels in the colon wall, with the increase in CPS + CAS significantly greater than that of CAS alone (Figure 6B). Western blotting showed that NGF protein was significantly upregulated in CPS + CAS rats compared with controls (Figure 6C).\nNerve growth factor expression level in the colon wall. A: Immunohistochemical staining of nerve growth factor (NGF; green) was detected with nuclear counterstaining (blue) in controls, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS group female rat colon walls. Representative images at × 400 magnification are shown; B: Quantification of NGF levels from the colon wall by immunohistochemistry (IHC) (n = 4 rats in each group, one-way ANOVA, aP < 0.05 vs control group; bP < 0.05 vs CPS group); C: Western blots of NGF protein from control and CPS + CAS female rat colon wall tissue (n = 6 rats in each group, t-test, aP < 0.05 vs control group). ", "The basal activity of a spinal afferent fiber was defined as the average number of action potentials per second (impulses/sec) in the 60 s period before the onset of a distention stimulus. In male controls, 66% of the afferent fibers under study displayed SA, and SA was significantly higher in female controls than in male controls (0.71 ± 0.21 vs 1.24 ± 0.20 imp/sec; Figure 1B). The average single fiber activity in response to CRD was significantly higher in female control rats compared with male controls (Figure 1C). 
We found that the enhanced sensitization in female rats mainly came from the low-threshold fibers (Figure 1D and E).\nTo assess the effects of CPS + CAS on colon afferent fiber activities, we compared average single colon afferent fiber activities projecting from dorsal roots S1-L6 in response to CRD in male and female control, CPS, control + CAS and CPS + CAS rats recorded approximately 24 h after the last stressor. In females, CPS significantly increased single-unit afferent activity in response to CRD vs control female rats (Figure 1F). CAS alone enhanced single-unit activity compared with control. The increase in average afferent responses after CAS in prenatally stressed female rats (44.0%) was significantly greater than the increase in female control rats (39.3%). In males, CPS had no significant effect on primary afferent responses (Figure 1G). When we compared males to females within each experimental group, we found that the average single fiber activity was significantly higher in female compared with male CPS + CAS rats (Figure 1F, G). The increased activity may contribute to the enhanced female visceral hypersensitivity previously reported in this model. Average single-fiber activities were significantly greater in control and CPS and CAS females than in their corresponding male experimental groups (Figure 1F and G). Both CAS and CPS + CAS rats had significantly increased primary afferent responses compared with control and CPS rats. Thus, our CPS and CAS protocols sensitized colon-projecting primary afferent fibers, with the greatest effects produced by the combination of CPS + CAS in both males and females.", "To elucidate the electrophysiological basis of enhanced stress-induced primary afferent activity in female rats, we performed patch clamp studies on acutely dissociated retrograde-labeled colon-projecting neurons from the L6-S2 DRGs in control, prenatal stress, adult stress only, and CPS + CAS female rats isolated 24 h after the last adult stressor (Figure 2A). Input resistance (Figure 2B) and rheobase (Figure 2C) were significantly decreased in neurons from CPS + CAS rats compared with the other three groups. The number of action potentials elicited at either 2 × or 3 × the rheobase were significantly greater in adult stress and CPS + CAS neurons compared with control and to CPS neurons (Figure 2D, E). CAS significantly increased action potential overshoot with or without CPS (Figure 2F), but it did not significantly alter other electrophysiological characteristics, such as number of spontaneous spikes, membrane capacitance (pF), resting membrane potential, cell diameter, time constant, and DRG neuron action potential amplitude and duration (Table 1).\nPatch clamp recording in colonic dorsal root ganglion neurons from female rats. A: Patch clamp process of cell labeling. Under isoflurane anesthesia, the lipid soluble fluorescent dye 9-DiI was injected into muscularis externa of the exposed distal colon (left figure). Lumbosacral (L6–S2) dorsal root ganglions (upper photograph) were isolated and DiI-labeled neurons were identified by fluorescence microscopy (lower photograph). 
Electrophysiological properties of each neuron were measured using whole-cell current and voltage clamp protocols (right figure); B: Rheobase from all four experimental groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control or bP < 0.05 vs CAS); C: Representative action potentials (APs) elicited by current injection at 2 × the rheobase in neurons from control, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS female rats; D: Membrane input resistance from all four groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); E: Number of APs elicited by current injection at 2 × and 3 × the rheobase in all four experimental groups (two-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: AP overshoot recorded from all four experimental groups (aP < 0.05 vs control); G: The proportion of neurons from each experimental group exhibiting spontaneous APs. Red numbers represent spontaneous AP firing cells; black numbers represent total cells; H: Representative total, IK and IA current tracings and average values of potassium currents: Itotal, IK and IA are shown in female CPS + CAS, CAS, CPS (n = 15 neurons, from 5 rats in each group), and control groups (n = 12 neurons from 5 rats); two-way ANOVA, aP < 0.05 vs each control group. \nElectrophysiological characteristics of colon-related DRG neurons\nValues are means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. CAS: Chronic adult stress; CPS: Chronic prenatal stress.\nThe percentage of neurons with SA was significantly greater in CPS + CAS rats than in control or CPS only rats (Figure 2G). Under voltage clamp conditions (Figure 2H), neurons from female CPS + CAS, CAS, CPS and control groups had IA and sustained outward rectifier K+ currents (IK). Compared with the other three groups, DRG neurons from CPS + CAS rats had significantly reduced average IA (P < 0.05). The average IK density was decreased, but the change was not significant.", "We performed vaginal smear tests and identified the estrus cycle phase from the vaginal cytological cell types. Estrogen concentration was significantly higher in the CPS proestrus/estrus phase compared with the control diestrus, control proestrus/estrus, and CPS diestrus phases (P < 0.05; Figure 3A). Comparison of the plasma estrogen concentrations in control, CAS, CPS, and CPS + CAS rats showed that CPS significantly increased plasma estrogen levels compared with the control rats and that CAS increased plasma estrogen levels compared with the control and CPS rats (Figure 3B).\nEffects of chronic prenatal stress, chronic adult stress, ovariectomy, and letrozole treatment on plasma estrogen levels in female rats. 
A: Plasma estrogen level in control and chronic prenatal stress (CPS) rats by estrus cycle phase (n = 8 rats, one-way ANOVA, aP < 0.05 vs control proestrus/estrus (P-E) phase; bP < 0.05 vs CPS diestrus (D) phase); B: Plasma estrogen levels increased in CPS rats and following chronic adult stress (CAS) 24 h after the last adult stressor (n = 8 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); C: Ovariectomy (OVX) significantly reduced CPS female rat plasma estrogen levels before and after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs sham group); D: Letrozole treatment significantly reduced CPS female rat plasma estrogen levels before or after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs vehicle group; cP < 0.0001); E: Plasma norepinephrine levels from control, CAS, CPS and CPS + CAS group female rats (n = 5 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: Plasma adrenocorticotropic hormone (ACTH) levels from control and CPS + CAS group female rats (n = 5 rats, t-test, aP < 0.05 vs control). \nTo determine whether estrogen contributed to stress-induced visceral hypersensitivity in prenatal stressed females, we reduced plasma estrogen levels by either OVX or letrozole treatment. OVX significantly lowered serum estradiol levels before and after CAS (Figure 3C). Treatment was continued throughout CAS. After treatment with letrozole, serum estradiol levels were significantly reduced (Figure 3D). To study the effects of gender and stress on norepinephrine and ACTH levels, we measured plasma norepinephrine levels in female rats from all four experimental groups. CAS alone significantly increased plasma norepinephrine levels compared with both the controls and with CPS alone (Figure 3E) Plasma norepinephrine levels were significantly increased in CPS + CAS rats compared with CAS alone as well as with controls and CPS. Plasma ACTH levels were significantly increased in CPS + CAS rats compared with controls. (Figure 3F).", "We performed patch clamp experiments on acutely isolated retrograde-labeled DRG neurons from CPS + CAS females with or without letrozole treatment 24 h after the last adult stressor. Letrozole treatment significantly increased rheobase (Figure 4A), and significantly reduced input resistance (Figure 4B). Action potential overshoot (Figure 4C) and the number of action potentials elicited by a current injection at either 2 × or 3 × rheobase were significantly reduced by letrozole treatment (Figure 4D). Other electrophysiological properties were not significantly altered (Table 2). We also recorded electromyographic activity to determine whether the reduction in visceral sensitivity in female CPS + CAS rats caused by OVX or systemic letrozole treatment reduced visceromotor responses. The findings demonstrated a significant decrease in excitability of colon-projecting L6-S2 neurons.\nEffects of Letrozole treatment on colon dorsal root ganglion neuron excitability. A: Rheobase (n = 45 cells in 6 rats in each group, t-test, cP < 0.001 vs Veh. + chronic adult stress [CAS] + chronic prenatal stress [CPS]); B: Membrane input resistance (RIn) (t-test, aP < 0.05); C: Action potential (AP) overshoot (t-test, aP < 0.05); D: Number of APs elicited by current injection at 2 × and 3 × rheobase (two-way ANOVA, aP < 0.05; bP < 0.01). \nElectrophysiological characteristics of colon related DRG neuron after Letrozole treatment\nValues are means ± standard error. \nP < 0.05;\nP < 0.001 vs control group. 
CAS: Chronic adult stress; CPS: Chronic prenatal stress.", "To investigate the effect of estrogen on BDNF expression, we measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and sham CPS + CAS female rats. Systemic estradiol administration to naïve cycling females produced significant increases in plasma estrogen (Figure 5A), lumbar-sacral spinal cord BDNF mRNA (Figure 5B), and protein (Figure 5C). In the OVX experiment, BDNF mRNA and protein expression were significantly suppressed compared with sham rats. Another experiment showed that intrathecal infusion of estrogen into naïve female rats significantly increased BDNF protein levels, indicating that estrogen acts in the spinal cord to increase BDNF and thereby contributes to the visceral pain response[13].\nBrain-derived neurotrophic factor expression in lumbar-sacral spinal cord is regulated by estrogen. A: Plasma estrogen levels in cycling females that received a bolus estradiol (E2) infusion on day 1; B: Lumbar-sacral spinal cord brain-derived neurotrophic factor (BDNF) mRNA following bolus estrogen infusion; C: Lumbar-sacral spinal cord BDNF protein following bolus estrogen infusion (n = 8 rats in each group, two-way ANOVA, aP < 0.05 vs vehicle group). ", "We examined NGF expression in the colons of females from all four experimental groups by immunohistochemistry (Figure 6A). Morphometric analysis showed that CAS and CPS + CAS significantly increased NGF levels in the colon wall, with the increase in CPS + CAS significantly greater than that of CAS alone (Figure 6B). Western blotting showed that NGF protein was significantly upregulated in CPS + CAS rats compared with controls (Figure 6C).\nNerve growth factor expression level in the colon wall. A: Immunohistochemical staining of nerve growth factor (NGF; green) was detected with nuclear counterstaining (blue) in controls, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS group female rat colon walls. Representative images at × 400 magnification are shown; B: Quantification of NGF levels from the colon wall by immunohistochemistry (IHC) (n = 4 rats in each group, one-way ANOVA, aP < 0.05 vs control group; bP < 0.05 vs CPS group); C: Western blots of NGF protein from control and CPS + CAS female rat colon wall tissue (n = 6 rats in each group, t-test, aP < 0.05 vs control group). ", "Enhanced CPS-induced visceral hypersensitivity in female rats was associated with an increase in the responses of lumbosacral nerve fibers to CRD in both male and female offspring. These findings are further supported by data showing increased excitability of colon-projecting DRG neurons from females in patch clamp studies. The magnitude of the sensitization was greatest in female CPS + CAS rats, suggesting that it made a major contribution to the observed enhanced female visceral hypersensitivity in our model.\nChronic stress is known to increase the excitability of colon-projecting DRG neurons in rats and mice. In adult male Sprague Dawley rats, colon DRG neuron sensitization was shown to be driven by increases in NGF expression in the colon muscularis externa[13]. In our model, we also observed a significant increase in colon NGF, but its potential role in primary afferent sensitization and visceral hypersensitivity was not investigated. 
Other studies in male mice showed that stress, in the form of water avoidance, significantly increased the excitability of colon-projecting DRG neurons and that the combined activity of the stress mediators corticosterone and norepinephrine increased DRG neuron excitability in vitro[20,21]. We previously found significant increases in the serum levels of norepinephrine in CPS + CAS females. However, daily systemic treatment with adrenergic antagonists during the adult stress protocol failed to reduce visceral hypersensitivity in female neonatal + adult stress rats[13], suggesting that norepinephrine did not play a major role in the acquisition of enhanced female visceral hypersensitivity or primary afferent sensitization in our model.\nWhen we tested lumbar-sacral afferent fibers and dissociated neurons in patch clamp studies, we found significant decreases in transient A-type potassium (IA) currents in neurons isolated from CPS + CAS females compared with the other three experimental groups. Declines in A-type Kv currents in DRG neurons have been associated with persistent pain in multiple chronic pain models[22]. Whether the decline was caused by changes in channel properties or expression was not investigated in this study. However, another study demonstrated that estrogen significantly shifted the activation curve of IA currents in the hyperpolarizing direction and that estrogen inhibited Kv channels in mouse DRG neurons through a membrane ER-activated nongenomic pathway[23].\nOur results showed that the excitability of colon-projecting neurons in CPS + CAS females was significantly reduced by systemic letrozole treatment, suggesting that estrogen contributed to the sensitization process. Previous studies showed that estrogen receptors expressed on primary afferent neurons contribute to enhanced sensitivity in various pain models[24-26]. One study found no decline in the responses of colon-projecting nerve fibers to CRD following OVX and found no detectable estrogen receptor alpha immunoreactivity in colon-projecting DRG neurons[27]. The reasons for the differing results are not clear, but local production of estrogen in DRG neurons could be sufficient to sustain sensitization.\nNGF and its receptors play important roles in the mechanism of visceral pain and hyperalgesia in women. For example, endometriosis is a commonly diagnosed, estrogen-dependent condition whose main symptoms are various types of pelvic pain that seriously affect physical and mental health, yet the mechanisms of the abdominal pain remain unclear. Studies have shown NGF to be an inflammatory mediator and modulator of pain in adulthood[28].", "In this study, we examined the sex differences and effects of estrogen on the acquisition of enhanced visceral hypersensitivity in the offspring of rats in a model of prenatal and adult stress, as shown in Figure 7. Our study shows that estrogen acted in the spinal cord and the primary afferent neurons to enhance visceral nociception. Acute blockade of the endogenous synthesis of estrogens in rat spinal cord significantly reduced visceral hypersensitivity, suggesting that locally produced estrogen in the central nervous system can regulate nociceptive neurons to modulate visceral hypersensitivity. The chronic stress-estrogen-BDNF axis enhanced visceral hypersensitivity in the offspring of females subjected to CPS. The development of chronic stress-induced visceral hypersensitivity in female rats was estrogen dependent. 
A key component of this hypersensitivity was estrogen-dependent sensitization of primary afferent colon neurons. Our findings provide key scientific evidence in a preclinical model in support of developing gender-based treatments for abdominal pain in IBS.\nSummary diagram of estrogen-enhanced visceral hyperalgesia investigated in chronic prenatal stress plus chronic adult stress models. BDNF: Brain-derived neurotrophic factor; NGF: Nerve growth factor." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Chronic prenatal stress", "Estrogen", "Visceral pain", "Neuronal sensitization", "Excitability", "Letrozole" ]
INTRODUCTION: Visceral pain of colonic origin is the most prominent symptom in irritable bowel syndrome (IBS) patients[1]. Female IBS patients report more severe pain that occurs more frequently and with longer episodes than in male patients[1,2]. The ratio of female to male IBS is about 2:1 among patients seen in medical clinics[3]. Moreover, females have a higher prevalence of IBS co-morbidities such as anxiety and depression[4,5] and are more vulnerable to stress-induced exacerbation of IBS symptoms compared with males[3,6,7]. Clinical studies show that early life adverse experiences are risk factors for the development of IBS symptoms, including visceral pain and ongoing chronic stress, especially abdominal pain[8-10]. These factors contribute to the development of visceral hypersensitivity, a key component of the IBS symptom complex and one that may be responsible for symptoms of pain[11,12]. Our previous research found that the female offspring of mothers subjected to chronic prenatal stress (CPS) had a markedly greater visceral sensitivity than their male littermates following challenge by another chronic adult stress (CAS) protocol. A critical molecular event in the development of this female-enhanced visceral hypersensitivity is upregulation of brain-derived neurotrophic factor (BDNF) expression in the lumbar-sacral spinal cord of female CPS + CAS rats[13]. However, the neurophysiological changes underlying the enhanced female-specific visceral hypersensitivity and the role of hormone in the development of stress-induced visceral hypersensitivity are not well understood. Visceral hypersensitivity in IBS involves abnormal changes in neurophysiology throughout the brain-gut axis. In IBS, there is evidence for sensitization of primary afferents to jejunal distention and electrical stimulation[14], and there is evidence for increased sensitivity of lumbar splanchnic afferents[15,16]. In animal models of either early life adverse events or adult stress-induced visceral hypersensitivity[17], there is evidence of colon primary afferent sensitization. However, the studies were performed in male rodents. Therefore, in this study, we established a CPS and CAS rodent model to analyze the impact on female colon afferent neuron function and the role of estrogen. Our hypothesis was that female CPS offspring subjected to chronic stress as adults would exhibit greater colonic dorsal root ganglion (DRG) neuron sensitization compared with their male littermates, and that the enhanced visceral sensitization and primary afferent sensitization in females was estrogen dependent. MATERIALS AND METHODS: Animals The Institutional Animal Care and Use Committee of the University of Texas Medical Branch at Galveston, TX approved all animal procedures. Experiments were performed on pregnant Sprague Dawley rats and their 8-wk-old to 16-wk-old male and female offspring. Rats were housed individual cages with access to food and water in a room with controlled conditions (22 ± 2 °C, relative humidity of 50% ± 5%), and a 12 h light/12 h dark cycle. The Institutional Animal Care and Use Committee of the University of Texas Medical Branch at Galveston, TX approved all animal procedures. Experiments were performed on pregnant Sprague Dawley rats and their 8-wk-old to 16-wk-old male and female offspring. Rats were housed individual cages with access to food and water in a room with controlled conditions (22 ± 2 °C, relative humidity of 50% ± 5%), and a 12 h light/12 h dark cycle. 
CPS and CAS models Pregnant dams were subjected to a CPS protocol that consisted of a random sequence of twice-daily applications of one of three stress sessions, a 1-h water-avoidance, 45-min cold-restraint, or a 20-min forced swim starting on day 6 and continuing until delivery on day 21. Male and female offspring of the stressed dams were designated CPS rats. Control dams received sham stress and their offspring were designated control rats. As adults at 8-16 wk of age, control and prenatally stressed offspring were challenged by the same CAS protocol for 9 d. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily letrozole treatment was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol. A schematic diagram of the study procedures is shown in Figure 1A. Primary afferent responses to colorectal distention. A: Chronic prenatal stress (CPS) plus chronic adult stress (CAS) model. Pregnant dams were subjected to prenatal stress from on day 11 of gestation. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily Letrozole was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol; B: Spontaneous activity (SA) of single afferent units in male and female control rats (n = 70 fibers in 6 rats in each group, t-test, aP < 0.05); C: Average response to graded colorectal distention (CRD) of 56 afferent fibers in 6 male and 70 afferent fibers in 6 female control rats; two-way analysis of variance (ANOVA; aP < 0.05 vs the same pressure male group); D: Responses of low-threshold (LT) fibers to CRD in 42 fibers in 6 male rats and 40 fibers in 6 female control rats (ANOVA, aP < 0.05 vs the same pressure male group); E: Responses of high-threshold (HT) afferent fibers to CRD in 14 fibers in 6 male and 29 fibers in 6 female control rats; F: Effects of CAS on afferent fiber responses to CRD from 59 fibers in 6 control and 99 fibers in 6 CPS female rats; (two-way ANOVA, aP < 0.05 vs the same pressure control group, bP < 0.05 vs the same pressure CPS group); G: Effects of CAS on afferent fiber responses to CRD in control and CPS male rats (n = 6 rats, 57 fibers for control and 95 fibers for CPS female group; two-way ANOVA, aP < 0.05 vs the same pressure-control group). Pregnant dams were subjected to a CPS protocol that consisted of a random sequence of twice-daily applications of one of three stress sessions, a 1-h water-avoidance, 45-min cold-restraint, or a 20-min forced swim starting on day 6 and continuing until delivery on day 21. Male and female offspring of the stressed dams were designated CPS rats. Control dams received sham stress and their offspring were designated control rats. As adults at 8-16 wk of age, control and prenatally stressed offspring were challenged by the same CAS protocol for 9 d. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily letrozole treatment was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol. A schematic diagram of the study procedures is shown in Figure 1A. Primary afferent responses to colorectal distention. A: Chronic prenatal stress (CPS) plus chronic adult stress (CAS) model. Pregnant dams were subjected to prenatal stress from on day 11 of gestation. 
Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily Letrozole was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol; B: Spontaneous activity (SA) of single afferent units in male and female control rats (n = 70 fibers in 6 rats in each group, t-test, aP < 0.05); C: Average response to graded colorectal distention (CRD) of 56 afferent fibers in 6 male and 70 afferent fibers in 6 female control rats; two-way analysis of variance (ANOVA; aP < 0.05 vs the same pressure male group); D: Responses of low-threshold (LT) fibers to CRD in 42 fibers in 6 male rats and 40 fibers in 6 female control rats (ANOVA, aP < 0.05 vs the same pressure male group); E: Responses of high-threshold (HT) afferent fibers to CRD in 14 fibers in 6 male and 29 fibers in 6 female control rats; F: Effects of CAS on afferent fiber responses to CRD from 59 fibers in 6 control and 99 fibers in 6 CPS female rats; (two-way ANOVA, aP < 0.05 vs the same pressure control group, bP < 0.05 vs the same pressure CPS group); G: Effects of CAS on afferent fiber responses to CRD in control and CPS male rats (n = 6 rats, 57 fibers for control and 95 fibers for CPS female group; two-way ANOVA, aP < 0.05 vs the same pressure-control group). Rat treatment Before OVX or letrozole treatment, vaginal smears were used to identify the estrus cycle phase. OVX or sham surgery was performed on female prenatal-stress offspring on day 56. The aromatase inhibitor letrozole [4,4’-(1H-1,2,4-triazol-l-yl-methylene)-bis-benzonitrile], (Novartis) 1.0 mg/kg was orally administered in the experimental group and vehicle (hydroxypropyl cellulose 0.3% in water) was given in the control group once daily for 14 d. Direct transcutaneous intrathecal injections of estrogen and letrozole were performed as described by Mestre et al[18]. Before OVX or letrozole treatment, vaginal smears were used to identify the estrus cycle phase. OVX or sham surgery was performed on female prenatal-stress offspring on day 56. The aromatase inhibitor letrozole [4,4’-(1H-1,2,4-triazol-l-yl-methylene)-bis-benzonitrile], (Novartis) 1.0 mg/kg was orally administered in the experimental group and vehicle (hydroxypropyl cellulose 0.3% in water) was given in the control group once daily for 14 d. Direct transcutaneous intrathecal injections of estrogen and letrozole were performed as described by Mestre et al[18]. In vivo single fiber recording of L6-S2 DRG rootlets Multiunit afferent discharges were recorded from the distal ends of L6-S2 dorsal rootlets decentralized close to their entry into the spinal cord. A bundle of multiunit fibers was distinguished into 2-6 single units off-line using wave mark template matching in Spike 2 software that differentiates spikes by shape and amplitude. Colonic afferent fibers were identified by their response to graded colorectal distention (CRD). A balloon was used to distend the colorectum. Isoflurane, 2.5%, followed by 50 mg/kg intraperitoneal sodium pentobarbital induced general anesthesia that was maintained by infusing a mixture of pentobarbital sodium + pancuronium bromide + saline by intravenous infusion through the tail vein. The adequacy of anesthesia was confirmed by the absence of corneal and pupillary reflexes and stability of the end-tidal CO2 level. A tracheotomy tube connected to a ventilator system provided a mixture of room air and oxygen. Expired CO2 was monitored and maintained at 3.5%. 
Body temperature was monitored and maintained at 37 °C by a servo-controlled heating blanket. A laminectomy from T12 to S2 exposed the spinal cord. The head was stabilized in a stereotaxic frame. Multiunit afferent discharges were recorded from the distal ends of L6-S2 dorsal rootlets decentralized close to their entry into the spinal cord. A bundle of multiunit fibers was distinguished into 2-6 single units off-line using wave mark template matching in Spike 2 software that differentiates spikes by shape and amplitude. Colonic afferent fibers were identified by their response to graded colorectal distention (CRD). A balloon was used to distend the colorectum. Isoflurane, 2.5%, followed by 50 mg/kg intraperitoneal sodium pentobarbital induced general anesthesia that was maintained by infusing a mixture of pentobarbital sodium + pancuronium bromide + saline by intravenous infusion through the tail vein. The adequacy of anesthesia was confirmed by the absence of corneal and pupillary reflexes and stability of the end-tidal CO2 level. A tracheotomy tube connected to a ventilator system provided a mixture of room air and oxygen. Expired CO2 was monitored and maintained at 3.5%. Body temperature was monitored and maintained at 37 °C by a servo-controlled heating blanket. A laminectomy from T12 to S2 exposed the spinal cord. The head was stabilized in a stereotaxic frame. In vitro patch clamp recordings in colonic DRG neurons Retrograde fluorescence label injections: Labeling of colon-projecting DRG neurons was performed as previously described[13]. Under general 2% isoflurane anesthesia, the lipid soluble fluorescent dye, 1,1’-dioleyl-3,3,3’,3’-tetramethylindocarbocyanine methane-sulfonate (9-DiI, Invitrogen, Carlsbad, CA) was injected (50 mg/mL) into the muscularis externa on the exposed distal colon in 8 to 10 sites (2 μL each site). To prevent leakage, the needle was kept in place for 1 min following each injection. Dissociation and culture of DRG neurons: Rats were deeply anesthetized with isoflurane followed by decapitation. Lumbosacral (L6–S2) DRGs were collected in ice cold and oxygenated dissecting solution, containing (in mM) 130 NaCl, 5 KCl, 2 KH2PO4, 1.5 CaCl2, 6 MgSO4, 10 glucose, and 10 HEPES, pH 7.2 (305 mOsm). After removal of the connective tissue, the ganglia were transferred to a 5 mL dissecting solution containing collagenase D (1.8 mg/mL; Roche) and trypsin (1.0 mg/mL; Sigma, St Louis, MO), and incubated for 1.5 h at 34.5 °C. DRGs were then taken from the enzyme solution, washed, and put in 0.5-2 mL of the dissecting solution containing DNase (0.5 mg/mL; Sigma). Cells were subsequently dissociated by gentle trituration 10 to 15 times with fire-polished glass pipettes and placed on acid-cleaned glass coverslips. The dissociated DRG neurons were kept in 1 mL DMEM (with 10% FBS) in an incubator (95% O2/5% CO2) at 37 °C overnight. Whole-cell patch clamp recordings from dissociated DRG neurons: Before each experiment, a glass coverslip with DRG neurons was transferred to a recording chamber perfused (1.5 mL/min) with external solution containing (10 mM): 130 NaCl, 5 KCl, 2 KH2PO4, 2.5 CaCl2, 1 MgCl2, 10 HEPES, and 10 glucose, pH adjusted to pH 7.4 with NaOH (300 mOsm) at room temperature. Recording pipettes, pulled from borosilicate glass tubing, with resistance of 1-5 MΩ, were filled with solution containing (in mM): 100 KMeSO3, 40 KCl, and 10 HEPES, pH 7.25 adjusted with KOH (290 mOsm). DiI-labeled neurons were identified by fluorescence microscopy. 
Whole-cell currents and voltage were recorded from DiI-labeled neurons using a Dagan 3911 patch clamp amplifier. Data were acquired and analyzed by pCLAMP 9.2 (Molecular Devices, Sunnyvale, CA). The currents were filtered at 2–5 kHz and sampled at 50 or 100 s per point. While still under voltage clamp, the Clampex Membrane Test program (Molecular Devices) was used to determine membrane capacitance (Cm) and membrane resistance (Rm), during a 10 ms, 5 mV depolarizing pulse form a holding potential of −60 mV. The configuration was then switched to current clamp (0 pA) to determine other electrophysiological properties. After stabilizing for 2–3 min, the resting membrane potential was measured. The minimum acceptable resting membrane potential was −40 mV. Spontaneous activity (SA) was then recorded over two 30 s periods separated by 60 s without recording, as described by Bedi et al[19]. Transient A-type K+ current (IA) recording method in patch studies: To record voltage-gated K+ current (Kv), Na+ in control external solution was replaced with equimolar choline and the Ca2+ concentration was reduced to 0.03 mM to suppress Ca2+ currents and to prevent Ca2+ channels becoming Na+ conducting. The reduced external Ca2+ would also be expected to suppress Ca2+-activated K+ currents. The current traces of Kv in DRG neurons were measured at different holding potentials. The membrane potential was held at −100 mV and voltage steps were from −40 to +30 mV to record the total Kv. The membrane potential was held at −50 mV to record the sustained Kv. The IA currents were calculated by subtracting the sustained current from the total current. The current density (in pA/pF) was calculated by dividing the current amplitude by cell membrane capacitance. Retrograde fluorescence label injections: Labeling of colon-projecting DRG neurons was performed as previously described[13]. Under general 2% isoflurane anesthesia, the lipid soluble fluorescent dye, 1,1’-dioleyl-3,3,3’,3’-tetramethylindocarbocyanine methane-sulfonate (9-DiI, Invitrogen, Carlsbad, CA) was injected (50 mg/mL) into the muscularis externa on the exposed distal colon in 8 to 10 sites (2 μL each site). To prevent leakage, the needle was kept in place for 1 min following each injection. Dissociation and culture of DRG neurons: Rats were deeply anesthetized with isoflurane followed by decapitation. Lumbosacral (L6–S2) DRGs were collected in ice cold and oxygenated dissecting solution, containing (in mM) 130 NaCl, 5 KCl, 2 KH2PO4, 1.5 CaCl2, 6 MgSO4, 10 glucose, and 10 HEPES, pH 7.2 (305 mOsm). After removal of the connective tissue, the ganglia were transferred to a 5 mL dissecting solution containing collagenase D (1.8 mg/mL; Roche) and trypsin (1.0 mg/mL; Sigma, St Louis, MO), and incubated for 1.5 h at 34.5 °C. DRGs were then taken from the enzyme solution, washed, and put in 0.5-2 mL of the dissecting solution containing DNase (0.5 mg/mL; Sigma). Cells were subsequently dissociated by gentle trituration 10 to 15 times with fire-polished glass pipettes and placed on acid-cleaned glass coverslips. The dissociated DRG neurons were kept in 1 mL DMEM (with 10% FBS) in an incubator (95% O2/5% CO2) at 37 °C overnight. 
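As a point of reference, the IA isolation described in the patch-study recording method above (total Kv recorded from a −100 mV holding potential, sustained Kv from −50 mV, IA obtained by subtraction, and current density normalized to cell capacitance) amounts to simple array arithmetic. The following Python sketch only illustrates that calculation; the traces, capacitance value, and function name are hypothetical placeholders rather than the study's analysis code.

import numpy as np

def a_type_current_density(i_total_pA, i_sustained_pA, cm_pF):
    # Subtract the sustained Kv current (held at -50 mV) from the total Kv
    # current (held at -100 mV) to isolate the transient A-type component,
    # then normalize the peak by membrane capacitance to get pA/pF.
    i_total = np.asarray(i_total_pA, dtype=float)
    i_sustained = np.asarray(i_sustained_pA, dtype=float)
    i_a = i_total - i_sustained              # transient A-type current (pA)
    return i_a, i_a.max() / cm_pF            # trace and peak density (pA/pF)

# Hypothetical example: two short current traces (pA) from a 30 pF neuron
ia_trace, ia_density = a_type_current_density(
    [120, 480, 900, 760, 640], [100, 220, 380, 420, 430], cm_pF=30.0)
print(ia_density)                            # ~17.3 pA/pF for these made-up values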
Whole-cell patch clamp recordings from dissociated DRG neurons: Before each experiment, a glass coverslip with DRG neurons was transferred to a recording chamber perfused (1.5 mL/min) with external solution containing (10 mM): 130 NaCl, 5 KCl, 2 KH2PO4, 2.5 CaCl2, 1 MgCl2, 10 HEPES, and 10 glucose, pH adjusted to pH 7.4 with NaOH (300 mOsm) at room temperature. Recording pipettes, pulled from borosilicate glass tubing, with resistance of 1-5 MΩ, were filled with solution containing (in mM): 100 KMeSO3, 40 KCl, and 10 HEPES, pH 7.25 adjusted with KOH (290 mOsm). DiI-labeled neurons were identified by fluorescence microscopy. Whole-cell currents and voltage were recorded from DiI-labeled neurons using a Dagan 3911 patch clamp amplifier. Data were acquired and analyzed by pCLAMP 9.2 (Molecular Devices, Sunnyvale, CA). The currents were filtered at 2–5 kHz and sampled at 50 or 100 s per point. While still under voltage clamp, the Clampex Membrane Test program (Molecular Devices) was used to determine membrane capacitance (Cm) and membrane resistance (Rm), during a 10 ms, 5 mV depolarizing pulse form a holding potential of −60 mV. The configuration was then switched to current clamp (0 pA) to determine other electrophysiological properties. After stabilizing for 2–3 min, the resting membrane potential was measured. The minimum acceptable resting membrane potential was −40 mV. Spontaneous activity (SA) was then recorded over two 30 s periods separated by 60 s without recording, as described by Bedi et al[19]. Transient A-type K+ current (IA) recording method in patch studies: To record voltage-gated K+ current (Kv), Na+ in control external solution was replaced with equimolar choline and the Ca2+ concentration was reduced to 0.03 mM to suppress Ca2+ currents and to prevent Ca2+ channels becoming Na+ conducting. The reduced external Ca2+ would also be expected to suppress Ca2+-activated K+ currents. The current traces of Kv in DRG neurons were measured at different holding potentials. The membrane potential was held at −100 mV and voltage steps were from −40 to +30 mV to record the total Kv. The membrane potential was held at −50 mV to record the sustained Kv. The IA currents were calculated by subtracting the sustained current from the total current. The current density (in pA/pF) was calculated by dividing the current amplitude by cell membrane capacitance. Real time reverse transcription-polymerase chain reaction (RT-PCR) Total RNA was extracted using RNeasy Mini Kits (QIAGEN, Valencia, CA). One microgram of total RNA was reverse-transcribed using the SuperScriptTM III First-Strand Synthesis System. PCR assays were performed on a StepOnePlus thermal cycler with 18 s as the normalizer using Applied Biosystems primer/probe set Rn02531967_s1 directed against the translated exon IX. Fold-change relative to control was calculated using the ΔΔCt method (Applied Biosystems). Total RNA was extracted using RNeasy Mini Kits (QIAGEN, Valencia, CA). One microgram of total RNA was reverse-transcribed using the SuperScriptTM III First-Strand Synthesis System. PCR assays were performed on a StepOnePlus thermal cycler with 18 s as the normalizer using Applied Biosystems primer/probe set Rn02531967_s1 directed against the translated exon IX. Fold-change relative to control was calculated using the ΔΔCt method (Applied Biosystems). Western blot Samples were lysed in RIPA buffer containing protease inhibitor cocktail and phenylmethanesulfonyl fluoride. 
Lysates were incubated for 30 min on ice and then centrifuged at 10 000 × g for 10 min at 4 °C. The protein concentration in the supernatant was determined using bicinchoninic acid (BCA) assay kits with bovine serum albumin as a standard. Equal amounts of protein (30 μg per lane) were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to nitrocellulose membranes (Bio-Rad, United States). The membranes were blocked in Li-Cor blocking buffer for 1 h at room temperature and then incubated with primary antibodies. BDNF antibody (Santa Cruz Biotechnologies, Santa Cruz, CA) was used at 1:200 dilution; nerve growth factor (NGF) antibody (Abcam, MA) was used at 1:1000 dilution; β-actin antibody (Sigma Aldrich, St Louis, MO) was used at 1:5000 dilution. The secondary antibodies were donkey anti-rabbit Alexa Fluor 680 (Invitrogen) and goat anti-mouse IRDye 800 (Rockland). Images were acquired and band intensities measured using a Li-Cor Odyssey system (Li-Cor, Lincoln, NE). Samples were lysed in RIPA buffer containing protease inhibitor cocktail and phenylmethanesulfonyl fluoride. Lysates were incubated for 30 min on ice and then centrifuged at 10 000 × g for 10 min at 4 °C. The protein concentration in the supernatant was determined using bicinchoninic acid (BCA) assay kits with bovine serum albumin as a standard. Equal amounts of protein (30 μg per lane) were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to nitrocellulose membranes (Bio-Rad, United States). The membranes were blocked in Li-Cor blocking buffer for 1 h at room temperature and then incubated with primary antibodies. BDNF antibody (Santa Cruz Biotechnologies, Santa Cruz, CA) was used at 1:200 dilution; nerve growth factor (NGF) antibody (Abcam, MA) was used at 1:1000 dilution; β-actin antibody (Sigma Aldrich, St Louis, MO) was used at 1:5000 dilution. The secondary antibodies were donkey anti-rabbit Alexa Fluor 680 (Invitrogen) and goat anti-mouse IRDye 800 (Rockland). Images were acquired and band intensities measured using a Li-Cor Odyssey system (Li-Cor, Lincoln, NE). Immunofluorescence Frozen sections of colon tissue from control, CAS, CPS and CPS + CAS female rats were mounted on glass slides, and rehydrated in phosphate buffered saline at room temperature. The slides were treated for antigen retrieval and blocked with 10% normal goat serum diluted in 0.3% phosphate buffered saline-Triton for 1 h, and then incubated with NGF primary antibody in antibody diluent (Renoir Red, Biocare Medical, Concord, CA) at 4 °C overnight. The slides were exposed to fluorescent dye-conjugated secondary antibody for 2 h at room temperature, counterstained with 4',6-diamidino-2-phenylindole and coverslipped. Images were taken in fluorescence mode on an Olympus laser scanning confocal microscope and the average signal intensity was calculated by the bundled software. Frozen sections of colon tissue from control, CAS, CPS and CPS + CAS female rats were mounted on glass slides, and rehydrated in phosphate buffered saline at room temperature. The slides were treated for antigen retrieval and blocked with 10% normal goat serum diluted in 0.3% phosphate buffered saline-Triton for 1 h, and then incubated with NGF primary antibody in antibody diluent (Renoir Red, Biocare Medical, Concord, CA) at 4 °C overnight. 
The slides were exposed to fluorescent dye-conjugated secondary antibody for 2 h at room temperature, counterstained with 4',6-diamidino-2-phenylindole and coverslipped. Images were taken in fluorescence mode on an Olympus laser scanning confocal microscope and the average signal intensity was calculated by the bundled software. Serum estradiol and norepinephrine levels Serum estradiol, adrenocorticotropic hormone (ACTH), and norepinephrine levels were measured using specific enzyme-linked immunosorbent assay kits for each analyte (CSB-E05110r, CSB-E06875r, CSB-E07022, Cusabio Bioteck CO., United States) following the manufacturer’s instructions. Serum estradiol, adrenocorticotropic hormone (ACTH), and norepinephrine levels were measured using specific enzyme-linked immunosorbent assay kits for each analyte (CSB-E05110r, CSB-E06875r, CSB-E07022, Cusabio Bioteck CO., United States) following the manufacturer’s instructions. Data analysis Single fiber responses (impulses/second) to CRD were calculated by subtracting SA from the mean 30 s maximal activity during distension. Fibers were considered responsive if CRD increased their activity to 30% greater than the baseline value. Mechanosensitive single units were classified as high threshold (> 20 mmHg) or low threshold (≤ 20 mmHg) on the basis of their response threshold and profile during CRD. Single fiber activity data were analyzed by analysis of variance with repeated measures; CRD intensity was the repeated factor and the experimental group was the between-group factor. If significant main effects were present, the individual means were compared using the Fisher post-hoc test. All authors had access to the study data and reviewed and approved the final manuscript. Single fiber responses (impulses/second) to CRD were calculated by subtracting SA from the mean 30 s maximal activity during distension. Fibers were considered responsive if CRD increased their activity to 30% greater than the baseline value. Mechanosensitive single units were classified as high threshold (> 20 mmHg) or low threshold (≤ 20 mmHg) on the basis of their response threshold and profile during CRD. Single fiber activity data were analyzed by analysis of variance with repeated measures; CRD intensity was the repeated factor and the experimental group was the between-group factor. If significant main effects were present, the individual means were compared using the Fisher post-hoc test. All authors had access to the study data and reviewed and approved the final manuscript. Animals: The Institutional Animal Care and Use Committee of the University of Texas Medical Branch at Galveston, TX approved all animal procedures. Experiments were performed on pregnant Sprague Dawley rats and their 8-wk-old to 16-wk-old male and female offspring. Rats were housed individual cages with access to food and water in a room with controlled conditions (22 ± 2 °C, relative humidity of 50% ± 5%), and a 12 h light/12 h dark cycle. CPS and CAS models: Pregnant dams were subjected to a CPS protocol that consisted of a random sequence of twice-daily applications of one of three stress sessions, a 1-h water-avoidance, 45-min cold-restraint, or a 20-min forced swim starting on day 6 and continuing until delivery on day 21. Male and female offspring of the stressed dams were designated CPS rats. Control dams received sham stress and their offspring were designated control rats. 
As adults at 8-16 wk of age, control and prenatally stressed offspring were challenged by the same CAS protocol for 9 d. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily letrozole treatment was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol. A schematic diagram of the study procedures is shown in Figure 1A. Primary afferent responses to colorectal distention. A: Chronic prenatal stress (CPS) plus chronic adult stress (CAS) model. Pregnant dams were subjected to prenatal stress from on day 11 of gestation. Ovariectomy (OVX) or sham surgery was performed on female prenatal-stress offspring on day 56. Daily Letrozole was initiated on day 49, 2 wk prior to initiation of adult stress. Treatment was continued through the stress protocol; B: Spontaneous activity (SA) of single afferent units in male and female control rats (n = 70 fibers in 6 rats in each group, t-test, aP < 0.05); C: Average response to graded colorectal distention (CRD) of 56 afferent fibers in 6 male and 70 afferent fibers in 6 female control rats; two-way analysis of variance (ANOVA; aP < 0.05 vs the same pressure male group); D: Responses of low-threshold (LT) fibers to CRD in 42 fibers in 6 male rats and 40 fibers in 6 female control rats (ANOVA, aP < 0.05 vs the same pressure male group); E: Responses of high-threshold (HT) afferent fibers to CRD in 14 fibers in 6 male and 29 fibers in 6 female control rats; F: Effects of CAS on afferent fiber responses to CRD from 59 fibers in 6 control and 99 fibers in 6 CPS female rats; (two-way ANOVA, aP < 0.05 vs the same pressure control group, bP < 0.05 vs the same pressure CPS group); G: Effects of CAS on afferent fiber responses to CRD in control and CPS male rats (n = 6 rats, 57 fibers for control and 95 fibers for CPS female group; two-way ANOVA, aP < 0.05 vs the same pressure-control group). Rat treatment: Before OVX or letrozole treatment, vaginal smears were used to identify the estrus cycle phase. OVX or sham surgery was performed on female prenatal-stress offspring on day 56. The aromatase inhibitor letrozole [4,4’-(1H-1,2,4-triazol-l-yl-methylene)-bis-benzonitrile], (Novartis) 1.0 mg/kg was orally administered in the experimental group and vehicle (hydroxypropyl cellulose 0.3% in water) was given in the control group once daily for 14 d. Direct transcutaneous intrathecal injections of estrogen and letrozole were performed as described by Mestre et al[18]. In vivo single fiber recording of L6-S2 DRG rootlets: Multiunit afferent discharges were recorded from the distal ends of L6-S2 dorsal rootlets decentralized close to their entry into the spinal cord. A bundle of multiunit fibers was distinguished into 2-6 single units off-line using wave mark template matching in Spike 2 software that differentiates spikes by shape and amplitude. Colonic afferent fibers were identified by their response to graded colorectal distention (CRD). A balloon was used to distend the colorectum. Isoflurane, 2.5%, followed by 50 mg/kg intraperitoneal sodium pentobarbital induced general anesthesia that was maintained by infusing a mixture of pentobarbital sodium + pancuronium bromide + saline by intravenous infusion through the tail vein. The adequacy of anesthesia was confirmed by the absence of corneal and pupillary reflexes and stability of the end-tidal CO2 level. A tracheotomy tube connected to a ventilator system provided a mixture of room air and oxygen. 
Expired CO2 was monitored and maintained at 3.5%. Body temperature was monitored and maintained at 37 °C by a servo-controlled heating blanket. A laminectomy from T12 to S2 exposed the spinal cord. The head was stabilized in a stereotaxic frame. In vitro patch clamp recordings in colonic DRG neurons: Retrograde fluorescence label injections: Labeling of colon-projecting DRG neurons was performed as previously described[13]. Under general 2% isoflurane anesthesia, the lipid soluble fluorescent dye, 1,1’-dioleyl-3,3,3’,3’-tetramethylindocarbocyanine methane-sulfonate (9-DiI, Invitrogen, Carlsbad, CA) was injected (50 mg/mL) into the muscularis externa on the exposed distal colon in 8 to 10 sites (2 μL each site). To prevent leakage, the needle was kept in place for 1 min following each injection. Dissociation and culture of DRG neurons: Rats were deeply anesthetized with isoflurane followed by decapitation. Lumbosacral (L6–S2) DRGs were collected in ice cold and oxygenated dissecting solution, containing (in mM) 130 NaCl, 5 KCl, 2 KH2PO4, 1.5 CaCl2, 6 MgSO4, 10 glucose, and 10 HEPES, pH 7.2 (305 mOsm). After removal of the connective tissue, the ganglia were transferred to a 5 mL dissecting solution containing collagenase D (1.8 mg/mL; Roche) and trypsin (1.0 mg/mL; Sigma, St Louis, MO), and incubated for 1.5 h at 34.5 °C. DRGs were then taken from the enzyme solution, washed, and put in 0.5-2 mL of the dissecting solution containing DNase (0.5 mg/mL; Sigma). Cells were subsequently dissociated by gentle trituration 10 to 15 times with fire-polished glass pipettes and placed on acid-cleaned glass coverslips. The dissociated DRG neurons were kept in 1 mL DMEM (with 10% FBS) in an incubator (95% O2/5% CO2) at 37 °C overnight. Whole-cell patch clamp recordings from dissociated DRG neurons: Before each experiment, a glass coverslip with DRG neurons was transferred to a recording chamber perfused (1.5 mL/min) with external solution containing (10 mM): 130 NaCl, 5 KCl, 2 KH2PO4, 2.5 CaCl2, 1 MgCl2, 10 HEPES, and 10 glucose, pH adjusted to pH 7.4 with NaOH (300 mOsm) at room temperature. Recording pipettes, pulled from borosilicate glass tubing, with resistance of 1-5 MΩ, were filled with solution containing (in mM): 100 KMeSO3, 40 KCl, and 10 HEPES, pH 7.25 adjusted with KOH (290 mOsm). DiI-labeled neurons were identified by fluorescence microscopy. Whole-cell currents and voltage were recorded from DiI-labeled neurons using a Dagan 3911 patch clamp amplifier. Data were acquired and analyzed by pCLAMP 9.2 (Molecular Devices, Sunnyvale, CA). The currents were filtered at 2–5 kHz and sampled at 50 or 100 s per point. While still under voltage clamp, the Clampex Membrane Test program (Molecular Devices) was used to determine membrane capacitance (Cm) and membrane resistance (Rm), during a 10 ms, 5 mV depolarizing pulse form a holding potential of −60 mV. The configuration was then switched to current clamp (0 pA) to determine other electrophysiological properties. After stabilizing for 2–3 min, the resting membrane potential was measured. The minimum acceptable resting membrane potential was −40 mV. Spontaneous activity (SA) was then recorded over two 30 s periods separated by 60 s without recording, as described by Bedi et al[19]. 
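For orientation, the passive properties mentioned above (membrane resistance and capacitance estimated from a 10 ms, 5 mV test pulse) follow from Ohm's law and the charge carried by the capacitive transient. The sketch below is a generic illustration under idealized assumptions (single-compartment cell, series resistance ignored); it is not the Clampex Membrane Test routine, and the numbers and names are hypothetical.

import numpy as np

def passive_membrane_estimate(i_step_pA, dt_ms, v_step_mV=5.0):
    # i_step_pA: baseline-subtracted current response (pA) to the voltage step
    # Rm ~ dV / I_steady-state; Cm ~ capacitive charge / dV
    i = np.asarray(i_step_pA, dtype=float)
    i_ss = i[-5:].mean()                      # steady-state current (pA)
    rm_Mohm = v_step_mV / i_ss * 1e3          # e.g. 5 mV / 50 pA -> 100 MOhm
    charge_pA_ms = np.sum(i - i_ss) * dt_ms   # area under the capacitive transient
    cm_pF = charge_pA_ms / v_step_mV          # pA*ms per mV is numerically pF
    return rm_Mohm, cm_pF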
Transient A-type K+ current (IA) recording method in patch studies: To record voltage-gated K+ current (Kv), Na+ in control external solution was replaced with equimolar choline and the Ca2+ concentration was reduced to 0.03 mM to suppress Ca2+ currents and to prevent Ca2+ channels becoming Na+ conducting. The reduced external Ca2+ would also be expected to suppress Ca2+-activated K+ currents. The current traces of Kv in DRG neurons were measured at different holding potentials. The membrane potential was held at −100 mV and voltage steps were from −40 to +30 mV to record the total Kv. The membrane potential was held at −50 mV to record the sustained Kv. The IA currents were calculated by subtracting the sustained current from the total current. The current density (in pA/pF) was calculated by dividing the current amplitude by cell membrane capacitance. Real time reverse transcription-polymerase chain reaction (RT-PCR): Total RNA was extracted using RNeasy Mini Kits (QIAGEN, Valencia, CA). One microgram of total RNA was reverse-transcribed using the SuperScriptTM III First-Strand Synthesis System. PCR assays were performed on a StepOnePlus thermal cycler with 18 s as the normalizer using Applied Biosystems primer/probe set Rn02531967_s1 directed against the translated exon IX. Fold-change relative to control was calculated using the ΔΔCt method (Applied Biosystems). Western blot: Samples were lysed in RIPA buffer containing protease inhibitor cocktail and phenylmethanesulfonyl fluoride. Lysates were incubated for 30 min on ice and then centrifuged at 10 000 × g for 10 min at 4 °C. The protein concentration in the supernatant was determined using bicinchoninic acid (BCA) assay kits with bovine serum albumin as a standard. Equal amounts of protein (30 μg per lane) were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to nitrocellulose membranes (Bio-Rad, United States). The membranes were blocked in Li-Cor blocking buffer for 1 h at room temperature and then incubated with primary antibodies. BDNF antibody (Santa Cruz Biotechnologies, Santa Cruz, CA) was used at 1:200 dilution; nerve growth factor (NGF) antibody (Abcam, MA) was used at 1:1000 dilution; β-actin antibody (Sigma Aldrich, St Louis, MO) was used at 1:5000 dilution. The secondary antibodies were donkey anti-rabbit Alexa Fluor 680 (Invitrogen) and goat anti-mouse IRDye 800 (Rockland). Images were acquired and band intensities measured using a Li-Cor Odyssey system (Li-Cor, Lincoln, NE). Immunofluorescence: Frozen sections of colon tissue from control, CAS, CPS and CPS + CAS female rats were mounted on glass slides, and rehydrated in phosphate buffered saline at room temperature. The slides were treated for antigen retrieval and blocked with 10% normal goat serum diluted in 0.3% phosphate buffered saline-Triton for 1 h, and then incubated with NGF primary antibody in antibody diluent (Renoir Red, Biocare Medical, Concord, CA) at 4 °C overnight. The slides were exposed to fluorescent dye-conjugated secondary antibody for 2 h at room temperature, counterstained with 4',6-diamidino-2-phenylindole and coverslipped. Images were taken in fluorescence mode on an Olympus laser scanning confocal microscope and the average signal intensity was calculated by the bundled software. 
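The fold-change computation named in the RT-PCR section above (the ΔΔCt method with 18S as the normalizer) can be written out explicitly. The short sketch below only illustrates the arithmetic; the Ct values and function name are hypothetical and are not taken from the study's data.

def ddct_fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    # delta-Ct = Ct(target) - Ct(reference, e.g. 18S) within each condition
    # delta-delta-Ct = delta-Ct(sample) - delta-Ct(control)
    # fold change relative to control = 2 ** (-delta-delta-Ct)
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values for a target transcript vs 18S in treated and control tissue
print(ddct_fold_change(24.0, 12.0, 26.0, 12.5))   # ~2.8-fold increase over control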
Serum estradiol, ACTH, and norepinephrine levels: Serum estradiol, adrenocorticotropic hormone (ACTH), and norepinephrine levels were measured using specific enzyme-linked immunosorbent assay kits for each analyte (CSB-E05110r, CSB-E06875r, CSB-E07022, Cusabio Biotech CO., United States) following the manufacturer’s instructions. Data analysis: Single fiber responses (impulses/second) to CRD were calculated by subtracting SA from the mean 30 s maximal activity during distension. Fibers were considered responsive if CRD increased their activity to more than 30% above the baseline value. Mechanosensitive single units were classified as high threshold (> 20 mmHg) or low threshold (≤ 20 mmHg) on the basis of their response threshold and profile during CRD. Single fiber activity data were analyzed by analysis of variance with repeated measures; CRD intensity was the repeated factor and the experimental group was the between-group factor. If significant main effects were present, individual means were compared using the Fisher post-hoc test. All authors had access to the study data and reviewed and approved the final manuscript. 
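As a concrete illustration of the single-fiber analysis just described, the sketch below computes the distension response as the mean 30 s maximal activity minus spontaneous activity, applies the 30%-above-baseline responsiveness criterion, and classifies mechanosensitive units by response threshold. Function and variable names are hypothetical; the numeric cutoffs simply restate the criteria above.

```python
def crd_response(peak_rate_imp_s, spontaneous_rate_imp_s):
    """Response (impulses/s) = mean 30 s maximal activity during CRD minus SA."""
    return peak_rate_imp_s - spontaneous_rate_imp_s

def is_responsive(peak_rate_imp_s, baseline_rate_imp_s):
    """Responsive if CRD raises activity to more than 30% above baseline."""
    return peak_rate_imp_s > 1.3 * baseline_rate_imp_s

def classify_unit(response_threshold_mmHg):
    """Low-threshold units respond at <= 20 mmHg; high-threshold units above that."""
    return "low threshold" if response_threshold_mmHg <= 20 else "high threshold"

# Hypothetical fiber: SA of 1.0 imp/s, 5.2 imp/s during distension, 10 mmHg threshold.
print(crd_response(5.2, 1.0))   # 4.2 imp/s
print(is_responsive(5.2, 1.0))  # True
print(classify_unit(10))        # 'low threshold'
```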
RESULTS: Effects of CPS plus CAS on primary afferent responses to CRD in male and female rats: The basal activity of a spinal afferent fiber was defined as the average number of action potentials per second (impulses/sec) in the 60 s period before the onset of a distention stimulus. In male controls, 66% of the afferent fibers under study displayed SA, and SA was significantly higher in female controls than in male controls (0.71 ± 0.21 vs 1.24 ± 0.20 imp/sec; Figure 1B). The average single fiber activity in response to CRD was also significantly higher in female control rats than in male controls (Figure 1C). The enhanced sensitization in female rats mainly came from the low-threshold fibers (Figure 1D and E). To assess the effects of CPS + CAS on colon afferent fiber activity, we compared average single colon afferent fiber activities projecting from dorsal roots S1-L6 in response to CRD in male and female control, CPS, control + CAS, and CPS + CAS rats recorded approximately 24 h after the last stressor. In females, CPS significantly increased single-unit afferent activity in response to CRD vs control female rats (Figure 1F). CAS alone also enhanced single-unit activity compared with control. The increase in average afferent responses after CAS in prenatally stressed female rats (44.0%) was significantly greater than the increase in female control rats (39.3%). In males, CPS had no significant effect on primary afferent responses (Figure 1G). When we compared males to females within each experimental group, we found that the average single fiber activity was significantly higher in female than in male CPS + CAS rats (Figure 1F, G). This increased activity may contribute to the enhanced female visceral hypersensitivity previously reported in this model. Average single-fiber activities were significantly greater in control, CPS, and CAS females than in their corresponding male experimental groups (Figure 1F and G). Both CAS and CPS + CAS rats had significantly increased primary afferent responses compared with control and CPS rats. Thus, our CPS and CAS protocols sensitized colon-projecting primary afferent fibers, with the greatest effects produced by the combination of CPS + CAS in both males and females. Increase in excitability of colon-projecting lumbosacral DRG neurons in female CPS + CAS rats: To elucidate the electrophysiological basis of enhanced stress-induced primary afferent activity in female rats, we performed patch clamp studies on acutely dissociated, retrograde-labeled colon-projecting neurons from the L6-S2 DRGs of control, prenatal stress, adult stress only, and CPS + CAS female rats, isolated 24 h after the last adult stressor (Figure 2A). Input resistance (Figure 2B) and rheobase (Figure 2C) were significantly decreased in neurons from CPS + CAS rats compared with the other three groups. The number of action potentials elicited at either 2 × or 3 × the rheobase was significantly greater in adult stress and CPS + CAS neurons compared with control and CPS neurons (Figure 2D, E). CAS significantly increased action potential overshoot with or without CPS (Figure 2F), but it did not significantly alter other electrophysiological characteristics, such as the number of spontaneous spikes, membrane capacitance (pF), resting membrane potential, cell diameter, time constant, and action potential amplitude and duration (Table 1). Patch clamp recording in colonic dorsal root ganglion neurons from female rats. A: Patch clamp process of cell labeling. 
Under isoflurane anesthesia, the lipid soluble fluorescent dye 9-DiI was injected into the muscularis externa of the exposed distal colon (left figure). Lumbosacral (L6–S2) dorsal root ganglia (upper photograph) were isolated and DiI-labeled neurons were identified by fluorescence microscopy (lower photograph). Electrophysiological properties of each neuron were measured using whole-cell current and voltage clamp protocols (right figure); B: Rheobase from all four experimental groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control or bP < 0.05 vs CAS); C: Representative action potentials (APs) elicited by current injection at 2 × the rheobase in neurons from control, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS female rats; D: Membrane input resistance from all four groups (n = 5 rats, 45 cells in each group, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); E: Number of APs elicited by current injection at either 2 × or 3 × the rheobase in all four experimental groups (two-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: AP overshoot recorded from all four experimental groups (aP < 0.05 vs control); G: The proportion of neurons from each experimental group exhibiting spontaneous APs. Red numbers represent spontaneously firing cells; black numbers represent total cells; H: Representative total, IK and IA current tracings and average values of the potassium currents Itotal, IK and IA in female CPS + CAS, CAS, CPS (n = 15 neurons from 5 rats in each group), and control groups (n = 12 neurons from 5 rats); two-way ANOVA, aP < 0.05 vs each control group. Electrophysiological characteristics of colon-related DRG neurons. Values are means ± standard error. P < 0.05; P < 0.001 vs control group. CAS: Chronic adult stress; CPS: Chronic prenatal stress. The percentage of neurons with SA was significantly greater in CPS + CAS rats than in control or CPS only rats (Figure 2G). Under voltage clamp conditions (Figure 2H), neurons from the female CPS + CAS, CAS, CPS and control groups had IA and sustained outward rectifier K+ currents (IK). Compared with the other three groups, DRG neurons from CPS + CAS rats had significantly reduced average IA (P < 0.05). The average IK density was also decreased, but the change was not significant. 
Effects of CPS and/or CAS on plasma estrogen concentration: We performed vaginal smear tests to identify the estrus cycle phase from vaginal cytology. Estrogen concentration was significantly higher in the CPS proestrus/estrus phase than in the control diestrus, control proestrus/estrus, and CPS diestrus phases (P < 0.05; Figure 3A). Comparison of the plasma estrogen concentrations in control, CAS, CPS, and CPS + CAS rats showed that CPS significantly increased plasma estrogen levels compared with control rats and that CAS increased plasma estrogen levels compared with control and CPS rats (Figure 3B). Effects of chronic prenatal stress, chronic adult stress, ovariectomy, and letrozole treatment on plasma estrogen levels in female rats. 
A: Plasma estrogen level in control and chronic prenatal stress (CPS) rats by estrus cycle phase (n = 8 rats, one-way ANOVA, aP < 0.05 vs control proestrus/estrus (P-E) phase; bP < 0.05 vs CPS diestrus (D) phase); B: Plasma estrogen levels increased in CPS rats and following chronic adult stress (CAS) 24 h after the last adult stressor (n = 8 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); C: Ovariectomy (OVX) significantly reduced CPS female rat plasma estrogen levels before and after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs sham group); D: Letrozole treatment significantly reduced CPS female rat plasma estrogen levels before or after CAS (n = 5 rats, one-way ANOVA, aP < 0.05 vs vehicle group; cP < 0.0001); E: Plasma norepinephrine levels from control, CAS, CPS and CPS + CAS group female rats (n = 5 rats, one-way ANOVA, aP < 0.05 vs control; bP < 0.05 vs CPS); F: Plasma adrenocorticotropic hormone (ACTH) levels from control and CPS + CAS group female rats (n = 5 rats, t-test, aP < 0.05 vs control). To determine whether estrogen contributed to stress-induced visceral hypersensitivity in prenatally stressed females, we reduced plasma estrogen levels by either OVX or letrozole treatment. OVX significantly lowered serum estradiol levels both before and after CAS (Figure 3C). Letrozole treatment, which was continued throughout CAS, also significantly reduced serum estradiol levels (Figure 3D). To study the effects of gender and stress on norepinephrine and ACTH levels, we measured plasma norepinephrine levels in female rats from all four experimental groups. CAS alone significantly increased plasma norepinephrine levels compared with both the controls and CPS alone (Figure 3E). Plasma norepinephrine levels were significantly increased in CPS + CAS rats compared with CAS alone, as well as with controls and CPS. Plasma ACTH levels were significantly increased in CPS + CAS rats compared with controls (Figure 3F). 
Effects of Letrozole treatment on colon DRG neuron excitability: We performed patch clamp experiments on acutely isolated, retrograde-labeled DRG neurons from CPS + CAS females with or without letrozole treatment, 24 h after the last adult stressor. Letrozole treatment significantly increased rheobase (Figure 4A) and significantly reduced input resistance (Figure 4B). Action potential overshoot (Figure 4C) and the number of action potentials elicited by current injection at either 2 × or 3 × rheobase (Figure 4D) were also significantly reduced by letrozole treatment. Other electrophysiological properties were not significantly altered (Table 2). We also recorded electromyographic activity to determine whether OVX or systemic letrozole treatment reduced visceromotor responses in female CPS + CAS rats. Together, the findings demonstrate a significant decrease in the excitability of colon-projecting L6-S2 neurons. Effects of Letrozole treatment on colon dorsal root ganglion neuron excitability. A: Rheobase (n = 45 cells in 6 rats in each group, t-test, cP < 0.001 vs Veh. + chronic adult stress [CAS] + chronic prenatal stress [CPS]); B: Membrane input resistance (RIn) (t-test, aP < 0.05); C: Action potential (AP) overshoot (t-test, aP < 0.05); D: Number of APs elicited by current injection at 2 × and 3 × rheobase (two-way ANOVA, aP < 0.05; bP < 0.01). Electrophysiological characteristics of colon-related DRG neurons after Letrozole treatment. Values are means ± standard error. P < 0.05; P < 0.001 vs control group. 
CAS: Chronic adult stress; CPS: Chronic prenatal stress. Spinal cord BDNF levels regulated by estrogen: To investigate the effect of estrogen on BDNF expression, we measured BDNF mRNA and protein levels in the lumbar-sacral spinal cord. Systemic estradiol administration to naïve cycling females produced significant increases in plasma estrogen (Figure 5A), lumbar-sacral spinal cord BDNF mRNA (Figure 5B), and protein (Figure 5C). We also measured BDNF mRNA and protein levels in the lumbar-sacral spinal cords of OVX and sham CPS + CAS female rats; BDNF mRNA and protein expression were significantly suppressed by OVX compared with sham rats. In addition, intrathecal infusion of estrogen into naïve female rats significantly increased BDNF protein levels, supporting a contribution of estrogen to the response to visceral pain[13]. Brain-derived neurotrophic factor expression in lumbar-sacral spinal cord is regulated by estrogen. A: Plasma estrogen levels in cycling females that received a bolus estradiol (E2) infusion on day 1; B: Lumbar-sacral spinal cord brain-derived neurotrophic factor (BDNF) mRNA following bolus estrogen infusion; C: Lumbar-sacral spinal cord BDNF protein following bolus estrogen infusion (n = 8 rats in each group, two-way ANOVA, aP < 0.05 vs vehicle group). 
Peripheral NGF level increased in CPS + CAS female rats: We examined NGF expression in the colons of females from all four experimental groups by immunohistochemistry (Figure 6A). Morphometric analysis showed that CAS and CPS + CAS significantly increased NGF levels in the colon wall, with the increase in CPS + CAS significantly greater than that with CAS alone (Figure 6B). Western blotting showed that NGF protein was significantly upregulated in CPS + CAS rats compared with controls (Figure 6C). Nerve growth factor expression level in the colon wall. A: Immunohistochemical staining of nerve growth factor (NGF; green) with nuclear counterstaining (blue) in colon walls from control, chronic adult stress (CAS), chronic prenatal stress (CPS) and CPS + CAS female rats. Representative images at × 400 magnification are shown; B: Quantification of NGF levels in the colon wall by immunohistochemistry (IHC) (n = 4 rats in each group, one-way ANOVA, aP < 0.05 vs control group; bP < 0.05 vs CPS group); C: Western blots of NGF protein from control and CPS + CAS female rat colon wall tissue (n = 6 rats in each group, t-test, aP < 0.05 vs control group). 
DISCUSSION: Enhanced CPS-induced visceral hypersensitivity was associated with an increase in the responses of lumbosacral nerve fibers to CRD in both male and female offspring. These findings are further supported by the patch clamp data showing increased excitability of colon-projecting DRG neurons from females. The magnitude of the sensitization was greatest in female CPS + CAS rats, suggesting that it made a major contribution to the enhanced female visceral hypersensitivity observed in our model. Chronic stress is known to increase the excitability of colon-projecting DRG neurons in rats and mice. In adult male Sprague Dawley rats, colon DRG neuron sensitization was shown to be driven by increases in NGF expression in the colon muscularis externa[13]. In our model, we also observed a significant increase in colon NGF, but its potential role in primary afferent sensitization and visceral hypersensitivity was not investigated. 
Other studies in male mice showed that stress, in the form of water avoidance, significantly increased the excitability of colon-projecting DRG neurons and that the combined activity of the stress mediators corticosterone and norepinephrine increased DRG neuron excitability in vitro[20,21]. We previously found significant increases in the serum levels of norepinephrine in CPS + CAS females. However, daily systemic treatment with adrenergic antagonists during the adult stress protocol failed to reduce visceral hypersensitivity in female neonatal + adult stress rats[13], suggesting that norepinephrine did not play a major role in the acquisition of enhanced female visceral hypersensitivity or primary afferent sensitization in our model. In patch clamp studies of dissociated lumbar-sacral neurons, we found significant decreases in transient potassium (IA) currents in neurons isolated from CPS + CAS females compared with the other three experimental groups. Declines in A-type Kv currents in DRG neurons have been associated with persistent pain in multiple chronic pain models[22]. Whether the decline was caused by changes in channel properties or expression was not investigated in this study. However, another study demonstrated that estrogen significantly shifted the activation curve of IA currents in the hyperpolarizing direction and that estrogen inhibited voltage-gated K+ channels in mouse DRG neurons through a membrane ER-activated nongenomic pathway[23]. Our results showed that the excitability of colon-projecting neurons in CPS + CAS females was significantly reduced by systemic letrozole treatment, suggesting that estrogen contributed to the sensitization process. Previous studies have shown that estrogen receptors expressed on primary afferent neurons contribute to enhanced sensitivity in various pain models[24-26]. One study found no decline in the responses of colon-projecting nerve fibers to CRD following OVX and no detectable estrogen receptor alpha immunoreactivity in colon-projecting DRG neurons[27]. The reasons for the differing results are not clear, but local production of estrogen in DRG neurons could be sufficient to sustain sensitization. NGF and its receptors also play important roles in the mechanisms of visceral pain and hyperalgesia in women. For example, endometriosis is an estrogen-dependent and commonly diagnosed condition whose main symptoms are various types of pelvic pain that seriously affect physical and mental health, yet the mechanisms of the abdominal pain remain unclear. Studies have shown NGF to be an inflammatory mediator and modulator of pain in adulthood[28]. 
A key component of this hypersensitivity was estrogen-dependent sensitization of primary afferent colon neurons. Our findings provide key scientific evidence in a preclinical model in support of developing gender-based treatments for abdominal pain in IBS. Summary diagram of estrogen-enhanced visceral hyperalgesia investigated in the chronic prenatal stress plus chronic adult stress model. BDNF: Brain-derived neurotrophic factor; NGF: Nerve growth factor.
Background: Chronic stress during pregnancy may increase visceral hyperalgesia in the offspring in a sex-dependent way, and adding adult stress in the offspring may further increase this sensitivity. Based on evidence implicating estrogen in exacerbating visceral hypersensitivity in female rodents in preclinical models, we predicted that chronic prenatal stress (CPS) + chronic adult stress (CAS) would maximize visceral hyperalgesia and that estrogen plays an important role in colonic hyperalgesia. Methods: We established a CPS plus CAS rodent model in which a balloon was used to distend the colorectum. Single-fiber recording in vivo and patch clamp experiments in vitro were used to monitor colonic afferent neuron activity. Reverse transcription-polymerase chain reaction, western blot, and immunofluorescence were used to study the effects of CPS and CAS on colon primary afferent sensitivity. We used ovariectomy and letrozole, respectively, to reduce estrogen levels in female rats in order to assess the role of estrogen in female-specific enhanced primary afferent sensitization. Results: Spontaneous activity and single fiber activity were significantly greater in females than in males. The enhanced sensitization in female rats mainly came from low-threshold fibers. CPS significantly increased single-unit afferent fiber activity in L6-S2 dorsal roots in response to colorectal distension, and this activity was further enhanced by CAS. In addition, the excitability of colon-projecting dorsal root ganglion (DRG) neurons increased in CPS + CAS rats and was associated with a decrease in transient A-type K+ currents. Like ovariectomy, treatment with the aromatase inhibitor letrozole significantly reduced estrogen levels in female rats, and letrozole-treated rats had decreased colonic DRG neuron excitability. Intrathecal infusion of estrogen increased brain-derived neurotrophic factor (BDNF) protein levels and contributed to the response to visceral pain. Western blotting showed that nerve growth factor protein was upregulated in CPS + CAS rats. Conclusions: This study adds to the evidence that estrogen-dependent sensitization of primary afferent colon neurons is involved in the development of chronic stress-induced visceral hypersensitivity in female rats.
INTRODUCTION: Visceral pain of colonic origin is the most prominent symptom in irritable bowel syndrome (IBS) patients[1]. Female IBS patients report more severe pain that occurs more frequently and with longer episodes than in male patients[1,2]. The ratio of female to male IBS is about 2:1 among patients seen in medical clinics[3]. Moreover, females have a higher prevalence of IBS co-morbidities such as anxiety and depression[4,5] and are more vulnerable to stress-induced exacerbation of IBS symptoms than males[3,6,7]. Clinical studies show that early life adverse experiences and ongoing chronic stress are risk factors for the development of IBS symptoms, especially abdominal pain[8-10]. These factors contribute to the development of visceral hypersensitivity, a key component of the IBS symptom complex and one that may be responsible for symptoms of pain[11,12]. Our previous research found that the female offspring of mothers subjected to chronic prenatal stress (CPS) had markedly greater visceral sensitivity than their male littermates following challenge with another chronic adult stress (CAS) protocol. A critical molecular event in the development of this female-enhanced visceral hypersensitivity is upregulation of brain-derived neurotrophic factor (BDNF) expression in the lumbar-sacral spinal cord of female CPS + CAS rats[13]. However, the neurophysiological changes underlying the enhanced female-specific visceral hypersensitivity and the role of hormones in the development of stress-induced visceral hypersensitivity are not well understood. Visceral hypersensitivity in IBS involves abnormal changes in neurophysiology throughout the brain-gut axis. In IBS, there is evidence for sensitization of primary afferents to jejunal distention and electrical stimulation[14], and there is evidence for increased sensitivity of lumbar splanchnic afferents[15,16]. In animal models of either early life adverse events or adult stress-induced visceral hypersensitivity[17], there is evidence of colon primary afferent sensitization. However, these studies were performed in male rodents. Therefore, in this study, we established a CPS and CAS rodent model to analyze the impact on female colon afferent neuron function and the role of estrogen. Our hypothesis was that female CPS offspring subjected to chronic stress as adults would exhibit greater colonic dorsal root ganglion (DRG) neuron sensitization than their male littermates, and that the enhanced visceral sensitization and primary afferent sensitization in females is estrogen dependent. CONCLUSION: The authors would like to acknowledge the essential intellectual contributions of our recently deceased mentor, Dr. Sushil K Sarna. His guidance was essential to the successful completion of these studies. Thanks to Dr. Usman Ali for assistance in preparing and modifying the manuscript.
15,994
391
[ 434, 92, 513, 104, 212, 780, 82, 230, 141, 52, 143, 4991, 409, 719, 542, 312, 243, 228, 598, 207 ]
21
[ "cps", "rats", "cas", "control", "female", "stress", "cps cas", "05", "figure", "significantly" ]
[ "pain ibs summary", "abdominal pain ibs", "visceral hypersensitivity ibs", "irritable bowel syndrome", "development ibs symptoms" ]
null
[CONTENT] Chronic prenatal stress | Estrogen | Visceral pain | Neuronal sensitization | Excitability | Letrozole [SUMMARY]
[CONTENT] Chronic prenatal stress | Estrogen | Visceral pain | Neuronal sensitization | Excitability | Letrozole [SUMMARY]
null
[CONTENT] Chronic prenatal stress | Estrogen | Visceral pain | Neuronal sensitization | Excitability | Letrozole [SUMMARY]
[CONTENT] Chronic prenatal stress | Estrogen | Visceral pain | Neuronal sensitization | Excitability | Letrozole [SUMMARY]
[CONTENT] Chronic prenatal stress | Estrogen | Visceral pain | Neuronal sensitization | Excitability | Letrozole [SUMMARY]
[CONTENT] Animals | Colon | Estrogens | Female | Ganglia, Spinal | Hyperalgesia | Male | Mice | Neurons | Pregnancy | Rats | Rats, Sprague-Dawley | Visceral Pain [SUMMARY]
[CONTENT] Animals | Colon | Estrogens | Female | Ganglia, Spinal | Hyperalgesia | Male | Mice | Neurons | Pregnancy | Rats | Rats, Sprague-Dawley | Visceral Pain [SUMMARY]
null
[CONTENT] Animals | Colon | Estrogens | Female | Ganglia, Spinal | Hyperalgesia | Male | Mice | Neurons | Pregnancy | Rats | Rats, Sprague-Dawley | Visceral Pain [SUMMARY]
[CONTENT] Animals | Colon | Estrogens | Female | Ganglia, Spinal | Hyperalgesia | Male | Mice | Neurons | Pregnancy | Rats | Rats, Sprague-Dawley | Visceral Pain [SUMMARY]
[CONTENT] Animals | Colon | Estrogens | Female | Ganglia, Spinal | Hyperalgesia | Male | Mice | Neurons | Pregnancy | Rats | Rats, Sprague-Dawley | Visceral Pain [SUMMARY]
[CONTENT] pain ibs summary | abdominal pain ibs | visceral hypersensitivity ibs | irritable bowel syndrome | development ibs symptoms [SUMMARY]
[CONTENT] pain ibs summary | abdominal pain ibs | visceral hypersensitivity ibs | irritable bowel syndrome | development ibs symptoms [SUMMARY]
null
[CONTENT] pain ibs summary | abdominal pain ibs | visceral hypersensitivity ibs | irritable bowel syndrome | development ibs symptoms [SUMMARY]
[CONTENT] pain ibs summary | abdominal pain ibs | visceral hypersensitivity ibs | irritable bowel syndrome | development ibs symptoms [SUMMARY]
[CONTENT] pain ibs summary | abdominal pain ibs | visceral hypersensitivity ibs | irritable bowel syndrome | development ibs symptoms [SUMMARY]
[CONTENT] cps | rats | cas | control | female | stress | cps cas | 05 | figure | significantly [SUMMARY]
[CONTENT] cps | rats | cas | control | female | stress | cps cas | 05 | figure | significantly [SUMMARY]
null
[CONTENT] cps | rats | cas | control | female | stress | cps cas | 05 | figure | significantly [SUMMARY]
[CONTENT] cps | rats | cas | control | female | stress | cps cas | 05 | figure | significantly [SUMMARY]
[CONTENT] cps | rats | cas | control | female | stress | cps cas | 05 | figure | significantly [SUMMARY]
[CONTENT] ibs | visceral | patients | visceral hypersensitivity | hypersensitivity | development | sensitization | pain | stress | female [SUMMARY]
[CONTENT] 10 | fibers | control | ml | solution | crd | rats | stress | current | day [SUMMARY]
null
[CONTENT] visceral | estrogen | hypersensitivity | visceral hypersensitivity | stress | hypersensitivity offspring | visceral hypersensitivity offspring | chronic | neurons | enhanced visceral [SUMMARY]
[CONTENT] cps | cas | rats | control | stress | female | estrogen | neurons | 05 | group [SUMMARY]
[CONTENT] cps | cas | rats | control | stress | female | estrogen | neurons | 05 | group [SUMMARY]
[CONTENT] ||| ||| CPS | CAS [SUMMARY]
[CONTENT] CPS | CAS ||| ||| CPS | CAS ||| [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| CPS | CAS ||| CPS | CAS ||| ||| CPS | CAS ||| ||| ||| ||| CPS | L6-S2 ||| CAS ||| CPS | K+ ||| ||| DRG neuron ||| ||| CPS ||| [SUMMARY]
[CONTENT] ||| ||| CPS | CAS ||| CPS | CAS ||| ||| CPS | CAS ||| ||| ||| ||| CPS | L6-S2 ||| CAS ||| CPS | K+ ||| ||| DRG neuron ||| ||| CPS ||| [SUMMARY]
Reproducibility and validity of dietary patterns assessed by a food frequency questionnaire used in the 5-year follow-up survey of the Japan Public Health Center-Based Prospective Study.
22343330
Analysis of dietary pattern is increasingly popular in nutritional epidemiology. However, few studies have examined the validity and reproducibility of dietary patterns. We assessed the reproducibility and validity of dietary patterns identified by a food frequency questionnaire (FFQ) used in the 5-year follow-up survey of the Japan Public Health Center-Based Prospective Study (JPHC Study).
BACKGROUND
The participants were a subsample (244 men and 254 women) from the JPHC Study. Principal component analysis was used to identify dietary patterns from 28- or 14-day dietary records and 2 FFQs. To assess reproducibility and validity, we calculated Spearman correlation coefficients between dietary pattern scores derived from FFQs separated by a 1-year interval, and between dietary pattern scores derived from dietary records and those derived from a FFQ completed after the dietary records, respectively.
METHODS
We identified 3 Japanese dietary patterns from the dietary records and 2 FFQs: prudent, westernized, and traditional. Regarding reproducibility, Spearman correlation coefficients between the 2 FFQs ranged from 0.55 for the westernized Japanese pattern in men and the prudent Japanese pattern in women to 0.77 for the traditional Japanese pattern in men. Regarding validity, the corresponding values between dietary records and the FFQ ranged from 0.32 for the westernized Japanese pattern in men to 0.63 for the traditional Japanese pattern in women.
RESULTS
Acceptable reproducibility and validity were shown by the 3 dietary patterns identified by principal component analysis based on the FFQ used in the 5-year follow-up survey of the JPHC Study.
CONCLUSIONS
[ "Adult", "Aged", "Cohort Studies", "Data Collection", "Diet", "Female", "Follow-Up Studies", "Humans", "Japan", "Male", "Middle Aged", "Nutrition Surveys", "Nutritional Status", "Surveys and Questionnaires" ]
3798621
INTRODUCTION
In nutritional epidemiology, there is increased interest in the analysis of dietary pattern, a comprehensive variable that integrates consumption of several foods or food groups. The effect of a single nutrient, food, or food group on disease risk and morbid conditions is difficult to assess in observational studies because foods and nutrients are consumed in combination and their complex effects are likely to be interactive or synergistic.1 However, dietary pattern can overcome problems relating to the close intercorrelation among foods or nutrients and is expected to have a greater impact on disease risk than any single nutrient.1–3 Therefore, analysis using dietary pattern could be used as a complementary approach in nutritional epidemiology. Indeed, many previous studies have reported associations of dietary patterns with mortality, several diseases (such as cancer, cardiovascular disease, and type 2 diabetes), and biomarkers.4,5 Extraction of dietary pattern by principal component analysis—which is often used to identify dietary patterns—requires some arbitrary decisions to group food items, determine the number of factors to retain, select the method of rotation of the initial factors, and label the dietary patterns.6 Moreover, dietary patterns may differ with regard not only to age and sex but also resident area, ethnic group, and culture. Therefore, it might be necessary to examine the validity and reproducibility of the identified dietary patterns among the study population of each study. To date, several studies have examined the reproducibility and validity of dietary patterns.7–13 However, among studies of the association between dietary patterns and disease risk in Japan, only 1 examined the validity of dietary patterns in Japan12 and none examined reproducibility. Therefore, we assessed the reproducibility and validity of dietary patterns identified by principal component analysis of a food frequency questionnaire (FFQ) used in the 5-year follow-up survey of the Japan Public Health Center-Based Prospective Study (JPHC Study).
METHODS
JPHC Study
The JPHC Study is a population-based prospective study of cancer and cardiovascular disease that was launched in 1990 for cohort I and in 1993 for cohort II.14 The participants were 140 420 residents of 11 public health center areas, aged 40 to 59 years for cohort I and 40 to 69 years for cohort II at each baseline survey. At baseline and at the 5- and 10-year follow-ups, a questionnaire survey was conducted to obtain information on medical histories and health-related lifestyle, including diet.

Study population
The subjects of the reproducibility and validity study of the FFQ used in the 5-year follow-up survey were a subsample of the participants in JPHC Study cohorts I and II. Married couples were recruited. The present study used a data set for 498 subjects (244 men and 254 women; 209 in cohort I and 289 in cohort II) who completed 2 FFQs separated by a 1-year interval and dietary records for a total of 28 or 14 days. Details of the validation study have been described elsewhere.15,16 There was no significant difference in mean age between participants of the validation study and cohort members among men and women in cohort I or among men in cohort II; however, there was a 3-year age difference among cohort II women (mean age, 56 years for the validation subsample vs 59 years for the cohort participants). In the present study, we identified dietary patterns using FFQ data from the 5-year follow-up survey. Oral or written informed consent was obtained from the participants before the study. The study did not undergo ethical review because it was conducted before the advent of the ethical guidelines for epidemiologic research in Japan, which now mandate such approval.

Dietary assessment
The participants completed the FFQ twice, with an approximately 1-year interval. The FFQ for the evaluation of validity (FFQ_V) was completed after the last dietary record and was compared with the dietary records. The FFQ for the evaluation of reproducibility (FFQ_R) was administered 1 year before or after the FFQ_V and was compared with the FFQ_V. The FFQ included questions on 138 food items (with standard portions/units and eating frequency) and 14 supplementary questions covering the previous year, from which a composition table was developed for 147 food items.17 For most food items, 9 response options were available for eating frequency: rarely, 1 to 3 times/month, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, once a day, 2 to 3 times/day, 4 to 6 times/day, and 7 or more times/day. Slightly different options were used for beverage intake: rarely, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, 1 cup/day, 2 to 3 cups/day, 4 to 6 cups/day, 7 to 9 cups/day, and 10 or more cups/day. A standard portion size was specified for each food item, and respondents were asked to choose their usual portion size from 3 options (less than half the standard portion, the standard portion, or more than 1.5 times the standard portion). We calculated the daily intake of most foods by multiplying daily frequency of consumption by usual portion size. In the present analysis, we used 134 food and beverage items from the FFQ (excluding 11 items that correlated strongly with others and 2 items with no energy or nutrients).

The participants completed a 28- or 14-day (Okinawa) dietary record within 1 year; that is, 7-day dietary records were collected 4 (or 2) times at 3-month (or 6-month) intervals during the course of a year. The survey method using dietary records has been described elsewhere.15,16,18 We matched the 1101 unique food codes in the dietary records to food items on the FFQ to ensure that daily food intakes derived from the dietary records were comparable with those derived from the FFQ. A total of 558 food codes in the dietary records were matched to the 134 food items on the FFQ.

Statistical analysis
Men and women were analyzed separately. Some foods or food groups that were similar in nutritional content or culinary use were combined, leaving 48 food groups for the present analysis (Table 1). When information from dietary records was used to combine 2 or more foods into 1 food group, the amount of food before cooking was used for all food items except noodles and rice. Consumption of noodles and rice was expressed as the weight of boiled noodles and steamed rice, respectively. Consumption of miso soup and of beverages such as coffee and green tea was expressed as the weight of the consumed infusion rather than the weight of miso, coffee beans, instant coffee, or tea leaves. We calculated the means and standard deviations of each food group intake estimated from the dietary records and the 2 FFQs. In addition, the reproducibility and validity of the 48 food group intakes were evaluated by using Spearman correlation coefficients between intakes from the 2 FFQs and between intakes derived from the dietary records and those derived from the FFQ_V.

To identify dietary patterns, we performed principal component analysis based on log-transformed intakes of the 48 food groups for each of the 2 FFQs and the dietary records. The factors were rotated by orthogonal transformation (varimax rotation) to maintain uncorrelated factors and greater interpretability. We considered eigenvalues, the scree plot, and the interpretability of the factors to determine the number of factors to retain, and identified 3 dietary patterns (factors) in both men and women. The dietary patterns were named according to the food items showing high loading (absolute value) in each pattern. The factor score for each dietary pattern was calculated by summing intakes of food items weighted by their factor loadings. The scores were energy-adjusted using the residual method. When we examined reproducibility and validity using dietary pattern scores without energy adjustment, or based on energy-adjusted intakes of food groups, the derived dietary patterns and their reproducibility and validity were similar to those based on energy-adjusted scores.

We examined the correspondence of each dietary pattern identified from the dietary records with the dietary patterns extracted from the FFQ_V by comparing the pattern of food items showing high loading, and repeated this procedure to link the dietary patterns identified from the FFQ_R and FFQ_V. For each dietary pattern, Spearman correlation coefficients between factor scores derived from the dietary records and those derived from the FFQ_V were calculated to evaluate the validity of the FFQ. In addition, to determine the reproducibility of the FFQ, Spearman correlation coefficients between factor scores derived from the 2 FFQs administered 1 year apart were calculated. We also estimated Spearman correlation coefficients between factor loadings from the FFQ in each population (subsample of the validity study, overall cohort, men, women, and men and women combined) to compare the dietary patterns derived from the FFQ between the validity-study subsample and the overall cohort, and between men or women and men and women combined in the overall cohort. All analyses were performed with Statistical Analysis System (SAS) version 9.1 (SAS Institute, Cary, NC, USA).
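The dietary assessment above reduces each FFQ response to an estimated daily intake by multiplying the reported daily frequency by the usual portion size. The sketch below illustrates that arithmetic; the frequency-to-times-per-day mapping, the portion multipliers, and the 140 g rice portion are illustrative assumptions, not the actual JPHC conversion table.

```python
# Minimal sketch of converting FFQ responses to an estimated daily intake (g/day).
# The frequency mapping and standard portion sizes are illustrative assumptions.

FREQ_PER_DAY = {
    "rarely": 0.0,
    "1-3 times/month": 2.0 / 30,
    "1-2 times/week": 1.5 / 7,
    "3-4 times/week": 3.5 / 7,
    "5-6 times/week": 5.5 / 7,
    "once a day": 1.0,
    "2-3 times/day": 2.5,
    "4-6 times/day": 5.0,
    "7+ times/day": 7.0,
}

# Portion options: less than half, standard, or more than 1.5x the standard portion.
PORTION_MULTIPLIER = {"small": 0.5, "standard": 1.0, "large": 1.5}


def daily_intake_g(frequency: str, portion: str, standard_portion_g: float) -> float:
    """Estimated daily intake = daily frequency x usual portion size (grams)."""
    return FREQ_PER_DAY[frequency] * PORTION_MULTIPLIER[portion] * standard_portion_g


# Example: rice eaten 2-3 times/day at a (hypothetical) 140 g standard portion.
print(round(daily_intake_g("2-3 times/day", "standard", 140.0), 1))  # -> 350.0
```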
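The pattern-extraction step (principal component analysis of log-transformed food group intakes, varimax rotation, and loading-weighted factor scores) can be sketched as follows. This is a minimal illustration, not the SAS procedure used in the study: the varimax routine, the shift applied before the log transform, and the toy data are assumptions.

```python
import numpy as np


def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal (varimax) rotation of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        tmp = rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))
        u, s, vt = np.linalg.svd(loadings.T @ tmp)
        rotation = u @ vt
        new_var = s.sum()
        if new_var < var * (1 + tol):
            break
        var = new_var
    return loadings @ rotation


def dietary_patterns(intakes, n_patterns=3):
    """Extract dietary patterns from an (n_subjects x n_food_groups) intake matrix."""
    x = np.log(intakes + 1.0)                         # log-transform (shift handles zero intakes)
    x = (x - x.mean(axis=0)) / x.std(axis=0)          # standardize each food group
    _, s, vt = np.linalg.svd(x, full_matrices=False)  # PCA via SVD of the standardized matrix
    eigvals = s ** 2 / (x.shape[0] - 1)
    loadings = vt[:n_patterns].T * np.sqrt(eigvals[:n_patterns])
    rotated = varimax(loadings)                       # varimax-rotated loadings
    scores = x @ rotated                              # factor scores: loading-weighted sums
    return rotated, scores


# Toy example: random intakes for 200 subjects and 48 food groups.
rng = np.random.default_rng(1)
intakes = rng.gamma(shape=2.0, scale=50.0, size=(200, 48))
loadings, scores = dietary_patterns(intakes)
print(loadings.shape, scores.shape)  # (48, 3) (200, 3)
```

In practice the retained patterns would be named by inspecting which food groups carry the largest absolute loadings, as done in the study.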
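The reproducibility and validity evaluation then comes down to Spearman correlations between pattern scores from two instruments after energy adjustment by the residual method (regress the score on total energy intake, keep the residual, and add back the mean). A minimal sketch, assuming the pattern scores and energy intakes are already available as arrays; the simulated data are purely illustrative.

```python
import numpy as np
from scipy.stats import spearmanr


def energy_adjust(score, energy):
    """Residual-method energy adjustment: regress the pattern score on total
    energy intake, keep the residual, and add back the mean score."""
    x = np.column_stack([np.ones_like(energy), energy])
    beta, *_ = np.linalg.lstsq(x, score, rcond=None)
    return score - x @ beta + score.mean()


# Hypothetical pattern scores for the same 200 subjects from two instruments
# (FFQ_R vs FFQ_V for reproducibility, or DR vs FFQ_V for validity).
rng = np.random.default_rng(0)
energy = rng.normal(2000.0, 300.0, 200)               # total energy intake (kcal/day)
score_a = 0.001 * energy + rng.normal(0.0, 1.0, 200)  # e.g., FFQ_R pattern score
score_b = 0.7 * score_a + rng.normal(0.0, 1.0, 200)   # e.g., FFQ_V pattern score

rho, p_value = spearmanr(energy_adjust(score_a, energy), energy_adjust(score_b, energy))
print(f"Spearman rho = {rho:.2f} (P = {p_value:.3g})")
```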
RESULTS
The 48 food group intakes calculated from the 28- or 14-day dietary records and the 2 FFQs, and their correlations, are shown in Table 2 for men and Table 3 for women. Regarding reproducibility, Spearman correlation coefficients between the 2 FFQs ranged from 0.21 for wine to 0.80 for coffee in men (Table 2) and from 0.37 for sake to 0.81 for miso soup in women (Table 3). In particular, intakes of rice, bread, miso soup, pickles, other fruit, salt fish, processed meat, milk, dairy products, green tea, and coffee showed high correlation coefficients (r > 0.60) between the 2 FFQs in both men and women. Regarding validity, Spearman correlation coefficients between the dietary records and the FFQ_V ranged from 0.06 for oily fish to 0.75 for coffee in men (Table 2) and from 0.12 for shochu to 0.80 for coffee in women (Table 3). In both men and women, intakes of coffee, bread, pickles, and milk estimated by the FFQ_V showed a high correlation (r > 0.60) with those estimated by the dietary records, whereas the validity of oily fish between the dietary records and the FFQ_V was very low. (Footnotes to Tables 2 and 3: DR, dietary record; FFQ, food frequency questionnaire; SD, standard deviation. FFQ_R was administered 1 year after or before FFQ_V; FFQ_V was administered after completion of the DRs; the 28- and 14-day DRs were collected over a 1-year period.)

Principal component analysis identified 3 dietary patterns from the dietary records and the 2 FFQs (Table 4 for men and Table 5 for women), and they were similar in terms of factor loading pattern across the data sources and between sexes. Of the 3 dietary patterns, a pattern highly loaded by intakes of vegetables, fruit, potatoes, soy products, mushrooms, seaweed, oily fish, and green tea was named the prudent Japanese pattern. A dietary pattern associated with high intakes of bread, meat, processed meat, fruit juice, coffee, black tea, soft drinks, sauces, mayonnaise, and dressing was named the westernized Japanese pattern. Another dietary pattern was characterized by high intakes of rice, miso soup, pickles, salmon, salty fish, seafood other than fish, fruit, and sake (men only), and was named the traditional Japanese pattern. The 3 dietary patterns from the dietary records, the FFQ_R, and the FFQ_V explained 23.9%, 29.4%, and 26.5% of the variance, respectively, in men and 23.0%, 24.9%, and 32.9%, respectively, in women. (Footnotes to Tables 4 and 5: FFQ, food frequency questionnaire; DR, dietary record. For simplicity, factor loadings less than ±0.10 are not listed; FFQ_R was administered 1 year after or before FFQ_V; FFQ_V was administered after completion of the DRs; the 28- and 14-day DRs were collected over a 1-year period.)

Three similar dietary patterns were identified in the overall cohort for both men and women (Appendix). The Spearman correlation coefficients between factor loadings from the FFQ_R in the subsample of the validity study and those from the FFQ in the overall cohort for the 48 food groups were 0.89 in men and 0.82 in women for the prudent Japanese pattern, 0.82 in men and 0.75 in women for the westernized Japanese pattern, and 0.75 in men and 0.47 in women for the traditional Japanese pattern. In the overall cohort, the corresponding values between factor loadings for men or women and those for men and women combined were 0.98 in both men and women for the prudent Japanese pattern, 0.96 in men and 0.95 in women for the westernized Japanese pattern, and 0.76 in men and 0.74 in women for the traditional Japanese pattern.

The Spearman correlation coefficients between the dietary records and the 2 FFQs are shown in Table 6. Regarding reproducibility between the 2 FFQs, all 3 dietary patterns were acceptable in both men and women. In particular, the traditional Japanese pattern in men and the westernized Japanese pattern in women were highly reproducible (correlation coefficients: 0.77 in men and 0.71 in women). Regarding validity, the traditional Japanese pattern had higher correlation coefficients between the dietary records and the FFQ_V than the other patterns in both men and women (correlation coefficients: 0.49 in men and 0.63 in women), whereas the westernized Japanese pattern in men (0.32) had the lowest correlation coefficient of the 3 dietary patterns. (Footnotes to Table 6: DR, dietary record; FFQ, food frequency questionnaire. Dietary pattern scores were adjusted for energy intake by using the residual method; FFQ_R was administered 1 year after or before FFQ_V; FFQ_V was administered after completion of the DRs; the 28- or 14-day DRs were collected over a 1-year period. All correlations: P < 0.001.)
null
null
[ "INTRODUCTION", "JPHC Study", "Study population", "Dietary assessment", "Statistical analysis" ]
[ "In nutritional epidemiology, there is increased interest in the analysis of dietary pattern, a comprehensive variable that integrates consumption of several foods or food groups. The effect of a single nutrient, food, or food group on disease risk and morbid conditions is difficult to assess in observational studies because foods and nutrients are consumed in combination and their complex effects are likely to be interactive or synergistic.1 However, dietary pattern can overcome problems relating to the close intercorrelation among foods or nutrients and is expected to have a greater impact on disease risk than any single nutrient.1–3 Therefore, analysis using dietary pattern could be used as a complementary approach in nutritional epidemiology. Indeed, many previous studies have reported associations of dietary patterns with mortality, several diseases (such as cancer, cardiovascular disease, and type 2 diabetes), and biomarkers.4,5\nExtraction of dietary pattern by principal component analysis—which is often used to identify dietary patterns—requires some arbitrary decisions to group food items, determine the number of factors to retain, select the method of rotation of the initial factors, and label the dietary patterns.6 Moreover, dietary patterns may differ with regard not only to age and sex but also resident area, ethnic group, and culture. Therefore, it might be necessary to examine the validity and reproducibility of the identified dietary patterns among the study population of each study. To date, several studies have examined the reproducibility and validity of dietary patterns.7–13 However, among studies of the association between dietary patterns and disease risk in Japan, only 1 examined the validity of dietary patterns in Japan12 and none examined reproducibility. Therefore, we assessed the reproducibility and validity of dietary patterns identified by principal component analysis of a food frequency questionnaire (FFQ) used in the 5-year follow-up survey of the Japan Public Health Center-Based Prospective Study (JPHC Study).", "The JPHC Study is a population-based prospective study of cancer and cardiovascular disease and was launched in 1990 for cohort I and in 1993 for cohort II.14 The participants were 140 420 residents of 11 public health center areas and were aged 40 to 59 years for cohort I and aged 40 to 69 years for cohort II at each baseline survey. At baseline and at the 5- and 10-year follow-ups, a questionnaire survey was conducted to obtain information on medical histories and health-related lifestyle, including diet.", "The subjects of the reproducibility and validity study of the FFQ used in the 5-year follow-up survey were a subsample of the participants in JPHC Study cohorts I and II. Married couples were recruited. The present study used a data set for 498 subjects (244 men and 254 women; 209 in cohort I and 289 in cohort II) who completed 2 FFQs with a 1-year interval and dietary records for a total of 28 or 14 days. Details of the validation study have been described elsewhere.15,16 There was no significant difference in mean age between participants of the validation study and cohort members among men and women in cohort I or among men in cohort II; however, there was a 3-year age difference among cohort II women (mean age, 56 years for the validation subsample vs 59 years for the cohort participants). In the present study, we identified dietary patterns using FFQ data from the 5-year follow-up survey. 
Oral or written informed consent from the participants was received before the study. The study did not undergo ethical approval since it was conducted before the advent of ethical guidelines for epidemiology research in Japan, which mandate such approval.", "The participants completed the FFQ twice, with an approximately 1-year interval. The FFQ for the evaluation of validity (FFQ_V) was completed after completion of the last dietary record and was compared with the dietary records. The FFQ for the evaluation of reproducibility (FFQ_R) was administered 1 year before or after the FFQ_V and was compared with the FFQ_V. The FFQ included questions on 138 food items (with standard portions/units and eating frequency) and 14 supplementary questions during the previous year, from which a composition table was developed for 147 foods items.17 For most food items, 9 response options were available for eating frequency: rarely, 1 to 3 times/month, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, once a day, 2 to 3 times/day, 4 to 6 times/day, and 7 or more times/day. Slightly different options were used for beverage intake: rarely, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, 1 cup/day, 2 to 3 cups/day, 4 to 6 cups/day, 7 to 9 cups/day, and 10 or more cups/day. A standard portion size was specified for each food item, and the respondents were asked to choose their usual portion size from 3 options (less than half the standard portion size, standard portion size, or more than 1.5 times the standard portion size). We calculated daily intake of most foods by multiplying daily frequency of consumption by usual portion size. In the present analysis, we used 134 food and beverage items of the FFQ (excluding 11 items that correlated strongly with others and 2 items with no energy or nutrition).\nThe participants completed a 28- or 14-day (Okinawa) dietary record in 1 year, ie, 7-day dietary records were collected 4 (or 2) times at 3-month (or 6-month) intervals during the course of a year. The survey method using dietary records has been described elsewhere.15,16,18 We matched the 1101 unique food codes in the dietary records to food items on the FFQ to ensure that daily food intakes derived from the dietary records were comparable with those derived from the FFQ. A total of 558 food codes in the dietary records were matched to the 134 food items on the FFQ.", "Men and women were analyzed separately. Some foods or food groups that were similar in nutritional content or culinary use were combined, leaving 48 food groups for the present analysis (Table 1). When information from dietary records was used to combine 2 or more foods into 1 food group, the amount of food before cooking was used for all food items except noodles and rice. Consumption of noodles and rice was expressed in weight of boiled noodles and steamed rice, respectively. Consumption of miso soup and beverages such as coffee and green tea was expressed as the weight of the consumed infusion rather than the weight of miso, coffee beans, instant coffee, or tea leaves. We calculated the means and standard deviations of each food group intake estimated from dietary records and the 2 FFQs. 
In addition, the reproducibility and validity of 48 food group intakes were evaluated by using Spearman correlation coefficients between intakes from the 2 FFQs and between intakes derived from the dietary records and those derived from the FFQ_V.\nTo identify dietary patterns, we performed principal component analysis based on log-transformed intakes of 48 food groups for each of the 2 FFQs and the dietary records. The factors were rotated by orthogonal transformation (varimax rotation) to maintain uncorrelated factors and greater interpretability. We considered eigenvalues, the scree plot, and the interpretability of the factors to determine the number of factors to retain and identified 3 dietary patterns (factors) in both men and women. The dietary patterns were named according to the food items showing high loading (absolute value) in each dietary pattern. The factor scores for each dietary pattern were calculated by summing intakes of food items weighted by their factor loadings. The scores were energy-adjusted using the residual method. When we examined reproducibility and validity by using dietary pattern scores without energy adjustment, or based on energy-adjusted intake of food groups, the derived dietary patterns and their reproducibility and validity were similar to those based on energy-adjusted scores.\nWe examined the correspondence of each dietary pattern identified from the dietary records with dietary patterns extracted from the FFQ_V by comparing the pattern of food items showing high loading. We repeated this procedure to link dietary patterns identified from the FFQ_R and FFQ_V. For each dietary pattern, Spearman correlation coefficients between the factor score for the dietary patterns derived from the dietary records and those derived from the FFQ_V were calculated to evaluate the validity of the FFQ. In addition, to determine the reproducibility of the FFQ, Spearman correlation coefficients between the factor scores for the dietary patterns derived from the 2 FFQs administered 1 year apart were calculated. We estimated Spearman correlation coefficients between factor loadings from the FFQ in each population (subsample of the validity study, overall cohort, men, women, and men and women combined) to compare the dietary patterns derived from the FFQ between the subsample of the validity study and overall cohort and between men or women and men and women combined in the overall cohort. All analyses were performed using Statistical Analysis System (SAS) version 9.1 (SAS Institute, Cary, NC, USA)." ]
[ null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "JPHC Study", "Study population", "Dietary assessment", "Statistical analysis", "RESULTS", "DISCUSSION" ]
[ "In nutritional epidemiology, there is increased interest in the analysis of dietary pattern, a comprehensive variable that integrates consumption of several foods or food groups. The effect of a single nutrient, food, or food group on disease risk and morbid conditions is difficult to assess in observational studies because foods and nutrients are consumed in combination and their complex effects are likely to be interactive or synergistic.1 However, dietary pattern can overcome problems relating to the close intercorrelation among foods or nutrients and is expected to have a greater impact on disease risk than any single nutrient.1–3 Therefore, analysis using dietary pattern could be used as a complementary approach in nutritional epidemiology. Indeed, many previous studies have reported associations of dietary patterns with mortality, several diseases (such as cancer, cardiovascular disease, and type 2 diabetes), and biomarkers.4,5\nExtraction of dietary pattern by principal component analysis—which is often used to identify dietary patterns—requires some arbitrary decisions to group food items, determine the number of factors to retain, select the method of rotation of the initial factors, and label the dietary patterns.6 Moreover, dietary patterns may differ with regard not only to age and sex but also resident area, ethnic group, and culture. Therefore, it might be necessary to examine the validity and reproducibility of the identified dietary patterns among the study population of each study. To date, several studies have examined the reproducibility and validity of dietary patterns.7–13 However, among studies of the association between dietary patterns and disease risk in Japan, only 1 examined the validity of dietary patterns in Japan12 and none examined reproducibility. Therefore, we assessed the reproducibility and validity of dietary patterns identified by principal component analysis of a food frequency questionnaire (FFQ) used in the 5-year follow-up survey of the Japan Public Health Center-Based Prospective Study (JPHC Study).", " JPHC Study The JPHC Study is a population-based prospective study of cancer and cardiovascular disease and was launched in 1990 for cohort I and in 1993 for cohort II.14 The participants were 140 420 residents of 11 public health center areas and were aged 40 to 59 years for cohort I and aged 40 to 69 years for cohort II at each baseline survey. At baseline and at the 5- and 10-year follow-ups, a questionnaire survey was conducted to obtain information on medical histories and health-related lifestyle, including diet.\nThe JPHC Study is a population-based prospective study of cancer and cardiovascular disease and was launched in 1990 for cohort I and in 1993 for cohort II.14 The participants were 140 420 residents of 11 public health center areas and were aged 40 to 59 years for cohort I and aged 40 to 69 years for cohort II at each baseline survey. At baseline and at the 5- and 10-year follow-ups, a questionnaire survey was conducted to obtain information on medical histories and health-related lifestyle, including diet.\n Study population The subjects of the reproducibility and validity study of the FFQ used in the 5-year follow-up survey were a subsample of the participants in JPHC Study cohorts I and II. Married couples were recruited. The present study used a data set for 498 subjects (244 men and 254 women; 209 in cohort I and 289 in cohort II) who completed 2 FFQs with a 1-year interval and dietary records for a total of 28 or 14 days. 
Details of the validation study have been described elsewhere.15,16 There was no significant difference in mean age between participants of the validation study and cohort members among men and women in cohort I or among men in cohort II; however, there was a 3-year age difference among cohort II women (mean age, 56 years for the validation subsample vs 59 years for the cohort participants). In the present study, we identified dietary patterns using FFQ data from the 5-year follow-up survey. Oral or written informed consent from the participants was received before the study. The study did not undergo ethical approval since it was conducted before the advent of ethical guidelines for epidemiology research in Japan, which mandate such approval.\nThe subjects of the reproducibility and validity study of the FFQ used in the 5-year follow-up survey were a subsample of the participants in JPHC Study cohorts I and II. Married couples were recruited. The present study used a data set for 498 subjects (244 men and 254 women; 209 in cohort I and 289 in cohort II) who completed 2 FFQs with a 1-year interval and dietary records for a total of 28 or 14 days. Details of the validation study have been described elsewhere.15,16 There was no significant difference in mean age between participants of the validation study and cohort members among men and women in cohort I or among men in cohort II; however, there was a 3-year age difference among cohort II women (mean age, 56 years for the validation subsample vs 59 years for the cohort participants). In the present study, we identified dietary patterns using FFQ data from the 5-year follow-up survey. Oral or written informed consent from the participants was received before the study. The study did not undergo ethical approval since it was conducted before the advent of ethical guidelines for epidemiology research in Japan, which mandate such approval.\n Dietary assessment The participants completed the FFQ twice, with an approximately 1-year interval. The FFQ for the evaluation of validity (FFQ_V) was completed after completion of the last dietary record and was compared with the dietary records. The FFQ for the evaluation of reproducibility (FFQ_R) was administered 1 year before or after the FFQ_V and was compared with the FFQ_V. The FFQ included questions on 138 food items (with standard portions/units and eating frequency) and 14 supplementary questions during the previous year, from which a composition table was developed for 147 foods items.17 For most food items, 9 response options were available for eating frequency: rarely, 1 to 3 times/month, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, once a day, 2 to 3 times/day, 4 to 6 times/day, and 7 or more times/day. Slightly different options were used for beverage intake: rarely, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, 1 cup/day, 2 to 3 cups/day, 4 to 6 cups/day, 7 to 9 cups/day, and 10 or more cups/day. A standard portion size was specified for each food item, and the respondents were asked to choose their usual portion size from 3 options (less than half the standard portion size, standard portion size, or more than 1.5 times the standard portion size). We calculated daily intake of most foods by multiplying daily frequency of consumption by usual portion size. 
In the present analysis, we used 134 food and beverage items of the FFQ (excluding 11 items that correlated strongly with others and 2 items with no energy or nutrition).\nThe participants completed a 28- or 14-day (Okinawa) dietary record in 1 year, ie, 7-day dietary records were collected 4 (or 2) times at 3-month (or 6-month) intervals during the course of a year. The survey method using dietary records has been described elsewhere.15,16,18 We matched the 1101 unique food codes in the dietary records to food items on the FFQ to ensure that daily food intakes derived from the dietary records were comparable with those derived from the FFQ. A total of 558 food codes in the dietary records were matched to the 134 food items on the FFQ.\nThe participants completed the FFQ twice, with an approximately 1-year interval. The FFQ for the evaluation of validity (FFQ_V) was completed after completion of the last dietary record and was compared with the dietary records. The FFQ for the evaluation of reproducibility (FFQ_R) was administered 1 year before or after the FFQ_V and was compared with the FFQ_V. The FFQ included questions on 138 food items (with standard portions/units and eating frequency) and 14 supplementary questions during the previous year, from which a composition table was developed for 147 foods items.17 For most food items, 9 response options were available for eating frequency: rarely, 1 to 3 times/month, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, once a day, 2 to 3 times/day, 4 to 6 times/day, and 7 or more times/day. Slightly different options were used for beverage intake: rarely, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, 1 cup/day, 2 to 3 cups/day, 4 to 6 cups/day, 7 to 9 cups/day, and 10 or more cups/day. A standard portion size was specified for each food item, and the respondents were asked to choose their usual portion size from 3 options (less than half the standard portion size, standard portion size, or more than 1.5 times the standard portion size). We calculated daily intake of most foods by multiplying daily frequency of consumption by usual portion size. In the present analysis, we used 134 food and beverage items of the FFQ (excluding 11 items that correlated strongly with others and 2 items with no energy or nutrition).\nThe participants completed a 28- or 14-day (Okinawa) dietary record in 1 year, ie, 7-day dietary records were collected 4 (or 2) times at 3-month (or 6-month) intervals during the course of a year. The survey method using dietary records has been described elsewhere.15,16,18 We matched the 1101 unique food codes in the dietary records to food items on the FFQ to ensure that daily food intakes derived from the dietary records were comparable with those derived from the FFQ. A total of 558 food codes in the dietary records were matched to the 134 food items on the FFQ.\n Statistical analysis Men and women were analyzed separately. Some foods or food groups that were similar in nutritional content or culinary use were combined, leaving 48 food groups for the present analysis (Table 1). When information from dietary records was used to combine 2 or more foods into 1 food group, the amount of food before cooking was used for all food items except noodles and rice. Consumption of noodles and rice was expressed in weight of boiled noodles and steamed rice, respectively. 
Consumption of miso soup and beverages such as coffee and green tea was expressed as the weight of the consumed infusion rather than the weight of miso, coffee beans, instant coffee, or tea leaves. We calculated the means and standard deviations of each food group intake estimated from dietary records and the 2 FFQs. In addition, the reproducibility and validity of 48 food group intakes were evaluated by using Spearman correlation coefficients between intakes from the 2 FFQs and between intakes derived from the dietary records and those derived from the FFQ_V.\nTo identify dietary patterns, we performed principal component analysis based on log-transformed intakes of 48 food groups for each of the 2 FFQs and the dietary records. The factors were rotated by orthogonal transformation (varimax rotation) to maintain uncorrelated factors and greater interpretability. We considered eigenvalues, the scree plot, and the interpretability of the factors to determine the number of factors to retain and identified 3 dietary patterns (factors) in both men and women. The dietary patterns were named according to the food items showing high loading (absolute value) in each dietary pattern. The factor scores for each dietary pattern were calculated by summing intakes of food items weighted by their factor loadings. The scores were energy-adjusted using the residual method. When we examined reproducibility and validity by using dietary pattern scores without energy adjustment, or based on energy-adjusted intake of food groups, the derived dietary patterns and their reproducibility and validity were similar to those based on energy-adjusted scores.\nWe examined the correspondence of each dietary pattern identified from the dietary records with dietary patterns extracted from the FFQ_V by comparing the pattern of food items showing high loading. We repeated this procedure to link dietary patterns identified from the FFQ_R and FFQ_V. For each dietary pattern, Spearman correlation coefficients between the factor score for the dietary patterns derived from the dietary records and those derived from the FFQ_V were calculated to evaluate the validity of the FFQ. In addition, to determine the reproducibility of the FFQ, Spearman correlation coefficients between the factor scores for the dietary patterns derived from the 2 FFQs administered 1 year apart were calculated. We estimated Spearman correlation coefficients between factor loadings from the FFQ in each population (subsample of the validity study, overall cohort, men, women, and men and women combined) to compare the dietary patterns derived from the FFQ between the subsample of the validity study and overall cohort and between men or women and men and women combined in the overall cohort. All analyses were performed using Statistical Analysis System (SAS) version 9.1 (SAS Institute, Cary, NC, USA).\nMen and women were analyzed separately. Some foods or food groups that were similar in nutritional content or culinary use were combined, leaving 48 food groups for the present analysis (Table 1). When information from dietary records was used to combine 2 or more foods into 1 food group, the amount of food before cooking was used for all food items except noodles and rice. Consumption of noodles and rice was expressed in weight of boiled noodles and steamed rice, respectively. 
Consumption of miso soup and beverages such as coffee and green tea was expressed as the weight of the consumed infusion rather than the weight of miso, coffee beans, instant coffee, or tea leaves. We calculated the means and standard deviations of each food group intake estimated from dietary records and the 2 FFQs. In addition, the reproducibility and validity of 48 food group intakes were evaluated by using Spearman correlation coefficients between intakes from the 2 FFQs and between intakes derived from the dietary records and those derived from the FFQ_V.\nTo identify dietary patterns, we performed principal component analysis based on log-transformed intakes of 48 food groups for each of the 2 FFQs and the dietary records. The factors were rotated by orthogonal transformation (varimax rotation) to maintain uncorrelated factors and greater interpretability. We considered eigenvalues, the scree plot, and the interpretability of the factors to determine the number of factors to retain and identified 3 dietary patterns (factors) in both men and women. The dietary patterns were named according to the food items showing high loading (absolute value) in each dietary pattern. The factor scores for each dietary pattern were calculated by summing intakes of food items weighted by their factor loadings. The scores were energy-adjusted using the residual method. When we examined reproducibility and validity by using dietary pattern scores without energy adjustment, or based on energy-adjusted intake of food groups, the derived dietary patterns and their reproducibility and validity were similar to those based on energy-adjusted scores.\nWe examined the correspondence of each dietary pattern identified from the dietary records with dietary patterns extracted from the FFQ_V by comparing the pattern of food items showing high loading. We repeated this procedure to link dietary patterns identified from the FFQ_R and FFQ_V. For each dietary pattern, Spearman correlation coefficients between the factor score for the dietary patterns derived from the dietary records and those derived from the FFQ_V were calculated to evaluate the validity of the FFQ. In addition, to determine the reproducibility of the FFQ, Spearman correlation coefficients between the factor scores for the dietary patterns derived from the 2 FFQs administered 1 year apart were calculated. We estimated Spearman correlation coefficients between factor loadings from the FFQ in each population (subsample of the validity study, overall cohort, men, women, and men and women combined) to compare the dietary patterns derived from the FFQ between the subsample of the validity study and overall cohort and between men or women and men and women combined in the overall cohort. All analyses were performed using Statistical Analysis System (SAS) version 9.1 (SAS Institute, Cary, NC, USA).", "The JPHC Study is a population-based prospective study of cancer and cardiovascular disease and was launched in 1990 for cohort I and in 1993 for cohort II.14 The participants were 140 420 residents of 11 public health center areas and were aged 40 to 59 years for cohort I and aged 40 to 69 years for cohort II at each baseline survey. 
At baseline and at the 5- and 10-year follow-ups, a questionnaire survey was conducted to obtain information on medical histories and health-related lifestyle, including diet.", "The subjects of the reproducibility and validity study of the FFQ used in the 5-year follow-up survey were a subsample of the participants in JPHC Study cohorts I and II. Married couples were recruited. The present study used a data set for 498 subjects (244 men and 254 women; 209 in cohort I and 289 in cohort II) who completed 2 FFQs with a 1-year interval and dietary records for a total of 28 or 14 days. Details of the validation study have been described elsewhere.15,16 There was no significant difference in mean age between participants of the validation study and cohort members among men and women in cohort I or among men in cohort II; however, there was a 3-year age difference among cohort II women (mean age, 56 years for the validation subsample vs 59 years for the cohort participants). In the present study, we identified dietary patterns using FFQ data from the 5-year follow-up survey. Oral or written informed consent from the participants was received before the study. The study did not undergo ethical approval since it was conducted before the advent of ethical guidelines for epidemiology research in Japan, which mandate such approval.", "The participants completed the FFQ twice, with an approximately 1-year interval. The FFQ for the evaluation of validity (FFQ_V) was completed after completion of the last dietary record and was compared with the dietary records. The FFQ for the evaluation of reproducibility (FFQ_R) was administered 1 year before or after the FFQ_V and was compared with the FFQ_V. The FFQ included questions on 138 food items (with standard portions/units and eating frequency) and 14 supplementary questions during the previous year, from which a composition table was developed for 147 foods items.17 For most food items, 9 response options were available for eating frequency: rarely, 1 to 3 times/month, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, once a day, 2 to 3 times/day, 4 to 6 times/day, and 7 or more times/day. Slightly different options were used for beverage intake: rarely, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, 1 cup/day, 2 to 3 cups/day, 4 to 6 cups/day, 7 to 9 cups/day, and 10 or more cups/day. A standard portion size was specified for each food item, and the respondents were asked to choose their usual portion size from 3 options (less than half the standard portion size, standard portion size, or more than 1.5 times the standard portion size). We calculated daily intake of most foods by multiplying daily frequency of consumption by usual portion size. In the present analysis, we used 134 food and beverage items of the FFQ (excluding 11 items that correlated strongly with others and 2 items with no energy or nutrition).\nThe participants completed a 28- or 14-day (Okinawa) dietary record in 1 year, ie, 7-day dietary records were collected 4 (or 2) times at 3-month (or 6-month) intervals during the course of a year. The survey method using dietary records has been described elsewhere.15,16,18 We matched the 1101 unique food codes in the dietary records to food items on the FFQ to ensure that daily food intakes derived from the dietary records were comparable with those derived from the FFQ. A total of 558 food codes in the dietary records were matched to the 134 food items on the FFQ.", "Men and women were analyzed separately. 
Some foods or food groups that were similar in nutritional content or culinary use were combined, leaving 48 food groups for the present analysis (Table 1). When information from dietary records was used to combine 2 or more foods into 1 food group, the amount of food before cooking was used for all food items except noodles and rice. Consumption of noodles and rice was expressed in weight of boiled noodles and steamed rice, respectively. Consumption of miso soup and beverages such as coffee and green tea was expressed as the weight of the consumed infusion rather than the weight of miso, coffee beans, instant coffee, or tea leaves. We calculated the means and standard deviations of each food group intake estimated from dietary records and the 2 FFQs. In addition, the reproducibility and validity of 48 food group intakes were evaluated by using Spearman correlation coefficients between intakes from the 2 FFQs and between intakes derived from the dietary records and those derived from the FFQ_V.\nTo identify dietary patterns, we performed principal component analysis based on log-transformed intakes of 48 food groups for each of the 2 FFQs and the dietary records. The factors were rotated by orthogonal transformation (varimax rotation) to maintain uncorrelated factors and greater interpretability. We considered eigenvalues, the scree plot, and the interpretability of the factors to determine the number of factors to retain and identified 3 dietary patterns (factors) in both men and women. The dietary patterns were named according to the food items showing high loading (absolute value) in each dietary pattern. The factor scores for each dietary pattern were calculated by summing intakes of food items weighted by their factor loadings. The scores were energy-adjusted using the residual method. When we examined reproducibility and validity by using dietary pattern scores without energy adjustment, or based on energy-adjusted intake of food groups, the derived dietary patterns and their reproducibility and validity were similar to those based on energy-adjusted scores.\nWe examined the correspondence of each dietary pattern identified from the dietary records with dietary patterns extracted from the FFQ_V by comparing the pattern of food items showing high loading. We repeated this procedure to link dietary patterns identified from the FFQ_R and FFQ_V. For each dietary pattern, Spearman correlation coefficients between the factor score for the dietary patterns derived from the dietary records and those derived from the FFQ_V were calculated to evaluate the validity of the FFQ. In addition, to determine the reproducibility of the FFQ, Spearman correlation coefficients between the factor scores for the dietary patterns derived from the 2 FFQs administered 1 year apart were calculated. We estimated Spearman correlation coefficients between factor loadings from the FFQ in each population (subsample of the validity study, overall cohort, men, women, and men and women combined) to compare the dietary patterns derived from the FFQ between the subsample of the validity study and overall cohort and between men or women and men and women combined in the overall cohort. All analyses were performed using Statistical Analysis System (SAS) version 9.1 (SAS Institute, Cary, NC, USA).", "The 48 food group intakes calculated by 28- or 14-day dietary records and the 2 FFQs, and their correlations, are shown in Table 2 for men and Table 3 for women. 
Regarding reproducibility, Spearman correlation coefficients between the 2 FFQs ranged from 0.21 for wine to 0.80 for coffee in men (Table 2) and from 0.37 for sake to 0.81 for miso soup in women (Table 3). In particular, intakes of rice, bread, miso soup, pickles, other fruit, salt fish, processed meat, milk, dairy products, green tea, and coffee showed high correlation coefficients (r >0.60) between the 2 FFQs in both men and women. Regarding validity, Spearman correlation coefficients between the dietary records and the FFQ_V ranged from 0.06 for oily fish to 0.75 for coffee in men (Table 2) and from 0.12 for shochu to 0.80 for coffee in women (Table 3). In both men and women, intakes of coffee, bread, pickles, and milk estimated by the FFQ_V showed a high correlation (r >0.60) with those estimated by the dietary records, whereas the validity of oily fish between the dietary records and the FFQ_V was very low.\nAbbreviations: DR, dietary record; FFQ, food frequency questionnaire; SD, standard deviation.\naFFQ_R was administered 1 year after or before FFQ_V.\nbFFQ_V was administered after completion of the DRs.\nc28- and 14-day DRs were collected over a 1-year period.\nAbbreviations: DR, dietary record; FFQ, food frequency questionnaire; SD, standard deviation.\naFFQ_R was administered 1 year after or before FFQ_V.\nbFFQ_V was administered after completion of the DRs.\nc28- and 14-day DRs were collected over a 1-year period.\nPrincipal component analysis identified 3 dietary patterns from the dietary records and 2 FFQs (Table 4 for men and Table 5 for women), and they were similar in terms of factor loading pattern across the data sources and between sexes. Of the 3 dietary patterns, a pattern highly loaded by intakes of vegetables, fruit, potatoes, soy products, mushrooms, seaweed, oily fish, and green tea was named the prudent Japanese pattern. A dietary pattern associated with high intakes of bread, meat, processed meat, fruit juice, coffee, black tea, soft drinks, sauces, mayonnaise, and dressing was named the westernized Japanese pattern. Another dietary pattern was characterized by high intakes of rice, miso soup, pickles, salmon, salty fish, seafood other than fish, fruit, and sake (men only), and was named the traditional Japanese pattern. The 3 dietary patterns from the dietary records, the FFQ_R, and the FFQ_V explained 23.9%, 29.4%, and 26.5%, respectively, of variance in men and 23.0%, 24.9%, and 32.9%, respectively, of variance in women.\nAbbreviations: FFQ, food frequency questionnaire; DR, dietary record.\naFor simplicity, factor loadings less than ±0.10 are not listed.\nbFFQ_R was administered 1 year after or before FFQ_V.\ncFFQ_V was administered after completion of the DRs.\nd28- and 14-day DRs were collected over a 1-year period.\nAbbreviations: FFQ, food frequency questionnaire; DR, dietary record.\naFor simplicity, factor loadings less than ±0.10 are not listed.\nbFFQ_R was administered 1 year after or before FFQ_V.\ncFFQ_V was administered after completion of the DRs.\nd28- and 14-day DRs were collected over a 1-year period.\nThree similar dietary patterns were identified in the overall cohort for both men and women (Appendix). 
The Spearman correlation coefficients between factor loadings from FFQ_R in the subsample of the validity study and those from the FFQ in the overall cohort for 48 food groups were 0.89 in men and 0.82 in women for the prudent Japanese pattern, 0.82 in men and 0.75 in women for the westernized Japanese pattern, and 0.75 in men and 0.47 in women for the traditional Japanese pattern. In the overall cohort, the corresponding values between factor loadings for men or women and those for men and women combined were 0.98 in both men and women for the prudent Japanese pattern, 0.96 in men and 0.95 in women for the westernized Japanese pattern, and 0.76 in men and 0.74 in women for the traditional Japanese pattern.\nThe Spearman correlation coefficients between the dietary records and the 2 FFQs are shown in Table 6. Regarding reproducibility between the 2 FFQs, all 3 dietary patterns were acceptable in both men and women. In particular, the traditional Japanese pattern in men and the westernized Japanese pattern in women were highly reproduced (correlation coefficients: 0.77 in men and 0.71 in women). Regarding validity, the traditional Japanese pattern had higher correlation coefficients between the dietary records and the FFQ_V than did other patterns among both men and women (correlation coefficient: 0.49 in men and 0.63 in women), whereas the westernized Japanese pattern in men (0.32) had the lowest correlation coefficient among the 3 dietary patterns.\nAbbreviations: DR, dietary record; FFQ, food frequency questionnaire.\naDietary pattern scores were adjusted for energy intake by using the residual method.\nbFFQ_R was administered 1 year after or before FFQ_V.\ncFFQ_V was administered after completion of the DRs.\nd28- or 14-day DRs were collected over a 1-year period.\nAll correlations: P < 0.001.", "In a subsample of the validation study of diet in the JPHC Study, we identified 3 major Japanese dietary patterns (prudent, westernized, and traditional) that were similar across data sources and between sexes. The reproducibility and validity of dietary patterns derived from the FFQ were acceptable. Spearman correlation coefficients between scores of the dietary patterns derived from the 2 FFQs ranged from 0.55 to 0.77. The corresponding values between scores of the dietary patterns derived from the dietary records and those derived from the FFQ ranged from 0.32 to 0.63. These dietary patterns were also identified in the entire population of the JPHC Study.\nThe 3 major dietary patterns identified in the present study have also been reported in previous Japanese studies. Most studies in Japan have identified a dietary pattern similar to the prudent Japanese pattern in the present study.19–35 This dietary pattern is characterized by high intakes of not only vegetables and fruit but also typical Japanese foods, including soy products, seaweed, mushrooms, fish, and green tea. 
Dietary patterns similar to the westernized Japanese pattern in the present study, ie, high intakes of meat, processed meat, bread, coffee, black tea, mayonnaise, and dressing, have also been observed in many Japanese studies.19–35 Moreover, the traditional Japanese pattern, which is characterized by high intakes of rice, miso soup, pickles, salmon, salty fish, and fruit, has been observed in several studies.19–21,26,30,31 Because the westernized breakfast pattern22,24,28,29,33,34 and similar patterns, which were identified in some other studies, are associated with low intakes of rice and miso soup, the westernized breakfast pattern could be viewed as the mirror image of the traditional Japanese pattern.\nWe observed acceptable validity and reproducibility of the 3 dietary patterns; the Spearman correlation coefficients between the dietary records and 2 FFQs ranged from 0.55 to 0.77 for reproducibility and from 0.32 to 0.63 for validity. These values were comparable to those reported in previous studies. Regarding reproducibility, the correlation coefficients ranged from 0.67 to 0.70 (Pearson) in the Health Professionals Follow-up Study,9 from 0.63 to 0.73 (over a 1-year period; Spearman),10 and from 0.30 to 0.52 (over a 10-year period; Pearson)11 in the Swedish Mammography Cohort, and from 0.64 to 0.81 (Spearman) in the Southampton Women’s Survey.7 Regarding validity, the correlation coefficients ranged from 0.34 to 0.64 (Pearson) in the Health Professionals Follow-up Study,9 from 0.34 to 0.61 (Pearson) in the Monitoring of Trends and Determinants in Cardiovascular Diseases,13 from 0.41 to 0.73 (Spearman) in the Swedish Mammography Cohort,10 from 0.35 to 0.67 (Pearson) in a UK study,8 and from 0.36 to 0.62 (Pearson) in a Japanese study.12\nIn the present study, the traditional Japanese pattern in men and the westernized Japanese pattern in women were highly reproducible. In the 2 FFQs, salty fish, salmon, pickles, sake, and rice, which are characteristic of the traditional Japanese pattern in men, showed high loadings to the pattern. In addition, these food intakes were highly correlated between the 2 FFQs. Similarly, intakes of beef, processed meat, coffee, bread, pork, chicken, and dressing showed high loadings to the westernized Japanese pattern in women in the 2 FFQs and were also highly correlated between the 2 FFQs. Of the 3 dietary patterns identified, the traditional Japanese pattern showed relatively good validity in both men and women. This might be because pickles, rice, salty fish, salmon, and sake (men only) showed high factor loadings to both dietary record and FFQ, and these intakes were moderately to highly correlated between the dietary record and FFQ. In contrast, the westernized Japanese pattern in men was less valid, perhaps because among foods that were highly and positively loaded to the westernized Japanese pattern derived using dietary record data, some fish items were inversely (not positively) associated with the dietary pattern based on FFQ data. In addition, intakes of meat and processed meat were only moderately correlated between dietary record and FFQ.\nRegarding the traditional Japanese pattern in women, the factor loadings from the FFQ in the subsample and those from the FFQ in the overall cohort for food groups were not highly correlated (correlation coefficient = 0.47).
This might be because factor loadings of pickles, citrus fruit, and noodles in the traditional Japanese pattern were lower in the overall cohort than in the subsample, whereas those for pork, beef, processed meat, egg, oily fish, and fish products were higher in the overall cohort than in the subsample.\nOur study had some limitations. First, although we assessed the validity and reproducibility of dietary patterns in a subsample of the JPHC Study population, participants in the validation study might be more health conscious than nonparticipants. Therefore, the present findings might not be applicable to the entire population of the JPHC Study. However, we also identified 3 similar dietary patterns in the entire population. Second, we observed similar dietary patterns in men and women. This might be partly because married couples were recruited for the validity study. Third, we assessed validity based on a single measurement of dietary intake, which might not reflect past dietary patterns. In addition, dietary pattern alone could change during the follow-up period, due to changes in food preference and food availability. Finally, we used the FFQ completed at the end of the dietary records to evaluate validity. Participant recall of dietary intake could be influenced by the administration of dietary records.\nIn conclusion, we identified 3 Japanese dietary patterns (prudent, westernized, and traditional) using FFQ data among participants of the validation study and confirmed that validity and reproducibility of the FFQ are acceptable. These validation data provide a basis for future analysis of the association between dietary patterns and disease risk in the JPHC Study population.\n\nAbbreviation: JPHC Study, Japan Public Health Center-Based Prospective Study.\naFor simplicity, factor loadings less than ±0.10 are not listed." ]
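The pattern-extraction procedure described in the Statistical analysis above (log-transformation of the 48 food-group intakes, principal component analysis with varimax rotation, factor scoring by loading-weighted sums, and energy adjustment by the residual method) can be sketched in a few lines of code. The sketch below is illustrative only, not the authors' SAS program: the variables `intake` (a subjects-by-48 table of food-group intakes) and `energy` (total energy intake), the +1 offset before log-transformation, and the pattern names assigned to the three factors are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' SAS program) of the pattern extraction
# described above. Assumed inputs: `intake`, a pandas DataFrame of the 48
# food-group intakes (subjects x food groups, g/day), and `energy`, a pandas
# Series of total energy intake (kcal/day) for the same subjects.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal (varimax) rotation of a loading matrix (variables x factors)."""
    n_vars, n_factors = loadings.shape
    rotation = np.eye(n_factors)
    last_criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / n_vars))
        rotation = u @ vt
        if s.sum() < last_criterion * (1 + tol):
            break
        last_criterion = s.sum()
    return loadings @ rotation

# 1. Log-transform (the +1 offset for zero intakes is an assumption) and standardize.
logged = np.log(intake + 1.0)
z = (logged - logged.mean()) / logged.std(ddof=0)

# 2. Principal component analysis retaining 3 factors, then varimax rotation.
pca = PCA(n_components=3).fit(z)
loadings = varimax(pca.components_.T * np.sqrt(pca.explained_variance_))

# 3. Factor scores: food-group intakes weighted by their rotated loadings.
#    Pattern names would be assigned only after inspecting which foods load highly.
scores = pd.DataFrame(z.values @ loadings, index=intake.index,
                      columns=["prudent", "westernized", "traditional"])

# 4. Energy adjustment by the residual method: regress each score on total energy
#    intake and keep the residual (plus the mean, to preserve the scale).
energy_col = energy.values.reshape(-1, 1)
for pattern in scores.columns:
    fit = LinearRegression().fit(energy_col, scores[pattern].values)
    scores[pattern] = scores[pattern] - fit.predict(energy_col) + scores[pattern].mean()
```

In practice the number of retained factors would be chosen from the eigenvalues, the scree plot, and interpretability, exactly as described in the Methods.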
[ null, "methods", null, null, null, null, "results", "discussion" ]
[ "dietary patterns", "Japanese", "reproducibility", "validity" ]
INTRODUCTION: In nutritional epidemiology, there is increased interest in the analysis of dietary pattern, a comprehensive variable that integrates consumption of several foods or food groups. The effect of a single nutrient, food, or food group on disease risk and morbid conditions is difficult to assess in observational studies because foods and nutrients are consumed in combination and their complex effects are likely to be interactive or synergistic.1 However, dietary pattern can overcome problems relating to the close intercorrelation among foods or nutrients and is expected to have a greater impact on disease risk than any single nutrient.1–3 Therefore, analysis using dietary pattern could be used as a complementary approach in nutritional epidemiology. Indeed, many previous studies have reported associations of dietary patterns with mortality, several diseases (such as cancer, cardiovascular disease, and type 2 diabetes), and biomarkers.4,5 Extraction of dietary pattern by principal component analysis—which is often used to identify dietary patterns—requires some arbitrary decisions to group food items, determine the number of factors to retain, select the method of rotation of the initial factors, and label the dietary patterns.6 Moreover, dietary patterns may differ with regard not only to age and sex but also resident area, ethnic group, and culture. Therefore, it might be necessary to examine the validity and reproducibility of the identified dietary patterns among the study population of each study. To date, several studies have examined the reproducibility and validity of dietary patterns.7–13 However, among studies of the association between dietary patterns and disease risk in Japan, only 1 examined the validity of dietary patterns in Japan12 and none examined reproducibility. Therefore, we assessed the reproducibility and validity of dietary patterns identified by principal component analysis of a food frequency questionnaire (FFQ) used in the 5-year follow-up survey of the Japan Public Health Center-Based Prospective Study (JPHC Study). METHODS: JPHC Study The JPHC Study is a population-based prospective study of cancer and cardiovascular disease and was launched in 1990 for cohort I and in 1993 for cohort II.14 The participants were 140 420 residents of 11 public health center areas and were aged 40 to 59 years for cohort I and aged 40 to 69 years for cohort II at each baseline survey. At baseline and at the 5- and 10-year follow-ups, a questionnaire survey was conducted to obtain information on medical histories and health-related lifestyle, including diet. The JPHC Study is a population-based prospective study of cancer and cardiovascular disease and was launched in 1990 for cohort I and in 1993 for cohort II.14 The participants were 140 420 residents of 11 public health center areas and were aged 40 to 59 years for cohort I and aged 40 to 69 years for cohort II at each baseline survey. At baseline and at the 5- and 10-year follow-ups, a questionnaire survey was conducted to obtain information on medical histories and health-related lifestyle, including diet. Study population The subjects of the reproducibility and validity study of the FFQ used in the 5-year follow-up survey were a subsample of the participants in JPHC Study cohorts I and II. Married couples were recruited. 
The present study used a data set for 498 subjects (244 men and 254 women; 209 in cohort I and 289 in cohort II) who completed 2 FFQs with a 1-year interval and dietary records for a total of 28 or 14 days. Details of the validation study have been described elsewhere.15,16 There was no significant difference in mean age between participants of the validation study and cohort members among men and women in cohort I or among men in cohort II; however, there was a 3-year age difference among cohort II women (mean age, 56 years for the validation subsample vs 59 years for the cohort participants). In the present study, we identified dietary patterns using FFQ data from the 5-year follow-up survey. Oral or written informed consent from the participants was received before the study. The study did not undergo ethical approval since it was conducted before the advent of ethical guidelines for epidemiology research in Japan, which mandate such approval. The subjects of the reproducibility and validity study of the FFQ used in the 5-year follow-up survey were a subsample of the participants in JPHC Study cohorts I and II. Married couples were recruited. The present study used a data set for 498 subjects (244 men and 254 women; 209 in cohort I and 289 in cohort II) who completed 2 FFQs with a 1-year interval and dietary records for a total of 28 or 14 days. Details of the validation study have been described elsewhere.15,16 There was no significant difference in mean age between participants of the validation study and cohort members among men and women in cohort I or among men in cohort II; however, there was a 3-year age difference among cohort II women (mean age, 56 years for the validation subsample vs 59 years for the cohort participants). In the present study, we identified dietary patterns using FFQ data from the 5-year follow-up survey. Oral or written informed consent from the participants was received before the study. The study did not undergo ethical approval since it was conducted before the advent of ethical guidelines for epidemiology research in Japan, which mandate such approval. Dietary assessment The participants completed the FFQ twice, with an approximately 1-year interval. The FFQ for the evaluation of validity (FFQ_V) was completed after completion of the last dietary record and was compared with the dietary records. The FFQ for the evaluation of reproducibility (FFQ_R) was administered 1 year before or after the FFQ_V and was compared with the FFQ_V. The FFQ included questions on 138 food items (with standard portions/units and eating frequency) and 14 supplementary questions during the previous year, from which a composition table was developed for 147 foods items.17 For most food items, 9 response options were available for eating frequency: rarely, 1 to 3 times/month, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, once a day, 2 to 3 times/day, 4 to 6 times/day, and 7 or more times/day. Slightly different options were used for beverage intake: rarely, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, 1 cup/day, 2 to 3 cups/day, 4 to 6 cups/day, 7 to 9 cups/day, and 10 or more cups/day. A standard portion size was specified for each food item, and the respondents were asked to choose their usual portion size from 3 options (less than half the standard portion size, standard portion size, or more than 1.5 times the standard portion size). We calculated daily intake of most foods by multiplying daily frequency of consumption by usual portion size. 
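As described above, a daily amount for each food item is obtained by multiplying an assumed daily frequency by the reported portion size. A minimal sketch of that conversion follows; the numeric mapping of frequency categories to times per day and the 0.5/1.0/1.5 portion multipliers are illustrative assumptions, not values taken from the questionnaire's scoring rules.

```python
# Illustrative sketch: convert an FFQ response (frequency category + portion choice)
# into an estimated daily intake in grams. All numeric mappings are assumptions.
FREQ_PER_DAY = {
    "rarely": 0.0,
    "1-3 times/month": 2.0 / 30,
    "1-2 times/week": 1.5 / 7,
    "3-4 times/week": 3.5 / 7,
    "5-6 times/week": 5.5 / 7,
    "once a day": 1.0,
    "2-3 times/day": 2.5,
    "4-6 times/day": 5.0,
    "7+ times/day": 7.0,
}
PORTION_FACTOR = {"small": 0.5, "standard": 1.0, "large": 1.5}  # assumed multipliers

def daily_intake_g(freq_category: str, portion_choice: str, standard_portion_g: float) -> float:
    """Estimated grams/day = assumed daily frequency x chosen portion size."""
    return FREQ_PER_DAY[freq_category] * PORTION_FACTOR[portion_choice] * standard_portion_g

# Example: an item eaten 2-3 times/day at an assumed standard portion of 140 g
print(daily_intake_g("2-3 times/day", "standard", 140.0))  # -> 350.0 g/day
```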
In the present analysis, we used 134 food and beverage items of the FFQ (excluding 11 items that correlated strongly with others and 2 items with no energy or nutrition). The participants completed a 28- or 14-day (Okinawa) dietary record in 1 year, ie, 7-day dietary records were collected 4 (or 2) times at 3-month (or 6-month) intervals during the course of a year. The survey method using dietary records has been described elsewhere.15,16,18 We matched the 1101 unique food codes in the dietary records to food items on the FFQ to ensure that daily food intakes derived from the dietary records were comparable with those derived from the FFQ. A total of 558 food codes in the dietary records were matched to the 134 food items on the FFQ. The participants completed the FFQ twice, with an approximately 1-year interval. The FFQ for the evaluation of validity (FFQ_V) was completed after completion of the last dietary record and was compared with the dietary records. The FFQ for the evaluation of reproducibility (FFQ_R) was administered 1 year before or after the FFQ_V and was compared with the FFQ_V. The FFQ included questions on 138 food items (with standard portions/units and eating frequency) and 14 supplementary questions during the previous year, from which a composition table was developed for 147 foods items.17 For most food items, 9 response options were available for eating frequency: rarely, 1 to 3 times/month, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, once a day, 2 to 3 times/day, 4 to 6 times/day, and 7 or more times/day. Slightly different options were used for beverage intake: rarely, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, 1 cup/day, 2 to 3 cups/day, 4 to 6 cups/day, 7 to 9 cups/day, and 10 or more cups/day. A standard portion size was specified for each food item, and the respondents were asked to choose their usual portion size from 3 options (less than half the standard portion size, standard portion size, or more than 1.5 times the standard portion size). We calculated daily intake of most foods by multiplying daily frequency of consumption by usual portion size. In the present analysis, we used 134 food and beverage items of the FFQ (excluding 11 items that correlated strongly with others and 2 items with no energy or nutrition). The participants completed a 28- or 14-day (Okinawa) dietary record in 1 year, ie, 7-day dietary records were collected 4 (or 2) times at 3-month (or 6-month) intervals during the course of a year. The survey method using dietary records has been described elsewhere.15,16,18 We matched the 1101 unique food codes in the dietary records to food items on the FFQ to ensure that daily food intakes derived from the dietary records were comparable with those derived from the FFQ. A total of 558 food codes in the dietary records were matched to the 134 food items on the FFQ. Statistical analysis Men and women were analyzed separately. Some foods or food groups that were similar in nutritional content or culinary use were combined, leaving 48 food groups for the present analysis (Table 1). When information from dietary records was used to combine 2 or more foods into 1 food group, the amount of food before cooking was used for all food items except noodles and rice. Consumption of noodles and rice was expressed in weight of boiled noodles and steamed rice, respectively. 
Consumption of miso soup and beverages such as coffee and green tea was expressed as the weight of the consumed infusion rather than the weight of miso, coffee beans, instant coffee, or tea leaves. We calculated the means and standard deviations of each food group intake estimated from dietary records and the 2 FFQs. In addition, the reproducibility and validity of 48 food group intakes were evaluated by using Spearman correlation coefficients between intakes from the 2 FFQs and between intakes derived from the dietary records and those derived from the FFQ_V. To identify dietary patterns, we performed principal component analysis based on log-transformed intakes of 48 food groups for each of the 2 FFQs and the dietary records. The factors were rotated by orthogonal transformation (varimax rotation) to maintain uncorrelated factors and greater interpretability. We considered eigenvalues, the scree plot, and the interpretability of the factors to determine the number of factors to retain and identified 3 dietary patterns (factors) in both men and women. The dietary patterns were named according to the food items showing high loading (absolute value) in each dietary pattern. The factor scores for each dietary pattern were calculated by summing intakes of food items weighted by their factor loadings. The scores were energy-adjusted using the residual method. When we examined reproducibility and validity by using dietary pattern scores without energy adjustment, or based on energy-adjusted intake of food groups, the derived dietary patterns and their reproducibility and validity were similar to those based on energy-adjusted scores. We examined the correspondence of each dietary pattern identified from the dietary records with dietary patterns extracted from the FFQ_V by comparing the pattern of food items showing high loading. We repeated this procedure to link dietary patterns identified from the FFQ_R and FFQ_V. For each dietary pattern, Spearman correlation coefficients between the factor score for the dietary patterns derived from the dietary records and those derived from the FFQ_V were calculated to evaluate the validity of the FFQ. In addition, to determine the reproducibility of the FFQ, Spearman correlation coefficients between the factor scores for the dietary patterns derived from the 2 FFQs administered 1 year apart were calculated. We estimated Spearman correlation coefficients between factor loadings from the FFQ in each population (subsample of the validity study, overall cohort, men, women, and men and women combined) to compare the dietary patterns derived from the FFQ between the subsample of the validity study and overall cohort and between men or women and men and women combined in the overall cohort. All analyses were performed using Statistical Analysis System (SAS) version 9.1 (SAS Institute, Cary, NC, USA). Men and women were analyzed separately. Some foods or food groups that were similar in nutritional content or culinary use were combined, leaving 48 food groups for the present analysis (Table 1). When information from dietary records was used to combine 2 or more foods into 1 food group, the amount of food before cooking was used for all food items except noodles and rice. Consumption of noodles and rice was expressed in weight of boiled noodles and steamed rice, respectively. 
Consumption of miso soup and beverages such as coffee and green tea was expressed as the weight of the consumed infusion rather than the weight of miso, coffee beans, instant coffee, or tea leaves. We calculated the means and standard deviations of each food group intake estimated from dietary records and the 2 FFQs. In addition, the reproducibility and validity of 48 food group intakes were evaluated by using Spearman correlation coefficients between intakes from the 2 FFQs and between intakes derived from the dietary records and those derived from the FFQ_V. To identify dietary patterns, we performed principal component analysis based on log-transformed intakes of 48 food groups for each of the 2 FFQs and the dietary records. The factors were rotated by orthogonal transformation (varimax rotation) to maintain uncorrelated factors and greater interpretability. We considered eigenvalues, the scree plot, and the interpretability of the factors to determine the number of factors to retain and identified 3 dietary patterns (factors) in both men and women. The dietary patterns were named according to the food items showing high loading (absolute value) in each dietary pattern. The factor scores for each dietary pattern were calculated by summing intakes of food items weighted by their factor loadings. The scores were energy-adjusted using the residual method. When we examined reproducibility and validity by using dietary pattern scores without energy adjustment, or based on energy-adjusted intake of food groups, the derived dietary patterns and their reproducibility and validity were similar to those based on energy-adjusted scores. We examined the correspondence of each dietary pattern identified from the dietary records with dietary patterns extracted from the FFQ_V by comparing the pattern of food items showing high loading. We repeated this procedure to link dietary patterns identified from the FFQ_R and FFQ_V. For each dietary pattern, Spearman correlation coefficients between the factor score for the dietary patterns derived from the dietary records and those derived from the FFQ_V were calculated to evaluate the validity of the FFQ. In addition, to determine the reproducibility of the FFQ, Spearman correlation coefficients between the factor scores for the dietary patterns derived from the 2 FFQs administered 1 year apart were calculated. We estimated Spearman correlation coefficients between factor loadings from the FFQ in each population (subsample of the validity study, overall cohort, men, women, and men and women combined) to compare the dietary patterns derived from the FFQ between the subsample of the validity study and overall cohort and between men or women and men and women combined in the overall cohort. All analyses were performed using Statistical Analysis System (SAS) version 9.1 (SAS Institute, Cary, NC, USA). JPHC Study: The JPHC Study is a population-based prospective study of cancer and cardiovascular disease and was launched in 1990 for cohort I and in 1993 for cohort II.14 The participants were 140 420 residents of 11 public health center areas and were aged 40 to 59 years for cohort I and aged 40 to 69 years for cohort II at each baseline survey. At baseline and at the 5- and 10-year follow-ups, a questionnaire survey was conducted to obtain information on medical histories and health-related lifestyle, including diet. 
Study population: The subjects of the reproducibility and validity study of the FFQ used in the 5-year follow-up survey were a subsample of the participants in JPHC Study cohorts I and II. Married couples were recruited. The present study used a data set for 498 subjects (244 men and 254 women; 209 in cohort I and 289 in cohort II) who completed 2 FFQs with a 1-year interval and dietary records for a total of 28 or 14 days. Details of the validation study have been described elsewhere.15,16 There was no significant difference in mean age between participants of the validation study and cohort members among men and women in cohort I or among men in cohort II; however, there was a 3-year age difference among cohort II women (mean age, 56 years for the validation subsample vs 59 years for the cohort participants). In the present study, we identified dietary patterns using FFQ data from the 5-year follow-up survey. Oral or written informed consent from the participants was received before the study. The study did not undergo ethical approval since it was conducted before the advent of ethical guidelines for epidemiology research in Japan, which mandate such approval. Dietary assessment: The participants completed the FFQ twice, with an approximately 1-year interval. The FFQ for the evaluation of validity (FFQ_V) was completed after completion of the last dietary record and was compared with the dietary records. The FFQ for the evaluation of reproducibility (FFQ_R) was administered 1 year before or after the FFQ_V and was compared with the FFQ_V. The FFQ included questions on 138 food items (with standard portions/units and eating frequency) and 14 supplementary questions during the previous year, from which a composition table was developed for 147 foods items.17 For most food items, 9 response options were available for eating frequency: rarely, 1 to 3 times/month, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, once a day, 2 to 3 times/day, 4 to 6 times/day, and 7 or more times/day. Slightly different options were used for beverage intake: rarely, 1 to 2 times/week, 3 to 4 times/week, 5 to 6 times/week, 1 cup/day, 2 to 3 cups/day, 4 to 6 cups/day, 7 to 9 cups/day, and 10 or more cups/day. A standard portion size was specified for each food item, and the respondents were asked to choose their usual portion size from 3 options (less than half the standard portion size, standard portion size, or more than 1.5 times the standard portion size). We calculated daily intake of most foods by multiplying daily frequency of consumption by usual portion size. In the present analysis, we used 134 food and beverage items of the FFQ (excluding 11 items that correlated strongly with others and 2 items with no energy or nutrition). The participants completed a 28- or 14-day (Okinawa) dietary record in 1 year, ie, 7-day dietary records were collected 4 (or 2) times at 3-month (or 6-month) intervals during the course of a year. The survey method using dietary records has been described elsewhere.15,16,18 We matched the 1101 unique food codes in the dietary records to food items on the FFQ to ensure that daily food intakes derived from the dietary records were comparable with those derived from the FFQ. A total of 558 food codes in the dietary records were matched to the 134 food items on the FFQ. Statistical analysis: Men and women were analyzed separately. 
Some foods or food groups that were similar in nutritional content or culinary use were combined, leaving 48 food groups for the present analysis (Table 1). When information from dietary records was used to combine 2 or more foods into 1 food group, the amount of food before cooking was used for all food items except noodles and rice. Consumption of noodles and rice was expressed in weight of boiled noodles and steamed rice, respectively. Consumption of miso soup and beverages such as coffee and green tea was expressed as the weight of the consumed infusion rather than the weight of miso, coffee beans, instant coffee, or tea leaves. We calculated the means and standard deviations of each food group intake estimated from dietary records and the 2 FFQs. In addition, the reproducibility and validity of 48 food group intakes were evaluated by using Spearman correlation coefficients between intakes from the 2 FFQs and between intakes derived from the dietary records and those derived from the FFQ_V. To identify dietary patterns, we performed principal component analysis based on log-transformed intakes of 48 food groups for each of the 2 FFQs and the dietary records. The factors were rotated by orthogonal transformation (varimax rotation) to maintain uncorrelated factors and greater interpretability. We considered eigenvalues, the scree plot, and the interpretability of the factors to determine the number of factors to retain and identified 3 dietary patterns (factors) in both men and women. The dietary patterns were named according to the food items showing high loading (absolute value) in each dietary pattern. The factor scores for each dietary pattern were calculated by summing intakes of food items weighted by their factor loadings. The scores were energy-adjusted using the residual method. When we examined reproducibility and validity by using dietary pattern scores without energy adjustment, or based on energy-adjusted intake of food groups, the derived dietary patterns and their reproducibility and validity were similar to those based on energy-adjusted scores. We examined the correspondence of each dietary pattern identified from the dietary records with dietary patterns extracted from the FFQ_V by comparing the pattern of food items showing high loading. We repeated this procedure to link dietary patterns identified from the FFQ_R and FFQ_V. For each dietary pattern, Spearman correlation coefficients between the factor score for the dietary patterns derived from the dietary records and those derived from the FFQ_V were calculated to evaluate the validity of the FFQ. In addition, to determine the reproducibility of the FFQ, Spearman correlation coefficients between the factor scores for the dietary patterns derived from the 2 FFQs administered 1 year apart were calculated. We estimated Spearman correlation coefficients between factor loadings from the FFQ in each population (subsample of the validity study, overall cohort, men, women, and men and women combined) to compare the dietary patterns derived from the FFQ between the subsample of the validity study and overall cohort and between men or women and men and women combined in the overall cohort. All analyses were performed using Statistical Analysis System (SAS) version 9.1 (SAS Institute, Cary, NC, USA). RESULTS: The 48 food group intakes calculated by 28- or 14-day dietary records and the 2 FFQs, and their correlations, are shown in Table 2 for men and Table 3 for women. 
Regarding reproducibility, Spearman correlation coefficients between the 2 FFQs ranged from 0.21 for wine to 0.80 for coffee in men (Table 2) and from 0.37 for sake to 0.81 for miso soup in women (Table 3). In particular, intakes of rice, bread, miso soup, pickles, other fruit, salt fish, processed meat, milk, dairy products, green tea, and coffee showed high correlation coefficients (r >0.60) between the 2 FFQs in both men and women. Regarding validity, Spearman correlation coefficients between the dietary records and the FFQ_V ranged from 0.06 for oily fish to 0.75 for coffee in men (Table 2) and from 0.12 for shochu to 0.80 for coffee in women (Table 3). In both men and women, intakes of coffee, bread, pickles, and milk estimated by the FFQ_V showed a high correlation (r >0.60) with those estimated by the dietary records, whereas the validity of oily fish between the dietary records and the FFQ_V was very low. Abbreviations: DR, dietary record; FFQ, food frequency questionnaire; SD, standard deviation. aFFQ_R was administered 1 year after or before FFQ_V. bFFQ_V was administered after completion of the DRs. c28- and 14-day DRs were collected over a 1-year period. Abbreviations: DR, dietary record; FFQ, food frequency questionnaire; SD, standard deviation. aFFQ_R was administered 1 year after or before FFQ_V. bFFQ_V was administered after completion of the DRs. c28- and 14-day DRs were collected over a 1-year period. Principal component analysis identified 3 dietary patterns from the dietary records and 2 FFQs (Table 4 for men and Table 5 for women), and they were similar in terms of factor loading pattern across the data sources and between sexes. Of the 3 dietary patterns, a pattern highly loaded by intakes of vegetables, fruit, potatoes, soy products, mushrooms, seaweed, oily fish, and green tea was named the prudent Japanese pattern. A dietary pattern associated with high intakes of bread, meat, processed meat, fruit juice, coffee, black tea, soft drinks, sauces, mayonnaise, and dressing was named the westernized Japanese pattern. Another dietary pattern was characterized by high intakes of rice, miso soup, pickles, salmon, salty fish, seafood other than fish, fruit, and sake (men only), and was named the traditional Japanese pattern. The 3 dietary patterns from the dietary records, the FFQ_R, and the FFQ_V explained 23.9%, 29.4%, and 26.5%, respectively, of variance in men and 23.0%, 24.9%, and 32.9%, respectively, of variance in women. Abbreviations: FFQ, food frequency questionnaire; DR, dietary record. aFor simplicity, factor loadings less than ±0.10 are not listed. bFFQ_R was administered 1 year after or before FFQ_V. cFFQ_V was administered after completion of the DRs. d28- and 14-day DRs were collected over a 1-year period. Abbreviations: FFQ, food frequency questionnaire; DR, dietary record. aFor simplicity, factor loadings less than ±0.10 are not listed. bFFQ_R was administered 1 year after or before FFQ_V. cFFQ_V was administered after completion of the DRs. d28- and 14-day DRs were collected over a 1-year period. Three similar dietary patterns were identified in the overall cohort for both men and women (Appendix). 
The Spearman correlation coefficients between factor loadings from FFQ_R in the subsample of the validity study and those from the FFQ in the overall cohort for 48 food groups were 0.89 in men and 0.82 in women for the prudent Japanese pattern, 0.82 in men and 0.75 in women for the westernized Japanese pattern, and 0.75 in men and 0.47 in women for the traditional Japanese pattern. In the overall cohort, the corresponding values between factor loadings for men or women and those for men and women combined were 0.98 in both men and women for the prudent Japanese pattern, 0.96 in men and 0.95 in women for the westernized Japanese pattern, and 0.76 in men and 0.74 in women for the traditional Japanese pattern. The Spearman correlation coefficients between the dietary records and the 2 FFQs are shown in Table 6. Regarding reproducibility between the 2 FFQs, all 3 dietary patterns were acceptable in both men and women. In particular, the traditional Japanese pattern in men and the westernized Japanese pattern in women were highly reproduced (correlation coefficients: 0.77 in men and 0.71 in women). Regarding validity, the traditional Japanese pattern had higher correlation coefficients between the dietary records and the FFQ_V than did other patterns among both men and women (correlation coefficient: 0.49 in men and 0.63 in women), whereas the westernized Japanese pattern in men (0.32) had the lowest correlation coefficient among the 3 dietary patterns. Abbreviations: DR, dietary record; FFQ, food frequency questionnaire. aDietary pattern scores were adjusted for energy intake by using the residual method. bFFQ_R was administered 1 year after or before FFQ_V. cFFQ_V was administered after completion of the DRs. d28- or 14-day DRs were collected over a 1-year period. All correlations: P < 0.001. DISCUSSION: In a subsample of the validation study of diet in the JPHC Study, we identified 3 major Japanese dietary patterns (prudent, westernized, and traditional) that were similar across data sources and between sexes. The reproducibility and validity of dietary patterns derived from the FFQ were acceptable. Spearman correlation coefficients between scores of the dietary patterns derived from the 2 FFQs ranged from 0.55 to 0.77. The corresponding values between scores of the dietary patterns derived from the dietary records and those derived from the FFQ ranged from 0.32 to 0.63. These dietary patterns were also identified in the entire population of the JPHC Study. The 3 major dietary patterns identified in the present study have also been reported in previous Japanese studies. Most studies in Japan have identified a dietary pattern similar to the prudent Japanese pattern in the present study.19–35 This dietary pattern is characterized by high intakes of not only vegetables and fruit but also typical Japanese foods, including soy products, seaweed, mushrooms, fish, and green tea. 
Dietary patterns similar to the westernized Japanese pattern in the present study, ie, high intakes of meat, processed meat, bread, coffee, black tea, mayonnaise, and dressing, have also been observed in many Japanese studies.19–35 Moreover, the traditional Japanese pattern, which is characterized by high intakes of rice, miso soup, pickles, salmon, salty fish, and fruit, has been observed in several studies.19–21,26,30,31 Because the westernized breakfast pattern22,24,28,29,33,34 and similar patterns, which were identified in some other studies, is associated with low intakes of rice and miso soup, the westernized breakfast pattern could be viewed as the mirror image of the traditional Japanese pattern. We observed acceptable validity and reproducibility of the 3 dietary patterns; the Spearman correlation coefficients between the dietary records and 2 FFQs ranged from 0.55 to 0.77 for reproducibility and from 0.32 to 0.63 for validity. These values were comparable to those reported in previous studies. Regarding reproducibility, the correlation coefficients ranged from 0.67 to 0.70 (Pearson) in the Health Professional Follow-up Study,9 from 0.63 to 0.73 (over a 1-year period; Spearman),10 and from 0.30 to 0.52 (over a 10-year period; Pearson)11 in the Swedish Mammography Cohort, and from 0.64 to 0.81 (Spearman) in the Southampton Women’s Survey.7 Regarding validity, the correlation coefficients ranged from 0.34 to 0.64 (Pearson) in the Health Professionals Follow-up Study,9 from 0.34 to 0.61 (Pearson) in the Monitoring of Trends and Determinants in Cardiovascular Diseases,13 from 0.41 to 0.73 (Spearman) in the Swedish Mammography Cohort,10 from 0.35 to 0.67 (Pearson) in a UK study,8 and from 0.36 to 0.62 (Pearson) in a Japanese study.12 In the present study, the traditional Japanese pattern in men and the westernized Japanese pattern in women were highly reproducible. In the 2 FFQs, salty fish, salmon, pickles, sake, and rice, which are characteristic of the traditional Japanese pattern in men, showed high loadings to the pattern. In addition, these food intakes were highly correlated between the 2 FFQs. Similarly, intakes of beef, processed meat, coffee, bread, pork, chicken, and dressing showed high loadings to the westernized Japanese pattern in women in the 2 FFQs and were also highly correlated between the 2 FFQs. Of the 3 dietary patterns identified, the traditional Japanese pattern showed relatively good validity in both men and women. This might be because pickles, rice, salty fish, salmon, and sake (men only) showed high factor loadings to both dietary record and FFQ, and these intakes were moderately to highly correlated between the dietary record and FFQ. In contrast, the westernized Japanese pattern in men was less valid, perhaps because among foods that were highly and positively loaded to the westernized Japanese pattern derived using dietary record data, some fish items were inversely (not positively) associated with the dietary pattern based on FFQ data. In addition, intakes of meat and processed meat were only moderately correlated between dietary record and FFQ. Regarding the traditional Japanese pattern in women, the factor loadings from the FFQ in the subsample and those from the FFQ in the overall cohort for food groups were not highly correlated (correlation coefficient = 0.47). 
This might be because factor loadings of pickles, citrus fruit, and noodles in the traditional Japanese pattern were lower in the overall cohort than in the subsample, whereas those for pork, beef, processed meat, egg, oily fish, and fish products were higher in the overall cohort than in the subsample. Our study had some limitations. First, although we assessed the validity and reproducibility of dietary patterns in a subsample of the JPHC Study population, participants in the validation study might be more health conscious than nonparticipants. Therefore, the present findings might not be applicable to the entire population of the JPHC Study. However, we also identified 3 similar dietary patterns in the entire population. Second, we observed similar dietary patterns in men and women. This might be partly because married couples were recruited for the validity study. Third, we assessed validity based on a single measurement of dietary intake, which might not reflect past dietary patterns. In addition, dietary pattern alone could change during the follow-up period, due to changes in food preference and food availability. Finally, we used the FFQ completed at the end of the dietary records to evaluate validity. Participant recall of dietary intake could be influenced by the administration of dietary records. In conclusion, we identified 3 Japanese dietary patterns (prudent, westernized, and traditional) using FFQ data among participants of the validation study and confirmed that validity and reproducibility of the FFQ are acceptable. These validation data provide a basis for future analysis of the association between dietary patterns and disease risk in the JPHC Study population. Abbreviation: JPHC Study, Japan Public Health Center-Based Prospective Study. aFor simplicity, factor loadings less than ±0.10 are not listed.
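The reproducibility and validity figures quoted throughout are ordinary Spearman rank correlations between paired dietary-pattern scores. A minimal sketch, assuming three hypothetical score tables (`scores_ffq_r`, `scores_ffq_v`, `scores_dr`) indexed by the same participants:

```python
# Sketch only: Spearman correlations used for reproducibility (FFQ_R vs FFQ_V)
# and validity (dietary records vs FFQ_V) of the energy-adjusted pattern scores.
# The three DataFrames are assumed inputs with columns "prudent", "westernized",
# and "traditional" and one row per participant.
from scipy.stats import spearmanr

for pattern in ["prudent", "westernized", "traditional"]:
    rho_repro, p_repro = spearmanr(scores_ffq_r[pattern], scores_ffq_v[pattern])
    rho_valid, p_valid = spearmanr(scores_dr[pattern], scores_ffq_v[pattern])
    print(f"{pattern}: reproducibility r={rho_repro:.2f} (P={p_repro:.3g}), "
          f"validity r={rho_valid:.2f} (P={p_valid:.3g})")
```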
Background: Analysis of dietary pattern is increasingly popular in nutritional epidemiology. However, few studies have examined the validity and reproducibility of dietary patterns. We assessed the reproducibility and validity of dietary patterns identified by a food frequency questionnaire (FFQ) used in the 5-year follow-up survey of the Japan Public Health Center-Based Prospective Study (JPHC Study). Methods: The participants were a subsample (244 men and 254 women) from the JPHC Study. Principal component analysis was used to identify dietary patterns from 28- or 14-day dietary records and 2 FFQs. To assess reproducibility and validity, we calculated Spearman correlation coefficients between dietary pattern scores derived from FFQs separated by a 1-year interval, and between dietary pattern scores derived from dietary records and those derived from an FFQ completed after the dietary records, respectively. Results: We identified 3 Japanese dietary patterns from the dietary records and 2 FFQs: prudent, westernized, and traditional. Regarding reproducibility, Spearman correlation coefficients between the 2 FFQs ranged from 0.55 for the westernized Japanese pattern in men and the prudent Japanese pattern in women to 0.77 for the traditional Japanese pattern in men. Regarding validity, the corresponding values between dietary records and the FFQ ranged from 0.32 for the westernized Japanese pattern in men to 0.63 for the traditional Japanese pattern in women. Conclusions: Acceptable reproducibility and validity were shown by the 3 dietary patterns identified by principal component analysis based on the FFQ used in the 5-year follow-up survey of the JPHC Study.
null
null
6,678
295
[ 347, 99, 226, 462, 584 ]
8
[ "dietary", "food", "study", "patterns", "ffq", "dietary patterns", "pattern", "men", "women", "dietary records" ]
[ "associated dietary pattern", "dietary patterns study", "associations dietary patterns", "analysis association dietary", "analysis dietary pattern" ]
null
null
[CONTENT] dietary patterns | Japanese | reproducibility | validity [SUMMARY]
[CONTENT] dietary patterns | Japanese | reproducibility | validity [SUMMARY]
[CONTENT] dietary patterns | Japanese | reproducibility | validity [SUMMARY]
null
[CONTENT] dietary patterns | Japanese | reproducibility | validity [SUMMARY]
null
[CONTENT] Adult | Aged | Cohort Studies | Data Collection | Diet | Female | Follow-Up Studies | Humans | Japan | Male | Middle Aged | Nutrition Surveys | Nutritional Status | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Aged | Cohort Studies | Data Collection | Diet | Female | Follow-Up Studies | Humans | Japan | Male | Middle Aged | Nutrition Surveys | Nutritional Status | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Aged | Cohort Studies | Data Collection | Diet | Female | Follow-Up Studies | Humans | Japan | Male | Middle Aged | Nutrition Surveys | Nutritional Status | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] Adult | Aged | Cohort Studies | Data Collection | Diet | Female | Follow-Up Studies | Humans | Japan | Male | Middle Aged | Nutrition Surveys | Nutritional Status | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] associated dietary pattern | dietary patterns study | associations dietary patterns | analysis association dietary | analysis dietary pattern [SUMMARY]
[CONTENT] associated dietary pattern | dietary patterns study | associations dietary patterns | analysis association dietary | analysis dietary pattern [SUMMARY]
[CONTENT] associated dietary pattern | dietary patterns study | associations dietary patterns | analysis association dietary | analysis dietary pattern [SUMMARY]
null
[CONTENT] associated dietary pattern | dietary patterns study | associations dietary patterns | analysis association dietary | analysis dietary pattern [SUMMARY]
null
[CONTENT] dietary | food | study | patterns | ffq | dietary patterns | pattern | men | women | dietary records [SUMMARY]
[CONTENT] dietary | food | study | patterns | ffq | dietary patterns | pattern | men | women | dietary records [SUMMARY]
[CONTENT] dietary | food | study | patterns | ffq | dietary patterns | pattern | men | women | dietary records [SUMMARY]
null
[CONTENT] dietary | food | study | patterns | ffq | dietary patterns | pattern | men | women | dietary records [SUMMARY]
null
[CONTENT] dietary | patterns | dietary patterns | studies | disease | risk | disease risk | validity dietary patterns | food | dietary pattern [SUMMARY]
[CONTENT] dietary | food | times | day | cohort | items | dietary records | records | ffq | study [SUMMARY]
[CONTENT] men | women | japanese pattern | japanese | pattern | drs | dietary | correlation | administered | ffq_v [SUMMARY]
null
[CONTENT] dietary | food | study | patterns | cohort | dietary patterns | pattern | men | women | times [SUMMARY]
null
[CONTENT] ||| ||| 5-year | the Japan Public Health Center-Based Prospective Study [SUMMARY]
[CONTENT] 244 | 254 ||| 28- | 14-day | 2 ||| Spearman | 1-year [SUMMARY]
[CONTENT] 3 | Japanese | 2 ||| Spearman | 2 | 0.55 | Japanese | Japanese | 0.77 | Japanese ||| FFQ | 0.32 | Japanese | 0.63 | Japanese [SUMMARY]
null
[CONTENT] ||| ||| 5-year | the Japan Public Health Center-Based Prospective Study ||| 244 | 254 ||| 28- | 14-day | 2 ||| Spearman | 1-year ||| ||| 3 | Japanese | 2 ||| Spearman | 2 | 0.55 | Japanese | Japanese | 0.77 | Japanese ||| FFQ | 0.32 | Japanese | 0.63 | Japanese ||| 3 | FFQ | 5-year [SUMMARY]
null
Longitudinal study of leptin levels in chronic hemodialysis patients.
21676262
The influence of serum leptin levels on nutritional status and survival in chronic hemodialysis patients remained to be elucidated. We conducted a prospective longitudinal study of leptin levels and nutritional parameters to determine whether changes of serum leptin levels modify nutritional status and survival in a cohort of prevalent hemodialysis patients.
BACKGROUND
Leptin, dietary energy and protein intake, biochemical markers of nutrition and body composition (anthropometry and bioimpedance analysis) were measured at baseline and at 6, 12, 18 and 24 months following enrollment, in 101 prevalent hemodialysis patients (37% women) with a mean age of 64.6 ± 11.5 years. Observation of this cohort was continued over 2 additional years. Changes in repeated measures were evaluated, with adjustment for baseline differences in demographic and clinical parameters.
METHODS
A significant reduction of leptin levels with time was observed (linear estimate: -2.5010 ± 0.57 ng/ml/2 y; p < 0.001), with a more rapid decline in leptin levels in the highest leptin tertile in both unadjusted (p = 0.007) and fully adjusted (p = 0.047) models. A significant reduction in body composition parameters over time was observed, but was not influenced by leptin (leptin-by-time interactions were not significant). No significant associations were noted between leptin levels and changes in dietary protein or energy intake, or laboratory nutritional markers. Finally, cumulative incidences of survival were unaffected by the baseline serum leptin levels.
RESULTS
Thus, leptin levels reflect fat mass depots rather than independently contributing to uremic anorexia or modifying nutritional status and/or survival in chronic hemodialysis patients. Such information is important if leptin is contemplated as a potential therapeutic target in hemodialysis patients.
CONCLUSIONS
[ "Aged", "Biomarkers", "Body Composition", "Body Mass Index", "Cross-Sectional Studies", "Dietary Proteins", "Female", "Follow-Up Studies", "Humans", "Kidney Failure, Chronic", "Leptin", "Linear Models", "Longitudinal Studies", "Male", "Middle Aged", "Multivariate Analysis", "Nutritional Status", "Prospective Studies", "Renal Dialysis", "Survival Analysis" ]
3132708
Background
In recent years, the number of patients with end-stage renal disease (ESRD) has been increasing worldwide [1]. Depending in part upon the method used to evaluate nutritional status and the population studied, from 40 to 70 percent of patients with ESRD are malnourished [2,3] resulting in poor clinical outcomes [4]. Among the mechanisms responsible for malnutrition, leptin was believed to influence nutritional markers in patients with ESRD [5]. Leptin is a 16-kDa protein identified as the product of the obese gene; it is exclusively produced in adipocytes, and regulates food intake and energy expenditure in animal models [6]. Leptin decreases food intake by decreasing NPY (neuropeptide Y - one of the most potent stimulators of food intake) mRNA [7] and increasing alpha-MSH (alpha-melanocyte-stimulating hormone - an inhibitor of food intake) [8]. Besides linking adiposity and central nervous circuits to reduced appetite and enhanced energy expenditure in the general population [9], leptin has been shown to increase overall sympathetic nerve activity [10], facilitate glucose utilization and improve insulin sensitivity [11]. Furthermore, the prospective West of Scotland Coronary Prevention Study (WOSCOPS) reported that elevated leptin increases the relative risk of cardio-vascular disease in the general population independently of fat mass [12]. In general, serum leptin levels are significantly elevated in patients with renal failure, particularly when compared to age, gender and body mass index (BMI)-matched controls [13,14]. However, the role of hyperleptinemia in ESRD patients is somewhat unconventional. In contrast with its anorexogenic effects recognized in the general population [9] and even in experimental models of uremia (in subtotal nephrectomized and leptin receptor-deficient [db/db] mice) [15], leptin has not been reported to affect perceived appetite and nutrient intake in dialysis patients [16,17]. Although in some observational studies, increased serum leptin concentrations were observed in ESRD patients in parallel with loss of lean body mass [18,19] or with hypoalbuminemia and low protein intake [20], some others failed to find any correlation between hyperleptinemia and weight change [9] or lean mass [21] in this population. Moreover, several clinical studies suggested that leptin is a negative acute phase protein [22] and can serve as a marker of adequate nutritional status, rather than an appetite-reducing uremic toxin in hemodialysis patients [23-25]. Finally, the relationship between elevated serum leptin levels and clinical outcomes in ESRD has not been fully defined. In one small prospective cohort of hemodialysis patients, lower baseline serum leptin levels predicted mortality [26], but neither changes in leptin over time were measured, nor were leptin levels normalized to body fat mass in this study. Thus, the influence of serum leptin levels on nutritional status and survival in chronic hemodialysis patients remained to be elucidated. In view of leptin's physiological role, information on effects of prolonged hyperleptinemia (independent of fat mass) on nutritional status of chronic hemodialysis patients, which may also impact on their survival, would be of interest. The aim of the present prospective longitudinal study was therefore to study longitudinal changes in serum leptin levels and to relate them to the changes in nutritional markers and survival in chronic hemodialysis patients.
Methods
Patients This prospective observational study was approved by the Ethics Committee of Assaf Harofeh Medical Center (Zerifin, affiliated to the Sackler Faculty of Medicine, Tel Aviv University, Israel). Informed consent was obtained before any trial-related activities. Patients were eligible for entry when they had been on HD therapy for at least 3 months and were 18 years or older, with no clinically active cardio-vascular or infectious diseases on entry. We excluded patients with edema, pleural effusion or ascites at their initial assessment, as well as patients with malignant disease, liver cirrhosis, neuro-muscular diseases, amputations or any deformities of the body. Exclusion criteria at the entry of the study also included co-morbidity (auto-immune disease and/or acute infections) and/or medication (prednisone) that might interfere with plasma leptin concentrations. A flow chart of the study is presented in Figure 1. In total, 101 patients (64 men and 37 women) with a mean age of 64.6 ± 11.5 years, receiving maintenance hemodialysis treatment at our outpatient HD clinic, were included in the study. Of the patients studied, 52 were diabetic (all diabetic patients had type 2 diabetes). Study measurements were performed at baseline and at 6, 12, 18 and 24 months from enrollment. After the longitudinal measurements ended, we continued clinical observation of our cohort for 2 additional years. Thus, in total, the study period extended over 35 ± 17 months. During this period, 33 patients (32.7%) died (the main causes of death were cardio-vascular [12 of 33 patients; 36.4%] and sepsis [12 of 33 patients; 36.4%]), 13 patients (12.9%) underwent kidney transplantation, 3 patients (3.0%) changed dialysis modality, and 10 patients (10.0%) transferred to other hemodialysis units. All patients underwent regular dialysis via their vascular access (81.2% of patients had an arterio-venous fistula) 4-5 h three times per week at a blood flow rate of 250-300 ml/min. Bicarbonate dialysate (30 mEq/L) at a dialysis solution flow rate of 500 ml/min was used in all cases. All dialysis was performed with a biocompatible dialyzer membrane with a surface area of 1.0-1.8 m2. The efficiency of the dialysis was assessed based on the delivered dose of dialysis (Kt/V urea) using a single-pool urea kinetic model (mean Kt/V was 1.31 ± 0.23 in our population). Figure 1. Flow diagram of the study. Information on vascular disease (cerebral vascular, peripheral vascular and heart disease) was obtained from a detailed medical history. Most patients required antihypertensive medications as well as other drugs commonly used in ESRD, such as phosphate and potassium binders, diuretics, and supplements of vitamins B, C, and D.
Dietary intake
A continuous 3-day dietary history (which included a dialysis day, a weekend day and a non-dialysis day) was recorded on a self-completed food diary. Dietary energy and protein intake were then calculated and normalized to adjusted body weight (ABW) according to the formula of reference [27] (an illustrative form of this expression is sketched below), in which SBW, the standard body weight, was determined from the National Health and Nutrition Examination Survey II (NHANES II) population medians for age, sex, frame size, and stature [27].
Dietary protein intake was also estimated by the protein catabolic rate (PCR), calculated from the patient's urea generation rate by urea kinetic modeling [28]. A single-pool urea kinetic model was used to estimate the nPCR.
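The ABW expression itself appears to have been a figure in the original and did not survive extraction. The adjusted body weight recommended in the K/DOQI nutrition guidelines, which reference [27] reflects, is usually written as follows (assumed here, not verified against the original figure):

ABWef = BWef + 0.25 × (SBW - BWef)

where BWef is the actual edema-free body weight and SBW the NHANES II standard body weight; energy and protein intakes are then divided by ABWef (in kg) to give the normalized values.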
Anthropometric measurements
BMI, triceps skinfold thickness (TSF), mid arm circumference (MAC) and calculated mid arm muscle circumference (MAMC) were measured as anthropometric variables. BMI was calculated as dry weight in kilograms divided by the square of height in meters. TSF was measured with a conventional skinfold caliper using standard techniques. Mid arm circumference was measured with a plastic measuring tape, and MAMC was derived from MAC and TSF (see the illustrative expressions below).
Body composition analysis
Body composition was determined by bioelectrical impedance analysis (BIA; Nutriguard-M, Data-Input, Frankfurt, Germany). We used gel-based electrodes specifically developed for BIA measurements (Bianostic AT, Data-Input GmbH). On the day of blood collection, patients underwent BIA measurement approximately 30 minutes after dialysis. BIA electrodes were placed on the same body side used for the anthropometric measurements, and the multi-frequency technique was used. Fat-free mass (FFM) was calculated using the equation of Kyle et al, validated against dual-energy x-ray absorptiometry in 343 healthy adult subjects [29] (see the illustrative expressions below). Fat mass and fat-free mass were standardized by squared height (m2) and expressed in kg/m2 as the fat mass index (FMI) and fat-free mass index (FFMI), respectively.
Phase angle (PA) describes the relationship between the two vector components of impedance (reactance and resistance) of the human body to an alternating electric current. PA has been shown to provide a BIA-derived prognostic index of morbidity and mortality [30].
Laboratory evaluation
Blood samples were taken in a non-fasting state before a midweek hemodialysis session. CBC, creatinine, urea, albumin, transferrin and total cholesterol were measured by routine laboratory methods. IL-6 and leptin were measured in plasma samples using commercially available enzyme-linked immunosorbent assay (ELISA) kits (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol. The mean minimal detectable level was 0.7 pg/ml for IL-6 and 7.8 pg/ml for leptin.
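The MAMC and FFM formulas referenced in the two preceding subsections were figures in the original and did not survive extraction. For orientation only, the forms commonly used with these methods are sketched here; the exact expressions applied by the authors are assumed, not verified:

MAMC (cm) = MAC (cm) - π × TSF (cm)

FFM (kg) = -4.104 + 0.518 × height²/resistance + 0.231 × weight + 0.130 × reactance + 4.229 × sex

(the second expression is the equation commonly quoted from Kyle et al; height in cm, resistance and reactance in ohm, weight in kg, sex coded 1 for men and 0 for women). The derived indices are then FMI = fat mass / height² and FFMI = FFM / height², both in kg/m².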
Intra-assay and inter-assay coefficients of variation were 4.2% and 6.4%, respectively, for IL-6, and 3.3% and 5.4%, respectively, for leptin.
Statistical analysis
Data are expressed as mean ± standard deviation (SD), as median and interquartile range (Q1 to Q3) for variables that did not follow a normal distribution, or as frequencies, as noted. To compare the means of continuous variables across sex-specific tertiles of leptin, one-way ANOVA and analysis of covariance with adjustments (ANCOVA) were used. Categorical data are presented as percentages and were compared among groups by χ2 tests. Correlations between leptin and clinical and laboratory parameters were assessed using Spearman rank order correlation coefficients (because of the skewed distribution of leptin levels). Multivariate regression analysis was performed to obtain partial (adjusted) correlations (R2). Case-mix-adjusted models were controlled for age, gender, history of CV disease, presence or absence of diabetes mellitus, dialysis vintage and fat mass.
Repeated-measures analysis of variance was performed using a MIXED (linear mixed-effects) model. Only patients with ≥ 2 study visits were included in these analyses. Base models were adjusted for age, sex, diabetes status, dialysis vintage, history of cardio-vascular diseases and fat mass. F tests were used to assess the significance of the fixed effects, and P less than 0.05 was considered significant. To evaluate whether leptin influenced the trends in the various dependent variables, we included in each base model terms for individual "leptin-by-time" interactions.
Survival analyses were performed using Kaplan-Meier survival curves and the Cox proportional hazards model. The univariate and multivariate Cox regression results are presented as hazard ratios (HR) with confidence intervals (CI). All statistical analyses were performed using SPSS software, version 16.0 (SPSS Inc, Chicago, IL).
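As an illustration of the modeling strategy described above, an equivalent linear mixed model with a leptin-by-time interaction can be fitted in Python with statsmodels. This is only a sketch, not the authors' SPSS code; the file name hd_long.csv and the column names (patient_id, months, bmi, and so on) are hypothetical.

```python
# Sketch of the repeated-measures analysis described above (not the authors' SPSS code).
# Assumes a long-format table with one row per patient per visit; column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hd_long.csv")
# expected columns: patient_id, months, leptin, bmi, age, sex, diabetes, vintage, cv_history, fat_mass

# Random intercept per patient; the leptin:months term is the "leptin-by-time" interaction.
model = smf.mixedlm(
    "bmi ~ leptin * months + age + C(sex) + C(diabetes) + vintage + C(cv_history) + fat_mass",
    data=df,
    groups=df["patient_id"],
)
result = model.fit()
print(result.summary())  # inspect the leptin:months coefficient and its p-value
```

Each nutritional variable (BMI, TSF, MAMC, FMI, FFMI, PA, and so on) would be modeled in turn as the dependent variable, exactly as described in the Results.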
[ "Background", "Patients", "Dietary intake", "Anthropometric measurements", "Body composition analysis", "Laboratory evaluation", "Statistical analysis", "Results", "Cross-sectional associations", "Longitudinal associationas", "Survival analysis", "Discussion", "Conclusions" ]
[ "In recent years, the number of patients with end-stage renal disease (ESRD) has been increasing worldwide [1]. Depending in part upon the method used to evaluate nutritional status and the population studied, from 40 to 70 percent of patients with ESRD are malnourished [2,3] resulting in poor clinical outcomes [4]. Among the mechanisms responsible for malnutrition, leptin was believed to influence nutritional markers in patients with ESRD [5]. Leptin is a 16-kDa protein identified as the product of the obese gene; it is exclusively produced in adipocytes, and regulates food intake and energy expenditure in animal models [6]. Leptin decreases food intake by decreasing NPY (neuropeptide Y - one of the most potent stimulators of food intake) mRNA [7] and increasing alpha-MSH (alpha-melanocyte-stimulating hormone - an inhibitor of food intake) [8]. Besides linking adiposity and central nervous circuits to reduced appetite and enhanced energy expenditure in the general population [9], leptin has been shown to increase overall sympathetic nerve activity [10], facilitate glucose utilization and improve insulin sensitivity [11]. Furthermore, the prospective West of Scotland Coronary Prevention Study (WOSCOPS) reported that elevated leptin increases the relative risk of cardio-vascular disease in the general population independently of fat mass [12].\nIn general, serum leptin levels are significantly elevated in patients with renal failure, particularly when compared to age, gender and body mass index (BMI)-matched controls [13,14]. However, the role of hyperleptinemia in ESRD patients is somewhat unconventional. In contrast with its anorexogenic effects recognized in the general population [9] and even in experimental models of uremia (in subtotal nephrectomized and leptin receptor-deficient [db/db] mice) [15], leptin has not been reported to affect perceived appetite and nutrient intake in dialysis patients [16,17]. Although in some observational studies, increased serum leptin concentrations were observed in ESRD patients in parallel with loss of lean body mass [18,19] or with hypoalbuminemia and low protein intake [20], some others failed to find any correlation between hyperleptinemia and weight change [9] or lean mass [21] in this population. Moreover, several clinical studies suggested that leptin is a negative acute phase protein [22] and can serve as a marker of adequate nutritional status, rather than an appetite-reducing uremic toxin in hemodialysis patients [23-25]. Finally, the relationship between elevated serum leptin levels and clinical outcomes in ESRD has not been fully defined. In one small prospective cohort of hemodialysis patients, lower baseline serum leptin levels predicted mortality [26], but neither changes in leptin over time were measured, nor were leptin levels normalized to body fat mass in this study.\nThus, the influence of serum leptin levels on nutritional status and survival in chronic hemodialysis patients remained to be elucidated. In view of leptin's physiological role, information on effects of prolonged hyperleptinemia (independent of fat mass) on nutritional status of chronic hemodialysis patients, which may also impact on their survival, would be of interest. 
The aim of the present prospective longitudinal study was therefore to study longitudinal changes in serum leptin levels and to relate them to the changes in nutritional markers and survival in chronic hemodialysis patients.", "This prospective observational study was approved by the Ethics Committee of Assaf Harofeh Medical Center (Zerifin, Affiliated to the Sackler Faculty of Medicine Tel Aviv University, Israel). Informed consent was obtained before any trial-related activities. Patients were eligible for entry when they had been on HD therapy for at least 3 months and were 18 years or older, with no clinically active cardio-vascular or infectious diseases on entry. We excluded patients with edema, pleural effusion or ascites at their initial assessment, as well as patients with malignant disease, liver cirrhosis, neuro-muscular diseases, amputations or any deformities of the body. Exclusion criteria at the entry of the study also included co-morbidity (auto-immune disease and/or acute infections) and/or medication (prednisone) that might interfere with plasma leptin concentrations. A flow chart of the study is presented in Figure 1. In total, 101 patients (64 men and 37 women) with a mean age of 64.6 ± 11.5 years, receiving maintenance hemodialysis treatment at our outpatient HD clinic, were included in the study. Of the patients studied, 52 were diabetic (all diabetic patients had type 2 diabetes). Study measurements were performed at baseline and at 6, 12, 18 and 24 months from enrollment. After the longitudinal measurements ended, we continued clinical observation on our cohort during 2 additional years. Thus, in total, the study period extended 35 ± 17 months. During this period, 33 patients (32.7%) died (the main causes of death were cardio-vascular [12 of 33 patients; 36.4%] and sepsis [12 of 33 patients; 36.4%]), 13 patients (12.9%) underwent kidney transplantation, 3 patients (3.0%) changed dialysis modality, and 10 patients (10.0%) transferred to other hemodialysis units. All patients underwent regular dialysis via their vascular access (81.2% of patients had arterio-venous fistula) 4-5 h three times per week at a blood flow rate of 250-300 ml/min. Bicarbonate dialysate (30 mEq/L) at a dialysis solution flow rate of 500 ml/min was used in all cases. All dialysis was performed with biocompatible dialyzer membrane with a surface area of 1.0-1.8 m2. The efficiency of the dialysis was assessed based on the delivered dose of dialysis (Kt/V urea) using a single-pool urea kinetic model (mean Kt/V was1.31 ± 0.23 in our population).\nFlow diagram of the study.\nInformation on vascular disease (cerebral vascular, peripheral vascular and heart disease) was obtained from a detailed medical history.\nMost patients were required antihypertensive medications as well as other drugs commonly used in ESRD, such as phosphate and potassium binders, diuretics, and supplements of vitamins B, C, and D.", "A continuous 3-day dietary history (which included a dialysis day, a weekend day and a non-dialysis day) was recorded on a self-completed food diary. 
Then, dietary energy and protein intake were calculated and normalized for adjusted body weight (ABW) by the following formula [27]:\nSBW - standard body weight was determined by the National Healthand Nutrition Examination Survey II (NHANES II) population medians for age, sex, frame size, and stature [27].\nDietary protein intake was also estimated by the protein catabolic rate (PCR) calculation from the patient's urea generation rate by urea kinetics modeling [28]. Single-pool model urea kinetics was used to estimate the nPCR.", "BMI, triceps skinfold thickness (TSF), mid arm circumference (MAC) and calculated mid arm muscle circumference (MAMC) were measured as anthropometric variables. The BMI was calculated as dry weight in kilograms divided by the square of height in meters. TSF was measured with a conventional skinfold caliper using standard techniques. Mid arm circumference was measured with a plastic measuring tape. MAMC was estimated as follows:", "Body composition was determined by body impedance analysis (B.I.A. Nutriguard- M, Data-Input, Frankfurt, Germany). We used gel-based electrodes specifically developed for BIA measurements - Bianostic AT (Data-Input GmbH). On the day of blood collection, patients underwent BIA measurement at approximately 30 minutes postdialysis. BIA electrodes were placed on the same body side used for anthropometric measurements. The multi-frequency technique was used. FFM was calculated by using the approach of Kyle et al. validated by dual-energy x-ray absorptiometry on 343 healthy adult subjects [29]:\nFat mass and fat free mass were standardized by squared height (m2), and expressed in kg/m2 as fat mass index (FMI) and fat free mass index (FFMI), respectively.\nPhase angle (PA) describes the relationship between the two vector components of impedance [reactance and resistance] of the human body to an alternating electric current. PA has been shown to provide a BIA prognostic index of morbidity and mortality [30].", "Blood samples were taken in a non-fasting state before a midweek hemodialysis session. CBC, creatinine, urea, albumin, transferrin and total cholesterol were measured by routine laboratory methods. IL-6 and leptin were measured in plasma samples using commercially available enzyme-linked immunosorbent assay (ELISA) kits (R&D System, Minneapolis, MN, USA) according to the manufacturer's protocol. The mean minimal detectable level for IL-6 was 0.7 pg/ml, and 7.8 pg/ml for leptin. Intra-assay and inter-assay coefficients of variation for IL-6 were 4.2% and 6.4% respectively, and for leptin - 3.3% and 5.4% respectively.", "Data are expressed as mean ± standard deviation (SD), median and interquartile range (Q1 to Q3) for variables that did not follow a normal distribution, or frequencies, as noted.\nTo compare the means of continuous variables measured between sex-specific tertiles of leptin, one way ANOVA and analysis of covariance with adjustments (ANCOVA) were used. Categorical data are presented as percentages and were compared among groups by χ2 tests. Correlations between leptin and clinical and laboratory parameters were assessed using Spearman rank order correlation coefficients (because of the skewed distribution of leptin levels). Multivariate regression analysis was performed to obtain partial (adjusted) correlations (R2). 
Results
Cross-sectional associations
The 101 prevalent HD patients participating in this study included 36.6% women; over half of the participants (51.5%) had diabetes mellitus (DM), and nearly the same proportion (46.9%) had a history of cardiovascular disease, including myocardial infarction, coronary artery procedures such as angioplasty or surgery, previous cerebral-vascular accident, or peripheral vascular disease. Average age was 64.6 ± 11.5 years and median maintenance HD vintage was 31 months (Q1 to Q3, 15.5-51.5 months) at study initiation. At the start of the cohort, serum leptin averaged (mean ± SD) 37.3 ± 39.4 ng/ml (median, 16.5 ng/ml; Q1 to Q3, 5.60-71.4 ng/ml).
To examine the relationship between different levels of serum leptin and relevant demographic, clinical and laboratory measures, serum leptin levels were divided into three equal gender-specific tertiles (data not shown). The proportion of diabetes and the age distribution were similar across the three groups. No statistically significant differences were evident between groups in the use of medications that may affect inflammatory markers, such as statins, aspirin, or angiotensin-converting enzyme inhibitors (data not shown). As expected, females tended to have higher leptin levels than males (p = 0.0001). Body composition parameters measured by anthropometry and bioelectrical impedance analysis (BIA), including phase angle (PA), were incrementally greater across increasing leptin tertiles even after adjustment for baseline demographic and clinical parameters (age, DM status, dialysis vintage and past cardio-vascular disease). However, these associations were attenuated after inclusion of fat mass in the multivariate analyses. No significant differences in any of the biochemical markers of nutrition, normalized protein nitrogen appearance (nPNA), or actual energy or protein intake normalized to adjusted body weight were found between the HD patient groups according to leptin tertiles.
At baseline, anthropometric measurements (body mass index [BMI], triceps skinfold thickness [TSF], mid arm circumference [MAC] and calculated mid arm muscle circumference [MAMC]) and BIA-derived parameters of body composition (fat mass index [FMI] and fat free mass index [FFMI]) correlated positively with serum leptin levels, in both unadjusted and adjusted (for age, DM status, dialysis vintage, and previous cardio-vascular disease) models. Thus, an increased level of leptin was associated with better nutritional status (Table 1). Associations between serum leptin levels and the two laboratory nutritional markers (albumin and cholesterol) and BIA-derived PA were statistically significant, but these associations were attenuated after adjustment for demographic and clinical parameters including fat mass. No significant correlations were observed between serum leptin and nPNA or dietary energy (DEI) and protein intake (DPI) normalized to adjusted body weight in either sex after adjustment for demographic and clinical parameters including fat mass.
Table 1. Unadjusted and multivariate-adjusted Spearman's correlation coefficients of baseline serum leptin and nutritional clinical and laboratory parameters in the study population at baseline. Correlation coefficient values ≥ 0.25 appear in bold. a All case-mix-adjusted models include age, gender, DM status, dialysis vintage, past CV disease and fat mass. Abbreviations: nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, calculated mid-arm muscle circumference; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular.
Longitudinal associations
Linear mixed models were used to study the effects of longitudinal leptin changes on changes in nutritional parameters (slopes) over 24 months, with fixed parameters including age, gender, diabetes status, dialysis vintage, previous cardio-vascular events, and fat mass (Table 2). No significant changes in dietary intake or in any of the biochemical nutritional markers over time were found. In contrast, a significant reduction in body composition parameters over time was observed, both for those measured by anthropometry (BMI, TSF and MAMC) and for those measured by BIA (FMI, FFMI, PA). However, leptin did not modulate the changes in outcome variables over time in our cohort (leptin-by-time interactions were not significant).
Table 2. Regression coefficients with 95% confidence intervals for the effect of longitudinal leptin changes on changes (slopes) in nutritional parameters during 24 months, based on a mixed-effects model with linear trends for variable and fixed parameters. NOTE: All nutritional variables presented in the Table were modeled separately as dependent variables, whereas independent variables included fixed factors (age, gender, diabetes status, dialysis vintage, past cardio-vascular disease and fat mass) and leptin as a continuous variable. Terms for individual "leptin-by-time" interactions were included in each model. The model takes into account every measurement of leptin and of the presented nutritional variables at each time point for each patient separately. The presented regression coefficients describe changes in the outcome variables with time (24 months). a P for trend for leptin-by-time interactions. b FMI and BMI were adjusted only for age, gender, DM status, history of cardio-vascular disease and dialysis vintage. Abbreviations: DEI, daily energy intake; DPI, daily protein intake; nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; EPO, erythropoietin; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, calculated mid-arm muscle circumference; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular.
A mixed model including the fixed factors sex, age, DM status, dialysis vintage, history of CV disease and several nutritional parameters was used to predict variability in leptin levels during the study (Table 3). We observed a significant change in leptin levels with time (linear estimate ± SE: -2.5010 ± 0.57 ng/ml per 2 years; p < 0.001). According to this model, longitudinal changes in FMI had an evident effect on serum leptin variability over time.
Table 3. Effects of longitudinal changes of nutritional parameters on changes (slopes) of leptin during 24 months, based on a mixed-effects model with linear trends for variable and fixed parameters. NOTE: The Table represents a simple mixed model with leptin as the dependent variable, and the presented nutritional parameters and fixed factors (age, gender, DM status, dialysis vintage and history of CV disease) as independent variables. The model takes into account every measurement of leptin and of the presented nutritional variables at each time point for each patient separately. Abbreviations: DEI, daily energy intake; IL-6, interleukin-6; EPO, erythropoietin; FMI, fat mass index; DM, diabetes mellitus; CV, cardio-vascular.
Of interest, a more rapid decline in leptin levels over time was observed in the highest leptin tertile, in both the unadjusted (p = 0.007 for the leptin-by-time interaction) and the fully adjusted (p = 0.047 for the leptin-by-time interaction) models (Figure 2).
Figure 2. Leptin levels decline over time in the study population. Changes in estimated marginal means of serum leptin levels at baseline (month 0), month 6, month 12, month 18 and month 24 in the study population groups according to sex-specific tertiles* of baseline serum leptin: 1. Unadjusted model: P = 0.007 for leptin-by-time interaction; 2. Fully adjusted model (for age, gender, DM status, dialysis vintage, previous cardio-vascular morbidity and fat mass): P = 0.047 for leptin-by-time interaction. *Median (interquartile range) leptin (ng/ml) tertiles for men (n = 64) were 1.70 (1.2-2.4), 10.2 (7.6-13.8) and 38.3 (26.2-81.3), and for women (n = 37) were 15.8 (9.3-18.3), 68.0 (41.4-85.9) and 112.6 (109.2-116.0).
Survival analysis
During an average follow-up of 35 months (median, 40 months; Q1 to Q3, 17-52), 33 patients died: 11 (32%) in the low tertile group, 12 (35%) in the middle tertile group and 10 (30%) in the high tertile group, with a median time to death of 25 months (Q1 to Q3, 14-40 months). Cumulative incidences of survival were unaffected by baseline serum leptin levels (Figure 3). To further assess the possible association of baseline serum leptin level with cardio-vascular morbidity and mortality, cumulative hazards of all-cause death and of a first composite cardio-vascular event were calculated from Cox regression (after adjustment for baseline demographic parameters and fat mass). A first cardio-vascular event was defined as myocardial infarction (MI), requiring coronary artery procedures such as angioplasty or surgery, cerebral-vascular accident (CVA), or peripheral vascular disease (PVD) requiring angioplasty, bypass or amputation, diagnosed after the participant entered the study. No statistically significant differences in hazards between the leptin tertile groups were observed. Additionally, Cox regression analysis showed no differences in survival probabilities for all-cause mortality per 10 ng/ml increase in baseline serum leptin (data not shown).
Figure 3. Kaplan-Meier curves of surviving patients comparing subgroups of patients stratified by baseline serum leptin tertiles.
Thus, although chronic HD patients with higher baseline leptin had better nutritional status, as evidenced by their significantly more favorable body composition parameters at the start of the study, this disparity did not appear to widen over the 24 months of follow-up; leptin levels did not predict changes in body composition parameters over time. Moreover, no survival advantage of the high leptin group was found in our observational cohort.
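For readers who wish to reproduce this type of analysis, a minimal sketch of tertile-stratified Kaplan-Meier curves and an adjusted Cox model is shown below. The authors used SPSS; the Python code, file name hd_baseline.csv and column names (follow_up_months, died, leptin_tertile, and so on) are illustrative assumptions, not the original analysis.

```python
# Illustrative sketch of the survival analysis described above (not the authors' SPSS code).
# One row per patient with baseline covariates; binary covariates (sex, diabetes, cv_history) coded 0/1.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("hd_baseline.csv")
# expected columns: follow_up_months, died, leptin, leptin_tertile, age, sex, diabetes, vintage, cv_history, fat_mass

# Kaplan-Meier curves by baseline leptin tertile (compare with Figure 3)
kmf = KaplanMeierFitter()
for tertile, grp in df.groupby("leptin_tertile"):
    kmf.fit(grp["follow_up_months"], event_observed=grp["died"], label=f"tertile {tertile}")
    kmf.plot_survival_function()

# Cox model: all-cause mortality per 10 ng/ml increase in baseline leptin,
# adjusted for baseline demographics and fat mass
df["leptin_per10"] = df["leptin"] / 10.0
cph = CoxPHFitter()
cph.fit(
    df[["follow_up_months", "died", "leptin_per10", "age", "sex", "diabetes", "vintage", "cv_history", "fat_mass"]],
    duration_col="follow_up_months",
    event_col="died",
)
cph.print_summary()  # hazard ratios (HR) with confidence intervals
```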
Discussion
In the present study, we wished to determine whether changes in serum leptin levels correlate with nutritional status over time in a cohort of prevalent hemodialysis patients. We observed that, despite the strong and significant cross-sectional associations between serum leptin and body composition parameters at baseline, leptin levels did not reflect changes in nutritional status in our population.
Our study confirms several previous cross-sectional studies in which elevated serum leptin levels were demonstrated to be positively associated with several nutritional markers (laboratory or body composition parameters, mainly BMI and fat mass) in chronic HD patients [14,20,21,23-25]. HD patients treated with megestrol acetate [24], growth hormone [31] or high-calorie supplementation [32] demonstrated progressively increasing serum leptin in parallel with an improved nutritional state.
However, not all studies are consistent with positive associations between elevated leptin levels and nutritional parameters in ESRD patients. In a study by Young et al [5], an inverse correlation was observed between leptin/fat mass and dietary intake, as well as significantly lower lean tissue mass, in patients with chronic renal failure (n = 23) and in ESRD patients receiving PD (n = 24) and HD (n = 22) therapy. In a cross-sectional study of 28 dialysis patients and 41 healthy control subjects, Johansen et al [20] showed that leptin levels were negatively correlated with albumin and PCR, suggesting a possible negative role of leptin in nutrition. Based on longitudinal observation, Stenvinkel et al [18] demonstrated that increases in serum leptin levels in 36 peritoneal dialysis patients were associated with a decrease in lean body mass. The discrepancies between these studies may be related to the populations recruited, to their small sample sizes, and to their design; specifically, serum leptin levels were measured among patients with varying degrees of CRF, some of whom were undergoing maintenance hemodialysis or peritoneal dialysis, and in healthy controls [5,18,20]; many of these studies had a low number of participants [5,18,20,23-25]; and the majority were cross-sectional [5,20,23,25].
Leptin, a regulator of eating behavior, has been shown to be a major determinant of anorexia in uremic animals via signaling through the hypothalamic melanocortin system [15]. Subsequently, a recent experimental study by Cheung et al [33] showed that intraperitoneal administration of the melanocortin-4 receptor (MC4-R) antagonist NBI-12i stimulates food intake and weight gain in uremic mice. However, the effects of leptin in uremic patients are not unequivocal.
Relevant to our findings, Bossola et al [17] demonstrated that serum leptin levels were not different in anorexic and non-anorexic hemodialysis patients. Several other studies also failed to demonstrate any role of hyperleptinemia in reducing dietary intake in ESRD patients receiving maintenance HD treatment [5,16,34]. Since hyperleptinemia is common in HD patients, mainly due to impaired renal clearance [20], it seemed reasonable to assume that selective leptin resistance (such as occurs in obese humans [35]) is the mechanism responsible for attenuation of the negative effect of leptin on appetite, and accordingly on dietary intake, under conditions of continuous stimulation. The molecular mechanisms of leptin resistance, with a focus on the contribution of the intracellular tyrosine residues in the hypothalamic receptor of leptin (LEPRb) and their interaction with the negative feedback regulators, suppressor of cytokine signaling 1 and 3 (SOCS1 and SOCS3), are described in a recent study by Knobelspies et al [36].
Thus, although the cross-group comparisons confirm that hyperleptinemia is associated with better body composition parameters, no unequivocal associations between serum leptin levels and biochemical data or dietary intake are evident in HD patients according to the available literature. Our study confirms these data.
Another finding of the present study was that 24 months of HD was associated with a significant reduction of plasma leptin levels, with a more rapid decline observed in patients with higher baseline leptin levels. Only limited information is available on the relationship between longitudinal serum leptin measurements and nutritional characteristics in chronic HD patients [18,32]. To the best of our knowledge, the present study is the first long-term longitudinal study to examine the relationship between serum leptin level and nutritional status in ESRD patients receiving maintenance HD therapy. Several mechanisms may be involved in the decrease of serum leptin levels over time in hemodialysis patients. These include the increasing use of high-flux and super-flux hemodialyzers [37] and/or of alternative dialysis strategies such as hemodiafiltration [38]; recombinant human erythropoietin treatment (generally followed by a significant decline of leptinaemia in hemodialysed patients [39]); chronic inflammation (under the assumption that leptin, like albumin or transferrin, is a negative acute phase protein in chronic HD patients [22], although other studies [40] do not support this mechanism); metabolic acidosis, which can reduce the release of leptin from adipose tissue [41]; and, finally, a decrease in fat mass leading to a decrease in the rate of leptin biosynthesis [14,18] (based on an insulin- or nutrient-sensing pathway regulating leptin gene expression in fat tissue [42]). Although patients in our facility were all treated using low-flux hemodialysis, the reduction of BMI, and accordingly of fat mass, over the 24 months of the study may explain this longitudinal decrease of serum leptin levels in our cohort.
Of interest is the observation that adverse changes in body composition parameters occur over time, supporting the hypothesis that end-stage renal disease is associated with wasting, as proposed in some [43,44] but not all [45] of the previous studies.
The results of our study were consistent with those reported by Johansen et al [43], who did not find significant changes in any of the biochemical markers with time, but did find a significant reduction in phase angle over a 1-year follow-up in prevalent HD patients. In parallel, together with reduced body composition parameters over time, we observed a longitudinal decline in serum leptin levels in our cohort. However, the lack of an association between serum leptin levels and future changes in body composition parameters (as exhibited by the non-significant leptin-by-time interactions) suggests that leptin levels follow the body composition (mainly fat mass) trajectory or its accompanying metabolic changes, rather than governing it.\nFinally, serum leptin level did not appear to be a useful predictor of all-cause mortality or cardio-vascular morbidity in our study population during up to 4 years of observation. In this respect, our findings support the results of an earlier study by Tsai et al [46]. Although Scholze et al [26] showed a paradoxical reverse association between leptin level and clinical outcome in 71 chronic HD patients, our patients did not show such an association. The difference between our results and theirs might have been caused by differences in the cohorts. First, the population presented by Scholze et al [26] had a lower BMI at baseline than our patients, and no body composition parameters were measured. Further, the prevalence of diabetes mellitus was much greater in patients in the lower leptin group than in patients of the higher leptin group in the cohort of Scholze et al [26]. Indeed, HD patients with diabetes have a poor prognosis [4], which might be the cause of the lower survival rate in the lower leptin group in the above study [26]. While low levels of leptin may reflect a state of malnutrition in HD patients, proatherogenic effects of hyperleptinemia were linked to an adverse cardiovascular profile in the general population [10,12]. We speculate that the lack of long-term benefits of hyperleptinemia in terms of survival of chronic HD patients could reflect the modulating influence of the proatherogenic effects of high serum leptin levels. Finally, no relationships between serum leptin levels and previous cardio-vascular events were found by Diez et al [47] in a retrospective study of 82 dialysis patients. Zoccali et al [40] also did not show any difference in cardiovascular event-free survival in a cohort of 192 hemodialysis patients after their stratification on the basis of the median value of plasma leptin.\nSome limitations of the present study should be considered. First, this study is based on a relatively small sample size, limiting the detection of more subtle changes over time. Second, this study used only an observational approach, without manipulation of exposure factors, and therefore no definitive cause-and-effect relationship can be derived for any of the risk factors analyzed. Dietary intake assessed by 3-day food records is another limitation of the study, as results can be subjective and incomplete, and can vary considerably from day to day as a result of dialysis treatment sessions and associated disturbances in food intake. Finally, significant circadian fluctuations of plasma leptin (regardless of the underlying degree of glucose tolerance, fasting, or diet) can certainly obscure some small but significant changes in plasma leptin [48] and may be a source of potential bias for longitudinal observation. 
Nevertheless, the present study has the advantage of providing long-term longitudinal data on the relationship between serum concentrations of leptin and nutritional parameters in prevalent HD patients.", "In conclusion, we found positive linear associations between serum leptin levels and body composition, whereas no significant relationships with biochemical data or dietary intake were evident in our cohort at baseline. Analyzing the longitudinal data, we concluded that any influence of serum leptin levels on nutritional status is limited to mirroring the attained BMI and fat mass trajectory. Further, the baseline nutritional advantage of the patients with hyperleptinemia did not translate into long-term benefits in terms of survival. Taken together, our results suggest that in our cohort, leptin levels reflect fat mass depots, rather than independently contributing to uremic anorexia or modifying nutritional status and/or survival in chronic HD patients. These results are of particular importance if the use of subcutaneous injections of recombinant methionyl leptin [49] or leptin removal by super-flux polysulfone dialysers [37] is contemplated as a therapeutic intervention." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients", "Dietary intake", "Anthropometric measurements", "Body composition analysis", "Laboratory evaluation", "Statistical analysis", "Results", "Cross-sectional associations", "Longitudinal associationas", "Survival analysis", "Discussion", "Conclusions" ]
[ "In recent years, the number of patients with end-stage renal disease (ESRD) has been increasing worldwide [1]. Depending in part upon the method used to evaluate nutritional status and the population studied, from 40 to 70 percent of patients with ESRD are malnourished [2,3] resulting in poor clinical outcomes [4]. Among the mechanisms responsible for malnutrition, leptin was believed to influence nutritional markers in patients with ESRD [5]. Leptin is a 16-kDa protein identified as the product of the obese gene; it is exclusively produced in adipocytes, and regulates food intake and energy expenditure in animal models [6]. Leptin decreases food intake by decreasing NPY (neuropeptide Y - one of the most potent stimulators of food intake) mRNA [7] and increasing alpha-MSH (alpha-melanocyte-stimulating hormone - an inhibitor of food intake) [8]. Besides linking adiposity and central nervous circuits to reduced appetite and enhanced energy expenditure in the general population [9], leptin has been shown to increase overall sympathetic nerve activity [10], facilitate glucose utilization and improve insulin sensitivity [11]. Furthermore, the prospective West of Scotland Coronary Prevention Study (WOSCOPS) reported that elevated leptin increases the relative risk of cardio-vascular disease in the general population independently of fat mass [12].\nIn general, serum leptin levels are significantly elevated in patients with renal failure, particularly when compared to age, gender and body mass index (BMI)-matched controls [13,14]. However, the role of hyperleptinemia in ESRD patients is somewhat unconventional. In contrast with its anorexogenic effects recognized in the general population [9] and even in experimental models of uremia (in subtotal nephrectomized and leptin receptor-deficient [db/db] mice) [15], leptin has not been reported to affect perceived appetite and nutrient intake in dialysis patients [16,17]. Although in some observational studies, increased serum leptin concentrations were observed in ESRD patients in parallel with loss of lean body mass [18,19] or with hypoalbuminemia and low protein intake [20], some others failed to find any correlation between hyperleptinemia and weight change [9] or lean mass [21] in this population. Moreover, several clinical studies suggested that leptin is a negative acute phase protein [22] and can serve as a marker of adequate nutritional status, rather than an appetite-reducing uremic toxin in hemodialysis patients [23-25]. Finally, the relationship between elevated serum leptin levels and clinical outcomes in ESRD has not been fully defined. In one small prospective cohort of hemodialysis patients, lower baseline serum leptin levels predicted mortality [26], but neither changes in leptin over time were measured, nor were leptin levels normalized to body fat mass in this study.\nThus, the influence of serum leptin levels on nutritional status and survival in chronic hemodialysis patients remained to be elucidated. In view of leptin's physiological role, information on effects of prolonged hyperleptinemia (independent of fat mass) on nutritional status of chronic hemodialysis patients, which may also impact on their survival, would be of interest. 
The aim of the present prospective longitudinal study was therefore to study longitudinal changes in serum leptin levels and to relate them to the changes in nutritional markers and survival in chronic hemodialysis patients.", " Patients This prospective observational study was approved by the Ethics Committee of Assaf Harofeh Medical Center (Zerifin, Affiliated to the Sackler Faculty of Medicine Tel Aviv University, Israel). Informed consent was obtained before any trial-related activities. Patients were eligible for entry when they had been on HD therapy for at least 3 months and were 18 years or older, with no clinically active cardio-vascular or infectious diseases on entry. We excluded patients with edema, pleural effusion or ascites at their initial assessment, as well as patients with malignant disease, liver cirrhosis, neuro-muscular diseases, amputations or any deformities of the body. Exclusion criteria at the entry of the study also included co-morbidity (auto-immune disease and/or acute infections) and/or medication (prednisone) that might interfere with plasma leptin concentrations. A flow chart of the study is presented in Figure 1. In total, 101 patients (64 men and 37 women) with a mean age of 64.6 ± 11.5 years, receiving maintenance hemodialysis treatment at our outpatient HD clinic, were included in the study. Of the patients studied, 52 were diabetic (all diabetic patients had type 2 diabetes). Study measurements were performed at baseline and at 6, 12, 18 and 24 months from enrollment. After the longitudinal measurements ended, we continued clinical observation on our cohort during 2 additional years. Thus, in total, the study period extended 35 ± 17 months. During this period, 33 patients (32.7%) died (the main causes of death were cardio-vascular [12 of 33 patients; 36.4%] and sepsis [12 of 33 patients; 36.4%]), 13 patients (12.9%) underwent kidney transplantation, 3 patients (3.0%) changed dialysis modality, and 10 patients (10.0%) transferred to other hemodialysis units. All patients underwent regular dialysis via their vascular access (81.2% of patients had arterio-venous fistula) 4-5 h three times per week at a blood flow rate of 250-300 ml/min. Bicarbonate dialysate (30 mEq/L) at a dialysis solution flow rate of 500 ml/min was used in all cases. All dialysis was performed with biocompatible dialyzer membrane with a surface area of 1.0-1.8 m2. The efficiency of the dialysis was assessed based on the delivered dose of dialysis (Kt/V urea) using a single-pool urea kinetic model (mean Kt/V was1.31 ± 0.23 in our population).\nFlow diagram of the study.\nInformation on vascular disease (cerebral vascular, peripheral vascular and heart disease) was obtained from a detailed medical history.\nMost patients were required antihypertensive medications as well as other drugs commonly used in ESRD, such as phosphate and potassium binders, diuretics, and supplements of vitamins B, C, and D.\nThis prospective observational study was approved by the Ethics Committee of Assaf Harofeh Medical Center (Zerifin, Affiliated to the Sackler Faculty of Medicine Tel Aviv University, Israel). Informed consent was obtained before any trial-related activities. Patients were eligible for entry when they had been on HD therapy for at least 3 months and were 18 years or older, with no clinically active cardio-vascular or infectious diseases on entry. 
We excluded patients with edema, pleural effusion or ascites at their initial assessment, as well as patients with malignant disease, liver cirrhosis, neuro-muscular diseases, amputations or any deformities of the body. Exclusion criteria at the entry of the study also included co-morbidity (auto-immune disease and/or acute infections) and/or medication (prednisone) that might interfere with plasma leptin concentrations. A flow chart of the study is presented in Figure 1. In total, 101 patients (64 men and 37 women) with a mean age of 64.6 ± 11.5 years, receiving maintenance hemodialysis treatment at our outpatient HD clinic, were included in the study. Of the patients studied, 52 were diabetic (all diabetic patients had type 2 diabetes). Study measurements were performed at baseline and at 6, 12, 18 and 24 months from enrollment. After the longitudinal measurements ended, we continued clinical observation of our cohort for 2 additional years. Thus, in total, the study period extended over 35 ± 17 months. During this period, 33 patients (32.7%) died (the main causes of death were cardio-vascular [12 of 33 patients; 36.4%] and sepsis [12 of 33 patients; 36.4%]), 13 patients (12.9%) underwent kidney transplantation, 3 patients (3.0%) changed dialysis modality, and 10 patients (10.0%) transferred to other hemodialysis units. All patients underwent regular dialysis via their vascular access (81.2% of patients had an arterio-venous fistula) 4-5 h three times per week at a blood flow rate of 250-300 ml/min. Bicarbonate dialysate (30 mEq/L) at a dialysis solution flow rate of 500 ml/min was used in all cases. All dialysis was performed with a biocompatible dialyzer membrane with a surface area of 1.0-1.8 m2. The efficiency of the dialysis was assessed based on the delivered dose of dialysis (Kt/V urea) using a single-pool urea kinetic model (mean Kt/V was 1.31 ± 0.23 in our population).\nFlow diagram of the study.\nInformation on vascular disease (cerebral vascular, peripheral vascular and heart disease) was obtained from a detailed medical history.\nMost patients required antihypertensive medications as well as other drugs commonly used in ESRD, such as phosphate and potassium binders, diuretics, and supplements of vitamins B, C, and D.\n Dietary intake A continuous 3-day dietary history (which included a dialysis day, a weekend day and a non-dialysis day) was recorded on a self-completed food diary. Then, dietary energy and protein intake were calculated and normalized for adjusted body weight (ABW) by the following formula [27]:\nSBW - standard body weight was determined by the National Health and Nutrition Examination Survey II (NHANES II) population medians for age, sex, frame size, and stature [27].\nDietary protein intake was also estimated by the protein catabolic rate (PCR) calculation from the patient's urea generation rate by urea kinetics modeling [28]. Single-pool model urea kinetics was used to estimate the nPCR.\nA continuous 3-day dietary history (which included a dialysis day, a weekend day and a non-dialysis day) was recorded on a self-completed food diary. 
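Two quantities referenced above are mentioned without their equations in this extract: the adjusted body weight (ABW) formula of reference [27] and the single-pool Kt/V. Purely as hedged illustrations (assumptions, not necessarily the exact expressions used by the authors), a commonly used ABW adjustment and the widely used Daugirdas second-generation single-pool Kt/V formula are:

```latex
\mathrm{ABW} = \mathrm{BW}_{\text{actual}} + 0.25\,(\mathrm{SBW} - \mathrm{BW}_{\text{actual}})
```

```latex
sp\,Kt/V = -\ln(R - 0.008\,t) + (4 - 3.5\,R)\,\frac{UF}{W}
```

where SBW is the NHANES II standard body weight described above, R is the post-/pre-dialysis urea ratio, t the session length in hours, UF the ultrafiltration volume in liters, and W the post-dialysis weight in kilograms.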
Then, dietary energy and protein intake were calculated and normalized for adjusted body weight (ABW) by the following formula [27]:\nSBW - standard body weight was determined by the National Healthand Nutrition Examination Survey II (NHANES II) population medians for age, sex, frame size, and stature [27].\nDietary protein intake was also estimated by the protein catabolic rate (PCR) calculation from the patient's urea generation rate by urea kinetics modeling [28]. Single-pool model urea kinetics was used to estimate the nPCR.\n Anthropometric measurements BMI, triceps skinfold thickness (TSF), mid arm circumference (MAC) and calculated mid arm muscle circumference (MAMC) were measured as anthropometric variables. The BMI was calculated as dry weight in kilograms divided by the square of height in meters. TSF was measured with a conventional skinfold caliper using standard techniques. Mid arm circumference was measured with a plastic measuring tape. MAMC was estimated as follows:\nBMI, triceps skinfold thickness (TSF), mid arm circumference (MAC) and calculated mid arm muscle circumference (MAMC) were measured as anthropometric variables. The BMI was calculated as dry weight in kilograms divided by the square of height in meters. TSF was measured with a conventional skinfold caliper using standard techniques. Mid arm circumference was measured with a plastic measuring tape. MAMC was estimated as follows:\n Body composition analysis Body composition was determined by body impedance analysis (B.I.A. Nutriguard- M, Data-Input, Frankfurt, Germany). We used gel-based electrodes specifically developed for BIA measurements - Bianostic AT (Data-Input GmbH). On the day of blood collection, patients underwent BIA measurement at approximately 30 minutes postdialysis. BIA electrodes were placed on the same body side used for anthropometric measurements. The multi-frequency technique was used. FFM was calculated by using the approach of Kyle et al. validated by dual-energy x-ray absorptiometry on 343 healthy adult subjects [29]:\nFat mass and fat free mass were standardized by squared height (m2), and expressed in kg/m2 as fat mass index (FMI) and fat free mass index (FFMI), respectively.\nPhase angle (PA) describes the relationship between the two vector components of impedance [reactance and resistance] of the human body to an alternating electric current. PA has been shown to provide a BIA prognostic index of morbidity and mortality [30].\nBody composition was determined by body impedance analysis (B.I.A. Nutriguard- M, Data-Input, Frankfurt, Germany). We used gel-based electrodes specifically developed for BIA measurements - Bianostic AT (Data-Input GmbH). On the day of blood collection, patients underwent BIA measurement at approximately 30 minutes postdialysis. BIA electrodes were placed on the same body side used for anthropometric measurements. The multi-frequency technique was used. FFM was calculated by using the approach of Kyle et al. validated by dual-energy x-ray absorptiometry on 343 healthy adult subjects [29]:\nFat mass and fat free mass were standardized by squared height (m2), and expressed in kg/m2 as fat mass index (FMI) and fat free mass index (FFMI), respectively.\nPhase angle (PA) describes the relationship between the two vector components of impedance [reactance and resistance] of the human body to an alternating electric current. 
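The anthropometric and BIA-derived indices just described can be written compactly. The BMI, FMI, FFMI and phase angle expressions follow directly from the definitions in the text; the MAMC expression is the standard anthropometric formula and is included only as an assumption, since the original equation is not reproduced in this extract:

```latex
\mathrm{BMI} = \frac{\text{dry weight (kg)}}{\text{height (m)}^{2}}, \qquad \mathrm{MAMC} = \mathrm{MAC} - \pi \times \mathrm{TSF}
```

```latex
\mathrm{FMI} = \frac{\text{fat mass (kg)}}{\text{height (m)}^{2}}, \qquad \mathrm{FFMI} = \frac{\text{fat-free mass (kg)}}{\text{height (m)}^{2}}, \qquad \mathrm{PA} = \arctan\!\left(\frac{X_c}{R}\right)\cdot\frac{180^{\circ}}{\pi}
```

with MAC and TSF in centimeters, and X_c and R the reactance and resistance measured by BIA.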
PA has been shown to provide a BIA prognostic index of morbidity and mortality [30].\n Laboratory evaluation Blood samples were taken in a non-fasting state before a midweek hemodialysis session. CBC, creatinine, urea, albumin, transferrin and total cholesterol were measured by routine laboratory methods. IL-6 and leptin were measured in plasma samples using commercially available enzyme-linked immunosorbent assay (ELISA) kits (R&D System, Minneapolis, MN, USA) according to the manufacturer's protocol. The mean minimal detectable level for IL-6 was 0.7 pg/ml, and 7.8 pg/ml for leptin. Intra-assay and inter-assay coefficients of variation for IL-6 were 4.2% and 6.4% respectively, and for leptin - 3.3% and 5.4% respectively.\nBlood samples were taken in a non-fasting state before a midweek hemodialysis session. CBC, creatinine, urea, albumin, transferrin and total cholesterol were measured by routine laboratory methods. IL-6 and leptin were measured in plasma samples using commercially available enzyme-linked immunosorbent assay (ELISA) kits (R&D System, Minneapolis, MN, USA) according to the manufacturer's protocol. The mean minimal detectable level for IL-6 was 0.7 pg/ml, and 7.8 pg/ml for leptin. Intra-assay and inter-assay coefficients of variation for IL-6 were 4.2% and 6.4% respectively, and for leptin - 3.3% and 5.4% respectively.\n Statistical analysis Data are expressed as mean ± standard deviation (SD), median and interquartile range (Q1 to Q3) for variables that did not follow a normal distribution, or frequencies, as noted.\nTo compare the means of continuous variables measured between sex-specific tertiles of leptin, one way ANOVA and analysis of covariance with adjustments (ANCOVA) were used. Categorical data are presented as percentages and were compared among groups by χ2 tests. Correlations between leptin and clinical and laboratory parameters were assessed using Spearman rank order correlation coefficients (because of the skewed distribution of leptin levels). Multivariate regression analysis was performed to obtain partial (adjusted) correlations (R2). Case-mix-adjusted models were controlled for age, gender, history of CV disease, presence or absence of diabetes mellitus, dialysis vintage and fat mass.\nRepeated-measures analysis of variance was performed by using the MIXED model. Only patients with ≥ 2 study visits were included in the analyses. Base models were adjusted for age, sex, diabetes status, dialysis vintage, history of cardio-vascular diseases and fat mass. F Tests were used to assess the significance of the fixed effects, and P less than 0.05 was considered significant. To evaluate whether leptin influenced the trends in the various dependent variables, we included in each base model terms for individual \"leptin-by-time\" interactions.\nSurvival analyses were performed using the Kaplan-Meier survival curve and the Cox proportional hazard model. The univariate and multivariate Cox regression analyses are presented as (HR; CI).\nAll statistical analyses were performed using SPSS software, version 16.0 (SPSS Inc, Chicago, IL).\nData are expressed as mean ± standard deviation (SD), median and interquartile range (Q1 to Q3) for variables that did not follow a normal distribution, or frequencies, as noted.\nTo compare the means of continuous variables measured between sex-specific tertiles of leptin, one way ANOVA and analysis of covariance with adjustments (ANCOVA) were used. 
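As a purely illustrative aside (the paper's analyses were run in SPSS; the file names, data layout and column names below are assumptions, not the authors' code), the sex-specific leptin tertiles, the Spearman correlations and a mixed model with a leptin-by-time interaction described in this section could be sketched in Python as follows:

```python
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.formula.api as smf

# Hypothetical baseline table: one row per patient; column names are assumptions.
base = pd.read_csv("baseline.csv")  # sex, leptin, bmi, fat_mass_index, albumin, ...

# Sex-specific leptin tertiles (0 = low, 1 = middle, 2 = high), mirroring the stratification above.
base["leptin_tertile"] = base.groupby("sex")["leptin"].transform(
    lambda x: pd.qcut(x, 3, labels=False)
)

# Spearman rank correlations between baseline leptin and nutritional markers,
# chosen because of the skewed leptin distribution.
for marker in ["bmi", "fat_mass_index", "albumin"]:
    rho, p = spearmanr(base["leptin"], base[marker], nan_policy="omit")
    print(f"leptin vs {marker}: rho = {rho:.2f}, p = {p:.3f}")

# Hypothetical long-format table: one row per patient per visit (0, 6, 12, 18, 24 months).
long_df = pd.read_csv("longitudinal.csv")  # patient_id, months, bmi, leptin, age, sex, ...

# Random intercept per patient; the leptin:months term plays the role of the
# "leptin-by-time" interaction used to test whether leptin modifies the BMI slope.
fit = smf.mixedlm(
    "bmi ~ leptin * months + age + sex + diabetes + vintage + cv_history",
    data=long_df,
    groups=long_df["patient_id"],
).fit()
print(fit.summary())
```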
Categorical data are presented as percentages and were compared among groups by χ2 tests. Correlations between leptin and clinical and laboratory parameters were assessed using Spearman rank order correlation coefficients (because of the skewed distribution of leptin levels). Multivariate regression analysis was performed to obtain partial (adjusted) correlations (R2). Case-mix-adjusted models were controlled for age, gender, history of CV disease, presence or absence of diabetes mellitus, dialysis vintage and fat mass.\nRepeated-measures analysis of variance was performed by using the MIXED model. Only patients with ≥ 2 study visits were included in the analyses. Base models were adjusted for age, sex, diabetes status, dialysis vintage, history of cardio-vascular diseases and fat mass. F Tests were used to assess the significance of the fixed effects, and P less than 0.05 was considered significant. To evaluate whether leptin influenced the trends in the various dependent variables, we included in each base model terms for individual \"leptin-by-time\" interactions.\nSurvival analyses were performed using the Kaplan-Meier survival curve and the Cox proportional hazard model. The univariate and multivariate Cox regression analyses are presented as (HR; CI).\nAll statistical analyses were performed using SPSS software, version 16.0 (SPSS Inc, Chicago, IL).", "This prospective observational study was approved by the Ethics Committee of Assaf Harofeh Medical Center (Zerifin, Affiliated to the Sackler Faculty of Medicine Tel Aviv University, Israel). Informed consent was obtained before any trial-related activities. Patients were eligible for entry when they had been on HD therapy for at least 3 months and were 18 years or older, with no clinically active cardio-vascular or infectious diseases on entry. We excluded patients with edema, pleural effusion or ascites at their initial assessment, as well as patients with malignant disease, liver cirrhosis, neuro-muscular diseases, amputations or any deformities of the body. Exclusion criteria at the entry of the study also included co-morbidity (auto-immune disease and/or acute infections) and/or medication (prednisone) that might interfere with plasma leptin concentrations. A flow chart of the study is presented in Figure 1. In total, 101 patients (64 men and 37 women) with a mean age of 64.6 ± 11.5 years, receiving maintenance hemodialysis treatment at our outpatient HD clinic, were included in the study. Of the patients studied, 52 were diabetic (all diabetic patients had type 2 diabetes). Study measurements were performed at baseline and at 6, 12, 18 and 24 months from enrollment. After the longitudinal measurements ended, we continued clinical observation on our cohort during 2 additional years. Thus, in total, the study period extended 35 ± 17 months. During this period, 33 patients (32.7%) died (the main causes of death were cardio-vascular [12 of 33 patients; 36.4%] and sepsis [12 of 33 patients; 36.4%]), 13 patients (12.9%) underwent kidney transplantation, 3 patients (3.0%) changed dialysis modality, and 10 patients (10.0%) transferred to other hemodialysis units. All patients underwent regular dialysis via their vascular access (81.2% of patients had arterio-venous fistula) 4-5 h three times per week at a blood flow rate of 250-300 ml/min. Bicarbonate dialysate (30 mEq/L) at a dialysis solution flow rate of 500 ml/min was used in all cases. All dialysis was performed with biocompatible dialyzer membrane with a surface area of 1.0-1.8 m2. 
The efficiency of the dialysis was assessed based on the delivered dose of dialysis (Kt/V urea) using a single-pool urea kinetic model (mean Kt/V was1.31 ± 0.23 in our population).\nFlow diagram of the study.\nInformation on vascular disease (cerebral vascular, peripheral vascular and heart disease) was obtained from a detailed medical history.\nMost patients were required antihypertensive medications as well as other drugs commonly used in ESRD, such as phosphate and potassium binders, diuretics, and supplements of vitamins B, C, and D.", "A continuous 3-day dietary history (which included a dialysis day, a weekend day and a non-dialysis day) was recorded on a self-completed food diary. Then, dietary energy and protein intake were calculated and normalized for adjusted body weight (ABW) by the following formula [27]:\nSBW - standard body weight was determined by the National Healthand Nutrition Examination Survey II (NHANES II) population medians for age, sex, frame size, and stature [27].\nDietary protein intake was also estimated by the protein catabolic rate (PCR) calculation from the patient's urea generation rate by urea kinetics modeling [28]. Single-pool model urea kinetics was used to estimate the nPCR.", "BMI, triceps skinfold thickness (TSF), mid arm circumference (MAC) and calculated mid arm muscle circumference (MAMC) were measured as anthropometric variables. The BMI was calculated as dry weight in kilograms divided by the square of height in meters. TSF was measured with a conventional skinfold caliper using standard techniques. Mid arm circumference was measured with a plastic measuring tape. MAMC was estimated as follows:", "Body composition was determined by body impedance analysis (B.I.A. Nutriguard- M, Data-Input, Frankfurt, Germany). We used gel-based electrodes specifically developed for BIA measurements - Bianostic AT (Data-Input GmbH). On the day of blood collection, patients underwent BIA measurement at approximately 30 minutes postdialysis. BIA electrodes were placed on the same body side used for anthropometric measurements. The multi-frequency technique was used. FFM was calculated by using the approach of Kyle et al. validated by dual-energy x-ray absorptiometry on 343 healthy adult subjects [29]:\nFat mass and fat free mass were standardized by squared height (m2), and expressed in kg/m2 as fat mass index (FMI) and fat free mass index (FFMI), respectively.\nPhase angle (PA) describes the relationship between the two vector components of impedance [reactance and resistance] of the human body to an alternating electric current. PA has been shown to provide a BIA prognostic index of morbidity and mortality [30].", "Blood samples were taken in a non-fasting state before a midweek hemodialysis session. CBC, creatinine, urea, albumin, transferrin and total cholesterol were measured by routine laboratory methods. IL-6 and leptin were measured in plasma samples using commercially available enzyme-linked immunosorbent assay (ELISA) kits (R&D System, Minneapolis, MN, USA) according to the manufacturer's protocol. The mean minimal detectable level for IL-6 was 0.7 pg/ml, and 7.8 pg/ml for leptin. 
Intra-assay and inter-assay coefficients of variation were 4.2% and 6.4%, respectively, for IL-6, and 3.3% and 5.4%, respectively, for leptin.", "Data are expressed as mean ± standard deviation (SD), median and interquartile range (Q1 to Q3) for variables that did not follow a normal distribution, or frequencies, as noted.\nTo compare the means of continuous variables measured between sex-specific tertiles of leptin, one-way ANOVA and analysis of covariance with adjustments (ANCOVA) were used. Categorical data are presented as percentages and were compared among groups by χ2 tests. Correlations between leptin and clinical and laboratory parameters were assessed using Spearman rank order correlation coefficients (because of the skewed distribution of leptin levels). Multivariate regression analysis was performed to obtain partial (adjusted) correlations (R2). Case-mix-adjusted models were controlled for age, gender, history of CV disease, presence or absence of diabetes mellitus, dialysis vintage and fat mass.\nRepeated-measures analysis of variance was performed by using the MIXED model. Only patients with ≥ 2 study visits were included in the analyses. Base models were adjusted for age, sex, diabetes status, dialysis vintage, history of cardio-vascular diseases and fat mass. F tests were used to assess the significance of the fixed effects, and P less than 0.05 was considered significant. To evaluate whether leptin influenced the trends in the various dependent variables, we included in each base model terms for individual \"leptin-by-time\" interactions.\nSurvival analyses were performed using the Kaplan-Meier survival curve and the Cox proportional hazard model. The univariate and multivariate Cox regression analyses are presented as (HR; CI).\nAll statistical analyses were performed using SPSS software, version 16.0 (SPSS Inc, Chicago, IL).", " Cross-sectional associations The 101 prevalent HD patients participating in this study included 36.6% women; over half of the participants (51.5%) had diabetes mellitus (DM) and nearly the same proportion had a history of cardiovascular disease (46.9%), including myocardial infarction, coronary artery procedures such as angioplasty or surgery, previous cerebral-vascular accident, or peripheral vascular disease. Average age was 64.6 ± 11.5 years and median maintenance HD vintage was 31 months (Q1 to Q3, 15.5-51.5 months) at the study initiation. For HD patients at the start of the cohort, serum leptin averaged (mean ± SD) 37.3 ± 39.4 ng/ml (median, 16.5 ng/ml; Q1 to Q3, 5.60-71.4 ng/ml).\nTo examine the relationship between different levels of serum leptin and relevant demographic, clinical and laboratory measures, serum leptin levels were divided into three equal gender-specific tertiles (data not shown). The proportion of patients with diabetes and the age distribution were similar across the three groups. No statistically significant differences were evident between groups in the use of medications that may affect inflammatory markers, such as statins, aspirin, or angiotensin-converting enzyme inhibitors (data not shown). As expected, females tended to have higher leptin levels than males (p = 0.0001). Body composition parameters measured by anthropometry and bioelectric impedance analysis (BIA), including phase angle (PA), were incrementally greater across increasing leptin tertiles even after adjustments for baseline demographic and clinical parameters (age, DM status, dialysis vintage and cardio-vascular disease in the past). 
However, these associations were attenuated after inclusion of fat mass in the multivariate analyses. No significant differences in any of the biochemical markers of nutrition, normalized protein nitrogen appearance (nPNA), or actual energy or protein intake normalized to adjusted body weight were found between the HD patient groups according to leptin tertiles.\nAt baseline, anthropometric measurements (body mass index [BMI], triceps skinfold thickness [TSF], mid arm circumference [MAC] and mid arm muscle circumference calculated [MAMC]) and BIA-derived parameters of body composition (fat mass index [FMI] and fat free mass index [FFMI]) correlated positively with serum leptin levels, in both unadjusted and adjusted (for age, DM status, dialysis vintage, and previous cardio-vascular disease) models. Thus, an increased level of leptin was associated with better nutritional status (Table 1). Associations between levels of serum leptin and the two laboratory nutritional markers (albumin and cholesterol) and PA derived by BIA were statistically significant, but these associations were attenuated after adjustments for demographic and clinical parameters including fat mass. No significant correlations were observed between serum leptin and nPNA or dietary energy (DEI) and protein intake (DPI) normalized to adjusted body weight in both sexes after adjustments for demographic and clinical parameters including fat mass.\nUnadjusted and multivariate adjusted Spearman's correlation coefficients of baseline serum leptin and nutritional clinical and laboratory parameters in the study population at baseline\nCorrelation coefficient values ≥ 0.25 appear in bold.\na All case-mix-adjusted models include age, gender, DM status, dialysis vintage, CV disease in the past and fat mass.\nAbbreviations: nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, mid-arm muscle circumference calculated; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular.\nThe 101 prevalent HD patients participating in this study included 36.6% women; over half of the participants (51.5%) had diabetes mellitus (DM) and nearly the same proportion had a history of cardiovascular disease (46.9%), including myocardial infarction, coronary artery procedures such as angioplasty or surgery, previous cerebral-vascular accident, or peripheral vascular disease. Average age was 64.6 ± 11.5 years and median maintenance HD vintage was 31 months (Q1 to Q3, 15.5-51.5 months) at the study initiation. For HD patients at the start of the cohort, serum leptin averaged (mean ± SD) 37.3 ± 39.4 ng/ml (median, 16.5 ng/ml; Q1 to Q3, 5.60-71.4 ng/ml).\nTo examine the relationship between different levels of serum leptin and relevant demographic, clinical and laboratory measures, serum leptin levels were divided into three equal gender-specific tertiles (data not shown). The proportion of patients with diabetes and the age distribution were similar across the three groups. No statistically significant differences were evident between groups in the use of medications that may affect inflammatory markers, such as statins, aspirin, or angiotensin-converting enzyme inhibitors (data not shown). As expected, females tended to have higher leptin levels than males (p = 0.0001). 
Body composition parameters measured by anthropometry and bioelectric impedance analysis (BIA), including phase angle (PA), were incrementally greater across increasing leptin tertiles even after adjustments for baseline demographic and clinical parameters (age, DM status, dialysis vintage and cardio-vascular disease in the past). However, these associations were attenuated after inclusion of fat mass in the multivariate analyses. No significant differences in any of the biochemical markers of nutrition, normalized protein nitrogen appearance (nPNA), or actual energy or protein intake normalized to adjusted body weight were found between the HD patient groups according to leptin tertiles.\nAt baseline, anthropometric measurements (body mass index [BMI], triceps skinfold thickness [TSF], mid arm circumference [MAC] and mid arm muscle circumference calculated [MAMC]) and BIA-derived parameters of body composition (fat mass index [FMI] and fat free mass index [FFMI]) correlated positively with serum leptin levels, in both unadjusted and adjusted (for age, DM status, dialysis vintage, and previous cardio-vascular disease) models. Thus, an increased level of leptin was associated with better nutritional status (Table 1). Associations between levels of serum leptin and the two laboratory nutritional markers (albumin and cholesterol) and PA derived by BIA were statistically significant, but these associations were attenuated after adjustments for demographic and clinical parameters including fat mass. No significant correlations were observed between serum leptin and nPNA or dietary energy (DEI) and protein intake (DPI) normalized to adjusted body weight in both sexes after adjustments for demographic and clinical parameters including fat mass.\nUnadjusted and multivariate adjusted Spearman's correlation coefficients of baseline serum leptin and nutritional clinical and laboratory parameters in the study population at baseline\nCorrelation coefficient values ≥ 0.25 appear in bold.\na All case-mix-adjusted models include age, gender, DM status, dialysis vintage, CV disease in the past and fat mass.\nAbbreviations: nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, mid-arm muscle circumference calculated; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular.\n Longitudinal associations Linear mixed models were used to study the effects of longitudinal leptin changes on changes (slopes) in nutritional parameters over 24 months, including fixed parameters such as age, gender, diabetes status, dialysis vintage, previous cardio-vascular events, and fat mass (Table 2). No significant changes in dietary intake or in any of the biochemical nutritional markers over time were found. In contrast, a significant reduction in body composition parameters (both those measured by anthropometry [BMI, TSF and MAMC] and by BIA [FMI, FFMI, PA]) over time was observed. 
However, leptin did not modulate the changes in outcome variables over time in our cohort (leptin-by-time interactions were not significant).\nRegression coefficients with 95% Confidence Intervals for the effect of longitudinal leptin changes on changes (slopes) in nutritional parameters during 24 months, based on a mixed-effects model with linear trends for variable and fixed parameters\nNOTE: All nutritional variables presented in the Table were modeled separately as dependent variables, whereas independent variables included fixed factors (such as age, gender, diabetes status, dialysis vintage, past cardio-vascular disease and fat mass) and leptin as a continuous variable. Terms for individual \"leptin-by-time\" interactions were included in each model. The model takes into account every measurement of leptin and of the presented nutritional variables at each time point for each patient separately. The presented regression coefficients demonstrate changes in outcome variables with time (24 months).\na P for trend for leptin-by-time interactions.\nb FMI and BMI were adjusted only for age, gender, DM status, history of cardio-vascular disease and dialysis vintage.\nAbbreviations: DEI, daily energy intake; DPI, daily protein intake; nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; EPO, erythropoietin; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, mid-arm muscle circumference calculated; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular.\nA mixed model including the fixed factors sex, age, DM status, dialysis vintage, history of CV disease and some of the nutritional parameters was used to predict variability in leptin levels during the study, as presented in Table 3. We observed significant changes in leptin levels with time (linear estimate ± SE: -2.5010 ± 0.57 ng/ml/2y; p < 0.001). The effect of longitudinal changes of FMI on serum leptin variability over time was evident according to this model.\nEffects of longitudinal changes of nutritional parameters on changes (slopes) of leptin during 24 months, based on a mixed-effects model with linear trends for variable and fixed parameters\nNOTE: The Table represents a simple Mixed-Model with leptin as the dependent variable, and the presented nutritional parameters and fixed factors (age, gender, DM status, dialysis vintage and history of CV disease) as independent variables. The model takes into account every measurement of leptin and of the presented nutritional variables at each time point for each patient separately.\nAbbreviations: DEI, daily energy intake; IL-6, interleukin-6; EPO, erythropoietin; FMI, fat mass index; DM, diabetes mellitus; CV, cardio-vascular.\nOf interest, leptin levels fell significantly over time, with a more rapid decline in the highest leptin tertile in both the unadjusted (p = 0.007 for leptin-by-time interaction) and fully adjusted (p = 0.047 for leptin-by-time interaction) models (Figure 2).\nLeptin levels decline over time in the study population. Changes in estimated marginal means of serum leptin levels at baseline (month 0), month 6, month 12, month 18 and month 24 in the study population groups according to sex-specific tertiles* of baseline serum leptin: 1. Unadjusted model: P = 0.007 for leptin-by-time interaction; 2. Fully adjusted model (for age, gender, DM status, dialysis vintage, previous cardio-vascular morbidity and fat mass): P = 0.047 for leptin-by-time interaction. 
*Median (interquartile range) leptin (ng/ml) tertiles for men (n = 64) were 1.70 (1.2-2.4), 10.2 (7.6-13.8) and 38.3 (26.2-81.3) and for women (n = 37) were 15.8 (9.3-18.3), 68.0 (41.4-85.9) and 112.6 (109.2-116.0).\nLinear mixed models were used to study the effects of longitudinal leptin changes on changes in nutritional parameters (slopes) over 24 months including fixed parameters such as age, gender, diabetes status, dialysis vintage, previous cardio-vascular events, and fat mass (Table 2). No significant changes in dietary intake or in any of the biochemical nutritional markers over time were found. In contrast, a significant reduction in body composition parameters (both those measured by anthropometry [BMI, TSF and MAMC] and by BIA [FMI, FFMI, PA]) over time was observed. However, leptin did not modulate the changes in outcome variables over time in our cohort (leptin-by-time interactions were insignificant).\nRegression coefficients with 95% Confidence Intervals for the effect of longitudinal leptin changes on nutritional parameters changes (slopes) during 24 months based on mixed-effects model with linear trends for variable and fixed parameters\nNOTE: All nutritional variables presented in Table were modeled separately as dependent variables, whereas independent variables included fixed factors (such as age, gender, diabetes status, dialysis vintage, past cardio-vascular disease and fat mass) and leptin as continuous variable. Terms for individual \"leptin-by-time\" interactions were included in each model. The model takes into account every measurements of leptin and presented nutritional variables in each time point for each patient separately. Presented regression coefficients demonstrate changes in outcome variables with time (24 months).\na P for trend for leptin-by-time interactions.\nb FMI and BMI were adjusted only by age, gender, DM status, history of cardio-vascular disease and dialysis vintage.\nAbbreviations: DEI, daily energy intake; DPI, daily protein intake; nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; EPO, erythropoietin; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, mid-arm muscle circumference calculated; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular.\nA mixed model including the fixed factors sex, age, DM status, dialysis vintage, history of CV disease and some of nutritional parameters predicted variability in leptin levels during the study presented in Table 3. We observed significant changes of leptin levels with time (linear estimate ± SE: -2.5010 ± 0.57 ng/ml/2y; p < 0.001). The effect of longitudinal changes of FMI on serum leptin variability over time was evident according to this model.\nEffects of longitudinal changes of nutritional parameters on changes (slopes) of leptin during 24 months based on mixed-effects model with linear trends for variable and fixed parameters\nNOTE: The Table represents a simple Mixed-Model with leptin as dependent variable, and presented nutritional parameters and fixed factors (age, gender, DM status, dialysis vintage and history of CV disease) as independent variables. 
The model takes into account every measurements of leptin and presented nutritional variables in each time point for each patient separately.\nAbbreviations: DEI, daily energy intake; IL-6, interleukin-6; EPO, erythropoietin; FMI, fat mass index; DM, diabetes mellitus; CV, cardio-vascular.\nOf interest, a significant reduction over time was associated with a more rapid decline in leptin levels in the highest leptin tertile in both unadjusted (p = 0.007 for leptin-by-time interaction) and fully adjusted (p = 0.047 for leptin-by-time interaction) models (Figure 2).\nLeptin levels decline over time in the study population. Changes in estimated marginal means of serum leptin levels at baseline (month 0), month 6, month 12, month 18 and month 24 in the study population groups according to sex-specific tertiles* of baseline serum leptin: 1. Unadjusted model: P = 0.007 for leptin-by-time interaction; 2. Fully adjusted model (for age, gender, DM status, dialysis vintage, previous cardio-vascular morbidity and fat mass): P = 0.047 for leptin-by-time interaction. *Median (interquartile range) leptin (ng/ml) tertiles for men (n = 64) were 1.70 (1.2-2.4), 10.2 (7.6-13.8) and 38.3 (26.2-81.3) and for women (n = 37) were 15.8 (9.3-18.3), 68.0 (41.4-85.9) and 112.6 (109.2-116.0).\n Survival analysis During an average follow-up of 35 months (median, 40 months; Q1 to Q3, 17-52), 33 patients died: 11 (32%) in the low tertile group, 12 (35%) in the median tertile group and 10 (30%) in the high tertile group, with a median time to death of 25 months (Q1 to Q3, 14-40 months). Cumulative incidences of survival were unaffected by the baseline serum leptin levels (Figure 3). To further assess the possible association of baseline serum leptin level with cardio-vascular morbidity and mortality, cumulative hazards of all-cause death and first composite cardio-vascular event from Cox regression (after adjustment for baseline demographic parameters and fat mass) were calculated. First cardio-vascular event was defined as myocardial infarction (MI), requiring coronary artery procedures such as angioplasty or surgery, cerebral-vascular accident (CVA), or peripheral vascular disease (PVD), requiring angioplasty, bypass or amputation and diagnosed after the participant entered the study. No statistically significant differences in hazards between the different groups according leptin tertiles were observed. Additionally, no differences were found by Cox regression analysis in terms of survival probabilities from the analysis of all-cause mortality for each 10 ng/ml increase in baseline serum leptin (data not shown).\nKaplan-Meier curves of surviving patients comparing subgroups of patients stratified by baseline serum leptin tertiles.\nThus, although chronic HD patients with higher baseline leptin had better nutritional status as evidenced by their significantly more favorable body composition parameters at the start of the study, this disparity did not appear to widen over the 24 months of follow-up; thus, leptin levels did not predict body composition parameter changes over time. Moreover, no survival advantage of the high leptin group was found in our observational cohort.\nDuring an average follow-up of 35 months (median, 40 months; Q1 to Q3, 17-52), 33 patients died: 11 (32%) in the low tertile group, 12 (35%) in the median tertile group and 10 (30%) in the high tertile group, with a median time to death of 25 months (Q1 to Q3, 14-40 months). 
Cumulative incidences of survival were unaffected by the baseline serum leptin levels (Figure 3). To further assess the possible association of baseline serum leptin level with cardio-vascular morbidity and mortality, cumulative hazards of all-cause death and first composite cardio-vascular event from Cox regression (after adjustment for baseline demographic parameters and fat mass) were calculated. First cardio-vascular event was defined as myocardial infarction (MI), requiring coronary artery procedures such as angioplasty or surgery, cerebral-vascular accident (CVA), or peripheral vascular disease (PVD), requiring angioplasty, bypass or amputation and diagnosed after the participant entered the study. No statistically significant differences in hazards between the different groups according leptin tertiles were observed. Additionally, no differences were found by Cox regression analysis in terms of survival probabilities from the analysis of all-cause mortality for each 10 ng/ml increase in baseline serum leptin (data not shown).\nKaplan-Meier curves of surviving patients comparing subgroups of patients stratified by baseline serum leptin tertiles.\nThus, although chronic HD patients with higher baseline leptin had better nutritional status as evidenced by their significantly more favorable body composition parameters at the start of the study, this disparity did not appear to widen over the 24 months of follow-up; thus, leptin levels did not predict body composition parameter changes over time. Moreover, no survival advantage of the high leptin group was found in our observational cohort.", "The 101 prevalent HD patients participating in this study included 36.6% women; over half of the participants (51.5%) had diabetes mellitus (DM) and nearly the same proportion had a history of cardiovascular disease (46.9%), including myocardial infarction, coronary artery procedures such as angioplasty or surgery, previous cerebral-vascular accident, or peripheral vascular disease. Average age was 64.6 ± 11.5 years and median maintenance HD vintage was 31 months (Q1 to Q3, 15.5-51.5 months) at the study initiation. For HD patients at the start of the cohort, serum leptin averaged (mean ± SD) 37.3 ± 39.4 ng/ml (median, 16.5 ng/ml; Q1 to Q3, 5.60-71.4 ng/ml).\nTo examine the relationship between different levels of serum leptin and relevant demographic, clinical and laboratory measures, serum leptin levels were divided into three equal gender specific tertiles (data not shown). Diabetes proportion and age distribution were similar across the three groups. No statistically significant differences were evident between groups in the use of medications that may affect inflammatory markers such as statins, aspirin, or angiotensin-converting enzyme inhibitors (data not shown). As expected females, tended to have higher leptin levels than males (p = 0.0001). Body composition parameters measured by anthropometry and bioelectric impedance analysis (BIA) including phase angle (PA) were incrementally greater across increasing leptin tertiles even after adjustments for baseline demographic and clinical parameters (age, DM status, dialysis vintage and cardio-vascular disease in the past). However, these associations were attenuated after including of fat mass in multivariate analyses. 
No significant differences in any of the biochemical markers of nutrition, normalized protein nitrogen appearance (nPNA), or actual energy or protein intake normalized to adjusted body weight were found between the HD patient groups according to leptin tertiles.\nAt baseline, anthropometric measurements (body mass index [BMI], triceps skinfold thickness [TSF], mid arm circumference [MAC] and midarm circumference calculated [MAMC]) and BIA-derived parameters of body composition (fat mass index [FMI] and fat free mass index [FFMI]) correlated positively with serum leptin levels, in both unadjusted and adjusted (for age, DM status, dialysis vintage, and previous cardio-vascular disease) models. Thus, an increased level of leptin was associated with better nutritional status (Table 1). Associations between levels of serum leptin and the two laboratory nutritional markers (albumin and cholesterol) and PA derived by BIA were statistically significant, but these associations were attenuated after adjustments for demographic and clinical parameters including fat mass. No significant correlations were observed between serum leptin and nPNA or dietary energy (DEI) and protein intake (DPI) normalized to adjusted body weight in both sexes after adjustments for demographic and clinical parameters including fat mass.\nUnadjusted and multivariate adjusted Spearman's correlation coefficients of baseline serum leptin and nutritional clinical and laboratory parameters in the study population at baseline\nCorrelation coefficient values ≥ 0.25 appear in bold.\na All case-mix-adjusted models include age, gender, DM status, dialysis vintage, CV disease in the past and fat mass.\nAbbreviations: nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, mid-arm muscle circumference calculated; FMI, fat mass index; FFMI, fat-free mass index, DM, diabetes mellitus; CV, cardio-vascular.", "Linear mixed models were used to study the effects of longitudinal leptin changes on changes in nutritional parameters (slopes) over 24 months including fixed parameters such as age, gender, diabetes status, dialysis vintage, previous cardio-vascular events, and fat mass (Table 2). No significant changes in dietary intake or in any of the biochemical nutritional markers over time were found. In contrast, a significant reduction in body composition parameters (both those measured by anthropometry [BMI, TSF and MAMC] and by BIA [FMI, FFMI, PA]) over time was observed. However, leptin did not modulate the changes in outcome variables over time in our cohort (leptin-by-time interactions were insignificant).\nRegression coefficients with 95% Confidence Intervals for the effect of longitudinal leptin changes on nutritional parameters changes (slopes) during 24 months based on mixed-effects model with linear trends for variable and fixed parameters\nNOTE: All nutritional variables presented in Table were modeled separately as dependent variables, whereas independent variables included fixed factors (such as age, gender, diabetes status, dialysis vintage, past cardio-vascular disease and fat mass) and leptin as continuous variable. Terms for individual \"leptin-by-time\" interactions were included in each model. The model takes into account every measurements of leptin and presented nutritional variables in each time point for each patient separately. 
Presented regression coefficients demonstrate changes in outcome variables with time (24 months).\na P for trend for leptin-by-time interactions.\nb FMI and BMI were adjusted only by age, gender, DM status, history of cardio-vascular disease and dialysis vintage.\nAbbreviations: DEI, daily energy intake; DPI, daily protein intake; nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; EPO, erythropoietin; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, mid-arm muscle circumference calculated; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular.\nA mixed model including the fixed factors sex, age, DM status, dialysis vintage, history of CV disease and some of nutritional parameters predicted variability in leptin levels during the study presented in Table 3. We observed significant changes of leptin levels with time (linear estimate ± SE: -2.5010 ± 0.57 ng/ml/2y; p < 0.001). The effect of longitudinal changes of FMI on serum leptin variability over time was evident according to this model.\nEffects of longitudinal changes of nutritional parameters on changes (slopes) of leptin during 24 months based on mixed-effects model with linear trends for variable and fixed parameters\nNOTE: The Table represents a simple Mixed-Model with leptin as dependent variable, and presented nutritional parameters and fixed factors (age, gender, DM status, dialysis vintage and history of CV disease) as independent variables. The model takes into account every measurements of leptin and presented nutritional variables in each time point for each patient separately.\nAbbreviations: DEI, daily energy intake; IL-6, interleukin-6; EPO, erythropoietin; FMI, fat mass index; DM, diabetes mellitus; CV, cardio-vascular.\nOf interest, a significant reduction over time was associated with a more rapid decline in leptin levels in the highest leptin tertile in both unadjusted (p = 0.007 for leptin-by-time interaction) and fully adjusted (p = 0.047 for leptin-by-time interaction) models (Figure 2).\nLeptin levels decline over time in the study population. Changes in estimated marginal means of serum leptin levels at baseline (month 0), month 6, month 12, month 18 and month 24 in the study population groups according to sex-specific tertiles* of baseline serum leptin: 1. Unadjusted model: P = 0.007 for leptin-by-time interaction; 2. Fully adjusted model (for age, gender, DM status, dialysis vintage, previous cardio-vascular morbidity and fat mass): P = 0.047 for leptin-by-time interaction. *Median (interquartile range) leptin (ng/ml) tertiles for men (n = 64) were 1.70 (1.2-2.4), 10.2 (7.6-13.8) and 38.3 (26.2-81.3) and for women (n = 37) were 15.8 (9.3-18.3), 68.0 (41.4-85.9) and 112.6 (109.2-116.0).", "During an average follow-up of 35 months (median, 40 months; Q1 to Q3, 17-52), 33 patients died: 11 (32%) in the low tertile group, 12 (35%) in the median tertile group and 10 (30%) in the high tertile group, with a median time to death of 25 months (Q1 to Q3, 14-40 months). Cumulative incidences of survival were unaffected by the baseline serum leptin levels (Figure 3). To further assess the possible association of baseline serum leptin level with cardio-vascular morbidity and mortality, cumulative hazards of all-cause death and first composite cardio-vascular event from Cox regression (after adjustment for baseline demographic parameters and fat mass) were calculated. 
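The survival comparisons described here (Kaplan-Meier curves by baseline leptin tertile and Cox models per 10 ng/ml of leptin) were performed in SPSS; as an illustrative sketch only, with a hypothetical data frame and column names, an equivalent analysis in Python with the lifelines package might look like this:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical follow-up table: one row per patient; column names are assumptions
# (sex and diabetes coded 0/1, follow-up in months, died = 1 if the patient died).
surv = pd.read_csv("survival.csv")

# Kaplan-Meier curves stratified by baseline leptin tertile.
kmf = KaplanMeierFitter()
for tertile, grp in surv.groupby("leptin_tertile"):
    kmf.fit(grp["months_followup"], event_observed=grp["died"], label=f"tertile {tertile}")
    kmf.plot_survival_function()

# Cox model for all-cause mortality; leptin rescaled so the hazard ratio is per 10 ng/ml,
# adjusted for baseline demographics and fat mass as in the text.
surv["leptin_per10"] = surv["leptin"] / 10.0
cph = CoxPHFitter()
cph.fit(
    surv[["months_followup", "died", "leptin_per10", "age", "sex", "diabetes", "fat_mass"]],
    duration_col="months_followup",
    event_col="died",
)
cph.print_summary()
```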
Survival analysis

During an average follow-up of 35 months (median, 40 months; Q1 to Q3, 17-52), 33 patients died: 11 (32%) in the low-tertile group, 12 (35%) in the middle-tertile group and 10 (30%) in the high-tertile group, with a median time to death of 25 months (Q1 to Q3, 14-40 months). Cumulative survival was unaffected by baseline serum leptin levels (Figure 3). To further assess the possible association of baseline serum leptin level with cardio-vascular morbidity and mortality, cumulative hazards of all-cause death and of a first composite cardio-vascular event were calculated from Cox regression, after adjustment for baseline demographic parameters and fat mass. A first cardio-vascular event was defined as myocardial infarction (MI), a requirement for coronary artery procedures such as angioplasty or surgery, cerebral-vascular accident (CVA), or peripheral vascular disease (PVD) requiring angioplasty, bypass or amputation, diagnosed after the participant entered the study. No statistically significant differences in hazards were observed between the groups according to leptin tertiles. Additionally, Cox regression analysis of all-cause mortality showed no difference in survival probability for each 10 ng/ml increase in baseline serum leptin (data not shown).

Figure 3. Kaplan-Meier curves of surviving patients comparing subgroups of patients stratified by baseline serum leptin tertiles.

Thus, although chronic HD patients with higher baseline leptin had better nutritional status, as evidenced by their significantly more favorable body composition parameters at the start of the study, this disparity did not appear to widen over the 24 months of follow-up; thus, leptin levels did not predict changes in body composition parameters over time. Moreover, no survival advantage of the high leptin group was found in our observational cohort.
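As an illustration of this type of analysis, the sketch below uses the lifelines package to reproduce the general workflow (Kaplan-Meier curves by tertile plus a covariate-adjusted Cox model); the column names and data file are assumptions, covariates are assumed to be numerically coded, and the original analyses were run in SPSS.

```python
# Hypothetical sketch of the survival workflow described above (lifelines, not SPSS).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("hd_survival.csv")  # assumed: one row per patient

# Kaplan-Meier curves stratified by baseline leptin tertile
kmf = KaplanMeierFitter()
for tertile, grp in df.groupby("leptin_tertile"):
    kmf.fit(grp["followup_months"], event_observed=grp["died"], label=f"tertile {tertile}")
    kmf.plot_survival_function()

# Cox proportional hazards model for all-cause death,
# adjusted for baseline demographics and fat mass as in the paper
cph = CoxPHFitter()
cph.fit(
    df[["followup_months", "died", "leptin", "age", "sex",
        "diabetes", "vintage", "cv_disease", "fat_mass"]],
    duration_col="followup_months",
    event_col="died",
)
cph.print_summary()  # hazard ratios (HR) with confidence intervals
```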
Discussion

In the present study, we wished to determine whether changes in serum leptin levels correlate with nutritional status over time in a cohort of prevalent hemodialysis patients. We observed that, despite the strong and significant cross-sectional associations between serum leptin and body composition parameters at baseline, leptin levels did not reflect changes in nutritional status in our population.

Our study confirms several previous cross-sectional studies in which elevated serum leptin levels were demonstrated to be positively associated with several nutritional markers (laboratory or body composition parameters, mainly BMI and fat mass) in chronic HD patients [14,20,21,23-25]. HD patients treated with megestrol acetate [24], growth hormone [31] or high-calorie supplementation [32] demonstrated progressively increased serum leptin in parallel with an improved nutritional state.

However, not all studies are consistent with positive associations between elevated leptin levels and nutritional parameters in ESRD patients. In a study by Young et al [5], an inverse correlation was observed between leptin/fat mass and dietary intake, together with significantly lower lean tissue mass, in patients with chronic renal failure (23 patients) and in ESRD patients receiving PD (24 patients) or HD (22 patients) therapy. In a cross-sectional study of 28 dialysis patients and 41 healthy control subjects, Johansen et al [20] showed that leptin levels were negatively correlated with albumin and PCR, suggesting a possible negative role of leptin in nutrition. Based on longitudinal observation, Stenvinkel et al [18] demonstrated that increases in the serum leptin levels of 36 peritoneal dialysis patients were associated with a decrease in lean body mass. The discrepancies between these studies may be related to the populations recruited, to their small sample sizes, and to their designs; specifically, serum leptin levels were measured among patients with varying degrees of CRF, some of whom were undergoing maintenance hemodialysis or peritoneal dialysis, and in healthy controls [5,18,20]; many of these studies had a low number of participants [5,18,20,23-25]; and a majority were cross-sectional [5,20,23,25].

Leptin, a regulator of eating behavior, has been shown to be a major determinant of anorexia in uremic animals via signaling through the hypothalamic melanocortin system [15]. Subsequently, a recent experimental study by Cheung et al [33] showed that intraperitoneal administration of the melanocortin-4 receptor (MC4-R) antagonist NBI-12i stimulates food intake and weight gain in uremic mice. However, the effects of leptin in uremic patients are not unequivocal. Relevant to our findings, Bossola et al [17] demonstrated that serum leptin levels did not differ between anorexic and non-anorexic hemodialysis patients. Several other studies likewise failed to demonstrate any role of hyperleptinemia in reduced dietary intake in ESRD patients receiving maintenance HD treatment [5,16,34]. Since hyperleptinemia is common in HD patients, mainly because of impaired renal clearance [20], it seemed reasonable to assume that selective leptin resistance (such as occurs in obese humans [35]) is the mechanism responsible for attenuating the negative effect of leptin on appetite, and accordingly on dietary intake, under conditions of continuous stimulation. The molecular mechanisms of leptin resistance, with a focus on the contribution of the intracellular tyrosine residues in the hypothalamic leptin receptor (LEPRb) and their interaction with the negative feedback regulators suppressor of cytokine signaling 1 and 3 (SOCS1 and SOCS3), are described in a recent study by Knobelspies et al [36].

Thus, although cross-group comparisons confirm that hyperleptinemia is associated with better body composition parameters, no unequivocal associations of serum leptin levels with biochemical data or dietary intake are evident in HD patients according to the available literature. Our study confirms these data.

Another finding of the present study was that 24 months of HD was associated with a significant reduction in plasma leptin levels, with a more rapid decline observed in patients with higher baseline leptin levels. Only limited information is available on the relationship between longitudinal serum leptin measurements and nutritional characteristics in chronic HD patients [18,32]. To the best of our knowledge, the present study is the first long-term longitudinal study to examine the relationship between serum leptin level and nutritional status in ESRD patients receiving maintenance HD therapy. Several mechanisms may be involved in the decrease of serum leptin levels over time in hemodialysis patients.
These include the increasing use of high-flux and super-flux hemodialyzers [37] and/or the use of alternative dialysis strategies such as hemodiafiltration [38]; recombinant human erythropoietin treatment, which is generally followed by a significant decline in leptinaemia in hemodialysed patients [39]; chronic inflammation, under the assumption that leptin, like albumin or transferrin, behaves as a negative acute-phase protein in chronic HD patients [22], although other studies [40] do not support this mechanism; metabolic acidosis, which can reduce the release of leptin from adipose tissue [41]; and, finally, a decrease in fat mass leading to a decrease in the rate of leptin biosynthesis [14,18], based on an insulin- or nutrient-sensing pathway regulating leptin gene expression in fat tissue [42]. Although patients in our facility were all treated with low-flux hemodialysis, the reduction in BMI, and accordingly in fat mass, over the 24 months of the study may explain this longitudinal decrease in serum leptin levels in our cohort.

Of interest is the observation that adverse changes in body composition parameters occurred over time, supporting the hypothesis that end-stage renal disease is associated with wasting, as proposed in some [43,44] but not all [45] previous studies. The results of our study are consistent with those reported by Johansen et al [43], who did not find significant changes in any of the biochemical markers with time but did observe a significant reduction in phase angle over 1 year of follow-up in prevalent HD patients. In parallel with the reduction in body composition parameters over time, we observed a longitudinal decline in serum leptin levels in our cohort. However, the lack of an association between serum leptin levels and future changes in body composition parameters (as shown by the non-significant leptin-by-time interactions) suggests that leptin levels follow the body composition (mainly fat mass) trajectory, or its accompanying metabolic changes, rather than governing it.

Finally, serum leptin level did not appear to be a useful predictor of all-cause mortality or cardio-vascular morbidity in our study population during up to 4 years of observation. In this respect, our findings support the results of an earlier study by Tsai et al [46]. Although Scholze et al [26] showed a paradoxical inverse association between leptin level and clinical outcome in 71 chronic HD patients, our patients did not show such an association. The difference between our results and theirs might be explained by differences in the cohorts. First, the population presented by Scholze et al [26] had a lower BMI at baseline than our patients, and no body composition parameters were measured. Further, the prevalence of diabetes mellitus was much greater among patients in the lower leptin group than among patients in the higher leptin group in the cohort of Scholze et al [26]. Indeed, HD patients with diabetes have a poor prognosis [4], which might be the cause of the lower survival rate in the lower leptin group in that study [26]. While low levels of leptin may reflect a state of malnutrition in HD patients, proatherogenic effects of hyperleptinemia have been linked to an adverse cardiovascular profile in the general population [10,12]. We speculate that the lack of long-term survival benefit of hyperleptinemia in chronic HD patients could reflect the modulating influence of the proatherogenic effects of high serum leptin levels.
Finally, no relationship between serum leptin levels and previous cardio-vascular events was found by Diez et al [47] in a retrospective study of 82 dialysis patients. Zoccali et al [40] also did not show any difference in cardiovascular event-free survival in a cohort of 192 hemodialysis patients after stratification on the basis of the median value of plasma leptin.

Some limitations of the present study should be considered. First, this study is based on a relatively small sample size, limiting detection of more subtle changes over time. Second, this study used only an observational approach, without manipulation of exposure factors, and therefore no definitive cause-and-effect relationship can be derived for any of the risk factors analyzed. Dietary intake assessed by 3-day food records is another limitation of the study, as the results can be subjective and incomplete, and can vary considerably from day to day as a result of dialysis treatment sessions and associated disturbances in food intake. Finally, significant circadian fluctuations of plasma leptin (regardless of the underlying degree of glucose tolerance, fasting, or diet) can certainly obscure some small but significant changes in plasma leptin [48] and may be a source of potential bias in longitudinal observation. Nevertheless, the present study has the advantage of providing long-term longitudinal data on the relationship between serum concentrations of leptin and nutritional parameters in prevalent HD patients.

Conclusions

In conclusion, we found positive linear associations between serum leptin levels and body composition, whereas no significant relationships with biochemical data or dietary intake were evident in our cohort at baseline. Analyzing the longitudinal data, we concluded that any influence of serum leptin levels on nutritional status is limited to mirroring the attained BMI and fat mass trajectory. Further, the baseline nutritional advantage of study participants with hyperleptinemia did not translate into long-term benefits in terms of survival. Taken together, our results suggest that in our cohort leptin levels reflect fat mass depots, rather than independently contributing to uremic anorexia or modifying nutritional status and/or survival in chronic HD patients. These results are of particular importance if the use of subcutaneous injections of recombinant methionyl leptin [49] or leptin removal by super-flux polysulfone dialysers [37] is contemplated as a therapeutic intervention.
Keywords: Leptin, Nutrition, Bioimpedance, Inflammation, Hemodialysis
Background: In recent years, the number of patients with end-stage renal disease (ESRD) has been increasing worldwide [1]. Depending in part upon the method used to evaluate nutritional status and the population studied, from 40 to 70 percent of patients with ESRD are malnourished [2,3] resulting in poor clinical outcomes [4]. Among the mechanisms responsible for malnutrition, leptin was believed to influence nutritional markers in patients with ESRD [5]. Leptin is a 16-kDa protein identified as the product of the obese gene; it is exclusively produced in adipocytes, and regulates food intake and energy expenditure in animal models [6]. Leptin decreases food intake by decreasing NPY (neuropeptide Y - one of the most potent stimulators of food intake) mRNA [7] and increasing alpha-MSH (alpha-melanocyte-stimulating hormone - an inhibitor of food intake) [8]. Besides linking adiposity and central nervous circuits to reduced appetite and enhanced energy expenditure in the general population [9], leptin has been shown to increase overall sympathetic nerve activity [10], facilitate glucose utilization and improve insulin sensitivity [11]. Furthermore, the prospective West of Scotland Coronary Prevention Study (WOSCOPS) reported that elevated leptin increases the relative risk of cardio-vascular disease in the general population independently of fat mass [12]. In general, serum leptin levels are significantly elevated in patients with renal failure, particularly when compared to age, gender and body mass index (BMI)-matched controls [13,14]. However, the role of hyperleptinemia in ESRD patients is somewhat unconventional. In contrast with its anorexogenic effects recognized in the general population [9] and even in experimental models of uremia (in subtotal nephrectomized and leptin receptor-deficient [db/db] mice) [15], leptin has not been reported to affect perceived appetite and nutrient intake in dialysis patients [16,17]. Although in some observational studies, increased serum leptin concentrations were observed in ESRD patients in parallel with loss of lean body mass [18,19] or with hypoalbuminemia and low protein intake [20], some others failed to find any correlation between hyperleptinemia and weight change [9] or lean mass [21] in this population. Moreover, several clinical studies suggested that leptin is a negative acute phase protein [22] and can serve as a marker of adequate nutritional status, rather than an appetite-reducing uremic toxin in hemodialysis patients [23-25]. Finally, the relationship between elevated serum leptin levels and clinical outcomes in ESRD has not been fully defined. In one small prospective cohort of hemodialysis patients, lower baseline serum leptin levels predicted mortality [26], but neither changes in leptin over time were measured, nor were leptin levels normalized to body fat mass in this study. Thus, the influence of serum leptin levels on nutritional status and survival in chronic hemodialysis patients remained to be elucidated. In view of leptin's physiological role, information on effects of prolonged hyperleptinemia (independent of fat mass) on nutritional status of chronic hemodialysis patients, which may also impact on their survival, would be of interest. The aim of the present prospective longitudinal study was therefore to study longitudinal changes in serum leptin levels and to relate them to the changes in nutritional markers and survival in chronic hemodialysis patients. 
Methods:
Patients: This prospective observational study was approved by the Ethics Committee of Assaf Harofeh Medical Center (Zerifin, affiliated to the Sackler Faculty of Medicine, Tel Aviv University, Israel). Informed consent was obtained before any trial-related activities. Patients were eligible for entry when they had been on HD therapy for at least 3 months and were 18 years or older, with no clinically active cardio-vascular or infectious diseases on entry. We excluded patients with edema, pleural effusion or ascites at their initial assessment, as well as patients with malignant disease, liver cirrhosis, neuro-muscular diseases, amputations or any deformities of the body. Exclusion criteria at the entry of the study also included co-morbidity (auto-immune disease and/or acute infections) and/or medication (prednisone) that might interfere with plasma leptin concentrations. A flow chart of the study is presented in Figure 1. In total, 101 patients (64 men and 37 women) with a mean age of 64.6 ± 11.5 years, receiving maintenance hemodialysis treatment at our outpatient HD clinic, were included in the study. Of the patients studied, 52 were diabetic (all diabetic patients had type 2 diabetes). Study measurements were performed at baseline and at 6, 12, 18 and 24 months from enrollment. After the longitudinal measurements ended, we continued clinical observation of our cohort for 2 additional years. Thus, in total, the study period extended 35 ± 17 months. During this period, 33 patients (32.7%) died (the main causes of death were cardio-vascular [12 of 33 patients; 36.4%] and sepsis [12 of 33 patients; 36.4%]), 13 patients (12.9%) underwent kidney transplantation, 3 patients (3.0%) changed dialysis modality, and 10 patients (10.0%) transferred to other hemodialysis units. All patients underwent regular dialysis via their vascular access (81.2% of patients had an arterio-venous fistula) for 4-5 h three times per week at a blood flow rate of 250-300 ml/min. Bicarbonate dialysate (30 mEq/L) at a dialysis solution flow rate of 500 ml/min was used in all cases. All dialysis was performed with a biocompatible dialyzer membrane with a surface area of 1.0-1.8 m2. The efficiency of the dialysis was assessed based on the delivered dose of dialysis (Kt/V urea) using a single-pool urea kinetic model (mean Kt/V was 1.31 ± 0.23 in our population). Figure 1. Flow diagram of the study. Information on vascular disease (cerebral vascular, peripheral vascular and heart disease) was obtained from a detailed medical history. Most patients required antihypertensive medications as well as other drugs commonly used in ESRD, such as phosphate and potassium binders, diuretics, and supplements of vitamins B, C, and D.
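For readers who want to see how a delivered dialysis dose of this kind is typically computed, the sketch below implements the widely used second-generation Daugirdas single-pool formula in Python. The article states only that a single-pool urea kinetic model was used, so treating this particular formula (and the example numbers) as the authors' exact method is an assumption.

```python
import math

def single_pool_ktv(pre_bun, post_bun, session_hours, uf_litres, post_weight_kg):
    """Single-pool Kt/V estimate (second-generation Daugirdas formula).

    Illustrative assumption: the paper reports a single-pool urea kinetic
    model (mean Kt/V 1.31 +/- 0.23) but does not spell out the implementation.
    """
    r = post_bun / pre_bun  # post- to pre-dialysis urea ratio
    return (-math.log(r - 0.008 * session_hours)
            + (4 - 3.5 * r) * uf_litres / post_weight_kg)

# hypothetical example values
print(round(single_pool_ktv(pre_bun=140, post_bun=45, session_hours=4.0,
                            uf_litres=2.5, post_weight_kg=70.0), 2))
```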
Dietary intake: A continuous 3-day dietary history (which included a dialysis day, a weekend day and a non-dialysis day) was recorded in a self-completed food diary. Dietary energy and protein intake were then calculated and normalized for adjusted body weight (ABW) as described in [27], where SBW (standard body weight) was determined from the National Health and Nutrition Examination Survey II (NHANES II) population medians for age, sex, frame size, and stature [27]. Dietary protein intake was also estimated by the protein catabolic rate (PCR), calculated from the patient's urea generation rate by urea kinetic modeling [28]. Single-pool urea kinetics was used to estimate the nPCR.
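The adjusted-body-weight equation itself is not reproduced in the text above; the sketch below therefore assumes the commonly cited K/DOQI-style adjusted edema-free body weight formula (aBWef = BWef + (SBW − BWef) × 0.25), purely to illustrate how energy and protein intake would be normalized, and it should not be read as the authors' exact formula.

```python
def adjusted_body_weight(actual_kg, standard_kg):
    """Adjusted edema-free body weight.

    Assumption: K/DOQI-style adjustment toward the NHANES II standard
    body weight; the article's own equation was not reproduced.
    """
    return actual_kg + (standard_kg - actual_kg) * 0.25

def normalized_intake(total_intake, actual_kg, standard_kg):
    """Normalize a daily intake (kcal or g protein) per kg adjusted body weight."""
    return total_intake / adjusted_body_weight(actual_kg, standard_kg)

# hypothetical patient: 55 kg actual weight, 68 kg NHANES II standard weight
print(round(normalized_intake(1800, 55, 68), 1))  # daily energy intake, kcal/kg ABW
print(round(normalized_intake(60, 55, 68), 2))    # daily protein intake, g/kg ABW
```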
Anthropometric measurements: BMI, triceps skinfold thickness (TSF), mid-arm circumference (MAC) and calculated mid-arm muscle circumference (MAMC) were measured as anthropometric variables. The BMI was calculated as dry weight in kilograms divided by the square of height in meters. TSF was measured with a conventional skinfold caliper using standard techniques. Mid-arm circumference was measured with a plastic measuring tape. MAMC was estimated from MAC and TSF.
Body composition analysis: Body composition was determined by bioelectrical impedance analysis (B.I.A. Nutriguard-M, Data-Input, Frankfurt, Germany). We used gel-based electrodes specifically developed for BIA measurements (Bianostic AT, Data-Input GmbH). On the day of blood collection, patients underwent BIA measurement approximately 30 minutes post-dialysis. BIA electrodes were placed on the same body side used for anthropometric measurements. The multi-frequency technique was used. FFM was calculated using the approach of Kyle et al., validated by dual-energy x-ray absorptiometry in 343 healthy adult subjects [29]. Fat mass and fat-free mass were standardized by squared height (m2) and expressed in kg/m2 as fat mass index (FMI) and fat-free mass index (FFMI), respectively. Phase angle (PA) describes the relationship between the two vector components of impedance (reactance and resistance) of the human body to an alternating electric current. PA has been shown to provide a BIA-derived prognostic index of morbidity and mortality [30].
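A compact sketch of how these indices can be derived from the raw measurements is given below; the MAMC relation (MAC − π × TSF) and the phase-angle definition (arctangent of reactance over resistance) are the standard textbook forms rather than equations reproduced from this article, and the numeric inputs are invented for illustration.

```python
import math

def bmi(dry_weight_kg, height_m):
    """Body mass index from post-dialysis (dry) weight."""
    return dry_weight_kg / height_m ** 2

def mamc(mac_cm, tsf_cm):
    """Mid-arm muscle circumference (standard formula; assumed, not quoted from the paper)."""
    return mac_cm - math.pi * tsf_cm

def mass_index(mass_kg, height_m):
    """FMI or FFMI: fat (or fat-free) mass standardized by squared height, kg/m2."""
    return mass_kg / height_m ** 2

def phase_angle_deg(reactance_ohm, resistance_ohm):
    """BIA phase angle in degrees (standard definition)."""
    return math.degrees(math.atan(reactance_ohm / resistance_ohm))

# hypothetical patient
print(round(bmi(68, 1.70), 1))             # BMI, kg/m2
print(round(mamc(28.0, 1.5), 1))           # MAMC, cm
print(round(mass_index(20.4, 1.70), 1))    # FMI for 20.4 kg fat mass
print(round(phase_angle_deg(55, 480), 1))  # phase angle, degrees
```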
Laboratory evaluation: Blood samples were taken in a non-fasting state before a midweek hemodialysis session. CBC, creatinine, urea, albumin, transferrin and total cholesterol were measured by routine laboratory methods. IL-6 and leptin were measured in plasma samples using commercially available enzyme-linked immunosorbent assay (ELISA) kits (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol. The mean minimal detectable level was 0.7 pg/ml for IL-6 and 7.8 pg/ml for leptin. Intra-assay and inter-assay coefficients of variation were 4.2% and 6.4%, respectively, for IL-6, and 3.3% and 5.4%, respectively, for leptin.
Statistical analysis: Data are expressed as mean ± standard deviation (SD), as median and interquartile range (Q1 to Q3) for variables that did not follow a normal distribution, or as frequencies, as noted. To compare the means of continuous variables between sex-specific tertiles of leptin, one-way ANOVA and analysis of covariance with adjustments (ANCOVA) were used. Categorical data are presented as percentages and were compared among groups by χ2 tests. Correlations between leptin and clinical and laboratory parameters were assessed using Spearman rank-order correlation coefficients (because of the skewed distribution of leptin levels). Multivariate regression analysis was performed to obtain partial (adjusted) correlations (R2). Case-mix-adjusted models were controlled for age, gender, history of CV disease, presence or absence of diabetes mellitus, dialysis vintage and fat mass. Repeated-measures analysis of variance was performed using the MIXED model. Only patients with ≥ 2 study visits were included in the analyses. Base models were adjusted for age, sex, diabetes status, dialysis vintage, history of cardio-vascular diseases and fat mass. F tests were used to assess the significance of the fixed effects, and P less than 0.05 was considered significant. To evaluate whether leptin influenced the trends in the various dependent variables, we included in each base model terms for individual "leptin-by-time" interactions. Survival analyses were performed using the Kaplan-Meier survival curve and the Cox proportional hazard model. The univariate and multivariate Cox regression analyses are presented as hazard ratios with confidence intervals (HR; CI). All statistical analyses were performed using SPSS software, version 16.0 (SPSS Inc, Chicago, IL).
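As a concrete illustration of the tertile-based comparisons, the sketch below forms sex-specific leptin tertiles and runs a one-way ANOVA on a continuous variable across them; the column names and data file are assumptions, and the original analyses were performed in SPSS.

```python
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("hd_baseline.csv")  # assumed: one row per patient at baseline

# sex-specific tertiles of baseline serum leptin
df["leptin_tertile"] = (
    df.groupby("sex")["leptin"]
      .transform(lambda s: pd.qcut(s, 3, labels=[1, 2, 3]))
)

# one-way ANOVA of, e.g., BMI across the three leptin tertiles
groups = [g["bmi"].dropna() for _, g in df.groupby("leptin_tertile")]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```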
Results:
Cross-sectional associations: The 101 prevalent HD patients participating in this study included 36.6% women; over half of the participants (51.5%) had diabetes mellitus (DM), and nearly the same proportion (46.9%) had a history of cardiovascular disease, including myocardial infarction, coronary artery procedures such as angioplasty or surgery, previous cerebral-vascular accident, or peripheral vascular disease. The average age was 64.6 ± 11.5 years and the median maintenance HD vintage was 31 months (Q1 to Q3, 15.5-51.5 months) at study initiation. At the start of the cohort, serum leptin averaged (mean ± SD) 37.3 ± 39.4 ng/ml (median, 16.5 ng/ml; Q1 to Q3, 5.60-71.4 ng/ml). To examine the relationship between different levels of serum leptin and relevant demographic, clinical and laboratory measures, serum leptin levels were divided into three equal gender-specific tertiles (data not shown). The proportion with diabetes and the age distribution were similar across the three groups. No statistically significant differences were evident between groups in the use of medications that may affect inflammatory markers, such as statins, aspirin, or angiotensin-converting enzyme inhibitors (data not shown). As expected, females tended to have higher leptin levels than males (p = 0.0001). Body composition parameters measured by anthropometry and bioelectric impedance analysis (BIA), including phase angle (PA), were incrementally greater across increasing leptin tertiles, even after adjustment for baseline demographic and clinical parameters (age, DM status, dialysis vintage and past cardio-vascular disease). However, these associations were attenuated after inclusion of fat mass in the multivariate analyses. No significant differences in any of the biochemical markers of nutrition, normalized protein nitrogen appearance (nPNA), or actual energy or protein intake normalized to adjusted body weight were found between the HD patient groups according to leptin tertiles. At baseline, anthropometric measurements (body mass index [BMI], triceps skinfold thickness [TSF], mid-arm circumference [MAC] and calculated mid-arm muscle circumference [MAMC]) and BIA-derived parameters of body composition (fat mass index [FMI] and fat-free mass index [FFMI]) correlated positively with serum leptin levels, in both unadjusted and adjusted (for age, DM status, dialysis vintage, and previous cardio-vascular disease) models.
Thus, an increased level of leptin was associated with better nutritional status (Table 1). Associations between serum leptin levels and the two laboratory nutritional markers (albumin and cholesterol) and the BIA-derived PA were statistically significant, but these associations were attenuated after adjustment for demographic and clinical parameters including fat mass. No significant correlations were observed between serum leptin and nPNA or dietary energy (DEI) and protein intake (DPI) normalized to adjusted body weight in either sex after adjustment for demographic and clinical parameters including fat mass.
Table 1. Unadjusted and multivariate-adjusted Spearman's correlation coefficients of baseline serum leptin and nutritional clinical and laboratory parameters in the study population at baseline. Correlation coefficient values ≥ 0.25 appear in bold. a All case-mix-adjusted models include age, gender, DM status, dialysis vintage, CV disease in the past and fat mass. Abbreviations: nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, calculated mid-arm muscle circumference; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular.
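To illustrate the correlation analysis behind Table 1, the sketch below computes an unadjusted Spearman coefficient and a case-mix-adjusted partial correlation with the pingouin package; the variable names and input file are assumptions, and this only approximates the SPSS-based analysis actually used.

```python
import pandas as pd
from scipy.stats import spearmanr
import pingouin as pg

df = pd.read_csv("hd_baseline.csv")  # assumed baseline dataset

# unadjusted Spearman correlation between leptin and BMI
rho, p = spearmanr(df["leptin"], df["bmi"], nan_policy="omit")
print(f"unadjusted rho = {rho:.2f} (p = {p:.3f})")

# partial (case-mix-adjusted) correlation, controlling for the covariates
# used in the paper's adjusted models
adj = pg.partial_corr(
    data=df, x="leptin", y="bmi",
    covar=["age", "sex", "diabetes", "vintage", "cv_disease", "fat_mass"],
    method="spearman",
)
print(adj[["r", "p-val"]])
```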
Longitudinal associations: Linear mixed models were used to study the effects of longitudinal leptin changes on changes in nutritional parameters (slopes) over 24 months, including fixed parameters such as age, gender, diabetes status, dialysis vintage, previous cardio-vascular events, and fat mass (Table 2). No significant changes in dietary intake or in any of the biochemical nutritional markers over time were found. In contrast, a significant reduction in body composition parameters (both those measured by anthropometry [BMI, TSF and MAMC] and by BIA [FMI, FFMI, PA]) was observed over time. However, leptin did not modulate the changes in outcome variables over time in our cohort (leptin-by-time interactions were not significant).
Table 2. Regression coefficients with 95% confidence intervals for the effect of longitudinal leptin changes on changes (slopes) in nutritional parameters during 24 months, based on a mixed-effects model with linear trends for variable and fixed parameters. NOTE: All nutritional variables presented in the Table were modeled separately as dependent variables, whereas independent variables included fixed factors (age, gender, diabetes status, dialysis vintage, past cardio-vascular disease and fat mass) and leptin as a continuous variable. Terms for individual "leptin-by-time" interactions were included in each model. The model takes into account every measurement of leptin and of the presented nutritional variables at each time point for each patient separately. Presented regression coefficients demonstrate changes in outcome variables with time (24 months). a P for trend for leptin-by-time interactions. b FMI and BMI were adjusted only for age, gender, DM status, history of cardio-vascular disease and dialysis vintage.
Abbreviations: DEI, daily energy intake; DPI, daily protein intake; nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; EPO, erythropoietin; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, calculated mid-arm muscle circumference; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular.
A mixed model including the fixed factors sex, age, DM status, dialysis vintage and history of CV disease, together with selected nutritional parameters, was used to predict variability in leptin levels during the study (Table 3). We observed a significant change in leptin levels with time (linear estimate ± SE: -2.5010 ± 0.57 ng/ml/2y; p < 0.001). According to this model, the effect of longitudinal changes in FMI on serum leptin variability over time was evident.
Table 3. Effects of longitudinal changes in nutritional parameters on changes (slopes) of leptin during 24 months, based on a mixed-effects model with linear trends for variable and fixed parameters. NOTE: The Table represents a simple mixed model with leptin as the dependent variable, and the presented nutritional parameters and fixed factors (age, gender, DM status, dialysis vintage and history of CV disease) as independent variables. The model takes into account every measurement of leptin and of the presented nutritional variables at each time point for each patient separately. Abbreviations: DEI, daily energy intake; IL-6, interleukin-6; EPO, erythropoietin; FMI, fat mass index; DM, diabetes mellitus; CV, cardio-vascular.
Of interest, the decline in leptin levels over time was more rapid in the highest leptin tertile, in both unadjusted (p = 0.007 for leptin-by-time interaction) and fully adjusted (p = 0.047 for leptin-by-time interaction) models (Figure 2).
Figure 2. Leptin levels decline over time in the study population. Changes in estimated marginal means of serum leptin levels at baseline (month 0), month 6, month 12, month 18 and month 24 in the study population groups according to sex-specific tertiles* of baseline serum leptin: 1. Unadjusted model: P = 0.007 for leptin-by-time interaction; 2. Fully adjusted model (for age, gender, DM status, dialysis vintage, previous cardio-vascular morbidity and fat mass): P = 0.047 for leptin-by-time interaction. *Median (interquartile range) leptin (ng/ml) tertiles for men (n = 64) were 1.70 (1.2-2.4), 10.2 (7.6-13.8) and 38.3 (26.2-81.3), and for women (n = 37) were 15.8 (9.3-18.3), 68.0 (41.4-85.9) and 112.6 (109.2-116.0).
Survival analysis: During an average follow-up of 35 months (median, 40 months; Q1 to Q3, 17-52), 33 patients died: 11 (32%) in the low-tertile group, 12 (35%) in the middle-tertile group and 10 (30%) in the high-tertile group, with a median time to death of 25 months (Q1 to Q3, 14-40 months). Cumulative survival was unaffected by baseline serum leptin levels (Figure 3). To further assess the possible association of baseline serum leptin level with cardio-vascular morbidity and mortality, cumulative hazards of all-cause death and of a first composite cardio-vascular event were calculated from Cox regression, after adjustment for baseline demographic parameters and fat mass. A first cardio-vascular event was defined as myocardial infarction (MI), a requirement for coronary artery procedures such as angioplasty or surgery, cerebral-vascular accident (CVA), or peripheral vascular disease (PVD) requiring angioplasty, bypass or amputation, diagnosed after the participant entered the study. No statistically significant differences in hazards were observed between the groups according to leptin tertiles. Additionally, Cox regression analysis of all-cause mortality showed no difference in survival probability for each 10 ng/ml increase in baseline serum leptin (data not shown). Figure 3. Kaplan-Meier curves of surviving patients comparing subgroups of patients stratified by baseline serum leptin tertiles. Thus, although chronic HD patients with higher baseline leptin had better nutritional status, as evidenced by their significantly more favorable body composition parameters at the start of the study, this disparity did not appear to widen over the 24 months of follow-up; thus, leptin levels did not predict changes in body composition parameters over time.
Unadjusted and multivariate adjusted Spearman's correlation coefficients of baseline serum leptin and nutritional clinical and laboratory parameters in the study population at baseline. Correlation coefficient values ≥ 0.25 appear in bold. a All case-mix-adjusted models include age, gender, DM status, dialysis vintage, CV disease in the past and fat mass. Abbreviations: nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, mid-arm muscle circumference calculated; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular. Longitudinal associations: Linear mixed models were used to study the effects of longitudinal leptin changes on changes in nutritional parameters (slopes) over 24 months including fixed parameters such as age, gender, diabetes status, dialysis vintage, previous cardio-vascular events, and fat mass (Table 2). No significant changes in dietary intake or in any of the biochemical nutritional markers over time were found. In contrast, a significant reduction in body composition parameters (both those measured by anthropometry [BMI, TSF and MAMC] and by BIA [FMI, FFMI, PA]) over time was observed. However, leptin did not modulate the changes in outcome variables over time in our cohort (leptin-by-time interactions were insignificant). Regression coefficients with 95% Confidence Intervals for the effect of longitudinal leptin changes on nutritional parameter changes (slopes) during 24 months based on a mixed-effects model with linear trends for variable and fixed parameters NOTE: All nutritional variables presented in the Table were modeled separately as dependent variables, whereas independent variables included fixed factors (such as age, gender, diabetes status, dialysis vintage, past cardio-vascular disease and fat mass) and leptin as a continuous variable. Terms for individual "leptin-by-time" interactions were included in each model. The model takes into account every measurement of leptin and the presented nutritional variables at each time point for each patient separately. Presented regression coefficients demonstrate changes in outcome variables with time (24 months). a P for trend for leptin-by-time interactions. b FMI and BMI were adjusted only by age, gender, DM status, history of cardio-vascular disease and dialysis vintage. Abbreviations: DEI, daily energy intake; DPI, daily protein intake; nPNA, normalized protein nitrogen appearance; IL-6, interleukin-6; EPO, erythropoietin; BMI, body mass index; TSF, triceps skinfold thickness; MAC, mid-arm circumference; MAMC, mid-arm muscle circumference calculated; FMI, fat mass index; FFMI, fat-free mass index; DM, diabetes mellitus; CV, cardio-vascular. A mixed model including the fixed factors sex, age, DM status, dialysis vintage, history of CV disease and some of the nutritional parameters was used to predict variability in leptin levels during the study and is presented in Table 3. We observed significant changes of leptin levels with time (linear estimate ± SE: -2.5010 ± 0.57 ng/ml/2y; p < 0.001). The effect of longitudinal changes of FMI on serum leptin variability over time was evident according to this model. 
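The linear mixed model with a leptin-by-time interaction described above can be sketched as follows. This is an assumed specification for illustration only, not the authors' model: the long-format DataFrame and column names (patient_id, month, leptin_ng_ml, bmi, age, sex, dm, vintage_months, cv_history, fat_mass_kg) are hypothetical, and statsmodels MixedLM with a random intercept per patient is just one way to fit such a model.

# Illustrative sketch only. Assumes one row per patient per visit (months 0, 6, 12, 18, 24).
import statsmodels.formula.api as smf

def fit_leptin_by_time_model(long_df, outcome: str = "bmi"):
    """Linear mixed model for a nutritional outcome over 24 months, with a random
    intercept per patient; the leptin_ng_ml:month term corresponds to the
    'leptin-by-time' interaction reported in the text."""
    formula = (
        f"{outcome} ~ leptin_ng_ml * month + age + sex + dm + "
        "vintage_months + cv_history + fat_mass_kg"
    )
    model = smf.mixedlm(formula, data=long_df, groups=long_df["patient_id"])
    result = model.fit(reml=True)
    return result  # result.summary() shows the leptin_ng_ml:month coefficient and p-value

A non-significant interaction coefficient in such a model is what the text describes as leptin not modulating the change in the outcome over time; for FMI and BMI, fat mass would be dropped from the covariate list, as noted for Table 2.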
Effects of longitudinal changes of nutritional parameters on changes (slopes) of leptin during 24 months based on a mixed-effects model with linear trends for variable and fixed parameters NOTE: The Table represents a simple Mixed-Model with leptin as the dependent variable, and the presented nutritional parameters and fixed factors (age, gender, DM status, dialysis vintage and history of CV disease) as independent variables. The model takes into account every measurement of leptin and the presented nutritional variables at each time point for each patient separately. Abbreviations: DEI, daily energy intake; IL-6, interleukin-6; EPO, erythropoietin; FMI, fat mass index; DM, diabetes mellitus; CV, cardio-vascular. Of interest, a significant reduction in leptin levels over time was observed, with a more rapid decline in the highest leptin tertile in both unadjusted (p = 0.007 for leptin-by-time interaction) and fully adjusted (p = 0.047 for leptin-by-time interaction) models (Figure 2). Leptin levels decline over time in the study population. Changes in estimated marginal means of serum leptin levels at baseline (month 0), month 6, month 12, month 18 and month 24 in the study population groups according to sex-specific tertiles* of baseline serum leptin: 1. Unadjusted model: P = 0.007 for leptin-by-time interaction; 2. Fully adjusted model (for age, gender, DM status, dialysis vintage, previous cardio-vascular morbidity and fat mass): P = 0.047 for leptin-by-time interaction. *Median (interquartile range) leptin (ng/ml) tertiles for men (n = 64) were 1.70 (1.2-2.4), 10.2 (7.6-13.8) and 38.3 (26.2-81.3) and for women (n = 37) were 15.8 (9.3-18.3), 68.0 (41.4-85.9) and 112.6 (109.2-116.0). Survival analysis: During an average follow-up of 35 months (median, 40 months; Q1 to Q3, 17-52), 33 patients died: 11 (32%) in the low tertile group, 12 (35%) in the median tertile group and 10 (30%) in the high tertile group, with a median time to death of 25 months (Q1 to Q3, 14-40 months). Cumulative incidences of survival were unaffected by the baseline serum leptin levels (Figure 3). To further assess the possible association of baseline serum leptin level with cardio-vascular morbidity and mortality, cumulative hazards of all-cause death and first composite cardio-vascular event from Cox regression (after adjustment for baseline demographic parameters and fat mass) were calculated. First cardio-vascular event was defined as myocardial infarction (MI), requiring coronary artery procedures such as angioplasty or surgery, cerebral-vascular accident (CVA), or peripheral vascular disease (PVD), requiring angioplasty, bypass or amputation and diagnosed after the participant entered the study. No statistically significant differences in hazards between the different groups according to leptin tertiles were observed. Additionally, no differences were found by Cox regression analysis in terms of survival probabilities from the analysis of all-cause mortality for each 10 ng/ml increase in baseline serum leptin (data not shown). Kaplan-Meier curves of surviving patients comparing subgroups of patients stratified by baseline serum leptin tertiles. Thus, although chronic HD patients with higher baseline leptin had better nutritional status as evidenced by their significantly more favorable body composition parameters at the start of the study, this disparity did not appear to widen over the 24 months of follow-up; thus, leptin levels did not predict body composition parameter changes over time. 
Moreover, no survival advantage of the high leptin group was found in our observational cohort. Discussion: In the present study, we wished to determine whether changes of serum leptin levels are correlated with nutritional status over time in a cohort of prevalent hemodialysis patients. We observed that despite the strong and significant cross-sectional associations between serum leptin and body composition parameters at baseline, leptin levels did not reflect changes in nutritional status in our population. Our study confirms several previous cross-sectional studies in which elevated serum leptin levels were demonstrated to be positively associated with several nutritional markers (laboratory or body composition parameters, mainly BMI and fat mass) in chronic HD patients [14,20,21,23-25]. HD patients treated with megestrol acetate [24], growth hormone [31] or high-calorie supplementation [32] demonstrated progressively increased serum leptin in parallel with an improved nutritional state. However, not all studies are consistent with positive associations between elevated levels of leptin and several nutritional parameters in ESRD patients. An inverse correlation was observed between leptin/fat mass and dietary intake, as well as with significantly lower lean tissue mass, in chronic renal failure (23 patients) and ESRD patients receiving PD (24 patients) and HD (22 patients) therapy in a study by Young et al [5]. In a cross-sectional study of 28 dialysis patients and 41 healthy control subjects, Johansen et al [20] showed that leptin levels were negatively correlated with albumin and PCR, suggesting a possible negative role of leptin in nutrition. Based on longitudinal observation, Stenvinkel et al [18] demonstrated that increases in serum leptin levels of 36 peritoneal dialysis patients were associated with a decrease in lean body mass. The discrepancies between these studies may be related to the population recruited, to their small sample size, and to their design; specifically, serum leptin levels were measured among patients with varying degrees of CRF, some of whom were undergoing maintenance hemodialysis and peritoneal dialysis, and in healthy controls [5,18,20]; many of these studies had a low number of study participants [5,18,20,23-25] and a majority of these studies were cross-sectional [5,20,23,25]. Leptin, a regulator of eating behavior, has been shown to be a major determinant of anorexia in uremic animals via signaling through the hypothalamic melanocortin system [15]. Subsequently, a recent experimental study by Cheung et al [33] showed that intraperitoneal administration of the melanocortin-4 receptor (MC4-R) antagonist NBI-12i stimulates food intake and weight gain in uremic mice. However, the effects of leptin in uremic patients are not unequivocal. Relevant to our findings, Bossola et al [17] demonstrated that serum leptin levels were not different in anorexic and in non-anorexic hemodialysis patients. Several other studies also failed to demonstrate any role of hyperleptinemia in reduced dietary intake in ESRD patients receiving maintenance HD treatment [5,16,34]. Since hyperleptinemia is common in HD patients mainly due to impaired renal clearance [20], it seemed reasonable to assume that selective leptin resistance (such as occurs in obese humans [35]) is the responsible mechanism for attenuation of the negative effect of leptin on appetite, and accordingly, on dietary intake under conditions of continuous stimulation. 
The molecular mechanisms of leptin resistance, with a focus on the contribution of the intracellular tyrosine residues in the hypothalamic receptor of leptin (LEPRb) and their interaction with the negative feedback regulators, suppressor of cytokine signaling 1 and 3 (SOCS1 and SOCS3), are described in a recent study by Knobelspies et al [36]. Thus, although the cross-group comparisons confirm that hyperleptinemia is associated with better body composition parameters, no unequivocal associations in terms of biochemical data or dietary intake with serum leptin levels are evident in HD patients according to the available literature. Our study confirms these data. Another finding of the present study was that 24 months of HD was associated with a significant reduction of plasma leptin levels, with a more rapid decline observed in patients with higher baseline leptin levels. Only limited information is available on the relationship between longitudinal serum leptin measurements and nutritional characteristics in chronic HD patients [18,32]. To the best of our knowledge, the present study is the first long-term longitudinal study to show the relationship between serum leptin level and nutritional status in ESRD patients receiving maintenance HD therapy. Several mechanisms may be involved in decrease of serum leptin levels over time in hemodialysis patients. These include, the increasing use of high flux and super-flux hemodialyzers [37], and/or use of alternative dialysis strategies such as hemodiafiltration [38]; recombinant human erythropoietin treatment (generally followed by a significant decline of leptinaemia in hemodialysed patients [39]); chronic inflammation (under the assumption that leptin, like albumin or transferrin, is a negative acute phase protein in chronic HD patients [22], although other studies [40] do not support this mechanism); metabolic acidosis, which can reduce the release of leptin from adipose tissue [41]; and finally, a decrease in fat mass leading to a decrease in the rate of leptin biosynthesis [14,18] (based on an insulin or a nutrient-sensing pathway regulating leptin gene expression in fat tissue [42]). Although patients in our facility were all treated using low flux hemodialysis, reduction of BMI and accordingly of fat mass over 24 months of the study may explain this longitudinal decrease of serum leptin levels in our cohort. Of interest is the observation that adverse changes in body composition parameters occur over time, supporting the hypothesis that end-stage renal disease is associated with wasting as proposed in some [43,44] but not all [45] of the previous studies. The results of our study were consistent with those reported by Johansen et al [43] who did not find significant changes in any of the biochemical markers with time, but a significant reduction in phase angle over 1 year follow up was evident in prevalent HD patients. In parallel, together with reduced body composition parameters over time, we observed a longitudinal decline in serum leptin levels in our cohort. However, the lack of an association between serum leptin levels and future changes in body composition parameters (as exhibited by non-significant leptin-by-time interactions), suggests that the levels of leptin follow body composition (mainly fat mass) trajectory or its accompanying metabolic changes, rather than governing it. 
Finally, serum leptin level did not appear as a useful predictor of all cause mortality or cardio-vascular morbidity in our study population during up to 4 years of observation. In this aspect, our findings support the results of earlier study by Tsai et al [46]. Although Scholze et al [26] showed a paradoxical reverse association between leptin level and clinical outcome in 71 chronic HD patients, our patients did not show such an association. The difference between our results and theirs might have been caused by differences in the cohorts. First, the population presented by Scholze et al [26] had a lower BMI at baseline than our patients, and no body composition parameters were measured. Further, the prevalence of diabetes mellitus was much greater in patients in the lower leptin group than in patients of the higher leptin group in the cohort of Scholze et al [26]. Indeed, HD patients with diabetes have a poor prognosis [4] that might be the cause of the lower survival rate in the lower leptin group in the above study [26]. While low levels of leptin may reflect a state of malnutrition in HD patients, proatherogenic effects of hyperleptinemia were linked to an adverse cardiovascular profile in general population [10,12]. We speculate that the lack of long-term benefits of hyperleptinemia in terms of survival of chronic HD patients could reflect the modulating influence of proatherogenic effects of high serum leptin levels. Finally, no relationships between serum leptin levels and previous cardio-vascular events were found by Diez et al [47] in a retrospective study on 82 dialysis patients. Zoccali et al [40] also did not show any difference in cardiovascular event-free survival in cohort of 192 hemodialysis patients after their stratification on the basis of the median value of plasma leptin. Some limitations of the present study should be considered. First, this study is based on a relatively small sample size limiting detection of more subtle changes over time. Second, this study used only an observational approach, without manipulation of exposure factors, and therefore, no definitive cause-and-effect relationship can be derived for any of the risk factors analyzed. Dietary intake assessed by 3 day food records is another limitation of the study, as results can be subjective and incomplete, and can vary considerably from day to day as a result of dialysis treatment sessions and associated disturbances in food intake. Finally, significant circadian fluctuations of plasma leptin (regardless of the underlying degree of glucose tolerance, fasting, or diet) can certainly obscure some small but significant changes in plasma leptin [48] and may be a source of potential bias for longitudinal observation. Nevertheless, the present study has the advantage of providing long-term longitudinal data on the relationship between serum concentrations of leptin and nutritional parameters in prevalent HD patients. Conclusions: In conclusion, we found positive linear associations between serum leptin levels and body composition, whereas no significant relations in terms of biochemical data or dietary intake were evident in our cohort at baseline. Analyzing longitudinal data we concluded that any influence of serum leptin levels on nutritional status is limited to mirroring attained BMI and fat mass trajectory. Further, nutritional advantage at baseline of the study population with hyperleptinemia did not translate into long-term benefits in terms of survival. 
Taken together, our results suggest that in our cohort, leptin levels reflect fat mass depots, rather than independently contributing to uremic anorexia or modifying nutritional status and/or survival in chronic HD patients. These results are of particular importance if the use of subcutaneous injections of recombinant methionyl leptin [49] or leptin removal by super-flux polysulfone dialysers [37] is contemplated as a therapeutic intervention.
Background: The influence of serum leptin levels on nutritional status and survival in chronic hemodialysis patients remained to be elucidated. We conducted a prospective longitudinal study of leptin levels and nutritional parameters to determine whether changes of serum leptin levels modify nutritional status and survival in a cohort of prevalent hemodialysis patients. Methods: Leptin, dietary energy and protein intake, biochemical markers of nutrition and body composition (anthropometry and bioimpedance analysis) were measured at baseline and at 6, 12, 18 and 24 months following enrollment, in 101 prevalent hemodialysis patients (37% women) with a mean age of 64.6 ± 11.5 years. Observation of this cohort was continued over 2 additional years. Changes in repeated measures were evaluated, with adjustment for baseline differences in demographic and clinical parameters. Results: Significant reduction of leptin levels with time were observed (linear estimate: -2.5010 ± 0.57 ng/ml/2 y; p < 0.001) with a more rapid decline in leptin levels in the highest leptin tertile in both unadjusted (p = 0.007) and fully adjusted (p = 0.047) models. A significant reduction in body composition parameters over time was observed, but was not influenced by leptin (leptin-by-time interactions were not significant). No significant associations were noted between leptin levels and changes in dietary protein or energy intake, or laboratory nutritional markers. Finally, cumulative incidences of survival were unaffected by the baseline serum leptin levels. Conclusions: Thus leptin levels reflect fat mass depots, rather than independently contributing to uremic anorexia or modifying nutritional status and/or survival in chronic hemodialysis patients. The importance of such information is high if leptin is contemplated as a potential therapeutic target in hemodialysis patients.
Background: In recent years, the number of patients with end-stage renal disease (ESRD) has been increasing worldwide [1]. Depending in part upon the method used to evaluate nutritional status and the population studied, from 40 to 70 percent of patients with ESRD are malnourished [2,3] resulting in poor clinical outcomes [4]. Among the mechanisms responsible for malnutrition, leptin was believed to influence nutritional markers in patients with ESRD [5]. Leptin is a 16-kDa protein identified as the product of the obese gene; it is exclusively produced in adipocytes, and regulates food intake and energy expenditure in animal models [6]. Leptin decreases food intake by decreasing NPY (neuropeptide Y - one of the most potent stimulators of food intake) mRNA [7] and increasing alpha-MSH (alpha-melanocyte-stimulating hormone - an inhibitor of food intake) [8]. Besides linking adiposity and central nervous circuits to reduced appetite and enhanced energy expenditure in the general population [9], leptin has been shown to increase overall sympathetic nerve activity [10], facilitate glucose utilization and improve insulin sensitivity [11]. Furthermore, the prospective West of Scotland Coronary Prevention Study (WOSCOPS) reported that elevated leptin increases the relative risk of cardio-vascular disease in the general population independently of fat mass [12]. In general, serum leptin levels are significantly elevated in patients with renal failure, particularly when compared to age, gender and body mass index (BMI)-matched controls [13,14]. However, the role of hyperleptinemia in ESRD patients is somewhat unconventional. In contrast with its anorexogenic effects recognized in the general population [9] and even in experimental models of uremia (in subtotal nephrectomized and leptin receptor-deficient [db/db] mice) [15], leptin has not been reported to affect perceived appetite and nutrient intake in dialysis patients [16,17]. Although in some observational studies, increased serum leptin concentrations were observed in ESRD patients in parallel with loss of lean body mass [18,19] or with hypoalbuminemia and low protein intake [20], some others failed to find any correlation between hyperleptinemia and weight change [9] or lean mass [21] in this population. Moreover, several clinical studies suggested that leptin is a negative acute phase protein [22] and can serve as a marker of adequate nutritional status, rather than an appetite-reducing uremic toxin in hemodialysis patients [23-25]. Finally, the relationship between elevated serum leptin levels and clinical outcomes in ESRD has not been fully defined. In one small prospective cohort of hemodialysis patients, lower baseline serum leptin levels predicted mortality [26], but neither changes in leptin over time were measured, nor were leptin levels normalized to body fat mass in this study. Thus, the influence of serum leptin levels on nutritional status and survival in chronic hemodialysis patients remained to be elucidated. In view of leptin's physiological role, information on effects of prolonged hyperleptinemia (independent of fat mass) on nutritional status of chronic hemodialysis patients, which may also impact on their survival, would be of interest. The aim of the present prospective longitudinal study was therefore to study longitudinal changes in serum leptin levels and to relate them to the changes in nutritional markers and survival in chronic hemodialysis patients. 
Conclusions: IB designed, organized and coordinated the study, managed data entry, contributed to data analysis and interpretation of data and wrote the manuscript. IS carried out the immunoassays, contributed to analyzing and interpretation of data and writing the manuscript. AA and HY carried out nutrition assessment (food intake analysis, body composition assessment), contributed to analyzing and interpretation of data and writing the manuscript. LF contributed to analyzing and interpretation of data. ZA and JW contributed to analyzing and interpretation of data and writing the manuscript. All authors read and approved the final manuscript.
Background: The influence of serum leptin levels on nutritional status and survival in chronic hemodialysis patients remained to be elucidated. We conducted a prospective longitudinal study of leptin levels and nutritional parameters to determine whether changes of serum leptin levels modify nutritional status and survival in a cohort of prevalent hemodialysis patients. Methods: Leptin, dietary energy and protein intake, biochemical markers of nutrition and body composition (anthropometry and bioimpedance analysis) were measured at baseline and at 6, 12, 18 and 24 months following enrollment, in 101 prevalent hemodialysis patients (37% women) with a mean age of 64.6 ± 11.5 years. Observation of this cohort was continued over 2 additional years. Changes in repeated measures were evaluated, with adjustment for baseline differences in demographic and clinical parameters. Results: Significant reduction of leptin levels with time were observed (linear estimate: -2.5010 ± 0.57 ng/ml/2 y; p < 0.001) with a more rapid decline in leptin levels in the highest leptin tertile in both unadjusted (p = 0.007) and fully adjusted (p = 0.047) models. A significant reduction in body composition parameters over time was observed, but was not influenced by leptin (leptin-by-time interactions were not significant). No significant associations were noted between leptin levels and changes in dietary protein or energy intake, or laboratory nutritional markers. Finally, cumulative incidences of survival were unaffected by the baseline serum leptin levels. Conclusions: Thus leptin levels reflect fat mass depots, rather than independently contributing to uremic anorexia or modifying nutritional status and/or survival in chronic hemodialysis patients. The importance of such information is high if leptin is contemplated as a potential therapeutic target in hemodialysis patients.
12,547
324
[ 632, 540, 140, 77, 201, 124, 320, 3811, 679, 861, 357, 1745, 161 ]
14
[ "leptin", "patients", "mass", "study", "fat", "vascular", "time", "serum", "levels", "serum leptin" ]
[ "leptin better nutritional", "levels leptin nutritional", "leptin nutritional parameters", "leptin nutritional clinical", "leptin changes nutritional" ]
null
[CONTENT] Leptin | Nutrition | Bioimpedance | Inflammation | Hemodialysis [SUMMARY]
[CONTENT] Leptin | Nutrition | Bioimpedance | Inflammation | Hemodialysis [SUMMARY]
null
[CONTENT] Leptin | Nutrition | Bioimpedance | Inflammation | Hemodialysis [SUMMARY]
[CONTENT] Leptin | Nutrition | Bioimpedance | Inflammation | Hemodialysis [SUMMARY]
[CONTENT] Leptin | Nutrition | Bioimpedance | Inflammation | Hemodialysis [SUMMARY]
[CONTENT] Aged | Biomarkers | Body Composition | Body Mass Index | Cross-Sectional Studies | Dietary Proteins | Female | Follow-Up Studies | Humans | Kidney Failure, Chronic | Leptin | Linear Models | Longitudinal Studies | Male | Middle Aged | Multivariate Analysis | Nutritional Status | Prospective Studies | Renal Dialysis | Survival Analysis [SUMMARY]
[CONTENT] Aged | Biomarkers | Body Composition | Body Mass Index | Cross-Sectional Studies | Dietary Proteins | Female | Follow-Up Studies | Humans | Kidney Failure, Chronic | Leptin | Linear Models | Longitudinal Studies | Male | Middle Aged | Multivariate Analysis | Nutritional Status | Prospective Studies | Renal Dialysis | Survival Analysis [SUMMARY]
null
[CONTENT] Aged | Biomarkers | Body Composition | Body Mass Index | Cross-Sectional Studies | Dietary Proteins | Female | Follow-Up Studies | Humans | Kidney Failure, Chronic | Leptin | Linear Models | Longitudinal Studies | Male | Middle Aged | Multivariate Analysis | Nutritional Status | Prospective Studies | Renal Dialysis | Survival Analysis [SUMMARY]
[CONTENT] Aged | Biomarkers | Body Composition | Body Mass Index | Cross-Sectional Studies | Dietary Proteins | Female | Follow-Up Studies | Humans | Kidney Failure, Chronic | Leptin | Linear Models | Longitudinal Studies | Male | Middle Aged | Multivariate Analysis | Nutritional Status | Prospective Studies | Renal Dialysis | Survival Analysis [SUMMARY]
[CONTENT] Aged | Biomarkers | Body Composition | Body Mass Index | Cross-Sectional Studies | Dietary Proteins | Female | Follow-Up Studies | Humans | Kidney Failure, Chronic | Leptin | Linear Models | Longitudinal Studies | Male | Middle Aged | Multivariate Analysis | Nutritional Status | Prospective Studies | Renal Dialysis | Survival Analysis [SUMMARY]
[CONTENT] leptin better nutritional | levels leptin nutritional | leptin nutritional parameters | leptin nutritional clinical | leptin changes nutritional [SUMMARY]
[CONTENT] leptin better nutritional | levels leptin nutritional | leptin nutritional parameters | leptin nutritional clinical | leptin changes nutritional [SUMMARY]
null
[CONTENT] leptin better nutritional | levels leptin nutritional | leptin nutritional parameters | leptin nutritional clinical | leptin changes nutritional [SUMMARY]
[CONTENT] leptin better nutritional | levels leptin nutritional | leptin nutritional parameters | leptin nutritional clinical | leptin changes nutritional [SUMMARY]
[CONTENT] leptin better nutritional | levels leptin nutritional | leptin nutritional parameters | leptin nutritional clinical | leptin changes nutritional [SUMMARY]
[CONTENT] leptin | patients | mass | study | fat | vascular | time | serum | levels | serum leptin [SUMMARY]
[CONTENT] leptin | patients | mass | study | fat | vascular | time | serum | levels | serum leptin [SUMMARY]
null
[CONTENT] leptin | patients | mass | study | fat | vascular | time | serum | levels | serum leptin [SUMMARY]
[CONTENT] leptin | patients | mass | study | fat | vascular | time | serum | levels | serum leptin [SUMMARY]
[CONTENT] leptin | patients | mass | study | fat | vascular | time | serum | levels | serum leptin [SUMMARY]
[CONTENT] leptin | patients | hemodialysis patients | esrd | food intake | general | nutritional | serum | serum leptin | hemodialysis [SUMMARY]
[CONTENT] patients | dialysis | performed | urea | leptin | day | study | flow | vascular | body [SUMMARY]
null
[CONTENT] leptin | results | nutritional | levels | leptin levels | nutritional status | survival | serum leptin | serum leptin levels | serum [SUMMARY]
[CONTENT] leptin | patients | mass | serum | serum leptin | fat | study | levels | vascular | nutritional [SUMMARY]
[CONTENT] leptin | patients | mass | serum | serum leptin | fat | study | levels | vascular | nutritional [SUMMARY]
[CONTENT] serum leptin ||| leptin [SUMMARY]
[CONTENT] Leptin | 6 | 12 | 18 | 24 months | 101 | 37% | 64.6 | 11.5 years ||| 2 additional years ||| [SUMMARY]
null
[CONTENT] ||| leptin [SUMMARY]
[CONTENT] serum leptin ||| leptin ||| Leptin | 6 | 12 | 18 | 24 months | 101 | 37% | 64.6 | 11.5 years ||| 2 additional years ||| ||| ||| linear | 0.57 ||| p < 0.001 | 0.007 | 0.047 ||| leptin | leptin ||| ||| serum leptin ||| ||| ||| leptin [SUMMARY]
[CONTENT] serum leptin ||| leptin ||| Leptin | 6 | 12 | 18 | 24 months | 101 | 37% | 64.6 | 11.5 years ||| 2 additional years ||| ||| ||| linear | 0.57 ||| p < 0.001 | 0.007 | 0.047 ||| leptin | leptin ||| ||| serum leptin ||| ||| ||| leptin [SUMMARY]
Non-communicable diseases in disasters: a protocol for a systematic review.
33459280
NCDs require an ongoing management for optimal outcomes, which is challenging in emergency settings, because natural disasters increase the risk of acute NCD exacerbations and lead to health systems' inability to respond. This study aims to develop a protocol for a systematic review on non-communicable diseases in natural disaster settings.
BACKGROUND
This systematic review protocol is submitted to the International Prospective Register of Systematic Reviews (Registration No. CRD42020164032). The electronic databases to be used in this study include: Medline, Scopus, Web of Science, Clinical Key, CINAHL, EBSCO, Ovid, EMBASE, ProQuest, Google Scholar, Cochrane Library (Cochrane database of systematic reviews; Cochrane central Register of controlled Trials). Records from 1997 to 2019 are subject to this investigation. Three independent researchers will review the titles, abstracts, and full texts of articles eligible for inclusion, and if not matched, they will be reviewed by a final fourth reviewer. The proposed systematic review will be reported in accordance with the reporting guideline provided in the Preferred Reporting Items for Systematic review and Meta-Analysis (PRISMA) statement. We select studies based on: PICOs (Participants, Interventions, Comparators, and Outcomes).
METHODS
This systematic review identifies any impacts of natural disasters on patients with NCDs in three stages i.e. before, during and in the aftermath of natural disasters.
RESULTS
A comprehensive response to NCD management in natural disasters is an important but neglected aspect of non-communicable disease control and humanitarian response, which can significantly reduce the potential risk of morbidity and mortality associated with natural disasters.
CONCLUSIONS
[ "Disasters", "Humans", "Noncommunicable Diseases", "Systematic Reviews as Topic" ]
8142338
Introduction
Disasters are serious events disturbing communities.1 In terms of medical aspects, these events cause numerous casualties, and the high demand for medical care may require the enhancement of responders' capacity for delivering timely and effective services.2,3 Individuals with chronic conditions require special attention in planning, response, and recovery phases after natural disasters, given their unique needs for medication, medical equipment and continuing healthcare, and potential exacerbation of their condition that requires resource-intensive management.4 Natural disasters can impact the public health infrastructure and the social protection systems essential for vulnerable populations. Patients with non-communicable diseases, e.g. respiratory and cardiovascular diseases, cancer and diabetes, are among vulnerable groups in critical conditions, who are highly affected by natural disasters.5 Non-communicable diseases (NCDs) require ongoing management for optimal outcomes, which is challenging in emergency settings, since natural disasters increase the risk of acute exacerbation in the health of people with NCDs and decrease the health systems' responsiveness.6 NCD management in emergencies requires the inclusion of non-communicable disease care into standard operating procedures, which would facilitate horizontal and vertical integration with other aspects of relief efforts.7,8 Patients with chronic illnesses, including those with cardiovascular diseases, diabetes, cancers, and respiratory conditions, are among the most vulnerable populations in disaster settings, who face various problems after natural disasters.9,10 With the collapse of some medical care systems and the overloading of operating hospitals and other medical centers, provision of services to chronic patients seems to be a critical concern.11 Inadequately managed chronic illnesses can present a threat to the life and well-being of the community in the immediate wake of these disasters, but their treatment traditionally has not been recognized as a public health or medical priority.12 Many patients did not have their medications or medical supplies, and too many did not know the names of their illnesses or medications or how to access the information.13 A critical problem in the resulting health crisis is the inability of the displaced population to manage their chronic diseases.14 The Centers for Disease Control and Prevention (CDC) reported that NCDs accounted for five of the six most commonly reported conditions after Hurricane Katrina.15 This leads to indirect causes of mortality and more complications, up to 70%-90%, primarily due to the deterioration of life-threatening conditions and exacerbation of chronic diseases.16 After natural disasters, inadequate care and resources, and lack of continuity of care for chronic diseases such as cardiovascular diseases, asthma, diabetes and renal diseases, have led to exacerbation of symptoms associated with increased morbidity and mortality among this population.17 However, non-communicable diseases have received little attention from human-rights organizations during the acute phase of crises and emergencies, and there is a need to refocus on emergency disaster systems in the 21st Century.18 More than 45% of evacuees did not carry their daily medicines with them, meaning that over two thirds of total medicines provided during the disaster response were used to treat chronic diseases.19 Patients with chronic diseases face many challenges and have different needs during and after natural disasters and 
medical care must be continued during and after natural disasters. Statistics on different diseases reveal that at the time of natural disasters, there are an increased number of hospital admissions of patients with at least one chronic disease. As an example, in Sichuan earthquake, patients with hypertension and those with diabetes, constituted 47% and 24%, respectively, of city hospital admissions.20 Despite the significance and the critical impact of natural disasters on patients with non-communicable diseases and the exacerbation of their symptoms, there are not enough studies on the issue.21 Disaster and crisis manuals and guidelines mainly focus on communicable diseases like Aleppo boil, measles, cholera and diarrhea; and among available research literature, there is a limited number of studies on the management of non-communicable diseases in emergencies.22 Several studies have been conducted on the effects of natural disasters on non-communicable diseases reporting the exacerbation of clinical effects and insufficient medical facilities and equipment to care for patients. Therefore, this study aims to obtain a systematic review protocol for non-communicable diseases in the natural disasters.
null
null
Results
The primary outcome of this study will be the identified needs of patients with chronic diseases such as diabetes, cardiovascular disease, hypertension, respiratory disease, and cancer that are critical before, during, and after natural disasters. The secondary outcome of the study will be the reported challenges in the process of health service provision to these patients. The following data will be extracted from the selected articles: general information (title, name of authors, year, research type), subject group (by disease type), classification (before, during and after natural disaster), and factors essential to follow a process of health service provision to patients with chronic illnesses in natural disasters. Acknowledgments This article is extracted from a PhD thesis on Health in Emergency and Disaster, with COI: IR.UMSU.REC.1398.228, the Research Center for Social Factors Effective on Health, Urmia University of Medical Sciences. We extend our special thanks to supervisors and advisors, who collaborated in this research. Abbreviations NCDs: Non-Communicable diseases CDC: Center for Disease Control PICO: Patient/ Population/ Participants/ Problem, Intervention, Comparison, Outcome COPD: Chronic Obstructive Pulmonary Disease PROSPERO: Prospective Register of Systematic Reviews
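The extraction items listed above could be captured in a simple structured record. The sketch below is only an illustration of such a record: the dataclass, field names and phase labels are assumptions derived from the item list in the text, not the protocol's actual data abstraction form.

# Minimal sketch of a per-study data-extraction record for the review described above.
from dataclasses import dataclass, field
from typing import List

DISASTER_PHASES = ("before", "during", "after")

@dataclass
class ExtractionRecord:
    title: str
    authors: List[str]
    year: int
    research_type: str          # e.g. cohort, cross-sectional, qualitative
    disease_group: str          # e.g. diabetes, cardiovascular, COPD, cancer
    disaster_phase: str         # one of DISASTER_PHASES
    service_provision_factors: List[str] = field(default_factory=list)

    def __post_init__(self):
        if self.disaster_phase not in DISASTER_PHASES:
            raise ValueError(f"unknown disaster phase: {self.disaster_phase}")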
null
null
[]
[]
[]
[ "Introduction", "Methods and Methods", "Results" ]
[ "Disasters ar e serious events disturbing communities.1 In terms of medical aspects, these events cause numerous casualties and the high demand of medical care may require the enhancement of responders’ capacity for delivering timely and effective services.2,3 Individuals with chronic conditions require special attention in planning, response, and recovery phases after natural disasters, given their unique needs for medication, medical equipment and continuing healthcare, and potential exacerbation of their condition that require resource-intensive management.4 Natural disasters can impact the public health infrastructure and the social protection systems essential for vulnerable populations. Patients with non-communicable diseases e.g. respiratory and cardiovascular diseases, cancer and diabetes, are among vulnerable groups in critical conditions, who are highly affected by natural disasters.5\n\nNon-Communicable diseases (NCDs) require an ongoing management for optimal outcomes, which is challenging in emergency settings, since natural disasters increase the risk of acute exacerbation in the health of people with NCDs and decrease the health systems responsiveness.6 NCD Management in emergencies requires the inclusion of non-communicable disease care into standard operating procedures, which would facilitate horizontal and vertical integration to other aspects of relief efforts.7,8\n\nPatients with chronic illnesses including those with cardiovascular diseases, diabetes, cancers, and respiratory conditions are of the most vulnerable populations in disaster settings, who face various problems after natural disasters.9,10 By the collapse in some medical care systems and overloaded operating hospitals and other medical centers, provision of services to chronic patients seems to be a critical concern.11\n\nInadequately managed chronic illnesses can present a threat to life and well-being of the community in the immediate wake of these disasters, but their treatment traditionally has not been recognized as a public health or medical priority.12 Many patients did not have their medications or medical supplies, and too many did not know the names of their illnesses or medications or how to access the information.13\n\nA critical problem in the resulting health crisis is the inability of the displaced population to manage their chronic diseases.14 The Center for Disease Control and Prevention (CDC) reported that NCDs accounted for five of the six most commonly reported conditions after Hurricane Katrina.15 This leads to indirect causes of mortality and more complications up to 70%-90%, primarily due to the deterioration of life-threatening conditions and exacerbation of chronic diseases.16 After natural disasters, inadequate care and resources, and lack of continuity of care for chronic diseases such as cardiovascular diseases, asthma, diabetes, renal diseases have led to exacerbation of symptoms associated with increased morbidity and mortality among this population.17 However, non-communicable diseases have received little attention from human-rights organizations during the acute phase of crises and emergencies, and there is a need to refocus on emergency disaster systems in the 21st Century.18 More than 45% of evacuees did not carry their daily medicines with them, meaning that over two third of total medicines provided during the disaster response were used to treat chronic diseases.19 Patients with chronic diseases face many challenges and have different needs during and after natural 
disasters and medical care must be continued during and after natural disasters. Statistics on different diseases reveal that at the time of natural disasters, there are an increased number of hospital admissions of patients with at least one chronic disease. As an example, in Sichuan earthquake, patients with hypertension and those with diabetes, constituted 47% and 24%, respectively, of city hospital admissions.20 Despite the significance and the critical impact of natural disasters on patients with non-communicable diseases and the exacerbation of their symptoms, there are not enough studies on the issue.21 Disaster and crisis manuals and guidelines mainly focus on communicable diseases like Aleppo boil, measles, cholera and diarrhea; and among available research literature, there is a limited number of studies on the management of non-communicable diseases in emergencies.22 Several studies have been conducted on the effects of natural disasters on non-communicable diseases reporting the exacerbation of clinical effects and insufficient medical facilities and equipment to care for patients. Therefore, this study aims to obtain a systematic review protocol for non-communicable diseases in the natural disasters.", "This systematic review protocol is submitted to the International Prospective Register of Systematic Reviews. (http://www.crd.york.ac.uk/PROSPERO) (The registration number was: CRD42020164032). Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMAP) will be applied to develop this review protocol.23\n\n\nEligibility Criteria\n\nApplying a systematic review method, authors will investigate studies focused on non-communicable diseases in disasters from different aspects including epidemiological factors, risks, effects, interventions, patient needs, preparedness, as well as academic literatures from around the world. This study and its findings are intended to serve as a roadmap for future research in this area, by giving information on intervention development and policy change.\nThe target population is the group of patients with chronic diseases. The top four leading causes of death in patients with NCDs, which constitute the tenets of the WHO 2013-2020 NCD action plan, are: cardiovascular diseases, cancer, chronic respiratory disease and diabetes. The formal search strategy will be applied for relevant controlled vocabularies and free text synonymous words and phrases in concept mapping for health conditions including heart attack, myocardia, ischemia, acute coronary syndrome, stroke, hypertension, diabetes, cancer, Chronic Obstructive Pulmonary Disease(COPD) and asthma through advanced search syntax. Authors will search for possible relevant titles in the reference list of eligible studies. There will be no natural disaster location or natural disaster type limitation in our search. Also there is no restriction on the research study design. Research papers in English will be qualified. PICO (Patient/Population/Participant, Intervention, Comparison, Outcome) framework will be applied in formulating questions and facilitate the search strategy articulation.\n\nParticipants\n\nThe subject patients in the study are those with chronic diseases including cardiovascular and chronic respiratory diseases, diabetes and cancer who are affected by natural disasters. 
Subject to our unlimited survey in terms of the natural disaster location, the population (participants) may belong to developing or developed countries.\n\nInterventions\n\nThe study will investigate the management of health service delivery to people with non-communicable diseases who are affected by natural disasters. There is no restriction on the type of natural disaster.\n\nComparison\n\nThe only comparison to be made in this study will be that of the impact of natural disasters on the provision of medical care services to patients with non-communicable diseases before, during and after natural disasters.\n\nOutcomes\n\nThe study will classify findings based on the type of non-communicable diseases, the type of natural disasters, and the natural disaster occurrence-NCD condition relationship (NCDs before, during and after natural disasters). Finally, results concerning the clinical impacts and symptom exacerbations in patients with non-communicable diseases and models of service delivery to these patients during natural disasters as well as challenges and deficiencies in the study will be discussed.\n\nInformation Sources and Search Strategy\n\nThe electronic database search strategy will be adopted to gather relevant information from 1997 thru 2019 (reviews published before this period are likely to be out of date) using the following databases: Medline, Scopus, Web of Science, Clinical Key, CINAHL, EBSCO, Ovid, EMBASE, ProQuest, Google Scholar, Cochrane Library (Cochrane database of systematic reviews; Cochrane central Register of controlled Trials. Three independent reviewers will evaluate titles, abstracts and full texts of eligible articles for inclusion, and the final vetting is to be by the fourth reviewer, in case of discrepancies. The search process may be re-run and more studies may be retrieved for inclusion before the final analysis. The search strategy will be developed based on the MeSH terms and Key words related to natural disasters and non-communicable disease. The focus will be on the top four leading causes of NCD-related death, constituting the tenets of the WHO 2013-2020 Global NCD Action Plan i.e. cardiovascular and chronic respiratory diseases, cancer, and diabetes. The formal search strategy will be applied for relevant controlled vocabularies and free text synonymous words and phrases in concept mapping for health conditions including heart attack, myocardia, ischemia, acute coronary syndrome, stroke, hypertension, diabetes, cancer, COPD, asthma. The search strategy applied in electronic databases like the PubMed database is provided in Appendix 1.\n\nData Collection and Extraction\n\nReference lists of searched out articles will be examined to identify further studies. Also, bibliographies in systematic and non-systematic review articles will be investigated to identify relevant studies.\nFor any query pertaining to methodology, study outcomes and data collection, authors will contact reviewers. Abstracts and full texts of searched out manuscripts will be reviewed. Reference lists of systematic reviews and included studies will be screened and citation tracking will be conducted, wherever feasible. Multiple publications and overlaps will be identified, grouped and represented as a single reference. Database search results will be imported into the citation management program to aggregate relevant review articles and exclude duplicates. 
\nTitles and abstracts of all reviews searched out from electronic databases will be imported into EndNote (EndNote X6) and duplicates will be excluded. Authors will search and review titles, abstracts and keywords. Duplicate references will be omitted, and afterwards two of the reviewers (EGh, FN) will screen the titles. Three independent researchers will evaluate titles, abstracts and full texts of eligible articles for inclusion (DKZ, EGH, FN), and there will be a final vetting by the fourth reviewer (IM), in case of discrepancies, who will check the results. The search process may be re-run and more studies may be retrieved for inclusion just before the final analysis. A predefined inclusion and exclusion criterion is to be used for assessment of the full texts of the remaining titles. The excluded studies will be listed in a table associated with the reason for exclusion. Data extraction will be processed electronically using a developed data abstraction form adapted from the Cochrane Public Health Group. By data extraction and assessment form, any information on aspects deemed necessary as per Methodological Expectations of Cochrane Intervention Reviews (MECIR) standards will be collected.24 The data abstraction form will be piloted on a random sample of five included articles, and modifications will be made as required, based on the team’s feedback. Full data abstraction will be started only when there is a sufficient agreement (i.e. The percentage of agreement >90 %). Data extraction for the literature review will be based on study goals, characteristics (e.g. the first author, the publication date), search strategy and terminology, to describe the review and its settings and timeframe policy. The process of selecting studies will be documented in a PRISMA flow chart. (Figure 1)\n\nQuality Assessment\n\nThe researchers will evaluate the quality of selected articles based on valid checklists for study types. The quality assessment of observational studies such as cohort and cross-sectional articles will be carried out by strengthening the reporting of observational studies in epidemiology check list (STROBE).25 Based on this checklist, ranking scores are from 0 to 34 and studies will be classified into 3 groups based on their ranking score as follows: weak quality ranking score from 0 to 11; moderate quality ranking score from 12 to 22; and high quality ranking score from 23 to 34. The quality assessment of experimental studies will be carried out by transparent reporting of evaluations with nonrandomized designs (TREND).26 Based on this checklist, ranking scores are from 0 to 59 and studies will be classified into 3 groups based on their ranking score as follows: weak quality ranking score from 0 to 21; moderate quality ranking score from 22 to 41; and high quality ranking score from 42 to 59. The quality assessment of qualitative studies will be carried out using the Critical Appraisal Skills Programme (CASP). 27 Based on this checklist, ranking scores are from 0 to 10 and studies will be classified into 2 groups based on their ranking score as follows: weak quality ranking score from 0 to 5; and high quality ranking score from 6 to 10. The quality assessment of the systematic reviews and meta-analyses will be carried out by the checklist for preferred reporting items for systematic reviews and meta-analyses (PRISMA).23 The checklist consists of 27 items and papers are reviewed for each item and marked either as implemented or not-implemented. 
In case of the absence of an item in a paper, it will be rated ZERO, and if the subject item exists in the paper, it will be rated ONE. When items are not as distinct, the unclear sections will be assessed several times until a precise interpretation is reached and a valid evaluation of the study is made. The risk of bias will be assessed using ROBIS Risk of Bias assessment tool.23,28\n", "The primary outcome of this study will be the identified needs of patients with chronic diseases such as diabetes, cardiovascular, hypertension, respiratory disease, and cancers that are critical before, during, and after natural disasters. Secondary outcome of the study will be the reported challenges in the process of health service provision to these patients. The following data will be extracted from the selected articles: general information (title, name of authors, year, research type, subject group (by disease type), classification (before, during and after natural disaster), and factors essential to follow a process of health service provision to patients with chronic illnesses in natural disasters.\n\nAcknowledgments\n\nThis article is extracted from a PhD thesis on Health in Emergency and Disaster, with COI: IR.UMSU.REC.1398.228, the Research Center for Social Factors Effective on Health, Urmia University of Medical Sciences. We extend our special thanks to supervisors and advisors, who collaborated in this research.\n\nAbbreviations \n\nNCDs: Non-Communicable diseases\nCDC: Center for Disease Control \nPICO: Patient/ Population/ Participants/ Problem, Intervention, Comparison, Outcome\nCOPD: Chronic Obstructive Pulmonary Disease\nPROSPERO: Prospective Register of Systematic Reviews" ]
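The quality-assessment banding described in the methods above (STROBE scored 0-34, TREND 0-59, CASP 0-10, each mapped to weak/moderate/high categories) is simple arithmetic and can be expressed as a small helper. The cut-offs below are taken directly from the protocol text; the function itself is an illustrative assumption, not part of the protocol.

# Illustrative helper: map a checklist score to the quality band defined in the text.
BANDS = {
    "STROBE": [(0, 11, "weak"), (12, 22, "moderate"), (23, 34, "high")],
    "TREND":  [(0, 21, "weak"), (22, 41, "moderate"), (42, 59, "high")],
    "CASP":   [(0, 5, "weak"), (6, 10, "high")],
}

def quality_band(checklist: str, score: int) -> str:
    """Return the quality band for a given checklist score."""
    for low, high, label in BANDS[checklist]:
        if low <= score <= high:
            return label
    raise ValueError(f"score {score} is outside the {checklist} range")

# Example: quality_band("STROBE", 25) returns "high"; quality_band("CASP", 4) returns "weak".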
[ "introduction", "methods and methods", "results" ]
[ "Crisis", "Natural Disaster", "Management", "Non Communicable Diseases", "Chronic illness" ]
Introduction: Disasters ar e serious events disturbing communities.1 In terms of medical aspects, these events cause numerous casualties and the high demand of medical care may require the enhancement of responders’ capacity for delivering timely and effective services.2,3 Individuals with chronic conditions require special attention in planning, response, and recovery phases after natural disasters, given their unique needs for medication, medical equipment and continuing healthcare, and potential exacerbation of their condition that require resource-intensive management.4 Natural disasters can impact the public health infrastructure and the social protection systems essential for vulnerable populations. Patients with non-communicable diseases e.g. respiratory and cardiovascular diseases, cancer and diabetes, are among vulnerable groups in critical conditions, who are highly affected by natural disasters.5 Non-Communicable diseases (NCDs) require an ongoing management for optimal outcomes, which is challenging in emergency settings, since natural disasters increase the risk of acute exacerbation in the health of people with NCDs and decrease the health systems responsiveness.6 NCD Management in emergencies requires the inclusion of non-communicable disease care into standard operating procedures, which would facilitate horizontal and vertical integration to other aspects of relief efforts.7,8 Patients with chronic illnesses including those with cardiovascular diseases, diabetes, cancers, and respiratory conditions are of the most vulnerable populations in disaster settings, who face various problems after natural disasters.9,10 By the collapse in some medical care systems and overloaded operating hospitals and other medical centers, provision of services to chronic patients seems to be a critical concern.11 Inadequately managed chronic illnesses can present a threat to life and well-being of the community in the immediate wake of these disasters, but their treatment traditionally has not been recognized as a public health or medical priority.12 Many patients did not have their medications or medical supplies, and too many did not know the names of their illnesses or medications or how to access the information.13 A critical problem in the resulting health crisis is the inability of the displaced population to manage their chronic diseases.14 The Center for Disease Control and Prevention (CDC) reported that NCDs accounted for five of the six most commonly reported conditions after Hurricane Katrina.15 This leads to indirect causes of mortality and more complications up to 70%-90%, primarily due to the deterioration of life-threatening conditions and exacerbation of chronic diseases.16 After natural disasters, inadequate care and resources, and lack of continuity of care for chronic diseases such as cardiovascular diseases, asthma, diabetes, renal diseases have led to exacerbation of symptoms associated with increased morbidity and mortality among this population.17 However, non-communicable diseases have received little attention from human-rights organizations during the acute phase of crises and emergencies, and there is a need to refocus on emergency disaster systems in the 21st Century.18 More than 45% of evacuees did not carry their daily medicines with them, meaning that over two third of total medicines provided during the disaster response were used to treat chronic diseases.19 Patients with chronic diseases face many challenges and have different needs during and after natural 
disasters, and medical care must be continued during and after natural disasters. Statistics on different diseases reveal that at the time of natural disasters, there is an increased number of hospital admissions of patients with at least one chronic disease. For example, in the Sichuan earthquake, patients with hypertension and those with diabetes constituted 47% and 24%, respectively, of city hospital admissions.20 Despite the significance and the critical impact of natural disasters on patients with non-communicable diseases and the exacerbation of their symptoms, there are not enough studies on the issue.21 Disaster and crisis manuals and guidelines mainly focus on communicable diseases like Aleppo boil, measles, cholera and diarrhea, and among the available research literature, there is a limited number of studies on the management of non-communicable diseases in emergencies.22 Several studies have been conducted on the effects of natural disasters on non-communicable diseases, reporting the exacerbation of clinical effects and insufficient medical facilities and equipment to care for patients. Therefore, this study aims to develop a systematic review protocol for non-communicable diseases in natural disasters. Materials and Methods: This systematic review protocol has been submitted to the International Prospective Register of Systematic Reviews (http://www.crd.york.ac.uk/PROSPERO; registration number: CRD42020164032). The Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) will be applied to develop this review protocol.23 Eligibility Criteria Applying a systematic review method, the authors will investigate studies focused on non-communicable diseases in disasters from different aspects, including epidemiological factors, risks, effects, interventions, patient needs and preparedness, as well as academic literature from around the world. This study and its findings are intended to serve as a roadmap for future research in this area by giving information on intervention development and policy change. The target population is the group of patients with chronic diseases. The top four leading causes of death in patients with NCDs, which constitute the tenets of the WHO 2013-2020 NCD action plan, are: cardiovascular diseases, cancer, chronic respiratory disease and diabetes. The formal search strategy will be applied for relevant controlled vocabularies and free-text synonymous words and phrases in concept mapping for health conditions including heart attack, myocardial ischemia, acute coronary syndrome, stroke, hypertension, diabetes, cancer, Chronic Obstructive Pulmonary Disease (COPD) and asthma, through advanced search syntax. The authors will search for possible relevant titles in the reference lists of eligible studies. There will be no natural disaster location or natural disaster type limitation in our search. Also, there is no restriction on the research study design. Research papers in English will qualify. The PICO (Patient/Population/Participant, Intervention, Comparison, Outcome) framework will be applied to formulate questions and facilitate the articulation of the search strategy. Participants The subject patients in the study are those with chronic diseases, including cardiovascular and chronic respiratory diseases, diabetes and cancer, who are affected by natural disasters. Since our survey is unlimited in terms of natural disaster location, the population (participants) may belong to developing or developed countries. 
Interventions The study will investigate the management of health service delivery to people with non-communicable diseases who are affected by natural disasters. There is no restriction on the type of natural disaster. Comparison The only comparison to be made in this study will be that of the impact of natural disasters on the provision of medical care services to patients with non-communicable diseases before, during and after natural disasters. Outcomes The study will classify findings based on the type of non-communicable disease, the type of natural disaster, and the relationship between natural disaster occurrence and NCD condition (NCDs before, during and after natural disasters). Finally, results concerning the clinical impacts and symptom exacerbations in patients with non-communicable diseases and models of service delivery to these patients during natural disasters, as well as challenges and deficiencies in the study, will be discussed. Information Sources and Search Strategy An electronic database search strategy will be adopted to gather relevant information from 1997 through 2019 (reviews published before this period are likely to be out of date) using the following databases: Medline, Scopus, Web of Science, Clinical Key, CINAHL, EBSCO, Ovid, EMBASE, ProQuest, Google Scholar and the Cochrane Library (Cochrane Database of Systematic Reviews; Cochrane Central Register of Controlled Trials). Three independent reviewers will evaluate the titles, abstracts and full texts of eligible articles for inclusion, and, in case of discrepancies, the final vetting will be performed by a fourth reviewer. The search process may be re-run and more studies may be retrieved for inclusion before the final analysis. The search strategy will be developed based on MeSH terms and keywords related to natural disasters and non-communicable diseases. The focus will be on the top four leading causes of NCD-related death, constituting the tenets of the WHO 2013-2020 Global NCD Action Plan, i.e. cardiovascular and chronic respiratory diseases, cancer, and diabetes. The formal search strategy will be applied for relevant controlled vocabularies and free-text synonymous words and phrases in concept mapping for health conditions including heart attack, myocardial ischemia, acute coronary syndrome, stroke, hypertension, diabetes, cancer, COPD and asthma. The search strategy applied in electronic databases such as PubMed is provided in Appendix 1. Data Collection and Extraction The reference lists of retrieved articles will be examined to identify further studies. Also, bibliographies in systematic and non-systematic review articles will be investigated to identify relevant studies. For any query pertaining to methodology, study outcomes and data collection, the authors will contact reviewers. Abstracts and full texts of retrieved manuscripts will be reviewed. Reference lists of systematic reviews and included studies will be screened and citation tracking will be conducted wherever feasible. Multiple publications and overlaps will be identified, grouped and represented as a single reference. Database search results will be imported into a citation management program to aggregate relevant review articles and exclude duplicates. Titles and abstracts of all reviews retrieved from electronic databases will be imported into EndNote (EndNote X6) and duplicates will be excluded. The authors will search and review titles, abstracts and keywords. 
Duplicate references will be omitted, and afterwards two of the reviewers (EGh, FN) will screen the titles. Three independent researchers (DKZ, EGH, FN) will evaluate the titles, abstracts and full texts of eligible articles for inclusion, and, in case of discrepancies, a fourth reviewer (IM) will perform the final vetting and check the results. The search process may be re-run and more studies may be retrieved for inclusion just before the final analysis. Predefined inclusion and exclusion criteria will be used for assessment of the full texts of the remaining titles. The excluded studies will be listed in a table together with the reason for exclusion. Data extraction will be processed electronically using a data abstraction form adapted from the Cochrane Public Health Group. Using the data extraction and assessment form, any information on aspects deemed necessary as per the Methodological Expectations of Cochrane Intervention Reviews (MECIR) standards will be collected.24 The data abstraction form will be piloted on a random sample of five included articles, and modifications will be made as required, based on the team’s feedback. Full data abstraction will be started only when there is sufficient agreement (i.e. a percentage of agreement >90%). Data extraction for the literature review will be based on study goals, characteristics (e.g. the first author, the publication date), search strategy and terminology, to describe the review and its settings and timeframe policy. The process of selecting studies will be documented in a PRISMA flow chart (Figure 1). Quality Assessment The researchers will evaluate the quality of the selected articles using valid checklists for each study type. The quality assessment of observational studies, such as cohort and cross-sectional articles, will be carried out using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist.25 Ranking scores on this checklist range from 0 to 34, and studies will be classified into three groups by score as follows: weak quality, 0 to 11; moderate quality, 12 to 22; and high quality, 23 to 34. The quality assessment of experimental studies will be carried out using the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) checklist.26 Ranking scores on this checklist range from 0 to 59, and studies will be classified into three groups by score as follows: weak quality, 0 to 21; moderate quality, 22 to 41; and high quality, 42 to 59. The quality assessment of qualitative studies will be carried out using the Critical Appraisal Skills Programme (CASP) checklist.27 Ranking scores on this checklist range from 0 to 10, and studies will be classified into two groups by score as follows: weak quality, 0 to 5; and high quality, 6 to 10. The quality assessment of systematic reviews and meta-analyses will be carried out using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist.23 The checklist consists of 27 items, and papers are reviewed for each item and marked as either implemented or not implemented. If an item is absent from a paper, it is rated zero (0); if the item is present, it is rated one (1). 
When items are not as distinct, the unclear sections will be assessed several times until a precise interpretation is reached and a valid evaluation of the study is made. The risk of bias will be assessed using ROBIS Risk of Bias assessment tool.23,28 Results: The primary outcome of this study will be the identified needs of patients with chronic diseases such as diabetes, cardiovascular, hypertension, respiratory disease, and cancers that are critical before, during, and after natural disasters. Secondary outcome of the study will be the reported challenges in the process of health service provision to these patients. The following data will be extracted from the selected articles: general information (title, name of authors, year, research type, subject group (by disease type), classification (before, during and after natural disaster), and factors essential to follow a process of health service provision to patients with chronic illnesses in natural disasters. Acknowledgments This article is extracted from a PhD thesis on Health in Emergency and Disaster, with COI: IR.UMSU.REC.1398.228, the Research Center for Social Factors Effective on Health, Urmia University of Medical Sciences. We extend our special thanks to supervisors and advisors, who collaborated in this research. Abbreviations NCDs: Non-Communicable diseases CDC: Center for Disease Control PICO: Patient/ Population/ Participants/ Problem, Intervention, Comparison, Outcome COPD: Chronic Obstructive Pulmonary Disease PROSPERO: Prospective Register of Systematic Reviews
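Note on the quality-assessment cut-offs described above: the STROBE (0 to 34), TREND (0 to 59) and CASP (0 to 10) ranking scores each map onto the quality categories listed in the protocol. Purely as an illustration of that mapping, and not as part of the published protocol, a minimal Python sketch follows; the function and variable names are hypothetical.

```python
# Illustrative sketch only: maps a checklist ranking score to the quality
# category described in the protocol (STROBE 0-34, TREND 0-59, CASP 0-10).
# Names are hypothetical and do not come from the published protocol.

THRESHOLDS = {
    # checklist: list of (upper_bound_inclusive, category)
    "STROBE": [(11, "weak"), (22, "moderate"), (34, "high")],
    "TREND":  [(21, "weak"), (41, "moderate"), (59, "high")],
    "CASP":   [(5,  "weak"), (10, "high")],
}

def quality_category(checklist: str, score: int) -> str:
    """Return the quality group for a given checklist score."""
    bounds = THRESHOLDS[checklist]
    max_score = bounds[-1][0]
    if score < 0 or score > max_score:
        raise ValueError(f"{checklist} scores must be between 0 and {max_score}")
    for upper, category in bounds:
        if score <= upper:
            return category
    return bounds[-1][1]

print(quality_category("STROBE", 25))  # -> "high"
print(quality_category("TREND", 30))   # -> "moderate"
print(quality_category("CASP", 4))     # -> "weak"
```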
Background: NCDs require an ongoing management for optimal outcomes, which is challenging in emergency settings, because natural disasters increase the risk of acute NCD exacerbations and lead to health systems' inability to respond. This study aims to develop a protocol for a systematic review on non-communicable diseases in natural disaster settings. Methods: This systematic review protocol is submitted to the International Prospective Register of Systematic Reviews (Registration No. CRD42020164032). The electronic databases to be used in this study include: Medline, Scopus, Web of Science, Clinical Key, CINAHL, EBSCO, Ovid, EMBASE, ProQuest, Google Scholar, Cochrane Library (Cochrane database of systematic reviews; Cochrane central Register of controlled Trials). Records from 1997 to 2019 are subject to this investigation. Three independent researchers will review the titles, abstracts, and full texts of articles eligible for inclusion, and if not matched, they will be reviewed by a final fourth reviewer. The proposed systematic review will be reported in accordance with the reporting guideline provided in the Preferred Reporting Items for Systematic review and Meta-Analysis (PRISMA) statement. We select studies based on: PICOs (Participants, Interventions, Comparators, and Outcomes). Results: This systematic review identifies any impacts of natural disasters on patients with NCDs in three stages i.e. before, during and in the aftermath of natural disasters. Conclusions: A comprehensive response to NCD management in natural disasters is an important but neglected aspect of non-communicable disease control and humanitarian response, which can significantly reduce the potential risk of morbidity and mortality associated with natural disasters.
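The search strategy summarized above combines controlled vocabulary and free-text synonyms for the four NCD groups with natural-disaster terms, restricted to 1997 through 2019. The registered strategy itself sits in the protocol's Appendix 1 and is not reproduced here; the following Python sketch only illustrates how such a block-built query string could be assembled, and every term list and field tag in it is an assumed example rather than the authors' actual search string.

```python
# Hypothetical illustration of assembling a block-built bibliographic query;
# the protocol's real strategy is in its Appendix 1 and is not shown here.
# Term lists and field tags below are assumed examples, not the registered string.

ncd_terms = [
    '"noncommunicable diseases"[MeSH Terms]',
    '"cardiovascular diseases"[MeSH Terms]',
    '"diabetes mellitus"[MeSH Terms]',
    '"neoplasms"[MeSH Terms]',
    '"pulmonary disease, chronic obstructive"[MeSH Terms]',
    "myocardial ischemia", "acute coronary syndrome", "stroke",
    "hypertension", "asthma",
]
disaster_terms = [
    '"disasters"[MeSH Terms]', '"earthquakes"[MeSH Terms]',
    '"floods"[MeSH Terms]', "natural disaster*", "hurricane*", "tsunami*",
]

def or_block(terms):
    """Join the synonyms of one concept with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Combine the two concept blocks and the 1997-2019 date window from the protocol.
query = " AND ".join([or_block(ncd_terms), or_block(disaster_terms)])
query += ' AND ("1997"[dp] : "2019"[dp])'
print(query)
```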
null
null
2,640
308
[]
3
[ "diseases", "natural", "disasters", "natural disasters", "chronic", "studies", "patients", "non", "communicable", "search" ]
[ "illnesses natural disasters", "disasters inadequate care", "disasters patients non", "communicable diseases emergencies", "health emergency disaster" ]
null
null
null
[CONTENT] Crisis | Natural Disaster | Management | Non Communicable Diseases | Chronic illness [SUMMARY]
null
[CONTENT] Crisis | Natural Disaster | Management | Non Communicable Diseases | Chronic illness [SUMMARY]
null
[CONTENT] Crisis | Natural Disaster | Management | Non Communicable Diseases | Chronic illness [SUMMARY]
null
[CONTENT] Disasters | Humans | Noncommunicable Diseases | Systematic Reviews as Topic [SUMMARY]
null
[CONTENT] Disasters | Humans | Noncommunicable Diseases | Systematic Reviews as Topic [SUMMARY]
null
[CONTENT] Disasters | Humans | Noncommunicable Diseases | Systematic Reviews as Topic [SUMMARY]
null
[CONTENT] illnesses natural disasters | disasters inadequate care | disasters patients non | communicable diseases emergencies | health emergency disaster [SUMMARY]
null
[CONTENT] illnesses natural disasters | disasters inadequate care | disasters patients non | communicable diseases emergencies | health emergency disaster [SUMMARY]
null
[CONTENT] illnesses natural disasters | disasters inadequate care | disasters patients non | communicable diseases emergencies | health emergency disaster [SUMMARY]
null
[CONTENT] diseases | natural | disasters | natural disasters | chronic | studies | patients | non | communicable | search [SUMMARY]
null
[CONTENT] diseases | natural | disasters | natural disasters | chronic | studies | patients | non | communicable | search [SUMMARY]
null
[CONTENT] diseases | natural | disasters | natural disasters | chronic | studies | patients | non | communicable | search [SUMMARY]
null
[CONTENT] diseases | disasters | natural disasters | natural | exacerbation | chronic | care | patients | communicable | medical [SUMMARY]
null
[CONTENT] health | disease | outcome | service provision | service provision patients | extracted | health service provision | provision patients | health service provision patients | outcome study [SUMMARY]
null
[CONTENT] diseases | natural | disasters | natural disasters | chronic | patients | health | studies | search | communicable [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
null
[CONTENT] three [SUMMARY]
null
[CONTENT] ||| ||| the International Prospective Register of Systematic Reviews ||| Medline, Scopus, Web of Science, Clinical Key | EBSCO | Ovid | EMBASE | ProQuest | Google Scholar | Cochrane Library | Cochrane | Cochrane | Trials ||| 1997 | 2019 ||| Three | fourth ||| the Preferred Reporting Items for Systematic | Meta-Analysis ||| Comparators | Outcomes ||| ||| three ||| NCD [SUMMARY]
null
The expression of interleukin-32 is activated by human cytomegalovirus infection and down regulated by hcmv-miR-UL112-1.
23402302
Interleukin-32 (IL-32) is an important factor in innate and adaptive immune responses, which activates the p38MAPK, NF-kappa B and AP-1 signaling pathways. Recent reports have highlighted that IL-32 is regulated during viral infection in humans.
BACKGROUND
Enzyme-linked immunosorbent assays (ELISA) were carried out to detect IL-32 levels in serum samples. Detailed kinetics of the transcription of IL-32 mRNA and expression of IL-32 protein during human cytomegalovirus (HCMV) infection were determined by semi-quantitative RT-PCR and western blot, respectively. The expression levels of hcmv-miR-UL112-1 were detected using TaqMan® miRNA assays during a time course of 96 hours. The effects of hcmv-miR-UL112-1 on IL-32 expression were demonstrated by luciferase assay and western blot, respectively.
METHODS
Serum levels of IL-32 in HCMV-IgM positive patients (indicating an active HCMV infection) were significantly higher than those in HCMV-IgM negative controls. HCMV infection activated cellular IL-32 transcription mainly in the immediately early (IE) phase and elevated IL-32 protein levels between 6 and 72 hours post infection (hpi) in the human embryonic lung fibroblast cell line, MRC-5. The expression of hcmv-miR-UL112-1 was detected at 24 hpi and increased gradually as the HCMV-infection process was prolonged. In addition, it was demonstrated that hcmv-miR-UL112-1 targets a sequence in the IL-32 3'-UTR. The protein level of IL-32 in HEK293 cells could be functionally down-regulated by transfected hcmv-miR-UL112-1.
RESULTS
IL-32 expression was induced by active HCMV infection and could be functionally down-regulated by ectopically expressed hcmv-miR-UL112-1. Our data may indicate a new strategy of immune evasion by HCMV through post-transcriptional regulation.
CONCLUSIONS
[ "Cell Line", "Child, Preschool", "Cytomegalovirus", "Cytomegalovirus Infections", "Female", "Gene Expression Profiling", "Gene Expression Regulation", "Humans", "Infant", "Interleukins", "Male", "MicroRNAs", "RNA, Viral", "Serum" ]
3598236
Background
Interleukin-32 (IL-32) is a newly-discovered pro-inflammatory cytokine, which plays a role in innate and adaptive immune responses [1,2]. It lacks sequence homology to any presently known cytokine families. IL-32 is associated with the induction of inflammatory responses by activating the p38MAPK, NF-kappa B and AP-1 signaling pathways. It has been implicated in inflammatory disorders, mycobacterium tuberculosis infections and inflammatory bowel disease, as well as in some autoimmune diseases, such as rheumatoid arthritis, ulcerative colitis and Crohn’s disease [3-10]. Moreover, it has been reported that IL-32 has pro-inflammatory effects on myeloid cells and promotes the differentiation of osteoclast precursors into multinucleated cells expressing specific osteoclast markers [11,12]. In recent studies, IL-32 has also been found to be regulated during viral infections. Elevated levels of IL-32 were found in sera from patients infected with influenza A virus [13-15], hepatitis B virus (HBV) [16], hepatitis C virus (HCV) [17], human papillomavirus (HPV) [18] and human immunodeficiency virus (HIV) [19-21], suggesting that IL-32 might play an important role in host defense against viral infections Human cytomegalovirus (HCMV) is an ubiquitous β-herpesvirus that infects a broad range of cell types in human hosts, contributing to its complex and varied pathogenesis. HCMV Infection leads to life-long persistence in 50%–90% of the population, which is generally subclinical in healthy individuals [22]. However, it can lead to serious complications in immunocompromised patients, such as transplant recipients or AIDS patients [23,24]. MicroRNAs (miRNAs) are an abundant class of small non-coding RNA molecules that target mRNAs generally within their 3′ untranslated regions (3′UTRs). MiRNAs suppress gene expression mainly through inhibition of translation or, rarely, through degradation of mRNA [25,26]. Clinical isolates of HCMV encode at least 17 miRNAs [27,28]. However, only a few functional targets have been validated experimentally for some HCMV-encoded miRNAs [29-34]. It has been demonstrated that hcmv-miR-UL112-1 targets and reduces the expression of HCMV UL123 (IE1 or IE72), UL114 and the major histocompatibility complex class 1-related chain B (MICB). Moreover, BclAF1 protein, a human cytomegalovirus restriction factor, was reported to be a new target of hcmv-miR-UL112-1 [35]. In addition, multiple cellular targets of hcmv-miR-US25-1, which are associated with cell cycle control, were identified by RNA induced silencing complex immunoprecipitation (RISC-IP), including cyclin E2, BRCC3, EID1, MAPRE2 and CD147. IL-32, which is not present in the confirmed targets, was screened out as a candidate target of hcmv-miR-UL112-1 in our previous study [36]. However, no further experiments to validate these findings have been performed. In the present study, the expression levels of IL-32 were compared among serum samples from patients with active HCMV infection and samples from HCMV-IgM negative individuals. The expression levels of IL-32 and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells were detected at different stages of infection and time points. In addition, functional down-regulation of IL-32 by hcmv-miR-UL112-1 was detected in transfected human embryonic kidney (HEK293) cells, and the effect of hcmv-miR-UL112-1 on IL-32 during HCMV infection was primarily discussed.
null
null
Results
Relatively high IL-32 levels among individuals with active HCMV infection IL-32 protein levels were measured in the sera of 40 patients with active HCMV infections (HCMV IgM positive) and 32 HCMV IgM negative control individuals by enzyme-linked immunosorbent assays (ELISA). According to the results of HCMV IgG detection, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). Mean IL-32 levels in the sera of patients with active HCMV infection were 4.1778±1.1663 ng/ml (shown in Figure 1). Mean IL-32 levels in sera of the previously HCMV infected group and healthy group were 2.4653±0.8287 ng/ml and 2.4480±0.9162 ng/ml, respectively. The data were analyzed by F test. IL-32 levels in the sera of patients with active HCMV infection were significantly higher than those in the HCMV IgM negative control group (F=23.957, P<0.001). No significant difference was observed between the previously HCMV infected group and the healthy group (P=0.963). An illustration showing IL-32 levels in serum samples detected by ELISA. Serum samples were from actively HCMV-infected patients (n=40), HCMV previously infected individuals (n=17) and healthy individuals (n=15), respectively. IL-32 protein levels were measured in the sera of 40 patients with active HCMV infections (HCMV IgM positive) and 32 HCMV IgM negative control individuals by enzyme-linked immunosorbent assays (ELISA). According to the results of HCMV IgG detection, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). Mean IL-32 levels in the sera of patients with active HCMV infection were 4.1778±1.1663 ng/ml (shown in Figure 1). Mean IL-32 levels in sera of the previously HCMV infected group and healthy group were 2.4653±0.8287 ng/ml and 2.4480±0.9162 ng/ml, respectively. The data were analyzed by F test. IL-32 levels in the sera of patients with active HCMV infection were significantly higher than those in the HCMV IgM negative control group (F=23.957, P<0.001). No significant difference was observed between the previously HCMV infected group and the healthy group (P=0.963). An illustration showing IL-32 levels in serum samples detected by ELISA. Serum samples were from actively HCMV-infected patients (n=40), HCMV previously infected individuals (n=17) and healthy individuals (n=15), respectively. Kinetics of the expression of IL-32 mRNA, IL-32 protein and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells Cells were harvested at different infection stages and time points after HCMV infection as described in the Materials and Methods section. We used specific drugs to determine the expression profile of IL-32 at different stages of viral replication. CHX was used to determine the immediate early (IE) stage and PAA was used to determine early (E) stage. As shown in Figure 2A and B, semi-quantitative RT-PCR analysis revealed that IL-32 mRNA expression was significantly activated at IE stage but not at E or late (L) stages post-infection. IL-32 protein expression levels at different time points following HCMV infection were measured by western blot analysis. As shown in Figure 2C and D, IL-32 protein expression reached a peak value at 6 hours post infection (hpi). The relative IL-32 concentration in HCMV infected cells collected at 6 hpi was 10 fold higher than in uninfected cells. All measured values were balanced by using β-actin as an internal control. 
No accumulation of either IL-32 mRNA or protein was found following the prolongation of HCMV-infection process. IL-32 mRNA levels in HCMV infected cells decreased in E and L stages to a similar level as in uninfected cells. IL-32 protein levels decreased with subsequent hours post infection. Expression of IL-32 induced by HCMV infection in MRC-5 cells. (A) IL-32 mRNA levels in HCMV infected cells were compared to that in uninfected MRC-5 cells. β-actin was used as an internal control. The universal primers for all IL-32 transcripts were used. U is for uninfected MRC-5; IE is for immediate early stage; E is for early stage; L is for late stage. (B) IL-32 mRNA levels were represented relative to that in IE stage. (C) IL-32 protein levels in HCMV infected MRC-5 cells were detected at different time points. IL-32 densitometer values were normalized by β-actin values. (D) The IL-32 protein levels were represented relative to that in infected cells collected at 6 hpi. (E) The kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi and increased gradually as the HCMV-infection process was prolonged. In addition, the kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. Measured values were normalized by using U6 as an internal control. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi. As shown in Figure 2E, a steep increase was observed at 48 hpi. The relative quantity of hcmv-miR-UL112-1 at 48 hpi was 6.5 fold higher than at 24 hpi. The expression of hcmv-miR-UL112-1 increased gradually as the HCMV-infection process was prolonged. The relative quantity of hcmv-miR-UL112-1 in HCMV infected cells at 96 hpi was 10 fold higher than at 24 hpi. Cells were harvested at different infection stages and time points after HCMV infection as described in the Materials and Methods section. We used specific drugs to determine the expression profile of IL-32 at different stages of viral replication. CHX was used to determine the immediate early (IE) stage and PAA was used to determine early (E) stage. As shown in Figure 2A and B, semi-quantitative RT-PCR analysis revealed that IL-32 mRNA expression was significantly activated at IE stage but not at E or late (L) stages post-infection. IL-32 protein expression levels at different time points following HCMV infection were measured by western blot analysis. As shown in Figure 2C and D, IL-32 protein expression reached a peak value at 6 hours post infection (hpi). The relative IL-32 concentration in HCMV infected cells collected at 6 hpi was 10 fold higher than in uninfected cells. All measured values were balanced by using β-actin as an internal control. No accumulation of either IL-32 mRNA or protein was found following the prolongation of HCMV-infection process. IL-32 mRNA levels in HCMV infected cells decreased in E and L stages to a similar level as in uninfected cells. IL-32 protein levels decreased with subsequent hours post infection. Expression of IL-32 induced by HCMV infection in MRC-5 cells. (A) IL-32 mRNA levels in HCMV infected cells were compared to that in uninfected MRC-5 cells. β-actin was used as an internal control. The universal primers for all IL-32 transcripts were used. U is for uninfected MRC-5; IE is for immediate early stage; E is for early stage; L is for late stage. (B) IL-32 mRNA levels were represented relative to that in IE stage. 
(C) IL-32 protein levels in HCMV infected MRC-5 cells were detected at different time points. IL-32 densitometer values were normalized by β-actin values. (D) The IL-32 protein levels were represented relative to that in infected cells collected at 6 hpi. (E) The kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi and increased gradually as the HCMV-infection process was prolonged. In addition, the kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. Measured values were normalized by using U6 as an internal control. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi. As shown in Figure 2E, a steep increase was observed at 48 hpi. The relative quantity of hcmv-miR-UL112-1 at 48 hpi was 6.5 fold higher than at 24 hpi. The expression of hcmv-miR-UL112-1 increased gradually as the HCMV-infection process was prolonged. The relative quantity of hcmv-miR-UL112-1 in HCMV infected cells at 96 hpi was 10 fold higher than at 24 hpi. Functional down-regulation of endogenous IL-32 expression by ectopically expressed hcmv-miR-UL112-1 In our previous study, a sequence in the 3′UTR of IL-32 mRNA was identified by hybrid-PCR to be a candidate target site of hcmv-miR-UL112-1. The predicted binding site for hcmv-miR-UL112-1 was 195 nt upstream of the polyA structure of IL-32 mRNA. A schematic representation of hcmv-miR-UL112-1 binding to the IL-32 3′UTR sequence is shown in Figure 3A. Down-regulation of IL-32 expression by hcmv-miR-UL112-1. (A) The diagram shows the predicted sequences of hcmv-miR-UL112-1 binding to IL-32 mRNA. (B) As a candidate target, IL-32 was validated for its ability to inhibit expression of a luciferase reporter construct in the presence of hcmv-miR-UL112-1 (pS-miR-UL112-1). Results are shown as percentage expression of negative control sample (pS-neg) following correction for transfection levels according to the control of renilla luciferase expression. Values are shown as means ± standard deviations for triplicate samples. (C) HEK 293 cells were transfected with pS-miR-UL112-1 or control vector respectively. Cells were collected 48 hpi and were subjected to western blot analysis using the indicated antibodies. IL-32 densitometer values were normalized to that of the β-actin values. (D) The amounts of proteins presented in panel C were quantified by densitometry. Results are shown as percentage expression of the negative control sample (Empty). To further determine whether the IL-32 3′UTR sequence represents a functional target site for hcmv-miR-UL112-1, the IL-32 3′UTR sequence was validated by luciferase reporter assays. As shown in Figure 3B, transfected pMIR-IL-32UTR led to a significant inhibition of luciferase activity by 39.27% in the presence of pS-miR-UL112-1. pS-miR-UL112-1 had no significant inhibitory effects on the luciferase activities of pMIR and pMIR-IL-32MUT. These results demonstrate that the IL-32 3′UTR sequence provided a specific and functional binding site for hcmv-miR-UL112-1. The regulatory effect of hcmv-miR-UL112-1 on IL-32 protein expression was then examined in HEK293 cells by western blot. IL-32 protein level was significantly reduced in cells transfected with hcmv-miR-UL112-1 in comparison to cells transfected with pSilencer negative control (Figure 3C and D). IL-32 densitometer values were normalized to that of β-actin values. 
The level of reduction of IL-32 protein was approximately 67% as determined by densitometry (Figure 3D). Our observations confirmed that the expression of endogenous IL-32 protein could be specifically inhibited by ectopically expressed hcmv-miR-UL112-1. In our previous study, a sequence in the 3′UTR of IL-32 mRNA was identified by hybrid-PCR to be a candidate target site of hcmv-miR-UL112-1. The predicted binding site for hcmv-miR-UL112-1 was 195 nt upstream of the polyA structure of IL-32 mRNA. A schematic representation of hcmv-miR-UL112-1 binding to the IL-32 3′UTR sequence is shown in Figure 3A. Down-regulation of IL-32 expression by hcmv-miR-UL112-1. (A) The diagram shows the predicted sequences of hcmv-miR-UL112-1 binding to IL-32 mRNA. (B) As a candidate target, IL-32 was validated for its ability to inhibit expression of a luciferase reporter construct in the presence of hcmv-miR-UL112-1 (pS-miR-UL112-1). Results are shown as percentage expression of negative control sample (pS-neg) following correction for transfection levels according to the control of renilla luciferase expression. Values are shown as means ± standard deviations for triplicate samples. (C) HEK 293 cells were transfected with pS-miR-UL112-1 or control vector respectively. Cells were collected 48 hpi and were subjected to western blot analysis using the indicated antibodies. IL-32 densitometer values were normalized to that of the β-actin values. (D) The amounts of proteins presented in panel C were quantified by densitometry. Results are shown as percentage expression of the negative control sample (Empty). To further determine whether the IL-32 3′UTR sequence represents a functional target site for hcmv-miR-UL112-1, the IL-32 3′UTR sequence was validated by luciferase reporter assays. As shown in Figure 3B, transfected pMIR-IL-32UTR led to a significant inhibition of luciferase activity by 39.27% in the presence of pS-miR-UL112-1. pS-miR-UL112-1 had no significant inhibitory effects on the luciferase activities of pMIR and pMIR-IL-32MUT. These results demonstrate that the IL-32 3′UTR sequence provided a specific and functional binding site for hcmv-miR-UL112-1. The regulatory effect of hcmv-miR-UL112-1 on IL-32 protein expression was then examined in HEK293 cells by western blot. IL-32 protein level was significantly reduced in cells transfected with hcmv-miR-UL112-1 in comparison to cells transfected with pSilencer negative control (Figure 3C and D). IL-32 densitometer values were normalized to that of β-actin values. The level of reduction of IL-32 protein was approximately 67% as determined by densitometry (Figure 3D). Our observations confirmed that the expression of endogenous IL-32 protein could be specifically inhibited by ectopically expressed hcmv-miR-UL112-1.
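The serum comparison at the start of these Results reports a one-way F test across three groups (active infection, n=40; previously infected, n=17; healthy, n=15) with F=23.957 and P<0.001, plus a non-significant contrast between the two control groups. As a hedged sketch of how such an analysis could be run, the Python snippet below simulates placeholder values from the reported group means and standard deviations; it is not the authors' analysis code and the simulated numbers are not study data.

```python
# Illustrative only: one-way ANOVA (F test) across the three serum groups
# described in the Results. The arrays below are simulated placeholders built
# from the reported group means/SDs and sizes, not the actual measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
active   = rng.normal(4.18, 1.17, size=40)   # active HCMV infection, n=40
previous = rng.normal(2.47, 0.83, size=17)   # previously infected, n=17
healthy  = rng.normal(2.45, 0.92, size=15)   # healthy controls, n=15

f_stat, p_value = stats.f_oneway(active, previous, healthy)
print(f"F = {f_stat:.3f}, P = {p_value:.4g}")

# Pairwise follow-up comparable to the reported previous-vs-healthy contrast
t_stat, p_pair = stats.ttest_ind(previous, healthy)
print(f"previously infected vs healthy: t = {t_stat:.3f}, P = {p_pair:.3f}")
```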
Conclusions
In summary, IL-32 expression in active HCMV infection was firstly investigated in our study. Detailed kinetics of the transcription of hcmv-miR-UL112-1, IL-32 mRNA and its protein expression were determined in HCMV infected MRC-5 cells. Furthermore, IL-32 expression was demonstrated to be functionally down-regulated by ectopically expressed hcmv-miR-UL112-1 in HEK293 cells. Follow up studies will investigate the mechanism of immune evasion by HCMV mediated by hcmv-miR-UL112-1, which has been confirmed to modulate NK cell activation during HCMV infection through post-transcriptional regulation.
[ "Background", "Relatively high IL-32 levels among individuals with active HCMV infection", "Kinetics of the expression of IL-32 mRNA, IL-32 protein and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells", "Functional down-regulation of endogenous IL-32 expression by ectopically expressed hcmv-miR-UL112-1", "Serum samples", "Cell lines", "Virus", "Plasmid constructs", "Enzyme-linked immunosorbent assay", "Semi-quantitative RT-PCR analysis", "miRNA extraction and TaqMan assays", "Luciferase assay", "Western blot analysis", "Statistical analysis", "Competing interests", "Authors’ contributions" ]
[ "Interleukin-32 (IL-32) is a newly-discovered pro-inflammatory cytokine, which plays a role in innate and adaptive immune responses [1,2]. It lacks sequence homology to any presently known cytokine families. IL-32 is associated with the induction of inflammatory responses by activating the p38MAPK, NF-kappa B and AP-1 signaling pathways. It has been implicated in inflammatory disorders, mycobacterium tuberculosis infections and inflammatory bowel disease, as well as in some autoimmune diseases, such as rheumatoid arthritis, ulcerative colitis and Crohn’s disease [3-10]. Moreover, it has been reported that IL-32 has pro-inflammatory effects on myeloid cells and promotes the differentiation of osteoclast precursors into multinucleated cells expressing specific osteoclast markers [11,12]. In recent studies, IL-32 has also been found to be regulated during viral infections. Elevated levels of IL-32 were found in sera from patients infected with influenza A virus [13-15], hepatitis B virus (HBV) [16], hepatitis C virus (HCV) [17], human papillomavirus (HPV) [18] and human immunodeficiency virus (HIV) [19-21], suggesting that IL-32 might play an important role in host defense against viral infections\nHuman cytomegalovirus (HCMV) is an ubiquitous β-herpesvirus that infects a broad range of cell types in human hosts, contributing to its complex and varied pathogenesis. HCMV Infection leads to life-long persistence in 50%–90% of the population, which is generally subclinical in healthy individuals [22]. However, it can lead to serious complications in immunocompromised patients, such as transplant recipients or AIDS patients [23,24].\nMicroRNAs (miRNAs) are an abundant class of small non-coding RNA molecules that target mRNAs generally within their 3′ untranslated regions (3′UTRs). MiRNAs suppress gene expression mainly through inhibition of translation or, rarely, through degradation of mRNA [25,26]. Clinical isolates of HCMV encode at least 17 miRNAs [27,28]. However, only a few functional targets have been validated experimentally for some HCMV-encoded miRNAs [29-34]. It has been demonstrated that hcmv-miR-UL112-1 targets and reduces the expression of HCMV UL123 (IE1 or IE72), UL114 and the major histocompatibility complex class 1-related chain B (MICB). Moreover, BclAF1 protein, a human cytomegalovirus restriction factor, was reported to be a new target of hcmv-miR-UL112-1 [35]. In addition, multiple cellular targets of hcmv-miR-US25-1, which are associated with cell cycle control, were identified by RNA induced silencing complex immunoprecipitation (RISC-IP), including cyclin E2, BRCC3, EID1, MAPRE2 and CD147. IL-32, which is not present in the confirmed targets, was screened out as a candidate target of hcmv-miR-UL112-1 in our previous study [36]. However, no further experiments to validate these findings have been performed.\nIn the present study, the expression levels of IL-32 were compared among serum samples from patients with active HCMV infection and samples from HCMV-IgM negative individuals. The expression levels of IL-32 and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells were detected at different stages of infection and time points. 
In addition, functional down-regulation of IL-32 by hcmv-miR-UL112-1 was detected in transfected human embryonic kidney (HEK293) cells, and the effect of hcmv-miR-UL112-1 on IL-32 during HCMV infection was primarily discussed.", "IL-32 protein levels were measured in the sera of 40 patients with active HCMV infections (HCMV IgM positive) and 32 HCMV IgM negative control individuals by enzyme-linked immunosorbent assays (ELISA). According to the results of HCMV IgG detection, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). Mean IL-32 levels in the sera of patients with active HCMV infection were 4.1778±1.1663 ng/ml (shown in Figure 1). Mean IL-32 levels in sera of the previously HCMV infected group and healthy group were 2.4653±0.8287 ng/ml and 2.4480±0.9162 ng/ml, respectively. The data were analyzed by F test. IL-32 levels in the sera of patients with active HCMV infection were significantly higher than those in the HCMV IgM negative control group (F=23.957, P<0.001). No significant difference was observed between the previously HCMV infected group and the healthy group (P=0.963).\n\nAn illustration showing IL-32 levels in serum samples detected by ELISA. Serum samples were from actively HCMV-infected patients (n=40), HCMV previously infected individuals (n=17) and healthy individuals (n=15), respectively.", "Cells were harvested at different infection stages and time points after HCMV infection as described in the Materials and Methods section. We used specific drugs to determine the expression profile of IL-32 at different stages of viral replication. CHX was used to determine the immediate early (IE) stage and PAA was used to determine early (E) stage. As shown in Figure 2A and B, semi-quantitative RT-PCR analysis revealed that IL-32 mRNA expression was significantly activated at IE stage but not at E or late (L) stages post-infection. IL-32 protein expression levels at different time points following HCMV infection were measured by western blot analysis. As shown in Figure 2C and D, IL-32 protein expression reached a peak value at 6 hours post infection (hpi). The relative IL-32 concentration in HCMV infected cells collected at 6 hpi was 10 fold higher than in uninfected cells. All measured values were balanced by using β-actin as an internal control. No accumulation of either IL-32 mRNA or protein was found following the prolongation of HCMV-infection process. IL-32 mRNA levels in HCMV infected cells decreased in E and L stages to a similar level as in uninfected cells. IL-32 protein levels decreased with subsequent hours post infection.\n\nExpression of IL-32 induced by HCMV infection in MRC-5 cells. (A) IL-32 mRNA levels in HCMV infected cells were compared to that in uninfected MRC-5 cells. β-actin was used as an internal control. The universal primers for all IL-32 transcripts were used. U is for uninfected MRC-5; IE is for immediate early stage; E is for early stage; L is for late stage. (B) IL-32 mRNA levels were represented relative to that in IE stage. (C) IL-32 protein levels in HCMV infected MRC-5 cells were detected at different time points. IL-32 densitometer values were normalized by β-actin values. (D) The IL-32 protein levels were represented relative to that in infected cells collected at 6 hpi. (E) The kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. 
The expression of hcmv-miR-UL112-1 could be detected at 24 hpi and increased gradually as the HCMV-infection process was prolonged.\nIn addition, the kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. Measured values were normalized by using U6 as an internal control. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi. As shown in Figure 2E, a steep increase was observed at 48 hpi. The relative quantity of hcmv-miR-UL112-1 at 48 hpi was 6.5 fold higher than at 24 hpi. The expression of hcmv-miR-UL112-1 increased gradually as the HCMV-infection process was prolonged. The relative quantity of hcmv-miR-UL112-1 in HCMV infected cells at 96 hpi was 10 fold higher than at 24 hpi.", "In our previous study, a sequence in the 3′UTR of IL-32 mRNA was identified by hybrid-PCR to be a candidate target site of hcmv-miR-UL112-1. The predicted binding site for hcmv-miR-UL112-1 was 195 nt upstream of the polyA structure of IL-32 mRNA. A schematic representation of hcmv-miR-UL112-1 binding to the IL-32 3′UTR sequence is shown in Figure 3A.\n\nDown-regulation of IL-32 expression by hcmv-miR-UL112-1. (A) The diagram shows the predicted sequences of hcmv-miR-UL112-1 binding to IL-32 mRNA. (B) As a candidate target, IL-32 was validated for its ability to inhibit expression of a luciferase reporter construct in the presence of hcmv-miR-UL112-1 (pS-miR-UL112-1). Results are shown as percentage expression of negative control sample (pS-neg) following correction for transfection levels according to the control of renilla luciferase expression. Values are shown as means ± standard deviations for triplicate samples. (C) HEK 293 cells were transfected with pS-miR-UL112-1 or control vector respectively. Cells were collected 48 hpi and were subjected to western blot analysis using the indicated antibodies. IL-32 densitometer values were normalized to that of the β-actin values. (D) The amounts of proteins presented in panel C were quantified by densitometry. Results are shown as percentage expression of the negative control sample (Empty).\nTo further determine whether the IL-32 3′UTR sequence represents a functional target site for hcmv-miR-UL112-1, the IL-32 3′UTR sequence was validated by luciferase reporter assays. As shown in Figure 3B, transfected pMIR-IL-32UTR led to a significant inhibition of luciferase activity by 39.27% in the presence of pS-miR-UL112-1. pS-miR-UL112-1 had no significant inhibitory effects on the luciferase activities of pMIR and pMIR-IL-32MUT. These results demonstrate that the IL-32 3′UTR sequence provided a specific and functional binding site for hcmv-miR-UL112-1.\nThe regulatory effect of hcmv-miR-UL112-1 on IL-32 protein expression was then examined in HEK293 cells by western blot. IL-32 protein level was significantly reduced in cells transfected with hcmv-miR-UL112-1 in comparison to cells transfected with pSilencer negative control (Figure 3C and D). IL-32 densitometer values were normalized to that of β-actin values. The level of reduction of IL-32 protein was approximately 67% as determined by densitometry (Figure 3D). Our observations confirmed that the expression of endogenous IL-32 protein could be specifically inhibited by ectopically expressed hcmv-miR-UL112-1.", "Serum samples were obtained from 40 patients with HCMV infection (22 male, 18 female, aged 2.2 ± 0.42 yr) from Shengjing Hospital of China Medical University. All patients were HCMV-IgM positive and HCMV-IgG negative. 
The viral loads in urine samples from these patients were between 1.793×10³ and 4.115×10⁷ HCMV DNA copies per milliliter, as detected by routine QF-PCR (DaAn Gene). Blood samples collected from 32 HCMV-IgM negative individuals (17 male, 15 female, aged 2.7±0.35 yr) at the Medical Examination Centre of Shengjing Hospital were used as controls. According to the HCMV IgG detection results, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). All individuals involved in our study were seronegative for influenza A virus, HBV, HCV and HIV. This study was specifically approved by the Ethical Committee of Shengjing Hospital.", "The HEK293 cell line was a gift of Dr. Fangjie Chen from the Department of Medical Genetics, China Medical University. HEK293 cells were cultured in Dulbecco’s Modified Eagle’s Medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 100 μg/ml penicillin, 100 μg/ml streptomycin sulfate and 2 mM L-glutamine. Human embryonic lung fibroblast cells, MRC-5, were acquired from the Shanghai Institute for Biological Sciences, Chinese Academy of Sciences (CAS). MRC-5 cells were maintained in Modified Eagle’s Medium (MEM) supplemented with 10% FBS, 100 μg/ml penicillin and 100 μg/ml streptomycin sulfate. All cell cultures were maintained at 37°C in a 5% CO2 incubator.", "HCMV clinical strain H was isolated from a urine sample of a 5-month-old infant hospitalized in Shengjing Hospital of China Medical University. Stock virus was propagated in MRC-5 cells maintained in MEM supplemented with 2% FBS, 100 μg/ml penicillin and 100 μg/ml streptomycin. The supernatant was then harvested, and the aliquots were stored at −80°C before use.", "The luciferase reporter construct pMIR was acquired from Ambion. The IL-32 3′UTR sequence was obtained by RT-PCR from RNA harvested from HCMV infected cells and inserted into the SpeI and HindIII sites of the multiple cloning region (pMIR-IL-32UTR). A point mutation at the corresponding seed region binding site was introduced by using a Site-directed Gene Mutagenesis Kit (Beyotime), changing GUC to GAA. The sequence predicted to express hcmv-miR-UL112-1 was amplified by PCR directly from the HCMV genome. The PCR product was inserted into the BamHI and HindIII sites of pSilencer4.1 (Ambion) to generate the miRNA expression vector pS-miR-UL112-1. All primer sequences used in plasmid construction are listed in Table 1. All constructs were confirmed by DNA sequencing.\n\nPrimers used in plasmid construction and semi-quantitative RT-PCR\nNote: sequences recognized by restriction endonucleases are underlined.", "The concentration of IL-32 in serum samples was detected using a human IL-32 ELISA Kit (Cusabio) according to the manufacturer’s protocols. The absorbance value at a wavelength of 450 nm was measured. The IL-32 concentrations were calculated from the standard curve.", "MRC-5 cells were inoculated with the H strain at a multiplicity of infection (m.o.i.) of 3–5. Infections were carried out under IE, E and L conditions, respectively. For preparation of IE RNA, CHX (Sigma) (100 μl/ml) was added to the culture medium 1 hour before infection and the cells were harvested at 24 hpi. For E RNA, the DNA synthesis inhibitor phosphonoacetic acid (PAA) (Sigma) (100 μl/ml) was added to the medium immediately after infection, and the cells were harvested at 48 hpi. L RNA and uninfected cellular RNA were derived from infected (at 96 hpi) and uninfected cells, respectively. 
Total RNA was isolated using Trizol reagent (TaKaRa), treated with DNase I and then reverse-transcribed with MLV reverse transcriptase and random primers (TaKaRa). PCR was performed for 24 cycles in 50 μl reactions with the IL-32 specific primer pairs listed in Table 1. β-actin mRNA was amplified for normalization in all reactions. The primers for IL-32 amplification were designed to detect all known isoforms of human IL-32 based on the sequences from the GenBank database (NM_001012631.1-001012636.1, NM_004221.4 and NM_001012718.1). The PCR products were analyzed by electrophoresis on a 2% agarose gel containing ethidium bromide.", "Total RNA was extracted from HCMV infected cells at different time points using the mirVana miRNA isolation kit (Ambion). We measured hcmv-miR-UL112-1 in all samples using TaqMan® miRNA assays (MIMAT0001577 for hcmv-miR-UL112-1, ABI). U6 expression was used to normalize the expression of hcmv-miR-UL112-1. HCMV uninfected MRC-5 cells were used as a negative control. The fold difference in hcmv-miR-UL112-1 expression was calculated using the equation 2^ΔΔCt, where ΔCt = Ct (hcmv-miR-UL112-1) – Ct (U6 control) and ΔΔCt = ΔCt (HCMV infected MRC-5) – ΔCt (MRC-5). The measurements were done in triplicate and the results are presented as the means ± S.D.", "MRC-5 cells were plated in 24-well plates at a density of 4.0×10⁵ cells per well and grown to reach about 80% confluence at the time of transfection. 200 ng of pMIR-IL-32UTR/pMIR-IL-32MUT plasmid or an empty vector control was co-transfected with 400 ng of pS-miR-UL112-1 or pS-neg control into the cells using Lipofectamine 2000 reagent (Invitrogen) according to the manufacturer’s protocol. At the same time, 200 ng of renilla luciferase plasmid (Promega) was introduced into the co-transfected cells as an internal control plasmid. Cells were collected 48 hours post transfection and luciferase activity levels were measured using the Dual luciferase reporter assay system (Promega) according to the manufacturer’s guidelines. All measurements were done in triplicate and signals were normalized for transfection efficiency to the internal renilla control. The results are presented as the means ± S.D.", "Western blot analyses were performed to detect IL-32 levels in HCMV infected MRC-5 cells and hcmv-miR-UL112-1-overexpressing HEK293 cells, respectively. MRC-5 cells were inoculated with the H strain at 3–5 m.o.i. Cells were harvested at different time points post infection (6 h, 24 h, 48 h, 72 h and 96 h). HEK293 cells were transfected with 8 μg of pS-miR-UL112-1 or pS-neg control using Lipofectamine 2000 (Invitrogen) and were incubated at 37°C for 48 hours.\nProteins of the cells described above were prepared by suspending cells in lysis buffer, followed by centrifugation. Concentrations of proteins in the supernatant were quantified using a protein assay kit (Beyotime). Protein was separated by 10% acrylamide gel electrophoresis and transferred onto a nitrocellulose membrane. Western blot analysis was performed using an IL-32 antibody (Abcam). Immunoblots were visualized with an ECL detection system. Sample loading was normalized by the quantities of β-actin detected in parallel.", "All experiments were reproducible. The results are presented as the means ± S.D. Student’s t test and the F test were used to determine statistical significance. 
Differences were considered statistically significant at a value of P≤0.05.", "The authors declare that they have no competing interests.", "YJH carried out the semi-quantitative RT-PCR analysis, western blot analysis and statistical analysis. YQ and RH carried out the plasmid construction and luciferase assay. QR, as the corresponding author, designed the study and corrected the manuscript. YPM and YHJ carried out the preparation of the virus and cell lines. ZRS carried out the serum sample collection and ELISA detection. All authors have read and approved the final manuscript." ]
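The TaqMan assay description in the methods above derives fold differences for hcmv-miR-UL112-1 from Ct values normalized to U6 via a ΔΔCt calculation. The sketch below shows that arithmetic in the widely used 2^(-ΔΔCt) (Livak) form; the paper writes the exponent as ΔΔCt, so its sign convention may differ, and all Ct values and function names here are hypothetical.

```python
# Minimal sketch of relative quantification from qPCR Ct values, assuming the
# widely used 2^(-ΔΔCt) (Livak) form; the paper states the exponent as ΔΔCt,
# so its sign convention may differ. All Ct values below are hypothetical.

def delta_ct(ct_target: float, ct_reference: float) -> float:
    """ΔCt = Ct(target miRNA) - Ct(endogenous control, e.g. U6)."""
    return ct_target - ct_reference

def fold_change(dct_sample: float, dct_calibrator: float) -> float:
    """Relative quantity of the target in a sample versus a calibrator sample."""
    ddct = dct_sample - dct_calibrator          # ΔΔCt
    return 2.0 ** (-ddct)                       # 2^(-ΔΔCt)

# Hypothetical Ct values: hcmv-miR-UL112-1 vs U6 at two time points
dct_48hpi = delta_ct(ct_target=26.0, ct_reference=20.0)   # ΔCt = 6.0
dct_24hpi = delta_ct(ct_target=28.7, ct_reference=20.0)   # ΔCt = 8.7

print(f"48 hpi relative to 24 hpi: {fold_change(dct_48hpi, dct_24hpi):.1f}-fold")
# -> about 6.5-fold, in line with the increase reported between 24 and 48 hpi
```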
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Results", "Relatively high IL-32 levels among individuals with active HCMV infection", "Kinetics of the expression of IL-32 mRNA, IL-32 protein and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells", "Functional down-regulation of endogenous IL-32 expression by ectopically expressed hcmv-miR-UL112-1", "Discussion", "Conclusions", "Materials and methods", "Serum samples", "Cell lines", "Virus", "Plasmid constructs", "Enzyme-linked immunosorbent assay", "Semi-quantitative RT-PCR analysis", "miRNA extraction and TaqMan assays", "Luciferase assay", "Western blot analysis", "Statistical analysis", "Competing interests", "Authors’ contributions" ]
[ "Interleukin-32 (IL-32) is a newly-discovered pro-inflammatory cytokine, which plays a role in innate and adaptive immune responses [1,2]. It lacks sequence homology to any presently known cytokine families. IL-32 is associated with the induction of inflammatory responses by activating the p38MAPK, NF-kappa B and AP-1 signaling pathways. It has been implicated in inflammatory disorders, mycobacterium tuberculosis infections and inflammatory bowel disease, as well as in some autoimmune diseases, such as rheumatoid arthritis, ulcerative colitis and Crohn’s disease [3-10]. Moreover, it has been reported that IL-32 has pro-inflammatory effects on myeloid cells and promotes the differentiation of osteoclast precursors into multinucleated cells expressing specific osteoclast markers [11,12]. In recent studies, IL-32 has also been found to be regulated during viral infections. Elevated levels of IL-32 were found in sera from patients infected with influenza A virus [13-15], hepatitis B virus (HBV) [16], hepatitis C virus (HCV) [17], human papillomavirus (HPV) [18] and human immunodeficiency virus (HIV) [19-21], suggesting that IL-32 might play an important role in host defense against viral infections\nHuman cytomegalovirus (HCMV) is an ubiquitous β-herpesvirus that infects a broad range of cell types in human hosts, contributing to its complex and varied pathogenesis. HCMV Infection leads to life-long persistence in 50%–90% of the population, which is generally subclinical in healthy individuals [22]. However, it can lead to serious complications in immunocompromised patients, such as transplant recipients or AIDS patients [23,24].\nMicroRNAs (miRNAs) are an abundant class of small non-coding RNA molecules that target mRNAs generally within their 3′ untranslated regions (3′UTRs). MiRNAs suppress gene expression mainly through inhibition of translation or, rarely, through degradation of mRNA [25,26]. Clinical isolates of HCMV encode at least 17 miRNAs [27,28]. However, only a few functional targets have been validated experimentally for some HCMV-encoded miRNAs [29-34]. It has been demonstrated that hcmv-miR-UL112-1 targets and reduces the expression of HCMV UL123 (IE1 or IE72), UL114 and the major histocompatibility complex class 1-related chain B (MICB). Moreover, BclAF1 protein, a human cytomegalovirus restriction factor, was reported to be a new target of hcmv-miR-UL112-1 [35]. In addition, multiple cellular targets of hcmv-miR-US25-1, which are associated with cell cycle control, were identified by RNA induced silencing complex immunoprecipitation (RISC-IP), including cyclin E2, BRCC3, EID1, MAPRE2 and CD147. IL-32, which is not present in the confirmed targets, was screened out as a candidate target of hcmv-miR-UL112-1 in our previous study [36]. However, no further experiments to validate these findings have been performed.\nIn the present study, the expression levels of IL-32 were compared among serum samples from patients with active HCMV infection and samples from HCMV-IgM negative individuals. The expression levels of IL-32 and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells were detected at different stages of infection and time points. 
In addition, functional down-regulation of IL-32 by hcmv-miR-UL112-1 was detected in transfected human embryonic kidney (HEK293) cells, and the effect of hcmv-miR-UL112-1 on IL-32 during HCMV infection was primarily discussed.", " Relatively high IL-32 levels among individuals with active HCMV infection IL-32 protein levels were measured in the sera of 40 patients with active HCMV infections (HCMV IgM positive) and 32 HCMV IgM negative control individuals by enzyme-linked immunosorbent assays (ELISA). According to the results of HCMV IgG detection, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). Mean IL-32 levels in the sera of patients with active HCMV infection were 4.1778±1.1663 ng/ml (shown in Figure 1). Mean IL-32 levels in sera of the previously HCMV infected group and healthy group were 2.4653±0.8287 ng/ml and 2.4480±0.9162 ng/ml, respectively. The data were analyzed by F test. IL-32 levels in the sera of patients with active HCMV infection were significantly higher than those in the HCMV IgM negative control group (F=23.957, P<0.001). No significant difference was observed between the previously HCMV infected group and the healthy group (P=0.963).\n\nAn illustration showing IL-32 levels in serum samples detected by ELISA. Serum samples were from actively HCMV-infected patients (n=40), HCMV previously infected individuals (n=17) and healthy individuals (n=15), respectively.\nIL-32 protein levels were measured in the sera of 40 patients with active HCMV infections (HCMV IgM positive) and 32 HCMV IgM negative control individuals by enzyme-linked immunosorbent assays (ELISA). According to the results of HCMV IgG detection, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). Mean IL-32 levels in the sera of patients with active HCMV infection were 4.1778±1.1663 ng/ml (shown in Figure 1). Mean IL-32 levels in sera of the previously HCMV infected group and healthy group were 2.4653±0.8287 ng/ml and 2.4480±0.9162 ng/ml, respectively. The data were analyzed by F test. IL-32 levels in the sera of patients with active HCMV infection were significantly higher than those in the HCMV IgM negative control group (F=23.957, P<0.001). No significant difference was observed between the previously HCMV infected group and the healthy group (P=0.963).\n\nAn illustration showing IL-32 levels in serum samples detected by ELISA. Serum samples were from actively HCMV-infected patients (n=40), HCMV previously infected individuals (n=17) and healthy individuals (n=15), respectively.\n Kinetics of the expression of IL-32 mRNA, IL-32 protein and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells Cells were harvested at different infection stages and time points after HCMV infection as described in the Materials and Methods section. We used specific drugs to determine the expression profile of IL-32 at different stages of viral replication. CHX was used to determine the immediate early (IE) stage and PAA was used to determine early (E) stage. As shown in Figure 2A and B, semi-quantitative RT-PCR analysis revealed that IL-32 mRNA expression was significantly activated at IE stage but not at E or late (L) stages post-infection. IL-32 protein expression levels at different time points following HCMV infection were measured by western blot analysis. 
As shown in Figure 2C and D, IL-32 protein expression reached a peak value at 6 hours post infection (hpi). The relative IL-32 concentration in HCMV infected cells collected at 6 hpi was 10 fold higher than in uninfected cells. All measured values were balanced by using β-actin as an internal control. No accumulation of either IL-32 mRNA or protein was found following the prolongation of HCMV-infection process. IL-32 mRNA levels in HCMV infected cells decreased in E and L stages to a similar level as in uninfected cells. IL-32 protein levels decreased with subsequent hours post infection.\n\nExpression of IL-32 induced by HCMV infection in MRC-5 cells. (A) IL-32 mRNA levels in HCMV infected cells were compared to that in uninfected MRC-5 cells. β-actin was used as an internal control. The universal primers for all IL-32 transcripts were used. U is for uninfected MRC-5; IE is for immediate early stage; E is for early stage; L is for late stage. (B) IL-32 mRNA levels were represented relative to that in IE stage. (C) IL-32 protein levels in HCMV infected MRC-5 cells were detected at different time points. IL-32 densitometer values were normalized by β-actin values. (D) The IL-32 protein levels were represented relative to that in infected cells collected at 6 hpi. (E) The kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi and increased gradually as the HCMV-infection process was prolonged.\nIn addition, the kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. Measured values were normalized by using U6 as an internal control. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi. As shown in Figure 2E, a steep increase was observed at 48 hpi. The relative quantity of hcmv-miR-UL112-1 at 48 hpi was 6.5 fold higher than at 24 hpi. The expression of hcmv-miR-UL112-1 increased gradually as the HCMV-infection process was prolonged. The relative quantity of hcmv-miR-UL112-1 in HCMV infected cells at 96 hpi was 10 fold higher than at 24 hpi.\nCells were harvested at different infection stages and time points after HCMV infection as described in the Materials and Methods section. We used specific drugs to determine the expression profile of IL-32 at different stages of viral replication. CHX was used to determine the immediate early (IE) stage and PAA was used to determine early (E) stage. As shown in Figure 2A and B, semi-quantitative RT-PCR analysis revealed that IL-32 mRNA expression was significantly activated at IE stage but not at E or late (L) stages post-infection. IL-32 protein expression levels at different time points following HCMV infection were measured by western blot analysis. As shown in Figure 2C and D, IL-32 protein expression reached a peak value at 6 hours post infection (hpi). The relative IL-32 concentration in HCMV infected cells collected at 6 hpi was 10 fold higher than in uninfected cells. All measured values were balanced by using β-actin as an internal control. No accumulation of either IL-32 mRNA or protein was found following the prolongation of HCMV-infection process. IL-32 mRNA levels in HCMV infected cells decreased in E and L stages to a similar level as in uninfected cells. IL-32 protein levels decreased with subsequent hours post infection.\n\nExpression of IL-32 induced by HCMV infection in MRC-5 cells. (A) IL-32 mRNA levels in HCMV infected cells were compared to that in uninfected MRC-5 cells. 
β-actin was used as an internal control. The universal primers for all IL-32 transcripts were used. U is for uninfected MRC-5; IE is for immediate early stage; E is for early stage; L is for late stage. (B) IL-32 mRNA levels were represented relative to that in IE stage. (C) IL-32 protein levels in HCMV infected MRC-5 cells were detected at different time points. IL-32 densitometer values were normalized by β-actin values. (D) The IL-32 protein levels were represented relative to that in infected cells collected at 6 hpi. (E) The kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi and increased gradually as the HCMV-infection process was prolonged.\nIn addition, the kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. Measured values were normalized by using U6 as an internal control. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi. As shown in Figure 2E, a steep increase was observed at 48 hpi. The relative quantity of hcmv-miR-UL112-1 at 48 hpi was 6.5 fold higher than at 24 hpi. The expression of hcmv-miR-UL112-1 increased gradually as the HCMV-infection process was prolonged. The relative quantity of hcmv-miR-UL112-1 in HCMV infected cells at 96 hpi was 10 fold higher than at 24 hpi.\n Functional down-regulation of endogenous IL-32 expression by ectopically expressed hcmv-miR-UL112-1 In our previous study, a sequence in the 3′UTR of IL-32 mRNA was identified by hybrid-PCR to be a candidate target site of hcmv-miR-UL112-1. The predicted binding site for hcmv-miR-UL112-1 was 195 nt upstream of the polyA structure of IL-32 mRNA. A schematic representation of hcmv-miR-UL112-1 binding to the IL-32 3′UTR sequence is shown in Figure 3A.\n\nDown-regulation of IL-32 expression by hcmv-miR-UL112-1. (A) The diagram shows the predicted sequences of hcmv-miR-UL112-1 binding to IL-32 mRNA. (B) As a candidate target, IL-32 was validated for its ability to inhibit expression of a luciferase reporter construct in the presence of hcmv-miR-UL112-1 (pS-miR-UL112-1). Results are shown as percentage expression of negative control sample (pS-neg) following correction for transfection levels according to the control of renilla luciferase expression. Values are shown as means ± standard deviations for triplicate samples. (C) HEK 293 cells were transfected with pS-miR-UL112-1 or control vector respectively. Cells were collected 48 hpi and were subjected to western blot analysis using the indicated antibodies. IL-32 densitometer values were normalized to that of the β-actin values. (D) The amounts of proteins presented in panel C were quantified by densitometry. Results are shown as percentage expression of the negative control sample (Empty).\nTo further determine whether the IL-32 3′UTR sequence represents a functional target site for hcmv-miR-UL112-1, the IL-32 3′UTR sequence was validated by luciferase reporter assays. As shown in Figure 3B, transfected pMIR-IL-32UTR led to a significant inhibition of luciferase activity by 39.27% in the presence of pS-miR-UL112-1. pS-miR-UL112-1 had no significant inhibitory effects on the luciferase activities of pMIR and pMIR-IL-32MUT. These results demonstrate that the IL-32 3′UTR sequence provided a specific and functional binding site for hcmv-miR-UL112-1.\nThe regulatory effect of hcmv-miR-UL112-1 on IL-32 protein expression was then examined in HEK293 cells by western blot. 
IL-32 protein level was significantly reduced in cells transfected with hcmv-miR-UL112-1 in comparison to cells transfected with pSilencer negative control (Figure 3C and D). IL-32 densitometer values were normalized to that of β-actin values. The level of reduction of IL-32 protein was approximately 67% as determined by densitometry (Figure 3D). Our observations confirmed that the expression of endogenous IL-32 protein could be specifically inhibited by ectopically expressed hcmv-miR-UL112-1.\nIn our previous study, a sequence in the 3′UTR of IL-32 mRNA was identified by hybrid-PCR to be a candidate target site of hcmv-miR-UL112-1. The predicted binding site for hcmv-miR-UL112-1 was 195 nt upstream of the polyA structure of IL-32 mRNA. A schematic representation of hcmv-miR-UL112-1 binding to the IL-32 3′UTR sequence is shown in Figure 3A.\n\nDown-regulation of IL-32 expression by hcmv-miR-UL112-1. (A) The diagram shows the predicted sequences of hcmv-miR-UL112-1 binding to IL-32 mRNA. (B) As a candidate target, IL-32 was validated for its ability to inhibit expression of a luciferase reporter construct in the presence of hcmv-miR-UL112-1 (pS-miR-UL112-1). Results are shown as percentage expression of negative control sample (pS-neg) following correction for transfection levels according to the control of renilla luciferase expression. Values are shown as means ± standard deviations for triplicate samples. (C) HEK 293 cells were transfected with pS-miR-UL112-1 or control vector respectively. Cells were collected 48 hpi and were subjected to western blot analysis using the indicated antibodies. IL-32 densitometer values were normalized to that of the β-actin values. (D) The amounts of proteins presented in panel C were quantified by densitometry. Results are shown as percentage expression of the negative control sample (Empty).\nTo further determine whether the IL-32 3′UTR sequence represents a functional target site for hcmv-miR-UL112-1, the IL-32 3′UTR sequence was validated by luciferase reporter assays. As shown in Figure 3B, transfected pMIR-IL-32UTR led to a significant inhibition of luciferase activity by 39.27% in the presence of pS-miR-UL112-1. pS-miR-UL112-1 had no significant inhibitory effects on the luciferase activities of pMIR and pMIR-IL-32MUT. These results demonstrate that the IL-32 3′UTR sequence provided a specific and functional binding site for hcmv-miR-UL112-1.\nThe regulatory effect of hcmv-miR-UL112-1 on IL-32 protein expression was then examined in HEK293 cells by western blot. IL-32 protein level was significantly reduced in cells transfected with hcmv-miR-UL112-1 in comparison to cells transfected with pSilencer negative control (Figure 3C and D). IL-32 densitometer values were normalized to that of β-actin values. The level of reduction of IL-32 protein was approximately 67% as determined by densitometry (Figure 3D). Our observations confirmed that the expression of endogenous IL-32 protein could be specifically inhibited by ectopically expressed hcmv-miR-UL112-1.", "IL-32 protein levels were measured in the sera of 40 patients with active HCMV infections (HCMV IgM positive) and 32 HCMV IgM negative control individuals by enzyme-linked immunosorbent assays (ELISA). According to the results of HCMV IgG detection, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). 
Mean IL-32 levels in the sera of patients with active HCMV infection were 4.1778±1.1663 ng/ml (shown in Figure 1). Mean IL-32 levels in sera of the previously HCMV infected group and healthy group were 2.4653±0.8287 ng/ml and 2.4480±0.9162 ng/ml, respectively. The data were analyzed by F test. IL-32 levels in the sera of patients with active HCMV infection were significantly higher than those in the HCMV IgM negative control group (F=23.957, P<0.001). No significant difference was observed between the previously HCMV infected group and the healthy group (P=0.963).\n\nAn illustration showing IL-32 levels in serum samples detected by ELISA. Serum samples were from actively HCMV-infected patients (n=40), HCMV previously infected individuals (n=17) and healthy individuals (n=15), respectively.", "Cells were harvested at different infection stages and time points after HCMV infection as described in the Materials and Methods section. We used specific drugs to determine the expression profile of IL-32 at different stages of viral replication. CHX was used to determine the immediate early (IE) stage and PAA was used to determine early (E) stage. As shown in Figure 2A and B, semi-quantitative RT-PCR analysis revealed that IL-32 mRNA expression was significantly activated at IE stage but not at E or late (L) stages post-infection. IL-32 protein expression levels at different time points following HCMV infection were measured by western blot analysis. As shown in Figure 2C and D, IL-32 protein expression reached a peak value at 6 hours post infection (hpi). The relative IL-32 concentration in HCMV infected cells collected at 6 hpi was 10 fold higher than in uninfected cells. All measured values were balanced by using β-actin as an internal control. No accumulation of either IL-32 mRNA or protein was found following the prolongation of HCMV-infection process. IL-32 mRNA levels in HCMV infected cells decreased in E and L stages to a similar level as in uninfected cells. IL-32 protein levels decreased with subsequent hours post infection.\n\nExpression of IL-32 induced by HCMV infection in MRC-5 cells. (A) IL-32 mRNA levels in HCMV infected cells were compared to that in uninfected MRC-5 cells. β-actin was used as an internal control. The universal primers for all IL-32 transcripts were used. U is for uninfected MRC-5; IE is for immediate early stage; E is for early stage; L is for late stage. (B) IL-32 mRNA levels were represented relative to that in IE stage. (C) IL-32 protein levels in HCMV infected MRC-5 cells were detected at different time points. IL-32 densitometer values were normalized by β-actin values. (D) The IL-32 protein levels were represented relative to that in infected cells collected at 6 hpi. (E) The kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi and increased gradually as the HCMV-infection process was prolonged.\nIn addition, the kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. Measured values were normalized by using U6 as an internal control. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi. As shown in Figure 2E, a steep increase was observed at 48 hpi. The relative quantity of hcmv-miR-UL112-1 at 48 hpi was 6.5 fold higher than at 24 hpi. The expression of hcmv-miR-UL112-1 increased gradually as the HCMV-infection process was prolonged. 
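A minimal sketch of the β-actin normalization described for Figure 2C–D follows. The band intensities, time-point labels and variable names are invented for illustration only; the snippet simply shows the arithmetic (IL-32 densitometry value divided by the matching β-actin value, then expressed relative to the 6 hpi sample) and is not the authors' analysis script.

# Hypothetical densitometry readings; illustrates IL-32/β-actin normalization relative to 6 hpi.
il32 = {"uninfected": 1500, "6 hpi": 15000, "24 hpi": 9000, "48 hpi": 5000, "72 hpi": 2500, "96 hpi": 1000}
actin = {"uninfected": 10000, "6 hpi": 10200, "24 hpi": 9800, "48 hpi": 10100, "72 hpi": 9900, "96 hpi": 10050}
normalized = {t: il32[t] / actin[t] for t in il32}                      # balance each lane by its β-actin signal
relative = {t: normalized[t] / normalized["6 hpi"] for t in normalized} # express relative to the 6 hpi peak
for t, v in relative.items():
    print(f"{t}: {v:.2f}")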
The relative quantity of hcmv-miR-UL112-1 in HCMV infected cells at 96 hpi was 10 fold higher than at 24 hpi.", "In our previous study, a sequence in the 3′UTR of IL-32 mRNA was identified by hybrid-PCR to be a candidate target site of hcmv-miR-UL112-1. The predicted binding site for hcmv-miR-UL112-1 was 195 nt upstream of the polyA structure of IL-32 mRNA. A schematic representation of hcmv-miR-UL112-1 binding to the IL-32 3′UTR sequence is shown in Figure 3A.\n\nDown-regulation of IL-32 expression by hcmv-miR-UL112-1. (A) The diagram shows the predicted sequences of hcmv-miR-UL112-1 binding to IL-32 mRNA. (B) As a candidate target, IL-32 was validated for its ability to inhibit expression of a luciferase reporter construct in the presence of hcmv-miR-UL112-1 (pS-miR-UL112-1). Results are shown as percentage expression of negative control sample (pS-neg) following correction for transfection levels according to the control of renilla luciferase expression. Values are shown as means ± standard deviations for triplicate samples. (C) HEK 293 cells were transfected with pS-miR-UL112-1 or control vector respectively. Cells were collected 48 hpi and were subjected to western blot analysis using the indicated antibodies. IL-32 densitometer values were normalized to that of the β-actin values. (D) The amounts of proteins presented in panel C were quantified by densitometry. Results are shown as percentage expression of the negative control sample (Empty).\nTo further determine whether the IL-32 3′UTR sequence represents a functional target site for hcmv-miR-UL112-1, the IL-32 3′UTR sequence was validated by luciferase reporter assays. As shown in Figure 3B, transfected pMIR-IL-32UTR led to a significant inhibition of luciferase activity by 39.27% in the presence of pS-miR-UL112-1. pS-miR-UL112-1 had no significant inhibitory effects on the luciferase activities of pMIR and pMIR-IL-32MUT. These results demonstrate that the IL-32 3′UTR sequence provided a specific and functional binding site for hcmv-miR-UL112-1.\nThe regulatory effect of hcmv-miR-UL112-1 on IL-32 protein expression was then examined in HEK293 cells by western blot. IL-32 protein level was significantly reduced in cells transfected with hcmv-miR-UL112-1 in comparison to cells transfected with pSilencer negative control (Figure 3C and D). IL-32 densitometer values were normalized to that of β-actin values. The level of reduction of IL-32 protein was approximately 67% as determined by densitometry (Figure 3D). Our observations confirmed that the expression of endogenous IL-32 protein could be specifically inhibited by ectopically expressed hcmv-miR-UL112-1.", "IL-32 is a proinflammatory cytokine and plays a critical role in inflammatory responses. It induces the expression of IL-1β, IL-6, IL-8 and TNF through the p38MAPK, NF-kappa B and AP-1 signaling pathways. Recent reports have highlighted that IL-32 is regulated during viral infection in humans. It has been reported that the IL-32 expression could be induced during infections of influenza A virus, HBV, HCV, HPV and HIV [13-21].\nIL-32 levels, detected by ELISA, were 4.1778±1.1663 ng/ml in sera from actively HCMV-infected patients. IL-32 levels in sera from the previously infected group and healthy group were 2.4653±0.8287 ng/ml and 2.4480±0.9162 ng/ml, respectively. IL-32 levels in sera from actively HCMV-infected patients were significantly higher than those in control groups (F=23.957, P<0.001). 
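The between-group comparison reported here (an F test across the three serum groups, plus a comparison of the two IgM-negative groups) can be sketched as below. The IL-32 values are randomly generated around the reported group means and standard deviations, and the use of scipy's one-way ANOVA and t test is an assumption for illustration, not the authors' actual statistical code.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
active = rng.normal(4.18, 1.17, 40)    # active HCMV infection (IgM positive), n=40, ng/ml (hypothetical draws)
previous = rng.normal(2.47, 0.83, 17)  # previously infected (IgM negative / IgG positive), n=17
healthy = rng.normal(2.45, 0.92, 15)   # healthy controls (IgM and IgG negative), n=15

f_stat, f_p = stats.f_oneway(active, previous, healthy)   # F test across the three groups
t_stat, t_p = stats.ttest_ind(previous, healthy)          # previously infected vs healthy
print(f"F = {f_stat:.3f}, P = {f_p:.3g}; t = {t_stat:.3f}, P = {t_p:.3f}")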
However, no significant differences in IL-32 levels were observed between the HCMV previously infected group and the healthy group (P=0.963). These results suggest that IL-32 may be a newly described reactive protein corresponding to active HCMV infection.\nHCMV infection can induce the expression of IL-32 at both the mRNA and protein levels in HCMV-infected MRC-5 cells. The peak value of IL-32 mRNA was detected as early as the IE stage. The IL-32 protein expression level reached its peak at 6 hpi. The IL-32 protein concentration in HCMV-infected cells at 6 hpi was 10 fold higher than that in uninfected cells. This result demonstrated that the production of IL-32 can be specifically induced by active HCMV infection, although the mechanism that induces the expression of IL-32 during HCMV infection is still unknown. Is the induction of IL-32 mediated only by viral entry or by the expression of IE proteins? Additional experiments would be necessary to provide these answers, and might include the use of UV-irradiated virus. No accumulation of either IL-32 mRNA or protein was observed as the HCMV infection was prolonged. IL-32 mRNA levels in HCMV-infected cells decreased rapidly in the E and L stages to a level similar to that of uninfected cells. IL-32 protein levels decreased in subsequent hours post infection and could only be detected in small quantities by western blot at 96 hpi. It is hypothesized that certain pathways might be activated to down-regulate or block the expression of IL-32 during HCMV infection.\nMiRNAs are among the most studied non-coding RNAs of recent years. They regulate gene expression at the post-transcriptional level and act as key regulators in diverse regulatory pathways, including early development, cell differentiation, cell proliferation, metabolism and apoptosis [37-40]. We carried out additional experiments to validate IL-32 mRNA as a target of hcmv-miR-UL112-1. In the presence of pS-miR-UL112-1, the luciferase activity of pMIR-IL-32UTR was significantly decreased by 39.27%, and a 67% reduction of the endogenous IL-32 protein expression level was measured in HEK293 cells. It was confirmed that hcmv-miR-UL112-1 could specifically repress the expression of IL-32 through the predicted binding site.\nTo examine the role of hcmv-miR-UL112-1 in regulating IL-32 expression during an active HCMV infection, the kinetics of hcmv-miR-UL112-1 expression in HCMV-infected MRC-5 cells were measured using TaqMan® miRNA assays. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi and then increased gradually with prolonged HCMV infection. This result suggests that hcmv-miR-UL112-1 might function and lead to the decrease of IL-32 levels in the E stage during active HCMV infection. The decrease of IL-32 mRNA, but not of the IL-32 protein level, together with the dramatic increase of hcmv-miR-UL112-1 during the E stage, suggests the possibility that hcmv-miR-UL112-1 may function primarily through the degradation of mRNA. Further studies are needed to address this point.\nThrough co-evolution with its host, HCMV has evolved effective immune evasion strategies by encoding many immunomodulatory proteins that modulate the host immune response. It is conceivable that HCMV-encoded miRNAs might also be exploited during immune evasion. Hcmv-miR-UL112-1 is expressed with early kinetics. Some functional targets of hcmv-miR-UL112-1 have been shown experimentally to be involved in the modulation of NK cell activation. Stern-Ginossar et al. 
have identified MICB as a functional target of hcmv-miR-UL112-1 [30]. MICB is a stress-induced ligand of the NK cell activating receptor, NKG2D, which is critical for killing of virus-infected cells by NK cells. Hcmv-miR-UL112-1-mediated down-regulation of MICB, perturbed MICB binding with NKG2D and reduced killing of HCMV infected cells by NK cells. In addition, HCMV IE1, which would increase the TNF-α level by activation of the TNF-α promoter, was then identified as a target of hcmv-miR-UL112-1 by Murphy et al. in 2008 [29]. It has been confirmed that IL-32 induces significant amounts of TNF-α in a dose-dependent manner, and that silencing endogenous IL-32 by short hairpin RNA impairs the induction of TNF-α [2]. In this study, we validated that expression of IL-32 was activated by active HCMV infection and functionally down-regulated by ectopically expressed hcmv-miR-UL112-1 in HEK293 cells. It may be deduced from these results that, besides IE1 and MICB, hcmv-miR-UL112-1 might also synergistically achieve the modulation of NK cell activation through the TNF-α pathway by the down-regulation of IL-32, and immune evasion by HCMV.", "In summary, IL-32 expression in active HCMV infection was firstly investigated in our study. Detailed kinetics of the transcription of hcmv-miR-UL112-1, IL-32 mRNA and its protein expression were determined in HCMV infected MRC-5 cells. Furthermore, IL-32 expression was demonstrated to be functionally down-regulated by ectopically expressed hcmv-miR-UL112-1 in HEK293 cells. Follow up studies will investigate the mechanism of immune evasion by HCMV mediated by hcmv-miR-UL112-1, which has been confirmed to modulate NK cell activation during HCMV infection through post-transcriptional regulation.", " Serum samples Serum samples were obtained from 40 patients with HCMV infection (22 male, 18 female, aged 2.2 ± 0.42 yr) from Shengjing Hospital of China Medical University. All patients were HCMV-IgM positive and HCMV-IgG negative. The viral loads in urine samples from these patients were between 1.793×103 and 4.115×107 HCMV DNA copies per milliliter, as detected by routine QF-PCR (DaAn Gene). Blood samples collected from 32 HCMV-IgM negative individuals (17 male, 15 female, aged 2.7±0.35 yr) at the Medical Examination Centre of Shengjing Hospital were used as controls. According to the HCMV IgG detection results, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). All individuals involved in our study were seronegative for influenza A virus, HBV, HCV and HIV. This study was specifically approved by the Ethical Committee of Shengjing hospital.\nSerum samples were obtained from 40 patients with HCMV infection (22 male, 18 female, aged 2.2 ± 0.42 yr) from Shengjing Hospital of China Medical University. All patients were HCMV-IgM positive and HCMV-IgG negative. The viral loads in urine samples from these patients were between 1.793×103 and 4.115×107 HCMV DNA copies per milliliter, as detected by routine QF-PCR (DaAn Gene). Blood samples collected from 32 HCMV-IgM negative individuals (17 male, 15 female, aged 2.7±0.35 yr) at the Medical Examination Centre of Shengjing Hospital were used as controls. According to the HCMV IgG detection results, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). 
All individuals involved in our study were seronegative for influenza A virus, HBV, HCV and HIV. This study was specifically approved by the Ethical Committee of Shengjing hospital.\n Cell lines HEK293 cell line was a gift of Dr. Fangjie Chen from Department of Medical Genetics, China Medical University. HEK293 cells were cultured in Dulbecco’s Modified Eagle’s Medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 100 μg/ml penicillin, 100 μg/ml streptomycin sulfate and 2 mM L-glutamine. Human embryonic lung fibroblasts cells, MRC-5, were acquired from Shanghai Institute for Biological Sciences, Chinese Academy of Sciences (CAS). MRC-5 cells were maintained in Modified Eagle’s Medium (MEM) supplemented with 10% FBS, 100 μg/ml penicillin and 100 μg/ml streptomycin sulfate. All cell cultures were maintained at 37°C in a 5% CO2 incubator.\nHEK293 cell line was a gift of Dr. Fangjie Chen from Department of Medical Genetics, China Medical University. HEK293 cells were cultured in Dulbecco’s Modified Eagle’s Medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 100 μg/ml penicillin, 100 μg/ml streptomycin sulfate and 2 mM L-glutamine. Human embryonic lung fibroblasts cells, MRC-5, were acquired from Shanghai Institute for Biological Sciences, Chinese Academy of Sciences (CAS). MRC-5 cells were maintained in Modified Eagle’s Medium (MEM) supplemented with 10% FBS, 100 μg/ml penicillin and 100 μg/ml streptomycin sulfate. All cell cultures were maintained at 37°C in a 5% CO2 incubator.\n Virus HCMV clinical strain H was isolated from a urine sample of a 5-month-old infant hospitalized in Shengjing Hospital of China Medical University. Stock virus was propagated in MRC-5 maintained in MEM supplemented with 2% FBS, 100 μg/ml penicillin and 100 μg/ml streptomycin. The supernatant was then harvested, and the aliquots were stored at −80°C before use.\nHCMV clinical strain H was isolated from a urine sample of a 5-month-old infant hospitalized in Shengjing Hospital of China Medical University. Stock virus was propagated in MRC-5 maintained in MEM supplemented with 2% FBS, 100 μg/ml penicillin and 100 μg/ml streptomycin. The supernatant was then harvested, and the aliquots were stored at −80°C before use.\n Plasmid constructs Luciferase reporter construct pMIR was acquired from Ambion Company. IL-32 3′UTR sequence was obtained by RT-PCR from RNA harvested from HCMV infected cells, and inserted into SpeI and HindIII site of the multiple cloning regions (pMIR-IL-32UTR). A point mutation at corresponding seed region binding site was maintained by using Site-directed Gene Mutagenesis Kit (Beyotime), resulting in GUC to GAA. Sequence predicted to express hcmv-miR-UL112-1 was amplified by PCR directly from HCMV genome. The PCR product was inserted into BamHI and HindIII sites of pSilencer4.1 (Ambion) to generate a miRNA expression vector pS-miR-UL112-1. All primer sequences used in plasmid construction are listed in Table 1. All constructs were confirmed by DNA sequencing.\n\nPrimers used in plasmid construction and semi-quantitative RT-PCR\nNote: sequences recognized by restriction en donuclases are down lined.\nLuciferase reporter construct pMIR was acquired from Ambion Company. IL-32 3′UTR sequence was obtained by RT-PCR from RNA harvested from HCMV infected cells, and inserted into SpeI and HindIII site of the multiple cloning regions (pMIR-IL-32UTR). 
A point mutation at corresponding seed region binding site was maintained by using Site-directed Gene Mutagenesis Kit (Beyotime), resulting in GUC to GAA. Sequence predicted to express hcmv-miR-UL112-1 was amplified by PCR directly from HCMV genome. The PCR product was inserted into BamHI and HindIII sites of pSilencer4.1 (Ambion) to generate a miRNA expression vector pS-miR-UL112-1. All primer sequences used in plasmid construction are listed in Table 1. All constructs were confirmed by DNA sequencing.\n\nPrimers used in plasmid construction and semi-quantitative RT-PCR\nNote: sequences recognized by restriction en donuclases are down lined.\n Enzyme-linked immunosorbent assay The concentration of IL-32 in serum samples were detected using human IL-32 ELISA Kit (Cusabio) according to the manufacture’s protocols. The absorbance value at wavelength 450 nm was measured. The IL-32 concentrations were calculated from the standard curve.\nThe concentration of IL-32 in serum samples were detected using human IL-32 ELISA Kit (Cusabio) according to the manufacture’s protocols. The absorbance value at wavelength 450 nm was measured. The IL-32 concentrations were calculated from the standard curve.\n Semi-quantitative RT-PCR analysis MRC-5 cells were inoculated with H strain at 3–5 multiplicity of infection (m.o.i.). Infections were carried out under IE, E and L conditions respectively. For preparation of IE RNA, CHX (Sigma) (100 μl/ml) was added to the culture medium 1 hour before infection and the cells were harvested at 24 hpi. For E RNA, DNA synthesis inhibitor phosphonoacetic acid (PAA) (Sigma) (100 μl/ml) was added to the medium immediately after infection, and the cells were harvested at 48 hpi. L RNA and uninfected cellular RNA was derived from infected (at 96 hpi) and uninfected cells, respectively. Total RNA was isolated using Trizol reagent (TaKaRa), treated with DNase Iand then reverse-transcribed with MLV reverse transcriptase and random primers (TaKaRa). PCR was performed for 24 cycles in 50 μl reactions with the IL-32 specific primer pairs listed in Table 1. mRNA of β-actin was amplified for normalization in all reactions. The primers for IL-32 amplification were designed for detecting all known isoforms of human IL-32 based on the sequences from Gnebank database (NM_001012631.1-001012636.1, NM_004221.4 and NM_001012718.1). The PCR products were analyzed by electrophoresis on 2% agarose gel containing ethidium bromide.\nMRC-5 cells were inoculated with H strain at 3–5 multiplicity of infection (m.o.i.). Infections were carried out under IE, E and L conditions respectively. For preparation of IE RNA, CHX (Sigma) (100 μl/ml) was added to the culture medium 1 hour before infection and the cells were harvested at 24 hpi. For E RNA, DNA synthesis inhibitor phosphonoacetic acid (PAA) (Sigma) (100 μl/ml) was added to the medium immediately after infection, and the cells were harvested at 48 hpi. L RNA and uninfected cellular RNA was derived from infected (at 96 hpi) and uninfected cells, respectively. Total RNA was isolated using Trizol reagent (TaKaRa), treated with DNase Iand then reverse-transcribed with MLV reverse transcriptase and random primers (TaKaRa). PCR was performed for 24 cycles in 50 μl reactions with the IL-32 specific primer pairs listed in Table 1. mRNA of β-actin was amplified for normalization in all reactions. 
The primers for IL-32 amplification were designed for detecting all known isoforms of human IL-32 based on the sequences from Gnebank database (NM_001012631.1-001012636.1, NM_004221.4 and NM_001012718.1). The PCR products were analyzed by electrophoresis on 2% agarose gel containing ethidium bromide.\n miRNA extraction and TaqMan assays Total RNA was extracted from HCMV infected cells at different time points using the mivVana miRNA isolation kit (Ambion). We measured hcmv-miR-UL112-1 in all samples using TaqMan® miRNA assays (MIMAT0001577 for hcmv-miR-UL112-1, ABI). U6 expression was used to normalize the expression of hcmv-miR-UL112-1. HCMV uninfected MRC-5 cells were used as negative control. Fold difference for hcmv-miR-UL112-1 expression was calculated using the equation 2ΔΔct, where ΔCt = Ct (hcmv-miR-UL112-1) – Ct (U6 control) and ΔΔCt = ΔCt (HCMV infected MRC-5) – ΔCt (MRC-5). The measurements were done in triplicates and the results are presented at the means ± S.D.\nTotal RNA was extracted from HCMV infected cells at different time points using the mivVana miRNA isolation kit (Ambion). We measured hcmv-miR-UL112-1 in all samples using TaqMan® miRNA assays (MIMAT0001577 for hcmv-miR-UL112-1, ABI). U6 expression was used to normalize the expression of hcmv-miR-UL112-1. HCMV uninfected MRC-5 cells were used as negative control. Fold difference for hcmv-miR-UL112-1 expression was calculated using the equation 2ΔΔct, where ΔCt = Ct (hcmv-miR-UL112-1) – Ct (U6 control) and ΔΔCt = ΔCt (HCMV infected MRC-5) – ΔCt (MRC-5). The measurements were done in triplicates and the results are presented at the means ± S.D.\n Luciferase assay MRC-5 cells were plated in 24-well plate at a density of 4.0×105 cells per well and grown to reach about 80% confluence at the time for transfection. 200 ng pMIR-IL-32UTR/pMIR-IL-32MUT plasmid or an empty vector control was co-transfected with 400 ng pS-miR-UL112-1 or pS-neg control into the cells by using lipofectamine 2000 reagent (Invitrogen) according to the manufacture’s protocol. 200 ng renilla luciferase plasmid (Promega) was introduced into each co-transfected cells as an internal control plasmid at the same time. Cells were collected 48 hours post transfection and luciferase activity levels were measured using the Dual luciferase reporter assay system (Promega) according to the manufacture’s guidelines. All measurements were done in triplicates and signals were normalized for transfection efficiency to the internal renilla control. The results are presented at the means ±S.D.\nMRC-5 cells were plated in 24-well plate at a density of 4.0×105 cells per well and grown to reach about 80% confluence at the time for transfection. 200 ng pMIR-IL-32UTR/pMIR-IL-32MUT plasmid or an empty vector control was co-transfected with 400 ng pS-miR-UL112-1 or pS-neg control into the cells by using lipofectamine 2000 reagent (Invitrogen) according to the manufacture’s protocol. 200 ng renilla luciferase plasmid (Promega) was introduced into each co-transfected cells as an internal control plasmid at the same time. Cells were collected 48 hours post transfection and luciferase activity levels were measured using the Dual luciferase reporter assay system (Promega) according to the manufacture’s guidelines. All measurements were done in triplicates and signals were normalized for transfection efficiency to the internal renilla control. 
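The dual-luciferase normalization described for this assay (firefly signal corrected by the renilla transfection control, then expressed as a percentage of the pS-neg control) can be sketched like this. The raw readings, replicate counts and sample labels are invented for illustration; only the normalization arithmetic reflects the text.

import statistics

firefly = {"pMIR-IL-32UTR + pS-miR-UL112-1": [5200, 4900, 5100],
           "pMIR-IL-32UTR + pS-neg":         [8300, 8600, 8500]}
renilla = {"pMIR-IL-32UTR + pS-miR-UL112-1": [9800, 9500, 9900],
           "pMIR-IL-32UTR + pS-neg":         [9600, 9700, 9400]}

ratios = {k: [f / r for f, r in zip(firefly[k], renilla[k])] for k in firefly}  # correct for transfection efficiency
neg_mean = statistics.mean(ratios["pMIR-IL-32UTR + pS-neg"])
for k, vals in ratios.items():
    pct = [100 * v / neg_mean for v in vals]                                    # percentage of the pS-neg control
    print(f"{k}: {statistics.mean(pct):.1f}% ± {statistics.stdev(pct):.1f}%")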
The results are presented at the means ±S.D.\n Western blot analysis Western blot analyses were performed to detect the IL-32 levels in HCMV infected MRC-5 cells and hcmv-miR-UL112-1 over expressed HEK293 cells respectively. MRC-5 cells were inoculated with H strain at 3–5 m.o.i. Cells were harvested at different time points post infection (6 h, 24 h, 48 h, 72 h and 96 h). HEK293 cells were transfected with 8 μg pS-miR-UL112-1 or pS-neg control using lipofectamine 2000 (Invitrogen) and were incubated at 37°C for 48 hours.\nProteins of the cells described above were prepared by suspending cells in lysis buffer, followed by centrifugation. Concentrations of proteins in supernatant were quantified using protein assay kit (Beyotime). Protein was separated by 10% acrylamide gel electrophoresis and transferred onto a nitrocellulose membrane. Western blot analysis was performed using IL-32 antibody (abcam). Immunoblots were visualized with ECL detection system. Sample loading was normalized by quantities of β-actin detected parallel.\nWestern blot analyses were performed to detect the IL-32 levels in HCMV infected MRC-5 cells and hcmv-miR-UL112-1 over expressed HEK293 cells respectively. MRC-5 cells were inoculated with H strain at 3–5 m.o.i. Cells were harvested at different time points post infection (6 h, 24 h, 48 h, 72 h and 96 h). HEK293 cells were transfected with 8 μg pS-miR-UL112-1 or pS-neg control using lipofectamine 2000 (Invitrogen) and were incubated at 37°C for 48 hours.\nProteins of the cells described above were prepared by suspending cells in lysis buffer, followed by centrifugation. Concentrations of proteins in supernatant were quantified using protein assay kit (Beyotime). Protein was separated by 10% acrylamide gel electrophoresis and transferred onto a nitrocellulose membrane. Western blot analysis was performed using IL-32 antibody (abcam). Immunoblots were visualized with ECL detection system. Sample loading was normalized by quantities of β-actin detected parallel.\n Statistical analysis All experiments were reproducible. The results are presented at the means ± S.D. Student’s t test and F test was used to determine statistical significance. Differences were considered statistically significant at value of P≤0.05.\nAll experiments were reproducible. The results are presented at the means ± S.D. Student’s t test and F test was used to determine statistical significance. Differences were considered statistically significant at value of P≤0.05.", "Serum samples were obtained from 40 patients with HCMV infection (22 male, 18 female, aged 2.2 ± 0.42 yr) from Shengjing Hospital of China Medical University. All patients were HCMV-IgM positive and HCMV-IgG negative. The viral loads in urine samples from these patients were between 1.793×103 and 4.115×107 HCMV DNA copies per milliliter, as detected by routine QF-PCR (DaAn Gene). Blood samples collected from 32 HCMV-IgM negative individuals (17 male, 15 female, aged 2.7±0.35 yr) at the Medical Examination Centre of Shengjing Hospital were used as controls. According to the HCMV IgG detection results, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). All individuals involved in our study were seronegative for influenza A virus, HBV, HCV and HIV. This study was specifically approved by the Ethical Committee of Shengjing hospital.", "HEK293 cell line was a gift of Dr. 
Fangjie Chen from Department of Medical Genetics, China Medical University. HEK293 cells were cultured in Dulbecco’s Modified Eagle’s Medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 100 μg/ml penicillin, 100 μg/ml streptomycin sulfate and 2 mM L-glutamine. Human embryonic lung fibroblast cells, MRC-5, were acquired from Shanghai Institute for Biological Sciences, Chinese Academy of Sciences (CAS). MRC-5 cells were maintained in Modified Eagle’s Medium (MEM) supplemented with 10% FBS, 100 μg/ml penicillin and 100 μg/ml streptomycin sulfate. All cell cultures were maintained at 37°C in a 5% CO2 incubator.", "HCMV clinical strain H was isolated from a urine sample of a 5-month-old infant hospitalized in Shengjing Hospital of China Medical University. Stock virus was propagated in MRC-5 cells maintained in MEM supplemented with 2% FBS, 100 μg/ml penicillin and 100 μg/ml streptomycin. The supernatant was then harvested, and the aliquots were stored at −80°C before use.", "The luciferase reporter construct pMIR was acquired from Ambion Company. The IL-32 3′UTR sequence was obtained by RT-PCR from RNA harvested from HCMV infected cells, and inserted into the SpeI and HindIII sites of the multiple cloning region (pMIR-IL-32UTR). A point mutation at the corresponding seed region binding site was introduced by using a Site-directed Gene Mutagenesis Kit (Beyotime), changing GUC to GAA. The sequence predicted to express hcmv-miR-UL112-1 was amplified by PCR directly from the HCMV genome. The PCR product was inserted into the BamHI and HindIII sites of pSilencer4.1 (Ambion) to generate the miRNA expression vector pS-miR-UL112-1. All primer sequences used in plasmid construction are listed in Table 1. All constructs were confirmed by DNA sequencing.\n\nPrimers used in plasmid construction and semi-quantitative RT-PCR\nNote: sequences recognized by restriction endonucleases are underlined.", "The concentration of IL-32 in serum samples was detected using a human IL-32 ELISA Kit (Cusabio) according to the manufacturer’s protocols. The absorbance value at a wavelength of 450 nm was measured. The IL-32 concentrations were calculated from the standard curve.", "MRC-5 cells were inoculated with H strain at 3–5 multiplicity of infection (m.o.i.). Infections were carried out under IE, E and L conditions, respectively. For preparation of IE RNA, CHX (Sigma) (100 μl/ml) was added to the culture medium 1 hour before infection and the cells were harvested at 24 hpi. For E RNA, the DNA synthesis inhibitor phosphonoacetic acid (PAA) (Sigma) (100 μl/ml) was added to the medium immediately after infection, and the cells were harvested at 48 hpi. L RNA and uninfected cellular RNA were derived from infected (at 96 hpi) and uninfected cells, respectively. Total RNA was isolated using Trizol reagent (TaKaRa), treated with DNase I and then reverse-transcribed with MLV reverse transcriptase and random primers (TaKaRa). PCR was performed for 24 cycles in 50 μl reactions with the IL-32-specific primer pairs listed in Table 1. mRNA of β-actin was amplified for normalization in all reactions. The primers for IL-32 amplification were designed to detect all known isoforms of human IL-32 based on the sequences from the GenBank database (NM_001012631.1-001012636.1, NM_004221.4 and NM_001012718.1). The PCR products were analyzed by electrophoresis on a 2% agarose gel containing ethidium bromide.", "Total RNA was extracted from HCMV infected cells at different time points using the mirVana miRNA isolation kit (Ambion). 
We measured hcmv-miR-UL112-1 in all samples using TaqMan® miRNA assays (MIMAT0001577 for hcmv-miR-UL112-1, ABI). U6 expression was used to normalize the expression of hcmv-miR-UL112-1. HCMV uninfected MRC-5 cells were used as a negative control. The fold difference for hcmv-miR-UL112-1 expression was calculated using the equation 2^ΔΔCt, where ΔCt = Ct (hcmv-miR-UL112-1) – Ct (U6 control) and ΔΔCt = ΔCt (HCMV infected MRC-5) – ΔCt (MRC-5). The measurements were done in triplicate and the results are presented as the means ± S.D.", "MRC-5 cells were plated in 24-well plates at a density of 4.0×10^5 cells per well and grown to reach about 80% confluence at the time of transfection. 200 ng pMIR-IL-32UTR/pMIR-IL-32MUT plasmid or an empty vector control was co-transfected with 400 ng pS-miR-UL112-1 or pS-neg control into the cells by using Lipofectamine 2000 reagent (Invitrogen) according to the manufacturer’s protocol. 200 ng renilla luciferase plasmid (Promega) was introduced into each co-transfection as an internal control plasmid at the same time. Cells were collected 48 hours post transfection and luciferase activity levels were measured using the Dual luciferase reporter assay system (Promega) according to the manufacturer’s guidelines. All measurements were done in triplicate and signals were normalized for transfection efficiency to the internal renilla control. The results are presented as the means ± S.D.", "Western blot analyses were performed to detect the IL-32 levels in HCMV infected MRC-5 cells and in HEK293 cells overexpressing hcmv-miR-UL112-1, respectively. MRC-5 cells were inoculated with H strain at 3–5 m.o.i. Cells were harvested at different time points post infection (6 h, 24 h, 48 h, 72 h and 96 h). HEK293 cells were transfected with 8 μg pS-miR-UL112-1 or pS-neg control using Lipofectamine 2000 (Invitrogen) and were incubated at 37°C for 48 hours.\nProteins of the cells described above were prepared by suspending cells in lysis buffer, followed by centrifugation. Concentrations of proteins in the supernatant were quantified using a protein assay kit (Beyotime). Protein was separated by 10% acrylamide gel electrophoresis and transferred onto a nitrocellulose membrane. Western blot analysis was performed using an IL-32 antibody (Abcam). Immunoblots were visualized with an ECL detection system. Sample loading was normalized to the quantity of β-actin detected in parallel.", "All experiments were reproducible. The results are presented as the means ± S.D. Student’s t test and the F test were used to determine statistical significance. Differences were considered statistically significant at values of P≤0.05.", "The authors declare that they have no competing interests.", "YJH carried out the semi-quantitative RT-PCR analysis, western blot analysis and statistical analysis. YQ and RH carried out the plasmid construction and luciferase assays. QR, as the corresponding author, designed the study and corrected the manuscript. YPM and YHJ carried out the preparation of virus and cell lines. ZRS carried out the serum sample collection and ELISA detection. All authors have read and approved the final manuscript." ]
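The relative-quantification formula from the TaqMan assay description can be sketched as below. The Ct values are hypothetical, and the conventional Livak form 2^(−ΔΔCt) is used so that a lower Ct (more hcmv-miR-UL112-1) yields a fold increase, which appears to be the quantity the text intends with 2^ΔΔCt; this is an illustrative assumption, not the authors' script.

def relative_quantity(ct_mir, ct_u6, ct_mir_ref, ct_u6_ref):
    """2^(-ΔΔCt): ΔCt = Ct(hcmv-miR-UL112-1) - Ct(U6), referenced to a calibrator sample."""
    d_ct = ct_mir - ct_u6
    d_ct_ref = ct_mir_ref - ct_u6_ref
    return 2 ** -(d_ct - d_ct_ref)

# Hypothetical triplicate-mean Ct values: 96 hpi sample relative to the 24 hpi calibrator.
print(relative_quantity(ct_mir=26.5, ct_u6=20.0, ct_mir_ref=29.8, ct_u6_ref=20.0))  # ≈ 10-fold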
[ null, "results", null, null, null, "discussion", "conclusions", "materials|methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[ "IL-32", "HCMV infection", "hcmv-miR-UL112-1" ]
Background: Interleukin-32 (IL-32) is a newly-discovered pro-inflammatory cytokine, which plays a role in innate and adaptive immune responses [1,2]. It lacks sequence homology to any presently known cytokine families. IL-32 is associated with the induction of inflammatory responses by activating the p38MAPK, NF-kappa B and AP-1 signaling pathways. It has been implicated in inflammatory disorders, mycobacterium tuberculosis infections and inflammatory bowel disease, as well as in some autoimmune diseases, such as rheumatoid arthritis, ulcerative colitis and Crohn’s disease [3-10]. Moreover, it has been reported that IL-32 has pro-inflammatory effects on myeloid cells and promotes the differentiation of osteoclast precursors into multinucleated cells expressing specific osteoclast markers [11,12]. In recent studies, IL-32 has also been found to be regulated during viral infections. Elevated levels of IL-32 were found in sera from patients infected with influenza A virus [13-15], hepatitis B virus (HBV) [16], hepatitis C virus (HCV) [17], human papillomavirus (HPV) [18] and human immunodeficiency virus (HIV) [19-21], suggesting that IL-32 might play an important role in host defense against viral infections Human cytomegalovirus (HCMV) is an ubiquitous β-herpesvirus that infects a broad range of cell types in human hosts, contributing to its complex and varied pathogenesis. HCMV Infection leads to life-long persistence in 50%–90% of the population, which is generally subclinical in healthy individuals [22]. However, it can lead to serious complications in immunocompromised patients, such as transplant recipients or AIDS patients [23,24]. MicroRNAs (miRNAs) are an abundant class of small non-coding RNA molecules that target mRNAs generally within their 3′ untranslated regions (3′UTRs). MiRNAs suppress gene expression mainly through inhibition of translation or, rarely, through degradation of mRNA [25,26]. Clinical isolates of HCMV encode at least 17 miRNAs [27,28]. However, only a few functional targets have been validated experimentally for some HCMV-encoded miRNAs [29-34]. It has been demonstrated that hcmv-miR-UL112-1 targets and reduces the expression of HCMV UL123 (IE1 or IE72), UL114 and the major histocompatibility complex class 1-related chain B (MICB). Moreover, BclAF1 protein, a human cytomegalovirus restriction factor, was reported to be a new target of hcmv-miR-UL112-1 [35]. In addition, multiple cellular targets of hcmv-miR-US25-1, which are associated with cell cycle control, were identified by RNA induced silencing complex immunoprecipitation (RISC-IP), including cyclin E2, BRCC3, EID1, MAPRE2 and CD147. IL-32, which is not present in the confirmed targets, was screened out as a candidate target of hcmv-miR-UL112-1 in our previous study [36]. However, no further experiments to validate these findings have been performed. In the present study, the expression levels of IL-32 were compared among serum samples from patients with active HCMV infection and samples from HCMV-IgM negative individuals. The expression levels of IL-32 and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells were detected at different stages of infection and time points. In addition, functional down-regulation of IL-32 by hcmv-miR-UL112-1 was detected in transfected human embryonic kidney (HEK293) cells, and the effect of hcmv-miR-UL112-1 on IL-32 during HCMV infection was primarily discussed. 
Results: Relatively high IL-32 levels among individuals with active HCMV infection IL-32 protein levels were measured in the sera of 40 patients with active HCMV infections (HCMV IgM positive) and 32 HCMV IgM negative control individuals by enzyme-linked immunosorbent assays (ELISA). According to the results of HCMV IgG detection, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). Mean IL-32 levels in the sera of patients with active HCMV infection were 4.1778±1.1663 ng/ml (shown in Figure 1). Mean IL-32 levels in sera of the previously HCMV infected group and healthy group were 2.4653±0.8287 ng/ml and 2.4480±0.9162 ng/ml, respectively. The data were analyzed by F test. IL-32 levels in the sera of patients with active HCMV infection were significantly higher than those in the HCMV IgM negative control group (F=23.957, P<0.001). No significant difference was observed between the previously HCMV infected group and the healthy group (P=0.963). An illustration showing IL-32 levels in serum samples detected by ELISA. Serum samples were from actively HCMV-infected patients (n=40), HCMV previously infected individuals (n=17) and healthy individuals (n=15), respectively. IL-32 protein levels were measured in the sera of 40 patients with active HCMV infections (HCMV IgM positive) and 32 HCMV IgM negative control individuals by enzyme-linked immunosorbent assays (ELISA). According to the results of HCMV IgG detection, HCMV IgM negative controls were divided into two groups: Previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). Mean IL-32 levels in the sera of patients with active HCMV infection were 4.1778±1.1663 ng/ml (shown in Figure 1). Mean IL-32 levels in sera of the previously HCMV infected group and healthy group were 2.4653±0.8287 ng/ml and 2.4480±0.9162 ng/ml, respectively. The data were analyzed by F test. IL-32 levels in the sera of patients with active HCMV infection were significantly higher than those in the HCMV IgM negative control group (F=23.957, P<0.001). No significant difference was observed between the previously HCMV infected group and the healthy group (P=0.963). An illustration showing IL-32 levels in serum samples detected by ELISA. Serum samples were from actively HCMV-infected patients (n=40), HCMV previously infected individuals (n=17) and healthy individuals (n=15), respectively. Kinetics of the expression of IL-32 mRNA, IL-32 protein and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells Cells were harvested at different infection stages and time points after HCMV infection as described in the Materials and Methods section. We used specific drugs to determine the expression profile of IL-32 at different stages of viral replication. CHX was used to determine the immediate early (IE) stage and PAA was used to determine early (E) stage. As shown in Figure 2A and B, semi-quantitative RT-PCR analysis revealed that IL-32 mRNA expression was significantly activated at IE stage but not at E or late (L) stages post-infection. IL-32 protein expression levels at different time points following HCMV infection were measured by western blot analysis. As shown in Figure 2C and D, IL-32 protein expression reached a peak value at 6 hours post infection (hpi). The relative IL-32 concentration in HCMV infected cells collected at 6 hpi was 10 fold higher than in uninfected cells. All measured values were balanced by using β-actin as an internal control. 
Kinetics of the expression of IL-32 mRNA, IL-32 protein and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells Cells were harvested at different infection stages and time points after HCMV infection as described in the Materials and Methods section. We used specific drugs to determine the expression profile of IL-32 at different stages of viral replication. CHX was used to determine the immediate early (IE) stage and PAA was used to determine early (E) stage. As shown in Figure 2A and B, semi-quantitative RT-PCR analysis revealed that IL-32 mRNA expression was significantly activated at IE stage but not at E or late (L) stages post-infection. IL-32 protein expression levels at different time points following HCMV infection were measured by western blot analysis. As shown in Figure 2C and D, IL-32 protein expression reached a peak value at 6 hours post infection (hpi). The relative IL-32 concentration in HCMV infected cells collected at 6 hpi was 10 fold higher than in uninfected cells. All measured values were balanced by using β-actin as an internal control. No accumulation of either IL-32 mRNA or protein was found following the prolongation of HCMV-infection process. IL-32 mRNA levels in HCMV infected cells decreased in E and L stages to a similar level as in uninfected cells. IL-32 protein levels decreased with subsequent hours post infection. Expression of IL-32 induced by HCMV infection in MRC-5 cells. (A) IL-32 mRNA levels in HCMV infected cells were compared to that in uninfected MRC-5 cells. β-actin was used as an internal control. The universal primers for all IL-32 transcripts were used. U is for uninfected MRC-5; IE is for immediate early stage; E is for early stage; L is for late stage. (B) IL-32 mRNA levels were represented relative to that in IE stage. (C) IL-32 protein levels in HCMV infected MRC-5 cells were detected at different time points. IL-32 densitometer values were normalized by β-actin values. (D) The IL-32 protein levels were represented relative to that in infected cells collected at 6 hpi. (E) The kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi and increased gradually as the HCMV-infection process was prolonged. In addition, the kinetics of hcmv-miR-UL112-1 expression were measured using TaqMan® miRNA assays. Measured values were normalized by using U6 as an internal control. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi. As shown in Figure 2E, a steep increase was observed at 48 hpi. The relative quantity of hcmv-miR-UL112-1 at 48 hpi was 6.5 fold higher than at 24 hpi. The expression of hcmv-miR-UL112-1 increased gradually as the HCMV-infection process was prolonged. The relative quantity of hcmv-miR-UL112-1 in HCMV infected cells at 96 hpi was 10 fold higher than at 24 hpi.
Functional down-regulation of endogenous IL-32 expression by ectopically expressed hcmv-miR-UL112-1 In our previous study, a sequence in the 3′UTR of IL-32 mRNA was identified by hybrid-PCR to be a candidate target site of hcmv-miR-UL112-1. The predicted binding site for hcmv-miR-UL112-1 was 195 nt upstream of the polyA structure of IL-32 mRNA. A schematic representation of hcmv-miR-UL112-1 binding to the IL-32 3′UTR sequence is shown in Figure 3A. Down-regulation of IL-32 expression by hcmv-miR-UL112-1. (A) The diagram shows the predicted sequences of hcmv-miR-UL112-1 binding to IL-32 mRNA. (B) As a candidate target, IL-32 was validated for its ability to inhibit expression of a luciferase reporter construct in the presence of hcmv-miR-UL112-1 (pS-miR-UL112-1). Results are shown as percentage expression of negative control sample (pS-neg) following correction for transfection levels according to the control of renilla luciferase expression. Values are shown as means ± standard deviations for triplicate samples. (C) HEK 293 cells were transfected with pS-miR-UL112-1 or control vector respectively. Cells were collected 48 hpi and were subjected to western blot analysis using the indicated antibodies. IL-32 densitometer values were normalized to that of the β-actin values. (D) The amounts of proteins presented in panel C were quantified by densitometry. Results are shown as percentage expression of the negative control sample (Empty). To further determine whether the IL-32 3′UTR sequence represents a functional target site for hcmv-miR-UL112-1, the IL-32 3′UTR sequence was validated by luciferase reporter assays. As shown in Figure 3B, transfected pMIR-IL-32UTR led to a significant inhibition of luciferase activity by 39.27% in the presence of pS-miR-UL112-1. pS-miR-UL112-1 had no significant inhibitory effects on the luciferase activities of pMIR and pMIR-IL-32MUT. These results demonstrate that the IL-32 3′UTR sequence provided a specific and functional binding site for hcmv-miR-UL112-1. The regulatory effect of hcmv-miR-UL112-1 on IL-32 protein expression was then examined in HEK293 cells by western blot. IL-32 protein level was significantly reduced in cells transfected with hcmv-miR-UL112-1 in comparison to cells transfected with pSilencer negative control (Figure 3C and D). IL-32 densitometer values were normalized to that of β-actin values. The level of reduction of IL-32 protein was approximately 67% as determined by densitometry (Figure 3D). Our observations confirmed that the expression of endogenous IL-32 protein could be specifically inhibited by ectopically expressed hcmv-miR-UL112-1.

Discussion: IL-32 is a proinflammatory cytokine and plays a critical role in inflammatory responses. It induces the expression of IL-1β, IL-6, IL-8 and TNF through the p38MAPK, NF-kappa B and AP-1 signaling pathways. Recent reports have highlighted that IL-32 is regulated during viral infection in humans. It has been reported that the IL-32 expression could be induced during infections of influenza A virus, HBV, HCV, HPV and HIV [13-21]. IL-32 levels, detected by ELISA, were 4.1778±1.1663 ng/ml in sera from actively HCMV-infected patients. IL-32 levels in sera from the previously infected group and healthy group were 2.4653±0.8287 ng/ml and 2.4480±0.9162 ng/ml, respectively. IL-32 levels in sera from actively HCMV-infected patients were significantly higher than those in control groups (F=23.957, P<0.001).
However, no significant differences in IL-32 levels were observed between the HCMV previously infected group and healthy group (P=0.963). These results demonstrate that IL-32 may be a newly described reactive protein corresponding to active HCMV infection. HCMV infection can induce the expression of IL-32 at both the mRNA and protein levels in HCMV-infected MRC-5 cells. The peak value of IL-32 mRNA was detected as early as IE stage. The IL-32 protein expression level reached peak value at 6 hpi. The IL-32 protein concentration in HCMV-infected cells at 6 hpi was 10 fold higher than that in uninfected cells. This result demonstrated that the production of IL-32 can be specifically induced by active HCMV infection, while the mechanism that induced the expression of IL-32 during HCMV infection is still unknown. Is the induction of IL-32 mediated only by viral entry or by the expression of IE proteins? Additional experiments would be necessary to provide these answers, and might include the use of UV-irradiated virus. Accumulation of either IL-32 mRNA or protein was not observed following prolonged HCMV-infection process. IL-32 mRNA levels in HCMV-infected cells decreased rapidly in E and L stage to a similar level to that of uninfected cells. IL-32 protein levels decreased in subsequent hours post infection and could only be detected in a small quantity by western blot at 96 hpi. It is hypothesized that certain pathways might be activated to down-regulate or block the expression of IL-32 during HCMV infection. MiRNAs are the most studied non-coding RNAs in recent years. They regulate gene expression at the post-transcriptional level and act as key regulators in diverse regulatory pathways, including early development, cell differentiation, cell proliferation, metabolism and apoptosis [37-40]. We carried out additional experiments to validate IL-32 mRNA as a target of hcmv-miR-UL112-1. In the presence of pS-miR-UL112-1, 39.27% luciferase activity of pMIR-IL-32UTR was observed to be significantly decreased, and a 67% reduction of endogenous IL-32 protein expression level was measured in HEK293 cells. It was confirmed that hcmv-miR-UL112-1 could specifically repress the expression of IL-32 through the predicted binding site. To examine the role of hcmv-miR-UL112-1 in regulating IL-32 expression during an active HCMV infection, the kinetics of hcmv-miR-UL112-1 expression in HCMV-infected MRC-5 cells were measured using TaqMan® miRNA assays. The expression of hcmv-miR-UL112-1 could be detected at 24 hpi and then increased gradually with prolonged HCMV infection. This result suggests that hcmv-miR-UL112-1 might function and lead to the decrease of IL-32 levels in E stage during active HCMV infection. The decrease of IL-32 mRNA, but not IL-32 protein level, together with the dramatic increase of hcmv-miR-UL112-1 during the E stage, suggests the possibility that hcmv-miR-UL112-1 may function primarily in the degradation of mRNA. Further studies are needed to address this point. Through co-evolution with its host, HCMV has evolved effective immune evasion strategies by encoding many immunomodulatory proteins that modulate the host immune response. It is conceivable that HCMV-encoded miRNAs might also be exploited during immune evasion. Hcmv-miR-UL112-1 is expressed with early kinetics. Some functional targets of hcmv-miR-UL112-1 have been shown experimentally to be involved in the modulation of NK cell activation. Stern-Ginossar et al. 
have identified MICB as a functional target of hcmv-miR-UL112-1 [30]. MICB is a stress-induced ligand of the NK cell activating receptor, NKG2D, which is critical for killing of virus-infected cells by NK cells. Hcmv-miR-UL112-1-mediated down-regulation of MICB perturbed MICB binding with NKG2D and reduced killing of HCMV infected cells by NK cells. In addition, HCMV IE1, which would increase the TNF-α level by activation of the TNF-α promoter, was then identified as a target of hcmv-miR-UL112-1 by Murphy et al. in 2008 [29]. It has been confirmed that IL-32 induces significant amounts of TNF-α in a dose-dependent manner, and that silencing endogenous IL-32 by short hairpin RNA impairs the induction of TNF-α [2]. In this study, we validated that expression of IL-32 was activated by active HCMV infection and functionally down-regulated by ectopically expressed hcmv-miR-UL112-1 in HEK293 cells. It may be deduced from these results that, besides IE1 and MICB, hcmv-miR-UL112-1 might also synergistically achieve the modulation of NK cell activation through the TNF-α pathway by the down-regulation of IL-32, and immune evasion by HCMV.

Conclusions: In summary, IL-32 expression in active HCMV infection was investigated for the first time in our study. Detailed kinetics of the transcription of hcmv-miR-UL112-1, IL-32 mRNA and its protein expression were determined in HCMV infected MRC-5 cells. Furthermore, IL-32 expression was demonstrated to be functionally down-regulated by ectopically expressed hcmv-miR-UL112-1 in HEK293 cells. Follow-up studies will investigate the mechanism of immune evasion by HCMV mediated by hcmv-miR-UL112-1, which has been confirmed to modulate NK cell activation during HCMV infection through post-transcriptional regulation.

Materials and methods: Serum samples. Serum samples were obtained from 40 patients with HCMV infection (22 male, 18 female, aged 2.2 ± 0.42 yr) from Shengjing Hospital of China Medical University. All patients were HCMV-IgM positive and HCMV-IgG negative. The viral loads in urine samples from these patients were between 1.793×10^3 and 4.115×10^7 HCMV DNA copies per milliliter, as detected by routine QF-PCR (DaAn Gene). Blood samples collected from 32 HCMV-IgM negative individuals (17 male, 15 female, aged 2.7±0.35 yr) at the Medical Examination Centre of Shengjing Hospital were used as controls. According to the HCMV IgG detection results, HCMV IgM negative controls were divided into two groups: previously HCMV infected group (IgG positive, n=17) and healthy group (IgG negative, n=15). All individuals involved in our study were seronegative for influenza A virus, HBV, HCV and HIV. This study was specifically approved by the Ethical Committee of Shengjing Hospital.
Cell lines. The HEK293 cell line was a gift of Dr. Fangjie Chen from the Department of Medical Genetics, China Medical University. HEK293 cells were cultured in Dulbecco’s Modified Eagle’s Medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 100 μg/ml penicillin, 100 μg/ml streptomycin sulfate and 2 mM L-glutamine. Human embryonic lung fibroblast cells, MRC-5, were acquired from the Shanghai Institute for Biological Sciences, Chinese Academy of Sciences (CAS). MRC-5 cells were maintained in Modified Eagle’s Medium (MEM) supplemented with 10% FBS, 100 μg/ml penicillin and 100 μg/ml streptomycin sulfate. All cell cultures were maintained at 37°C in a 5% CO2 incubator. Virus. HCMV clinical strain H was isolated from a urine sample of a 5-month-old infant hospitalized in Shengjing Hospital of China Medical University. Stock virus was propagated in MRC-5 maintained in MEM supplemented with 2% FBS, 100 μg/ml penicillin and 100 μg/ml streptomycin. The supernatant was then harvested, and the aliquots were stored at −80°C before use. Plasmid constructs. The luciferase reporter construct pMIR was acquired from Ambion. The IL-32 3′UTR sequence was obtained by RT-PCR from RNA harvested from HCMV infected cells and inserted into the SpeI and HindIII sites of the multiple cloning region (pMIR-IL-32UTR). A point mutation at the corresponding seed region binding site was introduced using a Site-directed Gene Mutagenesis Kit (Beyotime), changing GUC to GAA. The sequence predicted to express hcmv-miR-UL112-1 was amplified by PCR directly from the HCMV genome. The PCR product was inserted into the BamHI and HindIII sites of pSilencer4.1 (Ambion) to generate a miRNA expression vector, pS-miR-UL112-1. All primer sequences used in plasmid construction are listed in Table 1. All constructs were confirmed by DNA sequencing. Primers used in plasmid construction and semi-quantitative RT-PCR. Note: sequences recognized by restriction endonucleases are underlined.
Enzyme-linked immunosorbent assay. The concentration of IL-32 in serum samples was detected using a human IL-32 ELISA Kit (Cusabio) according to the manufacturer's protocol. The absorbance value at a wavelength of 450 nm was measured. The IL-32 concentrations were calculated from the standard curve.
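The conversion from absorbance to concentration via the standard curve is routine ELISA arithmetic. The sketch below is illustrative only: the standard concentrations, absorbances and the simple linear fit are assumptions, not values or methods taken from the kit or the study.

# Illustrative only: interpolate IL-32 concentrations (ng/ml) from an ELISA standard curve.
# Standard concentrations and OD450 readings below are invented placeholders.
import numpy as np

std_conc  = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # ng/ml
std_od450 = np.array([0.05, 0.12, 0.21, 0.40, 0.78, 1.52])  # absorbance at 450 nm

slope, intercept = np.polyfit(std_od450, std_conc, 1)        # simple linear standard curve

def od_to_conc(od450):
    """Convert an OD450 reading to an estimated IL-32 concentration."""
    return slope * od450 + intercept

print(od_to_conc(np.array([0.30, 0.45, 0.82])))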
Semi-quantitative RT-PCR analysis. MRC-5 cells were inoculated with H strain at 3–5 multiplicity of infection (m.o.i.). Infections were carried out under IE, E and L conditions respectively. For preparation of IE RNA, CHX (Sigma) (100 μl/ml) was added to the culture medium 1 hour before infection and the cells were harvested at 24 hpi. For E RNA, the DNA synthesis inhibitor phosphonoacetic acid (PAA) (Sigma) (100 μl/ml) was added to the medium immediately after infection, and the cells were harvested at 48 hpi. L RNA and uninfected cellular RNA were derived from infected (at 96 hpi) and uninfected cells, respectively. Total RNA was isolated using Trizol reagent (TaKaRa), treated with DNase I and then reverse-transcribed with MLV reverse transcriptase and random primers (TaKaRa). PCR was performed for 24 cycles in 50 μl reactions with the IL-32 specific primer pairs listed in Table 1. mRNA of β-actin was amplified for normalization in all reactions. The primers for IL-32 amplification were designed to detect all known isoforms of human IL-32 based on the sequences from the GenBank database (NM_001012631.1-001012636.1, NM_004221.4 and NM_001012718.1). The PCR products were analyzed by electrophoresis on a 2% agarose gel containing ethidium bromide. miRNA extraction and TaqMan assays. Total RNA was extracted from HCMV infected cells at different time points using the mirVana miRNA isolation kit (Ambion). We measured hcmv-miR-UL112-1 in all samples using TaqMan® miRNA assays (MIMAT0001577 for hcmv-miR-UL112-1, ABI). U6 expression was used to normalize the expression of hcmv-miR-UL112-1. HCMV uninfected MRC-5 cells were used as the negative control. Fold difference for hcmv-miR-UL112-1 expression was calculated using the 2^−ΔΔCt method, where ΔCt = Ct (hcmv-miR-UL112-1) – Ct (U6 control) and ΔΔCt = ΔCt (HCMV infected MRC-5) – ΔCt (MRC-5). The measurements were done in triplicate and the results are presented as the means ± S.D.
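For readers unfamiliar with the relative quantification arithmetic, the sketch below spells out the 2^−ΔΔCt calculation described above; the Ct values are invented placeholders, not measurements from this study.

# Illustrative only: 2^-ddCt relative quantification for hcmv-miR-UL112-1 vs the U6 control.
def fold_difference(ct_mir_infected, ct_u6_infected, ct_mir_uninfected, ct_u6_uninfected):
    d_ct_infected   = ct_mir_infected - ct_u6_infected      # dCt = Ct(miR-UL112-1) - Ct(U6)
    d_ct_uninfected = ct_mir_uninfected - ct_u6_uninfected
    dd_ct = d_ct_infected - d_ct_uninfected                  # ddCt = dCt(infected) - dCt(uninfected)
    return 2 ** (-dd_ct)

# Placeholder Ct values, e.g. HCMV infected MRC-5 at 48 hpi vs uninfected MRC-5.
print(fold_difference(24.1, 18.0, 33.5, 18.2))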
Luciferase assay. MRC-5 cells were plated in a 24-well plate at a density of 4.0×10^5 cells per well and grown to about 80% confluence at the time of transfection. 200 ng of pMIR-IL-32UTR/pMIR-IL-32MUT plasmid or an empty vector control was co-transfected with 400 ng of pS-miR-UL112-1 or pS-neg control into the cells using Lipofectamine 2000 reagent (Invitrogen) according to the manufacturer's protocol. 200 ng of renilla luciferase plasmid (Promega) was introduced into each co-transfected well at the same time as an internal control plasmid. Cells were collected 48 hours post transfection and luciferase activity levels were measured using the Dual luciferase reporter assay system (Promega) according to the manufacturer's guidelines. All measurements were done in triplicate and signals were normalized for transfection efficiency to the internal renilla control. The results are presented as the means ± S.D.
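The percentage-of-control values reported for the luciferase assay follow the usual dual-reporter arithmetic: the firefly signal is divided by the renilla signal to correct for transfection efficiency and then expressed relative to the pS-neg control. The numbers in the sketch below are invented for illustration and are not the study's readings.

# Illustrative only: firefly/renilla normalisation expressed as % of the pS-neg control.
import numpy as np

firefly_neg = np.array([9800, 10100, 9900])   # pMIR-IL-32UTR + pS-neg (triplicates)
renilla_neg = np.array([5000, 5200, 4900])
firefly_mir = np.array([6100, 5900, 6000])    # pMIR-IL-32UTR + pS-miR-UL112-1
renilla_mir = np.array([5100, 5000, 5150])

ratio_neg = firefly_neg / renilla_neg          # correct for transfection efficiency
ratio_mir = firefly_mir / renilla_mir
percent_of_control = 100 * ratio_mir.mean() / ratio_neg.mean()
print(round(percent_of_control, 1), "% of pS-neg control")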
Western blot analysis. Western blot analyses were performed to detect IL-32 levels in HCMV infected MRC-5 cells and hcmv-miR-UL112-1-overexpressing HEK293 cells, respectively. MRC-5 cells were inoculated with H strain at 3–5 m.o.i. Cells were harvested at different time points post infection (6 h, 24 h, 48 h, 72 h and 96 h). HEK293 cells were transfected with 8 μg pS-miR-UL112-1 or pS-neg control using Lipofectamine 2000 (Invitrogen) and were incubated at 37°C for 48 hours. Proteins of the cells described above were prepared by suspending cells in lysis buffer, followed by centrifugation. Concentrations of proteins in the supernatant were quantified using a protein assay kit (Beyotime). Protein was separated by 10% acrylamide gel electrophoresis and transferred onto a nitrocellulose membrane. Western blot analysis was performed using an IL-32 antibody (Abcam). Immunoblots were visualized with an ECL detection system. Sample loading was normalized by quantities of β-actin detected in parallel. Statistical analysis. All experiments were reproducible. The results are presented as the means ± S.D. Student's t test and F test were used to determine statistical significance. Differences were considered statistically significant at P≤0.05. Competing interests: The authors declare that they have no competing interests. Authors' contributions: YJH carried out the semi-quantitative RT-PCR analysis, western blot analysis and statistical analysis. YQ and RH carried out the plasmid construction and luciferase assay. QR as the corresponding author designed the study and corrected the manuscript. YPM and YHJ carried out the preparation of virus and cell lines. ZRS carried out the serum sample collection and ELISA detection. All authors have read and approved the final manuscript.
Background: Interleukin-32 (IL-32) is an important factor in innate and adaptive immune responses, which activates the p38MAPK, NF-kappa B and AP-1 signaling pathways. Recent reports have highlighted that IL-32 is regulated during viral infection in humans. Methods: Enzyme-linked immunosorbent assays (ELISA) were carried out to detect IL-32 levels in serum samples. Detailed kinetics of the transcription of IL-32 mRNA and expression of IL-32 protein during human cytomegalovirus (HCMV) infection were determined by semi-quantitative RT-PCR and western blot, respectively. The expression levels of hcmv-miR-UL112-1 were detected using TaqMan® miRNA assays during a time course of 96 hours. The effects of hcmv-miR-UL112-1 on IL-32 expression were demonstrated by luciferase assay and western blot, respectively. Results: Serum levels of IL-32 in HCMV-IgM positive patients (indicating an active HCMV infection) were significantly higher than those in HCMV-IgM negative controls. HCMV infection activated cellular IL-32 transcription mainly in the immediately early (IE) phase and elevated IL-32 protein levels between 6 and 72 hours post infection (hpi) in the human embryonic lung fibroblast cell line, MRC-5. The expression of hcmv-miR-UL112-1 was detected at 24 hpi and increased gradually as the HCMV-infection process was prolonged. In addition, it was demonstrated that hcmv-miR-UL112-1 targets a sequence in the IL-32 3'-UTR. The protein level of IL-32 in HEK293 cells could be functionally down-regulated by transfected hcmv-miR-UL112-1. Conclusions: IL-32 expression was induced by active HCMV infection and could be functionally down-regulated by ectopically expressed hcmv-miR-UL112-1. Our data may indicate a new strategy of immune evasion by HCMV through post-transcriptional regulation.
Background: Interleukin-32 (IL-32) is a newly-discovered pro-inflammatory cytokine, which plays a role in innate and adaptive immune responses [1,2]. It lacks sequence homology to any presently known cytokine families. IL-32 is associated with the induction of inflammatory responses by activating the p38MAPK, NF-kappa B and AP-1 signaling pathways. It has been implicated in inflammatory disorders, mycobacterium tuberculosis infections and inflammatory bowel disease, as well as in some autoimmune diseases, such as rheumatoid arthritis, ulcerative colitis and Crohn’s disease [3-10]. Moreover, it has been reported that IL-32 has pro-inflammatory effects on myeloid cells and promotes the differentiation of osteoclast precursors into multinucleated cells expressing specific osteoclast markers [11,12]. In recent studies, IL-32 has also been found to be regulated during viral infections. Elevated levels of IL-32 were found in sera from patients infected with influenza A virus [13-15], hepatitis B virus (HBV) [16], hepatitis C virus (HCV) [17], human papillomavirus (HPV) [18] and human immunodeficiency virus (HIV) [19-21], suggesting that IL-32 might play an important role in host defense against viral infections Human cytomegalovirus (HCMV) is an ubiquitous β-herpesvirus that infects a broad range of cell types in human hosts, contributing to its complex and varied pathogenesis. HCMV Infection leads to life-long persistence in 50%–90% of the population, which is generally subclinical in healthy individuals [22]. However, it can lead to serious complications in immunocompromised patients, such as transplant recipients or AIDS patients [23,24]. MicroRNAs (miRNAs) are an abundant class of small non-coding RNA molecules that target mRNAs generally within their 3′ untranslated regions (3′UTRs). MiRNAs suppress gene expression mainly through inhibition of translation or, rarely, through degradation of mRNA [25,26]. Clinical isolates of HCMV encode at least 17 miRNAs [27,28]. However, only a few functional targets have been validated experimentally for some HCMV-encoded miRNAs [29-34]. It has been demonstrated that hcmv-miR-UL112-1 targets and reduces the expression of HCMV UL123 (IE1 or IE72), UL114 and the major histocompatibility complex class 1-related chain B (MICB). Moreover, BclAF1 protein, a human cytomegalovirus restriction factor, was reported to be a new target of hcmv-miR-UL112-1 [35]. In addition, multiple cellular targets of hcmv-miR-US25-1, which are associated with cell cycle control, were identified by RNA induced silencing complex immunoprecipitation (RISC-IP), including cyclin E2, BRCC3, EID1, MAPRE2 and CD147. IL-32, which is not present in the confirmed targets, was screened out as a candidate target of hcmv-miR-UL112-1 in our previous study [36]. However, no further experiments to validate these findings have been performed. In the present study, the expression levels of IL-32 were compared among serum samples from patients with active HCMV infection and samples from HCMV-IgM negative individuals. The expression levels of IL-32 and hcmv-miR-UL112-1 in HCMV infected MRC-5 cells were detected at different stages of infection and time points. In addition, functional down-regulation of IL-32 by hcmv-miR-UL112-1 was detected in transfected human embryonic kidney (HEK293) cells, and the effect of hcmv-miR-UL112-1 on IL-32 during HCMV infection was primarily discussed. Conclusions: In summary, IL-32 expression in active HCMV infection was firstly investigated in our study. 
Detailed kinetics of the transcription of hcmv-miR-UL112-1, IL-32 mRNA and its protein expression were determined in HCMV infected MRC-5 cells. Furthermore, IL-32 expression was demonstrated to be functionally down-regulated by ectopically expressed hcmv-miR-UL112-1 in HEK293 cells. Follow up studies will investigate the mechanism of immune evasion by HCMV mediated by hcmv-miR-UL112-1, which has been confirmed to modulate NK cell activation during HCMV infection through post-transcriptional regulation.
10,259
349
[ 673, 225, 561, 516, 180, 137, 72, 170, 44, 237, 148, 166, 186, 38, 10, 78 ]
20
[ "hcmv", "il", "32", "il 32", "cells", "mir", "ul112", "mir ul112", "hcmv mir", "hcmv mir ul112" ]
[ "antibodies il 32", "background interleukin 32", "mrna il 32", "il 32 immune", "interleukin 32" ]
[CONTENT] IL-32 | HCMV infection | hcmv-miR-UL112-1 [SUMMARY]
[CONTENT] Cell Line | Child, Preschool | Cytomegalovirus | Cytomegalovirus Infections | Female | Gene Expression Profiling | Gene Expression Regulation | Humans | Infant | Interleukins | Male | MicroRNAs | RNA, Viral | Serum [SUMMARY]
[CONTENT] antibodies il 32 | background interleukin 32 | mrna il 32 | il 32 immune | interleukin 32 [SUMMARY]
[CONTENT] hcmv | il | 32 | il 32 | cells | mir | ul112 | mir ul112 | hcmv mir | hcmv mir ul112 [SUMMARY]
[CONTENT] hcmv | 32 | il 32 | il | inflammatory | human | mirnas | targets | hcmv mir | mir [SUMMARY]
[CONTENT] hcmv | il | il 32 | 32 | mir | mir ul112 | ul112 | expression | hcmv mir | hcmv mir ul112 [SUMMARY]
[CONTENT] hcmv | expression | hcmv mir | hcmv mir ul112 | 32 expression | il 32 expression | ul112 | mir ul112 | mir | il 32 [SUMMARY]
[CONTENT] hcmv | il | il 32 | 32 | cells | mir | mir ul112 | ul112 | hcmv mir | hcmv mir ul112 [SUMMARY]
[CONTENT] AP-1 ||| IL-32 [SUMMARY]
[CONTENT] IL-32 | HCMV | HCMV | HCMV ||| between 6 and 72 hours | MRC-5 ||| 24 | HCMV ||| ||| IL-32 | HEK293 [SUMMARY]
[CONTENT] HCMV ||| HCMV [SUMMARY]
[CONTENT] AP-1 ||| IL-32 ||| ELISA | IL-32 ||| transcription | IL-32 | HCMV | RT-PCR ||| 96 hours ||| IL-32 ||| IL-32 | HCMV | HCMV | HCMV ||| between 6 and 72 hours | MRC-5 ||| 24 | HCMV ||| ||| IL-32 | HEK293 ||| HCMV ||| HCMV [SUMMARY]
Tools for primary care patient safety: a narrative review.
25346425
Patient safety in primary care is a developing field with an embryonic but evolving evidence base. This narrative review aims to identify tools that can be used by family practitioners as part of a patient safety toolkit to improve the safety of the care and services provided by their practices.
BACKGROUND
Searches were performed in 6 healthcare databases in 2011 using 3 search stems: location (primary care), patient safety synonyms and outcome measure synonyms. Two reviewers analysed the results using numerical and thematic analyses. Extensive grey literature exploration was also conducted.
METHODS
Overall, 114 tools were identified, with 26 accrued from grey literature. Most published literature originated from the USA (41%) and the UK (23%) within the last 10 years. Most of the literature addresses the themes of medication error (55%) followed by safety climate (8%) and adverse event reporting (8%). Minor themes included informatics (4.5%), patient role (3%) and general measures to correct error (5%). The primary/secondary care interface is well described (5%) but few specific tools for primary care exist. Diagnostic error and results handling appear infrequently (<1% of total literature) despite their relative importance. The remainder of literature (11%) related to referrals, Out-Of-Hours (OOH) care, telephone care, organisational issues, mortality and clerical error.
RESULTS
This review identified tools and indicators that are available for use in family practice to measure patient safety, which is crucial to improve safety and design a patient safety toolkit. However, many of the tools have yet to be used in quality improvement strategies and cycles such as plan-do-study-act (PDSA) so there is a dearth of evidence of their utility in improving as opposed to measuring and highlighting safety issues. The lack of focus on diagnostics, systems safety and results handling provide direction and priorities for future research.
CONCLUSIONS
[ "Family Practice", "Humans", "Medication Errors", "Organizational Culture", "Patient Safety", "Primary Health Care", "Risk Management", "Safety Management" ]
4288623
Background
Patient safety has been on the agenda of hospital physicians since the publication of the Institute of Medicine’s 2000 report, ‘To Err is Human’, revealed that more people were dying in the US as a result of medical error than from road traffic accidents [1]. However, most healthcare interactions in the developed world occur in family medicine: 90% of contacts in the England with the National Health Service take place in primary care [2]. In England there are approximately 1 billion community prescriptions dispensed each year [3]. The potential for adverse events is therefore huge but the knowledge base about primary care patient safety is still sparse. A literature review of the nature and frequency of error in primary care suggested that there are between 5–80 safety incidents per 100,000 consultations, which in the UK would translate to between 37–600 incidents per day [4]. Another review estimates that there may be a patient safety incident in approximately 2% of family practice consultations [5]. A 2011 report by the American Medical Association on ambulatory patient safety concluded that the introduction of, and research into, patient safety in the primary care environment have lagged behind that of secondary care [6]. Understanding the epidemiology of hospital errors was crucial in developing hospital based safety interventions and the media’s reporting of this data ensured public support for efforts to improve safety [6]. Some of its authors concluded that there needed to be a similar focus on primary care, because there were ‘virtually no credible studies on how to improve safety’ [7]. Moreover, a report by the Health Foundation in 2013 emphasised the importance of knowing what methods, tools and indicators are currently being used in primary care to measure patient safety [8]. In this paper, patient safety refers to the ‘avoidance, prevention and amelioration of adverse outcomes or injuries stemming from the process of healthcare’ [8]. Staff and systems in primary care environments have the potential to contribute to serious error that can cause both morbidity and mortality; which has been demonstrated in the field of prescribing [9]. Evidence on primary care error comes mainly from the statistics of the medical defence organisations and from small pilot studies [10]. And yet, experts we have consulted in the field were anecdotally aware of a multiplicity of interventions or ‘tools’ from their own and others’ work world-wide, which helped identify grey literature for this study. This paper reports a narrative review of ‘tools’ to improve, measure, and monitor patient safety in the ambulatory settings with a focus on family practice. A narrative review broadly covers a specific topic but does not adhere to strict systematic methods to locate and synthesize articles and enables description and synthesis of qualitative research and categorises studies into more homogenous groups [11]. To the authors’ knowledge no such broad-ranging review has been attempted. The context of this study is worldwide including both the US and the UK and throughout the term primary care is used to address the terms general practice and family practice.
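The translation of the quoted incident rates into incidents per day is simple arithmetic. The sketch below assumes roughly 300 million UK general practice consultations per year, a ballpark figure that is not stated in this review, so the output is illustrative only and broadly reproduces the 37–600 per day range.

# Illustrative only: convert incidents per 100,000 consultations into incidents per day.
# The annual consultation volume is an assumed ballpark, not a figure from this review.
consultations_per_year = 300_000_000
consultations_per_day = consultations_per_year / 365

for rate_per_100k in (5, 80):
    per_day = consultations_per_day * rate_per_100k / 100_000
    print(f"{rate_per_100k} per 100,000 consultations -> about {per_day:.0f} incidents per day")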
Methods
Data sources and searches
Our structured narrative review was planned and conducted according to guidance in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [12], but following a more narrative approach (especially with regard to grey literature). The starting point for determining the search terms used in the review was a three-point definition of our search question and exploration of Medical Subject Heading (MeSH) terms [13]. We used a multi-centre team (including leading UK experts on patient safety) at a strategic planning meeting to comment on and finalise the search terms for the review. References were managed in EndNote. Broad-ranging search terms were used to develop a search strategy with three stems: setting (i.e. primary care, general/family practice, ambulatory care, community care, generalist care), safety synonyms (i.e. error, adverse event, fault, malpractice) and types of tool (i.e. indicator, survey, guideline, scale) (see web Additional file 1: Appendix 1). The aim was to be as inclusive as possible and to address administrative, clinical and patient experience issues. The search was performed on 1/11/2011 on the following databases: Embase, CINAHL, PubMed, Medline (Ovid 1996 onward), Health Management Information Consortium and Web of Science. We did not limit our search by year of publication or to the English language, in order to capture a worldwide perspective on patient safety. However, only abstracts in English were included due to resource constraints for translation. Grey literature was identified from known internet patient safety sources from the US and UK to expand the scope of the review (see web Additional file 1: Appendix 6). In order to fully explore a single tool, many resources often had to be read – for example, the IHI trigger tool is described in a number of web publications and supporting documents. Care was taken, through discussion between the two reviewers and the wider team, not to count tools twice when they appeared in both the published and grey literature.
Study selection
Reviewer one was a GP Academic Clinical Fellow with an interest in pharmacology (RS) and reviewer two was a health services researcher with an interest in family practice (SC). Similar to the strategy used in the AMA’s report [6], we were interested in highly generalisable tools, so research addressing single drugs or conditions in very specific settings was excluded. An inclusive strategy was employed such that neither reviewer could exclude studies the other felt were potentially relevant. Disagreements were resolved by regular discussions between the two reviewers. The inclusion and exclusion criteria were as follows:
Exclusion criteria:
hospital care/setting - unless transferable
opinion pieces/editorials
single drugs/conditions where the focus was felt to be on the specific drug or condition rather than on transferable tools
only about quality of care without an explicit patient safety component
excluded on the basis of journal (for example “Health Care Food & Nutrition focus”)
economic impact of errors (relevant papers were taken out at this stage for other purposes within the project)
both the abstract and main text were not in English
Inclusion criteria:
if unsure always include - for example, ‘good advice’ which might later inform other tools
tools or strategies to improve or analyse safety which are of relevance to primary care
Data quality and extraction
Data were extracted independently by the two reviewers (RS and SC). A dual approach was taken to data extraction from published material, using both a Word document (Web Additional file 1: Appendix 5) and an Excel template. For reporting data from selected papers we used a modified PRISMA [12] checklist, which combined aspects of different methodologies (not just systematic reviews and meta-analyses) into a form suitable for all study types (available on application to the authors). For example, the PRISMA checklist requires a discussion of limitations, which is a requirement highly transferable to all methodologies, but it also requires specific methods-based items such as ‘confidence intervals on meta-analyses’. Using information collected on the modified PRISMA form, a numerical data extraction system was agreed by both reviewers in order to present results from the selected papers; data for analysis were extracted from that Excel document. A pilot of 10 key papers with differing methodologies was undertaken prior to commencing full data extraction, and high levels of agreement were found between the two reviewers. At the end of data extraction, differences in rating on the Excel sheet were discussed and analysed across a series of face-to-face and telephone meetings; it was almost always possible to reach consensus.
Funding
This review is part of a National Institute for Health Research (NIHR), School for Primary Care Research (SPCR: http://www.spcr.nihr.ac.uk/) (UK) project, undertaken with the aim of constructing a Patient Safety Toolkit for English family practice.
null
null
Conclusion
We have identified 114 published and unpublished tools and indicators that can currently be used in primary care to measure patient safety. However, the AMA concluded that there are virtually no credible studies on how to improve safety in primary care [6], and the challenge remains to turn measurement into improvement, as few tools have been used in quality improvement cycles or as part of performance targets for safety in ambulatory care. Having a comprehensive set of tools for tracking and preventing safety events is the first step towards addressing this, and this paper clearly shows where the current toolkit is wanting. The results of this review will enable a better understanding of the epidemiology of ambulatory care safety and help underpin the future development of primary care based safety interventions.
[ "Background", "Data sources and searches", "Study selection", "Data quality and extraction", "Funding", "Results and discussion", "Summary of main findings", "Comparison to existing literature", "Strengths/limitations", "Implications for practice, policy, or future research", "Authors’ information", "" ]
[ "Patient safety has been on the agenda of hospital physicians since the publication of the Institute of Medicine’s 2000 report, ‘To Err is Human’, revealed that more people were dying in the US as a result of medical error than from road traffic accidents\n[1]. However, most healthcare interactions in the developed world occur in family medicine: 90% of contacts in the England with the National Health Service take place in primary care\n[2]. In England there are approximately 1 billion community prescriptions dispensed each year\n[3]. The potential for adverse events is therefore huge but the knowledge base about primary care patient safety is still sparse. A literature review of the nature and frequency of error in primary care suggested that there are between 5–80 safety incidents per 100,000 consultations, which in the UK would translate to between 37–600 incidents per day\n[4]. Another review estimates that there may be a patient safety incident in approximately 2% of family practice consultations\n[5].\nA 2011 report by the American Medical Association on ambulatory patient safety concluded that the introduction of, and research into, patient safety in the primary care environment have lagged behind that of secondary care\n[6]. Understanding the epidemiology of hospital errors was crucial in developing hospital based safety interventions and the media’s reporting of this data ensured public support for efforts to improve safety\n[6]. Some of its authors concluded that there needed to be a similar focus on primary care, because there were ‘virtually no credible studies on how to improve safety’\n[7]. Moreover, a report by the Health Foundation in 2013 emphasised the importance of knowing what methods, tools and indicators are currently being used in primary care to measure patient safety\n[8]. In this paper, patient safety refers to the ‘avoidance, prevention and amelioration of adverse outcomes or injuries stemming from the process of healthcare’\n[8].\nStaff and systems in primary care environments have the potential to contribute to serious error that can cause both morbidity and mortality; which has been demonstrated in the field of prescribing\n[9]. Evidence on primary care error comes mainly from the statistics of the medical defence organisations and from small pilot studies\n[10]. And yet, experts we have consulted in the field were anecdotally aware of a multiplicity of interventions or ‘tools’ from their own and others’ work world-wide, which helped identify grey literature for this study.\nThis paper reports a narrative review of ‘tools’ to improve, measure, and monitor patient safety in the ambulatory settings with a focus on family practice. A narrative review broadly covers a specific topic but does not adhere to strict systematic methods to locate and synthesize articles and enables description and synthesis of qualitative research and categorises studies into more homogenous groups\n[11]. To the authors’ knowledge no such broad-ranging review has been attempted. The context of this study is worldwide including both the US and the UK and throughout the term primary care is used to address the terms general practice and family practice.", "Our structured narrative review was planned and conducted according to guidance in the Preferred Reporting Items for Systematic Reviews and Meta- Analyses (PRISMA) guidelines\n[12] but following a more narrative approach (especially with regard to grey literature). 
The starting point for determining the search terms used in the review was a 3 point definition of our search question and exploration of Medical Subject Heading (MeSH) terms\n[13]. We used a multi-centre team (including leading UK experts on patient safety) at a strategic planning meeting to comment on and finalise the search terms for the review. References were managed in Endnote. Broad ranging search terms were used for developing a search strategy in 3 stems; setting (primary care [i.e. general/family practice, ambulatory care, community care, generalist care], safety synonyms [i.e. error, adverse event, fault, malpractice] and types of tool [i.e. indicator, survey, guideline, scale] (see web Additional file\n1: Appendix 1). The aim was to be as inclusive as possible and address administrative, clinical and patient experience issues. The search was performed on the following databases; Embase, CINAHL, Pubmed, Medline (Ovid 1996 onward), Health Management Information Consortium and Web of Science on the 1/11/2011. We did not limit our search by year of publication or to the English language, in order to capture a world-wide perspective on patient safety. However, only abstracts in English were included due to resource restraints for translation. Grey literature was identified from known internet patient safety sources from the US and UK to expand the scope of the review (see web Additional file:\n1 Appendix 6). In order to fully explore a single tool many resources often had to be read – for example the IHI trigger tool is described in a number of web publications and supporting documents. Care was taken not to count published tools that also appeared in grey literature twice by discussion between the two reviewers and wider team.", "Reviewer one was a GP Academic Clinical Fellow with an interest in pharmacology (RS) and reviewer two was a health services researcher with an interest in family practice (SC). Similarly to the strategy used in the AMA’s report\n[6], we were interested in highly generalisable tools so research addressing single drugs or conditions in very specific settings was excluded. Inclusion and exclusion criteria are listed in below. An inclusive strategy was employed such that neither reviewer could exclude studies the other felt were potentially relevant. Disagreements were resolved by regular discussions between the 2 reviewers. 
The inclusion and exclusion criteria were as follows:\nExclusion criteria:\nhospital care/setting - unless transferable\nopinion pieces/editorials\nsingle drugs/conditions where the focus was felt to be on the specific drug or condition rather than on transferable tools\nonly about quality of care without an explicit patient safety component\nexcluded on the basis of journal (for example “Health Care Food & Nutrition focus”)\neconomic impact of errors (relevant papers were taken out at this stage for other purposes within the project)\nboth the abstract and main text were not in English\nInclusion criteria:\nif unsure always include - for example, ‘good advice’ which might later inform other tools\ntools or strategies to improve or analyse safety which are of relevance to Primary Care.", "Data were extracted independently by the two reviewers (RS and SC). A dual approach was taken to data extraction from published material using both a Word document (Web Additional file 1: Appendix 5) and an Excel template. For reporting data from selected papers we used a modified PRISMA [12] checklist, which combined aspects of different methodologies (not just systematic reviews and meta-analyses) into a form for all study types (available on application to the authors). For example the PRISMA checklist requires a discussion of limitations which is a highly transferable requirement to all methodologies, but it also requires specific methods based items such as ‘confidence intervals on meta-analyses’. Using information collected on the modified PRISMA form, a numerical data extraction system was agreed by both reviewers in order to present results from the selected papers; data for analysis were extracted from that Excel document. A pilot of 10 key papers with differing methodologies was undertaken prior to commencing full data extraction – high levels of agreement were found between the two reviewers. 
At the end of data extraction differences in rating on the Excel sheet were discussed and analysed across a series of face-to-face and telephone meetings, attempts to reach consensus were almost always possible.", "This review is part of a National Institute for Health Research (NIHR), School for Primary Care Research (SPCR: http://www.spcr.nihr.ac.uk/) (UK) project, undertaken with the aim of constructing a Patient Safety Toolkit for English family practice.", "Grey literature results are not included in the following calculations and flow diagrams; results are instead included in the list of tools found in web Additional file\n1: Appendix 3 (where they are clearly marked as being from grey sources). Using the process described in Figure \n1, we selected approximately 10% (n = 1311) of the original search total (n = 13,240) for evaluation of abstracts; titles excluded at this stage were clearly not of relevance e.g. relating to non-healthcare safety topics. Abstracts were then analysed for tools, after excluding papers which were from the correct setting but which did not contain any interventions; around 14% of the abstracts were included for full paper analysis (n = 189).Figure 1\n‘Toolkit’ review stages. Graph 1 –a) illustration of the literature base in primary care patient safety 1987-2011from Pubmed b) Papers from the review divided by the annual Pubmed output for the same year.\n\n‘Toolkit’ review stages. Graph 1 –a) illustration of the literature base in primary care patient safety 1987-2011from Pubmed b) Papers from the review divided by the annual Pubmed output for the same year.\nAs Graph 1a) illustrates, the majority of publications in the review have been published in the last decade. These data could be a product of MeSH term development or consistency in the last 10 years. However, as Graph 1b) shows, the same take-off in 2001 occurs even when presented as a proportion of the total literature published on Pubmed. Analysing the MeSH terms of the 11 papers in the review from prior to 2000 revealed that the terms ‘medical error’ and ‘diagnostic error’ were each used twice and ‘risk management’ was used in 3 papers. Keywords using the term ‘drug’ appeared 13 times, family practice and ambulatory care were commonly used keywords.\nThe majority of the literature focused on prescribing (55%), which excludes IT interventions for prescribing that were attributed to the informatics theme instead. Other prominent areas were adverse events in primary care and safety climate (which comprised 8% of the total published literature each). A number of climate measures had been refined from earlier climate surveys for use specifically in primary care (i.e. SAQ (ambulatory)\n[14]. Minor themes included; informatics (4.5%) patient role (3%) and general measures to correct error (5%). The primary/secondary care interface is well described in the literature (5%) but, as yet, only 1 published interface tool specifically for primary care exists. Diagnostic error and results handling appear infrequently (<1% of total literature) despite their relative importance. The remainder of literature (11%) related to referrals, Out-Of-Hours (OOH) care, telephone care, organisational issues, mortality and clerical error. 
Overall, 114 Tools were identified with 26 accrued from grey literature.\nThe setting of the research uncovered was predominantly family practice (in keeping with our search strategy); the term ‘health system’ was used to describe research such as consensus outputs from multi-disciplinary teams or across the whole healthcare system. Most published literature was US based (41%) followed by UK located studies (23%), with other countries producing no more than 5% of published papers each. The grey literature also reflects the predominance of US and UK sources. A variety of study designs were identified in the review including, for example, consensus techniques (10%) , observational methods (15%) but most were mixed methods studies in patient safety research.\nWe classified the data from the published papers in this review using a taxonomy for primary care patient safety based on previous taxonomies by experts in quality of care and patient safety\n[15–19]. It differs from our data collection form as it was evolved later in the process of our analyses. In our taxonomy (Web Additional file\n1: Appendix 4) there are two principal dimensions of safety: ‘access to safe services’ and ‘effectiveness of safety processes’, which are discussed in terms of the structure of the health care system, processes of safety and health outcomes. This taxonomy is based on previous conceptual work on quality of care\n[15]; in essence, do users get the safe care they need, and is the care safe when they get it?\nIn our review, 88 of the 114 unique tools identified (Web Additional file\n1: Appendix 3) came from the published literature and 26 from grey literature sources – the majority of these being from the websites of known patient safety organisations (see data sources in Methods section). The review identified a wide range of ‘tools’ that cannot be described fully here due to word count constraints. The detailed output from the review will be described across a series of subsequent papers on ambulatory patient safety. However, Table \n1 shows key examples found in each dimension of patient safety: We have presented the most well-known or most often used US or UK tools as illustrative examples. 
The Table is presented in order of weight of literature with most common topics appearing first.Table 1\nTypes of tools found in the review, where possible well-known US examples of the type of tool are given in order to aid understanding\nType of tool (Explanation of tool)Used in the US?Used in the UK?US ExampleData sourceNumber of primary care tools of this type identifiedPrescribing Indicator PacksYesYesBeers criteria\n[20]EHR15 main sets, much overlap -3(criteria for ‘never events’ in prescribing) - other prescribing toolsGRAM reports\n[21], MAI\n[22] (Geriatric Risk Assessment MedGuide™ Medications Appropriateness Index)EHR, staffTrigger Tools-GeneralYesYesIHI Outpatient AdverseEHR5-MedicationsYesNoEvent Trigger Tool\n[23]3-SurgeryYesYesAdverse drug events among older adults in primary care\n[24] Ambulatory surgery\n[25]1\n(Criteria are screened for in a sample of medical records ‘triggering’ more detailed review)\nEvent Reporting Systems (National systems for informing relevant authorities about safety problems with all aspects of healthcare)YesYesASIPS\n[26] (Applied Strategies for Improving Patient Safety)EHR, Staff and patients6Medicines/device Reporting SystemsYesYesMEADERS\n[27]EHR, Staff and Patients4(National systems for informing relevant authorities about safety problems specific to the above)(Medication Error and Adverse Drug Event Reporting System) VAERS\n[28] (Vaccine Adverse Events Reporting System)Safety Climate/Culture Measures (The practice team rate themselves against safety criteria and discuss the results to make changes)YesYesSafety Attitudes Questionnaire\n[14]Staff10Significant Event Analysis Tools (The practice team discuss untoward events, using a standardised structure, in order to learn from them)NoYesUK example - NHS Education for Scotland\n[29]Staff, EHR and patients5General Primary/Secondary Interface Tools (standardised systems for handling patient care at transition in care level – often electronic discharge summaries)YesYes‘Care Transitions Approach’\n[30]EHR, hospital recordsOnly 3 within the direct control of family doctorsMedication Reconciliation Tools (aligning medication histories after secondary care contact)YesNo formal tool usedPartner’s Post Discharge Tool\n[31]EHR, hospital records3PROMs for safety (questionnaire determining the patient perspective of safety in their practice)YesYesSEAPS\n[32] (Seniors Empowerment and Advocacy in Patient Safety)Patients8Other Patient Involvement Measures (variety of tools including literature for patients, computerised systems and medications specific tools)YesYes‘Speak-Up’ from JCAHO\n[33]Patients4IT MeasuresYesYesSEMI-P\n[34]EHR11(not just CDSS but a variety of measures often tackling systems error, many relate to prescribing safety)(Safety Enhancement and Monitoring Instrument that is Patient centred)Diagnostic Tools (Mainly CDSS designed to improve diagnosis)YesNoDxPlain\n[35]EHR3\nAbbreviations:\nCDSS Computer Decision Support Software.\nEHR Electronic Health Record.\nPROM Patient reported outcome measures.\nUK United Kingsdom.\nUS United States.\n\nTypes of tools found in the review, where possible well-known US examples of the type of tool are given in order to aid understanding\n\n\nAbbreviations:\n\nCDSS Computer Decision Support Software.\n\nEHR Electronic Health Record.\n\nPROM Patient reported outcome measures.\n\nUK United Kingsdom.\n\nUS United States.\n Summary of main findings We have demonstrated that there has been an upsurge in publications on primary care patient safety since 2001 and 
that most of the literature comes from the USA and the UK, with the pre-eminent topic being prescribing safety. The list of discrete tools (which includes grey literature) has a much more even spread across the dimensions within our conceptual taxonomy (Web Appendices 3 and 4). Using this taxonomy shows that some areas of patient safety are relatively neglected in the published literature on primary care patient safety tools; for example, diagnostic error. Tools for test results and referrals are also poorly represented; there were 5 descriptive papers in total, one un-validated tool for electronic referrals and one indicator set dealing with referrals from OOH care. No tools for investigations management were found.\nWe have demonstrated that there has been an upsurge in publications on primary care patient safety since 2001 and that most of the literature comes from the USA and the UK, with the pre-eminent topic being prescribing safety. The list of discrete tools (which includes grey literature) has a much more even spread across the dimensions within our conceptual taxonomy (Web Appendices 3 and 4). Using this taxonomy shows that some areas of patient safety are relatively neglected in the published literature on primary care patient safety tools; for example, diagnostic error. Tools for test results and referrals are also poorly represented; there were 5 descriptive papers in total, one un-validated tool for electronic referrals and one indicator set dealing with referrals from OOH care. No tools for investigations management were found.\n Comparison to existing literature To the authors’ knowledge no similar review has been undertaken to look specifically at instruments for measuring patient safety in primary care. The AMA report on ambulatory patient safety\n[6] found that the number of reported interventions in primary care is low but their search strategy did not take a worldwide approach and only focused on interventions that reduce error or harm. We designed a more inclusive search strategy to capture measurement tools and strategies and were therefore able to find a wider body of literature. The focus on measurement tools and strategies reflects the importance of knowing what is being used currently in primary care to measure patient safety\n[8]. Many of the 114 tools found are iterations of tools constructed previously and re-designed for other countries. For example, the UK NHS Institute for Innovation and Improvement Primary Care Trigger Tool\n[36] has much in common with the IHI Outpatient Adverse Event Trigger Tool\n[23].\nOur study resonates with the view that there can be no one single measure of patient safety\n[8]. Rather, a framework for patient safety should include for example; past harm, reliability, ‘sensitivity to operations’, anticipation/preparedness and Integration/learning. Vincent C, 2013\n[8] Measures of past harm are prevalent among the tools we found i.e. – adverse event reporting systems. Some tools clearly straddle boundaries within the definition: Significant Event Analyses (SEA) (a technique commonly used in the UK), for example, straddles past events, future anticipation of similar situations and learning in relation to the significant event. 
Few tools address safety reliability in primary care; practices may set their own standards for audit of patient safety but as yet no formal targets exist for primary care in the UK or US (in direct contrast with hospital mortality data and target dashboards such as HEDIS - http://www.ncqa.org/HEDISQualityMeasurement.aspx). Sensitivity to operations is an umbrella term referring to the information and capacity in clinical systems to monitor safety on an hourly or daily basis, climate measures often address questions to staff in relation to their adaptability to change but there are no other measures of this dynamic in primary care. The work of the defence organisations in advising practices about risks and loop-holes in operating systems comes close to fulfilling this goal and roughly equates to a safety ‘walk-round’\n[37]. The challenge to any ‘toolkit’ is to incorporate prospective measures that prevent and anticipate error. The major elements of the toolkit that address prevention are; trigger tools\n[23–25, 36] (potential rather than actual harm), medicines reconciliation packages\n[30] (prevent harm from changes to prescriptions at the interface of primary and secondary care), safety culture\n[38] and a ‘safe systems’ checklist, which encourages primary care practices to seek out loopholes in their established systems.\nTo the authors’ knowledge no similar review has been undertaken to look specifically at instruments for measuring patient safety in primary care. The AMA report on ambulatory patient safety\n[6] found that the number of reported interventions in primary care is low but their search strategy did not take a worldwide approach and only focused on interventions that reduce error or harm. We designed a more inclusive search strategy to capture measurement tools and strategies and were therefore able to find a wider body of literature. The focus on measurement tools and strategies reflects the importance of knowing what is being used currently in primary care to measure patient safety\n[8]. Many of the 114 tools found are iterations of tools constructed previously and re-designed for other countries. For example, the UK NHS Institute for Innovation and Improvement Primary Care Trigger Tool\n[36] has much in common with the IHI Outpatient Adverse Event Trigger Tool\n[23].\nOur study resonates with the view that there can be no one single measure of patient safety\n[8]. Rather, a framework for patient safety should include for example; past harm, reliability, ‘sensitivity to operations’, anticipation/preparedness and Integration/learning. Vincent C, 2013\n[8] Measures of past harm are prevalent among the tools we found i.e. – adverse event reporting systems. Some tools clearly straddle boundaries within the definition: Significant Event Analyses (SEA) (a technique commonly used in the UK), for example, straddles past events, future anticipation of similar situations and learning in relation to the significant event. Few tools address safety reliability in primary care; practices may set their own standards for audit of patient safety but as yet no formal targets exist for primary care in the UK or US (in direct contrast with hospital mortality data and target dashboards such as HEDIS - http://www.ncqa.org/HEDISQualityMeasurement.aspx). 
Sensitivity to operations is an umbrella term referring to the information and capacity in clinical systems to monitor safety on an hourly or daily basis, climate measures often address questions to staff in relation to their adaptability to change but there are no other measures of this dynamic in primary care. The work of the defence organisations in advising practices about risks and loop-holes in operating systems comes close to fulfilling this goal and roughly equates to a safety ‘walk-round’\n[37]. The challenge to any ‘toolkit’ is to incorporate prospective measures that prevent and anticipate error. The major elements of the toolkit that address prevention are; trigger tools\n[23–25, 36] (potential rather than actual harm), medicines reconciliation packages\n[30] (prevent harm from changes to prescriptions at the interface of primary and secondary care), safety culture\n[38] and a ‘safe systems’ checklist, which encourages primary care practices to seek out loopholes in their established systems.\n Strengths/limitations This review presented challenges due to the broad nature of our question and that there are no criteria for a standardised definition of, or criteria for classifying, a ‘tool’. Tools can be alerts, scoring systems, order sets, dashboards, questionnaires, educational materials, forms or templates to name but a few and, as such, the output of the review is highly heterogeneous. Therefore, we employed pragmatic ways of handling the large amount of data extracted from the review. Our strengths have included using a two reviewer system and a dual extraction process of both numerically coded data and free text summaries of papers, which enabled us to analyse identified instruments in-depth and an extensive exploration of grey literature with a world-wide perspective. Exclusions due to translation costs were minor – only 6 papers out of 280 were excluded on language basis alone. Time-constraints on the project meant we could not use back and forward citation methods systematically due to the sheer number of papers involved in the review.\nThis review presented challenges due to the broad nature of our question and that there are no criteria for a standardised definition of, or criteria for classifying, a ‘tool’. Tools can be alerts, scoring systems, order sets, dashboards, questionnaires, educational materials, forms or templates to name but a few and, as such, the output of the review is highly heterogeneous. Therefore, we employed pragmatic ways of handling the large amount of data extracted from the review. Our strengths have included using a two reviewer system and a dual extraction process of both numerically coded data and free text summaries of papers, which enabled us to analyse identified instruments in-depth and an extensive exploration of grey literature with a world-wide perspective. Exclusions due to translation costs were minor – only 6 papers out of 280 were excluded on language basis alone. Time-constraints on the project meant we could not use back and forward citation methods systematically due to the sheer number of papers involved in the review.\n Implications for practice, policy, or future research The main aim of the Tools identified is to measure or report safety issues. While measurement and baselines are a prerequisite to improvement, few of the Tools include an embedded implementation strategy that would address improvement or a quality cycle to alter strategies and measure for change\n[39]. 
It is difficult to estimate the impact the various measurement tools identified in this review would have in improving patient safety; for instance, prescribing indicators would only seem to measure level of harm at surface value but have been found to change harmful prescribing patterns when combined with educational feedback\n[40]. Moreover, standards and consistency of reporting vary and many studies, for example around culture and climate surveys, do not report reliability, validity, details of their study characteristics and participants etc.\nOthers have advocated the need for outcome measures in patient safety\n[41]. However, measurement systems need to be tested to ensure they measure what is claimed, whether they can reliably tell if deterioration or improvement is occurring and what other (untoward and unintended) consequences could occur\n[8]. This adheres to the wider imperative that measures of quality or safety, and the data collected, adhere to key attributes such as reliability and validity and also address issues such as acceptability, implementation issues and possible unintended consequences\n[42, 43]. The aim of future work will be to test the suitability and acceptability of the proposed measures in the toolkit and to test changes within practices after application of the toolkit; as well as intended and unintended (positive and negative) consequences. Measureable outcomes are only one feature of Safety Management Systems, and as such the toolkit should not rely exclusively on them but also develop other areas such as training, policy, culture and feedback of outcomes data in line with other established models of patient safety\n[9, 43–45]. There is a need also to embrace qualitative methodologies to patient safety such as the Manchester Patient Safety Framework (MaPSaF )\n[46].\nThe main aim of the Tools identified is to measure or report safety issues. While measurement and baselines are a prerequisite to improvement, few of the Tools include an embedded implementation strategy that would address improvement or a quality cycle to alter strategies and measure for change\n[39]. It is difficult to estimate the impact the various measurement tools identified in this review would have in improving patient safety; for instance, prescribing indicators would only seem to measure level of harm at surface value but have been found to change harmful prescribing patterns when combined with educational feedback\n[40]. Moreover, standards and consistency of reporting vary and many studies, for example around culture and climate surveys, do not report reliability, validity, details of their study characteristics and participants etc.\nOthers have advocated the need for outcome measures in patient safety\n[41]. However, measurement systems need to be tested to ensure they measure what is claimed, whether they can reliably tell if deterioration or improvement is occurring and what other (untoward and unintended) consequences could occur\n[8]. This adheres to the wider imperative that measures of quality or safety, and the data collected, adhere to key attributes such as reliability and validity and also address issues such as acceptability, implementation issues and possible unintended consequences\n[42, 43]. The aim of future work will be to test the suitability and acceptability of the proposed measures in the toolkit and to test changes within practices after application of the toolkit; as well as intended and unintended (positive and negative) consequences. 
Measureable outcomes are only one feature of Safety Management Systems, and as such the toolkit should not rely exclusively on them but also develop other areas such as training, policy, culture and feedback of outcomes data in line with other established models of patient safety\n[9, 43–45]. There is a need also to embrace qualitative methodologies to patient safety such as the Manchester Patient Safety Framework (MaPSaF )\n[46].", "We have demonstrated that there has been an upsurge in publications on primary care patient safety since 2001 and that most of the literature comes from the USA and the UK, with the pre-eminent topic being prescribing safety. The list of discrete tools (which includes grey literature) has a much more even spread across the dimensions within our conceptual taxonomy (Web Appendices 3 and 4). Using this taxonomy shows that some areas of patient safety are relatively neglected in the published literature on primary care patient safety tools; for example, diagnostic error. Tools for test results and referrals are also poorly represented; there were 5 descriptive papers in total, one un-validated tool for electronic referrals and one indicator set dealing with referrals from OOH care. No tools for investigations management were found.", "To the authors’ knowledge no similar review has been undertaken to look specifically at instruments for measuring patient safety in primary care. The AMA report on ambulatory patient safety\n[6] found that the number of reported interventions in primary care is low but their search strategy did not take a worldwide approach and only focused on interventions that reduce error or harm. We designed a more inclusive search strategy to capture measurement tools and strategies and were therefore able to find a wider body of literature. The focus on measurement tools and strategies reflects the importance of knowing what is being used currently in primary care to measure patient safety\n[8]. Many of the 114 tools found are iterations of tools constructed previously and re-designed for other countries. For example, the UK NHS Institute for Innovation and Improvement Primary Care Trigger Tool\n[36] has much in common with the IHI Outpatient Adverse Event Trigger Tool\n[23].\nOur study resonates with the view that there can be no one single measure of patient safety\n[8]. Rather, a framework for patient safety should include for example; past harm, reliability, ‘sensitivity to operations’, anticipation/preparedness and Integration/learning. Vincent C, 2013\n[8] Measures of past harm are prevalent among the tools we found i.e. – adverse event reporting systems. Some tools clearly straddle boundaries within the definition: Significant Event Analyses (SEA) (a technique commonly used in the UK), for example, straddles past events, future anticipation of similar situations and learning in relation to the significant event. Few tools address safety reliability in primary care; practices may set their own standards for audit of patient safety but as yet no formal targets exist for primary care in the UK or US (in direct contrast with hospital mortality data and target dashboards such as HEDIS - http://www.ncqa.org/HEDISQualityMeasurement.aspx). 
Sensitivity to operations is an umbrella term referring to the information and capacity in clinical systems to monitor safety on an hourly or daily basis, climate measures often address questions to staff in relation to their adaptability to change but there are no other measures of this dynamic in primary care. The work of the defence organisations in advising practices about risks and loop-holes in operating systems comes close to fulfilling this goal and roughly equates to a safety ‘walk-round’\n[37]. The challenge to any ‘toolkit’ is to incorporate prospective measures that prevent and anticipate error. The major elements of the toolkit that address prevention are; trigger tools\n[23–25, 36] (potential rather than actual harm), medicines reconciliation packages\n[30] (prevent harm from changes to prescriptions at the interface of primary and secondary care), safety culture\n[38] and a ‘safe systems’ checklist, which encourages primary care practices to seek out loopholes in their established systems.", "This review presented challenges due to the broad nature of our question and that there are no criteria for a standardised definition of, or criteria for classifying, a ‘tool’. Tools can be alerts, scoring systems, order sets, dashboards, questionnaires, educational materials, forms or templates to name but a few and, as such, the output of the review is highly heterogeneous. Therefore, we employed pragmatic ways of handling the large amount of data extracted from the review. Our strengths have included using a two reviewer system and a dual extraction process of both numerically coded data and free text summaries of papers, which enabled us to analyse identified instruments in-depth and an extensive exploration of grey literature with a world-wide perspective. Exclusions due to translation costs were minor – only 6 papers out of 280 were excluded on language basis alone. Time-constraints on the project meant we could not use back and forward citation methods systematically due to the sheer number of papers involved in the review.", "The main aim of the Tools identified is to measure or report safety issues. While measurement and baselines are a prerequisite to improvement, few of the Tools include an embedded implementation strategy that would address improvement or a quality cycle to alter strategies and measure for change\n[39]. It is difficult to estimate the impact the various measurement tools identified in this review would have in improving patient safety; for instance, prescribing indicators would only seem to measure level of harm at surface value but have been found to change harmful prescribing patterns when combined with educational feedback\n[40]. Moreover, standards and consistency of reporting vary and many studies, for example around culture and climate surveys, do not report reliability, validity, details of their study characteristics and participants etc.\nOthers have advocated the need for outcome measures in patient safety\n[41]. However, measurement systems need to be tested to ensure they measure what is claimed, whether they can reliably tell if deterioration or improvement is occurring and what other (untoward and unintended) consequences could occur\n[8]. This adheres to the wider imperative that measures of quality or safety, and the data collected, adhere to key attributes such as reliability and validity and also address issues such as acceptability, implementation issues and possible unintended consequences\n[42, 43]. 
The aim of future work will be to test the suitability and acceptability of the proposed measures in the toolkit and to test changes within practices after application of the toolkit, as well as intended and unintended (positive and negative) consequences. Measurable outcomes are only one feature of Safety Management Systems, and as such the toolkit should not rely exclusively on them but also develop other areas such as training, policy, culture and feedback of outcomes data in line with other established models of patient safety\n[9, 43–45]. There is also a need to embrace qualitative methodologies in patient safety such as the Manchester Patient Safety Framework (MaPSaF)\n[46].", "Dr Rachel Spencer is an academic General Practitioner at the University of Nottingham and Professor Stephen Campbell is a health services researcher at the University of Manchester.", "Additional file 1:\nTools for Primary Care Patient Safety; a Systematic Review.\n(DOCX 65 KB)" ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Data sources and searches", "Study selection", "Data quality and extraction", "Funding", "Results and discussion", "Summary of main findings", "Comparison to existing literature", "Strengths/limitations", "Implications for practice, policy, or future research", "Conclusion", "Authors’ information", "Electronic supplementary material", "" ]
[ "Patient safety has been on the agenda of hospital physicians since the publication of the Institute of Medicine’s 2000 report, ‘To Err is Human’, revealed that more people were dying in the US as a result of medical error than from road traffic accidents\n[1]. However, most healthcare interactions in the developed world occur in family medicine: 90% of contacts in the England with the National Health Service take place in primary care\n[2]. In England there are approximately 1 billion community prescriptions dispensed each year\n[3]. The potential for adverse events is therefore huge but the knowledge base about primary care patient safety is still sparse. A literature review of the nature and frequency of error in primary care suggested that there are between 5–80 safety incidents per 100,000 consultations, which in the UK would translate to between 37–600 incidents per day\n[4]. Another review estimates that there may be a patient safety incident in approximately 2% of family practice consultations\n[5].\nA 2011 report by the American Medical Association on ambulatory patient safety concluded that the introduction of, and research into, patient safety in the primary care environment have lagged behind that of secondary care\n[6]. Understanding the epidemiology of hospital errors was crucial in developing hospital based safety interventions and the media’s reporting of this data ensured public support for efforts to improve safety\n[6]. Some of its authors concluded that there needed to be a similar focus on primary care, because there were ‘virtually no credible studies on how to improve safety’\n[7]. Moreover, a report by the Health Foundation in 2013 emphasised the importance of knowing what methods, tools and indicators are currently being used in primary care to measure patient safety\n[8]. In this paper, patient safety refers to the ‘avoidance, prevention and amelioration of adverse outcomes or injuries stemming from the process of healthcare’\n[8].\nStaff and systems in primary care environments have the potential to contribute to serious error that can cause both morbidity and mortality; which has been demonstrated in the field of prescribing\n[9]. Evidence on primary care error comes mainly from the statistics of the medical defence organisations and from small pilot studies\n[10]. And yet, experts we have consulted in the field were anecdotally aware of a multiplicity of interventions or ‘tools’ from their own and others’ work world-wide, which helped identify grey literature for this study.\nThis paper reports a narrative review of ‘tools’ to improve, measure, and monitor patient safety in the ambulatory settings with a focus on family practice. A narrative review broadly covers a specific topic but does not adhere to strict systematic methods to locate and synthesize articles and enables description and synthesis of qualitative research and categorises studies into more homogenous groups\n[11]. To the authors’ knowledge no such broad-ranging review has been attempted. The context of this study is worldwide including both the US and the UK and throughout the term primary care is used to address the terms general practice and family practice.", " Data sources and searches Our structured narrative review was planned and conducted according to guidance in the Preferred Reporting Items for Systematic Reviews and Meta- Analyses (PRISMA) guidelines\n[12] but following a more narrative approach (especially with regard to grey literature). 
The starting point for determining the search terms used in the review was a 3-point definition of our search question and exploration of Medical Subject Heading (MeSH) terms\n[13]. We used a multi-centre team (including leading UK experts on patient safety) at a strategic planning meeting to comment on and finalise the search terms for the review. References were managed in Endnote. Broad-ranging search terms were used to develop a search strategy in 3 stems: setting (primary care, i.e. general/family practice, ambulatory care, community care, generalist care), safety synonyms (i.e. error, adverse event, fault, malpractice) and types of tool (i.e. indicator, survey, guideline, scale) (see web Additional file\n1: Appendix 1). The aim was to be as inclusive as possible and address administrative, clinical and patient experience issues. The search was performed on the following databases: Embase, CINAHL, Pubmed, Medline (Ovid 1996 onward), Health Management Information Consortium and Web of Science on 1/11/2011. We did not limit our search by year of publication or to the English language, in order to capture a world-wide perspective on patient safety. However, only abstracts in English were included due to resource constraints for translation. Grey literature was identified from known internet patient safety sources from the US and UK to expand the scope of the review (see web Additional file\n1: Appendix 6). 
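As a purely illustrative sketch (not the authors’ actual Appendix 1 strategy), the 3-stem structure described above could be combined into a single Boolean query along the following lines; the term lists are abbreviated assumptions, and the published search will have used fuller synonym lists, truncation and database-specific field tags.

```python
# Illustrative sketch only: combines the three stems described in the text
# (setting AND safety synonym AND tool type) into one Boolean query string.
# The term lists below are abbreviated examples, not the authors' full Appendix 1 strategy.

setting = ["primary care", "general practice", "family practice",
           "ambulatory care", "community care"]
safety = ["patient safety", "error", "adverse event", "malpractice"]
tool_type = ["indicator", "survey", "guideline", "scale", "trigger tool"]

def stem(terms):
    """OR together the terms of one stem, quoting multi-word phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

# The three stems are then ANDed together, as in the 3-stem strategy described above.
query = " AND ".join(stem(s) for s in (setting, safety, tool_type))
print(query)
# -> ("primary care" OR "general practice" OR ...) AND ("patient safety" OR ...) AND (indicator OR ...)
```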
In order to fully explore a single tool many resources often had to be read – for example the IHI trigger tool is described in a number of web publications and supporting documents. Care was taken, through discussion between the two reviewers and the wider team, not to double-count published tools that also appeared in grey literature.\n Study selection Reviewer one was a GP Academic Clinical Fellow with an interest in pharmacology (RS) and reviewer two was a health services researcher with an interest in family practice (SC). Similar to the strategy used in the AMA’s report\n[6], we were interested in highly generalisable tools, so research addressing single drugs or conditions in very specific settings was excluded. Inclusion and exclusion criteria are listed below. An inclusive strategy was employed such that neither reviewer could exclude studies the other felt were potentially relevant. Disagreements were resolved by regular discussions between the 2 reviewers. The inclusion and exclusion criteria were as follows:\nExclusion criteria:\nhospital care/setting - unless transferable\nopinion pieces/editorials\nsingle drugs/conditions where the focus was felt to be on the specific drug or condition rather than on transferable tools\nonly about quality of care without explicit patient safety component\nexclude on basis of journal (for example “Health Care Food & Nutrition focus”)\neconomic impact of errors (relevant papers were taken out at this stage for other purposes within the project)\nboth the abstract and main text were not in English\nInclusion criteria:\nif unsure always include - for example, ‘good advice’ which might later inform other tools\ntools or strategies to improve or analyse safety which are of relevance to primary care.\n Data quality and extraction Data were extracted independently by the two reviewers (RS and SC). A dual approach was taken to data extraction from published material using both a Word document (Web Additional file\n1: Appendix 5) and an Excel template. For reporting data from selected papers we used a modified PRISMA\n[12] checklist, which combined aspects of different methodologies (not just systematic reviews and meta-analyses) into a form for all study types (available on application to the authors). For example, the PRISMA checklist requires a discussion of limitations, which is a highly transferable requirement for all methodologies, but it also requires specific methods-based items such as ‘confidence intervals on meta-analyses’. Using information collected on the modified PRISMA form, a numerical data extraction system was agreed by both reviewers in order to present results from the selected papers; data for analysis were extracted from that Excel document. 
A pilot of 10 key papers with differing methodologies was undertaken prior to commencing full data extraction – high levels of agreement were found between the two reviewers. At the end of data extraction, differences in rating on the Excel sheet were discussed and analysed across a series of face-to-face and telephone meetings, and consensus was almost always reached.\n Funding This review is part of a National Institute for Health Research (NIHR), School for Primary Care Research (SPCR: http://www.spcr.nihr.ac.uk/) (UK) project, undertaken with the aim of constructing a Patient Safety Toolkit for English family practice.", "Our structured narrative review was planned and conducted according to guidance in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines\n[12] but following a more narrative approach (especially with regard to grey literature). The starting point for determining the search terms used in the review was a 3-point definition of our search question and exploration of Medical Subject Heading (MeSH) terms\n[13]. We used a multi-centre team (including leading UK experts on patient safety) at a strategic planning meeting to comment on and finalise the search terms for the review. References were managed in Endnote. Broad-ranging search terms were used to develop a search strategy in 3 stems: setting (primary care, i.e. general/family practice, ambulatory care, community care, generalist care), safety synonyms (i.e. error, adverse event, fault, malpractice) and types of tool (i.e. indicator, survey, guideline, scale) (see web Additional file\n1: Appendix 1). The aim was to be as inclusive as possible and address administrative, clinical and patient experience issues. The search was performed on the following databases: Embase, CINAHL, Pubmed, Medline (Ovid 1996 onward), Health Management Information Consortium and Web of Science on 1/11/2011. 
We did not limit our search by year of publication or to the English language, in order to capture a world-wide perspective on patient safety. However, only abstracts in English were included due to resource constraints for translation. Grey literature was identified from known internet patient safety sources from the US and UK to expand the scope of the review (see web Additional file\n1: Appendix 6). In order to fully explore a single tool many resources often had to be read – for example the IHI trigger tool is described in a number of web publications and supporting documents. Care was taken, through discussion between the two reviewers and the wider team, not to double-count published tools that also appeared in grey literature.", "Reviewer one was a GP Academic Clinical Fellow with an interest in pharmacology (RS) and reviewer two was a health services researcher with an interest in family practice (SC). Similar to the strategy used in the AMA’s report\n[6], we were interested in highly generalisable tools, so research addressing single drugs or conditions in very specific settings was excluded. Inclusion and exclusion criteria are listed below. An inclusive strategy was employed such that neither reviewer could exclude studies the other felt were potentially relevant. Disagreements were resolved by regular discussions between the 2 reviewers. The inclusion and exclusion criteria were as follows:\nExclusion criteria:\nhospital care/setting - unless transferable\nopinion pieces/editorials\nsingle drugs/conditions where the focus was felt to be on the specific drug or condition rather than on transferable tools\nonly about quality of care without explicit patient safety component\nexclude on basis of journal (for example “Health Care Food & Nutrition focus”)\neconomic impact of errors (relevant papers were taken out at this stage for other purposes within the project)\nboth the abstract and main text were not in English\nInclusion criteria:\nif unsure always include - for example, ‘good advice’ which might later inform other tools\ntools or strategies to improve or analyse safety which are of relevance to primary care.", "Data were 
extracted independently by the two reviewers (RS and SC). A dual approach was taken to data extraction from published material using both a Word document (Web Additional file:\n1 Appendix 5) and an Excel template. For reporting data from selected papers we used a modified PRISMA\n[12] checklist, which combined aspects of different methodologies (not just systematic reviews and meta-analyses) into a form for all study types (available on application to the authors). For example the PRISMA checklist requires a discussion of limitations which is a highly transferable requirement to all methodologies, but it also requires specific methods based items such as ‘confidence intervals on meta-analyses’. Using information collected on the modified PRISMA form, a numerical data extraction system was agreed by both reviewers in order to present results from the selected papers data for analysis were extracted from that Excel document. A pilot of 10 key papers with differing methodologies was undertaken prior to commencing full data extraction – high levels of agreement were found between the two reviewers. At the end of data extraction differences in rating on the Excel sheet were discussed and analysed across a series of face-to-face and telephone meetings, attempts to reach consensus were almost always possible.", "This review is part of a National Institute for Health Research (NIHR), School for Primary Care Research (SPCR: http://www.spcr.nihr.ac.uk/) (UK) project, undertaken with the aim of constructing a Patient Safety Toolkit for English family practice.", "Grey literature results are not included in the following calculations and flow diagrams; results are instead included in the list of tools found in web Additional file\n1: Appendix 3 (where they are clearly marked as being from grey sources). Using the process described in Figure \n1, we selected approximately 10% (n = 1311) of the original search total (n = 13,240) for evaluation of abstracts; titles excluded at this stage were clearly not of relevance e.g. relating to non-healthcare safety topics. Abstracts were then analysed for tools, after excluding papers which were from the correct setting but which did not contain any interventions; around 14% of the abstracts were included for full paper analysis (n = 189).Figure 1\n‘Toolkit’ review stages. Graph 1 –a) illustration of the literature base in primary care patient safety 1987-2011from Pubmed b) Papers from the review divided by the annual Pubmed output for the same year.\n\n‘Toolkit’ review stages. Graph 1 –a) illustration of the literature base in primary care patient safety 1987-2011from Pubmed b) Papers from the review divided by the annual Pubmed output for the same year.\nAs Graph 1a) illustrates, the majority of publications in the review have been published in the last decade. These data could be a product of MeSH term development or consistency in the last 10 years. However, as Graph 1b) shows, the same take-off in 2001 occurs even when presented as a proportion of the total literature published on Pubmed. Analysing the MeSH terms of the 11 papers in the review from prior to 2000 revealed that the terms ‘medical error’ and ‘diagnostic error’ were each used twice and ‘risk management’ was used in 3 papers. 
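As a quick arithmetic check of the screening proportions reported above (a sketch using only the n values quoted in the text, not recomputed from source data):

```python
# Screening funnel figures as quoted in the text (values as given, purely illustrative).
total_records = 13240      # titles retrieved by the database search
abstracts_screened = 1311  # "approximately 10%" selected for abstract evaluation
full_papers = 189          # "around 14%" of abstracts taken to full-paper analysis

print(f"Abstract stage: {abstracts_screened / total_records:.1%} of search results")  # -> 9.9%
print(f"Full-paper stage: {full_papers / abstracts_screened:.1%} of abstracts")       # -> 14.4%
```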
Keywords using the term ‘drug’ appeared 13 times, family practice and ambulatory care were commonly used keywords.\nThe majority of the literature focused on prescribing (55%), which excludes IT interventions for prescribing that were attributed to the informatics theme instead. Other prominent areas were adverse events in primary care and safety climate (which comprised 8% of the total published literature each). A number of climate measures had been refined from earlier climate surveys for use specifically in primary care (i.e. SAQ (ambulatory)\n[14]. Minor themes included; informatics (4.5%) patient role (3%) and general measures to correct error (5%). The primary/secondary care interface is well described in the literature (5%) but, as yet, only 1 published interface tool specifically for primary care exists. Diagnostic error and results handling appear infrequently (<1% of total literature) despite their relative importance. The remainder of literature (11%) related to referrals, Out-Of-Hours (OOH) care, telephone care, organisational issues, mortality and clerical error. Overall, 114 Tools were identified with 26 accrued from grey literature.\nThe setting of the research uncovered was predominantly family practice (in keeping with our search strategy); the term ‘health system’ was used to describe research such as consensus outputs from multi-disciplinary teams or across the whole healthcare system. Most published literature was US based (41%) followed by UK located studies (23%), with other countries producing no more than 5% of published papers each. The grey literature also reflects the predominance of US and UK sources. A variety of study designs were identified in the review including, for example, consensus techniques (10%) , observational methods (15%) but most were mixed methods studies in patient safety research.\nWe classified the data from the published papers in this review using a taxonomy for primary care patient safety based on previous taxonomies by experts in quality of care and patient safety\n[15–19]. It differs from our data collection form as it was evolved later in the process of our analyses. In our taxonomy (Web Additional file\n1: Appendix 4) there are two principal dimensions of safety: ‘access to safe services’ and ‘effectiveness of safety processes’, which are discussed in terms of the structure of the health care system, processes of safety and health outcomes. This taxonomy is based on previous conceptual work on quality of care\n[15]; in essence, do users get the safe care they need, and is the care safe when they get it?\nIn our review, 88 of the 114 unique tools identified (Web Additional file\n1: Appendix 3) came from the published literature and 26 from grey literature sources – the majority of these being from the websites of known patient safety organisations (see data sources in Methods section). The review identified a wide range of ‘tools’ that cannot be described fully here due to word count constraints. The detailed output from the review will be described across a series of subsequent papers on ambulatory patient safety. However, Table \n1 shows key examples found in each dimension of patient safety: We have presented the most well-known or most often used US or UK tools as illustrative examples. 
The Table is presented in order of weight of literature, with the most common topics appearing first.\nTable 1. Types of tools found in the review; where possible, well-known US examples of each type of tool are given in order to aid understanding.\nType of tool (explanation of tool) | Used in the US? | Used in the UK? | US example | Data source | Number of primary care tools of this type identified\nPrescribing Indicator Packs (criteria for ‘never events’ in prescribing) | Yes | Yes | Beers criteria [20] | EHR | 15 main sets, much overlap\n- other prescribing tools |  |  | GRAM reports [21], MAI [22] (Geriatric Risk Assessment MedGuide™; Medications Appropriateness Index) | EHR, staff | 3\nTrigger Tools - General (criteria are screened for in a sample of medical records, ‘triggering’ more detailed review) | Yes | Yes | IHI Outpatient Adverse Event Trigger Tool [23] | EHR | 5\n- Medications | Yes | No | Adverse drug events among older adults in primary care [24] |  | 3\n- Surgery | Yes | Yes | Ambulatory surgery [25] |  | 1\nEvent Reporting Systems (national systems for informing relevant authorities about safety problems with all aspects of healthcare) | Yes | Yes | ASIPS [26] (Applied Strategies for Improving Patient Safety) | EHR, staff and patients | 6\nMedicines/device Reporting Systems (national systems for informing relevant authorities about safety problems specific to the above) | Yes | Yes | MEADERS [27] (Medication Error and Adverse Drug Event Reporting System), VAERS [28] (Vaccine Adverse Events Reporting System) | EHR, staff and patients | 4\nSafety Climate/Culture Measures (the practice team rate themselves against safety criteria and discuss the results to make changes) | Yes | Yes | Safety Attitudes Questionnaire [14] | Staff | 10\nSignificant Event Analysis Tools (the practice team discuss untoward events, using a standardised structure, in order to learn from them) | No | Yes | UK example - NHS Education for Scotland [29] | Staff, EHR and patients | 5\nGeneral Primary/Secondary Interface Tools (standardised systems for handling patient care at transitions in care level - often electronic discharge summaries) | Yes | Yes | ‘Care Transitions Approach’ [30] | EHR, hospital records | Only 3 within the direct control of family doctors\nMedication Reconciliation Tools (aligning medication histories after secondary care contact) | Yes | No formal tool used | Partner’s Post Discharge Tool [31] | EHR, hospital records | 3\nPROMs for safety (questionnaire determining the patient perspective of safety in their practice) | Yes | Yes | SEAPS [32] (Seniors Empowerment and Advocacy in Patient Safety) | Patients | 8\nOther Patient Involvement Measures (variety of tools including literature for patients, computerised systems and medication-specific tools) | Yes | Yes | ‘Speak-Up’ from JCAHO [33] | Patients | 4\nIT Measures (not just CDSS but a variety of measures often tackling systems error; many relate to prescribing safety) | Yes | Yes | SEMI-P [34] (Safety Enhancement and Monitoring Instrument that is Patient centred) | EHR | 11\nDiagnostic Tools (mainly CDSS designed to improve diagnosis) | Yes | No | DxPlain [35] | EHR | 3\nAbbreviations: CDSS, Computer Decision Support Software; EHR, Electronic Health Record; PROM, Patient reported outcome measure; UK, United Kingdom; US, United States.\n Summary of main findings We have demonstrated that there has been an upsurge in publications on primary care patient safety since 2001 and 
that most of the literature comes from the USA and the UK, with the pre-eminent topic being prescribing safety. The list of discrete tools (which includes grey literature) has a much more even spread across the dimensions within our conceptual taxonomy (Web Appendices 3 and 4). Using this taxonomy shows that some areas of patient safety are relatively neglected in the published literature on primary care patient safety tools; for example, diagnostic error. Tools for test results and referrals are also poorly represented; there were 5 descriptive papers in total, one un-validated tool for electronic referrals and one indicator set dealing with referrals from OOH care. No tools for investigations management were found.\nWe have demonstrated that there has been an upsurge in publications on primary care patient safety since 2001 and that most of the literature comes from the USA and the UK, with the pre-eminent topic being prescribing safety. The list of discrete tools (which includes grey literature) has a much more even spread across the dimensions within our conceptual taxonomy (Web Appendices 3 and 4). Using this taxonomy shows that some areas of patient safety are relatively neglected in the published literature on primary care patient safety tools; for example, diagnostic error. Tools for test results and referrals are also poorly represented; there were 5 descriptive papers in total, one un-validated tool for electronic referrals and one indicator set dealing with referrals from OOH care. No tools for investigations management were found.\n Comparison to existing literature To the authors’ knowledge no similar review has been undertaken to look specifically at instruments for measuring patient safety in primary care. The AMA report on ambulatory patient safety\n[6] found that the number of reported interventions in primary care is low but their search strategy did not take a worldwide approach and only focused on interventions that reduce error or harm. We designed a more inclusive search strategy to capture measurement tools and strategies and were therefore able to find a wider body of literature. The focus on measurement tools and strategies reflects the importance of knowing what is being used currently in primary care to measure patient safety\n[8]. Many of the 114 tools found are iterations of tools constructed previously and re-designed for other countries. For example, the UK NHS Institute for Innovation and Improvement Primary Care Trigger Tool\n[36] has much in common with the IHI Outpatient Adverse Event Trigger Tool\n[23].\nOur study resonates with the view that there can be no one single measure of patient safety\n[8]. Rather, a framework for patient safety should include for example; past harm, reliability, ‘sensitivity to operations’, anticipation/preparedness and Integration/learning. Vincent C, 2013\n[8] Measures of past harm are prevalent among the tools we found i.e. – adverse event reporting systems. Some tools clearly straddle boundaries within the definition: Significant Event Analyses (SEA) (a technique commonly used in the UK), for example, straddles past events, future anticipation of similar situations and learning in relation to the significant event. 
Few tools address safety reliability in primary care; practices may set their own standards for audit of patient safety but as yet no formal targets exist for primary care in the UK or US (in direct contrast with hospital mortality data and target dashboards such as HEDIS - http://www.ncqa.org/HEDISQualityMeasurement.aspx). Sensitivity to operations is an umbrella term referring to the information and capacity in clinical systems to monitor safety on an hourly or daily basis, climate measures often address questions to staff in relation to their adaptability to change but there are no other measures of this dynamic in primary care. The work of the defence organisations in advising practices about risks and loop-holes in operating systems comes close to fulfilling this goal and roughly equates to a safety ‘walk-round’\n[37]. The challenge to any ‘toolkit’ is to incorporate prospective measures that prevent and anticipate error. The major elements of the toolkit that address prevention are; trigger tools\n[23–25, 36] (potential rather than actual harm), medicines reconciliation packages\n[30] (prevent harm from changes to prescriptions at the interface of primary and secondary care), safety culture\n[38] and a ‘safe systems’ checklist, which encourages primary care practices to seek out loopholes in their established systems.\nTo the authors’ knowledge no similar review has been undertaken to look specifically at instruments for measuring patient safety in primary care. The AMA report on ambulatory patient safety\n[6] found that the number of reported interventions in primary care is low but their search strategy did not take a worldwide approach and only focused on interventions that reduce error or harm. We designed a more inclusive search strategy to capture measurement tools and strategies and were therefore able to find a wider body of literature. The focus on measurement tools and strategies reflects the importance of knowing what is being used currently in primary care to measure patient safety\n[8]. Many of the 114 tools found are iterations of tools constructed previously and re-designed for other countries. For example, the UK NHS Institute for Innovation and Improvement Primary Care Trigger Tool\n[36] has much in common with the IHI Outpatient Adverse Event Trigger Tool\n[23].\nOur study resonates with the view that there can be no one single measure of patient safety\n[8]. Rather, a framework for patient safety should include for example; past harm, reliability, ‘sensitivity to operations’, anticipation/preparedness and Integration/learning. Vincent C, 2013\n[8] Measures of past harm are prevalent among the tools we found i.e. – adverse event reporting systems. Some tools clearly straddle boundaries within the definition: Significant Event Analyses (SEA) (a technique commonly used in the UK), for example, straddles past events, future anticipation of similar situations and learning in relation to the significant event. Few tools address safety reliability in primary care; practices may set their own standards for audit of patient safety but as yet no formal targets exist for primary care in the UK or US (in direct contrast with hospital mortality data and target dashboards such as HEDIS - http://www.ncqa.org/HEDISQualityMeasurement.aspx). 
Sensitivity to operations is an umbrella term referring to the information and capacity in clinical systems to monitor safety on an hourly or daily basis, climate measures often address questions to staff in relation to their adaptability to change but there are no other measures of this dynamic in primary care. The work of the defence organisations in advising practices about risks and loop-holes in operating systems comes close to fulfilling this goal and roughly equates to a safety ‘walk-round’\n[37]. The challenge to any ‘toolkit’ is to incorporate prospective measures that prevent and anticipate error. The major elements of the toolkit that address prevention are; trigger tools\n[23–25, 36] (potential rather than actual harm), medicines reconciliation packages\n[30] (prevent harm from changes to prescriptions at the interface of primary and secondary care), safety culture\n[38] and a ‘safe systems’ checklist, which encourages primary care practices to seek out loopholes in their established systems.\n Strengths/limitations This review presented challenges due to the broad nature of our question and that there are no criteria for a standardised definition of, or criteria for classifying, a ‘tool’. Tools can be alerts, scoring systems, order sets, dashboards, questionnaires, educational materials, forms or templates to name but a few and, as such, the output of the review is highly heterogeneous. Therefore, we employed pragmatic ways of handling the large amount of data extracted from the review. Our strengths have included using a two reviewer system and a dual extraction process of both numerically coded data and free text summaries of papers, which enabled us to analyse identified instruments in-depth and an extensive exploration of grey literature with a world-wide perspective. Exclusions due to translation costs were minor – only 6 papers out of 280 were excluded on language basis alone. Time-constraints on the project meant we could not use back and forward citation methods systematically due to the sheer number of papers involved in the review.\nThis review presented challenges due to the broad nature of our question and that there are no criteria for a standardised definition of, or criteria for classifying, a ‘tool’. Tools can be alerts, scoring systems, order sets, dashboards, questionnaires, educational materials, forms or templates to name but a few and, as such, the output of the review is highly heterogeneous. Therefore, we employed pragmatic ways of handling the large amount of data extracted from the review. Our strengths have included using a two reviewer system and a dual extraction process of both numerically coded data and free text summaries of papers, which enabled us to analyse identified instruments in-depth and an extensive exploration of grey literature with a world-wide perspective. Exclusions due to translation costs were minor – only 6 papers out of 280 were excluded on language basis alone. Time-constraints on the project meant we could not use back and forward citation methods systematically due to the sheer number of papers involved in the review.\n Implications for practice, policy, or future research The main aim of the Tools identified is to measure or report safety issues. While measurement and baselines are a prerequisite to improvement, few of the Tools include an embedded implementation strategy that would address improvement or a quality cycle to alter strategies and measure for change\n[39]. 
It is difficult to estimate the impact the various measurement tools identified in this review would have in improving patient safety; for instance, prescribing indicators would only seem to measure level of harm at surface value but have been found to change harmful prescribing patterns when combined with educational feedback\n[40]. Moreover, standards and consistency of reporting vary and many studies, for example around culture and climate surveys, do not report reliability, validity, details of their study characteristics and participants etc.\nOthers have advocated the need for outcome measures in patient safety\n[41]. However, measurement systems need to be tested to ensure they measure what is claimed, whether they can reliably tell if deterioration or improvement is occurring and what other (untoward and unintended) consequences could occur\n[8]. This adheres to the wider imperative that measures of quality or safety, and the data collected, adhere to key attributes such as reliability and validity and also address issues such as acceptability, implementation issues and possible unintended consequences\n[42, 43]. The aim of future work will be to test the suitability and acceptability of the proposed measures in the toolkit and to test changes within practices after application of the toolkit; as well as intended and unintended (positive and negative) consequences. Measureable outcomes are only one feature of Safety Management Systems, and as such the toolkit should not rely exclusively on them but also develop other areas such as training, policy, culture and feedback of outcomes data in line with other established models of patient safety\n[9, 43–45]. There is a need also to embrace qualitative methodologies to patient safety such as the Manchester Patient Safety Framework (MaPSaF )\n[46].\nThe main aim of the Tools identified is to measure or report safety issues. While measurement and baselines are a prerequisite to improvement, few of the Tools include an embedded implementation strategy that would address improvement or a quality cycle to alter strategies and measure for change\n[39]. It is difficult to estimate the impact the various measurement tools identified in this review would have in improving patient safety; for instance, prescribing indicators would only seem to measure level of harm at surface value but have been found to change harmful prescribing patterns when combined with educational feedback\n[40]. Moreover, standards and consistency of reporting vary and many studies, for example around culture and climate surveys, do not report reliability, validity, details of their study characteristics and participants etc.\nOthers have advocated the need for outcome measures in patient safety\n[41]. However, measurement systems need to be tested to ensure they measure what is claimed, whether they can reliably tell if deterioration or improvement is occurring and what other (untoward and unintended) consequences could occur\n[8]. This adheres to the wider imperative that measures of quality or safety, and the data collected, adhere to key attributes such as reliability and validity and also address issues such as acceptability, implementation issues and possible unintended consequences\n[42, 43]. The aim of future work will be to test the suitability and acceptability of the proposed measures in the toolkit and to test changes within practices after application of the toolkit; as well as intended and unintended (positive and negative) consequences. 
Measureable outcomes are only one feature of Safety Management Systems, and as such the toolkit should not rely exclusively on them but also develop other areas such as training, policy, culture and feedback of outcomes data in line with other established models of patient safety\n[9, 43–45]. There is a need also to embrace qualitative methodologies to patient safety such as the Manchester Patient Safety Framework (MaPSaF )\n[46].", "We have demonstrated that there has been an upsurge in publications on primary care patient safety since 2001 and that most of the literature comes from the USA and the UK, with the pre-eminent topic being prescribing safety. The list of discrete tools (which includes grey literature) has a much more even spread across the dimensions within our conceptual taxonomy (Web Appendices 3 and 4). Using this taxonomy shows that some areas of patient safety are relatively neglected in the published literature on primary care patient safety tools; for example, diagnostic error. Tools for test results and referrals are also poorly represented; there were 5 descriptive papers in total, one un-validated tool for electronic referrals and one indicator set dealing with referrals from OOH care. No tools for investigations management were found.", "To the authors’ knowledge no similar review has been undertaken to look specifically at instruments for measuring patient safety in primary care. The AMA report on ambulatory patient safety\n[6] found that the number of reported interventions in primary care is low but their search strategy did not take a worldwide approach and only focused on interventions that reduce error or harm. We designed a more inclusive search strategy to capture measurement tools and strategies and were therefore able to find a wider body of literature. The focus on measurement tools and strategies reflects the importance of knowing what is being used currently in primary care to measure patient safety\n[8]. Many of the 114 tools found are iterations of tools constructed previously and re-designed for other countries. For example, the UK NHS Institute for Innovation and Improvement Primary Care Trigger Tool\n[36] has much in common with the IHI Outpatient Adverse Event Trigger Tool\n[23].\nOur study resonates with the view that there can be no one single measure of patient safety\n[8]. Rather, a framework for patient safety should include for example; past harm, reliability, ‘sensitivity to operations’, anticipation/preparedness and Integration/learning. Vincent C, 2013\n[8] Measures of past harm are prevalent among the tools we found i.e. – adverse event reporting systems. Some tools clearly straddle boundaries within the definition: Significant Event Analyses (SEA) (a technique commonly used in the UK), for example, straddles past events, future anticipation of similar situations and learning in relation to the significant event. Few tools address safety reliability in primary care; practices may set their own standards for audit of patient safety but as yet no formal targets exist for primary care in the UK or US (in direct contrast with hospital mortality data and target dashboards such as HEDIS - http://www.ncqa.org/HEDISQualityMeasurement.aspx). 
Sensitivity to operations is an umbrella term referring to the information and capacity in clinical systems to monitor safety on an hourly or daily basis, climate measures often address questions to staff in relation to their adaptability to change but there are no other measures of this dynamic in primary care. The work of the defence organisations in advising practices about risks and loop-holes in operating systems comes close to fulfilling this goal and roughly equates to a safety ‘walk-round’\n[37]. The challenge to any ‘toolkit’ is to incorporate prospective measures that prevent and anticipate error. The major elements of the toolkit that address prevention are; trigger tools\n[23–25, 36] (potential rather than actual harm), medicines reconciliation packages\n[30] (prevent harm from changes to prescriptions at the interface of primary and secondary care), safety culture\n[38] and a ‘safe systems’ checklist, which encourages primary care practices to seek out loopholes in their established systems.", "This review presented challenges due to the broad nature of our question and that there are no criteria for a standardised definition of, or criteria for classifying, a ‘tool’. Tools can be alerts, scoring systems, order sets, dashboards, questionnaires, educational materials, forms or templates to name but a few and, as such, the output of the review is highly heterogeneous. Therefore, we employed pragmatic ways of handling the large amount of data extracted from the review. Our strengths have included using a two reviewer system and a dual extraction process of both numerically coded data and free text summaries of papers, which enabled us to analyse identified instruments in-depth and an extensive exploration of grey literature with a world-wide perspective. Exclusions due to translation costs were minor – only 6 papers out of 280 were excluded on language basis alone. Time-constraints on the project meant we could not use back and forward citation methods systematically due to the sheer number of papers involved in the review.", "The main aim of the Tools identified is to measure or report safety issues. While measurement and baselines are a prerequisite to improvement, few of the Tools include an embedded implementation strategy that would address improvement or a quality cycle to alter strategies and measure for change\n[39]. It is difficult to estimate the impact the various measurement tools identified in this review would have in improving patient safety; for instance, prescribing indicators would only seem to measure level of harm at surface value but have been found to change harmful prescribing patterns when combined with educational feedback\n[40]. Moreover, standards and consistency of reporting vary and many studies, for example around culture and climate surveys, do not report reliability, validity, details of their study characteristics and participants etc.\nOthers have advocated the need for outcome measures in patient safety\n[41]. However, measurement systems need to be tested to ensure they measure what is claimed, whether they can reliably tell if deterioration or improvement is occurring and what other (untoward and unintended) consequences could occur\n[8]. This adheres to the wider imperative that measures of quality or safety, and the data collected, adhere to key attributes such as reliability and validity and also address issues such as acceptability, implementation issues and possible unintended consequences\n[42, 43]. 
The aim of future work will be to test the suitability and acceptability of the proposed measures in the toolkit and to test changes within practices after application of the toolkit, as well as intended and unintended (positive and negative) consequences. Measurable outcomes are only one feature of Safety Management Systems, and as such the toolkit should not rely exclusively on them but also develop other areas such as training, policy, culture and feedback of outcomes data in line with other established models of patient safety\n[9, 43–45]. There is also a need to embrace qualitative methodologies for patient safety such as the Manchester Patient Safety Framework (MaPSaF)\n[46].", "We have identified 114 published and unpublished tools and indicators, which can be used currently in primary care to measure patient safety. However, the AMA concluded that there are virtually no credible studies on how to improve safety in primary care\n[6] and the challenge is still to turn measurement into improvement, as few tools have been used in quality improvement cycles or as part of performance targets for safety in ambulatory care. Having a comprehensive set of tools for tracking and preventing safety events is the first step in fixing that, and this paper clearly shows where our current toolkit is wanting. The results of this review will enable a better understanding of the epidemiology of ambulatory care safety and help underpin the future development of primary care based safety interventions.", "Dr Rachel Spencer is an academic General Practitioner at the University of Nottingham and Professor Stephen Campbell is a health services researcher at the University of Manchester.", " Additional file 1:\nTools for Primary Care Patient Safety; a Systematic Review.\n(DOCX 65 KB)", "Additional file 1:\nTools for Primary Care Patient Safety; a Systematic Review.\n(DOCX 65 KB)" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, "conclusions", null, "supplementary-material", null ]
[ "Family practice", "Patient safety", "Review" ]
Background: Patient safety has been on the agenda of hospital physicians since the publication of the Institute of Medicine’s 2000 report, ‘To Err is Human’, revealed that more people were dying in the US as a result of medical error than from road traffic accidents [1]. However, most healthcare interactions in the developed world occur in family medicine: 90% of contacts in the England with the National Health Service take place in primary care [2]. In England there are approximately 1 billion community prescriptions dispensed each year [3]. The potential for adverse events is therefore huge but the knowledge base about primary care patient safety is still sparse. A literature review of the nature and frequency of error in primary care suggested that there are between 5–80 safety incidents per 100,000 consultations, which in the UK would translate to between 37–600 incidents per day [4]. Another review estimates that there may be a patient safety incident in approximately 2% of family practice consultations [5]. A 2011 report by the American Medical Association on ambulatory patient safety concluded that the introduction of, and research into, patient safety in the primary care environment have lagged behind that of secondary care [6]. Understanding the epidemiology of hospital errors was crucial in developing hospital based safety interventions and the media’s reporting of this data ensured public support for efforts to improve safety [6]. Some of its authors concluded that there needed to be a similar focus on primary care, because there were ‘virtually no credible studies on how to improve safety’ [7]. Moreover, a report by the Health Foundation in 2013 emphasised the importance of knowing what methods, tools and indicators are currently being used in primary care to measure patient safety [8]. In this paper, patient safety refers to the ‘avoidance, prevention and amelioration of adverse outcomes or injuries stemming from the process of healthcare’ [8]. Staff and systems in primary care environments have the potential to contribute to serious error that can cause both morbidity and mortality; which has been demonstrated in the field of prescribing [9]. Evidence on primary care error comes mainly from the statistics of the medical defence organisations and from small pilot studies [10]. And yet, experts we have consulted in the field were anecdotally aware of a multiplicity of interventions or ‘tools’ from their own and others’ work world-wide, which helped identify grey literature for this study. This paper reports a narrative review of ‘tools’ to improve, measure, and monitor patient safety in the ambulatory settings with a focus on family practice. A narrative review broadly covers a specific topic but does not adhere to strict systematic methods to locate and synthesize articles and enables description and synthesis of qualitative research and categorises studies into more homogenous groups [11]. To the authors’ knowledge no such broad-ranging review has been attempted. The context of this study is worldwide including both the US and the UK and throughout the term primary care is used to address the terms general practice and family practice. Methods: Data sources and searches Our structured narrative review was planned and conducted according to guidance in the Preferred Reporting Items for Systematic Reviews and Meta- Analyses (PRISMA) guidelines [12] but following a more narrative approach (especially with regard to grey literature). 
The starting point for determining the search terms used in the review was a 3 point definition of our search question and exploration of Medical Subject Heading (MeSH) terms [13]. We used a multi-centre team (including leading UK experts on patient safety) at a strategic planning meeting to comment on and finalise the search terms for the review. References were managed in Endnote. Broad ranging search terms were used for developing a search strategy in 3 stems; setting (primary care [i.e. general/family practice, ambulatory care, community care, generalist care], safety synonyms [i.e. error, adverse event, fault, malpractice] and types of tool [i.e. indicator, survey, guideline, scale] (see web Additional file 1: Appendix 1). The aim was to be as inclusive as possible and address administrative, clinical and patient experience issues. The search was performed on the following databases; Embase, CINAHL, Pubmed, Medline (Ovid 1996 onward), Health Management Information Consortium and Web of Science on the 1/11/2011. We did not limit our search by year of publication or to the English language, in order to capture a world-wide perspective on patient safety. However, only abstracts in English were included due to resource restraints for translation. Grey literature was identified from known internet patient safety sources from the US and UK to expand the scope of the review (see web Additional file: 1 Appendix 6). In order to fully explore a single tool many resources often had to be read – for example the IHI trigger tool is described in a number of web publications and supporting documents. Care was taken not to count published tools that also appeared in grey literature twice by discussion between the two reviewers and wider team. Our structured narrative review was planned and conducted according to guidance in the Preferred Reporting Items for Systematic Reviews and Meta- Analyses (PRISMA) guidelines [12] but following a more narrative approach (especially with regard to grey literature). The starting point for determining the search terms used in the review was a 3 point definition of our search question and exploration of Medical Subject Heading (MeSH) terms [13]. We used a multi-centre team (including leading UK experts on patient safety) at a strategic planning meeting to comment on and finalise the search terms for the review. References were managed in Endnote. Broad ranging search terms were used for developing a search strategy in 3 stems; setting (primary care [i.e. general/family practice, ambulatory care, community care, generalist care], safety synonyms [i.e. error, adverse event, fault, malpractice] and types of tool [i.e. indicator, survey, guideline, scale] (see web Additional file 1: Appendix 1). The aim was to be as inclusive as possible and address administrative, clinical and patient experience issues. The search was performed on the following databases; Embase, CINAHL, Pubmed, Medline (Ovid 1996 onward), Health Management Information Consortium and Web of Science on the 1/11/2011. We did not limit our search by year of publication or to the English language, in order to capture a world-wide perspective on patient safety. However, only abstracts in English were included due to resource restraints for translation. Grey literature was identified from known internet patient safety sources from the US and UK to expand the scope of the review (see web Additional file: 1 Appendix 6). 
Study selection
Reviewer one was a GP Academic Clinical Fellow with an interest in pharmacology (RS) and reviewer two was a health services researcher with an interest in family practice (SC). Similarly to the strategy used in the AMA’s report [6], we were interested in highly generalisable tools, so research addressing single drugs or conditions in very specific settings was excluded. An inclusive strategy was employed such that neither reviewer could exclude studies the other felt were potentially relevant; disagreements were resolved by regular discussion between the two reviewers. The inclusion and exclusion criteria were as follows:
Exclusion criteria:
- hospital care/setting, unless transferable
- opinion pieces/editorials
- single drugs/conditions where the focus was felt to be on the specific drug or condition rather than on transferable tools
- quality of care only, without an explicit patient safety component
- excluded on the basis of journal (for example, “Health Care Food & Nutrition focus”)
- economic impact of errors (relevant papers were set aside at this stage for other purposes within the project)
- neither the abstract nor the main text was in English
Inclusion criteria:
- if unsure, always include - for example, ‘good advice’ which might later inform other tools
- tools or strategies to improve or analyse safety that are of relevance to primary care
Data quality and extraction
Data were extracted independently by the two reviewers (RS and SC). A dual approach was taken to data extraction from published material, using both a Word document (Additional file 1: Appendix 5) and an Excel template. For reporting data from selected papers we used a modified PRISMA [12] checklist, which combined aspects of different methodologies (not just systematic reviews and meta-analyses) into a single form for all study types (available on application to the authors). For example, the PRISMA checklist requires a discussion of limitations, which transfers readily to all methodologies, but it also requires method-specific items such as confidence intervals on meta-analyses. Using the information collected on the modified PRISMA form, a numerical data extraction system was agreed by both reviewers, and the data for analysis were extracted from the resulting Excel document.
A pilot of 10 key papers with differing methodologies was undertaken prior to commencing full data extraction; high levels of agreement were found between the two reviewers. At the end of data extraction, differences in rating on the Excel sheet were discussed and analysed across a series of face-to-face and telephone meetings, and it was almost always possible to reach consensus.
Funding
This review is part of a National Institute for Health Research (NIHR) School for Primary Care Research (SPCR: http://www.spcr.nihr.ac.uk/) (UK) project, undertaken with the aim of constructing a Patient Safety Toolkit for English family practice.
Results and discussion: Grey literature results are not included in the following calculations and flow diagrams; they are instead included in the list of tools in Additional file 1: Appendix 3 (where they are clearly marked as coming from grey sources). Using the process described in Figure 1, we selected approximately 10% (n = 1,311) of the original search total (n = 13,240) for evaluation of abstracts; titles excluded at this stage were clearly not of relevance, e.g. relating to non-healthcare safety topics. Abstracts were then analysed for tools, after excluding papers which were from the correct setting but which did not contain any interventions; around 14% of the abstracts were included for full paper analysis (n = 189). Figure 1: ‘Toolkit’ review stages. Graph 1: a) illustration of the literature base in primary care patient safety, 1987-2011, from PubMed; b) papers from the review divided by the annual PubMed output for the same year. As Graph 1a) illustrates, the majority of publications in the review have been published in the last decade. These data could be a product of MeSH term development or consistency in the last 10 years. However, as Graph 1b) shows, the same take-off in 2001 occurs even when the papers are presented as a proportion of the total literature published on PubMed. Analysing the MeSH terms of the 11 papers in the review from before 2000 revealed that the terms ‘medical error’ and ‘diagnostic error’ were each used twice and ‘risk management’ was used in 3 papers. Keywords using the term ‘drug’ appeared 13 times; family practice and ambulatory care were commonly used keywords.
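As a quick sanity check on the screening flow reported above, the quoted proportions (approximately 10% of titles and around 14% of abstracts) can be reproduced from the stated counts. The short sketch below simply recomputes them and is illustrative only.

total_titles = 13240        # records returned by the database searches
abstracts_screened = 1311   # titles retained for abstract evaluation
full_papers = 189           # abstracts retained for full-paper analysis

print(f"titles -> abstracts: {abstracts_screened / total_titles:.1%}")   # ~9.9%, i.e. 'approximately 10%'
print(f"abstracts -> full papers: {full_papers / abstracts_screened:.1%}")  # ~14.4%, i.e. 'around 14%'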
The majority of the literature focused on prescribing (55%); this excludes IT interventions for prescribing, which were attributed to the informatics theme instead. Other prominent areas were adverse events in primary care and safety climate (each comprising 8% of the total published literature). A number of climate measures had been refined from earlier climate surveys for use specifically in primary care (e.g. the SAQ (ambulatory) [14]). Minor themes included informatics (4.5%), the patient role (3%) and general measures to correct error (5%). The primary/secondary care interface is well described in the literature (5%) but, as yet, only one published interface tool specifically for primary care exists. Diagnostic error and results handling appear infrequently (<1% of the total literature) despite their relative importance. The remainder of the literature (11%) related to referrals, Out-Of-Hours (OOH) care, telephone care, organisational issues, mortality and clerical error. Overall, 114 tools were identified, 26 of which were accrued from grey literature. The setting of the research uncovered was predominantly family practice (in keeping with our search strategy); the term ‘health system’ was used to describe research such as consensus outputs from multi-disciplinary teams or across the whole healthcare system. Most published literature was US based (41%), followed by UK studies (23%), with no other country producing more than 5% of published papers. The grey literature also reflects the predominance of US and UK sources. A variety of study designs were identified in the review, including consensus techniques (10%) and observational methods (15%), but most were mixed-methods studies in patient safety research. We classified the data from the published papers in this review using a taxonomy for primary care patient safety based on previous taxonomies by experts in quality of care and patient safety [15–19]. It differs from our data collection form because it evolved later in the process of our analyses. In our taxonomy (Additional file 1: Appendix 4) there are two principal dimensions of safety, ‘access to safe services’ and ‘effectiveness of safety processes’, which are discussed in terms of the structure of the health care system, processes of safety and health outcomes. This taxonomy is based on previous conceptual work on quality of care [15]; in essence, do users get the safe care they need, and is the care safe when they get it? In our review, 88 of the 114 unique tools identified (Additional file 1: Appendix 3) came from the published literature and 26 from grey literature sources, the majority of the latter being from the websites of known patient safety organisations (see data sources in the Methods section). The review identified a wide range of ‘tools’ that cannot be described fully here due to word count constraints; the detailed output from the review will be described across a series of subsequent papers on ambulatory patient safety. However, Table 1 shows key examples found in each dimension of patient safety; we have presented the most well-known or most often used US or UK tools as illustrative examples.
The Table is presented in order of weight of literature, with the most common topics appearing first.
Table 1. Types of tools found in the review; where possible, well-known US examples of each type of tool are given in order to aid understanding.
- Prescribing indicator packs (criteria for ‘never events’ in prescribing). Used in the US: yes; used in the UK: yes. US example: Beers criteria [20]. Data source: EHR. Tools of this type identified: 15 main sets, with much overlap.
- Other prescribing tools. Examples: GRAM reports [21] (Geriatric Risk Assessment MedGuide™), MAI [22] (Medication Appropriateness Index). Data source: EHR, staff. Tools identified: 3.
- Trigger tools (criteria are screened for in a sample of medical records, ‘triggering’ more detailed review). General: used in the US and UK; US example: IHI Outpatient Adverse Event Trigger Tool [23]; data source: EHR; 5 tools identified. Medications: used in the US but not the UK; example: adverse drug events among older adults in primary care [24]; 3 tools identified. Surgery: used in the US and UK; example: ambulatory surgery [25]; 1 tool identified.
- Event reporting systems (national systems for informing relevant authorities about safety problems with all aspects of healthcare). Used in the US and UK. US example: ASIPS [26] (Applied Strategies for Improving Patient Safety). Data source: EHR, staff and patients. Tools identified: 6.
- Medicines/device reporting systems (national systems for informing relevant authorities about safety problems specific to the above). Used in the US and UK. US examples: MEADERS [27] (Medication Error and Adverse Drug Event Reporting System), VAERS [28] (Vaccine Adverse Events Reporting System). Data source: EHR, staff and patients. Tools identified: 4.
- Safety climate/culture measures (the practice team rate themselves against safety criteria and discuss the results to make changes). Used in the US and UK. US example: Safety Attitudes Questionnaire [14]. Data source: staff. Tools identified: 10.
- Significant event analysis tools (the practice team discuss untoward events, using a standardised structure, in order to learn from them). Not used in the US; used in the UK. UK example: NHS Education for Scotland [29]. Data source: staff, EHR and patients. Tools identified: 5.
- General primary/secondary interface tools (standardised systems for handling patient care at transitions in care level, often electronic discharge summaries). Used in the US and UK. US example: ‘Care Transitions Approach’ [30]. Data source: EHR, hospital records. Tools identified: only 3 within the direct control of family doctors.
- Medication reconciliation tools (aligning medication histories after secondary care contact). Used in the US; no formal tool used in the UK. US example: Partner’s Post Discharge Tool [31]. Data source: EHR, hospital records. Tools identified: 3.
- PROMs for safety (questionnaires determining the patient perspective of safety in their practice). Used in the US and UK. US example: SEAPS [32] (Seniors Empowerment and Advocacy in Patient Safety). Data source: patients. Tools identified: 8.
- Other patient involvement measures (a variety of tools including literature for patients, computerised systems and medication-specific tools). Used in the US and UK. US example: ‘Speak-Up’ from JCAHO [33]. Data source: patients. Tools identified: 4.
- IT measures (not just CDSS but a variety of measures often tackling systems error; many relate to prescribing safety). Used in the US and UK. US example: SEMI-P [34] (Safety Enhancement and Monitoring Instrument that is Patient centred). Data source: EHR. Tools identified: 11.
- Diagnostic tools (mainly CDSS designed to improve diagnosis). Used in the US; not used in the UK. US example: DxPlain [35]. Data source: EHR. Tools identified: 3.
Abbreviations: CDSS, computer decision support software; EHR, electronic health record; PROM, patient reported outcome measure; UK, United Kingdom; US, United States.
Summary of main findings
We have demonstrated that there has been an upsurge in publications on primary care patient safety since 2001 and that most of the literature comes from the USA and the UK, with the pre-eminent topic being prescribing safety. The list of discrete tools (which includes grey literature) has a much more even spread across the dimensions within our conceptual taxonomy (Web Appendices 3 and 4). Using this taxonomy shows that some areas of patient safety, for example diagnostic error, are relatively neglected in the published literature on primary care patient safety tools. Tools for test results and referrals are also poorly represented; there were 5 descriptive papers in total, one unvalidated tool for electronic referrals and one indicator set dealing with referrals from OOH care. No tools for investigations management were found.
Comparison to existing literature
To the authors’ knowledge no similar review has been undertaken to look specifically at instruments for measuring patient safety in primary care. The AMA report on ambulatory patient safety [6] found that the number of reported interventions in primary care is low, but its search strategy did not take a worldwide approach and focused only on interventions that reduce error or harm. We designed a more inclusive search strategy to capture measurement tools and strategies and were therefore able to find a wider body of literature. The focus on measurement tools and strategies reflects the importance of knowing what is currently being used in primary care to measure patient safety [8]. Many of the 114 tools found are iterations of tools constructed previously and redesigned for other countries; for example, the UK NHS Institute for Innovation and Improvement Primary Care Trigger Tool [36] has much in common with the IHI Outpatient Adverse Event Trigger Tool [23]. Our study resonates with the view that there can be no single measure of patient safety [8]. Rather, a framework for patient safety should include, for example, past harm, reliability, ‘sensitivity to operations’, anticipation/preparedness and integration/learning (Vincent, 2013 [8]). Measures of past harm, such as adverse event reporting systems, are prevalent among the tools we found. Some tools clearly straddle boundaries within this framework: Significant Event Analysis (SEA), a technique commonly used in the UK, for example, straddles past events, future anticipation of similar situations and learning in relation to the significant event.
Few tools address safety reliability in primary care; practices may set their own standards for audit of patient safety, but as yet no formal targets exist for primary care in the UK or US (in direct contrast with hospital mortality data and target dashboards such as HEDIS - http://www.ncqa.org/HEDISQualityMeasurement.aspx). Sensitivity to operations is an umbrella term referring to the information and capacity in clinical systems to monitor safety on an hourly or daily basis; climate measures often ask staff about their adaptability to change, but there are no other measures of this dynamic in primary care. The work of the defence organisations in advising practices about risks and loopholes in operating systems comes close to fulfilling this goal and roughly equates to a safety ‘walk-round’ [37]. The challenge for any ‘toolkit’ is to incorporate prospective measures that prevent and anticipate error. The major elements of the toolkit that address prevention are trigger tools [23–25, 36] (potential rather than actual harm), medicines reconciliation packages [30] (preventing harm from changes to prescriptions at the interface of primary and secondary care), safety culture measures [38] and a ‘safe systems’ checklist, which encourages primary care practices to seek out loopholes in their established systems.
Strengths/limitations
This review presented challenges because of the broad nature of our question and because there is no standardised definition of, or criteria for classifying, a ‘tool’. Tools can be alerts, scoring systems, order sets, dashboards, questionnaires, educational materials, forms or templates, to name but a few, and as such the output of the review is highly heterogeneous. We therefore employed pragmatic ways of handling the large amount of data extracted from the review. Our strengths include the use of a two-reviewer system and a dual extraction process of both numerically coded data and free-text summaries of papers, which enabled us to analyse the identified instruments in depth, together with an extensive exploration of grey literature from a worldwide perspective. Exclusions due to translation costs were minor; only 6 papers out of 280 were excluded on the basis of language alone. Time constraints on the project meant we could not use backward and forward citation methods systematically, because of the sheer number of papers involved in the review.
Implications for practice, policy, or future research
The main aim of the tools identified is to measure or report safety issues. While measurement and baselines are a prerequisite to improvement, few of the tools include an embedded implementation strategy that would address improvement, or a quality cycle to alter strategies and measure change [39].
It is difficult to estimate the impact the various measurement tools identified in this review would have in improving patient safety; for instance, prescribing indicators would seem at face value only to measure the level of harm, but they have been found to change harmful prescribing patterns when combined with educational feedback [40]. Moreover, standards and consistency of reporting vary, and many studies, for example those describing culture and climate surveys, do not report reliability, validity, or details of their study characteristics and participants. Others have advocated the need for outcome measures in patient safety [41]. However, measurement systems need to be tested to ensure that they measure what is claimed, that they can reliably detect deterioration or improvement, and that any other (untoward and unintended) consequences are identified [8]. This adheres to the wider imperative that measures of quality or safety, and the data collected, should have key attributes such as reliability and validity and should also address acceptability, implementation issues and possible unintended consequences [42, 43]. The aim of future work will be to test the suitability and acceptability of the proposed measures in the toolkit and to test changes within practices after application of the toolkit, as well as intended and unintended (positive and negative) consequences. Measurable outcomes are only one feature of Safety Management Systems, and as such the toolkit should not rely exclusively on them but should also develop other areas such as training, policy, culture and feedback of outcomes data, in line with other established models of patient safety [9, 43–45]. There is also a need to embrace qualitative approaches to patient safety, such as the Manchester Patient Safety Framework (MaPSaF) [46].
Conclusion: We have identified 114 published and unpublished tools and indicators that can currently be used in primary care to measure patient safety. However, the AMA concluded that there are virtually no credible studies on how to improve safety in primary care [6], and the challenge is still to turn measurement into improvement, as few tools have been used in quality improvement cycles or as part of performance targets for safety in ambulatory care. Having a comprehensive set of tools for tracking and preventing safety events is the first step towards fixing that, and this paper clearly shows where our current toolkit is wanting. The results of this review will enable a better understanding of the epidemiology of ambulatory care safety and help underpin the future development of primary care based safety interventions. Authors’ information: Dr Rachel Spencer is an academic General Practitioner at the University of Nottingham and Professor Stephen Campbell is a health services researcher at the University of Manchester. Electronic supplementary material: Additional file 1: Tools for Primary Care Patient Safety; a Systematic Review. (DOCX 65 KB)
Background: Patient safety in primary care is a developing field with an embryonic but evolving evidence base. This narrative review aims to identify tools that can be used by family practitioners as part of a patient safety toolkit to improve the safety of the care and services provided by their practices. Methods: Searches were performed in 6 healthcare databases in 2011 using 3 search stems: location (primary care), patient safety synonyms and outcome measure synonyms. Two reviewers analysed the results using numerical and thematic analyses. Extensive grey literature exploration was also conducted. Results: Overall, 114 tools were identified, 26 of which were accrued from grey literature. Most published literature originated from the USA (41%) and the UK (23%) within the last 10 years. Most of the literature addresses the theme of medication error (55%), followed by safety climate (8%) and adverse event reporting (8%). Minor themes included informatics (4.5%), the patient role (3%) and general measures to correct error (5%). The primary/secondary care interface is well described (5%) but few specific tools for primary care exist. Diagnostic error and results handling appear infrequently (<1% of the total literature) despite their relative importance. The remainder of the literature (11%) related to referrals, Out-Of-Hours (OOH) care, telephone care, organisational issues, mortality and clerical error. Conclusions: This review identified tools and indicators that are available for use in family practice to measure patient safety, which is crucial for improving safety and designing a patient safety toolkit. However, many of the tools have yet to be used in quality improvement strategies and cycles such as plan-do-study-act (PDSA), so there is a dearth of evidence of their utility in improving, as opposed to measuring and highlighting, safety issues. The lack of focus on diagnostics, systems safety and results handling provides direction and priorities for future research.
9,823
386
[ 600, 385, 493, 239, 45, 4120, 150, 547, 192, 382, 28, 22 ]
15
[ "safety", "care", "patient", "tools", "patient safety", "primary", "primary care", "review", "data", "literature" ]
[ "ambulatory patient safety", "patient safety research", "patient safety incident", "patient safety systematic", "primary care safety" ]
null
[CONTENT] Family practice | Patient safety | Review [SUMMARY]
[CONTENT] Family practice | Patient safety | Review [SUMMARY]
null
[CONTENT] Family practice | Patient safety | Review [SUMMARY]
[CONTENT] Family practice | Patient safety | Review [SUMMARY]
[CONTENT] Family practice | Patient safety | Review [SUMMARY]
[CONTENT] Family Practice | Humans | Medication Errors | Organizational Culture | Patient Safety | Primary Health Care | Risk Management | Safety Management [SUMMARY]
[CONTENT] Family Practice | Humans | Medication Errors | Organizational Culture | Patient Safety | Primary Health Care | Risk Management | Safety Management [SUMMARY]
null
[CONTENT] Family Practice | Humans | Medication Errors | Organizational Culture | Patient Safety | Primary Health Care | Risk Management | Safety Management [SUMMARY]
[CONTENT] Family Practice | Humans | Medication Errors | Organizational Culture | Patient Safety | Primary Health Care | Risk Management | Safety Management [SUMMARY]
[CONTENT] Family Practice | Humans | Medication Errors | Organizational Culture | Patient Safety | Primary Health Care | Risk Management | Safety Management [SUMMARY]
[CONTENT] ambulatory patient safety | patient safety research | patient safety incident | patient safety systematic | primary care safety [SUMMARY]
[CONTENT] ambulatory patient safety | patient safety research | patient safety incident | patient safety systematic | primary care safety [SUMMARY]
null
[CONTENT] ambulatory patient safety | patient safety research | patient safety incident | patient safety systematic | primary care safety [SUMMARY]
[CONTENT] ambulatory patient safety | patient safety research | patient safety incident | patient safety systematic | primary care safety [SUMMARY]
[CONTENT] ambulatory patient safety | patient safety research | patient safety incident | patient safety systematic | primary care safety [SUMMARY]
[CONTENT] safety | care | patient | tools | patient safety | primary | primary care | review | data | literature [SUMMARY]
[CONTENT] safety | care | patient | tools | patient safety | primary | primary care | review | data | literature [SUMMARY]
null
[CONTENT] safety | care | patient | tools | patient safety | primary | primary care | review | data | literature [SUMMARY]
[CONTENT] safety | care | patient | tools | patient safety | primary | primary care | review | data | literature [SUMMARY]
[CONTENT] safety | care | patient | tools | patient safety | primary | primary care | review | data | literature [SUMMARY]
[CONTENT] safety | care | primary | primary care | patient safety | patient | practice | error | family | medical [SUMMARY]
[CONTENT] care | search | data | safety | criteria | example | transferable | focus | papers | english [SUMMARY]
null
[CONTENT] safety | care | improvement | ambulatory care | primary | primary care | ambulatory | tools | published unpublished | tools quality improvement cycles [SUMMARY]
[CONTENT] safety | care | patient | patient safety | tools | primary | primary care | review | data | papers [SUMMARY]
[CONTENT] safety | care | patient | patient safety | tools | primary | primary care | review | data | papers [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] 6 | 2011 | 3 ||| Two ||| [SUMMARY]
null
[CONTENT] ||| PDSA ||| [SUMMARY]
[CONTENT] ||| ||| 6 | 2011 | 3 ||| Two ||| ||| ||| 114 | 26 | grey literature ||| USA | 41% | UK | 23% | the last 10 years ||| 55% | 8% | 8% ||| 4.5% | 3% | 5% ||| 5% ||| 1% ||| 11% ||| ||| PDSA ||| [SUMMARY]
[CONTENT] ||| ||| 6 | 2011 | 3 ||| Two ||| ||| ||| 114 | 26 | grey literature ||| USA | 41% | UK | 23% | the last 10 years ||| 55% | 8% | 8% ||| 4.5% | 3% | 5% ||| 5% ||| 1% ||| 11% ||| ||| PDSA ||| [SUMMARY]
Exploring the Suicide Mechanism Path of High-Suicide-Risk Adolescents-Based on Weibo Text Analysis.
36141767
Adolescent suicide can have serious consequences for individuals, families and society, so it warrants close attention. As social media becomes a platform for adolescents to share their daily lives and express their emotions, online identification of and intervention in adolescent suicide risk become possible. In order to find the suicide mechanism path of high-suicide-risk adolescents, we explore the factors that influence it, especially the relations among psychological pain, hopelessness and the suicide stages.
BACKGROUND
We identified high-suicide-risk adolescents through a combination of machine learning model identification and manual review, and used the Weibo text analysis method to explore their suicide mechanism path.
METHODS
Qualitative analysis showed that 36.2% of high-suicide-risk adolescents suffered from mental illness, and depression accounted for 76.3% of all mental illnesses. The mediating effect analysis showed that hopelessness played a complete mediating role between psychological pain and suicide stages. In addition, hopelessness was significantly negatively correlated with suicide stages.
RESULTS
Mental illness (especially depression) in high-suicide-risk adolescents is closely related to the suicide stages: the later the suicide stage, the higher the diagnosis rate of mental illness. The suicide mechanism path in high-suicide-risk adolescents is psychological pain → hopelessness → suicide stages, indicating that psychological pain mainly affects suicide risk through hopelessness. Adolescents who are in the later suicide stages express less hopelessness in the traditional sense.
CONCLUSION
[ "Adolescent", "Emotions", "Humans", "Pain", "Risk Factors", "Self Concept", "Social Media", "Suicide" ]
9517096
1. Introduction
Adolescent suicide has long been a global public health concern. Suicide is the fourth leading cause of death globally among people aged 15–19 [1], and in China, it is the second leading cause of death for people aged 20–34 [2]. The consequences of adolescent suicide are among the most serious of all mental health problems. It means disability or even loss of life for individuals, heavy blows and severe trauma for families, and large expenditures in the health, welfare and justice sectors for society. The study of suicide risk factors and the exploration of suicide mechanisms in high-suicide-risk adolescents are of great significance for the prevention of suicide. With the development of technology, social media has become a platform for people to record their daily lives and express their emotions. In China, Sina Weibo, a typical product of the big data era, has extensive influence. Sina Weibo is China’s most popular social media platform, comparable to Twitter in the United States. Although some short video platforms (e.g., Douyin) and forums (e.g., Douban) are also popular in China, it is only on Sina Weibo that users can share information in real time and communicate interactively through text, pictures, videos and other multimedia. According to a report released by Sina Weibo, its users are trending younger, with the post-90s and post-00s youth groups accounting for nearly 80% of users and daily active users reaching 224 million [3]. Users’ emotional expression on social platforms is more immediate and less masked. Identifying users’ psychological states through their expression on social media is quicker and more convenient, and the data obtained are more authentic. Therefore, we use the Weibo text analysis method to explore the suicide mechanism path of high-suicide-risk adolescents. Text analysis refers to the representation of text and the selection of feature items, which is a basic problem in text mining and information retrieval. In practice, it means that feature words extracted from text can be quantified to represent the information in the text. Therefore, we can extract the psychological indicators of interest from Weibo text by constructing a dictionary of psychological indicators. Suicide represents a series of consecutive stages, including suicide ideation, suicide plan, and suicide attempt [4,5]. Suicide ideation, as a high-risk factor for suicide, has a significant predictive effect [6,7]. Approximately one third of suicide ideators ultimately attempt suicide, and the number of suicide attempters is several times higher than the number who actually die by suicide, so a suicide attempt is a greater risk factor for suicide than suicidal ideation [1,8,9]. The later the suicide stage, the closer the person is to suicide death and the greater the risk of suicide. Therefore, in order to quantify the degree of suicide risk and provide accurate information for suicide interventions, we decided to divide suicidal behavior into different stages. At the same time, considering that depressed mood is an important signal of suicide, we divided suicide into 4 stages of gradually increasing suicide risk: depressed mood, suicide ideation, suicide plan and suicide attempt, and defined adolescents at any of these stages as high-suicide-risk adolescents. A meta-analysis of the causes of suicide among adolescents found that suicide is a public health problem involving complex factors, such as biology, psychology, family, society and culture [8].
Among the psychological factors, psychological pain and hopelessness are important risk factors featuring strongly in suicide research [9,10,11]. Although psychological pain and hopelessness are often mentioned together as influences on suicidal behavior, and as the two most common motivations for suicide attempts [12], it is unclear how these two proximal factors affect the suicide stages. Psychological pain is defined as the “introspective experience of negative emotions such as fear, despair, grief, shame, guilt, blocked love, loneliness and loss” [13,14,15,16]. When Shneidman first hypothesized the relationship between psychological pain and suicide, he pointed out that other factors can only affect suicidal behavior through psychological pain, without which suicide would not occur [13]. Evidence suggests that psychological pain plays a central role in suicidal behavior [10,17,18]. Psychological pain alone is not enough to generate suicide ideation; hopelessness is also required [19]. Hopelessness is defined as negative expectations or pessimism about oneself or the future [20]. Previous studies have shown that hopelessness is also an important risk factor for suicide [21,22,23]. To explore the relationship between psychological pain, hopelessness and suicide stages for high-suicide-risk adolescents, we established a hypothesis for the suicide mechanism path: psychological pain → hopelessness → suicide stages, with hopelessness as the mediating variable. In addition, there is a high correlation between mental illness and suicidal behaviors [24,25], so in order to better clarify the effect of psychological pain and hopelessness on the suicide stages, we propose a mediating effect model with mental illness as a control variable (Figure 1). We used the more ecologically valid Weibo text analysis method in this study, hoping to deepen research on adolescent suicide and to provide a new theoretical basis for future suicide interventions drawing on online data.
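The dictionary-based text analysis described above can be illustrated with a short sketch. The Python code below is a minimal, hypothetical example only: it assumes the jieba segmenter, a placeholder stop-word list, and a handful of invented keyword entries standing in for the real Chinese Suicide Dictionary (403 psychological-pain and 188 hopelessness keywords), and it computes each dimension as a word-frequency ratio over a user's segmented Weibo text.

```python
# Minimal sketch, not the authors' code: dictionary-based scoring of Weibo text.
# The keyword and stop-word sets are illustrative placeholders; the real
# Chinese Suicide Dictionary contains 403 pain and 188 hopelessness keywords.
import jieba

PAIN_WORDS = {"痛苦", "煎熬", "折磨"}          # placeholder entries
HOPELESSNESS_WORDS = {"绝望", "无望", "无助"}   # placeholder entries
STOP_WORDS = {"的", "了", "是"}                 # placeholder stop words

def word_frequency_ratios(text: str) -> dict:
    """Segment a user's concatenated Weibo posts and return the share of
    tokens that fall in each dictionary dimension."""
    tokens = [t for t in jieba.lcut(text) if t.strip() and t not in STOP_WORDS]
    if not tokens:
        return {"pain": 0.0, "hopelessness": 0.0}
    return {
        "pain": sum(t in PAIN_WORDS for t in tokens) / len(tokens),
        "hopelessness": sum(t in HOPELESSNESS_WORDS for t in tokens) / len(tokens),
    }

if __name__ == "__main__":
    sample = "最近很痛苦，觉得一切都很绝望"
    print(word_frequency_ratios(sample))
```

In the study itself, such per-user ratios serve as the independent and mediating variables of the path model; the sketch shows only the feature-extraction step.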
null
null
3. Results
3.1. Group Characteristics Reflected by Qualitative Analysis The mental illness diagnosis of the experimental group was marked; it was found that 38 were diagnosed and 67 were undiagnosed. The proportion of users diagnosed with mental illness was 36.2%. Among the 38 high-suicide-risk adolescents diagnosed with mental illness, 29 users had depression (or depression was included in multiple mental illnesses), accounting for 76.3%; the rest included bipolar disorder (13.16%), anxiety disorder (13.16%), etc.; 9 of them suffered from multiple mental illnesses (e.g., depression and anxiety disorder), accounting for 23.68%. Among the experimental group, 57 were in the stage of depressed mood (54.3%), 21 were in the stage of suicide ideation (20.0%), 19 were in the stage of suicide plan (18.1%), and 8 people were in the stage of suicide attempt (7.6%) (Table 2). 3.2. Path Analysis of Psychological Pain, Hopelessness and Suicide Stages A mediation effect test was performed on the path model (psychological pain → hopelessness → suicide stages). The performance of each fit index of the model was good (RMSEA < 0.08; CFI > 0.9; TLI > 0.9), indicating that the model fitting effect is excellent (Table 3). After controlling for mental illness factors, the total effect c was significant (β = −0.154, p < 0.05), indicating that there was a mediation effect; the path coefficients a and b were significant (β = 0.571, p < 0.001; β = −0.271, p < 0.05), but the direct effect c’ was not significant (β = 0.052, p > 0.05), indicating that there was a complete mediating effect of hopelessness. That is, psychological pain mainly affects suicide risk through hopelessness (Table 4). It is worth noting that the relationship between the expression of hopelessness and the stages of suicide showed a significant negative correlation (β = −0.271, p < 0.05), indicating that a late suicide stage such as suicide attempt, which may be closer to suicide death, included fewer expressions of hopelessness (Table 4). Furthermore, there was a significant positive correlation between diagnosed mental illness and suicide stages (β = 0.398, p < 0.001), indicating the necessity of treating mental illness as a control variable in order to see the relationship between psychological pain, hopelessness and suicide stages more clearly. At the same time, it also showed that the later the suicide stage, the higher the diagnosis rate of mental illness (Table 4).
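The mediation result reported above (a significant total effect that becomes non-significant once hopelessness enters the model) can be sketched conceptually. The authors ran the analysis in Mplus with bootstrapping; the Python code below is only an approximation under stated assumptions: hypothetical column names, the ordinal 1-4 stage treated as a numeric outcome, ordinary least squares for both paths, and a percentile bootstrap of the indirect effect a*b.

```python
# Conceptual sketch (not the authors' Mplus setup): percentile-bootstrap test of the
# indirect effect in the path  pain (X) -> hopelessness (M) -> suicide stage (Y),
# with diagnosed mental illness (Z, 0/1) as a control. Column names are hypothetical.
import numpy as np
import pandas as pd

def _beta(y, X):
    """OLS coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def bootstrap_indirect(df, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(n_boot):
        s = df.sample(len(df), replace=True, random_state=int(rng.integers(1 << 31)))
        a = _beta(s["hopelessness"], s[["pain", "mental_illness"]])[1]          # X -> M
        b = _beta(s["stage"], s[["hopelessness", "pain", "mental_illness"]])[1]  # M -> Y
        est.append(a * b)
    lo, hi = np.percentile(est, [2.5, 97.5])
    return float(np.mean(est)), (float(lo), float(hi))

if __name__ == "__main__":
    # toy data standing in for the 105 users' word-frequency features and labels
    rng = np.random.default_rng(1)
    pain = rng.random(105)
    hope = 0.6 * pain + 0.1 * rng.random(105)
    df = pd.DataFrame({
        "pain": pain,
        "hopelessness": hope,
        "stage": np.clip(np.round(4 - 3 * hope + rng.normal(0, 0.5, 105)), 1, 4),
        "mental_illness": rng.integers(0, 2, 105),
    })
    ab, ci = bootstrap_indirect(df, n_boot=1000)
    print("indirect effect a*b:", ab, "95% CI:", ci)
```

With real data one would replace the toy frame with the 105 users' word-frequency features and labels, and model the stage variable as ordinal, as in the original analysis.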
6. Conclusions
The qualitative findings suggested that mental illness in high-suicide-risk adolescents was strongly associated with suicide risk, and that depression was the most common mental illness associated with suicide. Another finding was that the later the suicide stage, the smaller was the proportion of high-suicide-risk adolescents. The results of the mediation effect analysis verified the suicide mechanism path: psychological pain → hopelessness → suicide stages, indicating that psychological pain mainly affected suicide stages through hopelessness, and hopelessness may be a better predictor of suicide risk for high-suicide-risk adolescents. Notably, although adolescents in the later suicide stages were at greater suicide risk, they had less expression of hopelessness in the traditional sense.
[ "2. Materials and Methods", "2.2. Online Psychological Assistance Project", "2.3. Data Collection and Processing", "2.3.1. Labeling the Interview Records Data", "2.3.2. Getting the Weibo Text Data", "2.4. Data Analysis", "2.4.1. Qualitative Analysis", "2.4.2. Analysis of Suicide Path", "3.1. Group Characteristics Reflected by Qualitative Analysis", "3.2. Path Analysis of Psychological Pain, Hopelessness and Suicide Stages", "4.1. The Qualitative Analysis Results", "4.2. The Mediating Effect of Hopelessness", "4.3. The Hopelessness Expressions and the Suicide Stages", "5. Limitations and Future Studies" ]
[ "2.1. Participants Through the online psychological assistance project, we obtained the interview records data of 3627 Weibo users, and then processed the user’s interview records data and Weibo texts data. Finally, we retained 105 high-suicide-risk adolescents with enough Weibo texts as the experimental group.\nEthical Statement: Participants gave informed consent when agreeing to take part in the online psychological assistance project. The project received ethical approval from the Institution-al Review Board of the Institute of Psychology, Chinese Academy of Sciences, with the ethics approval number H16003.\nThrough the online psychological assistance project, we obtained the interview records data of 3627 Weibo users, and then processed the user’s interview records data and Weibo texts data. Finally, we retained 105 high-suicide-risk adolescents with enough Weibo texts as the experimental group.\nEthical Statement: Participants gave informed consent when agreeing to take part in the online psychological assistance project. The project received ethical approval from the Institution-al Review Board of the Institute of Psychology, Chinese Academy of Sciences, with the ethics approval number H16003.\n2.2. Online Psychological Assistance Project We used the suicide signal recognition model established by the Computational Cyber-Psychology Lab (CCPL) to identify the suicide signal of a single microblog, which can effectively determine whether the single microblog has shown suicidal risk [26].\nSince November 2016, we had used the suicide signal recognition model for online identification of Weibo comments. The last Weibo post before the death of Weibo user “Zoufan”, as a place where people can express their negative emotions, has received more than 1 million comments until now, and the number of comments keep rising (Figure 2).\nFirstly, we used the suicide signal model to identify the suicide signal of the Weibo comments. Then, the comments identified by the model were re-identified by manual recognition. Finally, we used the official Weibo account “Psychological Map Psy” and sent a private message to the Weibo users marked as having a suicide risk. The contents of the private message mainly include inquiries about the user’s emotional state, expressions of concern, links to self-assessment psychological scales, and the hotline of the psychological crisis intervention center, etc. (Figure 3). In addition, users can choose whether to communicate with the voluntary counselor online according to their own wishes, so as to achieve agile psychological crisis intervention.\nWe used the suicide signal recognition model established by the Computational Cyber-Psychology Lab (CCPL) to identify the suicide signal of a single microblog, which can effectively determine whether the single microblog has shown suicidal risk [26].\nSince November 2016, we had used the suicide signal recognition model for online identification of Weibo comments. The last Weibo post before the death of Weibo user “Zoufan”, as a place where people can express their negative emotions, has received more than 1 million comments until now, and the number of comments keep rising (Figure 2).\nFirstly, we used the suicide signal model to identify the suicide signal of the Weibo comments. Then, the comments identified by the model were re-identified by manual recognition. Finally, we used the official Weibo account “Psychological Map Psy” and sent a private message to the Weibo users marked as having a suicide risk. 
The contents of the private message mainly include inquiries about the user’s emotional state, expressions of concern, links to self-assessment psychological scales, and the hotline of the psychological crisis intervention center, etc. (Figure 3). In addition, users can choose whether to communicate with the voluntary counselor online according to their own wishes, so as to achieve agile psychological crisis intervention.\n2.3. Data Collection and Processing 2.3.1. Labeling the Interview Records Data During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. 
(d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\nDuring the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\n2.3.2. Getting the Weibo Text Data We scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. 
In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.\nWe scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.\n2.3.1. Labeling the Interview Records Data During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. 
We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\nDuring the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. 
We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\n2.3.2. Getting the Weibo Text Data We scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.\nWe scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. 
In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.\n2.4. Data Analysis 2.4.1. Qualitative Analysis Through the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\nThrough the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\n2.4.2. Analysis of Suicide Path The mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).\nThe mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).\n2.4.1. Qualitative Analysis Through the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\nThrough the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\n2.4.2. Analysis of Suicide Path The mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).\nThe mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). 
The mental illness was included in the model as a control variable (Z).", "We used the suicide signal recognition model established by the Computational Cyber-Psychology Lab (CCPL) to identify the suicide signal of a single microblog, which can effectively determine whether the single microblog has shown suicidal risk [26].\nSince November 2016, we had used the suicide signal recognition model for online identification of Weibo comments. The last Weibo post before the death of Weibo user “Zoufan”, as a place where people can express their negative emotions, has received more than 1 million comments until now, and the number of comments keep rising (Figure 2).\nFirstly, we used the suicide signal model to identify the suicide signal of the Weibo comments. Then, the comments identified by the model were re-identified by manual recognition. Finally, we used the official Weibo account “Psychological Map Psy” and sent a private message to the Weibo users marked as having a suicide risk. The contents of the private message mainly include inquiries about the user’s emotional state, expressions of concern, links to self-assessment psychological scales, and the hotline of the psychological crisis intervention center, etc. (Figure 3). In addition, users can choose whether to communicate with the voluntary counselor online according to their own wishes, so as to achieve agile psychological crisis intervention.", "2.3.1. Labeling the Interview Records Data During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. 
We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\nDuring the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. 
We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\n2.3.2. Getting the Weibo Text Data We scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.\nWe scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. 
In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.", "During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). 
(b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.", "We scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.", "2.4.1. Qualitative Analysis Through the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\nThrough the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\n2.4.2. Analysis of Suicide Path The mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).\nThe mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).", "Through the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.", "The mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). 
The mental illness was included in the model as a control variable (Z).", "The mental illness diagnosis of the experimental group was marked; it was found that 38 were diagnosed and 67 were undiagnosed. The proportion of users diagnosed with mental illness was 36.2%. Among the 38 high-suicide-risk adolescents diagnosed with mental illness, 29 users had depression (or depression was included in multiple mental illnesses), accounting for 76.3%; the rest included bipolar disorder (13.16%), anxiety disorder (13.16%), etc.; 9 of them suffered from multiple mental illnesses (e.g., depression and anxiety disorder), accounting for 23.68%.\nAmong the experimental group, 57 were in the stage of depressed mood (54.3%), 21 were in the stage of suicide ideation (20.0%), 19 were in the stage of suicide plan (18.1%), and 8 people were in the stage of suicide attempt (7.6%) (Table 2).", "A mediation effect test was performed on the path model (psychological pain → hopelessness → suicide stages). The performance of each fit index of the model was good (RMSEA < 0.08; CFI > 0.9; TLI > 0.9), indicating that the model fitting effect is excellent (Table 3).\nAfter controlling for mental illness factors, the total effect c was significant (β = −0.154, p < 0.05), indicating that there was a mediation effect; the path coefficients a and b were significant (β = 0.571, p < 0.001; β = −0.271, p < 0.05), but the direct effect c’ was not significant (β = 0.052, p > 0.05), indicating that there was a complete mediating effect of hopelessness. That is, psychological pain mainly affects suicide risk through hopelessness (Table 4).\nIt was worth noting that the relationship between the expression of hopelessness and the stages of suicide showed a significant negative correlation (β = −0.271, p < 0.05), indicating that the late suicide stage such as suicide attempt, which may be closer to suicide death, included less expressions of hopelessness (Table 4).\nFurthermore, there was a significant positive correlation between diagnosed mental illness and suicide stages (β = 0.398, p < 0.001), indicating the necessity of treating mental illness as a control variable in order to see the relationship between psychological pain, hopelessness and suicide stages more clearly. At the same time, it also showed that the later the suicide stage, the higher the diagnosis rate of mental illness (Table 4).", "Among the experimental group, 36.2% of users suffered from mental illness. Depression accounted for 76.3% of all mental illness; it is the most common mental illness among the high-suicide-risk adolescents. The incidence of suicide among students whose relatives were affected by psychosis were higher than that of general residents. Suicide and mental illness were closely linked [28]. British doctor Sainsbury conducted a qualitative study on the data of 400 suicides examined by a coroner in London between 1936 and 1938, and found that 37% of the suicides reflected mental illness [29]. In a meta-analysis conducted in 2004, twenty-seven studies comprising 3275 suicides were included, of which 87.3% had been diagnosed with a mental disorder prior to their death [30]. In contrast, the research population in our study was only a high suicide risk group, so the detection rate of mental illness was relatively low. Although the research populations of the other two studies were both suicide decedents, the time interval between the two studies was large. 
4.1. The Qualitative Analysis Results
In the experimental group, 36.2% of users suffered from a mental illness, and depression accounted for 76.3% of all diagnoses, making it the most common mental illness among these high-suicide-risk adolescents. The incidence of suicide among students whose relatives were affected by psychosis is higher than that of general residents; suicide and mental illness are closely linked [28]. The British doctor Sainsbury conducted a qualitative study of 400 suicides examined by a coroner in London between 1936 and 1938 and found that 37% of the suicides reflected mental illness [29]. A meta-analysis conducted in 2004, covering twenty-seven studies comprising 3275 suicides, found that 87.3% of the deceased had been diagnosed with a mental disorder prior to their death [30]. In contrast, the population in our study was a high-suicide-risk group rather than suicide decedents, so the detection rate of mental illness was relatively low. Although the populations of the other two studies were both suicide decedents, the time interval between them was large; it is very likely that later studies had better methods for detecting mental illness among suicide decedents, which would explain the much higher detection rate. Furthermore, a more recent study showed that the mental illness most commonly associated with suicide is depression [31], a conclusion also supported by the proportion of depression among all mental illnesses in our study.
Another result of the qualitative analysis was that the later the suicide stage, the smaller the proportion of adolescents in it. A study of residents’ suicidal ideation, suicide plans and suicide attempts in Dalian, China likewise showed that the detection rates of suicidal ideation, suicide plans and suicide attempts decreased in that order across groups [32]. This may be because not all suicidal ideation translates into suicidal action, and most people with suicidal ideation have not attempted suicide [33,34].

4.2. The Mediating Effect of Hopelessness
Our research found that the effect of psychological pain on suicide stages was totally mediated by hopelessness, confirming our hypothesized suicide mechanism path: psychological pain → hopelessness → suicide stages. Although the first step toward suicide ideation starts from psychological pain, psychological pain by itself is not enough to generate suicide ideation; hopelessness is also needed [19]. People make decisions and take actions based in part on their emotional predictions [35,36], and hopelessness makes people at suicide risk overestimate their future emotional pain [37]. If an individual experiences constant pain in life, this may reduce the desire to live and lead to suicidal thoughts; however, pain alone is not enough to completely destroy the belief in life. Only when individuals feel that their terrible situation cannot be improved, and that they will live in pain forever, will suicide be considered.
Thus, expressions of hopelessness may be a better predictor of suicide risk than psychological pain, and this gives us a traceable pathway for suicide intervention. Based on Weibo posts, we can detect whether an individual has released suicide signals, and it may be possible to track suicide stages in the future, making suicide intervention more accurate.

4.3. The Hopelessness Expressions and the Suicide Stages
There was a negative predictive relationship between hopelessness expressions and suicide stages: adolescents in the later suicide stages expressed hopelessness less. One possible reason is that adolescents at higher risk of suicide may use less concrete, direct words of hopelessness to express such feelings. From a linguistic point of view, Gao Yihong and Meng Ling analyzed the Weibo texts of “Zoufan” in the three months before suicide and found that “Zoufan” often used decontextualized and fragmented ways to express a depressed mood, and frequently used metaphors and intentions about death [38]. This euphemistic, poetic way of venting may be a commonality among high-suicide-risk adolescents. Another possible explanation is the “Presuicidal Syndrome”, which holds that there is an “ominous quiet” in the short period before suicide, a sign of “affective restriction” that represents extreme repression of intolerable feelings [39]. Such “affective restriction” may be why people in the later suicide stages are less likely to express hopelessness.
Moreover, adolescents in the later suicide stages may be more accepting and tolerant of psychological pain and hopelessness, which makes them less willing to talk about their own psychological pain and hopelessness; lacking supportive survival signals in their texts, they may lose the belief that help can be obtained.
Many researchers have paid attention to the use of social media for suicide prevention and intervention and have conducted a series of studies [40,41]. However, little attention has been paid to whether a suicidal signal comes from an early suicide ideator or a later suicide attempter. Our study fills this gap, complementing and extending suicide-related research. Our results suggest that adolescents who are at a later suicide stage, and therefore at higher suicide risk, often do not express much hopelessness directly. Therefore, we cannot judge closeness to suicide only by the level of traditional expressions of hopelessness.

Although this study could divide adolescents into suicide stages by labeling Weibo private message data, it could not reliably determine users’ gender. Previous studies have shown that suicidal behaviors are also affected by gender, and that the incidence of suicide attempts in adolescent females is higher than in males [42]. Compared with males, females are more psychologically sensitive and vulnerable [43] and are more likely to express their pain through suicidal behaviors [44]. The expressions of hopelessness and psychological pain as predictors of suicide may therefore differ by gender. In addition, there were only 105 adolescent users with high suicide risk in our study, so caution is necessary when generalizing the conclusions to other cases.
Our findings provide a traceable pathway for suicide intervention: the effect of psychological pain on suicide stages was totally mediated by hopelessness, and adolescents in the later suicide stages had fewer expressions of hopelessness in the traditional sense. However, the specific language characteristics of each suicide stage are still unknown. If we want to predict suicide stages precisely from expressions in Weibo posts and provide more accurate and effective information for suicide intervention, we need to further explore the specific language characteristics of high-suicide-risk adolescents at each suicide stage, especially how to identify abstract metaphors that may constitute a suicidal signal. It may then be possible to provide more accurate suicide intervention for adolescents at different suicide stages in the future.
We mainly focused on the expression of two psychological indicators (psychological pain and hopelessness) on social media. However, many factors affect suicide, and suicide intervention can be studied from multiple perspectives, examining not only negative indicators but also positive ones in future research. For example, previous studies have shown that social support [45], psychological resilience [46], and self-compassion [47] are protective factors against adolescent suicide." ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Participants", "2.2. Online Psychological Assistance Project", "2.3. Data Collection and Processing", "2.3.1. Labeling the Interview Records Data", "2.3.2. Getting the Weibo Text Data", "2.4. Data Analysis", "2.4.1. Qualitative Analysis", "2.4.2. Analysis of Suicide Path", "3. Results", "3.1. Group Characteristics Reflected by Qualitative Analysis", "3.2. Path Analysis of Psychological Pain, Hopelessness and Suicide Stages", "4. Discussion", "4.1. The Qualitative Analysis Results", "4.2. The Mediating Effect of Hopelessness", "4.3. The Hopelessness Expressions and the Suicide Stages", "5. Limitations and Future Studies", "6. Conclusions" ]
[ "Adolescent suicide has always been a global public health concern to us. Suicide is the fourth leading cause of death globally among people aged 15–19 [1], and in China, it is the second leading cause of death for people aged 20–34 [2]. The consequences of adolescent suicide are the most serious among many mental health problems. It means disability or even a loss of life for individuals, heavy blows and severe trauma for families, and huge expenditures on the health, welfare and justice sectors for society. The study of suicide risk factors and the exploration of suicide mechanisms in high-suicide-risk adolescents are of great significance for the prevention of suicide.\nWith the development of technology, social media has become a platform for people to record their daily life and express their emotions. In China, Sina Weibo as a typical product of the big data era has extensive influence. Sina Weibo is China’s most popular social media platform, just like Twitter in America. Although there are some short video platforms (e.g., Douyin) and forums (e.g., Douban) that are also popular in China, it is only on Sina Weibo that users can share information in real-time and communicate interactively in the forms of text, pictures, videos and other multimedia. According to a report released by Sina Weibo, Weibo users show a younger trend, of which the post-90s and post-00s youth groups account for nearly 80%, and the daily active volume can reach 224 million [3].\nThe user’s emotional expression on social platforms has higher immediacy and less masking. Identifying the user’s psychological state through their expression on social media will be more convenient and quicker, and the obtained data will be more real. Therefore, we use the Weibo text analysis method to explore the suicide mechanism path of high-suicide-risk adolescents. Text analysis refers to the representation of text and the selection of feature items, which is a basic problem in text mining and information retrieval. Text analysis means that we can quantify the feature words extracted from text to represent text information. Therefore, we can extract the psychological indicators we want from the Weibo text by constructing a dictionary of psychological indicators.\nSuicide represents a series of consecutive stages, including suicide ideation, suicide plan, and suicide attempt [4,5]. Suicide ideation, as a high-risk factor for suicide, has a significant predictive effect [6,7]. Approximately 1/3 of suicide ideators ultimately attempt suicide, and the number of suicide attempters is several times higher than those who really die by suicide, so suicide attempt is a greater risk factor for suicide than suicidal ideation [1,8,9]. The later the suicide stage, the closer to suicide death, and the greater risk of suicide. Therefore, in order to quantify the degree of suicide risk and provide accurate information for suicide interventions, we decided to divide suicidal behavior into different stages. At the same time, considering that depressed mood is an important signal of suicide, we finally divided suicide into 4 stages, which the suicide risk gradually increases: depressed mood, suicide ideation, suicide plan and suicide attempt, and defined adolescents at different suicide stages as high-suicide-risk adolescents.\nA meta-analysis of the causes of suicide among adolescents found that suicide is a public health problem involving complex factors, such as biology, psychology, family, society and culture [8]. 
Among the psychological factors, psychological pain and hopelessness are important risk factors that feature strongly in suicide research [9,10,11]. Although psychological pain and hopelessness are often mentioned together as influences on suicidal behavior, being the two most common motivations for suicide attempts [12], it is unclear how these two proximal factors affect the suicide stages.
Psychological pain is defined as the “introspective experience of negative emotions such as fear, despair, grief, shame, guilt, blocked love, loneliness and loss” [13,14,15,16]. When Shneidman first hypothesized the relationship between psychological pain and suicide, he argued that other factors can affect suicidal behavior only through psychological pain, without which suicide would not occur [13]. Evidence suggests that psychological pain plays a central role in suicidal behavior [10,17,18]. Psychological pain alone, however, is not enough to develop suicide ideation; hopelessness is also required [19]. Hopelessness is defined as negative expectations or pessimism about oneself or the future [20], and previous studies have shown that it is also an important risk factor for suicide [21,22,23].
To explore the relationship between psychological pain, hopelessness and suicide stages in high-suicide-risk adolescents, we hypothesized a suicide mechanism path, psychological pain → hopelessness → suicide stages, with hopelessness as the mediating variable. In addition, because mental illness is highly correlated with suicidal behaviors [24,25], and in order to better clarify the effect of psychological pain and hopelessness on suicide stages, our mediating-effect model includes mental illness as a control variable (Figure 1). We used the more ecologically valid Weibo text analysis method in this study, hoping to deepen research on adolescent suicide and to provide a new theoretical basis for future suicide interventions drawing on online data.

2.1. Participants
Through the online psychological assistance project, we obtained the interview records of 3627 Weibo users and then processed each user’s interview records and Weibo texts. Finally, we retained 105 high-suicide-risk adolescents with sufficient Weibo texts as the experimental group.
Ethical Statement: Participants gave informed consent when agreeing to take part in the online psychological assistance project. The project received ethical approval from the Institutional Review Board of the Institute of Psychology, Chinese Academy of Sciences (ethics approval number H16003).
2.2. Online Psychological Assistance Project
We used the suicide signal recognition model established by the Computational Cyber-Psychology Lab (CCPL) to identify suicide signals in individual microblogs; the model can effectively determine whether a single microblog shows suicide risk [26].
Since November 2016, we have used this model for online identification of Weibo comments. The last Weibo post before the death of the Weibo user “Zoufan” has become a place where people express their negative emotions; it has received more than 1 million comments to date, and the number keeps rising (Figure 2).
First, the suicide signal model was used to flag Weibo comments carrying a suicide signal. The flagged comments were then re-checked manually. Finally, the official Weibo account “Psychological Map Psy” sent a private message to users marked as being at suicide risk. The private message mainly included inquiries about the user’s emotional state, expressions of concern, links to self-assessment psychological scales, and the hotline of the psychological crisis intervention center (Figure 3). Users could then choose whether to talk online with a volunteer counselor, enabling agile psychological crisis intervention.
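The three-step workflow just described (model screening, manual re-check, private-message outreach) can be outlined as follows. This is a purely illustrative sketch: the suicide signal recognition model [26] and Sina Weibo's messaging interface are not described in this paper, so `model.predict` and `send_private_message` are hypothetical placeholders supplied by the caller, and the outreach text is a paraphrase of the elements listed above.

```python
# Purely illustrative pipeline sketch; the classifier and messaging functions are
# hypothetical placeholders, not the authors' system or a real Weibo API.
OUTREACH_TEXT = (
    "We noticed you may be going through a hard time. "
    "Would you like to talk, or try a short self-assessment? "
    "Crisis hotline: <hotline number>"
)

def screen_comments(comments, model, manual_review):
    """Flag comments with the model, confirm manually, and return user ids to contact."""
    flagged = [c for c in comments if model.predict(c["text"]) == 1]   # step 1: model screening
    confirmed = [c for c in flagged if manual_review(c["text"])]       # step 2: human re-check
    return {c["user_id"] for c in confirmed}                           # step 3: users to contact

def run_outreach(comments, model, manual_review, send_private_message):
    for user_id in screen_comments(comments, model, manual_review):
        send_private_message(user_id, OUTREACH_TEXT)                   # step 4: private message
```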
2.3. Data Collection and Processing
2.3.1. Labeling the Interview Records Data
During the online psychological assistance project (November 2016 to April 2019), the official Weibo account communicated with 3627 users online, accumulating 127,336 message records. Adhering to the principles of privacy and confidentiality, use of the interview records was strictly restricted: the experimenters had to protect the privacy of the research subjects, could not use any confidential information related to them (including but not limited to accounts, nicknames, contact information, personal homepages, network IPs, locations, and message records) outside the scope of the research, and could not violate the subjects’ privacy in any way.
The interview records were coded by qualitative analysis. Their basic content was the users’ current status and problems. We extracted three indicators, described below (a minimal coding sketch follows the criteria):
(1) Determining whether the user was an adolescent (junior high school, high school, or college student), and retaining only data labeled as adolescent.
(2) Determining whether the user had been diagnosed with a mental illness by a psychiatrist, had a history of mental illness, or was taking medication for a mental illness, and labeling mental illness as a 0/1 dummy variable.
(3) Coding the adolescent’s suicide stage (depressed mood, suicide ideation, suicide plan, suicide attempt) as an ordered categorical variable taking the values 1, 2, 3 and 4.
Suicide stages were coded according to the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation; the participant expressed no suicidal intention either explicitly or implicitly (neither spontaneously nor when asked by the official Weibo account). (b) Suicide ideation: suicidal intent is present, but no suicide plan has formed; under the influence of an intense depressive mood, the idea of escaping pain through suicide arises, but there is no specific time or method for carrying it out. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed plan for suicide, such as when, where, and how to do it. (d) Suicide attempt: at least one suicide attempt has been made with the aim of ending one’s own life.
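As noted above, the labeling scheme for indicators (1)-(3) can be illustrated with a few lines of Python. This is not the authors' code; the field names and label strings are placeholders chosen only to show how the 0/1 dummy and the ordered 1-4 stage variable fit together.

```python
# Minimal illustration of the coding scheme described above (not the authors' code).
SUICIDE_STAGES = {"depressed_mood": 1, "suicide_ideation": 2,
                  "suicide_plan": 3, "suicide_attempt": 4}

def encode_user(is_adolescent, diagnosed_mental_illness, stage_label):
    """Return one coded record, or None if the user is not retained (not an adolescent)."""
    if not is_adolescent:
        return None                                                  # indicator (1)
    return {
        "mental_illness": 1 if diagnosed_mental_illness else 0,      # indicator (2): dummy Z
        "suicide_stage": SUICIDE_STAGES[stage_label],                # indicator (3): ordered Y
    }
```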
We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\n2.3.2. Getting the Weibo Text Data We scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.\nWe scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. 
In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.\n2.3.1. Labeling the Interview Records Data During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). 
(b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\nDuring the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\n2.3.2. 
Getting the Weibo Text Data We scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.\nWe scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.\n2.4. Data Analysis 2.4.1. Qualitative Analysis Through the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\nThrough the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\n2.4.2. Analysis of Suicide Path The mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).\nThe mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).\n2.4.1. 
Qualitative Analysis Through the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\nThrough the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\n2.4.2. Analysis of Suicide Path The mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).\nThe mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).", "Through the online psychological assistance project, we obtained the interview records data of 3627 Weibo users, and then processed the user’s interview records data and Weibo texts data. Finally, we retained 105 high-suicide-risk adolescents with enough Weibo texts as the experimental group.\nEthical Statement: Participants gave informed consent when agreeing to take part in the online psychological assistance project. The project received ethical approval from the Institution-al Review Board of the Institute of Psychology, Chinese Academy of Sciences, with the ethics approval number H16003.", "We used the suicide signal recognition model established by the Computational Cyber-Psychology Lab (CCPL) to identify the suicide signal of a single microblog, which can effectively determine whether the single microblog has shown suicidal risk [26].\nSince November 2016, we had used the suicide signal recognition model for online identification of Weibo comments. The last Weibo post before the death of Weibo user “Zoufan”, as a place where people can express their negative emotions, has received more than 1 million comments until now, and the number of comments keep rising (Figure 2).\nFirstly, we used the suicide signal model to identify the suicide signal of the Weibo comments. Then, the comments identified by the model were re-identified by manual recognition. Finally, we used the official Weibo account “Psychological Map Psy” and sent a private message to the Weibo users marked as having a suicide risk. The contents of the private message mainly include inquiries about the user’s emotional state, expressions of concern, links to self-assessment psychological scales, and the hotline of the psychological crisis intervention center, etc. (Figure 3). In addition, users can choose whether to communicate with the voluntary counselor online according to their own wishes, so as to achieve agile psychological crisis intervention.", "2.3.1. Labeling the Interview Records Data During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. 
Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\nDuring the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) 
outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.\n2.3.2. Getting the Weibo Text Data We scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. 
The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.\nWe scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.", "During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way.\nThe interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. 
We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nDetermining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.\nDetermining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.\nCoding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.\nSuicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.", "We scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4).\nFinally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords.", "2.4.1. 
Qualitative Analysis Through the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\nThrough the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.\n2.4.2. Analysis of Suicide Path The mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).\nThe mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).", "Through the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group.", "The mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z).", "3.1. Group Characteristics Reflected by Qualitative Analysis The mental illness diagnosis of the experimental group was marked; it was found that 38 were diagnosed and 67 were undiagnosed. The proportion of users diagnosed with mental illness was 36.2%. Among the 38 high-suicide-risk adolescents diagnosed with mental illness, 29 users had depression (or depression was included in multiple mental illnesses), accounting for 76.3%; the rest included bipolar disorder (13.16%), anxiety disorder (13.16%), etc.; 9 of them suffered from multiple mental illnesses (e.g., depression and anxiety disorder), accounting for 23.68%.\nAmong the experimental group, 57 were in the stage of depressed mood (54.3%), 21 were in the stage of suicide ideation (20.0%), 19 were in the stage of suicide plan (18.1%), and 8 people were in the stage of suicide attempt (7.6%) (Table 2).\nThe mental illness diagnosis of the experimental group was marked; it was found that 38 were diagnosed and 67 were undiagnosed. The proportion of users diagnosed with mental illness was 36.2%. 
Among the 38 high-suicide-risk adolescents diagnosed with mental illness, 29 users had depression (or depression was included in multiple mental illnesses), accounting for 76.3%; the rest included bipolar disorder (13.16%), anxiety disorder (13.16%), etc.; 9 of them suffered from multiple mental illnesses (e.g., depression and anxiety disorder), accounting for 23.68%.\nAmong the experimental group, 57 were in the stage of depressed mood (54.3%), 21 were in the stage of suicide ideation (20.0%), 19 were in the stage of suicide plan (18.1%), and 8 people were in the stage of suicide attempt (7.6%) (Table 2).\n3.2. Path Analysis of Psychological Pain, Hopelessness and Suicide Stages A mediation effect test was performed on the path model (psychological pain → hopelessness → suicide stages). The performance of each fit index of the model was good (RMSEA < 0.08; CFI > 0.9; TLI > 0.9), indicating that the model fitting effect is excellent (Table 3).\nAfter controlling for mental illness factors, the total effect c was significant (β = −0.154, p < 0.05), indicating that there was a mediation effect; the path coefficients a and b were significant (β = 0.571, p < 0.001; β = −0.271, p < 0.05), but the direct effect c’ was not significant (β = 0.052, p > 0.05), indicating that there was a complete mediating effect of hopelessness. That is, psychological pain mainly affects suicide risk through hopelessness (Table 4).\nIt was worth noting that the relationship between the expression of hopelessness and the stages of suicide showed a significant negative correlation (β = −0.271, p < 0.05), indicating that the late suicide stage such as suicide attempt, which may be closer to suicide death, included less expressions of hopelessness (Table 4).\nFurthermore, there was a significant positive correlation between diagnosed mental illness and suicide stages (β = 0.398, p < 0.001), indicating the necessity of treating mental illness as a control variable in order to see the relationship between psychological pain, hopelessness and suicide stages more clearly. At the same time, it also showed that the later the suicide stage, the higher the diagnosis rate of mental illness (Table 4).\nA mediation effect test was performed on the path model (psychological pain → hopelessness → suicide stages). The performance of each fit index of the model was good (RMSEA < 0.08; CFI > 0.9; TLI > 0.9), indicating that the model fitting effect is excellent (Table 3).\nAfter controlling for mental illness factors, the total effect c was significant (β = −0.154, p < 0.05), indicating that there was a mediation effect; the path coefficients a and b were significant (β = 0.571, p < 0.001; β = −0.271, p < 0.05), but the direct effect c’ was not significant (β = 0.052, p > 0.05), indicating that there was a complete mediating effect of hopelessness. 
That is, psychological pain mainly affects suicide risk through hopelessness (Table 4).\nIt was worth noting that the relationship between the expression of hopelessness and the stages of suicide showed a significant negative correlation (β = −0.271, p < 0.05), indicating that the late suicide stage such as suicide attempt, which may be closer to suicide death, included less expressions of hopelessness (Table 4).\nFurthermore, there was a significant positive correlation between diagnosed mental illness and suicide stages (β = 0.398, p < 0.001), indicating the necessity of treating mental illness as a control variable in order to see the relationship between psychological pain, hopelessness and suicide stages more clearly. At the same time, it also showed that the later the suicide stage, the higher the diagnosis rate of mental illness (Table 4).", "The mental illness diagnosis of the experimental group was marked; it was found that 38 were diagnosed and 67 were undiagnosed. The proportion of users diagnosed with mental illness was 36.2%. Among the 38 high-suicide-risk adolescents diagnosed with mental illness, 29 users had depression (or depression was included in multiple mental illnesses), accounting for 76.3%; the rest included bipolar disorder (13.16%), anxiety disorder (13.16%), etc.; 9 of them suffered from multiple mental illnesses (e.g., depression and anxiety disorder), accounting for 23.68%.\nAmong the experimental group, 57 were in the stage of depressed mood (54.3%), 21 were in the stage of suicide ideation (20.0%), 19 were in the stage of suicide plan (18.1%), and 8 people were in the stage of suicide attempt (7.6%) (Table 2).", "A mediation effect test was performed on the path model (psychological pain → hopelessness → suicide stages). The performance of each fit index of the model was good (RMSEA < 0.08; CFI > 0.9; TLI > 0.9), indicating that the model fitting effect is excellent (Table 3).\nAfter controlling for mental illness factors, the total effect c was significant (β = −0.154, p < 0.05), indicating that there was a mediation effect; the path coefficients a and b were significant (β = 0.571, p < 0.001; β = −0.271, p < 0.05), but the direct effect c’ was not significant (β = 0.052, p > 0.05), indicating that there was a complete mediating effect of hopelessness. That is, psychological pain mainly affects suicide risk through hopelessness (Table 4).\nIt was worth noting that the relationship between the expression of hopelessness and the stages of suicide showed a significant negative correlation (β = −0.271, p < 0.05), indicating that the late suicide stage such as suicide attempt, which may be closer to suicide death, included less expressions of hopelessness (Table 4).\nFurthermore, there was a significant positive correlation between diagnosed mental illness and suicide stages (β = 0.398, p < 0.001), indicating the necessity of treating mental illness as a control variable in order to see the relationship between psychological pain, hopelessness and suicide stages more clearly. At the same time, it also showed that the later the suicide stage, the higher the diagnosis rate of mental illness (Table 4).", "4.1. The Qualitative Analysis Results Among the experimental group, 36.2% of users suffered from mental illness. Depression accounted for 76.3% of all mental illness; it is the most common mental illness among the high-suicide-risk adolescents. 
The incidence of suicide among students whose relatives were affected by psychosis was higher than that of general residents, and suicide and mental illness were closely linked [28]. The British doctor Sainsbury conducted a qualitative study of the records of 400 suicides examined by a coroner in London between 1936 and 1938, and found that 37% of the suicides reflected mental illness [29]. A meta-analysis conducted in 2004 included twenty-seven studies comprising 3275 suicides, of which 87.3% had been diagnosed with a mental disorder prior to their death [30]. In contrast, the population in our study was only a high-suicide-risk group, so the detection rate of mental illness was relatively low. Although the populations of the other two studies were both suicide decedents, the time interval between those studies was large; it is very possible that later studies had better methods for detecting mental illness among suicide decedents, which would explain why the detection rate increased so markedly. Furthermore, a more recent study showed that the mental illness most commonly associated with suicide was depression [31]. The proportion of depression among all mental illnesses in our study also supported this conclusion.
Another result of the qualitative analysis was that the later the suicide stage, the lower the proportion of adolescents in it. A study of residents’ suicidal ideation, suicide plans and suicide attempts in Dalian, China showed that the detection rates of suicidal ideation, suicide plans and suicide attempts decreased in that order across different groups [32]. This may be because not all suicidal ideation translates into suicidal action; most people with suicidal ideation have not attempted suicide [33,34].
4.2. The Mediating Effect of Hopelessness
Our research found that the effect of psychological pain on suicide stages was totally mediated by hopelessness, confirming our hypothesized suicide mechanism path: psychological pain → hopelessness → suicide stages. Although suicide ideation starts from psychological pain, psychological pain by itself is not enough to generate suicide ideation; hopelessness is also needed [19]. People make decisions and take actions based in part on their emotional predictions [35,36], and hopelessness makes people at suicide risk overestimate their future emotional pain [37]. If an individual experiences constant pain in life, this may reduce the desire to live and lead to suicidal thoughts. However, pain alone is not enough to extinguish the belief in life; only when individuals feel that their terrible situation cannot be improved and that they will live in pain forever will suicide be considered.
Thus, expressions of hopelessness may be a better predictor of suicide risk than psychological pain, and they offer a traceable pathway for suicide intervention. Based on Weibo posts, we can capture whether an individual has released suicide signals, and it may become possible to track the suicide stages, making suicide intervention more accurate.
4.3. The Hopelessness Expressions and the Suicide Stages
There was a negative predictive relationship between hopelessness expressions and suicide stages: adolescents at the later suicide stages expressed less hopelessness. One possible reason is that adolescents at higher risk of suicide may use fewer concrete, direct words of hopelessness to express such feelings.
From a linguistic point of view, Gao Yihong and Meng Ling analyzed the Weibo texts of “Zoufan” in the three months before the user’s suicide. They found that “Zoufan” often expressed a depressed mood in decontextualized and fragmented ways, and frequently used metaphors and intentions relating to death [38]. This euphemistic, poetic way of venting may be common among high-suicide-risk adolescents. Another possible reason is the “presuicidal syndrome”, which describes an “ominous quiet” in the short period before suicide. This is a sign of “affective restriction”, an extreme repression of intolerable feelings [39]; such restriction may be why people in the later suicide stages are less likely to express hopelessness. Moreover, adolescents at the later suicide stages may be more accepting and tolerant of psychological pain and hopelessness, which makes them less willing to talk about their pain and hopelessness; lacking supportive survival signals in their texts, they may lose the belief that help can be obtained.
Many researchers have paid attention to the use of social media for suicide prevention and intervention and have conducted a series of studies [40,41]. However, little attention has been paid to whether a suicidal signal comes from an early suicide ideator or a later suicide attempter. Our study fills this gap, complementing and extending suicide-related research. Our results suggest that adolescents who are at a later suicide stage, and therefore at higher suicide risk, often do not directly express much hopelessness. Therefore, closeness to suicide cannot be judged only by the level of traditional expressions of hopelessness.
Although this study could divide adolescents into suicide stages by labeling Weibo private message data, it could not reliably determine users’ gender. Previous studies have shown that suicidal behavior is also affected by gender, and that the incidence of suicide attempts in adolescent females is higher than in males [42]. Compared with males, females are more psychologically sensitive and vulnerable [43] and are more likely to express their pain through suicidal behaviors [44]. The value of hopelessness and psychological pain expressions as predictors of suicide may therefore differ by gender. In addition, there were only 105 adolescent users with high suicide risk in our study, so caution is necessary when generalizing the conclusions to other populations.
Our findings provide a traceable pathway for suicide intervention, showing that the effect of psychological pain on suicide stages was totally mediated by hopelessness and that adolescents in the later suicide stages expressed less hopelessness in the traditional sense. However, the specific language characteristics of each suicide stage are still unknown.
If we want to achieve precise prediction of suicide stages from expressions in Weibo posts, and thereby provide more accurate and effective information for suicide intervention, we need to further explore the specific language characteristics of high-suicide-risk adolescents at each suicide stage, especially how to identify abstract metaphors that may constitute a suicidal signal. It may then become possible to provide more accurate suicide intervention for adolescents at different suicide stages.
We mainly focused on the expression of two psychological indicators (psychological pain and hopelessness) on social media. However, many factors affect suicide, and suicide intervention can be studied from multiple perspectives, examining not only negative indicators but also positive ones in future research. For example, previous studies have shown that social support [45], psychological resilience [46] and self-compassion [47] are protective factors against adolescent suicide.
5. Conclusions
The qualitative findings suggested that mental illness in high-suicide-risk adolescents was strongly associated with suicide risk, and that depression was the mental illness most commonly associated with suicide. Another finding was that the later the suicide stage, the smaller the proportion of high-suicide-risk adolescents in it.
The results of the mediation effect analysis verified the suicide mechanism path psychological pain → hopelessness → suicide stages, indicating that psychological pain mainly affected suicide stages through hopelessness, and that hopelessness may be a better predictor of suicide risk for high-suicide-risk adolescents.
Notably, although adolescents in the later suicide stages were at greater suicide risk, they expressed less hopelessness in the traditional sense.
[ "intro", null, "subjects", null, null, null, null, null, null, null, "results", null, null, "discussion", null, null, null, null, "conclusions" ]
[ "high-suicide-risk adolescents", "psychological pain", "hopelessness", "suicide stages", "Weibo text analysis" ]
1. Introduction
Adolescent suicide has long been a global public health concern. Suicide is the fourth leading cause of death globally among people aged 15–19 [1], and in China it is the second leading cause of death among people aged 20–34 [2]. The consequences of adolescent suicide are the most serious of the many mental health problems: disability or loss of life for individuals, heavy blows and severe trauma for families, and large expenditures on the health, welfare and justice sectors for society. Studying suicide risk factors and exploring the suicide mechanisms of high-suicide-risk adolescents is therefore of great significance for suicide prevention.
With the development of technology, social media has become a platform for people to record their daily lives and express their emotions. In China, Sina Weibo, a typical product of the big-data era, has extensive influence. Sina Weibo is China’s most popular social media platform, similar to Twitter in the United States. Although some short-video platforms (e.g., Douyin) and forums (e.g., Douban) are also popular in China, only on Sina Weibo can users share information in real time and interact through text, pictures, videos and other multimedia. According to a report released by Sina Weibo, its users are trending younger, with the post-90s and post-00s youth groups accounting for nearly 80% of users and daily active users reaching 224 million [3]. Emotional expression on social platforms is more immediate and less masked, so identifying users’ psychological states from their social media expression is more convenient and quicker, and the data obtained are more authentic. Therefore, we used the Weibo text analysis method to explore the suicide mechanism path of high-suicide-risk adolescents. Text analysis refers to the representation of text and the selection of feature items, a basic problem in text mining and information retrieval; it means that feature words extracted from text can be quantified to represent text information. We can therefore extract the psychological indicators of interest from Weibo text by constructing a dictionary of psychological indicators.
Suicide represents a series of consecutive stages, including suicide ideation, suicide plan and suicide attempt [4,5]. Suicide ideation, as a high-risk factor for suicide, has a significant predictive effect [6,7]. Approximately one third of suicide ideators ultimately attempt suicide, and the number of suicide attempters is several times higher than the number of people who actually die by suicide, so a suicide attempt is a greater risk factor for suicide than suicidal ideation [1,8,9]. The later the suicide stage, the closer to suicide death and the greater the suicide risk. Therefore, in order to quantify the degree of suicide risk and provide accurate information for suicide interventions, we divided suicidal behavior into different stages. Considering that depressed mood is an important signal of suicide, we finally divided suicide into four stages in which the suicide risk gradually increases (depressed mood, suicide ideation, suicide plan and suicide attempt), and defined adolescents at these suicide stages as high-suicide-risk adolescents.
A meta-analysis of the causes of suicide among adolescents found that suicide is a public health problem involving complex factors, such as biology, psychology, family, society and culture [8].
Among the psychological factors, psychological pain and hopelessness are important risk factors that feature strongly in suicide research [9,10,11]. Although psychological pain and hopelessness are often mentioned together as influences on suicidal behavior, and as the two most common motivations for suicide attempts [12], it is unclear how these two proximal factors affect the suicide stages. Psychological pain is defined as the “introspective experience of negative emotions such as fear, despair, grief, shame, guilt, blocked love, loneliness and loss” [13,14,15,16]. When Shneidman first hypothesized the relationship between psychological pain and suicide, he pointed out that other factors can only affect suicidal behavior through psychological pain, without which suicide would not occur [13]. Evidence suggests that psychological pain plays a central role in suicidal behavior [10,17,18]. Psychological pain alone, however, is not enough to develop suicide ideation; hopelessness is also required [19]. Hopelessness is defined as negative expectations or pessimism about oneself or the future [20], and previous studies have shown that it is also an important risk factor for suicide [21,22,23]. To explore the relationship between psychological pain, hopelessness and suicide stages in high-suicide-risk adolescents, we hypothesized the suicide mechanism path psychological pain → hopelessness → suicide stages, with hopelessness as the mediating variable. In addition, because mental illness is highly correlated with suicidal behavior [24,25], and in order to better clarify the effect of psychological pain and hopelessness on suicide stages, the mediating effect model of our study includes mental illness as a control variable (Figure 1). We used the more ecological Weibo text analysis method in this study, hoping to further deepen research on adolescent suicide and to find a new theoretical basis for future suicide interventions drawing on online data.
2. Materials and Methods
2.1. Participants
Through the online psychological assistance project, we obtained the interview records of 3627 Weibo users and then processed each user’s interview records and Weibo texts. Finally, we retained 105 high-suicide-risk adolescents with sufficient Weibo text as the experimental group.
Ethical statement: participants gave informed consent when agreeing to take part in the online psychological assistance project. The project received ethical approval from the Institutional Review Board of the Institute of Psychology, Chinese Academy of Sciences (approval number H16003).
2.2. Online Psychological Assistance Project
We used the suicide signal recognition model established by the Computational Cyber-Psychology Lab (CCPL) to identify suicide signals in single microblogs; the model can effectively determine whether a single microblog shows suicidal risk [26]. Since November 2016, we have used this model for online identification of Weibo comments. The last Weibo post before the death of the Weibo user “Zoufan” has become a place where people express their negative emotions; it has received more than 1 million comments to date, and the number keeps rising (Figure 2). First, we used the suicide signal model to identify suicide signals in the Weibo comments. Then, the comments flagged by the model were re-checked by manual recognition. Finally, we used the official Weibo account “Psychological Map Psy” to send a private message to the Weibo users marked as having a suicide risk. The private message mainly included inquiries about the user’s emotional state, expressions of concern, links to self-assessment psychological scales, the hotline of the psychological crisis intervention center, and so on (Figure 3). In addition, users could choose whether to communicate online with a voluntary counselor according to their own wishes, so as to achieve agile psychological crisis intervention.
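The CCPL suicide signal recognition model itself is not described here in enough detail to reproduce. The following minimal sketch only illustrates the screen-then-review flow of the project, using a hypothetical keyword list to flag comments that would then be passed to manual review; all names and keywords are illustrative, not part of the original system.

    # Minimal sketch of the screen-then-review pipeline: flag comments that
    # contain risk keywords so that only flagged comments go to manual review.
    RISK_KEYWORDS = {"自杀", "想死", "活不下去", "遗书"}  # example terms only

    def flag_comment(comment: str) -> bool:
        """Return True if the comment contains any risk keyword."""
        return any(kw in comment for kw in RISK_KEYWORDS)

    def screen_comments(comments):
        """Yield (comment, flagged) pairs; flagged comments would be manually reviewed."""
        for c in comments:
            yield c, flag_comment(c)

    if __name__ == "__main__":
        sample = ["今天天气不错", "我真的活不下去了"]
        for c, flagged in screen_comments(sample):
            print(flagged, c)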
2.3. Data Collection and Processing
2.3.1. Labeling the Interview Records Data
During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, use of the interview records was strictly restricted: the experimenters had to protect the privacy of the research subjects, could not use any confidential information related to them (including but not limited to accounts, nicknames, contact information, personal homepages, network IPs, locations and message records) outside the scope of the research, and could not violate the subjects’ privacy in any way. The interview records were encoded by qualitative analysis; their basic content was the current status and problems of the users. We extracted three indicators from the interview records, as described below:
(1) Determining whether the user was at an adolescent stage (junior high school, high school, college), and retaining only the data labeled as adolescents.
(2) Determining whether the user had been diagnosed with a mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling mental illness as a 0/1 dummy variable.
(3) Coding the suicide stage (depressed mood, suicide ideation, suicide plan, suicide attempt) of each adolescent, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable.
Suicide stages were coded on the basis of the following criteria (a coding sketch is given after this list): (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation; participants did not express suicidal intention explicitly or implicitly (neither actively nor under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but no suicide plan has formed; affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time or way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life.
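To make the coding concrete, here is a minimal pandas sketch, assuming hypothetical column names for a labeled interview-records table; it encodes diagnosed mental illness as a 0/1 dummy and the four suicide stages as the ordered values 1–4 described above.

    # Minimal coding sketch; "diagnosed" and "stage_label" are hypothetical columns.
    import pandas as pd

    STAGE_ORDER = {"depressed mood": 1, "suicide ideation": 2,
                   "suicide plan": 3, "suicide attempt": 4}

    records = pd.DataFrame({
        "user_id": ["u1", "u2", "u3"],
        "diagnosed": [True, False, True],          # diagnosed with a mental illness?
        "stage_label": ["depressed mood", "suicide plan", "suicide attempt"],
    })

    records["mental_illness"] = records["diagnosed"].astype(int)        # 0/1 dummy variable
    records["suicide_stage"] = records["stage_label"].map(STAGE_ORDER)  # ordered 1-4
    print(records[["user_id", "mental_illness", "suicide_stage"]])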
2.3.2. Getting the Weibo Text Data
We scraped the Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo text for analysis, we took the first time the official Weibo account sent a private message to a user as the beginning of the suicide intervention, downloaded all Weibo texts from the 3 months before the intervention, and filtered out users with less than 300 words of Weibo text (Figure 4). Finally, 105 high-suicide-risk adolescents (the experimental group) were included in the analysis. We then removed stop words, performed Chinese word segmentation on the Weibo texts, and calculated word frequencies. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratios for psychological pain and hopelessness; the dictionary outline is shown in Table 1. The psychological pain dimension represents pain and torment at the psychological level and includes 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope and includes 188 keywords.
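The text-processing step can be sketched as follows. The paper does not name its segmenter or publish the full word lists, so jieba is used here purely for illustration, and STOPWORDS, PAIN_WORDS and HOPELESS_WORDS are small placeholders standing in for the real stop word list and the 403/188 dictionary keywords.

    # Minimal sketch: filter short users, segment, remove stop words, and
    # compute dictionary word-frequency ratios for pain and hopelessness.
    import jieba

    STOPWORDS = {"的", "了", "是"}          # placeholder stop word list
    PAIN_WORDS = {"痛苦", "煎熬", "崩溃"}    # placeholder for the 403 pain keywords
    HOPELESS_WORDS = {"绝望", "无望"}        # placeholder for the 188 hopelessness keywords

    def dictionary_ratios(weibo_text: str, min_words: int = 300):
        """Return (pain_ratio, hopelessness_ratio), or None if the user has too little text."""
        tokens = [t for t in jieba.lcut(weibo_text) if t.strip() and t not in STOPWORDS]
        if len(tokens) < min_words:
            return None  # user filtered out, as in Figure 4
        pain = sum(t in PAIN_WORDS for t in tokens)
        hopeless = sum(t in HOPELESS_WORDS for t in tokens)
        return pain / len(tokens), hopeless / len(tokens)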
2.4. Data Analysis
2.4.1. Qualitative Analysis
Using the labeling results of the interview records, we examined the mental illness situation and the distribution of suicide stages in the experimental group.
2.4.2. Analysis of Suicide Path
The mediation effect analysis was performed in Mplus, using the bootstrap method to test the suicide mechanism path model. The word frequencies of psychological pain and hopelessness in the experimental group were treated as the independent variable (X) and the mediating variable (M), respectively; the suicide stage was treated as the dependent variable (Y); and mental illness was included in the model as a control variable (Z).
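The original analysis was run in Mplus with the suicide stage as an ordered categorical outcome. As a simplified sketch of the same X → M → Y bootstrap logic with Z as a covariate, the illustration below uses ordinary least squares on the 1–4 stage score treated as numeric; all function names and data are ours, not part of the original analysis.

    # Simplified bootstrap mediation sketch: X = pain ratio, M = hopelessness ratio,
    # Y = suicide stage (1-4), Z = mental illness (0/1). Inputs are 1-D numpy arrays.
    import numpy as np

    def ols_coef(y, X):
        """Coefficients of y ~ [1, X] (intercept first), via least squares."""
        X1 = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X1, y, rcond=None)[0]

    def indirect_effect(x, m, y, z):
        """a*b indirect effect of x on y through m, controlling for z."""
        a = ols_coef(m, np.column_stack([x, z]))[1]      # x -> m
        b = ols_coef(y, np.column_stack([m, x, z]))[1]   # m -> y, given x and z
        return a * b

    def bootstrap_ci(x, m, y, z, n_boot=5000, seed=0):
        """Percentile bootstrap confidence interval for the indirect effect."""
        rng = np.random.default_rng(seed)
        n = len(y)
        est = [indirect_effect(x[i], m[i], y[i], z[i])
               for i in (rng.integers(0, n, n) for _ in range(n_boot))]
        return np.percentile(est, [2.5, 97.5])

Under this sketch, the complete-mediation pattern reported in the Results would correspond to a bootstrap interval for a*b that excludes zero while the direct effect of X on Y is not significant.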
Qualitative Analysis Through the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group. Through the labeling results of interview records data, we investigated the situation of mental illness and the distribution of suicide stages for the experimental group. 2.4.2. Analysis of Suicide Path The mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z). The mediation effect analysis was executed on the software Mplus by using the bootstrap method to test the suicide mechanism path model. The word frequency of psychological pain and hopelessness of the experimental group were treated as independent variable (X) and mediating variable (M), respectively. The suicide stages were treated as dependent variable (Y). The mental illness was included in the model as a control variable (Z). 2.1. Participants: Through the online psychological assistance project, we obtained the interview records data of 3627 Weibo users, and then processed the user’s interview records data and Weibo texts data. Finally, we retained 105 high-suicide-risk adolescents with enough Weibo texts as the experimental group. Ethical Statement: Participants gave informed consent when agreeing to take part in the online psychological assistance project. The project received ethical approval from the Institution-al Review Board of the Institute of Psychology, Chinese Academy of Sciences, with the ethics approval number H16003. 2.2. Online Psychological Assistance Project: We used the suicide signal recognition model established by the Computational Cyber-Psychology Lab (CCPL) to identify the suicide signal of a single microblog, which can effectively determine whether the single microblog has shown suicidal risk [26]. Since November 2016, we had used the suicide signal recognition model for online identification of Weibo comments. The last Weibo post before the death of Weibo user “Zoufan”, as a place where people can express their negative emotions, has received more than 1 million comments until now, and the number of comments keep rising (Figure 2). Firstly, we used the suicide signal model to identify the suicide signal of the Weibo comments. Then, the comments identified by the model were re-identified by manual recognition. Finally, we used the official Weibo account “Psychological Map Psy” and sent a private message to the Weibo users marked as having a suicide risk. The contents of the private message mainly include inquiries about the user’s emotional state, expressions of concern, links to self-assessment psychological scales, and the hotline of the psychological crisis intervention center, etc. (Figure 3). In addition, users can choose whether to communicate with the voluntary counselor online according to their own wishes, so as to achieve agile psychological crisis intervention. 2.3. Data Collection and Processing: 2.3.1. 
Labeling the Interview Records Data During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way. The interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable. Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents. Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable. Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable. Suicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life. During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. 
The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way. The interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable. Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents. Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable. Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable. Suicide stages were coded on the basis of the following criteria: (a) Depressed mood: feeling negative and depressed, but not yet having suicidal ideation. It means participants did not express suicidal intention explicitly or implicitly(whether they actively expressed suicidal intentions, or exposed suicidal intentions under inquiries from the official Weibo account). (b) Suicide ideation: there is suicidal intent, but suicide plan has not formed. Due to being affected by an intense depressive mood, the idea of avoiding pain through suicide arises, but there is no specific time and way to implement it. (c) Suicide plan: on the basis of suicidal ideation, there is a rough or detailed suicide plan, such as when, where, and how to commit suicide. (d) Suicide attempt: at least one suicide attempt with the aim of ending one’s own life. 2.3.2. Getting the Weibo Text Data We scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4). Finally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. 
The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords. We scraped Weibo texts published by the same 3627 users through the Application Programming Interface (API) of Sina Weibo. In order to obtain sufficient Weibo texts for analysis, we took the first time that the official Weibo account sent a private message to the user as the beginning of suicide intervention, downloaded all Weibo texts in the 3 months before the intervention, and filtered the users whose Weibo texts are less than 300 words (Figure 4). Finally, 105 high-suicide-risk adolescents (as the experimental group) were included in the analysis. We then removed stop words and performed Chinese word segmentation on the Weibo texts, and calculated word frequency. “The Chinese Suicide Dictionary” [27] was used to measure the word frequency ratio of their psychological pain and hopelessness. The dictionary outline is shown in Table 1. The psychological pain dimension represents the pain and torment of individuals at the psychological level, including 403 keywords; the hopelessness dimension represents the subjective feeling of losing hope, including 188 keywords. 2.3.1. Labeling the Interview Records Data: During the online psychological assistance project period (November 2016 to April 2019), the official Weibo account had communicated with 3627 users online, and 127,336 message records were accumulated. Adhering to the principle of privacy and confidentiality, the use of interview records data was strictly restricted. The experimenter must protect the privacy of the research subjects involved in the project, and should not use any confidential information related to the research subjects (including but not limited to accounts, nicknames, contact information, personal homepage, network IP, location, information records, etc.) outside the scope of the research, and would not violate the privacy of the subjects in any way. The interview records data were encoded by qualitative analysis. The basic content of interview records data were the current status and problems of users. We extracted three indicators from the interview records data, and the specific descriptions of them are shown below:(1)Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents.(2)Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable.(3)Coding the suicide stages (depressed mood, suicide ideation, suicide plan, suicide attempt) of adolescents, and marking the suicide stages as 1, 2, 3 and 4, an ordered categorical variable. Determining whether the user was in adolescent stages (junior high school, high school, college), and retaining the data labeled as the adolescents. Determining whether the user was diagnosed with mental illness by a psychiatrist, had a history of mental illness, or was taking medication for mental illness, and labeling a user had a mental illness or not as a 0/1 dummy variable. 
2.4. Data Analysis: 2.4.1. Qualitative Analysis: Using the labeling results of the interview records data, we investigated the prevalence of mental illness and the distribution of suicide stages in the experimental group. 2.4.2. Analysis of Suicide Path: The mediation effect analysis was carried out in Mplus, using the bootstrap method to test the suicide mechanism path model. The word frequencies of psychological pain and hopelessness in the experimental group were treated as the independent variable (X) and the mediating variable (M), respectively. The suicide stages were treated as the dependent variable (Y). Mental illness was included in the model as a control variable (Z).
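The authors fit this model in Mplus; the sketch below is only an illustrative re-implementation of the bootstrap logic in Python, not the authors’ code. It treats the 1–4 stage code as a numeric outcome and uses ordinary least squares for both path regressions, which simplifies how Mplus handles an ordinal dependent variable; X, M, Y and Z are assumed to be NumPy arrays holding the pain ratio, the hopelessness ratio, the stage code and the mental illness dummy, respectively.

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(X, M, Y, Z):
    """Product-of-coefficients estimate a*b for the path X -> M -> Y, controlling for Z."""
    ones = np.ones_like(X, dtype=float)
    # a: effect of X on M (regression M ~ 1 + X + Z)
    a = np.linalg.lstsq(np.column_stack([ones, X, Z]), M, rcond=None)[0][1]
    # b: effect of M on Y holding X constant (regression Y ~ 1 + X + M + Z)
    b = np.linalg.lstsq(np.column_stack([ones, X, M, Z]), Y, rcond=None)[0][2]
    return a * b

def bootstrap_indirect(X, M, Y, Z, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect (significant if the CI excludes 0)."""
    n = len(X)
    draws = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample participants with replacement
        draws[i] = indirect_effect(X[idx], M[idx], Y[idx], Z[idx])
    return np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```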
3. Results: 3.1. Group Characteristics Reflected by Qualitative Analysis: The mental illness diagnoses of the experimental group were marked; 38 participants had been diagnosed and 67 had not. The proportion of users diagnosed with mental illness was 36.2%. Among the 38 high-suicide-risk adolescents diagnosed with mental illness, 29 had depression (or depression was included among multiple mental illnesses), accounting for 76.3%; the rest included bipolar disorder (13.16%), anxiety disorder (13.16%), etc.; 9 of them suffered from multiple mental illnesses (e.g., depression and anxiety disorder), accounting for 23.68%. Among the experimental group, 57 were in the stage of depressed mood (54.3%), 21 were in the stage of suicide ideation (20.0%), 19 were in the stage of suicide plan (18.1%), and 8 were in the stage of suicide attempt (7.6%) (Table 2). 3.2. Path Analysis of Psychological Pain, Hopelessness and Suicide Stages: A mediation effect test was performed on the path model (psychological pain → hopelessness → suicide stages). Each fit index of the model was good (RMSEA < 0.08; CFI > 0.9; TLI > 0.9), indicating an excellent model fit (Table 3). After controlling for mental illness, the total effect c was significant (β = −0.154, p < 0.05), indicating that there was a mediation effect; the path coefficients a and b were significant (β = 0.571, p < 0.001; β = −0.271, p < 0.05), but the direct effect c’ was not significant (β = 0.052, p > 0.05), indicating a complete mediating effect of hopelessness. That is, psychological pain mainly affects suicide risk through hopelessness (Table 4). It is worth noting that the relationship between the expression of hopelessness and the suicide stages showed a significant negative correlation (β = −0.271, p < 0.05), indicating that the later suicide stages such as suicide attempt, which may be closer to suicide death, included fewer expressions of hopelessness (Table 4). Furthermore, there was a significant positive correlation between diagnosed mental illness and suicide stages (β = 0.398, p < 0.001), indicating the necessity of treating mental illness as a control variable in order to see the relationship between psychological pain, hopelessness and suicide stages more clearly. It also showed that the later the suicide stage, the higher the diagnosis rate of mental illness (Table 4).
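As a worked reading of these coefficients, the indirect (product-of-coefficients) effect implied by paths a and b is

\[
a \times b = 0.571 \times (-0.271) \approx -0.155,
\]

which is close in size to the reported total effect c = −0.154, while the direct effect c′ = 0.052 is not significant. Note that the additive decomposition c = c′ + a·b holds exactly only for linear paths, so with an ordinal stage outcome this reading is approximate.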
4. Discussion: 4.1. The Qualitative Analysis Results: Among the experimental group, 36.2% of users suffered from mental illness. Depression accounted for 76.3% of all mental illnesses; it is the most common mental illness among high-suicide-risk adolescents. The incidence of suicide among students whose relatives were affected by psychosis was higher than that of general residents. Suicide and mental illness were closely linked [28]. The British doctor Sainsbury conducted a qualitative study on the data of 400 suicides examined by a coroner in London between 1936 and 1938, and found that 37% of the suicides reflected mental illness [29]. In a meta-analysis conducted in 2004, twenty-seven studies comprising 3275 suicides were included, of which 87.3% had been diagnosed with a mental disorder prior to their death [30]. In contrast, the research population in our study was only a high-suicide-risk group, so the detection rate of mental illness was relatively low. Although the research populations of the other two studies were both suicide decedents, the time interval between the two studies was large. It is very possible that later studies had better methods for detecting mental illness in suicide decedents, so it is understandable that the detection rate of mental illness has greatly increased. Furthermore, a more recent study has shown that the most common mental illness associated with suicide was depression [31]. The proportion of depression among all mental illnesses in our study also supported this conclusion. Another result of the qualitative analysis showed that the later the suicide stage, the lower the proportion of adolescents. A study on residents’ suicidal ideation, suicide plans and suicide attempts in Dalian, China showed that the detection rates of suicidal ideation, suicide plan and suicide attempt decreased in that order in different groups [32]. This may be because not all suicidal ideation translates into suicidal action, and most people with suicidal ideation have not attempted suicide [33,34]. 4.2. The Mediating Effect of Hopelessness: Our research found that the effect of psychological pain on suicide stages was totally mediated by hopelessness, confirming our suicide mechanism path hypothesis: psychological pain → hopelessness → suicide stages. Although the first step toward suicide ideation starts from psychological pain, psychological pain itself is not enough to generate suicide ideation; hopelessness is also needed for suicide ideation to develop [19]. People make decisions and take actions based in part on their emotional predictions [35,36], and hopelessness makes people at suicide risk overestimate their future emotional pain [37]. If an individual experiences constant pain in life, this may reduce the individual’s desire to live, leading to suicidal thoughts. However, pain alone is not enough to completely destroy the belief in life. Only when individuals feel that the terrible situation cannot be improved, and that they will live in pain forever, will suicide be considered. Thus, expressions of hopelessness may be a better predictor of suicide risk than psychological pain, and we found a traceable pathway for suicide intervention. Based on Weibo posts, we can capture whether an individual has released suicide signals, and it may be possible to track the suicide stages in the future, making suicide intervention more accurate.
4.3. The Hopelessness Expressions and the Suicide Stages: There was a negative predictive relationship between expressions of hopelessness and suicide stages: adolescents at the later suicide stages had fewer expressions of hopelessness. One possible reason is that adolescents at higher risk of suicide may use less concrete, direct words of hopelessness to express such feelings. From a linguistic point of view, Gao Yihong and Meng Ling analyzed the Weibo texts of “Zoufan” three months before suicide. They found that “Zoufan” often used decontextualized and fragmented ways to express a personal depressed mood, and frequently expressed metaphors and intentions about death [38]. This euphemistic and poetic way of venting may be common among high-suicide-risk adolescents. Another possible reason is “the Presuicidal Syndrome”, which holds that there is an “ominous quiet” in the short period before suicide. This is a sign of “affective restriction”, which represents extreme repression of intolerable feelings [39]. Such “affective restriction” may be why people in the later suicide stages are less likely to express hopelessness. Moreover, adolescents at the later suicide stages may be more accepting and tolerant of psychological pain and hopelessness, which makes them less willing to talk about their psychological pain and hopelessness; lacking supportive survival signals through texts, they lose the belief that they can get help. Many researchers have paid attention to the use of social media for suicide prevention and intervention, and have conducted a series of studies [40,41]. However, little attention has been paid to whether a suicidal signal comes from an early suicide ideator or a later suicide attempter. Our study fills this gap, complementing and extending suicide-related research. Our results suggest that adolescents who are at a later suicide stage and have a higher suicide risk often do not directly express much hopelessness. Therefore, we cannot judge closeness to suicide only by the level of traditional expressions of hopelessness. 5. Limitations and Future Studies: Although this study could divide the suicide stages of adolescents by labeling Weibo private message data, it could not effectively determine the gender of the users.
Previous studies had shown that suicidal behaviors were also affected by gender, and that the incidence of suicide attempts in adolescent females was higher than that of males [42]. Compared with males, females were more psychologically sensitive and vulnerable [43], and were more likely to express their pain through suicidal behaviors [44]. The expressions of hopelessness and psychological pain as predictors of suicide may differ by gender. There were only 105 adolescent users with high suicide risk in our study. Caution is thus necessary when generalizing the conclusions to other cases. Our findings provided a traceable pathway for suicide intervention, showing that the effect of psychological pain on suicide stages was totally mediated by hopelessness, and that adolescents in the later suicide stages had fewer expressions of hopelessness in the traditional sense. However, the specific language characteristics of each suicide stage are still unknown. If we want to accomplish the precise prediction of suicide stages through the expressions in Weibo posts and provide more accurate and effective information for suicide intervention, we need to further explore the specific language expression characteristics of high-suicide-risk adolescents in each suicide stage, especially how to identify abstract metaphors that may constitute a suicidal signal. It is thus possible that we can provide more accurate suicide intervention for adolescents at different suicide stages in the future. We mainly focused on the social media expression of two psychological indicators (psychological pain and hopelessness). However, many factors affect suicide, and future research can approach suicide intervention from multiple perspectives, examining not only negative indicators but also positive ones. For example, previous studies have shown that social support [45], psychological resilience [46], and self-compassion [47] were protective factors for adolescent suicide. 6. Conclusions: The qualitative findings suggested that mental illness in high-suicide-risk adolescents was strongly associated with suicide risk, and that depression was the most common mental illness associated with suicide. Another finding was that the later the suicide stage, the smaller was the proportion of high-suicide-risk adolescents. The results of the mediation effect analysis verified the suicide mechanism path: psychological pain → hopelessness → suicide stages, indicating that psychological pain mainly affected suicide stages through hopelessness, and hopelessness may be a better predictor of suicide risk for high-suicide-risk adolescents. Notably, although adolescents in the later suicide stages were at greater suicide risk, they had less expression of hopelessness in the traditional sense.
Background: Adolescent suicide can have serious consequences for individuals, families and society, so we should pay attention to it. As social media becomes a platform for adolescents to share their daily lives and express their emotions, online identification and intervention of adolescent suicide problems become possible. In order to find the suicide mechanism path of high-suicide-risk adolescents, we explore the factors that influence it, especially the relations between psychological pain, hopelessness and suicide stages. Methods: We identified high-suicide-risk adolescents through machine learning model identification and manual identification, and used the Weibo text analysis method to explore the suicide mechanism path of high-suicide-risk adolescents. Results: Qualitative analysis showed that 36.2% of high-suicide-risk adolescents suffered from mental illness, and depression accounted for 76.3% of all mental illnesses. The mediating effect analysis showed that hopelessness played a complete mediating role between psychological pain and suicide stages. In addition, hopelessness was significantly negatively correlated with suicide stages. Conclusions: Mental illness (especially depression) in high-suicide-risk adolescents is closely related to suicide stages; the later the suicide stage, the higher the diagnosis rate of mental illness. The suicide mechanism path in high-suicide-risk adolescents is: psychological pain → hopelessness → suicide stages, indicating that psychological pain mainly affects suicide risk through hopelessness. Adolescents who are later in the suicide stages have fewer expressions of hopelessness in the traditional sense.
1. Introduction: Adolescent suicide has always been a global public health concern. Suicide is the fourth leading cause of death globally among people aged 15–19 [1], and in China, it is the second leading cause of death for people aged 20–34 [2]. The consequences of adolescent suicide are the most serious among many mental health problems. It means disability or even a loss of life for individuals, heavy blows and severe trauma for families, and huge expenditures on the health, welfare and justice sectors for society. The study of suicide risk factors and the exploration of suicide mechanisms in high-suicide-risk adolescents are of great significance for the prevention of suicide. With the development of technology, social media has become a platform for people to record their daily life and express their emotions. In China, Sina Weibo, as a typical product of the big data era, has extensive influence. Sina Weibo is China’s most popular social media platform, just like Twitter in America. Although there are some short video platforms (e.g., Douyin) and forums (e.g., Douban) that are also popular in China, it is only on Sina Weibo that users can share information in real-time and communicate interactively in the forms of text, pictures, videos and other multimedia. According to a report released by Sina Weibo, Weibo users show a younger trend, of which the post-90s and post-00s youth groups account for nearly 80%, and the daily active volume can reach 224 million [3]. Users’ emotional expression on social platforms has higher immediacy and less masking. Identifying users’ psychological states through their expression on social media is more convenient and quicker, and the obtained data are more authentic. Therefore, we use the Weibo text analysis method to explore the suicide mechanism path of high-suicide-risk adolescents. Text analysis refers to the representation of text and the selection of feature items, which is a basic problem in text mining and information retrieval. Text analysis means that we can quantify the feature words extracted from text to represent text information. Therefore, we can extract the psychological indicators we want from the Weibo text by constructing a dictionary of psychological indicators. Suicide represents a series of consecutive stages, including suicide ideation, suicide plan, and suicide attempt [4,5]. Suicide ideation, as a high-risk factor for suicide, has a significant predictive effect [6,7]. Approximately 1/3 of suicide ideators ultimately attempt suicide, and the number of suicide attempters is several times higher than the number of those who really die by suicide, so suicide attempt is a greater risk factor for suicide than suicidal ideation [1,8,9]. The later the suicide stage, the closer to suicide death, and the greater the risk of suicide. Therefore, in order to quantify the degree of suicide risk and provide accurate information for suicide interventions, we decided to divide suicidal behavior into different stages. At the same time, considering that depressed mood is an important signal of suicide, we finally divided suicide into 4 stages, in which the suicide risk gradually increases: depressed mood, suicide ideation, suicide plan and suicide attempt, and we defined adolescents at these different suicide stages as high-suicide-risk adolescents. A meta-analysis of the causes of suicide among adolescents found that suicide is a public health problem involving complex factors, such as biology, psychology, family, society and culture [8].
Among the psychological factors, psychological pain and hopelessness are important risk factors featuring strongly in suicide research [9,10,11]. Although psychological pain and hopelessness are often mentioned together as having an impact on suicidal behavior, as the two most common motivations for suicide attempts [12], it is unclear how these two proximal factors of suicide affect the suicide stages. Psychological pain is defined as the “introspective experience of negative emotions such as fear, despair, grief, shame, guilt, blocked love, loneliness and loss” [13,14,15,16]. When Shneidman first hypothesized the relationship between psychological pain and suicide, he pointed out that other factors can only affect suicidal behavior through psychological pain, without which suicide would not occur [13]. Evidence suggests that psychological pain plays a central role in suicidal behavior [10,17,18]. Psychological pain alone is not enough to develop suicide ideation; it also requires hopelessness [19]. Hopelessness is defined as negative expectations or pessimism about oneself or the future [20]. Previous studies have shown that hopelessness was also an important risk factor for suicide [21,22,23]. To explore the relationship between psychological pain, hopelessness and suicide stages for high-suicide-risk adolescents, we established a hypothesis for the suicide mechanism path: psychological pain → hopelessness → suicide stages, with hopelessness as the mediating variable. In addition, there is a high correlation between mental illness and suicidal behaviors [24,25], and in order to better clarify the effect of psychological pain and hopelessness on suicide stages, a mediating effect model is proposed in our study with mental illness as a control variable (Figure 1). We used the more ecological Weibo text analysis method in this study, and hoped to further deepen the research on adolescent suicide and to find a new theoretical basis for future suicide interventions drawing on online data. 6. Conclusions: The qualitative findings suggested that mental illness in high-suicide-risk adolescents was strongly associated with suicide risk, and that depression was the most common mental illness associated with suicide. Another finding was that the later the suicide stage, the smaller was the proportion of high-suicide-risk adolescents. The results of the mediation effect analysis verified the suicide mechanism path: psychological pain → hopelessness → suicide stages, indicating that psychological pain mainly affected suicide stages through hopelessness, and hopelessness may be a better predictor of suicide risk for high-suicide-risk adolescents. Notably, although adolescents in the later suicide stages were at greater suicide risk, they had less expression of hopelessness in the traditional sense.
Background: Adolescent suicide can have serious consequences for individuals, families and society, so we should pay attention to it. As social media becomes a platform for adolescents to share their daily lives and express their emotions, online identification and intervention of adolescent suicide problems become possible. In order to find the suicide mechanism path of high-suicide-risk adolescents, we explore the factors that influence it, especially the relations between psychological pain, hopelessness and suicide stages. Methods: We identified high-suicide-risk adolescents through machine learning model identification and manual identification, and used the Weibo text analysis method to explore the suicide mechanism path of high-suicide-risk adolescents. Results: Qualitative analysis showed that 36.2% of high-suicide-risk adolescents suffered from mental illness, and depression accounted for 76.3% of all mental illnesses. The mediating effect analysis showed that hopelessness played a complete mediating role between psychological pain and suicide stages. In addition, hopelessness was significantly negatively correlated with suicide stages. Conclusions: Mental illness (especially depression) in high-suicide-risk adolescents is closely related to suicide stages; the later the suicide stage, the higher the diagnosis rate of mental illness. The suicide mechanism path in high-suicide-risk adolescents is: psychological pain → hopelessness → suicide stages, indicating that psychological pain mainly affects suicide risk through hopelessness. Adolescents who are later in the suicide stages have fewer expressions of hopelessness in the traditional sense.
13,205
283
[ 4224, 249, 1521, 557, 195, 227, 27, 80, 175, 306, 366, 231, 356, 367 ]
19
[ "suicide", "mental", "illness", "mental illness", "stages", "weibo", "psychological", "hopelessness", "pain", "suicide stages" ]
[ "china sina weibo", "frequency chinese suicide", "media suicide prevention", "adolescent suicide global", "adolescents weibo texts" ]
null
[CONTENT] high-suicide-risk adolescents | psychological pain | hopelessness | suicide stages | Weibo text analysis [SUMMARY]
null
[CONTENT] high-suicide-risk adolescents | psychological pain | hopelessness | suicide stages | Weibo text analysis [SUMMARY]
[CONTENT] high-suicide-risk adolescents | psychological pain | hopelessness | suicide stages | Weibo text analysis [SUMMARY]
[CONTENT] high-suicide-risk adolescents | psychological pain | hopelessness | suicide stages | Weibo text analysis [SUMMARY]
[CONTENT] high-suicide-risk adolescents | psychological pain | hopelessness | suicide stages | Weibo text analysis [SUMMARY]
[CONTENT] Adolescent | Emotions | Humans | Pain | Risk Factors | Self Concept | Social Media | Suicide [SUMMARY]
null
[CONTENT] Adolescent | Emotions | Humans | Pain | Risk Factors | Self Concept | Social Media | Suicide [SUMMARY]
[CONTENT] Adolescent | Emotions | Humans | Pain | Risk Factors | Self Concept | Social Media | Suicide [SUMMARY]
[CONTENT] Adolescent | Emotions | Humans | Pain | Risk Factors | Self Concept | Social Media | Suicide [SUMMARY]
[CONTENT] Adolescent | Emotions | Humans | Pain | Risk Factors | Self Concept | Social Media | Suicide [SUMMARY]
[CONTENT] china sina weibo | frequency chinese suicide | media suicide prevention | adolescent suicide global | adolescents weibo texts [SUMMARY]
null
[CONTENT] china sina weibo | frequency chinese suicide | media suicide prevention | adolescent suicide global | adolescents weibo texts [SUMMARY]
[CONTENT] china sina weibo | frequency chinese suicide | media suicide prevention | adolescent suicide global | adolescents weibo texts [SUMMARY]
[CONTENT] china sina weibo | frequency chinese suicide | media suicide prevention | adolescent suicide global | adolescents weibo texts [SUMMARY]
[CONTENT] china sina weibo | frequency chinese suicide | media suicide prevention | adolescent suicide global | adolescents weibo texts [SUMMARY]
[CONTENT] suicide | mental | illness | mental illness | stages | weibo | psychological | hopelessness | pain | suicide stages [SUMMARY]
null
[CONTENT] suicide | mental | illness | mental illness | stages | weibo | psychological | hopelessness | pain | suicide stages [SUMMARY]
[CONTENT] suicide | mental | illness | mental illness | stages | weibo | psychological | hopelessness | pain | suicide stages [SUMMARY]
[CONTENT] suicide | mental | illness | mental illness | stages | weibo | psychological | hopelessness | pain | suicide stages [SUMMARY]
[CONTENT] suicide | mental | illness | mental illness | stages | weibo | psychological | hopelessness | pain | suicide stages [SUMMARY]
[CONTENT] suicide | text | psychological | risk | factors | psychological pain | suicidal behavior | behavior | text analysis | health [SUMMARY]
null
[CONTENT] suicide | indicating | significant | mental | 05 | stage suicide | table | stage | effect | illness [SUMMARY]
[CONTENT] suicide | suicide risk | risk | adolescents | hopelessness | associated | associated suicide | suicide risk adolescents | risk adolescents | high suicide [SUMMARY]
[CONTENT] suicide | mental | illness | mental illness | hopelessness | stages | weibo | pain | psychological | suicide stages [SUMMARY]
[CONTENT] suicide | mental | illness | mental illness | hopelessness | stages | weibo | pain | psychological | suicide stages [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] 36.2% | 76.3% ||| ||| [SUMMARY]
[CONTENT] ||| pain→ ||| [SUMMARY]
[CONTENT] ||| ||| ||| Weibo ||| ||| 36.2% | 76.3% ||| ||| ||| ||| pain→ ||| [SUMMARY]
[CONTENT] ||| ||| ||| Weibo ||| ||| 36.2% | 76.3% ||| ||| ||| ||| pain→ ||| [SUMMARY]
A pilot randomized controlled trial of dialectical behavior therapy (DBT) for reducing craving and achieving cessation in patients with marijuana use disorder: feasibility, acceptability, and appropriateness.
33844901
Thailand Registry of Clinical Trials, TCTR20200319007.
CLINICAL TRIAL REGISTRATION
Sixty-one patients with marijuana use disorder diagnoses were randomly assigned to a DBT group or a control group (psycho-education). Patients completed measures at pre-intervention, post-intervention, and at two-month follow-up. The Marijuana Craving Questionnaire (MCQ) and marijuana urine test kits were used to assess craving and abstinence respectively.
METHODS
The feasibility of DBT was significantly higher than that of the control group. In the DBT group, 29/30 participants completed all sessions (96% retention), while 24/31 control group participants completed all sessions (77% retention) (χ2 = 4.95, p = 0.02). Moreover, 29/30 (96%) participants in the DBT group completed the two-month follow-up and 20/31 (64.5%) control group members completed the two-month follow-up (χ2 = 9.97, p = 0.002). The results showed that patients in the DBT group had significantly higher intervention acceptability rates (16.57 vs. 9.6) than those in the control group. This pattern was repeated for appropriateness rates (p < 0.05). The overall results for craving showed that there was no significant difference between the groups (F = 3.52, p > 0.05), although DBT showed a significant reduction in the "emotionality" subscale compared to the control group (F = 19.94, p < 0.05). To analyze cessation rates, DBT was compared to the control group at the posttest (46% vs. 16%) and follow-up (40% vs. 9.5%), and the results confirmed higher effectiveness in the DBT group for cessation (p < 0.05). Furthermore, among those who had lapsed, participants in the DBT group had fewer consumption days than those in the control group (p < 0.05).
RESULTS
DBT showed feasibility, acceptability, and promising efficacy in terms of the marijuana cessation rate.
CONCLUSIONS
[ "Behavior Therapy", "Craving", "Dialectical Behavior Therapy", "Feasibility Studies", "Humans", "Marijuana Use", "Pilot Projects", "Treatment Outcome" ]
8835386
Introduction
Marijuana is the most prevalent substance among those reported to be a significant problem among people seeking treatment for substance abuse. 1 According to WHO reports, more than 140 million people consume marijuana every year. 2 With regard to Iran, recent evidence shows that more than 5% of people consume marijuana every year, predominantly young males. However, in view of the harsh marijuana prohibition policy of the Iranian government, most clinicians estimate that these rates have been hugely underestimated. 3 Marijuana, as an illegal drug, is associated with significant physical, psychological, and social consequences. 4 Studies have shown that regular and heavy marijuana use patterns correlate with increased risk of mood disorders, anxiety, and psychotic episodes and although causality has not been demonstrated, these patterns can increase the course of mental health problems. 5 Also, several medical problems such as respiratory system deficits, stroke, myocardial infarction, and digestive tract cancers are associated with marijuana use patterns, especially among those with marijuana use disorder (MUD). 6 , 7 Approximately one in three marijuana users meet the criteria for MUD based on the DSM-5, and this proportion is rising. 8 One of the most important psychological problems in substance use disorder treatment is craving. Craving is a factor identified as the root cause of relapses and treatment failures. 9 , 10 MUD patients report visual, tactile, and olfactory cues related to craving and compulsivity sensations. 11 Based on these results, clinicians have tried to treat patients with marijuana use disorder. To date, the Food and Drug Administration (FDA) in the United States has not approved any psycho-pharmacotherapy for MUD, and therefore psycho-social interventions have received particular attention. 12 The most widely used psychological treatment in the substance use disorder (SUD) context is cognitive-behavioral therapy (CBT). 13 Results showed that CBT is somewhat effective for SUD, but that most patients with MUD do not achieve cessation and are not motivated to continue skills training during follow-up. Relapse rates therefore remain a considerable limitation of treatment. 10 , 13 One of the main reasons for this low success rate lies in the limitations of CBT. First, CBT protocols do not focus on comorbid problems, whereas most patients with SUD have at least one psychiatric or psychological problem. Secondly, cognitive restructuring may not be useful for all SUD patients. Some patients may be unable to restructure their dysfunctional cognition and core beliefs despite receiving CBT. 10 , 12 , 13 Furthermore, emotion regulation deficits are strongly associated with increased addictive behaviors such as SUD. With emotion regulation, people adjust their emotional experiences related to distressing and unpleasant events. Emotion regulation is essential for successful coping with environmental demands and personal welfare. 14 On the behavioral level, studies have found that marijuana craving cues are strongly associated with deficits in regulation of negative affect and emotions. 14 , 15 Also, on the neural level, during reappraisal of negative stimuli, patients with MUD and regular users have shown altered neural activity and functional connectivity. Moreover, marijuana use is related to dysfunctions in the amygdala and in amygdala-dorsolateral prefrontal cortex (DLPFC) coupling activity. 
15 Together, these findings demonstrate that emotion-based psychotherapy must manage comorbid problems and eliminate the limitations of CBT. One of the psychotherapies in the third wave behavioral therapy cluster is dialectical behavior therapy (DBT). DBT has been described as intervention in emotion regulation deficits by focusing on dangerous impulses in borderline personality disorder and substance use disorder. The goals of DBT include improving and regulating emotions as a primary mechanism of change. DBT is a trans-diagnostic treatment and suitable for comorbid problems. DBT trains skills including distress tolerance, interpersonal effectiveness, emotion regulation, and mindfulness. Overall, in the context of SUD, DBT teaches emotion regulation skills to decrease engagement in pathological emotion regulation strategies. It also intervenes in low quality of life situations, reduces drug-seeking behavior, and helps patients function adaptively by accepting unpleasant emotions such as craving. 9 , 10 , 14 Research literature shows the efficacy and effectiveness of DBT in various comorbid problems and diseases such as suicide, 16 forensic psychiatric patients, 17 and irritable bowel syndrome. 18 Nevertheless, studies have reported contradictory results for the effectiveness of implementing DBT in various SUD populations. 19 , 20 Furthermore, the literature has recommended using larger samples, clearer instruments to measure outcome variables, and specific and integrated protocols. 21 Additionally, according to our investigations, no DBT randomized clinical trials have been conducted that investigated cessation in MUD patients (with or without comorbid problems). A DBT intervention aimed at increasing the cessation rate and reducing craving among MUD patients was developed for this study. This pilot trial investigated the feasibility and preliminary efficacy of DBT relative to a psycho-education intervention that was controlled for time duration and attention. Feasibility was assessed via satisfaction and session completion rates. Preliminary efficacy was evaluated via the impact of DBT on cessation rate and reduction of consumption rates, compared to the psycho-education intervention. Although craving and acceptance of craving are not the primary goals of DBT, they were also compared across the two interventions.
Methods
Trial design This study was designed as a controlled randomized clinical trial, including pretest, post-test, and two-month follow-up phases. This study was designed as a controlled randomized clinical trial, including pretest, post-test, and two-month follow-up phases. Sample size Since the sampling method comprises snowball sampling and strict eligibility criteria were applied, on the basis of data from similar studies 10 it was determined that at least 20 participants were needed in each group. However, in view of the predicted retention rates, we selected 30 patients for each group. Since the sampling method comprises snowball sampling and strict eligibility criteria were applied, on the basis of data from similar studies 10 it was determined that at least 20 participants were needed in each group. However, in view of the predicted retention rates, we selected 30 patients for each group. Selection criteria The inclusion criteria were as follows: 1) diagnosis of marijuana use disorder; 2) age 18 years or over; 3) no current or past history of major psychiatric disorders; 4) no other concurrent SUD treatment; and 5) willingness to attend intervention sessions, complete surveys, and take tests (questionnaires and urine test kits). Exclusion criteria were as follows: 1) unwillingness to participate; 2) not participating in intervention sessions for more than two weeks; 3) starting secondary psychotherapy; and 4) consuming methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, or morphine during the research stages. The inclusion criteria were as follows: 1) diagnosis of marijuana use disorder; 2) age 18 years or over; 3) no current or past history of major psychiatric disorders; 4) no other concurrent SUD treatment; and 5) willingness to attend intervention sessions, complete surveys, and take tests (questionnaires and urine test kits). Exclusion criteria were as follows: 1) unwillingness to participate; 2) not participating in intervention sessions for more than two weeks; 3) starting secondary psychotherapy; and 4) consuming methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, or morphine during the research stages. Participants, procedures, and randomization Since there are no cannabis use disorder treatment centers in Iran, there is no specific place to select patients. Furthermore, patients at drug treatment centers are referred for treatment of other substance use disorders and comorbidity of drug use is one of the exclusion criteria for this study, since it could lead to misleading results. Therefore, the relatives and acquaintances of those who had been referred to the drug treatment center were interviewed. From November 1, 2019, to November 5, 2019, 15 relatives and family members of drug users referred to drug treatment centers were diagnosed with MUD at this stage. Then, using snowball sampling, after 15 days of investigation, a total of 83 patients were diagnosed with MUD. Seventy-five of the 83 MUD patients who were approached consented, eight declined to participate, and 14 were ineligible. The primary reasons for declining were anxiety about addiction stigma and time constraints. Most of the ineligible patients had multiple illicit use disorders, so they did not meet the study criteria. Therefore, 61 patients completed the baseline assessment and were included in the current analyses. These patients were randomly assigned to each group using a random number table. The interventions were implemented from December 1, 2019, to March 20, 2020. 
The follow-up phase started on March 21, 2020, and ended on May 20, 2020, (at two months’ follow-up). In order to test for exclusion criteria before each session, a six-drug test kit for methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, and morphine was administered to individuals using urine samples. Since there are no cannabis use disorder treatment centers in Iran, there is no specific place to select patients. Furthermore, patients at drug treatment centers are referred for treatment of other substance use disorders and comorbidity of drug use is one of the exclusion criteria for this study, since it could lead to misleading results. Therefore, the relatives and acquaintances of those who had been referred to the drug treatment center were interviewed. From November 1, 2019, to November 5, 2019, 15 relatives and family members of drug users referred to drug treatment centers were diagnosed with MUD at this stage. Then, using snowball sampling, after 15 days of investigation, a total of 83 patients were diagnosed with MUD. Seventy-five of the 83 MUD patients who were approached consented, eight declined to participate, and 14 were ineligible. The primary reasons for declining were anxiety about addiction stigma and time constraints. Most of the ineligible patients had multiple illicit use disorders, so they did not meet the study criteria. Therefore, 61 patients completed the baseline assessment and were included in the current analyses. These patients were randomly assigned to each group using a random number table. The interventions were implemented from December 1, 2019, to March 20, 2020. The follow-up phase started on March 21, 2020, and ended on May 20, 2020, (at two months’ follow-up). In order to test for exclusion criteria before each session, a six-drug test kit for methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, and morphine was administered to individuals using urine samples. Blinding Both groups were blind to the existence of another group in the study. However, patients were informed about participating in research but not about another group. One day after the end of treatment, the post-test was carried out by mental health technicians with a master’s degree in psychology. Both groups were blind to the existence of another group in the study. However, patients were informed about participating in research but not about another group. One day after the end of treatment, the post-test was carried out by mental health technicians with a master’s degree in psychology. Outcome measures Abstinence A marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers. A marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers. Marijuana smoking A self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period. A self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. 
Craving
The Marijuana Craving Questionnaire (MCQ) short form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers four factors: compulsivity, emotionality, expectancy, and purposefulness. According to how they were thinking or feeling "right now," patients placed checkmarks on the questionnaire to endorse responses ranging from 1 (strongly disagree) to 7 (strongly agree). Previous results showed that the questionnaire's internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation, so the questionnaire scores in the current paper can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had an internal consistency of α = 0.87. Details of the MCQ's psychometric properties will be published in a separate study as soon as possible.

Acceptability
The Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of the interventions. AIM items are rated on a 5-point Likert scale (from Completely Disagree, 1 point, to Completely Agree, 5 points). The mean of the points scored across items is taken as the final score. The questionnaire was developed by Weiner et al., who reported Cronbach's α = 0.85 for internal consistency. 23

Appropriateness
The Intervention Appropriateness Measure (IAM) was used to measure appropriateness. The IAM consists of a four-item scale that measures perceived intervention appropriateness. Items are rated on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of the points scored across items is taken as the final score. Higher scores mean that the participant feels the intervention is more appropriate for him/her. For this tool, Cronbach's α = 0.91 and all factor loadings are reported as higher than 0.8. 23
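The scoring rules described above are simple to express in code. The sketch below is a minimal illustration; the MCQ item-to-factor mapping is not listed in this paper, so the item groupings used here are placeholders, not the instrument's actual keying.

```python
# Minimal scoring sketch, assuming one 1-7 response per MCQ item and 1-5 responses for AIM/IAM.
# The MCQ item-to-factor mapping below is hypothetical; the paper does not specify it.
from statistics import mean

def score_mcq(responses, factor_items):
    """responses: dict item_number -> rating (1-7); factor_items: dict factor -> item numbers."""
    subscales = {f: sum(responses[i] for i in items) for f, items in factor_items.items()}
    subscales["total"] = sum(v for k, v in subscales.items() if k != "total")
    return subscales

def score_aim_iam(responses):
    """AIM/IAM final score = mean of the 1-5 item ratings, per the instrument descriptions above."""
    return mean(responses)

# Hypothetical example
mcq_factors = {"compulsivity": [1, 2, 3], "emotionality": [4, 5, 6],
               "expectancy": [7, 8, 9], "purposefulness": [10, 11, 12]}
mcq_answers = {i: 4 for i in range(1, 13)}
print(score_mcq(mcq_answers, mcq_factors))   # {'compulsivity': 12, ..., 'total': 48}
print(score_aim_iam([4, 4, 5, 4]))           # 4.25, roughly "Agree"
```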
Intervention

Dialectical behavior therapy
DBT is a group intervention consisting of 16 sessions (meeting once a week for 90 minutes) with one psychotherapist and her co-therapist. The intervention protocol is an adaptation of DBT for SUD based on three basic manuals. 10, 24, 25 The primary objective of the DBT is to reduce dysfunction in emotion regulation and craving by increasing cessation rates and improving skills. A psychotherapist with a PhD delivered the intervention sessions (with a psychologist as co-therapist), and they were blind both to the existence of another group and to the study objectives. Table 1 shows the content covered in each DBT session.

Table 1. DBT content per session
Session | Content
Pre-session | Explanation of dialectical behavior therapy, its principles, and goals. Brief introduction to the content of each session. Familiarity with participants. Participants are given an intervention booklet to read at home.
1st session (mindfulness 1) | Introduce the concept of mindfulness and three mental states (wise, reasonable, emotional) and their relations with substance use.
2nd session (mindfulness 2) | Teach two clusters of mindfulness skills. The first includes viewing, participation, and description. The second includes a non-judgmental stance and inclusive self-consciousness.
3rd session | Summarize the mindfulness sessions: definition of addiction, standard therapies for addiction, introduction to and teaching of the dialectical avoidance technique. Review the positive and negative aspects of abstinence. Explanation and investigation of relapse and its causes. Explaining the skill of the pure mind, the addicted mind, the types of behaviors related to the pure and addicted mentalities, and preparing a list of supporters.
4th-5th sessions (distress tolerance) | Teaching distraction strategies with five skills: activities, comparisons, emotions, thoughts, and enjoyment. Through enjoyable activities, focusing on work or other topics, counting, leaving the situation, paying attention to daily tasks, and distracting from thoughts and self-harm behaviors; teaching and training self-soothing with the five senses.
6th-7th sessions (emotion regulation) | Definition of emotion, how emotions work, familiarity with emotion regulation skills. Emotion identification exercise, emotion registration exercise. Identifying barriers to experiencing emotion in a healthy way and ways to overcome these barriers. Teaching the creation of short-term positive emotional experiences for experiencing positive emotional states.
8th-10th sessions (emotion regulation and distress tolerance in an MUD context) | Explain craving and its connection to the experience of emotions. Introduce methods for identifying values. Importance of committed action based on a list of essential values in life. Develop new coping strategies in response to unpleasant emotions, sensations, and cognitions, especially craving as a multidimensional problem, and teach problem solving and behavior analysis.
11th session | Basic acceptance technique training. Introduce living-in-the-present-moment techniques.
12th-13th sessions | Interpersonal effectiveness training. Participants learn assertiveness skills relevant to substance users. Other skills include non-verbal communication, verbal communication, problem-solving, decision-making, and listening skills.
14th-16th sessions | Review of sessions. Elimination of ambiguities. Practicing skills in the presence of other people.

Psychoeducation
This option is more ethical than not offering any intervention to the control group. A psychiatrist with five years of experience in addiction psychotherapy implemented this intervention, which covers problem-solving skills, assertiveness, and craving management in eight sessions. The psychoeducation intervention was used to provide a basis for comparison with only those elements of the DBT intervention that differ from other psychotherapies. This intervention is used for MUD and health-related problems. Its intention is to give individuals struggling with craving and substance use disorder the knowledge needed to comprehensively appreciate their problems and the empowerment needed to cope with them. The psychoeducation intervention included information about the dangers of marijuana, and the therapists also provided a pamphlet containing techniques for reducing craving. 14, 26, 27

Therapists and treatment adherence
To enable adherence to the principles of DBT to be checked, audio recordings of the sessions were made with the consent of all participants. A DBT researcher and psychotherapist who was not involved in the treatment groups checked session content afterwards. Sessions were divided into 15-minute modules that were chosen at random for adherence checks. Treatment stance and the occurrence and depth of DBT processes were appraised. Based on the treatment manual, modules were rated as either adequate or not adequate for adherence. The majority (83%) were judged to have been conducted adequately.

Statistical method
Demographic information was gathered and reported as frequencies, means, and standard deviations, and repeated measures ANOVA and chi-square tests were conducted for the outcomes using SPSS software, version 26.

Ethical considerations
Written informed consent was obtained from all participants before initiation of the research. The tools used in this study were all filled out anonymously, and an ID code was used to maintain the confidentiality of personal information (Ir.kums.rce.1398.1203). At the end of the research process, dialectical behavior therapy was also provided to the control group. This study is registered with the Thailand Registry of Clinical Trials (TCTR20200319007).
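The adherence-check sampling described under Therapists and treatment adherence above (random selection of 15-minute modules, each rated adequate or not) can be sketched as follows. This is a hypothetical illustration only; the number of sampled modules and the ratings are invented, not taken from the study records.

```python
# Hypothetical sketch of the adherence-check sampling described above.
# Module counts and ratings are invented for illustration only.
import random

SESSION_MINUTES = 90
MODULE_MINUTES = 15
modules_per_session = SESSION_MINUTES // MODULE_MINUTES   # 6 modules per 90-minute session

rng = random.Random(0)
sessions = range(1, 17)                                    # 16 DBT sessions
sampled = {s: rng.randrange(modules_per_session) for s in sessions}  # one random module per session

# Ratings would come from the independent DBT rater; here they are made up.
ratings = {s: rng.random() > 0.2 for s in sessions}        # True = adequate
percent_adequate = 100 * sum(ratings.values()) / len(ratings)
print(f"{percent_adequate:.0f}% of sampled modules rated adequate")
```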
Results
Feasibility
In the psychoeducation group, 24/31 participants completed all sessions, compared to 29/30 members of the DBT group (retention rates: 77% in the control group vs. 96% in the DBT group). Additionally, 96% (29/30) of the DBT group completed the two-month follow-up, whereas 64.5% (20/31) of the control group completed follow-up (Figure 1). Chi-square tests showed associations between group and retention, with χ² = 4.95, p = 0.02 at post-treatment and χ² = 9.97, p = 0.002 at follow-up. Consequently, DBT retention rates were significantly higher than psychoeducation retention rates at post-treatment and follow-up.

Figure 1. Consort diagram.
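The retention chi-square statistics reported above can be reproduced from the completion counts. The sketch below assumes 2×2 completed/dropped tables with no continuity correction, which matches the reported values; it is an illustration rather than the authors' SPSS output.

```python
# Reproduces the retention chi-square tests from the completion counts reported above.
# Assumes 2x2 (group x completed/dropped) tables without Yates' continuity correction.
from scipy.stats import chi2_contingency

post_treatment = [[29, 1],   # DBT: 29 of 30 completed all sessions
                  [24, 7]]   # psychoeducation: 24 of 31 completed
follow_up = [[29, 1],        # DBT: 29 of 30 completed the two-month follow-up
             [20, 11]]       # psychoeducation: 20 of 31 completed

for label, table in [("post-treatment", post_treatment), ("follow-up", follow_up)]:
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"{label}: chi2 = {chi2:.2f}, p = {p:.3f}")
# post-treatment: chi2 = 4.95, p = 0.026
# follow-up:      chi2 = 9.97, p = 0.002
```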
Acceptability and appropriateness
To enable assessment of the acceptability and appropriateness of the interventions, patients completed the AIM and IAM scales in the post-treatment phase. The acceptability scores were 16.57 for DBT and 9.6 for the control group (p < 0.05). The appropriateness scores were 17.03 for DBT and 10.7 for the psychoeducation group (p < 0.05). Since there are no standards for these measures, the scores were transferred to Likert-based questionnaire scales. For acceptability, the results equated to "agree" for the DBT group versus "neither agree nor disagree" for the psychoeducation group. For appropriateness, the results equated to "completely agree" for the DBT group versus "neither agree nor disagree" for the psychoeducation group.

Participant characteristics
Participants' demographic variables are shown in Table 2. Analyses showed that there were no significant differences between the two groups regarding these variables. It should be noted that since over 97% of the participants were male from the beginning, the results were reported only for men.

Table 2. Mean and standard deviation of demographic variables in the intervention and control groups at the test phase
Variable | Intervention group | Control group | p-value
Educational level* | | | 0.2
  No higher education, n (%) | 2 (6) | 6 (19) |
  Diploma, n (%) | 14 (46) | 15 (48) |
  University student or graduate, n (%) | 14 (46) | 10 (33) |
Age† | 25.6 (5.67) | 27.19 (7.48) | 0.3
Months of marijuana use | 19.53 (5.9) | 17.48 (6.03) | 0.1
Craving (total)† | | |
  Pre-test | 45.2 (8.3) | 47.9 (10.2) | 0.2
  Post-test | 42.13 (7.7) | 44.48 (8.1) | 0.1
  Follow-up | 42.66 (9.25) | 45.8 (8.4) | 0.1
Data presented as mean (standard deviation), unless otherwise specified.
* Chi-square test. † Independent t test.

Efficacy outcomes
The hypothesis of equal covariance matrices was examined for craving (Box's M = 3.63, p = 0.74); the results indicate homogeneity of the covariance matrices. Mauchly's test of sphericity also showed that the sphericity assumption was not violated (Mauchly's W = 0.97, p = 0.42). The between-group and within-group results are presented in Table 3. As shown in Table 3, the effect for craving (F = 3.52, p > 0.05) suggests that there is no significant difference between groups. These results were repeated for three of the craving subscales: compulsivity, expectancy, and purposefulness. However, the emotionality subscale showed a significant reduction in the DBT group compared to the control group: 10.6 vs. 14.4 at post-test and 10.43 vs. 13.26 at follow-up (F = 19.94, p < 0.05).

Table 3. Repeated measures ANOVA for the DBT and control groups in the pre-test, post-test, and follow-up
Variable/source | Type III sum of squares | df | Mean square | F | Sig | Partial eta squared | Observed power
Craving, tests of within-subjects effects
  Factor1 | 347.696 | 2 | 173.84 | 2.6 | 0.07* | 0.04 | 0.512
  Factor1 × group | 4.76 | 2 | 2.38 | 0.036 | 0.11 | 0.001 | 0.055
  Error (factor1) | 7845.154 | 118 | 66.48 | | | |
Craving, tests of between-subjects effects
  Group | 341.084 | 1 | 341.08 | 3.52 | 0.06† | 0.056 | 0.455
  Error | 5708.79 | 59 | 96.75 | | | |
Emotionality, tests of within-subjects effects
  Factor1 | 257.33 | 2 | 128.65 | 11.69 | 0.00† | 0.165 | 0.9
  Factor1 × group | 74.70 | 2 | 37.35 | 0.37 | 0.03* | 0.05 | 0.63
  Error (factor1) | 1297.81 | 118 | 10.99 | | | |
Emotionality, tests of between-subjects effects
  Group | 283.89 | 1 | 283.89 | 19.94 | 0.00† | 0.25 | 0.9
  Error | 839.906 | 59 | 14.23 | | | |
* Significant at 0.05. † Significant at 0.01.
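The repeated measures analysis summarized in Table 3 was run in SPSS 26. A rough open-source analogue, assuming the data were reshaped to long format with one row per participant per time point, is sketched below; the column names and file are assumptions, not the study's actual files.

```python
# Rough open-source analogue of the SPSS mixed (between x within) ANOVA reported in Table 3.
# The DataFrame layout, column names, and file name are assumptions for illustration.
import pandas as pd
import pingouin as pg

df = pd.read_csv("craving_long.csv")   # hypothetical file with columns: id, group, time, craving

aov = pg.mixed_anova(data=df, dv="craving",
                     within="time",     # pre-test / post-test / follow-up
                     between="group",   # DBT vs. psychoeducation
                     subject="id")
print(aov[["Source", "F", "p-unc", "np2"]])   # np2 = partial eta squared
```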
With regard to cessation, the results indicated that DBT achieved a higher rate of cessation than the control treatment both at post-test and at follow-up (Table 4) (p < 0.05). It was also found that, among those who continued to use the drug, the number of use days per month in the post-test and follow-up periods (two months) was significantly lower in the intervention group than in the control group.

Table 4. Cessation and consumption between groups
 | DBT | Control | p-value
Cessation*
  Post-test | 14 (46%) | 5 (16%) | 0.01‡
  Follow-up | 12 (40%) | 3 (9.5%) | 0.006‡
Number of days of use†
  Post-test | 2.43±1.8 | 7.5±5.03 | 0.00‡
  Follow-up | 3.44±1.91 | 8.75±3.27 | 0.00‡
Bold p-values are significant at critical levels.
* Chi-square test. † t test for independent samples. ‡ Significant at 0.01.
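The days-of-use comparison in Table 4 can be approximated from the reported summary statistics alone. The sketch below assumes the per-group sample sizes of continuing users at post-test (completers minus abstainers), so the resulting p-value is approximate rather than the authors' exact result.

```python
# Illustrative re-analysis of the post-test "number of days of use" comparison in Table 4,
# using only the reported means and SDs. The group sizes are assumptions (completers who
# had not achieved cessation), so the p-value is approximate.
from scipy.stats import ttest_ind_from_stats

n_dbt, n_ctrl = 15, 19   # assumed: 29 - 14 DBT users, 24 - 5 control users at post-test

t, p = ttest_ind_from_stats(mean1=2.43, std1=1.8,  nobs1=n_dbt,
                            mean2=7.5,  std2=5.03, nobs2=n_ctrl,
                            equal_var=True)
print(f"post-test days of use: t = {t:.2f}, p = {p:.4f}")
```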
Conclusion
To conclude, DBT demonstrated adequate feasibility, acceptability, and appropriateness for patients with marijuana use disorder. Moreover, DBT also showed significant efficacy compared to the control group in achieving cessation and reducing emotion-related craving. Even in patients who could not achieve abstinence, DBT led to a reduction in marijuana consumption rates. These findings persisted at the two-month follow-up.

Limitations and future directions
Despite these positive results, the present study also has some limitations. First, in order to evaluate the most significant treatment components (such as mindfulness and distress tolerance), no groups received the third-wave versions of other therapies (ACT or MBSR). Second, this study only had a two-month follow-up period and could not conduct a long-term evaluation due to the study site's medical and infrastructure conditions. It is recommended that future research examine mediating and confounding variables when investigating results similar to those of the present study. Other factors affecting relapse and recurrence could also be examined. Moreover, women should be investigated so that gender-related implications can be determined.
[ "Trial design", "Sample size", "Selection criteria", "Participants, procedures, and randomization", "Blinding", "Outcome measures", "Abstinence", "Marijuana smoking", "Craving", "Acceptability", "Appropriateness", "Intervention", "Dialectical behavior therapy", "Psychoeducation", "Therapists and treatment adherence", "Statistical method", "Ethical considerations", "Feasibility", "Acceptability and appropriateness", "Participant characteristics", "Efficacy outcomes", "Limitations and future directions" ]
[ "This study was designed as a controlled randomized clinical trial, including pretest, post-test, and two-month follow-up phases.", "Since the sampling method comprises snowball sampling and strict eligibility criteria were applied, on the basis of data from similar studies 10 it was determined that at least 20 participants were needed in each group. However, in view of the predicted retention rates, we selected 30 patients for each group.", "The inclusion criteria were as follows: 1) diagnosis of marijuana use disorder; 2) age 18 years or over; 3) no current or past history of major psychiatric disorders; 4) no other concurrent SUD treatment; and 5) willingness to attend intervention sessions, complete surveys, and take tests (questionnaires and urine test kits).\nExclusion criteria were as follows: 1) unwillingness to participate; 2) not participating in intervention sessions for more than two weeks; 3) starting secondary psychotherapy; and 4) consuming methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, or morphine during the research stages.", "Since there are no cannabis use disorder treatment centers in Iran, there is no specific place to select patients. Furthermore, patients at drug treatment centers are referred for treatment of other substance use disorders and comorbidity of drug use is one of the exclusion criteria for this study, since it could lead to misleading results. Therefore, the relatives and acquaintances of those who had been referred to the drug treatment center were interviewed. From November 1, 2019, to November 5, 2019, 15 relatives and family members of drug users referred to drug treatment centers were diagnosed with MUD at this stage. Then, using snowball sampling, after 15 days of investigation, a total of 83 patients were diagnosed with MUD. Seventy-five of the 83 MUD patients who were approached consented, eight declined to participate, and 14 were ineligible. The primary reasons for declining were anxiety about addiction stigma and time constraints. Most of the ineligible patients had multiple illicit use disorders, so they did not meet the study criteria. Therefore, 61 patients completed the baseline assessment and were included in the current analyses. These patients were randomly assigned to each group using a random number table. The interventions were implemented from December 1, 2019, to March 20, 2020. The follow-up phase started on March 21, 2020, and ended on May 20, 2020, (at two months’ follow-up). In order to test for exclusion criteria before each session, a six-drug test kit for methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, and morphine was administered to individuals using urine samples.", "Both groups were blind to the existence of another group in the study. However, patients were informed about participating in research but not about another group. One day after the end of treatment, the post-test was carried out by mental health technicians with a master’s degree in psychology.", " Abstinence A marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers.\nA marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers.\n Marijuana smoking A self-report scale was designed for patients who had lapsed during the post-test follow-up. 
On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period.\nA self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period.\n Craving The Marijuana Craving Questionnaire (MCQ) short-form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers 4 factors: compulsivity, emotionality, expectancy, and purposefulness. According to how patients were thinking or feeling ‘’right now,’’ they placed checkmarks on the questionnaire to endorse responses ranging from 1 or strongly disagree to 7 or strongly agree. Results showed that this questionnaire’s internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation and so the current paper’s questionnaire scores can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ’s psychometrics properties will be published as a separate study as soon as possible.\nThe Marijuana Craving Questionnaire (MCQ) short-form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers 4 factors: compulsivity, emotionality, expectancy, and purposefulness. According to how patients were thinking or feeling ‘’right now,’’ they placed checkmarks on the questionnaire to endorse responses ranging from 1 or strongly disagree to 7 or strongly agree. Results showed that this questionnaire’s internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation and so the current paper’s questionnaire scores can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ’s psychometrics properties will be published as a separate study as soon as possible.\n Acceptability The Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of interventions. AIM response items are measured on a 5-point Likert scale (from Completely Disagree with 1 point to Completely Agree with 5 points). The mean of points scored for each item is taken as the final score. This questionnaire developed by Weiner et al., and they reported Cronbach’s ɑ = 85 for internal consistency. 23\nThe Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of interventions. AIM response items are measured on a 5-point Likert scale (from Completely Disagree with 1 point to Completely Agree with 5 points). The mean of points scored for each item is taken as the final score. 
This questionnaire developed by Weiner et al., and they reported Cronbach’s ɑ = 85 for internal consistency. 23\n Appropriateness The Intervention Appropriateness Measure (IAM) was used for Appropriateness. The IAM consist of a four-item scale that measures perceived intervention appropriateness. Items are measured on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of points scored for each item is taken as the final score. Higher scores mean that the participant feels this intervention is more appropriate for him/her. For this tool, Cronbach’s ɑ = 0.91 and all Factor Loadings are reported as higher than 0.8. 23\nThe Intervention Appropriateness Measure (IAM) was used for Appropriateness. The IAM consist of a four-item scale that measures perceived intervention appropriateness. Items are measured on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of points scored for each item is taken as the final score. Higher scores mean that the participant feels this intervention is more appropriate for him/her. For this tool, Cronbach’s ɑ = 0.91 and all Factor Loadings are reported as higher than 0.8. 23", "A marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers.", "A self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period.", "The Marijuana Craving Questionnaire (MCQ) short-form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers 4 factors: compulsivity, emotionality, expectancy, and purposefulness. According to how patients were thinking or feeling ‘’right now,’’ they placed checkmarks on the questionnaire to endorse responses ranging from 1 or strongly disagree to 7 or strongly agree. Results showed that this questionnaire’s internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation and so the current paper’s questionnaire scores can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ’s psychometrics properties will be published as a separate study as soon as possible.", "The Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of interventions. AIM response items are measured on a 5-point Likert scale (from Completely Disagree with 1 point to Completely Agree with 5 points). The mean of points scored for each item is taken as the final score. This questionnaire developed by Weiner et al., and they reported Cronbach’s ɑ = 85 for internal consistency. 23", "The Intervention Appropriateness Measure (IAM) was used for Appropriateness. The IAM consist of a four-item scale that measures perceived intervention appropriateness. Items are measured on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of points scored for each item is taken as the final score. 
Higher scores mean that the participant feels this intervention is more appropriate for him/her. For this tool, Cronbach’s ɑ = 0.91 and all Factor Loadings are reported as higher than 0.8. 23", " Dialectical behavior therapy DBT is a group intervention consisting of 16 sessions (meeting once a week for 90 minutes) with one psychotherapist and her co-therapist. The intervention protocol is an adaptation of DBT for SUD based on three basic manuals. 10 , 24 , 25 The primary objective of the DBT is to reduce dysfunction in emotion regulation and craving via increasing cessation rates and improving skills. A psychotherapist with a PhD delivered the intervention sessions (with a psychologist as co-therapist) and they were blind both to the existence of another group and to the study objectives. Table 1 shows the content covered in each DBT session.\n\nTable 1DBT content per sessionCessationContentPre-sessionExplanation of dialectical behavioral therapy, principles, and goals. Brief introduction to the content of each session. Familiarity with participants. Participants are given an intervention booklet to read at home.1st session (mindfulness 1)Introduce the concept of mindfulness and three mental states (wise, reasonable, emotional) and their relations with substance use.2nd session (mindfulness 2)Teach two clusters of mindfulness skills. The first includes viewing, participation, and description. The second includes a non-judgmental stance and inclusive self-consciousness.3rd sessionSummarize the mindfulness sessions – definition of addiction, standard therapies of addiction, introduction to and teaching of dialectical avoidance technique. Review the positive and negative aspects of abstinence. Explanation and investigation of relapse and its causes. Explaining the skill of the pure mind, the addicted mind, the types of behaviors related to the pure mentality and the addicted mentality, and preparing a list of supporters.4th-5th sessions (Distress tolerance)Teaching distraction strategies with five skills include activities, comparisons, emotions, thoughts, and enjoyment. Through enjoyable activities, focusing on work or other topics, counting, leaving the situation, paying attention to daily tasks, distracting from thoughts, and self-harm behaviors – teaching and training self-soothing with five senses.6th-7th sessions (Emotion regulation)Definition of emotion, how emotions work, familiarity with emotion regulation skills. Emotion Identification Exercise, Emotion Registration Exercise. Identifying barriers to experiencing emotion in a healthy way and ways to overcome these barriers. Teaching creating short-term positive emotional experiences for experiencing positive emotional states.8th-10th sessions (Emotion regulation and distress tolerance in an MUD context)Explain craving and its connection to the experience of emotions. Introducing methods for identifying values. Importance of committed action based on a list of essential values in life. Develop new coping strategies in response to unpleasant emotions, sensations, and cognitions, especially craving as a multidimensional problem and teaching problem solving and behavior analysis.11th sessionBasic acceptance technique training. Introduce living in the present moment techniques.12th-13th sessionsInterpersonal effectiveness training. Participants learn assertiveness skills about substance users. 
Other skills include non-verbal communication, verbal communication, and problem-solving, decision-making, and listening skills.14th-16th sessionsReview of sessions. Elimination of ambiguities. Exercising skills in the presence of other people.\n\nDBT is a group intervention consisting of 16 sessions (meeting once a week for 90 minutes) with one psychotherapist and her co-therapist. The intervention protocol is an adaptation of DBT for SUD based on three basic manuals. 10 , 24 , 25 The primary objective of the DBT is to reduce dysfunction in emotion regulation and craving via increasing cessation rates and improving skills. A psychotherapist with a PhD delivered the intervention sessions (with a psychologist as co-therapist) and they were blind both to the existence of another group and to the study objectives. Table 1 shows the content covered in each DBT session.\n\nTable 1DBT content per sessionCessationContentPre-sessionExplanation of dialectical behavioral therapy, principles, and goals. Brief introduction to the content of each session. Familiarity with participants. Participants are given an intervention booklet to read at home.1st session (mindfulness 1)Introduce the concept of mindfulness and three mental states (wise, reasonable, emotional) and their relations with substance use.2nd session (mindfulness 2)Teach two clusters of mindfulness skills. The first includes viewing, participation, and description. The second includes a non-judgmental stance and inclusive self-consciousness.3rd sessionSummarize the mindfulness sessions – definition of addiction, standard therapies of addiction, introduction to and teaching of dialectical avoidance technique. Review the positive and negative aspects of abstinence. Explanation and investigation of relapse and its causes. Explaining the skill of the pure mind, the addicted mind, the types of behaviors related to the pure mentality and the addicted mentality, and preparing a list of supporters.4th-5th sessions (Distress tolerance)Teaching distraction strategies with five skills include activities, comparisons, emotions, thoughts, and enjoyment. Through enjoyable activities, focusing on work or other topics, counting, leaving the situation, paying attention to daily tasks, distracting from thoughts, and self-harm behaviors – teaching and training self-soothing with five senses.6th-7th sessions (Emotion regulation)Definition of emotion, how emotions work, familiarity with emotion regulation skills. Emotion Identification Exercise, Emotion Registration Exercise. Identifying barriers to experiencing emotion in a healthy way and ways to overcome these barriers. Teaching creating short-term positive emotional experiences for experiencing positive emotional states.8th-10th sessions (Emotion regulation and distress tolerance in an MUD context)Explain craving and its connection to the experience of emotions. Introducing methods for identifying values. Importance of committed action based on a list of essential values in life. Develop new coping strategies in response to unpleasant emotions, sensations, and cognitions, especially craving as a multidimensional problem and teaching problem solving and behavior analysis.11th sessionBasic acceptance technique training. Introduce living in the present moment techniques.12th-13th sessionsInterpersonal effectiveness training. Participants learn assertiveness skills about substance users. 
Other skills include non-verbal communication, verbal communication, and problem-solving, decision-making, and listening skills.14th-16th sessionsReview of sessions. Elimination of ambiguities. Exercising skills in the presence of other people.\n\n Psychoeducation This option is more ethical than not offering any intervention to the control group. A psychiatrist with five years of experience in addiction psychotherapy implemented this intervention. This intervention includes problem-solving skills, assertiveness, and craving management in eight sessions. Thepsychoeducation intervention was used to provide a basis for comparison with only those elements of the DBT intervention that are different from other psychotherapies. This intervention is utilized for MUD and health-related problems. The intention of this intervention is to provide individuals struggling with cravings and substance use disorder the knowledge needed to comprehensively appreciate their problems and the empowerment needed to cope with them. The psychoeducation intervention included information about the dangers of marijuana and the therapists also provided a pamphlet containing techniques for reduction of craving. 14 , 26 , 27\nThis option is more ethical than not offering any intervention to the control group. A psychiatrist with five years of experience in addiction psychotherapy implemented this intervention. This intervention includes problem-solving skills, assertiveness, and craving management in eight sessions. Thepsychoeducation intervention was used to provide a basis for comparison with only those elements of the DBT intervention that are different from other psychotherapies. This intervention is utilized for MUD and health-related problems. The intention of this intervention is to provide individuals struggling with cravings and substance use disorder the knowledge needed to comprehensively appreciate their problems and the empowerment needed to cope with them. The psychoeducation intervention included information about the dangers of marijuana and the therapists also provided a pamphlet containing techniques for reduction of craving. 14 , 26 , 27\n Therapists and treatment adherence To enable adherence to the principles of DBT to be checked, audios of the sessions were recorded with the consent of all participants. A DBT researcher and psychotherapist who was not involved in the treatment groups checked session content afterwards. Sessions were divided into 15-minute modules that were chosen for adherence checks at random. Treatment stance and occurrence and depth of DBT processes were appraised. Based on the treatment manual, modules were rated for adherence level as either adequate or not adequate. The majority (83%) were judged to have been conducted adequately.\nTo enable adherence to the principles of DBT to be checked, audios of the sessions were recorded with the consent of all participants. A DBT researcher and psychotherapist who was not involved in the treatment groups checked session content afterwards. Sessions were divided into 15-minute modules that were chosen for adherence checks at random. Treatment stance and occurrence and depth of DBT processes were appraised. Based on the treatment manual, modules were rated for adherence level as either adequate or not adequate. 
The majority (83%) were judged to have been conducted adequately.\n Statistical method Demographic information was gathered and reported as frequencies, means, and standard deviations and repeated measures ANOVA and chi-square tests were conducted for the outcomes using SPSS software, version 26.\nDemographic information was gathered and reported as frequencies, means, and standard deviations and repeated measures ANOVA and chi-square tests were conducted for the outcomes using SPSS software, version 26.", "DBT is a group intervention consisting of 16 sessions (meeting once a week for 90 minutes) with one psychotherapist and her co-therapist. The intervention protocol is an adaptation of DBT for SUD based on three basic manuals. 10 , 24 , 25 The primary objective of the DBT is to reduce dysfunction in emotion regulation and craving via increasing cessation rates and improving skills. A psychotherapist with a PhD delivered the intervention sessions (with a psychologist as co-therapist) and they were blind both to the existence of another group and to the study objectives. Table 1 shows the content covered in each DBT session.\n\nTable 1DBT content per sessionCessationContentPre-sessionExplanation of dialectical behavioral therapy, principles, and goals. Brief introduction to the content of each session. Familiarity with participants. Participants are given an intervention booklet to read at home.1st session (mindfulness 1)Introduce the concept of mindfulness and three mental states (wise, reasonable, emotional) and their relations with substance use.2nd session (mindfulness 2)Teach two clusters of mindfulness skills. The first includes viewing, participation, and description. The second includes a non-judgmental stance and inclusive self-consciousness.3rd sessionSummarize the mindfulness sessions – definition of addiction, standard therapies of addiction, introduction to and teaching of dialectical avoidance technique. Review the positive and negative aspects of abstinence. Explanation and investigation of relapse and its causes. Explaining the skill of the pure mind, the addicted mind, the types of behaviors related to the pure mentality and the addicted mentality, and preparing a list of supporters.4th-5th sessions (Distress tolerance)Teaching distraction strategies with five skills include activities, comparisons, emotions, thoughts, and enjoyment. Through enjoyable activities, focusing on work or other topics, counting, leaving the situation, paying attention to daily tasks, distracting from thoughts, and self-harm behaviors – teaching and training self-soothing with five senses.6th-7th sessions (Emotion regulation)Definition of emotion, how emotions work, familiarity with emotion regulation skills. Emotion Identification Exercise, Emotion Registration Exercise. Identifying barriers to experiencing emotion in a healthy way and ways to overcome these barriers. Teaching creating short-term positive emotional experiences for experiencing positive emotional states.8th-10th sessions (Emotion regulation and distress tolerance in an MUD context)Explain craving and its connection to the experience of emotions. Introducing methods for identifying values. Importance of committed action based on a list of essential values in life. Develop new coping strategies in response to unpleasant emotions, sensations, and cognitions, especially craving as a multidimensional problem and teaching problem solving and behavior analysis.11th sessionBasic acceptance technique training. 
Introduce living in the present moment techniques.12th-13th sessionsInterpersonal effectiveness training. Participants learn assertiveness skills about substance users. Other skills include non-verbal communication, verbal communication, and problem-solving, decision-making, and listening skills.14th-16th sessionsReview of sessions. Elimination of ambiguities. Exercising skills in the presence of other people.\n", "This option is more ethical than not offering any intervention to the control group. A psychiatrist with five years of experience in addiction psychotherapy implemented this intervention. This intervention includes problem-solving skills, assertiveness, and craving management in eight sessions. Thepsychoeducation intervention was used to provide a basis for comparison with only those elements of the DBT intervention that are different from other psychotherapies. This intervention is utilized for MUD and health-related problems. The intention of this intervention is to provide individuals struggling with cravings and substance use disorder the knowledge needed to comprehensively appreciate their problems and the empowerment needed to cope with them. The psychoeducation intervention included information about the dangers of marijuana and the therapists also provided a pamphlet containing techniques for reduction of craving. 14 , 26 , 27", "To enable adherence to the principles of DBT to be checked, audios of the sessions were recorded with the consent of all participants. A DBT researcher and psychotherapist who was not involved in the treatment groups checked session content afterwards. Sessions were divided into 15-minute modules that were chosen for adherence checks at random. Treatment stance and occurrence and depth of DBT processes were appraised. Based on the treatment manual, modules were rated for adherence level as either adequate or not adequate. The majority (83%) were judged to have been conducted adequately.", "Demographic information was gathered and reported as frequencies, means, and standard deviations and repeated measures ANOVA and chi-square tests were conducted for the outcomes using SPSS software, version 26.", "Written informed consent was obtained from all participants before initiation of the research. The tools used in this study were all filled-out anonymously, and an ID code was used to maintain the confidentiality of personal information (Ir.kums.rce.1398.1203). At the end of the research process, dialectical behavior therapy was also provided to the control group. This study is registered with the Thailand Registry of Clinical Trials (TCTR20200319007).", "In the psycho-education group, 24/31 participants completed all sessions, compared to 29/30 members of the DBT group (retention rates: 77% in the control group vs. 96% in the DBT group). Additionally, 96% (29/30) of the DBT group members completed the two-month follow-up, whereas 64.5% (20/31) of the control group completed follow-up ( Figure 1 ). The chi-square test was applied, showing associations between group and retention, with χ2= 4.95, p = 0.02 for post-treatment and, χ2= 9.97, p = 0.002 for the follow-up phase. Consequently, DBT retention rates were significantly higher than psycho-education retention rates at post-treatment and follow-up.\n\nFigure 1Consort diagram.\n", "To enable assessment of the acceptability and appropriateness of intervention, patients completed the AIM and IAM scales in the post-treatment phase. 
The acceptability scores were 16.57 for DBT and 9.6 for the control group (p < 0.05). The appropriateness scores were17.03 for DBT and 10.7 for the psychoeducation group (p < 0.05). Since there are no standards for these measures, these points were transferred to Likert-based questionnaire scales. For acceptability, the results equated to ‘’agree’’ for the DBT group versus ‘’neither agree nor disagree’’ for the psychoeducation group. For appropriateness, the results equated to ‘’completely agree’’ for the DBT group versus ‘’neither agree nor disagree’’ for the psychoeducation group.", "Participants’ demographic variables are shown in Table 2 . Analyses showed that there were no significant differences between the two groups regarding these variables. It should be noted that since over 97% of the participants were male from the beginning, the results were reported only for men.\n\nTable 2Mean and standard deviation of demographic variables in the intervention and control groups at the test phaseVariableIntervention groupControl groupp-valueEducational level*  0.2No higher education, n (%)2 (6)6 (19)Diploma, n (%)14 (46)15 (48)University student or graduate, n (%)14 (46)10 (33)Age †25.6 (5.67)27.19 (7.48)0.3Months of marijuana use19.53 (5.9)17.48 (6.03)0.1Craving (total) †   Pre-test45.2 (8.3)47.9 (10.2)0.2Post-test42.13 (7.7)44.48 (8.1)0.1Follow-up42.66 (9.25)45.8 (8.4)0.1Data presented as mean (standard deviation), unless otherwise specified.* Chi-square test.† Independent t test.\n\nData presented as mean (standard deviation), unless otherwise specified.\n* Chi-square test.\n† Independent t test.", "The hypothesis of equal covariance matrices was examined for craving (Box’s M = 3.63, P = 0.74). The results of this test indicate homogeneity of covariance matrices. Mauchly’s test of sphericity also showed that the sphericity assumption was not violated (p = 0.42 and Mauchly’s W = 0.97).\nThe results of the intergroup test and intergroup relations are also presented in Table 3 . As shown in Table 3 , the effect levels for craving (F = 3.52, p > 0.05) suggest that there is no significant difference between groups. These results were repeated for three of the subscales of craving: compulsivity, expectancy, and purposefulness. However, the emotionality subscale results showed a significant reduction in the DBT group compared to the control group 10.6 vs. 14.4 in the post-test and 10.43 vs. 13.26 in the follow-up phase (F = 19.94, p < 0.05).\n\nTable 3Repeated measures ANOVA for variables for the DBT group and control group in the pre-test, post-test, and follow-upVariable/sourceType III sum of squaresdfMean squareFSigPartial eta squaredObserved powerCraving       Tests of within-subjects effects       Factor1347.6962173.842.60.07*0.040.512Factor1 × group4.7622.380.0360.110.0010.055Error (factor1)7845.15411866.48    Tests of between-subjects effects       Group341.0841341.083.520.06 †0.0560.455Error5708.795996.75    Emotionality       Tests of within-subjects effects       Factor1257.332128.6511.690.00 †0.1650.9Factor1 × group74.70237.350.370.03*0.050.63Error (factor1)1297.8111810.99    Tests of between-subjects effects       Group283.891283.8919.940.00 †0.250.9Error839.9065914.23    * Significant to 0.05.† Significant to 0.01.\n\n* Significant to 0.05.\n† Significant to 0.01.\nWith regard to cessation, the results indicated that DBT achieved a higher rate of cessation than the control treatment in both the post-test and at follow-up, ( Table 4 ) (p < 0.05). 
It was also found that among those who continued to use the drug, the number of use days per month in the post-test and the follow-up periods (two months) was significantly lower in the intervention group than the control group.\n\nTable 4Cessation and consumption between groups DBTControlp-valueCessation*   Post-test14 (46%)5 (16%)0.01\n‡Follow-up12 (40%)3 (9.5%)0.006\n‡Number of days use †   Post-test2.43±1.87.5±5.030.00\n‡Follow-up3.44±1.918.75±3.270.00\n‡Bold p-values are significant at critical levels.* The chi-square test was applied.† T test for independent samples.‡ Significant to 0.01.\n\nBold p-values are significant at critical levels.\n* The chi-square test was applied.\n† T test for independent samples.\n‡ Significant to 0.01.", "Despite these positive results, the present study also has some limitations. First, in order to evaluate the most significant treatment components (such as mindfulness and distress tolerance), no groups received the third wave versions of other therapies (ACT or MBSR). This study only had a two-month follow-up period and could not conduct long-term evaluation due to the study site’s medical and infrastructure conditions. It is recommended that future research should examine mediating and confounding variables to investigate the results of similar research to the present study. Other factors affecting relapse and recurrence could also be examined. Moreover, women should be investigated so that gender-related implications can be determined." ]
[ "Marijuana is the most prevalent substance among those reported to be a significant problem among people seeking treatment for substance abuse. 1 According to WHO reports, more than 140 million people consume marijuana every year. 2 With regard to Iran, recent evidence shows that more than 5% of people consume marijuana every year, predominantly young males. However, in view of the harsh marijuana prohibition policy of the Iranian government, most clinicians estimate that these rates have been hugely underestimated. 3 Marijuana, as an illegal drug, is associated with significant physical, psychological, and social consequences. 4 Studies have shown that regular and heavy marijuana use patterns correlate with increased risk of mood disorders, anxiety, and psychotic episodes and although causality has not been demonstrated, these patterns can increase the course of mental health problems. 5 Also, several medical problems such as respiratory system deficits, stroke, myocardial infarction, and digestive tract cancers are associated with marijuana use patterns, especially among those with marijuana use disorder (MUD). 6 , 7 Approximately one in three marijuana users meet the criteria for MUD based on the DSM-5, and this proportion is rising. 8 One of the most important psychological problems in substance use disorder treatment is craving. Craving is a factor identified as the root cause of relapses and treatment failures. 9 , 10 MUD patients report visual, tactile, and olfactory cues related to craving and compulsivity sensations. 11 Based on these results, clinicians have tried to treat patients with marijuana use disorder.\nTo date, the Food and Drug Administration (FDA) in the United States has not approved any psycho-pharmacotherapy for MUD, and therefore psycho-social interventions have received particular attention. 12 The most widely used psychological treatment in the substance use disorder (SUD) context is cognitive-behavioral therapy (CBT). 13 Results showed that CBT is somewhat effective for SUD, but that most patients with MUD do not achieve cessation and are not motivated to continue skills training during follow-up. Relapse rates therefore remain a considerable limitation of treatment. 10 , 13 One of the main reasons for this low success rate lies in the limitations of CBT. First, CBT protocols do not focus on comorbid problems, whereas most patients with SUD have at least one psychiatric or psychological problem. Secondly, cognitive restructuring may not be useful for all SUD patients. Some patients may be unable to restructure their dysfunctional cognition and core beliefs despite receiving CBT. 10 , 12 , 13 Furthermore, emotion regulation deficits are strongly associated with increased addictive behaviors such as SUD. With emotion regulation, people adjust their emotional experiences related to distressing and unpleasant events. Emotion regulation is essential for successful coping with environmental demands and personal welfare. 14 On the behavioral level, studies have found that marijuana craving cues are strongly associated with deficits in regulation of negative affect and emotions. 14 , 15 Also, on the neural level, during reappraisal of negative stimuli, patients with MUD and regular users have shown altered neural activity and functional connectivity. Moreover, marijuana use is related to dysfunctions in the amygdala and in amygdala-dorsolateral prefrontal cortex (DLPFC) coupling activity. 
15 Together, these findings demonstrate that emotion-based psychotherapy must manage comorbid problems and eliminate the limitations of CBT.\nOne of the psychotherapies in the third wave behavioral therapy cluster is dialectical behavior therapy (DBT). DBT has been described as intervention in emotion regulation deficits by focusing on dangerous impulses in borderline personality disorder and substance use disorder. The goals of DBT include improving and regulating emotions as a primary mechanism of change. DBT is a trans-diagnostic treatment and suitable for comorbid problems. DBT trains skills including distress tolerance, interpersonal effectiveness, emotion regulation, and mindfulness. Overall, in the context of SUD, DBT teaches emotion regulation skills to decrease engagement in pathological emotion regulation strategies. It also intervenes in low quality of life situations, reduces drug-seeking behavior, and helps patients function adaptively by accepting unpleasant emotions such as craving. 9 , 10 , 14\nResearch literature shows the efficacy and effectiveness of DBT in various comorbid problems and diseases such as suicide, 16 forensic psychiatric patients, 17 and irritable bowel syndrome. 18 Nevertheless, studies have reported contradictory results for the effectiveness of implementing DBT in various SUD populations. 19 , 20 Furthermore, the literature has recommended using larger samples, clearer instruments to measure outcome variables, and specific and integrated protocols. 21\nAdditionally, according to our investigations, no DBT randomized clinical trials have been conducted that investigated cessation in MUD patients (with or without comorbid problems). A DBT intervention aimed at increasing the cessation rate and reducing craving among MUD patients was developed for this study.\nThis pilot trial investigated the feasibility and preliminary efficacy of DBT relative to a psycho-education intervention that was controlled for time duration and attention. Feasibility was assessed via satisfaction and session completion rates. Preliminary efficacy was evaluated via the impact of DBT on cessation rate and reduction of consumption rates, compared to the psycho-education intervention. Although craving and acceptance of craving are not the primary goals of DBT, they were also compared across the two interventions.", " Trial design This study was designed as a controlled randomized clinical trial, including pretest, post-test, and two-month follow-up phases.\nThis study was designed as a controlled randomized clinical trial, including pretest, post-test, and two-month follow-up phases.\n Sample size Since the sampling method comprises snowball sampling and strict eligibility criteria were applied, on the basis of data from similar studies 10 it was determined that at least 20 participants were needed in each group. However, in view of the predicted retention rates, we selected 30 patients for each group.\nSince the sampling method comprises snowball sampling and strict eligibility criteria were applied, on the basis of data from similar studies 10 it was determined that at least 20 participants were needed in each group. 
However, in view of the predicted retention rates, we selected 30 patients for each group.\n Selection criteria The inclusion criteria were as follows: 1) diagnosis of marijuana use disorder; 2) age 18 years or over; 3) no current or past history of major psychiatric disorders; 4) no other concurrent SUD treatment; and 5) willingness to attend intervention sessions, complete surveys, and take tests (questionnaires and urine test kits).\nExclusion criteria were as follows: 1) unwillingness to participate; 2) not participating in intervention sessions for more than two weeks; 3) starting secondary psychotherapy; and 4) consuming methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, or morphine during the research stages.\nThe inclusion criteria were as follows: 1) diagnosis of marijuana use disorder; 2) age 18 years or over; 3) no current or past history of major psychiatric disorders; 4) no other concurrent SUD treatment; and 5) willingness to attend intervention sessions, complete surveys, and take tests (questionnaires and urine test kits).\nExclusion criteria were as follows: 1) unwillingness to participate; 2) not participating in intervention sessions for more than two weeks; 3) starting secondary psychotherapy; and 4) consuming methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, or morphine during the research stages.\n Participants, procedures, and randomization Since there are no cannabis use disorder treatment centers in Iran, there is no specific place to select patients. Furthermore, patients at drug treatment centers are referred for treatment of other substance use disorders and comorbidity of drug use is one of the exclusion criteria for this study, since it could lead to misleading results. Therefore, the relatives and acquaintances of those who had been referred to the drug treatment center were interviewed. From November 1, 2019, to November 5, 2019, 15 relatives and family members of drug users referred to drug treatment centers were diagnosed with MUD at this stage. Then, using snowball sampling, after 15 days of investigation, a total of 83 patients were diagnosed with MUD. Seventy-five of the 83 MUD patients who were approached consented, eight declined to participate, and 14 were ineligible. The primary reasons for declining were anxiety about addiction stigma and time constraints. Most of the ineligible patients had multiple illicit use disorders, so they did not meet the study criteria. Therefore, 61 patients completed the baseline assessment and were included in the current analyses. These patients were randomly assigned to each group using a random number table. The interventions were implemented from December 1, 2019, to March 20, 2020. The follow-up phase started on March 21, 2020, and ended on May 20, 2020, (at two months’ follow-up). In order to test for exclusion criteria before each session, a six-drug test kit for methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, and morphine was administered to individuals using urine samples.\nSince there are no cannabis use disorder treatment centers in Iran, there is no specific place to select patients. Furthermore, patients at drug treatment centers are referred for treatment of other substance use disorders and comorbidity of drug use is one of the exclusion criteria for this study, since it could lead to misleading results. Therefore, the relatives and acquaintances of those who had been referred to the drug treatment center were interviewed. 
From November 1, 2019, to November 5, 2019, 15 relatives and family members of drug users referred to drug treatment centers were diagnosed with MUD at this stage. Then, using snowball sampling, after 15 days of investigation, a total of 83 patients were diagnosed with MUD. Seventy-five of the 83 MUD patients who were approached consented, eight declined to participate, and 14 were ineligible. The primary reasons for declining were anxiety about addiction stigma and time constraints. Most of the ineligible patients had multiple illicit use disorders, so they did not meet the study criteria. Therefore, 61 patients completed the baseline assessment and were included in the current analyses. These patients were randomly assigned to each group using a random number table. The interventions were implemented from December 1, 2019, to March 20, 2020. The follow-up phase started on March 21, 2020, and ended on May 20, 2020, (at two months’ follow-up). In order to test for exclusion criteria before each session, a six-drug test kit for methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, and morphine was administered to individuals using urine samples.\n Blinding Both groups were blind to the existence of another group in the study. However, patients were informed about participating in research but not about another group. One day after the end of treatment, the post-test was carried out by mental health technicians with a master’s degree in psychology.\nBoth groups were blind to the existence of another group in the study. However, patients were informed about participating in research but not about another group. One day after the end of treatment, the post-test was carried out by mental health technicians with a master’s degree in psychology.\n Outcome measures Abstinence A marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers.\nA marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers.\n Marijuana smoking A self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period.\nA self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period.\n Craving The Marijuana Craving Questionnaire (MCQ) short-form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers 4 factors: compulsivity, emotionality, expectancy, and purposefulness. According to how patients were thinking or feeling ‘’right now,’’ they placed checkmarks on the questionnaire to endorse responses ranging from 1 or strongly disagree to 7 or strongly agree. Results showed that this questionnaire’s internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. 
The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation and so the current paper’s questionnaire scores can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ’s psychometrics properties will be published as a separate study as soon as possible.\nThe Marijuana Craving Questionnaire (MCQ) short-form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers 4 factors: compulsivity, emotionality, expectancy, and purposefulness. According to how patients were thinking or feeling ‘’right now,’’ they placed checkmarks on the questionnaire to endorse responses ranging from 1 or strongly disagree to 7 or strongly agree. Results showed that this questionnaire’s internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation and so the current paper’s questionnaire scores can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ’s psychometrics properties will be published as a separate study as soon as possible.\n Acceptability The Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of interventions. AIM response items are measured on a 5-point Likert scale (from Completely Disagree with 1 point to Completely Agree with 5 points). The mean of points scored for each item is taken as the final score. This questionnaire developed by Weiner et al., and they reported Cronbach’s ɑ = 85 for internal consistency. 23\nThe Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of interventions. AIM response items are measured on a 5-point Likert scale (from Completely Disagree with 1 point to Completely Agree with 5 points). The mean of points scored for each item is taken as the final score. This questionnaire developed by Weiner et al., and they reported Cronbach’s ɑ = 85 for internal consistency. 23\n Appropriateness The Intervention Appropriateness Measure (IAM) was used for Appropriateness. The IAM consist of a four-item scale that measures perceived intervention appropriateness. Items are measured on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of points scored for each item is taken as the final score. Higher scores mean that the participant feels this intervention is more appropriate for him/her. For this tool, Cronbach’s ɑ = 0.91 and all Factor Loadings are reported as higher than 0.8. 23\nThe Intervention Appropriateness Measure (IAM) was used for Appropriateness. The IAM consist of a four-item scale that measures perceived intervention appropriateness. Items are measured on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of points scored for each item is taken as the final score. Higher scores mean that the participant feels this intervention is more appropriate for him/her. For this tool, Cronbach’s ɑ = 0.91 and all Factor Loadings are reported as higher than 0.8. 
23\n Abstinence A marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers.\nA marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers.\n Marijuana smoking A self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period.\nA self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period.\n Craving The Marijuana Craving Questionnaire (MCQ) short-form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers 4 factors: compulsivity, emotionality, expectancy, and purposefulness. According to how patients were thinking or feeling ‘’right now,’’ they placed checkmarks on the questionnaire to endorse responses ranging from 1 or strongly disagree to 7 or strongly agree. Results showed that this questionnaire’s internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation and so the current paper’s questionnaire scores can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ’s psychometrics properties will be published as a separate study as soon as possible.\nThe Marijuana Craving Questionnaire (MCQ) short-form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers 4 factors: compulsivity, emotionality, expectancy, and purposefulness. According to how patients were thinking or feeling ‘’right now,’’ they placed checkmarks on the questionnaire to endorse responses ranging from 1 or strongly disagree to 7 or strongly agree. Results showed that this questionnaire’s internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation and so the current paper’s questionnaire scores can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ’s psychometrics properties will be published as a separate study as soon as possible.\n Acceptability The Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of interventions. AIM response items are measured on a 5-point Likert scale (from Completely Disagree with 1 point to Completely Agree with 5 points). The mean of points scored for each item is taken as the final score. 
This questionnaire was developed by Weiner et al., who reported Cronbach’s α = 0.85 for internal consistency. 23\nThe Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of interventions. AIM response items are measured on a 5-point Likert scale (from Completely Disagree with 1 point to Completely Agree with 5 points). The mean of points scored for each item is taken as the final score. This questionnaire was developed by Weiner et al., who reported Cronbach’s α = 0.85 for internal consistency. 23\n Appropriateness The Intervention Appropriateness Measure (IAM) was used to measure appropriateness. The IAM consists of a four-item scale that measures perceived intervention appropriateness. Items are measured on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of points scored for each item is taken as the final score. Higher scores mean that the participant feels this intervention is more appropriate for him/her. For this tool, Cronbach’s α = 0.91 and all factor loadings are reported as higher than 0.8. 23\nThe Intervention Appropriateness Measure (IAM) was used to measure appropriateness. The IAM consists of a four-item scale that measures perceived intervention appropriateness. Items are measured on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of points scored for each item is taken as the final score. Higher scores mean that the participant feels this intervention is more appropriate for him/her. For this tool, Cronbach’s α = 0.91 and all factor loadings are reported as higher than 0.8. 23", "This study was designed as a controlled randomized clinical trial, including pretest, post-test, and two-month follow-up phases.", "Since the sampling method was snowball sampling and strict eligibility criteria were applied, on the basis of data from similar studies 10 it was determined that at least 20 participants were needed in each group. However, in view of the predicted retention rates, we selected 30 patients for each group.", "The inclusion criteria were as follows: 1) diagnosis of marijuana use disorder; 2) age 18 years or over; 3) no current or past history of major psychiatric disorders; 4) no other concurrent SUD treatment; and 5) willingness to attend intervention sessions, complete surveys, and take tests (questionnaires and urine test kits).\nExclusion criteria were as follows: 1) unwillingness to participate; 2) not participating in intervention sessions for more than two weeks; 3) starting secondary psychotherapy; and 4) consuming methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, or morphine during the research stages.", "Since there are no cannabis use disorder treatment centers in Iran, there is no specific place to select patients. Furthermore, patients at drug treatment centers are referred for treatment of other substance use disorders, and comorbidity of drug use is one of the exclusion criteria for this study, since it could lead to misleading results. Therefore, the relatives and acquaintances of those who had been referred to the drug treatment center were interviewed. From November 1, 2019, to November 5, 2019, 15 relatives and family members of drug users referred to drug treatment centers were diagnosed with MUD at this stage. Then, using snowball sampling, after 15 days of investigation, a total of 83 patients were diagnosed with MUD. Seventy-five of the 83 MUD patients who were approached consented, eight declined to participate, and 14 were ineligible. 
The primary reasons for declining were anxiety about addiction stigma and time constraints. Most of the ineligible patients had multiple illicit use disorders, so they did not meet the study criteria. Therefore, 61 patients completed the baseline assessment and were included in the current analyses. These patients were randomly assigned to each group using a random number table. The interventions were implemented from December 1, 2019, to March 20, 2020. The follow-up phase started on March 21, 2020, and ended on May 20, 2020, (at two months’ follow-up). In order to test for exclusion criteria before each session, a six-drug test kit for methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, and morphine was administered to individuals using urine samples.", "Both groups were blind to the existence of another group in the study. However, patients were informed about participating in research but not about another group. One day after the end of treatment, the post-test was carried out by mental health technicians with a master’s degree in psychology.", " Abstinence A marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers.\nA marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers.\n Marijuana smoking A self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period.\nA self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period.\n Craving The Marijuana Craving Questionnaire (MCQ) short-form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers 4 factors: compulsivity, emotionality, expectancy, and purposefulness. According to how patients were thinking or feeling ‘’right now,’’ they placed checkmarks on the questionnaire to endorse responses ranging from 1 or strongly disagree to 7 or strongly agree. Results showed that this questionnaire’s internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation and so the current paper’s questionnaire scores can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ’s psychometrics properties will be published as a separate study as soon as possible.\nThe Marijuana Craving Questionnaire (MCQ) short-form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers 4 factors: compulsivity, emotionality, expectancy, and purposefulness. 
According to how patients were thinking or feeling ‘’right now,’’ they placed checkmarks on the questionnaire to endorse responses ranging from 1 or strongly disagree to 7 or strongly agree. Results showed that this questionnaire’s internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation and so the current paper’s questionnaire scores can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ’s psychometrics properties will be published as a separate study as soon as possible.\n Acceptability The Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of interventions. AIM response items are measured on a 5-point Likert scale (from Completely Disagree with 1 point to Completely Agree with 5 points). The mean of points scored for each item is taken as the final score. This questionnaire developed by Weiner et al., and they reported Cronbach’s ɑ = 85 for internal consistency. 23\nThe Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of interventions. AIM response items are measured on a 5-point Likert scale (from Completely Disagree with 1 point to Completely Agree with 5 points). The mean of points scored for each item is taken as the final score. This questionnaire developed by Weiner et al., and they reported Cronbach’s ɑ = 85 for internal consistency. 23\n Appropriateness The Intervention Appropriateness Measure (IAM) was used for Appropriateness. The IAM consist of a four-item scale that measures perceived intervention appropriateness. Items are measured on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of points scored for each item is taken as the final score. Higher scores mean that the participant feels this intervention is more appropriate for him/her. For this tool, Cronbach’s ɑ = 0.91 and all Factor Loadings are reported as higher than 0.8. 23\nThe Intervention Appropriateness Measure (IAM) was used for Appropriateness. The IAM consist of a four-item scale that measures perceived intervention appropriateness. Items are measured on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of points scored for each item is taken as the final score. Higher scores mean that the participant feels this intervention is more appropriate for him/her. For this tool, Cronbach’s ɑ = 0.91 and all Factor Loadings are reported as higher than 0.8. 23", "A marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate IR. IRAN) was used to identify abstainers.", "A self-report scale was designed for patients who had lapsed during the post-test follow-up. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session was considered the post-test smoking period and the second-month follow-up was considered as the follow-up marijuana use period.", "The Marijuana Craving Questionnaire (MCQ) short-form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers 4 factors: compulsivity, emotionality, expectancy, and purposefulness. 
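As an illustrative aside, the four-factor Likert scoring just described can be sketched in a few lines of code. The item-to-subscale mapping and the function name below are hypothetical placeholders, not the published MCQ short-form scoring key; the sketch only assumes that each item is rated 1-7 and that items are summed into the four factor scores.

```python
# Minimal sketch of MCQ short-form scoring (hypothetical item mapping).
from typing import Dict, List

SUBSCALE_ITEMS: Dict[str, List[int]] = {
    "compulsivity":   [1, 5, 11],   # placeholder item numbers
    "emotionality":   [2, 6, 12],
    "expectancy":     [3, 7, 10],
    "purposefulness": [4, 8, 9],
}

def score_mcq(responses: Dict[int, int]) -> Dict[str, int]:
    """Sum 1-7 Likert responses into the four factor scores plus a total."""
    if any(not 1 <= v <= 7 for v in responses.values()):
        raise ValueError("MCQ items are rated on a 1-7 Likert scale")
    scores = {factor: sum(responses[i] for i in items)
              for factor, items in SUBSCALE_ITEMS.items()}
    scores["total"] = sum(scores[f] for f in SUBSCALE_ITEMS)
    return scores

# Example: a participant endorsing the scale midpoint on every item
print(score_mcq({item: 4 for item in range(1, 13)}))
```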
According to how patients were thinking or feeling ‘’right now,’’ they placed checkmarks on the questionnaire to endorse responses ranging from 1 or strongly disagree to 7 or strongly agree. Results showed that this questionnaire’s internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation and so the current paper’s questionnaire scores can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ’s psychometrics properties will be published as a separate study as soon as possible.", "The Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of interventions. AIM response items are measured on a 5-point Likert scale (from Completely Disagree with 1 point to Completely Agree with 5 points). The mean of points scored for each item is taken as the final score. This questionnaire developed by Weiner et al., and they reported Cronbach’s ɑ = 85 for internal consistency. 23", "The Intervention Appropriateness Measure (IAM) was used for Appropriateness. The IAM consist of a four-item scale that measures perceived intervention appropriateness. Items are measured on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of points scored for each item is taken as the final score. Higher scores mean that the participant feels this intervention is more appropriate for him/her. For this tool, Cronbach’s ɑ = 0.91 and all Factor Loadings are reported as higher than 0.8. 23", " Dialectical behavior therapy DBT is a group intervention consisting of 16 sessions (meeting once a week for 90 minutes) with one psychotherapist and her co-therapist. The intervention protocol is an adaptation of DBT for SUD based on three basic manuals. 10 , 24 , 25 The primary objective of the DBT is to reduce dysfunction in emotion regulation and craving via increasing cessation rates and improving skills. A psychotherapist with a PhD delivered the intervention sessions (with a psychologist as co-therapist) and they were blind both to the existence of another group and to the study objectives. Table 1 shows the content covered in each DBT session.\n\nTable 1DBT content per sessionCessationContentPre-sessionExplanation of dialectical behavioral therapy, principles, and goals. Brief introduction to the content of each session. Familiarity with participants. Participants are given an intervention booklet to read at home.1st session (mindfulness 1)Introduce the concept of mindfulness and three mental states (wise, reasonable, emotional) and their relations with substance use.2nd session (mindfulness 2)Teach two clusters of mindfulness skills. The first includes viewing, participation, and description. The second includes a non-judgmental stance and inclusive self-consciousness.3rd sessionSummarize the mindfulness sessions – definition of addiction, standard therapies of addiction, introduction to and teaching of dialectical avoidance technique. Review the positive and negative aspects of abstinence. Explanation and investigation of relapse and its causes. 
Explaining the skill of the pure mind, the addicted mind, the types of behaviors related to the pure mentality and the addicted mentality, and preparing a list of supporters.4th-5th sessions (Distress tolerance)Teaching distraction strategies with five skills include activities, comparisons, emotions, thoughts, and enjoyment. Through enjoyable activities, focusing on work or other topics, counting, leaving the situation, paying attention to daily tasks, distracting from thoughts, and self-harm behaviors – teaching and training self-soothing with five senses.6th-7th sessions (Emotion regulation)Definition of emotion, how emotions work, familiarity with emotion regulation skills. Emotion Identification Exercise, Emotion Registration Exercise. Identifying barriers to experiencing emotion in a healthy way and ways to overcome these barriers. Teaching creating short-term positive emotional experiences for experiencing positive emotional states.8th-10th sessions (Emotion regulation and distress tolerance in an MUD context)Explain craving and its connection to the experience of emotions. Introducing methods for identifying values. Importance of committed action based on a list of essential values in life. Develop new coping strategies in response to unpleasant emotions, sensations, and cognitions, especially craving as a multidimensional problem and teaching problem solving and behavior analysis.11th sessionBasic acceptance technique training. Introduce living in the present moment techniques.12th-13th sessionsInterpersonal effectiveness training. Participants learn assertiveness skills about substance users. Other skills include non-verbal communication, verbal communication, and problem-solving, decision-making, and listening skills.14th-16th sessionsReview of sessions. Elimination of ambiguities. Exercising skills in the presence of other people.\n\nDBT is a group intervention consisting of 16 sessions (meeting once a week for 90 minutes) with one psychotherapist and her co-therapist. The intervention protocol is an adaptation of DBT for SUD based on three basic manuals. 10 , 24 , 25 The primary objective of the DBT is to reduce dysfunction in emotion regulation and craving via increasing cessation rates and improving skills. A psychotherapist with a PhD delivered the intervention sessions (with a psychologist as co-therapist) and they were blind both to the existence of another group and to the study objectives. Table 1 shows the content covered in each DBT session.\n\nTable 1DBT content per sessionCessationContentPre-sessionExplanation of dialectical behavioral therapy, principles, and goals. Brief introduction to the content of each session. Familiarity with participants. Participants are given an intervention booklet to read at home.1st session (mindfulness 1)Introduce the concept of mindfulness and three mental states (wise, reasonable, emotional) and their relations with substance use.2nd session (mindfulness 2)Teach two clusters of mindfulness skills. The first includes viewing, participation, and description. The second includes a non-judgmental stance and inclusive self-consciousness.3rd sessionSummarize the mindfulness sessions – definition of addiction, standard therapies of addiction, introduction to and teaching of dialectical avoidance technique. Review the positive and negative aspects of abstinence. Explanation and investigation of relapse and its causes. 
Explaining the skill of the pure mind, the addicted mind, the types of behaviors related to the pure mentality and the addicted mentality, and preparing a list of supporters.4th-5th sessions (Distress tolerance)Teaching distraction strategies with five skills include activities, comparisons, emotions, thoughts, and enjoyment. Through enjoyable activities, focusing on work or other topics, counting, leaving the situation, paying attention to daily tasks, distracting from thoughts, and self-harm behaviors – teaching and training self-soothing with five senses.6th-7th sessions (Emotion regulation)Definition of emotion, how emotions work, familiarity with emotion regulation skills. Emotion Identification Exercise, Emotion Registration Exercise. Identifying barriers to experiencing emotion in a healthy way and ways to overcome these barriers. Teaching creating short-term positive emotional experiences for experiencing positive emotional states.8th-10th sessions (Emotion regulation and distress tolerance in an MUD context)Explain craving and its connection to the experience of emotions. Introducing methods for identifying values. Importance of committed action based on a list of essential values in life. Develop new coping strategies in response to unpleasant emotions, sensations, and cognitions, especially craving as a multidimensional problem and teaching problem solving and behavior analysis.11th sessionBasic acceptance technique training. Introduce living in the present moment techniques.12th-13th sessionsInterpersonal effectiveness training. Participants learn assertiveness skills about substance users. Other skills include non-verbal communication, verbal communication, and problem-solving, decision-making, and listening skills.14th-16th sessionsReview of sessions. Elimination of ambiguities. Exercising skills in the presence of other people.\n\n Psychoeducation This option is more ethical than not offering any intervention to the control group. A psychiatrist with five years of experience in addiction psychotherapy implemented this intervention. This intervention includes problem-solving skills, assertiveness, and craving management in eight sessions. Thepsychoeducation intervention was used to provide a basis for comparison with only those elements of the DBT intervention that are different from other psychotherapies. This intervention is utilized for MUD and health-related problems. The intention of this intervention is to provide individuals struggling with cravings and substance use disorder the knowledge needed to comprehensively appreciate their problems and the empowerment needed to cope with them. The psychoeducation intervention included information about the dangers of marijuana and the therapists also provided a pamphlet containing techniques for reduction of craving. 14 , 26 , 27\nThis option is more ethical than not offering any intervention to the control group. A psychiatrist with five years of experience in addiction psychotherapy implemented this intervention. This intervention includes problem-solving skills, assertiveness, and craving management in eight sessions. Thepsychoeducation intervention was used to provide a basis for comparison with only those elements of the DBT intervention that are different from other psychotherapies. This intervention is utilized for MUD and health-related problems. 
The intention of this intervention is to provide individuals struggling with cravings and substance use disorder the knowledge needed to comprehensively appreciate their problems and the empowerment needed to cope with them. The psychoeducation intervention included information about the dangers of marijuana and the therapists also provided a pamphlet containing techniques for reduction of craving. 14 , 26 , 27\n Therapists and treatment adherence To enable adherence to the principles of DBT to be checked, audios of the sessions were recorded with the consent of all participants. A DBT researcher and psychotherapist who was not involved in the treatment groups checked session content afterwards. Sessions were divided into 15-minute modules that were chosen for adherence checks at random. Treatment stance and occurrence and depth of DBT processes were appraised. Based on the treatment manual, modules were rated for adherence level as either adequate or not adequate. The majority (83%) were judged to have been conducted adequately.\nTo enable adherence to the principles of DBT to be checked, audios of the sessions were recorded with the consent of all participants. A DBT researcher and psychotherapist who was not involved in the treatment groups checked session content afterwards. Sessions were divided into 15-minute modules that were chosen for adherence checks at random. Treatment stance and occurrence and depth of DBT processes were appraised. Based on the treatment manual, modules were rated for adherence level as either adequate or not adequate. The majority (83%) were judged to have been conducted adequately.\n Statistical method Demographic information was gathered and reported as frequencies, means, and standard deviations and repeated measures ANOVA and chi-square tests were conducted for the outcomes using SPSS software, version 26.\nDemographic information was gathered and reported as frequencies, means, and standard deviations and repeated measures ANOVA and chi-square tests were conducted for the outcomes using SPSS software, version 26.", "DBT is a group intervention consisting of 16 sessions (meeting once a week for 90 minutes) with one psychotherapist and her co-therapist. The intervention protocol is an adaptation of DBT for SUD based on three basic manuals. 10 , 24 , 25 The primary objective of the DBT is to reduce dysfunction in emotion regulation and craving via increasing cessation rates and improving skills. A psychotherapist with a PhD delivered the intervention sessions (with a psychologist as co-therapist) and they were blind both to the existence of another group and to the study objectives. Table 1 shows the content covered in each DBT session.\n\nTable 1DBT content per sessionCessationContentPre-sessionExplanation of dialectical behavioral therapy, principles, and goals. Brief introduction to the content of each session. Familiarity with participants. Participants are given an intervention booklet to read at home.1st session (mindfulness 1)Introduce the concept of mindfulness and three mental states (wise, reasonable, emotional) and their relations with substance use.2nd session (mindfulness 2)Teach two clusters of mindfulness skills. The first includes viewing, participation, and description. The second includes a non-judgmental stance and inclusive self-consciousness.3rd sessionSummarize the mindfulness sessions – definition of addiction, standard therapies of addiction, introduction to and teaching of dialectical avoidance technique. 
Review the positive and negative aspects of abstinence. Explanation and investigation of relapse and its causes. Explaining the skill of the pure mind, the addicted mind, the types of behaviors related to the pure mentality and the addicted mentality, and preparing a list of supporters.4th-5th sessions (Distress tolerance)Teaching distraction strategies with five skills include activities, comparisons, emotions, thoughts, and enjoyment. Through enjoyable activities, focusing on work or other topics, counting, leaving the situation, paying attention to daily tasks, distracting from thoughts, and self-harm behaviors – teaching and training self-soothing with five senses.6th-7th sessions (Emotion regulation)Definition of emotion, how emotions work, familiarity with emotion regulation skills. Emotion Identification Exercise, Emotion Registration Exercise. Identifying barriers to experiencing emotion in a healthy way and ways to overcome these barriers. Teaching creating short-term positive emotional experiences for experiencing positive emotional states.8th-10th sessions (Emotion regulation and distress tolerance in an MUD context)Explain craving and its connection to the experience of emotions. Introducing methods for identifying values. Importance of committed action based on a list of essential values in life. Develop new coping strategies in response to unpleasant emotions, sensations, and cognitions, especially craving as a multidimensional problem and teaching problem solving and behavior analysis.11th sessionBasic acceptance technique training. Introduce living in the present moment techniques.12th-13th sessionsInterpersonal effectiveness training. Participants learn assertiveness skills about substance users. Other skills include non-verbal communication, verbal communication, and problem-solving, decision-making, and listening skills.14th-16th sessionsReview of sessions. Elimination of ambiguities. Exercising skills in the presence of other people.\n", "This option is more ethical than not offering any intervention to the control group. A psychiatrist with five years of experience in addiction psychotherapy implemented this intervention. This intervention includes problem-solving skills, assertiveness, and craving management in eight sessions. Thepsychoeducation intervention was used to provide a basis for comparison with only those elements of the DBT intervention that are different from other psychotherapies. This intervention is utilized for MUD and health-related problems. The intention of this intervention is to provide individuals struggling with cravings and substance use disorder the knowledge needed to comprehensively appreciate their problems and the empowerment needed to cope with them. The psychoeducation intervention included information about the dangers of marijuana and the therapists also provided a pamphlet containing techniques for reduction of craving. 14 , 26 , 27", "To enable adherence to the principles of DBT to be checked, audios of the sessions were recorded with the consent of all participants. A DBT researcher and psychotherapist who was not involved in the treatment groups checked session content afterwards. Sessions were divided into 15-minute modules that were chosen for adherence checks at random. Treatment stance and occurrence and depth of DBT processes were appraised. Based on the treatment manual, modules were rated for adherence level as either adequate or not adequate. 
The majority (83%) were judged to have been conducted adequately.", "Demographic information was gathered and reported as frequencies, means, and standard deviations and repeated measures ANOVA and chi-square tests were conducted for the outcomes using SPSS software, version 26.", "Written informed consent was obtained from all participants before initiation of the research. The tools used in this study were all filled-out anonymously, and an ID code was used to maintain the confidentiality of personal information (Ir.kums.rce.1398.1203). At the end of the research process, dialectical behavior therapy was also provided to the control group. This study is registered with the Thailand Registry of Clinical Trials (TCTR20200319007).", " Feasibility In the psycho-education group, 24/31 participants completed all sessions, compared to 29/30 members of the DBT group (retention rates: 77% in the control group vs. 96% in the DBT group). Additionally, 96% (29/30) of the DBT group members completed the two-month follow-up, whereas 64.5% (20/31) of the control group completed follow-up ( Figure 1 ). The chi-square test was applied, showing associations between group and retention, with χ2= 4.95, p = 0.02 for post-treatment and, χ2= 9.97, p = 0.002 for the follow-up phase. Consequently, DBT retention rates were significantly higher than psycho-education retention rates at post-treatment and follow-up.\n\nFigure 1Consort diagram.\n\nIn the psycho-education group, 24/31 participants completed all sessions, compared to 29/30 members of the DBT group (retention rates: 77% in the control group vs. 96% in the DBT group). Additionally, 96% (29/30) of the DBT group members completed the two-month follow-up, whereas 64.5% (20/31) of the control group completed follow-up ( Figure 1 ). The chi-square test was applied, showing associations between group and retention, with χ2= 4.95, p = 0.02 for post-treatment and, χ2= 9.97, p = 0.002 for the follow-up phase. Consequently, DBT retention rates were significantly higher than psycho-education retention rates at post-treatment and follow-up.\n\nFigure 1Consort diagram.\n\n Acceptability and appropriateness To enable assessment of the acceptability and appropriateness of intervention, patients completed the AIM and IAM scales in the post-treatment phase. The acceptability scores were 16.57 for DBT and 9.6 for the control group (p < 0.05). The appropriateness scores were17.03 for DBT and 10.7 for the psychoeducation group (p < 0.05). Since there are no standards for these measures, these points were transferred to Likert-based questionnaire scales. For acceptability, the results equated to ‘’agree’’ for the DBT group versus ‘’neither agree nor disagree’’ for the psychoeducation group. For appropriateness, the results equated to ‘’completely agree’’ for the DBT group versus ‘’neither agree nor disagree’’ for the psychoeducation group.\nTo enable assessment of the acceptability and appropriateness of intervention, patients completed the AIM and IAM scales in the post-treatment phase. The acceptability scores were 16.57 for DBT and 9.6 for the control group (p < 0.05). The appropriateness scores were17.03 for DBT and 10.7 for the psychoeducation group (p < 0.05). Since there are no standards for these measures, these points were transferred to Likert-based questionnaire scales. For acceptability, the results equated to ‘’agree’’ for the DBT group versus ‘’neither agree nor disagree’’ for the psychoeducation group. 
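The retention comparison reported above can be checked directly from the completion counts. The sketch below assumes a Pearson chi-square on the 2 × 2 completion tables without a continuity correction, which reproduces the reported values of 4.95 (post-treatment) and 9.97 (follow-up); it is an illustration, not the authors' actual SPSS analysis.

```python
# Sketch: retention chi-square tests from the reported completion counts.
from scipy.stats import chi2_contingency

post_treatment = [[29, 1],   # DBT: completed, dropped out (29/30)
                  [24, 7]]   # psychoeducation: completed, dropped out (24/31)
follow_up      = [[29, 1],   # DBT at two-month follow-up (29/30)
                  [20, 11]]  # psychoeducation at follow-up (20/31)

for label, table in [("post-treatment", post_treatment), ("follow-up", follow_up)]:
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"{label}: chi2 = {chi2:.2f}, p = {p:.3f}")
# Expected output: chi2 ~ 4.95 (p ~ 0.026) and chi2 ~ 9.97 (p ~ 0.002)
```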
For appropriateness, the results equated to ‘’completely agree’’ for the DBT group versus ‘’neither agree nor disagree’’ for the psychoeducation group.\n Participant characteristics Participants’ demographic variables are shown in Table 2 . Analyses showed that there were no significant differences between the two groups regarding these variables. It should be noted that since over 97% of the participants were male from the beginning, the results were reported only for men.\n\nTable 2Mean and standard deviation of demographic variables in the intervention and control groups at the test phaseVariableIntervention groupControl groupp-valueEducational level*  0.2No higher education, n (%)2 (6)6 (19)Diploma, n (%)14 (46)15 (48)University student or graduate, n (%)14 (46)10 (33)Age †25.6 (5.67)27.19 (7.48)0.3Months of marijuana use19.53 (5.9)17.48 (6.03)0.1Craving (total) †   Pre-test45.2 (8.3)47.9 (10.2)0.2Post-test42.13 (7.7)44.48 (8.1)0.1Follow-up42.66 (9.25)45.8 (8.4)0.1Data presented as mean (standard deviation), unless otherwise specified.* Chi-square test.† Independent t test.\n\nData presented as mean (standard deviation), unless otherwise specified.\n* Chi-square test.\n† Independent t test.\nParticipants’ demographic variables are shown in Table 2 . Analyses showed that there were no significant differences between the two groups regarding these variables. It should be noted that since over 97% of the participants were male from the beginning, the results were reported only for men.\n\nTable 2Mean and standard deviation of demographic variables in the intervention and control groups at the test phaseVariableIntervention groupControl groupp-valueEducational level*  0.2No higher education, n (%)2 (6)6 (19)Diploma, n (%)14 (46)15 (48)University student or graduate, n (%)14 (46)10 (33)Age †25.6 (5.67)27.19 (7.48)0.3Months of marijuana use19.53 (5.9)17.48 (6.03)0.1Craving (total) †   Pre-test45.2 (8.3)47.9 (10.2)0.2Post-test42.13 (7.7)44.48 (8.1)0.1Follow-up42.66 (9.25)45.8 (8.4)0.1Data presented as mean (standard deviation), unless otherwise specified.* Chi-square test.† Independent t test.\n\nData presented as mean (standard deviation), unless otherwise specified.\n* Chi-square test.\n† Independent t test.\n Efficacy outcomes The hypothesis of equal covariance matrices was examined for craving (Box’s M = 3.63, P = 0.74). The results of this test indicate homogeneity of covariance matrices. Mauchly’s test of sphericity also showed that the sphericity assumption was not violated (p = 0.42 and Mauchly’s W = 0.97).\nThe results of the intergroup test and intergroup relations are also presented in Table 3 . As shown in Table 3 , the effect levels for craving (F = 3.52, p > 0.05) suggest that there is no significant difference between groups. These results were repeated for three of the subscales of craving: compulsivity, expectancy, and purposefulness. However, the emotionality subscale results showed a significant reduction in the DBT group compared to the control group 10.6 vs. 14.4 in the post-test and 10.43 vs. 
13.26 in the follow-up phase (F = 19.94, p < 0.05).\n\nTable 3Repeated measures ANOVA for variables for the DBT group and control group in the pre-test, post-test, and follow-upVariable/sourceType III sum of squaresdfMean squareFSigPartial eta squaredObserved powerCraving       Tests of within-subjects effects       Factor1347.6962173.842.60.07*0.040.512Factor1 × group4.7622.380.0360.110.0010.055Error (factor1)7845.15411866.48    Tests of between-subjects effects       Group341.0841341.083.520.06 †0.0560.455Error5708.795996.75    Emotionality       Tests of within-subjects effects       Factor1257.332128.6511.690.00 †0.1650.9Factor1 × group74.70237.350.370.03*0.050.63Error (factor1)1297.8111810.99    Tests of between-subjects effects       Group283.891283.8919.940.00 †0.250.9Error839.9065914.23    * Significant to 0.05.† Significant to 0.01.\n\n* Significant to 0.05.\n† Significant to 0.01.\nWith regard to cessation, the results indicated that DBT achieved a higher rate of cessation than the control treatment in both the post-test and at follow-up, ( Table 4 ) (p < 0.05). It was also found that among those who continued to use the drug, the number of use days per month in the post-test and the follow-up periods (two months) was significantly lower in the intervention group than the control group.\n\nTable 4Cessation and consumption between groups DBTControlp-valueCessation*   Post-test14 (46%)5 (16%)0.01\n‡Follow-up12 (40%)3 (9.5%)0.006\n‡Number of days use †   Post-test2.43±1.87.5±5.030.00\n‡Follow-up3.44±1.918.75±3.270.00\n‡Bold p-values are significant at critical levels.* The chi-square test was applied.† T test for independent samples.‡ Significant to 0.01.\n\nBold p-values are significant at critical levels.\n* The chi-square test was applied.\n† T test for independent samples.\n‡ Significant to 0.01.\nThe hypothesis of equal covariance matrices was examined for craving (Box’s M = 3.63, P = 0.74). The results of this test indicate homogeneity of covariance matrices. Mauchly’s test of sphericity also showed that the sphericity assumption was not violated (p = 0.42 and Mauchly’s W = 0.97).\nThe results of the intergroup test and intergroup relations are also presented in Table 3 . As shown in Table 3 , the effect levels for craving (F = 3.52, p > 0.05) suggest that there is no significant difference between groups. These results were repeated for three of the subscales of craving: compulsivity, expectancy, and purposefulness. However, the emotionality subscale results showed a significant reduction in the DBT group compared to the control group 10.6 vs. 14.4 in the post-test and 10.43 vs. 
13.26 in the follow-up phase (F = 19.94, p < 0.05).\n\nTable 3Repeated measures ANOVA for variables for the DBT group and control group in the pre-test, post-test, and follow-upVariable/sourceType III sum of squaresdfMean squareFSigPartial eta squaredObserved powerCraving       Tests of within-subjects effects       Factor1347.6962173.842.60.07*0.040.512Factor1 × group4.7622.380.0360.110.0010.055Error (factor1)7845.15411866.48    Tests of between-subjects effects       Group341.0841341.083.520.06 †0.0560.455Error5708.795996.75    Emotionality       Tests of within-subjects effects       Factor1257.332128.6511.690.00 †0.1650.9Factor1 × group74.70237.350.370.03*0.050.63Error (factor1)1297.8111810.99    Tests of between-subjects effects       Group283.891283.8919.940.00 †0.250.9Error839.9065914.23    * Significant to 0.05.† Significant to 0.01.\n\n* Significant to 0.05.\n† Significant to 0.01.\nWith regard to cessation, the results indicated that DBT achieved a higher rate of cessation than the control treatment in both the post-test and at follow-up, ( Table 4 ) (p < 0.05). It was also found that among those who continued to use the drug, the number of use days per month in the post-test and the follow-up periods (two months) was significantly lower in the intervention group than the control group.\n\nTable 4Cessation and consumption between groups DBTControlp-valueCessation*   Post-test14 (46%)5 (16%)0.01\n‡Follow-up12 (40%)3 (9.5%)0.006\n‡Number of days use †   Post-test2.43±1.87.5±5.030.00\n‡Follow-up3.44±1.918.75±3.270.00\n‡Bold p-values are significant at critical levels.* The chi-square test was applied.† T test for independent samples.‡ Significant to 0.01.\n\nBold p-values are significant at critical levels.\n* The chi-square test was applied.\n† T test for independent samples.\n‡ Significant to 0.01.", "In the psycho-education group, 24/31 participants completed all sessions, compared to 29/30 members of the DBT group (retention rates: 77% in the control group vs. 96% in the DBT group). Additionally, 96% (29/30) of the DBT group members completed the two-month follow-up, whereas 64.5% (20/31) of the control group completed follow-up ( Figure 1 ). The chi-square test was applied, showing associations between group and retention, with χ2= 4.95, p = 0.02 for post-treatment and, χ2= 9.97, p = 0.002 for the follow-up phase. Consequently, DBT retention rates were significantly higher than psycho-education retention rates at post-treatment and follow-up.\n\nFigure 1Consort diagram.\n", "To enable assessment of the acceptability and appropriateness of intervention, patients completed the AIM and IAM scales in the post-treatment phase. The acceptability scores were 16.57 for DBT and 9.6 for the control group (p < 0.05). The appropriateness scores were17.03 for DBT and 10.7 for the psychoeducation group (p < 0.05). Since there are no standards for these measures, these points were transferred to Likert-based questionnaire scales. For acceptability, the results equated to ‘’agree’’ for the DBT group versus ‘’neither agree nor disagree’’ for the psychoeducation group. For appropriateness, the results equated to ‘’completely agree’’ for the DBT group versus ‘’neither agree nor disagree’’ for the psychoeducation group.", "Participants’ demographic variables are shown in Table 2 . Analyses showed that there were no significant differences between the two groups regarding these variables. 
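Because Table 2 reports only summary statistics, the between-group demographic comparisons can be reproduced from those values alone. The sketch below assumes the equal-variance independent-samples t test indicated in the table footnote and uses the reported age means and standard deviations; it is an illustration rather than the original analysis script.

```python
# Sketch: verifying the Table 2 age comparison from its summary statistics.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=25.6,  std1=5.67, nobs1=30,   # DBT group
                            mean2=27.19, std2=7.48, nobs2=31,   # control group
                            equal_var=True)
print(f"age: t = {t:.2f}, p = {p:.2f}")  # p is ~0.35, consistent with the reported 0.3
```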
In the psycho-education group, 24/31 participants completed all sessions, compared to 29/30 members of the DBT group (retention rates: 77% in the control group vs. 96% in the DBT group). Additionally, 96% (29/30) of the DBT group members completed the two-month follow-up, whereas 64.5% (20/31) of the control group completed follow-up (Figure 1). The chi-square test was applied, showing associations between group and retention, with χ² = 4.95, p = 0.02 for post-treatment and χ² = 9.97, p = 0.002 for the follow-up phase. Consequently, DBT retention rates were significantly higher than psycho-education retention rates at post-treatment and follow-up.

Figure 1. Consort diagram.
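Since the retention comparison is a simple 2 x 2 chi-square, it can be checked directly from the completion counts given above. The SciPy sketch below, run without continuity correction, approximately reproduces the reported χ² values of 4.95 and 9.97.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Completed vs. did not complete, DBT (n = 30) vs. psycho-education (n = 31)
post_treatment = np.array([[29, 1],    # DBT: 29 completed all sessions
                           [24, 7]])   # control: 24 completed all sessions
follow_up = np.array([[29, 1],         # DBT: 29 completed the two-month follow-up
                      [20, 11]])       # control: 20 completed the follow-up

for label, table in [("post-treatment", post_treatment), ("follow-up", follow_up)]:
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"{label}: chi2 = {chi2:.2f}, p = {p:.3f}")
# Output is close to the reported chi2 = 4.95 (p = 0.02) and chi2 = 9.97 (p = 0.002).
```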
To enable assessment of the acceptability and appropriateness of the intervention, patients completed the AIM and IAM scales in the post-treatment phase. The acceptability scores were 16.57 for DBT and 9.6 for the control group (p < 0.05). The appropriateness scores were 17.03 for DBT and 10.7 for the psychoeducation group (p < 0.05). Since there are no standards for these measures, these points were transferred to Likert-based questionnaire scales. For acceptability, the results equated to "agree" for the DBT group versus "neither agree nor disagree" for the psychoeducation group. For appropriateness, the results equated to "completely agree" for the DBT group versus "neither agree nor disagree" for the psychoeducation group.

Participants' demographic variables are shown in Table 2. Analyses showed that there were no significant differences between the two groups regarding these variables. It should be noted that, since over 97% of the participants were male from the beginning, the results were reported only for men.

Table 2. Mean and standard deviation of demographic variables in the intervention and control groups at the test phase

Variable                                  Intervention group   Control group   p-value
Educational level*                                                             0.2
  No higher education, n (%)              2 (6)                6 (19)
  Diploma, n (%)                          14 (46)              15 (48)
  University student or graduate, n (%)   14 (46)              10 (33)
Age†                                      25.6 (5.67)          27.19 (7.48)    0.3
Months of marijuana use                   19.53 (5.9)          17.48 (6.03)    0.1
Craving (total)†
  Pre-test                                45.2 (8.3)           47.9 (10.2)     0.2
  Post-test                               42.13 (7.7)          44.48 (8.1)     0.1
  Follow-up                               42.66 (9.25)         45.8 (8.4)      0.1
Data presented as mean (standard deviation), unless otherwise specified. * Chi-square test. † Independent t test.
This study examined the feasibility, acceptability, and preliminary efficacy of a 16-session DBT intervention to address craving and achieve cessation in MUD patients. The intervention showed strong evidence of feasibility, and acceptability and appropriateness ratings in the DBT group were high and adequate. The results showed that DBT is a promising intervention for marijuana cessation in patients with MUD. Although this study is the first RCT of DBT for MUD, the scientific literature on DBT for other addictive behaviors reports similar results.

Rezaei et al. found that DBT significantly improved craving among methadone users; their results showed that DBT could reduce methadone usage and improve emotion regulation. 28 Moreover, another study showed that implementation of DBT with alcohol-dependent patients improved alcohol-related behavior and emotional deficits, which is similar to the results of the present study. 29 However, our results for craving showed no significant difference between groups. With regard to this finding, our result differs from the majority of other research findings. For example, as noted above, Rezaei et al. found that DBT significantly improved craving among methadone users, 10 and this result was also repeated in Rabinovitz's paper. 30 One of the main reasons for this difference lies in the finding in the present study that DBT produced greater improvement in the emotionality subscale of craving (p < 0.05). Since the most important structural component of marijuana craving is its emotional dimension, 5 , 14 the lack of change in the other subscales resulted in non-significance for the overall craving scale score. The results of the present study regarding craving are therefore somewhat co-directional with the findings of previous studies. On the behavioral level, other studies found that marijuana craving cues were strongly associated with deficits in regulation of negative affect and emotions. 14 , 15 Also, when neural activity was assessed during reappraisal of negative stimuli, patients with MUD and regular users showed altered neural activity and functional connectivity, and being a marijuana user was related to dysfunctions in the amygdala and in amygdala-DLPFC coupling activity. 14 Taken together, these findings demonstrate that emotion regulation problems and craving are prevalent in MUD patients and can interfere with the cessation process.

Since DBT is a third-wave behavioral therapy, it has a strong emotional basis. This therapy encompasses three emotion-based goals: understanding emotions, reducing emotional vulnerability, and reducing emotional suffering. Patients are helped to understand that unpleasant emotions are a normal part of life and that accepting their existence is healthier than trying to avoid or control them. 10 , 28 Overall, DBT is an emotion regulation method that helps patients learn, understand, and label emotions, reducing emotional vulnerability and emotional suffering. These skills help MUD patients to label emotions related to craving, and this improvement in emotional states can improve dysfunctions in the amygdala and in amygdala-DLPFC coupling activity. 9 , 31 , 32 Along the same lines, it also improves emotional craving-related brain structures and reduces impulsive behavior (e.g., lapses). Also, through the "distress tolerance" component of DBT, patients learn to live with destructive emotions and to accept unpleasant craving situations, 32 , 33 so that when craving rises they no longer consume marijuana immediately. 22 , 24 Similarly, other DBT components teach MUD patients reinforcement management and problem-solving skills that can help them to reduce marijuana consumption. 34 , 35
To conclude, DBT demonstrated adequate feasibility, acceptability, and appropriateness for patients with marijuana use disorder. Moreover, DBT also exhibited significant efficacy compared to the control group for achieving cessation and reducing emotion-related craving. Even in patients who could not achieve abstinence, DBT led to a reduction in marijuana consumption rates. These findings persisted at two-month follow-up.

Limitations and future directions
Despite these positive results, the present study also has some limitations. First, in order to evaluate the most significant treatment components (such as mindfulness and distress tolerance), no group received the third-wave versions of other therapies (ACT or MBSR). This study only had a two-month follow-up period and could not conduct long-term evaluation due to the study site's medical and infrastructure conditions. It is recommended that future research examine mediating and confounding variables when investigating findings similar to those of the present study. Other factors affecting relapse and recurrence could also be examined. Moreover, women should be investigated so that gender-related implications can be determined.
[ "intro", "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusions", null ]
[ "Dialectical behavior therapy", "marijuana use", "feasibility studies", "craving", "lapse" ]
Introduction

Marijuana is the most prevalent of the substances reported to be a significant problem among people seeking treatment for substance abuse. 1 According to WHO reports, more than 140 million people consume marijuana every year. 2 With regard to Iran, recent evidence shows that more than 5% of people consume marijuana every year, predominantly young males; however, in view of the harsh marijuana prohibition policy of the Iranian government, most clinicians estimate that these rates have been hugely underestimated. 3

Marijuana, as an illegal drug, is associated with significant physical, psychological, and social consequences. 4 Studies have shown that regular and heavy marijuana use patterns correlate with increased risk of mood disorders, anxiety, and psychotic episodes, and although causality has not been demonstrated, these patterns can worsen the course of mental health problems. 5 Several medical problems, such as respiratory system deficits, stroke, myocardial infarction, and digestive tract cancers, are also associated with marijuana use patterns, especially among those with marijuana use disorder (MUD). 6 , 7 Approximately one in three marijuana users meets the criteria for MUD based on the DSM-5, and this proportion is rising. 8

One of the most important psychological problems in substance use disorder treatment is craving, a factor identified as the root cause of relapses and treatment failures. 9 , 10 MUD patients report visual, tactile, and olfactory cues related to craving and compulsivity sensations. 11 Based on these results, clinicians have tried to treat patients with marijuana use disorder. To date, the Food and Drug Administration (FDA) in the United States has not approved any psycho-pharmacotherapy for MUD, and therefore psycho-social interventions have received particular attention. 12

The most widely used psychological treatment in the substance use disorder (SUD) context is cognitive-behavioral therapy (CBT). 13 Results show that CBT is somewhat effective for SUD, but most patients with MUD do not achieve cessation and are not motivated to continue skills training during follow-up; relapse rates therefore remain a considerable limitation of treatment. 10 , 13 One of the main reasons for this low success rate lies in the limitations of CBT. First, CBT protocols do not focus on comorbid problems, whereas most patients with SUD have at least one psychiatric or psychological problem. Secondly, cognitive restructuring may not be useful for all SUD patients: some may be unable to restructure their dysfunctional cognition and core beliefs despite receiving CBT. 10 , 12 , 13

Furthermore, emotion regulation deficits are strongly associated with increased addictive behaviors such as SUD. Through emotion regulation, people adjust their emotional experiences related to distressing and unpleasant events; emotion regulation is essential for successful coping with environmental demands and personal welfare. 14 On the behavioral level, studies have found that marijuana craving cues are strongly associated with deficits in regulation of negative affect and emotions. 14 , 15 On the neural level, during reappraisal of negative stimuli, patients with MUD and regular users have shown altered neural activity and functional connectivity. Moreover, marijuana use is related to dysfunctions in the amygdala and in amygdala-dorsolateral prefrontal cortex (DLPFC) coupling activity. 15
Together, these findings demonstrate that emotion-based psychotherapy must manage comorbid problems and eliminate the limitations of CBT. One of the psychotherapies in the third-wave behavioral therapy cluster is dialectical behavior therapy (DBT). DBT has been described as an intervention for emotion regulation deficits that focuses on dangerous impulses in borderline personality disorder and substance use disorder. The goals of DBT include improving and regulating emotions as a primary mechanism of change. DBT is a trans-diagnostic treatment and suitable for comorbid problems. It trains skills including distress tolerance, interpersonal effectiveness, emotion regulation, and mindfulness. Overall, in the context of SUD, DBT teaches emotion regulation skills to decrease engagement in pathological emotion regulation strategies. It also intervenes in low quality-of-life situations, reduces drug-seeking behavior, and helps patients function adaptively by accepting unpleasant emotions such as craving. 9 , 10 , 14

The research literature shows the efficacy and effectiveness of DBT for various comorbid problems and populations, such as suicide, 16 forensic psychiatric patients, 17 and irritable bowel syndrome. 18 Nevertheless, studies have reported contradictory results for the effectiveness of implementing DBT in various SUD populations, 19 , 20 and the literature has recommended using larger samples, clearer instruments to measure outcome variables, and specific, integrated protocols. 21 Additionally, according to our investigations, no DBT randomized clinical trials have been conducted that investigated cessation in MUD patients (with or without comorbid problems). A DBT intervention aimed at increasing the cessation rate and reducing craving among MUD patients was developed for this study. This pilot trial investigated the feasibility and preliminary efficacy of DBT relative to a psycho-education intervention that was controlled for time duration and attention. Feasibility was assessed via satisfaction and session completion rates. Preliminary efficacy was evaluated via the impact of DBT on cessation rate and reduction of consumption rates, compared to the psycho-education intervention. Although craving and acceptance of craving are not the primary goals of DBT, they were also compared across the two interventions.

Methods

Trial design
This study was designed as a controlled randomized clinical trial, including pretest, post-test, and two-month follow-up phases.

Sample size
Since the sampling method comprised snowball sampling and strict eligibility criteria were applied, on the basis of data from similar studies 10 it was determined that at least 20 participants were needed in each group. However, in view of the predicted retention rates, we selected 30 patients for each group.
Selection criteria
The inclusion criteria were as follows: 1) diagnosis of marijuana use disorder; 2) age 18 years or over; 3) no current or past history of major psychiatric disorders; 4) no other concurrent SUD treatment; and 5) willingness to attend intervention sessions, complete surveys, and take tests (questionnaires and urine test kits). Exclusion criteria were as follows: 1) unwillingness to participate; 2) not participating in intervention sessions for more than two weeks; 3) starting secondary psychotherapy; and 4) consuming methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, or morphine during the research stages.

Participants, procedures, and randomization
Since there are no cannabis use disorder treatment centers in Iran, there is no specific place to select patients. Furthermore, patients at drug treatment centers are referred for treatment of other substance use disorders, and comorbidity of drug use is one of the exclusion criteria for this study, since it could lead to misleading results. Therefore, the relatives and acquaintances of those who had been referred to the drug treatment center were interviewed. From November 1, 2019, to November 5, 2019, 15 relatives and family members of drug users referred to drug treatment centers were diagnosed with MUD at this stage. Then, using snowball sampling, after 15 days of investigation, a total of 83 patients were diagnosed with MUD. Seventy-five of the 83 MUD patients who were approached consented, eight declined to participate, and 14 were ineligible. The primary reasons for declining were anxiety about addiction stigma and time constraints. Most of the ineligible patients had multiple illicit use disorders, so they did not meet the study criteria. Therefore, 61 patients completed the baseline assessment and were included in the current analyses. These patients were randomly assigned to each group using a random number table. The interventions were implemented from December 1, 2019, to March 20, 2020. The follow-up phase started on March 21, 2020, and ended on May 20, 2020 (two-month follow-up). In order to test for exclusion criteria before each session, a six-drug test kit for methamphetamine, amphetamine, cannabis, methadone, benzodiazepines, and morphine was administered to individuals using urine samples.
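The allocation described above used a random number table. Purely as an illustration, the sketch below shows an equivalent computerized 1:1 allocation of the 61 baseline completers; the seed and participant identifiers are arbitrary assumptions and do not represent the procedure actually used.

```python
# Illustrative sketch: the study allocated participants with a random number table.
# This shows an equivalent simple randomization of 61 participants in Python.
import random

random.seed(2019)                                    # arbitrary seed for reproducibility
participants = [f"P{i:02d}" for i in range(1, 62)]   # hypothetical participant IDs

shuffled = participants[:]
random.shuffle(shuffled)
dbt_group = sorted(shuffled[:30])        # 30 allocated to DBT
control_group = sorted(shuffled[30:])    # 31 allocated to psycho-education

print(len(dbt_group), "assigned to DBT;", len(control_group), "assigned to control")
```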
Blinding
Both groups were blind to the existence of the other group in the study. Patients were informed that they were participating in research, but not about the other group. One day after the end of treatment, the post-test was carried out by mental health technicians with a master's degree in psychology.

Outcome measures

Abstinence
A marijuana urine test kit prepared by Kian Teb Company (officially licensed by the National Medical Device Directorate, IR Iran) was used to identify abstainers.

Marijuana smoking
A self-report scale was designed for patients who had lapsed during the post-test and follow-up periods. On this scale, patients indicate the number of days of consumption over 30-day periods. The first thirty days after the last intervention session were considered the post-test smoking period, and the second month of follow-up was considered the follow-up marijuana use period.

Craving
The Marijuana Craving Questionnaire (MCQ) short form is a 12-item self-report questionnaire with ten items for subjective assessment of cannabis craving. The scale covers four factors: compulsivity, emotionality, expectancy, and purposefulness. According to how they were thinking or feeling "right now," patients placed checkmarks on the questionnaire to endorse responses ranging from 1 (strongly disagree) to 7 (strongly agree). Results showed that this questionnaire's internal consistency is adequate (α = 0.90). The measure was administered following a 12-hour deprivation period. The typical onset of marijuana craving and withdrawal symptoms is observed within approximately one day of cessation, so the questionnaire scores in the current paper can be conceptualized as an index of the propensity to experience marijuana craving following deprivation. 22 In Iranian MUD patients, the MCQ had internal consistency of α = 0.87. Details of the MCQ's psychometric properties will be published as a separate study as soon as possible.
Acceptability
The Acceptability of Intervention Measure (AIM) was employed to measure the acceptability of the interventions. AIM items are rated on a 5-point Likert scale (from Completely Disagree, 1 point, to Completely Agree, 5 points), and the mean of the points scored across items is taken as the final score. This questionnaire was developed by Weiner et al., who reported Cronbach's α = 0.85 for internal consistency. 23

Appropriateness
The Intervention Appropriateness Measure (IAM) was used for appropriateness. The IAM consists of a four-item scale that measures perceived intervention appropriateness. Items are rated on a five-point Likert scale (Completely Disagree to Completely Agree), and the mean of the points scored across items is taken as the final score. Higher scores mean that the participant feels the intervention is more appropriate for him/her. For this tool, Cronbach's α = 0.91 and all factor loadings are reported as higher than 0.8. 23
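The craving and implementation measures above are summarized by item scores and by internal consistency (Cronbach's α). As a small illustration of how a scale total and the reported α values are computed from an item-by-participant matrix, here is a NumPy sketch; the simulated item responses are placeholders, not study data, so the resulting α will not match the reported 0.87-0.91 obtained from real, correlated responses.

```python
# Illustrative sketch: scale total and Cronbach's alpha from item responses.
# The random data below are placeholders, not the study's questionnaire data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = participants, columns = scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
mcq_items = rng.integers(1, 8, size=(61, 12)).astype(float)  # 61 participants, 12 items (1-7)
total_scores = mcq_items.sum(axis=1)                         # per-participant MCQ total

print("example MCQ totals:", total_scores[:5])
print("Cronbach's alpha:", round(cronbach_alpha(mcq_items), 2))
```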
Intervention

Dialectical behavior therapy
DBT is a group intervention consisting of 16 sessions (meeting once a week for 90 minutes) with one psychotherapist and her co-therapist. The intervention protocol is an adaptation of DBT for SUD based on three basic manuals. 10 , 24 , 25 The primary objective of the DBT is to reduce dysfunction in emotion regulation and craving via increasing cessation rates and improving skills. A psychotherapist with a PhD delivered the intervention sessions (with a psychologist as co-therapist), and they were blind both to the existence of the other group and to the study objectives. Table 1 shows the content covered in each DBT session.

Table 1. DBT content per session

Pre-session: Explanation of dialectical behavioral therapy, principles, and goals. Brief introduction to the content of each session. Familiarity with participants. Participants are given an intervention booklet to read at home.
1st session (mindfulness 1): Introduce the concept of mindfulness and three mental states (wise, reasonable, emotional) and their relations with substance use.
2nd session (mindfulness 2): Teach two clusters of mindfulness skills. The first includes viewing, participation, and description. The second includes a non-judgmental stance and inclusive self-consciousness.
3rd session: Summarize the mindfulness sessions; definition of addiction, standard therapies of addiction, introduction to and teaching of the dialectical avoidance technique. Review the positive and negative aspects of abstinence. Explanation and investigation of relapse and its causes. Explaining the skill of the pure mind and the addicted mind, the types of behaviors related to the pure mentality and the addicted mentality, and preparing a list of supporters.
4th-5th sessions (distress tolerance): Teaching distraction strategies with five skills (activities, comparisons, emotions, thoughts, and enjoyment): enjoyable activities, focusing on work or other topics, counting, leaving the situation, paying attention to daily tasks, and distracting from thoughts and self-harm behaviors. Teaching and training self-soothing with the five senses.
6th-7th sessions (emotion regulation): Definition of emotion, how emotions work, familiarity with emotion regulation skills. Emotion identification exercise, emotion registration exercise. Identifying barriers to experiencing emotion in a healthy way and ways to overcome these barriers. Teaching creation of short-term positive emotional experiences for experiencing positive emotional states.
8th-10th sessions (emotion regulation and distress tolerance in an MUD context): Explain craving and its connection to the experience of emotions. Introduce methods for identifying values. Importance of committed action based on a list of essential values in life. Develop new coping strategies in response to unpleasant emotions, sensations, and cognitions, especially craving as a multidimensional problem, and teach problem solving and behavior analysis.
11th session: Basic acceptance technique training. Introduce living-in-the-present-moment techniques.
12th-13th sessions: Interpersonal effectiveness training. Participants learn assertiveness skills with respect to substance users. Other skills include non-verbal communication, verbal communication, and problem-solving, decision-making, and listening skills.
14th-16th sessions: Review of sessions. Elimination of ambiguities. Exercising skills in the presence of other people.
Psychoeducation
This option is more ethical than not offering any intervention to the control group. A psychiatrist with five years of experience in addiction psychotherapy implemented this intervention, which covers problem-solving skills, assertiveness, and craving management in eight sessions. The psychoeducation intervention was used to provide a basis for comparison with only those elements of the DBT intervention that differ from other psychotherapies. This intervention is utilized for MUD and health-related problems. Its intention is to give individuals struggling with craving and substance use disorder the knowledge needed to comprehensively appreciate their problems and the empowerment needed to cope with them. The psychoeducation intervention included information about the dangers of marijuana, and the therapists also provided a pamphlet containing techniques for reduction of craving. 14 , 26 , 27
Therapists and treatment adherence
To enable adherence to the principles of DBT to be checked, audio recordings of the sessions were made with the consent of all participants. A DBT researcher and psychotherapist who was not involved in the treatment groups checked session content afterwards. Sessions were divided into 15-minute modules that were chosen for adherence checks at random. Treatment stance and the occurrence and depth of DBT processes were appraised. Based on the treatment manual, modules were rated for adherence as either adequate or not adequate. The majority (83%) were judged to have been conducted adequately.
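The fidelity check described above rates randomly chosen 15-minute modules. Below is a minimal sketch of such a random audit selection; the six modules per session follow from the 90-minute session length, but the 25% audit fraction is an assumption for illustration, since the text does not report how many modules were rated.

```python
# Illustrative sketch of randomly selecting 15-minute modules for fidelity rating.
# 16 weekly 90-minute sessions give 6 modules each; the audit fraction is assumed.
import random

random.seed(1)
modules = [(session, block) for session in range(1, 17)   # 16 DBT sessions
           for block in range(1, 7)]                      # 90 min / 15 min = 6 blocks

audited = sorted(random.sample(modules, k=len(modules) // 4))
print(f"{len(audited)} of {len(modules)} modules selected for adherence rating")
print("first selections (session, block):", audited[:5])
```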
Statistical method
Demographic information was gathered and reported as frequencies, means, and standard deviations, and repeated measures ANOVA and chi-square tests were conducted for the outcomes using SPSS software, version 26.
Ethical considerations: Written informed consent was obtained from all participants before initiation of the research. The tools used in this study were all filled out anonymously, and an ID code was used to maintain the confidentiality of personal information (Ir.kums.rce.1398.1203). At the end of the research process, dialectical behavior therapy was also provided to the control group. This study is registered with the Thailand Registry of Clinical Trials (TCTR20200319007).

Results:

Feasibility: In the psycho-education group, 24/31 participants completed all sessions, compared to 29/30 members of the DBT group (retention rates: 77% in the control group vs. 96% in the DBT group). Additionally, 96% (29/30) of the DBT group members completed the two-month follow-up, whereas 64.5% (20/31) of the control group completed follow-up (Figure 1). The chi-square test showed associations between group and retention, with χ2 = 4.95, p = 0.02 at post-treatment and χ2 = 9.97, p = 0.002 at the follow-up phase. Consequently, DBT retention rates were significantly higher than psycho-education retention rates at post-treatment and follow-up. Figure 1. Consort diagram.

Acceptability and appropriateness: To enable assessment of the acceptability and appropriateness of the intervention, patients completed the AIM and IAM scales in the post-treatment phase. The acceptability scores were 16.57 for DBT and 9.6 for the control group (p < 0.05). The appropriateness scores were 17.03 for DBT and 10.7 for the psychoeducation group (p < 0.05). Since there are no standards for these measures, these scores were mapped onto Likert-based questionnaire scales. For acceptability, the results equated to "agree" for the DBT group versus "neither agree nor disagree" for the psychoeducation group. For appropriateness, the results equated to "completely agree" for the DBT group versus "neither agree nor disagree" for the psychoeducation group.
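To make the retention comparison concrete, the short check below (not from the original study code) reproduces the reported chi-square statistics from the completer counts stated above. The 2×2 layout and the omission of the continuity correction are inferred assumptions, chosen because they reproduce the published values.

```python
# Reproduce the retention chi-square tests from the reported completer counts.
from scipy.stats import chi2_contingency

# Rows: DBT group, control (psycho-education) group.
# Columns: completed, did not complete.
post_treatment = [[29, 1], [24, 7]]   # 29/30 vs. 24/31 completed all sessions
follow_up      = [[29, 1], [20, 11]]  # 29/30 vs. 20/31 completed the follow-up

for label, table in [("post-treatment", post_treatment), ("follow-up", follow_up)]:
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"{label}: chi2 = {chi2:.2f}, p = {p:.3f}")
# These reproduce the reported chi2 = 4.95 (p = 0.02) and chi2 = 9.97 (p = 0.002).
```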
Participant characteristics: Participants' demographic variables are shown in Table 2. Analyses showed that there were no significant differences between the two groups regarding these variables. It should be noted that since over 97% of the participants were male from the beginning, the results were reported only for men.

Table 2. Mean and standard deviation of demographic variables in the intervention and control groups at the test phase
Variable | Intervention group | Control group | p-value
Educational level* | | | 0.2
  No higher education, n (%) | 2 (6) | 6 (19) |
  Diploma, n (%) | 14 (46) | 15 (48) |
  University student or graduate, n (%) | 14 (46) | 10 (33) |
Age† | 25.6 (5.67) | 27.19 (7.48) | 0.3
Months of marijuana use | 19.53 (5.9) | 17.48 (6.03) | 0.1
Craving (total)† | | |
  Pre-test | 45.2 (8.3) | 47.9 (10.2) | 0.2
  Post-test | 42.13 (7.7) | 44.48 (8.1) | 0.1
  Follow-up | 42.66 (9.25) | 45.8 (8.4) | 0.1
Data presented as mean (standard deviation), unless otherwise specified. * Chi-square test. † Independent t test.

Efficacy outcomes: The hypothesis of equal covariance matrices was examined for craving (Box's M = 3.63, p = 0.74). The results of this test indicate homogeneity of covariance matrices. Mauchly's test of sphericity also showed that the sphericity assumption was not violated (p = 0.42, Mauchly's W = 0.97). The results of the within-subjects and between-subjects tests are presented in Table 3. As shown in Table 3, the effect for craving (F = 3.52, p > 0.05) suggests that there is no significant difference between groups. These results were repeated for three of the subscales of craving: compulsivity, expectancy, and purposefulness. However, the emotionality subscale showed a significant reduction in the DBT group compared to the control group (10.6 vs. 14.4 at post-test and 10.43 vs. 13.26 at follow-up; F = 19.94, p < 0.05).
Table 3. Repeated measures ANOVA for the DBT group and control group in the pre-test, post-test, and follow-up
Variable/source | Type III sum of squares | df | Mean square | F | Sig | Partial eta squared | Observed power
Craving – tests of within-subjects effects
  Factor1 | 347.696 | 2 | 173.84 | 2.6 | 0.07* | 0.04 | 0.512
  Factor1 × group | 4.76 | 2 | 2.38 | 0.036 | 0.11 | 0.001 | 0.055
  Error (factor1) | 7845.154 | 118 | 66.48 | | | |
Craving – tests of between-subjects effects
  Group | 341.08 | 1 | 341.08 | 3.52 | 0.06† | 0.056 | 0.455
  Error | 5708.79 | 59 | 96.75 | | | |
Emotionality – tests of within-subjects effects
  Factor1 | 257.33 | 2 | 128.65 | 11.69 | 0.00† | 0.165 | 0.9
  Factor1 × group | 74.70 | 2 | 37.35 | 0.37 | 0.03* | 0.05 | 0.63
  Error (factor1) | 1297.81 | 118 | 10.99 | | | |
Emotionality – tests of between-subjects effects
  Group | 283.89 | 1 | 283.89 | 19.94 | 0.00† | 0.25 | 0.9
  Error | 839.906 | 59 | 14.23 | | | |
* Significant to 0.05. † Significant to 0.01.

With regard to cessation, the results indicated that DBT achieved a higher rate of cessation than the control treatment in both the post-test and at follow-up (Table 4; p < 0.05). It was also found that among those who continued to use the drug, the number of use days per month in the post-test and the follow-up periods (two months) was significantly lower in the intervention group than the control group.

Table 4. Cessation and consumption between groups
 | DBT | Control | p-value
Cessation*
  Post-test | 14 (46%) | 5 (16%) | 0.01‡
  Follow-up | 12 (40%) | 3 (9.5%) | 0.006‡
Number of days use†
  Post-test | 2.43 ± 1.8 | 7.5 ± 5.03 | 0.00‡
  Follow-up | 3.44 ± 1.91 | 8.75 ± 3.27 | 0.00‡
Bold p-values are significant at critical levels. * Chi-square test. † T test for independent samples. ‡ Significant to 0.01.
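As a quick arithmetic check on Tables 3 and 4 (illustrative only, not part of the original analysis), the script below recomputes an F ratio and the partial eta squared values from the reported sums of squares, and the post-test cessation chi-square from the reported counts. The group denominators of 30 and 31 and the uncorrected chi-square are assumptions inferred from the reported percentages and p-value.

```python
# Recompute a few reported statistics from the published summary numbers.
from scipy.stats import chi2_contingency

# Emotionality, between-subjects effect (Table 3): F = MS_group / MS_error.
ss_group, df_group = 283.89, 1
ss_error, df_error = 839.906, 59
F = (ss_group / df_group) / (ss_error / df_error)
partial_eta_sq = ss_group / (ss_group + ss_error)
print(f"F = {F:.2f}, partial eta^2 = {partial_eta_sq:.2f}")  # ~19.94 and ~0.25, as reported

# Craving, between-subjects effect: partial eta^2 = 341.08 / (341.08 + 5708.79).
print(f"{341.08 / (341.08 + 5708.79):.3f}")  # ~0.056, as reported

# Cessation at post-test (Table 4): 14/30 (DBT) vs. 5/31 (control) achieved cessation.
table = [[14, 16], [5, 26]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # chi2 ~ 6.63, p ~ 0.010, consistent with the reported p = 0.01
```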
Discussion and conclusion: This study examined the feasibility, acceptability, and preliminary efficacy of a 16-session DBT intervention to address craving and achieve cessation in MUD patients. The intervention showed strong evidence of feasibility, and acceptability and appropriateness rates in the DBT group were high and adequate. The results showed that DBT is a promising intervention for marijuana cessation in patients with MUD. Although this study is the first RCT of DBT for MUD, the scientific literature about DBT for other addictive behaviors reports similar results. Rezaei et al. found that DBT significantly improved craving among methadone users; their results showed that DBT could reduce methadone usage and improve emotion regulation. 28
Moreover, another study showed that implementation of DBT with alcohol-dependent patients improved alcohol-related behavior and emotional deficits, which is similar to the results of the present study. 29 However, the results for overall craving showed no significant difference between groups. With regard to this finding, our result differs from the majority of other research findings; for example, Rezaei et al. found that DBT significantly improved craving among methadone users, 10 and this result was also repeated in Rabinovitz's paper. 30 One of the main reasons for this difference lies in the finding in the present study that DBT produced greater improvement in the emotionality subscale of craving (p < 0.05). Since the most important structure of marijuana craving is its emotional dimension, 5, 14 the lack of change in the other subscales resulted in non-significance for the overall craving scale score. The results of the present study with respect to craving are therefore somewhat co-directional with the findings of previous studies. On the behavioral level, other studies found that marijuana craving cues were strongly associated with deficits in regulation of negative affect and emotions. 14, 15 Also, when neural levels were assessed during reappraisal of negative stimuli, patients with MUD and regular users showed altered neural activity and functional connectivity. Furthermore, being a marijuana user was related to dysfunctions in the amygdala and in amygdala-DLPFC coupling activity. 14 Taken together, these findings demonstrate that emotion regulation problems and craving are prevalent in MUD patients and can interfere with the cessation process. Since DBT is a third-wave behavioral therapy, it has a strong emotional basis. This therapy encompasses three emotion-based goals: understanding emotions, reducing emotional vulnerability, and reducing emotional suffering. Patients are helped to understand that unpleasant emotions are a normal part of life and that accepting their existence is healthier than trying to avoid or control them. 10, 28 Overall, DBT is an emotion regulation method that helps patients learn, understand, and label emotions, reducing emotional vulnerability and emotional suffering. These skills help MUD patients to label emotions related to craving. This improvement in emotional states can improve dysfunctions in the amygdala and in amygdala-DLPFC coupling activity. 9, 31, 32 Along the same lines, it also improves emotional craving-related brain structures and reduces impulsive behavior (e.g., lapses). Also, with the "distress tolerance" component of DBT, patients learn to live with destructive emotions and to accept unpleasant craving situations, 32, 33 so that when craving increases they no longer consume marijuana immediately. 22, 24 Similarly, other DBT components teach MUD patients reinforcement management and problem-solving skills that can help them to reduce marijuana consumption. 34, 35

Conclusion: To conclude, DBT demonstrated adequate feasibility, acceptability, and appropriateness for patients with marijuana use disorder. Moreover, DBT also exhibited significant efficacy compared to the control group for achieving cessation and reducing emotion-related craving. Even in patients who could not achieve abstinence, DBT led to a reduction in marijuana consumption rates. These findings persisted at two-month follow-up.

Limitations and future directions: Despite these positive results, the present study also has some limitations.
First, in order to evaluate the most significant treatment components (such as mindfulness and distress tolerance), no groups received the third-wave versions of other therapies (ACT or MBSR). Second, this study had only a two-month follow-up period and could not conduct long-term evaluation due to the study site's medical and infrastructure conditions. It is recommended that future research examine mediating and confounding variables when investigating results similar to those of the present study. Other factors affecting relapse and recurrence could also be examined. Moreover, women should be studied so that gender-related implications can be determined.
Background: Thailand Registry of Clinical Trials, TCTR20200319007. Methods: Sixty-one patients with marijuana use disorder diagnoses were randomly assigned to a DBT group or a control group (psycho-education). Patients completed measures at pre-intervention, post-intervention, and at two-month follow-up. The Marijuana Craving Questionnaire (MCQ) and marijuana urine test kits were used to assess craving and abstinence, respectively. Results: The feasibility of DBT was significantly higher than control group feasibility. In the DBT group, 29/30 participants completed all sessions (96% retention) and 24/31 control group participants completed all sessions (77% retention) (χ2 = 4.95, p = 0.02). Moreover, 29/30 (96%) participants in the DBT group completed the two-month follow-up and 20/31 (64.5%) control group members completed the two-month follow-up (χ2 = 9.97, p = 0.002). The results showed that patients in the DBT group had significantly higher intervention acceptability rates (16.57 vs. 9.6) than those in the control group. This pattern was repeated for appropriateness rates (p < 0.05). The overall results for craving showed that there was no significant difference between the groups (F = 3.52, p > 0.05), although DBT showed a significant reduction in the "emotionality" subscale compared to the control group (F = 19.94, p < 0.05). To analyze cessation rates, DBT was compared to the control group at the post-test (46% vs. 16%) and follow-up (40% vs. 9.5%), and the results confirmed higher effectiveness in the DBT group for cessation (p < 0.05). Furthermore, among those who had lapsed, participants in the DBT group had fewer consumption days than those in the control group (p < 0.05). Conclusions: DBT showed feasibility, acceptability, and promising efficacy in terms of the marijuana cessation rate.
Introduction: Marijuana is the most prevalent substance among those reported to be a significant problem among people seeking treatment for substance abuse. 1 According to WHO reports, more than 140 million people consume marijuana every year. 2 With regard to Iran, recent evidence shows that more than 5% of people consume marijuana every year, predominantly young males. However, in view of the harsh marijuana prohibition policy of the Iranian government, most clinicians estimate that these rates have been hugely underestimated. 3 Marijuana, as an illegal drug, is associated with significant physical, psychological, and social consequences. 4 Studies have shown that regular and heavy marijuana use patterns correlate with increased risk of mood disorders, anxiety, and psychotic episodes and although causality has not been demonstrated, these patterns can increase the course of mental health problems. 5 Also, several medical problems such as respiratory system deficits, stroke, myocardial infarction, and digestive tract cancers are associated with marijuana use patterns, especially among those with marijuana use disorder (MUD). 6 , 7 Approximately one in three marijuana users meet the criteria for MUD based on the DSM-5, and this proportion is rising. 8 One of the most important psychological problems in substance use disorder treatment is craving. Craving is a factor identified as the root cause of relapses and treatment failures. 9 , 10 MUD patients report visual, tactile, and olfactory cues related to craving and compulsivity sensations. 11 Based on these results, clinicians have tried to treat patients with marijuana use disorder. To date, the Food and Drug Administration (FDA) in the United States has not approved any psycho-pharmacotherapy for MUD, and therefore psycho-social interventions have received particular attention. 12 The most widely used psychological treatment in the substance use disorder (SUD) context is cognitive-behavioral therapy (CBT). 13 Results showed that CBT is somewhat effective for SUD, but that most patients with MUD do not achieve cessation and are not motivated to continue skills training during follow-up. Relapse rates therefore remain a considerable limitation of treatment. 10 , 13 One of the main reasons for this low success rate lies in the limitations of CBT. First, CBT protocols do not focus on comorbid problems, whereas most patients with SUD have at least one psychiatric or psychological problem. Secondly, cognitive restructuring may not be useful for all SUD patients. Some patients may be unable to restructure their dysfunctional cognition and core beliefs despite receiving CBT. 10 , 12 , 13 Furthermore, emotion regulation deficits are strongly associated with increased addictive behaviors such as SUD. With emotion regulation, people adjust their emotional experiences related to distressing and unpleasant events. Emotion regulation is essential for successful coping with environmental demands and personal welfare. 14 On the behavioral level, studies have found that marijuana craving cues are strongly associated with deficits in regulation of negative affect and emotions. 14 , 15 Also, on the neural level, during reappraisal of negative stimuli, patients with MUD and regular users have shown altered neural activity and functional connectivity. Moreover, marijuana use is related to dysfunctions in the amygdala and in amygdala-dorsolateral prefrontal cortex (DLPFC) coupling activity. 
15 Together, these findings demonstrate that emotion-based psychotherapy must manage comorbid problems and eliminate the limitations of CBT. One of the psychotherapies in the third-wave behavioral therapy cluster is dialectical behavior therapy (DBT). DBT has been described as an intervention for emotion regulation deficits that focuses on dangerous impulses in borderline personality disorder and substance use disorder. The goals of DBT include improving and regulating emotions as a primary mechanism of change. DBT is a trans-diagnostic treatment and suitable for comorbid problems. DBT trains skills including distress tolerance, interpersonal effectiveness, emotion regulation, and mindfulness. Overall, in the context of SUD, DBT teaches emotion regulation skills to decrease engagement in pathological emotion regulation strategies. It also intervenes in low quality-of-life situations, reduces drug-seeking behavior, and helps patients function adaptively by accepting unpleasant emotions such as craving. 9, 10, 14 The research literature shows the efficacy and effectiveness of DBT for various comorbid problems and populations, such as suicide, 16 forensic psychiatric patients, 17 and irritable bowel syndrome. 18 Nevertheless, studies have reported contradictory results for the effectiveness of implementing DBT in various SUD populations. 19, 20 Furthermore, the literature has recommended using larger samples, clearer instruments to measure outcome variables, and specific and integrated protocols. 21 Additionally, according to our investigations, no DBT randomized clinical trials have been conducted that investigated cessation in MUD patients (with or without comorbid problems). A DBT intervention aimed at increasing the cessation rate and reducing craving among MUD patients was developed for this study. This pilot trial investigated the feasibility and preliminary efficacy of DBT relative to a psycho-education intervention that was controlled for time duration and attention. Feasibility was assessed via satisfaction and session completion rates. Preliminary efficacy was evaluated via the impact of DBT on cessation rate and reduction of consumption rates, compared to the psycho-education intervention. Although craving and acceptance of craving are not the primary goals of DBT, they were also compared across the two interventions.
12,763
375
[ 27, 55, 119, 306, 56, 955, 29, 75, 183, 81, 99, 1670, 540, 145, 105, 35, 78, 150, 139, 198, 480, 131 ]
27
[ "intervention", "test", "dbt", "group", "patients", "craving", "marijuana", "follow", "treatment", "results" ]
[ "marijuana craving emotional", "marijuana use related", "cannabis use disorder", "onset marijuana craving", "marijuana craving questionnaire" ]
[CONTENT] Dialectical behavior therapy | marijuana use | feasibility studies | craving | lapse [SUMMARY]
[CONTENT] Dialectical behavior therapy | marijuana use | feasibility studies | craving | lapse [SUMMARY]
[CONTENT] Dialectical behavior therapy | marijuana use | feasibility studies | craving | lapse [SUMMARY]
[CONTENT] Dialectical behavior therapy | marijuana use | feasibility studies | craving | lapse [SUMMARY]
[CONTENT] Dialectical behavior therapy | marijuana use | feasibility studies | craving | lapse [SUMMARY]
[CONTENT] Dialectical behavior therapy | marijuana use | feasibility studies | craving | lapse [SUMMARY]
[CONTENT] Behavior Therapy | Craving | Dialectical Behavior Therapy | Feasibility Studies | Humans | Marijuana Use | Pilot Projects | Treatment Outcome [SUMMARY]
[CONTENT] Behavior Therapy | Craving | Dialectical Behavior Therapy | Feasibility Studies | Humans | Marijuana Use | Pilot Projects | Treatment Outcome [SUMMARY]
[CONTENT] Behavior Therapy | Craving | Dialectical Behavior Therapy | Feasibility Studies | Humans | Marijuana Use | Pilot Projects | Treatment Outcome [SUMMARY]
[CONTENT] Behavior Therapy | Craving | Dialectical Behavior Therapy | Feasibility Studies | Humans | Marijuana Use | Pilot Projects | Treatment Outcome [SUMMARY]
[CONTENT] Behavior Therapy | Craving | Dialectical Behavior Therapy | Feasibility Studies | Humans | Marijuana Use | Pilot Projects | Treatment Outcome [SUMMARY]
[CONTENT] Behavior Therapy | Craving | Dialectical Behavior Therapy | Feasibility Studies | Humans | Marijuana Use | Pilot Projects | Treatment Outcome [SUMMARY]
[CONTENT] marijuana craving emotional | marijuana use related | cannabis use disorder | onset marijuana craving | marijuana craving questionnaire [SUMMARY]
[CONTENT] marijuana craving emotional | marijuana use related | cannabis use disorder | onset marijuana craving | marijuana craving questionnaire [SUMMARY]
[CONTENT] marijuana craving emotional | marijuana use related | cannabis use disorder | onset marijuana craving | marijuana craving questionnaire [SUMMARY]
[CONTENT] marijuana craving emotional | marijuana use related | cannabis use disorder | onset marijuana craving | marijuana craving questionnaire [SUMMARY]
[CONTENT] marijuana craving emotional | marijuana use related | cannabis use disorder | onset marijuana craving | marijuana craving questionnaire [SUMMARY]
[CONTENT] marijuana craving emotional | marijuana use related | cannabis use disorder | onset marijuana craving | marijuana craving questionnaire [SUMMARY]
[CONTENT] intervention | test | dbt | group | patients | craving | marijuana | follow | treatment | results [SUMMARY]
[CONTENT] intervention | test | dbt | group | patients | craving | marijuana | follow | treatment | results [SUMMARY]
[CONTENT] intervention | test | dbt | group | patients | craving | marijuana | follow | treatment | results [SUMMARY]
[CONTENT] intervention | test | dbt | group | patients | craving | marijuana | follow | treatment | results [SUMMARY]
[CONTENT] intervention | test | dbt | group | patients | craving | marijuana | follow | treatment | results [SUMMARY]
[CONTENT] intervention | test | dbt | group | patients | craving | marijuana | follow | treatment | results [SUMMARY]
[CONTENT] dbt | cbt | regulation | problems | emotion | marijuana | patients | emotion regulation | comorbid problems | comorbid [SUMMARY]
[CONTENT] patients | questionnaire | scale | item | marijuana | test | completely | measure | intervention | mcq [SUMMARY]
[CONTENT] test | group | significant | 05 | dbt | follow | post | control | 01 | table [SUMMARY]
[CONTENT] study | present study | present | future | limitations | research | results | significant | related | implications determined [SUMMARY]
[CONTENT] dbt | test | group | intervention | patients | follow | craving | study | marijuana | post [SUMMARY]
[CONTENT] dbt | test | group | intervention | patients | follow | craving | study | marijuana | post [SUMMARY]
[CONTENT] Thailand Registry of Clinical Trials [SUMMARY]
[CONTENT] Sixty-one | marijuana | DBT ||| two-month ||| The Marijuana Craving Questionnaire | marijuana [SUMMARY]
[CONTENT] DBT ||| DBT | 29/30 | 96% | 24/31 | 77% | 4.95 | 0.02 ||| 29/30 | 96% | DBT | two-month | 64.5% | two-month | 9.97 | 0.002 ||| DBT | 16.57 | 9.6 ||| ||| 3.52 | 0.05 | DBT | F = | 19.94 ||| DBT | 46% | 16% | 40% | 9.5% | DBT ||| DBT [SUMMARY]
[CONTENT] DBT | marijuana [SUMMARY]
[CONTENT] Thailand Registry of Clinical Trials ||| Sixty-one | marijuana | DBT ||| two-month ||| The Marijuana Craving Questionnaire | marijuana ||| ||| DBT ||| DBT | 29/30 | 96% | 24/31 | 77% | 4.95 | 0.02 ||| 29/30 | 96% | DBT | two-month | 64.5% | two-month | 9.97 | 0.002 ||| DBT | 16.57 | 9.6 ||| ||| 3.52 | 0.05 | DBT | F = | 19.94 ||| DBT | 46% | 16% | 40% | 9.5% | DBT ||| DBT ||| DBT | marijuana [SUMMARY]
[CONTENT] Thailand Registry of Clinical Trials ||| Sixty-one | marijuana | DBT ||| two-month ||| The Marijuana Craving Questionnaire | marijuana ||| ||| DBT ||| DBT | 29/30 | 96% | 24/31 | 77% | 4.95 | 0.02 ||| 29/30 | 96% | DBT | two-month | 64.5% | two-month | 9.97 | 0.002 ||| DBT | 16.57 | 9.6 ||| ||| 3.52 | 0.05 | DBT | F = | 19.94 ||| DBT | 46% | 16% | 40% | 9.5% | DBT ||| DBT ||| DBT | marijuana [SUMMARY]
Incidence of seed migration to the chest, abdomen, and pelvis after transperineal interstitial prostate brachytherapy with loose (125)I seeds.
21974959
The aim was to determine the incidence of seed migration not only to the chest, but also to the abdomen and pelvis after transperineal interstitial prostate brachytherapy with loose (125)I seeds.
BACKGROUND
We reviewed the records of 267 patients who underwent prostate brachytherapy with loose (125)I seeds. After seed implantation, orthogonal chest radiographs, an abdominal radiograph, and a pelvic radiograph were undertaken routinely to document the occurrence and sites of seed migration. The incidence of seed migration to the chest, abdomen, and pelvis was calculated. All patients who had seed migration to the abdomen and pelvis subsequently underwent a computed tomography scan to identify the exact location of the migrated seeds. Postimplant dosimetric analysis was undertaken, and dosimetric results were compared between patients with and without seed migration.
METHODS
A total of 19,236 seeds were implanted in 267 patients. Overall, 91 of 19,236 (0.47%) seeds migrated in 66 of 267 (24.7%) patients. Sixty-nine (0.36%) seeds migrated to the chest in 54 (20.2%) patients. Seven (0.036%) seeds migrated to the abdomen in six (2.2%) patients. Fifteen (0.078%) seeds migrated to the pelvis in 15 (5.6%) patients. Seed migration occurred predominantly within two weeks after seed implantation. None of the 66 patients had symptoms related to the migrated seeds. Postimplant prostate D90 was not significantly different between patients with and without seed migration.
RESULTS
We showed the incidence of seed migration to the chest, abdomen and pelvis. Seed migration did not have a significant effect on postimplant prostate D90.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Brachytherapy", "Humans", "Iodine Radioisotopes", "Male", "Middle Aged", "Movement", "Pelvis", "Prostatic Neoplasms", "Radiography, Abdominal", "Radiography, Thoracic", "Radiotherapy" ]
3206434
Background
Seed migration is a well-recognized event that occurs after transperineal interstitial prostate brachytherapy, and it is observed more often with loose seeds than with linked seeds [1-5]. It is well known that the most frequent site of seed migration is the chest. The American Brachytherapy Society has advised that a chest radiograph should be undertaken at the first follow-up visit to scan the lungs for embolized seeds [6]. Consequently, the incidence of seed migration to the chest has been well reported [1,2,4,5,7-19]. However, documentation of the incidence of seed migration to the abdomen and pelvis is rare. Rare cases of seed migration to a coronary artery, the right ventricle, the liver, the kidneys, Batson's vertebral venous plexus, and the left testicular vein have been reported [20-26]. However, it has never been fully determined whether seed migration to these locations is really rare. The primary purposes of the present study were to determine the incidence of seed migration not only to the chest, but also to the abdomen and the pelvis at our institution and to identify the exact location of the seeds that had migrated to the abdomen and pelvis with computed tomography (CT). The secondary purpose was to determine the impact of seed migration on postimplant dosimetry.
Methods
We reviewed the records of 267 patients who underwent transperineal interstitial prostate brachytherapy with loose 125I seeds for clinical T1/T2 prostate cancer at our institution. Table 1 details the characteristics of all 267 patients. Two patients (0.75%) received brachytherapy plus external beam radiotherapy (45 Gy in 1.8 Gy fractions). One hundred twenty-three of the 267 (46.1%) patients also underwent neoadjuvant hormonal therapy (NHT), which consisted of luteinizing hormone-releasing hormone agonist and antiandrogens. NHT was generally undertaken in patients with a prostate volume >40 cc or those with pubic arch interference by transrectal ultrasound (TRUS) at the preimplant volume study [27]. [Table 1. Patient Characteristics (N = 267). Data are presented as mean ± standard error (range) or number (percent) of patients. Abbreviations: PSA = prostate-specific antigen; TRUS = transrectal ultrasound.] One month before seed implantation, a preplan was obtained with TRUS images taken at 5 mm intervals from the base to the apex of the prostate with the patient in the dorsal lithotomy position. The planning target volume included the prostate gland, with a margin of 3 mm anteriorly and laterally and 5 mm in the cranial and caudal directions. No margin was added posteriorly at the rectal interface. Treatment planning used a peripheral or a modified peripheral approach. For the 265 patients who received brachytherapy alone, the prescribed brachytherapy dose was 145 Gy and 160 Gy for the first 163 patients and the subsequent 102 patients, respectively. For the remaining two patients who received brachytherapy plus external beam radiotherapy, the prescribed brachytherapy dose was 110 Gy. TG 43 formalism was used in the preplanning and postimplant dosimetry analyses [28]. All 267 patients were treated with loose 125I radioactive seeds with a Mick applicator (Mick Radio-Nuclear Instruments, Bronx, NY). To ensure that no seeds were left in the bladder, postoperative fluoroscopic images were obtained. Prior to discharge, postoperative surveys of voided urine were conducted to detect voided seeds. Orthogonal chest radiographs, an abdominal radiograph, and a pelvic radiograph were undertaken to document the occurrence and sites of seed migration one day after seed implantation. These follow-up radiographs were undertaken routinely at each outpatient visit. Patients returned to our outpatient clinic two weeks and three months after seed implantation, then at three-month intervals for the first three years and at six-month intervals thereafter. Seed migration to the chest and the abdomen was recorded when one or more seeds were visualized on orthogonal chest radiographs and the anteroposterior (AP) abdominal radiograph, respectively. Seed migration to the pelvis was recorded when one or more seeds were separated from the main seed cluster on an AP pelvic radiograph. However, seeds placed into the bladder and the seminal vesicles or seeds placed inferior to the prostate by mistake were not scored as migrated. Seeds voided in the urine were not scored as migrated. Subsequently, all patients who had seed migration to the abdomen and pelvis underwent a CT scan to identify the exact location of the migrated seeds. The incidence of seed migration to the chest, abdomen, and pelvis was calculated. Postimplant dosimetric analysis by CT was performed one month after seed implantation. The seed count in the region of the prostate gland was determined on the AP pelvic radiographs obtained two weeks after seed implantation.
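For reference, since the methods state that the TG-43 formalism was used for preplanning and postimplant dosimetry, the general two-dimensional TG-43 dose-rate equation is sketched below. This is the standard AAPM formalism, shown here only as background; it is not reproduced from the article itself.

```latex
% General 2D TG-43 dose-rate equation (AAPM formalism), shown for reference.
% S_K: air-kerma strength; \Lambda: dose-rate constant; G_L: geometry function;
% g_L: radial dose function; F: 2D anisotropy function; r_0 = 1 cm, \theta_0 = 90^\circ.
\dot{D}(r,\theta) = S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta)
```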
The postimplant prostate D90 (the dose received by 90% of the volume of the prostate) value was compared between patients with and without seed migration. Statistical analysis was performed with Student's t-test. A p value of <0.05 was considered statistically significant.
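The following is a minimal sketch, not the authors' code, of the two quantitative steps described above: migration incidence as simple proportions, and a Student's t-test comparing postimplant prostate D90 between patients with and without seed migration. The D90 arrays are synthetic placeholders; only the seed and patient counts are taken from the article.

```python
# Illustrative sketch of the reported analyses (synthetic D90 values; counts from the paper).
import numpy as np
from scipy.stats import ttest_ind

# Incidence of seed migration as simple proportions.
migrated_seeds, total_seeds = 91, 19236
patients_with_migration, total_patients = 66, 267
print(f"per-seed incidence:    {100 * migrated_seeds / total_seeds:.2f}%")              # ~0.47%
print(f"per-patient incidence: {100 * patients_with_migration / total_patients:.1f}%")  # ~24.7%

# Student's t-test (equal variances assumed) comparing D90 between groups.
rng = np.random.default_rng(1)
d90_with_migration = rng.normal(175.0, 18.0, size=66)      # hypothetical Gy values
d90_without_migration = rng.normal(175.0, 18.0, size=201)  # hypothetical Gy values
t, p = ttest_ind(d90_with_migration, d90_without_migration, equal_var=True)
print(f"t = {t:.2f}, p = {p:.3f}")
```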
Conclusions
AS and JN designed the study, collected the data, interpreted the results, performed the statistical analysis, drafted the manuscript, and oversaw the project's completion. EK, HN, RM, SS, YS and RK participated in data acquisition. MO and NS contributed to data analysis. All authors read and approved the manuscript.
[ "Background", "Results", "Seeds that migrated to the abdomen (seven seeds in six patients)", "Seeds that migrated to the pelvis (15 seeds in 15 patients)", "Seed relocation four years after seed implantation", "Postimplant dosimetric analysis", "Discussion", "Incidence of seed migration", "The dynamics of seed migration to the chest", "Seed migration to the kidneys and Batson's vertebral plexus is not very rare", "In some cases of seed migration to the kidneys, the mechanism is difficult to explain", "Other possible mechanisms of seed migration", "Influence of seed migration on postimplant dosimetry", "Seed migration-related sequelae", "Conclusions" ]
[ "Seed migration is a well-recognized event that occurs after transperineal interstitial prostate brachytherapy, and it is observed more often with loose seeds than with linked seeds [1-5].\nIt is well known that the most frequent site of seed migration is the chest. The American Brachytherapy Society has advised that a chest radiograph should be undertaken at the first follow-up visit to scan the lungs for embolized seeds [6]. Consequently, the incidence of seed migration to the chest has been well reported [1,2,4,5,7-19]. However, documentation of the incidence of seed migration to the abdomen and pelvis is rare. Rare cases of seed migration to a coronary artery, the right ventricle, the liver, the kidneys, Batson's vertebral venous plexus, and the left testicular vein have been reported [20-26]. However, it has never been fully determined whether seed migration to these locations is really rare.\nThe primary purposes of the present study were to determine the incidence of seed migration not only to the chest, but also to the abdomen and the pelvis at our institution and to identify the exact location of the seeds that had migrated to the abdomen and pelvis with computed tomography (CT). The secondary purpose was to determine the impact of seed migration on postimplant dosimetry.", "In total, 19,236 seeds were implanted in 267 patients. All 267 patients underwent follow-up radiographs. Median follow-up was 41 months (range, 8.5-76 months).\nAt one day after seed implantation, follow-up radiographs demonstrated that 41 of the 19,236 (0.21%) seeds migrated in 37 of the 267 (13.9%) patients: three seeds in one patient, two seeds in each of two patients, and a single seed in each of the remaining 34 patients. Fifteen (0.078%) seeds migrated to the chest in 15 (5.6%) patients. One (0.0052%) seed migrated to the abdomen in one (0.37%) patient. Twenty-five (0.13%) seeds migrated to the pelvis in 23 (8.6%) patients.\nAt two weeks after seed implantation, 85 of the 19,236 (0.44%) seeds migrated in 61 of the 267 (22.8%) patients: seven seeds in one patient, three seeds in each of four patients, two seeds in each of 10 patients, and a single seed in each of the remaining 46 patients. Sixty-one (0.32%) seeds migrated to the chest in 48 (18.0%) patients. Seven (0.036%) seeds migrated to the abdomen in six (2.2%) patients. Seventeen (0.088%) seeds migrated to the pelvis in 16 (6.0%) patients.\nAt three months after seed implantation, 87 of the 19,236 (0.45%) seeds migrated in 63 of the 267 (23.6%) patients: seven seeds in one patient, three seeds in each of four patients, two seeds in each of 10 patients, and a single seed in each of the remaining 48 patients. Sixty-three (0.33%) seeds migrated to the chest in 50 (18.7%) patients. Seven (0.036%) seeds migrated to the abdomen in six (2.2%) patients. Seventeen (0.088%) seeds migrated to the pelvis in 16 (6.0%) patients.\nAlthough seed migration occurred predominantly within two weeks after seed implantation, eventually, six seeds were found to migrate or relocate to the chest long after seed implantation: four seeds were found to have migrated to the chest at a median of 27 months (range 9.1-16 months), and two seeds were found to have relocated from the pelvis to the chest at 6.1 and 48 months after seed implantation, respectively. In these patients, follow-up radiographs were undertaken routinely at every outpatient visit; however, migration or relocation of these seeds was not found at the previous visit. 
Meanwhile, no seed relocation from the chest to other sites was observed in the present study.
Eventually, 91 of the 19,236 (0.47%) seeds migrated in 66 of the 267 (24.7%) patients: seven seeds in one patient, three seeds in each of five patients, two seeds in each of nine patients, and a single seed in each of the remaining 51 patients. Sixty-nine (0.36%) seeds migrated to the chest in 54 (20.2%) patients. Seven (0.036%) seeds migrated to the abdomen in six (2.2%) patients. Fifteen (0.078%) seeds migrated to the pelvis in 15 (5.6%) patients. All 66 patients were informed of seed migration; none of these patients had symptoms related to the migrated seeds.
Seeds that migrated to the abdomen (seven seeds in six patients): Two of the 19,236 (0.010%) seeds migrated to the liver in two of the 267 (0.75%) patients: a single seed migrated to the liver in each of two patients. Five (0.026%) seeds migrated to the kidneys in four (1.5%) patients: two seeds migrated to the same kidney in one patient, and a single seed migrated to the kidney in each of the remaining three patients. In one patient (Case 1), one day after seed implantation, an abdominal radiograph showed that a seed had migrated to the right side of the middle abdomen, which was considered to be separated from the inferior vena cava (IVC) (Figure 1A). However, two weeks after seed implantation, an abdominal radiograph showed that the seed had disappeared from the right side of the middle abdomen, and showed a seed that had migrated to the left side of the middle abdomen (Figure 1B). On pelvic radiographs, there were no changes in the number of seeds that had been implanted into the prostate between one day and two weeks after seed implantation. It was concluded that the seed had relocated from the right side of the middle abdomen to the left side of the middle abdomen. A subsequent abdominal CT demonstrated that the seed had migrated to the left kidney (Figure 1C). In another patient (Case 2), two weeks after seed implantation, an abdominal radiograph showed that two seeds had migrated to the same right kidney (Figure 2A-C).
Case 1: Relocation from the right side of the middle abdomen to the left kidney. One day after seed implantation, a follow-up abdominal radiograph showed that a seed had migrated to the right side of the middle abdomen (solid arrow) (A). Two weeks later, the seed had relocated from the right side of the middle abdomen to the left side of the middle abdomen (solid arrow) (B). Subsequent computed tomography showed that the seed had migrated to the left kidney (solid arrow) (C).
Case 2: Migration of two seeds to the same right kidney. Two weeks after seed implantation, a follow-up abdominal radiograph showed that two seeds had migrated to the right side of the middle abdomen (solid arrows) (A). Subsequent computed tomography showed that these two seeds had migrated to the same right kidney (solid arrows) (B,C).
Seeds that migrated to the pelvis (15 seeds in 15 patients): A single seed migrated to the pelvis in each of 15 patients. Five of the 19,236 (0.026%) seeds migrated to Batson's vertebral venous plexus in five of the 267 (1.9%) patients. Four (0.021%) seeds migrated to the sacral venous plexus in four (1.5%) patients. Two (0.010%) seeds migrated to the iliac veins in two (0.75%) patients. Two (0.010%) seeds migrated to the right ischial bone in two (0.75%) patients. Two (0.010%) seeds migrated to the obturator internus muscles in two (0.75%) patients.
Seed relocation four years after seed implantation: In one patient (Case 3), seed relocation was found four years after seed implantation. A seed had migrated to the right groin area one day after seed implantation (Figure 3A-B). Three years and six months after seed implantation, a pelvic radiograph showed that the seed was in the same location. However, four years after seed implantation, a pelvic radiograph showed that the seed had disappeared from the right groin area (Figure 3C), and a chest radiograph showed a seed that had migrated to the left lung, which was not found at three years and six months after seed implantation.
Seed relocation four years after seed implantation
In one patient (Case 3), seed relocation was found four years after seed implantation. A seed had migrated to the right groin area one day after seed implantation (Figure 3A-B). Three years and six months after seed implantation, a pelvic radiograph showed the seed in the same location. However, four years after seed implantation, a pelvic radiograph showed that the seed had disappeared from the right groin area (Figure 3C), and a chest radiograph showed a seed that had migrated to the left lung, which had not been present at three years and six months after seed implantation. It was concluded that the seed had initially lodged in a branch of the right femoral vein and then relocated to the left lung through the IVC, long after seed implantation.

Case 3: Seed disappearance from the right groin area four years after seed implantation. One day after seed implantation, a follow-up pelvic radiograph showed that a seed was located inferior to the right side of the pelvis (solid arrow) (A). Subsequent computed tomography showed that the seed had migrated to the right groin area (solid arrow) (B). Four years after seed implantation, a follow-up pelvic radiograph showed that the seed had disappeared from the pelvic area (C).

Postimplant dosimetric analysis
In the 265 patients who received brachytherapy alone, the postimplant prostate D90 was 175.0 ± 1.3 Gy (mean ± standard error [SE]). In these 265 patients, the postimplant prostate D90 did not differ significantly between patients with and without seed migration (mean ± SE, 175.1 ± 2.3 Gy vs. 175.0 ± 1.5 Gy, respectively; p = 0.992). In the 15 patients who had multiple migrated seeds, the postimplant prostate D90 ranged from 137.2 to 198.5 Gy (mean ± SE, 171.8 ± 5.5 Gy), which was not significantly different from that in patients without seed migration (p = 0.573).
In the remaining two patients, who received brachytherapy plus external beam radiotherapy, the postimplant prostate D90 values were 116.4 Gy and 132.6 Gy. These two patients had no seed migration.
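The group comparison described above (Student's t-test on the postimplant prostate D90) can be sketched as follows. The D90 arrays below are randomly generated placeholders centered on the reported group means, not patient data, so only the procedure is illustrative.

```python
# Illustrative sketch of the dosimetric comparison: postimplant prostate D90
# in patients with vs. without seed migration, compared with Student's t-test.
# Values are simulated placeholders; group sizes mirror the study (66 vs. 199
# among the 265 brachytherapy-alone patients).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
d90_with_migration    = rng.normal(loc=175.1, scale=15.0, size=66)   # Gy, hypothetical spread
d90_without_migration = rng.normal(loc=175.0, scale=15.0, size=199)  # Gy, hypothetical spread

t_stat, p_value = stats.ttest_ind(d90_with_migration, d90_without_migration)
print(f"mean D90 with migration:    {d90_with_migration.mean():6.1f} Gy")
print(f"mean D90 without migration: {d90_without_migration.mean():6.1f} Gy")
print(f"Student's t-test p = {p_value:.3f} (p < 0.05 treated as significant, as in the study)")
```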
Discussion

Incidence of seed migration
We found that 0.36% of implanted seeds migrated to the chest in 20% of our patient population, similar to previous reports (Table 2). It has been reported that the incidence of seed migration to the chest can be as high as 55% per patient population and 0.98% per number of implanted seeds (Table 2) [9]. The variability in the reported incidence of seed migration to the chest is considered to be attributable to different types of seeds (linked or loose), different designs of seed placement (intraprostatic or extraprostatic), different timings of follow-up radiographs, and different protocols for follow-up chest radiographs (orthogonal or AP alone) (Table 2) [1-5,9,13].

Literature Survey of Seed Migration (Table 2 footnotes):
*Linked seeds were restricted to needles placed at the periphery of the prostate implant. All seeds placed centrally, close to the urethra, were loose seeds. On average, 8-10 loose seeds were used.
†Of the total 125I seeds, 64% were linked seeds. Linked seeds were limited to the capsule/periprostatic region. Loose seeds were utilized in the central aspect of the gland.
††9 patients within 14 days; 75 patients more than 30 days.
§11 patients within 14 days; 61 patients more than 30 days.
||Of the 20 patients who had postimplant chest radiographs obtained within 14 days of brachytherapy, none had seeds detected in the lungs. Categorizing patients by isotope, 18/75 (24.0%) of 125I patients and 16/61 (26.2%) of 103Pd patients with chest radiographs beyond 30 days had seeds found in the lungs.
¶A mixture of linked and free seeds, with a median of 74% of needles per case containing linked seeds; central and posterior needles typically contained loose seeds.

In contrast, the incidence of seed migration to the abdomen and pelvis has rarely been reported (Table 2). A possible reason is that the American Brachytherapy Society does not specifically recommend follow-up abdominal and pelvic radiographs after seed implantation [6]. Therefore, in most institutions, follow-up abdominal and pelvic radiographs would not be undertaken routinely.
However, we found that seed migration to the abdomen and pelvis occurred in 2.2% and 5.6%, respectively, of our patient population. Although the incidence of seed migration to the abdomen and pelvis is lower than that of migration to the chest, we consider it advisable to undertake follow-up abdominal and pelvic radiographs after seed implantation.

The dynamics of seed migration to the chest
One day, two weeks, and three months after seed implantation, follow-up chest radiographs showed 22%, 88%, and 91%, respectively, of the 69 seeds that eventually migrated to the chest. In other words, 22%, 66%, and 2.9% of these 69 seeds migrated to the chest within one day, between one day and two weeks, and between two weeks and three months after seed implantation, respectively. About 90% of these 69 seeds migrated to the chest within two weeks after seed implantation. These results are similar to a previous report [13]. Merrick et al. have speculated that seed migration to the chest may be most likely to occur between 14 and 28 days after seed implantation [13]. Although we observed several seeds that migrated to the chest more than six months after seed implantation, such late migration should be considered exceptional. Therefore, we suggest that follow-up chest radiographs be undertaken two weeks, or preferably a few months, after seed implantation to detect most seeds that will migrate to the chest.
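The interval figures quoted above are simply successive differences of the cumulative detection percentages. A minimal sketch using the chest-seed counts reported in the results (15, 61, and 63 of the 69 seeds detected by one day, two weeks, and three months):

```python
# Cumulative detection of the 69 chest-migrated seeds on follow-up radiographs,
# converted into the fraction newly detected in each interval.
total = 69
cumulative_counts = {"1 day": 15, "2 weeks": 61, "3 months": 63}

previous = 0
for timepoint, count in cumulative_counts.items():
    newly_detected = count - previous
    print(f"by {timepoint:9s}: cumulative {count / total:5.1%}, "
          f"newly detected {newly_detected / total:5.1%}")
    previous = count
# by 1 day   : cumulative 21.7%, newly detected 21.7%
# by 2 weeks : cumulative 88.4%, newly detected 66.7%
# by 3 months: cumulative 91.3%, newly detected  2.9%
```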
Seed migration to the kidneys and Batson's vertebral plexus is not very rare
The present study found a total of four cases of seed migration to the kidneys and five cases of migration to Batson's vertebral venous plexus at a single institution, which suggests that such cases are not very rare. By contrast, previous studies have reported a total of only four cases of seed migration to the kidneys and four cases of migration to Batson's vertebral venous plexus, each described as rare [22,23,26,29]. Our single study therefore found as many or more cases of seed migration to these sites as all previous studies combined. A possible explanation is that, in the present study, orthogonal chest radiographs, an abdominal radiograph, and a pelvic radiograph were undertaken routinely at several time points after seed implantation to detect seed migration to the chest, abdomen, and pelvis. Moreover, in all patients with seed migration to the abdomen or pelvis, a CT scan was undertaken to identify the exact location of the migrated seeds. Consequently, more cases of seed migration to the kidneys and Batson's vertebral venous plexus were found in the present study. We speculate that some cases of seed migration to the kidneys and Batson's vertebral venous plexus may go undetected at other institutions.
In some cases of seed migration to the kidneys, the mechanism is difficult to explain
Previous investigators have explained seed migration to the kidneys as follows: migrated seeds in venous vessels would enter the systemic circulation through a right-to-left shunt, such as a patent foramen ovale or a pulmonary arteriovenous malformation, and then embolize to a branch of the renal arteries [26]. We call this route of seed migration through a right-to-left shunt "a paradoxical route."
The present study provided some interesting cases of seed migration to the kidneys whose mechanism is difficult to explain. In Case 1, a seed had migrated to the right side of the middle abdomen, in a position considered to be separate from the IVC, and had then relocated to the left kidney (Figure 1A-C). If the seed had initially embolized to an artery on the right side of the middle abdomen, such as a branch of the right renal artery, through a paradoxical route, it is difficult to explain how it would then have relocated to the left kidney. In Case 2, two seeds had migrated to the same right kidney (Figure 2A-C). Migration to the kidney through a paradoxical route is a complicated, multi-step path, so it is highly unlikely that two seeds would both happen to reach the same right kidney by that route by chance, although the possibility cannot be completely excluded. Other mechanisms should therefore be considered to explain how seed migration to the kidneys occurred in these two cases.

Other possible mechanisms of seed migration
We assume that seed movement in venous vessels reflects not only the force of the blood flow but also the force of gravity, and that a seed can sometimes move in venous vessels against the blood flow, because gravity can overcome it. The explanation is as follows. Intravascular missile migration following gunshot injury has been reported [30-34]. A missile sometimes moves in a retrograde fashion against the normal blood flow in major venous vessels, including the IVC, and sometimes lodges in the renal vein or the hepatic vein [30-34]. It has been postulated that missile migration against the blood flow in venous vessels can occur because of gravity, the patient's position (upright) at the moment of wounding and/or positional changes of the body, the weight and shape of the missile, and possible low-flow states [30]. Although an 125I radioactive seed (4.5 mm in length, 0.8 mm in diameter, and about 10 mg in weight) is smaller and lighter than a bullet, we consider that a seed could still move against the blood flow in major venous vessels because of gravity, for the following reasons. The specific gravity of the seed (about 4 g/mL) is much higher than that of blood, and the seed experiences relatively little drag from the blood because of its rod-shaped structure. In addition, a seed would usually lie near the wall of a major venous vessel because of gravity and the patient's position, and would therefore be exposed to a relatively slow stream of blood near the vascular wall compared with the center of the vessel. It is known that blood flow in major venous vessels is generally laminar, with a parabolic velocity distribution across the vessel [35]. Therefore, the velocity of blood flow decreases from the center toward the wall of the vessel [35].
The mechanism of seed migration to various sites can thus be explained by the assumption that a seed can move in major venous vessels against the blood flow under gravity. In the present cases, seed migration to the kidneys, the liver, and the right groin area (probably a branch of the right femoral vein) (Case 3) would have occurred within venous vessels, partly in a retrograde fashion under gravity, without passing through a paradoxical route. This explanation does not require the assumption that the patient has a right-to-left shunt.
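Two of the physical points above lend themselves to back-of-the-envelope checks: the effective density implied by the quoted seed size and weight, and how much slower a parabolic (laminar) velocity profile is near the vessel wall than at the centerline. A rough sketch, treating the seed as a solid cylinder and assuming an idealized Poiseuille-type profile:

```python
import math

# 1) Effective density of a 125I seed from the figures quoted above
#    (4.5 mm long, 0.8 mm diameter, about 10 mg), modeled as a solid cylinder.
length_cm, radius_cm, mass_g = 0.45, 0.04, 0.010
volume_ml = math.pi * radius_cm ** 2 * length_cm        # ~0.0023 mL
print(f"effective seed density ~ {mass_g / volume_ml:.1f} g/mL "
      f"(blood is roughly 1.05 g/mL)")                   # ~4.4 g/mL, consistent with 'about 4 g/mL'

# 2) Relative speed of idealized laminar flow across a vessel:
#    v(r) = v_max * (1 - (r/R)^2), so flow close to the wall is far slower
#    than at the centerline.
def relative_velocity(r_over_R: float) -> float:
    return 1.0 - r_over_R ** 2

for frac in (0.0, 0.5, 0.9, 0.95):
    print(f"r/R = {frac:4.2f}: v/v_max = {relative_velocity(frac):.2f}")
```

Both numbers support the qualitative argument above: a dense, rod-shaped seed lying in the slow near-wall layer is pushed comparatively weakly by the venous flow.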
Influence of seed migration on postimplant dosimetry
The postimplant prostate D90 was not significantly different between patients with and without seed migration. Moreover, in the 15 patients who had multiple migrated seeds, the postimplant prostate D90 remained acceptable, and no supplemental seed implantation was required. These results indicate that seed migration did not have a significant effect on postimplant prostate dosimetry in the present study. Possible reasons are as follows. First, in most patients with seed migration, only one or two seeds had migrated, which would have little effect on the dosimetry of the prostate. Tapen et al. have suggested that the loss of a few seeds may not have a significant effect on dose homogeneity or total dose to the prostate [5]. Second, seed migration would have much less effect on prostate dosimetry than other mechanisms of seed loss, such as seed misplacement into the seminal vesicle or perineum or seeds voided in the urine postoperatively. Merrick et al. have reported that seed migration to the chest accounted for only 10% of total seed loss from the prostate region, highlighting the importance of these other mechanisms of loss [13].

Seed migration-related sequelae
In the present study, none of the 66 patients had symptoms related to the migrated seeds. Although most patients with seed migration are reported to be asymptomatic, a few migration-related sequelae have been described, including anecdotal instances of cardiac arrhythmia, myocardial infarction, and radiation pneumonitis [21,36,37]. Therefore, it is important to try to reduce the incidence of seed migration. The use of seeds linked with an absorbable suture material has been associated with a dramatically decreased rate of seed migration, although a potentially higher risk of radiotoxicity to the urethra and rectum has been pointed out [1,3,5,38]. In our study, linked seeds were not used, because they are not commercially available in our country at present.
Conclusions
In the present study, we determined the incidence of seed migration not only to the chest, but also to the abdomen and pelvis. Although the incidence of seed migration to the abdomen and pelvis was lower than that of seed migration to the chest, it would be advisable to undertake follow-up abdominal and pelvic radiographs after seed implantation. Seed migration to the chest occurred predominantly within two weeks and, therefore, it is suggested that follow-up chest radiographs should be undertaken two weeks or later after seed implantation. Seed migration to the kidneys and Batson's vertebral venous plexus was not very rare. Seed migration did not significantly affect postimplant prostate D90.
[ "Background", "Methods", "Results", "Seeds that migrated to the abdomen (seven seeds in six patients)", "Seeds that migrated to the pelvis (15 seeds in 15 patients)", "Seed relocation four years after seed implantation", "Postimplant dosimetric analysis", "Discussion", "Incidence of seed migration", "The dynamics of seed migration to the chest", "Seed migration to the kidneys and Batson's vertebral plexus is not very rare", "In some cases of seed migration to the kidneys, the mechanism is difficult to explain", "Other possible mechanisms of seed migration", "Influence of seed migration on postimplant dosimetry", "Seed migration-related sequelae", "Conclusions" ]
[ "Seed migration is a well-recognized event that occurs after transperineal interstitial prostate brachytherapy, and it is observed more often with loose seeds than with linked seeds [1-5].\nIt is well known that the most frequent site of seed migration is the chest. The American Brachytherapy Society has advised that a chest radiograph should be undertaken at the first follow-up visit to scan the lungs for embolized seeds [6]. Consequently, the incidence of seed migration to the chest has been well reported [1,2,4,5,7-19]. However, documentation of the incidence of seed migration to the abdomen and pelvis is rare. Rare cases of seed migration to a coronary artery, the right ventricle, the liver, the kidneys, Batson's vertebral venous plexus, and the left testicular vein have been reported [20-26]. However, it has never been fully determined whether seed migration to these locations is really rare.\nThe primary purposes of the present study were to determine the incidence of seed migration not only to the chest, but also to the abdomen and the pelvis at our institution and to identify the exact location of the seeds that had migrated to the abdomen and pelvis with computed tomography (CT). The secondary purpose was to determine the impact of seed migration on postimplant dosimetry.", "We reviewed the records of 267 patients who underwent transperineal interstitial prostate brachytherapy with loose 125I seeds for clinical T1/T2 prostate cancer at our institution. Table 1 details the characteristics of all 267 patients. Two patients (0.75%) received brachytherapy plus external beam radiotherapy (45 Gy in 1.8 Gy fractions). One hundred twenty-three of the 267 (46.1%) patients also underwent neoadjuvant hormonal therapy (NHT), which consisted of luteinizing hormone-releasing hormone agonist and antiandrogens. NHT was generally undertaken in patients with a prostate volume >40 cc or those with pubic arch interference by transrectal ultrasound (TRUS) at the preimplant volume study [27].\nPatient Characteristics (N = 267)\nData are presented as mean ± standard error (range) or number (percent) of patients.\nAbbreviations: PSA = prostate-specific antigen; TRUS = transrectal ultrasound\nOne month before seed implantation, a preplan was obtained with TRUS images taken at 5 mm intervals from the base to the apex of the prostate with the patient in the dorsal lithotomy position. The planning target volume included the prostate gland, with a margin of 3 mm anteriorly and laterally and 5 mm in the cranial and caudal directions. No margin was added posteriorly at the rectal interface. Treatment planning used a peripheral or a modified peripheral approach. For the 265 patients who received brachytherapy alone, the prescribed brachytherapy dose was 145 Gy and 160 Gy for the first 163 patients and the subsequent 102 patients, respectively. For the remaining two patients who received brachytherapy plus external beam radiotherapy, the prescribed brachytherapy dose was 110 Gy. TG 43 formalism was used in the preplanning and postimplant dosimetry analyses [28]. All 267 patients were treated with loose 125I radioactive seeds with a Mick applicator (Mick Radio-Nuclear Instruments, Bronx, NY). To ensure that no seeds were left in the bladder, postoperative fluoroscopic images were obtained. 
Prior to discharge, postoperative surveys of voided urine were conducted to detect voided seeds.\nOrthogonal chest radiographs, an abdominal radiograph, and a pelvic radiograph were undertaken to document the occurrence and sites of seed migration one day after seed implantation. These follow-up radiographs were undertaken routinely at each outpatient visit. Patients returned to our outpatient clinic two weeks and three months after seed implantation, then at three-month intervals for the first three years and at six-month intervals thereafter. Seed migration to the chest and the abdomen was recorded when one or more seeds were visualized on orthogonal chest radiographs and the anteroposterior (AP) abdominal radiograph, respectively. Seed migration to the pelvis was recorded when one or more seeds were separated from the main seed cluster on an AP pelvic radiograph. However, seeds placed into the bladder and the seminal vesicles or seeds placed inferior to the prostate by mistake were not scored as migrated. Seeds voided in the urine were not scored as migrated. Subsequently, all patients who had seed migration to the abdomen and pelvis underwent a CT scan to identify the exact location of the migrated seeds. The incidence of seed migration to the chest, abdomen, and pelvis was calculated.\nPostimplant dosimetric analysis by CT was performed one month after seed implantation. The seed count in the region of the prostate gland was determined on the AP pelvic radiographs obtained two weeks after seed implantation. The postimplant prostate D90 (the dose received by 90% of the volume of the prostate) value was compared between patients with and without seed migration. Statistical analysis was performed with Student's t-test. A p value of <0.05 was considered statistically significant.", "In total, 19,236 seeds were implanted in 267 patients. All 267 patients underwent follow-up radiographs. Median follow-up was 41 months (range, 8.5-76 months).\nAt one day after seed implantation, follow-up radiographs demonstrated that 41 of the 19,236 (0.21%) seeds migrated in 37 of the 267 (13.9%) patients: three seeds in one patient, two seeds in each of two patients, and a single seed in each of the remaining 34 patients. Fifteen (0.078%) seeds migrated to the chest in 15 (5.6%) patients. One (0.0052%) seed migrated to the abdomen in one (0.37%) patient. Twenty-five (0.13%) seeds migrated to the pelvis in 23 (8.6%) patients.\nAt two weeks after seed implantation, 85 of the 19,236 (0.44%) seeds migrated in 61 of the 267 (22.8%) patients: seven seeds in one patient, three seeds in each of four patients, two seeds in each of 10 patients, and a single seed in each of the remaining 46 patients. Sixty-one (0.32%) seeds migrated to the chest in 48 (18.0%) patients. Seven (0.036%) seeds migrated to the abdomen in six (2.2%) patients. Seventeen (0.088%) seeds migrated to the pelvis in 16 (6.0%) patients.\nAt three months after seed implantation, 87 of the 19,236 (0.45%) seeds migrated in 63 of the 267 (23.6%) patients: seven seeds in one patient, three seeds in each of four patients, two seeds in each of 10 patients, and a single seed in each of the remaining 48 patients. Sixty-three (0.33%) seeds migrated to the chest in 50 (18.7%) patients. Seven (0.036%) seeds migrated to the abdomen in six (2.2%) patients. 
Seventeen (0.088%) seeds migrated to the pelvis in 16 (6.0%) patients.
Although seed migration occurred predominantly within two weeks after seed implantation, six seeds were eventually found to have migrated or relocated to the chest long after seed implantation: four seeds were found to have migrated to the chest at a median of 27 months (range 9.1-16 months), and two seeds were found to have relocated from the pelvis to the chest at 6.1 and 48 months after seed implantation, respectively. In these patients, follow-up radiographs were undertaken routinely at every outpatient visit; however, migration or relocation of these seeds had not been found at the previous visit.
Categorizing patients by isotope, for 125I, 18/75 (24.0%) and for 103Pd, 16/61 (26.2%) with chest radiographs beyond 30 days had seeds found in the lungs.\n¶A mixture of linked and free seeds, with a median 74% of needles per case containing seeds in linked, with central and posterior needles typically containing loose seeds.\nIn contrast, the incidence of seed migration to the abdomen and pelvis has been reported rarely (Table 2). A possible reason is that the American Brachytherapy Society does not specifically recommend follow-up abdominal and pelvic radiographs after seed implantation [6]. Therefore, in most institutions, follow-up abdominal and pelvic radiographs would not be undertaken routinely. However, we found that seed migration to the abdomen and pelvis occurred in 2.2% and 5.6%, respectively, of our patient population. Although the incidence of seed migration to the abdomen and pelvis is lower than that of seed migration to the chest, we would consider it advisable to undertake follow-up abdominal and pelvic radiographs after seed implantation.\nWe found that 0.36% of implanted seeds migrated to the chest in 20% of our patient population, similar to previous reports (Table 2). It has been reported that the incidence of seed migration to the chest can be as high as 55% per patient population and 0.98% per number of implanted seeds (Table 2) [9]. The variability of the incidence of seed migration to the chest among the reported results is considered to be attributed to different types of seeds (linked or loose), different designs of seed placement (intraprostatic or extraprostatic), different timings of follow-up radiographs, and different protocols of follow-up chest radiographs (orthogonal or AP alone) (Table 2) [1-5,9,13].\nLiterature Survey of Seed Migration\n*Linked seeds were restricted to needles placed at the periphery of the prostate implant. All seeds placed centrally, close to the urethra, were loose seeds. On average, 8-10 loose seeds were used.\n†Of the total 125I seeds, 64% were linked seeds. Linked seeds were limited to the region of the capsule/periprostatic region. Loose seeds were utilized in the central aspect of the gland.\n††9 patients, within 14 days; 75 patients, more than 30 days.\n§11patients within 14 days; 61patients, more than 30 days.\n||Of the 20 patients who had postimplant chest radiographs obtained within 14 days of brachytherapy, none had seeds detected in the lungs. Categorizing patients by isotope, for 125I, 18/75 (24.0%) and for 103Pd, 16/61 (26.2%) with chest radiographs beyond 30 days had seeds found in the lungs.\n¶A mixture of linked and free seeds, with a median 74% of needles per case containing seeds in linked, with central and posterior needles typically containing loose seeds.\nIn contrast, the incidence of seed migration to the abdomen and pelvis has been reported rarely (Table 2). A possible reason is that the American Brachytherapy Society does not specifically recommend follow-up abdominal and pelvic radiographs after seed implantation [6]. Therefore, in most institutions, follow-up abdominal and pelvic radiographs would not be undertaken routinely. However, we found that seed migration to the abdomen and pelvis occurred in 2.2% and 5.6%, respectively, of our patient population. 
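As an aside, the per-seed and per-patient figures quoted in this paragraph follow directly from the raw counts in the Results; a minimal sketch of the arithmetic (counts taken from the text, printed with slightly coarser rounding than the paper uses):

implanted_seeds, implanted_patients = 19_236, 267
migrated = {
    "chest":   {"seeds": 69, "patients": 54},
    "abdomen": {"seeds": 7,  "patients": 6},
    "pelvis":  {"seeds": 15, "patients": 15},
}
for site, n in migrated.items():
    print(f"{site}: {100 * n['seeds'] / implanted_seeds:.2f}% of seeds, "
          f"{100 * n['patients'] / implanted_patients:.1f}% of patients")
# chest: 0.36% of seeds, 20.2% of patients
# abdomen: 0.04% of seeds, 2.2% of patients (0.036% per seed at full precision)
# pelvis: 0.08% of seeds, 5.6% of patients (0.078% per seed at full precision)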
Although the incidence of seed migration to the abdomen and pelvis is lower than that of seed migration to the chest, we would consider it advisable to undertake follow-up abdominal and pelvic radiographs after seed implantation.\n The dynamics of seed migration to the chest One day, two weeks, and three months after seed implantation, follow-up chest radiographs showed 22%, 88%, and 91%, respectively, of 69 seeds that eventually migrated to the chest. These results mean that 22%, 66%, and 2.9% of these 69 seeds migrated to the chest within one day, between one day and two weeks, and between two weeks and three months after seed implantation, respectively. About 90% of these 69 seeds migrated to the chest within two weeks after seed implantation. These results are similar to a previous report [13]. Merrick et al. have speculated that seed migration to the chest may be most likely to occur between 14 and 28 days after seed implantation [13]. Although we observed that several seeds migrated to the chest more than six months after seed implantation, this finding should be considered exceptional. Therefore, it is suggested that follow-up chest radiographs should be undertaken two weeks, or preferably a few months, after seed implantation to detect most seeds that will migrate to the chest.\nOne day, two weeks, and three months after seed implantation, follow-up chest radiographs showed 22%, 88%, and 91%, respectively, of 69 seeds that eventually migrated to the chest. These results mean that 22%, 66%, and 2.9% of these 69 seeds migrated to the chest within one day, between one day and two weeks, and between two weeks and three months after seed implantation, respectively. About 90% of these 69 seeds migrated to the chest within two weeks after seed implantation. These results are similar to a previous report [13]. Merrick et al. have speculated that seed migration to the chest may be most likely to occur between 14 and 28 days after seed implantation [13]. Although we observed that several seeds migrated to the chest more than six months after seed implantation, this finding should be considered exceptional. Therefore, it is suggested that follow-up chest radiographs should be undertaken two weeks, or preferably a few months, after seed implantation to detect most seeds that will migrate to the chest.\n Seed migration to the kidneys and Batson's vertebral plexus is not very rare The results of the present study show a total of four and five cases of seed migration to the kidneys and Batson's vertebral venous plexus, respectively, at only one institution, which suggests that such cases are not very rare. Meanwhile, in previous studies, a total of only four and four cases of seed migration to the kidneys and Batson's vertebral venous plexus, respectively, have been reported as rare cases, which is in disagreement with our conclusion [22,23,26,29]. The same number or more cases of seed migration to these areas were found in our single study compared with all previous studies. A possible explanation is that, in the present study, orthogonal chest radiographs, an abdominal radiograph, and a pelvic radiograph were undertaken routinely to detect seed migration to the chest, abdomen, and pelvis at several time points after seed implantation. Moreover, in all patients who had seed migration to the abdomen and pelvis, a CT scan was undertaken to identify the exact location of the migrated seeds. 
Consequently, more cases of seed migration to the kidneys and Batson's vertebral venous plexus were found in the present study. We speculate that some seed migration to the kidneys and Batson's vertebral venous plexus might have gone undetected in other institutions.\n In some cases of seed migration to the kidneys, the mechanism is difficult to explain Previous investigators have explained seed migration to the kidneys as follows: migrated seeds in venous vessels would enter the systemic circulation through a right-to-left shunt, such as a patent foramen ovale or pulmonary arteriovenous malformation, and then embolize to a branch of the renal arteries [26].
We call this route of seed migration through a right-to-left shunt \"a paradoxical route.\"\nThe present study has provided some interesting cases of seed migration to the kidneys, the mechanism of which is difficult to explain. In Case 1 of the present study, a seed had migrated to the right side of the middle abdomen, which was considered to be separated from the IVC, and then had relocated to the left kidney (Figure 1A-C). If the seed had initially embolized to an artery of the right side of the middle abdomen, such as a branch of the right renal artery, through a paradoxical route, it is difficult to explain how the seed would have relocated to the left kidney. In Case 2, two seeds had migrated to the same right kidney (Figure 2A-C). The mechanism of seed migration to the kidney through a paradoxical route is too complicated. Therefore, it is highly unlikely that two seeds would happen to migrate to the same right kidney through a paradoxical route by chance, although the possibility cannot be completely excluded. Other possible mechanisms should be proposed to explain how seed migration to the kidneys would have occurred in the two cases mentioned above.\n Other possible mechanisms of seed migration We assume that seed movement in venous vessels would reflect not only the force of the blood flow but also the force of gravity, and that a seed sometimes would move in venous vessels against the blood flow, because the gravity could overcome the blood flow. The explanation is as follows. Intravascular missile migration following gunshot injury has been reported [30-34]. A missile sometimes moves in a retrograde fashion against the normal blood flow in major venous vessels including the IVC, and sometimes lodges in the renal vein and the hepatic vein [30-34]. It has been postulated that missile migration against the blood flow in venous vessels could occur because of gravity, the patient's position (upright) at the moment of wounding and/or positional changes of the body, the weight and shape of the missile, and possible low flow states [30]. Although the size and the weight of an 125I radioactive seed (4.5 mm in length, 0.8 mm in diameter, and about 10 mg in weight per seed) are smaller than those of a bullet, it is considered that a seed could move against the blood flow in major venous vessels because of gravity. The reasons are as follows. The specific gravity of the seed (about 4 g/mL) is much higher than that of blood, and the seed is less water resistant because of its rod-shaped structure. In addition, a seed would usually be located near the wall of major venous vessels due to gravity and the patient's position, and therefore, a seed would be affected by a relatively slow stream of blood near the vascular wall compared with that in the center of major venous vessels. It is known that in major venous vessels, the blood flow is generally laminar, and the distribution of velocity across the tube is parabolic [35]. Therefore, the velocity of blood flow decreases from the center toward the wall of the venous vessel tube [35].\nThe mechanism of seed migration to various sites could be explained by the assumption that a seed could move in major venous vessels against the blood flow by gravity. In the present cases, seed migration to the kidneys, the liver, and the right groin area (probably a branch of the right femoral vein) (Case 3) would have occurred in venous vessels, partially in a retrograde fashion by gravity, without being passed through a paradoxical route. 
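To make the two physical claims in this paragraph concrete, a back-of-the-envelope check; the blood density and the idealized Poiseuille profile are textbook assumptions, not values taken from the paper:

import math

# Seed dimensions and weight as quoted above: 4.5 mm x 0.8 mm, about 10 mg
length_mm, diameter_mm, mass_mg = 4.5, 0.8, 10.0
volume_ml = (math.pi * (diameter_mm / 2) ** 2 * length_mm) / 1000.0  # cylinder volume, mm^3 -> mL
print(f"seed density ~ {mass_mg / 1000.0 / volume_ml:.1f} g/mL vs blood ~ 1.06 g/mL")  # ~4.4 g/mL

# Idealized laminar (Poiseuille) profile: v(r) = v_max * (1 - (r/R)^2), so a seed lying
# near the wall sees only a small fraction of the centerline velocity.
for r_over_R in (0.0, 0.5, 0.9, 1.0):
    print(f"r/R = {r_over_R:.1f}: v = {1.0 - r_over_R ** 2:.2f} * v_max")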
This explanation does not require the assumption that a patient would have a right-to-left shunt.\n Influence of seed migration on postimplant dosimetry The postimplant prostate D90 was not significantly different between patients with and without seed migration. Moreover, in 15 patients who had multiple migrated seeds, the postimplant prostate D90 was relatively acceptable, and no supplemental seed implantation was required. These results indicate that seed migration did not have a significant effect on postimplant prostate dosimetry in the present study. Possible reasons are as follows. First, in most patients with seed migration, only one or two seeds had migrated, which would have less effect on the dosimetry of the prostate. Tapen et al. have suggested that the loss of a few seeds may not have a significant effect on dose homogeneity or total dose to the prostate [5]. Second, seed migration would have much less effect on the dosimetry of the prostate than other mechanisms of seed loss, such as seed misplacement to the seminal vesicle or perineum and being voided in the urine postoperatively. Merrick et al.
have reported that seed migration to the chest accounted for only 10% of total seed loss from the prostate region, highlighting the importance of other mechanisms of loss [13].\n Seed migration-related sequelae In the present study, none of the 66 patients had symptoms related to the migrated seeds. Although it has been reported that most patients with seed migration are asymptomatic, there have been a few reports of seed migration-related sequelae, such as a few anecdotal instances of cardiac arrhythmia, myocardial infarction, and radiation pneumonitis [21,36,37]. Therefore, it is important to try to reduce the incidence of seed migration. The use of seeds linked with an absorbable suture material has been associated with a dramatically decreased rate of seed migration, although a potentially higher risk of radiotoxicity to the urethra and rectum has been pointed out [1,3,5,38]. In our study, linked seeds were not administered, because they are not commercially available in our country at present.", "We found that 0.36% of implanted seeds migrated to the chest in 20% of our patient population, similar to previous reports (Table 2). It has been reported that the incidence of seed migration to the chest can be as high as 55% per patient population and 0.98% per number of implanted seeds (Table 2) [9].
The variability of the incidence of seed migration to the chest among the reported results is considered to be attributed to different types of seeds (linked or loose), different designs of seed placement (intraprostatic or extraprostatic), different timings of follow-up radiographs, and different protocols of follow-up chest radiographs (orthogonal or AP alone) (Table 2) [1-5,9,13].\nLiterature Survey of Seed Migration\n*Linked seeds were restricted to needles placed at the periphery of the prostate implant. All seeds placed centrally, close to the urethra, were loose seeds. On average, 8-10 loose seeds were used.\n†Of the total 125I seeds, 64% were linked seeds. Linked seeds were limited to the region of the capsule/periprostatic region. Loose seeds were utilized in the central aspect of the gland.\n††9 patients, within 14 days; 75 patients, more than 30 days.\n§11patients within 14 days; 61patients, more than 30 days.\n||Of the 20 patients who had postimplant chest radiographs obtained within 14 days of brachytherapy, none had seeds detected in the lungs. Categorizing patients by isotope, for 125I, 18/75 (24.0%) and for 103Pd, 16/61 (26.2%) with chest radiographs beyond 30 days had seeds found in the lungs.\n¶A mixture of linked and free seeds, with a median 74% of needles per case containing seeds in linked, with central and posterior needles typically containing loose seeds.\nIn contrast, the incidence of seed migration to the abdomen and pelvis has been reported rarely (Table 2). A possible reason is that the American Brachytherapy Society does not specifically recommend follow-up abdominal and pelvic radiographs after seed implantation [6]. Therefore, in most institutions, follow-up abdominal and pelvic radiographs would not be undertaken routinely. However, we found that seed migration to the abdomen and pelvis occurred in 2.2% and 5.6%, respectively, of our patient population. Although the incidence of seed migration to the abdomen and pelvis is lower than that of seed migration to the chest, we would consider it advisable to undertake follow-up abdominal and pelvic radiographs after seed implantation.", "One day, two weeks, and three months after seed implantation, follow-up chest radiographs showed 22%, 88%, and 91%, respectively, of 69 seeds that eventually migrated to the chest. These results mean that 22%, 66%, and 2.9% of these 69 seeds migrated to the chest within one day, between one day and two weeks, and between two weeks and three months after seed implantation, respectively. About 90% of these 69 seeds migrated to the chest within two weeks after seed implantation. These results are similar to a previous report [13]. Merrick et al. have speculated that seed migration to the chest may be most likely to occur between 14 and 28 days after seed implantation [13]. Although we observed that several seeds migrated to the chest more than six months after seed implantation, this finding should be considered exceptional. Therefore, it is suggested that follow-up chest radiographs should be undertaken two weeks, or preferably a few months, after seed implantation to detect most seeds that will migrate to the chest.", "The results of the present study show a total of four and five cases of seed migration to the kidneys and Batson's vertebral venous plexus, respectively, at only one institution, which suggests that such cases are not very rare. 
Meanwhile, in previous studies, a total of only four and four cases of seed migration to the kidneys and Batson's vertebral venous plexus, respectively, have been reported as rare cases, which is in disagreement with our conclusion [22,23,26,29]. The same number or more cases of seed migration to these areas were found in our single study compared with all previous studies. A possible explanation is that, in the present study, orthogonal chest radiographs, an abdominal radiograph, and a pelvic radiograph were undertaken routinely to detect seed migration to the chest, abdomen, and pelvis at several time points after seed implantation. Moreover, in all patients who had seed migration to the abdomen and pelvis, a CT scan was undertaken to identify the exact location of the migrated seeds. Consequently, more cases of seed migration to the kidneys and Batson's vertebral venous plexus were found in the present study. We speculate that some seed migration to the kidneys and Batson's vertebral venous plexus might have gone undetected in other institutions.", "Previous investigators have explained seed migration to the kidneys as follows: migrated seeds in venous vessels would enter the systemic circulation through a right-to-left shunt, such as a patent foramen ovale or pulmonary arteriovenous malformation, and then embolize to a branch of the renal arteries [26]. We call this route of seed migration through a right-to-left shunt \"a paradoxical route.\"\nThe present study has provided some interesting cases of seed migration to the kidneys, the mechanism of which is difficult to explain. In Case 1 of the present study, a seed had migrated to the right side of the middle abdomen, which was considered to be separated from the IVC, and then had relocated to the left kidney (Figure 1A-C). If the seed had initially embolized to an artery of the right side of the middle abdomen, such as a branch of the right renal artery, through a paradoxical route, it is difficult to explain how the seed would have relocated to the left kidney. In Case 2, two seeds had migrated to the same right kidney (Figure 2A-C). The mechanism of seed migration to the kidney through a paradoxical route is too complicated. Therefore, it is highly unlikely that two seeds would happen to migrate to the same right kidney through a paradoxical route by chance, although the possibility cannot be completely excluded. Other possible mechanisms should be proposed to explain how seed migration to the kidneys would have occurred in the two cases mentioned above.", "We assume that seed movement in venous vessels would reflect not only the force of the blood flow but also the force of gravity, and that a seed sometimes would move in venous vessels against the blood flow, because the gravity could overcome the blood flow. The explanation is as follows. Intravascular missile migration following gunshot injury has been reported [30-34]. A missile sometimes moves in a retrograde fashion against the normal blood flow in major venous vessels including the IVC, and sometimes lodges in the renal vein and the hepatic vein [30-34]. It has been postulated that missile migration against the blood flow in venous vessels could occur because of gravity, the patient's position (upright) at the moment of wounding and/or positional changes of the body, the weight and shape of the missile, and possible low flow states [30]. 
Although the size and the weight of an 125I radioactive seed (4.5 mm in length, 0.8 mm in diameter, and about 10 mg in weight per seed) are smaller than those of a bullet, it is considered that a seed could move against the blood flow in major venous vessels because of gravity. The reasons are as follows. The specific gravity of the seed (about 4 g/mL) is much higher than that of blood, and the seed is less water resistant because of its rod-shaped structure. In addition, a seed would usually be located near the wall of major venous vessels due to gravity and the patient's position, and therefore, a seed would be affected by a relatively slow stream of blood near the vascular wall compared with that in the center of major venous vessels. It is known that in major venous vessels, the blood flow is generally laminar, and the distribution of velocity across the tube is parabolic [35]. Therefore, the velocity of blood flow decreases from the center toward the wall of the venous vessel tube [35].\nThe mechanism of seed migration to various sites could be explained by the assumption that a seed could move in major venous vessels against the blood flow by gravity. In the present cases, seed migration to the kidneys, the liver, and the right groin area (probably a branch of the right femoral vein) (Case 3) would have occurred in venous vessels, partially in a retrograde fashion by gravity, without being passed through a paradoxical route. This explanation does not require the assumption that a patient would have a right-to-left shunt.", "The postimplant prostate D90 was not significantly different between patients with and without seed migration. Moreover, in 15 patients who had multiple migrated seeds, the postimplant prostate D90 was relatively acceptable, and no supplemental seed implantation was required. These results indicate that seed migration did not have a significant effect on postimplant prostate dosimetry in the present study. Possible reasons are as follows. First, in most patients with seed migration, only one or two seeds had migrated, which would have less effect on the dosimetry of the prostate. Tapen et al. have suggested that the loss of a few seeds may not have a significant effect on dose homogeneity or total dose to the prostate [5]. Second, seed migration would have much less effect on the dosimetry of the prostate than other mechanisms of seed loss, such as seed misplacement to the seminal vesicle or perineum and being voided in the urine postoperatively. Merrick et al. have reported that seed migration to the chest accounted for only 10% of total seed loss from the prostate region, highlighting the importance of other mechanisms of loss [13].", "In the present study, none of the 66 patients had symptoms related to the migrated seeds. Although it has been reported that most patients with seed migration are asymptomatic, there have been a few reports of seed migration-related sequelae, such as a few anecdotal instances of cardiac arrhythmia, myocardial infarction, and radiation pneumonitis [21,36,37]. Therefore, it is important to try to reduce the incidence of seed migration. The use of seeds linked with an absorbable suture material has been associated with a dramatically decreased rate of seed migration, although a potentially higher risk of radiotoxicity to the urethra and rectum has been pointed out [1,3,5,38]. 
In our study, linked seeds were not administered, because they are not commercially available in our country at present.", "In the present study, we determined the incidence of seed migration not only to the chest, but also to the abdomen and pelvis. Although the incidence of seed migration to the abdomen and pelvis was lower than that of seed migration to the chest, it would be advisable to undertake follow-up abdominal and pelvic radiographs after seed implantation. Seed migration to the chest occurred predominantly within two weeks and, therefore, it is suggested that follow-up chest radiographs should be undertaken two weeks or later after seed implantation. Seed migration to the kidneys and Batson's vertebral venous plexus was not very rare. Seed migration did not significantly affect postimplant prostate D90." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Brachytherapy", "125I", "Migration", "Prostate cancer", "Seed" ]
Background: Seed migration is a well-recognized event that occurs after transperineal interstitial prostate brachytherapy, and it is observed more often with loose seeds than with linked seeds [1-5]. It is well known that the most frequent site of seed migration is the chest. The American Brachytherapy Society has advised that a chest radiograph should be undertaken at the first follow-up visit to scan the lungs for embolized seeds [6]. Consequently, the incidence of seed migration to the chest has been well reported [1,2,4,5,7-19]. However, documentation of the incidence of seed migration to the abdomen and pelvis is rare. Rare cases of seed migration to a coronary artery, the right ventricle, the liver, the kidneys, Batson's vertebral venous plexus, and the left testicular vein have been reported [20-26]. However, it has never been fully determined whether seed migration to these locations is really rare. The primary purposes of the present study were to determine the incidence of seed migration not only to the chest, but also to the abdomen and the pelvis at our institution and to identify the exact location of the seeds that had migrated to the abdomen and pelvis with computed tomography (CT). The secondary purpose was to determine the impact of seed migration on postimplant dosimetry. Methods: We reviewed the records of 267 patients who underwent transperineal interstitial prostate brachytherapy with loose 125I seeds for clinical T1/T2 prostate cancer at our institution. Table 1 details the characteristics of all 267 patients. Two patients (0.75%) received brachytherapy plus external beam radiotherapy (45 Gy in 1.8 Gy fractions). One hundred twenty-three of the 267 (46.1%) patients also underwent neoadjuvant hormonal therapy (NHT), which consisted of luteinizing hormone-releasing hormone agonist and antiandrogens. NHT was generally undertaken in patients with a prostate volume >40 cc or those with pubic arch interference by transrectal ultrasound (TRUS) at the preimplant volume study [27]. Patient Characteristics (N = 267) Data are presented as mean ± standard error (range) or number (percent) of patients. Abbreviations: PSA = prostate-specific antigen; TRUS = transrectal ultrasound One month before seed implantation, a preplan was obtained with TRUS images taken at 5 mm intervals from the base to the apex of the prostate with the patient in the dorsal lithotomy position. The planning target volume included the prostate gland, with a margin of 3 mm anteriorly and laterally and 5 mm in the cranial and caudal directions. No margin was added posteriorly at the rectal interface. Treatment planning used a peripheral or a modified peripheral approach. For the 265 patients who received brachytherapy alone, the prescribed brachytherapy dose was 145 Gy and 160 Gy for the first 163 patients and the subsequent 102 patients, respectively. For the remaining two patients who received brachytherapy plus external beam radiotherapy, the prescribed brachytherapy dose was 110 Gy. TG 43 formalism was used in the preplanning and postimplant dosimetry analyses [28]. All 267 patients were treated with loose 125I radioactive seeds with a Mick applicator (Mick Radio-Nuclear Instruments, Bronx, NY). To ensure that no seeds were left in the bladder, postoperative fluoroscopic images were obtained. Prior to discharge, postoperative surveys of voided urine were conducted to detect voided seeds. 
Orthogonal chest radiographs, an abdominal radiograph, and a pelvic radiograph were undertaken to document the occurrence and sites of seed migration one day after seed implantation. These follow-up radiographs were undertaken routinely at each outpatient visit. Patients returned to our outpatient clinic two weeks and three months after seed implantation, then at three-month intervals for the first three years and at six-month intervals thereafter. Seed migration to the chest and the abdomen was recorded when one or more seeds were visualized on orthogonal chest radiographs and the anteroposterior (AP) abdominal radiograph, respectively. Seed migration to the pelvis was recorded when one or more seeds were separated from the main seed cluster on an AP pelvic radiograph. However, seeds placed into the bladder and the seminal vesicles or seeds placed inferior to the prostate by mistake were not scored as migrated. Seeds voided in the urine were not scored as migrated. Subsequently, all patients who had seed migration to the abdomen and pelvis underwent a CT scan to identify the exact location of the migrated seeds. The incidence of seed migration to the chest, abdomen, and pelvis was calculated. Postimplant dosimetric analysis by CT was performed one month after seed implantation. The seed count in the region of the prostate gland was determined on the AP pelvic radiographs obtained two weeks after seed implantation. The postimplant prostate D90 (the dose received by 90% of the volume of the prostate) value was compared between patients with and without seed migration. Statistical analysis was performed with Student's t-test. A p value of <0.05 was considered statistically significant. Results: In total, 19,236 seeds were implanted in 267 patients. All 267 patients underwent follow-up radiographs. Median follow-up was 41 months (range, 8.5-76 months). At one day after seed implantation, follow-up radiographs demonstrated that 41 of the 19,236 (0.21%) seeds migrated in 37 of the 267 (13.9%) patients: three seeds in one patient, two seeds in each of two patients, and a single seed in each of the remaining 34 patients. Fifteen (0.078%) seeds migrated to the chest in 15 (5.6%) patients. One (0.0052%) seed migrated to the abdomen in one (0.37%) patient. Twenty-five (0.13%) seeds migrated to the pelvis in 23 (8.6%) patients. At two weeks after seed implantation, 85 of the 19,236 (0.44%) seeds migrated in 61 of the 267 (22.8%) patients: seven seeds in one patient, three seeds in each of four patients, two seeds in each of 10 patients, and a single seed in each of the remaining 46 patients. Sixty-one (0.32%) seeds migrated to the chest in 48 (18.0%) patients. Seven (0.036%) seeds migrated to the abdomen in six (2.2%) patients. Seventeen (0.088%) seeds migrated to the pelvis in 16 (6.0%) patients. At three months after seed implantation, 87 of the 19,236 (0.45%) seeds migrated in 63 of the 267 (23.6%) patients: seven seeds in one patient, three seeds in each of four patients, two seeds in each of 10 patients, and a single seed in each of the remaining 48 patients. Sixty-three (0.33%) seeds migrated to the chest in 50 (18.7%) patients. Seven (0.036%) seeds migrated to the abdomen in six (2.2%) patients. Seventeen (0.088%) seeds migrated to the pelvis in 16 (6.0%) patients. 
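A compact tabulation of the time course reported in this passage (counts from the text; 19,236 implanted seeds, 267 patients):

time_course = {
    "1 day":    {"seeds": 41, "patients": 37},
    "2 weeks":  {"seeds": 85, "patients": 61},
    "3 months": {"seeds": 87, "patients": 63},
    "eventual": {"seeds": 91, "patients": 66},
}
for when, n in time_course.items():
    print(f"{when}: {100 * n['seeds'] / 19_236:.2f}% of seeds, {100 * n['patients'] / 267:.1f}% of patients")
# 1 day: 0.21% / 13.9%; 2 weeks: 0.44% / 22.8%; 3 months: 0.45% / 23.6%; eventual: 0.47% / 24.7%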
Although seed migration occurred predominantly within two weeks after seed implantation, eventually, six seeds were found to migrate or relocate to the chest long after seed implantation: four seeds were found to have migrated to the chest at a median of 27 months (range 9.1-16 months), and two seeds were found to have relocated from the pelvis to the chest at 6.1 and 48 months after seed implantation, respectively. In these patients, follow-up radiographs were undertaken routinely at every outpatient visit; however, migration or relocation of these seeds was not found at the previous visit. Meanwhile, no seed relocation from the chest to other sites was observed in the present study. Eventually, 91 of the 19,236 (0.47%) seeds migrated in 66 of the 267 (24.7%) patients: seven seeds in one patient, three seeds in each of five patients, two seeds in each of nine patients, and a single seed in each of the remaining 51 patients. Sixty-nine (0.36%) seeds migrated to the chest in 54 (20.2%) patients. Seven (0.036%) seeds migrated to the abdomen in six (2.2%) patients. Fifteen (0.078%) seeds migrated to the pelvis in 15 (5.6%) patients. All 66 patients were informed of seed migration; none of these patients had symptoms related to the migrated seeds. Seeds that migrated to the abdomen (seven seeds in six patients) Two of the 19,236 (0.010%) seeds migrated to the liver in two of the 267 (0.75%) patients: a single seed migrated to the liver in each of two patients. Five (0.026%) seeds migrated to the kidneys in four (1.5%) patients: two seeds migrated to the same kidney in one patient, and a single seed migrated to the kidney in each of the remaining three patients. In one patient (Case 1), one day after seed implantation, an abdominal radiograph showed that a seed had migrated to the right side of the middle abdomen, which was considered to be separated from the inferior vena cava (IVC) (Figure 1A). However, two weeks after seed implantation, an abdominal radiograph showed that the seed had disappeared from the right side of the middle abdomen, and showed a seed that had migrated to the left side of the middle abdomen (Figure 1B). On pelvic radiographs, there were no changes in number of seeds that had been implanted into the prostate between one day and two weeks after seed implantation. It was concluded that the seed had relocated from the right side of the middle abdomen to the left side of the middle abdomen. A subsequent abdominal CT demonstrated that the seed had migrated to the left kidney (Figure 1C). In another patient (Case 2), two weeks after seed implantation, an abdominal radiograph showed that two seeds had migrated to the same right kidney (Figure 2A-C). Case 1: Relocation from the right side of the middle abdomen to the left kidney. One day after seed implantation, a follow-up abdominal radiograph showed that a seed had migrated to the right side of the middle abdomen (solid arrow) (A). Two weeks later, the seed had relocated from the right side of the middle abdomen to the left side of the middle abdomen (solid arrow) (B). Subsequent computed tomography showed that the seed had migrated to the left kidney (solid arrow) (C). Case 2: Migration of two seeds to the same right kidney. Two weeks after seed implantation, a follow-up abdominal radiograph showed that two seeds had migrated to the right side of the middle abdomen (solid arrows) (A). Subsequent computed tomography showed that these two seeds had migrated to the same right kidney (solid arrows) (B,C). 
Seeds that migrated to the pelvis (15 seeds in 15 patients) A single seed migrated to the pelvis in each of 15 patients. Five of the 19,236 (0.026%) seeds migrated to Batson's vertebral venous plexus in five of the 267 (1.9%) patients. Four (0.021%) seeds migrated to the sacral venous plexus in four (1.5%) patients. Two (0.010%) seeds migrated to the iliac veins in two (0.75%) patients. Two (0.010%) seeds migrated to the right ischial bone in two (0.75%) patients. Two (0.010%) seeds migrated to the obturator internus muscles in two (0.75%) patients. Seed relocation four years after seed implantation In one patient (Case 3), seed relocation was found four years after seed implantation. A seed had migrated to the right groin area one day after seed implantation (Figure 3A-B).
Three years and six months after seed implantation, a pelvic radiograph showed that the seed was in the same location. However, four years after seed implantation, a pelvic radiograph showed that the seed had disappeared from the right groin area (Figure 3C), and a chest radiograph showed a seed that had migrated to the left lung, which was not found at three years and six months after seed implantation. It was concluded that the seed had initially lodged in a branch of the right femoral vein, and then relocated to the left lung through the IVC, long after seed implantation. Case 3: Seed disappearance from the right groin area four years after seed implantation. One day after seed implantation, a follow-up pelvic radiograph showed that a seed was located inferior to the right side of the pelvis (solid arrow) (A). Subsequent computed tomography showed that the seed had migrated to the right groin area (solid arrow) (B). Four years after seed implantation, a follow-up pelvic radiograph showed that the seed had disappeared from the pelvic area (C). In one patient (Case 3), seed relocation was found four years after seed implantation. A seed had migrated to the right groin area one day after seed implantation (Figure 3A-B). Three years and six months after seed implantation, a pelvic radiograph showed that the seed was in the same location. However, four years after seed implantation, a pelvic radiograph showed that the seed had disappeared from the right groin area (Figure 3C), and a chest radiograph showed a seed that had migrated to the left lung, which was not found at three years and six months after seed implantation. It was concluded that the seed had initially lodged in a branch of the right femoral vein, and then relocated to the left lung through the IVC, long after seed implantation. Case 3: Seed disappearance from the right groin area four years after seed implantation. One day after seed implantation, a follow-up pelvic radiograph showed that a seed was located inferior to the right side of the pelvis (solid arrow) (A). Subsequent computed tomography showed that the seed had migrated to the right groin area (solid arrow) (B). Four years after seed implantation, a follow-up pelvic radiograph showed that the seed had disappeared from the pelvic area (C). Postimplant dosimetric analysis In the 265 patients who received brachytherapy alone, the postimplant prostate D90 was 175.0 ± 1.3 Gy (mean ± standard error [SE]). In these 265 patients, the postimplant prostate D90 value was not significantly different between patients with and without seed migration (mean ± SE, 175.1 ± 2.3 Gy vs. 175.0 ± 1.5 Gy, respectively, p = 0.992). In 15 patients who had multiple migrated seeds, the postimplant prostate D90 ranged from 137.2 to 198.5 Gy (mean ± SE, 171.8 ± 5.5 Gy), which was not significantly different from that in patients without seed migration (p = 0.573). In the remaining two patients who received brachytherapy plus external beam radiotherapy, the postimplant prostate D90 values were 116.4 Gy and 132.6 Gy. These two patients had no seed migration. In the 265 patients who received brachytherapy alone, the postimplant prostate D90 was 175.0 ± 1.3 Gy (mean ± standard error [SE]). In these 265 patients, the postimplant prostate D90 value was not significantly different between patients with and without seed migration (mean ± SE, 175.1 ± 2.3 Gy vs. 175.0 ± 1.5 Gy, respectively, p = 0.992). 
In 15 patients who had multiple migrated seeds, the postimplant prostate D90 ranged from 137.2 to 198.5 Gy (mean ± SE, 171.8 ± 5.5 Gy), which was not significantly different from that in patients without seed migration (p = 0.573). In the remaining two patients who received brachytherapy plus external beam radiotherapy, the postimplant prostate D90 values were 116.4 Gy and 132.6 Gy. These two patients had no seed migration. Seeds that migrated to the abdomen (seven seeds in six patients): Two of the 19,236 (0.010%) seeds migrated to the liver in two of the 267 (0.75%) patients: a single seed migrated to the liver in each of two patients. Five (0.026%) seeds migrated to the kidneys in four (1.5%) patients: two seeds migrated to the same kidney in one patient, and a single seed migrated to the kidney in each of the remaining three patients. In one patient (Case 1), one day after seed implantation, an abdominal radiograph showed that a seed had migrated to the right side of the middle abdomen, which was considered to be separated from the inferior vena cava (IVC) (Figure 1A). However, two weeks after seed implantation, an abdominal radiograph showed that the seed had disappeared from the right side of the middle abdomen, and showed a seed that had migrated to the left side of the middle abdomen (Figure 1B). On pelvic radiographs, there were no changes in number of seeds that had been implanted into the prostate between one day and two weeks after seed implantation. It was concluded that the seed had relocated from the right side of the middle abdomen to the left side of the middle abdomen. A subsequent abdominal CT demonstrated that the seed had migrated to the left kidney (Figure 1C). In another patient (Case 2), two weeks after seed implantation, an abdominal radiograph showed that two seeds had migrated to the same right kidney (Figure 2A-C). Case 1: Relocation from the right side of the middle abdomen to the left kidney. One day after seed implantation, a follow-up abdominal radiograph showed that a seed had migrated to the right side of the middle abdomen (solid arrow) (A). Two weeks later, the seed had relocated from the right side of the middle abdomen to the left side of the middle abdomen (solid arrow) (B). Subsequent computed tomography showed that the seed had migrated to the left kidney (solid arrow) (C). Case 2: Migration of two seeds to the same right kidney. Two weeks after seed implantation, a follow-up abdominal radiograph showed that two seeds had migrated to the right side of the middle abdomen (solid arrows) (A). Subsequent computed tomography showed that these two seeds had migrated to the same right kidney (solid arrows) (B,C). Seeds that migrated to the pelvis (15 seeds in 15 patients): A single seed migrated to the pelvis in each of 15 patients. Five of the 19,236 (0.026%) seeds migrated to Batson's vertebral venous plexus in five of the 267 (1.9%) patients. Four (0.021%) seeds migrated to the sacral venous plexus in four (1.5%) patients. Two (0.010%) seeds migrated to the iliac veins in two (0.75%) patients. Two (0.010%) seeds migrated to the right ischial bone in two (0.75%) patients. Two (0.010%) seeds migrated to the obturator internus muscles in two (0.75%) patients. Seed relocation four years after seed implantation: In one patient (Case 3), seed relocation was found four years after seed implantation. A seed had migrated to the right groin area one day after seed implantation (Figure 3A-B). 
Discussion
Incidence of seed migration
We found that 0.36% of implanted seeds migrated to the chest in 20% of our patient population, similar to previous reports (Table 2). It has been reported that the incidence of seed migration to the chest can be as high as 55% per patient population and 0.98% per number of implanted seeds (Table 2) [9]. The variability in the reported incidence of seed migration to the chest is probably attributable to different types of seeds (linked or loose), different designs of seed placement (intraprostatic or extraprostatic), different timings of follow-up radiographs, and different protocols of follow-up chest radiographs (orthogonal or AP alone) (Table 2) [1-5,9,13].
Literature Survey of Seed Migration (Table 2 footnotes):
*Linked seeds were restricted to needles placed at the periphery of the prostate implant. All seeds placed centrally, close to the urethra, were loose seeds. On average, 8-10 loose seeds were used.
†Of the total 125I seeds, 64% were linked seeds. Linked seeds were limited to the capsule/periprostatic region. Loose seeds were utilized in the central aspect of the gland.
††9 patients, within 14 days; 75 patients, more than 30 days.
§11 patients, within 14 days; 61 patients, more than 30 days.
||Of the 20 patients who had postimplant chest radiographs obtained within 14 days of brachytherapy, none had seeds detected in the lungs. Categorizing patients by isotope, for 125I, 18/75 (24.0%) and, for 103Pd, 16/61 (26.2%) of patients with chest radiographs beyond 30 days had seeds found in the lungs.
¶A mixture of linked and free seeds, with a median of 74% of needles per case containing linked seeds; central and posterior needles typically contained loose seeds.
In contrast, the incidence of seed migration to the abdomen and pelvis has rarely been reported (Table 2). A possible reason is that the American Brachytherapy Society does not specifically recommend follow-up abdominal and pelvic radiographs after seed implantation [6]. Therefore, in most institutions, follow-up abdominal and pelvic radiographs would not be undertaken routinely. However, we found that seed migration to the abdomen and pelvis occurred in 2.2% and 5.6%, respectively, of our patient population. Although the incidence of seed migration to the abdomen and pelvis is lower than that of seed migration to the chest, we consider it advisable to undertake follow-up abdominal and pelvic radiographs after seed implantation.
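The per-seed and per-patient incidence figures quoted in this subsection follow directly from the counts given in the Results; the short sketch below simply recomputes them as a consistency check.

```python
# Recompute the migration incidences from the reported counts
# (19,236 implanted seeds, 267 patients); purely a sanity check.
total_seeds, total_patients = 19_236, 267

sites = {
    "chest":   {"seeds": 69, "patients": 54},
    "abdomen": {"seeds": 7,  "patients": 6},
    "pelvis":  {"seeds": 15, "patients": 15},
}

for site, n in sites.items():
    print(f"{site}: {n['seeds'] / total_seeds:.3%} of seeds, "
          f"{n['patients'] / total_patients:.1%} of patients")
# chest: 0.359% / 20.2%, abdomen: 0.036% / 2.2%, pelvis: 0.078% / 5.6%
```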
The dynamics of seed migration to the chest
One day, two weeks, and three months after seed implantation, follow-up chest radiographs showed 22%, 88%, and 91%, respectively, of the 69 seeds that eventually migrated to the chest. These results mean that 22%, 66%, and 2.9% of these 69 seeds migrated to the chest within one day, between one day and two weeks, and between two weeks and three months after seed implantation, respectively. About 90% of these 69 seeds migrated to the chest within two weeks after seed implantation. These results are similar to a previous report [13]. Merrick et al. have speculated that seed migration to the chest may be most likely to occur between 14 and 28 days after seed implantation [13]. Although we observed that several seeds migrated to the chest more than six months after seed implantation, this finding should be considered exceptional. Therefore, it is suggested that follow-up chest radiographs should be undertaken two weeks, or preferably a few months, after seed implantation to detect most seeds that will migrate to the chest.
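The interval percentages in the paragraph above are obtained by differencing the cumulative detection fractions at successive radiographs; the snippet below shows the arithmetic explicitly (working from the rounded percentages, so the last interval comes out at about 3% rather than the exact 2.9% derived from the seed counts).

```python
# Convert the cumulative fractions of the 69 chest-migrated seeds seen on the
# follow-up radiographs into the fraction newly detected in each interval.
cumulative = {"1 day": 0.22, "2 weeks": 0.88, "3 months": 0.91}

previous = 0.0
for timepoint, fraction in cumulative.items():
    newly_detected = fraction - previous
    print(f"by {timepoint}: {fraction:.0%} cumulative, {newly_detected:.0%} in this interval")
    previous = fraction
# -> 22% within one day, 66% between one day and two weeks,
#    ~3% between two weeks and three months.
```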
Seed migration to the kidneys and Batson's vertebral plexus is not very rare
The present study found a total of four and five cases of seed migration to the kidneys and Batson's vertebral venous plexus, respectively, at a single institution, which suggests that such cases are not very rare. In previous studies, however, only four cases of seed migration to the kidneys and four cases of migration to Batson's vertebral venous plexus have been reported, and these were described as rare events, which is in disagreement with our conclusion [22,23,26,29]. Our single study therefore found at least as many cases of migration to these sites as all previous studies combined. A possible explanation is that, in the present study, orthogonal chest radiographs, an abdominal radiograph, and a pelvic radiograph were undertaken routinely at several time points after seed implantation to detect seed migration to the chest, abdomen, and pelvis. Moreover, in all patients who had seed migration to the abdomen and pelvis, a CT scan was undertaken to identify the exact location of the migrated seeds. Consequently, more cases of seed migration to the kidneys and Batson's vertebral venous plexus were found in the present study. We speculate that some cases of seed migration to the kidneys and Batson's vertebral venous plexus might have gone undetected at other institutions.
In some cases of seed migration to the kidneys, the mechanism is difficult to explain
Previous investigators have explained seed migration to the kidneys as follows: migrated seeds in venous vessels would enter the systemic circulation through a right-to-left shunt, such as a patent foramen ovale or pulmonary arteriovenous malformation, and then embolize to a branch of the renal arteries [26]. We call this route of seed migration through a right-to-left shunt "a paradoxical route." The present study provided some cases of seed migration to the kidneys whose mechanism is difficult to explain in this way. In Case 1 of the present study, a seed had migrated to the right side of the middle abdomen, in a position considered to be separate from the IVC, and then relocated to the left kidney (Figure 1A-C). If the seed had initially embolized to an artery on the right side of the middle abdomen, such as a branch of the right renal artery, through a paradoxical route, it is difficult to explain how the seed would then have relocated to the left kidney. In Case 2, two seeds had migrated to the same right kidney (Figure 2A-C). The route a seed would have to take to reach a kidney through a paradoxical shunt is convoluted, and it is therefore highly unlikely that two seeds would both happen to migrate to the same right kidney through a paradoxical route by chance, although the possibility cannot be completely excluded. Other possible mechanisms should therefore be considered to explain how seed migration to the kidneys occurred in these two cases.
Other possible mechanisms of seed migration
We assume that seed movement in venous vessels reflects not only the force of the blood flow but also the force of gravity, and that a seed may sometimes move in venous vessels against the blood flow because gravity can overcome the flow. The reasoning is as follows. Intravascular missile migration following gunshot injury has been reported [30-34]. A missile sometimes moves in a retrograde fashion against the normal blood flow in major venous vessels, including the IVC, and sometimes lodges in the renal vein or the hepatic vein [30-34]. It has been postulated that missile migration against the blood flow in venous vessels can occur because of gravity, the patient's position (upright) at the moment of wounding and/or positional changes of the body, the weight and shape of the missile, and possible low-flow states [30]. Although an 125I radioactive seed (4.5 mm in length, 0.8 mm in diameter, and about 10 mg in weight per seed) is smaller and lighter than a bullet, we consider that a seed could likewise move against the blood flow in major venous vessels because of gravity. The reasons are as follows. The specific gravity of the seed (about 4 g/mL) is much higher than that of blood, and, because of its rod-shaped structure, the seed experiences relatively little resistance from the surrounding blood. In addition, a seed would usually lie near the wall of major venous vessels because of gravity and the patient's position, and would therefore be exposed to a relatively slow stream of blood near the vascular wall compared with the center of the vessel. It is known that in major venous vessels the blood flow is generally laminar and the velocity distribution across the tube is parabolic [35]; the velocity therefore decreases from the center of the vessel toward the wall [35]. The mechanism of seed migration to various sites can thus be explained by the assumption that a seed can move in major venous vessels against the blood flow under gravity. In the present cases, seed migration to the kidneys, the liver, and the right groin area (probably a branch of the right femoral vein) (Case 3) would have occurred within venous vessels, partly in a retrograde fashion under gravity, without passing through a paradoxical route. This explanation does not require the assumption that the patient has a right-to-left shunt.
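To make the laminar-flow argument concrete, the sketch below evaluates the standard parabolic (Poiseuille) velocity profile for fully developed laminar tube flow. The formula is a textbook assumption introduced here only for illustration; it is not taken from the study or from reference [35].

```python
# Parabolic velocity profile for fully developed laminar flow in a tube:
# v(r) = 2 * v_mean * (1 - (r/R)**2). Used here only to illustrate how slow
# the stream is close to the vessel wall, where a dense seed would tend to lie.
def parabolic_velocity(r_over_R, v_mean=1.0):
    """Local velocity (in units of the mean velocity) at radial position r/R."""
    return 2.0 * v_mean * (1.0 - r_over_R ** 2)

for r_over_R in (0.0, 0.5, 0.9, 0.99):
    print(f"r/R = {r_over_R:4.2f}: v = {parabolic_velocity(r_over_R):.2f} x mean velocity")
# At the center the velocity is twice the mean; within 1% of the wall it drops
# to about 4% of the mean, so a seed lying against the wall feels little drag.
```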
Influence of seed migration on postimplant dosimetry
The postimplant prostate D90 was not significantly different between patients with and without seed migration. Moreover, in the 15 patients who had multiple migrated seeds, the postimplant prostate D90 was relatively acceptable, and no supplemental seed implantation was required. These results indicate that seed migration did not have a significant effect on postimplant prostate dosimetry in the present study. Possible reasons are as follows. First, in most patients with seed migration, only one or two seeds had migrated, which would have little effect on the dosimetry of the prostate. Tapen et al. have suggested that the loss of a few seeds may not have a significant effect on dose homogeneity or total dose to the prostate [5]. Second, seed migration would have much less effect on the dosimetry of the prostate than other mechanisms of seed loss, such as seed misplacement to the seminal vesicle or perineum and seeds being voided in the urine postoperatively.
Merrick et al. have reported that seed migration to the chest accounted for only 10% of total seed loss from the prostate region, highlighting the importance of other mechanisms of loss [13].
Seed migration-related sequelae
In the present study, none of the 66 patients had symptoms related to the migrated seeds. Although it has been reported that most patients with seed migration are asymptomatic, there have been a few reports of seed migration-related sequelae, such as anecdotal instances of cardiac arrhythmia, myocardial infarction, and radiation pneumonitis [21,36,37]. Therefore, it is important to try to reduce the incidence of seed migration. The use of seeds linked with an absorbable suture material has been associated with a dramatically decreased rate of seed migration, although a potentially higher risk of radiotoxicity to the urethra and rectum has been pointed out [1,3,5,38]. In our study, linked seeds were not used, because they are not commercially available in our country at present.
Conclusions
In the present study, we determined the incidence of seed migration not only to the chest, but also to the abdomen and pelvis. Although the incidence of seed migration to the abdomen and pelvis was lower than that of seed migration to the chest, it would be advisable to undertake follow-up abdominal and pelvic radiographs after seed implantation. Seed migration to the chest occurred predominantly within two weeks and, therefore, it is suggested that follow-up chest radiographs should be undertaken two weeks or later after seed implantation. Seed migration to the kidneys and Batson's vertebral venous plexus was not very rare. Seed migration did not significantly affect postimplant prostate D90.
Background: The aim was to determine the incidence of seed migration not only to the chest, but also to the abdomen and pelvis after transperineal interstitial prostate brachytherapy with loose (125)I seeds. Methods: We reviewed the records of 267 patients who underwent prostate brachytherapy with loose (125)I seeds. After seed implantation, orthogonal chest radiographs, an abdominal radiograph, and a pelvic radiograph were undertaken routinely to document the occurrence and sites of seed migration. The incidence of seed migration to the chest, abdomen, and pelvis was calculated. All patients who had seed migration to the abdomen and pelvis subsequently underwent a computed tomography scan to identify the exact location of the migrated seeds. Postimplant dosimetric analysis was undertaken, and dosimetric results were compared between patients with and without seed migration. Results: A total of 19,236 seeds were implanted in 267 patients. Overall, 91 of 19,236 (0.47%) seeds migrated in 66 of 267 (24.7%) patients. Sixty-nine (0.36%) seeds migrated to the chest in 54 (20.2%) patients. Seven (0.036%) seeds migrated to the abdomen in six (2.2%) patients. Fifteen (0.078%) seeds migrated to the pelvis in 15 (5.6%) patients. Seed migration occurred predominantly within two weeks after seed implantation. None of the 66 patients had symptoms related to the migrated seeds. Postimplant prostate D90 was not significantly different between patients with and without seed migration. Conclusions: We showed the incidence of seed migration to the chest, abdomen and pelvis. Seed migration did not have a significant effect on postimplant prostate D90.
Background: Seed migration is a well-recognized event that occurs after transperineal interstitial prostate brachytherapy, and it is observed more often with loose seeds than with linked seeds [1-5]. It is well known that the most frequent site of seed migration is the chest. The American Brachytherapy Society has advised that a chest radiograph should be undertaken at the first follow-up visit to scan the lungs for embolized seeds [6]. Consequently, the incidence of seed migration to the chest has been well reported [1,2,4,5,7-19]. However, documentation of the incidence of seed migration to the abdomen and pelvis is rare. Rare cases of seed migration to a coronary artery, the right ventricle, the liver, the kidneys, Batson's vertebral venous plexus, and the left testicular vein have been reported [20-26]. However, it has never been fully determined whether seed migration to these locations is really rare. The primary purposes of the present study were to determine the incidence of seed migration not only to the chest, but also to the abdomen and the pelvis at our institution and to identify the exact location of the seeds that had migrated to the abdomen and pelvis with computed tomography (CT). The secondary purpose was to determine the impact of seed migration on postimplant dosimetry. Authors' contributions: AS and JN designed the study, collected the data, interpreted the results, performed the statistical analysis, drafted the manuscript, and oversaw the project's completion. EK, HN, RM, SS, YS and RK participated in data acquisition. MO and NS contributed to data analysis. All authors read and approved the manuscript.
Immediate processing of erotic stimuli in paedophilia and controls: a case control study.
23510246
Most neuroimaging studies investigating sexual arousal in paedophilia used erotic pictures together with a blocked fMRI design and long stimulus presentation time. While this approach allows the detection of sexual arousal, it does not enable the assessment of the immediate processing of erotically salient stimuli. Our study aimed to identify neuronal networks related to the immediate processing of erotic stimuli in heterosexual male paedophiles and healthy age-matched controls.
BACKGROUND
We presented erotic pictures of prepubescent children and adults in an event-related fMRI design to eight paedophilic subjects and age-matched controls.
METHODS
Erotic pictures of females elicited more activation in the right temporal lobe, the right parietal lobe and both occipital lobes, and erotic pictures of children activated the right dorsomedial prefrontal cortex in both groups. An interaction of sex, age and group was present in the right anterolateral orbitofrontal cortex.
RESULTS
Our event-related study design confirmed that erotic pictures activate some of the brain regions already known to be involved in the processing of erotic pictures when these are presented in blocks. In addition, it revealed that erotic pictures of prepubescent children activate brain regions critical for choosing response strategies in both groups, and that erotically salient stimuli selectively activate a brain region in paedophilic subjects that had previously been attributed to reward and punishment, and that had been shown to be implicated in the suppression of erotic response and in deception.
CONCLUSIONS
[ "Adult", "Brain", "Case-Control Studies", "Emotions", "Erotica", "Evoked Potentials", "Humans", "Image Processing, Computer-Assisted", "Magnetic Resonance Imaging", "Male", "Middle Aged", "Pedophilia", "Photic Stimulation" ]
3610191
Background
Paedophilia is a matter of great public interest and often evokes emotional discussions in the mass media. It is difficult to estimate the prevalence of paedophilia, as both paedophilic offenders and victims often prefer not to identify themselves. From surveys of the general population, we know that approximately 12% of men and 17% of women report experiencing sexual abuse in their childhood [1]. These figures underline the potential impact of this disorder on society. According to DSM-IV-TR, paedophilia is defined by two main criteria: firstly, persistent sexual fantasies, urges or behaviour involving sexual activity with prepubescent children, and secondly, that the person has acted on these urges, or that the urges or fantasies caused marked distress or interpersonal difficulties [2]. Findings from neuropsychological studies on paedophilia are heterogeneous. Whilst a lower IQ [3], educational difficulties [4] and a higher rate of left-handedness [5] indicate rather generalised brain dysfunction, other studies suggest more specific alterations such as focal weaknesses in frontal-executive [6,7] and/or temporal-verbal [8] skills, or even a more deliberate response style and greater self-monitoring in paedophilic subjects [9,10]. Furthermore, research on personality traits in paedophilia has revealed various findings, such as impaired interpersonal functioning, impaired self-awareness, disinhibitory traits, sociopathy and a propensity for cognitive distortions [11]. In summary, our understanding of the neurobiology of paedophilia remains incomplete. In recent years, researchers have increasingly addressed the neuronal underpinnings of sexual arousal in order to better understand sexual behaviour. Stoléru et al. [12,13] and Redouté et al. [14,15] were the first to propose a neurophenomenological model that disentangled the cognitive, emotional, motivational and physiological components of sexual arousal. Two recently published quantitative meta-analyses on sexual cue reactivity [16,17] underline that distinctive subcomponents of sexual arousal can be reliably localised by neuroimaging techniques. Based on the fMRI results on sexual arousal in healthy controls, it was natural to transfer that approach to paedophilic subjects in order to better understand the cognitive and behavioural processes in paedophilia. In heterosexual paedophilia, fMRI studies have demonstrated altered processing of erotic visual stimuli in the dorsomedial prefrontal cortex (DMPFC), amygdala and hippocampus [18]. The latter study also reports that erotic pictures of adults induced a stronger activation of the hypothalamus, the periaqueductal grey and the dorsolateral prefrontal cortex (DLPFC) in healthy controls than in heterosexual paedophilic subjects. While this study highlights differences between controls and paedophilic subjects, other studies report similar activation patterns for the respective erotic conditions. For example, Schiffer et al. [19] show that erotic pictures of girls induced the same activation of limbic structures such as the amygdala, the cingulate gyrus or the hippocampus in heterosexual paedophilia as erotic pictures of women did in a control group. However, this study also found an additional activation of the DLPFC in the heterosexual paedophilic group alone. Research has also been carried out on homosexual paedophilia. Again, in both paedophilic and control subjects, a common network relating to sexual activation was found, comprising the occipitotemporal and prefrontal cortex [20].
However, only paedophilic subjects showed activated subcortical regions such as the thalamus, the globus pallidus and the striatum during erotic stimulation. Another study on homosexual paedophilia describes differential activation of the right orbitofrontal cortex [21]. Most of the above-mentioned studies indicate that prefrontal brain regions might be related to paedophilia, but the results and structures involved differ from study to study. Considering these rather heterogeneous results, the findings from a recent study by Ponseti et al. [22] appear surprising. These authors propose an fMRI-based classification procedure for heterosexual and homosexual non-paedophilic and paedophilic subjects with 95% accuracy. Unlike previous studies, these authors describe activation in regions known to be involved in the processing of erotic stimuli, such as the caudate nucleus, cingulate cortex, insula, fusiform gyrus, temporal cortex, occipital cortex, thalamus, amygdala and cerebellum, but not in the prefrontal cortex. Except for the study by Walter et al. [18], all of the above-mentioned studies that describe prefrontal involvement in paedophilia used relatively long presentation times, ranging from 19.2 to 38.5 s, and blocked fMRI designs. One critical shortcoming of long presentation times is that different cognitive processes, such as sustained attention or self-referential processing, might also take place and somehow interfere with the target process. Earlier fMRI studies on sexual arousal in normal subjects showed that with shorter presentation times of sexually arousing pictures, specific neuronal networks and brain processes can be addressed. Static presentation for 8.75 s, as used by Moulier et al. [23], demonstrated for example that the initiation and low levels of penile tumescence are controlled by frontal, parietal, insular and cingulate cortical areas. In accordance with these results, Ferretti et al. [24] showed that longer presentation times (> 30 s) induce sexual arousal and penile erection, whilst shorter presentation times (< 3 s) induce arousal without erection. A stimulation time of 5 s even made it possible to distinguish specific sexual emotional effects from more general emotional effects [25]. A study comparing both types of fMRI design for the visual processing of erotic stimuli in healthy volunteers [26] provides strong support for the use of short stimulation times, suggesting an event-related design. The authors proposed that event-related designs might be an alternative to blocked designs if the core interest is the detection of networks associated with the immediate processing of erotic stimuli. Besides being more useful for the detection of these networks, short presentation times also offer other potential benefits, as long presentation times are even more problematic in paedophilia than in healthy controls. Paedophilic subjects are mostly recruited after coming into conflict with the legal authorities as a result of their sexual preferences, and this may increase a tendency to suppress erotic arousal or dissimulate sexual attraction to the pictures of children presented. The latter point has been used to explain some of the differences between ratings of erotic salience and brain response observed in some of the aforementioned studies [19,21]. As presentation time becomes shorter, deliberate manipulation may become more difficult and the immediate processes following the perception of target stimuli could become visible.
In our study, we aimed to identify the neuronal networks related to the immediate processing of erotic stimuli in heterosexual paedophilic subjects recruited from a forensic outpatient setting. Based on previous neuropsychological research and recent fMRI studies, we predicted differential activation in the prefrontal cortex in paedophilic subjects.
Methods
Subjects
Behavioural and functional magnetic resonance imaging data were acquired from 16 male right-handed subjects. All participants in this study were adults. Written informed consent was obtained from all subjects before participation in the study. The subjects in the paedophilic group (N = 8) were recruited from an outpatient cognitive behavioural group therapy at the department of forensic psychiatry in Basel, Switzerland. The heterosexual paedophilic subjects fulfilled DSM-IV-TR criteria for exclusive-type heterosexual paedophilia (attracted only to prepubescent children), not limited to incest. Three of the subjects had previously molested prepubescent children; the other five had been convicted for possession of large quantities of explicit internet child pornography. Each participant's sexual orientation and preference for prepubescent erotic stimuli was assessed in a clinical interview and with the Multiphasic Sex Inventory (MSI) [27], and additionally verified against the clinical record and the court file. For all subjects, neither the interview nor the record indicated other comorbid paraphilias. None of the subjects had admitted paedophilia prior to contact with the legal authorities. Heterosexual control subjects (N = 8) were recruited via an advert on the University Hospital notice board. A clinical evaluation of all subjects revealed no other psychiatric, neurological or medical conditions. The local Ethics Committee of the University of Basel, Switzerland approved the study.
Stimulation and paradigm
The study design was adopted from Bühler et al. [26]. Visual stimuli were generated and displayed on a personal computer using the Neurobehavioral Systems (NBS) software package Presentation® 12.2. Stimuli were presented via fMRI-compatible digital video goggles (NordicNeuroLab, Bergen, Norway). While inside the scanner, subjects were presented with various types of pictures in an event-related design.
We presented 10 pictures from each category (erotic pictures of boys, girls, men, women, or neutral control pictures) for 750 ms with a jittered interstimulus interval that varied randomly between 10 and 20 s in steps of full seconds (Figure 1).
Figure 1. FMRI paradigm. We presented erotic pictures of adults (women/men) and children (girls/boys) interspersed with neutral control pictures. Pictures from each category were presented for 750 ms with a jittered interstimulus interval ranging from 10–20 s.
In each picture, only one person of one of the above-mentioned categories was displayed in bathing clothes in front of a plain-coloured background. All pictures showed the face and the complete body. Secondary sexual characteristics were clearly visible in the pictures showing adults but clearly absent in the pictures depicting prepubescent children. We avoided photographs of adolescents and applied a biological rather than a legal cut-off to make the pictures of adults easily distinguishable from those of children. Before inclusion into the paradigm, the study subjects rated the pictures, and only the pictures with the highest ratings on a visual analogue scale were included. Neutral pictures showed simple objects, e.g. a small boot, in front of the same background.
In order to control attention during the passive fMRI task we used a previously applied procedure to assure attentive observation of the presented pictures [28]: before the scanning procedure, subjects were instructed to observe the pictures attentively and were told that we would check their attention by asking them to identify the pictures after the scanning session. Immediately after the fMRI session we presented the slides that they had seen, interspersed with similar pictures that had not been presented during the experiment. Subjects then had to decide which pictures they had seen during the fMRI stimulation.
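As a rough illustration of this kind of randomised, jittered event-related schedule, the following sketch generates onset times for the five picture categories. The category names, the 10 pictures per category, the 750 ms display duration and the 10–20 s interstimulus interval in full-second steps are taken from the description above; the random seed, the ordering strategy and the output format are illustrative assumptions, not the authors' actual Presentation® script.

import numpy as np

rng = np.random.default_rng(seed=42)

CATEGORIES = ["boys", "girls", "men", "women", "neutral"]
N_PER_CATEGORY = 10
STIM_DURATION = 0.75             # seconds on screen
ISI_CHOICES = np.arange(10, 21)  # jitter in full-second steps, 10-20 s

# One trial per picture, randomly ordered across categories.
trials = [cat for cat in CATEGORIES for _ in range(N_PER_CATEGORY)]
rng.shuffle(trials)

# Assign onsets: each trial starts after the previous stimulus plus a random ISI.
onsets = []
t = 0.0
for _ in trials:
    onsets.append(t)
    t += STIM_DURATION + rng.choice(ISI_CHOICES)

for onset, cat in zip(onsets, trials):
    print(f"{onset:8.2f} s  {cat}")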
FMRI acquisition and analysis
Images were acquired using a 3 T MRI scanner (Verio, Siemens Healthcare, Erlangen, Germany) equipped with a standard radio-frequency head coil. First, a T1-weighted high-resolution data set covering the whole brain was acquired using a three-dimensional MPRAGE (magnetization-prepared rapid acquisition gradient echo) sequence with a repetition time (TR) of 2.00 s, an isotropic spatial resolution of 1.0 mm3 and an echo time (TE) of 3.4 ms. T2*-weighted functional images were recorded using echoplanar imaging with a TR of 2500 ms, an isotropic spatial resolution of 3×3×3 mm3 and a TE of 30 ms (FoV 228; matrix 76; spacing between slices: 0.51 mm; interslice time: 69 ms). Altogether, 152 volumes with 36 image slices with a thickness of 3 mm were obtained.
Image pre-processing and statistical analysis
Image time-series were processed using the BrainVoyager QX 2.3.0 software package (Brain Innovation, Maastricht, The Netherlands). Pre-processing included head motion correction, slice scan time correction, temporal high-pass filtering and removal of linear trends. Using the results of the image registration with anatomical scans, the functional image time-series were then warped into Talairach space and resampled into 3 mm isotropic voxel time-series. Normalized images were smoothed using a 6.00 mm isotropic Gaussian kernel.
For first-level analysis, a General Linear Model (GLM) analysis was applied with separate subject z-normalized predictors fitted to z-normalized voxel time-courses from all data sets. The orthogonal predictors of interest in the design matrix were: “boys”, “girls”, “men”, “women” and “neutral”.
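For readers who want to see the shape of such a first-level model, here is a minimal NumPy/SciPy sketch of building HRF-convolved predictors for the five picture categories and fitting one voxel time course by least squares. The study itself used BrainVoyager QX; the double-gamma HRF parameters, the synthetic onsets and voxel data, and all variable names below are assumptions made purely for illustration, not the authors' pipeline.

import numpy as np
from scipy.stats import gamma

TR, N_VOLS = 2.5, 152
CONDITIONS = ["boys", "girls", "men", "women", "neutral"]

def canonical_hrf(t):
    """Double-gamma haemodynamic response function (SPM-like shape, assumed parameters)."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

# Hypothetical onsets (s) per condition; in practice these come from the paradigm log.
rng = np.random.default_rng(0)
onsets = {c: np.sort(rng.uniform(0, N_VOLS * TR - 30, size=10)) for c in CONDITIONS}

dt = 0.1                                   # fine time grid for convolution
t_fine = np.arange(0, N_VOLS * TR, dt)
hrf = canonical_hrf(np.arange(0, 30, dt))

design = np.zeros((N_VOLS, len(CONDITIONS) + 1))
design[:, -1] = 1.0                        # constant term
for j, cond in enumerate(CONDITIONS):
    boxcar = np.zeros_like(t_fine)
    for onset in onsets[cond]:
        boxcar[(t_fine >= onset) & (t_fine < onset + 0.75)] = 1.0   # 750 ms events
    regressor = np.convolve(boxcar, hrf)[: t_fine.size]
    design[:, j] = regressor[:: int(round(TR / dt))][:N_VOLS]       # resample to TR grid

# z-normalise the predictors and fit one synthetic voxel time course by least squares.
design[:, :-1] = (design[:, :-1] - design[:, :-1].mean(0)) / design[:, :-1].std(0)
voxel_ts = design @ np.array([0.5, 1.2, 0.1, 0.3, 0.0, 10.0]) + rng.normal(0, 1, N_VOLS)
betas, *_ = np.linalg.lstsq(design, voxel_ts, rcond=None)
print(dict(zip(CONDITIONS + ["constant"], np.round(betas, 2))))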
For second-level analysis, the GLM fits (beta weights) were assessed in a 3-way ANOVA with two within-subject factors (sex: female vs. male; age: child vs. adult) and one between-subject factor, group (paedophilia vs. control). We assessed the main effects of, and the interaction between, these factors at the voxel level and obtained maps thresholded at the 5% significance level (corrected for multiple comparisons). To obtain this correction, an uncorrected statistical threshold was initially applied to each of the F-maps at p = 0.005, and a cluster-level threshold correction procedure based on Monte Carlo simulations [29] was applied in order to determine the minimum cluster size below which any activation was to be discarded. This procedure yielded a minimum cluster size of 10 voxels (corresponding to a minimum cluster extent of 280 mm3) for the F-map of the interaction of all factors, 12 voxels (297 mm3) for the factor age and 19 voxels (482 mm3) for the factor sex.
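A hedged sketch of the idea behind such a Monte Carlo cluster-extent correction is shown below: generate spatially smoothed noise volumes, threshold them at the voxelwise p-value, and find the cluster size that is exceeded by chance in fewer than 5% of null maps. Only the p = 0.005 voxelwise threshold comes from the description above; the grid size, smoothness, iteration count and all implementation details are illustrative assumptions, not the procedure implemented in the authors' software [29].

import numpy as np
from scipy import ndimage
from scipy.stats import norm

rng = np.random.default_rng(1)
SHAPE = (40, 48, 40)          # ~3 mm voxels covering the brain (assumption)
FWHM_VOX = 2.0                # spatial smoothness in voxels (assumption)
P_VOXEL = 0.005               # uncorrected voxelwise threshold from the text
N_SIM = 200                   # Monte Carlo iterations (use >= 1000 in practice)

sigma = FWHM_VOX / 2.3548     # convert FWHM to Gaussian sigma
z_thresh = norm.isf(P_VOXEL)  # one-sided z threshold

max_cluster_sizes = []
for _ in range(N_SIM):
    noise = ndimage.gaussian_filter(rng.standard_normal(SHAPE), sigma)
    noise /= noise.std()                           # re-normalise after smoothing
    labels, n_clusters = ndimage.label(noise > z_thresh)
    if n_clusters:
        sizes = np.bincount(labels.ravel())[1:]    # drop background label 0
        max_cluster_sizes.append(sizes.max())
    else:
        max_cluster_sizes.append(0)

k_min = int(np.percentile(max_cluster_sizes, 95)) + 1
print(f"minimum cluster extent for corrected p < 0.05: {k_min} voxels")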
The anatomical labelling of active clusters was defined on the basis of the peak voxel coordinates, using the Talairach Daemon (http://www.talairach.org) [30,31]. Active clusters resulting from the voxel-level ANOVA were further defined as regions of interest (ROIs). Data from those ROIs were extracted and analysed in order to understand the direction of the effects of the ANOVA. To that end, we calculated linear contrasts for the different factors, i.e. sex (female > male), age (child > adult) and the interaction of the factors (girl > woman), separately for both groups. In order to further specify the interaction of all factors (sex, age, group), we extracted the BOLD %-signal-change from the respective ROI and averaged it separately for the groups.
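The following sketch illustrates this ROI follow-up step: ROI-averaged condition estimates per subject, a girl > woman contrast, and a between-group comparison. The group sizes (8 vs. 8) and the contrast are taken from the text; the numerical values are randomly generated placeholders and the variable names are assumptions, so the output bears no relation to the actual results.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
CONDITIONS = ["boys", "girls", "men", "women"]

# Hypothetical ROI-averaged condition estimates (e.g. beta weights or %-signal-change)
# for 8 paedophilic and 8 control subjects; real values would come from the ROI data.
paedo = {c: rng.normal(0.3 if c == "girls" else 0.0, 0.2, 8) for c in CONDITIONS}
ctrl = {c: rng.normal(-0.1 if c == "girls" else 0.0, 0.2, 8) for c in CONDITIONS}

# Linear contrast "girl > woman", computed per subject and compared between groups.
contrast_paedo = paedo["girls"] - paedo["women"]
contrast_ctrl = ctrl["girls"] - ctrl["women"]

t, p = stats.ttest_ind(contrast_paedo, contrast_ctrl)
print(f"girl > woman, paedophilic vs. control group: t = {t:.2f}, p = {p:.3f}")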
Results
Demographics
Sexual orientation in the control group was heterosexual with a preference for mature women. In the paedophilia group all subjects were heterosexual and attracted exclusively to prepubescent stimuli. The mean age of the paedophilic subjects was 48.25 years (standard deviation: 9.15) and the mean IQ 116 (12.12). In the control group, the mean age was 46.25 years (8.35) and the mean IQ 124.88 (13.91). Student t-tests showed no significant difference between the groups (Table 1). In the paedophilic group, 75.4% of the pictures were correctly identified immediately after the scanning session; in the control group, 77.5% were correctly identified. Again, a group difference was excluded by t-test.
Table 1. Mean age, IQ and rate of correctly identified pictures (behavioural control) with standard deviation (SD) and t-test for both groups. None of the differences reached statistical significance.
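As a minimal illustration of these group comparisons, the sketch below runs an independent-samples t-test on simulated age data drawn from the reported group means and SDs; it is not the study data, and the random draws will not reproduce the reported statistics.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated ages drawn from the reported group means and SDs (8 subjects per group).
age_paedophilic = rng.normal(48.25, 9.15, 8)
age_control = rng.normal(46.25, 8.35, 8)

t, p = stats.ttest_ind(age_paedophilic, age_control)
print(f"age: t = {t:.2f}, p = {p:.3f}")  # expected outcome: no significant group difference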
FMRI
Voxel level ANOVA
We found a significant interaction of the factors sex, age and group (p < 0.05, corrected) in the middle frontal gyrus (Figure 2, Table 2). We furthermore found significant main effects for the factors sex and age outside of the middle frontal gyrus, i.e. in brain areas that did not show a significant interaction of all factors.
Figure 2. Whole brain ANOVA. Active voxels showed a main effect of the factor sex (female/male) in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere the inferior occipital gyrus displayed active voxels (A). Furthermore, we found bilateral activation in the cerebellum. The factor age (adult/child) showed an activation of the right dorsomedial prefrontal cortex (B). In the right lateral orbitofrontal cortex a significant interaction of sex, age and group was detected (C). Significant voxels in A–C represent effects at a corrected p-level of p < 0.05. X, Y, Z correspond to the Talairach coordinates.
Table 2. Regions that showed a significant effect in the voxel-wise ANOVA. X, Y, Z correspond to the Talairach coordinates (BA = Brodmann area; R = right, L = left).
A significant main effect of the factor sex (female, male) was present at a corrected threshold of p < 0.05 in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere, the inferior occipital gyrus was also active. We also found bilateral activation in the cerebellum. The factor age revealed an activation of the right dorsomedial prefrontal cortex (p < 0.05, corrected). For the factor group, no significant activation was found at the latter threshold; again, this finding is restricted to brain areas that did not show a significant interaction of all factors.
ROI analysis
The linear contrasts of these ROIs confirmed that, in both groups, the main effect of the factor sex was based on a higher activation for female pictures and the main effect of the factor age on a stronger activation for pictures of children. ROI analysis clarified that, in the ROI that became significant for the interaction of all factors, only paedophilic subjects showed activation in the girl condition, while controls showed a deactivation in this condition (Figure 3A).
Figure 3. ROI analysis (A). The female > male contrast level for the ROIs extracted from the ANOVA of the main effect of sex displayed a greater activation for female pictures in all active ROIs (rTL = right temporal lobe, rPL = right parietal lobe, rOL = right occipital lobe, lOL = left occipital lobe). The child > adult contrast confirmed that the main effect of age is based on a stronger activation for pictures of children in both groups (rDMPFC = right dorsomedial prefrontal cortex). ROI analysis of the cluster showing an interaction of all factors revealed a significant difference (* = p < 0.05) for the contrast girl > woman, with activation in paedophilic subjects and deactivation in control subjects (rlOFC = right lateral orbitofrontal cortex). Event-related averaging (B). BOLD %-signal-change in the rlOFC illustrated a strong activation due to erotic pictures of girls only in paedophilia.
Event-related averaging of the BOLD %-signal-change for all conditions and both groups in that ROI illustrated a strong activation due to erotic pictures of girls in paedophilic subjects that was absent for all other picture conditions in both groups (Figure 3B).
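A small sketch of what such event-related averaging of the %-signal-change amounts to is given below: cut a peristimulus window around each onset of one condition, express it as percent change relative to a pre-stimulus baseline, and average across events. TR = 2.5 s is taken from the acquisition section; the ROI time course, the onsets, the window length and the function name are invented placeholders.

import numpy as np

TR = 2.5
rng = np.random.default_rng(4)
roi_ts = 100 + rng.normal(0, 0.5, 152)                      # mean ROI signal per volume (synthetic)
onsets_girls = np.array([20.0, 55.0, 90.0, 130.0, 170.0])   # hypothetical onsets (s) of one condition

def event_related_average(ts, onsets, tr, pre=1, post=6):
    """Average % signal change in a window of `pre` volumes before and `post` volumes after onset."""
    epochs = []
    for onset in onsets:
        i = int(round(onset / tr))
        if i - pre < 0 or i + post >= ts.size:
            continue
        window = ts[i - pre : i + post + 1]
        baseline = window[:pre].mean()
        epochs.append(100.0 * (window - baseline) / baseline)
    return np.mean(epochs, axis=0)

avg = event_related_average(roi_ts, onsets_girls, TR)
print(np.round(avg, 3))   # one value per volume from -1 TR to +6 TR around onset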
Conclusion
Our fMRI study confirms the findings of previous studies concerning the processing of visual erotic stimuli. Furthermore, we were able to demonstrate that the dorsomedial prefrontal cortex is specifically engaged in processing erotic pictures of children regardless of the study group. This activation appears to represent the evaluation of relevance, which is apparently not linked to sexual orientation per se but rather to the study context. In addition, we have made some new findings regarding the immediate processing of visual erotic stimulation in paedophilia, as we found an immediate activation of the brain regions involved in evaluating emotional salience and reward and engaged in the regulation of emotional responses. This activation might be related to the suppression or concealment of erotic responses.
[ "Background", "Subjects", "Stimulation and paradigm", "FMRI acquisition and analysis", "Image pre-processing and statistical analysis", "Demographics", "FMRI", "Voxel level ANOVA", "ROI analysis", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Paedophilia is a matter of great public interest and often evokes emotional discussions in mass media. It is difficult to estimate the prevalence of paedophilia as both paedophilic offenders and victims often prefer not to identify themselves. From surveys of the general population we know that approximately 12% of men and 17% of women report experiencing sexual abuse in their childhood [1]. These figures underline the potential impact of this disease on society. According to DSM IV-TR, paedophilia is defined by two main criteria: firstly persistent sexual fantasies, urges or behaviour involving sexual activity with prepubescent children and secondly that these people have acted on these urges, or that these urges or fantasies caused marked distress or interpersonal difficulties [2].\nFindings from neuropsychological studies on paedophilia are heterogeneous. Whilst a lower IQ [3], educational difficulties [4] and a higher rate of left-handedness [5] indicate rather generalised brain dysfunction, other studies suggest more specific alterations like focal weaknesses in frontal-executive [6,7] and/or temporal-verbal [8] skills or even a more deliberate response style and greater self-monitoring in paedophilic subjects [9,10]. Furthermore revealed research on personality traits in paedophilia various findings like impaired interpersonal functions, impaired self-awareness, disinhibitory traits, sociopathy and a propensity for cognitive distortions [11]. In summary, it can be stated that the neurobiology of paedophilia remains incomplete.\nIn recent years, researchers have increasingly addressed the neuronal underpinnings of sexual arousal in order to better understand sexual behaviour. Stoléru et al. [12,13] and Redouté et al. [14,15] were the first to propose a neurophenomological model that disentangled the cognitive, emotional, motivational and physiological components of sexual arousal. Two recently published quantitative meta-analyses on sexual cue reactivity [16,17] underline that distinctive subcomponents of sexual arousal can be reliably localised by neuroimaging techniques.\nBased on the fMRI results about sexual arousal in healthy controls it was natural to transfer that approach to paedophilic subjects in order to better understand the cognitive and behavioural processes in paedophilia. In heterosexual paedophilia fMRI studies have demonstrated the altered processing of erotic visual stimuli in the dorsomedial prefrontal cortex (DMPFC), amygdala and hippocampus [18]. The latter study also reports that erotic pictures of adults induced a stronger activation of the hypothalmus, the periaqueductal grey and the dorsolateral prefrontal cortex (DLPFC) in healthy controls than in heterosexual paedophilic subjects. While this study highlights differences between controls and paedophilic subjects other studies report similar activation patterns for the respective erotic conditions. For example Schiffer et al. [19] show that erotic pictures of girls induced the same activation of limbic structures such as the amygdala, the cingulate gyrus or the hippocampus in heterosexual paedophilia as erotic pictures of women in a control group. However this study also found an additional activation of the DLPFC in the heterosexual paedophilic group alone. Research has also been carried out on homosexual paedophilia. Again in both paedophilic and control subjects, a common network relating to sexual activation was found, comprising the occipitotemporal and prefrontal cortex [20]. 
However, only paedophilic subjects showed activated subcortical regions such as the thalamus, the globus pallidus and the striatum during erotic stimulation. Another study on homosexual paedophilia describes differential activation of the right orbitofrontal cortex [21].\nMost of the above-mentioned studies indicate that prefrontal brain regions might be related to paedophilia, but the results and structures involved differ from study to study. Considering these rather heterogeneous results, the findings from a recent study by Ponseti et al. [22] appear surprising. These authors propose an fMRI based classification procedure for heterosexual and homosexual non-paedophilic and paedophilic subjects with 95% accuracy. Unlike previous studies, these authors describe activation in regions known to be involved in the processing of erotic stimuli such as the caudate nucleus, cingulated cortex, insula, fusiform gyrus, temporal cortex, occipital cortex, thalamus, amygdala and cerebellum but not in the prefrontal cortex.\nExcept for the study of Walter et al. [18] all of the above-mentioned studies that describe a prefrontal involvement in paedophilia used relatively long presentation times ranging from 19.2 - 38.5 s and blocked fMRI designs.\nOne critical shortcoming of long presentation times is that different cognitive processes, such as sustained attention or self-referential processes might also take place and somehow interfere with the target process. Earlier fMRI studies on sexual arousal in normal subjects showed that with shorter presentation times of sexually arousing pictures specific neuronal networks and brain processes can be addressed. Static presentation for 8.75 s as used by Moulier et al. [23] demonstrated for example that the initiation and low levels of penile tumescence are controlled by frontal, parietal, insular and cingular cortical areas. In accordance with these results, Ferretti et al. [24] showed that longer presentation times (> 30 s) induce sexual arousal and penile erection whilst shorter presentation times (< 3 s) induce arousal without erection. A stimulation time of 5 s even allowed to distinguish specific sexual emotional effects from more general emotional effects [25]. A study comparing both types of fMRI-designs in the visual processing of erotic stimuli in healthy volunteers [26] provides strong support for the use of fast stimulation time, suggesting an event related design. The authors proposed that event related designs might be an alternative to blocked designs if the core interest is the detection of networks associated with the immediate processing of erotic stimuli.\nBesides being more useful for the detection of these networks, short presentation times also offer other potential benefits as long presentation times in paedophilia are even more problematic than in healthy controls. Paedophilic subjects are mostly recruited after coming into conflict with the legal authorities as a result of their sexual preferences and this may increase a tendency to suppress erotic arousal or dissimulate sexual attraction to the pictures of children presented. The latter point has been used to explain some of the differences between ratings of erotic salience and brain response observed in some of the aforementioned studies [19,21]. 
As presentation time becomes shorter, deliberate manipulation may become more difficult and the immediate processes following the perception of target stimuli could become visible.\nIn our study we aim to identify the neuronal networks related to immediate processing of erotic stimuli in heterosexual paedophilic subjects recruited from a forensic outpatient setting. Based on previous neuropsychological research and recent fMRI studies we predicted a differential activation in the prefrontal cortex in paedophilic subjects.", "Behavioural and functional magnetic resonance imaging data were acquired from 16 male right-handed subjects. All participants in this study were adults. Written informed consent was obtained from all subjects before participation in the study.\nThe subjects in the paedophilic (N = 8) group were recruited from an outpatient cognitive behavioural group therapy at the department of forensic psychiatry in Basel, Switzerland. The heterosexual paedophilic subjects fulfilled DSM-IV-TR criteria for exclusive type (attracted only to prepubescent children) not limited to incest heterosexual paedophilia. Three of the subjects had previously molested prepubescent children, the other five had been convicted because of the possession of large quantities of explicit internet child pornography. Each participant’s sexual orientation and preference for prepubescent erotic stimuli was assessed in a clinical interview, the Multiphasic Sex Inventory (MSI) [27] and additionally verified by the clinical record and the court file. For all subjects, neither the interview nor the record indicated other comorbid paraphilias. None of the subjects had admitted paedophilia prior to the contact with the legal authorities.\nHeterosexual control subjects (N = 8) were recruited from an advert on the University Hospital notice board. A clinical evaluation of all subjects revealed no other psychiatric, neurological or medical conditions. The local Ethics Committee of the University of Basel, Switzerland approved the study.", "The study design was adopted from Bühler et al. [26]. Visual stimuli were generated and displayed on a personal computer using the Neurobehavioral Systems (NBS) software package Presentation® 12.2. Stimuli were presented via fMRI-compatible digital video goggles (NordicNeuroLab, Bergen, Norway).\nWhile inside the scanner, subjects were presented with various types of pictures in an event related design. We presented 10 pictures from each category (erotic pictures of boys, girls, men, women or neutral control pictures) for 750 ms with a jittered interstimulus interval that varied randomly between 10 – 20 s in steps of full seconds (Figure 1).\nFMRI paradigm. We presented erotic pictures of adults (women/men) and children (girls/boys) interspersed with neutral control pictures. Pictures from each category were presented for 750 ms with a jittered interstimulus interval ranging from 10–20 s.\nIn each picture, only one person of one of the above-mentioned categories was displayed in bathing clothes in front of a plain-coloured background. All pictures showed faces and the complete corpus. Secondary sexual characteristics were clearly visible in the pictures showing adults but were clearly absent in the pictures depicting prepubescent children. We avoided photographs of adolescents and applied a biological rather than a legal cut off to make the pictures of adults easily distinguishable from those of children. 
Before inclusion into the paradigm rated the study subjects the pictures and only the pictures with the highest ratings on a visual analogue scale were included. Neutral pictures showed simple objects like e.g. a small boot in front of the same background.\nIn order to control attention during the passive fMRI task we used a previously applied procedure to assure attentive observation of the presented pictures [28]: Before the scanning procedure, subjects were instructed to attentively observe the pictures and told that we would check their attention by asking them to identify the pictures after the scanning session. Immediately after the fMRI session we presented the slides that they had seen interspersed with similar pictures, which had not been presented during the experiment. Subjects then had to decide which pictures they had seen during the fMRI stimulation.", "Images were acquired using a 3 T MRI scanner (Verio, Siemens Healthcare, Erlangen, Germany) equipped with a standard radio frequency head coil. First a T1-weighted high-resolution data set that covered the whole brain was acquired using a three-dimensional MPRAGE (magnetization-prepared rapid acquisition gradient echo) sequence with a repetition time (TR) of 2.00, an isotropic spatial resolution of 1.0 mm3, and an echo time (TE) of 3.4 ms. T2* weighted functional images were recorded using echoplanar imaging with a TR of 2500 ms, an isotropic spatial resolution of 3×3×3 mm3 and a TE of 30 ms (FoV 228; matrix 76; spacing between slices: 0.51 mm; interslice time: 69 ms). Altogether 152 volumes with 36 image slices with a thickness of 3 mm were obtained.", "Image time-series were processed using the BrainVoyager QX 2.3.0 software package (Brain Innovation, Maastricht, The Netherlands). Pre-processing included head motion correction, slice scan time correction, temporal high pass filtering and removal of linear trends. Using the results of the image registration with anatomical scans, the functional image-time series were then warped into Talairach space and resampled into 3 mm isotropic voxel time-series. Normalized images were smoothed using a 6.00 mm isotropic Gaussian kernel.\nFor first-level analysis, a General Linear Model (GLM) analysis was applied with separate subject z-normalized predictors fitted to z-normalized voxel time-courses from all data sets. The orthogonal predictors of interest in the design matrix were: “boys”, “girls”, “men”, “women” and “neutral”. For second-level analysis, the GLM fits (beta weights) were assessed in a 3-way-ANOVA with two within subject factors (sex: female vs. male; age: child vs. adult) and one between subject factor group (paedophilia vs. control). We assessed the main effects of, and the interaction between, these factors at the voxel level and obtained maps that thresholded at the significance level of 5% (corrected for multiple comparison). To obtain this correction, an uncorrected statistical threshold was initially applied to each of the F-maps at p = 0.005, and a cluster-level threshold correction procedure based on Monte Carlo simulations [29] was applied in order to determine the minimum cluster size below which any activation was to be discarded. 
This procedure yielded a minimum cluster size of 10 voxels (corresponding to a minimum cluster extent of 280 mm) for the F-maps of the interaction of all factors, 12 voxel (297 mm) for the factor age and 19 voxels (482 mm) for the factor sex.\nThe anatomical labelling of active clusters was defined on the basis of the peak voxel coordinates, using the Talairach Daemon (http://www.talairach.org) [30,31].\nActive clusters resulting from the voxel-level ANOVA were further defined as Regions of Interests (ROI). Data from those ROIs were extracted and analysed in order to understand the direction of the effects of the ANOVA. To that end, we calculated linear contrasts for the different factors, i.e. sex (female > male), age (child > adult) and the interaction of the factors (girl > woman) separately for both groups. In order to further specify the interaction of all factors (sex, age, group) we extracted BOLD %-signal-change from the respective ROI and averaged the BOLD%-signal-change separately for the groups.", "Sexual orientation in the control group was heterosexual with a preference for mature women. In the paedophilia group all subjects were heterosexual and attracted exclusively by prepubescent stimuli. The mean age of the paedophilic subjects was 48.25 (standard deviation: 9.15) years and mean IQ 116 (12.12). In the control groups, the mean age was 46.25 (8.35) years and IQ 124.88 (13.91). Student t-tests showed no significant difference between these groups (Table 1). In the paedophilic group 75.4% of the pictures were correctly identified immediately after the scanning session. In the control group 77.5% of the pictures were correctly identified. Again a group difference was excluded by t-test.\nMean age, IQ and rate of the correctly indentified pictures (behavioural control) with standard deviation (SD) and T-test for both groups\nNone of the results reaches statistically significant differences.", " Voxel level ANOVA We found a significant interaction of the factors sex, age and group (p < 0.05, corrected) in the middle frontal gyrus (Figure 2, Table 2). We furthermore found significant main effects for the factors sex and age outside of the middle frontal gyrus, i. e. in brain areas that didn`t show a significant interaction of all factors.\nWhole brain ANOVA. Active voxels showed a main effect of the factor sex (female/male) in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere the inferior occipital gyrus displayed active voxels (A). Furthermore we found bilateral activation in the cerebellum. The factor age (adult/child) showed an activation of the right dorsomedial prefrontal cortex (B). In the right lateral orbitofrontal cortex a significant interaction of sex, age and group was detected (C). Significant voxels in A-C represent effects at a corrected p-level of p < 0.05. X, Y, Z corresponds to the Tailarach coordinates.\nRegions that showed a significant effect in the voxel wise ANOVA\nX, Y, Z corresponds to the Tailarach coordinates (BA = Brodmann area; R = right, L = left).\nA significant main effect of the factor sex (female, male) was present at a corrected threshold of p < 0.05 in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere, the inferior occipital gyrus was also active. 
We also found bilateral activation in the cerebellum.\nThe factor age revealed an activation of the right dorsomedial prefrontal cortex (p < 0.05, corrected).\nFor the factor group, no significant activation was found at the latter threshold, but again this finding is restricted to brain areas that did not show a significant interaction of all factors.\nWe found a significant interaction of the factors sex, age and group (p < 0.05, corrected) in the middle frontal gyrus (Figure 2, Table 2). We furthermore found significant main effects for the factors sex and age outside of the middle frontal gyrus, i. e. in brain areas that didn`t show a significant interaction of all factors.\nWhole brain ANOVA. Active voxels showed a main effect of the factor sex (female/male) in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere the inferior occipital gyrus displayed active voxels (A). Furthermore we found bilateral activation in the cerebellum. The factor age (adult/child) showed an activation of the right dorsomedial prefrontal cortex (B). In the right lateral orbitofrontal cortex a significant interaction of sex, age and group was detected (C). Significant voxels in A-C represent effects at a corrected p-level of p < 0.05. X, Y, Z corresponds to the Tailarach coordinates.\nRegions that showed a significant effect in the voxel wise ANOVA\nX, Y, Z corresponds to the Tailarach coordinates (BA = Brodmann area; R = right, L = left).\nA significant main effect of the factor sex (female, male) was present at a corrected threshold of p < 0.05 in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere, the inferior occipital gyrus was also active. We also found bilateral activation in the cerebellum.\nThe factor age revealed an activation of the right dorsomedial prefrontal cortex (p < 0.05, corrected).\nFor the factor group, no significant activation was found at the latter threshold, but again this finding is restricted to brain areas that did not show a significant interaction of all factors.", "We found a significant interaction of the factors sex, age and group (p < 0.05, corrected) in the middle frontal gyrus (Figure 2, Table 2). We furthermore found significant main effects for the factors sex and age outside of the middle frontal gyrus, i. e. in brain areas that didn`t show a significant interaction of all factors.\nWhole brain ANOVA. Active voxels showed a main effect of the factor sex (female/male) in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere the inferior occipital gyrus displayed active voxels (A). Furthermore we found bilateral activation in the cerebellum. The factor age (adult/child) showed an activation of the right dorsomedial prefrontal cortex (B). In the right lateral orbitofrontal cortex a significant interaction of sex, age and group was detected (C). Significant voxels in A-C represent effects at a corrected p-level of p < 0.05. 
Abbreviations
OFC: Orbitofrontal cortex; DMPFC: Dorsomedial prefrontal cortex; DLPFC: Dorsolateral prefrontal cortex; MPRAGE: Magnetization-prepared rapid acquisition gradient echo; GLM: General linear model; ROI: Region of interest.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
BH, RM, MG, VD and ES conceived and designed the study. BH, NH, RM, PL and MK realized the study design and acquired the data. BH, FE and NH performed the data analyses. BH and FE interpreted the data and wrote the manuscript. BH, FE, NH, PL, MK, RM, VD, ES and MG contributed to the drafting and revision of the manuscript for important intellectual content. All authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-244X/13/88/prepub
[ "Paedophilia is a matter of great public interest and often evokes emotional discussions in mass media. It is difficult to estimate the prevalence of paedophilia as both paedophilic offenders and victims often prefer not to identify themselves. From surveys of the general population we know that approximately 12% of men and 17% of women report experiencing sexual abuse in their childhood [1]. These figures underline the potential impact of this disease on society. According to DSM IV-TR, paedophilia is defined by two main criteria: firstly persistent sexual fantasies, urges or behaviour involving sexual activity with prepubescent children and secondly that these people have acted on these urges, or that these urges or fantasies caused marked distress or interpersonal difficulties [2].\nFindings from neuropsychological studies on paedophilia are heterogeneous. Whilst a lower IQ [3], educational difficulties [4] and a higher rate of left-handedness [5] indicate rather generalised brain dysfunction, other studies suggest more specific alterations like focal weaknesses in frontal-executive [6,7] and/or temporal-verbal [8] skills or even a more deliberate response style and greater self-monitoring in paedophilic subjects [9,10]. Furthermore revealed research on personality traits in paedophilia various findings like impaired interpersonal functions, impaired self-awareness, disinhibitory traits, sociopathy and a propensity for cognitive distortions [11]. In summary, it can be stated that the neurobiology of paedophilia remains incomplete.\nIn recent years, researchers have increasingly addressed the neuronal underpinnings of sexual arousal in order to better understand sexual behaviour. Stoléru et al. [12,13] and Redouté et al. [14,15] were the first to propose a neurophenomological model that disentangled the cognitive, emotional, motivational and physiological components of sexual arousal. Two recently published quantitative meta-analyses on sexual cue reactivity [16,17] underline that distinctive subcomponents of sexual arousal can be reliably localised by neuroimaging techniques.\nBased on the fMRI results about sexual arousal in healthy controls it was natural to transfer that approach to paedophilic subjects in order to better understand the cognitive and behavioural processes in paedophilia. In heterosexual paedophilia fMRI studies have demonstrated the altered processing of erotic visual stimuli in the dorsomedial prefrontal cortex (DMPFC), amygdala and hippocampus [18]. The latter study also reports that erotic pictures of adults induced a stronger activation of the hypothalmus, the periaqueductal grey and the dorsolateral prefrontal cortex (DLPFC) in healthy controls than in heterosexual paedophilic subjects. While this study highlights differences between controls and paedophilic subjects other studies report similar activation patterns for the respective erotic conditions. For example Schiffer et al. [19] show that erotic pictures of girls induced the same activation of limbic structures such as the amygdala, the cingulate gyrus or the hippocampus in heterosexual paedophilia as erotic pictures of women in a control group. However this study also found an additional activation of the DLPFC in the heterosexual paedophilic group alone. Research has also been carried out on homosexual paedophilia. Again in both paedophilic and control subjects, a common network relating to sexual activation was found, comprising the occipitotemporal and prefrontal cortex [20]. 
However, only paedophilic subjects showed activated subcortical regions such as the thalamus, the globus pallidus and the striatum during erotic stimulation. Another study on homosexual paedophilia describes differential activation of the right orbitofrontal cortex [21].\nMost of the above-mentioned studies indicate that prefrontal brain regions might be related to paedophilia, but the results and structures involved differ from study to study. Considering these rather heterogeneous results, the findings from a recent study by Ponseti et al. [22] appear surprising. These authors propose an fMRI based classification procedure for heterosexual and homosexual non-paedophilic and paedophilic subjects with 95% accuracy. Unlike previous studies, these authors describe activation in regions known to be involved in the processing of erotic stimuli such as the caudate nucleus, cingulated cortex, insula, fusiform gyrus, temporal cortex, occipital cortex, thalamus, amygdala and cerebellum but not in the prefrontal cortex.\nExcept for the study of Walter et al. [18] all of the above-mentioned studies that describe a prefrontal involvement in paedophilia used relatively long presentation times ranging from 19.2 - 38.5 s and blocked fMRI designs.\nOne critical shortcoming of long presentation times is that different cognitive processes, such as sustained attention or self-referential processes might also take place and somehow interfere with the target process. Earlier fMRI studies on sexual arousal in normal subjects showed that with shorter presentation times of sexually arousing pictures specific neuronal networks and brain processes can be addressed. Static presentation for 8.75 s as used by Moulier et al. [23] demonstrated for example that the initiation and low levels of penile tumescence are controlled by frontal, parietal, insular and cingular cortical areas. In accordance with these results, Ferretti et al. [24] showed that longer presentation times (> 30 s) induce sexual arousal and penile erection whilst shorter presentation times (< 3 s) induce arousal without erection. A stimulation time of 5 s even allowed to distinguish specific sexual emotional effects from more general emotional effects [25]. A study comparing both types of fMRI-designs in the visual processing of erotic stimuli in healthy volunteers [26] provides strong support for the use of fast stimulation time, suggesting an event related design. The authors proposed that event related designs might be an alternative to blocked designs if the core interest is the detection of networks associated with the immediate processing of erotic stimuli.\nBesides being more useful for the detection of these networks, short presentation times also offer other potential benefits as long presentation times in paedophilia are even more problematic than in healthy controls. Paedophilic subjects are mostly recruited after coming into conflict with the legal authorities as a result of their sexual preferences and this may increase a tendency to suppress erotic arousal or dissimulate sexual attraction to the pictures of children presented. The latter point has been used to explain some of the differences between ratings of erotic salience and brain response observed in some of the aforementioned studies [19,21]. 
As presentation time becomes shorter, deliberate manipulation may become more difficult and the immediate processes following the perception of target stimuli could become visible.\nIn our study we aim to identify the neuronal networks related to immediate processing of erotic stimuli in heterosexual paedophilic subjects recruited from a forensic outpatient setting. Based on previous neuropsychological research and recent fMRI studies we predicted a differential activation in the prefrontal cortex in paedophilic subjects.", " Subjects Behavioural and functional magnetic resonance imaging data were acquired from 16 male right-handed subjects. All participants in this study were adults. Written informed consent was obtained from all subjects before participation in the study.\nThe subjects in the paedophilic (N = 8) group were recruited from an outpatient cognitive behavioural group therapy at the department of forensic psychiatry in Basel, Switzerland. The heterosexual paedophilic subjects fulfilled DSM-IV-TR criteria for exclusive type (attracted only to prepubescent children) not limited to incest heterosexual paedophilia. Three of the subjects had previously molested prepubescent children, the other five had been convicted because of the possession of large quantities of explicit internet child pornography. Each participant’s sexual orientation and preference for prepubescent erotic stimuli was assessed in a clinical interview, the Multiphasic Sex Inventory (MSI) [27] and additionally verified by the clinical record and the court file. For all subjects, neither the interview nor the record indicated other comorbid paraphilias. None of the subjects had admitted paedophilia prior to the contact with the legal authorities.\nHeterosexual control subjects (N = 8) were recruited from an advert on the University Hospital notice board. A clinical evaluation of all subjects revealed no other psychiatric, neurological or medical conditions. The local Ethics Committee of the University of Basel, Switzerland approved the study.\nBehavioural and functional magnetic resonance imaging data were acquired from 16 male right-handed subjects. All participants in this study were adults. Written informed consent was obtained from all subjects before participation in the study.\nThe subjects in the paedophilic (N = 8) group were recruited from an outpatient cognitive behavioural group therapy at the department of forensic psychiatry in Basel, Switzerland. The heterosexual paedophilic subjects fulfilled DSM-IV-TR criteria for exclusive type (attracted only to prepubescent children) not limited to incest heterosexual paedophilia. Three of the subjects had previously molested prepubescent children, the other five had been convicted because of the possession of large quantities of explicit internet child pornography. Each participant’s sexual orientation and preference for prepubescent erotic stimuli was assessed in a clinical interview, the Multiphasic Sex Inventory (MSI) [27] and additionally verified by the clinical record and the court file. For all subjects, neither the interview nor the record indicated other comorbid paraphilias. None of the subjects had admitted paedophilia prior to the contact with the legal authorities.\nHeterosexual control subjects (N = 8) were recruited from an advert on the University Hospital notice board. A clinical evaluation of all subjects revealed no other psychiatric, neurological or medical conditions. 
\n Stimulation and paradigm The study design was adopted from Bühler et al. [26]. Visual stimuli were generated and displayed on a personal computer using the Neurobehavioral Systems (NBS) software package Presentation® 12.2. Stimuli were presented via fMRI-compatible digital video goggles (NordicNeuroLab, Bergen, Norway).\nWhile inside the scanner, subjects were presented with various types of pictures in an event related design. We presented 10 pictures from each category (erotic pictures of boys, girls, men, women or neutral control pictures) for 750 ms with a jittered interstimulus interval that varied randomly between 10 – 20 s in steps of full seconds (Figure 1).\nFMRI paradigm. We presented erotic pictures of adults (women/men) and children (girls/boys) interspersed with neutral control pictures. Pictures from each category were presented for 750 ms with a jittered interstimulus interval ranging from 10–20 s.\nIn each picture, only one person of one of the above-mentioned categories was displayed in bathing clothes in front of a plain-coloured background. All pictures showed faces and the complete corpus. Secondary sexual characteristics were clearly visible in the pictures showing adults but were clearly absent in the pictures depicting prepubescent children. We avoided photographs of adolescents and applied a biological rather than a legal cut off to make the pictures of adults easily distinguishable from those of children. Before inclusion into the paradigm, the study subjects rated the pictures and only the pictures with the highest ratings on a visual analogue scale were included. Neutral pictures showed simple objects, e.g. a small boot, in front of the same background.\nIn order to control attention during the passive fMRI task we used a previously applied procedure to assure attentive observation of the presented pictures [28]: Before the scanning procedure, subjects were instructed to attentively observe the pictures and told that we would check their attention by asking them to identify the pictures after the scanning session. Immediately after the fMRI session we presented the slides that they had seen interspersed with similar pictures, which had not been presented during the experiment. Subjects then had to decide which pictures they had seen during the fMRI stimulation.\n FMRI acquisition and analysis Images were acquired using a 3 T MRI scanner (Verio, Siemens Healthcare, Erlangen, Germany) equipped with a standard radio frequency head coil. First a T1-weighted high-resolution data set that covered the whole brain was acquired using a three-dimensional MPRAGE (magnetization-prepared rapid acquisition gradient echo) sequence with a repetition time (TR) of 2.00 s, an isotropic spatial resolution of 1.0 mm3, and an echo time (TE) of 3.4 ms. T2* weighted functional images were recorded using echoplanar imaging with a TR of 2500 ms, an isotropic spatial resolution of 3×3×3 mm3 and a TE of 30 ms (FoV 228; matrix 76; spacing between slices: 0.51 mm; interslice time: 69 ms). Altogether 152 volumes with 36 image slices with a thickness of 3 mm were obtained.\n Image pre-processing and statistical analysis Image time-series were processed using the BrainVoyager QX 2.3.0 software package (Brain Innovation, Maastricht, The Netherlands). Pre-processing included head motion correction, slice scan time correction, temporal high pass filtering and removal of linear trends. Using the results of the image registration with anatomical scans, the functional image-time series were then warped into Talairach space and resampled into 3 mm isotropic voxel time-series. Normalized images were smoothed using a 6.00 mm isotropic Gaussian kernel.\nFor first-level analysis, a General Linear Model (GLM) analysis was applied with separate subject z-normalized predictors fitted to z-normalized voxel time-courses from all data sets. 
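To give a concrete picture of this kind of first-level analysis, the following is a minimal illustrative sketch in Python. It assumes a canonical double-gamma HRF, the TR and volume count reported above, and invented variable names (e.g. onsets_by_condition); it is not the BrainVoyager implementation actually used in the study.

```python
# Illustrative sketch (not the authors' BrainVoyager pipeline): build an
# HRF-convolved design matrix for the five picture conditions and fit a
# voxel-wise GLM by ordinary least squares.
import numpy as np
from scipy.stats import gamma

TR, n_vols = 2.5, 152                         # seconds per volume, volumes per run
frame_times = np.arange(n_vols) * TR
conditions = ["boys", "girls", "men", "women", "neutral"]

def hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
    """Simple double-gamma haemodynamic response function (arbitrary scaling)."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)

def regressor(onsets, duration=0.75, dt=0.1):
    """Convolve a boxcar of 750 ms events with the HRF and sample at the TR."""
    t_hi = np.arange(0, n_vols * TR, dt)
    box = np.zeros_like(t_hi)
    for onset in onsets:
        box[(t_hi >= onset) & (t_hi < onset + duration)] = 1.0
    conv = np.convolve(box, hrf(np.arange(0, 32, dt)))[: len(t_hi)]
    return np.interp(frame_times, t_hi, conv)

def design_matrix(onsets_by_condition):
    """onsets_by_condition: dict mapping condition name -> list of onset times (s)."""
    X = np.column_stack([regressor(onsets_by_condition[c]) for c in conditions])
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # z-normalise the predictors
    return np.column_stack([X, np.ones(n_vols)])  # add a constant term

def fit_glm(X, Y):
    """Y: (n_vols, n_voxels) z-normalised voxel time courses; returns beta weights."""
    return np.linalg.lstsq(X, Y, rcond=None)[0]
```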
The orthogonal predictors of interest in the design matrix were: “boys”, “girls”, “men”, “women” and “neutral”. For second-level analysis, the GLM fits (beta weights) were assessed in a 3-way-ANOVA with two within subject factors (sex: female vs. male; age: child vs. adult) and one between subject factor group (paedophilia vs. control). We assessed the main effects of, and the interaction between, these factors at the voxel level and obtained maps thresholded at the significance level of 5% (corrected for multiple comparisons). To obtain this correction, an uncorrected statistical threshold was initially applied to each of the F-maps at p = 0.005, and a cluster-level threshold correction procedure based on Monte Carlo simulations [29] was applied in order to determine the minimum cluster size below which any activation was to be discarded. This procedure yielded a minimum cluster size of 10 voxels (corresponding to a minimum cluster extent of 280 mm3) for the F-maps of the interaction of all factors, 12 voxels (297 mm3) for the factor age and 19 voxels (482 mm3) for the factor sex.\nThe anatomical labelling of active clusters was defined on the basis of the peak voxel coordinates, using the Talairach Daemon (http://www.talairach.org) [30,31].\nActive clusters resulting from the voxel-level ANOVA were further defined as Regions of Interest (ROI). Data from those ROIs were extracted and analysed in order to understand the direction of the effects of the ANOVA. To that end, we calculated linear contrasts for the different factors, i.e. sex (female > male), age (child > adult) and the interaction of the factors (girl > woman) separately for both groups. In order to further specify the interaction of all factors (sex, age, group) we extracted BOLD %-signal-change from the respective ROI and averaged the BOLD %-signal-change separately for the groups.", "Behavioural and functional magnetic resonance imaging data were acquired from 16 male right-handed subjects. All participants in this study were adults. Written informed consent was obtained from all subjects before participation in the study.\nThe subjects in the paedophilic (N = 8) group were recruited from an outpatient cognitive behavioural group therapy at the department of forensic psychiatry in Basel, Switzerland. The heterosexual paedophilic subjects fulfilled DSM-IV-TR criteria for exclusive type (attracted only to prepubescent children) not limited to incest heterosexual paedophilia. Three of the subjects had previously molested prepubescent children, the other five had been convicted because of the possession of large quantities of explicit internet child pornography. Each participant’s sexual orientation and preference for prepubescent erotic stimuli was assessed in a clinical interview, the Multiphasic Sex Inventory (MSI) [27] and additionally verified by the clinical record and the court file. For all subjects, neither the interview nor the record indicated other comorbid paraphilias. None of the subjects had admitted paedophilia prior to the contact with the legal authorities.\nHeterosexual control subjects (N = 8) were recruited from an advert on the University Hospital notice board. A clinical evaluation of all subjects revealed no other psychiatric, neurological or medical conditions. The local Ethics Committee of the University of Basel, Switzerland approved the study.", "The study design was adopted from Bühler et al. [26]. Visual stimuli were generated and displayed on a personal computer using the Neurobehavioral Systems (NBS) software package Presentation® 12.2. Stimuli were presented via fMRI-compatible digital video goggles (NordicNeuroLab, Bergen, Norway).\nWhile inside the scanner, subjects were presented with various types of pictures in an event related design. We presented 10 pictures from each category (erotic pictures of boys, girls, men, women or neutral control pictures) for 750 ms with a jittered interstimulus interval that varied randomly between 10 – 20 s in steps of full seconds (Figure 1).\nFMRI paradigm. We presented erotic pictures of adults (women/men) and children (girls/boys) interspersed with neutral control pictures. 
Pictures from each category were presented for 750 ms with a jittered interstimulus interval ranging from 10–20 s.\nIn each picture, only one person of one of the above-mentioned categories was displayed in bathing clothes in front of a plain-coloured background. All pictures showed faces and the complete corpus. Secondary sexual characteristics were clearly visible in the pictures showing adults but were clearly absent in the pictures depicting prepubescent children. We avoided photographs of adolescents and applied a biological rather than a legal cut off to make the pictures of adults easily distinguishable from those of children. Before inclusion into the paradigm, the study subjects rated the pictures and only the pictures with the highest ratings on a visual analogue scale were included. Neutral pictures showed simple objects, e.g. a small boot, in front of the same background.\nIn order to control attention during the passive fMRI task we used a previously applied procedure to assure attentive observation of the presented pictures [28]: Before the scanning procedure, subjects were instructed to attentively observe the pictures and told that we would check their attention by asking them to identify the pictures after the scanning session. Immediately after the fMRI session we presented the slides that they had seen interspersed with similar pictures, which had not been presented during the experiment. Subjects then had to decide which pictures they had seen during the fMRI stimulation.", "Images were acquired using a 3 T MRI scanner (Verio, Siemens Healthcare, Erlangen, Germany) equipped with a standard radio frequency head coil. First a T1-weighted high-resolution data set that covered the whole brain was acquired using a three-dimensional MPRAGE (magnetization-prepared rapid acquisition gradient echo) sequence with a repetition time (TR) of 2.00 s, an isotropic spatial resolution of 1.0 mm3, and an echo time (TE) of 3.4 ms. T2* weighted functional images were recorded using echoplanar imaging with a TR of 2500 ms, an isotropic spatial resolution of 3×3×3 mm3 and a TE of 30 ms (FoV 228; matrix 76; spacing between slices: 0.51 mm; interslice time: 69 ms). Altogether 152 volumes with 36 image slices with a thickness of 3 mm were obtained.", "Image time-series were processed using the BrainVoyager QX 2.3.0 software package (Brain Innovation, Maastricht, The Netherlands). Pre-processing included head motion correction, slice scan time correction, temporal high pass filtering and removal of linear trends. Using the results of the image registration with anatomical scans, the functional image-time series were then warped into Talairach space and resampled into 3 mm isotropic voxel time-series. Normalized images were smoothed using a 6.00 mm isotropic Gaussian kernel.\nFor first-level analysis, a General Linear Model (GLM) analysis was applied with separate subject z-normalized predictors fitted to z-normalized voxel time-courses from all data sets. The orthogonal predictors of interest in the design matrix were: “boys”, “girls”, “men”, “women” and “neutral”. For second-level analysis, the GLM fits (beta weights) were assessed in a 3-way-ANOVA with two within subject factors (sex: female vs. male; age: child vs. adult) and one between subject factor group (paedophilia vs. control). We assessed the main effects of, and the interaction between, these factors at the voxel level and obtained maps thresholded at the significance level of 5% (corrected for multiple comparisons). 
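The cluster-extent correction used to obtain this corrected threshold (described in the following passage) can be illustrated roughly with a Monte Carlo simulation in the spirit of [29]. The sketch below uses placeholder parameters and simulated smoothed noise z-maps rather than the study's actual F-maps, so it is only an approximation of the procedure.

```python
# Rough illustrative sketch of a Monte Carlo cluster-extent threshold:
# simulate smoothed noise volumes, threshold at the uncorrected voxel level,
# record the largest surviving cluster per iteration, and take a high
# percentile of that null distribution as the minimum cluster size.
import numpy as np
from scipy import ndimage

def min_cluster_size(shape=(60, 72, 60), fwhm_vox=2.0, n_iter=1000,
                     alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    sigma = fwhm_vox / 2.3548          # convert FWHM (in voxels) to Gaussian sigma
    z_crit = 2.576                      # z cutoff roughly matching one-sided p = 0.005
    max_sizes = []
    for _ in range(n_iter):
        noise = ndimage.gaussian_filter(rng.standard_normal(shape), sigma)
        noise /= noise.std()            # re-standardise after smoothing
        labels, n_clusters = ndimage.label(noise > z_crit)
        if n_clusters:
            sizes = np.bincount(labels.ravel())[1:]
            max_sizes.append(int(sizes.max()))
        else:
            max_sizes.append(0)
    # Cluster size exceeded by chance in only alpha of the simulated maps.
    return int(np.percentile(max_sizes, 100 * (1 - alpha)))
```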
To obtain this correction, an uncorrected statistical threshold was initially applied to each of the F-maps at p = 0.005, and a cluster-level threshold correction procedure based on Monte Carlo simulations [29] was applied in order to determine the minimum cluster size below which any activation was to be discarded. This procedure yielded a minimum cluster size of 10 voxels (corresponding to a minimum cluster extent of 280 mm3) for the F-maps of the interaction of all factors, 12 voxels (297 mm3) for the factor age and 19 voxels (482 mm3) for the factor sex.\nThe anatomical labelling of active clusters was defined on the basis of the peak voxel coordinates, using the Talairach Daemon (http://www.talairach.org) [30,31].\nActive clusters resulting from the voxel-level ANOVA were further defined as Regions of Interest (ROI). Data from those ROIs were extracted and analysed in order to understand the direction of the effects of the ANOVA. To that end, we calculated linear contrasts for the different factors, i.e. sex (female > male), age (child > adult) and the interaction of the factors (girl > woman) separately for both groups. In order to further specify the interaction of all factors (sex, age, group) we extracted BOLD %-signal-change from the respective ROI and averaged the BOLD %-signal-change separately for the groups.", " Demographics Sexual orientation in the control group was heterosexual with a preference for mature women. In the paedophilia group all subjects were heterosexual and attracted exclusively by prepubescent stimuli. The mean age of the paedophilic subjects was 48.25 (standard deviation: 9.15) years and mean IQ 116 (12.12). In the control group, the mean age was 46.25 (8.35) years and IQ 124.88 (13.91). Student t-tests showed no significant difference between these groups (Table 1). In the paedophilic group 75.4% of the pictures were correctly identified immediately after the scanning session. In the control group 77.5% of the pictures were correctly identified. Again, t-tests showed no significant group difference.\nMean age, IQ and rate of correctly identified pictures (behavioural control) with standard deviation (SD) and t-test for both groups\nNone of the comparisons reached statistical significance.\n FMRI Voxel level ANOVA We found a significant interaction of the factors sex, age and group (p < 0.05, corrected) in the middle frontal gyrus (Figure 2, Table 2). We furthermore found significant main effects for the factors sex and age outside of the middle frontal gyrus, i.e. in brain areas that did not show a significant interaction of all factors.\nWhole brain ANOVA. Active voxels showed a main effect of the factor sex (female/male) in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere the inferior occipital gyrus displayed active voxels (A). Furthermore we found bilateral activation in the cerebellum. The factor age (adult/child) showed an activation of the right dorsomedial prefrontal cortex (B). In the right lateral orbitofrontal cortex a significant interaction of sex, age and group was detected (C). Significant voxels in A-C represent effects at a corrected p-level of p < 0.05. X, Y, Z correspond to the Talairach coordinates.\nRegions that showed a significant effect in the voxel-wise ANOVA\nX, Y, Z correspond to the Talairach coordinates (BA = Brodmann area; R = right, L = left).\nA significant main effect of the factor sex (female, male) was present at a corrected threshold of p < 0.05 in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere, the inferior occipital gyrus was also active. We also found bilateral activation in the cerebellum.\nThe factor age revealed an activation of the right dorsomedial prefrontal cortex (p < 0.05, corrected).\nFor the factor group, no significant activation was found at the latter threshold, but again this finding is restricted to brain areas that did not show a significant interaction of all factors.\n ROI analysis The linear contrasts of these ROIs confirmed that, in both groups, the main effect of the factor sex was based on a higher activation due to female pictures and the main effect of the factor age on a stronger activation for pictures of children. ROI analysis clarified that in the ROI that became significant for the interaction of all factors, only paedophilic subjects showed activation in the girl condition while controls showed a deactivation in this condition (Figure 3A).\nROI analysis (A). The ROI female > male contrast level for the ROIs extracted from the ANOVA of the main effect of sex displayed a greater activation for female pictures in all active ROIs (rTL = right temporal lobe, rPL = right parietal lobe, rOL = right occipital lobe, lOL = left occipital lobe). The child > adult contrast confirmed that the main effect of age is based on a stronger activation due to pictures of children in both groups (rDMPFC = right dorsomedial prefrontal cortex). ROI analysis of the cluster showing an interaction of all factors revealed a significant difference (* = p < 0.05) for the contrast girl > woman, with activation in paedophilic subjects and deactivation in control subjects (rlOFC = right lateral orbitofrontal cortex). Event related averaging (B). BOLD %-signal-change in the rlOFC only illustrated a strong activation due to erotic pictures of girls in paedophilia.\nEvent related averaging of BOLD %-signal-change for all conditions and all groups in that ROI illustrated a strong activation due to erotic pictures of girls in paedophilic subjects that was absent for all other picture conditions in both groups (Figure 3B).", "Sexual orientation in the control group was heterosexual with a preference for mature women. In the paedophilia group all subjects were heterosexual and attracted exclusively by prepubescent stimuli. The mean age of the paedophilic subjects was 48.25 (standard deviation: 9.15) years and mean IQ 116 (12.12). In the control group, the mean age was 46.25 (8.35) years and IQ 124.88 (13.91). Student t-tests showed no significant difference between these groups (Table 1). In the paedophilic group 75.4% of the pictures were correctly identified immediately after the scanning session. In the control group 77.5% of the pictures were correctly identified. Again, t-tests showed no significant group difference.\nMean age, IQ and rate of correctly identified pictures (behavioural control) with standard deviation (SD) and t-test for both groups\nNone of the comparisons reached statistical significance.", " Voxel level ANOVA We found a significant interaction of the factors sex, age and group (p < 0.05, corrected) in the middle frontal gyrus (Figure 2, Table 2). We furthermore found significant main effects for the factors sex and age outside of the middle frontal gyrus, i.e. in brain areas that did not show a significant interaction of all factors.\nWhole brain ANOVA. Active voxels showed a main effect of the factor sex (female/male) in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere the inferior occipital gyrus displayed active voxels (A). Furthermore we found bilateral activation in the cerebellum. The factor age (adult/child) showed an activation of the right dorsomedial prefrontal cortex (B). In the right lateral orbitofrontal cortex a significant interaction of sex, age and group was detected (C). Significant voxels in A-C represent effects at a corrected p-level of p < 0.05. X, Y, Z correspond to the Talairach coordinates.\nRegions that showed a significant effect in the voxel-wise ANOVA\nX, Y, Z correspond to the Talairach coordinates (BA = Brodmann area; R = right, L = left).\nA significant main effect of the factor sex (female, male) was present at a corrected threshold of p < 0.05 in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere, the inferior occipital gyrus was also active. 
We also found bilateral activation in the cerebellum.\nThe factor age revealed an activation of the right dorsomedial prefrontal cortex (p < 0.05, corrected).\nFor the factor group, no significant activation was found at the latter threshold, but again this finding is restricted to brain areas that did not show a significant interaction of all factors.", "We found a significant interaction of the factors sex, age and group (p < 0.05, corrected) in the middle frontal gyrus (Figure 2, Table 2). We furthermore found significant main effects for the factors sex and age outside of the middle frontal gyrus, i.e. in brain areas that did not show a significant interaction of all factors.\nWhole brain ANOVA. Active voxels showed a main effect of the factor sex (female/male) in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere the inferior occipital gyrus displayed active voxels (A). Furthermore we found bilateral activation in the cerebellum. The factor age (adult/child) showed an activation of the right dorsomedial prefrontal cortex (B). In the right lateral orbitofrontal cortex a significant interaction of sex, age and group was detected (C). Significant voxels in A-C represent effects at a corrected p-level of p < 0.05. 
X, Y, Z correspond to the Talairach coordinates.\nRegions that showed a significant effect in the voxel-wise ANOVA\nX, Y, Z correspond to the Talairach coordinates (BA = Brodmann area; R = right, L = left).\nA significant main effect of the factor sex (female, male) was present at a corrected threshold of p < 0.05 in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere, the inferior occipital gyrus was also active. We also found bilateral activation in the cerebellum.\nThe factor age revealed an activation of the right dorsomedial prefrontal cortex (p < 0.05, corrected).\nFor the factor group, no significant activation was found at the latter threshold, but again this finding is restricted to brain areas that did not show a significant interaction of all factors.", "The linear contrasts of these ROIs confirmed that, in both groups, the main effect of the factor sex was based on a higher activation due to female pictures and the main effect of the factor age on a stronger activation for pictures of children. ROI analysis clarified that in the ROI that became significant for the interaction of all factors, only paedophilic subjects showed activation in the girl condition while controls showed a deactivation in this condition (Figure 3A).\nROI analysis (A). The ROI female > male contrast level for the ROIs extracted from the ANOVA of the main effect of sex displayed a greater activation for female pictures in all active ROIs (rTL = right temporal lobe, rPL = right parietal lobe, rOL = right occipital lobe, lOL = left occipital lobe). The child > adult contrast confirmed that the main effect of age is based on a stronger activation due to pictures of children in both groups (rDMPFC = right dorsomedial prefrontal cortex). ROI analysis of the cluster showing an interaction of all factors revealed a significant difference (* = p < 0.05) for the contrast girl > woman, with activation in paedophilic subjects and deactivation in control subjects (rlOFC = right lateral orbitofrontal cortex). Event related averaging (B). BOLD %-signal-change in the rlOFC only illustrated a strong activation due to erotic pictures of girls in paedophilia.\nEvent related averaging of BOLD %-signal-change for all conditions and all groups in that ROI illustrated a strong activation due to erotic pictures of girls in paedophilic subjects that was absent for all other picture conditions in both groups (Figure 3B).", "Visual presentation of erotic pictures has been used to address the underpinnings of erotic stimulation in paedophilia [19-22,28,32]. Results proved heterogeneous, which might partially be explained by the detection of non-specific secondary processes in blocked designs with long stimulus presentation time.\nOur study focuses on the immediate response to erotic pictures. Like previous studies using a short presentation time and an event-related fMRI design [22,26,33] we also found that female erotic pictures elicited more activation in the right temporal lobe, the right parietal lobe, both occipital lobes and the cerebellum in both heterosexual groups. This finding indicates that these regions respond to erotic visual stimulation in an immediate fashion, e.g. activation can be detected even after a short presentation time. It also highlights that this activation appears to be independent of non-specific processes such as sustained attention. 
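As a concrete illustration of the ROI-based event-related averaging of BOLD %-signal change reported above, the following hypothetical sketch shows how such peri-stimulus averages could be computed. The ROI time course and onset lists are invented placeholders; the study itself used BrainVoyager's built-in tools rather than custom code like this.

```python
# Hypothetical sketch of event-related averaging of BOLD %-signal change in
# an ROI (e.g. the right lateral OFC cluster), per condition and subject.
import numpy as np

TR = 2.5  # seconds per volume

def percent_signal_change(roi_ts):
    """Express an ROI-averaged time course as % signal change around its mean."""
    baseline = roi_ts.mean()
    return 100.0 * (roi_ts - baseline) / baseline

def event_related_average(roi_ts, onsets_s, window_vols=8):
    """Average peri-stimulus segments of the ROI time course across events."""
    psc = percent_signal_change(roi_ts)
    segments = []
    for onset in onsets_s:
        start = int(round(onset / TR))
        if start + window_vols <= len(psc):
            segments.append(psc[start:start + window_vols])
    return np.mean(segments, axis=0)

# Example with placeholder data: average response to 'girls' pictures in one
# subject's rlOFC ROI (152 volumes, invented onsets).
roi_ts = np.random.default_rng(0).standard_normal(152) + 100.0
girl_onsets = [12.0, 45.0, 78.0, 110.0, 140.0]
print(event_related_average(roi_ts, girl_onsets))
```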
Furthermore, the BOLD response to female erotic pictures did not differ between the two heterosexual groups.\nPictures of children elicited a differential activation, regardless of sex, in the right DMPFC. This appears to be a very interesting finding, as a crucial role in the critical evaluation of stimuli and in the preparation of response strategies has been attributed to the DMPFC [34]. All study subjects were well aware of the fact that our study was about paedophilia. We would expect that erotic pictures of children would be recognised as a stimulus of special interest in that context. Both groups are also likely to be aware that this stimulus condition is specifically associated with socially unaccepted sexual preference and behaviour. These assumptions might explain why we found a specific activation of the DMPFC in both groups, independent of the sex of the presented pictures. In the DMPFC these pictures are apparently not differentiated by preferred sex or specific erotic salience but are rather perceived as a category requiring specific attention. This assumption fits well with the findings from Walter et al. [35] indicating that activation of the DMPFC is not related to specific processes such as the processing of erotic content, but rather to more general processes like attention. In line with our findings, Ponseti et al. [33] reported that the medial orbitofrontal cortex showed a stronger response to erotic than to neutral pictures, but this response was also independent of individual sexual preference, as it was present in heterosexual male subjects for both male and female erotic stimuli. In contrast to earlier studies showing a group effect and a different activation pattern in the prefrontal cortex after presentation of erotic pictures of children, our findings indicate that in the context of immediate processing of visual erotic stimuli a considerable overlap in the processing of these stimuli is present. Considering that both groups rely on the same neuronal stimulus evaluation and response preparation networks, this finding is perhaps less surprising.\nIn contrast to the earlier studies that presented pictures for more than 30 s, we focus on the immediate response to visual stimuli. Many of the previous studies were conducted with forensic inpatients, while we only included outpatients, and most of our paedophilic study group were convicted due to their consumption of illegal internet pornography. Other studies [36,37] have already demonstrated that social functioning and neurocognitive capabilities are apparently better in these subjects than in forensic inpatients with hands-on delinquency. In this context it has to be stressed that the IQ of the paedophilic participants can be considered extremely high, especially in view of the already mentioned findings from Cantor et al. [3]. Hence we cannot exclude that selection of the study subjects contributed to our findings and that the generalisability of our results might be limited.\nBrain response in the right lateral orbitofrontal cortex (OFC) was shown to interact with the factors sex, preferred age and group. Mean group betas and event related averaging confirmed a specific activation of the right lateral OFC due to erotic pictures of prepubescent girls in heterosexual paedophilic subjects only, whereas all other conditions in the paedophilic group and all conditions in the control group displayed a deactivation of the lateral OFC. 
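To make the ROI contrast underlying this group difference concrete, a hypothetical sketch of a girl > woman contrast computed on ROI beta weights and compared between groups with a two-sample t-test might look as follows. The data layout (one array of five condition betas per subject) is assumed for illustration and is not taken from the study.

```python
# Illustrative sketch (assumed data layout, not the authors' code): compute the
# girl > woman linear contrast on ROI beta weights per subject and compare the
# paedophilic and control groups.
import numpy as np
from scipy import stats

conditions = ["boys", "girls", "men", "women", "neutral"]

def girl_vs_woman(betas_per_subject):
    """betas_per_subject: list of length-5 arrays of ROI beta weights per subject."""
    i_girl, i_woman = conditions.index("girls"), conditions.index("women")
    return np.array([b[i_girl] - b[i_woman] for b in betas_per_subject])

def compare_groups(betas_paedophilic, betas_control):
    c_p = girl_vs_woman(betas_paedophilic)
    c_c = girl_vs_woman(betas_control)
    t, p = stats.ttest_ind(c_p, c_c)
    return {"t": float(t), "p": float(p),
            "mean_paedophilic": float(c_p.mean()),
            "mean_control": float(c_c.mean())}
```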
Further support for a deviant activation in the right lateral OFC can be found in an fMRI study by Walter et al. [18] that revealed a reduced activation for nude stimuli of female adults in paedophilic subjects.\nFMRI research has already confirmed that the OFC can be activated by visual inputs [38]. According to Kringelbach et al. [39] the OFC is a nexus for sensory integration. Phillips et al. [40] stated that the OFC is related to the identification of the emotional significance of a stimulus and to the production of affective states in response to that stimulus. These authors furthermore suggested that, together with the amygdala, insula, ventral striatum and ventral anterior cingulate, the OFC is also involved in the automatic regulation of emotional responses. Hence, different processes such as the modulation of autonomic reactions, learning, prediction, decision making and emotional and reward-related behaviours are processed in the OFC. According to an fMRI study by Sescousse et al. [41] the anterior part of the lateral OFC responds more strongly to more abstract rewards such as monetary gain, while the posterior lateral OFC responds more strongly to more basic rewards, for example erotic stimulation. These findings were in line with a study by Kringelbach and Rolls [42] describing a posterior-anterior distribution of primary (e.g. sexual stimulation) and more abstract secondary (monetary gain) reinforcers. They suggested that the anterior part of the lateral OFC is considered to be a more recent structure, phylogenetically speaking, and might therefore be related to the processing of more abstract reinforcers like money.\nFindings from our study underline the notion that the OFC plays a critical role in evaluating reinforcers that might provoke behavioural changes [42,43]. This is even more interesting, as it has been shown that the anterior lateral OFC responds more to punishment while the anterior medial part of the OFC responds to reward [44,45]. In line with this idea, the specific group effect in the right lateral OFC might indicate that the specific emotional significance of the stimuli was only detected in paedophilic subjects. Erotic pictures of children provoked a brain response resembling that of punishment in paedophilic subjects. According to the literature, the right anterior lateral OFC is not only associated with punishment but also with cues indicating a behavioural shift. This significance for behavioural adaptation has been highlighted in recent reviews on the OFC [39,43,46]. This is interesting, as studies in healthy controls have demonstrated that the lateral OFC became active when subjects tried to suppress erotic responses to visual erotic stimulation [47] or during deception [48]. Our results would therefore indicate that immediately after the presentation of erotic pictures of children, a brain region known to be involved in evaluating emotional salience became active in paedophilic subjects. This activation might be related not only to the regulation of emotional states or autonomic responses but also to behavioural changes like suppression or deception of emotional responses.\nThe findings from our study point to a network perspective of the immediate processing of visual erotic stimulation. This notion would be in line with the so-called orbital and medial prefrontal cortex network [49]. In this network, two likely interacting systems became active or inactive while processing the visual stimuli. 
Even after a short presentation of visual stimuli the DMPFC showed an increased activation in response to the target condition of our study (erotic pictures of children). It is most likely that this finding is related to attentional processes, which are evoked in both study groups. We further found a specific activation due to the respective erotic condition in the anterior lateral OFC in paedophilic subjects. Activation in that region might indicate evaluation of emotional salience and the reward value of that stimulus. Furthermore this activation might indicate the intent to modify behaviour and to dissimulate or deceive erotic responses.\nAssessing the immediate processing of erotic pictures using fMRI might be a useful tool for probing emotional engagement towards the presented stimuli in future studies. These results could also be therapeutically relevant as most cognitive behavioural treatment approaches aim to reduce cognitive distortions and the denial of the implications of paedophilic behaviours and to increase the awareness of problematic attraction to children [50] rather than to change the sexual preference. The often-reported tendency of paedophilic subjects to dissimulate and minimise the dramatic impact on potential victims of child abuse might arise from similar processes, which aim at suppressing emotional distress. Interestingly, Ponseti et al. [22] did not find such an activation in their study sample, which, in contrast to our sample, had deliberately stated their paedophilic sexual deviance.", "Our fMRI study confirms the findings of previous studies concerning the processing of visual erotic stimuli. Furthermore we were able to demonstrate that the dorsomedial prefrontal cortex is specifically engaged in processing erotic pictures of children regardless of the study group. This activation appears to represent the evaluation of relevance, which is apparently not linked to sexual orientation per se but rather to the study context. In addition we have made some new findings regarding the immediate processing of visual erotic stimulation in paedophilia, as we found an immediate activation of the brain regions involved in evaluating emotional salience and reward, and engaged in the regulation of emotional responses. This activation might be related to the suppression or deception of erotic responses.", "OFC: Orbitofrontal cortex; DMPFC: Dorsomedial prefrontal cortex; DLPFC: Dorsolateral prefrontal cortex; MPRAGE: Magnetization-prepared rapid acquisition gradient echo; GLM: General linear model; ROI: Region of interest.", "The authors declare that they have no competing interests.", "BH, RM, MG, VD and ES conceived and designed the study. BH, NH, RM, PL and MK realized the study design and acquired the data. BH, FE, NH performed data analyses. BH and FE interpreted data and wrote the manuscript. BH, FE, NH, PL, MK, RM, VD, ES and MG contributed to the drafting and revision of the manuscript for important intellectual content. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-244X/13/88/prepub\n" ]
[ null, "methods", null, null, null, null, "results", null, null, null, null, "discussion", "conclusions", null, null, null, null ]
[ "Paedophilia", "Event related fMRI", "Erotic stimulation" ]
Background: Paedophilia is a matter of great public interest and often evokes emotional discussions in mass media. It is difficult to estimate the prevalence of paedophilia as both paedophilic offenders and victims often prefer not to identify themselves. From surveys of the general population we know that approximately 12% of men and 17% of women report experiencing sexual abuse in their childhood [1]. These figures underline the potential impact of this disorder on society. According to DSM-IV-TR, paedophilia is defined by two main criteria: firstly persistent sexual fantasies, urges or behaviour involving sexual activity with prepubescent children and secondly that these people have acted on these urges, or that these urges or fantasies caused marked distress or interpersonal difficulties [2]. Findings from neuropsychological studies on paedophilia are heterogeneous. Whilst a lower IQ [3], educational difficulties [4] and a higher rate of left-handedness [5] indicate rather generalised brain dysfunction, other studies suggest more specific alterations such as focal weaknesses in frontal-executive [6,7] and/or temporal-verbal [8] skills or even a more deliberate response style and greater self-monitoring in paedophilic subjects [9,10]. Furthermore, research on personality traits in paedophilia revealed various findings such as impaired interpersonal functions, impaired self-awareness, disinhibitory traits, sociopathy and a propensity for cognitive distortions [11]. In summary, it can be stated that our understanding of the neurobiology of paedophilia remains incomplete. In recent years, researchers have increasingly addressed the neuronal underpinnings of sexual arousal in order to better understand sexual behaviour. Stoléru et al. [12,13] and Redouté et al. [14,15] were the first to propose a neurophenomenological model that disentangled the cognitive, emotional, motivational and physiological components of sexual arousal. Two recently published quantitative meta-analyses on sexual cue reactivity [16,17] underline that distinctive subcomponents of sexual arousal can be reliably localised by neuroimaging techniques. Based on the fMRI results about sexual arousal in healthy controls it was natural to transfer that approach to paedophilic subjects in order to better understand the cognitive and behavioural processes in paedophilia. In heterosexual paedophilia fMRI studies have demonstrated the altered processing of erotic visual stimuli in the dorsomedial prefrontal cortex (DMPFC), amygdala and hippocampus [18]. The latter study also reports that erotic pictures of adults induced a stronger activation of the hypothalamus, the periaqueductal grey and the dorsolateral prefrontal cortex (DLPFC) in healthy controls than in heterosexual paedophilic subjects. While this study highlights differences between controls and paedophilic subjects, other studies report similar activation patterns for the respective erotic conditions. For example Schiffer et al. [19] show that erotic pictures of girls induced the same activation of limbic structures such as the amygdala, the cingulate gyrus or the hippocampus in heterosexual paedophilia as erotic pictures of women in a control group. However this study also found an additional activation of the DLPFC in the heterosexual paedophilic group alone. Research has also been carried out on homosexual paedophilia. Again in both paedophilic and control subjects, a common network relating to sexual activation was found, comprising the occipitotemporal and prefrontal cortex [20]. 
However, only paedophilic subjects showed activated subcortical regions such as the thalamus, the globus pallidus and the striatum during erotic stimulation. Another study on homosexual paedophilia describes differential activation of the right orbitofrontal cortex [21]. Most of the above-mentioned studies indicate that prefrontal brain regions might be related to paedophilia, but the results and structures involved differ from study to study. Considering these rather heterogeneous results, the findings from a recent study by Ponseti et al. [22] appear surprising. These authors propose an fMRI based classification procedure for heterosexual and homosexual non-paedophilic and paedophilic subjects with 95% accuracy. Unlike previous studies, these authors describe activation in regions known to be involved in the processing of erotic stimuli such as the caudate nucleus, cingulated cortex, insula, fusiform gyrus, temporal cortex, occipital cortex, thalamus, amygdala and cerebellum but not in the prefrontal cortex. Except for the study of Walter et al. [18] all of the above-mentioned studies that describe a prefrontal involvement in paedophilia used relatively long presentation times ranging from 19.2 - 38.5 s and blocked fMRI designs. One critical shortcoming of long presentation times is that different cognitive processes, such as sustained attention or self-referential processes might also take place and somehow interfere with the target process. Earlier fMRI studies on sexual arousal in normal subjects showed that with shorter presentation times of sexually arousing pictures specific neuronal networks and brain processes can be addressed. Static presentation for 8.75 s as used by Moulier et al. [23] demonstrated for example that the initiation and low levels of penile tumescence are controlled by frontal, parietal, insular and cingular cortical areas. In accordance with these results, Ferretti et al. [24] showed that longer presentation times (> 30 s) induce sexual arousal and penile erection whilst shorter presentation times (< 3 s) induce arousal without erection. A stimulation time of 5 s even allowed to distinguish specific sexual emotional effects from more general emotional effects [25]. A study comparing both types of fMRI-designs in the visual processing of erotic stimuli in healthy volunteers [26] provides strong support for the use of fast stimulation time, suggesting an event related design. The authors proposed that event related designs might be an alternative to blocked designs if the core interest is the detection of networks associated with the immediate processing of erotic stimuli. Besides being more useful for the detection of these networks, short presentation times also offer other potential benefits as long presentation times in paedophilia are even more problematic than in healthy controls. Paedophilic subjects are mostly recruited after coming into conflict with the legal authorities as a result of their sexual preferences and this may increase a tendency to suppress erotic arousal or dissimulate sexual attraction to the pictures of children presented. The latter point has been used to explain some of the differences between ratings of erotic salience and brain response observed in some of the aforementioned studies [19,21]. As presentation time becomes shorter, deliberate manipulation may become more difficult and the immediate processes following the perception of target stimuli could become visible. 
In our study we aim to identify the neuronal networks related to immediate processing of erotic stimuli in heterosexual paedophilic subjects recruited from a forensic outpatient setting. Based on previous neuropsychological research and recent fMRI studies we predicted a differential activation in the prefrontal cortex in paedophilic subjects. Methods: Subjects Behavioural and functional magnetic resonance imaging data were acquired from 16 male right-handed subjects. All participants in this study were adults. Written informed consent was obtained from all subjects before participation in the study. The subjects in the paedophilic (N = 8) group were recruited from an outpatient cognitive behavioural group therapy at the department of forensic psychiatry in Basel, Switzerland. The heterosexual paedophilic subjects fulfilled DSM-IV-TR criteria for exclusive type (attracted only to prepubescent children) not limited to incest heterosexual paedophilia. Three of the subjects had previously molested prepubescent children, the other five had been convicted because of the possession of large quantities of explicit internet child pornography. Each participant’s sexual orientation and preference for prepubescent erotic stimuli was assessed in a clinical interview, the Multiphasic Sex Inventory (MSI) [27] and additionally verified by the clinical record and the court file. For all subjects, neither the interview nor the record indicated other comorbid paraphilias. None of the subjects had admitted paedophilia prior to the contact with the legal authorities. Heterosexual control subjects (N = 8) were recruited from an advert on the University Hospital notice board. A clinical evaluation of all subjects revealed no other psychiatric, neurological or medical conditions. The local Ethics Committee of the University of Basel, Switzerland approved the study. Stimulation and paradigm The study design was adopted from Bühler et al. [26].
Visual stimuli were generated and displayed on a personal computer using the Neurobehavioral Systems (NBS) software package Presentation® 12.2. Stimuli were presented via fMRI-compatible digital video goggles (NordicNeuroLab, Bergen, Norway). While inside the scanner, subjects were presented with various types of pictures in an event-related design. We presented 10 pictures from each category (erotic pictures of boys, girls, men, women or neutral control pictures) for 750 ms with a jittered interstimulus interval that varied randomly between 10 and 20 s in steps of full seconds (Figure 1). FMRI paradigm. We presented erotic pictures of adults (women/men) and children (girls/boys) interspersed with neutral control pictures. Pictures from each category were presented for 750 ms with a jittered interstimulus interval ranging from 10–20 s. In each picture, only one person of one of the above-mentioned categories was displayed in bathing clothes in front of a plain-coloured background. All pictures showed faces and the complete body. Secondary sexual characteristics were clearly visible in the pictures showing adults but were clearly absent in the pictures depicting prepubescent children. We avoided photographs of adolescents and applied a biological rather than a legal cut-off to make the pictures of adults easily distinguishable from those of children. Before inclusion in the paradigm, the study subjects rated the pictures, and only the pictures with the highest ratings on a visual analogue scale were included. Neutral pictures showed simple objects, e.g. a small boot, in front of the same background. In order to control attention during the passive fMRI task we used a previously applied procedure to ensure attentive observation of the presented pictures [28]: before the scanning procedure, subjects were instructed to attentively observe the pictures and told that we would check their attention by asking them to identify the pictures after the scanning session. Immediately after the fMRI session we presented the slides that they had seen interspersed with similar pictures, which had not been presented during the experiment. Subjects then had to decide which pictures they had seen during the fMRI stimulation.
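The timing scheme described above (five picture categories with 10 exemplars each, 750 ms presentation, interstimulus intervals jittered between 10 and 20 s in whole-second steps) can be illustrated with a short scheduling sketch. The following is a hypothetical Python illustration, not the authors' actual Presentation® script; the category labels, the random seed and the single-run assumption are ours.

```python
import random

# Hypothetical sketch of the event-related schedule described above:
# 5 categories x 10 pictures, 750 ms on screen, ISI jittered 10-20 s
# in whole-second steps. Not the authors' Presentation(R) script.

CATEGORIES = ["boys", "girls", "men", "women", "neutral"]
PICTURES_PER_CATEGORY = 10
STIM_DURATION = 0.750          # seconds the picture stays on screen
ISI_CHOICES = range(10, 21)    # 10, 11, ..., 20 s

def build_schedule(seed=0):
    rng = random.Random(seed)
    trials = [(cat, idx) for cat in CATEGORIES
              for idx in range(PICTURES_PER_CATEGORY)]
    rng.shuffle(trials)                      # randomised trial order

    schedule, onset = [], 0.0
    for category, picture_index in trials:
        schedule.append({"onset_s": onset,
                         "category": category,
                         "picture": picture_index,
                         "duration_s": STIM_DURATION})
        onset += STIM_DURATION + rng.choice(ISI_CHOICES)  # jittered ISI
    return schedule

if __name__ == "__main__":
    sched = build_schedule()
    print(f"{len(sched)} trials scheduled, last onset at {sched[-1]['onset_s']:.1f} s")
```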
FMRI acquisition and analysis Images were acquired using a 3 T MRI scanner (Verio, Siemens Healthcare, Erlangen, Germany) equipped with a standard radio frequency head coil. First, a T1-weighted high-resolution data set that covered the whole brain was acquired using a three-dimensional MPRAGE (magnetization-prepared rapid acquisition gradient echo) sequence with a repetition time (TR) of 2.00 s, an isotropic spatial resolution of 1.0 mm3, and an echo time (TE) of 3.4 ms. T2*-weighted functional images were recorded using echoplanar imaging with a TR of 2500 ms, an isotropic spatial resolution of 3×3×3 mm3 and a TE of 30 ms (FoV 228; matrix 76; spacing between slices: 0.51 mm; interslice time: 69 ms). Altogether 152 volumes with 36 image slices with a thickness of 3 mm were obtained. Image pre-processing and statistical analysis Image time-series were processed using the BrainVoyager QX 2.3.0 software package (Brain Innovation, Maastricht, The Netherlands). Pre-processing included head motion correction, slice scan time correction, temporal high-pass filtering and removal of linear trends. Using the results of the image registration with anatomical scans, the functional image time-series were then warped into Talairach space and resampled into 3 mm isotropic voxel time-series. Normalized images were smoothed using a 6.00 mm isotropic Gaussian kernel. For first-level analysis, a General Linear Model (GLM) analysis was applied with separate subject z-normalized predictors fitted to z-normalized voxel time-courses from all data sets. The orthogonal predictors of interest in the design matrix were: “boys”, “girls”, “men”, “women” and “neutral”.
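The first-level GLM described above fits one predictor per picture category to each voxel time course. A minimal sketch of how such event-related predictors are commonly constructed (stick functions at the stimulus onsets convolved with a canonical double-gamma haemodynamic response) is given below. This is a generic Python/NumPy illustration under our own assumptions, not the BrainVoyager implementation, and the onset values are placeholders.

```python
import math
import numpy as np

TR = 2.5          # repetition time (s), as in the acquisition described above
N_VOLUMES = 152   # number of functional volumes

def double_gamma_hrf(tr, length=32.0):
    """Generic canonical double-gamma HRF sampled at the TR (approximation)."""
    t = np.arange(0.0, length, tr)
    def gamma_pdf(t, shape, scale):
        return (t ** (shape - 1) * np.exp(-t / scale)
                / (scale ** shape * math.gamma(shape)))
    hrf = gamma_pdf(t, 6.0, 1.0) - gamma_pdf(t, 16.0, 1.0) / 6.0
    return hrf / hrf.max()

def make_predictor(onsets_s, tr=TR, n_vols=N_VOLUMES):
    """Stick function at stimulus onsets, convolved with the canonical HRF."""
    sticks = np.zeros(n_vols)
    for onset in onsets_s:
        vol = int(round(onset / tr))
        if vol < n_vols:
            sticks[vol] = 1.0
    return np.convolve(sticks, double_gamma_hrf(tr))[:n_vols]

# Placeholder onsets (in seconds), for illustration only.
onsets = {"boys": [12.0, 90.0], "girls": [30.0, 150.0], "men": [55.0],
          "women": [70.0], "neutral": [110.0]}
design_matrix = np.column_stack([make_predictor(o) for o in onsets.values()])
print(design_matrix.shape)   # (152, 5): one regressor per picture category
```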
For second-level analysis, the GLM fits (beta weights) were assessed in a 3-way ANOVA with two within-subject factors (sex: female vs. male; age: child vs. adult) and one between-subject factor, group (paedophilia vs. control). We assessed the main effects of, and the interaction between, these factors at the voxel level and obtained maps thresholded at the significance level of 5% (corrected for multiple comparisons). To obtain this correction, an uncorrected statistical threshold was initially applied to each of the F-maps at p = 0.005, and a cluster-level threshold correction procedure based on Monte Carlo simulations [29] was applied in order to determine the minimum cluster size below which any activation was to be discarded. This procedure yielded a minimum cluster size of 10 voxels (corresponding to a minimum cluster extent of 280 mm³) for the F-maps of the interaction of all factors, 12 voxels (297 mm³) for the factor age and 19 voxels (482 mm³) for the factor sex. The anatomical labelling of active clusters was defined on the basis of the peak voxel coordinates, using the Talairach Daemon (http://www.talairach.org) [30,31]. Active clusters resulting from the voxel-level ANOVA were further defined as Regions of Interest (ROIs). Data from those ROIs were extracted and analysed in order to understand the direction of the effects of the ANOVA. To that end, we calculated linear contrasts for the different factors, i.e. sex (female > male), age (child > adult) and the interaction of the factors (girl > woman), separately for both groups. In order to further specify the interaction of all factors (sex, age, group) we extracted the BOLD %-signal-change from the respective ROI and averaged the BOLD %-signal-change separately for the groups.
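The cluster-level correction used above estimates, by Monte Carlo simulation, how large a cluster of supra-threshold voxels must be before it becomes unlikely to occur by chance in smoothed noise. The sketch below conveys the idea in deliberately simplified form (a 2D grid instead of a 3D brain volume, arbitrary grid size and smoothing); it is a conceptual illustration, not the simulation procedure of reference [29] or the BrainVoyager implementation.

```python
import numpy as np
from scipy import ndimage, stats

def min_cluster_extent(shape=(64, 64), smooth_sigma=2.0, p_voxel=0.005,
                       alpha=0.05, n_iter=1000, seed=0):
    """Toy Monte Carlo estimate of the cluster extent (in voxels) needed to
    keep the family-wise error below `alpha` at a voxel threshold of `p_voxel`.
    2D Gaussian noise is used for brevity -- a conceptual sketch only."""
    rng = np.random.default_rng(seed)
    z_thresh = stats.norm.isf(p_voxel)          # one-sided voxel threshold
    max_cluster_sizes = []
    for _ in range(n_iter):
        noise = ndimage.gaussian_filter(rng.standard_normal(shape), smooth_sigma)
        noise /= noise.std()                    # re-standardise after smoothing
        labels, n_clusters = ndimage.label(noise > z_thresh)
        if n_clusters == 0:
            max_cluster_sizes.append(0)
        else:
            sizes = ndimage.sum(np.ones(shape), labels, range(1, n_clusters + 1))
            max_cluster_sizes.append(int(np.max(sizes)))
    # smallest extent k such that P(max cluster >= k under noise) <= alpha
    return int(np.percentile(max_cluster_sizes, 100 * (1 - alpha))) + 1

if __name__ == "__main__":
    print("estimated minimum cluster extent:", min_cluster_extent())
```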
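The ROI-based event-related averaging of BOLD %-signal-change described above is typically computed by expressing the ROI-averaged time course relative to its baseline and then averaging peristimulus segments per condition and group. The following generic Python/NumPy sketch illustrates that step with hypothetical variable names and synthetic data; it is not the BrainVoyager event-related averaging routine.

```python
import numpy as np

TR = 2.5  # repetition time in seconds

def percent_signal_change(roi_timecourse):
    """Express an ROI-averaged time course as % change from its mean."""
    baseline = roi_timecourse.mean()
    return 100.0 * (roi_timecourse - baseline) / baseline

def event_related_average(roi_timecourse, onsets_s, window_s=15.0, tr=TR):
    """Average peristimulus %-signal-change segments across events of one condition."""
    psc = percent_signal_change(roi_timecourse)
    n_samples = int(window_s / tr)
    segments = []
    for onset in onsets_s:
        start = int(round(onset / tr))
        if start + n_samples <= len(psc):
            segments.append(psc[start:start + n_samples])
    return np.mean(segments, axis=0)

# Hypothetical usage: roi_tc stands in for one subject's ROI-averaged time
# course, girl_onsets for the onsets (s) of the "girls" condition.
roi_tc = np.random.default_rng(0).standard_normal(152) + 100.0
girl_onsets = [12.0, 40.0, 77.5, 120.0]
print(event_related_average(roi_tc, girl_onsets))
```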
Results: Demographics Sexual orientation in the control group was heterosexual with a preference for mature women. In the paedophilia group all subjects were heterosexual and attracted exclusively by prepubescent stimuli. The mean age of the paedophilic subjects was 48.25 (standard deviation: 9.15) years and the mean IQ 116 (12.12). In the control group, the mean age was 46.25 (8.35) years and the IQ 124.88 (13.91). Student t-tests showed no significant difference between these groups (Table 1). In the paedophilic group 75.4% of the pictures were correctly identified immediately after the scanning session. In the control group 77.5% of the pictures were correctly identified. Again, a group difference was excluded by t-test. Mean age, IQ and rate of correctly identified pictures (behavioural control) with standard deviation (SD) and t-test for both groups. None of the differences reached statistical significance.
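The group comparisons reported above (age, IQ and recognition rate compared with Student t-tests) could be reproduced along the following lines; the value arrays shown are placeholders for illustration only, not the study data.

```python
from scipy import stats

# Placeholder values for illustration only -- not the study data.
age_paedophilic = [39, 44, 47, 50, 52, 55, 58, 41]
age_control     = [38, 40, 44, 46, 49, 52, 55, 46]

t, p = stats.ttest_ind(age_paedophilic, age_control)  # two-sample Student t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```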
FMRI Voxel level ANOVA We found a significant interaction of the factors sex, age and group (p < 0.05, corrected) in the middle frontal gyrus (Figure 2, Table 2). We furthermore found significant main effects for the factors sex and age outside of the middle frontal gyrus, i.e. in brain areas that did not show a significant interaction of all factors. Whole brain ANOVA. Active voxels showed a main effect of the factor sex (female/male) in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere the inferior occipital gyrus displayed active voxels (A). Furthermore we found bilateral activation in the cerebellum. The factor age (adult/child) showed an activation of the right dorsomedial prefrontal cortex (B). In the right lateral orbitofrontal cortex a significant interaction of sex, age and group was detected (C). Significant voxels in A-C represent effects at a corrected p-level of p < 0.05. X, Y, Z correspond to the Talairach coordinates. Regions that showed a significant effect in the voxel-wise ANOVA. X, Y, Z correspond to the Talairach coordinates (BA = Brodmann area; R = right, L = left). A significant main effect of the factor sex (female, male) was present at a corrected threshold of p < 0.05 in the right middle temporal gyrus, right fusiform gyrus, right inferior and middle occipital gyrus and right parietal lobe. In the left hemisphere, the inferior occipital gyrus was also active. We also found bilateral activation in the cerebellum. The factor age revealed an activation of the right dorsomedial prefrontal cortex (p < 0.05, corrected). For the factor group, no significant activation was found at the latter threshold, but again this finding is restricted to brain areas that did not show a significant interaction of all factors.
ROI analysis The linear contrasts of these ROIs confirmed that, in both groups, the main effect of the factor sex was based on a higher activation due to female pictures and the main effect of the factor age on a stronger activation for pictures of children. ROI analysis clarified that, in the ROI that became significant for the interaction of all factors, only paedophilic subjects showed activation in the girl condition while controls showed a deactivation in this condition (Figure 3A). ROI analysis (A). The ROI female > male contrast level for the ROIs extracted from the ANOVA of the main effect of sex displayed a greater activation for female pictures in all active ROIs (rTL = right temporal lobe, rPL = right parietal lobe, rOL = right occipital lobe, lOL = left occipital lobe). The child > adult contrast confirmed that the main effect of age is based on a stronger activation due to pictures of children in both groups (rDMPFC = right dorsomedial prefrontal cortex). ROI analysis of the cluster showing an interaction of all factors revealed a significant difference (* = p < 0.05) for the contrast girl > woman, with activation in paedophilic subjects and deactivation in control subjects (rlOFC = right lateral orbitofrontal cortex). Event related averaging (B). BOLD %-signal-change in the rlOFC illustrated a strong activation due to erotic pictures of girls only in paedophilia. Event related averaging of BOLD %-signal-change for all conditions and all groups in that ROI illustrated a strong activation due to erotic pictures of girls in paedophilic subjects that was absent for all other picture conditions in both groups (Figure 3B).
Discussion: Visual presentation of erotic pictures has been used to address the underpinnings of erotic stimulation in paedophilia [19-22,28,32]. Results have proved heterogeneous, which might partially be explained by the detection of non-specific secondary processes in blocked designs with long stimulus presentation times. Our study focuses on the immediate response to erotic pictures. Like previous studies using a short presentation time and an event-related fMRI design [22,26,33], we also found that female erotic pictures elicited more activation in the right temporal lobe, the right parietal lobe, both occipital lobes and the cerebellum in both heterosexual groups. This finding indicates that these regions respond to erotic visual stimulation in an immediate fashion, i.e. activation can be detected even after a short presentation time. It also highlights that this activation appears independent of non-specific processes such as sustained attention. Furthermore, there was no difference in the BOLD response to female erotic pictures between the two heterosexual groups. Pictures of children elicited a differential activation, regardless of sex, in the right DMPFC. This appears to be a very interesting finding, as a crucial role in the critical evaluation of stimuli and in the preparation of response strategies has been attributed to the DMPFC [34]. All study subjects were well aware of the fact that our study was about paedophilia. We would expect that erotic pictures of children would be recognised as a stimulus of special interest in that context. Both groups are also likely to be aware that this condition is specifically attached to socially unaccepted sexual preference and behaviour. These assumptions might explain why we found a specific activation of the DMPFC in both groups, independent of the sex of the presented pictures. In the DMPFC these pictures are apparently not differentiated by preferred sex or specific erotic salience but are rather perceived as a category requiring specific attention. This assumption fits well with the findings from Walter et al. [35] indicating that activation of the DMPFC is not related to specific processes such as the processing of erotic content, but rather to more general processes like attention. In line with our findings, Ponseti et al. [33] reported that the medial orbitofrontal cortex showed a stronger response to erotic pictures than to neutral pictures, but this response was also independent of individual sexual preference, as it was present in heterosexual male subjects for both male and female erotic stimuli.
In contrast to earlier studies showing a group effect and a different activation pattern in the prefrontal cortex after presentation of erotic pictures of children, our findings indicate a considerable overlap between groups in the immediate processing of visual erotic stimuli. Considering that both groups rely on the same neuronal stimulus evaluation and response preparation networks, this finding seems rather less surprising. In contrast to the earlier studies that presented pictures for more than 30 s, we focus on the immediate response to visual stimuli. Many of the previous studies were conducted with forensic inpatients, while we only included outpatients, and most of our paedophilic study group had been convicted due to their consumption of illegal internet pornography. Other studies [36,37] have already demonstrated that social functioning and neurocognitive capabilities are apparently better in these subjects than in forensic inpatients with hands-on delinquency. In this context it has to be stressed that the IQ of the paedophilic participants can be considered extremely high, especially in view of the already mentioned findings from Cantor et al. [3]. Hence we cannot exclude that the selection of the study subjects contributed to our findings and that the generalisability of our results might be limited. Brain response in the right lateral orbitofrontal cortex (OFC) was shown to interact with the factors sex, preferred age and group. Mean group betas and event related averaging confirmed a specific activation of the right lateral OFC due to erotic pictures of prepubescent girls in heterosexual paedophilic subjects only, whereas all other conditions in the paedophilic group and all conditions in the control group displayed a deactivation of the lateral OFC. Further support for a deviant activation in the right lateral OFC can be found in an fMRI study by Walter et al. [18] that revealed a reduced activation for nude stimuli of female adults in paedophilic subjects. FMRI research has already confirmed that the OFC can be activated by visual inputs [38]. According to Kringelbach et al. [39] the OFC is a nexus for sensory integration. Phillips et al. [40] stated that the OFC is related to the identification of the emotional significance of a stimulus and to the production of affective states in response to that stimulus. These authors furthermore suggested that, together with the amygdala, insula, ventral striatum and ventral anterior cingulate, the OFC is also involved in the automatic regulation of emotional responses. Hence, different processes such as the modulation of autonomic reactions, learning, prediction, decision making and emotional and reward-related behaviours are processed in the OFC. According to an fMRI study by Sescousse et al. [41] the anterior part of the lateral OFC responds more strongly to more abstract rewards such as monetary gain, while the posterior lateral OFC responds more strongly to more basic rewards, for example erotic stimulation. These findings are in line with a study by Kringelbach and Rolls [42] describing a posterior-anterior distribution of primary (e.g. sexual stimulation) and more abstract secondary (monetary gain) reinforcers. They suggested that the anterior part of the lateral OFC is, phylogenetically speaking, a more recent structure and might therefore be related to the processing of more abstract reinforcers like money.
Findings from our study underline the notion that the OFC plays a critical role in evaluating reinforcers that might provoke behavioural changes [42,43]. This is even more interesting, as it has been shown that the anterior lateral OFC responds more to punishment while the anterior medial part of the OFC responds to reward [44,45]. In line with this idea, the specific group effect in the right lateral OFC might indicate that the specific emotional significance of the stimuli was only detected in paedophilic subjects. Erotic pictures of children provoked a brain response resembling that to punishment in paedophilic subjects. According to the literature, the right anterior lateral OFC is not only associated with punishment but also with cues indicating a behavioural shift. This significance for behavioural adaptation has been highlighted in recent reviews on the OFC [39,43,46]. This is interesting, as studies in healthy controls have demonstrated that the lateral OFC becomes active when subjects try to suppress erotic responses to visual erotic stimulation [47] or engage in deception [48]. Our results would therefore indicate that, immediately after the presentation of erotic pictures of children, a brain region known to be involved in evaluating emotional salience became active in paedophilic subjects. This activation might be related not only to the regulation of emotional states or autonomic responses but also to behavioural changes such as the suppression of emotional responses or deception. The findings from our study point to a network perspective of the immediate processing of visual erotic stimulation. This notion would be in line with the concept of the orbital and medial prefrontal cortex network [49]. In this network, two likely interacting systems became active or inactive while processing the visual stimuli. Even after a short presentation of visual stimuli, the DMPFC showed an increased activation in response to the target condition of our study (erotic pictures of children). It is most likely that this finding is related to attentional processes, which are evoked in both study groups. We further found a specific activation due to the respective erotic condition in the anterior lateral OFC in paedophilic subjects. Activation in that region might indicate evaluation of the emotional salience and the reward value of that stimulus. Furthermore, this activation might indicate the intent to modify behaviour and to dissimulate erotic responses or to deceive. Assessing the immediate processing of erotic pictures using fMRI might be a useful tool for probing emotional engagement towards the presented stimuli in future studies. These results could also be therapeutically relevant, as most cognitive behavioural treatment approaches aim to reduce cognitive distortions and the denial of the implications of paedophilic behaviours and to increase the awareness of problematic attraction to children [50], rather than to change the sexual preference. The often-reported tendency of paedophilic subjects to dissimulate and minimise the dramatic impact on potential victims of child abuse might arise from similar processes, which aim at suppressing emotional distress. Interestingly, Ponseti et al. [22] did not find such an activation in their study sample, which, in contrast to our sample, had openly stated their paedophilic sexual deviance. Conclusion: Our fMRI study confirms the findings of previous studies concerning the processing of visual erotic stimuli.
Furthermore, we were able to demonstrate that the dorsomedial prefrontal cortex is specifically engaged in processing erotic pictures of children regardless of the study group. This activation appears to represent the evaluation of relevance, which is apparently not linked to sexual orientation per se but rather to the study context. In addition, we have made some new findings regarding the immediate processing of visual erotic stimulation in paedophilia, as we found an immediate activation of the brain regions involved in evaluating emotional salience and reward and engaged in the regulation of emotional responses. This activation might be related to the suppression or deception of erotic responses. Abbreviations: OFC: Orbitofrontal cortex; DMPFC: Dorsomedial prefrontal cortex; DLPFC: Dorsolateral prefrontal cortex; MPRAGE: Magnetization-prepared rapid acquisition gradient echo; GLM: General linear model; ROI: Region of interest. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: BH, RM, MG, VD and ES conceived and designed the study. BH, NH, RM, PL and MK implemented the study design and acquired the data. BH, FE and NH performed the data analyses. BH and FE interpreted the data and wrote the manuscript. BH, FE, NH, PL, MK, RM, VD, ES and MG contributed to the drafting and revision of the manuscript for important intellectual content. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-244X/13/88/prepub
Background: Most neuroimaging studies investigating sexual arousal in paedophilia used erotic pictures together with a blocked fMRI design and long stimulus presentation times. While this approach allows the detection of sexual arousal, it does not enable the assessment of the immediate processing of erotically salient stimuli. Our study aimed to identify neuronal networks related to the immediate processing of erotic stimuli in heterosexual male paedophiles and healthy age-matched controls. Methods: We presented erotic pictures of prepubescent children and adults in an event-related fMRI design to eight paedophilic subjects and age-matched controls. Results: Erotic pictures of females elicited more activation in the right temporal lobe, the right parietal lobe and both occipital lobes, and erotic pictures of children activated the right dorsomedial prefrontal cortex in both groups. An interaction of sex, age and group was present in the right anterolateral orbitofrontal cortex. Conclusions: Our event-related study design confirmed that erotic pictures activate some of the brain regions already known to be involved in the processing of erotic pictures when these are presented in blocks. In addition, it revealed that erotic pictures of prepubescent children activate brain regions critical for choosing response strategies in both groups, and that erotically salient stimuli selectively activate a brain region in paedophilic subjects that had previously been attributed to reward and punishment, and that had been shown to be implicated in the suppression of erotic response and deception.
Background: Paedophilia is a matter of great public interest and often evokes emotional discussions in mass media. It is difficult to estimate the prevalence of paedophilia as both paedophilic offenders and victims often prefer not to identify themselves. From surveys of the general population we know that approximately 12% of men and 17% of women report experiencing sexual abuse in their childhood [1]. These figures underline the potential impact of this disease on society. According to DSM IV-TR, paedophilia is defined by two main criteria: firstly persistent sexual fantasies, urges or behaviour involving sexual activity with prepubescent children and secondly that these people have acted on these urges, or that these urges or fantasies caused marked distress or interpersonal difficulties [2]. Findings from neuropsychological studies on paedophilia are heterogeneous. Whilst a lower IQ [3], educational difficulties [4] and a higher rate of left-handedness [5] indicate rather generalised brain dysfunction, other studies suggest more specific alterations like focal weaknesses in frontal-executive [6,7] and/or temporal-verbal [8] skills or even a more deliberate response style and greater self-monitoring in paedophilic subjects [9,10]. Furthermore revealed research on personality traits in paedophilia various findings like impaired interpersonal functions, impaired self-awareness, disinhibitory traits, sociopathy and a propensity for cognitive distortions [11]. In summary, it can be stated that the neurobiology of paedophilia remains incomplete. In recent years, researchers have increasingly addressed the neuronal underpinnings of sexual arousal in order to better understand sexual behaviour. Stoléru et al. [12,13] and Redouté et al. [14,15] were the first to propose a neurophenomological model that disentangled the cognitive, emotional, motivational and physiological components of sexual arousal. Two recently published quantitative meta-analyses on sexual cue reactivity [16,17] underline that distinctive subcomponents of sexual arousal can be reliably localised by neuroimaging techniques. Based on the fMRI results about sexual arousal in healthy controls it was natural to transfer that approach to paedophilic subjects in order to better understand the cognitive and behavioural processes in paedophilia. In heterosexual paedophilia fMRI studies have demonstrated the altered processing of erotic visual stimuli in the dorsomedial prefrontal cortex (DMPFC), amygdala and hippocampus [18]. The latter study also reports that erotic pictures of adults induced a stronger activation of the hypothalmus, the periaqueductal grey and the dorsolateral prefrontal cortex (DLPFC) in healthy controls than in heterosexual paedophilic subjects. While this study highlights differences between controls and paedophilic subjects other studies report similar activation patterns for the respective erotic conditions. For example Schiffer et al. [19] show that erotic pictures of girls induced the same activation of limbic structures such as the amygdala, the cingulate gyrus or the hippocampus in heterosexual paedophilia as erotic pictures of women in a control group. However this study also found an additional activation of the DLPFC in the heterosexual paedophilic group alone. Research has also been carried out on homosexual paedophilia. Again in both paedophilic and control subjects, a common network relating to sexual activation was found, comprising the occipitotemporal and prefrontal cortex [20]. 
However, only paedophilic subjects showed activated subcortical regions such as the thalamus, the globus pallidus and the striatum during erotic stimulation. Another study on homosexual paedophilia describes differential activation of the right orbitofrontal cortex [21]. Most of the above-mentioned studies indicate that prefrontal brain regions might be related to paedophilia, but the results and structures involved differ from study to study. Considering these rather heterogeneous results, the findings from a recent study by Ponseti et al. [22] appear surprising. These authors propose an fMRI-based classification procedure for heterosexual and homosexual non-paedophilic and paedophilic subjects with 95% accuracy. Unlike previous studies, these authors describe activation in regions known to be involved in the processing of erotic stimuli such as the caudate nucleus, cingulate cortex, insula, fusiform gyrus, temporal cortex, occipital cortex, thalamus, amygdala and cerebellum, but not in the prefrontal cortex. Except for the study of Walter et al. [18], all of the above-mentioned studies that describe a prefrontal involvement in paedophilia used relatively long presentation times, ranging from 19.2 to 38.5 s, and blocked fMRI designs. One critical shortcoming of long presentation times is that different cognitive processes, such as sustained attention or self-referential processing, might also take place and interfere with the target process. Earlier fMRI studies on sexual arousal in normal subjects showed that with shorter presentation times of sexually arousing pictures specific neuronal networks and brain processes can be addressed. For example, static presentation for 8.75 s, as used by Moulier et al. [23], demonstrated that the initiation and low levels of penile tumescence are controlled by frontal, parietal, insular and cingulate cortical areas. In accordance with these results, Ferretti et al. [24] showed that longer presentation times (> 30 s) induce sexual arousal and penile erection whilst shorter presentation times (< 3 s) induce arousal without erection. A stimulation time of 5 s even allowed specific sexual emotional effects to be distinguished from more general emotional effects [25]. A study comparing both types of fMRI designs in the visual processing of erotic stimuli in healthy volunteers [26] provides strong support for the use of short stimulation times, suggesting an event related design. The authors proposed that event related designs might be an alternative to blocked designs if the core interest is the detection of networks associated with the immediate processing of erotic stimuli. Besides being more useful for the detection of these networks, short presentation times also offer other potential benefits, as long presentation times are even more problematic in paedophilia than in healthy controls. Paedophilic subjects are mostly recruited after coming into conflict with the legal authorities as a result of their sexual preferences, and this may increase a tendency to suppress erotic arousal or dissimulate sexual attraction to the pictures of children presented. The latter point has been used to explain some of the differences between ratings of erotic salience and brain response observed in some of the aforementioned studies [19,21]. As presentation time becomes shorter, deliberate manipulation may become more difficult and the immediate processes following the perception of target stimuli could become visible.
In our study we aim to identify the neuronal networks related to the immediate processing of erotic stimuli in heterosexual paedophilic subjects recruited from a forensic outpatient setting. Based on previous neuropsychological research and recent fMRI studies, we predicted a differential activation in the prefrontal cortex in paedophilic subjects. Conclusion: Our fMRI study confirms the findings of previous studies concerning the processing of visual erotic stimuli. Furthermore, we were able to demonstrate that the dorsomedial prefrontal cortex is specifically engaged in processing erotic pictures of children regardless of the study group. This activation appears to represent the evaluation of relevance, which is apparently not linked to sexual orientation per se but rather to the study context. In addition, we have made some new findings regarding the immediate processing of visual erotic stimulation in paedophilia, as we found an immediate activation of brain regions involved in evaluating emotional salience and reward and engaged in the regulation of emotional responses. This activation might be related to the suppression or deception of erotic responses.
Background: Most neuroimaging studies investigating sexual arousal in paedophilia used erotic pictures together with a blocked fMRI design and long stimulus presentation times. While this approach allows the detection of sexual arousal, it does not enable the assessment of the immediate processing of erotically salient stimuli. Our study aimed to identify neuronal networks related to the immediate processing of erotic stimuli in heterosexual male paedophiles and healthy age-matched controls. Methods: We presented erotic pictures of prepubescent children and adults in an event related fMRI-design to eight paedophilic subjects and age-matched controls. Results: Erotic pictures of females elicited more activation in the right temporal lobe, the right parietal lobe and both occipital lobes, and erotic pictures of children activated the right dorsomedial prefrontal cortex in both groups. An interaction of sex, age and group was present in the right anterolateral orbitofrontal cortex. Conclusions: Our event related study design confirmed that erotic pictures activate some of the brain regions already known to be involved in the processing of erotic pictures when these are presented in blocks. In addition, it revealed that erotic pictures of prepubescent children activate brain regions critical for choosing response strategies in both groups, and that erotically salient stimuli selectively activate a brain region in paedophilic subjects that had previously been attributed to reward and punishment, and that had been shown to be implicated in the suppression of erotic response and deception.
11,290
264
[ 1224, 249, 408, 157, 517, 173, 746, 370, 312, 38, 10, 92, 16 ]
17
[ "right", "pictures", "activation", "subjects", "significant", "gyrus", "age", "sex", "group", "erotic" ]
[ "paedophilia findings like", "adults paedophilic subjects", "paedophilia fmri", "paedophilic subjects activation", "traits paedophilia findings" ]
[CONTENT] Paedophilia | Event related fMRI | Erotic stimulation [SUMMARY]
[CONTENT] Paedophilia | Event related fMRI | Erotic stimulation [SUMMARY]
[CONTENT] Paedophilia | Event related fMRI | Erotic stimulation [SUMMARY]
[CONTENT] Paedophilia | Event related fMRI | Erotic stimulation [SUMMARY]
[CONTENT] Paedophilia | Event related fMRI | Erotic stimulation [SUMMARY]
[CONTENT] Paedophilia | Event related fMRI | Erotic stimulation [SUMMARY]
[CONTENT] Adult | Brain | Case-Control Studies | Emotions | Erotica | Evoked Potentials | Humans | Image Processing, Computer-Assisted | Magnetic Resonance Imaging | Male | Middle Aged | Pedophilia | Photic Stimulation [SUMMARY]
[CONTENT] Adult | Brain | Case-Control Studies | Emotions | Erotica | Evoked Potentials | Humans | Image Processing, Computer-Assisted | Magnetic Resonance Imaging | Male | Middle Aged | Pedophilia | Photic Stimulation [SUMMARY]
[CONTENT] Adult | Brain | Case-Control Studies | Emotions | Erotica | Evoked Potentials | Humans | Image Processing, Computer-Assisted | Magnetic Resonance Imaging | Male | Middle Aged | Pedophilia | Photic Stimulation [SUMMARY]
[CONTENT] Adult | Brain | Case-Control Studies | Emotions | Erotica | Evoked Potentials | Humans | Image Processing, Computer-Assisted | Magnetic Resonance Imaging | Male | Middle Aged | Pedophilia | Photic Stimulation [SUMMARY]
[CONTENT] Adult | Brain | Case-Control Studies | Emotions | Erotica | Evoked Potentials | Humans | Image Processing, Computer-Assisted | Magnetic Resonance Imaging | Male | Middle Aged | Pedophilia | Photic Stimulation [SUMMARY]
[CONTENT] Adult | Brain | Case-Control Studies | Emotions | Erotica | Evoked Potentials | Humans | Image Processing, Computer-Assisted | Magnetic Resonance Imaging | Male | Middle Aged | Pedophilia | Photic Stimulation [SUMMARY]
[CONTENT] paedophilia findings like | adults paedophilic subjects | paedophilia fmri | paedophilic subjects activation | traits paedophilia findings [SUMMARY]
[CONTENT] paedophilia findings like | adults paedophilic subjects | paedophilia fmri | paedophilic subjects activation | traits paedophilia findings [SUMMARY]
[CONTENT] paedophilia findings like | adults paedophilic subjects | paedophilia fmri | paedophilic subjects activation | traits paedophilia findings [SUMMARY]
[CONTENT] paedophilia findings like | adults paedophilic subjects | paedophilia fmri | paedophilic subjects activation | traits paedophilia findings [SUMMARY]
[CONTENT] paedophilia findings like | adults paedophilic subjects | paedophilia fmri | paedophilic subjects activation | traits paedophilia findings [SUMMARY]
[CONTENT] paedophilia findings like | adults paedophilic subjects | paedophilia fmri | paedophilic subjects activation | traits paedophilia findings [SUMMARY]
[CONTENT] right | pictures | activation | subjects | significant | gyrus | age | sex | group | erotic [SUMMARY]
[CONTENT] right | pictures | activation | subjects | significant | gyrus | age | sex | group | erotic [SUMMARY]
[CONTENT] right | pictures | activation | subjects | significant | gyrus | age | sex | group | erotic [SUMMARY]
[CONTENT] right | pictures | activation | subjects | significant | gyrus | age | sex | group | erotic [SUMMARY]
[CONTENT] right | pictures | activation | subjects | significant | gyrus | age | sex | group | erotic [SUMMARY]
[CONTENT] right | pictures | activation | subjects | significant | gyrus | age | sex | group | erotic [SUMMARY]
[CONTENT] arousal | sexual | studies | paedophilic | presentation times | times | paedophilia | presentation | erotic | sexual arousal [SUMMARY]
[CONTENT] pictures | subjects | presented | mm | time | ms | voxel | applied | level | image [SUMMARY]
[CONTENT] right | significant | gyrus | middle | gyrus right | activation | age | factor | effect | 05 [SUMMARY]
[CONTENT] erotic | engaged | processing | processing visual erotic | visual erotic | processing visual | responses | study | emotional | findings [SUMMARY]
[CONTENT] pictures | right | subjects | activation | significant | gyrus | erotic | age | study | cortex [SUMMARY]
[CONTENT] pictures | right | subjects | activation | significant | gyrus | erotic | age | study | cortex [SUMMARY]
[CONTENT] paedophilia ||| ||| [SUMMARY]
[CONTENT] eight [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] paedophilia ||| ||| ||| eight ||| ||| ||| ||| ||| [SUMMARY]
[CONTENT] paedophilia ||| ||| ||| eight ||| ||| ||| ||| ||| [SUMMARY]
Quadratic relationship between systolic blood pressure and white matter lesions in individuals with hypertension.
36204999
There is a well-documented relationship between cardiovascular risk factors and the development of brain injury, which can lead to cognitive dysfunction. Hypertension (HTN) is a condition that increases the risk of silent and symptomatic ischemic brain lesions. Although the benefits of hypertension treatment are indisputable, the target blood pressure value at which the likelihood of tissue damage is most reduced remains under debate.
BACKGROUND
Our group performed a cross-sectional (n = 376) and longitudinal (n = 188) study of individuals without dementia or stroke (60% women, n = 228, age 68.5 ± 7.4 years; men, n = 148, age 70.7 ± 6.9 years). Participants were split into hypertensive (n = 169) and normotensive (n = 207) groups. MR images were obtained on a 3T system. Linear modeling was performed in the hypertensive and normotensive cohorts to investigate the relationship between systolic (SBP) and diastolic (DBP) blood pressure, white matter lesion (WML) volume, and brain volumes.
METHOD
Participants in the hypertensive cohort showed a quadratic relationship between SBP and WML, with the lowest WML volumes measured in participants with readings at approximately 124 mmHg. Additionally, the hypertensive cohort exhibited a quadratic relationship between DBP and mean hippocampal volume, with participants with readings at approximately 77 mmHg showing the largest volumes. Longitudinally, all groups experienced WML growth despite different BP trajectories, further suggesting that WML expansion may occur despite, or because of, BP reduction in individuals with a compromised vascular system.
RESULTS
Overall, our study suggests that in the hypertensive group there is a valley of mid-range blood pressures associated with less pathology in the brain.
CONCLUSION
[ "Male", "Female", "Humans", "Middle Aged", "Aged", "Blood Pressure", "White Matter", "Cross-Sectional Studies", "Magnetic Resonance Imaging", "Hypertension" ]
9794123
INTRODUCTION
The impact of cardiovascular disease on the United States' population cannot be overstated. One out of every three Americans is hypertensive, with 66% of adults aged 60 or older exhibiting above-average blood pressure readings [1,2]. It was estimated in 2018 that 11 million individuals in the United States were living with undiagnosed hypertension (HTN), placing them at an increased risk for adverse cardiovascular events that can directly affect the central nervous system as well as cognition [1,2]. High blood pressure (BP) can cause damage and dysfunction across organ systems, with perhaps the most significant targets being the brain and the heart [3–5]. Hypertensive patients develop atherosclerotic plaques within the large blood vessels supplying blood flow both into and out of the brain at an accelerated rate, increasing their risk of an ischemic event [6,7]. HTN can also cause remodeling of the brain microvasculature, with vessels exhibiting rarefaction and lumen narrowing [4,6,8]. Such changes in vessel architecture can have negative effects on cerebral blood flow (CBF) and autoregulation. Reduction in CBF has been shown to cause brain atrophy both in animal models [9] and in clinical populations [10], and has been consistently linked with white matter damage [11,12]. White matter lesions (WML) are very common in HTN participants [13–15]. Indeed, several studies have confirmed that the two major risk factors for WML development are increased age and high BP, with smoking, high cholesterol, and heart disease also being significant contributors to lesion formation [14,16,17]. Although the benefits of HTN treatment are indisputable for major cardiovascular outcomes, the target blood pressure for the brain remains under debate. While some argue for aggressive blood pressure reduction [18], others point to the possible dangers of this approach [19], as marked lowering of BP in longstanding HTN may cause relative hypoperfusion. This suggested to us the existence of an optimum BP range. A recent meta-analysis found a U-shaped relationship between SBP and the risk of dementia in participants older than 75, as well as a U-shaped relationship between SBP and combined dementia and mortality across all age groups starting at 60 [20]. Our group has previously reported a quadratic (inverted U-shaped) relationship between CBF and SBP in hypertensive patients, pointing to an optimum lying between 120 and 130 mmHg where CBF was highest [21]. It has also been consistently shown that WML volumes are inversely related to global and local indices of CBF [22]. In this observational report, we investigated whether, in hypertensive participants, WML and atrophy, structural markers of HTN-related brain damage, show a relationship with SBP parallel to the one we observed for CBF and SBP: namely, is there a quadratic association between them.
METHODS
The data of this study are available from the corresponding author upon request. Participants recruitment: Study participants were recruited at the NYU School of Medicine, at the former Center for Brain Health. Out of 376 individuals described in this report, 369 (98%) were previously part of a study assessing the relationship between BP and CBF. Here, we report on participants with quantitative assessment of white matter pathology. Baseline assessment was performed in a larger group of n = 376 participants, and n = 188 of them had follow-up examinations. The reasons for lack of follow-up are described in the Supplement. Participants were enrolled between November 2010 and November 2017. All participants signed Institutional Review Board (ethics committee)-approved consent forms and were free of neocortical stroke, brain tumor, life-long psychotic disorders and dementia at baseline. Information pertaining to recruitment, exclusion criteria, and participant flow is presented in the Supplement. Clinical assessment: All participants underwent medical, psychiatric, and neurological assessments, blood tests, electrocardiogram (ECG), and magnetic resonance imaging (MRI) evaluations at baseline, and the longitudinal group also at follow-up. Participants exhibiting dementia, based on a physician-administered interview using the Brief Cognitive Rating Scale, rating on the Global Deterioration Scale [23], and Clinical Dementia Rating, were excluded [24]. Fasting blood was tested for complete blood count, liver function, as well as metabolic and lipid panels. Blood pressure was taken once in a sitting position, after 5 min of rest. It was measured on the left upper arm using a manual sphygmomanometer. Body mass index (BMI) was calculated as weight/height2 (kg/m2). Medication: we recorded the use of antihypertensive medications (angiotensin receptor blockers (ARBs), angiotensin converting enzyme inhibitors (ACE), beta-blockers, diuretics, and calcium channel blockers), statins and glucose lowering drugs. Diabetes mellitus was defined as a fasting glucose level of 126 mg/dl or higher and/or current use of glucose lowering medication [25]. Smoking status was defined as positive if the participant was a current smoker or had smoked within the last 10 years.
Study groups: For consistency with our previous study, hypertension was defined as current antihypertensive treatment or BP ≥140/90 mmHg [26]. One hundred and sixty-nine participants were classified as hypertensive; 136 participants were taking antihypertensive medication and 42 were not taking any antihypertensive medication but had high BP recorded during their in-office visit. Normotension (NTN) was defined as BP <140/90 mmHg and no antihypertensive treatment. For descriptive purposes and secondary analyses, we described hypertensive participants as untreated: no antihypertensive treatment and BP ≥140/90 mmHg; controlled: antihypertensive treatment and BP <140/90 mmHg; uncontrolled: antihypertensive treatment and BP ≥140/90 mmHg. We also divided our normotensive group into Stage 1 hypertension based on new guidelines [27]: no treatment and BP between 130–139/80–89 mmHg, and normotensive: BP <130/80 mmHg.
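As an illustration of the grouping rules above, the following sketch (hypothetical Python, not the study's analysis code; function and label names are assumptions) applies the 140/90 mmHg and 130–139/80–89 mmHg cut-offs together with treatment status:

    def classify_bp(sbp, dbp, on_treatment):
        """Return a study-group label from one office BP reading and treatment status."""
        high = sbp >= 140 or dbp >= 90
        if on_treatment:
            return "uncontrolled HTN" if high else "controlled HTN"
        if high:
            return "untreated HTN"
        if 130 <= sbp <= 139 or 80 <= dbp <= 89:
            return "Stage 1 HTN (untreated)"
        return "normotensive (<130/80)"

    print(classify_bp(152, 88, on_treatment=False))  # untreated HTN
    print(classify_bp(128, 76, on_treatment=True))   # controlled HTN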
MR imaging: All MR imaging was performed on the 3T system (Siemens, Erlangen, Germany). Parameters for Magnetization Prepared Rapid Acquisition Gradient Echo (MPRAGE) images were: repetition time (TR) = 2250 ms, echo time (TE) = 2.7 ms, inversion time (TI) = 900 ms, flip angle (FA) = 8°, slice thickness 1.0 mm, field of view (FOV) = 200 mm, acquisition matrix = 256 × 192 × 124, reconstructed as 256 × 256 × 124. Axial fluid attenuation inversion recovery (FLAIR) images were acquired with TR/TE/TI = 9000/99/2500 ms, FA 130°, slice thickness 3.3 mm, FOV 220 mm, matrix = 256 × 192, reconstructed as thirty 256 × 256 images. MRI processing: White matter lesions were segmented on FLAIR MRI with the freely available image analysis software FireVoxel (https://firevoxel.org/) using a previously validated workflow [28]. FLAIR images were first corrected for signal nonuniformity using the N3 algorithm. A 3D mask M was then created over the entire brain parenchyma, that is, the gray and white matter, with cerebrospinal fluid (CSF) excluded [29]. Voxels at the external surface S of M were then identified. Next, a set L was created, containing voxels in M with signal intensity 2.5 standard deviations above the mean FLAIR signal. Finally, L was filtered to remove (a) cortical voxels, those located within 3 mm of S, (b) small clusters of volume <12 mm3 (presumed to represent image noise), and (c) connected regions having >50% CSF border (presumed to represent the choroid plexus and the septum). The resulting mask represents our estimate of WML. Its volume VWML is presented as a percentage of the total brain volume. Gray matter (GM), white matter (WM) and CSF volumes were estimated using the Statistical Parametric Mapping segmentation procedure (SPM, version 8, with ‘New-Segment’ extension) [30]. Total brain volume was the sum of GM and WM volumes. Intracranial volume (ICV) was the sum of all tissue types. Left and right hippocampal volumes were obtained with FreeSurfer version 6.0 [31] and averaged; they are referred to from now on as ‘mean hippocampal volume’. GM, WM and hippocampal volumes are presented as a fraction of the ICV. MRI data were acquired over the span of 9 years on a 3T magnet that underwent hardware and software upgrades. We detected inter-epoch variability in GM measurements. To avoid this fixed bias we z-scored GM values separately by epoch, then recentered and rescaled them. Even though no other structural measure was affected, for consistency, we used rescaled values for hippocampal and WM volumes.
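A minimal sketch of the intensity-threshold and minimum-cluster-size steps described above (Python with NumPy/SciPy; this is an illustrative simplification, not the FireVoxel implementation, and it omits the cortical-rim and CSF-border filters; array and parameter names are assumptions):

    import numpy as np
    from scipy import ndimage

    def wml_mask(flair, brain_mask, voxel_vol_mm3=1.0):
        """Voxels brighter than mean + 2.5 SD of the in-mask FLAIR signal,
        keeping only clusters of at least 12 mm3."""
        signal = flair[brain_mask]
        thr = signal.mean() + 2.5 * signal.std()
        candidate = (flair > thr) & brain_mask
        labels, n_clusters = ndimage.label(candidate)
        sizes = ndimage.sum(candidate, labels, index=range(1, n_clusters + 1))
        keep_ids = [i + 1 for i, s in enumerate(sizes) if s * voxel_vol_mm3 >= 12]
        return np.isin(labels, keep_ids)

    # WML burden as a percentage of total brain volume, as reported in the text:
    # vwml_percent = 100 * wml_mask(flair, mask).sum() / mask.sum()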
Statistics: Categorical variables were compared with χ2 tests. T-test, Mann–Whitney U-test, and analysis of covariance (ANCOVA) were used to compare group means for continuous variables, depending on the data distribution and the need to adjust for age and sex. Normality was tested with the Shapiro–Wilk test. Relationships between BP, WML volume and brain volumes (hippocampal, GM, WM) at baseline were examined with linear regression. Volumes were dependent variables; age, sex, body mass index, smoking, diabetes, systolic blood pressure and diastolic blood pressure were independent predictors. These relationships were tested in participants with and without hypertension separately. We present here F-statistics for the regression models, as well as unstandardized (B) and standardized (β) coefficients for individual predictors. Again, we hypothesized that in the HTN group an optimal BP may exist [21], which minimizes WML volume and maximizes brain volumes. Thus, as before, we tested for the existence of the peak by introducing both the linear BP and quadratic (BP2) terms into the regression models. After a quadratic relationship between the volume and BP was confirmed, the critical BP at which volume reaches its maximum or minimum value was determined as x = −B1/(2 × B2), where B1 and B2 are the linear and quadratic regression coefficients (Eq. [2]). To examine whether BP, WML and brain volumes changed over time, we used mixed models for repeated measures (MMRM), where BP or volume was a dependent variable and time, group (HTN vs. NTN), and group × time interaction were predictors. For models with WML or brain volumes, age was added as a covariate. Both intercept and slope were treated as random. To test whether baseline BP predicted change in WML and brain volumes, we created volume rates of change by regressing volume on time for each participant and tested whether baseline SBP and DBP were associated with the volume slope. Baseline volumes, BMI, age, sex, smoking and diabetes status were initially included in the models, and retained only when significant. The relationships were tested in the normo- and hypertensive participants separately. To test whether change in BP (independent variable) was related to change in volume (dependent variable) we used MMRM. WML or brain volumes were the dependent variable and BP (SBP or DBP), group (HTN vs. NTN), and group × BP interaction were predictors. Both intercept and slope were treated as random. SBP and DBP were predictors in separate models.
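The quadratic baseline models and the critical-value calculation can be sketched as follows (Python with statsmodels on synthetic data; the published analyses were run in SPSS, covariates are reduced to age for brevity, and all data values here are invented only to exercise the code):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 169
    df = pd.DataFrame({"sbp": rng.normal(135, 15, n), "age": rng.normal(70, 7, n)})
    # Synthetic WML volumes with a minimum near 124 mmHg, only to exercise the code.
    df["wml"] = 0.0003 * (df["sbp"] - 124) ** 2 + 0.01 * df["age"] + rng.normal(0, 0.2, n)

    X = sm.add_constant(pd.DataFrame({"sbp": df["sbp"], "sbp2": df["sbp"] ** 2, "age": df["age"]}))
    fit = sm.OLS(df["wml"], X).fit()
    b1, b2 = fit.params["sbp"], fit.params["sbp2"]
    critical_sbp = -b1 / (2 * b2)   # vertex of the fitted parabola, Eq. [2]
    print(round(critical_sbp, 1))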
We performed supplementary analyses using pulse pressure (PP) as a predictor in all the models. WML volumes were log transformed. We checked the linear models for violations of model assumptions. For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms. Statistical significance was defined as a P-value <0.05. SPSS (version 25, SPSS, Inc., Chicago, Illinois, USA) software was used for all analyses.
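A rough Python analogue of the repeated-measures mixed models described above, with a random intercept and slope per participant (the published analyses were run in SPSS; the long-format data frame here is entirely hypothetical):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_subj, times = 80, [0.0, 1.2, 2.3]                 # years from baseline
    long = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), len(times)),
        "time": np.tile(times, n_subj),
        "group": np.repeat(rng.choice(["HTN", "NTN"], n_subj), len(times)),
        "age": np.repeat(rng.normal(70, 7, n_subj), len(times)),
    })
    long["wml"] = (0.5 + 0.03 * long["time"] + 0.01 * (long["age"] - 70)
                   + np.repeat(rng.normal(0, 0.15, n_subj), len(times))              # subject intercepts
                   + long["time"] * np.repeat(rng.normal(0, 0.02, n_subj), len(times))  # subject slopes
                   + rng.normal(0, 0.05, len(long)))

    # Volume as outcome; time, group and their interaction as predictors; age as covariate.
    model = smf.mixedlm("wml ~ time * group + age", long,
                        groups=long["subject"], re_formula="~time")
    print(model.fit().summary())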
RESULTS
Our group of 376 participants consisted of HTN (n = 169) and NTN (n = 207) individuals. The characteristics of the groups are given in Table 1 (baseline characteristics of the study group, n = 376; values are mean ± standard deviation unless otherwise indicated; group comparisons by t-test, Mann–Whitney U-test or χ2; brain volumes are mean ± standard error compared by ANCOVA adjusted for age, and additionally for sex for hippocampal volume; because the GM model residuals were not normally distributed, GM values were reanalyzed using ranked ANCOVA and the results remained the same). Two hundred and twenty-eight (60.6%) participants were women (age, 68.5 ± 7.4 years; education, 16.7 ± 2.3 years) and 148 (39.4%) were men (age, 70.7 ± 6.9 years; education, 17.0 ± 2.4 years). The majority of the participants were Caucasian (86%), 10% African American, and 4% other races. In the longitudinal group of 188 participants, 116 (61.7%) were women (age, 69.6 ± 7.5 years; education, 16.7 ± 2.1 years) and 72 (38.3%) men (age, 71.4 ± 6.1 years; education, 17.2 ± 2.3 years). The mean time of follow-up was 2.3 ± 0.70 years. The longitudinal group did not differ from the participants with only one examination in terms of sex, HTN prevalence, SBP, DBP, BMI, total cholesterol, LDL and HDL cholesterol, triglycerides, education and WML volume. Prevalence of smoking and diabetes, as well as the number of participants with normotension, controlled, uncontrolled and untreated hypertension, were also not different between the two groups. However, participants with longitudinal observation were slightly older (age, 70.3 ± 7.0 vs. 68.4 ± 7.4 years, P = 0.005) and had lower glucose levels (82.5 ± 16.0 vs. 85.6 ± 14 mg/dl) than the individuals with only one exam. The clinical characteristics, as well as baseline brain and WML volumes, of HTN and NTN participants in the longitudinal subgroup are given in Table S1. WML volumes by the five baseline study groups are shown in Tables S2 and S3. Baseline analyses. BP and white matter lesion volumes: In the HTN group, SBP and the quadratic SBP term were predictors of WML volume at baseline (the regression model included age, F(3,168) = 14.4, P < 0.001) (Table 2; Fig. 1 shows the quadratic relationship between systolic blood pressure readings and white matter lesion measurements in hypertensive participants). The results were similar with log-transformed data (F(3,168) = 13.1, P < 0.001) (Table 3). The critical SBP value calculated from equation [2] was x = 124 mmHg (raw-data model: x = −(−0.07317/(2 × 0.0003)) = 122 mmHg; log-transformed model: x = −(−0.02741/(2 × 0.00011)) = 124.6 mmHg). In the NTN group, neither SBP nor DBP was related to WML.
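As a quick check of the critical values quoted above, the vertex of the fitted parabola follows directly from the coefficients reported for the raw-data and log-transformed models (a small Python verification; the coefficients are taken from the text):

    # Vertex of the fitted parabola, Eq. [2]: x = -b1 / (2 * b2)
    for label, b1, b2 in [("raw", -0.07317, 0.0003), ("log", -0.02741, 0.00011)]:
        print(label, round(-b1 / (2 * b2), 1))   # ~122 mmHg and ~124.6 mmHg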
BP and brain volumes: In HTN, the quadratic DBP term was a predictor of the mean hippocampal volume at baseline (the model included age and sex, F(4,168) = 11.6, P < 0.001) (Table 4; Fig. 2 shows the relationship between diastolic blood pressure readings and mean hippocampal volume in hypertensive participants). The critical DBP value calculated from equation [2] was x = −(0.00399/(2 × (−0.000026))) = 77 mmHg. GM volume was associated with SBP (β = −0.16, P = 0.02) in a model (F(4,166) = 14.0, P < 0.001) that also included BMI, smoking and age. In NTN, the mean hippocampal volume was inversely related to SBP (β = −0.17, P < 0.008; the regression model included age and sex, F(3,206) = 22.4, P < 0.001), and GM volume was inversely related to DBP (β = −0.15, P = 0.01; the model included age and sex, F(3,206) = 32.3, P < 0.001).
Longitudinal analyses. Changes in BP and volumes over time: SBP did not change significantly in the HTN or NTN group, but the interaction term was significant (P = 0.01), indicating a divergent pattern of SBP dynamics (estimate = −1.6, P = 0.07 for HTN and 1.4, P = 0.07 for NTN).
Similarly, DBP patterns were different (P = 0.03 for the interaction term), with a lack of change in the HTN participants (estimate = −0.6, P = 0.31) and a significant increase in the NTN group (estimate = 1.1, P = 0.03). In both groups, WML volumes increased with time (estimate = 0.025, P < 0.001 for HTN and 0.028, P < 0.001 for NTN). In both groups, brain volumes decreased with time. The estimate for the fixed effect of time for the mean hippocampal volume was −0.0044 (P < 0.001) for HTN and −0.0038 (P < 0.001) for NTN participants. For GM, the estimates were −0.48 (P < 0.001) and −0.45 (P < 0.001), and for WM −0.30 (P < 0.001) and −0.22 (P = 0.001), for the HTN and NTN groups, respectively. Group × time interaction terms were not significant for models with brain or WML volumes, indicating that rates of change did not differ between groups. Figure S2 presents BP and WML changes in both groups.
Secondary analyses: Table S4 shows the stability of the secondary group definitions (untreated, uncontrolled and controlled HTN, Stage 1 HTN, and normotension). Table S5 presents the results from MMRM analyses of WML volumes and BP changes conducted with all five subgroups defined at baseline based on HTN control status; WML volume increased significantly in untreated and controlled hypertension, as well as in normotensive participants. Table S6 and Figure S3 show that participants who at baseline were classified as untreated (n = 19) or uncontrolled hypertension (n = 20) and were reclassified at the last visit as good outcome (improved BP control) experienced the same increase in WML volumes as participants with a bad outcome (BP remained untreated or uncontrolled). Baseline BP and changes in white matter lesion and brain volumes: In the hypertensive group, both baseline SBP (β = 0.33, P = 0.007) and DBP (β = −0.30, P = 0.008) were related to WML changes over time (the entire model, F(4,83) = 10.5, P < 0.001, also included baseline WML volume and smoking). Baseline BP was not related to changes in hippocampal, GM, or WM volumes. In normotensive participants, baseline BP was not related to change in any examined volume. Longitudinal changes in BP and changes in white matter lesion and brain volumes: In the HTN group, neither change in SBP nor change in DBP was associated with a change in WML, hippocampal, GM or WM volumes (Tables S7 and S8). In the NTN group, both an increase in SBP (estimate for the fixed effect of SBP = 0.0023, P = 0.01) and an increase in DBP (estimate = 0.0035, P = 0.01) over time were related to an increase in WML volume. The significant interaction terms indicated that these dynamics were meaningfully different from the HTN group. Moreover, in the normotensive group, both an increase in SBP (estimate for the fixed effect of SBP = −0.0002, P = 0.03) and an increase in DBP (estimate = −0.0004, P = 0.01) over time were related to a reduction in the mean hippocampal volume. However, only for SBP was the interaction term significant, indicating that these changes were different from the HTN group. Results for pulse pressure are presented in the Supplement (pp. 8–9).
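The per-participant volume slopes used as outcomes in the baseline-BP analyses above can be sketched as follows (hypothetical Python; the tiny long-format data frame and its values are invented solely to illustrate the rate-of-change computation):

    import numpy as np
    import pandas as pd

    # Hypothetical long-format data: one row per participant and visit.
    long = pd.DataFrame({
        "subject": [1, 1, 2, 2, 3, 3],
        "time":    [0.0, 2.3, 0.0, 2.1, 0.0, 2.5],     # years from baseline
        "wml":     [0.40, 0.47, 0.55, 0.61, 0.30, 0.33],
    })

    # Rate of change per participant: slope of WML volume regressed on time.
    slopes = long.groupby("subject").apply(lambda d: np.polyfit(d["time"], d["wml"], 1)[0])
    print(slopes)   # one slope per participant; these are then related to baseline BP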
null
null
[ "Clinical assessment", "Study groups", "MR imaging", "MRI processing", "Statistics", "Baseline analyses", "BP and white matter lesion volumes", "BP and brain volumes", "Longitudinal analyses", "Changes in BP and volumes over time", "Secondary analyses", "Baseline BP and changes in white matter lesion and brain volumes", "Longitudinal changes in BP and changes in white matter lesion and brain volumes", "ACKNOWLEDGEMENTS" ]
[ "All participants underwent medical, psychiatric, and neurological assessments, blood tests, electrocardiogram (ECG), and magnetic resonance imaging (MRI) evaluation examinations at baseline, and the longitudinal group also at follow-up. Participants exhibiting dementia, based on a physician-administered interview using the Brief Cognitive Rating Scale, rating on the Global Deterioration Scale [23], and Clinical Dementia Rating were excluded [24]. Fasting blood was tested for complete blood count, liver function, as well as metabolic and lipid panel.\nBlood pressure was taken once in a sitting position, after 5 min of rest. It was measured on the left upper arm using a manual sphygmomanometer. Body mass index (BMI) was calculated as weight/height (kg/m) [2].\nMedication: We recoded the use of antihypertensive medications (angiotensin receptor blockers (ARBs), angiotensin converting enzyme inhibitors (ACE), beta-blockers, diuretics, and calcium channel blockers), statins and glucose lowering drugs.\nDiabetes mellitus was defined as fasting glucose level of 126 mg/dl and higher and/or current use of glucose lowering medication [25]. Smoking status was defined as positive if participants was a current smoker or smoked within last 10 years.", "For consistency with our previous study hypertension was defined as current antihypertensive treatment or BP ≥140/90 mmHg [26]. One hundred and sixty-nine participants were classified as hypertensive; 136 participants were taking antihypertensive medication and 42 were not taking any antihypertensive medication with high BP recorded during their in-office visit. Normotension (NTN) was defined as BP <140/90 mmHg and no antihypertensive treatment.\nFor descriptive purpose and secondary analyses, we described hypertensive participants as untreated: no antihypertensive treatment and BP ≥140/90 mmHg; controlled: antihypertensive treatment and BP <140/90 mmHg, uncontrolled: antihypertensive treatment and BP ≥140/90 mmHg. We also divided our normotensive group into Stage 1 hypertension based on new guidelines: [27] no treatment and BP between 130–139/80–89 mmHg, and normotensive BP <130/80 mmHg.", "All MR imaging was performed on the 3T system (Siemens, Erlangen, Germany). Parameters for Magnetization Prepared Rapid Acquisition Gradient Echo (MPRAGE) images were: repetition time (TR) = 2250 ms, echo time (TE) = 2.7 ms, inversion time (TI) = 900 ms, flip angle (FA) = 8°, slice thickness: 1.0 mm, field of view (FOV) = 200 mm, acquisition matrix = 256 × 192 × 124, reconstructed as 256 × 256 × 124. Axial fluid attenuation inversion recovery (FLAIR) images were acquired with TR/TE/TI 9000/99/2500 ms; FA 130°, slice thickness: 3.3 mm, field of view (FOV) 220 mm, matrix = 256 × 192 reconstructed as thirty 256 × 256 images.", "White matter lesions were segmented on FLAIR MRI with freely available image analysis software FireVoxel (https://firevoxel.org/) using a previously validated workflow [28]. FLAIR images were first corrected for signal nonuniformity using the N3 algorithm. A 3D mask M was then created over the entire brain parenchyma, that is, the gray and white matter, with cerebrospinal fluid (CSF) excluded [29]. Voxels at the external surface S of M were then identified. Next was creation of set L, containing voxels in M with signal intensity 2.5 standard deviations above the mean FLAIR signal. 
Finally, L was filtered to remove (a) cortical voxels, those located within 3 mm of S, (b) small clusters of volume <12 mm³ (presumed to represent image noise), and (c) connected regions having >50% CSF border (presumed to represent the choroid plexus and the septum). The resulting mask represents our estimate of WML. Its volume, VWML, is presented as a percentage of the total brain volume.\nGray matter (GM), white matter (WM) and CSF volumes were estimated using the Statistical Parametric Mapping segmentation procedure (SPM, version 8, with ‘New-Segment’ extension) [30]. Total brain volume was the sum of GM and WM volumes. Intracranial volume (ICV) was the sum of all tissue types. Left and right hippocampal volumes were obtained with FreeSurfer version 6.0 [31], and averaged. They are referred to from now on as ‘mean hippocampal volume’. GM, WM and hippocampal volumes are presented as a fraction of the ICV. MRI data were acquired over the span of 9 years on a 3T magnet that underwent hardware and software upgrades. We detected inter-epoch variability in the GM measurements. To avoid this fixed bias, we z-scored GM values separately by epoch, then recentered and rescaled them. Even though no other structural measure was affected, for consistency, we used rescaled values for hippocampal and WM volumes.", "Categorical variables were compared with χ2 tests. T-test, Mann–Whitney U-test, and analysis of covariance (ANCOVA) were used to compare group means for continuous variables, depending on the data distribution and the need to adjust for age and sex. Normality was tested with the Shapiro–Wilk test.\nRelationships between BP, WML volume and brain volumes (hippocampal, GM, WM) at baseline were examined with linear regression. Volumes were dependent variables; age, sex, body mass index, smoking, diabetes, systolic blood pressure and diastolic blood pressure were independent predictors. These relationships were tested in participants with and without hypertension separately. We present here F-statistics for the regression models, as well as unstandardized (B) and standardized (β) coefficients for individual predictors. Again, we hypothesized that in the HTN group an optimal BP may exist [21], which minimizes WML volume and maximizes brain volumes. Thus, as before, we tested for the existence of the peak by introducing both the linear (BP) and quadratic (BP²) terms into the regression models.\nAfter a quadratic relationship between the volume and BP was confirmed, the critical BP at which volume reaches its maximum or minimum value was determined as x = −B_linear/(2 × B_quadratic) (equation [2]), where B_linear and B_quadratic are the unstandardized coefficients of the linear and quadratic BP terms.\nTo examine whether BP, WML and brain volumes changed over time, we used mixed models for repeated measures (MMRM), where BP or volume was a dependent variable and time, group (HTN vs. NTN), and group × time interaction were predictors. For models with WML or brain volumes, age was added as a covariate. Both intercept and slope were treated as random.\nTo test whether baseline BP predicted change in WML and brain volumes, we created volume rates of change by regressing volume on time for each participant and tested whether baseline SBP and DBP were associated with the volume slope. Baseline volumes, BMI, age, sex, smoking and diabetes status were initially included in the models, and retained only when significant. The relationships were tested in the normo- and hypertensive participants separately. 
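The critical-value step described above is easy to make concrete. The sketch below is an illustrative Python example, not the authors' SPSS code: it fits a model with age plus linear and quadratic SBP terms to simulated data and then evaluates the vertex −B_linear/(2 × B_quadratic). All variable names and the simulated values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data standing in for the hypertensive group.
n = 170
age = rng.normal(70, 7, n)
sbp = rng.normal(135, 15, n)
# Simulated WML volume with a minimum near SBP ~ 124 mmHg (for illustration only).
wml = 0.5 + 0.02 * (age - 70) + 0.0003 * (sbp - 124) ** 2 + rng.normal(0, 0.1, n)

# Design matrix: intercept, age, linear SBP term, quadratic SBP term.
X = np.column_stack([np.ones(n), age, sbp, sbp ** 2])
coef, *_ = np.linalg.lstsq(X, wml, rcond=None)
b_lin, b_quad = coef[2], coef[3]

# Vertex of the fitted parabola: the 'critical' SBP at which the fitted WML volume is minimized.
critical_sbp = -b_lin / (2.0 * b_quad)
print(f"critical SBP ≈ {critical_sbp:.1f} mmHg")
```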
For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms.\nTo test whether change in BP (independent variable) was related to change in volume (dependent variable), we used MMRM. WML or brain volumes were a dependent variable and BP (SBP or DBP), group (HTN vs. NTN), and group × BP interaction were predictors. Both intercept and slope were treated as random. SBP and DBP were predictors in separate models.\nWe performed supplementary analyses using pulse pressure (PP) as a predictor in all the models.\nWML volumes were log transformed. We checked the linear models for violations of model assumptions. For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms. Statistical significance was defined as a P-value <0.05. SPSS (version 25, SPSS, Inc., Chicago, Illinois, USA) software was used for all analyses.", "BP and white matter lesion volumes In the HTN group, SBP and the quadratic SBP term were predictors of WML volume at baseline (regression model included age, F3,168 = 14.4, P < 0.001) (Table 2, Fig. 1). The results were similar with log-transformed data (F3,168 = 13.1, P < 0.001) (Table 3). The critical SBP value calculated from equation [2] was x = 124 mmHg.\nLinear regression model predicting WML volume in the HTN group at baseline (raw data)\nCritical value (Eq. [2]): x = –(−0.07317/(2 × 0.0003)) = 122 mmHg.\nHTN, hypertension; WML, white matter lesion.\nQuadratic relationship between systolic blood pressure readings and white matter lesion measurements in hypertensive participants.\nLinear regression model predicting WML volume in the HTN group at baseline (log transformed data: log(WML volume))\nCritical value (Eq. [2]): x = –(−0.02741/(2 × 0.00011)) = 124.6 mmHg.\nHTN, hypertension; WML, white matter lesion.\nIn the NTN group, neither SBP nor DBP were related to WML.\nBP and brain volumes In HTN, the quadratic DBP term was a predictor of the mean hippocampal volume at baseline (the model included age and sex, F4,168 = 11.6, P < 0.001) (Table 4, Fig. 2). The critical DBP value calculated from equation [2] was x = 77 mmHg.\nModel predicting mean hippocampal volume in the HTN group at baseline\nCritical value (Eq. 
[2]): x = −(0.00399/(2 × (−0.000026))) = 77 mmHg.\nHTN, hypertension; WML, white matter lesion.\nRelationship between diastolic blood pressure readings and mean hippocampal brain volume measurements in hypertensive participants.\nGM was associated with SBP (β = −0.16, P = 0.02) and the model (F4,166 = 14.0, P < 0.001) included BMI, smoking and age.\nIn NTN, the mean hippocampal volume was inversely related to SBP (β = −0.17, P < 0.008), (regression model included age and sex, F3,206 = 22.4, P < 0.001). GM volume was inversely related to DBP (β = −0.15, P = 0.01), (the model included age and sex, F3,206 = 32.3, P < 0.001).\nIn HTN, the quadratic DBP term was a predictor of the mean hippocampal volume at baseline (the model included age and sex, F4,168 = 11.6, P < 0.001) (Table 4, Fig. 2). The critical DBP value calculated from equation [2] was x = 77 mmHg.\nModel predicting mean hippocampal volume in the HTN group at baseline\nCritical value (Eq. [2]): x = −(0.00399/(2 × (−0.000026))) = 77 mmHg.\nHTN, hypertension; WML, white matter lesion.\nRelationship between diastolic blood pressure readings and mean hippocampal brain volume measurements in hypertensive participants.\nGM was associated with SBP (β = −0.16, P = 0.02) and the model (F4,166 = 14.0, P < 0.001) included BMI, smoking and age.\nIn NTN, the mean hippocampal volume was inversely related to SBP (β = −0.17, P < 0.008), (regression model included age and sex, F3,206 = 22.4, P < 0.001). GM volume was inversely related to DBP (β = −0.15, P = 0.01), (the model included age and sex, F3,206 = 32.3, P < 0.001).", "In the HTN group, SBP and quadratic SBP term were predictors of WML volume at baseline (regression model included age, F3,168 = 14.4, P < 0.001) (Table 2, Fig. 1). The results were similar with log-transformed data (F3,168 = 13.1, P < 0.001) (Table 3). The critical SBP value calculated from equation [2] was x = 124 mmHg.\nLinear regression model predicting WML volume in the HTN group at baseline (raw data)\nCritical value (Eq. [2]): x = –(−0.07317/(2 × 0.0003)) = 122 mmHg.\nHTN, hypertension; WML, white matter lesion.\nQuadratic relationship between systolic blood pressure readings and white matter lesion measurements in hypertensive participants.\nLinear regression model predicting WML volume in the HTN group at baseline (log transformed data: log(WML volume))\nCritical value (Eq. [2]): x = –(−0.02741/(2 × 0.00011)) = 124.6 mmHg.\nHTN, hypertension; WML, white matter lesion.\nIn the NTN group, neither SBP nor DBP were related to WML.", "In HTN, the quadratic DBP term was a predictor of the mean hippocampal volume at baseline (the model included age and sex, F4,168 = 11.6, P < 0.001) (Table 4, Fig. 2). The critical DBP value calculated from equation [2] was x = 77 mmHg.\nModel predicting mean hippocampal volume in the HTN group at baseline\nCritical value (Eq. [2]): x = −(0.00399/(2 × (−0.000026))) = 77 mmHg.\nHTN, hypertension; WML, white matter lesion.\nRelationship between diastolic blood pressure readings and mean hippocampal brain volume measurements in hypertensive participants.\nGM was associated with SBP (β = −0.16, P = 0.02) and the model (F4,166 = 14.0, P < 0.001) included BMI, smoking and age.\nIn NTN, the mean hippocampal volume was inversely related to SBP (β = −0.17, P < 0.008), (regression model included age and sex, F3,206 = 22.4, P < 0.001). 
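As a quick arithmetic check, the critical values quoted above follow directly from the −B_linear/(2 × B_quadratic) formula and the printed coefficients; the snippet below simply reproduces that calculation.

```python
# Reproducing the reported critical-value arithmetic: x = -b_linear / (2 * b_quadratic).
reported = {
    "WML vs. SBP (raw)":        (-0.07317, 0.0003),    # -> ~122 mmHg
    "WML vs. SBP (log)":        (-0.02741, 0.00011),   # -> ~124.6 mmHg
    "hippocampal vol. vs. DBP": (0.00399, -0.000026),  # -> ~77 mmHg
}
for label, (b_lin, b_quad) in reported.items():
    print(f"{label}: {-b_lin / (2 * b_quad):.1f} mmHg")
```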
GM volume was inversely related to DBP (β = −0.15, P = 0.01), (the model included age and sex, F3,206 = 32.3, P < 0.001).", "Changes in BP and volumes over time SBP did not change significantly in the HTN or NTN group, but the interaction term was significant (P = 0.01) indicating divergent pattern of SBP dynamics (the estimate = −1.6, P = 0.07 for HTN and 1.4, P = 0.07 for NTN). Similarly, DBP patterns were different (P = 0.03 for interaction term) with a lack of change in the HTN participants (estimate −0.6, P = 0.31) and a significant increase in the NTN group (the estimate = 1.1, P = 0.03). In both groups, WML volumes increased with time, (the estimate = 0.025, P < 0.001) for HTN and (0.028, P < 0.001) for NTN. In both groups, brain volumes decreased with time. An estimate for the fixed effect of time for the mean hippocampal volumes was (−0.0044, P < 0.001) for HTN and (−0.0038, P < 0.001) among NTN participants. For GM, they were (−0.48, P < 0.001) and (−0.45, P < 0.001); and finally, for WM (−0.30, P < 0.001) and (−0.22, P = 0.001), for the HTN and NTN groups, respectively. Group × time interaction terms were not significant for models with brain or WML volumes, indicating that rates of change did not differ between groups. Figure S2 presents BP and WML changes in both groups.\nSBP did not change significantly in the HTN or NTN group, but the interaction term was significant (P = 0.01) indicating divergent pattern of SBP dynamics (the estimate = −1.6, P = 0.07 for HTN and 1.4, P = 0.07 for NTN). Similarly, DBP patterns were different (P = 0.03 for interaction term) with a lack of change in the HTN participants (estimate −0.6, P = 0.31) and a significant increase in the NTN group (the estimate = 1.1, P = 0.03). In both groups, WML volumes increased with time, (the estimate = 0.025, P < 0.001) for HTN and (0.028, P < 0.001) for NTN. In both groups, brain volumes decreased with time. An estimate for the fixed effect of time for the mean hippocampal volumes was (−0.0044, P < 0.001) for HTN and (−0.0038, P < 0.001) among NTN participants. For GM, they were (−0.48, P < 0.001) and (−0.45, P < 0.001); and finally, for WM (−0.30, P < 0.001) and (−0.22, P = 0.001), for the HTN and NTN groups, respectively. Group × time interaction terms were not significant for models with brain or WML volumes, indicating that rates of change did not differ between groups. Figure S2 presents BP and WML changes in both groups.", "SBP did not change significantly in the HTN or NTN group, but the interaction term was significant (P = 0.01) indicating divergent pattern of SBP dynamics (the estimate = −1.6, P = 0.07 for HTN and 1.4, P = 0.07 for NTN). Similarly, DBP patterns were different (P = 0.03 for interaction term) with a lack of change in the HTN participants (estimate −0.6, P = 0.31) and a significant increase in the NTN group (the estimate = 1.1, P = 0.03). In both groups, WML volumes increased with time, (the estimate = 0.025, P < 0.001) for HTN and (0.028, P < 0.001) for NTN. In both groups, brain volumes decreased with time. An estimate for the fixed effect of time for the mean hippocampal volumes was (−0.0044, P < 0.001) for HTN and (−0.0038, P < 0.001) among NTN participants. For GM, they were (−0.48, P < 0.001) and (−0.45, P < 0.001); and finally, for WM (−0.30, P < 0.001) and (−0.22, P = 0.001), for the HTN and NTN groups, respectively. Group × time interaction terms were not significant for models with brain or WML volumes, indicating that rates of change did not differ between groups. 
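For readers who want to see the shape of the repeated-measures model behind these estimates, here is a minimal mixed-model sketch in Python with statsmodels (the study itself used SPSS). The random intercept and random slope per participant mirror the "both intercept and slope were treated as random" specification above; the long-format data frame, its column names, and the simulated effect sizes are all assumptions made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical long-format data: three visits per participant, two groups.
rows = []
for sid in range(120):
    group = "HTN" if sid < 60 else "NTN"
    age = rng.normal(70, 7)
    subj_intercept = rng.normal(0, 0.05)
    for time in (0.0, 1.2, 2.4):  # years from baseline
        wml = 0.6 + subj_intercept + 0.025 * time + 0.005 * (age - 70) + rng.normal(0, 0.02)
        rows.append({"subject": sid, "group": group, "age": age, "time": time, "wml": wml})
df = pd.DataFrame(rows)

# Mixed model for repeated measures: volume ~ time * group + age,
# with a random intercept and a random slope for time per participant.
model = smf.mixedlm("wml ~ time * group + age", df, groups=df["subject"], re_formula="~time")
result = model.fit()
print(result.summary())
```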
Figure S2 presents BP and WML changes in both groups.", "Table S4 shows the stability of secondary group definition (untreated, uncontrolled and controlled HTN, Stage 1 HTN, and normotension). Table S5 presents the results from MMRM analyses of WML volumes and BP changes conducted with all five subgroups defined at baseline based on HTN control status. One can appreciate that WML volume increased significantly in untreated and controlled hypertension, as well as in normotensive participants.\nTable S6 and Figure S3 shows that participants who at baseline were classified as untreated (n = 19) or uncontrolled hypertension (n = 20) and reclassified at the last visit as good outcome (improved BP control) experience the same increase in WML volumes as participants with bad outcome (BP remained untreated or uncontrolled).\nBaseline BP and changes in white matter lesion and brain volumes In the hypertensive group both baseline SBP (β = 0.33, P = 0.007) and DBP (β = −0.30, P = 0.008) were related to WML changes over time (entire model F4,83 = 10.5, P < 0.001, included also baseline WML volumes and smoking). Baseline BP was not related to changes in hippocampal, GM, or WM volumes.\nIn normotensive participants baseline BP was not related to change in any examined volumes.\nIn the hypertensive group both baseline SBP (β = 0.33, P = 0.007) and DBP (β = −0.30, P = 0.008) were related to WML changes over time (entire model F4,83 = 10.5, P < 0.001, included also baseline WML volumes and smoking). Baseline BP was not related to changes in hippocampal, GM, or WM volumes.\nIn normotensive participants baseline BP was not related to change in any examined volumes.\nLongitudinal changes in BP and changes in white matter lesion and brain volumes In the HTN group, neither change in SBP nor DBP were associated with a change in WML, hippocampal, GM or WM volumes (Tables S7 and S8).\nIn the NTN group, both increase in SBP (an estimate for the fixed effect of SBP = 0.0023, P = 0.01) and DBP (an estimate for the fixed effect of DBP = 0.0035, P = 0.01) over time were related to the increase in WML volume. The significant interaction term indicated that these dynamics were meaningfully different from the HTN group. Moreover, in the normotensive group both increase in SBP (an estimate for the fixed effect of SBP = −0.0002, P = 0.03) and DBP (an estimate for the fixed effect of DBP = −0.0004, P = 0.01) over time were related to reduction in the mean hippocampal volume. However, only for SBP the interaction term was significant, indicating that these changes were different from HTN group.\nResults for pulse pressure are presented in the Supplement p. 8–9.\nIn the HTN group, neither change in SBP nor DBP were associated with a change in WML, hippocampal, GM or WM volumes (Tables S7 and S8).\nIn the NTN group, both increase in SBP (an estimate for the fixed effect of SBP = 0.0023, P = 0.01) and DBP (an estimate for the fixed effect of DBP = 0.0035, P = 0.01) over time were related to the increase in WML volume. The significant interaction term indicated that these dynamics were meaningfully different from the HTN group. Moreover, in the normotensive group both increase in SBP (an estimate for the fixed effect of SBP = −0.0002, P = 0.03) and DBP (an estimate for the fixed effect of DBP = −0.0004, P = 0.01) over time were related to reduction in the mean hippocampal volume. 
However, only for SBP was the interaction term significant, indicating that these changes differed from the HTN group.\nResults for pulse pressure are presented in the Supplement p. 8–9.", "In the hypertensive group, both baseline SBP (β = 0.33, P = 0.007) and DBP (β = −0.30, P = 0.008) were related to WML changes over time (the full model, F4,83 = 10.5, P < 0.001, also included baseline WML volumes and smoking). Baseline BP was not related to changes in hippocampal, GM, or WM volumes.\nIn normotensive participants, baseline BP was not related to change in any of the examined volumes.", "In the HTN group, neither change in SBP nor in DBP was associated with a change in WML, hippocampal, GM or WM volumes (Tables S7 and S8).\nIn the NTN group, both increases in SBP (an estimate for the fixed effect of SBP = 0.0023, P = 0.01) and DBP (an estimate for the fixed effect of DBP = 0.0035, P = 0.01) over time were related to the increase in WML volume. The significant interaction term indicated that these dynamics were meaningfully different from the HTN group. Moreover, in the normotensive group, both increases in SBP (an estimate for the fixed effect of SBP = −0.0002, P = 0.03) and DBP (an estimate for the fixed effect of DBP = −0.0004, P = 0.01) over time were related to a reduction in the mean hippocampal volume. However, only for SBP was the interaction term significant, indicating that these changes differed from the HTN group.\nResults for pulse pressure are presented in the Supplement p. 8–9.", "A special thanks to Ke Xi for her help with the statistical modeling as well as assistance in the production of regression plots for this manuscript.\nSources of funding: Study funding comes from NIH grants HL111724, NS104364, AG022374, AG12101, AG08051, and Alzheimer's Association NIRG-09-132490.\nDisclosures: None\nConflicts of interest There are no conflicts of interest." ]
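The secondary analyses above rest on a two-stage approach: regress each participant's volume on time to get a rate of change, then ask whether baseline BP predicts that slope. A minimal sketch of that logic, using hypothetical column names and simulated data (the original analyses were run in SPSS), might look like this.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical long-format data: 3 visits per participant, WML volume drifting upward.
rows = []
for sid in range(80):
    base_sbp = rng.normal(140, 15)
    base_wml = rng.normal(0.6, 0.2)
    drift = 0.02 + 0.0005 * (base_sbp - 140)  # larger slope at higher baseline SBP
    for t in (0.0, 1.2, 2.4):
        rows.append({"subject": sid, "time": t,
                     "wml": base_wml + drift * t + rng.normal(0, 0.02),
                     "baseline_sbp": base_sbp, "baseline_wml": base_wml})
df = pd.DataFrame(rows)

# Stage 1: per-participant rate of change (slope of WML volume regressed on time).
slopes = {sid: np.polyfit(g["time"], g["wml"], 1)[0] for sid, g in df.groupby("subject")}

# Stage 2: does baseline SBP predict that rate of change, adjusting for baseline WML volume?
stage2 = df[df["time"] == 0.0].assign(wml_slope=lambda d: d["subject"].map(slopes))
X = sm.add_constant(stage2[["baseline_sbp", "baseline_wml"]])
print(sm.OLS(stage2["wml_slope"], X).fit().params)
```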
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Participants recruitment", "Clinical assessment", "Study groups", "MR imaging", "MRI processing", "Statistics", "RESULTS", "Baseline analyses", "BP and white matter lesion volumes", "BP and brain volumes", "Longitudinal analyses", "Changes in BP and volumes over time", "Secondary analyses", "Baseline BP and changes in white matter lesion and brain volumes", "Longitudinal changes in BP and changes in white matter lesion and brain volumes", "DISCUSSION", "ACKNOWLEDGEMENTS", "Conflicts of interest", "Supplementary Material" ]
[ "The impact of cardiovascular disease on the United States’ population cannot be overstated. One out of every three Americans is hypertensive, with 66% of adults aged 60 or older exhibiting above average blood pressure readings [1,2]. It was estimated in 2018 that 11 million individuals in the United States were living with undiagnosed hypertension (HTN), placing them at an increased risk for adverse cardiovascular events that can directly affect the central nervous system as well as cognition [1,2].\nHigh blood pressure (BP) can cause damage and dysfunction to organ systems with perhaps the most significant of such being the brain and the heart [3–5]. Hypertensive patients develop atherosclerotic plaques within the large blood vessels supplying blood flow both into and out of the brain at an accelerated rate, increasing their risk of an ischemic event [6,7]. HTN can cause a remodeling of the brain microvasculature with vessels exhibiting rarefication and narrowing lumen [4,6,8]. Such changes in vessel architecture can have negative effects on cerebral blood flow (CBF) and autoregulation. Reduction in CBF has been shown to cause brain atrophy both in animal models [9] and in clinical population [10], and has been consistently linked with white matter damage [11,12]. White matter lesions (WML) are very common in HTN participants [13–15]. Indeed, several studies have confirmed that two major risk factors for WML development are increased age and high BP, with smoking, high cholesterol, and heart disease also being significant contributors to lesion formation [14,16,17].\nAlthough benefits of HTN treatment are indisputable for major cardiovascular outcome, the target blood pressure for the brain remains under debate. While some argue for aggressive blood pressure reductions [18], others point to possible danger of this approach [19], as marked lowering of BP in longstanding HTN may cause relative hypoperfusion. This suggested to us the existence of an optimum BP range.\nA recent metanalysis found a U-shape relationship between SBP and a risk of dementia in participants older than 75, as well as U-shape relationship between SBP and combined dementia and mortality across all age groups starting at 60 [20]. Our group has previously reported a quadratic (inverted U-shape) relationship between CBF and SBP in hypertensive patients, pointing to the optimum lying between 120 and 130 mmHg where CBF was the highest [21]. It was consistently shown that WML volumes are inversely related to global and local indices of CBF [22]. In this observational report, we investigated whether in hypertensive participants WML and atrophy, structural markers of HTN-related brain damage, show relationship with SBP, which is parallel to the one we observed for CBF and SBP: namely is there a quadratic association between them.", "The data of this study are available from the corresponding author upon request.\nParticipants recruitment Study participants were recruited at the NYU School of Medicine, at the former Center for Brain Health. Out of 376 individuals described in this report, 369 (98%) were previously a part of a study assessing the relationship between BP and CBF. Here, we report on participants with quantitative assessment of white matter pathology. Baseline assessment was performed in a larger group of n = 376 participants, and n = 188 of them had follow-up examinations. The reasons for lack of follow-up are described in the Supplement. Participants were enrolled between November 2010 and November 2017. 
All participants signed Institutional Review Board (ethics committee)-approved consent forms and were free of neocortical stroke, brain tumor, life-long psychotic disorders and dementia at baseline. Information pertaining to recruitment, exclusion criteria, and participant flow is presented in the Supplement.\nStudy participants were recruited at the NYU School of Medicine, at the former Center for Brain Health. Out of 376 individuals described in this report, 369 (98%) were previously a part of a study assessing the relationship between BP and CBF. Here, we report on participants with quantitative assessment of white matter pathology. Baseline assessment was performed in a larger group of n = 376 participants, and n = 188 of them had follow-up examinations. The reasons for lack of follow-up are described in the Supplement. Participants were enrolled between November 2010 and November 2017. All participants signed Institutional Review Board (ethics committee)-approved consent forms and were free of neocortical stroke, brain tumor, life-long psychotic disorders and dementia at baseline. Information pertaining to recruitment, exclusion criteria, and participant flow is presented in the Supplement.\nClinical assessment All participants underwent medical, psychiatric, and neurological assessments, blood tests, electrocardiogram (ECG), and magnetic resonance imaging (MRI) evaluation examinations at baseline, and the longitudinal group also at follow-up. Participants exhibiting dementia, based on a physician-administered interview using the Brief Cognitive Rating Scale, rating on the Global Deterioration Scale [23], and Clinical Dementia Rating were excluded [24]. Fasting blood was tested for complete blood count, liver function, as well as metabolic and lipid panel.\nBlood pressure was taken once in a sitting position, after 5 min of rest. It was measured on the left upper arm using a manual sphygmomanometer. Body mass index (BMI) was calculated as weight/height (kg/m) [2].\nMedication: We recoded the use of antihypertensive medications (angiotensin receptor blockers (ARBs), angiotensin converting enzyme inhibitors (ACE), beta-blockers, diuretics, and calcium channel blockers), statins and glucose lowering drugs.\nDiabetes mellitus was defined as fasting glucose level of 126 mg/dl and higher and/or current use of glucose lowering medication [25]. Smoking status was defined as positive if participants was a current smoker or smoked within last 10 years.\nAll participants underwent medical, psychiatric, and neurological assessments, blood tests, electrocardiogram (ECG), and magnetic resonance imaging (MRI) evaluation examinations at baseline, and the longitudinal group also at follow-up. Participants exhibiting dementia, based on a physician-administered interview using the Brief Cognitive Rating Scale, rating on the Global Deterioration Scale [23], and Clinical Dementia Rating were excluded [24]. Fasting blood was tested for complete blood count, liver function, as well as metabolic and lipid panel.\nBlood pressure was taken once in a sitting position, after 5 min of rest. It was measured on the left upper arm using a manual sphygmomanometer. 
Body mass index (BMI) was calculated as weight/height (kg/m) [2].\nMedication: We recoded the use of antihypertensive medications (angiotensin receptor blockers (ARBs), angiotensin converting enzyme inhibitors (ACE), beta-blockers, diuretics, and calcium channel blockers), statins and glucose lowering drugs.\nDiabetes mellitus was defined as fasting glucose level of 126 mg/dl and higher and/or current use of glucose lowering medication [25]. Smoking status was defined as positive if participants was a current smoker or smoked within last 10 years.\nStudy groups For consistency with our previous study hypertension was defined as current antihypertensive treatment or BP ≥140/90 mmHg [26]. One hundred and sixty-nine participants were classified as hypertensive; 136 participants were taking antihypertensive medication and 42 were not taking any antihypertensive medication with high BP recorded during their in-office visit. Normotension (NTN) was defined as BP <140/90 mmHg and no antihypertensive treatment.\nFor descriptive purpose and secondary analyses, we described hypertensive participants as untreated: no antihypertensive treatment and BP ≥140/90 mmHg; controlled: antihypertensive treatment and BP <140/90 mmHg, uncontrolled: antihypertensive treatment and BP ≥140/90 mmHg. We also divided our normotensive group into Stage 1 hypertension based on new guidelines: [27] no treatment and BP between 130–139/80–89 mmHg, and normotensive BP <130/80 mmHg.\nFor consistency with our previous study hypertension was defined as current antihypertensive treatment or BP ≥140/90 mmHg [26]. One hundred and sixty-nine participants were classified as hypertensive; 136 participants were taking antihypertensive medication and 42 were not taking any antihypertensive medication with high BP recorded during their in-office visit. Normotension (NTN) was defined as BP <140/90 mmHg and no antihypertensive treatment.\nFor descriptive purpose and secondary analyses, we described hypertensive participants as untreated: no antihypertensive treatment and BP ≥140/90 mmHg; controlled: antihypertensive treatment and BP <140/90 mmHg, uncontrolled: antihypertensive treatment and BP ≥140/90 mmHg. We also divided our normotensive group into Stage 1 hypertension based on new guidelines: [27] no treatment and BP between 130–139/80–89 mmHg, and normotensive BP <130/80 mmHg.\nMR imaging All MR imaging was performed on the 3T system (Siemens, Erlangen, Germany). Parameters for Magnetization Prepared Rapid Acquisition Gradient Echo (MPRAGE) images were: repetition time (TR) = 2250 ms, echo time (TE) = 2.7 ms, inversion time (TI) = 900 ms, flip angle (FA) = 8°, slice thickness: 1.0 mm, field of view (FOV) = 200 mm, acquisition matrix = 256 × 192 × 124, reconstructed as 256 × 256 × 124. Axial fluid attenuation inversion recovery (FLAIR) images were acquired with TR/TE/TI 9000/99/2500 ms; FA 130°, slice thickness: 3.3 mm, field of view (FOV) 220 mm, matrix = 256 × 192 reconstructed as thirty 256 × 256 images.\nAll MR imaging was performed on the 3T system (Siemens, Erlangen, Germany). Parameters for Magnetization Prepared Rapid Acquisition Gradient Echo (MPRAGE) images were: repetition time (TR) = 2250 ms, echo time (TE) = 2.7 ms, inversion time (TI) = 900 ms, flip angle (FA) = 8°, slice thickness: 1.0 mm, field of view (FOV) = 200 mm, acquisition matrix = 256 × 192 × 124, reconstructed as 256 × 256 × 124. 
Axial fluid attenuation inversion recovery (FLAIR) images were acquired with TR/TE/TI 9000/99/2500 ms; FA 130°, slice thickness: 3.3 mm, field of view (FOV) 220 mm, matrix = 256 × 192 reconstructed as thirty 256 × 256 images.\nMRI processing White matter lesions were segmented on FLAIR MRI with freely available image analysis software FireVoxel (https://firevoxel.org/) using a previously validated workflow [28]. FLAIR images were first corrected for signal nonuniformity using the N3 algorithm. A 3D mask M was then created over the entire brain parenchyma, that is, the gray and white matter, with cerebrospinal fluid (CSF) excluded [29]. Voxels at the external surface S of M were then identified. Next was creation of set L, containing voxels in M with signal intensity 2.5 standard deviations above the mean FLAIR signal. Finally, L was filtered to remove (a) cortical voxels, those located within 3 mm of S, (b) small clusters of volume <12 mm3 (presumed to represent image noise), and (c) connected regions having >50% CSF border (presumed to represent the choroid plexus and the septum). The resulting mask represents our estimate of WML. Its volume VWML is presented as percentage of the total brain volume.\nGray matter (GM), white matter (WM) and CSF volumes were estimated using Statistical Parametric Mapping segmentation procedure (SPM, version 8, with ‘New-Segment’ extension) [30]. Total brain volume was the sum of GM and WM volumes. Intracranial volume (ICV) was the sum of all tissue types. Left and right hippocampal volumes were obtained with FreeSurfer version 6.0 [31], and averaged. They are referred to from now on as ‘mean hippocampal volume’. GM, WM and hippocampal volumes are presented as a fraction of the ICV. MRI data were acquired over the span of 9 years a 3T magnet that underwent hardware and software upgrades. We detected the inter-epoch variability in GM measurements. To avoid this fixed bias we z-scored GM values separately by epoch, recentered and rescaled them. Even though no other structural measure was affected, for consistency, we used rescaled values for hippocampal and WM volumes.\nWhite matter lesions were segmented on FLAIR MRI with freely available image analysis software FireVoxel (https://firevoxel.org/) using a previously validated workflow [28]. FLAIR images were first corrected for signal nonuniformity using the N3 algorithm. A 3D mask M was then created over the entire brain parenchyma, that is, the gray and white matter, with cerebrospinal fluid (CSF) excluded [29]. Voxels at the external surface S of M were then identified. Next was creation of set L, containing voxels in M with signal intensity 2.5 standard deviations above the mean FLAIR signal. Finally, L was filtered to remove (a) cortical voxels, those located within 3 mm of S, (b) small clusters of volume <12 mm3 (presumed to represent image noise), and (c) connected regions having >50% CSF border (presumed to represent the choroid plexus and the septum). The resulting mask represents our estimate of WML. Its volume VWML is presented as percentage of the total brain volume.\nGray matter (GM), white matter (WM) and CSF volumes were estimated using Statistical Parametric Mapping segmentation procedure (SPM, version 8, with ‘New-Segment’ extension) [30]. Total brain volume was the sum of GM and WM volumes. Intracranial volume (ICV) was the sum of all tissue types. Left and right hippocampal volumes were obtained with FreeSurfer version 6.0 [31], and averaged. 
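The lesion-segmentation workflow just described (intensity thresholding at 2.5 SD above the mean FLAIR signal within the brain mask, exclusion of voxels near the cortical surface, and removal of small or CSF-bordering clusters) can be sketched with numpy and scipy. This is only an illustration of the logic, not FireVoxel's actual implementation; it assumes co-registered 3D arrays with 1 mm isotropic voxels, and all names and sizes are hypothetical.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

# Hypothetical stand-ins for co-registered 3D volumes (1 mm isotropic voxels assumed).
shape = (64, 64, 64)
flair = rng.normal(100, 10, shape)
brain_mask = np.zeros(shape, dtype=bool)  # parenchyma mask M (GM + WM, CSF excluded)
brain_mask[8:56, 8:56, 8:56] = True
csf_mask = np.zeros(shape, dtype=bool)
csf_mask[30:34, 30:34, 30:34] = True

# Threshold: voxels in M brighter than mean + 2.5 SD of the FLAIR signal within M.
mu, sd = flair[brain_mask].mean(), flair[brain_mask].std()
lesions = brain_mask & (flair > mu + 2.5 * sd)

# (a) Drop voxels within 3 mm of the external surface S of M (approximated by erosion).
interior = ndimage.binary_erosion(brain_mask, iterations=3)
lesions &= interior

# (b) Drop clusters smaller than 12 mm^3 and (c) clusters with >50% CSF border.
labels, n = ndimage.label(lesions)
for i in range(1, n + 1):
    cluster = labels == i
    border = ndimage.binary_dilation(cluster) & ~cluster
    too_small = cluster.sum() < 12
    mostly_csf = csf_mask[border].mean() > 0.5 if border.any() else False
    if too_small or mostly_csf:
        lesions[cluster] = False

wml_percent = 100.0 * lesions.sum() / brain_mask.sum()  # WML volume as % of brain volume
print(f"WML volume ≈ {wml_percent:.2f}% of brain volume")
```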
They are referred to from now on as ‘mean hippocampal volume’. GM, WM and hippocampal volumes are presented as a fraction of the ICV. MRI data were acquired over the span of 9 years a 3T magnet that underwent hardware and software upgrades. We detected the inter-epoch variability in GM measurements. To avoid this fixed bias we z-scored GM values separately by epoch, recentered and rescaled them. Even though no other structural measure was affected, for consistency, we used rescaled values for hippocampal and WM volumes.\nStatistics Categorical variables were compared with χ2 tests. T-test, Mann–Whitney U-test, and analysis of covariance (ANCOVA) were used to compare group means for continuous variables, depending on the data distribution and the need to adjust for age and sex. Normality was tested with Shapiro–Wilk test.\nRelationships between BP, WML volume and brain volumes (hippocampal, GM, WM) at baseline were examined with linear regression. Volumes were dependent variables; age, sex, body mass index, smoking, diabetes, systolic blood pressure and diastolic blood pressure were independent predictors. These relationships were tested in participants with and without hypertension separately. We present here F-statistics for the regression models, as well unstandardized (B) and standardized (β) coefficients for individual predictors. Again, we hypothesized that in the HTN group an optimal BP may exist [21], which minimizes WML volume and maximizes brain volumes. Thus, as before, we tested for the existence of the peak, by introducing both the linear BP and quadratic (BP [2]) terms into the regression models.\nAfter a quadratic relationship between the volume and BP was confirmed, the critical BP at which volume reaches its maximum or minimum value was determined as:\nTo examine whether BP, WML and brain volumes changed over time, we used mixed models for repeated measures (MMRM), where BP or volume was a dependent variable and time, group (HTN vs. NTN), and group × time interaction were predictors. For models with WML or brain volumes age as added as a covariate. Both intercept and slope were treated as random.\nTo test whether baseline BP predicted change in WML and brain volumes, we created volume rates of change by regressing volume on time for each participant and tested whether baseline SBP and DBP was associated with volume slope. Baseline volumes, BMI, age, sex, smoking and diabetes status were initially included in the models, and retained only when significant. The relationships were tested in the normo- and hypertensive participants separately. For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms.\nTo test whether change in BP (independent variable) was related to change in volume (dependent) we used MMRM. WML or brain volumes were a dependent variable and BP (SBP or DBP), group (HTN vs. NTN), and group × BP interaction were predictors. Both intercept and slope were treated as random. SBP and DBP were predictors in separate models.\nWe performed supplementary analyses using pulse pressure (PP) as a predictor in all the models.\nWML volumes were log transformed. We checked the linear models for violations of models assumptions. For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms. Statistical significance was defined as a P-value <0.05. 
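Returning to the epoch-harmonization step described earlier in this section (z-scoring GM values separately by scanner epoch, then recentering and rescaling), a minimal sketch follows. The choice of the pooled mean and SD as the reference scale is an assumption; the article does not state which reference was used.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Hypothetical GM fractions with a fixed offset between two scanner epochs.
df = pd.DataFrame({
    "epoch": np.repeat(["pre_upgrade", "post_upgrade"], 100),
    "gm": np.concatenate([rng.normal(0.42, 0.02, 100), rng.normal(0.44, 0.02, 100)]),
})

# z-score within each epoch to remove the epoch-specific offset and scale...
z = df.groupby("epoch")["gm"].transform(lambda x: (x - x.mean()) / x.std())
# ...then recenter and rescale to the pooled mean/SD so values stay in interpretable units.
df["gm_harmonized"] = z * df["gm"].std() + df["gm"].mean()

print(df.groupby("epoch")[["gm", "gm_harmonized"]].mean())
```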
SPSS (version 25, SPSS, Inc., Chicago, Illinois, USA) software was used for all analyses.\nCategorical variables were compared with χ2 tests. T-test, Mann–Whitney U-test, and analysis of covariance (ANCOVA) were used to compare group means for continuous variables, depending on the data distribution and the need to adjust for age and sex. Normality was tested with Shapiro–Wilk test.\nRelationships between BP, WML volume and brain volumes (hippocampal, GM, WM) at baseline were examined with linear regression. Volumes were dependent variables; age, sex, body mass index, smoking, diabetes, systolic blood pressure and diastolic blood pressure were independent predictors. These relationships were tested in participants with and without hypertension separately. We present here F-statistics for the regression models, as well unstandardized (B) and standardized (β) coefficients for individual predictors. Again, we hypothesized that in the HTN group an optimal BP may exist [21], which minimizes WML volume and maximizes brain volumes. Thus, as before, we tested for the existence of the peak, by introducing both the linear BP and quadratic (BP [2]) terms into the regression models.\nAfter a quadratic relationship between the volume and BP was confirmed, the critical BP at which volume reaches its maximum or minimum value was determined as:\nTo examine whether BP, WML and brain volumes changed over time, we used mixed models for repeated measures (MMRM), where BP or volume was a dependent variable and time, group (HTN vs. NTN), and group × time interaction were predictors. For models with WML or brain volumes age as added as a covariate. Both intercept and slope were treated as random.\nTo test whether baseline BP predicted change in WML and brain volumes, we created volume rates of change by regressing volume on time for each participant and tested whether baseline SBP and DBP was associated with volume slope. Baseline volumes, BMI, age, sex, smoking and diabetes status were initially included in the models, and retained only when significant. The relationships were tested in the normo- and hypertensive participants separately. For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms.\nTo test whether change in BP (independent variable) was related to change in volume (dependent) we used MMRM. WML or brain volumes were a dependent variable and BP (SBP or DBP), group (HTN vs. NTN), and group × BP interaction were predictors. Both intercept and slope were treated as random. SBP and DBP were predictors in separate models.\nWe performed supplementary analyses using pulse pressure (PP) as a predictor in all the models.\nWML volumes were log transformed. We checked the linear models for violations of models assumptions. For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms. Statistical significance was defined as a P-value <0.05. SPSS (version 25, SPSS, Inc., Chicago, Illinois, USA) software was used for all analyses.", "Study participants were recruited at the NYU School of Medicine, at the former Center for Brain Health. Out of 376 individuals described in this report, 369 (98%) were previously a part of a study assessing the relationship between BP and CBF. Here, we report on participants with quantitative assessment of white matter pathology. 
Baseline assessment was performed in a larger group of n = 376 participants, and n = 188 of them had follow-up examinations. The reasons for lack of follow-up are described in the Supplement. Participants were enrolled between November 2010 and November 2017. All participants signed Institutional Review Board (ethics committee)-approved consent forms and were free of neocortical stroke, brain tumor, life-long psychotic disorders and dementia at baseline. Information pertaining to recruitment, exclusion criteria, and participant flow is presented in the Supplement.", "All participants underwent medical, psychiatric, and neurological assessments, blood tests, electrocardiogram (ECG), and magnetic resonance imaging (MRI) evaluation examinations at baseline, and the longitudinal group also at follow-up. Participants exhibiting dementia, based on a physician-administered interview using the Brief Cognitive Rating Scale, rating on the Global Deterioration Scale [23], and Clinical Dementia Rating were excluded [24]. Fasting blood was tested for complete blood count, liver function, as well as metabolic and lipid panel.\nBlood pressure was taken once in a sitting position, after 5 min of rest. It was measured on the left upper arm using a manual sphygmomanometer. Body mass index (BMI) was calculated as weight/height (kg/m) [2].\nMedication: We recoded the use of antihypertensive medications (angiotensin receptor blockers (ARBs), angiotensin converting enzyme inhibitors (ACE), beta-blockers, diuretics, and calcium channel blockers), statins and glucose lowering drugs.\nDiabetes mellitus was defined as fasting glucose level of 126 mg/dl and higher and/or current use of glucose lowering medication [25]. Smoking status was defined as positive if participants was a current smoker or smoked within last 10 years.", "For consistency with our previous study hypertension was defined as current antihypertensive treatment or BP ≥140/90 mmHg [26]. One hundred and sixty-nine participants were classified as hypertensive; 136 participants were taking antihypertensive medication and 42 were not taking any antihypertensive medication with high BP recorded during their in-office visit. Normotension (NTN) was defined as BP <140/90 mmHg and no antihypertensive treatment.\nFor descriptive purpose and secondary analyses, we described hypertensive participants as untreated: no antihypertensive treatment and BP ≥140/90 mmHg; controlled: antihypertensive treatment and BP <140/90 mmHg, uncontrolled: antihypertensive treatment and BP ≥140/90 mmHg. We also divided our normotensive group into Stage 1 hypertension based on new guidelines: [27] no treatment and BP between 130–139/80–89 mmHg, and normotensive BP <130/80 mmHg.", "All MR imaging was performed on the 3T system (Siemens, Erlangen, Germany). Parameters for Magnetization Prepared Rapid Acquisition Gradient Echo (MPRAGE) images were: repetition time (TR) = 2250 ms, echo time (TE) = 2.7 ms, inversion time (TI) = 900 ms, flip angle (FA) = 8°, slice thickness: 1.0 mm, field of view (FOV) = 200 mm, acquisition matrix = 256 × 192 × 124, reconstructed as 256 × 256 × 124. Axial fluid attenuation inversion recovery (FLAIR) images were acquired with TR/TE/TI 9000/99/2500 ms; FA 130°, slice thickness: 3.3 mm, field of view (FOV) 220 mm, matrix = 256 × 192 reconstructed as thirty 256 × 256 images.", "White matter lesions were segmented on FLAIR MRI with freely available image analysis software FireVoxel (https://firevoxel.org/) using a previously validated workflow [28]. 
FLAIR images were first corrected for signal nonuniformity using the N3 algorithm. A 3D mask M was then created over the entire brain parenchyma, that is, the gray and white matter, with cerebrospinal fluid (CSF) excluded [29]. Voxels at the external surface S of M were then identified. Next was creation of set L, containing voxels in M with signal intensity 2.5 standard deviations above the mean FLAIR signal. Finally, L was filtered to remove (a) cortical voxels, those located within 3 mm of S, (b) small clusters of volume <12 mm3 (presumed to represent image noise), and (c) connected regions having >50% CSF border (presumed to represent the choroid plexus and the septum). The resulting mask represents our estimate of WML. Its volume VWML is presented as percentage of the total brain volume.\nGray matter (GM), white matter (WM) and CSF volumes were estimated using Statistical Parametric Mapping segmentation procedure (SPM, version 8, with ‘New-Segment’ extension) [30]. Total brain volume was the sum of GM and WM volumes. Intracranial volume (ICV) was the sum of all tissue types. Left and right hippocampal volumes were obtained with FreeSurfer version 6.0 [31], and averaged. They are referred to from now on as ‘mean hippocampal volume’. GM, WM and hippocampal volumes are presented as a fraction of the ICV. MRI data were acquired over the span of 9 years a 3T magnet that underwent hardware and software upgrades. We detected the inter-epoch variability in GM measurements. To avoid this fixed bias we z-scored GM values separately by epoch, recentered and rescaled them. Even though no other structural measure was affected, for consistency, we used rescaled values for hippocampal and WM volumes.", "Categorical variables were compared with χ2 tests. T-test, Mann–Whitney U-test, and analysis of covariance (ANCOVA) were used to compare group means for continuous variables, depending on the data distribution and the need to adjust for age and sex. Normality was tested with Shapiro–Wilk test.\nRelationships between BP, WML volume and brain volumes (hippocampal, GM, WM) at baseline were examined with linear regression. Volumes were dependent variables; age, sex, body mass index, smoking, diabetes, systolic blood pressure and diastolic blood pressure were independent predictors. These relationships were tested in participants with and without hypertension separately. We present here F-statistics for the regression models, as well unstandardized (B) and standardized (β) coefficients for individual predictors. Again, we hypothesized that in the HTN group an optimal BP may exist [21], which minimizes WML volume and maximizes brain volumes. Thus, as before, we tested for the existence of the peak, by introducing both the linear BP and quadratic (BP [2]) terms into the regression models.\nAfter a quadratic relationship between the volume and BP was confirmed, the critical BP at which volume reaches its maximum or minimum value was determined as:\nTo examine whether BP, WML and brain volumes changed over time, we used mixed models for repeated measures (MMRM), where BP or volume was a dependent variable and time, group (HTN vs. NTN), and group × time interaction were predictors. For models with WML or brain volumes age as added as a covariate. 
Both intercept and slope were treated as random.\nTo test whether baseline BP predicted change in WML and brain volumes, we created volume rates of change by regressing volume on time for each participant and tested whether baseline SBP and DBP was associated with volume slope. Baseline volumes, BMI, age, sex, smoking and diabetes status were initially included in the models, and retained only when significant. The relationships were tested in the normo- and hypertensive participants separately. For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms.\nTo test whether change in BP (independent variable) was related to change in volume (dependent) we used MMRM. WML or brain volumes were a dependent variable and BP (SBP or DBP), group (HTN vs. NTN), and group × BP interaction were predictors. Both intercept and slope were treated as random. SBP and DBP were predictors in separate models.\nWe performed supplementary analyses using pulse pressure (PP) as a predictor in all the models.\nWML volumes were log transformed. We checked the linear models for violations of models assumptions. For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms. Statistical significance was defined as a P-value <0.05. SPSS (version 25, SPSS, Inc., Chicago, Illinois, USA) software was used for all analyses.", "Our group of 376 participants consisted of HTN (n = 207) and NTN (n = 169) individuals. The characteristics of the groups are given in Table 1. Two hundred and twenty-eight (60.6%) were women (age, 68.5 ± 7.4 years (mean ± standard deviation); education, 16.7 ± 2.3 years); and 148 (39.4%) men (age, 70.7 ± 6.9 years; education, 17.0 ± 2.4 years). The majority of the participants were Caucasian (86%), 10% African American, 4% other races.\nBaseline characteristics of the study group (n = 376)\nData is presented as mean ± standard deviation unless otherwise indicated. P-values come from t-test (total cholesterol, LDL) or Mann–Whitney U-test (age, education, BMI, HDL, triglycerides, glucose). For categorical variables, χ2 was used.\nn = 371 (NTN = 204, HTN = 167)\nn = 369 (NTN = 205, HTN = 164)\nn = 363 (NTN = 200, HTN = 163)\nn = 371 (NTN = 206, HTN = 165)\nValues are presented as mean ± standard errors (SE), P-values from ANCOVA after accounting for age.\nHippocampal volume is a mean of left and right, values are presented as mean ± standard errors (SE), P-values from ANCOVA after accounting for age and gender. Since the residuals from GM model were not normally distributed, the values were reanalyzed using ranked ANCOVA. Results remained the same.\nICV, intracranial volume; HTN, hypertension; NTN, normotensive.\nIn the longitudinal group of 188 participants, 116 (61.7%) were women (age, 69.6 ± 7.5 years; education, 16.7 ± 2.1 years) and 72 (38.3%) men (age, 71.4 ± 6.1 years; education, 17.2 ± 2.3 years). The mean time of follow-up was 2.3 ± 0.70 years. The longitudinal group did not differ from the participants with only one examination in terms of sex, HTN prevalence, SBP, DBP, BMI, total cholesterol, LDL and HDL cholesterol, triglycerides, education and WML volume. Prevalence of smoking and diabetes, as well as the number of participants with normotension, controlled, uncontrolled and untreated hypertension were also not different between the two groups. 
However, participants with longitudinal observation were slightly older (age, 70.3 ± 7.0 vs. 68.4 ± 7.4 years, P = 0.005) and had lower glucose levels (glucose, 82.5 ± 16.0 vs. 85.6 ± 14 mg/dl) than the individuals with only one exam. The clinical characteristics, as well as baseline brain and WML volumes of HTN and NTN participants in the longitudinal subgroup are given in Table S1. WML volumes by five baseline study group are shown in Tables S2 and S3.\nBaseline analyses BP and white matter lesion volumes In the HTN group, SBP and quadratic SBP term were predictors of WML volume at baseline (regression model included age, F3,168 = 14.4, P < 0.001) (Table 2, Fig. 1). The results were similar with log-transformed data (F3,168 = 13.1, P < 0.001) (Table 3). The critical SBP value calculated from equation [2] was x = 124 mmHg.\nLinear regression model predicting WML volume in the HTN group at baseline (raw data)\nCritical value (Eq. [2]): x = –(−0.07317/(2 × 0.0003)) = 122 mmHg.\nHTN, hypertension; WML, white matter lesion.\nQuadratic relationship between systolic blood pressure readings and white matter lesion measurements in hypertensive participants.\nLinear regression model predicting WML volume in the HTN group at baseline (log transformed data: log(WML volume))\nCritical value (Eq. [2]): x = –(−0.02741/(2 × 0.00011)) = 124.6 mmHg.\nHTN, hypertension; WML, white matter lesion.\nIn the NTN group, neither SBP nor DBP were related to WML.\nIn the HTN group, SBP and quadratic SBP term were predictors of WML volume at baseline (regression model included age, F3,168 = 14.4, P < 0.001) (Table 2, Fig. 1). The results were similar with log-transformed data (F3,168 = 13.1, P < 0.001) (Table 3). The critical SBP value calculated from equation [2] was x = 124 mmHg.\nLinear regression model predicting WML volume in the HTN group at baseline (raw data)\nCritical value (Eq. [2]): x = –(−0.07317/(2 × 0.0003)) = 122 mmHg.\nHTN, hypertension; WML, white matter lesion.\nQuadratic relationship between systolic blood pressure readings and white matter lesion measurements in hypertensive participants.\nLinear regression model predicting WML volume in the HTN group at baseline (log transformed data: log(WML volume))\nCritical value (Eq. [2]): x = –(−0.02741/(2 × 0.00011)) = 124.6 mmHg.\nHTN, hypertension; WML, white matter lesion.\nIn the NTN group, neither SBP nor DBP were related to WML.\nBP and brain volumes In HTN, the quadratic DBP term was a predictor of the mean hippocampal volume at baseline (the model included age and sex, F4,168 = 11.6, P < 0.001) (Table 4, Fig. 2). The critical DBP value calculated from equation [2] was x = 77 mmHg.\nModel predicting mean hippocampal volume in the HTN group at baseline\nCritical value (Eq. [2]): x = −(0.00399/(2 × (−0.000026))) = 77 mmHg.\nHTN, hypertension; WML, white matter lesion.\nRelationship between diastolic blood pressure readings and mean hippocampal brain volume measurements in hypertensive participants.\nGM was associated with SBP (β = −0.16, P = 0.02) and the model (F4,166 = 14.0, P < 0.001) included BMI, smoking and age.\nIn NTN, the mean hippocampal volume was inversely related to SBP (β = −0.17, P < 0.008), (regression model included age and sex, F3,206 = 22.4, P < 0.001). 
BP and brain volumes

In HTN, the quadratic DBP term was a predictor of the mean hippocampal volume at baseline (the model included age and sex; F4,168 = 11.6, P < 0.001) (Table 4, Fig. 2). The critical DBP value calculated from equation [2] was approximately 77 mmHg.

Table 4. Model predicting mean hippocampal volume in the HTN group at baseline. Critical value (Eq. [2]): x = −(0.00399/(2 × (−0.000026))) = 77 mmHg. HTN, hypertension; WML, white matter lesion.

Fig. 2. Relationship between diastolic blood pressure readings and mean hippocampal brain volume measurements in hypertensive participants.

GM volume was associated with SBP (β = −0.16, P = 0.02); the model (F4,166 = 14.0, P < 0.001) also included BMI, smoking and age.

In NTN, the mean hippocampal volume was inversely related to SBP (β = −0.17, P < 0.008; the regression model included age and sex, F3,206 = 22.4, P < 0.001), and GM volume was inversely related to DBP (β = −0.15, P = 0.01; the model included age and sex, F3,206 = 32.3, P < 0.001).

Longitudinal analyses

Changes in BP and volumes over time

SBP did not change significantly in either the HTN or the NTN group, but the interaction term was significant (P = 0.01), indicating divergent patterns of SBP dynamics (estimate = −1.6, P = 0.07 for HTN; 1.4, P = 0.07 for NTN). Similarly, DBP patterns differed (P = 0.03 for the interaction term), with a lack of change in the HTN participants (estimate = −0.6, P = 0.31) and a significant increase in the NTN group (estimate = 1.1, P = 0.03). In both groups, WML volumes increased with time (estimate = 0.025, P < 0.001 for HTN; 0.028, P < 0.001 for NTN), and brain volumes decreased with time. The estimates for the fixed effect of time on the mean hippocampal volume were −0.0044 (P < 0.001) for HTN and −0.0038 (P < 0.001) for NTN participants; for GM, they were −0.48 (P < 0.001) and −0.45 (P < 0.001); and for WM, −0.30 (P < 0.001) and −0.22 (P = 0.001), for the HTN and NTN groups, respectively. Group × time interaction terms were not significant for the models with brain or WML volumes, indicating that rates of change did not differ between groups. Figure S2 presents BP and WML changes in both groups.
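These longitudinal estimates come from mixed models for repeated measures with a group × time interaction and random intercepts and slopes. The original analyses were run in SPSS; purely as an illustration of the model structure, a random-intercept and random-slope specification in Python with statsmodels (an approximation of the reported MMRM, with a hypothetical long-format file and column names) might look like this:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per participant per visit.
    df = pd.read_csv("visits_long.csv")  # columns: subject, group, years, age_baseline, wml

    # WML volume as a function of time, group, and their interaction,
    # adjusted for baseline age, with a random intercept and a random
    # slope for time within each participant.
    model = smf.mixedlm(
        "wml ~ years * group + age_baseline",
        data=df,
        groups="subject",
        re_formula="~years",
    )
    result = model.fit()
    print(result.summary())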
Secondary analyses

Table S4 shows the stability of the secondary group definitions (untreated, uncontrolled and controlled HTN, Stage 1 HTN, and normotension). Table S5 presents the results from MMRM analyses of WML volumes and BP changes conducted with all five subgroups defined at baseline based on HTN control status. WML volume increased significantly in untreated and controlled hypertension, as well as in normotensive participants.

Table S6 and Figure S3 show that participants who at baseline were classified as untreated (n = 19) or uncontrolled hypertension (n = 20) and were reclassified at the last visit as having a good outcome (improved BP control) experienced the same increase in WML volumes as participants with a bad outcome (BP remained untreated or uncontrolled).

Baseline BP and changes in white matter lesion and brain volumes

In the hypertensive group, both baseline SBP (β = 0.33, P = 0.007) and DBP (β = −0.30, P = 0.008) were related to WML changes over time (the entire model, F4,83 = 10.5, P < 0.001, also included baseline WML volume and smoking). Baseline BP was not related to changes in hippocampal, GM, or WM volumes.

In normotensive participants, baseline BP was not related to change in any of the examined volumes.

Longitudinal changes in BP and changes in white matter lesion and brain volumes

In the HTN group, neither change in SBP nor change in DBP was associated with a change in WML, hippocampal, GM or WM volumes (Tables S7 and S8).

In the NTN group, increases in both SBP (estimate for the fixed effect of SBP = 0.0023, P = 0.01) and DBP (estimate = 0.0035, P = 0.01) over time were related to the increase in WML volume. The significant interaction term indicated that these dynamics were meaningfully different from the HTN group. Moreover, in the normotensive group, increases in both SBP (estimate = −0.0002, P = 0.03) and DBP (estimate = −0.0004, P = 0.01) over time were related to a reduction in the mean hippocampal volume; however, only for SBP was the interaction term significant, indicating that these changes differed from the HTN group.

Results for pulse pressure are presented in the Supplement, p. 8–9.
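The "baseline BP and changes" analyses above follow a two-stage approach: each participant's repeated volume measurements are first reduced to an individual rate of change (the slope of volume on time), and those slopes are then regressed on baseline SBP and DBP together with the retained covariates. A minimal sketch of that approach in Python (hypothetical file and column names, not the study code):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("visits_long.csv")  # columns: subject, years, wml, sbp0, dbp0, wml0, smoker

    # Stage 1: per-participant rate of change (least-squares slope of WML volume on time).
    def slope(g):
        return np.polyfit(g["years"], g["wml"], 1)[0]

    slopes = df.groupby("subject").apply(slope).rename("wml_slope")

    # Stage 2: relate the individual slopes to baseline BP and covariates.
    baseline = df.groupby("subject").first()[["sbp0", "dbp0", "wml0", "smoker"]]
    stage2 = baseline.join(slopes)

    fit = smf.ols("wml_slope ~ sbp0 + dbp0 + wml0 + smoker", data=stage2).fit()
    print(fit.summary())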
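For the change-on-change analyses in the last subsection above, BP enters the mixed model directly as a time-varying predictor of volume, together with group and the group × BP interaction. Again purely as an illustration of the model structure (hypothetical column names, not the study code):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("visits_long.csv")  # columns: subject, group, years, sbp, wml

    # WML volume modeled on concurrent SBP, group, and their interaction,
    # with a random intercept and a random slope for time within participant.
    model = smf.mixedlm(
        "wml ~ sbp * group",
        data=df,
        groups="subject",
        re_formula="~years",
    )
    print(model.fit().summary())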
Discussion

Longitudinal observations of brain health in aging have consistently concluded that BP has a significant impact on the nervous system.
It is well known that increased cardiovascular burden, as represented by an elevated systolic or diastolic BP, can substantially increase the risk of developing hemorrhagic and ischemic stroke [1,2], white matter lesions and atrophy [14,32], and cognitive dysfunction [33,34].

In our examination of a cohort of normotensive and hypertensive participants, we posit that adequate BP is vital for a healthy brain. We have previously observed that in hypertension there appears to be a window of mid-range SBP around 125 mmHg which maximizes perfusion [21]. In the present study, we show that hypertensive participants with a similar SBP (124 mmHg) have lower WML volumes than participants with SBP further away from this optimum. This is a novel finding further supporting the notion of an ideal blood pressure range that promotes optimal brain function and minimizes damage. In contrast to our results, the SPRINT MIND study found that the group in which BP was lowered aggressively to less than 120 mmHg experienced smaller WML progression over time [18]. However, our findings are consistent with a report showing that in hypertensive participants low BP was related to higher PWML volume [35], and with an earlier study in which the increase in WML volume over time was most pronounced in participants with the highest BP who also had the greatest BP fluctuations [36].

We also find that among hypertensive participants there was a quadratic relationship between the mean hippocampal volume at baseline and DBP, such that participants with both low and high DBP had lower volumes. This observation, although interesting, is not completely novel. Others have previously described a parallel U-shaped association between brain atrophy and DBP, where both low and high concurrent DBP were related to greater cortical volume reduction [37]. The SPRINT MIND study showed that the intensive BP treatment group experienced significantly greater hippocampal volume reduction [38]. This further supports the hypothesis that a BP optimum exists, and reconciles reports showing that both high [39] and low BP [40] are related to lower hippocampal volumes.

Among normotensive participants, relationships between BP and brain volumes were linear, such that both higher SBP and higher DBP were related to lower brain volumes. This is congruent with previous reports and meta-analyses showing both BP components associated with volumes [3]. Notably, no quadratic relationship was observed in this group, further strengthening our initial assumption that the optimum exists in the group with preexisting impairment of the vascular system.

Reduction in the volume of specific brain regions over time is another parameter indicative of abnormal aging and is often reported in HTN [3,41]. Our HTN group had more global and hippocampal atrophy at baseline (Table 1) than their normotensive counterparts. However, after adjustment for age, the WML volume did not differ between the HTN and NTN groups. It is possible that the substantial percentage (55%) of participants with controlled BP attenuated the differences. Comparisons across the five groups showed that participants with untreated and uncontrolled hypertension tended to have higher WML volumes (Table S2).

In our hypertensive participants, higher baseline SBP and lower DBP were both independently related to the growth of WML over time. While the first finding is not surprising, we offer two possible explanations for the latter.
First, as the mean arterial pressure, the major driver of perfusion pressure, depends mostly on DBP, low DBP in HTN may indicate a propensity to hypoperfusion. Second, this association would be consistent with a widening of pulse pressure, indicative of arterial stiffness. Indeed, an earlier analysis of the Lothian Birth Cohort 1936 showed that the sequence of DBP reduction, PP increase and rising internal carotid artery pulsatility index leads to WM damage [42]. Our supplemental analyses also confirm that baseline PP was related to WML growth.

It is worth noting that both the normotensive and hypertensive groups experienced reductions in brain volumes and WML growth over time. This was present despite dissimilar trajectories of blood pressure: increases in the NTN and an apparent lack of change in the HTN participants. A more fine-grained picture emerged from analyzing the five groups (Table S5): WML grew despite BP reductions in the untreated group, and a similar trend was observed among individuals with uncontrolled HTN, yet lesion volume increased concomitantly with BP increases in participants with controlled HTN and among normotensive participants. Even more interestingly, we show that WML growth was similar in participants with baseline untreated/uncontrolled HTN who at the last visit had disparate outcomes: improved or unchanged BP control (Table S6). This stands in opposition to the results of the SPRINT MIND trial, where a smaller increase in WML volume was observed in participants with more aggressive BP lowering [18]. We believe that the main reason for this discrepancy is that our groups were much smaller and the follow-up time shorter than in the SPRINT study. In addition, the difference in WML volume change between the SPRINT groups amounted to only 0.5 cm³.

However, since the BP load over time is unknown, it is also likely that in both groups BP was not controlled for a long time, and one group showed improvement only on the last measurement. Both groups could also have experienced similar, strong longitudinal BP fluctuations, which increase the probability of WML progression [36]. Then again, our results confirm that baseline WML volume is a good predictor of lesion progression, consistent with previous reports [43], and suggest that once the pathological process is set in motion, there is limited room for its modification. Such a view has important implications for patient management, further strengthening the notion that early intervention (close monitoring of BP in normotensives and early treatment) would be most successful. Finally, the findings are also consistent with our hypothesis that, in the setting of impaired cerebral flow regulation in participants with a more severe form of HTN, BP reductions are harmful.

Not surprisingly, in light of the above considerations, only in the group of normotensive individuals were longitudinal increases of SBP and DBP related to lesion growth and a reduction in mean hippocampal volume. This is to be expected, as rising BP would contribute to brain deterioration. This finding is also in agreement with a previous report of community-residing participants over the age of 75, where the 2-year change in ambulatory SBP was associated with the 2-year WML increase [44].

Our study suffered from a few limitations. First, only one blood pressure measurement was taken at each visit, instead of the average of three consecutive readings. This could have resulted in misclassification of cases.
This was especially true for the 'untreated' group: only 63% of participants defined at baseline as untreated were still classified as hypertensive at follow-up. However, the high baseline WML volume in this group (Table S2) speaks against the possibility of misclassification. Almost everybody in the controlled and uncontrolled HTN groups remained hypertensive at follow-up, and 88% (91/104) of participants normotensive (<140/90 mmHg) at baseline were normotensive at follow-up. Although the method for BP measurement was not optimal, it closely matched common real-life practice. Second, the interval between assessments was short, and since WML progression is slow, it might have precluded us from observing significant differences between groups. The analysis of WML volume did not differentiate periventricular from deep lesions; it is possible that the effects of hypertension on WML development vary by lesion location. We did not perform manual segmentation of silent infarcts. This may have artificially inflated lesion volumes, as some infarcts might have been erroneously classified as WML. Wang et al. [45] previously reported that including stroke lesions could bias WML volume estimation by as much as 20%. Although in our case the error was likely smaller, since their study included patients recruited soon after presenting with stroke symptoms whereas we excluded participants with overt infarcts, the misclassification cannot be excluded.

Only half of the participants had follow-up. Although participants studied only once and those with additional examinations did not differ in most characteristics, this still could have caused a bias. Another limitation is that our group was predominantly Caucasian, with a high level of educational achievement; thus generalizability to a more diverse cohort is uncertain. We did not have data on the duration of hypertension, which can modify the relationships we examined. Normotensive individuals were slightly (on average 3 years) but statistically significantly younger than hypertensive participants. It is plausible that, had they been older, we might have seen stronger associations between BP and WML. A recent meta-analysis showed that the optimal blood pressure value changes depending on age [20]. We did not have a group large enough to address whether this is also the case for white matter lesions. Finally, since our study was observational, the results cannot carry the same weight as those derived from a clinical trial, and causal relations cannot be inferred.

In conclusion, we find that among hypertensive participants there was a quadratic relationship between BP and both WML volume and mean hippocampal volume. The WML burden was smallest when SBP was close to 124 mmHg. The mean hippocampal volume was highest when DBP was in the high 70-mmHg range. Such phenomena were not observed in the normotensive group, suggesting the existence of a BP optimum in hypertension. The current report extends our earlier findings showing that cerebral perfusion is maximized when systolic BP is in the 120–130 mmHg range. Longitudinally, all groups experienced lesion growth despite different BP trajectories, further suggesting the possibility that WML expansion may occur despite, or because of, BP reduction in individuals with a compromised vascular system.
Considering the growing evidence of a nonlinear association between blood pressure and outcomes, further clinical trials are needed.

Acknowledgements

A special thanks to Ke Xi for her help with the statistical modeling as well as assistance in the production of regression plots for this manuscript.

Sources of funding: Study funding comes from NIH grants HL111724, NS104364, AG022374, AG12101, AG08051, and Alzheimer's Association NIRG-09-132490.

Disclosures: None.

Conflicts of interest: There are no conflicts of interest.
[ "intro", "methods", "subjects", null, null, null, null, null, "results", null, null, null, null, null, null, null, null, "discussion", null, "COI-statement", "supplementary-material" ]
[ "hypertension", "magnetic resonance imaging", "neuro-imaging", "radiology", "systolic and diastolic blood pressure", "white matter lesions" ]
INTRODUCTION: The impact of cardiovascular disease on the United States' population cannot be overstated. One out of every three Americans is hypertensive, with 66% of adults aged 60 or older exhibiting above-average blood pressure readings [1,2]. It was estimated in 2018 that 11 million individuals in the United States were living with undiagnosed hypertension (HTN), placing them at an increased risk for adverse cardiovascular events that can directly affect the central nervous system as well as cognition [1,2]. High blood pressure (BP) can cause damage and dysfunction to organ systems, with perhaps the most significant of these being the brain and the heart [3–5]. Hypertensive patients develop atherosclerotic plaques within the large blood vessels supplying blood flow both into and out of the brain at an accelerated rate, increasing their risk of an ischemic event [6,7]. HTN can cause a remodeling of the brain microvasculature, with vessels exhibiting rarefication and lumen narrowing [4,6,8]. Such changes in vessel architecture can have negative effects on cerebral blood flow (CBF) and autoregulation. Reduction in CBF has been shown to cause brain atrophy both in animal models [9] and in clinical populations [10], and has been consistently linked with white matter damage [11,12]. White matter lesions (WML) are very common in HTN participants [13–15]. Indeed, several studies have confirmed that two major risk factors for WML development are increased age and high BP, with smoking, high cholesterol, and heart disease also being significant contributors to lesion formation [14,16,17].

Although the benefits of HTN treatment are indisputable for major cardiovascular outcomes, the target blood pressure for the brain remains under debate. While some argue for aggressive blood pressure reductions [18], others point to the possible danger of this approach [19], as marked lowering of BP in longstanding HTN may cause relative hypoperfusion. This suggested to us the existence of an optimum BP range. A recent meta-analysis found a U-shaped relationship between SBP and the risk of dementia in participants older than 75, as well as a U-shaped relationship between SBP and combined dementia and mortality across all age groups starting at 60 [20]. Our group has previously reported a quadratic (inverted U-shape) relationship between CBF and SBP in hypertensive patients, pointing to an optimum lying between 120 and 130 mmHg where CBF was the highest [21]. It has consistently been shown that WML volumes are inversely related to global and local indices of CBF [22]. In this observational report, we investigated whether, in hypertensive participants, WML and atrophy, structural markers of HTN-related brain damage, show a relationship with SBP parallel to the one we observed for CBF and SBP: namely, whether there is a quadratic association between them.

METHODS: The data of this study are available from the corresponding author upon request.

Participants recruitment: Study participants were recruited at the NYU School of Medicine, at the former Center for Brain Health. Out of 376 individuals described in this report, 369 (98%) were previously part of a study assessing the relationship between BP and CBF. Here, we report on participants with quantitative assessment of white matter pathology. Baseline assessment was performed in the larger group of n = 376 participants, and n = 188 of them had follow-up examinations. The reasons for lack of follow-up are described in the Supplement.
Participants were enrolled between November 2010 and November 2017. All participants signed Institutional Review Board (ethics committee)-approved consent forms and were free of neocortical stroke, brain tumor, life-long psychotic disorders and dementia at baseline. Information pertaining to recruitment, exclusion criteria, and participant flow is presented in the Supplement.

Clinical assessment: All participants underwent medical, psychiatric, and neurological assessments, blood tests, electrocardiogram (ECG), and magnetic resonance imaging (MRI) evaluation at baseline, and the longitudinal group also at follow-up. Participants exhibiting dementia, based on a physician-administered interview using the Brief Cognitive Rating Scale, rating on the Global Deterioration Scale [23], and Clinical Dementia Rating, were excluded [24]. Fasting blood was tested for complete blood count, liver function, as well as metabolic and lipid panels. Blood pressure was taken once in a sitting position, after 5 min of rest. It was measured on the left upper arm using a manual sphygmomanometer. Body mass index (BMI) was calculated as weight/height² (kg/m²).

Medication: We recorded the use of antihypertensive medications (angiotensin receptor blockers (ARBs), angiotensin converting enzyme inhibitors (ACE), beta-blockers, diuretics, and calcium channel blockers), statins and glucose-lowering drugs. Diabetes mellitus was defined as a fasting glucose level of 126 mg/dl or higher and/or current use of glucose-lowering medication [25]. Smoking status was defined as positive if a participant was a current smoker or had smoked within the last 10 years.

Study groups: For consistency with our previous study, hypertension was defined as current antihypertensive treatment or BP ≥140/90 mmHg [26]. One hundred and sixty-nine participants were classified as hypertensive; 136 participants were taking antihypertensive medication and 42 were not taking any antihypertensive medication but had high BP recorded during their in-office visit. Normotension (NTN) was defined as BP <140/90 mmHg and no antihypertensive treatment. For descriptive purposes and secondary analyses, we described hypertensive participants as untreated: no antihypertensive treatment and BP ≥140/90 mmHg; controlled: antihypertensive treatment and BP <140/90 mmHg; uncontrolled: antihypertensive treatment and BP ≥140/90 mmHg. We also divided our normotensive group, based on the new guidelines [27], into Stage 1 hypertension: no treatment and BP between 130–139/80–89 mmHg, and normotensive: BP <130/80 mmHg.

MR imaging: All MR imaging was performed on a 3T system (Siemens, Erlangen, Germany). Parameters for Magnetization Prepared Rapid Acquisition Gradient Echo (MPRAGE) images were: repetition time (TR) = 2250 ms, echo time (TE) = 2.7 ms, inversion time (TI) = 900 ms, flip angle (FA) = 8°, slice thickness 1.0 mm, field of view (FOV) = 200 mm, acquisition matrix = 256 × 192 × 124, reconstructed as 256 × 256 × 124. Axial fluid attenuation inversion recovery (FLAIR) images were acquired with TR/TE/TI = 9000/99/2500 ms, FA = 130°, slice thickness 3.3 mm, FOV = 220 mm, matrix = 256 × 192, reconstructed as thirty 256 × 256 images.
MRI processing: White matter lesions were segmented on FLAIR MRI with the freely available image analysis software FireVoxel (https://firevoxel.org/) using a previously validated workflow [28]. FLAIR images were first corrected for signal nonuniformity using the N3 algorithm. A 3D mask M was then created over the entire brain parenchyma, that is, the gray and white matter, with cerebrospinal fluid (CSF) excluded [29]. Voxels at the external surface S of M were then identified. Next was the creation of set L, containing voxels in M with signal intensity 2.5 standard deviations above the mean FLAIR signal. Finally, L was filtered to remove (a) cortical voxels, those located within 3 mm of S, (b) small clusters of volume <12 mm³ (presumed to represent image noise), and (c) connected regions having a >50% CSF border (presumed to represent the choroid plexus and the septum). The resulting mask represents our estimate of WML. Its volume VWML is presented as a percentage of the total brain volume. Gray matter (GM), white matter (WM) and CSF volumes were estimated using the Statistical Parametric Mapping segmentation procedure (SPM, version 8, with the 'New-Segment' extension) [30]. Total brain volume was the sum of GM and WM volumes. Intracranial volume (ICV) was the sum of all tissue types. Left and right hippocampal volumes were obtained with FreeSurfer version 6.0 [31] and averaged; they are referred to from now on as the 'mean hippocampal volume'. GM, WM and hippocampal volumes are presented as a fraction of the ICV. MRI data were acquired over a span of 9 years on a 3T magnet that underwent hardware and software upgrades. We detected inter-epoch variability in the GM measurements. To avoid this fixed bias, we z-scored GM values separately by epoch, then recentered and rescaled them. Even though no other structural measure was affected, for consistency, we used rescaled values for hippocampal and WM volumes.
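The lesion definition above boils down to a simple intensity rule: within the brain-parenchyma mask, flag voxels brighter than the mean FLAIR signal by 2.5 standard deviations, then discard small clusters. The segmentation itself was performed in FireVoxel; the fragment below is only a schematic illustration of the thresholding and cluster-size steps (hypothetical file names, NumPy/NiBabel/SciPy), not the validated workflow, and it omits the cortical-rim and CSF-border filters.

    import numpy as np
    import nibabel as nib
    from scipy import ndimage

    # Hypothetical inputs: bias-corrected FLAIR and a binary parenchyma mask (CSF excluded).
    flair = nib.load("flair_n3.nii.gz").get_fdata()
    mask = nib.load("parenchyma_mask.nii.gz").get_fdata() > 0

    # Threshold: voxels 2.5 SD above the mean FLAIR signal within the mask.
    mu, sd = flair[mask].mean(), flair[mask].std()
    candidate = mask & (flair > mu + 2.5 * sd)

    # Remove small clusters (< 12 mm^3); assumes 1 mm isotropic voxels here.
    labels, n = ndimage.label(candidate)
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= 12) + 1)

    # WML volume expressed as a percentage of the total parenchyma volume.
    wml_fraction = 100.0 * keep.sum() / mask.sum()
    print(f"WML ~ {wml_fraction:.2f}% of brain volume")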
SPSS (version 25, SPSS, Inc., Chicago, Illinois, USA) software was used for all analyses. Categorical variables were compared with χ2 tests. T-test, Mann–Whitney U-test, and analysis of covariance (ANCOVA) were used to compare group means for continuous variables, depending on the data distribution and the need to adjust for age and sex. Normality was tested with Shapiro–Wilk test. Relationships between BP, WML volume and brain volumes (hippocampal, GM, WM) at baseline were examined with linear regression. Volumes were dependent variables; age, sex, body mass index, smoking, diabetes, systolic blood pressure and diastolic blood pressure were independent predictors. These relationships were tested in participants with and without hypertension separately. We present here F-statistics for the regression models, as well unstandardized (B) and standardized (β) coefficients for individual predictors. Again, we hypothesized that in the HTN group an optimal BP may exist [21], which minimizes WML volume and maximizes brain volumes. Thus, as before, we tested for the existence of the peak, by introducing both the linear BP and quadratic (BP [2]) terms into the regression models. After a quadratic relationship between the volume and BP was confirmed, the critical BP at which volume reaches its maximum or minimum value was determined as: To examine whether BP, WML and brain volumes changed over time, we used mixed models for repeated measures (MMRM), where BP or volume was a dependent variable and time, group (HTN vs. NTN), and group × time interaction were predictors. For models with WML or brain volumes age as added as a covariate. Both intercept and slope were treated as random. To test whether baseline BP predicted change in WML and brain volumes, we created volume rates of change by regressing volume on time for each participant and tested whether baseline SBP and DBP was associated with volume slope. Baseline volumes, BMI, age, sex, smoking and diabetes status were initially included in the models, and retained only when significant. The relationships were tested in the normo- and hypertensive participants separately. For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms. To test whether change in BP (independent variable) was related to change in volume (dependent) we used MMRM. WML or brain volumes were a dependent variable and BP (SBP or DBP), group (HTN vs. NTN), and group × BP interaction were predictors. Both intercept and slope were treated as random. SBP and DBP were predictors in separate models. We performed supplementary analyses using pulse pressure (PP) as a predictor in all the models. WML volumes were log transformed. We checked the linear models for violations of models assumptions. For all the analyses, the most parsimonious model was chosen, defined as the model that included only significant or necessary (main effects when interaction was present) terms. Statistical significance was defined as a P-value <0.05. SPSS (version 25, SPSS, Inc., Chicago, Illinois, USA) software was used for all analyses. Participants recruitment: Study participants were recruited at the NYU School of Medicine, at the former Center for Brain Health. Out of 376 individuals described in this report, 369 (98%) were previously a part of a study assessing the relationship between BP and CBF. Here, we report on participants with quantitative assessment of white matter pathology. 
Baseline assessment was performed in a larger group of n = 376 participants, and n = 188 of them had follow-up examinations. The reasons for lack of follow-up are described in the Supplement. Participants were enrolled between November 2010 and November 2017. All participants signed Institutional Review Board (ethics committee)-approved consent forms and were free of neocortical stroke, brain tumor, life-long psychotic disorders and dementia at baseline. Information pertaining to recruitment, exclusion criteria, and participant flow is presented in the Supplement.

Clinical assessment: All participants underwent medical, psychiatric, and neurological assessments, blood tests, electrocardiogram (ECG), and magnetic resonance imaging (MRI) examinations at baseline, and the longitudinal group also at follow-up. Participants exhibiting dementia, based on a physician-administered interview using the Brief Cognitive Rating Scale, rating on the Global Deterioration Scale [23], and Clinical Dementia Rating, were excluded [24]. Fasting blood was tested for complete blood count, liver function, and metabolic and lipid panels. Blood pressure was taken once in a sitting position, after 5 min of rest. It was measured on the left upper arm using a manual sphygmomanometer. Body mass index (BMI) was calculated as weight/height² (kg/m²).

Medication: We recorded the use of antihypertensive medications (angiotensin receptor blockers (ARBs), angiotensin converting enzyme inhibitors (ACE), beta-blockers, diuretics, and calcium channel blockers), statins and glucose lowering drugs. Diabetes mellitus was defined as a fasting glucose level of 126 mg/dl or higher and/or current use of glucose lowering medication [25]. Smoking status was defined as positive if a participant was a current smoker or had smoked within the last 10 years.

Study groups: For consistency with our previous study, hypertension was defined as current antihypertensive treatment or BP ≥140/90 mmHg [26]. One hundred and sixty-nine participants were classified as hypertensive; 136 participants were taking antihypertensive medication and 42 were not taking any antihypertensive medication but had high BP recorded during their in-office visit. Normotension (NTN) was defined as BP <140/90 mmHg and no antihypertensive treatment. For descriptive purposes and secondary analyses, we further described hypertensive participants as untreated (no antihypertensive treatment and BP ≥140/90 mmHg), controlled (antihypertensive treatment and BP <140/90 mmHg), or uncontrolled (antihypertensive treatment and BP ≥140/90 mmHg). We also divided our normotensive group, based on the new guidelines [27], into Stage 1 hypertension (no treatment and BP 130–139/80–89 mmHg) and normotension (BP <130/80 mmHg).

MR imaging: All MR imaging was performed on a 3T system (Siemens, Erlangen, Germany). Parameters for Magnetization Prepared Rapid Acquisition Gradient Echo (MPRAGE) images were: repetition time (TR) = 2250 ms, echo time (TE) = 2.7 ms, inversion time (TI) = 900 ms, flip angle (FA) = 8°, slice thickness 1.0 mm, field of view (FOV) = 200 mm, acquisition matrix = 256 × 192 × 124, reconstructed as 256 × 256 × 124. Axial fluid attenuation inversion recovery (FLAIR) images were acquired with TR/TE/TI 9000/99/2500 ms, FA 130°, slice thickness 3.3 mm, FOV 220 mm, matrix = 256 × 192, reconstructed as thirty 256 × 256 images.
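To make the group definitions above concrete, a small illustrative classifier is sketched below; the argument names are hypothetical, and the cut-offs are taken directly from this section [26,27].

def classify_bp_group(on_antihypertensive: bool, sbp: float, dbp: float) -> str:
    # Hypertension: current antihypertensive treatment or BP >= 140/90 mmHg [26]
    high_bp = sbp >= 140 or dbp >= 90
    if on_antihypertensive:
        return "uncontrolled HTN" if high_bp else "controlled HTN"
    if high_bp:
        return "untreated HTN"
    # Untreated participants below 140/90: split by the newer guideline cut-offs [27]
    if 130 <= sbp <= 139 or 80 <= dbp <= 89:
        return "Stage 1 HTN (within the NTN group)"
    return "normotension (BP < 130/80 mmHg)"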
MRI processing: White matter lesions were segmented on FLAIR MRI with the freely available image analysis software FireVoxel (https://firevoxel.org/) using a previously validated workflow [28]. FLAIR images were first corrected for signal nonuniformity using the N3 algorithm. A 3D mask M was then created over the entire brain parenchyma, that is, the gray and white matter, with cerebrospinal fluid (CSF) excluded [29]. Voxels at the external surface S of M were then identified. Next, a set L was created, containing voxels in M with signal intensity more than 2.5 standard deviations above the mean FLAIR signal. Finally, L was filtered to remove (a) cortical voxels, those located within 3 mm of S, (b) small clusters of volume <12 mm³ (presumed to represent image noise), and (c) connected regions having a >50% CSF border (presumed to represent the choroid plexus and the septum). The resulting mask represents our estimate of WML. Its volume V_WML is presented as a percentage of the total brain volume. Gray matter (GM), white matter (WM) and CSF volumes were estimated using the Statistical Parametric Mapping segmentation procedure (SPM, version 8, with the ‘New-Segment’ extension) [30]. Total brain volume was the sum of GM and WM volumes. Intracranial volume (ICV) was the sum of all tissue types. Left and right hippocampal volumes were obtained with FreeSurfer version 6.0 [31] and averaged (the ‘mean hippocampal volume’ referred to above); as noted earlier, GM, WM and hippocampal volumes were expressed as a fraction of the ICV.
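The lesion-masking workflow above can be expressed schematically as follows. This is an illustrative re-implementation under simplifying assumptions (bias-corrected FLAIR, pre-computed brain and CSF masks, roughly isotropic 1 mm voxels); it is not FireVoxel's actual code, and every name in it is a placeholder.

import numpy as np
from scipy import ndimage

def estimate_wml_mask(flair, brain_mask, csf_mask, voxel_vol_mm3=1.0):
    # Set L: voxels in the brain mask whose FLAIR signal exceeds mean + 2.5 SD
    vals = flair[brain_mask]
    thr = vals.mean() + 2.5 * vals.std()
    lesions = (flair > thr) & brain_mask

    # (a) drop 'cortical' voxels within 3 mm of the external surface S of the mask
    dist_to_surface = ndimage.distance_transform_edt(brain_mask)  # voxel units, ~1 mm isotropic assumed
    lesions &= dist_to_surface > 3.0

    # (b) drop small clusters (<12 mm^3), presumed to be noise
    labels, n = ndimage.label(lesions)
    sizes_mm3 = ndimage.sum(lesions, labels, index=np.arange(1, n + 1)) * voxel_vol_mm3
    big_labels = np.flatnonzero(sizes_mm3 >= 12.0) + 1
    lesions &= np.isin(labels, big_labels)

    # (c) drop connected regions whose border is >50% CSF (choroid plexus, septum)
    labels, n = ndimage.label(lesions)
    for lab in range(1, n + 1):
        comp = labels == lab
        border = ndimage.binary_dilation(comp) & ~comp
        if border.any() and (border & csf_mask).sum() / border.sum() > 0.5:
            lesions &= ~comp
    return lesions  # estimated WML mask; volume = lesions.sum() * voxel_vol_mm3

The resulting mask volume, expressed as a percentage of total brain volume, corresponds to V_WML as defined above.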
RESULTS: Our group of 376 participants consisted of HTN (n = 169) and NTN (n = 207) individuals. The characteristics of the groups are given in Table 1. Two hundred and twenty-eight (60.6%) were women (age, 68.5 ± 7.4 years (mean ± standard deviation); education, 16.7 ± 2.3 years) and 148 (39.4%) were men (age, 70.7 ± 6.9 years; education, 17.0 ± 2.4 years). The majority of the participants were Caucasian (86%), 10% were African American, and 4% were of other races.

Table 1: Baseline characteristics of the study group (n = 376). Data are presented as mean ± standard deviation unless otherwise indicated. P-values come from t-test (total cholesterol, LDL) or Mann–Whitney U-test (age, education, BMI, HDL, triglycerides, glucose); for categorical variables, χ2 was used. Available n by variable: n = 371 (NTN = 204, HTN = 167); n = 369 (NTN = 205, HTN = 164); n = 363 (NTN = 200, HTN = 163); n = 371 (NTN = 206, HTN = 165). Volume values are presented as mean ± standard errors (SE), with P-values from ANCOVA after accounting for age. Hippocampal volume is the mean of left and right, presented as mean ± SE, with P-values from ANCOVA after accounting for age and gender. Since the residuals from the GM model were not normally distributed, the values were reanalyzed using ranked ANCOVA; results remained the same. ICV, intracranial volume; HTN, hypertension; NTN, normotensive.

In the longitudinal group of 188 participants, 116 (61.7%) were women (age, 69.6 ± 7.5 years; education, 16.7 ± 2.1 years) and 72 (38.3%) were men (age, 71.4 ± 6.1 years; education, 17.2 ± 2.3 years). The mean time of follow-up was 2.3 ± 0.70 years. The longitudinal group did not differ from the participants with only one examination in terms of sex, HTN prevalence, SBP, DBP, BMI, total cholesterol, LDL and HDL cholesterol, triglycerides, education and WML volume. Prevalence of smoking and diabetes, as well as the number of participants with normotension, controlled, uncontrolled and untreated hypertension, were also not different between the two groups.
However, participants with longitudinal observation were slightly older (age, 70.3 ± 7.0 vs. 68.4 ± 7.4 years, P = 0.005) and had lower glucose levels (82.5 ± 16.0 vs. 85.6 ± 14 mg/dl) than the individuals with only one exam. The clinical characteristics, as well as baseline brain and WML volumes, of HTN and NTN participants in the longitudinal subgroup are given in Table S1. WML volumes by the five baseline study groups are shown in Tables S2 and S3.

Baseline analyses:
BP and white matter lesion volumes: In the HTN group, SBP and the quadratic SBP term were predictors of WML volume at baseline (the regression model included age, F3,168 = 14.4, P < 0.001) (Table 2, Fig. 1). The results were similar with log-transformed data (F3,168 = 13.1, P < 0.001) (Table 3). The critical SBP value calculated from equation [2] was x = 124 mmHg. Table 2: linear regression model predicting WML volume in the HTN group at baseline (raw data); critical value (Eq. [2]): x = −(−0.07317/(2 × 0.0003)) = 122 mmHg. HTN, hypertension; WML, white matter lesion. Fig. 1: quadratic relationship between systolic blood pressure readings and white matter lesion measurements in hypertensive participants. Table 3: linear regression model predicting WML volume in the HTN group at baseline (log-transformed data: log(WML volume)); critical value (Eq. [2]): x = −(−0.02741/(2 × 0.00011)) = 124.6 mmHg. HTN, hypertension; WML, white matter lesion. In the NTN group, neither SBP nor DBP was related to WML.

BP and brain volumes: In HTN, the quadratic DBP term was a predictor of the mean hippocampal volume at baseline (the model included age and sex, F4,168 = 11.6, P < 0.001) (Table 4, Fig. 2). The critical DBP value calculated from equation [2] was x = 77 mmHg. Table 4: model predicting mean hippocampal volume in the HTN group at baseline; critical value (Eq. [2]): x = −(0.00399/(2 × (−0.000026))) = 77 mmHg. HTN, hypertension; WML, white matter lesion. Fig. 2: relationship between diastolic blood pressure readings and mean hippocampal volume measurements in hypertensive participants. GM was associated with SBP (β = −0.16, P = 0.02); the model (F4,166 = 14.0, P < 0.001) included BMI, smoking and age. In NTN, the mean hippocampal volume was inversely related to SBP (β = −0.17, P < 0.008; the regression model included age and sex, F3,206 = 22.4, P < 0.001). GM volume was inversely related to DBP (β = −0.15, P = 0.01; the model included age and sex, F3,206 = 32.3, P < 0.001).
Longitudinal analyses:
Changes in BP and volumes over time: SBP did not change significantly in either the HTN or the NTN group, but the interaction term was significant (P = 0.01), indicating a divergent pattern of SBP dynamics (estimate = −1.6, P = 0.07 for HTN and 1.4, P = 0.07 for NTN). Similarly, DBP patterns differed (P = 0.03 for the interaction term), with a lack of change in the HTN participants (estimate = −0.6, P = 0.31) and a significant increase in the NTN group (estimate = 1.1, P = 0.03). In both groups, WML volumes increased with time (estimate = 0.025, P < 0.001 for HTN and 0.028, P < 0.001 for NTN). In both groups, brain volumes decreased with time. The estimate for the fixed effect of time on the mean hippocampal volume was −0.0044 (P < 0.001) for HTN and −0.0038 (P < 0.001) for NTN participants; for GM, −0.48 (P < 0.001) and −0.45 (P < 0.001); and for WM, −0.30 (P < 0.001) and −0.22 (P = 0.001), for the HTN and NTN groups, respectively. Group × time interaction terms were not significant for models with brain or WML volumes, indicating that rates of change did not differ between groups. Figure S2 presents BP and WML changes in both groups.
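The time, group and group × time estimates above come from the MMRM specification given in the Methods (random intercept and slope per participant). Purely as an illustration of that model form, and not the authors' SPSS syntax, an equivalent model could be written as follows; the column names (wml, years, group, age, subject) are assumptions.

import pandas as pd
import statsmodels.formula.api as smf

def fit_mmrm(long_df: pd.DataFrame):
    # Fixed effects: time, group, group x time interaction, plus age as a covariate;
    # random intercept and random slope for time within each participant
    model = smf.mixedlm("wml ~ years * group + age", data=long_df,
                        groups=long_df["subject"], re_formula="~years")
    return model.fit(reml=True)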
Secondary analyses: Table S4 shows the stability of the secondary group definitions (untreated, uncontrolled and controlled HTN, Stage 1 HTN, and normotension). Table S5 presents the results from MMRM analyses of WML volumes and BP changes conducted with all five subgroups defined at baseline based on HTN control status; WML volume increased significantly in untreated and controlled hypertension, as well as in normotensive participants. Table S6 and Figure S3 show that participants who at baseline were classified as untreated (n = 19) or uncontrolled hypertension (n = 20) and were reclassified at the last visit as a good outcome (improved BP control) experienced the same increase in WML volumes as participants with a bad outcome (BP remained untreated or uncontrolled).

Baseline BP and changes in white matter lesion and brain volumes: In the hypertensive group, both baseline SBP (β = 0.33, P = 0.007) and DBP (β = −0.30, P = 0.008) were related to WML changes over time (the full model, F4,83 = 10.5, P < 0.001, also included baseline WML volume and smoking). Baseline BP was not related to changes in hippocampal, GM, or WM volumes. In normotensive participants, baseline BP was not related to change in any of the examined volumes.
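The ‘rates of change’ used in the analysis above are the per-participant slopes described in the Methods (volume regressed on time for each participant). A minimal sketch, with assumed column names, is:

import numpy as np
import pandas as pd

def per_participant_slopes(long_df: pd.DataFrame, value_col: str = "wml") -> pd.Series:
    # OLS slope of the volume measure on follow-up time (years) for each participant
    return (long_df.groupby("subject")
                   .apply(lambda g: np.polyfit(g["years"], g[value_col], 1)[0])
                   .rename(value_col + "_slope"))

These slopes can then be regressed on baseline SBP and DBP, as in the models reported here.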
Longitudinal changes in BP and changes in white matter lesion and brain volumes: In the HTN group, neither change in SBP nor change in DBP was associated with a change in WML, hippocampal, GM or WM volumes (Tables S7 and S8). In the NTN group, increases in both SBP (estimate for the fixed effect of SBP = 0.0023, P = 0.01) and DBP (estimate = 0.0035, P = 0.01) over time were related to an increase in WML volume. The significant interaction terms indicated that these dynamics differed meaningfully from the HTN group. Moreover, in the normotensive group, increases in both SBP (estimate for the fixed effect of SBP = −0.0002, P = 0.03) and DBP (estimate = −0.0004, P = 0.01) over time were related to a reduction in the mean hippocampal volume. However, only for SBP was the interaction term significant, indicating that these changes differed from the HTN group. Results for pulse pressure are presented in the Supplement, p. 8–9.
DISCUSSION: Longitudinal observations of brain health in aging have consistently concluded that BP has a significant impact on the nervous system. It is well known that increased cardiovascular burden, as represented by an elevated systolic or diastolic BP, can substantially increase the risk of developing hemorrhagic and ischemic stroke [1,2], white matter lesions and atrophy [14,32], and cognitive dysfunction [33,34]. In our examination of a cohort of normotensive and hypertensive participants, we posit that adequate BP is vital for a healthy brain. We have previously observed that in hypertension there appears to be a window of mid-range SBP, around 125 mmHg, which maximizes perfusion [21].
In the present study, we show that hypertensive participants with a similar SBP (124 mmHg) have lower WML volumes than participants with SBP further away from this optimum. This is a novel finding, further supporting the notion of an ideal blood pressure range that promotes optimal brain function and minimizes damage. In contrast to our results, the SPRINT-MIND study found that the group in which BP was lowered aggressively to less than 120 mmHg experienced smaller WML progression over time [18]. However, our findings are consistent with a report showing that in hypertensive participants low BP was related to higher periventricular WML (PWML) volume [35], and with an earlier study in which the increase in WML volume over time was most pronounced in participants with the highest BP who also had the greatest BP fluctuations [36]. We also find that among hypertensive participants there was a quadratic relationship between the mean hippocampal volume at baseline and DBP, such that participants with both low and high DBP had lower volumes. This observation, although interesting, is not completely novel. Others have previously described a parallel U-shaped association between brain atrophy and DBP, where both low and high concurrent DBP were related to greater cortical volume reduction [37]. The SPRINT-MIND study showed that the intensive BP treatment group experienced significantly greater hippocampal volume reduction [38]. This further supports the hypothesis of a BP optimum and reconciles reports showing that both high [39] and low BP [40] are related to lower hippocampal volumes. Among normotensive participants, relationships between BP and brain volumes were linear, such that both higher SBP and higher DBP were related to lower brain volumes. This is congruent with previous reports and meta-analyses associating both BP components with volumes [3]. Notably, no quadratic relationship was observed in this group, further strengthening our initial assumption that an optimum exists in the group with preexisting impairment of the vascular system. Reduction in the volume of specific brain regions over time is another parameter indicative of abnormal aging and is often reported in HTN [3,41]. Our HTN group had more global and hippocampal atrophy at baseline (Table 1) than their normotensive counterparts. However, after adjustment for age, the WML volume did not differ between the HTN and NTN groups. It is possible that the substantial percentage (55%) of participants with controlled BP attenuated the differences. Comparisons across the five groups showed that participants with untreated and uncontrolled hypertension tended to have higher WML volumes (Table S2). In our hypertensive participants, higher baseline SBP and lower baseline DBP were both independently related to the growth of WML over time. While the first finding is not surprising, we offer two possible explanations for the latter. First, as the mean arterial pressure, the major driver of perfusion pressure, depends mostly on DBP, low DBP in HTN may indicate a propensity to hypoperfusion. Second, this association would be consistent with a widening of pulse pressure, indicative of arterial stiffness. Indeed, an earlier analysis of the Lothian Birth Cohort 1936 showed that the sequence of DBP reduction, PP increase and rising internal carotid artery pulsatility index leads to WM damage [42]. Our supplemental analyses also confirm that baseline PP was related to WML growth.
It is worth noting that both the normotensive and the hypertensive groups experienced reductions in brain volumes and WML growth over time. This occurred despite dissimilar trajectories of blood pressure: increases in the NTN participants and an apparent lack of change in the HTN participants. A more fine-grained picture emerged from analyzing the five groups (Table S5): WML grew despite BP reductions in the untreated group, and a similar trend was observed among individuals with uncontrolled HTN, yet lesion volume increased concomitantly with BP increases in participants with controlled HTN and among normotensive participants. Even more interestingly, we show that WML growth was similar in participants with baseline untreated or uncontrolled HTN who at the last visit had disparate outcomes: improved or unchanged BP control (Table S6). This stands in opposition to the results of the SPRINT-MIND trial, where a smaller increase in WML volume was observed in participants with more aggressive BP lowering [18]. We believe that the main reason for this discrepancy is that our groups were much smaller and the follow-up time shorter than in the SPRINT study. In addition, the difference in WML volume change between SPRINT groups amounted to only 0.5 cm³. However, since BP load over time is unknown, it is also possible that in both groups BP was not controlled for a long time, and one group showed improvement only on the last measurement. Both groups could also have experienced similar, strong longitudinal BP fluctuations, which increase the probability of WML progression [36]. Then again, our results confirm that baseline WML volume is a good predictor of lesion progression, consistent with previous reports [43], and suggest that once the pathological process is set in motion, there is limited room for its modification. Such a view has important implications for patient management, further strengthening the notion that early intervention (close monitoring of BP in normotensives and early treatment) would be most successful. Finally, the findings are also consistent with our hypothesis that, in the setting of impaired cerebral flow regulation in participants with a more severe form of HTN, BP reductions are harmful. Not surprisingly, in light of the above considerations, only in the group of normotensive individuals were longitudinal increases of SBP and DBP related to lesion growth and to reduction in mean hippocampal volume. This is to be expected, as rising BP would contribute to brain deterioration. This finding is also in agreement with a previous report of community-residing participants over the age of 75, in which the 2-year change in ambulatory SBP was associated with the 2-year WML increase [44]. Our study has a few limitations. First, only a single blood pressure measurement was taken at each visit, rather than the average of three consecutive readings. This could have resulted in misclassification of cases. This was especially true for the ‘untreated’ group: only 63% of participants defined at baseline as untreated were still classified as hypertensive at follow-up. However, the high baseline WML volume in this group (Table S2) speaks against the possibility of misclassification. Almost everybody in the controlled and uncontrolled HTN groups remained hypertensive at follow-up, and 88% (91/104) of participants normotensive (<140/90) at baseline were normotensive at follow-up. Although the method of BP measurement was not optimal, it closely matched common real-life practice.
Second, the interval between assessments was short, and since WML progression is slow this might have precluded us from observing significant differences between groups. The analysis of WML volume did not differentiate periventricular from deep lesions; it is possible that the effect of hypertension on WML development varies by lesion location. We did not perform manual segmentation of silent infarctions, which may have artificially inflated lesion volumes, as some infarcts might have been erroneously classified as WML. Wang et al. [45] previously reported that including stroke lesions could bias WML volume estimation by as much as 20%. Although in our case the error was likely smaller, since their study included patients recruited soon after presenting with stroke symptoms while we excluded participants with overt infarcts, misclassification cannot be excluded. Only half of the participants had follow-up; although participants studied only once and those with additional examinations did not differ in most characteristics, this still could have introduced a bias. Another limitation is that our group was predominantly Caucasian, with a high level of educational achievement, so generalizability to a more diverse cohort is uncertain. We did not have data on the duration of hypertension, which could modify the relationships we examined. Normotensive individuals were slightly (on average 3 years) but statistically significantly younger than hypertensive participants; it is plausible that, had they been older, we might have seen stronger associations between BP and WML. A recent meta-analysis showed that the optimal blood pressure value changes with age [20]; we did not have a group large enough to address whether this is also the case for white matter lesions. Finally, since our study was observational, the results cannot carry the same weight as those derived from clinical trials, and causal relations cannot be inferred.

In conclusion, we find that among hypertensive participants there was a quadratic relationship between BP and WML as well as mean hippocampal volume. The WML burden was smallest when SBP was close to 124 mmHg, and the mean hippocampal volume was highest when DBP was in the high 70s mmHg. Such phenomena were not observed in the normotensive group, suggesting the existence of a BP optimum in hypertension. The current report extends our earlier findings showing that cerebral perfusion is maximized when systolic BP is in the 120–130 mmHg range. Longitudinally, all groups experienced lesion growth despite different BP trajectories, further suggesting that WML expansion may occur despite, or because of, BP reduction in individuals with a compromised vascular system. Considering the growing evidence of a nonlinear association between blood pressure and outcomes, further clinical trials are needed.

ACKNOWLEDGEMENTS: A special thanks to Ke Xi for her help with the statistical modeling as well as assistance in the production of regression plots for this manuscript.

Sources of funding: Study funding comes from NIH grants HL111724, NS104364, AG022374, AG12101, AG08051, and Alzheimer's Association NIRG-09-132490.

Disclosures: None.

Conflicts of interest: There are no conflicts of interest.

Supplementary Material:
Background: There is a well documented relationship between cardiovascular risk factors and the development of brain injury, which can lead to cognitive dysfunction. Hypertension (HTN) is a condition increasing the risk of silent and symptomatic ischemic brain lesions. Although the benefits of hypertension treatment are indisputable, the target blood pressure value at which the possibility of tissue damage is most reduced remains under debate. Methods: Our group performed a cross-sectional (n = 376) and longitudinal (n = 188) study of individuals without dementia or stroke (60% women, n = 228, age 68.5 ± 7.4 years; men, n = 148, age 70.7 ± 6.9 years). Participants were split into hypertensive (n = 169) and normotensive (n = 207) groups. MR images were obtained on a 3T system. Linear modeling was performed in the hypertensive and normotensive cohorts to investigate the relationship between systolic (SBP) and diastolic (DBP) blood pressure, white matter lesion (WML) volume, and brain volumes. Results: Participants in the hypertensive cohort showed a quadratic relationship between SBP and WML, with the lowest WML volumes measured in participants with readings of approximately 124 mmHg. Additionally, the hypertensive cohort exhibited a quadratic relationship between DBP and mean hippocampal volume, with participants with readings of approximately 77 mmHg showing the largest volumes. Longitudinally, all groups experienced WML growth despite different BP trajectories, further suggesting that WML expansion may occur despite, or because of, BP reduction in individuals with a compromised vascular system. Conclusions: Overall, our study suggests that in the hypertensive group there is a valley of mid-range blood pressures associated with less pathology in the brain.
null
null
16,795
341
[ 242, 156, 182, 382, 607, 1009, 239, 259, 614, 303, 775, 94, 209, 79 ]
21
[ "wml", "htn", "bp", "group", "volume", "volumes", "sbp", "participants", "baseline", "001" ]
[ "effects cerebral blood", "high blood pressure", "impact cardiovascular disease", "previously observed hypertension", "brain volumes hypertensive" ]
null
null
[CONTENT] hypertension | magnetic resonance imaging | neuro-imaging | radiology | systolic and diastolic blood pressure | white matter lesions [SUMMARY]
[CONTENT] hypertension | magnetic resonance imaging | neuro-imaging | radiology | systolic and diastolic blood pressure | white matter lesions [SUMMARY]
[CONTENT] hypertension | magnetic resonance imaging | neuro-imaging | radiology | systolic and diastolic blood pressure | white matter lesions [SUMMARY]
null
[CONTENT] hypertension | magnetic resonance imaging | neuro-imaging | radiology | systolic and diastolic blood pressure | white matter lesions [SUMMARY]
null
[CONTENT] Male | Female | Humans | Middle Aged | Aged | Blood Pressure | White Matter | Cross-Sectional Studies | Magnetic Resonance Imaging | Hypertension [SUMMARY]
[CONTENT] Male | Female | Humans | Middle Aged | Aged | Blood Pressure | White Matter | Cross-Sectional Studies | Magnetic Resonance Imaging | Hypertension [SUMMARY]
[CONTENT] Male | Female | Humans | Middle Aged | Aged | Blood Pressure | White Matter | Cross-Sectional Studies | Magnetic Resonance Imaging | Hypertension [SUMMARY]
null
[CONTENT] Male | Female | Humans | Middle Aged | Aged | Blood Pressure | White Matter | Cross-Sectional Studies | Magnetic Resonance Imaging | Hypertension [SUMMARY]
null
[CONTENT] effects cerebral blood | high blood pressure | impact cardiovascular disease | previously observed hypertension | brain volumes hypertensive [SUMMARY]
[CONTENT] effects cerebral blood | high blood pressure | impact cardiovascular disease | previously observed hypertension | brain volumes hypertensive [SUMMARY]
[CONTENT] effects cerebral blood | high blood pressure | impact cardiovascular disease | previously observed hypertension | brain volumes hypertensive [SUMMARY]
null
[CONTENT] effects cerebral blood | high blood pressure | impact cardiovascular disease | previously observed hypertension | brain volumes hypertensive [SUMMARY]
null
[CONTENT] wml | htn | bp | group | volume | volumes | sbp | participants | baseline | 001 [SUMMARY]
[CONTENT] wml | htn | bp | group | volume | volumes | sbp | participants | baseline | 001 [SUMMARY]
[CONTENT] wml | htn | bp | group | volume | volumes | sbp | participants | baseline | 001 [SUMMARY]
null
[CONTENT] wml | htn | bp | group | volume | volumes | sbp | participants | baseline | 001 [SUMMARY]
null
[CONTENT] cbf | cause | blood | risk | relationship sbp | shape relationship | htn | brain | shape | damage [SUMMARY]
[CONTENT] bp | volume | antihypertensive | volumes | models | 256 | participants | brain | tested | 140 90 mmhg [SUMMARY]
[CONTENT] htn | 001 | wml | sbp | volume | group | baseline | volumes | estimate | model [SUMMARY]
null
[CONTENT] htn | wml | volume | bp | 001 | volumes | sbp | group | baseline | participants [SUMMARY]
null
[CONTENT] ||| Hypertension (HTN ||| [SUMMARY]
[CONTENT] 376 | 188 | 60% | 228 | age 68.5 | 7.4 years | 148 | age 70.7 | 6.9 years ||| 169 | 207 ||| 3 ||| Linear | SBP | DBP | WML [SUMMARY]
[CONTENT] SBP | WML | WML | approximately 124 ||| DBP | approximately 77 ||| WML | BP | WML | BP [SUMMARY]
null
[CONTENT] ||| Hypertension (HTN ||| ||| 376 | 188 | 60% | 228 | age 68.5 | 7.4 years | 148 | age 70.7 | 6.9 years ||| 169 | 207 ||| 3 ||| Linear | SBP | DBP | WML ||| SBP | WML | WML | approximately 124 ||| DBP | approximately 77 ||| WML | BP | WML | BP ||| [SUMMARY]
null
Feasibility of assessing public health impacts of air pollution reduction programs on a local scale: New Haven case study.
21335318
New approaches to link health surveillance data with environmental and population exposure information are needed to examine the health benefits of risk management decisions.
BACKGROUND
Using a hybrid modeling approach that combines regional and local-scale air quality data, we estimated ambient concentrations for multiple air pollutants [e.g., PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter), NOx (nitrogen oxides)] for baseline year 2001 and projected emissions for 2010, 2020, and 2030. We assessed the feasibility of detecting health improvements in relation to reductions in air pollution for 26 different pollutant-health outcome linkages using both sample size and exploratory epidemiological simulations to further inform decision-making needs.
METHODS
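The "hybrid modeling approach" referenced above pairs a regional-scale model (CMAQ) for background concentrations with a local-scale dispersion model (AERMOD) for near-source gradients at census block group centroids. The fragment below is only a conceptual sketch of how such outputs might be combined; the purely additive combination, the column names, and the input values are assumptions, and the actual coupling handles chemistry and avoids double counting of local emissions.

```python
import pandas as pd

# Conceptual sketch: a coarse regional (CMAQ-style) background plus a local
# (AERMOD-style) increment at each census block group centroid. Column names,
# values, and the simple additive combination are assumptions for illustration.
cmaq = pd.DataFrame({"block_group": ["A", "B"], "regional_background": [8.5, 8.5]})
aermod = pd.DataFrame({"block_group": ["A", "B"], "local_increment": [4.2, 0.7]})

hybrid = cmaq.merge(aermod, on="block_group")
hybrid["total_pm25"] = hybrid["regional_background"] + hybrid["local_increment"]
print(hybrid)
```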
Model projections suggested decreases (~10-60%) in pollutant concentrations, mainly attributable to decreases in pollutants from local sources between 2001 and 2010. Models indicated considerable spatial variability in the concentrations of most pollutants. Sample size analyses supported the feasibility of identifying linkages between reductions in NOx and improvements in all-cause mortality, prevalence of asthma in children and adults, and cardiovascular and respiratory hospitalizations.
RESULTS
Substantial reductions in air pollution (e.g., ~60% for NOx) are needed to detect health impacts of environmental actions using traditional epidemiological study designs in small communities like New Haven. In contrast, exploratory epidemiological simulations suggest that it may be possible to demonstrate the health impacts of PM reductions by predicting intraurban pollution gradients within New Haven using coupled models.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Air Pollutants", "Air Pollution", "Cardiovascular Diseases", "Child", "Child, Preschool", "Cities", "Connecticut", "Conservation of Natural Resources", "Environmental Policy", "Feasibility Studies", "Health Status", "Humans", "Infant", "Infant, Newborn", "Linear Models", "Middle Aged", "Models, Chemical", "Public Health", "Respiratory Tract Diseases", "Young Adult" ]
3080930
null
null
Simulation-based epidemiological feasibility analysis
The average number of hospitalizations for CHD among those ≥ 65 years of age in each census block group decreased between 2001 and 2010, whereas average numbers of asthma hospitalizations increased (Table 3). We were unable to detect associations between small reductions in PM2.5 pollution concentrations and health outcomes, so we restricted our analysis to census block groups with PM2.5 reductions of > 4 μg/m3 (n = 30). For these census block groups, numbers of CHD hospitalizations were inversely associated with the estimated reduction in PM2.5 concentrations, indicating that numbers of hospitalizations decreased as the reduction in PM2.5 increased (p < 0.1; Table 4, Figure 3A). Asthma hospitalizations were also inversely associated with reductions in PM2.5 concentrations based on our simulations, suggesting that greater reductions in PM2.5 may slow the increase in asthma hospitalizations over time (Table 4, Figure 3B). However, the inverse association was weaker than for CHD hospitalizations. Finally, in an exploratory analysis, including additional surrogate variables in the regression models to capture neighborhood effects substantially increased model R2 values, whereas the PM2.5 effect estimates were somewhat attenuated, depending on the outcome chosen (data not shown).
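A minimal sketch of the regression step described above, assuming hypothetical file and column names: block groups are filtered to PM2.5 reductions greater than 4 μg/m3, and the simulated change in hospitalizations is regressed on the modeled reduction. The original analysis was run in SAS; this Python version only mirrors its structure.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per census block group, with the modeled
# 2001-to-2010 reduction in annual PM2.5 (ug/m3) and the simulated change in
# hospitalization counts (2010 minus 2001). File and column names are assumptions.
df = pd.read_csv("block_group_simulation.csv")

# Restrict to block groups with PM2.5 reductions greater than 4 ug/m3,
# as in the text (n = 30 in the study).
subset = df[df["pm25_reduction"] > 4]

# Regress the change in hospitalizations on the PM2.5 reduction; a negative
# slope corresponds to the inverse association reported for CHD and asthma.
X = sm.add_constant(subset[["pm25_reduction"]])
model = sm.OLS(subset["delta_hospitalizations"], X).fit()
print(model.summary())
```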
Results
Air quality modeling We produced combined CMAQ and AERMOD model results for the study area. Model estimates for NOx and PM2.5 were consistent with measured Air Quality System (AQS) monitoring data for the area (Johnson et al. 2010). Figure 1 shows maps of modeled PM2.5 and NOx concentrations for the baseline year (2001) and projections for 2010, 2020, and 2030. PM2.5 maps for all four time points show a wide range of concentrations in the study area, with high concentrations in the city center, near the port areas, and near major roadways such as I-95, whereas PM2.5 concentrations in suburban areas were much lower than in central parts of the study area. Finally, PM2.5 concentrations are projected to decrease over time, with the most pronounced decreases in areas with the highest estimated concentrations in 2001. Spatial patterns in ambient air quality concentrations for NOx relates strongly to the sources of mobile emissions such as major highways. NOx concentrations shown here depict strong spatial gradients because many of the locations for which we estimated concentrations (population-weighted centroids of the 318 census block groups) are near major roadways. Thus, contrasts between those areas and suburban areas are very pronounced. Finally, NOx concentrations are also projected to decrease considerably over time, particularly in locations close to roadways, because of the implementation of federal emission standards for mobile source emissions. Figure 2 shows distributions of annual and daily average modeled PM2.5 and NOx concentrations for the baseline model year 2001 and projections (2010, 2020, and 2030). We sorted the annual average concentrations for 2001 in order to group the 318 study area locations (census block groups) for which we divided estimates into three groups according to average pollutant concentrations at baseline: low (locations in the lowest 25% of the distribution), medium (locations in the 2nd and 3rd quartiles), and high (locations in the highest quartile of the distribution). As expected, daily averages are more variable than annual averages for both PM2.5 and NOx, which indicates the importance of temporal variability in pollutant concentrations. Downward trends in PM2.5 concentrations were evident for areas with medium and high concentrations, but not for low-concentration areas. Declines in NOx concentrations were evident over time for all three groups, but the decrease was also much sharper in the high-concentration areas. The models predicted large percentage decreases for NOx between 2001 and 2010 (61%) and with less pronounced decreases from 2001 to 2020 and 2030 (overall decreases of 78% and 81%, respectively). For PM2.5 the models predicted smaller percentage decreases from 2001 to 2010 (8%), 2020 (9%), and 2030 (9%). We produced combined CMAQ and AERMOD model results for the study area. Model estimates for NOx and PM2.5 were consistent with measured Air Quality System (AQS) monitoring data for the area (Johnson et al. 2010). Figure 1 shows maps of modeled PM2.5 and NOx concentrations for the baseline year (2001) and projections for 2010, 2020, and 2030. PM2.5 maps for all four time points show a wide range of concentrations in the study area, with high concentrations in the city center, near the port areas, and near major roadways such as I-95, whereas PM2.5 concentrations in suburban areas were much lower than in central parts of the study area. 
Finally, PM2.5 concentrations are projected to decrease over time, with the most pronounced decreases in areas with the highest estimated concentrations in 2001. Spatial patterns in ambient air quality concentrations for NOx relates strongly to the sources of mobile emissions such as major highways. NOx concentrations shown here depict strong spatial gradients because many of the locations for which we estimated concentrations (population-weighted centroids of the 318 census block groups) are near major roadways. Thus, contrasts between those areas and suburban areas are very pronounced. Finally, NOx concentrations are also projected to decrease considerably over time, particularly in locations close to roadways, because of the implementation of federal emission standards for mobile source emissions. Figure 2 shows distributions of annual and daily average modeled PM2.5 and NOx concentrations for the baseline model year 2001 and projections (2010, 2020, and 2030). We sorted the annual average concentrations for 2001 in order to group the 318 study area locations (census block groups) for which we divided estimates into three groups according to average pollutant concentrations at baseline: low (locations in the lowest 25% of the distribution), medium (locations in the 2nd and 3rd quartiles), and high (locations in the highest quartile of the distribution). As expected, daily averages are more variable than annual averages for both PM2.5 and NOx, which indicates the importance of temporal variability in pollutant concentrations. Downward trends in PM2.5 concentrations were evident for areas with medium and high concentrations, but not for low-concentration areas. Declines in NOx concentrations were evident over time for all three groups, but the decrease was also much sharper in the high-concentration areas. The models predicted large percentage decreases for NOx between 2001 and 2010 (61%) and with less pronounced decreases from 2001 to 2020 and 2030 (overall decreases of 78% and 81%, respectively). For PM2.5 the models predicted smaller percentage decreases from 2001 to 2010 (8%), 2020 (9%), and 2030 (9%). Sample-size–based feasibility analysis Table 1 exhibits the percent reduction in concentrations for a given pollutant that would be needed if we assumed a specific RR (1.01, 1.05, 1.10, 1.15, and 1.20) and an estimated reduction in adverse health outcome (2.5%, 5%, 10%, 15%, and 20%) using the assumptions stated above. Table 2 lists the health outcomes that we explored with corresponding minimum statistically detectable decreases in each outcome for the New Haven study population given the baseline rate of the outcome. The New Haven study population area is sufficient in size to examine reductions in adverse health outcomes ranging from a low of 2.5% (adult asthma prevalence) to a high of 10% (all-cause mortality and hospital discharge for cardiovascular diseases and respiratory causes). Based on the percentage decrease air pollution projected for 2010, 2020, and 2030 within New Haven for NOx (61%, 2010; 78%, 2020; 81%, 2030) and PM2.5 (8%, 2010; 9%, 2020 and 2030), the percent reduction in air pollution needed to produce a given change in the outcome (Table 1), and the minimum statistically significant percent decrease in each health outcome that can be detected in the New Haven study population (Table 2), we can assess the feasibility for detecting beneficial health effects of air pollution reductions. 
Of the 26 different air pollution–health outcome linkages assessed, only five, all NOx related, are potentially feasible (Table 2, last column): all-cause mortality, cardiovascular disease hospitalization, respiratory disease hospitalization discharge, current prevalence of asthma in children, and current prevalence of asthma in adults. Table 1 exhibits the percent reduction in concentrations for a given pollutant that would be needed if we assumed a specific RR (1.01, 1.05, 1.10, 1.15, and 1.20) and an estimated reduction in adverse health outcome (2.5%, 5%, 10%, 15%, and 20%) using the assumptions stated above. Table 2 lists the health outcomes that we explored with corresponding minimum statistically detectable decreases in each outcome for the New Haven study population given the baseline rate of the outcome. The New Haven study population area is sufficient in size to examine reductions in adverse health outcomes ranging from a low of 2.5% (adult asthma prevalence) to a high of 10% (all-cause mortality and hospital discharge for cardiovascular diseases and respiratory causes). Based on the percentage decrease air pollution projected for 2010, 2020, and 2030 within New Haven for NOx (61%, 2010; 78%, 2020; 81%, 2030) and PM2.5 (8%, 2010; 9%, 2020 and 2030), the percent reduction in air pollution needed to produce a given change in the outcome (Table 1), and the minimum statistically significant percent decrease in each health outcome that can be detected in the New Haven study population (Table 2), we can assess the feasibility for detecting beneficial health effects of air pollution reductions. Of the 26 different air pollution–health outcome linkages assessed, only five, all NOx related, are potentially feasible (Table 2, last column): all-cause mortality, cardiovascular disease hospitalization, respiratory disease hospitalization discharge, current prevalence of asthma in children, and current prevalence of asthma in adults. Simulation-based epidemiological feasibility analysis The average number of hospitalizations for CHD among those ≥ 65 years of age in each census block group decreased between 2001 and 2010, whereas average numbers of asthma hospitalizations increased (Table 3). We were unable to detect associations between small reductions in PM2.5 pollution concentrations and health outcomes, so we restricted our analysis to census block groups with PM2.5 reductions of > 4 μg/m3 (n = 30). For these census block groups, numbers of CHD hospitalizations were inversely associated with the estimated reduction in PM2.5 concentrations, indicating that numbers of hospitalizations decreased as the reduction in PM2.5 increased (p < 0.1; Table 4, Figure 3A). Asthma hospitalizations were also inversely associated with reductions in PM2.5 concentrations based on our simulations, suggesting that greater reductions in PM2.5 may slow the increase in asthma hospitalizations over time (Table 4, Figure 3B). However, the inverse association was weaker than for CHD hospitalizations. Finally, an exploratory analysis we conducted by including additional surrogate variables in the regression models aimed at capturing neighborhood effects caused substantial increases in model R2 values, whereas the PM2.5 effect estimates were attenuated somewhat depending on the outcome chosen (data not shown). The average number of hospitalizations for CHD among those ≥ 65 years of age in each census block group decreased between 2001 and 2010, whereas average numbers of asthma hospitalizations increased (Table 3). 
We were unable to detect associations between small reductions in PM2.5 pollution concentrations and health outcomes, so we restricted our analysis to census block groups with PM2.5 reductions of > 4 μg/m3 (n = 30). For these census block groups, numbers of CHD hospitalizations were inversely associated with the estimated reduction in PM2.5 concentrations, indicating that numbers of hospitalizations decreased as the reduction in PM2.5 increased (p < 0.1; Table 4, Figure 3A). Asthma hospitalizations were also inversely associated with reductions in PM2.5 concentrations based on our simulations, suggesting that greater reductions in PM2.5 may slow the increase in asthma hospitalizations over time (Table 4, Figure 3B). However, the inverse association was weaker than for CHD hospitalizations. Finally, an exploratory analysis we conducted by including additional surrogate variables in the regression models aimed at capturing neighborhood effects caused substantial increases in model R2 values, whereas the PM2.5 effect estimates were attenuated somewhat depending on the outcome chosen (data not shown).
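The grouping used in the results above, in which block groups are split by their 2001 baseline concentration into the lowest quartile, the middle two quartiles, and the highest quartile before trends are compared, can be reproduced with a quantile cut. The sketch below assumes a hypothetical input table and column names; it shows only the binning and a per-group percent-decrease calculation.

```python
import pandas as pd

# Hypothetical table of block-group annual average concentrations by year;
# file and column names are assumptions. Block groups are grouped by their 2001
# baseline into low (bottom 25%), medium (middle 50%), and high (top 25%) bins,
# mirroring the grouping described in the text.
conc = pd.read_csv("annual_average_concentrations.csv")

conc["baseline_group"] = pd.qcut(
    conc["nox_2001"], q=[0, 0.25, 0.75, 1.0], labels=["low", "medium", "high"]
)

# Percent decrease from 2001 to 2030 within each baseline group.
conc["pct_decrease_2030"] = 100 * (conc["nox_2001"] - conc["nox_2030"]) / conc["nox_2001"]
print(conc.groupby("baseline_group")["pct_decrease_2030"].mean())
```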
Conclusions
In this project we successfully applied, compared, and evaluated exposure assessment and epidemiological modeling tools in the context of observed public health status in a relatively small community, New Haven, Connecticut, and provided the U.S. EPA and local, state, and city organizations with a new modeling-based methodology to measure the impact of collective risk mitigation approaches and regulations. Furthermore, because no single regulation or program that affects air quality can be isolated to track its effect on health, this project provided critical findings on how regulatory agencies may better examine the complex interactions of cumulative impacts on air quality and health effects from multiple actions in other urban communities.
[ "Air quality modeling", "Sample-size–based feasibility analysis" ]
[ "We produced combined CMAQ and AERMOD model results for the study area. Model estimates for NOx and PM2.5 were consistent with measured Air Quality System (AQS) monitoring data for the area (Johnson et al. 2010). Figure 1 shows maps of modeled PM2.5 and NOx concentrations for the baseline year (2001) and projections for 2010, 2020, and 2030. PM2.5 maps for all four time points show a wide range of concentrations in the study area, with high concentrations in the city center, near the port areas, and near major roadways such as I-95, whereas PM2.5 concentrations in suburban areas were much lower than in central parts of the study area. Finally, PM2.5 concentrations are projected to decrease over time, with the most pronounced decreases in areas with the highest estimated concentrations in 2001.\nSpatial patterns in ambient air quality concentrations for NOx relates strongly to the sources of mobile emissions such as major highways. NOx concentrations shown here depict strong spatial gradients because many of the locations for which we estimated concentrations (population-weighted centroids of the 318 census block groups) are near major roadways. Thus, contrasts between those areas and suburban areas are very pronounced. Finally, NOx concentrations are also projected to decrease considerably over time, particularly in locations close to roadways, because of the implementation of federal emission standards for mobile source emissions.\nFigure 2 shows distributions of annual and daily average modeled PM2.5 and NOx concentrations for the baseline model year 2001 and projections (2010, 2020, and 2030). We sorted the annual average concentrations for 2001 in order to group the 318 study area locations (census block groups) for which we divided estimates into three groups according to average pollutant concentrations at baseline: low (locations in the lowest 25% of the distribution), medium (locations in the 2nd and 3rd quartiles), and high (locations in the highest quartile of the distribution). As expected, daily averages are more variable than annual averages for both PM2.5 and NOx, which indicates the importance of temporal variability in pollutant concentrations. Downward trends in PM2.5 concentrations were evident for areas with medium and high concentrations, but not for low-concentration areas. Declines in NOx concentrations were evident over time for all three groups, but the decrease was also much sharper in the high-concentration areas. The models predicted large percentage decreases for NOx between 2001 and 2010 (61%) and with less pronounced decreases from 2001 to 2020 and 2030 (overall decreases of 78% and 81%, respectively). For PM2.5 the models predicted smaller percentage decreases from 2001 to 2010 (8%), 2020 (9%), and 2030 (9%).", "Table 1 exhibits the percent reduction in concentrations for a given pollutant that would be needed if we assumed a specific RR (1.01, 1.05, 1.10, 1.15, and 1.20) and an estimated reduction in adverse health outcome (2.5%, 5%, 10%, 15%, and 20%) using the assumptions stated above. Table 2 lists the health outcomes that we explored with corresponding minimum statistically detectable decreases in each outcome for the New Haven study population given the baseline rate of the outcome. 
The New Haven study population area is sufficient in size to examine reductions in adverse health outcomes ranging from a low of 2.5% (adult asthma prevalence) to a high of 10% (all-cause mortality and hospital discharge for cardiovascular diseases and respiratory causes).\nBased on the percentage decrease air pollution projected for 2010, 2020, and 2030 within New Haven for NOx (61%, 2010; 78%, 2020; 81%, 2030) and PM2.5 (8%, 2010; 9%, 2020 and 2030), the percent reduction in air pollution needed to produce a given change in the outcome (Table 1), and the minimum statistically significant percent decrease in each health outcome that can be detected in the New Haven study population (Table 2), we can assess the feasibility for detecting beneficial health effects of air pollution reductions. Of the 26 different air pollution–health outcome linkages assessed, only five, all NOx related, are potentially feasible (Table 2, last column): all-cause mortality, cardiovascular disease hospitalization, respiratory disease hospitalization discharge, current prevalence of asthma in children, and current prevalence of asthma in adults." ]
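Two calculations drive the sample-size-based feasibility screen summarized above: the percent reduction in exposure required to produce a given percent reduction in an outcome under an assumed RR, and the minimum statistically detectable decrease in an outcome for the roughly 367,000-person study population (one-sided test, α = 0.05, power = 0.80). The extracted text omits the paper's exact exposure-reduction equation, so the first function below assumes a standard log-linear concentration–response form and should be read as an illustration of the logic rather than the published formula; the second uses a normal-approximation two-proportion power calculation with a hypothetical baseline rate.

```python
import numpy as np
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def pct_exposure_reduction_required(rr, pct_outcome_reduction):
    """Percent reduction in mean exposure needed for a given percent reduction
    in the outcome, assuming a log-linear concentration-response function
    y(c) = y0 * exp(beta * c) with beta = ln(RR) per mean concentration.
    This functional form is an assumption, not the paper's published equation;
    values >= 100 indicate infeasibility."""
    f = pct_outcome_reduction / 100.0
    return 100.0 * (-np.log(1.0 - f)) / np.log(rr)

print(pct_exposure_reduction_required(rr=1.20, pct_outcome_reduction=10))  # ~58% under this assumed form

def min_detectable_pct_decrease(baseline_rate, n, alpha=0.05, power=0.80):
    """Smallest percent decrease in a baseline outcome proportion detectable
    with two independent samples of size n (one-sided test), found by scanning
    candidate decreases with a normal-approximation power calculation."""
    solver = NormalIndPower()
    for pct in np.arange(0.5, 50.0, 0.5):
        reduced = baseline_rate * (1 - pct / 100.0)
        es = proportion_effectsize(baseline_rate, reduced)
        achieved = solver.power(effect_size=es, nobs1=n, alpha=alpha,
                                ratio=1.0, alternative="larger")
        if achieved >= power:
            return pct
    return None

# Example with a hypothetical baseline rate and the study-area population size.
print(min_detectable_pct_decrease(baseline_rate=0.01, n=367173))
```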
[ null, "methods" ]
[ "Materials and Methods", "Results", "Air quality modeling", "Sample-size–based feasibility analysis", "Simulation-based epidemiological feasibility analysis", "Discussion", "Conclusions" ]
[ "The New Haven Study Area is centered in the City of New Haven, Connecticut (population ~ 127,000), and extends to a 20-km radius, encompassing 318 census block groups in New Haven County with an estimated population in 2007 of more than 367,000 people. The City of New Haven is located on the southern coast of Connecticut on New Haven Harbor, which is fed by three rivers (the West, Mill, and Quinnipiac) that discharge into northern Long Island Sound. New Haven lies at the intersection of interstates I-91 and I-95, both major regional expressways that are often congested. In addition, several surface arteries pass through or around New Haven, including Routes 1, 10, 17, 34, and 63. Seaborne traffic passes through the Port of New Haven, a deep-water seaport that attracts a considerable number of barges and associated truck and rail traffic. In addition to several institutional power plants, one power generation facility serves the community. This wide range of emission source categories allows for testing of multipollutant emission control strategies.\nWe evaluated the overall feasibility of assessing the public health impact of air pollution reduction programs in the City of New Haven by linking projected emissions reductions from overall regulatory actions to estimated detectable health outcome changes. We began by identifying pollutants of interest for New Haven based on the local emissions inventory for the baseline year of 2001 (Weil 2004) and criteria air pollutants. For the present study, we focused on two air pollutants: NOx and PM2.5. We also identified health outcomes that have been associated with these pollutants: cardiovascular disease hospitalization and mortality; respiratory disease hospitalization and mortality; chronic obstructive pulmonary disease mortality and hospitalization; and asthma prevalence, diagnosis, and hospitalization.\nWe then evaluated existing data on ambient level air pollution, emission data, personal exposure data, and health outcome data for the New Haven area. As part of this data inventory evaluation, we assessed the relevance and completeness of data, as well as verification of locations and quantities of emissions from local sources. We then generated emission estimates for NOx and PM2.5 based on local emissions sources and the projected impacts of federal, state, and local regulatory reduction activities. We also applied an improved methodology to predict mobile source emissions (Cook et al. 2008).\nWe first estimated pollutant specific local-scale air concentrations using the U.S. EPA’s AERMOD dispersion model (Cimorelli et al. 2005). This model used information on local emission sources and local meteorological conditions to provide hourly and annual average concentrations at multiple locations corresponding to the weighted centroids of each of the 318 census block groups in the study area. We estimated total NOx and PM2.5 concentrations by combining regional background levels, chemically reactive pollutant estimates from the CMAQ (Community Multiscale Air Quality) model, and the AERMOD estimates. 
We estimated emissions using the baseline year (2001) emissions rates and projected emissions in 2010, 2020, and 2030 based on planned and anticipated pollution control programs.\nTo assess feasibility using a sample size approach, we first determined the minimum detectable decrease in each outcome relative to its baseline incidence rate [tests of two independent proportions for a (one-sided) likelihood ratio chi-square test with an α of 0.05 and power of 0.80] for a study population of 367,173 (i.e., the 2007 Census estimate for the New Haven population within the 318 block groups included in the study area). For some of the health outcomes, we made additional study area subpopulation calculations for different age groups (< 18 years, ≥ 18 years).\nThere is general consensus that RRs associated with air pollution exposure for a wide variety of health outcomes are typically less than 1.50, and usually within the range of 1.01–1.20, often for a 10-μg/m3 change in PM2.5 or an interquartile range change in gaseous pollutant concentrations. Jerrett et al. (2008) and Wellenius et al. (2005) found risk ratios or a RR in this range for NO2. RRs for PM2.5 and various outcomes in this range were found by Pope et al. (2002) and Sheppard et al. (1999), whereas Laden et al. (2006) found higher PM2.5 and mortality RRs of 1.16–1.28, and Peters et al. (2001) found an RR of 1.69 for PM2.5 and acute myocardial infarction.\nNext we determined the percent reduction in exposure that would be required to produce a given reduction in the outcome assuming a range of possible effect sizes for concentration–outcome associations. Specifically, we considered air pollution RR values of 1.01, 1.05, 1.10, 1.15, and 1.20 representing the increase in outcome (y) associated with an incremental increase in a given pollutant exposure equal to the level of the average value of the ambient pollution concentration (c) in the study population. The change in the outcome (Δy) associated with a change in exposure (Δc) is a function of the baseline incidence rate (y) and the risk coefficient (β) for a one-unit increase in exposure:\nwhere β = [ln(RR)]/c. The percent decrease in exposure (Δcreq) required to produce a particular reduction in the outcome for a given RR is calculated as\nValues of Δcreq < 100 indicate the percent reduction in exposure that would be required to produce a specific reduction in the outcome (Δy) assuming a given RR for the exposure–outcome association. Values of Δcreq ≥ 100 indicate that the corresponding value for Δy is not feasible, because exposure would have to be reduced by more than 100% to achieve it.\nFinally, we combined data on projected changes in mean annual ambient concentrations of air pollutants for 2010, 2020, and 2030 with the information on minimum detectable effect estimates and the percent reduction in exposure required to produce a given effect estimate to identify which air pollutant–health outcome associations (out of 26 possible combinations) would be most feasible for assessment.\nFor pollutants such as PM where projected reductions were relatively modest (~ 8%), we used an exploratory epidemiological methodology similar to that presented in Pope et al. (2009). 
Specifically, we used the simulated health data at census block group level (derived from county-specific health data and census information on demographics) to evaluate different strategies for demonstrating impacts of relatively small changes in ambient pollution (compared to NOx) over multiple years.\nThe outcomes for this analysis were the differences between the number of hospitalizations for 2001 and 2010, as illustrated by Equation 3:\nwhere ΔHC is the change in the number of hospitalizations at census block group C and H2010 and H2001 are the number of hospitalizations for the years 2010 and 2001, respectively.\nWe calculated the number of hospitalizations due to congestive heart disease [CHD; International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9CM; World Health Organization 2004), code 428] and asthma (ICD-9CM, code 493) for each census block group for the years 2010 and 2001. We chose these end points based on significant associations (1.28% increase in risk per 10-μg/m3 increase in same-day PM2.5) reported by Dominici et al. (2006) between PM and CHD hospitalizations for the Medicare cohort. We restricted hospitalizations due to CHD to the population > 65 years of age, and we calculated asthma hospitalizations separately for all ages and for the population < 25 years of age, because of known age-dependent differences [Connecticut Department of Public Health (CDPH) 2007]. We calculated the number of hospitalizations for the years 2001 and 2010 as\nwhere Hc is the number of hospitalizations for census block group C, Rate is the rate of hospitalization for females (F) and males (M) of race R (white, Hispanic, black), and PopC is the population of each subgroup (e.g., white female, black male, etc.).\nWe used 2000 U.S. Census Bureau (2001) data to estimate the size of each population subgroup in 2001. We used county-level population projections for 2010 to estimate the proportional change in each population subgroup from 2000 to 2010 and applied this to the 2000 census block group population to estimate 2010 census block populations for each subgroup.\nHospitalization rates for both outcomes are available for all of New Haven County for 2001 (CDPH 2001) and 2007 (CDPH 2007), and we used 2007 data for 2010 hospitalizations. The rates are broken down by age and sex, and age and race, but not by age, sex, and race. We therefore assumed constant ratios of rates of hospitalizations for males and females for all races. We calculated hospitalization rates according to sex, race, age (> 65 years of age for CHD, all ages, and < 25 years of age for asthma), health outcome (CHD or asthma), and year (2001 or 2010) as\nFor example, RateFR is the county-level hospitalization for females of race R; RateR is the county-level hospitalization rate for race R; Ratio of ratesM/F is the ratio of hospitalization rates between males and females; and PopFR and PopMR are county-level population sizes for females and males of race R, respectively. We then calculated RateMR by multiplying RateFR by Ratio of ratesM/F.\nReductions of PM2.5 at each census block group were then regressed against the changes in hospitalizations from 2001 to 2010 at each census block group. All regression analyses were performed using SAS (version 9.1; SAS Institute Inc., Cary, NC).\nOur analysis indirectly accounted for the effects due to changes in key ethnic/racial demographic profile by using group or sex relevant hospitalization rates as part of the health data and feasibility simulations. 
We explored the influence of introducing an additional explanatory variable in our health effects regressions to indirectly account for both neighborhood effects and the missing determinants of observed hospital admissions by computing average admissions either within a 3- or 4-km radius around each census tract (similarly considered by Özkaynak and Thurston 1987).", " Air quality modeling We produced combined CMAQ and AERMOD model results for the study area. Model estimates for NOx and PM2.5 were consistent with measured Air Quality System (AQS) monitoring data for the area (Johnson et al. 2010). Figure 1 shows maps of modeled PM2.5 and NOx concentrations for the baseline year (2001) and projections for 2010, 2020, and 2030. PM2.5 maps for all four time points show a wide range of concentrations in the study area, with high concentrations in the city center, near the port areas, and near major roadways such as I-95, whereas PM2.5 concentrations in suburban areas were much lower than in central parts of the study area. Finally, PM2.5 concentrations are projected to decrease over time, with the most pronounced decreases in areas with the highest estimated concentrations in 2001.\nSpatial patterns in ambient air quality concentrations for NOx relates strongly to the sources of mobile emissions such as major highways. NOx concentrations shown here depict strong spatial gradients because many of the locations for which we estimated concentrations (population-weighted centroids of the 318 census block groups) are near major roadways. Thus, contrasts between those areas and suburban areas are very pronounced. Finally, NOx concentrations are also projected to decrease considerably over time, particularly in locations close to roadways, because of the implementation of federal emission standards for mobile source emissions.\nFigure 2 shows distributions of annual and daily average modeled PM2.5 and NOx concentrations for the baseline model year 2001 and projections (2010, 2020, and 2030). We sorted the annual average concentrations for 2001 in order to group the 318 study area locations (census block groups) for which we divided estimates into three groups according to average pollutant concentrations at baseline: low (locations in the lowest 25% of the distribution), medium (locations in the 2nd and 3rd quartiles), and high (locations in the highest quartile of the distribution). As expected, daily averages are more variable than annual averages for both PM2.5 and NOx, which indicates the importance of temporal variability in pollutant concentrations. Downward trends in PM2.5 concentrations were evident for areas with medium and high concentrations, but not for low-concentration areas. Declines in NOx concentrations were evident over time for all three groups, but the decrease was also much sharper in the high-concentration areas. The models predicted large percentage decreases for NOx between 2001 and 2010 (61%) and with less pronounced decreases from 2001 to 2020 and 2030 (overall decreases of 78% and 81%, respectively). For PM2.5 the models predicted smaller percentage decreases from 2001 to 2010 (8%), 2020 (9%), and 2030 (9%).\nWe produced combined CMAQ and AERMOD model results for the study area. Model estimates for NOx and PM2.5 were consistent with measured Air Quality System (AQS) monitoring data for the area (Johnson et al. 2010). Figure 1 shows maps of modeled PM2.5 and NOx concentrations for the baseline year (2001) and projections for 2010, 2020, and 2030. 
PM2.5 maps for all four time points show a wide range of concentrations in the study area, with high concentrations in the city center, near the port areas, and near major roadways such as I-95, whereas PM2.5 concentrations in suburban areas were much lower than in central parts of the study area. Finally, PM2.5 concentrations are projected to decrease over time, with the most pronounced decreases in areas with the highest estimated concentrations in 2001.\nSpatial patterns in ambient air quality concentrations for NOx relates strongly to the sources of mobile emissions such as major highways. NOx concentrations shown here depict strong spatial gradients because many of the locations for which we estimated concentrations (population-weighted centroids of the 318 census block groups) are near major roadways. Thus, contrasts between those areas and suburban areas are very pronounced. Finally, NOx concentrations are also projected to decrease considerably over time, particularly in locations close to roadways, because of the implementation of federal emission standards for mobile source emissions.\nFigure 2 shows distributions of annual and daily average modeled PM2.5 and NOx concentrations for the baseline model year 2001 and projections (2010, 2020, and 2030). We sorted the annual average concentrations for 2001 in order to group the 318 study area locations (census block groups) for which we divided estimates into three groups according to average pollutant concentrations at baseline: low (locations in the lowest 25% of the distribution), medium (locations in the 2nd and 3rd quartiles), and high (locations in the highest quartile of the distribution). As expected, daily averages are more variable than annual averages for both PM2.5 and NOx, which indicates the importance of temporal variability in pollutant concentrations. Downward trends in PM2.5 concentrations were evident for areas with medium and high concentrations, but not for low-concentration areas. Declines in NOx concentrations were evident over time for all three groups, but the decrease was also much sharper in the high-concentration areas. The models predicted large percentage decreases for NOx between 2001 and 2010 (61%) and with less pronounced decreases from 2001 to 2020 and 2030 (overall decreases of 78% and 81%, respectively). For PM2.5 the models predicted smaller percentage decreases from 2001 to 2010 (8%), 2020 (9%), and 2030 (9%).\n Sample-size–based feasibility analysis Table 1 exhibits the percent reduction in concentrations for a given pollutant that would be needed if we assumed a specific RR (1.01, 1.05, 1.10, 1.15, and 1.20) and an estimated reduction in adverse health outcome (2.5%, 5%, 10%, 15%, and 20%) using the assumptions stated above. Table 2 lists the health outcomes that we explored with corresponding minimum statistically detectable decreases in each outcome for the New Haven study population given the baseline rate of the outcome. 
The New Haven study population area is sufficient in size to examine reductions in adverse health outcomes ranging from a low of 2.5% (adult asthma prevalence) to a high of 10% (all-cause mortality and hospital discharge for cardiovascular diseases and respiratory causes).\nBased on the percentage decrease air pollution projected for 2010, 2020, and 2030 within New Haven for NOx (61%, 2010; 78%, 2020; 81%, 2030) and PM2.5 (8%, 2010; 9%, 2020 and 2030), the percent reduction in air pollution needed to produce a given change in the outcome (Table 1), and the minimum statistically significant percent decrease in each health outcome that can be detected in the New Haven study population (Table 2), we can assess the feasibility for detecting beneficial health effects of air pollution reductions. Of the 26 different air pollution–health outcome linkages assessed, only five, all NOx related, are potentially feasible (Table 2, last column): all-cause mortality, cardiovascular disease hospitalization, respiratory disease hospitalization discharge, current prevalence of asthma in children, and current prevalence of asthma in adults.\nTable 1 exhibits the percent reduction in concentrations for a given pollutant that would be needed if we assumed a specific RR (1.01, 1.05, 1.10, 1.15, and 1.20) and an estimated reduction in adverse health outcome (2.5%, 5%, 10%, 15%, and 20%) using the assumptions stated above. Table 2 lists the health outcomes that we explored with corresponding minimum statistically detectable decreases in each outcome for the New Haven study population given the baseline rate of the outcome. The New Haven study population area is sufficient in size to examine reductions in adverse health outcomes ranging from a low of 2.5% (adult asthma prevalence) to a high of 10% (all-cause mortality and hospital discharge for cardiovascular diseases and respiratory causes).\nBased on the percentage decrease air pollution projected for 2010, 2020, and 2030 within New Haven for NOx (61%, 2010; 78%, 2020; 81%, 2030) and PM2.5 (8%, 2010; 9%, 2020 and 2030), the percent reduction in air pollution needed to produce a given change in the outcome (Table 1), and the minimum statistically significant percent decrease in each health outcome that can be detected in the New Haven study population (Table 2), we can assess the feasibility for detecting beneficial health effects of air pollution reductions. Of the 26 different air pollution–health outcome linkages assessed, only five, all NOx related, are potentially feasible (Table 2, last column): all-cause mortality, cardiovascular disease hospitalization, respiratory disease hospitalization discharge, current prevalence of asthma in children, and current prevalence of asthma in adults.\n Simulation-based epidemiological feasibility analysis The average number of hospitalizations for CHD among those ≥ 65 years of age in each census block group decreased between 2001 and 2010, whereas average numbers of asthma hospitalizations increased (Table 3). We were unable to detect associations between small reductions in PM2.5 pollution concentrations and health outcomes, so we restricted our analysis to census block groups with PM2.5 reductions of > 4 μg/m3 (n = 30). For these census block groups, numbers of CHD hospitalizations were inversely associated with the estimated reduction in PM2.5 concentrations, indicating that numbers of hospitalizations decreased as the reduction in PM2.5 increased (p < 0.1; Table 4, Figure 3A). 
Asthma hospitalizations were also inversely associated with reductions in PM2.5 concentrations based on our simulations, suggesting that greater reductions in PM2.5 may slow the increase in asthma hospitalizations over time (Table 4, Figure 3B). However, the inverse association was weaker than for CHD hospitalizations. Finally, an exploratory analysis we conducted by including additional surrogate variables in the regression models aimed at capturing neighborhood effects caused substantial increases in model R2 values, whereas the PM2.5 effect estimates were attenuated somewhat depending on the outcome chosen (data not shown).\nThe average number of hospitalizations for CHD among those ≥ 65 years of age in each census block group decreased between 2001 and 2010, whereas average numbers of asthma hospitalizations increased (Table 3). We were unable to detect associations between small reductions in PM2.5 pollution concentrations and health outcomes, so we restricted our analysis to census block groups with PM2.5 reductions of > 4 μg/m3 (n = 30). For these census block groups, numbers of CHD hospitalizations were inversely associated with the estimated reduction in PM2.5 concentrations, indicating that numbers of hospitalizations decreased as the reduction in PM2.5 increased (p < 0.1; Table 4, Figure 3A). Asthma hospitalizations were also inversely associated with reductions in PM2.5 concentrations based on our simulations, suggesting that greater reductions in PM2.5 may slow the increase in asthma hospitalizations over time (Table 4, Figure 3B). However, the inverse association was weaker than for CHD hospitalizations. Finally, an exploratory analysis we conducted by including additional surrogate variables in the regression models aimed at capturing neighborhood effects caused substantial increases in model R2 values, whereas the PM2.5 effect estimates were attenuated somewhat depending on the outcome chosen (data not shown).", "We produced combined CMAQ and AERMOD model results for the study area. Model estimates for NOx and PM2.5 were consistent with measured Air Quality System (AQS) monitoring data for the area (Johnson et al. 2010). Figure 1 shows maps of modeled PM2.5 and NOx concentrations for the baseline year (2001) and projections for 2010, 2020, and 2030. PM2.5 maps for all four time points show a wide range of concentrations in the study area, with high concentrations in the city center, near the port areas, and near major roadways such as I-95, whereas PM2.5 concentrations in suburban areas were much lower than in central parts of the study area. Finally, PM2.5 concentrations are projected to decrease over time, with the most pronounced decreases in areas with the highest estimated concentrations in 2001.\nSpatial patterns in ambient air quality concentrations for NOx relates strongly to the sources of mobile emissions such as major highways. NOx concentrations shown here depict strong spatial gradients because many of the locations for which we estimated concentrations (population-weighted centroids of the 318 census block groups) are near major roadways. Thus, contrasts between those areas and suburban areas are very pronounced. 
Finally, NOx concentrations are also projected to decrease considerably over time, particularly in locations close to roadways, because of the implementation of federal emission standards for mobile source emissions.\nFigure 2 shows distributions of annual and daily average modeled PM2.5 and NOx concentrations for the baseline model year 2001 and projections (2010, 2020, and 2030). We sorted the annual average concentrations for 2001 in order to group the 318 study area locations (census block groups) for which we divided estimates into three groups according to average pollutant concentrations at baseline: low (locations in the lowest 25% of the distribution), medium (locations in the 2nd and 3rd quartiles), and high (locations in the highest quartile of the distribution). As expected, daily averages are more variable than annual averages for both PM2.5 and NOx, which indicates the importance of temporal variability in pollutant concentrations. Downward trends in PM2.5 concentrations were evident for areas with medium and high concentrations, but not for low-concentration areas. Declines in NOx concentrations were evident over time for all three groups, but the decrease was also much sharper in the high-concentration areas. The models predicted large percentage decreases for NOx between 2001 and 2010 (61%) and with less pronounced decreases from 2001 to 2020 and 2030 (overall decreases of 78% and 81%, respectively). For PM2.5 the models predicted smaller percentage decreases from 2001 to 2010 (8%), 2020 (9%), and 2030 (9%).", "Table 1 exhibits the percent reduction in concentrations for a given pollutant that would be needed if we assumed a specific RR (1.01, 1.05, 1.10, 1.15, and 1.20) and an estimated reduction in adverse health outcome (2.5%, 5%, 10%, 15%, and 20%) using the assumptions stated above. Table 2 lists the health outcomes that we explored with corresponding minimum statistically detectable decreases in each outcome for the New Haven study population given the baseline rate of the outcome. The New Haven study population area is sufficient in size to examine reductions in adverse health outcomes ranging from a low of 2.5% (adult asthma prevalence) to a high of 10% (all-cause mortality and hospital discharge for cardiovascular diseases and respiratory causes).\nBased on the percentage decrease air pollution projected for 2010, 2020, and 2030 within New Haven for NOx (61%, 2010; 78%, 2020; 81%, 2030) and PM2.5 (8%, 2010; 9%, 2020 and 2030), the percent reduction in air pollution needed to produce a given change in the outcome (Table 1), and the minimum statistically significant percent decrease in each health outcome that can be detected in the New Haven study population (Table 2), we can assess the feasibility for detecting beneficial health effects of air pollution reductions. Of the 26 different air pollution–health outcome linkages assessed, only five, all NOx related, are potentially feasible (Table 2, last column): all-cause mortality, cardiovascular disease hospitalization, respiratory disease hospitalization discharge, current prevalence of asthma in children, and current prevalence of asthma in adults.", "The average number of hospitalizations for CHD among those ≥ 65 years of age in each census block group decreased between 2001 and 2010, whereas average numbers of asthma hospitalizations increased (Table 3). 
We were unable to detect associations between small reductions in PM2.5 pollution concentrations and health outcomes, so we restricted our analysis to census block groups with PM2.5 reductions of > 4 μg/m3 (n = 30). For these census block groups, numbers of CHD hospitalizations were inversely associated with the estimated reduction in PM2.5 concentrations, indicating that numbers of hospitalizations decreased as the reduction in PM2.5 increased (p < 0.1; Table 4, Figure 3A). Asthma hospitalizations were also inversely associated with reductions in PM2.5 concentrations based on our simulations, suggesting that greater reductions in PM2.5 may slow the increase in asthma hospitalizations over time (Table 4, Figure 3B). However, the inverse association was weaker than for CHD hospitalizations. Finally, an exploratory analysis we conducted by including additional surrogate variables in the regression models aimed at capturing neighborhood effects caused substantial increases in model R2 values, whereas the PM2.5 effect estimates were attenuated somewhat depending on the outcome chosen (data not shown).", "We used detailed information on local health and exposure-related data to assess the feasibility of identifying an impact of cumulative air pollution programs on environmental public health in New Haven for 26 different pollutant–health outcome linkages. Combined regional (CMAQ) and local-scale (AERMOD) air quality modeling analysis showed a small overall decrease for PM2.5 (~ 8–9%) in mean pollutant concentrations mostly from local sources and between 2001 and 2010; in contrast, we projected that NOx would decrease by > 60%. Most NOx reductions can be attributed to mobile source emission reduction programs. Thus, it is important to accurately characterize near-road impacts. Local reductions in PM2.5 are modest relative to high background PM concentrations. Statistical power calculations suggest that projected decreases in NOx may result in statistically significant improvements in health outcomes, including all-cause mortality, asthma prevalence in children and adults, and cardiovascular and respiratory hospitalizations. For other pollutants with more modest reductions, including PM, we determined the likelihood of performing a successful traditional air pollution reduction–health reduction analysis in New Haven to be poor. Alternative epidemiological study designs that use spatially and temporally resolved air quality and exposure models to characterize intraurban gradients were promising based on exploratory epidemiological simulations. However, health outcomes with low baseline rates would have to be strongly associated with air pollution exposures in order for exposure reductions to result in identifiable improvements and thus would not be ideal for examining risk management decisions.\nThis study illustrates the advantages of using air quality models over traditional epidemiological approaches using ambient measurements. For example, central-site data are especially problematic for certain PM components and species (e.g., elemental carbon, organic carbon, coarse and ultrafine PM) that exhibit significant spatial heterogeneity. Also, for many pollutants (e.g., toxic pollutants), ambient monitoring data are often nonexistent or limited. Appropriately verified air quality models, on the other hand, can provide the needed spatial and temporal resolution for multiple air pollutant concentrations at many locations. 
These same models can also be used to estimate the projected air quality and inputs for exposure models for future years, dependent on air pollution reduction activities, or due to the addition of new sources in a community (Isakov et al. 2006). For example, this model can address what happens if emissions from some specific stationary or mobile sources are reduced by certain amounts and what the associated impacts of these local controls versus regional controls may be. This model application helps determine which control options are most effective in reducing ambient concentrations.\nBoth the air quality modeling and feasibility analysis methodologies we used in this research have certain shortcomings. For instance, despite their advantages of being able to provide temporal (hourly) and spatial (at hundreds of locations) estimates, and having a long history of use by regulatory agencies in multipollutant mitigation strategies, models have uncertainties due to model inputs, algorithms, and model parameters (Sax and Isakov 2003). Therefore, in order to reduce uncertainty due to model inputs, detailed emissions and meteorological information should be provided for each model application. In the simulation-based epidemiological feasibility analyses we considered only single-pollutant models and did not include ecological covariates (e.g., income, poverty status, smoking) typically used in cross-sectional, ecological analysis (Özkaynak and Thurston 1987; Pope et al. 2009), because of a lack of complete information. Moreover, it is possible that some of the covariates may change over time, but presumably this may be less of an issue in local-scale assessments than in national-scale analyses. We did not perform joint optimizations with NOx and PM, which could be used to examine more complicated alternative study designs such as census block groups with low reduction levels in NOx but intermediate to high reductions in PM. Clearly, accounting for multipollutant strategies in future assessments will be important in implementing enhanced air pollution–health outcome risk management studies (Mauderly et al. 2010).\nThe linkages between air quality and exposure models (e.g., with the Stochastic Human Exposure and Dose Simulation Model and Hazardous Air Pollution Exposure Model) in the context of the New Haven study have been examined elsewhere (Isakov et al. 2009). Our biggest challenge has been with accessing geographically and temporally resolved health data in New Haven. Of course, this data gap is often a major challenge in other urban areas as well. Although there was strong local cooperation and local, state, and federal interest in working with the project, better research access to locally relevant health data should be both facilitated and encouraged. Given that the 2010 census has recently been collected and the air quality modeling for 2010 can be performed soon, we hope that the methodology we tested can be implemented in the near future using the actual 2010 local air quality modeling, census, and health data, in order to evaluate the results obtained from this feasibility study by using better databases and more robust models.\nBolstered by the findings from our study, the City of New Haven has been working to find better solutions for reducing air pollution burden and for understanding the impacts from air emissions. We presented the results from this analysis to the New Haven departments of Health, City Planning, and Economic Development and to the city chief executive officer. 
These results have been used by New Haven in finalizing their negotiations to obtain zero emissions from a proposed new power plant unit to meet peak demand operations, which will be achieved through offsets by the local power plant company and proposed retrofits of garbage trucks and some port operations and additional community benefits. Moreover, the city is also evaluating what can be done to reduce impacts from port operations and mitigate exposures at city schools located near busy roads and highways, in light of the detailed air quality modeling results and health risk evaluations presented here.", "In this project we successfully applied, compared, and evaluated exposure assessment and epidemiological modeling tools in the context of observed public health status in a relatively small community, New Haven, Connecticut, and provided the U.S. EPA and local, state, and city organizations with a new modeling-based methodology to measure the impact of collective risk mitigation approaches and regulations. Furthermore, because no single regulation or program that affects air quality can be isolated to track its effect on health, this project provided critical findings on how regulatory agencies may better examine the complex interactions of cumulative impacts on air quality and health effects from multiple actions in other urban communities." ]
[ "materials|methods", "results", null, "methods", "methods", "discussion", "conclusions" ]
[ "air pollution", "feasibility analysis", "health effects", "nitrogen oxides", "particulate matter" ]
Materials and Methods: The New Haven Study Area is centered in the City of New Haven, Connecticut (population ~ 127,000), and extends to a 20-km radius, encompassing 318 census block groups in New Haven County with an estimated population in 2007 of more than 367,000 people. The City of New Haven is located on the southern coast of Connecticut on New Haven Harbor, which is fed by three rivers (the West, Mill, and Quinnipiac) that discharge into northern Long Island Sound. New Haven lies at the intersection of interstates I-91 and I-95, both major regional expressways that are often congested. In addition, several surface arteries pass through or around New Haven, including Routes 1, 10, 17, 34, and 63. Seaborne traffic passes through the Port of New Haven, a deep-water seaport that attracts a considerable number of barges and associated truck and rail traffic. In addition to several institutional power plants, one power generation facility serves the community. This wide range of emission source categories allows for testing of multipollutant emission control strategies. We evaluated the overall feasibility of assessing the public health impact of air pollution reduction programs in the City of New Haven by linking projected emissions reductions from overall regulatory actions to estimated detectable health outcome changes. We began by identifying pollutants of interest for New Haven based on the local emissions inventory for the baseline year of 2001 (Weil 2004) and criteria air pollutants. For the present study, we focused on two air pollutants: NOx and PM2.5. We also identified health outcomes that have been associated with these pollutants: cardiovascular disease hospitalization and mortality; respiratory disease hospitalization and mortality; chronic obstructive pulmonary disease mortality and hospitalization; and asthma prevalence, diagnosis, and hospitalization. We then evaluated existing data on ambient level air pollution, emission data, personal exposure data, and health outcome data for the New Haven area. As part of this data inventory evaluation, we assessed the relevance and completeness of data, as well as verification of locations and quantities of emissions from local sources. We then generated emission estimates for NOx and PM2.5 based on local emissions sources and the projected impacts of federal, state, and local regulatory reduction activities. We also applied an improved methodology to predict mobile source emissions (Cook et al. 2008). We first estimated pollutant specific local-scale air concentrations using the U.S. EPA’s AERMOD dispersion model (Cimorelli et al. 2005). This model used information on local emission sources and local meteorological conditions to provide hourly and annual average concentrations at multiple locations corresponding to the weighted centroids of each of the 318 census block groups in the study area. We estimated total NOx and PM2.5 concentrations by combining regional background levels, chemically reactive pollutant estimates from the CMAQ (Community Multiscale Air Quality) model, and the AERMOD estimates. We estimated emissions using the baseline year (2001) emissions rates and projected emissions in 2010, 2020, and 2030 based on planned and anticipated pollution control programs. 
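The hybrid approach described above layers local-source increments from AERMOD on top of a regional background informed by CMAQ at each census block group centroid. As a rough illustration only (the study's actual post-processing step is not shown in the text), the Python sketch below combines a hypothetical background value with hypothetical local increments and applies assumed scaling factors for a projection year; the simple additive combination, column names, and all numeric values are assumptions, not the paper's data.

```python
import pandas as pd

# Hypothetical inputs: an annual-average regional background (CMAQ-informed) per
# pollutant, and AERMOD local-source increments at census block group centroids.
cmaq_background = {"pm25": 11.0, "nox": 9.0}   # illustrative values only
aermod = pd.DataFrame({
    "block_group": ["bg1", "bg2", "bg3"],
    "pm25_local": [2.4, 0.9, 1.6],
    "nox_local": [38.0, 6.5, 15.2],
})

def total_concentration(df, pollutant, background):
    """Combine the regional background with the local increment for one pollutant."""
    return background[pollutant] + df[f"{pollutant}_local"]

for pollutant in ("pm25", "nox"):
    aermod[f"{pollutant}_total_2001"] = total_concentration(aermod, pollutant, cmaq_background)

# For projection years the study re-ran the models with projected emissions; here
# that is approximated by scaling only the local increments with assumed factors.
scale_2010 = {"pm25": 0.92, "nox": 0.39}       # illustrative only
for pollutant in ("pm25", "nox"):
    aermod[f"{pollutant}_total_2010"] = (
        cmaq_background[pollutant] + scale_2010[pollutant] * aermod[f"{pollutant}_local"]
    )
print(aermod)
```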
To assess feasibility using a sample size approach, we first determined the minimum detectable decrease in each outcome relative to its baseline incidence rate [tests of two independent proportions for a (one-sided) likelihood ratio chi-square test with an α of 0.05 and power of 0.80] for a study population of 367,173 (i.e., the 2007 Census estimate for the New Haven population within the 318 block groups included in the study area). For some of the health outcomes, we made additional study area subpopulation calculations for different age groups (< 18 years, ≥ 18 years). There is general consensus that RRs associated with air pollution exposure for a wide variety of health outcomes are typically less than 1.50, and usually within the range of 1.01–1.20, often for a 10-μg/m3 change in PM2.5 or an interquartile range change in gaseous pollutant concentrations. Jerrett et al. (2008) and Wellenius et al. (2005) found risk ratios or a RR in this range for NO2. RRs for PM2.5 and various outcomes in this range were found by Pope et al. (2002) and Sheppard et al. (1999), whereas Laden et al. (2006) found higher PM2.5 and mortality RRs of 1.16–1.28, and Peters et al. (2001) found an RR of 1.69 for PM2.5 and acute myocardial infarction. Next we determined the percent reduction in exposure that would be required to produce a given reduction in the outcome assuming a range of possible effect sizes for concentration–outcome associations. Specifically, we considered air pollution RR values of 1.01, 1.05, 1.10, 1.15, and 1.20 representing the increase in outcome (y) associated with an incremental increase in a given pollutant exposure equal to the level of the average value of the ambient pollution concentration (c) in the study population. The change in the outcome (Δy) associated with a change in exposure (Δc) is a function of the baseline incidence rate (y) and the risk coefficient (β) for a one-unit increase in exposure: where β = [ln(RR)]/c. The percent decrease in exposure (Δcreq) required to produce a particular reduction in the outcome for a given RR is calculated as Values of Δcreq < 100 indicate the percent reduction in exposure that would be required to produce a specific reduction in the outcome (Δy) assuming a given RR for the exposure–outcome association. Values of Δcreq ≥ 100 indicate that the corresponding value for Δy is not feasible, because exposure would have to be reduced by more than 100% to achieve it. Finally, we combined data on projected changes in mean annual ambient concentrations of air pollutants for 2010, 2020, and 2030 with the information on minimum detectable effect estimates and the percent reduction in exposure required to produce a given effect estimate to identify which air pollutant–health outcome associations (out of 26 possible combinations) would be most feasible for assessment. For pollutants such as PM where projected reductions were relatively modest (~ 8%), we used an exploratory epidemiological methodology similar to that presented in Pope et al. (2009). Specifically, we used the simulated health data at census block group level (derived from county-specific health data and census information on demographics) to evaluate different strategies for demonstrating impacts of relatively small changes in ambient pollution (compared to NOx) over multiple years. 
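Two calculations are described above: the minimum statistically detectable decrease in an outcome for the New Haven study population (a one-sided two-proportion test with α = 0.05 and power 0.80), and the percent reduction in exposure (Δcreq) needed to produce a given outcome reduction at an assumed RR with β = ln(RR)/c. Because Equations 1 and 2 are not reproduced in the extracted text, the sketch below assumes a log-linear concentration–response form; it is an illustration, and the statsmodels-based power search is a substitution for the original likelihood ratio chi-square calculation, not the paper's implementation.

```python
from math import log
import numpy as np
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def min_detectable_decrease(baseline_rate, n_per_group, alpha=0.05, power=0.80):
    """Smallest relative decrease in an outcome rate detectable with a one-sided
    two-proportion test, found by scanning candidate reduced rates."""
    solver = NormalIndPower()
    for rel_decrease in np.arange(0.001, 0.999, 0.001):
        reduced = baseline_rate * (1 - rel_decrease)
        effect = proportion_effectsize(baseline_rate, reduced)
        achieved = solver.power(effect_size=effect, nobs1=n_per_group,
                                alpha=alpha, ratio=1.0, alternative="larger")
        if achieved >= power:
            return rel_decrease
    return None

def required_exposure_reduction(rr, target_rel_decrease):
    """Percent reduction in mean exposure needed for a target relative decrease in
    the outcome, assuming a log-linear concentration-response with beta = ln(RR)
    per mean-concentration increment (an assumed form, not the paper's equation)."""
    beta_times_c = log(rr)      # beta * c, with c the mean ambient concentration
    return 100.0 * -log(1.0 - target_rel_decrease) / beta_times_c

# Example: an outcome with ~1% baseline rate in a population of ~367,000, and the
# exposure reduction needed for a 10% drop in the outcome under RR = 1.10.
print(min_detectable_decrease(0.01, 367173))
print(required_exposure_reduction(1.10, 0.10))   # > 100 means not feasible
```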
The outcomes for this analysis were the differences between the number of hospitalizations for 2001 and 2010, as illustrated by Equation 3: where ΔHC is the change in the number of hospitalizations at census block group C and H2010 and H2001 are the number of hospitalizations for the years 2010 and 2001, respectively. We calculated the number of hospitalizations due to congestive heart disease [CHD; International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9CM; World Health Organization 2004), code 428] and asthma (ICD-9CM, code 493) for each census block group for the years 2010 and 2001. We chose these end points based on significant associations (1.28% increase in risk per 10-μg/m3 increase in same-day PM2.5) reported by Dominici et al. (2006) between PM and CHD hospitalizations for the Medicare cohort. We restricted hospitalizations due to CHD to the population > 65 years of age, and we calculated asthma hospitalizations separately for all ages and for the population < 25 years of age, because of known age-dependent differences [Connecticut Department of Public Health (CDPH) 2007]. We calculated the number of hospitalizations for the years 2001 and 2010 as where Hc is the number of hospitalizations for census block group C, Rate is the rate of hospitalization for females (F) and males (M) of race R (white, Hispanic, black), and PopC is the population of each subgroup (e.g., white female, black male, etc.). We used 2000 U.S. Census Bureau (2001) data to estimate the size of each population subgroup in 2001. We used county-level population projections for 2010 to estimate the proportional change in each population subgroup from 2000 to 2010 and applied this to the 2000 census block group population to estimate 2010 census block populations for each subgroup. Hospitalization rates for both outcomes are available for all of New Haven County for 2001 (CDPH 2001) and 2007 (CDPH 2007), and we used 2007 data for 2010 hospitalizations. The rates are broken down by age and sex, and age and race, but not by age, sex, and race. We therefore assumed constant ratios of rates of hospitalizations for males and females for all races. We calculated hospitalization rates according to sex, race, age (> 65 years of age for CHD, all ages, and < 25 years of age for asthma), health outcome (CHD or asthma), and year (2001 or 2010) as For example, RateFR is the county-level hospitalization for females of race R; RateR is the county-level hospitalization rate for race R; Ratio of ratesM/F is the ratio of hospitalization rates between males and females; and PopFR and PopMR are county-level population sizes for females and males of race R, respectively. We then calculated RateMR by multiplying RateFR by Ratio of ratesM/F. Reductions of PM2.5 at each census block group were then regressed against the changes in hospitalizations from 2001 to 2010 at each census block group. All regression analyses were performed using SAS (version 9.1; SAS Institute Inc., Cary, NC). Our analysis indirectly accounted for the effects due to changes in key ethnic/racial demographic profile by using group or sex relevant hospitalization rates as part of the health data and feasibility simulations. 
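The block-group hospitalization estimates (county-level subgroup rates multiplied by subgroup populations and summed), the change ΔHC = H2010 − H2001, and the regression of ΔHC on the modeled PM2.5 reduction can be summarized in a short sketch. The subgroup names, rates, populations, and regression inputs below are hypothetical, and the original regressions were run in SAS (version 9.1), not Python.

```python
import pandas as pd
import statsmodels.api as sm

def hospitalizations(pop, rates):
    """Expected hospitalizations in one census block group: sum over sex-by-race
    subgroups of (county-level rate x subgroup population)."""
    return sum(rates[group] * size for group, size in pop.items())

# Hypothetical subgroup populations and per-person rates for a single block group.
pop_2001 = {"white_f": 210, "white_m": 190, "black_f": 160, "black_m": 150}
rate_2001 = {"white_f": 0.012, "white_m": 0.015, "black_f": 0.017, "black_m": 0.021}
pop_2010 = {"white_f": 200, "white_m": 185, "black_f": 175, "black_m": 165}
rate_2010 = {"white_f": 0.010, "white_m": 0.013, "black_f": 0.016, "black_m": 0.019}

delta_h = hospitalizations(pop_2010, rate_2010) - hospitalizations(pop_2001, rate_2001)

# Regression of the change in hospitalizations on the modeled PM2.5 reduction,
# restricted to block groups with reductions greater than 4 ug/m3.
df = pd.DataFrame({
    "pm25_reduction": [4.2, 4.8, 5.5, 6.1, 4.4, 5.0],
    "delta_hosp": [-1.0, -1.5, -2.2, -2.9, -1.2, -1.8],
})
subset = df[df["pm25_reduction"] > 4]
model = sm.OLS(subset["delta_hosp"], sm.add_constant(subset["pm25_reduction"])).fit()
print(delta_h, model.params)
```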
We explored the influence of introducing an additional explanatory variable in our health effects regressions to indirectly account for both neighborhood effects and the missing determinants of observed hospital admissions by computing average admissions either within a 3- or 4-km radius around each census tract (similarly considered by Özkaynak and Thurston 1987).
Results. Air quality modeling: We produced combined CMAQ and AERMOD model results for the study area. Model estimates for NOx and PM2.5 were consistent with measured Air Quality System (AQS) monitoring data for the area (Johnson et al. 2010). Figure 1 shows maps of modeled PM2.5 and NOx concentrations for the baseline year (2001) and projections for 2010, 2020, and 2030. PM2.5 maps for all four time points show a wide range of concentrations in the study area, with high concentrations in the city center, near the port areas, and near major roadways such as I-95, whereas PM2.5 concentrations in suburban areas were much lower than in central parts of the study area. PM2.5 concentrations are projected to decrease over time, with the most pronounced decreases in areas with the highest estimated concentrations in 2001. Spatial patterns in ambient NOx concentrations relate strongly to sources of mobile emissions such as major highways. NOx concentrations depict strong spatial gradients because many of the locations for which we estimated concentrations (population-weighted centroids of the 318 census block groups) are near major roadways; thus, contrasts between those areas and suburban areas are very pronounced. NOx concentrations are also projected to decrease considerably over time, particularly in locations close to roadways, because of the implementation of federal emission standards for mobile sources. Figure 2 shows distributions of annual and daily average modeled PM2.5 and NOx concentrations for the baseline model year 2001 and projections (2010, 2020, and 2030). We sorted the annual average concentrations for 2001 in order to divide the 318 study area locations (census block groups) into three groups according to average pollutant concentrations at baseline: low (locations in the lowest 25% of the distribution), medium (locations in the 2nd and 3rd quartiles), and high (locations in the highest quartile of the distribution). As expected, daily averages are more variable than annual averages for both PM2.5 and NOx, which indicates the importance of temporal variability in pollutant concentrations. Downward trends in PM2.5 concentrations were evident for areas with medium and high concentrations, but not for low-concentration areas. Declines in NOx concentrations were evident over time for all three groups, but the decrease was much sharper in the high-concentration areas. The models predicted large percentage decreases for NOx between 2001 and 2010 (61%), with less pronounced additional decreases from 2001 to 2020 and 2030 (overall decreases of 78% and 81%, respectively). For PM2.5 the models predicted smaller percentage decreases from 2001 to 2010 (8%), 2020 (9%), and 2030 (9%).
Sample-size–based feasibility analysis: Table 1 exhibits the percent reduction in concentrations for a given pollutant that would be needed if we assumed a specific RR (1.01, 1.05, 1.10, 1.15, and 1.20) and an estimated reduction in an adverse health outcome (2.5%, 5%, 10%, 15%, and 20%) using the assumptions stated above. Table 2 lists the health outcomes that we explored, with corresponding minimum statistically detectable decreases in each outcome for the New Haven study population given the baseline rate of the outcome. The New Haven study population is sufficient in size to examine reductions in adverse health outcomes ranging from a low of 2.5% (adult asthma prevalence) to a high of 10% (all-cause mortality and hospital discharge for cardiovascular diseases and respiratory causes). Based on the percentage decreases in air pollution projected for 2010, 2020, and 2030 within New Haven for NOx (61%, 2010; 78%, 2020; 81%, 2030) and PM2.5 (8%, 2010; 9%, 2020 and 2030), the percent reduction in air pollution needed to produce a given change in the outcome (Table 1), and the minimum statistically significant percent decrease in each health outcome that can be detected in the New Haven study population (Table 2), we can assess the feasibility of detecting beneficial health effects of air pollution reductions. Of the 26 different air pollution–health outcome linkages assessed, only five, all NOx related, are potentially feasible (Table 2, last column): all-cause mortality, cardiovascular disease hospitalization, respiratory disease hospital discharge, current prevalence of asthma in children, and current prevalence of asthma in adults.
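The feasibility screen described above amounts to comparing, for each pollutant–outcome pair, the exposure reduction needed to achieve the minimum detectable outcome decrease (at an assumed RR) with the projected concentration reduction. The sketch below reuses the assumed log-linear form from the earlier example; the RR values are illustrative and the printed flags are not the paper's Table 1 and Table 2 entries.

```python
from math import log

def required_reduction_pct(rr, target_outcome_decrease):
    # Assumed log-linear concentration-response, as in the earlier sketch.
    return 100.0 * -log(1.0 - target_outcome_decrease) / log(rr)

# Projected 2001-2010 concentration reductions and minimum detectable outcome
# decreases drawn from the text above; the specific pairings are illustrative.
projected_pct = {"NOx": 61.0, "PM2.5": 8.0}
linkages = [
    ("NOx", "all-cause mortality", 0.10),
    ("NOx", "adult asthma prevalence", 0.025),
    ("PM2.5", "cardiovascular hospitalization", 0.10),
]

for rr in (1.05, 1.10, 1.20):
    for pollutant, outcome, min_detectable in linkages:
        needed = required_reduction_pct(rr, min_detectable)
        feasible = needed < 100 and needed <= projected_pct[pollutant]
        print(f"RR={rr:.2f} {pollutant} vs {outcome}: need {needed:.0f}% "
              f"vs projected {projected_pct[pollutant]:.0f}% -> feasible={feasible}")
```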
Simulation-based epidemiological feasibility analysis: The average number of hospitalizations for CHD among those ≥ 65 years of age in each census block group decreased between 2001 and 2010, whereas average numbers of asthma hospitalizations increased (Table 3). We were unable to detect associations between small reductions in PM2.5 concentrations and health outcomes, so we restricted our analysis to census block groups with PM2.5 reductions of > 4 μg/m3 (n = 30). For these census block groups, numbers of CHD hospitalizations were inversely associated with the estimated reduction in PM2.5 concentrations, indicating that numbers of hospitalizations decreased as the reduction in PM2.5 increased (p < 0.1; Table 4, Figure 3A). Asthma hospitalizations were also inversely associated with reductions in PM2.5 concentrations in our simulations, suggesting that greater reductions in PM2.5 may slow the increase in asthma hospitalizations over time (Table 4, Figure 3B). However, the inverse association was weaker than for CHD hospitalizations. Finally, in an exploratory analysis, including additional surrogate variables in the regression models to capture neighborhood effects substantially increased model R2 values, whereas the PM2.5 effect estimates were somewhat attenuated, depending on the outcome chosen (data not shown).
Discussion: We used detailed information on local health and exposure-related data to assess the feasibility of identifying an impact of cumulative air pollution programs on environmental public health in New Haven for 26 different pollutant–health outcome linkages. Combined regional (CMAQ) and local-scale (AERMOD) air quality modeling showed a small overall decrease (~ 8–9%) in mean PM2.5 concentrations, mostly from local sources, between 2001 and 2010; in contrast, we projected that NOx would decrease by > 60%. Most NOx reductions can be attributed to mobile source emission reduction programs; thus, it is important to accurately characterize near-road impacts. Local reductions in PM2.5 are modest relative to high background PM concentrations. Statistical power calculations suggest that projected decreases in NOx may result in statistically significant improvements in health outcomes, including all-cause mortality, asthma prevalence in children and adults, and cardiovascular and respiratory hospitalizations. For other pollutants with more modest reductions, including PM, we determined the likelihood of performing a successful traditional air pollution reduction–health reduction analysis in New Haven to be poor. Alternative epidemiological study designs that use spatially and temporally resolved air quality and exposure models to characterize intraurban gradients were promising based on exploratory epidemiological simulations. However, health outcomes with low baseline rates would have to be strongly associated with air pollution exposures in order for exposure reductions to result in identifiable improvements, and thus would not be ideal for examining risk management decisions. This study illustrates the advantages of using air quality models over traditional epidemiological approaches that rely on ambient measurements. For example, central-site data are especially problematic for certain PM components and species (e.g., elemental carbon, organic carbon, coarse and ultrafine PM) that exhibit significant spatial heterogeneity. Also, for many pollutants (e.g., toxic pollutants), ambient monitoring data are often nonexistent or limited. Appropriately verified air quality models, on the other hand, can provide the needed spatial and temporal resolution for multiple air pollutant concentrations at many locations. These same models can also be used to estimate projected air quality and inputs for exposure models for future years, dependent on air pollution reduction activities or the addition of new sources in a community (Isakov et al. 2006).
For example, this model can address what happens if emissions from some specific stationary or mobile sources are reduced by certain amounts and what the associated impacts of these local controls versus regional controls may be. This model application helps determine which control options are most effective in reducing ambient concentrations. Both the air quality modeling and feasibility analysis methodologies we used in this research have certain shortcomings. For instance, despite their advantages of being able to provide temporal (hourly) and spatial (at hundreds of locations) estimates, and having a long history of use by regulatory agencies in multipollutant mitigation strategies, models have uncertainties due to model inputs, algorithms, and model parameters (Sax and Isakov 2003). Therefore, in order to reduce uncertainty due to model inputs, detailed emissions and meteorological information should be provided for each model application. In the simulation-based epidemiological feasibility analyses we considered only single-pollutant models and did not include ecological covariates (e.g., income, poverty status, smoking) typically used in cross-sectional, ecological analysis (Özkaynak and Thurston 1987; Pope et al. 2009), because of a lack of complete information. Moreover, it is possible that some of the covariates may change over time, but presumably this may be less of an issue in local-scale assessments than in national-scale analyses. We did not perform joint optimizations with NOx and PM, which could be used to examine more complicated alternative study designs such as census block groups with low reduction levels in NOx but intermediate to high reductions in PM. Clearly, accounting for multipollutant strategies in future assessments will be important in implementing enhanced air pollution–health outcome risk management studies (Mauderly et al. 2010). The linkages between air quality and exposure models (e.g., with the Stochastic Human Exposure and Dose Simulation Model and Hazardous Air Pollution Exposure Model) in the context of the New Haven study have been examined elsewhere (Isakov et al. 2009). Our biggest challenge has been with accessing geographically and temporally resolved health data in New Haven. Of course, this data gap is often a major challenge in other urban areas as well. Although there was strong local cooperation and local, state, and federal interest in working with the project, better research access to locally relevant health data should be both facilitated and encouraged. Given that the 2010 census has recently been collected and the air quality modeling for 2010 can be performed soon, we hope that the methodology we tested can be implemented in the near future using the actual 2010 local air quality modeling, census, and health data, in order to evaluate the results obtained from this feasibility study by using better databases and more robust models. Bolstered by the findings from our study, the City of New Haven has been working to find better solutions for reducing air pollution burden and for understanding the impacts from air emissions. We presented the results from this analysis to the New Haven departments of Health, City Planning, and Economic Development and to the city chief executive officer. 
These results have been used by New Haven in finalizing their negotiations to obtain zero emissions from a proposed new power plant unit to meet peak demand operations, which will be achieved through offsets by the local power plant company and proposed retrofits of garbage trucks and some port operations and additional community benefits. Moreover, the city is also evaluating what can be done to reduce impacts from port operations and mitigate exposures at city schools located near busy roads and highways, in light of the detailed air quality modeling results and health risk evaluations presented here. Conclusions: In this project we successfully applied, compared, and evaluated exposure assessment and epidemiological modeling tools in the context of observed public health status in a relatively small community, New Haven, Connecticut, and provided the U.S. EPA and local, state, and city organizations with a new modeling-based methodology to measure the impact of collective risk mitigation approaches and regulations. Furthermore, because no single regulation or program that affects air quality can be isolated to track its effect on health, this project provided critical findings on how regulatory agencies may better examine the complex interactions of cumulative impacts on air quality and health effects from multiple actions in other urban communities.
Background: New approaches to link health surveillance data with environmental and population exposure information are needed to examine the health benefits of risk management decisions. Methods: Using a hybrid modeling approach that combines regional and local-scale air quality data, we estimated ambient concentrations for multiple air pollutants [e.g., PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter), NOx (nitrogen oxides)] for baseline year 2001 and projected emissions for 2010, 2020, and 2030. We assessed the feasibility of detecting health improvements in relation to reductions in air pollution for 26 different pollutant-health outcome linkages using both sample size and exploratory epidemiological simulations to further inform decision-making needs. Results: Model projections suggested decreases (~10-60%) in pollutant concentrations, mainly attributable to decreases in pollutants from local sources between 2001 and 2010. Models indicated considerable spatial variability in the concentrations of most pollutants. Sample size analyses supported the feasibility of identifying linkages between reductions in NOx and improvements in all-cause mortality, prevalence of asthma in children and adults, and cardiovascular and respiratory hospitalizations. Conclusions: Substantial reductions in air pollution (e.g., ~60% for NOx) are needed to detect health impacts of environmental actions using traditional epidemiological study designs in small communities like New Haven. In contrast, exploratory epidemiological simulations suggest that it may be possible to demonstrate the health impacts of PM reductions by predicting intraurban pollution gradients within New Haven using coupled models.
null
null
6,374
280
[ 511, 320 ]
7
[ "concentrations", "pm2", "air", "health", "2010", "nox", "2001", "new", "outcome", "hospitalizations" ]
[ "new haven connecticut", "air pollution programs", "emissions proposed new", "estimate new haven", "emissions major highways" ]
null
null
null
[CONTENT] air pollution | feasibility analysis | health effects | nitrogen oxides | particulate matter [SUMMARY]
[CONTENT] air pollution | feasibility analysis | health effects | nitrogen oxides | particulate matter [SUMMARY]
[CONTENT] air pollution | feasibility analysis | health effects | nitrogen oxides | particulate matter [SUMMARY]
[CONTENT] air pollution | feasibility analysis | health effects | nitrogen oxides | particulate matter [SUMMARY]
null
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Air Pollutants | Air Pollution | Cardiovascular Diseases | Child | Child, Preschool | Cities | Connecticut | Conservation of Natural Resources | Environmental Policy | Feasibility Studies | Health Status | Humans | Infant | Infant, Newborn | Linear Models | Middle Aged | Models, Chemical | Public Health | Respiratory Tract Diseases | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Air Pollutants | Air Pollution | Cardiovascular Diseases | Child | Child, Preschool | Cities | Connecticut | Conservation of Natural Resources | Environmental Policy | Feasibility Studies | Health Status | Humans | Infant | Infant, Newborn | Linear Models | Middle Aged | Models, Chemical | Public Health | Respiratory Tract Diseases | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Air Pollutants | Air Pollution | Cardiovascular Diseases | Child | Child, Preschool | Cities | Connecticut | Conservation of Natural Resources | Environmental Policy | Feasibility Studies | Health Status | Humans | Infant | Infant, Newborn | Linear Models | Middle Aged | Models, Chemical | Public Health | Respiratory Tract Diseases | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Air Pollutants | Air Pollution | Cardiovascular Diseases | Child | Child, Preschool | Cities | Connecticut | Conservation of Natural Resources | Environmental Policy | Feasibility Studies | Health Status | Humans | Infant | Infant, Newborn | Linear Models | Middle Aged | Models, Chemical | Public Health | Respiratory Tract Diseases | Young Adult [SUMMARY]
null
null
[CONTENT] new haven connecticut | air pollution programs | emissions proposed new | estimate new haven | emissions major highways [SUMMARY]
[CONTENT] new haven connecticut | air pollution programs | emissions proposed new | estimate new haven | emissions major highways [SUMMARY]
[CONTENT] new haven connecticut | air pollution programs | emissions proposed new | estimate new haven | emissions major highways [SUMMARY]
[CONTENT] new haven connecticut | air pollution programs | emissions proposed new | estimate new haven | emissions major highways [SUMMARY]
null
null
[CONTENT] concentrations | pm2 | air | health | 2010 | nox | 2001 | new | outcome | hospitalizations [SUMMARY]
[CONTENT] concentrations | pm2 | air | health | 2010 | nox | 2001 | new | outcome | hospitalizations [SUMMARY]
[CONTENT] concentrations | pm2 | air | health | 2010 | nox | 2001 | new | outcome | hospitalizations [SUMMARY]
[CONTENT] concentrations | pm2 | air | health | 2010 | nox | 2001 | new | outcome | hospitalizations [SUMMARY]
null
null
[CONTENT] hospitalizations | pm2 | numbers | reductions | table | asthma hospitalizations | chd | reductions pm2 | increased | increased table [SUMMARY]
[CONTENT] concentrations | pm2 | nox | table | areas | 2001 | 2030 | hospitalizations | 2020 | 2010 [SUMMARY]
[CONTENT] provided | project | health | modeling | air quality | quality | new | risk mitigation approaches | evaluated exposure | evaluated exposure assessment epidemiological [SUMMARY]
[CONTENT] concentrations | pm2 | health | air | hospitalizations | nox | new | table | 2010 | outcome [SUMMARY]
null
null
[CONTENT] ||| 2.5 | year 2001 | 2010 | 2020 | 2030 ||| 26 [SUMMARY]
[CONTENT] 60% | between 2001 and 2010 ||| ||| [SUMMARY]
[CONTENT] ~60% | New Haven ||| New Haven [SUMMARY]
[CONTENT] ||| 2.5 | year 2001 | 2010 | 2020 | 2030 ||| 26 ||| ||| 60% | between 2001 and 2010 ||| ||| ||| ~60% | New Haven ||| New Haven [SUMMARY]
null
Current status on health sciences research productivity pertaining to Angola up to 2014.
26126605
Health research driven by the healthcare demands of the population can provide an informative evidence base to support decision-making processes on health policies, programmes, and practices. This paper surveyed the production of scientific research concerning health in Angola, specifically to access the publication rate over time, the main research topics and scientific fields, and the contribution of Angolan researchers and institutions.
BACKGROUND
The study focused on data collected in a retrospective literature search in Biblioteca Virtual em Saúde (BVS) as of June 8, 2014, with the keyword "Angola" and on content information in correspondent publications deposited in PubMed.
METHODS
BVS generated 1,029 hits, 74.6 % of which were deposited in PubMed where 301 abstracts were described. From 1979 to 2003, there were 62 publications and in 2004-2013 the quantity increased four-fold (n = 232); malaria was the most frequent topic (n = 42). Angola was the country with the largest number of publications, taking into account the primary affiliation of the first author (n = 45). Universities, institutes, or research centres accounted for 65 % of the publications and in descending order Portugal, Brazil, and the United States of America occupied the three first positions. Epidemiology was by far the most frequent field of research (n = 165).
RESULTS
The number of publications has increased steadily over the past 10 years, with predominance on malaria topics. Angola was the country with the largest number of major affiliations of the first author, but the contribution of Angolan institutions was relatively low, indicating a need to reinforce academic research institutions in the country.
CONCLUSIONS
[ "Academies and Institutes", "Angola", "Authorship", "Bibliometrics", "Biomedical Research", "Brazil", "Epidemiologic Studies", "Humans", "Malaria", "Portugal", "Publishing", "United States", "Universities" ]
4486127
Background
The concept of research for health, expressed on the World Health Report 2013, covers a broader range of investigations than health research. This wider view of research will become increasingly important in the transition from the United Nations Millennium Development Goals to a post-2015 sustainable development agenda [1]. In a more restricted concept, health research has the potential to contribute to the identification of social and economic determinants of health. In the context of implementation research, it is a major basis for decision-making processes on health policies, programmes, and practices, especially when driven by the demand of the health problems of the population [2, 3]. Moreover, conducting more health research can stimulate countries to strengthen their capacity to produce and use such research to guide decision-making, improve the efficiency of the investments, increase accountability in the work of researchers, and contribute substantially to the research training of human resources in developing countries [4, 5]. In the perspective of the World Health Organization (WHO), health research must be founded on building capacity to step up health research systems and support demand-driven health problems, particularly in low- and middle-income countries. Additionally, health research must be embedded with standards for best practices and ensure that quality evidence is turned into affordable health technologies and evidence-informed policy [6]. However, at the global level, the proportion of health research that considers these aspects is negligible [7]. In particular, evidence shows that the research output in developing countries is considerably less than in developed countries [8]. Therefore, it is crucial to get an accurate picture of the landscape of the scientific research produced in developing countries in order to identify gaps critical for social development [9]. In sub-Saharan Africa, in particular, the production of health research can affect teaching, the quality of undergraduate training, career promotion, and translational research relevant to the country, which is comparable to developed countries [5]. There have been documented bibliometric studies of research production in Africa, as a whole, as well as in other continents. This then motivated studies of the chronological progression of research within individual African countries [10–18]. These publications demonstrate the feebleness of African authorship in the most cited scientific publications in health [19]. Angola, a Portuguese-speaking country, is independent since 1975 and has spent 27 years in progressive socio-political instability, with the drawback of the civil war that ended 12 years ago. Nevertheless, Angola is making progress towards the achievement of the health Millennium Development Goals but is still far from achieving them [1]. The reconstruction of the national health system and policy will benefit from the knowledge of the factors influencing the success of certain interventions and the measure of their impact on populations. Therefore, health research will contribute to increased efficiency and effectiveness of health policy and decision making processes in Angola on programmes and practices. Support for science in Angola is less than 0.1 % of its GDP, compared to a world average of 1.7 % [20]. Consequently, a more serious commitment would be needed to build local and national capacity to accelerate the improvement of human resources training for health research. 
In this context, it becomes even more appropriate to carry out a survey on what has been produced as the result of health research. When searching the literature on health research in Angola, we found only one publication dated from 1953 which linked research to health, entitled “Scientific Research and the Health Status of Angola” [21]. The lack of studies on health research produced in Angola in the last 61 years motivated this work. We carried out a non-systematic literature survey to evaluate the progression of health research pertaining to Angola, answering specifically the following questions: 1) how has the number of publications evolved over the years? 2) what are the most published topics? 3) what is the level of involvement of Angolan researchers and institutions in this published research? and 4) what are the most represented research fields? We expect that this study will contribute to increasing the current state of knowledge on health research related to Angola and identify gaps that may guide the WHO recommendations on health research and consequently the adoption of effective interventions that will strengthen health research in Angola.
null
null
Results and discussion
Searching the BVS on June 8, 2014, with the keyword “Angola”, showed 1,029 results, of which 74.6 % were also in MEDLINE/PubMed database. Consequently, it was considered that further search in MEDLINE/PubMed would produce representative research publication data on health in Angola. Therefore, we undertook a further search for relevant Angolan publications in MEDLINE/PubMed, keeping the keyword “Angola”. To describe the number of publications across time we included all publications retrieved from BVS without the abstract filter. All other variables were taken into account using the availability of an abstract as the main inclusion criteria, and this yielded 658 abstracts. Applying the exclusion criteria we selected 301 abstracts for further description. Publication rate As shown in Fig. 1, the publications in MEDLINE/PubMed addressing health issues related to Angola date back to 1909. MEDLINE/PubMed, the first bibliographic database in the life sciences, with a focus on biomedicine, mainly covers the literature published from 1946 onwards, but it also includes more dated publications [22]. There has been a remarkable, albeit irregular, increase in the number of publications during the 1990s, with a more sustained increase from 2004 onwards, and a peak in 2013. From 1979 to 2003 (34 years) there were 62 publications, but in the last 10 years (2004–2013) the number increased four-fold (n = 232), suggesting a rampant increment in publication rate (Fig. 1). Additionally, it is noteworthy that the number of publications by the beginning of June 2014 already reached half the figure recorded in 2013, supporting the notion that Angola is motivating a steady increase in health-related scientific publications. This exponential trend was also observed in Palestine, a war zone, where a total of 770 publications were retrieved in the medical and biomedical field across a 10-year period (01 January 2002 to 31 December 2011), averaging approximately 80 articles per year. Interestingly, the number of publications has also increased four-fold during the period 2002–2011, with a stabilization in the last 3 years of the study period [23].Fig. 1Between 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola Between 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola A study published in 2007, which analyzed the geography of the PubMed biomedical publications in Africa between 1996 and 2005, concluded that the contribution of Africa, in particular, was considerably less than that of other continents. Indeed, it was evident that there was a continuous increase in publication output during this period in all African sub-regions, although Angola was located in the lowest quintile with less than two publications per year [17]. In the present study, we found, in the same period, five publications that included researchers that were affiliated with Angolan institutions. However, this did not change the lower position of Angola. Another publication about the scientific production in Public Health and Epidemiology in the WHO African region, in the period 1991–2010, obtained very similar results for Angola, placing it in the lowest quintile, with 11–50 publications [18]. 
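The publication-rate analysis above tallies MEDLINE/PubMed records mentioning Angola by year. The original counts came from a BVS search followed by manual screening against inclusion and exclusion criteria; the sketch below only illustrates how raw per-year PubMed hit counts for the keyword could be retrieved with Biopython's Entrez utilities. The email address is a placeholder, and these unscreened counts would not match the study's curated totals.

```python
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder; NCBI requires a contact address

def pubmed_count(term, year):
    """Number of PubMed records matching `term` with a publication date in `year`."""
    handle = Entrez.esearch(db="pubmed", term=term, datetype="pdat",
                            mindate=str(year), maxdate=str(year), retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

# Raw keyword counts per year; the study additionally required an available
# abstract and topical relevance before a record was retained.
counts = {year: pubmed_count("Angola", year) for year in range(2004, 2014)}
print(counts)
```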
This increase in health research in Angola matches with a recent study about the WHO African Region between 2000 and 2014 [24]. In this study, Angola was located in quintile three, with 100 to 499 articles. As shown in Fig. 1, the publications in MEDLINE/PubMed addressing health issues related to Angola date back to 1909. MEDLINE/PubMed, the first bibliographic database in the life sciences, with a focus on biomedicine, mainly covers the literature published from 1946 onwards, but it also includes more dated publications [22]. There has been a remarkable, albeit irregular, increase in the number of publications during the 1990s, with a more sustained increase from 2004 onwards, and a peak in 2013. From 1979 to 2003 (34 years) there were 62 publications, but in the last 10 years (2004–2013) the number increased four-fold (n = 232), suggesting a rampant increment in publication rate (Fig. 1). Additionally, it is noteworthy that the number of publications by the beginning of June 2014 already reached half the figure recorded in 2013, supporting the notion that Angola is motivating a steady increase in health-related scientific publications. This exponential trend was also observed in Palestine, a war zone, where a total of 770 publications were retrieved in the medical and biomedical field across a 10-year period (01 January 2002 to 31 December 2011), averaging approximately 80 articles per year. Interestingly, the number of publications has also increased four-fold during the period 2002–2011, with a stabilization in the last 3 years of the study period [23].Fig. 1Between 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola Between 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola A study published in 2007, which analyzed the geography of the PubMed biomedical publications in Africa between 1996 and 2005, concluded that the contribution of Africa, in particular, was considerably less than that of other continents. Indeed, it was evident that there was a continuous increase in publication output during this period in all African sub-regions, although Angola was located in the lowest quintile with less than two publications per year [17]. In the present study, we found, in the same period, five publications that included researchers that were affiliated with Angolan institutions. However, this did not change the lower position of Angola. Another publication about the scientific production in Public Health and Epidemiology in the WHO African region, in the period 1991–2010, obtained very similar results for Angola, placing it in the lowest quintile, with 11–50 publications [18]. This increase in health research in Angola matches with a recent study about the WHO African Region between 2000 and 2014 [24]. In this study, Angola was located in quintile three, with 100 to 499 articles. Health research topics Malaria was the disease that stood out, with 45 publications, followed by HIV infection (n = 26), trypanosomiasis (n = 24), and themes of epidemiology/public health (n = 24). Tuberculosis occupied the tenth position with nine papers (Fig. 2). As a whole, infectious diseases was a topic of 59 % of the publications under analysis (n = 178).Fig. 
2Publications are deposited in MEDLINE/PubMed up to June 8, 2014 Publications are deposited in MEDLINE/PubMed up to June 8, 2014 The most investigated topics are aligned with the epidemiological profile of Angola, regarding the main causes of morbidity and mortality (communicable diseases), with a large emphasis on malaria [25]. Interestingly, there was a disproportionate volume of publications on HIV/AIDS relative to tuberculosis, which is unusual given the frequent association of both morbidities [26]. In an assessment of scientific output in public health and epidemiology in Africa, in the period 1991–2010, HIV/AIDS infection was the predominant topic (11.3 %) while malaria accounted for 8.6 % and tuberculosis for 7.1 % [16]. This analysis also considers the issue of the importance of research funding concerning infectious diseases [27]. The trend towards epidemiological transition in sub-Saharan Africa, owing to the increase of chronic non-communicable diseases, is not yet reflected at the scientific publication level, neither in the present study nor in the aforementioned African publications [16, 28–30]. Malaria was the disease that stood out, with 45 publications, followed by HIV infection (n = 26), trypanosomiasis (n = 24), and themes of epidemiology/public health (n = 24). Tuberculosis occupied the tenth position with nine papers (Fig. 2). As a whole, infectious diseases was a topic of 59 % of the publications under analysis (n = 178).Fig. 2Publications are deposited in MEDLINE/PubMed up to June 8, 2014 Publications are deposited in MEDLINE/PubMed up to June 8, 2014 The most investigated topics are aligned with the epidemiological profile of Angola, regarding the main causes of morbidity and mortality (communicable diseases), with a large emphasis on malaria [25]. Interestingly, there was a disproportionate volume of publications on HIV/AIDS relative to tuberculosis, which is unusual given the frequent association of both morbidities [26]. In an assessment of scientific output in public health and epidemiology in Africa, in the period 1991–2010, HIV/AIDS infection was the predominant topic (11.3 %) while malaria accounted for 8.6 % and tuberculosis for 7.1 % [16]. This analysis also considers the issue of the importance of research funding concerning infectious diseases [27]. The trend towards epidemiological transition in sub-Saharan Africa, owing to the increase of chronic non-communicable diseases, is not yet reflected at the scientific publication level, neither in the present study nor in the aforementioned African publications [16, 28–30]. Authorship and affiliation The primary affiliation of the first author was most frequently in an Angolan institution (45 publications) (Fig. 3). A total of 150 institutions were listed as the first author’s primary affiliation, revealing a high institutional dispersion; 14.7 % were Angolan institutions. Furthermore, 65 % of publications represented research conducted in universities and institutes or research centres (Fig. 4). The majority of publications by first author affiliation country and the number of academic research institutions were located in Portugal, United States of America, or Brazil (Fig. 5). The high number of publications with first author affiliation in Angola denotes the concern of researchers in linking their work to institutions from the country of study. 
Authorship and affiliation
The primary affiliation of the first author was most frequently in an Angolan institution (45 publications) (Fig. 3). A total of 150 institutions were listed as the first author's primary affiliation, revealing a high institutional dispersion; 14.7 % were Angolan institutions. Furthermore, 65 % of publications represented research conducted in universities, institutes, or research centres (Fig. 4). Considering the country of first-author affiliation, most publications and most of the academic research institutions were located in Portugal, the United States of America, or Brazil (Fig. 5). The high number of publications with a first-author affiliation in Angola reflects researchers' concern to link their work to institutions in the country under study. Nevertheless, this involvement is dispersed across many different institutions, many of which, namely hospitals and non-governmental organizations, have relatively modest publication track records. Furthermore, the concentration of academic research in universities, institutes, and research centres affiliated abroad indicates the need to reinforce academic research capacity in Angola.
Fig. 3 Country denotes the location in which the primary institution of the first author affiliation was based, in publications up to June 8, 2014
Fig. 4 Publications deposited in MEDLINE/PubMed up to June 8, 2014
Fig. 5 Country denotes the location in which the primary institution of the first author affiliation was based, concerning publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014
Publications by Angolan researchers
An Angolan researcher was the first author in 58 (19 %) abstracts. The frequency of Angolan first authors or co-authors increased when the first author was affiliated with an Angolan institution (Fig. 6). It is noteworthy that roughly one-fifth of all publications had an Angolan first author. The number of African first authors on publications relating to Africa increased between 1991 and 2010 [16]. This finding was reinforced for the WHO African Region between 2000 and 2014, when the contribution of African first authors doubled, although it remains minimal (0.7 % in 2000 and 1.3 % in 2014) [24].
Fig. 6 Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014, according to the country in which the primary institution of the first author affiliation was based
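The pattern shown in Fig. 6 is essentially a cross-tabulation of first-author affiliation country against the presence of Angolan authors. A minimal sketch, again assuming a hypothetical CSV with hypothetical column names, is:

```python
# Sketch: cross-tabulate first-author affiliation country against the presence
# of an Angolan first author or co-author. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("angola_abstracts.csv")

table = pd.crosstab(
    df["first_author_country"],     # e.g. Angola, Portugal, USA, Brazil, ...
    df["has_angolan_author"],       # True if the first author or any co-author is Angolan
    normalize="index",              # row-wise proportions
)
print(table)
```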
Scientific fields
Epidemiological research was by far the most common research area (n = 165) (Fig. 7). Although a few population genetics studies (data not shown) are included in this category, this finding is in accordance with the burden of infectious diseases in the country. However, the increasing frequency of non-communicable diseases in sub-Saharan Africa is likely to become a matter of concern for research in Africa in the coming years.
Fig. 7 Publications deposited in MEDLINE/PubMed up to June 8, 2014
On the other hand, only a small number of publications (n = 32; 10.6 %) addressed socioeconomic and professional research, including governance, health policies, health systems management, and human resources. Considering that Angola was a war zone for more than 50 % of its existence as an independent state, we might have expected a larger number of research publications on relevant topics such as the socioeconomic determinants of health (e.g., unemployment, poverty, education, family dissolution, and lifestyles) and the performance of health units and health systems. However, we recognize that studies on some of those socioeconomic determinants of health may fall within the social and economic sciences and, therefore, be more accessible in other bibliographic databases. Further, the lack of studies on human resources in the various health professions, including studies in medical education, is also noticeable.
We expect that the increasing number of health professional schools in Angola, particularly since 2009, including six new public medical schools, will raise interest in these topics, especially if undergraduate and postgraduate curricula in Angola incorporate more programmes focused on improving research skills and providing on-the-job training in epidemiology [31].
Journals choice
We also found that, although publications were spread across a wide variety of journals, 11.4 % were concentrated in PLoS ONE and Malaria Journal, both open access journals indexed by common databases (Fig. 8). This finding is not unusual given that malaria was the most common research topic among the Angolan publications observed in the present study. Notwithstanding the debate around open access publishing, our findings reinforce the importance of providing readers in financially disadvantaged countries with a wide range of relevant research and information [32, 33].
Fig. 8 Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014
Limitations of the study
The fact that Angola is a Portuguese-speaking country while the majority of the journals indexed by MEDLINE/PubMed are in English may have introduced a selection bias due to language barriers. Although MEDLINE/PubMed contained the vast majority of the publications in this study, and the search began in the BVS, which includes papers published in journals indexed by SciELO, these sources do not cover all scientific and biomedical journals. Furthermore, we did not search the grey literature. Thus, the number of Angolan authors and co-authors might be underestimated. Moreover, Angola still does not have an exhaustive electronic database of researchers containing individual publication records. In addition, the incorrect reporting of authors' nationality and affiliation, observed in a few papers in the present study, may further contribute to the underestimation of Angolan-authored publications. Finally, the study did not include an analysis of journal impact factors, and the authors did not have access to a citation index database, which would have allowed bibliometric citation analyses.
Conclusions
This initial contribution characterizing the productivity of health research pertaining to Angola showed that the number of publications has increased steadily over the past 10 years and that the research has focused mainly on infectious diseases, notably malaria. Angola was the single most frequent country in which the primary institution of the first author affiliation was based, and about 20 % of the publications had an Angolan as first author. However, academic institutions in Portugal, the United States of America, and Brazil contributed more to research publication than universities and research centres in Angola. The research was largely epidemiological and, remarkably, included very few publications on socioeconomic and professional issues such as health policy, governance, and the management of health systems and human resources. In Angola, with the large expansion of higher education since 2009, universities should build and strengthen their capacities in research and in the training of human resources. Including research training in the curricula of undergraduate courses in health sciences will help to graduate professionals able to act as agents of change, competent managers of human resources, and promoters of evidence-based policies [34]. Aligning the national health research agenda with the national programme of health development, together with the wide use of multilateral international cooperation, is critical to developing the operational tools towards universal health coverage in Angola as conceived by the WHO [1]. In summary, this work highlights the rapid increase in scientific publications related to Angola, but also the need to reinforce academically driven research in the country.
[ "Publication rate", "Health research topics", "Authorship and affiliation", "Publications by Angolan researchers", "Scientific fields", "Journals choice", "Limitations of the study" ]
[ "As shown in Fig. 1, the publications in MEDLINE/PubMed addressing health issues related to Angola date back to 1909. MEDLINE/PubMed, the first bibliographic database in the life sciences, with a focus on biomedicine, mainly covers the literature published from 1946 onwards, but it also includes more dated publications [22]. There has been a remarkable, albeit irregular, increase in the number of publications during the 1990s, with a more sustained increase from 2004 onwards, and a peak in 2013. From 1979 to 2003 (34 years) there were 62 publications, but in the last 10 years (2004–2013) the number increased four-fold (n = 232), suggesting a rampant increment in publication rate (Fig. 1). Additionally, it is noteworthy that the number of publications by the beginning of June 2014 already reached half the figure recorded in 2013, supporting the notion that Angola is motivating a steady increase in health-related scientific publications. This exponential trend was also observed in Palestine, a war zone, where a total of 770 publications were retrieved in the medical and biomedical field across a 10-year period (01 January 2002 to 31 December 2011), averaging approximately 80 articles per year. Interestingly, the number of publications has also increased four-fold during the period 2002–2011, with a stabilization in the last 3 years of the study period [23].Fig. 1Between 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola\nBetween 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola\nA study published in 2007, which analyzed the geography of the PubMed biomedical publications in Africa between 1996 and 2005, concluded that the contribution of Africa, in particular, was considerably less than that of other continents. Indeed, it was evident that there was a continuous increase in publication output during this period in all African sub-regions, although Angola was located in the lowest quintile with less than two publications per year [17]. In the present study, we found, in the same period, five publications that included researchers that were affiliated with Angolan institutions. However, this did not change the lower position of Angola. Another publication about the scientific production in Public Health and Epidemiology in the WHO African region, in the period 1991–2010, obtained very similar results for Angola, placing it in the lowest quintile, with 11–50 publications [18]. This increase in health research in Angola matches with a recent study about the WHO African Region between 2000 and 2014 [24]. In this study, Angola was located in quintile three, with 100 to 499 articles.", "Malaria was the disease that stood out, with 45 publications, followed by HIV infection (n = 26), trypanosomiasis (n = 24), and themes of epidemiology/public health (n = 24). Tuberculosis occupied the tenth position with nine papers (Fig. 2). As a whole, infectious diseases was a topic of 59 % of the publications under analysis (n = 178).Fig. 2Publications are deposited in MEDLINE/PubMed up to June 8, 2014\nPublications are deposited in MEDLINE/PubMed up to June 8, 2014\nThe most investigated topics are aligned with the epidemiological profile of Angola, regarding the main causes of morbidity and mortality (communicable diseases), with a large emphasis on malaria [25]. 
Interestingly, there was a disproportionate volume of publications on HIV/AIDS relative to tuberculosis, which is unusual given the frequent association of both morbidities [26]. In an assessment of scientific output in public health and epidemiology in Africa, in the period 1991–2010, HIV/AIDS infection was the predominant topic (11.3 %) while malaria accounted for 8.6 % and tuberculosis for 7.1 % [16]. This analysis also considers the issue of the importance of research funding concerning infectious diseases [27]. The trend towards epidemiological transition in sub-Saharan Africa, owing to the increase of chronic non-communicable diseases, is not yet reflected at the scientific publication level, neither in the present study nor in the aforementioned African publications [16, 28–30].", "The primary affiliation of the first author was most frequently in an Angolan institution (45 publications) (Fig. 3). A total of 150 institutions were listed as the first author’s primary affiliation, revealing a high institutional dispersion; 14.7 % were Angolan institutions. Furthermore, 65 % of publications represented research conducted in universities and institutes or research centres (Fig. 4). The majority of publications by first author affiliation country and the number of academic research institutions were located in Portugal, United States of America, or Brazil (Fig. 5). The high number of publications with first author affiliation in Angola denotes the concern of researchers in linking their work to institutions from the country of study. Nevertheless, this involvement is dispersed across many different institutions, many of which have relatively low track records in publications, namely hospitals and non-governmental organizations. Furthermore, academic research in universities and institutes (or research centres) related to institutional affiliations abroad indicate the need for reinforcing the academic research capacity in Angola.Fig. 3Country denotes the location in which the primary institution of the first author affiliation was based in publications up to June 8, 2014Fig. 4Publications deposited in MEDLINE/PubMed up to June 8, 2014Fig. 5Country denotes the location in which the primary institution of the first author affiliation was based concerning publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014\nCountry denotes the location in which the primary institution of the first author affiliation was based in publications up to June 8, 2014\nPublications deposited in MEDLINE/PubMed up to June 8, 2014\nCountry denotes the location in which the primary institution of the first author affiliation was based concerning publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014", "An Angolan researcher was the first author in 58 (19 %) abstracts. The frequency of Angolan first authors or co-authors increased when the first author was affiliated to an Angolan institution (Fig. 6). In the present study, it is noteworthy that one-fifth of all publications had an Angolan as first author. The number of African first authors on publications relating to Africa between 1991 and 2010 has been increasing [16]. This finding was reinforced in the period between 2000 and 2014 in the WHO African Region, and it was found that the contributions of first authors from Africa doubled in this period, although it is recognized that it is still minimal (0.7 % in 2000 and 1.3 % in 2014) [24].Fig. 
6Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014, according to country in which the primary institution of the first author affiliation was based\nPublications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014, according to country in which the primary institution of the first author affiliation was based", "Epidemiological research was by far the most common research area (n = 165) (Fig. 7). Although a few population genetics studies (data not shown) are included in this topic, this finding is in accordance to the burden of infectious diseases in the country. However, the increasing frequency of non-communicable diseases in sub-Saharan Africa is likely to be a matter of concern for research in Africa in coming years.Fig. 7Publications deposited in MEDLINE/PubMed up to June 8, 2014\nPublications deposited in MEDLINE/PubMed up to June 8, 2014\nOn the other hand, a small number of publications (n = 32; 10.6 %) on socioeconomic and professional research, including governance, health policies, health systems management, and human resources is noted. Considering that Angola was a war zone for more than 50 % of its existence as an independent state, we might have expected a larger number of research publications on relevant topics such as the socioeconomic determinants of health (e.g., unemployment, poverty, education, family dissolution, and lifestyles) and the performance of health unities and health systems. However, we recognize the possibility that studies concerning some of those socioeconomic determinants of health may be part of research in social and economic sciences and, therefore, more accessible in other bibliographic databases. Further, the lack of studies on human resources in the various health professions, including studies in medical education, is also noticeable.\nWe expect that the increasing number of health professional schools in Angola, including six new public medical schools, particularly since 2009, will raise interest in these topics, especially if educational curricula at the undergraduate and postgraduate levels in Angola incorporate more programs focused on improving research skills and providing on-the-job training in epidemiology [31].", "We also found that, despite publications being widely spread in a variety of journals, 11.4 % concentrated in PLoS ONE and Malaria Journal, both of which are free access journals indexed by common databases (Fig. 8). This finding is not unusual given that malaria was the most common research topic among Angolan publications observed in the present study. Notwithstanding the debate around open access publishing, our findings reinforce the importance of providing readers in financially disadvantaged countries with a wide range of relevant research and information [32, 33].Fig. 8Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014\nPublications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014", "The fact that Angola is a Portuguese speaking country and the majority of the journals indexed by MEDLINE/PubMed are in English may have introduced a selection bias due to idiom barriers. Though MEDLINE/PubMed represented the vast majority of the publications of the study and the search began at BVS, which included papers published in journals indexed by Scielo, it does not embody all the scientific and biomedical journals. Furthermore, we did not search “grey literature”. 
Thus, the number of Angolan authors and coauthors might be underestimated. Nevertheless, Angola still does not have an exhaustive electronic database of researchers containing individual publication records. In addition, the incorrect reporting of authors’ nationality and affiliation, observed in a few papers in the present study, may further contribute to underestimation of Angolan-authored publications. Finally, the study did not include the analysis of journal impact factors and the authors did not have access to any citation index database, which would allow the performance of bibliometric citation analyses." ]
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results and discussion", "Publication rate", "Health research topics", "Authorship and affiliation", "Publications by Angolan researchers", "Scientific fields", "Journals choice", "Limitations of the study", "Conclusions" ]
[ "The concept of research for health, expressed on the World Health Report 2013, covers a broader range of investigations than health research. This wider view of research will become increasingly important in the transition from the United Nations Millennium Development Goals to a post-2015 sustainable development agenda [1].\nIn a more restricted concept, health research has the potential to contribute to the identification of social and economic determinants of health. In the context of implementation research, it is a major basis for decision-making processes on health policies, programmes, and practices, especially when driven by the demand of the health problems of the population [2, 3].\nMoreover, conducting more health research can stimulate countries to strengthen their capacity to produce and use such research to guide decision-making, improve the efficiency of the investments, increase accountability in the work of researchers, and contribute substantially to the research training of human resources in developing countries [4, 5]. In the perspective of the World Health Organization (WHO), health research must be founded on building capacity to step up health research systems and support demand-driven health problems, particularly in low- and middle-income countries. Additionally, health research must be embedded with standards for best practices and ensure that quality evidence is turned into affordable health technologies and evidence-informed policy [6].\nHowever, at the global level, the proportion of health research that considers these aspects is negligible [7]. In particular, evidence shows that the research output in developing countries is considerably less than in developed countries [8]. Therefore, it is crucial to get an accurate picture of the landscape of the scientific research produced in developing countries in order to identify gaps critical for social development [9]. In sub-Saharan Africa, in particular, the production of health research can affect teaching, the quality of undergraduate training, career promotion, and translational research relevant to the country, which is comparable to developed countries [5].\nThere have been documented bibliometric studies of research production in Africa, as a whole, as well as in other continents. This then motivated studies of the chronological progression of research within individual African countries [10–18]. These publications demonstrate the feebleness of African authorship in the most cited scientific publications in health [19].\nAngola, a Portuguese-speaking country, is independent since 1975 and has spent 27 years in progressive socio-political instability, with the drawback of the civil war that ended 12 years ago. Nevertheless, Angola is making progress towards the achievement of the health Millennium Development Goals but is still far from achieving them [1]. The reconstruction of the national health system and policy will benefit from the knowledge of the factors influencing the success of certain interventions and the measure of their impact on populations. Therefore, health research will contribute to increased efficiency and effectiveness of health policy and decision making processes in Angola on programmes and practices. Support for science in Angola is less than 0.1 % of its GDP, compared to a world average of 1.7 % [20]. Consequently, a more serious commitment would be needed to build local and national capacity to accelerate the improvement of human resources training for health research. 
In this context, it becomes even more appropriate to carry out a survey on what has been produced as the result of health research. When searching the literature on health research in Angola, we found only one publication dated from 1953 which linked research to health, entitled “Scientific Research and the Health Status of Angola” [21]. The lack of studies on health research produced in Angola in the last 61 years motivated this work.\nWe carried out a non-systematic literature survey to evaluate the progression of health research pertaining to Angola, answering specifically the following questions: 1) how has the number of publications evolved over the years? 2) what are the most published topics? 3) what is the level of involvement of Angolan researchers and institutions in this published research? and 4) what are the most represented research fields?\nWe expect that this study will contribute to increasing the current state of knowledge on health research related to Angola and identify gaps that may guide the WHO recommendations on health research and consequently the adoption of effective interventions that will strengthen health research in Angola.", "We used the reference database Biblioteca Virtual em Saúde (BVS) to perform a non-systematic scientific literature survey aiming to count all existing publications up to June 8, 2014, with the keyword “Angola” and to produce a scientific publications dataset to be compared with other databases. The BVS was chosen because it is a platform that covers human health sciences issues, information sources published in countries of the Portuguese-speaking community, and a set of international databases. Moreover, resource constraints did not allow access to other biomedical databases such as those using the OVID platform. Thereafter, our content information study was based on the public database MEDLINE/PubMed disclosing the best match with the BVS dataset using the same keyword.\nPublications with no abstract available were excluded from the MEDLINE/PubMed dataset. To focus our study in human health research relevant for Angola, we revised the abstracts using the following exclusion criteria: 1) the word Angola did not mean the country; 2) the topic did not concern physical and mental health; 3) the topic reported exclusively to biological vectors; 4) Angola was not the focus of a specific study, but was instead reported as part of a group of countries (common for poliomyelitis and Marburg disease) or just as a reference; 5) case reports published outside Angola regarding citizens who temporarily resided in Angola.\nFrom each publication included in the study we extracted the following variables: year of publication; topic of health or disease, according to the revised International Classification of Diseases (ICD-10); country in which the primary institution of the first author affiliation was based; type of institution of the first author primary affiliation (university, institute or centre of scientific research, hospital, non-governmental organization, other); presence of an Angolan citizen as first author; number of Angolan co-authors; field of research (basic biomedical, clinical, epidemiological or socioeconomic and professional); and publishing journal. The Angolan authors and co-authors were identified by consulting their registration on the National Medical Council and in the universities. Socioeconomic and professional research includes health policy, human resources, and governance and management of health systems. 
The year of publication was the single variable that did not require the existence of the abstract. All the references of the abstracts included in the study were stored in a collection in the Zotero library. Collection of abstract content data was performed by manual curation and data were entered into an excel spreadsheet (Microsoft™ Excel® 2007) for statistical processing and plots generation.", "Searching the BVS on June 8, 2014, with the keyword “Angola”, showed 1,029 results, of which 74.6 % were also in MEDLINE/PubMed database. Consequently, it was considered that further search in MEDLINE/PubMed would produce representative research publication data on health in Angola. Therefore, we undertook a further search for relevant Angolan publications in MEDLINE/PubMed, keeping the keyword “Angola”. To describe the number of publications across time we included all publications retrieved from BVS without the abstract filter. All other variables were taken into account using the availability of an abstract as the main inclusion criteria, and this yielded 658 abstracts. Applying the exclusion criteria we selected 301 abstracts for further description.\n Publication rate As shown in Fig. 1, the publications in MEDLINE/PubMed addressing health issues related to Angola date back to 1909. MEDLINE/PubMed, the first bibliographic database in the life sciences, with a focus on biomedicine, mainly covers the literature published from 1946 onwards, but it also includes more dated publications [22]. There has been a remarkable, albeit irregular, increase in the number of publications during the 1990s, with a more sustained increase from 2004 onwards, and a peak in 2013. From 1979 to 2003 (34 years) there were 62 publications, but in the last 10 years (2004–2013) the number increased four-fold (n = 232), suggesting a rampant increment in publication rate (Fig. 1). Additionally, it is noteworthy that the number of publications by the beginning of June 2014 already reached half the figure recorded in 2013, supporting the notion that Angola is motivating a steady increase in health-related scientific publications. This exponential trend was also observed in Palestine, a war zone, where a total of 770 publications were retrieved in the medical and biomedical field across a 10-year period (01 January 2002 to 31 December 2011), averaging approximately 80 articles per year. Interestingly, the number of publications has also increased four-fold during the period 2002–2011, with a stabilization in the last 3 years of the study period [23].Fig. 1Between 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola\nBetween 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola\nA study published in 2007, which analyzed the geography of the PubMed biomedical publications in Africa between 1996 and 2005, concluded that the contribution of Africa, in particular, was considerably less than that of other continents. Indeed, it was evident that there was a continuous increase in publication output during this period in all African sub-regions, although Angola was located in the lowest quintile with less than two publications per year [17]. In the present study, we found, in the same period, five publications that included researchers that were affiliated with Angolan institutions. 
However, this did not change the lower position of Angola. Another publication about the scientific production in Public Health and Epidemiology in the WHO African region, in the period 1991–2010, obtained very similar results for Angola, placing it in the lowest quintile, with 11–50 publications [18]. This increase in health research in Angola matches with a recent study about the WHO African Region between 2000 and 2014 [24]. In this study, Angola was located in quintile three, with 100 to 499 articles.\nAs shown in Fig. 1, the publications in MEDLINE/PubMed addressing health issues related to Angola date back to 1909. MEDLINE/PubMed, the first bibliographic database in the life sciences, with a focus on biomedicine, mainly covers the literature published from 1946 onwards, but it also includes more dated publications [22]. There has been a remarkable, albeit irregular, increase in the number of publications during the 1990s, with a more sustained increase from 2004 onwards, and a peak in 2013. From 1979 to 2003 (34 years) there were 62 publications, but in the last 10 years (2004–2013) the number increased four-fold (n = 232), suggesting a rampant increment in publication rate (Fig. 1). Additionally, it is noteworthy that the number of publications by the beginning of June 2014 already reached half the figure recorded in 2013, supporting the notion that Angola is motivating a steady increase in health-related scientific publications. This exponential trend was also observed in Palestine, a war zone, where a total of 770 publications were retrieved in the medical and biomedical field across a 10-year period (01 January 2002 to 31 December 2011), averaging approximately 80 articles per year. Interestingly, the number of publications has also increased four-fold during the period 2002–2011, with a stabilization in the last 3 years of the study period [23].Fig. 1Between 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola\nBetween 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola\nA study published in 2007, which analyzed the geography of the PubMed biomedical publications in Africa between 1996 and 2005, concluded that the contribution of Africa, in particular, was considerably less than that of other continents. Indeed, it was evident that there was a continuous increase in publication output during this period in all African sub-regions, although Angola was located in the lowest quintile with less than two publications per year [17]. In the present study, we found, in the same period, five publications that included researchers that were affiliated with Angolan institutions. However, this did not change the lower position of Angola. Another publication about the scientific production in Public Health and Epidemiology in the WHO African region, in the period 1991–2010, obtained very similar results for Angola, placing it in the lowest quintile, with 11–50 publications [18]. This increase in health research in Angola matches with a recent study about the WHO African Region between 2000 and 2014 [24]. 
In this study, Angola was located in quintile three, with 100 to 499 articles.\n Health research topics Malaria was the disease that stood out, with 45 publications, followed by HIV infection (n = 26), trypanosomiasis (n = 24), and themes of epidemiology/public health (n = 24). Tuberculosis occupied the tenth position with nine papers (Fig. 2). As a whole, infectious diseases was a topic of 59 % of the publications under analysis (n = 178).Fig. 2Publications are deposited in MEDLINE/PubMed up to June 8, 2014\nPublications are deposited in MEDLINE/PubMed up to June 8, 2014\nThe most investigated topics are aligned with the epidemiological profile of Angola, regarding the main causes of morbidity and mortality (communicable diseases), with a large emphasis on malaria [25]. Interestingly, there was a disproportionate volume of publications on HIV/AIDS relative to tuberculosis, which is unusual given the frequent association of both morbidities [26]. In an assessment of scientific output in public health and epidemiology in Africa, in the period 1991–2010, HIV/AIDS infection was the predominant topic (11.3 %) while malaria accounted for 8.6 % and tuberculosis for 7.1 % [16]. This analysis also considers the issue of the importance of research funding concerning infectious diseases [27]. The trend towards epidemiological transition in sub-Saharan Africa, owing to the increase of chronic non-communicable diseases, is not yet reflected at the scientific publication level, neither in the present study nor in the aforementioned African publications [16, 28–30].\nMalaria was the disease that stood out, with 45 publications, followed by HIV infection (n = 26), trypanosomiasis (n = 24), and themes of epidemiology/public health (n = 24). Tuberculosis occupied the tenth position with nine papers (Fig. 2). As a whole, infectious diseases was a topic of 59 % of the publications under analysis (n = 178).Fig. 2Publications are deposited in MEDLINE/PubMed up to June 8, 2014\nPublications are deposited in MEDLINE/PubMed up to June 8, 2014\nThe most investigated topics are aligned with the epidemiological profile of Angola, regarding the main causes of morbidity and mortality (communicable diseases), with a large emphasis on malaria [25]. Interestingly, there was a disproportionate volume of publications on HIV/AIDS relative to tuberculosis, which is unusual given the frequent association of both morbidities [26]. In an assessment of scientific output in public health and epidemiology in Africa, in the period 1991–2010, HIV/AIDS infection was the predominant topic (11.3 %) while malaria accounted for 8.6 % and tuberculosis for 7.1 % [16]. This analysis also considers the issue of the importance of research funding concerning infectious diseases [27]. The trend towards epidemiological transition in sub-Saharan Africa, owing to the increase of chronic non-communicable diseases, is not yet reflected at the scientific publication level, neither in the present study nor in the aforementioned African publications [16, 28–30].\n Authorship and affiliation The primary affiliation of the first author was most frequently in an Angolan institution (45 publications) (Fig. 3). A total of 150 institutions were listed as the first author’s primary affiliation, revealing a high institutional dispersion; 14.7 % were Angolan institutions. Furthermore, 65 % of publications represented research conducted in universities and institutes or research centres (Fig. 4). 
The majority of publications by first author affiliation country and the number of academic research institutions were located in Portugal, United States of America, or Brazil (Fig. 5). The high number of publications with first author affiliation in Angola denotes the concern of researchers in linking their work to institutions from the country of study. Nevertheless, this involvement is dispersed across many different institutions, many of which have relatively low track records in publications, namely hospitals and non-governmental organizations. Furthermore, academic research in universities and institutes (or research centres) related to institutional affiliations abroad indicate the need for reinforcing the academic research capacity in Angola.Fig. 3Country denotes the location in which the primary institution of the first author affiliation was based in publications up to June 8, 2014Fig. 4Publications deposited in MEDLINE/PubMed up to June 8, 2014Fig. 5Country denotes the location in which the primary institution of the first author affiliation was based concerning publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014\nCountry denotes the location in which the primary institution of the first author affiliation was based in publications up to June 8, 2014\nPublications deposited in MEDLINE/PubMed up to June 8, 2014\nCountry denotes the location in which the primary institution of the first author affiliation was based concerning publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014\nThe primary affiliation of the first author was most frequently in an Angolan institution (45 publications) (Fig. 3). A total of 150 institutions were listed as the first author’s primary affiliation, revealing a high institutional dispersion; 14.7 % were Angolan institutions. Furthermore, 65 % of publications represented research conducted in universities and institutes or research centres (Fig. 4). The majority of publications by first author affiliation country and the number of academic research institutions were located in Portugal, United States of America, or Brazil (Fig. 5). The high number of publications with first author affiliation in Angola denotes the concern of researchers in linking their work to institutions from the country of study. Nevertheless, this involvement is dispersed across many different institutions, many of which have relatively low track records in publications, namely hospitals and non-governmental organizations. Furthermore, academic research in universities and institutes (or research centres) related to institutional affiliations abroad indicate the need for reinforcing the academic research capacity in Angola.Fig. 3Country denotes the location in which the primary institution of the first author affiliation was based in publications up to June 8, 2014Fig. 4Publications deposited in MEDLINE/PubMed up to June 8, 2014Fig. 
5Country denotes the location in which the primary institution of the first author affiliation was based concerning publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014\nCountry denotes the location in which the primary institution of the first author affiliation was based in publications up to June 8, 2014\nPublications deposited in MEDLINE/PubMed up to June 8, 2014\nCountry denotes the location in which the primary institution of the first author affiliation was based concerning publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014\n Publications by Angolan researchers An Angolan researcher was the first author in 58 (19 %) abstracts. The frequency of Angolan first authors or co-authors increased when the first author was affiliated to an Angolan institution (Fig. 6). In the present study, it is noteworthy that one-fifth of all publications had an Angolan as first author. The number of African first authors on publications relating to Africa between 1991 and 2010 has been increasing [16]. This finding was reinforced in the period between 2000 and 2014 in the WHO African Region, and it was found that the contributions of first authors from Africa doubled in this period, although it is recognized that it is still minimal (0.7 % in 2000 and 1.3 % in 2014) [24].Fig. 6Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014, according to country in which the primary institution of the first author affiliation was based\nPublications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014, according to country in which the primary institution of the first author affiliation was based\nAn Angolan researcher was the first author in 58 (19 %) abstracts. The frequency of Angolan first authors or co-authors increased when the first author was affiliated to an Angolan institution (Fig. 6). In the present study, it is noteworthy that one-fifth of all publications had an Angolan as first author. The number of African first authors on publications relating to Africa between 1991 and 2010 has been increasing [16]. This finding was reinforced in the period between 2000 and 2014 in the WHO African Region, and it was found that the contributions of first authors from Africa doubled in this period, although it is recognized that it is still minimal (0.7 % in 2000 and 1.3 % in 2014) [24].Fig. 6Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014, according to country in which the primary institution of the first author affiliation was based\nPublications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014, according to country in which the primary institution of the first author affiliation was based\n Scientific fields Epidemiological research was by far the most common research area (n = 165) (Fig. 7). Although a few population genetics studies (data not shown) are included in this topic, this finding is in accordance to the burden of infectious diseases in the country. However, the increasing frequency of non-communicable diseases in sub-Saharan Africa is likely to be a matter of concern for research in Africa in coming years.Fig. 
7Publications deposited in MEDLINE/PubMed up to June 8, 2014\nPublications deposited in MEDLINE/PubMed up to June 8, 2014\nOn the other hand, a small number of publications (n = 32; 10.6 %) on socioeconomic and professional research, including governance, health policies, health systems management, and human resources is noted. Considering that Angola was a war zone for more than 50 % of its existence as an independent state, we might have expected a larger number of research publications on relevant topics such as the socioeconomic determinants of health (e.g., unemployment, poverty, education, family dissolution, and lifestyles) and the performance of health unities and health systems. However, we recognize the possibility that studies concerning some of those socioeconomic determinants of health may be part of research in social and economic sciences and, therefore, more accessible in other bibliographic databases. Further, the lack of studies on human resources in the various health professions, including studies in medical education, is also noticeable.\nWe expect that the increasing number of health professional schools in Angola, including six new public medical schools, particularly since 2009, will raise interest in these topics, especially if educational curricula at the undergraduate and postgraduate levels in Angola incorporate more programs focused on improving research skills and providing on-the-job training in epidemiology [31].\nEpidemiological research was by far the most common research area (n = 165) (Fig. 7). Although a few population genetics studies (data not shown) are included in this topic, this finding is in accordance to the burden of infectious diseases in the country. However, the increasing frequency of non-communicable diseases in sub-Saharan Africa is likely to be a matter of concern for research in Africa in coming years.Fig. 7Publications deposited in MEDLINE/PubMed up to June 8, 2014\nPublications deposited in MEDLINE/PubMed up to June 8, 2014\nOn the other hand, a small number of publications (n = 32; 10.6 %) on socioeconomic and professional research, including governance, health policies, health systems management, and human resources is noted. Considering that Angola was a war zone for more than 50 % of its existence as an independent state, we might have expected a larger number of research publications on relevant topics such as the socioeconomic determinants of health (e.g., unemployment, poverty, education, family dissolution, and lifestyles) and the performance of health unities and health systems. However, we recognize the possibility that studies concerning some of those socioeconomic determinants of health may be part of research in social and economic sciences and, therefore, more accessible in other bibliographic databases. 
Further, the lack of studies on human resources in the various health professions, including studies in medical education, is also noticeable.\nWe expect that the increasing number of health professional schools in Angola, including six new public medical schools, particularly since 2009, will raise interest in these topics, especially if educational curricula at the undergraduate and postgraduate levels in Angola incorporate more programs focused on improving research skills and providing on-the-job training in epidemiology [31].\n Journals choice We also found that, despite publications being widely spread in a variety of journals, 11.4 % concentrated in PLoS ONE and Malaria Journal, both of which are free access journals indexed by common databases (Fig. 8). This finding is not unusual given that malaria was the most common research topic among Angolan publications observed in the present study. Notwithstanding the debate around open access publishing, our findings reinforce the importance of providing readers in financially disadvantaged countries with a wide range of relevant research and information [32, 33].Fig. 8Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014\nPublications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014\nWe also found that, despite publications being widely spread in a variety of journals, 11.4 % concentrated in PLoS ONE and Malaria Journal, both of which are free access journals indexed by common databases (Fig. 8). This finding is not unusual given that malaria was the most common research topic among Angolan publications observed in the present study. Notwithstanding the debate around open access publishing, our findings reinforce the importance of providing readers in financially disadvantaged countries with a wide range of relevant research and information [32, 33].Fig. 8Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014\nPublications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014\n Limitations of the study The fact that Angola is a Portuguese speaking country and the majority of the journals indexed by MEDLINE/PubMed are in English may have introduced a selection bias due to idiom barriers. Though MEDLINE/PubMed represented the vast majority of the publications of the study and the search began at BVS, which included papers published in journals indexed by Scielo, it does not embody all the scientific and biomedical journals. Furthermore, we did not search “grey literature”. Thus, the number of Angolan authors and coauthors might be underestimated. Nevertheless, Angola still does not have an exhaustive electronic database of researchers containing individual publication records. In addition, the incorrect reporting of authors’ nationality and affiliation, observed in a few papers in the present study, may further contribute to underestimation of Angolan-authored publications. Finally, the study did not include the analysis of journal impact factors and the authors did not have access to any citation index database, which would allow the performance of bibliometric citation analyses.\nThe fact that Angola is a Portuguese speaking country and the majority of the journals indexed by MEDLINE/PubMed are in English may have introduced a selection bias due to idiom barriers. 
Though MEDLINE/PubMed represented the vast majority of the publications of the study and the search began at BVS, which included papers published in journals indexed by Scielo, it does not embody all the scientific and biomedical journals. Furthermore, we did not search “grey literature”. Thus, the number of Angolan authors and coauthors might be underestimated. Nevertheless, Angola still does not have an exhaustive electronic database of researchers containing individual publication records. In addition, the incorrect reporting of authors’ nationality and affiliation, observed in a few papers in the present study, may further contribute to underestimation of Angolan-authored publications. Finally, the study did not include the analysis of journal impact factors and the authors did not have access to any citation index database, which would allow the performance of bibliometric citation analyses.", "As shown in Fig. 1, the publications in MEDLINE/PubMed addressing health issues related to Angola date back to 1909. MEDLINE/PubMed, the first bibliographic database in the life sciences, with a focus on biomedicine, mainly covers the literature published from 1946 onwards, but it also includes more dated publications [22]. There has been a remarkable, albeit irregular, increase in the number of publications during the 1990s, with a more sustained increase from 2004 onwards, and a peak in 2013. From 1979 to 2003 (34 years) there were 62 publications, but in the last 10 years (2004–2013) the number increased four-fold (n = 232), suggesting a rampant increment in publication rate (Fig. 1). Additionally, it is noteworthy that the number of publications by the beginning of June 2014 already reached half the figure recorded in 2013, supporting the notion that Angola is motivating a steady increase in health-related scientific publications. This exponential trend was also observed in Palestine, a war zone, where a total of 770 publications were retrieved in the medical and biomedical field across a 10-year period (01 January 2002 to 31 December 2011), averaging approximately 80 articles per year. Interestingly, the number of publications has also increased four-fold during the period 2002–2011, with a stabilization in the last 3 years of the study period [23].Fig. 1Between 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola\nBetween 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola\nA study published in 2007, which analyzed the geography of the PubMed biomedical publications in Africa between 1996 and 2005, concluded that the contribution of Africa, in particular, was considerably less than that of other continents. Indeed, it was evident that there was a continuous increase in publication output during this period in all African sub-regions, although Angola was located in the lowest quintile with less than two publications per year [17]. In the present study, we found, in the same period, five publications that included researchers that were affiliated with Angolan institutions. However, this did not change the lower position of Angola. 
However, this did not change the low position of Angola. Another publication, on scientific production in public health and epidemiology in the WHO African Region in the period 1991–2010, obtained very similar results for Angola, placing it in the lowest quintile, with 11–50 publications [18]. This increase in health research in Angola is consistent with a recent study of the WHO African Region between 2000 and 2014 [24], in which Angola was located in the third quintile, with 100 to 499 articles.", "Malaria was the disease that stood out, with 45 publications, followed by HIV infection (n = 26), trypanosomiasis (n = 24), and themes of epidemiology/public health (n = 24). Tuberculosis occupied the tenth position, with nine papers (Fig. 2). As a whole, infectious diseases were the topic of 59 % of the publications under analysis (n = 178). (Fig. 2: Publications deposited in MEDLINE/PubMed up to June 8, 2014.) The most investigated topics are aligned with the epidemiological profile of Angola regarding the main causes of morbidity and mortality (communicable diseases), with a large emphasis on malaria [25]. Interestingly, there was a disproportionate volume of publications on HIV/AIDS relative to tuberculosis, which is unusual given the frequent association of the two morbidities [26]. In an assessment of scientific output in public health and epidemiology in Africa in the period 1991–2010, HIV/AIDS was the predominant topic (11.3 %), while malaria accounted for 8.6 % and tuberculosis for 7.1 % [16]. This analysis also raises the issue of the importance of research funding for infectious diseases [27]. The trend towards epidemiological transition in sub-Saharan Africa, owing to the increase of chronic non-communicable diseases, is not yet reflected at the level of scientific publication, neither in the present study nor in the aforementioned African publications [16, 28–30].", "The primary affiliation of the first author was most frequently in an Angolan institution (45 publications) (Fig. 3). A total of 150 institutions were listed as the first author's primary affiliation, revealing a high institutional dispersion; 14.7 % were Angolan institutions. Furthermore, 65 % of publications represented research conducted in universities and institutes or research centres (Fig. 4). By country of first author affiliation, the majority of publications and of academic research institutions were located in Portugal, the United States of America, or Brazil (Fig. 5). The high number of publications with first author affiliation in Angola reflects the concern of researchers to link their work to institutions in the country of study. Nevertheless, this involvement is dispersed across many different institutions, many of which have relatively low publication track records, namely hospitals and non-governmental organizations. Furthermore, the concentration of academic research in universities and institutes (or research centres) affiliated abroad indicates the need to reinforce academic research capacity in Angola. (Fig. 3: Country in which the primary institution of the first author's affiliation was based, publications up to June 8, 2014. Fig. 4: Publications deposited in MEDLINE/PubMed up to June 8, 2014. Fig. 5: Country in which the primary institution of the first author's affiliation was based, for publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014.)
", "An Angolan researcher was the first author in 58 (19 %) abstracts. The frequency of Angolan first authors or co-authors increased when the first author was affiliated with an Angolan institution (Fig. 6). In the present study, it is noteworthy that one fifth of all publications had an Angolan as first author. The number of African first authors on publications relating to Africa between 1991 and 2010 has been increasing [16]. This finding was reinforced in the period between 2000 and 2014 in the WHO African Region, in which the contribution of first authors from Africa doubled, although it is recognized that it is still minimal (0.7 % in 2000 and 1.3 % in 2014) [24]. (Fig. 6: Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014, according to the country in which the primary institution of the first author's affiliation was based.)", "Epidemiological research was by far the most common research area (n = 165) (Fig. 7). Although a few population genetics studies (data not shown) are included in this topic, this finding is in accordance with the burden of infectious diseases in the country. However, the increasing frequency of non-communicable diseases in sub-Saharan Africa is likely to be a matter of concern for research in Africa in coming years. (Fig. 7: Publications deposited in MEDLINE/PubMed up to June 8, 2014.) On the other hand, only a small number of publications (n = 32; 10.6 %) addressed socioeconomic and professional research, including governance, health policies, health systems management, and human resources. Considering that Angola was a war zone for more than 50 % of its existence as an independent state, we might have expected a larger number of research publications on relevant topics such as the socioeconomic determinants of health (e.g., unemployment, poverty, education, family dissolution, and lifestyles) and the performance of health units and health systems. However, we recognize that studies concerning some of these socioeconomic determinants of health may be part of research in the social and economic sciences and, therefore, more accessible in other bibliographic databases.
Further, the lack of studies on human resources in the various health professions, including studies in medical education, is also noticeable. We expect that the increasing number of health professional schools in Angola, including six new public medical schools opened since 2009, will raise interest in these topics, especially if undergraduate and postgraduate curricula in Angola incorporate more programs focused on improving research skills and providing on-the-job training in epidemiology [31].", "We also found that, although publications were spread across a wide variety of journals, 11.4 % were concentrated in PLoS ONE and Malaria Journal, both open access journals indexed in common databases (Fig. 8). This finding is not unusual given that malaria was the most common research topic among the Angolan publications observed in the present study. Notwithstanding the debate around open access publishing, our findings reinforce the importance of providing readers in financially disadvantaged countries with a wide range of relevant research and information [32, 33]. (Fig. 8: Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014.)", "The fact that Angola is a Portuguese-speaking country, while most journals indexed in MEDLINE/PubMed are in English, may have introduced a selection bias due to language barriers. Although MEDLINE/PubMed contained the vast majority of the publications in this study, and the search began in BVS, which includes papers published in journals indexed by SciELO, these sources do not cover all scientific and biomedical journals. Furthermore, we did not search the grey literature. Thus, the number of Angolan authors and co-authors may be underestimated. Moreover, Angola still does not have an exhaustive electronic database of researchers containing individual publication records. In addition, the incorrect reporting of authors' nationality and affiliation, observed in a few papers in the present study, may further contribute to the underestimation of Angolan-authored publications. Finally, the study did not analyse journal impact factors, and the authors did not have access to a citation index database, which would have allowed bibliometric citation analyses.", "This initial contribution characterizing the productivity of health research pertaining to Angola showed that the number of publications has increased steadily over the past 10 years and was mainly focused on infectious diseases, namely malaria. Angola, as the country in which the primary institution of the first author's affiliation was based, evidenced the largest number of publications, and about 20 % of the publications included an Angolan as first author. However, academic institutions in Portugal, the United States of America, and Brazil contributed more to research publication than universities and research centres in Angola. The research was largely epidemiological, with, remarkably, a very small number of publications on economic and professional issues, involving themes such as health policy, governance, and the management of health systems and human resources. In Angola, with the large expansion of higher education since 2009, universities should build and strengthen their capacities in research and human resources training.
Including research training in the curricula of pre-graduate courses in health sciences will contribute to graduating professionals who are able to act as agents of change, competent human resource managers, and promoters of evidence-based policies [34]. The alignment of the national health research agenda with the national program of health development, together with the wide use of multilateral international cooperation, is critical to developing the operational tools towards universal health coverage in Angola as conceived by the WHO [1]. In summary, this work highlights the rapid increase of scientific publications related to Angola but also the need to reinforce academic-driven research in this country." ]
[ "introduction", "materials|methods", "results", null, null, null, null, null, null, null, "conclusion" ]
[ "Angola", "Health research", "Health research centres", "Universities" ]
Background: The concept of research for health, expressed in the World Health Report 2013, covers a broader range of investigations than health research. This wider view of research will become increasingly important in the transition from the United Nations Millennium Development Goals to a post-2015 sustainable development agenda [1]. In a more restricted sense, health research has the potential to contribute to the identification of the social and economic determinants of health. In the context of implementation research, it is a major basis for decision-making processes on health policies, programmes, and practices, especially when driven by the demands arising from the health problems of the population [2, 3]. Moreover, conducting more health research can stimulate countries to strengthen their capacity to produce and use such research to guide decision-making, improve the efficiency of investments, increase accountability in the work of researchers, and contribute substantially to the research training of human resources in developing countries [4, 5]. From the perspective of the World Health Organization (WHO), health research must be founded on building capacity to strengthen health research systems and to address demand-driven health problems, particularly in low- and middle-income countries. Additionally, health research must be grounded in standards of best practice and ensure that quality evidence is turned into affordable health technologies and evidence-informed policy [6]. However, at the global level, the proportion of health research that considers these aspects is negligible [7]. In particular, evidence shows that research output in developing countries is considerably lower than in developed countries [8]. Therefore, it is crucial to get an accurate picture of the landscape of the scientific research produced in developing countries in order to identify gaps critical for social development [9]. In sub-Saharan Africa, in particular, the production of health research can affect teaching, the quality of undergraduate training, career promotion, and translational research relevant to the country, much as it does in developed countries [5]. Bibliometric studies of research production have been documented for Africa as a whole, as well as for other continents, and these motivated studies of the chronological progression of research within individual African countries [10–18]. These publications demonstrate the weakness of African authorship in the most cited scientific publications in health [19]. Angola, a Portuguese-speaking country, has been independent since 1975 and spent 27 years in progressive socio-political instability, aggravated by a civil war that ended 12 years ago. Nevertheless, Angola is making progress towards the achievement of the health Millennium Development Goals, although it is still far from achieving them [1]. The reconstruction of the national health system and policy will benefit from knowledge of the factors influencing the success of interventions and the measurement of their impact on populations. Therefore, health research will contribute to increased efficiency and effectiveness of health policy and decision-making processes in Angola on programmes and practices. Support for science in Angola amounts to less than 0.1 % of its GDP, compared with a world average of 1.7 % [20]. Consequently, a more serious commitment is needed to build local and national capacity and to accelerate the improvement of human resources training for health research.
In this context, it becomes even more appropriate to carry out a survey of what has been produced as a result of health research. When searching the literature on health research in Angola, we found only one publication, dated 1953, linking research to health, entitled "Scientific Research and the Health Status of Angola" [21]. The lack of studies on health research produced in Angola in the last 61 years motivated this work. We carried out a non-systematic literature survey to evaluate the progression of health research pertaining to Angola, specifically answering the following questions: 1) how has the number of publications evolved over the years? 2) what are the most published topics? 3) what is the level of involvement of Angolan researchers and institutions in this published research? and 4) what are the most represented research fields? We expect that this study will contribute to the current state of knowledge on health research related to Angola and identify gaps that may guide the WHO recommendations on health research and, consequently, the adoption of effective interventions to strengthen health research in Angola. Methods: We used the reference database Biblioteca Virtual em Saúde (BVS) to perform a non-systematic scientific literature survey, aiming to count all existing publications up to June 8, 2014, with the keyword "Angola" and to produce a scientific publications dataset to be compared with other databases. The BVS was chosen because it is a platform that covers human health sciences issues, information sources published in countries of the Portuguese-speaking community, and a set of international databases. Moreover, resource constraints did not allow access to other biomedical databases such as those using the OVID platform. Thereafter, our content information study was based on the public database MEDLINE/PubMed, which showed the best match with the BVS dataset when using the same keyword. Publications with no abstract available were excluded from the MEDLINE/PubMed dataset. To focus our study on human health research relevant to Angola, we reviewed the abstracts using the following exclusion criteria: 1) the word Angola did not refer to the country; 2) the topic did not concern physical or mental health; 3) the topic related exclusively to biological vectors; 4) Angola was not the focus of a specific study, but was instead reported as part of a group of countries (common for poliomyelitis and Marburg disease) or only as a reference; 5) the paper was a case report published outside Angola regarding citizens who temporarily resided in Angola. From each publication included in the study we extracted the following variables: year of publication; topic of health or disease, according to the revised International Classification of Diseases (ICD-10); country in which the primary institution of the first author's affiliation was based; type of institution of the first author's primary affiliation (university, institute or centre of scientific research, hospital, non-governmental organization, other); presence of an Angolan citizen as first author; number of Angolan co-authors; field of research (basic biomedical, clinical, epidemiological, or socioeconomic and professional); and publishing journal. Angolan authors and co-authors were identified by consulting their registration with the National Medical Council and with the universities. Socioeconomic and professional research includes health policy, human resources, and the governance and management of health systems.
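The keyword search and counting steps described above can be approximated programmatically. The sketch below is purely illustrative and was not part of the authors' workflow, which relied on the BVS and MEDLINE/PubMed web interfaces followed by manual curation; it assumes the Biopython package is installed and uses a placeholder contact e-mail address.

from Bio import Entrez

# NCBI requires a contact e-mail for E-utilities requests (placeholder below).
Entrez.email = "your.name@example.org"

def pubmed_count(term):
    # Return the number of PubMed records matching an Entrez query.
    handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

# Total records for the keyword, then a simple per-year breakdown
# restricted by publication date (the [dp] field tag).
print("Angola, all years:", pubmed_count("Angola"))
for year in range(2004, 2014):
    print(year, pubmed_count("Angola AND {}[dp]".format(year)))

Such counts are keyword hits only; the abstract-availability filter and the five exclusion criteria above still require manual review.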
The year of publication was the single variable that did not require the existence of an abstract. All references of the abstracts included in the study were stored in a collection in the Zotero library. Abstract content data were collected by manual curation and entered into an Excel spreadsheet (Microsoft™ Excel® 2007) for statistical processing and plot generation. Results and discussion: Searching the BVS on June 8, 2014, with the keyword "Angola" returned 1,029 results, of which 74.6 % were also in the MEDLINE/PubMed database. Consequently, it was considered that a further search in MEDLINE/PubMed would produce representative data on health research publications on Angola. Therefore, we undertook a further search for relevant Angolan publications in MEDLINE/PubMed, keeping the keyword "Angola". To describe the number of publications over time, we included all publications retrieved from BVS without the abstract filter. All other variables were analysed using the availability of an abstract as the main inclusion criterion, which yielded 658 abstracts. Applying the exclusion criteria, we selected 301 abstracts for further description.
Publication rate: As shown in Fig. 1, publications in MEDLINE/PubMed addressing health issues related to Angola date back to 1909. MEDLINE/PubMed, the first bibliographic database in the life sciences, with a focus on biomedicine, mainly covers the literature published from 1946 onwards, but it also includes older publications [22]. There was a remarkable, albeit irregular, increase in the number of publications during the 1990s, with a more sustained increase from 2004 onwards and a peak in 2013. From 1979 to 2003 (25 years) there were 62 publications, but in the last 10 years (2004–2013) the number increased four-fold (n = 232), indicating a rapid rise in the publication rate (Fig. 1). Additionally, it is noteworthy that the number of publications by the beginning of June 2014 had already reached half the figure recorded in 2013, supporting the notion that health-related scientific publications on Angola are increasing steadily. A similar trend was observed in Palestine, a war zone, where a total of 770 publications were retrieved in the medical and biomedical fields across a 10-year period (01 January 2002 to 31 December 2011), averaging approximately 80 articles per year. Interestingly, the number of publications there also increased four-fold during the period 2002–2011, with a stabilization in the last 3 years of the study period [23]. (Fig. 1: Between 1909 and 1973, none of the publications deposited in MEDLINE/PubMed up to June 8, 2014, complied with the inclusion criteria for abstracts about health research in Angola.) A study published in 2007, which analyzed the geography of PubMed biomedical publications in Africa between 1996 and 2005, concluded that the contribution of Africa was considerably less than that of other continents. There was a continuous increase in publication output during this period in all African sub-regions, although Angola was located in the lowest quintile, with fewer than two publications per year [17]. In the present study, we found, in the same period, five publications that included researchers affiliated with Angolan institutions. However, this did not change the low position of Angola. Another publication, on scientific production in public health and epidemiology in the WHO African Region in the period 1991–2010, obtained very similar results for Angola, placing it in the lowest quintile, with 11–50 publications [18]. This increase in health research in Angola is consistent with a recent study of the WHO African Region between 2000 and 2014 [24], in which Angola was located in the third quintile, with 100 to 499 articles.
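As a minimal illustration of the period comparison above, the tally could be reproduced from the curated spreadsheet. The snippet below is a sketch only, assuming a hypothetical export named angola_publications.csv with one row per included publication and a year column; the file name and column are illustrative and are not part of the study materials.

import csv
from collections import Counter

# Hypothetical export of the curated dataset: one row per publication, "year" column.
with open("angola_publications.csv", newline="", encoding="utf-8") as fh:
    per_year = Counter(int(row["year"]) for row in csv.DictReader(fh))

early = sum(n for y, n in per_year.items() if 1979 <= y <= 2003)    # 62 in the study
recent = sum(n for y, n in per_year.items() if 2004 <= y <= 2013)   # 232 in the study
print("1979-2003:", early, " 2004-2013:", recent, " fold change:", round(recent / early, 1))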
Health research topics: Malaria was the disease that stood out, with 45 publications, followed by HIV infection (n = 26), trypanosomiasis (n = 24), and themes of epidemiology/public health (n = 24). Tuberculosis occupied the tenth position, with nine papers (Fig. 2). As a whole, infectious diseases were the topic of 59 % of the publications under analysis (n = 178). (Fig. 2: Publications deposited in MEDLINE/PubMed up to June 8, 2014.) The most investigated topics are aligned with the epidemiological profile of Angola regarding the main causes of morbidity and mortality (communicable diseases), with a large emphasis on malaria [25]. Interestingly, there was a disproportionate volume of publications on HIV/AIDS relative to tuberculosis, which is unusual given the frequent association of the two morbidities [26]. In an assessment of scientific output in public health and epidemiology in Africa in the period 1991–2010, HIV/AIDS was the predominant topic (11.3 %), while malaria accounted for 8.6 % and tuberculosis for 7.1 % [16]. This analysis also raises the issue of the importance of research funding for infectious diseases [27]. The trend towards epidemiological transition in sub-Saharan Africa, owing to the increase of chronic non-communicable diseases, is not yet reflected at the level of scientific publication, neither in the present study nor in the aforementioned African publications [16, 28–30].
Authorship and affiliation: The primary affiliation of the first author was most frequently in an Angolan institution (45 publications) (Fig. 3). A total of 150 institutions were listed as the first author's primary affiliation, revealing a high institutional dispersion; 14.7 % were Angolan institutions. Furthermore, 65 % of publications represented research conducted in universities and institutes or research centres (Fig. 4). By country of first author affiliation, the majority of publications and of academic research institutions were located in Portugal, the United States of America, or Brazil (Fig. 5). The high number of publications with first author affiliation in Angola reflects the concern of researchers to link their work to institutions in the country of study. Nevertheless, this involvement is dispersed across many different institutions, many of which have relatively low publication track records, namely hospitals and non-governmental organizations. Furthermore, the concentration of academic research in universities and institutes (or research centres) affiliated abroad indicates the need to reinforce academic research capacity in Angola. (Fig. 3: Country in which the primary institution of the first author's affiliation was based, publications up to June 8, 2014. Fig. 4: Publications deposited in MEDLINE/PubMed up to June 8, 2014. Fig. 5: Country in which the primary institution of the first author's affiliation was based, for publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014.)
Publications by Angolan researchers: An Angolan researcher was the first author in 58 (19 %) abstracts. The frequency of Angolan first authors or co-authors increased when the first author was affiliated with an Angolan institution (Fig. 6). In the present study, it is noteworthy that one fifth of all publications had an Angolan as first author. The number of African first authors on publications relating to Africa between 1991 and 2010 has been increasing [16]. This finding was reinforced in the period between 2000 and 2014 in the WHO African Region, in which the contribution of first authors from Africa doubled, although it is recognized that it is still minimal (0.7 % in 2000 and 1.3 % in 2014) [24]. (Fig. 6: Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014, according to the country in which the primary institution of the first author's affiliation was based.)
Scientific fields: Epidemiological research was by far the most common research area (n = 165) (Fig. 7). Although a few population genetics studies (data not shown) are included in this topic, this finding is in accordance with the burden of infectious diseases in the country. However, the increasing frequency of non-communicable diseases in sub-Saharan Africa is likely to be a matter of concern for research in Africa in coming years. (Fig. 7: Publications deposited in MEDLINE/PubMed up to June 8, 2014.) On the other hand, only a small number of publications (n = 32; 10.6 %) addressed socioeconomic and professional research, including governance, health policies, health systems management, and human resources. Considering that Angola was a war zone for more than 50 % of its existence as an independent state, we might have expected a larger number of research publications on relevant topics such as the socioeconomic determinants of health (e.g., unemployment, poverty, education, family dissolution, and lifestyles) and the performance of health units and health systems. However, we recognize that studies concerning some of these socioeconomic determinants of health may be part of research in the social and economic sciences and, therefore, more accessible in other bibliographic databases. Further, the lack of studies on human resources in the various health professions, including studies in medical education, is also noticeable. We expect that the increasing number of health professional schools in Angola, including six new public medical schools opened since 2009, will raise interest in these topics, especially if undergraduate and postgraduate curricula in Angola incorporate more programs focused on improving research skills and providing on-the-job training in epidemiology [31].
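The per-topic and per-field frequencies discussed above are simple categorical tallies over the same curated records. Continuing the hypothetical export used in the earlier sketch, and assuming illustrative topic, field, and first_author_country columns coded during manual curation (these column names are not part of the published dataset), the tallies could be reproduced as follows.

import csv
from collections import Counter

# Same hypothetical export as before, now assuming "topic", "field",
# and "first_author_country" columns added during manual curation.
with open("angola_publications.csv", newline="", encoding="utf-8") as fh:
    rows = list(csv.DictReader(fh))

for column in ("topic", "field", "first_author_country"):
    counts = Counter(row[column] for row in rows)
    print(column, counts.most_common(5))  # e.g. malaria, epidemiological, Angola, ...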
Journals choice: We also found that, although publications were spread across a wide variety of journals, 11.4 % were concentrated in PLoS ONE and Malaria Journal, both open access journals indexed in common databases (Fig. 8). This finding is not unusual given that malaria was the most common research topic among the Angolan publications observed in the present study. Notwithstanding the debate around open access publishing, our findings reinforce the importance of providing readers in financially disadvantaged countries with a wide range of relevant research and information [32, 33]. (Fig. 8: Publications on health research in Angola deposited in MEDLINE/PubMed up to June 8, 2014.)
Limitations of the study: The fact that Angola is a Portuguese-speaking country, while most journals indexed in MEDLINE/PubMed are in English, may have introduced a selection bias due to language barriers. Although MEDLINE/PubMed contained the vast majority of the publications in this study, and the search began in BVS, which includes papers published in journals indexed by SciELO, these sources do not cover all scientific and biomedical journals. Furthermore, we did not search the grey literature. Thus, the number of Angolan authors and co-authors may be underestimated. Moreover, Angola still does not have an exhaustive electronic database of researchers containing individual publication records. In addition, the incorrect reporting of authors' nationality and affiliation, observed in a few papers in the present study, may further contribute to the underestimation of Angolan-authored publications. Finally, the study did not analyse journal impact factors, and the authors did not have access to a citation index database, which would have allowed bibliometric citation analyses.
Conclusions: This initial contribution characterizing the productivity of health research pertaining to Angola showed that the number of publications has increased steadily over the past 10 years and was mainly focused on infectious diseases, namely malaria. Angola, as the country in which the primary institution of the first author's affiliation was based, evidenced the largest number of publications, and about 20 % of the publications included an Angolan as first author. However, academic institutions in Portugal, the United States of America, and Brazil contributed more to research publication than universities and research centres in Angola. The research was largely epidemiological, with, remarkably, a very small number of publications on economic and professional issues, involving themes such as health policy, governance, and the management of health systems and human resources. In Angola, with the large expansion of higher education since 2009, universities should build and strengthen their capacities in research and human resources training. Including research training in the curricula of pre-graduate courses in health sciences will contribute to graduating professionals who are able to act as agents of change, competent human resource managers, and promoters of evidence-based policies [34]. The alignment of the national health research agenda with the national program of health development, together with the wide use of multilateral international cooperation, is critical to developing the operational tools towards universal health coverage in Angola as conceived by the WHO [1]. In summary, this work highlights the rapid increase of scientific publications related to Angola but also the need to reinforce academic-driven research in this country.
Background: Health research driven by the healthcare demands of the population can provide an informative evidence base to support decision-making processes on health policies, programmes, and practices. This paper surveyed the production of scientific research concerning health in Angola, specifically to assess the publication rate over time, the main research topics and scientific fields, and the contribution of Angolan researchers and institutions. Methods: The study focused on data collected in a retrospective literature search in Biblioteca Virtual em Saúde (BVS) as of June 8, 2014, with the keyword "Angola" and on content information in the corresponding publications deposited in PubMed. Results: BVS generated 1,029 hits, 74.6 % of which were deposited in PubMed, where 301 abstracts were described. From 1979 to 2003, there were 62 publications, and in 2004-2013 the number increased four-fold (n = 232); malaria was the most frequent topic (n = 42). Angola was the country with the largest number of publications, taking into account the primary affiliation of the first author (n = 45). Universities, institutes, or research centres accounted for 65 % of the publications, and, in descending order, Portugal, Brazil, and the United States of America occupied the first three positions. Epidemiology was by far the most frequent field of research (n = 165). Conclusions: The number of publications has increased steadily over the past 10 years, with a predominance of malaria topics. Angola was the country with the largest number of major affiliations of the first author, but the contribution of Angolan institutions was relatively low, indicating a need to reinforce academic research institutions in the country.
Background: The concept of research for health, expressed on the World Health Report 2013, covers a broader range of investigations than health research. This wider view of research will become increasingly important in the transition from the United Nations Millennium Development Goals to a post-2015 sustainable development agenda [1]. In a more restricted concept, health research has the potential to contribute to the identification of social and economic determinants of health. In the context of implementation research, it is a major basis for decision-making processes on health policies, programmes, and practices, especially when driven by the demand of the health problems of the population [2, 3]. Moreover, conducting more health research can stimulate countries to strengthen their capacity to produce and use such research to guide decision-making, improve the efficiency of the investments, increase accountability in the work of researchers, and contribute substantially to the research training of human resources in developing countries [4, 5]. In the perspective of the World Health Organization (WHO), health research must be founded on building capacity to step up health research systems and support demand-driven health problems, particularly in low- and middle-income countries. Additionally, health research must be embedded with standards for best practices and ensure that quality evidence is turned into affordable health technologies and evidence-informed policy [6]. However, at the global level, the proportion of health research that considers these aspects is negligible [7]. In particular, evidence shows that the research output in developing countries is considerably less than in developed countries [8]. Therefore, it is crucial to get an accurate picture of the landscape of the scientific research produced in developing countries in order to identify gaps critical for social development [9]. In sub-Saharan Africa, in particular, the production of health research can affect teaching, the quality of undergraduate training, career promotion, and translational research relevant to the country, which is comparable to developed countries [5]. There have been documented bibliometric studies of research production in Africa, as a whole, as well as in other continents. This then motivated studies of the chronological progression of research within individual African countries [10–18]. These publications demonstrate the feebleness of African authorship in the most cited scientific publications in health [19]. Angola, a Portuguese-speaking country, is independent since 1975 and has spent 27 years in progressive socio-political instability, with the drawback of the civil war that ended 12 years ago. Nevertheless, Angola is making progress towards the achievement of the health Millennium Development Goals but is still far from achieving them [1]. The reconstruction of the national health system and policy will benefit from the knowledge of the factors influencing the success of certain interventions and the measure of their impact on populations. Therefore, health research will contribute to increased efficiency and effectiveness of health policy and decision making processes in Angola on programmes and practices. Support for science in Angola is less than 0.1 % of its GDP, compared to a world average of 1.7 % [20]. Consequently, a more serious commitment would be needed to build local and national capacity to accelerate the improvement of human resources training for health research. 
In this context, it becomes even more appropriate to carry out a survey on what has been produced as the result of health research. When searching the literature on health research in Angola, we found only one publication, dated from 1953, which linked research to health, entitled “Scientific Research and the Health Status of Angola” [21]. The lack of studies on health research produced in Angola in the last 61 years motivated this work. We carried out a non-systematic literature survey to evaluate the progression of health research pertaining to Angola, specifically answering the following questions: 1) how has the number of publications evolved over the years? 2) what are the most published topics? 3) what is the level of involvement of Angolan researchers and institutions in this published research? and 4) what are the most represented research fields? We expect that this study will contribute to the current state of knowledge on health research related to Angola and identify gaps that may guide the WHO recommendations on health research and, consequently, the adoption of effective interventions that will strengthen health research in Angola. Conclusions: This initial contribution characterizing the productivity of health research pertaining to Angola showed that the number of publications has increased steadily over the past 10 years and was mainly focused on infectious diseases, namely malaria. Angola, as the country in which the first author's primary institution was based, accounted for the largest number of publications, and about 20 % of the publications included an Angolan as the first author. However, academic institutions in Portugal, the United States of America, and Brazil contributed more to research publication than universities and research centres in Angola. The research was largely epidemiological and, remarkably, included a very small number of publications on economic and professional issues, involving themes such as health policy, governance, and management of health systems and human resources. In Angola, with the large expansion of higher education since 2009, universities should build and strengthen their capacities in research and human resources training. Including research training in the curricula of undergraduate courses in health sciences will contribute to graduating professionals able to act as agents of change, competent human resource managers, and promoters of evidence-based policies [34]. The alignment of the national health research agenda with the national programme of health development and the wide use of multilateral international cooperation are critical to developing the operational tools towards universal health coverage in Angola as conceived by the WHO [1]. In summary, this work highlights the rapid increase of scientific publications related to Angola but also a need for reinforcing academic-driven research in this country.
Background: Health research driven by the healthcare demands of the population can provide an informative evidence base to support decision-making processes on health policies, programmes, and practices. This paper surveyed the production of scientific research concerning health in Angola, specifically to assess the publication rate over time, the main research topics and scientific fields, and the contribution of Angolan researchers and institutions. Methods: The study focused on data collected in a retrospective literature search in Biblioteca Virtual em Saúde (BVS) as of June 8, 2014, with the keyword "Angola" and on content information in corresponding publications deposited in PubMed. Results: BVS generated 1,029 hits, 74.6 % of which were deposited in PubMed, where 301 abstracts were described. From 1979 to 2003, there were 62 publications, and in 2004-2013 the quantity increased four-fold (n = 232); malaria was the most frequent topic (n = 42). Angola was the country with the largest number of publications, taking into account the primary affiliation of the first author (n = 45). Universities, institutes, or research centres accounted for 65 % of the publications, and Portugal, Brazil, and the United States of America occupied the first three positions in descending order. Epidemiology was by far the most frequent field of research (n = 165). Conclusions: The number of publications has increased steadily over the past 10 years, with a predominance of malaria-related topics. Angola was the country with the largest number of primary affiliations of the first author, but the contribution of Angolan institutions was relatively low, indicating a need to reinforce academic research institutions in the country.
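As a hedged illustration of the kind of tallying behind these bibliometric figures (publications per period, most frequent topics, first-author affiliation countries), the sketch below assumes a hypothetical list of exported records with "year", "topic", and "affiliation_country" fields; the field names and example values are placeholders, not the actual BVS/PubMed export.

```python
from collections import Counter

# Hypothetical records as they might be exported from a PubMed/BVS search;
# field names and values are illustrative only.
records = [
    {"year": 1998, "topic": "malaria", "affiliation_country": "Portugal"},
    {"year": 2005, "topic": "malaria", "affiliation_country": "Angola"},
    {"year": 2011, "topic": "HIV", "affiliation_country": "Brazil"},
    # ... one dict per publication
]

# Publications per period (e.g., 1979-2003 vs. 2004-2013).
by_period = Counter(
    "1979-2003" if r["year"] <= 2003 else "2004-2013" for r in records
)

# Most frequent topics and first-author affiliation countries.
by_topic = Counter(r["topic"] for r in records)
by_country = Counter(r["affiliation_country"] for r in records)

print(by_period)
print(by_topic.most_common(5))
print(by_country.most_common(5))
```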
8,108
330
[ 549, 300, 347, 214, 352, 136, 188 ]
11
[ "publications", "research", "health", "angola", "pubmed", "medline", "medline pubmed", "2014", "june", "health research" ]
[ "research includes health", "health research produced", "national health research", "conducting health research", "health research agenda" ]
null
[CONTENT] Angola | Health research | Health research centres | Universities [SUMMARY]
null
[CONTENT] Angola | Health research | Health research centres | Universities [SUMMARY]
[CONTENT] Angola | Health research | Health research centres | Universities [SUMMARY]
[CONTENT] Angola | Health research | Health research centres | Universities [SUMMARY]
[CONTENT] Angola | Health research | Health research centres | Universities [SUMMARY]
[CONTENT] Academies and Institutes | Angola | Authorship | Bibliometrics | Biomedical Research | Brazil | Epidemiologic Studies | Humans | Malaria | Portugal | Publishing | United States | Universities [SUMMARY]
null
[CONTENT] Academies and Institutes | Angola | Authorship | Bibliometrics | Biomedical Research | Brazil | Epidemiologic Studies | Humans | Malaria | Portugal | Publishing | United States | Universities [SUMMARY]
[CONTENT] Academies and Institutes | Angola | Authorship | Bibliometrics | Biomedical Research | Brazil | Epidemiologic Studies | Humans | Malaria | Portugal | Publishing | United States | Universities [SUMMARY]
[CONTENT] Academies and Institutes | Angola | Authorship | Bibliometrics | Biomedical Research | Brazil | Epidemiologic Studies | Humans | Malaria | Portugal | Publishing | United States | Universities [SUMMARY]
[CONTENT] Academies and Institutes | Angola | Authorship | Bibliometrics | Biomedical Research | Brazil | Epidemiologic Studies | Humans | Malaria | Portugal | Publishing | United States | Universities [SUMMARY]
[CONTENT] research includes health | health research produced | national health research | conducting health research | health research agenda [SUMMARY]
null
[CONTENT] research includes health | health research produced | national health research | conducting health research | health research agenda [SUMMARY]
[CONTENT] research includes health | health research produced | national health research | conducting health research | health research agenda [SUMMARY]
[CONTENT] research includes health | health research produced | national health research | conducting health research | health research agenda [SUMMARY]
[CONTENT] research includes health | health research produced | national health research | conducting health research | health research agenda [SUMMARY]
[CONTENT] publications | research | health | angola | pubmed | medline | medline pubmed | 2014 | june | health research [SUMMARY]
null
[CONTENT] publications | research | health | angola | pubmed | medline | medline pubmed | 2014 | june | health research [SUMMARY]
[CONTENT] publications | research | health | angola | pubmed | medline | medline pubmed | 2014 | june | health research [SUMMARY]
[CONTENT] publications | research | health | angola | pubmed | medline | medline pubmed | 2014 | june | health research [SUMMARY]
[CONTENT] publications | research | health | angola | pubmed | medline | medline pubmed | 2014 | june | health research [SUMMARY]
[CONTENT] health | research | health research | countries | making | angola | development | world | practices | developing countries [SUMMARY]
null
[CONTENT] publications | research | health | angola | pubmed | 2014 | medline | medline pubmed | june | author [SUMMARY]
[CONTENT] research | health | angola | graduate | human | publications | number publications | national | academic | training [SUMMARY]
[CONTENT] research | health | publications | angola | author | 2014 | pubmed | medline | medline pubmed | june [SUMMARY]
[CONTENT] research | health | publications | angola | author | 2014 | pubmed | medline | medline pubmed | june [SUMMARY]
[CONTENT] ||| Angola | Angolan [SUMMARY]
null
[CONTENT] 1,029 | 74.6 % | PubMed | 301 ||| 1979 | 2003 | 62 | 2004-2013 | four-fold | 232 | 42 ||| Angola | first | 45 ||| 65 % | Portugal | Brazil | the United States of America | three ||| 165 [SUMMARY]
[CONTENT] the past 10 years ||| Angola | first | Angolan [SUMMARY]
[CONTENT] ||| Angola | Angolan ||| Biblioteca Virtual | Saúde | June 8, 2014 | Angola | PubMed ||| 1,029 | 74.6 % | PubMed | 301 ||| 1979 | 2003 | 62 | 2004-2013 | four-fold | 232 | 42 ||| Angola | first | 45 ||| 65 % | Portugal | Brazil | the United States of America | three ||| 165 ||| the past 10 years ||| Angola | first | Angolan [SUMMARY]
[CONTENT] ||| Angola | Angolan ||| Biblioteca Virtual | Saúde | June 8, 2014 | Angola | PubMed ||| 1,029 | 74.6 % | PubMed | 301 ||| 1979 | 2003 | 62 | 2004-2013 | four-fold | 232 | 42 ||| Angola | first | 45 ||| 65 % | Portugal | Brazil | the United States of America | three ||| 165 ||| the past 10 years ||| Angola | first | Angolan [SUMMARY]
Medial Ball and Socket Total Knee Arthroplasty in Indian Population: 5-Year Clinical Results.
35251545
Medial pivot total knee arthroplasty aims to restore native knee kinematics through highly conforming medial tibiofemoral articulation with survival comparable to contemporary knee designs. The aim of this study was to report preliminary clinical results of medial pivot total knee arthroplasty in an Indian population.
BACKGROUND
A retrospective analysis of 45 patients (average age, 62 years; 40 women and 5 men) with end-stage arthritis (Kellgren-Lawrence grade 4) operated with a medial pivot prosthesis was done. All patients were assessed using Knee Society Score (satisfaction, expectation, and functional scores) and Oxford Knee Score, and range of motion was recorded at the end of 5-year postoperative follow-up. In addition, all patients underwent standardized radiological assessment.
METHODS
At the final follow-up, patients reported significant improvement in mean Knee Society Score (satisfaction, expectation, and functional scores) and Oxford Knee Score (p < 0.05). The mean range of motion achieved at the end of 5 years ranged from 0° (extension) to 118.4° (further flexion). There was no evidence of loosening or osteolysis at a minimum follow-up of 5 years.
RESULTS
These results demonstrated satisfactory clinical and radiological outcomes at 5 years after total knee arthroplasty with a medial pivot design, which may be related to better replication of natural knee kinematics with the medial pivot knee and inherent advantages of this design.
CONCLUSIONS
[ "Arthroplasty, Replacement, Knee", "Female", "Follow-Up Studies", "Humans", "Knee Joint", "Knee Prosthesis", "Male", "Middle Aged", "Osteoarthritis, Knee", "Prosthesis Design", "Range of Motion, Articular", "Retrospective Studies" ]
8858900
null
null
METHODS
A retrospective analysis of prospectively collected data of adult patients with end-stage knee arthritis (Kellgren-Lawrence grade 4) operated in a University Teaching Hospital from January 2015 to June 2015 was performed. A total of 245 patients were identified. Only patients (45 patients) who underwent MP TKA (ADVANCE Medial Pivot, MicroPort Orthopedics, Arlington, TN, USA) were included in the study. MP TKA was introduced to our system in 2014. Institutional Review Board approval was obtained for the study (IRB No. IEC-486). The primary outcome measures for this study were patient satisfaction and met expectations with the MP prosthesis using Knee Society Score (KSS) satisfaction, expectation, and functional scores. The secondary outcome measures were Oxford Knee Score (OKS), range of motion, and radiological assessment. All TKAs were performed by the same surgeon (RM) using the standard medial parapatellar approach. All patients were given an antibiotic (cefuroxime axetil 1.5 g) 30–45 minutes prior to skin incision. All patients were given intravenous tranexamic acid (15 mg/kg) 10–15 minutes prior to tourniquet deflation. The posterior cruciate ligament (PCL) was sacrificed in all cases and implants were fixed with a single mix of Palacos bone cement with gentamicin (Heraeus Medical, Warsaw, IN, USA). Whenever there was mediolateral overhang for the selected anteroposterior size of the femoral component, the authors used a stature femoral component (narrow mediolateral) available with this prosthesis. The stature femoral component was used in 70% of the patients and the most common size of femur and tibia used in the study was 2. Patelloplasty was done. The surgical and rehabilitation protocol was standardized for all patients. For each patient, demographic data, type of arthritis, body mass index (BMI), and preoperative scores were documented from the patient record, and postoperative evaluation was done at the latest follow-up using KSS (satisfaction, expectation, and functional scores), OKS, and range of motion (measured using a goniometer). Flexion deformity, if any, was also recorded. Serial standard preoperative and postoperative radiographs at baseline and the latest follow-up were evaluated by an independent radiologist (NS) as per the protocol defined by the modern Knee Society radiographic evaluation system (Figs. 1, 2, 3).14) TKA loosening was defined as a complete radiolucent line of more than 2 mm in width, a visible cement mantle fracture around the component, or a change in component position. The intraclass correlation coefficient was excellent (0.92) for clinical evaluation and good (0.84) for radiographic evaluation.15) Statistical Analysis: Baseline characteristics were described for each patient using mean ± standard deviation, median (range), or frequency/percentage. KSS (satisfaction, expectation, and functional scores) and OKS were compared using a generalized estimating equation because the observations were correlated preoperatively and postoperatively. Within-group comparison for all other parameters was performed using the paired t-test or Wilcoxon rank-sum test (Mann-Whitney U-test). Correlation between two continuous variables was analysed using Pearson correlation coefficients. All analyses were conducted using Stata ver. 13.0 (Stata Corp., College Station, TX, USA). A p-value < 0.05 was considered statistically significant.
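The analyses above were run in Stata; as a non-authoritative sketch of the same style of paired pre/post comparison and correlation analysis, the snippet below uses Python's scipy.stats on made-up preoperative and postoperative scores (the values are placeholders, not study data). It uses the paired signed-rank variant of the Wilcoxon test, since the observations are paired, whereas the paper cites the rank-sum (Mann-Whitney U) test.

```python
import numpy as np
from scipy import stats

# Placeholder paired scores (one value per patient, pre- and post-operative);
# these numbers are illustrative, not the study's data.
oks_pre = np.array([9, 11, 8, 12, 10, 7, 9, 10])
oks_post = np.array([44, 46, 42, 45, 43, 41, 44, 47])

# Paired non-parametric comparison (analogous to the within-group tests described).
stat, p_value = stats.wilcoxon(oks_pre, oks_post)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p_value:.4f}")

# Correlation between two continuous variables, e.g. flexion vs. functional score.
flexion = np.array([110, 120, 105, 125, 118, 100, 115, 122])
kss_function = np.array([55, 62, 50, 66, 60, 45, 58, 64])
r, p_corr = stats.pearsonr(flexion, kss_function)
print(f"Pearson r={r:.2f}, p={p_corr:.4f}")
```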
RESULTS
All patients reported improved satisfaction and functional outcome after TKA at a minimum follow-up of 5 years. The average age at the time of TKA was 62 years and 88.9% of the patients were women. There were 26 left knees and 24 right knees. The average BMI was 28.4 kg/m2. Five patients underwent bilateral TKA (total 50 MP TKAs). There were 4 cases of inflammatory arthritis, while the rest were degenerative osteoarthritis (Table 1). The mean preoperative KSS was 4.4 ± 1.7 (range, 0–6) for satisfaction score, 11.8 ± 1.6 (range, 7–15) for expectation score, and 19.6 ± 4.4 (range, 6–27) for functional score. The mean postoperative KSS was 34.6 ± 3.0 (range, 26–38) for satisfaction score, 12.5 ± 1.4 (range, 10–15) for expectation score, and 61.3 ± 6.7 (range, 42–69) for functional score at the latest follow-up. The mean OKS was 9.5 ± 2.5 (range, 4–12) preoperatively and 44.2 ± 2.2 (range, 40–48) at the latest follow-up. Patients reported clinically and statistically significant improvements (p < 0.001) for all patient-reported outcome measures (Table 2). The mean range of motion showed a statistically significant improvement from 97.6° ± 11.2° (70°–120°) to 118.4° ± 8.4° (100°–130°). Three patients had flexion deformity of less than 10° at the final follow-up. There was no evidence of implant failure or subsidence in any of the postoperative radiographs. Nonprogressive radiolucent lines (< 2 mm) were seen in 5 TKAs. The average tibiofemoral angle improved from a mean of 4.8° varus (2.2°–7.4°) preoperatively to a mean of 4.1° valgus (3.3°–4.9°) at the latest follow-up (Table 3). None of the patients were lost to follow-up or died due to surgery-related or unrelated causes. One patient with rheumatoid arthritis developed mid-substance quadriceps tendon rupture, which was managed with mesh repair. The patient did well and was able to perform activities of daily living.
null
null
[ "Statistical Analysis" ]
[ "Baseline characteristics were described for each patient using mean ± standard deviation, median (range), or frequency/percentage. KSS (satisfaction, expectation, and functional scores) and OKS were compared using a generalized estimating equation because the observations were correlated preoperatively and postoperatively. Within group comparison for all other parameters was performed using paired t-test or Wilcoxon rank-sum test (Mann-Whitney U-test). Correlation between two continuous variables was analysed using Pearson correlation coefficients. All analyses were conducted using Stata ver. 13.0 (Stata Corp., College Station, TX, USA). A p-value < 0.05 was considered statistically significant." ]
[ null ]
[ "METHODS", "Statistical Analysis", "RESULTS", "DISCUSSION" ]
[ "A retrospective analysis of prospectively collected data of adult patients with end-stage knee arthritis (Kellgren-Lawrence grade 4) operated in a University Teaching Hospital from January 2015 to June 2015 was performed. A total of 245 patients were identified. Only patients (45 patients) who underwent MP TKA (ADVANCE Medial Pivot, MicroPort Orthopedics, Arlington, TN, USA) were included in the study. MP TKA was introduced to our system in 2014. Institutional Review Board approval was taken for the study (IRB No. IEC-486).\nThe primary outcome measures for this study were patient satisfaction and met-expectations with the MP prosthesis using Knee Society Score (KSS) satisfaction, expectation, and functional scores. The secondary outcome measures were Oxford Knee Score (OKS), range of motion, and radiological assessment.\nAll TKAs were performed by the same surgeon (RM) using the standard medial parapatellar approach. All patients were given an antibiotic (Cefuroxime axetil 1.5 g) 30–45 minutes prior to skin incision. All patients were given intravenous tranexamic acid (15 mg/kg) 10–15 minutes prior to tourniquet deflation. The posterior cruciate ligament (PCL) was sacrificed in all cases and implants were fixed with a single mix of Palacos bone cement with gentamicin (Heraeus Medical, Warsaw, IN, USA). Whenever there was mediolateral overhang as per the anteroposterior size of the femoral component, the authors used a stature femoral component (narrow mediolateral) available with this prosthesis. The stature femoral component was used in 70% of the patients and the most common size of femur and tibia used in the study was 2. Patelloplasty was done. Surgical and rehabilitation protocol was standardized for all patients.\nFor each patient, demographic data, type of arthritis, body mass index (BMI), and preoperative scores were documented from the patient record and postoperative evaluation was done at the latest follow-up using KSS (satisfaction, expectation, and functional scores), OKS, and range of motion (measured using a goniometer). Flexion deformity, if any, was also recorded. Serial standard preoperative and postoperative radiographs at baseline and the latest follow-up were evaluated by an independent radiologist (NS) as per the protocol defined by the modern knee society radiographic evaluation system (Figs. 1, 2, 3).14) TKA loosening was defined as a complete radiolucent line of more than 2 mm in width, a visible cement mantle fracture around the component, or a change in component position. The interclass correlation coefficient was excellent (0.92) for clinical evaluation and good (0.84) for radiographic evaluation.15)\n Statistical Analysis Baseline characteristics were described for each patient using mean ± standard deviation, median (range), or frequency/percentage. KSS (satisfaction, expectation, and functional scores) and OKS were compared using a generalized estimating equation because the observations were correlated preoperatively and postoperatively. Within group comparison for all other parameters was performed using paired t-test or Wilcoxon rank-sum test (Mann-Whitney U-test). Correlation between two continuous variables was analysed using Pearson correlation coefficients. All analyses were conducted using Stata ver. 13.0 (Stata Corp., College Station, TX, USA). A p-value < 0.05 was considered statistically significant.\nBaseline characteristics were described for each patient using mean ± standard deviation, median (range), or frequency/percentage. 
KSS (satisfaction, expectation, and functional scores) and OKS were compared using a generalized estimating equation because the observations were correlated preoperatively and postoperatively. Within group comparison for all other parameters was performed using paired t-test or Wilcoxon rank-sum test (Mann-Whitney U-test). Correlation between two continuous variables was analysed using Pearson correlation coefficients. All analyses were conducted using Stata ver. 13.0 (Stata Corp., College Station, TX, USA). A p-value < 0.05 was considered statistically significant.", "Baseline characteristics were described for each patient using mean ± standard deviation, median (range), or frequency/percentage. KSS (satisfaction, expectation, and functional scores) and OKS were compared using a generalized estimating equation because the observations were correlated preoperatively and postoperatively. Within group comparison for all other parameters was performed using paired t-test or Wilcoxon rank-sum test (Mann-Whitney U-test). Correlation between two continuous variables was analysed using Pearson correlation coefficients. All analyses were conducted using Stata ver. 13.0 (Stata Corp., College Station, TX, USA). A p-value < 0.05 was considered statistically significant.", "All patients reported improved satisfaction and functional outcome after TKA at a minimum follow-up of 5 years. The average age at the time of TKA was 62 years and 88.9% of the patients were women. There were 26 left knees and 24 right knees. The average BMI was 28.4 kg/m2. Five patients underwent bilateral TKA (total 50 MP TKAs). There were 4 cases of inflammatory arthritis, while the rest were degenerative osteoarthritis (Table 1).\nThe mean preoperative KSS was 4.4 ± 1.7 (range, 0–6) for satisfaction score, 11.8 ± 1.6 (range, 7–15) for expectation score, and 19.6 ± 4.4 (range, 6–27) for functional score. The mean postoperative KSS was 34.6 ± 3.0 (range, 26–38) for satisfaction score, 12.5 ± 1.4 (range, 10–15) for expectation score, and 61.3 ± 6.7 (range, 42–69) for functional score at the latest follow-up. The mean OKS was 9.5 ± 2.5 (range, 4–12) preoperatively and 44.2 ± 2.2 (range, 40–48) at the latest follow-up. Patients reported clinically and statistically significant improvements (p < 0.001) for all patient-reported outcome measures (Table 2). The mean range of motion showed a statistically significant improvement from 97.6° ± 11.2° (70°–120°) to 118.4° ± 8.4° (100°–130°). Three patients had flexion deformity of less than 10° at the final follow-up.\nThere was no evidence of implant failure or subsidence in any of the postoperative radiographs. Nonprogressive radiolucent lines (< 2 mm) were seen in 5 TKAs. The average tibiofemoral angle improved from a mean of 4.8° varus (2.2°–7.4°) preoperatively to a mean of 4.1° valgus (3.3°–4.9°) at the latest follow-up (Table 3). None of the patients were lost to follow-up or died due to surgery-related or unrelated causes. One patient with rheumatoid arthritis developed mid-substance quadriceps tendon rupture, which was managed with mesh repair. The patient did well and was able to perform activities of daily living.", "Traditional methods of TKA have failed to recreate normal knee kinematics, which may contribute to up to 20% suboptimal outcomes.16) Attempts have been made to refine prostheses to achieve normal kinematics and elongate implant survivorship. 
The most important finding of this study is satisfactory outcome with MP-TKA without any major complications at a minimum follow-up of 5 years in the Indian population. This may be related to better replication of natural knee kinematics with MP knees and inherent advantages of this design.\nThe clinical results of our study are encouraging and satisfactory with all the patients reporting significant improvement in the patient-reported outcome measures. These results are in agreement with other studies in the literature.7891011) All patients in the present study felt their knees stable. None of the patients developed any symptomatic anteroposterior instability. Single radius curvature of the femoral component in MP TKA maximizes quadriceps efficiency, limits paradoxical roll forward, and provides potential advantages to the extensor apparatus.2) Pritchett17) reported that 77% of patients preferred MP to the posterior-stabilized (PS) design. Macheras et al.10) reported excellent long-term clinical results with an MP design and survival of 98.8% at 17 years of follow-up.\nThe average flexion reported in the present study was 118.4°, which is comparable to other studies with similar knee designs.7891011171819) Although the maximum flexion was less than that in other high-flex knees and mobile-bearing designs, this may be related to late presentation with advanced arthritis in the Indian population.20) The advantage of the MP design is that patients feel their knees are stable in activities requiring deep flexion due to its medial conformity. Furthermore, it has been reported that the increased range of motion correlates with increased satisfaction after TKA in Asian patients.21)\nThe medial conformity of this design also obviates the need for ligament release for balancing the knee, which is required in severe varus deformity; severe deformities necessitate PCL release to balance the knees. The posterior-stabilized type of MP TKA does not require a box cut for post-cam mechanism; instead, its ultracongruent polyethylene with anterior lip provides anteroposterior stability without any post. This also preserves bone stock in already smaller bones in Asian patients and, if required, for future revision surgery.18) Bae et al.22) showed that there was no difference in clinical and radiological results of TKA with an MP design regardless of whether the PCL was retained (67 knees) or sacrificed (70 knees).\nThe radiographic results were similar to those of other studies.7891011171819) There was no evidence of loosening or osteolysis in the present study, which may be attributed to the fact that the MP design produces fewer wear particles due to its ultracongruent nature.23) We note certain inherent limitations of the study, such as the single center study, small number of patients, retrospective study design, absence of control group, and relatively short follow-up. The current study has demonstrated satisfactory clinical and radiological outcomes at 5 years following MP-TKA in an Indian population." ]
[ "methods", null, "results", "discussion" ]
[ "Medial", "Knee", "Arthroplasty", "Clinical", "Radiological" ]
METHODS: A retrospective analysis of prospectively collected data of adult patients with end-stage knee arthritis (Kellgren-Lawrence grade 4) operated in a University Teaching Hospital from January 2015 to June 2015 was performed. A total of 245 patients were identified. Only patients (45 patients) who underwent MP TKA (ADVANCE Medial Pivot, MicroPort Orthopedics, Arlington, TN, USA) were included in the study. MP TKA was introduced to our system in 2014. Institutional Review Board approval was taken for the study (IRB No. IEC-486). The primary outcome measures for this study were patient satisfaction and met-expectations with the MP prosthesis using Knee Society Score (KSS) satisfaction, expectation, and functional scores. The secondary outcome measures were Oxford Knee Score (OKS), range of motion, and radiological assessment. All TKAs were performed by the same surgeon (RM) using the standard medial parapatellar approach. All patients were given an antibiotic (Cefuroxime axetil 1.5 g) 30–45 minutes prior to skin incision. All patients were given intravenous tranexamic acid (15 mg/kg) 10–15 minutes prior to tourniquet deflation. The posterior cruciate ligament (PCL) was sacrificed in all cases and implants were fixed with a single mix of Palacos bone cement with gentamicin (Heraeus Medical, Warsaw, IN, USA). Whenever there was mediolateral overhang as per the anteroposterior size of the femoral component, the authors used a stature femoral component (narrow mediolateral) available with this prosthesis. The stature femoral component was used in 70% of the patients and the most common size of femur and tibia used in the study was 2. Patelloplasty was done. Surgical and rehabilitation protocol was standardized for all patients. For each patient, demographic data, type of arthritis, body mass index (BMI), and preoperative scores were documented from the patient record and postoperative evaluation was done at the latest follow-up using KSS (satisfaction, expectation, and functional scores), OKS, and range of motion (measured using a goniometer). Flexion deformity, if any, was also recorded. Serial standard preoperative and postoperative radiographs at baseline and the latest follow-up were evaluated by an independent radiologist (NS) as per the protocol defined by the modern knee society radiographic evaluation system (Figs. 1, 2, 3).14) TKA loosening was defined as a complete radiolucent line of more than 2 mm in width, a visible cement mantle fracture around the component, or a change in component position. The interclass correlation coefficient was excellent (0.92) for clinical evaluation and good (0.84) for radiographic evaluation.15) Statistical Analysis Baseline characteristics were described for each patient using mean ± standard deviation, median (range), or frequency/percentage. KSS (satisfaction, expectation, and functional scores) and OKS were compared using a generalized estimating equation because the observations were correlated preoperatively and postoperatively. Within group comparison for all other parameters was performed using paired t-test or Wilcoxon rank-sum test (Mann-Whitney U-test). Correlation between two continuous variables was analysed using Pearson correlation coefficients. All analyses were conducted using Stata ver. 13.0 (Stata Corp., College Station, TX, USA). A p-value < 0.05 was considered statistically significant. Baseline characteristics were described for each patient using mean ± standard deviation, median (range), or frequency/percentage. 
KSS (satisfaction, expectation, and functional scores) and OKS were compared using a generalized estimating equation because the observations were correlated preoperatively and postoperatively. Within group comparison for all other parameters was performed using paired t-test or Wilcoxon rank-sum test (Mann-Whitney U-test). Correlation between two continuous variables was analysed using Pearson correlation coefficients. All analyses were conducted using Stata ver. 13.0 (Stata Corp., College Station, TX, USA). A p-value < 0.05 was considered statistically significant. RESULTS: All patients reported improved satisfaction and functional outcome after TKA at a minimum follow-up of 5 years. The average age at the time of TKA was 62 years and 88.9% of the patients were women. There were 26 left knees and 24 right knees. The average BMI was 28.4 kg/m2. Five patients underwent bilateral TKA (total 50 MP TKAs). There were 4 cases of inflammatory arthritis, while the rest were degenerative osteoarthritis (Table 1). The mean preoperative KSS was 4.4 ± 1.7 (range, 0–6) for satisfaction score, 11.8 ± 1.6 (range, 7–15) for expectation score, and 19.6 ± 4.4 (range, 6–27) for functional score. The mean postoperative KSS was 34.6 ± 3.0 (range, 26–38) for satisfaction score, 12.5 ± 1.4 (range, 10–15) for expectation score, and 61.3 ± 6.7 (range, 42–69) for functional score at the latest follow-up. The mean OKS was 9.5 ± 2.5 (range, 4–12) preoperatively and 44.2 ± 2.2 (range, 40–48) at the latest follow-up. Patients reported clinically and statistically significant improvements (p < 0.001) for all patient-reported outcome measures (Table 2). The mean range of motion showed a statistically significant improvement from 97.6° ± 11.2° (70°–120°) to 118.4° ± 8.4° (100°–130°). Three patients had flexion deformity of less than 10° at the final follow-up. There was no evidence of implant failure or subsidence in any of the postoperative radiographs. Nonprogressive radiolucent lines (< 2 mm) were seen in 5 TKAs. The average tibiofemoral angle improved from a mean of 4.8° varus (2.2°–7.4°) preoperatively to a mean of 4.1° valgus (3.3°–4.9°) at the latest follow-up (Table 3). None of the patients were lost to follow-up or died due to surgery-related or unrelated causes. One patient with rheumatoid arthritis developed mid-substance quadriceps tendon rupture, which was managed with mesh repair. The patient did well and was able to perform activities of daily living. DISCUSSION: Traditional methods of TKA have failed to recreate normal knee kinematics, which may contribute to up to 20% suboptimal outcomes.16) Attempts have been made to refine prostheses to achieve normal kinematics and elongate implant survivorship.
The most important finding of this study is satisfactory outcome with MP-TKA without any major complications at a minimum follow-up of 5 years in the Indian population. This may be related to better replication of natural knee kinematics with MP knees and inherent advantages of this design. The clinical results of our study are encouraging and satisfactory with all the patients reporting significant improvement in the patient-reported outcome measures. These results are in agreement with other studies in the literature.7-11) All patients in the present study felt their knees were stable. None of the patients developed any symptomatic anteroposterior instability. Single radius curvature of the femoral component in MP TKA maximizes quadriceps efficiency, limits paradoxical roll forward, and provides potential advantages to the extensor apparatus.2) Pritchett17) reported that 77% of patients preferred MP to the posterior-stabilized (PS) design. Macheras et al.10) reported excellent long-term clinical results with an MP design and survival of 98.8% at 17 years of follow-up. The average flexion reported in the present study was 118.4°, which is comparable to other studies with similar knee designs.7-11,17-19) Although the maximum flexion was less than that in other high-flex knees and mobile-bearing designs, this may be related to late presentation with advanced arthritis in the Indian population.20) The advantage of the MP design is that patients feel their knees are stable in activities requiring deep flexion due to its medial conformity. Furthermore, it has been reported that the increased range of motion correlates with increased satisfaction after TKA in Asian patients.21) The medial conformity of this design also obviates the need for ligament release for balancing the knee, which is required in severe varus deformity; severe deformities necessitate PCL release to balance the knees. The posterior-stabilized type of MP TKA does not require a box cut for post-cam mechanism; instead, its ultracongruent polyethylene with anterior lip provides anteroposterior stability without any post. This also preserves bone stock in already smaller bones in Asian patients and, if required, for future revision surgery.18) Bae et al.22) showed that there was no difference in clinical and radiological results of TKA with an MP design regardless of whether the PCL was retained (67 knees) or sacrificed (70 knees). The radiographic results were similar to those of other studies.7-11,17-19) There was no evidence of loosening or osteolysis in the present study, which may be attributed to the fact that the MP design produces fewer wear particles due to its ultracongruent nature.23) We note certain inherent limitations of the study, such as the single center study, small number of patients, retrospective study design, absence of control group, and relatively short follow-up. The current study has demonstrated satisfactory clinical and radiological outcomes at 5 years following MP-TKA in an Indian population.
Background: Medial pivot total knee arthroplasty aims to restore native knee kinematics through highly conforming medial tibiofemoral articulation with survival comparable to contemporary knee designs. The aim of this study was to report preliminary clinical results of medial pivot total knee arthroplasty in an Indian population. Methods: A retrospective analysis of 45 patients (average age, 62 years; 40 women and 5 men) with end-stage arthritis (Kellgren-Lawrence grade 4) operated with a medial pivot prosthesis was done. All patients were assessed using Knee Society Score (satisfaction, expectation, and functional scores) and Oxford Knee Score, and range of motion was recorded at the end of 5-year postoperative follow-up. In addition, all patients underwent standardized radiological assessment. Results: At the final follow-up, patients reported significant improvement in mean Knee Society Score (satisfaction, expectation, and functional scores) and Oxford Knee Score (p < 0.05). The mean range of motion achieved at the end of 5 years ranged from 0° (extension) to 118.4° (further flexion). There was no evidence of loosening or osteolysis at a minimum follow-up of 5 years. Conclusions: These results demonstrated satisfactory clinical and radiological outcomes at 5 years after total knee arthroplasty with a medial pivot design, which may be related to better replication of natural knee kinematics with the medial pivot knee and inherent advantages of this design.
null
null
1,877
276
[ 124 ]
4
[ "patients", "range", "mp", "study", "tka", "follow", "satisfaction", "patient", "test", "mean" ]
[ "stage knee arthritis", "knee score", "kinematics mp knees", "knees radiographic results", "prosthesis knee society" ]
null
null
null
null
[CONTENT] Medial | Knee | Arthroplasty | Clinical | Radiological [SUMMARY]
[CONTENT] Medial | Knee | Arthroplasty | Clinical | Radiological [SUMMARY]
null
[CONTENT] Medial | Knee | Arthroplasty | Clinical | Radiological [SUMMARY]
null
null
[CONTENT] Arthroplasty, Replacement, Knee | Female | Follow-Up Studies | Humans | Knee Joint | Knee Prosthesis | Male | Middle Aged | Osteoarthritis, Knee | Prosthesis Design | Range of Motion, Articular | Retrospective Studies [SUMMARY]
[CONTENT] Arthroplasty, Replacement, Knee | Female | Follow-Up Studies | Humans | Knee Joint | Knee Prosthesis | Male | Middle Aged | Osteoarthritis, Knee | Prosthesis Design | Range of Motion, Articular | Retrospective Studies [SUMMARY]
null
[CONTENT] Arthroplasty, Replacement, Knee | Female | Follow-Up Studies | Humans | Knee Joint | Knee Prosthesis | Male | Middle Aged | Osteoarthritis, Knee | Prosthesis Design | Range of Motion, Articular | Retrospective Studies [SUMMARY]
null
null
[CONTENT] stage knee arthritis | knee score | kinematics mp knees | knees radiographic results | prosthesis knee society [SUMMARY]
[CONTENT] stage knee arthritis | knee score | kinematics mp knees | knees radiographic results | prosthesis knee society [SUMMARY]
null
[CONTENT] stage knee arthritis | knee score | kinematics mp knees | knees radiographic results | prosthesis knee society [SUMMARY]
null
null
[CONTENT] patients | range | mp | study | tka | follow | satisfaction | patient | test | mean [SUMMARY]
[CONTENT] patients | range | mp | study | tka | follow | satisfaction | patient | test | mean [SUMMARY]
null
[CONTENT] patients | range | mp | study | tka | follow | satisfaction | patient | test | mean [SUMMARY]
null
null
[CONTENT] patients | test | evaluation | component | scores | correlation | kss satisfaction expectation functional | kss satisfaction expectation | kss satisfaction | satisfaction expectation functional scores [SUMMARY]
[CONTENT] score | range | patients | follow | mean | table | latest follow | latest | reported | average [SUMMARY]
null
[CONTENT] patients | test | range | study | mp | tka | correlation | mean | follow | score [SUMMARY]
null
null
[CONTENT] 45 | 62 years | 40 | 5 | Kellgren-Lawrence | 4 ||| Knee Society Score | Oxford Knee Score | the end of 5-year ||| [SUMMARY]
[CONTENT] Knee Society Score | Oxford Knee Score ||| the end of 5 years | 118.4 ||| 5 years [SUMMARY]
null
[CONTENT] ||| Indian ||| 45 | 62 years | 40 | 5 | Kellgren-Lawrence | 4 ||| Knee Society Score | Oxford Knee Score | the end of 5-year ||| ||| ||| Knee Society Score | Oxford Knee Score ||| the end of 5 years | 118.4 ||| 5 years ||| 5 years [SUMMARY]
null
Acute-phase protein concentrations in serum of clinically healthy and diseased European bison (Bison bonasus) - preliminary study.
35012560
This is the first report describing levels of acute phase proteins (APPs) in European bison. Serum concentrations of APPs may be helpful to assess general health status in wildlife and potentially useful in selecting animals for elimination. Since there is a lack of literature data regarding the concentration of APPs in European bison, establishment of reference values is also needed.
BACKGROUND
A total of 87 European bison from Polish populations were divided into two groups: (1) healthy: immobilized for transportation, placing a telemetry collar and routine diagnostic purposes; and (2) selectively culled due to the poor health condition. The serum concentration of haptoglobin, serum amyloid A and α1-acid-glycoprotein were determined using commercial quantitative ELISA assays. Since none of the variables met the normality assumptions, non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA).
METHODS
The concentration of haptoglobin and serum amyloid A was significantly higher in animals culled (euthanised) due to the poor condition in respect to the clinically healthy European bison. The levels of α1-acid-glycoprotein did not show statistical difference between healthy and sick animals.
RESULTS
A correlation between APP concentration and health status was demonstrated; therefore, the determination of selected APPs may be considered in the future as an auxiliary predictive tool for assessing the health condition of European bison.
CONCLUSIONS
[ "Acute-Phase Proteins", "Animals", "Bison", "Haptoglobins", "Serum Amyloid A Protein" ]
8744219
Background
At the beginning of the 20th century, the European bison (Bison bonasus) became extinct in the wild, when the last surviving animals were killed in the Białowieża Primeval Forest after World War I. Fortunately, the biggest wild herbivore in Europe has survived and now, 70 years after the first European bison had been released into the wild again, its world population is estimated at more than 9100 heads [1]. Unfortunately, growing numbers and density of the population exceeding the capacity of the available habitats, increasing anthropopression, and a limited gene pool nowadays increase the risk of health threats such as exposure to infectious and invasive pathogens [2]. Therefore, it is crucial to support European bison population development by proper wildlife management. One of the tools that can be used to manage wildlife species is selective elimination (culling) of individuals whose genotype (and/or lineage) or health impairment makes them undesirable in a herd. Apart from that, diseased individuals may threaten other animals in the population in the case of contagious diseases, which, to make things worse, may also have zoonotic potential and be a challenge for public health [3, 4]. This applies especially to tuberculosis, which is currently a real zoonotic health hazard in wild ruminant populations, including European bison [5]. The question which arises is how to accurately select individuals to be eliminated. A potential solution may be taken from farm animal medicine, in particular cattle, which is relatively closely related to the European bison. It involves determining the serum acute phase protein (APP) concentration, which reflects animal health status under different pathological conditions [6–8]. It could also be used as a predictive tool describing the chance of recovery based on the severity of the disease. Besides, determination of APPs can be applied as a reliable solution to assess general health status, which is crucial in the case of endangered animal species, particularly those with a limited gene pool. The concentrations and types of APPs differ between species [9, 10]. In cattle, the most important APPs are haptoglobin (Hp), serum amyloid A (SAA) and α1-acid-glycoprotein (AGP) [11], whose levels are used as biomarkers of various pathological conditions and as predictive values for mastitis [12], respiratory diseases [13], and lameness [14]. However, no reports investigating the concentration of APPs in European bison are available. Although some studies have been described in other wildlife non-bovid species, the data are very limited [15–17]. The secretion of APPs is associated with the acute phase reaction (APR) [18]. The APR is an innate response of the body to the disturbance of homeostasis, which may be associated with various factors, i.e. infection, damage of tissues, neoplastic hyperplasia, or immunological disorders [19–21]. The goal of the acute phase response is to recover homeostasis and eliminate the causative factor. It comprises numerous hormonal, metabolic and neurological changes which occur within a short period of time after the injury, at the beginning of infection or inflammation [18]. Referring to cattle (as a domesticated species closely related to European bison), Hp, SAA and AGP have been described as the most significant markers of APR and are considered the major APPs used in diagnostics. The level of Hp in cattle increases significantly in the course of APR (from nearly zero to about 2 g/l within 48 h of stimulation).
The major biologic function of Hp is to bind haemoglobin in an equimolar ratio with very high affinity to prevent haemoglobin-mediated renal parenchymal injury and loss of iron following intravascular haemolysis [22]. Serum amyloid A also seems to be a good indicator of the acute phase reaction in cattle. The synthesis of certain members of the SAA protein family is significantly increased during inflammation [23]. SAA proteins can be considered apolipoproteins because they associate with plasma lipoproteins, mainly within the high-density range. The physiological role of SAA in the immune response during inflammation is not well understood, but various effects have been described. These include, e.g., inhibition of lymphocyte proliferation, detoxification of endotoxin, inhibition of platelet (thrombocyte) aggregation, and inhibition of oxidative reactions in neutrophils [23]. Therefore, the aim of our preliminary study was to determine serum concentrations of selected APPs (haptoglobin, serum amyloid A and α1-acid-glycoprotein) in clinically healthy European bison and in animals eliminated under the clinical suspicion of different pathological conditions, followed by post-mortem evaluation [24]. We hypothesised that concentrations of selected APPs were higher in the eliminated European bison (ELIM) compared to clinically healthy animals (HEAL).
null
null
Results
Analytical validation of the assays: For Hp, the mean within-run and between-run CVs were 8.11% and 7.11%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.98, showing that the method measures the protein in a linear manner. The detection limit of the assay was 0.005 mg/ml. For SAA, the mean within-run and between-run CVs were 5.15% and 5.35%, respectively; dilution studies gave a correlation coefficient of 0.99, showing that the method measures the protein in a linear manner, and the detection limit of the assay was 3.49 µg/ml. For AGP, the mean within-run and between-run CVs were 4.08% and 4.37%, respectively; dilution studies gave a correlation coefficient of 0.99, showing that the method measures the protein in a linear manner, and the detection limit of the assay was 15.6 µg/ml.
Acute phase protein concentrations: The median concentration of SAA in ELIM animals was almost 4 times higher than the median level in the clinically healthy group: 70.81 µg/ml in ELIM versus 18.95 µg/ml in HEAL. The interquartile range (IQR) for SAA was 12.38-30.11 µg/ml in HEAL and 50.50-112.46 µg/ml in ELIM (Fig. 1). On the other hand, the median concentration of Hp in the HEAL group was over 2 times lower than in the eliminated animals: 0.176 mg/ml in HEAL versus 0.305 mg/ml in ELIM, with IQRs of 0.1-0.214 mg/ml and 0.270-0.592 mg/ml, respectively (Fig. 2). The concentrations of SAA and Hp were significantly higher in ELIM compared to HEAL (p < 0.01). Similar differences were also observed for AGP levels, but they were not statistically significant (p = 0.3950). The median levels of AGP in HEAL and ELIM were 207.25 and 271.25 µg/ml, respectively, with IQRs of 146.65-259.1 µg/ml in HEAL and 178.55-407.25 µg/ml in ELIM (Fig. 3).
Figs. 1-3 show boxplots of the concentrations of serum amyloid A, haptoglobin, and α1-acid glycoprotein in serum of European bison: the line inside each box is the median, the top and bottom lines of the box are the first and third quartiles, the whiskers are the minimum and maximum concentrations, circles represent outliers (asterisks, far outliers), and * marks statistically significant differences between HEAL and ELIM (p < 0.01).
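For illustration only, the sketch below shows how within-run coefficients of variation, dilution linearity, and the HEAL-versus-ELIM comparison described above could be computed in Python; the replicate readings, dilution series, and group values are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Within-run CV% from replicate ELISA readings of one sample (placeholder values).
replicates = np.array([0.181, 0.176, 0.190, 0.172, 0.184])
cv_percent = replicates.std(ddof=1) / replicates.mean() * 100
print(f"within-run CV = {cv_percent:.2f}%")

# Dilution linearity: expected vs. measured concentrations in a dilution series.
expected = np.array([100.0, 50.0, 25.0, 12.5, 6.25])
measured = np.array([98.5, 51.2, 24.1, 12.9, 6.1])
slope, intercept, r, p, se = stats.linregress(expected, measured)
print(f"dilution linearity: r = {r:.2f}")

# Non-parametric comparison of healthy vs. eliminated animals (Mann-Whitney U),
# as used for the SAA/Hp/AGP comparisons; group values are illustrative only.
heal = np.array([18.9, 12.4, 30.1, 22.5, 15.7, 27.3])
elim = np.array([70.8, 50.5, 112.5, 64.2, 95.0, 81.3])
u_stat, p_value = stats.mannwhitneyu(heal, elim, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```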
Conclusions
To conclude, our preliminary study is the first report describing concentrations of the selected APPs (Hp, SAA and AGP) in European bison. The study further suggests that APP concentrations may be used as a supportive tool for monitoring health status (at the individual and/or population level) and for making decisions about which individuals should be eliminated for health reasons. The most significant limitation of our research was the moderate number of animals involved in the study. Further studies will include larger sample sizes in order to associate APP levels with the most frequent pathologies of European bison from different populations, which vary in pathogen exposure and herd management. Most importantly, the APP up-regulation observed in sick animals supports the view that the selection of animals for culling was diligent and well founded.
[ "Background", "Methods", "Samples", "Determination of APPs", "Statistical analysis", "Analytical validation of the assays", "Acute phase protein concentrations" ]
[ "At the beginning of XX century, European bison (Bison bonasus) extinct from wild, when the last surviving animals were killed in the Białowieża Primeval Forest after World War I. Fortunately, the biggest wild herbivore in Europe have survived and now, 70 years after the first European bison had been released into the wild again, its world population is estimated at more than 9100 heads [1]. Unfortunately, growing numbers and density of the population exceeding capacity of the available habitats, increasing anthropopression and limited gene pool increase nowadays the risk of health threats such as exposure to infectious and invasive pathogens [2]. Therefore, it is crucial to support European bison population development by proper wildlife management. One of the tools that can be used to manage wildlife species is selective elimination (culling) of individuals, which genotype (and/or lineage) or health impairment make them undesirable in a herd. Apart from that, diseased individuals may threaten other animals in the population in case of contagious diseases, which, to make things worse, may also have zoonotic potential and be a challenge for public health [3, 4]. This applies especially to tuberculosis, which is currently a real zoonotic health hazard in wild ruminant populations including European bison [5]. The question which arises is how to accurately select individuals to be eliminated. The potential solution may be taken from farm animals medicine, in particular cattle, which is relatively closely related to European bison. It involves determining serum acute phase proteins (APPs) concentration, which reflects animal health status under different pathological conditions [6–8]. It could also be used as a predictive tool describing the chance of recovery based on the severity of the disease. Besides, determination of APPs can be applied as a reliable solution to assess general health status, which is crucial in case of endangered animal species, particularly with limited genes pool. The concentrations and type of APPs differ within the species [9, 10]. In cattle, the most important APPs are haptoglobin (Hp), serum amyloid A (SAA) and α1-acid-glycoprotein (AGP) [11], which levels are used as biomarkers of various pathological conditions and as predictive values for mastitis [12], respiratory diseases [13], and lameness [14]. However, no reports investigating concentration of APPs in European bison are available. Although, some studies have been described in other wildlife non-bovid species, nevertheless the data is very limited [15–17]. The secretion of APPs is associated with the acute phase reaction (APR) [18]. The APR is an innate response of the body to the disturbance of homeostasis which may be associated with various factors i.e. infection, damage of tissues, neoplastic hyperplasia, immunological disorders [19–21]. The goal of the acute phase response is to recover homeostasis and eliminate the causative factor. It comprises numerous hormonal, metabolic and neurological changes which occur within a short period of time after the injury, at the beginning of infection or inflammation [18]. Referring to cattle (as closely related to European bison domesticated species), the Hp, SAA and AGP have been described as the most significant markers of APR and are considered the major APPs used in the diagnostics. The level of Hp in cattle increases significantly in the course of APR (from nearly zero to about 2 g/l within 48 h from stimulation. 
The major biologic function of Hp is to bind haemoglobin in an equimolar ratio with very high affinity to prevent haemoglobin-mediated renal parenchymal injury and loss of iron following intravascular haemolysis [22]. Serum amyloid A seems also to be a good indicator of acute phase reaction in cattle. The synthesis of certain members of the family SAA proteins is significantly increased during inflammation [23]. SAA proteins can be considered as apolipoproteins because they associate with plasma lipoproteins mainly within the high-density range. Physiological role of SAA in the immune response during inflammation is not well understood, but various effects have been described. These include e.g., inhibition of lymphocyte proliferation, detoxification of endotoxin, inhibition of platelet aggregation, inhibition of thrombocytes aggregation, and inhibition of oxidative reaction in neutrophils [23]. Therefore, the aim of our preliminary study was to determine serum concentration of selected APPs (haptoglobin, serum amyloid A and α1-acid-glycoprotein) in clinically healthy European bison and animals eliminated under the clinical suspicion of different pathological conditions, followed by post-mortem evaluation [24]. We hypothesised that concentrations of selected APPs were higher in the eliminated European bison (ELIM) comparing to clinically healthy animals (HEAL).", " Samples Blood samples were collected from 87 European bison, including 40 samples from clinically healthy individuals (HEAL) and 47 samples from individuals eliminated due to different pathological conditions (ELIM). In HEAL, there were 27 females and 13 males in the average age of 6.17 (range: 1-16 years). While in ELIM there were 30 females and 17 males in the average age of 6.5 (range: 1-20 years). The clinically healthy European bison were pharmacologically immobilized for transportation, placing a telemetry collar, or routine diagnostic purposes according to the previously described protocols [25]. Briefly, the combination of xylazine and etorphine were used to immobilization. The preparations were administered with specialized pneumatic Dan-Inject applicators. Following a successful shot, the animal is immobilized within 15 min and thereafter veterinary and animal husbandry manipulations are possible. In order to awake the European bisons the mixture of atipamezole, naloxone and diprenorphine were used. The selected individuals were culled after the health assessment (ELIM) by the herd supervising veterinarian and under the decision of Ministry of Environment. Briefly, the health status assessment includes integumentary system examination, orifices inspection (check for any discharges), reproductive organs examination, sight organ examination as well as body condition assessment. The animals were sampled in accordance with the appropriate regulations and permits (Polish General Directorate for Environmental Protection: Regulations DOP-OZGIZ.6401.06.7.2012.1 s and DOPOZ. 6401.06.7.2012.1 s1.Warsaw, 2012; Polish General Directorate for Environmental Protection: Regulations DZPWG.6401.06.23.2014.km2.; and Polish Ministry of the Environment, Regulation: DLP-III-0771-5/42,173/14/ZK). The blood samples were taken at capture and collected from the jugular vein into sterile tubes with clot activator for serum separation. The poor quality samples were excluded prior to the study for ensuring the high reliability of results. 
Serum was harvested from the blood samples by centrifugation (3000 × g for 15 min at room temperature) and stored at -80° C for further analysis (not longer than one month).\nBlood samples were collected from 87 European bison, including 40 samples from clinically healthy individuals (HEAL) and 47 samples from individuals eliminated due to different pathological conditions (ELIM). In HEAL, there were 27 females and 13 males in the average age of 6.17 (range: 1-16 years). While in ELIM there were 30 females and 17 males in the average age of 6.5 (range: 1-20 years). The clinically healthy European bison were pharmacologically immobilized for transportation, placing a telemetry collar, or routine diagnostic purposes according to the previously described protocols [25]. Briefly, the combination of xylazine and etorphine were used to immobilization. The preparations were administered with specialized pneumatic Dan-Inject applicators. Following a successful shot, the animal is immobilized within 15 min and thereafter veterinary and animal husbandry manipulations are possible. In order to awake the European bisons the mixture of atipamezole, naloxone and diprenorphine were used. The selected individuals were culled after the health assessment (ELIM) by the herd supervising veterinarian and under the decision of Ministry of Environment. Briefly, the health status assessment includes integumentary system examination, orifices inspection (check for any discharges), reproductive organs examination, sight organ examination as well as body condition assessment. The animals were sampled in accordance with the appropriate regulations and permits (Polish General Directorate for Environmental Protection: Regulations DOP-OZGIZ.6401.06.7.2012.1 s and DOPOZ. 6401.06.7.2012.1 s1.Warsaw, 2012; Polish General Directorate for Environmental Protection: Regulations DZPWG.6401.06.23.2014.km2.; and Polish Ministry of the Environment, Regulation: DLP-III-0771-5/42,173/14/ZK). The blood samples were taken at capture and collected from the jugular vein into sterile tubes with clot activator for serum separation. The poor quality samples were excluded prior to the study for ensuring the high reliability of results. Serum was harvested from the blood samples by centrifugation (3000 × g for 15 min at room temperature) and stored at -80° C for further analysis (not longer than one month).\n Determination of APPs The concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein in serum were measured using commercial ELISAs and colorimetric assays according to manufacturer’s guidelines, which included Haptoglobin Kit Phase Range and Phase Serum Amyloid A assays (Tridelta Development Ltd County Kildare, Ireland) and Cow α1-acid glycoprotein (AGP) ELISA (Life Diagnostic, West Chester, USA). Serum samples were tested in duplicate. Prior to AGP and SAA analyses samples were diluted as follows: 1:10,000 for AGP and 1:500 for SAA. The concentrations of the APP was calculated based on standard curve for each protein using the FindGraph computer software (UNIPHIZ Lab, Vancouver, Canada). All assays were preliminarily validated in our laboratory before being applied in European bison samples. For this purpose, precision, accuracy and limit of detection were calculated. Within-run coefficients of variation (CVs) were calculated after the analysis of 2 serum samples with low and high proteins concentrations eight times in a single run. 
Between-run CVs were obtained by measuring the same samples in eight separate runs carried out on three different days. All samples used were frozen in aliquots and only the vials needed for each run were used. Accuracy was investigated by linearity under dilution; in brief, two European bison serum samples were diluted (1:2; 1:4; 1:8; 1:16, 1:32) with sample diluents. The limit of detection was calculated as the lowest concentration of APP that could be distinguished from a zero sample, and was taken as the mean + 3 standard deviations (SD) of 12 replicates of blank sample, tested in one analytical run.\nThe concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein in serum were measured using commercial ELISAs and colorimetric assays according to manufacturer’s guidelines, which included Haptoglobin Kit Phase Range and Phase Serum Amyloid A assays (Tridelta Development Ltd County Kildare, Ireland) and Cow α1-acid glycoprotein (AGP) ELISA (Life Diagnostic, West Chester, USA). Serum samples were tested in duplicate. Prior to AGP and SAA analyses samples were diluted as follows: 1:10,000 for AGP and 1:500 for SAA. The concentrations of the APP was calculated based on standard curve for each protein using the FindGraph computer software (UNIPHIZ Lab, Vancouver, Canada). All assays were preliminarily validated in our laboratory before being applied in European bison samples. For this purpose, precision, accuracy and limit of detection were calculated. Within-run coefficients of variation (CVs) were calculated after the analysis of 2 serum samples with low and high proteins concentrations eight times in a single run. Between-run CVs were obtained by measuring the same samples in eight separate runs carried out on three different days. All samples used were frozen in aliquots and only the vials needed for each run were used. Accuracy was investigated by linearity under dilution; in brief, two European bison serum samples were diluted (1:2; 1:4; 1:8; 1:16, 1:32) with sample diluents. The limit of detection was calculated as the lowest concentration of APP that could be distinguished from a zero sample, and was taken as the mean + 3 standard deviations (SD) of 12 replicates of blank sample, tested in one analytical run.\n Statistical analysis In order to select proper statistical tools to evaluate the differences between healthy (HEAL) and European bison eliminated due to poor health conditions (sick animals) (ELIM), the normality of data was verified by using Shapiro-Wilk test. Since none of the variables met the normality assumptions, non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA).\nIn order to select proper statistical tools to evaluate the differences between healthy (HEAL) and European bison eliminated due to poor health conditions (sick animals) (ELIM), the normality of data was verified by using Shapiro-Wilk test. Since none of the variables met the normality assumptions, non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA).", "Blood samples were collected from 87 European bison, including 40 samples from clinically healthy individuals (HEAL) and 47 samples from individuals eliminated due to different pathological conditions (ELIM). 
In HEAL, there were 27 females and 13 males in the average age of 6.17 (range: 1-16 years). While in ELIM there were 30 females and 17 males in the average age of 6.5 (range: 1-20 years). The clinically healthy European bison were pharmacologically immobilized for transportation, placing a telemetry collar, or routine diagnostic purposes according to the previously described protocols [25]. Briefly, the combination of xylazine and etorphine were used to immobilization. The preparations were administered with specialized pneumatic Dan-Inject applicators. Following a successful shot, the animal is immobilized within 15 min and thereafter veterinary and animal husbandry manipulations are possible. In order to awake the European bisons the mixture of atipamezole, naloxone and diprenorphine were used. The selected individuals were culled after the health assessment (ELIM) by the herd supervising veterinarian and under the decision of Ministry of Environment. Briefly, the health status assessment includes integumentary system examination, orifices inspection (check for any discharges), reproductive organs examination, sight organ examination as well as body condition assessment. The animals were sampled in accordance with the appropriate regulations and permits (Polish General Directorate for Environmental Protection: Regulations DOP-OZGIZ.6401.06.7.2012.1 s and DOPOZ. 6401.06.7.2012.1 s1.Warsaw, 2012; Polish General Directorate for Environmental Protection: Regulations DZPWG.6401.06.23.2014.km2.; and Polish Ministry of the Environment, Regulation: DLP-III-0771-5/42,173/14/ZK). The blood samples were taken at capture and collected from the jugular vein into sterile tubes with clot activator for serum separation. The poor quality samples were excluded prior to the study for ensuring the high reliability of results. Serum was harvested from the blood samples by centrifugation (3000 × g for 15 min at room temperature) and stored at -80° C for further analysis (not longer than one month).", "The concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein in serum were measured using commercial ELISAs and colorimetric assays according to manufacturer’s guidelines, which included Haptoglobin Kit Phase Range and Phase Serum Amyloid A assays (Tridelta Development Ltd County Kildare, Ireland) and Cow α1-acid glycoprotein (AGP) ELISA (Life Diagnostic, West Chester, USA). Serum samples were tested in duplicate. Prior to AGP and SAA analyses samples were diluted as follows: 1:10,000 for AGP and 1:500 for SAA. The concentrations of the APP was calculated based on standard curve for each protein using the FindGraph computer software (UNIPHIZ Lab, Vancouver, Canada). All assays were preliminarily validated in our laboratory before being applied in European bison samples. For this purpose, precision, accuracy and limit of detection were calculated. Within-run coefficients of variation (CVs) were calculated after the analysis of 2 serum samples with low and high proteins concentrations eight times in a single run. Between-run CVs were obtained by measuring the same samples in eight separate runs carried out on three different days. All samples used were frozen in aliquots and only the vials needed for each run were used. Accuracy was investigated by linearity under dilution; in brief, two European bison serum samples were diluted (1:2; 1:4; 1:8; 1:16, 1:32) with sample diluents. 
The limit of detection was calculated as the lowest concentration of APP that could be distinguished from a zero sample, and was taken as the mean + 3 standard deviations (SD) of 12 replicates of blank sample, tested in one analytical run.", "In order to select proper statistical tools to evaluate the differences between healthy (HEAL) and European bison eliminated due to poor health conditions (sick animals) (ELIM), the normality of data was verified by using Shapiro-Wilk test. Since none of the variables met the normality assumptions, non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA).", "For Hp mean within-run and between-run CVs were 8.11% and 7.11%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.98 showing that the method measures the protein in a linear manner. The detection limit of the assay was 0.005 mg/ml.\nFor SAA mean within-run and between-run CVs were 5.15% and 5.35%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 3.49 µg/ml.\nFor AGP mean within-run and between-run CVs were 4.08% and 4.37%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 15.6 µg/ml.", "The median concentration of SAA in ELIM animals was almost 4 times higher than the median level in clinically healthy group. In ELIM, the median concentration was 70.81 µg/ml while in HEAL, the median concentration of SAA was 18.95 µg/ml. The interquartile range (IQR) equals 12.38-30.11 µg/ml for SAA concentration in HEAL, while in ELIM 50.50-112.46 µg/ml (Fig. 1). On other hand, the median concentration of Hp in HEAL group was over 2 times lower than in the eliminated animals with IQR equal to 0.1-0.214 mg/ml in HEAL and 0.270-0.592 mg/ml in ELIM. The median concentration of Hp in HEAL was 0.176 mg/ml, while in ELIM the median concentration of Hp was equal to 0.305 mg/ml (Fig. 2). The concentration of SAA and Hp was significantly higher in ELIM comparing to HEAL (p <0.01). Similar differences were also observed for AGP levels, however they were not statistically significant (p = 0.3950). The median levels of AGP in HEAL and ELIM were equal to 207.25 and 271.25 µg/ml, respectively. The IQR for AGP was 146.65-259.1 µg/ml in HEAL and 178.55-407.25 in ELIM µg/ml (Fig. 3).Fig. 1 A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 2 A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 3 A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. 
The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterisks represent far outliers" ]
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Samples", "Determination of APPs", "Statistical analysis", "Results", "Analytical validation of the assays", "Acute phase protein concentrations", "Discussion", "Conclusions" ]
[ "At the beginning of XX century, European bison (Bison bonasus) extinct from wild, when the last surviving animals were killed in the Białowieża Primeval Forest after World War I. Fortunately, the biggest wild herbivore in Europe have survived and now, 70 years after the first European bison had been released into the wild again, its world population is estimated at more than 9100 heads [1]. Unfortunately, growing numbers and density of the population exceeding capacity of the available habitats, increasing anthropopression and limited gene pool increase nowadays the risk of health threats such as exposure to infectious and invasive pathogens [2]. Therefore, it is crucial to support European bison population development by proper wildlife management. One of the tools that can be used to manage wildlife species is selective elimination (culling) of individuals, which genotype (and/or lineage) or health impairment make them undesirable in a herd. Apart from that, diseased individuals may threaten other animals in the population in case of contagious diseases, which, to make things worse, may also have zoonotic potential and be a challenge for public health [3, 4]. This applies especially to tuberculosis, which is currently a real zoonotic health hazard in wild ruminant populations including European bison [5]. The question which arises is how to accurately select individuals to be eliminated. The potential solution may be taken from farm animals medicine, in particular cattle, which is relatively closely related to European bison. It involves determining serum acute phase proteins (APPs) concentration, which reflects animal health status under different pathological conditions [6–8]. It could also be used as a predictive tool describing the chance of recovery based on the severity of the disease. Besides, determination of APPs can be applied as a reliable solution to assess general health status, which is crucial in case of endangered animal species, particularly with limited genes pool. The concentrations and type of APPs differ within the species [9, 10]. In cattle, the most important APPs are haptoglobin (Hp), serum amyloid A (SAA) and α1-acid-glycoprotein (AGP) [11], which levels are used as biomarkers of various pathological conditions and as predictive values for mastitis [12], respiratory diseases [13], and lameness [14]. However, no reports investigating concentration of APPs in European bison are available. Although, some studies have been described in other wildlife non-bovid species, nevertheless the data is very limited [15–17]. The secretion of APPs is associated with the acute phase reaction (APR) [18]. The APR is an innate response of the body to the disturbance of homeostasis which may be associated with various factors i.e. infection, damage of tissues, neoplastic hyperplasia, immunological disorders [19–21]. The goal of the acute phase response is to recover homeostasis and eliminate the causative factor. It comprises numerous hormonal, metabolic and neurological changes which occur within a short period of time after the injury, at the beginning of infection or inflammation [18]. Referring to cattle (as closely related to European bison domesticated species), the Hp, SAA and AGP have been described as the most significant markers of APR and are considered the major APPs used in the diagnostics. The level of Hp in cattle increases significantly in the course of APR (from nearly zero to about 2 g/l within 48 h from stimulation. 
The major biologic function of Hp is to bind haemoglobin in an equimolar ratio with very high affinity to prevent haemoglobin-mediated renal parenchymal injury and loss of iron following intravascular haemolysis [22]. Serum amyloid A seems also to be a good indicator of acute phase reaction in cattle. The synthesis of certain members of the family SAA proteins is significantly increased during inflammation [23]. SAA proteins can be considered as apolipoproteins because they associate with plasma lipoproteins mainly within the high-density range. Physiological role of SAA in the immune response during inflammation is not well understood, but various effects have been described. These include e.g., inhibition of lymphocyte proliferation, detoxification of endotoxin, inhibition of platelet aggregation, inhibition of thrombocytes aggregation, and inhibition of oxidative reaction in neutrophils [23]. Therefore, the aim of our preliminary study was to determine serum concentration of selected APPs (haptoglobin, serum amyloid A and α1-acid-glycoprotein) in clinically healthy European bison and animals eliminated under the clinical suspicion of different pathological conditions, followed by post-mortem evaluation [24]. We hypothesised that concentrations of selected APPs were higher in the eliminated European bison (ELIM) comparing to clinically healthy animals (HEAL).", " Samples Blood samples were collected from 87 European bison, including 40 samples from clinically healthy individuals (HEAL) and 47 samples from individuals eliminated due to different pathological conditions (ELIM). In HEAL, there were 27 females and 13 males in the average age of 6.17 (range: 1-16 years). While in ELIM there were 30 females and 17 males in the average age of 6.5 (range: 1-20 years). The clinically healthy European bison were pharmacologically immobilized for transportation, placing a telemetry collar, or routine diagnostic purposes according to the previously described protocols [25]. Briefly, the combination of xylazine and etorphine were used to immobilization. The preparations were administered with specialized pneumatic Dan-Inject applicators. Following a successful shot, the animal is immobilized within 15 min and thereafter veterinary and animal husbandry manipulations are possible. In order to awake the European bisons the mixture of atipamezole, naloxone and diprenorphine were used. The selected individuals were culled after the health assessment (ELIM) by the herd supervising veterinarian and under the decision of Ministry of Environment. Briefly, the health status assessment includes integumentary system examination, orifices inspection (check for any discharges), reproductive organs examination, sight organ examination as well as body condition assessment. The animals were sampled in accordance with the appropriate regulations and permits (Polish General Directorate for Environmental Protection: Regulations DOP-OZGIZ.6401.06.7.2012.1 s and DOPOZ. 6401.06.7.2012.1 s1.Warsaw, 2012; Polish General Directorate for Environmental Protection: Regulations DZPWG.6401.06.23.2014.km2.; and Polish Ministry of the Environment, Regulation: DLP-III-0771-5/42,173/14/ZK). The blood samples were taken at capture and collected from the jugular vein into sterile tubes with clot activator for serum separation. The poor quality samples were excluded prior to the study for ensuring the high reliability of results. 
Serum was harvested from the blood samples by centrifugation (3000 × g for 15 min at room temperature) and stored at -80° C for further analysis (not longer than one month).\nBlood samples were collected from 87 European bison, including 40 samples from clinically healthy individuals (HEAL) and 47 samples from individuals eliminated due to different pathological conditions (ELIM). In HEAL, there were 27 females and 13 males in the average age of 6.17 (range: 1-16 years). While in ELIM there were 30 females and 17 males in the average age of 6.5 (range: 1-20 years). The clinically healthy European bison were pharmacologically immobilized for transportation, placing a telemetry collar, or routine diagnostic purposes according to the previously described protocols [25]. Briefly, the combination of xylazine and etorphine were used to immobilization. The preparations were administered with specialized pneumatic Dan-Inject applicators. Following a successful shot, the animal is immobilized within 15 min and thereafter veterinary and animal husbandry manipulations are possible. In order to awake the European bisons the mixture of atipamezole, naloxone and diprenorphine were used. The selected individuals were culled after the health assessment (ELIM) by the herd supervising veterinarian and under the decision of Ministry of Environment. Briefly, the health status assessment includes integumentary system examination, orifices inspection (check for any discharges), reproductive organs examination, sight organ examination as well as body condition assessment. The animals were sampled in accordance with the appropriate regulations and permits (Polish General Directorate for Environmental Protection: Regulations DOP-OZGIZ.6401.06.7.2012.1 s and DOPOZ. 6401.06.7.2012.1 s1.Warsaw, 2012; Polish General Directorate for Environmental Protection: Regulations DZPWG.6401.06.23.2014.km2.; and Polish Ministry of the Environment, Regulation: DLP-III-0771-5/42,173/14/ZK). The blood samples were taken at capture and collected from the jugular vein into sterile tubes with clot activator for serum separation. The poor quality samples were excluded prior to the study for ensuring the high reliability of results. Serum was harvested from the blood samples by centrifugation (3000 × g for 15 min at room temperature) and stored at -80° C for further analysis (not longer than one month).\n Determination of APPs The concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein in serum were measured using commercial ELISAs and colorimetric assays according to manufacturer’s guidelines, which included Haptoglobin Kit Phase Range and Phase Serum Amyloid A assays (Tridelta Development Ltd County Kildare, Ireland) and Cow α1-acid glycoprotein (AGP) ELISA (Life Diagnostic, West Chester, USA). Serum samples were tested in duplicate. Prior to AGP and SAA analyses samples were diluted as follows: 1:10,000 for AGP and 1:500 for SAA. The concentrations of the APP was calculated based on standard curve for each protein using the FindGraph computer software (UNIPHIZ Lab, Vancouver, Canada). All assays were preliminarily validated in our laboratory before being applied in European bison samples. For this purpose, precision, accuracy and limit of detection were calculated. Within-run coefficients of variation (CVs) were calculated after the analysis of 2 serum samples with low and high proteins concentrations eight times in a single run. 
Between-run CVs were obtained by measuring the same samples in eight separate runs carried out on three different days. All samples used were frozen in aliquots and only the vials needed for each run were used. Accuracy was investigated by linearity under dilution; in brief, two European bison serum samples were diluted (1:2; 1:4; 1:8; 1:16, 1:32) with sample diluents. The limit of detection was calculated as the lowest concentration of APP that could be distinguished from a zero sample, and was taken as the mean + 3 standard deviations (SD) of 12 replicates of blank sample, tested in one analytical run.\nThe concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein in serum were measured using commercial ELISAs and colorimetric assays according to manufacturer’s guidelines, which included Haptoglobin Kit Phase Range and Phase Serum Amyloid A assays (Tridelta Development Ltd County Kildare, Ireland) and Cow α1-acid glycoprotein (AGP) ELISA (Life Diagnostic, West Chester, USA). Serum samples were tested in duplicate. Prior to AGP and SAA analyses samples were diluted as follows: 1:10,000 for AGP and 1:500 for SAA. The concentrations of the APP was calculated based on standard curve for each protein using the FindGraph computer software (UNIPHIZ Lab, Vancouver, Canada). All assays were preliminarily validated in our laboratory before being applied in European bison samples. For this purpose, precision, accuracy and limit of detection were calculated. Within-run coefficients of variation (CVs) were calculated after the analysis of 2 serum samples with low and high proteins concentrations eight times in a single run. Between-run CVs were obtained by measuring the same samples in eight separate runs carried out on three different days. All samples used were frozen in aliquots and only the vials needed for each run were used. Accuracy was investigated by linearity under dilution; in brief, two European bison serum samples were diluted (1:2; 1:4; 1:8; 1:16, 1:32) with sample diluents. The limit of detection was calculated as the lowest concentration of APP that could be distinguished from a zero sample, and was taken as the mean + 3 standard deviations (SD) of 12 replicates of blank sample, tested in one analytical run.\n Statistical analysis In order to select proper statistical tools to evaluate the differences between healthy (HEAL) and European bison eliminated due to poor health conditions (sick animals) (ELIM), the normality of data was verified by using Shapiro-Wilk test. Since none of the variables met the normality assumptions, non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA).\nIn order to select proper statistical tools to evaluate the differences between healthy (HEAL) and European bison eliminated due to poor health conditions (sick animals) (ELIM), the normality of data was verified by using Shapiro-Wilk test. Since none of the variables met the normality assumptions, non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA).", "Blood samples were collected from 87 European bison, including 40 samples from clinically healthy individuals (HEAL) and 47 samples from individuals eliminated due to different pathological conditions (ELIM). 
In HEAL, there were 27 females and 13 males in the average age of 6.17 (range: 1-16 years). While in ELIM there were 30 females and 17 males in the average age of 6.5 (range: 1-20 years). The clinically healthy European bison were pharmacologically immobilized for transportation, placing a telemetry collar, or routine diagnostic purposes according to the previously described protocols [25]. Briefly, the combination of xylazine and etorphine were used to immobilization. The preparations were administered with specialized pneumatic Dan-Inject applicators. Following a successful shot, the animal is immobilized within 15 min and thereafter veterinary and animal husbandry manipulations are possible. In order to awake the European bisons the mixture of atipamezole, naloxone and diprenorphine were used. The selected individuals were culled after the health assessment (ELIM) by the herd supervising veterinarian and under the decision of Ministry of Environment. Briefly, the health status assessment includes integumentary system examination, orifices inspection (check for any discharges), reproductive organs examination, sight organ examination as well as body condition assessment. The animals were sampled in accordance with the appropriate regulations and permits (Polish General Directorate for Environmental Protection: Regulations DOP-OZGIZ.6401.06.7.2012.1 s and DOPOZ. 6401.06.7.2012.1 s1.Warsaw, 2012; Polish General Directorate for Environmental Protection: Regulations DZPWG.6401.06.23.2014.km2.; and Polish Ministry of the Environment, Regulation: DLP-III-0771-5/42,173/14/ZK). The blood samples were taken at capture and collected from the jugular vein into sterile tubes with clot activator for serum separation. The poor quality samples were excluded prior to the study for ensuring the high reliability of results. Serum was harvested from the blood samples by centrifugation (3000 × g for 15 min at room temperature) and stored at -80° C for further analysis (not longer than one month).", "The concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein in serum were measured using commercial ELISAs and colorimetric assays according to manufacturer’s guidelines, which included Haptoglobin Kit Phase Range and Phase Serum Amyloid A assays (Tridelta Development Ltd County Kildare, Ireland) and Cow α1-acid glycoprotein (AGP) ELISA (Life Diagnostic, West Chester, USA). Serum samples were tested in duplicate. Prior to AGP and SAA analyses samples were diluted as follows: 1:10,000 for AGP and 1:500 for SAA. The concentrations of the APP was calculated based on standard curve for each protein using the FindGraph computer software (UNIPHIZ Lab, Vancouver, Canada). All assays were preliminarily validated in our laboratory before being applied in European bison samples. For this purpose, precision, accuracy and limit of detection were calculated. Within-run coefficients of variation (CVs) were calculated after the analysis of 2 serum samples with low and high proteins concentrations eight times in a single run. Between-run CVs were obtained by measuring the same samples in eight separate runs carried out on three different days. All samples used were frozen in aliquots and only the vials needed for each run were used. Accuracy was investigated by linearity under dilution; in brief, two European bison serum samples were diluted (1:2; 1:4; 1:8; 1:16, 1:32) with sample diluents. 
The limit of detection was calculated as the lowest concentration of APP that could be distinguished from a zero sample, and was taken as the mean + 3 standard deviations (SD) of 12 replicates of blank sample, tested in one analytical run.", "In order to select proper statistical tools to evaluate the differences between healthy (HEAL) and European bison eliminated due to poor health conditions (sick animals) (ELIM), the normality of data was verified by using Shapiro-Wilk test. Since none of the variables met the normality assumptions, non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA).", " Analytical validation of the assays For Hp mean within-run and between-run CVs were 8.11% and 7.11%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.98 showing that the method measures the protein in a linear manner. The detection limit of the assay was 0.005 mg/ml.\nFor SAA mean within-run and between-run CVs were 5.15% and 5.35%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 3.49 µg/ml.\nFor AGP mean within-run and between-run CVs were 4.08% and 4.37%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 15.6 µg/ml.\nFor Hp mean within-run and between-run CVs were 8.11% and 7.11%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.98 showing that the method measures the protein in a linear manner. The detection limit of the assay was 0.005 mg/ml.\nFor SAA mean within-run and between-run CVs were 5.15% and 5.35%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 3.49 µg/ml.\nFor AGP mean within-run and between-run CVs were 4.08% and 4.37%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 15.6 µg/ml.\n Acute phase protein concentrations The median concentration of SAA in ELIM animals was almost 4 times higher than the median level in clinically healthy group. In ELIM, the median concentration was 70.81 µg/ml while in HEAL, the median concentration of SAA was 18.95 µg/ml. The interquartile range (IQR) equals 12.38-30.11 µg/ml for SAA concentration in HEAL, while in ELIM 50.50-112.46 µg/ml (Fig. 1). On other hand, the median concentration of Hp in HEAL group was over 2 times lower than in the eliminated animals with IQR equal to 0.1-0.214 mg/ml in HEAL and 0.270-0.592 mg/ml in ELIM. The median concentration of Hp in HEAL was 0.176 mg/ml, while in ELIM the median concentration of Hp was equal to 0.305 mg/ml (Fig. 2). The concentration of SAA and Hp was significantly higher in ELIM comparing to HEAL (p <0.01). Similar differences were also observed for AGP levels, however they were not statistically significant (p = 0.3950). The median levels of AGP in HEAL and ELIM were equal to 207.25 and 271.25 µg/ml, respectively. 
The IQR for AGP was 146.65-259.1 µg/ml in HEAL and 178.55-407.25 in ELIM µg/ml (Fig. 3).Fig. 1 A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 2 A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 3 A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers\n A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)\n A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)\n A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers\nThe median concentration of SAA in ELIM animals was almost 4 times higher than the median level in clinically healthy group. In ELIM, the median concentration was 70.81 µg/ml while in HEAL, the median concentration of SAA was 18.95 µg/ml. The interquartile range (IQR) equals 12.38-30.11 µg/ml for SAA concentration in HEAL, while in ELIM 50.50-112.46 µg/ml (Fig. 1). On other hand, the median concentration of Hp in HEAL group was over 2 times lower than in the eliminated animals with IQR equal to 0.1-0.214 mg/ml in HEAL and 0.270-0.592 mg/ml in ELIM. The median concentration of Hp in HEAL was 0.176 mg/ml, while in ELIM the median concentration of Hp was equal to 0.305 mg/ml (Fig. 2). The concentration of SAA and Hp was significantly higher in ELIM comparing to HEAL (p <0.01). Similar differences were also observed for AGP levels, however they were not statistically significant (p = 0.3950). The median levels of AGP in HEAL and ELIM were equal to 207.25 and 271.25 µg/ml, respectively. The IQR for AGP was 146.65-259.1 µg/ml in HEAL and 178.55-407.25 in ELIM µg/ml (Fig. 3).Fig. 1 A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. 
The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 2 A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 3 A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers\n A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)\n A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)\n A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers", "For Hp mean within-run and between-run CVs were 8.11% and 7.11%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.98 showing that the method measures the protein in a linear manner. The detection limit of the assay was 0.005 mg/ml.\nFor SAA mean within-run and between-run CVs were 5.15% and 5.35%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 3.49 µg/ml.\nFor AGP mean within-run and between-run CVs were 4.08% and 4.37%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 15.6 µg/ml.", "The median concentration of SAA in ELIM animals was almost 4 times higher than the median level in clinically healthy group. In ELIM, the median concentration was 70.81 µg/ml while in HEAL, the median concentration of SAA was 18.95 µg/ml. The interquartile range (IQR) equals 12.38-30.11 µg/ml for SAA concentration in HEAL, while in ELIM 50.50-112.46 µg/ml (Fig. 1). On other hand, the median concentration of Hp in HEAL group was over 2 times lower than in the eliminated animals with IQR equal to 0.1-0.214 mg/ml in HEAL and 0.270-0.592 mg/ml in ELIM. The median concentration of Hp in HEAL was 0.176 mg/ml, while in ELIM the median concentration of Hp was equal to 0.305 mg/ml (Fig. 2). 
The concentration of SAA and Hp was significantly higher in ELIM comparing to HEAL (p <0.01). Similar differences were also observed for AGP levels, however they were not statistically significant (p = 0.3950). The median levels of AGP in HEAL and ELIM were equal to 207.25 and 271.25 µg/ml, respectively. The IQR for AGP was 146.65-259.1 µg/ml in HEAL and 178.55-407.25 in ELIM µg/ml (Fig. 3).Fig. 1 A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 2 A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 3 A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers\n A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)\n A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)\n A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers", "In 2020, International Union for Conservation of Nature (IUCN) has changed European bison status from “Vulnerable” to “Near Threatened”, which proved that proper wildlife management is effective [26]. However, taking into account that modern European bison population was recreated from several individuals, continuous actions should be carried out to protect the species of the largest land European mammal. On-going health surveillance of European bison should be an important element of European bison protection strategy [2]. Our study suggest that determination of APPs might be a useful tool to assess health status of the individual or at the population level. In European bison, in the other study the most prevalent pathologies observed postmortally included: pneumonia, emphysema, nephritis, body traumas, posthitis/balanoposthitis in males and infestations of Fasciola hepatica and Dictyocaulus viviparus [27]. 
However, to our best knowledge there are no studies investigating APPs in European bison and their relation with different pathological conditions.\nRegarding European bison`s immunology, there is only a report describing changes of immunoglobulins within the age [27]. Possibly, further studies may describe dynamics of APPs in different European bison disease. In the current preliminary study, we reported higher concentration of two (Hp, SAA) out of three investigated APPs in the eliminated due to poor health condition European bison. However, we cannot reject AGP as a potential marker for future studies using larger number of animals, since its concentrations were generally higher in animals from ELIM group, even though not statistically proven. The knowledge on usage of APPs as markers of physiological and pathological processes in wildlife species is lacking, even though they appear as desirable candidates for monitoring wildlife health and disease burden in the changing environment. Similar in the assumptions, the study of Glidden et al. [28] have demonstrated that Hp levels may be used to detect infections non-specifically and potential as a surveillance marker in African buffalo. Nevertheless, broaden experience may be drawn from cattle medicine, especially considering genetic relatedness among bovids. For example, Bagga et al. [14] have found APPs useful in diagnostics of lameness in cows, reporting SAA and Hp concentrations approximately 3 and 20, respectively times higher in lame cows comparing to non-lame cows. Similarly in our study the median concentration of SAA was almost 4 times higher in ELIM than in HEAL. The SAA levels obtained by Bagga in lame and non-lame cows (22.19 µg/ml and 8.89 µg/ml, respectively) are numerically lower that median SAA concentration in ELIM and HEAL (70.81 µg/ml and 18.95 µg/ml, respectively). Similarly the Hp concentrations in lame and non-lame cows (0.217 mg/ml and 0.012 mg/ml, respectively) are numerically lower than Hp concentration in ELIM and HEAL (0.305 mg/ml and 0.176 mg/ml, respectively. Dalanezi et al. [12] described increase in milk APPs in course of mastitis caused by different pathogens and suggested pathogen-specific APPs profiles. On the other hand, Moisa et al. [13] state that Hp concentration can be useful biomarker of respiratory diseases in dairy calves. The researcher reports that 0.195 mg/ml is the cut-off value for Hp as biomarker for bovine respiratory diseases. Comparing to our study, the European bisons from HEAL were below this value (0.176 mg/ml), while the animals from ELIM were above the cut-off point (0.305 mg/ml). Yet, Kęsik-Maliszewska et al. [29] have not proved differences in serum APPs excretion in experimentally infected with Schmallenberg virus calves, suggesting that not all infections induce measurable response. A team of Ansari-Lari et al. [30] analysed changes of Hp, Fb, SAA and albumin (Ab) levels in the course of post-traumatic reticulitis and peritonitis in cattle. Their study showed that the analysed proteins respond in a similar way and the particularly significant increase of their level was observed in acute diffuse peritonitis as compared to other forms of disease. Reduction of Ab level was observed in acute local peritonitis whereas serum concentration of Ab increased in a diffuse form of this condition. Moreover, APPs may be a useful indicators of the cattle welfare and stress [31, 32]. In the study by Saco et al. 
[31], serum levels of SAA and Hp were measured in cattle kept under various environmental conditions. They showed higher concentrations of SAA in animals exposed to stress, whereas Hp values remained at a similar level regardless of husbandry. Compared with our results, the SAA concentrations in both ELIM and HEAL (70.81 µg/ml and 18.95 µg/ml, respectively) were markedly higher than in cows kept under harsh conditions (3.46 µg/ml and 4.50 µg/ml). However, the levels of APPs may also vary under physiological conditions such as pregnancy and lactation [33, 34].", "To conclude, our preliminary study is the first report describing concentrations of the selected APPs (Hp, SAA and AGP) in European bison. The study suggests that APP concentrations may be used as a supportive tool for monitoring health status (at the individual and/or population level) and when making decisions about individuals to be eliminated for health reasons. The most significant limitation of our research was the moderate number of animals involved in the study. Further studies will include larger sample sizes in order to associate APP levels with the most frequent pathologies of European bison from different populations that vary in pathogen exposure and herd management. Most importantly, the study indicates that the selection of animals for culling was diligent and thoughtful, as supported by APP up-regulation in the sick animals." ]
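As a rough illustration of the HEAL-versus-ELIM comparison reported above (a Shapiro-Wilk normality check followed by a non-parametric Mann-Whitney U test, as described in the Methods), a minimal Python sketch is given below. The concentration vectors are synthetic placeholders, not the study data; the group sizes merely mirror the reported 40/47 split.

```python
# Minimal sketch of the HEAL vs. ELIM group comparison; all values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
heal_saa = rng.lognormal(mean=np.log(19), sigma=0.5, size=40)  # hypothetical HEAL SAA, µg/ml
elim_saa = rng.lognormal(mean=np.log(70), sigma=0.6, size=47)  # hypothetical ELIM SAA, µg/ml

# Test normality in each group; fall back to a non-parametric test if either fails.
normal = all(stats.shapiro(x).pvalue > 0.05 for x in (heal_saa, elim_saa))

if normal:
    stat, p = stats.ttest_ind(heal_saa, elim_saa, equal_var=False)
    test_name = "Welch t-test"
else:
    stat, p = stats.mannwhitneyu(heal_saa, elim_saa, alternative="two-sided")
    test_name = "Mann-Whitney U"

print(f"{test_name}: statistic = {stat:.2f}, p = {p:.4f}")
print(f"median HEAL = {np.median(heal_saa):.1f} µg/ml, median ELIM = {np.median(elim_saa):.1f} µg/ml")
```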
[ null, null, null, null, null, "results", null, null, "discussion", "conclusion" ]
[ "European bison", "Acute phase proteins", "Wildlife management", "Hp", "SAA", "AGP" ]
Background: At the beginning of XX century, European bison (Bison bonasus) extinct from wild, when the last surviving animals were killed in the Białowieża Primeval Forest after World War I. Fortunately, the biggest wild herbivore in Europe have survived and now, 70 years after the first European bison had been released into the wild again, its world population is estimated at more than 9100 heads [1]. Unfortunately, growing numbers and density of the population exceeding capacity of the available habitats, increasing anthropopression and limited gene pool increase nowadays the risk of health threats such as exposure to infectious and invasive pathogens [2]. Therefore, it is crucial to support European bison population development by proper wildlife management. One of the tools that can be used to manage wildlife species is selective elimination (culling) of individuals, which genotype (and/or lineage) or health impairment make them undesirable in a herd. Apart from that, diseased individuals may threaten other animals in the population in case of contagious diseases, which, to make things worse, may also have zoonotic potential and be a challenge for public health [3, 4]. This applies especially to tuberculosis, which is currently a real zoonotic health hazard in wild ruminant populations including European bison [5]. The question which arises is how to accurately select individuals to be eliminated. The potential solution may be taken from farm animals medicine, in particular cattle, which is relatively closely related to European bison. It involves determining serum acute phase proteins (APPs) concentration, which reflects animal health status under different pathological conditions [6–8]. It could also be used as a predictive tool describing the chance of recovery based on the severity of the disease. Besides, determination of APPs can be applied as a reliable solution to assess general health status, which is crucial in case of endangered animal species, particularly with limited genes pool. The concentrations and type of APPs differ within the species [9, 10]. In cattle, the most important APPs are haptoglobin (Hp), serum amyloid A (SAA) and α1-acid-glycoprotein (AGP) [11], which levels are used as biomarkers of various pathological conditions and as predictive values for mastitis [12], respiratory diseases [13], and lameness [14]. However, no reports investigating concentration of APPs in European bison are available. Although, some studies have been described in other wildlife non-bovid species, nevertheless the data is very limited [15–17]. The secretion of APPs is associated with the acute phase reaction (APR) [18]. The APR is an innate response of the body to the disturbance of homeostasis which may be associated with various factors i.e. infection, damage of tissues, neoplastic hyperplasia, immunological disorders [19–21]. The goal of the acute phase response is to recover homeostasis and eliminate the causative factor. It comprises numerous hormonal, metabolic and neurological changes which occur within a short period of time after the injury, at the beginning of infection or inflammation [18]. Referring to cattle (as closely related to European bison domesticated species), the Hp, SAA and AGP have been described as the most significant markers of APR and are considered the major APPs used in the diagnostics. The level of Hp in cattle increases significantly in the course of APR (from nearly zero to about 2 g/l within 48 h from stimulation. 
The major biologic function of Hp is to bind haemoglobin in an equimolar ratio with very high affinity to prevent haemoglobin-mediated renal parenchymal injury and loss of iron following intravascular haemolysis [22]. Serum amyloid A seems also to be a good indicator of acute phase reaction in cattle. The synthesis of certain members of the family SAA proteins is significantly increased during inflammation [23]. SAA proteins can be considered as apolipoproteins because they associate with plasma lipoproteins mainly within the high-density range. Physiological role of SAA in the immune response during inflammation is not well understood, but various effects have been described. These include e.g., inhibition of lymphocyte proliferation, detoxification of endotoxin, inhibition of platelet aggregation, inhibition of thrombocytes aggregation, and inhibition of oxidative reaction in neutrophils [23]. Therefore, the aim of our preliminary study was to determine serum concentration of selected APPs (haptoglobin, serum amyloid A and α1-acid-glycoprotein) in clinically healthy European bison and animals eliminated under the clinical suspicion of different pathological conditions, followed by post-mortem evaluation [24]. We hypothesised that concentrations of selected APPs were higher in the eliminated European bison (ELIM) comparing to clinically healthy animals (HEAL). Methods: Samples Blood samples were collected from 87 European bison, including 40 samples from clinically healthy individuals (HEAL) and 47 samples from individuals eliminated due to different pathological conditions (ELIM). In HEAL, there were 27 females and 13 males in the average age of 6.17 (range: 1-16 years). While in ELIM there were 30 females and 17 males in the average age of 6.5 (range: 1-20 years). The clinically healthy European bison were pharmacologically immobilized for transportation, placing a telemetry collar, or routine diagnostic purposes according to the previously described protocols [25]. Briefly, the combination of xylazine and etorphine were used to immobilization. The preparations were administered with specialized pneumatic Dan-Inject applicators. Following a successful shot, the animal is immobilized within 15 min and thereafter veterinary and animal husbandry manipulations are possible. In order to awake the European bisons the mixture of atipamezole, naloxone and diprenorphine were used. The selected individuals were culled after the health assessment (ELIM) by the herd supervising veterinarian and under the decision of Ministry of Environment. Briefly, the health status assessment includes integumentary system examination, orifices inspection (check for any discharges), reproductive organs examination, sight organ examination as well as body condition assessment. The animals were sampled in accordance with the appropriate regulations and permits (Polish General Directorate for Environmental Protection: Regulations DOP-OZGIZ.6401.06.7.2012.1 s and DOPOZ. 6401.06.7.2012.1 s1.Warsaw, 2012; Polish General Directorate for Environmental Protection: Regulations DZPWG.6401.06.23.2014.km2.; and Polish Ministry of the Environment, Regulation: DLP-III-0771-5/42,173/14/ZK). The blood samples were taken at capture and collected from the jugular vein into sterile tubes with clot activator for serum separation. The poor quality samples were excluded prior to the study for ensuring the high reliability of results. 
Serum was harvested from the blood samples by centrifugation (3000 × g for 15 min at room temperature) and stored at -80° C for further analysis (not longer than one month). Blood samples were collected from 87 European bison, including 40 samples from clinically healthy individuals (HEAL) and 47 samples from individuals eliminated due to different pathological conditions (ELIM). In HEAL, there were 27 females and 13 males in the average age of 6.17 (range: 1-16 years). While in ELIM there were 30 females and 17 males in the average age of 6.5 (range: 1-20 years). The clinically healthy European bison were pharmacologically immobilized for transportation, placing a telemetry collar, or routine diagnostic purposes according to the previously described protocols [25]. Briefly, the combination of xylazine and etorphine were used to immobilization. The preparations were administered with specialized pneumatic Dan-Inject applicators. Following a successful shot, the animal is immobilized within 15 min and thereafter veterinary and animal husbandry manipulations are possible. In order to awake the European bisons the mixture of atipamezole, naloxone and diprenorphine were used. The selected individuals were culled after the health assessment (ELIM) by the herd supervising veterinarian and under the decision of Ministry of Environment. Briefly, the health status assessment includes integumentary system examination, orifices inspection (check for any discharges), reproductive organs examination, sight organ examination as well as body condition assessment. The animals were sampled in accordance with the appropriate regulations and permits (Polish General Directorate for Environmental Protection: Regulations DOP-OZGIZ.6401.06.7.2012.1 s and DOPOZ. 6401.06.7.2012.1 s1.Warsaw, 2012; Polish General Directorate for Environmental Protection: Regulations DZPWG.6401.06.23.2014.km2.; and Polish Ministry of the Environment, Regulation: DLP-III-0771-5/42,173/14/ZK). The blood samples were taken at capture and collected from the jugular vein into sterile tubes with clot activator for serum separation. The poor quality samples were excluded prior to the study for ensuring the high reliability of results. Serum was harvested from the blood samples by centrifugation (3000 × g for 15 min at room temperature) and stored at -80° C for further analysis (not longer than one month). Determination of APPs The concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein in serum were measured using commercial ELISAs and colorimetric assays according to manufacturer’s guidelines, which included Haptoglobin Kit Phase Range and Phase Serum Amyloid A assays (Tridelta Development Ltd County Kildare, Ireland) and Cow α1-acid glycoprotein (AGP) ELISA (Life Diagnostic, West Chester, USA). Serum samples were tested in duplicate. Prior to AGP and SAA analyses samples were diluted as follows: 1:10,000 for AGP and 1:500 for SAA. The concentrations of the APP was calculated based on standard curve for each protein using the FindGraph computer software (UNIPHIZ Lab, Vancouver, Canada). All assays were preliminarily validated in our laboratory before being applied in European bison samples. For this purpose, precision, accuracy and limit of detection were calculated. Within-run coefficients of variation (CVs) were calculated after the analysis of 2 serum samples with low and high proteins concentrations eight times in a single run. 
Between-run CVs were obtained by measuring the same samples in eight separate runs carried out on three different days. All samples used were frozen in aliquots and only the vials needed for each run were used. Accuracy was investigated by linearity under dilution; in brief, two European bison serum samples were diluted (1:2; 1:4; 1:8; 1:16, 1:32) with sample diluents. The limit of detection was calculated as the lowest concentration of APP that could be distinguished from a zero sample, and was taken as the mean + 3 standard deviations (SD) of 12 replicates of blank sample, tested in one analytical run. The concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein in serum were measured using commercial ELISAs and colorimetric assays according to manufacturer’s guidelines, which included Haptoglobin Kit Phase Range and Phase Serum Amyloid A assays (Tridelta Development Ltd County Kildare, Ireland) and Cow α1-acid glycoprotein (AGP) ELISA (Life Diagnostic, West Chester, USA). Serum samples were tested in duplicate. Prior to AGP and SAA analyses samples were diluted as follows: 1:10,000 for AGP and 1:500 for SAA. The concentrations of the APP was calculated based on standard curve for each protein using the FindGraph computer software (UNIPHIZ Lab, Vancouver, Canada). All assays were preliminarily validated in our laboratory before being applied in European bison samples. For this purpose, precision, accuracy and limit of detection were calculated. Within-run coefficients of variation (CVs) were calculated after the analysis of 2 serum samples with low and high proteins concentrations eight times in a single run. Between-run CVs were obtained by measuring the same samples in eight separate runs carried out on three different days. All samples used were frozen in aliquots and only the vials needed for each run were used. Accuracy was investigated by linearity under dilution; in brief, two European bison serum samples were diluted (1:2; 1:4; 1:8; 1:16, 1:32) with sample diluents. The limit of detection was calculated as the lowest concentration of APP that could be distinguished from a zero sample, and was taken as the mean + 3 standard deviations (SD) of 12 replicates of blank sample, tested in one analytical run. Statistical analysis In order to select proper statistical tools to evaluate the differences between healthy (HEAL) and European bison eliminated due to poor health conditions (sick animals) (ELIM), the normality of data was verified by using Shapiro-Wilk test. Since none of the variables met the normality assumptions, non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA). In order to select proper statistical tools to evaluate the differences between healthy (HEAL) and European bison eliminated due to poor health conditions (sick animals) (ELIM), the normality of data was verified by using Shapiro-Wilk test. Since none of the variables met the normality assumptions, non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA). Samples: Blood samples were collected from 87 European bison, including 40 samples from clinically healthy individuals (HEAL) and 47 samples from individuals eliminated due to different pathological conditions (ELIM). 
In HEAL, there were 27 females and 13 males in the average age of 6.17 (range: 1-16 years). While in ELIM there were 30 females and 17 males in the average age of 6.5 (range: 1-20 years). The clinically healthy European bison were pharmacologically immobilized for transportation, placing a telemetry collar, or routine diagnostic purposes according to the previously described protocols [25]. Briefly, the combination of xylazine and etorphine were used to immobilization. The preparations were administered with specialized pneumatic Dan-Inject applicators. Following a successful shot, the animal is immobilized within 15 min and thereafter veterinary and animal husbandry manipulations are possible. In order to awake the European bisons the mixture of atipamezole, naloxone and diprenorphine were used. The selected individuals were culled after the health assessment (ELIM) by the herd supervising veterinarian and under the decision of Ministry of Environment. Briefly, the health status assessment includes integumentary system examination, orifices inspection (check for any discharges), reproductive organs examination, sight organ examination as well as body condition assessment. The animals were sampled in accordance with the appropriate regulations and permits (Polish General Directorate for Environmental Protection: Regulations DOP-OZGIZ.6401.06.7.2012.1 s and DOPOZ. 6401.06.7.2012.1 s1.Warsaw, 2012; Polish General Directorate for Environmental Protection: Regulations DZPWG.6401.06.23.2014.km2.; and Polish Ministry of the Environment, Regulation: DLP-III-0771-5/42,173/14/ZK). The blood samples were taken at capture and collected from the jugular vein into sterile tubes with clot activator for serum separation. The poor quality samples were excluded prior to the study for ensuring the high reliability of results. Serum was harvested from the blood samples by centrifugation (3000 × g for 15 min at room temperature) and stored at -80° C for further analysis (not longer than one month). Determination of APPs: The concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein in serum were measured using commercial ELISAs and colorimetric assays according to manufacturer’s guidelines, which included Haptoglobin Kit Phase Range and Phase Serum Amyloid A assays (Tridelta Development Ltd County Kildare, Ireland) and Cow α1-acid glycoprotein (AGP) ELISA (Life Diagnostic, West Chester, USA). Serum samples were tested in duplicate. Prior to AGP and SAA analyses samples were diluted as follows: 1:10,000 for AGP and 1:500 for SAA. The concentrations of the APP was calculated based on standard curve for each protein using the FindGraph computer software (UNIPHIZ Lab, Vancouver, Canada). All assays were preliminarily validated in our laboratory before being applied in European bison samples. For this purpose, precision, accuracy and limit of detection were calculated. Within-run coefficients of variation (CVs) were calculated after the analysis of 2 serum samples with low and high proteins concentrations eight times in a single run. Between-run CVs were obtained by measuring the same samples in eight separate runs carried out on three different days. All samples used were frozen in aliquots and only the vials needed for each run were used. Accuracy was investigated by linearity under dilution; in brief, two European bison serum samples were diluted (1:2; 1:4; 1:8; 1:16, 1:32) with sample diluents. 
The limit of detection was calculated as the lowest concentration of APP that could be distinguished from a zero sample, and was taken as the mean + 3 standard deviations (SD) of 12 replicates of blank sample, tested in one analytical run. Statistical analysis: In order to select proper statistical tools to evaluate the differences between healthy (HEAL) and European bison eliminated due to poor health conditions (sick animals) (ELIM), the normality of data was verified by using Shapiro-Wilk test. Since none of the variables met the normality assumptions, non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA). Results: Analytical validation of the assays For Hp mean within-run and between-run CVs were 8.11% and 7.11%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.98 showing that the method measures the protein in a linear manner. The detection limit of the assay was 0.005 mg/ml. For SAA mean within-run and between-run CVs were 5.15% and 5.35%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 3.49 µg/ml. For AGP mean within-run and between-run CVs were 4.08% and 4.37%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 15.6 µg/ml. For Hp mean within-run and between-run CVs were 8.11% and 7.11%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.98 showing that the method measures the protein in a linear manner. The detection limit of the assay was 0.005 mg/ml. For SAA mean within-run and between-run CVs were 5.15% and 5.35%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 3.49 µg/ml. For AGP mean within-run and between-run CVs were 4.08% and 4.37%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 15.6 µg/ml. Acute phase protein concentrations The median concentration of SAA in ELIM animals was almost 4 times higher than the median level in clinically healthy group. In ELIM, the median concentration was 70.81 µg/ml while in HEAL, the median concentration of SAA was 18.95 µg/ml. The interquartile range (IQR) equals 12.38-30.11 µg/ml for SAA concentration in HEAL, while in ELIM 50.50-112.46 µg/ml (Fig. 1). On other hand, the median concentration of Hp in HEAL group was over 2 times lower than in the eliminated animals with IQR equal to 0.1-0.214 mg/ml in HEAL and 0.270-0.592 mg/ml in ELIM. The median concentration of Hp in HEAL was 0.176 mg/ml, while in ELIM the median concentration of Hp was equal to 0.305 mg/ml (Fig. 2). The concentration of SAA and Hp was significantly higher in ELIM comparing to HEAL (p <0.01). Similar differences were also observed for AGP levels, however they were not statistically significant (p = 0.3950). 
The median levels of AGP in HEAL and ELIM were equal to 207.25 and 271.25 µg/ml, respectively. The IQR for AGP was 146.65-259.1 µg/ml in HEAL and 178.55-407.25 in ELIM µg/ml (Fig. 3).Fig. 1 A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 2 A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 3 A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers  A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)  A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)  A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers The median concentration of SAA in ELIM animals was almost 4 times higher than the median level in clinically healthy group. In ELIM, the median concentration was 70.81 µg/ml while in HEAL, the median concentration of SAA was 18.95 µg/ml. The interquartile range (IQR) equals 12.38-30.11 µg/ml for SAA concentration in HEAL, while in ELIM 50.50-112.46 µg/ml (Fig. 1). On other hand, the median concentration of Hp in HEAL group was over 2 times lower than in the eliminated animals with IQR equal to 0.1-0.214 mg/ml in HEAL and 0.270-0.592 mg/ml in ELIM. The median concentration of Hp in HEAL was 0.176 mg/ml, while in ELIM the median concentration of Hp was equal to 0.305 mg/ml (Fig. 2). The concentration of SAA and Hp was significantly higher in ELIM comparing to HEAL (p <0.01). Similar differences were also observed for AGP levels, however they were not statistically significant (p = 0.3950). The median levels of AGP in HEAL and ELIM were equal to 207.25 and 271.25 µg/ml, respectively. The IQR for AGP was 146.65-259.1 µg/ml in HEAL and 178.55-407.25 in ELIM µg/ml (Fig. 3).Fig. 1 A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. 
The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 2 A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 3 A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers  A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)  A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)  A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers Analytical validation of the assays: For Hp mean within-run and between-run CVs were 8.11% and 7.11%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.98 showing that the method measures the protein in a linear manner. The detection limit of the assay was 0.005 mg/ml. For SAA mean within-run and between-run CVs were 5.15% and 5.35%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 3.49 µg/ml. For AGP mean within-run and between-run CVs were 4.08% and 4.37%, respectively. Dilution studies resulted in linear regression equations with a correlation coefficient of 0.99 showing that the method measures the protein in a linear manner. The detection limit of the assay was 15.6 µg/ml. Acute phase protein concentrations: The median concentration of SAA in ELIM animals was almost 4 times higher than the median level in clinically healthy group. In ELIM, the median concentration was 70.81 µg/ml while in HEAL, the median concentration of SAA was 18.95 µg/ml. The interquartile range (IQR) equals 12.38-30.11 µg/ml for SAA concentration in HEAL, while in ELIM 50.50-112.46 µg/ml (Fig. 1). On other hand, the median concentration of Hp in HEAL group was over 2 times lower than in the eliminated animals with IQR equal to 0.1-0.214 mg/ml in HEAL and 0.270-0.592 mg/ml in ELIM. 
The median concentration of Hp in HEAL was 0.176 mg/ml, while in ELIM the median concentration of Hp was equal to 0.305 mg/ml (Fig. 2). The concentration of SAA and Hp was significantly higher in ELIM comparing to HEAL (p <0.01). Similar differences were also observed for AGP levels, however they were not statistically significant (p = 0.3950). The median levels of AGP in HEAL and ELIM were equal to 207.25 and 271.25 µg/ml, respectively. The IQR for AGP was 146.65-259.1 µg/ml in HEAL and 178.55-407.25 in ELIM µg/ml (Fig. 3).Fig. 1 A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 2 A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)Fig. 3 A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers  A boxplot for the concentration of serum amyloid A in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers. * - statistically significant differences between HEAL and ELIM (p < 0.01)  A boxplot for the concentration of haptoglobin in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations. * - statistically significant differences between HEAL and ELIM (p < 0.01)  A boxplot for the concentration of α-1-acid glikoprotein in serum of European bison. The line inside the box is the median. The top and bottom lines of the box are the first and third quartiles, respectively. The top and bottom whiskers are the minimum and maximum concentrations, circles represent outliers, asterics represent far outliers Discussion: In 2020, International Union for Conservation of Nature (IUCN) has changed European bison status from “Vulnerable” to “Near Threatened”, which proved that proper wildlife management is effective [26]. However, taking into account that modern European bison population was recreated from several individuals, continuous actions should be carried out to protect the species of the largest land European mammal. On-going health surveillance of European bison should be an important element of European bison protection strategy [2]. Our study suggest that determination of APPs might be a useful tool to assess health status of the individual or at the population level. 
In European bison, in the other study the most prevalent pathologies observed postmortally included: pneumonia, emphysema, nephritis, body traumas, posthitis/balanoposthitis in males and infestations of Fasciola hepatica and Dictyocaulus viviparus [27]. However, to our best knowledge there are no studies investigating APPs in European bison and their relation with different pathological conditions. Regarding European bison`s immunology, there is only a report describing changes of immunoglobulins within the age [27]. Possibly, further studies may describe dynamics of APPs in different European bison disease. In the current preliminary study, we reported higher concentration of two (Hp, SAA) out of three investigated APPs in the eliminated due to poor health condition European bison. However, we cannot reject AGP as a potential marker for future studies using larger number of animals, since its concentrations were generally higher in animals from ELIM group, even though not statistically proven. The knowledge on usage of APPs as markers of physiological and pathological processes in wildlife species is lacking, even though they appear as desirable candidates for monitoring wildlife health and disease burden in the changing environment. Similar in the assumptions, the study of Glidden et al. [28] have demonstrated that Hp levels may be used to detect infections non-specifically and potential as a surveillance marker in African buffalo. Nevertheless, broaden experience may be drawn from cattle medicine, especially considering genetic relatedness among bovids. For example, Bagga et al. [14] have found APPs useful in diagnostics of lameness in cows, reporting SAA and Hp concentrations approximately 3 and 20, respectively times higher in lame cows comparing to non-lame cows. Similarly in our study the median concentration of SAA was almost 4 times higher in ELIM than in HEAL. The SAA levels obtained by Bagga in lame and non-lame cows (22.19 µg/ml and 8.89 µg/ml, respectively) are numerically lower that median SAA concentration in ELIM and HEAL (70.81 µg/ml and 18.95 µg/ml, respectively). Similarly the Hp concentrations in lame and non-lame cows (0.217 mg/ml and 0.012 mg/ml, respectively) are numerically lower than Hp concentration in ELIM and HEAL (0.305 mg/ml and 0.176 mg/ml, respectively. Dalanezi et al. [12] described increase in milk APPs in course of mastitis caused by different pathogens and suggested pathogen-specific APPs profiles. On the other hand, Moisa et al. [13] state that Hp concentration can be useful biomarker of respiratory diseases in dairy calves. The researcher reports that 0.195 mg/ml is the cut-off value for Hp as biomarker for bovine respiratory diseases. Comparing to our study, the European bisons from HEAL were below this value (0.176 mg/ml), while the animals from ELIM were above the cut-off point (0.305 mg/ml). Yet, Kęsik-Maliszewska et al. [29] have not proved differences in serum APPs excretion in experimentally infected with Schmallenberg virus calves, suggesting that not all infections induce measurable response. A team of Ansari-Lari et al. [30] analysed changes of Hp, Fb, SAA and albumin (Ab) levels in the course of post-traumatic reticulitis and peritonitis in cattle. Their study showed that the analysed proteins respond in a similar way and the particularly significant increase of their level was observed in acute diffuse peritonitis as compared to other forms of disease. 
Reduction of Ab level was observed in acute local peritonitis whereas serum concentration of Ab increased in a diffuse form of this condition. Moreover, APPs may be a useful indicators of the cattle welfare and stress [31, 32]. In the study by Saco et al. [31] serum levels of SAA and Hp in cattle kept under various environmental conditions were measured. They showed higher concentration of SAA in animals exposed to stress, whereas Hp values remained at similar level regardless of the animal maintenance. Comparing to our results, the SAA concentration both in ELIM and HEAL (70.81 µg/ml and 18.95 µg/ml, respectively) were significantly higher than cows kept under hardy conditions (3.46 µg/ml and 4.50 µg/ml). However, the levels of APPs may also vary in physiological conditions such as pregnancy, lactation [33, 34]. Conclusions: To conclude, our preliminary study is the first report describing concentrations of the selected APPs (Hp, SAA and AGP) in European bison. Subsequently the study suggest that concentration of APPs may be used as a supportive tool, monitoring the health status (at individual and/or population level) and when making decisions regarding undesired individuals to be eliminated due to health reasons. The most significant limitations of our research was the moderate number of animals involved in the study. Further studies will include greater samples size in order to associate APPs levels with the most frequent pathologies of European bison from different populations, variable in exposure to pathogens or differences in herd management. Most importantly, the study confirmed the selection of animals for culling was diligent and thoughtful and could have been confirmed by APP up-regulation in sick animals.
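The assay-validation figures described in the Methods above (within-run coefficient of variation from replicate measurements and a limit of detection taken as the mean plus three standard deviations of blank replicates) come down to simple arithmetic; the sketch below uses made-up replicate readings, not the study's measurements.

```python
# Sketch of the validation arithmetic described in the Methods; all readings are illustrative.
import numpy as np

# Eight replicate measurements of one serum sample in a single run (hypothetical, µg/ml)
replicates = np.array([18.2, 19.1, 18.7, 17.9, 18.5, 19.3, 18.0, 18.8])
within_run_cv = replicates.std(ddof=1) / replicates.mean() * 100
print(f"within-run CV: {within_run_cv:.2f}%")

# Twelve replicate readings of a blank sample (hypothetical, µg/ml)
blanks = np.array([0.90, 1.10, 1.00, 0.80, 1.20, 0.90, 1.00, 1.10, 0.95, 1.05, 0.85, 1.15])
limit_of_detection = blanks.mean() + 3 * blanks.std(ddof=1)
print(f"limit of detection: {limit_of_detection:.2f} µg/ml")
```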
Background: This is the first report describing levels of APPs in European bison. Serum concentrations of acute phase proteins (APPs) may be helpful for assessing general health status in wildlife and potentially useful for selecting animals for elimination. Since there is a lack of literature data on APP concentrations in European bison, establishment of reference values is also needed. Methods: A total of 87 European bison from Polish populations were divided into two groups: (1) healthy animals, immobilized for transportation, placement of a telemetry collar, or routine diagnostic purposes; and (2) animals selectively culled due to poor health condition. The serum concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein were determined using commercial quantitative ELISA assays. Since none of the variables met the normality assumptions, the non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA). Results: The concentrations of haptoglobin and serum amyloid A were significantly higher in animals culled (euthanised) due to poor condition than in the clinically healthy European bison. The levels of α1-acid-glycoprotein did not differ significantly between healthy and sick animals. Conclusions: An association between APP concentrations and health status was demonstrated; therefore, the determination of selected APPs may be considered in the future as an auxiliary predictive tool for assessing the health condition of European bison.
Background: At the beginning of XX century, European bison (Bison bonasus) extinct from wild, when the last surviving animals were killed in the Białowieża Primeval Forest after World War I. Fortunately, the biggest wild herbivore in Europe have survived and now, 70 years after the first European bison had been released into the wild again, its world population is estimated at more than 9100 heads [1]. Unfortunately, growing numbers and density of the population exceeding capacity of the available habitats, increasing anthropopression and limited gene pool increase nowadays the risk of health threats such as exposure to infectious and invasive pathogens [2]. Therefore, it is crucial to support European bison population development by proper wildlife management. One of the tools that can be used to manage wildlife species is selective elimination (culling) of individuals, which genotype (and/or lineage) or health impairment make them undesirable in a herd. Apart from that, diseased individuals may threaten other animals in the population in case of contagious diseases, which, to make things worse, may also have zoonotic potential and be a challenge for public health [3, 4]. This applies especially to tuberculosis, which is currently a real zoonotic health hazard in wild ruminant populations including European bison [5]. The question which arises is how to accurately select individuals to be eliminated. The potential solution may be taken from farm animals medicine, in particular cattle, which is relatively closely related to European bison. It involves determining serum acute phase proteins (APPs) concentration, which reflects animal health status under different pathological conditions [6–8]. It could also be used as a predictive tool describing the chance of recovery based on the severity of the disease. Besides, determination of APPs can be applied as a reliable solution to assess general health status, which is crucial in case of endangered animal species, particularly with limited genes pool. The concentrations and type of APPs differ within the species [9, 10]. In cattle, the most important APPs are haptoglobin (Hp), serum amyloid A (SAA) and α1-acid-glycoprotein (AGP) [11], which levels are used as biomarkers of various pathological conditions and as predictive values for mastitis [12], respiratory diseases [13], and lameness [14]. However, no reports investigating concentration of APPs in European bison are available. Although, some studies have been described in other wildlife non-bovid species, nevertheless the data is very limited [15–17]. The secretion of APPs is associated with the acute phase reaction (APR) [18]. The APR is an innate response of the body to the disturbance of homeostasis which may be associated with various factors i.e. infection, damage of tissues, neoplastic hyperplasia, immunological disorders [19–21]. The goal of the acute phase response is to recover homeostasis and eliminate the causative factor. It comprises numerous hormonal, metabolic and neurological changes which occur within a short period of time after the injury, at the beginning of infection or inflammation [18]. Referring to cattle (as closely related to European bison domesticated species), the Hp, SAA and AGP have been described as the most significant markers of APR and are considered the major APPs used in the diagnostics. The level of Hp in cattle increases significantly in the course of APR (from nearly zero to about 2 g/l within 48 h from stimulation. 
The major biologic function of Hp is to bind haemoglobin in an equimolar ratio with very high affinity to prevent haemoglobin-mediated renal parenchymal injury and loss of iron following intravascular haemolysis [22]. Serum amyloid A seems also to be a good indicator of acute phase reaction in cattle. The synthesis of certain members of the family SAA proteins is significantly increased during inflammation [23]. SAA proteins can be considered as apolipoproteins because they associate with plasma lipoproteins mainly within the high-density range. Physiological role of SAA in the immune response during inflammation is not well understood, but various effects have been described. These include e.g., inhibition of lymphocyte proliferation, detoxification of endotoxin, inhibition of platelet aggregation, inhibition of thrombocytes aggregation, and inhibition of oxidative reaction in neutrophils [23]. Therefore, the aim of our preliminary study was to determine serum concentration of selected APPs (haptoglobin, serum amyloid A and α1-acid-glycoprotein) in clinically healthy European bison and animals eliminated under the clinical suspicion of different pathological conditions, followed by post-mortem evaluation [24]. We hypothesised that concentrations of selected APPs were higher in the eliminated European bison (ELIM) comparing to clinically healthy animals (HEAL). Conclusions: To conclude, our preliminary study is the first report describing concentrations of the selected APPs (Hp, SAA and AGP) in European bison. Subsequently the study suggest that concentration of APPs may be used as a supportive tool, monitoring the health status (at individual and/or population level) and when making decisions regarding undesired individuals to be eliminated due to health reasons. The most significant limitations of our research was the moderate number of animals involved in the study. Further studies will include greater samples size in order to associate APPs levels with the most frequent pathologies of European bison from different populations, variable in exposure to pathogens or differences in herd management. Most importantly, the study confirmed the selection of animals for culling was diligent and thoughtful and could have been confirmed by APP up-regulation in sick animals.
Background: This is the first report describing levels of APPs in European bison. Serum concentrations of acute phase proteins (APPs) may be helpful for assessing general health status in wildlife and potentially useful for selecting animals for elimination. Since there is a lack of literature data on APP concentrations in European bison, establishment of reference values is also needed. Methods: A total of 87 European bison from Polish populations were divided into two groups: (1) healthy animals, immobilized for transportation, placement of a telemetry collar, or routine diagnostic purposes; and (2) animals selectively culled due to poor health condition. The serum concentrations of haptoglobin, serum amyloid A and α1-acid-glycoprotein were determined using commercial quantitative ELISA assays. Since none of the variables met the normality assumptions, the non-parametric Mann-Whitney U test was used for all comparisons. Statistical significance was set at p < 0.05. Statistical analyses were performed using Statistica 13.3 (Tibco, USA). Results: The concentrations of haptoglobin and serum amyloid A were significantly higher in animals culled (euthanised) due to poor condition than in the clinically healthy European bison. The levels of α1-acid-glycoprotein did not differ significantly between healthy and sick animals. Conclusions: An association between APP concentrations and health status was demonstrated; therefore, the determination of selected APPs may be considered in the future as an auxiliary predictive tool for assessing the health condition of European bison.
6,953
278
[ 884, 1577, 382, 307, 92, 176, 670 ]
10
[ "european", "concentration", "ml", "serum", "elim", "bison", "european bison", "heal", "median", "samples" ]
[ "healthy european bison", "bison immunology report", "european bison protection", "bison animals eliminated", "bison disease" ]
null
[CONTENT] European bison | Acute phase proteins | Wildlife management | Hp | SAA | AGP [SUMMARY]
null
[CONTENT] European bison | Acute phase proteins | Wildlife management | Hp | SAA | AGP [SUMMARY]
[CONTENT] European bison | Acute phase proteins | Wildlife management | Hp | SAA | AGP [SUMMARY]
[CONTENT] European bison | Acute phase proteins | Wildlife management | Hp | SAA | AGP [SUMMARY]
[CONTENT] European bison | Acute phase proteins | Wildlife management | Hp | SAA | AGP [SUMMARY]
[CONTENT] Acute-Phase Proteins | Animals | Bison | Haptoglobins | Serum Amyloid A Protein [SUMMARY]
null
[CONTENT] Acute-Phase Proteins | Animals | Bison | Haptoglobins | Serum Amyloid A Protein [SUMMARY]
[CONTENT] Acute-Phase Proteins | Animals | Bison | Haptoglobins | Serum Amyloid A Protein [SUMMARY]
[CONTENT] Acute-Phase Proteins | Animals | Bison | Haptoglobins | Serum Amyloid A Protein [SUMMARY]
[CONTENT] Acute-Phase Proteins | Animals | Bison | Haptoglobins | Serum Amyloid A Protein [SUMMARY]
[CONTENT] healthy european bison | bison immunology report | european bison protection | bison animals eliminated | bison disease [SUMMARY]
null
[CONTENT] healthy european bison | bison immunology report | european bison protection | bison animals eliminated | bison disease [SUMMARY]
[CONTENT] healthy european bison | bison immunology report | european bison protection | bison animals eliminated | bison disease [SUMMARY]
[CONTENT] healthy european bison | bison immunology report | european bison protection | bison animals eliminated | bison disease [SUMMARY]
[CONTENT] healthy european bison | bison immunology report | european bison protection | bison animals eliminated | bison disease [SUMMARY]
[CONTENT] european | concentration | ml | serum | elim | bison | european bison | heal | median | samples [SUMMARY]
null
[CONTENT] european | concentration | ml | serum | elim | bison | european bison | heal | median | samples [SUMMARY]
[CONTENT] european | concentration | ml | serum | elim | bison | european bison | heal | median | samples [SUMMARY]
[CONTENT] european | concentration | ml | serum | elim | bison | european bison | heal | median | samples [SUMMARY]
[CONTENT] european | concentration | ml | serum | elim | bison | european bison | heal | median | samples [SUMMARY]
[CONTENT] apps | species | cattle | bison | wild | apr | inhibition | european | european bison | acute phase [SUMMARY]
null
[CONTENT] median | box | ml | concentration | respectively | µg | µg ml | heal | elim | serum european bison [SUMMARY]
[CONTENT] study | confirmed | apps | animals | health | moderate number animals involved | conclude preliminary study | conclude preliminary study report | agp european bison subsequently | report describing concentrations [SUMMARY]
[CONTENT] samples | ml | serum | european | run | bison | european bison | elim | concentration | apps [SUMMARY]
[CONTENT] samples | ml | serum | european | run | bison | european bison | elim | concentration | apps [SUMMARY]
[CONTENT] first | European ||| ||| European [SUMMARY]
null
[CONTENT] European ||| [SUMMARY]
[CONTENT] European [SUMMARY]
[CONTENT] ||| first | European ||| ||| European ||| 87 | European | Polish | two | 1 | 2 | the poor health condition ||| A and α1 | ELISA ||| Mann-Whitney U ||| p < 0.05 ||| Statistica | Tibco | USA ||| ||| European ||| ||| European [SUMMARY]
[CONTENT] ||| first | European ||| ||| European ||| 87 | European | Polish | two | 1 | 2 | the poor health condition ||| A and α1 | ELISA ||| Mann-Whitney U ||| p < 0.05 ||| Statistica | Tibco | USA ||| ||| European ||| ||| European [SUMMARY]
Low serum level of apolipoprotein A1 may predict the severity of COVID-19: A retrospective study.
34260764
Dyslipidemia has been observed in patients with coronavirus disease 2019 (COVID-19). This study aimed to investigate blood lipid profiles in patients with COVID-19 and to explore their predictive values for COVID-19 severity.
BACKGROUND
A total of 142 consecutive patients with COVID-19 were included in this single-center retrospective study. Blood lipid profile characteristics were investigated in patients with COVID-19 in comparison with 77 age- and gender-matched healthy subjects, their predictive values for COVID-19 severity were analyzed by using multivariable logistic regression analysis, and their prediction efficiencies were evaluated by using receiver operator characteristic (ROC) curves.
METHODS
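The analysis outlined in the methods abstract above (multivariable logistic regression for severity, evaluated with ROC curves) can be sketched as follows; the feature matrix is simulated, and the use of ApoA1 and IL-6 as predictors is only illustrative of the described workflow, not a reproduction of the patient data.

```python
# Rough sketch of the severity-prediction workflow described above; the data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 142
severe = rng.binomial(1, 0.12, size=n)  # roughly 12% severe cases, as in the cohort
apoa1 = np.where(severe == 1, rng.normal(0.98, 0.10, n), rng.normal(1.22, 0.15, n))  # g/L
il6 = np.where(severe == 1, rng.normal(40.0, 15.0, n), rng.normal(8.0, 5.0, n))      # pg/mL
X = np.column_stack([apoa1, il6])

model = LogisticRegression().fit(X, severe)
auc = roc_auc_score(severe, model.predict_proba(X)[:, 1])
print(f"in-sample AUC of the ApoA1 + IL-6 model: {auc:.3f}")
```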
There were 125 and 17 cases in the non-severe and severe groups, respectively. Total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), and apolipoprotein A1 (ApoA1) gradually decreased across the groups in the following order: healthy controls, non-severe group, and severe group. ApoA1 was identified as an independent risk factor for COVID-19 severity (adjusted odds ratio [OR]: 0.865, 95% confidence interval [CI]: 0.800-0.935, p < 0.001), along with interleukin-6 (IL-6) (adjusted OR: 1.097, 95% CI: 1.034-1.165, p = 0.002). ApoA1 exhibited the highest area under the ROC curve (AUC) among all single markers (AUC: 0.896, 95% CI: 0.834-0.941); moreover, the risk model established using ApoA1 and IL-6 enhanced prediction efficiency (AUC: 0.977, 95% CI: 0.932-0.995).
RESULTS
Blood lipid profiles in patients with COVID-19 are quite abnormal compared with those in healthy subjects, especially in severe cases. Serum ApoA1 may represent a good indicator for predicting the severity of COVID-19.
CONCLUSION
[ "Adult", "Aged", "Apolipoprotein A-I", "Area Under Curve", "Biomarkers", "COVID-19", "Case-Control Studies", "Cholesterol, HDL", "Cholesterol, LDL", "Comorbidity", "Female", "Humans", "Lipids", "Male", "Middle Aged", "Retrospective Studies", "Risk Factors", "Severity of Illness Index" ]
8373354
INTRODUCTION
Severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) is a highly transmissible coronavirus that has caused an ever‐increasing number of coronavirus disease 2019 (COVID‐19) infections since December 2019 and spread rapidly worldwide. Although approximately 80% of patients infected with SARS‐CoV‐2 exhibit mild symptoms,1 the remaining severe cases may experience acute respiratory distress syndrome (ARDS), multiorgan failure (MOF), and death.2 Therefore, it is necessary to discriminate between severe and mild cases. Previous studies have found that the development of severe COVID‐19 is associated with age and underlying diseases, and patients who develop severe disease are likely to suffer from aberrant inflammatory reactions and cytokine storms.1, 3 Consequently, several clinical characteristics, the inflammation index, and cytokine levels have been used as indicators for predicting the severity of COVID‐19 by us and others.4, 5 Emerging evidence suggests that lipid metabolism dysregulation might promote the progression of COVID‐19, as revealed by mass spectrometry (MS)‐based proteomics analysis.6, 7 Although MS analysis is not commonly performed, blood lipids are routinely examined using automatic biochemical instruments in clinical laboratories. Additionally, dyslipidemia has also been observed in other respiratory infectious diseases.8, 9, 10 Therefore, blood lipids may be considered potential and available indicators of COVID‐19 severity. In a former study, serum hypolipidemia was identified in patients with COVID‐19.11 However, that study did not analyze other blood lipid components, such as apolipoprotein A1 (ApoA1), ApoB, and lipoprotein (a), and their predictive values for COVID‐19 severity are not fully understood. Therefore, to more comprehensively investigate blood lipid profiles in patients with COVID‐19 and determine their predictive value for disease severity, a retrospective study was performed.
null
null
RESULTS
General clinical characteristics In total, 142 consecutive patients with confirmed COVID‐19 were included in this study. The mean age was 49.10 ± 16.36 years, and 38.73% of the patients were male. Hypertension (37, 26.06%), diabetes (12, 8.45%), hepatic disease (10, 7.04%), and chronic lung disease (9, 6.34%) were the most common comorbidities. Fever (84, 59.15%) was the leading initial symptom, followed by cough (61, 42.96%), expectoration (32, 22.54%) and fatigue (27, 19.01%). Among the 142 patients, 17 (11.97%) and 125 (88.03%) patients were classified into the severe and non‐severe groups during admission, respectively. Significant differences in age, body mass index (BMI), hypertension, hepatic disease, and fever were noted between the severe and non‐severe groups. Regarding clinical treatment, a greater proportion of patients in the severe group received glucocorticoids, antibiotics, oxygen, invasive mechanical ventilation, and intensive care unit treatment (Table 1). General clinical characteristics of patients with COVID‐19 Data are presented as mean ± standard deviation or n (%). p‐value indicates the comparison between the non‐severe group and severe group. Abbreviations: COVID‐19, coronavirus disease 2019; ECMO, extracorporeal membrane oxygenation. Laboratory findings We first compared the general laboratory parameters between healthy controls and COVID‐19 patients. Neutrophil%, C‐reactive protein (CRP), aspartate aminotransferase (AST), and lactic dehydrogenase (LDH) were elevated, while white blood cell (WBC) count, lymphocyte%, lymphocyte count, platelet count, red blood cell (RBC) count, hemoglobin, albumin, total bilirubin (TBil), blood urea nitrogen (BUN), and blood uric acid (BUA) were decreased in patients with COVID‐19 compared with healthy controls (Table 2). Comparisons of demographic and laboratory variables between healthy controls and COVID‐19 patients Data are presented as mean ± standard deviation, n (%), or medians (interquartile ranges). 
Abbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BUA, blood uric acid; BUN, blood urea nitrogen; COVID‐19, coronavirus disease 2019; CRP, C‐reactive protein; DBil, direct bilirubin; LDH, Lactic dehydrogenase; RBC, red blood cell; Scr, serum creatinine; TBil, total bilirubin; WBC, white blood cell. We next compared general laboratory parameters, coagulation tests, and cytokine levels between severe and non‐severe COVID‐19 patients. Compared with those in the non‐severe group, patients in the severe group exhibited increased neutrophil%, fibrinogen, activated partial thromboplastin time (aPTT), CRP, interleukin‐10 (IL‐10), interleukin‐6 (IL‐6), interferon‐γ (INF‐γ), AST, and LDH levels, as well as reduced lymphocyte count, platelet count, lymphocyte%, and albumin levels (Table 3). Baseline laboratory parameters in patients with COVID‐19 Data are presented as medians (interquartile ranges). Abbreviations: COVID‐19, coronavirus disease 2019; WBC, white blood cell; RBC, red blood cell; PT, prothrombin time; aPTT, activated partial thromboplastin time; CRP, C‐reactive protein; IL‐2, interleukin‐2; IL‐4, interleukin‐4; IL‐6, interleukin‐6; IL‐10, interleukin‐10; IFN‐γ, interferon‐γ; TNF‐α, tumor necrosis factor‐α; TBil, total bilirubin; DBil, direct bilirubin; AST, aspartate aminotransferase; ALT, alanine aminotransferase; LDH, Lactic dehydrogenase; BUN, blood urea nitrogen; BUA, blood uric acid; Scr, serum creatinine. We first compared the general laboratory parameters between healthy controls and COVID‐19 patients. Neutrophil%, C‐reactive protein (CRP), aspartate aminotransferase (AST), and lactic dehydrogenase (LDH) were elevated, while white blood cell (WBC) count, lymphocyte%, lymphocyte count, platelet count, red blood cell (RBC) count, hemoglobin, albumin, total bilirubin (TBil), blood urea nitrogen (BUN), and blood uric acid (BUA) were decreased in patients with COVID‐19 compared with healthy controls (Table 2). Comparisons of demographic and laboratory variables between healthy controls and COVID‐19 patients Data are presented as mean ± standard deviation, n (%), or medians (interquartile ranges). Abbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BUA, blood uric acid; BUN, blood urea nitrogen; COVID‐19, coronavirus disease 2019; CRP, C‐reactive protein; DBil, direct bilirubin; LDH, Lactic dehydrogenase; RBC, red blood cell; Scr, serum creatinine; TBil, total bilirubin; WBC, white blood cell. We next compared general laboratory parameters, coagulation tests, and cytokine levels between severe and non‐severe COVID‐19 patients. Compared with those in the non‐severe group, patients in the severe group exhibited increased neutrophil%, fibrinogen, activated partial thromboplastin time (aPTT), CRP, interleukin‐10 (IL‐10), interleukin‐6 (IL‐6), interferon‐γ (INF‐γ), AST, and LDH levels, as well as reduced lymphocyte count, platelet count, lymphocyte%, and albumin levels (Table 3). Baseline laboratory parameters in patients with COVID‐19 Data are presented as medians (interquartile ranges). 
Baseline blood lipids
Baseline blood lipids were obtained within 5 days of admission. TC, HDL‐C, LDL‐C, and ApoA1 decreased progressively from the healthy controls to the non‐severe group to the severe group. TG was higher in the non‐severe group than in the healthy controls; however, no significant differences were found between the severe and non‐severe groups or between the severe group and the healthy controls. There were no significant differences in ApoB or lipoprotein (a) among the three groups (Figure 1).

Figure 1. Comparisons of blood lipids among the healthy controls, non‐severe group, and severe group. Data are presented as medians (interquartile ranges). (A) The total cholesterol levels in the healthy controls, non‐severe group, and severe group were 4.97 (4.55–5.79), 4.08 (3.69–4.63), and 3.58 (3.06–3.85) mmol/L, respectively. (B) The triglyceride levels were 1.04 (0.85–1.43), 1.44 (0.98–2.00), and 1.22 (0.83–2.29) mmol/L, respectively. (C) The HDL‐C levels were 1.62 (1.35–1.92), 1.09 (0.91–1.29), and 0.93 (0.82–1.00) mmol/L, respectively. (D) The LDL‐C levels were 2.77 (2.40–3.42), 2.54 (2.18–2.85), and 2.21 (1.93–2.49) mmol/L, respectively. (E) The ApoA1 levels were 1.45 (1.31–1.65), 1.22 (1.12–1.34), and 0.98 (0.89–1.08) g/L, respectively. (F) The ApoB levels were 0.86 (0.72–1.01), 0.81 (0.71–0.94), and 0.76 (0.66–0.86) g/L, respectively. (G) The lipoprotein (a) levels were 114.80 (62.20–202.90), 87.15 (47.75–161.15), and 94.45 (36.55–135.40) mg/L, respectively. The Kruskal‐Wallis test was used to compare differences among the three groups, and post hoc pairwise comparisons were performed using the Nemenyi test. HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; ApoB, apolipoprotein B.
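The three‐group lipid comparisons above follow a standard nonparametric workflow: an omnibus Kruskal‐Wallis test followed by pairwise post hoc comparisons. The snippet below is a minimal sketch of that workflow in Python using SciPy; the lipid values are hypothetical placeholders rather than the study data, and plain pairwise Mann‐Whitney U tests stand in for the Nemenyi post hoc test used in the paper (which would require an additional package such as scikit‐posthocs).

```python
# Minimal sketch of the three-group comparison workflow (hypothetical values,
# not the study data). The paper used Kruskal-Wallis with Nemenyi post hoc
# tests; here pairwise Mann-Whitney U tests stand in for the post hoc step.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

hdl_c = {                       # HDL-C in mmol/L, illustrative only
    "healthy":    [1.62, 1.70, 1.35, 1.92, 1.55],
    "non_severe": [1.09, 0.95, 1.29, 1.12, 1.01],
    "severe":     [0.93, 0.82, 1.00, 0.88, 0.95],
}

# Omnibus test across the three groups
h_stat, p_overall = kruskal(*hdl_c.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_overall:.4f}")

# Pairwise comparisons (uncorrected) as a stand-in for the Nemenyi test
for g1, g2 in combinations(hdl_c, 2):
    u_stat, p_pair = mannwhitneyu(hdl_c[g1], hdl_c[g2], alternative="two-sided")
    print(f"{g1} vs {g2}: U = {u_stat:.1f}, p = {p_pair:.4f}")
```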
Risk factors for COVID‐19 severity
Potential risk factors, including several general clinical characteristics, immunoinflammatory markers, and blood lipids, were first screened by univariate logistic regression. Age, BMI, hypertension, neutrophil%, lymphocyte%, lymphocyte count, platelet count, fibrinogen, CRP, IL‐6, IL‐10, HDL‐C, ApoA1, albumin, AST, and LDH were associated with the severity of COVID‐19 (p < 0.05), whereas aPTT, interferon‐γ (IFN‐γ), TC, and LDL‐C were not (p > 0.05) (Figure 2). Variables with p < 0.1 in the univariate analysis were then entered into the multivariate logistic regression. After adjusting for the other potential risk factors, only IL‐6 (adjusted odds ratio [OR]: 1.097, 95% confidence interval [CI]: 1.034–1.165, p = 0.002) and ApoA1 (adjusted OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) remained independent risk factors (Figure 3). A risk model was therefore built from the combination of ApoA1 and IL‐6; its formula was logit(P) = 12.303 + 0.093 × IL‐6 − 0.145 × ApoA1.

Figure 2. Univariate logistic regression of risk factors for COVID‐19 severity. Older age, higher BMI, hypertension, increased neutrophil%, fibrinogen, C‐reactive protein, interleukin‐6, interleukin‐10, aspartate aminotransferase, and lactate dehydrogenase, as well as decreased lymphocyte%, lymphocyte count, platelet count, HDL‐C, apolipoprotein A1, and albumin, were associated with the severity of COVID‐19 in univariate logistic regression analysis (all p < 0.05). COVID‐19, coronavirus disease 2019; OR, odds ratio; CI, confidence interval; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol.

Figure 3. Multivariate logistic regression of independent risk factors for COVID‐19 severity. After adjusting for other potential risk factors, increased IL‐6 (OR: 1.097, 95% CI: 1.034–1.165, p = 0.002) and decreased ApoA1 (OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were recognized as independent risk factors for COVID‐19 severity. COVID‐19, coronavirus disease 2019; OR, odds ratio; CI, confidence interval.

In the prediction of COVID‐19 severity, the AUCs (95% CI) of TC, HDL‐C, LDL‐C, ApoA1, IL‐6, and the risk model were 0.726 (0.645–0.798), 0.674 (0.590–0.750), 0.669 (0.585–0.746), 0.896 (0.834–0.941), 0.855 (0.786–0.908), and 0.977 (0.932–0.995), respectively (Figure 4, Table 4). In particular, the sensitivity and specificity of ApoA1 were 94.12% (95% CI: 71.20%–99.00%) and 80.80% (95% CI: 72.80%–87.30%), respectively, both the highest among the single markers. Moreover, the risk model further increased sensitivity and specificity to 100.00% (95% CI: 80.30%–100.00%) and 89.89% (95% CI: 81.40%–94.10%), respectively.

Figure 4. Receiver operating characteristic curves of total cholesterol, HDL‐C, LDL‐C, ApoA1, IL‐6, and the risk model for the severity of COVID‐19. COVID‐19, coronavirus disease 2019; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; IL‐6, interleukin‐6.

Table 4. Predictive performance of blood lipids, interleukin‐6, and the risk model for COVID‐19 severity. Abbreviations: COVID‐19, coronavirus disease 2019; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; IL‐6, interleukin‐6; CI, confidence interval; PPV, positive predictive value; NPV, negative predictive value.
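For illustration, the published risk model can be applied directly to a patient's IL‐6 and ApoA1 values. The sketch below is a minimal, hypothetical example that assumes IL‐6 is entered in pg/ml and ApoA1 in mg/dl with a 0.5 probability cut‐off; the units, scaling, and threshold used when the model was fitted are not stated explicitly in the text, so these are assumptions for illustration rather than the authors' validated implementation.

```python
# Minimal sketch of applying the published risk model
# logit(P) = 12.303 + 0.093 * IL-6 - 0.145 * ApoA1.
# Assumption (not stated in the text): IL-6 in pg/ml, ApoA1 in mg/dl,
# and a 0.5 probability cut-off; treat this as illustration only.
import math

def severity_probability(il6_pg_ml: float, apoa1_mg_dl: float) -> float:
    """Predicted probability of severe COVID-19 from the published coefficients."""
    logit_p = 12.303 + 0.093 * il6_pg_ml - 0.145 * apoa1_mg_dl
    return 1.0 / (1.0 + math.exp(-logit_p))

# Hypothetical patients (values are invented for illustration)
for il6, apoa1 in [(5.0, 130.0), (35.0, 95.0)]:
    p = severity_probability(il6, apoa1)
    label = "severe" if p >= 0.5 else "non-severe"
    print(f"IL-6 = {il6} pg/ml, ApoA1 = {apoa1} mg/dl -> P = {p:.3f} ({label})")
```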
[ "INTRODUCTION", "Study design and participants selection", "Determination of blood lipids", "Statistical analysis", "General clinical characteristics", "Laboratory findings", "Baseline blood lipids", "Risk factors for COVID‐19 severity", "AUTHOR CONTRIBUTIONS", "ETHICAL APPROVAL" ]
[ "Severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) is a highly transmissible coronavirus that has caused an ever‐increasing number of coronavirus disease 2019 (COVID‐19) infections since December 2019 and spread rapidly worldwide. Although approximately 80% of patients infected with SARS‐CoV‐2 exhibit mild symptoms,1 the remaining severe cases may experience acute respiratory distress syndrome (ARDS), multiorgan failure (MOF), and death.2 Therefore, it is necessary to discriminate between severe and mild cases.\nPrevious studies have found that the development of severe COVID‐19 is associated with age and underlying diseases, and patients who develop severe disease are likely to suffer from aberrant inflammatory reactions and cytokine storms.1, 3 Consequently, several clinical characteristics, the inflammation index, and cytokine levels have been used as indicators for predicting the severity of COVID‐19 by us and others.4, 5 Emerging evidence suggests that lipid metabolism dysregulation might promote the progression of COVID‐19, as revealed by mass spectrometry (MS)‐based proteomics analysis.6, 7 Although MS analysis is not commonly performed, blood lipids are routinely examined using automatic biochemical instruments in clinical laboratories. Additionally, dyslipidemia has also been observed in other respiratory infectious diseases.8, 9, 10 Therefore, blood lipids may be considered potential and available indicators of COVID‐19 severity.\nIn a former study, serum hypolipidemia was identified in patients with COVID‐19.11 However, that study did not analyze other blood lipid components, such as apolipoprotein A1 (ApoA1), ApoB, and lipoprotein (a), and their predictive values for COVID‐19 severity are not fully understood. Therefore, to more comprehensively investigate blood lipid profiles in patients with COVID‐19 and determine their predictive value for disease severity, a retrospective study was performed.", "This was a single‐center retrospective study approved by the institutional ethics board (PJ‐NBEY‐KY‐2020–061–01). A total of 142 consecutive patients with COVID‐19 were included from January 23 to April 20, 2020. In addition, 77 age‐ and gender‐matched healthy subjects were selected for evaluating the characteristics of blood lipid profiles in patients with COVID‐19.\nThe diagnosis of COVID‐19 and its severity were determined according to the National Diagnosis and Treatment Protocol for Novel Coronavirus Infection‐Induced Pneumonia (6th Trial Version). Patients with confirmed COVID‐19 were diagnosed based on a positive SARS‐CoV‐2 nucleic acid RT‐PCR result using specimens derived from sputum, throat swabs, or nasopharyngeal swabs. Severe patients exhibited one of the following features: (a) respiratory distress with respiration rate (RR) greater than 30 breaths per minute; (b) blood oxygen saturation less than 93% at a state of rest; (c) arterial blood oxygen partial pressure/inhaled oxygen concentration less than 300 mmHg (1 mmHg = 0.133 kPa); or (d) lesion rapidly progressed by more than 50% within one or two days as determined by pulmonary imaging.\nGeneral clinical characteristics, including gender, age, comorbidities, initial symptoms, treatment, and laboratory test data, were collected from the electronic medical records (EMRs).", "Blood lipids were assessed using a fully automatic biochemical analyzer (ADVIA2400, Siemens) according to the manufacturer's instructions (Purebio Biotechnology Co., Ltd). 
Determination of blood lipids
Blood lipids were assessed using a fully automatic biochemical analyzer (ADVIA2400, Siemens) according to the manufacturer's instructions (Purebio Biotechnology Co., Ltd). Briefly, total cholesterol (TC) was measured using the cholesterol oxidase‐p‐aminophenazone (CHOD‐PAP) method; triglyceride (TG) was assessed using the glycerol phosphate oxidase‐p‐aminophenazone (GPO‐PAP) method; high‐density lipoprotein cholesterol (HDL‐C) was assessed using the direct hydrogen peroxide method; low‐density lipoprotein cholesterol (LDL‐C) was assessed using the direct surfactant removal method; and ApoA1, ApoB, and lipoprotein (a) were assessed using the immunoturbidimetric method.

Statistical analysis
SPSS software, version 16.0 (IBM) was used for statistical analysis. Normally and non‐normally distributed continuous data were expressed as the mean ± standard deviation (SD) and median (interquartile range [IQR]), respectively. Categorical variables were reported as numbers (%). The Kruskal‐Wallis test was used to compare blood lipids among the severe group, the non‐severe group, and the healthy subjects, and post hoc pairwise comparisons were performed using the Nemenyi test. Differences between two groups were assessed using Student's t‐test or the Mann‐Whitney U test for normally and non‐normally distributed continuous data, respectively, and the chi‐square or Fisher's exact test was used for categorical variables. Multivariate logistic regression analysis was adopted to explore independent risk factors for COVID‐19 severity; receiver operating characteristic (ROC) curves were generated, and the areas under the ROC curves (AUCs) were calculated to evaluate prediction efficiency. A p‐value <0.05 indicated statistical significance.
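The modelling and ROC steps described above can be reproduced with standard Python tooling instead of SPSS. The snippet below is a minimal sketch using simulated data: the column names (`il6`, `apoa1`, `severe`), units, and generated values are hypothetical, and no attempt is made to mirror the paper's exact variable selection or confidence intervals.

```python
# Minimal sketch of the multivariate logistic regression and ROC evaluation
# described above, using Python instead of SPSS. Column names and data are
# hypothetical; 'severe' is 1 for severe cases and 0 otherwise.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 142
df = pd.DataFrame({
    "il6": rng.gamma(shape=2.0, scale=8.0, size=n),       # pg/ml, illustrative
    "apoa1": rng.normal(loc=120.0, scale=15.0, size=n),   # mg/dl, illustrative
})
# Simulated outcome loosely tied to the two markers (illustration only)
logit = -3.0 + 0.08 * df["il6"] - 0.03 * (df["apoa1"] - 120.0)
df["severe"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Multivariate logistic regression (adjusted odds ratios = exp(coefficients))
X = sm.add_constant(df[["il6", "apoa1"]])
model = sm.Logit(df["severe"], X).fit(disp=False)
print(np.exp(model.params))          # odds ratios per unit change

# ROC AUC of the combined risk score
risk_score = model.predict(X)
print("AUC:", roc_auc_score(df["severe"], risk_score))
```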
AUTHOR CONTRIBUTIONS
Zhe Zhu interpreted the data and wrote the paper. Yayun Yang, Linyan Fan, Shuyuan Ye, and Kehong Lou collected and analyzed the data. Xin Hua, Zuoan Huang, and Qiaoyun Shi performed laboratory analysis. Guosheng Gao designed the study and revised the paper.

ETHICAL APPROVAL
This study was approved by the institutional ethics board of HwaMei Hospital, University of Chinese Academy of Science (PJ‐NBEY‐KY‐2020–061–01).
[ "Severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) is a highly transmissible coronavirus that has caused an ever‐increasing number of coronavirus disease 2019 (COVID‐19) infections since December 2019 and spread rapidly worldwide. Although approximately 80% of patients infected with SARS‐CoV‐2 exhibit mild symptoms,1 the remaining severe cases may experience acute respiratory distress syndrome (ARDS), multiorgan failure (MOF), and death.2 Therefore, it is necessary to discriminate between severe and mild cases.\nPrevious studies have found that the development of severe COVID‐19 is associated with age and underlying diseases, and patients who develop severe disease are likely to suffer from aberrant inflammatory reactions and cytokine storms.1, 3 Consequently, several clinical characteristics, the inflammation index, and cytokine levels have been used as indicators for predicting the severity of COVID‐19 by us and others.4, 5 Emerging evidence suggests that lipid metabolism dysregulation might promote the progression of COVID‐19, as revealed by mass spectrometry (MS)‐based proteomics analysis.6, 7 Although MS analysis is not commonly performed, blood lipids are routinely examined using automatic biochemical instruments in clinical laboratories. Additionally, dyslipidemia has also been observed in other respiratory infectious diseases.8, 9, 10 Therefore, blood lipids may be considered potential and available indicators of COVID‐19 severity.\nIn a former study, serum hypolipidemia was identified in patients with COVID‐19.11 However, that study did not analyze other blood lipid components, such as apolipoprotein A1 (ApoA1), ApoB, and lipoprotein (a), and their predictive values for COVID‐19 severity are not fully understood. Therefore, to more comprehensively investigate blood lipid profiles in patients with COVID‐19 and determine their predictive value for disease severity, a retrospective study was performed.", "Study design and participants selection This was a single‐center retrospective study approved by the institutional ethics board (PJ‐NBEY‐KY‐2020–061–01). A total of 142 consecutive patients with COVID‐19 were included from January 23 to April 20, 2020. In addition, 77 age‐ and gender‐matched healthy subjects were selected for evaluating the characteristics of blood lipid profiles in patients with COVID‐19.\nThe diagnosis of COVID‐19 and its severity were determined according to the National Diagnosis and Treatment Protocol for Novel Coronavirus Infection‐Induced Pneumonia (6th Trial Version). Patients with confirmed COVID‐19 were diagnosed based on a positive SARS‐CoV‐2 nucleic acid RT‐PCR result using specimens derived from sputum, throat swabs, or nasopharyngeal swabs. Severe patients exhibited one of the following features: (a) respiratory distress with respiration rate (RR) greater than 30 breaths per minute; (b) blood oxygen saturation less than 93% at a state of rest; (c) arterial blood oxygen partial pressure/inhaled oxygen concentration less than 300 mmHg (1 mmHg = 0.133 kPa); or (d) lesion rapidly progressed by more than 50% within one or two days as determined by pulmonary imaging.\nGeneral clinical characteristics, including gender, age, comorbidities, initial symptoms, treatment, and laboratory test data, were collected from the electronic medical records (EMRs).\nThis was a single‐center retrospective study approved by the institutional ethics board (PJ‐NBEY‐KY‐2020–061–01). 
A total of 142 consecutive patients with COVID‐19 were included from January 23 to April 20, 2020. In addition, 77 age‐ and gender‐matched healthy subjects were selected for evaluating the characteristics of blood lipid profiles in patients with COVID‐19.\nThe diagnosis of COVID‐19 and its severity were determined according to the National Diagnosis and Treatment Protocol for Novel Coronavirus Infection‐Induced Pneumonia (6th Trial Version). Patients with confirmed COVID‐19 were diagnosed based on a positive SARS‐CoV‐2 nucleic acid RT‐PCR result using specimens derived from sputum, throat swabs, or nasopharyngeal swabs. Severe patients exhibited one of the following features: (a) respiratory distress with respiration rate (RR) greater than 30 breaths per minute; (b) blood oxygen saturation less than 93% at a state of rest; (c) arterial blood oxygen partial pressure/inhaled oxygen concentration less than 300 mmHg (1 mmHg = 0.133 kPa); or (d) lesion rapidly progressed by more than 50% within one or two days as determined by pulmonary imaging.\nGeneral clinical characteristics, including gender, age, comorbidities, initial symptoms, treatment, and laboratory test data, were collected from the electronic medical records (EMRs).\nDetermination of blood lipids Blood lipids were assessed using a fully automatic biochemical analyzer (ADVIA2400, Siemens) according to the manufacturer's instructions (Purebio Biotechnology Co., Ltd). Briefly, total cholesterol (TC) was measured using the cholesterol oxidase‐p‐aminophenazone (CHOD‐PAP) method; triglyceride (TG) was assessed using the glycerol phosphate oxidase‐p‐aminophenazone (GPO‐PAP) method; high‐density lipoprotein cholesterol (HDL‐C) was assessed using the direct‐hydrogen peroxide method; low‐density lipoprotein cholesterol (LDL‐C) was assessed using the direct‐surfactant removal method; and ApoA1, ApoB, and lipoprotein (a) were assessed using the immunoturbidimetric method.\nBlood lipids were assessed using a fully automatic biochemical analyzer (ADVIA2400, Siemens) according to the manufacturer's instructions (Purebio Biotechnology Co., Ltd). Briefly, total cholesterol (TC) was measured using the cholesterol oxidase‐p‐aminophenazone (CHOD‐PAP) method; triglyceride (TG) was assessed using the glycerol phosphate oxidase‐p‐aminophenazone (GPO‐PAP) method; high‐density lipoprotein cholesterol (HDL‐C) was assessed using the direct‐hydrogen peroxide method; low‐density lipoprotein cholesterol (LDL‐C) was assessed using the direct‐surfactant removal method; and ApoA1, ApoB, and lipoprotein (a) were assessed using the immunoturbidimetric method.\nStatistical analysis SPSS software, version 16.0 (IBM) was used for statistical analysis. Normally and non‐normally distributed continuous data were expressed as the mean ± SD (standard deviation) and median (interquartile range [IQR]), respectively. Categorical variables were reported as numbers (%). The Kruskal‐Wallis test was used to compare blood lipids among the severe group, non‐severe group, and healthy subjects, and post hoc pairwise comparisons were performed using the Nemenyi test. Differences between the two groups were assessed using Student's t‐test and Mann‐Whitney U test for normally and non‐normally distributed continuous data, respectively, and chi‐square or Fisher's exact tests were used for categorical variables. 
Multivariate logistic regression analysis was adopted to explore independent risk factors for COVID‐19 severity, receiver operator characteristic (ROC) curves were generated, and the areas under ROC curves (AUCs) were calculated to evaluate prediction efficiency. A p‐value <0.05 indicates statistical significance.\nSPSS software, version 16.0 (IBM) was used for statistical analysis. Normally and non‐normally distributed continuous data were expressed as the mean ± SD (standard deviation) and median (interquartile range [IQR]), respectively. Categorical variables were reported as numbers (%). The Kruskal‐Wallis test was used to compare blood lipids among the severe group, non‐severe group, and healthy subjects, and post hoc pairwise comparisons were performed using the Nemenyi test. Differences between the two groups were assessed using Student's t‐test and Mann‐Whitney U test for normally and non‐normally distributed continuous data, respectively, and chi‐square or Fisher's exact tests were used for categorical variables. Multivariate logistic regression analysis was adopted to explore independent risk factors for COVID‐19 severity, receiver operator characteristic (ROC) curves were generated, and the areas under ROC curves (AUCs) were calculated to evaluate prediction efficiency. A p‐value <0.05 indicates statistical significance.", "This was a single‐center retrospective study approved by the institutional ethics board (PJ‐NBEY‐KY‐2020–061–01). A total of 142 consecutive patients with COVID‐19 were included from January 23 to April 20, 2020. In addition, 77 age‐ and gender‐matched healthy subjects were selected for evaluating the characteristics of blood lipid profiles in patients with COVID‐19.\nThe diagnosis of COVID‐19 and its severity were determined according to the National Diagnosis and Treatment Protocol for Novel Coronavirus Infection‐Induced Pneumonia (6th Trial Version). Patients with confirmed COVID‐19 were diagnosed based on a positive SARS‐CoV‐2 nucleic acid RT‐PCR result using specimens derived from sputum, throat swabs, or nasopharyngeal swabs. Severe patients exhibited one of the following features: (a) respiratory distress with respiration rate (RR) greater than 30 breaths per minute; (b) blood oxygen saturation less than 93% at a state of rest; (c) arterial blood oxygen partial pressure/inhaled oxygen concentration less than 300 mmHg (1 mmHg = 0.133 kPa); or (d) lesion rapidly progressed by more than 50% within one or two days as determined by pulmonary imaging.\nGeneral clinical characteristics, including gender, age, comorbidities, initial symptoms, treatment, and laboratory test data, were collected from the electronic medical records (EMRs).", "Blood lipids were assessed using a fully automatic biochemical analyzer (ADVIA2400, Siemens) according to the manufacturer's instructions (Purebio Biotechnology Co., Ltd). Briefly, total cholesterol (TC) was measured using the cholesterol oxidase‐p‐aminophenazone (CHOD‐PAP) method; triglyceride (TG) was assessed using the glycerol phosphate oxidase‐p‐aminophenazone (GPO‐PAP) method; high‐density lipoprotein cholesterol (HDL‐C) was assessed using the direct‐hydrogen peroxide method; low‐density lipoprotein cholesterol (LDL‐C) was assessed using the direct‐surfactant removal method; and ApoA1, ApoB, and lipoprotein (a) were assessed using the immunoturbidimetric method.", "SPSS software, version 16.0 (IBM) was used for statistical analysis. 
Normally and non‐normally distributed continuous data were expressed as the mean ± SD (standard deviation) and median (interquartile range [IQR]), respectively. Categorical variables were reported as numbers (%). The Kruskal‐Wallis test was used to compare blood lipids among the severe group, non‐severe group, and healthy subjects, and post hoc pairwise comparisons were performed using the Nemenyi test. Differences between the two groups were assessed using Student's t‐test and Mann‐Whitney U test for normally and non‐normally distributed continuous data, respectively, and chi‐square or Fisher's exact tests were used for categorical variables. Multivariate logistic regression analysis was adopted to explore independent risk factors for COVID‐19 severity, receiver operator characteristic (ROC) curves were generated, and the areas under ROC curves (AUCs) were calculated to evaluate prediction efficiency. A p‐value <0.05 indicates statistical significance.", "General clinical characteristics In total, 142 consecutive patients with confirmed COVID‐19 were included in this study. The mean age was 49.10 ± 16.36 years, and 38.73% of the patients were male. Hypertension (37, 26.06%), diabetes (12, 8.45%), hepatic disease (10, 7.04%), and chronic lung disease (9, 6.34%) were the most common comorbidities. Fever (84, 59.15%) was the leading initial symptom, followed by cough (61, 42.96%), expectoration (32, 22.54%) and fatigue (27, 19.01%).\nAmong the 142 patients, 17 (11.97%) and 125 (88.03%) patients were classified into the severe and non‐severe groups during admission, respectively. Significant differences in age, body mass index (BMI), hypertension, hepatic disease, and fever were noted between the severe and non‐severe groups. Regarding clinical treatment, a greater proportion of patients in the severe group received glucocorticoids, antibiotics, oxygen, invasive mechanical ventilation, and intensive care unit treatment (Table 1).\nGeneral clinical characteristics of patients with COVID‐19\nData are presented as mean ± standard deviation or n (%). p‐value indicates the comparison between the non‐severe group and severe group.\nAbbreviations: COVID‐19, coronavirus disease 2019; ECMO, extracorporeal membrane oxygenation.\nIn total, 142 consecutive patients with confirmed COVID‐19 were included in this study. The mean age was 49.10 ± 16.36 years, and 38.73% of the patients were male. Hypertension (37, 26.06%), diabetes (12, 8.45%), hepatic disease (10, 7.04%), and chronic lung disease (9, 6.34%) were the most common comorbidities. Fever (84, 59.15%) was the leading initial symptom, followed by cough (61, 42.96%), expectoration (32, 22.54%) and fatigue (27, 19.01%).\nAmong the 142 patients, 17 (11.97%) and 125 (88.03%) patients were classified into the severe and non‐severe groups during admission, respectively. Significant differences in age, body mass index (BMI), hypertension, hepatic disease, and fever were noted between the severe and non‐severe groups. Regarding clinical treatment, a greater proportion of patients in the severe group received glucocorticoids, antibiotics, oxygen, invasive mechanical ventilation, and intensive care unit treatment (Table 1).\nGeneral clinical characteristics of patients with COVID‐19\nData are presented as mean ± standard deviation or n (%). 
p‐value indicates the comparison between the non‐severe group and severe group.\nAbbreviations: COVID‐19, coronavirus disease 2019; ECMO, extracorporeal membrane oxygenation.\nLaboratory findings We first compared the general laboratory parameters between healthy controls and COVID‐19 patients. Neutrophil%, C‐reactive protein (CRP), aspartate aminotransferase (AST), and lactic dehydrogenase (LDH) were elevated, while white blood cell (WBC) count, lymphocyte%, lymphocyte count, platelet count, red blood cell (RBC) count, hemoglobin, albumin, total bilirubin (TBil), blood urea nitrogen (BUN), and blood uric acid (BUA) were decreased in patients with COVID‐19 compared with healthy controls (Table 2).\nComparisons of demographic and laboratory variables between healthy controls and COVID‐19 patients\nData are presented as mean ± standard deviation, n (%), or medians (interquartile ranges).\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BUA, blood uric acid; BUN, blood urea nitrogen; COVID‐19, coronavirus disease 2019; CRP, C‐reactive protein; DBil, direct bilirubin; LDH, Lactic dehydrogenase; RBC, red blood cell; Scr, serum creatinine; TBil, total bilirubin; WBC, white blood cell.\nWe next compared general laboratory parameters, coagulation tests, and cytokine levels between severe and non‐severe COVID‐19 patients. Compared with those in the non‐severe group, patients in the severe group exhibited increased neutrophil%, fibrinogen, activated partial thromboplastin time (aPTT), CRP, interleukin‐10 (IL‐10), interleukin‐6 (IL‐6), interferon‐γ (INF‐γ), AST, and LDH levels, as well as reduced lymphocyte count, platelet count, lymphocyte%, and albumin levels (Table 3).\nBaseline laboratory parameters in patients with COVID‐19\nData are presented as medians (interquartile ranges).\nAbbreviations: COVID‐19, coronavirus disease 2019; WBC, white blood cell; RBC, red blood cell; PT, prothrombin time; aPTT, activated partial thromboplastin time; CRP, C‐reactive protein; IL‐2, interleukin‐2; IL‐4, interleukin‐4; IL‐6, interleukin‐6; IL‐10, interleukin‐10; IFN‐γ, interferon‐γ; TNF‐α, tumor necrosis factor‐α; TBil, total bilirubin; DBil, direct bilirubin; AST, aspartate aminotransferase; ALT, alanine aminotransferase; LDH, Lactic dehydrogenase; BUN, blood urea nitrogen; BUA, blood uric acid; Scr, serum creatinine.\nWe first compared the general laboratory parameters between healthy controls and COVID‐19 patients. Neutrophil%, C‐reactive protein (CRP), aspartate aminotransferase (AST), and lactic dehydrogenase (LDH) were elevated, while white blood cell (WBC) count, lymphocyte%, lymphocyte count, platelet count, red blood cell (RBC) count, hemoglobin, albumin, total bilirubin (TBil), blood urea nitrogen (BUN), and blood uric acid (BUA) were decreased in patients with COVID‐19 compared with healthy controls (Table 2).\nComparisons of demographic and laboratory variables between healthy controls and COVID‐19 patients\nData are presented as mean ± standard deviation, n (%), or medians (interquartile ranges).\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BUA, blood uric acid; BUN, blood urea nitrogen; COVID‐19, coronavirus disease 2019; CRP, C‐reactive protein; DBil, direct bilirubin; LDH, Lactic dehydrogenase; RBC, red blood cell; Scr, serum creatinine; TBil, total bilirubin; WBC, white blood cell.\nWe next compared general laboratory parameters, coagulation tests, and cytokine levels between severe and non‐severe COVID‐19 patients. 
Compared with those in the non‐severe group, patients in the severe group exhibited increased neutrophil%, fibrinogen, activated partial thromboplastin time (aPTT), CRP, interleukin‐10 (IL‐10), interleukin‐6 (IL‐6), interferon‐γ (INF‐γ), AST, and LDH levels, as well as reduced lymphocyte count, platelet count, lymphocyte%, and albumin levels (Table 3).\nBaseline laboratory parameters in patients with COVID‐19\nData are presented as medians (interquartile ranges).\nAbbreviations: COVID‐19, coronavirus disease 2019; WBC, white blood cell; RBC, red blood cell; PT, prothrombin time; aPTT, activated partial thromboplastin time; CRP, C‐reactive protein; IL‐2, interleukin‐2; IL‐4, interleukin‐4; IL‐6, interleukin‐6; IL‐10, interleukin‐10; IFN‐γ, interferon‐γ; TNF‐α, tumor necrosis factor‐α; TBil, total bilirubin; DBil, direct bilirubin; AST, aspartate aminotransferase; ALT, alanine aminotransferase; LDH, Lactic dehydrogenase; BUN, blood urea nitrogen; BUA, blood uric acid; Scr, serum creatinine.\nBaseline blood lipids Baseline blood lipids were obtained within 5 days of admission. TC, HDL‐C, LDL‐C, and ApoA1 gradually decreased from the healthy controls to the non‐severe group and the severe group. TG was higher in the non‐severe group than in the healthy controls; however, no significant differences were found between the severe and non‐severe groups or between the severe group and the healthy controls. There were no significant differences in ApoB or lipoprotein (a) among the three groups (Figure 1).\nComparisons of blood lipids among the healthy controls, non‐severe group, and severe group. Data are presented as medians (interquartile ranges). (A) The total cholesterol levels in the healthy controls, non‐severe group, and severe group were 4.97 (4.55–5.79), 4.08 (3.69–4.63), and 3.58 (3.06–3.85) mmol/L, respectively. (B) The triglyceride levels in the healthy controls, non‐severe group, and severe group were 1.04 (0.85–1.43), 1.44 (0.98–2.00), and 1.22 (0.83–2.29) mmol/L, respectively. (C) The HDL‐C levels in the healthy controls, non‐severe group, and severe group were 1.62 (1.35–1.92), 1.09 (0.91–1.29), and 0.93 (0.82–1.00) mmol/L, respectively. (D) The LDL‐C levels in the healthy controls, non‐severe group, and severe group were 2.77 (2.40–3.42), 2.54 (2.18–2.85), and 2.21 (1.93–2.49) mmol/L, respectively. (E) The ApoA1 levels in the healthy controls, non‐severe group, and severe group were 1.45 (1.31–1.65), 1.22 (1.12–1.34), and 0.98 (0.89–1.08) g/L, respectively. (F) The ApoB levels in the healthy controls, non‐severe group, and severe group were 0.86 (0.72–1.01), 0.81 (0.71–0.94), and 0.76 (0.66–0.86) g/L, respectively. (G) The lipoprotein (a) levels in the healthy controls, non‐severe group, and severe group were 114.80 (62.20–202.90), 87.15 (47.75–161.15), 94.45 (36.55–135.40) mg/L, respectively. The Kruskal‐Wallis test was used to compare differences among the three groups, and post hoc pairwise comparisons were performed using the Nemenyi test. HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; ApoB, apolipoprotein B\nBaseline blood lipids were obtained within 5 days of admission. TC, HDL‐C, LDL‐C, and ApoA1 gradually decreased from the healthy controls to the non‐severe group and the severe group. TG was higher in the non‐severe group than in the healthy controls; however, no significant differences were found between the severe and non‐severe groups or between the severe group and the healthy controls. 
There were no significant differences in ApoB or lipoprotein (a) among the three groups (Figure 1).\nComparisons of blood lipids among the healthy controls, non‐severe group, and severe group. Data are presented as medians (interquartile ranges). (A) The total cholesterol levels in the healthy controls, non‐severe group, and severe group were 4.97 (4.55–5.79), 4.08 (3.69–4.63), and 3.58 (3.06–3.85) mmol/L, respectively. (B) The triglyceride levels in the healthy controls, non‐severe group, and severe group were 1.04 (0.85–1.43), 1.44 (0.98–2.00), and 1.22 (0.83–2.29) mmol/L, respectively. (C) The HDL‐C levels in the healthy controls, non‐severe group, and severe group were 1.62 (1.35–1.92), 1.09 (0.91–1.29), and 0.93 (0.82–1.00) mmol/L, respectively. (D) The LDL‐C levels in the healthy controls, non‐severe group, and severe group were 2.77 (2.40–3.42), 2.54 (2.18–2.85), and 2.21 (1.93–2.49) mmol/L, respectively. (E) The ApoA1 levels in the healthy controls, non‐severe group, and severe group were 1.45 (1.31–1.65), 1.22 (1.12–1.34), and 0.98 (0.89–1.08) g/L, respectively. (F) The ApoB levels in the healthy controls, non‐severe group, and severe group were 0.86 (0.72–1.01), 0.81 (0.71–0.94), and 0.76 (0.66–0.86) g/L, respectively. (G) The lipoprotein (a) levels in the healthy controls, non‐severe group, and severe group were 114.80 (62.20–202.90), 87.15 (47.75–161.15), 94.45 (36.55–135.40) mg/L, respectively. The Kruskal‐Wallis test was used to compare differences among the three groups, and post hoc pairwise comparisons were performed using the Nemenyi test. HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; ApoB, apolipoprotein B\nRisk factors for COVID‐19 severity Potential risk factors, including several general clinical characteristics, immunoinflammatory markers, and blood lipids, were first identified by univariate logistic analysis. Age, BMI, hypertension, neutrophil%, lymphocyte%, lymphocyte count, platelet count, fibrinogen, CRP, IL‐6, IL‐10, HDL‐C, ApoA1, ALB, AST, and LDH were associated with the severity of COVID‐19 (p < 0.05). However, aPTT, interferon‐γ (IFN‐γ), TC, and LDL‐C were unrelated to COVID‐19 severity (p>0.05) (Figure 2). Next, variables with p < 0.1 in the univariate logistic analysis were entered into the multivariate logistic analysis. However, after adjusting for other potential risk factors, only IL‐6 (adjusted odds ratio [OR]: 1.097, 95% confidence interval [CI]: 1.034–1.165, p = 0.002) and ApoA1 (adjusted OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were identified as independent risk factors by multivariate logistic analysis (Figure 3). Therefore, a risk model was built using the combination of ApoA1 and IL‐6. The mathematical formula of the risk model was log (P)=12.303+0.093*IL‐6–0.145*ApoA1.\nUnivariate logistic regression of risk factors for COVID‐19 severity. Old age, high BMI, hypertension, increased neutrophil%, fibrinogen, C‐reactive protein, interleukin‐6, interleukin‐10, aspartate aminotransferase, and lactic dehydrogenase and decreased lymphocyte%, lymphocyte count, platelet count, HDL‐C, apolipoprotein A1, and albumin were associated with the severity of COVID‐19 in univariate logistic regression analysis (all p < 0.05). COVID‐19: coronavirus disease 2019, OR, odds ratio; CI, confidence interval; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol\nMultivariate logistic regression of independent risk factors for COVID‐19 severity. 
After adjusting for other potential risk factors, increased IL‐6 (OR: 1.097, 95% CI: 1.034–1.165, p = 0.002) and decreased ApoA1 (OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were recognized as independent risk factors for COVID‐19 severity. COVID‐19: coronavirus disease 2019, OR: odds ratio, CI: confidence interval\nIn the prediction of COVID‐19 severity, the AUCs (95% CI) of TC, HDL‐C, LDL‐C, ApoA1, IL‐6, and the risk model were 0.726 (0.645–0.798), 0.674 (0.590–0.750), 0.669 (0.585–0.746), 0.896 (0.834–0.941), 0.855 (0.786–0.908), and 0.977 (0.932–0.995), respectively (Figure 4, Table 4). In particular, the sensitivity and specificity of Apo A1 were 94.12% (95% CI: 71.20%–99.00%) and 80.80% (95% CI: 72.80%–87.30%), respectively, which were both the highest among the above single markers. Moreover, the risk model increased the levels of sensitivity and specificity to 100.00% (95% CI: 80.30%–100.00%) and 89.89% (95% CI: 81.40%–94.10%), respectively.\nReceiver operator characteristic curves of total cholesterol, HDL‐C, LDL‐C, ApoA1, IL‐6, and risk model for the severity of COVID‐19. COVID‐19: coronavirus disease 2019, HDL‐C: high‐density lipoprotein cholesterol, LDL‐C: low‐density lipoprotein cholesterol, ApoA1: apolipoprotein A1, IL‐6: interleukin‐6\nPredictive performance of blood lipids, interleukin‐6, and the risk model for COVID‐19 severity\nAbbreviations: COVID‐19, coronavirus disease 2019; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; IL‐6, Interleukin‐6; CI, confidence interval; PPV, positive predictive value; NPV, negative predictive value.", "In total, 142 consecutive patients with confirmed COVID‐19 were included in this study. The mean age was 49.10 ± 16.36 years, and 38.73% of the patients were male. Hypertension (37, 26.06%), diabetes (12, 8.45%), hepatic disease (10, 7.04%), and chronic lung disease (9, 6.34%) were the most common comorbidities. Fever (84, 59.15%) was the leading initial symptom, followed by cough (61, 42.96%), expectoration (32, 22.54%) and fatigue (27, 19.01%).\nAmong the 142 patients, 17 (11.97%) and 125 (88.03%) patients were classified into the severe and non‐severe groups during admission, respectively. Significant differences in age, body mass index (BMI), hypertension, hepatic disease, and fever were noted between the severe and non‐severe groups. Regarding clinical treatment, a greater proportion of patients in the severe group received glucocorticoids, antibiotics, oxygen, invasive mechanical ventilation, and intensive care unit treatment (Table 1).\nGeneral clinical characteristics of patients with COVID‐19\nData are presented as mean ± standard deviation or n (%). p‐value indicates the comparison between the non‐severe group and severe group.\nAbbreviations: COVID‐19, coronavirus disease 2019; ECMO, extracorporeal membrane oxygenation.", "We first compared the general laboratory parameters between healthy controls and COVID‐19 patients. 
Neutrophil%, C‐reactive protein (CRP), aspartate aminotransferase (AST), and lactic dehydrogenase (LDH) were elevated, while white blood cell (WBC) count, lymphocyte%, lymphocyte count, platelet count, red blood cell (RBC) count, hemoglobin, albumin, total bilirubin (TBil), blood urea nitrogen (BUN), and blood uric acid (BUA) were decreased in patients with COVID‐19 compared with healthy controls (Table 2).\nComparisons of demographic and laboratory variables between healthy controls and COVID‐19 patients\nData are presented as mean ± standard deviation, n (%), or medians (interquartile ranges).\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BUA, blood uric acid; BUN, blood urea nitrogen; COVID‐19, coronavirus disease 2019; CRP, C‐reactive protein; DBil, direct bilirubin; LDH, Lactic dehydrogenase; RBC, red blood cell; Scr, serum creatinine; TBil, total bilirubin; WBC, white blood cell.\nWe next compared general laboratory parameters, coagulation tests, and cytokine levels between severe and non‐severe COVID‐19 patients. Compared with those in the non‐severe group, patients in the severe group exhibited increased neutrophil%, fibrinogen, activated partial thromboplastin time (aPTT), CRP, interleukin‐10 (IL‐10), interleukin‐6 (IL‐6), interferon‐γ (INF‐γ), AST, and LDH levels, as well as reduced lymphocyte count, platelet count, lymphocyte%, and albumin levels (Table 3).\nBaseline laboratory parameters in patients with COVID‐19\nData are presented as medians (interquartile ranges).\nAbbreviations: COVID‐19, coronavirus disease 2019; WBC, white blood cell; RBC, red blood cell; PT, prothrombin time; aPTT, activated partial thromboplastin time; CRP, C‐reactive protein; IL‐2, interleukin‐2; IL‐4, interleukin‐4; IL‐6, interleukin‐6; IL‐10, interleukin‐10; IFN‐γ, interferon‐γ; TNF‐α, tumor necrosis factor‐α; TBil, total bilirubin; DBil, direct bilirubin; AST, aspartate aminotransferase; ALT, alanine aminotransferase; LDH, Lactic dehydrogenase; BUN, blood urea nitrogen; BUA, blood uric acid; Scr, serum creatinine.", "Baseline blood lipids were obtained within 5 days of admission. TC, HDL‐C, LDL‐C, and ApoA1 gradually decreased from the healthy controls to the non‐severe group and the severe group. TG was higher in the non‐severe group than in the healthy controls; however, no significant differences were found between the severe and non‐severe groups or between the severe group and the healthy controls. There were no significant differences in ApoB or lipoprotein (a) among the three groups (Figure 1).\nComparisons of blood lipids among the healthy controls, non‐severe group, and severe group. Data are presented as medians (interquartile ranges). (A) The total cholesterol levels in the healthy controls, non‐severe group, and severe group were 4.97 (4.55–5.79), 4.08 (3.69–4.63), and 3.58 (3.06–3.85) mmol/L, respectively. (B) The triglyceride levels in the healthy controls, non‐severe group, and severe group were 1.04 (0.85–1.43), 1.44 (0.98–2.00), and 1.22 (0.83–2.29) mmol/L, respectively. (C) The HDL‐C levels in the healthy controls, non‐severe group, and severe group were 1.62 (1.35–1.92), 1.09 (0.91–1.29), and 0.93 (0.82–1.00) mmol/L, respectively. (D) The LDL‐C levels in the healthy controls, non‐severe group, and severe group were 2.77 (2.40–3.42), 2.54 (2.18–2.85), and 2.21 (1.93–2.49) mmol/L, respectively. 
(E) The ApoA1 levels in the healthy controls, non‐severe group, and severe group were 1.45 (1.31–1.65), 1.22 (1.12–1.34), and 0.98 (0.89–1.08) g/L, respectively. (F) The ApoB levels in the healthy controls, non‐severe group, and severe group were 0.86 (0.72–1.01), 0.81 (0.71–0.94), and 0.76 (0.66–0.86) g/L, respectively. (G) The lipoprotein (a) levels in the healthy controls, non‐severe group, and severe group were 114.80 (62.20–202.90), 87.15 (47.75–161.15), 94.45 (36.55–135.40) mg/L, respectively. The Kruskal‐Wallis test was used to compare differences among the three groups, and post hoc pairwise comparisons were performed using the Nemenyi test. HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; ApoB, apolipoprotein B", "Potential risk factors, including several general clinical characteristics, immunoinflammatory markers, and blood lipids, were first identified by univariate logistic analysis. Age, BMI, hypertension, neutrophil%, lymphocyte%, lymphocyte count, platelet count, fibrinogen, CRP, IL‐6, IL‐10, HDL‐C, ApoA1, ALB, AST, and LDH were associated with the severity of COVID‐19 (p < 0.05). However, aPTT, interferon‐γ (IFN‐γ), TC, and LDL‐C were unrelated to COVID‐19 severity (p>0.05) (Figure 2). Next, variables with p < 0.1 in the univariate logistic analysis were entered into the multivariate logistic analysis. However, after adjusting for other potential risk factors, only IL‐6 (adjusted odds ratio [OR]: 1.097, 95% confidence interval [CI]: 1.034–1.165, p = 0.002) and ApoA1 (adjusted OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were identified as independent risk factors by multivariate logistic analysis (Figure 3). Therefore, a risk model was built using the combination of ApoA1 and IL‐6. The mathematical formula of the risk model was log (P)=12.303+0.093*IL‐6–0.145*ApoA1.\nUnivariate logistic regression of risk factors for COVID‐19 severity. Old age, high BMI, hypertension, increased neutrophil%, fibrinogen, C‐reactive protein, interleukin‐6, interleukin‐10, aspartate aminotransferase, and lactic dehydrogenase and decreased lymphocyte%, lymphocyte count, platelet count, HDL‐C, apolipoprotein A1, and albumin were associated with the severity of COVID‐19 in univariate logistic regression analysis (all p < 0.05). COVID‐19: coronavirus disease 2019, OR, odds ratio; CI, confidence interval; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol\nMultivariate logistic regression of independent risk factors for COVID‐19 severity. After adjusting for other potential risk factors, increased IL‐6 (OR: 1.097, 95% CI: 1.034–1.165, p = 0.002) and decreased ApoA1 (OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were recognized as independent risk factors for COVID‐19 severity. COVID‐19: coronavirus disease 2019, OR: odds ratio, CI: confidence interval\nIn the prediction of COVID‐19 severity, the AUCs (95% CI) of TC, HDL‐C, LDL‐C, ApoA1, IL‐6, and the risk model were 0.726 (0.645–0.798), 0.674 (0.590–0.750), 0.669 (0.585–0.746), 0.896 (0.834–0.941), 0.855 (0.786–0.908), and 0.977 (0.932–0.995), respectively (Figure 4, Table 4). In particular, the sensitivity and specificity of Apo A1 were 94.12% (95% CI: 71.20%–99.00%) and 80.80% (95% CI: 72.80%–87.30%), respectively, which were both the highest among the above single markers. 
Moreover, the risk model increased the levels of sensitivity and specificity to 100.00% (95% CI: 80.30%–100.00%) and 89.89% (95% CI: 81.40%–94.10%), respectively.\nReceiver operator characteristic curves of total cholesterol, HDL‐C, LDL‐C, ApoA1, IL‐6, and risk model for the severity of COVID‐19. COVID‐19: coronavirus disease 2019, HDL‐C: high‐density lipoprotein cholesterol, LDL‐C: low‐density lipoprotein cholesterol, ApoA1: apolipoprotein A1, IL‐6: interleukin‐6\nPredictive performance of blood lipids, interleukin‐6, and the risk model for COVID‐19 severity\nAbbreviations: COVID‐19, coronavirus disease 2019; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; IL‐6, Interleukin‐6; CI, confidence interval; PPV, positive predictive value; NPV, negative predictive value.", "In this study, baseline TC, HDL‐C, LDL‐C, and ApoA1 gradually decreased across the groups in the following order: healthy controls, non‐severe group, and severe group, indicating that they had potential roles in predicting the severity of COVID‐19. Other blood lipids, including TG, ApoB, and lipoprotein (a), had little value in distinguishing COVID‐19 severity. Additionally, we found that ApoA1 was most obviously decreased among the altered lipids in this study, representing an independent risk factor for COVID‐19 severity. The combination of ApoA1 and IL‐6 yielded even higher prediction efficiency.\nIn a recent study, Wei et al.11 observed that serum levels of TC, HDL‐C, and LDL‐C in patients with COVID‐19 were significantly lower than those in healthy subjects, especially in severe and critical cases, which was similar to our study. Another study also found that LDL‐C performed well in discriminating COVID‐19 severity, it was gradually decreased across moderate, severe, and critical cases; however, HDL‐C exhibited no significant differences among the three groups.12 These small discrepancies may be related to the heterogeneity of disease severity, different sample sizes, and testing methods. However, our study and others11, 12 all showed that blood lipids have potential auxiliary value in distinguishing severe COVID‐19 patients.\nApoA1, a major protein component of the HDL complex, is involved in “reverse cholesterol transport” by transporting excess cholesterol from peripheral cells back to the liver for excretion. In addition, ApoA1 has an anti‐inflammatory characteristic,13 suggesting its role in inflammatory diseases. Previous studies have revealed that serum ApoA1 is associated with the outcome of patients with sepsis and acute respiratory distress syndrome induced by pneumonia, as well as critically ill patients.14, 15, 16, 17 In acute inflammatory disease, serum amyloid A (SAA), an acute‐phase protein, displaces ApoA1 from the HDL complex; then, free ApoA1 is easily eliminated by the kidney, resulting in low levels in the peripheral blood.18 On the other hand, the liver is susceptible to attack by SARS‐CoV‐2, especially in severe cases19; therefore, reduced synthesis by the injured liver may also play a role.\nIL‐6 plays a key role in the development of COVID‐19, and its predictive value for disease severity has been revealed previously by us and others.4, 20, 21, 22 It was also found that increased IL‐6 was associated with poor outcomes.22, 23 In this study, IL‐6 and ApoA1 were identified as independent risk factors for COVID‐19 severity. 
The risk model established using these two markers exhibited the highest predictive value, with an AUC of 0.977 (95% CI: 0.932–0.995).\nApoA1 and its mimetic peptide D‐amino acids (D‐4F) exhibit therapeutic potential for treating cancer, influenza, sepsis, and ARDS, primarily due to their anti‐inflammatory, anti‐oxidant, and anti‐apoptotic properties.13, 24, 25, 26, 27 In addition, it is noteworthy that ApoA1 inhibits IL‐6 release and reduces macrophage activation.25 IL‐6 is the main participant in the cytokine storm, and macrophages are the primary source of IL‐6. Therefore, ApoA1 may exhibit therapeutic potential in treating patients with COVID‐19. It might be worthwhile to test the efficacy and safety of ApoA1 and its mimetics in treating these patients.\nThe main strength of this study is that the patients included in this study were treated without delay when infected with SARS‐CoV‐2, which may represent an early stage of the disease. Second, this study enrolled healthy controls to analyze trends in blood lipids among healthy subjects, non‐severe cases, and severe cases. Third, the predictive values of verified clinical characteristics and laboratory parameters were selected for comparison with blood lipids, making the results more credible. Finally, blood lipids were routinely tested by using an automatic biochemical analyzer, with clinical application value.\nA limitation of this study is that it was a single‐center retrospective study with a relatively small sample size that was not validated in internal or external cohorts. Therefore, a prospective study with a larger sample size is strongly encouraged.\nIn conclusion, this study sheds light on abnormal blood lipid profiles in patients with COVID‐19 compared with healthy subjects, especially in severe cases. Specifically, TC, HDL‐C, LDL‐C, and ApoA1 gradually decreased from the healthy controls to non‐severe and severe groups. Additionally, ApoA1 is a good indicator of COVID‐19 severity, and the combination of ApoA1 and IL‐6 enhances model predictability. These findings might be helpful in disclosing the pathogenesis of COVID‐19 and developing novel therapeutic strategies for COVID‐19.", "The authors declare that they have no competing interests.", "Zhe Zhu interpreted the data and wrote the paper. Yayun Yang, Linyan Fan, Shuyuan Ye, and Kehong Lou collected and analyzed the data. Xin Hua, Zuoan Huang, and Qiaoyun Shi performed laboratory analysis. Guosheng Gao designed the study and revised the paper.", "This study was approved by the institutional ethics board of HwaMei Hospital, University of Chinese Academy of Science (PJ‐NBEY‐KY‐2020–061–01)." ]
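As an aside for readers who want to reproduce the two‐step variable selection described in the Results (univariate screening at p < 0.1 followed by multivariate logistic regression reporting adjusted odds ratios with 95% CIs), the workflow could be sketched roughly as follows. This is only an illustrative sketch, not the authors' analysis code: the data frame `df`, the outcome column `severe`, and the candidate‐variable names are hypothetical, and the original analysis was performed in SPSS 16.0.

```python
import numpy as np
import statsmodels.api as sm

# df: a pandas DataFrame defined elsewhere, one row per patient;
# 'severe' is coded 1/0; the predictor column names below are hypothetical.
candidates = ["age", "BMI", "hypertension", "neutrophil_pct", "lymphocyte_pct",
              "IL6", "IL10", "CRP", "fibrinogen", "HDL_C", "ApoA1", "ALB", "AST", "LDH"]

# Step 1: univariate logistic regression; retain variables with p < 0.1.
selected = []
for var in candidates:
    uni = sm.Logit(df["severe"], sm.add_constant(df[[var]])).fit(disp=0)
    if uni.pvalues[var] < 0.1:
        selected.append(var)

# Step 2: multivariate logistic regression on the retained variables.
multi = sm.Logit(df["severe"], sm.add_constant(df[selected])).fit(disp=0)
adjusted_or = np.exp(multi.params)      # adjusted odds ratios
or_ci = np.exp(multi.conf_int())        # 95% confidence intervals
print(adjusted_or, or_ci, multi.pvalues)
```

With only 17 severe cases, such a model is prone to overfitting and separation issues, which is consistent with the authors' call for validation in larger cohorts.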
[ null, "materials-and-methods", null, null, null, "results", null, null, null, null, "discussion", "COI-statement", null, null ]
[ "ApoA1", "blood lipid", "COVID‐19", "disease severity", "SARS‐CoV‐2" ]
INTRODUCTION: Severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) is a highly transmissible coronavirus that has caused an ever‐increasing number of coronavirus disease 2019 (COVID‐19) infections since December 2019 and spread rapidly worldwide. Although approximately 80% of patients infected with SARS‐CoV‐2 exhibit mild symptoms,1 the remaining severe cases may experience acute respiratory distress syndrome (ARDS), multiorgan failure (MOF), and death.2 Therefore, it is necessary to discriminate between severe and mild cases. Previous studies have found that the development of severe COVID‐19 is associated with age and underlying diseases, and patients who develop severe disease are likely to suffer from aberrant inflammatory reactions and cytokine storms.1, 3 Consequently, several clinical characteristics, the inflammation index, and cytokine levels have been used as indicators for predicting the severity of COVID‐19 by us and others.4, 5 Emerging evidence suggests that lipid metabolism dysregulation might promote the progression of COVID‐19, as revealed by mass spectrometry (MS)‐based proteomics analysis.6, 7 Although MS analysis is not commonly performed, blood lipids are routinely examined using automatic biochemical instruments in clinical laboratories. Additionally, dyslipidemia has also been observed in other respiratory infectious diseases.8, 9, 10 Therefore, blood lipids may be considered potential and available indicators of COVID‐19 severity. In a former study, serum hypolipidemia was identified in patients with COVID‐19.11 However, that study did not analyze other blood lipid components, such as apolipoprotein A1 (ApoA1), ApoB, and lipoprotein (a), and their predictive values for COVID‐19 severity are not fully understood. Therefore, to more comprehensively investigate blood lipid profiles in patients with COVID‐19 and determine their predictive value for disease severity, a retrospective study was performed. MATERIAL AND METHODS: Study design and participants selection This was a single‐center retrospective study approved by the institutional ethics board (PJ‐NBEY‐KY‐2020–061–01). A total of 142 consecutive patients with COVID‐19 were included from January 23 to April 20, 2020. In addition, 77 age‐ and gender‐matched healthy subjects were selected for evaluating the characteristics of blood lipid profiles in patients with COVID‐19. The diagnosis of COVID‐19 and its severity were determined according to the National Diagnosis and Treatment Protocol for Novel Coronavirus Infection‐Induced Pneumonia (6th Trial Version). Patients with confirmed COVID‐19 were diagnosed based on a positive SARS‐CoV‐2 nucleic acid RT‐PCR result using specimens derived from sputum, throat swabs, or nasopharyngeal swabs. Severe patients exhibited one of the following features: (a) respiratory distress with respiration rate (RR) greater than 30 breaths per minute; (b) blood oxygen saturation less than 93% at a state of rest; (c) arterial blood oxygen partial pressure/inhaled oxygen concentration less than 300 mmHg (1 mmHg = 0.133 kPa); or (d) lesion rapidly progressed by more than 50% within one or two days as determined by pulmonary imaging. General clinical characteristics, including gender, age, comorbidities, initial symptoms, treatment, and laboratory test data, were collected from the electronic medical records (EMRs). This was a single‐center retrospective study approved by the institutional ethics board (PJ‐NBEY‐KY‐2020–061–01). 
A total of 142 consecutive patients with COVID‐19 were included from January 23 to April 20, 2020. In addition, 77 age‐ and gender‐matched healthy subjects were selected for evaluating the characteristics of blood lipid profiles in patients with COVID‐19. The diagnosis of COVID‐19 and its severity were determined according to the National Diagnosis and Treatment Protocol for Novel Coronavirus Infection‐Induced Pneumonia (6th Trial Version). Patients with confirmed COVID‐19 were diagnosed based on a positive SARS‐CoV‐2 nucleic acid RT‐PCR result using specimens derived from sputum, throat swabs, or nasopharyngeal swabs. Severe patients exhibited one of the following features: (a) respiratory distress with respiration rate (RR) greater than 30 breaths per minute; (b) blood oxygen saturation less than 93% at a state of rest; (c) arterial blood oxygen partial pressure/inhaled oxygen concentration less than 300 mmHg (1 mmHg = 0.133 kPa); or (d) lesion rapidly progressed by more than 50% within one or two days as determined by pulmonary imaging. General clinical characteristics, including gender, age, comorbidities, initial symptoms, treatment, and laboratory test data, were collected from the electronic medical records (EMRs). Determination of blood lipids Blood lipids were assessed using a fully automatic biochemical analyzer (ADVIA2400, Siemens) according to the manufacturer's instructions (Purebio Biotechnology Co., Ltd). Briefly, total cholesterol (TC) was measured using the cholesterol oxidase‐p‐aminophenazone (CHOD‐PAP) method; triglyceride (TG) was assessed using the glycerol phosphate oxidase‐p‐aminophenazone (GPO‐PAP) method; high‐density lipoprotein cholesterol (HDL‐C) was assessed using the direct‐hydrogen peroxide method; low‐density lipoprotein cholesterol (LDL‐C) was assessed using the direct‐surfactant removal method; and ApoA1, ApoB, and lipoprotein (a) were assessed using the immunoturbidimetric method. Blood lipids were assessed using a fully automatic biochemical analyzer (ADVIA2400, Siemens) according to the manufacturer's instructions (Purebio Biotechnology Co., Ltd). Briefly, total cholesterol (TC) was measured using the cholesterol oxidase‐p‐aminophenazone (CHOD‐PAP) method; triglyceride (TG) was assessed using the glycerol phosphate oxidase‐p‐aminophenazone (GPO‐PAP) method; high‐density lipoprotein cholesterol (HDL‐C) was assessed using the direct‐hydrogen peroxide method; low‐density lipoprotein cholesterol (LDL‐C) was assessed using the direct‐surfactant removal method; and ApoA1, ApoB, and lipoprotein (a) were assessed using the immunoturbidimetric method. Statistical analysis SPSS software, version 16.0 (IBM) was used for statistical analysis. Normally and non‐normally distributed continuous data were expressed as the mean ± SD (standard deviation) and median (interquartile range [IQR]), respectively. Categorical variables were reported as numbers (%). The Kruskal‐Wallis test was used to compare blood lipids among the severe group, non‐severe group, and healthy subjects, and post hoc pairwise comparisons were performed using the Nemenyi test. Differences between the two groups were assessed using Student's t‐test and Mann‐Whitney U test for normally and non‐normally distributed continuous data, respectively, and chi‐square or Fisher's exact tests were used for categorical variables. 
Multivariate logistic regression analysis was adopted to explore independent risk factors for COVID‐19 severity, receiver operator characteristic (ROC) curves were generated, and the areas under ROC curves (AUCs) were calculated to evaluate prediction efficiency. A p‐value <0.05 indicates statistical significance. SPSS software, version 16.0 (IBM) was used for statistical analysis. Normally and non‐normally distributed continuous data were expressed as the mean ± SD (standard deviation) and median (interquartile range [IQR]), respectively. Categorical variables were reported as numbers (%). The Kruskal‐Wallis test was used to compare blood lipids among the severe group, non‐severe group, and healthy subjects, and post hoc pairwise comparisons were performed using the Nemenyi test. Differences between the two groups were assessed using Student's t‐test and Mann‐Whitney U test for normally and non‐normally distributed continuous data, respectively, and chi‐square or Fisher's exact tests were used for categorical variables. Multivariate logistic regression analysis was adopted to explore independent risk factors for COVID‐19 severity, receiver operator characteristic (ROC) curves were generated, and the areas under ROC curves (AUCs) were calculated to evaluate prediction efficiency. A p‐value <0.05 indicates statistical significance. Study design and participants selection: This was a single‐center retrospective study approved by the institutional ethics board (PJ‐NBEY‐KY‐2020–061–01). A total of 142 consecutive patients with COVID‐19 were included from January 23 to April 20, 2020. In addition, 77 age‐ and gender‐matched healthy subjects were selected for evaluating the characteristics of blood lipid profiles in patients with COVID‐19. The diagnosis of COVID‐19 and its severity were determined according to the National Diagnosis and Treatment Protocol for Novel Coronavirus Infection‐Induced Pneumonia (6th Trial Version). Patients with confirmed COVID‐19 were diagnosed based on a positive SARS‐CoV‐2 nucleic acid RT‐PCR result using specimens derived from sputum, throat swabs, or nasopharyngeal swabs. Severe patients exhibited one of the following features: (a) respiratory distress with respiration rate (RR) greater than 30 breaths per minute; (b) blood oxygen saturation less than 93% at a state of rest; (c) arterial blood oxygen partial pressure/inhaled oxygen concentration less than 300 mmHg (1 mmHg = 0.133 kPa); or (d) lesion rapidly progressed by more than 50% within one or two days as determined by pulmonary imaging. General clinical characteristics, including gender, age, comorbidities, initial symptoms, treatment, and laboratory test data, were collected from the electronic medical records (EMRs). Determination of blood lipids: Blood lipids were assessed using a fully automatic biochemical analyzer (ADVIA2400, Siemens) according to the manufacturer's instructions (Purebio Biotechnology Co., Ltd). Briefly, total cholesterol (TC) was measured using the cholesterol oxidase‐p‐aminophenazone (CHOD‐PAP) method; triglyceride (TG) was assessed using the glycerol phosphate oxidase‐p‐aminophenazone (GPO‐PAP) method; high‐density lipoprotein cholesterol (HDL‐C) was assessed using the direct‐hydrogen peroxide method; low‐density lipoprotein cholesterol (LDL‐C) was assessed using the direct‐surfactant removal method; and ApoA1, ApoB, and lipoprotein (a) were assessed using the immunoturbidimetric method. Statistical analysis: SPSS software, version 16.0 (IBM) was used for statistical analysis. 
Normally and non‐normally distributed continuous data were expressed as the mean ± SD (standard deviation) and median (interquartile range [IQR]), respectively. Categorical variables were reported as numbers (%). The Kruskal‐Wallis test was used to compare blood lipids among the severe group, non‐severe group, and healthy subjects, and post hoc pairwise comparisons were performed using the Nemenyi test. Differences between the two groups were assessed using Student's t‐test and Mann‐Whitney U test for normally and non‐normally distributed continuous data, respectively, and chi‐square or Fisher's exact tests were used for categorical variables. Multivariate logistic regression analysis was adopted to explore independent risk factors for COVID‐19 severity, receiver operator characteristic (ROC) curves were generated, and the areas under ROC curves (AUCs) were calculated to evaluate prediction efficiency. A p‐value <0.05 indicates statistical significance. RESULTS: General clinical characteristics In total, 142 consecutive patients with confirmed COVID‐19 were included in this study. The mean age was 49.10 ± 16.36 years, and 38.73% of the patients were male. Hypertension (37, 26.06%), diabetes (12, 8.45%), hepatic disease (10, 7.04%), and chronic lung disease (9, 6.34%) were the most common comorbidities. Fever (84, 59.15%) was the leading initial symptom, followed by cough (61, 42.96%), expectoration (32, 22.54%) and fatigue (27, 19.01%). Among the 142 patients, 17 (11.97%) and 125 (88.03%) patients were classified into the severe and non‐severe groups during admission, respectively. Significant differences in age, body mass index (BMI), hypertension, hepatic disease, and fever were noted between the severe and non‐severe groups. Regarding clinical treatment, a greater proportion of patients in the severe group received glucocorticoids, antibiotics, oxygen, invasive mechanical ventilation, and intensive care unit treatment (Table 1). General clinical characteristics of patients with COVID‐19 Data are presented as mean ± standard deviation or n (%). p‐value indicates the comparison between the non‐severe group and severe group. Abbreviations: COVID‐19, coronavirus disease 2019; ECMO, extracorporeal membrane oxygenation. In total, 142 consecutive patients with confirmed COVID‐19 were included in this study. The mean age was 49.10 ± 16.36 years, and 38.73% of the patients were male. Hypertension (37, 26.06%), diabetes (12, 8.45%), hepatic disease (10, 7.04%), and chronic lung disease (9, 6.34%) were the most common comorbidities. Fever (84, 59.15%) was the leading initial symptom, followed by cough (61, 42.96%), expectoration (32, 22.54%) and fatigue (27, 19.01%). Among the 142 patients, 17 (11.97%) and 125 (88.03%) patients were classified into the severe and non‐severe groups during admission, respectively. Significant differences in age, body mass index (BMI), hypertension, hepatic disease, and fever were noted between the severe and non‐severe groups. Regarding clinical treatment, a greater proportion of patients in the severe group received glucocorticoids, antibiotics, oxygen, invasive mechanical ventilation, and intensive care unit treatment (Table 1). General clinical characteristics of patients with COVID‐19 Data are presented as mean ± standard deviation or n (%). p‐value indicates the comparison between the non‐severe group and severe group. Abbreviations: COVID‐19, coronavirus disease 2019; ECMO, extracorporeal membrane oxygenation. 
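As a small illustration of the severity case definition quoted in the Methods above (any single criterion is sufficient to classify a patient as severe), the rule can be written as a simple predicate. The function and argument names are hypothetical; the imaging criterion (lesion progression of more than 50% within one to two days) is reduced here to a boolean flag judged from radiology reports.

```python
def is_severe_covid19(resp_rate, spo2_pct, pao2_fio2_mmHg, rapid_lesion_progression):
    """Return True if any single severity criterion from the protocol is met."""
    return (
        resp_rate > 30                 # respiratory distress, RR > 30 breaths/min
        or spo2_pct < 93               # resting blood oxygen saturation < 93%
        or pao2_fio2_mmHg < 300        # PaO2/FiO2 < 300 mmHg (1 mmHg = 0.133 kPa)
        or rapid_lesion_progression    # >50% lesion progression within 1-2 days on imaging
    )
```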
Laboratory findings We first compared the general laboratory parameters between healthy controls and COVID‐19 patients. Neutrophil%, C‐reactive protein (CRP), aspartate aminotransferase (AST), and lactic dehydrogenase (LDH) were elevated, while white blood cell (WBC) count, lymphocyte%, lymphocyte count, platelet count, red blood cell (RBC) count, hemoglobin, albumin, total bilirubin (TBil), blood urea nitrogen (BUN), and blood uric acid (BUA) were decreased in patients with COVID‐19 compared with healthy controls (Table 2). Comparisons of demographic and laboratory variables between healthy controls and COVID‐19 patients Data are presented as mean ± standard deviation, n (%), or medians (interquartile ranges). Abbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BUA, blood uric acid; BUN, blood urea nitrogen; COVID‐19, coronavirus disease 2019; CRP, C‐reactive protein; DBil, direct bilirubin; LDH, Lactic dehydrogenase; RBC, red blood cell; Scr, serum creatinine; TBil, total bilirubin; WBC, white blood cell. We next compared general laboratory parameters, coagulation tests, and cytokine levels between severe and non‐severe COVID‐19 patients. Compared with those in the non‐severe group, patients in the severe group exhibited increased neutrophil%, fibrinogen, activated partial thromboplastin time (aPTT), CRP, interleukin‐10 (IL‐10), interleukin‐6 (IL‐6), interferon‐γ (INF‐γ), AST, and LDH levels, as well as reduced lymphocyte count, platelet count, lymphocyte%, and albumin levels (Table 3). Baseline laboratory parameters in patients with COVID‐19 Data are presented as medians (interquartile ranges). Abbreviations: COVID‐19, coronavirus disease 2019; WBC, white blood cell; RBC, red blood cell; PT, prothrombin time; aPTT, activated partial thromboplastin time; CRP, C‐reactive protein; IL‐2, interleukin‐2; IL‐4, interleukin‐4; IL‐6, interleukin‐6; IL‐10, interleukin‐10; IFN‐γ, interferon‐γ; TNF‐α, tumor necrosis factor‐α; TBil, total bilirubin; DBil, direct bilirubin; AST, aspartate aminotransferase; ALT, alanine aminotransferase; LDH, Lactic dehydrogenase; BUN, blood urea nitrogen; BUA, blood uric acid; Scr, serum creatinine. We first compared the general laboratory parameters between healthy controls and COVID‐19 patients. Neutrophil%, C‐reactive protein (CRP), aspartate aminotransferase (AST), and lactic dehydrogenase (LDH) were elevated, while white blood cell (WBC) count, lymphocyte%, lymphocyte count, platelet count, red blood cell (RBC) count, hemoglobin, albumin, total bilirubin (TBil), blood urea nitrogen (BUN), and blood uric acid (BUA) were decreased in patients with COVID‐19 compared with healthy controls (Table 2). Comparisons of demographic and laboratory variables between healthy controls and COVID‐19 patients Data are presented as mean ± standard deviation, n (%), or medians (interquartile ranges). Abbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BUA, blood uric acid; BUN, blood urea nitrogen; COVID‐19, coronavirus disease 2019; CRP, C‐reactive protein; DBil, direct bilirubin; LDH, Lactic dehydrogenase; RBC, red blood cell; Scr, serum creatinine; TBil, total bilirubin; WBC, white blood cell. We next compared general laboratory parameters, coagulation tests, and cytokine levels between severe and non‐severe COVID‐19 patients. 
Compared with those in the non‐severe group, patients in the severe group exhibited increased neutrophil%, fibrinogen, activated partial thromboplastin time (aPTT), CRP, interleukin‐10 (IL‐10), interleukin‐6 (IL‐6), interferon‐γ (INF‐γ), AST, and LDH levels, as well as reduced lymphocyte count, platelet count, lymphocyte%, and albumin levels (Table 3). Baseline laboratory parameters in patients with COVID‐19 Data are presented as medians (interquartile ranges). Abbreviations: COVID‐19, coronavirus disease 2019; WBC, white blood cell; RBC, red blood cell; PT, prothrombin time; aPTT, activated partial thromboplastin time; CRP, C‐reactive protein; IL‐2, interleukin‐2; IL‐4, interleukin‐4; IL‐6, interleukin‐6; IL‐10, interleukin‐10; IFN‐γ, interferon‐γ; TNF‐α, tumor necrosis factor‐α; TBil, total bilirubin; DBil, direct bilirubin; AST, aspartate aminotransferase; ALT, alanine aminotransferase; LDH, Lactic dehydrogenase; BUN, blood urea nitrogen; BUA, blood uric acid; Scr, serum creatinine. Baseline blood lipids Baseline blood lipids were obtained within 5 days of admission. TC, HDL‐C, LDL‐C, and ApoA1 gradually decreased from the healthy controls to the non‐severe group and the severe group. TG was higher in the non‐severe group than in the healthy controls; however, no significant differences were found between the severe and non‐severe groups or between the severe group and the healthy controls. There were no significant differences in ApoB or lipoprotein (a) among the three groups (Figure 1). Comparisons of blood lipids among the healthy controls, non‐severe group, and severe group. Data are presented as medians (interquartile ranges). (A) The total cholesterol levels in the healthy controls, non‐severe group, and severe group were 4.97 (4.55–5.79), 4.08 (3.69–4.63), and 3.58 (3.06–3.85) mmol/L, respectively. (B) The triglyceride levels in the healthy controls, non‐severe group, and severe group were 1.04 (0.85–1.43), 1.44 (0.98–2.00), and 1.22 (0.83–2.29) mmol/L, respectively. (C) The HDL‐C levels in the healthy controls, non‐severe group, and severe group were 1.62 (1.35–1.92), 1.09 (0.91–1.29), and 0.93 (0.82–1.00) mmol/L, respectively. (D) The LDL‐C levels in the healthy controls, non‐severe group, and severe group were 2.77 (2.40–3.42), 2.54 (2.18–2.85), and 2.21 (1.93–2.49) mmol/L, respectively. (E) The ApoA1 levels in the healthy controls, non‐severe group, and severe group were 1.45 (1.31–1.65), 1.22 (1.12–1.34), and 0.98 (0.89–1.08) g/L, respectively. (F) The ApoB levels in the healthy controls, non‐severe group, and severe group were 0.86 (0.72–1.01), 0.81 (0.71–0.94), and 0.76 (0.66–0.86) g/L, respectively. (G) The lipoprotein (a) levels in the healthy controls, non‐severe group, and severe group were 114.80 (62.20–202.90), 87.15 (47.75–161.15), 94.45 (36.55–135.40) mg/L, respectively. The Kruskal‐Wallis test was used to compare differences among the three groups, and post hoc pairwise comparisons were performed using the Nemenyi test. HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; ApoB, apolipoprotein B Baseline blood lipids were obtained within 5 days of admission. TC, HDL‐C, LDL‐C, and ApoA1 gradually decreased from the healthy controls to the non‐severe group and the severe group. TG was higher in the non‐severe group than in the healthy controls; however, no significant differences were found between the severe and non‐severe groups or between the severe group and the healthy controls. 
There were no significant differences in ApoB or lipoprotein (a) among the three groups (Figure 1). Comparisons of blood lipids among the healthy controls, non‐severe group, and severe group. Data are presented as medians (interquartile ranges). (A) The total cholesterol levels in the healthy controls, non‐severe group, and severe group were 4.97 (4.55–5.79), 4.08 (3.69–4.63), and 3.58 (3.06–3.85) mmol/L, respectively. (B) The triglyceride levels in the healthy controls, non‐severe group, and severe group were 1.04 (0.85–1.43), 1.44 (0.98–2.00), and 1.22 (0.83–2.29) mmol/L, respectively. (C) The HDL‐C levels in the healthy controls, non‐severe group, and severe group were 1.62 (1.35–1.92), 1.09 (0.91–1.29), and 0.93 (0.82–1.00) mmol/L, respectively. (D) The LDL‐C levels in the healthy controls, non‐severe group, and severe group were 2.77 (2.40–3.42), 2.54 (2.18–2.85), and 2.21 (1.93–2.49) mmol/L, respectively. (E) The ApoA1 levels in the healthy controls, non‐severe group, and severe group were 1.45 (1.31–1.65), 1.22 (1.12–1.34), and 0.98 (0.89–1.08) g/L, respectively. (F) The ApoB levels in the healthy controls, non‐severe group, and severe group were 0.86 (0.72–1.01), 0.81 (0.71–0.94), and 0.76 (0.66–0.86) g/L, respectively. (G) The lipoprotein (a) levels in the healthy controls, non‐severe group, and severe group were 114.80 (62.20–202.90), 87.15 (47.75–161.15), 94.45 (36.55–135.40) mg/L, respectively. The Kruskal‐Wallis test was used to compare differences among the three groups, and post hoc pairwise comparisons were performed using the Nemenyi test. HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; ApoB, apolipoprotein B Risk factors for COVID‐19 severity Potential risk factors, including several general clinical characteristics, immunoinflammatory markers, and blood lipids, were first identified by univariate logistic analysis. Age, BMI, hypertension, neutrophil%, lymphocyte%, lymphocyte count, platelet count, fibrinogen, CRP, IL‐6, IL‐10, HDL‐C, ApoA1, ALB, AST, and LDH were associated with the severity of COVID‐19 (p < 0.05). However, aPTT, interferon‐γ (IFN‐γ), TC, and LDL‐C were unrelated to COVID‐19 severity (p>0.05) (Figure 2). Next, variables with p < 0.1 in the univariate logistic analysis were entered into the multivariate logistic analysis. However, after adjusting for other potential risk factors, only IL‐6 (adjusted odds ratio [OR]: 1.097, 95% confidence interval [CI]: 1.034–1.165, p = 0.002) and ApoA1 (adjusted OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were identified as independent risk factors by multivariate logistic analysis (Figure 3). Therefore, a risk model was built using the combination of ApoA1 and IL‐6. The mathematical formula of the risk model was log (P)=12.303+0.093*IL‐6–0.145*ApoA1. Univariate logistic regression of risk factors for COVID‐19 severity. Old age, high BMI, hypertension, increased neutrophil%, fibrinogen, C‐reactive protein, interleukin‐6, interleukin‐10, aspartate aminotransferase, and lactic dehydrogenase and decreased lymphocyte%, lymphocyte count, platelet count, HDL‐C, apolipoprotein A1, and albumin were associated with the severity of COVID‐19 in univariate logistic regression analysis (all p < 0.05). COVID‐19: coronavirus disease 2019, OR, odds ratio; CI, confidence interval; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol Multivariate logistic regression of independent risk factors for COVID‐19 severity. 
After adjusting for other potential risk factors, increased IL‐6 (OR: 1.097, 95% CI: 1.034–1.165, p = 0.002) and decreased ApoA1 (OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were recognized as independent risk factors for COVID‐19 severity. COVID‐19: coronavirus disease 2019, OR: odds ratio, CI: confidence interval In the prediction of COVID‐19 severity, the AUCs (95% CI) of TC, HDL‐C, LDL‐C, ApoA1, IL‐6, and the risk model were 0.726 (0.645–0.798), 0.674 (0.590–0.750), 0.669 (0.585–0.746), 0.896 (0.834–0.941), 0.855 (0.786–0.908), and 0.977 (0.932–0.995), respectively (Figure 4, Table 4). In particular, the sensitivity and specificity of Apo A1 were 94.12% (95% CI: 71.20%–99.00%) and 80.80% (95% CI: 72.80%–87.30%), respectively, which were both the highest among the above single markers. Moreover, the risk model increased the levels of sensitivity and specificity to 100.00% (95% CI: 80.30%–100.00%) and 89.89% (95% CI: 81.40%–94.10%), respectively. Receiver operator characteristic curves of total cholesterol, HDL‐C, LDL‐C, ApoA1, IL‐6, and risk model for the severity of COVID‐19. COVID‐19: coronavirus disease 2019, HDL‐C: high‐density lipoprotein cholesterol, LDL‐C: low‐density lipoprotein cholesterol, ApoA1: apolipoprotein A1, IL‐6: interleukin‐6 Predictive performance of blood lipids, interleukin‐6, and the risk model for COVID‐19 severity Abbreviations: COVID‐19, coronavirus disease 2019; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; IL‐6, Interleukin‐6; CI, confidence interval; PPV, positive predictive value; NPV, negative predictive value. Potential risk factors, including several general clinical characteristics, immunoinflammatory markers, and blood lipids, were first identified by univariate logistic analysis. Age, BMI, hypertension, neutrophil%, lymphocyte%, lymphocyte count, platelet count, fibrinogen, CRP, IL‐6, IL‐10, HDL‐C, ApoA1, ALB, AST, and LDH were associated with the severity of COVID‐19 (p < 0.05). However, aPTT, interferon‐γ (IFN‐γ), TC, and LDL‐C were unrelated to COVID‐19 severity (p>0.05) (Figure 2). Next, variables with p < 0.1 in the univariate logistic analysis were entered into the multivariate logistic analysis. However, after adjusting for other potential risk factors, only IL‐6 (adjusted odds ratio [OR]: 1.097, 95% confidence interval [CI]: 1.034–1.165, p = 0.002) and ApoA1 (adjusted OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were identified as independent risk factors by multivariate logistic analysis (Figure 3). Therefore, a risk model was built using the combination of ApoA1 and IL‐6. The mathematical formula of the risk model was log (P)=12.303+0.093*IL‐6–0.145*ApoA1. Univariate logistic regression of risk factors for COVID‐19 severity. Old age, high BMI, hypertension, increased neutrophil%, fibrinogen, C‐reactive protein, interleukin‐6, interleukin‐10, aspartate aminotransferase, and lactic dehydrogenase and decreased lymphocyte%, lymphocyte count, platelet count, HDL‐C, apolipoprotein A1, and albumin were associated with the severity of COVID‐19 in univariate logistic regression analysis (all p < 0.05). COVID‐19: coronavirus disease 2019, OR, odds ratio; CI, confidence interval; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol Multivariate logistic regression of independent risk factors for COVID‐19 severity. 
After adjusting for other potential risk factors, increased IL‐6 (OR: 1.097, 95% CI: 1.034–1.165, p = 0.002) and decreased ApoA1 (OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were recognized as independent risk factors for COVID‐19 severity. COVID‐19: coronavirus disease 2019, OR: odds ratio, CI: confidence interval In the prediction of COVID‐19 severity, the AUCs (95% CI) of TC, HDL‐C, LDL‐C, ApoA1, IL‐6, and the risk model were 0.726 (0.645–0.798), 0.674 (0.590–0.750), 0.669 (0.585–0.746), 0.896 (0.834–0.941), 0.855 (0.786–0.908), and 0.977 (0.932–0.995), respectively (Figure 4, Table 4). In particular, the sensitivity and specificity of Apo A1 were 94.12% (95% CI: 71.20%–99.00%) and 80.80% (95% CI: 72.80%–87.30%), respectively, which were both the highest among the above single markers. Moreover, the risk model increased the levels of sensitivity and specificity to 100.00% (95% CI: 80.30%–100.00%) and 89.89% (95% CI: 81.40%–94.10%), respectively. Receiver operator characteristic curves of total cholesterol, HDL‐C, LDL‐C, ApoA1, IL‐6, and risk model for the severity of COVID‐19. COVID‐19: coronavirus disease 2019, HDL‐C: high‐density lipoprotein cholesterol, LDL‐C: low‐density lipoprotein cholesterol, ApoA1: apolipoprotein A1, IL‐6: interleukin‐6 Predictive performance of blood lipids, interleukin‐6, and the risk model for COVID‐19 severity Abbreviations: COVID‐19, coronavirus disease 2019; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; IL‐6, Interleukin‐6; CI, confidence interval; PPV, positive predictive value; NPV, negative predictive value. General clinical characteristics: In total, 142 consecutive patients with confirmed COVID‐19 were included in this study. The mean age was 49.10 ± 16.36 years, and 38.73% of the patients were male. Hypertension (37, 26.06%), diabetes (12, 8.45%), hepatic disease (10, 7.04%), and chronic lung disease (9, 6.34%) were the most common comorbidities. Fever (84, 59.15%) was the leading initial symptom, followed by cough (61, 42.96%), expectoration (32, 22.54%) and fatigue (27, 19.01%). Among the 142 patients, 17 (11.97%) and 125 (88.03%) patients were classified into the severe and non‐severe groups during admission, respectively. Significant differences in age, body mass index (BMI), hypertension, hepatic disease, and fever were noted between the severe and non‐severe groups. Regarding clinical treatment, a greater proportion of patients in the severe group received glucocorticoids, antibiotics, oxygen, invasive mechanical ventilation, and intensive care unit treatment (Table 1). General clinical characteristics of patients with COVID‐19 Data are presented as mean ± standard deviation or n (%). p‐value indicates the comparison between the non‐severe group and severe group. Abbreviations: COVID‐19, coronavirus disease 2019; ECMO, extracorporeal membrane oxygenation. Laboratory findings: We first compared the general laboratory parameters between healthy controls and COVID‐19 patients. Neutrophil%, C‐reactive protein (CRP), aspartate aminotransferase (AST), and lactic dehydrogenase (LDH) were elevated, while white blood cell (WBC) count, lymphocyte%, lymphocyte count, platelet count, red blood cell (RBC) count, hemoglobin, albumin, total bilirubin (TBil), blood urea nitrogen (BUN), and blood uric acid (BUA) were decreased in patients with COVID‐19 compared with healthy controls (Table 2). 
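The group comparisons summarized above (Kruskal‐Wallis with Nemenyi post hoc tests across the three groups, and Mann‐Whitney U tests between severe and non‐severe patients for non‐normally distributed markers) could be sketched as follows. The data frame `df` and its column names are hypothetical; the Nemenyi step relies on the third‐party scikit‐posthocs package, since SciPy itself does not provide it, and this is not the software actually used in the study (SPSS 16.0).

```python
from scipy import stats
import scikit_posthocs as sp  # third-party package for post hoc tests

# df: a pandas DataFrame, one row per subject, with a 'group' label in
# {"healthy", "non-severe", "severe"} and numeric marker columns (hypothetical names).
apoa1_by_group = [g["ApoA1"].dropna() for _, g in df.groupby("group")]

# Three-group comparison of a lipid marker, then Nemenyi post hoc pairwise tests.
h_stat, p_overall = stats.kruskal(*apoa1_by_group)
pairwise_p = sp.posthoc_nemenyi(df, val_col="ApoA1", group_col="group")

# Two-group comparison (severe vs non-severe) for a non-normally distributed marker.
u_stat, p_il6 = stats.mannwhitneyu(
    df.loc[df["group"] == "severe", "IL6"].dropna(),
    df.loc[df["group"] == "non-severe", "IL6"].dropna(),
    alternative="two-sided",
)
```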
Comparisons of demographic and laboratory variables between healthy controls and COVID‐19 patients Data are presented as mean ± standard deviation, n (%), or medians (interquartile ranges). Abbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BUA, blood uric acid; BUN, blood urea nitrogen; COVID‐19, coronavirus disease 2019; CRP, C‐reactive protein; DBil, direct bilirubin; LDH, Lactic dehydrogenase; RBC, red blood cell; Scr, serum creatinine; TBil, total bilirubin; WBC, white blood cell. We next compared general laboratory parameters, coagulation tests, and cytokine levels between severe and non‐severe COVID‐19 patients. Compared with those in the non‐severe group, patients in the severe group exhibited increased neutrophil%, fibrinogen, activated partial thromboplastin time (aPTT), CRP, interleukin‐10 (IL‐10), interleukin‐6 (IL‐6), interferon‐γ (INF‐γ), AST, and LDH levels, as well as reduced lymphocyte count, platelet count, lymphocyte%, and albumin levels (Table 3). Baseline laboratory parameters in patients with COVID‐19 Data are presented as medians (interquartile ranges). Abbreviations: COVID‐19, coronavirus disease 2019; WBC, white blood cell; RBC, red blood cell; PT, prothrombin time; aPTT, activated partial thromboplastin time; CRP, C‐reactive protein; IL‐2, interleukin‐2; IL‐4, interleukin‐4; IL‐6, interleukin‐6; IL‐10, interleukin‐10; IFN‐γ, interferon‐γ; TNF‐α, tumor necrosis factor‐α; TBil, total bilirubin; DBil, direct bilirubin; AST, aspartate aminotransferase; ALT, alanine aminotransferase; LDH, Lactic dehydrogenase; BUN, blood urea nitrogen; BUA, blood uric acid; Scr, serum creatinine. Baseline blood lipids: Baseline blood lipids were obtained within 5 days of admission. TC, HDL‐C, LDL‐C, and ApoA1 gradually decreased from the healthy controls to the non‐severe group and the severe group. TG was higher in the non‐severe group than in the healthy controls; however, no significant differences were found between the severe and non‐severe groups or between the severe group and the healthy controls. There were no significant differences in ApoB or lipoprotein (a) among the three groups (Figure 1). Comparisons of blood lipids among the healthy controls, non‐severe group, and severe group. Data are presented as medians (interquartile ranges). (A) The total cholesterol levels in the healthy controls, non‐severe group, and severe group were 4.97 (4.55–5.79), 4.08 (3.69–4.63), and 3.58 (3.06–3.85) mmol/L, respectively. (B) The triglyceride levels in the healthy controls, non‐severe group, and severe group were 1.04 (0.85–1.43), 1.44 (0.98–2.00), and 1.22 (0.83–2.29) mmol/L, respectively. (C) The HDL‐C levels in the healthy controls, non‐severe group, and severe group were 1.62 (1.35–1.92), 1.09 (0.91–1.29), and 0.93 (0.82–1.00) mmol/L, respectively. (D) The LDL‐C levels in the healthy controls, non‐severe group, and severe group were 2.77 (2.40–3.42), 2.54 (2.18–2.85), and 2.21 (1.93–2.49) mmol/L, respectively. (E) The ApoA1 levels in the healthy controls, non‐severe group, and severe group were 1.45 (1.31–1.65), 1.22 (1.12–1.34), and 0.98 (0.89–1.08) g/L, respectively. (F) The ApoB levels in the healthy controls, non‐severe group, and severe group were 0.86 (0.72–1.01), 0.81 (0.71–0.94), and 0.76 (0.66–0.86) g/L, respectively. (G) The lipoprotein (a) levels in the healthy controls, non‐severe group, and severe group were 114.80 (62.20–202.90), 87.15 (47.75–161.15), 94.45 (36.55–135.40) mg/L, respectively. 
The Kruskal‐Wallis test was used to compare differences among the three groups, and post hoc pairwise comparisons were performed using the Nemenyi test. HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; ApoB, apolipoprotein B Risk factors for COVID‐19 severity: Potential risk factors, including several general clinical characteristics, immunoinflammatory markers, and blood lipids, were first identified by univariate logistic analysis. Age, BMI, hypertension, neutrophil%, lymphocyte%, lymphocyte count, platelet count, fibrinogen, CRP, IL‐6, IL‐10, HDL‐C, ApoA1, ALB, AST, and LDH were associated with the severity of COVID‐19 (p < 0.05). However, aPTT, interferon‐γ (IFN‐γ), TC, and LDL‐C were unrelated to COVID‐19 severity (p>0.05) (Figure 2). Next, variables with p < 0.1 in the univariate logistic analysis were entered into the multivariate logistic analysis. However, after adjusting for other potential risk factors, only IL‐6 (adjusted odds ratio [OR]: 1.097, 95% confidence interval [CI]: 1.034–1.165, p = 0.002) and ApoA1 (adjusted OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were identified as independent risk factors by multivariate logistic analysis (Figure 3). Therefore, a risk model was built using the combination of ApoA1 and IL‐6. The mathematical formula of the risk model was log (P)=12.303+0.093*IL‐6–0.145*ApoA1. Univariate logistic regression of risk factors for COVID‐19 severity. Old age, high BMI, hypertension, increased neutrophil%, fibrinogen, C‐reactive protein, interleukin‐6, interleukin‐10, aspartate aminotransferase, and lactic dehydrogenase and decreased lymphocyte%, lymphocyte count, platelet count, HDL‐C, apolipoprotein A1, and albumin were associated with the severity of COVID‐19 in univariate logistic regression analysis (all p < 0.05). COVID‐19: coronavirus disease 2019, OR, odds ratio; CI, confidence interval; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol Multivariate logistic regression of independent risk factors for COVID‐19 severity. After adjusting for other potential risk factors, increased IL‐6 (OR: 1.097, 95% CI: 1.034–1.165, p = 0.002) and decreased ApoA1 (OR: 0.865, 95% CI: 0.800–0.935, p < 0.001) were recognized as independent risk factors for COVID‐19 severity. COVID‐19: coronavirus disease 2019, OR: odds ratio, CI: confidence interval In the prediction of COVID‐19 severity, the AUCs (95% CI) of TC, HDL‐C, LDL‐C, ApoA1, IL‐6, and the risk model were 0.726 (0.645–0.798), 0.674 (0.590–0.750), 0.669 (0.585–0.746), 0.896 (0.834–0.941), 0.855 (0.786–0.908), and 0.977 (0.932–0.995), respectively (Figure 4, Table 4). In particular, the sensitivity and specificity of Apo A1 were 94.12% (95% CI: 71.20%–99.00%) and 80.80% (95% CI: 72.80%–87.30%), respectively, which were both the highest among the above single markers. Moreover, the risk model increased the levels of sensitivity and specificity to 100.00% (95% CI: 80.30%–100.00%) and 89.89% (95% CI: 81.40%–94.10%), respectively. Receiver operator characteristic curves of total cholesterol, HDL‐C, LDL‐C, ApoA1, IL‐6, and risk model for the severity of COVID‐19. 
COVID‐19: coronavirus disease 2019, HDL‐C: high‐density lipoprotein cholesterol, LDL‐C: low‐density lipoprotein cholesterol, ApoA1: apolipoprotein A1, IL‐6: interleukin‐6 Predictive performance of blood lipids, interleukin‐6, and the risk model for COVID‐19 severity Abbreviations: COVID‐19, coronavirus disease 2019; HDL‐C, high‐density lipoprotein cholesterol; LDL‐C, low‐density lipoprotein cholesterol; ApoA1, apolipoprotein A1; IL‐6, Interleukin‐6; CI, confidence interval; PPV, positive predictive value; NPV, negative predictive value. DISCUSSION: In this study, baseline TC, HDL‐C, LDL‐C, and ApoA1 gradually decreased across the groups in the following order: healthy controls, non‐severe group, and severe group, indicating that they had potential roles in predicting the severity of COVID‐19. Other blood lipids, including TG, ApoB, and lipoprotein (a), had little value in distinguishing COVID‐19 severity. Additionally, we found that ApoA1 was most obviously decreased among the altered lipids in this study, representing an independent risk factor for COVID‐19 severity. The combination of ApoA1 and IL‐6 yielded even higher prediction efficiency. In a recent study, Wei et al.11 observed that serum levels of TC, HDL‐C, and LDL‐C in patients with COVID‐19 were significantly lower than those in healthy subjects, especially in severe and critical cases, which was similar to our study. Another study also found that LDL‐C performed well in discriminating COVID‐19 severity, it was gradually decreased across moderate, severe, and critical cases; however, HDL‐C exhibited no significant differences among the three groups.12 These small discrepancies may be related to the heterogeneity of disease severity, different sample sizes, and testing methods. However, our study and others11, 12 all showed that blood lipids have potential auxiliary value in distinguishing severe COVID‐19 patients. ApoA1, a major protein component of the HDL complex, is involved in “reverse cholesterol transport” by transporting excess cholesterol from peripheral cells back to the liver for excretion. In addition, ApoA1 has an anti‐inflammatory characteristic,13 suggesting its role in inflammatory diseases. Previous studies have revealed that serum ApoA1 is associated with the outcome of patients with sepsis and acute respiratory distress syndrome induced by pneumonia, as well as critically ill patients.14, 15, 16, 17 In acute inflammatory disease, serum amyloid A (SAA), an acute‐phase protein, displaces ApoA1 from the HDL complex; then, free ApoA1 is easily eliminated by the kidney, resulting in low levels in the peripheral blood.18 On the other hand, the liver is susceptible to attack by SARS‐CoV‐2, especially in severe cases19; therefore, reduced synthesis by the injured liver may also play a role. IL‐6 plays a key role in the development of COVID‐19, and its predictive value for disease severity has been revealed previously by us and others.4, 20, 21, 22 It was also found that increased IL‐6 was associated with poor outcomes.22, 23 In this study, IL‐6 and ApoA1 were identified as independent risk factors for COVID‐19 severity. The risk model established using these two markers exhibited the highest predictive value, with an AUC of 0.977 (95% CI: 0.932–0.995). 
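Given the coefficients reported for the combined model, a predicted probability of severe disease can be computed for an individual patient, as sketched below. Two caveats: the paper writes the model as log(P) = 12.303 + 0.093 × IL‐6 − 0.145 × ApoA1, which is assumed here to denote the logit (log‐odds), as is standard for logistic regression, and the inputs must be supplied in the same units used when the model was fitted, which are not restated alongside the formula.

```python
import math

def predicted_probability_of_severe_covid19(il6, apoa1):
    """Illustrative evaluation of the reported ApoA1 + IL-6 risk model.

    Assumes log(P) in the paper denotes the logit; il6 and apoa1 must be given
    in the units used to fit the model (not restated with the formula).
    """
    linear_predictor = 12.303 + 0.093 * il6 - 0.145 * apoa1
    return 1.0 / (1.0 + math.exp(-linear_predictor))
```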
ApoA1 and its mimetic peptide D‐4F (composed of D‐amino acids) exhibit therapeutic potential for treating cancer, influenza, sepsis, and ARDS, primarily due to their anti‐inflammatory, anti‐oxidant, and anti‐apoptotic properties.13, 24, 25, 26, 27 In addition, it is noteworthy that ApoA1 inhibits IL‐6 release and reduces macrophage activation.25 IL‐6 is the main participant in the cytokine storm, and macrophages are the primary source of IL‐6. Therefore, ApoA1 may exhibit therapeutic potential in treating patients with COVID‐19. It might be worthwhile to test the efficacy and safety of ApoA1 and its mimetics in treating these patients. The main strength of this study is that the included patients were treated without delay after SARS‐CoV‐2 infection, so the measurements may reflect an early stage of the disease. Second, this study enrolled healthy controls to analyze trends in blood lipids among healthy subjects, non‐severe cases, and severe cases. Third, the predictive values of verified clinical characteristics and laboratory parameters were selected for comparison with blood lipids, making the results more credible. Finally, blood lipids were routinely measured with an automatic biochemical analyzer, making the approach readily applicable in clinical practice. A limitation of this study is that it was a single‐center retrospective study with a relatively small sample size, and the findings were not validated in internal or external cohorts. Therefore, a prospective study with a larger sample size is strongly encouraged. In conclusion, this study sheds light on abnormal blood lipid profiles in patients with COVID‐19 compared with healthy subjects, especially in severe cases. Specifically, TC, HDL‐C, LDL‐C, and ApoA1 gradually decreased from healthy controls to the non‐severe and severe groups. Additionally, ApoA1 is a good indicator of COVID‐19 severity, and the combination of ApoA1 and IL‐6 further enhances predictive performance. These findings might help elucidate the pathogenesis of COVID‐19 and guide the development of novel therapeutic strategies. CONFLICT OF INTEREST: The authors declare that they have no competing interests. AUTHOR CONTRIBUTIONS: Zhe Zhu interpreted the data and wrote the paper. Yayun Yang, Linyan Fan, Shuyuan Ye, and Kehong Lou collected and analyzed the data. Xin Hua, Zuoan Huang, and Qiaoyun Shi performed laboratory analysis. Guosheng Gao designed the study and revised the paper. ETHICAL APPROVAL: This study was approved by the institutional ethics board of HwaMei Hospital, University of Chinese Academy of Science (PJ‐NBEY‐KY‐2020–061–01).
Background: Dyslipidemia has been observed in patients with coronavirus disease 2019 (COVID-19). This study aimed to investigate blood lipid profiles in patients with COVID-19 and to explore their predictive values for COVID-19 severity. Methods: A total of 142 consecutive patients with COVID-19 were included in this single-center retrospective study. Blood lipid profile characteristics were investigated in patients with COVID-19 in comparison with 77 age- and gender-matched healthy subjects, their predictive values for COVID-19 severity were analyzed by using multivariable logistic regression analysis, and their prediction efficiencies were evaluated by using receiver operator characteristic (ROC) curves. Results: There were 125 and 17 cases in the non-severe and severe groups, respectively. Total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), and apolipoprotein A1 (ApoA1) gradually decreased across the groups in the following order: healthy controls, non-severe group, and severe group. ApoA1 was identified as an independent risk factor for COVID-19 severity (adjusted odds ratio [OR]: 0.865, 95% confidence interval [CI]: 0.800-0.935, p < 0.001), along with interleukin-6 (IL-6) (adjusted OR: 1.097, 95% CI: 1.034-1.165, p = 0.002). ApoA1 exhibited the highest area under the ROC curve (AUC) among all single markers (AUC: 0.896, 95% CI: 0.834-0.941); moreover, the risk model established using ApoA1 and IL-6 enhanced prediction efficiency (AUC: 0.977, 95% CI: 0.932-0.995). Conclusions: Blood lipid profiles in patients with COVID-19 are quite abnormal compared with those in healthy subjects, especially in severe cases. Serum ApoA1 may represent a good indicator for predicting the severity of COVID-19.
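The abstract describes evaluating prediction efficiency with receiver operator characteristic (ROC) curves. As a minimal sketch of that kind of evaluation (not the authors' actual code), the snippet below uses scikit-learn to compute an AUC together with the sensitivity and specificity at the Youden-index cutoff; y_true and score are placeholders for the per-patient severity labels and a marker or risk-score vector, which are not part of this record.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def roc_summary(y_true, score):
    """AUC plus sensitivity/specificity at the Youden-index cutoff (illustrative)."""
    auc = roc_auc_score(y_true, score)
    fpr, tpr, thresholds = roc_curve(y_true, score)
    best = int(np.argmax(tpr - fpr))  # Youden index J = sensitivity + specificity - 1
    return {
        "auc": auc,
        "cutoff": thresholds[best],
        "sensitivity": tpr[best],
        "specificity": 1.0 - fpr[best],
    }

# Usage (placeholders): y_true would hold 1 for severe and 0 for non-severe cases;
# score could be IL-6, a sign-flipped ApoA1 value (lower ApoA1 means higher risk),
# or the combined ApoA1 + IL-6 risk score.
# print(roc_summary(y_true, score))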
null
null
8,410
359
[ 312, 244, 108, 176, 262, 427, 444, 689, 51, 23 ]
14
[ "severe", "19", "covid", "covid 19", "severe group", "group", "blood", "patients", "non", "apoa1" ]
[ "novel coronavirus infection", "syndrome coronavirus sars", "19 coronavirus", "coronavirus disease 2019", "respiratory syndrome coronavirus" ]
null
null
null
[CONTENT] ApoA1 | blood lipid | COVID‐19 | disease severity | SARS‐CoV‐2 [SUMMARY]
null
[CONTENT] ApoA1 | blood lipid | COVID‐19 | disease severity | SARS‐CoV‐2 [SUMMARY]
null
[CONTENT] ApoA1 | blood lipid | COVID‐19 | disease severity | SARS‐CoV‐2 [SUMMARY]
null
[CONTENT] Adult | Aged | Apolipoprotein A-I | Area Under Curve | Biomarkers | COVID-19 | Case-Control Studies | Cholesterol, HDL | Cholesterol, LDL | Comorbidity | Female | Humans | Lipids | Male | Middle Aged | Retrospective Studies | Risk Factors | Severity of Illness Index [SUMMARY]
null
[CONTENT] Adult | Aged | Apolipoprotein A-I | Area Under Curve | Biomarkers | COVID-19 | Case-Control Studies | Cholesterol, HDL | Cholesterol, LDL | Comorbidity | Female | Humans | Lipids | Male | Middle Aged | Retrospective Studies | Risk Factors | Severity of Illness Index [SUMMARY]
null
[CONTENT] Adult | Aged | Apolipoprotein A-I | Area Under Curve | Biomarkers | COVID-19 | Case-Control Studies | Cholesterol, HDL | Cholesterol, LDL | Comorbidity | Female | Humans | Lipids | Male | Middle Aged | Retrospective Studies | Risk Factors | Severity of Illness Index [SUMMARY]
null
[CONTENT] novel coronavirus infection | syndrome coronavirus sars | 19 coronavirus | coronavirus disease 2019 | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] novel coronavirus infection | syndrome coronavirus sars | 19 coronavirus | coronavirus disease 2019 | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] novel coronavirus infection | syndrome coronavirus sars | 19 coronavirus | coronavirus disease 2019 | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] severe | 19 | covid | covid 19 | severe group | group | blood | patients | non | apoa1 [SUMMARY]
null
[CONTENT] severe | 19 | covid | covid 19 | severe group | group | blood | patients | non | apoa1 [SUMMARY]
null
[CONTENT] severe | 19 | covid | covid 19 | severe group | group | blood | patients | non | apoa1 [SUMMARY]
null
[CONTENT] covid | 19 | covid 19 | severe | patients | severity | lipid | respiratory | mild | indicators [SUMMARY]
null
[CONTENT] severe | severe group | group | 19 | il | covid | covid 19 | controls | healthy controls | ci [SUMMARY]
null
[CONTENT] severe | 19 | covid 19 | covid | patients | group | severe group | blood | non | il [SUMMARY]
null
[CONTENT] 2019 | COVID-19 ||| COVID-19 | COVID-19 [SUMMARY]
null
[CONTENT] 125 | 17 ||| HDL-C | A1 ||| COVID-19 | 0.865 | 95% ||| CI | 0.800-0.935 | 1.097 | 95% | CI | 1.034-1.165 | 0.002 ||| ROC | 0.896 | 95% | CI | 0.834-0.941 | IL-6 | 0.977 | 95% | CI | 0.932 [SUMMARY]
null
[CONTENT] 2019 | COVID-19 ||| COVID-19 | COVID-19 ||| 142 | COVID-19 ||| COVID-19 | 77 | COVID-19 | ROC ||| 125 | 17 ||| HDL-C | A1 ||| COVID-19 | 0.865 | 95% ||| CI | 0.800-0.935 | 1.097 | 95% | CI | 1.034-1.165 | 0.002 ||| ROC | 0.896 | 95% | CI | 0.834-0.941 | IL-6 | 0.977 | 95% | CI | 0.932 ||| COVID-19 ||| Serum ApoA1 | COVID-19 [SUMMARY]
null
PREOPERATIVE COMPUTED TOMOGRAPHY ANGIOGRAPHY IN MULTIDISCIPLINARY PERSONALIZED ASSESSMENT OF PATIENT WITH RIGHT-SIDED COLON CANCER: SURGEON AND RADIOLOGIST POINT OF VIEW.
36043651
3D-CT angiography has made it possible to reach a qualitatively new level in the determination of treatment tactics for patients with colorectal cancer.
BACKGROUND
This study involved 103 patients with colorectal cancer who underwent preoperative 3D-CT angiography from 2016 to 2021.
METHODS
All patients underwent radical D3 right hemicolectomy. The median number of removed lymph nodes was 24.71±10.04. Anastomotic leakage was diagnosed in one patient. We identified the eight most common branching types of the superior mesenteric artery. The ileocolic artery crossed the superior mesenteric vein on the anterior surface in 64 (62.1%) patients and on the posterior surface in 39 (37.9%). In 58 (56.3%) patients, the right colic artery was either absent or was a nonindependent branch of the superior mesenteric artery. The distance from the root of the superior mesenteric artery to the root of the middle colic artery was 37.8±12.8 mm, and that from the root of the middle colic artery to the root of the ileocolic artery was 29.5±15.7 mm. The trunk of Henle was above the root of the middle colic artery in 66 (64.1%) patients, at the same level as the middle colic artery in 16 (15.5%), and below the middle colic artery in 18 (17.5%) patients.
RESULTS
Preoperative analysis of 3D-CT angiography is a key step in the assessment of vascular anatomy; it can reveal in advance the complexity of the planned lymphadenectomy and potentially reduce the risk of anastomotic leakage.
CONCLUSIONS
[ "Anastomotic Leak", "Colectomy", "Colonic Neoplasms", "Computed Tomography Angiography", "Humans", "Laparoscopy", "Radiologists", "Surgeons" ]
9423717
INTRODUCTION
The incidence of anastomotic leak (AL) after right hemicolectomy is relatively low in comparison with left-sided/rectal colorectal cancer (CRC). In 2015, the European Society of Coloproctology (ESCP) audited right colectomy and ileocecal resection, collecting prospective data on 3,208 patients across 284 centers in 39 countries. The overall AL rate was 8.1% 11 . This is due to a more stable blood supply. However, different anatomical variations can have a significant impact on the duration of surgery and cause technical complexity in its implementation. The need for standardization of Eastern D3 lymphadenectomy and of the Western, embryologically oriented complete mesocolic excision with central vascular ligation (CME/CVL) is still debated in the literature 5,8 . In contrast to left-sided and rectal cancer surgery, where the inferior mesenteric artery is the most important reference point, right-sided surgery has several such points, including the superior mesenteric vein (SMV), the truncus of Henle (TH), and the branches of the superior mesenteric artery (SMA). The not uncommon anatomical variability of the abovementioned vessels leads to a higher percentage of conversions to open operations and increases operative time and intraoperative blood loss 9,12 . In the scientific world of the 21st century, it is difficult, if not impossible, to say anything new about the surgical anatomy of the abdominal cavity. However, contrast-enhanced computed tomography (CT), now widely used in clinical practice, has made it possible to reach a qualitatively new level in preoperative diagnosis and in determining treatment tactics for patients with CRC. Routine use of CT angiography allows a detailed analysis of each clinical case at the preoperative stage and identifies various anatomical nuances that may affect the operation 6,7 . The aim of this article was to analyze the clinical and radiological aspects that usually need to be discussed before surgery by a multidisciplinary team in patients with right-sided colon cancer.
METHODS
This study carried out a comparative analysis of 3D-CT angiography data with intraoperative data. A detailed analysis of the anatomy of the branches of the SMA and its relationship with the surrounding structures was done in order to explore the nuances that may complicate and prolong right hemicolectomy with CME/CVL. The relationship between the anatomical features of the SMA and postoperative complications was also investigated. Description of patients: We included 103 patients (56 males and 47 females; mean age 64.2±11.6 years) with CRC who underwent preoperative 3D-CT angiography at Ternopil University Hospital from 2016 to 2021. The exclusion criteria were stage IV disease and locally advanced forms of cancer. Informed consent was obtained from all patients. This study was approved by the Ethics Commission of Ternopil National Medical University (no. 43). Measurements: In this study, the following objectives were set: determine the type of SMA; determine the distance from the root of the SMA to the root of the middle colic artery (MCA) and the distance from the root of the MCA to the root of the ileocolic artery (ICA); describe the variant structure of the right colic artery (RCA) and the relationship between the MCA and the gastrocolic TH; and explore different variants of TH confluence. The distance between the vascular structures was measured in the frontal plane using a linear measurement. Anatomical features of the SMA branches were determined in the arterial phase and the venous structures of the TH in the venous phase, and the relative position of the MCA and the TH was assessed using image fusion (Fusion). Scan protocol: 3D-CT angiography was performed using a Philips Brilliance 64 CT machine with IV contrast (100 mL of iodinated contrast agent [370 mg/mL]). Contrast was injected into the ulnar vein at a rate of 4.5 mL/s. The bolus tracking method was used for scanning. Arterial phase scanning began automatically when the contrast in the abdominal aorta at the level of the celiac trunk reached 180 HU. The 64-slice multidetector CT scanner (MDCT) can generate 0.75-mm slices that can be reconstructed into a 0.5-mm image. Therefore, in order to obtain high-quality CT angiography for preoperative analysis, the following scanning protocol should be maintained: sublingual nitrate intake, high contrast rate (4–5 mL/s), early arterial phase (20–30 s), dose reduction (80–100 kV), and doubling of the mAs. Image processing was performed using the 3D volume rendering technique (VRT). Statistical analysis: Ordinal data were summarized using the median. All calculations were performed using the Statistica 64 software.
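Because the acquisition settings above read like a checklist, they can also be kept as a small machine-readable protocol record for audit or reuse. The dictionary below is only an illustrative restatement of the parameters given in the text: the field names are my own, and the arterial-phase delay unit (seconds) is an assumption, since the source gives the value as "20–30" without an explicit unit.

# Illustrative, machine-readable restatement of the 3D-CT angiography protocol
# described above; field names are assumptions, values are taken from the text.
CTA_PROTOCOL = {
    "scanner": "Philips Brilliance 64 (64-slice MDCT)",
    "contrast_volume_ml": 100,
    "contrast_concentration_mg_per_ml": 370,
    "injection_site": "ulnar vein",
    "injection_rate_ml_per_s": 4.5,
    "bolus_tracking_threshold_hu": 180,   # trigger level measured in the abdominal aorta
    "arterial_phase_delay": (20, 30),     # assumed unit: seconds
    "tube_voltage_kv": (80, 100),         # reduced kV for dose reduction
    "tube_current": "mAs doubled relative to the routine protocol",
    "premedication": "sublingual nitrate",
    "acquired_slice_thickness_mm": 0.75,
    "reconstructed_slice_thickness_mm": 0.5,
    "postprocessing": "3D volume rendering technique (VRT)",
}

for key, value in CTA_PROTOCOL.items():
    print(f"{key}: {value}")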
RESULTS
All patients underwent local radical right hemicolectomy with CME/CVL and R0 resection. The median number of removed lymph nodes was 24.71±10.04 (range 13–58). Positive lymph nodes were revealed in 38.7% of cases. The incidence of metastatic lymph nodes was 38.7% in the D1 zone, 3.2% in the D2 zone, and 9.7% in the D3 zone. Mean operative time was 82 min (range 63–130). Median intraoperative blood loss was 70 mL (range 32–280). No patients required intraoperative blood transfusion. Postoperative complications developed in seven patients. AL was diagnosed in one patient on postoperative day 8, for whom relaparotomy, lavage, and an end stoma were performed (Figure 5). Unfortunately, on the first day after discharge from the hospital, this patient died from a massive thromboembolic complication despite prophylactic therapy. One patient suffered from paralytic ileus in the early postoperative period. Median hospital stay after the operation was 8.4 days. The SMA was present in 100% of cases. In contrast to the widely used Zebrowski classification of the inferior mesenteric artery, we could not find a common classification of anatomical variations of the SMA. We have identified the eight types that are most common in practice: Type A – MCA, RCA, and ICA deviate classically and independently of each other from the main SMA trunk. Type B – RCA is absent. Type C – RCA deviates from the ICA. Type D – RCA deviates from the right branch of the MCA or the main trunk of the MCA. Type E – classical type A + the presence of an additional MCA (AMCA). Type F – right and left MCA branches deviate separately from the main SMA trunk. Type G – MCA and ICA have a common trunk and RCA is absent. Type H – RCA deviates from the ICA + AMCA. Our analysis showed that type A was detected in 27 (25.9%) patients, type B in 22 (21.4%), type C in 20 (19.2%), type D in 12 (11.6%), type E in 9 (8.7%), type F in 9 (8.7%), type G in 3 (2.9%), and type H in 1 (0.9%) patient (Figures 1 and 2). The analysis also showed that in 12 (11.6%) patients, the right hepatic artery deviated from the SMA. The ICA was present in 103 (100%) cases. The ICA crossed the SMV on the anterior surface in 64 (62.1%) cases and on the posterior surface of the SMV in 39 (37.9%) cases. The RCA is one of the most variable arterial structures of the SMA system (literature data indicate that it is present in 11–40% of cases) 1,4,12 . According to our selected types of SMA, the RCA was absent in 25 (24.3%) patients, and in 33 (32%) patients it deviated from the ICA or the MCA/rMCA. Accordingly, in 58 (56.3%) patients, the RCA was either absent or was a nonindependent branch of the SMA. The MCA was present and originated directly from the SMA in 103 (100%) cases. An AMCA was present in 10 (9.7%) cases. The distance from the root of the SMA to the root of the MCA was 37.8±12.8 mm (range 13–65). The distance from the root of the MCA to the root of the ICA was 29.5±15.7 mm (range 0–80). 
Gastrocolic TH was present in 100 (97.1%) cases, located on the lower edge of the mesentery of the transverse colon, running along the head of the pancreas, and flowing into the right lateral part of the SMV wall. Our analysis showed that the caliber of the TH varied from 3 to 10 mm and its length was 11.5±4.8 mm (range 2–33). Usually, the confluence of the TH was formed by the middle colic vein (MCV), right colic vein (RCV), additional middle colic vein (AMCV), right gastroepiploic vein (RGEV), and anterior superior pancreaticoduodenal vein (ASPDV). We also observed a very interesting case in which one of the veins forming the confluence of the TH was the ileocolic vein (ICV) (Figure 3). Our analysis of 3D-CT angiograms showed the following combinations of TH confluence: type 1 – MCV+RCV; type 2 – RCV+RGEV+ASPDV; type 3 – ICV+RCV+RGEV; type 4 – absence of TH; type 5 – MCV+RGEV; type 6 – MCV+RGEV+ASPDV+AMCV. Respectively, type 1 was observed in 17 (16.5%) patients, type 2 in 55 (53.4%), type 3 in 1 (1%), type 4 in 3 (2.9%), type 5 in 15 (14.6%), and type 6 in 12 (11.6%) patients (Figure 3). Unfortunately, in some cases it was impossible to create a 3D-CT reconstruction of some types of TH due to lack of contrast, incorrect scanning, and various technical features. An important point of preoperative planning is understanding the relationship between the TH and the MCA. In 66 (64.1%) patients, the TH was located above the root of the MCA, in which case the distance between them was 12.38±5.41 mm (range 3–29). In 18 (17.5%) patients, the TH was located below the root of the MCA, in which case the distance between them was 10.95±7.1 mm. In 16 (15.5%) patients, the TH was located at the same level as the root of the MCA (Figure 4).
CONCLUSION
Personalized preoperative analysis of 3D-CT angiography is a key step in the assessment of vascular anatomy: it can reveal in advance the complexity of the planned lymphadenectomy, reduce the intraoperative time spent identifying key landmarks, and support an individualized surgical strategy. Personalized 3D-CT assessment can also potentially reduce the risk of AL substantially, although new studies and further standardization are needed to confirm this.
[ "Description of patients", "Measurements", "Scan protocol", "Statistical analysis" ]
[ "We included 103 patients (56 males and 47 females; mean age 64.2±11.6) with CRC\nwho underwent preoperative 3D-CT angiography at Ternopil University Hospital\nfrom 2016 to 2021. The exclusion criteria were stage IV process and locally\nadvanced forms of cancer. The informed consents were obtained from all patients.\nThis study was passed by the Ethics Commission of Ternopil National Medical\nUniversity (no. 43).", "In this study, the following objectives were set: determine the type of SMA;\ndetermine the distance from the root of the SMA to the root of the middle colic\nartery (MCA), the distance from the root of the MCA to the root of the ileocolic\nartery (ICA); variant structure of the right colic artery (RCA); and the\nrelationship between MCA and gastrocolic TH; and explore different variants of\nTH confluence.\nThe distance between the vascular structures was measured in the frontal plane\nusing a linear measurement. Anatomical features of the structure of SMA branches\nwere determined in the arterial phase and venous structures of TH in the venous\nphase and compared the ratio of MCA and TH using Fusion.", "3D-CT angiography was performed using a Philips Brilliance 64 CT machine with IV\ncontrast (100 mL of iodinated contrast agent [370 mg/mL]). Contrast was injected\ninto the ulnar vein at a rate of 4.5 mL/s. The bolus tracking method was used\nfor scanning. Arterial phase scanning automatically began when the contrast in\nthe abdominal aorta at the level of the abdominal trunk reached 180 HU. The\n64-slice multidetector CT scanner (MDCT) can generate 0.75-mm slices that can be\nreconstructed into a 0.5-mm image. Therefore, in order to obtain high-quality CT\nangiography for preoperative analysis, a scanning protocol should be maintained:\nsublingual nitrate intake, high contrast rate (4–5 mL/s), early arterial phase\n(20–30 ‘), stress reduction (80–100 kV), and doubling the mAs. Image processing\nwas performed using 3D volume rendering technique (VRT).", "Ordinal data were calculated using the median. All calculations were performed\nusing the Statistica version 64 software." ]
[ null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Description of patients", "Measurements", "Scan protocol", "Statistical analysis", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "The incidence of anastomotic leak (AL) after right hemicolectomy is relatively low,\nin comparison with left-sided/rectal colorectal cancer (CRC). In 2015, the European\nSociety of Coloproctology (ESCP) audited right colectomy and ileocecal resection,\ncollecting prospective data on 3,208 patients across 284 centers in 39 countries.\nThe overall AL rate was 8.1%\n11\n. This is due to a more stable blood supply. However, different anatomical\nvariations can have a significant impact on the duration of surgery and cause the\ntechnical complexity of its implementation. The need for standardization is still\ndebated in the literature of Eastern D3 lymphadenectomy and Western embryologically\noriented complete mesocolic excision with central vascular ligation (CME/CVL)\n5,8\n.\nIn contrast to left and rectal cancer surgery, where the inferior mesenteric artery\nis the most important reference point, there are several such points, including the\nsuperior mesenteric vein (SMV), truncus Henle (TH), and the branches of the superior\nmesenteric artery (SMA). Not uncommon anatomical variability of the abovementioned\nvessels causes a higher percentage of conversions to open operations and increases\nintraoperative time and intraoperative blood loss\n9,12\n. It is difficult, in the scientific world of the 21st century, if not\nimpossible, to say anything new in surgical anatomy of the abdominal cavity.\nHowever, with widely used in clinical practice, contrast-enhanced computed\ntomography (CT) has made it possible to reach a qualitatively new level in\npreoperative diagnosis and determination of treatment tactics for patients with CRC.\nRoutine use of CT angiography allows a detailed analysis of each clinical case in\nthe preoperative stage and identifies various anatomical nuances that may affect the operation\n6,7\n.\nThe aim of this article was to analyze the clinical and radiological aspects that\nusually need to be discussed before surgery by a multidisciplinary team in patients\nwith right-sided colon cancer.", "This study was carried out a comparative analysis of 3D-CT angiography data with\nintraoperative data. A detailed analysis of the anatomy of the branches of the SMA\nand its relationship with the surrounding structures was done in order to explore\nthe nuances that may complicate and increase the time during right hemicolectomy\nwith CME/CVL. The relationship between the anatomical features of the structure of\nSMA and postoperative complications was also investigated.\nDescription of patients We included 103 patients (56 males and 47 females; mean age 64.2±11.6) with CRC\nwho underwent preoperative 3D-CT angiography at Ternopil University Hospital\nfrom 2016 to 2021. The exclusion criteria were stage IV process and locally\nadvanced forms of cancer. The informed consents were obtained from all patients.\nThis study was passed by the Ethics Commission of Ternopil National Medical\nUniversity (no. 43).\nWe included 103 patients (56 males and 47 females; mean age 64.2±11.6) with CRC\nwho underwent preoperative 3D-CT angiography at Ternopil University Hospital\nfrom 2016 to 2021. The exclusion criteria were stage IV process and locally\nadvanced forms of cancer. The informed consents were obtained from all patients.\nThis study was passed by the Ethics Commission of Ternopil National Medical\nUniversity (no. 
43).\nMeasurements In this study, the following objectives were set: determine the type of SMA;\ndetermine the distance from the root of the SMA to the root of the middle colic\nartery (MCA), the distance from the root of the MCA to the root of the ileocolic\nartery (ICA); variant structure of the right colic artery (RCA); and the\nrelationship between MCA and gastrocolic TH; and explore different variants of\nTH confluence.\nThe distance between the vascular structures was measured in the frontal plane\nusing a linear measurement. Anatomical features of the structure of SMA branches\nwere determined in the arterial phase and venous structures of TH in the venous\nphase and compared the ratio of MCA and TH using Fusion.\nIn this study, the following objectives were set: determine the type of SMA;\ndetermine the distance from the root of the SMA to the root of the middle colic\nartery (MCA), the distance from the root of the MCA to the root of the ileocolic\nartery (ICA); variant structure of the right colic artery (RCA); and the\nrelationship between MCA and gastrocolic TH; and explore different variants of\nTH confluence.\nThe distance between the vascular structures was measured in the frontal plane\nusing a linear measurement. Anatomical features of the structure of SMA branches\nwere determined in the arterial phase and venous structures of TH in the venous\nphase and compared the ratio of MCA and TH using Fusion.\nScan protocol 3D-CT angiography was performed using a Philips Brilliance 64 CT machine with IV\ncontrast (100 mL of iodinated contrast agent [370 mg/mL]). Contrast was injected\ninto the ulnar vein at a rate of 4.5 mL/s. The bolus tracking method was used\nfor scanning. Arterial phase scanning automatically began when the contrast in\nthe abdominal aorta at the level of the abdominal trunk reached 180 HU. The\n64-slice multidetector CT scanner (MDCT) can generate 0.75-mm slices that can be\nreconstructed into a 0.5-mm image. Therefore, in order to obtain high-quality CT\nangiography for preoperative analysis, a scanning protocol should be maintained:\nsublingual nitrate intake, high contrast rate (4–5 mL/s), early arterial phase\n(20–30 ‘), stress reduction (80–100 kV), and doubling the mAs. Image processing\nwas performed using 3D volume rendering technique (VRT).\n3D-CT angiography was performed using a Philips Brilliance 64 CT machine with IV\ncontrast (100 mL of iodinated contrast agent [370 mg/mL]). Contrast was injected\ninto the ulnar vein at a rate of 4.5 mL/s. The bolus tracking method was used\nfor scanning. Arterial phase scanning automatically began when the contrast in\nthe abdominal aorta at the level of the abdominal trunk reached 180 HU. The\n64-slice multidetector CT scanner (MDCT) can generate 0.75-mm slices that can be\nreconstructed into a 0.5-mm image. Therefore, in order to obtain high-quality CT\nangiography for preoperative analysis, a scanning protocol should be maintained:\nsublingual nitrate intake, high contrast rate (4–5 mL/s), early arterial phase\n(20–30 ‘), stress reduction (80–100 kV), and doubling the mAs. Image processing\nwas performed using 3D volume rendering technique (VRT).\nStatistical analysis Ordinal data were calculated using the median. All calculations were performed\nusing the Statistica version 64 software.\nOrdinal data were calculated using the median. 
All calculations were performed\nusing the Statistica version 64 software.", "We included 103 patients (56 males and 47 females; mean age 64.2±11.6) with CRC\nwho underwent preoperative 3D-CT angiography at Ternopil University Hospital\nfrom 2016 to 2021. The exclusion criteria were stage IV process and locally\nadvanced forms of cancer. The informed consents were obtained from all patients.\nThis study was passed by the Ethics Commission of Ternopil National Medical\nUniversity (no. 43).", "In this study, the following objectives were set: determine the type of SMA;\ndetermine the distance from the root of the SMA to the root of the middle colic\nartery (MCA), the distance from the root of the MCA to the root of the ileocolic\nartery (ICA); variant structure of the right colic artery (RCA); and the\nrelationship between MCA and gastrocolic TH; and explore different variants of\nTH confluence.\nThe distance between the vascular structures was measured in the frontal plane\nusing a linear measurement. Anatomical features of the structure of SMA branches\nwere determined in the arterial phase and venous structures of TH in the venous\nphase and compared the ratio of MCA and TH using Fusion.", "3D-CT angiography was performed using a Philips Brilliance 64 CT machine with IV\ncontrast (100 mL of iodinated contrast agent [370 mg/mL]). Contrast was injected\ninto the ulnar vein at a rate of 4.5 mL/s. The bolus tracking method was used\nfor scanning. Arterial phase scanning automatically began when the contrast in\nthe abdominal aorta at the level of the abdominal trunk reached 180 HU. The\n64-slice multidetector CT scanner (MDCT) can generate 0.75-mm slices that can be\nreconstructed into a 0.5-mm image. Therefore, in order to obtain high-quality CT\nangiography for preoperative analysis, a scanning protocol should be maintained:\nsublingual nitrate intake, high contrast rate (4–5 mL/s), early arterial phase\n(20–30 ‘), stress reduction (80–100 kV), and doubling the mAs. Image processing\nwas performed using 3D volume rendering technique (VRT).", "Ordinal data were calculated using the median. All calculations were performed\nusing the Statistica version 64 software.", "All patients underwent local radical right hemicolectomy with CME/CVL and R0\nresection.\nThe median quantity of removal lymph nodes was 24.71±10.04 (range 13–58). Positive\nlymph nodes were revealed in 38.7% of cases. The incidence of metastatic lymph nodes\nwas 38.7% in D1 zone, 3.2% in D2 zone, and 9.7% in D3 zone. Mean operative time was\n82 min (range 63–130). Median intraoperative blood loss was 70 mL (range 32–280). No\npatients required intraoperative blood transfusion. Postoperative complications were\ndeveloped in seven patients. AL was diagnosed in one patient on postoperative day 8\nfor whom relaparotomy, lavage, and end stoma were performed (Figure 5). Unfortunately, on the first day after patient\ndischarge from the hospital, he died from massive thromboembolic complication,\ndespite maintaining prophylaxis therapy. One patient suffered from paralytic ileus\nin an early postoperative period. Median staying in hospital after operation was 8.4\ndays.\nThe SMA was present in 100% of cases. Compared with the widely used Zebrowski\nclassification of the inferior mesenteric artery, we could not find a common\nclassification of anatomical variations of SMA. 
We have identified eight types that\nare most common in practice: \nType A – MCA, RCA, and ICA deviate classically\nindependently of each other from the main SMA trunk.\nType B – RCA is absent.\nType C – RCA deviates from ICA.\nType D – RCA deviates from the right branch of the MCA or\nthe main trunk of the MCA.\nType E – Classical type A + the presence of additional MCA\n(AMCA)\nType F – Right and left MCA branches deviate separately\nfrom the main SMA trunk.\nType G – MCA and ICA have a common trunk and RCA is\nabsent.\nType H – RCA deviates from ICA + AMCA.\n\n\nType A – MCA, RCA, and ICA deviate classically\nindependently of each other from the main SMA trunk.\n\nType B – RCA is absent.\n\nType C – RCA deviates from ICA.\n\nType D – RCA deviates from the right branch of the MCA or\nthe main trunk of the MCA.\n\nType E – Classical type A + the presence of additional MCA\n(AMCA)\n\nType F – Right and left MCA branches deviate separately\nfrom the main SMA trunk.\n\nType G – MCA and ICA have a common trunk and RCA is\nabsent.\n\nType H – RCA deviates from ICA + AMCA.\nOur analysis showed that type A was detected in 27 (25.9%) patients, type B in 22\n(21.4%), type C in 20 (19.2%), type D in 12 (11.6%), type E in 9 (8.7%), type F in 9\n(8.7%), type G in 3 (2.9%), and type H in 1 (0.9%) patient (Figures 1 and 2). The\nanalysis also showed that in 12 (11.6%) patients, the right hepatic artery deviates\nfrom SMA.\nThe ICA was present in 103 (100%) cases. The ICA crossed the SMV on the anterior\nsurface in 64 (62.1%) cases and on the posterior surface of the SMV in 39 (37.9%)\ncases.\nThe RCA is one of the most volatile arterial structures of the SMA system (literature\ndata indicate that it is present in 11–40% of cases)\n1,4,12\n. According to our selected types of SMA, RCA was absent in 25 (24.3%) and in\n33 (32%) patients and deviated from ICA and MCA/RMCA. Accordingly, in 58 (56.3%)\npatients, RCA was either absent or was a nonindependent branch of SMA.\nThe MCA was present and originated directly from SMA in 103 (100%) cases. AMCA was\npresent in 10 (9.7%) cases.\nThe distance from the root of the SMA to the root of the MCA was 37.8±12.8 mm (range\n13–65).\nThe distance from the root of the MCA to the root of the ICA was 29.5±15.7 mm (range\n0–80).\nGastrocolic TH was present in 100 (97.1%) cases and located on the lower edge of the\nmesentery of the transverse colon, along the head of the pancreas, and flows into\nthe right lateral part of the SMV wall. Our analysis showed that the caliber of TH\nvaried from 3 to 10 mm and its length was 11.5±4.8 mm (range 2–33). Usually, the\nconfluence of TH formed: middle colic vein (MCV), right colic vein (RCV), additional\nmiddle colic vein (AMCV), right gastroepiploic vein (RGEV), and anterior superior\npancreaticoduodenal vein (ASPDV). Also, we observed a very interesting case where\none of the veins which create confluence of TH was ileocolic vein (ICV) (Figure 3). Our analysis of 3D-CT angiograms\nshowed the following type combinations of TH confluence: MCV+RCVRCV+RGEV+ASPDVICV+RCV+RGEVAbsence of THMCV+RGEVMCV+RGEV+ASPDV+AMCV\n\nMCV+RCV\nRCV+RGEV+ASPDV\nICV+RCV+RGEV\nAbsence of TH\nMCV+RGEV\nMCV+RGEV+ASPDV+AMCV\nRespectively, type 1 was observed in 17 (16.5%) patients, type 2 in 55 (53.4%), type\n3 in 1 (1%), type 4 in 3 (2.9%), type 5 in 15 (14.6%), and type 6 in 12 (11.6%)\npatients (Figure 3). 
Unfortunately, it was\nimpossible in some cases to create 3D-CT reconstruction of some types of TH due to\nlack of contrast, incorrect scanning, and various technical features.\nAn important point of preoperative planning is understanding the relationship between\nTH and MCA. In 66 (64.1%) patients, TH was located above the root of the MCA, in\nwhich case the distance between them was 12.38±5.41 mm (range 3–29). In 18 (17.5%)\npatients, TH was located below the root of the MCA, in which case the distance\nbetween them was 10.95±7.1 mm. In 16 (15.5%) patients, TH was located at the same\nlevel with the root of the MCA (Figure 4).", "The well-known concept of right hemicolectomy with CME/CVL, in the past decade, has\nsupplanted the traditional old notion of colon cancer surgery and has improved\npatients’ 5-year survival\n1,5,8\n. AL is the most devastating complication in colorectal surgery. Patients who\ndeveloped an AL had a higher mortality than those who did not, a longer median\nhospital stay, and a higher 30-day reoperation and 30-day readmission rate\n11\n. In our study, we observed AL in one patient, which resulted in 30-day\nmortality (Figure 5). Retrospective analysis of\nthis case showed our mistake. According to the oncology canons, the operation was\nperformed correctly (CME/CVL), but we did not perform an analysis of vascular\nanatomy before surgery, resulting in irreversible ischemic changes in the\nanastomotic area and the actual AL.\nIn the left-sided CRC, the inferior mesenteric artery is the most important landmark\nto perform D3 lymphadenectomy, while in the right-sided colon cancer surgery, there\nare several such “central” landmarks: SMV, SMA, ICA, MCA, and TH.\nEach right hemicolectomy with CME/CVL started from dissection of SMV and\nidentification of ICA and ICV. Here we do not have any problem. However, interesting\nis the effect of the course of ICA in relation to SMV on disease-free survival with\ncorrespondingly better results in the group of patients where ICA is ahead of SMV\n6\n. Therefore, a potential group of patients with an ICA course, behind the SMV,\nrequires more precision lymphadenectomy of the apical area. In our study, we found\nthat ICA crossed the SMV on the anterior surface in 64 (62.1%) cases and on the\nposterior surface of the SMV in 39 (37.9%) cases.\nRCA is one of the most volatile arterial structures of the SMA system. Literature\ndata indicate that it is present in 11–40% of cases\n1,4,12\n. In our study, we found that the weighted mean incidence of RCA was 43.7%\nfrom the SMA, 20.4% from the ICA, and 11.6% from the root of the MCA or rMCA, and\nRCA was absent in 25 (24.3%) cases. Usually, we do not encounter any problems with\nRCA when implementing the Western concept of CME/CVL and do not pay much attention\nto it if it is not an independent branch. However, the abovementioned anatomical\nvariants of RCA should be considered when performing the Eastern concept of D3 lymph\nnode dissection (segmental resections — 10 cm from the edge of the tumor)\n10\n.\nThe next one key point for performing right hemicolectomy with CME/CVL is TH,\nespecially due to being a special area of the apical lymph nodes. TH is a\nthin-walled venous trunk that has many different combinations of formation.\nTraditional TH branches are MCV, RGEV, ASPDV, and aMCV\n1,2\n. Very often, it is in the TH area due to excessive traction of the mesentery\nduring the allocation of MCA surgeons get bleeding. 
It is critical to understand the\nrelationship between TH and MCA to prevent damage of this trunk (Figure 4). In our study, we found that TH was\nlocated above the root of the MCA (12.38±5.41 mm) in 66 (64.1%) patients, at the\nsame level with the root of the MCA in 16 (15.5%) patients, and was located below\nthe root of the MCA (10.95±7.1 mm) in 18 (17.5%) patients. In the situation if the\nroot of the MCA is above TH, it is safer to start mobilization from the cranial part\nof the root of transverse colon mesentery, and in cases where the root of the MCA is\nbelow TH, then the dissection of MCA should begin from the caudal part of transverse\ncolon mesentery\n7\n.\nCT is a modality of choice for staging of colon cancer and distant metastasis.\nMagnetic resonance angiography is an expensive method to perform it routinely and\npreoperatively for every patient. Therefore, CT is optimal for staging and\nevaluating mesenteric vasculature\n8\n. However, 3D-CT angiography has several limitations. First, the preoperative\nCT protocol for patients with colon cancer usually does not include the early\narterial phase and, therefore, results in some difficulty in performing adequate 3D\nreconstruction. Second, the caliber of SMA branches is usually small in diameter and\nthey are not always well visualized on 3D-CT angiograms. In the abovementioned\ncases, the use of CT in the preoperative analysis of anatomical variants of the\nstructure of SMA branches cannot be performed in 3D mode and should be performed in\nnormal 2D mode\n1\n.\nNew era of development personalized strategy could be achieved by virtual reality\nexploration and planning for precision colorectal surgery, which can provide an\nenhanced understanding of crucial anatomical details\n3\n.\nOur study has some limitation. This study is partly retrospective (observation period\nfrom 2016 to 2018) and partly prospective (observation period from 2019 to 2021), so\nwe cannot fully conduct an effective analysis between anatomical variations of\nvascular anatomy with postoperative complications in a group of cases that were\nretrospectively analyzed.", "Personalized preoperative analysis of 3D-CT angiography is a key pattern in\nassessment of vascular anatomy and can potentially show the complexity of future\nlymphadenectomy, reduce intraoperative time for identifying key landmarks, and\ndevelop an individualized surgical strategy. Personalized 3D-CT assessment can\npotentially significantly reduce the risk of AL. To solve this problem, new studies\nand further standardization are needed." ]
[ "intro", "methods", null, null, null, null, "results", "discussion", "conclusions" ]
[ "Colonic Neoplasms", "Anastomotic Leak", "Tomography", "X-Ray Computed", "Angiography", "Neoplasias do Colo", "Fístula Anastomótica", "Tomografia Computadorizada por Rx", "Angiografia" ]
INTRODUCTION: The incidence of anastomotic leak (AL) after right hemicolectomy is relatively low, in comparison with left-sided/rectal colorectal cancer (CRC). In 2015, the European Society of Coloproctology (ESCP) audited right colectomy and ileocecal resection, collecting prospective data on 3,208 patients across 284 centers in 39 countries. The overall AL rate was 8.1% 11 . This is due to a more stable blood supply. However, different anatomical variations can have a significant impact on the duration of surgery and cause the technical complexity of its implementation. The need for standardization is still debated in the literature of Eastern D3 lymphadenectomy and Western embryologically oriented complete mesocolic excision with central vascular ligation (CME/CVL) 5,8 . In contrast to left and rectal cancer surgery, where the inferior mesenteric artery is the most important reference point, there are several such points, including the superior mesenteric vein (SMV), truncus Henle (TH), and the branches of the superior mesenteric artery (SMA). Not uncommon anatomical variability of the abovementioned vessels causes a higher percentage of conversions to open operations and increases intraoperative time and intraoperative blood loss 9,12 . It is difficult, in the scientific world of the 21st century, if not impossible, to say anything new in surgical anatomy of the abdominal cavity. However, with widely used in clinical practice, contrast-enhanced computed tomography (CT) has made it possible to reach a qualitatively new level in preoperative diagnosis and determination of treatment tactics for patients with CRC. Routine use of CT angiography allows a detailed analysis of each clinical case in the preoperative stage and identifies various anatomical nuances that may affect the operation 6,7 . The aim of this article was to analyze the clinical and radiological aspects that usually need to be discussed before surgery by a multidisciplinary team in patients with right-sided colon cancer. METHODS: This study was carried out a comparative analysis of 3D-CT angiography data with intraoperative data. A detailed analysis of the anatomy of the branches of the SMA and its relationship with the surrounding structures was done in order to explore the nuances that may complicate and increase the time during right hemicolectomy with CME/CVL. The relationship between the anatomical features of the structure of SMA and postoperative complications was also investigated. Description of patients We included 103 patients (56 males and 47 females; mean age 64.2±11.6) with CRC who underwent preoperative 3D-CT angiography at Ternopil University Hospital from 2016 to 2021. The exclusion criteria were stage IV process and locally advanced forms of cancer. The informed consents were obtained from all patients. This study was passed by the Ethics Commission of Ternopil National Medical University (no. 43). We included 103 patients (56 males and 47 females; mean age 64.2±11.6) with CRC who underwent preoperative 3D-CT angiography at Ternopil University Hospital from 2016 to 2021. The exclusion criteria were stage IV process and locally advanced forms of cancer. The informed consents were obtained from all patients. This study was passed by the Ethics Commission of Ternopil National Medical University (no. 43). 
Measurements In this study, the following objectives were set: determine the type of SMA; determine the distance from the root of the SMA to the root of the middle colic artery (MCA), the distance from the root of the MCA to the root of the ileocolic artery (ICA); variant structure of the right colic artery (RCA); and the relationship between MCA and gastrocolic TH; and explore different variants of TH confluence. The distance between the vascular structures was measured in the frontal plane using a linear measurement. Anatomical features of the structure of SMA branches were determined in the arterial phase and venous structures of TH in the venous phase and compared the ratio of MCA and TH using Fusion. In this study, the following objectives were set: determine the type of SMA; determine the distance from the root of the SMA to the root of the middle colic artery (MCA), the distance from the root of the MCA to the root of the ileocolic artery (ICA); variant structure of the right colic artery (RCA); and the relationship between MCA and gastrocolic TH; and explore different variants of TH confluence. The distance between the vascular structures was measured in the frontal plane using a linear measurement. Anatomical features of the structure of SMA branches were determined in the arterial phase and venous structures of TH in the venous phase and compared the ratio of MCA and TH using Fusion. Scan protocol 3D-CT angiography was performed using a Philips Brilliance 64 CT machine with IV contrast (100 mL of iodinated contrast agent [370 mg/mL]). Contrast was injected into the ulnar vein at a rate of 4.5 mL/s. The bolus tracking method was used for scanning. Arterial phase scanning automatically began when the contrast in the abdominal aorta at the level of the abdominal trunk reached 180 HU. The 64-slice multidetector CT scanner (MDCT) can generate 0.75-mm slices that can be reconstructed into a 0.5-mm image. Therefore, in order to obtain high-quality CT angiography for preoperative analysis, a scanning protocol should be maintained: sublingual nitrate intake, high contrast rate (4–5 mL/s), early arterial phase (20–30 ‘), stress reduction (80–100 kV), and doubling the mAs. Image processing was performed using 3D volume rendering technique (VRT). 3D-CT angiography was performed using a Philips Brilliance 64 CT machine with IV contrast (100 mL of iodinated contrast agent [370 mg/mL]). Contrast was injected into the ulnar vein at a rate of 4.5 mL/s. The bolus tracking method was used for scanning. Arterial phase scanning automatically began when the contrast in the abdominal aorta at the level of the abdominal trunk reached 180 HU. The 64-slice multidetector CT scanner (MDCT) can generate 0.75-mm slices that can be reconstructed into a 0.5-mm image. Therefore, in order to obtain high-quality CT angiography for preoperative analysis, a scanning protocol should be maintained: sublingual nitrate intake, high contrast rate (4–5 mL/s), early arterial phase (20–30 ‘), stress reduction (80–100 kV), and doubling the mAs. Image processing was performed using 3D volume rendering technique (VRT). Statistical analysis Ordinal data were calculated using the median. All calculations were performed using the Statistica version 64 software. Ordinal data were calculated using the median. All calculations were performed using the Statistica version 64 software. 
Description of patients: We included 103 patients (56 males and 47 females; mean age 64.2±11.6) with CRC who underwent preoperative 3D-CT angiography at Ternopil University Hospital from 2016 to 2021. The exclusion criteria were stage IV process and locally advanced forms of cancer. The informed consents were obtained from all patients. This study was passed by the Ethics Commission of Ternopil National Medical University (no. 43). Measurements: In this study, the following objectives were set: determine the type of SMA; determine the distance from the root of the SMA to the root of the middle colic artery (MCA), the distance from the root of the MCA to the root of the ileocolic artery (ICA); variant structure of the right colic artery (RCA); and the relationship between MCA and gastrocolic TH; and explore different variants of TH confluence. The distance between the vascular structures was measured in the frontal plane using a linear measurement. Anatomical features of the structure of SMA branches were determined in the arterial phase and venous structures of TH in the venous phase and compared the ratio of MCA and TH using Fusion. Scan protocol: 3D-CT angiography was performed using a Philips Brilliance 64 CT machine with IV contrast (100 mL of iodinated contrast agent [370 mg/mL]). Contrast was injected into the ulnar vein at a rate of 4.5 mL/s. The bolus tracking method was used for scanning. Arterial phase scanning automatically began when the contrast in the abdominal aorta at the level of the abdominal trunk reached 180 HU. The 64-slice multidetector CT scanner (MDCT) can generate 0.75-mm slices that can be reconstructed into a 0.5-mm image. Therefore, in order to obtain high-quality CT angiography for preoperative analysis, a scanning protocol should be maintained: sublingual nitrate intake, high contrast rate (4–5 mL/s), early arterial phase (20–30 ‘), stress reduction (80–100 kV), and doubling the mAs. Image processing was performed using 3D volume rendering technique (VRT). Statistical analysis: Ordinal data were calculated using the median. All calculations were performed using the Statistica version 64 software. RESULTS: All patients underwent local radical right hemicolectomy with CME/CVL and R0 resection. The median quantity of removal lymph nodes was 24.71±10.04 (range 13–58). Positive lymph nodes were revealed in 38.7% of cases. The incidence of metastatic lymph nodes was 38.7% in D1 zone, 3.2% in D2 zone, and 9.7% in D3 zone. Mean operative time was 82 min (range 63–130). Median intraoperative blood loss was 70 mL (range 32–280). No patients required intraoperative blood transfusion. Postoperative complications were developed in seven patients. AL was diagnosed in one patient on postoperative day 8 for whom relaparotomy, lavage, and end stoma were performed (Figure 5). Unfortunately, on the first day after patient discharge from the hospital, he died from massive thromboembolic complication, despite maintaining prophylaxis therapy. One patient suffered from paralytic ileus in an early postoperative period. Median staying in hospital after operation was 8.4 days. The SMA was present in 100% of cases. Compared with the widely used Zebrowski classification of the inferior mesenteric artery, we could not find a common classification of anatomical variations of SMA. We have identified eight types that are most common in practice: Type A – MCA, RCA, and ICA deviate classically independently of each other from the main SMA trunk. Type B – RCA is absent. 
Type C – RCA deviates from ICA. Type D – RCA deviates from the right branch of the MCA or the main trunk of the MCA. Type E – Classical type A + the presence of additional MCA (AMCA) Type F – Right and left MCA branches deviate separately from the main SMA trunk. Type G – MCA and ICA have a common trunk and RCA is absent. Type H – RCA deviates from ICA + AMCA. Type A – MCA, RCA, and ICA deviate classically independently of each other from the main SMA trunk. Type B – RCA is absent. Type C – RCA deviates from ICA. Type D – RCA deviates from the right branch of the MCA or the main trunk of the MCA. Type E – Classical type A + the presence of additional MCA (AMCA) Type F – Right and left MCA branches deviate separately from the main SMA trunk. Type G – MCA and ICA have a common trunk and RCA is absent. Type H – RCA deviates from ICA + AMCA. Our analysis showed that type A was detected in 27 (25.9%) patients, type B in 22 (21.4%), type C in 20 (19.2%), type D in 12 (11.6%), type E in 9 (8.7%), type F in 9 (8.7%), type G in 3 (2.9%), and type H in 1 (0.9%) patient (Figures 1 and 2). The analysis also showed that in 12 (11.6%) patients, the right hepatic artery deviates from SMA. The ICA was present in 103 (100%) cases. The ICA crossed the SMV on the anterior surface in 64 (62.1%) cases and on the posterior surface of the SMV in 39 (37.9%) cases. The RCA is one of the most volatile arterial structures of the SMA system (literature data indicate that it is present in 11–40% of cases) 1,4,12 . According to our selected types of SMA, RCA was absent in 25 (24.3%) and in 33 (32%) patients and deviated from ICA and MCA/RMCA. Accordingly, in 58 (56.3%) patients, RCA was either absent or was a nonindependent branch of SMA. The MCA was present and originated directly from SMA in 103 (100%) cases. AMCA was present in 10 (9.7%) cases. The distance from the root of the SMA to the root of the MCA was 37.8±12.8 mm (range 13–65). The distance from the root of the MCA to the root of the ICA was 29.5±15.7 mm (range 0–80). Gastrocolic TH was present in 100 (97.1%) cases and located on the lower edge of the mesentery of the transverse colon, along the head of the pancreas, and flows into the right lateral part of the SMV wall. Our analysis showed that the caliber of TH varied from 3 to 10 mm and its length was 11.5±4.8 mm (range 2–33). Usually, the confluence of TH formed: middle colic vein (MCV), right colic vein (RCV), additional middle colic vein (AMCV), right gastroepiploic vein (RGEV), and anterior superior pancreaticoduodenal vein (ASPDV). Also, we observed a very interesting case where one of the veins which create confluence of TH was ileocolic vein (ICV) (Figure 3). Our analysis of 3D-CT angiograms showed the following type combinations of TH confluence: MCV+RCVRCV+RGEV+ASPDVICV+RCV+RGEVAbsence of THMCV+RGEVMCV+RGEV+ASPDV+AMCV MCV+RCV RCV+RGEV+ASPDV ICV+RCV+RGEV Absence of TH MCV+RGEV MCV+RGEV+ASPDV+AMCV Respectively, type 1 was observed in 17 (16.5%) patients, type 2 in 55 (53.4%), type 3 in 1 (1%), type 4 in 3 (2.9%), type 5 in 15 (14.6%), and type 6 in 12 (11.6%) patients (Figure 3). Unfortunately, it was impossible in some cases to create 3D-CT reconstruction of some types of TH due to lack of contrast, incorrect scanning, and various technical features. An important point of preoperative planning is understanding the relationship between TH and MCA. In 66 (64.1%) patients, TH was located above the root of the MCA, in which case the distance between them was 12.38±5.41 mm (range 3–29). 
In 18 (17.5%) patients, the TH was located below the root of the MCA; in these cases, the distance between them was 10.95±7.1 mm. In 16 (15.5%) patients, the TH was located at the same level as the root of the MCA (Figure 4). DISCUSSION: Over the past decade, the well-known concept of right hemicolectomy with CME/CVL has supplanted the traditional approach to colon cancer surgery and has improved patients’ 5-year survival 1,5,8 . AL is the most devastating complication in colorectal surgery. Patients who developed an AL had higher mortality, a longer median hospital stay, and higher 30-day reoperation and 30-day readmission rates than those who did not 11 . In our study, we observed AL in one patient, resulting in 30-day mortality (Figure 5). Retrospective analysis of this case revealed our mistake: from an oncological standpoint, the operation was performed correctly (CME/CVL), but we did not analyze the vascular anatomy before surgery, which led to irreversible ischemic changes in the anastomotic area and, ultimately, to the AL. In left-sided CRC, the inferior mesenteric artery is the most important landmark for performing D3 lymphadenectomy, whereas in right-sided colon cancer surgery there are several such “central” landmarks: the SMV, SMA, ICA, MCA, and TH. Every right hemicolectomy with CME/CVL starts with dissection of the SMV and identification of the ICA and ICV; this step usually poses no difficulty. Of interest, however, is the effect of the course of the ICA relative to the SMV on disease-free survival, with better results reported in the group of patients in whom the ICA passes anterior to the SMV 6 . Therefore, patients in whom the ICA courses posterior to the SMV potentially require a more precise lymphadenectomy of the apical area. In our study, we found that the ICA crossed the SMV on the anterior surface in 64 (62.1%) cases and on the posterior surface of the SMV in 39 (37.9%) cases. The RCA is one of the most variable arterial structures of the SMA system; literature data indicate that it is present in 11–40% of cases 1,4,12 . In our study, we found that the weighted mean incidence of the RCA was 43.7% from the SMA, 20.4% from the ICA, and 11.6% from the root of the MCA or rMCA, and the RCA was absent in 25 (24.3%) cases. Usually, we do not encounter any problems with the RCA when implementing the Western concept of CME/CVL and do not pay much attention to it if it is not an independent branch. However, the abovementioned anatomical variants of the RCA should be considered when performing the Eastern concept of D3 lymph node dissection (segmental resections, 10 cm from the edge of the tumor) 10 . The next key point for performing right hemicolectomy with CME/CVL is the TH, particularly because it is a distinct area of apical lymph nodes. The TH is a thin-walled venous trunk with many different combinations of formation; its traditional tributaries are the MCV, RGEV, ASPDV, and AMCV 1,2 . Very often, bleeding occurs in the TH area because of excessive traction on the mesentery during isolation of the MCA. It is therefore critical to understand the relationship between the TH and the MCA to prevent damage to this trunk (Figure 4). In our study, we found that the TH was located above the root of the MCA (12.38±5.41 mm) in 66 (64.1%) patients, at the same level as the root of the MCA in 16 (15.5%) patients, and below the root of the MCA (10.95±7.1 mm) in 18 (17.5%) patients. 
If the root of the MCA lies above the TH, it is safer to start mobilization from the cranial part of the root of the transverse colon mesentery; if the root of the MCA lies below the TH, dissection of the MCA should begin from the caudal part of the transverse colon mesentery 7 . CT is the modality of choice for staging colon cancer and detecting distant metastases. Magnetic resonance angiography is too expensive to perform routinely and preoperatively in every patient. Therefore, CT is optimal for both staging and evaluating the mesenteric vasculature 8 . However, 3D-CT angiography has several limitations. First, the preoperative CT protocol for patients with colon cancer usually does not include the early arterial phase, which makes adequate 3D reconstruction difficult. Second, the SMA branches are usually small in caliber and are not always well visualized on 3D-CT angiograms. In such cases, preoperative analysis of the anatomical variants of the SMA branches cannot be performed in 3D mode and should instead be performed in conventional 2D mode 1 . A new era of personalized strategy development could be achieved with virtual reality exploration and planning for precision colorectal surgery, which can provide an enhanced understanding of crucial anatomical details 3 . Our study has some limitations. It is partly retrospective (observation period 2016 to 2018) and partly prospective (observation period 2019 to 2021), so we could not fully analyze the relationship between anatomical variations of the vascular anatomy and postoperative complications in the retrospectively analyzed group of cases. CONCLUSION: Personalized preoperative analysis of 3D-CT angiography is a key step in the assessment of vascular anatomy and can potentially show the complexity of the future lymphadenectomy, reduce the intraoperative time needed to identify key landmarks, and help develop an individualized surgical strategy. Personalized 3D-CT assessment can potentially significantly reduce the risk of AL. To address this problem, new studies and further standardization are needed.
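The scan protocol in the methods above amounts to a small set of acquisition parameters plus a bolus-tracking trigger on aortic attenuation. The following is a minimal, purely illustrative sketch of that logic: the dictionary keys, the helper function, and the simulated HU readings are assumptions made for this example and are not part of the original protocol.

```python
# Illustrative sketch of the 3D-CT angiography acquisition parameters and the
# bolus-tracking trigger described in the text. Numeric values come from the
# article; names and the simulated monitoring scans are assumptions.

SCAN_PROTOCOL = {
    "contrast_volume_ml": 100,          # iodinated contrast, 370 mg/mL
    "injection_rate_ml_per_s": 4.5,     # antecubital injection
    "trigger_threshold_hu": 180,        # aortic attenuation at the celiac trunk
    "tube_voltage_kv": (80, 100),       # reduced kV
    "slice_thickness_mm": 0.75,         # 64-row MDCT acquisition
    "reconstruction_interval_mm": 0.5,
}


def arterial_phase_trigger_index(hu_readings, threshold=SCAN_PROTOCOL["trigger_threshold_hu"]):
    """Return the index of the first monitoring scan whose aortic attenuation
    reaches the bolus-tracking threshold, or None if it is never reached."""
    for i, hu in enumerate(hu_readings):
        if hu >= threshold:
            return i
    return None


if __name__ == "__main__":
    # Simulated attenuation measurements (HU) in the abdominal aorta.
    monitoring = [45, 60, 95, 140, 185, 230]
    idx = arterial_phase_trigger_index(monitoring)
    print(f"Arterial phase starts at monitoring scan {idx} ({monitoring[idx]} HU)")
```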
Background: 3D-CT angiography has made it possible to reach a qualitatively new level in determining treatment tactics for patients with colorectal cancer. Methods: This study involved 103 patients with colorectal cancer who underwent preoperative 3D-CT angiography from 2016 to 2021. Results: All patients underwent radical D3 right hemicolectomy. The median number of removed lymph nodes was 24.71±10.04. Anastomotic leakage was diagnosed in one patient. We identified the eight most common branching types of the superior mesenteric artery. The ileocolic artery crossed the superior mesenteric vein on the anterior surface in 64 (62.1%) patients and on the posterior surface in 39 (37.9%). In 58 (56.3%) patients, the right colic artery was either absent or was a nonindependent branch of the superior mesenteric artery. The distance from the root of the superior mesenteric artery to the root of the middle colic artery was 37.8±12.8 mm, and that from the root of the middle colic artery to the root of the ileocolic artery was 29.5±15.7 mm. The trunk of Henle was above the root of the middle colic artery in 66 (64.1%) patients, at the same level as the middle colic artery in 16 (15.5%), and below the middle colic artery in 18 (17.5%) patients. Conclusions: Preoperative analysis of 3D-CT angiography is a key step in the assessment of vascular anatomy and can potentially show the complexity of future lymphadenectomy and reduce the risk of anastomotic leakage.
INTRODUCTION: The incidence of anastomotic leak (AL) after right hemicolectomy is relatively low compared with that after left-sided/rectal colorectal cancer (CRC) surgery. In 2015, the European Society of Coloproctology (ESCP) audited right colectomy and ileocecal resection, collecting prospective data on 3,208 patients across 284 centers in 39 countries; the overall AL rate was 8.1% 11 . This lower incidence is due to a more stable blood supply. However, different anatomical variations can have a significant impact on the duration of surgery and increase the technical complexity of the procedure. The need for standardization of the Eastern D3 lymphadenectomy and the Western, embryologically oriented complete mesocolic excision with central vascular ligation (CME/CVL) is still debated in the literature 5,8 . In contrast to left-sided and rectal cancer surgery, where the inferior mesenteric artery is the most important reference point, in right-sided surgery there are several such points, including the superior mesenteric vein (SMV), the truncus of Henle (TH), and the branches of the superior mesenteric artery (SMA). The not-uncommon anatomical variability of the abovementioned vessels leads to a higher percentage of conversions to open operations and increases intraoperative time and intraoperative blood loss 9,12 . In the scientific world of the 21st century, it is difficult, if not impossible, to say anything new about the surgical anatomy of the abdominal cavity. However, contrast-enhanced computed tomography (CT), now widely used in clinical practice, has made it possible to reach a qualitatively new level in preoperative diagnosis and in determining treatment tactics for patients with CRC. Routine use of CT angiography allows a detailed analysis of each clinical case at the preoperative stage and identifies various anatomical nuances that may affect the operation 6,7 . The aim of this article was to analyze the clinical and radiological aspects that usually need to be discussed before surgery by a multidisciplinary team in patients with right-sided colon cancer. CONCLUSION: Personalized preoperative analysis of 3D-CT angiography is a key step in the assessment of vascular anatomy and can potentially show the complexity of the future lymphadenectomy, reduce the intraoperative time needed to identify key landmarks, and help develop an individualized surgical strategy. Personalized 3D-CT assessment can potentially significantly reduce the risk of AL. To address this problem, new studies and further standardization are needed.
Background: 3D-CT angiography has made it possible to reach a qualitatively new level in determining treatment tactics for patients with colorectal cancer. Methods: This study involved 103 patients with colorectal cancer who underwent preoperative 3D-CT angiography from 2016 to 2021. Results: All patients underwent radical D3 right hemicolectomy. The median number of removed lymph nodes was 24.71±10.04. Anastomotic leakage was diagnosed in one patient. We identified the eight most common branching types of the superior mesenteric artery. The ileocolic artery crossed the superior mesenteric vein on the anterior surface in 64 (62.1%) patients and on the posterior surface in 39 (37.9%). In 58 (56.3%) patients, the right colic artery was either absent or was a nonindependent branch of the superior mesenteric artery. The distance from the root of the superior mesenteric artery to the root of the middle colic artery was 37.8±12.8 mm, and that from the root of the middle colic artery to the root of the ileocolic artery was 29.5±15.7 mm. The trunk of Henle was above the root of the middle colic artery in 66 (64.1%) patients, at the same level as the middle colic artery in 16 (15.5%), and below the middle colic artery in 18 (17.5%) patients. Conclusions: Preoperative analysis of 3D-CT angiography is a key step in the assessment of vascular anatomy and can potentially show the complexity of future lymphadenectomy and reduce the risk of anastomotic leakage.
4,139
281
[ 80, 144, 184, 20 ]
9
[ "mca", "type", "th", "patients", "sma", "ct", "root", "rca", "ica", "right" ]
[ "precision colorectal surgery", "colectomy ileocecal resection", "cancer surgery central", "leak right hemicolectomy", "rectal cancer surgery" ]
[CONTENT] Colonic Neoplasms | Anastomotic Leak | Tomography | X-Ray Computed | Angiography | Neoplasias do Colo | Fístula Anastomótica | Tomografia Computadorizada por Rx | Angiografia [SUMMARY]
[CONTENT] Colonic Neoplasms | Anastomotic Leak | Tomography | X-Ray Computed | Angiography | Neoplasias do Colo | Fístula Anastomótica | Tomografia Computadorizada por Rx | Angiografia [SUMMARY]
[CONTENT] Colonic Neoplasms | Anastomotic Leak | Tomography | X-Ray Computed | Angiography | Neoplasias do Colo | Fístula Anastomótica | Tomografia Computadorizada por Rx | Angiografia [SUMMARY]
[CONTENT] Colonic Neoplasms | Anastomotic Leak | Tomography | X-Ray Computed | Angiography | Neoplasias do Colo | Fístula Anastomótica | Tomografia Computadorizada por Rx | Angiografia [SUMMARY]
[CONTENT] Colonic Neoplasms | Anastomotic Leak | Tomography | X-Ray Computed | Angiography | Neoplasias do Colo | Fístula Anastomótica | Tomografia Computadorizada por Rx | Angiografia [SUMMARY]
[CONTENT] Colonic Neoplasms | Anastomotic Leak | Tomography | X-Ray Computed | Angiography | Neoplasias do Colo | Fístula Anastomótica | Tomografia Computadorizada por Rx | Angiografia [SUMMARY]
[CONTENT] Anastomotic Leak | Colectomy | Colonic Neoplasms | Computed Tomography Angiography | Humans | Laparoscopy | Radiologists | Surgeons [SUMMARY]
[CONTENT] Anastomotic Leak | Colectomy | Colonic Neoplasms | Computed Tomography Angiography | Humans | Laparoscopy | Radiologists | Surgeons [SUMMARY]
[CONTENT] Anastomotic Leak | Colectomy | Colonic Neoplasms | Computed Tomography Angiography | Humans | Laparoscopy | Radiologists | Surgeons [SUMMARY]
[CONTENT] Anastomotic Leak | Colectomy | Colonic Neoplasms | Computed Tomography Angiography | Humans | Laparoscopy | Radiologists | Surgeons [SUMMARY]
[CONTENT] Anastomotic Leak | Colectomy | Colonic Neoplasms | Computed Tomography Angiography | Humans | Laparoscopy | Radiologists | Surgeons [SUMMARY]
[CONTENT] Anastomotic Leak | Colectomy | Colonic Neoplasms | Computed Tomography Angiography | Humans | Laparoscopy | Radiologists | Surgeons [SUMMARY]
[CONTENT] precision colorectal surgery | colectomy ileocecal resection | cancer surgery central | leak right hemicolectomy | rectal cancer surgery [SUMMARY]
[CONTENT] precision colorectal surgery | colectomy ileocecal resection | cancer surgery central | leak right hemicolectomy | rectal cancer surgery [SUMMARY]
[CONTENT] precision colorectal surgery | colectomy ileocecal resection | cancer surgery central | leak right hemicolectomy | rectal cancer surgery [SUMMARY]
[CONTENT] precision colorectal surgery | colectomy ileocecal resection | cancer surgery central | leak right hemicolectomy | rectal cancer surgery [SUMMARY]
[CONTENT] precision colorectal surgery | colectomy ileocecal resection | cancer surgery central | leak right hemicolectomy | rectal cancer surgery [SUMMARY]
[CONTENT] precision colorectal surgery | colectomy ileocecal resection | cancer surgery central | leak right hemicolectomy | rectal cancer surgery [SUMMARY]
[CONTENT] mca | type | th | patients | sma | ct | root | rca | ica | right [SUMMARY]
[CONTENT] mca | type | th | patients | sma | ct | root | rca | ica | right [SUMMARY]
[CONTENT] mca | type | th | patients | sma | ct | root | rca | ica | right [SUMMARY]
[CONTENT] mca | type | th | patients | sma | ct | root | rca | ica | right [SUMMARY]
[CONTENT] mca | type | th | patients | sma | ct | root | rca | ica | right [SUMMARY]
[CONTENT] mca | type | th | patients | sma | ct | root | rca | ica | right [SUMMARY]
[CONTENT] clinical | surgery | mesenteric | need | rectal | superior mesenteric | cancer | anatomical | patients | right [SUMMARY]
[CONTENT] contrast | ml | mca | root | phase | ct | th | sma | distance | scanning [SUMMARY]
[CONTENT] type | mca | rca | cases | type rca | ica | range | deviates | sma | patients [SUMMARY]
[CONTENT] reduce | assessment | potentially | key | personalized | 3d ct | 3d | lymphadenectomy reduce intraoperative time | anatomy potentially complexity future | ct assessment potentially significantly [SUMMARY]
[CONTENT] mca | th | root | type | ct | patients | sma | contrast | 64 | 3d [SUMMARY]
[CONTENT] mca | th | root | type | ct | patients | sma | contrast | 64 | 3d [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] 103 | 2016 | 2021 [SUMMARY]
[CONTENT] D3 ||| 24.71±10.04 ||| one ||| eight ||| 64 | 62.1% | 39 | 37.9% ||| 58 | 56.3% ||| 37.8±12.8 mm | 29.5±15.7 mm ||| Henle | 66 | 64.1% | 16 | 15.5% | 18 | 17.5% [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| 103 | 2016 | 2021 ||| ||| D3 ||| 24.71±10.04 ||| one ||| eight ||| 64 | 62.1% | 39 | 37.9% ||| 58 | 56.3% ||| 37.8±12.8 mm | 29.5±15.7 mm ||| Henle | 66 | 64.1% | 16 | 15.5% | 18 | 17.5% ||| [SUMMARY]
[CONTENT] ||| 103 | 2016 | 2021 ||| ||| D3 ||| 24.71±10.04 ||| one ||| eight ||| 64 | 62.1% | 39 | 37.9% ||| 58 | 56.3% ||| 37.8±12.8 mm | 29.5±15.7 mm ||| Henle | 66 | 64.1% | 16 | 15.5% | 18 | 17.5% ||| [SUMMARY]
Methylphenidate normalizes frontocingulate underactivation during error processing in attention-deficit/hyperactivity disorder.
21664605
Children with attention-deficit/hyperactivity disorder (ADHD) have deficits in performance monitoring often improved with the indirect catecholamine agonist methylphenidate (MPH). We used functional magnetic resonance imaging to investigate the effects of single-dose MPH on activation of error processing brain areas in medication-naive boys with ADHD during a stop task that elicits 50% error rates.
BACKGROUND
Twelve medication-naive boys with ADHD were scanned twice, under either a single clinical dose of MPH or placebo, in a randomized, double-blind design while they performed an individually adjusted tracking stop task, designed to elicit 50% failures. Brain activation was compared within patients under either drug condition. To test for potential normalization effects of MPH, brain activation in ADHD patients under either drug condition was compared with that of 13 healthy age-matched boys.
METHODS
During failed inhibition, boys with ADHD under placebo relative to control subjects showed reduced brain activation in performance monitoring areas of dorsomedial and left ventrolateral prefrontal cortices, thalamus, cingulate, and parietal regions. MPH, relative to placebo, upregulated activation in these brain regions within patients and normalized all activation differences between patients and control subjects. During successful inhibition, MPH normalized reduced activation observed in patients under placebo compared with control subjects in parietotemporal and cerebellar regions.
RESULTS
MPH normalized brain dysfunction in medication-naive ADHD boys relative to control subjects in typical brain areas of performance monitoring, comprising left ventrolateral and dorsomedial frontal and parietal cortices. This could underlie the amelioration of MPH of attention and academic performance in ADHD.
CONCLUSIONS
[ "Adolescent", "Attention", "Attention Deficit Disorder with Hyperactivity", "Central Nervous System Stimulants", "Child", "Double-Blind Method", "Frontal Lobe", "Gyrus Cinguli", "Humans", "Magnetic Resonance Imaging", "Male", "Methylphenidate", "Neuropsychological Tests", "Psychomotor Performance" ]
3139835
null
null
null
null
Results
Performance: The probability of inhibition was about 50% in all subjects with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1]. A multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend for a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect in the standard deviation to go trials [F(2,34) = 5, p < .02], which were higher in patients under either medication condition compared with control subjects (p < .05). Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).
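The ~50% inhibition rate reported above is the intended outcome of the tracking (staircase) algorithm used in the stop task, which lengthens or shortens the go-to-stop delay after each stop trial (the paradigm is described in detail in the methods sections later in this record). Below is a minimal illustrative sketch of such a staircase; the 50 ms step size and the toy subject model are assumptions for this example, not the study's actual implementation (only the 250 ms starting delay is taken from the task description).

```python
import random

# Staircase for a tracking stop-signal task: after a successful inhibition the
# stop-signal delay (SSD) is increased (stopping becomes harder); after a
# failed inhibition it is decreased (stopping becomes easier). Performance
# therefore converges on roughly 50% successful stops.

STEP_MS = 50           # assumed step size
ssd_ms = 250           # initial go-to-stop interval, per the task description


def subject_inhibits(ssd_ms):
    """Toy subject model: longer delays make successful stopping less likely."""
    p_inhibit = max(0.05, min(0.95, 1.0 - ssd_ms / 500.0))
    return random.random() < p_inhibit


successes = 0
n_stop_trials = 200
for _ in range(n_stop_trials):
    if subject_inhibits(ssd_ms):
        successes += 1
        ssd_ms += STEP_MS                    # harder on the next stop trial
    else:
        ssd_ms = max(0, ssd_ms - STEP_MS)    # easier on the next stop trial

print(f"Inhibition rate: {successes / n_stop_trials:.2f}, final SSD: {ssd_ms} ms")
```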
null
null
[ "Subjects", "fMRI Paradigm: Stop Task", "fMRI Image Acquisition", "fMRI Image Analysis", "Performance", "Brain Activation", "Motion", "Within-Group Brain Activations", "Failed Stop–Go Contrast", "Successful Stop–Go Contrast", "ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions", "Failed Stop–Go Contrast", "Successful Stop–Go Contrast", "ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions", "Failed Stop–Go Contrast", "Successful Stop–Go Contrast", "Conjunction Analysis Between Within-Group and Between-Group ANOVAs" ]
[ "Twelve medication-naïve, right-handed boys aged 10 to 15 years (mean age = 13, SD = 1) who met clinical diagnostic criteria for the combined (inattentive/hyperactive) subtype of ADHD (DSM-IV) were recruited through clinics. Clinical diagnosis of ADHD was established through interviews with an experienced child psychiatrist (A-MM) using the standardized Maudsley Diagnostic Interview to check for presence or absence of diagnostic criteria for any mental disorder as set out by DSM-IV (30). Exclusion criteria were lifetime comorbidity with any other psychiatric disorder, except for conduct/oppositional defiant disorder (present in one patient), as well as learning disability and specific reading disorder, neurological abnormalities, epilepsy, drug or substance abuse, and previous exposure to stimulant medication. Patients with ADHD also had to score above cutoff for hyperactive/inattentive symptoms on the Strengths and Difficulties Questionnaire for Parents (SDQ) (31). Patients were scanned twice, in a randomized, counterbalanced fashion, 1 week apart, 1 hour after either .3 mg/kg of MPH administration or placebo (vitamin C, 100 mg).\nThirteen male right-handed adolescent boys in the age range of 11 to 16 years (mean age = 13, SD = 1) were recruited through advertisements in the same geographic areas of South London to ensure similar socioeconomic status and were scanned once. They scored below cutoff for behavioral problems in the SDQ and had no history of psychiatric disorder.\nAll participants were above the fifth percentile on the Raven progressive matrices performance IQ (32) (IQ mean estimate controls = 100, SD = 14; ADHD = 91, SD = 9) and paid £30 for participation. Parental and child informed consent/assent and approval from the local ethical committee was obtained.\nUnivariate analyses of variance (ANOVAs) showed no group differences between boys with and without ADHD for age [F(1,25) = 2, p = .2] but did for IQ [F(1,25) = 8, p < .009]. IQ is associated with ADHD in the general population (33,34). We purposely did not match groups for IQ because matching ADHD and control groups for IQ would have created unrepresentative groups and therefore be misguided (35). Furthermore, IQ was significantly negatively correlated with the SDQ scores for inattention and hyperactivity (r = −.5, p < .001). We did not covary for IQ because when groups are not randomly selected, covarying for a variable that differs between groups violates the standard assumptions for analysis of covariance. When the covariate is intrinsic to the condition, it becomes meaningless to “adjust” group effects for differences in the covariate because it would alter the group effect in potentially problematic ways, leading to spurious results (35,36).", "The rapid, mixed-trial, event-related fMRI design was practiced by subjects once before scanning. The visual tracking stop task requires withholding of a motor response to a go stimulus when it is followed unpredictably by a stop signal (8,37,38). The basic task is a choice reaction time task (left and right pointing arrows: go signals) with a mean intertrial-interval of 1.8 sec (156 go trials). In 20% of trials, pseudo-randomly interspersed, the go signals are followed (about 250 ms later) by arrows pointing upwards (stop signals), and subjects have to inhibit their motor responses (40 stop trials). 
A tracking algorithm changes the time interval between go-signal and stop-signal onsets according to each subject's inhibitory performance to ensure that the task is equally challenging for each individual and to provide 50% successful and 50% unsuccessful inhibition trials at every moment of the task.", "Gradient-echo echoplanar magnetic resonance imaging data were acquired on a GE Signa 1.5-T Horizon LX System (General Electric, Milwaukee, Wisconsin) at the Maudsley Hospital, London. A quadrature birdcage head coil was used for radio-frequency transmission and reception. During the 6-min run of the stop task, in each of 16 noncontiguous planes parallel to the anterior–posterior commissural, 196 T2*-weighted magnetic resonance images depicting blood oxygen–level dependent (BOLD) contrast covering the whole brain were acquired with echo time = 40 msec, repetition time = 1.8 sec, flip angle = 90°, in-plane resolution = 3.1 mm, slice thickness = 7 mm, slice skip = .7 mm, providing complete brain coverage.", "At the individual subject level, a standard general linear modeling approach was used to obtain estimates of the response size (beta) to each of the two stop task conditions (successful and unsuccessful stop trials) against an implicit baseline (go trials). Following transformation of the fMRI data for each individual into standard space and smoothing with a three-dimensional 7-mm full width at half maximum Gaussian filter, the experimental model was convolved for each condition with gamma variate functions having peak responses at 4 and 8 sec following stimulus onset to accommodate variability in BOLD response timing. By fitting these convolved model components to the time series at each voxel, beta estimates were obtained for each effect of interest. The standard errors of these beta estimates were computed nonparametrically using a bootstrap procedure designed to operate on time series data, containing serial dependencies, with repeated deterministic (experimentally determined) effects. This method is outlined in detail in a previous work (39). Two hundred bootstraps at each voxel were used to estimate parameter standard errors. Using the combined parameter estimates over all conditions, the mean fitted time series was also computed and, from the combined bootstrap parameter estimates for each bootstrap, the 95% confidence limits on the fitted time series was computed.\nThe second-level analysis proceeded by computing either the group differences (patients and controls) or the drug condition differences (placebo, MPH) within patients at each voxel and the standard error of this difference (using the bootstrap estimates derived earlier). The significance of these differences was then tested in three ways: 1) a simple parametric random effects (paired t) test, using only the group difference/placebo-MPH effect size differences; 2) a permutation test of the same random effects t statistic in which the null distribution was estimated by randomly swapping the signs of the differences (we used 40,000 permutations per voxel to obtain a confidence limit of .0007–.0013 for p value of .001); and 3) a mixed-effects test using both the effect size differences and their subject-level standard errors to accommodate first (subject) level heteroscedasticity (40). 
This was also conducted using 40,000 permutations per voxel.\nIn addition to voxelwise maps, cluster-level inference on the contrast (beta) values was performed at a family-wise error corrected threshold of p < .05 using the Threshold-Free Cluster Enhancement method proposed by Smith and Nichols (41). This cluster-level inference was also used for the within-group maps for each experimental condition.", "The probability of inhibition was about 50% in all subjects with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1].\nA multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend for a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect in the standard deviation to go trials [F(2,34) = 5, p < .02], which were higher in patients under either medication condition compared with control subjects (p < .05). Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).", " Motion Multivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].\nMultivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].\n Within-Group Brain Activations Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\nControl subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\n Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital 
and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\nActivation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\n Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\nControl subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\n Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and 
posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\nActivation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\n ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\nMPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. 
There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\n Successful Stop–Go Contrast No significant activation differences were observed between medication conditions.\nNo significant activation differences were observed between medication conditions.\n Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\nMPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\n Successful Stop–Go Contrast No significant activation differences were observed between medication conditions.\nNo significant activation differences were observed between medication conditions.\n ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). 
Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\n Successful Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. 
This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.\n Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\n Successful Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). 
To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.\n Conjunction Analysis Between Within-Group and Between-Group ANOVAs To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.\nTo test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). 
Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.", "Multivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].", " Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\nControl subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\n Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\nActivation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior 
temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).", "Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).", "Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).", " Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\nMPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. 
There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\n Successful Stop–Go Contrast No significant activation differences were observed between medication conditions.\nNo significant activation differences were observed between medication conditions.", "MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).", "No significant activation differences were observed between medication conditions.", " Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). 
Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\n Successful Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.", "To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3." ]
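The conjunction analysis described in this passage amounts to intersecting two significance maps and reporting the surviving clusters. A minimal sketch under that reading, with random masks standing in for the real thresholded statistical maps:

```python
# Minimal sketch of a conjunction map: keep voxels significant in both the
# within-patient (MPH > placebo) and the between-group (controls > ADHD placebo)
# analyses, then label the surviving clusters. The random masks below are
# placeholders for the real thresholded statistical maps.
import numpy as np
from scipy import ndimage

def conjunction(mask_a, mask_b):
    """Boolean AND of two significance masks plus connected-component labels."""
    both = np.logical_and(mask_a, mask_b)
    labels, n_clusters = ndimage.label(both)  # face connectivity by default
    return both, labels, n_clusters

rng = np.random.default_rng(1)
mask_mph_gt_placebo = rng.random((20, 20, 20)) > 0.8
mask_controls_gt_adhd = rng.random((20, 20, 20)) > 0.8
both, labels, n = conjunction(mask_mph_gt_placebo, mask_controls_gt_adhd)
print(f"{int(both.sum())} conjunction voxels in {n} clusters")
```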
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Methods and Materials", "Subjects", "fMRI Paradigm: Stop Task", "fMRI Image Acquisition", "fMRI Image Analysis", "Results", "Performance", "Brain Activation", "Motion", "Within-Group Brain Activations", "Failed Stop–Go Contrast", "Successful Stop–Go Contrast", "ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions", "Failed Stop–Go Contrast", "Successful Stop–Go Contrast", "ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions", "Failed Stop–Go Contrast", "Successful Stop–Go Contrast", "Conjunction Analysis Between Within-Group and Between-Group ANOVAs", "Discussion" ]
[ " Subjects Twelve medication-naïve, right-handed boys aged 10 to 15 years (mean age = 13, SD = 1) who met clinical diagnostic criteria for the combined (inattentive/hyperactive) subtype of ADHD (DSM-IV) were recruited through clinics. Clinical diagnosis of ADHD was established through interviews with an experienced child psychiatrist (A-MM) using the standardized Maudsley Diagnostic Interview to check for presence or absence of diagnostic criteria for any mental disorder as set out by DSM-IV (30). Exclusion criteria were lifetime comorbidity with any other psychiatric disorder, except for conduct/oppositional defiant disorder (present in one patient), as well as learning disability and specific reading disorder, neurological abnormalities, epilepsy, drug or substance abuse, and previous exposure to stimulant medication. Patients with ADHD also had to score above cutoff for hyperactive/inattentive symptoms on the Strengths and Difficulties Questionnaire for Parents (SDQ) (31). Patients were scanned twice, in a randomized, counterbalanced fashion, 1 week apart, 1 hour after either .3 mg/kg of MPH administration or placebo (vitamin C, 100 mg).\nThirteen male right-handed adolescent boys in the age range of 11 to 16 years (mean age = 13, SD = 1) were recruited through advertisements in the same geographic areas of South London to ensure similar socioeconomic status and were scanned once. They scored below cutoff for behavioral problems in the SDQ and had no history of psychiatric disorder.\nAll participants were above the fifth percentile on the Raven progressive matrices performance IQ (32) (IQ mean estimate controls = 100, SD = 14; ADHD = 91, SD = 9) and paid £30 for participation. Parental and child informed consent/assent and approval from the local ethical committee were obtained.\nUnivariate analyses of variance (ANOVAs) showed no group differences between boys with and without ADHD for age [F(1,25) = 2, p = .2] but did for IQ [F(1,25) = 8, p < .009]. IQ is associated with ADHD in the general population (33,34). We purposely did not match groups for IQ because matching ADHD and control groups for IQ would have created unrepresentative groups and therefore be misguided (35). Furthermore, IQ was significantly negatively correlated with the SDQ scores for inattention and hyperactivity (r = −.5, p < .001). We did not covary for IQ because when groups are not randomly selected, covarying for a variable that differs between groups violates the standard assumptions for analysis of covariance. When the covariate is intrinsic to the condition, it becomes meaningless to “adjust” group effects for differences in the covariate because it would alter the group effect in potentially problematic ways, leading to spurious results (35,36).\n fMRI Paradigm: Stop Task The rapid, mixed-trial, event-related fMRI design was practiced by subjects once before scanning. The visual tracking stop task requires withholding of a motor response to a go stimulus when it is followed unpredictably by a stop signal (8,37,38). The basic task is a choice reaction time task (left and right pointing arrows: go signals) with a mean intertrial-interval of 1.8 sec (156 go trials). In 20% of trials, pseudo-randomly interspersed, the go signals are followed (about 250 ms later) by arrows pointing upwards (stop signals), and subjects have to inhibit their motor responses (40 stop trials). A tracking algorithm changes the time interval between go-signal and stop-signal onsets according to each subject's inhibitory performance to ensure that the task is equally challenging for each individual and to provide 50% successful and 50% unsuccessful inhibition trials at every moment of the task.
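The tracking rule described above (lengthening the go-to-stop delay after successful inhibitions and shortening it after failures, so that inhibition stays near 50%) can be illustrated with a small staircase sketch. The 50 ms step size, delay bounds, and toy subject model below are assumptions for illustration, not parameters reported in the text:

```python
# Toy staircase sketch of the tracking idea: the stop-signal delay gets longer
# after a successful stop (harder next time) and shorter after a failed stop
# (easier next time), so inhibition hovers around 50%. Step size, bounds, and
# the simulated subject are assumptions, not values taken from the task.
import random

def update_stop_signal_delay(delay_ms, stopped, step_ms=50, lo=50, hi=1000):
    delay_ms = delay_ms + step_ms if stopped else delay_ms - step_ms
    return max(lo, min(hi, delay_ms))

random.seed(0)
delay, n_stop_trials, n_inhibited = 250, 40, 0
for _ in range(n_stop_trials):
    p_stop = min(0.95, max(0.05, 1.0 - delay / 600.0))  # toy subject model
    stopped = random.random() < p_stop
    n_inhibited += stopped
    delay = update_stop_signal_delay(delay, stopped)
print(f"final delay: {delay} ms, inhibition rate: {n_inhibited / n_stop_trials:.2f}")
```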
\n fMRI Image Acquisition Gradient-echo echoplanar magnetic resonance imaging data were acquired on a GE Signa 1.5-T Horizon LX System (General Electric, Milwaukee, Wisconsin) at the Maudsley Hospital, London. A quadrature birdcage head coil was used for radio-frequency transmission and reception. During the 6-min run of the stop task, in each of 16 noncontiguous planes parallel to the anterior–posterior commissural, 196 T2*-weighted magnetic resonance images depicting blood oxygen–level dependent (BOLD) contrast covering the whole brain were acquired with echo time = 40 msec, repetition time = 1.8 sec, flip angle = 90°, in-plane resolution = 3.1 mm, slice thickness = 7 mm, slice skip = .7 mm, providing complete brain coverage.\n fMRI Image Analysis At the individual subject level, a standard general linear modeling approach was used to obtain estimates of the response size (beta) to each of the two stop task conditions (successful and unsuccessful stop trials) against an implicit baseline (go trials). Following transformation of the fMRI data for each individual into standard space and smoothing with a three-dimensional 7-mm full width at half maximum Gaussian filter, the experimental model was convolved for each condition with gamma variate functions having peak responses at 4 and 8 sec following stimulus onset to accommodate variability in BOLD response timing. By fitting these convolved model components to the time series at each voxel, beta estimates were obtained for each effect of interest. The standard errors of these beta estimates were computed nonparametrically using a bootstrap procedure designed to operate on time series data, containing serial dependencies, with repeated deterministic (experimentally determined) effects. This method is outlined in detail in a previous work (39). Two hundred bootstraps at each voxel were used to estimate parameter standard errors.
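A rough, self-contained sketch of the individual-level model described here: two gamma-variate regressors peaking near 4 s and 8 s are fit to a single voxel's time series by least squares, and the betas are resampled with a simple residual block bootstrap. This is a simplification of the cited time-series bootstrap (39); the gamma parameterization, block length, and event timings are assumptions.

```python
# Rough sketch (not the published pipeline): build two gamma-variate regressors
# peaking near 4 s and 8 s, fit them by least squares to one voxel's simulated
# time series, and bootstrap the betas with a residual block bootstrap.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 1.8, 196
frame_times = np.arange(n_scans) * TR

def gamma_hrf(peak_s, length_s=24.0, dt=0.1):
    tt = np.arange(0, length_s, dt)
    h = gamma.pdf(tt, a=peak_s + 1.0, scale=1.0)  # mode of Gamma(a, 1) is a - 1
    return h / h.sum(), dt

def make_regressor(onsets_s, peak_s):
    h, dt = gamma_hrf(peak_s)
    hi_res = np.zeros(int(n_scans * TR / dt) + len(h))
    for onset in onsets_s:
        hi_res[int(onset / dt)] += 1.0                 # stick function at onsets
    conv = np.convolve(hi_res, h)[: len(hi_res)]       # convolve with the HRF
    return np.interp(frame_times, np.arange(len(conv)) * dt, conv)

rng = np.random.default_rng(0)
onsets = np.sort(rng.uniform(0, n_scans * TR - 20, 20))
X = np.column_stack([make_regressor(onsets, 4.0),
                     make_regressor(onsets, 8.0),
                     np.ones(n_scans)])
y = X @ np.array([1.5, 0.5, 100.0]) + rng.standard_normal(n_scans)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Residual block bootstrap (200 resamples), a crude stand-in for the cited method.
block = 10
boot = np.empty((200, X.shape[1]))
for b in range(200):
    starts = rng.integers(0, n_scans - block, n_scans // block + 1)
    r_star = np.concatenate([resid[s:s + block] for s in starts])[:n_scans]
    boot[b] = np.linalg.lstsq(X, X @ beta + r_star, rcond=None)[0]
print("beta:", beta[:2], "bootstrap SE:", boot.std(axis=0)[:2])
```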
Using the combined parameter estimates over all conditions, the mean fitted time series was also computed and, from the combined bootstrap parameter estimates for each bootstrap, the 95% confidence limits on the fitted time series was computed.\nThe second-level analysis proceeded by computing either the group differences (patients and controls) or the drug condition differences (placebo, MPH) within patients at each voxel and the standard error of this difference (using the bootstrap estimates derived earlier). The significance of these differences was then tested in three ways: 1) a simple parametric random effects (paired t) test, using only the group difference/placebo-MPH effect size differences; 2) a permutation test of the same random effects t statistic in which the null distribution was estimated by randomly swapping the signs of the differences (we used 40,000 permutations per voxel to obtain a confidence limit of .0007–.0013 for p value of .001); and 3) a mixed-effects test using both the effect size differences and their subject-level standard errors to accommodate first (subject) level heteroscedasticity (40). This was also conducted using 40,000 permutations per voxel.\nIn addition to voxelwise maps, cluster-level inference on the contrast (beta) values was performed at a family-wise error corrected threshold of p < .05 using the Threshold-Free Cluster Enhancement method proposed by Smith and Nichols (41). This cluster-level inference was also used for the within-group maps for each experimental condition.\nAt the individual subject level, a standard general linear modeling approach was used to obtain estimates of the response size (beta) to each of the two stop task conditions (successful and unsuccessful stop trials) against an implicit baseline (go trials). Following transformation of the fMRI data for each individual into standard space and smoothing with a three-dimensional 7-mm full width at half maximum Gaussian filter, the experimental model was convolved for each condition with gamma variate functions having peak responses at 4 and 8 sec following stimulus onset to accommodate variability in BOLD response timing. By fitting these convolved model components to the time series at each voxel, beta estimates were obtained for each effect of interest. The standard errors of these beta estimates were computed nonparametrically using a bootstrap procedure designed to operate on time series data, containing serial dependencies, with repeated deterministic (experimentally determined) effects. This method is outlined in detail in a previous work (39). Two hundred bootstraps at each voxel were used to estimate parameter standard errors. Using the combined parameter estimates over all conditions, the mean fitted time series was also computed and, from the combined bootstrap parameter estimates for each bootstrap, the 95% confidence limits on the fitted time series was computed.\nThe second-level analysis proceeded by computing either the group differences (patients and controls) or the drug condition differences (placebo, MPH) within patients at each voxel and the standard error of this difference (using the bootstrap estimates derived earlier). 
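The second of the three significance tests described next, sign-flipping permutation of the paired placebo-MPH differences, can be sketched for a single voxel as follows (the study used 40,000 permutations per voxel; fewer, with simulated data, are used here purely for illustration):

```python
# Toy single-voxel version of a sign-flipping permutation test on paired
# placebo-MPH differences. Subject count, permutation number, and data are
# placeholders, not the study's values.
import numpy as np

def sign_flip_p(differences, n_perm=10_000, rng=None):
    """Two-sided permutation p-value for the mean paired difference."""
    if rng is None:
        rng = np.random.default_rng(0)
    observed = differences.mean()
    flips = rng.choice([-1.0, 1.0], size=(n_perm, differences.size))
    null_means = (flips * differences).mean(axis=1)
    return float((np.abs(null_means) >= abs(observed)).mean())

rng = np.random.default_rng(42)
diff_one_voxel = rng.standard_normal(12) + 0.8  # 12 patients, simulated effect
print("permutation p =", sign_flip_p(diff_one_voxel, rng=rng))
```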
The significance of these differences was then tested in three ways: 1) a simple parametric random effects (paired t) test, using only the group difference/placebo-MPH effect size differences; 2) a permutation test of the same random effects t statistic in which the null distribution was estimated by randomly swapping the signs of the differences (we used 40,000 permutations per voxel to obtain a confidence limit of .0007–.0013 for p value of .001); and 3) a mixed-effects test using both the effect size differences and their subject-level standard errors to accommodate first (subject) level heteroscedasticity (40). This was also conducted using 40,000 permutations per voxel.\nIn addition to voxelwise maps, cluster-level inference on the contrast (beta) values was performed at a family-wise error corrected threshold of p < .05 using the Threshold-Free Cluster Enhancement method proposed by Smith and Nichols (41). This cluster-level inference was also used for the within-group maps for each experimental condition.", "Twelve medication-naïve, right-handed boys aged 10 to 15 years (mean age = 13, SD = 1) who met clinical diagnostic criteria for the combined (inattentive/hyperactive) subtype of ADHD (DSM-IV) were recruited through clinics. Clinical diagnosis of ADHD was established through interviews with an experienced child psychiatrist (A-MM) using the standardized Maudsley Diagnostic Interview to check for presence or absence of diagnostic criteria for any mental disorder as set out by DSM-IV (30). Exclusion criteria were lifetime comorbidity with any other psychiatric disorder, except for conduct/oppositional defiant disorder (present in one patient), as well as learning disability and specific reading disorder, neurological abnormalities, epilepsy, drug or substance abuse, and previous exposure to stimulant medication. Patients with ADHD also had to score above cutoff for hyperactive/inattentive symptoms on the Strengths and Difficulties Questionnaire for Parents (SDQ) (31). Patients were scanned twice, in a randomized, counterbalanced fashion, 1 week apart, 1 hour after either .3 mg/kg of MPH administration or placebo (vitamin C, 100 mg).\nThirteen male right-handed adolescent boys in the age range of 11 to 16 years (mean age = 13, SD = 1) were recruited through advertisements in the same geographic areas of South London to ensure similar socioeconomic status and were scanned once. They scored below cutoff for behavioral problems in the SDQ and had no history of psychiatric disorder.\nAll participants were above the fifth percentile on the Raven progressive matrices performance IQ (32) (IQ mean estimate controls = 100, SD = 14; ADHD = 91, SD = 9) and paid £30 for participation. Parental and child informed consent/assent and approval from the local ethical committee was obtained.\nUnivariate analyses of variance (ANOVAs) showed no group differences between boys with and without ADHD for age [F(1,25) = 2, p = .2] but did for IQ [F(1,25) = 8, p < .009]. IQ is associated with ADHD in the general population (33,34). We purposely did not match groups for IQ because matching ADHD and control groups for IQ would have created unrepresentative groups and therefore be misguided (35). Furthermore, IQ was significantly negatively correlated with the SDQ scores for inattention and hyperactivity (r = −.5, p < .001). We did not covary for IQ because when groups are not randomly selected, covarying for a variable that differs between groups violates the standard assumptions for analysis of covariance. 
When the covariate is intrinsic to the condition, it becomes meaningless to “adjust” group effects for differences in the covariate because it would alter the group effect in potentially problematic ways, leading to spurious results (35,36).", "The rapid, mixed-trial, event-related fMRI design was practiced by subjects once before scanning. The visual tracking stop task requires withholding of a motor response to a go stimulus when it is followed unpredictably by a stop signal (8,37,38). The basic task is a choice reaction time task (left and right pointing arrows: go signals) with a mean intertrial-interval of 1.8 sec (156 go trials). In 20% of trials, pseudo-randomly interspersed, the go signals are followed (about 250 ms later) by arrows pointing upwards (stop signals), and subjects have to inhibit their motor responses (40 stop trials). A tracking algorithm changes the time interval between go-signal and stop-signal onsets according to each subject's inhibitory performance to ensure that the task is equally challenging for each individual and to provide 50% successful and 50% unsuccessful inhibition trials at every moment of the task.", "Gradient-echo echoplanar magnetic resonance imaging data were acquired on a GE Signa 1.5-T Horizon LX System (General Electric, Milwaukee, Wisconsin) at the Maudsley Hospital, London. A quadrature birdcage head coil was used for radio-frequency transmission and reception. During the 6-min run of the stop task, in each of 16 noncontiguous planes parallel to the anterior–posterior commissural, 196 T2*-weighted magnetic resonance images depicting blood oxygen–level dependent (BOLD) contrast covering the whole brain were acquired with echo time = 40 msec, repetition time = 1.8 sec, flip angle = 90°, in-plane resolution = 3.1 mm, slice thickness = 7 mm, slice skip = .7 mm, providing complete brain coverage.", "At the individual subject level, a standard general linear modeling approach was used to obtain estimates of the response size (beta) to each of the two stop task conditions (successful and unsuccessful stop trials) against an implicit baseline (go trials). Following transformation of the fMRI data for each individual into standard space and smoothing with a three-dimensional 7-mm full width at half maximum Gaussian filter, the experimental model was convolved for each condition with gamma variate functions having peak responses at 4 and 8 sec following stimulus onset to accommodate variability in BOLD response timing. By fitting these convolved model components to the time series at each voxel, beta estimates were obtained for each effect of interest. The standard errors of these beta estimates were computed nonparametrically using a bootstrap procedure designed to operate on time series data, containing serial dependencies, with repeated deterministic (experimentally determined) effects. This method is outlined in detail in a previous work (39). Two hundred bootstraps at each voxel were used to estimate parameter standard errors. Using the combined parameter estimates over all conditions, the mean fitted time series was also computed and, from the combined bootstrap parameter estimates for each bootstrap, the 95% confidence limits on the fitted time series was computed.\nThe second-level analysis proceeded by computing either the group differences (patients and controls) or the drug condition differences (placebo, MPH) within patients at each voxel and the standard error of this difference (using the bootstrap estimates derived earlier). 
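As a hedged illustration of how subject-level standard errors can enter a second-level statistic (in the spirit of the mixed-effects test described next, although the exact weighting used in reference 40 may differ), a simple inverse-variance weighted group difference looks like this:

```python
# Hedged sketch: fold each subject's bootstrap standard error into the group
# statistic by inverse-variance weighting, so noisier subjects contribute less.
# The data below are simulated placeholders.
import numpy as np

def weighted_group_stat(diffs, subject_ses):
    """Inverse-variance weighted mean difference and a z-like ratio."""
    w = 1.0 / np.square(subject_ses)
    mean_diff = np.sum(w * diffs) / np.sum(w)
    se_of_mean = np.sqrt(1.0 / np.sum(w))
    return mean_diff, mean_diff / se_of_mean

rng = np.random.default_rng(7)
diffs = rng.standard_normal(12) + 0.5    # per-subject placebo-MPH differences
ses = rng.uniform(0.5, 1.5, size=12)     # per-subject bootstrap standard errors
print(weighted_group_stat(diffs, ses))
```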
The significance of these differences was then tested in three ways: 1) a simple parametric random effects (paired t) test, using only the group difference/placebo-MPH effect size differences; 2) a permutation test of the same random effects t statistic in which the null distribution was estimated by randomly swapping the signs of the differences (we used 40,000 permutations per voxel to obtain a confidence limit of .0007–.0013 for p value of .001); and 3) a mixed-effects test using both the effect size differences and their subject-level standard errors to accommodate first (subject) level heteroscedasticity (40). This was also conducted using 40,000 permutations per voxel.\nIn addition to voxelwise maps, cluster-level inference on the contrast (beta) values was performed at a family-wise error corrected threshold of p < .05 using the Threshold-Free Cluster Enhancement method proposed by Smith and Nichols (41). This cluster-level inference was also used for the within-group maps for each experimental condition.", " Performance The probability of inhibition was about 50% in all subjects with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1].\nA multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend for a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect in the standard deviation to go trials [F(2,34) = 5, p < .02], which were higher in patients under either medication condition compared with control subjects (p < .05). Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).\nThe probability of inhibition was about 50% in all subjects with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1].\nA multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend for a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect in the standard deviation to go trials [F(2,34) = 5, p < .02], which were higher in patients under either medication condition compared with control subjects (p < .05). Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).", "The probability of inhibition was about 50% in all subjects with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1].\nA multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend for a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect in the standard deviation to go trials [F(2,34) = 5, p < .02], which were higher in patients under either medication condition compared with control subjects (p < .05). 
Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).", " Motion Multivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].\n Within-Group Brain Activations Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\n Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\n ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\nMPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. 
There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\nMPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\n Successful Stop–Go Contrast No significant activation differences were observed between medication conditions.\nNo significant activation differences were observed between medication conditions.\n ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). 
Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\n Successful Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.\n Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). 
Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\n Successful Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.\n Conjunction Analysis Between Within-Group and Between-Group ANOVAs To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). 
Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.\nTo test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.", "Multivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].", " Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\nControl subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\n Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left 
cerebellum (Figure S1 in Supplement 1).\nActivation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).", "Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).", "Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).", " Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. 
There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\nMPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\n Successful Stop–Go Contrast No significant activation differences were observed between medication conditions.\nNo significant activation differences were observed between medication conditions.", "MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).", "No significant activation differences were observed between medication conditions.", " Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). 
Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\n Successful Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.", "To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.", "During error trials, ADHD boys under placebo compared with healthy control subjects showed significant underactivation in a typical error processing and performance monitoring network comprising dMFC, left IFC, thalamus, posterior cingulate/precuneus, and inferior temporoparietal regions. Among patients, MPH compared with placebo significantly upregulated activation in overlapping medial frontal, IFC, and parietal regions as well as the lenticular nucleus. 
Under MPH, brain activation differences between control subjects and ADHD patients were no longer observed. Reduced fronto-thalamo-parietal activation that was normalized with MPH was, furthermore, negatively associated with faster post-error reaction times in patients, which were trendwise slowed with MPH.\nDuring successful stop trials, ADHD boys showed underactivation in a right hemispheric network of medial temporal and inferior parietal brain regions and, at a more lenient threshold, in small clusters of bilateral IFC, thalamus, and pre-SMA. Although within-patient comparison between MPH and placebo did not show significant activation differences, all underactivations in patients relative to control subjects under placebo were normalized with a single dose of MPH.\nThe dMFC, comprising Brodmann areas 8, 6, and 32, including pre-SMA and ACC, is a typical region of error processing and performance monitoring in adults (37,43–48) and children (29,38,49). We have previously found this region to be underactivated in children with ADHD during oddball (50) and switch tasks (4). Errors indicate violation of a reward prediction (i.e., positive performance) and have been linked to midbrain dopamine (51). Normalization with MPH of underfunctioning of this region in ADHD is in line with the notion that phasic dopamine response modulates error-related mesial frontal activation (52,53). These findings extend evidence for upregulation with acute and chronic doses of MPH in previously medicated patients with ADHD in a more rostral ACC location during tasks of cognitive control (22,28,54).\nActivation in dMFC during errors triggers additional activation in functionally interconnected left IFC, as well as striatal, premotor, and parietal components of the error monitoring system, leading to post-error performance adjustments (43–45,48). IFC underactivation is one of the most consistent findings in fMRI studies in patients with ADHD, with right IFC dysfunction typically observed during inhibitory performance (7,8,11,55), in line with its role in inhibition (37,56), and left IFC during stop errors (4,29) as well as during flexible, selective, or sustained attention (4,9,26,50,57), in line with its role for performance monitoring (44,45,48,49) and saliency processing (58,59). IFC dysfunction is furthermore a disorder-specific neurofunctional deficit compared with patients with conduct (6,50,57,60) and obsessive compulsive (4) disorders. MPH thus appears to modulate an important neurofunctional biomarker of ADHD. The more predominantly left-hemispheric upregulation effect during errors may suggest a stronger effect of MPH on performance monitoring than inhibitory function in ADHD. Left IFC upregulation has previously been observed in ADHD patients in the context of an attention-demanding time discrimination task after acute (27) and 6 weeks of MPH treatment during interference inhibition (54). Structural studies have shown more normal cortical thinning in left IFC in psychostimulant-medicated compared with unmedicated ADHD children (61). Together, this raises the speculation that MPH may have a lateralized upregulating effect on left IFC structure and function.\nPosterior thalamic regions have been associated both with motor response inhibition (62) and performance monitoring (48,63,64). 
The finding that MPH normalizes activation in this region is in line with speculation of this region's involvement in the modulation of the dopaminergic error signal (63,65,66).\nThe fact that lower dMFC, IFC, and thalamic activation in ADHD patients was associated with faster post-error slowing, both of which were enhanced by MPH, reinforces the role of this network for abnormal error monitoring in ADHD. Posterior cingulate and precuneus are connected with MFC and parietal areas and form part of the performance monitoring network (47,49,67,68), mediating visual spatial attention to saliency (69,70) and the integration of performance outcome with attentional modulation (48). The fact that these regions were underactivated during both inhibition and its failure is in line with a generic attention role of these areas. In line with this, we and others have previously observed underactivation in ADHD patients in these regions during inhibition errors (4,6,8,60), as well as during other salient stimuli such as oddball, novel or incongruent targets (10,26,50,71,72).\nNormalization with MPH of reduced activation in typical frontoparietal regions of saliency processing and performance monitoring is consistent with the dopamine-deficiency hypothesis of ADHD given that dopamine agonists enhance stimulus salience (73). It is also in line with our previous findings of upregulation with MPH of posterior cingulate/precuneus in the same group of medication naive boys with ADHD during a target detection task, resulting in improved attention (26), and during an attention demanding time discrimination task (27). To our knowledge, normalization of inferior parietal activation with MPH has only recently been observed in ADHD patients, in the context of sustained attention (26) and interference inhibition (54).\nDuring successful stop trials, MPH also normalized underactivation in the cerebellum, which, together with subthalamic nucleus, caudate, and IFG, forms a neurofunctional network of motor response inhibition (38). These findings extend previous evidence for cerebellum upregulation with MPH in ADHD patients during interference inhibition (54) and time estimation (27).\nWithin patients, MPH also enhanced activation of caudate and putamen. This is in line with previous fMRI findings of caudate upregulation in ADHD patients after acute and chronic doses of MPH during inhibition and attention tasks (22,24,54) and is likely associated with the known effect of MPH on striatal dopamine transporter blockage (14,15).\nThe findings of more pronounced normalization effects of MPH on abnormal performance monitoring than inhibition networks could suggest that MPH enhances generic attention and performance monitoring functions more than inhibitory capacity. This would be in line with the behavioral effect of MPH of modulating go and post-error reaction times, but not inhibition speed, which, furthermore were correlated with the reduced frontothalamic error-processing activation that was normalized with MPH. Relative to control subjects, patients only significantly differed in intrasubject response variability. Small subject numbers, a relatively older child group, and fMRI task design restrictions may have been responsible for minor behavior differences. 
The findings of brain dysfunctions in boys with ADHD and their normalization under the clinical dose of MPH despite minor performance differences and only trend-level improvements with MPH show that brain activation is more sensitive than performance to detect both abnormalities and pharmacologic effects. This is in line with previous findings of marked brain dysfunctions in ADHD adolescents despite no stop task impairment (7,8,50) and higher sensitivity of brain activation than behavior to show pharmacologic effects of MPH in ADHD (24,26–28,54,74).\nMPH prevents the reuptake of catecholamines from the synaptic cleft by blocking dopamine and norepinephrine transporters (DAT/NET) (75,76), with higher affinity for the former (77,78). In healthy adults, MPH blocks 60% to 70% of striatal DAT in a dose-dependent manner, increasing extracellular levels of dopamine in striatum (75,79–82), as well as in frontal, thalamic, and temporal regions (83). The upregulating effects on basal ganglia, thalamic, and anterior cingulate activation were therefore likely mediated by mesolimbic striatocingulate dopaminergic pathways known to modulate error monitoring systems (63,64). In frontal regions, however, MPH upregulates noradrenaline to the same or greater extent than dopamine (84–86), via reuptake inhibition of NET that clear up both dopamine and noradrenaline (85,87–89). The upregulating effects on frontal activation, therefore, may have been mediated by enhanced catecholamine neurotransmission, in line with recent evidence that noradrenaline also plays a role in error monitoring (66,90).\nA limitation of the study is that patients were tested twice, whereas control subjects were only scanned once, for ethical and financial reasons. Practice effects, however, were overcome by the counterbalanced design. Another limitation is the relatively small sample size. Minimum numbers of 15 to 20 participants have been suggested for fMRI studies (91). Repeated-measures designs, however, are statistically more powerful than independent data sets, which makes the within-subject ANOVA more robust.\nTo our knowledge, this is the first study to show that a single dose of MPH in ADHD upregulates and normalizes the underfunctioning of dMFC, left IFC, posterior cingulate, and parietal regions that in concert play an important role in error processing. The normalization findings of these key regions of both performance monitoring and ADHD dysfunction reinforce the association between dopaminergic neurotransmission abnormalities, ADHD, and poor performance monitoring and may underlie the behavioral effects of improving attention and school performance in boys with ADHD." ]
[ "materials|methods", null, null, null, null, "results", null, null, null, null, null, null, null, null, null, null, null, null, null, "discussion" ]
[ "Attention-deficit/hyperactivity disorder (ADHD)", "error processing", "methylphenidate", "motor response inhibition", "performance monitoring", "stop task" ]
Methods and Materials: Subjects Twelve medication-naïve, right-handed boys aged 10 to 15 years (mean age = 13, SD = 1) who met clinical diagnostic criteria for the combined (inattentive/hyperactive) subtype of ADHD (DSM-IV) were recruited through clinics. Clinical diagnosis of ADHD was established through interviews with an experienced child psychiatrist (A-MM) using the standardized Maudsley Diagnostic Interview to check for presence or absence of diagnostic criteria for any mental disorder as set out by DSM-IV (30). Exclusion criteria were lifetime comorbidity with any other psychiatric disorder, except for conduct/oppositional defiant disorder (present in one patient), as well as learning disability and specific reading disorder, neurological abnormalities, epilepsy, drug or substance abuse, and previous exposure to stimulant medication. Patients with ADHD also had to score above cutoff for hyperactive/inattentive symptoms on the Strengths and Difficulties Questionnaire for Parents (SDQ) (31). Patients were scanned twice, in a randomized, counterbalanced fashion, 1 week apart, 1 hour after either .3 mg/kg of MPH administration or placebo (vitamin C, 100 mg). Thirteen male right-handed adolescent boys in the age range of 11 to 16 years (mean age = 13, SD = 1) were recruited through advertisements in the same geographic areas of South London to ensure similar socioeconomic status and were scanned once. They scored below cutoff for behavioral problems in the SDQ and had no history of psychiatric disorder. All participants were above the fifth percentile on the Raven progressive matrices performance IQ (32) (IQ mean estimate controls = 100, SD = 14; ADHD = 91, SD = 9) and paid £30 for participation. Parental and child informed consent/assent and approval from the local ethical committee was obtained. Univariate analyses of variance (ANOVAs) showed no group differences between boys with and without ADHD for age [F(1,25) = 2, p = .2] but did for IQ [F(1,25) = 8, p < .009]. IQ is associated with ADHD in the general population (33,34). We purposely did not match groups for IQ because matching ADHD and control groups for IQ would have created unrepresentative groups and therefore be misguided (35). Furthermore, IQ was significantly negatively correlated with the SDQ scores for inattention and hyperactivity (r = −.5, p < .001). We did not covary for IQ because when groups are not randomly selected, covarying for a variable that differs between groups violates the standard assumptions for analysis of covariance. When the covariate is intrinsic to the condition, it becomes meaningless to “adjust” group effects for differences in the covariate because it would alter the group effect in potentially problematic ways, leading to spurious results (35,36). fMRI Paradigm: Stop Task The rapid, mixed-trial, event-related fMRI design was practiced by subjects once before scanning. The visual tracking stop task requires withholding of a motor response to a go stimulus when it is followed unpredictably by a stop signal (8,37,38). The basic task is a choice reaction time task (left and right pointing arrows: go signals) with a mean intertrial-interval of 1.8 sec (156 go trials). In 20% of trials, pseudo-randomly interspersed, the go signals are followed (about 250 ms later) by arrows pointing upwards (stop signals), and subjects have to inhibit their motor responses (40 stop trials). A tracking algorithm changes the time interval between go-signal and stop-signal onsets according to each subject's inhibitory performance to ensure that the task is equally challenging for each individual and to provide 50% successful and 50% unsuccessful inhibition trials at every moment of the task.
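The tracking rule described above can be illustrated with a short sketch. This is not the study's code; the starting delay, step size, and bounds below are assumed values used only for demonstration.

    # Illustrative staircase for a tracking stop task: the stop-signal delay (SSD)
    # is raised after a successful inhibition (making stopping harder) and lowered
    # after a failed one, so accuracy converges on roughly 50%.
    import random

    def update_ssd(ssd_ms, inhibited, step_ms=50, min_ms=50, max_ms=900):
        """Return the go-to-stop interval for the next stop trial."""
        ssd_ms = ssd_ms + step_ms if inhibited else ssd_ms - step_ms
        return max(min_ms, min(max_ms, ssd_ms))

    ssd = 250.0                      # starting go-to-stop interval (ms), assumed
    successes = 0
    n_stop_trials = 40
    for _ in range(n_stop_trials):
        # Toy subject model: longer delays make stopping less likely.
        p_inhibit = max(0.05, min(0.95, 1.0 - ssd / 600.0))
        inhibited = random.random() < p_inhibit
        successes += inhibited
        ssd = update_ssd(ssd, inhibited)
    print(f"final SSD = {ssd:.0f} ms, inhibition rate = {successes / n_stop_trials:.2f}")

Raising the delay after each successful stop and lowering it after each failed stop makes stopping alternately harder and easier, which is what keeps each subject near 50% successful inhibition.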
fMRI Image Acquisition Gradient-echo echoplanar magnetic resonance imaging data were acquired on a GE Signa 1.5-T Horizon LX System (General Electric, Milwaukee, Wisconsin) at the Maudsley Hospital, London. A quadrature birdcage head coil was used for radio-frequency transmission and reception. During the 6-min run of the stop task, in each of 16 noncontiguous planes parallel to the anterior–posterior commissure, 196 T2*-weighted magnetic resonance images depicting blood oxygen–level dependent (BOLD) contrast covering the whole brain were acquired with echo time = 40 msec, repetition time = 1.8 sec, flip angle = 90°, in-plane resolution = 3.1 mm, slice thickness = 7 mm, slice skip = .7 mm, providing complete brain coverage. fMRI Image Analysis At the individual subject level, a standard general linear modeling approach was used to obtain estimates of the response size (beta) to each of the two stop task conditions (successful and unsuccessful stop trials) against an implicit baseline (go trials). Following transformation of the fMRI data for each individual into standard space and smoothing with a three-dimensional 7-mm full width at half maximum Gaussian filter, the experimental model was convolved for each condition with gamma variate functions having peak responses at 4 and 8 sec following stimulus onset to accommodate variability in BOLD response timing. By fitting these convolved model components to the time series at each voxel, beta estimates were obtained for each effect of interest. The standard errors of these beta estimates were computed nonparametrically using a bootstrap procedure designed to operate on time series data, containing serial dependencies, with repeated deterministic (experimentally determined) effects. This method is outlined in detail in a previous work (39). Two hundred bootstraps at each voxel were used to estimate parameter standard errors. Using the combined parameter estimates over all conditions, the mean fitted time series was also computed and, from the combined bootstrap parameter estimates for each bootstrap, the 95% confidence limits on the fitted time series were computed.
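As an illustration of the modeling approach described above (not a reproduction of the authors' pipeline), the sketch below builds two gamma variate regressors peaking at 4 and 8 sec, convolves them with assumed event onsets, and fits voxelwise betas by ordinary least squares. The gamma parameterization, onsets, and data are assumptions, and the time-series bootstrap used for the standard errors is omitted.

    # Simplified first-level model: each condition's onset train is convolved with
    # two gamma variate functions peaking at 4 s and 8 s, and beta estimates are
    # obtained by ordinary least squares at a single toy "voxel".
    import numpy as np

    TR = 1.8            # repetition time in seconds (as reported)
    N_VOLS = 196        # volumes per run (as reported)

    def gamma_hrf(peak_s, length_s=24.0, scale_s=1.0):
        """Gamma-shaped response with its mode at `peak_s` seconds."""
        t = np.arange(0.0, length_s, TR)
        shape = peak_s / scale_s + 1.0          # mode of a gamma density = (shape - 1) * scale
        h = t ** (shape - 1.0) * np.exp(-t / scale_s)
        return h / h.sum()

    def make_regressors(onset_volumes):
        """Convolve a stick function of event onsets with the two basis functions."""
        sticks = np.zeros(N_VOLS)
        sticks[onset_volumes] = 1.0
        cols = [np.convolve(sticks, gamma_hrf(p))[:N_VOLS] for p in (4.0, 8.0)]
        return np.column_stack(cols)

    # Design matrix: successful-stop and failed-stop regressors plus an intercept
    # (go trials form the implicit baseline). Onsets are made up for the demo.
    rng = np.random.default_rng(0)
    succ = make_regressors(np.sort(rng.choice(N_VOLS, 20, replace=False)))
    fail = make_regressors(np.sort(rng.choice(N_VOLS, 20, replace=False)))
    X = np.column_stack([succ, fail, np.ones(N_VOLS)])

    # Toy voxel time series generated from known betas, then refit by OLS.
    y = X @ np.array([1.0, 0.5, 0.8, 0.2, 100.0]) + rng.normal(0, 1, N_VOLS)
    betas, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated betas:", np.round(betas, 2))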
The second-level analysis proceeded by computing either the group differences (patients and controls) or the drug condition differences (placebo, MPH) within patients at each voxel and the standard error of this difference (using the bootstrap estimates derived earlier). The significance of these differences was then tested in three ways: 1) a simple parametric random effects (paired t) test, using only the group difference/placebo-MPH effect size differences; 2) a permutation test of the same random effects t statistic in which the null distribution was estimated by randomly swapping the signs of the differences (we used 40,000 permutations per voxel to obtain a confidence limit of .0007–.0013 for a p value of .001); and 3) a mixed-effects test using both the effect size differences and their subject-level standard errors to accommodate first (subject) level heteroscedasticity (40). This was also conducted using 40,000 permutations per voxel. In addition to voxelwise maps, cluster-level inference on the contrast (beta) values was performed at a family-wise error corrected threshold of p < .05 using the Threshold-Free Cluster Enhancement method proposed by Smith and Nichols (41). This cluster-level inference was also used for the within-group maps for each experimental condition.
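A minimal sketch of the sign-flipping permutation test in option 2, run on synthetic paired differences at a single voxel; the data and permutation count are illustrative, and the TFCE cluster-level step is not reproduced here.

    # Under the null hypothesis the sign of each subject's condition difference is
    # exchangeable, so the random-effects t statistic is recomputed after randomly
    # flipping signs and the p value is the fraction of permuted |t| >= observed |t|.
    import numpy as np

    def sign_flip_p(diffs, n_perm=40_000, seed=0):
        rng = np.random.default_rng(seed)
        diffs = np.asarray(diffs, dtype=float)
        n = diffs.size

        def t_stat(d):
            return d.mean() / (d.std(ddof=1) / np.sqrt(n))

        t_obs = t_stat(diffs)
        count = 0
        for _ in range(n_perm):
            flipped = diffs * rng.choice([-1.0, 1.0], size=n)
            if abs(t_stat(flipped)) >= abs(t_obs):
                count += 1
        return t_obs, count / n_perm

    # Toy within-patient differences (e.g., MPH minus placebo betas for 12 subjects).
    rng = np.random.default_rng(1)
    d = rng.normal(0.4, 0.5, 12)
    t, p = sign_flip_p(d, n_perm=5_000)
    # The Monte Carlo standard error of an estimated p is sqrt(p * (1 - p) / n_perm);
    # at 40,000 permutations and p = .001 this gives roughly the .0007-.0013 band
    # quoted above.
    print(f"t = {t:.2f}, permutation p = {p:.4f}")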
Results: Performance The probability of inhibition was about 50% in all subjects with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1]. A multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend for a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect in the standard deviation to go trials [F(2,34) = 5, p < .02], which were higher in patients under either medication condition compared with control subjects (p < .05). Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).
Brain Activation: Motion Multivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5]. Within-Group Brain Activations Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas. Activation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex. Activation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1). Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate. Activation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices. Activation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).
ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH. To investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).
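The brain-behavior correlations reported here follow a simple recipe that can be sketched as below; the beta maps, cluster mask, and reaction times are randomly generated stand-ins, not study data.

    # Illustrative brain-behavior correlation: a per-subject summary of the BOLD
    # effect is averaged within a cluster mask and correlated with a performance
    # measure (e.g., post-error reaction time).
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(2)
    n_subjects = 12
    beta_maps = rng.normal(size=(n_subjects, 30, 30, 20))   # subject-level effect maps
    cluster_mask = rng.random((30, 30, 20)) > 0.995         # one ANOVA cluster (boolean)
    post_error_rt = rng.normal(600, 80, n_subjects)         # ms, made up

    cluster_means = beta_maps[:, cluster_mask].mean(axis=1) # one value per subject
    r, p = pearsonr(cluster_means, post_error_rt)
    print(f"r = {r:.2f}, p = {p:.3f}")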
Successful Stop–Go Contrast No significant activation differences were observed between medication conditions. ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3). Under the MPH condition, ADHD patients did not differ from controls in any of these regions. Post-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).
Successful Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1). Under the MPH condition, ADHD patients did not differ from control subjects in any of these regions. Activation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3). All group difference findings for both contrasts remained essentially unchanged when IQ was covaried.
Conjunction Analysis Between Within-Group and Between-Group ANOVAs To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.
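A minimal sketch of the conjunction logic (intersection of the two significant maps, then connected-component labeling); the p-maps and threshold are synthetic placeholders rather than the study's statistics.

    # Voxels are retained only where both the within-group (MPH > placebo) and the
    # between-group (controls > ADHD placebo) maps are significant; the surviving
    # voxels are then grouped into connected clusters.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(3)
    shape = (30, 30, 20)
    p_within = rng.random(shape)    # stand-in for the MPH > placebo p-map
    p_between = rng.random(shape)   # stand-in for the controls > placebo p-map
    alpha = 0.05                    # illustrative significance threshold

    conjunction = (p_within < alpha) & (p_between < alpha)
    labels, n_clusters = ndimage.label(conjunction)
    if n_clusters:
        sizes = ndimage.sum(conjunction, labels, index=range(1, n_clusters + 1))
        print(f"{n_clusters} conjunction clusters; largest = {int(sizes.max())} voxels")
    else:
        print("no voxels survived the conjunction")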
Within-Group Brain Activations: Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas. Activation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex. Activation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1). Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate. Activation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices. Activation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).
ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions: Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH. To investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02). Successful Stop–Go Contrast No significant activation differences were observed between medication conditions.

ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions: Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3). Under the MPH condition, ADHD patients did not differ from controls in any of these regions. Post-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).
Successful Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1). Under the MPH condition, ADHD patients did not differ from control subjects in any of these regions. Activation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).
All group difference findings for both contrasts remained essentially unchanged when IQ was covaried. Conjunction Analysis Between Within-Group and Between-Group ANOVAs: To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3. Discussion: During error trials, ADHD boys under placebo compared with healthy control subjects showed significant underactivation in a typical error processing and performance monitoring network comprising dMFC, left IFC, thalamus, posterior cingulate/precuneus, and inferior temporoparietal regions. Among patients, MPH compared with placebo significantly upregulated activation in overlapping medial frontal, IFC, and parietal regions as well as the lenticular nucleus. Under MPH, brain activation differences between control subjects and ADHD patients were no longer observed. Reduced fronto-thalamo-parietal activation that was normalized with MPH was, furthermore, negatively associated with faster post-error reaction times in patients, which were trendwise slowed with MPH. During successful stop trials, ADHD boys showed underactivation in a right hemispheric network of medial temporal and inferior parietal brain regions and, at a more lenient threshold, in small clusters of bilateral IFC, thalamus, and pre-SMA. Although within-patient comparison between MPH and placebo did not show significant activation differences, all underactivations in patients relative to control subjects under placebo were normalized with a single dose of MPH. The dMFC, comprising Brodmann areas 8, 6, and 32, including pre-SMA and ACC, is a typical region of error processing and performance monitoring in adults (37,43–48) and children (29,38,49). We have previously found this region to be underactivated in children with ADHD during oddball (50) and switch tasks (4). Errors indicate violation of a reward prediction (i.e., positive performance) and have been linked to midbrain dopamine (51). Normalization with MPH of underfunctioning of this region in ADHD is in line with the notion that phasic dopamine response modulates error-related mesial frontal activation (52,53). These findings extend evidence for upregulation with acute and chronic doses of MPH in previously medicated patients with ADHD in a more rostral ACC location during tasks of cognitive control (22,28,54). Activation in dMFC during errors triggers additional activation in functionally interconnected left IFC, as well as striatal, premotor, and parietal components of the error monitoring system, leading to post-error performance adjustments (43–45,48). 
IFC underactivation is one of the most consistent findings in fMRI studies in patients with ADHD, with right IFC dysfunction typically observed during inhibitory performance (7,8,11,55), in line with its role in inhibition (37,56), and left IFC during stop errors (4,29) as well as during flexible, selective, or sustained attention (4,9,26,50,57), in line with its role for performance monitoring (44,45,48,49) and saliency processing (58,59). IFC dysfunction is furthermore a disorder-specific neurofunctional deficit compared with patients with conduct (6,50,57,60) and obsessive compulsive (4) disorders. MPH thus appears to modulate an important neurofunctional biomarker of ADHD. The more predominantly left-hemispheric upregulation effect during errors may suggest a stronger effect of MPH on performance monitoring than inhibitory function in ADHD. Left IFC upregulation has previously been observed in ADHD patients in the context of an attention-demanding time discrimination task after acute (27) and 6 weeks of MPH treatment during interference inhibition (54). Structural studies have shown more normal cortical thinning in left IFC in psychostimulant-medicated compared with unmedicated ADHD children (61). Together, this raises the speculation that MPH may have a lateralized upregulating effect on left IFC structure and function. Posterior thalamic regions have been associated both with motor response inhibition (62) and performance monitoring (48,63,64). The finding that MPH normalizes activation in this region is in line with speculation of this region's involvement in the modulation of the dopaminergic error signal (63,65,66). The fact that lower dMFC, IFC, and thalamic activation in ADHD patients was associated with faster post-error slowing, both of which were enhanced by MPH, reinforces the role of this network for abnormal error monitoring in ADHD. Posterior cingulate and precuneus are connected with MFC and parietal areas and form part of the performance monitoring network (47,49,67,68), mediating visual spatial attention to saliency (69,70) and the integration of performance outcome with attentional modulation (48). The fact that these regions were underactivated during both inhibition and its failure is in line with a generic attention role of these areas. In line with this, we and others have previously observed underactivation in ADHD patients in these regions during inhibition errors (4,6,8,60), as well as during other salient stimuli such as oddball, novel or incongruent targets (10,26,50,71,72). Normalization with MPH of reduced activation in typical frontoparietal regions of saliency processing and performance monitoring is consistent with the dopamine-deficiency hypothesis of ADHD given that dopamine agonists enhance stimulus salience (73). It is also in line with our previous findings of upregulation with MPH of posterior cingulate/precuneus in the same group of medication naive boys with ADHD during a target detection task, resulting in improved attention (26), and during an attention demanding time discrimination task (27). To our knowledge, normalization of inferior parietal activation with MPH has only recently been observed in ADHD patients, in the context of sustained attention (26) and interference inhibition (54). During successful stop trials, MPH also normalized underactivation in the cerebellum, which, together with subthalamic nucleus, caudate, and IFG, forms a neurofunctional network of motor response inhibition (38). 
These findings extend previous evidence for cerebellum upregulation with MPH in ADHD patients during interference inhibition (54) and time estimation (27). Within patients, MPH also enhanced activation of caudate and putamen. This is in line with previous fMRI findings of caudate upregulation in ADHD patients after acute and chronic doses of MPH during inhibition and attention tasks (22,24,54) and is likely associated with the known effect of MPH on striatal dopamine transporter blockage (14,15). The findings of more pronounced normalization effects of MPH on abnormal performance monitoring than inhibition networks could suggest that MPH enhances generic attention and performance monitoring functions more than inhibitory capacity. This would be in line with the behavioral effect of MPH of modulating go and post-error reaction times, but not inhibition speed, which, furthermore were correlated with the reduced frontothalamic error-processing activation that was normalized with MPH. Relative to control subjects, patients only significantly differed in intrasubject response variability. Small subject numbers, a relatively older child group, and fMRI task design restrictions may have been responsible for minor behavior differences. The findings of brain dysfunctions in boys with ADHD and their normalization under the clinical dose of MPH despite minor performance differences and only trend-level improvements with MPH show that brain activation is more sensitive than performance to detect both abnormalities and pharmacologic effects. This is in line with previous findings of marked brain dysfunctions in ADHD adolescents despite no stop task impairment (7,8,50) and higher sensitivity of brain activation than behavior to show pharmacologic effects of MPH in ADHD (24,26–28,54,74). MPH prevents the reuptake of catecholamines from the synaptic cleft by blocking dopamine and norepinephrine transporters (DAT/NET) (75,76), with higher affinity for the former (77,78). In healthy adults, MPH blocks 60% to 70% of striatal DAT in a dose-dependent manner, increasing extracellular levels of dopamine in striatum (75,79–82), as well as in frontal, thalamic, and temporal regions (83). The upregulating effects on basal ganglia, thalamic, and anterior cingulate activation were therefore likely mediated by mesolimbic striatocingulate dopaminergic pathways known to modulate error monitoring systems (63,64). In frontal regions, however, MPH upregulates noradrenaline to the same or greater extent than dopamine (84–86), via reuptake inhibition of NET that clear up both dopamine and noradrenaline (85,87–89). The upregulating effects on frontal activation, therefore, may have been mediated by enhanced catecholamine neurotransmission, in line with recent evidence that noradrenaline also plays a role in error monitoring (66,90). A limitation of the study is that patients were tested twice, whereas control subjects were only scanned once, for ethical and financial reasons. Practice effects, however, were overcome by the counterbalanced design. Another limitation is the relatively small sample size. Minimum numbers of 15 to 20 participants have been suggested for fMRI studies (91). Repeated-measures designs, however, are statistically more powerful than independent data sets, which makes the within-subject ANOVA more robust. 
To our knowledge, this is the first study to show that a single dose of MPH in ADHD upregulates and normalizes the underfunctioning of dMFC, left IFC, posterior cingulate, and parietal regions that in concert play an important role in error processing. The normalization findings of these key regions of both performance monitoring and ADHD dysfunction reinforce the association between dopaminergic neurotransmission abnormalities, ADHD, and poor performance monitoring and may underlie the behavioral effects of improving attention and school performance in boys with ADHD.
Background: Children with attention-deficit/hyperactivity disorder (ADHD) have deficits in performance monitoring often improved with the indirect catecholamine agonist methylphenidate (MPH). We used functional magnetic resonance imaging to investigate the effects of single-dose MPH on activation of error processing brain areas in medication-naive boys with ADHD during a stop task that elicits 50% error rates. Methods: Twelve medication-naive boys with ADHD were scanned twice, under either a single clinical dose of MPH or placebo, in a randomized, double-blind design while they performed an individually adjusted tracking stop task, designed to elicit 50% failures. Brain activation was compared within patients under either drug condition. To test for potential normalization effects of MPH, brain activation in ADHD patients under either drug condition was compared with that of 13 healthy age-matched boys. Results: During failed inhibition, boys with ADHD under placebo relative to control subjects showed reduced brain activation in performance monitoring areas of dorsomedial and left ventrolateral prefrontal cortices, thalamus, cingulate, and parietal regions. MPH, relative to placebo, upregulated activation in these brain regions within patients and normalized all activation differences between patients and control subjects. During successful inhibition, MPH normalized reduced activation observed in patients under placebo compared with control subjects in parietotemporal and cerebellar regions. Conclusions: MPH normalized brain dysfunction in medication-naive ADHD boys relative to control subjects in typical brain areas of performance monitoring, comprising left ventrolateral and dorsomedial frontal and parietal cortices. This could underlie the amelioration of MPH of attention and academic performance in ADHD.
null
null
12,775
303
[ 531, 180, 142, 484, 158, 3768, 45, 600, 153, 139, 378, 171, 10, 690, 172, 165, 139 ]
20
[ "adhd", "right", "activation", "patients", "mph", "left", "ifc", "inferior", "placebo", "parietal" ]
[ "adhd patients differ", "medication patients adhd", "adhd patients acute", "adhd adolescents despite", "diagnosis adhd established" ]
null
null
null
null
null
null
[CONTENT] Attention-deficit/hyperactivity disorder (ADHD) | error processing | methylphenidate | motor response inhibition | performance monitoring | stop task [SUMMARY]
null
[CONTENT] Attention-deficit/hyperactivity disorder (ADHD) | error processing | methylphenidate | motor response inhibition | performance monitoring | stop task [SUMMARY]
null
null
null
[CONTENT] Adolescent | Attention | Attention Deficit Disorder with Hyperactivity | Central Nervous System Stimulants | Child | Double-Blind Method | Frontal Lobe | Gyrus Cinguli | Humans | Magnetic Resonance Imaging | Male | Methylphenidate | Neuropsychological Tests | Psychomotor Performance [SUMMARY]
null
[CONTENT] Adolescent | Attention | Attention Deficit Disorder with Hyperactivity | Central Nervous System Stimulants | Child | Double-Blind Method | Frontal Lobe | Gyrus Cinguli | Humans | Magnetic Resonance Imaging | Male | Methylphenidate | Neuropsychological Tests | Psychomotor Performance [SUMMARY]
null
null
null
[CONTENT] adhd patients differ | medication patients adhd | adhd patients acute | adhd adolescents despite | diagnosis adhd established [SUMMARY]
null
[CONTENT] adhd patients differ | medication patients adhd | adhd patients acute | adhd adolescents despite | diagnosis adhd established [SUMMARY]
null
null
null
[CONTENT] adhd | right | activation | patients | mph | left | ifc | inferior | placebo | parietal [SUMMARY]
null
[CONTENT] adhd | right | activation | patients | mph | left | ifc | inferior | placebo | parietal [SUMMARY]
null
null
null
[CONTENT] compared | trend | significant | group effect | significant group | group | subjects | effect | trials | table [SUMMARY]
null
[CONTENT] activation | adhd | right | superior | patients | left | ifc | adhd patients | inferior | posterior [SUMMARY]
null
null
null
[CONTENT] ||| MPH ||| MPH [SUMMARY]
null
[CONTENT] MPH ||| MPH | 50% ||| Twelve | MPH | 50% ||| ||| MPH | 13 ||| ||| MPH ||| MPH ||| MPH ||| MPH | ADHD [SUMMARY]
null
Aloperine Inhibits Proliferation and Promotes Apoptosis in Colorectal Cancer Cells by Regulating the circNSUN2/miR-296-5p/STAT3 Pathway.
33664565
Aloperine can regulate miR-296-5p/Signal Transducer and Activator of Transcription 3 (STAT3) pathway to inhibit the malignant development of colorectal cancer (CRC), but the regulatory mechanism is unclear. This study explored the upstream mechanism of Aloperine in reducing CRC damage from the perspective of the circRNA-miRNA-mRNA regulatory network.
BACKGROUND
After treatment with gradient concentrations of Aloperine (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) for 24 hours, changes in CRC cell proliferation and apoptosis were detected by functional experiments. Data of the differential expression of miR-296-5p in CRC patients and healthy people were obtained from Starbase. The effects of Aloperine on 12 differentially expressed circRNAs were detected. The binding of miR-296-5p with NOP2/Sun RNA methyltransferase 2 (circNSUN2) and STAT3 was predicted by TargetScan and confirmed through dual-luciferase experiments. The expressions of circNSUN2, miR-296-5p and STAT3 as well as apoptosis-related genes in CRC cells were detected by qRT-PCR and Western blot as needed. Rescue experiments were conducted to test the regulatory effects of circNSUN2, miR-296-5p and STAT3 on CRC cells.
METHODS
Aloperine at a concentration gradient inhibited proliferation and promoted apoptosis in CRC cells. The abnormally low expression of miR-296-5p in CRC could be upregulated by Aloperine. Among the differentially expressed circRNAs in CRC, only circNSUN2 not only targets miR-296-5p, but also can be regulated by Aloperine. The up-regulation of circNSUN2 offset the inhibitory effect of Aloperine on cancer cells. The rescue experiments finally confirmed the regulation of circNSUN2/miR-296-5p/STAT3 axis in CRC cells.
RESULTS
By regulating the circNSUN2/miR-296-5p/STAT3 pathway, Aloperine prevents the malignant development of CRC cells.
CONCLUSION
[ "Antineoplastic Agents", "Apoptosis", "Cell Line, Tumor", "Cell Proliferation", "Colorectal Neoplasms", "Computational Biology", "Dose-Response Relationship, Drug", "Drug Screening Assays, Antitumor", "Humans", "Methyltransferases", "MicroRNAs", "Molecular Structure", "Quinolizidines", "RNA, Circular", "STAT3 Transcription Factor", "Structure-Activity Relationship" ]
7924259
Introduction
Cancer has become the leading cause of death worldwide and the biggest obstacle to extending human life expectancy. It is estimated that there were 18.1 million new cancer cases and 9.6 million cancer deaths in 2018.1 Colorectal cancer (CRC) is one of the most common gastrointestinal tumors.2 According to data from the International Agency for Research on Cancer (IARC),1 the number of new CRC cases worldwide in 2018 was approximately 1.09 million, making CRC the fourth most prevalent malignant tumor after lung, breast and prostate cancers; meanwhile, CRC accounted for a high share of cancer deaths, about 9.2%, ranking second only to lung cancer. Epidemiological studies have shown that the incidence of CRC varies significantly between countries and is higher in developed areas.3 More importantly, as the incidence of CRC rises rapidly in people under 50, the age of onset of the disease is getting lower.4,5 Therefore, it is urgent to improve the diagnosis rate and treatment effect of CRC. Currently, the main clinical treatment of CRC is surgery combined with adjuvant radiotherapy, chemotherapy or molecular targeted therapy. Surgery is generally considered the first choice in the comprehensive treatment of CRC. This approach is suitable for patients whose tumors are confined to the intestinal wall, or have penetrated the intestinal wall and invaded the serosa or extraserosal tissue, but show no lymph node metastasis.6,7 However, the applicability of radical resection is limited because most patients with CRC have adenocarcinoma, which generally develops from polyps and is metastatic by nature.8 As a result, CRC patients tend to undergo simple resection, which leads to a high recurrence rate. Chemotherapy, therefore, is still necessary for postoperative and late-stage CRC patients.9,10 At present, the commonly used chemotherapeutic drugs in the clinic mainly include 5-fluorouracil (5-Fu), oxaliplatin and its derivatives.9 Chemotherapy carries substantial risk because it indiscriminately kills tumor cells and immune cells and thereby weakens the anti-tumor immune response.11 Likewise, radiotherapy greatly weakens the body's immunity.12 Hence, researchers have sought effective treatments or drugs that cause less damage to patients. With advances in the research and development of traditional Chinese medicine, more attention has been paid to the roles of traditional Chinese medicine and its active ingredients in disease prevention and treatment. At the same time, new targets for molecular targeted therapy have been discovered while exploring the molecular mechanisms of traditional Chinese medicine and its monomers. Aloperine (ALO) has been confirmed to have significant anti-cancer activity. ALO is a component of the traditional Chinese medicine Sophora alopecuroides L.13 It is also one of the main alkaloids separated and extracted in the laboratory, with a molecular formula of C15H24N2.14 Recent studies have found that ALO has anti-inflammatory, immunosuppressive, oxidative stress-suppressing, cardiovascular-protective and anti-cancer effects, among which its tumor-suppressing activity has been extensively reported. 
For example, Yu et al revealed that ALO inhibited the carcinogenesis process by regulating excessive autophagy in thyroid cancer cells;15 Liu et al demonstrated that ALO inhibited the PI3K/Akt pathway, increased apoptosis and caused cell cycle block in liver cancer cells;16 besides, ALO was also found to exert an anti-cancer effect in prostate cancer and breast cancer.17,18 In the previous study, our research group found that ALO up-regulated miR-296-5p and inhibited the activity of its target gene STAT3, thereby inhibiting the proliferation and inducing the apoptosis of CRC cells. However, the activation pathway of miR-296-5p is unclear. In order to clarify the upstream activation pathway of ALO-upregulated miR-296-5p, this study will explore the mechanism of ALO in inhibiting CRC from the perspective of the circRNA-miRNA-mRNA regulatory network.
null
null
null
null
Discussion
CRC is a malignant lesion of the mucosal epithelium of the colon or rectum that arises under the action of various carcinogenic factors such as environmental exposure or heredity.2 Most CRC patients suffer from adenocarcinoma, which usually develops from polyps. Tumors developed from polyps can infiltrate in a circular shape along the horizontal axis of the intestinal tube, grow into the deep layer of the intestinal wall, and finally penetrate the intestinal wall and metastasize to blood vessels or lymphatic vessels.20 This is one of the important reasons for the high recurrence and metastasis rates of CRC. The ultimate goals of CRC research are to improve the current treatment of CRC, save patients' lives and improve their quality of life. Based on the role of traditional Chinese medicine in disease prevention and treatment, this study explored the effects of ALO, an alkaloid exhibiting strong anticancer activity, on the proliferation and apoptosis of CRC cells. The results showed that after ALO treatment, the viability and proliferation of CRC cells were significantly inhibited; conversely, the number of apoptotic cancer cells was significantly increased. This is consistent with previous experimental results.21 In order to probe deeper into the upstream mechanism of ALO in alleviating CRC carcinogenesis through the miR-296-5p/STAT3 axis, we screened and verified the upstream circRNA targeting miR-296-5p. CircRNAs are a type of covalently closed circular non-coding RNA formed by back-splicing of mRNA precursors (pre-mRNA).22 CircRNA was considered to be a useless splicing by-product in early research. As research has deepened, circRNAs have been found to arise from a wide range of sources and to play a variety of functional roles in the growth and development of organisms, being conserved, stable and tissue-specific.22,23 The most familiar and extensively studied function of circRNA is its sponging effect on miRNA.24 For example, the circRNA ciRS-7, which contains more than 70 miR-7 binding sites, can achieve competitive adsorption of miR-7 through the AGO2 protein.25 Besides, increasing reports on cancer indicate that circRNAs, such as circRNA-cTFRC, circPSMC3 and circSETD3, act as sponges of miRNAs and participate in transcriptional regulation.26–28 Based on these literature reports, we screened circRNAs that were reported to be abnormally expressed in CRC, and finally obtained 12 circRNAs. After detecting their expression in ALO-treated cancer cells and predicting their binding sequences for miR-296-5p, circNSUN2 was finally identified as the research object of this study. CircNSUN2, which maps to the 5p15 amplicon in CRCs, was confirmed by Chen et al to regulate cytoplasmic export and promote liver metastasis of CRC through N6-methyladenosine modification.29 Similarly, our experimental results uncovered that knockdown of circNSUN2 inhibited proliferation and accelerated apoptosis in CRC cells. More importantly, we revealed for the first time that ALO can prevent the activation of circNSUN2 and offset the promoting effect of circNSUN2 overexpression on CRC progression. Since circNSUN2 and miR-296-5p share a targeting sequence, we verified the cellular regulatory effects of circNSUN2 and miR-296-5p/STAT3 through dual luciferase experiments and rescue experiments. The final results confirmed our conjecture that regulating the circNSUN2/miR-296-5p/STAT3 axis can reduce the proliferation rate and increase the apoptosis rate of CRC cells. 
At present, blocking proliferation and increasing apoptosis in cancer cells are the intensively discussed mechanisms in CRC study. Tumor cells can proliferate quickly and unrestrictedly, and thus, inhibiting their proliferation can produce an anti-tumor effect.30 Apoptosis, alternatively called programmed cell death, is an autonomous cell death process strictly controlled by multiple genes. Apoptosis is an important part of cell life cycle, and it is also an important link in regulating the development of the body and maintaining the stability of the internal environment in organisms.31,32 Our research shows that ALO inhibits the proliferation and promotes the apoptosis of CRC cells by regulating the circNSUN2/miR-296-5p/STAT3 pathway, and ultimately prevents the tumorigenesis of CRC. There are still certain limitations in our research. Although we have confirmed the effect of ALO on inducing apoptosis and reducing proliferation in CRC cells and clarified the underlying mechanism, the circNSUN2/miR-296-5p/STAT3 axis and the pathways related to proliferation and apoptosis have not been analyzed yet, which will be further explored in future research. In addition, the anticancer effect of ALO on CRC and its experimental concentration needs to be further tested in animal and clinical experiments. Whether ALO has an impact on drug resistance in CRC is also a direction of future research.
[ "Method", "Cell Purchase and ALO Processing", "MTT Assay", "EdU (5-Ethynyl-2ʹ-Deoxyuridine) Cell Proliferation Experiment", "Colony Formation Assay", "Apoptosis Experiment", "Bioinformatics Analysis and Target Gene Binding Verification", "Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR)", "Cell Transfection", "Western Blot", "Statistical Analysis", "Result", "ALO at a Concentration Gradient Inhibited Proliferation Yet Promoted Apoptosis in CRC Cells", "The Abnormally Low Expression of miR-296-5p in Colon Cancer Could Be Up-Regulated by ALO", "Among the Differentially Expressed circRNAs in CRC, Only circNSUN2 Not Only Had a Target Site for miR-296-5p, but Also Could Be Regulated by ALO", "Overexpression of circNSUN2 Partially Offset the Suppression of ALO on the Biological Behavior of Cancer Cells", "Sh-circNSUN2 Inhibited the Malignant Development of CRC Cells, Which Was Neutralized by miR-296-5p Inhibitor", "MiR-296-5p Bound to STAT3 and Upregulation of Mir-296-5p Inhibited the Expression of STAT3 in CRC Cells", "MiR-296-5p Mimic Inhibited Proliferation and Promoted Apoptosis in CRC Cells by Regulating Apoptosis-Related Genes, Which Was Partially Reversed by Overexpressed STAT3", "Discussion" ]
[ "Cell Purchase and ALO Processing CCD-18Co and CRC cell lines SW480 (CCL-228™) and HT29 (HTB-38™) used in this research were provided by the American Type Culture Collection (ATCC). According to the culture requirements, CCD-18Co, SW480 and HT29 cells were separately inoculated in DMEM medium (30–2007, ATCC) containing 10% fetal bovine serum and incubated in a D180-P cell incubator (RWD, China) containing 5% CO2 at 37 °C.\nALO (DK0052, CAS NO: 56293-29-9, HPLC≥98%), the research object of this study, was provided by Chengdu DESITE Biological Company, China. ALO was dissolved in fresh DMEM to prepare 0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions, which were separately used to treat the CCD-18Co, SW480 and HT29 cells for 24 hours (h).\nCCD-18Co and CRC cell lines SW480 (CCL-228™) and HT29 (HTB-38™) used in this research were provided by the American Type Culture Collection (ATCC). According to the culture requirements, CCD-18Co, SW480 and HT29 cells were separately inoculated in DMEM medium (30–2007, ATCC) containing 10% fetal bovine serum and incubated in a D180-P cell incubator (RWD, China) containing 5% CO2 at 37 °C.\nALO (DK0052, CAS NO: 56293-29-9, HPLC≥98%), the research object of this study, was provided by Chengdu DESITE Biological Company, China. ALO was dissolved in fresh DMEM to prepare 0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions, which were separately used to treat the CCD-18Co, SW480 and HT29 cells for 24 hours (h).\nMTT Assay One hundred µL of SW480 or HT29 cell suspension (1×104 cells/mL) was transferred to each well of a 96-well plate. Of note, three holes were added to the same treatment group. After treatment with the ALO solutions for 24 h, SW480 or HT29 cells were added with MTT reagent (10 µL/well, PB180519, Procell, USA), which could react with mitochondria in living cells. After 4 h of binding, a HBS-1096A enzyme label analyzer (DeTie, China) was used to detect the absorbance of SW480 or HT29 cells in the mixed solutions at 490 nm, and then cell viability was calculated based on the absorbance.\nOne hundred µL of SW480 or HT29 cell suspension (1×104 cells/mL) was transferred to each well of a 96-well plate. Of note, three holes were added to the same treatment group. After treatment with the ALO solutions for 24 h, SW480 or HT29 cells were added with MTT reagent (10 µL/well, PB180519, Procell, USA), which could react with mitochondria in living cells. After 4 h of binding, a HBS-1096A enzyme label analyzer (DeTie, China) was used to detect the absorbance of SW480 or HT29 cells in the mixed solutions at 490 nm, and then cell viability was calculated based on the absorbance.\nEdU (5-Ethynyl-2ʹ-Deoxyuridine) Cell Proliferation Experiment SW480 or HT29 cells in logarithmic growth phase were seeded in 96-well plates at 1×104 cells/well. EdU reagent (ST067) purchased from Beyotime was diluted to 50 μmol/L with fresh DMEM medium. The EdU dilution was then used to incubate the cells at 100 μL/well for 2 h. After removing the culture medium, the SW480 or HT29 cells were fixed with methanol at room temperature for 15 minutes (min). Positive fluorescence expression (red) in the collected SW480 or HT29 cells were observed, photographed and recorded under a Leica DMi8 inverted fluorescence microscope (magnification 100 ×). Then DAPI reagent was used to stain the nuclei, and the stains were observed under a microscope.\nSW480 or HT29 cells in logarithmic growth phase were seeded in 96-well plates at 1×104 cells/well. 
EdU (5-Ethynyl-2ʹ-Deoxyuridine) Cell Proliferation Experiment SW480 or HT29 cells in logarithmic growth phase were seeded in 96-well plates at 1×104 cells/well. EdU reagent (ST067) purchased from Beyotime was diluted to 50 μmol/L with fresh DMEM medium. The EdU dilution was then used to incubate the cells at 100 μL/well for 2 h. After removing the culture medium, the SW480 or HT29 cells were fixed with methanol at room temperature for 15 minutes (min). Positive fluorescence expression (red) in the collected SW480 or HT29 cells was observed, photographed and recorded under a Leica DMi8 inverted fluorescence microscope (magnification 100 ×). Then DAPI reagent was used to stain the nuclei, and the stains were observed under a microscope.

Colony Formation Assay Two hundred SW480 or HT29 cells were transferred to petri dishes containing complete medium (10 mL). In order to distribute the cells evenly, the petri dishes were rotated slowly for 1 min after cell inoculation. Next, the culture dishes containing cells were placed in an incubator for routine culture. During the culture, the medium was changed every 2 days (d). After 14 d, the SW480 or HT29 cells were soaked with Giemsa dye for 20 min and then the colony formation of the cells was observed under a microscope.

Apoptosis Experiment An Annexin V-FITC Apoptosis Detection Kit (CA1020, Solarbio, China) and a flow cytometer (CytoFLEX, BECKMAN COULTER, USA) were used to detect changes in the apoptosis of SW480 or HT29 cells. In brief, SW480 or HT29 cells were prepared into 1×106 cells/mL cell suspension with 1 mL 1× Binding Buffer. One hundred μL of SW480 or HT29 cells and 5 μL of Annexin V-FITC reagent were added to a centrifuge tube and mixed for 10 min in the dark at room temperature. Next, 5 μL of PI reagent was added to the centrifuge tube and incubated in the dark at room temperature for 5 min. Afterwards, PBS was added to adjust the mixture to a final volume of 500 μL. After half an hour, the cells in the centrifuge tube were transferred to the detection instrument to calculate the number of apoptotic cells.

Bioinformatics Analysis and Target Gene Binding Verification Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve and analyze information on the differential expression of miR-296-5p in Colon adenocarcinoma (COAD) patients and healthy volunteers. TargetScan (http://www.targetscan.org) was used to predict the circRNA that contained a binding sequence for miR-296-5p and the downstream target genes of miR-296-5p. The binding sequences predicted by TargetScan were designed by COBIOER (China) and used to construct the following reporter plasmids for dual luciferase experiments: NOP2/Sun RNA methyltransferase 2 wild type (circNSUN2-WT), circNSUN2-mutant type (MUT), Signal Transducer and Activator of Transcription 3 (STAT3)-WT and STAT3-MUT. The reporter plasmids were separately co-transfected with miR-296-5p mimic or mimic control into SW480 and HT29 cells using liposome for 48 h. Luciferase activity in the transfected SW480 and HT29 cells was detected using Dual-Luciferase® Reporter Assay System (E1960, Promega, USA) with a GloMax 20/20 luminometer (Promega, USA).
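For the dual-luciferase readout described above, firefly signal is conventionally normalized to the Renilla co-reporter and then expressed relative to the mimic-control group; the sketch below illustrates that convention with made-up values (the function name and numbers are assumptions, not data from the study):

# Illustrative sketch of the conventional dual-luciferase normalization (not from the paper).
def relative_luciferase_activity(firefly_mimic, renilla_mimic, firefly_control, renilla_control):
    """Firefly/Renilla ratio of the miR-296-5p mimic group, normalized to the mimic-control group."""
    return (firefly_mimic / renilla_mimic) / (firefly_control / renilla_control)

# A ratio well below 1 for the WT reporter, but not for the MUT reporter,
# would indicate direct binding of miR-296-5p to the cloned sequence.
print(relative_luciferase_activity(1200.0, 9800.0, 2600.0, 9500.0))  # about 0.45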
Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR) qRT-PCR is a reliable method for rapid detection of gene mRNA levels. Briefly, total RNA of SW480 and HT29 cells was fully isolated by the conventional RNA extraction reagent TRIzol (15,596,018, Invitrogen, USA), and subsequently reverse transcribed into cDNA using the RNA reverse transcription reagent (RR047A) developed by TaKaRa (Japan). The qRT-PCR reaction system, which consisted of cDNA, gene primers (Sangon synthesis), SYBR® Green (S4438-20RXN, Sigma-Aldrich, Germany) and DEPC water, was added to a 96-well plate and put into the detection instrument (Bio-Rad thermal cycler T100). The reaction conditions were set as follows: pre-denaturation at 95 °C for 10 min, then denaturation at 95 °C for 15 seconds (s) and annealing at 58 °C for 1 min, for a total of 40 cycles. According to Anita Ciesielska's report,19 gene mRNA levels were quantified by the 2−ΔΔCT method. GAPDH and U6 were used as internal references in the experiment. The specific sequences of the synthetic primers are shown in Table 1.

Table 1 Primers for qRT-PCR (each entry lists the forward primer followed by the reverse primer, both 5ʹ-3ʹ):
miR-296-5p: CCTGTGTCGTATCCAGTGCAAGTCGTATCCAGTGCGTGTCG
circDDX17: TGCCAACCACAACATCCTCCACGCTCCCCAGGATTACCAAAT
circHIPK3: TATGTTGGTGGATCCTGTTCGGCATGGTGGGTAGACCAAGACTTGTGA
circZFR: AACCACCACAGATTCACTATAACCACCACAGATTCACTAT
circRNA_100290: GTCATTCCCTCTTTAATGGTGCAGAACTTCCGCTCTAACATAC
Has_circ_0026344: CTCAGCCTCTAGCATAAGCTCAGGCAAGAGAATGATTTGAAC
circVAPATGGATTCCAAATTGAGATGCGTATTCACTTTTCTATCCGATGGATTTCGC
Hsa_circ_0009361: AGAACCAGATTCGAGACGCCGTGCTCTTCAATGCCACCTTC
circNSUN2: CTTGAGAAAATCGCCACACTTGTTGAGGAGCAGTGGTGG
circACAP2: TCAGAGCATCTGCCCAAAGTTAAATGAACCCCAAGGCTCCG
circITGA7: GTGTGCACAGGTCCTTCCAATGGAAGTTCTGTGAGGGACG
circEIF4G3: CCTACCCCATCCCCTTATTCACCGTGCTGTAGACTGCTGAG
STAT3: CCCCATACCTGAAGACCAAGGGACTCAAACTGCCCTCCT
U6: CTCGCTTCGGCAGCACAAACGCTTCACGAATTTGCGT
GAPDH: TGTGGGCATCAATGGATTTGGACACCATGTATTCCGGGTCAAT
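The 2−ΔΔCT quantification cited above can be written out explicitly. The sketch below is a generic illustration with hypothetical Ct values, assuming GAPDH (or U6 for the miRNA) as the internal reference and the untreated group as calibrator; none of the numbers come from the study:

# Relative expression by the 2^-ΔΔCt (Livak) method; all values below are hypothetical.
def ddct_fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Fold change of the target gene in the treated group relative to the control group."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Example: STAT3 normalized to GAPDH, ALO-treated versus untreated cells.
print(ddct_fold_change(26.0, 18.0, 24.5, 18.2))  # about 0.31, i.e. lower STAT3 mRNA after treatment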
Cell Transfection Exogenous up-regulation or knockdown of test genes is a common method to observe their regulatory effects on cells or tissues. Liposome transfection with Lipofectamine 3000 (L3000008, ThermoFisher Scientific, USA) is the most common and convenient method for such exogenous interference. Therefore, we commissioned Guangzhou GENESEED Biological Company to synthesize a circNSUN2 overexpression plasmid, a circNSUN2-silencing plasmid (sh-circNSUN2), a STAT3 overexpression plasmid and their respective negative controls. MiR-296-5p mimic (miR10000690-1-5) and mimic control (miR1N0000001-1-5), as well as miR-296-5p inhibitor (miR20000690-1-5) and inhibitor control (miR2N0000001-1-5), were purchased directly from Guangzhou RIOBOBIO Company. The plasmids were transfected into SW480 and HT29 cells according to the grouping using Lipofectamine 3000. After 48 h, transfection efficiency was verified by qRT-PCR.
Western Blot RIPA lysate (tissue/cell, R0010) produced by Solarbio (China) was used to extract protein from SW480 and HT29 cells, and protein concentration was determined with the BCA protein concentration determination reagent (PC0020, Solarbio, China). The protein was then denatured at high temperature to keep it in a relatively stable state. After being electrophoresed by SDS-PAGE, the protein was transferred to a PVDF membrane. The membrane was blocked with BSA sealant (SW3015, Solarbio) for 2 h at room temperature. Next, the membrane was incubated overnight at 4 °C with Abcam (USA) antibodies (anti-STAT3, ab68153, 88 kDa; Bax, ab32503, 21 kDa; Bcl-2, ab59348, 26 kDa) that bind the corresponding antigens on the membrane, with GAPDH (ab8245, 36 kDa) as an internal reference. To identify the antigen-antibody complex, a secondary antibody labeled with HRP (Goat Anti-Rabbit antibody, 1:10,000, ab6721; Goat Anti-Mouse antibody, 1:10,000, ab205719) was used to treat the PVDF membrane for 1.5 h at room temperature. The chemiluminescent signal of the complex was enhanced using the ECL Western Blotting Substrate (PE0010, Solarbio) and detected using the Gel Doc™ XR+ imaging system (BIO-RAD, USA). The gray value of the bands was calculated by Image Lab software to determine the final protein level of each gene.
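As a further illustration (an assumed convention, not a step quoted from the paper), the gray values produced by Image Lab are usually converted into relative protein levels by normalizing each band to GAPDH and then to the control lane:

# Illustrative densitometry normalization; band intensities are made-up numbers.
def relative_protein_level(band, gapdh, band_control, gapdh_control):
    """Band gray value normalized to GAPDH, expressed relative to the control lane."""
    return (band / gapdh) / (band_control / gapdh_control)

# Example: a Bax value rising above 1 relative to control would be consistent with increased apoptosis.
print(relative_protein_level(band=5400, gapdh=9000, band_control=3000, gapdh_control=8800))  # about 1.76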
Statistical Analysis
SPSS 20.0 software (IBM, USA) was used for statistical analysis of the experimental data. Differences between two groups were compared by independent-sample t-test, and differences among multiple groups were compared by one-way ANOVA followed by Tukey's test. When p<0.05, the difference was considered statistically significant.

CCD-18Co cells and the CRC cell lines SW480 (CCL-228™) and HT29 (HTB-38™) used in this research were provided by the American Type Culture Collection (ATCC). According to the culture requirements, CCD-18Co, SW480 and HT29 cells were separately seeded in DMEM medium (30-2007, ATCC) containing 10% fetal bovine serum and incubated in a D180-P cell incubator (RWD, China) with 5% CO2 at 37 °C.

ALO (DK0052, CAS NO: 56293-29-9, HPLC≥98%), the compound investigated in this study, was provided by Chengdu DESITE Biological Company, China. ALO was dissolved in fresh DMEM to prepare 0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions, which were separately used to treat the CCD-18Co, SW480 and HT29 cells for 24 hours (h).

One hundred µL of SW480 or HT29 cell suspension (1×10^4 cells/mL) was transferred to each well of a 96-well plate, with three replicate wells per treatment group. After treatment with the ALO solutions for 24 h, MTT reagent (10 µL/well, PB180519, Procell, USA), which reacts with the mitochondria of living cells, was added to the SW480 or HT29 cells. After 4 h of incubation, an HBS-1096A microplate reader (DeTie, China) was used to measure the absorbance of the mixed solutions at 490 nm, and cell viability was then calculated from the absorbance.

SW480 or HT29 cells in logarithmic growth phase were seeded in 96-well plates at 1×10^4 cells/well. EdU reagent (ST067) purchased from Beyotime was diluted to 50 μmol/L with fresh DMEM medium, and the dilution was used to incubate the cells at 100 μL/well for 2 h. After the culture medium was removed, the SW480 or HT29 cells were fixed with methanol at room temperature for 15 minutes (min). Positive (red) fluorescence in the collected SW480 or HT29 cells was observed, photographed and recorded under a Leica DMi8 inverted fluorescence microscope (magnification 100×). The nuclei were then stained with DAPI and examined under the microscope.
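The MTT readout above ends with viability being "calculated from the absorbance"; the usual calculation expresses each treated well as a percentage of the untreated control after background subtraction. The sketch below only illustrates that arithmetic and is not the original analysis; the example OD values and the blank-correction step are assumptions.

import numpy as np

# Hypothetical OD490 readings; each array holds replicate wells for one condition.
od_blank   = 0.05                                   # medium + MTT, no cells (assumed)
od_control = np.array([1.20, 1.18, 1.22])           # untreated cells
od_treated = {
    "ALO 0.4 mmol/L": np.array([0.95, 0.92, 0.97]),
    "ALO 0.8 mmol/L": np.array([0.66, 0.63, 0.65]),
}

control_mean = od_control.mean() - od_blank

for group, od in od_treated.items():
    viability = (od - od_blank) / control_mean * 100.0   # percent of untreated control
    print(f"{group}: {viability.mean():.1f}% ± {viability.std(ddof=1):.1f}%")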
Two hundred SW480 or HT29 cells were transferred to petri dishes containing complete medium (10 mL). To distribute the cells evenly, the petri dishes were rotated slowly for 1 min after cell inoculation. The dishes were then placed in an incubator for routine culture, and the medium was changed every 2 days (d). After 14 d, the SW480 or HT29 cells were stained with Giemsa dye for 20 min and colony formation was observed under a microscope.

An Annexin V-FITC Apoptosis Detection Kit (CA1020, Solarbio, China) and a flow cytometer (CytoFLEX, BECKMAN COULTER, USA) were used to detect changes in the apoptosis of SW480 or HT29 cells. In brief, SW480 or HT29 cells were prepared as a 1×10^6 cells/mL suspension in 1 mL of 1× Binding Buffer. One hundred μL of the cell suspension and 5 μL of Annexin V-FITC reagent were added to a centrifuge tube and mixed for 10 min in the dark at room temperature. Next, 5 μL of PI reagent was added to the tube and incubated in the dark at room temperature for 5 min, after which PBS was added to bring the mixture to a final volume of 500 μL. After half an hour, the cells in the centrifuge tube were analyzed on the flow cytometer to quantify the number of apoptotic cells.

Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve and analyze the differential expression of miR-296-5p between colon adenocarcinoma (COAD) patients and healthy volunteers. TargetScan (http://www.targetscan.org) was used to predict circRNAs containing a binding sequence for miR-296-5p as well as the downstream target genes of miR-296-5p. Based on the binding sequences predicted by TargetScan, COBIOER (China) constructed the following reporter plasmids for the dual-luciferase experiments: NOP2/Sun RNA methyltransferase 2 wild type (circNSUN2-WT), circNSUN2 mutant type (MUT), Signal Transducer and Activator of Transcription 3 (STAT3)-WT and STAT3-MUT. The reporter plasmids were separately co-transfected with miR-296-5p mimic or mimic control into SW480 and HT29 cells using liposomes for 48 h. Luciferase activity in the transfected SW480 and HT29 cells was measured with the Dual-Luciferase® Reporter Assay System (E1960, Promega, USA) on a GloMax 20/20 luminometer (Promega, USA).

qRT-PCR is a reliable method for the rapid detection of gene mRNA levels. Briefly, total RNA was isolated from SW480 and HT29 cells with the conventional RNA extraction reagent TRIzol (15596018, Invitrogen, USA) and subsequently reverse transcribed into cDNA using an RNA reverse transcription kit (RR047A) from TaKaRa (Japan). The qRT-PCR reaction system, consisting of cDNA, gene-specific primers (synthesized by Sangon), SYBR® Green (S4438-20RXN, Sigma-Aldrich, Germany) and DEPC-treated water, was added to a 96-well plate and run on a Bio-Rad T100 thermal cycler. The reaction conditions were: pre-denaturation at 95 °C for 10 min, followed by 40 cycles of denaturation at 95 °C for 15 seconds (s) and annealing at 58 °C for 1 min. According to Anita Ciesielska's report,19 gene mRNA levels were quantified by the 2^-ΔΔCT method, with GAPDH and U6 serving as internal references in the experiment.
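For reference, the 2^-ΔΔCT quantification mentioned above reduces to two subtractions and an exponentiation: ΔCt = Ct(target) − Ct(reference), ΔΔCt = ΔCt(treated) − ΔCt(control), and relative expression = 2^−ΔΔCt. The sketch below is a minimal illustration of that calculation with made-up Ct values; it is not the authors' analysis script.

# Minimal 2^-ΔΔCT calculation with illustrative Ct values (not real data).
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    delta_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt of treated sample
    delta_ct_control = ct_target_control - ct_ref_control   # ΔCt of control sample
    delta_delta_ct = delta_ct_treated - delta_ct_control    # ΔΔCt
    return 2 ** (-delta_delta_ct)                           # fold change vs control

# Example: circNSUN2 vs GAPDH in ALO-treated vs untreated cells (made-up numbers).
fold = relative_expression(ct_target_treated=26.4, ct_ref_treated=18.1,
                           ct_target_control=24.9, ct_ref_control=18.0)
print(f"circNSUN2 relative expression (treated vs control): {fold:.2f}")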
The primer sequences used for qRT-PCR are listed in Table 1 above. Cell transfection, Western blotting and statistical analysis were carried out as described in the corresponding sections above: differences between two groups were compared by independent-sample t-test, and differences among multiple groups by one-way ANOVA followed by Tukey's test.
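As an illustration of the comparison workflow just described (independent-sample t-test for two groups, one-way ANOVA with Tukey's post-hoc test for more than two), the snippet below applies SciPy and statsmodels to made-up viability values; it mirrors the described tests but is not the original SPSS analysis.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up viability values (% of control) for three groups.
control = np.array([100.2, 98.7, 101.1])
alo_04  = np.array([81.5, 79.9, 83.0])
alo_08  = np.array([52.3, 50.1, 54.4])

# Two groups: independent-sample t-test.
t_stat, p_two = stats.ttest_ind(control, alo_08)
print(f"t-test control vs 0.8 mmol/L ALO: p = {p_two:.4f}")

# More than two groups: one-way ANOVA followed by Tukey's test.
f_stat, p_anova = stats.f_oneway(control, alo_04, alo_08)
print(f"one-way ANOVA: p = {p_anova:.4f}")

values = np.concatenate([control, alo_04, alo_08])
groups = ["control"] * 3 + ["ALO 0.4"] * 3 + ["ALO 0.8"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))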
When p<0.05, the difference was considered statistically significant.", "ALO at a Concentration Gradient Inhibited Proliferation Yet Promoted Apoptosis in CRC Cells In this study, the toxicity of ALO in normal cells (CCD-18Co) was detected, ALO showed no significant toxicity to CCD-18Co cells (Figure 1A). In order to study the effects of ALO on the growth and basic physiological functions of CRC cells, we used different concentrations of ALO to treat SW480 and HT29 cells. The results showed that 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions all inhibited the viability of SW480 and HT29 cells (p<0.05, Figure 1B and C). Moreover, the viability of SW480 and HT29 cells was close to 50% under the treatment of 0.8 mmol/L ALO solution, but was reduced to less than 50% when the ALO concentration was increased to 1 mmol/L (Figure 1B and C). In the following EdU cell proliferation experiment, the red fluorescence expression was reduced in SW480 and HT29 cells under the treatment of 0.8 mmol/L ALO (Figure 1D and E). Consistently, the clone formation experiment also showed that different concentrations of ALO (0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L) reduced the number of cell clones and inhibited the proliferation of SW480 and HT29 cells (p<0.05, Figure 1F–H). At the same time, the increase in the concentration of the ALO solution accelerated the death of CRC cells. As shown in Figure 1I–K, the number of apoptotic SW480 and HT29 cells increased significantly after treatment with different concentrations of ALO (p<0.01, Figure 1I–K).Figure 1ALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs Control.Abbreviations: ALO, aloperine; CRC, colorectal cancer.\nALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. 
*p<0.05, **p<0.01, ***p<0.001 vs Control.\nThe Abnormally Low Expression of miR-296-5p in Colon Cancer Could Be Up-Regulated by ALO Data from the Starbase database showed that the expression of miR-296-5p in colon cancer patients was significantly lower than that in healthy people (p=2.5e-6, Figure 2A). In this study, however, after treatment with different concentrations of ALO for 24 h, miR-296-5p activity in SW480 and HT29 cells was significantly increased (p<0.01, Figure 2B and C), suggesting that ALO can stimulate the up-regulation of miR-296-5p.Figure 2The abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.Abbreviation: qRT-PCR, quantitative real-time polymerase chain reaction.\nThe abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.\nData from the Starbase database showed that the expression of miR-296-5p in colon cancer patients was significantly lower than that in healthy people (p=2.5e-6, Figure 2A). In this study, however, after treatment with different concentrations of ALO for 24 h, miR-296-5p activity in SW480 and HT29 cells was significantly increased (p<0.01, Figure 2B and C), suggesting that ALO can stimulate the up-regulation of miR-296-5p.Figure 2The abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.Abbreviation: qRT-PCR, quantitative real-time polymerase chain reaction.\nThe abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. 
All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.\nAmong the Differentially Expressed circRNAs in CRC, Only circNSUN2 Not Only Had a Target Site for miR-296-5p, but Also Could Be Regulated by ALO Through literature review, we screened 12 circRNAs that were reported to be associated with CRC: circDDX17, circHIPK3, circZFR, circRNA_100290, Has_circ_0026344, circVAPA, Hsa_circ_0009361, circNSUN2, circACAP2, circACVRL1, circITGA7 and circEIF4G3. We first detected the regulatory effect of ALO on these differentially expressed circRNAs by qRT-PCR to perform preliminary screening (Figure 3A). The results showed that the expressions of circDDX17, Has_circ_0026344, circVAPA, circNSUN2, circACAP2, circACVRL1 and circEIF4G3 were regulated by 0.8 mmol/L ALO (p<0.001, Figure 3A). Through subsequent screening of target genes, we found that only circNSUN2 and miR-296-5p have targeted binding sites (Figure 3B). After further verification, it was confirmed that circNSUN2-WT and miR-296-5p mimic did reduce the luciferase activity in SW480 and HT29 cells (p<0.001, Figure 3C and D). Therefore, we chose circNSUN2 as the next research object.Figure 3Among the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC.Abbreviation: MC, miR-296-5p mimic control.\nAmong the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC.\nThrough literature review, we screened 12 circRNAs that were reported to be associated with CRC: circDDX17, circHIPK3, circZFR, circRNA_100290, Has_circ_0026344, circVAPA, Hsa_circ_0009361, circNSUN2, circACAP2, circACVRL1, circITGA7 and circEIF4G3. We first detected the regulatory effect of ALO on these differentially expressed circRNAs by qRT-PCR to perform preliminary screening (Figure 3A). The results showed that the expressions of circDDX17, Has_circ_0026344, circVAPA, circNSUN2, circACAP2, circACVRL1 and circEIF4G3 were regulated by 0.8 mmol/L ALO (p<0.001, Figure 3A). Through subsequent screening of target genes, we found that only circNSUN2 and miR-296-5p have targeted binding sites (Figure 3B). After further verification, it was confirmed that circNSUN2-WT and miR-296-5p mimic did reduce the luciferase activity in SW480 and HT29 cells (p<0.001, Figure 3C and D). 
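A reduction in "luciferase activity" in this kind of reporter assay is usually judged on firefly readings normalized to the Renilla co-reporter and then expressed relative to the mimic-control group. The sketch below illustrates that conventional normalization with invented readings; the paper does not spell out its exact normalization step, so the column names and workflow here are assumptions rather than the authors' procedure.

import pandas as pd

# Invented luminometer readings for one reporter (circNSUN2-WT) in one cell line.
wells = pd.DataFrame({
    "group":   ["mimic control"] * 3 + ["miR-296-5p mimic"] * 3,
    "firefly": [52000, 49800, 51100, 18300, 17650, 19000],
    "renilla": [26000, 25100, 25800, 25400, 24800, 26100],
})

# Normalize firefly signal to the Renilla co-reporter, well by well.
wells["ratio"] = wells["firefly"] / wells["renilla"]

# Express each group relative to the mean of the mimic-control group.
control_mean = wells.loc[wells["group"] == "mimic control", "ratio"].mean()
wells["relative_activity"] = wells["ratio"] / control_mean

print(wells.groupby("group")["relative_activity"].agg(["mean", "std"]))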
Therefore, we chose circNSUN2 as the next research object.Figure 3Among the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC.Abbreviation: MC, miR-296-5p mimic control.\nAmong the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC.\nOverexpression of circNSUN2 Partially Offset the Suppression of ALO on the Biological Behavior of Cancer Cells In SW480 and HT29 cells, ALO suppressed the mRNA level of circNSUN2, whereas transfection of circNSUN2 overexpression significantly increased the expression of circNSUN2 (p<0.001, Figure 4A and B). The following basic cell function experiments showed that the upregulation of circNSUN2 partially offset the inhibitory effect of ALO on CRC cell viability, proliferation and apoptosis (p<0.001, Figure 4C–L).Figure 4Overexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. (C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. (E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC.Abbreviations: CircNSUN2, NOP2/Sun RNA methyltransferase 2; NC, negative control.\nOverexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. (C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. 
(E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC.\nIn SW480 and HT29 cells, ALO suppressed the mRNA level of circNSUN2, whereas transfection of circNSUN2 overexpression significantly increased the expression of circNSUN2 (p<0.001, Figure 4A and B). The following basic cell function experiments showed that the upregulation of circNSUN2 partially offset the inhibitory effect of ALO on CRC cell viability, proliferation and apoptosis (p<0.001, Figure 4C–L).Figure 4Overexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. (C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. (E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC.Abbreviations: CircNSUN2, NOP2/Sun RNA methyltransferase 2; NC, negative control.\nOverexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. (C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. (E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC.\nSh-circNSUN2 Inhibited the Malignant Development of CRC Cells, Which Was Neutralized by miR-296-5p Inhibitor We also tested the mutual regulation of circNSUN2 and miR-296-5p in SW480 and HT29 cells. 
Sh-circNSUN2 could down-regulate the expression of circNSUN2, and besides, it inhibited the proliferation of SW480 and HT29 cells, while promoting their apoptosis and the expression of miR-296-5p (p<0.001, Figure 5A–J). However, the inhibitory effect of sh-circNSUN2 on the malignant development of the two cancer cells was neutralized by miR-296-5p inhibitor (p<0.001, Figure 5A–J).Figure 5Sh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. (H-J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I.Abbreviations: Sh-circNSUN2, silent circNSUN2; sh-NC, silent negative control; IC, miR-296-5p inhibitor control; I, miR-296-5p inhibitor.\nSh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. (H-J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I.\nWe also tested the mutual regulation of circNSUN2 and miR-296-5p in SW480 and HT29 cells. Sh-circNSUN2 could down-regulate the expression of circNSUN2, and besides, it inhibited the proliferation of SW480 and HT29 cells, while promoting their apoptosis and the expression of miR-296-5p (p<0.001, Figure 5A–J). However, the inhibitory effect of sh-circNSUN2 on the malignant development of the two cancer cells was neutralized by miR-296-5p inhibitor (p<0.001, Figure 5A–J).Figure 5Sh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. 
(H-J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I.Abbreviations: Sh-circNSUN2, silent circNSUN2; sh-NC, silent negative control; IC, miR-296-5p inhibitor control; I, miR-296-5p inhibitor.\nSh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. (H-J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I.\nMiR-296-5p Bound to STAT3 and Upregulation of Mir-296-5p Inhibited the Expression of STAT3 in CRC Cells We screened the target genes of miR-296-5p, and found that miR-296-5p and STAT3 have targeted binding sites (Figure 6A), and this binding relation was further confirmed by a dual-luciferase verification test (p<0.001, Figure 6B and C). Next, we used qRT-PCR and Western blot to analyze the mRNA and protein levels of miR-296-5p and STAT. It was found that miR-296-5p mimic up-regulated miR-296-5p expression while preventing the activation of STAT3 (p<0.001, Figure 7A–G). However, the overexpression of STAT3 reversed the regulation of miR-296-5p mimic on the two genes (p<0.001, Figure 7A–G).Figure 6The targeted binding of miR-296-5p to STAT3. (A) TargetScan was used to predict the binding sequence of miR-296-5p and STAT3. (B and C) Dual luciferase experiment confirmed that miR-296-5p can bind to STAT3 in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. +++p<0.001 vs MC.Figure 7Up-regulation of miR-296-5p inhibited the expression of STAT3 in CRC cells. (A and B) The qRT-PCR experiment showed that miR-296-5p mimic up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (C and D) The qRT-PCR experiment showed that miR-296-5p mimic suppressed the mRNA level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. (E–G) The Western blot experiment showed that miR-296-5p mimic inhibited the protein level of STAT3, while over-expression of STAT3 reversed this effect. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. ***p<0.001 vs MC+NC; ^^^p<0.001 vs M+NC; ###p<0.001 vs MC+STAT3.Abbreviation: STAT3, signal transducer and activator of transcription 3.\nThe targeted binding of miR-296-5p to STAT3. (A) TargetScan was used to predict the binding sequence of miR-296-5p and STAT3. (B and C) Dual luciferase experiment confirmed that miR-296-5p can bind to STAT3 in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. 
+++p<0.001 vs MC.\nUp-regulation of miR-296-5p inhibited the expression of STAT3 in CRC cells. (A and B) The qRT-PCR experiment showed that miR-296-5p mimic up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (C and D) The qRT-PCR experiment showed that miR-296-5p mimic suppressed the mRNA level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. (E–G) The Western blot experiment showed that miR-296-5p mimic inhibited the protein level of STAT3, while over-expression of STAT3 reversed this effect. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. ***p<0.001 vs MC+NC; ^^^p<0.001 vs M+NC; ###p<0.001 vs MC+STAT3.\nWe screened the target genes of miR-296-5p, and found that miR-296-5p and STAT3 have targeted binding sites (Figure 6A), and this binding relation was further confirmed by a dual-luciferase verification test (p<0.001, Figure 6B and C). Next, we used qRT-PCR and Western blot to analyze the mRNA and protein levels of miR-296-5p and STAT. It was found that miR-296-5p mimic up-regulated miR-296-5p expression while preventing the activation of STAT3 (p<0.001, Figure 7A–G). However, the overexpression of STAT3 reversed the regulation of miR-296-5p mimic on the two genes (p<0.001, Figure 7A–G).Figure 6The targeted binding of miR-296-5p to STAT3. (A) TargetScan was used to predict the binding sequence of miR-296-5p and STAT3. (B and C) Dual luciferase experiment confirmed that miR-296-5p can bind to STAT3 in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. +++p<0.001 vs MC.Figure 7Up-regulation of miR-296-5p inhibited the expression of STAT3 in CRC cells. (A and B) The qRT-PCR experiment showed that miR-296-5p mimic up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (C and D) The qRT-PCR experiment showed that miR-296-5p mimic suppressed the mRNA level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. (E–G) The Western blot experiment showed that miR-296-5p mimic inhibited the protein level of STAT3, while over-expression of STAT3 reversed this effect. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. ***p<0.001 vs MC+NC; ^^^p<0.001 vs M+NC; ###p<0.001 vs MC+STAT3.Abbreviation: STAT3, signal transducer and activator of transcription 3.\nThe targeted binding of miR-296-5p to STAT3. (A) TargetScan was used to predict the binding sequence of miR-296-5p and STAT3. (B and C) Dual luciferase experiment confirmed that miR-296-5p can bind to STAT3 in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. +++p<0.001 vs MC.\nUp-regulation of miR-296-5p inhibited the expression of STAT3 in CRC cells. (A and B) The qRT-PCR experiment showed that miR-296-5p mimic up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (C and D) The qRT-PCR experiment showed that miR-296-5p mimic suppressed the mRNA level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. (E–G) The Western blot experiment showed that miR-296-5p mimic inhibited the protein level of STAT3, while over-expression of STAT3 reversed this effect. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. 
***p<0.001 vs MC+NC; ^^^p<0.001 vs M+NC; ###p<0.001 vs MC+STAT3.\nMiR-296-5p Mimic Inhibited Proliferation and Promoted Apoptosis in CRC Cells by Regulating Apoptosis-Related Genes, Which Was Partially Reversed by Overexpressed STAT3 The following cell experiments showed that by activating Bax and blocking the protein activity of Bcl-2, miR-296-5p mimic inhibited proliferation yet accelerated apoptosis in cancer cells (p<0.001, Figure 8A–I). However, up-regulation of STAT3 produced a completely opposite regulatory effect (p<0.001, Figure 8A–I). More importantly, overexpression of STAT3 neutralized the regulation of miR-296-5p mimic on CRC cells (p<0.001, Figure 8A–I).Figure 8MiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3. (G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3.\nMiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3. (G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3.\nThe following cell experiments showed that by activating Bax and blocking the protein activity of Bcl-2, miR-296-5p mimic inhibited proliferation yet accelerated apoptosis in cancer cells (p<0.001, Figure 8A–I). However, up-regulation of STAT3 produced a completely opposite regulatory effect (p<0.001, Figure 8A–I). More importantly, overexpression of STAT3 neutralized the regulation of miR-296-5p mimic on CRC cells (p<0.001, Figure 8A–I).Figure 8MiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3. 
(G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3.
All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs Control.Abbreviations: ALO, aloperine; CRC, colorectal cancer.\nALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs Control.", "Data from the Starbase database showed that the expression of miR-296-5p in colon cancer patients was significantly lower than that in healthy people (p=2.5e-6, Figure 2A). In this study, however, after treatment with different concentrations of ALO for 24 h, miR-296-5p activity in SW480 and HT29 cells was significantly increased (p<0.01, Figure 2B and C), suggesting that ALO can stimulate the up-regulation of miR-296-5p.Figure 2The abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.Abbreviation: qRT-PCR, quantitative real-time polymerase chain reaction.\nThe abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.", "Through literature review, we screened 12 circRNAs that were reported to be associated with CRC: circDDX17, circHIPK3, circZFR, circRNA_100290, Has_circ_0026344, circVAPA, Hsa_circ_0009361, circNSUN2, circACAP2, circACVRL1, circITGA7 and circEIF4G3. We first detected the regulatory effect of ALO on these differentially expressed circRNAs by qRT-PCR to perform preliminary screening (Figure 3A). The results showed that the expressions of circDDX17, Has_circ_0026344, circVAPA, circNSUN2, circACAP2, circACVRL1 and circEIF4G3 were regulated by 0.8 mmol/L ALO (p<0.001, Figure 3A). 
Through subsequent screening of target genes, we found that only circNSUN2 and miR-296-5p have targeted binding sites (Figure 3B). After further verification, it was confirmed that circNSUN2-WT and miR-296-5p mimic did reduce the luciferase activity in SW480 and HT29 cells (p<0.001, Figure 3C and D). Therefore, we chose circNSUN2 as the next research object.Figure 3Among the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC.Abbreviation: MC, miR-296-5p mimic control.\nAmong the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC.", "In SW480 and HT29 cells, ALO suppressed the mRNA level of circNSUN2, whereas transfection of circNSUN2 overexpression significantly increased the expression of circNSUN2 (p<0.001, Figure 4A and B). The following basic cell function experiments showed that the upregulation of circNSUN2 partially offset the inhibitory effect of ALO on CRC cell viability, proliferation and apoptosis (p<0.001, Figure 4C–L).Figure 4Overexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. (C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. (E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC.Abbreviations: CircNSUN2, NOP2/Sun RNA methyltransferase 2; NC, negative control.\nOverexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. 
(C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. (E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC.", "We also tested the mutual regulation of circNSUN2 and miR-296-5p in SW480 and HT29 cells. Sh-circNSUN2 could down-regulate the expression of circNSUN2, and besides, it inhibited the proliferation of SW480 and HT29 cells, while promoting their apoptosis and the expression of miR-296-5p (p<0.001, Figure 5A–J). However, the inhibitory effect of sh-circNSUN2 on the malignant development of the two cancer cells was neutralized by miR-296-5p inhibitor (p<0.001, Figure 5A–J).Figure 5Sh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. (H-J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I.Abbreviations: Sh-circNSUN2, silent circNSUN2; sh-NC, silent negative control; IC, miR-296-5p inhibitor control; I, miR-296-5p inhibitor.\nSh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. (H-J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I.", "We screened the target genes of miR-296-5p, and found that miR-296-5p and STAT3 have targeted binding sites (Figure 6A), and this binding relation was further confirmed by a dual-luciferase verification test (p<0.001, Figure 6B and C). 
Next, we used qRT-PCR and Western blot to analyze the mRNA and protein levels of miR-296-5p and STAT3. It was found that miR-296-5p mimic up-regulated miR-296-5p expression while preventing the activation of STAT3 (p<0.001, Figure 7A–G). However, the overexpression of STAT3 reversed the regulation of miR-296-5p mimic on the two genes (p<0.001, Figure 7A–G). Figure 6. The targeted binding of miR-296-5p to STAT3. (A) TargetScan was used to predict the binding sequence of miR-296-5p and STAT3. (B and C) Dual luciferase experiment confirmed that miR-296-5p can bind to STAT3 in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. +++p<0.001 vs MC. Figure 7. Up-regulation of miR-296-5p inhibited the expression of STAT3 in CRC cells. (A and B) The qRT-PCR experiment showed that miR-296-5p mimic up-regulated the level of miR-296-5p. U6 played the role of internal reference. (C and D) The qRT-PCR experiment showed that miR-296-5p mimic suppressed the mRNA level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. (E–G) The Western blot experiment showed that miR-296-5p mimic inhibited the protein level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. ***p<0.001 vs MC+NC; ^^^p<0.001 vs M+NC; ###p<0.001 vs MC+STAT3. Abbreviation: STAT3, signal transducer and activator of transcription 3.", "The following cell experiments showed that by activating Bax and blocking the protein activity of Bcl-2, miR-296-5p mimic inhibited proliferation yet accelerated apoptosis in cancer cells (p<0.001, Figure 8A–I). However, up-regulation of STAT3 produced a completely opposite regulatory effect (p<0.001, Figure 8A–I). More importantly, overexpression of STAT3 neutralized the regulation of miR-296-5p mimic on CRC cells (p<0.001, Figure 8A–I). Figure 8. MiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3.
(G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3.\nMiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3. (G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3.", "CRC is a malignant lesion of the mucosal epithelium of the colon or rectum under the action of various carcinogenic factors such as environment or heredity.2 Most CRC patients suffer from adenocarcinoma, which usually develops from polyps. Tumors developed from polyps can infiltrate in a circular shape along the horizontal axis of the intestinal tube, develop into the deep layer of the intestinal wall, and finally penetrate the intestinal wall and metastasize to blood vessels or lymphatic vessels.20 This is one of the important reasons for the high recurrence and metastasis rates of CRC. The ultimate goals of CRC research are to improve the current treatment of CRC, save patients’ lives and improve their quality of life. Based on the role of traditional Chinese medicine in disease prevention and treatment, this study explored the effects of ALO, an alkaloid exhibiting strong anticancer activity, on the proliferation and apoptosis of CRC cells. The results showed that after ALO treatment, the viability and proliferation of CRC cells were significantly inhibited; on the contrary, the number of apoptotic cancer cells was significantly increased. This is consistent with the previous experimental results.21 In order to probe deeper into the upstream mechanism of ALO in alleviating CRC carcinogenesis through the miR-296-5p/STAT3 axis, we screened and verified the upstream targeting circRNA of miR-296-5p.\nCircRNAs are a type of covalently closed circular non-coding RNA formed by back-splicing of mRNA precursors (pre-mRNA).22 CircRNA was considered to be a useless splicing by-product in early research. 
With the deepening of research, it has been found that circRNAs have a wide range of sources and play a variety of functional roles in the growth and development of organisms, and are characterized as conserved, stable and tissue-specific.22,23 The most familiar and extensively studied function of circRNA is its sponging effect on miRNA.24 For example, the circRNA ciRS-7, which contains more than 70 miR-7 binding sites, can competitively adsorb miR-7 through the AGO2 protein.25 Besides, increasing reports on cancer indicate that circRNAs, such as circRNA-cTFRC, circPSMC3 and circSETD3, act as sponges of miRNAs and participate in transcriptional regulation.26–28 Based on these literature reports, we screened circRNAs that were reported to be abnormally expressed in CRC, and finally obtained 12 circRNAs. After detecting their expression in ALO-treated cancer cells and predicting their binding sequences for miR-296-5p, circNSUN2 was finally identified as the research object of this study.\nCircNSUN2, which maps to the 5p15 amplicon in CRCs, was confirmed by Chen et al to undergo N6-methyladenosine modification that drives its cytoplasmic export and promotes liver metastasis of CRC.29 Similarly, our experimental results uncovered that knockdown of circNSUN2 inhibited proliferation and accelerated apoptosis in CRC cells. More importantly, we revealed for the first time that ALO can suppress circNSUN2 and offset the promoting effect of circNSUN2 overexpression on CRC progression. Since circNSUN2 and miR-296-5p have targeting sequences, we verified the cellular regulatory effects of circNSUN2 and miR-296-5p/STAT3 through dual luciferase and rescue experiments. The final results confirmed our conjecture that regulating the circNSUN2/miR-296-5p/STAT3 axis can reduce the proliferation rate and increase the apoptosis rate of CRC cells.\nAt present, blocking proliferation and increasing apoptosis in cancer cells are among the most intensively discussed mechanisms in CRC research. Tumor cells can proliferate quickly and without restriction, and thus inhibiting their proliferation can produce an anti-tumor effect.30 Apoptosis, alternatively called programmed cell death, is an autonomous cell death process strictly controlled by multiple genes. Apoptosis is an important part of the cell life cycle, and it is also an important link in regulating the development of the body and maintaining the stability of the internal environment of organisms.31,32 Our research shows that ALO inhibits the proliferation and promotes the apoptosis of CRC cells by regulating the circNSUN2/miR-296-5p/STAT3 pathway, and ultimately restrains the tumorigenesis of CRC.\nThere are still certain limitations in our research. Although we have confirmed the effect of ALO on inducing apoptosis and reducing proliferation in CRC cells and clarified the underlying mechanism, the crosstalk between the circNSUN2/miR-296-5p/STAT3 axis and other proliferation- and apoptosis-related pathways has not been analyzed yet, and will be further explored in future research. In addition, the anticancer effect of ALO on CRC and its effective concentration need to be further tested in animal and clinical experiments. Whether ALO has an impact on drug resistance in CRC is also a direction for future research." ]
[ "Introduction", "Method", "Cell Purchase and ALO Processing", "MTT Assay", "EdU (5-Ethynyl-2ʹ-Deoxyuridine) Cell Proliferation Experiment", "Colony Formation Assay", "Apoptosis Experiment", "Bioinformatics Analysis and Target Gene Binding Verification", "Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR)", "Cell Transfection", "Western Blot", "Statistical Analysis", "Result", "ALO at a Concentration Gradient Inhibited Proliferation Yet Promoted Apoptosis in CRC Cells", "The Abnormally Low Expression of miR-296-5p in Colon Cancer Could Be Up-Regulated by ALO", "Among the Differentially Expressed circRNAs in CRC, Only circNSUN2 Not Only Had a Target Site for miR-296-5p, but Also Could Be Regulated by ALO", "Overexpression of circNSUN2 Partially Offset the Suppression of ALO on the Biological Behavior of Cancer Cells", "Sh-circNSUN2 Inhibited the Malignant Development of CRC Cells, Which Was Neutralized by miR-296-5p Inhibitor", "MiR-296-5p Bound to STAT3 and Upregulation of Mir-296-5p Inhibited the Expression of STAT3 in CRC Cells", "MiR-296-5p Mimic Inhibited Proliferation and Promoted Apoptosis in CRC Cells by Regulating Apoptosis-Related Genes, Which Was Partially Reversed by Overexpressed STAT3", "Discussion" ]
[ "Cancer has become the first killer in the world and the biggest obstacle for extending human life expectancy. It is estimated that there were 18.1 million new cancer cases and 9.6 million cancer deaths in 2018.1 Colorectal cancer (CRC) is one of the most common gastrointestinal tumors.2 According to the data from the International Agency for Research on Cancer (IARC),1 the number of new CRC cases worldwide in 2018 was approximately 1.09 million, making CRC the fourth most prevalent malignant tumor after lung, breast and prostate cancers; meantime, CRC had a high mortality rate of about 9.2%, ranking only after lung cancer. Epidemiological studies have shown that the incidence of CRC varies significantly in different countries, and is higher in developed areas.3 More importantly, as the incidence of CRC rises rapidly in people under 50, the age of onset of the disease is getting lower.4,5 Therefore, it is urgent to improve the diagnosis rate and treatment effect of CRC.\nCurrently, the main clinical treatment of CRC is surgery combined with adjuvant radiotherapy, chemotherapy or molecular targeted therapy. Surgery is generally considered as the first choice for comprehensive treatment of CRC. This method is suitable for patients whose tumors are confined to the intestinal wall while penetrating the intestinal wall and invading the serous or extraserous membrane without lymph node metastasis.6,7 However, the availability of radical resection is limited as most patients with CRC have adenocarcinoma, which generally develops from polyps and is metastatic by nature.8 As a result, CRC patients tend to undergo simple resection, which leads to a high recurrence rate. Chemotherapy, therefore, is still necessary for postoperative and late-stage CRC patients.9,10 At present, the commonly used chemotherapeutic drugs in clinic mainly include 5-fluorouracil (5-Fu), oxaliplatin and its derivatives.9 Chemotherapy is highly risky because it indiscriminately kills tumor cells and immune cells and thereby reduces the anti-tumor immune effect.11 Likewise, radiotherapy greatly weakens the body’s immunity.12 Hence, researchers have proposed to find effective treatments or drugs that cause less damage to patients.\nWith advances in the research and development of traditional Chinese medicine, more attention has been paid to the roles of traditional Chinese medicine and its active ingredients in disease prevention and treatment. At the same time, new targets for molecular targeted therapy of diseases have been discovered in the exploration of the molecular mechanisms of traditional Chinese medicine and its monomers. Aloperine (ALO) has been confirmed to have a significant anti-cancer activity. ALO is a component of the traditional Chinese medicine Sophora alopecuroides L.13 It is also one of the main alkaloids separated and extracted in the lab, with a molecular formula of C15H24N2.14 Recent studies have found that ALO has the effects of anti-inflammation, immunosuppression, redox suppression, cardiovascular protection and anti-cancer, among which its tumor-suppressing activity has been extensively reported. 
For example, Yu et al revealed that ALO inhibited the carcinogenesis process by regulating excessive autophagy in thyroid cancer cells;15 Liu et al demonstrated that ALO inhibited the PI3K/Akt pathway, increased apoptosis and caused cell cycle block in liver cancer cells;16 besides, ALO was also found to exert an anti-cancer effect in prostate cancer and breast cancer.17,18 In the previous study, our research group found that ALO up-regulated miR-296-5p and inhibited the activity of its target gene STAT3, thereby inhibiting the proliferation and inducing the apoptosis of CRC cells. However, the activation pathway of miR-296-5p is unclear. In order to clarify the upstream activation pathway of ALO-upregulated miR-296-5p, this study will explore the mechanism of ALO in inhibiting CRC from the perspective of the circRNA-miRNA-mRNA regulatory network.", "Cell Purchase and ALO Processing CCD-18Co and CRC cell lines SW480 (CCL-228™) and HT29 (HTB-38™) used in this research were provided by the American Type Culture Collection (ATCC). According to the culture requirements, CCD-18Co, SW480 and HT29 cells were separately inoculated in DMEM medium (30–2007, ATCC) containing 10% fetal bovine serum and incubated in a D180-P cell incubator (RWD, China) containing 5% CO2 at 37 °C.\nALO (DK0052, CAS NO: 56293-29-9, HPLC≥98%), the research object of this study, was provided by Chengdu DESITE Biological Company, China. ALO was dissolved in fresh DMEM to prepare 0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions, which were separately used to treat the CCD-18Co, SW480 and HT29 cells for 24 hours (h).\nCCD-18Co and CRC cell lines SW480 (CCL-228™) and HT29 (HTB-38™) used in this research were provided by the American Type Culture Collection (ATCC). According to the culture requirements, CCD-18Co, SW480 and HT29 cells were separately inoculated in DMEM medium (30–2007, ATCC) containing 10% fetal bovine serum and incubated in a D180-P cell incubator (RWD, China) containing 5% CO2 at 37 °C.\nALO (DK0052, CAS NO: 56293-29-9, HPLC≥98%), the research object of this study, was provided by Chengdu DESITE Biological Company, China. ALO was dissolved in fresh DMEM to prepare 0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions, which were separately used to treat the CCD-18Co, SW480 and HT29 cells for 24 hours (h).\nMTT Assay One hundred µL of SW480 or HT29 cell suspension (1×104 cells/mL) was transferred to each well of a 96-well plate. Of note, three holes were added to the same treatment group. After treatment with the ALO solutions for 24 h, SW480 or HT29 cells were added with MTT reagent (10 µL/well, PB180519, Procell, USA), which could react with mitochondria in living cells. After 4 h of binding, a HBS-1096A enzyme label analyzer (DeTie, China) was used to detect the absorbance of SW480 or HT29 cells in the mixed solutions at 490 nm, and then cell viability was calculated based on the absorbance.\nOne hundred µL of SW480 or HT29 cell suspension (1×104 cells/mL) was transferred to each well of a 96-well plate. Of note, three holes were added to the same treatment group. After treatment with the ALO solutions for 24 h, SW480 or HT29 cells were added with MTT reagent (10 µL/well, PB180519, Procell, USA), which could react with mitochondria in living cells. 
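The next step of the MTT assay converts the 490 nm absorbance readings into percent viability relative to untreated wells. A minimal sketch of that conversion is given below; the OD values, the blank correction and the use of NumPy are illustrative assumptions rather than the exact calculation used in the study.

```python
# Minimal sketch (hypothetical OD490 readings): percent viability relative to untreated control wells.
import numpy as np

od_blank   = 0.08                           # medium + MTT, no cells (hypothetical blank; blank subtraction is an assumption)
od_control = np.array([1.21, 1.18, 1.25])   # untreated triplicate wells
od_treated = np.array([0.66, 0.61, 0.64])   # ALO-treated triplicate wells

viability = (od_treated - od_blank) / (od_control.mean() - od_blank) * 100
print(f"viability: {viability.mean():.1f}% ± {viability.std(ddof=1):.1f}%")
```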
After 4 h of binding, a HBS-1096A enzyme label analyzer (DeTie, China) was used to detect the absorbance of SW480 or HT29 cells in the mixed solutions at 490 nm, and then cell viability was calculated based on the absorbance.\nEdU (5-Ethynyl-2ʹ-Deoxyuridine) Cell Proliferation Experiment SW480 or HT29 cells in logarithmic growth phase were seeded in 96-well plates at 1×104 cells/well. EdU reagent (ST067) purchased from Beyotime was diluted to 50 μmol/L with fresh DMEM medium. The EdU dilution was then used to incubate the cells at 100 μL/well for 2 h. After removing the culture medium, the SW480 or HT29 cells were fixed with methanol at room temperature for 15 minutes (min). Positive fluorescence expression (red) in the collected SW480 or HT29 cells were observed, photographed and recorded under a Leica DMi8 inverted fluorescence microscope (magnification 100 ×). Then DAPI reagent was used to stain the nuclei, and the stains were observed under a microscope.\nSW480 or HT29 cells in logarithmic growth phase were seeded in 96-well plates at 1×104 cells/well. EdU reagent (ST067) purchased from Beyotime was diluted to 50 μmol/L with fresh DMEM medium. The EdU dilution was then used to incubate the cells at 100 μL/well for 2 h. After removing the culture medium, the SW480 or HT29 cells were fixed with methanol at room temperature for 15 minutes (min). Positive fluorescence expression (red) in the collected SW480 or HT29 cells were observed, photographed and recorded under a Leica DMi8 inverted fluorescence microscope (magnification 100 ×). Then DAPI reagent was used to stain the nuclei, and the stains were observed under a microscope.\nColony Formation Assay Two hundred SW480 or HT29 cells were transferred to petri dishes containing complete medium (10 mL). In order to distribute the cells evenly, the petri dishes were rotated slowly for 1 min after cell inoculation. Next, the culture dishes containing cells were placed in an incubator for routine culture. During the culture, the medium was changed every 2 days (d). After 14 d, the SW480 or HT29 cells were soaked with Giemsa dye for 20 min and then the colony formation of the cells was observed under a microscope.\nTwo hundred SW480 or HT29 cells were transferred to petri dishes containing complete medium (10 mL). In order to distribute the cells evenly, the petri dishes were rotated slowly for 1 min after cell inoculation. Next, the culture dishes containing cells were placed in an incubator for routine culture. During the culture, the medium was changed every 2 days (d). After 14 d, the SW480 or HT29 cells were soaked with Giemsa dye for 20 min and then the colony formation of the cells was observed under a microscope.\nApoptosis Experiment An Annexin V-FITC Apoptosis Detection Kit (CA1020, Solarbio, China) and a flow cytometer (CytoFLEX, BECKMAN COULTER, USA) were used to detect changes in the apoptosis of SW480 or HT29 cells. In brief, SW480 or HT29 cells were prepared into 1×106 cells/mL cell suspension with 1 mL 1× Binding Buffer. One hundred μL of SW480 or HT29 cells and 5 μL of Annexin V-FITC reagent were added to a centrifuge tube and mixed for 10 min in the dark at room temperature. Next, 5 μL of PI reagent was added to the centrifuge tube and incubated in the dark at room temperature for 5 min. Afterwards, PBS was added to adjust the mixture to a final volume of 500 μL. 
After half an hour, the cells in the centrifuge tube were transferred to the detection instrument to calculate the number of apoptotic cells.\nAn Annexin V-FITC Apoptosis Detection Kit (CA1020, Solarbio, China) and a flow cytometer (CytoFLEX, BECKMAN COULTER, USA) were used to detect changes in the apoptosis of SW480 or HT29 cells. In brief, SW480 or HT29 cells were prepared into 1×106 cells/mL cell suspension with 1 mL 1× Binding Buffer. One hundred μL of SW480 or HT29 cells and 5 μL of Annexin V-FITC reagent were added to a centrifuge tube and mixed for 10 min in the dark at room temperature. Next, 5 μL of PI reagent was added to the centrifuge tube and incubated in the dark at room temperature for 5 min. Afterwards, PBS was added to adjust the mixture to a final volume of 500 μL. After half an hour, the cells in the centrifuge tube were transferred to the detection instrument to calculate the number of apoptotic cells.\nBioinformatics Analysis and Target Gene Binding Verification Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve and analyze information of differential expression of miR-296-5p in Colon adenocarcinoma (COAD) patients and healthy volunteers. TargetScan (http://www.targetscan.org) was used to predict the circRNA that contained a binding sequence for miR-296-5p and the downstream target genes of miR-296-5p. The binding sequences predicted by TargetScan were designed by COBIOER (China) and used to construct the following reporter plasmids for dual luciferase experiments: NOP2/Sun RNA methyltransferase 2 wild type (circNSUN2-WT), circNSUN2-mutant type (MUT), Signal Transducer and Activator of Transcription 3 (STAT3)-WT and STAT3-MUT. The reporter plasmids were separately co-transfected with miR-296-5p mimic or mimic control into SW480 and HT29 cells using liposome for 48 h. Luciferase activity in the transfected SW480 and HT29 cells was detected using Dual-Luciferase® Reporter Assay System (E1960, Promega, USA) with a GloMax 20/20 luminometer (Promega, USA).\nStarbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve and analyze information of differential expression of miR-296-5p in Colon adenocarcinoma (COAD) patients and healthy volunteers. TargetScan (http://www.targetscan.org) was used to predict the circRNA that contained a binding sequence for miR-296-5p and the downstream target genes of miR-296-5p. The binding sequences predicted by TargetScan were designed by COBIOER (China) and used to construct the following reporter plasmids for dual luciferase experiments: NOP2/Sun RNA methyltransferase 2 wild type (circNSUN2-WT), circNSUN2-mutant type (MUT), Signal Transducer and Activator of Transcription 3 (STAT3)-WT and STAT3-MUT. The reporter plasmids were separately co-transfected with miR-296-5p mimic or mimic control into SW480 and HT29 cells using liposome for 48 h. Luciferase activity in the transfected SW480 and HT29 cells was detected using Dual-Luciferase® Reporter Assay System (E1960, Promega, USA) with a GloMax 20/20 luminometer (Promega, USA).\nQuantitative Real-Time Polymerase Chain Reaction (qRT-PCR) qRT-PCR is a reliable method for rapid detection of gene mRNA levels. Briefly, total RNA of SW480 and HT29 cells was fully isolated by the conventional RNA extraction reagent TRIzol (15,596,018, Invitrogen, USA), and subsequently reverse transcribed into cDNA using the RNA reverse transcription reagent (RR047A) developed by TaKaRa (Japan). 
The qRT-PCR reaction system which consisted of cDNA, gene primers (Sangon synthesis), SYBR® Green (S4438-20RXN, Sigma-Aldrich, Germany) and DEPC water was added to a 96-well plate and put into the detection instrument (Bio-Rad thermal cycler T100). The reaction conditions were set as follows: pre-denaturation at 95 °C for 10 min, denaturation at 95 °C for 15 seconds (s), and annealing at 58 °C for 1 min, for a total of 40 cycles. According to Anita Ciesielska’s report,19 gene mRNA levels were quantified by the 2−ΔΔCT method. GAPDH and U6 were internal references in the experiment. The specific sequences of synthetic primers were shown in Table 1.Table 1Primers for qRT-PCRGeneForward Primer (5ʹ-3ʹ)Reverse Primer (5ʹ-3ʹ)miR-296-5pCCTGTGTCGTATCCAGTGCAAGTCGTATCCAGTGCGTGTCGcircDDX17TGCCAACCACAACATCCTCCACGCTCCCCAGGATTACCAAATcircHIPK3TATGTTGGTGGATCCTGTTCGGCATGGTGGGTAGACCAAGACTTGTGAcircZFRAACCACCACAGATTCACTATAACCACCACAGATTCACTATcircRNA_100290GTCATTCCCTCTTTAATGGTGCAGAACTTCCGCTCTAACATACHas_circ_0026344CTCAGCCTCTAGCATAAGCTCAGGCAAGAGAATGATTTGAACcircVAPATGGATTCCAAATTGAGATGCGTATTCACTTTTCTATCCGATGGATTTCGCHsa_circ_0009361AGAACCAGATTCGAGACGCCGTGCTCTTCAATGCCACCTTCcircNSUN2CTTGAGAAAATCGCCACACTTGTTGAGGAGCAGTGGTGGcircACAP2TCAGAGCATCTGCCCAAAGTTAAATGAACCCCAAGGCTCCGcircITGA7GTGTGCACAGGTCCTTCCAATGGAAGTTCTGTGAGGGACGcircEIF4G3CCTACCCCATCCCCTTATTCACCGTGCTGTAGACTGCTGAGSTAT3CCCCATACCTGAAGACCAAGGGACTCAAACTGCCCTCCTU6CTCGCTTCGGCAGCACAAACGCTTCACGAATTTGCGTGAPDHTGTGGGCATCAATGGATTTGGACACCATGTATTCCGGGTCAAT\n\nPrimers for qRT-PCR\nqRT-PCR is a reliable method for rapid detection of gene mRNA levels. Briefly, total RNA of SW480 and HT29 cells was fully isolated by the conventional RNA extraction reagent TRIzol (15,596,018, Invitrogen, USA), and subsequently reverse transcribed into cDNA using the RNA reverse transcription reagent (RR047A) developed by TaKaRa (Japan). The qRT-PCR reaction system which consisted of cDNA, gene primers (Sangon synthesis), SYBR® Green (S4438-20RXN, Sigma-Aldrich, Germany) and DEPC water was added to a 96-well plate and put into the detection instrument (Bio-Rad thermal cycler T100). The reaction conditions were set as follows: pre-denaturation at 95 °C for 10 min, denaturation at 95 °C for 15 seconds (s), and annealing at 58 °C for 1 min, for a total of 40 cycles. According to Anita Ciesielska’s report,19 gene mRNA levels were quantified by the 2−ΔΔCT method. GAPDH and U6 were internal references in the experiment. The specific sequences of synthetic primers were shown in Table 1.Table 1Primers for qRT-PCRGeneForward Primer (5ʹ-3ʹ)Reverse Primer (5ʹ-3ʹ)miR-296-5pCCTGTGTCGTATCCAGTGCAAGTCGTATCCAGTGCGTGTCGcircDDX17TGCCAACCACAACATCCTCCACGCTCCCCAGGATTACCAAATcircHIPK3TATGTTGGTGGATCCTGTTCGGCATGGTGGGTAGACCAAGACTTGTGAcircZFRAACCACCACAGATTCACTATAACCACCACAGATTCACTATcircRNA_100290GTCATTCCCTCTTTAATGGTGCAGAACTTCCGCTCTAACATACHas_circ_0026344CTCAGCCTCTAGCATAAGCTCAGGCAAGAGAATGATTTGAACcircVAPATGGATTCCAAATTGAGATGCGTATTCACTTTTCTATCCGATGGATTTCGCHsa_circ_0009361AGAACCAGATTCGAGACGCCGTGCTCTTCAATGCCACCTTCcircNSUN2CTTGAGAAAATCGCCACACTTGTTGAGGAGCAGTGGTGGcircACAP2TCAGAGCATCTGCCCAAAGTTAAATGAACCCCAAGGCTCCGcircITGA7GTGTGCACAGGTCCTTCCAATGGAAGTTCTGTGAGGGACGcircEIF4G3CCTACCCCATCCCCTTATTCACCGTGCTGTAGACTGCTGAGSTAT3CCCCATACCTGAAGACCAAGGGACTCAAACTGCCCTCCTU6CTCGCTTCGGCAGCACAAACGCTTCACGAATTTGCGTGAPDHTGTGGGCATCAATGGATTTGGACACCATGTATTCCGGGTCAAT\n\nPrimers for qRT-PCR\nCell Transfection Exogenous up-regulation or knockdown of test genes is a common method to observe their regulatory effects on cells or tissues. 
Liposome transfection with Lipofectamine 3000 (L3000008, ThermoFisher Scientific, USA) is the most common and convenient method for exogenous interference. Therefore, we commissioned Guangzhou GENESEED Biological Company to synthesize circNSUN2 overexpressed plasmid, circNSUN2 silenced plasmid (sh-circNSUN2), STAT3 overexpressed plasmid and their respective negative controls. MiR-296-5p mimic (miR10000690-1-5) and mimic control (miR1N0000001-1-5), as well as miR-296-5p inhibitor (miR20000690-1-5) and inhibitor control (miR2N0000001-1-5), were purchased directly from Guangzhou RIOBOBIO Company. The plasmids were transfected into SW480 and HT29 cells according to the grouping using Lipofectamine 3000. After 48 h, the success rate of transfection of the cells was determined by qRT-PCR.\nExogenous up-regulation or knockdown of test genes is a common method to observe their regulatory effects on cells or tissues. Liposome transfection with Lipofectamine 3000 (L3000008, ThermoFisher Scientific, USA) is the most common and convenient method for exogenous interference. Therefore, we commissioned Guangzhou GENESEED Biological Company to synthesize circNSUN2 overexpressed plasmid, circNSUN2 silenced plasmid (sh-circNSUN2), STAT3 overexpressed plasmid and their respective negative controls. MiR-296-5p mimic (miR10000690-1-5) and mimic control (miR1N0000001-1-5), as well as miR-296-5p inhibitor (miR20000690-1-5) and inhibitor control (miR2N0000001-1-5), were purchased directly from Guangzhou RIOBOBIO Company. The plasmids were transfected into SW480 and HT29 cells according to the grouping using Lipofectamine 3000. After 48 h, the success rate of transfection of the cells was determined by qRT-PCR.\nWestern Blot RIPA lysate (tissue/cell, R0010) produced by Solarbio (China) was used to separate protein from SW480 and HT29 cells, and its concentration was determined with BCA protein concentration determination reagent (PC0020, Solarbio, China). Then the protein was subjected to high temperature denaturation to maintain a relatively stable state. After being electrophoresed by SDS-PAGE, the protein was transferred to the membrane carrier (PVDF membrane). The membrane was soaked with the same BSA sealant (SW3015) produced by Solarbio for 2 h at room temperature. Next, the membrane was incubated with Abcam (USA) antibodies (anti-STAT3, ab68153, 88 kDa; Bax, ab32503, 21 kDa; Bcl-2, ab59348, 26 kDa) that could bind to the corresponding antigens in the protein on the membrane overnight (4 °C), with GAPDH (ab8245, 36KD) as an internal reference. To identify the antigen-antibody complex, a secondary antibody labeled with HRP (Goat Anti-Rabbit antibody, 1:10,000, ab6721; Goat Anti-Mouse antibody, 1:10000, ab205719) was used to treat the PVDF membrane for 1.5 h at room temperature. Fluorescent signal of the complex was enhanced using the ECL Western Blotting Substrate (PE0010, Solarbio) and detected using the Gel Doc™ XR+ imaging system (BIO-RAD, USA). The gray value of the bands was calculated by Image Lab software to determine the final gene protein level.\nRIPA lysate (tissue/cell, R0010) produced by Solarbio (China) was used to separate protein from SW480 and HT29 cells, and its concentration was determined with BCA protein concentration determination reagent (PC0020, Solarbio, China). Then the protein was subjected to high temperature denaturation to maintain a relatively stable state. After being electrophoresed by SDS-PAGE, the protein was transferred to the membrane carrier (PVDF membrane). 
The membrane was soaked with the same BSA sealant (SW3015) produced by Solarbio for 2 h at room temperature. Next, the membrane was incubated with Abcam (USA) antibodies (anti-STAT3, ab68153, 88 kDa; Bax, ab32503, 21 kDa; Bcl-2, ab59348, 26 kDa) that could bind to the corresponding antigens in the protein on the membrane overnight (4 °C), with GAPDH (ab8245, 36KD) as an internal reference. To identify the antigen-antibody complex, a secondary antibody labeled with HRP (Goat Anti-Rabbit antibody, 1:10,000, ab6721; Goat Anti-Mouse antibody, 1:10000, ab205719) was used to treat the PVDF membrane for 1.5 h at room temperature. Fluorescent signal of the complex was enhanced using the ECL Western Blotting Substrate (PE0010, Solarbio) and detected using the Gel Doc™ XR+ imaging system (BIO-RAD, USA). The gray value of the bands was calculated by Image Lab software to determine the final gene protein level.\nStatistical Analysis SPSS 20.0 software (IBM, USA) was employed for statistical analysis of the data obtained in the experiments. Differences between two groups were compared by independent sample t-test, and those between multiple groups were compared by one-way ANOVA followed by Tukey’s test. When p<0.05, the difference was considered statistically significant.\nSPSS 20.0 software (IBM, USA) was employed for statistical analysis of the data obtained in the experiments. Differences between two groups were compared by independent sample t-test, and those between multiple groups were compared by one-way ANOVA followed by Tukey’s test. When p<0.05, the difference was considered statistically significant.", "CCD-18Co and CRC cell lines SW480 (CCL-228™) and HT29 (HTB-38™) used in this research were provided by the American Type Culture Collection (ATCC). According to the culture requirements, CCD-18Co, SW480 and HT29 cells were separately inoculated in DMEM medium (30–2007, ATCC) containing 10% fetal bovine serum and incubated in a D180-P cell incubator (RWD, China) containing 5% CO2 at 37 °C.\nALO (DK0052, CAS NO: 56293-29-9, HPLC≥98%), the research object of this study, was provided by Chengdu DESITE Biological Company, China. ALO was dissolved in fresh DMEM to prepare 0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions, which were separately used to treat the CCD-18Co, SW480 and HT29 cells for 24 hours (h).", "One hundred µL of SW480 or HT29 cell suspension (1×104 cells/mL) was transferred to each well of a 96-well plate. Of note, three holes were added to the same treatment group. After treatment with the ALO solutions for 24 h, SW480 or HT29 cells were added with MTT reagent (10 µL/well, PB180519, Procell, USA), which could react with mitochondria in living cells. After 4 h of binding, a HBS-1096A enzyme label analyzer (DeTie, China) was used to detect the absorbance of SW480 or HT29 cells in the mixed solutions at 490 nm, and then cell viability was calculated based on the absorbance.", "SW480 or HT29 cells in logarithmic growth phase were seeded in 96-well plates at 1×104 cells/well. EdU reagent (ST067) purchased from Beyotime was diluted to 50 μmol/L with fresh DMEM medium. The EdU dilution was then used to incubate the cells at 100 μL/well for 2 h. After removing the culture medium, the SW480 or HT29 cells were fixed with methanol at room temperature for 15 minutes (min). Positive fluorescence expression (red) in the collected SW480 or HT29 cells were observed, photographed and recorded under a Leica DMi8 inverted fluorescence microscope (magnification 100 ×). 
Then DAPI reagent was used to stain the nuclei, and the stains were observed under a microscope.", "Two hundred SW480 or HT29 cells were transferred to petri dishes containing complete medium (10 mL). In order to distribute the cells evenly, the petri dishes were rotated slowly for 1 min after cell inoculation. Next, the culture dishes containing cells were placed in an incubator for routine culture. During the culture, the medium was changed every 2 days (d). After 14 d, the SW480 or HT29 cells were soaked with Giemsa dye for 20 min and then the colony formation of the cells was observed under a microscope.", "An Annexin V-FITC Apoptosis Detection Kit (CA1020, Solarbio, China) and a flow cytometer (CytoFLEX, BECKMAN COULTER, USA) were used to detect changes in the apoptosis of SW480 or HT29 cells. In brief, SW480 or HT29 cells were prepared into 1×106 cells/mL cell suspension with 1 mL 1× Binding Buffer. One hundred μL of SW480 or HT29 cells and 5 μL of Annexin V-FITC reagent were added to a centrifuge tube and mixed for 10 min in the dark at room temperature. Next, 5 μL of PI reagent was added to the centrifuge tube and incubated in the dark at room temperature for 5 min. Afterwards, PBS was added to adjust the mixture to a final volume of 500 μL. After half an hour, the cells in the centrifuge tube were transferred to the detection instrument to calculate the number of apoptotic cells.", "Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve and analyze information of differential expression of miR-296-5p in Colon adenocarcinoma (COAD) patients and healthy volunteers. TargetScan (http://www.targetscan.org) was used to predict the circRNA that contained a binding sequence for miR-296-5p and the downstream target genes of miR-296-5p. The binding sequences predicted by TargetScan were designed by COBIOER (China) and used to construct the following reporter plasmids for dual luciferase experiments: NOP2/Sun RNA methyltransferase 2 wild type (circNSUN2-WT), circNSUN2-mutant type (MUT), Signal Transducer and Activator of Transcription 3 (STAT3)-WT and STAT3-MUT. The reporter plasmids were separately co-transfected with miR-296-5p mimic or mimic control into SW480 and HT29 cells using liposome for 48 h. Luciferase activity in the transfected SW480 and HT29 cells was detected using Dual-Luciferase® Reporter Assay System (E1960, Promega, USA) with a GloMax 20/20 luminometer (Promega, USA).", "qRT-PCR is a reliable method for rapid detection of gene mRNA levels. Briefly, total RNA of SW480 and HT29 cells was fully isolated by the conventional RNA extraction reagent TRIzol (15,596,018, Invitrogen, USA), and subsequently reverse transcribed into cDNA using the RNA reverse transcription reagent (RR047A) developed by TaKaRa (Japan). The qRT-PCR reaction system which consisted of cDNA, gene primers (Sangon synthesis), SYBR® Green (S4438-20RXN, Sigma-Aldrich, Germany) and DEPC water was added to a 96-well plate and put into the detection instrument (Bio-Rad thermal cycler T100). The reaction conditions were set as follows: pre-denaturation at 95 °C for 10 min, denaturation at 95 °C for 15 seconds (s), and annealing at 58 °C for 1 min, for a total of 40 cycles. According to Anita Ciesielska’s report,19 gene mRNA levels were quantified by the 2−ΔΔCT method. GAPDH and U6 were internal references in the experiment. 
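For readers unfamiliar with the 2−ΔΔCT method mentioned above, the sketch below illustrates the calculation. The Ct values, the function name and the averaging convention are hypothetical; GAPDH/U6 stand in for the internal reference, as in the study.

```python
# Minimal sketch of the 2^-ΔΔCt relative quantification described above (hypothetical Ct values).
import numpy as np

def fold_change(ct_gene_trt, ct_ref_trt, ct_gene_ctrl, ct_ref_ctrl):
    """Relative expression (treated vs control) by the 2^-ΔΔCt method."""
    d_ct_trt  = ct_gene_trt - ct_ref_trt         # ΔCt, treated: gene of interest minus internal reference
    d_ct_ctrl = ct_gene_ctrl - ct_ref_ctrl       # ΔCt, control
    dd_ct = d_ct_trt - d_ct_ctrl.mean()          # ΔΔCt relative to the mean control ΔCt
    return 2.0 ** (-dd_ct)

# hypothetical triplicate Ct values for circNSUN2 with and without ALO treatment
fc = fold_change(np.array([26.1, 26.3, 25.9]), np.array([18.0, 18.1, 17.9]),
                 np.array([24.0, 24.2, 23.8]), np.array([18.0, 18.2, 17.9]))
print(f"circNSUN2 fold change after ALO: {fc.mean():.2f} ± {fc.std(ddof=1):.2f}")
```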
The specific sequences of synthetic primers were shown in Table 1.Table 1Primers for qRT-PCRGeneForward Primer (5ʹ-3ʹ)Reverse Primer (5ʹ-3ʹ)miR-296-5pCCTGTGTCGTATCCAGTGCAAGTCGTATCCAGTGCGTGTCGcircDDX17TGCCAACCACAACATCCTCCACGCTCCCCAGGATTACCAAATcircHIPK3TATGTTGGTGGATCCTGTTCGGCATGGTGGGTAGACCAAGACTTGTGAcircZFRAACCACCACAGATTCACTATAACCACCACAGATTCACTATcircRNA_100290GTCATTCCCTCTTTAATGGTGCAGAACTTCCGCTCTAACATACHas_circ_0026344CTCAGCCTCTAGCATAAGCTCAGGCAAGAGAATGATTTGAACcircVAPATGGATTCCAAATTGAGATGCGTATTCACTTTTCTATCCGATGGATTTCGCHsa_circ_0009361AGAACCAGATTCGAGACGCCGTGCTCTTCAATGCCACCTTCcircNSUN2CTTGAGAAAATCGCCACACTTGTTGAGGAGCAGTGGTGGcircACAP2TCAGAGCATCTGCCCAAAGTTAAATGAACCCCAAGGCTCCGcircITGA7GTGTGCACAGGTCCTTCCAATGGAAGTTCTGTGAGGGACGcircEIF4G3CCTACCCCATCCCCTTATTCACCGTGCTGTAGACTGCTGAGSTAT3CCCCATACCTGAAGACCAAGGGACTCAAACTGCCCTCCTU6CTCGCTTCGGCAGCACAAACGCTTCACGAATTTGCGTGAPDHTGTGGGCATCAATGGATTTGGACACCATGTATTCCGGGTCAAT\n\nPrimers for qRT-PCR", "Exogenous up-regulation or knockdown of test genes is a common method to observe their regulatory effects on cells or tissues. Liposome transfection with Lipofectamine 3000 (L3000008, ThermoFisher Scientific, USA) is the most common and convenient method for exogenous interference. Therefore, we commissioned Guangzhou GENESEED Biological Company to synthesize circNSUN2 overexpressed plasmid, circNSUN2 silenced plasmid (sh-circNSUN2), STAT3 overexpressed plasmid and their respective negative controls. MiR-296-5p mimic (miR10000690-1-5) and mimic control (miR1N0000001-1-5), as well as miR-296-5p inhibitor (miR20000690-1-5) and inhibitor control (miR2N0000001-1-5), were purchased directly from Guangzhou RIOBOBIO Company. The plasmids were transfected into SW480 and HT29 cells according to the grouping using Lipofectamine 3000. After 48 h, the success rate of transfection of the cells was determined by qRT-PCR.", "RIPA lysate (tissue/cell, R0010) produced by Solarbio (China) was used to separate protein from SW480 and HT29 cells, and its concentration was determined with BCA protein concentration determination reagent (PC0020, Solarbio, China). Then the protein was subjected to high temperature denaturation to maintain a relatively stable state. After being electrophoresed by SDS-PAGE, the protein was transferred to the membrane carrier (PVDF membrane). The membrane was soaked with the same BSA sealant (SW3015) produced by Solarbio for 2 h at room temperature. Next, the membrane was incubated with Abcam (USA) antibodies (anti-STAT3, ab68153, 88 kDa; Bax, ab32503, 21 kDa; Bcl-2, ab59348, 26 kDa) that could bind to the corresponding antigens in the protein on the membrane overnight (4 °C), with GAPDH (ab8245, 36KD) as an internal reference. To identify the antigen-antibody complex, a secondary antibody labeled with HRP (Goat Anti-Rabbit antibody, 1:10,000, ab6721; Goat Anti-Mouse antibody, 1:10000, ab205719) was used to treat the PVDF membrane for 1.5 h at room temperature. Fluorescent signal of the complex was enhanced using the ECL Western Blotting Substrate (PE0010, Solarbio) and detected using the Gel Doc™ XR+ imaging system (BIO-RAD, USA). The gray value of the bands was calculated by Image Lab software to determine the final gene protein level.", "SPSS 20.0 software (IBM, USA) was employed for statistical analysis of the data obtained in the experiments. Differences between two groups were compared by independent sample t-test, and those between multiple groups were compared by one-way ANOVA followed by Tukey’s test. 
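To make the statistical workflow concrete, the sketch below mirrors the two comparisons described above, an independent-sample t-test for two groups and one-way ANOVA followed by Tukey's post-hoc test for several groups, using SciPy and statsmodels. The viability numbers and group labels are invented for illustration and are not the study's data.

```python
# Minimal sketch (hypothetical viability values) of the group comparisons described above.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([100.0, 98.5, 101.2])
alo_04  = np.array([71.3, 69.8, 73.0])
alo_08  = np.array([52.1, 49.7, 50.9])

# two groups: independent-sample t-test
t, p = stats.ttest_ind(alo_08, control)

# three or more groups: one-way ANOVA, then Tukey's post-hoc test
f, p_anova = stats.f_oneway(control, alo_04, alo_08)
values = np.concatenate([control, alo_04, alo_08])
labels = ["control"] * 3 + ["ALO 0.4"] * 3 + ["ALO 0.8"] * 3
print(f"t-test p = {p:.4f}, ANOVA p = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, labels))
```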
When p<0.05, the difference was considered statistically significant.", "ALO at a Concentration Gradient Inhibited Proliferation Yet Promoted Apoptosis in CRC Cells In this study, the toxicity of ALO in normal cells (CCD-18Co) was detected, ALO showed no significant toxicity to CCD-18Co cells (Figure 1A). In order to study the effects of ALO on the growth and basic physiological functions of CRC cells, we used different concentrations of ALO to treat SW480 and HT29 cells. The results showed that 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions all inhibited the viability of SW480 and HT29 cells (p<0.05, Figure 1B and C). Moreover, the viability of SW480 and HT29 cells was close to 50% under the treatment of 0.8 mmol/L ALO solution, but was reduced to less than 50% when the ALO concentration was increased to 1 mmol/L (Figure 1B and C). In the following EdU cell proliferation experiment, the red fluorescence expression was reduced in SW480 and HT29 cells under the treatment of 0.8 mmol/L ALO (Figure 1D and E). Consistently, the clone formation experiment also showed that different concentrations of ALO (0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L) reduced the number of cell clones and inhibited the proliferation of SW480 and HT29 cells (p<0.05, Figure 1F–H). At the same time, the increase in the concentration of the ALO solution accelerated the death of CRC cells. As shown in Figure 1I–K, the number of apoptotic SW480 and HT29 cells increased significantly after treatment with different concentrations of ALO (p<0.01, Figure 1I–K).Figure 1ALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs Control.Abbreviations: ALO, aloperine; CRC, colorectal cancer.\nALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. 
*p<0.05, **p<0.01, ***p<0.001 vs Control.\nIn this study, the toxicity of ALO in normal cells (CCD-18Co) was detected, ALO showed no significant toxicity to CCD-18Co cells (Figure 1A). In order to study the effects of ALO on the growth and basic physiological functions of CRC cells, we used different concentrations of ALO to treat SW480 and HT29 cells. The results showed that 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions all inhibited the viability of SW480 and HT29 cells (p<0.05, Figure 1B and C). Moreover, the viability of SW480 and HT29 cells was close to 50% under the treatment of 0.8 mmol/L ALO solution, but was reduced to less than 50% when the ALO concentration was increased to 1 mmol/L (Figure 1B and C). In the following EdU cell proliferation experiment, the red fluorescence expression was reduced in SW480 and HT29 cells under the treatment of 0.8 mmol/L ALO (Figure 1D and E). Consistently, the clone formation experiment also showed that different concentrations of ALO (0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L) reduced the number of cell clones and inhibited the proliferation of SW480 and HT29 cells (p<0.05, Figure 1F–H). At the same time, the increase in the concentration of the ALO solution accelerated the death of CRC cells. As shown in Figure 1I–K, the number of apoptotic SW480 and HT29 cells increased significantly after treatment with different concentrations of ALO (p<0.01, Figure 1I–K).Figure 1ALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs Control.Abbreviations: ALO, aloperine; CRC, colorectal cancer.\nALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. 
*p<0.05, **p<0.01, ***p<0.001 vs Control.\nThe Abnormally Low Expression of miR-296-5p in Colon Cancer Could Be Up-Regulated by ALO Data from the Starbase database showed that the expression of miR-296-5p in colon cancer patients was significantly lower than that in healthy people (p=2.5e-6, Figure 2A). In this study, however, after treatment with different concentrations of ALO for 24 h, miR-296-5p activity in SW480 and HT29 cells was significantly increased (p<0.01, Figure 2B and C), suggesting that ALO can stimulate the up-regulation of miR-296-5p.Figure 2The abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.Abbreviation: qRT-PCR, quantitative real-time polymerase chain reaction.\nThe abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.\nData from the Starbase database showed that the expression of miR-296-5p in colon cancer patients was significantly lower than that in healthy people (p=2.5e-6, Figure 2A). In this study, however, after treatment with different concentrations of ALO for 24 h, miR-296-5p activity in SW480 and HT29 cells was significantly increased (p<0.01, Figure 2B and C), suggesting that ALO can stimulate the up-regulation of miR-296-5p.Figure 2The abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.Abbreviation: qRT-PCR, quantitative real-time polymerase chain reaction.\nThe abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. 
All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.\nAmong the Differentially Expressed circRNAs in CRC, Only circNSUN2 Not Only Had a Target Site for miR-296-5p, but Also Could Be Regulated by ALO Through literature review, we screened 12 circRNAs that were reported to be associated with CRC: circDDX17, circHIPK3, circZFR, circRNA_100290, Hsa_circ_0026344, circVAPA, Hsa_circ_0009361, circNSUN2, circACAP2, circACVRL1, circITGA7 and circEIF4G3. We first detected the regulatory effect of ALO on these differentially expressed circRNAs by qRT-PCR to perform preliminary screening (Figure 3A). The results showed that the expressions of circDDX17, Hsa_circ_0026344, circVAPA, circNSUN2, circACAP2, circACVRL1 and circEIF4G3 were regulated by 0.8 mmol/L ALO (p<0.001, Figure 3A). Through subsequent screening of target genes, we found that only circNSUN2 and miR-296-5p have targeted binding sites (Figure 3B). After further verification, it was confirmed that circNSUN2-WT and miR-296-5p mimic did reduce the luciferase activity in SW480 and HT29 cells (p<0.001, Figure 3C and D). Therefore, we chose circNSUN2 as the next research object.Figure 3Among the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC.Abbreviation: MC, miR-296-5p mimic control.\nOverexpression of circNSUN2 Partially Offset the Suppression of ALO on the Biological Behavior of Cancer Cells In SW480 and HT29 cells, ALO suppressed the expression level of circNSUN2, whereas transfection of the circNSUN2 overexpression plasmid significantly increased the expression of circNSUN2 (p<0.001, Figure 4A and B). The following basic cell function experiments showed that the upregulation of circNSUN2 partially offset the inhibitory effect of ALO on CRC cell viability and proliferation, as well as its promotion of apoptosis (p<0.001, Figure 4C–L).Figure 4Overexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. (C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. (E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC.Abbreviations: CircNSUN2, NOP2/Sun RNA methyltransferase 2; NC, negative control.\nSh-circNSUN2 Inhibited the Malignant Development of CRC Cells, Which Was Neutralized by miR-296-5p Inhibitor We also tested the mutual regulation of circNSUN2 and miR-296-5p in SW480 and HT29 cells.
Sh-circNSUN2 down-regulated the expression of circNSUN2; in addition, it inhibited the proliferation of SW480 and HT29 cells while promoting their apoptosis and the expression of miR-296-5p (p<0.001, Figure 5A–J). However, the inhibitory effect of sh-circNSUN2 on the malignant development of the two cancer cell lines was neutralized by miR-296-5p inhibitor (p<0.001, Figure 5A–J).Figure 5Sh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. (H–J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I.Abbreviations: Sh-circNSUN2, silent circNSUN2; sh-NC, silent negative control; IC, miR-296-5p inhibitor control; I, miR-296-5p inhibitor.\nMiR-296-5p Bound to STAT3 and Upregulation of miR-296-5p Inhibited the Expression of STAT3 in CRC Cells We screened the target genes of miR-296-5p and found that miR-296-5p and STAT3 have targeted binding sites (Figure 6A); this binding relation was further confirmed by a dual-luciferase verification test (p<0.001, Figure 6B and C). Next, we used qRT-PCR and Western blot to analyze the mRNA and protein levels of miR-296-5p and STAT3. It was found that miR-296-5p mimic up-regulated miR-296-5p expression while preventing the activation of STAT3 (p<0.001, Figure 7A–G). However, the overexpression of STAT3 reversed the regulation of miR-296-5p mimic on the two genes (p<0.001, Figure 7A–G).Figure 6The targeted binding of miR-296-5p to STAT3. (A) TargetScan was used to predict the binding sequence of miR-296-5p and STAT3. (B and C) Dual luciferase experiment confirmed that miR-296-5p can bind to STAT3 in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. +++p<0.001 vs MC.Figure 7Up-regulation of miR-296-5p inhibited the expression of STAT3 in CRC cells. (A and B) The qRT-PCR experiment showed that miR-296-5p mimic up-regulated the level of miR-296-5p. U6 played the role of internal reference. (C and D) The qRT-PCR experiment showed that miR-296-5p mimic suppressed the mRNA level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. (E–G) The Western blot experiment showed that miR-296-5p mimic inhibited the protein level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. ***p<0.001 vs MC+NC; ^^^p<0.001 vs M+NC; ###p<0.001 vs MC+STAT3.Abbreviation: STAT3, signal transducer and activator of transcription 3.\nMiR-296-5p Mimic Inhibited Proliferation and Promoted Apoptosis in CRC Cells by Regulating Apoptosis-Related Genes, Which Was Partially Reversed by Overexpressed STAT3 The following cell experiments showed that by activating Bax and blocking the protein activity of Bcl-2, miR-296-5p mimic inhibited proliferation yet accelerated apoptosis in cancer cells (p<0.001, Figure 8A–I). However, up-regulation of STAT3 produced a completely opposite regulatory effect (p<0.001, Figure 8A–I). More importantly, overexpression of STAT3 neutralized the regulation of miR-296-5p mimic on CRC cells (p<0.001, Figure 8A–I).Figure 8MiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3.
(G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3.", "In this study, the toxicity of ALO in normal cells (CCD-18Co) was detected; ALO showed no significant toxicity to CCD-18Co cells (Figure 1A). In order to study the effects of ALO on the growth and basic physiological functions of CRC cells, we used different concentrations of ALO to treat SW480 and HT29 cells. The results showed that 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions all inhibited the viability of SW480 and HT29 cells (p<0.05, Figure 1B and C). Moreover, the viability of SW480 and HT29 cells was close to 50% under the treatment of 0.8 mmol/L ALO solution, but was reduced to less than 50% when the ALO concentration was increased to 1 mmol/L (Figure 1B and C). In the following EdU cell proliferation experiment, the red fluorescence expression was reduced in SW480 and HT29 cells under the treatment of 0.8 mmol/L ALO (Figure 1D and E). Consistently, the clone formation experiment also showed that different concentrations of ALO (0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L) reduced the number of cell clones and inhibited the proliferation of SW480 and HT29 cells (p<0.05, Figure 1F–H). At the same time, the increase in the concentration of the ALO solution accelerated the death of CRC cells. As shown in Figure 1I–K, the number of apoptotic SW480 and HT29 cells increased significantly after treatment with different concentrations of ALO (p<0.01, Figure 1I–K).Figure 1ALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells.
All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs Control.Abbreviations: ALO, aloperine; CRC, colorectal cancer.\nALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs Control.", "Data from the Starbase database showed that the expression of miR-296-5p in colon cancer patients was significantly lower than that in healthy people (p=2.5e-6, Figure 2A). In this study, however, after treatment with different concentrations of ALO for 24 h, miR-296-5p activity in SW480 and HT29 cells was significantly increased (p<0.01, Figure 2B and C), suggesting that ALO can stimulate the up-regulation of miR-296-5p.Figure 2The abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.Abbreviation: qRT-PCR, quantitative real-time polymerase chain reaction.\nThe abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.", "Through literature review, we screened 12 circRNAs that were reported to be associated with CRC: circDDX17, circHIPK3, circZFR, circRNA_100290, Has_circ_0026344, circVAPA, Hsa_circ_0009361, circNSUN2, circACAP2, circACVRL1, circITGA7 and circEIF4G3. We first detected the regulatory effect of ALO on these differentially expressed circRNAs by qRT-PCR to perform preliminary screening (Figure 3A). The results showed that the expressions of circDDX17, Has_circ_0026344, circVAPA, circNSUN2, circACAP2, circACVRL1 and circEIF4G3 were regulated by 0.8 mmol/L ALO (p<0.001, Figure 3A). 
Through subsequent screening of target genes, we found that only circNSUN2 and miR-296-5p have targeted binding sites (Figure 3B). After further verification, it was confirmed that circNSUN2-WT and miR-296-5p mimic did reduce the luciferase activity in SW480 and HT29 cells (p<0.001, Figure 3C and D). Therefore, we chose circNSUN2 as the next research object.Figure 3Among the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC.Abbreviation: MC, miR-296-5p mimic control.\nAmong the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC.", "In SW480 and HT29 cells, ALO suppressed the mRNA level of circNSUN2, whereas transfection of circNSUN2 overexpression significantly increased the expression of circNSUN2 (p<0.001, Figure 4A and B). The following basic cell function experiments showed that the upregulation of circNSUN2 partially offset the inhibitory effect of ALO on CRC cell viability, proliferation and apoptosis (p<0.001, Figure 4C–L).Figure 4Overexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. (C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. (E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC.Abbreviations: CircNSUN2, NOP2/Sun RNA methyltransferase 2; NC, negative control.\nOverexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. 
(C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. (E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC.", "We also tested the mutual regulation of circNSUN2 and miR-296-5p in SW480 and HT29 cells. Sh-circNSUN2 could down-regulate the expression of circNSUN2, and besides, it inhibited the proliferation of SW480 and HT29 cells, while promoting their apoptosis and the expression of miR-296-5p (p<0.001, Figure 5A–J). However, the inhibitory effect of sh-circNSUN2 on the malignant development of the two cancer cells was neutralized by miR-296-5p inhibitor (p<0.001, Figure 5A–J).Figure 5Sh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. (H-J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I.Abbreviations: Sh-circNSUN2, silent circNSUN2; sh-NC, silent negative control; IC, miR-296-5p inhibitor control; I, miR-296-5p inhibitor.\nSh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. (H-J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I.", "We screened the target genes of miR-296-5p, and found that miR-296-5p and STAT3 have targeted binding sites (Figure 6A), and this binding relation was further confirmed by a dual-luciferase verification test (p<0.001, Figure 6B and C). 
Next, we used qRT-PCR and Western blot to analyze the mRNA and protein levels of miR-296-5p and STAT. It was found that miR-296-5p mimic up-regulated miR-296-5p expression while preventing the activation of STAT3 (p<0.001, Figure 7A–G). However, the overexpression of STAT3 reversed the regulation of miR-296-5p mimic on the two genes (p<0.001, Figure 7A–G).Figure 6The targeted binding of miR-296-5p to STAT3. (A) TargetScan was used to predict the binding sequence of miR-296-5p and STAT3. (B and C) Dual luciferase experiment confirmed that miR-296-5p can bind to STAT3 in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. +++p<0.001 vs MC.Figure 7Up-regulation of miR-296-5p inhibited the expression of STAT3 in CRC cells. (A and B) The qRT-PCR experiment showed that miR-296-5p mimic up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (C and D) The qRT-PCR experiment showed that miR-296-5p mimic suppressed the mRNA level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. (E–G) The Western blot experiment showed that miR-296-5p mimic inhibited the protein level of STAT3, while over-expression of STAT3 reversed this effect. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. ***p<0.001 vs MC+NC; ^^^p<0.001 vs M+NC; ###p<0.001 vs MC+STAT3.Abbreviation: STAT3, signal transducer and activator of transcription 3.\nThe targeted binding of miR-296-5p to STAT3. (A) TargetScan was used to predict the binding sequence of miR-296-5p and STAT3. (B and C) Dual luciferase experiment confirmed that miR-296-5p can bind to STAT3 in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. +++p<0.001 vs MC.\nUp-regulation of miR-296-5p inhibited the expression of STAT3 in CRC cells. (A and B) The qRT-PCR experiment showed that miR-296-5p mimic up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (C and D) The qRT-PCR experiment showed that miR-296-5p mimic suppressed the mRNA level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. (E–G) The Western blot experiment showed that miR-296-5p mimic inhibited the protein level of STAT3, while over-expression of STAT3 reversed this effect. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. ***p<0.001 vs MC+NC; ^^^p<0.001 vs M+NC; ###p<0.001 vs MC+STAT3.", "The following cell experiments showed that by activating Bax and blocking the protein activity of Bcl-2, miR-296-5p mimic inhibited proliferation yet accelerated apoptosis in cancer cells (p<0.001, Figure 8A–I). However, up-regulation of STAT3 produced a completely opposite regulatory effect (p<0.001, Figure 8A–I). More importantly, overexpression of STAT3 neutralized the regulation of miR-296-5p mimic on CRC cells (p<0.001, Figure 8A–I).Figure 8MiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3. 
(G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3.\nMiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3. (G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3.", "CRC is a malignant lesion of the mucosal epithelium of the colon or rectum under the action of various carcinogenic factors such as environment or heredity.2 Most CRC patients suffer from adenocarcinoma, which usually develops from polyps. Tumors developed from polyps can infiltrate in a circular shape along the horizontal axis of the intestinal tube, develop into the deep layer of the intestinal wall, and finally penetrate the intestinal wall and metastasize to blood vessels or lymphatic vessels.20 This is one of the important reasons for the high recurrence and metastasis rates of CRC. The ultimate goals of CRC research are to improve the current treatment of CRC, save patients’ lives and improve their quality of life. Based on the role of traditional Chinese medicine in disease prevention and treatment, this study explored the effects of ALO, an alkaloid exhibiting strong anticancer activity, on the proliferation and apoptosis of CRC cells. The results showed that after ALO treatment, the viability and proliferation of CRC cells were significantly inhibited; on the contrary, the number of apoptotic cancer cells was significantly increased. This is consistent with the previous experimental results.21 In order to probe deeper into the upstream mechanism of ALO in alleviating CRC carcinogenesis through the miR-296-5p/STAT3 axis, we screened and verified the upstream targeting circRNA of miR-296-5p.\nCircRNAs are a type of covalently closed circular non-coding RNA formed by back-splicing of mRNA precursors (pre-mRNA).22 CircRNA was considered to be a useless splicing by-product in early research. 
As research has deepened, circRNAs have been found to arise from a wide range of sources and to play diverse functional roles in the growth and development of organisms, being conserved, stable and tissue-specific.22,23 The most familiar and extensively studied function of circRNA is its sponging effect on miRNA.24 For example, the circRNA ciRS-7, which contains more than 70 miR-7 binding sites, competitively adsorbs miR-7 through the AGO2 protein.25 Besides, increasing reports on cancer indicate that circRNAs, such as circRNA-cTFRC, circPSMC3 and circSETD3, act as sponges of miRNAs and participate in transcriptional regulation.26–28 Based on these literature reports, we screened circRNAs reported to be abnormally expressed in CRC and finally obtained 12 circRNAs. After detecting their expression in ALO-treated cancer cells and predicting their binding sequences for miR-296-5p, circNSUN2 was identified as the research object of this study.\nCircNSUN2, which maps to the 5p15 amplicon in CRCs, was confirmed by Chen et al to regulate cytoplasmic export and promote liver metastasis of CRC through N6-methyladenosine modification.29 Similarly, our experimental results uncovered that knockdown of circNSUN2 inhibited proliferation and accelerated apoptosis in CRC cells. More importantly, we revealed for the first time that ALO can prevent the activation of circNSUN2 and offset the promoting effect of circNSUN2 overexpression on CRC progression. Since circNSUN2 contains a targeting sequence for miR-296-5p, we verified the regulatory effects of circNSUN2 on the miR-296-5p/STAT3 axis through dual luciferase and rescue experiments. The final results confirmed our conjecture that regulating the circNSUN2/miR-296-5p/STAT3 axis can reduce the proliferation rate and increase the apoptosis rate of CRC cells.\nAt present, blocking proliferation and increasing apoptosis in cancer cells are intensively discussed mechanisms in CRC research. Tumor cells can proliferate quickly and unrestrictedly, and thus inhibiting their proliferation can produce an anti-tumor effect.30 Apoptosis, alternatively called programmed cell death, is an autonomous cell death process strictly controlled by multiple genes. Apoptosis is an important part of the cell life cycle, and it is also an important link in regulating the development of the body and maintaining the stability of the internal environment of organisms.31,32 Our research shows that ALO inhibits the proliferation and promotes the apoptosis of CRC cells by regulating the circNSUN2/miR-296-5p/STAT3 pathway, and ultimately suppresses the tumorigenesis of CRC.\nThere are still certain limitations in our research. Although we have confirmed that ALO induces apoptosis and reduces proliferation in CRC cells and clarified the underlying mechanism, the relationship between the circNSUN2/miR-296-5p/STAT3 axis and other proliferation- and apoptosis-related pathways has not yet been analyzed, which will be further explored in future research. In addition, the anticancer effect of ALO on CRC and its experimental concentration need to be further tested in animal and clinical studies. Whether ALO has an impact on drug resistance in CRC is also a direction for future research." ]
[ "intro", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "colorectal cancer", "aloperine", "NOP2/Sun RNA methyltransferase 2", "miR-296-5p", "signal transducer and activator of transcription 3" ]
Introduction: Cancer has become the first killer in the world and the biggest obstacle for extending human life expectancy. It is estimated that there were 18.1 million new cancer cases and 9.6 million cancer deaths in 2018.1 Colorectal cancer (CRC) is one of the most common gastrointestinal tumors.2 According to the data from the International Agency for Research on Cancer (IARC),1 the number of new CRC cases worldwide in 2018 was approximately 1.09 million, making CRC the fourth most prevalent malignant tumor after lung, breast and prostate cancers; meantime, CRC had a high mortality rate of about 9.2%, ranking only after lung cancer. Epidemiological studies have shown that the incidence of CRC varies significantly in different countries, and is higher in developed areas.3 More importantly, as the incidence of CRC rises rapidly in people under 50, the age of onset of the disease is getting lower.4,5 Therefore, it is urgent to improve the diagnosis rate and treatment effect of CRC. Currently, the main clinical treatment of CRC is surgery combined with adjuvant radiotherapy, chemotherapy or molecular targeted therapy. Surgery is generally considered as the first choice for comprehensive treatment of CRC. This method is suitable for patients whose tumors are confined to the intestinal wall while penetrating the intestinal wall and invading the serous or extraserous membrane without lymph node metastasis.6,7 However, the availability of radical resection is limited as most patients with CRC have adenocarcinoma, which generally develops from polyps and is metastatic by nature.8 As a result, CRC patients tend to undergo simple resection, which leads to a high recurrence rate. Chemotherapy, therefore, is still necessary for postoperative and late-stage CRC patients.9,10 At present, the commonly used chemotherapeutic drugs in clinic mainly include 5-fluorouracil (5-Fu), oxaliplatin and its derivatives.9 Chemotherapy is highly risky because it indiscriminately kills tumor cells and immune cells and thereby reduces the anti-tumor immune effect.11 Likewise, radiotherapy greatly weakens the body’s immunity.12 Hence, researchers have proposed to find effective treatments or drugs that cause less damage to patients. With advances in the research and development of traditional Chinese medicine, more attention has been paid to the roles of traditional Chinese medicine and its active ingredients in disease prevention and treatment. At the same time, new targets for molecular targeted therapy of diseases have been discovered in the exploration of the molecular mechanisms of traditional Chinese medicine and its monomers. Aloperine (ALO) has been confirmed to have a significant anti-cancer activity. ALO is a component of the traditional Chinese medicine Sophora alopecuroides L.13 It is also one of the main alkaloids separated and extracted in the lab, with a molecular formula of C15H24N2.14 Recent studies have found that ALO has the effects of anti-inflammation, immunosuppression, redox suppression, cardiovascular protection and anti-cancer, among which its tumor-suppressing activity has been extensively reported. 
For example, Yu et al revealed that ALO inhibited the carcinogenesis process by regulating excessive autophagy in thyroid cancer cells;15 Liu et al demonstrated that ALO inhibited the PI3K/Akt pathway, increased apoptosis and caused cell cycle block in liver cancer cells;16 besides, ALO was also found to exert an anti-cancer effect in prostate cancer and breast cancer.17,18 In the previous study, our research group found that ALO up-regulated miR-296-5p and inhibited the activity of its target gene STAT3, thereby inhibiting the proliferation and inducing the apoptosis of CRC cells. However, the activation pathway of miR-296-5p is unclear. In order to clarify the upstream activation pathway of ALO-upregulated miR-296-5p, this study will explore the mechanism of ALO in inhibiting CRC from the perspective of the circRNA-miRNA-mRNA regulatory network. Method: Cell Purchase and ALO Processing CCD-18Co and CRC cell lines SW480 (CCL-228™) and HT29 (HTB-38™) used in this research were provided by the American Type Culture Collection (ATCC). According to the culture requirements, CCD-18Co, SW480 and HT29 cells were separately inoculated in DMEM medium (30–2007, ATCC) containing 10% fetal bovine serum and incubated in a D180-P cell incubator (RWD, China) containing 5% CO2 at 37 °C. ALO (DK0052, CAS NO: 56293-29-9, HPLC≥98%), the research object of this study, was provided by Chengdu DESITE Biological Company, China. ALO was dissolved in fresh DMEM to prepare 0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions, which were separately used to treat the CCD-18Co, SW480 and HT29 cells for 24 hours (h). CCD-18Co and CRC cell lines SW480 (CCL-228™) and HT29 (HTB-38™) used in this research were provided by the American Type Culture Collection (ATCC). According to the culture requirements, CCD-18Co, SW480 and HT29 cells were separately inoculated in DMEM medium (30–2007, ATCC) containing 10% fetal bovine serum and incubated in a D180-P cell incubator (RWD, China) containing 5% CO2 at 37 °C. ALO (DK0052, CAS NO: 56293-29-9, HPLC≥98%), the research object of this study, was provided by Chengdu DESITE Biological Company, China. ALO was dissolved in fresh DMEM to prepare 0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions, which were separately used to treat the CCD-18Co, SW480 and HT29 cells for 24 hours (h). MTT Assay One hundred µL of SW480 or HT29 cell suspension (1×104 cells/mL) was transferred to each well of a 96-well plate. Of note, three holes were added to the same treatment group. After treatment with the ALO solutions for 24 h, SW480 or HT29 cells were added with MTT reagent (10 µL/well, PB180519, Procell, USA), which could react with mitochondria in living cells. After 4 h of binding, a HBS-1096A enzyme label analyzer (DeTie, China) was used to detect the absorbance of SW480 or HT29 cells in the mixed solutions at 490 nm, and then cell viability was calculated based on the absorbance. One hundred µL of SW480 or HT29 cell suspension (1×104 cells/mL) was transferred to each well of a 96-well plate. Of note, three holes were added to the same treatment group. After treatment with the ALO solutions for 24 h, SW480 or HT29 cells were added with MTT reagent (10 µL/well, PB180519, Procell, USA), which could react with mitochondria in living cells. 
After 4 h of binding, a HBS-1096A enzyme label analyzer (DeTie, China) was used to detect the absorbance of SW480 or HT29 cells in the mixed solutions at 490 nm, and then cell viability was calculated based on the absorbance. EdU (5-Ethynyl-2ʹ-Deoxyuridine) Cell Proliferation Experiment SW480 or HT29 cells in logarithmic growth phase were seeded in 96-well plates at 1×104 cells/well. EdU reagent (ST067) purchased from Beyotime was diluted to 50 μmol/L with fresh DMEM medium. The EdU dilution was then used to incubate the cells at 100 μL/well for 2 h. After removing the culture medium, the SW480 or HT29 cells were fixed with methanol at room temperature for 15 minutes (min). Positive fluorescence expression (red) in the collected SW480 or HT29 cells were observed, photographed and recorded under a Leica DMi8 inverted fluorescence microscope (magnification 100 ×). Then DAPI reagent was used to stain the nuclei, and the stains were observed under a microscope. SW480 or HT29 cells in logarithmic growth phase were seeded in 96-well plates at 1×104 cells/well. EdU reagent (ST067) purchased from Beyotime was diluted to 50 μmol/L with fresh DMEM medium. The EdU dilution was then used to incubate the cells at 100 μL/well for 2 h. After removing the culture medium, the SW480 or HT29 cells were fixed with methanol at room temperature for 15 minutes (min). Positive fluorescence expression (red) in the collected SW480 or HT29 cells were observed, photographed and recorded under a Leica DMi8 inverted fluorescence microscope (magnification 100 ×). Then DAPI reagent was used to stain the nuclei, and the stains were observed under a microscope. Colony Formation Assay Two hundred SW480 or HT29 cells were transferred to petri dishes containing complete medium (10 mL). In order to distribute the cells evenly, the petri dishes were rotated slowly for 1 min after cell inoculation. Next, the culture dishes containing cells were placed in an incubator for routine culture. During the culture, the medium was changed every 2 days (d). After 14 d, the SW480 or HT29 cells were soaked with Giemsa dye for 20 min and then the colony formation of the cells was observed under a microscope. Two hundred SW480 or HT29 cells were transferred to petri dishes containing complete medium (10 mL). In order to distribute the cells evenly, the petri dishes were rotated slowly for 1 min after cell inoculation. Next, the culture dishes containing cells were placed in an incubator for routine culture. During the culture, the medium was changed every 2 days (d). After 14 d, the SW480 or HT29 cells were soaked with Giemsa dye for 20 min and then the colony formation of the cells was observed under a microscope. Apoptosis Experiment An Annexin V-FITC Apoptosis Detection Kit (CA1020, Solarbio, China) and a flow cytometer (CytoFLEX, BECKMAN COULTER, USA) were used to detect changes in the apoptosis of SW480 or HT29 cells. In brief, SW480 or HT29 cells were prepared into 1×106 cells/mL cell suspension with 1 mL 1× Binding Buffer. One hundred μL of SW480 or HT29 cells and 5 μL of Annexin V-FITC reagent were added to a centrifuge tube and mixed for 10 min in the dark at room temperature. Next, 5 μL of PI reagent was added to the centrifuge tube and incubated in the dark at room temperature for 5 min. Afterwards, PBS was added to adjust the mixture to a final volume of 500 μL. After half an hour, the cells in the centrifuge tube were transferred to the detection instrument to calculate the number of apoptotic cells. 
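The MTT assay described earlier in the methods derives cell viability from absorbance at 490 nm, but the exact normalization is not spelled out. The short Python sketch below shows one common convention (treated wells normalized to untreated controls after blank subtraction); the function name, blank handling and OD values are illustrative assumptions, not taken from the study.

# Illustrative viability calculation for the MTT assay described above.
# Assumed convention: viability (%) = (mean OD490 of treated wells - blank) /
#                                     (mean OD490 of control wells - blank) * 100
from statistics import mean

def viability_percent(treated_od, control_od, blank_od=0.05):
    """Percent viability of ALO-treated wells relative to untreated controls."""
    treated = mean(treated_od) - blank_od
    control = mean(control_od) - blank_od
    return 100.0 * treated / control

# Made-up triplicate OD490 readings; a value near 50% matches what the study
# reports for 0.8 mmol/L ALO.
print(round(viability_percent([0.45, 0.47, 0.44], [0.88, 0.91, 0.86], 0.05), 1))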
An Annexin V-FITC Apoptosis Detection Kit (CA1020, Solarbio, China) and a flow cytometer (CytoFLEX, BECKMAN COULTER, USA) were used to detect changes in the apoptosis of SW480 or HT29 cells. In brief, SW480 or HT29 cells were prepared into 1×106 cells/mL cell suspension with 1 mL 1× Binding Buffer. One hundred μL of SW480 or HT29 cells and 5 μL of Annexin V-FITC reagent were added to a centrifuge tube and mixed for 10 min in the dark at room temperature. Next, 5 μL of PI reagent was added to the centrifuge tube and incubated in the dark at room temperature for 5 min. Afterwards, PBS was added to adjust the mixture to a final volume of 500 μL. After half an hour, the cells in the centrifuge tube were transferred to the detection instrument to calculate the number of apoptotic cells. Bioinformatics Analysis and Target Gene Binding Verification Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve and analyze information of differential expression of miR-296-5p in Colon adenocarcinoma (COAD) patients and healthy volunteers. TargetScan (http://www.targetscan.org) was used to predict the circRNA that contained a binding sequence for miR-296-5p and the downstream target genes of miR-296-5p. The binding sequences predicted by TargetScan were designed by COBIOER (China) and used to construct the following reporter plasmids for dual luciferase experiments: NOP2/Sun RNA methyltransferase 2 wild type (circNSUN2-WT), circNSUN2-mutant type (MUT), Signal Transducer and Activator of Transcription 3 (STAT3)-WT and STAT3-MUT. The reporter plasmids were separately co-transfected with miR-296-5p mimic or mimic control into SW480 and HT29 cells using liposome for 48 h. Luciferase activity in the transfected SW480 and HT29 cells was detected using Dual-Luciferase® Reporter Assay System (E1960, Promega, USA) with a GloMax 20/20 luminometer (Promega, USA). Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve and analyze information of differential expression of miR-296-5p in Colon adenocarcinoma (COAD) patients and healthy volunteers. TargetScan (http://www.targetscan.org) was used to predict the circRNA that contained a binding sequence for miR-296-5p and the downstream target genes of miR-296-5p. The binding sequences predicted by TargetScan were designed by COBIOER (China) and used to construct the following reporter plasmids for dual luciferase experiments: NOP2/Sun RNA methyltransferase 2 wild type (circNSUN2-WT), circNSUN2-mutant type (MUT), Signal Transducer and Activator of Transcription 3 (STAT3)-WT and STAT3-MUT. The reporter plasmids were separately co-transfected with miR-296-5p mimic or mimic control into SW480 and HT29 cells using liposome for 48 h. Luciferase activity in the transfected SW480 and HT29 cells was detected using Dual-Luciferase® Reporter Assay System (E1960, Promega, USA) with a GloMax 20/20 luminometer (Promega, USA). Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR) qRT-PCR is a reliable method for rapid detection of gene mRNA levels. Briefly, total RNA of SW480 and HT29 cells was fully isolated by the conventional RNA extraction reagent TRIzol (15,596,018, Invitrogen, USA), and subsequently reverse transcribed into cDNA using the RNA reverse transcription reagent (RR047A) developed by TaKaRa (Japan). 
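For the dual-luciferase reporter assay described above, relative luciferase activity is usually computed as the firefly signal normalized to a co-transfected Renilla signal and then scaled to the mimic-control (MC) group. The Renilla control is implied by the Dual-Luciferase Reporter Assay System but is not named explicitly in the methods, so the sketch below is an assumption; all counts and names are invented for illustration.

# Hypothetical normalization for the dual-luciferase assay: firefly/Renilla,
# then relative to the mimic-control group. Counts below are invented.
def firefly_renilla_ratio(firefly, renilla):
    return firefly / renilla

mc_ratio = firefly_renilla_ratio(12000, 8000)    # circNSUN2-WT + mimic control
mimic_ratio = firefly_renilla_ratio(5200, 8100)  # circNSUN2-WT + miR-296-5p mimic

relative_activity = mimic_ratio / mc_ratio
print(round(relative_activity, 2))  # < 1 indicates reduced luciferase activity,
                                    # consistent with circNSUN2 binding miR-296-5p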
The qRT-PCR reaction system which consisted of cDNA, gene primers (Sangon synthesis), SYBR® Green (S4438-20RXN, Sigma-Aldrich, Germany) and DEPC water was added to a 96-well plate and put into the detection instrument (Bio-Rad thermal cycler T100). The reaction conditions were set as follows: pre-denaturation at 95 °C for 10 min, denaturation at 95 °C for 15 seconds (s), and annealing at 58 °C for 1 min, for a total of 40 cycles. According to Anita Ciesielska’s report,19 gene mRNA levels were quantified by the 2−ΔΔCT method. GAPDH and U6 were internal references in the experiment. The specific sequences of synthetic primers were shown in Table 1.Table 1Primers for qRT-PCRGeneForward Primer (5ʹ-3ʹ)Reverse Primer (5ʹ-3ʹ)miR-296-5pCCTGTGTCGTATCCAGTGCAAGTCGTATCCAGTGCGTGTCGcircDDX17TGCCAACCACAACATCCTCCACGCTCCCCAGGATTACCAAATcircHIPK3TATGTTGGTGGATCCTGTTCGGCATGGTGGGTAGACCAAGACTTGTGAcircZFRAACCACCACAGATTCACTATAACCACCACAGATTCACTATcircRNA_100290GTCATTCCCTCTTTAATGGTGCAGAACTTCCGCTCTAACATACHas_circ_0026344CTCAGCCTCTAGCATAAGCTCAGGCAAGAGAATGATTTGAACcircVAPATGGATTCCAAATTGAGATGCGTATTCACTTTTCTATCCGATGGATTTCGCHsa_circ_0009361AGAACCAGATTCGAGACGCCGTGCTCTTCAATGCCACCTTCcircNSUN2CTTGAGAAAATCGCCACACTTGTTGAGGAGCAGTGGTGGcircACAP2TCAGAGCATCTGCCCAAAGTTAAATGAACCCCAAGGCTCCGcircITGA7GTGTGCACAGGTCCTTCCAATGGAAGTTCTGTGAGGGACGcircEIF4G3CCTACCCCATCCCCTTATTCACCGTGCTGTAGACTGCTGAGSTAT3CCCCATACCTGAAGACCAAGGGACTCAAACTGCCCTCCTU6CTCGCTTCGGCAGCACAAACGCTTCACGAATTTGCGTGAPDHTGTGGGCATCAATGGATTTGGACACCATGTATTCCGGGTCAAT Primers for qRT-PCR qRT-PCR is a reliable method for rapid detection of gene mRNA levels. Briefly, total RNA of SW480 and HT29 cells was fully isolated by the conventional RNA extraction reagent TRIzol (15,596,018, Invitrogen, USA), and subsequently reverse transcribed into cDNA using the RNA reverse transcription reagent (RR047A) developed by TaKaRa (Japan). The qRT-PCR reaction system which consisted of cDNA, gene primers (Sangon synthesis), SYBR® Green (S4438-20RXN, Sigma-Aldrich, Germany) and DEPC water was added to a 96-well plate and put into the detection instrument (Bio-Rad thermal cycler T100). The reaction conditions were set as follows: pre-denaturation at 95 °C for 10 min, denaturation at 95 °C for 15 seconds (s), and annealing at 58 °C for 1 min, for a total of 40 cycles. According to Anita Ciesielska’s report,19 gene mRNA levels were quantified by the 2−ΔΔCT method. GAPDH and U6 were internal references in the experiment. The specific sequences of synthetic primers were shown in Table 1.Table 1Primers for qRT-PCRGeneForward Primer (5ʹ-3ʹ)Reverse Primer (5ʹ-3ʹ)miR-296-5pCCTGTGTCGTATCCAGTGCAAGTCGTATCCAGTGCGTGTCGcircDDX17TGCCAACCACAACATCCTCCACGCTCCCCAGGATTACCAAATcircHIPK3TATGTTGGTGGATCCTGTTCGGCATGGTGGGTAGACCAAGACTTGTGAcircZFRAACCACCACAGATTCACTATAACCACCACAGATTCACTATcircRNA_100290GTCATTCCCTCTTTAATGGTGCAGAACTTCCGCTCTAACATACHas_circ_0026344CTCAGCCTCTAGCATAAGCTCAGGCAAGAGAATGATTTGAACcircVAPATGGATTCCAAATTGAGATGCGTATTCACTTTTCTATCCGATGGATTTCGCHsa_circ_0009361AGAACCAGATTCGAGACGCCGTGCTCTTCAATGCCACCTTCcircNSUN2CTTGAGAAAATCGCCACACTTGTTGAGGAGCAGTGGTGGcircACAP2TCAGAGCATCTGCCCAAAGTTAAATGAACCCCAAGGCTCCGcircITGA7GTGTGCACAGGTCCTTCCAATGGAAGTTCTGTGAGGGACGcircEIF4G3CCTACCCCATCCCCTTATTCACCGTGCTGTAGACTGCTGAGSTAT3CCCCATACCTGAAGACCAAGGGACTCAAACTGCCCTCCTU6CTCGCTTCGGCAGCACAAACGCTTCACGAATTTGCGTGAPDHTGTGGGCATCAATGGATTTGGACACCATGTATTCCGGGTCAAT Primers for qRT-PCR Cell Transfection Exogenous up-regulation or knockdown of test genes is a common method to observe their regulatory effects on cells or tissues. 
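The qRT-PCR section above quantifies gene expression with the 2^-ΔΔCT method against GAPDH or U6. As a minimal sketch of that calculation (all Ct values below are invented for illustration, not measured data):

# Minimal 2^(-deltadeltaCT) sketch for the qRT-PCR quantification described above.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene in treated vs control cells."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Example: circNSUN2 normalized to GAPDH in ALO-treated vs untreated SW480 cells.
print(round(fold_change(26.8, 17.1, 24.9, 17.0), 2))  # < 1, ie lower circNSUN2 after ALO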
Cell Transfection

Exogenous up-regulation or knockdown of test genes is a common method to observe their regulatory effects on cells or tissues. Liposome transfection with Lipofectamine 3000 (L3000008, ThermoFisher Scientific, USA) is the most common and convenient method for exogenous interference. Therefore, we commissioned Guangzhou GENESEED Biological Company to synthesize a circNSUN2 overexpression plasmid, a circNSUN2 silencing plasmid (sh-circNSUN2), a STAT3 overexpression plasmid and their respective negative controls. MiR-296-5p mimic (miR10000690-1-5) and mimic control (miR1N0000001-1-5), as well as miR-296-5p inhibitor (miR20000690-1-5) and inhibitor control (miR2N0000001-1-5), were purchased directly from Guangzhou RiboBio Company. The plasmids were transfected into SW480 and HT29 cells according to the grouping using Lipofectamine 3000. After 48 h, transfection efficiency was verified by qRT-PCR.

Western Blot

RIPA lysate (tissue/cell, R0010) produced by Solarbio (China) was used to extract protein from SW480 and HT29 cells, and protein concentration was determined with a BCA protein concentration determination reagent (PC0020, Solarbio, China). The protein was then denatured at high temperature so that it remained in a relatively stable state. After being separated by SDS-PAGE, the protein was transferred to a PVDF membrane. The membrane was blocked with BSA blocking buffer (SW3015, Solarbio) for 2 h at room temperature. Next, the membrane was incubated overnight at 4 °C with Abcam (USA) primary antibodies directed against the corresponding antigens on the membrane (anti-STAT3, ab68153, 88 kDa; Bax, ab32503, 21 kDa; Bcl-2, ab59348, 26 kDa), with GAPDH (ab8245, 36 kDa) as an internal reference. To identify the antigen-antibody complex, an HRP-labeled secondary antibody (Goat Anti-Rabbit antibody, 1:10,000, ab6721; Goat Anti-Mouse antibody, 1:10,000, ab205719) was used to treat the PVDF membrane for 1.5 h at room temperature. The chemiluminescent signal of the complex was enhanced using the ECL Western Blotting Substrate (PE0010, Solarbio) and detected using the Gel Doc™ XR+ imaging system (BIO-RAD, USA). Band gray values were quantified with Image Lab software to determine the final protein levels.
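Densitometric quantification of this kind usually reduces each lane to a target/GAPDH gray-value ratio and then expresses that ratio relative to the control lane. The sketch below only illustrates that arithmetic with hypothetical gray values; it is not the Image Lab workflow itself.

```python
# Minimal sketch of Western blot densitometry normalization (hypothetical gray values).
# Each lane's target band is divided by its GAPDH band, then expressed relative to control.

def relative_protein_levels(target_bands, gapdh_bands, control_index=0):
    ratios = [t / g for t, g in zip(target_bands, gapdh_bands)]
    return [r / ratios[control_index] for r in ratios]

# Lanes: mimic control, miR-296-5p mimic, mimic + STAT3 overexpression (made-up values)
stat3_bands = [15200, 6100, 14800]
gapdh_bands = [20100, 19800, 20500]
print(relative_protein_levels(stat3_bands, gapdh_bands))  # ≈ [1.0, 0.41, 0.95]
```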
Cell Purchase and ALO Processing

The normal colon cell line CCD-18Co and the CRC cell lines SW480 (CCL-228™) and HT29 (HTB-38™) used in this research were provided by the American Type Culture Collection (ATCC). According to the culture requirements, CCD-18Co, SW480 and HT29 cells were separately inoculated in DMEM medium (30–2007, ATCC) containing 10% fetal bovine serum and incubated in a D180-P cell incubator (RWD, China) containing 5% CO2 at 37 °C. ALO (DK0052, CAS NO: 56293-29-9, HPLC≥98%), the research object of this study, was provided by Chengdu DESITE Biological Company, China. ALO was dissolved in fresh DMEM to prepare 0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions, which were separately used to treat the CCD-18Co, SW480 and HT29 cells for 24 hours (h).

MTT Assay

One hundred µL of SW480 or HT29 cell suspension (1×10⁴ cells/mL) was transferred to each well of a 96-well plate, with three replicate wells for each treatment group. After treatment with the ALO solutions for 24 h, MTT reagent (10 µL/well, PB180519, Procell, USA), which reacts with the mitochondria of living cells, was added to the SW480 or HT29 cells. After 4 h of incubation, an HBS-1096A microplate reader (DeTie, China) was used to measure the absorbance of the SW480 or HT29 cells in the mixed solutions at 490 nm, and cell viability was then calculated from the absorbance.
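The text does not spell out how viability is derived from the 490 nm absorbance; the usual calculation expresses each treated group's blank-corrected mean absorbance as a percentage of the untreated control. A minimal sketch of that standard formula with placeholder OD values:

```python
# Minimal sketch of an MTT viability calculation (placeholder OD490 readings, not study data).
# Viability (%) = (mean OD_treated - OD_blank) / (mean OD_control - OD_blank) * 100.

def viability_percent(od_treated, od_control, od_blank=0.0):
    mean = lambda values: sum(values) / len(values)
    return (mean(od_treated) - od_blank) / (mean(od_control) - od_blank) * 100

control_wells = [0.82, 0.85, 0.80]   # untreated cells, triplicate wells
alo_wells     = [0.44, 0.46, 0.43]   # ALO-treated cells, triplicate wells
print(viability_percent(alo_wells, control_wells, od_blank=0.05))  # ≈ 50%
```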
EdU (5-Ethynyl-2ʹ-Deoxyuridine) Cell Proliferation Experiment

SW480 or HT29 cells in logarithmic growth phase were seeded in 96-well plates at 1×10⁴ cells/well. EdU reagent (ST067) purchased from Beyotime was diluted to 50 μmol/L with fresh DMEM medium. The EdU dilution was then used to incubate the cells at 100 μL/well for 2 h. After removing the culture medium, the SW480 or HT29 cells were fixed with methanol at room temperature for 15 minutes (min). Red fluorescence (EdU-positive cells) was observed, photographed and recorded under a Leica DMi8 inverted fluorescence microscope (magnification, 100×). The nuclei were then stained with DAPI reagent and examined under the microscope.

Colony Formation Assay

Two hundred SW480 or HT29 cells were transferred to petri dishes containing complete medium (10 mL). In order to distribute the cells evenly, the petri dishes were rotated slowly for 1 min after cell inoculation. Next, the culture dishes containing cells were placed in an incubator for routine culture, and the medium was changed every 2 days (d). After 14 d, the SW480 or HT29 cells were stained with Giemsa dye for 20 min, and colony formation was then assessed under a microscope.

Statistical Analysis

SPSS 20.0 software (IBM, USA) was employed for statistical analysis of the data obtained in the experiments. Differences between two groups were compared by independent sample t-test, and those between multiple groups were compared by one-way ANOVA followed by Tukey's test. A p value below 0.05 was considered statistically significant.
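For reference, the same two-group and multi-group comparisons can be reproduced outside SPSS; the snippet below shows an equivalent independent-sample t-test and one-way ANOVA with Tukey's post hoc test in Python (SciPy/statsmodels), using placeholder viability values rather than the study's data.

```python
# Illustrative re-implementation of the statistical tests described above (the study used
# SPSS 20.0). All numbers are placeholders, not measured values.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = [100.0, 98.5, 101.2]
alo_04  = [71.3, 69.8, 73.0]
alo_08  = [52.1, 50.4, 49.8]

# Two groups: independent-sample t-test
t_stat, p_two_groups = stats.ttest_ind(control, alo_08)
print(f"t = {t_stat:.2f}, p = {p_two_groups:.4f}")

# Multiple groups: one-way ANOVA followed by Tukey's post hoc test
f_stat, p_anova = stats.f_oneway(control, alo_04, alo_08)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

values = np.array(control + alo_04 + alo_08)
groups = ["Control"] * 3 + ["ALO 0.4 mmol/L"] * 3 + ["ALO 0.8 mmol/L"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # p < 0.05 considered significant
```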
Results

ALO at a Concentration Gradient Inhibited Proliferation Yet Promoted Apoptosis in CRC Cells

In this study, the toxicity of ALO was first tested in normal colon cells (CCD-18Co); ALO showed no significant toxicity to CCD-18Co cells (Figure 1A). In order to study the effects of ALO on the growth and basic physiological functions of CRC cells, we used different concentrations of ALO to treat SW480 and HT29 cells. The results showed that 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions all inhibited the viability of SW480 and HT29 cells (p<0.05, Figure 1B and C). Moreover, the viability of SW480 and HT29 cells was close to 50% under the treatment of 0.8 mmol/L ALO solution, but was reduced to less than 50% when the ALO concentration was increased to 1 mmol/L (Figure 1B and C). In the following EdU cell proliferation experiment, the red fluorescence expression was reduced in SW480 and HT29 cells under the treatment of 0.8 mmol/L ALO (Figure 1D and E). Consistently, the clone formation experiment also showed that different concentrations of ALO (0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L) reduced the number of cell clones and inhibited the proliferation of SW480 and HT29 cells (p<0.05, Figure 1F–H). At the same time, increasing the concentration of the ALO solution accelerated the death of CRC cells: as shown in Figure 1I–K, the number of apoptotic SW480 and HT29 cells increased significantly after treatment with different concentrations of ALO (p<0.01).

Figure 1. ALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs Control. Abbreviations: ALO, aloperine; CRC, colorectal cancer.
The Abnormally Low Expression of miR-296-5p in Colon Cancer Could Be Up-Regulated by ALO

Data from the Starbase database showed that the expression of miR-296-5p in colon cancer patients was significantly lower than that in healthy people (p=2.5e-6, Figure 2A). In this study, however, after treatment with different concentrations of ALO for 24 h, miR-296-5p expression in SW480 and HT29 cells was significantly increased (p<0.01, Figure 2B and C), suggesting that ALO can stimulate the up-regulation of miR-296-5p.

Figure 2. The abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control. Abbreviation: qRT-PCR, quantitative real-time polymerase chain reaction.
Among the Differentially Expressed circRNAs in CRC, Only circNSUN2 Not Only Had a Target Site for miR-296-5p, but Also Could Be Regulated by ALO

Through literature review, we screened 12 circRNAs that were reported to be associated with CRC: circDDX17, circHIPK3, circZFR, circRNA_100290, Has_circ_0026344, circVAPA, Hsa_circ_0009361, circNSUN2, circACAP2, circACVRL1, circITGA7 and circEIF4G3. We first detected the regulatory effect of ALO on these differentially expressed circRNAs by qRT-PCR to perform a preliminary screening (Figure 3A). The results showed that the expressions of circDDX17, Has_circ_0026344, circVAPA, circNSUN2, circACAP2, circACVRL1 and circEIF4G3 were regulated by 0.8 mmol/L ALO (p<0.001, Figure 3A). Through subsequent screening of target genes, we found that only circNSUN2 and miR-296-5p had targeted binding sites (Figure 3B). After further verification, it was confirmed that co-transfection of circNSUN2-WT and miR-296-5p mimic did reduce the luciferase activity in SW480 and HT29 cells (p<0.001, Figure 3C and D). Therefore, we chose circNSUN2 as the next research object.

Figure 3. Among the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC. Abbreviation: MC, miR-296-5p mimic control.

Overexpression of circNSUN2 Partially Offset the Suppression of ALO on the Biological Behavior of Cancer Cells

In SW480 and HT29 cells, ALO suppressed the expression of circNSUN2, whereas transfection of the circNSUN2 overexpression plasmid significantly increased the expression of circNSUN2 (p<0.001, Figure 4A and B). The following basic cell function experiments showed that the upregulation of circNSUN2 partially offset the inhibitory effect of ALO on CRC cell viability and proliferation, as well as its promotion of apoptosis (p<0.001, Figure 4C–L).

Figure 4. Overexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. (C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. (E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC. Abbreviations: CircNSUN2, NOP2/Sun RNA methyltransferase 2; NC, negative control.
Sh-circNSUN2 Inhibited the Malignant Development of CRC Cells, Which Was Neutralized by miR-296-5p Inhibitor

We also tested the mutual regulation of circNSUN2 and miR-296-5p in SW480 and HT29 cells. Sh-circNSUN2 down-regulated the expression of circNSUN2, inhibited the proliferation of SW480 and HT29 cells, and promoted both their apoptosis and the expression of miR-296-5p (p<0.001, Figure 5A–J). However, the inhibitory effect of sh-circNSUN2 on the malignant development of the two cancer cell lines was neutralized by miR-296-5p inhibitor (p<0.001, Figure 5A–J).

Figure 5. Sh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. (H–J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I. Abbreviations: Sh-circNSUN2, silent circNSUN2; sh-NC, silent negative control; IC, miR-296-5p inhibitor control; I, miR-296-5p inhibitor.
MiR-296-5p Bound to STAT3 and Upregulation of miR-296-5p Inhibited the Expression of STAT3 in CRC Cells

We screened the target genes of miR-296-5p and found that miR-296-5p and STAT3 had targeted binding sites (Figure 6A); this binding relation was further confirmed by a dual-luciferase verification test (p<0.001, Figure 6B and C). Next, we used qRT-PCR and Western blot to analyze the mRNA and protein levels of miR-296-5p and STAT3. It was found that miR-296-5p mimic up-regulated miR-296-5p expression while preventing the activation of STAT3 (p<0.001, Figure 7A–G). However, the overexpression of STAT3 reversed the regulation of miR-296-5p mimic on the two genes (p<0.001, Figure 7A–G).

Figure 6. The targeted binding of miR-296-5p to STAT3. (A) TargetScan was used to predict the binding sequence of miR-296-5p and STAT3. (B and C) Dual luciferase experiment confirmed that miR-296-5p can bind to STAT3 in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. +++p<0.001 vs MC.

Figure 7. Up-regulation of miR-296-5p inhibited the expression of STAT3 in CRC cells. (A and B) The qRT-PCR experiment showed that miR-296-5p mimic up-regulated the mRNA level of miR-296-5p. U6 played the role of internal reference. (C and D) The qRT-PCR experiment showed that miR-296-5p mimic suppressed the mRNA level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. (E–G) The Western blot experiment showed that miR-296-5p mimic inhibited the protein level of STAT3, while over-expression of STAT3 reversed this effect. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. ***p<0.001 vs MC+NC; ^^^p<0.001 vs M+NC; ###p<0.001 vs MC+STAT3. Abbreviation: STAT3, signal transducer and activator of transcription 3.

MiR-296-5p Mimic Inhibited Proliferation and Promoted Apoptosis in CRC Cells by Regulating Apoptosis-Related Genes, Which Was Partially Reversed by Overexpressed STAT3

The following cell experiments showed that, by activating Bax and blocking the protein activity of Bcl-2, miR-296-5p mimic inhibited proliferation yet accelerated apoptosis in cancer cells (p<0.001, Figure 8A–I). However, up-regulation of STAT3 produced a completely opposite regulatory effect (p<0.001, Figure 8A–I). More importantly, overexpression of STAT3 neutralized the regulation of miR-296-5p mimic on CRC cells (p<0.001, Figure 8A–I).

Figure 8. MiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3. (G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3.
(D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3. (G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3. MiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3. (G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3. ALO at a Concentration Gradient Inhibited Proliferation Yet Promoted Apoptosis in CRC Cells: In this study, the toxicity of ALO in normal cells (CCD-18Co) was detected, ALO showed no significant toxicity to CCD-18Co cells (Figure 1A). In order to study the effects of ALO on the growth and basic physiological functions of CRC cells, we used different concentrations of ALO to treat SW480 and HT29 cells. The results showed that 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L ALO solutions all inhibited the viability of SW480 and HT29 cells (p<0.05, Figure 1B and C). Moreover, the viability of SW480 and HT29 cells was close to 50% under the treatment of 0.8 mmol/L ALO solution, but was reduced to less than 50% when the ALO concentration was increased to 1 mmol/L (Figure 1B and C). In the following EdU cell proliferation experiment, the red fluorescence expression was reduced in SW480 and HT29 cells under the treatment of 0.8 mmol/L ALO (Figure 1D and E). Consistently, the clone formation experiment also showed that different concentrations of ALO (0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L) reduced the number of cell clones and inhibited the proliferation of SW480 and HT29 cells (p<0.05, Figure 1F–H). At the same time, the increase in the concentration of the ALO solution accelerated the death of CRC cells. As shown in Figure 1I–K, the number of apoptotic SW480 and HT29 cells increased significantly after treatment with different concentrations of ALO (p<0.01, Figure 1I–K).Figure 1ALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. 
(F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs Control.Abbreviations: ALO, aloperine; CRC, colorectal cancer. ALO at a concentration gradient inhibited proliferation yet promoted apoptosis in CRC cells. (A) The effect of ALO on the normal colonic tissue cells (CCD-18Co) was detected by MTT experiments. (B and C) MTT experiments showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) inhibited the activity of SW480 and HT29 cells. (D and E) EdU cell proliferation experiments showed that 0.8 mmol/L ALO inhibited the proliferation of SW480 and HT29 cells. The magnification was 100×. (F–H) The clone formation experiment showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) inhibited the proliferation of SW480 and HT29 cells. (I–K) Flow cytometry experiments showed that ALO at a concentration gradient (0.2 mmol/L, 0.4 mmol/L and 0.8 mmol/L) promoted the apoptosis of SW480 and HT29 cells. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs Control. The Abnormally Low Expression of miR-296-5p in Colon Cancer Could Be Up-Regulated by ALO: Data from the Starbase database showed that the expression of miR-296-5p in colon cancer patients was significantly lower than that in healthy people (p=2.5e-6, Figure 2A). In this study, however, after treatment with different concentrations of ALO for 24 h, miR-296-5p activity in SW480 and HT29 cells was significantly increased (p<0.01, Figure 2B and C), suggesting that ALO can stimulate the up-regulation of miR-296-5p.Figure 2The abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control.Abbreviation: qRT-PCR, quantitative real-time polymerase chain reaction. The abnormally low expression of miR-296-5p in colon cancer could be upregulated by ALO. (A) Starbase (http://starbase.sysu.edu.cn/index.php) was used to retrieve the information of differential expression of miR-296-5p in colon adenocarcinoma (COAD, n=450) patients and healthy volunteers (n=8) in vivo. P=2.5e-6. (B and C) The qRT-PCR experiment showed that ALO at a concentration gradient (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) up-regulated miR-296-5p expression in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. **p<0.01, ***p<0.001 vs Control. 
Among the Differentially Expressed circRNAs in CRC, Only circNSUN2 Not Only Had a Target Site for miR-296-5p, but Also Could Be Regulated by ALO: Through literature review, we screened 12 circRNAs that were reported to be associated with CRC: circDDX17, circHIPK3, circZFR, circRNA_100290, Has_circ_0026344, circVAPA, Hsa_circ_0009361, circNSUN2, circACAP2, circACVRL1, circITGA7 and circEIF4G3. We first detected the regulatory effect of ALO on these differentially expressed circRNAs by qRT-PCR to perform preliminary screening (Figure 3A). The results showed that the expressions of circDDX17, Has_circ_0026344, circVAPA, circNSUN2, circACAP2, circACVRL1 and circEIF4G3 were regulated by 0.8 mmol/L ALO (p<0.001, Figure 3A). Through subsequent screening of target genes, we found that only circNSUN2 and miR-296-5p have targeted binding sites (Figure 3B). After further verification, it was confirmed that circNSUN2-WT and miR-296-5p mimic did reduce the luciferase activity in SW480 and HT29 cells (p<0.001, Figure 3C and D). Therefore, we chose circNSUN2 as the next research object.Figure 3Among the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC.Abbreviation: MC, miR-296-5p mimic control. Among the differentially expressed circRNAs in CRC, only circNSUN2 not only had a target site for miR-296-5p, but also could be regulated by ALO. (A) qRT-PCR detected the regulation of 0.8 mmol/L ALO on the differentially expressed circRNAs in CRC. GAPDH played the role of internal reference. (B) The TargetScan database (http://www.targetscan.org) was used to predict the target sites of circNSUN2 and miR-296-5p. (C and D) The dual luciferase experiment confirmed that circNSUN2 can bind to miR-296-5p in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; +++p<0.001 vs MC. Overexpression of circNSUN2 Partially Offset the Suppression of ALO on the Biological Behavior of Cancer Cells: In SW480 and HT29 cells, ALO suppressed the mRNA level of circNSUN2, whereas transfection of circNSUN2 overexpression significantly increased the expression of circNSUN2 (p<0.001, Figure 4A and B). The following basic cell function experiments showed that the upregulation of circNSUN2 partially offset the inhibitory effect of ALO on CRC cell viability, proliferation and apoptosis (p<0.001, Figure 4C–L).Figure 4Overexpression of circNSUN2 partially offset the suppression of ALO on the biological behavior of cancer cells. (A and B) The qRT-PCR experiment showed that ALO (0.8 mmol/L) inhibited the expression of circNSUN2 in SW480 and HT29 cells. GAPDH played the role of internal reference. (C and D) The MTT experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on cell viability in SW480 and HT29 cells. (E and F) EdU cell proliferation experiments showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. 
The magnification was 100×. (G–I) The clone formation experiment showed that overexpression of circNSUN2 partially offset the inhibition of ALO on proliferation in SW480 and HT29 cells. (J–L) Flow cytometry experiments showed that overexpression of circNSUN2 partially offset the promotion of ALO on apoptosis in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. ***p<0.001 vs Control; ^p<0.05, ^^p<0.01, ^^^p<0.001 vs ALO+NC. Abbreviations: CircNSUN2, NOP2/Sun RNA methyltransferase 2; NC, negative control. Sh-circNSUN2 Inhibited the Malignant Development of CRC Cells, Which Was Neutralized by miR-296-5p Inhibitor: We also tested the mutual regulation of circNSUN2 and miR-296-5p in SW480 and HT29 cells. Sh-circNSUN2 down-regulated the expression of circNSUN2, inhibited the proliferation of SW480 and HT29 cells, and promoted their apoptosis as well as the expression of miR-296-5p (p<0.001, Figure 5A–J). However, the inhibitory effect of sh-circNSUN2 on the malignant development of the two cancer cell lines was neutralized by miR-296-5p inhibitor (p<0.001, Figure 5A–J). Figure 5 Sh-circNSUN2 inhibited the malignant development of CRC cells, which was neutralized by miR-296-5p inhibitor. (A and B) The qRT-PCR experiment showed that miR-296-5p inhibitor neutralized the inhibitory effect of sh-circNSUN2 on circNSUN2 expression. GAPDH played the role of internal reference. (C and D) The qRT-PCR experiment showed that sh-circNSUN2 up-regulated the expression level of miR-296-5p. U6 played the role of internal reference. (E–G) The clone formation experiment showed that the effect of sh-circNSUN2 on the proliferation of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. (H–J) Flow cytometry experiments showed that the effect of sh-circNSUN2 on promoting the apoptosis of SW480 and HT29 cells was neutralized by miR-296-5p inhibitor. All experiments were repeated three times to obtain average values. ***p<0.001 vs sh-NC+IC; ^^^p<0.001 vs sh-circNSUN2+IC; ###p<0.001 vs sh-NC+I. Abbreviations: Sh-circNSUN2, silenced circNSUN2; sh-NC, silenced negative control; IC, miR-296-5p inhibitor control; I, miR-296-5p inhibitor.
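The apoptosis readouts in Figures 1, 4 and 5 come from flow cytometry. The gating scheme is not described in this section, so the sketch below assumes the common Annexin V/PI quadrant convention, where the apoptosis rate is the sum of early- and late-apoptotic events over total events; the counts are hypothetical.

# Hedged sketch: computing an apoptosis rate from Annexin V/PI quadrant counts.
# The gating scheme (Q2 = Annexin V+/PI+ late apoptotic, Q3 = Annexin V+/PI- early
# apoptotic) is a common convention and an assumption here; the article does not
# describe its gates in this section. Counts are hypothetical.

def apoptosis_rate(q1_necrotic, q2_late_apoptotic, q3_early_apoptotic, q4_viable):
    """Apoptosis rate (%) = (early + late apoptotic events) / total events * 100."""
    total = q1_necrotic + q2_late_apoptotic + q3_early_apoptotic + q4_viable
    return 100.0 * (q2_late_apoptotic + q3_early_apoptotic) / total

control = apoptosis_rate(q1_necrotic=150, q2_late_apoptotic=300, q3_early_apoptotic=450, q4_viable=9100)
treated = apoptosis_rate(q1_necrotic=200, q2_late_apoptotic=1100, q3_early_apoptotic=1700, q4_viable=7000)
print(f"Control: {control:.1f}%  ALO-treated: {treated:.1f}%")  # 7.5% vs 28.0%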
MiR-296-5p Bound to STAT3 and Upregulation of MiR-296-5p Inhibited the Expression of STAT3 in CRC Cells: We screened the target genes of miR-296-5p and found that miR-296-5p and STAT3 have targeted binding sites (Figure 6A); this binding relationship was further confirmed by a dual-luciferase verification test (p<0.001, Figure 6B and C). Next, we used qRT-PCR and Western blot to analyze the expression of miR-296-5p and the mRNA and protein levels of STAT3. It was found that miR-296-5p mimic up-regulated miR-296-5p expression while preventing the activation of STAT3 (p<0.001, Figure 7A–G). However, the overexpression of STAT3 reversed the regulation of miR-296-5p mimic on the two genes (p<0.001, Figure 7A–G). Figure 6 The targeted binding of miR-296-5p to STAT3. (A) TargetScan was used to predict the binding sequence of miR-296-5p and STAT3. (B and C) Dual luciferase experiment confirmed that miR-296-5p can bind to STAT3 in SW480 and HT29 cells. All experiments were repeated three times to obtain average values. +++p<0.001 vs MC. Figure 7 Up-regulation of miR-296-5p inhibited the expression of STAT3 in CRC cells. (A and B) The qRT-PCR experiment showed that miR-296-5p mimic up-regulated the expression level of miR-296-5p. U6 played the role of internal reference. (C and D) The qRT-PCR experiment showed that miR-296-5p mimic suppressed the mRNA level of STAT3, while overexpression of STAT3 reversed this effect. GAPDH played the role of internal reference. (E–G) The Western blot experiment showed that miR-296-5p mimic inhibited the protein level of STAT3, while over-expression of STAT3 reversed this effect. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. ***p<0.001 vs MC+NC; ^^^p<0.001 vs M+NC; ###p<0.001 vs MC+STAT3. Abbreviation: STAT3, signal transducer and activator of transcription 3.
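The binding of miR-296-5p to circNSUN2 (Figure 3C and D) and to STAT3 (Figure 6B and C) was verified with dual-luciferase reporter assays. A hedged sketch of how such assays are commonly quantified: firefly luciferase signal from the wild-type or mutant reporter is normalized to a co-transfected Renilla control and then expressed relative to the mimic-control group. The Renilla normalization and all readings below are assumptions for illustration, not data from the article.

# Hedged sketch of dual-luciferase quantification: firefly signal from a reporter
# carrying the wild-type (WT) or mutated (MUT) binding site is normalized to a
# co-transfected Renilla control, then expressed relative to the mimic-control (MC)
# group. Renilla normalization is the usual convention and an assumption here;
# all readings below are hypothetical.
import statistics

def relative_luciferase(firefly, renilla):
    """Normalized luciferase activity for one replicate well."""
    return firefly / renilla

def group_mean(wells):
    return statistics.mean(relative_luciferase(f, r) for f, r in wells)

# (firefly, Renilla) readings for three replicate wells per group
wt_mc     = [(9800, 1000), (10100, 980), (9900, 1020)]   # WT reporter + mimic control
wt_mimic  = [(4100, 1010), (3900, 990), (4300, 1000)]    # WT reporter + miR-296-5p mimic
mut_mimic = [(9700, 1000), (10050, 995), (9850, 1005)]   # MUT reporter + mimic

baseline = group_mean(wt_mc)
print(f"WT + mimic:  {group_mean(wt_mimic) / baseline:.2f} of control")   # reduced
print(f"MUT + mimic: {group_mean(mut_mimic) / baseline:.2f} of control")  # unchanged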
MiR-296-5p Mimic Inhibited Proliferation and Promoted Apoptosis in CRC Cells by Regulating Apoptosis-Related Genes, Which Was Partially Reversed by Overexpressed STAT3: The following cell experiments showed that, by activating Bax and blocking the protein activity of Bcl-2, miR-296-5p mimic inhibited proliferation and accelerated apoptosis in cancer cells (p<0.001, Figure 8A–I). In contrast, up-regulation of STAT3 produced the opposite regulatory effect (p<0.001, Figure 8A–I). More importantly, overexpression of STAT3 neutralized the regulation of miR-296-5p mimic on CRC cells (p<0.001, Figure 8A–I). Figure 8 MiR-296-5p mimic inhibited proliferation and promoted apoptosis in CRC cells by regulating apoptosis-related genes, which was partially reversed by overexpression of STAT3. (A–C) The clone formation experiment showed that the inhibitory effect of miR-296-5p mimic on the proliferation of SW480 and HT29 cells was neutralized by overexpressed STAT3. (D–F) Flow cytometry experiments showed that miR-296-5p mimic promoted the apoptosis of SW480 and HT29 cells, which was neutralized by overexpressed STAT3. (G–I) Western blot experiments showed that the regulation of apoptosis-related protein expression by miR-296-5p mimic was neutralized by overexpressed STAT3. GAPDH played the role of internal reference. All experiments were repeated three times to obtain average values. *p<0.05, **p<0.01, ***p<0.001 vs MC+NC; ^^p<0.01, ^^^p<0.001 vs M+NC; ##p<0.01, ###p<0.001 vs MC+STAT3. Discussion: CRC is a malignant lesion of the mucosal epithelium of the colon or rectum that develops under the action of various carcinogenic factors such as environment or heredity.2 Most CRC patients have adenocarcinoma, which usually develops from polyps. Tumors arising from polyps can infiltrate circumferentially along the horizontal axis of the intestinal tube, grow into the deep layers of the intestinal wall, and finally penetrate the wall and metastasize to blood vessels or lymphatic vessels.20 This is one of the important reasons for the high recurrence and metastasis rates of CRC. The ultimate goals of CRC research are to improve the current treatment of CRC, save patients’ lives and improve their quality of life. Based on the role of traditional Chinese medicine in disease prevention and treatment, this study explored the effects of ALO, an alkaloid with strong anticancer activity, on the proliferation and apoptosis of CRC cells. The results showed that after ALO treatment, the viability and proliferation of CRC cells were significantly inhibited, while the number of apoptotic cancer cells was significantly increased.
This is consistent with previous experimental results.21 To probe deeper into the upstream mechanism by which ALO alleviates CRC carcinogenesis through the miR-296-5p/STAT3 axis, we screened and verified the circRNA acting upstream of miR-296-5p. CircRNAs are a type of covalently closed circular non-coding RNA formed by back-splicing of mRNA precursors (pre-mRNA).22 In early research, circRNAs were considered useless splicing by-products; with deeper study, it has become clear that circRNAs arise from a wide range of sources and play a variety of functional roles in the growth and development of organisms, being conserved, stable, and tissue-specific.22,23 The most familiar and extensively studied function of circRNAs is their sponging effect on miRNAs.24 For example, circRNA ciRS-7, which contains more than 70 miR-7 binding sites, can competitively adsorb miR-7 through the AGO2 protein.25 Besides, increasing reports on cancer indicate that circRNAs, such as circRNA-cTFRC, circPSMC3 and circSETD3, act as sponges of miRNAs and participate in transcriptional regulation.26–28 Based on these literature reports, we screened circRNAs reported to be abnormally expressed in CRC and finally obtained 12 circRNAs. After detecting their expression in ALO-treated cancer cells and predicting their binding sequences for miR-296-5p, circNSUN2 was identified as the research object of this study. CircNSUN2, which maps to the 5p15 amplicon in CRCs, was confirmed by Chen et al to regulate cytoplasmic export and promote liver metastasis of CRC through N6-methyladenosine modification.29 Similarly, our experimental results uncovered that knockdown of circNSUN2 inhibited proliferation and accelerated apoptosis in CRC cells. More importantly, we revealed for the first time that ALO can prevent the activation of circNSUN2, and that overexpression of circNSUN2 partially offsets the inhibitory effect of ALO on CRC progression. Since circNSUN2 and miR-296-5p have targeting sequences, we verified the cellular regulatory effects of circNSUN2 and miR-296-5p/STAT3 through dual luciferase experiments and rescue experiments. The final results confirmed our conjecture that regulating the circNSUN2/miR-296-5p/STAT3 axis can reduce the proliferation rate and increase the apoptosis rate of CRC cells. At present, blocking proliferation and increasing apoptosis in cancer cells are intensively discussed mechanisms in CRC research. Tumor cells can proliferate quickly and without restraint, and thus inhibiting their proliferation can produce an anti-tumor effect.30 Apoptosis, also called programmed cell death, is an autonomous cell death process strictly controlled by multiple genes. Apoptosis is an important part of the cell life cycle, and it is also an important link in regulating the development of the body and maintaining the stability of the internal environment of organisms.31,32 Our research shows that ALO inhibits the proliferation and promotes the apoptosis of CRC cells by regulating the circNSUN2/miR-296-5p/STAT3 pathway, and ultimately prevents the tumorigenesis of CRC. There are still certain limitations in our research. Although we have confirmed the effect of ALO on inducing apoptosis and reducing proliferation in CRC cells and clarified the underlying mechanism, the interplay between the circNSUN2/miR-296-5p/STAT3 axis and other proliferation- and apoptosis-related pathways has not yet been analyzed and will be explored in future research.
In addition, the anticancer effect of ALO on CRC and its experimental concentration need to be further tested in animal models and clinical studies. Whether ALO affects drug resistance in CRC is also a direction for future research.
Background: Aloperine can regulate miR-296-5p/Signal Transducer and Activator of Transcription 3 (STAT3) pathway to inhibit the malignant development of colorectal cancer (CRC), but the regulatory mechanism is unclear. This study explored the upstream mechanism of Aloperine in reducing CRC damage from the perspective of the circRNA-miRNA-mRNA regulatory network. Methods: After treatment with gradient concentrations of Aloperine (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) for 24 hours, changes in CRC cell proliferation and apoptosis were detected by functional experiments. Data of the differential expression of miR-296-5p in CRC patients and healthy people were obtained from Starbase. The effects of Aloperine on 12 differentially expressed circRNAs were detected. The binding of miR-296-5p with NOP2/Sun RNA methyltransferase 2 (circNSUN2) and STAT3 was predicted by TargetScan and confirmed through dual-luciferase experiments. The expressions of circNSUN2, miR-296-5p and STAT3 as well as apoptosis-related genes in CRC cells were detected by qRT-PCR and Western blot as needed. Rescue experiments were conducted to test the regulatory effects of circNSUN2, miR-296-5p and STAT3 on CRC cells. Results: Aloperine at a concentration gradient inhibited proliferation and promoted apoptosis in CRC cells. The abnormally low expression of miR-296-5p in CRC could be upregulated by Aloperine. Among the differentially expressed circRNAs in CRC, only circNSUN2 not only targets miR-296-5p, but also can be regulated by Aloperine. The up-regulation of circNSUN2 offset the inhibitory effect of Aloperine on cancer cells. The rescue experiments finally confirmed the regulation of circNSUN2/miR-296-5p/STAT3 axis in CRC cells. Conclusions: By regulating the circNSUN2/miR-296-5p/STAT3 pathway, Aloperine prevents the malignant development of CRC cells.
Introduction: Cancer has become the first killer in the world and the biggest obstacle for extending human life expectancy. It is estimated that there were 18.1 million new cancer cases and 9.6 million cancer deaths in 2018.1 Colorectal cancer (CRC) is one of the most common gastrointestinal tumors.2 According to the data from the International Agency for Research on Cancer (IARC),1 the number of new CRC cases worldwide in 2018 was approximately 1.09 million, making CRC the fourth most prevalent malignant tumor after lung, breast and prostate cancers; meantime, CRC had a high mortality rate of about 9.2%, ranking only after lung cancer. Epidemiological studies have shown that the incidence of CRC varies significantly in different countries, and is higher in developed areas.3 More importantly, as the incidence of CRC rises rapidly in people under 50, the age of onset of the disease is getting lower.4,5 Therefore, it is urgent to improve the diagnosis rate and treatment effect of CRC. Currently, the main clinical treatment of CRC is surgery combined with adjuvant radiotherapy, chemotherapy or molecular targeted therapy. Surgery is generally considered as the first choice for comprehensive treatment of CRC. This method is suitable for patients whose tumors are confined to the intestinal wall while penetrating the intestinal wall and invading the serous or extraserous membrane without lymph node metastasis.6,7 However, the availability of radical resection is limited as most patients with CRC have adenocarcinoma, which generally develops from polyps and is metastatic by nature.8 As a result, CRC patients tend to undergo simple resection, which leads to a high recurrence rate. Chemotherapy, therefore, is still necessary for postoperative and late-stage CRC patients.9,10 At present, the commonly used chemotherapeutic drugs in clinic mainly include 5-fluorouracil (5-Fu), oxaliplatin and its derivatives.9 Chemotherapy is highly risky because it indiscriminately kills tumor cells and immune cells and thereby reduces the anti-tumor immune effect.11 Likewise, radiotherapy greatly weakens the body’s immunity.12 Hence, researchers have proposed to find effective treatments or drugs that cause less damage to patients. With advances in the research and development of traditional Chinese medicine, more attention has been paid to the roles of traditional Chinese medicine and its active ingredients in disease prevention and treatment. At the same time, new targets for molecular targeted therapy of diseases have been discovered in the exploration of the molecular mechanisms of traditional Chinese medicine and its monomers. Aloperine (ALO) has been confirmed to have a significant anti-cancer activity. ALO is a component of the traditional Chinese medicine Sophora alopecuroides L.13 It is also one of the main alkaloids separated and extracted in the lab, with a molecular formula of C15H24N2.14 Recent studies have found that ALO has the effects of anti-inflammation, immunosuppression, redox suppression, cardiovascular protection and anti-cancer, among which its tumor-suppressing activity has been extensively reported. 
For example, Yu et al revealed that ALO inhibited the carcinogenesis process by regulating excessive autophagy in thyroid cancer cells;15 Liu et al demonstrated that ALO inhibited the PI3K/Akt pathway, increased apoptosis and caused cell cycle block in liver cancer cells;16 besides, ALO was also found to exert an anti-cancer effect in prostate cancer and breast cancer.17,18 In the previous study, our research group found that ALO up-regulated miR-296-5p and inhibited the activity of its target gene STAT3, thereby inhibiting the proliferation and inducing the apoptosis of CRC cells. However, the activation pathway of miR-296-5p is unclear. In order to clarify the upstream activation pathway of ALO-upregulated miR-296-5p, this study will explore the mechanism of ALO in inhibiting CRC from the perspective of the circRNA-miRNA-mRNA regulatory network. Discussion: CRC is a malignant lesion of the mucosal epithelium of the colon or rectum under the action of various carcinogenic factors such as environment or heredity.2 Most CRC patients suffer from adenocarcinoma, which usually develops from polyps. Tumors developed from polyps can infiltrate in a circular shape along the horizontal axis of the intestinal tube, develop into the deep layer of the intestinal wall, and finally penetrate the intestinal wall and metastasize to blood vessels or lymphatic vessels.20 This is one of the important reasons for the high recurrence and metastasis rates of CRC. The ultimate goals of CRC research are to improve the current treatment of CRC, save patients’ lives and improve their quality of life. Based on the role of traditional Chinese medicine in disease prevention and treatment, this study explored the effects of ALO, an alkaloid exhibiting strong anticancer activity, on the proliferation and apoptosis of CRC cells. The results showed that after ALO treatment, the viability and proliferation of CRC cells were significantly inhibited; on the contrary, the number of apoptotic cancer cells was significantly increased. This is consistent with the previous experimental results.21 In order to probe deeper into the upstream mechanism of ALO in alleviating CRC carcinogenesis through the miR-296-5p/STAT3 axis, we screened and verified the upstream targeting circRNA of miR-296-5p. CircRNAs are a type of covalently closed circular non-coding RNA formed by back-splicing of mRNA precursors (pre-mRNA).22 CircRNA was considered to be a useless splicing by-product in early research. With the deepening of research, it is found that circRNA has a wide range of sources and plays a variety of functional roles in the growth and development of organisms, with the characteristics of conservative, stable, and tissue-specific.22,23 The most familiar and extensively studied function of circRNA is its sponging effect on miRNA.24 For example, circRNA ciRS-7 which contains more than 70 miR-7 binding sites can achieve competitive adsorption of miR-7 through the AGO2 protein.25 Besides, increasing reports on cancer indicate that circRNAs, such as circRNA-cTFRC, circPSMC3 and circSETD3, act as sponges of miRNAs and participate in transcriptional regulation.26–28 Based on these literature reports, we screened circRNAs that were reported to be abnormally expressed in CRC, and finally obtained 12 circRNAs. After detecting their expressions in ALO-treated cancer cells and predicting their binding sequence for miR-296-5p, circNSUN2 was finally identified as the research object of this study. 
CircNSUN2, which maps to the 5p15 amplicon in CRCs, was confirmed by Chen et al to regulate cytoplasmic export and promote liver metastasis of CRC through N6-methyladenosine modification.29 Similarly, our experimental results uncovered that knockdown of circNSUN2 inhibited proliferation and accelerated apoptosis in CRC cells. More importantly, we revealed for the first time that ALO can prevent the activation of circNSUN2, and that overexpression of circNSUN2 partially offsets the inhibitory effect of ALO on CRC progression. Since circNSUN2 and miR-296-5p have targeting sequences, we verified the cellular regulatory effects of circNSUN2 and miR-296-5p/STAT3 through dual luciferase experiments and rescue experiments. The final results confirmed our conjecture that regulating the circNSUN2/miR-296-5p/STAT3 axis can reduce the proliferation rate and increase the apoptosis rate of CRC cells. At present, blocking proliferation and increasing apoptosis in cancer cells are intensively discussed mechanisms in CRC research. Tumor cells can proliferate quickly and without restraint, and thus inhibiting their proliferation can produce an anti-tumor effect.30 Apoptosis, also called programmed cell death, is an autonomous cell death process strictly controlled by multiple genes. Apoptosis is an important part of the cell life cycle, and it is also an important link in regulating the development of the body and maintaining the stability of the internal environment of organisms.31,32 Our research shows that ALO inhibits the proliferation and promotes the apoptosis of CRC cells by regulating the circNSUN2/miR-296-5p/STAT3 pathway, and ultimately prevents the tumorigenesis of CRC. There are still certain limitations in our research. Although we have confirmed the effect of ALO on inducing apoptosis and reducing proliferation in CRC cells and clarified the underlying mechanism, the interplay between the circNSUN2/miR-296-5p/STAT3 axis and other proliferation- and apoptosis-related pathways has not yet been analyzed and will be explored in future research. In addition, the anticancer effect of ALO on CRC and its experimental concentration need to be further tested in animal models and clinical studies. Whether ALO affects drug resistance in CRC is also a direction for future research.
Background: Aloperine can regulate miR-296-5p/Signal Transducer and Activator of Transcription 3 (STAT3) pathway to inhibit the malignant development of colorectal cancer (CRC), but the regulatory mechanism is unclear. This study explored the upstream mechanism of Aloperine in reducing CRC damage from the perspective of the circRNA-miRNA-mRNA regulatory network. Methods: After treatment with gradient concentrations of Aloperine (0.1 mmol/L, 0.2 mmol/L, 0.4 mmol/L, 0.8 mmol/L and 1 mmol/L) for 24 hours, changes in CRC cell proliferation and apoptosis were detected by functional experiments. Data of the differential expression of miR-296-5p in CRC patients and healthy people were obtained from Starbase. The effects of Aloperine on 12 differentially expressed circRNAs were detected. The binding of miR-296-5p with NOP2/Sun RNA methyltransferase 2 (circNSUN2) and STAT3 was predicted by TargetScan and confirmed through dual-luciferase experiments. The expressions of circNSUN2, miR-296-5p and STAT3 as well as apoptosis-related genes in CRC cells were detected by qRT-PCR and Western blot as needed. Rescue experiments were conducted to test the regulatory effects of circNSUN2, miR-296-5p and STAT3 on CRC cells. Results: Aloperine at a concentration gradient inhibited proliferation and promoted apoptosis in CRC cells. The abnormally low expression of miR-296-5p in CRC could be upregulated by Aloperine. Among the differentially expressed circRNAs in CRC, only circNSUN2 not only targets miR-296-5p, but also can be regulated by Aloperine. The up-regulation of circNSUN2 offset the inhibitory effect of Aloperine on cancer cells. The rescue experiments finally confirmed the regulation of circNSUN2/miR-296-5p/STAT3 axis in CRC cells. Conclusions: By regulating the circNSUN2/miR-296-5p/STAT3 pathway, Aloperine prevents the malignant development of CRC cells.
17,564
354
[ 3307, 174, 127, 135, 102, 165, 183, 226, 170, 277, 62, 7293, 749, 370, 448, 475, 507, 596, 425, 836 ]
21
[ "cells", "296", "mir", "mir 296", "5p", "296 5p", "mir 296 5p", "alo", "sw480", "ht29" ]
[ "cancer crc common", "colon cancer regulated", "crc study tumor", "colorectal cancer alo", "2018 colorectal cancer" ]
null
null
[CONTENT] colorectal cancer | aloperine | NOP2/Sun RNA methyltransferase 2 | miR-296-5p | signal transducer and activator of transcription 3 [SUMMARY]
null
null
[CONTENT] colorectal cancer | aloperine | NOP2/Sun RNA methyltransferase 2 | miR-296-5p | signal transducer and activator of transcription 3 [SUMMARY]
[CONTENT] colorectal cancer | aloperine | NOP2/Sun RNA methyltransferase 2 | miR-296-5p | signal transducer and activator of transcription 3 [SUMMARY]
[CONTENT] colorectal cancer | aloperine | NOP2/Sun RNA methyltransferase 2 | miR-296-5p | signal transducer and activator of transcription 3 [SUMMARY]
[CONTENT] Antineoplastic Agents | Apoptosis | Cell Line, Tumor | Cell Proliferation | Colorectal Neoplasms | Computational Biology | Dose-Response Relationship, Drug | Drug Screening Assays, Antitumor | Humans | Methyltransferases | MicroRNAs | Molecular Structure | Quinolizidines | RNA, Circular | STAT3 Transcription Factor | Structure-Activity Relationship [SUMMARY]
null
null
[CONTENT] Antineoplastic Agents | Apoptosis | Cell Line, Tumor | Cell Proliferation | Colorectal Neoplasms | Computational Biology | Dose-Response Relationship, Drug | Drug Screening Assays, Antitumor | Humans | Methyltransferases | MicroRNAs | Molecular Structure | Quinolizidines | RNA, Circular | STAT3 Transcription Factor | Structure-Activity Relationship [SUMMARY]
[CONTENT] Antineoplastic Agents | Apoptosis | Cell Line, Tumor | Cell Proliferation | Colorectal Neoplasms | Computational Biology | Dose-Response Relationship, Drug | Drug Screening Assays, Antitumor | Humans | Methyltransferases | MicroRNAs | Molecular Structure | Quinolizidines | RNA, Circular | STAT3 Transcription Factor | Structure-Activity Relationship [SUMMARY]
[CONTENT] Antineoplastic Agents | Apoptosis | Cell Line, Tumor | Cell Proliferation | Colorectal Neoplasms | Computational Biology | Dose-Response Relationship, Drug | Drug Screening Assays, Antitumor | Humans | Methyltransferases | MicroRNAs | Molecular Structure | Quinolizidines | RNA, Circular | STAT3 Transcription Factor | Structure-Activity Relationship [SUMMARY]
[CONTENT] cancer crc common | colon cancer regulated | crc study tumor | colorectal cancer alo | 2018 colorectal cancer [SUMMARY]
null
null
[CONTENT] cancer crc common | colon cancer regulated | crc study tumor | colorectal cancer alo | 2018 colorectal cancer [SUMMARY]
[CONTENT] cancer crc common | colon cancer regulated | crc study tumor | colorectal cancer alo | 2018 colorectal cancer [SUMMARY]
[CONTENT] cancer crc common | colon cancer regulated | crc study tumor | colorectal cancer alo | 2018 colorectal cancer [SUMMARY]
[CONTENT] cells | 296 | mir | mir 296 | 5p | 296 5p | mir 296 5p | alo | sw480 | ht29 [SUMMARY]
null
null
[CONTENT] cells | 296 | mir | mir 296 | 5p | 296 5p | mir 296 5p | alo | sw480 | ht29 [SUMMARY]
[CONTENT] cells | 296 | mir | mir 296 | 5p | 296 5p | mir 296 5p | alo | sw480 | ht29 [SUMMARY]
[CONTENT] cells | 296 | mir | mir 296 | 5p | 296 5p | mir 296 5p | alo | sw480 | ht29 [SUMMARY]
[CONTENT] cancer | crc | alo | molecular | anti | medicine | traditional chinese medicine | tumor | chinese | chinese medicine [SUMMARY]
null
null
[CONTENT] crc | circnsun2 | research | proliferation | apoptosis | mir | alo | circrna | axis | 5p stat3 [SUMMARY]
[CONTENT] cells | 5p | 296 5p | mir 296 5p | mir | 296 | mir 296 | alo | circnsun2 | mmol [SUMMARY]
[CONTENT] cells | 5p | 296 5p | mir 296 5p | mir | 296 | mir 296 | alo | circnsun2 | mmol [SUMMARY]
[CONTENT] Activator of Transcription 3 ( | STAT3 | CRC ||| Aloperine | CRC [SUMMARY]
null
null
[CONTENT] ||| STAT3 | Aloperine | CRC [SUMMARY]
[CONTENT] Activator of Transcription 3 ( | STAT3 | CRC ||| Aloperine | CRC ||| Aloperine | 0.1 mmol | 0.2 | 0.4 | 0.8 | 1 | 24 hours | CRC ||| CRC ||| Aloperine | 12 ||| NOP2 | 2 | STAT3 | TargetScan ||| STAT3 | CRC ||| STAT3 | CRC ||| ||| CRC ||| CRC | Aloperine ||| CRC | Aloperine ||| Aloperine ||| CRC ||| ||| ||| STAT3 | Aloperine | CRC [SUMMARY]
[CONTENT] Activator of Transcription 3 ( | STAT3 | CRC ||| Aloperine | CRC ||| Aloperine | 0.1 mmol | 0.2 | 0.4 | 0.8 | 1 | 24 hours | CRC ||| CRC ||| Aloperine | 12 ||| NOP2 | 2 | STAT3 | TargetScan ||| STAT3 | CRC ||| STAT3 | CRC ||| ||| CRC ||| CRC | Aloperine ||| CRC | Aloperine ||| Aloperine ||| CRC ||| ||| ||| STAT3 | Aloperine | CRC [SUMMARY]
Safety and Efficacy of Drug-Coated Balloons in Patients with Acute Coronary Syndromes and Vulnerable Plaque.
36198017
Percutaneous coronary intervention (PCI) is the main treatment option for acute coronary syndromes (ACS), which are often related to the progression and rupture of vulnerable plaques. While drug-eluting stents (DES) are now routinely used in PCI, drug-coated balloons (DCB) are a newer PCI strategy, and their use in the treatment of ACS with vulnerable plaques has not been reported. This study aimed to evaluate the safety and efficacy of DCB in ACS complicated by vulnerable plaque lesions.
BACKGROUND
We retrospectively analyzed 123 patients diagnosed with ACS who underwent PCI in our Cardiology Department from December 2020 to July 2022. Vulnerable plaques were confirmed by intravascular ultrasound (IVUS) in all patients. According to their individual treatment plans, patients were assigned to either the DCB (n = 55) or the DES (n = 68) group. The results of coronary angiography and IVUS before and immediately after PCI were analyzed. The occurrence of major adverse cardiovascular events (MACE) and the results of coronary angiography were also evaluated during follow-up.
METHODS
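The methods above rely on an IVUS definition of vulnerable plaque; in the Patients section this is specified as a plaque burden (PB) ≥ 70% with a minimal luminal area (MLA) ≤ 4.0 mm². Below is a hedged sketch of how these two criteria can be checked from IVUS cross-sectional measurements; the function names and example values are illustrative, not study data.

# Hedged sketch of the IVUS vulnerable-plaque criteria used for inclusion in this
# study: plaque burden (PB) >= 70% and minimal luminal area (MLA) <= 4.0 mm^2.
# PB is conventionally (vessel/EEM area - lumen area) / vessel area; the function
# names and the example measurements are illustrative only.

def plaque_burden(vessel_area_mm2: float, lumen_area_mm2: float) -> float:
    """Plaque burden (%) at one cross-section: plaque+media area over vessel (EEM) area."""
    return 100.0 * (vessel_area_mm2 - lumen_area_mm2) / vessel_area_mm2

def is_vulnerable_plaque(vessel_area_mm2: float, minimal_lumen_area_mm2: float,
                         pb_threshold: float = 70.0, mla_threshold: float = 4.0) -> bool:
    pb = plaque_burden(vessel_area_mm2, minimal_lumen_area_mm2)
    return pb >= pb_threshold and minimal_lumen_area_mm2 <= mla_threshold

# Hypothetical lesion: 12.5 mm^2 vessel area with a 3.1 mm^2 minimal lumen area
print(plaque_burden(12.5, 3.1))            # ≈ 75.2% plaque burden
print(is_vulnerable_plaque(12.5, 3.1))     # True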
There were no significant differences in baseline clinical characteristics, preoperative minimal luminal diameter (MLD), or preoperative diameter stenosis (DS) between the two groups. There were also no differences in IVUS plaque burden (PB), vessel area, or lumen area between the two groups before and immediately after PCI. The efficacy analysis showed that immediately after PCI, the DCB group had a smaller MLD and a higher degree of lumen stenosis than the DES group (P < 0.05). However, during follow-up, no significant differences in MLD and DS were seen between the two groups, while late luminal loss (LLL) was smaller in the DCB group (P < 0.05). The safety analysis showed that during follow-up, 9 patients developed restenosis after DCB treatment and 10 patients after DES treatment, with no statistically significant difference in restenosis incidence between the two groups. In addition, there was no statistically significant difference in the incidence of major adverse cardiovascular events (MACE) during hospitalization and follow-up between the DCB group (7.3%, 4/55) and the DES group (8.8%, 6/68).
RESULTS
DCB is safe and effective for ACS complicated by vulnerable plaque and has an advantage over DES in terms of LLL.
CONCLUSION
[ "Acute Coronary Syndrome", "Angioplasty, Balloon, Coronary", "Constriction, Pathologic", "Coronary Angiography", "Coronary Artery Disease", "Coronary Restenosis", "Drug-Eluting Stents", "Humans", "Percutaneous Coronary Intervention", "Retrospective Studies", "Treatment Outcome" ]
9537494
Introduction
Acute coronary syndromes (ACS) are serious manifestations of coronary heart disease that threaten human health. Coronary plaques in such patients usually have a large plaque burden with lipid-rich necrotic cores; these so-called vulnerable plaques can progress and rupture, leading to unstable angina, acute non-ST-elevation myocardial infarction, and acute ST-elevation myocardial infarction.1–4 Percutaneous coronary intervention is now an effective treatment approach for ACS. PCI can restore normal blood flow through myocardial revascularization and thus greatly improves symptoms and prognosis in ACS patients.5,6 Compared with traditional two-dimensional coronary angiography (CAG), newer intraluminal imaging techniques such as IVUS are more effective in reducing complications after drug-eluting stent placement, decreasing the in-stent restenosis rate, and improving treatment outcomes. Equally important, IVUS can better provide intravascular imaging data for coronary lesions with vessel overlap, anatomical abnormalities, aneurysms, myocardial bridges, and calcification, and is especially useful in identifying culprit vessels. Because most ACS patients present with plaque rupture, accurate identification of the nature, extent and location of plaque ruptures is particularly important for diagnosis and treatment planning. In this context, IVUS is superior to CAG in providing relevant imaging information, guiding PCI intraprocedurally, and evaluating postoperative results and clinical prognosis.7,8 Currently, DES is the preferred strategy for PCI in patients with ACS. DES has the advantages of inducing intimal hyperplasia, thickening the fibrous cap, and normalizing wall stress to reduce plaque rupture. However, complications of DES may occur, such as in-stent restenosis, in-stent hyperplasia, and stent fracture, and DES requires long-term dual antiplatelet therapy. In severe cases, a no-reflow phenomenon may occur after stent placement, rapidly resulting in a larger area of myocardial infarction.9–11 DCB are a newer revascularization technique that has become the treatment of choice for in-stent restenosis because it embodies the concept of intervention without implantation. Indications for DCB also include small vessel disease and bifurcation disease, among others.12–14 However, there are few reports on the use of DCB in ACS patients with vulnerable plaques. Therefore, this study used DES as a parallel treatment to examine the safety and efficacy of DCB in ACS complicated with vulnerable plaque lesions under the guidance of IVUS.
null
null
Results
Baseline Demographic and Clinical Characteristics: A total of 123 patients were included in this study, of whom 55 received DCB and 68 received DES. As shown in Table 1, there were no statistically significant differences between the DCB and DES groups in terms of average age, gender distribution, ejection fraction, and other clinical characteristics (P > 0.05). Coronary heart disease-related risk factors such as hypertension, hyperlipidemia, diabetes, and smoking history were also not significantly different between the two groups (P > 0.05). Similarly, there were no significant differences in the clinical diagnoses of ST-elevation myocardial infarction, non-ST-elevation myocardial infarction, and unstable angina pectoris between the two groups (X2 = 0.92, P > 0.05). Baseline Patient Characteristics of the Study Population. Comparing two data sets using the continuity corrected chi-square test; DCB: drug-coated balloon; DES: drug-eluting stent; STEMI: ST-segment elevation myocardial infarction; NSTEMI: non-STEMI; MI: myocardial infarction; PCI: percutaneous coronary intervention; CABG: coronary artery bypass grafting; LVEF: left ventricular ejection fraction. Procedural Characteristics of PCI: By combining preoperative ECG and echocardiography with intraoperative CAG results, the culprit vessels in ACS patients were identified and PCI was performed. The target vessel locations included the left anterior descending artery, left circumflex artery, right coronary artery, and ramus intermedius artery, and there were no statistically significant differences in target vessel locations between the DCB and DES treatment groups (X2 = 2.83, P > 0.05). According to intraoperative CAG, coronary artery lesions in the two groups were classified as single-vessel, double-vessel, or triple-vessel disease, and there were no significant differences in multivessel disease between the two groups (X2 = 0.01, P > 0.05). Except for a significant difference in intraoperative balloon usage between the two groups (X2 = 7.69, P < 0.05), there were no statistical differences in other PCI procedural characteristics, including implantation diameter and length, number of patients with intraoperative dissection, number of patients with temporary pacemakers, and number of patients using an intra-aortic balloon pump (IABP) (P > 0.05). The procedural characteristics of PCI in the two groups are shown in Table 2. Procedural Characteristics of PCI.
Comparing two data sets using the continuity corrected chi-square test; #Comparing two data sets using the Fisher's exact probability test; DCB: drug-coated balloon; DES: drug-eluting stent; IABP: intra-aortic balloon pump. QCA and IVUS Analysis: The preoperative, postoperative and follow-up results of PCI are shown in Table 3. There were no significant differences in preoperative RFD, MLD and DS between the two groups (P > 0.05). However, CAG immediately after DCB or DES implantation showed a significantly smaller MLD and a greater degree of stenosis in the DCB group than in the DES group (P < 0.05). Further follow-up CAG after PCI revealed no statistically significant differences in MLD, degree of stenosis, or coronary restenosis between the two groups (P > 0.05), except that, compared with the DES group, the DCB group had less LLL in lumen diameter and a greater number of lumen enlargements (P < 0.05). Comparison of Coronary QCA and IVUS Results Between DCB and DES Groups. DCB: drug-coated balloon; DES: drug-eluting stent; CAG: coronary artery angiography; IVUS: intravascular ultrasound; MLD: minimal lumen diameter; LLL: late luminal loss. Both groups underwent IVUS before and after PCI. The results showed no significant differences in vessel area and lumen area between the two groups (P > 0.05). In addition, no statistically significant differences were seen in preoperative plaque burden (79.05 ± 4.17 vs 80.07 ± 4.68) or postprocedural plaque burden (61.69 ± 5.64 vs 63.32 ± 6.07) between the two groups (P > 0.05).
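The QCA endpoints in Table 3 (diameter stenosis, minimal lumen diameter, late luminal loss) are derived quantities. The article does not spell out its formulas, so the sketch below uses the conventional definitions, DS = (1 − MLD/RFD) × 100% and LLL = post-procedural MLD − follow-up MLD; the example values are illustrative, not study data.

# Hedged sketch of the standard QCA definitions behind the endpoints in Table 3.
# The article does not give explicit formulas, so these are the conventional ones:
#   diameter stenosis (DS, %) = (1 - MLD / RFD) * 100
#   late luminal loss (LLL, mm) = MLD immediately after PCI - MLD at follow-up
# Values below are illustrative, not study data.

def diameter_stenosis(mld_mm: float, reference_diameter_mm: float) -> float:
    """Percent diameter stenosis from minimal lumen diameter and reference vessel diameter."""
    return 100.0 * (1.0 - mld_mm / reference_diameter_mm)

def late_luminal_loss(mld_post_pci_mm: float, mld_follow_up_mm: float) -> float:
    """Late luminal loss; a negative value indicates late lumen enlargement."""
    return mld_post_pci_mm - mld_follow_up_mm

rfd = 3.0                                                             # reference vessel diameter (mm)
print(diameter_stenosis(mld_mm=2.1, reference_diameter_mm=rfd))       # 30.0% residual stenosis
print(late_luminal_loss(mld_post_pci_mm=2.1, mld_follow_up_mm=2.3))   # -0.2 mm (enlargement)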
Follow-up MACE: During the follow-up period, there were no significant differences in target lesion revascularization, myocardial infarction, and cardiac death between the two groups (P > 0.05). Survival analysis showed no significant difference in survival rates between the two groups (Log-rank X2 = 0.01, P > 0.05). The detailed data are shown in Table 4 and Figure 2. Comparison of MACE Between DCB and DES Groups. Comparing two data sets using the continuity corrected chi-square test; #Comparing two data sets using the Fisher's exact probability test; DCB: drug-coated balloon; DES: drug-eluting stent; MACE: major adverse cardiovascular event; TLR: target lesion revascularization; MI: myocardial infarction.
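The follow-up comparisons reported here use Fisher's exact or continuity-corrected chi-square tests for event incidence and a Kaplan-Meier analysis with a log-rank test for survival (see the Statistical Analysis section). A hedged sketch of how they could be reproduced with scipy and lifelines follows; only the event counts (4/55 vs 6/68) come from the article, while the per-patient follow-up times are invented placeholders.

# Hedged sketch of the two follow-up comparisons reported here: Fisher's exact test
# for MACE incidence (small expected counts) and a Kaplan-Meier / log-rank test for
# MACE-free survival. Requires scipy and lifelines; the per-patient follow-up times
# below are invented placeholders, only the event counts (4/55 vs 6/68) come from
# the article.
import numpy as np
from scipy.stats import fisher_exact
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# MACE incidence: DCB 4/55 vs DES 6/68
table = [[4, 55 - 4], [6, 68 - 6]]
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test for MACE incidence: p = {p_value:.2f}")

# Hypothetical per-patient follow-up (months) and MACE indicator for each group
rng = np.random.default_rng(0)
t_dcb, e_dcb = rng.uniform(1, 12, 55), np.array([1] * 4 + [0] * 51)
t_des, e_des = rng.uniform(1, 12, 68), np.array([1] * 6 + [0] * 62)

kmf = KaplanMeierFitter()
kmf.fit(t_dcb, event_observed=e_dcb, label="DCB")   # survival curve for the DCB group
result = logrank_test(t_dcb, t_des, event_observed_A=e_dcb, event_observed_B=e_des)
print(f"Log-rank test: p = {result.p_value:.2f}")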
Conclusion
In summary, DCB treatment is safe and effective for ACS complicated with vulnerable plaque, and DCB has an advantage over DES in terms of LLL. Our work provides practical experience for the interventional treatment of vulnerable plaques in ACS.
[ "Materials and Methods", "Patients", "PCI Procedure", "Analysis Parameters", "Statistical Analysis", "Baseline Demographic and Clinical Characteristics", "Procedural Characteristics of PCI", "QCA and IVUS Analysis", "Follow-up MACE", "Limitations", "Conclusion" ]
[ "Patients This retrospectively study investigated 123 patients who were diagnosed with ACS\nand underwent PCI in the Department of Cardiology, the First Affiliated Hospital\nof Zhengzhou University from December 2020 to July 2022, and all patients were\nconfirmed to have vulnerable plaques by IVUS. According to the treatment\nstrategy they received, patients were entered into either the DCB treatment\ngroup (n = 55) or the DES treatment group (n = 61). Figure 1 shows the study flowchart.\nThe flowchart of this study.\nInclusion criteria: (1) 18-85 years of age; (2) meeting the diagnostic criteria\nof the European Heart Association for ACS 15,16(3) having definite\nculprit vessel; and (4) gray-scale IVUS showing PB ≥ 70% and minimal luminal\narea (MLA) ≤ 4.0 mm2.\n17\n Exclusion criteria: (1) chronic total occlusion; (2) heavily calcified\nlesions requiring rotational atherectomy; (3) patients with thrombus aspiration;\n(4) severe valvular insufficiency or valvular stenosis; (5) definite\ncontraindications to antiplatelet drugs or anticoagulants; (6) dissection repair\nremedial stents after DCB implantation; (7) severe renal insufficiency (GFR<\n30 ml/min) or severe liver insufficiency; and (8) patients who declined DCB and\nDES implantation.\nThe survival curve of DCB and DES groups\nThis retrospectively study investigated 123 patients who were diagnosed with ACS\nand underwent PCI in the Department of Cardiology, the First Affiliated Hospital\nof Zhengzhou University from December 2020 to July 2022, and all patients were\nconfirmed to have vulnerable plaques by IVUS. According to the treatment\nstrategy they received, patients were entered into either the DCB treatment\ngroup (n = 55) or the DES treatment group (n = 61). Figure 1 shows the study flowchart.\nThe flowchart of this study.\nInclusion criteria: (1) 18-85 years of age; (2) meeting the diagnostic criteria\nof the European Heart Association for ACS 15,16(3) having definite\nculprit vessel; and (4) gray-scale IVUS showing PB ≥ 70% and minimal luminal\narea (MLA) ≤ 4.0 mm2.\n17\n Exclusion criteria: (1) chronic total occlusion; (2) heavily calcified\nlesions requiring rotational atherectomy; (3) patients with thrombus aspiration;\n(4) severe valvular insufficiency or valvular stenosis; (5) definite\ncontraindications to antiplatelet drugs or anticoagulants; (6) dissection repair\nremedial stents after DCB implantation; (7) severe renal insufficiency (GFR<\n30 ml/min) or severe liver insufficiency; and (8) patients who declined DCB and\nDES implantation.\nThe survival curve of DCB and DES groups\nPCI Procedure Patients received dual antiplatelet therapy.18–22 Scheme 1: preoperative\nloading dose of aspirin 300 mg + ticagrelor 180 mg, postoperative maintenance\ndose of aspirin 100 mg + ticagrelor 90 mg, twice a day. Scheme 2: preoperative\nloading dose of aspirin 300 mg + clopidogrel 300 mg, postoperative maintenance\ndose of aspirin 100 mg/d + clopidogrel 75 mg/d. For special ACS patients, such\nas emergency PCI patients, comatose patients, and patients with poor\ncooperation, preoperative dual antiplatelet drug therapy might not be given and\nreplaced by continuous intravenous anticoagulant drugs.\nAfter routine sterilization and draping, the radial artery or femoral artery was\nselected for arterial puncture and sheath insertion. CAG was performed according\nto established protocols. 
To clearly display the culprit vessels, an appropriate\npositioning was selected for imaging according to the preoperative\nelectrocardiogram, and multi-position imaging was performed if necessary.\nCulprit vessel were determined by combining the patient's electrocardiogram,\nechocardiography, and intraoperative angiography, and graded according to the\nThrombolysis in Myocardial Infarction (TIMI) criteria. For TIMI grade < 3\nculprit vessels, a compliant balloon was used to pre-dilate the culprit vessel\nlesions to restore TIMI grade 3 flow. According to the patient's condition, IVUS\nwas performed at the appropriate time. The probe was sent to the relatively\nnormal segment at the distal end of the culprit vessel, and then withdrew to the\nrelatively normal segment at the proximal end at a constant rate of 0.5 mm/s to\nobtain intravascular imaging including vessel area, lumen area and plaque\nburden.\nBefore DCB or DES implant in the culprit vessel, an appropriate compliant\nballoon, semi-compliant balloon, cutting balloon, or spinous balloon was used to\ndilate the lesion and reduce culprit stenosis.\n23\n For DCB implantation, if the degree of vascular stenosis was ≤ 30%, and\nthere was no dissection or only type A or B dissection after dilation, a\nsize-matched DCB was chosen based on IVUS imaging and placed in the lesion\nsustained release before balloon withdrawal. The length of the DCB catheter\nexceed the target lesion by at least 5 mm. and the ratio of DCB diameters with\nreference vessel diameters were 0.8–1.0. The recommended inflation time was at\nleast 40 s at >7 atm. For DES implantation, after pre-dilation, a\nsize-matched DES based on IVUS imaging was placed in the lesion and released. A\nhigh-pressure balloon with appropriate dimensions was selected for post-stenting\ndilation to ensure that the stent was fully expanded and adhered to the vessel\nwall. IVUS examination was performed after DCB and DES implantation to further\nevaluate the immediate postoperative efficacy and guide further treatment of\nstent malapposition and dissection. At the same time, postoperative IVUS imaging\nresults were recorded.\nIn this study, DCB had a paclitaxel/iohexol matrix coating on the Bingo\nDrug-Coated Balloon (Bingo™, Yinyi Biotech, Dalian, China). And the IVUS imaging\nwere done with\nthe computer program (H749A70200,Boston Scientific, Shanghai, China).\nPatients received dual antiplatelet therapy.18–22 Scheme 1: preoperative\nloading dose of aspirin 300 mg + ticagrelor 180 mg, postoperative maintenance\ndose of aspirin 100 mg + ticagrelor 90 mg, twice a day. Scheme 2: preoperative\nloading dose of aspirin 300 mg + clopidogrel 300 mg, postoperative maintenance\ndose of aspirin 100 mg/d + clopidogrel 75 mg/d. For special ACS patients, such\nas emergency PCI patients, comatose patients, and patients with poor\ncooperation, preoperative dual antiplatelet drug therapy might not be given and\nreplaced by continuous intravenous anticoagulant drugs.\nAfter routine sterilization and draping, the radial artery or femoral artery was\nselected for arterial puncture and sheath insertion. CAG was performed according\nto established protocols. 
To clearly display the culprit vessels, an appropriate\npositioning was selected for imaging according to the preoperative\nelectrocardiogram, and multi-position imaging was performed if necessary.\nCulprit vessel were determined by combining the patient's electrocardiogram,\nechocardiography, and intraoperative angiography, and graded according to the\nThrombolysis in Myocardial Infarction (TIMI) criteria. For TIMI grade < 3\nculprit vessels, a compliant balloon was used to pre-dilate the culprit vessel\nlesions to restore TIMI grade 3 flow. According to the patient's condition, IVUS\nwas performed at the appropriate time. The probe was sent to the relatively\nnormal segment at the distal end of the culprit vessel, and then withdrew to the\nrelatively normal segment at the proximal end at a constant rate of 0.5 mm/s to\nobtain intravascular imaging including vessel area, lumen area and plaque\nburden.\nBefore DCB or DES implant in the culprit vessel, an appropriate compliant\nballoon, semi-compliant balloon, cutting balloon, or spinous balloon was used to\ndilate the lesion and reduce culprit stenosis.\n23\n For DCB implantation, if the degree of vascular stenosis was ≤ 30%, and\nthere was no dissection or only type A or B dissection after dilation, a\nsize-matched DCB was chosen based on IVUS imaging and placed in the lesion\nsustained release before balloon withdrawal. The length of the DCB catheter\nexceed the target lesion by at least 5 mm. and the ratio of DCB diameters with\nreference vessel diameters were 0.8–1.0. The recommended inflation time was at\nleast 40 s at >7 atm. For DES implantation, after pre-dilation, a\nsize-matched DES based on IVUS imaging was placed in the lesion and released. A\nhigh-pressure balloon with appropriate dimensions was selected for post-stenting\ndilation to ensure that the stent was fully expanded and adhered to the vessel\nwall. IVUS examination was performed after DCB and DES implantation to further\nevaluate the immediate postoperative efficacy and guide further treatment of\nstent malapposition and dissection. At the same time, postoperative IVUS imaging\nresults were recorded.\nIn this study, DCB had a paclitaxel/iohexol matrix coating on the Bingo\nDrug-Coated Balloon (Bingo™, Yinyi Biotech, Dalian, China). And the IVUS imaging\nwere done with\nthe computer program (H749A70200,Boston Scientific, Shanghai, China).\nAnalysis Parameters Basic patient information was collected, including sex, age, clinical diagnosis,\nleft ventricular ejection fraction (LVEF), history of hypertension,\nhyperlipidemia, diabetes, smoking, myocardial infarction, previous PCI, previous\ncoronary artery bypass grafting (CABG), and family history of coronary heart\ndisease.\nQuantitative coronary angiography (QCA) software was used to analyze the CAG data\nof all patients, including (1) preoperative and immediate postoperative\nreference vessel diameter (RFD), minimal luminal diameter, and diameter\nstenosis; (2) MLD, DS, LLL, restenosis lesions, and lumen enlargement at\nfollow-up.\nThe IVUS analysis were also evaluated, including preoperative and immediate\npostoperative vessel area, plaque burden, and lumen area. 
Because the PB ≥70%\nwas the strongest independent predictor of subsequent lesion-related events in\nthe first PROSPECT study, and Gregg et al defined vulnerable plaque with this PB\nthreshold.17,24 In this study, the PB criterion was selected to define\na vulnerable plaque, and the result of CAG and IVUS were evaluated by two\nexperienced researchers.\nAll patients passed the outpatient or telephone assessment 1, 3, 6, and 12 months\nafter PCI. Major adverse cardiovascular events during hospitalization and\nfollow-up were target lesion revascularization (TLR), myocardial infarction\n(MI), and cardiac death.\nBasic patient information was collected, including sex, age, clinical diagnosis,\nleft ventricular ejection fraction (LVEF), history of hypertension,\nhyperlipidemia, diabetes, smoking, myocardial infarction, previous PCI, previous\ncoronary artery bypass grafting (CABG), and family history of coronary heart\ndisease.\nQuantitative coronary angiography (QCA) software was used to analyze the CAG data\nof all patients, including (1) preoperative and immediate postoperative\nreference vessel diameter (RFD), minimal luminal diameter, and diameter\nstenosis; (2) MLD, DS, LLL, restenosis lesions, and lumen enlargement at\nfollow-up.\nThe IVUS analysis were also evaluated, including preoperative and immediate\npostoperative vessel area, plaque burden, and lumen area. Because the PB ≥70%\nwas the strongest independent predictor of subsequent lesion-related events in\nthe first PROSPECT study, and Gregg et al defined vulnerable plaque with this PB\nthreshold.17,24 In this study, the PB criterion was selected to define\na vulnerable plaque, and the result of CAG and IVUS were evaluated by two\nexperienced researchers.\nAll patients passed the outpatient or telephone assessment 1, 3, 6, and 12 months\nafter PCI. Major adverse cardiovascular events during hospitalization and\nfollow-up were target lesion revascularization (TLR), myocardial infarction\n(MI), and cardiac death.\nStatistical Analysis SPSS version 22.0 (IBM Corporation, Armonk, NY, USA). was used for statistical\nanalysis. Quantitative data that conformed to normal distribution were analyzed\nby t-test and expressed as mean ± standard deviation (SD); those that did not\nconform to normal distribution were analyzed by Wilcoxon rank-sum test and\nexpressed as median and quartile range (M (P25, P75)). Qualitative data were\ncompared using the chi-square test. If T ≤ 5, the continuity-corrected\nchi-square test and Fisher's exact probability test were used and expressed as\nnumber of cases (%). A Kaplan-Meier curve was used to describe survival\nanalysis. Two-tailed P < 0.05 indicates a statistically significant\ndifference.\nSPSS version 22.0 (IBM Corporation, Armonk, NY, USA). was used for statistical\nanalysis. Quantitative data that conformed to normal distribution were analyzed\nby t-test and expressed as mean ± standard deviation (SD); those that did not\nconform to normal distribution were analyzed by Wilcoxon rank-sum test and\nexpressed as median and quartile range (M (P25, P75)). Qualitative data were\ncompared using the chi-square test. If T ≤ 5, the continuity-corrected\nchi-square test and Fisher's exact probability test were used and expressed as\nnumber of cases (%). A Kaplan-Meier curve was used to describe survival\nanalysis. 
Patients

This retrospective study investigated 123 patients who were diagnosed with ACS and underwent PCI in the Department of Cardiology, the First Affiliated Hospital of Zhengzhou University, from December 2020 to July 2022; all patients were confirmed to have vulnerable plaques by IVUS. According to the treatment strategy they received, patients were assigned to either the DCB group (n = 55) or the DES group (n = 68). Figure 1 shows the study flowchart.

[Figure 1. The flowchart of this study.]

Inclusion criteria: (1) 18–85 years of age; (2) meeting the European Society of Cardiology diagnostic criteria for ACS;15,16 (3) a definite culprit vessel; and (4) gray-scale IVUS showing PB ≥ 70% and minimal lumen area (MLA) ≤ 4.0 mm2.17 Exclusion criteria: (1) chronic total occlusion; (2) heavily calcified lesions requiring rotational atherectomy; (3) thrombus aspiration; (4) severe valvular insufficiency or stenosis; (5) definite contraindications to antiplatelet or anticoagulant drugs; (6) remedial stenting for dissection after DCB implantation; (7) severe renal insufficiency (GFR < 30 ml/min) or severe hepatic insufficiency; and (8) patients who declined DCB or DES implantation. A minimal sketch of the IVUS screening rule in criterion (4) is shown after the procedure summary below.

[Figure 2. The survival curves of the DCB and DES groups.]

PCI Procedure

Patients received dual antiplatelet therapy.18–22 Scheme 1: preoperative loading dose of aspirin 300 mg plus ticagrelor 180 mg, followed by a maintenance dose of aspirin 100 mg once daily plus ticagrelor 90 mg twice daily. Scheme 2: preoperative loading dose of aspirin 300 mg plus clopidogrel 300 mg, followed by aspirin 100 mg/day plus clopidogrel 75 mg/day. For special ACS patients, such as emergency PCI patients, comatose patients, and patients with poor cooperation, preoperative dual antiplatelet therapy might not be given and was replaced by continuous intravenous anticoagulation.

After routine sterilization and draping, the radial or femoral artery was selected for puncture and sheath insertion, and CAG was performed according to established protocols; the subsequent imaging, lesion preparation, and DCB or DES implantation steps are those described above.
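The imaging definitions used in this study (plaque burden from vessel and lumen areas, the PB ≥ 70% and MLA ≤ 4.0 mm2 vulnerable-plaque rule, and diameter stenosis) reduce to simple arithmetic; late lumen loss is taken here with its usual definition, post-procedural MLD minus follow-up MLD. The sketch below illustrates these formulas with hypothetical measurements and is not code used in the study.

```python
# Illustrative formulas for the QCA/IVUS parameters used in this study (hypothetical values).

def plaque_burden(vessel_area_mm2: float, lumen_area_mm2: float) -> float:
    """Plaque burden (%) = (vessel area - lumen area) / vessel area * 100."""
    return (vessel_area_mm2 - lumen_area_mm2) / vessel_area_mm2 * 100.0

def is_vulnerable_plaque(pb_pct: float, mla_mm2: float) -> bool:
    """IVUS screening rule used for inclusion: PB >= 70% and MLA <= 4.0 mm^2."""
    return pb_pct >= 70.0 and mla_mm2 <= 4.0

def diameter_stenosis(reference_diameter_mm: float, mld_mm: float) -> float:
    """Diameter stenosis (%) = (RVD - MLD) / RVD * 100."""
    return (reference_diameter_mm - mld_mm) / reference_diameter_mm * 100.0

def late_lumen_loss(mld_post_mm: float, mld_followup_mm: float) -> float:
    """LLL (mm) = post-procedural MLD - follow-up MLD; a negative value means lumen enlargement."""
    return mld_post_mm - mld_followup_mm

pb = plaque_burden(vessel_area_mm2=14.0, lumen_area_mm2=3.5)   # 75.0 %
print(pb, is_vulnerable_plaque(pb, mla_mm2=3.5))               # 75.0 True
print(diameter_stenosis(3.0, 1.2))                             # 60.0 %
print(late_lumen_loss(mld_post_mm=2.4, mld_followup_mm=2.5))   # about -0.1 (enlargement)
```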
Results

Baseline Demographic and Clinical Characteristics

A total of 123 patients were included in this study, of whom 55 received DCB and 68 received DES. As shown in Table 1, there were no statistically significant differences between the DCB and DES groups in average age, sex distribution, ejection fraction, or other clinical characteristics (P > 0.05). Coronary heart disease-related risk factors such as hypertension, hyperlipidemia, diabetes, and smoking history also did not differ significantly between the two groups (P > 0.05). Similarly, there were no significant differences in the clinical diagnoses of ST-elevation myocardial infarction, non-ST-elevation myocardial infarction, and unstable angina pectoris between the two groups (χ2 = 0.92, P > 0.05).

Table 1. Baseline Patient Characteristics of the Study Population. Comparisons use the continuity-corrected chi-square test. DCB: drug-coated balloon; DES: drug-eluting stent; STEMI: ST-segment elevation myocardial infarction; NSTEMI: non-ST-segment elevation myocardial infarction; MI: myocardial infarction; PCI: percutaneous coronary intervention; CABG: coronary artery bypass grafting; LVEF: left ventricular ejection fraction.

Procedural Characteristics of PCI

The culprit vessels in ACS patients were identified by combining preoperative ECG, echocardiography, and intraoperative CAG, and PCI was performed. Target vessel locations included the left anterior descending artery, left circumflex artery, right coronary artery, and ramus intermedius, with no statistically significant differences between the DCB and DES groups (χ2 = 2.83, P > 0.05). According to intraoperative CAG, coronary lesions in the two groups were classified as single-, double-, or triple-vessel disease, and there was no significant difference in multivessel disease between the groups (χ2 = 0.01, P > 0.05). Apart from a significant difference in intraoperative balloon usage between the two groups (χ2 = 7.69, P < 0.05), there were no statistical differences in other procedural characteristics, including implanted device diameter and length, number of patients with intraoperative dissection, number of patients requiring temporary pacemakers, and number of patients requiring an intra-aortic balloon pump (IABP) (P > 0.05). The procedural characteristics of PCI in the two groups are shown in Table 2.

Table 2. Procedural Characteristics of PCI. Comparisons use the continuity-corrected chi-square test; # comparisons use Fisher's exact test. DCB: drug-coated balloon; DES: drug-eluting stent; IABP: intra-aortic balloon pump.

QCA and IVUS Analysis

The preoperative, immediate postoperative, and follow-up results of PCI are shown in Table 3. There were no significant differences in preoperative RVD, MLD, or DS between the two groups (P > 0.05). However, CAG immediately after DCB or DES implantation showed a significantly smaller MLD and a greater degree of stenosis in the DCB group than in the DES group (P < 0.05).
Follow-up CAG after PCI showed no statistically significant differences in MLD, degree of stenosis, or coronary restenosis between the two groups (P > 0.05), except that, compared with the DES group, the DCB group had less late lumen loss and a greater number of lumen enlargements (P < 0.05).

Table 3. Comparison of Coronary QCA and IVUS Results Between the DCB and DES Groups. DCB: drug-coated balloon; DES: drug-eluting stent; CAG: coronary angiography; IVUS: intravascular ultrasound; MLD: minimal lumen diameter; LLL: late lumen loss.

Both groups underwent IVUS before and after PCI. There were no significant differences in vessel area or lumen area between the two groups (P > 0.05). In addition, no statistically significant differences were seen in preoperative plaque burden (79.05 ± 4.17 vs 80.07 ± 4.68) or post-procedural plaque burden (61.69 ± 5.64 vs 63.32 ± 6.07) between the two groups (P > 0.05).

Follow-up MACE

During the follow-up period, there were no significant differences in target lesion revascularization, myocardial infarction, or cardiac death between the two groups (P > 0.05), and survival analysis showed no significant difference in survival between the two groups (log-rank χ2 = 0.01, P > 0.05). Detailed data are shown in Table 4 and Figure 2.

Table 4. Comparison of MACE Between the DCB and DES Groups. Comparisons use the continuity-corrected chi-square test; # comparisons use Fisher's exact test. DCB: drug-coated balloon; DES: drug-eluting stent; MACE: major adverse cardiovascular event; TLR: target lesion revascularization; MI: myocardial infarction.

Limitations

This study has certain limitations. As a retrospective, single-center, small-sample study, it is subject to patient selection bias. Second, although CAG was performed during follow-up in both groups, MLD immediately after PCI had to be assessed with QCA because IVUS imaging was not available at follow-up. These limitations may have affected the results to some extent. To further validate our conclusions, multiple intravascular imaging methods such as optical coherence tomography (OCT), virtual histology IVUS (VH-IVUS), and near-infrared spectroscopy (NIRS) could be used to comprehensively detect and characterize vulnerable plaques and achieve an accurate diagnosis of lesions. In addition, multicenter studies with larger samples and longer follow-up are needed.

Conclusion

In summary, DCB is safe and effective in the treatment of ACS complicated by vulnerable plaque, and DCB has an advantage over DES with respect to LLL. Our work provides practical experience for the interventional treatment of vulnerable plaques in ACS.
[ "Introduction", "Materials and Methods", "Patients", "PCI Procedure", "Analysis Parameters", "Statistical Analysis", "Results", "Baseline Demographic and Clinical Characteristics", "Procedural Characteristics of PCI", "QCA and IVUS Analysis", "Follow-up MACE", "Discussion", "Limitations", "Conclusion" ]
[ "Acute coronary syndromes (ACS) is a serious coronary heart disease that threatens\nhuman health. Coronary plaques in such patients usually have a large plaque burden\nwith lipid-rich necrotic cores, called vulnerable plaques, whose progression and\nrupture lead to unstable angina, acute non-ST elevation myocardial infarction, and\nacute ST-elevation myocardial infarction.1–4 Percutaneous coronary\nintervention is now an effective treatment approach for ACS. PCI can restore normal\nblood flow through myocardial revascularization and thus greatly improves symptoms\nand prognosis in ACS patients.5,6\nCompared with traditional two-dimensional coronary angiography (CAG), new\nintraluminal imaging techniques such as IVUS have more efficacy in reducing\ncomplications after drug-eluting stents placement, decreasing in-stent restenosis\nrate, and improving treatment outcome. Equally important, IVUS can better provide\nintravascular imaging data for coronary vascular lesions with coronary overlapping,\nanatomical abnormalities, aneurysms, myocardial bridges, and calcifications,\nespecially in the identification of coronary culprit vessels. Because most ACS\npatients present with plaque ruptures, an accurate identification of the nature,\nextent and location of plaque ruptures is particularly important for the formulation\nof diagnosis and treatment planning. In this context, IVUS is superior to CAG in\nproviding relevant imaging information and intraprocedural guidance of PCI and\nevaluating postoperative results and clinical prognosis.7,8\nCurrently, DES is the preferred strategy for PCI in patients with ACS. The use of DES\nhas the advantages of inducing intimal hyperplasia, thickening fibrous cap, and\nnormalizing wall stress to reduce plaque rupture. However, subsequent complications\nof DES may occur, such as in-stent restenosis, in-stent hyperplasia, and stent\nfracture, and require long-term dual antiplatelet therapy. In severe cases, there\nmay be no reflow phenomenon after stent placement, rapidly resulting in a larger\narea of myocardium infarction.9–11 DCB are a new\nrevascularization technique that has become the treatment of choice for in-stent\nrestenosis because it meets the new concept of intervention without implantation.\nIndications of DCB also include small vessel disease and bifurcation disease, among\nothers.12–14 However,\nthere are few reports related to the use of DCB in ACS patients with vulnerable\nplaques. Therefore, this study used DES as a parallel treatment to examine the\nsafety and efficacy of DCB in ACS complicated with vulnerable plaques lesions under\nthe guidance of IVUS.", "Patients This retrospectively study investigated 123 patients who were diagnosed with ACS\nand underwent PCI in the Department of Cardiology, the First Affiliated Hospital\nof Zhengzhou University from December 2020 to July 2022, and all patients were\nconfirmed to have vulnerable plaques by IVUS. According to the treatment\nstrategy they received, patients were entered into either the DCB treatment\ngroup (n = 55) or the DES treatment group (n = 61). 
Discussion

DES is the preferred strategy for reperfusion in patients with ACS. However, delayed healing and vascular remodeling of the endothelium often occur after DES implantation because of plaque vulnerability and the hypercoagulable state and persistent inflammation caused by the residual metal, leading to a greatly increased risk of late in-stent thrombosis.10,25 As a newer strategy for the treatment of coronary artery disease, DCB has achieved reliable efficacy in several lesion types, such as in-stent restenosis, owing to its distinctive drug-release behavior and the absence of residual metal. DCB has also been used in ACS lesions, but little is known about how plaque vulnerability affects its efficacy in ACS patients.

At present, there are many ways to identify vulnerable plaques, and each imaging method, whether invasive or non-invasive, has different diagnostic criteria. Gregg et al noted that PB ≥ 70% or MLA ≤ 4.0 mm2 on IVUS are the strongest independent predictors of subsequent MACE after PCI.17,26–30 Therefore, this study adopted the same criterion to define vulnerable plaque. We also analyzed IVUS in the DCB and DES groups before and immediately after PCI and found no significant differences in vessel area, PB, or lumen area between the two groups, indicating that, immediately after implantation, DCB has effects on vulnerable plaques and luminal structure comparable to those of DES in ACS patients. Nevertheless, the long-term efficacy of DCB remains unclear, because patients in both groups did not undergo IVUS during follow-up for economic or other reasons. It has been reported that, during IVUS follow-up of DCB treatment of native vessel lesions, DCB stabilized plaques by reducing PB and altering plaque composition, thereby reducing subsequent adverse events caused by plaque rupture.31,32 Whether this effect of DCB also appears in the treatment of ACS complicated with vulnerable plaques warrants further IVUS follow-up analysis.

In this study, CAG was performed before both DCB and DES implantation, and there were no significant differences in preoperative MLD and DS between the two groups. As for the immediate postoperative CAG results showing a smaller MLD and a greater degree of stenosis in the DCB group than in the DES group, one possible reason is that DCB, as a delivery tool for antiproliferative drugs, lacks the effective scaffolding provided by the metal struts of a DES.
Secondly, in the DES group, a suitably sized high-pressure balloon was used for post-dilation to ensure full contact between the stent and the vessel, allowing the DES to dilate diseased vessels more effectively, reduce elastic recoil, and ultimately prevent lumen stenosis.33 For DCB-PCI, in addition to adequate lesion preparation before implantation to avoid dissection, lumen size and lesions were evaluated by IVUS to minimize the difference in vasodilation compared with DES.

Follow-up CAG showed no significant differences in MLD or DS between the two groups, but the DCB group had smaller LLL and greater lumen enlargement. There are several possible reasons for these results. First, unlike patients in previous studies, patients in this study had a higher PB, and a higher proportion of patients in the DCB group than in the DES group were treated with cutting or spinous (scoring) balloons, which may allow the intimal and medial plaque to be dispersed more completely and uniformly. Second, compared with DES, DCB has a larger contact area with the diseased arterial segment, enabling the antiproliferative drug to diffuse uniformly and rapidly into the vessel wall. Furthermore, the residual metal stent material and long-term sustained drug release in the DES group can lead to delayed endothelialization and an inflammatory response, whereas in the DCB group the arterial vasomotor and dilatation functions remained relatively stable because no metal was left behind, which may explain the late lumen enlargement. Together, these factors resulted in better vascular remodeling and thus less late lumen loss and greater lumen enlargement in the DCB group.34–36 On the other hand, there were no significant differences in the incidence of target lesion revascularization, myocardial infarction, or cardiac death between the two groups during follow-up, also suggesting that DCB is non-inferior to DES in long-term outcomes.
[ "intro", null, null, null, null, null, "results", null, null, null, null, "discussion", null, null ]
[ "acute coronary syndromes", "drug coated balloon", "drug-eluting stent", "vulnerable plaque" ]
Introduction: Acute coronary syndromes (ACS) is a serious coronary heart disease that threatens human health. Coronary plaques in such patients usually have a large plaque burden with lipid-rich necrotic cores, called vulnerable plaques, whose progression and rupture lead to unstable angina, acute non-ST elevation myocardial infarction, and acute ST-elevation myocardial infarction.1–4 Percutaneous coronary intervention is now an effective treatment approach for ACS. PCI can restore normal blood flow through myocardial revascularization and thus greatly improves symptoms and prognosis in ACS patients.5,6 Compared with traditional two-dimensional coronary angiography (CAG), new intraluminal imaging techniques such as IVUS have more efficacy in reducing complications after drug-eluting stents placement, decreasing in-stent restenosis rate, and improving treatment outcome. Equally important, IVUS can better provide intravascular imaging data for coronary vascular lesions with coronary overlapping, anatomical abnormalities, aneurysms, myocardial bridges, and calcifications, especially in the identification of coronary culprit vessels. Because most ACS patients present with plaque ruptures, an accurate identification of the nature, extent and location of plaque ruptures is particularly important for the formulation of diagnosis and treatment planning. In this context, IVUS is superior to CAG in providing relevant imaging information and intraprocedural guidance of PCI and evaluating postoperative results and clinical prognosis.7,8 Currently, DES is the preferred strategy for PCI in patients with ACS. The use of DES has the advantages of inducing intimal hyperplasia, thickening fibrous cap, and normalizing wall stress to reduce plaque rupture. However, subsequent complications of DES may occur, such as in-stent restenosis, in-stent hyperplasia, and stent fracture, and require long-term dual antiplatelet therapy. In severe cases, there may be no reflow phenomenon after stent placement, rapidly resulting in a larger area of myocardium infarction.9–11 DCB are a new revascularization technique that has become the treatment of choice for in-stent restenosis because it meets the new concept of intervention without implantation. Indications of DCB also include small vessel disease and bifurcation disease, among others.12–14 However, there are few reports related to the use of DCB in ACS patients with vulnerable plaques. Therefore, this study used DES as a parallel treatment to examine the safety and efficacy of DCB in ACS complicated with vulnerable plaques lesions under the guidance of IVUS. Materials and Methods: Patients This retrospectively study investigated 123 patients who were diagnosed with ACS and underwent PCI in the Department of Cardiology, the First Affiliated Hospital of Zhengzhou University from December 2020 to July 2022, and all patients were confirmed to have vulnerable plaques by IVUS. According to the treatment strategy they received, patients were entered into either the DCB treatment group (n = 55) or the DES treatment group (n = 61). Figure 1 shows the study flowchart. The flowchart of this study. Inclusion criteria: (1) 18-85 years of age; (2) meeting the diagnostic criteria of the European Heart Association for ACS 15,16(3) having definite culprit vessel; and (4) gray-scale IVUS showing PB ≥ 70% and minimal luminal area (MLA) ≤ 4.0 mm2. 
17 Exclusion criteria: (1) chronic total occlusion; (2) heavily calcified lesions requiring rotational atherectomy; (3) patients with thrombus aspiration; (4) severe valvular insufficiency or valvular stenosis; (5) definite contraindications to antiplatelet drugs or anticoagulants; (6) remedial stent implantation for dissection repair after DCB treatment; (7) severe renal insufficiency (GFR < 30 mL/min) or severe liver insufficiency; and (8) patients who declined DCB and DES implantation. The survival curve of DCB and DES groups This retrospective study investigated 123 patients who were diagnosed with ACS and underwent PCI in the Department of Cardiology, the First Affiliated Hospital of Zhengzhou University from December 2020 to July 2022, and all patients were confirmed to have vulnerable plaques by IVUS. According to the treatment strategy they received, patients were entered into either the DCB treatment group (n = 55) or the DES treatment group (n = 68). Figure 1 shows the study flowchart. The flowchart of this study. Inclusion criteria: (1) 18–85 years of age; (2) meeting the diagnostic criteria of the European Heart Association for ACS;15,16 (3) having a definite culprit vessel; and (4) gray-scale IVUS showing PB ≥ 70% and minimal luminal area (MLA) ≤ 4.0 mm2. 17 Exclusion criteria: (1) chronic total occlusion; (2) heavily calcified lesions requiring rotational atherectomy; (3) patients with thrombus aspiration; (4) severe valvular insufficiency or valvular stenosis; (5) definite contraindications to antiplatelet drugs or anticoagulants; (6) remedial stent implantation for dissection repair after DCB treatment; (7) severe renal insufficiency (GFR < 30 mL/min) or severe liver insufficiency; and (8) patients who declined DCB and DES implantation. The survival curve of DCB and DES groups PCI Procedure Patients received dual antiplatelet therapy.18–22 Scheme 1: preoperative loading dose of aspirin 300 mg + ticagrelor 180 mg, postoperative maintenance dose of aspirin 100 mg + ticagrelor 90 mg, twice a day. Scheme 2: preoperative loading dose of aspirin 300 mg + clopidogrel 300 mg, postoperative maintenance dose of aspirin 100 mg/d + clopidogrel 75 mg/d. For special ACS patients, such as emergency PCI patients, comatose patients, and patients with poor cooperation, preoperative dual antiplatelet therapy might not be given and was replaced by continuous intravenous anticoagulant drugs. After routine sterilization and draping, the radial artery or femoral artery was selected for arterial puncture and sheath insertion. CAG was performed according to established protocols. To clearly display the culprit vessels, an appropriate projection was selected for imaging according to the preoperative electrocardiogram, and multi-position imaging was performed if necessary. Culprit vessels were determined by combining the patient's electrocardiogram, echocardiography, and intraoperative angiography, and graded according to the Thrombolysis in Myocardial Infarction (TIMI) criteria. For TIMI grade < 3 culprit vessels, a compliant balloon was used to pre-dilate the culprit vessel lesions to restore TIMI grade 3 flow. According to the patient's condition, IVUS was performed at the appropriate time. The probe was advanced to the relatively normal segment at the distal end of the culprit vessel and then withdrawn to the relatively normal segment at the proximal end at a constant rate of 0.5 mm/s to obtain intravascular imaging including vessel area, lumen area, and plaque burden.
Before DCB or DES implantation in the culprit vessel, an appropriate compliant balloon, semi-compliant balloon, cutting balloon, or spinous balloon was used to dilate the lesion and reduce culprit stenosis. 23 For DCB implantation, if the degree of vascular stenosis was ≤ 30% and there was no dissection or only type A or B dissection after dilation, a size-matched DCB was chosen based on IVUS imaging and placed in the lesion for sustained drug release before balloon withdrawal. The length of the DCB exceeded the target lesion by at least 5 mm, and the ratio of DCB diameter to reference vessel diameter was 0.8–1.0. The recommended inflation time was at least 40 s at >7 atm. For DES implantation, after pre-dilation, a size-matched DES based on IVUS imaging was placed in the lesion and released. A high-pressure balloon with appropriate dimensions was selected for post-stenting dilation to ensure that the stent was fully expanded and apposed to the vessel wall. IVUS examination was performed after DCB and DES implantation to further evaluate the immediate postoperative efficacy and to guide further treatment of stent malapposition and dissection. At the same time, postoperative IVUS imaging results were recorded. In this study, the DCB was the Bingo drug-coated balloon with a paclitaxel/iohexol matrix coating (Bingo™, Yinyi Biotech, Dalian, China), and IVUS imaging was performed with a dedicated imaging system (H749A70200, Boston Scientific, Shanghai, China). Patients received dual antiplatelet therapy.18–22 Scheme 1: preoperative loading dose of aspirin 300 mg + ticagrelor 180 mg, postoperative maintenance dose of aspirin 100 mg + ticagrelor 90 mg, twice a day. Scheme 2: preoperative loading dose of aspirin 300 mg + clopidogrel 300 mg, postoperative maintenance dose of aspirin 100 mg/d + clopidogrel 75 mg/d. For special ACS patients, such as emergency PCI patients, comatose patients, and patients with poor cooperation, preoperative dual antiplatelet therapy might not be given and was replaced by continuous intravenous anticoagulant drugs. After routine sterilization and draping, the radial artery or femoral artery was selected for arterial puncture and sheath insertion. CAG was performed according to established protocols. To clearly display the culprit vessels, an appropriate projection was selected for imaging according to the preoperative electrocardiogram, and multi-position imaging was performed if necessary. Culprit vessels were determined by combining the patient's electrocardiogram, echocardiography, and intraoperative angiography, and graded according to the Thrombolysis in Myocardial Infarction (TIMI) criteria. For TIMI grade < 3 culprit vessels, a compliant balloon was used to pre-dilate the culprit vessel lesions to restore TIMI grade 3 flow. According to the patient's condition, IVUS was performed at the appropriate time. The probe was advanced to the relatively normal segment at the distal end of the culprit vessel and then withdrawn to the relatively normal segment at the proximal end at a constant rate of 0.5 mm/s to obtain intravascular imaging including vessel area, lumen area, and plaque burden. Before DCB or DES implantation in the culprit vessel, an appropriate compliant balloon, semi-compliant balloon, cutting balloon, or spinous balloon was used to dilate the lesion and reduce culprit stenosis.
23 For DCB implantation, if the degree of vascular stenosis was ≤ 30% and there was no dissection or only type A or B dissection after dilation, a size-matched DCB was chosen based on IVUS imaging and placed in the lesion for sustained drug release before balloon withdrawal. The length of the DCB exceeded the target lesion by at least 5 mm, and the ratio of DCB diameter to reference vessel diameter was 0.8–1.0. The recommended inflation time was at least 40 s at >7 atm. For DES implantation, after pre-dilation, a size-matched DES based on IVUS imaging was placed in the lesion and released. A high-pressure balloon with appropriate dimensions was selected for post-stenting dilation to ensure that the stent was fully expanded and apposed to the vessel wall. IVUS examination was performed after DCB and DES implantation to further evaluate the immediate postoperative efficacy and to guide further treatment of stent malapposition and dissection. At the same time, postoperative IVUS imaging results were recorded. In this study, the DCB was the Bingo drug-coated balloon with a paclitaxel/iohexol matrix coating (Bingo™, Yinyi Biotech, Dalian, China), and IVUS imaging was performed with a dedicated imaging system (H749A70200, Boston Scientific, Shanghai, China). Analysis Parameters Basic patient information was collected, including sex, age, clinical diagnosis, left ventricular ejection fraction (LVEF), history of hypertension, hyperlipidemia, diabetes, smoking, myocardial infarction, previous PCI, previous coronary artery bypass grafting (CABG), and family history of coronary heart disease. Quantitative coronary angiography (QCA) software was used to analyze the CAG data of all patients, including (1) preoperative and immediate postoperative reference vessel diameter (RFD), minimal luminal diameter, and diameter stenosis; and (2) MLD, DS, LLL, restenosis lesions, and lumen enlargement at follow-up. IVUS parameters were also evaluated, including preoperative and immediate postoperative vessel area, plaque burden, and lumen area. Because PB ≥ 70% was the strongest independent predictor of subsequent lesion-related events in the first PROSPECT study, and Gregg et al defined vulnerable plaque with this PB threshold,17,24 this study used the PB criterion to define a vulnerable plaque, and the results of CAG and IVUS were evaluated by two experienced researchers. All patients underwent outpatient or telephone follow-up assessment 1, 3, 6, and 12 months after PCI. Major adverse cardiovascular events during hospitalization and follow-up were target lesion revascularization (TLR), myocardial infarction (MI), and cardiac death. Basic patient information was collected, including sex, age, clinical diagnosis, left ventricular ejection fraction (LVEF), history of hypertension, hyperlipidemia, diabetes, smoking, myocardial infarction, previous PCI, previous coronary artery bypass grafting (CABG), and family history of coronary heart disease. Quantitative coronary angiography (QCA) software was used to analyze the CAG data of all patients, including (1) preoperative and immediate postoperative reference vessel diameter (RFD), minimal luminal diameter, and diameter stenosis; and (2) MLD, DS, LLL, restenosis lesions, and lumen enlargement at follow-up. IVUS parameters were also evaluated, including preoperative and immediate postoperative vessel area, plaque burden, and lumen area.
Because PB ≥ 70% was the strongest independent predictor of subsequent lesion-related events in the first PROSPECT study, and Gregg et al defined vulnerable plaque with this PB threshold,17,24 this study used the PB criterion to define a vulnerable plaque, and the results of CAG and IVUS were evaluated by two experienced researchers. All patients underwent outpatient or telephone follow-up assessment 1, 3, 6, and 12 months after PCI. Major adverse cardiovascular events during hospitalization and follow-up were target lesion revascularization (TLR), myocardial infarction (MI), and cardiac death. Statistical Analysis SPSS version 22.0 (IBM Corporation, Armonk, NY, USA) was used for statistical analysis. Quantitative data that conformed to a normal distribution were analyzed by t-test and expressed as mean ± standard deviation (SD); those that did not conform to a normal distribution were analyzed by the Wilcoxon rank-sum test and expressed as median and interquartile range (M (P25, P75)). Qualitative data were compared using the chi-square test. If an expected frequency (T) was ≤ 5, the continuity-corrected chi-square test or Fisher's exact probability test was used, and data were expressed as number of cases (%). A Kaplan-Meier curve was used to describe survival analysis. Two-tailed P < 0.05 indicated a statistically significant difference. SPSS version 22.0 (IBM Corporation, Armonk, NY, USA) was used for statistical analysis. Quantitative data that conformed to a normal distribution were analyzed by t-test and expressed as mean ± standard deviation (SD); those that did not conform to a normal distribution were analyzed by the Wilcoxon rank-sum test and expressed as median and interquartile range (M (P25, P75)). Qualitative data were compared using the chi-square test. If an expected frequency (T) was ≤ 5, the continuity-corrected chi-square test or Fisher's exact probability test was used, and data were expressed as number of cases (%). A Kaplan-Meier curve was used to describe survival analysis. Two-tailed P < 0.05 indicated a statistically significant difference. Patients: This retrospective study investigated 123 patients who were diagnosed with ACS and underwent PCI in the Department of Cardiology, the First Affiliated Hospital of Zhengzhou University from December 2020 to July 2022, and all patients were confirmed to have vulnerable plaques by IVUS. According to the treatment strategy they received, patients were entered into either the DCB treatment group (n = 55) or the DES treatment group (n = 68). Figure 1 shows the study flowchart. The flowchart of this study. Inclusion criteria: (1) 18–85 years of age; (2) meeting the diagnostic criteria of the European Heart Association for ACS;15,16 (3) having a definite culprit vessel; and (4) gray-scale IVUS showing PB ≥ 70% and minimal luminal area (MLA) ≤ 4.0 mm2. 17 Exclusion criteria: (1) chronic total occlusion; (2) heavily calcified lesions requiring rotational atherectomy; (3) patients with thrombus aspiration; (4) severe valvular insufficiency or valvular stenosis; (5) definite contraindications to antiplatelet drugs or anticoagulants; (6) remedial stent implantation for dissection repair after DCB treatment; (7) severe renal insufficiency (GFR < 30 mL/min) or severe liver insufficiency; and (8) patients who declined DCB and DES implantation. The survival curve of DCB and DES groups PCI Procedure: Patients received dual antiplatelet therapy.18–22 Scheme 1: preoperative loading dose of aspirin 300 mg + ticagrelor 180 mg, postoperative maintenance dose of aspirin 100 mg + ticagrelor 90 mg, twice a day.
Scheme 2: preoperative loading dose of aspirin 300 mg + clopidogrel 300 mg, postoperative maintenance dose of aspirin 100 mg/d + clopidogrel 75 mg/d. For special ACS patients, such as emergency PCI patients, comatose patients, and patients with poor cooperation, preoperative dual antiplatelet therapy might not be given and was replaced by continuous intravenous anticoagulant drugs. After routine sterilization and draping, the radial artery or femoral artery was selected for arterial puncture and sheath insertion. CAG was performed according to established protocols. To clearly display the culprit vessels, an appropriate projection was selected for imaging according to the preoperative electrocardiogram, and multi-position imaging was performed if necessary. Culprit vessels were determined by combining the patient's electrocardiogram, echocardiography, and intraoperative angiography, and graded according to the Thrombolysis in Myocardial Infarction (TIMI) criteria. For TIMI grade < 3 culprit vessels, a compliant balloon was used to pre-dilate the culprit vessel lesions to restore TIMI grade 3 flow. According to the patient's condition, IVUS was performed at the appropriate time. The probe was advanced to the relatively normal segment at the distal end of the culprit vessel and then withdrawn to the relatively normal segment at the proximal end at a constant rate of 0.5 mm/s to obtain intravascular imaging including vessel area, lumen area, and plaque burden. Before DCB or DES implantation in the culprit vessel, an appropriate compliant balloon, semi-compliant balloon, cutting balloon, or spinous balloon was used to dilate the lesion and reduce culprit stenosis. 23 For DCB implantation, if the degree of vascular stenosis was ≤ 30% and there was no dissection or only type A or B dissection after dilation, a size-matched DCB was chosen based on IVUS imaging and placed in the lesion for sustained drug release before balloon withdrawal. The length of the DCB exceeded the target lesion by at least 5 mm, and the ratio of DCB diameter to reference vessel diameter was 0.8–1.0. The recommended inflation time was at least 40 s at >7 atm. For DES implantation, after pre-dilation, a size-matched DES based on IVUS imaging was placed in the lesion and released. A high-pressure balloon with appropriate dimensions was selected for post-stenting dilation to ensure that the stent was fully expanded and apposed to the vessel wall. IVUS examination was performed after DCB and DES implantation to further evaluate the immediate postoperative efficacy and to guide further treatment of stent malapposition and dissection. At the same time, postoperative IVUS imaging results were recorded. In this study, the DCB was the Bingo drug-coated balloon with a paclitaxel/iohexol matrix coating (Bingo™, Yinyi Biotech, Dalian, China), and IVUS imaging was performed with a dedicated imaging system (H749A70200, Boston Scientific, Shanghai, China). Analysis Parameters: Basic patient information was collected, including sex, age, clinical diagnosis, left ventricular ejection fraction (LVEF), history of hypertension, hyperlipidemia, diabetes, smoking, myocardial infarction, previous PCI, previous coronary artery bypass grafting (CABG), and family history of coronary heart disease.
Quantitative coronary angiography (QCA) software was used to analyze the CAG data of all patients, including (1) preoperative and immediate postoperative reference vessel diameter (RFD), minimal luminal diameter, and diameter stenosis; and (2) MLD, DS, LLL, restenosis lesions, and lumen enlargement at follow-up. IVUS parameters were also evaluated, including preoperative and immediate postoperative vessel area, plaque burden, and lumen area. Because PB ≥ 70% was the strongest independent predictor of subsequent lesion-related events in the first PROSPECT study, and Gregg et al defined vulnerable plaque with this PB threshold,17,24 this study used the PB criterion to define a vulnerable plaque, and the results of CAG and IVUS were evaluated by two experienced researchers. All patients underwent outpatient or telephone follow-up assessment 1, 3, 6, and 12 months after PCI. Major adverse cardiovascular events during hospitalization and follow-up were target lesion revascularization (TLR), myocardial infarction (MI), and cardiac death. Statistical Analysis: SPSS version 22.0 (IBM Corporation, Armonk, NY, USA) was used for statistical analysis. Quantitative data that conformed to a normal distribution were analyzed by t-test and expressed as mean ± standard deviation (SD); those that did not conform to a normal distribution were analyzed by the Wilcoxon rank-sum test and expressed as median and interquartile range (M (P25, P75)). Qualitative data were compared using the chi-square test. If an expected frequency (T) was ≤ 5, the continuity-corrected chi-square test or Fisher's exact probability test was used, and data were expressed as number of cases (%). A Kaplan-Meier curve was used to describe survival analysis. Two-tailed P < 0.05 indicated a statistically significant difference. Results: Baseline Demographic and Clinical Characteristics A total of 123 patients were included in this study, of whom 55 received DCB and 68 received DES. As shown in Table 1, there were no statistically significant differences between the DCB and the DES groups in terms of average age, gender distribution, ejection fraction, and other clinical characteristics (P > 0.05). Coronary heart disease-related risk factors such as hypertension, hyperlipidemia, diabetes, and smoking history were also not significantly different between the two groups (P > 0.05).
Similarly, there were no significant differences in clinical diagnosis of ST-elevation myocardial infarction, non-ST-elevation myocardial infarction, and unstable angina pectoris between the two groups (X2 = 0.92,P>0.05). Baseline Patient Characteristics of the Study Population. Comparing two data sets using the continuity corrected chi-square test; DCB: drug coated balloon, DES: drug-eluting stent; STEMI :ST-segment elevation myocardial infarction, NSTEMI: non-STEMI; MI :myocardial infarction; PCI :percutaneous coronary intervention; CABG: coronary artery bypass grafting; LVEF: Left Ventricle ejection Procedural Characteristics of PCI By combining preoperative ECG and echocardiography and intraoperative CAG results, the culprit vessels in ACS patients were diagnosed and PCI was performed. The target vessel locations included left anterior descending artery, left circumflex artery, right coronary artery, and ramus coronary artery, and there were no statistically significant differences in target vessel locations between the DCB and DES treatment groups(X2 = 2.83,P>0.05). According to intraoperative CAG, coronary artery lesions in the two groups were divided into single vessel disease, double vessel disease, and triple vessel disease, and there were no significant differences in multivessel disease between the two groups (X2 = 0.01,P>0.05). Except for the significant difference in intraoperative balloon usage between the two groups (X2 = 7.69,P<0.05), there were no statistical differences in other PCI procedural characteristics, including implantation diameter and length, number of patients with intraoperative dissection, number of patients with temporary pacemakers, and number of patients using intra-aortic balloon pump(IABP)(P > 0.05). The procedural characteristics of PCI in the two groups are shown in Table 2. Procedural Characteristics of PCI. Comparing two data sets using the continuity corrected chi-square test #Comparing two data sets using the Fisher's exact probability test; DCB: drug coated balloon, DES: drug-eluting stent; IABP: intra-aortic balloon pump By combining preoperative ECG and echocardiography and intraoperative CAG results, the culprit vessels in ACS patients were diagnosed and PCI was performed. The target vessel locations included left anterior descending artery, left circumflex artery, right coronary artery, and ramus coronary artery, and there were no statistically significant differences in target vessel locations between the DCB and DES treatment groups(X2 = 2.83,P>0.05). According to intraoperative CAG, coronary artery lesions in the two groups were divided into single vessel disease, double vessel disease, and triple vessel disease, and there were no significant differences in multivessel disease between the two groups (X2 = 0.01,P>0.05). Except for the significant difference in intraoperative balloon usage between the two groups (X2 = 7.69,P<0.05), there were no statistical differences in other PCI procedural characteristics, including implantation diameter and length, number of patients with intraoperative dissection, number of patients with temporary pacemakers, and number of patients using intra-aortic balloon pump(IABP)(P > 0.05). The procedural characteristics of PCI in the two groups are shown in Table 2. Procedural Characteristics of PCI. 
Comparing two data sets using the continuity corrected chi-square test #Comparing two data sets using the Fisher's exact probability test; DCB: drug coated balloon, DES: drug-eluting stent; IABP: intra-aortic balloon pump QCA and IVUS Analysis The preoperative, postoperative and follow-up results of PCI are shown in Table 3. There were no significant differences in preoperative RFD, MLD and DS between the two groups (P > 0.05). However, CAG immediately after DCB or DES implantation showed significantly smaller MLD and greater degree of stenosis in the DCB group than in the DES group (P < 0.05). Further follow-up CAG after PCI revealed that there were no statistically significant differences in MLD, stenosis degree, and coronary restenosis between the two groups (P > 0.05), except that compared to the DES group, the DCB group had less LLL in lumen diameter and a greater number of lumen enlargements (P < 0.05). Comparison of Coronary QCA and IVUS Results Between DCB and DES Groups. DCB: drug coated balloon, DES: drug-eluting stent; CAG: coronary artery angiography; IVUS: intravascular ultrasound; MLD: minimal lumen diameter; LLL: late luminal loss; Both groups underwent IVUS before and after PCI. The results showed that there were no significant differences in vessel area and lumen area between the two groups (P > 0.05). In addition, no statistically significant differences were seen in preoperative plaque burden (79.05 ± 4.17 VS 80.07 ± 4.68)and postprocedural plaque burden (61.69 ± 5.64 VS 63.32 ± 6.07)between the two groups (P > 0.05). The preoperative, postoperative and follow-up results of PCI are shown in Table 3. There were no significant differences in preoperative RFD, MLD and DS between the two groups (P > 0.05). However, CAG immediately after DCB or DES implantation showed significantly smaller MLD and greater degree of stenosis in the DCB group than in the DES group (P < 0.05). Further follow-up CAG after PCI revealed that there were no statistically significant differences in MLD, stenosis degree, and coronary restenosis between the two groups (P > 0.05), except that compared to the DES group, the DCB group had less LLL in lumen diameter and a greater number of lumen enlargements (P < 0.05). Comparison of Coronary QCA and IVUS Results Between DCB and DES Groups. DCB: drug coated balloon, DES: drug-eluting stent; CAG: coronary artery angiography; IVUS: intravascular ultrasound; MLD: minimal lumen diameter; LLL: late luminal loss; Both groups underwent IVUS before and after PCI. The results showed that there were no significant differences in vessel area and lumen area between the two groups (P > 0.05). In addition, no statistically significant differences were seen in preoperative plaque burden (79.05 ± 4.17 VS 80.07 ± 4.68)and postprocedural plaque burden (61.69 ± 5.64 VS 63.32 ± 6.07)between the two groups (P > 0.05). Follow-up MACE During the follow-up period, there were no significant differences in target lesion revascularization, myocardial infarction, and cardiac death between the two groups (P > 0.05). And survival analysis shows that there were no significant differences in the survival rate of the two groups (Log-rank,X2 = 0.01,P > 0.05). The detailed data are shown in Table 4 and Figure 2. Comparison of MACE Between DCB and DES Groups. 
Comparing two data sets using the continuity corrected chi-square test; #Comparing two data sets using the Fisher's exact probability test; DCB: drug coated balloon, DES: drug-eluting stent; MACE: major adverse cardiovascular event; TLR: target lesion revascularization; MI: myocardial infarction. During the follow-up period, there were no significant differences in target lesion revascularization, myocardial infarction, and cardiac death between the two groups (P > 0.05). And survival analysis shows that there were no significant differences in the survival rate of the two groups (Log-rank,X2 = 0.01,P > 0.05). The detailed data are shown in Table 4 and Figure 2. Comparison of MACE Between DCB and DES Groups. Comparing two data sets using the continuity corrected chi-square test; #Comparing two data sets using the Fisher's exact probability test; DCB: drug coated balloon, DES: drug-eluting stent; MACE: major adverse cardiovascular event; TLR: target lesion revascularization; MI: myocardial infarction. Baseline Demographic and Clinical Characteristics: A total of 123 patients were included in this study, of whom 55 received DCB and 68 received DES. As shown in Table 1, there were no statistically significant differences between the DCB and the DES groups in terms of average age, gender distribution, ejection fraction, and other clinical characteristics (P > 0.05). Coronary heart disease-related risk factors such as hypertension, hyperlipidemia, diabetes, and smoking history were also not significantly different between the two groups (P > 0.05). Similarly, there were no significant differences in clinical diagnosis of ST-elevation myocardial infarction, non-ST-elevation myocardial infarction, and unstable angina pectoris between the two groups (X2 = 0.92,P>0.05). Baseline Patient Characteristics of the Study Population. Comparing two data sets using the continuity corrected chi-square test; DCB: drug coated balloon, DES: drug-eluting stent; STEMI :ST-segment elevation myocardial infarction, NSTEMI: non-STEMI; MI :myocardial infarction; PCI :percutaneous coronary intervention; CABG: coronary artery bypass grafting; LVEF: Left Ventricle ejection Procedural Characteristics of PCI: By combining preoperative ECG and echocardiography and intraoperative CAG results, the culprit vessels in ACS patients were diagnosed and PCI was performed. The target vessel locations included left anterior descending artery, left circumflex artery, right coronary artery, and ramus coronary artery, and there were no statistically significant differences in target vessel locations between the DCB and DES treatment groups(X2 = 2.83,P>0.05). According to intraoperative CAG, coronary artery lesions in the two groups were divided into single vessel disease, double vessel disease, and triple vessel disease, and there were no significant differences in multivessel disease between the two groups (X2 = 0.01,P>0.05). Except for the significant difference in intraoperative balloon usage between the two groups (X2 = 7.69,P<0.05), there were no statistical differences in other PCI procedural characteristics, including implantation diameter and length, number of patients with intraoperative dissection, number of patients with temporary pacemakers, and number of patients using intra-aortic balloon pump(IABP)(P > 0.05). The procedural characteristics of PCI in the two groups are shown in Table 2. Procedural Characteristics of PCI. 
Comparing two data sets using the continuity corrected chi-square test #Comparing two data sets using the Fisher's exact probability test; DCB: drug coated balloon, DES: drug-eluting stent; IABP: intra-aortic balloon pump QCA and IVUS Analysis: The preoperative, postoperative and follow-up results of PCI are shown in Table 3. There were no significant differences in preoperative RFD, MLD and DS between the two groups (P > 0.05). However, CAG immediately after DCB or DES implantation showed significantly smaller MLD and greater degree of stenosis in the DCB group than in the DES group (P < 0.05). Further follow-up CAG after PCI revealed that there were no statistically significant differences in MLD, stenosis degree, and coronary restenosis between the two groups (P > 0.05), except that compared to the DES group, the DCB group had less LLL in lumen diameter and a greater number of lumen enlargements (P < 0.05). Comparison of Coronary QCA and IVUS Results Between DCB and DES Groups. DCB: drug coated balloon, DES: drug-eluting stent; CAG: coronary artery angiography; IVUS: intravascular ultrasound; MLD: minimal lumen diameter; LLL: late luminal loss; Both groups underwent IVUS before and after PCI. The results showed that there were no significant differences in vessel area and lumen area between the two groups (P > 0.05). In addition, no statistically significant differences were seen in preoperative plaque burden (79.05 ± 4.17 VS 80.07 ± 4.68)and postprocedural plaque burden (61.69 ± 5.64 VS 63.32 ± 6.07)between the two groups (P > 0.05). Follow-up MACE: During the follow-up period, there were no significant differences in target lesion revascularization, myocardial infarction, and cardiac death between the two groups (P > 0.05). And survival analysis shows that there were no significant differences in the survival rate of the two groups (Log-rank,X2 = 0.01,P > 0.05). The detailed data are shown in Table 4 and Figure 2. Comparison of MACE Between DCB and DES Groups. Comparing two data sets using the continuity corrected chi-square test; #Comparing two data sets using the Fisher's exact probability test; DCB: drug coated balloon, DES: drug-eluting stent; MACE: major adverse cardiovascular event; TLR: target lesion revascularization; MI: myocardial infarction. Discussion: DES is the preferred strategy for reperfusion in patients with ACS. However, delayed healing and vascular remodeling of the vascular endothelium often occur after DES implantation due to the vulnerability of the plaque and the hypercoagulable state and persistent inflammation caused by residual metals, leading to a greatly increased risk of late in-stent thrombosis.10,25 As a new strategy for the treatment of coronary artery disease, DCB has achieved reliable efficacy in various types of coronary artery disease such as in-stent stenosis due to its unique drug release behavior and no residual metal substances. Similarly, DCB has also been used in ACS lesions, but little is known about the effect of plaque vulnerability on the efficacy of DCB in ACS patients. At present, there are many ways to identify vulnerable plaques, and each imaging method, whether invasive or non-invasive, has different diagnostic criteria. Gregg et al noted that PB ≥ 70% or MLA ≤ 4.0 mm2 on IVUS are the strongest independent predictors of subsequent MACE after PCI.17,26–30 Therefore, this study similarly adopted this criterion to define vulnerable plaque. We also analyzed IVUS in the DCB and DES groups before and immediately after PCI. 
We found no significant differences in vessel area, PB, and lumen area between the two groups, indicating that DCB has comparable effects on vulnerable plaques and luminal structure immediately after implantation in ACS patients compared to DES. Nevertheless, the long-term efficacy of DCB is unclear, as patients in both groups did not undergo IVUS during follow-up for economic or other reasons. It has been reported that during IVUS follow-up of DCB in the treatment of native vessel lesions, DCB was found to stabilize plaques by reducing PB and altering plaque composition, thereby reducing subsequent adverse events caused by plaque rupture. 31,32 Whether this effect of DCB appears in the treatment of ACS complicated with vulnerable plaques warrants further IVUS follow-up analysis. In this study, CAG was performed before both DCB and DES implantation, and there were no significant differences in preoperative MLD and DS between the two groups. For the immediate postoperative CAG results showing that the DCB group had a smaller MLD and greater degree of stenosis than the DES group, one possible reason is that DCB, as a transport tool for antiproliferative drugs, lacks the effective support of metal nets like DES. Secondly, in the DES group, to ensure the full contact between the stent and the blood vessel, a high-pressure balloon of suitable size was used for post-dilation, which can allow the DES to dilate diseased vessels more effectively, reduce the elastic recoil of vessels, and ultimately prevent lumen stenosis. 33 For DCB-PCI, in addition to adequate pretreatment prior to implantation to avoid the occurrence of dissection, the lumen size and lesions had been evaluated by IVUS to minimize the difference in vasodilation with DES. Follow-up CAG results showed that there were no significant differences in MLD and DS between the two groups, but the DCB group had smaller LLL and greater lumen enlargement. There are multiple reasons for these results. First, unlike patients in previous studies, patients in this study had a higher PB and a higher proportion of patients in the DCB group used cutting balloons or spinous balloons than in the DES group, which may allow for a more complete and uniform dispersion of the internal and medial plaques. Second, compared to DES, DCB has a larger contact area with the diseased arterial segment, enabling the antiproliferative drugs to diffuse uniformly and rapidly to the diseased vessel. Further, the residual metal stent material and long-term sustained release of anti-proliferative drugs in the DES group could lead to delayed endothelialization and inflammatory response, while in the DCB group, the arterial vasomotor and dilatation functions were relatively stable because there was no metal residual problem, which may be the reason for the late lumen enlargement. All these factors together resulted in better vascular remodeling and then less lumen loss diameter and greater lumen enlargement in the DCB group.34–36 On the other hand, there were no significant differences in the incidence of target lesion revascularization, myocardial infarction, and cardiac death between the two groups during follow-up, also suggesting that DCB is non-inferior to DES in long-term outcomes. Limitations: This study has certain limitations. As a retrospective, single-center, and small-sample study, there may be patient selection bias. 
Second, although CAG was performed during follow-up in both groups, we had to assess MLD using QCA immediately after PCI due to the lack of IVUS imaging during follow-up. These limitations may have affected the results to some extent. Therefore, to further validate our conclusions, multiple intravascular imaging methods such as optical coherence tomography (OCT), virtual histology IVUS (VH-IVUS), and near-infrared spectroscopy (NIRS) could be used to comprehensively detect and identify vulnerable plaques and achieve accurate diagnosis of lesions. In addition, multicenter studies with larger samples and longer follow-up are needed. Conclusion: In summary, DCB is safe and effective in the treatment of ACS complicated with vulnerable plaque, and DCB has an advantage over DES in terms of LLL. Our work provides practical experience in the interventional treatment of vulnerable plaques in ACS.
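The statistical comparisons described in the Methods (t-test or Wilcoxon rank-sum test for continuous variables, chi-square or Fisher's exact test for categorical variables, and Kaplan-Meier analysis with a log-rank test for survival) could also be reproduced outside SPSS. The following is a minimal, illustrative sketch in Python; the data file and column names ('group', 'lll', 'restenosis', 'followup_months', 'mace') are assumptions for demonstration and are not part of the original study.

```python
# Minimal sketch (not the authors' SPSS workflow): reproducing the reported
# comparison strategy on a hypothetical patient table.
import pandas as pd
from scipy import stats
from lifelines.statistics import logrank_test

df = pd.read_csv("cohort.csv")            # hypothetical data file
dcb = df[df["group"] == "DCB"]
des = df[df["group"] == "DES"]

# Continuous variable: t-test if roughly normal, otherwise Wilcoxon rank-sum
# (Mann-Whitney U for two independent samples).
normal = all(stats.shapiro(g["lll"]).pvalue > 0.05 for g in (dcb, des))
if normal:
    stat, p = stats.ttest_ind(dcb["lll"], des["lll"])
else:
    stat, p = stats.mannwhitneyu(dcb["lll"], des["lll"])
print(f"LLL comparison: p = {p:.3f}")

# Categorical variable: continuity-corrected chi-square, or Fisher's exact
# test when an expected cell count is <= 5.
table = pd.crosstab(df["group"], df["restenosis"]).to_numpy()
chi2, p, dof, expected = stats.chi2_contingency(table, correction=True)
if (expected <= 5).any():
    _, p = stats.fisher_exact(table)
print(f"Restenosis comparison: p = {p:.3f}")

# MACE-free survival: Kaplan-Meier comparison via the log-rank test.
lr = logrank_test(dcb["followup_months"], des["followup_months"],
                  event_observed_A=dcb["mace"], event_observed_B=des["mace"])
print(f"Log-rank: chi2 = {lr.test_statistic:.2f}, p = {lr.p_value:.3f}")
```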
Background: Percutaneous coronary intervention (PCI) is the main treatment option for acute coronary syndromes (ACS), which are often related to the progression and rupture of vulnerable plaques. While drug-eluting stents (DES) are now routinely used in PCI, drug-coated balloons (DCB) are a newer PCI strategy, and their use in the treatment of ACS with vulnerable plaques has not been reported. This study aimed to evaluate the safety and efficacy of DCB in ACS complicated with vulnerable plaque lesions. Methods: A total of 123 patients diagnosed with ACS who underwent PCI in our Cardiology Department from December 2020 to July 2022 were retrospectively analyzed. Vulnerable plaques were confirmed by intravascular ultrasound (IVUS) in all patients. According to the individual treatment plan, patients were entered into either the DCB (n = 55) or the DES (n = 68) group. The results of coronary angiography and IVUS before and immediately after percutaneous coronary intervention were analyzed. The occurrence of major adverse cardiovascular events (MACE) and the results of coronary angiography were also evaluated during follow-up. Results: There were no significant differences in baseline clinical characteristics, preoperative minimal luminal diameter (MLD), or preoperative diameter stenosis (DS) between the two groups. Also, there were no differences in IVUS plaque burden (PB), vessel area, or lumen area in the two groups before and immediately after PCI. The efficacy analysis showed that immediately after PCI, the DCB group had a smaller MLD and a higher degree of lumen stenosis than the DES group (P < 0.05). However, during follow-up, no significant differences in MLD and DS were seen between the two groups; moreover, late loss in luminal diameter (LLL) in the DCB group was smaller (P < 0.05). Safety analysis showed that during follow-up, 9 patients developed restenosis after DCB implantation while restenosis occurred in 10 patients with DES treatment, with no statistically significant difference in the incidence of restenosis between the two groups. Besides, there was no statistically significant difference in the incidence of major adverse cardiac events (MACE) during hospitalization and follow-up between the DCB group (7.3% (4/55)) and the DES group (8.8% (6/68)). Conclusions: DCB is safe and effective for ACS complicated with vulnerable plaque and has an advantage over DES in LLL.
Introduction: Acute coronary syndromes (ACS) are serious forms of coronary heart disease that threaten human health. Coronary plaques in such patients usually have a large plaque burden with lipid-rich necrotic cores, called vulnerable plaques, whose progression and rupture lead to unstable angina, acute non-ST-elevation myocardial infarction, and acute ST-elevation myocardial infarction.1–4 Percutaneous coronary intervention is now an effective treatment approach for ACS. PCI can restore normal blood flow through myocardial revascularization and thus greatly improves symptoms and prognosis in ACS patients.5,6 Compared with traditional two-dimensional coronary angiography (CAG), newer intraluminal imaging techniques such as IVUS are more effective in reducing complications after drug-eluting stent placement, decreasing the in-stent restenosis rate, and improving treatment outcomes. Equally important, IVUS can better provide intravascular imaging data for coronary vascular lesions with vessel overlap, anatomical abnormalities, aneurysms, myocardial bridges, and calcifications, and is especially useful in the identification of coronary culprit vessels. Because most ACS patients present with plaque rupture, accurate identification of the nature, extent, and location of plaque rupture is particularly important for diagnosis and treatment planning. In this context, IVUS is superior to CAG in providing relevant imaging information, guiding PCI intraprocedurally, and evaluating postoperative results and clinical prognosis.7,8 Currently, DES is the preferred strategy for PCI in patients with ACS. The use of DES has the advantages of inducing intimal hyperplasia, thickening the fibrous cap, and normalizing wall stress to reduce plaque rupture. However, subsequent complications of DES may occur, such as in-stent restenosis, in-stent hyperplasia, and stent fracture, and DES requires long-term dual antiplatelet therapy. In severe cases, there may be a no-reflow phenomenon after stent placement, rapidly resulting in a larger area of myocardial infarction.9–11 DCB is a newer revascularization technique that has become the treatment of choice for in-stent restenosis because it follows the concept of intervention without implantation. Indications for DCB also include small vessel disease and bifurcation disease, among others.12–14 However, there are few reports on the use of DCB in ACS patients with vulnerable plaques. Therefore, this study used DES as a parallel treatment to examine the safety and efficacy of DCB in ACS complicated with vulnerable plaque lesions under the guidance of IVUS. Conclusion: In summary, DCB is safe and effective in the treatment of ACS complicated with vulnerable plaque, and DCB has an advantage over DES in terms of LLL. Our work provides practical experience in the interventional treatment of vulnerable plaques in ACS.
Background: Percutaneous coronary intervention (PCI) is the main treatment option for acute coronary syndromes (ACS), which are often related to the progression and rupture of vulnerable plaques. While drug-eluting stents (DES) are now routinely used in PCI, drug-coated balloons (DCB) are a newer PCI strategy, and their use in the treatment of ACS with vulnerable plaques has not been reported. This study aimed to evaluate the safety and efficacy of DCB in ACS complicated with vulnerable plaque lesions. Methods: A total of 123 patients diagnosed with ACS who underwent PCI in our Cardiology Department from December 2020 to July 2022 were retrospectively analyzed. Vulnerable plaques were confirmed by intravascular ultrasound (IVUS) in all patients. According to the individual treatment plan, patients were entered into either the DCB (n = 55) or the DES (n = 68) group. The results of coronary angiography and IVUS before and immediately after percutaneous coronary intervention were analyzed. The occurrence of major adverse cardiovascular events (MACE) and the results of coronary angiography were also evaluated during follow-up. Results: There were no significant differences in baseline clinical characteristics, preoperative minimal luminal diameter (MLD), or preoperative diameter stenosis (DS) between the two groups. Also, there were no differences in IVUS plaque burden (PB), vessel area, or lumen area in the two groups before and immediately after PCI. The efficacy analysis showed that immediately after PCI, the DCB group had a smaller MLD and a higher degree of lumen stenosis than the DES group (P < 0.05). However, during follow-up, no significant differences in MLD and DS were seen between the two groups; moreover, late loss in luminal diameter (LLL) in the DCB group was smaller (P < 0.05). Safety analysis showed that during follow-up, 9 patients developed restenosis after DCB implantation while restenosis occurred in 10 patients with DES treatment, with no statistically significant difference in the incidence of restenosis between the two groups. Besides, there was no statistically significant difference in the incidence of major adverse cardiac events (MACE) during hospitalization and follow-up between the DCB group (7.3% (4/55)) and the DES group (8.8% (6/68)). Conclusions: DCB is safe and effective for ACS complicated with vulnerable plaque and has an advantage over DES in LLL.
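As a quick plausibility check on the restenosis comparison reported in this abstract (9 of 55 DCB patients vs 10 of 68 DES patients), the corresponding 2 x 2 table can be tested directly. The snippet below is illustrative only and simply confirms that these proportions are far from a conventional significance threshold.

```python
from scipy.stats import fisher_exact

# Rows: DCB, DES; columns: restenosis, no restenosis (counts from the abstract)
table = [[9, 55 - 9], [10, 68 - 10]]
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```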
8,389
444
[ 2640, 272, 620, 268, 153, 225, 273, 288, 155, 152, 46 ]
14
[ "dcb", "des", "patients", "groups", "ivus", "vessel", "05", "pci", "coronary", "balloon" ]
[ "coronary vascular lesions", "imaging data coronary", "health coronary plaques", "quantitative coronary angiography", "acute coronary syndromes" ]
null
[CONTENT] acute coronary syndromes | drug coated balloon | drug-eluting stent | vulnerable plaque [SUMMARY]
null
[CONTENT] acute coronary syndromes | drug coated balloon | drug-eluting stent | vulnerable plaque [SUMMARY]
[CONTENT] acute coronary syndromes | drug coated balloon | drug-eluting stent | vulnerable plaque [SUMMARY]
[CONTENT] acute coronary syndromes | drug coated balloon | drug-eluting stent | vulnerable plaque [SUMMARY]
[CONTENT] acute coronary syndromes | drug coated balloon | drug-eluting stent | vulnerable plaque [SUMMARY]
[CONTENT] Acute Coronary Syndrome | Angioplasty, Balloon, Coronary | Constriction, Pathologic | Coronary Angiography | Coronary Artery Disease | Coronary Restenosis | Drug-Eluting Stents | Humans | Percutaneous Coronary Intervention | Retrospective Studies | Treatment Outcome [SUMMARY]
null
[CONTENT] Acute Coronary Syndrome | Angioplasty, Balloon, Coronary | Constriction, Pathologic | Coronary Angiography | Coronary Artery Disease | Coronary Restenosis | Drug-Eluting Stents | Humans | Percutaneous Coronary Intervention | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Acute Coronary Syndrome | Angioplasty, Balloon, Coronary | Constriction, Pathologic | Coronary Angiography | Coronary Artery Disease | Coronary Restenosis | Drug-Eluting Stents | Humans | Percutaneous Coronary Intervention | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Acute Coronary Syndrome | Angioplasty, Balloon, Coronary | Constriction, Pathologic | Coronary Angiography | Coronary Artery Disease | Coronary Restenosis | Drug-Eluting Stents | Humans | Percutaneous Coronary Intervention | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Acute Coronary Syndrome | Angioplasty, Balloon, Coronary | Constriction, Pathologic | Coronary Angiography | Coronary Artery Disease | Coronary Restenosis | Drug-Eluting Stents | Humans | Percutaneous Coronary Intervention | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] coronary vascular lesions | imaging data coronary | health coronary plaques | quantitative coronary angiography | acute coronary syndromes [SUMMARY]
null
[CONTENT] coronary vascular lesions | imaging data coronary | health coronary plaques | quantitative coronary angiography | acute coronary syndromes [SUMMARY]
[CONTENT] coronary vascular lesions | imaging data coronary | health coronary plaques | quantitative coronary angiography | acute coronary syndromes [SUMMARY]
[CONTENT] coronary vascular lesions | imaging data coronary | health coronary plaques | quantitative coronary angiography | acute coronary syndromes [SUMMARY]
[CONTENT] coronary vascular lesions | imaging data coronary | health coronary plaques | quantitative coronary angiography | acute coronary syndromes [SUMMARY]
[CONTENT] dcb | des | patients | groups | ivus | vessel | 05 | pci | coronary | balloon [SUMMARY]
null
[CONTENT] dcb | des | patients | groups | ivus | vessel | 05 | pci | coronary | balloon [SUMMARY]
[CONTENT] dcb | des | patients | groups | ivus | vessel | 05 | pci | coronary | balloon [SUMMARY]
[CONTENT] dcb | des | patients | groups | ivus | vessel | 05 | pci | coronary | balloon [SUMMARY]
[CONTENT] dcb | des | patients | groups | ivus | vessel | 05 | pci | coronary | balloon [SUMMARY]
[CONTENT] coronary | acs | stent restenosis | acute | stent | new | treatment | plaques | patients | plaque [SUMMARY]
null
[CONTENT] 05 | groups | differences | significant differences | significant | des | dcb | characteristics | coronary | groups 05 [SUMMARY]
[CONTENT] vulnerable | acs | treatment | dcb safe | des lll work provides | summary | practical | practical experience | practical experience interventional | safe [SUMMARY]
[CONTENT] dcb | des | groups | 05 | patients | ivus | coronary | differences | significant | vessel [SUMMARY]
[CONTENT] dcb | des | groups | 05 | patients | ivus | coronary | differences | significant | vessel [SUMMARY]
[CONTENT] ||| PCI | DCB | ACS ||| DCB | ACS [SUMMARY]
null
[CONTENT] MLD | two ||| IVUS | two | PCI ||| PCI | DCB | MLD ||| MLD | two | DCB | P<0.05 ||| 9 | DCB | 10 | two ||| DCB | 7.3% | 8.8% | 6/68 [SUMMARY]
[CONTENT] DCB | ACS | LLL [SUMMARY]
[CONTENT] ||| PCI | DCB | ACS ||| DCB | ACS ||| 123 | ACS | Cardiology Department | December 2020 to July 2022 ||| ||| DCB | 55 | 68 ||| IVUS ||| ||| MLD | two ||| IVUS | two | PCI ||| PCI | DCB | MLD ||| MLD | two | DCB | P<0.05 ||| 9 | DCB | 10 | two ||| DCB | 7.3% | 8.8% | 6/68 ||| DCB | ACS | LLL [SUMMARY]
[CONTENT] ||| PCI | DCB | ACS ||| DCB | ACS ||| 123 | ACS | Cardiology Department | December 2020 to July 2022 ||| ||| DCB | 55 | 68 ||| IVUS ||| ||| MLD | two ||| IVUS | two | PCI ||| PCI | DCB | MLD ||| MLD | two | DCB | P<0.05 ||| 9 | DCB | 10 | two ||| DCB | 7.3% | 8.8% | 6/68 ||| DCB | ACS | LLL [SUMMARY]
Effects of Developmental Failure of Swallowing Threshold on Obesity and Eating Behaviors in Children Aged 5-15 Years.
35807794
The aim of the present study was to identify factors related to developmental failure of swallowing threshold in children aged 5-15 years.
BACKGROUND
A total of 83 children aged 5-15 years were included in this study. A self-administered lifestyle questionnaire was completed, along with hand grip strength and oral function tests. Swallowing threshold was determined based on the concentration of dissolved glucose obtained from gummy jellies when the participants signaled that they wanted to swallow the chewed gummy jellies. Developmental failure of swallowing threshold was defined as glucose concentrations in the lowest 20th percentile. After univariate analysis, multivariate binary logistic regression analysis was used to identify factors associated with developmental failure of swallowing threshold.
METHODS
Hand grip strength was significantly correlated with masticatory performance (r = 0.611, p < 0.01). Logistic regression analysis revealed factors related to developmental failure of swallowing threshold, i.e., overweight/obesity (odds ratio (OR) = 5.343, p = 0.031, 95% CI = 1.168-24.437) and eating between meals at least once a day (OR = 4.934, p = 0.049, 95% CI = 1.004-24.244).
RESULTS
Developmental failure of swallowing threshold was closely associated with childhood obesity in 5- to 15-year-old children.
CONCLUSIONS
[ "Adolescent", "Child", "Child, Preschool", "Deglutition", "Feeding Behavior", "Glucose", "Hand Strength", "Humans", "Mastication", "Pediatric Obesity" ]
9268440
1. Introduction
Eating habits acquired in early childhood can change over time based on personal experience and learning [1,2]. Eating habits acquired in childhood or adolescence may persist into adulthood [3]. Poor dietary habits established during childhood that persist into adulthood increase the risk of obesity and related adverse consequences [4,5,6]. In previous studies, early childhood obesity was associated with a higher risk of adult metabolic syndrome [7,8]. Eating behaviors such as frequent consumption of fast food and sugar-sweetened beverages are associated with the development of childhood obesity [9]. In addition, factors such as eating frequency, amount, and occasions may interact to cause obesity [5,7]. Other studies indicated that early modification of poor eating habits could improve health and decrease the future risk of metabolic syndrome, type 2 diabetes, and atherosclerotic cardiovascular diseases [10,11]. A relationship between obesity and eating speed was reported recently. Eating speed was associated with the incidence of metabolic syndrome in adults [12,13], while eating quickly was associated with increased body mass index (BMI) in young adults [14]. Similar results have also been reported in children [15,16,17]. In addition, chewing well before swallowing might be an effective method for reducing food intake and facilitating healthy weight management in preschool children [18] and adults [19]. These findings suggest that appropriate eating behaviors should be acquired within the critical early time period. However, the number of chewing cycles, chewing time, and chewing rate have not been analyzed in the context of the swallowing threshold, and it remains unclear when these functions are acquired. We hypothesized that these functions would be acquired relatively early in childhood. Additionally, based on previous studies, we hypothesized that developmental failure of swallowing threshold in children would be associated with eating fast, chewing frequently, and other habits [17,20]. This developmental failure could involve physiological problems including obesity. Identifying the factors associated with developmental failure of swallowing threshold may aid the development of strategies for improving it. The primary purpose of this study was to clarify differences in swallowing thresholds and other oral functions according to age, as well as when functions related to the swallowing threshold are acquired. A secondary aim was to determine factors related to the developmental failure of swallowing threshold in children aged 5–15 years. In this study, swallowing threshold was determined based on the concentration of dissolved glucose obtained from gummy jellies when the participants signaled that they wanted to swallow the chewed gummy jellies.
null
null
3. Results
The ICCs for all measurement items ranged from 0.84 to 0.91, indicating a high degree of intraobserver reliability. The anthropometric parameters and dental examination results are shown in Table 1 according to age and sex. The 15-year-old males were significantly taller than females of the same age (p < 0.05). The DMFT index in 11-year-old females was significantly higher than in males of the same age (p < 0.05). Hand grip strength and oral functional parameters are shown in Table 2 according to age and sex. The hand grip strengths in 5-, 10-, 12-, 14-, and 15-year-old males were significantly greater compared to females of the same age (all p < 0.05). When considering the results of comparing the mean scores of males and females for all measurement items by age, hand grip strength in 14–15-year-old participants was significantly greater compared to 5–7-year-old participants (all p < 0.05). Hand grip strength increased with age. The maximum occlusal force was greatest in 15-year-old participants and was significantly greater than in those aged 5–7 years (all p < 0.05). Masticatory performance was lowest in 5-year-old participants and was significantly lower than in those aged 12, 14, and 15 years (all p < 0.05). The number of chewing cycles was highest in 9-year-old participants and was significantly higher than in those aged 6 and 14 years (both p < 0.05). The change in chewing time with age was similar to the change in the number of chewing cycles. The swallowing threshold was highest in 15-year-old participants, followed by the 8-year-old participants; both were significantly higher than in those aged 6 years (p < 0.05). The results of Pearson’s bivariate correlation analyses are shown in Table 3. Hand grip strength was significantly and positively correlated with masticatory performance (r = 0.611, p < 0.01) and was strongly correlated with maximum occlusal force (r = 0.791, p < 0.01). Swallowing threshold was strongly correlated with masticatory performance (r = 0.727, p < 0.01) and was significantly and positively correlated with the number of erupted teeth, maximum occlusal force, and number of chewing cycles (r = 0.432, p < 0.01; r = 0.493, p < 0.01; r = 0.465, p < 0.01, respectively). The number of chewing cycles was strongly correlated with chewing time (r = 0.801, p < 0.01). The correlation between age and swallowing threshold was weak but significant (r = 0.362, p < 0.01; Figure 1). Table 4 summarizes the data on developmental failure of swallowing threshold based on demographic and health-related variables. Participants with developmental failure of swallowing threshold were more likely to be overweight or obese, eat between meals at least once per day, and have low or no physical activity (p = 0.014, p = 0.021, and p = 0.006, respectively). Table 5 summarizes the data on developmental failure of swallowing threshold based on age, height, body weight, dental status, hand grip strength, and masticatory function. Swallowing threshold, age, number of erupted teeth, maximum occlusal force, masticatory performance, number of chewing cycles, and chewing time in the develop-mental failure of swallowing threshold group were significantly lower compared to the normal swallowing threshold group (all p < 0.05). The mean DMFT index in the developmental failure swallowing threshold group was significantly higher compared to the normal swallowing threshold group (p < 0.05). The mean chewing time to swallow in the normal swallowing threshold group was 20.91 s. 
Chewing rate was similar between the groups.

Table 6 shows the predictors of developmental failure of swallowing threshold, as revealed by logistic regression. Overweight or obese individuals had higher odds of developmental failure of swallowing threshold (OR = 5.343, p = 0.031, 95% CI = 1.168–24.437). Eating between meals once or more a day was also associated with higher odds of developmental failure of swallowing threshold (OR = 4.934, p = 0.049, 95% CI = 1.004–24.244).
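For readers who want to reproduce the form of the Table 6 analysis, the short Python sketch below shows how adjusted odds ratios and 95% confidence intervals are typically obtained from a fitted binary logistic regression. It is illustrative only: the column names ("failure", "overweight_or_obese", "snacks_daily") are hypothetical stand-ins for the study variables, the toy data are fabricated, and the forward-selection step used in the original analysis is not reproduced here.

# Illustrative only: odds ratios and 95% CIs of the kind reported in Table 6,
# derived from a binary logistic regression; variable names and data are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def odds_ratio_table(df, outcome, predictors):
    X = sm.add_constant(df[predictors].astype(float))   # intercept + predictors
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=False)
    ci = fit.conf_int()                                  # columns 0 (lower) and 1 (upper)
    table = pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI 2.5%": np.exp(ci[0]),
        "CI 97.5%": np.exp(ci[1]),
        "p": fit.pvalues,
    })
    return table.drop(index="const")

rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "overweight_or_obese": rng.integers(0, 2, 100),
    "snacks_daily": rng.integers(0, 2, 100),
    "failure": (rng.random(100) < 0.2).astype(int),
})
print(odds_ratio_table(toy, "failure", ["overweight_or_obese", "snacks_daily"]))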
5. Conclusions
We found that the relationship between swallowing threshold and age was unclear, whereas the number of chewing cycles and chewing time tended to gradually decrease after reaching a peak at the age of 9 and 8 years, respectively. An appropriate number of chewing cycles and chewing time until swallowing should be established by 9 years of age. Developmental failure of swallowing threshold was closely associated with childhood obesity and eating between meals at least once per day among the 5- to 15-year-old individuals.
[ "2. Materials and Methods", "2.2. Questionnaire", "2.3. Anthropometry and Dental Examination", "2.4. Hand Grip Strength", "2.5. Maximum Occlusal Force", "2.6. Masticatory Performance", "2.7. Swallowing Threshold", "2.8. Reliability of Measurements", "2.9. Statistical Analysis" ]
[ "2.1. Participants This study was conducted in accordance with the Declaration of Helsinki and was approved by the Human Investigation Committee of Kyushu Dental University (approval number: 18–37). All participants and their parents/guardians provided written informed consent for participation in the study. The participants were recruited following an initial examination at two private dental clinics in Japan. The inclusion criteria were as follows: aged 5–15 years, normal language comprehension, cooperative behavior, and no current dental diseases or complications. The exclusion criteria included systemic disturbances causing swallowing impairment, obvious facial asymmetry that could affect the measurements, soft tissue abnormalities, temporomandibular joint dysfunction, and dental or structural irregularities.\nUsing software (PS: Power and Sample Size Calculation, available from the Vanderbilt University’s website), sample size was calculated. The study was based on a continuous response variable ranging from normal swallowing threshold to developmental failure of swallowing threshold; four normal participants were recruited for every participant with the developmental failure of swallowing threshold. In a previous study, the data for each group were normally distributed, with a standard deviation (SD) of swallowing threshold of 40 mg/dL [21]. Based on a true difference of 40 mg/dL between the developmental failure and normal groups, 14 developmental failure and 56 normal participants were required to reject the null hypothesis (i.e., the population means of the developmental failure and normal groups are equal) with a power of 0.9. The type I error probability for the null hypothesis was 0.05.\nThe participants were divided into 11 groups based on their chronological age, and each group was further divided into two subgroups based on sex. Details of the participants are shown in Table 1. \nThis study was conducted in accordance with the Declaration of Helsinki and was approved by the Human Investigation Committee of Kyushu Dental University (approval number: 18–37). All participants and their parents/guardians provided written informed consent for participation in the study. The participants were recruited following an initial examination at two private dental clinics in Japan. The inclusion criteria were as follows: aged 5–15 years, normal language comprehension, cooperative behavior, and no current dental diseases or complications. The exclusion criteria included systemic disturbances causing swallowing impairment, obvious facial asymmetry that could affect the measurements, soft tissue abnormalities, temporomandibular joint dysfunction, and dental or structural irregularities.\nUsing software (PS: Power and Sample Size Calculation, available from the Vanderbilt University’s website), sample size was calculated. The study was based on a continuous response variable ranging from normal swallowing threshold to developmental failure of swallowing threshold; four normal participants were recruited for every participant with the developmental failure of swallowing threshold. In a previous study, the data for each group were normally distributed, with a standard deviation (SD) of swallowing threshold of 40 mg/dL [21]. 
Based on a true difference of 40 mg/dL between the developmental failure and normal groups, 14 developmental failure and 56 normal participants were required to reject the null hypothesis (i.e., the population means of the developmental failure and normal groups are equal) with a power of 0.9. The type I error probability for the null hypothesis was 0.05.\nThe participants were divided into 11 groups based on their chronological age, and each group was further divided into two subgroups based on sex. Details of the participants are shown in Table 1. \n2.2. Questionnaire The survey solicited the following information: demographic characteristics (sex and age), eating habits, physical activity, and sleep [21].\nThe survey solicited the following information: demographic characteristics (sex and age), eating habits, physical activity, and sleep [21].\n2.3. Anthropometry and Dental Examination Height and body weight were measured in the consultation room of the clinic. Height was measured to an accuracy of ±0.1 cm using a portable digital stadiometer (AD-653; A&D, Tokyo, Japan), with the head in the Frankfort plane, whereas body weight was measured with an accuracy of 0.1 kg [22]. Rohrer and Kaup indices were calculated from the height and weight. \nDuring intraoral examination, the number of erupted teeth and the decayed, missing, and filled teeth (DMFT) index were recorded for each patient [23].\nHeight and body weight were measured in the consultation room of the clinic. Height was measured to an accuracy of ±0.1 cm using a portable digital stadiometer (AD-653; A&D, Tokyo, Japan), with the head in the Frankfort plane, whereas body weight was measured with an accuracy of 0.1 kg [22]. Rohrer and Kaup indices were calculated from the height and weight. \nDuring intraoral examination, the number of erupted teeth and the decayed, missing, and filled teeth (DMFT) index were recorded for each patient [23].\n2.4. Hand Grip Strength Hand grip strength is generally used as an index of muscle weakness in the diagnosis of sarcopenia [24]. Therefore, hand grip strength measurement was performed for evaluating systemic muscle strength in the participants. A portable grip strength meter (T-2288; Toei Light Co., Ltd., Saitama, Japan) was used to measure hand grip strength. Participants were asked to stand and hold the dynamometer in their hand with the arm parallel to the body, but without squeezing the arm against the body. Hand grip strength (kg) was measured twice for each hand (alternately) with a 30 s interval between trials. The highest value from either the left or right hand was recorded as the grip strength [20].\nHand grip strength is generally used as an index of muscle weakness in the diagnosis of sarcopenia [24]. Therefore, hand grip strength measurement was performed for evaluating systemic muscle strength in the participants. A portable grip strength meter (T-2288; Toei Light Co., Ltd., Saitama, Japan) was used to measure hand grip strength. Participants were asked to stand and hold the dynamometer in their hand with the arm parallel to the body, but without squeezing the arm against the body. Hand grip strength (kg) was measured twice for each hand (alternately) with a 30 s interval between trials. The highest value from either the left or right hand was recorded as the grip strength [20].\n2.5. Maximum Occlusal Force Maximum occlusal force was measured using a portable occlusal force meter (GM10; Nagano Keiki Co., Ltd., Tokyo, Japan). 
The participants were instructed to bite down with maximal voluntary muscular effort using their first molars. Maximum occlusal force was measured on each side, with a 30 s interval between bite measurements. The larger of the values from the left and right sides was recorded as the maximum bite force [21].\nMaximum occlusal force was measured using a portable occlusal force meter (GM10; Nagano Keiki Co., Ltd., Tokyo, Japan). The participants were instructed to bite down with maximal voluntary muscular effort using their first molars. Maximum occlusal force was measured on each side, with a 30 s interval between bite measurements. The larger of the values from the left and right sides was recorded as the maximum bite force [21].\n2.6. Masticatory Performance Masticatory performance was determined by measuring the concentration of dissolved glucose obtained from a cylindrical gummy jelly (GLUCOLUMN; GC Co., Ltd., Tokyo, Japan) using a glucose-measuring device (GLUCO SENSOR GS-II; GC Co., Ltd., Tokyo, Japan). Prior to the test, the participants were instructed regarding the chewing movements and mouth rinsing procedures to prevent swallowing. The participants were then instructed to chew the gummy jelly for 20 s freely. After chewing, the participants were asked to take 10 mL of distilled water into their mouth and spit out the gummy jelly and distilled water into a filter cup. The glucose concentration in the filtrate (mg/dL) was measured using a dedicated device [20].\nMasticatory performance was determined by measuring the concentration of dissolved glucose obtained from a cylindrical gummy jelly (GLUCOLUMN; GC Co., Ltd., Tokyo, Japan) using a glucose-measuring device (GLUCO SENSOR GS-II; GC Co., Ltd., Tokyo, Japan). Prior to the test, the participants were instructed regarding the chewing movements and mouth rinsing procedures to prevent swallowing. The participants were then instructed to chew the gummy jelly for 20 s freely. After chewing, the participants were asked to take 10 mL of distilled water into their mouth and spit out the gummy jelly and distilled water into a filter cup. The glucose concentration in the filtrate (mg/dL) was measured using a dedicated device [20].\n2.7. Swallowing Threshold Following evaluation of the masticatory performance, an assessment of the swallowing threshold was performed using the test gummy jellies (GLUCOLUMN; GC Co. Ltd.). Variables related to swallowing (i.e., the number of chewing cycles, chewing time, chewing rate, and glucose concentration in the filtrate (mg/dL)) were assessed for all participants. Each participant was instructed to chew a gummy jelly freely until feeling the desire to swallow, at which time they were instructed to stop chewing and signal to the examiner that they were ready to expel the gummy jelly. The examiner visually quantified the number of chewing cycles. The time from the onset of chewing to the moment when the participants raised their hands was recorded using a stopwatch [20]. The subsequent steps were the same as those used for the evaluation of masticatory performance. We determined that the glucose concentration in the filtrate was an indicator of the swallowing threshold. Participants with low swallowing threshold tend to signal that they want to swallow the gummy jelly quickly, resulting in low glucose concentrations. A glucose concentration in the lowest 20th percentile was defined as developmental failure of swallowing threshold, based on previous studies reporting the criteria for sarcopenia [21,25]. 
\nFollowing evaluation of the masticatory performance, an assessment of the swallowing threshold was performed using the test gummy jellies (GLUCOLUMN; GC Co. Ltd.). Variables related to swallowing (i.e., the number of chewing cycles, chewing time, chewing rate, and glucose concentration in the filtrate (mg/dL)) were assessed for all participants. Each participant was instructed to chew a gummy jelly freely until feeling the desire to swallow, at which time they were instructed to stop chewing and signal to the examiner that they were ready to expel the gummy jelly. The examiner visually quantified the number of chewing cycles. The time from the onset of chewing to the moment when the participants raised their hands was recorded using a stopwatch [20]. The subsequent steps were the same as those used for the evaluation of masticatory performance. We determined that the glucose concentration in the filtrate was an indicator of the swallowing threshold. Participants with low swallowing threshold tend to signal that they want to swallow the gummy jelly quickly, resulting in low glucose concentrations. A glucose concentration in the lowest 20th percentile was defined as developmental failure of swallowing threshold, based on previous studies reporting the criteria for sarcopenia [21,25]. \n2.8. Reliability of Measurements All measurements were performed in duplicate, separated by 30 s rest periods, and the mean values were calculated for the subsequent analyses. All examinations were performed by the same examiner. Data generated during sample collection were assessed for reliability. Random error was evaluated based on intraobserver reliability, which was quantified using the intraclass correlation coefficient (ICC; where 0.8 ≤ ICC ≤ 1.0 corresponded to high reliability) [26].\nAll measurements were performed in duplicate, separated by 30 s rest periods, and the mean values were calculated for the subsequent analyses. All examinations were performed by the same examiner. Data generated during sample collection were assessed for reliability. Random error was evaluated based on intraobserver reliability, which was quantified using the intraclass correlation coefficient (ICC; where 0.8 ≤ ICC ≤ 1.0 corresponded to high reliability) [26].\n2.9. Statistical Analysis The Shapiro–Wilk test was used to determine data normality. All continuous data were expressed as means ± SD. The means were compared between two groups using a two-tailed t-test or the Mann–Whitney U test. The Kruskal–Wallis test was used for comparisons between more than two groups. Pearson’s correlation coefficient was used to determine associations among glucose concentration before swallowing and other variables. The chi-squared test or Fisher’s exact test was used, as appropriate, to compare categorical variables between the normal swallowing threshold and developmental failure of swallowing threshold (lowest 20%) groups. Binary logistic regression analysis with the forward selection (conditional) method was used to identify factors predicting developmental failure of swallowing threshold. Independent variables that were significant in the univariate analyses were included. Categorical variables were coded appropriately before being entered into the model. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for the low swallowing threshold groups. A p-value < 0.05 was considered statistically significant. 
All data were analyzed using SPSS Statistics for Windows software (version 23.0; IBM Corp., Armonk, NY, USA).\nThe Shapiro–Wilk test was used to determine data normality. All continuous data were expressed as means ± SD. The means were compared between two groups using a two-tailed t-test or the Mann–Whitney U test. The Kruskal–Wallis test was used for comparisons between more than two groups. Pearson’s correlation coefficient was used to determine associations among glucose concentration before swallowing and other variables. The chi-squared test or Fisher’s exact test was used, as appropriate, to compare categorical variables between the normal swallowing threshold and developmental failure of swallowing threshold (lowest 20%) groups. Binary logistic regression analysis with the forward selection (conditional) method was used to identify factors predicting developmental failure of swallowing threshold. Independent variables that were significant in the univariate analyses were included. Categorical variables were coded appropriately before being entered into the model. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for the low swallowing threshold groups. A p-value < 0.05 was considered statistically significant. All data were analyzed using SPSS Statistics for Windows software (version 23.0; IBM Corp., Armonk, NY, USA).", "The survey solicited the following information: demographic characteristics (sex and age), eating habits, physical activity, and sleep [21].", "Height and body weight were measured in the consultation room of the clinic. Height was measured to an accuracy of ±0.1 cm using a portable digital stadiometer (AD-653; A&D, Tokyo, Japan), with the head in the Frankfort plane, whereas body weight was measured with an accuracy of 0.1 kg [22]. Rohrer and Kaup indices were calculated from the height and weight. \nDuring intraoral examination, the number of erupted teeth and the decayed, missing, and filled teeth (DMFT) index were recorded for each patient [23].", "Hand grip strength is generally used as an index of muscle weakness in the diagnosis of sarcopenia [24]. Therefore, hand grip strength measurement was performed for evaluating systemic muscle strength in the participants. A portable grip strength meter (T-2288; Toei Light Co., Ltd., Saitama, Japan) was used to measure hand grip strength. Participants were asked to stand and hold the dynamometer in their hand with the arm parallel to the body, but without squeezing the arm against the body. Hand grip strength (kg) was measured twice for each hand (alternately) with a 30 s interval between trials. The highest value from either the left or right hand was recorded as the grip strength [20].", "Maximum occlusal force was measured using a portable occlusal force meter (GM10; Nagano Keiki Co., Ltd., Tokyo, Japan). The participants were instructed to bite down with maximal voluntary muscular effort using their first molars. Maximum occlusal force was measured on each side, with a 30 s interval between bite measurements. The larger of the values from the left and right sides was recorded as the maximum bite force [21].", "Masticatory performance was determined by measuring the concentration of dissolved glucose obtained from a cylindrical gummy jelly (GLUCOLUMN; GC Co., Ltd., Tokyo, Japan) using a glucose-measuring device (GLUCO SENSOR GS-II; GC Co., Ltd., Tokyo, Japan). 
Prior to the test, the participants were instructed regarding the chewing movements and mouth rinsing procedures to prevent swallowing. The participants were then instructed to chew the gummy jelly for 20 s freely. After chewing, the participants were asked to take 10 mL of distilled water into their mouth and spit out the gummy jelly and distilled water into a filter cup. The glucose concentration in the filtrate (mg/dL) was measured using a dedicated device [20].", "Following evaluation of the masticatory performance, an assessment of the swallowing threshold was performed using the test gummy jellies (GLUCOLUMN; GC Co. Ltd.). Variables related to swallowing (i.e., the number of chewing cycles, chewing time, chewing rate, and glucose concentration in the filtrate (mg/dL)) were assessed for all participants. Each participant was instructed to chew a gummy jelly freely until feeling the desire to swallow, at which time they were instructed to stop chewing and signal to the examiner that they were ready to expel the gummy jelly. The examiner visually quantified the number of chewing cycles. The time from the onset of chewing to the moment when the participants raised their hands was recorded using a stopwatch [20]. The subsequent steps were the same as those used for the evaluation of masticatory performance. We determined that the glucose concentration in the filtrate was an indicator of the swallowing threshold. Participants with low swallowing threshold tend to signal that they want to swallow the gummy jelly quickly, resulting in low glucose concentrations. A glucose concentration in the lowest 20th percentile was defined as developmental failure of swallowing threshold, based on previous studies reporting the criteria for sarcopenia [21,25]. ", "All measurements were performed in duplicate, separated by 30 s rest periods, and the mean values were calculated for the subsequent analyses. All examinations were performed by the same examiner. Data generated during sample collection were assessed for reliability. Random error was evaluated based on intraobserver reliability, which was quantified using the intraclass correlation coefficient (ICC; where 0.8 ≤ ICC ≤ 1.0 corresponded to high reliability) [26].", "The Shapiro–Wilk test was used to determine data normality. All continuous data were expressed as means ± SD. The means were compared between two groups using a two-tailed t-test or the Mann–Whitney U test. The Kruskal–Wallis test was used for comparisons between more than two groups. Pearson’s correlation coefficient was used to determine associations among glucose concentration before swallowing and other variables. The chi-squared test or Fisher’s exact test was used, as appropriate, to compare categorical variables between the normal swallowing threshold and developmental failure of swallowing threshold (lowest 20%) groups. Binary logistic regression analysis with the forward selection (conditional) method was used to identify factors predicting developmental failure of swallowing threshold. Independent variables that were significant in the univariate analyses were included. Categorical variables were coded appropriately before being entered into the model. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for the low swallowing threshold groups. A p-value < 0.05 was considered statistically significant. All data were analyzed using SPSS Statistics for Windows software (version 23.0; IBM Corp., Armonk, NY, USA)." ]
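As a cross-check of the sample-size statement in Section 2.1 above (true difference of 40 mg/dL, SD of 40 mg/dL, power 0.9, alpha 0.05, 4:1 allocation of normal to developmental failure participants), the calculation can be approximated with the two-sample t-test power solver in statsmodels. This is an independent sketch, not the PS (Power and Sample Size Calculation) run performed by the authors.

# Rough re-derivation of the Section 2.1 sample-size figures; the solver returns
# roughly 14 participants in the smaller group (about 56 in the larger one),
# consistent with the numbers quoted in the text.
import math
from statsmodels.stats.power import TTestIndPower

effect_size = 40 / 40                       # Cohen's d = true difference / SD
n_failure = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.9,
    ratio=4.0,                              # nobs2 / nobs1 (normal : developmental failure)
    alternative="two-sided",
)
print(math.ceil(n_failure), 4 * math.ceil(n_failure))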
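Section 2.3 reports Rohrer and Kaup indices without giving the formulas. The sketch below uses the conventional definitions (weight in kilograms, height in centimetres); these formulas are an assumption, since the paper does not state them, and the Kaup index defined this way is numerically the same as BMI in kg/m².

# Conventional Rohrer and Kaup index formulas, assumed rather than taken from the paper.
def kaup_index(weight_kg, height_cm):
    return weight_kg / height_cm ** 2 * 1e4

def rohrer_index(weight_kg, height_cm):
    return weight_kg / height_cm ** 3 * 1e7

# Example for a hypothetical 30 kg, 130 cm child:
print(round(kaup_index(30, 130), 1))    # ~17.8
print(round(rohrer_index(30, 130), 1))  # ~136.5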
[ "1. Introduction", "2. Materials and Methods", "2.1. Participants", "2.2. Questionnaire", "2.3. Anthropometry and Dental Examination", "2.4. Hand Grip Strength", "2.5. Maximum Occlusal Force", "2.6. Masticatory Performance", "2.7. Swallowing Threshold", "2.8. Reliability of Measurements", "2.9. Statistical Analysis", "3. Results", "4. Discussion", "5. Conclusions" ]
[ "Eating habits acquired in early childhood can change over time based on personal experience and learning [1,2]. Eating habits acquired in childhood or adolescence may persist into adulthood [3]. Poor dietary habits established during childhood that persist into adulthood increase the risk of obesity and related adverse consequences [4,5,6].\nIn previous studies, early childhood obesity was associated with a higher risk of adult metabolic syndrome [7,8]. Eating behaviors such as frequent consumption of fast food and sugar-sweetened beverages are associated with the development of childhood obesity [9]. In addition, factors such as eating frequency, amount, and occasions may interact to cause obesity [5,7]. Other studies indicated that early modification of poor eating habits could improve health and decrease the future risk of metabolic syndrome, type 2 diabetes, and atherosclerotic cardiovascular diseases [10,11].\nA relationship between obesity and eating speed was reported recently. Eating speed was associated with the incidence of metabolic syndrome in adults [12,13], while eating quickly was associated with increased body mass index (BMI) in young adults [14]. Similar results have also been reported in children [15,16,17]. In addition, chewing well before swallowing might be an effective method for reducing food intake and facilitating healthy weight management in preschool children [18] and adults [19].\nThese findings suggest that appropriate eating behaviors should be acquired within the critical early time period. However, the number of chewing cycles, chewing time, and chewing rate have not been analyzed in the context of the swallowing threshold, and it remains unclear when these functions are acquired. We hypothesized that these functions would be acquired relatively early in childhood. \nAdditionally, based on previous studies, we hypothesized that developmental failure of swallowing threshold in children would be associated with eating fast, chewing frequently, and other habits [17,20]. This developmental failure could involve physiological problems including obesity. Identifying the factors associated with developmental failure of swallowing threshold may aid the development of strategies for improving it.\nThe primary purpose of this study was to clarify differences in swallowing thresholds and other oral functions according to age, as well as when functions related to the swallowing threshold are acquired. A secondary aim was to determine factors related to the developmental failure of swallowing threshold in children aged 5–15 years. In this study, swallowing threshold was determined based on the concentration of dissolved glucose obtained from gummy jellies when the participants signaled that they wanted to swallow the chewed gummy jellies.", "2.1. Participants This study was conducted in accordance with the Declaration of Helsinki and was approved by the Human Investigation Committee of Kyushu Dental University (approval number: 18–37). All participants and their parents/guardians provided written informed consent for participation in the study. The participants were recruited following an initial examination at two private dental clinics in Japan. The inclusion criteria were as follows: aged 5–15 years, normal language comprehension, cooperative behavior, and no current dental diseases or complications. 
The exclusion criteria included systemic disturbances causing swallowing impairment, obvious facial asymmetry that could affect the measurements, soft tissue abnormalities, temporomandibular joint dysfunction, and dental or structural irregularities.\nUsing software (PS: Power and Sample Size Calculation, available from the Vanderbilt University’s website), sample size was calculated. The study was based on a continuous response variable ranging from normal swallowing threshold to developmental failure of swallowing threshold; four normal participants were recruited for every participant with the developmental failure of swallowing threshold. In a previous study, the data for each group were normally distributed, with a standard deviation (SD) of swallowing threshold of 40 mg/dL [21]. Based on a true difference of 40 mg/dL between the developmental failure and normal groups, 14 developmental failure and 56 normal participants were required to reject the null hypothesis (i.e., the population means of the developmental failure and normal groups are equal) with a power of 0.9. The type I error probability for the null hypothesis was 0.05.\nThe participants were divided into 11 groups based on their chronological age, and each group was further divided into two subgroups based on sex. Details of the participants are shown in Table 1. \nThis study was conducted in accordance with the Declaration of Helsinki and was approved by the Human Investigation Committee of Kyushu Dental University (approval number: 18–37). All participants and their parents/guardians provided written informed consent for participation in the study. The participants were recruited following an initial examination at two private dental clinics in Japan. The inclusion criteria were as follows: aged 5–15 years, normal language comprehension, cooperative behavior, and no current dental diseases or complications. The exclusion criteria included systemic disturbances causing swallowing impairment, obvious facial asymmetry that could affect the measurements, soft tissue abnormalities, temporomandibular joint dysfunction, and dental or structural irregularities.\nUsing software (PS: Power and Sample Size Calculation, available from the Vanderbilt University’s website), sample size was calculated. The study was based on a continuous response variable ranging from normal swallowing threshold to developmental failure of swallowing threshold; four normal participants were recruited for every participant with the developmental failure of swallowing threshold. In a previous study, the data for each group were normally distributed, with a standard deviation (SD) of swallowing threshold of 40 mg/dL [21]. Based on a true difference of 40 mg/dL between the developmental failure and normal groups, 14 developmental failure and 56 normal participants were required to reject the null hypothesis (i.e., the population means of the developmental failure and normal groups are equal) with a power of 0.9. The type I error probability for the null hypothesis was 0.05.\nThe participants were divided into 11 groups based on their chronological age, and each group was further divided into two subgroups based on sex. Details of the participants are shown in Table 1. \n2.2. 
Questionnaire The survey solicited the following information: demographic characteristics (sex and age), eating habits, physical activity, and sleep [21].\nThe survey solicited the following information: demographic characteristics (sex and age), eating habits, physical activity, and sleep [21].\n2.3. Anthropometry and Dental Examination Height and body weight were measured in the consultation room of the clinic. Height was measured to an accuracy of ±0.1 cm using a portable digital stadiometer (AD-653; A&D, Tokyo, Japan), with the head in the Frankfort plane, whereas body weight was measured with an accuracy of 0.1 kg [22]. Rohrer and Kaup indices were calculated from the height and weight. \nDuring intraoral examination, the number of erupted teeth and the decayed, missing, and filled teeth (DMFT) index were recorded for each patient [23].\nHeight and body weight were measured in the consultation room of the clinic. Height was measured to an accuracy of ±0.1 cm using a portable digital stadiometer (AD-653; A&D, Tokyo, Japan), with the head in the Frankfort plane, whereas body weight was measured with an accuracy of 0.1 kg [22]. Rohrer and Kaup indices were calculated from the height and weight. \nDuring intraoral examination, the number of erupted teeth and the decayed, missing, and filled teeth (DMFT) index were recorded for each patient [23].\n2.4. Hand Grip Strength Hand grip strength is generally used as an index of muscle weakness in the diagnosis of sarcopenia [24]. Therefore, hand grip strength measurement was performed for evaluating systemic muscle strength in the participants. A portable grip strength meter (T-2288; Toei Light Co., Ltd., Saitama, Japan) was used to measure hand grip strength. Participants were asked to stand and hold the dynamometer in their hand with the arm parallel to the body, but without squeezing the arm against the body. Hand grip strength (kg) was measured twice for each hand (alternately) with a 30 s interval between trials. The highest value from either the left or right hand was recorded as the grip strength [20].\nHand grip strength is generally used as an index of muscle weakness in the diagnosis of sarcopenia [24]. Therefore, hand grip strength measurement was performed for evaluating systemic muscle strength in the participants. A portable grip strength meter (T-2288; Toei Light Co., Ltd., Saitama, Japan) was used to measure hand grip strength. Participants were asked to stand and hold the dynamometer in their hand with the arm parallel to the body, but without squeezing the arm against the body. Hand grip strength (kg) was measured twice for each hand (alternately) with a 30 s interval between trials. The highest value from either the left or right hand was recorded as the grip strength [20].\n2.5. Maximum Occlusal Force Maximum occlusal force was measured using a portable occlusal force meter (GM10; Nagano Keiki Co., Ltd., Tokyo, Japan). The participants were instructed to bite down with maximal voluntary muscular effort using their first molars. Maximum occlusal force was measured on each side, with a 30 s interval between bite measurements. The larger of the values from the left and right sides was recorded as the maximum bite force [21].\nMaximum occlusal force was measured using a portable occlusal force meter (GM10; Nagano Keiki Co., Ltd., Tokyo, Japan). The participants were instructed to bite down with maximal voluntary muscular effort using their first molars. 
Maximum occlusal force was measured on each side, with a 30 s interval between bite measurements. The larger of the values from the left and right sides was recorded as the maximum bite force [21].\n2.6. Masticatory Performance Masticatory performance was determined by measuring the concentration of dissolved glucose obtained from a cylindrical gummy jelly (GLUCOLUMN; GC Co., Ltd., Tokyo, Japan) using a glucose-measuring device (GLUCO SENSOR GS-II; GC Co., Ltd., Tokyo, Japan). Prior to the test, the participants were instructed regarding the chewing movements and mouth rinsing procedures to prevent swallowing. The participants were then instructed to chew the gummy jelly for 20 s freely. After chewing, the participants were asked to take 10 mL of distilled water into their mouth and spit out the gummy jelly and distilled water into a filter cup. The glucose concentration in the filtrate (mg/dL) was measured using a dedicated device [20].\nMasticatory performance was determined by measuring the concentration of dissolved glucose obtained from a cylindrical gummy jelly (GLUCOLUMN; GC Co., Ltd., Tokyo, Japan) using a glucose-measuring device (GLUCO SENSOR GS-II; GC Co., Ltd., Tokyo, Japan). Prior to the test, the participants were instructed regarding the chewing movements and mouth rinsing procedures to prevent swallowing. The participants were then instructed to chew the gummy jelly for 20 s freely. After chewing, the participants were asked to take 10 mL of distilled water into their mouth and spit out the gummy jelly and distilled water into a filter cup. The glucose concentration in the filtrate (mg/dL) was measured using a dedicated device [20].\n2.7. Swallowing Threshold Following evaluation of the masticatory performance, an assessment of the swallowing threshold was performed using the test gummy jellies (GLUCOLUMN; GC Co. Ltd.). Variables related to swallowing (i.e., the number of chewing cycles, chewing time, chewing rate, and glucose concentration in the filtrate (mg/dL)) were assessed for all participants. Each participant was instructed to chew a gummy jelly freely until feeling the desire to swallow, at which time they were instructed to stop chewing and signal to the examiner that they were ready to expel the gummy jelly. The examiner visually quantified the number of chewing cycles. The time from the onset of chewing to the moment when the participants raised their hands was recorded using a stopwatch [20]. The subsequent steps were the same as those used for the evaluation of masticatory performance. We determined that the glucose concentration in the filtrate was an indicator of the swallowing threshold. Participants with low swallowing threshold tend to signal that they want to swallow the gummy jelly quickly, resulting in low glucose concentrations. A glucose concentration in the lowest 20th percentile was defined as developmental failure of swallowing threshold, based on previous studies reporting the criteria for sarcopenia [21,25]. \nFollowing evaluation of the masticatory performance, an assessment of the swallowing threshold was performed using the test gummy jellies (GLUCOLUMN; GC Co. Ltd.). Variables related to swallowing (i.e., the number of chewing cycles, chewing time, chewing rate, and glucose concentration in the filtrate (mg/dL)) were assessed for all participants. 
Each participant was instructed to chew a gummy jelly freely until feeling the desire to swallow, at which time they were instructed to stop chewing and signal to the examiner that they were ready to expel the gummy jelly. The examiner visually quantified the number of chewing cycles. The time from the onset of chewing to the moment when the participants raised their hands was recorded using a stopwatch [20]. The subsequent steps were the same as those used for the evaluation of masticatory performance. We determined that the glucose concentration in the filtrate was an indicator of the swallowing threshold. Participants with low swallowing threshold tend to signal that they want to swallow the gummy jelly quickly, resulting in low glucose concentrations. A glucose concentration in the lowest 20th percentile was defined as developmental failure of swallowing threshold, based on previous studies reporting the criteria for sarcopenia [21,25]. \n2.8. Reliability of Measurements All measurements were performed in duplicate, separated by 30 s rest periods, and the mean values were calculated for the subsequent analyses. All examinations were performed by the same examiner. Data generated during sample collection were assessed for reliability. Random error was evaluated based on intraobserver reliability, which was quantified using the intraclass correlation coefficient (ICC; where 0.8 ≤ ICC ≤ 1.0 corresponded to high reliability) [26].\nAll measurements were performed in duplicate, separated by 30 s rest periods, and the mean values were calculated for the subsequent analyses. All examinations were performed by the same examiner. Data generated during sample collection were assessed for reliability. Random error was evaluated based on intraobserver reliability, which was quantified using the intraclass correlation coefficient (ICC; where 0.8 ≤ ICC ≤ 1.0 corresponded to high reliability) [26].\n2.9. Statistical Analysis The Shapiro–Wilk test was used to determine data normality. All continuous data were expressed as means ± SD. The means were compared between two groups using a two-tailed t-test or the Mann–Whitney U test. The Kruskal–Wallis test was used for comparisons between more than two groups. Pearson’s correlation coefficient was used to determine associations among glucose concentration before swallowing and other variables. The chi-squared test or Fisher’s exact test was used, as appropriate, to compare categorical variables between the normal swallowing threshold and developmental failure of swallowing threshold (lowest 20%) groups. Binary logistic regression analysis with the forward selection (conditional) method was used to identify factors predicting developmental failure of swallowing threshold. Independent variables that were significant in the univariate analyses were included. Categorical variables were coded appropriately before being entered into the model. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for the low swallowing threshold groups. A p-value < 0.05 was considered statistically significant. All data were analyzed using SPSS Statistics for Windows software (version 23.0; IBM Corp., Armonk, NY, USA).\nThe Shapiro–Wilk test was used to determine data normality. All continuous data were expressed as means ± SD. The means were compared between two groups using a two-tailed t-test or the Mann–Whitney U test. The Kruskal–Wallis test was used for comparisons between more than two groups. 
Pearson’s correlation coefficient was used to determine associations among glucose concentration before swallowing and other variables. The chi-squared test or Fisher’s exact test was used, as appropriate, to compare categorical variables between the normal swallowing threshold and developmental failure of swallowing threshold (lowest 20%) groups. Binary logistic regression analysis with the forward selection (conditional) method was used to identify factors predicting developmental failure of swallowing threshold. Independent variables that were significant in the univariate analyses were included. Categorical variables were coded appropriately before being entered into the model. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for the low swallowing threshold groups. A p-value < 0.05 was considered statistically significant. All data were analyzed using SPSS Statistics for Windows software (version 23.0; IBM Corp., Armonk, NY, USA).", "This study was conducted in accordance with the Declaration of Helsinki and was approved by the Human Investigation Committee of Kyushu Dental University (approval number: 18–37). All participants and their parents/guardians provided written informed consent for participation in the study. The participants were recruited following an initial examination at two private dental clinics in Japan. The inclusion criteria were as follows: aged 5–15 years, normal language comprehension, cooperative behavior, and no current dental diseases or complications. The exclusion criteria included systemic disturbances causing swallowing impairment, obvious facial asymmetry that could affect the measurements, soft tissue abnormalities, temporomandibular joint dysfunction, and dental or structural irregularities.\nUsing software (PS: Power and Sample Size Calculation, available from the Vanderbilt University’s website), sample size was calculated. The study was based on a continuous response variable ranging from normal swallowing threshold to developmental failure of swallowing threshold; four normal participants were recruited for every participant with the developmental failure of swallowing threshold. In a previous study, the data for each group were normally distributed, with a standard deviation (SD) of swallowing threshold of 40 mg/dL [21]. Based on a true difference of 40 mg/dL between the developmental failure and normal groups, 14 developmental failure and 56 normal participants were required to reject the null hypothesis (i.e., the population means of the developmental failure and normal groups are equal) with a power of 0.9. The type I error probability for the null hypothesis was 0.05.\nThe participants were divided into 11 groups based on their chronological age, and each group was further divided into two subgroups based on sex. Details of the participants are shown in Table 1. ", "The survey solicited the following information: demographic characteristics (sex and age), eating habits, physical activity, and sleep [21].", "Height and body weight were measured in the consultation room of the clinic. Height was measured to an accuracy of ±0.1 cm using a portable digital stadiometer (AD-653; A&D, Tokyo, Japan), with the head in the Frankfort plane, whereas body weight was measured with an accuracy of 0.1 kg [22]. Rohrer and Kaup indices were calculated from the height and weight. 
\nDuring intraoral examination, the number of erupted teeth and the decayed, missing, and filled teeth (DMFT) index were recorded for each patient [23].", "Hand grip strength is generally used as an index of muscle weakness in the diagnosis of sarcopenia [24]. Therefore, hand grip strength measurement was performed for evaluating systemic muscle strength in the participants. A portable grip strength meter (T-2288; Toei Light Co., Ltd., Saitama, Japan) was used to measure hand grip strength. Participants were asked to stand and hold the dynamometer in their hand with the arm parallel to the body, but without squeezing the arm against the body. Hand grip strength (kg) was measured twice for each hand (alternately) with a 30 s interval between trials. The highest value from either the left or right hand was recorded as the grip strength [20].", "Maximum occlusal force was measured using a portable occlusal force meter (GM10; Nagano Keiki Co., Ltd., Tokyo, Japan). The participants were instructed to bite down with maximal voluntary muscular effort using their first molars. Maximum occlusal force was measured on each side, with a 30 s interval between bite measurements. The larger of the values from the left and right sides was recorded as the maximum bite force [21].", "Masticatory performance was determined by measuring the concentration of dissolved glucose obtained from a cylindrical gummy jelly (GLUCOLUMN; GC Co., Ltd., Tokyo, Japan) using a glucose-measuring device (GLUCO SENSOR GS-II; GC Co., Ltd., Tokyo, Japan). Prior to the test, the participants were instructed regarding the chewing movements and mouth rinsing procedures to prevent swallowing. The participants were then instructed to chew the gummy jelly for 20 s freely. After chewing, the participants were asked to take 10 mL of distilled water into their mouth and spit out the gummy jelly and distilled water into a filter cup. The glucose concentration in the filtrate (mg/dL) was measured using a dedicated device [20].", "Following evaluation of the masticatory performance, an assessment of the swallowing threshold was performed using the test gummy jellies (GLUCOLUMN; GC Co. Ltd.). Variables related to swallowing (i.e., the number of chewing cycles, chewing time, chewing rate, and glucose concentration in the filtrate (mg/dL)) were assessed for all participants. Each participant was instructed to chew a gummy jelly freely until feeling the desire to swallow, at which time they were instructed to stop chewing and signal to the examiner that they were ready to expel the gummy jelly. The examiner visually quantified the number of chewing cycles. The time from the onset of chewing to the moment when the participants raised their hands was recorded using a stopwatch [20]. The subsequent steps were the same as those used for the evaluation of masticatory performance. We determined that the glucose concentration in the filtrate was an indicator of the swallowing threshold. Participants with low swallowing threshold tend to signal that they want to swallow the gummy jelly quickly, resulting in low glucose concentrations. A glucose concentration in the lowest 20th percentile was defined as developmental failure of swallowing threshold, based on previous studies reporting the criteria for sarcopenia [21,25]. ", "All measurements were performed in duplicate, separated by 30 s rest periods, and the mean values were calculated for the subsequent analyses. All examinations were performed by the same examiner. 
Data generated during sample collection were assessed for reliability. Random error was evaluated based on intraobserver reliability, which was quantified using the intraclass correlation coefficient (ICC; where 0.8 ≤ ICC ≤ 1.0 corresponded to high reliability) [26].", "The Shapiro–Wilk test was used to determine data normality. All continuous data were expressed as means ± SD. The means were compared between two groups using a two-tailed t-test or the Mann–Whitney U test. The Kruskal–Wallis test was used for comparisons between more than two groups. Pearson’s correlation coefficient was used to determine associations among glucose concentration before swallowing and other variables. The chi-squared test or Fisher’s exact test was used, as appropriate, to compare categorical variables between the normal swallowing threshold and developmental failure of swallowing threshold (lowest 20%) groups. Binary logistic regression analysis with the forward selection (conditional) method was used to identify factors predicting developmental failure of swallowing threshold. Independent variables that were significant in the univariate analyses were included. Categorical variables were coded appropriately before being entered into the model. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for the low swallowing threshold groups. A p-value < 0.05 was considered statistically significant. All data were analyzed using SPSS Statistics for Windows software (version 23.0; IBM Corp., Armonk, NY, USA).", "The ICCs for all measurement items ranged from 0.84 to 0.91, indicating a high degree of intraobserver reliability. \nThe anthropometric parameters and dental examination results are shown in Table 1 according to age and sex. The 15-year-old males were significantly taller than females of the same age (p < 0.05). The DMFT index in 11-year-old females was significantly higher than in males of the same age (p < 0.05).\nHand grip strength and oral functional parameters are shown in Table 2 according to age and sex. The hand grip strengths in 5-, 10-, 12-, 14-, and 15-year-old males were significantly greater compared to females of the same age (all p < 0.05). \nWhen considering the results of comparing the mean scores of males and females for all measurement items by age, hand grip strength in 14–15-year-old participants was significantly greater compared to 5–7-year-old participants (all p < 0.05). Hand grip strength increased with age. The maximum occlusal force was greatest in 15-year-old participants and was significantly greater than in those aged 5–7 years (all p < 0.05). Masticatory performance was lowest in 5-year-old participants and was significantly lower than in those aged 12, 14, and 15 years (all p < 0.05). The number of chewing cycles was highest in 9-year-old participants and was significantly higher than in those aged 6 and 14 years (both p < 0.05). The change in chewing time with age was similar to the change in the number of chewing cycles. The swallowing threshold was highest in 15-year-old participants, followed by the 8-year-old participants; both were significantly higher than in those aged 6 years (p < 0.05).\nThe results of Pearson’s bivariate correlation analyses are shown in Table 3. Hand grip strength was significantly and positively correlated with masticatory performance (r = 0.611, p < 0.01) and was strongly correlated with maximum occlusal force (r = 0.791, p < 0.01). 
Swallowing threshold was strongly correlated with masticatory performance (r = 0.727, p < 0.01) and was significantly and positively correlated with the number of erupted teeth, maximum occlusal force, and number of chewing cycles (r = 0.432, p < 0.01; r = 0.493, p < 0.01; r = 0.465, p < 0.01, respectively). The number of chewing cycles was strongly correlated with chewing time (r = 0.801, p < 0.01). The correlation between age and swallowing threshold was weak but significant (r = 0.362, p < 0.01; Figure 1).\nTable 4 summarizes the data on developmental failure of swallowing threshold based on demographic and health-related variables. Participants with developmental failure of swallowing threshold were more likely to be overweight or obese, eat between meals at least once per day, and have low or no physical activity (p = 0.014, p = 0.021, and p = 0.006, respectively).\nTable 5 summarizes the data on developmental failure of swallowing threshold based on age, height, body weight, dental status, hand grip strength, and masticatory function. Swallowing threshold, age, number of erupted teeth, maximum occlusal force, masticatory performance, number of chewing cycles, and chewing time in the develop-mental failure of swallowing threshold group were significantly lower compared to the normal swallowing threshold group (all p < 0.05). The mean DMFT index in the developmental failure swallowing threshold group was significantly higher compared to the normal swallowing threshold group (p < 0.05). The mean chewing time to swallow in the normal swallowing threshold group was 20.91 s. Masticatory performance was similar between the groups.\nTable 6 shows the predictors of developmental failure of swallowing threshold, as revealed by logistic regression. Overweight or obese individuals had higher odds of developmental failure of swallowing threshold (OR = 5.343, p = 0.031, 95% CI = 1.168–24.437). Eating between meals once or more a day was also associated with higher odds of developmental failure of swallowing threshold (OR = 4.934, p = 0.049, 95% CI = 1.004–24.244).", "In this study, patterns of swallowing threshold in children according to age were clearly different from those of handgrip strength, maximum occlusal force, and masticatory performance. Maximum occlusal force and masticatory performance tended to increase with age, but the rates of increase between the ages of 12 and 15 years were not as high as those for hand grip strength. These results suggest that maximum occlusal force and masticatory performance do not grow rapidly during this period. \nWhen considering the results of univariate analysis, hand grip strength was significantly and positively correlated with masticatory performance as well as maximum occlusal force. The European Working Group on Sarcopenia in Older People 2 (EWGSOP2) reported that low activity and malnutrition cause secondary sarcopenia not only in the elderly but also in children [24]. Sarcopenia may be associated with developmental failure of masticatory function in childhood. \nWe found that age had a weak correlation with the swallowing threshold. At the age of 8 years, multiple participants had swallowing thresholds that reached or surpassed those of 15-year-old individuals (Figure 1). A recent study of individuals aged 20–79 years reported that the swallowing threshold values peaked at 40 s, which was close to the peak time for 15-year-old females in this study [21]. 
Therefore, the swallowing threshold is unlikely to be age-dependent, and we derived a “unified” threshold of developmental failure, i.e., a threshold applicable to all ages. The number of chewing cycles and chewing time tended to gradually decrease after peaking at the age of 9 and 8 years, respectively.\nIn the logistic regression analysis, obesity was significantly associated with developmental failure of swallowing threshold. A previous study reported that children who did not chew well had a significantly higher likelihood of overweight compared to the reference group of children aged 5–6 years. To assess the degree of chewing, the parents in that study answered the following self-administered questionnaire item: “How well does your child chew foods while eating?” [18]. Another study objectively evaluating the swallowing threshold reported that a lower threshold was associated with a higher BMI among preschool children aged 3–5 years. They suggested that a higher BMI was associated with a lower number of chewing cycles and a shorter chewing time [17]. A study of young adults (in their twenties) also reported that BMI was negatively correlated with the total number of chews and duration of chewing but did not correlate significantly with the chewing rate [14]. Consistent with these findings, our results showed that the number of chewing cycles and chewing time in participants with developmental failure of swallowing threshold were significantly lower compared to those with a normal swallowing threshold. However, chewing rate was not significantly different between the two groups. These findings suggested that overweight/obesity is more prevalent among children who chew their food less, and over a shorter period of time, prior to swallowing, regardless of the chewing rate.\nSeveral studies have reported that the causes of obesity are multi-faceted and include interactions among eating behaviors, obesogenic environments, genetics, and physical inactivity [5,7]. This may be indirectly associated with our finding that a significantly higher percentage of participants with developmental failure of swallowing threshold reported a lack of physical activity according to univariate analysis, although a causal relationship could not be determined.\nIn the logistic regression analysis, eating between meals at least once per day was also significantly associated with developmental failure of swallowing threshold. Chewing food activates the hypothalamic histamine nervous system and suppresses appetite through the satiety center in animals [27,28,29]. Children with developmental failure of swallowing threshold are less likely to feel sated, so it is more likely that they will eat between meals at least once per day. Several studies reported that consumption of low-quality snacks was associated with increased prevalence of obesity among children [30,31,32]. Our results indicate that this may be explained in terms of developmental failure of swallowing threshold. A study of Japanese elementary school children reported that their snacking habits were influenced by paternal eating habits [33]. Another study suggested that parents could play an important role in the control of school-age children’s food intake and choices [6]. National surveys of food intake in the United States reported that daily energy intake in children aged 2 to 18 increased significantly from 1839 kcal/day to 2023 kcal/day between 1977–1978 and 2003–2006 [34]. 
In addition, another study reported that children aged 2 to 11 consume extra energy and sugars in their diets but insufficient Vitamin D, calcium, and potassium [35]. This may be applicable to all eating behaviors, i.e., not just snacking.\nWe further considered strategies to improve developmental failure of swallowing threshold. First, the chewing rate until swallowing was not significantly different between participants with developmental failure and normal swallowing thresholds. Therefore, it may not be necessary to control the chewing rate. Second, prolonging the chewing time until swallowing may also not be helpful for preventing developmental failure of swallowing threshold; even after the participants with developmental failure of swallowing threshold chewed the gummy jelly for 20 s, masticatory performance was significantly lower (97.94 ± 24.97 mg/dL) than that of participants with normal swallowing thresholds (134.19 ± 40.75 mg/dL). The lower masticatory performance may be associated with the younger age, weaker occlusal forces, a higher DMFT index, and fewer erupted teeth in participants with developmental failure of swallowing threshold. If these problems are resolved, developmental failure of swallowing threshold may improve. However, as it is difficult to solve these problems immediately, increasing the number of chewing cycles may be the key to improving developmental failure of swallowing threshold. Considering that it is a critical period of development, the appropriate number of chewing cycles and chewing time until swallowing should be established by the age of 9 years.\nOur study had some limitations. First, The National Health Examination Survey among 12- to 17-year-old US adolescents reported that BMI in the adolescents with a concave facial profile was higher than that in the adolescents with a straight facial profile [36]. Another study reported that a higher BMI was associated with a lower bite force in children aged 8–10 years [37]. These results suggest that facial profile type in children, especially Class III malocclusion, could be a factor associated with developmental failure of swallowing threshold. However, in this study, relationships between the type of malocclusion and the characteristics of the swallowing threshold were not able to be clarified, because participants’ malocclusions were not evaluated in detail. In the future, it is necessary to clarify the relationship between malocclusion and obesity and developmental failure of swallowing threshold in children. Second, the number of participants was small for performing logistic regression analysis. Among the patients who visited two dental clinics, the number of patients who met all of the conditions we set was very limited. Therefore, this study was performed with the minimum required number of participants calculated by the power analysis. It may be that a highly accurate model will be built in studies with higher numbers of participants. Third, it used a cross-sectional design, which precluded determination of causal relationships between developmental failure of swallowing threshold and the variables included in the logistic regression analyses. Longitudinal studies are needed to investigate the relative influence of factors associated with developmental failure of swallowing threshold. 
Additionally, the diet- and lifestyle-related questionnaires evaluated only children’s problems; future studies may benefit from evaluating the parents’ eating habits and dietary education.", "We found that the relationship between swallowing threshold and age was unclear, whereas the number of chewing cycles and chewing time tended to gradually decrease after reaching a peak at the age of 9 and 8 years, respectively. An appropriate number of chewing cycles and chewing time until swallowing should be established by 9 years of age. Developmental failure of swallowing threshold was closely associated with childhood obesity and eating between meals at least once per day among the 5- to 15-year-old individuals." ]
[ "intro", null, "subjects", null, null, null, null, null, null, null, null, "results", "discussion", "conclusions" ]
[ "eating behaviors", "obesity", "swallowing threshold", "number of chewing cycle", "children" ]
1. Introduction: Eating habits acquired in early childhood can change over time based on personal experience and learning [1,2]. Eating habits acquired in childhood or adolescence may persist into adulthood [3]. Poor dietary habits established during childhood that persist into adulthood increase the risk of obesity and related adverse consequences [4,5,6]. In previous studies, early childhood obesity was associated with a higher risk of adult metabolic syndrome [7,8]. Eating behaviors such as frequent consumption of fast food and sugar-sweetened beverages are associated with the development of childhood obesity [9]. In addition, factors such as eating frequency, amount, and occasions may interact to cause obesity [5,7]. Other studies indicated that early modification of poor eating habits could improve health and decrease the future risk of metabolic syndrome, type 2 diabetes, and atherosclerotic cardiovascular diseases [10,11]. A relationship between obesity and eating speed was reported recently. Eating speed was associated with the incidence of metabolic syndrome in adults [12,13], while eating quickly was associated with increased body mass index (BMI) in young adults [14]. Similar results have also been reported in children [15,16,17]. In addition, chewing well before swallowing might be an effective method for reducing food intake and facilitating healthy weight management in preschool children [18] and adults [19]. These findings suggest that appropriate eating behaviors should be acquired within the critical early time period. However, the number of chewing cycles, chewing time, and chewing rate have not been analyzed in the context of the swallowing threshold, and it remains unclear when these functions are acquired. We hypothesized that these functions would be acquired relatively early in childhood. Additionally, based on previous studies, we hypothesized that developmental failure of swallowing threshold in children would be associated with eating fast, chewing frequently, and other habits [17,20]. This developmental failure could involve physiological problems including obesity. Identifying the factors associated with developmental failure of swallowing threshold may aid the development of strategies for improving it. The primary purpose of this study was to clarify differences in swallowing thresholds and other oral functions according to age, as well as when functions related to the swallowing threshold are acquired. A secondary aim was to determine factors related to the developmental failure of swallowing threshold in children aged 5–15 years. In this study, swallowing threshold was determined based on the concentration of dissolved glucose obtained from gummy jellies when the participants signaled that they wanted to swallow the chewed gummy jellies. 2. Materials and Methods: 2.1. Participants This study was conducted in accordance with the Declaration of Helsinki and was approved by the Human Investigation Committee of Kyushu Dental University (approval number: 18–37). All participants and their parents/guardians provided written informed consent for participation in the study. The participants were recruited following an initial examination at two private dental clinics in Japan. The inclusion criteria were as follows: aged 5–15 years, normal language comprehension, cooperative behavior, and no current dental diseases or complications. 
The exclusion criteria included systemic disturbances causing swallowing impairment, obvious facial asymmetry that could affect the measurements, soft tissue abnormalities, temporomandibular joint dysfunction, and dental or structural irregularities. Using software (PS: Power and Sample Size Calculation, available from the Vanderbilt University’s website), sample size was calculated. The study was based on a continuous response variable ranging from normal swallowing threshold to developmental failure of swallowing threshold; four normal participants were recruited for every participant with the developmental failure of swallowing threshold. In a previous study, the data for each group were normally distributed, with a standard deviation (SD) of swallowing threshold of 40 mg/dL [21]. Based on a true difference of 40 mg/dL between the developmental failure and normal groups, 14 developmental failure and 56 normal participants were required to reject the null hypothesis (i.e., the population means of the developmental failure and normal groups are equal) with a power of 0.9. The type I error probability for the null hypothesis was 0.05. The participants were divided into 11 groups based on their chronological age, and each group was further divided into two subgroups based on sex. Details of the participants are shown in Table 1. This study was conducted in accordance with the Declaration of Helsinki and was approved by the Human Investigation Committee of Kyushu Dental University (approval number: 18–37). All participants and their parents/guardians provided written informed consent for participation in the study. The participants were recruited following an initial examination at two private dental clinics in Japan. The inclusion criteria were as follows: aged 5–15 years, normal language comprehension, cooperative behavior, and no current dental diseases or complications. The exclusion criteria included systemic disturbances causing swallowing impairment, obvious facial asymmetry that could affect the measurements, soft tissue abnormalities, temporomandibular joint dysfunction, and dental or structural irregularities. Using software (PS: Power and Sample Size Calculation, available from the Vanderbilt University’s website), sample size was calculated. The study was based on a continuous response variable ranging from normal swallowing threshold to developmental failure of swallowing threshold; four normal participants were recruited for every participant with the developmental failure of swallowing threshold. In a previous study, the data for each group were normally distributed, with a standard deviation (SD) of swallowing threshold of 40 mg/dL [21]. Based on a true difference of 40 mg/dL between the developmental failure and normal groups, 14 developmental failure and 56 normal participants were required to reject the null hypothesis (i.e., the population means of the developmental failure and normal groups are equal) with a power of 0.9. The type I error probability for the null hypothesis was 0.05. The participants were divided into 11 groups based on their chronological age, and each group was further divided into two subgroups based on sex. Details of the participants are shown in Table 1. 2.2. Questionnaire The survey solicited the following information: demographic characteristics (sex and age), eating habits, physical activity, and sleep [21]. 
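As a cross-check of the reported sample size, the calculation described in the Participants subsection above can be reproduced approximately with standard power-analysis software. The sketch below is an illustrative re-derivation in Python (statsmodels), not the PS program the study actually used; it assumes a two-sample comparison of mean swallowing threshold with a true difference of 40 mg/dL, an SD of 40 mg/dL, a two-sided alpha of 0.05, a power of 0.9, and a 4:1 allocation of normal to developmental-failure participants. Small numerical differences from the PS output are possible.

```python
# Illustrative re-derivation of the reported sample size (not the authors' PS software).
# Assumes a two-sample comparison of means: difference = 40 mg/dL, SD = 40 mg/dL,
# two-sided alpha = 0.05, power = 0.9, and a 4:1 allocation (normal : failure).
from math import ceil

from statsmodels.stats.power import TTestIndPower

effect_size = 40 / 40  # difference divided by SD (Cohen's d)

# nobs1 is the smaller (developmental failure) group; ratio = nobs2 / nobs1
n_failure = TTestIndPower().solve_power(effect_size=effect_size,
                                        alpha=0.05,
                                        power=0.9,
                                        ratio=4.0,
                                        alternative='two-sided')

# Rounded up, this gives roughly 14 failure and 56 normal participants,
# consistent with the values reported above.
print(ceil(n_failure), ceil(n_failure) * 4)
```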
The survey solicited the following information: demographic characteristics (sex and age), eating habits, physical activity, and sleep [21]. 2.3. Anthropometry and Dental Examination Height and body weight were measured in the consultation room of the clinic. Height was measured to an accuracy of ±0.1 cm using a portable digital stadiometer (AD-653; A&D, Tokyo, Japan), with the head in the Frankfort plane, whereas body weight was measured with an accuracy of 0.1 kg [22]. Rohrer and Kaup indices were calculated from the height and weight. During intraoral examination, the number of erupted teeth and the decayed, missing, and filled teeth (DMFT) index were recorded for each patient [23]. Height and body weight were measured in the consultation room of the clinic. Height was measured to an accuracy of ±0.1 cm using a portable digital stadiometer (AD-653; A&D, Tokyo, Japan), with the head in the Frankfort plane, whereas body weight was measured with an accuracy of 0.1 kg [22]. Rohrer and Kaup indices were calculated from the height and weight. During intraoral examination, the number of erupted teeth and the decayed, missing, and filled teeth (DMFT) index were recorded for each patient [23]. 2.4. Hand Grip Strength Hand grip strength is generally used as an index of muscle weakness in the diagnosis of sarcopenia [24]. Therefore, hand grip strength measurement was performed for evaluating systemic muscle strength in the participants. A portable grip strength meter (T-2288; Toei Light Co., Ltd., Saitama, Japan) was used to measure hand grip strength. Participants were asked to stand and hold the dynamometer in their hand with the arm parallel to the body, but without squeezing the arm against the body. Hand grip strength (kg) was measured twice for each hand (alternately) with a 30 s interval between trials. The highest value from either the left or right hand was recorded as the grip strength [20]. Hand grip strength is generally used as an index of muscle weakness in the diagnosis of sarcopenia [24]. Therefore, hand grip strength measurement was performed for evaluating systemic muscle strength in the participants. A portable grip strength meter (T-2288; Toei Light Co., Ltd., Saitama, Japan) was used to measure hand grip strength. Participants were asked to stand and hold the dynamometer in their hand with the arm parallel to the body, but without squeezing the arm against the body. Hand grip strength (kg) was measured twice for each hand (alternately) with a 30 s interval between trials. The highest value from either the left or right hand was recorded as the grip strength [20]. 2.5. Maximum Occlusal Force Maximum occlusal force was measured using a portable occlusal force meter (GM10; Nagano Keiki Co., Ltd., Tokyo, Japan). The participants were instructed to bite down with maximal voluntary muscular effort using their first molars. Maximum occlusal force was measured on each side, with a 30 s interval between bite measurements. The larger of the values from the left and right sides was recorded as the maximum bite force [21]. Maximum occlusal force was measured using a portable occlusal force meter (GM10; Nagano Keiki Co., Ltd., Tokyo, Japan). The participants were instructed to bite down with maximal voluntary muscular effort using their first molars. Maximum occlusal force was measured on each side, with a 30 s interval between bite measurements. The larger of the values from the left and right sides was recorded as the maximum bite force [21]. 2.6. 
Masticatory Performance Masticatory performance was determined by measuring the concentration of dissolved glucose obtained from a cylindrical gummy jelly (GLUCOLUMN; GC Co., Ltd., Tokyo, Japan) using a glucose-measuring device (GLUCO SENSOR GS-II; GC Co., Ltd., Tokyo, Japan). Prior to the test, the participants were instructed regarding the chewing movements and mouth rinsing procedures to prevent swallowing. The participants were then instructed to chew the gummy jelly for 20 s freely. After chewing, the participants were asked to take 10 mL of distilled water into their mouth and spit out the gummy jelly and distilled water into a filter cup. The glucose concentration in the filtrate (mg/dL) was measured using a dedicated device [20]. Masticatory performance was determined by measuring the concentration of dissolved glucose obtained from a cylindrical gummy jelly (GLUCOLUMN; GC Co., Ltd., Tokyo, Japan) using a glucose-measuring device (GLUCO SENSOR GS-II; GC Co., Ltd., Tokyo, Japan). Prior to the test, the participants were instructed regarding the chewing movements and mouth rinsing procedures to prevent swallowing. The participants were then instructed to chew the gummy jelly for 20 s freely. After chewing, the participants were asked to take 10 mL of distilled water into their mouth and spit out the gummy jelly and distilled water into a filter cup. The glucose concentration in the filtrate (mg/dL) was measured using a dedicated device [20]. 2.7. Swallowing Threshold Following evaluation of the masticatory performance, an assessment of the swallowing threshold was performed using the test gummy jellies (GLUCOLUMN; GC Co. Ltd.). Variables related to swallowing (i.e., the number of chewing cycles, chewing time, chewing rate, and glucose concentration in the filtrate (mg/dL)) were assessed for all participants. Each participant was instructed to chew a gummy jelly freely until feeling the desire to swallow, at which time they were instructed to stop chewing and signal to the examiner that they were ready to expel the gummy jelly. The examiner visually quantified the number of chewing cycles. The time from the onset of chewing to the moment when the participants raised their hands was recorded using a stopwatch [20]. The subsequent steps were the same as those used for the evaluation of masticatory performance. We determined that the glucose concentration in the filtrate was an indicator of the swallowing threshold. Participants with low swallowing threshold tend to signal that they want to swallow the gummy jelly quickly, resulting in low glucose concentrations. A glucose concentration in the lowest 20th percentile was defined as developmental failure of swallowing threshold, based on previous studies reporting the criteria for sarcopenia [21,25]. Following evaluation of the masticatory performance, an assessment of the swallowing threshold was performed using the test gummy jellies (GLUCOLUMN; GC Co. Ltd.). Variables related to swallowing (i.e., the number of chewing cycles, chewing time, chewing rate, and glucose concentration in the filtrate (mg/dL)) were assessed for all participants. Each participant was instructed to chew a gummy jelly freely until feeling the desire to swallow, at which time they were instructed to stop chewing and signal to the examiner that they were ready to expel the gummy jelly. The examiner visually quantified the number of chewing cycles. 
The time from the onset of chewing to the moment when the participants raised their hands was recorded using a stopwatch [20]. The subsequent steps were the same as those used for the evaluation of masticatory performance. We determined that the glucose concentration in the filtrate was an indicator of the swallowing threshold. Participants with low swallowing threshold tend to signal that they want to swallow the gummy jelly quickly, resulting in low glucose concentrations. A glucose concentration in the lowest 20th percentile was defined as developmental failure of swallowing threshold, based on previous studies reporting the criteria for sarcopenia [21,25]. 2.8. Reliability of Measurements All measurements were performed in duplicate, separated by 30 s rest periods, and the mean values were calculated for the subsequent analyses. All examinations were performed by the same examiner. Data generated during sample collection were assessed for reliability. Random error was evaluated based on intraobserver reliability, which was quantified using the intraclass correlation coefficient (ICC; where 0.8 ≤ ICC ≤ 1.0 corresponded to high reliability) [26]. All measurements were performed in duplicate, separated by 30 s rest periods, and the mean values were calculated for the subsequent analyses. All examinations were performed by the same examiner. Data generated during sample collection were assessed for reliability. Random error was evaluated based on intraobserver reliability, which was quantified using the intraclass correlation coefficient (ICC; where 0.8 ≤ ICC ≤ 1.0 corresponded to high reliability) [26]. 2.9. Statistical Analysis The Shapiro–Wilk test was used to determine data normality. All continuous data were expressed as means ± SD. The means were compared between two groups using a two-tailed t-test or the Mann–Whitney U test. The Kruskal–Wallis test was used for comparisons between more than two groups. Pearson’s correlation coefficient was used to determine associations among glucose concentration before swallowing and other variables. The chi-squared test or Fisher’s exact test was used, as appropriate, to compare categorical variables between the normal swallowing threshold and developmental failure of swallowing threshold (lowest 20%) groups. Binary logistic regression analysis with the forward selection (conditional) method was used to identify factors predicting developmental failure of swallowing threshold. Independent variables that were significant in the univariate analyses were included. Categorical variables were coded appropriately before being entered into the model. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for the low swallowing threshold groups. A p-value < 0.05 was considered statistically significant. All data were analyzed using SPSS Statistics for Windows software (version 23.0; IBM Corp., Armonk, NY, USA). The Shapiro–Wilk test was used to determine data normality. All continuous data were expressed as means ± SD. The means were compared between two groups using a two-tailed t-test or the Mann–Whitney U test. The Kruskal–Wallis test was used for comparisons between more than two groups. Pearson’s correlation coefficient was used to determine associations among glucose concentration before swallowing and other variables. 
The chi-squared test or Fisher’s exact test was used, as appropriate, to compare categorical variables between the normal swallowing threshold and developmental failure of swallowing threshold (lowest 20%) groups. Binary logistic regression analysis with the forward selection (conditional) method was used to identify factors predicting developmental failure of swallowing threshold. Independent variables that were significant in the univariate analyses were included. Categorical variables were coded appropriately before being entered into the model. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for the low swallowing threshold groups. A p-value < 0.05 was considered statistically significant. All data were analyzed using SPSS Statistics for Windows software (version 23.0; IBM Corp., Armonk, NY, USA). 2.1. Participants: This study was conducted in accordance with the Declaration of Helsinki and was approved by the Human Investigation Committee of Kyushu Dental University (approval number: 18–37). All participants and their parents/guardians provided written informed consent for participation in the study. The participants were recruited following an initial examination at two private dental clinics in Japan. The inclusion criteria were as follows: aged 5–15 years, normal language comprehension, cooperative behavior, and no current dental diseases or complications. The exclusion criteria included systemic disturbances causing swallowing impairment, obvious facial asymmetry that could affect the measurements, soft tissue abnormalities, temporomandibular joint dysfunction, and dental or structural irregularities. Using software (PS: Power and Sample Size Calculation, available from the Vanderbilt University’s website), sample size was calculated. The study was based on a continuous response variable ranging from normal swallowing threshold to developmental failure of swallowing threshold; four normal participants were recruited for every participant with the developmental failure of swallowing threshold. In a previous study, the data for each group were normally distributed, with a standard deviation (SD) of swallowing threshold of 40 mg/dL [21]. Based on a true difference of 40 mg/dL between the developmental failure and normal groups, 14 developmental failure and 56 normal participants were required to reject the null hypothesis (i.e., the population means of the developmental failure and normal groups are equal) with a power of 0.9. The type I error probability for the null hypothesis was 0.05. The participants were divided into 11 groups based on their chronological age, and each group was further divided into two subgroups based on sex. Details of the participants are shown in Table 1. 2.2. Questionnaire: The survey solicited the following information: demographic characteristics (sex and age), eating habits, physical activity, and sleep [21]. 2.3. Anthropometry and Dental Examination: Height and body weight were measured in the consultation room of the clinic. Height was measured to an accuracy of ±0.1 cm using a portable digital stadiometer (AD-653; A&D, Tokyo, Japan), with the head in the Frankfort plane, whereas body weight was measured with an accuracy of 0.1 kg [22]. Rohrer and Kaup indices were calculated from the height and weight. During intraoral examination, the number of erupted teeth and the decayed, missing, and filled teeth (DMFT) index were recorded for each patient [23]. 2.4. 
Hand Grip Strength: Hand grip strength is generally used as an index of muscle weakness in the diagnosis of sarcopenia [24]. Therefore, hand grip strength measurement was performed for evaluating systemic muscle strength in the participants. A portable grip strength meter (T-2288; Toei Light Co., Ltd., Saitama, Japan) was used to measure hand grip strength. Participants were asked to stand and hold the dynamometer in their hand with the arm parallel to the body, but without squeezing the arm against the body. Hand grip strength (kg) was measured twice for each hand (alternately) with a 30 s interval between trials. The highest value from either the left or right hand was recorded as the grip strength [20]. 2.5. Maximum Occlusal Force: Maximum occlusal force was measured using a portable occlusal force meter (GM10; Nagano Keiki Co., Ltd., Tokyo, Japan). The participants were instructed to bite down with maximal voluntary muscular effort using their first molars. Maximum occlusal force was measured on each side, with a 30 s interval between bite measurements. The larger of the values from the left and right sides was recorded as the maximum bite force [21]. 2.6. Masticatory Performance: Masticatory performance was determined by measuring the concentration of dissolved glucose obtained from a cylindrical gummy jelly (GLUCOLUMN; GC Co., Ltd., Tokyo, Japan) using a glucose-measuring device (GLUCO SENSOR GS-II; GC Co., Ltd., Tokyo, Japan). Prior to the test, the participants were instructed regarding the chewing movements and mouth rinsing procedures to prevent swallowing. The participants were then instructed to chew the gummy jelly for 20 s freely. After chewing, the participants were asked to take 10 mL of distilled water into their mouth and spit out the gummy jelly and distilled water into a filter cup. The glucose concentration in the filtrate (mg/dL) was measured using a dedicated device [20]. 2.7. Swallowing Threshold: Following evaluation of the masticatory performance, an assessment of the swallowing threshold was performed using the test gummy jellies (GLUCOLUMN; GC Co. Ltd.). Variables related to swallowing (i.e., the number of chewing cycles, chewing time, chewing rate, and glucose concentration in the filtrate (mg/dL)) were assessed for all participants. Each participant was instructed to chew a gummy jelly freely until feeling the desire to swallow, at which time they were instructed to stop chewing and signal to the examiner that they were ready to expel the gummy jelly. The examiner visually quantified the number of chewing cycles. The time from the onset of chewing to the moment when the participants raised their hands was recorded using a stopwatch [20]. The subsequent steps were the same as those used for the evaluation of masticatory performance. We determined that the glucose concentration in the filtrate was an indicator of the swallowing threshold. Participants with low swallowing threshold tend to signal that they want to swallow the gummy jelly quickly, resulting in low glucose concentrations. A glucose concentration in the lowest 20th percentile was defined as developmental failure of swallowing threshold, based on previous studies reporting the criteria for sarcopenia [21,25]. 2.8. Reliability of Measurements: All measurements were performed in duplicate, separated by 30 s rest periods, and the mean values were calculated for the subsequent analyses. All examinations were performed by the same examiner. 
Data generated during sample collection were assessed for reliability. Random error was evaluated based on intraobserver reliability, which was quantified using the intraclass correlation coefficient (ICC; where 0.8 ≤ ICC ≤ 1.0 corresponded to high reliability) [26]. 2.9. Statistical Analysis: The Shapiro–Wilk test was used to determine data normality. All continuous data were expressed as means ± SD. The means were compared between two groups using a two-tailed t-test or the Mann–Whitney U test. The Kruskal–Wallis test was used for comparisons between more than two groups. Pearson’s correlation coefficient was used to determine associations among glucose concentration before swallowing and other variables. The chi-squared test or Fisher’s exact test was used, as appropriate, to compare categorical variables between the normal swallowing threshold and developmental failure of swallowing threshold (lowest 20%) groups. Binary logistic regression analysis with the forward selection (conditional) method was used to identify factors predicting developmental failure of swallowing threshold. Independent variables that were significant in the univariate analyses were included. Categorical variables were coded appropriately before being entered into the model. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for the low swallowing threshold groups. A p-value < 0.05 was considered statistically significant. All data were analyzed using SPSS Statistics for Windows software (version 23.0; IBM Corp., Armonk, NY, USA). 3. Results: The ICCs for all measurement items ranged from 0.84 to 0.91, indicating a high degree of intraobserver reliability. The anthropometric parameters and dental examination results are shown in Table 1 according to age and sex. The 15-year-old males were significantly taller than females of the same age (p < 0.05). The DMFT index in 11-year-old females was significantly higher than in males of the same age (p < 0.05). Hand grip strength and oral functional parameters are shown in Table 2 according to age and sex. The hand grip strengths in 5-, 10-, 12-, 14-, and 15-year-old males were significantly greater compared to females of the same age (all p < 0.05). When considering the results of comparing the mean scores of males and females for all measurement items by age, hand grip strength in 14–15-year-old participants was significantly greater compared to 5–7-year-old participants (all p < 0.05). Hand grip strength increased with age. The maximum occlusal force was greatest in 15-year-old participants and was significantly greater than in those aged 5–7 years (all p < 0.05). Masticatory performance was lowest in 5-year-old participants and was significantly lower than in those aged 12, 14, and 15 years (all p < 0.05). The number of chewing cycles was highest in 9-year-old participants and was significantly higher than in those aged 6 and 14 years (both p < 0.05). The change in chewing time with age was similar to the change in the number of chewing cycles. The swallowing threshold was highest in 15-year-old participants, followed by the 8-year-old participants; both were significantly higher than in those aged 6 years (p < 0.05). The results of Pearson’s bivariate correlation analyses are shown in Table 3. Hand grip strength was significantly and positively correlated with masticatory performance (r = 0.611, p < 0.01) and was strongly correlated with maximum occlusal force (r = 0.791, p < 0.01). 
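For readers unfamiliar with how the adjusted odds ratios and 95% confidence intervals described in the statistical analysis (Section 2.9) and reported in the Results are obtained, the sketch below fits a binary logistic regression and exponentiates the coefficients and their confidence limits. It is only an illustration on simulated data; the study itself used SPSS with forward (conditional) selection, and the variable names and data here are hypothetical.

```python
# Minimal sketch (not the authors' SPSS analysis): adjusted odds ratios and 95% CIs
# are the exponentiated coefficients and confidence limits of a logistic regression.
# The simulated data and variable names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
overweight = rng.integers(0, 2, n)        # 1 = overweight/obese
snacks_daily = rng.integers(0, 2, n)      # 1 = eats between meals >= once per day
logit_p = -2.0 + 1.0 * overweight + 0.9 * snacks_daily
failure = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # 1 = developmental failure

df = pd.DataFrame({"failure": failure,
                   "overweight": overweight,
                   "snacks_daily": snacks_daily})

model = smf.logit("failure ~ overweight + snacks_daily", data=df).fit(disp=False)

ci = model.conf_int()
odds_ratios = pd.DataFrame({"OR": np.exp(model.params),
                            "CI_lower": np.exp(ci[0]),
                            "CI_upper": np.exp(ci[1])})
print(odds_ratios)
```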
Swallowing threshold was strongly correlated with masticatory performance (r = 0.727, p < 0.01) and was significantly and positively correlated with the number of erupted teeth, maximum occlusal force, and number of chewing cycles (r = 0.432, p < 0.01; r = 0.493, p < 0.01; r = 0.465, p < 0.01, respectively). The number of chewing cycles was strongly correlated with chewing time (r = 0.801, p < 0.01). The correlation between age and swallowing threshold was weak but significant (r = 0.362, p < 0.01; Figure 1). Table 4 summarizes the data on developmental failure of swallowing threshold based on demographic and health-related variables. Participants with developmental failure of swallowing threshold were more likely to be overweight or obese, eat between meals at least once per day, and have low or no physical activity (p = 0.014, p = 0.021, and p = 0.006, respectively). Table 5 summarizes the data on developmental failure of swallowing threshold based on age, height, body weight, dental status, hand grip strength, and masticatory function. Swallowing threshold, age, number of erupted teeth, maximum occlusal force, masticatory performance, number of chewing cycles, and chewing time in the developmental failure of swallowing threshold group were significantly lower compared to the normal swallowing threshold group (all p < 0.05). The mean DMFT index in the developmental failure of swallowing threshold group was significantly higher compared to the normal swallowing threshold group (p < 0.05). The mean chewing time to swallow in the normal swallowing threshold group was 20.91 s. Masticatory performance was similar between the groups. Table 6 shows the predictors of developmental failure of swallowing threshold, as revealed by logistic regression. Overweight or obese individuals had higher odds of developmental failure of swallowing threshold (OR = 5.343, p = 0.031, 95% CI = 1.168–24.437). Eating between meals once or more a day was also associated with higher odds of developmental failure of swallowing threshold (OR = 4.934, p = 0.049, 95% CI = 1.004–24.244). 4. Discussion: In this study, patterns of swallowing threshold in children according to age were clearly different from those of handgrip strength, maximum occlusal force, and masticatory performance. Maximum occlusal force and masticatory performance tended to increase with age, but the rates of increase between the ages of 12 and 15 years were not as high as those for hand grip strength. These results suggest that maximum occlusal force and masticatory performance do not grow rapidly during this period. When considering the results of univariate analysis, hand grip strength was significantly and positively correlated with masticatory performance as well as maximum occlusal force. The European Working Group on Sarcopenia in Older People 2 (EWGSOP2) reported that low activity and malnutrition cause secondary sarcopenia not only in the elderly but also in children [24]. Sarcopenia may be associated with developmental failure of masticatory function in childhood. We found that age had a weak correlation with the swallowing threshold. At the age of 8 years, multiple participants had swallowing thresholds that reached or surpassed those of 15-year-old individuals (Figure 1). A recent study of individuals aged 20–79 years reported that the swallowing threshold values peaked at 40 s, which was close to the peak time for 15-year-old females in this study [21]. 
Therefore, the swallowing threshold is unlikely to be age-dependent, and we derived a “unified” threshold of developmental failure, i.e., a threshold applicable to all ages. The number of chewing cycles and chewing time tended to gradually decrease after peaking at the age of 9 and 8 years, respectively. In the logistic regression analysis, obesity was significantly associated with developmental failure of swallowing threshold. A previous study reported that children who did not chew well had a significantly higher likelihood of overweight compared to the reference group of children aged 5–6 years. To assess the degree of chewing, the parents in that study answered the following self-administered questionnaire item: “How well does your child chew foods while eating?” [18]. Another study objectively evaluating the swallowing threshold reported that a lower threshold was associated with a higher BMI among preschool children aged 3–5 years. They suggested that a higher BMI was associated with a lower number of chewing cycles and a shorter chewing time [17]. A study of young adults (in their twenties) also reported that BMI was negatively correlated with the total number of chews and duration of chewing but did not correlate significantly with the chewing rate [14]. Consistent with these findings, our results showed that the number of chewing cycles and chewing time in participants with developmental failure of swallowing threshold were significantly lower compared to those with a normal swallowing threshold. However, chewing rate was not significantly different between the two groups. These findings suggested that overweight/obesity is more prevalent among children who chew their food less, and over a shorter period of time, prior to swallowing, regardless of the chewing rate. Several studies have reported that the causes of obesity are multi-faceted and include interactions among eating behaviors, obesogenic environments, genetics, and physical inactivity [5,7]. This may be indirectly associated with our finding that a significantly higher percentage of participants with developmental failure of swallowing threshold reported a lack of physical activity according to univariate analysis, although a causal relationship could not be determined. In the logistic regression analysis, eating between meals at least once per day was also significantly associated with developmental failure of swallowing threshold. Chewing food activates the hypothalamic histamine nervous system and suppresses appetite through the satiety center in animals [27,28,29]. Children with developmental failure of swallowing threshold are less likely to feel sated, so it is more likely that they will eat between meals at least once per day. Several studies reported that consumption of low-quality snacks was associated with increased prevalence of obesity among children [30,31,32]. Our results indicate that this may be explained in terms of developmental failure of swallowing threshold. A study of Japanese elementary school children reported that their snacking habits were influenced by paternal eating habits [33]. Another study suggested that parents could play an important role in the control of school-age children’s food intake and choices [6]. National surveys of food intake in the United States reported that daily energy intake in children aged 2 to 18 increased significantly from 1839 kcal/day to 2023 kcal/day between 1977–1978 and 2003–2006 [34]. 
In addition, another study reported that children aged 2 to 11 consume extra energy and sugars in their diets but insufficient vitamin D, calcium, and potassium [35]. This may be applicable to all eating behaviors, i.e., not just snacking. We further considered strategies to improve developmental failure of swallowing threshold. First, the chewing rate until swallowing was not significantly different between participants with developmental failure and normal swallowing thresholds. Therefore, it may not be necessary to control the chewing rate. Second, prolonging the chewing time until swallowing may also not be helpful for preventing developmental failure of swallowing threshold; even after the participants with developmental failure of swallowing threshold chewed the gummy jelly for 20 s, masticatory performance was significantly lower (97.94 ± 24.97 mg/dL) than that of participants with normal swallowing thresholds (134.19 ± 40.75 mg/dL). The lower masticatory performance may be associated with the younger age, weaker occlusal forces, a higher DMFT index, and fewer erupted teeth in participants with developmental failure of swallowing threshold. If these problems are resolved, developmental failure of swallowing threshold may improve. However, as it is difficult to solve these problems immediately, increasing the number of chewing cycles may be the key to improving developmental failure of swallowing threshold. Considering that this is a critical period of development, the appropriate number of chewing cycles and chewing time until swallowing should be established by the age of 9 years. Our study had some limitations. First, the National Health Examination Survey among 12- to 17-year-old US adolescents reported that BMI in the adolescents with a concave facial profile was higher than that in the adolescents with a straight facial profile [36]. Another study reported that a higher BMI was associated with a lower bite force in children aged 8–10 years [37]. These results suggest that facial profile type in children, especially Class III malocclusion, could be a factor associated with developmental failure of swallowing threshold. However, the relationship between the type of malocclusion and the characteristics of the swallowing threshold could not be clarified in this study, because participants’ malocclusions were not evaluated in detail. In the future, the relationships among malocclusion, obesity, and developmental failure of swallowing threshold in children need to be clarified. Second, the number of participants was small for the logistic regression analysis. Among the patients who visited the two dental clinics, the number of patients who met all of the conditions we set was very limited. Therefore, this study was performed with the minimum required number of participants calculated by the power analysis. A more accurate model may be obtained in studies with larger numbers of participants. Third, the study used a cross-sectional design, which precluded determination of causal relationships between developmental failure of swallowing threshold and the variables included in the logistic regression analyses. Longitudinal studies are needed to investigate the relative influence of factors associated with developmental failure of swallowing threshold. Additionally, the diet- and lifestyle-related questionnaires evaluated only children’s problems; future studies may benefit from evaluating the parents’ eating habits and dietary education. 5. 
Conclusions: We found that the relationship between swallowing threshold and age was unclear, whereas the number of chewing cycles and chewing time tended to gradually decrease after reaching a peak at the age of 9 and 8 years, respectively. An appropriate number of chewing cycles and chewing time until swallowing should be established by 9 years of age. Developmental failure of swallowing threshold was closely associated with childhood obesity and eating between meals at least once per day among the 5- to 15-year-old individuals.
Background: The aim of the present study was to identify factors related to developmental failure of swallowing threshold in children aged 5-15 years. Methods: A total of 83 children aged 5-15 years were included in this study. A self-administered lifestyle questionnaire was completed, along with hand grip strength and oral function tests. Swallowing threshold was determined based on the concentration of dissolved glucose obtained from gummy jellies when the participants signaled that they wanted to swallow the chewed gummy jellies. Developmental failure of swallowing threshold was defined as glucose concentrations in the lowest 20th percentile. After univariate analysis, multivariate binary logistic regression analysis was used to identify factors associated with developmental failure of swallowing threshold. Results: Hand grip strength was significantly correlated with masticatory performance (r = 0.611, p < 0.01). Logistic regression analysis revealed factors related to developmental failure of swallowing threshold, i.e., overweight/obesity (odds ratio (OR) = 5.343, p = 0.031, 95% CI = 1.168-24.437) and eating between meals at least once a day (OR = 4.934, p = 0.049, 95% CI = 1.004-24.244). Conclusions: Developmental failure of swallowing threshold was closely associated with childhood obesity in 5- to 15-year-old children.
1. Introduction: Eating habits acquired in early childhood can change over time based on personal experience and learning [1,2]. Eating habits acquired in childhood or adolescence may persist into adulthood [3]. Poor dietary habits established during childhood that persist into adulthood increase the risk of obesity and related adverse consequences [4,5,6]. In previous studies, early childhood obesity was associated with a higher risk of adult metabolic syndrome [7,8]. Eating behaviors such as frequent consumption of fast food and sugar-sweetened beverages are associated with the development of childhood obesity [9]. In addition, factors such as eating frequency, amount, and occasions may interact to cause obesity [5,7]. Other studies indicated that early modification of poor eating habits could improve health and decrease the future risk of metabolic syndrome, type 2 diabetes, and atherosclerotic cardiovascular diseases [10,11]. A relationship between obesity and eating speed was reported recently. Eating speed was associated with the incidence of metabolic syndrome in adults [12,13], while eating quickly was associated with increased body mass index (BMI) in young adults [14]. Similar results have also been reported in children [15,16,17]. In addition, chewing well before swallowing might be an effective method for reducing food intake and facilitating healthy weight management in preschool children [18] and adults [19]. These findings suggest that appropriate eating behaviors should be acquired within the critical early time period. However, the number of chewing cycles, chewing time, and chewing rate have not been analyzed in the context of the swallowing threshold, and it remains unclear when these functions are acquired. We hypothesized that these functions would be acquired relatively early in childhood. Additionally, based on previous studies, we hypothesized that developmental failure of swallowing threshold in children would be associated with eating fast, chewing frequently, and other habits [17,20]. This developmental failure could involve physiological problems including obesity. Identifying the factors associated with developmental failure of swallowing threshold may aid the development of strategies for improving it. The primary purpose of this study was to clarify differences in swallowing thresholds and other oral functions according to age, as well as when functions related to the swallowing threshold are acquired. A secondary aim was to determine factors related to the developmental failure of swallowing threshold in children aged 5–15 years. In this study, swallowing threshold was determined based on the concentration of dissolved glucose obtained from gummy jellies when the participants signaled that they wanted to swallow the chewed gummy jellies. 5. Conclusions: We found that the relationship between swallowing threshold and age was unclear, whereas the number of chewing cycles and chewing time tended to gradually decrease after reaching a peak at the age of 9 and 8 years, respectively. An appropriate number of chewing cycles and chewing time until swallowing should be established by 9 years of age. Developmental failure of swallowing threshold was closely associated with childhood obesity and eating between meals at least once per day among the 5- to 15-year-old individuals.
Background: The aim of the present study was to identify factors related to developmental failure of swallowing threshold in children aged 5-15 years. Methods: A total of 83 children aged 5-15 years were included in this study. A self-administered lifestyle questionnaire was completed, along with hand grip strength and oral function tests. Swallowing threshold was determined based on the concentration of dissolved glucose obtained from gummy jellies when the participants signaled that they wanted to swallow the chewed gummy jellies. Developmental failure of swallowing threshold was defined as glucose concentrations in the lowest 20th percentile. After univariate analysis, multivariate binary logistic regression analysis was used to identify factors associated with developmental failure of swallowing threshold. Results: Hand grip strength was significantly correlated with masticatory performance (r = 0.611, p < 0.01). Logistic regression analysis revealed factors related to developmental failure of swallowing threshold, i.e., overweight/obesity (odds ratio (OR) = 5.343, p = 0.031, 95% CI = 1.168-24.437) and eating between meals at least once a day (OR = 4.934, p = 0.049, 95% CI = 1.004-24.244). Conclusions: Developmental failure of swallowing threshold was closely associated with childhood obesity in 5- to 15-year-old children.
6,981
251
[ 2736, 27, 105, 136, 82, 140, 228, 79, 222 ]
14
[ "swallowing", "threshold", "swallowing threshold", "participants", "chewing", "failure", "developmental failure", "developmental", "failure swallowing threshold", "failure swallowing" ]
[ "age eating habits", "development childhood obesity", "eating habits dietary", "obesity eating speed", "associated childhood obesity" ]
null
[CONTENT] eating behaviors | obesity | swallowing threshold | number of chewing cycle | children [SUMMARY]
null
[CONTENT] eating behaviors | obesity | swallowing threshold | number of chewing cycle | children [SUMMARY]
[CONTENT] eating behaviors | obesity | swallowing threshold | number of chewing cycle | children [SUMMARY]
[CONTENT] eating behaviors | obesity | swallowing threshold | number of chewing cycle | children [SUMMARY]
[CONTENT] eating behaviors | obesity | swallowing threshold | number of chewing cycle | children [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Deglutition | Feeding Behavior | Glucose | Hand Strength | Humans | Mastication | Pediatric Obesity [SUMMARY]
null
[CONTENT] Adolescent | Child | Child, Preschool | Deglutition | Feeding Behavior | Glucose | Hand Strength | Humans | Mastication | Pediatric Obesity [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Deglutition | Feeding Behavior | Glucose | Hand Strength | Humans | Mastication | Pediatric Obesity [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Deglutition | Feeding Behavior | Glucose | Hand Strength | Humans | Mastication | Pediatric Obesity [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Deglutition | Feeding Behavior | Glucose | Hand Strength | Humans | Mastication | Pediatric Obesity [SUMMARY]
[CONTENT] age eating habits | development childhood obesity | eating habits dietary | obesity eating speed | associated childhood obesity [SUMMARY]
null
[CONTENT] age eating habits | development childhood obesity | eating habits dietary | obesity eating speed | associated childhood obesity [SUMMARY]
[CONTENT] age eating habits | development childhood obesity | eating habits dietary | obesity eating speed | associated childhood obesity [SUMMARY]
[CONTENT] age eating habits | development childhood obesity | eating habits dietary | obesity eating speed | associated childhood obesity [SUMMARY]
[CONTENT] age eating habits | development childhood obesity | eating habits dietary | obesity eating speed | associated childhood obesity [SUMMARY]
[CONTENT] swallowing | threshold | swallowing threshold | participants | chewing | failure | developmental failure | developmental | failure swallowing threshold | failure swallowing [SUMMARY]
null
[CONTENT] swallowing | threshold | swallowing threshold | participants | chewing | failure | developmental failure | developmental | failure swallowing threshold | failure swallowing [SUMMARY]
[CONTENT] swallowing | threshold | swallowing threshold | participants | chewing | failure | developmental failure | developmental | failure swallowing threshold | failure swallowing [SUMMARY]
[CONTENT] swallowing | threshold | swallowing threshold | participants | chewing | failure | developmental failure | developmental | failure swallowing threshold | failure swallowing [SUMMARY]
[CONTENT] swallowing | threshold | swallowing threshold | participants | chewing | failure | developmental failure | developmental | failure swallowing threshold | failure swallowing [SUMMARY]
[CONTENT] acquired | eating | early | obesity | childhood | associated | functions | swallowing | habits | children [SUMMARY]
null
[CONTENT] significantly | 01 | old | year old | year | swallowing threshold | threshold | year old participants | old participants | swallowing [SUMMARY]
[CONTENT] chewing | age | swallowing | cycles chewing time | chewing time | cycles chewing | time | chewing cycles | chewing cycles chewing | number chewing [SUMMARY]
[CONTENT] swallowing | threshold | swallowing threshold | chewing | participants | failure | developmental | developmental failure | hand | strength [SUMMARY]
[CONTENT] swallowing | threshold | swallowing threshold | chewing | participants | failure | developmental | developmental failure | hand | strength [SUMMARY]
[CONTENT] 5-15 years [SUMMARY]
null
[CONTENT] 0.611 | p &lt | 0.01 ||| ||| 5.343 | 0.031 | 95% | CI | 1.168-24.437 | 4.934 | 0.049 | 95% | CI | 1.004-24.244 [SUMMARY]
[CONTENT] 15-year-old [SUMMARY]
[CONTENT] 5-15 years ||| 83 | 5-15 years ||| ||| ||| 20th ||| ||| ||| 0.611 | p &lt | 0.01 ||| ||| 5.343 | 0.031 | 95% | CI | 1.168-24.437 | 4.934 | 0.049 | 95% | CI | 1.004-24.244 ||| 15-year-old [SUMMARY]
[CONTENT] 5-15 years ||| 83 | 5-15 years ||| ||| ||| 20th ||| ||| ||| 0.611 | p &lt | 0.01 ||| ||| 5.343 | 0.031 | 95% | CI | 1.168-24.437 | 4.934 | 0.049 | 95% | CI | 1.004-24.244 ||| 15-year-old [SUMMARY]
Risk and consequences of chemotherapy-induced neutropenic complications in patients receiving daily filgrastim: the importance of duration of prophylaxis.
24767095
To examine duration of daily filgrastim prophylaxis, and risk and consequences of chemotherapy-induced neutropenic complications (CINC) requiring inpatient care.
BACKGROUND
Using a retrospective cohort design and US healthcare claims data (2001-2010), we identified all cancer patients who initiated ≥1 course of myelosuppressive chemotherapy and received daily filgrastim prophylactically in ≥1 cycle. Cycles with daily filgrastim prophylaxis were pooled for analyses. CINC was identified based on hospital admissions with a diagnosis of neutropenia, fever, or infection; consequences were characterized in terms of hospital mortality, hospital length of stay (LOS), and CINC-related healthcare expenditures.
METHODS
Risk of CINC requiring inpatient care, adjusted for patient characteristics, was 2.4 (95% CI: 1.6-3.4) and 1.9 (1.3-2.8) times higher with 1-3 (N = 8371) and 4-6 (N = 3691) days of filgrastim prophylaxis, respectively, versus ≥7 days (N = 2226). Among subjects who developed CINC, consequences with 1-3 and 4-6 (vs. ≥7) days of filgrastim prophylaxis were: mortality (8.4% [n/N = 10/119] and 4.0% [3/75] vs. 0% [0/34]); LOS (means: 7.4 [N = 243] and 7.1 [N = 99] vs. 6.5 [N = 40]); and expenditures (means: $18,912 [N = 225] and $14,907 [N = 94] vs. $13,165 [N = 39]).
RESULTS
In this retrospective evaluation, shorter courses of daily filgrastim prophylaxis were found to be associated with an increased risk of CINC as well as poorer outcomes among those developing this condition. Because of the limitations inherent in healthcare claims databases specifically and retrospective evaluations generally, additional research addressing these limitations is needed to confirm the findings of this study.
CONCLUSIONS
[ "Aged", "Febrile Neutropenia", "Female", "Filgrastim", "Granulocyte Colony-Stimulating Factor", "Hospitalization", "Humans", "Insurance Claim Review", "Male", "Middle Aged", "Neoplasms", "Neutropenia", "Post-Exposure Prophylaxis", "Recombinant Proteins", "Retrospective Studies", "Risk Assessment", "United States" ]
4018988
Background
Neutropenia is a common side effect of myelosuppressive chemotherapy that both increases the risk of infection and diminishes patients’ ability to fight infection. When neutropenic patients become febrile (febrile neutropenia, FN), the high likelihood of infection and serious consequences thereof usually results in hospitalization [1,2]. FN, as well as severe or prolonged neutropenia, also can interfere with the planned delivery of treatment and adversely affect important patient outcomes [2-11]. Clinical practice guidelines recommend primary prophylactic use of a colony-stimulating factor (CSF)–which has been shown to reduce the risk of FN in clinical trials–when the risk of FN is 20% or higher [2]. While the American Society of Clinical Oncology (ASCO) initially recommended that CSF prophylaxis be administered only when FN risk is 40% or higher, in 2006, ASCO lowered the threshold to 20% based on data highlighting the importance of FN-related hospitalization as an outcome and evidence demonstrating the value of CSF prophylaxis in reducing the risk of FN, the risk of FN-related hospitalization, and the associated use of IV anti-infective agents [2,12,13]. The CSF filgrastim, which is widely used in clinical practice as prophylaxis against FN, requires daily administration during each cycle until neutrophil recovery occurs (in clinical trials, given typically for 10–11 days [and up to 14 days] until absolute neutrophil count [ANC] ≥10 × 10⁹/L) [14-19]. In clinical practice, patients often receive shorter courses of daily filgrastim prophylaxis than those administered to subjects in the clinical trial setting [14-16,20-24]. While published studies suggest that shorter courses of daily filgrastim prophylaxis are associated with an increased risk of hospitalization for chemotherapy-induced neutropenic complications (CINC), these studies focused on selected tumor types or employed data that now are over a decade old [23,24]. During the past decade, use of chemotherapy and supportive care in clinical practice–as well as recommended use of these agents in authoritative guidelines–has changed considerably [2,25-27]. Moreover, only one study has examined whether CSFs may favorably impact clinical outcomes and economic costs when CINC develop despite CSF prophylaxis, and it was published a decade ago and focused on elderly patients with a single type of cancer (non-Hodgkin’s lymphoma) [28]. We therefore undertook a new study to evaluate the relationship between the duration of daily filgrastim prophylaxis, and the risk and consequences of CINC requiring inpatient care using a large healthcare claims database. While such databases often lack detailed clinical information (e.g., on absolute neutrophil counts), they offer access to information on the health profile and healthcare utilization of tens of millions of covered lives.
Methods
Data source
Patient-level information from two large healthcare claims databases–the Thomson Reuters MarketScan Commercial Claims and Encounters and Medicare Supplemental and Coordination of Benefits Database (MarketScan Database, 2001–2010) and the Intercontinental Marketing Services LifeLink Database (LifeLink Database, 2001–2008)–was pooled for analyses. Both databases comprise medical (i.e., facility and professional service) and outpatient pharmacy claims from a large number of participating private health plans, and each contains claims data for 15 million persons annually.

Data available for each facility and professional-service claim include date and place of service, diagnoses, procedures performed/services rendered, and quantity of services (professional-service claims). Data available for each retail pharmacy claim include the drug dispensed, dispensing date, quantity dispensed, and number of days supplied. All claims also include paid (i.e., reimbursed) amounts. Selected demographic and eligibility information is available for persons in both databases. All data can be arrayed to provide a detailed chronology of all medical and pharmacy services used by each plan member over time.

The study databases were de-identified prior to their release to study investigators, as set forth in the corresponding Data Use Agreements. The study databases have been evaluated and certified by independent third parties to be in compliance with the Health Insurance Portability and Accountability Act (HIPAA) of 1996 statistical de-identification standards and to satisfy the conditions set forth in Sections 164.514 (a)-(b) 1ii of the HIPAA Privacy Rule regarding the determination and documentation of statistically de-identified data. Use of the study databases for health services research was determined–via independent third parties–to be fully compliant with the HIPAA Privacy Rule and federal guidance on Public Welfare and the Protection of Human Subjects [29].

Study population
The study population comprised all patients who initiated ≥1 course of myelosuppressive chemotherapy for a solid tumor or non-Hodgkin’s lymphoma (NHL) and who received daily filgrastim prophylaxis during ≥1 cycle. All cycles in which patients received daily filgrastim prophylaxis–irrespective of duration–were pooled for analyses.

Cancer chemotherapy patients
All patients, aged ≥18 years, who began ≥1 new course of myelosuppressive chemotherapy were identified; the enrollment window was July 1, 2001 through June 30, 2010 for patients in the MarketScan Database and July 1, 2001 through June 30, 2008 for patients in the LifeLink Database. Receipt of chemotherapy was ascertained based on the presence of ≥1 paid medical claim for a chemotherapy drug or administration thereof (identified using Healthcare Common Procedure Coding System [HCPCS], International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM], and Health Care Financing Administration Uniform Bill-92 [UB-92] revenue codes). Patients were considered to have initiated a new course of chemotherapy if there was a chemotherapy claim during the study period that was preceded by a period of ≥60 days without any other claims for chemotherapy. Only patients who had evidence of a primary solid tumor or NHL (based on ≥2 medical claims [≥7 days apart] with a qualifying 3-digit ICD-9-CM diagnosis code during the period beginning 30 days prior to the index date and ending 30 days thereafter) were selected.

Chemotherapy courses, cycles, and regimens
For each cancer chemotherapy patient, each unique cycle within each course of chemotherapy was identified. The first chemotherapy cycle (of the first course) was defined as beginning with the date of initiation of chemotherapy and ending with the first service date for the next administration of chemotherapy (as evidenced by a medical claim with a corresponding HCPCS, ICD-9-CM, or UB-92 code) occurring at least 12 days–but no more than 59 days–after the date of initiation of chemotherapy. If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course was considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered. Chemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request).
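The course- and cycle-identification rules above amount to a date-gap algorithm over chemotherapy administration claims. The following Python sketch illustrates the core of that logic under simplifying assumptions (a single course per patient, claims reduced to a sorted list of administration dates, and no radiation-therapy truncation); the function and variable names are illustrative rather than taken from the study's actual programs.

```python
from datetime import date, timedelta

def segment_cycles(chemo_dates, max_cycles=8):
    """Split sorted chemotherapy administration dates into cycles.

    Per the definition above, a new cycle begins with an administration
    occurring 12-59 days after the start of the current cycle; if no such
    administration occurs, the cycle (and course) is considered complete
    30 days after it began. Radiation-therapy truncation and multiple
    courses per patient are omitted from this simplified sketch.
    """
    if not chemo_dates:
        return []
    cycles = []
    cycle_start = chemo_dates[0]
    for d in chemo_dates[1:]:
        gap = (d - cycle_start).days
        if gap < 12:
            continue                         # same cycle (e.g., a later dose of the regimen)
        if gap <= 59:
            cycles.append((cycle_start, d))  # cycle ends with the next administration
            cycle_start = d
        else:
            break                            # no next cycle within 59 days; course ends
    cycles.append((cycle_start, cycle_start + timedelta(days=30)))
    return cycles[:max_cycles]

# Example: administrations 21 days apart yield three cycles.
print(segment_cycles([date(2005, 1, 1), date(2005, 1, 22), date(2005, 2, 12)]))
```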
Daily filgrastim prophylaxis
Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis [14,22,23]. Receipt of daily filgrastim was identified based on medical claims with the relevant HCPCS codes (J1440, J1441). Duration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; the end of prophylaxis was defined as a gap of ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator of possible CINC), or hospitalization for CINC. Duration of prophylaxis was categorized as 1–3, 4–6, or ≥7 days.
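To make the duration rule concrete, here is a small Python sketch that walks through the filgrastim administration days within a cycle and stops at the first break in daily use. It interprets "a gap of ≥3 days" as a difference of at least three days between consecutive administration dates, which is an assumption, and it omits the censoring events (outpatient IV antimicrobials and CINC hospitalization) for brevity; field and function names are illustrative.

```python
def prophylaxis_duration(filgrastim_days):
    """Count consecutive days of daily filgrastim prophylaxis in a cycle.

    `filgrastim_days` is a sorted list of cycle-day numbers (1-based) on
    which filgrastim was administered; prophylaxis must begin on or before
    cycle day 5. Counting stops at the first gap of >=3 days between
    consecutive administrations (an interpretive assumption). Censoring
    events are not handled in this simplified sketch.
    """
    if not filgrastim_days or filgrastim_days[0] > 5:
        return 0, None
    days = 1
    for prev, curr in zip(filgrastim_days, filgrastim_days[1:]):
        if curr - prev >= 3:          # gap of >=3 days ends prophylaxis
            break
        days += 1
    if days <= 3:
        category = "1-3 days"
    elif days <= 6:
        category = "4-6 days"
    else:
        category = ">=7 days"
    return days, category

print(prophylaxis_duration([2, 3, 4, 5, 6]))   # (5, '4-6 days')
```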
Exclusion criteria
Patient-cycles were excluded from the analytic file if any of the following occurred: (1) evidence of ≥2 primary cancers (solid or blood) within (i.e., +/-) 30 days of chemotherapy initiation; (2) any gaps in health benefits during the 6-month ("pretreatment") period prior to chemotherapy initiation; (3) evidence of hematopoietic stem cell or bone marrow transplantation prior to or during receipt of chemotherapy; (4) evidence of chemotherapy based only on medical claims for administration of the drugs (HCPCS codes identifying the specific chemotherapy agents were not available, and thus the regimen and level of myelosuppression could not be determined); (5) pharmacy claims for myelotoxic chemotherapy (only drug dispense dates are available on pharmacy claims, and because pharmacy and medical claims cannot be definitively linked, the precise dates of chemotherapy administration–needed to characterize the course and cycles–could not be ascertained); (6) pharmacy claims for daily filgrastim (precise dates of administration could not be ascertained, for the reasons stated above); (7) evidence of receipt of sargramostim (J2820) or pegfilgrastim (C9119, S0135, J2505) during the first 5 days of the cycle (i.e., as prophylaxis).

Neutropenic complications requiring inpatient care
CINC was identified based on inpatient admissions with a diagnosis (principal or secondary) of neutropenia (ICD-9-CM 288.0), fever (780.6), or infection (list available upon request). Admissions were identified on a cycle-specific basis using acute-care facility inpatient claims with admission dates anytime between day 6 and the last day of the chemotherapy cycle. Episodes of CINC treated exclusively on an outpatient basis were not considered. Because CINC that occurred on the day of the last dose of prophylaxis or the following day could have resulted in an artificial truncation of a planned longer course of prophylaxis–and thus could exaggerate the risk of CINC with shorter courses–analyses were conducted alternatively with and without inclusion of these patient-cycles.

Consequences of CINC requiring inpatient care were characterized in terms of in-hospital mortality, total hospital length of stay (LOS) during the cycle, and total CINC-related healthcare expenditures (i.e., from hospital discharge to end of cycle). Total CINC-related healthcare expenditures included those for the initial hospitalization as well as care provided post-discharge on an outpatient basis; outpatient expenditures comprised encounters with a diagnosis of neutropenia, fever, or infection, as well as use of/prescriptions for CSF agents (i.e., as treatment) and antimicrobial therapy. Expenditures were estimated based on total paid amounts on corresponding claims. Mortality was characterized using data only from the MarketScan Database, as such information was not available in the LifeLink Database.
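In claims data, a case definition of this kind reduces to checking each inpatient admission against a diagnosis-code list and a cycle-specific date window. The sketch below (Python) shows the shape of that check; the dictionary keys and the two-code set are placeholders, and the full infection-code list used in the study is not reproduced here.

```python
from datetime import date, timedelta

# Placeholder code set: neutropenia (288.0) and fever (780.6); the study's
# infection-code list is not reproduced here.
CINC_DX_CODES = {"288.0", "780.6"}

def is_cinc_admission(admission, cycle_start, cycle_end):
    """Flag an acute-care inpatient admission as a CINC event for a cycle.

    `admission` is assumed to be a dict holding the admission date and all
    ICD-9-CM diagnosis codes (principal or secondary) on the claim. Only
    admissions from cycle day 6 through the last day of the cycle count.
    """
    window_start = cycle_start + timedelta(days=5)   # cycle day 6
    in_window = window_start <= admission["admit_date"] <= cycle_end
    has_dx = any(dx in CINC_DX_CODES for dx in admission["dx_codes"])
    return in_window and has_dx

# Example with a hypothetical claim:
claim = {"admit_date": date(2005, 1, 10), "dx_codes": ["780.6", "486"]}
print(is_cinc_admission(claim, date(2005, 1, 1), date(2005, 1, 21)))  # True
```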
Patient characteristics
Patient characteristics included: age; gender; presence of selected chronic comorbidities (cardiovascular disease, diabetes, liver disease, renal disease); history of blood disorders (anemia, neutropenia, other), infection, hospitalization (all-cause and CINC-related, respectively), chemotherapy, and radiation therapy; pre-chemotherapy healthcare expenditures; type of cancer; presence of metastases; cycle number and minimum length of prior cycles; chemotherapy regimen; receipt of antimicrobial prophylaxis in the cycle of interest; and year of chemotherapy initiation. Only observed data were utilized in defining patient characteristics.

Age was assessed as of the first day of the first cycle of chemotherapy in the course. Chronic comorbidities and history of blood disorders, infections, hospitalization, chemotherapy, radiation therapy, presence of metastases, and healthcare expenditures were assessed from the beginning of the 6-month pretreatment period through the first day of the corresponding cycle of chemotherapy. Selected variables (i.e., anemia, neutropenia, other blood disorders, infections) were alternatively evaluated during the chemotherapy course (up to the beginning of the cycle of interest). Metastases (bone vs. other site) and chronic comorbidities were identified on the basis of ≥1 diagnosis code on inpatient claims, ≥2 diagnosis codes on outpatient claims (excluding those for laboratory services) on different days, ≥1 procedure code, and ≥1 drug code, as appropriate. Blood disorders and infections were identified on the basis of ≥1 diagnosis code (on inpatient and/or outpatient claims) and ≥1 drug code, as appropriate. Prophylactic use of antimicrobial agents was ascertained based on a medical claim for administration of the drug from cycle day 1 to cycle day 5, or a pharmacy claim for a filled prescription from cycle day -3 to cycle day 5, with a corresponding drug code.

Statistical analyses
Incidence of CINC was evaluated for each patient-cycle in which daily filgrastim was administered prophylactically, and was estimated by same-cycle duration of prophylaxis in both unadjusted and adjusted analyses. For the latter, a generalized estimating equation (GEE) model with a binomial distribution, logistic link function, and exchangeable correlation structure was employed. The GEE method accounts for correlation among repeated measures for the same subject (in this instance, across cycles) while controlling for both fixed characteristics (e.g., gender) and time-dependent covariates (e.g., first versus subsequent cycles). All observed patient characteristics were entered into, and retained in, the multivariate model. Subgroup and sensitivity analyses also were conducted: these focused on the first cycle only, employed a narrow definition of CINC (identified using the diagnosis code for neutropenia only, but otherwise with the same algorithm as the broad definition described above), and examined individual tumor types (i.e., breast cancer, lung cancer, colorectal cancer, and NHL) separately.
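For readers who want to fit an analogous model on their own data, a minimal sketch using the GEE implementation in Python's statsmodels package is shown below. The data frame `df`, its column names, and the short covariate list are hypothetical placeholders; the study's actual model included the full set of patient, cancer, and treatment characteristics described above.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is assumed to hold one row per patient-cycle, with columns:
#   cinc (0/1), proph_dur ('1-3', '4-6', '7+'), patient_id, and covariates.
# The covariates shown here stand in for the full set used in the study.
model = smf.gee(
    "cinc ~ C(proph_dur, Treatment(reference='7+')) + age + C(gender) + C(cancer_type)",
    groups="patient_id",                      # repeated cycles within patient
    data=df,
    family=sm.families.Binomial(),            # binomial distribution, logit link
    cov_struct=sm.cov_struct.Exchangeable(),  # exchangeable working correlation
)
result = model.fit()
print(result.summary())
# Exponentiated coefficients give adjusted odds ratios versus >=7 days of prophylaxis.
```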
In-hospital mortality, hospital LOS, and CINC-related healthcare expenditures among patients who developed CINC were descriptively evaluated by duration of daily filgrastim prophylaxis. Confidence intervals (95%) for in-hospital mortality were computed using the Wilson score interval; confidence intervals for LOS and economic costs were computed using nonparametric bootstrapping (percentile method) from the study population (1,000 replicates with replacement). Confidence intervals for in-hospital mortality, hospital LOS, and healthcare expenditures were estimated assuming independence between observations. Only observed data were used in defining study variables; patients who developed CINC but were missing data on study measures–because such data either were not recorded on claims or were not provided by some health plans–were excluded from the corresponding analyses.
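Both interval methods are straightforward to implement and check. The sketch below (Python; illustrative only) gives the Wilson score interval and a percentile bootstrap for a mean; as a sanity check, the Wilson upper bound for 0 deaths among 34 admissions is roughly 10.2%, which matches the 0% (0-10.2) in-hospital mortality reported for the ≥7-day group in the Results.

```python
import math
import random

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return max(0.0, center - half), min(1.0, center + half)

def bootstrap_ci(values, n_reps=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a mean (e.g., LOS or expenditures)."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(values, k=len(values))) / len(values) for _ in range(n_reps)
    )
    return means[int(alpha / 2 * n_reps)], means[int((1 - alpha / 2) * n_reps) - 1]

print(wilson_interval(0, 34))   # approximately (0.0, 0.102)
```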
Results
A total of 135,921 adult patients initiated a new course of chemotherapy for a solid tumor or NHL during the period of interest and met all other eligibility criteria. Among these patients, a total of 5,477 received daily filgrastim as prophylaxis during ≥1 cycle of chemotherapy and thus were included in the study population; these patients contributed a total of 14,288 daily filgrastim prophylaxis patient-cycles to the analytic file. Filgrastim was administered for 1–3 days in 58% of cycles (n = 8,371), 4–6 days in 26% of cycles (n = 3,691), and ≥7 days in 16% of cycles (n = 2,226) (Figure 1).

Figure 1. Days of filgrastim prophylaxis.

Mean (SD) age of patients was 54 (13) years for 1–3 days of filgrastim prophylaxis, 57 (14) years for 4–6 days of prophylaxis, and 56 (13) years for ≥7 days of prophylaxis (Table 1). Breast cancer was the most common tumor type (48% for 1–3 days, 50% for 4–6 days, 49% for ≥7 days), followed by NHL (10%, 13%, and 13%, respectively) and colorectal cancer (16%, 10%, and 13%, respectively). Metastatic disease was present in 38% of patients receiving 1–3 days of prophylaxis, 32% receiving 4–6 days, and 35% receiving ≥7 days. Antimicrobial agents–principally oral (~90%)–were concurrently used as prophylaxis in 6.7% of patients receiving 1–3 days of filgrastim prophylaxis, 7.5% receiving 4–6 days, and 7.5% receiving ≥7 days.

Table 1. Patient, cancer, and treatment characteristics, by duration of filgrastim prophylaxis.

Crude risk of CINC during a cycle of chemotherapy was 2.9% with 1–3 days of filgrastim prophylaxis, 2.7% with 4–6 days, and 1.8% with ≥7 days. In adjusted analyses, the odds of CINC were 2.4 (95% CI: 1.6-3.4) and 1.9 (1.3-2.8) times higher with 1–3 and 4–6 days of filgrastim prophylaxis, respectively, versus ≥7 days (referent group) (Figure 2).

Figure 2. Adjusted odds ratios for chemotherapy-induced neutropenic complications requiring inpatient care, by duration of filgrastim prophylaxis. Results adjusted for patient, cancer, and chemotherapy characteristics.

Among the subgroup of patients who developed CINC requiring inpatient care (n = 382), mean hospital LOS was 7.4 (6.4-8.3) days with 1–3 days of prophylaxis (n = 243), 7.1 (5.7-8.5) days with 4–6 days of prophylaxis (n = 99), and 6.5 (4.9-8.0) days with ≥7 days of prophylaxis (n = 40) (Table 2). Among the subgroup of patients who developed CINC and for whom healthcare expenditures were available (n = 358), mean total CINC-related healthcare expenditures were $18,912 (14,570-23,581) with 1–3 days of prophylaxis (n = 225), $14,907 (11,155-19,728) with 4–6 days (n = 94), and $13,165 (9,595-17,144) with ≥7 days (n = 39). Among the subgroup of patients who developed CINC and for whom discharge status was available (n = 228), in-hospital mortality was 8.4% (4.6-14.8) with 1–3 days of prophylaxis (n = 119), 4.0% (1.4-11.1) with 4–6 days (n = 75), and 0% (0–10.2) with ≥7 days (n = 34).

Table 2. Unadjusted risk of inpatient mortality, length of stay in hospital, and healthcare expenditures for chemotherapy-induced neutropenic complications requiring inpatient care, by duration of filgrastim prophylaxis. The n's indicate the number of patient-cycles considered in analyses of study measures; among patients who developed CINC (n = 382), those with missing data on study measures (because such data were not recorded on claims or were not provided by some health plans) were excluded from corresponding analyses. Only patients with paid amounts > $0 were considered in the expenditure analyses.
In subgroup analyses focusing on the first cycle only, crude risks of CINC were 7.0% with 1–3 days of filgrastim prophylaxis (n = 1,054), 5.0% with 4–6 days (n = 482), and 5.2% with ≥7 days (n = 363); in adjusted analyses, odds of CINC were 1.6 (0.9-2.8) and 1.1 (0.6-2.1) times higher with 1–3 and 4–6 days (versus ≥7 days) of filgrastim prophylaxis in cycle 1 (Table 3). In analyses employing the narrow definition of CINC, crude risks of CINC were 1.6% with 1–3 days of filgrastim prophylaxis, 1.8% with 4–6 days, and 1.3% with ≥7 days; in adjusted analyses, odds of CINC were 1.6 (1.1-2.5) and 1.7 (1.1-2.7) times higher with 1–3 and 4–6 days of filgrastim prophylaxis, respectively, versus ≥7 days. In tumor-specific analyses, adjusted odds of CINC with 1–3 and 4–6 days of filgrastim prophylaxis (vs. ≥7 days) were: 1.8 (1.0-3.0) and 1.9 (1.1-3.4) for breast cancer; 20.2 (1.9-212.4) and 14.5 (1.3-158.7) for lung cancer; 1.9 (0.2-16.7) and 2.5 (0.3-23.5) for colorectal cancer; and 2.1 (1.2-3.6) and 1.8 (1.0-3.2) for NHL. We note that not all observed differences were statistically significant in unadjusted subgroup and secondary analyses, presumably due to the lack of adjustment for systematic differences in patient characteristics between prophylaxis subgroups.

Table 3. Adjusted odds ratios for chemotherapy-induced neutropenic complications requiring inpatient care in subgroup and secondary analyses. Results adjusted for the patient, cancer, and treatment characteristics listed in Table 2.
Conclusion
In conclusion, we found that among patients receiving myelosuppressive chemotherapy, shorter courses of daily filgrastim prophylaxis are associated with an increased risk of CINC. We also found that when CINC develops despite daily filgrastim prophylaxis, outcomes may be poorer in patients who received shorter courses of prophylaxis. Additional research is needed to explore these relationships among individual tumor types and chemotherapy regimens.
[ "Background", "Data source", "Study population", "Cancer chemotherapy patients", "Chemotherapy courses, cycles, and regimens", "Daily filgrastim prophylaxis", "Exclusion criteria", "Neutropenic complications requiring inpatient care", "Patient characteristics", "Statistical analyses", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Neutropenia is a common side effect of myelosuppressive chemotherapy that both increases the risk of infection and diminishes patients’ ability to fight infection. When neutropenic patients become febrile, the high likelihood of infection and serious consequences thereof usually results in hospitalization\n[1,2]. FN, as well as severe or prolonged neutropenia, also can interfere with the planned delivery of treatment and adversely affect important patient outcomes\n[2-11].\nClinical practice guidelines recommend primary prophylactic use of a colony-stimulating factor (CSF)–which has been shown to reduce the risk of FN in clinical trials–when the risk of FN is 20% or higher\n[2]. While the American Society of Clinical Oncology (ASCO) initially recommended that CSF prophylaxis be administered only when FN risk is 40% or higher, in 2006, ASCO lowered the threshold to 20% based on data highlighting the importance of FN-related hospitalization as an outcome and evidence demonstrating the value of CSF prophylaxis in reducing the risk of FN, the risk of FN-related hospitalization, and the associated use of IV anti-infective agents\n[2,12,13]. The CSF filgrastim, which is widely used in clinical practice as prophylaxis against FN, requires daily administration during each cycle until neutrophil recovery occurs (in clinical trials, given typically for 10–11 days [and up to 14 days] until absolute neutrophil count [ANC] ≥10 × 109/L)\n[14-19].\nIn clinical practice, patients often receive shorter courses of daily filgrastim prophylaxis than those administered to subjects in the clinical trial setting\n[14-16,20-24]. While published studies suggest that shorter courses of daily filgrastim prophylaxis are associated with an increased risk of hospitalization for CINC, these studies focused on selected tumor types or employed data that now are over a decade old\n[23,24]. During the past decade, use of chemotherapy and supportive care in clinical practice–as well as recommended use of these agents in authoritative guidelines–has changed considerably\n[2,25-27], Moreover, only one study has examined whether CSFs may favorably impact clinical outcomes and economic costs when CINC develop despite CSF prophylaxis, and it was published a decade ago and focused on elderly patients with a single type of cancer (non-Hodgkin’s lymphoma)\n[28]. We therefore undertook a new study to evaluate the relationship between the duration of daily filgrastim prophylaxis, and the risk and consequences of CINC requiring inpatient care using a large healthcare claims database. While such databases often lack detailed clinical information (e.g., on absolute neutrophil counts), they offer access to information on the health profile and healthcare utilization of tens of millions of covered lives.", "Patient-level information from two large healthcare claims databases–the Thomson Reuters MarketScan Commercial Claims and Encounters and Medicare Supplemental and Coordination of Benefits Database (MarketScan Database, 2001–2010) and the Intercontinental Marketing Services LifeLink Database (LifeLink Database, 2001–2008)–were pooled for analyses. 
Both databases comprise medical (i.e., facility and professional service) and outpatient pharmacy claims from a large number of participating private health plans, and each contains claims data for 15 million persons annually.\nData available for each facility and professional-service claim include date and place of service, diagnoses, procedures performed/services rendered, and quantity of services (professional-service claims). Data available for each retail pharmacy claim include the drug dispensed, dispensing date, quantity dispensed, and number of days supplied. All claims also include paid (i.e., reimbursed) amounts. Selected demographic and eligibility information is available for persons in both databases. All data can be arrayed to provide a detailed chronology of all medical and pharmacy services used by each plan member over time.\nThe study databases were de-identified prior to their release to study investigators, as set forth in the corresponding Data Use Agreements. The study databases have been evaluated and certified by independent third parties to be in compliance with the Health Insurance Portability and Accountability Act (HIPAA) of 1996 statistical de-identification standards and to satisfy the conditions set forth in Sections 164.514 (a)-(b) 1ii of the HIPAA Privacy Rule regarding the determination and documentation of statistically de-identified data. Use of the study databases for health services research was determined–via independent third parties–to be fully compliant with the HIPAA Privacy Rule and federal guidance on Public Welfare and the Protection of Human Subjects\n[29].", "The study population comprised all patients who initiated ≥1 course of myelosuppressive chemotherapy for a solid tumor or non-Hodgkin’s lymphoma (NHL) and who received daily filgrastim prophylaxis during ≥1 cycle. All cycles in which patients received daily filgrastim prophylaxis–irrespective of duration–were pooled for analyses.\n Cancer chemotherapy patients All patients, aged ≥18 years, who began ≥1 new course of myelosuppressive chemotherapy were identified; the enrollment window was July 1, 2001 through June 30, 2010 for patients in the MarketScan Database and July 1, 2001 through June 30, 2008 for patients in the LifeLink Database. Receipt of chemotherapy was ascertained based on the presence of ≥1 paid medical claim for a chemotherapy drug or administration thereof (identified using Healthcare Common Procedure Coding System [HCPCS], International Classification of Disease, Ninth Revision, Clinical Modification [ICD-9-CM], and Health Care Financing Administration Uniform Bill-92 [UB-92] revenue codes).\nPatients were considered to have initiated a new course of chemotherapy if there was a chemotherapy claim during the study period that was preceded by a period ≥60 days without any other claims for chemotherapy. Only patients who had evidence of a primary solid tumor or NHL (based on ≥2 medical claims [≥7 days apart] with a qualifying 3-digit ICD-9-CM diagnosis code during the period beginning 30 days prior to the index date and ending 30 days thereafter) were selected.\nAll patients, aged ≥18 years, who began ≥1 new course of myelosuppressive chemotherapy were identified; the enrollment window was July 1, 2001 through June 30, 2010 for patients in the MarketScan Database and July 1, 2001 through June 30, 2008 for patients in the LifeLink Database. 
Receipt of chemotherapy was ascertained based on the presence of ≥1 paid medical claim for a chemotherapy drug or administration thereof (identified using Healthcare Common Procedure Coding System [HCPCS], International Classification of Disease, Ninth Revision, Clinical Modification [ICD-9-CM], and Health Care Financing Administration Uniform Bill-92 [UB-92] revenue codes).\nPatients were considered to have initiated a new course of chemotherapy if there was a chemotherapy claim during the study period that was preceded by a period ≥60 days without any other claims for chemotherapy. Only patients who had evidence of a primary solid tumor or NHL (based on ≥2 medical claims [≥7 days apart] with a qualifying 3-digit ICD-9-CM diagnosis code during the period beginning 30 days prior to the index date and ending 30 days thereafter) were selected.\n Chemotherapy courses, cycles, and regimens For each cancer chemotherapy patient, each unique cycle within each course of chemotherapy was identified. The first chemotherapy cycle (of the first course) was defined as beginning with the date of initiation of chemotherapy and ending with the first service date for the next administration of chemotherapy administration (as evidenced by a medical claim with a corresponding HCPCS, ICD-9-CM, or UB-92 code) occurring at least 12 days-but no more than 59 days-after the date of initiation of chemotherapy. If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course were considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered.\nChemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request).\nFor each cancer chemotherapy patient, each unique cycle within each course of chemotherapy was identified. The first chemotherapy cycle (of the first course) was defined as beginning with the date of initiation of chemotherapy and ending with the first service date for the next administration of chemotherapy administration (as evidenced by a medical claim with a corresponding HCPCS, ICD-9-CM, or UB-92 code) occurring at least 12 days-but no more than 59 days-after the date of initiation of chemotherapy. If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. 
The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course were considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered.\nChemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request).\n Daily filgrastim prophylaxis Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis\n[14,22,23]. Receipt of daily filgrastim was identified based on medical claims (J1440, J1441) with relevant codes from the HCPCS system.\nDuration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; end of prophylaxis was defined as a gap ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator for possible CINC), or hospitalization for CINC. Duration of prophylaxis was characterized as 1–3, 4–6, or ≥7 days.\nCancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis\n[14,22,23]. Receipt of daily filgrastim was identified based on medical claims (J1440, J1441) with relevant codes from the HCPCS system.\nDuration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; end of prophylaxis was defined as a gap ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator for possible CINC), or hospitalization for CINC. 
Duration of prophylaxis was characterized as 1–3, 4–6, or ≥7 days.\n Exclusion criteria Patient-cycles were excluded from the analytic file if the following occurred: (1) evidence of ≥2 primary cancers (solid or blood) within (i.e., +/-) 30 days of chemotherapy initiation; (2) any gaps in health benefits during the 6-month (\"pretreatment\") period prior to chemotherapy initiation; (3) evidence of hematopoietic stem cell or bone marrow transplantation prior to or during receipt of chemotherapy; (4) evidence of chemotherapy based only on medical claims for administration of the drugs (HCPCS codes identifying the specific chemotherapy agents were not available, and thus the regimen and level of myelosuppression could not be determined); (5) pharmacy claims for myelotoxic chemotherapy (only drug dispense dates are available on pharmacy claims, and because pharmacy/medical claims cannot be definitively linked, precise dates of chemotherapy administration–needed to characterize the course and cycles–could not be ascertained); (6) pharmacy claims for daily filgrastim (precise dates of administration could not be ascertained, for reasons stated above); (7) evidence of receipt of sargramostim (J2820) or pegfilgrastim (C9119, S0135, J2505) during the first 5 days of the cycle (i.e., as prophylaxis).\nPatient-cycles were excluded from the analytic file if the following occurred: (1) evidence of ≥2 primary cancers (solid or blood) within (i.e., +/-) 30 days of chemotherapy initiation; (2) any gaps in health benefits during the 6-month (\"pretreatment\") period prior to chemotherapy initiation; (3) evidence of hematopoietic stem cell or bone marrow transplantation prior to or during receipt of chemotherapy; (4) evidence of chemotherapy based only on medical claims for administration of the drugs (HCPCS codes identifying the specific chemotherapy agents were not available, and thus the regimen and level of myelosuppression could not be determined); (5) pharmacy claims for myelotoxic chemotherapy (only drug dispense dates are available on pharmacy claims, and because pharmacy/medical claims cannot be definitively linked, precise dates of chemotherapy administration–needed to characterize the course and cycles–could not be ascertained); (6) pharmacy claims for daily filgrastim (precise dates of administration could not be ascertained, for reasons stated above); (7) evidence of receipt of sargramostim (J2820) or pegfilgrastim (C9119, S0135, J2505) during the first 5 days of the cycle (i.e., as prophylaxis).", "All patients, aged ≥18 years, who began ≥1 new course of myelosuppressive chemotherapy were identified; the enrollment window was July 1, 2001 through June 30, 2010 for patients in the MarketScan Database and July 1, 2001 through June 30, 2008 for patients in the LifeLink Database. Receipt of chemotherapy was ascertained based on the presence of ≥1 paid medical claim for a chemotherapy drug or administration thereof (identified using Healthcare Common Procedure Coding System [HCPCS], International Classification of Disease, Ninth Revision, Clinical Modification [ICD-9-CM], and Health Care Financing Administration Uniform Bill-92 [UB-92] revenue codes).\nPatients were considered to have initiated a new course of chemotherapy if there was a chemotherapy claim during the study period that was preceded by a period ≥60 days without any other claims for chemotherapy. 
Only patients who had evidence of a primary solid tumor or NHL (based on ≥2 medical claims [≥7 days apart] with a qualifying 3-digit ICD-9-CM diagnosis code during the period beginning 30 days prior to the index date and ending 30 days thereafter) were selected.", "For each cancer chemotherapy patient, each unique cycle within each course of chemotherapy was identified. The first chemotherapy cycle (of the first course) was defined as beginning with the date of initiation of chemotherapy and ending with the first service date for the next administration of chemotherapy administration (as evidenced by a medical claim with a corresponding HCPCS, ICD-9-CM, or UB-92 code) occurring at least 12 days-but no more than 59 days-after the date of initiation of chemotherapy. If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course were considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered.\nChemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request).", "Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis\n[14,22,23]. Receipt of daily filgrastim was identified based on medical claims (J1440, J1441) with relevant codes from the HCPCS system.\nDuration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; end of prophylaxis was defined as a gap ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator for possible CINC), or hospitalization for CINC. 
Duration of prophylaxis was characterized as 1–3, 4–6, or ≥7 days.", "Patient-cycles were excluded from the analytic file if the following occurred: (1) evidence of ≥2 primary cancers (solid or blood) within (i.e., +/-) 30 days of chemotherapy initiation; (2) any gaps in health benefits during the 6-month (\"pretreatment\") period prior to chemotherapy initiation; (3) evidence of hematopoietic stem cell or bone marrow transplantation prior to or during receipt of chemotherapy; (4) evidence of chemotherapy based only on medical claims for administration of the drugs (HCPCS codes identifying the specific chemotherapy agents were not available, and thus the regimen and level of myelosuppression could not be determined); (5) pharmacy claims for myelotoxic chemotherapy (only drug dispense dates are available on pharmacy claims, and because pharmacy/medical claims cannot be definitively linked, precise dates of chemotherapy administration–needed to characterize the course and cycles–could not be ascertained); (6) pharmacy claims for daily filgrastim (precise dates of administration could not be ascertained, for reasons stated above); (7) evidence of receipt of sargramostim (J2820) or pegfilgrastim (C9119, S0135, J2505) during the first 5 days of the cycle (i.e., as prophylaxis).", "CINC was identified based on inpatient admissions with a diagnosis (principal or secondary) of neutropenia (ICD-9-CM 288.0), fever (780.6), or infection (list available upon request). Admissions were identified on a cycle-specific basis using acute-care facility inpatient claims with admission dates anytime between day 6 and the last day of the chemotherapy cycle. Episodes of CINC treated exclusively on an outpatient basis were not considered. Because CINC that occurred on the day of the last dose of prophylaxis or the following day could have resulted in an artificial truncation of a planned longer course of prophylaxis–and thus could exaggerate the risk of CINC with shorter courses–analyses were conducted alternatively with and without inclusion of these patient cycles.\nConsequences of CINC requiring inpatient care were characterized in terms of in-hospital mortality, total hospital LOS (during the cycle), and total CINC-related healthcare expenditures (i.e., from hospital discharge to end of cycle). Total CINC-related healthcare expenditures included those for the initial hospitalization as well as care provided post-discharge on an outpatient basis; outpatient expenditures comprised encounters with a diagnosis of neutropenia, fever, or infection, as well as use/prescriptions for CSF agents (i.e., as treatment) and antimicrobial therapy. Expenditures were estimated based on total paid amounts on corresponding claims. Mortality was characterized using data only from the MarketScan Database as such information was not available in the LifeLink Database.", "Patient characteristics included: age; gender; presence of selected chronic comorbidities (cardiovascular disease, diabetes, liver disease, renal disease); history of blood disorders (anemia, neutropenia, other), infection, hospitalization (all-cause and CINC-related, respectively), chemotherapy, and radiation therapy; pre-chemotherapy healthcare expenditures; type of cancer, presence of metastases; cycle number and minimum length of prior cycles; chemotherapy regimen; receipt of antimicrobial prophylaxis in the cycle of interest; and year of chemotherapy initiation. 
Only observed data were utilized in defining patient characteristics.\nAge was assessed as of the first day of the first cycle of chemotherapy in the course. Chronic comorbidities and history of blood disorders, infections, hospitalization, chemotherapy, radiation therapy, presence of metastases, and healthcare expenditures were assessed from the beginning of the 6-month pretreatment period through the first day of the corresponding cycle of chemotherapy. Selected variables (i.e., anemia, neutropenia, other blood disorders, infections) were alternatively evaluated during the chemotherapy course (up to the beginning of the cycle of interest). Metastases (bone vs. other site) and chronic comorbidities were identified on the basis of ≥1 diagnosis codes on inpatient claims, ≥2 diagnosis codes on outpatient claims (excluding those for laboratory services) on different days, ≥1 procedure codes, and ≥1 drug codes, as appropriate. Blood disorders and infections were identified on the basis of ≥1 diagnosis codes (on inpatient and/or outpatient claims) and ≥1 drug codes, as appropriate. Prophylactic use of antimicrobial agents was ascertained based on a medical claim for administration of drug from cycle day 1 to cycle day 5, or a pharmacy claim for a filled prescription from cycle day -3 to cycle day 5, with a corresponding drug code.", "Incidence of CINC was evaluated for each patient-cycle in which daily filgrastim was administered prophylactically, and was estimated by same-cycle duration of prophylaxis in an unadjusted and adjusted context. For the latter, a generalized estimating equation (GEE) with a binomial distribution, logistic link function, and exchangeable correlation structure was employed. The GEE method accounts for correlation among repeated measures for the same subject (in this instance, across cycles), while controlling for both fixed characteristics (e.g., gender) and time-dependent covariates (e.g., first versus subsequent cycles). All observed patient characteristics were entered into, and retained in, the multivariate model. Subgroup and sensitivity analyses focusing on the first cycle only, employing a narrow definition of CINC (which was identified using the diagnosis code for neutropenia only but otherwise the same algorithm for the broad definition as described above), and focusing on alternative tumor types separately (i.e., breast cancer, lung cancer, colorectal cancer, and NHL) also were conducted.\nIn-hospital mortality, hospital LOS, and CINC-related healthcare expenditures among patients who developed CINC were descriptively evaluated by duration of daily filgrastim prophylaxis. Confidence intervals (95%) for in-hospital mortality were computed using the Wilson score interval; confidence intervals for LOS and economic costs were computed using nonparametric bootstrapping (percentile method) from the study population (1,000 replicates with replacement). Confidence intervals for in-hospital mortality, hospital LOS, and healthcare expenditures were estimated assuming independence between observations. Only observed data were used in defining study variables; patients who developed CINC but were missing data on study measures–because such data either were not recorded on claims or were not provided by some health plans–were excluded from corresponding analyses.", "Funding for this research was provided by Amgen Inc. to Policy Analysis Inc. (PAI). Derek Weycker, John Edelsberg, and Alex Kartashov are employed by Policy Analysis Inc. (PAI). 
Rich Barron and Jason Legg are employed by Amgen Inc. Andrew Glass is employed by the Center for Health Research, Kaiser Permanente Northwest, and received an honorarium for this research from Amgen Inc.", "Authorship was designated based on the guidelines promulgated by the International Committee of Medical Journal Editors (2004). All persons who meet criteria for authorship are listed as authors on the title page. The contribution of each of these individuals to this study–by task–is as follows: conception and supervision (Barron, Weycker), development of design (Barron, Edelsberg, Glass, Legg, Weycker), conduct of analyses (Kartashov, Weycker), interpretation of results (all authors), preparation of manuscript (Edelsberg, Weycker), and review of manuscript (all authors). All authors have read and approved the final version of the manuscript. The study sponsor reviewed the study research plan and study manuscript; data management, processing, and analyses were conducted by Policy Analysis Inc. (PAI), and all final analytic decisions were made by study investigators.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/14/189/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Data source", "Study population", "Cancer chemotherapy patients", "Chemotherapy courses, cycles, and regimens", "Daily filgrastim prophylaxis", "Exclusion criteria", "Neutropenic complications requiring inpatient care", "Patient characteristics", "Statistical analyses", "Results", "Discussion", "Conclusion", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Neutropenia is a common side effect of myelosuppressive chemotherapy that both increases the risk of infection and diminishes patients’ ability to fight infection. When neutropenic patients become febrile, the high likelihood of infection and serious consequences thereof usually results in hospitalization\n[1,2]. FN, as well as severe or prolonged neutropenia, also can interfere with the planned delivery of treatment and adversely affect important patient outcomes\n[2-11].\nClinical practice guidelines recommend primary prophylactic use of a colony-stimulating factor (CSF)–which has been shown to reduce the risk of FN in clinical trials–when the risk of FN is 20% or higher\n[2]. While the American Society of Clinical Oncology (ASCO) initially recommended that CSF prophylaxis be administered only when FN risk is 40% or higher, in 2006, ASCO lowered the threshold to 20% based on data highlighting the importance of FN-related hospitalization as an outcome and evidence demonstrating the value of CSF prophylaxis in reducing the risk of FN, the risk of FN-related hospitalization, and the associated use of IV anti-infective agents\n[2,12,13]. The CSF filgrastim, which is widely used in clinical practice as prophylaxis against FN, requires daily administration during each cycle until neutrophil recovery occurs (in clinical trials, given typically for 10–11 days [and up to 14 days] until absolute neutrophil count [ANC] ≥10 × 109/L)\n[14-19].\nIn clinical practice, patients often receive shorter courses of daily filgrastim prophylaxis than those administered to subjects in the clinical trial setting\n[14-16,20-24]. While published studies suggest that shorter courses of daily filgrastim prophylaxis are associated with an increased risk of hospitalization for CINC, these studies focused on selected tumor types or employed data that now are over a decade old\n[23,24]. During the past decade, use of chemotherapy and supportive care in clinical practice–as well as recommended use of these agents in authoritative guidelines–has changed considerably\n[2,25-27], Moreover, only one study has examined whether CSFs may favorably impact clinical outcomes and economic costs when CINC develop despite CSF prophylaxis, and it was published a decade ago and focused on elderly patients with a single type of cancer (non-Hodgkin’s lymphoma)\n[28]. We therefore undertook a new study to evaluate the relationship between the duration of daily filgrastim prophylaxis, and the risk and consequences of CINC requiring inpatient care using a large healthcare claims database. While such databases often lack detailed clinical information (e.g., on absolute neutrophil counts), they offer access to information on the health profile and healthcare utilization of tens of millions of covered lives.", " Data source Patient-level information from two large healthcare claims databases–the Thomson Reuters MarketScan Commercial Claims and Encounters and Medicare Supplemental and Coordination of Benefits Database (MarketScan Database, 2001–2010) and the Intercontinental Marketing Services LifeLink Database (LifeLink Database, 2001–2008)–were pooled for analyses. 
Both databases comprise medical (i.e., facility and professional service) and outpatient pharmacy claims from a large number of participating private health plans, and each contains claims data for 15 million persons annually.\nData available for each facility and professional-service claim include date and place of service, diagnoses, procedures performed/services rendered, and quantity of services (professional-service claims). Data available for each retail pharmacy claim include the drug dispensed, dispensing date, quantity dispensed, and number of days supplied. All claims also include paid (i.e., reimbursed) amounts. Selected demographic and eligibility information is available for persons in both databases. All data can be arrayed to provide a detailed chronology of all medical and pharmacy services used by each plan member over time.\nThe study databases were de-identified prior to their release to study investigators, as set forth in the corresponding Data Use Agreements. The study databases have been evaluated and certified by independent third parties to be in compliance with the Health Insurance Portability and Accountability Act (HIPAA) of 1996 statistical de-identification standards and to satisfy the conditions set forth in Sections 164.514 (a)-(b) 1ii of the HIPAA Privacy Rule regarding the determination and documentation of statistically de-identified data. Use of the study databases for health services research was determined–via independent third parties–to be fully compliant with the HIPAA Privacy Rule and federal guidance on Public Welfare and the Protection of Human Subjects\n[29].
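To illustrate the per-member claims chronology described above, the following is a minimal sketch (not the authors' code) of arraying medical and pharmacy claims into a service history; the file and column names (medical_claims.csv, pharmacy_claims.csv, enrollee_id, service_date) are hypothetical stand-ins for the corresponding database fields.

import pandas as pd

# Hypothetical inputs: extracts of facility/professional-service and retail
# pharmacy claims, each with a member identifier and a service/dispense date.
medical = pd.read_csv("medical_claims.csv", parse_dates=["service_date"])
pharmacy = pd.read_csv("pharmacy_claims.csv", parse_dates=["service_date"])

claims = pd.concat(
    [medical.assign(source="medical"), pharmacy.assign(source="pharmacy")],
    ignore_index=True,
)

# Array all claims into a detailed chronology for each plan member.
chronology = claims.sort_values(["enrollee_id", "service_date"]).reset_index(drop=True)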
 Study population The study population comprised all patients who initiated ≥1 course of myelosuppressive chemotherapy for a solid tumor or non-Hodgkin’s lymphoma (NHL) and who received daily filgrastim prophylaxis during ≥1 cycle. All cycles in which patients received daily filgrastim prophylaxis–irrespective of duration–were pooled for analyses.\n Cancer chemotherapy patients All patients, aged ≥18 years, who began ≥1 new course of myelosuppressive chemotherapy were identified; the enrollment window was July 1, 2001 through June 30, 2010 for patients in the MarketScan Database and July 1, 2001 through June 30, 2008 for patients in the LifeLink Database. Receipt of chemotherapy was ascertained based on the presence of ≥1 paid medical claim for a chemotherapy drug or administration thereof (identified using Healthcare Common Procedure Coding System [HCPCS], International Classification of Disease, Ninth Revision, Clinical Modification [ICD-9-CM], and Health Care Financing Administration Uniform Bill-92 [UB-92] revenue codes).\nPatients were considered to have initiated a new course of chemotherapy if there was a chemotherapy claim during the study period that was preceded by a period ≥60 days without any other claims for chemotherapy. Only patients who had evidence of a primary solid tumor or NHL (based on ≥2 medical claims [≥7 days apart] with a qualifying 3-digit ICD-9-CM diagnosis code during the period beginning 30 days prior to the index date and ending 30 days thereafter) were selected.\n Chemotherapy courses, cycles, and regimens For each cancer chemotherapy patient, each unique cycle within each course of chemotherapy was identified. The first chemotherapy cycle (of the first course) was defined as beginning with the date of initiation of chemotherapy and ending with the first service date for the next administration of chemotherapy (as evidenced by a medical claim with a corresponding HCPCS, ICD-9-CM, or UB-92 code) occurring at least 12 days–but no more than 59 days–after the date of initiation of chemotherapy. If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course was considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered.\nChemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request).
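A minimal sketch of the course- and cycle-segmentation rules described above, assuming a sorted list of chemotherapy administration dates for one patient; truncation at radiation therapy is omitted, and this is an illustration rather than the study's actual algorithm.

from datetime import date, timedelta

def split_courses(admin_dates, gap_days=60):
    # A chemotherapy claim preceded by >=60 days with no other chemotherapy
    # claims is treated as the start of a new course.
    courses = [[admin_dates[0]]]
    for prev, curr in zip(admin_dates, admin_dates[1:]):
        if (curr - prev).days >= gap_days:
            courses.append([curr])
        else:
            courses[-1].append(curr)
    return courses

def segment_cycles(course_dates, max_cycles=8):
    # A cycle ends with the first administration occurring 12-59 days after the
    # cycle began; if no administration occurs in that window, the final cycle
    # (and the course) is closed 30 days after it began.
    cycles = []
    start = course_dates[0]
    for curr in course_dates[1:]:
        gap = (curr - start).days
        if gap < 12:
            continue  # same cycle (e.g., multi-day regimen)
        if gap > 59 or len(cycles) == max_cycles - 1:
            break
        cycles.append((start, curr))
        start = curr
    cycles.append((start, start + timedelta(days=30)))
    return cycles

# Example: three administrations 21 days apart yield three cycles.
# segment_cycles([date(2005, 3, 1), date(2005, 3, 22), date(2005, 4, 12)])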
 Daily filgrastim prophylaxis Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis\n[14,22,23]. Receipt of daily filgrastim was identified based on medical claims (J1440, J1441) with relevant codes from the HCPCS system.\nDuration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; end of prophylaxis was defined as a gap ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator for possible CINC), or hospitalization for CINC. Duration of prophylaxis was characterized as 1–3, 4–6, or ≥7 days.
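A sketch of the duration calculation described above, assuming sorted daily filgrastim administration dates within a cycle; the censoring events (outpatient IV antimicrobial therapy, hospitalization for CINC) are collapsed into a single optional cutoff date, and all names are illustrative only.

def prophylaxis_duration(filgrastim_dates, censor_date=None):
    # Prophylaxis ends at a gap of >=3 days between administrations or at the
    # censoring event (possible CINC), whichever comes first.
    end = filgrastim_dates[0]
    for curr in filgrastim_dates[1:]:
        if censor_date is not None and curr >= censor_date:
            break
        if (curr - end).days >= 3:
            break
        end = curr
    return (end - filgrastim_dates[0]).days + 1

def duration_category(days):
    # Grouping used in the analyses: 1-3, 4-6, or >=7 days.
    if days <= 3:
        return "1-3"
    if days <= 6:
        return "4-6"
    return ">=7"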
 Exclusion criteria Patient-cycles were excluded from the analytic file if the following occurred: (1) evidence of ≥2 primary cancers (solid or blood) within (i.e., +/-) 30 days of chemotherapy initiation; (2) any gaps in health benefits during the 6-month (\"pretreatment\") period prior to chemotherapy initiation; (3) evidence of hematopoietic stem cell or bone marrow transplantation prior to or during receipt of chemotherapy; (4) evidence of chemotherapy based only on medical claims for administration of the drugs (HCPCS codes identifying the specific chemotherapy agents were not available, and thus the regimen and level of myelosuppression could not be determined); (5) pharmacy claims for myelotoxic chemotherapy (only drug dispense dates are available on pharmacy claims, and because pharmacy/medical claims cannot be definitively linked, precise dates of chemotherapy administration–needed to characterize the course and cycles–could not be ascertained); (6) pharmacy claims for daily filgrastim (precise dates of administration could not be ascertained, for reasons stated above); (7) evidence of receipt of sargramostim (J2820) or pegfilgrastim (C9119, S0135, J2505) during the first 5 days of the cycle (i.e., as prophylaxis).
 Neutropenic complications requiring inpatient care CINC was identified based on inpatient admissions with a diagnosis (principal or secondary) of neutropenia (ICD-9-CM 288.0), fever (780.6), or infection (list available upon request). Admissions were identified on a cycle-specific basis using acute-care facility inpatient claims with admission dates anytime between day 6 and the last day of the chemotherapy cycle. Episodes of CINC treated exclusively on an outpatient basis were not considered. Because CINC that occurred on the day of the last dose of prophylaxis or the following day could have resulted in an artificial truncation of a planned longer course of prophylaxis–and thus could exaggerate the risk of CINC with shorter courses–analyses were conducted alternatively with and without inclusion of these patient cycles.\nConsequences of CINC requiring inpatient care were characterized in terms of in-hospital mortality, total hospital LOS (during the cycle), and total CINC-related healthcare expenditures (i.e., from hospital discharge to end of cycle). Total CINC-related healthcare expenditures included those for the initial hospitalization as well as care provided post-discharge on an outpatient basis; outpatient expenditures comprised encounters with a diagnosis of neutropenia, fever, or infection, as well as use/prescriptions for CSF agents (i.e., as treatment) and antimicrobial therapy. Expenditures were estimated based on total paid amounts on corresponding claims. Mortality was characterized using data only from the MarketScan Database as such information was not available in the LifeLink Database.\n Patient characteristics Patient characteristics included: age; gender; presence of selected chronic comorbidities (cardiovascular disease, diabetes, liver disease, renal disease); history of blood disorders (anemia, neutropenia, other), infection, hospitalization (all-cause and CINC-related, respectively), chemotherapy, and radiation therapy; pre-chemotherapy healthcare expenditures; type of cancer, presence of metastases; cycle number and minimum length of prior cycles; chemotherapy regimen; receipt of antimicrobial prophylaxis in the cycle of interest; and year of chemotherapy initiation. Only observed data were utilized in defining patient characteristics.\nAge was assessed as of the first day of the first cycle of chemotherapy in the course. Chronic comorbidities and history of blood disorders, infections, hospitalization, chemotherapy, radiation therapy, presence of metastases, and healthcare expenditures were assessed from the beginning of the 6-month pretreatment period through the first day of the corresponding cycle of chemotherapy. Selected variables (i.e., anemia, neutropenia, other blood disorders, infections) were alternatively evaluated during the chemotherapy course (up to the beginning of the cycle of interest). Metastases (bone vs. other site) and chronic comorbidities were identified on the basis of ≥1 diagnosis codes on inpatient claims, ≥2 diagnosis codes on outpatient claims (excluding those for laboratory services) on different days, ≥1 procedure codes, and ≥1 drug codes, as appropriate. Blood disorders and infections were identified on the basis of ≥1 diagnosis codes (on inpatient and/or outpatient claims) and ≥1 drug codes, as appropriate. Prophylactic use of antimicrobial agents was ascertained based on a medical claim for administration of drug from cycle day 1 to cycle day 5, or a pharmacy claim for a filled prescription from cycle day -3 to cycle day 5, with a corresponding drug code.\n Statistical analyses Incidence of CINC was evaluated for each patient-cycle in which daily filgrastim was administered prophylactically, and was estimated by same-cycle duration of prophylaxis in an unadjusted and adjusted context. For the latter, a generalized estimating equation (GEE) with a binomial distribution, logistic link function, and exchangeable correlation structure was employed. The GEE method accounts for correlation among repeated measures for the same subject (in this instance, across cycles), while controlling for both fixed characteristics (e.g., gender) and time-dependent covariates (e.g., first versus subsequent cycles). All observed patient characteristics were entered into, and retained in, the multivariate model.
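As an illustration of the model family described above (and not the authors' actual specification), a GEE with a binomial distribution, logit link, and exchangeable working correlation can be fit in Python with statsmodels roughly as follows; the data set name, column names, and the short covariate list are hypothetical, whereas the study model included all observed patient characteristics.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analytic file: one record per patient-cycle, with an indicator
# for CINC hospitalization and the same-cycle duration-of-prophylaxis category.
cycles = pd.read_csv("patient_cycles.csv")

model = smf.gee(
    "cinc ~ C(duration_cat) + age + C(gender) + C(cycle_number) + C(regimen)",
    groups="patient_id",                      # repeated cycles within patient
    data=cycles,
    family=sm.families.Binomial(),            # binomial distribution, logit link
    cov_struct=sm.cov_struct.Exchangeable(),  # exchangeable correlation structure
)
result = model.fit()
print(result.summary())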
Subgroup and sensitivity analyses focusing on the first cycle only, employing a narrow definition of CINC (which was identified using the diagnosis code for neutropenia only but otherwise the same algorithm for the broad definition as described above), and focusing on alternative tumor types separately (i.e., breast cancer, lung cancer, colorectal cancer, and NHL) also were conducted.\nIn-hospital mortality, hospital LOS, and CINC-related healthcare expenditures among patients who developed CINC were descriptively evaluated by duration of daily filgrastim prophylaxis. Confidence intervals (95%) for in-hospital mortality were computed using the Wilson score interval; confidence intervals for LOS and economic costs were computed using nonparametric bootstrapping (percentile method) from the study population (1,000 replicates with replacement). Confidence intervals for in-hospital mortality, hospital LOS, and healthcare expenditures were estimated assuming independence between observations. Only observed data were used in defining study variables; patients who developed CINC but were missing data on study measures–because such data either were not recorded on claims or were not provided by some health plans–were excluded from corresponding analyses.", "Patient-level information from two large healthcare claims databases–the Thomson Reuters MarketScan Commercial Claims and Encounters and Medicare Supplemental and Coordination of Benefits Database (MarketScan Database, 2001–2010) and the Intercontinental Marketing Services LifeLink Database (LifeLink Database, 2001–2008)–were pooled for analyses. Both databases comprise medical (i.e., facility and professional service) and outpatient pharmacy claims from a large number of participating private health plans, and each contains claims data for 15 million persons annually.\nData available for each facility and professional-service claim include date and place of service, diagnoses, procedures performed/services rendered, and quantity of services (professional-service claims). Data available for each retail pharmacy claim include the drug dispensed, dispensing date, quantity dispensed, and number of days supplied. All claims also include paid (i.e., reimbursed) amounts. Selected demographic and eligibility information is available for persons in both databases. All data can be arrayed to provide a detailed chronology of all medical and pharmacy services used by each plan member over time.\nThe study databases were de-identified prior to their release to study investigators, as set forth in the corresponding Data Use Agreements. The study databases have been evaluated and certified by independent third parties to be in compliance with the Health Insurance Portability and Accountability Act (HIPAA) of 1996 statistical de-identification standards and to satisfy the conditions set forth in Sections 164.514 (a)-(b) 1ii of the HIPAA Privacy Rule regarding the determination and documentation of statistically de-identified data. Use of the study databases for health services research was determined–via independent third parties–to be fully compliant with the HIPAA Privacy Rule and federal guidance on Public Welfare and the Protection of Human Subjects\n[29].", "The study population comprised all patients who initiated ≥1 course of myelosuppressive chemotherapy for a solid tumor or non-Hodgkin’s lymphoma (NHL) and who received daily filgrastim prophylaxis during ≥1 cycle. 
All cycles in which patients received daily filgrastim prophylaxis–irrespective of duration–were pooled for analyses.", "All patients, aged ≥18 years, who began ≥1 new course of myelosuppressive chemotherapy were identified; the enrollment window was July 1, 2001 through June 30, 2010 for patients in the MarketScan Database and July 1, 2001 through June 30, 2008 for patients in the LifeLink Database. 
Receipt of chemotherapy was ascertained based on the presence of ≥1 paid medical claim for a chemotherapy drug or administration thereof (identified using Healthcare Common Procedure Coding System [HCPCS], International Classification of Disease, Ninth Revision, Clinical Modification [ICD-9-CM], and Health Care Financing Administration Uniform Bill-92 [UB-92] revenue codes).\nPatients were considered to have initiated a new course of chemotherapy if there was a chemotherapy claim during the study period that was preceded by a period ≥60 days without any other claims for chemotherapy. Only patients who had evidence of a primary solid tumor or NHL (based on ≥2 medical claims [≥7 days apart] with a qualifying 3-digit ICD-9-CM diagnosis code during the period beginning 30 days prior to the index date and ending 30 days thereafter) were selected.", "For each cancer chemotherapy patient, each unique cycle within each course of chemotherapy was identified. The first chemotherapy cycle (of the first course) was defined as beginning with the date of initiation of chemotherapy and ending with the first service date for the next administration of chemotherapy administration (as evidenced by a medical claim with a corresponding HCPCS, ICD-9-CM, or UB-92 code) occurring at least 12 days-but no more than 59 days-after the date of initiation of chemotherapy. If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course were considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered.\nChemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request).", "Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis\n[14,22,23]. Receipt of daily filgrastim was identified based on medical claims (J1440, J1441) with relevant codes from the HCPCS system.\nDuration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; end of prophylaxis was defined as a gap ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator for possible CINC), or hospitalization for CINC. 
Daily filgrastim prophylaxis

Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis [14,22,23].

Neutropenic complications requiring inpatient care

CINC was identified based on inpatient admissions with a diagnosis (principal or secondary) of neutropenia (ICD-9-CM 288.0), fever (780.6), or infection (list available upon request). Admissions were identified on a cycle-specific basis using acute-care facility inpatient claims with admission dates anytime between day 6 and the last day of the chemotherapy cycle. Episodes of CINC treated exclusively on an outpatient basis were not considered. Because CINC that occurred on the day of the last dose of prophylaxis or the following day could have resulted in an artificial truncation of a planned longer course of prophylaxis–and thus could exaggerate the risk of CINC with shorter courses–analyses were conducted alternatively with and without inclusion of these patient-cycles.

Consequences of CINC requiring inpatient care were characterized in terms of in-hospital mortality, total hospital LOS (during the cycle), and total CINC-related healthcare expenditures (i.e., from hospital discharge to end of cycle). Total CINC-related healthcare expenditures included those for the initial hospitalization as well as care provided post-discharge on an outpatient basis; outpatient expenditures comprised encounters with a diagnosis of neutropenia, fever, or infection, as well as use/prescriptions for CSF agents (i.e., as treatment) and antimicrobial therapy. Expenditures were estimated based on total paid amounts on corresponding claims. Mortality was characterized using data only from the MarketScan Database as such information was not available in the LifeLink Database.
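A minimal sketch of this case-finding rule follows (illustrative code with hypothetical field names; the study's full ICD-9-CM infection code list is not reproduced here, so placeholder codes are used):

```python
# Hypothetical sketch: flag a patient-cycle with CINC requiring inpatient care.
# The study used ICD-9-CM 288.0, 780.6, and a longer infection code list.
NEUTROPENIA = {"288.0"}
FEVER = {"780.6"}
INFECTION_EXAMPLES = {"038.9", "486"}   # placeholders, not the study's full list

def has_cinc(admissions, cycle_end_day):
    """admissions: iterable of (admission_cycle_day, set_of_icd9_codes) from
    acute-care inpatient claims. A qualifying diagnosis must appear on an
    admission dated between cycle day 6 and the last day of the cycle."""
    qualifying = NEUTROPENIA | FEVER | INFECTION_EXAMPLES
    return any(6 <= day <= cycle_end_day and codes & qualifying
               for day, codes in admissions)

print(has_cinc([(10, {"780.6", "486"})], 21))  # True
print(has_cinc([(3, {"288.0"})], 21))          # False: admitted before cycle day 6
```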
Patient characteristics

Patient characteristics included: age; gender; presence of selected chronic comorbidities (cardiovascular disease, diabetes, liver disease, renal disease); history of blood disorders (anemia, neutropenia, other), infection, hospitalization (all-cause and CINC-related, respectively), chemotherapy, and radiation therapy; pre-chemotherapy healthcare expenditures; type of cancer; presence of metastases; cycle number and minimum length of prior cycles; chemotherapy regimen; receipt of antimicrobial prophylaxis in the cycle of interest; and year of chemotherapy initiation. Only observed data were utilized in defining patient characteristics. Age was assessed as of the first day of the first cycle of chemotherapy in the course. Chronic comorbidities and history of blood disorders, infections, hospitalization, chemotherapy, radiation therapy, presence of metastases, and healthcare expenditures were assessed from the beginning of the 6-month pretreatment period through the first day of the corresponding cycle of chemotherapy. Selected variables (i.e., anemia, neutropenia, other blood disorders, infections) were alternatively evaluated during the chemotherapy course (up to the beginning of the cycle of interest). Metastases (bone vs. other site) and chronic comorbidities were identified on the basis of ≥1 diagnosis code on inpatient claims, ≥2 diagnosis codes on outpatient claims (excluding those for laboratory services) on different days, ≥1 procedure code, and ≥1 drug code, as appropriate. Blood disorders and infections were identified on the basis of ≥1 diagnosis code (on inpatient and/or outpatient claims) and ≥1 drug code, as appropriate. Prophylactic use of antimicrobial agents was ascertained based on a medical claim for administration of drug from cycle day 1 to cycle day 5, or a pharmacy claim for a filled prescription from cycle day -3 to cycle day 5, with a corresponding drug code.

Statistical analyses

Incidence of CINC was evaluated for each patient-cycle in which daily filgrastim was administered prophylactically, and was estimated by same-cycle duration of prophylaxis in an unadjusted and adjusted context. For the latter, a generalized estimating equation (GEE) model with a binomial distribution, logistic link function, and exchangeable correlation structure was employed. The GEE method accounts for correlation among repeated measures for the same subject (in this instance, across cycles), while controlling for both fixed characteristics (e.g., gender) and time-dependent covariates (e.g., first versus subsequent cycles). All observed patient characteristics were entered into, and retained in, the multivariate model. Subgroup and sensitivity analyses also were conducted, focusing on the first cycle only, employing a narrow definition of CINC (identified using the diagnosis code for neutropenia only but otherwise the same algorithm as the broad definition described above), and focusing on individual tumor types (i.e., breast cancer, lung cancer, colorectal cancer, and NHL).

In-hospital mortality, hospital LOS, and CINC-related healthcare expenditures among patients who developed CINC were descriptively evaluated by duration of daily filgrastim prophylaxis. Confidence intervals (95%) for in-hospital mortality were computed using the Wilson score interval; confidence intervals for LOS and economic costs were computed using nonparametric bootstrapping (percentile method) from the study population (1,000 replicates with replacement). Confidence intervals for in-hospital mortality, hospital LOS, and healthcare expenditures were estimated assuming independence between observations. Only observed data were used in defining study variables; patients who developed CINC but were missing data on study measures–because such data either were not recorded on claims or were not provided by some health plans–were excluded from corresponding analyses.
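For readers less familiar with this approach, the following is a schematic illustration of the kind of GEE specification described above, using statsmodels in Python on synthetic data. It is not the authors' code: the covariate list is abbreviated, the data are simulated purely for illustration, and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic patient-cycle data for illustration only (not study data):
# one row per cycle, repeated cycles clustered within patients.
rng = np.random.default_rng(0)
n_patients, n_cycles = 200, 3
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_cycles),
    "cycle_num": np.tile(np.arange(1, n_cycles + 1), n_patients),
    "age": np.repeat(rng.integers(30, 75, n_patients), n_cycles),
    "duration": rng.choice(["1-3", "4-6", ">=7"], n_patients * n_cycles,
                           p=[0.58, 0.26, 0.16]),
})
logit = -3.5 + 0.6 * (df["duration"] == "1-3") + 0.4 * (df["duration"] == "4-6")
df["cinc"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Binomial family with a logit link and an exchangeable working correlation,
# clustering repeated cycles within the same patient; ">=7" days is the referent.
df["duration"] = pd.Categorical(df["duration"], categories=[">=7", "4-6", "1-3"])
model = smf.gee(
    "cinc ~ C(duration) + age + cycle_num",
    groups="patient_id", data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(np.exp(result.params))  # exponentiated coefficients give odds ratios vs. >=7 days
```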
Results

A total of 135,921 adult patients initiated a new course of chemotherapy for a solid tumor or NHL during the period of interest and met all other eligibility criteria. Among these patients, a total of 5,477 received daily filgrastim as prophylaxis during ≥1 cycle of chemotherapy and thus were included in the study population; these patients contributed a total of 14,288 daily filgrastim prophylaxis patient-cycles to the analytic file. Filgrastim was administered for 1–3 days in 58% of cycles (n = 8,371), 4–6 days in 26% of cycles (n = 3,691), and ≥7 days in 16% of cycles (n = 2,226) (Figure 1).

Figure 1. Days of filgrastim prophylaxis.

Mean (SD) age of patients was 54 (13) years for 1–3 days of filgrastim prophylaxis, 57 (14) years for 4–6 days of prophylaxis, and 56 (13) years for ≥7 days of prophylaxis (Table 1). Breast cancer was the most common tumor type (48% for 1–3 days, 50% for 4–6 days, 49% for ≥7 days), followed by NHL (10%, 13%, and 13%, respectively) and colorectal cancer (16%, 10%, and 13%, respectively). Metastatic disease was present in 38% of patients receiving 1–3 days of prophylaxis, 32% receiving 4–6 days, and 35% receiving ≥7 days. Antimicrobial agents–principally oral (~90%)–were concurrently used as prophylaxis in 6.7% of patients receiving 1–3 days of filgrastim prophylaxis, 7.5% receiving 4–6 days, and 7.5% receiving ≥7 days.

Table 1. Patient, cancer, and treatment characteristics, by duration of filgrastim prophylaxis.

Crude risk of CINC during a cycle of chemotherapy was 2.9% with 1–3 days of filgrastim prophylaxis, 2.7% with 4–6 days, and 1.8% with ≥7 days. In adjusted analyses, the odds of CINC were 2.4 (95% CI: 1.6-3.4) and 1.9 (1.3-2.8) times higher with 1–3 and 4–6 days of filgrastim prophylaxis, respectively, versus ≥7 days (referent group) (Figure 2). Among the subgroup of patients who developed CINC requiring inpatient care (n = 382), mean hospital LOS was 7.4 (6.4-8.3) days with 1–3 days of prophylaxis (n = 243), 7.1 (5.7-8.5) days with 4–6 days of prophylaxis (n = 99), and 6.5 (4.9-8.0) days with ≥7 days of prophylaxis (n = 40) (Table 2). Among the subgroup of patients who developed CINC and for whom healthcare expenditures were available (n = 358), mean total CINC-related healthcare expenditures were $18,912 (14,570-23,581) with 1–3 days of prophylaxis (n = 225), $14,907 (11,155-19,728) with 4–6 days (n = 94), and $13,165 (9,595-17,144) with ≥7 days (n = 39). Among the subgroup of patients who developed CINC and for whom discharge status was available (n = 228), in-hospital mortality was 8.4% (4.6-14.8) with 1–3 days of prophylaxis (n = 119), 4.0% (1.4-11.1) with 4–6 days (n = 75), and 0% (0-10.2) with ≥7 days (n = 34).
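The interval estimates reported here follow the methods described in the statistical analyses section; for example, the Wilson score interval for 0 deaths among 34 patient-cycles reproduces the reported upper bound of roughly 10.2%, and the LOS and expenditure intervals use the percentile bootstrap. A minimal sketch (illustrative code, not the authors' implementation) is:

```python
import math
import random

def wilson_ci(events, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

def bootstrap_percentile_ci(values, stat=lambda v: sum(v) / len(v), reps=1000, seed=1):
    """Percentile-method bootstrap CI for a statistic (resampling with replacement)."""
    rng = random.Random(seed)
    stats = sorted(stat([rng.choice(values) for _ in values]) for _ in range(reps))
    return stats[int(0.025 * reps)], stats[int(0.975 * reps)]

print(wilson_ci(0, 34))   # approx (0.0, 0.102): 0% with an upper bound of ~10.2%
print(bootstrap_percentile_ci([5, 7, 9, 6, 8, 7, 10, 4]))  # bootstrap CI for a mean (e.g., LOS)
```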
Figure 2. Adjusted odds ratios for chemotherapy-induced neutropenic complications requiring inpatient care, by duration of filgrastim prophylaxis.* *Results adjusted for patient, cancer, and chemotherapy characteristics.

Table 2. Unadjusted risk of inpatient mortality, length of stay in hospital, and healthcare expenditures for chemotherapy-induced neutropenic complications requiring inpatient care, by duration of filgrastim prophylaxis. *n's indicate the number of patient-cycles considered in analyses of study measures; among patients who developed CINC (n = 382), those with missing data on study measures–because such data were not recorded on claims or were not provided by some health plans–were excluded from corresponding analyses. **Only patients with paid amounts > $0 were considered in these analyses.

In subgroup analyses focusing on the first cycle only, crude risks of CINC were 7.0% with 1–3 days of filgrastim prophylaxis (n = 1,054), 5.0% with 4–6 days (n = 482), and 5.2% with ≥7 days (n = 363); in adjusted analyses, odds of CINC were 1.6 (0.9-2.8) and 1.1 (0.6-2.1) times higher with 1–3 and 4–6 days (versus ≥7 days) of filgrastim prophylaxis in cycle 1 (Table 3). In analyses employing the narrow definition of CINC, crude risks of CINC were 1.6% with 1–3 days of filgrastim prophylaxis, 1.8% with 4–6 days, and 1.3% with ≥7 days; in adjusted analyses, odds of CINC were 1.6 (1.1-2.5) and 1.7 (1.1-2.7) times higher with 1–3 and 4–6 days of filgrastim prophylaxis, respectively, versus ≥7 days. In tumor-specific analyses, adjusted odds of CINC with 1–3 and 4–6 days of filgrastim prophylaxis (vs. ≥7 days) were: 1.8 (1.0-3.0) and 1.9 (1.1-3.4) for breast cancer; 20.2 (1.9-212.4) and 14.5 (1.3-158.7) for lung cancer; 1.9 (0.2-16.7) and 2.5 (0.3-23.5) for colorectal cancer; and 2.1 (1.2-3.6) and 1.8 (1.0-3.2) for NHL. We note that not all observed differences were statistically significant in unadjusted subgroup and secondary analyses, presumably due to the lack of adjustment for systematic differences in patient characteristics between prophylaxis subgroups.

Table 3. Adjusted odds ratios for chemotherapy-induced neutropenic complications requiring inpatient care in subgroup and secondary analyses. *Results adjusted for patient, cancer, and treatment characteristics listed in Table 2.

Discussion

In the largest retrospective study of daily filgrastim use in US clinical practice, we found that the large majority of patients undergoing cancer chemotherapy who are administered prophylaxis with daily filgrastim receive considerably fewer days of administration than subjects in clinical trials of daily filgrastim prophylaxis. In our study population, 95% of patients received fewer than 10 days of filgrastim prophylaxis and 58% received only 1–3 days, versus the typical 10–11 days required for neutrophil recovery in the pivotal clinical trials [17-19]. This finding is largely consistent with observations from other retrospective clinical practice studies [14-16,22,23]. We also found that patients who receive shorter courses of daily filgrastim prophylaxis have a substantially higher risk of CINC requiring hospitalization. Among patients in our study population who received 1–3 or 4–6 days of prophylaxis, the adjusted odds of CINC were 2.4 and 1.9 times higher, respectively, than among those receiving ≥7 days. In addition, our study results suggest that, among the subgroup of patients who develop CINC despite prophylaxis, the consequences of this condition may be worse among those who receive fewer administrations of daily filgrastim prophylaxis.
This last set of results should be interpreted with caution, however, as the number of patients who developed CINC despite prophylaxis was, in absolute terms, small.

Over the past decade, the frequency of use of alternative (often more myelosuppressive) chemotherapy regimens in US clinical practice has changed considerably, in large part due to better chemotherapy agents, combinations of agents, and dosing schedules [25-27]. Because these regimens are typically associated with higher levels of myelosuppression, use of supportive care–including CSF prophylaxis–also has increased [27]. While pegfilgrastim–a longer-acting version of filgrastim that requires only a single dose administered subcutaneously once per chemotherapy cycle–is now (by far) the most commonly used CSF prophylactic agent in US clinical practice, filgrastim accounts for a small but important segment of the prophylactic market [14,16]. In the present study, 60,600 (45%) of the 135,921 adult cancer chemotherapy study subjects received CSF prophylaxis in ≥1 cycle during their chemotherapy course. Among the subgroup who received CSF prophylaxis, 91% received pegfilgrastim in ≥1 cycle and 9% received filgrastim. Notwithstanding these temporal changes in chemotherapy regimens and supportive care, however, our findings regarding the frequent use of shorter courses of filgrastim prophylaxis and the associated consequences–based on a study population of over 5,000 patients and nearly 15,000 patient-cycles–are largely consistent with those reported previously. In the 2006 study by Weycker and colleagues–which included 598 breast cancer, lung cancer, and NHL patients and employed data from 1998-2002–mean duration of prophylaxis ranged from 4.3 to 6.5 days across tumor types, while in the Morrison study–which included 1,451 cancer patients and employed data from 2001-2003–mean duration of prophylaxis ranged from 3.7 to 6.0 days across calendar years and cycles of use [15,23]. In the present study, mean (SD) duration of prophylaxis on an overall basis was 3.6 (2.9) days. Moreover, in the aforementioned 2011 study by Weycker et al., odds of CINC were reported to be 1.5 times higher for patients receiving <7 versus ≥7 days of filgrastim prophylaxis, while the corresponding odds ratio in this study was estimated to be 2.2 (95% CI 1.6-3.1) [14]. We note that odds ratios for CINC reported above were largely comparable when limiting attention to the most recent four-year period (2007–2010): 2.5 (1.5-4.3) with 1–3 versus ≥7 days of filgrastim prophylaxis and 2.2 (1.2-3.9) with 4–6 versus ≥7 days of filgrastim prophylaxis. Study results also were robust to alternative specifications of the multivariate model and when excluding observations for which hospitalizations occurred on the day of the last dose of daily filgrastim prophylaxis or the following day.

We expected that systematic differences in the prevalence of risk factors for CINC (e.g., history of neutropenia, higher doses of chemotherapy agents, metastases to bone, poor performance status) would occur according to the duration of prophylaxis (1–3, 4–6, vs. ≥7 days). We thus used multivariate regression to adjust for such risk factors–to the extent possible–in analyzing the relationship between duration of daily filgrastim prophylaxis and outcomes of interest. Given the limitations of the claims-based databases we used, however, we were forced to use proxies for certain established risk factors.
For example, a proxy measure based on pre-chemotherapy healthcare expenditures–which in prior research has been shown to be correlated with health status in other patient populations [30]–was employed for performance status. Moreover, because information was not available for some clinically important parameters (e.g., ANC), the possibility exists that the study groups differed in terms of unobserved characteristics that predispose to CINC.

We believe that our approach to controlling for such systematic differences was comprehensive given available data, and that the above-noted biases–if present–would confer a conservative bias to analyses. For example, if the risk of FN is higher among patients receiving higher doses of chemotherapy (which is unobservable in the study database), and these patients are more likely to receive longer durations of prophylaxis, then the estimated difference in risk between patients receiving longer versus shorter courses of prophylaxis will be smaller than the true difference. That our adjustment procedures were at least to some extent successful is suggested by the greater odds ratios for CINC in the adjusted versus unadjusted analyses, but uncertainty as to the adequacy of adjustment for confounding risk factors is one of the major limitations of our study.

In those instances where code-based operational algorithms were used to identify risk factors of interest (e.g., using ICD-9-CM code 288.0 for neutropenia rather than ANC), errors of omission/commission in medical coding may have impacted the accuracy of adjustment. We do not believe, however, that any such differences or limitations were for the most part systematic in nature. However, patients who have a history of CINC–and thus may be more likely to receive longer durations of prophylaxis–may be more likely to have "neutropenia" designated as a secondary (or even primary) diagnosis on future encounters, all else equal.

There is no ICD-9-CM diagnosis code for CINC (i.e., neutropenia-related fever or infection), and thus codes for neutropenia, fever, and infection were employed to identify hospitalizations assumed to be related to neutropenic complications. Patients are typically not given chemotherapy when they are neutropenic or have an active infection. The timing of fever and infection after chemotherapy increases the likelihood that such outcomes are related to receipt of chemotherapy. Codes for neutropenia, fever (a surrogate for active infection), and infection thus are all likely to be related to such episodes if seen within a defined exposure period after receipt of chemotherapy. While the sensitivity of this algorithm for identifying neutropenic complications is undoubtedly higher than that of an algorithm using only the ICD-9-CM code for neutropenia, its specificity and positive predictive value are unknown.

Other miscellaneous limitations deserve a brief mention. CINC requiring outpatient care were not considered in our analyses due to the small total number of events (n = 60). Patients with evidence of receipt of daily filgrastim via outpatient pharmacy (3% of total) were excluded because we could not ascertain the precise dates of use. Expenditures were not adjusted to current dollars, since use of a general price index may yield spurious findings when applied to specific patients in specific health plans who consumed specific healthcare services, and since the distribution of study groups by calendar year was comparable.
Hospital discharge disposition was available only in the MarketScan Database, and information concerning LOS or paid amounts was missing for some hospitalizations. Moreover, we note that a disproportionately high percentage of patients receiving 1–3 days of filgrastim prophylaxis (vs. those receiving 4–6 or ≥7 days) who developed CINC requiring inpatient care did not have information available on discharge disposition, and thus the inpatient mortality findings must be viewed with caution. Although patients initiating "delayed" use of daily filgrastim (e.g., after day 5 of the chemotherapy cycle) have been included in other published evaluations, they were not considered in our analyses, as such patients may have received daily filgrastim for the treatment of CINC (e.g., severe neutropenia) as opposed to prophylaxis against CINC [15,16]. Finally, we note that it cannot be determined from outpatient pharmacy claims data whether drugs dispensed were actually taken, when they were taken, or how much was taken. For this reason, our characterization of antimicrobial prophylaxis use (principally oral) may be upwardly biased, and differences in actual use across prophylaxis subgroups may confound the results of analyses. We also note, however, that the percentage of patients with filled prescriptions for antimicrobials was relatively low (and comparable) across filgrastim prophylaxis subgroups.

Conclusions

In conclusion, we found that among patients receiving myelosuppressive chemotherapy, shorter courses of daily filgrastim prophylaxis are associated with an increased risk of CINC. We also found that when CINC develops despite daily filgrastim prophylaxis, the outcomes thereof may be poorer in those receiving shorter courses of prophylaxis. Additional research is needed to explore these relationships among individual tumor types and chemotherapy regimens.

Funding for this research was provided by Amgen Inc. to Policy Analysis Inc. (PAI). Derek Weycker, John Edelsberg, and Alex Kartashov are employed by Policy Analysis Inc. (PAI). Rich Barron and Jason Legg are employed by Amgen Inc. Andrew Glass is employed by the Center for Health Research, Kaiser Permanente Northwest, and received an honorarium for this research from Amgen Inc.

Authorship was designated based on the guidelines promulgated by the International Committee of Medical Journal Editors (2004). All persons who meet criteria for authorship are listed as authors on the title page. The contribution of each of these individuals to this study–by task–is as follows: conception and supervision (Barron, Weycker), development of design (Barron, Edelsberg, Glass, Legg, Weycker), conduct of analyses (Kartashov, Weycker), interpretation of results (all authors), preparation of manuscript (Edelsberg, Weycker), and review of manuscript (all authors). All authors have read and approved the final version of the manuscript. The study sponsor reviewed the study research plan and study manuscript; data management, processing, and analyses were conducted by Policy Analysis Inc. (PAI), and all final analytic decisions were made by study investigators.

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/14/189/prepub
[ "Filgrastim", "Granulocyte colony-stimulating factor", "Febrile neutropenia", "Cost", "Neoplasms" ]
Background

Neutropenia is a common side effect of myelosuppressive chemotherapy that both increases the risk of infection and diminishes patients' ability to fight infection. When neutropenic patients become febrile, the high likelihood of infection and serious consequences thereof usually results in hospitalization [1,2]. FN, as well as severe or prolonged neutropenia, also can interfere with the planned delivery of treatment and adversely affect important patient outcomes [2-11]. Clinical practice guidelines recommend primary prophylactic use of a colony-stimulating factor (CSF)–which has been shown to reduce the risk of FN in clinical trials–when the risk of FN is 20% or higher [2]. While the American Society of Clinical Oncology (ASCO) initially recommended that CSF prophylaxis be administered only when FN risk is 40% or higher, in 2006, ASCO lowered the threshold to 20% based on data highlighting the importance of FN-related hospitalization as an outcome and evidence demonstrating the value of CSF prophylaxis in reducing the risk of FN, the risk of FN-related hospitalization, and the associated use of IV anti-infective agents [2,12,13]. The CSF filgrastim, which is widely used in clinical practice as prophylaxis against FN, requires daily administration during each cycle until neutrophil recovery occurs (in clinical trials, given typically for 10–11 days [and up to 14 days] until absolute neutrophil count [ANC] ≥10 × 10⁹/L) [14-19]. In clinical practice, patients often receive shorter courses of daily filgrastim prophylaxis than those administered to subjects in the clinical trial setting [14-16,20-24]. While published studies suggest that shorter courses of daily filgrastim prophylaxis are associated with an increased risk of hospitalization for CINC, these studies focused on selected tumor types or employed data that now are over a decade old [23,24]. During the past decade, use of chemotherapy and supportive care in clinical practice–as well as recommended use of these agents in authoritative guidelines–has changed considerably [2,25-27]. Moreover, only one study has examined whether CSFs may favorably impact clinical outcomes and economic costs when CINC develop despite CSF prophylaxis, and it was published a decade ago and focused on elderly patients with a single type of cancer (non-Hodgkin's lymphoma) [28]. We therefore undertook a new study to evaluate the relationship between the duration of daily filgrastim prophylaxis and the risk and consequences of CINC requiring inpatient care using a large healthcare claims database. While such databases often lack detailed clinical information (e.g., on absolute neutrophil counts), they offer access to information on the health profile and healthcare utilization of tens of millions of covered lives.

Methods

Data source

Patient-level information from two large healthcare claims databases–the Thomson Reuters MarketScan Commercial Claims and Encounters and Medicare Supplemental and Coordination of Benefits Database (MarketScan Database, 2001–2010) and the Intercontinental Marketing Services LifeLink Database (LifeLink Database, 2001–2008)–were pooled for analyses. Both databases comprise medical (i.e., facility and professional service) and outpatient pharmacy claims from a large number of participating private health plans, and each contains claims data for 15 million persons annually.
Data available for each facility and professional-service claim include date and place of service, diagnoses, procedures performed/services rendered, and quantity of services (professional-service claims). Data available for each retail pharmacy claim include the drug dispensed, dispensing date, quantity dispensed, and number of days supplied. All claims also include paid (i.e., reimbursed) amounts. Selected demographic and eligibility information is available for persons in both databases. All data can be arrayed to provide a detailed chronology of all medical and pharmacy services used by each plan member over time. The study databases were de-identified prior to their release to study investigators, as set forth in the corresponding Data Use Agreements. The study databases have been evaluated and certified by independent third parties to be in compliance with the Health Insurance Portability and Accountability Act (HIPAA) of 1996 statistical de-identification standards and to satisfy the conditions set forth in Sections 164.514 (a)-(b) 1ii of the HIPAA Privacy Rule regarding the determination and documentation of statistically de-identified data. Use of the study databases for health services research was determined–via independent third parties–to be fully compliant with the HIPAA Privacy Rule and federal guidance on Public Welfare and the Protection of Human Subjects [29].
Study population

The study population comprised all patients who initiated ≥1 course of myelosuppressive chemotherapy for a solid tumor or non-Hodgkin's lymphoma (NHL) and who received daily filgrastim prophylaxis during ≥1 cycle. All cycles in which patients received daily filgrastim prophylaxis–irrespective of duration–were pooled for analyses.
If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course were considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered. Chemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request). For each cancer chemotherapy patient, each unique cycle within each course of chemotherapy was identified. The first chemotherapy cycle (of the first course) was defined as beginning with the date of initiation of chemotherapy and ending with the first service date for the next administration of chemotherapy administration (as evidenced by a medical claim with a corresponding HCPCS, ICD-9-CM, or UB-92 code) occurring at least 12 days-but no more than 59 days-after the date of initiation of chemotherapy. If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course were considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered. Chemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request). Daily filgrastim prophylaxis Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis [14,22,23]. Receipt of daily filgrastim was identified based on medical claims (J1440, J1441) with relevant codes from the HCPCS system. 
Duration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; end of prophylaxis was defined as a gap ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator for possible CINC), or hospitalization for CINC. Duration of prophylaxis was characterized as 1–3, 4–6, or ≥7 days. Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis [14,22,23]. Receipt of daily filgrastim was identified based on medical claims (J1440, J1441) with relevant codes from the HCPCS system. Duration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; end of prophylaxis was defined as a gap ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator for possible CINC), or hospitalization for CINC. Duration of prophylaxis was characterized as 1–3, 4–6, or ≥7 days. Exclusion criteria Patient-cycles were excluded from the analytic file if the following occurred: (1) evidence of ≥2 primary cancers (solid or blood) within (i.e., +/-) 30 days of chemotherapy initiation; (2) any gaps in health benefits during the 6-month ("pretreatment") period prior to chemotherapy initiation; (3) evidence of hematopoietic stem cell or bone marrow transplantation prior to or during receipt of chemotherapy; (4) evidence of chemotherapy based only on medical claims for administration of the drugs (HCPCS codes identifying the specific chemotherapy agents were not available, and thus the regimen and level of myelosuppression could not be determined); (5) pharmacy claims for myelotoxic chemotherapy (only drug dispense dates are available on pharmacy claims, and because pharmacy/medical claims cannot be definitively linked, precise dates of chemotherapy administration–needed to characterize the course and cycles–could not be ascertained); (6) pharmacy claims for daily filgrastim (precise dates of administration could not be ascertained, for reasons stated above); (7) evidence of receipt of sargramostim (J2820) or pegfilgrastim (C9119, S0135, J2505) during the first 5 days of the cycle (i.e., as prophylaxis). 
Patient-cycles were excluded from the analytic file if the following occurred: (1) evidence of ≥2 primary cancers (solid or blood) within (i.e., +/-) 30 days of chemotherapy initiation; (2) any gaps in health benefits during the 6-month ("pretreatment") period prior to chemotherapy initiation; (3) evidence of hematopoietic stem cell or bone marrow transplantation prior to or during receipt of chemotherapy; (4) evidence of chemotherapy based only on medical claims for administration of the drugs (HCPCS codes identifying the specific chemotherapy agents were not available, and thus the regimen and level of myelosuppression could not be determined); (5) pharmacy claims for myelotoxic chemotherapy (only drug dispense dates are available on pharmacy claims, and because pharmacy/medical claims cannot be definitively linked, precise dates of chemotherapy administration–needed to characterize the course and cycles–could not be ascertained); (6) pharmacy claims for daily filgrastim (precise dates of administration could not be ascertained, for reasons stated above); (7) evidence of receipt of sargramostim (J2820) or pegfilgrastim (C9119, S0135, J2505) during the first 5 days of the cycle (i.e., as prophylaxis). The study population comprised all patients who initiated ≥1 course of myelosuppressive chemotherapy for a solid tumor or non-Hodgkin’s lymphoma (NHL) and who received daily filgrastim prophylaxis during ≥1 cycle. All cycles in which patients received daily filgrastim prophylaxis–irrespective of duration–were pooled for analyses. Cancer chemotherapy patients All patients, aged ≥18 years, who began ≥1 new course of myelosuppressive chemotherapy were identified; the enrollment window was July 1, 2001 through June 30, 2010 for patients in the MarketScan Database and July 1, 2001 through June 30, 2008 for patients in the LifeLink Database. Receipt of chemotherapy was ascertained based on the presence of ≥1 paid medical claim for a chemotherapy drug or administration thereof (identified using Healthcare Common Procedure Coding System [HCPCS], International Classification of Disease, Ninth Revision, Clinical Modification [ICD-9-CM], and Health Care Financing Administration Uniform Bill-92 [UB-92] revenue codes). Patients were considered to have initiated a new course of chemotherapy if there was a chemotherapy claim during the study period that was preceded by a period ≥60 days without any other claims for chemotherapy. Only patients who had evidence of a primary solid tumor or NHL (based on ≥2 medical claims [≥7 days apart] with a qualifying 3-digit ICD-9-CM diagnosis code during the period beginning 30 days prior to the index date and ending 30 days thereafter) were selected. All patients, aged ≥18 years, who began ≥1 new course of myelosuppressive chemotherapy were identified; the enrollment window was July 1, 2001 through June 30, 2010 for patients in the MarketScan Database and July 1, 2001 through June 30, 2008 for patients in the LifeLink Database. Receipt of chemotherapy was ascertained based on the presence of ≥1 paid medical claim for a chemotherapy drug or administration thereof (identified using Healthcare Common Procedure Coding System [HCPCS], International Classification of Disease, Ninth Revision, Clinical Modification [ICD-9-CM], and Health Care Financing Administration Uniform Bill-92 [UB-92] revenue codes). 
Patients were considered to have initiated a new course of chemotherapy if there was a chemotherapy claim during the study period that was preceded by a period ≥60 days without any other claims for chemotherapy. Only patients who had evidence of a primary solid tumor or NHL (based on ≥2 medical claims [≥7 days apart] with a qualifying 3-digit ICD-9-CM diagnosis code during the period beginning 30 days prior to the index date and ending 30 days thereafter) were selected. Chemotherapy courses, cycles, and regimens For each cancer chemotherapy patient, each unique cycle within each course of chemotherapy was identified. The first chemotherapy cycle (of the first course) was defined as beginning with the date of initiation of chemotherapy and ending with the first service date for the next administration of chemotherapy administration (as evidenced by a medical claim with a corresponding HCPCS, ICD-9-CM, or UB-92 code) occurring at least 12 days-but no more than 59 days-after the date of initiation of chemotherapy. If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course were considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered. Chemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request). For each cancer chemotherapy patient, each unique cycle within each course of chemotherapy was identified. The first chemotherapy cycle (of the first course) was defined as beginning with the date of initiation of chemotherapy and ending with the first service date for the next administration of chemotherapy administration (as evidenced by a medical claim with a corresponding HCPCS, ICD-9-CM, or UB-92 code) occurring at least 12 days-but no more than 59 days-after the date of initiation of chemotherapy. If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course were considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered. 
Chemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request). Daily filgrastim prophylaxis Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis [14,22,23]. Receipt of daily filgrastim was identified based on medical claims (J1440, J1441) with relevant codes from the HCPCS system. Duration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; end of prophylaxis was defined as a gap ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator for possible CINC), or hospitalization for CINC. Duration of prophylaxis was characterized as 1–3, 4–6, or ≥7 days. Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis [14,22,23]. Receipt of daily filgrastim was identified based on medical claims (J1440, J1441) with relevant codes from the HCPCS system. Duration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; end of prophylaxis was defined as a gap ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator for possible CINC), or hospitalization for CINC. Duration of prophylaxis was characterized as 1–3, 4–6, or ≥7 days. Exclusion criteria Patient-cycles were excluded from the analytic file if the following occurred: (1) evidence of ≥2 primary cancers (solid or blood) within (i.e., +/-) 30 days of chemotherapy initiation; (2) any gaps in health benefits during the 6-month ("pretreatment") period prior to chemotherapy initiation; (3) evidence of hematopoietic stem cell or bone marrow transplantation prior to or during receipt of chemotherapy; (4) evidence of chemotherapy based only on medical claims for administration of the drugs (HCPCS codes identifying the specific chemotherapy agents were not available, and thus the regimen and level of myelosuppression could not be determined); (5) pharmacy claims for myelotoxic chemotherapy (only drug dispense dates are available on pharmacy claims, and because pharmacy/medical claims cannot be definitively linked, precise dates of chemotherapy administration–needed to characterize the course and cycles–could not be ascertained); (6) pharmacy claims for daily filgrastim (precise dates of administration could not be ascertained, for reasons stated above); (7) evidence of receipt of sargramostim (J2820) or pegfilgrastim (C9119, S0135, J2505) during the first 5 days of the cycle (i.e., as prophylaxis). 
Patient-cycles were excluded from the analytic file if the following occurred: (1) evidence of ≥2 primary cancers (solid or blood) within (i.e., +/-) 30 days of chemotherapy initiation; (2) any gaps in health benefits during the 6-month ("pretreatment") period prior to chemotherapy initiation; (3) evidence of hematopoietic stem cell or bone marrow transplantation prior to or during receipt of chemotherapy; (4) evidence of chemotherapy based only on medical claims for administration of the drugs (HCPCS codes identifying the specific chemotherapy agents were not available, and thus the regimen and level of myelosuppression could not be determined); (5) pharmacy claims for myelotoxic chemotherapy (only drug dispense dates are available on pharmacy claims, and because pharmacy/medical claims cannot be definitively linked, precise dates of chemotherapy administration–needed to characterize the course and cycles–could not be ascertained); (6) pharmacy claims for daily filgrastim (precise dates of administration could not be ascertained, for reasons stated above); (7) evidence of receipt of sargramostim (J2820) or pegfilgrastim (C9119, S0135, J2505) during the first 5 days of the cycle (i.e., as prophylaxis). Neutropenic complications requiring inpatient care CINC was identified based on inpatient admissions with a diagnosis (principal or secondary) of neutropenia (ICD-9-CM 288.0), fever (780.6), or infection (list available upon request). Admissions were identified on a cycle-specific basis using acute-care facility inpatient claims with admission dates anytime between day 6 and the last day of the chemotherapy cycle. Episodes of CINC treated exclusively on an outpatient basis were not considered. Because CINC that occurred on the day of the last dose of prophylaxis or the following day could have resulted in an artificial truncation of a planned longer course of prophylaxis–and thus could exaggerate the risk of CINC with shorter courses–analyses were conducted alternatively with and without inclusion of these patient cycles. Consequences of CINC requiring inpatient care were characterized in terms of in-hospital mortality, total hospital LOS (during the cycle), and total CINC-related healthcare expenditures (i.e., from hospital discharge to end of cycle). Total CINC-related healthcare expenditures included those for the initial hospitalization as well as care provided post-discharge on an outpatient basis; outpatient expenditures comprised encounters with a diagnosis of neutropenia, fever, or infection, as well as use/prescriptions for CSF agents (i.e., as treatment) and antimicrobial therapy. Expenditures were estimated based on total paid amounts on corresponding claims. Mortality was characterized using data only from the MarketScan Database as such information was not available in the LifeLink Database. CINC was identified based on inpatient admissions with a diagnosis (principal or secondary) of neutropenia (ICD-9-CM 288.0), fever (780.6), or infection (list available upon request). Admissions were identified on a cycle-specific basis using acute-care facility inpatient claims with admission dates anytime between day 6 and the last day of the chemotherapy cycle. Episodes of CINC treated exclusively on an outpatient basis were not considered. 
Because CINC that occurred on the day of the last dose of prophylaxis or the following day could have resulted in an artificial truncation of a planned longer course of prophylaxis–and thus could exaggerate the risk of CINC with shorter courses–analyses were conducted alternatively with and without inclusion of these patient cycles. Consequences of CINC requiring inpatient care were characterized in terms of in-hospital mortality, total hospital LOS (during the cycle), and total CINC-related healthcare expenditures (i.e., from hospital discharge to end of cycle). Total CINC-related healthcare expenditures included those for the initial hospitalization as well as care provided post-discharge on an outpatient basis; outpatient expenditures comprised encounters with a diagnosis of neutropenia, fever, or infection, as well as use/prescriptions for CSF agents (i.e., as treatment) and antimicrobial therapy. Expenditures were estimated based on total paid amounts on corresponding claims. Mortality was characterized using data only from the MarketScan Database as such information was not available in the LifeLink Database. Patient characteristics Patient characteristics included: age; gender; presence of selected chronic comorbidities (cardiovascular disease, diabetes, liver disease, renal disease); history of blood disorders (anemia, neutropenia, other), infection, hospitalization (all-cause and CINC-related, respectively), chemotherapy, and radiation therapy; pre-chemotherapy healthcare expenditures; type of cancer, presence of metastases; cycle number and minimum length of prior cycles; chemotherapy regimen; receipt of antimicrobial prophylaxis in the cycle of interest; and year of chemotherapy initiation. Only observed data were utilized in defining patient characteristics. Age was assessed as of the first day of the first cycle of chemotherapy in the course. Chronic comorbidities and history of blood disorders, infections, hospitalization, chemotherapy, radiation therapy, presence of metastases, and healthcare expenditures were assessed from the beginning of the 6-month pretreatment period through the first day of the corresponding cycle of chemotherapy. Selected variables (i.e., anemia, neutropenia, other blood disorders, infections) were alternatively evaluated during the chemotherapy course (up to the beginning of the cycle of interest). Metastases (bone vs. other site) and chronic comorbidities were identified on the basis of ≥1 diagnosis codes on inpatient claims, ≥2 diagnosis codes on outpatient claims (excluding those for laboratory services) on different days, ≥1 procedure codes, and ≥1 drug codes, as appropriate. Blood disorders and infections were identified on the basis of ≥1 diagnosis codes (on inpatient and/or outpatient claims) and ≥1 drug codes, as appropriate. Prophylactic use of antimicrobial agents was ascertained based on a medical claim for administration of drug from cycle day 1 to cycle day 5, or a pharmacy claim for a filled prescription from cycle day -3 to cycle day 5, with a corresponding drug code. 
Patient characteristics included: age; gender; presence of selected chronic comorbidities (cardiovascular disease, diabetes, liver disease, renal disease); history of blood disorders (anemia, neutropenia, other), infection, hospitalization (all-cause and CINC-related, respectively), chemotherapy, and radiation therapy; pre-chemotherapy healthcare expenditures; type of cancer, presence of metastases; cycle number and minimum length of prior cycles; chemotherapy regimen; receipt of antimicrobial prophylaxis in the cycle of interest; and year of chemotherapy initiation. Only observed data were utilized in defining patient characteristics. Age was assessed as of the first day of the first cycle of chemotherapy in the course. Chronic comorbidities and history of blood disorders, infections, hospitalization, chemotherapy, radiation therapy, presence of metastases, and healthcare expenditures were assessed from the beginning of the 6-month pretreatment period through the first day of the corresponding cycle of chemotherapy. Selected variables (i.e., anemia, neutropenia, other blood disorders, infections) were alternatively evaluated during the chemotherapy course (up to the beginning of the cycle of interest). Metastases (bone vs. other site) and chronic comorbidities were identified on the basis of ≥1 diagnosis codes on inpatient claims, ≥2 diagnosis codes on outpatient claims (excluding those for laboratory services) on different days, ≥1 procedure codes, and ≥1 drug codes, as appropriate. Blood disorders and infections were identified on the basis of ≥1 diagnosis codes (on inpatient and/or outpatient claims) and ≥1 drug codes, as appropriate. Prophylactic use of antimicrobial agents was ascertained based on a medical claim for administration of drug from cycle day 1 to cycle day 5, or a pharmacy claim for a filled prescription from cycle day -3 to cycle day 5, with a corresponding drug code. Statistical analyses Incidence of CINC was evaluated for each patient-cycle in which daily filgrastim was administered prophylactically, and was estimated by same-cycle duration of prophylaxis in an unadjusted and adjusted context. For the latter, a generalized estimating equation (GEE) with a binomial distribution, logistic link function, and exchangeable correlation structure was employed. The GEE method accounts for correlation among repeated measures for the same subject (in this instance, across cycles), while controlling for both fixed characteristics (e.g., gender) and time-dependent covariates (e.g., first versus subsequent cycles). All observed patient characteristics were entered into, and retained in, the multivariate model. Subgroup and sensitivity analyses focusing on the first cycle only, employing a narrow definition of CINC (which was identified using the diagnosis code for neutropenia only but otherwise the same algorithm for the broad definition as described above), and focusing on alternative tumor types separately (i.e., breast cancer, lung cancer, colorectal cancer, and NHL) also were conducted. In-hospital mortality, hospital LOS, and CINC-related healthcare expenditures among patients who developed CINC were descriptively evaluated by duration of daily filgrastim prophylaxis. Confidence intervals (95%) for in-hospital mortality were computed using the Wilson score interval; confidence intervals for LOS and economic costs were computed using nonparametric bootstrapping (percentile method) from the study population (1,000 replicates with replacement). 
Data source: Patient-level information from two large healthcare claims databases–the Thomson Reuters MarketScan Commercial Claims and Encounters and Medicare Supplemental and Coordination of Benefits Database (MarketScan Database, 2001–2010) and the Intercontinental Marketing Services LifeLink Database (LifeLink Database, 2001–2008)–were pooled for analyses. Both databases comprise medical (i.e., facility and professional service) and outpatient pharmacy claims from a large number of participating private health plans, and each contains claims data for 15 million persons annually. Data available for each facility and professional-service claim include date and place of service, diagnoses, procedures performed/services rendered, and quantity of services (professional-service claims). Data available for each retail pharmacy claim include the drug dispensed, dispensing date, quantity dispensed, and number of days supplied. All claims also include paid (i.e., reimbursed) amounts. Selected demographic and eligibility information is available for persons in both databases.
All data can be arrayed to provide a detailed chronology of all medical and pharmacy services used by each plan member over time. The study databases were de-identified prior to their release to study investigators, as set forth in the corresponding Data Use Agreements. The study databases have been evaluated and certified by independent third parties to be in compliance with the Health Insurance Portability and Accountability Act (HIPAA) of 1996 statistical de-identification standards and to satisfy the conditions set forth in Sections 164.514 (a)-(b) 1ii of the HIPAA Privacy Rule regarding the determination and documentation of statistically de-identified data. Use of the study databases for health services research was determined–via independent third parties–to be fully compliant with the HIPAA Privacy Rule and federal guidance on Public Welfare and the Protection of Human Subjects [29]. Study population: The study population comprised all patients who initiated ≥1 course of myelosuppressive chemotherapy for a solid tumor or non-Hodgkin’s lymphoma (NHL) and who received daily filgrastim prophylaxis during ≥1 cycle. All cycles in which patients received daily filgrastim prophylaxis–irrespective of duration–were pooled for analyses. Cancer chemotherapy patients: All patients, aged ≥18 years, who began ≥1 new course of myelosuppressive chemotherapy were identified; the enrollment window was July 1, 2001 through June 30, 2010 for patients in the MarketScan Database and July 1, 2001 through June 30, 2008 for patients in the LifeLink Database. Receipt of chemotherapy was ascertained based on the presence of ≥1 paid medical claim for a chemotherapy drug or administration thereof (identified using Healthcare Common Procedure Coding System [HCPCS], International Classification of Disease, Ninth Revision, Clinical Modification [ICD-9-CM], and Health Care Financing Administration Uniform Bill-92 [UB-92] revenue codes). Patients were considered to have initiated a new course of chemotherapy if there was a chemotherapy claim during the study period that was preceded by a period ≥60 days without any other claims for chemotherapy. Only patients who had evidence of a primary solid tumor or NHL (based on ≥2 medical claims [≥7 days apart] with a qualifying 3-digit ICD-9-CM diagnosis code during the period beginning 30 days prior to the index date and ending 30 days thereafter) were selected.
Chemotherapy courses, cycles, and regimens: For each cancer chemotherapy patient, each unique cycle within each course of chemotherapy was identified. The first chemotherapy cycle (of the first course) was defined as beginning with the date of initiation of chemotherapy and ending with the first service date for the next administration of chemotherapy (as evidenced by a medical claim with a corresponding HCPCS, ICD-9-CM, or UB-92 code) occurring at least 12 days–but no more than 59 days–after the date of initiation of chemotherapy. If a second chemotherapy cycle did not commence prior to day 60, or if there was evidence of receipt of radiation therapy (based on medical claims with relevant HCPCS, ICD-9-CM, or UB-92 codes) during this period, both the first cycle of chemotherapy and the course of chemotherapy were considered to have been completed 30 days following the beginning of the cycle or on the day prior to initiation of radiation therapy, whichever occurred first. The second and all subsequent cycles of chemotherapy, as well as subsequent courses of chemotherapy–if any–during the period of interest, were similarly defined. A maximum of 8 cycles per course were considered. For patients with multiple courses of chemotherapy, all qualifying courses were considered. Chemotherapy regimens were ascertained based on a review of all HCPCS Level II codes for parenterally administered antineoplastic agents on medical claims with service dates within 6 days of the start of each cycle of chemotherapy. Regimens were categorized on a cycle-specific basis according to the number of agents administered that are considered to be myelotoxic (list available from authors upon request).
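These date-based rules can be expressed in a small amount of code. The sketch below is a simplified reconstruction under stated assumptions (one patient's non-empty, sorted chemotherapy administration dates; the radiation-therapy stopping rule and the 30-day cap on the final cycle are omitted) and is not the study's actual algorithm.

```python
from datetime import date

def split_into_courses(chemo_dates: list[date]) -> list[list[date]]:
    # A new course begins whenever a chemotherapy claim is preceded by a period
    # of at least 60 days without any other chemotherapy claims.
    courses, current = [], [chemo_dates[0]]
    for prev, curr in zip(chemo_dates, chemo_dates[1:]):
        if (curr - prev).days >= 60:
            courses.append(current)
            current = []
        current.append(curr)
    courses.append(current)
    return courses

def cycle_starts(course: list[date], max_cycles: int = 8) -> list[date]:
    # Within a course, the next cycle begins with the first chemotherapy
    # administration occurring 12-59 days after the start of the current cycle;
    # administrations fewer than 12 days later belong to the same cycle.
    starts = [course[0]]
    for d in course[1:]:
        if len(starts) == max_cycles:
            break
        if 12 <= (d - starts[-1]).days <= 59:
            starts.append(d)
    return starts
```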
Daily filgrastim prophylaxis: Cancer chemotherapy patients who received daily filgrastim prophylaxis during ≥1 cycle of chemotherapy were selected for inclusion in the study population. Administration of daily filgrastim on or before day 5 of a given cycle was considered to represent prophylaxis [14,22,23]. Receipt of daily filgrastim was identified based on medical claims (J1440, J1441) with relevant codes from the HCPCS system. Duration of daily filgrastim prophylaxis during the patient-cycle was characterized based on temporal patterns of administration; end of prophylaxis was defined as a gap ≥3 days in its use, outpatient administration of IV antimicrobial therapy (an indicator for possible CINC), or hospitalization for CINC. Duration of prophylaxis was characterized as 1–3, 4–6, or ≥7 days.
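As a minimal sketch of the duration rule just described–assuming a list of administration dates for one patient-cycle, and ignoring the IV-antimicrobial and hospitalization stopping rules, which would require additional claim lookups–counting days of prophylaxis and assigning the exposure category might look like this; it is illustrative rather than the authors' implementation.

```python
from datetime import date

def prophylaxis_days(admin_dates: list[date]) -> int:
    # Count consecutive daily filgrastim administrations, stopping at the first
    # gap of >=3 days between doses (interpreted here as a calendar-day difference).
    days = sorted(set(admin_dates))
    if not days:
        return 0
    count = 1
    for prev, curr in zip(days, days[1:]):
        if (curr - prev).days >= 3:
            break
        count += 1
    return count

def duration_category(n_days: int) -> str:
    if n_days <= 3:
        return "1-3"
    if n_days <= 6:
        return "4-6"
    return ">=7"
```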
Exclusion criteria: Patient-cycles were excluded from the analytic file if the following occurred: (1) evidence of ≥2 primary cancers (solid or blood) within (i.e., +/-) 30 days of chemotherapy initiation; (2) any gaps in health benefits during the 6-month ("pretreatment") period prior to chemotherapy initiation; (3) evidence of hematopoietic stem cell or bone marrow transplantation prior to or during receipt of chemotherapy; (4) evidence of chemotherapy based only on medical claims for administration of the drugs (HCPCS codes identifying the specific chemotherapy agents were not available, and thus the regimen and level of myelosuppression could not be determined); (5) pharmacy claims for myelotoxic chemotherapy (only drug dispense dates are available on pharmacy claims, and because pharmacy/medical claims cannot be definitively linked, precise dates of chemotherapy administration–needed to characterize the course and cycles–could not be ascertained); (6) pharmacy claims for daily filgrastim (precise dates of administration could not be ascertained, for reasons stated above); (7) evidence of receipt of sargramostim (J2820) or pegfilgrastim (C9119, S0135, J2505) during the first 5 days of the cycle (i.e., as prophylaxis). Neutropenic complications requiring inpatient care: CINC was identified based on inpatient admissions with a diagnosis (principal or secondary) of neutropenia (ICD-9-CM 288.0), fever (780.6), or infection (list available upon request). Admissions were identified on a cycle-specific basis using acute-care facility inpatient claims with admission dates anytime between day 6 and the last day of the chemotherapy cycle. Episodes of CINC treated exclusively on an outpatient basis were not considered. Results: A total of 135,921 adult patients initiated a new course of chemotherapy for a solid tumor or NHL during the period of interest and met all other eligibility criteria. Among these patients, a total of 5,477 received daily filgrastim as prophylaxis during ≥1 cycle of chemotherapy and thus were included in the study population; these patients contributed a total of 14,288 daily filgrastim prophylaxis patient-cycles to the analytic file. Filgrastim was administered for 1–3 days in 58% of cycles (n = 8,371), 4–6 days in 26% of cycles (n = 3,691), and ≥7 days in 16% of cycles (n = 2,226) (Figure 1). Figure 1: Days of filgrastim prophylaxis. Mean (SD) age of patients was 54 (13) years for 1–3 days of filgrastim prophylaxis, 57 (14) years for 4–6 days of prophylaxis, and 56 (13) years for ≥7 days of prophylaxis (Table 1). Breast cancer was the most common tumor type (48% for 1–3 days, 50% for 4–6 days, 49% for ≥7 days), followed by NHL (10%, 13%, and 13%, respectively), and colorectal cancer (16%, 10%, and 13%, respectively). Metastatic disease was present in 38% of patients receiving 1–3 days of prophylaxis, 32% receiving 4–6 days, and 35% receiving ≥7 days. Antimicrobial agents–principally oral (~90%)–were concurrently used as prophylaxis in 6.7% of patients receiving 1–3 days of filgrastim prophylaxis, 7.5% receiving 4–6 days, and 7.5% receiving ≥7 days. Table 1: Patient, cancer, and treatment characteristics, by duration of filgrastim prophylaxis. Crude risk of CINC during a cycle of chemotherapy was 2.9% with 1–3 days of filgrastim prophylaxis, 2.7% with 4–6 days, and 1.8% with ≥7 days. In adjusted analyses, the odds of CINC were 2.4 (95% CI: 1.6-3.4) and 1.9 (1.3-2.8) times higher with 1–3 and 4–6 days of filgrastim prophylaxis, respectively, versus ≥7 days (referent group) (Figure 2). Among the subgroup of patients who developed CINC requiring inpatient care (n = 382), mean hospital LOS was 7.4 (6.4-8.3) days with 1–3 days of prophylaxis (n = 243), 7.1 (5.7-8.5) days with 4–6 days of prophylaxis (n = 99), and 6.5 (4.9-8.0) days with ≥7 days of prophylaxis (n = 40) (Table 2).
Among the subgroup of patients who developed CINC and for whom healthcare expenditures were available (n = 358), mean total CINC-related healthcare expenditures were $18,912 (14,570-23,581) with 1–3 days of prophylaxis (n = 225), $14,907 (11,155-19,728) with 4–6 days (n = 94), and $13,165 (9,595-17,144) with ≥7 days (n = 39). Among the subgroup of patients who developed CINC and for whom discharge status was available (n = 228), in-hospital mortality was 8.4% (4.6-14.8) with 1–3 days of prophylaxis (n = 119), 4.0% (1.4-11.1) with 4–6 days (n = 75), and 0% (0–10.2) with ≥7 days (n = 34). Figure 2: Adjusted odds ratios for chemotherapy-induced neutropenic complications requiring inpatient care, by duration of filgrastim prophylaxis*. *Results adjusted for patient, cancer, and chemotherapy characteristics. Table 2: Unadjusted risk of inpatient mortality, length of stay in hospital, and healthcare expenditures for chemotherapy-induced neutropenic complications requiring inpatient care, by duration of filgrastim prophylaxis. *n’s indicate the number of patient-cycles considered in analyses of study measures; among patients who developed CINC (n = 382), those with missing data on study measures–because such data were not recorded on claims or were not provided by some health plans–were excluded from corresponding analyses. **Only patients with paid amounts > $0 were considered in these analyses. In subgroup analyses focusing on the first cycle only, crude risks of CINC were 7.0% with 1–3 days of filgrastim prophylaxis (n = 1054), 5.0% with 4–6 days (n = 482), and 5.2% with ≥7 days (n = 363); in adjusted analyses, odds of CINC were 1.6 (0.9-2.8) and 1.1 (0.6-2.1) times higher with 1–3 and 4–6 days (versus ≥7 days) of filgrastim prophylaxis in cycle 1 (Table 3). In analyses employing the narrow definition for CINC, crude risks of CINC were 1.6% with 1–3 days of filgrastim prophylaxis, 1.8% with 4–6 days, and 1.3% with ≥7 days; in adjusted analyses, odds of CINC were 1.6 (1.1-2.5) and 1.7 (1.1-2.7) times higher with 1–3 and 4–6 days of filgrastim prophylaxis, respectively, versus ≥7 days. In tumor-specific analyses, adjusted odds of CINC with 1–3 and 4–6 days of filgrastim prophylaxis (vs. ≥7 days) were: 1.8 (1.0-3.0) and 1.9 (1.1-3.4) for breast cancer; 20.2 (1.9-212.4) and 14.5 (1.3-158.7) for lung cancer; 1.9 (0.2-16.7) and 2.5 (0.3-23.5) for colorectal cancer; and 2.1 (1.2-3.6) and 1.8 (1.0-3.2) for NHL. We note that not all observed differences were statistically significant in unadjusted subgroup and secondary analyses, presumably due to the lack of adjustment for systematic differences in patient characteristics between prophylaxis subgroups. Table 3: Adjusted odds ratios for chemotherapy-induced neutropenic complications requiring inpatient care in subgroup and secondary analyses. *Results adjusted for patient, cancer, and treatment characteristics listed in Table 2. Discussion: In the largest retrospective study of daily filgrastim use in US clinical practice, we found that the large majority of patients undergoing cancer chemotherapy who are administered prophylaxis with daily filgrastim receive considerably fewer days of administration than subjects in clinical trials of daily filgrastim prophylaxis. In our study population, 95% of patients received fewer than 10 days of filgrastim prophylaxis and 58% received only 1–3 days, versus the typical 10–11 days required for neutrophil recovery in the pivotal clinical trials [17-19].
This finding is largely consistent with observations from other retrospective clinical practice studies [14-16,22,23]. We also found that patients who receive shorter courses of daily filgrastim prophylaxis have a substantially higher risk of CINC requiring hospitalization. Among patients in our study population who received 1–3 or 4–6 days of prophylaxis, the adjusted odds of CINC were 2.4 and 1.9 times higher, respectively, than among those receiving ≥7 days. In addition, our study results suggest that, among the subgroup of patients who develop CINC despite prophylaxis, the consequences of this condition may be worse among those who receive fewer administrations of daily filgrastim prophylaxis. This last set of results should be interpreted with caution, however, as the number of patients who developed CINC despite prophylaxis was, in absolute terms, small. Over the past decade, the frequency of use of alternative (often, more myelosuppressive) chemotherapy regimens in US clinical practice has changed considerably, in large part due to better chemotherapy agents, combinations of agents, and dosing schedules [25-27]. Because these regimens are typically associated with higher levels of myelosuppression, use of supportive care–including CSF prophylaxis–also has increased [27]. While pegfilgrastim–a longer-acting version of filgrastim that requires only a single dose administered subcutaneously once per chemotherapy cycle–is now (by far) the most commonly used CSF prophylactic agent in US clinical practice, filgrastim accounts for a small but important segment of the prophylactic market [14,16]. In the present study, 60,600 (45%) of the 135,921 adult cancer chemotherapy study subjects received CSF prophylaxis in ≥1 cycle during their chemotherapy course. Among the subgroup who received CSF prophylaxis, 91% received pegfilgrastim in ≥1 cycle and 9% received filgrastim. Notwithstanding these temporal changes in chemotherapy regimens and supportive care, however, our findings regarding the frequent use of shorter courses of filgrastim prophylaxis and the associated consequences–based on a study population of over 5,000 patients and nearly 15,000 patient-cycles–are largely consistent with those reported previously. In the 2006 study by Weycker and colleagues–which included 598 breast cancer, lung cancer, and NHL patients, and employed data from 1998-2002–for example, mean duration of prophylaxis ranged from 4.3 to 6.5 days across tumor types, while in the Morrison study–which included 1,451 cancer patients and employed data from 2001-2003–mean duration of prophylaxis ranged from 3.7 to 6.0 days across calendar years and cycles of use [15,23]. In the present study, mean (SD) duration of prophylaxis on an overall basis was 3.6 (2.9) days. Moreover, in the aforementioned 2011 study by Weycker et al., odds of CINC were reported to be 1.5 times higher for patients receiving <7 versus ≥7 days of filgrastim prophylaxis, while the corresponding odds ratio in this study was estimated to be 2.2 (95% CI 1.6-3.1) [14]. We note that odds ratios for CINC reported above were largely comparable when limiting attention to the most recent four-year period (2007–2010): 2.5 (1.5-4.3) with 1–3 versus ≥7 days of filgrastim prophylaxis and 2.2 (1.2-3.9) with 4–6 versus ≥7 days of filgrastim prophylaxis.
Study results also were robust to alternative specifications of the multivariate model and when excluding observations for which hospitalizations occurred on the day of the last dose of daily filgrastim prophylaxis or the following day. We expected that systematic differences in the prevalence of risk factors (e.g., history of neutropenia, higher doses of chemotherapy agents, metastases to bone, poor performance status) for CINC would occur according to the duration of prophylaxis (1–3, 4–6, vs. ≥7 days). We thus used techniques of multivariate regression to adjust for such risk factors–to the extent possible–in analyzing the relationship between duration of daily filgrastim prophylaxis and outcomes of interest. Given the limitations of the claims-based databases we used, however, we were forced to use proxies for certain established risk factors. For example, a proxy measure based on pre-chemotherapy healthcare expenditures was employed for performance status, which in prior research has been shown to be correlated with health status in other patient populations [30]. Moreover, because information was not available for some clinically important parameters (e.g., ANC), the possibility exists that the study groups differed in terms of unobserved characteristics that predispose to CINC. We believe that our approach to controlling for such systematic differences was comprehensive given available data, and that the above-noted biases–if present–would confer a conservative bias to analyses. For example, if the risk of FN is higher among patients receiving higher doses of chemotherapy (which is unobservable in the study database), and these patients are more likely to receive longer durations of prophylaxis, then the estimated difference in risk between patients receiving longer versus shorter courses of prophylaxis will be smaller than the true or actual difference. That our adjustment procedures were at least to some extent successful is suggested by the greater odds ratios for CINC in the adjusted versus unadjusted analyses, but uncertainty as to the adequacy of adjustment for confounding risk factors is one of the major limitations of our study. In those instances where code-based operational algorithms were used to identify risk factors of interest (e.g., using ICD-9-CM code 288.0 for neutropenia, rather than ANC) errors of omission/commission in medical coding may have impacted the accuracy of adjustment. We do not believe, however, that any such differences or limitations were for the most part systematic in nature. However, patients who have a history of CINC – and thus may be more likely to receive longer durations of prophylaxis – may be more likely to have "neutropenia" designated as a secondary (or even primary) diagnosis on future encounters, all else equal. There is no ICD-9-CM diagnosis code for CINC (i.e., neutropenia-related fever or infection), and thus codes for neutropenia, fever, and infection were employed to identify hospitalizations assumed to be related to neutropenic complications. Patients are typically not given chemotherapy when they are neutropenic or have active infection. The timing of fever and infection after chemotherapy increases the likelihood that such outcomes are related to receipt of chemotherapy. Codes for neutropenia, fever (surrogate for active infection), and infection thus are all likely to be related to episodes if seen within a defined exposure period after receiving chemotherapy. 
While the sensitivity of this algorithm for identifying neutropenic complications is undoubtedly higher than that of an algorithm using only the ICD-9-CM code for neutropenia, its specificity and positive predictive value are unknown. Other miscellaneous limitations deserve a brief mention. CINC requiring outpatient care were not considered in our analyses due to the small total number of events (n = 60). Patients with evidence of receipt of daily filgrastim via outpatient pharmacy (3% of total) were excluded because we could not ascertain the precise dates of use. Expenditures were not adjusted to current dollars since use of a general price index may yield spurious findings when applied to specific patients in specific health plans who consumed specific healthcare services and since the distribution of study groups by calendar year was comparable. Hospital discharge disposition was available only in the MarketScan Database and information concerning LOS or paid amounts was missing for some hospitalizations. Moreover, we note that a disproportionately high percentage of patients receiving 1–3 days of filgrastim prophylaxis (vs. those receiving 4–6 or ≥7 days) who developed CINC requiring inpatient care did not have information available on discharge disposition, and thus the inpatient mortality findings must be viewed with caution. Although patients initiating "delayed" use of daily filgrastim (e.g., after day 5 of the chemotherapy cycle) have been included in other published evaluations, they were not considered in our analyses as such patients may have received daily filgrastim for the treatment of CINC (e.g., severe neutropenia) as opposed to prophylaxis against CINC [15,16]. Finally, we note that it cannot be determined from outpatient pharmacy claims data whether drugs dispensed were actually taken, when they were taken, or how much was taken. For this reason, our characterization of antimicrobial prophylaxis use (principally oral) may be upwardly biased and differences in actual use across prophylaxis subgroups may confound the results of analyses. We also note, however, that the percentage of patients with filled prescriptions for antimicrobials was relatively low (and comparable) across filgrastim prophylaxis subgroups. Conclusion: In conclusion, we found that among patients receiving myelosuppressive chemotherapy, shorter courses of daily filgrastim prophylaxis are associated with increased risk of CINC. We also found that when CINC develops despite daily filgrastim prophylaxis, the outcomes thereof may be poorer in those receiving shorter courses of prophylaxis. Additional research is needed to explore these relationships among individual tumor types and chemotherapy regimens. Competing interests: Funding for this research was provided by Amgen Inc. to Policy Analysis Inc. (PAI). Derek Weycker, John Edelsberg, and Alex Kartashov are employed by Policy Analysis Inc. (PAI). Rich Barron and Jason Legg are employed by Amgen Inc. Andrew Glass is employed by the Center for Health Research, Kaiser Permanente Northwest, and received an honorarium for this research from Amgen Inc. Authors’ contributions: Authorship was designated based on the guidelines promulgated by the International Committee of Medical Journal Editors (2004). All persons who meet criteria for authorship are listed as authors on the title page. 
The contribution of each of these individuals to this study–by task–is as follows: conception and supervision (Barron, Weycker), development of design (Barron, Edelsberg, Glass, Legg, Weycker), conduct of analyses (Kartashov, Weycker), interpretation of results (all authors), preparation of manuscript (Edelsberg, Weycker), and review of manuscript (all authors). All authors have read and approved the final version of the manuscript. The study sponsor reviewed the study research plan and study manuscript; data management, processing, and analyses were conducted by Policy Analysis Inc. (PAI), and all final analytic decisions were made by study investigators. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/14/189/prepub
Background: To examine duration of daily filgrastim prophylaxis, and risk and consequences of chemotherapy-induced neutropenic complications (CINC) requiring inpatient care. Methods: Using a retrospective cohort design and US healthcare claims data (2001-2010), we identified all cancer patients who initiated ≥1 course of myelosuppressive chemotherapy and received daily filgrastim prophylactically in ≥1 cycle. Cycles with daily filgrastim prophylaxis were pooled for analyses. CINC was identified based on hospital admissions with a diagnosis of neutropenia, fever, or infection; consequences were characterized in terms of hospital mortality, hospital length of stay (LOS), and CINC-related healthcare expenditures. Results: Risk of CINC requiring inpatient care–adjusted for patient characteristics–was 2.4 (95% CI: 1.6-3.4) and 1.9 (1.3-2.8) times higher with 1-3 (N = 8371) and 4-6 (N = 3691) days of filgrastim prophylaxis, respectively, versus ≥7 days (N = 2226). Among subjects who developed CINC, consequences with 1-3 and 4-6 (vs. ≥7) days of filgrastim prophylaxis were: mortality (8.4% [n/N = 10/119] and 4.0% [3/75] vs. 0% [0/34]); LOS (means: 7.4 [N = 243] and 7.1 [N = 99] vs. 6.5 [N = 40]); and expenditures (means: $18,912 [N = 225] and $14,907 [N = 94] vs. $13,165 [N = 39]). Conclusions: In this retrospective evaluation, shorter courses of daily filgrastim prophylaxis were found to be associated with an increased risk of CINC as well as poorer outcomes among those developing this condition. Because of the limitations inherent in healthcare claims databases specifically and retrospective evaluations generally, additional research addressing these limitations is needed to confirm the findings of this study.
Background: Neutropenia is a common side effect of myelosuppressive chemotherapy that both increases the risk of infection and diminishes patients’ ability to fight infection. When neutropenic patients become febrile, the high likelihood of infection and serious consequences thereof usually results in hospitalization [1,2]. Febrile neutropenia (FN), as well as severe or prolonged neutropenia, also can interfere with the planned delivery of treatment and adversely affect important patient outcomes [2-11]. Clinical practice guidelines recommend primary prophylactic use of a colony-stimulating factor (CSF)–which has been shown to reduce the risk of FN in clinical trials–when the risk of FN is 20% or higher [2]. While the American Society of Clinical Oncology (ASCO) initially recommended that CSF prophylaxis be administered only when FN risk is 40% or higher, in 2006, ASCO lowered the threshold to 20% based on data highlighting the importance of FN-related hospitalization as an outcome and evidence demonstrating the value of CSF prophylaxis in reducing the risk of FN, the risk of FN-related hospitalization, and the associated use of IV anti-infective agents [2,12,13]. The CSF filgrastim, which is widely used in clinical practice as prophylaxis against FN, requires daily administration during each cycle until neutrophil recovery occurs (in clinical trials, given typically for 10–11 days [and up to 14 days] until absolute neutrophil count [ANC] ≥10 × 10⁹/L) [14-19]. In clinical practice, patients often receive shorter courses of daily filgrastim prophylaxis than those administered to subjects in the clinical trial setting [14-16,20-24]. While published studies suggest that shorter courses of daily filgrastim prophylaxis are associated with an increased risk of hospitalization for CINC, these studies focused on selected tumor types or employed data that now are over a decade old [23,24]. During the past decade, use of chemotherapy and supportive care in clinical practice–as well as recommended use of these agents in authoritative guidelines–has changed considerably [2,25-27]. Moreover, only one study has examined whether CSFs may favorably impact clinical outcomes and economic costs when CINC develop despite CSF prophylaxis, and it was published a decade ago and focused on elderly patients with a single type of cancer (non-Hodgkin’s lymphoma) [28]. We therefore undertook a new study to evaluate the relationship between the duration of daily filgrastim prophylaxis, and the risk and consequences of CINC requiring inpatient care using a large healthcare claims database. While such databases often lack detailed clinical information (e.g., on absolute neutrophil counts), they offer access to information on the health profile and healthcare utilization of tens of millions of covered lives. Conclusion: In conclusion, we found that among patients receiving myelosuppressive chemotherapy, shorter courses of daily filgrastim prophylaxis are associated with increased risk of CINC. We also found that when CINC develops despite daily filgrastim prophylaxis, the outcomes thereof may be poorer in those receiving shorter courses of prophylaxis. Additional research is needed to explore these relationships among individual tumor types and chemotherapy regimens.
14,071
385
[ 519, 330, 1840, 208, 300, 135, 236, 277, 339, 336, 71, 166, 16 ]
17
[ "chemotherapy", "days", "cycle", "prophylaxis", "claims", "patients", "filgrastim", "cinc", "administration", "daily" ]
[ "csf prophylaxis administered", "neutropenia higher doses", "chemotherapy increases risk", "chemotherapy codes neutropenia", "recommended csf prophylaxis" ]
[CONTENT] Filgrastim | Granulocyte colony-stimulating factor | Febrile neutropenia | Cost | Neoplasms [SUMMARY]
[CONTENT] Aged | Febrile Neutropenia | Female | Filgrastim | Granulocyte Colony-Stimulating Factor | Hospitalization | Humans | Insurance Claim Review | Male | Middle Aged | Neoplasms | Neutropenia | Post-Exposure Prophylaxis | Recombinant Proteins | Retrospective Studies | Risk Assessment | United States [SUMMARY]
[CONTENT] csf prophylaxis administered | neutropenia higher doses | chemotherapy increases risk | chemotherapy codes neutropenia | recommended csf prophylaxis [SUMMARY]
[CONTENT] chemotherapy | days | cycle | prophylaxis | claims | patients | filgrastim | cinc | administration | daily [SUMMARY]
[CONTENT] fn | clinical | risk | csf | risk fn | practice | clinical practice | prophylaxis | csf prophylaxis | neutrophil [SUMMARY]
[CONTENT] chemotherapy | cycle | claims | days | administration | day | course | patients | medical | prophylaxis [SUMMARY]
[CONTENT] days | prophylaxis | days filgrastim prophylaxis | days filgrastim | filgrastim | filgrastim prophylaxis | days prophylaxis | cinc | analyses | adjusted [SUMMARY]
[CONTENT] found | receiving | prophylaxis | shorter courses | shorter | courses | daily filgrastim prophylaxis | filgrastim prophylaxis | individual | conclusion found patients receiving [SUMMARY]
[CONTENT] chemotherapy | prophylaxis | days | cycle | filgrastim | cinc | patients | claims | daily | daily filgrastim [SUMMARY]
[CONTENT] daily | CINC [SUMMARY]
[CONTENT] US | 2001-2010 | daily | ≥1 ||| daily ||| CINC | CINC [SUMMARY]
[CONTENT] 2.4 | 95% | CI | 1.6-3.4 | 1.9 | 1.3-2.8 | 1-3 | 8371 | 4-6 | days | 2226 ||| CINC | 1-3 | days | 8.4% | n/N | 10/119 | 4.0% ||| 3/75 | 0% | 7.4 ||| 243 | 7.1 ||| 99 | 6.5 ||| 40 | 18,912 | 225 | 14,907 | 94 | 13,165 ||| 39 [SUMMARY]
[CONTENT] daily | CINC ||| [SUMMARY]
[CONTENT] daily | CINC ||| US | 2001-2010 | daily | ≥1 ||| daily ||| CINC | CINC ||| 2.4 | 95% | CI | 1.6-3.4 | 1.9 | 1.3-2.8 | 1-3 | 8371 | 4-6 | days | 2226 ||| CINC | 1-3 | days | 8.4% | n/N | 10/119 | 4.0% ||| 3/75 | 0% | 7.4 ||| 243 | 7.1 ||| 99 | 6.5 ||| 40 | 18,912 | 225 | 14,907 | 94 | 13,165 ||| 39 ||| daily | CINC ||| [SUMMARY]
Could saline irrigation clear all residual common bile duct stones after lithotripsy? A self-controlled prospective cohort study.
33584068
A previous study showed that irrigation with 100 mL saline reduced residual common bile duct (CBD) stones, which potentially cause recurrent stones after endoscopic retrograde cholangiopancreatography.
BACKGROUND
This prospective self-controlled study enrolled patients receiving mechanical lithotripsy for large (> 1.2 cm) CBD stones. After occlusion cholangiography confirmed CBD stone clearance, peroral cholangioscopy (POC) was performed to determine clearance scores based on the number of residual stones. The amounts of residual stones spotted via POC were graded on a 5-point scale (score 1, worst; score 5, best). Scores were documented after only stone removal (control) and after irrigation with 50 mL and 100 mL saline, respectively. The stone composition was analyzed using infrared spectroscopy.
METHODS
Between October 2018 and January 2020, 47 patients had CBD clearance scores of 2.4 ± 1.1 without saline irrigation, 3.5 ± 0.7 with 50 mL irrigation, and 4.6 ± 0.6 with 100 mL irrigation (P < 0.001). Multivariate analysis showed that CBD diameter > 15 mm [odds ratio (OR) = 0.08, 95% confidence interval (CI): 0.01-0.49; P = 0.007] and periampullary diverticula (PAD) (OR = 6.51, 95%CI: 1.08-39.21; P = 0.041) were independent risk factors for residual stones. Bilirubin pigment stones constituted the main residual stones found in patients with PAD (P = 0.004).
RESULTS
Irrigation with 100 mL of saline may not clear all residual CBD stones after lithotripsy, especially in patients with PAD and/or a dilated (> 15 mm) CBD. Pigment residual stones are soft and commonly found in patients with PAD. Additional saline irrigation may be required to remove retained stones.
CONCLUSION
[ "Cholangiopancreatography, Endoscopic Retrograde", "Common Bile Duct", "Gallstones", "Humans", "Lithotripsy", "Prospective Studies" ]
7852583
INTRODUCTION
Endoscopic retrograde cholangiopancreatography (ERCP) is an effective and relatively minimally invasive technique for common bile duct (CBD) stones[1-3]. The recurrence rate of CBD stones after ERCP is reported to range from 4% to 24%[4-6]. The incidence of residual stones after mechanical lithotripsy for intractable CBD stones is 24% to 40%[7-10]. A growing number of studies suggest that an important reason for the recurrence of bile duct stones is the presence of stone debris after lithotripsy[11-13]. During ERCP, occluded cholangiography (OC) is often performed after stone removal to determine whether the stone has been removed completely, but OC lacks accuracy. Even if no obvious stones are found on cholangiography, the presence of contrast can obscure small stone debris in the bile duct[8,14]. Complete bile duct clearance is necessary to decrease recurrent bile duct stones. Some studies[15-17] reported that irrigation of the bile duct with saline after stone extraction further improves the clearance of the bile duct and has the advantages of being a simple, low-cost procedure with rare complications. Ang et al[16] showed that a mean of 48 mL of saline solution could irrigate and flush out residual stones after the endoscopic removal of CBD stones. Ahn et al[17] found that irrigation with 100 mL of saline can flush residual stone fragments from the bile duct into the duodenum after stone extraction. Intraductal ultrasound (IDUS) has high sensitivity and accuracy in diagnosing bile duct stones/debris[11,16]; however, this modality yields only indirect images of the debris. The lack of direct evidence supporting the efficacy of saline irrigation after lithotripsy prompted us to use peroral cholangioscopy (POC) to examine the bile duct and detect any residual stones/debris. The findings without irrigation after stone extraction, confirmed by OC, were compared with the effectiveness of irrigation with 50 mL or 100 mL of saline. To evaluate whether irrigation with 100 mL of saline is more effective in achieving complete clearance of the bile duct after mechanical lithotripsy, we conducted this prospective self-controlled study.
MATERIALS AND METHODS
Study design and participants Between October 2018 and January 2020, 47 patients with CBD stones were enrolled in a prospective clinical trial conducted in the surgical endoscopy center of The First Hospital of Lanzhou University (Figure 1). All eligible patients or their legal representatives gave informed consent before the treatment. The inclusion criteria were patients with CBD stones undergoing ERCP, able to provide informed consent, with a stone size larger than 1.2 cm and requiring mechanical lithotripsy for stone removal. The exclusion criteria included pre-ERCP acute suppurative cholangitis, acute pancreatitis, gastrointestinal (GI) tract hemorrhage and/or perforation, previous history of ERCP, prior Billroth II gastrectomy and Roux-en-Y or cholangiojejunostomy, pregnancy or breastfeeding, coagulopathy with international normalized ratio > 1.5 and low platelet count (< 50 × 10⁹/L) or active use of anticoagulation drugs, severe liver disease including decompensated liver cirrhosis or liver failure, septic shock, biliary-duodenal fistula confirmed before ERCP cannulation, the presence of intrahepatic duct stones and malignancy, and patient unwillingness or inability to give informed consent. Flow chart of the self-controlled study. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; PTCD: Percutaneous transhepatic cholangiodrainage. To assess the number of residual stones, we created a CBD clearance score as follows: (1) A large number of stone fragments and biliary sludge; (2) A moderate number of stone fragments and biliary sludge; (3) A small number of stone fragments and biliary sludge; (4) Presence of biliary sludge without any stones; and (5) Completely cleared CBD without any biliary sludge. The scores were determined by two endoscopists independently (Figure 2). SpyGlass DS images and simulation graphs of the residual stone. A-E: SpyGlass DS images (A1-E1) and simulation graphs (A2-E2). Score 1: A large amount of stone fragments and biliary sludge. Score 2: A moderate amount of stone fragments and biliary sludge. Score 3: A small amount of stone fragments and biliary sludge. Score 4: Presence of biliary sludge without any stones. Score 5: Completely cleared common bile duct without any biliary sludge. The study was approved by the ethics committee of The First Hospital of Lanzhou University (No. LDYYMENG2018-1028) and conducted in accordance with the ethical principles of the Declaration of Helsinki. All authors had access to the study data and reviewed and approved the final manuscript.
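The five-point rubric above is, in effect, a small lookup from cholangioscopy findings to a score. For readers who want to encode it for downstream analysis, a minimal Python sketch is shown below; the enum and helper names are illustrative assumptions only and are not part of the study protocol, in which two blinded endoscopists assigned the score directly from SpyGlass DS images.

```python
from enum import IntEnum
from typing import Optional

class CBDClearanceScore(IntEnum):
    """Five-point CBD clearance rubric described in the study (1 = worst, 5 = best)."""
    LARGE_FRAGMENTS_AND_SLUDGE = 1     # large amount of stone fragments and biliary sludge
    MODERATE_FRAGMENTS_AND_SLUDGE = 2  # moderate amount of stone fragments and biliary sludge
    SMALL_FRAGMENTS_AND_SLUDGE = 3     # small amount of stone fragments and biliary sludge
    SLUDGE_ONLY = 4                    # biliary sludge present, but no residual stones
    COMPLETELY_CLEAR = 5               # no residual stones and no biliary sludge

def score_findings(has_stones: bool, has_sludge: bool,
                   stone_burden: Optional[str] = None) -> CBDClearanceScore:
    """Map cholangioscopy findings onto the rubric.

    stone_burden should be "large", "moderate", or "small" when stones are seen.
    This helper only encodes the published rubric for analysis purposes.
    """
    if not has_stones:
        return CBDClearanceScore.SLUDGE_ONLY if has_sludge else CBDClearanceScore.COMPLETELY_CLEAR
    burden_to_score = {
        "large": CBDClearanceScore.LARGE_FRAGMENTS_AND_SLUDGE,
        "moderate": CBDClearanceScore.MODERATE_FRAGMENTS_AND_SLUDGE,
        "small": CBDClearanceScore.SMALL_FRAGMENTS_AND_SLUDGE,
    }
    return burden_to_score[stone_burden]

# Example: sludge seen but no stones -> score 4
print(score_findings(has_stones=False, has_sludge=True))
```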
Procedures All endoscopic procedures were performed by two experienced endoscopists (each has performed > 1000 ERCPs). Routine prophylactic antibiotics were given in all cases. ERCP was performed with a standard duodenoscope (TJF-260V, Olympus, Tokyo, Japan). The patients received propofol anesthesia without tracheal intubation. During ERCP, cannulation was performed with a wire-guided sphincterotome. After successful cannulation, contrast was injected to determine the stone size (more or less than 12 mm) and the need for mechanical lithotripsy. All patients underwent a small sphincterotomy with an average length of 3-5 mm using the ENDO CUT mode (power setting 90-120 W; Erbe Elektromedizin, Tuebingen, Germany) followed by balloon sphincteroplasty using a controlled radial expansion (CRE) balloon (10-12 mm in diameter; Boston Scientific, Cork, Ireland). Lithotripsy was performed using an endoscopic lithotripter-compatible basket (Boston Scientific, Marlborough, MA, United States), and stones were removed using the basket and a balloon catheter. Samples of CBD stones were collected using an EndoRetrieval bag (Micro-Tech, Nanjing, China) and placed in a container for subsequent analysis. Occlusion cholangiography was performed by injecting contrast with a balloon catheter.
Fluoroscopic assessment using a C-arm X-ray (OEC 9900 Elite, Salt Lake City, Utah, United States) was applied to confirm complete removal of CBD stones, and the existence of stone residuals was determined by both the endoscopist and the radiologist. Stone residues confirmed on occlusion cholangiography were followed by repeated extraction attempts. If complete stone removal was confirmed, a CBD clearance score was generated by further applying SpyGlass DS (Boston Scientific Corporation, Marlborough, Massachusetts, United States) examination. If the CBD clearance score was less than 5, the bile duct was irrigated intermittently with 50 mL of normal saline using a basket. The basket was shaken during irrigation with slight suction applied to the duodenoscope to promote drainage. The bile duct was then examined again using the SpyGlass DS to detect any residual stones/sludge and document the clearance score. If the clearance score was still less than 5, repeat irrigation of the CBD was performed using another 50 mL of normal saline. The final CBD clearance score was obtained one more time using SpyGlass DS examination. CBD stones that were not irrigated out after two 50-mL saline irrigations continued to be irrigated with saline until they were fully cleared (Figure 3). Endoscopic nasobiliary drainage (ENBD) or biliary stenting was performed if necessary. The CBD clearance score was assessed by two blinded endoscopists who were masked to the treatment. Protocol of evaluation and irrigation procedures. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; POC: Peroral cholangioscopy. The composition of the removed CBD stones was analyzed using infrared ray (IR) spectroscopy. One milligram (mg) of stone sample was mixed with 150 mg of potassium bromide in a mortar and ground into powder. The stone mixture was pressed into a mold and then placed into an automatic IR spectrum analyzer for automated detection to determine the IR spectrogram of the CBD stones. The IR spectrogram showed characteristic absorption peaks at a wavenumber of 1460 cm⁻¹, indicating cholesterol-based stones, whereas peaks at 1680 cm⁻¹ indicated pigment-based stones.
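The infrared criterion described above (a characteristic absorption peak near 1460 cm⁻¹ for cholesterol-based stones and near 1680 cm⁻¹ for pigment-based stones) reduces to a simple rule on peak positions. The sketch below illustrates that rule; the ±20 cm⁻¹ tolerance and the function name are assumptions for illustration and were not reported by the study.

```python
def classify_stone_from_ir_peaks(peak_wavenumbers_cm1, tolerance=20.0):
    """Classify a CBD stone as cholesterol- or pigment-based from IR absorption peaks.

    peak_wavenumbers_cm1: iterable of detected absorption-peak positions in cm^-1.
    tolerance: how close (in cm^-1) a detected peak must be to the reference peak;
               the +/- 20 cm^-1 window is an assumed value for this sketch.
    """
    CHOLESTEROL_PEAK = 1460.0  # characteristic peak reported for cholesterol-based stones
    PIGMENT_PEAK = 1680.0      # characteristic peak reported for pigment-based stones

    has_cholesterol = any(abs(p - CHOLESTEROL_PEAK) <= tolerance for p in peak_wavenumbers_cm1)
    has_pigment = any(abs(p - PIGMENT_PEAK) <= tolerance for p in peak_wavenumbers_cm1)

    if has_pigment and not has_cholesterol:
        return "pigment-based"
    if has_cholesterol and not has_pigment:
        return "cholesterol-based"
    if has_cholesterol and has_pigment:
        return "mixed (both peaks present)"
    return "indeterminate"

# Example: a spectrum with a strong peak near 1682 cm^-1 would be called pigment-based.
print(classify_stone_from_ir_peaks([1055.0, 1682.0]))
```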
Definition for complications Most of the post-ERCP adverse events were detected within 24 h after the procedure. We routinely assessed these adverse events, including cholangitis, oozing/bleeding, pancreatitis, cholecystitis, and perforation, at 24 and 48 h after the ERCP procedure by symptoms, signs, laboratory tests, and imaging examinations if necessary[18,19]. Post-ERCP adverse events were defined as follows: (1) The diagnosis of acute cholangitis was based on Tokyo Guidelines 2018/2013 (TG18/TG13) diagnostic criteria[20]; (2) Oozing was defined as bleeding that was slight and stopped spontaneously; (3) Pancreatitis was described as new or worsening abdominal pain along with an increase in serum amylase levels (> 3 × higher than the normal upper limit, measured 24 h after the procedure); (4) Cholecystitis was defined as pain in the epigastrium or right upper quadrant (RUQ) accompanied by a positive Murphy sign, with abdominal ultrasonography showing a thickened gallbladder wall; and (5) Perforation was defined as sudden abdominal pain accompanied by retroperitoneal air and fluid. Mechanical lithotripsy was performed by the same assistant for all patients. Stones that were fragmented successfully with only one attempt were defined as soft stones.
Stones that required multiple fragmentation attempts or needed to be broken again were defined as hard stones. Statistical analysis The sample size calculation was based on the rate of residual stone clearance. According to a previous study[16] and our prior experience, we assumed a success rate of 84.5% for endoscopic extraction of CBD stones and an increase of up to 97% with saline irrigation. We estimated that 47 patients were needed in this analysis to obtain a power of 80% at the 5% level. Categorical variables are expressed as numbers or percentages (%). Continuous variables are presented as the mean ± SD or median and interquartile range, as appropriate. Continuous variables with a normal distribution were analyzed by paired t-test or signed-rank test. Categorical variables were analyzed using the chi-square test or Fisher’s exact test.
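The sample-size statement above (47 patients for 80% power at the 5% level, assuming clearance improves from 84.5% to 97%) can be sanity-checked with a standard two-proportion power calculation. The authors do not state the exact method they used, and the design is paired rather than two independent groups, so the sketch below is only a rough approximation of the stated assumptions, not a reproduction of the published calculation.

```python
# Rough two-proportion power calculation for the stated assumptions
# (84.5% clearance without irrigation vs 97% with irrigation, alpha = 0.05, power = 0.80).
# Because the study used a paired, self-controlled design, this independent-groups
# approximation will not reproduce the published n = 47 exactly.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_without, p_with = 0.845, 0.97
effect_size = proportion_effectsize(p_with, p_without)  # Cohen's h
n_required = NormalIndPower().solve_power(effect_size=effect_size, alpha=0.05,
                                          power=0.80, alternative="two-sided")
print(f"Cohen's h = {effect_size:.3f}, approx. n per arm = {n_required:.0f}")
```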
Logistic regression was used to predict risk factors for complications, and the results are presented as odds ratios (ORs) with 95% confidence intervals (CIs). Variables with a P value of < 0.2 in the univariate analysis were included in the multivariate analysis. Linear mixed-effects models were conducted to assess the effect of saline irrigation on the clearance score with a random intercept for each patient and an unstructured covariance structure. A two-sided P value of less than 0.05 was considered statistically significant. The analyses were conducted with statistical software (SAS version 9.4, SAS Institute, Inc., NC, United States).
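The analyses above were run in SAS 9.4. For readers who want to mirror the repeated-measures model in open-source tooling, a minimal Python analogue of the linear mixed-effects model (clearance score against irrigation volume, with a random intercept per patient) is sketched below. The column names and the toy records are assumptions for illustration, and a random-intercept model only approximates the unstructured covariance specification used in the study.

```python
# Minimal sketch: clearance score vs irrigation volume with a random intercept per patient.
# The records below are made-up values purely to show the model call.
import pandas as pd
import statsmodels.formula.api as smf

records = [
    {"patient_id": 1, "volume_ml": 0, "score": 2}, {"patient_id": 1, "volume_ml": 50, "score": 3},
    {"patient_id": 1, "volume_ml": 100, "score": 5},
    {"patient_id": 2, "volume_ml": 0, "score": 3}, {"patient_id": 2, "volume_ml": 50, "score": 4},
    {"patient_id": 2, "volume_ml": 100, "score": 5},
    {"patient_id": 3, "volume_ml": 0, "score": 1}, {"patient_id": 3, "volume_ml": 50, "score": 3},
    {"patient_id": 3, "volume_ml": 100, "score": 4},
    {"patient_id": 4, "volume_ml": 0, "score": 2}, {"patient_id": 4, "volume_ml": 50, "score": 4},
    {"patient_id": 4, "volume_ml": 100, "score": 5},
]
df = pd.DataFrame(records)

# Irrigation volume is treated as a categorical fixed effect (control / 50 mL / 100 mL);
# each patient gets their own baseline via a random intercept.
model = smf.mixedlm("score ~ C(volume_ml)", data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())
```

On real data, the multivariate logistic regression described above (entering variables with univariate P < 0.2) could be fitted in the same way with smf.logit, but that step is not shown here.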
null
null
CONCLUSION
We thank the clinical and research teams at the Department of General Surgery for providing ongoing support.
[ "INTRODUCTION", "Study design and participants", "Procedures", "Definition for complications", "Statistical analysis", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Endoscopic retrograde cholangiopancreatography (ERCP) is an effective and relatively minimally invasive technique for common bile duct (CBD) stones[1-3]. It has been reported that the recurrence rate of CBD stones after ERCP has increased from 4% to 24%[4-6]. The incidence of residual stones after mechanical lithotripsy for intractable CBD stones is 24% to 40%[7-10]. A growing number of studies suggest that an important reason for the recurrence of bile duct stones is the presence of stone debris after lithotripsy[11-13]. During ERCP, occluded cholangiography (OC) is often performed after stone removal to determine whether the stone is removed completely, but OC lacks accuracy. Even if no obvious stones are found on cholangiography, the presence of contrast can obscure small stone debris in the bile duct[8,14]. Complete bile duct clearance is necessary to decrease recurrent bile duct stones.\nSome studies[15-17] reported that irrigation of the bile duct with saline after stone extraction further improves the clearance of the bile duct and has the advantages of being a simple, low-cost procedure with rare complications. Ang et al[16] showed that a mean of 48 mL of saline solution could irrigate and flush out residual stones after the endoscopic removal of CBD stones. Ahn et al[17] found that irrigation with 100 mL of saline can flush out residual stone fragments from the bile duct into the duodenum after stone extraction. However, intraductal ultrasound (IDUS) has a high sensitivity and accuracy in diagnosing bile duct stones/debris[11,16]. This modality yields only indirect images of the debris.\nThe lack of direct evidence to support the efficacy of saline irrigation after lithotripsy prompted us to use peroral cholangioscopy (POC) to examine the bile duct and detect any residual stones/debris. The results of no irrigation after stone extraction were confirmed by OC and were compared to the effectiveness of irrigation with 50 mL or 100 mL saline. To evaluate whether irrigation with 100 mL of saline is more effective in achieving complete clearance of the bile duct after mechanical lithotripsy, we conducted this prospective self-controlled study.", "Between October 2018 and January 2020, 47 patients with CBD stones were enrolled in a prospective clinical trial conducted in the surgical endoscopy center of The First Hospital of Lanzhou University (Figure 1). All eligible patients or their legal representatives gave informed consent before the treatment. The inclusion criteria were patients with CBD stones undergoing ERCP, able to provide informed consent, with a stone size larger than 1.2 cm and requiring mechanical lithotripsy for stone removal. The exclusion criteria included pre-ERCP acute suppurative cholangitis, acute pancreatitis, gastrointestinal (GI) tract hemorrhage and/or perforation, previous history of ERCP, prior Bilroth II gastrectomy and Roux-en-Y or cholangiojejunostomy, pregnancy or breastfeeding, coagulopathy with international normalized ratio > 1.5 and low platelet count (< 50 × 109/L) or active use of anticoagulation drugs, severe liver disease including decompensated liver cirrhosis or liver failure, septic shock, biliary-duodenal fistula confirmed before ERCP cannulation, the presence of intrahepatic duct stones and malignancy, and patient unwillingness or inability to give informed consent.\n\nFlow chart of the self-controlled study. 
CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; PTCD: Percutaneous transhepatic cholangiodrainage.\nTo assess the number of residual stones, we created a CBD clearance score as follows: (1) A large number of stone fragments and biliary sludge; (2) A moderate number of stone fragments and biliary sludge; (3) A small number of stone fragments and biliary sludge; (4) Presence of biliary sludge without any stones; and (5) Completely cleared CBD without any biliary sludge. The scores were determined by two endoscopists independently (Figure 2).\n\nSpyGlass DS images and simulation graphs of the residual stone. A-E: SpyGlass DS images (A1-E1) and simulation graphs (A2-E2). Score 1: A large amount of stone fragments and biliary sludge. Score 2: A moderate amount of stone fragments and biliary sludge. Score 3: A small amount of stone fragments and biliary sludge. Score 4: Presence of biliary sludge without any stones. Score 5: Completely cleared common bile duct without any biliary sludge.\nThe study was approved by the ethics committee of The First Hospital of Lanzhou University (No. LDYYMENG2018-1028) and conducted in accordance with the ethical principles of the Declaration of Helsinki. All authors had access to the study data and reviewed and approved the final manuscript.", "All endoscopic procedures were performed by two experienced endoscopists (each has performed > 1000 ERCPs). Routine prophylactic antibiotics were given in all cases. ERCP was performed with a standard duodenoscope (TJF-260V, Olympus, Tokyo, Japan). The patients received propofol anesthesia without tracheal intubation.\nDuring ERCP, cannulation was performed with a wire-guided sphincterotome. After successful cannulation, contrast was injected to determine the stone size (more or less than 12 mm) and the need for mechanical lithotripsy. All patients underwent a small sphincterotomy with an average length of 3-5 mm using the ENDO CUT mode (power setting 90-120 W; Erbe Elektromedizin, Tuebingen, Germany) followed by balloon sphincteroplasty using a controlled radial expansion (CRE) balloon (10-12 mm in diameter; Boston Scientific, Cork, Ireland). Lithotripsy was performed using an endoscopic lithotripter-compatible basket (Boston Scientific, Marlborough, MA, United States), and stones were removed using the basket and a balloon catheter. Samples of CBD stones were collected using an EndoRetrieval bag (Micro-Tech, Nanjing, China) and placed in a container for subsequent analysis. Occlusion cholangiography was performed by injecting contrast with a balloon catheter. Fluoroscopic assessment using a C-arm X-ray (OEC 9900 Elite, Salt Lake City, Utah, United States) was applied to confirm complete removal of CBD stones and the existence of stone residuals was determined by both endoscopist and radiologist. Confirmed stone residues by applying occlusion cholangiography were then followed by repeated extraction attempts. If complete stone removal was confirmed, a CBD clearance score was generated by further applying SpyGlass DS (Boston Scientific Corporation, Marlborough, Massachusetts, United States) examination. If the CBD clearance score was less than 5, the bile duct was irrigated intermittently with 50 mL of normal saline using a basket. The basket was shaken during irrigation with slight suction applied to the duodenoscope to promote drainage. The bile duct was then examined again using the SpyGlass DS to detect any residual stones/sludge and document the clearance score. 
If the clearance score was still less than 5, repeat irrigation of the CBD was performed using another 50 mL of normal saline. The final CBD clearance score was obtained one more time using SpyGlass DS examination. CBD stones that were not irrigated out after double 50 mL-saline irrigations continued to be irrigated with saline until they were fully cleared (Figure 3). Endoscopic nasobiliary drainage (ENBD) or biliary stenting was performed if necessary. The CBD clearance score was assessed by two blinded endoscopists who were masked to the treatment.\n\nProtocol of evaluation and irrigation procedures. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; POC: Peroral cholangioscopy.\nThe composition of the removed CBD stones was analyzed using infrared ray (IR) spectroscopy. One milligram (mg) of stone samples was mixed with 150 mg of potassium bromide in a mortar and ground into powder. The stone mixture was pressed into a mold and then placed into an automatic IR spectrum analyzer for automated detection to determine the IR spectrogram of the CBD stones. The IR spectrogram showed characteristic absorption peaks at a wavelength of 1460 cm-1, indicating cholesterol-based stones, and those at a wavelength of 1680 cm-1 indicated pigment-based stones.", "Most of the post-ERCP adverse events were detected within 24 h after the procedure. We routinely assessed these adverse events, including cholangitis, oozing/bleeding, pancreatitis, cholecystitis, and perforation, at 24 and 48 h after the ERCP procedure by symptoms, signs, laboratory tests, and imaging examinations if necessary[18,19]. Post-ERCP adverse events were defined as follows: (1) The diagnosis of acute cholangitis was based on Tokyo Guidelines 2018/2013 (TG18/TG13) diagnostic criteria[20]; (2) Oozing was defined as bleeding that was slight and stopped spontaneously; (3) Pancreatitis was described as new or worsening abdominal pain along with an increase in serum amylase levels (> 3 × higher than normal upper limit measured 24 h after surgery); (4) Cholecystitis was defined as pain in the epigastrium or RUQ accompanied by a positive Murphy sign, and abdominal ultrasonography showing a thickened gallbladder wall; and (5) Perforation was defined as sudden abdominal pain accompanied by retroperitoneal air and fluid.\nMechanical lithotripsy was performed by the same assistant for all patients. Stones that were fragmented successfully with only one attempt were defined as soft stones. Stones that required multiple fragmentation attempts or needed to be broken again were defined as hard stones.", "The sample size calculation was based on the rate of residual stone clearance. According to a previous study[16] and our prior experience, we assumed a success rate of 84.5% for endoscopic extraction of CBD stones and an increase of up to 97% with saline irrigation. We estimated that 47 patients were needed in this analysis to obtain a power of 80% at the 5% level.\nCategorical variables are expressed as numbers or percentages (%). Continuous variables are presented as the mean ± SD or median and interquartile range, as appropriate. Continuous variables with a normal distribution were analyzed by paired t-test or signed-rank test. Categorical variables were analyzed using the chi-square test or Fisher’s exact test. Logistic regression was used to predict risk factors for complications, and the results are presented as odds ratios (ORs) with 95% confidence intervals (Cis). 
Variables with a P value of < 0.2 in the univariate analysis were included in the multivariate analysis. Linear mixed-effects models were conducted to assess the effect of saline irrigation on the clearance score with a random intercept for each patient and an unstructured covariance structure. A two-sided P value of less than 0.05 was considered statistically significant. The analyses were conducted with statistical software (SAS version 9.4, SAS Institute, Inc., NC, United States).", "The patients' average age was 61 ± 16.5 years, and 23 (48.9%) were male. Comorbidities included coronary disease in three (6.4%) patients, hypertension in 14 (29.8%), diabetes in two (4.3%), liver cirrhosis in three (2.1%), and portal hypertension in one (6.4%). The stone analysis showed that 17 (36.2%) cases had cholesterol-based stones, and 30 (63.8%) had pigment-based stones.\nProcedure-related adverse events occurred in 11 (23.4%) of 47 patients, with cholangitis in four patients, bleeding in two, pancreatitis in four, and cholecystitis in one. No mortality or perforation occurred. The mean time for ERCP was 40.3 ± 15.4 min (Table 1).\nClinical and procedural characteristics of the patients\nData are expressed as the mean ± SD, median (interquartile range) or n (%). ENBD: Endoscopic nasobiliary drainage; CBD: Common bile duct.\nAfter endoscopic stone extraction, occlusion cholangiography showed that there were no residual CBD stones. Seven patients had a clearance score of 4 identified by POC with the SpyGlass DS, and no patients had a score of 5 before saline irrigation. After irrigation with 50 mL saline, CBD clearance improved to 4 in 28 patients, but none achieved a score of 5. After a total of 100 mL of saline irrigation, there was further improvement in the clearance scores: 12 (26%) patients had a score of 4, and 32 (68%) had a score of 5 identified by the SpyGlass DS. The respective CBD stone clearance rates for the control (no irrigation), 50 mL saline irrigation, and 100 mL saline irrigation were 15%, 60%, and 94%, respectively, and the difference among them was statistically significant (P < 0.001) (Table 2). The CBD clearance score (mean ± SD) was 2.4 ± 1.1 in the control, 3.5 ± 0.7 in the 50 mL saline irrigation, and 4.6 ± 0.6 in the 100 mL saline irrigation (P < 0.001) (Supplementary Figure 1).\nCommon bile duct clear score and stone clearance rate before and after saline irrigation\nAfter ERCP, 13 patients had a score of 1 (a large amount of residual stone/sludge) before saline irrigation. Only 15 patients did not reach a score of 5 (complete clearance) after 100 mL of saline irrigation. A dilated CBD diameter > 15 mm (OR = 4.93, 95%CI: 1.13-21.57; P = 0.034) was an independent risk factor for significant residual CBD stones (score = 1). Multivariate analysis revealed that CBD diameter > 15 mm (OR = 0.08, 95%CI: 0.01-0.49; P = 0.007) and the presence of periampullary diverticula (PAD) (OR = 6.51, 95%CI: 1.08-39.21; P = 0.041) were independent risk factors for failed CBD clearance despite irrigation with 100 mL saline (Table 3).\nBiliary cleanliness before and after saline irrigation, assessed by logistic regression analysis\nN: Total number; CBD: Common bile duct; PAD: Periampullary diverticula; CI: Confidence interval; OR: Odds ratio.\nFurther analysis showed that the volume of saline used for irrigation[20] (P < 0.001) was an important factor determining CBD clearance (Supplementary Table 1).\nWe used IR spectroscopy to analyze the components of stones in the CBD of all patients. 
IR spectroscopy analysis showed that the proportion of pigment-based stones was significantly higher in patients with PAD (93.3% vs 50.0%, P = 0.04), and these stones were mostly soft stones (77.4% vs 22.6%, P = 0.01) (Table 4).\nCorrelation between the composition of stones and variables\nCBD: Common bile duct; PAD: Periampullary diverticula.", "Fluoroscopic cholangiographic imagery is currently the main method to determine the successful clearance of CBD stones[20]. However, studies involving the use of IDUS have identified residual biliary sludge within the bile duct despite the absence of filling defects on cholangiography. Mechanical lithotripsy produces a large number of stone fragments, and these minor residual CBD stones may lead to recurrent stone formation[21].\nIt has been reported that residual small CBD stone fragments can be flushed out of the bile duct using saline irrigation with a balloon catheter with a side hole[15]. A study reported that an average of 48 mL of saline could remove residual stones[16]. However, another study found that at least 100 mL of saline was needed to reduce residual stones[17]. Therefore, the effectiveness of saline irrigation on the clearance of residual bile duct stones after lithotripsy is not clear. This study's purpose differs from previous ones in that all of the patients had large stones and were post lithotripsy. In this study, we found that after ERCP and an additional round of lithotripsy were performed to remove CBD stones, POC using the SpyGlass DS showed that only 15% of the patients were relatively cleared of bile duct stones despite a negative occlusion cholangiogram. After irrigation with 50 mL of normal saline, 60% of the patients had relatively cleared bile ducts. After a total of 100 mL saline irrigation, 94% of the patients achieved complete clearance of the bile duct. The results showed that irrigation with 50 mL of saline was not enough to clear the bile duct of residual stones/sludge.\nIn this series, all patients received mechanical lithotripsy for stone fragmentation, which generates substantial stone fragments/debris, thus increasing the difficulty of clearing the bile duct. Our results showed that a good CBD clearance rate could be achieved with a larger saline irrigation volume. The clearance rate of bile duct stones could be higher for those without lithotripsy.\nResidual stones have been found in approximately 1/3 of cases after stone retrieval using endoscopic ultrasound (EUS)[22,23]. In addition to stone detection, the EUS approach might also offer an alternative for treating bile duct stones. However, the use of EUS is very challenging[24-27]. IDUS has also been used to detect small residual stones in 14/59 patients (23.7%) with residual stones[11]. However, the ultrasonic probe is costly and can be easily damaged. In addition, the method is highly operator dependent and often produces poor quality images, and the presence of air in the bile duct can affect the detection of residual stones. Many studies have shown that POC has a high sensitivity in the detection of residual CBD stones, ranging from 25.3% to 34%, where residual stones are missed by standard cholangiography[8,9,28].\nIn our study, we used the SpyGlass DS system to determine the CBD clearance score. The SpyScope has a 4-way tip deflection and is much smaller (10 French) than conventional choledochoscopes. More importantly, it has a separate and independent irrigation channel that allows intermittent or continuous irrigation of the biliary system. 
It also offers a direct examination of the bile duct lumen and is more accurate in detecting and diagnosing residual bile duct stones/sludge.\nResidual CBD stones can lead to recurrent stone formation. Itoi et al[7] reported that PAD and intraoperative lithotripsy were closely related to residual stones (P < 0.05), as we observed in our study. We also found that a dilated CBD with a diameter > 15 mm was an independent risk factor for failed CBD clearance and a large number of residual stones. Despite irrigation with 100 mL saline, PAD and a dilated CBD remained independent risk factors for incomplete bile duct clearance. This may be due to the presence of an air-filled duodenum/diverticulum that compresses the distal CBD, making it difficult to wash away the residual stones/sludge[29,30]. CBD clearance can be improved by increasing the volume of saline irrigation.\nOur results showed that PAD not only increased the difficulty of bile duct clearance but can also affect stone composition. We found that pigment-based stones constituted the majority of CBD stones in our patients with PAD (P = 0.004). Song et al[31] reported that recurrent bile duct stones were more likely to be brown pigmented stones than cholesterol-based stones. This may be because pigment-based stones are often related to bacterial infection, and the formation of biliary sludge will contribute to recurrent stone formation over time[32-34]. The brown pigmented stones that form as a result of bacterial infection are soft and easy to fragment and generate abundant debris. Our experience showed that soft stones/sludge require more saline irrigation to clear the bile duct. In this study, 26% of cases with biliary sludge were found even with 100 mL saline irrigation. If an effective bile duct cleaning device is used to remove biliary sludge, it may reduce the CBD stone recurrence rate after ERCP.\nWithout adequate biliary drainage, saline irrigation can increase the intraductal pressure, causing cholangitis because of contaminated bile[35]. Proper drainage methods can mitigate this stress[36,37]. No serious adverse events were reported in our study except for 10% of patients with cholangitis secondary possibly to saline irrigation. Intermittent irrigation and endoscopic suction to promote biliary drainage may lower the risk in addition to antibiotic coverage. Differences between this study and previous studies include the following: We studied the effect of flushing with saline after lithotripsy; our evaluation method was different from that of previous studies (we used SpyGlass DS examination); and we performed a component analysis related to diverticula. Our study has several methodological advantages. First, it was a prospective study using a self-controlled design, and SpyGlass DS was used to assess the presence of residual stones in the bile ducts. All of the patients showed no stones by imaging even though small residual bile duct stones were observed by SpyGlass DS, indicating that the evaluation of residual bile duct stones using SpyGlass DS is more sensitive than imaging alone. If we observe that there are still stones, it would be unethical not to continue flushing after using 50 mL of saline. Second, this study used objective definitions to assess residual stones identified by POC. Third, this study included objective stone composition analysis using IR spectroscopy.\nThe study has limitations. This study was not a randomized controlled trial. 
The use of direct cholangioscopy by the SpyGlass DS increases the procedure time with additional cost. Therefore, we do not recommend routine use of SpyGlass DS to rule out residual stones.", "In conclusion, in patients with large CBD stones undergoing mechanical lithotripsy, routine irrigation of the bile duct with at least 100 mL of saline is recommended to improve CBD clearance, especially for patients with a dilated bile duct and/or those with PAD. Saline irrigation is simple, inexpensive, and easy to perform to improve bile duct clearance, and it may avoid recurrent stone formation." ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study design and participants", "Procedures", "Definition for complications", "Statistical analysis", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Endoscopic retrograde cholangiopancreatography (ERCP) is an effective and relatively minimally invasive technique for common bile duct (CBD) stones[1-3]. It has been reported that the recurrence rate of CBD stones after ERCP has increased from 4% to 24%[4-6]. The incidence of residual stones after mechanical lithotripsy for intractable CBD stones is 24% to 40%[7-10]. A growing number of studies suggest that an important reason for the recurrence of bile duct stones is the presence of stone debris after lithotripsy[11-13]. During ERCP, occluded cholangiography (OC) is often performed after stone removal to determine whether the stone is removed completely, but OC lacks accuracy. Even if no obvious stones are found on cholangiography, the presence of contrast can obscure small stone debris in the bile duct[8,14]. Complete bile duct clearance is necessary to decrease recurrent bile duct stones.\nSome studies[15-17] reported that irrigation of the bile duct with saline after stone extraction further improves the clearance of the bile duct and has the advantages of being a simple, low-cost procedure with rare complications. Ang et al[16] showed that a mean of 48 mL of saline solution could irrigate and flush out residual stones after the endoscopic removal of CBD stones. Ahn et al[17] found that irrigation with 100 mL of saline can flush out residual stone fragments from the bile duct into the duodenum after stone extraction. However, intraductal ultrasound (IDUS) has a high sensitivity and accuracy in diagnosing bile duct stones/debris[11,16]. This modality yields only indirect images of the debris.\nThe lack of direct evidence to support the efficacy of saline irrigation after lithotripsy prompted us to use peroral cholangioscopy (POC) to examine the bile duct and detect any residual stones/debris. The results of no irrigation after stone extraction were confirmed by OC and were compared to the effectiveness of irrigation with 50 mL or 100 mL saline. To evaluate whether irrigation with 100 mL of saline is more effective in achieving complete clearance of the bile duct after mechanical lithotripsy, we conducted this prospective self-controlled study.", "Study design and participants Between October 2018 and January 2020, 47 patients with CBD stones were enrolled in a prospective clinical trial conducted in the surgical endoscopy center of The First Hospital of Lanzhou University (Figure 1). All eligible patients or their legal representatives gave informed consent before the treatment. The inclusion criteria were patients with CBD stones undergoing ERCP, able to provide informed consent, with a stone size larger than 1.2 cm and requiring mechanical lithotripsy for stone removal. The exclusion criteria included pre-ERCP acute suppurative cholangitis, acute pancreatitis, gastrointestinal (GI) tract hemorrhage and/or perforation, previous history of ERCP, prior Bilroth II gastrectomy and Roux-en-Y or cholangiojejunostomy, pregnancy or breastfeeding, coagulopathy with international normalized ratio > 1.5 and low platelet count (< 50 × 109/L) or active use of anticoagulation drugs, severe liver disease including decompensated liver cirrhosis or liver failure, septic shock, biliary-duodenal fistula confirmed before ERCP cannulation, the presence of intrahepatic duct stones and malignancy, and patient unwillingness or inability to give informed consent.\n\nFlow chart of the self-controlled study. 
CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; PTCD: Percutaneous transhepatic cholangiodrainage.\nTo assess the number of residual stones, we created a CBD clearance score as follows: (1) A large number of stone fragments and biliary sludge; (2) A moderate number of stone fragments and biliary sludge; (3) A small number of stone fragments and biliary sludge; (4) Presence of biliary sludge without any stones; and (5) Completely cleared CBD without any biliary sludge. The scores were determined by two endoscopists independently (Figure 2).\n\nSpyGlass DS images and simulation graphs of the residual stone. A-E: SpyGlass DS images (A1-E1) and simulation graphs (A2-E2). Score 1: A large amount of stone fragments and biliary sludge. Score 2: A moderate amount of stone fragments and biliary sludge. Score 3: A small amount of stone fragments and biliary sludge. Score 4: Presence of biliary sludge without any stones. Score 5: Completely cleared common bile duct without any biliary sludge.\nThe study was approved by the ethics committee of The First Hospital of Lanzhou University (No. LDYYMENG2018-1028) and conducted in accordance with the ethical principles of the Declaration of Helsinki. All authors had access to the study data and reviewed and approved the final manuscript.\nBetween October 2018 and January 2020, 47 patients with CBD stones were enrolled in a prospective clinical trial conducted in the surgical endoscopy center of The First Hospital of Lanzhou University (Figure 1). All eligible patients or their legal representatives gave informed consent before the treatment. The inclusion criteria were patients with CBD stones undergoing ERCP, able to provide informed consent, with a stone size larger than 1.2 cm and requiring mechanical lithotripsy for stone removal. The exclusion criteria included pre-ERCP acute suppurative cholangitis, acute pancreatitis, gastrointestinal (GI) tract hemorrhage and/or perforation, previous history of ERCP, prior Bilroth II gastrectomy and Roux-en-Y or cholangiojejunostomy, pregnancy or breastfeeding, coagulopathy with international normalized ratio > 1.5 and low platelet count (< 50 × 109/L) or active use of anticoagulation drugs, severe liver disease including decompensated liver cirrhosis or liver failure, septic shock, biliary-duodenal fistula confirmed before ERCP cannulation, the presence of intrahepatic duct stones and malignancy, and patient unwillingness or inability to give informed consent.\n\nFlow chart of the self-controlled study. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; PTCD: Percutaneous transhepatic cholangiodrainage.\nTo assess the number of residual stones, we created a CBD clearance score as follows: (1) A large number of stone fragments and biliary sludge; (2) A moderate number of stone fragments and biliary sludge; (3) A small number of stone fragments and biliary sludge; (4) Presence of biliary sludge without any stones; and (5) Completely cleared CBD without any biliary sludge. The scores were determined by two endoscopists independently (Figure 2).\n\nSpyGlass DS images and simulation graphs of the residual stone. A-E: SpyGlass DS images (A1-E1) and simulation graphs (A2-E2). Score 1: A large amount of stone fragments and biliary sludge. Score 2: A moderate amount of stone fragments and biliary sludge. Score 3: A small amount of stone fragments and biliary sludge. Score 4: Presence of biliary sludge without any stones. 
Score 5: Completely cleared common bile duct without any biliary sludge.\nThe study was approved by the ethics committee of The First Hospital of Lanzhou University (No. LDYYMENG2018-1028) and conducted in accordance with the ethical principles of the Declaration of Helsinki. All authors had access to the study data and reviewed and approved the final manuscript.\nProcedures All endoscopic procedures were performed by two experienced endoscopists (each has performed > 1000 ERCPs). Routine prophylactic antibiotics were given in all cases. ERCP was performed with a standard duodenoscope (TJF-260V, Olympus, Tokyo, Japan). The patients received propofol anesthesia without tracheal intubation.\nDuring ERCP, cannulation was performed with a wire-guided sphincterotome. After successful cannulation, contrast was injected to determine the stone size (more or less than 12 mm) and the need for mechanical lithotripsy. All patients underwent a small sphincterotomy with an average length of 3-5 mm using the ENDO CUT mode (power setting 90-120 W; Erbe Elektromedizin, Tuebingen, Germany) followed by balloon sphincteroplasty using a controlled radial expansion (CRE) balloon (10-12 mm in diameter; Boston Scientific, Cork, Ireland). Lithotripsy was performed using an endoscopic lithotripter-compatible basket (Boston Scientific, Marlborough, MA, United States), and stones were removed using the basket and a balloon catheter. Samples of CBD stones were collected using an EndoRetrieval bag (Micro-Tech, Nanjing, China) and placed in a container for subsequent analysis. Occlusion cholangiography was performed by injecting contrast with a balloon catheter. Fluoroscopic assessment using a C-arm X-ray (OEC 9900 Elite, Salt Lake City, Utah, United States) was applied to confirm complete removal of CBD stones and the existence of stone residuals was determined by both endoscopist and radiologist. Confirmed stone residues by applying occlusion cholangiography were then followed by repeated extraction attempts. If complete stone removal was confirmed, a CBD clearance score was generated by further applying SpyGlass DS (Boston Scientific Corporation, Marlborough, Massachusetts, United States) examination. If the CBD clearance score was less than 5, the bile duct was irrigated intermittently with 50 mL of normal saline using a basket. The basket was shaken during irrigation with slight suction applied to the duodenoscope to promote drainage. The bile duct was then examined again using the SpyGlass DS to detect any residual stones/sludge and document the clearance score. If the clearance score was still less than 5, repeat irrigation of the CBD was performed using another 50 mL of normal saline. The final CBD clearance score was obtained one more time using SpyGlass DS examination. CBD stones that were not irrigated out after double 50 mL-saline irrigations continued to be irrigated with saline until they were fully cleared (Figure 3). Endoscopic nasobiliary drainage (ENBD) or biliary stenting was performed if necessary. The CBD clearance score was assessed by two blinded endoscopists who were masked to the treatment.\n\nProtocol of evaluation and irrigation procedures. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; POC: Peroral cholangioscopy.\nThe composition of the removed CBD stones was analyzed using infrared ray (IR) spectroscopy. One milligram (mg) of stone samples was mixed with 150 mg of potassium bromide in a mortar and ground into powder. 
The stone mixture was pressed into a mold and then placed into an automatic IR spectrum analyzer for automated detection to determine the IR spectrogram of the CBD stones. The IR spectrogram showed characteristic absorption peaks at a wavelength of 1460 cm-1, indicating cholesterol-based stones, and those at a wavelength of 1680 cm-1 indicated pigment-based stones.\nAll endoscopic procedures were performed by two experienced endoscopists (each has performed > 1000 ERCPs). Routine prophylactic antibiotics were given in all cases. ERCP was performed with a standard duodenoscope (TJF-260V, Olympus, Tokyo, Japan). The patients received propofol anesthesia without tracheal intubation.\nDuring ERCP, cannulation was performed with a wire-guided sphincterotome. After successful cannulation, contrast was injected to determine the stone size (more or less than 12 mm) and the need for mechanical lithotripsy. All patients underwent a small sphincterotomy with an average length of 3-5 mm using the ENDO CUT mode (power setting 90-120 W; Erbe Elektromedizin, Tuebingen, Germany) followed by balloon sphincteroplasty using a controlled radial expansion (CRE) balloon (10-12 mm in diameter; Boston Scientific, Cork, Ireland). Lithotripsy was performed using an endoscopic lithotripter-compatible basket (Boston Scientific, Marlborough, MA, United States), and stones were removed using the basket and a balloon catheter. Samples of CBD stones were collected using an EndoRetrieval bag (Micro-Tech, Nanjing, China) and placed in a container for subsequent analysis. Occlusion cholangiography was performed by injecting contrast with a balloon catheter. Fluoroscopic assessment using a C-arm X-ray (OEC 9900 Elite, Salt Lake City, Utah, United States) was applied to confirm complete removal of CBD stones and the existence of stone residuals was determined by both endoscopist and radiologist. Confirmed stone residues by applying occlusion cholangiography were then followed by repeated extraction attempts. If complete stone removal was confirmed, a CBD clearance score was generated by further applying SpyGlass DS (Boston Scientific Corporation, Marlborough, Massachusetts, United States) examination. If the CBD clearance score was less than 5, the bile duct was irrigated intermittently with 50 mL of normal saline using a basket. The basket was shaken during irrigation with slight suction applied to the duodenoscope to promote drainage. The bile duct was then examined again using the SpyGlass DS to detect any residual stones/sludge and document the clearance score. If the clearance score was still less than 5, repeat irrigation of the CBD was performed using another 50 mL of normal saline. The final CBD clearance score was obtained one more time using SpyGlass DS examination. CBD stones that were not irrigated out after double 50 mL-saline irrigations continued to be irrigated with saline until they were fully cleared (Figure 3). Endoscopic nasobiliary drainage (ENBD) or biliary stenting was performed if necessary. The CBD clearance score was assessed by two blinded endoscopists who were masked to the treatment.\n\nProtocol of evaluation and irrigation procedures. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; POC: Peroral cholangioscopy.\nThe composition of the removed CBD stones was analyzed using infrared ray (IR) spectroscopy. One milligram (mg) of stone samples was mixed with 150 mg of potassium bromide in a mortar and ground into powder. 
The stone mixture was pressed into a mold and then placed into an automatic IR spectrum analyzer for automated detection to determine the IR spectrogram of the CBD stones. The IR spectrogram showed characteristic absorption peaks at a wavelength of 1460 cm-1, indicating cholesterol-based stones, and those at a wavelength of 1680 cm-1 indicated pigment-based stones.\nDefinition for complications Most of the post-ERCP adverse events were detected within 24 h after the procedure. We routinely assessed these adverse events, including cholangitis, oozing/bleeding, pancreatitis, cholecystitis, and perforation, at 24 and 48 h after the ERCP procedure by symptoms, signs, laboratory tests, and imaging examinations if necessary[18,19]. Post-ERCP adverse events were defined as follows: (1) The diagnosis of acute cholangitis was based on Tokyo Guidelines 2018/2013 (TG18/TG13) diagnostic criteria[20]; (2) Oozing was defined as bleeding that was slight and stopped spontaneously; (3) Pancreatitis was described as new or worsening abdominal pain along with an increase in serum amylase levels (> 3 × higher than normal upper limit measured 24 h after surgery); (4) Cholecystitis was defined as pain in the epigastrium or RUQ accompanied by a positive Murphy sign, and abdominal ultrasonography showing a thickened gallbladder wall; and (5) Perforation was defined as sudden abdominal pain accompanied by retroperitoneal air and fluid.\nMechanical lithotripsy was performed by the same assistant for all patients. Stones that were fragmented successfully with only one attempt were defined as soft stones. Stones that required multiple fragmentation attempts or needed to be broken again were defined as hard stones.\nMost of the post-ERCP adverse events were detected within 24 h after the procedure. We routinely assessed these adverse events, including cholangitis, oozing/bleeding, pancreatitis, cholecystitis, and perforation, at 24 and 48 h after the ERCP procedure by symptoms, signs, laboratory tests, and imaging examinations if necessary[18,19]. Post-ERCP adverse events were defined as follows: (1) The diagnosis of acute cholangitis was based on Tokyo Guidelines 2018/2013 (TG18/TG13) diagnostic criteria[20]; (2) Oozing was defined as bleeding that was slight and stopped spontaneously; (3) Pancreatitis was described as new or worsening abdominal pain along with an increase in serum amylase levels (> 3 × higher than normal upper limit measured 24 h after surgery); (4) Cholecystitis was defined as pain in the epigastrium or RUQ accompanied by a positive Murphy sign, and abdominal ultrasonography showing a thickened gallbladder wall; and (5) Perforation was defined as sudden abdominal pain accompanied by retroperitoneal air and fluid.\nMechanical lithotripsy was performed by the same assistant for all patients. Stones that were fragmented successfully with only one attempt were defined as soft stones. Stones that required multiple fragmentation attempts or needed to be broken again were defined as hard stones.\nStatistical analysis The sample size calculation was based on the rate of residual stone clearance. According to a previous study[16] and our prior experience, we assumed a success rate of 84.5% for endoscopic extraction of CBD stones and an increase of up to 97% with saline irrigation. We estimated that 47 patients were needed in this analysis to obtain a power of 80% at the 5% level.\nCategorical variables are expressed as numbers or percentages (%). 
Continuous variables are presented as the mean ± SD or median and interquartile range, as appropriate. Normally distributed continuous variables were compared with the paired t-test and non-normally distributed variables with the Wilcoxon signed-rank test. Categorical variables were analyzed using the chi-square test or Fisher’s exact test. Logistic regression was used to identify risk factors for complications, and the results are presented as odds ratios (ORs) with 95% confidence intervals (CIs). Variables with a P value of < 0.2 in the univariate analysis were included in the multivariate analysis. Linear mixed-effects models were used to assess the effect of saline irrigation on the clearance score, with a random intercept for each patient and an unstructured covariance structure. A two-sided P value of less than 0.05 was considered statistically significant. The analyses were conducted with statistical software (SAS version 9.4, SAS Institute, Inc., NC, United States).", "Between October 2018 and January 2020, 47 patients with CBD stones were enrolled in a prospective clinical trial conducted in the surgical endoscopy center of The First Hospital of Lanzhou University (Figure 1). All eligible patients or their legal representatives gave informed consent before the treatment. The inclusion criteria were patients with CBD stones undergoing ERCP, able to provide informed consent, with a stone size larger than 1.2 cm and requiring mechanical lithotripsy for stone removal. The exclusion criteria included pre-ERCP acute suppurative cholangitis, acute pancreatitis, gastrointestinal (GI) tract hemorrhage and/or perforation, previous history of ERCP, prior Billroth II gastrectomy and Roux-en-Y or cholangiojejunostomy, pregnancy or breastfeeding, coagulopathy with an international normalized ratio > 1.5 and a low platelet count (< 50 × 10⁹/L) or active use of anticoagulation drugs, severe liver disease including decompensated liver cirrhosis or liver failure, septic shock, biliary-duodenal fistula confirmed before ERCP cannulation, the presence of intrahepatic duct stones and malignancy, and patient unwillingness or inability to give informed consent.\n\nFlow chart of the self-controlled study. 
CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; PTCD: Percutaneous transhepatic cholangiodrainage.\nTo assess the number of residual stones, we created a CBD clearance score as follows: (1) A large number of stone fragments and biliary sludge; (2) A moderate number of stone fragments and biliary sludge; (3) A small number of stone fragments and biliary sludge; (4) Presence of biliary sludge without any stones; and (5) Completely cleared CBD without any biliary sludge. The scores were determined by two endoscopists independently (Figure 2).\n\nSpyGlass DS images and simulation graphs of the residual stone. A-E: SpyGlass DS images (A1-E1) and simulation graphs (A2-E2). Score 1: A large amount of stone fragments and biliary sludge. Score 2: A moderate amount of stone fragments and biliary sludge. Score 3: A small amount of stone fragments and biliary sludge. Score 4: Presence of biliary sludge without any stones. Score 5: Completely cleared common bile duct without any biliary sludge.\nThe study was approved by the ethics committee of The First Hospital of Lanzhou University (No. LDYYMENG2018-1028) and conducted in accordance with the ethical principles of the Declaration of Helsinki. All authors had access to the study data and reviewed and approved the final manuscript.
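The five-level clearance score defined above lends itself to a small programmatic encoding, which can be handy when tabulating scores across irrigation stages. The sketch below is illustrative only; the class and function names are ours, not part of the study protocol.

```python
# Illustrative encoding of the five-level CBD clearance score defined above.
# Score semantics follow the study's definition; the names are hypothetical.
from enum import IntEnum

class CBDClearanceScore(IntEnum):
    LARGE_RESIDUE = 1      # large number of stone fragments and biliary sludge
    MODERATE_RESIDUE = 2   # moderate number of stone fragments and biliary sludge
    SMALL_RESIDUE = 3      # small number of stone fragments and biliary sludge
    SLUDGE_ONLY = 4        # biliary sludge without any stones
    COMPLETELY_CLEAR = 5   # completely cleared CBD, no biliary sludge

def is_complete_clearance(score: int) -> bool:
    """A score of 5 corresponds to complete clearance in the study."""
    return CBDClearanceScore(score) == CBDClearanceScore.COMPLETELY_CLEAR

print(is_complete_clearance(4))  # -> False
```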
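The stone-composition analysis described in the methods classifies stones by their characteristic IR absorption peak (1460 cm-1 for cholesterol-based and 1680 cm-1 for pigment-based stones). A minimal sketch of that decision rule follows; the tolerance window and function name are our own assumptions rather than part of the published protocol.

```python
# Toy classifier for the IR decision rule described in the methods.
# The two reference peaks come from the text; the +/-20 cm^-1 tolerance
# is an arbitrary illustrative choice.
CHOLESTEROL_PEAK = 1460  # cm^-1
PIGMENT_PEAK = 1680      # cm^-1
TOLERANCE = 20           # cm^-1 (assumed)

def classify_stone(peak_wavenumbers: list) -> str:
    """Classify a stone from the wavenumbers of its main absorption peaks."""
    near_chol = any(abs(p - CHOLESTEROL_PEAK) <= TOLERANCE for p in peak_wavenumbers)
    near_pig = any(abs(p - PIGMENT_PEAK) <= TOLERANCE for p in peak_wavenumbers)
    if near_chol and not near_pig:
        return "cholesterol-based"
    if near_pig and not near_chol:
        return "pigment-based"
    return "indeterminate"  # mixed or no characteristic peak found

print(classify_stone([1458.7]))  # -> cholesterol-based
```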
", "The patients' average age was 61 ± 16.5 years, and 23 (48.9%) were male. Comorbidities included coronary disease in three (6.4%) patients, hypertension in 14 (29.8%), diabetes in two (4.3%), liver cirrhosis in three (6.4%), and portal hypertension in one (2.1%). The stone analysis showed that 17 (36.2%) cases had cholesterol-based stones, and 30 (63.8%) had pigment-based stones.\nProcedure-related adverse events occurred in 11 (23.4%) of 47 patients, with cholangitis in four patients, bleeding in two, pancreatitis in four, and cholecystitis in one. No mortality or perforation occurred. The mean time for ERCP was 40.3 ± 15.4 min (Table 1).\nClinical and procedural characteristics of the patients\nData are expressed as the mean ± SD, median (interquartile range) or n (%). ENBD: Endoscopic nasobiliary drainage; CBD: Common bile duct.\nAfter endoscopic stone extraction, occlusion cholangiography showed that there were no residual CBD stones. Seven patients had a clearance score of 4 identified by POC with the SpyGlass DS, and no patients had a score of 5 before saline irrigation. After irrigation with 50 mL of saline, the CBD clearance score improved to 4 in 28 patients, but none achieved a score of 5. After a total of 100 mL of saline irrigation, there was further improvement in the clearance scores: 12 (26%) patients had a score of 4, and 32 (68%) had a score of 5 identified by the SpyGlass DS. The CBD stone clearance rates for the control (no irrigation), 50 mL saline irrigation, and 100 mL saline irrigation were 15%, 60%, and 94%, respectively, and the difference among them was statistically significant (P < 0.001) (Table 2). The CBD clearance score (mean ± SD) was 2.4 ± 1.1 in the control, 3.5 ± 0.7 after 50 mL of saline irrigation, and 4.6 ± 0.6 after 100 mL of saline irrigation (P < 0.001) (Supplementary Figure 1).\nCommon bile duct clearance score and stone clearance rate before and after saline irrigation\nAfter ERCP, 13 patients had a score of 1 (a large amount of residual stone/sludge) before saline irrigation. Fifteen patients did not reach a score of 5 (complete clearance) after 100 mL of saline irrigation. A dilated CBD diameter > 15 mm (OR = 4.93, 95%CI: 1.13-21.57; P = 0.034) was an independent risk factor for significant residual CBD stones (score = 1). Multivariate analysis revealed that CBD diameter > 15 mm (OR = 0.08, 95%CI: 0.01-0.49; P = 0.007) and the presence of periampullary diverticula (PAD) (OR = 6.51, 95%CI: 1.08-39.21; P = 0.041) were independent risk factors for failed CBD clearance despite irrigation with 100 mL of saline (Table 3).\nBiliary cleanliness before and after saline irrigation, assessed by logistic regression analysis\nN: Total number; CBD: Common bile duct; PAD: Periampullary diverticula; CI: Confidence interval; OR: Odds ratio. 
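The univariate/multivariate risk-factor analysis reported in Table 3 can be reproduced in outline with any standard logistic-regression routine. The sketch below uses statsmodels on a hypothetical data frame with invented column names; it approximates the approach and is not the authors' SAS code.

```python
# Sketch of a risk-factor analysis: logistic regression reported as ORs with 95% CIs.
# The data frame, column names, and values are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "failed_clearance": rng.integers(0, 2, 47),       # 1 = score < 5 after 100 mL
    "cbd_diameter_gt_15mm": rng.integers(0, 2, 47),   # 1 = dilated CBD (> 15 mm)
    "pad": rng.integers(0, 2, 47),                    # 1 = periampullary diverticulum
})

X = sm.add_constant(df[["cbd_diameter_gt_15mm", "pad"]])
fit = sm.Logit(df["failed_clearance"], X).fit(disp=False)

odds_ratios = np.exp(fit.params).rename("OR")   # exponentiated coefficients
conf_int = np.exp(fit.conf_int())               # 95% CI on the OR scale
print(pd.concat([odds_ratios, conf_int], axis=1))
```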
IR spectroscopy analysis showed that the proportion of pigment-based stones was significantly higher in patients with PAD (93.3% vs 50.0%, P = 0.04), and these stones were mostly soft stones (77.4% vs 22.6%, P = 0.01) (Table 4).\nCorrelation between the composition of stones and variables\nCBD: Common bile duct; PAD: Periampullary diverticula.", "Fluoroscopic cholangiographic imagery is currently the main method to determine the successful clearance of CBD stones[20]. However, studies involving the use of IDUS have identified residual biliary sludge within the bile duct despite the absence of filling defects on cholangiography. Mechanical lithotripsy produces a large number of stone fragments, and these minor residual CBD stones may lead to recurrent stone formation[21].\nIt has been reported that residual small CBD stone fragments can be flushed out of the bile duct using saline irrigation with a balloon catheter with a side hole[15]. A study reported that an average of 48 mL of saline could remove residual stones[16]. However, another study found that at least 100 mL of saline was needed to reduce residual stones[17]. Therefore, the effectiveness of saline irrigation on the clearance of residual bile duct stones after lithotripsy is not clear. This study's purpose differs from previous ones in that all of the patients had large stones and were post lithotripsy. In this study, we found that after ERCP and an additional round of lithotripsy were performed to remove CBD stones, POC using the SpyGlass DS showed that only 15% of the patients were relatively cleared of bile duct stones despite a negative occlusion cholangiogram. After irrigation with 50 mL of normal saline, 60% of the patients had relatively cleared bile ducts. After a total of 100 mL saline irrigation, 94% of the patients achieved complete clearance of the bile duct. The results showed that irrigation with 50 mL of saline was not enough to clear the bile duct of residual stones/sludge.\nIn this series, all patients received mechanical lithotripsy for stone fragmentation, which generates substantial stone fragments/debris, thus increasing the difficulty of clearing the bile duct. Our results showed that a good CBD clearance rate could be achieved with a larger saline irrigation volume. The clearance rate of bile duct stones could be higher for those without lithotripsy.\nResidual stones have been found in approximately 1/3 of cases after stone retrieval using endoscopic ultrasound (EUS)[22,23]. In addition to stone detection, the EUS approach might also offer an alternative for treating bile duct stones. However, the use of EUS is very challenging[24-27]. IDUS has also been used to detect small residual stones in 14/59 patients (23.7%) with residual stones[11]. However, the ultrasonic probe is costly and can be easily damaged. In addition, the method is highly operator dependent and often produces poor quality images, and the presence of air in the bile duct can affect the detection of residual stones. Many studies have shown that POC has a high sensitivity in the detection of residual CBD stones, ranging from 25.3% to 34%, where residual stones are missed by standard cholangiography[8,9,28].\nIn our study, we used the SpyGlass DS system to determine the CBD clearance score. The SpyScope has a 4-way tip deflection and is much smaller (10 French) than conventional choledochoscopes. More importantly, it has a separate and independent irrigation channel that allows intermittent or continuous irrigation of the biliary system. 
It also offers a direct examination of the bile duct lumen and is more accurate in detecting and diagnosing residual bile duct stones/sludge.\nResidual CBD stones can lead to recurrent stone formation. Itoi et al[7] reported that PAD and intraoperative lithotripsy were closely related to residual stones (P < 0.05), as we observed in our study. We also found that a dilated CBD with a diameter > 15 mm was an independent risk factor for failed CBD clearance and a large number of residual stones. Despite irrigation with 100 mL saline, PAD and a dilated CBD remained independent risk factors for incomplete bile duct clearance. This may be due to the presence of an air-filled duodenum/diverticulum that compresses the distal CBD, making it difficult to wash away the residual stones/sludge[29,30]. CBD clearance can be improved by increasing the volume of saline irrigation.\nOur results showed that PAD not only increased the difficulty of bile duct clearance but can also affect stone composition. We found that pigment-based stones constituted the majority of CBD stones in our patients with PAD (P = 0.004). Song et al[31] reported that recurrent bile duct stones were more likely to be brown pigmented stones than cholesterol-based stones. This may be because pigment-based stones are often related to bacterial infection, and the formation of biliary sludge will contribute to recurrent stone formation over time[32-34]. The brown pigmented stones that form as a result of bacterial infection are soft and easy to fragment and generate abundant debris. Our experience showed that soft stones/sludge require more saline irrigation to clear the bile duct. In this study, 26% of cases with biliary sludge were found even with 100 mL saline irrigation. If an effective bile duct cleaning device is used to remove biliary sludge, it may reduce the CBD stone recurrence rate after ERCP.\nWithout adequate biliary drainage, saline irrigation can increase the intraductal pressure, causing cholangitis because of contaminated bile[35]. Proper drainage methods can mitigate this stress[36,37]. No serious adverse events were reported in our study except for 10% of patients with cholangitis secondary possibly to saline irrigation. Intermittent irrigation and endoscopic suction to promote biliary drainage may lower the risk in addition to antibiotic coverage. Differences between this study and previous studies include the following: We studied the effect of flushing with saline after lithotripsy; our evaluation method was different from that of previous studies (we used SpyGlass DS examination); and we performed a component analysis related to diverticula. Our study has several methodological advantages. First, it was a prospective study using a self-controlled design, and SpyGlass DS was used to assess the presence of residual stones in the bile ducts. All of the patients showed no stones by imaging even though small residual bile duct stones were observed by SpyGlass DS, indicating that the evaluation of residual bile duct stones using SpyGlass DS is more sensitive than imaging alone. If we observe that there are still stones, it would be unethical not to continue flushing after using 50 mL of saline. Second, this study used objective definitions to assess residual stones identified by POC. Third, this study included objective stone composition analysis using IR spectroscopy.\nThe study has limitations. This study was not a randomized controlled trial. 
The use of direct cholangioscopy by the SpyGlass DS increases the procedure time with additional cost. Therefore, we do not recommend routine use of SpyGlass DS to rule out residual stones.", "In conclusion, in patients with large CBD stones undergoing mechanical lithotripsy, routine irrigation of the bile duct with at least 100 mL of saline is recommended to improve CBD clearance, especially for patients with a dilated bile duct and/or those with PAD. Saline irrigation is simple, inexpensive, and easy to perform to improve bile duct clearance, and it may avoid recurrent stone formation." ]
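The evaluation-and-irrigation protocol described in the methods (score the duct with SpyGlass DS, then flush with 50-mL aliquots of saline until a score of 5 is reached) reduces to a simple loop. The sketch below restates that control flow with hypothetical stand-in functions; it is a schematic, not an executable clinical procedure.

```python
# Schematic restatement of the irrigation protocol from the methods.
# score_with_spyglass() and irrigate_with_saline() are hypothetical stand-ins
# for the clinical steps; only the control flow mirrors the protocol.
ALIQUOT_ML = 50
COMPLETE_CLEARANCE = 5

def clear_bile_duct(score_with_spyglass, irrigate_with_saline):
    """Return (final_score, total_saline_ml) after iterative 50-mL irrigation."""
    total_ml = 0
    score = score_with_spyglass()          # baseline POC score after extraction
    while score < COMPLETE_CLEARANCE:
        irrigate_with_saline(ALIQUOT_ML)   # intermittent 50-mL saline flush
        total_ml += ALIQUOT_ML
        score = score_with_spyglass()      # re-examine with SpyGlass DS
    return score, total_ml

# Example with a mock duct that is fully cleared after two flushes:
scores = iter([3, 4, 5])
print(clear_bile_duct(lambda: next(scores), lambda ml: None))  # -> (5, 100)
```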
[ null, "methods", null, null, null, null, null, null, null ]
[ "Endoscopic retrograde cholangiopancreatography", "Common bile duct gall stones", "Peroral cholangioscopy", "Saline irrigation", "Periampullary diverticula", "Prospective cohort study" ]
INTRODUCTION: Endoscopic retrograde cholangiopancreatography (ERCP) is an effective and relatively minimally invasive technique for common bile duct (CBD) stones[1-3]. The reported recurrence rate of CBD stones after ERCP ranges from 4% to 24%[4-6]. The incidence of residual stones after mechanical lithotripsy for intractable CBD stones is 24% to 40%[7-10]. A growing number of studies suggest that an important reason for the recurrence of bile duct stones is the presence of stone debris after lithotripsy[11-13]. During ERCP, occlusion cholangiography (OC) is often performed after stone removal to determine whether the stone has been removed completely, but OC lacks accuracy. Even if no obvious stones are found on cholangiography, the presence of contrast can obscure small stone debris in the bile duct[8,14]. Complete bile duct clearance is necessary to decrease recurrent bile duct stones. Some studies[15-17] reported that irrigation of the bile duct with saline after stone extraction further improves the clearance of the bile duct and has the advantages of being a simple, low-cost procedure with rare complications. Ang et al[16] showed that a mean of 48 mL of saline solution could irrigate and flush out residual stones after the endoscopic removal of CBD stones. Ahn et al[17] found that irrigation with 100 mL of saline can flush out residual stone fragments from the bile duct into the duodenum after stone extraction. Intraductal ultrasound (IDUS) has high sensitivity and accuracy in diagnosing bile duct stones/debris[11,16], but this modality yields only indirect images of the debris. The lack of direct evidence to support the efficacy of saline irrigation after lithotripsy prompted us to use peroral cholangioscopy (POC) to examine the bile duct and detect any residual stones/debris. Bile duct clearance without irrigation after stone extraction was confirmed by OC and compared with the effectiveness of irrigation with 50 mL or 100 mL of saline. To evaluate whether irrigation with 100 mL of saline is more effective in achieving complete clearance of the bile duct after mechanical lithotripsy, we conducted this prospective self-controlled study. 
CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; PTCD: Percutaneous transhepatic cholangiodrainage. To assess the number of residual stones, we created a CBD clearance score as follows: (1) A large number of stone fragments and biliary sludge; (2) A moderate number of stone fragments and biliary sludge; (3) A small number of stone fragments and biliary sludge; (4) Presence of biliary sludge without any stones; and (5) Completely cleared CBD without any biliary sludge. The scores were determined by two endoscopists independently (Figure 2). SpyGlass DS images and simulation graphs of the residual stone. A-E: SpyGlass DS images (A1-E1) and simulation graphs (A2-E2). Score 1: A large amount of stone fragments and biliary sludge. Score 2: A moderate amount of stone fragments and biliary sludge. Score 3: A small amount of stone fragments and biliary sludge. Score 4: Presence of biliary sludge without any stones. Score 5: Completely cleared common bile duct without any biliary sludge. The study was approved by the ethics committee of The First Hospital of Lanzhou University (No. LDYYMENG2018-1028) and conducted in accordance with the ethical principles of the Declaration of Helsinki. All authors had access to the study data and reviewed and approved the final manuscript. Between October 2018 and January 2020, 47 patients with CBD stones were enrolled in a prospective clinical trial conducted in the surgical endoscopy center of The First Hospital of Lanzhou University (Figure 1). All eligible patients or their legal representatives gave informed consent before the treatment. The inclusion criteria were patients with CBD stones undergoing ERCP, able to provide informed consent, with a stone size larger than 1.2 cm and requiring mechanical lithotripsy for stone removal. The exclusion criteria included pre-ERCP acute suppurative cholangitis, acute pancreatitis, gastrointestinal (GI) tract hemorrhage and/or perforation, previous history of ERCP, prior Bilroth II gastrectomy and Roux-en-Y or cholangiojejunostomy, pregnancy or breastfeeding, coagulopathy with international normalized ratio > 1.5 and low platelet count (< 50 × 109/L) or active use of anticoagulation drugs, severe liver disease including decompensated liver cirrhosis or liver failure, septic shock, biliary-duodenal fistula confirmed before ERCP cannulation, the presence of intrahepatic duct stones and malignancy, and patient unwillingness or inability to give informed consent. Flow chart of the self-controlled study. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; PTCD: Percutaneous transhepatic cholangiodrainage. To assess the number of residual stones, we created a CBD clearance score as follows: (1) A large number of stone fragments and biliary sludge; (2) A moderate number of stone fragments and biliary sludge; (3) A small number of stone fragments and biliary sludge; (4) Presence of biliary sludge without any stones; and (5) Completely cleared CBD without any biliary sludge. The scores were determined by two endoscopists independently (Figure 2). SpyGlass DS images and simulation graphs of the residual stone. A-E: SpyGlass DS images (A1-E1) and simulation graphs (A2-E2). Score 1: A large amount of stone fragments and biliary sludge. Score 2: A moderate amount of stone fragments and biliary sludge. Score 3: A small amount of stone fragments and biliary sludge. Score 4: Presence of biliary sludge without any stones. 
Score 5: Completely cleared common bile duct without any biliary sludge. The study was approved by the ethics committee of The First Hospital of Lanzhou University (No. LDYYMENG2018-1028) and conducted in accordance with the ethical principles of the Declaration of Helsinki. All authors had access to the study data and reviewed and approved the final manuscript. Procedures All endoscopic procedures were performed by two experienced endoscopists (each has performed > 1000 ERCPs). Routine prophylactic antibiotics were given in all cases. ERCP was performed with a standard duodenoscope (TJF-260V, Olympus, Tokyo, Japan). The patients received propofol anesthesia without tracheal intubation. During ERCP, cannulation was performed with a wire-guided sphincterotome. After successful cannulation, contrast was injected to determine the stone size (more or less than 12 mm) and the need for mechanical lithotripsy. All patients underwent a small sphincterotomy with an average length of 3-5 mm using the ENDO CUT mode (power setting 90-120 W; Erbe Elektromedizin, Tuebingen, Germany) followed by balloon sphincteroplasty using a controlled radial expansion (CRE) balloon (10-12 mm in diameter; Boston Scientific, Cork, Ireland). Lithotripsy was performed using an endoscopic lithotripter-compatible basket (Boston Scientific, Marlborough, MA, United States), and stones were removed using the basket and a balloon catheter. Samples of CBD stones were collected using an EndoRetrieval bag (Micro-Tech, Nanjing, China) and placed in a container for subsequent analysis. Occlusion cholangiography was performed by injecting contrast with a balloon catheter. Fluoroscopic assessment using a C-arm X-ray (OEC 9900 Elite, Salt Lake City, Utah, United States) was applied to confirm complete removal of CBD stones and the existence of stone residuals was determined by both endoscopist and radiologist. Confirmed stone residues by applying occlusion cholangiography were then followed by repeated extraction attempts. If complete stone removal was confirmed, a CBD clearance score was generated by further applying SpyGlass DS (Boston Scientific Corporation, Marlborough, Massachusetts, United States) examination. If the CBD clearance score was less than 5, the bile duct was irrigated intermittently with 50 mL of normal saline using a basket. The basket was shaken during irrigation with slight suction applied to the duodenoscope to promote drainage. The bile duct was then examined again using the SpyGlass DS to detect any residual stones/sludge and document the clearance score. If the clearance score was still less than 5, repeat irrigation of the CBD was performed using another 50 mL of normal saline. The final CBD clearance score was obtained one more time using SpyGlass DS examination. CBD stones that were not irrigated out after double 50 mL-saline irrigations continued to be irrigated with saline until they were fully cleared (Figure 3). Endoscopic nasobiliary drainage (ENBD) or biliary stenting was performed if necessary. The CBD clearance score was assessed by two blinded endoscopists who were masked to the treatment. Protocol of evaluation and irrigation procedures. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; POC: Peroral cholangioscopy. The composition of the removed CBD stones was analyzed using infrared ray (IR) spectroscopy. One milligram (mg) of stone samples was mixed with 150 mg of potassium bromide in a mortar and ground into powder. 
The stone mixture was pressed into a mold and then placed into an automatic IR spectrum analyzer for automated detection to determine the IR spectrogram of the CBD stones. The IR spectrogram showed characteristic absorption peaks at a wavelength of 1460 cm-1, indicating cholesterol-based stones, and those at a wavelength of 1680 cm-1 indicated pigment-based stones. All endoscopic procedures were performed by two experienced endoscopists (each has performed > 1000 ERCPs). Routine prophylactic antibiotics were given in all cases. ERCP was performed with a standard duodenoscope (TJF-260V, Olympus, Tokyo, Japan). The patients received propofol anesthesia without tracheal intubation. During ERCP, cannulation was performed with a wire-guided sphincterotome. After successful cannulation, contrast was injected to determine the stone size (more or less than 12 mm) and the need for mechanical lithotripsy. All patients underwent a small sphincterotomy with an average length of 3-5 mm using the ENDO CUT mode (power setting 90-120 W; Erbe Elektromedizin, Tuebingen, Germany) followed by balloon sphincteroplasty using a controlled radial expansion (CRE) balloon (10-12 mm in diameter; Boston Scientific, Cork, Ireland). Lithotripsy was performed using an endoscopic lithotripter-compatible basket (Boston Scientific, Marlborough, MA, United States), and stones were removed using the basket and a balloon catheter. Samples of CBD stones were collected using an EndoRetrieval bag (Micro-Tech, Nanjing, China) and placed in a container for subsequent analysis. Occlusion cholangiography was performed by injecting contrast with a balloon catheter. Fluoroscopic assessment using a C-arm X-ray (OEC 9900 Elite, Salt Lake City, Utah, United States) was applied to confirm complete removal of CBD stones and the existence of stone residuals was determined by both endoscopist and radiologist. Confirmed stone residues by applying occlusion cholangiography were then followed by repeated extraction attempts. If complete stone removal was confirmed, a CBD clearance score was generated by further applying SpyGlass DS (Boston Scientific Corporation, Marlborough, Massachusetts, United States) examination. If the CBD clearance score was less than 5, the bile duct was irrigated intermittently with 50 mL of normal saline using a basket. The basket was shaken during irrigation with slight suction applied to the duodenoscope to promote drainage. The bile duct was then examined again using the SpyGlass DS to detect any residual stones/sludge and document the clearance score. If the clearance score was still less than 5, repeat irrigation of the CBD was performed using another 50 mL of normal saline. The final CBD clearance score was obtained one more time using SpyGlass DS examination. CBD stones that were not irrigated out after double 50 mL-saline irrigations continued to be irrigated with saline until they were fully cleared (Figure 3). Endoscopic nasobiliary drainage (ENBD) or biliary stenting was performed if necessary. The CBD clearance score was assessed by two blinded endoscopists who were masked to the treatment. Protocol of evaluation and irrigation procedures. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; POC: Peroral cholangioscopy. The composition of the removed CBD stones was analyzed using infrared ray (IR) spectroscopy. One milligram (mg) of stone samples was mixed with 150 mg of potassium bromide in a mortar and ground into powder. 
The stone mixture was pressed into a mold and then placed into an automatic IR spectrum analyzer for automated detection to determine the IR spectrogram of the CBD stones. The IR spectrogram showed characteristic absorption peaks at a wavelength of 1460 cm-1, indicating cholesterol-based stones, and those at a wavelength of 1680 cm-1 indicated pigment-based stones. Definition for complications Most of the post-ERCP adverse events were detected within 24 h after the procedure. We routinely assessed these adverse events, including cholangitis, oozing/bleeding, pancreatitis, cholecystitis, and perforation, at 24 and 48 h after the ERCP procedure by symptoms, signs, laboratory tests, and imaging examinations if necessary[18,19]. Post-ERCP adverse events were defined as follows: (1) The diagnosis of acute cholangitis was based on Tokyo Guidelines 2018/2013 (TG18/TG13) diagnostic criteria[20]; (2) Oozing was defined as bleeding that was slight and stopped spontaneously; (3) Pancreatitis was described as new or worsening abdominal pain along with an increase in serum amylase levels (> 3 × higher than normal upper limit measured 24 h after surgery); (4) Cholecystitis was defined as pain in the epigastrium or RUQ accompanied by a positive Murphy sign, and abdominal ultrasonography showing a thickened gallbladder wall; and (5) Perforation was defined as sudden abdominal pain accompanied by retroperitoneal air and fluid. Mechanical lithotripsy was performed by the same assistant for all patients. Stones that were fragmented successfully with only one attempt were defined as soft stones. Stones that required multiple fragmentation attempts or needed to be broken again were defined as hard stones. Most of the post-ERCP adverse events were detected within 24 h after the procedure. We routinely assessed these adverse events, including cholangitis, oozing/bleeding, pancreatitis, cholecystitis, and perforation, at 24 and 48 h after the ERCP procedure by symptoms, signs, laboratory tests, and imaging examinations if necessary[18,19]. Post-ERCP adverse events were defined as follows: (1) The diagnosis of acute cholangitis was based on Tokyo Guidelines 2018/2013 (TG18/TG13) diagnostic criteria[20]; (2) Oozing was defined as bleeding that was slight and stopped spontaneously; (3) Pancreatitis was described as new or worsening abdominal pain along with an increase in serum amylase levels (> 3 × higher than normal upper limit measured 24 h after surgery); (4) Cholecystitis was defined as pain in the epigastrium or RUQ accompanied by a positive Murphy sign, and abdominal ultrasonography showing a thickened gallbladder wall; and (5) Perforation was defined as sudden abdominal pain accompanied by retroperitoneal air and fluid. Mechanical lithotripsy was performed by the same assistant for all patients. Stones that were fragmented successfully with only one attempt were defined as soft stones. Stones that required multiple fragmentation attempts or needed to be broken again were defined as hard stones. Statistical analysis The sample size calculation was based on the rate of residual stone clearance. According to a previous study[16] and our prior experience, we assumed a success rate of 84.5% for endoscopic extraction of CBD stones and an increase of up to 97% with saline irrigation. We estimated that 47 patients were needed in this analysis to obtain a power of 80% at the 5% level. Categorical variables are expressed as numbers or percentages (%). 
Continuous variables are presented as the mean ± SD or median and interquartile range, as appropriate. Continuous variables with a normal distribution were analyzed by paired t-test or signed-rank test. Categorical variables were analyzed using the chi-square test or Fisher’s exact test. Logistic regression was used to predict risk factors for complications, and the results are presented as odds ratios (ORs) with 95% confidence intervals (Cis). Variables with a P value of < 0.2 in the univariate analysis were included in the multivariate analysis. Linear mixed-effects models were conducted to assess the effect of saline irrigation on the clearance score with a random intercept for each patient and an unstructured covariance structure. A two-sided P value of less than 0.05 was considered statistically significant. The analyses were conducted with statistical software (SAS version 9.4, SAS Institute, Inc., NC, United States). The sample size calculation was based on the rate of residual stone clearance. According to a previous study[16] and our prior experience, we assumed a success rate of 84.5% for endoscopic extraction of CBD stones and an increase of up to 97% with saline irrigation. We estimated that 47 patients were needed in this analysis to obtain a power of 80% at the 5% level. Categorical variables are expressed as numbers or percentages (%). Continuous variables are presented as the mean ± SD or median and interquartile range, as appropriate. Continuous variables with a normal distribution were analyzed by paired t-test or signed-rank test. Categorical variables were analyzed using the chi-square test or Fisher’s exact test. Logistic regression was used to predict risk factors for complications, and the results are presented as odds ratios (ORs) with 95% confidence intervals (Cis). Variables with a P value of < 0.2 in the univariate analysis were included in the multivariate analysis. Linear mixed-effects models were conducted to assess the effect of saline irrigation on the clearance score with a random intercept for each patient and an unstructured covariance structure. A two-sided P value of less than 0.05 was considered statistically significant. The analyses were conducted with statistical software (SAS version 9.4, SAS Institute, Inc., NC, United States). Study design and participants: Between October 2018 and January 2020, 47 patients with CBD stones were enrolled in a prospective clinical trial conducted in the surgical endoscopy center of The First Hospital of Lanzhou University (Figure 1). All eligible patients or their legal representatives gave informed consent before the treatment. The inclusion criteria were patients with CBD stones undergoing ERCP, able to provide informed consent, with a stone size larger than 1.2 cm and requiring mechanical lithotripsy for stone removal. The exclusion criteria included pre-ERCP acute suppurative cholangitis, acute pancreatitis, gastrointestinal (GI) tract hemorrhage and/or perforation, previous history of ERCP, prior Bilroth II gastrectomy and Roux-en-Y or cholangiojejunostomy, pregnancy or breastfeeding, coagulopathy with international normalized ratio > 1.5 and low platelet count (< 50 × 109/L) or active use of anticoagulation drugs, severe liver disease including decompensated liver cirrhosis or liver failure, septic shock, biliary-duodenal fistula confirmed before ERCP cannulation, the presence of intrahepatic duct stones and malignancy, and patient unwillingness or inability to give informed consent. 
Flow chart of the self-controlled study. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; PTCD: Percutaneous transhepatic cholangiodrainage. To assess the number of residual stones, we created a CBD clearance score as follows: (1) A large number of stone fragments and biliary sludge; (2) A moderate number of stone fragments and biliary sludge; (3) A small number of stone fragments and biliary sludge; (4) Presence of biliary sludge without any stones; and (5) Completely cleared CBD without any biliary sludge. The scores were determined by two endoscopists independently (Figure 2). SpyGlass DS images and simulation graphs of the residual stone. A-E: SpyGlass DS images (A1-E1) and simulation graphs (A2-E2). Score 1: A large amount of stone fragments and biliary sludge. Score 2: A moderate amount of stone fragments and biliary sludge. Score 3: A small amount of stone fragments and biliary sludge. Score 4: Presence of biliary sludge without any stones. Score 5: Completely cleared common bile duct without any biliary sludge. The study was approved by the ethics committee of The First Hospital of Lanzhou University (No. LDYYMENG2018-1028) and conducted in accordance with the ethical principles of the Declaration of Helsinki. All authors had access to the study data and reviewed and approved the final manuscript. Procedures: All endoscopic procedures were performed by two experienced endoscopists (each has performed > 1000 ERCPs). Routine prophylactic antibiotics were given in all cases. ERCP was performed with a standard duodenoscope (TJF-260V, Olympus, Tokyo, Japan). The patients received propofol anesthesia without tracheal intubation. During ERCP, cannulation was performed with a wire-guided sphincterotome. After successful cannulation, contrast was injected to determine the stone size (more or less than 12 mm) and the need for mechanical lithotripsy. All patients underwent a small sphincterotomy with an average length of 3-5 mm using the ENDO CUT mode (power setting 90-120 W; Erbe Elektromedizin, Tuebingen, Germany) followed by balloon sphincteroplasty using a controlled radial expansion (CRE) balloon (10-12 mm in diameter; Boston Scientific, Cork, Ireland). Lithotripsy was performed using an endoscopic lithotripter-compatible basket (Boston Scientific, Marlborough, MA, United States), and stones were removed using the basket and a balloon catheter. Samples of CBD stones were collected using an EndoRetrieval bag (Micro-Tech, Nanjing, China) and placed in a container for subsequent analysis. Occlusion cholangiography was performed by injecting contrast with a balloon catheter. Fluoroscopic assessment using a C-arm X-ray (OEC 9900 Elite, Salt Lake City, Utah, United States) was applied to confirm complete removal of CBD stones and the existence of stone residuals was determined by both endoscopist and radiologist. Confirmed stone residues by applying occlusion cholangiography were then followed by repeated extraction attempts. If complete stone removal was confirmed, a CBD clearance score was generated by further applying SpyGlass DS (Boston Scientific Corporation, Marlborough, Massachusetts, United States) examination. If the CBD clearance score was less than 5, the bile duct was irrigated intermittently with 50 mL of normal saline using a basket. The basket was shaken during irrigation with slight suction applied to the duodenoscope to promote drainage. 
The bile duct was then examined again using the SpyGlass DS to detect any residual stones/sludge and document the clearance score. If the clearance score was still less than 5, repeat irrigation of the CBD was performed using another 50 mL of normal saline. The final CBD clearance score was obtained one more time using SpyGlass DS examination. CBD stones that were not irrigated out after double 50 mL-saline irrigations continued to be irrigated with saline until they were fully cleared (Figure 3). Endoscopic nasobiliary drainage (ENBD) or biliary stenting was performed if necessary. The CBD clearance score was assessed by two blinded endoscopists who were masked to the treatment. Protocol of evaluation and irrigation procedures. CBD: Common bile duct; ERCP: Endoscopic retrograde cholangiopancreatography; POC: Peroral cholangioscopy. The composition of the removed CBD stones was analyzed using infrared ray (IR) spectroscopy. One milligram (mg) of stone samples was mixed with 150 mg of potassium bromide in a mortar and ground into powder. The stone mixture was pressed into a mold and then placed into an automatic IR spectrum analyzer for automated detection to determine the IR spectrogram of the CBD stones. The IR spectrogram showed characteristic absorption peaks at a wavelength of 1460 cm-1, indicating cholesterol-based stones, and those at a wavelength of 1680 cm-1 indicated pigment-based stones. Definition for complications: Most of the post-ERCP adverse events were detected within 24 h after the procedure. We routinely assessed these adverse events, including cholangitis, oozing/bleeding, pancreatitis, cholecystitis, and perforation, at 24 and 48 h after the ERCP procedure by symptoms, signs, laboratory tests, and imaging examinations if necessary[18,19]. Post-ERCP adverse events were defined as follows: (1) The diagnosis of acute cholangitis was based on Tokyo Guidelines 2018/2013 (TG18/TG13) diagnostic criteria[20]; (2) Oozing was defined as bleeding that was slight and stopped spontaneously; (3) Pancreatitis was described as new or worsening abdominal pain along with an increase in serum amylase levels (> 3 × higher than normal upper limit measured 24 h after surgery); (4) Cholecystitis was defined as pain in the epigastrium or RUQ accompanied by a positive Murphy sign, and abdominal ultrasonography showing a thickened gallbladder wall; and (5) Perforation was defined as sudden abdominal pain accompanied by retroperitoneal air and fluid. Mechanical lithotripsy was performed by the same assistant for all patients. Stones that were fragmented successfully with only one attempt were defined as soft stones. Stones that required multiple fragmentation attempts or needed to be broken again were defined as hard stones. Statistical analysis: The sample size calculation was based on the rate of residual stone clearance. According to a previous study[16] and our prior experience, we assumed a success rate of 84.5% for endoscopic extraction of CBD stones and an increase of up to 97% with saline irrigation. We estimated that 47 patients were needed in this analysis to obtain a power of 80% at the 5% level. Categorical variables are expressed as numbers or percentages (%). Continuous variables are presented as the mean ± SD or median and interquartile range, as appropriate. Continuous variables with a normal distribution were analyzed by paired t-test or signed-rank test. Categorical variables were analyzed using the chi-square test or Fisher’s exact test. 
Logistic regression was used to predict risk factors for complications, and the results are presented as odds ratios (ORs) with 95% confidence intervals (Cis). Variables with a P value of < 0.2 in the univariate analysis were included in the multivariate analysis. Linear mixed-effects models were conducted to assess the effect of saline irrigation on the clearance score with a random intercept for each patient and an unstructured covariance structure. A two-sided P value of less than 0.05 was considered statistically significant. The analyses were conducted with statistical software (SAS version 9.4, SAS Institute, Inc., NC, United States). RESULTS: The patients' average age was 61 ± 16.5 years, and 23 (48.9%) were male. Comorbidities included coronary disease in three (6.4%) patients, hypertension in 14 (29.8%), diabetes in two (4.3%), liver cirrhosis in three (2.1%), and portal hypertension in one (6.4%). The stone analysis showed that 17 (36.2%) cases had cholesterol-based stones, and 30 (63.8%) had pigment-based stones. Procedure-related adverse events occurred in 11 (23.4%) of 47 patients, with cholangitis in four patients, bleeding in two, pancreatitis in four, and cholecystitis in one. No mortality or perforation occurred. The mean time for ERCP was 40.3 ± 15.4 min (Table 1). Clinical and procedural characteristics of the patients Data are expressed as the mean ± SD, median (interquartile range) or n (%). ENBD: Endoscopic nasobiliary drainage; CBD: Common bile duct. After endoscopic stone extraction, occlusion cholangiography showed that there were no residual CBD stones. Seven patients had a clearance score of 4 identified by POC with the SpyGlass DS, and no patients had a score of 5 before saline irrigation. After irrigation with 50 mL saline, CBD clearance improved to 4 in 28 patients, but none achieved a score of 5. After a total of 100 mL of saline irrigation, there was further improvement in the clearance scores: 12 (26%) patients had a score of 4, and 32 (68%) had a score of 5 identified by the SpyGlass DS. The respective CBD stone clearance rates for the control (no irrigation), 50 mL saline irrigation, and 100 mL saline irrigation were 15%, 60%, and 94%, respectively, and the difference among them was statistically significant (P < 0.001) (Table 2). The CBD clearance score (mean ± SD) was 2.4 ± 1.1 in the control, 3.5 ± 0.7 in the 50 mL saline irrigation, and 4.6 ± 0.6 in the 100 mL saline irrigation (P < 0.001) (Supplementary Figure 1). Common bile duct clear score and stone clearance rate before and after saline irrigation After ERCP, 13 patients had a score of 1 (a large amount of residual stone/sludge) before saline irrigation. Only 15 patients did not reach a score of 5 (complete clearance) after 100 mL of saline irrigation. A dilated CBD diameter > 15 mm (OR = 4.93, 95%CI: 1.13-21.57; P = 0.034) was an independent risk factor for significant residual CBD stones (score = 1). Multivariate analysis revealed that CBD diameter > 15 mm (OR = 0.08, 95%CI: 0.01-0.49; P = 0.007) and the presence of periampullary diverticula (PAD) (OR = 6.51, 95%CI: 1.08-39.21; P = 0.041) were independent risk factors for failed CBD clearance despite irrigation with 100 mL saline (Table 3). Biliary cleanliness before and after saline irrigation, assessed by logistic regression analysis N: Total number; CBD: Common bile duct; PAD: Periampullary diverticula; CI: Confidence interval; OR: Odds ratio. 
Further analysis showed that the volume of saline used for irrigation[20] (P < 0.001) was an important factor determining CBD clearance (Supplementary Table 1). We used IR spectroscopy to analyze the components of stones in the CBD of all patients. IR spectroscopy analysis showed that the proportion of pigment-based stones was significantly higher in patients with PAD (93.3% vs 50.0%, P = 0.04), and these stones were mostly soft stones (77.4% vs 22.6%, P = 0.01) (Table 4). Correlation between the composition of stones and variables CBD: Common bile duct; PAD: Periampullary diverticula. DISCUSSION: Fluoroscopic cholangiographic imagery is currently the main method to determine the successful clearance of CBD stones[20]. However, studies involving the use of IDUS have identified residual biliary sludge within the bile duct despite the absence of filling defects on cholangiography. Mechanical lithotripsy produces a large number of stone fragments, and these minor residual CBD stones may lead to recurrent stone formation[21]. It has been reported that residual small CBD stone fragments can be flushed out of the bile duct using saline irrigation with a balloon catheter with a side hole[15]. A study reported that an average of 48 mL of saline could remove residual stones[16]. However, another study found that at least 100 mL of saline was needed to reduce residual stones[17]. Therefore, the effectiveness of saline irrigation on the clearance of residual bile duct stones after lithotripsy is not clear. This study's purpose differs from previous ones in that all of the patients had large stones and were post lithotripsy. In this study, we found that after ERCP and an additional round of lithotripsy were performed to remove CBD stones, POC using the SpyGlass DS showed that only 15% of the patients were relatively cleared of bile duct stones despite a negative occlusion cholangiogram. After irrigation with 50 mL of normal saline, 60% of the patients had relatively cleared bile ducts. After a total of 100 mL saline irrigation, 94% of the patients achieved complete clearance of the bile duct. The results showed that irrigation with 50 mL of saline was not enough to clear the bile duct of residual stones/sludge. In this series, all patients received mechanical lithotripsy for stone fragmentation, which generates substantial stone fragments/debris, thus increasing the difficulty of clearing the bile duct. Our results showed that a good CBD clearance rate could be achieved with a larger saline irrigation volume. The clearance rate of bile duct stones could be higher for those without lithotripsy. Residual stones have been found in approximately 1/3 of cases after stone retrieval using endoscopic ultrasound (EUS)[22,23]. In addition to stone detection, the EUS approach might also offer an alternative for treating bile duct stones. However, the use of EUS is very challenging[24-27]. IDUS has also been used to detect small residual stones in 14/59 patients (23.7%) with residual stones[11]. However, the ultrasonic probe is costly and can be easily damaged. In addition, the method is highly operator dependent and often produces poor quality images, and the presence of air in the bile duct can affect the detection of residual stones. Many studies have shown that POC has a high sensitivity in the detection of residual CBD stones, ranging from 25.3% to 34%, where residual stones are missed by standard cholangiography[8,9,28]. In our study, we used the SpyGlass DS system to determine the CBD clearance score. 
The SpyScope has 4-way tip deflection and is much smaller (10 French) than conventional choledochoscopes. More importantly, it has a separate and independent irrigation channel that allows intermittent or continuous irrigation of the biliary system. It also offers direct examination of the bile duct lumen and is more accurate in detecting and diagnosing residual bile duct stones/sludge. Residual CBD stones can lead to recurrent stone formation. Itoi et al[7] reported that PAD and intraoperative lithotripsy were closely related to residual stones (P < 0.05), as we observed in our study. We also found that a dilated CBD with a diameter > 15 mm was an independent risk factor for failed CBD clearance and a large number of residual stones. Despite irrigation with 100 mL saline, PAD and a dilated CBD remained independent risk factors for incomplete bile duct clearance. This may be due to the presence of an air-filled duodenum/diverticulum that compresses the distal CBD, making it difficult to wash away the residual stones/sludge[29,30]. CBD clearance can be improved by increasing the volume of saline irrigation. Our results showed that PAD not only increased the difficulty of bile duct clearance but also affected stone composition. We found that pigment-based stones constituted the majority of CBD stones in our patients with PAD (P = 0.004). Song et al[31] reported that recurrent bile duct stones were more likely to be brown pigmented stones than cholesterol-based stones. This may be because pigment-based stones are often related to bacterial infection, and the formation of biliary sludge contributes to recurrent stone formation over time[32-34]. The brown pigmented stones that form as a result of bacterial infection are soft, fragment easily, and generate abundant debris. Our experience showed that soft stones/sludge require more saline irrigation to clear the bile duct. In this study, residual biliary sludge was still found in 26% of cases even after 100 mL of saline irrigation. If an effective bile duct cleaning device is used to remove biliary sludge, it may reduce the CBD stone recurrence rate after ERCP. Without adequate biliary drainage, saline irrigation can increase the intraductal pressure, causing cholangitis because of contaminated bile[35]. Proper drainage methods can mitigate this risk[36,37]. No serious adverse events were reported in our study except for cholangitis, possibly secondary to saline irrigation, in 10% of patients. Intermittent irrigation and endoscopic suction to promote biliary drainage may lower this risk, in addition to antibiotic coverage. Differences between this study and previous studies include the following: We studied the effect of flushing with saline after lithotripsy; our evaluation method was different from that of previous studies (we used SpyGlass DS examination); and we performed a stone component analysis related to diverticula. Our study has several methodological advantages. First, it was a prospective study using a self-controlled design, and SpyGlass DS was used to assess the presence of residual stones in the bile ducts. All of the patients showed no stones by imaging even though small residual bile duct stones were observed by SpyGlass DS, indicating that the evaluation of residual bile duct stones using SpyGlass DS is more sensitive than imaging alone. Because residual stones were still visible in many patients after 50 mL of saline, it would have been unethical not to continue flushing to 100 mL. 
Second, this study used objective definitions to assess residual stones identified by POC. Third, this study included objective stone composition analysis using IR spectroscopy. The study has limitations. This study was not a randomized controlled trial. The use of direct cholangioscopy by the SpyGlass DS increases the procedure time with additional cost. Therefore, we do not recommend routine use of SpyGlass DS to rule out residual stones. CONCLUSION: In conclusion, in patients with large CBD stones undergoing mechanical lithotripsy, routine irrigation of the bile duct with at least 100 mL of saline is recommended to improve CBD clearance, especially for patients with a dilated bile duct and/or those with PAD. Saline irrigation is simple, inexpensive, and easy to perform to improve bile duct clearance, and it may avoid recurrent stone formation.
Background: A previous study showed that irrigation with 100 mL saline reduced residual common bile duct (CBD) stones, which potentially cause recurrent stones after endoscopic retrograde cholangiopancreatography. Methods: This prospective self-controlled study enrolled patients receiving mechanical lithotripsy for large (> 1.2 cm) CBD stones. After occlusion cholangiography confirmed CBD stone clearance, peroral cholangioscopy (POC) was performed to determine clearance scores based on the number of residual stones. The amounts of residual stones spotted via POC were graded on a 5-point scale (score 1, worst; score 5, best). Scores were documented after only stone removal (control) and after irrigation with 50 mL and 100 mL saline, respectively. The stone composition was analyzed using infrared spectroscopy. Results: Between October 2018 and January 2020, 47 patients had CBD clearance scores of 2.4 ± 1.1 without saline irrigation, 3.5 ± 0.7 with 50 mL irrigation, and 4.6 ± 0.6 with 100 mL irrigation (P < 0.001). Multivariate analysis showed that CBD diameter > 15 mm [odds ratio (OR) = 0.08, 95% confidence interval (CI): 0.01-0.49; P = 0.007] and periampullary diverticula (PAD) (OR = 6.51, 95%CI: 1.08-39.21; P = 0.041) were independent risk factors for residual stones. Bilirubin pigment stones constituted the main residual stones found in patients with PAD (P = 0.004). Conclusions: Irrigation with 100 mL of saline may not clear all residual CBD stones after lithotripsy, especially in patients with PAD and/or a dilated (> 15 mm) CBD. Pigment residual stones are soft and commonly found in patients with PAD. Additional saline irrigation may be required to remove retained stones.
INTRODUCTION: Endoscopic retrograde cholangiopancreatography (ERCP) is an effective and relatively minimally invasive technique for common bile duct (CBD) stones[1-3]. It has been reported that the recurrence rate of CBD stones after ERCP has increased from 4% to 24%[4-6]. The incidence of residual stones after mechanical lithotripsy for intractable CBD stones is 24% to 40%[7-10]. A growing number of studies suggest that an important reason for the recurrence of bile duct stones is the presence of stone debris after lithotripsy[11-13]. During ERCP, occluded cholangiography (OC) is often performed after stone removal to determine whether the stones have been removed completely, but OC lacks accuracy. Even if no obvious stones are found on cholangiography, the presence of contrast can obscure small stone debris in the bile duct[8,14]. Complete bile duct clearance is necessary to decrease recurrent bile duct stones. Some studies[15-17] reported that irrigation of the bile duct with saline after stone extraction further improves the clearance of the bile duct and has the advantages of being a simple, low-cost procedure with rare complications. Ang et al[16] showed that a mean of 48 mL of saline solution could irrigate and flush out residual stones after the endoscopic removal of CBD stones. Ahn et al[17] found that irrigation with 100 mL of saline can flush residual stone fragments from the bile duct into the duodenum after stone extraction. Although intraductal ultrasound (IDUS) has high sensitivity and accuracy in diagnosing bile duct stones/debris[11,16], this modality yields only indirect images of the debris. The lack of direct evidence to support the efficacy of saline irrigation after lithotripsy prompted us to use peroral cholangioscopy (POC) to examine the bile duct and detect any residual stones/debris. The findings after stone extraction without irrigation, confirmed by OC, were compared with the effectiveness of irrigation with 50 mL or 100 mL of saline. To evaluate whether irrigation with 100 mL of saline is more effective in achieving complete clearance of the bile duct after mechanical lithotripsy, we conducted this prospective self-controlled study. CONCLUSION: We thank the clinical and research teams at the Department of General Surgery for providing ongoing support.
Background: A previous study showed that irrigation with 100 mL saline reduced residual common bile duct (CBD) stones, which potentially cause recurrent stones after endoscopic retrograde cholangiopancreatography. Methods: This prospective self-controlled study enrolled patients receiving mechanical lithotripsy for large (> 1.2 cm) CBD stones. After occlusion cholangiography confirmed CBD stone clearance, peroral cholangioscopy (POC) was performed to determine clearance scores based on the number of residual stones. The amounts of residual stones spotted via POC were graded on a 5-point scale (score 1, worst; score 5, best). Scores were documented after only stone removal (control) and after irrigation with 50 mL and 100 mL saline, respectively. The stone composition was analyzed using infrared spectroscopy. Results: Between October 2018 and January 2020, 47 patients had CBD clearance scores of 2.4 ± 1.1 without saline irrigation, 3.5 ± 0.7 with 50 mL irrigation, and 4.6 ± 0.6 with 100 mL irrigation (P < 0.001). Multivariate analysis showed that CBD diameter > 15 mm [odds ratio (OR) = 0.08, 95% confidence interval (CI): 0.01-0.49; P = 0.007] and periampullary diverticula (PAD) (OR = 6.51, 95%CI: 1.08-39.21; P = 0.041) were independent risk factors for residual stones. Bilirubin pigment stones constituted the main residual stones found in patients with PAD (P = 0.004). Conclusions: Irrigation with 100 mL of saline may not clear all residual CBD stones after lithotripsy, especially in patients with PAD and/or a dilated (> 15 mm) CBD. Pigment residual stones are soft and commonly found in patients with PAD. Additional saline irrigation may be required to remove retained stones.
7,280
332
[ 397, 467, 625, 239, 259, 745, 1236, 71 ]
9
[ "stones", "cbd", "stone", "duct", "bile", "saline", "irrigation", "bile duct", "clearance", "score" ]
[ "difficulty bile duct", "improve bile duct", "bile duct study", "stones found cholangiography", "residual stones bile" ]
null
[CONTENT] Endoscopic retrograde cholangiopancreatography | Common bile duct gall stones | Peroral cholangioscopy | Saline irrigation | Periampullary diverticula | Prospective cohort study [SUMMARY]
[CONTENT] Endoscopic retrograde cholangiopancreatography | Common bile duct gall stones | Peroral cholangioscopy | Saline irrigation | Periampullary diverticula | Prospective cohort study [SUMMARY]
null
[CONTENT] Endoscopic retrograde cholangiopancreatography | Common bile duct gall stones | Peroral cholangioscopy | Saline irrigation | Periampullary diverticula | Prospective cohort study [SUMMARY]
[CONTENT] Endoscopic retrograde cholangiopancreatography | Common bile duct gall stones | Peroral cholangioscopy | Saline irrigation | Periampullary diverticula | Prospective cohort study [SUMMARY]
[CONTENT] Endoscopic retrograde cholangiopancreatography | Common bile duct gall stones | Peroral cholangioscopy | Saline irrigation | Periampullary diverticula | Prospective cohort study [SUMMARY]
[CONTENT] Cholangiopancreatography, Endoscopic Retrograde | Common Bile Duct | Gallstones | Humans | Lithotripsy | Prospective Studies [SUMMARY]
[CONTENT] Cholangiopancreatography, Endoscopic Retrograde | Common Bile Duct | Gallstones | Humans | Lithotripsy | Prospective Studies [SUMMARY]
null
[CONTENT] Cholangiopancreatography, Endoscopic Retrograde | Common Bile Duct | Gallstones | Humans | Lithotripsy | Prospective Studies [SUMMARY]
[CONTENT] Cholangiopancreatography, Endoscopic Retrograde | Common Bile Duct | Gallstones | Humans | Lithotripsy | Prospective Studies [SUMMARY]
[CONTENT] Cholangiopancreatography, Endoscopic Retrograde | Common Bile Duct | Gallstones | Humans | Lithotripsy | Prospective Studies [SUMMARY]
[CONTENT] difficulty bile duct | improve bile duct | bile duct study | stones found cholangiography | residual stones bile [SUMMARY]
[CONTENT] difficulty bile duct | improve bile duct | bile duct study | stones found cholangiography | residual stones bile [SUMMARY]
null
[CONTENT] difficulty bile duct | improve bile duct | bile duct study | stones found cholangiography | residual stones bile [SUMMARY]
[CONTENT] difficulty bile duct | improve bile duct | bile duct study | stones found cholangiography | residual stones bile [SUMMARY]
[CONTENT] difficulty bile duct | improve bile duct | bile duct study | stones found cholangiography | residual stones bile [SUMMARY]
[CONTENT] stones | cbd | stone | duct | bile | saline | irrigation | bile duct | clearance | score [SUMMARY]
[CONTENT] stones | cbd | stone | duct | bile | saline | irrigation | bile duct | clearance | score [SUMMARY]
null
[CONTENT] stones | cbd | stone | duct | bile | saline | irrigation | bile duct | clearance | score [SUMMARY]
[CONTENT] stones | cbd | stone | duct | bile | saline | irrigation | bile duct | clearance | score [SUMMARY]
[CONTENT] stones | cbd | stone | duct | bile | saline | irrigation | bile duct | clearance | score [SUMMARY]
[CONTENT] bile duct | bile | duct | debris | stones | stone | oc | irrigation | saline | ml [SUMMARY]
[CONTENT] stones | biliary sludge | cbd | biliary | stone | score | sludge | performed | ercp | stone fragments biliary [SUMMARY]
null
[CONTENT] improve | bile duct | duct | bile | large cbd | clearance avoid | especially patients dilated | especially patients dilated bile | improve bile | improve bile duct [SUMMARY]
[CONTENT] stones | cbd | stone | duct | bile | saline | bile duct | irrigation | clearance | score [SUMMARY]
[CONTENT] stones | cbd | stone | duct | bile | saline | bile duct | irrigation | clearance | score [SUMMARY]
[CONTENT] 100 [SUMMARY]
[CONTENT] 1.2 cm ||| ||| 5 | 1 | 5 ||| 50 mL and | 100 mL saline ||| [SUMMARY]
null
[CONTENT] 100 mL | PAD | 15 mm ||| PAD ||| [SUMMARY]
[CONTENT] 100 ||| 1.2 cm ||| ||| 5 | 1 | 5 ||| 50 mL and | 100 mL saline ||| ||| Between October 2018 and January 2020 | 47 | 2.4 ± | 3.5 | 0.7 | 50 | 4.6 | 100 ||| ||| 15 mm ||| 0.08 | 95% | CI | 0.01 | 0.007 | PAD | 6.51 | 1.08-39.21 | P = | 0.041 ||| PAD | 0.004 ||| 100 mL | PAD | 15 mm ||| PAD ||| [SUMMARY]
[CONTENT] 100 ||| 1.2 cm ||| ||| 5 | 1 | 5 ||| 50 mL and | 100 mL saline ||| ||| Between October 2018 and January 2020 | 47 | 2.4 ± | 3.5 | 0.7 | 50 | 4.6 | 100 ||| ||| 15 mm ||| 0.08 | 95% | CI | 0.01 | 0.007 | PAD | 6.51 | 1.08-39.21 | P = | 0.041 ||| PAD | 0.004 ||| 100 mL | PAD | 15 mm ||| PAD ||| [SUMMARY]
The association of interferon-alpha with development of collateral circulation after artery occlusion.
34599832
Previous studies have demonstrated that interferon (IFN) signaling is enhanced in patients with poor collateral circulation (CC). However, the role and mechanisms of IFN-alpha in the development of CC remain unknown.
BACKGROUND
We studied the serum levels of IFN-alpha and coronary CC in a case-control study using logistic regression, including 114 coronary chronic total occlusion (CTO) patients with good coronary CC and 94 CTO patients with poor coronary CC. Restricted cubic splines were used to flexibly model the association of the levels of IFN-alpha with the incidence of good CC. Perfusion restoration after systemic treatment with IFN-alpha was assessed in a mouse hind-limb ischemia model.
METHODS
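The methods above mention restricted cubic spline modeling of IFN-alpha against the incidence of good CC; the knot placement is specified later, in the statistical-analysis section. For readers unfamiliar with the technique, the following NumPy sketch builds a standard Harrell-type restricted cubic spline basis. The predictor vector, knot percentiles, and omitted scaling factor are illustrative assumptions, and the original analysis was performed in R, not Python.

```python
import numpy as np

def rcs_basis(x, knots):
    """Harrell-type restricted cubic spline basis (linear in the tails).

    Returns an (n, k-1) matrix: the linear term plus k-2 nonlinear terms
    for knots t_1 < ... < t_k. (A common variant also divides the
    nonlinear terms by (t_k - t_1)**2 for numerical scaling.)
    """
    x = np.asarray(x, dtype=float)
    t = np.sort(np.asarray(knots, dtype=float))

    def pos_cube(u):
        return np.maximum(u, 0.0) ** 3

    denom = t[-1] - t[-2]
    cols = [x]
    for j in range(len(t) - 2):
        cols.append(pos_cube(x - t[j])
                    - pos_cube(x - t[-2]) * (t[-1] - t[j]) / denom
                    + pos_cube(x - t[-1]) * (t[-2] - t[j]) / denom)
    return np.column_stack(cols)

# Illustration: five knots at the 5th, 35th, 50th, 65th, and 95th
# percentiles of a simulated IFN-alpha vector (placeholder data).
rng = np.random.default_rng(0)
ifn_alpha = rng.gamma(shape=4.0, scale=20.0, size=200)
knots = np.percentile(ifn_alpha, [5, 35, 50, 65, 95])
X_spline = rcs_basis(ifn_alpha, knots)  # columns to feed into a logistic model
```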
Compared with the first IFN-alpha tertile, the risk of poor CC was higher in the third IFN-alpha tertile (OR: 4.79, 95% CI: 2.22-10.4, p < .001). A cubic spline-smoothing curve showed that the risk of poor CC increased with increasing levels of serum IFN-alpha. IFN-alpha inhibited the development of CC in a hindlimb ischemia model. Arterioles of CC in the IFN-alpha group were smaller in diameter than in the control group.
RESULTS
Patients with CTO and with poor CC have higher serum levels of IFN-alpha than CTO patients with good CC. IFN-alpha might impair the development of CC after artery occlusion.
CONCLUSION
[ "Animals", "Arteries", "Case-Control Studies", "Chronic Disease", "Collateral Circulation", "Coronary Angiography", "Coronary Circulation", "Coronary Occlusion", "Humans", "Interferon-alpha", "Mice", "Percutaneous Coronary Intervention" ]
8571556
BACKGROUND
Development of collateral circulation (CC), also termed arteriogenesis, is a natural life-preservation mechanism occurring in patients with coronary chronic total occlusion (CTO). Well-developed CC preserves cardiac function, thus reducing cardiac mortality after coronary occlusion. 1 , 2 Patients with CTO show a great deal of heterogeneity in their arteriogenic responses to artery occlusion. This is affected by multiple factors, including inflammatory factors. 3 The interaction between inflammation and arteriogenesis may be worthy of study, but data on the correlation between arteriogenesis and inflammation are limited. A previous study found that interferon (IFN) signaling in monocytes is enhanced and stimulated by lipopolysaccharide among patients with impaired coronary CC. 4 Furthermore, blocking the IFN receptor has been found to be helpful in alleviating ischemia in mice. 5 However, whether patients with poor CC have higher levels of serum IFN, or whether treatment with IFN inhibits the development of arteriogenesis, remains unknown. IFN-alpha is widely used in treating hepatitis B and C, as well as various cancers. IFN-alpha might affect the development of CC in patients with CTO. In this study, we aimed to investigate whether patients with poor CC have higher levels of IFN-alpha, and if so, whether IFN-alpha inhibits the development of CC.
null
null
RESULTS
Between January 2018 and June 2019, 208 patients with CTO were included in our study as required by the inclusion and exclusion criteria (Figure S1). The baseline characteristics of the included patients are presented in Table 1. No significant difference between groups was observed except ejection fraction and the levels of serum IFN‐alpha. In patients with poor CC, serum INF‐alpha levels were significantly higher (87.7 ± 42.0 vs. 59.8 ± 18.5, p < .001), and ejection fraction was significantly lower (51.2 ± 10.5 vs. 59.8 ± 18.5, p = .04) than in patients with good CC. Baseline characteristics of included patients Abbreviation: CC, collateral circulation; CRP, C‐reactive protein; LAD, left anterior descending; LCX, left circumflex; LDL, low density lipoprotein; PLT, platelet count; RCA, right coronary artery; WBC, white blood cells. ANOVA showed that the serum levels of IFN‐alpha were significantly associated with CC, as did the Rentrop score (Figure 1). In a multivariate regression analysis, the risk of poor CC was increased with increasing IFN‐alpha tertile. Compared with the first tertile, the risk of poor CC was 3.67 (95% CI: 1.18–7.43) for model 1, 4.37 (95% CI: 2.06–9.26) for model 2, and 4.79 (95% CI: 2.22–10.4) for model 3 (Table 2). After adjusting for model 3, the cubic spline‐smoothing curve shows that the risk of poor CC increased with increasing serum IFN‐alpha levels (Figure 2). The incidence of poor CC increased with the serum levels of IFN‐alpha when the serum levels of IFN‐alpha were less than 112 pg/mL (OR: 1.0348, 95% CI: 1.0195–1.0577, p < .001, Table 3). When the levels of IFN‐alpha were larger than 112 pg/mL, the risk of poor CC did not increase with increasing levels of IFN‐alpha (OR: 1.0036, 95% CI: 0.9748–1.033, p = .807, Table 3). Our results remained robust in sensitivity analyses when we excluded patients aged >70 years or <55 years, as well as when using the fourth tertiles for the levels of serum IFN‐alpha (Tables S1–S3). Correlation between the serum levels of IFN‐alpha and Rentrop scores. The levels in patients with Rentrop scores 0 and 1, or Rentrop 2 and 3, were not significantly different. The serum levels of IFN‐alpha in CTO patients with Rentrop scores of 2 and 3 were significantly different from those with Rentrop scores of 0 and 1 OR (95% CI) of poor CC according to tertiles of the levels of serum IFN‐alpha Note: Model 1: unadjusted; model 2: adjusted age, sex, hypertension, hyperlipidemia, smoking; model 3: adjusted age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP. Relationship between the serum levels of IFN‐alpha and CC. A nonlinear relationship was observed between the serum levels of IFN‐alpha and CC after adjusting for model 3. Threshold effect analysis found that when serum levels of IFN‐alpha were larger than 112 pg/mL, the risks of poor CC did not keep on increasing Threshold effect analysis of the serum levels of IFN‐alpha on poor CC Note: When the levels of IFN‐alpha larger than 112 pg/mL, the incidence of poor CC did not increase with the increase of the levels of IFN‐alpha. Adjusted for model 3. Abbreviation: CC, coronary collateral circulation. To assess the in vivo effect of IFN‐alpha on CC, we examined whether intraperitoneal injection of IFN‐alpha at 2000 IU/d inhibited blood perfusion recovery in a mouse hind‐limb. Laser speckle imaging showed that blood flow recovery was significantly impaired in mice treated with IFN‐alpha compared with control mice at 1 week (41.7% ± 2.66% vs. 
47.3% ± 2.16%, p = 0.020), at 14 days (59.8% ± 3.31% vs. 66.5% ± 3.21%, p = .005) and at 21 days (59.8% ± 3.31% vs. 66.5% ± 3.21%, p = .005, Figure 3A,B). Although the numbers of arterioles are comparable between groups, the diameter of the arterioles in the mice treated with IFN‐alpha was smaller compared with the control group (Figure 4A). In order to study local ischemia, we examined the morphology in the gastrocnemius muscle by HE staining and Masson staining. Although we did not find obvious morphological changes by HE analysis (Figure S2), the area of interstitial fibrosis evaluated by Masson staining in the mice treated with IFN‐alpha was larger than in the control group (Figure 4B). INF‐alpha impairs the development of CC in a murine hindlimb ischemic model. (A) Representative laser speckle perfusion images. (B) Quantification of laser speckle perfusion (ischemic/nonischemic) in control and IFN‐alpha treated mice over time Immunohistochemistry of SMA in cross‐sections of the adductor muscles or Masson staining of gastrocnemius muscle collected from the control and IFN‐alpha group 3 weeks after femoral artery ligation. (A) Representative immunohistochemistry and quantification of CC numbers and diameter of adductor muscles. (B) Representative Masson staining and quantification of interstitial fibrosis of gastrocnemius muscle
CONCLUSION
CTO patients with poor CC have higher serum levels of IFN‐alpha than CTO patients with good CC. IFN‐alpha might impair the development of CC after artery occlusion.
[ "Study population", "Ischemic hind‐limb model and laser speckle imaging", "Histological analyses", "Statistical analyses", "Limitation", "AUTHOR CONTRIBUTIONS" ]
[ "This study was conducted in accordance with the Declaration of Helsinki and with written informed consent of each participant. This study was approved by the medical ethics committee of Second Xiangya Hospital of Central South University. Patients undergoing coronary angiography at our catheter laboratory between January 2018 and June 2019 and with at least one major coronary artery total occlusion were enrolled in our study. The exclusion criteria were as follows: (1) patients with ST‐segment elevated myocardial infarction/non‐ST‐segment elevated myocardial infarction, (2) patients with severe cardiac dysfunction/cardiac shock (HYHA IV or Killip IV), (3) patients with coronary artery bypass grafting, (4) patients with malignant tumor, (5) patients with inflammatory or infectious diseases, and (6) patients with severe hepatic (Child class C) and renal dysfunction (glomerular filtration rate <15 mL/min/1.73 m2 or needing hemodialysis). Pei JY and Wang XP were blinded to the characteristics of the included patients. They reviewed the angiography results and classified the extent of coronary circulation by the Rentrop classification from 0 to 3 as follows: 0 = none, 1 = filling of side branches of the artery dilated via collateral channels without visualization of the epicardial segment, 2 = partial filling of the epicardial segment via collateral channels, and 3 = complete filling of the epicardial segment of the artery being dilated via collateral channels.\n6\n Rentrop 0–1 were classified as poor CC development; Rentrop 2–3 were classified as good CC development. Venous blood samples were collected immediately before coronary angiography. Serum was separated at 4°C, and the levels of IFN‐alpha were measured by ELISA (BMS216, Invitrogen).", "The animal protocol was approved by the animal ethics committee of Second Xiangya Hospital of Central South University. All animal works were performed in Second Xiangya Hospital of Central South University. Male C57/BL mice were anesthetized with 3% isoflurane. The left femoral artery was ligated and excised as described previously.\n7\n The right leg underwent sham operation and was used as control. The mice were randomly divided into two groups: an INF‐alpha group with intraperitoneal injection of 2000 IU IFN‐alpha, and a control group with intraperitoneal injection of the same amount of saline. The IFN‐alpha concentration used is comparable to dosages used in patients. Hind‐limb prefusion was measured by laser speckle imaging under temperature‐controlled conditions, before and after left hind‐limb ligation, as well as at 3 days, 1 week, and 3 weeks following ligation. Color‐coded images of the paws representing the flux value were used to calculate the ratio of the tissue perfusion of the occluded (left) to the Sham‐operated (Sham) paw. The right hind‐limb was used as reference. All mice were put to death by cervical dislocation.", "Adductor muscles from bilateral hind limbs were harvested at 3 weeks after surgery and underwent immediate tissue fixation overnight. For mouse arteriole density identification, the adductor muscles were stained with alpha‐SMA monoclonal antibody at 3 weeks after ligation.", "We presented baseline characteristics of patients as frequencies and percentages for categorical variables and as means and standard deviations or interquartile range for continuous variables, depending on whether data distribution was normal (assessed by normal Q‐Q plots). 
We compared categorical variables using chi‐square analysis, and continuous variables were compared by analysis of variance test or Mann–Whitney U test, according to distribution type. We evaluated the relationship between serum levels of IFN‐alpha and CC development with the use of the levels of IFN‐alpha as both a continuous and a categorical variable. We constructed logistic regression model to calculate the odds ratio (OR) among tertiles. We used the first tertile as reference and used the following three models: model 1: unadjusted; model 2: adjusted age, sex, hypertension, hyperlipidemia, smoking; model 3: adjusted age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP. We also used a two‐piecewise linear regression model to examine the threshold effect of the levels of IFN‐alpha on the risk of poor CC according to the smoothing plot. The threshold value was determined using a trial method which was to move the trial turning point along the predefined interval and picked up the one which gave maximum model likelihood. A log likelihood ratio test was conducted comparing one‐line linear regression model with two‐piecewise linear model. We further used restricted cubic splines with five knots at the 5th, 35th, 50th, 65th, and 95th centiles to flexibly model the association of the levels of IFN‐alpha with the incidence of good CC adjusted for model 3 where the threshold value served as the reference.\nWe also did several sensitivity analyses by excluding patients with age <55 years or >70 years or by using the quantiles for the levels of serum IFN‐alpha. We performed all of the analyses using R version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria).", "First, the main limitation of present study is the small number of patients with CTO. However, the sensitivity analysis showed the robustness of our results. Second, our study did not include all inflammatory and angiogenic factors, they may have a role in the development of CC. Third, we evaluated the extent of CC by Rentrop classification instead of CFI. Although collateral is more accurate than Rentrop classification, it is an invasive procedure needing pressure wire. Rentrop classification is more often used in routine clinical practice. Fourth, we evaluated the development of CC in a model of hind‐limb ischemia rather than heart ischemia. However, both development of lower limb CC and coronary CC share common cellular and molecular mechanism: arteriogenesis. Ischemic hind‐limb model is an ideal model for arteriogenesis including coronary circulation.\n4\n, \n5\n Ischemic hind‐limb model mimics aspects of human occlusive artery disease to investigate vascular regeneration and to test therapeutical approaches in a reproducible manner.", "Xinqun Hu and Zhenhua Xing designed the study and provided methodological expertise. Zhenhua Xing drafted the manuscript. Zhenhua Xing performed the case–control study. Xiaopu Wang, Junyu Pei, Zhaowei Zhu, Shi Tai performed the animal experiments. All authors have read, provided critical feedback on, and approved the final manuscript." ]
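The threshold ("two-piecewise") analysis described in the statistical-analysis text above, which moves a trial turning point along a predefined interval, keeps the value that maximizes the model likelihood, and compares the two-piecewise fit against the one-line model with a log-likelihood ratio test, can be sketched as follows. The original analysis was done in R; this Python/statsmodels version uses hypothetical file and column names (cto_cohort.csv, ifn_alpha, poor_cc, and the model-3 covariates) purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

df = pd.read_csv("cto_cohort.csv")  # hypothetical analysis table
covars = "age + sex + hypertension + hyperlipidemia + smoking + uric_acid + crp"

# One-line (single-slope) logistic model for poor CC.
m0 = smf.logit(f"poor_cc ~ ifn_alpha + {covars}", data=df).fit(disp=0)

# Move a trial turning point across the central range of IFN-alpha and
# keep the value that maximizes the two-piecewise model likelihood.
grid = np.percentile(df["ifn_alpha"], np.arange(5, 96))
best_llf, best_threshold = -np.inf, None
for t in grid:
    work = df.assign(seg_low=np.minimum(df["ifn_alpha"], t),
                     seg_high=np.maximum(df["ifn_alpha"] - t, 0.0))
    m1 = smf.logit(f"poor_cc ~ seg_low + seg_high + {covars}",
                   data=work).fit(disp=0)
    if m1.llf > best_llf:
        best_llf, best_threshold = m1.llf, t

# Log-likelihood ratio test of the two-piecewise model against the
# one-line model. (The paper reports a 1-df test; strictly, searching
# for the turning point makes this reference distribution optimistic.)
lr_stat = 2.0 * (best_llf - m0.llf)
p_value = chi2.sf(lr_stat, df=1)
print(f"estimated threshold ~ {best_threshold:.1f} pg/mL, LRT P = {p_value:.3g}")
```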
[ null, null, null, null, null, null ]
[ "BACKGROUND", "MATERIALS AND METHODS", "Study population", "Ischemic hind‐limb model and laser speckle imaging", "Histological analyses", "Statistical analyses", "RESULTS", "DISCUSSION", "Limitation", "CONCLUSION", "CONFLICTS OF INTEREST", "AUTHOR CONTRIBUTIONS", "Supporting information" ]
[ "Development of collateral circulation (CC), also termed as arteriogenesis, is a natural life‐preservation mechanism occurring in patients with coronary chronic total occlusion (CTO). Well‐developed CC preserves cardiac function, thus reducing cardiac mortality after coronary occlusion.\n1\n, \n2\n Patients with CTO show a great deal of heterogeneity in their arteriogenic responses to artery occlusion. This is affected by multiple factors, including inflammatory factors.\n3\n The interaction between inflammation and arteriogenesis may be worthy of study, but data on the correlation between arteriogenesis and inflammation are limited. A previous study has found that interferon (IFN) signaling in monocytes is enhanced and stimulated by lipopolysaccharide among patients with impaired coronary CC.\n4\n Furthermore, blocking the receptor of IFN have been found to be helpful in alleviating ischemia in mice.\n5\n However, whether patients with poor CC have higher levels of serum IFN or whether treatment with IFN inhibits the development of arteriogenesis remains unknown. IFN‐alpha is widely used in treating hepatitis B and C, as well as various cancers. IFN‐alpha might affect the development of CC in patients with CTO. In this study, we aimed to investigate whether patients with poor CC have higher levels of IFN‐alpha, and if so, whether IFN‐alpha inhibits the development of CC.", "Study population This study was conducted in accordance with the Declaration of Helsinki and with written informed consent of each participant. This study was approved by the medical ethics committee of Second Xiangya Hospital of Central South University. Patients undergoing coronary angiography at our catheter laboratory between January 2018 and June 2019 and with at least one major coronary artery total occlusion were enrolled in our study. The exclusion criteria were as follows: (1) patients with ST‐segment elevated myocardial infarction/non‐ST‐segment elevated myocardial infarction, (2) patients with severe cardiac dysfunction/cardiac shock (HYHA IV or Killip IV), (3) patients with coronary artery bypass grafting, (4) patients with malignant tumor, (5) patients with inflammatory or infectious diseases, and (6) patients with severe hepatic (Child class C) and renal dysfunction (glomerular filtration rate <15 mL/min/1.73 m2 or needing hemodialysis). Pei JY and Wang XP were blinded to the characteristics of the included patients. They reviewed the angiography results and classified the extent of coronary circulation by the Rentrop classification from 0 to 3 as follows: 0 = none, 1 = filling of side branches of the artery dilated via collateral channels without visualization of the epicardial segment, 2 = partial filling of the epicardial segment via collateral channels, and 3 = complete filling of the epicardial segment of the artery being dilated via collateral channels.\n6\n Rentrop 0–1 were classified as poor CC development; Rentrop 2–3 were classified as good CC development. Venous blood samples were collected immediately before coronary angiography. Serum was separated at 4°C, and the levels of IFN‐alpha were measured by ELISA (BMS216, Invitrogen).\nThis study was conducted in accordance with the Declaration of Helsinki and with written informed consent of each participant. This study was approved by the medical ethics committee of Second Xiangya Hospital of Central South University. 
Patients undergoing coronary angiography at our catheter laboratory between January 2018 and June 2019 and with at least one major coronary artery total occlusion were enrolled in our study. The exclusion criteria were as follows: (1) patients with ST‐segment elevated myocardial infarction/non‐ST‐segment elevated myocardial infarction, (2) patients with severe cardiac dysfunction/cardiac shock (HYHA IV or Killip IV), (3) patients with coronary artery bypass grafting, (4) patients with malignant tumor, (5) patients with inflammatory or infectious diseases, and (6) patients with severe hepatic (Child class C) and renal dysfunction (glomerular filtration rate <15 mL/min/1.73 m2 or needing hemodialysis). Pei JY and Wang XP were blinded to the characteristics of the included patients. They reviewed the angiography results and classified the extent of coronary circulation by the Rentrop classification from 0 to 3 as follows: 0 = none, 1 = filling of side branches of the artery dilated via collateral channels without visualization of the epicardial segment, 2 = partial filling of the epicardial segment via collateral channels, and 3 = complete filling of the epicardial segment of the artery being dilated via collateral channels.\n6\n Rentrop 0–1 were classified as poor CC development; Rentrop 2–3 were classified as good CC development. Venous blood samples were collected immediately before coronary angiography. Serum was separated at 4°C, and the levels of IFN‐alpha were measured by ELISA (BMS216, Invitrogen).\nIschemic hind‐limb model and laser speckle imaging The animal protocol was approved by the animal ethics committee of Second Xiangya Hospital of Central South University. All animal works were performed in Second Xiangya Hospital of Central South University. Male C57/BL mice were anesthetized with 3% isoflurane. The left femoral artery was ligated and excised as described previously.\n7\n The right leg underwent sham operation and was used as control. The mice were randomly divided into two groups: an INF‐alpha group with intraperitoneal injection of 2000 IU IFN‐alpha, and a control group with intraperitoneal injection of the same amount of saline. The IFN‐alpha concentration used is comparable to dosages used in patients. Hind‐limb prefusion was measured by laser speckle imaging under temperature‐controlled conditions, before and after left hind‐limb ligation, as well as at 3 days, 1 week, and 3 weeks following ligation. Color‐coded images of the paws representing the flux value were used to calculate the ratio of the tissue perfusion of the occluded (left) to the Sham‐operated (Sham) paw. The right hind‐limb was used as reference. All mice were put to death by cervical dislocation.\nThe animal protocol was approved by the animal ethics committee of Second Xiangya Hospital of Central South University. All animal works were performed in Second Xiangya Hospital of Central South University. Male C57/BL mice were anesthetized with 3% isoflurane. The left femoral artery was ligated and excised as described previously.\n7\n The right leg underwent sham operation and was used as control. The mice were randomly divided into two groups: an INF‐alpha group with intraperitoneal injection of 2000 IU IFN‐alpha, and a control group with intraperitoneal injection of the same amount of saline. The IFN‐alpha concentration used is comparable to dosages used in patients. 
Hind‐limb prefusion was measured by laser speckle imaging under temperature‐controlled conditions, before and after left hind‐limb ligation, as well as at 3 days, 1 week, and 3 weeks following ligation. Color‐coded images of the paws representing the flux value were used to calculate the ratio of the tissue perfusion of the occluded (left) to the Sham‐operated (Sham) paw. The right hind‐limb was used as reference. All mice were put to death by cervical dislocation.\nHistological analyses Adductor muscles from bilateral hind limbs were harvested at 3 weeks after surgery and underwent immediate tissue fixation overnight. For mouse arteriole density identification, the adductor muscles were stained with alpha‐SMA monoclonal antibody at 3 weeks after ligation.\nAdductor muscles from bilateral hind limbs were harvested at 3 weeks after surgery and underwent immediate tissue fixation overnight. For mouse arteriole density identification, the adductor muscles were stained with alpha‐SMA monoclonal antibody at 3 weeks after ligation.\nStatistical analyses We presented baseline characteristics of patients as frequencies and percentages for categorical variables and as means and standard deviations or interquartile range for continuous variables, depending on whether data distribution was normal (assessed by normal Q‐Q plots). We compared categorical variables using chi‐square analysis, and continuous variables were compared by analysis of variance test or Mann–Whitney U test, according to distribution type. We evaluated the relationship between serum levels of IFN‐alpha and CC development with the use of the levels of IFN‐alpha as both a continuous and a categorical variable. We constructed logistic regression model to calculate the odds ratio (OR) among tertiles. We used the first tertile as reference and used the following three models: model 1: unadjusted; model 2: adjusted age, sex, hypertension, hyperlipidemia, smoking; model 3: adjusted age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP. We also used a two‐piecewise linear regression model to examine the threshold effect of the levels of IFN‐alpha on the risk of poor CC according to the smoothing plot. The threshold value was determined using a trial method which was to move the trial turning point along the predefined interval and picked up the one which gave maximum model likelihood. A log likelihood ratio test was conducted comparing one‐line linear regression model with two‐piecewise linear model. We further used restricted cubic splines with five knots at the 5th, 35th, 50th, 65th, and 95th centiles to flexibly model the association of the levels of IFN‐alpha with the incidence of good CC adjusted for model 3 where the threshold value served as the reference.\nWe also did several sensitivity analyses by excluding patients with age <55 years or >70 years or by using the quantiles for the levels of serum IFN‐alpha. We performed all of the analyses using R version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria).\nWe presented baseline characteristics of patients as frequencies and percentages for categorical variables and as means and standard deviations or interquartile range for continuous variables, depending on whether data distribution was normal (assessed by normal Q‐Q plots). We compared categorical variables using chi‐square analysis, and continuous variables were compared by analysis of variance test or Mann–Whitney U test, according to distribution type. 
We evaluated the relationship between serum levels of IFN‐alpha and CC development with the use of the levels of IFN‐alpha as both a continuous and a categorical variable. We constructed logistic regression model to calculate the odds ratio (OR) among tertiles. We used the first tertile as reference and used the following three models: model 1: unadjusted; model 2: adjusted age, sex, hypertension, hyperlipidemia, smoking; model 3: adjusted age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP. We also used a two‐piecewise linear regression model to examine the threshold effect of the levels of IFN‐alpha on the risk of poor CC according to the smoothing plot. The threshold value was determined using a trial method which was to move the trial turning point along the predefined interval and picked up the one which gave maximum model likelihood. A log likelihood ratio test was conducted comparing one‐line linear regression model with two‐piecewise linear model. We further used restricted cubic splines with five knots at the 5th, 35th, 50th, 65th, and 95th centiles to flexibly model the association of the levels of IFN‐alpha with the incidence of good CC adjusted for model 3 where the threshold value served as the reference.\nWe also did several sensitivity analyses by excluding patients with age <55 years or >70 years or by using the quantiles for the levels of serum IFN‐alpha. We performed all of the analyses using R version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria).", "This study was conducted in accordance with the Declaration of Helsinki and with written informed consent of each participant. This study was approved by the medical ethics committee of Second Xiangya Hospital of Central South University. Patients undergoing coronary angiography at our catheter laboratory between January 2018 and June 2019 and with at least one major coronary artery total occlusion were enrolled in our study. The exclusion criteria were as follows: (1) patients with ST‐segment elevated myocardial infarction/non‐ST‐segment elevated myocardial infarction, (2) patients with severe cardiac dysfunction/cardiac shock (HYHA IV or Killip IV), (3) patients with coronary artery bypass grafting, (4) patients with malignant tumor, (5) patients with inflammatory or infectious diseases, and (6) patients with severe hepatic (Child class C) and renal dysfunction (glomerular filtration rate <15 mL/min/1.73 m2 or needing hemodialysis). Pei JY and Wang XP were blinded to the characteristics of the included patients. They reviewed the angiography results and classified the extent of coronary circulation by the Rentrop classification from 0 to 3 as follows: 0 = none, 1 = filling of side branches of the artery dilated via collateral channels without visualization of the epicardial segment, 2 = partial filling of the epicardial segment via collateral channels, and 3 = complete filling of the epicardial segment of the artery being dilated via collateral channels.\n6\n Rentrop 0–1 were classified as poor CC development; Rentrop 2–3 were classified as good CC development. Venous blood samples were collected immediately before coronary angiography. Serum was separated at 4°C, and the levels of IFN‐alpha were measured by ELISA (BMS216, Invitrogen).", "The animal protocol was approved by the animal ethics committee of Second Xiangya Hospital of Central South University. All animal works were performed in Second Xiangya Hospital of Central South University. 
Male C57/BL mice were anesthetized with 3% isoflurane. The left femoral artery was ligated and excised as described previously.\n7\n The right leg underwent sham operation and was used as control. The mice were randomly divided into two groups: an INF‐alpha group with intraperitoneal injection of 2000 IU IFN‐alpha, and a control group with intraperitoneal injection of the same amount of saline. The IFN‐alpha concentration used is comparable to dosages used in patients. Hind‐limb prefusion was measured by laser speckle imaging under temperature‐controlled conditions, before and after left hind‐limb ligation, as well as at 3 days, 1 week, and 3 weeks following ligation. Color‐coded images of the paws representing the flux value were used to calculate the ratio of the tissue perfusion of the occluded (left) to the Sham‐operated (Sham) paw. The right hind‐limb was used as reference. All mice were put to death by cervical dislocation.", "Adductor muscles from bilateral hind limbs were harvested at 3 weeks after surgery and underwent immediate tissue fixation overnight. For mouse arteriole density identification, the adductor muscles were stained with alpha‐SMA monoclonal antibody at 3 weeks after ligation.", "We presented baseline characteristics of patients as frequencies and percentages for categorical variables and as means and standard deviations or interquartile range for continuous variables, depending on whether data distribution was normal (assessed by normal Q‐Q plots). We compared categorical variables using chi‐square analysis, and continuous variables were compared by analysis of variance test or Mann–Whitney U test, according to distribution type. We evaluated the relationship between serum levels of IFN‐alpha and CC development with the use of the levels of IFN‐alpha as both a continuous and a categorical variable. We constructed logistic regression model to calculate the odds ratio (OR) among tertiles. We used the first tertile as reference and used the following three models: model 1: unadjusted; model 2: adjusted age, sex, hypertension, hyperlipidemia, smoking; model 3: adjusted age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP. We also used a two‐piecewise linear regression model to examine the threshold effect of the levels of IFN‐alpha on the risk of poor CC according to the smoothing plot. The threshold value was determined using a trial method which was to move the trial turning point along the predefined interval and picked up the one which gave maximum model likelihood. A log likelihood ratio test was conducted comparing one‐line linear regression model with two‐piecewise linear model. We further used restricted cubic splines with five knots at the 5th, 35th, 50th, 65th, and 95th centiles to flexibly model the association of the levels of IFN‐alpha with the incidence of good CC adjusted for model 3 where the threshold value served as the reference.\nWe also did several sensitivity analyses by excluding patients with age <55 years or >70 years or by using the quantiles for the levels of serum IFN‐alpha. We performed all of the analyses using R version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria).", "Between January 2018 and June 2019, 208 patients with CTO were included in our study as required by the inclusion and exclusion criteria (Figure S1). The baseline characteristics of the included patients are presented in Table 1. No significant difference between groups was observed except ejection fraction and the levels of serum IFN‐alpha. 
In patients with poor CC, serum INF‐alpha levels were significantly higher (87.7 ± 42.0 vs. 59.8 ± 18.5, p < .001), and ejection fraction was significantly lower (51.2 ± 10.5 vs. 59.8 ± 18.5, p = .04) than in patients with good CC.\nBaseline characteristics of included patients\nAbbreviation: CC, collateral circulation; CRP, C‐reactive protein; LAD, left anterior descending; LCX, left circumflex; LDL, low density lipoprotein; PLT, platelet count; RCA, right coronary artery; WBC, white blood cells.\nANOVA showed that the serum levels of IFN‐alpha were significantly associated with CC, as did the Rentrop score (Figure 1). In a multivariate regression analysis, the risk of poor CC was increased with increasing IFN‐alpha tertile. Compared with the first tertile, the risk of poor CC was 3.67 (95% CI: 1.18–7.43) for model 1, 4.37 (95% CI: 2.06–9.26) for model 2, and 4.79 (95% CI: 2.22–10.4) for model 3 (Table 2). After adjusting for model 3, the cubic spline‐smoothing curve shows that the risk of poor CC increased with increasing serum IFN‐alpha levels (Figure 2). The incidence of poor CC increased with the serum levels of IFN‐alpha when the serum levels of IFN‐alpha were less than 112 pg/mL (OR: 1.0348, 95% CI: 1.0195–1.0577, p < .001, Table 3). When the levels of IFN‐alpha were larger than 112 pg/mL, the risk of poor CC did not increase with increasing levels of IFN‐alpha (OR: 1.0036, 95% CI: 0.9748–1.033, p = .807, Table 3). Our results remained robust in sensitivity analyses when we excluded patients aged >70 years or <55 years, as well as when using the fourth tertiles for the levels of serum IFN‐alpha (Tables S1–S3).\nCorrelation between the serum levels of IFN‐alpha and Rentrop scores. The levels in patients with Rentrop scores 0 and 1, or Rentrop 2 and 3, were not significantly different. The serum levels of IFN‐alpha in CTO patients with Rentrop scores of 2 and 3 were significantly different from those with Rentrop scores of 0 and 1\nOR (95% CI) of poor CC according to tertiles of the levels of serum IFN‐alpha\n\nNote: Model 1: unadjusted; model 2: adjusted age, sex, hypertension, hyperlipidemia, smoking; model 3: adjusted age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP.\nRelationship between the serum levels of IFN‐alpha and CC. A nonlinear relationship was observed between the serum levels of IFN‐alpha and CC after adjusting for model 3. Threshold effect analysis found that when serum levels of IFN‐alpha were larger than 112 pg/mL, the risks of poor CC did not keep on increasing\nThreshold effect analysis of the serum levels of IFN‐alpha on poor CC\n\nNote: When the levels of IFN‐alpha larger than 112 pg/mL, the incidence of poor CC did not increase with the increase of the levels of IFN‐alpha. Adjusted for model 3.\nAbbreviation: CC, coronary collateral circulation.\nTo assess the in vivo effect of IFN‐alpha on CC, we examined whether intraperitoneal injection of IFN‐alpha at 2000 IU/d inhibited blood perfusion recovery in a mouse hind‐limb. Laser speckle imaging showed that blood flow recovery was significantly impaired in mice treated with IFN‐alpha compared with control mice at 1 week (41.7% ± 2.66% vs. 47.3% ± 2.16%, p = 0.020), at 14 days (59.8% ± 3.31% vs. 66.5% ± 3.21%, p = .005) and at 21 days (59.8% ± 3.31% vs. 66.5% ± 3.21%, p = .005, Figure 3A,B). Although the numbers of arterioles are comparable between groups, the diameter of the arterioles in the mice treated with IFN‐alpha was smaller compared with the control group (Figure 4A). 
In order to study local ischemia, we examined the morphology in the gastrocnemius muscle by HE staining and Masson staining. Although we did not find obvious morphological changes by HE analysis (Figure S2), the area of interstitial fibrosis evaluated by Masson staining in the mice treated with IFN‐alpha was larger than in the control group (Figure 4B).\nINF‐alpha impairs the development of CC in a murine hindlimb ischemic model. (A) Representative laser speckle perfusion images. (B) Quantification of laser speckle perfusion (ischemic/nonischemic) in control and IFN‐alpha treated mice over time\nImmunohistochemistry of SMA in cross‐sections of the adductor muscles or Masson staining of gastrocnemius muscle collected from the control and IFN‐alpha group 3 weeks after femoral artery ligation. (A) Representative immunohistochemistry and quantification of CC numbers and diameter of adductor muscles. (B) Representative Masson staining and quantification of interstitial fibrosis of gastrocnemius muscle", "In this study, we first found that CTO patients with poor CC have higher serum levels of IFN‐alpha. We found that IFN‐alpha inhibits the development of CC after artery occlusion.\nThe development of CC has a close relationship with multiple factors affecting coronary occlusion, such as diabetes mellitus, exercise, age, and other factors.\n8\n, \n9\n, \n10\n Recent studies have found that inflammatory factors are involved in the progress of CC. Fan et al. found that C‐reactive protein was positively associated with poor CC in patients with coronary artery disease.\n11\n Rakhit et al.\n12\n found that tumor necrosis factor alpha (TNF‐alpha) was inversely correlated with collateral flow index (CFI), whereas IL‐6 was positively correlated with CFI. However, these case–control studies did not determine the cause and effect relationship between inflammatory factors and the development of CC. We further investigated this cause and effect relationship in animal experiments and found that IFN‐alpha inhibited the development of CC after hind‐limb ischemia.\nThe development of CC depends on local activation of endothelial cells, the eNOS signaling pathway.\n13\n Previous study has found IFN‐alpha inhabited the expression and phosphorylation of eNOS in endothelial cells.\n14\n Epstein found that aging caused collateral rarefaction by inhibiting eNOS; overexpression of eNOS restored normal CC.\n15\n Furthermore, exercise promoted CC by promoting the expression and phosphorylation of eNOS. Therefore, eNOS is an essential factor for the development of CC. IFN‐alpha inhibits the expression and phosphorylation of eNOS, which may impair the development of CC. The positive remodeling of an arteriole into an artery up to 12–20 times its original size requires the proliferation and migration of endothelial cells and VSMCs.\n16\n, \n17\n Relevant study also found inflammatory factors, such as hs‐CRP, TNF‐alpha IFN‐alpha, inhibits the proliferation and migration of endothelial cells.\n11\n, \n12\n This may be another reason for patients with high serum levels of IFN‐alpha have impaired CC.\nLimitation First, the main limitation of present study is the small number of patients with CTO. However, the sensitivity analysis showed the robustness of our results. Second, our study did not include all inflammatory and angiogenic factors, they may have a role in the development of CC. Third, we evaluated the extent of CC by Rentrop classification instead of CFI. 
Although collateral is more accurate than Rentrop classification, it is an invasive procedure needing pressure wire. Rentrop classification is more often used in routine clinical practice. Fourth, we evaluated the development of CC in a model of hind‐limb ischemia rather than heart ischemia. However, both development of lower limb CC and coronary CC share common cellular and molecular mechanism: arteriogenesis. Ischemic hind‐limb model is an ideal model for arteriogenesis including coronary circulation.\n4\n, \n5\n Ischemic hind‐limb model mimics aspects of human occlusive artery disease to investigate vascular regeneration and to test therapeutical approaches in a reproducible manner.\nFirst, the main limitation of present study is the small number of patients with CTO. However, the sensitivity analysis showed the robustness of our results. Second, our study did not include all inflammatory and angiogenic factors, they may have a role in the development of CC. Third, we evaluated the extent of CC by Rentrop classification instead of CFI. Although collateral is more accurate than Rentrop classification, it is an invasive procedure needing pressure wire. Rentrop classification is more often used in routine clinical practice. Fourth, we evaluated the development of CC in a model of hind‐limb ischemia rather than heart ischemia. However, both development of lower limb CC and coronary CC share common cellular and molecular mechanism: arteriogenesis. Ischemic hind‐limb model is an ideal model for arteriogenesis including coronary circulation.\n4\n, \n5\n Ischemic hind‐limb model mimics aspects of human occlusive artery disease to investigate vascular regeneration and to test therapeutical approaches in a reproducible manner.", "First, the main limitation of present study is the small number of patients with CTO. However, the sensitivity analysis showed the robustness of our results. Second, our study did not include all inflammatory and angiogenic factors, they may have a role in the development of CC. Third, we evaluated the extent of CC by Rentrop classification instead of CFI. Although collateral is more accurate than Rentrop classification, it is an invasive procedure needing pressure wire. Rentrop classification is more often used in routine clinical practice. Fourth, we evaluated the development of CC in a model of hind‐limb ischemia rather than heart ischemia. However, both development of lower limb CC and coronary CC share common cellular and molecular mechanism: arteriogenesis. Ischemic hind‐limb model is an ideal model for arteriogenesis including coronary circulation.\n4\n, \n5\n Ischemic hind‐limb model mimics aspects of human occlusive artery disease to investigate vascular regeneration and to test therapeutical approaches in a reproducible manner.", "CTO patients with poor CC have higher serum levels of IFN‐alpha than CTO patients with good CC. IFN‐alpha might impair the development of CC after artery occlusion.", "The authors declare that they have no potential conflicts of interests.", "Xinqun Hu and Zhenhua Xing designed the study and provided methodological expertise. Zhenhua Xing drafted the manuscript. Zhenhua Xing performed the case–control study. Xiaopu Wang, Junyu Pei, Zhaowei Zhu, Shi Tai performed the animal experiments. All authors have read, provided critical feedback on, and approved the final manuscript.", "\nSupplementary Table S1. OR (95% CI) of poor CC according to fourths of the levels of serum IFN‐alpha\n\nSupplementary Table S2. 
OR (95% CI) of poor CC by excluding patients with age >70 years\n\nSupplementary Table S3. OR (95% CI) of poor CC by excluding patients with age <55 years\n\nSupplementary Figure S1. Chart of inclusion and exclusion of the present study\n\nSupplementary Figure S2. HE staining of the left and right sides of gastrocnemius muscle. No obvious morphological changes were found.\nClick here for additional data file." ]
[ "background", "materials-and-methods", null, null, null, null, "results", "discussion", null, "conclusions", "COI-statement", null, "supplementary-material" ]
[ "collateral circulation", "coronary chronic total occlusion", "IFN‐alpha" ]
BACKGROUND: Development of collateral circulation (CC), also termed as arteriogenesis, is a natural life‐preservation mechanism occurring in patients with coronary chronic total occlusion (CTO). Well‐developed CC preserves cardiac function, thus reducing cardiac mortality after coronary occlusion. 1 , 2 Patients with CTO show a great deal of heterogeneity in their arteriogenic responses to artery occlusion. This is affected by multiple factors, including inflammatory factors. 3 The interaction between inflammation and arteriogenesis may be worthy of study, but data on the correlation between arteriogenesis and inflammation are limited. A previous study has found that interferon (IFN) signaling in monocytes is enhanced and stimulated by lipopolysaccharide among patients with impaired coronary CC. 4 Furthermore, blocking the receptor of IFN have been found to be helpful in alleviating ischemia in mice. 5 However, whether patients with poor CC have higher levels of serum IFN or whether treatment with IFN inhibits the development of arteriogenesis remains unknown. IFN‐alpha is widely used in treating hepatitis B and C, as well as various cancers. IFN‐alpha might affect the development of CC in patients with CTO. In this study, we aimed to investigate whether patients with poor CC have higher levels of IFN‐alpha, and if so, whether IFN‐alpha inhibits the development of CC. MATERIALS AND METHODS: Study population This study was conducted in accordance with the Declaration of Helsinki and with written informed consent of each participant. This study was approved by the medical ethics committee of Second Xiangya Hospital of Central South University. Patients undergoing coronary angiography at our catheter laboratory between January 2018 and June 2019 and with at least one major coronary artery total occlusion were enrolled in our study. The exclusion criteria were as follows: (1) patients with ST‐segment elevated myocardial infarction/non‐ST‐segment elevated myocardial infarction, (2) patients with severe cardiac dysfunction/cardiac shock (HYHA IV or Killip IV), (3) patients with coronary artery bypass grafting, (4) patients with malignant tumor, (5) patients with inflammatory or infectious diseases, and (6) patients with severe hepatic (Child class C) and renal dysfunction (glomerular filtration rate <15 mL/min/1.73 m2 or needing hemodialysis). Pei JY and Wang XP were blinded to the characteristics of the included patients. They reviewed the angiography results and classified the extent of coronary circulation by the Rentrop classification from 0 to 3 as follows: 0 = none, 1 = filling of side branches of the artery dilated via collateral channels without visualization of the epicardial segment, 2 = partial filling of the epicardial segment via collateral channels, and 3 = complete filling of the epicardial segment of the artery being dilated via collateral channels. 6 Rentrop 0–1 were classified as poor CC development; Rentrop 2–3 were classified as good CC development. Venous blood samples were collected immediately before coronary angiography. Serum was separated at 4°C, and the levels of IFN‐alpha were measured by ELISA (BMS216, Invitrogen). This study was conducted in accordance with the Declaration of Helsinki and with written informed consent of each participant. This study was approved by the medical ethics committee of Second Xiangya Hospital of Central South University. 
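As a minimal illustration of the Rentrop‐based dichotomization described above (grades 0–1 classified as poor CC, grades 2–3 as good CC), here is a short Python sketch; the function name and the example usage are illustrative additions, not part of the original study code.

```python
def classify_cc(rentrop_grade: int) -> str:
    """Dichotomize a Rentrop grade (0-3) into poor/good collateral circulation.

    Grades 0-1 count as poor CC development and grades 2-3 as good CC,
    following the rule stated in the Methods.
    """
    if rentrop_grade not in (0, 1, 2, 3):
        raise ValueError("Rentrop grade must be 0, 1, 2, or 3")
    return "poor CC" if rentrop_grade <= 1 else "good CC"


# Example: grades assigned by the blinded readers for a few patients
for grade in (0, 1, 2, 3):
    print(grade, "->", classify_cc(grade))
```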
Patients undergoing coronary angiography at our catheter laboratory between January 2018 and June 2019 and with at least one major coronary artery total occlusion were enrolled in our study. The exclusion criteria were as follows: (1) patients with ST‐segment elevated myocardial infarction/non‐ST‐segment elevated myocardial infarction, (2) patients with severe cardiac dysfunction/cardiac shock (HYHA IV or Killip IV), (3) patients with coronary artery bypass grafting, (4) patients with malignant tumor, (5) patients with inflammatory or infectious diseases, and (6) patients with severe hepatic (Child class C) and renal dysfunction (glomerular filtration rate <15 mL/min/1.73 m2 or needing hemodialysis). Pei JY and Wang XP were blinded to the characteristics of the included patients. They reviewed the angiography results and classified the extent of coronary circulation by the Rentrop classification from 0 to 3 as follows: 0 = none, 1 = filling of side branches of the artery dilated via collateral channels without visualization of the epicardial segment, 2 = partial filling of the epicardial segment via collateral channels, and 3 = complete filling of the epicardial segment of the artery being dilated via collateral channels. 6 Rentrop 0–1 were classified as poor CC development; Rentrop 2–3 were classified as good CC development. Venous blood samples were collected immediately before coronary angiography. Serum was separated at 4°C, and the levels of IFN‐alpha were measured by ELISA (BMS216, Invitrogen). Ischemic hind‐limb model and laser speckle imaging The animal protocol was approved by the animal ethics committee of Second Xiangya Hospital of Central South University. All animal works were performed in Second Xiangya Hospital of Central South University. Male C57/BL mice were anesthetized with 3% isoflurane. The left femoral artery was ligated and excised as described previously. 7 The right leg underwent sham operation and was used as control. The mice were randomly divided into two groups: an INF‐alpha group with intraperitoneal injection of 2000 IU IFN‐alpha, and a control group with intraperitoneal injection of the same amount of saline. The IFN‐alpha concentration used is comparable to dosages used in patients. Hind‐limb prefusion was measured by laser speckle imaging under temperature‐controlled conditions, before and after left hind‐limb ligation, as well as at 3 days, 1 week, and 3 weeks following ligation. Color‐coded images of the paws representing the flux value were used to calculate the ratio of the tissue perfusion of the occluded (left) to the Sham‐operated (Sham) paw. The right hind‐limb was used as reference. All mice were put to death by cervical dislocation. The animal protocol was approved by the animal ethics committee of Second Xiangya Hospital of Central South University. All animal works were performed in Second Xiangya Hospital of Central South University. Male C57/BL mice were anesthetized with 3% isoflurane. The left femoral artery was ligated and excised as described previously. 7 The right leg underwent sham operation and was used as control. The mice were randomly divided into two groups: an INF‐alpha group with intraperitoneal injection of 2000 IU IFN‐alpha, and a control group with intraperitoneal injection of the same amount of saline. The IFN‐alpha concentration used is comparable to dosages used in patients. 
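The laser speckle quantification described above reduces to the ratio of flux in the occluded (left) paw to flux in the sham‐operated paw at each time point. A minimal NumPy sketch follows; the array names and example values are purely illustrative and are not data from the study.

```python
import numpy as np

# Hypothetical mean flux values (arbitrary laser speckle units) for one mouse,
# measured before ligation, immediately after, and at day 3, week 1, and week 3.
timepoints = ["baseline", "post-ligation", "day 3", "week 1", "week 3"]
flux_occluded = np.array([410.0, 55.0, 120.0, 190.0, 265.0])  # ligated (left) paw
flux_sham = np.array([420.0, 415.0, 405.0, 400.0, 410.0])     # sham-operated paw

# Perfusion recovery expressed as the ischemic/non-ischemic ratio (reported in %)
perfusion_ratio = flux_occluded / flux_sham
for t, r in zip(timepoints, perfusion_ratio):
    print(f"{t}: {100 * r:.1f}%")
```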
Hind‐limb prefusion was measured by laser speckle imaging under temperature‐controlled conditions, before and after left hind‐limb ligation, as well as at 3 days, 1 week, and 3 weeks following ligation. Color‐coded images of the paws representing the flux value were used to calculate the ratio of the tissue perfusion of the occluded (left) to the Sham‐operated (Sham) paw. The right hind‐limb was used as reference. All mice were put to death by cervical dislocation. Histological analyses Adductor muscles from bilateral hind limbs were harvested at 3 weeks after surgery and underwent immediate tissue fixation overnight. For mouse arteriole density identification, the adductor muscles were stained with alpha‐SMA monoclonal antibody at 3 weeks after ligation. Adductor muscles from bilateral hind limbs were harvested at 3 weeks after surgery and underwent immediate tissue fixation overnight. For mouse arteriole density identification, the adductor muscles were stained with alpha‐SMA monoclonal antibody at 3 weeks after ligation. Statistical analyses We presented baseline characteristics of patients as frequencies and percentages for categorical variables and as means and standard deviations or interquartile range for continuous variables, depending on whether data distribution was normal (assessed by normal Q‐Q plots). We compared categorical variables using chi‐square analysis, and continuous variables were compared by analysis of variance test or Mann–Whitney U test, according to distribution type. We evaluated the relationship between serum levels of IFN‐alpha and CC development with the use of the levels of IFN‐alpha as both a continuous and a categorical variable. We constructed logistic regression model to calculate the odds ratio (OR) among tertiles. We used the first tertile as reference and used the following three models: model 1: unadjusted; model 2: adjusted age, sex, hypertension, hyperlipidemia, smoking; model 3: adjusted age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP. We also used a two‐piecewise linear regression model to examine the threshold effect of the levels of IFN‐alpha on the risk of poor CC according to the smoothing plot. The threshold value was determined using a trial method which was to move the trial turning point along the predefined interval and picked up the one which gave maximum model likelihood. A log likelihood ratio test was conducted comparing one‐line linear regression model with two‐piecewise linear model. We further used restricted cubic splines with five knots at the 5th, 35th, 50th, 65th, and 95th centiles to flexibly model the association of the levels of IFN‐alpha with the incidence of good CC adjusted for model 3 where the threshold value served as the reference. We also did several sensitivity analyses by excluding patients with age <55 years or >70 years or by using the quantiles for the levels of serum IFN‐alpha. We performed all of the analyses using R version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria). We presented baseline characteristics of patients as frequencies and percentages for categorical variables and as means and standard deviations or interquartile range for continuous variables, depending on whether data distribution was normal (assessed by normal Q‐Q plots). We compared categorical variables using chi‐square analysis, and continuous variables were compared by analysis of variance test or Mann–Whitney U test, according to distribution type. 
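A sketch of the baseline group comparisons described above, written in Python/SciPy rather than the authors' original R code; the DataFrame columns and the file name are assumptions made for illustration only.

```python
import pandas as pd
from scipy import stats

# Assumed layout: one row per patient with columns such as
# 'poor_cc' (0/1), 'smoking' (0/1), 'ifn_alpha' (pg/mL), and 'age' (years).
df = pd.read_csv("cto_patients.csv")  # hypothetical file name

# Categorical variable: chi-square test on the contingency table
table = pd.crosstab(df["poor_cc"], df["smoking"])
chi2_stat, p_chi2, dof, _ = stats.chi2_contingency(table)

# Non-normally distributed continuous variable: Mann-Whitney U test
poor = df.loc[df["poor_cc"] == 1, "ifn_alpha"]
good = df.loc[df["poor_cc"] == 0, "ifn_alpha"]
u_stat, p_mwu = stats.mannwhitneyu(poor, good, alternative="two-sided")

# Normally distributed continuous variable: analysis of variance
f_stat, p_anova = stats.f_oneway(df.loc[df["poor_cc"] == 1, "age"],
                                 df.loc[df["poor_cc"] == 0, "age"])

print(p_chi2, p_mwu, p_anova)
```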
We evaluated the relationship between serum levels of IFN‐alpha and CC development with the use of the levels of IFN‐alpha as both a continuous and a categorical variable. We constructed logistic regression model to calculate the odds ratio (OR) among tertiles. We used the first tertile as reference and used the following three models: model 1: unadjusted; model 2: adjusted age, sex, hypertension, hyperlipidemia, smoking; model 3: adjusted age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP. We also used a two‐piecewise linear regression model to examine the threshold effect of the levels of IFN‐alpha on the risk of poor CC according to the smoothing plot. The threshold value was determined using a trial method which was to move the trial turning point along the predefined interval and picked up the one which gave maximum model likelihood. A log likelihood ratio test was conducted comparing one‐line linear regression model with two‐piecewise linear model. We further used restricted cubic splines with five knots at the 5th, 35th, 50th, 65th, and 95th centiles to flexibly model the association of the levels of IFN‐alpha with the incidence of good CC adjusted for model 3 where the threshold value served as the reference. We also did several sensitivity analyses by excluding patients with age <55 years or >70 years or by using the quantiles for the levels of serum IFN‐alpha. We performed all of the analyses using R version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria). Study population: This study was conducted in accordance with the Declaration of Helsinki and with written informed consent of each participant. This study was approved by the medical ethics committee of Second Xiangya Hospital of Central South University. Patients undergoing coronary angiography at our catheter laboratory between January 2018 and June 2019 and with at least one major coronary artery total occlusion were enrolled in our study. The exclusion criteria were as follows: (1) patients with ST‐segment elevated myocardial infarction/non‐ST‐segment elevated myocardial infarction, (2) patients with severe cardiac dysfunction/cardiac shock (HYHA IV or Killip IV), (3) patients with coronary artery bypass grafting, (4) patients with malignant tumor, (5) patients with inflammatory or infectious diseases, and (6) patients with severe hepatic (Child class C) and renal dysfunction (glomerular filtration rate <15 mL/min/1.73 m2 or needing hemodialysis). Pei JY and Wang XP were blinded to the characteristics of the included patients. They reviewed the angiography results and classified the extent of coronary circulation by the Rentrop classification from 0 to 3 as follows: 0 = none, 1 = filling of side branches of the artery dilated via collateral channels without visualization of the epicardial segment, 2 = partial filling of the epicardial segment via collateral channels, and 3 = complete filling of the epicardial segment of the artery being dilated via collateral channels. 6 Rentrop 0–1 were classified as poor CC development; Rentrop 2–3 were classified as good CC development. Venous blood samples were collected immediately before coronary angiography. Serum was separated at 4°C, and the levels of IFN‐alpha were measured by ELISA (BMS216, Invitrogen). Ischemic hind‐limb model and laser speckle imaging: The animal protocol was approved by the animal ethics committee of Second Xiangya Hospital of Central South University. 
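Returning briefly to the regression modelling described in the statistical analysis above, the tertile‐based logistic models and the flexible dose–response fit might look roughly like this in Python (the original analysis was performed in R); the variable names, the file name, and the use of patsy's natural cubic spline basis with a simplified knot choice are all assumptions rather than the authors' exact procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cto_patients.csv")  # hypothetical file; one row per patient

# Tertiles of serum IFN-alpha, with the first tertile as the reference category
df["ifn_tertile"] = pd.qcut(df["ifn_alpha"], q=3, labels=["T1", "T2", "T3"])

# 'Model 3': adjusted for age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP
model3 = smf.logit(
    "poor_cc ~ C(ifn_tertile, Treatment('T1')) + age + sex + hypertension"
    " + hyperlipidemia + smoking + uric_acid + crp",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals (exponentiated coefficients)
print(pd.concat([np.exp(model3.params), np.exp(model3.conf_int())], axis=1))

# Flexible dose-response: natural cubic spline basis for the IFN-alpha level
spline_model = smf.logit(
    "poor_cc ~ cr(ifn_alpha, df=4) + age + sex + hypertension + hyperlipidemia"
    " + smoking + uric_acid + crp",
    data=df,
).fit()
```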
All animal works were performed in Second Xiangya Hospital of Central South University. Male C57/BL mice were anesthetized with 3% isoflurane. The left femoral artery was ligated and excised as described previously. 7 The right leg underwent sham operation and was used as control. The mice were randomly divided into two groups: an INF‐alpha group with intraperitoneal injection of 2000 IU IFN‐alpha, and a control group with intraperitoneal injection of the same amount of saline. The IFN‐alpha concentration used is comparable to dosages used in patients. Hind‐limb prefusion was measured by laser speckle imaging under temperature‐controlled conditions, before and after left hind‐limb ligation, as well as at 3 days, 1 week, and 3 weeks following ligation. Color‐coded images of the paws representing the flux value were used to calculate the ratio of the tissue perfusion of the occluded (left) to the Sham‐operated (Sham) paw. The right hind‐limb was used as reference. All mice were put to death by cervical dislocation. Histological analyses: Adductor muscles from bilateral hind limbs were harvested at 3 weeks after surgery and underwent immediate tissue fixation overnight. For mouse arteriole density identification, the adductor muscles were stained with alpha‐SMA monoclonal antibody at 3 weeks after ligation. Statistical analyses: We presented baseline characteristics of patients as frequencies and percentages for categorical variables and as means and standard deviations or interquartile range for continuous variables, depending on whether data distribution was normal (assessed by normal Q‐Q plots). We compared categorical variables using chi‐square analysis, and continuous variables were compared by analysis of variance test or Mann–Whitney U test, according to distribution type. We evaluated the relationship between serum levels of IFN‐alpha and CC development with the use of the levels of IFN‐alpha as both a continuous and a categorical variable. We constructed logistic regression model to calculate the odds ratio (OR) among tertiles. We used the first tertile as reference and used the following three models: model 1: unadjusted; model 2: adjusted age, sex, hypertension, hyperlipidemia, smoking; model 3: adjusted age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP. We also used a two‐piecewise linear regression model to examine the threshold effect of the levels of IFN‐alpha on the risk of poor CC according to the smoothing plot. The threshold value was determined using a trial method which was to move the trial turning point along the predefined interval and picked up the one which gave maximum model likelihood. A log likelihood ratio test was conducted comparing one‐line linear regression model with two‐piecewise linear model. We further used restricted cubic splines with five knots at the 5th, 35th, 50th, 65th, and 95th centiles to flexibly model the association of the levels of IFN‐alpha with the incidence of good CC adjusted for model 3 where the threshold value served as the reference. We also did several sensitivity analyses by excluding patients with age <55 years or >70 years or by using the quantiles for the levels of serum IFN‐alpha. We performed all of the analyses using R version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria). RESULTS: Between January 2018 and June 2019, 208 patients with CTO were included in our study as required by the inclusion and exclusion criteria (Figure S1). 
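Before turning to the results, here is a schematic Python analogue of the two‐piecewise ("turning point") threshold search and log‐likelihood ratio test described in the statistical analysis above; the logistic form, the grid of candidate thresholds, and all variable names are assumptions, not the authors' actual R implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

df = pd.read_csv("cto_patients.csv")  # hypothetical file with assumed column names
covariates = "age + sex + hypertension + hyperlipidemia + smoking + uric_acid + crp"

# Single-slope ('one-line') model used as the comparator
single = smf.logit(f"poor_cc ~ ifn_alpha + {covariates}", data=df).fit(disp=False)

# Move a trial turning point along a predefined grid and keep the value that
# gives the maximum model likelihood, mirroring the 'trial method' described above.
best_t, best_llf, best_fit = None, -np.inf, None
grid = np.linspace(df["ifn_alpha"].quantile(0.05), df["ifn_alpha"].quantile(0.95), 50)
for t in grid:
    df["ifn_above"] = np.clip(df["ifn_alpha"] - t, 0, None)  # extra slope above t
    fit = smf.logit(f"poor_cc ~ ifn_alpha + ifn_above + {covariates}",
                    data=df).fit(disp=False)
    if fit.llf > best_llf:
        best_t, best_llf, best_fit = t, fit.llf, fit

# Likelihood-ratio test: two-piecewise model vs. the single-slope model (1 extra df)
lr_stat = 2 * (best_llf - single.llf)
print(best_t, lr_stat, chi2.sf(lr_stat, df=1))
```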
The baseline characteristics of the included patients are presented in Table 1. No significant difference between groups was observed except ejection fraction and the levels of serum IFN‐alpha. In patients with poor CC, serum INF‐alpha levels were significantly higher (87.7 ± 42.0 vs. 59.8 ± 18.5, p < .001), and ejection fraction was significantly lower (51.2 ± 10.5 vs. 59.8 ± 18.5, p = .04) than in patients with good CC. Baseline characteristics of included patients Abbreviation: CC, collateral circulation; CRP, C‐reactive protein; LAD, left anterior descending; LCX, left circumflex; LDL, low density lipoprotein; PLT, platelet count; RCA, right coronary artery; WBC, white blood cells. ANOVA showed that the serum levels of IFN‐alpha were significantly associated with CC, as did the Rentrop score (Figure 1). In a multivariate regression analysis, the risk of poor CC was increased with increasing IFN‐alpha tertile. Compared with the first tertile, the risk of poor CC was 3.67 (95% CI: 1.18–7.43) for model 1, 4.37 (95% CI: 2.06–9.26) for model 2, and 4.79 (95% CI: 2.22–10.4) for model 3 (Table 2). After adjusting for model 3, the cubic spline‐smoothing curve shows that the risk of poor CC increased with increasing serum IFN‐alpha levels (Figure 2). The incidence of poor CC increased with the serum levels of IFN‐alpha when the serum levels of IFN‐alpha were less than 112 pg/mL (OR: 1.0348, 95% CI: 1.0195–1.0577, p < .001, Table 3). When the levels of IFN‐alpha were larger than 112 pg/mL, the risk of poor CC did not increase with increasing levels of IFN‐alpha (OR: 1.0036, 95% CI: 0.9748–1.033, p = .807, Table 3). Our results remained robust in sensitivity analyses when we excluded patients aged >70 years or <55 years, as well as when using the fourth tertiles for the levels of serum IFN‐alpha (Tables S1–S3). Correlation between the serum levels of IFN‐alpha and Rentrop scores. The levels in patients with Rentrop scores 0 and 1, or Rentrop 2 and 3, were not significantly different. The serum levels of IFN‐alpha in CTO patients with Rentrop scores of 2 and 3 were significantly different from those with Rentrop scores of 0 and 1 OR (95% CI) of poor CC according to tertiles of the levels of serum IFN‐alpha Note: Model 1: unadjusted; model 2: adjusted age, sex, hypertension, hyperlipidemia, smoking; model 3: adjusted age, sex, hypertension, hyperlipidemia, smoking, uric acid, CRP. Relationship between the serum levels of IFN‐alpha and CC. A nonlinear relationship was observed between the serum levels of IFN‐alpha and CC after adjusting for model 3. Threshold effect analysis found that when serum levels of IFN‐alpha were larger than 112 pg/mL, the risks of poor CC did not keep on increasing Threshold effect analysis of the serum levels of IFN‐alpha on poor CC Note: When the levels of IFN‐alpha larger than 112 pg/mL, the incidence of poor CC did not increase with the increase of the levels of IFN‐alpha. Adjusted for model 3. Abbreviation: CC, coronary collateral circulation. To assess the in vivo effect of IFN‐alpha on CC, we examined whether intraperitoneal injection of IFN‐alpha at 2000 IU/d inhibited blood perfusion recovery in a mouse hind‐limb. Laser speckle imaging showed that blood flow recovery was significantly impaired in mice treated with IFN‐alpha compared with control mice at 1 week (41.7% ± 2.66% vs. 47.3% ± 2.16%, p = 0.020), at 14 days (59.8% ± 3.31% vs. 66.5% ± 3.21%, p = .005) and at 21 days (59.8% ± 3.31% vs. 66.5% ± 3.21%, p = .005, Figure 3A,B). 
Although the numbers of arterioles are comparable between groups, the diameter of the arterioles in the mice treated with IFN‐alpha was smaller compared with the control group (Figure 4A). In order to study local ischemia, we examined the morphology in the gastrocnemius muscle by HE staining and Masson staining. Although we did not find obvious morphological changes by HE analysis (Figure S2), the area of interstitial fibrosis evaluated by Masson staining in the mice treated with IFN‐alpha was larger than in the control group (Figure 4B). IFN‐alpha impairs the development of CC in a murine hindlimb ischemic model. (A) Representative laser speckle perfusion images. (B) Quantification of laser speckle perfusion (ischemic/nonischemic) in control and IFN‐alpha treated mice over time Immunohistochemistry of SMA in cross‐sections of the adductor muscles or Masson staining of gastrocnemius muscle collected from the control and IFN‐alpha group 3 weeks after femoral artery ligation. (A) Representative immunohistochemistry and quantification of CC numbers and diameter of adductor muscles. (B) Representative Masson staining and quantification of interstitial fibrosis of gastrocnemius muscle DISCUSSION: In this study, we first found that CTO patients with poor CC have higher serum levels of IFN‐alpha. We found that IFN‐alpha inhibits the development of CC after artery occlusion. The development of CC has a close relationship with multiple factors affecting coronary occlusion, such as diabetes mellitus, exercise, age, and other factors. 8 , 9 , 10 Recent studies have found that inflammatory factors are involved in the progress of CC. Fan et al. found that C‐reactive protein was positively associated with poor CC in patients with coronary artery disease. 11 Rakhit et al. 12 found that tumor necrosis factor alpha (TNF‐alpha) was inversely correlated with collateral flow index (CFI), whereas IL‐6 was positively correlated with CFI. However, these case–control studies did not determine the cause and effect relationship between inflammatory factors and the development of CC. We further investigated this cause and effect relationship in animal experiments and found that IFN‐alpha inhibited the development of CC after hind‐limb ischemia. The development of CC depends on local activation of endothelial cells via the eNOS signaling pathway. 13 A previous study found that IFN‐alpha inhibited the expression and phosphorylation of eNOS in endothelial cells. 14 Epstein found that aging caused collateral rarefaction by inhibiting eNOS; overexpression of eNOS restored normal CC. 15 Furthermore, exercise promoted CC by promoting the expression and phosphorylation of eNOS. Therefore, eNOS is an essential factor for the development of CC. IFN‐alpha inhibits the expression and phosphorylation of eNOS, which may impair the development of CC. The positive remodeling of an arteriole into an artery up to 12–20 times its original size requires the proliferation and migration of endothelial cells and VSMCs. 16 , 17 Relevant studies have also found that inflammatory factors, such as hs‐CRP, TNF‐alpha, and IFN‐alpha, inhibit the proliferation and migration of endothelial cells. 11 , 12 This may be another reason why patients with high serum levels of IFN‐alpha have impaired CC. Limitation First, the main limitation of the present study is the small number of patients with CTO. However, the sensitivity analysis showed the robustness of our results. 
Second, our study did not include all inflammatory and angiogenic factors, which may have a role in the development of CC. Third, we evaluated the extent of CC by the Rentrop classification instead of CFI. Although CFI is more accurate than the Rentrop classification, it is an invasive procedure requiring a pressure wire. The Rentrop classification is more often used in routine clinical practice. Fourth, we evaluated the development of CC in a model of hind‐limb ischemia rather than heart ischemia. However, the development of lower‐limb CC and coronary CC share a common cellular and molecular mechanism: arteriogenesis. The ischemic hind‐limb model is an ideal model for arteriogenesis, including in the coronary circulation. 4 , 5 The ischemic hind‐limb model mimics aspects of human occlusive artery disease to investigate vascular regeneration and to test therapeutic approaches in a reproducible manner. Limitation: First, the main limitation of the present study is the small number of patients with CTO. However, the sensitivity analysis showed the robustness of our results. Second, our study did not include all inflammatory and angiogenic factors, which may have a role in the development of CC. Third, we evaluated the extent of CC by the Rentrop classification instead of CFI. Although CFI is more accurate than the Rentrop classification, it is an invasive procedure requiring a pressure wire. The Rentrop classification is more often used in routine clinical practice. Fourth, we evaluated the development of CC in a model of hind‐limb ischemia rather than heart ischemia. However, the development of lower‐limb CC and coronary CC share a common cellular and molecular mechanism: arteriogenesis. The ischemic hind‐limb model is an ideal model for arteriogenesis, including in the coronary circulation. 4 , 5 The ischemic hind‐limb model mimics aspects of human occlusive artery disease to investigate vascular regeneration and to test therapeutic approaches in a reproducible manner. CONCLUSION: CTO patients with poor CC have higher serum levels of IFN‐alpha than CTO patients with good CC. IFN‐alpha might impair the development of CC after artery occlusion. CONFLICTS OF INTEREST: The authors declare that they have no potential conflicts of interest. AUTHOR CONTRIBUTIONS: Xinqun Hu and Zhenhua Xing designed the study and provided methodological expertise. Zhenhua Xing drafted the manuscript. Zhenhua Xing performed the case–control study. Xiaopu Wang, Junyu Pei, Zhaowei Zhu, and Shi Tai performed the animal experiments. 
All authors have read, provided critical feedback on, and approved the final manuscript. Supporting information: Supplementary Table S1. OR (95% CI) of poor CC according to fourths of the levels of serum IFN‐alpha Supplementary Table S2. OR (95% CI) of poor CC by excluding patients with age >70 years Supplementary Table S3. OR (95% CI) of poor CC by excluding patients with age <55 years Supplementary Figure S1. Chart of inclusion and exclusion of the present study Supplementary Figure S2. HE staining of the left and right sides of gastrocnemius muscle. No obvious morphological changes were found. Click here for additional data file.
Background: Previous studies have demonstrated that interferon (IFN) signaling is enhanced in patients with poor collateral circulation (CC). However, the role and mechanisms of IFN-alpha in the development of CC remain unknown. Methods: We studied the serum levels of IFN-alpha and coronary CC in a case-control study using logistics regression, including 114 coronary chronic total occlusion (CTO) patients with good coronary CC and 94 CTO patients with poor coronary CC. Restricted cubic splines was used to flexibly model the association of the levels of IFN-alpha with the incidence of good CC perfusion restoration after systemic treatment with IFN-alpha was assessed in a mice hind-limb ischemia model. Results: Compared with the first IFN-alpha tertile, the risk of poor CC was higher in the third IFN-alpha tertile (OR: 4.79, 95% CI: 2.22-10.4, p < .001). A cubic spline-smoothing curve showed that the risk of poor CC increased with increasing levels of serum IFN-alpha. IFN-alpha inhibited the development of CC in a hindlimb ischemia model. Arterioles of CC in the IFN-alpha group were smaller in diameter than in the control group. Conclusions: Patients with CTO and with poor CC have higher serum levels of IFN-alpha than CTO patients with good CC. IFN-alpha might impair the development of CC after artery occlusion.
BACKGROUND: Development of collateral circulation (CC), also termed as arteriogenesis, is a natural life‐preservation mechanism occurring in patients with coronary chronic total occlusion (CTO). Well‐developed CC preserves cardiac function, thus reducing cardiac mortality after coronary occlusion. 1 , 2 Patients with CTO show a great deal of heterogeneity in their arteriogenic responses to artery occlusion. This is affected by multiple factors, including inflammatory factors. 3 The interaction between inflammation and arteriogenesis may be worthy of study, but data on the correlation between arteriogenesis and inflammation are limited. A previous study has found that interferon (IFN) signaling in monocytes is enhanced and stimulated by lipopolysaccharide among patients with impaired coronary CC. 4 Furthermore, blocking the receptor of IFN have been found to be helpful in alleviating ischemia in mice. 5 However, whether patients with poor CC have higher levels of serum IFN or whether treatment with IFN inhibits the development of arteriogenesis remains unknown. IFN‐alpha is widely used in treating hepatitis B and C, as well as various cancers. IFN‐alpha might affect the development of CC in patients with CTO. In this study, we aimed to investigate whether patients with poor CC have higher levels of IFN‐alpha, and if so, whether IFN‐alpha inhibits the development of CC. CONCLUSION: CTO patients with poor CC have higher serum levels of IFN‐alpha than CTO patients with good CC. IFN‐alpha might impair the development of CC after artery occlusion.
Background: Previous studies have demonstrated that interferon (IFN) signaling is enhanced in patients with poor collateral circulation (CC). However, the role and mechanisms of IFN-alpha in the development of CC remain unknown. Methods: We studied the serum levels of IFN-alpha and coronary CC in a case-control study using logistics regression, including 114 coronary chronic total occlusion (CTO) patients with good coronary CC and 94 CTO patients with poor coronary CC. Restricted cubic splines was used to flexibly model the association of the levels of IFN-alpha with the incidence of good CC perfusion restoration after systemic treatment with IFN-alpha was assessed in a mice hind-limb ischemia model. Results: Compared with the first IFN-alpha tertile, the risk of poor CC was higher in the third IFN-alpha tertile (OR: 4.79, 95% CI: 2.22-10.4, p < .001). A cubic spline-smoothing curve showed that the risk of poor CC increased with increasing levels of serum IFN-alpha. IFN-alpha inhibited the development of CC in a hindlimb ischemia model. Arterioles of CC in the IFN-alpha group were smaller in diameter than in the control group. Conclusions: Patients with CTO and with poor CC have higher serum levels of IFN-alpha than CTO patients with good CC. IFN-alpha might impair the development of CC after artery occlusion.
5,275
277
[ 324, 207, 42, 353, 180, 60 ]
13
[ "cc", "alpha", "ifn", "ifn alpha", "patients", "model", "levels", "levels ifn alpha", "levels ifn", "development" ]
[ "arteriogenesis including coronary", "mechanism arteriogenesis ischemic", "inflammatory angiogenic factors", "inflammation arteriogenesis worthy", "interaction inflammation arteriogenesis" ]
null
[CONTENT] collateral circulation | coronary chronic total occlusion | IFN‐alpha [SUMMARY]
null
[CONTENT] collateral circulation | coronary chronic total occlusion | IFN‐alpha [SUMMARY]
[CONTENT] collateral circulation | coronary chronic total occlusion | IFN‐alpha [SUMMARY]
[CONTENT] collateral circulation | coronary chronic total occlusion | IFN‐alpha [SUMMARY]
[CONTENT] collateral circulation | coronary chronic total occlusion | IFN‐alpha [SUMMARY]
[CONTENT] Animals | Arteries | Case-Control Studies | Chronic Disease | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Humans | Interferon-alpha | Mice | Percutaneous Coronary Intervention [SUMMARY]
null
[CONTENT] Animals | Arteries | Case-Control Studies | Chronic Disease | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Humans | Interferon-alpha | Mice | Percutaneous Coronary Intervention [SUMMARY]
[CONTENT] Animals | Arteries | Case-Control Studies | Chronic Disease | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Humans | Interferon-alpha | Mice | Percutaneous Coronary Intervention [SUMMARY]
[CONTENT] Animals | Arteries | Case-Control Studies | Chronic Disease | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Humans | Interferon-alpha | Mice | Percutaneous Coronary Intervention [SUMMARY]
[CONTENT] Animals | Arteries | Case-Control Studies | Chronic Disease | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Humans | Interferon-alpha | Mice | Percutaneous Coronary Intervention [SUMMARY]
[CONTENT] arteriogenesis including coronary | mechanism arteriogenesis ischemic | inflammatory angiogenic factors | inflammation arteriogenesis worthy | interaction inflammation arteriogenesis [SUMMARY]
null
[CONTENT] arteriogenesis including coronary | mechanism arteriogenesis ischemic | inflammatory angiogenic factors | inflammation arteriogenesis worthy | interaction inflammation arteriogenesis [SUMMARY]
[CONTENT] arteriogenesis including coronary | mechanism arteriogenesis ischemic | inflammatory angiogenic factors | inflammation arteriogenesis worthy | interaction inflammation arteriogenesis [SUMMARY]
[CONTENT] arteriogenesis including coronary | mechanism arteriogenesis ischemic | inflammatory angiogenic factors | inflammation arteriogenesis worthy | interaction inflammation arteriogenesis [SUMMARY]
[CONTENT] arteriogenesis including coronary | mechanism arteriogenesis ischemic | inflammatory angiogenic factors | inflammation arteriogenesis worthy | interaction inflammation arteriogenesis [SUMMARY]
[CONTENT] cc | alpha | ifn | ifn alpha | patients | model | levels | levels ifn alpha | levels ifn | development [SUMMARY]
null
[CONTENT] cc | alpha | ifn | ifn alpha | patients | model | levels | levels ifn alpha | levels ifn | development [SUMMARY]
[CONTENT] cc | alpha | ifn | ifn alpha | patients | model | levels | levels ifn alpha | levels ifn | development [SUMMARY]
[CONTENT] cc | alpha | ifn | ifn alpha | patients | model | levels | levels ifn alpha | levels ifn | development [SUMMARY]
[CONTENT] cc | alpha | ifn | ifn alpha | patients | model | levels | levels ifn alpha | levels ifn | development [SUMMARY]
[CONTENT] ifn | cc | arteriogenesis | patients | cc higher levels | inflammation | poor cc higher levels | higher levels | development | occlusion [SUMMARY]
null
[CONTENT] alpha | ifn | ifn alpha | levels | cc | levels ifn | levels ifn alpha | serum | model | figure [SUMMARY]
[CONTENT] cto patients | cc | cto | ifn alpha impair | alpha impair | ifn alpha impair development | cto patients good cc | cto patients good | patients good cc ifn | good cc ifn [SUMMARY]
[CONTENT] cc | ifn | alpha | model | ifn alpha | patients | levels | development | coronary | limb [SUMMARY]
[CONTENT] cc | ifn | alpha | model | ifn alpha | patients | levels | development | coronary | limb [SUMMARY]
[CONTENT] IFN | CC ||| IFN | CC [SUMMARY]
null
[CONTENT] first | IFN | CC | third | IFN | 4.79 | 95% | CI | 2.22 | .001 ||| CC | IFN ||| IFN | CC ||| IFN [SUMMARY]
[CONTENT] CTO | CC | IFN | CTO | CC ||| IFN | CC [SUMMARY]
[CONTENT] IFN | CC ||| IFN | CC ||| IFN | CC | 114 | CTO | CC | 94 | CTO | CC ||| Restricted | IFN | CC | IFN ||| first | IFN | CC | third | IFN | 4.79 | 95% | CI | 2.22 | .001 ||| CC | IFN ||| IFN | CC ||| IFN ||| CTO | CC | IFN | CTO | CC ||| IFN | CC [SUMMARY]
[CONTENT] IFN | CC ||| IFN | CC ||| IFN | CC | 114 | CTO | CC | 94 | CTO | CC ||| Restricted | IFN | CC | IFN ||| first | IFN | CC | third | IFN | 4.79 | 95% | CI | 2.22 | .001 ||| CC | IFN ||| IFN | CC ||| IFN ||| CTO | CC | IFN | CTO | CC ||| IFN | CC [SUMMARY]
Next-generation sequencing for the genetic characterization of Maedi/Visna virus isolated from the northwest of China.
34697919
Maedi/Visna virus (MVV) is a contagious viral pathogen that causes considerable economic losses to the sheep industry worldwide.
BACKGROUND
Therefore, in this study, we conducted next-generation sequencing on an MVV strain obtained from northwest China to reveal its genetic evolution via phylogenetic analysis.
METHODS
An MVV strain obtained from Inner Mongolia (NM) of China was identified. Sequence analysis indicated that its whole-genome length is 9193 bp. Homology comparison of nucleotides between the NM strain and reference strains showed that the sequence homologies of gag and env were 77.1%-86.8% and 67.7%-75.5%, respectively. Phylogenetic analysis revealed that the NM strain was closely related to the reference strains isolated from America, which belong to the A2 type. Notably, there were 5 amino acid insertions in variable region 4 and a highly variable motif at the C-terminus of the surface glycoprotein (SU5).
RESULTS
The present study is the first to show the whole-genome sequence of an MVV obtained from China. The detailed analyses provide essential information for understanding the genetic characteristics of MVV, and the results enrich the MVV library.
CONCLUSIONS
[ "Animals", "China", "High-Throughput Nucleotide Sequencing", "Phylogeny", "Pneumonia, Progressive Interstitial, of Sheep", "Sheep", "Sheep Diseases", "Visna-maedi virus" ]
8636652
INTRODUCTION
Maedi/Visna virus (MVV) belongs to the Retroviridae family and Lentivirus genus and is a monocyte/macrophage-tropic non-oncogenic virus [1]. MVV infection is persistent and chronic and is characterized by slow progressive degenerative inflammation in multiple organs, including lungs, brain, udder, and joints in sheep and goats [2]. Labored breathing associated with emaciation caused by progressive pneumonitis is the predominant feature in clinically affected sheep [3]. MVV is epidemic globally, except for Australia and New Zealand, resulting in considerable economic losses [4]. The MVV genome is a single-stranded RNA molecule with positive-sense polarity and contains 3 primary genes (gag, pol, and env) and several regulatory genes (e.g., vif, vpr-like, and rev) [5]. The gag gene encodes the internal structural proteins, such as the capsid protein (CA), which stimulates the host to produce antibodies. The pol gene encodes enzymes involved in replication and DNA integration, such as reverse transcriptase [6]. These 2 genomic regions, the gag-pol (1.8 kb) and the pol (1.2 kb), were initially proposed by Shah et al. [7] to be used in the classification of MVV subtype strains. Subsequently, many other strains were added to the phylogenetic tree, but the additions were often based on the sequence of only a small part of one of the fragments initially proposed by Olech et al. [8]. The env gene encodes surface and transmembrane glycoproteins that insert into the envelope. The surface glycoprotein (SU) contains genetically variable domains, thereby determining the antigenic variability of the different isolates. The use of next-generation sequencing (NGS), also called high throughput sequencing, has rapidly expanded over recent years [9,10,11]. Due to its high accuracy and high throughput capacity, Illumina, Inc. is the dominant player in the NGS arena. Illumina NGS supports various protocols (e.g., genomic sequencing, RNA sequencing) and provides varying levels of throughput, including the MiniSeq, MiSeq, and HiSeq models [12]. The presence of Maedi-Visna disease in China was first reported in 1966. Serological diagnosis revealed that MVV was present in several regions of China, except Inner Mongolia (NM), which is one of the major provinces of sheep production in China. However, the lack of a complete genome sequence of the Chinese MVV strains has hampered comprehension of their genetic characteristics. We have worked on ovine viral pneumonia for several years, and a case of Maedi-Visna disease was recently identified in the western portion of NM. To elucidate the molecular characteristics of our isolate, NGS was applied to obtain a complete genome sequence. Moreover, phylogenetic information on our isolate was analyzed, which may assist in elucidating the genetic evolution of MVV.
null
null
RESULTS
Characterization of MVV in sheep lung tissue Gross appearance showed that the volume of the affected sheep lung was increased by 2- to 3-fold. Nodular lesions were scattered or densely distributed in local areas, and the surrounding tissue was a dark red color and provided a hard tactile sensation. Fibrotic focuses were scattered in the pleural surface (Fig. 1A). Histopathologically, lymphocytic and lymphofollicular hyperplasias were scattered or locally distributed in the pulmonary interstitium. Lymphoid follicles occurred before the peripheral bronchioles, and the germinal centers of some lymphoid follicles were obvious. There were significant proliferations of lymphocytes, macrophages, and a small number of fibroblasts in the alveolar septum. Type II epithelial cells on the inner surface of the alveolar wall had proliferated to varying degrees. Additionally, some alveolar cavities were filled with red homogeneous serous fluid as well as some macrophages and lymphocytes (Fig. 1B). A PCR procedure was performed to amplify the LTR sequence of the MVV. Detection by gel electrophoresis confirmed that the lung tissue was MVV-positive (Fig. 1C). Furthermore, electron microscope-based examination of purified MVV showed that the viral particles were approximately 80 nm (Fig. 1D). M, marker; MVV, Maedi/Visna virus. Gross appearance showed that the volume of the affected sheep lung was increased by 2- to 3-fold. Nodular lesions were scattered or densely distributed in local areas, and the surrounding tissue was a dark red color and provided a hard tactile sensation. Fibrotic focuses were scattered in the pleural surface (Fig. 1A). Histopathologically, lymphocytic and lymphofollicular hyperplasias were scattered or locally distributed in the pulmonary interstitium. Lymphoid follicles occurred before the peripheral bronchioles, and the germinal centers of some lymphoid follicles were obvious. There were significant proliferations of lymphocytes, macrophages, and a small number of fibroblasts in the alveolar septum. Type II epithelial cells on the inner surface of the alveolar wall had proliferated to varying degrees. Additionally, some alveolar cavities were filled with red homogeneous serous fluid as well as some macrophages and lymphocytes (Fig. 1B). A PCR procedure was performed to amplify the LTR sequence of the MVV. Detection by gel electrophoresis confirmed that the lung tissue was MVV-positive (Fig. 1C). Furthermore, electron microscope-based examination of purified MVV showed that the viral particles were approximately 80 nm (Fig. 1D). M, marker; MVV, Maedi/Visna virus. Complete genome sequence of the NM1111 strain A total of 1.1 GB of valid data was obtained, and the data were uniformly distributed. The sequencing depth was up to 188,295 times. The values of Q20 and Q30 were 96.64% and 90.48%, respectively, and the G+C content was 40.43%. These results reveal that our data exhibited good sequencing quality. Finally, the isolate contained 9,193 nucleotides after splicing and optimization. Three primary encoding genes, the gag gene, position at 502 to 1,845, contained 1,344 nucleotides; the pol gene, position at 1716 to 5036, contained 3321 nucleotides; the env gene, position at 5,985 to 8,948, contained 2,964 nucleotides (Table 1, Fig. 2). The complete genome sequence of MVV was uploaded, named NM1111, to GenBank with the accession number MW248464 (Supplementary Data 1). LTR, long terminal repeat; N/A, not applicable. LTR, long terminal repeat. 
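The run-level quality metrics reported above (Q20, Q30, G+C content, mean depth) can be recomputed directly from the raw reads. Below is a small Biopython sketch; the FASTQ file name is hypothetical, and the depth estimate simply divides total read bases by the assembled genome length, which assumes essentially all reads map to the genome.

```python
from Bio import SeqIO

q20 = q30 = total_bases = gc_bases = 0

# Iterate over the raw Illumina reads (file name is an assumption)
for record in SeqIO.parse("NM1111_reads.fastq", "fastq"):
    quals = record.letter_annotations["phred_quality"]
    total_bases += len(quals)
    q20 += sum(q >= 20 for q in quals)
    q30 += sum(q >= 30 for q in quals)
    seq = str(record.seq).upper()
    gc_bases += seq.count("G") + seq.count("C")

genome_length = 9193  # assembled NM1111 genome length reported above
print(f"Q20: {100 * q20 / total_bases:.2f}%")
print(f"Q30: {100 * q30 / total_bases:.2f}%")
print(f"G+C content: {100 * gc_bases / total_bases:.2f}%")
print(f"Approximate mean depth: {total_bases / genome_length:.0f}x")
```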
A total of 1.1 GB of valid data was obtained, and the data were uniformly distributed. The sequencing depth was up to 188,295 times. The values of Q20 and Q30 were 96.64% and 90.48%, respectively, and the G+C content was 40.43%. These results reveal that our data exhibited good sequencing quality. Finally, the isolate contained 9,193 nucleotides after splicing and optimization. Three primary encoding genes, the gag gene, position at 502 to 1,845, contained 1,344 nucleotides; the pol gene, position at 1716 to 5036, contained 3321 nucleotides; the env gene, position at 5,985 to 8,948, contained 2,964 nucleotides (Table 1, Fig. 2). The complete genome sequence of MVV was uploaded, named NM1111, to GenBank with the accession number MW248464 (Supplementary Data 1). LTR, long terminal repeat; N/A, not applicable. LTR, long terminal repeat. Phylogenetic analysis A neighbor-joining tree was constructed using the whole-genome sequence of NM1111 and the available reference strains. The phylogenetic analysis showed that the NM1111 strain was clustered in a branch of MVV type A2 (Fig. 3). Meanwhile, homology comparisons showed that the gag and pol of NM1111 had a close relationship with American strain MH916859 (Table 2). In contrast, the env had a lower sequence homology with the reference strains, suggesting the development of antigenic variability with evolution. Amino acid homology analysis indicated that the gag protein is highly conserved, especially in 2 dominant epitopes (epitope2 and epitope3). However, within the highly homologous epitope2, the Sichuan strain (KR011757) mutated from alanine (A) to valine (V) at position 221. Notably, the major homology region (MHR) of NM1111 mutated from aspartic acid (D) to glutamic acid (E) at position 296 (Table 3). The SU region had a high degree of sequence variation, especially in variable region 4 (V4) with 5 amino acid insertions (position at 554-555, 564-566) (Table 4). Immunodominant epitope SU5 had a conserved motif (VRAYTYGV) located at the N-terminal part and a highly variable motif at the C-terminus (Table 4). NM, Inner Mongolia. N/A, not applicable. MHR, major homology region. A neighbor-joining tree was constructed using the whole-genome sequence of NM1111 and the available reference strains. The phylogenetic analysis showed that the NM1111 strain was clustered in a branch of MVV type A2 (Fig. 3). Meanwhile, homology comparisons showed that the gag and pol of NM1111 had a close relationship with American strain MH916859 (Table 2). In contrast, the env had a lower sequence homology with the reference strains, suggesting the development of antigenic variability with evolution. Amino acid homology analysis indicated that the gag protein is highly conserved, especially in 2 dominant epitopes (epitope2 and epitope3). However, within the highly homologous epitope2, the Sichuan strain (KR011757) mutated from alanine (A) to valine (V) at position 221. Notably, the major homology region (MHR) of NM1111 mutated from aspartic acid (D) to glutamic acid (E) at position 296 (Table 3). The SU region had a high degree of sequence variation, especially in variable region 4 (V4) with 5 amino acid insertions (position at 554-555, 564-566) (Table 4). Immunodominant epitope SU5 had a conserved motif (VRAYTYGV) located at the N-terminal part and a highly variable motif at the C-terminus (Table 4). NM, Inner Mongolia. N/A, not applicable. MHR, major homology region.
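The neighbor-joining analysis summarized above was carried out with dedicated phylogenetics software in the original work; an equivalent sketch using Biopython is shown here, assuming a pre-computed multiple alignment of NM1111 and the reference genomes (the file name and strain labels are illustrative, and the 1,000-replicate bootstrap is omitted).

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Multiple alignment of NM1111 with the reference strains (assumed file)
alignment = AlignIO.read("mvv_alignment.fasta", "fasta")

# Pairwise identity-based distances; (1 - distance) * 100 corresponds to the
# percent nucleotide homology used in the comparisons above
calculator = DistanceCalculator("identity")
dm = calculator.get_distance(alignment)

# Neighbor-joining tree from the distance matrix
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(dm)
Phylo.draw_ascii(nj_tree)

# Example: percent identity between NM1111 and one reference strain
print(100 * (1 - dm["NM1111", "MH916859"]))
```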
null
null
[ "Ethics statement", "Sample collection and hematoxylin and eosin (H&E) staining", "Polymerase chain reaction (PCR) identification", "Virus enrichment and identification", "Sequencing and genome assembly", "Sequence alignment and polygenetic analysis", "Characterization of MVV in sheep lung tissue", "Complete genome sequence of the NM1111 strain", "Phylogenetic analysis" ]
[ "This work does not contain any studies performed on living animals. Collection of lung tissues conformed to the experimental practices and standards approved by the animal welfare and research ethics committee of NM Agricultural University (approval ID: 2020007).", "A 3-year-old Dorper ewe with serve dyspnea, cough, and wheezing was provided by a large-scale sheep farm in NM and was sacrificed for sample collection. Lung samples were kept in 10% formalin or frozen at −80°C. The lung samples were fixed in 10% formalin and processed by applying standard procedures for pathological examination. After processing, 3–5 mm-thick sections were stained with H&E for microscope examination.", "MVV identification by PCR was performed as previously described [13]. Briefly, a pair of primers (F: 5′-TGACACAGCAAATGTAACCGCAAG-3′; R: 5′- CCACGTTGGGCGCCAGCTGCGAGA-3′) were used to amplify a 291 bp fragment of the long terminal repeat (LTR) region of pro-viral DNA. MVV was proliferated in choroid plexus cells and, after 3 passages, were used as positive controls. Healthy lung tissue and RNase-free and DNase-free water were used as negative and blank controls, respectively.", "PCR-positive lung tissue was ground and then made into a suspension by adding a 5-fold volume of PBS. Viruses were released by performing 3 freezing and thawing cycles. The supernatants were collected after centrifugation at 4,000 r/min for 30 min, repeated 3 times, and then centrifuged at 12,000 r/min for 1 h. The obtained supernatant was centrifuged at 35,000 r/min for 3 h, and the pellet was resuspended in TNE buffer. After centrifugation of the suspension at 5,000 r/min for 10 min, the supernatant was passed through a 25% sucrose cushion and centrifuged at 35,000 r/min for 4 h. Viruses were purified by a 20%–50% gradient of sucrose and ultracentrifugation. The final acquisition was negatively stained with 2% uranyl acetate and examined by electron microscopy.", "Viral RNA was extracted using the TRIzol (TAKARA, China) reagent, then reverse-transcribed into complementary DNA (cDNA) with the Prime ScriptTM RT reagent kit and g DNA Eraser (TAKARA), according to the manufacturer's recommendations. A viral cDNA library was constructed and sequenced by Shanghai Biozeron Biological Technology Co. Ltd. Briefly, 1 μg of cDNA was used with Illumina's TruSeq™ Nano DNA Sample Prep Kit for library preparation. Libraries were sequenced on the Illumina HiSeq 4000 platform at 2 × 150 bp read length. Genome assembly was performed using ABySS [14] to achieve optimal results, and corrections for single-base polymorphism and the infilling of remaining gaps were conducted in SOAPdenovo [15].", "Sequence alignment was performed using DNASTAR Lasergene (version 7.1.0). To evaluate the relationship between the reference strains and the NM strain, a phylogenetic tree was constructed using the neighbor-joining method provided in molecular evolutionary genetics analysis software (version 7.0). Bootstrap values were estimated for 1,000 replicates.", "Gross appearance showed that the volume of the affected sheep lung was increased by 2- to 3-fold. Nodular lesions were scattered or densely distributed in local areas, and the surrounding tissue was a dark red color and provided a hard tactile sensation. Fibrotic focuses were scattered in the pleural surface (Fig. 1A). Histopathologically, lymphocytic and lymphofollicular hyperplasias were scattered or locally distributed in the pulmonary interstitium. 
Lymphoid follicles occurred before the peripheral bronchioles, and the germinal centers of some lymphoid follicles were obvious. There were significant proliferations of lymphocytes, macrophages, and a small number of fibroblasts in the alveolar septum. Type II epithelial cells on the inner surface of the alveolar wall had proliferated to varying degrees. Additionally, some alveolar cavities were filled with red homogeneous serous fluid as well as some macrophages and lymphocytes (Fig. 1B). A PCR procedure was performed to amplify the LTR sequence of the MVV. Detection by gel electrophoresis confirmed that the lung tissue was MVV-positive (Fig. 1C). Furthermore, electron microscope-based examination of purified MVV showed that the viral particles were approximately 80 nm (Fig. 1D).\nM, marker; MVV, Maedi/Visna virus.", "A total of 1.1 GB of valid data was obtained, and the data were uniformly distributed. The sequencing depth was up to 188,295 times. The values of Q20 and Q30 were 96.64% and 90.48%, respectively, and the G+C content was 40.43%. These results reveal that our data exhibited good sequencing quality. Finally, the isolate contained 9,193 nucleotides after splicing and optimization. Three primary encoding genes, the gag gene, position at 502 to 1,845, contained 1,344 nucleotides; the pol gene, position at 1716 to 5036, contained 3321 nucleotides; the env gene, position at 5,985 to 8,948, contained 2,964 nucleotides (Table 1, Fig. 2). The complete genome sequence of MVV was uploaded, named NM1111, to GenBank with the accession number MW248464 (Supplementary Data 1).\nLTR, long terminal repeat; N/A, not applicable.\nLTR, long terminal repeat.", "A neighbor-joining tree was constructed using the whole-genome sequence of NM1111 and the available reference strains. The phylogenetic analysis showed that the NM1111 strain was clustered in a branch of MMV type A2 (Fig. 3). Meanwhile, homology comparisons showed that the gag and pol of NM1111 had a close relationship with American strain MH916859 (Table 2). In contrast, the env had a lower sequence homology with the reference strains, suggesting the development of antigenic variability with evolution. Amino acid homology analysis indicated that the gag protein is highly conserved, especially in 2 dominant epitopes (epitope2 and epitope3). However, within the highly homologous epitope2, the Sichuan strain (KR011757) mutated from alanine (A) to valine (V) at position 221. Notably, the major homology region (MHR) region of NM1111 mutated from aspartic acid (D) to glutamic acid (E) at position 296 (Table 3). The SU region had a high degree of sequence variation, especially in variable region 4 (V4) with 5 amino acid insertions (position at 554-555, 564-566) (Table 4). Immunodominant epitope SU5 had a conserved motif (VRAYTYGV) located at the N-terminal part and a highly variable motif at the C-terminal (Table 4).\nNM, Inner Mongolia.\nN/A, not applicable.\nMHR, major homology region." ]
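As a quick in-silico check on the LTR PCR described in the Methods (a 291 bp product), the primer-binding positions and expected amplicon length can be located programmatically; this sketch reuses the primer sequences quoted in the Methods, while the target sequence file is an assumption.

```python
from Bio import SeqIO
from Bio.Seq import Seq

# LTR primers quoted in the Methods section
fwd = "TGACACAGCAAATGTAACCGCAAG"
rev = "CCACGTTGGGCGCCAGCTGCGAGA"

# Proviral sequence containing the LTR region (file name is hypothetical)
target = str(SeqIO.read("mvv_ltr_region.fasta", "fasta").seq).upper()

fwd_start = target.find(fwd)
# The reverse primer anneals to the plus strand as its reverse complement
rev_site = str(Seq(rev).reverse_complement())
rev_start = target.find(rev_site)

if fwd_start == -1 or rev_start == -1:
    print("Primer site(s) not found - check for strain-specific mismatches")
else:
    amplicon_len = rev_start + len(rev_site) - fwd_start
    print(f"Expected amplicon length: {amplicon_len} bp (reported: 291 bp)")
```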
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Ethics statement", "Sample collection and hematoxylin and eosin (H&E) staining", "Polymerase chain reaction (PCR) identification", "Virus enrichment and identification", "Sequencing and genome assembly", "Sequence alignment and polygenetic analysis", "RESULTS", "Characterization of MVV in sheep lung tissue", "Complete genome sequence of the NM1111 strain", "Phylogenetic analysis", "DISCUSSION" ]
[ "Maedi/Visna virus (MVV) belongs to the Retroviridae family and Lentivirus genus and is a monocyte/macrophage-tropic non-oncogenic virus [1]. MVV infection is persistent and chronic and is characterized by slow progressive degenerative inflammation in multiple organs, including lungs, brain, udder, and joints in sheep and goats [2]. Labored breathing associated with emaciation caused by progressive pneumonitis is the predominant feature in clinically affected sheep [3]. MMV is epidemic globally, except for Australia and New Zealand, resulting in considerable economic losses [4].\nThe MVV is a single-stranded RNA molecule with positive-sense polarity and contains 3 primary genes (gag, pol, and env) and several regulatory genes (e.g., vif, vpr-like, and rev) [5]. The gag gene encodes the internal structural proteins, such as the capsid protein (CA), which stimulates the host to produce antibodies. The pol gene encodes enzymes involved in replication and DNA integration, such as reverse transcriptase [6]. These 2 genomic regions, the gag-pol (1.8 kb) and the pol (1.2 kb), were initially proposed by Shah et al. [7] to be used in the classification of MVV subtype strains. Subsequently, many other strains were added to the phylogenetic tree, but the additions were often based on the sequence of only a small part of one of the fragments initially proposed by Olech et al. [8]. The env gene encodes surface and transmembrane glycoproteins that insert into the envelope. The surface glycoprotein (SU) contains genetically variable domains, thereby determining the antigenic variability of the different isolates.\nThe use of next-generation sequencing (NGS), also called high throughput sequencing, has rapidly expanded over recent years [91011]. Due to their high accuracy and high throughput capacity, Illumina, Inc. is the dominant player in the NGS arena. Illumina NGS supports various protocols (e.g., genomic sequencing, RNA sequencing) and provides varying levels of throughput, including the MiniSeq, MiSeq, and HiSeq models [12]. The presence of Maedi-Visna disease in China was first reported in 1966. Serological diagnosis revealed that MVV was present in several regions of China, except Inner Mongolia (NM), which is one of the major provinces of sheep production in China. However, the lack of a complete genome sequence of the Chinese MMV strains has hampered comprehension of its genetic characteristics. We have worked on ovine viral pneumonia for several years, and a case of Maedi-Visna disease was recently identified in the western portion of NM. To elucidate the molecular characteristics of our isolate, NGS was applied to obtain a complete genome sequence. Moreover, phylogenetic information on our isolate was analyzed, which may assist in elucidating the genetic evolution of MVV.", " Ethics statement This work does not contain any studies performed on living animals. Collection of lung tissues conformed to the experimental practices and standards approved by the animal welfare and research ethics committee of NM Agricultural University (approval ID: 2020007).\nThis work does not contain any studies performed on living animals. 
Collection of lung tissues conformed to the experimental practices and standards approved by the animal welfare and research ethics committee of NM Agricultural University (approval ID: 2020007).\n Sample collection and hematoxylin and eosin (H&E) staining A 3-year-old Dorper ewe with serve dyspnea, cough, and wheezing was provided by a large-scale sheep farm in NM and was sacrificed for sample collection. Lung samples were kept in 10% formalin or frozen at −80°C. The lung samples were fixed in 10% formalin and processed by applying standard procedures for pathological examination. After processing, 3–5 mm-thick sections were stained with H&E for microscope examination.\nA 3-year-old Dorper ewe with serve dyspnea, cough, and wheezing was provided by a large-scale sheep farm in NM and was sacrificed for sample collection. Lung samples were kept in 10% formalin or frozen at −80°C. The lung samples were fixed in 10% formalin and processed by applying standard procedures for pathological examination. After processing, 3–5 mm-thick sections were stained with H&E for microscope examination.\n Polymerase chain reaction (PCR) identification MVV identification by PCR was performed as previously described [13]. Briefly, a pair of primers (F: 5′-TGACACAGCAAATGTAACCGCAAG-3′; R: 5′- CCACGTTGGGCGCCAGCTGCGAGA-3′) were used to amplify a 291 bp fragment of the long terminal repeat (LTR) region of pro-viral DNA. MVV was proliferated in choroid plexus cells and, after 3 passages, were used as positive controls. Healthy lung tissue and RNase-free and DNase-free water were used as negative and blank controls, respectively.\nMVV identification by PCR was performed as previously described [13]. Briefly, a pair of primers (F: 5′-TGACACAGCAAATGTAACCGCAAG-3′; R: 5′- CCACGTTGGGCGCCAGCTGCGAGA-3′) were used to amplify a 291 bp fragment of the long terminal repeat (LTR) region of pro-viral DNA. MVV was proliferated in choroid plexus cells and, after 3 passages, were used as positive controls. Healthy lung tissue and RNase-free and DNase-free water were used as negative and blank controls, respectively.\n Virus enrichment and identification PCR-positive lung tissue was ground and then made into a suspension by adding a 5-fold volume of PBS. Viruses were released by performing 3 freezing and thawing cycles. The supernatants were collected after centrifugation at 4,000 r/min for 30 min, repeated 3 times, and then centrifuged at 12,000 r/min for 1 h. The obtained supernatant was centrifuged at 35,000 r/min for 3 h, and the pellet was resuspended in TNE buffer. After centrifugation of the suspension at 5,000 r/min for 10 min, the supernatant was passed through a 25% sucrose cushion and centrifuged at 35,000 r/min for 4 h. Viruses were purified by a 20%–50% gradient of sucrose and ultracentrifugation. The final acquisition was negatively stained with 2% uranyl acetate and examined by electron microscopy.\nPCR-positive lung tissue was ground and then made into a suspension by adding a 5-fold volume of PBS. Viruses were released by performing 3 freezing and thawing cycles. The supernatants were collected after centrifugation at 4,000 r/min for 30 min, repeated 3 times, and then centrifuged at 12,000 r/min for 1 h. The obtained supernatant was centrifuged at 35,000 r/min for 3 h, and the pellet was resuspended in TNE buffer. After centrifugation of the suspension at 5,000 r/min for 10 min, the supernatant was passed through a 25% sucrose cushion and centrifuged at 35,000 r/min for 4 h. 
Viruses were purified by a 20%–50% gradient of sucrose and ultracentrifugation. The final acquisition was negatively stained with 2% uranyl acetate and examined by electron microscopy.\n Sequencing and genome assembly Viral RNA was extracted using the TRIzol (TAKARA, China) reagent, then reverse-transcribed into complementary DNA (cDNA) with the Prime ScriptTM RT reagent kit and g DNA Eraser (TAKARA), according to the manufacturer's recommendations. A viral cDNA library was constructed and sequenced by Shanghai Biozeron Biological Technology Co. Ltd. Briefly, 1 μg of cDNA was used with Illumina's TruSeq™ Nano DNA Sample Prep Kit for library preparation. Libraries were sequenced on the Illumina HiSeq 4000 platform at 2 × 150 bp read length. Genome assembly was performed using ABySS [14] to achieve optimal results, and corrections for single-base polymorphism and the infilling of remaining gaps were conducted in SOAPdenovo [15].\nViral RNA was extracted using the TRIzol (TAKARA, China) reagent, then reverse-transcribed into complementary DNA (cDNA) with the Prime ScriptTM RT reagent kit and g DNA Eraser (TAKARA), according to the manufacturer's recommendations. A viral cDNA library was constructed and sequenced by Shanghai Biozeron Biological Technology Co. Ltd. Briefly, 1 μg of cDNA was used with Illumina's TruSeq™ Nano DNA Sample Prep Kit for library preparation. Libraries were sequenced on the Illumina HiSeq 4000 platform at 2 × 150 bp read length. Genome assembly was performed using ABySS [14] to achieve optimal results, and corrections for single-base polymorphism and the infilling of remaining gaps were conducted in SOAPdenovo [15].\n Sequence alignment and polygenetic analysis Sequence alignment was performed using DNASTAR Lasergene (version 7.1.0). To evaluate the relationship between the reference strains and the NM strain, a phylogenetic tree was constructed using the neighbor-joining method provided in molecular evolutionary genetics analysis software (version 7.0). Bootstrap values were estimated for 1,000 replicates.\nSequence alignment was performed using DNASTAR Lasergene (version 7.1.0). To evaluate the relationship between the reference strains and the NM strain, a phylogenetic tree was constructed using the neighbor-joining method provided in molecular evolutionary genetics analysis software (version 7.0). Bootstrap values were estimated for 1,000 replicates.", "This work does not contain any studies performed on living animals. Collection of lung tissues conformed to the experimental practices and standards approved by the animal welfare and research ethics committee of NM Agricultural University (approval ID: 2020007).", "A 3-year-old Dorper ewe with serve dyspnea, cough, and wheezing was provided by a large-scale sheep farm in NM and was sacrificed for sample collection. Lung samples were kept in 10% formalin or frozen at −80°C. The lung samples were fixed in 10% formalin and processed by applying standard procedures for pathological examination. After processing, 3–5 mm-thick sections were stained with H&E for microscope examination.", "MVV identification by PCR was performed as previously described [13]. Briefly, a pair of primers (F: 5′-TGACACAGCAAATGTAACCGCAAG-3′; R: 5′- CCACGTTGGGCGCCAGCTGCGAGA-3′) were used to amplify a 291 bp fragment of the long terminal repeat (LTR) region of pro-viral DNA. MVV was proliferated in choroid plexus cells and, after 3 passages, were used as positive controls. 
Healthy lung tissue and RNase-free and DNase-free water were used as negative and blank controls, respectively.", "PCR-positive lung tissue was ground and then made into a suspension by adding a 5-fold volume of PBS. Viruses were released by performing 3 freezing and thawing cycles. The supernatants were collected after centrifugation at 4,000 r/min for 30 min, repeated 3 times, and then centrifuged at 12,000 r/min for 1 h. The obtained supernatant was centrifuged at 35,000 r/min for 3 h, and the pellet was resuspended in TNE buffer. After centrifugation of the suspension at 5,000 r/min for 10 min, the supernatant was passed through a 25% sucrose cushion and centrifuged at 35,000 r/min for 4 h. Viruses were purified by a 20%–50% gradient of sucrose and ultracentrifugation. The final acquisition was negatively stained with 2% uranyl acetate and examined by electron microscopy.", "Viral RNA was extracted using the TRIzol (TAKARA, China) reagent, then reverse-transcribed into complementary DNA (cDNA) with the Prime ScriptTM RT reagent kit and g DNA Eraser (TAKARA), according to the manufacturer's recommendations. A viral cDNA library was constructed and sequenced by Shanghai Biozeron Biological Technology Co. Ltd. Briefly, 1 μg of cDNA was used with Illumina's TruSeq™ Nano DNA Sample Prep Kit for library preparation. Libraries were sequenced on the Illumina HiSeq 4000 platform at 2 × 150 bp read length. Genome assembly was performed using ABySS [14] to achieve optimal results, and corrections for single-base polymorphism and the infilling of remaining gaps were conducted in SOAPdenovo [15].", "Sequence alignment was performed using DNASTAR Lasergene (version 7.1.0). To evaluate the relationship between the reference strains and the NM strain, a phylogenetic tree was constructed using the neighbor-joining method provided in molecular evolutionary genetics analysis software (version 7.0). Bootstrap values were estimated for 1,000 replicates.", " Characterization of MVV in sheep lung tissue Gross appearance showed that the volume of the affected sheep lung was increased by 2- to 3-fold. Nodular lesions were scattered or densely distributed in local areas, and the surrounding tissue was a dark red color and provided a hard tactile sensation. Fibrotic focuses were scattered in the pleural surface (Fig. 1A). Histopathologically, lymphocytic and lymphofollicular hyperplasias were scattered or locally distributed in the pulmonary interstitium. Lymphoid follicles occurred before the peripheral bronchioles, and the germinal centers of some lymphoid follicles were obvious. There were significant proliferations of lymphocytes, macrophages, and a small number of fibroblasts in the alveolar septum. Type II epithelial cells on the inner surface of the alveolar wall had proliferated to varying degrees. Additionally, some alveolar cavities were filled with red homogeneous serous fluid as well as some macrophages and lymphocytes (Fig. 1B). A PCR procedure was performed to amplify the LTR sequence of the MVV. Detection by gel electrophoresis confirmed that the lung tissue was MVV-positive (Fig. 1C). Furthermore, electron microscope-based examination of purified MVV showed that the viral particles were approximately 80 nm (Fig. 1D).\nM, marker; MVV, Maedi/Visna virus.\nGross appearance showed that the volume of the affected sheep lung was increased by 2- to 3-fold. 
Nodular lesions were scattered or densely distributed in local areas, and the surrounding tissue was a dark red color and provided a hard tactile sensation. Fibrotic focuses were scattered in the pleural surface (Fig. 1A). Histopathologically, lymphocytic and lymphofollicular hyperplasias were scattered or locally distributed in the pulmonary interstitium. Lymphoid follicles occurred before the peripheral bronchioles, and the germinal centers of some lymphoid follicles were obvious. There were significant proliferations of lymphocytes, macrophages, and a small number of fibroblasts in the alveolar septum. Type II epithelial cells on the inner surface of the alveolar wall had proliferated to varying degrees. Additionally, some alveolar cavities were filled with red homogeneous serous fluid as well as some macrophages and lymphocytes (Fig. 1B). A PCR procedure was performed to amplify the LTR sequence of the MVV. Detection by gel electrophoresis confirmed that the lung tissue was MVV-positive (Fig. 1C). Furthermore, electron microscope-based examination of purified MVV showed that the viral particles were approximately 80 nm (Fig. 1D).\nM, marker; MVV, Maedi/Visna virus.\n Complete genome sequence of the NM1111 strain A total of 1.1 GB of valid data was obtained, and the data were uniformly distributed. The sequencing depth was up to 188,295 times. The values of Q20 and Q30 were 96.64% and 90.48%, respectively, and the G+C content was 40.43%. These results reveal that our data exhibited good sequencing quality. Finally, the isolate contained 9,193 nucleotides after splicing and optimization. Three primary encoding genes, the gag gene, position at 502 to 1,845, contained 1,344 nucleotides; the pol gene, position at 1716 to 5036, contained 3321 nucleotides; the env gene, position at 5,985 to 8,948, contained 2,964 nucleotides (Table 1, Fig. 2). The complete genome sequence of MVV was uploaded, named NM1111, to GenBank with the accession number MW248464 (Supplementary Data 1).\nLTR, long terminal repeat; N/A, not applicable.\nLTR, long terminal repeat.\nA total of 1.1 GB of valid data was obtained, and the data were uniformly distributed. The sequencing depth was up to 188,295 times. The values of Q20 and Q30 were 96.64% and 90.48%, respectively, and the G+C content was 40.43%. These results reveal that our data exhibited good sequencing quality. Finally, the isolate contained 9,193 nucleotides after splicing and optimization. Three primary encoding genes, the gag gene, position at 502 to 1,845, contained 1,344 nucleotides; the pol gene, position at 1716 to 5036, contained 3321 nucleotides; the env gene, position at 5,985 to 8,948, contained 2,964 nucleotides (Table 1, Fig. 2). The complete genome sequence of MVV was uploaded, named NM1111, to GenBank with the accession number MW248464 (Supplementary Data 1).\nLTR, long terminal repeat; N/A, not applicable.\nLTR, long terminal repeat.\n Phylogenetic analysis A neighbor-joining tree was constructed using the whole-genome sequence of NM1111 and the available reference strains. The phylogenetic analysis showed that the NM1111 strain was clustered in a branch of MMV type A2 (Fig. 3). Meanwhile, homology comparisons showed that the gag and pol of NM1111 had a close relationship with American strain MH916859 (Table 2). In contrast, the env had a lower sequence homology with the reference strains, suggesting the development of antigenic variability with evolution. 
Amino acid homology analysis indicated that the gag protein is highly conserved, especially in 2 dominant epitopes (epitope2 and epitope3). However, within the highly homologous epitope2, the Sichuan strain (KR011757) mutated from alanine (A) to valine (V) at position 221. Notably, the major homology region (MHR) region of NM1111 mutated from aspartic acid (D) to glutamic acid (E) at position 296 (Table 3). The SU region had a high degree of sequence variation, especially in variable region 4 (V4) with 5 amino acid insertions (position at 554-555, 564-566) (Table 4). Immunodominant epitope SU5 had a conserved motif (VRAYTYGV) located at the N-terminal part and a highly variable motif at the C-terminal (Table 4).\nNM, Inner Mongolia.\nN/A, not applicable.\nMHR, major homology region.\nA neighbor-joining tree was constructed using the whole-genome sequence of NM1111 and the available reference strains. The phylogenetic analysis showed that the NM1111 strain was clustered in a branch of MMV type A2 (Fig. 3). Meanwhile, homology comparisons showed that the gag and pol of NM1111 had a close relationship with American strain MH916859 (Table 2). In contrast, the env had a lower sequence homology with the reference strains, suggesting the development of antigenic variability with evolution. Amino acid homology analysis indicated that the gag protein is highly conserved, especially in 2 dominant epitopes (epitope2 and epitope3). However, within the highly homologous epitope2, the Sichuan strain (KR011757) mutated from alanine (A) to valine (V) at position 221. Notably, the major homology region (MHR) region of NM1111 mutated from aspartic acid (D) to glutamic acid (E) at position 296 (Table 3). The SU region had a high degree of sequence variation, especially in variable region 4 (V4) with 5 amino acid insertions (position at 554-555, 564-566) (Table 4). Immunodominant epitope SU5 had a conserved motif (VRAYTYGV) located at the N-terminal part and a highly variable motif at the C-terminal (Table 4).\nNM, Inner Mongolia.\nN/A, not applicable.\nMHR, major homology region.", "Gross appearance showed that the volume of the affected sheep lung was increased by 2- to 3-fold. Nodular lesions were scattered or densely distributed in local areas, and the surrounding tissue was a dark red color and provided a hard tactile sensation. Fibrotic focuses were scattered in the pleural surface (Fig. 1A). Histopathologically, lymphocytic and lymphofollicular hyperplasias were scattered or locally distributed in the pulmonary interstitium. Lymphoid follicles occurred before the peripheral bronchioles, and the germinal centers of some lymphoid follicles were obvious. There were significant proliferations of lymphocytes, macrophages, and a small number of fibroblasts in the alveolar septum. Type II epithelial cells on the inner surface of the alveolar wall had proliferated to varying degrees. Additionally, some alveolar cavities were filled with red homogeneous serous fluid as well as some macrophages and lymphocytes (Fig. 1B). A PCR procedure was performed to amplify the LTR sequence of the MVV. Detection by gel electrophoresis confirmed that the lung tissue was MVV-positive (Fig. 1C). Furthermore, electron microscope-based examination of purified MVV showed that the viral particles were approximately 80 nm (Fig. 1D).\nM, marker; MVV, Maedi/Visna virus.", "A total of 1.1 GB of valid data was obtained, and the data were uniformly distributed. The sequencing depth was up to 188,295 times. 
The values of Q20 and Q30 were 96.64% and 90.48%, respectively, and the G+C content was 40.43%. These results reveal that our data exhibited good sequencing quality. Finally, the isolate contained 9,193 nucleotides after splicing and optimization. Three primary encoding genes, the gag gene, position at 502 to 1,845, contained 1,344 nucleotides; the pol gene, position at 1716 to 5036, contained 3321 nucleotides; the env gene, position at 5,985 to 8,948, contained 2,964 nucleotides (Table 1, Fig. 2). The complete genome sequence of MVV was uploaded, named NM1111, to GenBank with the accession number MW248464 (Supplementary Data 1).\nLTR, long terminal repeat; N/A, not applicable.\nLTR, long terminal repeat.", "A neighbor-joining tree was constructed using the whole-genome sequence of NM1111 and the available reference strains. The phylogenetic analysis showed that the NM1111 strain was clustered in a branch of MMV type A2 (Fig. 3). Meanwhile, homology comparisons showed that the gag and pol of NM1111 had a close relationship with American strain MH916859 (Table 2). In contrast, the env had a lower sequence homology with the reference strains, suggesting the development of antigenic variability with evolution. Amino acid homology analysis indicated that the gag protein is highly conserved, especially in 2 dominant epitopes (epitope2 and epitope3). However, within the highly homologous epitope2, the Sichuan strain (KR011757) mutated from alanine (A) to valine (V) at position 221. Notably, the major homology region (MHR) region of NM1111 mutated from aspartic acid (D) to glutamic acid (E) at position 296 (Table 3). The SU region had a high degree of sequence variation, especially in variable region 4 (V4) with 5 amino acid insertions (position at 554-555, 564-566) (Table 4). Immunodominant epitope SU5 had a conserved motif (VRAYTYGV) located at the N-terminal part and a highly variable motif at the C-terminal (Table 4).\nNM, Inner Mongolia.\nN/A, not applicable.\nMHR, major homology region.", "NGS is a recent, powerful technique developed and applied for virus sequencing. In this study, the first complete genome sequence of MVV from China was obtained by NGS. Sequencing depth, high purity of the assembled data, the distribution of the GC content, and the sequencing coverage verified the reliability of the obtained results. Despite NGS being the most prevalent technology for sequencing at present, future complete genome sequencing of MVV isolates with recently developed third-generation sequencing would also be desirable.\nGag is relatively conserved in MVV and encodes a major protective antigen. CA, encoded by gag, contains 3 linear epitopes (epitope 2, MHR, and epitope 3) and can cause a strong antibody reaction during infection, thus making it of great value for serodiagnosis. These epitopes are also important for maintaining cross-reactivity in gag antigen-based serological tests [16]. The amino acid sequence analysis showed that the linear epitopes were highly conserved, but there was a mutation from aspartic acid (D) to glutamic acid (E) at position 296 in the MHR region, an observation similar to that for a type A5 strain from Poland [17]. MVV has been serologically detected in 12 regions of China (Yunnan, Guizhou, Gansu, Ningxia, Shandong, Sichuan, Hunan, Guangdong, Chongqing, Guangxi, Jilin, and Anhui), and the infection rate in those regions ranged from 4.60% to 50.00% [18]. However, when Sun et al. 
[19] carried out a serological investigation on MVV in China, the detection rate in NM was zero. In addition to the limited sample collection area, it was inferred that a mutation in the MHR region might also have contributed. Tanaka et al. [20] reported that, in addition to epitope2 and epitope3, the MHR is usually conserved in many retroviruses. Mutations in this region may destroy capsid protein assembly and reduce human immunodeficiency virus infectivity [20]. In this study, the MHR regions of the strains isolated thirty years ago, except for the South African strain, were shown to be conserved, but other strains exhibited varying degrees of mutation. Whether the mutations in this region could reduce the virulence of MVV needs to be further elucidated.\nEnv is the most mutated gene in MVV, which can often cause antigen drift. The V4 region of the SU domain can affect cell tropism and the synthesis of neutralizing antibodies [21]. Hence, mutations in this region have an important role in interactions between the virus and the host. Based on the analysis of the env protein, the mutation rate of the V4 region was high in our isolate, which is consistent with the results of Hötzel and Cheevers [22]. In addition, there were 5 amino acid insertions, suggesting a high probability of antigen drift. Skraban et al. [23] reported that the SU5 epitope is responsible for a specific immune response and could be used for strain-specific diagnosis. In this study, we observed a conserved motif at the N-terminus and a highly variable motif at the C-terminus of the SU5 region, observations that are consistent with the A5 strain of MVV isolated from Poland [17]. These observations emphasize the effectiveness of identifying different MVV subtypes within local genotypes [24].\nPhylogenetic analysis indicated that the NM1111 isolate was closely related to American strains. Meanwhile, in a neighbor-joining tree constructed from the gag sequences, the Sichuan strain (KR011757) fell along the same branch as an American strain (AY101611) (Supplementary Fig. 1). These results suggest that the main epidemic MVV strain in China was probably of type A2. Characterizing more MVV strains isolated from China will help elucidate the genetic evolution of MVV." ]
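The results above place gag at 502–1,845, pol at 1,716–5,036, and env at 5,985–8,948 in the 9,193-nt NM1111 genome deposited as GenBank MW248464. A short illustrative sketch, assuming network access and a placeholder e-mail address, fetches that record with Biopython and slices the three genes at the stated coordinates; the printed lengths should match the 1,344/3,321/2,964 nt reported in Table 1.

```python
# Illustrative sketch: fetch the NM1111 genome (GenBank accession MW248464, as
# reported above) and slice gag/pol/env at the coordinates given in the results.
# Assumptions: network access; the e-mail address is a placeholder you must set.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder required by NCBI

handle = Entrez.efetch(db="nucleotide", id="MW248464", rettype="fasta", retmode="text")
genome = SeqIO.read(handle, "fasta")
handle.close()

# 1-based, inclusive coordinates as reported in the paper
genes = {"gag": (502, 1845), "pol": (1716, 5036), "env": (5985, 8948)}

for name, (start, end) in genes.items():
    seq = genome.seq[start - 1:end]    # convert to 0-based, half-open slicing
    print(f"{name}: {len(seq)} nt")    # expected: gag 1,344; pol 3,321; env 2,964
```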
[ "intro", "materials|methods", null, null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "Maedi/Visna virus", "next-generation sequencing", "phylogenetic analysis", "sheep lung" ]
INTRODUCTION: Maedi/Visna virus (MVV) belongs to the Retroviridae family and Lentivirus genus and is a monocyte/macrophage-tropic non-oncogenic virus [1]. MVV infection is persistent and chronic and is characterized by slow progressive degenerative inflammation in multiple organs, including lungs, brain, udder, and joints in sheep and goats [2]. Labored breathing associated with emaciation caused by progressive pneumonitis is the predominant feature in clinically affected sheep [3]. MMV is epidemic globally, except for Australia and New Zealand, resulting in considerable economic losses [4]. The MVV is a single-stranded RNA molecule with positive-sense polarity and contains 3 primary genes (gag, pol, and env) and several regulatory genes (e.g., vif, vpr-like, and rev) [5]. The gag gene encodes the internal structural proteins, such as the capsid protein (CA), which stimulates the host to produce antibodies. The pol gene encodes enzymes involved in replication and DNA integration, such as reverse transcriptase [6]. These 2 genomic regions, the gag-pol (1.8 kb) and the pol (1.2 kb), were initially proposed by Shah et al. [7] to be used in the classification of MVV subtype strains. Subsequently, many other strains were added to the phylogenetic tree, but the additions were often based on the sequence of only a small part of one of the fragments initially proposed by Olech et al. [8]. The env gene encodes surface and transmembrane glycoproteins that insert into the envelope. The surface glycoprotein (SU) contains genetically variable domains, thereby determining the antigenic variability of the different isolates. The use of next-generation sequencing (NGS), also called high throughput sequencing, has rapidly expanded over recent years [91011]. Due to their high accuracy and high throughput capacity, Illumina, Inc. is the dominant player in the NGS arena. Illumina NGS supports various protocols (e.g., genomic sequencing, RNA sequencing) and provides varying levels of throughput, including the MiniSeq, MiSeq, and HiSeq models [12]. The presence of Maedi-Visna disease in China was first reported in 1966. Serological diagnosis revealed that MVV was present in several regions of China, except Inner Mongolia (NM), which is one of the major provinces of sheep production in China. However, the lack of a complete genome sequence of the Chinese MMV strains has hampered comprehension of its genetic characteristics. We have worked on ovine viral pneumonia for several years, and a case of Maedi-Visna disease was recently identified in the western portion of NM. To elucidate the molecular characteristics of our isolate, NGS was applied to obtain a complete genome sequence. Moreover, phylogenetic information on our isolate was analyzed, which may assist in elucidating the genetic evolution of MVV. MATERIALS AND METHODS: Ethics statement This work does not contain any studies performed on living animals. Collection of lung tissues conformed to the experimental practices and standards approved by the animal welfare and research ethics committee of NM Agricultural University (approval ID: 2020007). This work does not contain any studies performed on living animals. Collection of lung tissues conformed to the experimental practices and standards approved by the animal welfare and research ethics committee of NM Agricultural University (approval ID: 2020007). 
Sample collection and hematoxylin and eosin (H&E) staining A 3-year-old Dorper ewe with serve dyspnea, cough, and wheezing was provided by a large-scale sheep farm in NM and was sacrificed for sample collection. Lung samples were kept in 10% formalin or frozen at −80°C. The lung samples were fixed in 10% formalin and processed by applying standard procedures for pathological examination. After processing, 3–5 mm-thick sections were stained with H&E for microscope examination. A 3-year-old Dorper ewe with serve dyspnea, cough, and wheezing was provided by a large-scale sheep farm in NM and was sacrificed for sample collection. Lung samples were kept in 10% formalin or frozen at −80°C. The lung samples were fixed in 10% formalin and processed by applying standard procedures for pathological examination. After processing, 3–5 mm-thick sections were stained with H&E for microscope examination. Polymerase chain reaction (PCR) identification MVV identification by PCR was performed as previously described [13]. Briefly, a pair of primers (F: 5′-TGACACAGCAAATGTAACCGCAAG-3′; R: 5′- CCACGTTGGGCGCCAGCTGCGAGA-3′) were used to amplify a 291 bp fragment of the long terminal repeat (LTR) region of pro-viral DNA. MVV was proliferated in choroid plexus cells and, after 3 passages, were used as positive controls. Healthy lung tissue and RNase-free and DNase-free water were used as negative and blank controls, respectively. MVV identification by PCR was performed as previously described [13]. Briefly, a pair of primers (F: 5′-TGACACAGCAAATGTAACCGCAAG-3′; R: 5′- CCACGTTGGGCGCCAGCTGCGAGA-3′) were used to amplify a 291 bp fragment of the long terminal repeat (LTR) region of pro-viral DNA. MVV was proliferated in choroid plexus cells and, after 3 passages, were used as positive controls. Healthy lung tissue and RNase-free and DNase-free water were used as negative and blank controls, respectively. Virus enrichment and identification PCR-positive lung tissue was ground and then made into a suspension by adding a 5-fold volume of PBS. Viruses were released by performing 3 freezing and thawing cycles. The supernatants were collected after centrifugation at 4,000 r/min for 30 min, repeated 3 times, and then centrifuged at 12,000 r/min for 1 h. The obtained supernatant was centrifuged at 35,000 r/min for 3 h, and the pellet was resuspended in TNE buffer. After centrifugation of the suspension at 5,000 r/min for 10 min, the supernatant was passed through a 25% sucrose cushion and centrifuged at 35,000 r/min for 4 h. Viruses were purified by a 20%–50% gradient of sucrose and ultracentrifugation. The final acquisition was negatively stained with 2% uranyl acetate and examined by electron microscopy. PCR-positive lung tissue was ground and then made into a suspension by adding a 5-fold volume of PBS. Viruses were released by performing 3 freezing and thawing cycles. The supernatants were collected after centrifugation at 4,000 r/min for 30 min, repeated 3 times, and then centrifuged at 12,000 r/min for 1 h. The obtained supernatant was centrifuged at 35,000 r/min for 3 h, and the pellet was resuspended in TNE buffer. After centrifugation of the suspension at 5,000 r/min for 10 min, the supernatant was passed through a 25% sucrose cushion and centrifuged at 35,000 r/min for 4 h. Viruses were purified by a 20%–50% gradient of sucrose and ultracentrifugation. The final acquisition was negatively stained with 2% uranyl acetate and examined by electron microscopy. 
Sequencing and genome assembly Viral RNA was extracted using the TRIzol (TAKARA, China) reagent, then reverse-transcribed into complementary DNA (cDNA) with the Prime ScriptTM RT reagent kit and g DNA Eraser (TAKARA), according to the manufacturer's recommendations. A viral cDNA library was constructed and sequenced by Shanghai Biozeron Biological Technology Co. Ltd. Briefly, 1 μg of cDNA was used with Illumina's TruSeq™ Nano DNA Sample Prep Kit for library preparation. Libraries were sequenced on the Illumina HiSeq 4000 platform at 2 × 150 bp read length. Genome assembly was performed using ABySS [14] to achieve optimal results, and corrections for single-base polymorphism and the infilling of remaining gaps were conducted in SOAPdenovo [15]. Viral RNA was extracted using the TRIzol (TAKARA, China) reagent, then reverse-transcribed into complementary DNA (cDNA) with the Prime ScriptTM RT reagent kit and g DNA Eraser (TAKARA), according to the manufacturer's recommendations. A viral cDNA library was constructed and sequenced by Shanghai Biozeron Biological Technology Co. Ltd. Briefly, 1 μg of cDNA was used with Illumina's TruSeq™ Nano DNA Sample Prep Kit for library preparation. Libraries were sequenced on the Illumina HiSeq 4000 platform at 2 × 150 bp read length. Genome assembly was performed using ABySS [14] to achieve optimal results, and corrections for single-base polymorphism and the infilling of remaining gaps were conducted in SOAPdenovo [15]. Sequence alignment and polygenetic analysis Sequence alignment was performed using DNASTAR Lasergene (version 7.1.0). To evaluate the relationship between the reference strains and the NM strain, a phylogenetic tree was constructed using the neighbor-joining method provided in molecular evolutionary genetics analysis software (version 7.0). Bootstrap values were estimated for 1,000 replicates. Sequence alignment was performed using DNASTAR Lasergene (version 7.1.0). To evaluate the relationship between the reference strains and the NM strain, a phylogenetic tree was constructed using the neighbor-joining method provided in molecular evolutionary genetics analysis software (version 7.0). Bootstrap values were estimated for 1,000 replicates. Ethics statement: This work does not contain any studies performed on living animals. Collection of lung tissues conformed to the experimental practices and standards approved by the animal welfare and research ethics committee of NM Agricultural University (approval ID: 2020007). Sample collection and hematoxylin and eosin (H&E) staining: A 3-year-old Dorper ewe with serve dyspnea, cough, and wheezing was provided by a large-scale sheep farm in NM and was sacrificed for sample collection. Lung samples were kept in 10% formalin or frozen at −80°C. The lung samples were fixed in 10% formalin and processed by applying standard procedures for pathological examination. After processing, 3–5 mm-thick sections were stained with H&E for microscope examination. Polymerase chain reaction (PCR) identification: MVV identification by PCR was performed as previously described [13]. Briefly, a pair of primers (F: 5′-TGACACAGCAAATGTAACCGCAAG-3′; R: 5′- CCACGTTGGGCGCCAGCTGCGAGA-3′) were used to amplify a 291 bp fragment of the long terminal repeat (LTR) region of pro-viral DNA. MVV was proliferated in choroid plexus cells and, after 3 passages, were used as positive controls. Healthy lung tissue and RNase-free and DNase-free water were used as negative and blank controls, respectively. 
Virus enrichment and identification: PCR-positive lung tissue was ground and then made into a suspension by adding a 5-fold volume of PBS. Viruses were released by performing 3 freezing and thawing cycles. The supernatants were collected after centrifugation at 4,000 r/min for 30 min, repeated 3 times, and then centrifuged at 12,000 r/min for 1 h. The obtained supernatant was centrifuged at 35,000 r/min for 3 h, and the pellet was resuspended in TNE buffer. After centrifugation of the suspension at 5,000 r/min for 10 min, the supernatant was passed through a 25% sucrose cushion and centrifuged at 35,000 r/min for 4 h. Viruses were purified by a 20%–50% gradient of sucrose and ultracentrifugation. The final acquisition was negatively stained with 2% uranyl acetate and examined by electron microscopy. Sequencing and genome assembly: Viral RNA was extracted using the TRIzol (TAKARA, China) reagent, then reverse-transcribed into complementary DNA (cDNA) with the Prime ScriptTM RT reagent kit and g DNA Eraser (TAKARA), according to the manufacturer's recommendations. A viral cDNA library was constructed and sequenced by Shanghai Biozeron Biological Technology Co. Ltd. Briefly, 1 μg of cDNA was used with Illumina's TruSeq™ Nano DNA Sample Prep Kit for library preparation. Libraries were sequenced on the Illumina HiSeq 4000 platform at 2 × 150 bp read length. Genome assembly was performed using ABySS [14] to achieve optimal results, and corrections for single-base polymorphism and the infilling of remaining gaps were conducted in SOAPdenovo [15]. Sequence alignment and polygenetic analysis: Sequence alignment was performed using DNASTAR Lasergene (version 7.1.0). To evaluate the relationship between the reference strains and the NM strain, a phylogenetic tree was constructed using the neighbor-joining method provided in molecular evolutionary genetics analysis software (version 7.0). Bootstrap values were estimated for 1,000 replicates. RESULTS: Characterization of MVV in sheep lung tissue Gross appearance showed that the volume of the affected sheep lung was increased by 2- to 3-fold. Nodular lesions were scattered or densely distributed in local areas, and the surrounding tissue was a dark red color and provided a hard tactile sensation. Fibrotic focuses were scattered in the pleural surface (Fig. 1A). Histopathologically, lymphocytic and lymphofollicular hyperplasias were scattered or locally distributed in the pulmonary interstitium. Lymphoid follicles occurred before the peripheral bronchioles, and the germinal centers of some lymphoid follicles were obvious. There were significant proliferations of lymphocytes, macrophages, and a small number of fibroblasts in the alveolar septum. Type II epithelial cells on the inner surface of the alveolar wall had proliferated to varying degrees. Additionally, some alveolar cavities were filled with red homogeneous serous fluid as well as some macrophages and lymphocytes (Fig. 1B). A PCR procedure was performed to amplify the LTR sequence of the MVV. Detection by gel electrophoresis confirmed that the lung tissue was MVV-positive (Fig. 1C). Furthermore, electron microscope-based examination of purified MVV showed that the viral particles were approximately 80 nm (Fig. 1D). M, marker; MVV, Maedi/Visna virus. Gross appearance showed that the volume of the affected sheep lung was increased by 2- to 3-fold. 
Nodular lesions were scattered or densely distributed in local areas, and the surrounding tissue was a dark red color and provided a hard tactile sensation. Fibrotic focuses were scattered in the pleural surface (Fig. 1A). Histopathologically, lymphocytic and lymphofollicular hyperplasias were scattered or locally distributed in the pulmonary interstitium. Lymphoid follicles occurred before the peripheral bronchioles, and the germinal centers of some lymphoid follicles were obvious. There were significant proliferations of lymphocytes, macrophages, and a small number of fibroblasts in the alveolar septum. Type II epithelial cells on the inner surface of the alveolar wall had proliferated to varying degrees. Additionally, some alveolar cavities were filled with red homogeneous serous fluid as well as some macrophages and lymphocytes (Fig. 1B). A PCR procedure was performed to amplify the LTR sequence of the MVV. Detection by gel electrophoresis confirmed that the lung tissue was MVV-positive (Fig. 1C). Furthermore, electron microscope-based examination of purified MVV showed that the viral particles were approximately 80 nm (Fig. 1D). M, marker; MVV, Maedi/Visna virus. Complete genome sequence of the NM1111 strain A total of 1.1 GB of valid data was obtained, and the data were uniformly distributed. The sequencing depth was up to 188,295 times. The values of Q20 and Q30 were 96.64% and 90.48%, respectively, and the G+C content was 40.43%. These results reveal that our data exhibited good sequencing quality. Finally, the isolate contained 9,193 nucleotides after splicing and optimization. Three primary encoding genes, the gag gene, position at 502 to 1,845, contained 1,344 nucleotides; the pol gene, position at 1716 to 5036, contained 3321 nucleotides; the env gene, position at 5,985 to 8,948, contained 2,964 nucleotides (Table 1, Fig. 2). The complete genome sequence of MVV was uploaded, named NM1111, to GenBank with the accession number MW248464 (Supplementary Data 1). LTR, long terminal repeat; N/A, not applicable. LTR, long terminal repeat. A total of 1.1 GB of valid data was obtained, and the data were uniformly distributed. The sequencing depth was up to 188,295 times. The values of Q20 and Q30 were 96.64% and 90.48%, respectively, and the G+C content was 40.43%. These results reveal that our data exhibited good sequencing quality. Finally, the isolate contained 9,193 nucleotides after splicing and optimization. Three primary encoding genes, the gag gene, position at 502 to 1,845, contained 1,344 nucleotides; the pol gene, position at 1716 to 5036, contained 3321 nucleotides; the env gene, position at 5,985 to 8,948, contained 2,964 nucleotides (Table 1, Fig. 2). The complete genome sequence of MVV was uploaded, named NM1111, to GenBank with the accession number MW248464 (Supplementary Data 1). LTR, long terminal repeat; N/A, not applicable. LTR, long terminal repeat. Phylogenetic analysis A neighbor-joining tree was constructed using the whole-genome sequence of NM1111 and the available reference strains. The phylogenetic analysis showed that the NM1111 strain was clustered in a branch of MMV type A2 (Fig. 3). Meanwhile, homology comparisons showed that the gag and pol of NM1111 had a close relationship with American strain MH916859 (Table 2). In contrast, the env had a lower sequence homology with the reference strains, suggesting the development of antigenic variability with evolution. 
Amino acid homology analysis indicated that the gag protein is highly conserved, especially in 2 dominant epitopes (epitope2 and epitope3). However, within the highly homologous epitope2, the Sichuan strain (KR011757) mutated from alanine (A) to valine (V) at position 221. Notably, the major homology region (MHR) region of NM1111 mutated from aspartic acid (D) to glutamic acid (E) at position 296 (Table 3). The SU region had a high degree of sequence variation, especially in variable region 4 (V4) with 5 amino acid insertions (position at 554-555, 564-566) (Table 4). Immunodominant epitope SU5 had a conserved motif (VRAYTYGV) located at the N-terminal part and a highly variable motif at the C-terminal (Table 4). NM, Inner Mongolia. N/A, not applicable. MHR, major homology region. A neighbor-joining tree was constructed using the whole-genome sequence of NM1111 and the available reference strains. The phylogenetic analysis showed that the NM1111 strain was clustered in a branch of MMV type A2 (Fig. 3). Meanwhile, homology comparisons showed that the gag and pol of NM1111 had a close relationship with American strain MH916859 (Table 2). In contrast, the env had a lower sequence homology with the reference strains, suggesting the development of antigenic variability with evolution. Amino acid homology analysis indicated that the gag protein is highly conserved, especially in 2 dominant epitopes (epitope2 and epitope3). However, within the highly homologous epitope2, the Sichuan strain (KR011757) mutated from alanine (A) to valine (V) at position 221. Notably, the major homology region (MHR) region of NM1111 mutated from aspartic acid (D) to glutamic acid (E) at position 296 (Table 3). The SU region had a high degree of sequence variation, especially in variable region 4 (V4) with 5 amino acid insertions (position at 554-555, 564-566) (Table 4). Immunodominant epitope SU5 had a conserved motif (VRAYTYGV) located at the N-terminal part and a highly variable motif at the C-terminal (Table 4). NM, Inner Mongolia. N/A, not applicable. MHR, major homology region. Characterization of MVV in sheep lung tissue: Gross appearance showed that the volume of the affected sheep lung was increased by 2- to 3-fold. Nodular lesions were scattered or densely distributed in local areas, and the surrounding tissue was a dark red color and provided a hard tactile sensation. Fibrotic focuses were scattered in the pleural surface (Fig. 1A). Histopathologically, lymphocytic and lymphofollicular hyperplasias were scattered or locally distributed in the pulmonary interstitium. Lymphoid follicles occurred before the peripheral bronchioles, and the germinal centers of some lymphoid follicles were obvious. There were significant proliferations of lymphocytes, macrophages, and a small number of fibroblasts in the alveolar septum. Type II epithelial cells on the inner surface of the alveolar wall had proliferated to varying degrees. Additionally, some alveolar cavities were filled with red homogeneous serous fluid as well as some macrophages and lymphocytes (Fig. 1B). A PCR procedure was performed to amplify the LTR sequence of the MVV. Detection by gel electrophoresis confirmed that the lung tissue was MVV-positive (Fig. 1C). Furthermore, electron microscope-based examination of purified MVV showed that the viral particles were approximately 80 nm (Fig. 1D). M, marker; MVV, Maedi/Visna virus. 
Complete genome sequence of the NM1111 strain: A total of 1.1 GB of valid data was obtained, and the data were uniformly distributed. The sequencing depth was up to 188,295 times. The values of Q20 and Q30 were 96.64% and 90.48%, respectively, and the G+C content was 40.43%. These results reveal that our data exhibited good sequencing quality. Finally, the isolate contained 9,193 nucleotides after splicing and optimization. Three primary encoding genes, the gag gene, position at 502 to 1,845, contained 1,344 nucleotides; the pol gene, position at 1716 to 5036, contained 3321 nucleotides; the env gene, position at 5,985 to 8,948, contained 2,964 nucleotides (Table 1, Fig. 2). The complete genome sequence of MVV was uploaded, named NM1111, to GenBank with the accession number MW248464 (Supplementary Data 1). LTR, long terminal repeat; N/A, not applicable. LTR, long terminal repeat. Phylogenetic analysis: A neighbor-joining tree was constructed using the whole-genome sequence of NM1111 and the available reference strains. The phylogenetic analysis showed that the NM1111 strain was clustered in a branch of MMV type A2 (Fig. 3). Meanwhile, homology comparisons showed that the gag and pol of NM1111 had a close relationship with American strain MH916859 (Table 2). In contrast, the env had a lower sequence homology with the reference strains, suggesting the development of antigenic variability with evolution. Amino acid homology analysis indicated that the gag protein is highly conserved, especially in 2 dominant epitopes (epitope2 and epitope3). However, within the highly homologous epitope2, the Sichuan strain (KR011757) mutated from alanine (A) to valine (V) at position 221. Notably, the major homology region (MHR) region of NM1111 mutated from aspartic acid (D) to glutamic acid (E) at position 296 (Table 3). The SU region had a high degree of sequence variation, especially in variable region 4 (V4) with 5 amino acid insertions (position at 554-555, 564-566) (Table 4). Immunodominant epitope SU5 had a conserved motif (VRAYTYGV) located at the N-terminal part and a highly variable motif at the C-terminal (Table 4). NM, Inner Mongolia. N/A, not applicable. MHR, major homology region. DISCUSSION: NGS is a recent, powerful technique developed and applied for virus sequencing. In this study, the first complete genome sequence of MVV from China was obtained by NGS. Sequencing depth, high purity of the assembled data, the distribution of the GC content, and the sequencing coverage verified the reliability of the obtained results. Despite NGS being the most prevalent technology for sequencing at present, future complete genome sequencing of MVV isolates with recently developed third-generation sequencing would also be desirable. Gag is relatively conserved in MVV and encodes a major protective antigen. CA, encoded by gag, contains 3 linear epitopes (epitope 2, MHR, and epitope 3) and can cause a strong antibody reaction during infection, thus making it of great value for serodiagnosis. These epitopes are also important for maintaining cross-reactivity in gag antigen-based serological tests [16]. The amino acid sequence analysis showed that the linear epitopes were highly conserved, but there was a mutation from aspartic acid (D) to glutamic acid (E) at position 296 in the MHR region, an observation similar to that for a type A5 strain from Poland [17]. 
MVV has been serologically detected in 12 regions of China (Yunnan, Guizhou, Gansu, Ningxia, Shandong, Sichuan, Hunan, Guangdong, Chongqing, Guangxi, Jilin, and Anhui), and the infection rate in those regions ranged from 4.60% to 50.00% [18]. However, when Sun et al. [19] carried out a serological investigation on MVV in China, the detection rate in NM was zero. In addition to the limited sample collection area, it was inferred that a mutation in the MHR region might also have contributed. Tanaka et al. [20] reported that, in addition to epitope2 and epitope3, the MHR is usually conserved in many retroviruses. Mutations in this region may destroy capsid protein assembly and reduce human immunodeficiency virus infectivity [20]. In this study, the MHR regions of the strains isolated thirty years ago, except for the South African strain, were shown to be conserved, but other strains exhibited varying degrees of mutation. Whether the mutations in this region could reduce the virulence of MVV needs to be further elucidated. Env is the most mutated gene in MVV, which can often cause antigen drift. The V4 region of the SU domain can affect cell tropism and the synthesis of neutralizing antibodies [21]. Hence, mutations in this region have an important role in interactions between the virus and the host. Based on the analysis of the env protein, the mutation rate of the V4 region was high in our isolate, which is consistent with the results of Hötzel and Cheevers [22]. In addition, there were 5 amino acid insertions, suggesting a high probability of antigen drift. Skraban et al. [23] reported that the SU5 epitope is responsible for a specific immune response and could be used for strain-specific diagnosis. In this study, we observed a conserved motif at the N-terminus and a highly variable motif at the C-terminus of the SU5 region, observations that are consistent with the A5 strain of MVV isolated from Poland [17]. These observations emphasize the effectiveness of identifying different MVV subtypes within local genotypes [24]. Phylogenetic analysis indicated that the NM1111 isolate was closely related to American strains. Meanwhile, in a neighbor-joining tree constructed from the gag sequences, the Sichuan strain (KR011757) fell along the same branch as an American strain (AY101611) (Supplementary Fig. 1). These results suggest that the main epidemic MVV strain in China was probably of type A2. Characterizing more MVV strains isolated from China will help elucidate the genetic evolution of MVV.
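The phylogeny above was built in MEGA 7 as a neighbor-joining tree with 1,000 bootstrap replicates over the whole-genome alignment. As a rough sketch of the same step outside MEGA, assuming a pre-aligned FASTA named mvv_alignment.fasta containing NM1111 and the reference genomes, Biopython's tree-construction utilities can produce a comparable bootstrap consensus tree (100 replicates and a simple identity distance are used here only for brevity).

```python
# Rough sketch of the neighbor-joining + bootstrap step, using Biopython instead
# of MEGA 7 (which the study actually used). "mvv_alignment.fasta" is a
# hypothetical pre-aligned FASTA; distance model and replicate count are assumptions.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

alignment = AlignIO.read("mvv_alignment.fasta", "fasta")

calculator = DistanceCalculator("identity")              # simple p-distance model
constructor = DistanceTreeConstructor(calculator, "nj")  # neighbor-joining

# Majority-rule consensus over bootstrap replicates (the study used 1,000 in MEGA 7).
tree = bootstrap_consensus(alignment, 100, constructor, majority_consensus)

Phylo.draw_ascii(tree)
Phylo.write(tree, "nm1111_nj_tree.nwk", "newick")
```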
Background: Maedi/Visna virus (MVV) is a contagious viral pathogen that causes considerable economic losses to the sheep industry worldwide. Methods: Therefore, in this study, we conducted next-generation sequencing on an MVV strain obtained from northwest China to reveal its genetic evolution via phylogenetic analysis. Results: An MVV strain obtained from Inner Mongolia (NM), China, was identified. Sequence analysis indicated that its whole-genome length is 9,193 bp. Homology comparison of nucleotides between the NM strain and reference strains showed that the sequence homologies of gag and env were 77.1%-86.8% and 67.7%-75.5%, respectively. Phylogenetic analysis revealed that the NM strain was closely related to the reference strains isolated from America, which belong to the A2 type. Notably, there were 5 amino acid insertions in variable region 4 and a highly variable motif at the C-terminus of the SU5 epitope of the surface glycoprotein. Conclusions: The present study is the first to show the whole-genome sequence of an MVV obtained from China. The detailed analyses provide essential information for understanding the genetic characteristics of MVV, and the results enrich the MVV library.
null
null
5,148
221
[ 43, 84, 93, 151, 135, 56, 229, 174, 272 ]
13
[ "mvv", "sequence", "region", "lung", "min", "strain", "fig", "position", "000", "nm" ]
[ "oncogenic virus mvv", "assembly viral rna", "visna virus gross", "viral rna", "maedi visna virus" ]
null
null
null
[CONTENT] Maedi/Visna virus | next-generation sequencing | phylogenetic analysis | sheep lung [SUMMARY]
null
[CONTENT] Maedi/Visna virus | next-generation sequencing | phylogenetic analysis | sheep lung [SUMMARY]
null
[CONTENT] Maedi/Visna virus | next-generation sequencing | phylogenetic analysis | sheep lung [SUMMARY]
null
[CONTENT] Animals | China | High-Throughput Nucleotide Sequencing | Phylogeny | Pneumonia, Progressive Interstitial, of Sheep | Sheep | Sheep Diseases | Visna-maedi virus [SUMMARY]
null
[CONTENT] Animals | China | High-Throughput Nucleotide Sequencing | Phylogeny | Pneumonia, Progressive Interstitial, of Sheep | Sheep | Sheep Diseases | Visna-maedi virus [SUMMARY]
null
[CONTENT] Animals | China | High-Throughput Nucleotide Sequencing | Phylogeny | Pneumonia, Progressive Interstitial, of Sheep | Sheep | Sheep Diseases | Visna-maedi virus [SUMMARY]
null
[CONTENT] oncogenic virus mvv | assembly viral rna | visna virus gross | viral rna | maedi visna virus [SUMMARY]
null
[CONTENT] oncogenic virus mvv | assembly viral rna | visna virus gross | viral rna | maedi visna virus [SUMMARY]
null
[CONTENT] oncogenic virus mvv | assembly viral rna | visna virus gross | viral rna | maedi visna virus [SUMMARY]
null
[CONTENT] mvv | sequence | region | lung | min | strain | fig | position | 000 | nm [SUMMARY]
null
[CONTENT] mvv | sequence | region | lung | min | strain | fig | position | 000 | nm [SUMMARY]
null
[CONTENT] mvv | sequence | region | lung | min | strain | fig | position | 000 | nm [SUMMARY]
null
[CONTENT] ngs | mvv | gene encodes | throughput | pol | encodes | sequencing | maedi visna | maedi | visna [SUMMARY]
null
[CONTENT] homology | position | table | nm1111 | fig | contained | nucleotides | region | data | acid [SUMMARY]
null
[CONTENT] mvv | min | region | 000 | lung | 000 min | sequence | position | strain | homology [SUMMARY]
null
[CONTENT] Maedi | MVV [SUMMARY]
null
[CONTENT] MVV | Inner Mongolia | NM | China ||| Sequence | 9193 ||| NM | 77.1%-86.8% | 67.7%-75.5% ||| NM | America | A2 ||| 5 | 4 [SUMMARY]
null
[CONTENT] Maedi | MVV ||| MVV | China ||| ||| MVV | Inner Mongolia | NM | China ||| Sequence | 9193 ||| NM | 77.1%-86.8% | 67.7%-75.5% ||| NM | America | A2 ||| 5 | 4 ||| first | MVV | China ||| MVV | MVV [SUMMARY]
null
Association of Early Pubertal Onset in Female Rats With Inhalation of Lavender Oil.
35014224
Central precocious puberty (CPP) is caused by early activation of the hypothalamic-pituitary-gonadal axis but its major cause remains unclear. Studies have indicated an association between chronic environmental exposure to endocrine-disrupting chemicals and pubertal onset. Essential oil is widely used in homes worldwide for relief of respiratory symptoms, stress, and/or sleep disturbance.
BACKGROUND
To evaluate this association, we compared the hormone levels and timing of vaginal opening (VO) in female rats exposed to lavender oil (LO) through different routes (study groups: control, LO nasal spray [LS], and indoor exposure to LO [LE]) during the prepubertal period. The body weights of the animals were also compared every 3 days until the day of VO, at which time gonadotropin levels and internal organ weights were assessed.
METHODS
The LS group showed early VO at 33.8 ± 1.8 days compared with the control (38.4 ± 2.9 days) and LE (36.6 ± 1.5 days) groups. Additionally, luteinizing hormone levels were significantly higher in the LE and LS groups than those in the control group. Body weights did not differ significantly among the groups.
RESULTS
Inhalation exposure to an exogenic simulant during the prepubertal period might trigger early pubertal onset in female rats. Further evaluation of exposure to other endocrine-disrupting chemicals capable of inducing CPP through the skin, orally, and/or nasally is warranted.
CONCLUSION
[ "Administration, Inhalation", "Animals", "Female", "Lavandula", "Oils, Volatile", "Plant Oils", "Puberty, Precocious", "Random Allocation", "Rats" ]
8748666
INTRODUCTION
Central precocious puberty (CPP) describes the early activation of the hypothalamic–pituitary–gonadal (HPG) axis, which leads to the rapid progression of bone age, early menarche, reduction in final adult height, and the appearance of secondary sexual characteristics before 8 and 9 years of age in girls and boys, respectively.1 Traditionally, CPP is accompanied by intracranial lesions, including optic glioma, pilocytic astrocytoma, hydrocephalus, Rathke’s cleft cyst, and pituitary adenomas, in 40–90% of boys and < 10% of girls.23 The gonadotropin-releasing hormone (GnRH) stimulation test is used for diagnosing CPP, and the basal luteinizing hormone (LH) level is considered a valuable tool for assessing pubertal state.4 CPP treatment was introduced in 1980, and dosing of a recombinant GnRH agonist every 4 weeks or 3 months leads to increases in the final adult height and delayed menarche.23 Improving the final adult height of children with CPP is one of the major issues during treatment.56 However, the incidence of precocious puberty is rapidly increasing, and examination and treatment of this condition are becoming a major burden because of the associated medical expenses, although the cause of this condition remains unknown.23 Environmental hormones, i.e., endocrine-disrupting chemicals (EDCs), were recently suggested to contribute to the onset of puberty in childhood,78 and animal studies demonstrated that EDCs accelerate pubertal onset.910 Additionally, previous reports showed that lavender oil (LO) and tea tree oil are associated with prepubertal gynecomastia in boys.1112 Moreover, cases of premature thelarche that resolved after cessation of exposure to lavender-containing fragrance have been reported,1314 and an in vitro study showed that components of LO, including linalool and linalyl acetate, activate estrogen-related gene expression in human cell lines.13 However, studies of whether these materials are absorbed in sufficient amounts to affect breast growth have not been performed. A previous study suggested that the sense of smell can transmit signals to the central nervous system, allowing inhaled molecules to bypass the blood–brain barrier via the nasal pathway.15 There are several opportunities for inhalation of numerous EDCs from estrogenic sources in cosmetics, perfumes, air fresheners, and scented candles/diffusers using essential oils, which can directly stimulate the neuroendocrine system through olfaction. Since some studies suggested that essential oils may have efficacy against coronavirus disease 2019 and its inflammatory complications,161718 interest in home therapy using essential oils is increasing. In this study, we tested whether continuous inhalation of LO affects early gonadotropin activation and precocious puberty. Because it is difficult to limit or measure nasal exposure to EDCs in humans, we investigated the effects of continuous inhalation of LO on pubertal onset and gonadotropin hormone levels in an animal model and compared them with control conditions.
METHODS
Animals and experimental design To obtain study animals, rats were bred using male and female Sprague–Dawley rats. From birth onward, we maintained an indoor temperature of 22°C (humidity: 30–70%) and controlled illumination (12-hour light/dark cycle) to allow breeding in a constant environment along with free access to water and food. On day 18 after birth, we identified 15 immature females and randomly divided them into three groups: olfactory stimulation groups 1 and 2 and a control group (n = 5/group). We used 100% pure LO obtained from Lavandula angustifolia (NOW Foods, Bloomingdale, IL, USA) for all experiments. Group 1 was treated by indoor exposure to LO (LE) via an LO diffuser in the cage using an LO-soaked puff (changed daily) along with daily exposure to 0.9% NaCl spray. For Group 2, LO was administered as a nasal spray of aromatic LO (LS) once daily. The control group was treated with a single exposure to a nasal spray (0.9% NaCl) daily. The dose of one spray of LO or 0.9% NaCl ranged from 72–125 µL. The body weight of the animals was measured every 3 days from postnatal day (PND) 18 to the day of vaginal opening (VO). To obtain study animals, rats were bred using male and female Sprague–Dawley rats. From birth onward, we maintained an indoor temperature of 22°C (humidity: 30–70%) and controlled illumination (12-hour light/dark cycle) to allow breeding in a constant environment along with free access to water and food. On day 18 after birth, we identified 15 immature females and randomly divided them into three groups: olfactory stimulation groups 1 and 2 and a control group (n = 5/group). We used 100% pure LO obtained from Lavandula angustifolia (NOW Foods, Bloomingdale, IL, USA) for all experiments. Group 1 was treated by indoor exposure to LO (LE) via an LO diffuser in the cage using an LO-soaked puff (changed daily) along with daily exposure to 0.9% NaCl spray. For Group 2, LO was administered as a nasal spray of aromatic LO (LS) once daily. The control group was treated with a single exposure to a nasal spray (0.9% NaCl) daily. The dose of one spray of LO or 0.9% NaCl ranged from 72–125 µL. The body weight of the animals was measured every 3 days from postnatal day (PND) 18 to the day of vaginal opening (VO). Analysis of VO All study groups were evaluated for VO time as an indicator of pubertal initiation at a fixed time (09:00 hour) daily. The day of VO was recorded, and VO timing was compared between the three study groups. All study groups were evaluated for VO time as an indicator of pubertal initiation at a fixed time (09:00 hour) daily. The day of VO was recorded, and VO timing was compared between the three study groups. Euthanasia and hormone assays After VO was observed in each rat, we measured serum LH, follicle-stimulating hormone (FSH), and estradiol levels to compare hormone concentrations between study groups. The endpoint of the experiment was defined as VO occurring in the last rat. For this process, truncal blood was collected into ice-cold ethylenediamine tetraacetic acid-containing tubes after decapitation, after which the tubes were centrifuged, and plasma samples were collected and stored at −20°C until analysis. The plasma levels of LH, FSH, and estradiol of each rat were measured using enzyme-linked immunosorbent assay kits (cat. No. MBS764675, MBS2502190, and MBS263850, respectively; MyBioSource, Inc., San Diego, CA, USA) according to manufacturer’s instructions. 
After VO was observed in each rat, we measured serum LH, follicle-stimulating hormone (FSH), and estradiol levels to compare hormone concentrations between study groups. The endpoint of the experiment was defined as VO occurring in the last rat. For this process, truncal blood was collected into ice-cold ethylenediamine tetraacetic acid-containing tubes after decapitation, after which the tubes were centrifuged, and plasma samples were collected and stored at −20°C until analysis. The plasma levels of LH, FSH, and estradiol of each rat were measured using enzyme-linked immunosorbent assay kits (cat. No. MBS764675, MBS2502190, and MBS263850, respectively; MyBioSource, Inc., San Diego, CA, USA) according to manufacturer’s instructions. Measurement of organ weight After euthanasia, we measured the weight of the ovaries, spleen, kidneys, and liver. The organ weight was then modified by body weight and presented as tissue weight per 150 g body weight. After euthanasia, we measured the weight of the ovaries, spleen, kidneys, and liver. The organ weight was then modified by body weight and presented as tissue weight per 150 g body weight. Statistical analysis Data were presented as the mean ± standard deviation, and statistical analyses were performed using SPSS software (v.26.0; SPSS, Inc., Chicago, IL, USA). Statistical significance was determined by Kruskal-Wallis test and one-way analysis of variance for multiple-group comparisons and Mann-Whitney U test for comparisons between two groups. Statistical significance was defined at P < 0.05. Data were presented as the mean ± standard deviation, and statistical analyses were performed using SPSS software (v.26.0; SPSS, Inc., Chicago, IL, USA). Statistical significance was determined by Kruskal-Wallis test and one-way analysis of variance for multiple-group comparisons and Mann-Whitney U test for comparisons between two groups. Statistical significance was defined at P < 0.05. Ethics statement The procedures used and the care of animals were approved by the Institutional Animal Care and Use Committee in Southwest Medi-Chem Institute (approval No. SEMI-20-001). The procedures used and the care of animals were approved by the Institutional Animal Care and Use Committee in Southwest Medi-Chem Institute (approval No. SEMI-20-001).
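To make the statistical-analysis and organ-weight descriptions above concrete, the following is a minimal Python/SciPy sketch of the same kinds of comparisons: Kruskal-Wallis and one-way ANOVA across three groups, a Mann-Whitney U test between two groups, and normalization of tissue weight to a 150 g body weight. This is not the SPSS analysis used in the study; the group names mirror the text, but every number is invented for illustration.

# Minimal sketch of the reported comparison types using SciPy, not the SPSS
# analysis described in the methods; all values below are invented.
import numpy as np
from scipy import stats

# Hypothetical vaginal-opening days per group (n = 5 each).
control = np.array([38, 36, 41, 39, 38])
le_group = np.array([33, 33, 34, 36, 33])
ls_group = np.array([36, 38, 35, 37, 37])

# Multiple-group comparisons: Kruskal-Wallis and one-way ANOVA.
print(stats.kruskal(control, le_group, ls_group))
print(stats.f_oneway(control, le_group, ls_group))

# Two-group comparison: Mann-Whitney U (two-sided), significance taken at P < 0.05.
print(stats.mannwhitneyu(control, le_group, alternative="two-sided"))

# Organ weight expressed as tissue weight per 150 g body weight,
# as described for the organ-weight analysis.
def per_150_g(tissue_g: float, body_g: float) -> float:
    return tissue_g / body_g * 150.0

print(round(per_150_g(1.30, 105.0), 3))  # hypothetical kidney and body weights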
RESULTS
Effect of olfactory exposure to LO on VO and pubertal onset VO occurred earlier in the LE (33.8 ± 1.8 days) and LS (36.6 ± 1.5 days) groups than in the control group (38.4 ± 2.9 days) (Table 1), and VO in the LE group occurred significantly earlier than in the control group (P = 0.014) and LS group (P = 0.032), respectively. However, there was no significant difference between the LS and control groups with respect to VO timing (P = 0.151). Almost all rats in the LE group experienced VO at 33 days, and the control group mostly showed VO between 38 and 41 days (Fig. 1). LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil, VO = vaginal opening. *P < 0.05 vs. control; **P < 0.05 vs. LS group. PND = postnatal day, LE = exposure to diffused lavender oil, C = control, LS = exposure to lavender oil as a nasal spray. VO occurred earlier in the LE (33.8 ± 1.8 days) and LS (36.6 ± 1.5 days) groups than in the control group (38.4 ± 2.9 days) (Table 1), and VO in the LE group occurred significantly earlier than in the control group (P = 0.014) and LS group (P = 0.032), respectively. However, there was no significant difference between the LS and control groups with respect to VO timing (P = 0.151). Almost all rats in the LE group experienced VO at 33 days, and the control group mostly showed VO between 38 and 41 days (Fig. 1). LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil, VO = vaginal opening. *P < 0.05 vs. control; **P < 0.05 vs. LS group. PND = postnatal day, LE = exposure to diffused lavender oil, C = control, LS = exposure to lavender oil as a nasal spray. Measurement of gonadotropin hormone and estradiol levels LH levels were significantly higher in the LE (67.6 ± 3.0 mIU/mL) and LS (64.3 ± 7.4 mIU/mL) groups than in the control group (49.9 ± 2.7 mIU/mL; P < 0.001 for both) (Table 2). Additionally, FSH levels were significantly higher in the LE (50.9 ± 9.1 ng/mL) and LS (51.4 ± 7.1 ng/mL) groups than in the control group (35.2 ± 3.7 ng/mL; P = 0.009 and P = 0.011, respectively). Estradiol levels were elevated in both the LE (4.9 ± 1.4 ng/mL) and LS (5.3 ± 1.4 ng/mL) groups relative to the control group (3.9 ± 0.8 ng/mL), although the differences were not significant (P = 0.326 and P = 0.547, respectively). Data are presented as the mean ± SD (n = 5). LH = luteinizing hormone, FSH = follicle-stimulating hormone, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. aNot significant vs. control. *P < 0.05 vs. control; **P < 0.001 vs. control. LH levels were significantly higher in the LE (67.6 ± 3.0 mIU/mL) and LS (64.3 ± 7.4 mIU/mL) groups than in the control group (49.9 ± 2.7 mIU/mL; P < 0.001 for both) (Table 2). Additionally, FSH levels were significantly higher in the LE (50.9 ± 9.1 ng/mL) and LS (51.4 ± 7.1 ng/mL) groups than in the control group (35.2 ± 3.7 ng/mL; P = 0.009 and P = 0.011, respectively). Estradiol levels were elevated in both the LE (4.9 ± 1.4 ng/mL) and LS (5.3 ± 1.4 ng/mL) groups relative to the control group (3.9 ± 0.8 ng/mL), although the differences were not significant (P = 0.326 and P = 0.547, respectively). Data are presented as the mean ± SD (n = 5). LH = luteinizing hormone, FSH = follicle-stimulating hormone, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. aNot significant vs. control. *P < 0.05 vs. control; **P < 0.001 vs. control. 
Measurement of body and organ weights Measurement of the body weight of rats in the control, LE, and LS groups every 3 days from PND 18 until VO revealed no significant differences among three groups (Fig. 2 and Table 3). The weights of the ovaries, liver, and spleen after VO showed no significant differences among three groups; however, the weight of the kidneys per 150 g body weight increased significantly after VO in the LE group (1.926 ± 0.154 g) compared with that in the control (1.664 ± 0.077 g; P = 0.009) and LS (1.694 ± 0.154 g; P = 0.017) groups (Table 4). PND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. Data are presented as the mean ± SD (n = 5). PND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. aNot significant. Data are presented as the mean ± SD (n = 5). LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. aTissue weight (g) per body weight (150 g); bNot significant. *P < 0.01 vs. control; **P < 0.05 vs. LS group. Measurement of the body weight of rats in the control, LE, and LS groups every 3 days from PND 18 until VO revealed no significant differences among three groups (Fig. 2 and Table 3). The weights of the ovaries, liver, and spleen after VO showed no significant differences among three groups; however, the weight of the kidneys per 150 g body weight increased significantly after VO in the LE group (1.926 ± 0.154 g) compared with that in the control (1.664 ± 0.077 g; P = 0.009) and LS (1.694 ± 0.154 g; P = 0.017) groups (Table 4). PND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. Data are presented as the mean ± SD (n = 5). PND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. aNot significant. Data are presented as the mean ± SD (n = 5). LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. aTissue weight (g) per body weight (150 g); bNot significant. *P < 0.01 vs. control; **P < 0.05 vs. LS group.
null
null
[ "Animals and experimental design", "Analysis of VO", "Euthanasia and hormone assays", "Measurement of organ weight", "Statistical analysis", "Ethics statement", "Effect of olfactory exposure to LO on VO and pubertal onset", "Measurement of gonadotropin hormone and estradiol levels", "Measurement of body and organ weights" ]
[ "To obtain study animals, rats were bred using male and female Sprague–Dawley rats. From birth onward, we maintained an indoor temperature of 22°C (humidity: 30–70%) and controlled illumination (12-hour light/dark cycle) to allow breeding in a constant environment along with free access to water and food. On day 18 after birth, we identified 15 immature females and randomly divided them into three groups: olfactory stimulation groups 1 and 2 and a control group (n = 5/group). We used 100% pure LO obtained from Lavandula angustifolia (NOW Foods, Bloomingdale, IL, USA) for all experiments. Group 1 was treated by indoor exposure to LO (LE) via an LO diffuser in the cage using an LO-soaked puff (changed daily) along with daily exposure to 0.9% NaCl spray. For Group 2, LO was administered as a nasal spray of aromatic LO (LS) once daily. The control group was treated with a single exposure to a nasal spray (0.9% NaCl) daily. The dose of one spray of LO or 0.9% NaCl ranged from 72–125 µL. The body weight of the animals was measured every 3 days from postnatal day (PND) 18 to the day of vaginal opening (VO).", "All study groups were evaluated for VO time as an indicator of pubertal initiation at a fixed time (09:00 hour) daily. The day of VO was recorded, and VO timing was compared between the three study groups.", "After VO was observed in each rat, we measured serum LH, follicle-stimulating hormone (FSH), and estradiol levels to compare hormone concentrations between study groups. The endpoint of the experiment was defined as VO occurring in the last rat. For this process, truncal blood was collected into ice-cold ethylenediamine tetraacetic acid-containing tubes after decapitation, after which the tubes were centrifuged, and plasma samples were collected and stored at −20°C until analysis. The plasma levels of LH, FSH, and estradiol of each rat were measured using enzyme-linked immunosorbent assay kits (cat. No. MBS764675, MBS2502190, and MBS263850, respectively; MyBioSource, Inc., San Diego, CA, USA) according to manufacturer’s instructions.", "After euthanasia, we measured the weight of the ovaries, spleen, kidneys, and liver. The organ weight was then modified by body weight and presented as tissue weight per 150 g body weight.", "Data were presented as the mean ± standard deviation, and statistical analyses were performed using SPSS software (v.26.0; SPSS, Inc., Chicago, IL, USA). Statistical significance was determined by Kruskal-Wallis test and one-way analysis of variance for multiple-group comparisons and Mann-Whitney U test for comparisons between two groups. Statistical significance was defined at P < 0.05.", "The procedures used and the care of animals were approved by the Institutional Animal Care and Use Committee in Southwest Medi-Chem Institute (approval No. SEMI-20-001).", "VO occurred earlier in the LE (33.8 ± 1.8 days) and LS (36.6 ± 1.5 days) groups than in the control group (38.4 ± 2.9 days) (Table 1), and VO in the LE group occurred significantly earlier than in the control group (P = 0.014) and LS group (P = 0.032), respectively. However, there was no significant difference between the LS and control groups with respect to VO timing (P = 0.151). Almost all rats in the LE group experienced VO at 33 days, and the control group mostly showed VO between 38 and 41 days (Fig. 1).\nLS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil, VO = vaginal opening.\n*P < 0.05 vs. control; **P < 0.05 vs. 
LS group.\nPND = postnatal day, LE = exposure to diffused lavender oil, C = control, LS = exposure to lavender oil as a nasal spray.", "LH levels were significantly higher in the LE (67.6 ± 3.0 mIU/mL) and LS (64.3 ± 7.4 mIU/mL) groups than in the control group (49.9 ± 2.7 mIU/mL; P < 0.001 for both) (Table 2). Additionally, FSH levels were significantly higher in the LE (50.9 ± 9.1 ng/mL) and LS (51.4 ± 7.1 ng/mL) groups than in the control group (35.2 ± 3.7 ng/mL; P = 0.009 and P = 0.011, respectively). Estradiol levels were elevated in both the LE (4.9 ± 1.4 ng/mL) and LS (5.3 ± 1.4 ng/mL) groups relative to the control group (3.9 ± 0.8 ng/mL), although the differences were not significant (P = 0.326 and P = 0.547, respectively).\nData are presented as the mean ± SD (n = 5).\nLH = luteinizing hormone, FSH = follicle-stimulating hormone, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naNot significant vs. control.\n*P < 0.05 vs. control; **P < 0.001 vs. control.", "Measurement of the body weight of rats in the control, LE, and LS groups every 3 days from PND 18 until VO revealed no significant differences among three groups (Fig. 2 and Table 3). The weights of the ovaries, liver, and spleen after VO showed no significant differences among three groups; however, the weight of the kidneys per 150 g body weight increased significantly after VO in the LE group (1.926 ± 0.154 g) compared with that in the control (1.664 ± 0.077 g; P = 0.009) and LS (1.694 ± 0.154 g; P = 0.017) groups (Table 4).\nPND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\nData are presented as the mean ± SD (n = 5).\nPND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naNot significant.\nData are presented as the mean ± SD (n = 5).\nLS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naTissue weight (g) per body weight (150 g); bNot significant.\n*P < 0.01 vs. control; **P < 0.05 vs. LS group." ]
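The hormone-assay paragraphs above state only that LH, FSH, and estradiol were measured with commercial ELISA kits according to the manufacturer's instructions. As a generic illustration of how ELISA absorbance readings are commonly converted to concentrations (not the procedure of these particular kits), the sketch below fits a four-parameter logistic (4PL) standard curve and inverts it for an unknown sample; all standard concentrations and optical densities are invented.

# Generic 4PL standard-curve sketch for converting ELISA optical density (OD)
# to concentration. Illustrative only; standards and ODs below are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: response at infinite dose,
    # c: inflection point, b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.5, 1, 2, 5, 10, 25, 50, 100])      # e.g., mIU/mL
std_od = np.array([0.08, 0.12, 0.20, 0.42, 0.70, 1.25, 1.80, 2.20])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 10.0, 2.5], bounds=(0, np.inf))

def od_to_conc(od, a, b, c, d):
    # Invert the fitted 4PL to interpolate a sample concentration from its OD.
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = 0.95
print(od_to_conc(sample_od, *params))  # estimated concentration of the sample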
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Animals and experimental design", "Analysis of VO", "Euthanasia and hormone assays", "Measurement of organ weight", "Statistical analysis", "Ethics statement", "RESULTS", "Effect of olfactory exposure to LO on VO and pubertal onset", "Measurement of gonadotropin hormone and estradiol levels", "Measurement of body and organ weights", "DISCUSSION" ]
[ "Central precocious puberty (CPP) describes the early activation of the hypothalamic–pituitary–gonadal (HPG) axis, which leads to the rapid progression of bone age, early menarche, reduction in final adult height, and the appearance of secondary sexual characteristics before 8 and 9 years of age in girls and boys, respectively.1 Traditionally, CPP is accompanied by intracranial lesions, including optic glioma, pilocytic astrocytoma, hydrocephalus, Rathke’s cleft cyst, and pituitary adenomas, in 40–90% of boys and < 10% of girls.23 The gonadotropin-releasing hormone (GnRH) stimulation test is used for diagnosing CPP, and the basal luteinizing hormone (LH) level is considered as a valuable tool to assess pubertal state.4 CPP treatment was introduced in 1980, and dosing of a recombinant GnRH agonist every 4 weeks or 3 months leads to increases in the final adult height and delayed menarche.23 Improving the final adult height of children with CPP is one of the major issues during treatment.56 However, the incidence of precocious puberty is rapidly increasing, and examination and treatment of this condition are becoming a major burden because of the associated medical expenses, although the cause of this condition remains unknown.23\nEnvironmental hormones, i.e., endocrine-disrupting chemicals, were recently suggested to contribute to the onset of puberty in childhood,78 and animal studies demonstrated that endocrine-disrupting chemicals accelerate pubertal onset.910 Additionally, previous reports showed that lavender oil (LO) and tea tree oil are associated with prepubertal gynecomastia in boys.1112 Moreover, cases of premature thelarche that resolved after cessation of exposure to lavender-containing fragrance have been reported,1314 and an in vitro study showed that components of LO, including linalool and linalyl acetate, activate estrogen-related gene expression in human cell lines.13 However, studies of the absorbance of these materials in sufficient amounts and their effect on breast growth have not been performed. A previous study suggested that smell of sense can be transmitted to the central nervous system, thereby facilitating the bypass of inhaled molecules via the nasal pathway of the blood–brain barrier.15 There are several opportunities for inhalation of numerous endocrine-disrupting chemicals from estrogenic sources in cosmetics, perfumes, air fresheners, and scented candles/diffusers using essential oils, which can directly affect olfactory stimulation of the neuroendocrine system. Since some studies suggested that essential oils may have efficacy against the coronavirus disease 2019 and its inflammatory complications,161718 the interests for home therapy using essential oils are more increasing.\nIn this study, we tested whether continuous inhalation of LO affects early gonadotropin activation and precocious puberty. However, it is difficult to limit or measure the number of EDCs to nasal exposure in human. Thus, we investigated the effects of continuous inhalation of LO on pubertal onset and gonadotropin hormone levels in an animal model and compared them to control conditions.", " Animals and experimental design To obtain study animals, rats were bred using male and female Sprague–Dawley rats. From birth onward, we maintained an indoor temperature of 22°C (humidity: 30–70%) and controlled illumination (12-hour light/dark cycle) to allow breeding in a constant environment along with free access to water and food. 
On day 18 after birth, we identified 15 immature females and randomly divided them into three groups: olfactory stimulation groups 1 and 2 and a control group (n = 5/group). We used 100% pure LO obtained from Lavandula angustifolia (NOW Foods, Bloomingdale, IL, USA) for all experiments. Group 1 was treated by indoor exposure to LO (LE) via an LO diffuser in the cage using an LO-soaked puff (changed daily) along with daily exposure to 0.9% NaCl spray. For Group 2, LO was administered as a nasal spray of aromatic LO (LS) once daily. The control group was treated with a single exposure to a nasal spray (0.9% NaCl) daily. The dose of one spray of LO or 0.9% NaCl ranged from 72–125 µL. The body weight of the animals was measured every 3 days from postnatal day (PND) 18 to the day of vaginal opening (VO).\nTo obtain study animals, rats were bred using male and female Sprague–Dawley rats. From birth onward, we maintained an indoor temperature of 22°C (humidity: 30–70%) and controlled illumination (12-hour light/dark cycle) to allow breeding in a constant environment along with free access to water and food. On day 18 after birth, we identified 15 immature females and randomly divided them into three groups: olfactory stimulation groups 1 and 2 and a control group (n = 5/group). We used 100% pure LO obtained from Lavandula angustifolia (NOW Foods, Bloomingdale, IL, USA) for all experiments. Group 1 was treated by indoor exposure to LO (LE) via an LO diffuser in the cage using an LO-soaked puff (changed daily) along with daily exposure to 0.9% NaCl spray. For Group 2, LO was administered as a nasal spray of aromatic LO (LS) once daily. The control group was treated with a single exposure to a nasal spray (0.9% NaCl) daily. The dose of one spray of LO or 0.9% NaCl ranged from 72–125 µL. The body weight of the animals was measured every 3 days from postnatal day (PND) 18 to the day of vaginal opening (VO).\n Analysis of VO All study groups were evaluated for VO time as an indicator of pubertal initiation at a fixed time (09:00 hour) daily. The day of VO was recorded, and VO timing was compared between the three study groups.\nAll study groups were evaluated for VO time as an indicator of pubertal initiation at a fixed time (09:00 hour) daily. The day of VO was recorded, and VO timing was compared between the three study groups.\n Euthanasia and hormone assays After VO was observed in each rat, we measured serum LH, follicle-stimulating hormone (FSH), and estradiol levels to compare hormone concentrations between study groups. The endpoint of the experiment was defined as VO occurring in the last rat. For this process, truncal blood was collected into ice-cold ethylenediamine tetraacetic acid-containing tubes after decapitation, after which the tubes were centrifuged, and plasma samples were collected and stored at −20°C until analysis. The plasma levels of LH, FSH, and estradiol of each rat were measured using enzyme-linked immunosorbent assay kits (cat. No. MBS764675, MBS2502190, and MBS263850, respectively; MyBioSource, Inc., San Diego, CA, USA) according to manufacturer’s instructions.\nAfter VO was observed in each rat, we measured serum LH, follicle-stimulating hormone (FSH), and estradiol levels to compare hormone concentrations between study groups. The endpoint of the experiment was defined as VO occurring in the last rat. 
For this process, truncal blood was collected into ice-cold ethylenediamine tetraacetic acid-containing tubes after decapitation, after which the tubes were centrifuged, and plasma samples were collected and stored at −20°C until analysis. The plasma levels of LH, FSH, and estradiol of each rat were measured using enzyme-linked immunosorbent assay kits (cat. No. MBS764675, MBS2502190, and MBS263850, respectively; MyBioSource, Inc., San Diego, CA, USA) according to manufacturer’s instructions.\n Measurement of organ weight After euthanasia, we measured the weight of the ovaries, spleen, kidneys, and liver. The organ weight was then modified by body weight and presented as tissue weight per 150 g body weight.\nAfter euthanasia, we measured the weight of the ovaries, spleen, kidneys, and liver. The organ weight was then modified by body weight and presented as tissue weight per 150 g body weight.\n Statistical analysis Data were presented as the mean ± standard deviation, and statistical analyses were performed using SPSS software (v.26.0; SPSS, Inc., Chicago, IL, USA). Statistical significance was determined by Kruskal-Wallis test and one-way analysis of variance for multiple-group comparisons and Mann-Whitney U test for comparisons between two groups. Statistical significance was defined at P < 0.05.\nData were presented as the mean ± standard deviation, and statistical analyses were performed using SPSS software (v.26.0; SPSS, Inc., Chicago, IL, USA). Statistical significance was determined by Kruskal-Wallis test and one-way analysis of variance for multiple-group comparisons and Mann-Whitney U test for comparisons between two groups. Statistical significance was defined at P < 0.05.\n Ethics statement The procedures used and the care of animals were approved by the Institutional Animal Care and Use Committee in Southwest Medi-Chem Institute (approval No. SEMI-20-001).\nThe procedures used and the care of animals were approved by the Institutional Animal Care and Use Committee in Southwest Medi-Chem Institute (approval No. SEMI-20-001).", "To obtain study animals, rats were bred using male and female Sprague–Dawley rats. From birth onward, we maintained an indoor temperature of 22°C (humidity: 30–70%) and controlled illumination (12-hour light/dark cycle) to allow breeding in a constant environment along with free access to water and food. On day 18 after birth, we identified 15 immature females and randomly divided them into three groups: olfactory stimulation groups 1 and 2 and a control group (n = 5/group). We used 100% pure LO obtained from Lavandula angustifolia (NOW Foods, Bloomingdale, IL, USA) for all experiments. Group 1 was treated by indoor exposure to LO (LE) via an LO diffuser in the cage using an LO-soaked puff (changed daily) along with daily exposure to 0.9% NaCl spray. For Group 2, LO was administered as a nasal spray of aromatic LO (LS) once daily. The control group was treated with a single exposure to a nasal spray (0.9% NaCl) daily. The dose of one spray of LO or 0.9% NaCl ranged from 72–125 µL. The body weight of the animals was measured every 3 days from postnatal day (PND) 18 to the day of vaginal opening (VO).", "All study groups were evaluated for VO time as an indicator of pubertal initiation at a fixed time (09:00 hour) daily. 
The day of VO was recorded, and VO timing was compared between the three study groups.", "After VO was observed in each rat, we measured serum LH, follicle-stimulating hormone (FSH), and estradiol levels to compare hormone concentrations between study groups. The endpoint of the experiment was defined as VO occurring in the last rat. For this process, truncal blood was collected into ice-cold ethylenediamine tetraacetic acid-containing tubes after decapitation, after which the tubes were centrifuged, and plasma samples were collected and stored at −20°C until analysis. The plasma levels of LH, FSH, and estradiol of each rat were measured using enzyme-linked immunosorbent assay kits (cat. No. MBS764675, MBS2502190, and MBS263850, respectively; MyBioSource, Inc., San Diego, CA, USA) according to manufacturer’s instructions.", "After euthanasia, we measured the weight of the ovaries, spleen, kidneys, and liver. The organ weight was then modified by body weight and presented as tissue weight per 150 g body weight.", "Data were presented as the mean ± standard deviation, and statistical analyses were performed using SPSS software (v.26.0; SPSS, Inc., Chicago, IL, USA). Statistical significance was determined by Kruskal-Wallis test and one-way analysis of variance for multiple-group comparisons and Mann-Whitney U test for comparisons between two groups. Statistical significance was defined at P < 0.05.", "The procedures used and the care of animals were approved by the Institutional Animal Care and Use Committee in Southwest Medi-Chem Institute (approval No. SEMI-20-001).", " Effect of olfactory exposure to LO on VO and pubertal onset VO occurred earlier in the LE (33.8 ± 1.8 days) and LS (36.6 ± 1.5 days) groups than in the control group (38.4 ± 2.9 days) (Table 1), and VO in the LE group occurred significantly earlier than in the control group (P = 0.014) and LS group (P = 0.032), respectively. However, there was no significant difference between the LS and control groups with respect to VO timing (P = 0.151). Almost all rats in the LE group experienced VO at 33 days, and the control group mostly showed VO between 38 and 41 days (Fig. 1).\nLS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil, VO = vaginal opening.\n*P < 0.05 vs. control; **P < 0.05 vs. LS group.\nPND = postnatal day, LE = exposure to diffused lavender oil, C = control, LS = exposure to lavender oil as a nasal spray.\nVO occurred earlier in the LE (33.8 ± 1.8 days) and LS (36.6 ± 1.5 days) groups than in the control group (38.4 ± 2.9 days) (Table 1), and VO in the LE group occurred significantly earlier than in the control group (P = 0.014) and LS group (P = 0.032), respectively. However, there was no significant difference between the LS and control groups with respect to VO timing (P = 0.151). Almost all rats in the LE group experienced VO at 33 days, and the control group mostly showed VO between 38 and 41 days (Fig. 1).\nLS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil, VO = vaginal opening.\n*P < 0.05 vs. control; **P < 0.05 vs. LS group.\nPND = postnatal day, LE = exposure to diffused lavender oil, C = control, LS = exposure to lavender oil as a nasal spray.\n Measurement of gonadotropin hormone and estradiol levels LH levels were significantly higher in the LE (67.6 ± 3.0 mIU/mL) and LS (64.3 ± 7.4 mIU/mL) groups than in the control group (49.9 ± 2.7 mIU/mL; P < 0.001 for both) (Table 2). 
Additionally, FSH levels were significantly higher in the LE (50.9 ± 9.1 ng/mL) and LS (51.4 ± 7.1 ng/mL) groups than in the control group (35.2 ± 3.7 ng/mL; P = 0.009 and P = 0.011, respectively). Estradiol levels were elevated in both the LE (4.9 ± 1.4 ng/mL) and LS (5.3 ± 1.4 ng/mL) groups relative to the control group (3.9 ± 0.8 ng/mL), although the differences were not significant (P = 0.326 and P = 0.547, respectively).\nData are presented as the mean ± SD (n = 5).\nLH = luteinizing hormone, FSH = follicle-stimulating hormone, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naNot significant vs. control.\n*P < 0.05 vs. control; **P < 0.001 vs. control.\nLH levels were significantly higher in the LE (67.6 ± 3.0 mIU/mL) and LS (64.3 ± 7.4 mIU/mL) groups than in the control group (49.9 ± 2.7 mIU/mL; P < 0.001 for both) (Table 2). Additionally, FSH levels were significantly higher in the LE (50.9 ± 9.1 ng/mL) and LS (51.4 ± 7.1 ng/mL) groups than in the control group (35.2 ± 3.7 ng/mL; P = 0.009 and P = 0.011, respectively). Estradiol levels were elevated in both the LE (4.9 ± 1.4 ng/mL) and LS (5.3 ± 1.4 ng/mL) groups relative to the control group (3.9 ± 0.8 ng/mL), although the differences were not significant (P = 0.326 and P = 0.547, respectively).\nData are presented as the mean ± SD (n = 5).\nLH = luteinizing hormone, FSH = follicle-stimulating hormone, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naNot significant vs. control.\n*P < 0.05 vs. control; **P < 0.001 vs. control.\n Measurement of body and organ weights Measurement of the body weight of rats in the control, LE, and LS groups every 3 days from PND 18 until VO revealed no significant differences among three groups (Fig. 2 and Table 3). The weights of the ovaries, liver, and spleen after VO showed no significant differences among three groups; however, the weight of the kidneys per 150 g body weight increased significantly after VO in the LE group (1.926 ± 0.154 g) compared with that in the control (1.664 ± 0.077 g; P = 0.009) and LS (1.694 ± 0.154 g; P = 0.017) groups (Table 4).\nPND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\nData are presented as the mean ± SD (n = 5).\nPND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naNot significant.\nData are presented as the mean ± SD (n = 5).\nLS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naTissue weight (g) per body weight (150 g); bNot significant.\n*P < 0.01 vs. control; **P < 0.05 vs. LS group.\nMeasurement of the body weight of rats in the control, LE, and LS groups every 3 days from PND 18 until VO revealed no significant differences among three groups (Fig. 2 and Table 3). 
The weights of the ovaries, liver, and spleen after VO showed no significant differences among three groups; however, the weight of the kidneys per 150 g body weight increased significantly after VO in the LE group (1.926 ± 0.154 g) compared with that in the control (1.664 ± 0.077 g; P = 0.009) and LS (1.694 ± 0.154 g; P = 0.017) groups (Table 4).\nPND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\nData are presented as the mean ± SD (n = 5).\nPND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naNot significant.\nData are presented as the mean ± SD (n = 5).\nLS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naTissue weight (g) per body weight (150 g); bNot significant.\n*P < 0.01 vs. control; **P < 0.05 vs. LS group.", "VO occurred earlier in the LE (33.8 ± 1.8 days) and LS (36.6 ± 1.5 days) groups than in the control group (38.4 ± 2.9 days) (Table 1), and VO in the LE group occurred significantly earlier than in the control group (P = 0.014) and LS group (P = 0.032), respectively. However, there was no significant difference between the LS and control groups with respect to VO timing (P = 0.151). Almost all rats in the LE group experienced VO at 33 days, and the control group mostly showed VO between 38 and 41 days (Fig. 1).\nLS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil, VO = vaginal opening.\n*P < 0.05 vs. control; **P < 0.05 vs. LS group.\nPND = postnatal day, LE = exposure to diffused lavender oil, C = control, LS = exposure to lavender oil as a nasal spray.", "LH levels were significantly higher in the LE (67.6 ± 3.0 mIU/mL) and LS (64.3 ± 7.4 mIU/mL) groups than in the control group (49.9 ± 2.7 mIU/mL; P < 0.001 for both) (Table 2). Additionally, FSH levels were significantly higher in the LE (50.9 ± 9.1 ng/mL) and LS (51.4 ± 7.1 ng/mL) groups than in the control group (35.2 ± 3.7 ng/mL; P = 0.009 and P = 0.011, respectively). Estradiol levels were elevated in both the LE (4.9 ± 1.4 ng/mL) and LS (5.3 ± 1.4 ng/mL) groups relative to the control group (3.9 ± 0.8 ng/mL), although the differences were not significant (P = 0.326 and P = 0.547, respectively).\nData are presented as the mean ± SD (n = 5).\nLH = luteinizing hormone, FSH = follicle-stimulating hormone, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naNot significant vs. control.\n*P < 0.05 vs. control; **P < 0.001 vs. control.", "Measurement of the body weight of rats in the control, LE, and LS groups every 3 days from PND 18 until VO revealed no significant differences among three groups (Fig. 2 and Table 3). 
The weights of the ovaries, liver, and spleen after VO showed no significant differences among three groups; however, the weight of the kidneys per 150 g body weight increased significantly after VO in the LE group (1.926 ± 0.154 g) compared with that in the control (1.664 ± 0.077 g; P = 0.009) and LS (1.694 ± 0.154 g; P = 0.017) groups (Table 4).\nPND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\nData are presented as the mean ± SD (n = 5).\nPND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naNot significant.\nData are presented as the mean ± SD (n = 5).\nLS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil.\naTissue weight (g) per body weight (150 g); bNot significant.\n*P < 0.01 vs. control; **P < 0.05 vs. LS group.", "In this study, we found that persistent exposure to LO is associated with activation of the HPG axis and early pubertal onset. We observed earlier VO in rats persistently exposed to LO than in the control group.\nVO in Sprague-Dawley rats occurs after the gonadotropin surge, typically between PND 30.8 and 38.4, and can be affected by the environment, nutrition, temperature, and light.192021 In a controlled environment, the mean VO of the control group in this study was 38.4 days, and VO occurred significantly earlier in the LE group (33.8 days).\nAdditionally, serum LH and FSH levels were significantly higher in the LE and LS groups than in the control group. The LH level has been considered a gold-standard marker of pubertal status, whereas the estradiol level fluctuates during the day and can be low even in the pubertal period.22232425 Chronic and persistent exposure to estrogens could also affect gonadotropin activation. Chronic exposure to sex hormones in cases of peripheral precocious puberty, including congenital adrenal hyperplasia or McCune-Albright syndrome, can lead to secondary CPP, and these patients require treatment with a GnRH agonist.2627\nPrevious studies showed that the estrogenic effect of LO was associated with premature thelarche in girls and gynecomastia in boys.11121314 To the best of our knowledge, no previous studies have reported an association between pubertal onset and persistent olfactory exposure to LO in an animal model. Nasal inhalation is an important source of iatrogenic sex-hormone exposure, and olfactory exposure to LO may result in LO delivery to the central nervous system and bloodstream to induce an iatrogenic effect of estrogen.\nSeveral studies have shown that LO is effective at reducing menopausal symptoms and supporting healthy sleep.2829 Additionally, previous studies reported that LO does not increase estrogen levels in adults30; in contrast, in the present study, we could not conclude that LO exposure did not affect estrogen activity. Although we observed no differences in estradiol levels between the LE and control groups, the LE group showed early pubertal onset and significantly increased LH and FSH levels. Furthermore, the LS group showed no significant differences in VO compared with the control group. 
Therefore, the amount and persistence of LO exposure may determine pubertal onset.\nA previous animal study reported that percutaneous injection of LO when performing uterotrophic assays on immature rats results in significantly reduced body weight gain after 3 days compared with that in the control group and a group administered 17α-ethinyl estradiol.31 However, the authors only assessed weight gain and organ-weight-to-terminal-body-weight to evaluate the presence of an estrogenic effect, and did not compare hormone levels or VO timing. The different administration modalities may have been responsible for the reported differences in body weight gain between the LO injection group and the group undergoing oral estrogen administration. In the present study, we found no differences in organ weight, including the ovaries, and body weight gain between groups, despite the apparently different VO and gonadotropin levels. Interestingly, the kidney tissue weight was significantly increased only in the LE group, supporting an association between physiological LO-specific effects and endocrine effects. A recent study showed that LO exposure affects renal restoration in a dose-dependent manner by decreasing antioxidant signals and inflammatory cytokine levels, as well as by inhibiting apoptosis.32 In the present study, we measured neither renal function nor nephron numbers and used only renal tissue weight as an indicator of the positive effect of LO exposure. Therefore, increased renal tissue weight may indicate a renal burden related to LO excretion.\nFour studies have reported a total of 11 pediatric patients (seven males and four females) showing premature thelarche in females (age range: 14 months to 7 years and 9 months) and gynecomastia in males (age range: 4 years and 5 months to 10 years and 1 month) after using an LO-containing product.11121314 Ramsey et al.13 reported that patients showed symptomatic improvement after discontinuation of LO exposure accompanied by no abnormal laboratory findings. These reports suggest the estrogenic effect of topical preparation of LO. Additionally, in vitro studies showed that LO (or the LO components linalool and linalyl acetate) exerts an estrogenic effect by stimulating α-transcription of the estrogen receptor.1113 The peripheral hormones or signals transmitting to GnRH neurons may lead to GnRH secretion and stimulation of pituitary gonadotropins and gonadal sex steroids.8 Given our observation of early activation of the HPG axis after olfactory exposure to LO, further investigation of the effect of LO on gene activation related to GnRH synthesis or secretion may explain the associations between early activation of the HPG axis and LO exposure.\nWe focused on the effects of LO exposure through olfactory stimulation and not via oral ingestion or topical application. One limitation of this study is its small sample size; thus, further studies are required to validate the findings. We observed significant elevation of gonadotropin levels not only in the LE group but also in the LS group, suggesting that repetitive olfactory exposure affected the manifestation of LO-specific physiological effects. Several studies reported that essential LO affects anxiety when administered via the oral or nasal routes.333435 However, LO exposure via oral ingestion in food is not as common as skin absorption of various LO-containing cosmetic products or LO inhalation. 
Nevertheless, inhalation of environmental LO is difficult to quantify in humans, and epidemiological surveillance data for the sole effect of LO inhalation in children undergoing precocious puberty are unavailable. Furthermore, the frequency and duration of exposure to LO inhalation are highly variable, and the effect of LO following skin exposure on children remains inconclusive because of the variable amounts of LO to which the skin is exposed, and difficulties associated with follow-up to assess long-term effects.36\nThis study showed the effect of olfactory stimulation by LO on the early onset of puberty. These results suggest that avoidance of LO exposure to minimize unnecessary iatrogenic estrogen effects from fragrances, diffusers, and perfumes can prevent early stimulation of the HPG axis, particularly in younger children. Further in vitro evaluation of LO-related effects on central kisspeptin signaling may reveal the mechanisms associated with early activation of the HPG axis by persistent olfactory exposure to LO." ]
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "Precocious Puberty", "Endocrine Disruptors", "Lavender Oil", "Inhalation Exposure", "Vaginal Opening" ]
INTRODUCTION: Central precocious puberty (CPP) describes the early activation of the hypothalamic–pituitary–gonadal (HPG) axis, which leads to the rapid progression of bone age, early menarche, reduction in final adult height, and the appearance of secondary sexual characteristics before 8 and 9 years of age in girls and boys, respectively.1 Traditionally, CPP is accompanied by intracranial lesions, including optic glioma, pilocytic astrocytoma, hydrocephalus, Rathke’s cleft cyst, and pituitary adenomas, in 40–90% of boys and < 10% of girls.23 The gonadotropin-releasing hormone (GnRH) stimulation test is used for diagnosing CPP, and the basal luteinizing hormone (LH) level is considered as a valuable tool to assess pubertal state.4 CPP treatment was introduced in 1980, and dosing of a recombinant GnRH agonist every 4 weeks or 3 months leads to increases in the final adult height and delayed menarche.23 Improving the final adult height of children with CPP is one of the major issues during treatment.56 However, the incidence of precocious puberty is rapidly increasing, and examination and treatment of this condition are becoming a major burden because of the associated medical expenses, although the cause of this condition remains unknown.23 Environmental hormones, i.e., endocrine-disrupting chemicals, were recently suggested to contribute to the onset of puberty in childhood,78 and animal studies demonstrated that endocrine-disrupting chemicals accelerate pubertal onset.910 Additionally, previous reports showed that lavender oil (LO) and tea tree oil are associated with prepubertal gynecomastia in boys.1112 Moreover, cases of premature thelarche that resolved after cessation of exposure to lavender-containing fragrance have been reported,1314 and an in vitro study showed that components of LO, including linalool and linalyl acetate, activate estrogen-related gene expression in human cell lines.13 However, studies of the absorbance of these materials in sufficient amounts and their effect on breast growth have not been performed. A previous study suggested that smell of sense can be transmitted to the central nervous system, thereby facilitating the bypass of inhaled molecules via the nasal pathway of the blood–brain barrier.15 There are several opportunities for inhalation of numerous endocrine-disrupting chemicals from estrogenic sources in cosmetics, perfumes, air fresheners, and scented candles/diffusers using essential oils, which can directly affect olfactory stimulation of the neuroendocrine system. Since some studies suggested that essential oils may have efficacy against the coronavirus disease 2019 and its inflammatory complications,161718 the interests for home therapy using essential oils are more increasing. In this study, we tested whether continuous inhalation of LO affects early gonadotropin activation and precocious puberty. However, it is difficult to limit or measure the number of EDCs to nasal exposure in human. Thus, we investigated the effects of continuous inhalation of LO on pubertal onset and gonadotropin hormone levels in an animal model and compared them to control conditions. METHODS: Animals and experimental design To obtain study animals, rats were bred using male and female Sprague–Dawley rats. From birth onward, we maintained an indoor temperature of 22°C (humidity: 30–70%) and controlled illumination (12-hour light/dark cycle) to allow breeding in a constant environment along with free access to water and food. 
On day 18 after birth, we identified 15 immature females and randomly divided them into three groups: olfactory stimulation groups 1 and 2 and a control group (n = 5/group). We used 100% pure LO obtained from Lavandula angustifolia (NOW Foods, Bloomingdale, IL, USA) for all experiments. Group 1 was treated by indoor exposure to LO (LE) via an LO diffuser in the cage using an LO-soaked puff (changed daily) along with daily exposure to 0.9% NaCl spray. For Group 2, LO was administered as a nasal spray of aromatic LO (LS) once daily. The control group was treated with a single exposure to a nasal spray (0.9% NaCl) daily. The dose of one spray of LO or 0.9% NaCl ranged from 72–125 µL. The body weight of the animals was measured every 3 days from postnatal day (PND) 18 to the day of vaginal opening (VO). To obtain study animals, rats were bred using male and female Sprague–Dawley rats. From birth onward, we maintained an indoor temperature of 22°C (humidity: 30–70%) and controlled illumination (12-hour light/dark cycle) to allow breeding in a constant environment along with free access to water and food. On day 18 after birth, we identified 15 immature females and randomly divided them into three groups: olfactory stimulation groups 1 and 2 and a control group (n = 5/group). We used 100% pure LO obtained from Lavandula angustifolia (NOW Foods, Bloomingdale, IL, USA) for all experiments. Group 1 was treated by indoor exposure to LO (LE) via an LO diffuser in the cage using an LO-soaked puff (changed daily) along with daily exposure to 0.9% NaCl spray. For Group 2, LO was administered as a nasal spray of aromatic LO (LS) once daily. The control group was treated with a single exposure to a nasal spray (0.9% NaCl) daily. The dose of one spray of LO or 0.9% NaCl ranged from 72–125 µL. The body weight of the animals was measured every 3 days from postnatal day (PND) 18 to the day of vaginal opening (VO). Analysis of VO All study groups were evaluated for VO time as an indicator of pubertal initiation at a fixed time (09:00 hour) daily. The day of VO was recorded, and VO timing was compared between the three study groups. All study groups were evaluated for VO time as an indicator of pubertal initiation at a fixed time (09:00 hour) daily. The day of VO was recorded, and VO timing was compared between the three study groups. Euthanasia and hormone assays After VO was observed in each rat, we measured serum LH, follicle-stimulating hormone (FSH), and estradiol levels to compare hormone concentrations between study groups. The endpoint of the experiment was defined as VO occurring in the last rat. For this process, truncal blood was collected into ice-cold ethylenediamine tetraacetic acid-containing tubes after decapitation, after which the tubes were centrifuged, and plasma samples were collected and stored at −20°C until analysis. The plasma levels of LH, FSH, and estradiol of each rat were measured using enzyme-linked immunosorbent assay kits (cat. No. MBS764675, MBS2502190, and MBS263850, respectively; MyBioSource, Inc., San Diego, CA, USA) according to manufacturer’s instructions. After VO was observed in each rat, we measured serum LH, follicle-stimulating hormone (FSH), and estradiol levels to compare hormone concentrations between study groups. The endpoint of the experiment was defined as VO occurring in the last rat. 
For this process, truncal blood was collected into ice-cold ethylenediamine tetraacetic acid-containing tubes after decapitation, after which the tubes were centrifuged, and plasma samples were collected and stored at −20°C until analysis. The plasma levels of LH, FSH, and estradiol of each rat were measured using enzyme-linked immunosorbent assay kits (cat. No. MBS764675, MBS2502190, and MBS263850, respectively; MyBioSource, Inc., San Diego, CA, USA) according to manufacturer’s instructions. Measurement of organ weight After euthanasia, we measured the weight of the ovaries, spleen, kidneys, and liver. The organ weight was then modified by body weight and presented as tissue weight per 150 g body weight. After euthanasia, we measured the weight of the ovaries, spleen, kidneys, and liver. The organ weight was then modified by body weight and presented as tissue weight per 150 g body weight. Statistical analysis Data were presented as the mean ± standard deviation, and statistical analyses were performed using SPSS software (v.26.0; SPSS, Inc., Chicago, IL, USA). Statistical significance was determined by Kruskal-Wallis test and one-way analysis of variance for multiple-group comparisons and Mann-Whitney U test for comparisons between two groups. Statistical significance was defined at P < 0.05. Data were presented as the mean ± standard deviation, and statistical analyses were performed using SPSS software (v.26.0; SPSS, Inc., Chicago, IL, USA). Statistical significance was determined by Kruskal-Wallis test and one-way analysis of variance for multiple-group comparisons and Mann-Whitney U test for comparisons between two groups. Statistical significance was defined at P < 0.05. Ethics statement The procedures used and the care of animals were approved by the Institutional Animal Care and Use Committee in Southwest Medi-Chem Institute (approval No. SEMI-20-001). The procedures used and the care of animals were approved by the Institutional Animal Care and Use Committee in Southwest Medi-Chem Institute (approval No. SEMI-20-001). Animals and experimental design: To obtain study animals, rats were bred using male and female Sprague–Dawley rats. From birth onward, we maintained an indoor temperature of 22°C (humidity: 30–70%) and controlled illumination (12-hour light/dark cycle) to allow breeding in a constant environment along with free access to water and food. On day 18 after birth, we identified 15 immature females and randomly divided them into three groups: olfactory stimulation groups 1 and 2 and a control group (n = 5/group). We used 100% pure LO obtained from Lavandula angustifolia (NOW Foods, Bloomingdale, IL, USA) for all experiments. Group 1 was treated by indoor exposure to LO (LE) via an LO diffuser in the cage using an LO-soaked puff (changed daily) along with daily exposure to 0.9% NaCl spray. For Group 2, LO was administered as a nasal spray of aromatic LO (LS) once daily. The control group was treated with a single exposure to a nasal spray (0.9% NaCl) daily. The dose of one spray of LO or 0.9% NaCl ranged from 72–125 µL. The body weight of the animals was measured every 3 days from postnatal day (PND) 18 to the day of vaginal opening (VO). Analysis of VO: All study groups were evaluated for VO time as an indicator of pubertal initiation at a fixed time (09:00 hour) daily. The day of VO was recorded, and VO timing was compared between the three study groups. 
Ethics statement: The procedures used and the care of animals were approved by the Institutional Animal Care and Use Committee of the Southwest Medi-Chem Institute (approval No. SEMI-20-001).
RESULTS: Effect of olfactory exposure to LO on VO and pubertal onset: VO occurred earlier in the LE (33.8 ± 1.8 days) and LS (36.6 ± 1.5 days) groups than in the control group (38.4 ± 2.9 days) (Table 1), and VO in the LE group occurred significantly earlier than in the control group (P = 0.014) and the LS group (P = 0.032). However, there was no significant difference between the LS and control groups with respect to VO timing (P = 0.151). Almost all rats in the LE group experienced VO at 33 days, and the control group mostly showed VO between 38 and 41 days (Fig. 1). LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil, VO = vaginal opening. *P < 0.05 vs. control; **P < 0.05 vs. LS group. PND = postnatal day, LE = exposure to diffused lavender oil, C = control, LS = exposure to lavender oil as a nasal spray.
Measurement of gonadotropin hormone and estradiol levels: LH levels were significantly higher in the LE (67.6 ± 3.0 mIU/mL) and LS (64.3 ± 7.4 mIU/mL) groups than in the control group (49.9 ± 2.7 mIU/mL; P < 0.001 for both) (Table 2). Additionally, FSH levels were significantly higher in the LE (50.9 ± 9.1 ng/mL) and LS (51.4 ± 7.1 ng/mL) groups than in the control group (35.2 ± 3.7 ng/mL; P = 0.009 and P = 0.011, respectively). Estradiol levels were elevated in both the LE (4.9 ± 1.4 ng/mL) and LS (5.3 ± 1.4 ng/mL) groups relative to the control group (3.9 ± 0.8 ng/mL), although the differences were not significant (P = 0.326 and P = 0.547, respectively). Data are presented as the mean ± SD (n = 5). LH = luteinizing hormone, FSH = follicle-stimulating hormone, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. aNot significant vs. control. *P < 0.05 vs. control; **P < 0.001 vs. control.
Measurement of body and organ weights: Measurement of the body weight of rats in the control, LE, and LS groups every 3 days from PND 18 until VO revealed no significant differences among the three groups (Fig. 2 and Table 3). The weights of the ovaries, liver, and spleen after VO showed no significant differences among the three groups; however, the weight of the kidneys per 150 g body weight increased significantly after VO in the LE group (1.926 ± 0.154 g) compared with that in the control (1.664 ± 0.077 g; P = 0.009) and LS (1.694 ± 0.154 g; P = 0.017) groups (Table 4). PND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. Data are presented as the mean ± SD (n = 5). PND = postnatal day, LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. aNot significant. Data are presented as the mean ± SD (n = 5). LS = exposure to lavender oil as a nasal spray, LE = exposure to diffused lavender oil. aTissue weight (g) per body weight (150 g); bNot significant. *P < 0.01 vs. control; **P < 0.05 vs. LS group.
DISCUSSION: In this study, we found that persistent exposure to LO is associated with HPG axis activation and early pubertal onset. We observed early VO in rats persistently exposed to LO as compared with that in the control group.
VO in Sprague-Dawley rats occurs after the gonadotropin surge, typically between PND 30.8 and 38.4, and it might be affected by the environment, nutrition, temperature, and light.19,20,21 In a controlled environment, the mean VO of the control group in this study was 38.4 days, and VO occurred significantly earlier in the LE group (33.8 days). Additionally, serum LH and FSH levels were significantly higher in the LE and LS groups than in the control group. The LH level is considered a gold-standard marker of pubertal status, whereas the estradiol level fluctuates during the day and can be low even in the pubertal period.22,23,24,25 Chronic and persistent exposure to estrogens could also affect gonadotropin activation. Chronic exposure to sex hormones in cases of peripheral precocious puberty, including congenital adrenal hyperplasia or McCune-Albright syndrome, can lead to secondary CPP, and these patients require treatment with a GnRH agonist.26,27 Previous studies have linked an estrogenic effect of LO to premature thelarche in girls and gynecomastia in boys.11,12,13,14 To the best of our knowledge, no previous studies have reported an association between pubertal onset and persistent olfactory exposure to LO in an animal model. Nasal inhalation is an important source of iatrogenic sex-hormone exposure, and olfactory exposure to LO may result in LO delivery to the central nervous system and bloodstream, inducing an iatrogenic estrogen effect. Several studies have shown that LO is effective at reducing menopausal symptoms and supporting healthy sleep.28,29 Additionally, previous studies reported that LO does not increase estrogen levels in adults30; in contrast, in the present study, we could not conclude that LO exposure did not affect estrogen activity. Although we observed no differences in estradiol levels between the LE and control groups, the LE group showed early pubertal onset and significantly increased LH and FSH levels. Furthermore, the LS group showed no significant differences in VO compared with the control group. Therefore, the amount and persistence of LO exposure may determine pubertal onset. A previous animal study using uterotrophic assays in immature rats reported that percutaneous injection of LO significantly reduced body weight gain after 3 days compared with a control group and a group administered 17α-ethinyl estradiol.31 However, the authors only assessed weight gain and organ-weight-to-terminal-body-weight ratios to evaluate the presence of an estrogenic effect, and did not compare hormone levels or VO timing. The different administration modalities may have been responsible for the reported differences in body weight gain between the LO injection group and the group undergoing oral estrogen administration. In the present study, we found no differences in organ weight, including the ovaries, or body weight gain between groups, despite the apparently different VO and gonadotropin levels. Interestingly, kidney tissue weight was significantly increased only in the LE group, supporting an association between physiological LO-specific effects and endocrine effects.
A recent study showed that LO exposure affects renal restoration in a dose-dependent manner by decreasing antioxidant signals and inflammatory cytokine levels, as well as by inhibiting apoptosis.32 In the present study, we measured neither renal function nor nephron numbers and used only renal tissue weight as an indicator of the positive effect of LO exposure. Therefore, increased renal tissue weight may indicate a renal burden related to LO excretion. Four studies have reported a total of 11 pediatric patients (seven males and four females) showing premature thelarche in females (age range: 14 months to 7 years and 9 months) and gynecomastia in males (age range: 4 years and 5 months to 10 years and 1 month) after using an LO-containing product.11,12,13,14 Ramsey et al.13 reported that patients showed symptomatic improvement after discontinuation of LO exposure, with no abnormal laboratory findings. These reports suggest an estrogenic effect of topical LO preparations. Additionally, in vitro studies showed that LO (or the LO components linalool and linalyl acetate) exerts an estrogenic effect by stimulating estrogen receptor α-mediated transcription.11,13 Peripheral hormones or signals transmitted to GnRH neurons may lead to GnRH secretion and stimulation of pituitary gonadotropins and gonadal sex steroids.8 Given our observation of early activation of the HPG axis after olfactory exposure to LO, further investigation of the effect of LO on gene activation related to GnRH synthesis or secretion may explain the associations between early activation of the HPG axis and LO exposure. We focused on the effects of LO exposure through olfactory stimulation and not via oral ingestion or topical application. One limitation of this study is its small sample size; thus, further studies are required to validate the findings. We observed significant elevation of gonadotropin levels not only in the LE group but also in the LS group, suggesting that repetitive olfactory exposure affected the manifestation of LO-specific physiological effects. Several studies have reported that LO affects anxiety when administered via the oral or nasal routes.33,34,35 However, LO exposure via oral ingestion in food is not as common as skin absorption of various LO-containing cosmetic products or LO inhalation. Nevertheless, inhalation of environmental LO is difficult to quantify in humans, and epidemiological surveillance data for the sole effect of LO inhalation in children with precocious puberty are unavailable. Furthermore, the frequency and duration of exposure to LO inhalation are highly variable, and the effect of LO following skin exposure in children remains inconclusive because of the variable amounts of LO to which the skin is exposed and difficulties associated with follow-up to assess long-term effects.36 This study showed the effect of olfactory stimulation by LO on the early onset of puberty. These results suggest that avoiding LO exposure from fragrances, diffusers, and perfumes, to minimize unnecessary iatrogenic estrogen effects, may help prevent early stimulation of the HPG axis, particularly in younger children. Further in vitro evaluation of LO-related effects on central kisspeptin signaling may reveal the mechanisms associated with early activation of the HPG axis by persistent olfactory exposure to LO.
Background: Central precocious puberty (CPP) is caused by early activation of the hypothalamic-pituitary-gonadal axis, but its major cause remains unclear. Studies have indicated an association between chronic environmental exposure to endocrine-disrupting chemicals and pubertal onset. Essential oil is widely used in homes worldwide for relief of respiratory symptoms, stress, and/or sleep disturbance. Methods: To evaluate this association, we compared the hormone levels and timing of vaginal opening (VO) in female rats exposed to lavender oil (LO) through different routes (study groups: control, LO nasal spray [LS], and indoor exposure to LO [LE]) during the prepubertal period. The body weights of the animals were also compared every 3 days until the day of VO, at which time gonadotropin levels and internal organ weights were assessed. Results: The LE group showed early VO at 33.8 ± 1.8 days compared with the control (38.4 ± 2.9 days) and LS (36.6 ± 1.5 days) groups. Additionally, luteinizing hormone levels were significantly higher in the LE and LS groups than in the control group. Body weights did not differ significantly among the groups. Conclusions: Inhalation exposure to an exogenic simulant during the prepubertal period might trigger early pubertal onset in female rats. Further evaluation of exposure to other endocrine-disrupting chemicals capable of inducing CPP through the skin, orally, and/or nasally is warranted.
null
null
5,610
273
[ 247, 42, 142, 38, 74, 33, 193, 232, 258 ]
13
[ "exposure", "group", "lo", "control", "vo", "ls", "le", "groups", "weight", "oil" ]
[ "puberty results suggest", "early onset puberty", "pituitary gonadal hpg", "peripheral precocious puberty", "puberty cpp describes" ]
null
null
[CONTENT] Precocious Puberty | Endocrine Disruptors | Lavender Oil | Inhalation Exposure | Vaginal Opening [SUMMARY]
[CONTENT] Precocious Puberty | Endocrine Disruptors | Lavender Oil | Inhalation Exposure | Vaginal Opening [SUMMARY]
[CONTENT] Precocious Puberty | Endocrine Disruptors | Lavender Oil | Inhalation Exposure | Vaginal Opening [SUMMARY]
null
[CONTENT] Precocious Puberty | Endocrine Disruptors | Lavender Oil | Inhalation Exposure | Vaginal Opening [SUMMARY]
null
[CONTENT] Administration, Inhalation | Animals | Female | Lavandula | Oils, Volatile | Plant Oils | Puberty, Precocious | Random Allocation | Rats [SUMMARY]
[CONTENT] Administration, Inhalation | Animals | Female | Lavandula | Oils, Volatile | Plant Oils | Puberty, Precocious | Random Allocation | Rats [SUMMARY]
[CONTENT] Administration, Inhalation | Animals | Female | Lavandula | Oils, Volatile | Plant Oils | Puberty, Precocious | Random Allocation | Rats [SUMMARY]
null
[CONTENT] Administration, Inhalation | Animals | Female | Lavandula | Oils, Volatile | Plant Oils | Puberty, Precocious | Random Allocation | Rats [SUMMARY]
null
[CONTENT] puberty results suggest | early onset puberty | pituitary gonadal hpg | peripheral precocious puberty | puberty cpp describes [SUMMARY]
[CONTENT] puberty results suggest | early onset puberty | pituitary gonadal hpg | peripheral precocious puberty | puberty cpp describes [SUMMARY]
[CONTENT] puberty results suggest | early onset puberty | pituitary gonadal hpg | peripheral precocious puberty | puberty cpp describes [SUMMARY]
null
[CONTENT] puberty results suggest | early onset puberty | pituitary gonadal hpg | peripheral precocious puberty | puberty cpp describes [SUMMARY]
null
[CONTENT] exposure | group | lo | control | vo | ls | le | groups | weight | oil [SUMMARY]
[CONTENT] exposure | group | lo | control | vo | ls | le | groups | weight | oil [SUMMARY]
[CONTENT] exposure | group | lo | control | vo | ls | le | groups | weight | oil [SUMMARY]
null
[CONTENT] exposure | group | lo | control | vo | ls | le | groups | weight | oil [SUMMARY]
null
[CONTENT] cpp | puberty | essential oils | final adult | final adult height | adult height | adult | chemicals | suggested | treatment [SUMMARY]
[CONTENT] lo | daily | weight | vo | statistical | group | animals | nacl | rat | groups [SUMMARY]
[CONTENT] ls | ml | lavender | oil | lavender oil | le | control | exposure | group | ng [SUMMARY]
null
[CONTENT] weight | group | lo | vo | exposure | ls | control | le | groups | oil [SUMMARY]
null
[CONTENT] CPP ||| ||| [SUMMARY]
[CONTENT] LO | LO | LO ||| ||| every 3 days | the day [SUMMARY]
[CONTENT] LS | 33.8 | 1.8 days | 38.4 | 2.9 days | LE | 36.6 ± | 1.5 days ||| LE ||| [SUMMARY]
null
[CONTENT] CPP ||| ||| ||| LO | LO | LO ||| ||| every 3 days | the day ||| ||| LS | 33.8 | 1.8 days | 38.4 | 2.9 days | LE | 36.6 ± | 1.5 days ||| LE ||| ||| ||| CPP [SUMMARY]
null
Quality of life and patient-perceived symptoms in patients with psoriasis undergoing proactive or reactive management with the fixed-dose combination Cal/BD foam: A post-hoc analysis of PSO-LONG.
34543474
Psoriasis has important physical and psychosocial effects that extend beyond the skin. Understanding the impact of treatment on health-related quality of life (HRQoL) and patient-perceived symptom severity in psoriasis is key to clinical decision-making.
BACKGROUND
Five hundred and twenty-one patients from the Phase 3, randomized, double-blind PSO-LONG trial were included. An initial 4-week, open-label phase of fixed-dose combination Cal/BD foam once daily (QD) was followed by a 52-week maintenance phase, at the start of which patients were randomized to a proactive management arm (Cal/BD foam twice weekly) or reactive management arm (vehicle foam twice weekly). Patient-perceived symptom severity and HRQoL were assessed using the Psoriasis Symptom Inventory (PSI), the Dermatology Life Quality Index (DLQI) and the EuroQol-5D for psoriasis (EQ-5D-5L-PSO).
METHODS
Statistically and clinically significant improvements were observed across all PRO measures. The mean difference (standard deviation) from baseline to Week 4 was -8.97 (6.18) for PSI, -6.02 (5.46) for DLQI and 0.11 (0.15) for EQ-5D-5L-PSO scores. During maintenance, patients receiving reactive management had significantly higher DLQI (15% [p = 0.007]) and PSI (15% [p = 0.0128]) and a numerically lower EQ-5D-5L-PSO mean area under the curve score than patients receiving proactive management (1% [p = 0.0842]).
RESULTS
Cal/BD foam significantly improved DLQI, EQ-5D-5L-PSO and PSI scores during the open-label and maintenance phases. Patients assigned to proactive management had significantly better DLQI and PSI scores and numerically better EQ-5D-5L-PSO versus reactive management. Additionally, baseline flare was associated with worse PROs than the start of a relapse, and patients starting a relapse also had worse PROs than patients in remission.
CONCLUSIONS
[ "Betamethasone", "Dermatologic Agents", "Drug Combinations", "Humans", "Psoriasis", "Quality of Life", "Treatment Outcome" ]
9298373
Introduction
Psoriasis is a chronic, immune‐mediated disease, with primarily skin and joint symptoms. 1 , 2 The morphology, localization and severity of lesions in psoriasis can be highly variable. 3 Various genetic, environmental and immunological factors have been proposed as potential contributors to the pathophysiology of this disease. 3 Individuals with psoriasis have also been shown to have an elevated risk of cardiovascular disease, metabolic syndrome and diabetes, compared with the general population. 4 Globally, the prevalence of psoriasis has been reported to vary between 0.09% and 11.43%, which corresponds to approximately 125 million affected people. 5 , 6 Despite ongoing efforts to improve the management of this condition, the burden of disease has been increasing steadily over the past decades. 7 Collectively, this renders psoriasis a significant health issue worldwide. Psoriasis can significantly influence a person's quality of life (QoL) and cause social stigmatization, physical disability and emotional distress. 8 Moreover, the impact of psoriasis on QoL is similar to that in patients with other chronic conditions such as cardiovascular disease, diabetes, end‐stage renal disease, liver disease and cancer. 9 Skin symptoms of psoriasis including scaling, itch and pain can significantly affect physical well‐being and limit daily activities, social contacts and (skin‐exposing) activities, and work. 10 Psoriasis has a greater psychological burden than any other dermatological condition and has been associated with impaired emotional functioning, a negative body and self‐image, depression, anxiety and suicide risk more than any other skin condition. 10 , 11 Other factors may also be attributable to the low QoL in psoriasis patients, including the chronic and recurring nature of the disease, lack of control and fear of unexpected breakout, and feeling of hopelessness in terms of cure. 12 Furthermore, duration and severity of psoriasis significantly decrease the QoL. 13 , 14 For mild to moderate psoriasis, current treatment strategies commonly involve topical agents. For moderate to severe psoriasis, topical agents are often added to phototherapy and systemic or biologic agents. 15 , 16 , 17 Current management strategies mainly aim to clear active disease sites and prolong symptom‐free periods. 18 However, long‐term disease control is challenging, and patient satisfaction with available therapies remains low. 19 Moreover, psoriasis is often undertreated such that patients do not achieve substantial skin clearance, symptom relief or improvements in QoL. 20 , 21 Although skin clearance may be achievable for most patients in the short term, long‐term strategies are important to optimize adherence and long‐term outcomes including health‐related quality of life (HRQoL). 22 , 23 However, the majority of clinical data and guidance available for topical management of psoriasis is focused on short‐term use, with limited data on long‐term use. 23 Therefore, understanding the impact of treatment on HRQoL and patient‐perceived symptom severity in psoriasis is key to informing clinical decision‐making, improving clinical outcomes and quality of care. Patient‐reported outcomes measures (PROs) are invaluable tools to evaluate these effects and support clinical management. 
22 , 24 This post hoc analysis of the PSO‐LONG trial captured the effect on HRQoL and patient‐perceived symptom severity of treating psoriasis with fixed‐dose calcipotriene 50 µg/g and betamethasone dipropionate 0.5 mg/g (Cal/BD) foam topical treatment through a 52‐week period. Three PRO measures were used: EuroQoL 5‐Dimensional Questionnaire for Psoriasis (EQ‐5D‐5L‐PSO), the Dermatology Life Quality Index (DLQI) and the Psoriasis Symptom Inventory (PSI). The analysis aimed to evaluate the value of Cal/BD foam for flare treatment and long‐term management (proactive vs reactive) on PROs as well as compare results at baseline flare, start of a relapse and during remission.
null
null
Results
Patient demographics
A total of 521 patients were included in the full analysis set used in this post hoc analysis. Patients were predominantly male (67.4%) and white (90.2%). The mean age was approximately 52.3 years. The majority of patients had moderate baseline PGA scores (85.2%). Patient characteristics are summarized in Table 1.
Demographic and baseline characteristics of randomized patients (full analysis set; N = 521)
Demographic and baseline characteristics (maintenance full analysis set; N = 521)
Open‐label phase
Initial flare treatment with Cal/BD foam QD during the open‐label phase led to statistically and clinically significant improvements across all PRO measures (Table 2). The mean difference from baseline to Week 4 was −8.97 (standard deviation [SD] = 6.18; P < 0.0001) for PSI scores, −6.02 (SD = 5.46; P < 0.0001) for DLQI scores and 0.11 (SD = 0.15; P < 0.0001) for EQ‐5D‐5L‐PSO scores.
Changes in PSI, DLQI and EQ‐5D‐5L‐PSO scores in flare treatment from baseline to Week 4 (full analysis set; N = 521)
Difference calculated from participants with both baseline and Week 4 scores.
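A minimal illustrative sketch (not part of the trial) of the within-patient change analysis summarized above, using the Wilcoxon signed-rank test named in the trial's statistical methods; the DLQI values below are hypothetical placeholders, not PSO-LONG data.

# Hedged sketch: paired change from baseline to Week 4 for a PRO score.
# All values are hypothetical placeholders, not trial data.
import numpy as np
from scipy import stats

baseline_dlqi = np.array([12, 9, 15, 8, 11, 14, 10, 13])
week4_dlqi = np.array([5, 3, 7, 2, 4, 6, 5, 4])

change = week4_dlqi - baseline_dlqi
print(f"Mean change (SD): {change.mean():.2f} ({change.std(ddof=1):.2f})")

# Paired, non-parametric test of the baseline-to-Week-4 change
stat, p_value = stats.wilcoxon(week4_dlqi, baseline_dlqi)
print(f"Wilcoxon signed-rank: statistic = {stat:.1f}, P = {p_value:.4f}")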
Maintenance phase
The PRO improvements were maintained over the next 52 weeks for both proactive and reactive management arms, across the three PRO assessment tools. Patients receiving proactive management showed significantly greater improvements in patient‐perceived symptom severity than patients receiving reactive management: the mean PSI AUC score was 15% higher for reactive management (5.74) than for proactive management (4.99) during the maintenance phase (difference −0.75; P = 0.0128) (Table 3). It is worth noting that the levels of participant engagement with the PSI questionnaire were low in the last 2 weeks of the maintenance phase. The analysis of the PSI total score was therefore based on the first 28 weeks of the maintenance phase.
Mean PSI, DLQI and EQ‐5D‐5L‐PSO AUC scores in proactive and reactive management arms and differences during maintenance phase (full analysis set; N = 521)
Proactive management also corresponded with significantly greater improvements in DLQI AUC scores; the mean DLQI score was 15% higher for reactive management (3.40) than proactive management (2.95) (difference −0.45; P = 0.007). Although the difference between proactive and reactive management did not result in significantly greater improvement in EQ‐5D‐5L‐PSO scores, the numerical difference favoured the proactive management arm; the mean EQ‐5D‐5L‐PSO AUC score was 1% higher for proactive management (0.89) than for reactive management (0.88) (difference 0.01, P = 0.0842). The mean scores in proactive and reactive management arms for PSI, DLQI and EQ‐5D‐5L‐PSO across each visit are shown in Fig. 1.
Mean scores in proactive and reactive management arms for (a) PSI, (b) DLQI, and (c) EQ‐5D across study visits (Full Analysis Set; N = 521).
Across both treatment arms, patients had improvements in symptoms and HRQoL during remission compared with the baseline flare and the start of a relapse (Table 4). Additionally, the mean change (95% confidence interval [CI]; p‐value) between the start of a relapse and remission was −2.28 (95% CI: −2.64 to −1.92; <0.0001) for PSI scores, −1.32 (95% CI: −1.60 to −1.04; <0.0001) for DLQI and 0.03 (95% CI: 0.02 to 0.04; <0.0001) for EQ‐5D‐5L‐PSO (Fig. 2).
PSI, DLQI and EQ‐5D‐5L‐PSO scores for baseline, remission and start of relapse (full analysis set; N = 521)
Distribution of AUC scores in proactive and reactive management arms for (a) PSI, (b) DLQI, and (c) EQ‐5D (Full Analysis Set; N = 521).
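A minimal illustrative sketch (not part of the trial) of the per-patient AUC summary referred to above: the trial's statistical methods describe integrating each PRO score over the maintenance phase with the trapezoidal rule and normalizing by the number of days in study. The visit days and scores below are hypothetical placeholders, not PSO-LONG data.

# Hedged sketch: normalized AUC of a PRO score over the maintenance phase.
# Values are hypothetical placeholders, not trial data.
import numpy as np

days = np.array([0, 28, 56, 84, 112, 140, 168])  # hypothetical visit days
dlqi = np.array([2, 3, 5, 2, 1, 4, 2])            # hypothetical DLQI scores

# Trapezoidal rule over the visit schedule
auc = np.sum((dlqi[1:] + dlqi[:-1]) / 2 * np.diff(days))

# Normalize by the number of days in study for this patient
normalized_auc = auc / (days[-1] - days[0])
print(f"Normalized DLQI AUC: {normalized_auc:.2f}")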
Conclusion
In this analysis, Cal/BD foam QD flare treatment was associated with significant improvements in PROs from baseline that were maintained with a twice‐weekly application through 52 weeks. In patients undergoing proactive management, DLQI and PSI scores were significantly improved vs. patients receiving reactive management, potentially due to the reduction in the number of relapses and increased time in remission over a year of exposure. Overall, the results of this analysis add to the original PSO‐LONG findings, suggesting that proactive management with fixed‐dose Cal/BD foam could offer not only improved long‐term control of psoriasis but also improved HRQoL and patient‐perceived symptom severity over conventional reactive treatment with Cal/BD foam.
[ "Introduction", "Study design", "Patient‐reported outcomes", "Statistical analyses", "Patient demographics", "Open‐label phase", "Maintenance phase" ]
[ "Psoriasis is a chronic, immune‐mediated disease, with primarily skin and joint symptoms.\n1\n, \n2\n The morphology, localization and severity of lesions in psoriasis can be highly variable.\n3\n Various genetic, environmental and immunological factors have been proposed as potential contributors to the pathophysiology of this disease.\n3\n Individuals with psoriasis have also been shown to have an elevated risk of cardiovascular disease, metabolic syndrome and diabetes, compared with the general population.\n4\n\n\nGlobally, the prevalence of psoriasis has been reported to vary between 0.09% and 11.43%, which corresponds to approximately 125 million affected people.\n5\n, \n6\n Despite ongoing efforts to improve the management of this condition, the burden of disease has been increasing steadily over the past decades.\n7\n Collectively, this renders psoriasis a significant health issue worldwide.\nPsoriasis can significantly influence a person's quality of life (QoL) and cause social stigmatization, physical disability and emotional distress.\n8\n Moreover, the impact of psoriasis on QoL is similar to that in patients with other chronic conditions such as cardiovascular disease, diabetes, end‐stage renal disease, liver disease and cancer.\n9\n Skin symptoms of psoriasis including scaling, itch and pain can significantly affect physical well‐being and limit daily activities, social contacts and (skin‐exposing) activities, and work.\n10\n Psoriasis has a greater psychological burden than any other dermatological condition and has been associated with impaired emotional functioning, a negative body and self‐image, depression, anxiety and suicide risk more than any other skin condition.\n10\n, \n11\n Other factors may also be attributable to the low QoL in psoriasis patients, including the chronic and recurring nature of the disease, lack of control and fear of unexpected breakout, and feeling of hopelessness in terms of cure.\n12\n Furthermore, duration and severity of psoriasis significantly decrease the QoL.\n13\n, \n14\n\n\nFor mild to moderate psoriasis, current treatment strategies commonly involve topical agents. For moderate to severe psoriasis, topical agents are often added to phototherapy and systemic or biologic agents.\n15\n, \n16\n, \n17\n Current management strategies mainly aim to clear active disease sites and prolong symptom‐free periods.\n18\n However, long‐term disease control is challenging, and patient satisfaction with available therapies remains low.\n19\n Moreover, psoriasis is often undertreated such that patients do not achieve substantial skin clearance, symptom relief or improvements in QoL.\n20\n, \n21\n\n\nAlthough skin clearance may be achievable for most patients in the short term, long‐term strategies are important to optimize adherence and long‐term outcomes including health‐related quality of life (HRQoL).\n22\n, \n23\n However, the majority of clinical data and guidance available for topical management of psoriasis is focused on short‐term use, with limited data on long‐term use.\n23\n\n\nTherefore, understanding the impact of treatment on HRQoL and patient‐perceived symptom severity in psoriasis is key to informing clinical decision‐making, improving clinical outcomes and quality of care. 
Patient‐reported outcomes measures (PROs) are invaluable tools to evaluate these effects and support clinical management.\n22\n, \n24\n\n\nThis post hoc analysis of the PSO‐LONG trial captured the effect on HRQoL and patient‐perceived symptom severity of treating psoriasis with fixed‐dose calcipotriene 50 µg/g and betamethasone dipropionate 0.5 mg/g (Cal/BD) foam topical treatment through a 52‐week period. Three PRO measures were used: EuroQoL 5‐Dimensional Questionnaire for Psoriasis (EQ‐5D‐5L‐PSO), the Dermatology Life Quality Index (DLQI) and the Psoriasis Symptom Inventory (PSI). The analysis aimed to evaluate the value of Cal/BD foam for flare treatment and long‐term management (proactive vs reactive) on PROs as well as compare results at baseline flare, start of a relapse and during remission.", "This post hoc analysis included the full analysis set (n = 521) from the Phase 3, randomized, double‐blind PSO‐LONG trial (NCT02899962). The PSO‐LONG trial assessed long‐term efficacy and safety of proactive management with twice‐weekly fixed‐dose combination Cal/BD foam versus reactive management with twice‐weekly vehicle in patients with psoriasis vulgaris. Eligible patients were aged ≥18 years and had a clinical diagnosis of truncal and/or limb psoriasis for at least 6 months involving 2–30% of the body surface area (BSA), a modified Psoriasis Area and Severity Index (mPASI) score of ≥2 and a physician’s global assessment of disease severity (PGA) score of at least ‘mild’ (PGA ≥ 2).\nThe trial included an initial 4‐week, open‐label phase of fixed‐dose combination Cal/BD foam once daily, followed by a 52‐week maintenance phase for patients who achieved a PGA score of 0 or 1 and an at least 2‐grade improvement after the initial 4 weeks. At the start of the maintenance phase, patients were randomized to either proactive management (Cal/BD foam twice weekly) or reactive management (vehicle foam twice weekly). Relapses (defined as at least ‘mild’, PGA ≥ 2) were treated with fixed‐dose combination Cal/BD foam once daily (QD) for 4 weeks. Remission was defined as ‘clear’ or ‘almost clear’, PGA 0/1. The full details of the PSO‐LONG trial study design\n25\n and the efficacy and safety results\n26\n are published elsewhere.", "Patients completed the EQ‐5D‐5L‐PSO, DLQI and PSI assessments. The EQ‐5D‐5L‐PSO questionnaire measures health status over five general dimensions (mobility, self‐care, usual activities, pain/discomfort and anxiety/depression) and two psoriasis‐related dimensions (skin irritation and self‐confidence).\n27\n Each dimension has five response levels, and a visual analogue scale allows patients to assess their health status with a score ranging from 0 (worst health) to 100 (best health). 
Responses to the questions can be converted into an index score ranging from 0.00 to 1.00, where a score of 0.00 indicates the worst health and 1.00 indicates full health.\nThe DLQI is a ten‐item questionnaire used to measure the impact of dermatological disorders on a patient’s HRQoL in the following six areas: symptoms and feelings; daily activities; leisure activities; work and school; personal relationships and treatment‐related distress.\n12\n, \n28\n Total scores range from 0–1 (‘no effect at all’) to 21–30 (‘extremely large effect’).\nThe PSI is an assessment of the severity of eight psoriasis‑related symptoms including itch, redness, scaling, burning, stinging, cracking, flaking and pain.\n29\n, \n30\n Scoring for each symptom ranges from ‘not at all severe’ (0) to ‘very severe’ (4), giving a total score range from 0 (no symptoms) to 32 (more severe symptoms).\nPatient‐reported outcomes were assessed across treatment arms at baseline (Visit 1), at the start of a relapse and during remission at all monthly scheduled and unscheduled visits. The DLQI and EQ‐5D‐5L‐PSO were completed at the trial site on an electronic slate/tablet. To ensure unbiased answers for questionnaires that were completed onsite, the PROs were collected prior to any other assessments. The PSI was completed on an eDiary device, provided for the participants for use at home. The participants were asked to complete the PSI daily during the open‐label treatment phase (starting at Visit 1), then weekly during the first 28 weeks of the maintenance phase (Weeks 4–28) and the last 2 weeks of the maintenance phase (Weeks 54–56 – only applicable for those who completed the PSI).", "The statistical analyses were performed on the full analysis set (N = 521). Patient‐reported outcome results were collected within treatment arms at each visit. The integrated area under the curve (AUC) in proactive and reactive management arms during the maintenance phase was calculated for each PRO using the trapezoidal rule and subsequently normalized by the number of days in study for each patient. Additionally, PROs were assessed across treatment arms at baseline, at the start of a relapse and during remission at unscheduled and scheduled visits. Missing assessment of PRO scores in‐between non‐missing assessments in the maintenance phase was not imputed. The P‐value for treatment changes were assessed by using the Wilcoxon signed rank sum test. Differences across treatment arms were considered significant at P < 0.05.", "A total of 521 patients were included in the full analysis set used in this post hoc analysis. Patients were predominantly male (67.4%) and white (90.2%). The mean age was approximately 52.3 years. The majority of patients had moderate baseline PGA scores (85.2%). Patient characteristics are summarized in Table 1.\nDemographic and baseline characteristics of randomized patients (full analysis set; N = 521)\nDemographic and baseline characteristics\n(maintenance full analysis set; N = 521)", "Initial flare treatment with Cal/BD foam QD during the open‐label phase led to statistically and clinically significant improvements across all PRO measures (Table 2). 
The mean difference from baseline to Week 4 was −8.97 (standard deviation [SD] = 6.18; P < 0.0001) for PSI scores, −6.02 (SD = 5.46; P < 0.0001) for DLQI scores and 0.11 (SD = 0.15; P < 0.0001) for EQ‐5D‐5L‐PSO scores.\nChanges in PSI, DLQI and EQ‐5D‐5L‐PSO scores in flare treatment from baseline to Week 4 (full analysis set; N = 521)\nDifference calculated from participants with both baseline and Week 4 scores.", "The PRO improvements were maintained over the next 52 weeks for both proactive and reactive management arms, across the three PRO assessment tools. Patients receiving proactive management showed significantly greater improvements in patient‐perceived symptom severity than patients receiving reactive management: the mean PSI AUC score was 15% higher for reactive management (5.74) than for proactive management (4.99) during the maintenance phase (difference −0.75; P = 0.0128) (Table 3). It is worth noting that the levels of participant engagement with the PSI questionnaire were low in the last 2 weeks of the maintenance phase. The analysis of the PSI total score was therefore based on the first 28 weeks of the maintenance phase.\nMean PSI, DLQI and EQ‐5D‐5L‐PSO AUC scores in proactive and reactive management arms and differences during maintenance phase (full analysis set; N = 521)\nProactive management also corresponded with significantly greater improvements in DLQI AUC scores; the mean DLQI score was 15% higher for reactive management (3.40) than proactive management (2.95) (difference −0.45; P = 0.007). Although the difference between proactive and reactive management did not result in significantly greater improvement in EQ‐5D‐5L‐PSO scores, the numerical difference favoured the proactive management arm; the mean EQ‐5D‐5L‐PSO AUC score was 1% higher for proactive management (0.89) than for reactive management (0.88) (difference 0.01, P = 0.0842). The mean scores in proactive and reactive management arms for PSI, DLQI and EQ‐5D‐5L‐PSO across each visit are shown in Fig. 1.\nMean scores in proactive and reactive management arms for (a) PSI, (b) DLQI, and (c) EQ‐5D across study visits (Full Analysis Set; N = 521).\nAcross both treatment arms, patients had improvements in symptoms and HRQoL during remission compared with the baseline flare and the start of a relapse (Table 4). Additionally, the mean change (95% confidence interval [CI]; p‐value) between the start of a relapse and remission was −2.28 (95% CI: −2.64 to −1.92; <0.0001) for PSI scores, −1.32 (95% CI: −1.60 to −1.04; <0.0001) for DLQI and 0.03 (95% CI: 0.02 to 0.04; <0.0001) for EQ‐5D‐5L‐PSO (Fig. 2).\nPSI, DLQI and EQ‐5D‐5L‐PSO scores for baseline, remission and start of relapse (full analysis set; N = 521)\nDistribution of AUC scores in proactive and reactive management arms for (a) PSI, (b) DLQI, and (c) EQ‐5D (Full Analysis Set; N = 521)." ]
[ null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Study design", "Patient‐reported outcomes", "Statistical analyses", "Results", "Patient demographics", "Open‐label phase", "Maintenance phase", "Discussion", "Conclusion" ]
[ "Psoriasis is a chronic, immune‐mediated disease, with primarily skin and joint symptoms.\n1\n, \n2\n The morphology, localization and severity of lesions in psoriasis can be highly variable.\n3\n Various genetic, environmental and immunological factors have been proposed as potential contributors to the pathophysiology of this disease.\n3\n Individuals with psoriasis have also been shown to have an elevated risk of cardiovascular disease, metabolic syndrome and diabetes, compared with the general population.\n4\n\n\nGlobally, the prevalence of psoriasis has been reported to vary between 0.09% and 11.43%, which corresponds to approximately 125 million affected people.\n5\n, \n6\n Despite ongoing efforts to improve the management of this condition, the burden of disease has been increasing steadily over the past decades.\n7\n Collectively, this renders psoriasis a significant health issue worldwide.\nPsoriasis can significantly influence a person's quality of life (QoL) and cause social stigmatization, physical disability and emotional distress.\n8\n Moreover, the impact of psoriasis on QoL is similar to that in patients with other chronic conditions such as cardiovascular disease, diabetes, end‐stage renal disease, liver disease and cancer.\n9\n Skin symptoms of psoriasis including scaling, itch and pain can significantly affect physical well‐being and limit daily activities, social contacts and (skin‐exposing) activities, and work.\n10\n Psoriasis has a greater psychological burden than any other dermatological condition and has been associated with impaired emotional functioning, a negative body and self‐image, depression, anxiety and suicide risk more than any other skin condition.\n10\n, \n11\n Other factors may also be attributable to the low QoL in psoriasis patients, including the chronic and recurring nature of the disease, lack of control and fear of unexpected breakout, and feeling of hopelessness in terms of cure.\n12\n Furthermore, duration and severity of psoriasis significantly decrease the QoL.\n13\n, \n14\n\n\nFor mild to moderate psoriasis, current treatment strategies commonly involve topical agents. For moderate to severe psoriasis, topical agents are often added to phototherapy and systemic or biologic agents.\n15\n, \n16\n, \n17\n Current management strategies mainly aim to clear active disease sites and prolong symptom‐free periods.\n18\n However, long‐term disease control is challenging, and patient satisfaction with available therapies remains low.\n19\n Moreover, psoriasis is often undertreated such that patients do not achieve substantial skin clearance, symptom relief or improvements in QoL.\n20\n, \n21\n\n\nAlthough skin clearance may be achievable for most patients in the short term, long‐term strategies are important to optimize adherence and long‐term outcomes including health‐related quality of life (HRQoL).\n22\n, \n23\n However, the majority of clinical data and guidance available for topical management of psoriasis is focused on short‐term use, with limited data on long‐term use.\n23\n\n\nTherefore, understanding the impact of treatment on HRQoL and patient‐perceived symptom severity in psoriasis is key to informing clinical decision‐making, improving clinical outcomes and quality of care. 
Patient‐reported outcomes measures (PROs) are invaluable tools to evaluate these effects and support clinical management.\n22\n, \n24\n\n\nThis post hoc analysis of the PSO‐LONG trial captured the effect on HRQoL and patient‐perceived symptom severity of treating psoriasis with fixed‐dose calcipotriene 50 µg/g and betamethasone dipropionate 0.5 mg/g (Cal/BD) foam topical treatment through a 52‐week period. Three PRO measures were used: EuroQoL 5‐Dimensional Questionnaire for Psoriasis (EQ‐5D‐5L‐PSO), the Dermatology Life Quality Index (DLQI) and the Psoriasis Symptom Inventory (PSI). The analysis aimed to evaluate the value of Cal/BD foam for flare treatment and long‐term management (proactive vs reactive) on PROs as well as compare results at baseline flare, start of a relapse and during remission.", "Study design This post hoc analysis included the full analysis set (n = 521) from the Phase 3, randomized, double‐blind PSO‐LONG trial (NCT02899962). The PSO‐LONG trial assessed long‐term efficacy and safety of proactive management with twice‐weekly fixed‐dose combination Cal/BD foam versus reactive management with twice‐weekly vehicle in patients with psoriasis vulgaris. Eligible patients were aged ≥18 years and had a clinical diagnosis of truncal and/or limb psoriasis for at least 6 months involving 2–30% of the body surface area (BSA), a modified Psoriasis Area and Severity Index (mPASI) score of ≥2 and a physician’s global assessment of disease severity (PGA) score of at least ‘mild’ (PGA ≥ 2).\nThe trial included an initial 4‐week, open‐label phase of fixed‐dose combination Cal/BD foam once daily, followed by a 52‐week maintenance phase for patients who achieved a PGA score of 0 or 1 and an at least 2‐grade improvement after the initial 4 weeks. At the start of the maintenance phase, patients were randomized to either proactive management (Cal/BD foam twice weekly) or reactive management (vehicle foam twice weekly). Relapses (defined as at least ‘mild’, PGA ≥ 2) were treated with fixed‐dose combination Cal/BD foam once daily (QD) for 4 weeks. Remission was defined as ‘clear’ or ‘almost clear’, PGA 0/1. The full details of the PSO‐LONG trial study design\n25\n and the efficacy and safety results\n26\n are published elsewhere.\nThis post hoc analysis included the full analysis set (n = 521) from the Phase 3, randomized, double‐blind PSO‐LONG trial (NCT02899962). The PSO‐LONG trial assessed long‐term efficacy and safety of proactive management with twice‐weekly fixed‐dose combination Cal/BD foam versus reactive management with twice‐weekly vehicle in patients with psoriasis vulgaris. Eligible patients were aged ≥18 years and had a clinical diagnosis of truncal and/or limb psoriasis for at least 6 months involving 2–30% of the body surface area (BSA), a modified Psoriasis Area and Severity Index (mPASI) score of ≥2 and a physician’s global assessment of disease severity (PGA) score of at least ‘mild’ (PGA ≥ 2).\nThe trial included an initial 4‐week, open‐label phase of fixed‐dose combination Cal/BD foam once daily, followed by a 52‐week maintenance phase for patients who achieved a PGA score of 0 or 1 and an at least 2‐grade improvement after the initial 4 weeks. At the start of the maintenance phase, patients were randomized to either proactive management (Cal/BD foam twice weekly) or reactive management (vehicle foam twice weekly). Relapses (defined as at least ‘mild’, PGA ≥ 2) were treated with fixed‐dose combination Cal/BD foam once daily (QD) for 4 weeks. 
Remission was defined as ‘clear’ or ‘almost clear’, PGA 0/1. The full details of the PSO‐LONG trial study design\n25\n and the efficacy and safety results\n26\n are published elsewhere.\nPatient‐reported outcomes Patients completed the EQ‐5D‐5L‐PSO, DLQI and PSI assessments. The EQ‐5D‐5L‐PSO questionnaire measures health status over five general dimensions (mobility, self‐care, usual activities, pain/discomfort and anxiety/depression) and two psoriasis‐related dimensions (skin irritation and self‐confidence).\n27\n Each dimension has five response levels, and a visual analogue scale allows patients to assess their health status with a score ranging from 0 (worst health) to 100 (best health). Responses to the questions can be converted into an index score ranging from 0.00 to 1.00, where a score of 0.00 indicates the worst health and 1.00 indicates full health.\nThe DLQI is a ten‐item questionnaire used to measure the impact of dermatological disorders on a patient’s HRQoL in the following six areas: symptoms and feelings; daily activities; leisure activities; work and school; personal relationships and treatment‐related distress.\n12\n, \n28\n Total scores range from 0–1 (‘no effect at all’) to 21–30 (‘extremely large effect’).\nThe PSI is an assessment of the severity of eight psoriasis‑related symptoms including itch, redness, scaling, burning, stinging, cracking, flaking and pain.\n29\n, \n30\n Scoring for each symptom ranges from ‘not at all severe’ (0) to ‘very severe’ (4), giving a total score range from 0 (no symptoms) to 32 (more severe symptoms).\nPatient‐reported outcomes were assessed across treatment arms at baseline (Visit 1), at the start of a relapse and during remission at all monthly scheduled and unscheduled visits. The DLQI and EQ‐5D‐5L‐PSO were completed at the trial site on an electronic slate/tablet. To ensure unbiased answers for questionnaires that were completed onsite, the PROs were collected prior to any other assessments. The PSI was completed on an eDiary device, provided for the participants for use at home. The participants were asked to complete the PSI daily during the open‐label treatment phase (starting at Visit 1), then weekly during the first 28 weeks of the maintenance phase (Weeks 4–28) and the last 2 weeks of the maintenance phase (Weeks 54–56 – only applicable for those who completed the PSI).\nPatients completed the EQ‐5D‐5L‐PSO, DLQI and PSI assessments. The EQ‐5D‐5L‐PSO questionnaire measures health status over five general dimensions (mobility, self‐care, usual activities, pain/discomfort and anxiety/depression) and two psoriasis‐related dimensions (skin irritation and self‐confidence).\n27\n Each dimension has five response levels, and a visual analogue scale allows patients to assess their health status with a score ranging from 0 (worst health) to 100 (best health). 
Statistical analyses

The statistical analyses were performed on the full analysis set (N = 521). Patient-reported outcome results were collected within treatment arms at each visit. For each PRO, the integrated area under the curve (AUC) during the maintenance phase was calculated for the proactive and reactive management arms using the trapezoidal rule and then normalized by each patient's number of days in the study. Additionally, PROs were assessed across treatment arms at baseline, at the start of a relapse and during remission at scheduled and unscheduled visits. Missing PRO assessments occurring between non-missing assessments in the maintenance phase were not imputed. P-values for treatment changes were calculated with the Wilcoxon signed-rank test, and differences across treatment arms were considered significant at P < 0.05.
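A minimal sketch of the two computations described in this section, assuming a per-patient series of visit days and PRO scores; the example values and the normalisation by first-to-last study day are assumptions, not the trial's actual analysis code.

```python
# Sketch of the AUC summary and the paired Wilcoxon test described above
# (hypothetical data; not the trial's analysis code).
import numpy as np
from scipy.stats import wilcoxon

def normalized_auc(days, scores):
    """Trapezoidal area under a patient's PRO score-time curve,
    normalised by that patient's number of days in the study."""
    days, scores = np.asarray(days, float), np.asarray(scores, float)
    return np.trapz(scores, days) / (days[-1] - days[0])

# One hypothetical patient: PSI totals at maintenance-phase visits (days from randomization)
visit_days = [0, 28, 56, 84, 112]
psi_scores = [4, 6, 3, 8, 5]
print(f"normalised PSI AUC = {normalized_auc(visit_days, psi_scores):.2f}")

# Within-arm change from baseline to Week 4 (paired Wilcoxon signed-rank test)
baseline = np.array([18.0, 22.0, 15.0, 20.0, 17.0, 25.0])   # hypothetical PSI totals
week4    = np.array([ 9.0, 12.0,  6.0, 11.0,  8.0, 13.0])
stat, p = wilcoxon(week4 - baseline)
print(f"median change = {np.median(week4 - baseline):.1f}, P = {p:.4f}")
```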
Results

Patient demographics

A total of 521 patients were included in the full analysis set used in this post hoc analysis. Patients were predominantly male (67.4%) and white (90.2%), and the mean age was approximately 52.3 years. The majority of patients had a moderate baseline PGA score (85.2%). Patient characteristics are summarized in Table 1.

Table 1. Demographic and baseline characteristics of randomized patients (full analysis set; N = 521).

Open-label phase

Initial flare treatment with Cal/BD foam QD during the open-label phase led to statistically and clinically significant improvements across all PRO measures (Table 2). The mean difference from baseline to Week 4 was −8.97 (standard deviation [SD] 6.18; P < 0.0001) for PSI scores, −6.02 (SD 5.46; P < 0.0001) for DLQI scores and 0.11 (SD 0.15; P < 0.0001) for EQ-5D-5L-PSO scores.

Table 2. Changes in PSI, DLQI and EQ-5D-5L-PSO scores during flare treatment from baseline to Week 4 (full analysis set; N = 521). Differences were calculated from participants with both baseline and Week 4 scores.
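The footnote to Table 2 states that changes were computed only for participants with both a baseline and a Week 4 score. Below is a small sketch of that complete-case rule using a hypothetical long-format data frame; the column names and values are assumptions for illustration.

```python
# Sketch of the complete-case rule in the Table 2 footnote (hypothetical data).
import pandas as pd

scores = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3],
    "visit":   ["baseline", "week4", "baseline", "week4", "baseline"],
    "psi":     [18, 9, 22, 12, 15],
})
wide = scores.pivot(index="patient", columns="visit", values="psi")
paired = wide.dropna(subset=["baseline", "week4"])    # keep complete pairs only
change = paired["week4"] - paired["baseline"]
print(change.mean(), change.std(ddof=1))              # mean (SD) change, as in Table 2
```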
Maintenance phase

The PRO improvements were maintained over the following 52 weeks in both the proactive and reactive management arms, across all three PRO assessment tools. Patients receiving proactive management showed significantly greater improvements in patient-perceived symptom severity than patients receiving reactive management: the mean PSI AUC score (where higher values indicate greater symptom burden) was 15% higher for reactive management (5.74) than for proactive management (4.99) during the maintenance phase (difference −0.75; P = 0.0128) (Table 3). Of note, participant engagement with the PSI questionnaire was low in the last 2 weeks of the maintenance phase, so the analysis of the PSI total score was based on the first 28 weeks of the maintenance phase.

Table 3. Mean PSI, DLQI and EQ-5D-5L-PSO AUC scores in the proactive and reactive management arms, and differences during the maintenance phase (full analysis set; N = 521).

Proactive management was also associated with significantly greater improvement in DLQI AUC scores: the mean DLQI AUC score was 15% higher for reactive management (3.40) than for proactive management (2.95) (difference −0.45; P = 0.007). The difference in EQ-5D-5L-PSO AUC scores did not reach statistical significance, but the numerical difference favoured proactive management: the mean EQ-5D-5L-PSO AUC score was 1% higher for proactive management (0.89) than for reactive management (0.88) (difference 0.01; P = 0.0842). Mean scores in the proactive and reactive management arms for PSI, DLQI and EQ-5D-5L-PSO at each visit are shown in Fig. 1.

Figure 1. Mean scores in the proactive and reactive management arms for (a) PSI, (b) DLQI and (c) EQ-5D across study visits (full analysis set; N = 521).

Across both treatment arms, patients had improvements in symptoms and HRQoL during remission compared with the baseline flare and the start of a relapse (Table 4). The mean change (95% confidence interval [CI]) between the start of a relapse and remission was −2.28 (−2.64 to −1.92; P < 0.0001) for PSI scores, −1.32 (−1.60 to −1.04; P < 0.0001) for DLQI scores and 0.03 (0.02 to 0.04; P < 0.0001) for EQ-5D-5L-PSO scores (Fig. 2).

Table 4. PSI, DLQI and EQ-5D-5L-PSO scores at baseline, during remission and at the start of a relapse (full analysis set; N = 521).

Figure 2. Distribution of AUC scores in the proactive and reactive management arms for (a) PSI, (b) DLQI and (c) EQ-5D (full analysis set; N = 521).
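As a quick, purely arithmetic check (not part of the published analysis), the relative differences quoted above follow from the reported mean AUC values when each difference is expressed relative to the lower of the two arm means:

```python
# Arithmetic check of the quoted relative AUC differences, using the
# (reactive, proactive) means reported above; illustration only.
reported = {
    "PSI":          (5.74, 4.99),
    "DLQI":         (3.40, 2.95),
    "EQ-5D-5L-PSO": (0.88, 0.89),   # for EQ-5D the higher (better) mean is the proactive arm
}
for pro, (reactive, proactive) in reported.items():
    rel = abs(reactive - proactive) / min(reactive, proactive) * 100
    print(f"{pro}: {rel:.0f}% relative difference")   # -> 15%, 15%, 1%
```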
Discussion

PSO-LONG was the first randomized, double-blind, 52-week clinical trial to evaluate the long-term safety and efficacy of a proactive management strategy [25]. This post hoc analysis evaluated the effect of Cal/BD foam on PROs during flare treatment and long-term management (proactive vs. reactive), and compared results at the baseline flare, at the start of a relapse and during remission. To our knowledge, it is the first analysis to capture the effect on HRQoL of treating psoriasis with Cal/BD foam over a 52-week period.

In this post hoc analysis, patients' HRQoL improved considerably with Cal/BD foam QD flare treatment, as demonstrated by significant changes in DLQI, EQ-5D-5L-PSO and PSI scores at randomization (end of flare) versus baseline (start of flare). It is worth noting, however, that patients who did not achieve treatment success at the end of the open-label phase were discontinued from the study; those included in the maintenance phase had therefore already been shown to respond to Cal/BD foam.

Following resolution of the initial flare, the improvements in DLQI, EQ-5D-5L-PSO and PSI were maintained through the 52 weeks with both proactive and reactive management. Patients assigned to proactive management had significantly better DLQI and PSI scores, and numerically better EQ-5D-5L-PSO scores, than those receiving reactive management. This could be attributed to dermatology-specific and psoriasis-specific questionnaires having a greater capacity for differentiation and sensitivity to changes in HRQoL than generic measures such as the EuroQol-5D [31].

The baseline flare was associated with worse PROs than the start of a relapse. This may be because the baseline flare represents an untreated flare, whereas the start of a relapse represents a flare occurring during proactive or reactive management. Patients in relapse also had poorer HRQoL and greater patient-perceived symptom severity than patients in remission, indicating that relapses have a substantial impact on patients' HRQoL.
In the PSO-LONG trial, the rate ratio of relapses for proactive versus reactive management was 0.54 (95% CI: 0.46–0.63; P < 0.001), and the predicted number of relapses per year of exposure was 3.1 with proactive management versus 4.8 with reactive management, with proactive management providing 41 additional days in remission per year [26]. The improvements in HRQoL and patient-perceived symptom severity observed with proactive management relative to reactive management can therefore be attributed to this reduction in the number of relapses and the increased time in remission over a year of exposure.

Although skin clearance may be achievable for most patients in the short term, long-term strategies are important to optimize adherence and long-term outcomes, including HRQoL [22, 23]. However, most clinical data and guidance available for topical agents focus on short-term use, with limited guidance or clinical data on long-term use [23]. Currently, long-term topical management of psoriasis follows a reactive approach, responding to disease relapses, rather than a proactive approach aimed at maintaining remission. In the PSO-LONG trial, the incidence of adverse events in the maintenance phase was similar between treatment groups and similar to the incidence reported after 12 weeks of treatment with Cal/BD foam QD, supporting the long-term safety and tolerability of proactive management with topical agents [26]. Although this analysis has inherent limitations related to its post hoc nature, the results of the PSO-LONG trial warrant further research into proactive topical treatment for the long-term management of psoriasis, including in real-world clinical settings.

Conclusion

In this analysis, Cal/BD foam QD flare treatment was associated with significant improvements in PROs from baseline that were maintained with twice-weekly application through 52 weeks. In patients undergoing proactive management, DLQI and PSI scores were significantly improved versus patients receiving reactive management, potentially because of the reduction in the number of relapses and the increased time in remission over a year of exposure.

Overall, the results of this analysis add to the original PSO-LONG findings, suggesting that proactive management with fixed-dose Cal/BD foam could offer not only improved long-term control of psoriasis but also improved HRQoL and patient-perceived symptom severity compared with conventional reactive treatment with Cal/BD foam.
[ "betamethasone dipropionate", "calcipotriol", "psoriasis", "topical administration" ]
Introduction

Psoriasis is a chronic, immune-mediated disease with primarily skin and joint symptoms [1, 2]. The morphology, localization and severity of psoriasis lesions can be highly variable [3]. Various genetic, environmental and immunological factors have been proposed as potential contributors to the pathophysiology of the disease [3]. Individuals with psoriasis have also been shown to have an elevated risk of cardiovascular disease, metabolic syndrome and diabetes compared with the general population [4]. Globally, the prevalence of psoriasis has been reported to vary between 0.09% and 11.43%, corresponding to approximately 125 million affected people [5, 6]. Despite ongoing efforts to improve the management of this condition, the burden of disease has increased steadily over the past decades [7]. Collectively, this renders psoriasis a significant health issue worldwide.

Psoriasis can significantly influence a person's quality of life (QoL) and cause social stigmatization, physical disability and emotional distress [8]. Moreover, the impact of psoriasis on QoL is similar to that of other chronic conditions such as cardiovascular disease, diabetes, end-stage renal disease, liver disease and cancer [9]. Skin symptoms of psoriasis, including scaling, itch and pain, can significantly affect physical well-being and limit daily activities, social contacts, skin-exposing activities and work [10]. Psoriasis carries a greater psychological burden than any other dermatological condition and has been associated with impaired emotional functioning, a negative body- and self-image, depression, anxiety and suicide risk [10, 11]. Other factors may also contribute to the low QoL of psoriasis patients, including the chronic and recurring nature of the disease, lack of control and fear of unexpected breakouts, and a feeling of hopelessness regarding cure [12]. Furthermore, the duration and severity of psoriasis significantly decrease QoL [13, 14].

For mild to moderate psoriasis, current treatment strategies commonly involve topical agents. For moderate to severe psoriasis, topical agents are often added to phototherapy and systemic or biologic agents [15, 16, 17]. Current management strategies mainly aim to clear active disease sites and prolong symptom-free periods [18]. However, long-term disease control is challenging, and patient satisfaction with available therapies remains low [19]. Moreover, psoriasis is often undertreated, such that patients do not achieve substantial skin clearance, symptom relief or improvements in QoL [20, 21]. Although skin clearance may be achievable for most patients in the short term, long-term strategies are important to optimize adherence and long-term outcomes, including health-related quality of life (HRQoL) [22, 23]. However, the majority of clinical data and guidance available for topical management of psoriasis is focused on short-term use, with limited data on long-term use [23]. Therefore, understanding the impact of treatment on HRQoL and patient-perceived symptom severity in psoriasis is key to informing clinical decision-making, improving clinical outcomes and improving quality of care. Patient-reported outcome measures (PROs) are invaluable tools to evaluate these effects and support clinical management [22, 24].
Abstract

Background: Psoriasis has important physical and psychosocial effects that extend beyond the skin. Understanding the impact of treatment on health-related quality of life (HRQoL) and patient-perceived symptom severity in psoriasis is key to clinical decision-making. Methods: Five hundred and twenty-one patients from the Phase 3, randomized, double-blind PSO-LONG trial were included. An initial 4-week, open-label phase of fixed-dose combination Cal/BD foam once daily (QD) was followed by a 52-week maintenance phase, at the start of which patients were randomized to a proactive management arm (Cal/BD foam twice weekly) or reactive management arm (vehicle foam twice weekly). Patient-perceived symptom severity and HRQoL were assessed using the Psoriasis Symptom Inventory (PSI), the Dermatology Life Quality Index (DLQI) and the EuroQol-5D for psoriasis (EQ-5D-5L-PSO). Results: Statistically and clinically significant improvements were observed across all PRO measures. The mean difference (standard deviation) from baseline to Week 4 was -8.97 (6.18) for PSI, -6.02 (5.46) for DLQI and 0.11 (0.15) for EQ-5D-5L-PSO scores. During maintenance, patients receiving reactive management had significantly higher DLQI (15% [p = 0.007]) and PSI (15% [p = 0.0128]) and a numerically lower EQ-5D-5L-PSO mean area under the curve score than patients receiving proactive management (1% [p = 0.0842]). Conclusions: Cal/BD foam significantly improved DLQI, EQ-5D-5L-PSO and PSI scores during the open-label and maintenance phases. Patients assigned to proactive management had significantly better DLQI and PSI scores and numerically better EQ-5D-5L-PSO versus reactive management. Additionally, baseline flare was associated with worse PROs than the start of a relapse, and patients starting a relapse also had worse PROs than patients in remission.
Introduction: Psoriasis is a chronic, immune‐mediated disease, with primarily skin and joint symptoms. 1 , 2 The morphology, localization and severity of lesions in psoriasis can be highly variable. 3 Various genetic, environmental and immunological factors have been proposed as potential contributors to the pathophysiology of this disease. 3 Individuals with psoriasis have also been shown to have an elevated risk of cardiovascular disease, metabolic syndrome and diabetes, compared with the general population. 4 Globally, the prevalence of psoriasis has been reported to vary between 0.09% and 11.43%, which corresponds to approximately 125 million affected people. 5 , 6 Despite ongoing efforts to improve the management of this condition, the burden of disease has been increasing steadily over the past decades. 7 Collectively, this renders psoriasis a significant health issue worldwide. Psoriasis can significantly influence a person's quality of life (QoL) and cause social stigmatization, physical disability and emotional distress. 8 Moreover, the impact of psoriasis on QoL is similar to that in patients with other chronic conditions such as cardiovascular disease, diabetes, end‐stage renal disease, liver disease and cancer. 9 Skin symptoms of psoriasis including scaling, itch and pain can significantly affect physical well‐being and limit daily activities, social contacts and (skin‐exposing) activities, and work. 10 Psoriasis has a greater psychological burden than any other dermatological condition and has been associated with impaired emotional functioning, a negative body and self‐image, depression, anxiety and suicide risk more than any other skin condition. 10 , 11 Other factors may also be attributable to the low QoL in psoriasis patients, including the chronic and recurring nature of the disease, lack of control and fear of unexpected breakout, and feeling of hopelessness in terms of cure. 12 Furthermore, duration and severity of psoriasis significantly decrease the QoL. 13 , 14 For mild to moderate psoriasis, current treatment strategies commonly involve topical agents. For moderate to severe psoriasis, topical agents are often added to phototherapy and systemic or biologic agents. 15 , 16 , 17 Current management strategies mainly aim to clear active disease sites and prolong symptom‐free periods. 18 However, long‐term disease control is challenging, and patient satisfaction with available therapies remains low. 19 Moreover, psoriasis is often undertreated such that patients do not achieve substantial skin clearance, symptom relief or improvements in QoL. 20 , 21 Although skin clearance may be achievable for most patients in the short term, long‐term strategies are important to optimize adherence and long‐term outcomes including health‐related quality of life (HRQoL). 22 , 23 However, the majority of clinical data and guidance available for topical management of psoriasis is focused on short‐term use, with limited data on long‐term use. 23 Therefore, understanding the impact of treatment on HRQoL and patient‐perceived symptom severity in psoriasis is key to informing clinical decision‐making, improving clinical outcomes and quality of care. Patient‐reported outcomes measures (PROs) are invaluable tools to evaluate these effects and support clinical management. 
22 , 24 This post hoc analysis of the PSO‐LONG trial captured the effect on HRQoL and patient‐perceived symptom severity of treating psoriasis with fixed‐dose calcipotriene 50 µg/g and betamethasone dipropionate 0.5 mg/g (Cal/BD) foam topical treatment through a 52‐week period. Three PRO measures were used: EuroQoL 5‐Dimensional Questionnaire for Psoriasis (EQ‐5D‐5L‐PSO), the Dermatology Life Quality Index (DLQI) and the Psoriasis Symptom Inventory (PSI). The analysis aimed to evaluate the value of Cal/BD foam for flare treatment and long‐term management (proactive vs reactive) on PROs as well as compare results at baseline flare, start of a relapse and during remission. Conclusion: In this analysis, Cal/BD foam QD flare treatment was associated with significant improvements in PROs from baseline that were maintained with a twice‐weekly application through 52 weeks. In patients undergoing proactive management, DLQI and PSI scores were significantly improved vs. patients receiving reactive management, potentially due to the reduction in the number of relapses and increased time in remission over a year of exposure. Overall, the results of this analysis add to the original PSO‐LONG findings, suggesting that proactive management with fixed‐dose Cal/BD foam could offer not only improved long‐term control of psoriasis but also improved HRQoL and patient‐perceived symptom severity over conventional reactive treatment with Cal/BD foam.
Background: Psoriasis has important physical and psychosocial effects that extend beyond the skin. Understanding the impact of treatment on health-related quality of life (HRQoL) and patient-perceived symptom severity in psoriasis is key to clinical decision-making. Methods: Five hundred and twenty-one patients from the Phase 3, randomized, double-blind PSO-LONG trial were included. An initial 4-week, open-label phase of fixed-dose combination Cal/BD foam once daily (QD) was followed by a 52-week maintenance phase, at the start of which patients were randomized to a proactive management arm (Cal/BD foam twice weekly) or reactive management arm (vehicle foam twice weekly). Patient-perceived symptom severity and HRQoL were assessed using the Psoriasis Symptom Inventory (PSI), the Dermatology Life Quality Index (DLQI) and the EuroQol-5D for psoriasis (EQ-5D-5L-PSO). Results: Statistically and clinically significant improvements were observed across all PRO measures. The mean difference (standard deviation) from baseline to Week 4 was -8.97 (6.18) for PSI, -6.02 (5.46) for DLQI and 0.11 (0.15) for EQ-5D-5L-PSO scores. During maintenance, patients receiving reactive management had significantly higher DLQI (15% [p = 0.007]) and PSI (15% [p = 0.0128]) and a numerically lower EQ-5D-5L-PSO mean area under the curve score than patients receiving proactive management (1% [p = 0.0842]). Conclusions: Cal/BD foam significantly improved DLQI, EQ-5D-5L-PSO and PSI scores during the open-label and maintenance phases. Patients assigned to proactive management had significantly better DLQI and PSI scores and numerically better EQ-5D-5L-PSO versus reactive management. Additionally, baseline flare was associated with worse PROs than the start of a relapse, and patients starting a relapse also had worse PROs than patients in remission.
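The between-arm comparison in the abstract above is expressed as a mean area-under-the-curve (AUC) of the PRO scores over the maintenance phase. The exact AUC derivation used in PSO-LONG is not described in this excerpt, so the sketch below only illustrates one plausible reading of such an endpoint — a trapezoidal AUC of DLQI scores over time, normalized by the follow-up length — using invented visit data; it is not the trial's actual algorithm.

```python
def mean_auc(weeks, scores):
    """Trapezoidal AUC of a score-time curve divided by the follow-up length.

    This yields a time-averaged score over the maintenance phase; it is one
    plausible reading of a 'mean AUC' PRO endpoint, not the trial's definition.
    """
    if len(weeks) != len(scores) or len(weeks) < 2:
        raise ValueError("need matching week/score sequences with at least two visits")
    auc = 0.0
    for i in range(1, len(weeks)):
        auc += (scores[i] + scores[i - 1]) / 2.0 * (weeks[i] - weeks[i - 1])
    return auc / (weeks[-1] - weeks[0])


# Invented DLQI trajectories for one patient under each strategy (weeks 0-52).
visit_weeks = [0, 8, 16, 24, 36, 52]
dlqi_proactive = [3, 2, 2, 3, 2, 2]   # hypothetical scores
dlqi_reactive = [3, 4, 3, 5, 4, 4]    # hypothetical scores

print("proactive mean AUC:", round(mean_auc(visit_weeks, dlqi_proactive), 2))
print("reactive mean AUC: ", round(mean_auc(visit_weeks, dlqi_reactive), 2))
```

A lower time-averaged DLQI corresponds to less HRQoL impairment, which is the direction of the proactive-versus-reactive difference reported above.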
6,447
375
[ 733, 295, 423, 144, 103, 138, 514 ]
11
[ "management", "proactive", "patients", "psi", "scores", "pso", "reactive", "phase", "analysis", "reactive management" ]
[ "prevalence psoriasis reported", "severity psoriasis significantly", "disease individuals psoriasis", "psoriasis highly variable", "psoriasis significant health" ]
null
[CONTENT] betamethasone dipropionate | calcipotriol | psoriasis | topical administration [SUMMARY]
null
[CONTENT] betamethasone dipropionate | calcipotriol | psoriasis | topical administration [SUMMARY]
[CONTENT] betamethasone dipropionate | calcipotriol | psoriasis | topical administration [SUMMARY]
[CONTENT] betamethasone dipropionate | calcipotriol | psoriasis | topical administration [SUMMARY]
[CONTENT] betamethasone dipropionate | calcipotriol | psoriasis | topical administration [SUMMARY]
[CONTENT] Betamethasone | Dermatologic Agents | Drug Combinations | Humans | Psoriasis | Quality of Life | Treatment Outcome [SUMMARY]
null
[CONTENT] Betamethasone | Dermatologic Agents | Drug Combinations | Humans | Psoriasis | Quality of Life | Treatment Outcome [SUMMARY]
[CONTENT] Betamethasone | Dermatologic Agents | Drug Combinations | Humans | Psoriasis | Quality of Life | Treatment Outcome [SUMMARY]
[CONTENT] Betamethasone | Dermatologic Agents | Drug Combinations | Humans | Psoriasis | Quality of Life | Treatment Outcome [SUMMARY]
[CONTENT] Betamethasone | Dermatologic Agents | Drug Combinations | Humans | Psoriasis | Quality of Life | Treatment Outcome [SUMMARY]
[CONTENT] prevalence psoriasis reported | severity psoriasis significantly | disease individuals psoriasis | psoriasis highly variable | psoriasis significant health [SUMMARY]
null
[CONTENT] prevalence psoriasis reported | severity psoriasis significantly | disease individuals psoriasis | psoriasis highly variable | psoriasis significant health [SUMMARY]
[CONTENT] prevalence psoriasis reported | severity psoriasis significantly | disease individuals psoriasis | psoriasis highly variable | psoriasis significant health [SUMMARY]
[CONTENT] prevalence psoriasis reported | severity psoriasis significantly | disease individuals psoriasis | psoriasis highly variable | psoriasis significant health [SUMMARY]
[CONTENT] prevalence psoriasis reported | severity psoriasis significantly | disease individuals psoriasis | psoriasis highly variable | psoriasis significant health [SUMMARY]
[CONTENT] management | proactive | patients | psi | scores | pso | reactive | phase | analysis | reactive management [SUMMARY]
null
[CONTENT] management | proactive | patients | psi | scores | pso | reactive | phase | analysis | reactive management [SUMMARY]
[CONTENT] management | proactive | patients | psi | scores | pso | reactive | phase | analysis | reactive management [SUMMARY]
[CONTENT] management | proactive | patients | psi | scores | pso | reactive | phase | analysis | reactive management [SUMMARY]
[CONTENT] management | proactive | patients | psi | scores | pso | reactive | phase | analysis | reactive management [SUMMARY]
[CONTENT] psoriasis | disease | qol | term | skin | quality | long | topical | long term | chronic [SUMMARY]
null
[CONTENT] management | mean | scores | proactive | difference | psi | eq 5d | eq | 5d | reactive management [SUMMARY]
[CONTENT] improved | bd | bd foam | cal | cal bd | foam | cal bd foam | management | long | proactive management [SUMMARY]
[CONTENT] management | proactive | patients | scores | psi | treatment | psoriasis | reactive | analysis | long [SUMMARY]
[CONTENT] management | proactive | patients | scores | psi | treatment | psoriasis | reactive | analysis | long [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] ||| Week 4 | 6.18 | PSI | 5.46 | DLQI | 0.11 | 0.15 ||| DLQI | 15% | 0.007 | PSI | 15% ||| 0.0128 | 1% | 0.0842 [SUMMARY]
[CONTENT] Cal | DLQI | PSI ||| DLQI | PSI ||| [SUMMARY]
[CONTENT] ||| ||| Five hundred and twenty-one | the Phase 3 ||| 4-week | Cal/BD | 52-week | weekly | weekly ||| the Psoriasis Symptom Inventory | the Dermatology Life Quality Index | DLQI ||| ||| Week 4 | 6.18 | PSI | 5.46 | DLQI | 0.11 | 0.15 ||| DLQI | 15% | 0.007 | PSI | 15% ||| 0.0128 | 1% | 0.0842 ||| DLQI | PSI ||| DLQI | PSI ||| [SUMMARY]
[CONTENT] ||| ||| Five hundred and twenty-one | the Phase 3 ||| 4-week | Cal/BD | 52-week | weekly | weekly ||| the Psoriasis Symptom Inventory | the Dermatology Life Quality Index | DLQI ||| ||| Week 4 | 6.18 | PSI | 5.46 | DLQI | 0.11 | 0.15 ||| DLQI | 15% | 0.007 | PSI | 15% ||| 0.0128 | 1% | 0.0842 ||| DLQI | PSI ||| DLQI | PSI ||| [SUMMARY]
Evaluation of the potential of a simplified co-delivery system with oligodeoxynucleotides as a drug carrier for enhanced antitumor effect.
29719392
We previously developed a simple and effective system based on oligodeoxynucleotides with CGA repeating units (CGA-ODNs) for intracellular co-delivery of Dox and siRNA.
BACKGROUND
In the present study, the in vitro cytotoxicity, gene transfection and in vivo safety of the co-delivery system were further characterized and discussed.
METHODS
Compared with poly(ethyleneimine) (PEI), both CGA-ODNs and the pH-sensitive targeted coating, o-carboxymethyl-chitosan (CMCS)-poly(ethylene glycol) (PEG)-asparagine-glycine-arginine (NGR) (CMCS-PEG-NGR, CPN), showed no obvious cytotoxicity within 72 h. The excellent transfection capability of CPN-coated Dox and siRNA co-loaded nanoparticles (CPN-PDR) was confirmed by real-time PCR and Western blot analysis. There was no significant difference in silencing efficiency among Lipo/siRNA, CPN-modified siRNA-loaded nanoparticles (CPN-PR) and CPN-PDR. Furthermore, CPN-PDR was significantly more toxic to tumor cells than free Dox and CPN-modified Dox-loaded nanoparticles (CPN-PD), implying its higher antitumor potential. Both hemolysis tests and histological assessment indicated that CPN-PDR was safe for intravenous injection, showing no toxicity and good biocompatibility in vitro and in vivo.
RESULTS
The results indicated that CPN-PDR could be a promising co-delivery carrier for enhanced antitumor therapy.
CONCLUSION
[ "Animals", "Antineoplastic Agents", "Chitosan", "Doxorubicin", "Drug Carriers", "Drug Delivery Systems", "Endosomes", "Humans", "Hydrogen-Ion Concentration", "MCF-7 Cells", "Mice", "Nanoparticles", "Oligodeoxyribonucleotides", "Oligopeptides", "Polyethylene Glycols", "RNA, Small Interfering", "Transfection" ]
5916381
Characteristics of nanoparticles
In the optimal formulation of CPN-PDR, the weight ratio of DOX to siRNA was 1:1; therefore, the siRNA loading content was equal to the DOX loading content. Given that the molar ratio of CGA-ODNs to Dox was 1:5 and the weight ratio of CPN/PEI/ODNs to siRNA was 4:1:1, the siRNA and Dox loading efficiencies in CPN-PDR were each calculated to be 4.93%.
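The loading content defined in Equation 1 of the methods (weight of loaded drug divided by total nanoparticle weight) is simple enough to check numerically. The following is a minimal Python sketch; the component masses are hypothetical placeholders, not the actual formulation weights, which are not fully specified in this excerpt.

```python
def loading_content(drug_mass_mg: float, nanoparticle_mass_mg: float) -> float:
    """Equation 1: loading content (%) = weight of loaded drug / weight of nanoparticles * 100."""
    if nanoparticle_mass_mg <= 0:
        raise ValueError("nanoparticle mass must be positive")
    return drug_mass_mg / nanoparticle_mass_mg * 100.0


# Hypothetical example: equal masses of Dox and siRNA loaded into a carrier,
# illustrating that a 1:1 Dox:siRNA weight ratio gives identical loading contents.
dox_mass = 0.05        # mg (placeholder value)
sirna_mass = 0.05      # mg (placeholder value)
carrier_mass = 0.914   # mg of CPN/PEI/CGA-ODNs (placeholder value)
total_mass = dox_mass + sirna_mass + carrier_mass

print(f"Dox loading content:   {loading_content(dox_mass, total_mass):.2f}%")
print(f"siRNA loading content: {loading_content(sirna_mass, total_mass):.2f}%")
```

With these placeholder masses both loading contents evaluate to roughly 4.9%, matching the order of magnitude reported above; the true figure depends on the actual component weights used in the formulation.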
Statistical analysis
All studies were repeated a minimum of three times and measured at least in triplicate. The results are reported as the means ± SD. Statistical significance was analyzed using Student’s t-test. Differences between experimental groups were considered significant if the P-value was <0.05.
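The statistical workflow described above (measurements in triplicate, means ± SD, Student's t-test with significance at P < 0.05) can be reproduced with standard tools. The sketch below uses SciPy with made-up viability values; the numbers are illustrative only and do not come from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical cell-viability measurements (%) from three independent repeats.
viability_cpn_pd = np.array([62.1, 58.4, 60.7])    # placeholder values
viability_cpn_pdr = np.array([41.3, 44.8, 39.9])   # placeholder values

# Report means +/- SD, as in the text.
for name, values in [("CPN-PD", viability_cpn_pd), ("CPN-PDR", viability_cpn_pdr)]:
    print(f"{name}: {values.mean():.1f} +/- {values.std(ddof=1):.1f} %")

# Two-sample Student's t-test; differences are considered significant if P < 0.05.
t_stat, p_value = stats.ttest_ind(viability_cpn_pd, viability_cpn_pdr)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant = {p_value < 0.05}")
```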
null
null
Conclusion
In summary, the delivery system materials were demonstrated to be nontoxic and biocompatible. The nontoxic, negatively charged copolymer CPN coating significantly decreased the cytotoxicity of the cationic core, bPDR. The resulting co-delivery system, CPN-PDR, was also confirmed to have enhanced cytotoxicity against tumor cells. The gene transfection efficiency induced by CPN-PDR was comparable to that of the commercial product Lipofectamine 2000, indicating successful intracellular delivery and efficient transfection of siRNA. Moreover, the preliminary safety evaluation showed CPN-PDR to have good biocompatibility, so it has great potential for further development and clinical application. With the development of aptamer technology, the application prospects of this novel oligodeoxynucleotide-based co-loading platform will broaden further.
[ "Introduction", "Materials", "Cells and animals", "Preparation of nanoparticles", "In vitro cytotoxicity of materials", "In vitro cytotoxicity of blank nanoparticles", "siRNA transfection with nanoparticles", "Real-time PCR", "Western blot assay", "Cell proliferation assay", "Hemolysis test", "Histological assessment", "Results and discussion", "Cytotoxicity of blank nanoparticles", "Expression level of VEGF mRNA", "Expression level of VEGF protein", "Inhibition of cell proliferation in vitro", "Hemolysis test", "Tissue section", "Conclusion" ]
[ "Cancer has been one of the leading causes of death worldwide, and mortality and morbidity continue to increase.1 Chemotherapy is one of the major strategies for cancer therapy. However, mono-chemotherapy may encounter problems including drug resistance and unspecific toxicity. To overcome the high mortality rate of cancer, new therapeutic strategies, such as combinational therapy, have been developed.2,3 It has been reported that a combination of chemotherapy and gene therapy could potentially achieve synergistic effects and improve target selectivity, and deter the development of cancer drug resistance.4 For example, a double-modulating strategy based on the combination of chemotherapeutic agent 5-fluorouracil (5-FU) and siRNA has been developed, in which chemotherapy enhances intratumoral siRNA delivery and the delivered siRNA enhances the chemosensitivity of tumors. Furthermore, combination therapy may compensate for the limited delivery of siRNA to tumor tissue and probably manage the 5-FU-resistant tumors.5\nRNA interference (RNAi) is a natural cellular process that regulates gene expression level. Small interfering RNAs (siRNAs), which are small double-stranded RNAs, 20–24 nucleotides (nts) in length with sequences complementary to a gene’s coding sequence, can induce degradation of corresponding messenger RNAs (mRNAs), thus blocking the translation of the mRNA into protein.6,7 The key therapeutic advantage of using RNAi lies in its ability to specifically and potently knock down the expression of disease-causing genes of known sequence. The high specificity and potency makes it widely used in treating various cancers such as breast, liver, and lung cancers.8–10 In recent years, combination of chemotherapy and siRNA-induced RNAi has become a hot topic in cancer treatment. For example, co-delivery of siRNA targeting multidrug resistance protein 1 (MDR1) or anti-survivin siRNA and chemotherapeutic drugs was demonstrated to be a promising strategy to overcome drug resistance.11–13 Anti-apoptotic gene Bcl-2 is a potential combinational therapy target owing to its important role in cell apoptosis, and combination of Bcl-2 siRNA and 5-FU could induce a remarkable increase of cell apoptosis both in vitro and in vivo.14\nTo achieve the desired combinational and synergistic effects of chemotherapy and RNAi, selection of siRNA is important, and the selection of siRNA is also the section of silencing target. The mechanism for combinational anticancer effect varies with different silencing targets. For example, siRNA targeting MDR1 or anti-apoptotic gene Bcl-2 were used to prevent drug resistance response and sensitize chemotherapeutic drugs, or enhance the apoptosis of cancer cells, both offering enhanced anticancer effect.15,16 Tumor tissues have abundant new blood vessels that pump sufficient nutrition and oxygen for tumor growth and metastasis. The dependency of tumor tissues on these new blood vessels is important in anti-angiogenesis therapy for cancer treatment.17 Recently, there have been many efforts in achieving anti-angiogenesis, including delivering siRNAs to silence specific pro-angiogenic genes, such as vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF) and interleukin-8 (IL-8).18 Of these targets, VEGF has received a lot of attention owing to its key role in tumor angiogenesis. 
With the development of RNAi technology, siRNA with VEGF target has been widely explored and referred to as anti-VEGF treatment.19–21 In 2004, the success of bevacizumab showed VEGF to be a potential target for angiogenesis therapy. There are a number of novel anti-VEGF agents in phase III clinical trials that may come to market in the next few years.22,23 In our design, siRNA with VEGF target was selected to co-deliver with chemotherapeutic drugs, resulting in a combinational anticancer effect. The mechanism for this could be that anti-VEGF treatments sensitize the cells to chemotherapy agents by blocking blood supply, improve drug penetration by disruption or normalization of tumor vasculature and in addition directly kill cancer cells by gene therapy.24,25\nHowever, naked siRNAs may not induce efficient transfection by themselves because they are unstable and vulnerable, especially to nuclease-mediated degradation; also, the negatively charged surface blocks them from cellular endocytosis. Moreover, successful transfection of siRNA into cells is based on siRNA with complete structure, which could further initiate the RNAi process for targeted gene silencing.26,27 Thus, nanoscale delivery systems, which could co-load drug and siRNA and protect siRNA from degradation, were widely used and demonstrated to be excellent candidates. These include nanoparticles,28 liposomes,29 polyplexes30 and dendrimers.31 The basic standard involves employing cationic polymers or lipids to condense negatively charged siRNA and protect them from degradation.32,33 In our previous study,25 oligodeoxynucleotides with CGA repeating units (CGA-ODNs) were introduced to load Dox to obtain Dox-loaded CGA-ODNs (CGA-ODNs-Dox). Poly(ethyleneimine) (PEI) was then used to condense siRNA and CGA ODNs-Dox to obtain Dox and siRNA co-loaded nanoparticles (PEI/CGA-ODNs-Dox and siRNA[PDR]). Finally, the pH-sensitive targeted material, o-carboxymethyl-chitosan (CMCS)-poly(ethylene glycol) (PEG)-aspargine-glycine-arginine (NGR) (CMCS-PEG-NGR[CPN]), was used to modify PDR to obtain multifunctional CPN-PDR. CPN-PDRs were demonstrated to be multifunctional, being able to co-deliver Dox and siRNA into cells, induce pH-responsive disassembly and realize endosomal escape of gene and drug. Thus, in the present study, the cytotoxicity of the materials and the delivery system are further evaluated. Then, the transfection efficiency of siRNA is monitored by semiquantitative real-time polymerase chain reaction (real-time PCR) and Western blot techniques for mRNA level and protein level, respectively. Finally, the safety of the delivery system is evaluated by hemolysis test in vitro and histological assessment.", "Dox was purchased from Dalian Meilun Biology Technology Co. Ltd (Dalian, China). Targeting human VEGF siRNA (sense: 5′-ACAUCACCAUGCAGAUUAUdTdT-3′, anti-sense: 5′-dTdTUGUAGUGGUACGUCUAAUA-3′) was obtained from Guangzhou Ribobio Co., Ltd (Guangzhou, China). CGA oligodeoxynucleotides (sequence: 5′-CGACGACGACGACGACGACGA-3′; complementary sequence: 5′-TCGTCGTCGTCGTCGTCGTCG-3′) were purchased from BGI Co. (Shenzhen, China). Fetal bovine serum (FBS) was obtained from Sijiqing Co., Ltd, (Zhejiang, China) and 3-[4,5-dimethyl-2-thiazolyl]-2,5-diphenyl-2H-tetrazolium bromide (MTT) was from Sigma-Aldrich (St Louis, MO, USA). Trizol RNA extraction was from Thermo Fisher Scientific (Waltham, MA, USA). SYBR®Green and ReverTra Ace qPCR RT Kit were purchased from Toyobo (Osaka, Japan). 
All solutions were made up in Millipore ultrapure water (EMD Millipore, Billerica, MA, USA) and sterilized for cell culture. All other chemicals and reagents were of analytical grade and used as received. All the primers used in real-time PCR were synthesized and purified by BGI Co. with sequences: VEGF – forward: 5′-CTGGAGTGTGTGCCCACTGA-3′; VEGF – reverse: 5′-TCCTATGTGCTGGCCTTGGT-3′; actin – forward: GAGCTACGAGCTGCCTGACG; actin – reverse: CCTAGAAGCATTTGCGGTGG.", "The MCF-7 cells were purchased from the Chinese Academy of Sciences Cells Bank (Shanghai, China), and the cells were cultured in Roswell Park Memorial Institute (RPMI) 1640 medium supplemented with 10% fetal bovine serum (FBS), streptomycin at 100 µg/mL and penicillin at 100 U/mL. All cells were cultured in a 37°C incubator with 5% CO2.\nKunming mice (20±2 g) were obtained from the Experimental Animal Center of Shandong University. Animal experiments were carried out according to the requirements of the Animal Management Rules of the Ministry of Health (China) and approved by the Laboratory Animal Ethics Review Committee of Qilu Medical College of Shandong University.", "PDR and CPN-PDR were prepared by the method reported in our previous study.25 Briefly, CGA-ODNs-Dox was first obtained by incubating Dox with CGA-ODNs at room temperature. Then, PEI, siRNA, ODNs-Dox and CPN were dissolved and diluted to the corresponding concentrations with deionized water. After that, siRNA and CGA-ODNs-Dox were mixed by vortexing for several seconds to obtain the mixture. The mixture was added dropwise into PEI solution, mixed by vortexing and then incubated for 30 min at room temperature to form PDR. CPN-PDR was obtained by adding PDR suspension into the CPN solution under vortexing, followed by 30-min incubation at room temperature. When preparing the blank nanoparticles, DOX and siRNA were omitted from the above method. The construction of different kinds of nanoparticles is shown in Figure 1.\nThe loading content of DOX and siRNA was calculated using the following formula:34\nLoading content (%) = [weight of loaded drugs / weight of nanoparticles] × 100% (1)", "MTT assays were carried out on MCF-7 cells to evaluate the in vitro cytotoxicity of CGA-ODNs, PEI and CPN, respectively.35 MCF-7 cells were seeded at a density of 7,000 cells/well in 96-well plates and allowed to adhere overnight. CGA-ODNs, PEI and CPN solutions, corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. At a determined time point, 20 µL MTT solutions (5 mg/mL) were added. After 4 h incubation, the plates were centrifuged (3,000 rpm, 10 min) and the supernatant was removed. Then, 200 µL DMSO solutions were added to dissolve the formazan crystals formed by the living cells. The cell viability was calculated according to the following formula (Equation 2) after recording the absorbance at 570 nm by a microplate reader (Model 680; Bio-Rad Laboratories Inc., Hercules, CA, USA). All the assays were repeated three times.\nCell viability (%) = [Abs(sample) − Abs(blank)] / [Abs(control) − Abs(blank)] × 100% (2) (a worked sketch of this calculation appears after the section texts below)", "The cytotoxicity of non-DOX- and siRNA-loaded nanoparticles (blank PDR [bPDR]) and CPN-coated blank PDR (bCPN-PDR) was also evaluated in MCF-7 cells. 
bPDR and bCPN-PDR solutions, corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration and cells incubated with fresh media were taken as control. After the indicated time periods, cell viability was evaluated according to the procedure described above.", "MCF-7 cells were seeded into 24-well plates at densities of 8×10⁴, 6×10⁴ and 5×10⁴ cells/well for 24, 48 and 72 h transfection, respectively. After culturing at 37°C overnight, MCF-7 cells were treated with siRNA specific for VEGF. The siRNA was delivered by Lipofectamine 2000 (Thermo Fisher Scientific) according to the manufacturer’s instructions, by CPN-modified siRNA-loaded nanoparticles (CPN-PR) or by CPN-modified Dox- and siRNA-loaded nanoparticles (CPN-PDR). The final concentrations of Dox and siRNA were 1 µM and 55 nM, respectively. After siRNA transfection, cells were harvested for RNA extraction and real-time PCR analysis or subjected to Western blot assay.", "The expression level of VEGF mRNA was monitored by the real-time PCR technique following the standard procedure. Total RNA was extracted using Trizol reagent under the standard procedure (Thermo Fisher Scientific), and cDNA synthesis was carried out using the ReverTra Ace qPCR RT Kit (Toyobo) according to the manufacturer’s instructions. The final concentration of RNA was measured by NanoDrop 2000 (Thermo Fisher Scientific), and the final product cDNA was stored at −20°C until use.\nEach real-time PCR reaction contained 20 µL of solution including cDNA (2 µL), SYBR®Green mix (10 µL), primer (1.6 µL) and double distilled water (6.4 µL). The reaction system was placed into a PCR instrument (Roche LightCycler™; Hoffman-La Roche Ltd, Basel, Switzerland) for analysis. β-Actin served as the reference gene. Three wells were set for each group. The expression level of VEGF mRNA was normalized to the β-actin expression level and calculated from the Ct values recorded at the end of the reaction.", "A Western blot assay was employed to investigate the expression level of total VEGF protein after transfection. At a determined time, cells were trypsinized and pelleted by centrifugation. Protein collection was carried out following the standard procedure of the Total Protein Extraction Kit (BestBio, Shanghai, China). The total protein was stored at −80°C until use. Protein was quantified by BCA protein assay kit (Beyotime Biotechnology, Shanghai, China) based on the standard curve calculation (A=0.006C+0.0919, r=0.9987). After quantification, all the samples were diluted to 2 µg/µL for sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), denatured in boiling water for 5 min and stored at −80°C until use. Equal amounts of protein (50 µg) were subjected to 12% SDS-PAGE and transferred to nitrocellulose membranes using standard procedures. The membrane was blocked in 2% skimmed milk in phosphate-buffered saline for 2 h and washed three times with Tris-buffered saline with Tween-20 (TBST). Then the membranes were probed with primary antibodies against VEGF and glyceraldehyde 3-phosphate dehydrogenase (GAPDH) overnight at 4°C. After 3×10 min washes in TBST on a shaker, membranes were incubated with horseradish peroxidase (HRP)-conjugated anti-rabbit IgG for 1 h at room temperature. 
With another 3×10 min washes in TBST, the membranes were developed with electrochemiluminescence (ECL) using a gel-imaging system and analyzed using Image Analysis software.", "The anti-proliferation activity of Dox, CPN-PD and CPN-PDR against MCF-7 cells was tested via the MTT method. MCF-7 cells were seeded into 96-well plates at a density of 7,000 cells/well. After overnight incubation, the cells were treated with various concentrations (0.002, 0.01, 0.05, 0.25 and 0.5 µM) of Dox, CPN-PD and CPN-PDR solutions and incubated for another 48 h. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. After the indicated time periods, cell viability was evaluated according to the procedure described in the “In vitro cytotoxicity of materials” section.", "A hemolysis test was carried out to investigate the safety of CPN-PDR for intravenous injection. A 2% red blood cell suspension was collected by centrifugation and resuspension. The test was performed under the design described in Table 1. After incubation at 37°C for 3 h, all the groups were centrifuged at 3,000 rpm for 15 min and examined with the naked eye. To measure the hemolysis ratio of each group, UV-vis spectrophotometry was carried out to record the absorbance of the supernatant in each group. The hemolysis ratios were calculated according to the following formula:\nHR (%) = [Abs(sample) − Abs(−)] / [Abs(+) − Abs(−)] × 100% (3)\nwhere Abs(sample), Abs(−) and Abs(+) refer to the absorbances of the samples, negative control and positive control, respectively (a worked numerical sketch of Equations 2 and 3 appears after the section texts below).", "In order to evaluate the compatibility and tissue toxicity of CPN-PDR in vivo, a histological observation was performed.36 Five female Kunming mice were injected with CPN-PDR at a dose of 232 µg/kg through the tail vein. In the meantime, normal saline (NS) was taken as a control reagent. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After washing twice with PBS, all the organs were fixed in 4% formaldehyde, dehydrated in gradient alcohol, placed in xylene, embedded in paraffin and made into sections, followed by hematoxylin-eosin (HE) staining for histological examination with a microscope.", " Characteristics of nanoparticles In the optimal formulation of CPN-PDR, the weight ratio of DOX to siRNA was 1:1; therefore, the siRNA loading content was equal to the DOX loading content. Given that the molar ratio of CGA-ODNs to Dox was 1:5 and the weight ratio of CPN/PEI/ODNs to siRNA was 4:1:1, the siRNA and Dox loading efficiencies in CPN-PDR were each calculated to be 4.93%.\nIn the optimal formulation of CPN-PDR, the weight ratio of DOX to siRNA was 1:1; therefore, the siRNA loading content was equal to the DOX loading content. Given that the molar ratio of CGA-ODNs to Dox was 1:5 and the weight ratio of CPN/PEI/ODNs to siRNA was 4:1:1, the siRNA and Dox loading efficiencies in CPN-PDR were each calculated to be 4.93%.\n Cytotoxicity of materials In the co-delivery platform, CGA-ODNs were selected as carriers of DOX, then the mixture of CGA-ODNs-Dox and siRNA (abbreviated as DR) was electrostatically complexed with PEI to obtain Dox and siRNA co-loaded nanoparticles, PEI/DR (PDR). The copolymer CPN was employed as a multifunctional material.25 In order to evaluate the safety of the carrier, MTT assays were carried out to evaluate the cytotoxicity of CGA-ODNs and CPN. In the meantime, PEI, a toxic cationic polymer, was also tested in this experiment. 
As shown in Figure 2A, the viability of MCF-7 cells treated with CGA-ODNs and CPN at any concentration was observed to be stable and >80% within 48 h. After 72 h incubation, the viability of MCF-7 cells treated with CPN was still >80%, but after treatment with a higher concentration of CGA-ODNs (2.5 and 5 µM), the cell viability was slightly decreased to 60%. This indicated that there is no obvious cytotoxicity of CGA-ODNs and CPN. However, the cell viability of MCF-7 cells treated with PEI was obviously dependent on the concentrations of PEI. With an increase in concentration, the MCF-7 cells treated with PEI showed lower cell viability. Moreover, this concentration-dependent trend became more obvious when incubation time increased from 24 h to 72 h.\nIn the co-delivery platform, CGA-ODNs were selected as carriers of DOX, then the mixture of CGA-ODNs-Dox and siRNA (abbreviated as DR) was electrostatically interacted with PEI to obtain Dox and siRNA co-loaded nanoparticles, PEI/DR (PDR). The copolymer CPN was employed as a multifunctional material.25 In order to evaluate the safety of the carrier, MTT assays were carried out to evaluate the cytotoxicity of CGA-ODNs and CPN. In the meantime, PEI, a toxic cationic polymer, was also tested in this experiment. As shown in Figure 2A, the viability of MCF-7 cells treated with CGA-ODNs and CPN at any concentration was observed to be stable and >80% within 48 h. After 72 h incubation, the viability of MCF-7 cells treated with CPN was still >80%, but after treatment with a higher concentration of CGA-ODNs (2.5 and 5 µM), the cell viability was slightly decreased to 60%. This indicated that there is no obvious cytotoxicity of CGA-ODNs and CPN. However, the cell viability of MCF-7 cells treated with PEI was obviously dependent on the concentrations of PEI. With an increase in concentration, the MCF-7 cells treated with PEI showed lower cell viability. Moreover, this concentration-dependent trend became more obvious when incubation time increased from 24 h to 72 h.\n Cytotoxicity of blank nanoparticles Cell viability of MCF-7 cells was also monitored to investigate the cytotoxicity of non-RNA- and non-drug-loaded carriers including bPDR and bCPN-PDR. As shown in Figure 2B, after treatment with bCPN-PDR, the cell viability was stable around 95% and shown to be concentration independent at 24 h. Meanwhile, bPDR-treated cells showed lower cell viability with time. With increased concentration, bPDR showed higher cytotoxicity, inducing 80% cell death in 72 h. Compared with the bPDR group, the cell viabilities of the bCPN-PDR group were significantly higher (P<0.05). This phenomenon could be explained by the higher positive charge of bPDR due to the positive ingredient of PEI, while the cell cytotoxicity could be significantly decreased by covering with the nontoxic negatively charged CPN (>80% cell viability in Figure 2A), which could induce the charge reversal of bPDR, implying that the negatively charged CPN coating could decrease the higher toxicity of bPDR to cells.\nCell viability of MCF-7 cells was also monitored to investigate the cytotoxicity of non-RNA- and non-drug-loaded carriers including bPDR and bCPN-PDR. As shown in Figure 2B, after treatment with bCPN-PDR, the cell viability was stable around 95% and shown to be concentration independent at 24 h. Meanwhile, bPDR-treated cells showed lower cell viability with time. With increased concentration, bPDR showed higher cytotoxicity, inducing 80% cell death in 72 h. 
Compared with the bPDR group, the cell viabilities of the bCPN-PDR group were significantly higher (P<0.05). This phenomenon could be explained by the higher positive charge of bPDR due to the positive ingredient of PEI, while the cell cytotoxicity could be significantly decreased by covering with the nontoxic negatively charged CPN (>80% cell viability in Figure 2A), which could induce the charge reversal of bPDR, implying that the negatively charged CPN coating could decrease the higher toxicity of bPDR to cells.\n Expression level of VEGF mRNA The transfection efficiency of CPN-PDR compared to Lipofectamine 2000 was demonstrated by delivering VEGF-siRNA into MCF-7 cells. The expression level of VEGF mRNA was measured using real-time PCR, and the results are shown in Figure 3. The expression level of VEGF mRNA was normalized to the β-actin expression and calculated based on the semiquantitative method. Untreated cells were selected as control, and the expression level was set as 100%. For lipo/siRNA group, the silencing efficiencies were 61.98%±6.96%, 31.93%±6.43% and 37.02%±3.17% at 24, 48 and 72 h, respectively. The decreased expression of VEGF mRNA reflected that gene silencing capability of lipo/siRNA. With incubation time extended from 24 to 48 h, the capability was enhanced (P<0.05). However, in 72 h, no further reduction was observed compared to that in 48 h, although the significantly downregulated expression could be observed (P<0.05 vs control, Dox and bCPN-PDR), implying that the silencing effect could reach the plateau period after 48 h and last for 72 h at least. The CPN-PR and CPN-PDR have shown similar behavior to lipo/siRNA. Compared with the control, a remarkable decrease of VEGF mRNA expression was observed (P<0.01) at 48 h. There was no significant difference in expression levels between CPN-PR and CPN-PDR (P>0.05) at any time point, implying their comparable gene delivery and transfection capability. More importantly, this capability was equivalent to the commercial Lipofectamine 2000.\nThe transfection efficiency of CPN-PDR compared to Lipofectamine 2000 was demonstrated by delivering VEGF-siRNA into MCF-7 cells. The expression level of VEGF mRNA was measured using real-time PCR, and the results are shown in Figure 3. The expression level of VEGF mRNA was normalized to the β-actin expression and calculated based on the semiquantitative method. Untreated cells were selected as control, and the expression level was set as 100%. For lipo/siRNA group, the silencing efficiencies were 61.98%±6.96%, 31.93%±6.43% and 37.02%±3.17% at 24, 48 and 72 h, respectively. The decreased expression of VEGF mRNA reflected that gene silencing capability of lipo/siRNA. With incubation time extended from 24 to 48 h, the capability was enhanced (P<0.05). However, in 72 h, no further reduction was observed compared to that in 48 h, although the significantly downregulated expression could be observed (P<0.05 vs control, Dox and bCPN-PDR), implying that the silencing effect could reach the plateau period after 48 h and last for 72 h at least. The CPN-PR and CPN-PDR have shown similar behavior to lipo/siRNA. Compared with the control, a remarkable decrease of VEGF mRNA expression was observed (P<0.01) at 48 h. There was no significant difference in expression levels between CPN-PR and CPN-PDR (P>0.05) at any time point, implying their comparable gene delivery and transfection capability. 
More importantly, this capability was equivalent to the commercial Lipofectamine 2000.\n Expression level of VEGF protein Based on the results of real-time PCR, the gene-silencing effect enhanced with time and reached a platform after 48 h. In this study, the Western blot technique was employed to observe VEGF protein expression at 48 h. The results are shown in Figure 4. GAPDH was selected as the inner control; all the bands were found to be equivalent in gray levels, and the amount of protein expression was positively correlated with the shade of gray in the imaging picture. From Figure 4, the gray levels of both control and bCPN-PDR, which were similar, were obvious. However, siRNA-loaded nanoparticles including lipo/siRNA, CPN-PR and CPN-PDR demonstrated a notably lower gray level. Furthermore, as marked with the red rectangle in Figure 4, expression of VEGF protein in CPN-PDR could hardly be identified. This suggested that co-delivery of anti-VEGF siRNA and Dox could downregulate the relevant VEGF protein level, which was in agreement with the results from semiquantitative real-time PCR. Taken together, these results indicate that nanoparticles loaded with anti-VEGF siRNA downregulated both the protein and the mRNA expression of VEGF.\nBased on the results of real-time PCR, the gene-silencing effect enhanced with time and reached a platform after 48 h. In this study, the Western blot technique was employed to observe VEGF protein expression at 48 h. The results are shown in Figure 4. GAPDH was selected as the inner control; all the bands were found to be equivalent in gray levels, and the amount of protein expression was positively correlated with the shade of gray in the imaging picture. From Figure 4, the gray levels of both control and bCPN-PDR, which were similar, were obvious. However, siRNA-loaded nanoparticles including lipo/siRNA, CPN-PR and CPN-PDR demonstrated a notably lower gray level. Furthermore, as marked with the red rectangle in Figure 4, expression of VEGF protein in CPN-PDR could hardly be identified. This suggested that co-delivery of anti-VEGF siRNA and Dox could downregulate the relevant VEGF protein level, which was in agreement with the results from semiquantitative real-time PCR. Taken together, these results indicate that nanoparticles loaded with anti-VEGF siRNA downregulated both the protein and the mRNA expression of VEGF.\n Inhibition of cell proliferation in vitro The ultimate goal of siRNA intracellular delivery is to restrain the proliferation of cancer cells. Different treatments against MCF-7 cells were performed to further investigate the inhibition effect. As shown in Figure 5A, an obvious concentration dependence in terms of DOX, CPN-PD and CPN-PDR could be observed after 48 h incubation. We have calculated the differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX at each dose. The results indicated that there was a significant difference between CPN-PDR vs DOX at each dose except the dose of 0.05, and there were significant differences between CPN-PDR vs CPN-PD at a dose of 0.002. It was suggested that the in vitro antitumor activity of CPN-PDR was equivalent to CPN-PD, and both of them showed a higher antitumor activity than free drug DOX, which was mainly attributed to the design of the multifunctional nano-vectors. We have also calculated the statistical differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX in the concentration range of Dox (0.002, 0.01, 0.05, 0.25 and 0.5 µM) using Student’s t-test. 
After statistical calculation, the significant differences were verified to exist between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.01). After calculation with professional software, the half maximal inhibitory concentrations (IC50) are presented in Figure 5B. They are 0.0367±0.0088 µM for Dox, 0.0266±0.0027 µM for CPN-PD and 0.0126±0.0022 µM for CPN-PDR, respectively. The IC50 of CPN-PD was 2.913-fold higher than that of CPN-PDR, and the IC50 of DOX was 2.110-fold higher than that of CPN-PDR. After statistical calculation, significant differences were found between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.05). With the results from real-time PCR and Western blot, the increased cytotoxicity of CPN-PDR probably resulted from siRNA delivery and successful transfection. This was because the intracellular delivery of anti-VEGF siRNA could downregulate the expression of VEGF protein. The anti-VEGF therapy may sensitize cancer cells or kill cells block the blood supply of the tumor and improve drug penetration. Taken together, the anti-VEGF siRNA and Dox co-delivery platform could induce increased cytotoxicity against cancer cells.\nThe ultimate goal of siRNA intracellular delivery is to restrain the proliferation of cancer cells. Different treatments against MCF-7 cells were performed to further investigate the inhibition effect. As shown in Figure 5A, an obvious concentration dependence in terms of DOX, CPN-PD and CPN-PDR could be observed after 48 h incubation. We have calculated the differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX at each dose. The results indicated that there was a significant difference between CPN-PDR vs DOX at each dose except the dose of 0.05, and there were significant differences between CPN-PDR vs CPN-PD at a dose of 0.002. It was suggested that the in vitro antitumor activity of CPN-PDR was equivalent to CPN-PD, and both of them showed a higher antitumor activity than free drug DOX, which was mainly attributed to the design of the multifunctional nano-vectors. We have also calculated the statistical differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX in the concentration range of Dox (0.002, 0.01, 0.05, 0.25 and 0.5 µM) using Student’s t-test. After statistical calculation, the significant differences were verified to exist between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.01). After calculation with professional software, the half maximal inhibitory concentrations (IC50) are presented in Figure 5B. They are 0.0367±0.0088 µM for Dox, 0.0266±0.0027 µM for CPN-PD and 0.0126±0.0022 µM for CPN-PDR, respectively. The IC50 of CPN-PD was 2.913-fold higher than that of CPN-PDR, and the IC50 of DOX was 2.110-fold higher than that of CPN-PDR. After statistical calculation, significant differences were found between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.05). With the results from real-time PCR and Western blot, the increased cytotoxicity of CPN-PDR probably resulted from siRNA delivery and successful transfection. This was because the intracellular delivery of anti-VEGF siRNA could downregulate the expression of VEGF protein. The anti-VEGF therapy may sensitize cancer cells or kill cells block the blood supply of the tumor and improve drug penetration. Taken together, the anti-VEGF siRNA and Dox co-delivery platform could induce increased cytotoxicity against cancer cells.\n Hemolysis test To investigate the safety of intravenous injection of CPN-PDR, a hemolysis test was performed. 
As shown in Table 1, different volume ratios of CPN-PDR to 2% red blood cell suspension were selected. After incubation and centrifugation, tubes were observed by the naked eye. As shown in Figure 6, no red blood cell hemolysis was observed in tube 6, in which 100% PBS was added serving as a negative control. For tube 7, 100% double distilled water was added and red cell hemolysis could be clearly observed. For tubes 1–5, in which different percentages of CPN-PDR were added, no red blood cell hemolysis occurred. Hemolysis of red blood cells can generate hemoprotein, which could be detected using UV-vis spectrophotometry. After scanning the absorbance wavelength of hemoprotein, 577 nm was selected as the detecting wavelength. According to Equation 3, hemolysis ratios were calculated based on the recorded absorbance and shown in Table 2. All the hemolysis ratios of tubes 1–5 were <5%, implying that no hemolysis was induced by addition of CPN-PDR.\nTo investigate the safety of intravenous injection of CPN-PDR, a hemolysis test was performed. As shown in Table 1, different volume ratios of CPN-PDR to 2% red blood cell suspension were selected. After incubation and centrifugation, tubes were observed by the naked eye. As shown in Figure 6, no red blood cell hemolysis was observed in tube 6, in which 100% PBS was added serving as a negative control. For tube 7, 100% double distilled water was added and red cell hemolysis could be clearly observed. For tubes 1–5, in which different percentages of CPN-PDR were added, no red blood cell hemolysis occurred. Hemolysis of red blood cells can generate hemoprotein, which could be detected using UV-vis spectrophotometry. After scanning the absorbance wavelength of hemoprotein, 577 nm was selected as the detecting wavelength. According to Equation 3, hemolysis ratios were calculated based on the recorded absorbance and shown in Table 2. All the hemolysis ratios of tubes 1–5 were <5%, implying that no hemolysis was induced by addition of CPN-PDR.\n Tissue section In this study, a histological analysis of organs (heart, liver, spleen, lung and kidney) was performed to evaluate whether CPN-PDR could cause tissue damage, inflammation and lesions. Kunming mice were injected with CPN-PDR and normal saline (NS) by tail vein, respectively. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After the standard procedure of HE staining for histological examination, the organs were observed under a microscope. Mice without any treatment were chosen as a control. As shown in Figure 7, compared with the control group, there was no visible histologically difference between mice administrated with CPN-PDR group and that with NS group, implying the safety of CPN-PDR in vivo.\nIn this study, a histological analysis of organs (heart, liver, spleen, lung and kidney) was performed to evaluate whether CPN-PDR could cause tissue damage, inflammation and lesions. Kunming mice were injected with CPN-PDR and normal saline (NS) by tail vein, respectively. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After the standard procedure of HE staining for histological examination, the organs were observed under a microscope. Mice without any treatment were chosen as a control. 
As shown in Figure 7, compared with the control group, there was no visible histologically difference between mice administrated with CPN-PDR group and that with NS group, implying the safety of CPN-PDR in vivo.", "Cell viability of MCF-7 cells was also monitored to investigate the cytotoxicity of non-RNA- and non-drug-loaded carriers including bPDR and bCPN-PDR. As shown in Figure 2B, after treatment with bCPN-PDR, the cell viability was stable around 95% and shown to be concentration independent at 24 h. Meanwhile, bPDR-treated cells showed lower cell viability with time. With increased concentration, bPDR showed higher cytotoxicity, inducing 80% cell death in 72 h. Compared with the bPDR group, the cell viabilities of the bCPN-PDR group were significantly higher (P<0.05). This phenomenon could be explained by the higher positive charge of bPDR due to the positive ingredient of PEI, while the cell cytotoxicity could be significantly decreased by covering with the nontoxic negatively charged CPN (>80% cell viability in Figure 2A), which could induce the charge reversal of bPDR, implying that the negatively charged CPN coating could decrease the higher toxicity of bPDR to cells.", "The transfection efficiency of CPN-PDR compared to Lipofectamine 2000 was demonstrated by delivering VEGF-siRNA into MCF-7 cells. The expression level of VEGF mRNA was measured using real-time PCR, and the results are shown in Figure 3. The expression level of VEGF mRNA was normalized to the β-actin expression and calculated based on the semiquantitative method. Untreated cells were selected as control, and the expression level was set as 100%. For lipo/siRNA group, the silencing efficiencies were 61.98%±6.96%, 31.93%±6.43% and 37.02%±3.17% at 24, 48 and 72 h, respectively. The decreased expression of VEGF mRNA reflected that gene silencing capability of lipo/siRNA. With incubation time extended from 24 to 48 h, the capability was enhanced (P<0.05). However, in 72 h, no further reduction was observed compared to that in 48 h, although the significantly downregulated expression could be observed (P<0.05 vs control, Dox and bCPN-PDR), implying that the silencing effect could reach the plateau period after 48 h and last for 72 h at least. The CPN-PR and CPN-PDR have shown similar behavior to lipo/siRNA. Compared with the control, a remarkable decrease of VEGF mRNA expression was observed (P<0.01) at 48 h. There was no significant difference in expression levels between CPN-PR and CPN-PDR (P>0.05) at any time point, implying their comparable gene delivery and transfection capability. More importantly, this capability was equivalent to the commercial Lipofectamine 2000.", "Based on the results of real-time PCR, the gene-silencing effect enhanced with time and reached a platform after 48 h. In this study, the Western blot technique was employed to observe VEGF protein expression at 48 h. The results are shown in Figure 4. GAPDH was selected as the inner control; all the bands were found to be equivalent in gray levels, and the amount of protein expression was positively correlated with the shade of gray in the imaging picture. From Figure 4, the gray levels of both control and bCPN-PDR, which were similar, were obvious. However, siRNA-loaded nanoparticles including lipo/siRNA, CPN-PR and CPN-PDR demonstrated a notably lower gray level. Furthermore, as marked with the red rectangle in Figure 4, expression of VEGF protein in CPN-PDR could hardly be identified. 
This suggested that co-delivery of anti-VEGF siRNA and Dox could downregulate the relevant VEGF protein level, which was in agreement with the results from semiquantitative real-time PCR. Taken together, these results indicate that nanoparticles loaded with anti-VEGF siRNA downregulated both the protein and the mRNA expression of VEGF.", "The ultimate goal of siRNA intracellular delivery is to restrain the proliferation of cancer cells. Different treatments against MCF-7 cells were performed to further investigate the inhibition effect. As shown in Figure 5A, an obvious concentration dependence in terms of DOX, CPN-PD and CPN-PDR could be observed after 48 h incubation. We have calculated the differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX at each dose. The results indicated that there was a significant difference between CPN-PDR vs DOX at each dose except the dose of 0.05, and there were significant differences between CPN-PDR vs CPN-PD at a dose of 0.002. It was suggested that the in vitro antitumor activity of CPN-PDR was equivalent to CPN-PD, and both of them showed a higher antitumor activity than free drug DOX, which was mainly attributed to the design of the multifunctional nano-vectors. We have also calculated the statistical differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX in the concentration range of Dox (0.002, 0.01, 0.05, 0.25 and 0.5 µM) using Student’s t-test. After statistical calculation, the significant differences were verified to exist between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.01). After calculation with professional software, the half maximal inhibitory concentrations (IC50) are presented in Figure 5B. They are 0.0367±0.0088 µM for Dox, 0.0266±0.0027 µM for CPN-PD and 0.0126±0.0022 µM for CPN-PDR, respectively. The IC50 of CPN-PD was 2.913-fold higher than that of CPN-PDR, and the IC50 of DOX was 2.110-fold higher than that of CPN-PDR. After statistical calculation, significant differences were found between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.05). With the results from real-time PCR and Western blot, the increased cytotoxicity of CPN-PDR probably resulted from siRNA delivery and successful transfection. This was because the intracellular delivery of anti-VEGF siRNA could downregulate the expression of VEGF protein. The anti-VEGF therapy may sensitize cancer cells or kill cells block the blood supply of the tumor and improve drug penetration. Taken together, the anti-VEGF siRNA and Dox co-delivery platform could induce increased cytotoxicity against cancer cells.", "To investigate the safety of intravenous injection of CPN-PDR, a hemolysis test was performed. As shown in Table 1, different volume ratios of CPN-PDR to 2% red blood cell suspension were selected. After incubation and centrifugation, tubes were observed by the naked eye. As shown in Figure 6, no red blood cell hemolysis was observed in tube 6, in which 100% PBS was added serving as a negative control. For tube 7, 100% double distilled water was added and red cell hemolysis could be clearly observed. For tubes 1–5, in which different percentages of CPN-PDR were added, no red blood cell hemolysis occurred. Hemolysis of red blood cells can generate hemoprotein, which could be detected using UV-vis spectrophotometry. After scanning the absorbance wavelength of hemoprotein, 577 nm was selected as the detecting wavelength. According to Equation 3, hemolysis ratios were calculated based on the recorded absorbance and shown in Table 2. 
All the hemolysis ratios of tubes 1–5 were <5%, implying that no hemolysis was induced by addition of CPN-PDR.", "In this study, a histological analysis of organs (heart, liver, spleen, lung and kidney) was performed to evaluate whether CPN-PDR could cause tissue damage, inflammation and lesions. Kunming mice were injected with CPN-PDR and normal saline (NS) by tail vein, respectively. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After the standard procedure of HE staining for histological examination, the organs were observed under a microscope. Mice without any treatment were chosen as a control. As shown in Figure 7, compared with the control group, there was no visible histologically difference between mice administrated with CPN-PDR group and that with NS group, implying the safety of CPN-PDR in vivo.", "In summary, the delivery system materials were demonstrated to be nontoxic and biocompatible. The nontoxic and negatively charged copolymer CPN coating could significantly decrease the cytotoxicity of the cationic core, bPDR. The obtained co-delivery system CPN-PDR was also confirmed with enhanced cytotoxicity against tumor cells. Gene transfection efficiency induced by CPN-PDR was fairly equal with that of the commercial product Lipofectamine 2000, implying the successful intracellular delivery and good transfection of siRNA. Moreover, the preliminary evaluation of safety showed CPN-PDR to have good biocompatibility, so it has great potential for further exploitation and clinical application. With the development of aptamer technology, the promising application prospects of this novel oligodeoxynucleotide-based co-loading platform will further increase." ]
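The MTT cell-viability formula (Equation 2) and the hemolysis-ratio formula (Equation 3) described in the methods above are straightforward to implement. The Python sketch below uses invented absorbance readings; the values are placeholders rather than measurements from the study, and the 5% hemolysis threshold reflects the acceptance criterion mentioned in the text.

```python
def cell_viability(abs_sample: float, abs_control: float, abs_blank: float) -> float:
    """Equation 2: viability (%) relative to untreated control wells."""
    return (abs_sample - abs_blank) / (abs_control - abs_blank) * 100.0


def hemolysis_ratio(abs_sample: float, abs_negative: float, abs_positive: float) -> float:
    """Equation 3: hemolysis ratio (%) relative to PBS (-) and water (+) controls."""
    return (abs_sample - abs_negative) / (abs_positive - abs_negative) * 100.0


# Hypothetical absorbance readings (570 nm for the MTT assay, 577 nm for hemoglobin).
print(f"Viability: {cell_viability(abs_sample=0.62, abs_control=0.85, abs_blank=0.05):.1f} %")

hr = hemolysis_ratio(abs_sample=0.09, abs_negative=0.06, abs_positive=1.20)
print(f"Hemolysis ratio: {hr:.1f} % -> {'acceptable (<5%)' if hr < 5 else 'hemolytic'}")
```

In the study itself, all CPN-PDR dilutions gave ratios below 5%, consistent with the formulation being non-hemolytic.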
[ "intro", "materials", null, null, "materials", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Materials", "Cells and animals", "Preparation of nanoparticles", "In vitro cytotoxicity of materials", "In vitro cytotoxicity of blank nanoparticles", "siRNA transfection with nanoparticles", "Real-time PCR", "Western blot assay", "Cell proliferation assay", "Hemolysis test", "Histological assessment", "Statistical analysis", "Results and discussion", "Characteristics of nanoparticles", "Cytotoxicity of materials", "Cytotoxicity of blank nanoparticles", "Expression level of VEGF mRNA", "Expression level of VEGF protein", "Inhibition of cell proliferation in vitro", "Hemolysis test", "Tissue section", "Conclusion" ]
[ "Cancer has been one of the leading causes of death worldwide, and mortality and morbidity continue to increase.1 Chemotherapy is one of the major strategies for cancer therapy. However, mono-chemotherapy may encounter problems including drug resistance and unspecific toxicity. To overcome the high mortality rate of cancer, new therapeutic strategies, such as combinational therapy, have been developed.2,3 It has been reported that a combination of chemotherapy and gene therapy could potentially achieve synergistic effects and improve target selectivity, and deter the development of cancer drug resistance.4 For example, a double-modulating strategy based on the combination of chemotherapeutic agent 5-fluorouracil (5-FU) and siRNA has been developed, in which chemotherapy enhances intratumoral siRNA delivery and the delivered siRNA enhances the chemosensitivity of tumors. Furthermore, combination therapy may compensate for the limited delivery of siRNA to tumor tissue and probably manage the 5-FU-resistant tumors.5\nRNA interference (RNAi) is a natural cellular process that regulates gene expression level. Small interfering RNAs (siRNAs), which are small double-stranded RNAs, 20–24 nucleotides (nts) in length with sequences complementary to a gene’s coding sequence, can induce degradation of corresponding messenger RNAs (mRNAs), thus blocking the translation of the mRNA into protein.6,7 The key therapeutic advantage of using RNAi lies in its ability to specifically and potently knock down the expression of disease-causing genes of known sequence. The high specificity and potency makes it widely used in treating various cancers such as breast, liver, and lung cancers.8–10 In recent years, combination of chemotherapy and siRNA-induced RNAi has become a hot topic in cancer treatment. For example, co-delivery of siRNA targeting multidrug resistance protein 1 (MDR1) or anti-survivin siRNA and chemotherapeutic drugs was demonstrated to be a promising strategy to overcome drug resistance.11–13 Anti-apoptotic gene Bcl-2 is a potential combinational therapy target owing to its important role in cell apoptosis, and combination of Bcl-2 siRNA and 5-FU could induce a remarkable increase of cell apoptosis both in vitro and in vivo.14\nTo achieve the desired combinational and synergistic effects of chemotherapy and RNAi, selection of siRNA is important, and the selection of siRNA is also the section of silencing target. The mechanism for combinational anticancer effect varies with different silencing targets. For example, siRNA targeting MDR1 or anti-apoptotic gene Bcl-2 were used to prevent drug resistance response and sensitize chemotherapeutic drugs, or enhance the apoptosis of cancer cells, both offering enhanced anticancer effect.15,16 Tumor tissues have abundant new blood vessels that pump sufficient nutrition and oxygen for tumor growth and metastasis. The dependency of tumor tissues on these new blood vessels is important in anti-angiogenesis therapy for cancer treatment.17 Recently, there have been many efforts in achieving anti-angiogenesis, including delivering siRNAs to silence specific pro-angiogenic genes, such as vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF) and interleukin-8 (IL-8).18 Of these targets, VEGF has received a lot of attention owing to its key role in tumor angiogenesis. 
With the development of RNAi technology, siRNA with VEGF target has been widely explored and referred to as anti-VEGF treatment.19–21 In 2004, the success of bevacizumab showed VEGF to be a potential target for angiogenesis therapy. There are a number of novel anti-VEGF agents in phase III clinical trials that may come to market in the next few years.22,23 In our design, siRNA with VEGF target was selected to co-deliver with chemotherapeutic drugs, resulting in a combinational anticancer effect. The mechanism for this could be that anti-VEGF treatments sensitize the cells to chemotherapy agents by blocking blood supply, improve drug penetration by disruption or normalization of tumor vasculature and in addition directly kill cancer cells by gene therapy.24,25\nHowever, naked siRNAs may not induce efficient transfection by themselves because they are unstable and vulnerable, especially to nuclease-mediated degradation; also, the negatively charged surface blocks them from cellular endocytosis. Moreover, successful transfection of siRNA into cells is based on siRNA with complete structure, which could further initiate the RNAi process for targeted gene silencing.26,27 Thus, nanoscale delivery systems, which could co-load drug and siRNA and protect siRNA from degradation, were widely used and demonstrated to be excellent candidates. These include nanoparticles,28 liposomes,29 polyplexes30 and dendrimers.31 The basic standard involves employing cationic polymers or lipids to condense negatively charged siRNA and protect them from degradation.32,33 In our previous study,25 oligodeoxynucleotides with CGA repeating units (CGA-ODNs) were introduced to load Dox to obtain Dox-loaded CGA-ODNs (CGA-ODNs-Dox). Poly(ethyleneimine) (PEI) was then used to condense siRNA and CGA ODNs-Dox to obtain Dox and siRNA co-loaded nanoparticles (PEI/CGA-ODNs-Dox and siRNA[PDR]). Finally, the pH-sensitive targeted material, o-carboxymethyl-chitosan (CMCS)-poly(ethylene glycol) (PEG)-aspargine-glycine-arginine (NGR) (CMCS-PEG-NGR[CPN]), was used to modify PDR to obtain multifunctional CPN-PDR. CPN-PDRs were demonstrated to be multifunctional, being able to co-deliver Dox and siRNA into cells, induce pH-responsive disassembly and realize endosomal escape of gene and drug. Thus, in the present study, the cytotoxicity of the materials and the delivery system are further evaluated. Then, the transfection efficiency of siRNA is monitored by semiquantitative real-time polymerase chain reaction (real-time PCR) and Western blot techniques for mRNA level and protein level, respectively. Finally, the safety of the delivery system is evaluated by hemolysis test in vitro and histological assessment.", " Materials Dox was purchased from Dalian Meilun Biology Technology Co. Ltd (Dalian, China). Targeting human VEGF siRNA (sense: 5′-ACAUCACCAUGCAGAUUAUdTdT-3′, anti-sense: 5′-dTdTUGUAGUGGUACGUCUAAUA-3′) was obtained from Guangzhou Ribobio Co., Ltd (Guangzhou, China). CGA oligodeoxynucleotides (sequence: 5′-CGACGACGACGACGACGACGA-3′; complementary sequence: 5′-TCGTCGTCGTCGTCGTCGTCG-3′) were purchased from BGI Co. (Shenzhen, China). Fetal bovine serum (FBS) was obtained from Sijiqing Co., Ltd, (Zhejiang, China) and 3-[4,5-dimethyl-2-thiazolyl]-2,5-diphenyl-2H-tetrazolium bromide (MTT) was from Sigma-Aldrich (St Louis, MO, USA). Trizol RNA extraction was from Thermo Fisher Scientific (Waltham, MA, USA). SYBR®Green and ReverTra Ace qPCR RT Kit were purchased from Toyobo (Osaka, Japan). 
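As a small aside, the oligonucleotide sequences listed above can be sanity-checked programmatically. The sketch below computes GC content and verifies that the listed complementary strand of the CGA oligodeoxynucleotide is its reverse complement; the sequences are copied from the text, and everything else is illustrative.

```python
# Quick sanity checks on the oligonucleotide sequences listed above.

def gc_content(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

def reverse_complement_dna(seq: str) -> str:
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

cga_odn       = "CGACGACGACGACGACGACGA"   # (CGA) x 7, used to load Dox
cga_odn_compl = "TCGTCGTCGTCGTCGTCGTCG"   # complementary strand as listed
sirna_sense   = "ACAUCACCAUGCAGAUUAU"     # VEGF siRNA sense strand (dTdT overhang omitted)

print(f"CGA-ODN length: {len(cga_odn)} nt, GC content: {gc_content(cga_odn):.1f}%")
print("Listed complement equals the reverse complement:",
      reverse_complement_dna(cga_odn) == cga_odn_compl)
print(f"siRNA sense GC content: {gc_content(sirna_sense):.1f}%")
```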
All solutions were made up in Millipore ultrapure water (EMD Millipore, Billerica, MA, USA) and sterilized for cell culture. All other chemicals and reagents were of analytical grade and used as received. All the primers used in real-time PCR were synthesized and purified by BGI Co. with sequences: VEGF – forward: 5′-CTGGAGTGTGTGCCCACTGA-3′; VEGF – reverse: 5′-TCCTATGTGCTGGCCTTGGT-3′; actin – forward: GAGCTACGAGCTGCCTGACG; actin – reverse: CCTAGAAGCATTTGCGGTGG.\nDox was purchased from Dalian Meilun Biology Technology Co. Ltd (Dalian, China). Targeting human VEGF siRNA (sense: 5′-ACAUCACCAUGCAGAUUAUdTdT-3′, anti-sense: 5′-dTdTUGUAGUGGUACGUCUAAUA-3′) was obtained from Guangzhou Ribobio Co., Ltd (Guangzhou, China). CGA oligodeoxynucleotides (sequence: 5′-CGACGACGACGACGACGACGA-3′; complementary sequence: 5′-TCGTCGTCGTCGTCGTCGTCG-3′) were purchased from BGI Co. (Shenzhen, China). Fetal bovine serum (FBS) was obtained from Sijiqing Co., Ltd, (Zhejiang, China) and 3-[4,5-dimethyl-2-thiazolyl]-2,5-diphenyl-2H-tetrazolium bromide (MTT) was from Sigma-Aldrich (St Louis, MO, USA). Trizol RNA extraction was from Thermo Fisher Scientific (Waltham, MA, USA). SYBR®Green and ReverTra Ace qPCR RT Kit were purchased from Toyobo (Osaka, Japan). All solutions were made up in Millipore ultrapure water (EMD Millipore, Billerica, MA, USA) and sterilized for cell culture. All other chemicals and reagents were of analytical grade and used as received. All the primers used in real-time PCR were synthesized and purified by BGI Co. with sequences: VEGF – forward: 5′-CTGGAGTGTGTGCCCACTGA-3′; VEGF – reverse: 5′-TCCTATGTGCTGGCCTTGGT-3′; actin – forward: GAGCTACGAGCTGCCTGACG; actin – reverse: CCTAGAAGCATTTGCGGTGG.\n Cells and animals The MCF-7 cells were purchased from the Chinese Academy of Sciences Cells Bank (Shanghai, China), and the cells were cultured in Roswell Park Memorial Institute (RPMI) 1640 medium supplemented with 10% fetal bovine serum (FBS), streptomycin at 100 µg/mL and penicillin at 100 U/mL. All cells were cultured in a 37°C incubator with 5% CO2.\nKunming mice (20±2 g) were obtained from Experimental Animal Center of Shandong University. Animal experiments were carried out according to the requirements of the Animal Management Rules of the Ministry of Health (China) and approved by the Laboratory Animal Ethics Review Committee of Qilu Medical College of Shandong University.\nThe MCF-7 cells were purchased from the Chinese Academy of Sciences Cells Bank (Shanghai, China), and the cells were cultured in Roswell Park Memorial Institute (RPMI) 1640 medium supplemented with 10% fetal bovine serum (FBS), streptomycin at 100 µg/mL and penicillin at 100 U/mL. All cells were cultured in a 37°C incubator with 5% CO2.\nKunming mice (20±2 g) were obtained from Experimental Animal Center of Shandong University. Animal experiments were carried out according to the requirements of the Animal Management Rules of the Ministry of Health (China) and approved by the Laboratory Animal Ethics Review Committee of Qilu Medical College of Shandong University.\n Preparation of nanoparticles PDR and CPN-PDR were prepared by the method reported in our previous study.25 Briefly, CGA-ODNs-Dox was first obtained by incubating Dox with CGA-ODNs at room temperature. Then, PEI, siRNA, ODNs-Dox and CPN were dissolved and diluted to the corresponding concentrations with deionized water. After that, siRNA and CGA-ODNs-Dox were mixed by vortexing for several seconds to obtain the mixture. 
The mixture was added dropwise into PEI solution and mixed by vortexing and then incubated for 30 min at room temperature to form PDR. CPN-PDR was obtained by adding PDR suspension into the CPN solution under vortexing, followed by 30-min incubation at room temperature. When preparing the blank nanoparticles, the DOX and siRNA were not involved in above method. The construction of different kinds of nanoparticles is shown in Figure 1.\nThe loading content of DOX and siRNA was calculated using the following formula:34\nLoading content(%)=Weight of loading drugsWeight of nanoparticles×100%(1)\nPDR and CPN-PDR were prepared by the method reported in our previous study.25 Briefly, CGA-ODNs-Dox was first obtained by incubating Dox with CGA-ODNs at room temperature. Then, PEI, siRNA, ODNs-Dox and CPN were dissolved and diluted to the corresponding concentrations with deionized water. After that, siRNA and CGA-ODNs-Dox were mixed by vortexing for several seconds to obtain the mixture. The mixture was added dropwise into PEI solution and mixed by vortexing and then incubated for 30 min at room temperature to form PDR. CPN-PDR was obtained by adding PDR suspension into the CPN solution under vortexing, followed by 30-min incubation at room temperature. When preparing the blank nanoparticles, the DOX and siRNA were not involved in above method. The construction of different kinds of nanoparticles is shown in Figure 1.\nThe loading content of DOX and siRNA was calculated using the following formula:34\nLoading content(%)=Weight of loading drugsWeight of nanoparticles×100%(1)\n In vitro cytotoxicity of materials MTT assays were carried out on MCF-7 cells to evaluate the in vitro cytotoxicity of CGA-ODNs, PEI and CPN, respectively.35 MCF-7 cells were seeded with density at 7,000 cells/well in the 96-well plates and allowed to adhere overnight. CGA-ODNs, PEI and CPN solutions, which were corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. At a determined time point, 20 µL MTT solutions (5 mg/mL) were added. After 4 h incubation, the plates were centrifuged (3,000 rpm, 10 min) and the supernatant was removed. Then, 200 µL DMSO solutions were added to dissolve the formazan crystals formed by the living cells. The cell viability was calculated according to the following formula (Equation 2) after recording the absorbance at 570 nm by a microplate reader (Model 680; Bio-Rad Laboratories Inc., Hercules, CA, USA). All the assays were repeated three times.\nCell viability=Abs(sample)−Abs(blank)Abs(control)−Abs(blank)×100%(2)\nMTT assays were carried out on MCF-7 cells to evaluate the in vitro cytotoxicity of CGA-ODNs, PEI and CPN, respectively.35 MCF-7 cells were seeded with density at 7,000 cells/well in the 96-well plates and allowed to adhere overnight. CGA-ODNs, PEI and CPN solutions, which were corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. At a determined time point, 20 µL MTT solutions (5 mg/mL) were added. After 4 h incubation, the plates were centrifuged (3,000 rpm, 10 min) and the supernatant was removed. Then, 200 µL DMSO solutions were added to dissolve the formazan crystals formed by the living cells. 
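The loading-content relationship used above (Equation 1) reads loading content (%) = weight of loaded drug / weight of nanoparticles × 100%. It can be sketched as a short helper; the masses in the example are hypothetical rather than the actual formulation weights, and the ~4.93% figure quoted later in the Results is used only as a plausible output.

```python
def loading_content(drug_weight_ug: float, nanoparticle_weight_ug: float) -> float:
    """Equation 1: loading content (%) = weight of loaded drug / weight of nanoparticles * 100."""
    return 100.0 * drug_weight_ug / nanoparticle_weight_ug

# Hypothetical masses for illustration only; the paper reports ~4.93% for both
# Dox and siRNA in the optimal CPN-PDR formulation.
print(f"loading content: {loading_content(drug_weight_ug=5.0, nanoparticle_weight_ug=101.5):.2f}%")
```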
The cell viability was calculated according to the following formula (Equation 2) after recording the absorbance at 570 nm by a microplate reader (Model 680; Bio-Rad Laboratories Inc., Hercules, CA, USA). All the assays were repeated three times.\nCell viability=Abs(sample)−Abs(blank)Abs(control)−Abs(blank)×100%(2)\n In vitro cytotoxicity of blank nanoparticles The cytotoxicity of non-DOX- and siRNA-loaded nano-particles (blank PDR[bPDR]) and CPN-coated blank PDR (bCPN-PDR) were also evaluated in MCF-7 cells. bPDR and bCPN-PDR solutions, corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration and cells incubated with fresh media were taken as control. After the indicated time periods, cell viability was evaluated according to the procedure described above.\nThe cytotoxicity of non-DOX- and siRNA-loaded nano-particles (blank PDR[bPDR]) and CPN-coated blank PDR (bCPN-PDR) were also evaluated in MCF-7 cells. bPDR and bCPN-PDR solutions, corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration and cells incubated with fresh media were taken as control. After the indicated time periods, cell viability was evaluated according to the procedure described above.\n siRNA transfection with nanoparticles MCF-7 cells were seeded into 24-well plates at densities of 8×104, 6×104 and 5×104 cells/well for 24, 48 and 72 h transfection, respectively. After culturing at 37°C overnight, MCF-7 cells were treated with siRNA specific for VEGF. The siRNA was delivered either by Lipofectamine 2000 (Thermo Fisher Scientific) according to the manufacturer’s instructions or by CPN-modified siRNA-loaded nanoparticles (CPN-PR) or by CPN-modified Dox- and siRNA-loaded nanoparticles (CPN-PDR). The final concentrations of Dox and siRNA were 1 µM and 55 nM, respectively. After siRNA transfection, cells were harvested for RNA extraction and Western blot PCR analysis or subjected to Western blot assay.\nMCF-7 cells were seeded into 24-well plates at densities of 8×104, 6×104 and 5×104 cells/well for 24, 48 and 72 h transfection, respectively. After culturing at 37°C overnight, MCF-7 cells were treated with siRNA specific for VEGF. The siRNA was delivered either by Lipofectamine 2000 (Thermo Fisher Scientific) according to the manufacturer’s instructions or by CPN-modified siRNA-loaded nanoparticles (CPN-PR) or by CPN-modified Dox- and siRNA-loaded nanoparticles (CPN-PDR). The final concentrations of Dox and siRNA were 1 µM and 55 nM, respectively. After siRNA transfection, cells were harvested for RNA extraction and Western blot PCR analysis or subjected to Western blot assay.\n Real-time PCR The expression level of VEGF mRNA was monitored by real-time PCR technique with the standard procedure. Total RNA was extracted using Trizol agent under the standard procedure (Thermo Fisher Scientific), and cDNA synthesis was carried out using Rever Tra Ace qPCR RT Kit (Toyobo) according to the manufacturer’s instructions. The final concentration of RNA was measured by Nano Drop 2000 (Thermo Fisher Scientific), and the final product cDNA was stored at −20°C until use.\nEach real-time PCR system contained 20 µL solutions including cDNA (2 µL), SYBR®Green mix (10 µL), primer (1.6 µL) and double distilled water (6.4 µL). 
The reaction system was placed into PCR instrument (Roche LightCycler™; Hoffman-La Roche Ltd, Basel, Switzerland) for analysis. β-Actin served as a reference gene. Three wells were set for each group. The expression level of VEGF mRNA was normalized to the β-actin expression level and calculated by recording the Ct value at the end of the reaction.\nThe expression level of VEGF mRNA was monitored by real-time PCR technique with the standard procedure. Total RNA was extracted using Trizol agent under the standard procedure (Thermo Fisher Scientific), and cDNA synthesis was carried out using Rever Tra Ace qPCR RT Kit (Toyobo) according to the manufacturer’s instructions. The final concentration of RNA was measured by Nano Drop 2000 (Thermo Fisher Scientific), and the final product cDNA was stored at −20°C until use.\nEach real-time PCR system contained 20 µL solutions including cDNA (2 µL), SYBR®Green mix (10 µL), primer (1.6 µL) and double distilled water (6.4 µL). The reaction system was placed into PCR instrument (Roche LightCycler™; Hoffman-La Roche Ltd, Basel, Switzerland) for analysis. β-Actin served as a reference gene. Three wells were set for each group. The expression level of VEGF mRNA was normalized to the β-actin expression level and calculated by recording the Ct value at the end of the reaction.\n Western blot assay A Western blot assay was employed to investigate the expression level of total VEGF protein after transfection. At a determined time, cells were trypsinized and pelleted by centrifugation. The protein collecting process was carried out based on the standard procedure of Total Protein Extraction Kit (BestBio, Shanghai, China). The total protein was stored at −80°C until use. Protein was quantified by BCA protein assay kit (Beyotime Biotechnology, Shanghai, China) based on the standard carve calculation (A=0.006 C+0.0919, r=0.9987). After quantification, all the samples were diluted to 2 µg/µL with sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) electrophoresis, denatured in boiling water for 5 min and stored at −80°C until use. Equal amounts of protein (50 µg) were subjected to 12% SDS-PAGE and transferred to nitrocellulose membranes using standard procedures. The membrane was blocked in 2% skimmed milk in phosphate buffered saline for 2 h and washed by Tris-buffered saline and Tween-20 (TBST) solution three times. Then the membranes were probed with primary antibodies against VEGF and glyceraldehyde 3-phosphate dehydrogenase (GAPDN) overnight at 4°C. After 3×10 min washing in TBST on shaker, membranes were incubated with horseradish peroxidase (HRP)-conjugated anti-rabbit IgG for 1 h at room temperature. With another 3×10 min washes in TBST, the membranes were developed with electrochemiluminescence (ECL) using a gel-imaging system and analyzed using Image Analysis software.\nA Western blot assay was employed to investigate the expression level of total VEGF protein after transfection. At a determined time, cells were trypsinized and pelleted by centrifugation. The protein collecting process was carried out based on the standard procedure of Total Protein Extraction Kit (BestBio, Shanghai, China). The total protein was stored at −80°C until use. Protein was quantified by BCA protein assay kit (Beyotime Biotechnology, Shanghai, China) based on the standard carve calculation (A=0.006 C+0.0919, r=0.9987). 
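The protein quantification step just described relies on the BCA standard curve A = 0.006·C + 0.0919 (r = 0.9987); inverting it gives the lysate concentration from a measured absorbance. The sketch below assumes a hypothetical absorbance reading and leaves the concentration in whatever units the standard curve was constructed in, since these are not stated here.

```python
# Invert the stated BCA standard curve A = 0.006*C + 0.0919 to estimate protein
# concentration, then compute the dilution needed for a target working concentration.
SLOPE, INTERCEPT = 0.006, 0.0919

def protein_concentration(absorbance: float) -> float:
    return (absorbance - INTERCEPT) / SLOPE

a_sample = 0.52                                  # hypothetical lysate absorbance
c_sample = protein_concentration(a_sample)
print(f"estimated concentration: {c_sample:.1f} (standard-curve units)")

target = 2.0                                     # mirrors the "diluted to 2 ug/uL" step
print(f"dilution factor: {c_sample / target:.1f}x" if c_sample > target
      else "already below target concentration")
```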
After quantification, all the samples were diluted to 2 µg/µL with sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) electrophoresis, denatured in boiling water for 5 min and stored at −80°C until use. Equal amounts of protein (50 µg) were subjected to 12% SDS-PAGE and transferred to nitrocellulose membranes using standard procedures. The membrane was blocked in 2% skimmed milk in phosphate buffered saline for 2 h and washed by Tris-buffered saline and Tween-20 (TBST) solution three times. Then the membranes were probed with primary antibodies against VEGF and glyceraldehyde 3-phosphate dehydrogenase (GAPDN) overnight at 4°C. After 3×10 min washing in TBST on shaker, membranes were incubated with horseradish peroxidase (HRP)-conjugated anti-rabbit IgG for 1 h at room temperature. With another 3×10 min washes in TBST, the membranes were developed with electrochemiluminescence (ECL) using a gel-imaging system and analyzed using Image Analysis software.\n Cell proliferation assay The anti-proliferation activity of Dox, CPN-PD and CPN-PDR against MCF-7 cells was tested via MTT method. MCF-7 cells were seeded into 96-well plates at a density of 7,000 cells/well. After overnight incubation, the cells were treated with various concentrations (0.002, 0.01, 0.05, 0.25 and 0.5 µM) of Dox, CPN-PD and CPN-PDR solutions and incubated for another 48 h. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. After indicated time periods, cell viability was evaluated according to the procedure described in the “In vitro cytotoxicity of materials” section.\nThe anti-proliferation activity of Dox, CPN-PD and CPN-PDR against MCF-7 cells was tested via MTT method. MCF-7 cells were seeded into 96-well plates at a density of 7,000 cells/well. After overnight incubation, the cells were treated with various concentrations (0.002, 0.01, 0.05, 0.25 and 0.5 µM) of Dox, CPN-PD and CPN-PDR solutions and incubated for another 48 h. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. After indicated time periods, cell viability was evaluated according to the procedure described in the “In vitro cytotoxicity of materials” section.\n Hemolysis test Hemolysis test was carried out to investigate the safety of CPN-PDR for intravenous injection. A 2% red blood cell suspension was collected by centrifugation and resuspension. The test was performed under the design described in Table 1. After incubated at 37°C for 3 h, all the groups were centrifuged at 3,000 rpm for 15 min and visualized by the naked eye. To measure the hemolysis ratios of each group, UV-vis spectrophotometry was carried out to record the absorbance of supernatant in each group. The quantitation of hemolysis ratios was calculated according to the following formula:\nHR(%)=Abs(sample)−Abs(−)Abs(+)−Abs(−)×100%(3)where Abs (sample), Abs (−) and Abs (+) refer to the absorbances of the samples, negative control and positive control, respectively.\nHemolysis test was carried out to investigate the safety of CPN-PDR for intravenous injection. A 2% red blood cell suspension was collected by centrifugation and resuspension. The test was performed under the design described in Table 1. After incubated at 37°C for 3 h, all the groups were centrifuged at 3,000 rpm for 15 min and visualized by the naked eye. To measure the hemolysis ratios of each group, UV-vis spectrophotometry was carried out to record the absorbance of supernatant in each group. 
The quantitation of hemolysis ratios was calculated according to the following formula:\nHR(%)=Abs(sample)−Abs(−)Abs(+)−Abs(−)×100%(3)where Abs (sample), Abs (−) and Abs (+) refer to the absorbances of the samples, negative control and positive control, respectively.\n Histological assessment In order to evaluate the compatibility and tissue toxicity of CPN-PDR in vivo, a histological observation was performed.36 Five female Kunming mice were injected with CPN-PDR at a dose of 232 µg/kg through the tail vein. In the meantime, normal saline (NS) was taken as a control reagent. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After washing twice with PBS, all the organs were fixed in 4% formaldehyde, dehydrated in gradient alcohol, placed in xylene, embedded in paraffin and made into sections, followed by hematoxylin-eosin (HE) staining for histological examination with microscope.\nIn order to evaluate the compatibility and tissue toxicity of CPN-PDR in vivo, a histological observation was performed.36 Five female Kunming mice were injected with CPN-PDR at a dose of 232 µg/kg through the tail vein. In the meantime, normal saline (NS) was taken as a control reagent. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After washing twice with PBS, all the organs were fixed in 4% formaldehyde, dehydrated in gradient alcohol, placed in xylene, embedded in paraffin and made into sections, followed by hematoxylin-eosin (HE) staining for histological examination with microscope.\n Statistical analysis All studies were repeated a minimum of three times and measured at least in triplicate. The results are reported as the means ± SD. Statistical significance was analyzed using Student’s t-test. Differences between experimental groups were considered significant if the P-value was <0.05.\nAll studies were repeated a minimum of three times and measured at least in triplicate. The results are reported as the means ± SD. Statistical significance was analyzed using Student’s t-test. Differences between experimental groups were considered significant if the P-value was <0.05.", "Dox was purchased from Dalian Meilun Biology Technology Co. Ltd (Dalian, China). Targeting human VEGF siRNA (sense: 5′-ACAUCACCAUGCAGAUUAUdTdT-3′, anti-sense: 5′-dTdTUGUAGUGGUACGUCUAAUA-3′) was obtained from Guangzhou Ribobio Co., Ltd (Guangzhou, China). CGA oligodeoxynucleotides (sequence: 5′-CGACGACGACGACGACGACGA-3′; complementary sequence: 5′-TCGTCGTCGTCGTCGTCGTCG-3′) were purchased from BGI Co. (Shenzhen, China). Fetal bovine serum (FBS) was obtained from Sijiqing Co., Ltd, (Zhejiang, China) and 3-[4,5-dimethyl-2-thiazolyl]-2,5-diphenyl-2H-tetrazolium bromide (MTT) was from Sigma-Aldrich (St Louis, MO, USA). Trizol RNA extraction was from Thermo Fisher Scientific (Waltham, MA, USA). SYBR®Green and ReverTra Ace qPCR RT Kit were purchased from Toyobo (Osaka, Japan). All solutions were made up in Millipore ultrapure water (EMD Millipore, Billerica, MA, USA) and sterilized for cell culture. All other chemicals and reagents were of analytical grade and used as received. All the primers used in real-time PCR were synthesized and purified by BGI Co. 
with sequences: VEGF – forward: 5′-CTGGAGTGTGTGCCCACTGA-3′; VEGF – reverse: 5′-TCCTATGTGCTGGCCTTGGT-3′; actin – forward: GAGCTACGAGCTGCCTGACG; actin – reverse: CCTAGAAGCATTTGCGGTGG.", "The MCF-7 cells were purchased from the Chinese Academy of Sciences Cells Bank (Shanghai, China), and the cells were cultured in Roswell Park Memorial Institute (RPMI) 1640 medium supplemented with 10% fetal bovine serum (FBS), streptomycin at 100 µg/mL and penicillin at 100 U/mL. All cells were cultured in a 37°C incubator with 5% CO2.\nKunming mice (20±2 g) were obtained from Experimental Animal Center of Shandong University. Animal experiments were carried out according to the requirements of the Animal Management Rules of the Ministry of Health (China) and approved by the Laboratory Animal Ethics Review Committee of Qilu Medical College of Shandong University.", "PDR and CPN-PDR were prepared by the method reported in our previous study.25 Briefly, CGA-ODNs-Dox was first obtained by incubating Dox with CGA-ODNs at room temperature. Then, PEI, siRNA, ODNs-Dox and CPN were dissolved and diluted to the corresponding concentrations with deionized water. After that, siRNA and CGA-ODNs-Dox were mixed by vortexing for several seconds to obtain the mixture. The mixture was added dropwise into PEI solution and mixed by vortexing and then incubated for 30 min at room temperature to form PDR. CPN-PDR was obtained by adding PDR suspension into the CPN solution under vortexing, followed by 30-min incubation at room temperature. When preparing the blank nanoparticles, the DOX and siRNA were not involved in above method. The construction of different kinds of nanoparticles is shown in Figure 1.\nThe loading content of DOX and siRNA was calculated using the following formula:34\nLoading content(%)=Weight of loading drugsWeight of nanoparticles×100%(1)", "MTT assays were carried out on MCF-7 cells to evaluate the in vitro cytotoxicity of CGA-ODNs, PEI and CPN, respectively.35 MCF-7 cells were seeded with density at 7,000 cells/well in the 96-well plates and allowed to adhere overnight. CGA-ODNs, PEI and CPN solutions, which were corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. At a determined time point, 20 µL MTT solutions (5 mg/mL) were added. After 4 h incubation, the plates were centrifuged (3,000 rpm, 10 min) and the supernatant was removed. Then, 200 µL DMSO solutions were added to dissolve the formazan crystals formed by the living cells. The cell viability was calculated according to the following formula (Equation 2) after recording the absorbance at 570 nm by a microplate reader (Model 680; Bio-Rad Laboratories Inc., Hercules, CA, USA). All the assays were repeated three times.\nCell viability=Abs(sample)−Abs(blank)Abs(control)−Abs(blank)×100%(2)", "The cytotoxicity of non-DOX- and siRNA-loaded nano-particles (blank PDR[bPDR]) and CPN-coated blank PDR (bCPN-PDR) were also evaluated in MCF-7 cells. bPDR and bCPN-PDR solutions, corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration and cells incubated with fresh media were taken as control. 
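The viability evaluation referred to here follows Equation 2 above, viability (%) = (Abs_sample − Abs_blank)/(Abs_control − Abs_blank) × 100%. A minimal sketch with hypothetical 570 nm readings for the five replicate wells described in the protocol:

```python
import numpy as np

def cell_viability(abs_sample, abs_blank, abs_control):
    """Equation 2: viability (%) = (Abs_sample - Abs_blank) / (Abs_control - Abs_blank) * 100."""
    return 100.0 * (np.asarray(abs_sample) - abs_blank) / (abs_control - abs_blank)

# Hypothetical A570 readings: five replicate wells per treatment, as in the protocol.
abs_blank   = 0.06                               # medium + MTT, no cells
abs_control = 1.15                               # untreated cells
abs_treated = [0.92, 0.88, 0.95, 0.90, 0.89]     # e.g. one nanoparticle concentration

viab = cell_viability(abs_treated, abs_blank, abs_control)
print(f"viability: {viab.mean():.1f} +/- {viab.std(ddof=1):.1f} %")
```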
After the indicated time periods, cell viability was evaluated according to the procedure described above.", "MCF-7 cells were seeded into 24-well plates at densities of 8×104, 6×104 and 5×104 cells/well for 24, 48 and 72 h transfection, respectively. After culturing at 37°C overnight, MCF-7 cells were treated with siRNA specific for VEGF. The siRNA was delivered either by Lipofectamine 2000 (Thermo Fisher Scientific) according to the manufacturer’s instructions or by CPN-modified siRNA-loaded nanoparticles (CPN-PR) or by CPN-modified Dox- and siRNA-loaded nanoparticles (CPN-PDR). The final concentrations of Dox and siRNA were 1 µM and 55 nM, respectively. After siRNA transfection, cells were harvested for RNA extraction and Western blot PCR analysis or subjected to Western blot assay.", "The expression level of VEGF mRNA was monitored by real-time PCR technique with the standard procedure. Total RNA was extracted using Trizol agent under the standard procedure (Thermo Fisher Scientific), and cDNA synthesis was carried out using Rever Tra Ace qPCR RT Kit (Toyobo) according to the manufacturer’s instructions. The final concentration of RNA was measured by Nano Drop 2000 (Thermo Fisher Scientific), and the final product cDNA was stored at −20°C until use.\nEach real-time PCR system contained 20 µL solutions including cDNA (2 µL), SYBR®Green mix (10 µL), primer (1.6 µL) and double distilled water (6.4 µL). The reaction system was placed into PCR instrument (Roche LightCycler™; Hoffman-La Roche Ltd, Basel, Switzerland) for analysis. β-Actin served as a reference gene. Three wells were set for each group. The expression level of VEGF mRNA was normalized to the β-actin expression level and calculated by recording the Ct value at the end of the reaction.", "A Western blot assay was employed to investigate the expression level of total VEGF protein after transfection. At a determined time, cells were trypsinized and pelleted by centrifugation. The protein collecting process was carried out based on the standard procedure of Total Protein Extraction Kit (BestBio, Shanghai, China). The total protein was stored at −80°C until use. Protein was quantified by BCA protein assay kit (Beyotime Biotechnology, Shanghai, China) based on the standard carve calculation (A=0.006 C+0.0919, r=0.9987). After quantification, all the samples were diluted to 2 µg/µL with sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) electrophoresis, denatured in boiling water for 5 min and stored at −80°C until use. Equal amounts of protein (50 µg) were subjected to 12% SDS-PAGE and transferred to nitrocellulose membranes using standard procedures. The membrane was blocked in 2% skimmed milk in phosphate buffered saline for 2 h and washed by Tris-buffered saline and Tween-20 (TBST) solution three times. Then the membranes were probed with primary antibodies against VEGF and glyceraldehyde 3-phosphate dehydrogenase (GAPDN) overnight at 4°C. After 3×10 min washing in TBST on shaker, membranes were incubated with horseradish peroxidase (HRP)-conjugated anti-rabbit IgG for 1 h at room temperature. With another 3×10 min washes in TBST, the membranes were developed with electrochemiluminescence (ECL) using a gel-imaging system and analyzed using Image Analysis software.", "The anti-proliferation activity of Dox, CPN-PD and CPN-PDR against MCF-7 cells was tested via MTT method. MCF-7 cells were seeded into 96-well plates at a density of 7,000 cells/well. 
After overnight incubation, the cells were treated with various concentrations (0.002, 0.01, 0.05, 0.25 and 0.5 µM) of Dox, CPN-PD and CPN-PDR solutions and incubated for another 48 h. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. After indicated time periods, cell viability was evaluated according to the procedure described in the “In vitro cytotoxicity of materials” section.", "Hemolysis test was carried out to investigate the safety of CPN-PDR for intravenous injection. A 2% red blood cell suspension was collected by centrifugation and resuspension. The test was performed under the design described in Table 1. After incubated at 37°C for 3 h, all the groups were centrifuged at 3,000 rpm for 15 min and visualized by the naked eye. To measure the hemolysis ratios of each group, UV-vis spectrophotometry was carried out to record the absorbance of supernatant in each group. The quantitation of hemolysis ratios was calculated according to the following formula:\nHR(%)=Abs(sample)−Abs(−)Abs(+)−Abs(−)×100%(3)where Abs (sample), Abs (−) and Abs (+) refer to the absorbances of the samples, negative control and positive control, respectively.", "In order to evaluate the compatibility and tissue toxicity of CPN-PDR in vivo, a histological observation was performed.36 Five female Kunming mice were injected with CPN-PDR at a dose of 232 µg/kg through the tail vein. In the meantime, normal saline (NS) was taken as a control reagent. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After washing twice with PBS, all the organs were fixed in 4% formaldehyde, dehydrated in gradient alcohol, placed in xylene, embedded in paraffin and made into sections, followed by hematoxylin-eosin (HE) staining for histological examination with microscope.", "All studies were repeated a minimum of three times and measured at least in triplicate. The results are reported as the means ± SD. Statistical significance was analyzed using Student’s t-test. Differences between experimental groups were considered significant if the P-value was <0.05.", " Characteristics of nanoparticles In the optimal formulation of CPN-PDR, the weight ratio of DOX to siRNA was 1:1; therefore, the siRNA loading content was equal to the DOX loading content. Based on the facts that the molar ratio of CGA-ODNs to Dox was 1:5, the weight ratio of CPN/PEI/ODNs and siRNA was 4:1:1, and consequently, siRNA and Dox loading efficiency in CPN-PDR was calculated to be 4.93%.\nIn the optimal formulation of CPN-PDR, the weight ratio of DOX to siRNA was 1:1; therefore, the siRNA loading content was equal to the DOX loading content. Based on the facts that the molar ratio of CGA-ODNs to Dox was 1:5, the weight ratio of CPN/PEI/ODNs and siRNA was 4:1:1, and consequently, siRNA and Dox loading efficiency in CPN-PDR was calculated to be 4.93%.\n Cytotoxicity of materials In the co-delivery platform, CGA-ODNs were selected as carriers of DOX, then the mixture of CGA-ODNs-Dox and siRNA (abbreviated as DR) was electrostatically interacted with PEI to obtain Dox and siRNA co-loaded nanoparticles, PEI/DR (PDR). The copolymer CPN was employed as a multifunctional material.25 In order to evaluate the safety of the carrier, MTT assays were carried out to evaluate the cytotoxicity of CGA-ODNs and CPN. In the meantime, PEI, a toxic cationic polymer, was also tested in this experiment. 
As shown in Figure 2A, the viability of MCF-7 cells treated with CGA-ODNs and CPN at any concentration was observed to be stable and >80% within 48 h. After 72 h incubation, the viability of MCF-7 cells treated with CPN was still >80%, but after treatment with a higher concentration of CGA-ODNs (2.5 and 5 µM), the cell viability was slightly decreased to 60%. This indicated that there is no obvious cytotoxicity of CGA-ODNs and CPN. However, the cell viability of MCF-7 cells treated with PEI was obviously dependent on the concentrations of PEI. With an increase in concentration, the MCF-7 cells treated with PEI showed lower cell viability. Moreover, this concentration-dependent trend became more obvious when incubation time increased from 24 h to 72 h.\nIn the co-delivery platform, CGA-ODNs were selected as carriers of DOX, then the mixture of CGA-ODNs-Dox and siRNA (abbreviated as DR) was electrostatically interacted with PEI to obtain Dox and siRNA co-loaded nanoparticles, PEI/DR (PDR). The copolymer CPN was employed as a multifunctional material.25 In order to evaluate the safety of the carrier, MTT assays were carried out to evaluate the cytotoxicity of CGA-ODNs and CPN. In the meantime, PEI, a toxic cationic polymer, was also tested in this experiment. As shown in Figure 2A, the viability of MCF-7 cells treated with CGA-ODNs and CPN at any concentration was observed to be stable and >80% within 48 h. After 72 h incubation, the viability of MCF-7 cells treated with CPN was still >80%, but after treatment with a higher concentration of CGA-ODNs (2.5 and 5 µM), the cell viability was slightly decreased to 60%. This indicated that there is no obvious cytotoxicity of CGA-ODNs and CPN. However, the cell viability of MCF-7 cells treated with PEI was obviously dependent on the concentrations of PEI. With an increase in concentration, the MCF-7 cells treated with PEI showed lower cell viability. Moreover, this concentration-dependent trend became more obvious when incubation time increased from 24 h to 72 h.\n Cytotoxicity of blank nanoparticles Cell viability of MCF-7 cells was also monitored to investigate the cytotoxicity of non-RNA- and non-drug-loaded carriers including bPDR and bCPN-PDR. As shown in Figure 2B, after treatment with bCPN-PDR, the cell viability was stable around 95% and shown to be concentration independent at 24 h. Meanwhile, bPDR-treated cells showed lower cell viability with time. With increased concentration, bPDR showed higher cytotoxicity, inducing 80% cell death in 72 h. Compared with the bPDR group, the cell viabilities of the bCPN-PDR group were significantly higher (P<0.05). This phenomenon could be explained by the higher positive charge of bPDR due to the positive ingredient of PEI, while the cell cytotoxicity could be significantly decreased by covering with the nontoxic negatively charged CPN (>80% cell viability in Figure 2A), which could induce the charge reversal of bPDR, implying that the negatively charged CPN coating could decrease the higher toxicity of bPDR to cells.\nCell viability of MCF-7 cells was also monitored to investigate the cytotoxicity of non-RNA- and non-drug-loaded carriers including bPDR and bCPN-PDR. As shown in Figure 2B, after treatment with bCPN-PDR, the cell viability was stable around 95% and shown to be concentration independent at 24 h. Meanwhile, bPDR-treated cells showed lower cell viability with time. With increased concentration, bPDR showed higher cytotoxicity, inducing 80% cell death in 72 h. 
Compared with the bPDR group, the cell viabilities of the bCPN-PDR group were significantly higher (P<0.05). This phenomenon could be explained by the higher positive charge of bPDR due to the positive ingredient of PEI, while the cell cytotoxicity could be significantly decreased by covering with the nontoxic negatively charged CPN (>80% cell viability in Figure 2A), which could induce the charge reversal of bPDR, implying that the negatively charged CPN coating could decrease the higher toxicity of bPDR to cells.\n Expression level of VEGF mRNA The transfection efficiency of CPN-PDR compared to Lipofectamine 2000 was demonstrated by delivering VEGF-siRNA into MCF-7 cells. The expression level of VEGF mRNA was measured using real-time PCR, and the results are shown in Figure 3. The expression level of VEGF mRNA was normalized to the β-actin expression and calculated based on the semiquantitative method. Untreated cells were selected as control, and the expression level was set as 100%. For lipo/siRNA group, the silencing efficiencies were 61.98%±6.96%, 31.93%±6.43% and 37.02%±3.17% at 24, 48 and 72 h, respectively. The decreased expression of VEGF mRNA reflected that gene silencing capability of lipo/siRNA. With incubation time extended from 24 to 48 h, the capability was enhanced (P<0.05). However, in 72 h, no further reduction was observed compared to that in 48 h, although the significantly downregulated expression could be observed (P<0.05 vs control, Dox and bCPN-PDR), implying that the silencing effect could reach the plateau period after 48 h and last for 72 h at least. The CPN-PR and CPN-PDR have shown similar behavior to lipo/siRNA. Compared with the control, a remarkable decrease of VEGF mRNA expression was observed (P<0.01) at 48 h. There was no significant difference in expression levels between CPN-PR and CPN-PDR (P>0.05) at any time point, implying their comparable gene delivery and transfection capability. More importantly, this capability was equivalent to the commercial Lipofectamine 2000.\nThe transfection efficiency of CPN-PDR compared to Lipofectamine 2000 was demonstrated by delivering VEGF-siRNA into MCF-7 cells. The expression level of VEGF mRNA was measured using real-time PCR, and the results are shown in Figure 3. The expression level of VEGF mRNA was normalized to the β-actin expression and calculated based on the semiquantitative method. Untreated cells were selected as control, and the expression level was set as 100%. For lipo/siRNA group, the silencing efficiencies were 61.98%±6.96%, 31.93%±6.43% and 37.02%±3.17% at 24, 48 and 72 h, respectively. The decreased expression of VEGF mRNA reflected that gene silencing capability of lipo/siRNA. With incubation time extended from 24 to 48 h, the capability was enhanced (P<0.05). However, in 72 h, no further reduction was observed compared to that in 48 h, although the significantly downregulated expression could be observed (P<0.05 vs control, Dox and bCPN-PDR), implying that the silencing effect could reach the plateau period after 48 h and last for 72 h at least. The CPN-PR and CPN-PDR have shown similar behavior to lipo/siRNA. Compared with the control, a remarkable decrease of VEGF mRNA expression was observed (P<0.01) at 48 h. There was no significant difference in expression levels between CPN-PR and CPN-PDR (P>0.05) at any time point, implying their comparable gene delivery and transfection capability. 
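The silencing efficiencies quoted in this paragraph are derived from Ct values normalized to β-actin and expressed relative to the untreated control. The exact algorithm is not specified beyond "semiquantitative", so the sketch below assumes a standard 2^−ΔΔCt-style calculation with hypothetical Ct values.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-DDCt relative quantification: the target gene is normalized to the
    reference gene, then expressed relative to the untreated control (100%)."""
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 100.0 * 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values (VEGF and beta-actin) for an untreated control well and
# a siRNA-treated well at 48 h; only the normalization scheme comes from the text.
print(f"VEGF mRNA vs control: "
      f"{relative_expression(ct_target=26.9, ct_ref=17.2, ct_target_ctrl=25.2, ct_ref_ctrl=17.1):.1f}%")
```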
More importantly, this capability was equivalent to the commercial Lipofectamine 2000.\n Expression level of VEGF protein Based on the results of real-time PCR, the gene-silencing effect enhanced with time and reached a platform after 48 h. In this study, the Western blot technique was employed to observe VEGF protein expression at 48 h. The results are shown in Figure 4. GAPDH was selected as the inner control; all the bands were found to be equivalent in gray levels, and the amount of protein expression was positively correlated with the shade of gray in the imaging picture. From Figure 4, the gray levels of both control and bCPN-PDR, which were similar, were obvious. However, siRNA-loaded nanoparticles including lipo/siRNA, CPN-PR and CPN-PDR demonstrated a notably lower gray level. Furthermore, as marked with the red rectangle in Figure 4, expression of VEGF protein in CPN-PDR could hardly be identified. This suggested that co-delivery of anti-VEGF siRNA and Dox could downregulate the relevant VEGF protein level, which was in agreement with the results from semiquantitative real-time PCR. Taken together, these results indicate that nanoparticles loaded with anti-VEGF siRNA downregulated both the protein and the mRNA expression of VEGF.\nBased on the results of real-time PCR, the gene-silencing effect enhanced with time and reached a platform after 48 h. In this study, the Western blot technique was employed to observe VEGF protein expression at 48 h. The results are shown in Figure 4. GAPDH was selected as the inner control; all the bands were found to be equivalent in gray levels, and the amount of protein expression was positively correlated with the shade of gray in the imaging picture. From Figure 4, the gray levels of both control and bCPN-PDR, which were similar, were obvious. However, siRNA-loaded nanoparticles including lipo/siRNA, CPN-PR and CPN-PDR demonstrated a notably lower gray level. Furthermore, as marked with the red rectangle in Figure 4, expression of VEGF protein in CPN-PDR could hardly be identified. This suggested that co-delivery of anti-VEGF siRNA and Dox could downregulate the relevant VEGF protein level, which was in agreement with the results from semiquantitative real-time PCR. Taken together, these results indicate that nanoparticles loaded with anti-VEGF siRNA downregulated both the protein and the mRNA expression of VEGF.\n Inhibition of cell proliferation in vitro The ultimate goal of siRNA intracellular delivery is to restrain the proliferation of cancer cells. Different treatments against MCF-7 cells were performed to further investigate the inhibition effect. As shown in Figure 5A, an obvious concentration dependence in terms of DOX, CPN-PD and CPN-PDR could be observed after 48 h incubation. We have calculated the differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX at each dose. The results indicated that there was a significant difference between CPN-PDR vs DOX at each dose except the dose of 0.05, and there were significant differences between CPN-PDR vs CPN-PD at a dose of 0.002. It was suggested that the in vitro antitumor activity of CPN-PDR was equivalent to CPN-PD, and both of them showed a higher antitumor activity than free drug DOX, which was mainly attributed to the design of the multifunctional nano-vectors. We have also calculated the statistical differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX in the concentration range of Dox (0.002, 0.01, 0.05, 0.25 and 0.5 µM) using Student’s t-test. 
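The IC50 values reported in the following sentences are described as obtained with professional curve-fitting software, and group comparisons used Student's t-test. One hedged way to reproduce this kind of analysis is sketched below with a simplified two-parameter logistic fit in scipy; the dose axis matches the concentrations used in the assay, but the viability values and replicates are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import ttest_ind

def logistic(dose, ic50, slope):
    """Two-parameter dose-response curve constrained to 100% at zero dose."""
    return 100.0 / (1.0 + (dose / ic50) ** slope)

# Doses from the assay (uM of Dox) and hypothetical mean viabilities (%) for one treatment.
dose = np.array([0.002, 0.01, 0.05, 0.25, 0.5])
viability = np.array([88.0, 62.0, 38.0, 21.0, 15.0])

params, _ = curve_fit(logistic, dose, viability, p0=[0.03, 1.0], maxfev=10000)
print(f"fitted IC50 ~ {params[0]:.4f} uM")

# Student's t-test between two treatments at a single dose (hypothetical replicates).
cpn_pdr = [34.0, 31.0, 36.0, 33.0, 35.0]
dox     = [52.0, 49.0, 55.0, 50.0, 53.0]
t_stat, p_value = ttest_ind(cpn_pdr, dox)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```

A full four-parameter fit (with floating top and bottom plateaus) would be closer to what commercial packages do, but needs more data points than the five doses used here to be stable.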
After statistical calculation, the significant differences were verified to exist between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.01). After calculation with professional software, the half maximal inhibitory concentrations (IC50) are presented in Figure 5B. They are 0.0367±0.0088 µM for Dox, 0.0266±0.0027 µM for CPN-PD and 0.0126±0.0022 µM for CPN-PDR, respectively. The IC50 of CPN-PD was 2.913-fold higher than that of CPN-PDR, and the IC50 of DOX was 2.110-fold higher than that of CPN-PDR. After statistical calculation, significant differences were found between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.05). With the results from real-time PCR and Western blot, the increased cytotoxicity of CPN-PDR probably resulted from siRNA delivery and successful transfection. This was because the intracellular delivery of anti-VEGF siRNA could downregulate the expression of VEGF protein. The anti-VEGF therapy may sensitize cancer cells or kill cells block the blood supply of the tumor and improve drug penetration. Taken together, the anti-VEGF siRNA and Dox co-delivery platform could induce increased cytotoxicity against cancer cells.\nThe ultimate goal of siRNA intracellular delivery is to restrain the proliferation of cancer cells. Different treatments against MCF-7 cells were performed to further investigate the inhibition effect. As shown in Figure 5A, an obvious concentration dependence in terms of DOX, CPN-PD and CPN-PDR could be observed after 48 h incubation. We have calculated the differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX at each dose. The results indicated that there was a significant difference between CPN-PDR vs DOX at each dose except the dose of 0.05, and there were significant differences between CPN-PDR vs CPN-PD at a dose of 0.002. It was suggested that the in vitro antitumor activity of CPN-PDR was equivalent to CPN-PD, and both of them showed a higher antitumor activity than free drug DOX, which was mainly attributed to the design of the multifunctional nano-vectors. We have also calculated the statistical differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX in the concentration range of Dox (0.002, 0.01, 0.05, 0.25 and 0.5 µM) using Student’s t-test. After statistical calculation, the significant differences were verified to exist between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.01). After calculation with professional software, the half maximal inhibitory concentrations (IC50) are presented in Figure 5B. They are 0.0367±0.0088 µM for Dox, 0.0266±0.0027 µM for CPN-PD and 0.0126±0.0022 µM for CPN-PDR, respectively. The IC50 of CPN-PD was 2.913-fold higher than that of CPN-PDR, and the IC50 of DOX was 2.110-fold higher than that of CPN-PDR. After statistical calculation, significant differences were found between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.05). With the results from real-time PCR and Western blot, the increased cytotoxicity of CPN-PDR probably resulted from siRNA delivery and successful transfection. This was because the intracellular delivery of anti-VEGF siRNA could downregulate the expression of VEGF protein. The anti-VEGF therapy may sensitize cancer cells or kill cells block the blood supply of the tumor and improve drug penetration. Taken together, the anti-VEGF siRNA and Dox co-delivery platform could induce increased cytotoxicity against cancer cells.\n Hemolysis test To investigate the safety of intravenous injection of CPN-PDR, a hemolysis test was performed. 
As shown in Table 1, different volume ratios of CPN-PDR to 2% red blood cell suspension were selected. After incubation and centrifugation, tubes were observed by the naked eye. As shown in Figure 6, no red blood cell hemolysis was observed in tube 6, in which 100% PBS was added serving as a negative control. For tube 7, 100% double distilled water was added and red cell hemolysis could be clearly observed. For tubes 1–5, in which different percentages of CPN-PDR were added, no red blood cell hemolysis occurred. Hemolysis of red blood cells can generate hemoprotein, which could be detected using UV-vis spectrophotometry. After scanning the absorbance wavelength of hemoprotein, 577 nm was selected as the detecting wavelength. According to Equation 3, hemolysis ratios were calculated based on the recorded absorbance and shown in Table 2. All the hemolysis ratios of tubes 1–5 were <5%, implying that no hemolysis was induced by addition of CPN-PDR.\nTo investigate the safety of intravenous injection of CPN-PDR, a hemolysis test was performed. As shown in Table 1, different volume ratios of CPN-PDR to 2% red blood cell suspension were selected. After incubation and centrifugation, tubes were observed by the naked eye. As shown in Figure 6, no red blood cell hemolysis was observed in tube 6, in which 100% PBS was added serving as a negative control. For tube 7, 100% double distilled water was added and red cell hemolysis could be clearly observed. For tubes 1–5, in which different percentages of CPN-PDR were added, no red blood cell hemolysis occurred. Hemolysis of red blood cells can generate hemoprotein, which could be detected using UV-vis spectrophotometry. After scanning the absorbance wavelength of hemoprotein, 577 nm was selected as the detecting wavelength. According to Equation 3, hemolysis ratios were calculated based on the recorded absorbance and shown in Table 2. All the hemolysis ratios of tubes 1–5 were <5%, implying that no hemolysis was induced by addition of CPN-PDR.\n Tissue section In this study, a histological analysis of organs (heart, liver, spleen, lung and kidney) was performed to evaluate whether CPN-PDR could cause tissue damage, inflammation and lesions. Kunming mice were injected with CPN-PDR and normal saline (NS) by tail vein, respectively. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After the standard procedure of HE staining for histological examination, the organs were observed under a microscope. Mice without any treatment were chosen as a control. As shown in Figure 7, compared with the control group, there was no visible histologically difference between mice administrated with CPN-PDR group and that with NS group, implying the safety of CPN-PDR in vivo.\nIn this study, a histological analysis of organs (heart, liver, spleen, lung and kidney) was performed to evaluate whether CPN-PDR could cause tissue damage, inflammation and lesions. Kunming mice were injected with CPN-PDR and normal saline (NS) by tail vein, respectively. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After the standard procedure of HE staining for histological examination, the organs were observed under a microscope. Mice without any treatment were chosen as a control. 
As shown in Figure 7, compared with the control group, there was no visible histologically difference between mice administrated with CPN-PDR group and that with NS group, implying the safety of CPN-PDR in vivo.", "In the optimal formulation of CPN-PDR, the weight ratio of DOX to siRNA was 1:1; therefore, the siRNA loading content was equal to the DOX loading content. Based on the facts that the molar ratio of CGA-ODNs to Dox was 1:5, the weight ratio of CPN/PEI/ODNs and siRNA was 4:1:1, and consequently, siRNA and Dox loading efficiency in CPN-PDR was calculated to be 4.93%.", "In the co-delivery platform, CGA-ODNs were selected as carriers of DOX, then the mixture of CGA-ODNs-Dox and siRNA (abbreviated as DR) was electrostatically interacted with PEI to obtain Dox and siRNA co-loaded nanoparticles, PEI/DR (PDR). The copolymer CPN was employed as a multifunctional material.25 In order to evaluate the safety of the carrier, MTT assays were carried out to evaluate the cytotoxicity of CGA-ODNs and CPN. In the meantime, PEI, a toxic cationic polymer, was also tested in this experiment. As shown in Figure 2A, the viability of MCF-7 cells treated with CGA-ODNs and CPN at any concentration was observed to be stable and >80% within 48 h. After 72 h incubation, the viability of MCF-7 cells treated with CPN was still >80%, but after treatment with a higher concentration of CGA-ODNs (2.5 and 5 µM), the cell viability was slightly decreased to 60%. This indicated that there is no obvious cytotoxicity of CGA-ODNs and CPN. However, the cell viability of MCF-7 cells treated with PEI was obviously dependent on the concentrations of PEI. With an increase in concentration, the MCF-7 cells treated with PEI showed lower cell viability. Moreover, this concentration-dependent trend became more obvious when incubation time increased from 24 h to 72 h.", "Cell viability of MCF-7 cells was also monitored to investigate the cytotoxicity of non-RNA- and non-drug-loaded carriers including bPDR and bCPN-PDR. As shown in Figure 2B, after treatment with bCPN-PDR, the cell viability was stable around 95% and shown to be concentration independent at 24 h. Meanwhile, bPDR-treated cells showed lower cell viability with time. With increased concentration, bPDR showed higher cytotoxicity, inducing 80% cell death in 72 h. Compared with the bPDR group, the cell viabilities of the bCPN-PDR group were significantly higher (P<0.05). This phenomenon could be explained by the higher positive charge of bPDR due to the positive ingredient of PEI, while the cell cytotoxicity could be significantly decreased by covering with the nontoxic negatively charged CPN (>80% cell viability in Figure 2A), which could induce the charge reversal of bPDR, implying that the negatively charged CPN coating could decrease the higher toxicity of bPDR to cells.", "The transfection efficiency of CPN-PDR compared to Lipofectamine 2000 was demonstrated by delivering VEGF-siRNA into MCF-7 cells. The expression level of VEGF mRNA was measured using real-time PCR, and the results are shown in Figure 3. The expression level of VEGF mRNA was normalized to the β-actin expression and calculated based on the semiquantitative method. Untreated cells were selected as control, and the expression level was set as 100%. For lipo/siRNA group, the silencing efficiencies were 61.98%±6.96%, 31.93%±6.43% and 37.02%±3.17% at 24, 48 and 72 h, respectively. The decreased expression of VEGF mRNA reflected that gene silencing capability of lipo/siRNA. 
With incubation time extended from 24 to 48 h, the capability was enhanced (P<0.05). However, at 72 h, no further reduction was observed compared with that at 48 h, although significantly downregulated expression could still be observed (P<0.05 vs control, Dox and bCPN-PDR), implying that the silencing effect could reach a plateau after 48 h and last for at least 72 h. CPN-PR and CPN-PDR showed similar behavior to lipo/siRNA. Compared with the control, a remarkable decrease of VEGF mRNA expression was observed (P<0.01) at 48 h. There was no significant difference in expression levels between CPN-PR and CPN-PDR (P>0.05) at any time point, implying their comparable gene delivery and transfection capability. More importantly, this capability was equivalent to that of the commercial Lipofectamine 2000.", "Based on the results of real-time PCR, the gene-silencing effect was enhanced with time and reached a plateau after 48 h. In this study, the Western blot technique was employed to observe VEGF protein expression at 48 h. The results are shown in Figure 4. GAPDH was selected as the inner control; all the GAPDH bands were found to be equivalent in gray levels, and the amount of protein expression was positively correlated with the shade of gray in the imaging picture. From Figure 4, the gray levels of both control and bCPN-PDR, which were similar, were obvious. However, siRNA-loaded nanoparticles including lipo/siRNA, CPN-PR and CPN-PDR demonstrated a notably lower gray level. Furthermore, as marked with the red rectangle in Figure 4, expression of VEGF protein in CPN-PDR could hardly be identified. This suggested that co-delivery of anti-VEGF siRNA and Dox could downregulate the relevant VEGF protein level, which was in agreement with the results from semiquantitative real-time PCR. Taken together, these results indicate that nanoparticles loaded with anti-VEGF siRNA downregulated both the protein and the mRNA expression of VEGF.", "The ultimate goal of siRNA intracellular delivery is to restrain the proliferation of cancer cells. Different treatments against MCF-7 cells were performed to further investigate the inhibition effect. As shown in Figure 5A, an obvious concentration dependence in terms of DOX, CPN-PD and CPN-PDR could be observed after 48 h incubation. We calculated the differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX at each dose. The results indicated that there was a significant difference between CPN-PDR and DOX at each dose except 0.05 µM, and a significant difference between CPN-PDR and CPN-PD at the dose of 0.002 µM. It was suggested that the in vitro antitumor activity of CPN-PDR was equivalent to that of CPN-PD, and both of them showed a higher antitumor activity than free drug DOX, which was mainly attributed to the design of the multifunctional nano-vectors. We also calculated the statistical differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX over the concentration range of Dox (0.002, 0.01, 0.05, 0.25 and 0.5 µM) using Student’s t-test. After statistical calculation, significant differences were verified to exist between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.01). After calculation with professional software, the half maximal inhibitory concentrations (IC50) are presented in Figure 5B. They are 0.0367±0.0088 µM for Dox, 0.0266±0.0027 µM for CPN-PD and 0.0126±0.0022 µM for CPN-PDR, respectively. The IC50 of DOX was 2.913-fold higher than that of CPN-PDR, and the IC50 of CPN-PD was 2.110-fold higher than that of CPN-PDR.
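As a quick, purely illustrative check (not part of the original analysis), the fold differences quoted above follow directly from the reported mean IC50 values; the short Python sketch below reproduces that arithmetic using only the means and ignoring the stated standard deviations.

    # Illustrative re-calculation of the IC50 fold differences from the reported means.
    ic50 = {"Dox": 0.0367, "CPN-PD": 0.0266, "CPN-PDR": 0.0126}  # mean IC50 values in uM (Figure 5B)
    for name in ("Dox", "CPN-PD"):
        fold = ic50[name] / ic50["CPN-PDR"]
        print(f"IC50({name}) / IC50(CPN-PDR) = {fold:.3f}-fold")
    # Prints roughly 2.913 for Dox and 2.111 for CPN-PD, matching the fold values quoted above.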
After statistical calculation, significant differences were found between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.05). Together with the results from real-time PCR and Western blot, the increased cytotoxicity of CPN-PDR probably resulted from siRNA delivery and successful transfection. This was because the intracellular delivery of anti-VEGF siRNA could downregulate the expression of VEGF protein. The anti-VEGF therapy may sensitize cancer cells to chemotherapy, directly kill cancer cells, block the blood supply of the tumor and improve drug penetration. Taken together, the anti-VEGF siRNA and Dox co-delivery platform could induce increased cytotoxicity against cancer cells.", "To investigate the safety of intravenous injection of CPN-PDR, a hemolysis test was performed. As shown in Table 1, different volume ratios of CPN-PDR to 2% red blood cell suspension were selected. After incubation and centrifugation, tubes were observed by the naked eye. As shown in Figure 6, no red blood cell hemolysis was observed in tube 6, in which 100% PBS was added serving as a negative control. For tube 7, 100% double distilled water was added and red cell hemolysis could be clearly observed. For tubes 1–5, in which different percentages of CPN-PDR were added, no red blood cell hemolysis occurred. Hemolysis of red blood cells can generate hemoprotein, which could be detected using UV-vis spectrophotometry. After scanning the absorbance wavelength of hemoprotein, 577 nm was selected as the detecting wavelength. According to Equation 3, hemolysis ratios were calculated based on the recorded absorbance and shown in Table 2. All the hemolysis ratios of tubes 1–5 were <5%, implying that no hemolysis was induced by addition of CPN-PDR.", "In this study, a histological analysis of organs (heart, liver, spleen, lung and kidney) was performed to evaluate whether CPN-PDR could cause tissue damage, inflammation and lesions. Kunming mice were injected with CPN-PDR and normal saline (NS) by tail vein, respectively. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After the standard procedure of HE staining for histological examination, the organs were observed under a microscope. Mice without any treatment were chosen as a control. As shown in Figure 7, compared with the control group, there was no visible histological difference between mice administered CPN-PDR and those administered NS, implying the safety of CPN-PDR in vivo.", "In summary, the delivery system materials were demonstrated to be nontoxic and biocompatible. The nontoxic and negatively charged copolymer CPN coating could significantly decrease the cytotoxicity of the cationic core, bPDR. The obtained co-delivery system CPN-PDR was also confirmed to show enhanced cytotoxicity against tumor cells. The gene transfection efficiency induced by CPN-PDR was essentially equal to that of the commercial product Lipofectamine 2000, implying successful intracellular delivery and good transfection of siRNA. Moreover, the preliminary evaluation of safety showed CPN-PDR to have good biocompatibility, so it has great potential for further exploitation and clinical application. With the development of aptamer technology, the promising application prospects of this novel oligodeoxynucleotide-based co-loading platform will further increase." ]
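For illustration only, the hemolysis acceptance criterion described above can be expressed as a short calculation. The absorbance values below are hypothetical placeholders, not measured data; the 5% cut-off follows the criterion used in the text, and the supernatant absorbance is read at 577 nm as stated.

    # Hemolysis ratio (Equation 3), computed from supernatant absorbance at 577 nm.
    # All absorbance values here are made-up placeholders for illustration.
    def hemolysis_ratio(abs_sample, abs_negative, abs_positive):
        """HR (%) = (Abs(sample) - Abs(-)) / (Abs(+) - Abs(-)) x 100."""
        return (abs_sample - abs_negative) / (abs_positive - abs_negative) * 100.0

    abs_negative = 0.02   # tube 6, 100% PBS (negative control)
    abs_positive = 1.35   # tube 7, 100% double distilled water (positive control)
    samples = {1: 0.03, 2: 0.04, 3: 0.05, 4: 0.05, 5: 0.06}  # tubes containing CPN-PDR

    for tube, absorbance in samples.items():
        hr = hemolysis_ratio(absorbance, abs_negative, abs_positive)
        verdict = "acceptable (<5%)" if hr < 5.0 else "hemolytic"
        print(f"tube {tube}: HR = {hr:.2f}% -> {verdict}")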
[ "intro", "materials|methods", "materials", null, null, "materials", null, null, null, null, null, null, null, "methods", null, "intro", "materials", null, null, null, null, null, null, null ]
[ "co-delivery", "doxorubicin", "VEGF", "cytotoxicity", "transfection" ]
Introduction: Cancer has been one of the leading causes of death worldwide, and mortality and morbidity continue to increase.1 Chemotherapy is one of the major strategies for cancer therapy. However, mono-chemotherapy may encounter problems including drug resistance and unspecific toxicity. To overcome the high mortality rate of cancer, new therapeutic strategies, such as combinational therapy, have been developed.2,3 It has been reported that a combination of chemotherapy and gene therapy could potentially achieve synergistic effects and improve target selectivity, and deter the development of cancer drug resistance.4 For example, a double-modulating strategy based on the combination of chemotherapeutic agent 5-fluorouracil (5-FU) and siRNA has been developed, in which chemotherapy enhances intratumoral siRNA delivery and the delivered siRNA enhances the chemosensitivity of tumors. Furthermore, combination therapy may compensate for the limited delivery of siRNA to tumor tissue and probably manage the 5-FU-resistant tumors.5 RNA interference (RNAi) is a natural cellular process that regulates gene expression level. Small interfering RNAs (siRNAs), which are small double-stranded RNAs, 20–24 nucleotides (nts) in length with sequences complementary to a gene’s coding sequence, can induce degradation of corresponding messenger RNAs (mRNAs), thus blocking the translation of the mRNA into protein.6,7 The key therapeutic advantage of using RNAi lies in its ability to specifically and potently knock down the expression of disease-causing genes of known sequence. The high specificity and potency makes it widely used in treating various cancers such as breast, liver, and lung cancers.8–10 In recent years, combination of chemotherapy and siRNA-induced RNAi has become a hot topic in cancer treatment. For example, co-delivery of siRNA targeting multidrug resistance protein 1 (MDR1) or anti-survivin siRNA and chemotherapeutic drugs was demonstrated to be a promising strategy to overcome drug resistance.11–13 Anti-apoptotic gene Bcl-2 is a potential combinational therapy target owing to its important role in cell apoptosis, and combination of Bcl-2 siRNA and 5-FU could induce a remarkable increase of cell apoptosis both in vitro and in vivo.14 To achieve the desired combinational and synergistic effects of chemotherapy and RNAi, selection of siRNA is important, and the selection of siRNA is also the section of silencing target. The mechanism for combinational anticancer effect varies with different silencing targets. For example, siRNA targeting MDR1 or anti-apoptotic gene Bcl-2 were used to prevent drug resistance response and sensitize chemotherapeutic drugs, or enhance the apoptosis of cancer cells, both offering enhanced anticancer effect.15,16 Tumor tissues have abundant new blood vessels that pump sufficient nutrition and oxygen for tumor growth and metastasis. The dependency of tumor tissues on these new blood vessels is important in anti-angiogenesis therapy for cancer treatment.17 Recently, there have been many efforts in achieving anti-angiogenesis, including delivering siRNAs to silence specific pro-angiogenic genes, such as vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF) and interleukin-8 (IL-8).18 Of these targets, VEGF has received a lot of attention owing to its key role in tumor angiogenesis. 
With the development of RNAi technology, siRNA with VEGF target has been widely explored and referred to as anti-VEGF treatment.19–21 In 2004, the success of bevacizumab showed VEGF to be a potential target for angiogenesis therapy. There are a number of novel anti-VEGF agents in phase III clinical trials that may come to market in the next few years.22,23 In our design, siRNA with VEGF target was selected to co-deliver with chemotherapeutic drugs, resulting in a combinational anticancer effect. The mechanism for this could be that anti-VEGF treatments sensitize the cells to chemotherapy agents by blocking blood supply, improve drug penetration by disruption or normalization of tumor vasculature and in addition directly kill cancer cells by gene therapy.24,25 However, naked siRNAs may not induce efficient transfection by themselves because they are unstable and vulnerable, especially to nuclease-mediated degradation; also, the negatively charged surface blocks them from cellular endocytosis. Moreover, successful transfection of siRNA into cells is based on siRNA with complete structure, which could further initiate the RNAi process for targeted gene silencing.26,27 Thus, nanoscale delivery systems, which could co-load drug and siRNA and protect siRNA from degradation, were widely used and demonstrated to be excellent candidates. These include nanoparticles,28 liposomes,29 polyplexes30 and dendrimers.31 The basic standard involves employing cationic polymers or lipids to condense negatively charged siRNA and protect them from degradation.32,33 In our previous study,25 oligodeoxynucleotides with CGA repeating units (CGA-ODNs) were introduced to load Dox to obtain Dox-loaded CGA-ODNs (CGA-ODNs-Dox). Poly(ethyleneimine) (PEI) was then used to condense siRNA and CGA ODNs-Dox to obtain Dox and siRNA co-loaded nanoparticles (PEI/CGA-ODNs-Dox and siRNA[PDR]). Finally, the pH-sensitive targeted material, o-carboxymethyl-chitosan (CMCS)-poly(ethylene glycol) (PEG)-aspargine-glycine-arginine (NGR) (CMCS-PEG-NGR[CPN]), was used to modify PDR to obtain multifunctional CPN-PDR. CPN-PDRs were demonstrated to be multifunctional, being able to co-deliver Dox and siRNA into cells, induce pH-responsive disassembly and realize endosomal escape of gene and drug. Thus, in the present study, the cytotoxicity of the materials and the delivery system are further evaluated. Then, the transfection efficiency of siRNA is monitored by semiquantitative real-time polymerase chain reaction (real-time PCR) and Western blot techniques for mRNA level and protein level, respectively. Finally, the safety of the delivery system is evaluated by hemolysis test in vitro and histological assessment. Materials and methods: Materials Dox was purchased from Dalian Meilun Biology Technology Co. Ltd (Dalian, China). Targeting human VEGF siRNA (sense: 5′-ACAUCACCAUGCAGAUUAUdTdT-3′, anti-sense: 5′-dTdTUGUAGUGGUACGUCUAAUA-3′) was obtained from Guangzhou Ribobio Co., Ltd (Guangzhou, China). CGA oligodeoxynucleotides (sequence: 5′-CGACGACGACGACGACGACGA-3′; complementary sequence: 5′-TCGTCGTCGTCGTCGTCGTCG-3′) were purchased from BGI Co. (Shenzhen, China). Fetal bovine serum (FBS) was obtained from Sijiqing Co., Ltd, (Zhejiang, China) and 3-[4,5-dimethyl-2-thiazolyl]-2,5-diphenyl-2H-tetrazolium bromide (MTT) was from Sigma-Aldrich (St Louis, MO, USA). Trizol RNA extraction was from Thermo Fisher Scientific (Waltham, MA, USA). SYBR®Green and ReverTra Ace qPCR RT Kit were purchased from Toyobo (Osaka, Japan). 
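A small utility such as the sketch below (illustrative only, not part of the original protocol) can be used to sanity-check ordered duplex sequences such as the VEGF siRNA listed above. Note that vendors and papers list the antisense strand in different orientations, so the listed strand may correspond either to the position-wise complement or to the reverse complement of the sense strand; the dTdT overhangs are excluded from the comparison.

    # Illustrative complementarity check for an ordered siRNA duplex.
    PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

    def complement(seq):
        # Position-wise complement, keeping the input order.
        return "".join(PAIR[base] for base in seq)

    def reverse_complement(seq):
        # Antiparallel partner written 5'->3'.
        return complement(seq)[::-1]

    sense = "ACAUCACCAUGCAGAUUAU"      # core of the sense strand, dTdT overhang omitted
    antisense = "UGUAGUGGUACGUCUAAUA"  # core of the listed antisense strand, overhang omitted

    print(antisense == complement(sense))          # True if the listing aligns the strands base by base
    print(antisense == reverse_complement(sense))  # True if the listing is strictly 5'->3' antiparallel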
All solutions were made up in Millipore ultrapure water (EMD Millipore, Billerica, MA, USA) and sterilized for cell culture. All other chemicals and reagents were of analytical grade and used as received. All the primers used in real-time PCR were synthesized and purified by BGI Co. with sequences: VEGF – forward: 5′-CTGGAGTGTGTGCCCACTGA-3′; VEGF – reverse: 5′-TCCTATGTGCTGGCCTTGGT-3′; actin – forward: GAGCTACGAGCTGCCTGACG; actin – reverse: CCTAGAAGCATTTGCGGTGG. Dox was purchased from Dalian Meilun Biology Technology Co. Ltd (Dalian, China). Targeting human VEGF siRNA (sense: 5′-ACAUCACCAUGCAGAUUAUdTdT-3′, anti-sense: 5′-dTdTUGUAGUGGUACGUCUAAUA-3′) was obtained from Guangzhou Ribobio Co., Ltd (Guangzhou, China). CGA oligodeoxynucleotides (sequence: 5′-CGACGACGACGACGACGACGA-3′; complementary sequence: 5′-TCGTCGTCGTCGTCGTCGTCG-3′) were purchased from BGI Co. (Shenzhen, China). Fetal bovine serum (FBS) was obtained from Sijiqing Co., Ltd, (Zhejiang, China) and 3-[4,5-dimethyl-2-thiazolyl]-2,5-diphenyl-2H-tetrazolium bromide (MTT) was from Sigma-Aldrich (St Louis, MO, USA). Trizol RNA extraction was from Thermo Fisher Scientific (Waltham, MA, USA). SYBR®Green and ReverTra Ace qPCR RT Kit were purchased from Toyobo (Osaka, Japan). All solutions were made up in Millipore ultrapure water (EMD Millipore, Billerica, MA, USA) and sterilized for cell culture. All other chemicals and reagents were of analytical grade and used as received. All the primers used in real-time PCR were synthesized and purified by BGI Co. with sequences: VEGF – forward: 5′-CTGGAGTGTGTGCCCACTGA-3′; VEGF – reverse: 5′-TCCTATGTGCTGGCCTTGGT-3′; actin – forward: GAGCTACGAGCTGCCTGACG; actin – reverse: CCTAGAAGCATTTGCGGTGG. Cells and animals The MCF-7 cells were purchased from the Chinese Academy of Sciences Cells Bank (Shanghai, China), and the cells were cultured in Roswell Park Memorial Institute (RPMI) 1640 medium supplemented with 10% fetal bovine serum (FBS), streptomycin at 100 µg/mL and penicillin at 100 U/mL. All cells were cultured in a 37°C incubator with 5% CO2. Kunming mice (20±2 g) were obtained from Experimental Animal Center of Shandong University. Animal experiments were carried out according to the requirements of the Animal Management Rules of the Ministry of Health (China) and approved by the Laboratory Animal Ethics Review Committee of Qilu Medical College of Shandong University. The MCF-7 cells were purchased from the Chinese Academy of Sciences Cells Bank (Shanghai, China), and the cells were cultured in Roswell Park Memorial Institute (RPMI) 1640 medium supplemented with 10% fetal bovine serum (FBS), streptomycin at 100 µg/mL and penicillin at 100 U/mL. All cells were cultured in a 37°C incubator with 5% CO2. Kunming mice (20±2 g) were obtained from Experimental Animal Center of Shandong University. Animal experiments were carried out according to the requirements of the Animal Management Rules of the Ministry of Health (China) and approved by the Laboratory Animal Ethics Review Committee of Qilu Medical College of Shandong University. Preparation of nanoparticles PDR and CPN-PDR were prepared by the method reported in our previous study.25 Briefly, CGA-ODNs-Dox was first obtained by incubating Dox with CGA-ODNs at room temperature. Then, PEI, siRNA, ODNs-Dox and CPN were dissolved and diluted to the corresponding concentrations with deionized water. After that, siRNA and CGA-ODNs-Dox were mixed by vortexing for several seconds to obtain the mixture. 
The mixture was added dropwise into PEI solution and mixed by vortexing and then incubated for 30 min at room temperature to form PDR. CPN-PDR was obtained by adding PDR suspension into the CPN solution under vortexing, followed by 30-min incubation at room temperature. When preparing the blank nanoparticles, DOX and siRNA were not included in the above method. The construction of different kinds of nanoparticles is shown in Figure 1. The loading content of DOX and siRNA was calculated using the following formula:34 Loading content (%) = (weight of loaded drugs / weight of nanoparticles) × 100% (Equation 1). In vitro cytotoxicity of materials MTT assays were carried out on MCF-7 cells to evaluate the in vitro cytotoxicity of CGA-ODNs, PEI and CPN, respectively.35 MCF-7 cells were seeded at a density of 7,000 cells/well in 96-well plates and allowed to adhere overnight. CGA-ODNs, PEI and CPN solutions, corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. At a determined time point, 20 µL MTT solutions (5 mg/mL) were added. After 4 h incubation, the plates were centrifuged (3,000 rpm, 10 min) and the supernatant was removed. Then, 200 µL DMSO solutions were added to dissolve the formazan crystals formed by the living cells.
The cell viability was calculated according to the following formula (Equation 2) after recording the absorbance at 570 nm by a microplate reader (Model 680; Bio-Rad Laboratories Inc., Hercules, CA, USA). All the assays were repeated three times. Cell viability (%) = [Abs(sample) − Abs(blank)] / [Abs(control) − Abs(blank)] × 100% (Equation 2). In vitro cytotoxicity of blank nanoparticles The cytotoxicity of non-DOX- and siRNA-loaded nanoparticles (blank PDR[bPDR]) and CPN-coated blank PDR (bCPN-PDR) was also evaluated in MCF-7 cells. bPDR and bCPN-PDR solutions, corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration and cells incubated with fresh media were taken as control. After the indicated time periods, cell viability was evaluated according to the procedure described above. siRNA transfection with nanoparticles MCF-7 cells were seeded into 24-well plates at densities of 8×10^4, 6×10^4 and 5×10^4 cells/well for 24, 48 and 72 h transfection, respectively. After culturing at 37°C overnight, MCF-7 cells were treated with siRNA specific for VEGF. The siRNA was delivered either by Lipofectamine 2000 (Thermo Fisher Scientific) according to the manufacturer’s instructions or by CPN-modified siRNA-loaded nanoparticles (CPN-PR) or by CPN-modified Dox- and siRNA-loaded nanoparticles (CPN-PDR). The final concentrations of Dox and siRNA were 1 µM and 55 nM, respectively. After siRNA transfection, cells were harvested for RNA extraction and real-time PCR analysis or subjected to Western blot assay. Real-time PCR The expression level of VEGF mRNA was monitored by real-time PCR technique with the standard procedure. Total RNA was extracted using Trizol agent under the standard procedure (Thermo Fisher Scientific), and cDNA synthesis was carried out using ReverTra Ace qPCR RT Kit (Toyobo) according to the manufacturer’s instructions. The final concentration of RNA was measured by NanoDrop 2000 (Thermo Fisher Scientific), and the final product cDNA was stored at −20°C until use. Each real-time PCR system contained 20 µL solutions including cDNA (2 µL), SYBR®Green mix (10 µL), primer (1.6 µL) and double distilled water (6.4 µL).
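To make the two bookkeeping formulas in this methods passage concrete (Equation 1 for drug loading content and Equation 2 for MTT cell viability), a minimal sketch is given below; all weights and absorbance readings are hypothetical placeholders, not experimental values.

    # Equation 1: loading content (%) = weight of loaded drugs / weight of nanoparticles x 100
    def loading_content(drug_weight, nanoparticle_weight):
        return drug_weight / nanoparticle_weight * 100.0

    # Equation 2: cell viability (%) relative to untreated control, blank-corrected (read at 570 nm)
    def cell_viability(abs_sample, abs_control, abs_blank):
        return (abs_sample - abs_blank) / (abs_control - abs_blank) * 100.0

    # Placeholder numbers for illustration only.
    print(f"loading content: {loading_content(0.2, 2.5):.1f}%")       # e.g. 0.2 mg drug in 2.5 mg particles
    print(f"cell viability: {cell_viability(0.62, 0.75, 0.05):.1f}%")  # one well vs control and blank wells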
The reaction system was placed into PCR instrument (Roche LightCycler™; Hoffman-La Roche Ltd, Basel, Switzerland) for analysis. β-Actin served as a reference gene. Three wells were set for each group. The expression level of VEGF mRNA was normalized to the β-actin expression level and calculated by recording the Ct value at the end of the reaction. The expression level of VEGF mRNA was monitored by real-time PCR technique with the standard procedure. Total RNA was extracted using Trizol agent under the standard procedure (Thermo Fisher Scientific), and cDNA synthesis was carried out using Rever Tra Ace qPCR RT Kit (Toyobo) according to the manufacturer’s instructions. The final concentration of RNA was measured by Nano Drop 2000 (Thermo Fisher Scientific), and the final product cDNA was stored at −20°C until use. Each real-time PCR system contained 20 µL solutions including cDNA (2 µL), SYBR®Green mix (10 µL), primer (1.6 µL) and double distilled water (6.4 µL). The reaction system was placed into PCR instrument (Roche LightCycler™; Hoffman-La Roche Ltd, Basel, Switzerland) for analysis. β-Actin served as a reference gene. Three wells were set for each group. The expression level of VEGF mRNA was normalized to the β-actin expression level and calculated by recording the Ct value at the end of the reaction. Western blot assay A Western blot assay was employed to investigate the expression level of total VEGF protein after transfection. At a determined time, cells were trypsinized and pelleted by centrifugation. The protein collecting process was carried out based on the standard procedure of Total Protein Extraction Kit (BestBio, Shanghai, China). The total protein was stored at −80°C until use. Protein was quantified by BCA protein assay kit (Beyotime Biotechnology, Shanghai, China) based on the standard carve calculation (A=0.006 C+0.0919, r=0.9987). After quantification, all the samples were diluted to 2 µg/µL with sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) electrophoresis, denatured in boiling water for 5 min and stored at −80°C until use. Equal amounts of protein (50 µg) were subjected to 12% SDS-PAGE and transferred to nitrocellulose membranes using standard procedures. The membrane was blocked in 2% skimmed milk in phosphate buffered saline for 2 h and washed by Tris-buffered saline and Tween-20 (TBST) solution three times. Then the membranes were probed with primary antibodies against VEGF and glyceraldehyde 3-phosphate dehydrogenase (GAPDN) overnight at 4°C. After 3×10 min washing in TBST on shaker, membranes were incubated with horseradish peroxidase (HRP)-conjugated anti-rabbit IgG for 1 h at room temperature. With another 3×10 min washes in TBST, the membranes were developed with electrochemiluminescence (ECL) using a gel-imaging system and analyzed using Image Analysis software. A Western blot assay was employed to investigate the expression level of total VEGF protein after transfection. At a determined time, cells were trypsinized and pelleted by centrifugation. The protein collecting process was carried out based on the standard procedure of Total Protein Extraction Kit (BestBio, Shanghai, China). The total protein was stored at −80°C until use. Protein was quantified by BCA protein assay kit (Beyotime Biotechnology, Shanghai, China) based on the standard carve calculation (A=0.006 C+0.0919, r=0.9987). 
After quantification, all the samples were diluted to 2 µg/µL with sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) electrophoresis, denatured in boiling water for 5 min and stored at −80°C until use. Equal amounts of protein (50 µg) were subjected to 12% SDS-PAGE and transferred to nitrocellulose membranes using standard procedures. The membrane was blocked in 2% skimmed milk in phosphate buffered saline for 2 h and washed by Tris-buffered saline and Tween-20 (TBST) solution three times. Then the membranes were probed with primary antibodies against VEGF and glyceraldehyde 3-phosphate dehydrogenase (GAPDN) overnight at 4°C. After 3×10 min washing in TBST on shaker, membranes were incubated with horseradish peroxidase (HRP)-conjugated anti-rabbit IgG for 1 h at room temperature. With another 3×10 min washes in TBST, the membranes were developed with electrochemiluminescence (ECL) using a gel-imaging system and analyzed using Image Analysis software. Cell proliferation assay The anti-proliferation activity of Dox, CPN-PD and CPN-PDR against MCF-7 cells was tested via MTT method. MCF-7 cells were seeded into 96-well plates at a density of 7,000 cells/well. After overnight incubation, the cells were treated with various concentrations (0.002, 0.01, 0.05, 0.25 and 0.5 µM) of Dox, CPN-PD and CPN-PDR solutions and incubated for another 48 h. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. After indicated time periods, cell viability was evaluated according to the procedure described in the “In vitro cytotoxicity of materials” section. The anti-proliferation activity of Dox, CPN-PD and CPN-PDR against MCF-7 cells was tested via MTT method. MCF-7 cells were seeded into 96-well plates at a density of 7,000 cells/well. After overnight incubation, the cells were treated with various concentrations (0.002, 0.01, 0.05, 0.25 and 0.5 µM) of Dox, CPN-PD and CPN-PDR solutions and incubated for another 48 h. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. After indicated time periods, cell viability was evaluated according to the procedure described in the “In vitro cytotoxicity of materials” section. Hemolysis test Hemolysis test was carried out to investigate the safety of CPN-PDR for intravenous injection. A 2% red blood cell suspension was collected by centrifugation and resuspension. The test was performed under the design described in Table 1. After incubated at 37°C for 3 h, all the groups were centrifuged at 3,000 rpm for 15 min and visualized by the naked eye. To measure the hemolysis ratios of each group, UV-vis spectrophotometry was carried out to record the absorbance of supernatant in each group. The quantitation of hemolysis ratios was calculated according to the following formula: HR(%)=Abs(sample)−Abs(−)Abs(+)−Abs(−)×100%(3)where Abs (sample), Abs (−) and Abs (+) refer to the absorbances of the samples, negative control and positive control, respectively. Hemolysis test was carried out to investigate the safety of CPN-PDR for intravenous injection. A 2% red blood cell suspension was collected by centrifugation and resuspension. The test was performed under the design described in Table 1. After incubated at 37°C for 3 h, all the groups were centrifuged at 3,000 rpm for 15 min and visualized by the naked eye. To measure the hemolysis ratios of each group, UV-vis spectrophotometry was carried out to record the absorbance of supernatant in each group. 
The quantitation of hemolysis ratios was calculated according to the following formula: HR(%)=Abs(sample)−Abs(−)Abs(+)−Abs(−)×100%(3)where Abs (sample), Abs (−) and Abs (+) refer to the absorbances of the samples, negative control and positive control, respectively. Histological assessment In order to evaluate the compatibility and tissue toxicity of CPN-PDR in vivo, a histological observation was performed.36 Five female Kunming mice were injected with CPN-PDR at a dose of 232 µg/kg through the tail vein. In the meantime, normal saline (NS) was taken as a control reagent. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After washing twice with PBS, all the organs were fixed in 4% formaldehyde, dehydrated in gradient alcohol, placed in xylene, embedded in paraffin and made into sections, followed by hematoxylin-eosin (HE) staining for histological examination with microscope. In order to evaluate the compatibility and tissue toxicity of CPN-PDR in vivo, a histological observation was performed.36 Five female Kunming mice were injected with CPN-PDR at a dose of 232 µg/kg through the tail vein. In the meantime, normal saline (NS) was taken as a control reagent. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After washing twice with PBS, all the organs were fixed in 4% formaldehyde, dehydrated in gradient alcohol, placed in xylene, embedded in paraffin and made into sections, followed by hematoxylin-eosin (HE) staining for histological examination with microscope. Statistical analysis All studies were repeated a minimum of three times and measured at least in triplicate. The results are reported as the means ± SD. Statistical significance was analyzed using Student’s t-test. Differences between experimental groups were considered significant if the P-value was <0.05. All studies were repeated a minimum of three times and measured at least in triplicate. The results are reported as the means ± SD. Statistical significance was analyzed using Student’s t-test. Differences between experimental groups were considered significant if the P-value was <0.05. Materials: Dox was purchased from Dalian Meilun Biology Technology Co. Ltd (Dalian, China). Targeting human VEGF siRNA (sense: 5′-ACAUCACCAUGCAGAUUAUdTdT-3′, anti-sense: 5′-dTdTUGUAGUGGUACGUCUAAUA-3′) was obtained from Guangzhou Ribobio Co., Ltd (Guangzhou, China). CGA oligodeoxynucleotides (sequence: 5′-CGACGACGACGACGACGACGA-3′; complementary sequence: 5′-TCGTCGTCGTCGTCGTCGTCG-3′) were purchased from BGI Co. (Shenzhen, China). Fetal bovine serum (FBS) was obtained from Sijiqing Co., Ltd, (Zhejiang, China) and 3-[4,5-dimethyl-2-thiazolyl]-2,5-diphenyl-2H-tetrazolium bromide (MTT) was from Sigma-Aldrich (St Louis, MO, USA). Trizol RNA extraction was from Thermo Fisher Scientific (Waltham, MA, USA). SYBR®Green and ReverTra Ace qPCR RT Kit were purchased from Toyobo (Osaka, Japan). All solutions were made up in Millipore ultrapure water (EMD Millipore, Billerica, MA, USA) and sterilized for cell culture. All other chemicals and reagents were of analytical grade and used as received. All the primers used in real-time PCR were synthesized and purified by BGI Co. with sequences: VEGF – forward: 5′-CTGGAGTGTGTGCCCACTGA-3′; VEGF – reverse: 5′-TCCTATGTGCTGGCCTTGGT-3′; actin – forward: GAGCTACGAGCTGCCTGACG; actin – reverse: CCTAGAAGCATTTGCGGTGG. 
Cells and animals: The MCF-7 cells were purchased from the Chinese Academy of Sciences Cells Bank (Shanghai, China), and the cells were cultured in Roswell Park Memorial Institute (RPMI) 1640 medium supplemented with 10% fetal bovine serum (FBS), streptomycin at 100 µg/mL and penicillin at 100 U/mL. All cells were cultured in a 37°C incubator with 5% CO2. Kunming mice (20±2 g) were obtained from Experimental Animal Center of Shandong University. Animal experiments were carried out according to the requirements of the Animal Management Rules of the Ministry of Health (China) and approved by the Laboratory Animal Ethics Review Committee of Qilu Medical College of Shandong University. Preparation of nanoparticles: PDR and CPN-PDR were prepared by the method reported in our previous study.25 Briefly, CGA-ODNs-Dox was first obtained by incubating Dox with CGA-ODNs at room temperature. Then, PEI, siRNA, ODNs-Dox and CPN were dissolved and diluted to the corresponding concentrations with deionized water. After that, siRNA and CGA-ODNs-Dox were mixed by vortexing for several seconds to obtain the mixture. The mixture was added dropwise into PEI solution and mixed by vortexing and then incubated for 30 min at room temperature to form PDR. CPN-PDR was obtained by adding PDR suspension into the CPN solution under vortexing, followed by 30-min incubation at room temperature. When preparing the blank nanoparticles, the DOX and siRNA were not involved in above method. The construction of different kinds of nanoparticles is shown in Figure 1. The loading content of DOX and siRNA was calculated using the following formula:34 Loading content(%)=Weight of loading drugsWeight of nanoparticles×100%(1) In vitro cytotoxicity of materials: MTT assays were carried out on MCF-7 cells to evaluate the in vitro cytotoxicity of CGA-ODNs, PEI and CPN, respectively.35 MCF-7 cells were seeded with density at 7,000 cells/well in the 96-well plates and allowed to adhere overnight. CGA-ODNs, PEI and CPN solutions, which were corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. At a determined time point, 20 µL MTT solutions (5 mg/mL) were added. After 4 h incubation, the plates were centrifuged (3,000 rpm, 10 min) and the supernatant was removed. Then, 200 µL DMSO solutions were added to dissolve the formazan crystals formed by the living cells. The cell viability was calculated according to the following formula (Equation 2) after recording the absorbance at 570 nm by a microplate reader (Model 680; Bio-Rad Laboratories Inc., Hercules, CA, USA). All the assays were repeated three times. Cell viability=Abs(sample)−Abs(blank)Abs(control)−Abs(blank)×100%(2) In vitro cytotoxicity of blank nanoparticles: The cytotoxicity of non-DOX- and siRNA-loaded nano-particles (blank PDR[bPDR]) and CPN-coated blank PDR (bCPN-PDR) were also evaluated in MCF-7 cells. bPDR and bCPN-PDR solutions, corresponding to Dox concentrations (0.25, 0.625, 1.25, 2.5 and 5 µM), were added and incubated with the cells for 24, 48 and 72 h, respectively. Five wells were set for each concentration and cells incubated with fresh media were taken as control. After the indicated time periods, cell viability was evaluated according to the procedure described above. 
siRNA transfection with nanoparticles: MCF-7 cells were seeded into 24-well plates at densities of 8×104, 6×104 and 5×104 cells/well for 24, 48 and 72 h transfection, respectively. After culturing at 37°C overnight, MCF-7 cells were treated with siRNA specific for VEGF. The siRNA was delivered either by Lipofectamine 2000 (Thermo Fisher Scientific) according to the manufacturer’s instructions or by CPN-modified siRNA-loaded nanoparticles (CPN-PR) or by CPN-modified Dox- and siRNA-loaded nanoparticles (CPN-PDR). The final concentrations of Dox and siRNA were 1 µM and 55 nM, respectively. After siRNA transfection, cells were harvested for RNA extraction and Western blot PCR analysis or subjected to Western blot assay. Real-time PCR: The expression level of VEGF mRNA was monitored by real-time PCR technique with the standard procedure. Total RNA was extracted using Trizol agent under the standard procedure (Thermo Fisher Scientific), and cDNA synthesis was carried out using Rever Tra Ace qPCR RT Kit (Toyobo) according to the manufacturer’s instructions. The final concentration of RNA was measured by Nano Drop 2000 (Thermo Fisher Scientific), and the final product cDNA was stored at −20°C until use. Each real-time PCR system contained 20 µL solutions including cDNA (2 µL), SYBR®Green mix (10 µL), primer (1.6 µL) and double distilled water (6.4 µL). The reaction system was placed into PCR instrument (Roche LightCycler™; Hoffman-La Roche Ltd, Basel, Switzerland) for analysis. β-Actin served as a reference gene. Three wells were set for each group. The expression level of VEGF mRNA was normalized to the β-actin expression level and calculated by recording the Ct value at the end of the reaction. Western blot assay: A Western blot assay was employed to investigate the expression level of total VEGF protein after transfection. At a determined time, cells were trypsinized and pelleted by centrifugation. The protein collecting process was carried out based on the standard procedure of Total Protein Extraction Kit (BestBio, Shanghai, China). The total protein was stored at −80°C until use. Protein was quantified by BCA protein assay kit (Beyotime Biotechnology, Shanghai, China) based on the standard carve calculation (A=0.006 C+0.0919, r=0.9987). After quantification, all the samples were diluted to 2 µg/µL with sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) electrophoresis, denatured in boiling water for 5 min and stored at −80°C until use. Equal amounts of protein (50 µg) were subjected to 12% SDS-PAGE and transferred to nitrocellulose membranes using standard procedures. The membrane was blocked in 2% skimmed milk in phosphate buffered saline for 2 h and washed by Tris-buffered saline and Tween-20 (TBST) solution three times. Then the membranes were probed with primary antibodies against VEGF and glyceraldehyde 3-phosphate dehydrogenase (GAPDN) overnight at 4°C. After 3×10 min washing in TBST on shaker, membranes were incubated with horseradish peroxidase (HRP)-conjugated anti-rabbit IgG for 1 h at room temperature. With another 3×10 min washes in TBST, the membranes were developed with electrochemiluminescence (ECL) using a gel-imaging system and analyzed using Image Analysis software. Cell proliferation assay: The anti-proliferation activity of Dox, CPN-PD and CPN-PDR against MCF-7 cells was tested via MTT method. MCF-7 cells were seeded into 96-well plates at a density of 7,000 cells/well. 
After overnight incubation, the cells were treated with various concentrations (0.002, 0.01, 0.05, 0.25 and 0.5 µM) of Dox, CPN-PD and CPN-PDR solutions and incubated for another 48 h. Five wells were set for each concentration, and cells incubated with fresh media were taken as control. After indicated time periods, cell viability was evaluated according to the procedure described in the “In vitro cytotoxicity of materials” section. Hemolysis test: Hemolysis test was carried out to investigate the safety of CPN-PDR for intravenous injection. A 2% red blood cell suspension was collected by centrifugation and resuspension. The test was performed under the design described in Table 1. After incubated at 37°C for 3 h, all the groups were centrifuged at 3,000 rpm for 15 min and visualized by the naked eye. To measure the hemolysis ratios of each group, UV-vis spectrophotometry was carried out to record the absorbance of supernatant in each group. The quantitation of hemolysis ratios was calculated according to the following formula: HR(%)=Abs(sample)−Abs(−)Abs(+)−Abs(−)×100%(3)where Abs (sample), Abs (−) and Abs (+) refer to the absorbances of the samples, negative control and positive control, respectively. Histological assessment: In order to evaluate the compatibility and tissue toxicity of CPN-PDR in vivo, a histological observation was performed.36 Five female Kunming mice were injected with CPN-PDR at a dose of 232 µg/kg through the tail vein. In the meantime, normal saline (NS) was taken as a control reagent. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After washing twice with PBS, all the organs were fixed in 4% formaldehyde, dehydrated in gradient alcohol, placed in xylene, embedded in paraffin and made into sections, followed by hematoxylin-eosin (HE) staining for histological examination with microscope. Statistical analysis: All studies were repeated a minimum of three times and measured at least in triplicate. The results are reported as the means ± SD. Statistical significance was analyzed using Student’s t-test. Differences between experimental groups were considered significant if the P-value was <0.05. Results and discussion: Characteristics of nanoparticles In the optimal formulation of CPN-PDR, the weight ratio of DOX to siRNA was 1:1; therefore, the siRNA loading content was equal to the DOX loading content. Based on the facts that the molar ratio of CGA-ODNs to Dox was 1:5, the weight ratio of CPN/PEI/ODNs and siRNA was 4:1:1, and consequently, siRNA and Dox loading efficiency in CPN-PDR was calculated to be 4.93%. In the optimal formulation of CPN-PDR, the weight ratio of DOX to siRNA was 1:1; therefore, the siRNA loading content was equal to the DOX loading content. Based on the facts that the molar ratio of CGA-ODNs to Dox was 1:5, the weight ratio of CPN/PEI/ODNs and siRNA was 4:1:1, and consequently, siRNA and Dox loading efficiency in CPN-PDR was calculated to be 4.93%. Cytotoxicity of materials In the co-delivery platform, CGA-ODNs were selected as carriers of DOX, then the mixture of CGA-ODNs-Dox and siRNA (abbreviated as DR) was electrostatically interacted with PEI to obtain Dox and siRNA co-loaded nanoparticles, PEI/DR (PDR). The copolymer CPN was employed as a multifunctional material.25 In order to evaluate the safety of the carrier, MTT assays were carried out to evaluate the cytotoxicity of CGA-ODNs and CPN. In the meantime, PEI, a toxic cationic polymer, was also tested in this experiment. 
As shown in Figure 2A, the viability of MCF-7 cells treated with CGA-ODNs and CPN at any concentration was observed to be stable and >80% within 48 h. After 72 h incubation, the viability of MCF-7 cells treated with CPN was still >80%, but after treatment with a higher concentration of CGA-ODNs (2.5 and 5 µM), the cell viability was slightly decreased to 60%. This indicated that there is no obvious cytotoxicity of CGA-ODNs and CPN. However, the cell viability of MCF-7 cells treated with PEI was obviously dependent on the concentrations of PEI. With an increase in concentration, the MCF-7 cells treated with PEI showed lower cell viability. Moreover, this concentration-dependent trend became more obvious when incubation time increased from 24 h to 72 h. In the co-delivery platform, CGA-ODNs were selected as carriers of DOX, then the mixture of CGA-ODNs-Dox and siRNA (abbreviated as DR) was electrostatically interacted with PEI to obtain Dox and siRNA co-loaded nanoparticles, PEI/DR (PDR). The copolymer CPN was employed as a multifunctional material.25 In order to evaluate the safety of the carrier, MTT assays were carried out to evaluate the cytotoxicity of CGA-ODNs and CPN. In the meantime, PEI, a toxic cationic polymer, was also tested in this experiment. As shown in Figure 2A, the viability of MCF-7 cells treated with CGA-ODNs and CPN at any concentration was observed to be stable and >80% within 48 h. After 72 h incubation, the viability of MCF-7 cells treated with CPN was still >80%, but after treatment with a higher concentration of CGA-ODNs (2.5 and 5 µM), the cell viability was slightly decreased to 60%. This indicated that there is no obvious cytotoxicity of CGA-ODNs and CPN. However, the cell viability of MCF-7 cells treated with PEI was obviously dependent on the concentrations of PEI. With an increase in concentration, the MCF-7 cells treated with PEI showed lower cell viability. Moreover, this concentration-dependent trend became more obvious when incubation time increased from 24 h to 72 h. Cytotoxicity of blank nanoparticles Cell viability of MCF-7 cells was also monitored to investigate the cytotoxicity of non-RNA- and non-drug-loaded carriers including bPDR and bCPN-PDR. As shown in Figure 2B, after treatment with bCPN-PDR, the cell viability was stable around 95% and shown to be concentration independent at 24 h. Meanwhile, bPDR-treated cells showed lower cell viability with time. With increased concentration, bPDR showed higher cytotoxicity, inducing 80% cell death in 72 h. Compared with the bPDR group, the cell viabilities of the bCPN-PDR group were significantly higher (P<0.05). This phenomenon could be explained by the higher positive charge of bPDR due to the positive ingredient of PEI, while the cell cytotoxicity could be significantly decreased by covering with the nontoxic negatively charged CPN (>80% cell viability in Figure 2A), which could induce the charge reversal of bPDR, implying that the negatively charged CPN coating could decrease the higher toxicity of bPDR to cells. Cell viability of MCF-7 cells was also monitored to investigate the cytotoxicity of non-RNA- and non-drug-loaded carriers including bPDR and bCPN-PDR. As shown in Figure 2B, after treatment with bCPN-PDR, the cell viability was stable around 95% and shown to be concentration independent at 24 h. Meanwhile, bPDR-treated cells showed lower cell viability with time. With increased concentration, bPDR showed higher cytotoxicity, inducing 80% cell death in 72 h. 
Compared with the bPDR group, the cell viabilities of the bCPN-PDR group were significantly higher (P<0.05). This phenomenon could be explained by the higher positive charge of bPDR due to the positive ingredient of PEI, while the cell cytotoxicity could be significantly decreased by covering with the nontoxic negatively charged CPN (>80% cell viability in Figure 2A), which could induce the charge reversal of bPDR, implying that the negatively charged CPN coating could decrease the higher toxicity of bPDR to cells. Expression level of VEGF mRNA The transfection efficiency of CPN-PDR compared to Lipofectamine 2000 was demonstrated by delivering VEGF-siRNA into MCF-7 cells. The expression level of VEGF mRNA was measured using real-time PCR, and the results are shown in Figure 3. The expression level of VEGF mRNA was normalized to the β-actin expression and calculated based on the semiquantitative method. Untreated cells were selected as control, and the expression level was set as 100%. For lipo/siRNA group, the silencing efficiencies were 61.98%±6.96%, 31.93%±6.43% and 37.02%±3.17% at 24, 48 and 72 h, respectively. The decreased expression of VEGF mRNA reflected that gene silencing capability of lipo/siRNA. With incubation time extended from 24 to 48 h, the capability was enhanced (P<0.05). However, in 72 h, no further reduction was observed compared to that in 48 h, although the significantly downregulated expression could be observed (P<0.05 vs control, Dox and bCPN-PDR), implying that the silencing effect could reach the plateau period after 48 h and last for 72 h at least. The CPN-PR and CPN-PDR have shown similar behavior to lipo/siRNA. Compared with the control, a remarkable decrease of VEGF mRNA expression was observed (P<0.01) at 48 h. There was no significant difference in expression levels between CPN-PR and CPN-PDR (P>0.05) at any time point, implying their comparable gene delivery and transfection capability. More importantly, this capability was equivalent to the commercial Lipofectamine 2000. The transfection efficiency of CPN-PDR compared to Lipofectamine 2000 was demonstrated by delivering VEGF-siRNA into MCF-7 cells. The expression level of VEGF mRNA was measured using real-time PCR, and the results are shown in Figure 3. The expression level of VEGF mRNA was normalized to the β-actin expression and calculated based on the semiquantitative method. Untreated cells were selected as control, and the expression level was set as 100%. For lipo/siRNA group, the silencing efficiencies were 61.98%±6.96%, 31.93%±6.43% and 37.02%±3.17% at 24, 48 and 72 h, respectively. The decreased expression of VEGF mRNA reflected that gene silencing capability of lipo/siRNA. With incubation time extended from 24 to 48 h, the capability was enhanced (P<0.05). However, in 72 h, no further reduction was observed compared to that in 48 h, although the significantly downregulated expression could be observed (P<0.05 vs control, Dox and bCPN-PDR), implying that the silencing effect could reach the plateau period after 48 h and last for 72 h at least. The CPN-PR and CPN-PDR have shown similar behavior to lipo/siRNA. Compared with the control, a remarkable decrease of VEGF mRNA expression was observed (P<0.01) at 48 h. There was no significant difference in expression levels between CPN-PR and CPN-PDR (P>0.05) at any time point, implying their comparable gene delivery and transfection capability. More importantly, this capability was equivalent to the commercial Lipofectamine 2000. 
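The article reports VEGF mRNA levels normalized to β-actin with untreated cells set to 100%, but it does not spell out the formula. Assuming the standard comparative-Ct (2^-ΔΔCt) approach that such semiquantitative real-time PCR analyses typically use, a minimal sketch would look as follows; all Ct values are invented for illustration.

    # Relative VEGF mRNA expression by the comparative-Ct method (assumed, not stated in the paper).
    def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
        delta_ct_sample = ct_target - ct_reference               # normalize to beta-actin in the sample
        delta_ct_control = ct_target_ctrl - ct_reference_ctrl    # normalize in the untreated control
        delta_delta_ct = delta_ct_sample - delta_ct_control
        return 2 ** (-delta_delta_ct) * 100.0                    # percent of untreated control (control = 100%)

    # Invented Ct values: treated sample vs untreated control.
    print(f"{relative_expression(26.4, 17.0, 24.8, 17.1):.1f}% of control VEGF mRNA")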
Expression level of VEGF protein Based on the results of real-time PCR, the gene-silencing effect was enhanced with time and reached a plateau after 48 h. In this study, the Western blot technique was employed to observe VEGF protein expression at 48 h. The results are shown in Figure 4. GAPDH was selected as the inner control; all the GAPDH bands were found to be equivalent in gray levels, and the amount of protein expression was positively correlated with the shade of gray in the imaging picture. From Figure 4, the gray levels of both control and bCPN-PDR, which were similar, were obvious. However, siRNA-loaded nanoparticles including lipo/siRNA, CPN-PR and CPN-PDR demonstrated a notably lower gray level. Furthermore, as marked with the red rectangle in Figure 4, expression of VEGF protein in CPN-PDR could hardly be identified. This suggested that co-delivery of anti-VEGF siRNA and Dox could downregulate the relevant VEGF protein level, which was in agreement with the results from semiquantitative real-time PCR. Taken together, these results indicate that nanoparticles loaded with anti-VEGF siRNA downregulated both the protein and the mRNA expression of VEGF. Inhibition of cell proliferation in vitro The ultimate goal of siRNA intracellular delivery is to restrain the proliferation of cancer cells. Different treatments against MCF-7 cells were performed to further investigate the inhibition effect. As shown in Figure 5A, an obvious concentration dependence in terms of DOX, CPN-PD and CPN-PDR could be observed after 48 h incubation. We calculated the differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX at each dose. The results indicated that there was a significant difference between CPN-PDR and DOX at each dose except 0.05 µM, and a significant difference between CPN-PDR and CPN-PD at the dose of 0.002 µM. It was suggested that the in vitro antitumor activity of CPN-PDR was equivalent to that of CPN-PD, and both of them showed a higher antitumor activity than free drug DOX, which was mainly attributed to the design of the multifunctional nano-vectors. We also calculated the statistical differences between CPN-PDR vs CPN-PD and CPN-PDR vs DOX over the concentration range of Dox (0.002, 0.01, 0.05, 0.25 and 0.5 µM) using Student’s t-test.
After statistical calculation, the significant differences were verified to exist between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.01). After calculation with professional software, the half maximal inhibitory concentrations (IC50) are presented in Figure 5B. They are 0.0367±0.0088 µM for Dox, 0.0266±0.0027 µM for CPN-PD and 0.0126±0.0022 µM for CPN-PDR, respectively. The IC50 of DOX was 2.913-fold higher than that of CPN-PDR, and the IC50 of CPN-PD was 2.110-fold higher than that of CPN-PDR. After statistical calculation, significant differences were found between CPN-PDR vs CPN-PD (P<0.05) and CPN-PDR vs DOX (P<0.05). Together with the results from real-time PCR and Western blot, the increased cytotoxicity of CPN-PDR probably resulted from siRNA delivery and successful transfection. This was because the intracellular delivery of anti-VEGF siRNA could downregulate the expression of VEGF protein. The anti-VEGF therapy may sensitize cancer cells to chemotherapy, directly kill cancer cells, block the blood supply of the tumor and improve drug penetration. Taken together, the anti-VEGF siRNA and Dox co-delivery platform could induce increased cytotoxicity against cancer cells. Hemolysis test To investigate the safety of intravenous injection of CPN-PDR, a hemolysis test was performed.
Hemolysis test To investigate the safety of intravenous injection of CPN-PDR, a hemolysis test was performed. As shown in Table 1, different volume ratios of CPN-PDR to 2% red blood cell suspension were selected. After incubation and centrifugation, the tubes were observed by the naked eye. As shown in Figure 6, no red blood cell hemolysis was observed in tube 6, to which 100% PBS was added as a negative control. For tube 7, 100% double distilled water was added and red cell hemolysis could be clearly observed. For tubes 1–5, in which different percentages of CPN-PDR were added, no red blood cell hemolysis occurred. Hemolysis of red blood cells releases hemoprotein, which can be detected using UV-vis spectrophotometry. After scanning the absorbance spectrum of hemoprotein, 577 nm was selected as the detection wavelength. According to Equation 3, hemolysis ratios were calculated from the recorded absorbance and are shown in Table 2. All the hemolysis ratios of tubes 1–5 were <5%, implying that no hemolysis was induced by the addition of CPN-PDR.
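Equation 3 itself is not reproduced in this excerpt; a commonly used form of the hemolysis ratio is (A_sample − A_negative) / (A_positive − A_negative) × 100%, with absorbance read at 577 nm. The sketch below assumes that form and uses hypothetical absorbance readings, not the study's raw data.

```python
def hemolysis_ratio(a_sample: float, a_negative: float, a_positive: float) -> float:
    """Percent hemolysis relative to the PBS (negative) and distilled-water
    (positive) controls, assuming the conventional form of Equation 3."""
    return (a_sample - a_negative) / (a_positive - a_negative) * 100.0

# Hypothetical absorbance readings at 577 nm (illustrative only).
a_neg = 0.05   # tube 6: 100% PBS
a_pos = 1.20   # tube 7: 100% double-distilled water
tubes = {"tube 1": 0.06, "tube 2": 0.07, "tube 3": 0.07, "tube 4": 0.08, "tube 5": 0.09}

for name, a in tubes.items():
    ratio = hemolysis_ratio(a, a_neg, a_pos)
    verdict = "non-hemolytic (<5%)" if ratio < 5 else "hemolytic"
    print(f"{name}: {ratio:.1f}% -> {verdict}")
```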
Tissue section In this study, a histological analysis of organs (heart, liver, spleen, lung and kidney) was performed to evaluate whether CPN-PDR could cause tissue damage, inflammation or lesions. Kunming mice were injected with CPN-PDR or normal saline (NS) via the tail vein. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After the standard procedure of HE staining for histological examination, the organs were observed under a microscope. Mice without any treatment were chosen as a control. As shown in Figure 7, compared with the control group, there was no visible histological difference between the mice administered CPN-PDR and those administered NS, implying the safety of CPN-PDR in vivo. Characteristics of nanoparticles: In the optimal formulation of CPN-PDR, the weight ratio of DOX to siRNA was 1:1; therefore, the siRNA loading content was equal to the DOX loading content. Given that the molar ratio of CGA-ODNs to Dox was 1:5 and the weight ratio of CPN/PEI/ODNs to siRNA was 4:1:1, the siRNA and Dox loading efficiency in CPN-PDR was calculated to be 4.93%. Cytotoxicity of materials: In the co-delivery platform, CGA-ODNs were selected as carriers of DOX; the mixture of CGA-ODNs-Dox and siRNA (abbreviated as DR) was then complexed electrostatically with PEI to obtain Dox and siRNA co-loaded nanoparticles, PEI/DR (PDR). The copolymer CPN was employed as a multifunctional material.25 In order to evaluate the safety of the carrier, MTT assays were carried out to evaluate the cytotoxicity of CGA-ODNs and CPN. In the meantime, PEI, a toxic cationic polymer, was also tested in this experiment. As shown in Figure 2A, the viability of MCF-7 cells treated with CGA-ODNs and CPN at any concentration was observed to be stable and >80% within 48 h. After 72 h incubation, the viability of MCF-7 cells treated with CPN was still >80%, but after treatment with higher concentrations of CGA-ODNs (2.5 and 5 µM), the cell viability decreased to about 60%. This indicated that CGA-ODNs and CPN had no obvious cytotoxicity. However, the viability of MCF-7 cells treated with PEI was clearly dependent on the PEI concentration: with increasing concentration, PEI-treated MCF-7 cells showed lower cell viability. Moreover, this concentration-dependent trend became more obvious when the incubation time increased from 24 h to 72 h. Cytotoxicity of blank nanoparticles: Cell viability of MCF-7 cells was also monitored to investigate the cytotoxicity of the non-RNA- and non-drug-loaded carriers, bPDR and bCPN-PDR. As shown in Figure 2B, after treatment with bCPN-PDR, the cell viability was stable at around 95% and was concentration independent at 24 h. Meanwhile, bPDR-treated cells showed lower cell viability with time. With increased concentration, bPDR showed higher cytotoxicity, inducing 80% cell death at 72 h. Compared with the bPDR group, the cell viabilities of the bCPN-PDR group were significantly higher (P<0.05). This phenomenon could be explained by the higher positive charge of bPDR, which arises from its PEI component; coating with the nontoxic, negatively charged CPN (>80% cell viability in Figure 2A) induced charge reversal of bPDR and thereby markedly decreased its toxicity to cells.
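The viability percentages above come from MTT assays; the exact readout is not shown in this excerpt, but viability is conventionally expressed as the absorbance of treated wells relative to untreated control wells. The sketch below assumes that convention and uses hypothetical OD570 readings, not the study's raw data.

```python
import statistics

def viability_percent(od_treated, od_control, od_blank=0.0):
    """Percent viability relative to untreated controls (standard MTT convention)."""
    return (statistics.mean(od_treated) - od_blank) / (statistics.mean(od_control) - od_blank) * 100.0

# Hypothetical OD570 readings for triplicate wells (illustrative only).
untreated = [0.82, 0.85, 0.80]
cpn_48h   = [0.78, 0.76, 0.79]   # e.g., cells incubated with CPN for 48 h
pei_72h   = [0.18, 0.21, 0.19]   # e.g., cells incubated with a high PEI concentration for 72 h

print(f"CPN, 48 h: {viability_percent(cpn_48h, untreated):.1f}% viability")
print(f"PEI, 72 h: {viability_percent(pei_72h, untreated):.1f}% viability")
```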
Expression level of VEGF mRNA: The transfection efficiency of CPN-PDR compared to Lipofectamine 2000 was demonstrated by delivering VEGF-siRNA into MCF-7 cells. The expression level of VEGF mRNA was measured using real-time PCR, and the results are shown in Figure 3. The expression level of VEGF mRNA was normalized to the β-actin expression and calculated based on the semiquantitative method. Untreated cells were selected as the control, and their expression level was set as 100%. For the lipo/siRNA group, the relative VEGF mRNA expression levels were 61.98%±6.96%, 31.93%±6.43% and 37.02%±3.17% of the control at 24, 48 and 72 h, respectively. The decreased expression of VEGF mRNA reflected the gene-silencing capability of lipo/siRNA. When the incubation time was extended from 24 to 48 h, this capability was enhanced (P<0.05). However, at 72 h, no further reduction was observed compared to 48 h, although significantly downregulated expression could still be observed (P<0.05 vs control, Dox and bCPN-PDR), implying that the silencing effect reached a plateau after 48 h and lasted for at least 72 h. CPN-PR and CPN-PDR showed behavior similar to lipo/siRNA. Compared with the control, a remarkable decrease of VEGF mRNA expression was observed (P<0.01) at 48 h. There was no significant difference in expression levels between CPN-PR and CPN-PDR (P>0.05) at any time point, implying their comparable gene delivery and transfection capability. More importantly, this capability was equivalent to that of the commercial Lipofectamine 2000.
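The semiquantitative normalization used for the VEGF mRNA levels above is not spelled out in this excerpt; the 2^(-ΔΔCt) method with β-actin as the reference gene is the usual choice, so the sketch below assumes it. The Ct values are hypothetical and only illustrate how a relative expression level (untreated control set to 100%) is obtained.

```python
def relative_expression_percent(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the 2^(-ddCt) method, untreated control = 100%."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct) * 100.0

# Hypothetical Ct values (VEGF target, beta-actin reference); not the study's raw data.
control = {"VEGF": 24.0, "ACTB": 17.0}
cpn_pdr_48h = {"VEGF": 25.6, "ACTB": 17.1}

remaining = relative_expression_percent(
    cpn_pdr_48h["VEGF"], cpn_pdr_48h["ACTB"], control["VEGF"], control["ACTB"]
)
print(f"VEGF mRNA remaining after treatment: {remaining:.1f}% of control")
```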
Expression level of VEGF protein: Based on the results of real-time PCR, the gene-silencing effect was enhanced with time and reached a plateau after 48 h. In this study, the Western blot technique was employed to observe VEGF protein expression at 48 h. The results are shown in Figure 4. GAPDH was selected as the internal control; all the GAPDH bands were found to be equivalent in gray level, and the amount of protein expression was positively correlated with the shade of gray in the imaging picture. In Figure 4, both the control and bCPN-PDR lanes showed similar, clearly visible bands. However, siRNA-loaded nanoparticles including lipo/siRNA, CPN-PR and CPN-PDR demonstrated a notably lower gray level. Furthermore, as marked with the red rectangle in Figure 4, expression of VEGF protein in CPN-PDR could hardly be identified. This suggested that co-delivery of anti-VEGF siRNA and Dox could downregulate the VEGF protein level, which was in agreement with the results from semiquantitative real-time PCR. Taken together, these results indicate that nanoparticles loaded with anti-VEGF siRNA downregulated both the protein and the mRNA expression of VEGF. Inhibition of cell proliferation in vitro: The ultimate goal of siRNA intracellular delivery is to restrain the proliferation of cancer cells. Different treatments against MCF-7 cells were performed to further investigate the inhibition effect. As shown in Figure 5A, an obvious concentration dependence in terms of DOX, CPN-PD and CPN-PDR could be observed after 48 h incubation. We calculated the differences between CPN-PDR and CPN-PD and between CPN-PDR and DOX at each dose. The results indicated that there was a significant difference between CPN-PDR and DOX at each dose except 0.05 µM, and there was a significant difference between CPN-PDR and CPN-PD at a dose of 0.002 µM. It was suggested that the in vitro antitumor activity of CPN-PDR was equivalent to that of CPN-PD, and both of them showed higher antitumor activity than the free drug DOX, which was mainly attributed to the design of the multifunctional nano-vectors. We also calculated the statistical differences between CPN-PDR and CPN-PD and between CPN-PDR and DOX over the tested Dox concentration range (0.002, 0.01, 0.05, 0.25 and 0.5 µM) using Student's t-test. After statistical calculation, significant differences were verified between CPN-PDR and CPN-PD (P<0.05) and between CPN-PDR and DOX (P<0.01). After calculation with professional software, the half maximal inhibitory concentrations (IC50) are presented in Figure 5B. They are 0.0367±0.0088 µM for Dox, 0.0266±0.0027 µM for CPN-PD and 0.0126±0.0022 µM for CPN-PDR, respectively. The IC50 of DOX was 2.913-fold higher than that of CPN-PDR, and the IC50 of CPN-PD was 2.110-fold higher than that of CPN-PDR. After statistical calculation, significant differences were found between CPN-PDR and CPN-PD (P<0.05) and between CPN-PDR and DOX (P<0.05). Combined with the results from real-time PCR and Western blot, the increased cytotoxicity of CPN-PDR probably resulted from siRNA delivery and successful transfection, because the intracellular delivery of anti-VEGF siRNA could downregulate the expression of VEGF protein. Anti-VEGF therapy may sensitize or kill cancer cells, block the blood supply of the tumor, and improve drug penetration. Taken together, the anti-VEGF siRNA and Dox co-delivery platform could induce increased cytotoxicity against cancer cells. Hemolysis test: To investigate the safety of intravenous injection of CPN-PDR, a hemolysis test was performed. As shown in Table 1, different volume ratios of CPN-PDR to 2% red blood cell suspension were selected. After incubation and centrifugation, the tubes were observed by the naked eye. As shown in Figure 6, no red blood cell hemolysis was observed in tube 6, to which 100% PBS was added as a negative control. For tube 7, 100% double distilled water was added and red cell hemolysis could be clearly observed. For tubes 1–5, in which different percentages of CPN-PDR were added, no red blood cell hemolysis occurred. Hemolysis of red blood cells releases hemoprotein, which can be detected using UV-vis spectrophotometry. After scanning the absorbance spectrum of hemoprotein, 577 nm was selected as the detection wavelength. According to Equation 3, hemolysis ratios were calculated from the recorded absorbance and are shown in Table 2. All the hemolysis ratios of tubes 1–5 were <5%, implying that no hemolysis was induced by the addition of CPN-PDR. Tissue section: In this study, a histological analysis of organs (heart, liver, spleen, lung and kidney) was performed to evaluate whether CPN-PDR could cause tissue damage, inflammation or lesions. Kunming mice were injected with CPN-PDR or normal saline (NS) via the tail vein. After 1 week, all the mice were sacrificed, and the heart, liver, spleen, lung and kidney were separated. After the standard procedure of HE staining for histological examination, the organs were observed under a microscope. Mice without any treatment were chosen as a control. As shown in Figure 7, compared with the control group, there was no visible histological difference between the mice administered CPN-PDR and those administered NS, implying the safety of CPN-PDR in vivo. Conclusion: In summary, the delivery system materials were demonstrated to be nontoxic and biocompatible. The nontoxic and negatively charged copolymer CPN coating could significantly decrease the cytotoxicity of the cationic core, bPDR. The obtained co-delivery system CPN-PDR was also confirmed with enhanced cytotoxicity against tumor cells. Gene transfection efficiency induced by CPN-PDR was fairly equal with that of the commercial product Lipofectamine 2000, implying the successful intracellular delivery and good transfection of siRNA. Moreover, the preliminary evaluation of safety showed CPN-PDR to have good biocompatibility, so it has great potential for further exploitation and clinical application.
With the development of aptamer technology, the promising application prospects of this novel oligodeoxynucleotide-based co-loading platform will further increase.
Background: We previously developed a simple, effective system based on oligodeoxynucleotides with CGA repeating units (CGA-ODNs) for Dox and siRNA intracellular co-delivery. Methods: In the present study, the in vitro cytotoxicity, gene transfection and in vivo safety of the co-delivery system were further characterized and discussed. Results: Compared with poly(ethyleneimine) (PEI), both CGA-ODNs and the pH-sensitive targeted coating, o-carboxymethyl-chitosan (CMCS)-poly(ethylene glycol) (PEG)-asparagine-glycine-arginine (NGR) (CMCS-PEG-NGR, CPN), showed no obvious cytotoxicity within 72 h. The excellent transfection capability of CPN-coated Dox and siRNA co-loaded nanoparticles (CPN-PDR) was confirmed by real-time PCR and Western blot analysis. It was calculated that there was no significant difference in silencing efficiency among Lipo/siRNA, CPN-modified siRNA-loaded nanoparticles (CPN-PR) and CPN-PDR. Furthermore, CPN-PDR was observed to be significantly more toxic than free Dox and CPN-modified Dox-loaded nanoparticles (CPN-PD), implying its higher antitumor potential. Both hemolysis tests and histological assessment implied that CPN-PDR was safe for intravenous injection, with nontoxicity and good biocompatibility in vitro and in vivo. Conclusions: The results indicated that CPN-PDR could be a potentially promising co-delivery carrier for enhanced antitumor therapy.
Characteristics of nanoparticles: In the optimal formulation of CPN-PDR, the weight ratio of DOX to siRNA was 1:1; therefore, the siRNA loading content was equal to the DOX loading content. Based on the facts that the molar ratio of CGA-ODNs to Dox was 1:5, the weight ratio of CPN/PEI/ODNs and siRNA was 4:1:1, and consequently, siRNA and Dox loading efficiency in CPN-PDR was calculated to be 4.93%. Conclusion: In summary, the delivery system materials were demonstrated to be nontoxic and biocompatible. The nontoxic and negatively charged copolymer CPN coating could significantly decrease the cytotoxicity of the cationic core, bPDR. The obtained co-delivery system CPN-PDR was also confirmed with enhanced cytotoxicity against tumor cells. Gene transfection efficiency induced by CPN-PDR was fairly equal with that of the commercial product Lipofectamine 2000, implying the successful intracellular delivery and good transfection of siRNA. Moreover, the preliminary evaluation of safety showed CPN-PDR to have good biocompatibility, so it has great potential for further exploitation and clinical application. With the development of aptamer technology, the promising application prospects of this novel oligodeoxynucleotide-based co-loading platform will further increase.
Background: We previously developed a simple, effective system based on oligodeoxynucleotides with CGA repeating units (CGA-ODNs) for Dox and siRNA intracellular co-delivery. Methods: In the present study, the in vitro cytotoxicity, gene transfection and in vivo safety of the co-delivery system were further characterized and discussed. Results: Compared with poly(ethyleneimine) (PEI), both CGA-ODNs and the pH-sensitive targeted coating, o-carboxymethyl-chitosan (CMCS)-poly(ethylene glycol) (PEG)-asparagine-glycine-arginine (NGR) (CMCS-PEG-NGR, CPN), showed no obvious cytotoxicity within 72 h. The excellent transfection capability of CPN-coated Dox and siRNA co-loaded nanoparticles (CPN-PDR) was confirmed by real-time PCR and Western blot analysis. It was calculated that there was no significant difference in silencing efficiency among Lipo/siRNA, CPN-modified siRNA-loaded nanoparticles (CPN-PR) and CPN-PDR. Furthermore, CPN-PDR was observed to be significantly more toxic than free Dox and CPN-modified Dox-loaded nanoparticles (CPN-PD), implying its higher antitumor potential. Both hemolysis tests and histological assessment implied that CPN-PDR was safe for intravenous injection, with nontoxicity and good biocompatibility in vitro and in vivo. Conclusions: The results indicated that CPN-PDR could be a potentially promising co-delivery carrier for enhanced antitumor therapy.
12,903
273
[ 1055, 232, 129, 191, 228, 112, 137, 203, 279, 126, 142, 132, 3750, 185, 288, 222, 451, 207, 152, 138 ]
24
[ "cpn", "pdr", "cells", "cpn pdr", "sirna", "dox", "vegf", "cell", "expression", "time" ]
[ "sirna targeting multidrug", "interfering rnas sirnas", "sirna chemotherapeutic drugs", "effects chemotherapy rnai", "chemotherapy rnai selection" ]
null
[CONTENT] co-delivery | doxorubicin | VEGF | cytotoxicity | transfection [SUMMARY]
[CONTENT] co-delivery | doxorubicin | VEGF | cytotoxicity | transfection [SUMMARY]
null
[CONTENT] co-delivery | doxorubicin | VEGF | cytotoxicity | transfection [SUMMARY]
[CONTENT] co-delivery | doxorubicin | VEGF | cytotoxicity | transfection [SUMMARY]
[CONTENT] co-delivery | doxorubicin | VEGF | cytotoxicity | transfection [SUMMARY]
[CONTENT] Animals | Antineoplastic Agents | Chitosan | Doxorubicin | Drug Carriers | Drug Delivery Systems | Endosomes | Humans | Hydrogen-Ion Concentration | MCF-7 Cells | Mice | Nanoparticles | Oligodeoxyribonucleotides | Oligopeptides | Polyethylene Glycols | RNA, Small Interfering | Transfection [SUMMARY]
[CONTENT] Animals | Antineoplastic Agents | Chitosan | Doxorubicin | Drug Carriers | Drug Delivery Systems | Endosomes | Humans | Hydrogen-Ion Concentration | MCF-7 Cells | Mice | Nanoparticles | Oligodeoxyribonucleotides | Oligopeptides | Polyethylene Glycols | RNA, Small Interfering | Transfection [SUMMARY]
null
[CONTENT] Animals | Antineoplastic Agents | Chitosan | Doxorubicin | Drug Carriers | Drug Delivery Systems | Endosomes | Humans | Hydrogen-Ion Concentration | MCF-7 Cells | Mice | Nanoparticles | Oligodeoxyribonucleotides | Oligopeptides | Polyethylene Glycols | RNA, Small Interfering | Transfection [SUMMARY]
[CONTENT] Animals | Antineoplastic Agents | Chitosan | Doxorubicin | Drug Carriers | Drug Delivery Systems | Endosomes | Humans | Hydrogen-Ion Concentration | MCF-7 Cells | Mice | Nanoparticles | Oligodeoxyribonucleotides | Oligopeptides | Polyethylene Glycols | RNA, Small Interfering | Transfection [SUMMARY]
[CONTENT] Animals | Antineoplastic Agents | Chitosan | Doxorubicin | Drug Carriers | Drug Delivery Systems | Endosomes | Humans | Hydrogen-Ion Concentration | MCF-7 Cells | Mice | Nanoparticles | Oligodeoxyribonucleotides | Oligopeptides | Polyethylene Glycols | RNA, Small Interfering | Transfection [SUMMARY]
[CONTENT] sirna targeting multidrug | interfering rnas sirnas | sirna chemotherapeutic drugs | effects chemotherapy rnai | chemotherapy rnai selection [SUMMARY]
[CONTENT] sirna targeting multidrug | interfering rnas sirnas | sirna chemotherapeutic drugs | effects chemotherapy rnai | chemotherapy rnai selection [SUMMARY]
null
[CONTENT] sirna targeting multidrug | interfering rnas sirnas | sirna chemotherapeutic drugs | effects chemotherapy rnai | chemotherapy rnai selection [SUMMARY]
[CONTENT] sirna targeting multidrug | interfering rnas sirnas | sirna chemotherapeutic drugs | effects chemotherapy rnai | chemotherapy rnai selection [SUMMARY]
[CONTENT] sirna targeting multidrug | interfering rnas sirnas | sirna chemotherapeutic drugs | effects chemotherapy rnai | chemotherapy rnai selection [SUMMARY]
[CONTENT] cpn | pdr | cells | cpn pdr | sirna | dox | vegf | cell | expression | time [SUMMARY]
[CONTENT] cpn | pdr | cells | cpn pdr | sirna | dox | vegf | cell | expression | time [SUMMARY]
null
[CONTENT] cpn | pdr | cells | cpn pdr | sirna | dox | vegf | cell | expression | time [SUMMARY]
[CONTENT] cpn | pdr | cells | cpn pdr | sirna | dox | vegf | cell | expression | time [SUMMARY]
[CONTENT] cpn | pdr | cells | cpn pdr | sirna | dox | vegf | cell | expression | time [SUMMARY]
[CONTENT] ratio | loading | sirna | dox loading | weight ratio | dox | weight | loading content | content | odns [SUMMARY]
[CONTENT] triplicate results reported | significant value | considered significant value 05 | groups considered significant value | groups considered significant | groups considered | triplicate results | significant value 05 | test differences experimental | considered significant [SUMMARY]
null
[CONTENT] application | good | delivery | delivery system | nontoxic | cpn | system | cpn pdr | co | transfection [SUMMARY]
[CONTENT] cpn | pdr | cells | sirna | cpn pdr | dox | vegf | cell | abs | odns [SUMMARY]
[CONTENT] cpn | pdr | cells | sirna | cpn pdr | dox | vegf | cell | abs | odns [SUMMARY]
[CONTENT] CGA | CGA | Dox [SUMMARY]
[CONTENT] [SUMMARY]
null
[CONTENT] CPN-PDR [SUMMARY]
[CONTENT] CGA | CGA | Dox ||| ||| PEI | CGA | CMCS)-poly(ethylene | NGR | CMCS-PEG-NGR | CPN | 72 | CPN | Dox | CPN-PDR | PCR | Western ||| Lipo | CPN | CPN-PDR ||| CPN-PDR | CPN | CPN-PD ||| CPN-PDR ||| CPN-PDR [SUMMARY]
[CONTENT] CGA | CGA | Dox ||| ||| PEI | CGA | CMCS)-poly(ethylene | NGR | CMCS-PEG-NGR | CPN | 72 | CPN | Dox | CPN-PDR | PCR | Western ||| Lipo | CPN | CPN-PDR ||| CPN-PDR | CPN | CPN-PD ||| CPN-PDR ||| CPN-PDR [SUMMARY]
A Novel Signature Based on m6A RNA Methylation Regulators Reveals Distinct Prognostic Subgroups and Associates with Tumor Immunity of Patients with Pancreatic Neuroendocrine Neoplasms.
35609514
The RNA N6-methyladenosine (m6A) regulators play a crucial role in tumorigenesis and could be indicators of prognosis and therapeutic targets in various cancers. However, the expression status and prognostic value of m6A regulators have not been studied in pancreatic neuroendocrine neoplasms (PanNENs). We aimed to investigate the expression patterns and prognostic value of m6A regulators and assess their correlations with immune checkpoints and infiltrates in PanNENs.
INTRODUCTION
Immunohistochemistry was performed for 15 m6A regulators and immune markers using tissue microarrays obtained from 183 patients with PanNENs. The correlation between m6A protein expression and clinicopathological parameters with recurrence-free survival (RFS) was examined using a random survival forest, Cox regression model, and survival tree analysis.
METHODS
Among the 15 m6A proteins, high expression of YTHDF2 (p < 0.001) and HNRNPC (p = 0.006) was found to be significantly associated with recurrence and served as independent risk factors in multivariate analysis. High YTHDF2 expression was associated with higher number of CD3+ T cells (p = 0.003), whereas high HNRNPC expression was significantly correlated with the expression of PD-L1 (p = 0.039). A YTHDF2-based signature was determined, including five patterns from survival tree analysis: patients with the LNnegYTHDF2high signature had a 5-year RFS rate of 92.1%, whereas patients with LNposTumorSize<2.5 cm signature had the worst 5-year RFS rate of 0% (p < 0.001). The area under receiver operating characteristic curve was 0.870 (95% confidence interval: 0.762-0.915) for the YTHDF2-based signature. The C-index was 0.978, suggesting good discrimination ability; moreover, the risk score of recurrence served as an independent prognostic factor indicating shorter RFS.
RESULTS
YTHDF2 appears to serve as a promising prognostic biomarker and therapeutic target. A YTHDF2-based signature can identify distinct subgroups, which may be helpful to strategize personalized postoperative monitoring.
CONCLUSIONS
[ "Humans", "Methylation", "Prognosis", "Adenosine", "RNA", "Multivariate Analysis", "Neoplasms" ]
9808770
Introduction
Pancreatic neuroendocrine neoplasms (PanNENs) are a rare, heterogeneous group of neoplasms originating from the neuroendocrine cells and classified as functional and nonfunctional depending on whether the tumor overproduces biologically active hormones [1]. However, the incidence of PanNENs has significantly increased in the last decades and has risen to 0.8 per 100,000 individuals per year [2]. This increase may be attributed, at least in part, to the improved and more frequent use of imaging in clinical diagnostic practices [3]. Moreover, both patients with nonfunctional or functional tumors are always diagnosed at a late stage and usually present locally advanced or metastatic disease [4, 5]. Current clinical strategies for most patients with PanNENs primarily involve surgical resection [6]. One of the most significant causes of concern in patients with a PanNEN is cancer recurrence after curative resection. However, clinical outcomes after surgical resection vary widely, with recurrence-free survival (RFS) ranging from 0.96 to 121.9 months [7]. The most frequently used cancer staging system is based on the WHO classification which categorizes tumors into four groups based on proliferative activity and morphology: NET G1, NET G2, NET G3, and NEC [8]. However, patients with PanNEN with the same grade of tumor may demonstrate different clinical courses, whereas patients with low-grade PanNENs show unpredictable disease progression and outcomes [9]. Therefore, a clinically feasible and accurate prediction of the risk of relapse in patients with PanNENs after resection is needed. N6-methyladenosine (m6A) is the most common epigenetic modification of the mRNA, comprising >60% of all RNA-based modifications [10]. The m6A methylation levels are greatly associated with the immune response, stress response regulation, tumorigenesis, and miRNA processing [11, 12, 13, 14, 15, 16]. The m6A methylation plays a crucial role in disease progression of various cancers [11, 12, 16] as well as in the onset and progression of fetal growth restriction and preeclampsia [17, 18]. The modification of m6A is regulated by methyltransferases (writers) and demethylases (erasers), and m6A performs multiple functions through its interactions with specific binding proteins known as the readers [19]. The writers mainly include KIAA1429, METTL3, WTAP, METTL14, RBM15, and METTL16, which promote m6A RNA methylation. The erasers encompass fat mass and obesity-associated protein (FTO) and alkylation repair homolog protein 5 (ALKBH5). The readers comprise HNRNPC, IGF2BP2, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3, which recognize RNA m6A binding site [19, 20, 21]. Increasing amount of evidence has emerged which shows the vital role of m6A regulators in tumorigenesis and prognosis of various cancers. Chen et al. [22] reported that METTL14 inhibited colorectal cancer (CRC) progression through primary miR-375 processing-based regulation. Cives et al. [23] revealed that METTL3 could promote the proliferation and migration of hepatocellular carcinoma. A previous study found that high YTHDF1 expression predicted a poor clinical outcome in CRC patients [24]. Additionally, the low expression of FTO was associated with a shorter RFS in patients with intrahepatic cholangiocarcinoma [25]. Overexpression of FTO suppressed acute myeloid leukemia cell differentiation [26]. Growing evidence suggests that m6A modification and its regulators play vital roles in human cancers [11, 12, 22, 23, 24, 25, 26]. 
m6A modification is associated with tumorigenesis, proliferation, invasion, and metastasis [22, 23, 24], and m6A regulators have served as indicators of prognosis and as therapeutic targets [25, 26]. Moreover, epigenetic modification, and especially methylation, has already been described as an important event during the tumorigenesis of PanNENs; methylation is an epigenetic regulatory mechanism for gene expression in PanNENs, including MEN1, which is involved in chromatin remodeling in PanNENs [27]. However, whether m6A regulators could also serve as specific biomarkers in PanNENs is yet to be determined, and the expression status, functional roles, and clinical significance of m6A methylation regulators remain largely unknown in PanNENs. In addition, recent studies have shown the prognostic relevance of immune infiltrates and immune checkpoints in pancreatic neuroendocrine tumors [23, 28]. However, the correlation between m6A regulators and immune infiltration and immune checkpoints is yet to be explored. To better identify patients at risk and improve their follow-up management, we explored the expression profiles of m6A proteins in resected PanNENs and investigated the potential prognostic value of m6A proteins. We also analyzed the association among common immune infiltrates, immune checkpoints, and m6A regulators. Furthermore, a YTHDF2-based signature was established, which may help in postoperative monitoring and risk stratification to guide clinical treatment options.
null
null
Results
Patients' Characteristics A total of 96 patients were male (52.4%), whereas 87 patients were female (47.6%). The tumor diameter was >2.5 cm in 53.1% of the patients (97/183). The number of positive lymph nodes ranged from 1 to 29. A total of 50.7% of tumors were nonfunctional, and the majority of functional tumors were classified as insulinoma (76/90). Most of the tumors (175/183) were well differentiated, and eight were poorly differentiated (Table 1). The median follow-up time was 39 months (range: 1–197 months), during which there were 43 relapses and 10 deaths, and all of the deaths were owing to disease progression. The 3- and 5-year RFS rates were 83.6% and 77.0%, respectively. Expression Patterns of the 15 m6A Regulators We found that differences in the expression levels of m6A proteins between PanNENs and normal tissues were evident (Fig. 1a). The expression levels of writers (i.e., KIAA1429, METTL14, RBM15, WTAP, and METTL16), readers (i.e., IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3), and erasers (ALKBH5 and FTO) were higher in PanNEN tissues than in normal tissues (p < 0.001). No statistically significant difference in METTL3 expression was evident between normal and PanNEN tissues. Regarding METTL3, METTL14, METTL16, WTAP, RBM15, and KIAA1429 expression, a high level was observed in 26.7%, 11.5%, 35.5%, 64.4%, 67.7%, and 55.7% of tumors, respectively. Among the 183 tumors, 36.6% were deemed to have high FTO expression and 72.1% had high ALKBH5 expression. Regarding reader molecules (i.e., IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3), immunostaining revealed low expression in 64.5%, 95.1%, 72.1%, 57.9%, 84.7%, 50.3%, and 96.2%, respectively, of these samples and high expression in the remaining samples (Fig. 1b). The immunohistochemical patterns of the 15 m6A proteins are shown in Figure 1c and online supplementary Figure S1. All 183 PanNENs showed nuclear immunoreactivity for METTL3, METTL14, METTL16, WTAP, RBM15, KIAA1429, FTO, ALKBH5, HNRNPC, and YTHDC1. The positive sites of IGF2BP2, YTHDC2, YTHDF1, YTHDF2, and YTHDF3 were located in the cytoplasm. Associations among m6A Regulators, Immune Markers, and Clinicopathological Parameters in PanNENs Among the 15 proteins studied, only YTHDF2 and HNRNPC expressions had significant prognostic value. Of the 43 patients with recurrence, 12 (27.9%) showed low YTHDF2 expression, which was dramatically lower than the proportion of patients without recurrence having low YTHDF2 expression (85 of 140, 60.7%; p = 0.003; Fig. 1d). Similarly, 88.4% of the patients (38 of 43 patients) with recurrence showed low tumoral HNRNPC expression; in contrast, 97.1% of the patients without recurrence (136 of 140 patients) showed low tumoral expression of HNRNPC (p = 0.034; Fig. 1d). The log-rank test revealed that both high HNRNPC and YTHDF2 expressions predicted a significantly shortened RFS (p = 0.003 and p = 0.001, respectively; Fig. 1e, f). In the correlation analysis between YTHDF2 and HNRNPC expression and clinicopathological variables, we found that YTHDF2 expression was positively associated with tumor size (p = 0.021), nodal status (p = 0.007), functional status (p = 0.010), Ki67 index (p = 0.023), and liver metastasis (p < 0.001). There were no significant associations between YTHDF2 expression and the other clinicopathological variables, including age, sex, tumor site, lymphovascular invasion (LVI), and perineural invasion (PNI) (online suppl. Table S3). High YTHDF2 expression was associated with a higher number of CD3+ T cells (p = 0.003). There were no significant associations between YTHDF2 expression and the other immune markers (PD-L1, B7-H3, CD4, and CD68). However, high HNRNPC expression was significantly correlated with positive PD-L1 expression (p = 0.039) and high densities of CD3+ T cells. Moreover, there were no significant associations between HNRNPC expression and other immune markers (Table 2). Representative images of PD-L1, B7-H3, CD3, CD4, CD15, and CD68 are presented in online supplementary Figure S2.
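The YTHDF2-recurrence association above can be checked from the stated counts (12 of 43 relapsed patients vs. 85 of 140 non-relapsed patients with low YTHDF2 expression). The sketch below applies scipy's chi-square and Fisher's exact tests to that 2 x 2 table, consistent with the tests named in the methods; the exact p value depends on the test variant (for example, continuity correction), so it will not necessarily match the reported value to the last digit.

```python
from scipy.stats import chi2_contingency, fisher_exact

#                  YTHDF2 low   YTHDF2 high
table = [[12, 43 - 12],      # patients with recurrence
         [85, 140 - 85]]     # patients without recurrence

chi2, p_chi2, dof, _ = chi2_contingency(table)
_, p_fisher = fisher_exact(table)
print(f"chi-square: chi2 = {chi2:.2f} (dof = {dof}), p = {p_chi2:.4g}")
print(f"Fisher's exact test: p = {p_fisher:.4g}")
```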
Prognostic Predictors of RFS in PanNENs A univariate Cox analysis for both the clinicopathological parameters and m6A proteins was conducted. Eight clinicopathological factors (tumor size, lymph node status, PNI, LVI, grade, functional status, liver metastasis, and Ki67 index) and seven m6A proteins (METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, and YTHDF3) were identified to be associated with RFS, with p values of <0.05 (Table 3). Considering the analysis of collinear factors, tumor grade was found to be associated with Ki67 index and liver metastasis, and LVI was associated with lymph node status. Therefore, PNI, tumor size, lymph node status, grade, functional status, METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, and YTHDF3 were included in the multivariate Cox regression model. Multivariate Cox regression analysis identified tumor size, lymph node status, tumor grade, PNI, HNRNPC expression, and YTHDF2 expression as independent risk factors for RFS after excluding collinear factors. Next, we applied an RSF model to evaluate the importance of these independent risk factors. In the variable importance analysis, we found that the VIMP of HNRNPC was −0.0011 (Fig. 2a). In the minimal depth analysis, HNRNPC, PNI, and grade had values of 2.405, 2.482, and 2.183, respectively, which were very close to or greater than the depth threshold (Fig. 2b). Therefore, they were all excluded from the RSF model. We also performed survival analyses to determine the prognostic value of YTHDF2 mRNA expression in patients with PanNETs in the ICGC cohort. The Kaplan-Meier plot revealed that high YTHDF2 mRNA expression was significantly associated with poor RFS (p = 0.017, Fig. 2c). Furthermore, three variables (tumor size, lymph node status, and YTHDF2 expression) significantly affected RFS. Nodal status had the largest effect on RFS, followed by YTHDF2 expression and tumor size.
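The Cox modelling described above can be set up with the lifelines package. The data frame below is hypothetical (column names and values are illustrative only), and the study itself used SPSS and R rather than this code; the same call accepts additional 0/1 covariate columns for a multivariate model.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level data: RFS in months, recurrence event flag,
# and a 0/1 coding of YTHDF2 immunohistochemical expression.
df = pd.DataFrame({
    "rfs_months":  [6, 12, 18, 24, 30, 36, 48, 60, 72, 84],
    "recurrence":  [1,  1,  1,  0,  1,  0,  0,  1,  0,  0],
    "ythdf2_high": [1,  1,  0,  1,  1,  0,  0,  1,  0,  0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="rfs_months", event_col="recurrence")
cph.print_summary()               # hazard ratio with 95% CI and p value
print(cph.concordance_index_)     # Harrell's C-index of the fitted model
```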
Construction and Evaluation of the Prognostic Model for Predicting RFS in PanNENs To construct a recurrence signature that can classify patients into subpopulations according to RFS, we further performed a recursive partitioning analysis. After pruning the decision trees using the post-pruning method, five terminal nodes representing a recurrence signature were identified (Fig. 2d). LNposTumorSize<2.5 cm (node 1; 20.8 months) and LNposTumorSize≥2.5 cmYTHDF2high (node 3; 21.1 months) patients had worse median RFS than LNnegYTHDF2high patients (node 5; 39.6 months, node 1 vs. node 5, p < 0.001), LNnegYTHDF2low patients (node 4; 43.2 months), and LNposTumorSize≥2.5 cmYTHDF2low patients (node 2; 29.3 months). Patients with the LNnegYTHDF2high (node 5) signature had a 5-year RFS rate of 92.1%, whereas patients with the LNposTumorSize<2.5 cm (node 1) signature had the worst 5-year RFS rate of 0% (Fig. 3a). To better identify risk groups for the recurrence signatures, we performed pair-wise comparisons for each recurrence signature subpopulation (on RFS) and the corresponding risk score of recurrence. The results showed that patients within the five terminal nodes can be categorized into three risk groups: low (node 4), intermediate (node 2 and node 5), and high (node 1 and node 3) with well-separated RFS curves (p < 0.001; Fig. 3b). Among the risk groups, the 5-year RFS rates were 6.0%, 70.4%, and 90.1% for the high-, intermediate-, and low-risk groups, respectively. To further test our proposed recurrence signature, we performed a subanalysis on a specific group of patients with nonsyndromic and nonfunctional PanNENs. A total of 93 PanNENs from the full cohort were eligible for this analysis (online suppl. Table S4). The Kaplan-Meier RFS curves showed clear separation among the risk groups (p = 0.018; Fig. 3c). Furthermore, a risk score was generated to evaluate the risk of recurrence based on variables selected from the survival tree analysis: Risk Score = (1.218 × Nodal Status) + (1.625 × Tumor Size) + (0.745 × expression value of YTHDF2) (Table 4). Multivariate analysis using a Cox proportional hazards model was performed which included PNI, tumor size, nodal status, grade, functional status, METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, YTHDF3, and the risk score. The risk score was determined to be an independent prognostic factor for patients with resected PanNEN in our study based on the multivariate Cox regression, and higher risk scores were found to be associated with shorter survival (hazard ratio: 33.04, 95% CI: 4.341–251.434; p = 0.001) (Table 5). Specificity and sensitivity comparisons were performed via time-dependent ROC curve analysis of the risk score. The predictive accuracy of the recurrence signature was relatively high, and the area under the ROC curve was 0.858 (95% CI: 0.747–0.913) at 1 year, 0.824 (95% CI: 0.754–0.912) at 3 years, and 0.870 (95% CI: 0.762–0.915) at 5 years (Fig. 3d). The C-index was 0.978 (95% CI: 0.936–1), suggesting good discrimination ability. The calibration curves also showed good agreement between the predicted and observed RFS (Fig. 3e).
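The published risk score can be turned into a small helper for stratifying patients; the coefficients come from the formula above, and the 0/1 codings and the median cutoff of 1.625 are taken from the methods. Whether a score exactly equal to the cutoff falls in the low- or high-score group is not stated, so the tie-handling in the comparison below is an assumption.

```python
def recurrence_risk_score(node_positive: bool, tumor_ge_2_5_cm: bool, ythdf2_high: bool) -> float:
    """Risk score = 1.218 x nodal status + 1.625 x tumor size + 0.745 x YTHDF2,
    with each variable coded 0/1 as described in the methods."""
    return 1.218 * node_positive + 1.625 * tumor_ge_2_5_cm + 0.745 * ythdf2_high

MEDIAN_CUTOFF = 1.625  # median risk score reported in the methods

patients = {
    "LN-, <2.5 cm, YTHDF2 low":   (False, False, False),
    "LN-, >=2.5 cm, YTHDF2 high": (False, True,  True),
    "LN+, <2.5 cm, YTHDF2 high":  (True,  False, True),
}

for label, (ln, size, ythdf2) in patients.items():
    score = recurrence_risk_score(ln, size, ythdf2)
    group = "high-score" if score > MEDIAN_CUTOFF else "low-score"  # assumption: ties go to low
    print(f"{label}: score = {score:.3f} -> {group} (cutoff {MEDIAN_CUTOFF})")
```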
null
null
[ "Patient Cohort and Follow-Up", "Tissue Microarrays and Immunohistochemistry", "Evaluation of Immunostaining", "Risk Score Models and Random Survival Forest", "Statistical Analysis", "Patients' Characteristics", "Expression Patterns of the 15 m6A Regulators", "Associations among m6A Regulators, Immune Markers, and Clinicopathological Parameters in PanNENs", "Prognostic Predictors of RFS in PanNENs", "Construction and Evaluation of the Prognostic Model for Predicting RFS in PanNENs", "Statement of Ethics", "Funding Sources", "Author Contributions" ]
[ "A total of 183 patients with PanNENs who were admitted to the Peking Union Medical College Hospital (PUMCH) and underwent surgeries during the time period 2004–2019 with sufficient tumor tissue for evaluation were included in the current study. The FFPE specimens and matching hematoxylin and eosin slides were retrieved. Patients' clinicopathological data, including age, sex, primary tumor site, tumor size, tumor grading, functional status, and surgical procedures, were collected from the medical records. The American Joint on Cancer Committee (AJCC) 8th edition stages for each patient were defined based on the collected data. Survival and recurrence information were obtained from medical record reviews and telephone interviews. Recurrence and distant metastasis were determined based on biochemical markers, clinical multidisciplinary consultation, radiological, and/or histological examinations. The time between the surgery and tumor recurrence or last follow-up appointment was termed as RFS. Disease-specific survival was calculated from the surgery date to the time of patient death or last follow-up till time point of December 29, 2020. This retrospective study was approved by the Institutional Review Board of PUMCH (approval number: S-K1593) and conducted in accordance with the Declaration of Helsinki. Written consent for this study was not required and formally waived by the hospital's Ethics Committee.", "Representative areas of tumors and adjacent normal tissue were selected from hematoxylin and eosin-stained paraffin blocks and then re-embedded into recipient tissue microarray (TMA) blocks (12 × 8 arrays). The diameter of each core was 2 mm.\nImmunohistochemical analysis was performed according to the protocol described by Zong et al. [29]. The 4-μm TMA sections were deparaffinized. Heat-mediated antigen retrieval was performed using sodium citrate buffer (pH 6.0) for 10 min. The endogenous peroxidase activity was quenched using a 3% hydrogen peroxide solution (ZSGB-BIO, Beijing, China) and then the sections were blocked with 5% normal goat serum for 30 min. TMA sections were incubated with primary antibodies against m6A methylation regulators (METTL3, METTL14, METTL16, WTAP, KIAA1429, RBM15, FTO, ALKBH5, IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3) and immune markers (PD-L1, B7-H3, CD3, CD4, CD15, CD68) overnight at 4°C; the details and optimal dilutions are summarized in online supplementary Table S1 (see www.karger.com/doi/10.1159/000525228 for all online suppl. material). Stromal cells, normal acinar cells, ductal cells, and islet cells from normal tissues were incubated with primary antibodies and used as internal positive controls, whereas the same tissues without primary antibodies comprised the negative controls.", "The immunostaining performed on the collected patient samples was independently assessed by two pathologists (XL Chen and SW Mo) who were blinded to the patients' clinicopathological features and clinical outcomes. Both pathologists reexamined the slides and reached a consensus in case of any discrepancy. The immunoreactivity of 15 m6A markers, PD-L1, and B7-H3 in tumor specimens was quantified using the method developed by Budwit-Novotny et al. [30]. In brief, the percentage of positively stained tumor cells was multiplied by the relative intensity of specific staining, which was assigned a value of negative (0), weak (1), distinct (2), and strong (3) [30, 31]. 
The density of stromal CD3, CD4, CD15, and CD68 were quantified in four ×400 high-power fields and the mean counts of four fields were used for statistical analysis as we described earlier [32]. The cutoff values of m6A markers and immune markers were determined using X-tile (Yale University, New Haven, CT, USA). These cutoff values are summarized in the online supplementary Table S2.", "A random survival forest (RSF) model was constructed based on the minimal depth and variable importance (VIMP). VIMP and minimal depth were used to assess the true effect of the variable on RFS. Given that feature subsets of variables are randomly selected using RSF method, RSF can process high-dimensional (many variables) data. In our current study, we included more than ten variables. Through training of the RSF algorithm, the variables which were the most related to recurrence were selected using minimal depth and VIMP. And in the training process, the interaction between variables can be detected. Moreover, when creating random forest, unbiased estimation is used for generalization error, and the model has strong generalization ability [33]. If a large part of the features is lost, the accuracy can still be maintained [33]. Based on these strengths of RSF, this method was helpful to select relapse-related variables and construct a recurrence signature. Recurrence-specific variables defined by a VIMP of >0 and minimal depth less than the depth threshold were finally entered into a survival tree analysis [33]. The recurrence-specific variables, including nodal status, tumor size, YTHDF2, were selected by using minimal depth and VIMP. By integrating the expression level of YTHDF2, nodal status, and tumor size as well as their corresponding coefficients which were derived from the multivariate Cox model constructed only using these 3 variables, a risk score was generated, as represented below: Risk Score = (1.218 × Nodal Status) + (1.625 × Tumor Size) + (0.745 × expression value of YTHDF2).\nNodal status was divided into negative/positive and scored as 0/1; tumor size was divided into <2.5 cm/≥2.5 cm and scored as 0/1, and expression value of YTHDF2 was divided into low/high and scored as 0/1, and these scores were multiplied by the corresponding coefficients to generate a risk score. The median (median = 1.625) was used as the cutoff value for the risk score.", "The continuous and categorical variables were described as median (range) and frequency (percentage), respectively. The Mann-Whitney U test was conducted to compare nonnormally distributed continuous variables, whereas a Student's t test was used to analyze normally distributed continuous variables. The χ2 test or Fisher's exact test was used to evaluate the relationship between the expression of m6A methylation regulators and categorical variables. The survival curves were plotted by Kaplan-Meier method, and the log-rank test was employed to compare the survival curves generated. The Cox proportional hazard regression model was used to estimate the hazard ratio with a 95% confidence interval (CI) for variables associated with RFS. Potential risk factors with a p value of <0.05 in the univariate Cox analysis were entered into the multivariate Cox regression model (backward Wald) after considering collinearity among the variables. We evaluated the discrimination ability of the final model with Harrell concordance index (C-index). 
The area under the time-dependent receiver operating characteristic (ROC) curve at different cutoff times was measured as predictive performance. Calibration was assessed by visual examination of the calibration plot. p values of <0.05 were considered statistically significant. Statistical analyses were two-sided and accomplished using SPSS software (version 21.0; IBM Corp., Armonk, NY, USA). Statistical analyses of RNA sequencing data from International Cancer Genome Consortium (ICGC) database (https://www.icgc.org/) were conducted with R version 3.5.0 (http://www.r-project.org).", "A total of 96 patients were male (52.4%), whereas 87 patients were female (47.6%). The tumor diameter was >2.5 cm in 53.1% of the patients (97/183). The number of positive lymph nodes ranged from 1 to 29. A total of 50.7% tumors were nonfunctional, and the majority of functional tumors were classified as insulinoma (76/90). Most of the tumors (175/183) were well differentiated, and eight were poorly differentiated (Table 1). The median follow-up time was 39 months (range: 1–197 months), during which there were 43 relapses and 10 deaths, and all of the deaths were owing to disease progression. The 3- and 5-year RFS rates were 83.6% and 77.0%, respectively.", "We found that the expression levels of m6A proteins in PanNENs and normal tissues were evident (Fig. 1a). The expression levels of writers (i.e., KIAA1429, METTL14, RBM15, WTAP, and METTL16), readers (i.e., IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3), and erasers (ALKBH5 and FTO) were higher in PanNEN tissues than in normal tissues (p < 0.001). No statistically significant difference was evident between the normal and PanNENs tissues in the context of METTL3 expression level. Regarding METTL3, METTL14, METTL16, WTAP, RBM15, and KIAA1429 expression, a high level was observed in 26.7%, 11.5%, 35.5%, 64.4%, 67.7%, and 55.7% of tumors, respectively. Among the 183 tumors, 36.6% were deemed to have high FTO expression and 72.1% had high ALKBH5 expression. Regarding reader molecules (i.e., IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3), immunostaining revealed low expression in 64.5%, 95.1%, 72.1%, 57.9%, 84.7%, 50.3%, and 96.2%, respectively, of these samples and high expression in the remaining samples (Fig. 1b). The immunohistochemical patterns of the 15 m6A proteins are shown in Figure 1c and online supplementary Figure S1. All 183 PanNENs showed nuclear immunoreactivity for METTL3, METTL14, METTL16, WTAP, RBM15, KIAA1429, FTO, ALKBH5, HNRNPC, and YTHDC1. The positive sites of IGF2BP2, YTHDC2, YTHDF1, YTHDF2, and YTHDF3 were located in the cytoplasm.", "Among the 15 proteins studied, only YTHDF2 and HNRNPC expressions had significant prognostic value. Of the 43 patients with recurrence, 12 (27.9%) showed low YTHDF2 expression, which was dramatically lower than the proportion of patients without recurrence having low YTHDF2 expression (85 of 140, 60.7%; p = 0.003; Fig. 1d). Similarly, 88.4% of the patients (38 of 43 patients) with recurrence showed low tumoral HNRNPC expression; in contrast, 97.1% of the patients without recurrence (136 of 140 patients) showed low tumoral expression of HNRNPC (p = 0.034; Fig. 1d). The log-rank test revealed that both high HNRNPC and YTHDF2 expressions predicted a significantly shortened RFS (p = 0.003 and p = 0.001, respectively; Fig. 1e, f). 
In the correlation analysis between YTHDF2 and HNRNPC expression and the clinicopathological variables, YTHDF2 expression was positively associated with tumor size (p = 0.021), nodal status (p = 0.007), functional status (p = 0.010), Ki67 index (p = 0.023), and liver metastasis (p < 0.001). There were no significant associations between YTHDF2 expression and the other clinicopathological variables, including age, sex, tumor site, lymphovascular invasion (LVI), and perineural invasion (PNI) (online suppl. Table S3).\nHigh YTHDF2 expression was associated with a higher density of CD3+ T cells (p = 0.003), whereas there were no significant associations between YTHDF2 expression and the other immune markers (PD-L1, B7-H3, CD4, CD15, and CD68). High HNRNPC expression was significantly correlated with positive PD-L1 expression (p = 0.039) and high densities of CD3+ T cells, with no significant associations between HNRNPC expression and the other immune markers (Table 2). Representative images of PD-L1, B7-H3, CD3, CD4, CD15, and CD68 staining are presented in online supplementary Figure S2.", "A univariate Cox analysis of both the clinicopathological parameters and the m6A proteins was conducted. Eight clinicopathological factors (tumor size, lymph node status, PNI, LVI, grade, functional status, liver metastasis, and Ki67 index) and seven m6A proteins (METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, and YTHDF3) were associated with RFS, with p values of <0.05 (Table 3). In the collinearity analysis, tumor grade was associated with Ki67 index and liver metastasis, and LVI was associated with lymph node status. Therefore, PNI, tumor size, lymph node status, grade, functional status, METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, and YTHDF3 were included in the multivariate Cox regression model. After excluding collinear factors, multivariate Cox regression analysis identified tumor size, lymph node status, tumor grade, PNI, HNRNPC expression, and YTHDF2 expression as independent risk factors for RFS. Next, we applied an RSF model to evaluate the importance of these independent risk factors. In the variable importance analysis, the VIMP of HNRNPC was −0.0011 (Fig. 2a). In the minimal depth analysis, HNRNPC, PNI, and grade had values of 2.405, 2.482, and 2.183, respectively, which were close to or greater than the depth threshold (Fig. 2b); they were therefore all excluded from the RSF model. We also performed survival analyses to determine the prognostic value of YTHDF2 mRNA expression in patients with PanNET in the ICGC cohort. The Kaplan-Meier plot revealed that high YTHDF2 mRNA expression was significantly associated with poor RFS (p = 0.017; Fig. 2c). Thus, three variables (tumor size, lymph node status, and YTHDF2 expression) significantly affected RFS, with nodal status having the largest effect, followed by YTHDF2 expression and tumor size.", "To construct a recurrence signature that classifies patients into subpopulations according to RFS, we further performed a recursive partitioning analysis. After pruning the decision trees with the post-pruning method, five terminal nodes representing a recurrence signature were identified (Fig. 2d). Patients who were lymph node (LN)-positive with tumor size <2.5 cm (node 1; median RFS 20.8 months) and those who were LN-positive with tumor size ≥2.5 cm and high YTHDF2 expression (node 3; 21.1 months) had worse median RFS than LN-negative patients with high YTHDF2 expression (node 5; 39.6 months; node 1 vs.
node 5, p < 0.001), LN-negative patients with low YTHDF2 expression (node 4; 43.2 months), and LN-positive patients with tumor size ≥2.5 cm and low YTHDF2 expression (node 2; 29.3 months). Patients with the node 5 signature (LN-negative, YTHDF2-high) had a 5-year RFS rate of 92.1%, whereas patients with the node 1 signature (LN-positive, tumor size <2.5 cm) had the worst 5-year RFS rate of 0% (Fig. 3a). To better define risk groups for the recurrence signature, we performed pairwise comparisons of RFS among the recurrence signature subpopulations together with their corresponding recurrence risk scores. The results showed that patients within the five terminal nodes could be categorized into three risk groups, low (node 4), intermediate (nodes 2 and 5), and high (nodes 1 and 3), with well-separated RFS curves (p < 0.001; Fig. 3b). The 5-year RFS rates were 6.0%, 70.4%, and 90.1% for the high-, intermediate-, and low-risk groups, respectively. To further test the proposed recurrence signature, we performed a subanalysis of a specific group of patients with nonsyndromic, nonfunctional PanNENs; 93 PanNENs from the full cohort were eligible for this analysis (online suppl. Table S4). The Kaplan-Meier RFS curves showed clear separation among the risk groups (p = 0.018; Fig. 3c).\nFurthermore, a risk score was generated to evaluate the risk of recurrence based on the variables selected from the survival tree analysis: Risk Score = (1.218 × Nodal Status) + (1.625 × Tumor Size) + (0.745 × expression value of YTHDF2) (Table 4). A multivariate Cox proportional hazards analysis was performed that included PNI, tumor size, nodal status, grade, functional status, METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, YTHDF3, and the risk score. In this multivariate Cox regression, the risk score was an independent prognostic factor for patients with resected PanNENs, and higher risk scores were associated with shorter survival (hazard ratio: 33.04, 95% CI: 4.341–251.434; p = 0.001) (Table 5). Specificity and sensitivity were assessed via time-dependent ROC curve analysis of the risk score. The predictive accuracy of the recurrence signature was relatively high, with areas under the ROC curve of 0.858 (95% CI: 0.747–0.913) at 1 year, 0.824 (95% CI: 0.754–0.912) at 3 years, and 0.870 (95% CI: 0.762–0.915) at 5 years (Fig. 3d). The C-index was 0.978 (95% CI: 0.936–1), suggesting good discrimination ability. The calibration curves also showed good agreement between the predicted and observed RFS (Fig. 3e).", "The study was approved by the Institutional Review Board of Peking Union Medical College Hospital (approval number S-K1593) and conducted in accordance with the Declaration of Helsinki. Written consent for this study was not required and was formally waived by the hospital's Ethics Committee.", "This work was supported by grants from the Chinese Academy of Medical Sciences Initiative for Innovative Medicine (CAMS-2016-I2M-1-001), the National Scientific Data Sharing Platform for Population and Health (NCMI-YF01N-201906), and the National Natural Science Foundation of China (No. 81672648).", "Jie Chen made substantial contributions to the conception, design, and critical revision of the manuscript. Shengwei Mo, Liju Zong, Shuangni Yu, and Zhaohui Lu made substantial contributions to tissue microarray construction. Shengwei Mo made substantial contributions to data acquisition. Xianlong Chen and Shengwei Mo performed analysis of the data and drafting of the manuscript.
All the authors read and approved the final manuscript." ]
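As an illustration of the random survival forest step described in the Methods above, a minimal sketch using the scikit-survival package is shown below. The input file, feature matrix, and column names are hypothetical, and permutation-based importance is used as a stand-in for VIMP (minimal depth is not computed by this library), so this outlines the general workflow rather than reproducing the original analysis.

```python
# Illustrative random survival forest for RFS using scikit-survival.
# The CSV file and column names are hypothetical; permutation importance is a
# stand-in for VIMP (the original analysis also used minimal depth).
import pandas as pd
from sklearn.inspection import permutation_importance
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

df = pd.read_csv("pannen_features.csv")  # hypothetical per-patient table
features = ["nodal_status", "tumor_size_ge_2_5cm", "ythdf2_high",
            "hnrnpc_high", "pni", "grade"]  # illustrative 0/1 covariates
X = df[features]
y = Surv.from_arrays(event=df["recurrence"].astype(bool), time=df["rfs_months"])

rsf = RandomSurvivalForest(n_estimators=1000, min_samples_leaf=10,
                           max_features="sqrt", n_jobs=-1, random_state=0)
rsf.fit(X, y)
print(f"concordance index on the training data: {rsf.score(X, y):.3f}")

# Permutation importance: mean drop in concordance when a feature is shuffled.
imp = permutation_importance(rsf, X, y, n_repeats=20, random_state=0)
ranked = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])
for name, mean_imp in ranked:
    print(f"{name}: {mean_imp:.4f}")
```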
[ "Introduction", "Materials and Methods", "Patient Cohort and Follow-Up", "Tissue Microarrays and Immunohistochemistry", "Evaluation of Immunostaining", "Risk Score Models and Random Survival Forest", "Statistical Analysis", "Results", "Patients' Characteristics", "Expression Patterns of the 15 m6A Regulators", "Associations among m6A Regulators, Immune Markers, and Clinicopathological Parameters in PanNENs", "Prognostic Predictors of RFS in PanNENs", "Construction and Evaluation of the Prognostic Model for Predicting RFS in PanNENs", "Discussion/Conclusion", "Statement of Ethics", "Conflict of Interest Statement", "Funding Sources", "Author Contributions", "Data Availability Statement", "Supplementary Material" ]
[ "Pancreatic neuroendocrine neoplasms (PanNENs) are a rare, heterogeneous group of neoplasms originating from the neuroendocrine cells and classified as functional and nonfunctional depending on whether the tumor overproduces biologically active hormones [1]. However, the incidence of PanNENs has significantly increased in the last decades and has risen to 0.8 per 100,000 individuals per year [2]. This increase may be attributed, at least in part, to the improved and more frequent use of imaging in clinical diagnostic practices [3]. Moreover, both patients with nonfunctional or functional tumors are always diagnosed at a late stage and usually present locally advanced or metastatic disease [4, 5].\nCurrent clinical strategies for most patients with PanNENs primarily involve surgical resection [6]. One of the most significant causes of concern in patients with a PanNEN is cancer recurrence after curative resection. However, clinical outcomes after surgical resection vary widely, with recurrence-free survival (RFS) ranging from 0.96 to 121.9 months [7]. The most frequently used cancer staging system is based on the WHO classification which categorizes tumors into four groups based on proliferative activity and morphology: NET G1, NET G2, NET G3, and NEC [8]. However, patients with PanNEN with the same grade of tumor may demonstrate different clinical courses, whereas patients with low-grade PanNENs show unpredictable disease progression and outcomes [9]. Therefore, a clinically feasible and accurate prediction of the risk of relapse in patients with PanNENs after resection is needed.\nN6-methyladenosine (m6A) is the most common epigenetic modification of the mRNA, comprising >60% of all RNA-based modifications [10]. The m6A methylation levels are greatly associated with the immune response, stress response regulation, tumorigenesis, and miRNA processing [11, 12, 13, 14, 15, 16]. The m6A methylation plays a crucial role in disease progression of various cancers [11, 12, 16] as well as in the onset and progression of fetal growth restriction and preeclampsia [17, 18]. The modification of m6A is regulated by methyltransferases (writers) and demethylases (erasers), and m6A performs multiple functions through its interactions with specific binding proteins known as the readers [19]. The writers mainly include KIAA1429, METTL3, WTAP, METTL14, RBM15, and METTL16, which promote m6A RNA methylation. The erasers encompass fat mass and obesity-associated protein (FTO) and alkylation repair homolog protein 5 (ALKBH5). The readers comprise HNRNPC, IGF2BP2, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3, which recognize RNA m6A binding site [19, 20, 21]. Increasing amount of evidence has emerged which shows the vital role of m6A regulators in tumorigenesis and prognosis of various cancers. Chen et al. [22] reported that METTL14 inhibited colorectal cancer (CRC) progression through primary miR-375 processing-based regulation. Cives et al. [23] revealed that METTL3 could promote the proliferation and migration of hepatocellular carcinoma. A previous study found that high YTHDF1 expression predicted a poor clinical outcome in CRC patients [24]. Additionally, the low expression of FTO was associated with a shorter RFS in patients with intrahepatic cholangiocarcinoma [25]. Overexpression of FTO suppressed acute myeloid leukemia cell differentiation [26]. Growing evidence suggests that m6A modification and its regulators play vital roles in human cancers [11, 12, 22, 23, 24, 25, 26]. 
m6A modification is associated with tumorigenesis, proliferation, invasion, and metastasis [22, 23, 24], and m6A regulators have been reported as prognostic indicators and therapeutic targets [25, 26]. Moreover, epigenetic modification, and methylation in particular, has already been described as an important event during the tumorigenesis of PanNENs; methylation is an epigenetic regulatory mechanism of gene expression in PanNENs, including of MEN1, which is involved in chromatin remodeling in PanNENs [27]. However, whether m6A regulators could also serve as specific biomarkers in PanNENs is yet to be determined, and the expression status, functional roles, and clinical significance of m6A methylation regulators remain largely unknown in PanNENs. In addition, recent studies have reported the prognostic value of immune infiltrates and immune checkpoints in pancreatic neuroendocrine tumors [23, 28]; however, the correlation between m6A regulators and immune infiltration and immune checkpoints has yet to be explored.\nTo better identify patients at risk and improve their follow-up management, we explored the expression profiles of m6A proteins in resected PanNENs and investigated their potential prognostic value. We also analyzed the associations among common immune infiltrates, immune checkpoints, and m6A regulators. Furthermore, a YTHDF2-based signature was established, which may help in postoperative monitoring and risk stratification to guide clinical treatment options.", "Patient Cohort and Follow-Up A total of 183 patients with PanNENs who were admitted to the Peking Union Medical College Hospital (PUMCH), underwent surgery during 2004–2019, and had sufficient tumor tissue for evaluation were included in the current study. The FFPE specimens and matching hematoxylin and eosin slides were retrieved. Patients' clinicopathological data, including age, sex, primary tumor site, tumor size, tumor grading, functional status, and surgical procedures, were collected from the medical records. The American Joint Committee on Cancer (AJCC) 8th edition stage for each patient was defined based on the collected data. Survival and recurrence information was obtained from medical record reviews and telephone interviews. Recurrence and distant metastasis were determined based on biochemical markers, clinical multidisciplinary consultation, and radiological and/or histological examinations. The time between surgery and tumor recurrence or the last follow-up appointment was defined as RFS. Disease-specific survival was calculated from the surgery date to the time of patient death or last follow-up, up to December 29, 2020. This retrospective study was approved by the Institutional Review Board of PUMCH (approval number: S-K1593) and conducted in accordance with the Declaration of Helsinki. Written consent for this study was not required and was formally waived by the hospital's Ethics Committee.\nTissue Microarrays and Immunohistochemistry Representative areas of tumors and adjacent normal tissue were selected from hematoxylin and eosin-stained paraffin blocks and then re-embedded into recipient tissue microarray (TMA) blocks (12 × 8 arrays). The diameter of each core was 2 mm.\nImmunohistochemical analysis was performed according to the protocol described by Zong et al. [29]. The 4-μm TMA sections were deparaffinized. Heat-mediated antigen retrieval was performed using sodium citrate buffer (pH 6.0) for 10 min. Endogenous peroxidase activity was quenched using a 3% hydrogen peroxide solution (ZSGB-BIO, Beijing, China), and the sections were then blocked with 5% normal goat serum for 30 min. TMA sections were incubated with primary antibodies against the m6A methylation regulators (METTL3, METTL14, METTL16, WTAP, KIAA1429, RBM15, FTO, ALKBH5, IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3) and immune markers (PD-L1, B7-H3, CD3, CD4, CD15, CD68) overnight at 4°C; the details and optimal dilutions are summarized in online supplementary Table S1 (see www.karger.com/doi/10.1159/000525228 for all online suppl. material). Stromal cells, normal acinar cells, ductal cells, and islet cells from normal tissues were incubated with primary antibodies and used as internal positive controls, whereas the same tissues without primary antibodies comprised the negative controls.\nEvaluation of Immunostaining The immunostaining performed on the collected patient samples was independently assessed by two pathologists (XL Chen and SW Mo) who were blinded to the patients' clinicopathological features and clinical outcomes. Both pathologists reexamined the slides and reached a consensus in case of any discrepancy. The immunoreactivity of the 15 m6A markers, PD-L1, and B7-H3 in tumor specimens was quantified using the method developed by Budwit-Novotny et al. [30]. In brief, the percentage of positively stained tumor cells was multiplied by the relative intensity of specific staining, which was assigned a value of negative (0), weak (1), distinct (2), or strong (3) [30, 31]. The densities of stromal CD3, CD4, CD15, and CD68 were quantified in four ×400 high-power fields, and the mean counts of the four fields were used for statistical analysis as we described earlier [32]. The cutoff values of the m6A markers and immune markers were determined using X-tile (Yale University, New Haven, CT, USA) and are summarized in online supplementary Table S2.", "Identification of predictors of relapse in patients with PanNENs is important. Previous studies investigating risk predictors identified tumor size, lymph node status, WHO grade, Ki67 index, tertiary lymphoid structures, and tumor-infiltrating neutrophils as possible predictors of recurrence [34, 35, 36, 37]. However, a practical and effective model is urgently needed. In our study, we constructed a risk model for recurrence with a corresponding risk score to stratify patients on the basis of RFS. The recurrence signature classifies patients into three risk groups independent of grade, with each group representing a distinct RFS outcome.\nTo the best of our knowledge, this is the first large-scale study exploring these 15 m6A proteins as potential relapse biomarkers for PanNENs, and none of them has previously been described in PanNENs.
To date, none of these 15 proteins had been described in PanNENs. High expression of METTL14 was observed in 11.5% of the entire cohort, and high METTL14 expression predicted poor RFS outcomes (p = 0.024), similar to pancreatic cancer [38], whereas opposite results have been reported in other cancer types, such as CRC [39] and bladder cancer [40]. METTL16, another m6A methyltransferase, was also a predictor of relapse and was associated with poor RFS in this cohort (p = 0.023); however, recent studies suggested that METTL16 deletion is correlated with poor disease-specific survival or RFS in hepatocellular carcinoma [41], and no difference was found between patients with METTL16-high and METTL16-low CRC [42]. We found that patients with low ALKBH5 expression had a better clinical outcome. Recent studies have also suggested that YTHDF1 and YTHDF3 play a vital role in various types of cancers [43, 44]. Liu et al. [44] showed that YTHDF1 promotes ovarian cancer progression by augmenting EIF3C translation, and its high expression indicated a poor clinical outcome. The expression of YTHDF1 and YTHDF3 was reported to be associated with poor prognosis in breast cancer patients [43]. We therefore examined the prognostic value of these two readers in patients with resected PanNENs and found that high expression of both YTHDF1 and YTHDF3 predicted a significantly shorter RFS (p = 0.003 and p = 0.002, respectively).\nYTHDF2 is a key m6A reader that selectively binds to m6A sites. Recent reports have described the ability of YTHDF2 to degrade the mRNAs of both tumor promoters and tumor suppressors, thereby affecting inflammatory responses, vascular abnormalization, and the self-renewal of leukemic stem cells [45, 46, 47, 48, 49]. Compared with normal tissues, YTHDF2 expression was upregulated in prostate cancer, pancreatic cancer, and lung adenocarcinoma; moreover, high YTHDF2 expression was associated with unfavorable clinicopathological parameters and poor survival, which is consistent with our study [50, 51, 52]. However, Shen et al. [53] found that the expression level of YTHDF2 was lower in gastric cancer than in normal gastric tissues, and low YTHDF2 expression was correlated with poor prognosis. In our PanNENs, high YTHDF2 expression was observed in 49.7% of the tumors and was strongly associated with poorer RFS (p = 0.0013); the same correlation was observed in an independent ICGC cohort (p = 0.017), which validated this finding. We also identified associations between YTHDF2 expression and clinicopathological features, including liver metastasis (p < 0.001), although we did not determine YTHDF2 expression in metastatic lesions. After adjusting for other covariables, high YTHDF2 expression was identified as an independent risk factor for recurrence in patients with PanNENs, suggesting a possible role of YTHDF2 in metastasis. Recent studies revealed complex biological functions of YTHDF2 in different types of cancers. YTHDF2 could mediate mRNA degradation of tumor suppressors (LHPP and NKX3-1) to promote the proliferation and migration of prostate cancer cells [50]. In pancreatic cancer, YTHDF2 knockdown resulted in epithelial-mesenchymal transition through the YAP pathway, promoting the invasion and migration of cancer cells [51]. Another study showed that upregulated YTHDF2 decreased FOXC2 expression to inhibit the proliferation, invasion, and migration of gastric cancer cells [53]. 
However, the regulatory mechanisms of YTHDF2 require further investigation in PanNENs.\nAnother key finding is that HNRNPC has prognostic significance for PanNEN recurrence. High HNRNPC expression was observed more frequently in patients with recurrence than in those without recurrence and was associated with a significantly shortened RFS. HNRNPC contributes to tumorigenesis, which may partially explain the association between high HNRNPC expression and the high recurrence rate in PanNENs. Additionally, we observed a positive correlation between HNRNPC expression and tumoral PD-L1 expression (p < 0.001). No existing literature describes the influence of HNRNPC on PD-L1 expression in PanNEN cells. Therefore, detailed mechanistic investigations are required to understand the functional link between HNRNPC and PD-L1 in PanNENs.\nRegular follow-up is currently the main method for postoperative monitoring of patients and allows early identification of recurrence. However, an important concern with postoperative monitoring is the balance of effectiveness against cost, because radiation exposure over prolonged follow-up periods can be harmful to the patient [54]. Current international guidelines of the European Society for Medical Oncology provide follow-up recommendations after PanNEN resection based on grade [55]. In our study, PanNENs were classified into three risk groups: low-, intermediate-, and high-risk groups. Interestingly, grade 2 tumors were distributed across different recurrence risk groups. These results imply that, owing to significant heterogeneity in disease behavior, particularly among grade 2 tumors, such distinctions are insufficient and postoperative surveillance should not be based merely on grade. Considering the high accuracy of the recurrence signature and its independence from grade, postoperative follow-up regimens could be customized on the basis of these alternative recurrence signatures and risk groups. In addition, adjuvant therapy for high-risk patients may be useful in improving clinical outcomes, although this strategy will need to be validated prospectively in future studies.\nThe frequency of high YTHDF2 expression (47.1%) in patients with PanNEN and its correlation with recurrence may have important therapeutic implications, with potential for the development and clinical use of YTHDF2 inhibitors for the treatment of high YTHDF2-expressing tumors. High HNRNPC expression was correlated with recurrence and PD-L1 expression, which may allow the development of a combinatorial therapeutic regimen using an HNRNPC inhibitor and a PD-L1 monoclonal antibody. Recent studies have shown that genomic mutations in PanNENs can help classify patients beyond tumor grade and identify novel prognostic markers and therapeutic targets that could be relevant in the future for adjuvant therapy [56]. It will be interesting to investigate in prospective clinical trials whether such a clinical approach is feasible in PanNENs with high HNRNPC expression.\nSome limitations exist in the current study, despite several valuable findings. First, its retrospective nature has inherent limitations. Second, the study was performed in patients from a single institution. Although we conducted a subgroup analysis to validate the recurrence signature in a selected group, additional external validation is required to investigate whether these signatures are useful markers in other cohorts. 
However, the low prevalence of PanNEN in the population is an obstacle to conducting studies with larger sample sizes. Therefore, clinical data from multiple centers and prospective evidence are required to validate these results.\nIn conclusion, this is the first study to explore the expression profiles of m6A proteins and the association between m6A regulators and the immune microenvironment in PanNENs. We identified high YTHDF2 and HNRNPC expression as independent risk factors for disease relapse after surgery. We further established a recurrence signature to identify patients with PanNEN at a higher risk of recurrence. A comprehensive understanding of m6A regulators in PanNEN may help in developing novel treatment strategies.", "The study was approved by the Institutional Review Board of Peking Union Medical College Hospital (approval number S-K1593) and conducted in accordance with the Declaration of Helsinki. Written consent for this study was not required and was formally waived by the hospital's Ethics Committee.", "The authors declare that they have no conflict of interest.", "This work was supported by grants from the Chinese Academy of Medical Sciences Initiative for Innovative Medicine (CAMS-2016-I2M-1-001), the National Scientific Data Sharing Platform for Population and Health (NCMI-YF01N-201906), and the National Natural Science Foundation of China (No. 81672648).", "Jie Chen made substantial contributions to the conception, design, and critical revision of the manuscript. Shengwei Mo, Liju Zong, Shuangni Yu, and Zhaohui Lu made substantial contributions to tissue microarray construction. Shengwei Mo made substantial contributions to data acquisition. Xianlong Chen and Shengwei Mo performed analysis of the data and drafting of the manuscript. All the authors read and approved the final manuscript.", "The data used and/or analyzed during the current study are available from the corresponding author on reasonable request." ]
[ "intro", "materials|methods", null, null, null, null, null, "results", null, null, null, null, null, "discussion|conclusions", null, "COI-statement", null, null, "data-availability", "supplementary-material" ]
[ "N6-methyladenosine methylation regulators", "Pancreatic neuroendocrine neoplasms", "Immune markers", "Recurrence", "Postoperative surveillance" ]
Introduction: Pancreatic neuroendocrine neoplasms (PanNENs) are a rare, heterogeneous group of neoplasms originating from the neuroendocrine cells and classified as functional and nonfunctional depending on whether the tumor overproduces biologically active hormones [1]. However, the incidence of PanNENs has significantly increased in the last decades and has risen to 0.8 per 100,000 individuals per year [2]. This increase may be attributed, at least in part, to the improved and more frequent use of imaging in clinical diagnostic practices [3]. Moreover, both patients with nonfunctional or functional tumors are always diagnosed at a late stage and usually present locally advanced or metastatic disease [4, 5]. Current clinical strategies for most patients with PanNENs primarily involve surgical resection [6]. One of the most significant causes of concern in patients with a PanNEN is cancer recurrence after curative resection. However, clinical outcomes after surgical resection vary widely, with recurrence-free survival (RFS) ranging from 0.96 to 121.9 months [7]. The most frequently used cancer staging system is based on the WHO classification which categorizes tumors into four groups based on proliferative activity and morphology: NET G1, NET G2, NET G3, and NEC [8]. However, patients with PanNEN with the same grade of tumor may demonstrate different clinical courses, whereas patients with low-grade PanNENs show unpredictable disease progression and outcomes [9]. Therefore, a clinically feasible and accurate prediction of the risk of relapse in patients with PanNENs after resection is needed. N6-methyladenosine (m6A) is the most common epigenetic modification of the mRNA, comprising >60% of all RNA-based modifications [10]. The m6A methylation levels are greatly associated with the immune response, stress response regulation, tumorigenesis, and miRNA processing [11, 12, 13, 14, 15, 16]. The m6A methylation plays a crucial role in disease progression of various cancers [11, 12, 16] as well as in the onset and progression of fetal growth restriction and preeclampsia [17, 18]. The modification of m6A is regulated by methyltransferases (writers) and demethylases (erasers), and m6A performs multiple functions through its interactions with specific binding proteins known as the readers [19]. The writers mainly include KIAA1429, METTL3, WTAP, METTL14, RBM15, and METTL16, which promote m6A RNA methylation. The erasers encompass fat mass and obesity-associated protein (FTO) and alkylation repair homolog protein 5 (ALKBH5). The readers comprise HNRNPC, IGF2BP2, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3, which recognize RNA m6A binding site [19, 20, 21]. Increasing amount of evidence has emerged which shows the vital role of m6A regulators in tumorigenesis and prognosis of various cancers. Chen et al. [22] reported that METTL14 inhibited colorectal cancer (CRC) progression through primary miR-375 processing-based regulation. Cives et al. [23] revealed that METTL3 could promote the proliferation and migration of hepatocellular carcinoma. A previous study found that high YTHDF1 expression predicted a poor clinical outcome in CRC patients [24]. Additionally, the low expression of FTO was associated with a shorter RFS in patients with intrahepatic cholangiocarcinoma [25]. Overexpression of FTO suppressed acute myeloid leukemia cell differentiation [26]. Growing evidence suggests that m6A modification and its regulators play vital roles in human cancers [11, 12, 22, 23, 24, 25, 26]. 
m6A modification is associated with tumorigenesis, proliferation, invasion, and metastasis [22, 23, 24]. And m6A regulators were indicators of prognosis and therapeutic targets [25, 26]. Moreover, epigenetic modification and especially methylation have been already described as important events during tumorigenesis of PanNEN, and methylation modification is an epigenetic regulatory mechanism for gene expression in PanNENs, including MEN1, which involved in the chromatin remodeling in PanNENs [27]. However, whether m6A regulators could also be specific biomarkers in PanNENs is yet to be determined. And the expression status, functional roles, and clinical significance of m6A methylation regulators also remain largely unknown in PanNENs. In addition, recent studies have shown prognosis of immune infiltrates and immune checkpoints in pancreatic neuroendocrine tumors [23, 28]. However, the correlation between m6A regulators and immune infiltration and immune checkpoints is yet to be explored. To better identify patients at risk and improve their follow-up management, we explored the expression profiles of m6A proteins in resected PanNENs and investigated the potential prognostic value of m6A proteins. We also analyzed the association among common immune infiltrates, immune checkpoints, and m6A regulators. Furthermore, a YTHDF2-based signature was established, which may help in postoperative monitoring and stratification of risks to guide clinical treatment options. Materials and Methods: Patient Cohort and Follow-Up A total of 183 patients with PanNENs who were admitted to the Peking Union Medical College Hospital (PUMCH) and underwent surgeries during the time period 2004–2019 with sufficient tumor tissue for evaluation were included in the current study. The FFPE specimens and matching hematoxylin and eosin slides were retrieved. Patients' clinicopathological data, including age, sex, primary tumor site, tumor size, tumor grading, functional status, and surgical procedures, were collected from the medical records. The American Joint on Cancer Committee (AJCC) 8th edition stages for each patient were defined based on the collected data. Survival and recurrence information were obtained from medical record reviews and telephone interviews. Recurrence and distant metastasis were determined based on biochemical markers, clinical multidisciplinary consultation, radiological, and/or histological examinations. The time between the surgery and tumor recurrence or last follow-up appointment was termed as RFS. Disease-specific survival was calculated from the surgery date to the time of patient death or last follow-up till time point of December 29, 2020. This retrospective study was approved by the Institutional Review Board of PUMCH (approval number: S-K1593) and conducted in accordance with the Declaration of Helsinki. Written consent for this study was not required and formally waived by the hospital's Ethics Committee. A total of 183 patients with PanNENs who were admitted to the Peking Union Medical College Hospital (PUMCH) and underwent surgeries during the time period 2004–2019 with sufficient tumor tissue for evaluation were included in the current study. The FFPE specimens and matching hematoxylin and eosin slides were retrieved. Patients' clinicopathological data, including age, sex, primary tumor site, tumor size, tumor grading, functional status, and surgical procedures, were collected from the medical records. 
The American Joint on Cancer Committee (AJCC) 8th edition stages for each patient were defined based on the collected data. Survival and recurrence information were obtained from medical record reviews and telephone interviews. Recurrence and distant metastasis were determined based on biochemical markers, clinical multidisciplinary consultation, radiological, and/or histological examinations. The time between the surgery and tumor recurrence or last follow-up appointment was termed as RFS. Disease-specific survival was calculated from the surgery date to the time of patient death or last follow-up till time point of December 29, 2020. This retrospective study was approved by the Institutional Review Board of PUMCH (approval number: S-K1593) and conducted in accordance with the Declaration of Helsinki. Written consent for this study was not required and formally waived by the hospital's Ethics Committee. Tissue Microarrays and Immunohistochemistry Representative areas of tumors and adjacent normal tissue were selected from hematoxylin and eosin-stained paraffin blocks and then re-embedded into recipient tissue microarray (TMA) blocks (12 × 8 arrays). The diameter of each core was 2 mm. Immunohistochemical analysis was performed according to the protocol described by Zong et al. [29]. The 4-μm TMA sections were deparaffinized. Heat-mediated antigen retrieval was performed using sodium citrate buffer (pH 6.0) for 10 min. The endogenous peroxidase activity was quenched using a 3% hydrogen peroxide solution (ZSGB-BIO, Beijing, China) and then the sections were blocked with 5% normal goat serum for 30 min. TMA sections were incubated with primary antibodies against m6A methylation regulators (METTL3, METTL14, METTL16, WTAP, KIAA1429, RBM15, FTO, ALKBH5, IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3) and immune markers (PD-L1, B7-H3, CD3, CD4, CD15, CD68) overnight at 4°C; the details and optimal dilutions are summarized in online supplementary Table S1 (see www.karger.com/doi/10.1159/000525228 for all online suppl. material). Stromal cells, normal acinar cells, ductal cells, and islet cells from normal tissues were incubated with primary antibodies and used as internal positive controls, whereas the same tissues without primary antibodies comprised the negative controls. Representative areas of tumors and adjacent normal tissue were selected from hematoxylin and eosin-stained paraffin blocks and then re-embedded into recipient tissue microarray (TMA) blocks (12 × 8 arrays). The diameter of each core was 2 mm. Immunohistochemical analysis was performed according to the protocol described by Zong et al. [29]. The 4-μm TMA sections were deparaffinized. Heat-mediated antigen retrieval was performed using sodium citrate buffer (pH 6.0) for 10 min. The endogenous peroxidase activity was quenched using a 3% hydrogen peroxide solution (ZSGB-BIO, Beijing, China) and then the sections were blocked with 5% normal goat serum for 30 min. TMA sections were incubated with primary antibodies against m6A methylation regulators (METTL3, METTL14, METTL16, WTAP, KIAA1429, RBM15, FTO, ALKBH5, IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3) and immune markers (PD-L1, B7-H3, CD3, CD4, CD15, CD68) overnight at 4°C; the details and optimal dilutions are summarized in online supplementary Table S1 (see www.karger.com/doi/10.1159/000525228 for all online suppl. material). 
Stromal cells, normal acinar cells, ductal cells, and islet cells from normal tissues were incubated with primary antibodies and used as internal positive controls, whereas the same tissues without primary antibodies comprised the negative controls. Evaluation of Immunostaining The immunostaining performed on the collected patient samples was independently assessed by two pathologists (XL Chen and SW Mo) who were blinded to the patients' clinicopathological features and clinical outcomes. Both pathologists reexamined the slides and reached a consensus in case of any discrepancy. The immunoreactivity of 15 m6A markers, PD-L1, and B7-H3 in tumor specimens was quantified using the method developed by Budwit-Novotny et al. [30]. In brief, the percentage of positively stained tumor cells was multiplied by the relative intensity of specific staining, which was assigned a value of negative (0), weak (1), distinct (2), and strong (3) [30, 31]. The density of stromal CD3, CD4, CD15, and CD68 were quantified in four ×400 high-power fields and the mean counts of four fields were used for statistical analysis as we described earlier [32]. The cutoff values of m6A markers and immune markers were determined using X-tile (Yale University, New Haven, CT, USA). These cutoff values are summarized in the online supplementary Table S2. The immunostaining performed on the collected patient samples was independently assessed by two pathologists (XL Chen and SW Mo) who were blinded to the patients' clinicopathological features and clinical outcomes. Both pathologists reexamined the slides and reached a consensus in case of any discrepancy. The immunoreactivity of 15 m6A markers, PD-L1, and B7-H3 in tumor specimens was quantified using the method developed by Budwit-Novotny et al. [30]. In brief, the percentage of positively stained tumor cells was multiplied by the relative intensity of specific staining, which was assigned a value of negative (0), weak (1), distinct (2), and strong (3) [30, 31]. The density of stromal CD3, CD4, CD15, and CD68 were quantified in four ×400 high-power fields and the mean counts of four fields were used for statistical analysis as we described earlier [32]. The cutoff values of m6A markers and immune markers were determined using X-tile (Yale University, New Haven, CT, USA). These cutoff values are summarized in the online supplementary Table S2. Risk Score Models and Random Survival Forest A random survival forest (RSF) model was constructed based on the minimal depth and variable importance (VIMP). VIMP and minimal depth were used to assess the true effect of the variable on RFS. Given that feature subsets of variables are randomly selected using RSF method, RSF can process high-dimensional (many variables) data. In our current study, we included more than ten variables. Through training of the RSF algorithm, the variables which were the most related to recurrence were selected using minimal depth and VIMP. And in the training process, the interaction between variables can be detected. Moreover, when creating random forest, unbiased estimation is used for generalization error, and the model has strong generalization ability [33]. If a large part of the features is lost, the accuracy can still be maintained [33]. Based on these strengths of RSF, this method was helpful to select relapse-related variables and construct a recurrence signature. 
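A minimal sketch of this RSF-based screening is given below, assuming a prepared covariate table with hypothetical column and file names; it computes only a VIMP-style permutation importance with scikit-survival and scikit-learn (the minimal-depth statistic used in the paper is not reproduced here), so it approximates rather than replicates the original workflow.

```python
# Hypothetical sketch: random survival forest plus permutation importance (VIMP-like).
import pandas as pd
from sklearn.inspection import permutation_importance
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

df = pd.read_csv("pannen_cohort.csv")  # hypothetical input
features = ["tumor_size_ge_2_5cm", "ln_positive", "grade", "pni",
            "functional", "METTL14", "METTL16", "ALKBH5",
            "HNRNPC", "YTHDF1", "YTHDF2", "YTHDF3"]
X = df[features].astype(float)
y = Surv.from_arrays(event=df["recurrence"].astype(bool), time=df["rfs_months"])

rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=5, random_state=0)
rsf.fit(X, y)

# Permutation importance scores each variable by the drop in concordance (rsf.score).
result = permutation_importance(rsf, X, y, n_repeats=20, random_state=0)
vimp = pd.Series(result.importances_mean, index=features).sort_values(ascending=False)
print(vimp)  # variables with importance <= 0 would be dropped, as described in the text
```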
Recurrence-specific variables defined by a VIMP of >0 and minimal depth less than the depth threshold were finally entered into a survival tree analysis [33]. The recurrence-specific variables, including nodal status, tumor size, YTHDF2, were selected by using minimal depth and VIMP. By integrating the expression level of YTHDF2, nodal status, and tumor size as well as their corresponding coefficients which were derived from the multivariate Cox model constructed only using these 3 variables, a risk score was generated, as represented below: Risk Score = (1.218 × Nodal Status) + (1.625 × Tumor Size) + (0.745 × expression value of YTHDF2). Nodal status was divided into negative/positive and scored as 0/1; tumor size was divided into <2.5 cm/≥2.5 cm and scored as 0/1, and expression value of YTHDF2 was divided into low/high and scored as 0/1, and these scores were multiplied by the corresponding coefficients to generate a risk score. The median (median = 1.625) was used as the cutoff value for the risk score. A random survival forest (RSF) model was constructed based on the minimal depth and variable importance (VIMP). VIMP and minimal depth were used to assess the true effect of the variable on RFS. Given that feature subsets of variables are randomly selected using RSF method, RSF can process high-dimensional (many variables) data. In our current study, we included more than ten variables. Through training of the RSF algorithm, the variables which were the most related to recurrence were selected using minimal depth and VIMP. And in the training process, the interaction between variables can be detected. Moreover, when creating random forest, unbiased estimation is used for generalization error, and the model has strong generalization ability [33]. If a large part of the features is lost, the accuracy can still be maintained [33]. Based on these strengths of RSF, this method was helpful to select relapse-related variables and construct a recurrence signature. Recurrence-specific variables defined by a VIMP of >0 and minimal depth less than the depth threshold were finally entered into a survival tree analysis [33]. The recurrence-specific variables, including nodal status, tumor size, YTHDF2, were selected by using minimal depth and VIMP. By integrating the expression level of YTHDF2, nodal status, and tumor size as well as their corresponding coefficients which were derived from the multivariate Cox model constructed only using these 3 variables, a risk score was generated, as represented below: Risk Score = (1.218 × Nodal Status) + (1.625 × Tumor Size) + (0.745 × expression value of YTHDF2). Nodal status was divided into negative/positive and scored as 0/1; tumor size was divided into <2.5 cm/≥2.5 cm and scored as 0/1, and expression value of YTHDF2 was divided into low/high and scored as 0/1, and these scores were multiplied by the corresponding coefficients to generate a risk score. The median (median = 1.625) was used as the cutoff value for the risk score. Statistical Analysis The continuous and categorical variables were described as median (range) and frequency (percentage), respectively. The Mann-Whitney U test was conducted to compare nonnormally distributed continuous variables, whereas a Student's t test was used to analyze normally distributed continuous variables. The χ2 test or Fisher's exact test was used to evaluate the relationship between the expression of m6A methylation regulators and categorical variables. 
The survival curves were plotted by Kaplan-Meier method, and the log-rank test was employed to compare the survival curves generated. The Cox proportional hazard regression model was used to estimate the hazard ratio with a 95% confidence interval (CI) for variables associated with RFS. Potential risk factors with a p value of <0.05 in the univariate Cox analysis were entered into the multivariate Cox regression model (backward Wald) after considering collinearity among the variables. We evaluated the discrimination ability of the final model with Harrell concordance index (C-index). The area under the time-dependent receiver operating characteristic (ROC) curve at different cutoff times was measured as predictive performance. Calibration was assessed by visual examination of the calibration plot. p values of <0.05 were considered statistically significant. Statistical analyses were two-sided and accomplished using SPSS software (version 21.0; IBM Corp., Armonk, NY, USA). Statistical analyses of RNA sequencing data from International Cancer Genome Consortium (ICGC) database (https://www.icgc.org/) were conducted with R version 3.5.0 (http://www.r-project.org). The continuous and categorical variables were described as median (range) and frequency (percentage), respectively. The Mann-Whitney U test was conducted to compare nonnormally distributed continuous variables, whereas a Student's t test was used to analyze normally distributed continuous variables. The χ2 test or Fisher's exact test was used to evaluate the relationship between the expression of m6A methylation regulators and categorical variables. The survival curves were plotted by Kaplan-Meier method, and the log-rank test was employed to compare the survival curves generated. The Cox proportional hazard regression model was used to estimate the hazard ratio with a 95% confidence interval (CI) for variables associated with RFS. Potential risk factors with a p value of <0.05 in the univariate Cox analysis were entered into the multivariate Cox regression model (backward Wald) after considering collinearity among the variables. We evaluated the discrimination ability of the final model with Harrell concordance index (C-index). The area under the time-dependent receiver operating characteristic (ROC) curve at different cutoff times was measured as predictive performance. Calibration was assessed by visual examination of the calibration plot. p values of <0.05 were considered statistically significant. Statistical analyses were two-sided and accomplished using SPSS software (version 21.0; IBM Corp., Armonk, NY, USA). Statistical analyses of RNA sequencing data from International Cancer Genome Consortium (ICGC) database (https://www.icgc.org/) were conducted with R version 3.5.0 (http://www.r-project.org). Patient Cohort and Follow-Up: A total of 183 patients with PanNENs who were admitted to the Peking Union Medical College Hospital (PUMCH) and underwent surgeries during the time period 2004–2019 with sufficient tumor tissue for evaluation were included in the current study. The FFPE specimens and matching hematoxylin and eosin slides were retrieved. Patients' clinicopathological data, including age, sex, primary tumor site, tumor size, tumor grading, functional status, and surgical procedures, were collected from the medical records. The American Joint on Cancer Committee (AJCC) 8th edition stages for each patient were defined based on the collected data. 
Survival and recurrence information were obtained from medical record reviews and telephone interviews. Recurrence and distant metastasis were determined based on biochemical markers, clinical multidisciplinary consultation, radiological, and/or histological examinations. The time between the surgery and tumor recurrence or last follow-up appointment was termed as RFS. Disease-specific survival was calculated from the surgery date to the time of patient death or last follow-up till time point of December 29, 2020. This retrospective study was approved by the Institutional Review Board of PUMCH (approval number: S-K1593) and conducted in accordance with the Declaration of Helsinki. Written consent for this study was not required and formally waived by the hospital's Ethics Committee. Tissue Microarrays and Immunohistochemistry: Representative areas of tumors and adjacent normal tissue were selected from hematoxylin and eosin-stained paraffin blocks and then re-embedded into recipient tissue microarray (TMA) blocks (12 × 8 arrays). The diameter of each core was 2 mm. Immunohistochemical analysis was performed according to the protocol described by Zong et al. [29]. The 4-μm TMA sections were deparaffinized. Heat-mediated antigen retrieval was performed using sodium citrate buffer (pH 6.0) for 10 min. The endogenous peroxidase activity was quenched using a 3% hydrogen peroxide solution (ZSGB-BIO, Beijing, China) and then the sections were blocked with 5% normal goat serum for 30 min. TMA sections were incubated with primary antibodies against m6A methylation regulators (METTL3, METTL14, METTL16, WTAP, KIAA1429, RBM15, FTO, ALKBH5, IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3) and immune markers (PD-L1, B7-H3, CD3, CD4, CD15, CD68) overnight at 4°C; the details and optimal dilutions are summarized in online supplementary Table S1 (see www.karger.com/doi/10.1159/000525228 for all online suppl. material). Stromal cells, normal acinar cells, ductal cells, and islet cells from normal tissues were incubated with primary antibodies and used as internal positive controls, whereas the same tissues without primary antibodies comprised the negative controls. Evaluation of Immunostaining: The immunostaining performed on the collected patient samples was independently assessed by two pathologists (XL Chen and SW Mo) who were blinded to the patients' clinicopathological features and clinical outcomes. Both pathologists reexamined the slides and reached a consensus in case of any discrepancy. The immunoreactivity of 15 m6A markers, PD-L1, and B7-H3 in tumor specimens was quantified using the method developed by Budwit-Novotny et al. [30]. In brief, the percentage of positively stained tumor cells was multiplied by the relative intensity of specific staining, which was assigned a value of negative (0), weak (1), distinct (2), and strong (3) [30, 31]. The density of stromal CD3, CD4, CD15, and CD68 were quantified in four ×400 high-power fields and the mean counts of four fields were used for statistical analysis as we described earlier [32]. The cutoff values of m6A markers and immune markers were determined using X-tile (Yale University, New Haven, CT, USA). These cutoff values are summarized in the online supplementary Table S2. Risk Score Models and Random Survival Forest: A random survival forest (RSF) model was constructed based on the minimal depth and variable importance (VIMP). VIMP and minimal depth were used to assess the true effect of the variable on RFS. 
Given that feature subsets of variables are randomly selected using RSF method, RSF can process high-dimensional (many variables) data. In our current study, we included more than ten variables. Through training of the RSF algorithm, the variables which were the most related to recurrence were selected using minimal depth and VIMP. And in the training process, the interaction between variables can be detected. Moreover, when creating random forest, unbiased estimation is used for generalization error, and the model has strong generalization ability [33]. If a large part of the features is lost, the accuracy can still be maintained [33]. Based on these strengths of RSF, this method was helpful to select relapse-related variables and construct a recurrence signature. Recurrence-specific variables defined by a VIMP of >0 and minimal depth less than the depth threshold were finally entered into a survival tree analysis [33]. The recurrence-specific variables, including nodal status, tumor size, YTHDF2, were selected by using minimal depth and VIMP. By integrating the expression level of YTHDF2, nodal status, and tumor size as well as their corresponding coefficients which were derived from the multivariate Cox model constructed only using these 3 variables, a risk score was generated, as represented below: Risk Score = (1.218 × Nodal Status) + (1.625 × Tumor Size) + (0.745 × expression value of YTHDF2). Nodal status was divided into negative/positive and scored as 0/1; tumor size was divided into <2.5 cm/≥2.5 cm and scored as 0/1, and expression value of YTHDF2 was divided into low/high and scored as 0/1, and these scores were multiplied by the corresponding coefficients to generate a risk score. The median (median = 1.625) was used as the cutoff value for the risk score. Statistical Analysis: The continuous and categorical variables were described as median (range) and frequency (percentage), respectively. The Mann-Whitney U test was conducted to compare nonnormally distributed continuous variables, whereas a Student's t test was used to analyze normally distributed continuous variables. The χ2 test or Fisher's exact test was used to evaluate the relationship between the expression of m6A methylation regulators and categorical variables. The survival curves were plotted by Kaplan-Meier method, and the log-rank test was employed to compare the survival curves generated. The Cox proportional hazard regression model was used to estimate the hazard ratio with a 95% confidence interval (CI) for variables associated with RFS. Potential risk factors with a p value of <0.05 in the univariate Cox analysis were entered into the multivariate Cox regression model (backward Wald) after considering collinearity among the variables. We evaluated the discrimination ability of the final model with Harrell concordance index (C-index). The area under the time-dependent receiver operating characteristic (ROC) curve at different cutoff times was measured as predictive performance. Calibration was assessed by visual examination of the calibration plot. p values of <0.05 were considered statistically significant. Statistical analyses were two-sided and accomplished using SPSS software (version 21.0; IBM Corp., Armonk, NY, USA). Statistical analyses of RNA sequencing data from International Cancer Genome Consortium (ICGC) database (https://www.icgc.org/) were conducted with R version 3.5.0 (http://www.r-project.org). 
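As a literal transcription of the risk score defined above, the following short function applies the published coefficients to the stated 0/1 codings and dichotomizes at the reported median; variable names are illustrative, and whether a patient exactly at the median falls in the low- or high-risk group is not specified in the text, so strict inequality is assumed here.

```python
# Risk score from the Methods: 1.218*nodal + 1.625*size + 0.745*YTHDF2 (all coded 0/1).
def risk_score(ln_positive: int, tumor_size_ge_2_5cm: int, ythdf2_high: int) -> float:
    return 1.218 * ln_positive + 1.625 * tumor_size_ge_2_5cm + 0.745 * ythdf2_high

MEDIAN_CUTOFF = 1.625  # reported median used as the cutoff

def risk_group(score: float) -> str:
    # Assumption: scores strictly above the median are called "high".
    return "high" if score > MEDIAN_CUTOFF else "low"

# Example: node-positive patient, tumor < 2.5 cm, high YTHDF2 expression
s = risk_score(ln_positive=1, tumor_size_ge_2_5cm=0, ythdf2_high=1)
print(s, risk_group(s))  # 1.963 high
```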
Results: Patients' Characteristics A total of 96 patients were male (52.4%), whereas 87 patients were female (47.6%). The tumor diameter was >2.5 cm in 53.1% of the patients (97/183). The number of positive lymph nodes ranged from 1 to 29. A total of 50.7% tumors were nonfunctional, and the majority of functional tumors were classified as insulinoma (76/90). Most of the tumors (175/183) were well differentiated, and eight were poorly differentiated (Table 1). The median follow-up time was 39 months (range: 1–197 months), during which there were 43 relapses and 10 deaths, and all of the deaths were owing to disease progression. The 3- and 5-year RFS rates were 83.6% and 77.0%, respectively. A total of 96 patients were male (52.4%), whereas 87 patients were female (47.6%). The tumor diameter was >2.5 cm in 53.1% of the patients (97/183). The number of positive lymph nodes ranged from 1 to 29. A total of 50.7% tumors were nonfunctional, and the majority of functional tumors were classified as insulinoma (76/90). Most of the tumors (175/183) were well differentiated, and eight were poorly differentiated (Table 1). The median follow-up time was 39 months (range: 1–197 months), during which there were 43 relapses and 10 deaths, and all of the deaths were owing to disease progression. The 3- and 5-year RFS rates were 83.6% and 77.0%, respectively. Expression Patterns of the 15 m6A Regulators We found that the expression levels of m6A proteins in PanNENs and normal tissues were evident (Fig. 1a). The expression levels of writers (i.e., KIAA1429, METTL14, RBM15, WTAP, and METTL16), readers (i.e., IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3), and erasers (ALKBH5 and FTO) were higher in PanNEN tissues than in normal tissues (p < 0.001). No statistically significant difference was evident between the normal and PanNENs tissues in the context of METTL3 expression level. Regarding METTL3, METTL14, METTL16, WTAP, RBM15, and KIAA1429 expression, a high level was observed in 26.7%, 11.5%, 35.5%, 64.4%, 67.7%, and 55.7% of tumors, respectively. Among the 183 tumors, 36.6% were deemed to have high FTO expression and 72.1% had high ALKBH5 expression. Regarding reader molecules (i.e., IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3), immunostaining revealed low expression in 64.5%, 95.1%, 72.1%, 57.9%, 84.7%, 50.3%, and 96.2%, respectively, of these samples and high expression in the remaining samples (Fig. 1b). The immunohistochemical patterns of the 15 m6A proteins are shown in Figure 1c and online supplementary Figure S1. All 183 PanNENs showed nuclear immunoreactivity for METTL3, METTL14, METTL16, WTAP, RBM15, KIAA1429, FTO, ALKBH5, HNRNPC, and YTHDC1. The positive sites of IGF2BP2, YTHDC2, YTHDF1, YTHDF2, and YTHDF3 were located in the cytoplasm. We found that the expression levels of m6A proteins in PanNENs and normal tissues were evident (Fig. 1a). The expression levels of writers (i.e., KIAA1429, METTL14, RBM15, WTAP, and METTL16), readers (i.e., IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3), and erasers (ALKBH5 and FTO) were higher in PanNEN tissues than in normal tissues (p < 0.001). No statistically significant difference was evident between the normal and PanNENs tissues in the context of METTL3 expression level. Regarding METTL3, METTL14, METTL16, WTAP, RBM15, and KIAA1429 expression, a high level was observed in 26.7%, 11.5%, 35.5%, 64.4%, 67.7%, and 55.7% of tumors, respectively. 
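The tumor-versus-normal comparisons of immunostaining scores reported above can be sketched as follows; the scores file, grouping column, and marker columns are hypothetical, and the original analysis was run in SPSS, so this only illustrates the test choice rather than reproducing the published p values.

```python
# Hypothetical sketch: Mann-Whitney U comparison of IHC scores, tumor vs. normal tissue.
import pandas as pd
from scipy.stats import mannwhitneyu

ihc = pd.read_csv("ihc_scores.csv")  # hypothetical: one row per core, 'tissue' plus marker scores
for marker in ["KIAA1429", "METTL14", "RBM15", "WTAP", "METTL16",
               "IGF2BP2", "HNRNPC", "YTHDF2", "ALKBH5", "FTO"]:
    tumor = ihc.loc[ihc["tissue"] == "PanNEN", marker]
    normal = ihc.loc[ihc["tissue"] == "normal", marker]
    stat, p = mannwhitneyu(tumor, normal, alternative="two-sided")
    print(f"{marker}: Mann-Whitney U p = {p:.4g}")
```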
Among the 183 tumors, 36.6% were deemed to have high FTO expression and 72.1% had high ALKBH5 expression. Regarding reader molecules (i.e., IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3), immunostaining revealed low expression in 64.5%, 95.1%, 72.1%, 57.9%, 84.7%, 50.3%, and 96.2%, respectively, of these samples and high expression in the remaining samples (Fig. 1b). The immunohistochemical patterns of the 15 m6A proteins are shown in Figure 1c and online supplementary Figure S1. All 183 PanNENs showed nuclear immunoreactivity for METTL3, METTL14, METTL16, WTAP, RBM15, KIAA1429, FTO, ALKBH5, HNRNPC, and YTHDC1. The positive sites of IGF2BP2, YTHDC2, YTHDF1, YTHDF2, and YTHDF3 were located in the cytoplasm. Associations among m6A Regulators, Immune Markers, and Clinicopathological Parameters in PanNENs Among the 15 proteins studied, only YTHDF2 and HNRNPC expressions had significant prognostic value. Of the 43 patients with recurrence, 12 (27.9%) showed low YTHDF2 expression, which was dramatically lower than the proportion of patients without recurrence having low YTHDF2 expression (85 of 140, 60.7%; p = 0.003; Fig. 1d). Similarly, 88.4% of the patients (38 of 43 patients) with recurrence showed low tumoral HNRNPC expression; in contrast, 97.1% of the patients without recurrence (136 of 140 patients) showed low tumoral expression of HNRNPC (p = 0.034; Fig. 1d). The log-rank test revealed that both high HNRNPC and YTHDF2 expressions predicted a significantly shortened RFS (p = 0.003 and p = 0.001, respectively; Fig. 1e, f). In the correlation analysis between YTHDF2 and HNRNPC expression and clinicopathological variables, we found that YTHDF2 expression was positively associated with tumor size (p = 0.021), nodal status (p = 0.007), functional status (p = 0.010), Ki67 index (p = 0.023), and liver metastasis (p < 0.001). There were no significant associations between YTHDF2 expression and the other clinicopathological variables, including age, sex, tumor site, lymphovascular invasion (LVI), and perineural invasion (PNI) (online suppl. Table S3). High YTHDF2 expression was associated with higher number of CD3+ T cells (p = 0.003). There were no significant associations between YTHDF2 expression and other immune markers (PD-L1, B7-H3, CD3, CD4, and CD68). However, high HNRNPC expression was significantly correlated with positive PD-L1 expression (p = 0.039) and high densities of CD3+ T cells. Moreover, there were no significant associations between HNRNPC expression and other immune markers (Table 2). Representative images of PD-L1, B7-H3 CD3, CD4, CD15, and CD68 are presented in online supplementary Figure S2. Among the 15 proteins studied, only YTHDF2 and HNRNPC expressions had significant prognostic value. Of the 43 patients with recurrence, 12 (27.9%) showed low YTHDF2 expression, which was dramatically lower than the proportion of patients without recurrence having low YTHDF2 expression (85 of 140, 60.7%; p = 0.003; Fig. 1d). Similarly, 88.4% of the patients (38 of 43 patients) with recurrence showed low tumoral HNRNPC expression; in contrast, 97.1% of the patients without recurrence (136 of 140 patients) showed low tumoral expression of HNRNPC (p = 0.034; Fig. 1d). The log-rank test revealed that both high HNRNPC and YTHDF2 expressions predicted a significantly shortened RFS (p = 0.003 and p = 0.001, respectively; Fig. 1e, f). 
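The Kaplan-Meier and log-rank comparisons just described can be sketched with lifelines as below; the input file and column names are hypothetical, and the published curves were generated with the pipeline described in the Methods rather than with this code.

```python
# Hypothetical sketch: RFS curves and log-rank tests by marker expression group.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("pannen_cohort.csv")  # hypothetical input
for marker in ["YTHDF2", "HNRNPC"]:
    high = df[df[f"{marker}_high"] == 1]
    low = df[df[f"{marker}_high"] == 0]
    res = logrank_test(high["rfs_months"], low["rfs_months"],
                       event_observed_A=high["recurrence"],
                       event_observed_B=low["recurrence"])
    print(marker, "log-rank p =", res.p_value)

    km = KaplanMeierFitter()
    km.fit(high["rfs_months"], event_observed=high["recurrence"], label=f"{marker} high")
    ax = km.plot_survival_function()
    km.fit(low["rfs_months"], event_observed=low["recurrence"], label=f"{marker} low")
    km.plot_survival_function(ax=ax)
```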
In the correlation analysis between YTHDF2 and HNRNPC expression and clinicopathological variables, we found that YTHDF2 expression was positively associated with tumor size (p = 0.021), nodal status (p = 0.007), functional status (p = 0.010), Ki67 index (p = 0.023), and liver metastasis (p < 0.001). There were no significant associations between YTHDF2 expression and the other clinicopathological variables, including age, sex, tumor site, lymphovascular invasion (LVI), and perineural invasion (PNI) (online suppl. Table S3). High YTHDF2 expression was associated with higher number of CD3+ T cells (p = 0.003). There were no significant associations between YTHDF2 expression and other immune markers (PD-L1, B7-H3, CD3, CD4, and CD68). However, high HNRNPC expression was significantly correlated with positive PD-L1 expression (p = 0.039) and high densities of CD3+ T cells. Moreover, there were no significant associations between HNRNPC expression and other immune markers (Table 2). Representative images of PD-L1, B7-H3 CD3, CD4, CD15, and CD68 are presented in online supplementary Figure S2. Prognostic Predictors of RFS in PanNENs A univariate Cox analysis for both the clinicopathological parameters and m6A proteins was conducted. Nine clinicopathological factors (tumor size, lymph node status, PNI, LVI, grade, functional status, liver metastasis, and Ki67 index) and five m6A proteins (METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, and YTHDF3) were identified to be associated with RFS, with p values of <0.05 (Table 3). Considering analysis of collinear factors, tumor grade was found to be associated with Ki67 index and liver metastasis, and LVI was associated with lymph node status. Therefore, PNI, tumor size, lymph node status, grade, functional status, METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, and YTHDF3 were included in the multivariate Cox regression model. Multivariate Cox regression analysis identified tumor size, lymph node status, tumor grade, PNI, HNRNPC expression, and YTHDF2 expression as independent risk factors for RFS after excluding collinear factors. Next, we applied an RSF model to evaluate the importance of these independent risk factors. In the analysis of variable of importance, we found that VIMP of HNRNPC was −0.0011 (Fig. 2a). In the minimal depth analysis, HNRNPC, PNI, and grade had values of 2.405, 2.482, and 2.183, respectively, which were very close to or more than the depth threshold (Fig. 2b). Therefore, they were all excluded from the RSF model. We also performed survival analyses to determine the prognostic value of YTHDF2 mRNA expression in patients with PanNET in ICGC cohort. The Kaplan-Meier plot revealed that high YTHDF2 mRNA expression was significantly associated with poor RFS (p = 0.017, Fig. 2c). Furthermore, three variables (tumor size, lymph node status, and YTHDF2 expression) significantly affected RFS. Nodal status had the largest effect on RFS, followed by YTHDF2 expression and tumor size. A univariate Cox analysis for both the clinicopathological parameters and m6A proteins was conducted. Nine clinicopathological factors (tumor size, lymph node status, PNI, LVI, grade, functional status, liver metastasis, and Ki67 index) and five m6A proteins (METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, and YTHDF3) were identified to be associated with RFS, with p values of <0.05 (Table 3). 
Considering analysis of collinear factors, tumor grade was found to be associated with Ki67 index and liver metastasis, and LVI was associated with lymph node status. Therefore, PNI, tumor size, lymph node status, grade, functional status, METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, and YTHDF3 were included in the multivariate Cox regression model. Multivariate Cox regression analysis identified tumor size, lymph node status, tumor grade, PNI, HNRNPC expression, and YTHDF2 expression as independent risk factors for RFS after excluding collinear factors. Next, we applied an RSF model to evaluate the importance of these independent risk factors. In the analysis of variable of importance, we found that VIMP of HNRNPC was −0.0011 (Fig. 2a). In the minimal depth analysis, HNRNPC, PNI, and grade had values of 2.405, 2.482, and 2.183, respectively, which were very close to or more than the depth threshold (Fig. 2b). Therefore, they were all excluded from the RSF model. We also performed survival analyses to determine the prognostic value of YTHDF2 mRNA expression in patients with PanNET in ICGC cohort. The Kaplan-Meier plot revealed that high YTHDF2 mRNA expression was significantly associated with poor RFS (p = 0.017, Fig. 2c). Furthermore, three variables (tumor size, lymph node status, and YTHDF2 expression) significantly affected RFS. Nodal status had the largest effect on RFS, followed by YTHDF2 expression and tumor size. Construction and Evaluation of the Prognostic Model for Predicting RFS in PanNENs To construct a recurrence signature that can classify patients into subpopulations according to RFS, we further performed a recursive partitioning analysis. After pruning the decision trees using the post-pruning method, five terminal nodes representing a recurrence signature were identified (Fig. 2d). LNposTumorSize<2.5 cm (node 1; 20.8 months) and LNposTumorSize≥2.5 cmYTHDF2high (node 3; 21.1 months) patients had worse median RFS than LNnegYTHDF2high patients (node 5; 39.6 months, node 1 vs. node 5, p < 0.001), LNnegYTHDF2low patients (node 4; 43.2 months), and LNposTumorSize≥2.5 cmYTHDF2low patients (node 2; 29.3 months). Patients with the LNnegYTHDF2high (node 5) signature had a 5-year RFS rate of 92.1%, whereas patients with the LNposTumorSize<2.5 cm (node 1) signature had the worst 5-year RFS rate of 0% (Fig. 3a). To better identify risk groups for the recurrence signatures, we performed pair-wise comparisons for each recurrence signature subpopulation (on RFS) and the corresponding risk score of recurrence. The results showed that patients within the five terminal nodes can be categorized into three risk groups: low (node 4), intermediate (node 2 and node 5), and high (node 1 and node 3) with well-separated RFS curves (p < 0.001; Fig. 3b). Among the risk groups, the 5-year RFS rates were 6.0%, 70.4%, and 90.1% for the high-, intermediate-, and low-risk groups, respectively. To further test our proposed recurrence signature, we performed a subanalysis on a specific group of patients with nonsyndromic and nonfunctional PanNENs. A total of 93 PanNENs from the full cohort were eligible for this analysis (online suppl. Table S4). The Kaplan-Meier RFS curves showed clear separation among the risk groups (p = 0.018; Fig. 3c). Furthermore, a risk score was generated to evaluate the risk of recurrence based on variables selected from the survival tree analysis: Risk Score = (1.218 × Nodal Status) + (1.625 × Tumor Size) + (0.745 × expression value of YTHDF2) (Table 4). 
Multivariate analysis using a Cox proportional hazards model was performed which included PNI, tumor size, nodal status, grade, functional status, METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, YTHDF3, and the risk score. The risk score was determined to be an independent prognostic factor for patients with resected PanNEN in our study based on the multivariate Cox regression, and higher risk scores were found to be associated with shorter survival (hazard ratio: 33.04, 95% CI: 4.341–251.434; p = 0.001) (Table 5). Specificity and sensitivity comparisons were performed via time-dependent ROC curve analysis of risk score. The predictive accuracy of the recurrence signature was relatively high and the area under ROC curve was 0.858 (95% CI: 0.747–0.913) at 1 year, 0.824 (95% CI: 0.754–0.912) at 3 years, and 0.870 (95% CI: 0.762–0.915) at 5 years (Fig. 3d). The C-index was 0.978 (95% CI: 0.936–1), suggesting good discrimination ability. The calibration curves also showed good agreement between the predicted and observed RFS (Fig. 3e). To construct a recurrence signature that can classify patients into subpopulations according to RFS, we further performed a recursive partitioning analysis. After pruning the decision trees using the post-pruning method, five terminal nodes representing a recurrence signature were identified (Fig. 2d). LNposTumorSize<2.5 cm (node 1; 20.8 months) and LNposTumorSize≥2.5 cmYTHDF2high (node 3; 21.1 months) patients had worse median RFS than LNnegYTHDF2high patients (node 5; 39.6 months, node 1 vs. node 5, p < 0.001), LNnegYTHDF2low patients (node 4; 43.2 months), and LNposTumorSize≥2.5 cmYTHDF2low patients (node 2; 29.3 months). Patients with the LNnegYTHDF2high (node 5) signature had a 5-year RFS rate of 92.1%, whereas patients with the LNposTumorSize<2.5 cm (node 1) signature had the worst 5-year RFS rate of 0% (Fig. 3a). To better identify risk groups for the recurrence signatures, we performed pair-wise comparisons for each recurrence signature subpopulation (on RFS) and the corresponding risk score of recurrence. The results showed that patients within the five terminal nodes can be categorized into three risk groups: low (node 4), intermediate (node 2 and node 5), and high (node 1 and node 3) with well-separated RFS curves (p < 0.001; Fig. 3b). Among the risk groups, the 5-year RFS rates were 6.0%, 70.4%, and 90.1% for the high-, intermediate-, and low-risk groups, respectively. To further test our proposed recurrence signature, we performed a subanalysis on a specific group of patients with nonsyndromic and nonfunctional PanNENs. A total of 93 PanNENs from the full cohort were eligible for this analysis (online suppl. Table S4). The Kaplan-Meier RFS curves showed clear separation among the risk groups (p = 0.018; Fig. 3c). Furthermore, a risk score was generated to evaluate the risk of recurrence based on variables selected from the survival tree analysis: Risk Score = (1.218 × Nodal Status) + (1.625 × Tumor Size) + (0.745 × expression value of YTHDF2) (Table 4). Multivariate analysis using a Cox proportional hazards model was performed which included PNI, tumor size, nodal status, grade, functional status, METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, YTHDF3, and the risk score. 
The risk score was determined to be an independent prognostic factor for patients with resected PanNEN in our study based on the multivariate Cox regression, and higher risk scores were found to be associated with shorter survival (hazard ratio: 33.04, 95% CI: 4.341–251.434; p = 0.001) (Table 5). Specificity and sensitivity comparisons were performed via time-dependent ROC curve analysis of risk score. The predictive accuracy of the recurrence signature was relatively high and the area under ROC curve was 0.858 (95% CI: 0.747–0.913) at 1 year, 0.824 (95% CI: 0.754–0.912) at 3 years, and 0.870 (95% CI: 0.762–0.915) at 5 years (Fig. 3d). The C-index was 0.978 (95% CI: 0.936–1), suggesting good discrimination ability. The calibration curves also showed good agreement between the predicted and observed RFS (Fig. 3e). Patients' Characteristics: A total of 96 patients were male (52.4%), whereas 87 patients were female (47.6%). The tumor diameter was >2.5 cm in 53.1% of the patients (97/183). The number of positive lymph nodes ranged from 1 to 29. A total of 50.7% tumors were nonfunctional, and the majority of functional tumors were classified as insulinoma (76/90). Most of the tumors (175/183) were well differentiated, and eight were poorly differentiated (Table 1). The median follow-up time was 39 months (range: 1–197 months), during which there were 43 relapses and 10 deaths, and all of the deaths were owing to disease progression. The 3- and 5-year RFS rates were 83.6% and 77.0%, respectively. Expression Patterns of the 15 m6A Regulators: We found that the expression levels of m6A proteins in PanNENs and normal tissues were evident (Fig. 1a). The expression levels of writers (i.e., KIAA1429, METTL14, RBM15, WTAP, and METTL16), readers (i.e., IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3), and erasers (ALKBH5 and FTO) were higher in PanNEN tissues than in normal tissues (p < 0.001). No statistically significant difference was evident between the normal and PanNENs tissues in the context of METTL3 expression level. Regarding METTL3, METTL14, METTL16, WTAP, RBM15, and KIAA1429 expression, a high level was observed in 26.7%, 11.5%, 35.5%, 64.4%, 67.7%, and 55.7% of tumors, respectively. Among the 183 tumors, 36.6% were deemed to have high FTO expression and 72.1% had high ALKBH5 expression. Regarding reader molecules (i.e., IGF2BP2, HNRNPC, YTHDC1, YTHDC2, YTHDF1, YTHDF2, and YTHDF3), immunostaining revealed low expression in 64.5%, 95.1%, 72.1%, 57.9%, 84.7%, 50.3%, and 96.2%, respectively, of these samples and high expression in the remaining samples (Fig. 1b). The immunohistochemical patterns of the 15 m6A proteins are shown in Figure 1c and online supplementary Figure S1. All 183 PanNENs showed nuclear immunoreactivity for METTL3, METTL14, METTL16, WTAP, RBM15, KIAA1429, FTO, ALKBH5, HNRNPC, and YTHDC1. The positive sites of IGF2BP2, YTHDC2, YTHDF1, YTHDF2, and YTHDF3 were located in the cytoplasm. Associations among m6A Regulators, Immune Markers, and Clinicopathological Parameters in PanNENs: Among the 15 proteins studied, only YTHDF2 and HNRNPC expressions had significant prognostic value. Of the 43 patients with recurrence, 12 (27.9%) showed low YTHDF2 expression, which was dramatically lower than the proportion of patients without recurrence having low YTHDF2 expression (85 of 140, 60.7%; p = 0.003; Fig. 1d). 
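The comparison above of low versus high YTHDF2 expression between patients with and without recurrence is a 2 x 2 contingency-table problem. The counts in the sketch below are taken directly from the text (12 of 43 patients with recurrence and 85 of 140 patients without recurrence had low expression); which specific test the authors used is not stated here, so a chi-square test and Fisher's exact test are both shown, and the resulting p values need not match the reported one exactly.

```python
# 2 x 2 table built from the counts reported in the text:
# rows = recurrence yes / no, columns = low / high YTHDF2 expression.
from scipy.stats import chi2_contingency, fisher_exact

table = [
    [12, 31],  # recurrence:    12 low, 43 - 12 = 31 high
    [85, 55],  # no recurrence: 85 low, 140 - 85 = 55 high
]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.3g}; Fisher exact p = {p_fisher:.3g}")
```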
Similarly, 88.4% of the patients (38 of 43 patients) with recurrence showed low tumoral HNRNPC expression; in contrast, 97.1% of the patients without recurrence (136 of 140 patients) showed low tumoral expression of HNRNPC (p = 0.034; Fig. 1d). The log-rank test revealed that both high HNRNPC and YTHDF2 expressions predicted a significantly shortened RFS (p = 0.003 and p = 0.001, respectively; Fig. 1e, f). In the correlation analysis between YTHDF2 and HNRNPC expression and clinicopathological variables, we found that YTHDF2 expression was positively associated with tumor size (p = 0.021), nodal status (p = 0.007), functional status (p = 0.010), Ki67 index (p = 0.023), and liver metastasis (p < 0.001). There were no significant associations between YTHDF2 expression and the other clinicopathological variables, including age, sex, tumor site, lymphovascular invasion (LVI), and perineural invasion (PNI) (online suppl. Table S3). High YTHDF2 expression was associated with a higher number of CD3+ T cells (p = 0.003). There were no significant associations between YTHDF2 expression and other immune markers (PD-L1, B7-H3, CD3, CD4, and CD68). However, high HNRNPC expression was significantly correlated with positive PD-L1 expression (p = 0.039) and high densities of CD3+ T cells. Moreover, there were no significant associations between HNRNPC expression and other immune markers (Table 2). Representative images of PD-L1, B7-H3, CD3, CD4, CD15, and CD68 are presented in online supplementary Figure S2. Prognostic Predictors of RFS in PanNENs: A univariate Cox analysis for both the clinicopathological parameters and m6A proteins was conducted. Eight clinicopathological factors (tumor size, lymph node status, PNI, LVI, grade, functional status, liver metastasis, and Ki67 index) and seven m6A proteins (METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, and YTHDF3) were identified to be associated with RFS, with p values of <0.05 (Table 3). In the collinearity analysis, tumor grade was found to be associated with Ki67 index and liver metastasis, and LVI was associated with lymph node status. Therefore, PNI, tumor size, lymph node status, grade, functional status, METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, and YTHDF3 were included in the multivariate Cox regression model. Multivariate Cox regression analysis identified tumor size, lymph node status, tumor grade, PNI, HNRNPC expression, and YTHDF2 expression as independent risk factors for RFS after excluding collinear factors. Next, we applied an RSF model to evaluate the importance of these independent risk factors. In the variable importance analysis, the VIMP of HNRNPC was −0.0011 (Fig. 2a). In the minimal depth analysis, HNRNPC, PNI, and grade had values of 2.405, 2.482, and 2.183, respectively, which were close to or above the depth threshold (Fig. 2b). Therefore, they were all excluded from the RSF model. We also performed survival analyses to determine the prognostic value of YTHDF2 mRNA expression in patients with PanNET in the ICGC cohort. The Kaplan-Meier plot revealed that high YTHDF2 mRNA expression was significantly associated with poor RFS (p = 0.017, Fig. 2c). Furthermore, three variables (tumor size, lymph node status, and YTHDF2 expression) significantly affected RFS. Nodal status had the largest effect on RFS, followed by YTHDF2 expression and tumor size. 
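The random survival forest (RSF) step, including the variable importance (VIMP) screen, can be sketched in Python with scikit-survival, as below. This is a stand-in on simulated data only: the original analysis may have used a different implementation (for example R's randomForestSRC, which also reports the minimal-depth statistic quoted above, a quantity this library does not provide), and the feature names and 0/1 coding are assumptions.

```python
# Illustrative RSF sketch (scikit-survival) with permutation-based variable importance.
# Simulated data and assumed feature coding; not the study data or the authors' code.
import numpy as np
import pandas as pd
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 300
X = pd.DataFrame({
    "tumor_size":   rng.integers(0, 2, n),
    "nodal_status": rng.integers(0, 2, n),
    "ythdf2_high":  rng.integers(0, 2, n),
    "hnrnpc_high":  rng.integers(0, 2, n),
})
linear_risk = 0.8 * X["nodal_status"] + 0.7 * X["tumor_size"] + 0.5 * X["ythdf2_high"]
event_time = rng.exponential(60.0 / np.exp(linear_risk))
censor_time = rng.uniform(12, 120, n)
time = np.minimum(event_time, censor_time)
event = event_time <= censor_time
y = Surv.from_arrays(event=event, time=time)

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10, random_state=0)
rsf.fit(X, y)

# Permutation importance: drop in concordance when a feature is shuffled.
# Values near zero or negative (as reported for HNRNPC) suggest the variable adds no signal.
imp = permutation_importance(rsf, X, y, n_repeats=10, random_state=0)
print(dict(zip(X.columns, imp.importances_mean.round(4))))
```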
Construction and Evaluation of the Prognostic Model for Predicting RFS in PanNENs: To construct a recurrence signature that can classify patients into subpopulations according to RFS, we further performed a recursive partitioning analysis. After pruning the decision trees using the post-pruning method, five terminal nodes representing a recurrence signature were identified (Fig. 2d). LNposTumorSize<2.5 cm (node 1; 20.8 months) and LNposTumorSize≥2.5 cmYTHDF2high (node 3; 21.1 months) patients had worse median RFS than LNnegYTHDF2high patients (node 5; 39.6 months, node 1 vs. node 5, p < 0.001), LNnegYTHDF2low patients (node 4; 43.2 months), and LNposTumorSize≥2.5 cmYTHDF2low patients (node 2; 29.3 months). Patients with the LNnegYTHDF2high (node 5) signature had a 5-year RFS rate of 92.1%, whereas patients with the LNposTumorSize<2.5 cm (node 1) signature had the worst 5-year RFS rate of 0% (Fig. 3a). To better identify risk groups for the recurrence signatures, we performed pair-wise comparisons for each recurrence signature subpopulation (on RFS) and the corresponding risk score of recurrence. The results showed that patients within the five terminal nodes can be categorized into three risk groups: low (node 4), intermediate (node 2 and node 5), and high (node 1 and node 3) with well-separated RFS curves (p < 0.001; Fig. 3b). Among the risk groups, the 5-year RFS rates were 6.0%, 70.4%, and 90.1% for the high-, intermediate-, and low-risk groups, respectively. To further test our proposed recurrence signature, we performed a subanalysis on a specific group of patients with nonsyndromic and nonfunctional PanNENs. A total of 93 PanNENs from the full cohort were eligible for this analysis (online suppl. Table S4). The Kaplan-Meier RFS curves showed clear separation among the risk groups (p = 0.018; Fig. 3c). Furthermore, a risk score was generated to evaluate the risk of recurrence based on variables selected from the survival tree analysis: Risk Score = (1.218 × Nodal Status) + (1.625 × Tumor Size) + (0.745 × expression value of YTHDF2) (Table 4). Multivariate analysis using a Cox proportional hazards model was performed which included PNI, tumor size, nodal status, grade, functional status, METTL14, METTL16, ALKBH5, HNRNPC, YTHDF1, YTHDF2, YTHDF3, and the risk score. The risk score was determined to be an independent prognostic factor for patients with resected PanNEN in our study based on the multivariate Cox regression, and higher risk scores were found to be associated with shorter survival (hazard ratio: 33.04, 95% CI: 4.341–251.434; p = 0.001) (Table 5). Specificity and sensitivity comparisons were performed via time-dependent ROC curve analysis of risk score. The predictive accuracy of the recurrence signature was relatively high and the area under ROC curve was 0.858 (95% CI: 0.747–0.913) at 1 year, 0.824 (95% CI: 0.754–0.912) at 3 years, and 0.870 (95% CI: 0.762–0.915) at 5 years (Fig. 3d). The C-index was 0.978 (95% CI: 0.936–1), suggesting good discrimination ability. The calibration curves also showed good agreement between the predicted and observed RFS (Fig. 3e). Discussion/Conclusion: Identification of predictors of relapse in patients with PanNEN is important. Previous studies that investigated risk predictors identified that tumor size, lymph node status, WHO grade, Ki67 index, tertiary lymphoid structures, and tumor-infiltrating neutrophils may predict recurrence [34, 35, 36, 37]. However, a practical and effective model is urgently needed. 
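Evaluation of such a model typically relies on the time-dependent ROC analysis, concordance index, and calibration described above. The sketch below shows how the discrimination metrics (AUC at 1, 3, and 5 years and the C-index) can be computed with scikit-survival; the risk scores and follow-up data are simulated placeholders rather than the study data, so the printed values will not reproduce the reported ones.

```python
# Sketch: time-dependent AUC and concordance index for a continuous risk score.
# Simulated data only; in the study the score is the YTHDF2-based recurrence risk score.
import numpy as np
from sksurv.util import Surv
from sksurv.metrics import cumulative_dynamic_auc, concordance_index_censored

rng = np.random.default_rng(2)
n = 183
risk_score = rng.uniform(0.0, 3.6, n)                      # roughly the 0 to 3.588 range
event_time = rng.exponential(60.0 / np.exp(0.8 * risk_score))
censor_time = rng.uniform(12, 120, n)
time = np.minimum(event_time, censor_time)                 # follow-up in months
event = event_time <= censor_time

y = Surv.from_arrays(event=event, time=time)
eval_times = np.array([12.0, 36.0, 60.0])                  # 1, 3, and 5 years

auc, mean_auc = cumulative_dynamic_auc(y, y, risk_score, eval_times)
cindex = concordance_index_censored(event, time, risk_score)[0]
print("time-dependent AUC:", auc.round(3), "C-index:", round(cindex, 3))
```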
In our study, we constructed a risk model for recurrence with a corresponding risk score to stratify patients on the basis of RFS. The recurrence signature classifies patients into three risk groups independent of grade, with each group representing a distinct RFS outcome. This is the first large-scale study exploring 15 m6A proteins as potential relapse biomarkers for PanNENs, to the best of our knowledge. To date, all of these 15 proteins have not been described in PanNENs. High expression of METTL14 was observed in 11.5% of the entire cohort, and high METTL14 expression predicted poor RFS outcomes (p = 0.024), similar to pancreatic cancer [38], whereas there are opposite results in other types of cancers, such as CRC [39] and bladder cancer [40]. METTL16, another m6A methyltransferase, was also a predictor of relapse and associated with poor RFS in this cohort (p = 0.023); however, recent studies suggested that METTL16 deletion is correlated with poor disease-specific survival or RFS in hepatocellular carcinoma [41], and no difference was found between patients with METTL16-high and METTL16-low CRC [42]. We found that patients with low ALKBH5 expression had a better clinical outcome. Recent studies have also suggested that YTHDF1 and YTHDF3 play a vital role in various types of cancers [43, 44]. Liu et al. [44] showed that YTHDF1 promotes ovarian cancer progression by augmenting EIF3C translation, and its high expression indicated a poor clinical outcome. The expression of YTHDF1 and YTHDF3 was reported to be associated with poor prognosis in breast cancer patients [43]. We attempted to examine the prognostic value of this factor in patients with resected PanNENs. We found that the high expression of both YTHDF1 and YTHDF3 predicted extremely low RFS (p = 0.003 and p = 0.002). YTHDF2 is the most effective m6A reader that can selectively bind to the m6A site. Recent reports have described the ability of YTHDF2 to degrade both tumor promoter as well as suppressor gene mRNAs and adversely affect inflammatory reaction, vascular abnormalization, and self-renewal of leukemic stem cells [45, 46, 47, 48, 49]. Compared with normal tissues, YTHDF2 expression was upregulated in prostate cancer, pancreatic cancer, and lung adenocarcinoma; moreover, high YTHDF2 expression was associated with unfavorable clinicopathological parameters and poor survival, which was consistent with our study [50, 51, 52]. However, Shen et al. [53] found that the expression level of YTHDF2 was lower in gastric cancer than in normal gastric tissues, and low YTHDF2 expression was correlated with poor prognosis. In our PanNENs, high YTHDF2 expression was observed in 49.7% of the tumors; high YTHDF2 expression was strongly associated with poorer RFS (p = 0.0013), and in an independent ICGC cohort, the same correlation was observed (p = 0.017), which validated this finding. We also identified associations between YTHDF2 expression and clinicopathological features, including liver metastasis (p < 0.001), although we did not determine YTHDF2 expression in metastatic lesions. After adjusting for other covariables, high YTHDF2 expression was identified as an independent risk factor for recurrence in patients with PanNENs, suggesting the role of YTHDF2 in metastasis. Recent studies revealed complex biological functions of YTHDF2 in different types of cancers. YTHDF2 could mediate mRNA degradation of tumor suppressors (LHPP and NKX3-1) to promote the proliferation and migration of prostate cancer cells [50]. 
In pancreatic cancer, YTHDF2 knockdown resulted in epithelial-mesenchymal transition through YAP pathway to promote the invasion and migration of cancer cells [51]. Another study showed that upregulated YTHDF2 decreased FOXC2 expression level to inhibit the proliferation, invasion, and migration of gastric cancer cells [53]. However, the regulatory mechanisms of YTHDF2 require further investigations in PanNENs. Another finding is the discovery that HNRNPC has prognostic significance in PanNEN recurrence. High HNRNPC expression was observed more frequently in patients with recurrence than in those without recurrence and was associated with a significantly shortened RFS. HNRNPC contributes to tumorigenesis, which may partially explain the association between high HNRNPC expression and the high recurrence rate in PanNENs. Additionally, we observed a positive correlation between HNRNPC expression and tumoral PD-L1 expression (p < 0.001). No existing literature describes the influence of HNRNPC on PD-L1 expression of PanNEN cells. Therefore, detailed mechanistic investigations are required to understand the functional link between HNRNPC and PD-L1 in PanNENs. The current clinical practice of regular follow-up is the main method for postoperative monitoring of patients and allows early identification of recurrence. However, an important concern with postoperative monitoring of patients is the balance of effectiveness against cost because the radiation exposure over prolonged follow-up periods can be harmful and detrimental to the patient [54]. Current international guidelines of the European Society for Medical Oncology provide follow-up recommendations after PanNEN resection based on grade [55]. In our study, PanNENs were classified into three risk groups: low-, intermediate-, and high-risk groups. Interestingly, grade 2 tumors were distributed across different recurrence risk groups. These results imply that owing to significant heterogeneity in disease behavior, particularly with grade 2 tumors, such distinctions are insufficient and postoperative surveillance should not merely be based on grade. Considering the high accuracy of recurrence signature and its independence from grade, postoperative follow-up regimens could be customized on the basis of these alternative recurrence signatures and risk groups. In addition, adjuvant therapy for high-risk patients may be useful in improving clinical outcomes, although this strategy will need to be validated prospectively in future studies. The high YTHDF2 expression (47.1%) in patients with PanNEN and its correlation with recurrence may have important therapeutic implications with potential for development and clinical use of YTHDF2 inhibitors for the treatment of high YTHDF2-expressing tumors. High HNRNPC expression was correlated with recurrence and PD-L1 expression, which may allow the development of a combinatorial therapeutic regimen using HNRNPC inhibitor and PD-L1 monoclonal antibody. Recent studies have shown that genomic mutations in PanNENs can be relevant to classify patients beyond their tumor grade and identify novel prognostic markers and therapeutic targets that could be relevant in the future for adjuvant therapy [56]. It will be interesting to investigate whether such a clinical approach is feasible in PanNENs with high HNRNPC expression in prospective clinical trials. Some limitations exist in the current study, despite several valuable findings. First, its retrospective nature has inherent limitations. 
Second, the study was performed in patients from a single institution. Although we conducted a subgroup analysis to validate the recurrence signature in a selected group, additional external validation is required to investigate whether these signatures are useful markers in other cohorts. However, the low prevalence of PanNEN in the population is an obstacle to conducting the study in a larger sample size. Therefore, clinical data from multiple centers and prospective evidence are required to validate these results. In conclusion, this is the first study to explore the expression profiles of m6A proteins and association between m6A regulators and immune microenvironment in PanNENs. We identified high YTHDF2 and HNRNPC expression as independent risk factors for disease relapse after surgery. We further established a recurrence signature to identify patients with PanNEN at a higher risk of recurrence. A comprehensive understanding of m6A regulators in PanNEN may help in developing novel treatment strategies. Statement of Ethics: The study was approved by the Institutional Review Board of Peking Union Medical College Hospital (approval number S-K1593) and conducted in accordance with the Declaration of Helsinki. Written consent for this study was not required and formally waived by the hospital's Ethics Committee. Conflict of Interest Statement: The authors declare that they have no conflict of interest. Funding Sources: This work was supported by grants from the Chinese Academy of Medical Sciences Initiative for Innovative Medicine (CAMS-2016-I2M-1–001), the National Scientific Data Sharing Platform for Population and Health (NCMI–YF01N–201906), and the National Natural Science Foundation of China (Nos. 81672648). Author Contributions: Jie Chen made substantial contributions to the conception, design, and critical revision of the manuscript. Shengwei Mo, Liju Zong, Shuangni Yu, and Zhaohui Lu made substantial contributions to tissue microarray construction. Shengwei Mo made substantial contributions to data acquisition. Xianlong Chen and Shengwei Mo performed analysis of the data and drafting of the manuscript. All the authors read and approved the final manuscript. Data Availability Statement: The data used and/or analyzed during the current study are available from the corresponding author on reasonable request. Supplementary Material: Supplementary data (seven additional files).
Background: The RNA N6-methyladenosine (m6A) regulators play a crucial role in tumorigenesis and could be indicators of prognosis and therapeutic targets in various cancers. However, the expression status and prognostic value of m6A regulators have not been studied in pancreatic neuroendocrine neoplasms (PanNENs). We aimed to investigate the expression patterns and prognostic value of m6A regulators and assess their correlations with immune checkpoints and infiltrates in PanNENs. Methods: Immunohistochemistry was performed for 15 m6A regulators and immune markers using tissue microarrays obtained from 183 patients with PanNENs. The correlation between m6A protein expression and clinicopathological parameters with recurrence-free survival (RFS) was examined using a random survival forest, Cox regression model, and survival tree analysis. Results: Among the 15 m6A proteins, high expression of YTHDF2 (p < 0.001) and HNRNPC (p = 0.006) was found to be significantly associated with recurrence and served as independent risk factors in multivariate analysis. High YTHDF2 expression was associated with higher number of CD3+ T cells (p = 0.003), whereas high HNRNPC expression was significantly correlated with the expression of PD-L1 (p = 0.039). A YTHDF2-based signature was determined, including five patterns from survival tree analysis: patients with the LNnegYTHDF2high signature had a 5-year RFS rate of 92.1%, whereas patients with LNposTumorSize<2.5 cm signature had the worst 5-year RFS rate of 0% (p < 0.001). The area under receiver operating characteristic curve was 0.870 (95% confidence interval: 0.762-0.915) for the YTHDF2-based signature. The C-index was 0.978, suggesting good discrimination ability; moreover, the risk score of recurrence served as an independent prognostic factor indicating shorter RFS. Conclusions: YTHDF2 appears to serve as a promising prognostic biomarker and therapeutic target. A YTHDF2-based signature can identify distinct subgroups, which may be helpful to strategize personalized postoperative monitoring.
null
null
12,523
367
[ 242, 264, 216, 386, 280, 149, 304, 381, 360, 631, 50, 53, 73 ]
20
[ "expression", "ythdf2", "patients", "recurrence", "tumor", "risk", "rfs", "high", "hnrnpc", "node" ]
[ "cancer pancreatic cancer", "checkpoints pancreatic neuroendocrine", "pannen grade tumor", "pannen cancer recurrence", "neuroendocrine neoplasms pannens" ]
null
null
null
[CONTENT] N6-methyladenosine methylation regulators | Pancreatic neuroendocrine neoplasms | Immune markers | Recurrence | Postoperative surveillance [SUMMARY]
null
[CONTENT] N6-methyladenosine methylation regulators | Pancreatic neuroendocrine neoplasms | Immune markers | Recurrence | Postoperative surveillance [SUMMARY]
null
[CONTENT] N6-methyladenosine methylation regulators | Pancreatic neuroendocrine neoplasms | Immune markers | Recurrence | Postoperative surveillance [SUMMARY]
null
[CONTENT] Humans | Methylation | Prognosis | Adenosine | RNA | Multivariate Analysis | Neoplasms [SUMMARY]
null
[CONTENT] Humans | Methylation | Prognosis | Adenosine | RNA | Multivariate Analysis | Neoplasms [SUMMARY]
null
[CONTENT] Humans | Methylation | Prognosis | Adenosine | RNA | Multivariate Analysis | Neoplasms [SUMMARY]
null
[CONTENT] cancer pancreatic cancer | checkpoints pancreatic neuroendocrine | pannen grade tumor | pannen cancer recurrence | neuroendocrine neoplasms pannens [SUMMARY]
null
[CONTENT] cancer pancreatic cancer | checkpoints pancreatic neuroendocrine | pannen grade tumor | pannen cancer recurrence | neuroendocrine neoplasms pannens [SUMMARY]
null
[CONTENT] cancer pancreatic cancer | checkpoints pancreatic neuroendocrine | pannen grade tumor | pannen cancer recurrence | neuroendocrine neoplasms pannens [SUMMARY]
null
[CONTENT] expression | ythdf2 | patients | recurrence | tumor | risk | rfs | high | hnrnpc | node [SUMMARY]
null
[CONTENT] expression | ythdf2 | patients | recurrence | tumor | risk | rfs | high | hnrnpc | node [SUMMARY]
null
[CONTENT] expression | ythdf2 | patients | recurrence | tumor | risk | rfs | high | hnrnpc | node [SUMMARY]
null
[CONTENT] m6a | modification | pannens | clinical | patients | regulators | methylation | immune | 23 | m6a regulators [SUMMARY]
null
[CONTENT] node | expression | ythdf2 | patients | fig | hnrnpc | risk | rfs | status | recurrence [SUMMARY]
null
[CONTENT] expression | ythdf2 | patients | recurrence | tumor | risk | variables | node | data | hnrnpc [SUMMARY]
null
[CONTENT] m6A ||| m6A ||| m6A [SUMMARY]
null
[CONTENT] 15 | m6A | YTHDF2 | HNRNPC | 0.006 ||| YTHDF2 | 0.003 | HNRNPC | PD-L1 | 0.039 ||| YTHDF2 | five | LNnegYTHDF2high | 5-year | RFS | 92.1% | LNposTumorSize<2.5 cm | 5-year | RFS | 0% ||| 0.870 | 95% | 0.762-0.915 | YTHDF2 ||| 0.978 | RFS [SUMMARY]
null
[CONTENT] m6A ||| m6A ||| m6A | 15 | m6A | 183 ||| m6A | RFS ||| ||| 15 | m6A | YTHDF2 | HNRNPC | 0.006 ||| YTHDF2 | 0.003 | HNRNPC | PD-L1 | 0.039 ||| YTHDF2 | five | LNnegYTHDF2high | 5-year | RFS | 92.1% | LNposTumorSize<2.5 cm | 5-year | RFS | 0% ||| 0.870 | 95% | 0.762-0.915 | YTHDF2 ||| 0.978 | RFS ||| YTHDF2 ||| YTHDF2 [SUMMARY]
null
Brain barriers virtual: an interim solution or future opportunity?
35232464
Scientific conferences are vital communication events for scientists in academia, industry, and government agencies. In the brain barriers research field, several international conferences exist that allow researchers to present data, share knowledge, and discuss novel ideas and concepts. These meetings are critical platforms for researchers to connect and exchange breakthrough findings on a regular basis. Due to the worldwide COVID-19 pandemic, all in-person meetings were canceled in 2020. In response, we launched the Brain Barriers Virtual 2020 (BBV2020) seminar series, the first stand-in virtual event for the brain barriers field, to offer scientists a virtual platform to present their work. Here we report the aggregate attendance information on two in-person meetings compared with BBV2020 and comment on the utility of the virtual platform.
BACKGROUND
The BBV2020 seminar series was hosted on a Zoom webinar platform and was free of cost for participants. Using registration- and Zoom-based data from the BBV2020 virtual seminar series and survey data collected from BBV2020 participants, we analyzed attendance trends, global reach, participation based on career stage, and engagement of BBV2020. We compared these data with those from two previous in-person conferences, a BBB meeting held in 2018 and CVB 2019.
METHODS
We found that BBV2020 seminar participation steadily decreased over the course of the series. In contrast, live participation was consistently above 100 attendees and recording views were above 200 views per seminar. We also found that participants valued BBV2020 as a supplement during the COVID-19 pandemic in 2020. Based on one post-BBV2020 survey, the majority of participants indicated that they would prefer in-person meetings but would welcome a virtual component to future in-person meetings. Compared to in-person meetings, BBV2020 enabled participation from a broad range of career stages and was attended by scientists in academic, industry, and government agencies from a wide range of countries worldwide.
RESULTS
Our findings suggest that a virtual event such as the BBV2020 seminar series provides easy access to science for researchers across all career stages around the globe. However, we recognize that limitations exist. Regardless, such a virtual event could be a valuable tool for the brain barriers community to reach and engage scientists worldwide to further grow the brain barriers research field in the future.
CONCLUSIONS
[ "COVID-19", "Central Nervous System", "Congresses as Topic", "Humans", "SARS-CoV-2", "Surveys and Questionnaires", "Videoconferencing" ]
8886561
Introduction
On March 11, 2020, the World Health Organization (WHO) declared the coronavirus disease (COVID-19) outbreak a global pandemic [1]. Shortly after, countries worldwide issued stay-at-home orders and lockdowns: the COVID-19 pandemic had forced the world to come to an abrupt halt [2]. With global air travel restrictions and travel bans in place, enterprises including academic institutions, industry and government agencies required individuals to call off their business travel. As a result, scheduled scientific conferences were canceled throughout 2020 [3, 4]. These cancelations created a sudden void in scientific communication between researchers worldwide and forced research communities to be creative and adapt. One possibility to continue scientific communication and interaction was by switching conferences and seminars to virtual online events using novel digital platforms enabling researchers to connect and share science [3, 4]. As a consequence, major conferences and meetings in the brain barriers research field, including the “Barriers of the CNS Gordon Research Conference” (GRC 2020) and the international symposium on “Signal Transduction at the Blood–Brain Barriers” in Bari, Italy were first postponed until 2021 and due to the ongoing pandemic subsequently moved to 2022. The “Cerebral Vascular Biology” (CVB) conference in Uppsala, Sweden planned for 2021 was postponed until 2023. Postponing these meetings left organizers and conference chairs in a difficult position and researchers without a platform to present and share their data. Thus, these postponements created a critical need for the field to stay connected and an opportunity to establish a virtual event that could supplement the postponed conferences in 2020. The Brain Barriers Virtual 2020 Seminar Series (BBV2020) addressed this need by providing a forum for brain barrier scientists worldwide to come together online during the COVID-19 crisis in 2020. BBV2020 seminars were held weekly from May 20, 2020 to September 2, 2020 in a 1-h format featuring a live session with an invited speaker followed by time for questions and answers from the community. The seminar series was hosted on a Zoom webinar platform and was free for anyone to attend. BBV2020 served as a forum for invited speakers to present their work to the brain barriers community and enabled this community to discuss brain barriers science from the safety of their home office. Over 1300 scientists around the world were registered, each seminar was viewed live by 100–400 participants, and the video-recordings received 150–900 views each week. Participants were from academia, industry and government agencies at all career stages from around the globe. In this report, we share participation data that give insight into the impact this virtual seminar series had on the brain barriers field during the COVID-19 pandemic. We compare these data with existing data from past in-person brain barriers meetings and draw conclusions from these data on the potential value of virtual meetings for the brain barriers research field in the future. While the organization of virtual events during the pandemic may seem self-evident, reflection on the impact carries utility for events in the future.
null
null
Results
We organized the virtual BBV2020 seminar series for the brain barriers research field to fill the void that the COVID-19 pandemic left due to canceled and postponed meetings in 2020. The seminar series was held between May–September 2020 every Wednesday from 12:00 to 1:00 PM Eastern Daylight Time (EDT). Seminars were hosted on a Zoom webinar platform and featured a 45-min presentation from an invited speaker followed by questions and answers from the audience. In the following sections, we summarize data collected by the Zoom software, from the listserv used, and the participant survey. We compare and contrast the data from the virtual BBV2020 seminar series with data from in-person meetings from past years. BBV2020 attendance On May 14, 2020, we sent out the first advertisement for the BBV2020 seminar series via email using a listserv with email addresses of past conference attendees and via postings using LinkedIn, Facebook, and Twitter. By the start of the first seminar on May 20, 2020, more than 720 scientists had registered. By the end of the 16-week long BBV2020 seminar series, a total of 1302 registrants signed up on our listserv to receive the seminar link, speaker updates, and video recording links (Fig. 1A). We obtained data from two past in-person meetings from the respective conference chairs: (1) brain barriers meeting in 2018 (BBB 2018) and the (2) Cerebral Vascular Biology meeting in 2019 (CVB 2019). Fig. 1 Attendance Trends for Brain Barrier Research Conferences. A With 1,302 individuals signing up and added to the listserv, BBV2020 attracted more registrants than BBB 2018 and CVB 2019. B Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020. C Number of views (clicks) of the video recordings when available weekly for BBV2020. Black line indicates best linear fit. Gray line indicates the average (mean) for all 16 weeks Attendance Trends for Brain Barrier Research Conferences. A With 1,302 individuals signing up and added to the listserv, BBV2020 attracted more registrants than BBB 2018 and CVB 2019. B Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020. C Number of views (clicks) of the video recordings when available weekly for BBV2020. Black line indicates best linear fit. Gray line indicates the average (mean) for all 16 weeks Strikingly, the total number of registered applicants for the BBV2020 was more than 4-fold higher compared to the total number of registered participants for the 2018 and the CVB 2019 meetings (Fig. 1A). The large number of registrants is likely due to free registration, no associated travel expenses, a world-wide reach, and that virtual meetings come with no obligations for the attendees. In contrast, the number of attendees of the in-person meetings (BBB 2018 and CVB2019) was limited and determined by the size of the conference site. The first live BBV2020 seminar was attended by 419 participants. As the seminar series progressed, the participant number during live seminars gradually decreased, the average participant number was 220, the minimum was 108 (Fig. 1B). A similar trend was observed in the numbers of total recording views: the first seminar recording had a maximum of 944 views; in average, videos had 422 views (Fig. 1C). The trendline in Fig. 1C shows a drop in recording views over the course of the seminar series. 
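The downward trend in live attendance and recording views was summarized with a best-fit line (Fig. 1B, C). A simple way to reproduce that kind of trendline is sketched below; apart from the first-week and minimum attendance figures quoted in the text, the weekly counts used here are made-up placeholders.

```python
# Sketch: fitting a linear trend to weekly live-attendance counts.
# Only the first value (419) and the minimum (108) are reported in the text;
# the remaining weekly counts are hypothetical placeholders.
import numpy as np

weeks = np.arange(1, 17)  # 16 weekly seminars
live_viewers = np.array([419, 380, 350, 320, 300, 280, 260, 250,
                         240, 220, 200, 190, 170, 150, 130, 108])

slope, intercept = np.polyfit(weeks, live_viewers, 1)  # degree-1 (linear) fit
print(f"average change per week: {slope:.1f} viewers; mean attendance: {live_viewers.mean():.0f}")
```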
These trends held regardless of academic rank as graduate students, postdoctoral fellows, and professors all showed a similar downward trend in live attendance (Additional file 1: Fig. S1A–C). Note that we were unable to capture recording data for week 2 due to a recording error and for week 13 due to the request by the speaker to not record. Together, the high number of registrants suggests that the virtual BBV2020 seminar series sparked interest. Attendance of the live seminars and the numbers of views for the recorded videos decreased over time, which may be an indicator of “virtual meeting fatigue” or attendees being on their summer vacation. On May 14, 2020, we sent out the first advertisement for the BBV2020 seminar series via email using a listserv with email addresses of past conference attendees and via postings using LinkedIn, Facebook, and Twitter. By the start of the first seminar on May 20, 2020, more than 720 scientists had registered. By the end of the 16-week long BBV2020 seminar series, a total of 1302 registrants signed up on our listserv to receive the seminar link, speaker updates, and video recording links (Fig. 1A). We obtained data from two past in-person meetings from the respective conference chairs: (1) brain barriers meeting in 2018 (BBB 2018) and the (2) Cerebral Vascular Biology meeting in 2019 (CVB 2019). Fig. 1 Attendance Trends for Brain Barrier Research Conferences. A With 1,302 individuals signing up and added to the listserv, BBV2020 attracted more registrants than BBB 2018 and CVB 2019. B Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020. C Number of views (clicks) of the video recordings when available weekly for BBV2020. Black line indicates best linear fit. Gray line indicates the average (mean) for all 16 weeks Attendance Trends for Brain Barrier Research Conferences. A With 1,302 individuals signing up and added to the listserv, BBV2020 attracted more registrants than BBB 2018 and CVB 2019. B Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020. C Number of views (clicks) of the video recordings when available weekly for BBV2020. Black line indicates best linear fit. Gray line indicates the average (mean) for all 16 weeks Strikingly, the total number of registered applicants for the BBV2020 was more than 4-fold higher compared to the total number of registered participants for the 2018 and the CVB 2019 meetings (Fig. 1A). The large number of registrants is likely due to free registration, no associated travel expenses, a world-wide reach, and that virtual meetings come with no obligations for the attendees. In contrast, the number of attendees of the in-person meetings (BBB 2018 and CVB2019) was limited and determined by the size of the conference site. The first live BBV2020 seminar was attended by 419 participants. As the seminar series progressed, the participant number during live seminars gradually decreased, the average participant number was 220, the minimum was 108 (Fig. 1B). A similar trend was observed in the numbers of total recording views: the first seminar recording had a maximum of 944 views; in average, videos had 422 views (Fig. 1C). The trendline in Fig. 1C shows a drop in recording views over the course of the seminar series. These trends held regardless of academic rank as graduate students, postdoctoral fellows, and professors all showed a similar downward trend in live attendance (Additional file 1: Fig. S1A–C). 
Note that we were unable to capture recording data for week 2 due to a recording error and for week 13 due to the request by the speaker to not record. Together, the high number of registrants suggests that the virtual BBV2020 seminar series sparked interest. Attendance of the live seminars and the numbers of views for the recorded videos decreased over time, which may be an indicator of “virtual meeting fatigue” or attendees being on their summer vacation. BBV2020 survey analysis By the end of the seminar series in September, every BBV2020 registrant received an invitation to participate in an online survey created by the organizers on August 23, 2020. The survey was designed for registrants to provide feedback and consisted of 10 questions: (1) demographic question to capture career level and career type; (2) single select multiple choice question to capture attendance of past in-person brain barrier meetings, (3) single select multiple choice question on how many recorded seminars were viewed, (4) Likert scale question regarding the enrichment BBV2020 provided for the community during the COVID-19 pandemic, (5) dichotomous question asking if the registrant would still participate in BBV2020 seminars if they were continued, (6) single select multiple choice question on attendance frequency, (7) single select multiple choice question on how many times the registrant has thus far attended a brain barriers meeting previously in person, (8) matrix question to capture affective factors, (9) matrix question to capture learning and teaching, and (10) matrix question to capture combined validity and practicality of the BBV2020 series. In addition, registrants responding to the survey had the opportunity to include general comments to the organizers in a separate text box. Overall, 23% of registrants (total of 299 registrants) responded to the survey. Over 89% of survey respondents agreed strongly that “Attending BBV2020 was a positive experience” and 63% of responders felt that “BBV2020 is appropriate for my research”. Generally, the respondents tended to agree with the following statement: “BBV2020 has enriched the connection with the Brain Barriers community in the gap that COVID-19 has created” (n = 298 respondents to this question of a total of 299 registrants, M = 1.44, SD = 0.660, 1 = Strongly Agree, 2 = Agree, 3 = Neutral, 4 = Disagree, 5 = Strongly Disagree; Fig. 2A). However, when posed with the statements “In the future, I would prefer to attend a virtual conference over an in-person conference” or “The virtual experience is not as effective compared to the in-person conferences”, responses were neutral (M = 3.24; Fig. 2B) indicating a positive view towards a virtual conference. A difference emerged when we compared BBV2020 participants based on their career level with their preference for in-person versus virtual meeting formats: using one-way ANOVA, we found a significant difference between groups (F(9286) = 2.607, p = 0.007). Using Tukey’s HSD post hoc test indicated that Master’s students were more likely to disagree with the following statement when compared to Faculty/Professors: “I would feel more comfortable attending Brain Barrier conferences/seminars in person, but was glad to attend virtually due to COVID19.” A strong trend emerged when Master’s and Ph.D. students were grouped together and compared to Post-Docs and Faculty/Professors in an independent-samples t test. 
Results show graduate students in general were more favorable to a virtual conference compared to colleagues more established in their careers (t(225) = -1.972, p = 0.061). The lack of significance is in part due to the low sample size (n = 74) for graduate students. Fig. 2 Post-seminar survey results. A Volunteer respondent result when posed with the prompt “BBV2020 has enriched connection with the Brain Barriers community in the gap that COVID-19 has created”. B Volunteer respondent results when posted with the prompts “In the future, I would prefer to attend a virtual conference over an in-person conference” (Blue) and “The virtual experience is not as effective compared to the in-person conferences” (Red) Post-seminar survey results. A Volunteer respondent result when posed with the prompt “BBV2020 has enriched connection with the Brain Barriers community in the gap that COVID-19 has created”. B Volunteer respondent results when posted with the prompts “In the future, I would prefer to attend a virtual conference over an in-person conference” (Blue) and “The virtual experience is not as effective compared to the in-person conferences” (Red) Overall, descriptive data revealed many participants felt BBV2020 was a positive experience. In fact, 99.3% of survey respondents noted they somewhat or strongly agree that their experience was positive, with only 4.7% saying they faced technical difficulties. Additionally, most respondents agreed that virtual conferences have an important role in research (93.2%) and are more accessible than in-person conferences (90.2%). Finally, more than half of respondents (56.9%) noted virtual conferences can do things in-person conferences cannot. Nevertheless, there continued to be a desire among many BBV2020 participants to return to conferencing in person, with 13.5% saying they found it hard to focus in the virtual format. In fact, roughly one-third (33.9%) felt the virtual conference was not as effective as an in-person conference, and only one-quarter (23.3%) said they would prefer virtual conferences to in-person conferences in the future, while 35.8% were neutral. Together, our findings suggest that while BBV2020 filled a need created by a global pandemic, the enthusiasm for returning to only in-person meetings remained relatively neutral suggesting that virtual platforms may play a critical role in dissemination of research and science in the future. Moreover, the data from this study show a trend between graduate students versus professors in their response with regard to in-person meetings indicating that career stage may influence the perspective of the usefulness of in-person versus virtual platforms. By the end of the seminar series in September, every BBV2020 registrant received an invitation to participate in an online survey created by the organizers on August 23, 2020. 
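The survey comparisons described above (a one-way ANOVA across career levels, Tukey's HSD post hoc test, and an independent-samples t test comparing grouped students with more senior researchers) can be sketched with scipy and statsmodels. The Likert responses and group sizes below are simulated, and only four career groups are shown rather than the ten analyzed in the study; only the test structure mirrors the reported analysis.

```python
# Sketch: one-way ANOVA, Tukey's HSD, and an independent-samples t test on
# simulated 1-5 Likert responses grouped by career stage (hypothetical data).
import numpy as np
import pandas as pd
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
means = {"Masters": 2.6, "PhD": 2.7, "Postdoc": 3.0, "Faculty": 3.2}
records = []
for group, mu in means.items():
    scores = np.clip(np.rint(rng.normal(mu, 0.9, size=60)), 1, 5)
    records.append(pd.DataFrame({"career": group, "likert": scores}))
df = pd.concat(records, ignore_index=True)

# One-way ANOVA across career levels.
samples = [df.loc[df["career"] == g, "likert"] for g in means]
F, p = f_oneway(*samples)
print(f"ANOVA: F = {F:.3f}, p = {p:.3f}")

# Tukey's HSD post hoc comparisons between career levels.
print(pairwise_tukeyhsd(df["likert"], df["career"]))

# Independent-samples t test: students (Masters + PhD) vs Postdocs + Faculty.
students = df.loc[df["career"].isin(["Masters", "PhD"]), "likert"]
seniors = df.loc[~df["career"].isin(["Masters", "PhD"]), "likert"]
t, p_t = ttest_ind(students, seniors)
print(f"t test: t = {t:.3f}, p = {p_t:.3f}")
```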
The survey was designed for registrants to provide feedback and consisted of 10 questions: (1) demographic question to capture career level and career type; (2) single select multiple choice question to capture attendance of past in-person brain barrier meetings, (3) single select multiple choice question on how many recorded seminars were viewed, (4) Likert scale question regarding the enrichment BBV2020 provided for the community during the COVID-19 pandemic, (5) dichotomous question asking if the registrant would still participate in BBV2020 seminars if they were continued, (6) single select multiple choice question on attendance frequency, (7) single select multiple choice question on how many times the registrant has thus far attended a brain barriers meeting previously in person, (8) matrix question to capture affective factors, (9) matrix question to capture learning and teaching, and (10) matrix question to capture combined validity and practicality of the BBV2020 series. In addition, registrants responding to the survey had the opportunity to include general comments to the organizers in a separate text box. Overall, 23% of registrants (total of 299 registrants) responded to the survey. Over 89% of survey respondents agreed strongly that “Attending BBV2020 was a positive experience” and 63% of responders felt that “BBV2020 is appropriate for my research”. Generally, the respondents tended to agree with the following statement: “BBV2020 has enriched the connection with the Brain Barriers community in the gap that COVID-19 has created” (n = 298 respondents to this question of a total of 299 registrants, M = 1.44, SD = 0.660, 1 = Strongly Agree, 2 = Agree, 3 = Neutral, 4 = Disagree, 5 = Strongly Disagree; Fig. 2A). However, when posed with the statements “In the future, I would prefer to attend a virtual conference over an in-person conference” or “The virtual experience is not as effective compared to the in-person conferences”, responses were neutral (M = 3.24; Fig. 2B) indicating a positive view towards a virtual conference. A difference emerged when we compared BBV2020 participants based on their career level with their preference for in-person versus virtual meeting formats: using one-way ANOVA, we found a significant difference between groups (F(9286) = 2.607, p = 0.007). Using Tukey’s HSD post hoc test indicated that Master’s students were more likely to disagree with the following statement when compared to Faculty/Professors: “I would feel more comfortable attending Brain Barrier conferences/seminars in person, but was glad to attend virtually due to COVID19.” A strong trend emerged when Master’s and Ph.D. students were grouped together and compared to Post-Docs and Faculty/Professors in an independent-samples t test. Results show graduate students in general were more favorable to a virtual conference compared to colleagues more established in their careers (t(225) = -1.972, p = 0.061). The lack of significance is in part due to the low sample size (n = 74) for graduate students. Fig. 2 Post-seminar survey results. A Volunteer respondent result when posed with the prompt “BBV2020 has enriched connection with the Brain Barriers community in the gap that COVID-19 has created”. B Volunteer respondent results when posted with the prompts “In the future, I would prefer to attend a virtual conference over an in-person conference” (Blue) and “The virtual experience is not as effective compared to the in-person conferences” (Red) Post-seminar survey results. 
A Volunteer respondent result when posed with the prompt “BBV2020 has enriched connection with the Brain Barriers community in the gap that COVID-19 has created”. B Volunteer respondent results when posted with the prompts “In the future, I would prefer to attend a virtual conference over an in-person conference” (Blue) and “The virtual experience is not as effective compared to the in-person conferences” (Red) Overall, descriptive data revealed many participants felt BBV2020 was a positive experience. In fact, 99.3% of survey respondents noted they somewhat or strongly agree that their experience was positive, with only 4.7% saying they faced technical difficulties. Additionally, most respondents agreed that virtual conferences have an important role in research (93.2%) and are more accessible than in-person conferences (90.2%). Finally, more than half of respondents (56.9%) noted virtual conferences can do things in-person conferences cannot. Nevertheless, there continued to be a desire among many BBV2020 participants to return to conferencing in person, with 13.5% saying they found it hard to focus in the virtual format. In fact, roughly one-third (33.9%) felt the virtual conference was not as effective as an in-person conference, and only one-quarter (23.3%) said they would prefer virtual conferences to in-person conferences in the future, while 35.8% were neutral. Together, our findings suggest that while BBV2020 filled a need created by a global pandemic, the enthusiasm for returning to only in-person meetings remained relatively neutral suggesting that virtual platforms may play a critical role in dissemination of research and science in the future. Moreover, the data from this study show a trend between graduate students versus professors in their response with regard to in-person meetings indicating that career stage may influence the perspective of the usefulness of in-person versus virtual platforms. Global reach of BBV2020 During the seminar registration process we collected attendants’ affiliations and based on this information we determined the global reach of the BBV2020. Registrants of the BBV2020 seminar series were from 43 countries representing six of the seven continents (Fig. 3A, B). Compared to the in-person BBB 2018 and the CVB2019, more countries were represented at the BBV2020 (Fig. 3A). Fig. 3 Global reach of BBV2020. A BBV2020 had more countries represented compared to BBB 2018 and CVB 2019 alone. B World map showing countries represented by BBV2020. Countries where registrants participated in BBV2020: Red and Pink. Countries not represented at BBV2020: Light Gray and Dark Gray. Countries where registrants attended BBV2020 and at least one of the in-person meetings between Meeting 2018 or CVB 2019: Red. Countries where registrants attended BBV2020 and did not attend either BBB 2018 or CVB 2019: Pink. Countries that had no participation in any of the brain barriers meetings examined in this study: Light gray. Countries that had participation in either BBB 2018 or CVB 2019 that did not participate in BBV2020: Dark Gray Global reach of BBV2020. A BBV2020 had more countries represented compared to BBB 2018 and CVB 2019 alone. B World map showing countries represented by BBV2020. Countries where registrants participated in BBV2020: Red and Pink. Countries not represented at BBV2020: Light Gray and Dark Gray. Countries where registrants attended BBV2020 and at least one of the in-person meetings between Meeting 2018 or CVB 2019: Red. 
Countries where registrants attended BBV2020 and did not attend either BBB 2018 or CVB 2019: Pink. Countries that had no participation in any of the brain barriers meetings examined in this study: Light gray. Countries that had participation in either BBB 2018 or CVB 2019 that did not participate in BBV2020: Dark Gray We also compared the number of participants from each country who joined BBV2020 vs. BBB 2018 and CVB 2019. We found that the number of participants per country worldwide was also higher compared to both in person meetings (Fig. 4A–F). These data show that the virtual nature of BBV2020 enabled researchers world-wide to participate and suggest that the outreach of a virtual event can extend beyond the field. Fig. 4 Global view of participation over BBV2020, BBB 2018, and CVB 2019. Representation of participants for (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Ordered from top to bottom in greatest number of registrants. Number of registrants scaled by color based on number of registrants per country for (D) BBV 2020, (E) BBB 2018, and (F) CVB 2019. Light Pink: 1–10 registrants; Dark Pink: 11–50 registrants; Red: 51–100 registrants and Dark Red: > 100 registrants Global view of participation over BBV2020, BBB 2018, and CVB 2019. Representation of participants for (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Ordered from top to bottom in greatest number of registrants. Number of registrants scaled by color based on number of registrants per country for (D) BBV 2020, (E) BBB 2018, and (F) CVB 2019. Light Pink: 1–10 registrants; Dark Pink: 11–50 registrants; Red: 51–100 registrants and Dark Red: > 100 registrants During the seminar registration process we collected attendants’ affiliations and based on this information we determined the global reach of the BBV2020. Registrants of the BBV2020 seminar series were from 43 countries representing six of the seven continents (Fig. 3A, B). Compared to the in-person BBB 2018 and the CVB2019, more countries were represented at the BBV2020 (Fig. 3A). Fig. 3 Global reach of BBV2020. A BBV2020 had more countries represented compared to BBB 2018 and CVB 2019 alone. B World map showing countries represented by BBV2020. Countries where registrants participated in BBV2020: Red and Pink. Countries not represented at BBV2020: Light Gray and Dark Gray. Countries where registrants attended BBV2020 and at least one of the in-person meetings between Meeting 2018 or CVB 2019: Red. Countries where registrants attended BBV2020 and did not attend either BBB 2018 or CVB 2019: Pink. Countries that had no participation in any of the brain barriers meetings examined in this study: Light gray. Countries that had participation in either BBB 2018 or CVB 2019 that did not participate in BBV2020: Dark Gray Global reach of BBV2020. A BBV2020 had more countries represented compared to BBB 2018 and CVB 2019 alone. B World map showing countries represented by BBV2020. Countries where registrants participated in BBV2020: Red and Pink. Countries not represented at BBV2020: Light Gray and Dark Gray. Countries where registrants attended BBV2020 and at least one of the in-person meetings between Meeting 2018 or CVB 2019: Red. Countries where registrants attended BBV2020 and did not attend either BBB 2018 or CVB 2019: Pink. Countries that had no participation in any of the brain barriers meetings examined in this study: Light gray. 
Countries that had participation in either BBB 2018 or CVB 2019 that did not participate in BBV2020: Dark Gray We also compared the number of participants from each country who joined BBV2020 vs. BBB 2018 and CVB 2019. We found that the number of participants per country worldwide was also higher compared to both in person meetings (Fig. 4A–F). These data show that the virtual nature of BBV2020 enabled researchers world-wide to participate and suggest that the outreach of a virtual event can extend beyond the field. Fig. 4 Global view of participation over BBV2020, BBB 2018, and CVB 2019. Representation of participants for (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Ordered from top to bottom in greatest number of registrants. Number of registrants scaled by color based on number of registrants per country for (D) BBV 2020, (E) BBB 2018, and (F) CVB 2019. Light Pink: 1–10 registrants; Dark Pink: 11–50 registrants; Red: 51–100 registrants and Dark Red: > 100 registrants Global view of participation over BBV2020, BBB 2018, and CVB 2019. Representation of participants for (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Ordered from top to bottom in greatest number of registrants. Number of registrants scaled by color based on number of registrants per country for (D) BBV 2020, (E) BBB 2018, and (F) CVB 2019. Light Pink: 1–10 registrants; Dark Pink: 11–50 registrants; Red: 51–100 registrants and Dark Red: > 100 registrants Career stages represented at BBV2020 We examined the self-reported career stage of BBV2020 participants to that of BBB 2018 and CVB 2019 participants. Based on the survey, more than 46% of all participants attended a brain barriers conference/seminar for the first time. We also found that in-person meetings are attended by 42.4–48.9% academicians at the professor rank, graduate students make up 22.9–25.8% and postdoctoral fellows are 6.6–17.7% of the participants (Fig. 5). In contrast, at the BBV2020 19.6% of academicians were at the professor rank, graduate students made up 30.5% and postdoctoral fellows 16.1% of attendees. We also found that BBV2020 had a larger participation from industry (18.5% vs. 5.6–6.6%), and undergraduate students (3.2% vs. 1%) compared to both in person meetings. Together, our data suggest that a virtual platform seminar like BBV2020 reaches a broader range of scientific ranks and careers. Fig. 5 Proportions of various career stages participating in BBV2020, BBB 2018, and CVB 2019. Proportions of various career stages participating in (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Colors correspond to career stage: Red (professor any rank), Green (graduate student any rank), Dark Blue (research faculty or researcher), Yellow (industry scientist), Orange (postdoctoral fellow), Purple (government/regulatory), Light Blue (clinician), Green (undergraduate student) Proportions of various career stages participating in BBV2020, BBB 2018, and CVB 2019. Proportions of various career stages participating in (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Colors correspond to career stage: Red (professor any rank), Green (graduate student any rank), Dark Blue (research faculty or researcher), Yellow (industry scientist), Orange (postdoctoral fellow), Purple (government/regulatory), Light Blue (clinician), Green (undergraduate student) We examined the self-reported career stage of BBV2020 participants to that of BBB 2018 and CVB 2019 participants. Based on the survey, more than 46% of all participants attended a brain barriers conference/seminar for the first time. 
Career stages represented at BBV2020 We compared the self-reported career stages of BBV2020 participants with those of BBB 2018 and CVB 2019 participants. Based on the survey, more than 46% of all participants attended a brain barriers conference/seminar for the first time. At the in-person meetings, academicians at the professor rank accounted for 42.4–48.9% of participants, graduate students for 22.9–25.8%, and postdoctoral fellows for 6.6–17.7% (Fig. 5). In contrast, at BBV2020, professors (any rank) made up 19.6% of attendees, graduate students 30.5%, and postdoctoral fellows 16.1%. BBV2020 also had larger participation from industry (18.5% vs. 5.6–6.6%) and undergraduate students (3.2% vs. 1%) compared to both in-person meetings. Together, our data suggest that a virtual seminar platform like BBV2020 reaches a broader range of scientific ranks and careers. Fig. 5 Proportions of various career stages participating in (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Colors correspond to career stage: Red (professor any rank), Green (graduate student any rank), Dark Blue (research faculty or researcher), Yellow (industry scientist), Orange (postdoctoral fellow), Purple (government/regulatory), Light Blue (clinician), Green (undergraduate student)
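The career-stage proportions shown in Fig. 5 can be derived directly from the registration records. A minimal sketch, assuming a hypothetical list of self-reported career stages (not the actual registrant data):

```python
from collections import Counter

# Hypothetical self-reported career stages from a registration list;
# the real BBV2020 records are not reproduced here.
career_stages = [
    "Graduate student", "Professor", "Postdoctoral fellow",
    "Industry scientist", "Graduate student", "Undergraduate student",
]

counts = Counter(career_stages)
total = sum(counts.values())
for stage, n in counts.most_common():
    print(f"{stage}: {n / total:.1%} of registrants")
```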
Conclusion and future perspectives
Additional file 1: Figure S1. Attendance Trends for BBV2020 by Academic Rank. Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020 separated by A) Graduate Student, B) Postdoctoral Fellow, and C) Professor (any rank).
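The attendance curves in Fig. 1B, C and Figure S1 reduce to counting unique viewers per live session and fitting a linear trend. A minimal sketch of that summary, assuming a hypothetical export of (week, viewer e-mail) pairs from the webinar platform; the platform's own "unique viewers" report was used for the actual analysis:

```python
import numpy as np

# Hypothetical attendance log: (week number, viewer e-mail) pairs as a webinar
# platform might export them; duplicates within a week represent re-logins.
log = [
    (1, "a@x.org"), (1, "b@y.edu"), (1, "a@x.org"),
    (2, "a@x.org"), (2, "c@z.com"),
    (3, "b@y.edu"),
]

# Count unique viewers per week (re-logins within a week are counted once).
weeks = sorted({w for w, _ in log})
unique_viewers = [len({email for w, email in log if w == wk}) for wk in weeks]

mean_attendance = np.mean(unique_viewers)
slope, intercept = np.polyfit(weeks, unique_viewers, deg=1)  # best linear fit
print(f"mean weekly attendance: {mean_attendance:.1f}")
print(f"trend: {slope:+.2f} unique viewers per week")
```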
[ "Methods", "Organization", "Virtual platform and technical assistance", "BBV2020 schedule", "Invited speakers", "Advertisement and participants", "BBV2020 seminar presentation", "Map generation", "Survey", "Data collection, analysis, and statistics", "BBV2020 attendance", "BBV2020 survey analysis", "Global reach of BBV2020", "Career stages represented at BBV2020", "Attendance: scientists around the globe", "Survey: feedback from participants", "Virtual seminars: a powerful opportunity", "Conclusion and future perspectives" ]
[ " Organization The BBV2020 seminar series was created by Drs. Bjoern Bauer (University of Kentucky, Lexington, KY, USA), Anika Hartz (University of Kentucky, Lexington, KY, USA), and Brandon Kim (University of Alabama, Tuscaloosa, Alabama, USA). Seminar moderators were postdoctoral researchers and scientists from the USA and Europe: Drs. Natalie Hudson (Trinity College Dublin, Dublin, Ireland), Geetika Nehra (University of Kentucky, Lexington, KY, USA), Michelle Pizzo (Denali Therapeutics Inc., San Francisco, CA, USA), and Steffen Storck (University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany).\nThe BBV2020 seminar series was created by Drs. Bjoern Bauer (University of Kentucky, Lexington, KY, USA), Anika Hartz (University of Kentucky, Lexington, KY, USA), and Brandon Kim (University of Alabama, Tuscaloosa, Alabama, USA). Seminar moderators were postdoctoral researchers and scientists from the USA and Europe: Drs. Natalie Hudson (Trinity College Dublin, Dublin, Ireland), Geetika Nehra (University of Kentucky, Lexington, KY, USA), Michelle Pizzo (Denali Therapeutics Inc., San Francisco, CA, USA), and Steffen Storck (University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany).\n Virtual platform and technical assistance BBV2020 was run on the Zoom Webinar 500 platform (Zoom Video Communications, Inc.; San Jose, CA, USA) that was funded by Dr. Bjoern Bauer using University of Kentucky funds. This particular Zoom Webinar license allowed for up to 500 live participants. The Zoom Webinar platform supports an in-session chat function, a Q&A forum, a raise hand function for live questions, and provides a panelist section with the option to share audio, video and presentation slides, a recording function and post-webinar data collection. The hosts and moderators were in control of audio and video; participants were not able to record or unmute during a session. Participants had no control function other than a “raise hand” option and the Q&A and chat function allowing to send questions or messages to the moderator. Technical assistance was provided by Todd Sizemore, College of Pharmacy, University of Kentucky.\nBBV2020 was run on the Zoom Webinar 500 platform (Zoom Video Communications, Inc.; San Jose, CA, USA) that was funded by Dr. Bjoern Bauer using University of Kentucky funds. This particular Zoom Webinar license allowed for up to 500 live participants. The Zoom Webinar platform supports an in-session chat function, a Q&A forum, a raise hand function for live questions, and provides a panelist section with the option to share audio, video and presentation slides, a recording function and post-webinar data collection. The hosts and moderators were in control of audio and video; participants were not able to record or unmute during a session. Participants had no control function other than a “raise hand” option and the Q&A and chat function allowing to send questions or messages to the moderator. Technical assistance was provided by Todd Sizemore, College of Pharmacy, University of Kentucky.\n BBV2020 schedule The BBV2020 seminar series ran from May 20 to September 2, 2020. Live seminars were held once per week from 12 to 1 PM Eastern Daylight Time (EDT) to accommodate researchers across a wide range of time zones: Central European Time (6:00 PM) to Pacific Daylight Time (9:00 AM). 
To accommodate colleagues with time constraints or those within incompatible time zones such as Asia or Oceania, presentations were recorded (depending on speaker permissions) and the seminar recording was made available for up to 48 h after the live seminar. The link for the recorded videos was distributed via the BBV2020 listserv that was set up through the University of Kentucky listserv.\nThe BBV2020 seminar series ran from May 20 to September 2, 2020. Live seminars were held once per week from 12 to 1 PM Eastern Daylight Time (EDT) to accommodate researchers across a wide range of time zones: Central European Time (6:00 PM) to Pacific Daylight Time (9:00 AM). To accommodate colleagues with time constraints or those within incompatible time zones such as Asia or Oceania, presentations were recorded (depending on speaker permissions) and the seminar recording was made available for up to 48 h after the live seminar. The link for the recorded videos was distributed via the BBV2020 listserv that was set up through the University of Kentucky listserv.\n Invited speakers Efforts were made to invite speakers from a variety of countries and genders. Invited speakers included organizers of postponed in-person conferences, speakers from academia and industry from different career stages. A total of 16 speakers were invited: May 20, 2020: Elga De Vries, Amsterdam UMC, Amsterdam, The Netherlands; May 27, 2020: Richard Daneman, University of California San Diego, San Diego, CA, USA; June 3, 2020: Robyn Klein, Washington University, St. Louis, MO, USA; June 10, 2020: Ayal Ben-Zvi, The Hebrew University of Jerusalem, Israel; June 17, 2020: Britta Engelhardt, University of Bern, Bern, Switzerland; June 24, 2020: Margareta Hammarlund-Udenaes, Uppsala University, Uppsala, Sweden; July 1, 2020: Daniela Virgintino, University of Bari, Bari, Italy; July 8, 2020: Eric Shusta, University of Wisconsin, Madison, WI, USA; July 15, 2020: Matthew Campbell, Trinity College Dublin, Dublin, Ireland; July 22, 2020: Krzysztof Kucharz, University of Copenhagen, Copenhagen, Denmark; July 29, 2020: William Elmquist, University of Minnesota, Minneapolis, MN, USA; August 5, 2020: Edward Neuwelt, Oregon Health & Sciences University, OR, USA; August 12, 2020: Robert Thorne, Denali Therapeutics, San Francisco, CA, USA; August 19, 2020: Teresa Sanchez, Weill Cornell Medical College, New York City, NJ, USA; August 26, 2020: Michael Taylor, University of Wisconsin, Madison, MI, USA; September 2, 2020: Joan Abbott, King’s College London, London, UK.\nSpeakers were offered a training session by the organizers to cover key aspects of a Zoom-based webinar and to test video, audio, and slide presentation prior to the live seminar. Ten out of 16 speakers took the training session. Speakers were informed by the organizers that any data presented should be considered public as security on the Zoom platform could not be ensured. Invited speakers had the opportunity to give permission to have their seminar recorded or to opt out of the recording. If a speaker gave permission to record, the entire seminar was recorded, and recordings were made available to all registered participants in a non-downloadable format through the University of Kentucky Zoom cloud for up to 48 h after the live session. Out of 16 speakers, 15 gave permission to record the live seminar.\nEfforts were made to invite speakers from a variety of countries and genders. 
Invited speakers included organizers of postponed in-person conferences, speakers from academia and industry from different career stages. A total of 16 speakers were invited: May 20, 2020: Elga De Vries, Amsterdam UMC, Amsterdam, The Netherlands; May 27, 2020: Richard Daneman, University of California San Diego, San Diego, CA, USA; June 3, 2020: Robyn Klein, Washington University, St. Louis, MO, USA; June 10, 2020: Ayal Ben-Zvi, The Hebrew University of Jerusalem, Israel; June 17, 2020: Britta Engelhardt, University of Bern, Bern, Switzerland; June 24, 2020: Margareta Hammarlund-Udenaes, Uppsala University, Uppsala, Sweden; July 1, 2020: Daniela Virgintino, University of Bari, Bari, Italy; July 8, 2020: Eric Shusta, University of Wisconsin, Madison, WI, USA; July 15, 2020: Matthew Campbell, Trinity College Dublin, Dublin, Ireland; July 22, 2020: Krzysztof Kucharz, University of Copenhagen, Copenhagen, Denmark; July 29, 2020: William Elmquist, University of Minnesota, Minneapolis, MN, USA; August 5, 2020: Edward Neuwelt, Oregon Health & Sciences University, OR, USA; August 12, 2020: Robert Thorne, Denali Therapeutics, San Francisco, CA, USA; August 19, 2020: Teresa Sanchez, Weill Cornell Medical College, New York City, NJ, USA; August 26, 2020: Michael Taylor, University of Wisconsin, Madison, MI, USA; September 2, 2020: Joan Abbott, King’s College London, London, UK.\nSpeakers were offered a training session by the organizers to cover key aspects of a Zoom-based webinar and to test video, audio, and slide presentation prior to the live seminar. Ten out of 16 speakers took the training session. Speakers were informed by the organizers that any data presented should be considered public as security on the Zoom platform could not be ensured. Invited speakers had the opportunity to give permission to have their seminar recorded or to opt out of the recording. If a speaker gave permission to record, the entire seminar was recorded, and recordings were made available to all registered participants in a non-downloadable format through the University of Kentucky Zoom cloud for up to 48 h after the live session. Out of 16 speakers, 15 gave permission to record the live seminar.\n Advertisement and participants To reach scientists in the brain barriers community, the seminar was advertised by email to participants of previous in-person meetings, and on social media platforms such as Facebook and LinkedIn on personal profiles. In addition, the introductory flyer was kindly distributed via email by the International Brain Barriers Society (IBBS) using their email distribution list. Interested scientists were asked to register by sending their name, affiliation, and career stage to a dedicated email address created for the organization and execution of BBV2020 ([email protected]). Once registered, participants were added to a BBV2020 listserv provided by the University of Kentucky and received weekly updates. These weekly updates included a flyer with the information regarding the upcoming speakers and the specific zoom link for the seminar. Participants also received a follow up email after each seminar with a zoom link to the video recordings. With the permission of the speaker of the week, registered viewers had access to recordings for up to 48 h after the live seminar.\nTo reach scientists in the brain barriers community, the seminar was advertised by email to participants of previous in-person meetings, and on social media platforms such as Facebook and LinkedIn on personal profiles. 
In addition, the introductory flyer was kindly distributed via email by the International Brain Barriers Society (IBBS) using their email distribution list. Interested scientists were asked to register by sending their name, affiliation, and career stage to a dedicated email address created for the organization and execution of BBV2020 ([email protected]). Once registered, participants were added to a BBV2020 listserv provided by the University of Kentucky and received weekly updates. These weekly updates included a flyer with the information regarding the upcoming speakers and the specific zoom link for the seminar. Participants also received a follow up email after each seminar with a zoom link to the video recordings. With the permission of the speaker of the week, registered viewers had access to recordings for up to 48 h after the live seminar.\n BBV2020 seminar presentation Each seminar was started with opening remarks by one of the organizers (Drs. Bauer, Hartz, or Kim) followed by an introduction of the speaker by the moderator. Moderators (Drs. Hudson, Nehra, Pizzo, Storck) rotated each week, only one moderator was assigned to each week. After the seminar presentation, the moderator facilitated the question-and-answer session. Participants were encouraged to ask a live question by raising their digital hand. When a participant raised the digital hand, the moderator would unmute the participant at which point the participant was able to ask the speaker a question directly. In addition, participants had the opportunity to ask questions via the Zoom Q&A or the chat function. Written questions were then subsequently read by the moderator. The moderator concluded each seminar by thanking the speaker and the audience for attending and by announcing the speaker for the following week.\nEach seminar was started with opening remarks by one of the organizers (Drs. Bauer, Hartz, or Kim) followed by an introduction of the speaker by the moderator. Moderators (Drs. Hudson, Nehra, Pizzo, Storck) rotated each week, only one moderator was assigned to each week. After the seminar presentation, the moderator facilitated the question-and-answer session. Participants were encouraged to ask a live question by raising their digital hand. When a participant raised the digital hand, the moderator would unmute the participant at which point the participant was able to ask the speaker a question directly. In addition, participants had the opportunity to ask questions via the Zoom Q&A or the chat function. Written questions were then subsequently read by the moderator. The moderator concluded each seminar by thanking the speaker and the audience for attending and by announcing the speaker for the following week.\n Map generation World maps highlighting countries participating in the various meetings and representation were generated using MapChart (www.mapchart.net; license: https://creativecommons.org/licenses/by-sa/4.0/).\nWorld maps highlighting countries participating in the various meetings and representation were generated using MapChart (www.mapchart.net; license: https://creativecommons.org/licenses/by-sa/4.0/).\n Survey Toward the end of the seminar series, a survey was generated using Qualtrics (Qualtrics) and run through the University of Alabama Qualtrics system. A link to the survey was sent to all registered seminar participants by email. Questions covered general audience participation, career stage, as well as affective factors. 
Survey results are only reported in an aggregate.\nToward the end of the seminar series, a survey was generated using Qualtrics (Qualtrics) and run through the University of Alabama Qualtrics system. A link to the survey was sent to all registered seminar participants by email. Questions covered general audience participation, career stage, as well as affective factors. Survey results are only reported in an aggregate.\n Data collection, analysis, and statistics Survey data was collected through the University of Alabama Qualtrics system and was determined by the University of Alabama IRB to be “non-human subject research”. Data from the live webinars were collected by Zoom for each seminar and reported as unique viewers to eliminate counting individuals double who potentially logged on twice. Data on video views were collected from the University of Kentucky Zoom Webinar cloud server and were reported as total accessed views. Survey data from Qualtrics was analyzed with the IBM® SPSS® software platform (IBM, Armonk, NY, USA), using one-way ANOVAs and t-tests to identify significant trends [5]. Data with a p < 0.05 are considered significant.\nSurvey data was collected through the University of Alabama Qualtrics system and was determined by the University of Alabama IRB to be “non-human subject research”. Data from the live webinars were collected by Zoom for each seminar and reported as unique viewers to eliminate counting individuals double who potentially logged on twice. Data on video views were collected from the University of Kentucky Zoom Webinar cloud server and were reported as total accessed views. Survey data from Qualtrics was analyzed with the IBM® SPSS® software platform (IBM, Armonk, NY, USA), using one-way ANOVAs and t-tests to identify significant trends [5]. Data with a p < 0.05 are considered significant.", "The BBV2020 seminar series was created by Drs. Bjoern Bauer (University of Kentucky, Lexington, KY, USA), Anika Hartz (University of Kentucky, Lexington, KY, USA), and Brandon Kim (University of Alabama, Tuscaloosa, Alabama, USA). Seminar moderators were postdoctoral researchers and scientists from the USA and Europe: Drs. Natalie Hudson (Trinity College Dublin, Dublin, Ireland), Geetika Nehra (University of Kentucky, Lexington, KY, USA), Michelle Pizzo (Denali Therapeutics Inc., San Francisco, CA, USA), and Steffen Storck (University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany).", "BBV2020 was run on the Zoom Webinar 500 platform (Zoom Video Communications, Inc.; San Jose, CA, USA) that was funded by Dr. Bjoern Bauer using University of Kentucky funds. This particular Zoom Webinar license allowed for up to 500 live participants. The Zoom Webinar platform supports an in-session chat function, a Q&A forum, a raise hand function for live questions, and provides a panelist section with the option to share audio, video and presentation slides, a recording function and post-webinar data collection. The hosts and moderators were in control of audio and video; participants were not able to record or unmute during a session. Participants had no control function other than a “raise hand” option and the Q&A and chat function allowing to send questions or messages to the moderator. Technical assistance was provided by Todd Sizemore, College of Pharmacy, University of Kentucky.", "The BBV2020 seminar series ran from May 20 to September 2, 2020. 
Live seminars were held once per week from 12 to 1 PM Eastern Daylight Time (EDT) to accommodate researchers across a wide range of time zones: Central European Time (6:00 PM) to Pacific Daylight Time (9:00 AM). To accommodate colleagues with time constraints or those within incompatible time zones such as Asia or Oceania, presentations were recorded (depending on speaker permissions) and the seminar recording was made available for up to 48 h after the live seminar. The link for the recorded videos was distributed via the BBV2020 listserv that was set up through the University of Kentucky listserv.", "Efforts were made to invite speakers from a variety of countries and genders. Invited speakers included organizers of postponed in-person conferences, speakers from academia and industry from different career stages. A total of 16 speakers were invited: May 20, 2020: Elga De Vries, Amsterdam UMC, Amsterdam, The Netherlands; May 27, 2020: Richard Daneman, University of California San Diego, San Diego, CA, USA; June 3, 2020: Robyn Klein, Washington University, St. Louis, MO, USA; June 10, 2020: Ayal Ben-Zvi, The Hebrew University of Jerusalem, Israel; June 17, 2020: Britta Engelhardt, University of Bern, Bern, Switzerland; June 24, 2020: Margareta Hammarlund-Udenaes, Uppsala University, Uppsala, Sweden; July 1, 2020: Daniela Virgintino, University of Bari, Bari, Italy; July 8, 2020: Eric Shusta, University of Wisconsin, Madison, WI, USA; July 15, 2020: Matthew Campbell, Trinity College Dublin, Dublin, Ireland; July 22, 2020: Krzysztof Kucharz, University of Copenhagen, Copenhagen, Denmark; July 29, 2020: William Elmquist, University of Minnesota, Minneapolis, MN, USA; August 5, 2020: Edward Neuwelt, Oregon Health & Sciences University, OR, USA; August 12, 2020: Robert Thorne, Denali Therapeutics, San Francisco, CA, USA; August 19, 2020: Teresa Sanchez, Weill Cornell Medical College, New York City, NJ, USA; August 26, 2020: Michael Taylor, University of Wisconsin, Madison, MI, USA; September 2, 2020: Joan Abbott, King’s College London, London, UK.\nSpeakers were offered a training session by the organizers to cover key aspects of a Zoom-based webinar and to test video, audio, and slide presentation prior to the live seminar. Ten out of 16 speakers took the training session. Speakers were informed by the organizers that any data presented should be considered public as security on the Zoom platform could not be ensured. Invited speakers had the opportunity to give permission to have their seminar recorded or to opt out of the recording. If a speaker gave permission to record, the entire seminar was recorded, and recordings were made available to all registered participants in a non-downloadable format through the University of Kentucky Zoom cloud for up to 48 h after the live session. Out of 16 speakers, 15 gave permission to record the live seminar.", "To reach scientists in the brain barriers community, the seminar was advertised by email to participants of previous in-person meetings, and on social media platforms such as Facebook and LinkedIn on personal profiles. In addition, the introductory flyer was kindly distributed via email by the International Brain Barriers Society (IBBS) using their email distribution list. Interested scientists were asked to register by sending their name, affiliation, and career stage to a dedicated email address created for the organization and execution of BBV2020 ([email protected]). 
Once registered, participants were added to a BBV2020 listserv provided by the University of Kentucky and received weekly updates. These weekly updates included a flyer with the information regarding the upcoming speakers and the specific zoom link for the seminar. Participants also received a follow up email after each seminar with a zoom link to the video recordings. With the permission of the speaker of the week, registered viewers had access to recordings for up to 48 h after the live seminar.", "Each seminar was started with opening remarks by one of the organizers (Drs. Bauer, Hartz, or Kim) followed by an introduction of the speaker by the moderator. Moderators (Drs. Hudson, Nehra, Pizzo, Storck) rotated each week, only one moderator was assigned to each week. After the seminar presentation, the moderator facilitated the question-and-answer session. Participants were encouraged to ask a live question by raising their digital hand. When a participant raised the digital hand, the moderator would unmute the participant at which point the participant was able to ask the speaker a question directly. In addition, participants had the opportunity to ask questions via the Zoom Q&A or the chat function. Written questions were then subsequently read by the moderator. The moderator concluded each seminar by thanking the speaker and the audience for attending and by announcing the speaker for the following week.", "World maps highlighting countries participating in the various meetings and representation were generated using MapChart (www.mapchart.net; license: https://creativecommons.org/licenses/by-sa/4.0/).", "Toward the end of the seminar series, a survey was generated using Qualtrics (Qualtrics) and run through the University of Alabama Qualtrics system. A link to the survey was sent to all registered seminar participants by email. Questions covered general audience participation, career stage, as well as affective factors. Survey results are only reported in an aggregate.", "Survey data was collected through the University of Alabama Qualtrics system and was determined by the University of Alabama IRB to be “non-human subject research”. Data from the live webinars were collected by Zoom for each seminar and reported as unique viewers to eliminate counting individuals double who potentially logged on twice. Data on video views were collected from the University of Kentucky Zoom Webinar cloud server and were reported as total accessed views. Survey data from Qualtrics was analyzed with the IBM® SPSS® software platform (IBM, Armonk, NY, USA), using one-way ANOVAs and t-tests to identify significant trends [5]. Data with a p < 0.05 are considered significant.", "On May 14, 2020, we sent out the first advertisement for the BBV2020 seminar series via email using a listserv with email addresses of past conference attendees and via postings using LinkedIn, Facebook, and Twitter. By the start of the first seminar on May 20, 2020, more than 720 scientists had registered. By the end of the 16-week long BBV2020 seminar series, a total of 1302 registrants signed up on our listserv to receive the seminar link, speaker updates, and video recording links (Fig. 1A). We obtained data from two past in-person meetings from the respective conference chairs: (1) brain barriers meeting in 2018 (BBB 2018) and the (2) Cerebral Vascular Biology meeting in 2019 (CVB 2019).\n\nFig. 1\nAttendance Trends for Brain Barrier Research Conferences. 
A With 1,302 individuals signing up and added to the listserv, BBV2020 attracted more registrants than BBB 2018 and CVB 2019. B Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020. C Number of views (clicks) of the video recordings when available weekly for BBV2020. Black line indicates best linear fit. Gray line indicates the average (mean) for all 16 weeks\n\nAttendance Trends for Brain Barrier Research Conferences. A With 1,302 individuals signing up and added to the listserv, BBV2020 attracted more registrants than BBB 2018 and CVB 2019. B Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020. C Number of views (clicks) of the video recordings when available weekly for BBV2020. Black line indicates best linear fit. Gray line indicates the average (mean) for all 16 weeks\nStrikingly, the total number of registered applicants for the BBV2020 was more than 4-fold higher compared to the total number of registered participants for the 2018 and the CVB 2019 meetings (Fig. 1A). The large number of registrants is likely due to free registration, no associated travel expenses, a world-wide reach, and that virtual meetings come with no obligations for the attendees. In contrast, the number of attendees of the in-person meetings (BBB 2018 and CVB2019) was limited and determined by the size of the conference site.\nThe first live BBV2020 seminar was attended by 419 participants. As the seminar series progressed, the participant number during live seminars gradually decreased, the average participant number was 220, the minimum was 108 (Fig. 1B). A similar trend was observed in the numbers of total recording views: the first seminar recording had a maximum of 944 views; in average, videos had 422 views (Fig. 1C). The trendline in Fig. 1C shows a drop in recording views over the course of the seminar series. These trends held regardless of academic rank as graduate students, postdoctoral fellows, and professors all showed a similar downward trend in live attendance (Additional file 1: Fig. S1A–C). Note that we were unable to capture recording data for week 2 due to a recording error and for week 13 due to the request by the speaker to not record.\nTogether, the high number of registrants suggests that the virtual BBV2020 seminar series sparked interest. Attendance of the live seminars and the numbers of views for the recorded videos decreased over time, which may be an indicator of “virtual meeting fatigue” or attendees being on their summer vacation.", "By the end of the seminar series in September, every BBV2020 registrant received an invitation to participate in an online survey created by the organizers on August 23, 2020. 
The survey was designed for registrants to provide feedback and consisted of 10 questions: (1) demographic question to capture career level and career type; (2) single select multiple choice question to capture attendance of past in-person brain barrier meetings, (3) single select multiple choice question on how many recorded seminars were viewed, (4) Likert scale question regarding the enrichment BBV2020 provided for the community during the COVID-19 pandemic, (5) dichotomous question asking if the registrant would still participate in BBV2020 seminars if they were continued, (6) single select multiple choice question on attendance frequency, (7) single select multiple choice question on how many times the registrant has thus far attended a brain barriers meeting previously in person, (8) matrix question to capture affective factors, (9) matrix question to capture learning and teaching, and (10) matrix question to capture combined validity and practicality of the BBV2020 series. In addition, registrants responding to the survey had the opportunity to include general comments to the organizers in a separate text box.\nOverall, 23% of registrants (total of 299 registrants) responded to the survey. Over 89% of survey respondents agreed strongly that “Attending BBV2020 was a positive experience” and 63% of responders felt that “BBV2020 is appropriate for my research”. Generally, the respondents tended to agree with the following statement: “BBV2020 has enriched the connection with the Brain Barriers community in the gap that COVID-19 has created” (n = 298 respondents to this question of a total of 299 registrants, M = 1.44, SD = 0.660, 1 = Strongly Agree, 2 = Agree, 3 = Neutral, 4 = Disagree, 5 = Strongly Disagree; Fig. 2A). However, when posed with the statements “In the future, I would prefer to attend a virtual conference over an in-person conference” or “The virtual experience is not as effective compared to the in-person conferences”, responses were neutral (M = 3.24; Fig. 2B) indicating a positive view towards a virtual conference. A difference emerged when we compared BBV2020 participants based on their career level with their preference for in-person versus virtual meeting formats: using one-way ANOVA, we found a significant difference between groups (F(9286) = 2.607, p = 0.007). Using Tukey’s HSD post hoc test indicated that Master’s students were more likely to disagree with the following statement when compared to Faculty/Professors: “I would feel more comfortable attending Brain Barrier conferences/seminars in person, but was glad to attend virtually due to COVID19.” A strong trend emerged when Master’s and Ph.D. students were grouped together and compared to Post-Docs and Faculty/Professors in an independent-samples t test. Results show graduate students in general were more favorable to a virtual conference compared to colleagues more established in their careers (t(225) = -1.972, p = 0.061). The lack of significance is in part due to the low sample size (n = 74) for graduate students.\n\nFig. 2\nPost-seminar survey results. A Volunteer respondent result when posed with the prompt “BBV2020 has enriched connection with the Brain Barriers community in the gap that COVID-19 has created”. B Volunteer respondent results when posted with the prompts “In the future, I would prefer to attend a virtual conference over an in-person conference” (Blue) and “The virtual experience is not as effective compared to the in-person conferences” (Red)\n\nPost-seminar survey results. 
A Volunteer respondent result when posed with the prompt “BBV2020 has enriched connection with the Brain Barriers community in the gap that COVID-19 has created”. B Volunteer respondent results when posted with the prompts “In the future, I would prefer to attend a virtual conference over an in-person conference” (Blue) and “The virtual experience is not as effective compared to the in-person conferences” (Red)\nOverall, descriptive data revealed many participants felt BBV2020 was a positive experience. In fact, 99.3% of survey respondents noted they somewhat or strongly agree that their experience was positive, with only 4.7% saying they faced technical difficulties. Additionally, most respondents agreed that virtual conferences have an important role in research (93.2%) and are more accessible than in-person conferences (90.2%). Finally, more than half of respondents (56.9%) noted virtual conferences can do things in-person conferences cannot.\nNevertheless, there continued to be a desire among many BBV2020 participants to return to conferencing in person, with 13.5% saying they found it hard to focus in the virtual format. In fact, roughly one-third (33.9%) felt the virtual conference was not as effective as an in-person conference, and only one-quarter (23.3%) said they would prefer virtual conferences to in-person conferences in the future, while 35.8% were neutral.\nTogether, our findings suggest that while BBV2020 filled a need created by a global pandemic, the enthusiasm for returning to only in-person meetings remained relatively neutral suggesting that virtual platforms may play a critical role in dissemination of research and science in the future. Moreover, the data from this study show a trend between graduate students versus professors in their response with regard to in-person meetings indicating that career stage may influence the perspective of the usefulness of in-person versus virtual platforms.", "During the seminar registration process we collected attendants’ affiliations and based on this information we determined the global reach of the BBV2020. Registrants of the BBV2020 seminar series were from 43 countries representing six of the seven continents (Fig. 3A, B). Compared to the in-person BBB 2018 and the CVB2019, more countries were represented at the BBV2020 (Fig. 3A).\n\nFig. 3\nGlobal reach of BBV2020. A BBV2020 had more countries represented compared to BBB 2018 and CVB 2019 alone. B World map showing countries represented by BBV2020. Countries where registrants participated in BBV2020: Red and Pink. Countries not represented at BBV2020: Light Gray and Dark Gray. Countries where registrants attended BBV2020 and at least one of the in-person meetings between Meeting 2018 or CVB 2019: Red. Countries where registrants attended BBV2020 and did not attend either BBB 2018 or CVB 2019: Pink. Countries that had no participation in any of the brain barriers meetings examined in this study: Light gray. Countries that had participation in either BBB 2018 or CVB 2019 that did not participate in BBV2020: Dark Gray\n\nGlobal reach of BBV2020. A BBV2020 had more countries represented compared to BBB 2018 and CVB 2019 alone. B World map showing countries represented by BBV2020. Countries where registrants participated in BBV2020: Red and Pink. Countries not represented at BBV2020: Light Gray and Dark Gray. Countries where registrants attended BBV2020 and at least one of the in-person meetings between Meeting 2018 or CVB 2019: Red. 
Countries where registrants attended BBV2020 and did not attend either BBB 2018 or CVB 2019: Pink. Countries that had no participation in any of the brain barriers meetings examined in this study: Light gray. Countries that had participation in either BBB 2018 or CVB 2019 that did not participate in BBV2020: Dark Gray\nWe also compared the number of participants from each country who joined BBV2020 vs. BBB 2018 and CVB 2019. We found that the number of participants per country worldwide was also higher compared to both in person meetings (Fig. 4A–F). These data show that the virtual nature of BBV2020 enabled researchers world-wide to participate and suggest that the outreach of a virtual event can extend beyond the field.\n\nFig. 4\nGlobal view of participation over BBV2020, BBB 2018, and CVB 2019. Representation of participants for (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Ordered from top to bottom in greatest number of registrants. Number of registrants scaled by color based on number of registrants per country for (D) BBV 2020, (E) BBB 2018, and (F) CVB 2019. Light Pink: 1–10 registrants; Dark Pink: 11–50 registrants; Red: 51–100 registrants and Dark Red: > 100 registrants\n\nGlobal view of participation over BBV2020, BBB 2018, and CVB 2019. Representation of participants for (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Ordered from top to bottom in greatest number of registrants. Number of registrants scaled by color based on number of registrants per country for (D) BBV 2020, (E) BBB 2018, and (F) CVB 2019. Light Pink: 1–10 registrants; Dark Pink: 11–50 registrants; Red: 51–100 registrants and Dark Red: > 100 registrants", "We examined the self-reported career stage of BBV2020 participants to that of BBB 2018 and CVB 2019 participants. Based on the survey, more than 46% of all participants attended a brain barriers conference/seminar for the first time. We also found that in-person meetings are attended by 42.4–48.9% academicians at the professor rank, graduate students make up 22.9–25.8% and postdoctoral fellows are 6.6–17.7% of the participants (Fig. 5). In contrast, at the BBV2020 19.6% of academicians were at the professor rank, graduate students made up 30.5% and postdoctoral fellows 16.1% of attendees. We also found that BBV2020 had a larger participation from industry (18.5% vs. 5.6–6.6%), and undergraduate students (3.2% vs. 1%) compared to both in person meetings. Together, our data suggest that a virtual platform seminar like BBV2020 reaches a broader range of scientific ranks and careers.\n\nFig. 5\nProportions of various career stages participating in BBV2020, BBB 2018, and CVB 2019. Proportions of various career stages participating in (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Colors correspond to career stage: Red (professor any rank), Green (graduate student any rank), Dark Blue (research faculty or researcher), Yellow (industry scientist), Orange (postdoctoral fellow), Purple (government/regulatory), Light Blue (clinician), Green (undergraduate student)\n\nProportions of various career stages participating in BBV2020, BBB 2018, and CVB 2019. Proportions of various career stages participating in (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. 
Colors correspond to career stage: Red (professor any rank), Green (graduate student any rank), Dark Blue (research faculty or researcher), Yellow (industry scientist), Orange (postdoctoral fellow), Purple (government/regulatory), Light Blue (clinician), Green (undergraduate student)", "Due to the pandemic, the idea and subsequent organization of the BBV2020 occurred in a relatively short time frame of about 4 weeks. This is in stark contrast to in-person meetings that are often scheduled and organized months and years ahead of the actual event. Advertisement of the BBV2020 encompassed personal email, postings on social media, and utilization of the International Brain Barriers Society email distribution list. Continuous advertisement resulted in the registration of over 1300 scientists. The total registrant number cannot be directly compared to that of in-person meetings since those are more complex in nature, have maximum capacity limits, and involve other factors such as travel expenses and time commitment. Similar to other meetings which transitioned to online formats during the pandemic, BBV2020 had a large increase in registrants when compared to previous in-person meetings (~7-fold and 4.5-fold higher compared to 2018 and 2019, respectively; Fig. 1; [8]. Given that previous brain barriers meetings usually have a few hundred participants, the high number of registrants could be an indicator for high interest in the field beyond traditional brain barrier research groups. In contrast, these high registrant numbers could also be due, in part, to BBV2020 being a no-cost event or could be explained by pandemic-related cancellations resulting in relatively event-free calendar of the participants. While there were more than 1,300 registrants for BBV2020, no more than 419 ever attended a live webinar. Additionally, 323 people registered for BBV2020 but never attended a live webinar. They may, however, have participated by watching the video recordings after the live webinar. A downward trend of attendance numbers for live seminars was observed throughout the course of the 16-week series (Fig. 1B). The recorded seminars drew considerable interest initially with a maximum of 944 views in week 4 of the series, but later in the series, video views dropped to about 200 per seminar (Fig. 1C). Reasons for these downward trends remain speculative, but could be due to the summer break, the selected time for the seminar making it difficult for researchers in certain time zones to attend live, zoom fatigue, or increasing responsibilities in re-opened research laboratories or at home. The downward trends we observed was in contrast to a similar virtual seminar series in another field that remained relatively flat over the course of the summer [4]. Given that average attendance of the BBV2020 was 220 per seminar and typical attendance for in-person conferences in the brain barriers field range between 150 and 300 participants (caveat: one of the in-person conferences has a strict 182-person maximum limit), we conclude that a core group of participants attended most sessions.", "A survey conducted after the seminar series revealed that most participants felt that BBV2020 enhanced the connection to the brain barriers scientific community during the COVID-19 pandemic (Fig. 2A). Differences emerged between how graduate students and professors viewed BBV2020: graduate students in general were more favorable of the virtual format compared to faculty. 
This may be partially explained by a lack of funding for graduate student travel or political “red tape” (e.g. visas) for some researchers [9]. Additionally, students may disproportionately adapt more easily and rapidly to technology usage compared to senior colleagues [10]. Clearly, another survey would be needed to determine if a virtual component outside of a pandemic would be positively viewed by researchers in the brain barriers field.", "One major advantage of a virtual seminar series such as the BBV2020 is the lack of a travel burden. Taking off the financial and time-commitment associated with in-person meetings has been shown to increase the diversity of attendees [3, 9, 11]. Based on those findings, we anticipated that BBV2020 would have representation from more countries than past in-person meetings. As shown in Fig. 3, registrants from 43 different countries were represented at BBV2020, which is 40% more compared to BBB 2018 (23 countries) and CVB 2019 (25 countries; Fig. 3). In particular, there was high participation from countries in Southeast Asia, Middle East, Africa, and South America that had not been represented at in-person conferences held in 2018 and 2019 (Figs. 3B, 4). Unsurprisingly however, considering countries represented at previous in-person meetings and due to the number of registrants for BBV2020, there was generally a greater participation from each country represented than in 2018 meeting and CVB 2019 [3, 8]. Due to the time of the BBV2020 live sessions (noon to 1 pm Eastern US Time), the majority of participants were from North America, South America, and Europe (Fig. 4). While time zone challenges are common among virtual events, switching between time zones or holding a “flipped” session (flipping evening sessions from Europe to the USA and vice versa) could increase participation from Asia and Oceania [12]. However, offering a virtual series that constantly switches the seminar time may be difficult to manage and could be confusing for participants. Another strength of virtual seminars is that they are more accessible and inclusive especially for junior researchers in the field. The in-person meetings in 2018 and 2019 were dominated by academic faculty at any professor rank and graduate students (Fig. 5). In contrast, BBV2020 saw a large reduction in the percentage of participating faculty (over 40% in person to less than 20% for BBV2020) and a larger proportion of graduate students (22–25% in person to over 30% for BBV2020), industry scientists (about 6% in person to 18.5% for BBV2020), and governmental/regulatory personnel who had signed up (Fig. 5). In addition, a weekly virtual seminar series allows scientist to pick-and-choose the sessions they are interested in, a choice that could interfere with the observed trends shown in Fig. 1. Our data further support the trends observed across a variety of disciplines who have shifted their conferences from in-person to online events, revealing that virtual conferences are more flexible, and more inclusive and accessible worldwide, especially for early-career scientists [9]. Virtual seminars also provide an opportunity for the organizers to invite prominent speakers of a field on a relatively small budget. In general, virtual conferences are known to enhance the accessibility for busy professionals. A recently published Nature poll indicates that 74% of scientists want virtual meetings to stay after the pandemic (925 poll responses) [13]. 
Easier accessibility, lower carbon footprint, and lower costs were the driving factors chosen in favor for virtual meetings in this poll. Moreover, virtual seminars provide an easy and affordable opportunity for researchers with disabilities, researchers with high teaching loads, or for scientists with parenting and other responsibilities to stay connected to their respective research community. During the COVID-19 pandemic, virtual seminars took center stage and raised the standard for accessibility, interactions, inclusivity, and equity in science [9]. Virtual events changed how we think about meetings and they constitute a yet underestimated, but powerful tool to share science and connect with each other. By rethinking how meetings are held, various fields have had different degrees of participation depending on the availability of the talks or content versus live sessions. These virtual platforms have the potential to create a “connectivity cascade” reaching more people than traditional meetings that are confined by a single geographical location and time [14]. BBV2020 had an impressive reach worldwide, however, there was still a notable deficit in participants from underdeveloped countries, especially Africa. These discrepancies have been noted in the context of expanding telehealth to African countries that still remain challenging due to monopolization of telecommunication infrastructure, political will, and access to adequate broadband efficiency [15]. Despite the advantages of BBV2020, we recognize that limitations remain, such as one-way delivery of material to participants. Lack of critical face-to-face interactions allowing junior scientists to have meaningful two-way communication with other participants was notably lacking in BBV2020 as seen in other virtual conferences as well [16]. The future of such virtual seminars, meetings, or conferences may lie in their inclusion as part of in-person meetings according to a recently published “best practice” guideline for virtual meetings [17]. For example, a virtual component could be included in plenary sessions, workshops, small group discussions, poster sessions, and/or social events of in-person conferences. BBV2020 represented only one aspect (a seminar held in the form of a webinar) of these common meeting events, but future events could certainly enhance the virtual experience. Regardless, virtual components could be exploited to increase diversity in the brain barriers research field which is consistent with the NINDS strategy for enhancing the diversity of neuroscience researchers [18].", "Overall, the Virtual BBV2020 Seminar Series created during the COVID-19 pandemic was positively received by the brain barriers research community. Key benefits of virtual seminars like the BBV2020 are that they are convenient, affordable, eco-friendly, reach a broad audience, and are a good tool to increase diversity and equity. Thus, incorporating a virtual component during an in-person meeting or offering a virtual counterpart in addition to in-person meetings could help increase the outreach of a small research field like the brain barriers field. Nevertheless, limitations remain such as a lack of interaction between speakers and participants, which is particularly critical for junior researchers. And although BBV2020 expanded the reach beyond past in-person meetings, challenges reaching scientists in underdeveloped countries remain. 
Together, BBV2020 was created in a time of crisis but has the potential to thrive in the post-pandemic world. Embracing virtual scientific events may allow us to advance the brain barriers research field and have an impact on how we exchange science in the future." ]
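As described in the Methods and survey analysis above, group differences were tested with one-way ANOVAs, Tukey's HSD post hoc comparisons, and independent-samples t tests in SPSS. A minimal sketch of an equivalent analysis in Python, assuming a hypothetical table with a career-level column and a 1–5 Likert response column (none of the data or column names below come from the actual survey):

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical survey extract: career level and a 1-5 Likert response
# (1 = Strongly Agree ... 5 = Strongly Disagree), not the real BBV2020 data.
df = pd.DataFrame({
    "career_level": ["Masters", "PhD", "PostDoc", "Faculty"] * 10,
    "likert":       [2, 2, 3, 3, 1, 2, 3, 4, 2, 3] * 4,
})

# One-way ANOVA across career levels.
groups = [g["likert"].values for _, g in df.groupby("career_level")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_val:.3f}")

# Tukey's HSD post hoc pairwise comparisons.
print(pairwise_tukeyhsd(df["likert"], df["career_level"], alpha=0.05))

# Independent-samples t test: graduate students (Masters + PhD) vs. more
# established researchers (PostDoc + Faculty).
students = df[df["career_level"].isin(["Masters", "PhD"])]["likert"]
seniors = df[~df["career_level"].isin(["Masters", "PhD"])]["likert"]
t_stat, p_val = stats.ttest_ind(students, seniors)
print(f"t test: t = {t_stat:.3f}, p = {p_val:.3f}")
```

Grouping Master's and Ph.D. respondents before the t test mirrors the student-versus-senior comparison reported in the survey analysis above.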
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Organization", "Virtual platform and technical assistance", "BBV2020 schedule", "Invited speakers", "Advertisement and participants", "BBV2020 seminar presentation", "Map generation", "Survey", "Data collection, analysis, and statistics", "Results", "BBV2020 attendance", "BBV2020 survey analysis", "Global reach of BBV2020", "Career stages represented at BBV2020", "Discussion", "Attendance: scientists around the globe", "Survey: feedback from participants", "Virtual seminars: a powerful opportunity", "Conclusion and future perspectives", "Supplementary Information" ]
[ "On March 11, 2020, the World Health Organization (WHO) declared the coronavirus disease (COVID-19) outbreak a global pandemic [1]. Shortly after, countries worldwide issued stay-at-home orders and lockdowns: the COVID-19 pandemic had forced the world to come to an abrupt halt [2]. With global air travel restrictions and travel bans in place, enterprises including academic institutions, industry and government agencies required individuals to call off their business travel. As a result, scheduled scientific conferences were canceled throughout 2020 [3, 4]. These cancelations created a sudden void in scientific communication between researchers worldwide and forced research communities to be creative and adapt. One possibility to continue scientific communication and interaction was by switching conferences and seminars to virtual online events using novel digital platforms enabling researchers to connect and share science [3, 4].\nAs a consequence, major conferences and meetings in the brain barriers research field, including the “Barriers of the CNS Gordon Research Conference” (GRC 2020) and the international symposium on “Signal Transduction at the Blood–Brain Barriers” in Bari, Italy were first postponed until 2021 and due to the ongoing pandemic subsequently moved to 2022. The “Cerebral Vascular Biology” (CVB) conference in Uppsala, Sweden planned for 2021 was postponed until 2023. Postponing these meetings left organizers and conference chairs in a difficult position and researchers without a platform to present and share their data. Thus, these postponements created a critical need for the field to stay connected and an opportunity to establish a virtual event that could supplement the postponed conferences in 2020. The Brain Barriers Virtual 2020 Seminar Series (BBV2020) addressed this need by providing a forum for brain barrier scientists worldwide to come together online during the COVID-19 crisis in 2020. BBV2020 seminars were held weekly from May 20, 2020 to September 2, 2020 in a 1-h format featuring a live session with an invited speaker followed by time for questions and answers from the community. The seminar series was hosted on a Zoom webinar platform and was free for anyone to attend. BBV2020 served as a forum for invited speakers to present their work to the brain barriers community and enabled this community to discuss brain barriers science from the safety of their home office. Over 1300 scientists around the world were registered, each seminar was viewed live by 100–400 participants, and the video-recordings received 150–900 views each week. Participants were from academia, industry and government agencies at all career stages from around the globe.\nIn this report, we share participation data that give insight into the impact this virtual seminar series had on the brain barriers field during the COVID-19 pandemic. We compare these data with existing data from past in-person brain barriers meetings and draw conclusions from these data on the potential value of virtual meetings for the brain barriers research field in the future. While the organization of virtual events during the pandemic may seem self-evident, reflection on the impact carries utility for events in the future.", " Organization The BBV2020 seminar series was created by Drs. Bjoern Bauer (University of Kentucky, Lexington, KY, USA), Anika Hartz (University of Kentucky, Lexington, KY, USA), and Brandon Kim (University of Alabama, Tuscaloosa, Alabama, USA). 
Seminar moderators were postdoctoral researchers and scientists from the USA and Europe: Drs. Natalie Hudson (Trinity College Dublin, Dublin, Ireland), Geetika Nehra (University of Kentucky, Lexington, KY, USA), Michelle Pizzo (Denali Therapeutics Inc., San Francisco, CA, USA), and Steffen Storck (University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany).\nThe BBV2020 seminar series was created by Drs. Bjoern Bauer (University of Kentucky, Lexington, KY, USA), Anika Hartz (University of Kentucky, Lexington, KY, USA), and Brandon Kim (University of Alabama, Tuscaloosa, Alabama, USA). Seminar moderators were postdoctoral researchers and scientists from the USA and Europe: Drs. Natalie Hudson (Trinity College Dublin, Dublin, Ireland), Geetika Nehra (University of Kentucky, Lexington, KY, USA), Michelle Pizzo (Denali Therapeutics Inc., San Francisco, CA, USA), and Steffen Storck (University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany).\n Virtual platform and technical assistance BBV2020 was run on the Zoom Webinar 500 platform (Zoom Video Communications, Inc.; San Jose, CA, USA) that was funded by Dr. Bjoern Bauer using University of Kentucky funds. This particular Zoom Webinar license allowed for up to 500 live participants. The Zoom Webinar platform supports an in-session chat function, a Q&A forum, a raise hand function for live questions, and provides a panelist section with the option to share audio, video and presentation slides, a recording function and post-webinar data collection. The hosts and moderators were in control of audio and video; participants were not able to record or unmute during a session. Participants had no control function other than a “raise hand” option and the Q&A and chat function allowing to send questions or messages to the moderator. Technical assistance was provided by Todd Sizemore, College of Pharmacy, University of Kentucky.\nBBV2020 was run on the Zoom Webinar 500 platform (Zoom Video Communications, Inc.; San Jose, CA, USA) that was funded by Dr. Bjoern Bauer using University of Kentucky funds. This particular Zoom Webinar license allowed for up to 500 live participants. The Zoom Webinar platform supports an in-session chat function, a Q&A forum, a raise hand function for live questions, and provides a panelist section with the option to share audio, video and presentation slides, a recording function and post-webinar data collection. The hosts and moderators were in control of audio and video; participants were not able to record or unmute during a session. Participants had no control function other than a “raise hand” option and the Q&A and chat function allowing to send questions or messages to the moderator. Technical assistance was provided by Todd Sizemore, College of Pharmacy, University of Kentucky.\n BBV2020 schedule The BBV2020 seminar series ran from May 20 to September 2, 2020. Live seminars were held once per week from 12 to 1 PM Eastern Daylight Time (EDT) to accommodate researchers across a wide range of time zones: Central European Time (6:00 PM) to Pacific Daylight Time (9:00 AM). To accommodate colleagues with time constraints or those within incompatible time zones such as Asia or Oceania, presentations were recorded (depending on speaker permissions) and the seminar recording was made available for up to 48 h after the live seminar. 
The link to the recorded videos was distributed via the BBV2020 listserv, which was set up through the University of Kentucky.

Invited speakers
Efforts were made to invite speakers of different genders and from a variety of countries. Invited speakers included organizers of postponed in-person conferences as well as speakers from academia and industry at different career stages. A total of 16 speakers were invited: May 20, 2020: Elga De Vries, Amsterdam UMC, Amsterdam, The Netherlands; May 27, 2020: Richard Daneman, University of California San Diego, San Diego, CA, USA; June 3, 2020: Robyn Klein, Washington University, St. Louis, MO, USA; June 10, 2020: Ayal Ben-Zvi, The Hebrew University of Jerusalem, Israel; June 17, 2020: Britta Engelhardt, University of Bern, Bern, Switzerland; June 24, 2020: Margareta Hammarlund-Udenaes, Uppsala University, Uppsala, Sweden; July 1, 2020: Daniela Virgintino, University of Bari, Bari, Italy; July 8, 2020: Eric Shusta, University of Wisconsin, Madison, WI, USA; July 15, 2020: Matthew Campbell, Trinity College Dublin, Dublin, Ireland; July 22, 2020: Krzysztof Kucharz, University of Copenhagen, Copenhagen, Denmark; July 29, 2020: William Elmquist, University of Minnesota, Minneapolis, MN, USA; August 5, 2020: Edward Neuwelt, Oregon Health & Science University, OR, USA; August 12, 2020: Robert Thorne, Denali Therapeutics, San Francisco, CA, USA; August 19, 2020: Teresa Sanchez, Weill Cornell Medical College, New York City, NY, USA; August 26, 2020: Michael Taylor, University of Wisconsin, Madison, WI, USA; September 2, 2020: Joan Abbott, King’s College London, London, UK.

Speakers were offered a training session by the organizers to cover key aspects of a Zoom-based webinar and to test video, audio, and slide presentation prior to the live seminar. Ten of the 16 speakers took the training session. Speakers were informed by the organizers that any data presented should be considered public, as security on the Zoom platform could not be ensured. Invited speakers had the opportunity to give permission to have their seminar recorded or to opt out of the recording. If a speaker gave permission to record, the entire seminar was recorded, and recordings were made available to all registered participants in a non-downloadable format through the University of Kentucky Zoom cloud for up to 48 h after the live session. Of the 16 speakers, 15 gave permission to record the live seminar.

Advertisement and participants
To reach scientists in the brain barriers community, the seminar series was advertised by email to participants of previous in-person meetings and on social media platforms such as Facebook and LinkedIn via personal profiles.
In addition, the introductory flyer was kindly distributed via email by the International Brain Barriers Society (IBBS) using their email distribution list. Interested scientists were asked to register by sending their name, affiliation, and career stage to a dedicated email address created for the organization and execution of BBV2020 ([email protected]). Once registered, participants were added to a BBV2020 listserv provided by the University of Kentucky and received weekly updates. These weekly updates included a flyer with information on the upcoming speaker and the specific Zoom link for the seminar. Participants also received a follow-up email after each seminar with a Zoom link to the video recording. With the permission of the speaker of the week, registered viewers had access to the recordings for up to 48 h after the live seminar.

BBV2020 seminar presentation
Each seminar started with opening remarks by one of the organizers (Drs. Bauer, Hartz, or Kim), followed by an introduction of the speaker by the moderator. Moderators (Drs. Hudson, Nehra, Pizzo, and Storck) rotated each week; only one moderator was assigned per week. After the seminar presentation, the moderator facilitated the question-and-answer session. Participants were encouraged to ask live questions by raising their digital hand. When a participant raised the digital hand, the moderator would unmute the participant, at which point the participant was able to ask the speaker a question directly. In addition, participants had the opportunity to ask questions via the Zoom Q&A or chat function; written questions were subsequently read aloud by the moderator. The moderator concluded each seminar by thanking the speaker and the audience for attending and by announcing the speaker for the following week.

Map generation
World maps highlighting the countries participating in the various meetings and their representation were generated using MapChart (www.mapchart.net; license: https://creativecommons.org/licenses/by-sa/4.0/).
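For readers who prefer a scriptable alternative to the MapChart web tool, a comparable country-participation map can be generated programmatically. The sketch below is an illustration only and was not part of the BBV2020 workflow; the file registrants_by_country.csv and its columns are hypothetical placeholders, and plotly is used here simply as one common choice for choropleth maps.

```python
# Illustrative sketch only; BBV2020 maps were made with the MapChart web tool.
# Hypothetical input: registrants_by_country.csv with columns iso_alpha (ISO-3 code), registrants (int).
import pandas as pd
import plotly.express as px

df = pd.read_csv("registrants_by_country.csv")

# Bin counts into the categories used in Fig. 4 (1-10, 11-50, 51-100, >100 registrants).
bins = [0, 10, 50, 100, float("inf")]
labels = ["1-10", "11-50", "51-100", ">100"]
df["category"] = pd.cut(df["registrants"], bins=bins, labels=labels).astype(str)

fig = px.choropleth(
    df,
    locations="iso_alpha",                       # ISO-3 country codes
    color="category",
    category_orders={"category": labels},
    color_discrete_sequence=["#f9c6d3", "#e05c8a", "#d62728", "#7f0000"],
    title="BBV2020 registrants per country",
)
fig.write_html("bbv2020_registrants_map.html")   # interactive map for sharing
```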
Survey
Toward the end of the seminar series, a survey was generated using Qualtrics and run through the University of Alabama Qualtrics system. A link to the survey was sent to all registered seminar participants by email. Questions covered general audience participation, career stage, and affective factors. Survey results are reported only in aggregate.

Data collection, analysis, and statistics
Survey data were collected through the University of Alabama Qualtrics system; the study was determined by the University of Alabama IRB to be “non-human subject research”. Data from the live webinars were collected by Zoom for each seminar and reported as unique viewers to avoid double-counting individuals who potentially logged on twice. Data on video views were collected from the University of Kentucky Zoom Webinar cloud server and are reported as total accessed views. Survey data from Qualtrics were analyzed with the IBM SPSS software platform (IBM, Armonk, NY, USA), using one-way ANOVAs and t-tests to identify significant trends [5]. Data with p < 0.05 were considered significant.
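The survey analysis described above was carried out in IBM SPSS. Purely as an illustration of the same approach, the sketch below runs a one-way ANOVA across career levels and an independent-samples t-test between two grouped career stages using SciPy; the file survey.csv and the column names career_level and likert_response are hypothetical placeholders rather than the actual Qualtrics export.

```python
# Illustrative sketch only; the published analysis was performed in IBM SPSS.
# Hypothetical input: survey.csv with columns career_level (str) and likert_response (1-5).
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv").dropna(subset=["career_level", "likert_response"])

# One-way ANOVA across career levels (analogous to the F-test reported in the Results).
groups = [g["likert_response"].to_numpy() for _, g in df.groupby("career_level")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.3f}")

# Independent-samples t-test: graduate students vs. postdocs and faculty.
students = df.loc[df["career_level"].isin(["Master's student", "Ph.D. student"]), "likert_response"]
established = df.loc[df["career_level"].isin(["Postdoc", "Faculty/Professor"]), "likert_response"]
t_stat, p_ttest = stats.ttest_ind(students, established)
print(f"t-test: t = {t_stat:.3f}, p = {p_ttest:.3f}")
```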
We organized the virtual BBV2020 seminar series for the brain barriers research field to fill the void that the COVID-19 pandemic left through canceled and postponed meetings in 2020. The seminar series was held from May to September 2020, every Wednesday from 12:00 to 1:00 PM Eastern Daylight Time (EDT). Seminars were hosted on a Zoom webinar platform and featured a 45-min presentation from an invited speaker followed by questions and answers from the audience. In the following sections, we summarize data collected by the Zoom software, from the listserv used, and from the participant survey.
We compare and contrast the data from the virtual BBV2020 seminar series with data from in-person meetings of past years.

BBV2020 attendance
On May 14, 2020, we sent out the first advertisement for the BBV2020 seminar series via email, using a listserv with email addresses of past conference attendees, and via postings on LinkedIn, Facebook, and Twitter. By the start of the first seminar on May 20, 2020, more than 720 scientists had registered. By the end of the 16-week BBV2020 seminar series, a total of 1302 registrants had signed up on our listserv to receive the seminar link, speaker updates, and video recording links (Fig. 1A). We obtained data from two past in-person meetings from the respective conference chairs: (1) the brain barriers meeting in 2018 (BBB 2018) and (2) the Cerebral Vascular Biology meeting in 2019 (CVB 2019).

Fig. 1 Attendance trends for brain barrier research conferences. A With 1,302 individuals signing up and added to the listserv, BBV2020 attracted more registrants than BBB 2018 and CVB 2019. B Number of unique viewers each week at the live Zoom Webinar sessions of BBV2020. C Number of views (clicks) of the video recordings, when available, each week for BBV2020. Black line indicates the best linear fit; gray line indicates the average (mean) for all 16 weeks.

Strikingly, the total number of registrants for BBV2020 was more than 4-fold higher than the total number of registered participants for the BBB 2018 and CVB 2019 meetings (Fig. 1A). The large number of registrants is likely due to free registration, no associated travel expenses, the worldwide reach, and the fact that virtual meetings come with no obligations for attendees. In contrast, the number of attendees at the in-person meetings (BBB 2018 and CVB 2019) was limited and determined by the size of the conference site.

The first live BBV2020 seminar was attended by 419 participants. As the seminar series progressed, the number of participants at the live seminars gradually decreased; the average was 220 participants and the minimum was 108 (Fig. 1B). A similar trend was observed in the number of total recording views: the first seminar recording had a maximum of 944 views, and on average videos received 422 views (Fig. 1C). The trendline in Fig. 1C shows a drop in recording views over the course of the seminar series. These trends held regardless of academic rank, as graduate students, postdoctoral fellows, and professors all showed a similar downward trend in live attendance (Additional file 1: Fig. S1A–C). Note that we were unable to capture recording data for week 2 due to a recording error and for week 13 due to the speaker’s request not to record.

Together, the high number of registrants suggests that the virtual BBV2020 seminar series sparked interest. Attendance of the live seminars and the number of views of the recorded videos decreased over time, which may be an indicator of “virtual meeting fatigue” or of attendees being on summer vacation.
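The trend lines in Fig. 1B, C are simple linear fits to the weekly counts. As a hedged illustration of that calculation (the report does not specify the tool used to produce the figure), the snippet below fits a least-squares line to a weekly viewer series with NumPy; the values in weekly_viewers are placeholders, not the actual BBV2020 counts.

```python
# Illustrative sketch; the weekly_viewers values are placeholders, not the real BBV2020 data.
import numpy as np

weeks = np.arange(1, 17)  # 16 weekly seminars
weekly_viewers = np.array([420, 350, 320, 300, 280, 270, 250, 240,
                           230, 220, 200, 190, 170, 150, 130, 110])

# Least-squares linear fit; the slope is the average change in viewers per week.
slope, intercept = np.polyfit(weeks, weekly_viewers, deg=1)

print(f"Mean weekly attendance: {weekly_viewers.mean():.0f}")
print(f"Linear trend: {slope:+.1f} viewers per week (intercept {intercept:.0f})")
```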
BBV2020 survey analysis
By the end of the seminar series in September, every BBV2020 registrant received an invitation to participate in an online survey created by the organizers on August 23, 2020. The survey was designed for registrants to provide feedback and consisted of 10 questions: (1) a demographic question to capture career level and career type; (2) a single-select multiple-choice question to capture attendance of past in-person brain barriers meetings; (3) a single-select multiple-choice question on how many recorded seminars were viewed; (4) a Likert-scale question regarding the enrichment BBV2020 provided for the community during the COVID-19 pandemic; (5) a dichotomous question asking whether the registrant would still participate in BBV2020 seminars if they were continued; (6) a single-select multiple-choice question on attendance frequency; (7) a single-select multiple-choice question on how many times the registrant had previously attended a brain barriers meeting in person; (8) a matrix question to capture affective factors; (9) a matrix question to capture learning and teaching; and (10) a matrix question to capture the combined validity and practicality of the BBV2020 series. In addition, registrants responding to the survey had the opportunity to include general comments to the organizers in a separate text box.

Overall, 23% of registrants (299 of 1302) responded to the survey. Over 89% of survey respondents strongly agreed that “Attending BBV2020 was a positive experience” and 63% of respondents felt that “BBV2020 is appropriate for my research”. Generally, respondents tended to agree with the statement “BBV2020 has enriched the connection with the Brain Barriers community in the gap that COVID-19 has created” (n = 298 respondents to this question out of 299 survey respondents; M = 1.44, SD = 0.660; 1 = Strongly Agree, 2 = Agree, 3 = Neutral, 4 = Disagree, 5 = Strongly Disagree; Fig. 2A). However, when posed with the statements “In the future, I would prefer to attend a virtual conference over an in-person conference” and “The virtual experience is not as effective compared to the in-person conferences”, responses were on average neutral (M = 3.24; Fig. 2B), indicating that participants did not view virtual conferences negatively. A difference emerged when we compared BBV2020 participants’ preference for in-person versus virtual meeting formats by career level: a one-way ANOVA showed a significant difference between groups (F(9, 286) = 2.607, p = 0.007). Tukey’s HSD post hoc test indicated that Master’s students were more likely than Faculty/Professors to disagree with the statement “I would feel more comfortable attending Brain Barrier conferences/seminars in person, but was glad to attend virtually due to COVID19.” A strong trend emerged when Master’s and Ph.D. students were grouped together and compared to Post-Docs and Faculty/Professors in an independent-samples t-test: graduate students in general were more favorable toward a virtual conference than colleagues more established in their careers (t(225) = -1.972, p = 0.061). The lack of statistical significance is in part due to the low sample size (n = 74) for graduate students.

Fig. 2 Post-seminar survey results. A Volunteer respondent results when posed with the prompt “BBV2020 has enriched connection with the Brain Barriers community in the gap that COVID-19 has created”. B Volunteer respondent results when posed with the prompts “In the future, I would prefer to attend a virtual conference over an in-person conference” (blue) and “The virtual experience is not as effective compared to the in-person conferences” (red).

Overall, descriptive data revealed that many participants felt BBV2020 was a positive experience. In fact, 99.3% of survey respondents somewhat or strongly agreed that their experience was positive, with only 4.7% saying they faced technical difficulties. Additionally, most respondents agreed that virtual conferences have an important role in research (93.2%) and are more accessible than in-person conferences (90.2%). Finally, more than half of respondents (56.9%) noted that virtual conferences can do things in-person conferences cannot.

Nevertheless, there continued to be a desire among many BBV2020 participants to return to conferencing in person, with 13.5% saying they found it hard to focus in the virtual format. In fact, roughly one-third (33.9%) felt the virtual conference was not as effective as an in-person conference, and only about one-quarter (23.3%) said they would prefer virtual conferences over in-person conferences in the future, while 35.8% were neutral.

Together, our findings suggest that while BBV2020 filled a need created by a global pandemic, enthusiasm for returning to in-person-only meetings remained relatively neutral, suggesting that virtual platforms may play a critical role in the dissemination of research and science in the future. Moreover, the data show a trend between graduate students and professors in their responses regarding in-person meetings, indicating that career stage may influence how useful in-person versus virtual platforms are perceived to be.
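The Likert descriptives reported above (mean, standard deviation, and percent agreement) are straightforward to recompute once responses are coded numerically. The sketch below is an illustration only, not the actual Qualtrics/SPSS workflow; the file survey.csv and the column enriched_connection are hypothetical placeholders, with responses coded 1 = Strongly Agree through 5 = Strongly Disagree as in the text.

```python
# Illustrative sketch; file and column names are hypothetical, not the BBV2020 Qualtrics export.
# Coding convention from the text: 1 = Strongly Agree ... 5 = Strongly Disagree.
import pandas as pd

responses = pd.read_csv("survey.csv")["enriched_connection"].dropna()

mean = responses.mean()
sd = responses.std()                        # sample standard deviation (ddof=1)
pct_agree = (responses <= 2).mean() * 100   # share answering Strongly Agree or Agree

print(f"n = {len(responses)}, M = {mean:.2f}, SD = {sd:.3f}, agree = {pct_agree:.1f}%")
```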
Global reach of BBV2020
During the seminar registration process, we collected attendees’ affiliations, and based on this information we determined the global reach of BBV2020. Registrants of the BBV2020 seminar series came from 43 countries representing six of the seven continents (Fig. 3A, B). Compared to the in-person BBB 2018 and CVB 2019 meetings, more countries were represented at BBV2020 (Fig. 3A).

Fig. 3 Global reach of BBV2020. A BBV2020 had more countries represented than either BBB 2018 or CVB 2019 alone. B World map showing countries represented at BBV2020. Countries where registrants participated in BBV2020: red and pink. Countries not represented at BBV2020: light gray and dark gray. Countries where registrants attended BBV2020 and at least one of the in-person meetings (BBB 2018 or CVB 2019): red. Countries where registrants attended BBV2020 but did not attend either BBB 2018 or CVB 2019: pink. Countries with no participation in any of the brain barriers meetings examined in this study: light gray. Countries that participated in either BBB 2018 or CVB 2019 but did not participate in BBV2020: dark gray.

We also compared the number of participants from each country who joined BBV2020 versus BBB 2018 and CVB 2019. The number of participants per country worldwide was also higher than at both in-person meetings (Fig. 4A–F). These data show that the virtual nature of BBV2020 enabled researchers worldwide to participate and suggest that the outreach of a virtual event can extend beyond the field.

Fig. 4 Global view of participation in BBV2020, BBB 2018, and CVB 2019. Representation of participants for (A) BBV2020, (B) BBB 2018, and (C) CVB 2019, ordered from top to bottom by greatest number of registrants. Number of registrants scaled by color based on the number of registrants per country for (D) BBV2020, (E) BBB 2018, and (F) CVB 2019. Light pink: 1–10 registrants; dark pink: 11–50 registrants; red: 51–100 registrants; dark red: > 100 registrants.

Career stages represented at BBV2020
We compared the self-reported career stage of BBV2020 participants to that of BBB 2018 and CVB 2019 participants. Based on the survey, more than 46% of all participants attended a brain barriers conference/seminar for the first time. We also found that the in-person meetings were attended by 42.4–48.9% academicians at the professor rank, while graduate students made up 22.9–25.8% and postdoctoral fellows 6.6–17.7% of participants (Fig. 5). In contrast, at BBV2020, 19.6% of participants were academicians at the professor rank, graduate students made up 30.5% and postdoctoral fellows 16.1% of attendees. We also found that BBV2020 had larger participation from industry (18.5% vs. 5.6–6.6%) and undergraduate students (3.2% vs. 1%) compared to both in-person meetings. Together, our data suggest that a virtual seminar platform like BBV2020 reaches a broader range of scientific ranks and careers.

Fig. 5 Proportions of various career stages participating in BBV2020, BBB 2018, and CVB 2019. Proportions of career stages participating in (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Colors correspond to career stage: red (professor, any rank), green (graduate student, any rank), dark blue (research faculty or researcher), yellow (industry scientist), orange (postdoctoral fellow), purple (government/regulatory), light blue (clinician), green (undergraduate student).
Based on the survey, more than 46% of all participants attended a brain barriers conference/seminar for the first time. We also found that in-person meetings are attended by 42.4–48.9% academicians at the professor rank, graduate students make up 22.9–25.8% and postdoctoral fellows are 6.6–17.7% of the participants (Fig. 5). In contrast, at the BBV2020 19.6% of academicians were at the professor rank, graduate students made up 30.5% and postdoctoral fellows 16.1% of attendees. We also found that BBV2020 had a larger participation from industry (18.5% vs. 5.6–6.6%), and undergraduate students (3.2% vs. 1%) compared to both in person meetings. Together, our data suggest that a virtual platform seminar like BBV2020 reaches a broader range of scientific ranks and careers.\n\nFig. 5\nProportions of various career stages participating in BBV2020, BBB 2018, and CVB 2019. Proportions of various career stages participating in (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Colors correspond to career stage: Red (professor any rank), Green (graduate student any rank), Dark Blue (research faculty or researcher), Yellow (industry scientist), Orange (postdoctoral fellow), Purple (government/regulatory), Light Blue (clinician), Green (undergraduate student)\n\nProportions of various career stages participating in BBV2020, BBB 2018, and CVB 2019. Proportions of various career stages participating in (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Colors correspond to career stage: Red (professor any rank), Green (graduate student any rank), Dark Blue (research faculty or researcher), Yellow (industry scientist), Orange (postdoctoral fellow), Purple (government/regulatory), Light Blue (clinician), Green (undergraduate student)", "On May 14, 2020, we sent out the first advertisement for the BBV2020 seminar series via email using a listserv with email addresses of past conference attendees and via postings using LinkedIn, Facebook, and Twitter. By the start of the first seminar on May 20, 2020, more than 720 scientists had registered. By the end of the 16-week long BBV2020 seminar series, a total of 1302 registrants signed up on our listserv to receive the seminar link, speaker updates, and video recording links (Fig. 1A). We obtained data from two past in-person meetings from the respective conference chairs: (1) brain barriers meeting in 2018 (BBB 2018) and the (2) Cerebral Vascular Biology meeting in 2019 (CVB 2019).\n\nFig. 1\nAttendance Trends for Brain Barrier Research Conferences. A With 1,302 individuals signing up and added to the listserv, BBV2020 attracted more registrants than BBB 2018 and CVB 2019. B Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020. C Number of views (clicks) of the video recordings when available weekly for BBV2020. Black line indicates best linear fit. Gray line indicates the average (mean) for all 16 weeks\n\nAttendance Trends for Brain Barrier Research Conferences. A With 1,302 individuals signing up and added to the listserv, BBV2020 attracted more registrants than BBB 2018 and CVB 2019. B Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020. C Number of views (clicks) of the video recordings when available weekly for BBV2020. Black line indicates best linear fit. Gray line indicates the average (mean) for all 16 weeks\nStrikingly, the total number of registered applicants for the BBV2020 was more than 4-fold higher compared to the total number of registered participants for the 2018 and the CVB 2019 meetings (Fig. 1A). 
The large number of registrants is likely due to free registration, no associated travel expenses, a world-wide reach, and that virtual meetings come with no obligations for the attendees. In contrast, the number of attendees of the in-person meetings (BBB 2018 and CVB2019) was limited and determined by the size of the conference site.\nThe first live BBV2020 seminar was attended by 419 participants. As the seminar series progressed, the participant number during live seminars gradually decreased, the average participant number was 220, the minimum was 108 (Fig. 1B). A similar trend was observed in the numbers of total recording views: the first seminar recording had a maximum of 944 views; in average, videos had 422 views (Fig. 1C). The trendline in Fig. 1C shows a drop in recording views over the course of the seminar series. These trends held regardless of academic rank as graduate students, postdoctoral fellows, and professors all showed a similar downward trend in live attendance (Additional file 1: Fig. S1A–C). Note that we were unable to capture recording data for week 2 due to a recording error and for week 13 due to the request by the speaker to not record.\nTogether, the high number of registrants suggests that the virtual BBV2020 seminar series sparked interest. Attendance of the live seminars and the numbers of views for the recorded videos decreased over time, which may be an indicator of “virtual meeting fatigue” or attendees being on their summer vacation.", "By the end of the seminar series in September, every BBV2020 registrant received an invitation to participate in an online survey created by the organizers on August 23, 2020. The survey was designed for registrants to provide feedback and consisted of 10 questions: (1) demographic question to capture career level and career type; (2) single select multiple choice question to capture attendance of past in-person brain barrier meetings, (3) single select multiple choice question on how many recorded seminars were viewed, (4) Likert scale question regarding the enrichment BBV2020 provided for the community during the COVID-19 pandemic, (5) dichotomous question asking if the registrant would still participate in BBV2020 seminars if they were continued, (6) single select multiple choice question on attendance frequency, (7) single select multiple choice question on how many times the registrant has thus far attended a brain barriers meeting previously in person, (8) matrix question to capture affective factors, (9) matrix question to capture learning and teaching, and (10) matrix question to capture combined validity and practicality of the BBV2020 series. In addition, registrants responding to the survey had the opportunity to include general comments to the organizers in a separate text box.\nOverall, 23% of registrants (total of 299 registrants) responded to the survey. Over 89% of survey respondents agreed strongly that “Attending BBV2020 was a positive experience” and 63% of responders felt that “BBV2020 is appropriate for my research”. Generally, the respondents tended to agree with the following statement: “BBV2020 has enriched the connection with the Brain Barriers community in the gap that COVID-19 has created” (n = 298 respondents to this question of a total of 299 registrants, M = 1.44, SD = 0.660, 1 = Strongly Agree, 2 = Agree, 3 = Neutral, 4 = Disagree, 5 = Strongly Disagree; Fig. 2A). 
However, when posed with the statements “In the future, I would prefer to attend a virtual conference over an in-person conference” or “The virtual experience is not as effective compared to the in-person conferences”, responses were neutral (M = 3.24; Fig. 2B) indicating a positive view towards a virtual conference. A difference emerged when we compared BBV2020 participants based on their career level with their preference for in-person versus virtual meeting formats: using one-way ANOVA, we found a significant difference between groups (F(9286) = 2.607, p = 0.007). Using Tukey’s HSD post hoc test indicated that Master’s students were more likely to disagree with the following statement when compared to Faculty/Professors: “I would feel more comfortable attending Brain Barrier conferences/seminars in person, but was glad to attend virtually due to COVID19.” A strong trend emerged when Master’s and Ph.D. students were grouped together and compared to Post-Docs and Faculty/Professors in an independent-samples t test. Results show graduate students in general were more favorable to a virtual conference compared to colleagues more established in their careers (t(225) = -1.972, p = 0.061). The lack of significance is in part due to the low sample size (n = 74) for graduate students.\n\nFig. 2\nPost-seminar survey results. A Volunteer respondent result when posed with the prompt “BBV2020 has enriched connection with the Brain Barriers community in the gap that COVID-19 has created”. B Volunteer respondent results when posted with the prompts “In the future, I would prefer to attend a virtual conference over an in-person conference” (Blue) and “The virtual experience is not as effective compared to the in-person conferences” (Red)\n\nPost-seminar survey results. A Volunteer respondent result when posed with the prompt “BBV2020 has enriched connection with the Brain Barriers community in the gap that COVID-19 has created”. B Volunteer respondent results when posted with the prompts “In the future, I would prefer to attend a virtual conference over an in-person conference” (Blue) and “The virtual experience is not as effective compared to the in-person conferences” (Red)\nOverall, descriptive data revealed many participants felt BBV2020 was a positive experience. In fact, 99.3% of survey respondents noted they somewhat or strongly agree that their experience was positive, with only 4.7% saying they faced technical difficulties. Additionally, most respondents agreed that virtual conferences have an important role in research (93.2%) and are more accessible than in-person conferences (90.2%). Finally, more than half of respondents (56.9%) noted virtual conferences can do things in-person conferences cannot.\nNevertheless, there continued to be a desire among many BBV2020 participants to return to conferencing in person, with 13.5% saying they found it hard to focus in the virtual format. In fact, roughly one-third (33.9%) felt the virtual conference was not as effective as an in-person conference, and only one-quarter (23.3%) said they would prefer virtual conferences to in-person conferences in the future, while 35.8% were neutral.\nTogether, our findings suggest that while BBV2020 filled a need created by a global pandemic, the enthusiasm for returning to only in-person meetings remained relatively neutral suggesting that virtual platforms may play a critical role in dissemination of research and science in the future. 
Moreover, the data from this study show a trend in how graduate students and professors responded with regard to in-person meetings, indicating that career stage may influence how the usefulness of in-person versus virtual platforms is perceived.", "During the seminar registration process, we collected registrants’ affiliations and, based on this information, determined the global reach of BBV2020. Registrants of the BBV2020 seminar series were from 43 countries representing six of the seven continents (Fig. 3A, B). Compared to the in-person BBB 2018 and CVB 2019, more countries were represented at BBV2020 (Fig. 3A).\n\nFig. 3\nGlobal reach of BBV2020. A BBV2020 had more countries represented compared to BBB 2018 and CVB 2019 alone. B World map showing countries represented by BBV2020. Countries where registrants participated in BBV2020: Red and Pink. Countries not represented at BBV2020: Light Gray and Dark Gray. Countries where registrants attended BBV2020 and at least one of the in-person meetings (BBB 2018 or CVB 2019): Red. Countries where registrants attended BBV2020 and did not attend either BBB 2018 or CVB 2019: Pink. Countries that had no participation in any of the brain barriers meetings examined in this study: Light Gray. Countries that participated in either BBB 2018 or CVB 2019 but did not participate in BBV2020: Dark Gray\nWe also compared the number of participants from each country who joined BBV2020 vs. BBB 2018 and CVB 2019. We found that the number of participants per country worldwide was also higher compared to both in-person meetings (Fig. 4A–F). These data show that the virtual nature of BBV2020 enabled researchers worldwide to participate and suggest that the outreach of a virtual event can extend beyond the field's traditional reach.\n\nFig. 4\nGlobal view of participation over BBV2020, BBB 2018, and CVB 2019. Representation of participants for (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Ordered from top to bottom by greatest number of registrants. Number of registrants scaled by color based on number of registrants per country for (D) BBV 2020, (E) BBB 2018, and (F) CVB 2019. Light Pink: 1–10 registrants; Dark Pink: 11–50 registrants; Red: 51–100 registrants and Dark Red: > 100 registrants
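As a small illustration of the per-country tallies behind Fig. 4, the following Python sketch counts registrants per country and bins the counts into the same color categories used in the figure legend; the registrant list and country values are hypothetical, not the actual registration data.

# Minimal sketch (illustrative only) of tallying registrants per country and binning
# the counts into the Fig. 4 color categories (1-10, 11-50, 51-100, >100 registrants).
import pandas as pd

registrants = pd.DataFrame({"country": ["USA", "USA", "Germany", "Brazil", "USA", "Germany"]})  # hypothetical

counts = registrants["country"].value_counts()
bins = pd.cut(counts, bins=[0, 10, 50, 100, float("inf")],
              labels=["Light Pink (1-10)", "Dark Pink (11-50)", "Red (51-100)", "Dark Red (>100)"])
print(pd.DataFrame({"registrants": counts, "category": bins}))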
", "We compared the self-reported career stage of BBV2020 participants with that of BBB 2018 and CVB 2019 participants. Based on the survey, more than 46% of all participants attended a brain barriers conference/seminar for the first time. We also found that at the in-person meetings, academicians at the professor rank made up 42.4–48.9% of participants, graduate students 22.9–25.8%, and postdoctoral fellows 6.6–17.7% (Fig. 5). In contrast, at BBV2020, academicians at the professor rank made up 19.6% of attendees, graduate students 30.5%, and postdoctoral fellows 16.1%. We also found that BBV2020 had larger participation from industry (18.5% vs. 5.6–6.6%) and from undergraduate students (3.2% vs. 1%) compared to both in-person meetings. Together, our data suggest that a virtual platform seminar like BBV2020 reaches a broader range of scientific ranks and careers.\n\nFig. 5\nProportions of various career stages participating in BBV2020, BBB 2018, and CVB 2019. Proportions of various career stages participating in (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Colors correspond to career stage: Red (professor any rank), Green (graduate student any rank), Dark Blue (research faculty or researcher), Yellow (industry scientist), Orange (postdoctoral fellow), Purple (government/regulatory), Light Blue (clinician), Green (undergraduate student)", "Seemingly overnight, the worldwide COVID-19 pandemic moved virtual meetings, seminars, and conferences from the sidelines to center stage. For scientists, connecting on a virtual platform to share data and discuss ideas became the new norm. For many professional societies and research networks, a virtual format was a viable option to offset the burden of the pandemic [6, 7]. We organized the BBV2020 seminar series for researchers in the brain barriers field to fill the vacuum that canceled in-person conferences left in 2020. Using data collected throughout the BBV2020 seminar series from Zoom Webinar live attendance, the number of views of posted videos, and a post-seminar survey, we captured the impact of BBV2020 on the brain barriers field. We compared these data with those from past in-person conferences and assessed how the reach of the BBV2020 virtual seminar series compared with that of in-person meetings. While we acknowledge that comparisons between a seminar series and a conference have their limitations, this was the first stand-in virtual event for the brain barriers field and may provide insight for future planning. In the following sections we discuss our findings in the context of the existing literature and provide a future perspective on how a virtual platform could impact the brain barriers field beyond the pandemic.\n Attendance: scientists around the globe Due to the pandemic, the idea and subsequent organization of BBV2020 occurred in a relatively short time frame of about 4 weeks. 
This is in stark contrast to in-person meetings, which are often scheduled and organized months or years ahead of the actual event. Advertisement of BBV2020 encompassed personal emails, postings on social media, and use of the International Brain Barriers Society email distribution list. Continuous advertisement resulted in the registration of over 1300 scientists. The total registrant number cannot be directly compared to that of in-person meetings, since those are more complex in nature, have maximum capacity limits, and involve other factors such as travel expenses and time commitment. Similar to other meetings that transitioned to online formats during the pandemic, BBV2020 had a large increase in registrants compared to previous in-person meetings (~7-fold and 4.5-fold higher than in 2018 and 2019, respectively; Fig. 1) [8]. Given that previous brain barriers meetings usually have a few hundred participants, the high number of registrants could be an indicator of high interest in the field beyond traditional brain barrier research groups. In contrast, these high registrant numbers could also be due, in part, to BBV2020 being a no-cost event, or could be explained by pandemic-related cancellations resulting in relatively event-free calendars for participants. While there were more than 1,300 registrants for BBV2020, no more than 419 attended any single live webinar. Additionally, 323 people registered for BBV2020 but never attended a live webinar. They may, however, have participated by watching the video recordings after the live webinar. A downward trend in attendance numbers for live seminars was observed throughout the course of the 16-week series (Fig. 1B). The recorded seminars drew considerable interest initially, with a maximum of 944 views in week 4 of the series, but later in the series, video views dropped to about 200 per seminar (Fig. 1C). Reasons for these downward trends remain speculative, but they could include the summer break, the selected seminar time making it difficult for researchers in certain time zones to attend live, Zoom fatigue, or increasing responsibilities in re-opened research laboratories or at home. The downward trends we observed were in contrast to a similar virtual seminar series in another field, whose attendance remained relatively flat over the course of the summer [4]. Given that average attendance at BBV2020 was 220 per seminar and typical attendance at in-person conferences in the brain barriers field ranges between 150 and 300 participants (caveat: one of the in-person conferences has a strict 182-person maximum limit), we conclude that a core group of participants attended most sessions. 
\n Survey: feedback from participants A survey conducted after the seminar series revealed that most participants felt that BBV2020 enhanced their connection to the brain barriers scientific community during the COVID-19 pandemic (Fig. 2A). Differences emerged in how graduate students and professors viewed BBV2020: graduate students in general were more favorable toward the virtual format than faculty. This may be partially explained by a lack of funding for graduate student travel or by political “red tape” (e.g., visas) for some researchers [9]. Additionally, students may adapt more easily and rapidly to new technology than senior colleagues [10]. Clearly, another survey would be needed to determine whether a virtual component outside of a pandemic would be positively viewed by researchers in the brain barriers field. 
\n Virtual seminars: a powerful opportunity One major advantage of a virtual seminar series such as BBV2020 is the lack of a travel burden. Removing the financial and time commitments associated with in-person meetings has been shown to increase the diversity of attendees [3, 9, 11]. Based on those findings, we anticipated that BBV2020 would have representation from more countries than past in-person meetings. As shown in Fig. 3, registrants from 43 different countries were represented at BBV2020, which is 40% more compared to BBB 2018 (23 countries) and CVB 2019 (25 countries; Fig. 3). In particular, there was high participation from countries in Southeast Asia, the Middle East, Africa, and South America that had not been represented at the in-person conferences held in 2018 and 2019 (Figs. 3B, 4). Unsurprisingly, however, considering the countries represented at previous in-person meetings and the number of registrants for BBV2020, there was generally greater participation from each country represented than at BBB 2018 and CVB 2019 [3, 8]. Due to the time of the BBV2020 live sessions (noon to 1 pm Eastern US Time), the majority of participants were from North America, South America, and Europe (Fig. 4). While time zone challenges are common among virtual events, switching between time zones or holding a “flipped” session (flipping evening sessions from Europe to the USA and vice versa) could increase participation from Asia and Oceania [12]. However, offering a virtual series that constantly switches the seminar time may be difficult to manage and could be confusing for participants. Another strength of virtual seminars is that they are more accessible and inclusive, especially for junior researchers in the field. The in-person meetings in 2018 and 2019 were dominated by academic faculty at any professor rank and graduate students (Fig. 5). In contrast, BBV2020 saw a large reduction in the percentage of participating faculty (over 40% in person to less than 20% for BBV2020) and a larger proportion of graduate students (22–25% in person to over 30% for BBV2020), industry scientists (about 6% in person to 18.5% for BBV2020), and governmental/regulatory personnel who had signed up (Fig. 5). In addition, a weekly virtual seminar series allows scientists to pick and choose the sessions they are interested in, a flexibility that could contribute to the attendance trends shown in Fig. 1. Our data further support the trends observed across a variety of disciplines that have shifted their conferences from in-person to online events, revealing that virtual conferences are more flexible, more inclusive, and more accessible worldwide, especially for early-career scientists [9]. Virtual seminars also provide an opportunity for the organizers to invite prominent speakers in a field on a relatively small budget. In general, virtual conferences are known to enhance accessibility for busy professionals. A recently published Nature poll indicates that 74% of scientists want virtual meetings to stay after the pandemic (925 poll responses) [13]. Easier accessibility, lower carbon footprint, and lower costs were the driving factors chosen in favor of virtual meetings in this poll. 
Moreover, virtual seminars provide an easy and affordable opportunity for researchers with disabilities, researchers with high teaching loads, or scientists with parenting and other responsibilities to stay connected to their respective research community. During the COVID-19 pandemic, virtual seminars took center stage and raised the standard for accessibility, interactions, inclusivity, and equity in science [9]. Virtual events have changed how we think about meetings, and they constitute a still underestimated but powerful tool to share science and connect with each other. By rethinking how meetings are held, various fields have seen different degrees of participation depending on the availability of talks and content versus live sessions. These virtual platforms have the potential to create a “connectivity cascade”, reaching more people than traditional meetings that are confined to a single geographical location and time [14]. BBV2020 had an impressive reach worldwide; however, there was still a notable deficit in participants from underdeveloped countries, especially in Africa. Similar discrepancies have been noted in the context of expanding telehealth to African countries, where progress remains challenging due to monopolization of telecommunication infrastructure, political will, and access to adequate broadband [15]. Despite the advantages of BBV2020, we recognize that limitations remain, such as the one-way delivery of material to participants. Critical face-to-face interactions that allow junior scientists to have meaningful two-way communication with other participants were notably lacking in BBV2020, as has been seen in other virtual conferences as well [16]. The future of such virtual seminars, meetings, or conferences may lie in their inclusion as part of in-person meetings, in line with a recently published “best practice” guideline for virtual meetings [17]. For example, a virtual component could be included in plenary sessions, workshops, small group discussions, poster sessions, and/or social events of in-person conferences. BBV2020 represented only one aspect (a seminar held in the form of a webinar) of these common meeting events, but future events could certainly enhance the virtual experience. Regardless, virtual components could be exploited to increase diversity in the brain barriers research field, which is consistent with the NINDS strategy for enhancing the diversity of neuroscience researchers [18]. 
", "Due to the pandemic, the idea and subsequent organization of BBV2020 occurred in a relatively short time frame of about 4 weeks. This is in stark contrast to in-person meetings, which are often scheduled and organized months or years ahead of the actual event. Advertisement of BBV2020 encompassed personal emails, postings on social media, and use of the International Brain Barriers Society email distribution list. Continuous advertisement resulted in the registration of over 1300 scientists. The total registrant number cannot be directly compared to that of in-person meetings, since those are more complex in nature, have maximum capacity limits, and involve other factors such as travel expenses and time commitment. Similar to other meetings that transitioned to online formats during the pandemic, BBV2020 had a large increase in registrants compared to previous in-person meetings (~7-fold and 4.5-fold higher than in 2018 and 2019, respectively; Fig. 1) [8]. Given that previous brain barriers meetings usually have a few hundred participants, the high number of registrants could be an indicator of high interest in the field beyond traditional brain barrier research groups. In contrast, these high registrant numbers could also be due, in part, to BBV2020 being a no-cost event, or could be explained by pandemic-related cancellations resulting in relatively event-free calendars for participants. While there were more than 1,300 registrants for BBV2020, no more than 419 attended any single live webinar. Additionally, 323 people registered for BBV2020 but never attended a live webinar. They may, however, have participated by watching the video recordings after the live webinar. A downward trend in attendance numbers for live seminars was observed throughout the course of the 16-week series (Fig. 1B). The recorded seminars drew considerable interest initially, with a maximum of 944 views in week 4 of the series, but later in the series, video views dropped to about 200 per seminar (Fig. 1C). Reasons for these downward trends remain speculative, but they could include the summer break, the selected seminar time making it difficult for researchers in certain time zones to attend live, Zoom fatigue, or increasing responsibilities in re-opened research laboratories or at home. 
The downward trends we observed were in contrast to a similar virtual seminar series in another field, whose attendance remained relatively flat over the course of the summer [4]. Given that average attendance at BBV2020 was 220 per seminar and typical attendance at in-person conferences in the brain barriers field ranges between 150 and 300 participants (caveat: one of the in-person conferences has a strict 182-person maximum limit), we conclude that a core group of participants attended most sessions.", "A survey conducted after the seminar series revealed that most participants felt that BBV2020 enhanced their connection to the brain barriers scientific community during the COVID-19 pandemic (Fig. 2A). Differences emerged in how graduate students and professors viewed BBV2020: graduate students in general were more favorable toward the virtual format than faculty. This may be partially explained by a lack of funding for graduate student travel or by political “red tape” (e.g., visas) for some researchers [9]. Additionally, students may adapt more easily and rapidly to new technology than senior colleagues [10]. Clearly, another survey would be needed to determine whether a virtual component outside of a pandemic would be positively viewed by researchers in the brain barriers field.", "One major advantage of a virtual seminar series such as BBV2020 is the lack of a travel burden. Removing the financial and time commitments associated with in-person meetings has been shown to increase the diversity of attendees [3, 9, 11]. Based on those findings, we anticipated that BBV2020 would have representation from more countries than past in-person meetings. As shown in Fig. 3, registrants from 43 different countries were represented at BBV2020, which is 40% more compared to BBB 2018 (23 countries) and CVB 2019 (25 countries; Fig. 3). In particular, there was high participation from countries in Southeast Asia, the Middle East, Africa, and South America that had not been represented at the in-person conferences held in 2018 and 2019 (Figs. 3B, 4). Unsurprisingly, however, considering the countries represented at previous in-person meetings and the number of registrants for BBV2020, there was generally greater participation from each country represented than at BBB 2018 and CVB 2019 [3, 8]. Due to the time of the BBV2020 live sessions (noon to 1 pm Eastern US Time), the majority of participants were from North America, South America, and Europe (Fig. 4). While time zone challenges are common among virtual events, switching between time zones or holding a “flipped” session (flipping evening sessions from Europe to the USA and vice versa) could increase participation from Asia and Oceania [12]. However, offering a virtual series that constantly switches the seminar time may be difficult to manage and could be confusing for participants. Another strength of virtual seminars is that they are more accessible and inclusive, especially for junior researchers in the field. The in-person meetings in 2018 and 2019 were dominated by academic faculty at any professor rank and graduate students (Fig. 5). In contrast, BBV2020 saw a large reduction in the percentage of participating faculty (over 40% in person to less than 20% for BBV2020) and a larger proportion of graduate students (22–25% in person to over 30% for BBV2020), industry scientists (about 6% in person to 18.5% for BBV2020), and governmental/regulatory personnel who had signed up (Fig. 5). 
In addition, a weekly virtual seminar series allows scientists to pick and choose the sessions they are interested in, a flexibility that could contribute to the attendance trends shown in Fig. 1. Our data further support the trends observed across a variety of disciplines that have shifted their conferences from in-person to online events, revealing that virtual conferences are more flexible, more inclusive, and more accessible worldwide, especially for early-career scientists [9]. Virtual seminars also provide an opportunity for the organizers to invite prominent speakers in a field on a relatively small budget. In general, virtual conferences are known to enhance accessibility for busy professionals. A recently published Nature poll indicates that 74% of scientists want virtual meetings to stay after the pandemic (925 poll responses) [13]. Easier accessibility, lower carbon footprint, and lower costs were the driving factors chosen in favor of virtual meetings in this poll. Moreover, virtual seminars provide an easy and affordable opportunity for researchers with disabilities, researchers with high teaching loads, or scientists with parenting and other responsibilities to stay connected to their respective research community. During the COVID-19 pandemic, virtual seminars took center stage and raised the standard for accessibility, interactions, inclusivity, and equity in science [9]. Virtual events have changed how we think about meetings, and they constitute a still underestimated but powerful tool to share science and connect with each other. By rethinking how meetings are held, various fields have seen different degrees of participation depending on the availability of talks and content versus live sessions. These virtual platforms have the potential to create a “connectivity cascade”, reaching more people than traditional meetings that are confined to a single geographical location and time [14]. BBV2020 had an impressive reach worldwide; however, there was still a notable deficit in participants from underdeveloped countries, especially in Africa. Similar discrepancies have been noted in the context of expanding telehealth to African countries, where progress remains challenging due to monopolization of telecommunication infrastructure, political will, and access to adequate broadband [15]. Despite the advantages of BBV2020, we recognize that limitations remain, such as the one-way delivery of material to participants. Critical face-to-face interactions that allow junior scientists to have meaningful two-way communication with other participants were notably lacking in BBV2020, as has been seen in other virtual conferences as well [16]. The future of such virtual seminars, meetings, or conferences may lie in their inclusion as part of in-person meetings, in line with a recently published “best practice” guideline for virtual meetings [17]. For example, a virtual component could be included in plenary sessions, workshops, small group discussions, poster sessions, and/or social events of in-person conferences. BBV2020 represented only one aspect (a seminar held in the form of a webinar) of these common meeting events, but future events could certainly enhance the virtual experience. 
Regardless, virtual components could be exploited to increase diversity in the brain barriers research field, which is consistent with the NINDS strategy for enhancing the diversity of neuroscience researchers [18].", "Overall, the Virtual BBV2020 Seminar Series created during the COVID-19 pandemic was positively received by the brain barriers research community. Key benefits of virtual seminars like BBV2020 are that they are convenient, affordable, eco-friendly, reach a broad audience, and are a good tool to increase diversity and equity. Thus, incorporating a virtual component during an in-person meeting or offering a virtual counterpart in addition to in-person meetings could help increase the outreach of a small research field like the brain barriers field. Nevertheless, limitations remain, such as a lack of interaction between speakers and participants, which is particularly critical for junior researchers. Although BBV2020 expanded the reach beyond that of past in-person meetings, challenges in reaching scientists in underdeveloped countries remain. Together, BBV2020 was created in a time of crisis but has the potential to thrive in the post-pandemic world. Embracing virtual scientific events may allow us to advance the brain barriers research field and have an impact on how we exchange science in the future.", " \nAdditional file 1: Figure S1. Attendance Trends for BBV2020 by Academic Rank. Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020 separated by A) Graduate Student, B) Postdoctoral Fellow, and C) Professor (any rank)." ]
[ "introduction", null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", null, null, null, null, "supplementary-material" ]
[ "Blood–brain barrier", "BBV2020", "Brain barriers virtual", "Virtual seminar series", "COVID-19", "Education" ]
Introduction: On March 11, 2020, the World Health Organization (WHO) declared the coronavirus disease (COVID-19) outbreak a global pandemic [1]. Shortly after, countries worldwide issued stay-at-home orders and lockdowns: the COVID-19 pandemic had forced the world to come to an abrupt halt [2]. With global air travel restrictions and travel bans in place, enterprises including academic institutions, industry and government agencies required individuals to call off their business travel. As a result, scheduled scientific conferences were canceled throughout 2020 [3, 4]. These cancelations created a sudden void in scientific communication between researchers worldwide and forced research communities to be creative and adapt. One possibility to continue scientific communication and interaction was by switching conferences and seminars to virtual online events using novel digital platforms enabling researchers to connect and share science [3, 4]. As a consequence, major conferences and meetings in the brain barriers research field, including the “Barriers of the CNS Gordon Research Conference” (GRC 2020) and the international symposium on “Signal Transduction at the Blood–Brain Barriers” in Bari, Italy were first postponed until 2021 and due to the ongoing pandemic subsequently moved to 2022. The “Cerebral Vascular Biology” (CVB) conference in Uppsala, Sweden planned for 2021 was postponed until 2023. Postponing these meetings left organizers and conference chairs in a difficult position and researchers without a platform to present and share their data. Thus, these postponements created a critical need for the field to stay connected and an opportunity to establish a virtual event that could supplement the postponed conferences in 2020. The Brain Barriers Virtual 2020 Seminar Series (BBV2020) addressed this need by providing a forum for brain barrier scientists worldwide to come together online during the COVID-19 crisis in 2020. BBV2020 seminars were held weekly from May 20, 2020 to September 2, 2020 in a 1-h format featuring a live session with an invited speaker followed by time for questions and answers from the community. The seminar series was hosted on a Zoom webinar platform and was free for anyone to attend. BBV2020 served as a forum for invited speakers to present their work to the brain barriers community and enabled this community to discuss brain barriers science from the safety of their home office. Over 1300 scientists around the world were registered, each seminar was viewed live by 100–400 participants, and the video-recordings received 150–900 views each week. Participants were from academia, industry and government agencies at all career stages from around the globe. In this report, we share participation data that give insight into the impact this virtual seminar series had on the brain barriers field during the COVID-19 pandemic. We compare these data with existing data from past in-person brain barriers meetings and draw conclusions from these data on the potential value of virtual meetings for the brain barriers research field in the future. While the organization of virtual events during the pandemic may seem self-evident, reflection on the impact carries utility for events in the future. Methods: Organization The BBV2020 seminar series was created by Drs. Bjoern Bauer (University of Kentucky, Lexington, KY, USA), Anika Hartz (University of Kentucky, Lexington, KY, USA), and Brandon Kim (University of Alabama, Tuscaloosa, Alabama, USA). 
Seminar moderators were postdoctoral researchers and scientists from the USA and Europe: Drs. Natalie Hudson (Trinity College Dublin, Dublin, Ireland), Geetika Nehra (University of Kentucky, Lexington, KY, USA), Michelle Pizzo (Denali Therapeutics Inc., San Francisco, CA, USA), and Steffen Storck (University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany). Virtual platform and technical assistance BBV2020 was run on the Zoom Webinar 500 platform (Zoom Video Communications, Inc.; San Jose, CA, USA), which was funded by Dr. Bjoern Bauer using University of Kentucky funds. This particular Zoom Webinar license allowed for up to 500 live participants. The Zoom Webinar platform supports an in-session chat function, a Q&A forum, and a raise-hand function for live questions, and provides a panelist section with the option to share audio, video, and presentation slides, a recording function, and post-webinar data collection. The hosts and moderators were in control of audio and video; participants were not able to record or unmute during a session. Participants had no control function other than a “raise hand” option and the Q&A and chat functions allowing them to send questions or messages to the moderator. Technical assistance was provided by Todd Sizemore, College of Pharmacy, University of Kentucky. BBV2020 schedule The BBV2020 seminar series ran from May 20 to September 2, 2020. Live seminars were held once per week from 12 to 1 PM Eastern Daylight Time (EDT) to accommodate researchers across a wide range of time zones: Central European Time (6:00 PM) to Pacific Daylight Time (9:00 AM). To accommodate colleagues with time constraints or those in incompatible time zones such as Asia or Oceania, presentations were recorded (depending on speaker permissions) and the seminar recording was made available for up to 48 h after the live seminar. The link to the recorded videos was distributed via the BBV2020 listserv that was set up through the University of Kentucky listserv. 
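To illustrate the time-zone reasoning behind the chosen seminar slot, the following minimal Python sketch (not part of the original organization workflow) converts the 12 PM US Eastern start of the first seminar date into other local times using the standard-library zoneinfo module; the script and the chosen zones are illustrative only.

# Minimal sketch: convert the 12 PM US Eastern seminar start to other time zones.
from datetime import datetime
from zoneinfo import ZoneInfo

start_eastern = datetime(2020, 5, 20, 12, 0, tzinfo=ZoneInfo("America/New_York"))

for label, tz in [("Central European Time", "Europe/Berlin"),
                  ("Pacific Daylight Time", "America/Los_Angeles"),
                  ("Japan Standard Time", "Asia/Tokyo")]:
    local = start_eastern.astimezone(ZoneInfo(tz))
    print(f"{label}: {local.strftime('%H:%M')}")
# Expected output: 18:00 in Berlin, 09:00 in Los Angeles, and 01:00 (next day) in Tokyo,
# which illustrates why live attendance from Asia and Oceania was difficult.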
Invited speakers Efforts were made to invite speakers from a variety of countries and genders. Invited speakers included organizers of postponed in-person conferences and speakers from academia and industry at different career stages. A total of 16 speakers were invited: May 20, 2020: Elga De Vries, Amsterdam UMC, Amsterdam, The Netherlands; May 27, 2020: Richard Daneman, University of California San Diego, San Diego, CA, USA; June 3, 2020: Robyn Klein, Washington University, St. Louis, MO, USA; June 10, 2020: Ayal Ben-Zvi, The Hebrew University of Jerusalem, Israel; June 17, 2020: Britta Engelhardt, University of Bern, Bern, Switzerland; June 24, 2020: Margareta Hammarlund-Udenaes, Uppsala University, Uppsala, Sweden; July 1, 2020: Daniela Virgintino, University of Bari, Bari, Italy; July 8, 2020: Eric Shusta, University of Wisconsin, Madison, WI, USA; July 15, 2020: Matthew Campbell, Trinity College Dublin, Dublin, Ireland; July 22, 2020: Krzysztof Kucharz, University of Copenhagen, Copenhagen, Denmark; July 29, 2020: William Elmquist, University of Minnesota, Minneapolis, MN, USA; August 5, 2020: Edward Neuwelt, Oregon Health & Science University, OR, USA; August 12, 2020: Robert Thorne, Denali Therapeutics, San Francisco, CA, USA; August 19, 2020: Teresa Sanchez, Weill Cornell Medical College, New York City, NY, USA; August 26, 2020: Michael Taylor, University of Wisconsin, Madison, WI, USA; September 2, 2020: Joan Abbott, King’s College London, London, UK. Speakers were offered a training session by the organizers to cover key aspects of a Zoom-based webinar and to test video, audio, and slide presentation prior to the live seminar. Ten out of 16 speakers took the training session. Speakers were informed by the organizers that any data presented should be considered public, as security on the Zoom platform could not be ensured. Invited speakers had the opportunity to give permission to have their seminar recorded or to opt out of the recording. If a speaker gave permission to record, the entire seminar was recorded, and recordings were made available to all registered participants in a non-downloadable format through the University of Kentucky Zoom cloud for up to 48 h after the live session. Out of 16 speakers, 15 gave permission to record the live seminar. Advertisement and participants To reach scientists in the brain barriers community, the seminar was advertised by email to participants of previous in-person meetings and on social media platforms such as Facebook and LinkedIn on personal profiles. In addition, the introductory flyer was kindly distributed via email by the International Brain Barriers Society (IBBS) using their email distribution list. Interested scientists were asked to register by sending their name, affiliation, and career stage to a dedicated email address created for the organization and execution of BBV2020 ([email protected]). 
Once registered, participants were added to a BBV2020 listserv provided by the University of Kentucky and received weekly updates. These weekly updates included a flyer with information on the upcoming speakers and the specific Zoom link for the seminar. Participants also received a follow-up email after each seminar with a Zoom link to the video recording. With the permission of the speaker of the week, registered viewers had access to recordings for up to 48 h after the live seminar. BBV2020 seminar presentation Each seminar started with opening remarks by one of the organizers (Drs. Bauer, Hartz, or Kim), followed by an introduction of the speaker by the moderator. Moderators (Drs. Hudson, Nehra, Pizzo, Storck) rotated each week; only one moderator was assigned to each week. After the seminar presentation, the moderator facilitated the question-and-answer session. Participants were encouraged to ask a live question by raising their digital hand. When a participant raised the digital hand, the moderator would unmute the participant, at which point the participant was able to ask the speaker a question directly. In addition, participants had the opportunity to ask questions via the Zoom Q&A or the chat function. Written questions were then read by the moderator. The moderator concluded each seminar by thanking the speaker and the audience for attending and by announcing the speaker for the following week. Map generation World maps highlighting the countries represented at the various meetings were generated using MapChart (www.mapchart.net; license: https://creativecommons.org/licenses/by-sa/4.0/). Survey Toward the end of the seminar series, a survey was generated using Qualtrics and run through the University of Alabama Qualtrics system. A link to the survey was sent to all registered seminar participants by email. Questions covered general audience participation, career stage, and affective factors. Survey results are reported only in aggregate. 
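As an illustration of the aggregate-only reporting described above, the following Python sketch summarizes one Likert item from a hypothetical Qualtrics export; the file name and column name are invented for the example and do not reflect the actual survey instrument or pipeline.

# Minimal sketch (illustrative only) of aggregating Likert-scale survey responses,
# assuming a hypothetical Qualtrics CSV export with one row per respondent and a
# numeric column "enriched_connection" coded 1 = Strongly Agree ... 5 = Strongly Disagree.
import pandas as pd

responses = pd.read_csv("bbv2020_survey_export.csv")   # hypothetical file name
item = responses["enriched_connection"].dropna()

summary = {
    "n": int(item.count()),
    "mean": round(item.mean(), 2),
    "sd": round(item.std(), 3),
}
percent_agree = (item <= 2).mean() * 100   # share answering Agree or Strongly Agree

print(summary)
print(f"{percent_agree:.1f}% agreed or strongly agreed")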
Data collection, analysis, and statistics Survey data were collected through the University of Alabama Qualtrics system and were determined by the University of Alabama IRB to be “non-human subject research”. Data from the live webinars were collected by Zoom for each seminar and reported as unique viewers to avoid double-counting individuals who logged on twice. Data on video views were collected from the University of Kentucky Zoom Webinar cloud server and were reported as total accessed views. Survey data from Qualtrics were analyzed with the IBM® SPSS® software platform (IBM, Armonk, NY, USA), using one-way ANOVAs and t tests to identify significant trends [5]. Data with p < 0.05 were considered significant. Organization: The BBV2020 seminar series was created by Drs. Bjoern Bauer (University of Kentucky, Lexington, KY, USA), Anika Hartz (University of Kentucky, Lexington, KY, USA), and Brandon Kim (University of Alabama, Tuscaloosa, Alabama, USA). Seminar moderators were postdoctoral researchers and scientists from the USA and Europe: Drs. Natalie Hudson (Trinity College Dublin, Dublin, Ireland), Geetika Nehra (University of Kentucky, Lexington, KY, USA), Michelle Pizzo (Denali Therapeutics Inc., San Francisco, CA, USA), and Steffen Storck (University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany). Virtual platform and technical assistance: BBV2020 was run on the Zoom Webinar 500 platform (Zoom Video Communications, Inc.; San Jose, CA, USA), which was funded by Dr. Bjoern Bauer using University of Kentucky funds. This particular Zoom Webinar license allowed for up to 500 live participants. The Zoom Webinar platform supports an in-session chat function, a Q&A forum, and a raise-hand function for live questions, and provides a panelist section with the option to share audio, video, and presentation slides, a recording function, and post-webinar data collection. The hosts and moderators were in control of audio and video; participants were not able to record or unmute during a session. Participants had no control function other than a “raise hand” option and the Q&A and chat functions allowing them to send questions or messages to the moderator. Technical assistance was provided by Todd Sizemore, College of Pharmacy, University of Kentucky. BBV2020 schedule: The BBV2020 seminar series ran from May 20 to September 2, 2020. Live seminars were held once per week from 12 to 1 PM Eastern Daylight Time (EDT) to accommodate researchers across a wide range of time zones: Central European Time (6:00 PM) to Pacific Daylight Time (9:00 AM). 
To accommodate colleagues with time constraints or those in incompatible time zones, such as Asia or Oceania, presentations were recorded (depending on speaker permission) and the seminar recording was made available for up to 48 h after the live seminar. The link to the recorded videos was distributed via the BBV2020 listserv set up through the University of Kentucky listserv service. Invited speakers: Efforts were made to invite speakers of different genders and from a variety of countries. Invited speakers included organizers of postponed in-person conferences as well as speakers from academia and industry at different career stages. A total of 16 speakers were invited: May 20, 2020: Elga De Vries, Amsterdam UMC, Amsterdam, The Netherlands; May 27, 2020: Richard Daneman, University of California San Diego, San Diego, CA, USA; June 3, 2020: Robyn Klein, Washington University, St. Louis, MO, USA; June 10, 2020: Ayal Ben-Zvi, The Hebrew University of Jerusalem, Israel; June 17, 2020: Britta Engelhardt, University of Bern, Bern, Switzerland; June 24, 2020: Margareta Hammarlund-Udenaes, Uppsala University, Uppsala, Sweden; July 1, 2020: Daniela Virgintino, University of Bari, Bari, Italy; July 8, 2020: Eric Shusta, University of Wisconsin, Madison, WI, USA; July 15, 2020: Matthew Campbell, Trinity College Dublin, Dublin, Ireland; July 22, 2020: Krzysztof Kucharz, University of Copenhagen, Copenhagen, Denmark; July 29, 2020: William Elmquist, University of Minnesota, Minneapolis, MN, USA; August 5, 2020: Edward Neuwelt, Oregon Health & Science University, Portland, OR, USA; August 12, 2020: Robert Thorne, Denali Therapeutics, San Francisco, CA, USA; August 19, 2020: Teresa Sanchez, Weill Cornell Medical College, New York City, NY, USA; August 26, 2020: Michael Taylor, University of Wisconsin, Madison, WI, USA; September 2, 2020: Joan Abbott, King’s College London, London, UK. Speakers were offered a training session by the organizers to cover key aspects of a Zoom-based webinar and to test video, audio, and slide presentation prior to the live seminar. Ten of the 16 speakers took the training session. Speakers were informed by the organizers that any data presented should be considered public because security on the Zoom platform could not be ensured. Invited speakers had the opportunity to give permission to have their seminar recorded or to opt out of the recording. If a speaker gave permission to record, the entire seminar was recorded, and recordings were made available to all registered participants in a non-downloadable format through the University of Kentucky Zoom cloud for up to 48 h after the live session. Out of 16 speakers, 15 gave permission to record the live seminar. Advertisement and participants: To reach scientists in the brain barriers community, the seminar was advertised by email to participants of previous in-person meetings and on social media platforms such as Facebook and LinkedIn via personal profiles. In addition, the introductory flyer was kindly distributed via email by the International Brain Barriers Society (IBBS) using their email distribution list. Interested scientists were asked to register by sending their name, affiliation, and career stage to a dedicated email address created for the organization and execution of BBV2020 ([email protected]). Once registered, participants were added to a BBV2020 listserv provided by the University of Kentucky and received weekly updates.
These weekly updates included a flyer with information about the upcoming speaker and the specific Zoom link for the seminar. Participants also received a follow-up email after each seminar with a Zoom link to the video recording. With the permission of the speaker of the week, registered viewers had access to recordings for up to 48 h after the live seminar. BBV2020 seminar presentation: Each seminar started with opening remarks by one of the organizers (Drs. Bauer, Hartz, or Kim), followed by an introduction of the speaker by the moderator. Moderators (Drs. Hudson, Nehra, Pizzo, Storck) rotated weekly, with one moderator assigned per seminar. After the seminar presentation, the moderator facilitated the question-and-answer session. Participants were encouraged to ask live questions by raising their digital hand; the moderator then unmuted the participant so that they could ask the speaker a question directly. In addition, participants had the opportunity to submit questions via the Zoom Q&A or chat function, and these written questions were read aloud by the moderator. The moderator concluded each seminar by thanking the speaker and the audience and by announcing the speaker for the following week. Map generation: World maps highlighting the countries represented at the various meetings were generated using MapChart (www.mapchart.net; license: https://creativecommons.org/licenses/by-sa/4.0/). Survey: Toward the end of the seminar series, a survey was generated with Qualtrics and administered through the University of Alabama Qualtrics system. A link to the survey was sent to all registered seminar participants by email. Questions covered general audience participation, career stage, and affective factors. Survey results are reported only in aggregate. Data collection, analysis, and statistics: Survey data were collected through the University of Alabama Qualtrics system; the survey was determined by the University of Alabama IRB to be “non-human subject research”. Data from the live webinars were collected by Zoom for each seminar and reported as unique viewers to avoid double-counting individuals who logged on twice. Data on video views were collected from the University of Kentucky Zoom Webinar cloud server and reported as total accessed views. Survey data from Qualtrics were analyzed with the IBM® SPSS® software platform (IBM, Armonk, NY, USA) using one-way ANOVAs and t-tests to identify significant trends [5]. Differences with p < 0.05 were considered significant.
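The unique-viewer counting and significance testing described above can also be reproduced outside SPSS. The sketch below is a minimal Python equivalent, not the authors' actual workflow; the file names, column names, and data layout are assumptions made for illustration, with pandas handling de-duplication and SciPy providing the one-way ANOVA and t-test.

```python
# Minimal sketch of the analysis steps described above, assuming a CSV of
# Zoom join events ("attendance.csv") and a CSV of survey responses
# ("survey.csv") with hypothetical column names.
import pandas as pd
from scipy import stats

# Unique viewers per seminar: drop repeat log-ins by the same email address.
attendance = pd.read_csv("attendance.csv")            # columns: week, email
unique_viewers = (attendance.drop_duplicates(subset=["week", "email"])
                            .groupby("week")["email"].count())

# Survey Likert items by career stage (1 = Strongly Agree ... 5 = Strongly Disagree).
survey = pd.read_csv("survey.csv")                     # columns: career_stage, likert_score
groups = [g["likert_score"].dropna() for _, g in survey.groupby("career_stage")]

f_stat, p_anova = stats.f_oneway(*groups)              # one-way ANOVA across career stages
print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.3f}")

# Two-group comparison, e.g. students versus more established researchers.
students = survey.loc[survey["career_stage"].isin(["Masters", "PhD"]), "likert_score"]
faculty = survey.loc[survey["career_stage"].isin(["Post-Doc", "Faculty"]), "likert_score"]
t_stat, p_t = stats.ttest_ind(students, faculty)
print(f"t-test: t = {t_stat:.3f}, p = {p_t:.3f} (p < 0.05 considered significant)")
```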
Results: We organized the virtual BBV2020 seminar series for the brain barriers research field to fill the void that the COVID-19 pandemic left through canceled and postponed meetings in 2020. The seminar series was held from May to September 2020, every Wednesday from 12:00 to 1:00 PM Eastern Daylight Time (EDT). Seminars were hosted on a Zoom Webinar platform and featured a 45-min presentation from an invited speaker followed by questions and answers from the audience. In the following sections, we summarize data collected by the Zoom software, from the listserv, and from the participant survey, and we compare these data with data from in-person meetings in past years.
BBV2020 attendance: On May 14, 2020, we sent out the first advertisement for the BBV2020 seminar series via email, using a listserv with email addresses of past conference attendees, and via postings on LinkedIn, Facebook, and Twitter. By the start of the first seminar on May 20, 2020, more than 720 scientists had registered. By the end of the 16-week BBV2020 seminar series, a total of 1302 registrants had signed up on our listserv to receive the seminar link, speaker updates, and video recording links (Fig. 1A). We obtained data from two past in-person meetings from the respective conference chairs: (1) the brain barriers meeting in 2018 (BBB 2018) and (2) the Cerebral Vascular Biology meeting in 2019 (CVB 2019). Fig. 1 Attendance trends for brain barrier research conferences. A With 1302 individuals signing up and added to the listserv, BBV2020 attracted more registrants than BBB 2018 and CVB 2019. B Number of unique viewers each week at the live Zoom Webinar sessions of BBV2020. C Number of views (clicks) of the video recordings, when available, each week for BBV2020. Black line indicates the best linear fit; gray line indicates the average (mean) across all 16 weeks. Strikingly, the total number of registrants for BBV2020 was more than 4-fold higher than the total number of registered participants for the BBB 2018 and CVB 2019 meetings (Fig. 1A). The large number of registrants is likely due to free registration, the absence of travel expenses, the worldwide reach, and the fact that virtual meetings come with no obligations for attendees. In contrast, the number of attendees at the in-person meetings (BBB 2018 and CVB 2019) was limited and determined by the size of the conference site. The first live BBV2020 seminar was attended by 419 participants. As the seminar series progressed, the number of participants in the live seminars gradually decreased; the average was 220 and the minimum was 108 (Fig. 1B). A similar trend was observed in the total numbers of recording views: the first seminar recording had a maximum of 944 views, and on average videos had 422 views (Fig. 1C). The trendline in Fig. 1C shows a drop in recording views over the course of the seminar series. These trends held regardless of academic rank, as graduate students, postdoctoral fellows, and professors all showed a similar downward trend in live attendance (Additional file 1: Fig. S1A–C). Note that we were unable to capture recording data for week 2 due to a recording error and for week 13 due to the speaker's request not to record. Together, the high number of registrants suggests that the virtual BBV2020 seminar series sparked broad interest. Attendance of the live seminars and the numbers of views of the recorded videos decreased over time, which may be an indicator of “virtual meeting fatigue” or of attendees being on summer vacation.
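The weekly trendline and mean shown in Fig. 1B, C can be recomputed from the raw weekly counts with a simple least-squares fit; the sketch below illustrates this with NumPy. The weekly numbers here are placeholders, not the actual BBV2020 data.

```python
# Minimal sketch: linear trend and mean of weekly live attendance,
# mirroring the trendline and average lines in Fig. 1B/C.
# The attendance values below are placeholders, not the real counts.
import numpy as np

weeks = np.arange(1, 17)                       # 16 weekly seminars
viewers = np.array([419, 350, 310, 290, 260, 250, 240, 230,
                    220, 210, 200, 190, 170, 150, 130, 108])

slope, intercept = np.polyfit(weeks, viewers, 1)   # best linear fit (black line)
mean_viewers = viewers.mean()                      # series average (gray line)

print(f"Trend: {slope:.1f} viewers per week, mean = {mean_viewers:.0f}")
```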
BBV2020 survey analysis: By the end of the seminar series in September, every BBV2020 registrant received an invitation to participate in an online survey created by the organizers on August 23, 2020. The survey was designed for registrants to provide feedback and consisted of 10 questions: (1) a demographic question to capture career level and career type; (2) a single-select multiple-choice question to capture attendance of past in-person brain barriers meetings; (3) a single-select multiple-choice question on how many recorded seminars were viewed; (4) a Likert-scale question regarding the enrichment BBV2020 provided for the community during the COVID-19 pandemic; (5) a dichotomous question asking whether the registrant would still participate in BBV2020 seminars if they were continued; (6) a single-select multiple-choice question on attendance frequency; (7) a single-select multiple-choice question on how many times the registrant had previously attended a brain barriers meeting in person; (8) a matrix question to capture affective factors; (9) a matrix question to capture learning and teaching; and (10) a matrix question to capture the combined validity and practicality of the BBV2020 series. In addition, registrants responding to the survey had the opportunity to leave general comments for the organizers in a separate text box. Overall, 23% of registrants (299 respondents in total) responded to the survey. Over 89% of survey respondents agreed strongly that “Attending BBV2020 was a positive experience” and 63% of respondents felt that “BBV2020 is appropriate for my research”. Generally, respondents tended to agree with the statement “BBV2020 has enriched the connection with the Brain Barriers community in the gap that COVID-19 has created” (n = 298 respondents to this question out of 299 total respondents; M = 1.44, SD = 0.660; 1 = Strongly Agree, 2 = Agree, 3 = Neutral, 4 = Disagree, 5 = Strongly Disagree; Fig. 2A). However, when posed with the statements “In the future, I would prefer to attend a virtual conference over an in-person conference” and “The virtual experience is not as effective compared to the in-person conferences”, responses were neutral on average (M = 3.24; Fig. 2B), indicating that participants did not view virtual conferences unfavorably. A difference emerged when we compared BBV2020 participants' preferences for in-person versus virtual meeting formats by career level: a one-way ANOVA showed a significant difference between groups (F(9, 286) = 2.607, p = 0.007). Tukey's HSD post hoc test indicated that Master's students were more likely than Faculty/Professors to disagree with the statement “I would feel more comfortable attending Brain Barrier conferences/seminars in person, but was glad to attend virtually due to COVID19.” A strong trend emerged when Master's and Ph.D. students were grouped together and compared to Post-Docs and Faculty/Professors in an independent-samples t-test: graduate students in general were more favorable toward a virtual conference than colleagues more established in their careers (t(225) = -1.972, p = 0.061). The lack of statistical significance is in part due to the low sample size (n = 74) for graduate students.
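For readers who want to reproduce this kind of post hoc comparison outside SPSS, the sketch below shows an equivalent Tukey HSD step in Python using statsmodels. The career-stage labels and Likert column are hypothetical stand-ins for the survey export, not the authors' actual data files.

```python
# Minimal sketch of the post hoc comparison described above, assuming a
# survey export with hypothetical columns "career_stage" and "likert_score".
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

survey = pd.read_csv("survey.csv")                 # one row per respondent
tukey = pairwise_tukeyhsd(endog=survey["likert_score"],
                          groups=survey["career_stage"],
                          alpha=0.05)
print(tukey.summary())                             # pairwise group differences, e.g.
                                                   # Master's students vs. Faculty/Professors
```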
Fig. 2 Post-seminar survey results. A Volunteer respondent results when posed with the prompt “BBV2020 has enriched connection with the Brain Barriers community in the gap that COVID-19 has created”. B Volunteer respondent results when posed with the prompts “In the future, I would prefer to attend a virtual conference over an in-person conference” (blue) and “The virtual experience is not as effective compared to the in-person conferences” (red). Overall, the descriptive data revealed that many participants felt BBV2020 was a positive experience. In fact, 99.3% of survey respondents somewhat or strongly agreed that their experience was positive, with only 4.7% saying they faced technical difficulties. Additionally, most respondents agreed that virtual conferences have an important role in research (93.2%) and are more accessible than in-person conferences (90.2%). Finally, more than half of respondents (56.9%) noted that virtual conferences can do things in-person conferences cannot. Nevertheless, many BBV2020 participants continued to express a desire to return to conferencing in person, with 13.5% saying they found it hard to focus in the virtual format. Roughly one-third (33.9%) felt the virtual conference was not as effective as an in-person conference, and only about one-quarter (23.3%) said they would prefer virtual conferences over in-person conferences in the future, while 35.8% were neutral. Together, our findings suggest that while BBV2020 filled a need created by a global pandemic, enthusiasm for returning to in-person-only meetings remained relatively neutral, suggesting that virtual platforms may play a critical role in the dissemination of research and science in the future. Moreover, the data from this study show a trend in how graduate students and professors responded to questions about in-person meetings, indicating that career stage may influence how useful in-person versus virtual platforms are perceived to be. Global reach of BBV2020: During the seminar registration process, we collected attendees' affiliations and used this information to determine the global reach of BBV2020. Registrants of the BBV2020 seminar series came from 43 countries representing six of the seven continents (Fig. 3A, B). Compared to the in-person BBB 2018 and CVB 2019 meetings, more countries were represented at BBV2020 (Fig. 3A). Fig. 3 Global reach of BBV2020. A BBV2020 had more countries represented than BBB 2018 and CVB 2019 alone. B World map showing countries represented at BBV2020. Countries where registrants participated in BBV2020: red and pink. Countries not represented at BBV2020: light gray and dark gray. Countries where registrants attended BBV2020 and at least one of the in-person meetings (BBB 2018 or CVB 2019): red. Countries where registrants attended BBV2020 but neither BBB 2018 nor CVB 2019: pink. Countries with no participation in any of the brain barriers meetings examined in this study: light gray. Countries that participated in either BBB 2018 or CVB 2019 but not in BBV2020: dark gray.
We also compared the number of participants from each country who joined BBV2020 versus BBB 2018 and CVB 2019. The number of participants per country was also higher for BBV2020 than for both in-person meetings (Fig. 4A–F). These data show that the virtual nature of BBV2020 enabled researchers worldwide to participate and suggest that the outreach of a virtual event can extend beyond the field. Fig. 4 Global view of participation in BBV2020, BBB 2018, and CVB 2019. Representation of participants for (A) BBV2020, (B) BBB 2018, and (C) CVB 2019, ordered from top to bottom by greatest number of registrants. Number of registrants per country scaled by color for (D) BBV2020, (E) BBB 2018, and (F) CVB 2019. Light pink: 1–10 registrants; dark pink: 11–50 registrants; red: 51–100 registrants; dark red: > 100 registrants.
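The per-country counts and the color bins used in Fig. 4D–F are straightforward to derive from a registration list; the sketch below shows one way to do it with pandas. The file name and column names are assumptions, and the actual maps were made with MapChart rather than code.

```python
# Minimal sketch: count registrants per country and bin them into the
# legend categories used in Fig. 4 (1-10, 11-50, 51-100, >100).
# "registrants.csv" and its columns are hypothetical placeholders.
import pandas as pd

registrants = pd.read_csv("registrants.csv")           # columns: name, affiliation, country
per_country = registrants["country"].value_counts()

bins = [0, 10, 50, 100, float("inf")]
labels = ["1-10", "11-50", "51-100", ">100"]
color_bin = pd.cut(per_country, bins=bins, labels=labels)

print(per_country.head())   # countries with the most registrants
print(color_bin.head())     # corresponding map color category
```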
Career stages represented at BBV2020: We compared the self-reported career stage of BBV2020 participants with that of BBB 2018 and CVB 2019 participants. Based on the survey, more than 46% of all participants attended a brain barriers conference or seminar for the first time. We also found that the in-person meetings were attended by 42.4–48.9% academicians at the professor rank, while graduate students made up 22.9–25.8% and postdoctoral fellows 6.6–17.7% of the participants (Fig. 5). In contrast, at BBV2020, 19.6% of attendees were academicians at the professor rank, graduate students made up 30.5%, and postdoctoral fellows 16.1%. We also found that BBV2020 had larger participation from industry (18.5% vs. 5.6–6.6%) and undergraduate students (3.2% vs. 1%) compared to both in-person meetings. Together, our data suggest that a virtual seminar platform like BBV2020 reaches a broader range of scientific ranks and careers. Fig. 5 Proportions of various career stages participating in (A) BBV2020, (B) BBB 2018, and (C) CVB 2019. Colors correspond to career stage: Red (professor any rank), Green (graduate student any rank), Dark Blue (research faculty or researcher), Yellow (industry scientist), Orange (postdoctoral fellow), Purple (government/regulatory), Light Blue (clinician), Green (undergraduate student). Discussion: Seemingly overnight, the worldwide COVID-19 pandemic moved virtual meetings, seminars, and conferences from the sidelines to center stage. For scientists, connecting on a virtual platform to share data and discuss ideas became the new norm. For many professional societies and research networks, a virtual format was a viable option to offset the burden of the pandemic [6, 7]. We organized the BBV2020 seminar series for researchers in the brain barriers field to fill the vacuum that canceled in-person conferences left in 2020. Using data collected throughout the BBV2020 seminar series from Zoom Webinar live attendance, the number of views of posted videos, and a post-seminar survey, we captured the impact of BBV2020 on the brain barriers field. We compared these data with those from past in-person conferences and assessed the reach of the BBV2020 virtual seminar series relative to in-person meetings. While we acknowledge that comparing a seminar series to a conference has its limitations, this was the first stand-in virtual event for the brain barriers field and may provide insight for future planning. In the following sections we discuss our findings in the context of the existing literature and provide a future perspective on how a virtual platform could impact the brain barriers field beyond the pandemic. Attendance: scientists around the globe Due to the pandemic, the idea and subsequent organization of BBV2020 occurred in a relatively short time frame of about 4 weeks. This is in stark contrast to in-person meetings, which are often scheduled and organized months or years ahead of the actual event. Advertisement of BBV2020 encompassed personal email, postings on social media, and the International Brain Barriers Society email distribution list. Continuous advertisement resulted in the registration of over 1300 scientists. The total registrant number cannot be directly compared to that of in-person meetings, since those are more complex in nature, have maximum capacity limits, and involve other factors such as travel expenses and time commitment. Similar to other meetings that transitioned to online formats during the pandemic, BBV2020 had a large increase in registrants compared to previous in-person meetings (~7-fold and ~4.5-fold higher than in 2018 and 2019, respectively; Fig. 1) [8]. Given that previous brain barriers meetings usually have a few hundred participants, the high number of registrants could indicate interest in the field beyond traditional brain barrier research groups.
In contrast, these high registrant numbers could also be due, in part, to BBV2020 being a no-cost event, or could be explained by pandemic-related cancellations leaving participants with relatively event-free calendars. While there were more than 1300 registrants for BBV2020, no more than 419 ever attended a live webinar. Additionally, 323 people registered for BBV2020 but never attended a live webinar; they may, however, have participated by watching the video recordings after the live webinar. A downward trend in attendance at the live seminars was observed throughout the 16-week series (Fig. 1B). The recorded seminars drew considerable interest initially, with a maximum of 944 views in week 4 of the series, but later in the series video views dropped to about 200 per seminar (Fig. 1C). Reasons for these downward trends remain speculative, but they could include the summer break, the selected seminar time making it difficult for researchers in certain time zones to attend live, Zoom fatigue, or increasing responsibilities in re-opened research laboratories or at home. The downward trend we observed contrasts with a similar virtual seminar series in another field, where attendance remained relatively flat over the course of the summer [4]. Given that average attendance of BBV2020 was 220 per seminar and typical attendance for in-person conferences in the brain barriers field ranges between 150 and 300 participants (caveat: one of the in-person conferences has a strict 182-person maximum), we conclude that a core group of participants attended most sessions.
Survey: feedback from participants A survey conducted after the seminar series revealed that most participants felt BBV2020 enhanced their connection to the brain barriers scientific community during the COVID-19 pandemic (Fig. 2A). Differences emerged in how graduate students and professors viewed BBV2020: graduate students in general were more favorable toward the virtual format than faculty. This may be partially explained by a lack of funding for graduate student travel or by political “red tape” (e.g., visas) for some researchers [9]. Additionally, students may adapt more easily and rapidly to new technology than senior colleagues [10]. Clearly, another survey would be needed to determine whether a virtual component outside of a pandemic would be viewed positively by researchers in the brain barriers field. Virtual seminars: a powerful opportunity One major advantage of a virtual seminar series such as BBV2020 is the lack of a travel burden. Removing the financial and time commitments associated with in-person meetings has been shown to increase the diversity of attendees [3, 9, 11]. Based on those findings, we anticipated that BBV2020 would have representation from more countries than past in-person meetings. As shown in Fig. 3, registrants from 43 different countries were represented at BBV2020, substantially more than at BBB 2018 (23 countries) or CVB 2019 (25 countries; Fig. 3). In particular, there was high participation from countries in Southeast Asia, the Middle East, Africa, and South America that had not been represented at the in-person conferences held in 2018 and 2019 (Figs. 3B, 4).
Unsurprisingly, however, given the countries represented at previous in-person meetings and the number of registrants for BBV2020, there was generally greater participation from each represented country than at the 2018 BBB meeting and CVB 2019 [3, 8]. Due to the time of the BBV2020 live sessions (noon to 1 pm Eastern US Time), the majority of participants were from North America, South America, and Europe (Fig. 4). While time zone challenges are common among virtual events, switching between time zones or holding a “flipped” session (flipping evening sessions from Europe to the USA and vice versa) could increase participation from Asia and Oceania [12]. However, offering a virtual series that constantly switches the seminar time may be difficult to manage and could be confusing for participants. Another strength of virtual seminars is that they are more accessible and inclusive, especially for junior researchers in the field. The in-person meetings in 2018 and 2019 were dominated by academic faculty (at any professor rank) and graduate students (Fig. 5). In contrast, BBV2020 saw a large reduction in the percentage of participating faculty (over 40% in person to less than 20% for BBV2020) and a larger proportion of graduate students (22–25% in person to over 30% for BBV2020), industry scientists (about 6% in person to 18.5% for BBV2020), and governmental/regulatory personnel who had signed up (Fig. 5). In addition, a weekly virtual seminar series allows scientists to pick and choose the sessions they are interested in, a choice that could influence the observed trends shown in Fig. 1. Our data further support the trends observed across a variety of disciplines that have shifted their conferences from in-person to online events, revealing that virtual conferences are more flexible, and more inclusive and accessible worldwide, especially for early-career scientists [9]. Virtual seminars also provide an opportunity for the organizers to invite prominent speakers in a field on a relatively small budget. In general, virtual conferences are known to enhance accessibility for busy professionals. A recently published Nature poll indicates that 74% of scientists want virtual meetings to stay after the pandemic (925 poll responses) [13]. Easier accessibility, a lower carbon footprint, and lower costs were the driving factors chosen in favor of virtual meetings in this poll. Moreover, virtual seminars provide an easy and affordable opportunity for researchers with disabilities, researchers with high teaching loads, or scientists with parenting and other responsibilities to stay connected to their respective research community. During the COVID-19 pandemic, virtual seminars took center stage and raised the standard for accessibility, interactions, inclusivity, and equity in science [9]. Virtual events have changed how we think about meetings, and they constitute a still underestimated but powerful tool to share science and connect with each other. By rethinking how meetings are held, various fields have seen different degrees of participation depending on the availability of talks or content versus live sessions. These virtual platforms have the potential to create a “connectivity cascade”, reaching more people than traditional meetings that are confined to a single geographical location and time [14]. BBV2020 had an impressive worldwide reach; however, there was still a notable deficit in participants from underdeveloped countries, especially in Africa.
Similar discrepancies have been noted in the context of expanding telehealth to African countries, which remains challenging due to monopolization of telecommunication infrastructure, political will, and access to adequate broadband [15]. Despite the advantages of BBV2020, we recognize that limitations remain, such as the one-way delivery of material to participants. Critical face-to-face interactions that allow junior scientists to have meaningful two-way communication with other participants were notably lacking in BBV2020, as has been seen in other virtual conferences as well [16]. The future of such virtual seminars, meetings, or conferences may lie in their inclusion as part of in-person meetings, according to a recently published “best practice” guideline for virtual meetings [17]. For example, a virtual component could be included in plenary sessions, workshops, small group discussions, poster sessions, and/or social events of in-person conferences. BBV2020 represented only one aspect (a seminar held in the form of a webinar) of these common meeting events, but future events could certainly enhance the virtual experience. Regardless, virtual components could be exploited to increase diversity in the brain barriers research field, which is consistent with the NINDS strategy for enhancing the diversity of neuroscience researchers [18].
Conclusion and future perspectives: Overall, the Virtual BBV2020 Seminar Series created during the COVID-19 pandemic was positively received by the brain barriers research community. Key benefits of virtual seminars like BBV2020 are that they are convenient, affordable, and eco-friendly, reach a broad audience, and are a good tool to increase diversity and equity. Thus, incorporating a virtual component during an in-person meeting, or offering a virtual counterpart in addition to in-person meetings, could help increase the outreach of a small research field like the brain barriers field. Nevertheless, limitations remain, such as a lack of interaction between speakers and participants, which is particularly critical for junior researchers. And although BBV2020 expanded the reach beyond past in-person meetings, challenges in reaching scientists in underdeveloped countries remain. In summary, BBV2020 was created in a time of crisis but has the potential to thrive in the post-pandemic world.
Embracing virtual scientific events may allow us to advance the brain barriers research field and have an impact on how we exchange science in the future. Supplementary Information: Additional file 1: Figure S1. Attendance Trends for BBV2020 by Academic Rank. Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020 separated by A) Graduate Student, B) Postdoctoral Fellow, and C) Professor (any rank).
Background: Scientific conferences are vital communication events for scientists in academia, industry, and government agencies. In the brain barriers research field, several international conferences exist that allow researchers to present data, share knowledge, and discuss novel ideas and concepts. These meetings are critical platforms for researchers to connect and exchange breakthrough findings on a regular basis. Due to the worldwide COVID-19 pandemic, all in-person meetings were canceled in 2020. In response, we launched the Brain Barriers Virtual 2020 (BBV2020) seminar series, the first stand-in virtual event for the brain barriers field, to offer scientists a virtual platform to present their work. Here we report the aggregate attendance information on two in-person meetings compared with BBV2020 and comment on the utility of the virtual platform. Methods: The BBV2020 seminar series was hosted on a Zoom webinar platform and was free of cost for participants. Using registration- and Zoom-based data from the BBV2020 virtual seminar series and survey data collected from BBV2020 participants, we analyzed attendance trends, global reach, participation based on career stage, and engagement of BBV2020. We compared these data with those from two previous in-person conferences, a BBB meeting held in 2018 and CVB 2019. Results: We found that BBV2020 seminar participation steadily decreased over the course of the series. In contrast, live participation was consistently above 100 attendees and recording views were above 200 views per seminar. We also found that participants valued BBV2020 as a supplement during the COVID-19 pandemic in 2020. Based on one post-BBV2020 survey, the majority of participants indicated that they would prefer in-person meetings but would welcome a virtual component to future in-person meetings. Compared to in-person meetings, BBV2020 enabled participation from a broad range of career stages and was attended by scientists in academic, industry, and government agencies from a wide range of countries worldwide. Conclusions: Our findings suggest that a virtual event such as the BBV2020 seminar series provides easy access to science for researchers across all career stages around the globe. However, we recognize that limitations exist. Regardless, such a virtual event could be a valuable tool for the brain barriers community to reach and engage scientists worldwide to further grow the brain barriers research field in the future.
Introduction: On March 11, 2020, the World Health Organization (WHO) declared the coronavirus disease (COVID-19) outbreak a global pandemic [1]. Shortly after, countries worldwide issued stay-at-home orders and lockdowns: the COVID-19 pandemic had forced the world to come to an abrupt halt [2]. With global air travel restrictions and travel bans in place, enterprises including academic institutions, industry and government agencies required individuals to call off their business travel. As a result, scheduled scientific conferences were canceled throughout 2020 [3, 4]. These cancelations created a sudden void in scientific communication between researchers worldwide and forced research communities to be creative and adapt. One possibility to continue scientific communication and interaction was by switching conferences and seminars to virtual online events using novel digital platforms enabling researchers to connect and share science [3, 4]. As a consequence, major conferences and meetings in the brain barriers research field, including the “Barriers of the CNS Gordon Research Conference” (GRC 2020) and the international symposium on “Signal Transduction at the Blood–Brain Barriers” in Bari, Italy were first postponed until 2021 and due to the ongoing pandemic subsequently moved to 2022. The “Cerebral Vascular Biology” (CVB) conference in Uppsala, Sweden planned for 2021 was postponed until 2023. Postponing these meetings left organizers and conference chairs in a difficult position and researchers without a platform to present and share their data. Thus, these postponements created a critical need for the field to stay connected and an opportunity to establish a virtual event that could supplement the postponed conferences in 2020. The Brain Barriers Virtual 2020 Seminar Series (BBV2020) addressed this need by providing a forum for brain barrier scientists worldwide to come together online during the COVID-19 crisis in 2020. BBV2020 seminars were held weekly from May 20, 2020 to September 2, 2020 in a 1-h format featuring a live session with an invited speaker followed by time for questions and answers from the community. The seminar series was hosted on a Zoom webinar platform and was free for anyone to attend. BBV2020 served as a forum for invited speakers to present their work to the brain barriers community and enabled this community to discuss brain barriers science from the safety of their home office. Over 1300 scientists around the world were registered, each seminar was viewed live by 100–400 participants, and the video-recordings received 150–900 views each week. Participants were from academia, industry and government agencies at all career stages from around the globe. In this report, we share participation data that give insight into the impact this virtual seminar series had on the brain barriers field during the COVID-19 pandemic. We compare these data with existing data from past in-person brain barriers meetings and draw conclusions from these data on the potential value of virtual meetings for the brain barriers research field in the future. While the organization of virtual events during the pandemic may seem self-evident, reflection on the impact carries utility for events in the future. Conclusion and future perspectives: Additional file 1: Figure S1. Attendance Trends for BBV2020 by Academic Rank. Number of unique viewers weekly at the live Zoom Webinar sessions of BBV2020 separated by A) Graduate Student, B) Postdoctoral Fellow, and C) Professor (any rank). 
19,478
438
[ 3011, 127, 169, 127, 479, 186, 170, 23, 66, 132, 658, 1106, 641, 382, 524, 141, 1040, 192 ]
22
[ "bbv2020", "virtual", "seminar", "person", "meetings", "registrants", "participants", "2018", "2019", "countries" ]
[ "scientists globe pandemic", "brain barrier conferences", "2021 ongoing pandemic", "pandemic virtual seminars", "conferences brain barriers" ]
null
[CONTENT] Blood–brain barrier | BBV2020 | Brain barriers virtual | Virtual seminar series | COVID-19 | Education [SUMMARY]
null
[CONTENT] Blood–brain barrier | BBV2020 | Brain barriers virtual | Virtual seminar series | COVID-19 | Education [SUMMARY]
[CONTENT] Blood–brain barrier | BBV2020 | Brain barriers virtual | Virtual seminar series | COVID-19 | Education [SUMMARY]
[CONTENT] Blood–brain barrier | BBV2020 | Brain barriers virtual | Virtual seminar series | COVID-19 | Education [SUMMARY]
[CONTENT] Blood–brain barrier | BBV2020 | Brain barriers virtual | Virtual seminar series | COVID-19 | Education [SUMMARY]
[CONTENT] COVID-19 | Central Nervous System | Congresses as Topic | Humans | SARS-CoV-2 | Surveys and Questionnaires | Videoconferencing [SUMMARY]
null
[CONTENT] COVID-19 | Central Nervous System | Congresses as Topic | Humans | SARS-CoV-2 | Surveys and Questionnaires | Videoconferencing [SUMMARY]
[CONTENT] COVID-19 | Central Nervous System | Congresses as Topic | Humans | SARS-CoV-2 | Surveys and Questionnaires | Videoconferencing [SUMMARY]
[CONTENT] COVID-19 | Central Nervous System | Congresses as Topic | Humans | SARS-CoV-2 | Surveys and Questionnaires | Videoconferencing [SUMMARY]
[CONTENT] COVID-19 | Central Nervous System | Congresses as Topic | Humans | SARS-CoV-2 | Surveys and Questionnaires | Videoconferencing [SUMMARY]
[CONTENT] scientists globe pandemic | brain barrier conferences | 2021 ongoing pandemic | pandemic virtual seminars | conferences brain barriers [SUMMARY]
null
[CONTENT] scientists globe pandemic | brain barrier conferences | 2021 ongoing pandemic | pandemic virtual seminars | conferences brain barriers [SUMMARY]
[CONTENT] scientists globe pandemic | brain barrier conferences | 2021 ongoing pandemic | pandemic virtual seminars | conferences brain barriers [SUMMARY]
[CONTENT] scientists globe pandemic | brain barrier conferences | 2021 ongoing pandemic | pandemic virtual seminars | conferences brain barriers [SUMMARY]
[CONTENT] scientists globe pandemic | brain barrier conferences | 2021 ongoing pandemic | pandemic virtual seminars | conferences brain barriers [SUMMARY]
[CONTENT] bbv2020 | virtual | seminar | person | meetings | registrants | participants | 2018 | 2019 | countries [SUMMARY]
null
[CONTENT] bbv2020 | virtual | seminar | person | meetings | registrants | participants | 2018 | 2019 | countries [SUMMARY]
[CONTENT] bbv2020 | virtual | seminar | person | meetings | registrants | participants | 2018 | 2019 | countries [SUMMARY]
[CONTENT] bbv2020 | virtual | seminar | person | meetings | registrants | participants | 2018 | 2019 | countries [SUMMARY]
[CONTENT] bbv2020 | virtual | seminar | person | meetings | registrants | participants | 2018 | 2019 | countries [SUMMARY]
[CONTENT] 2020 | barriers | brain | brain barriers | pandemic | virtual | covid | covid 19 | data | field [SUMMARY]
null
[CONTENT] bbv2020 | registrants | 2018 cvb 2019 | 2018 cvb | 2018 | cvb 2019 | 2019 | bbb 2018 cvb | bbb 2018 cvb 2019 | bbb 2018 [SUMMARY]
[CONTENT] virtual | field | like | remain | increase | bbv2020 | research | research field | barriers research | brain barriers research [SUMMARY]
[CONTENT] bbv2020 | virtual | university | seminar | person | meetings | registrants | 2020 | participants | 2018 [SUMMARY]
[CONTENT] bbv2020 | virtual | university | seminar | person | meetings | registrants | 2020 | participants | 2018 [SUMMARY]
[CONTENT] ||| ||| ||| COVID-19 | 2020 ||| the Brain Barriers Virtual | 2020 | first ||| two | BBV2020 [SUMMARY]
null
[CONTENT] ||| 100 | 200 ||| BBV2020 | COVID-19 | 2020 ||| one ||| [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| COVID-19 | 2020 ||| the Brain Barriers Virtual | 2020 | first ||| two | BBV2020 ||| ||| Zoom | BBV2020 | BBV2020 | BBV2020 ||| two | BBB | 2018 | 2019 ||| ||| ||| 100 | 200 ||| BBV2020 | COVID-19 | 2020 ||| one ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| COVID-19 | 2020 ||| the Brain Barriers Virtual | 2020 | first ||| two | BBV2020 ||| ||| Zoom | BBV2020 | BBV2020 | BBV2020 ||| two | BBB | 2018 | 2019 ||| ||| ||| 100 | 200 ||| BBV2020 | COVID-19 | 2020 ||| one ||| ||| ||| ||| [SUMMARY]
Mechanism of Yinqin Oral Liquid in the Treatment of Chronic Pharyngitis Based on Network Pharmacology.
34707348
Yinqin oral liquid (YOL) has a curative effect on upper respiratory tract infections, especially chronic pharyngitis (CP). Because traditional Chinese herbal formulae are complex, the pharmacological mechanism of YOL remains unclear. The aim of this work was to explore the active ingredients and mechanisms of YOL against CP.
BACKGROUND
First, the profile of putative targets of YOL was predicted based on the structural and functional similarities of all available YOL components, obtained from the DrugBank database, to known drugs using TCMSP. The chemical constituents and targets of honeysuckle, scutellaria, bupleurum and cicada were retrieved from TCMSP; the CTD, GeneCards and other databases were used to query CP-related genes; and gene and protein identifiers were standardized using the UniProt database. Thereafter, the interaction network between the compounds and the overlapping genes was constructed, visualized, and analyzed with Cytoscape software. Finally, pathway enrichment analysis of the overlapping genes was carried out on the Database for Annotation, Visualization, and Integrated Discovery (DAVID) platform.
METHODS
The compound-target network contained 55 compounds and 113 corresponding targets, and the key targets included PTGS1, ESR2, GSK3β, NCOA2, and ESR1. The PPI core network contained 30 proteins, including VEGFA, IL6, ESR1, RELA and HIF1A. A total of 148 GO terms were obtained (p<0.05), comprising 102 terms for biological process (BP), 34 for molecular function (MF), and 12 for cellular component (CC). A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p<0.05), involving the cancer, PI3K-AKT, hepatitis, proteoglycans, p53, and HIF-1 signaling pathways.
RESULTS
These results collectively indicate that YOL (including its main ingredients luteolin and baicalein) is a highly effective anti-inflammatory therapeutic agent acting through the NF-kB pathway.
CONCLUSION
[ "Administration, Oral", "Animals", "Anti-Inflammatory Agents", "Chronic Disease", "Databases, Factual", "Drugs, Chinese Herbal", "Flavanones", "Luteolin", "Mice", "NF-kappa B", "Network Pharmacology", "Pharyngitis", "RAW 264.7 Cells" ]
8542895
Introduction
Chronic pharyngitis (CP) is a very common disease associated with chronic inflammation involving the pharyngeal mucosa, submucosa and lymphatic tissues. Clinically, CP is mainly manifested as pharyngeal discomfort (foreign body sensation, burning sensation, irritation, etc.), with occasional pharyngeal itch, cough, etc., and it belongs to the category of “slow throat arthralgia” in traditional Chinese medicine (TCM).1 CP has a high incidence, accounting for 10–12% of all pharyngeal diseases, and it has a long disease course, recurs frequently and is difficult to cure. At present, Western medicine treats pharyngitis with antibiotics supplemented by hormone preparations such as dexamethasone and by antiviral drugs. These treatment methods have drawbacks such as a narrow therapeutic spectrum, recurrence, poor tolerance and major toxic side effects. Hence, developing more effective therapeutic strategies against CP and reducing the side effects of the current therapeutic options are of great clinical importance. TCM is a major component of complementary and alternative medicine. Owing to its excellent clinical efficacy, TCM is a research hotspot worldwide. The dialectical theory of TCM is used to treat acute and chronic pharyngitis and has unique advantages such as minor toxic side effects, an obvious curative effect and treatment of both symptoms and root causes.2–4 Understanding the pharmacological effects of TCM herbal formulae plays an important role in their appropriate application. However, the complex nature of herbal formulae has impeded this understanding. Numerous chemical ingredients acting on multiple potential targets are present in an herbal formula, and the effects produced by a whole formula cannot be adequately explained by considering every single ingredient individually. Yinqin oral liquid (YOL) is composed of extracts of honeysuckle, scutellaria, bupleurum and cicada. YOL has the effects of dispelling wind, releasing the muscle layer and clearing heat, and it is mainly used for colds, fever, aversion to cold, pharyngeal pain and other upper respiratory tract infections caused by wind-heat and wind-cold. Long-term clinical observation has proven its effect in relieving pharyngeal pain and treating hand-foot-mouth disease. YOL contains numerous chemical compounds that regulate diverse targets, which makes it difficult to precisely determine the pharmacological mechanisms involved in its therapeutic actions. In addition, there are several challenges in clarifying the relationships between the herbs and diseases.5,6 Network pharmacology is based on the theory of systems biology. As a new discipline, it selects specific signaling nodes for the design of multi-target drug molecules through systems-biology network analysis. The establishment of network pharmacology and bioinformatics makes it possible to study both the active components and the potential gene targets of TCM. This study predicted and analyzed the active components and targets of YOL using network pharmacology, to explore the rationality of its formula and the scientific basis of its use in treating CP, and to further explore the pharmacological mechanisms of YOL against CP.
null
null
null
null
Conclusion
This study used a network pharmacology platform to probe the treatment of CP by YOL through multi-target analysis. The results showed that YOL could treat CP by acting on related targets, which is consistent with the reported literature. In addition, we verified that YOL and its two monomer compounds, luteolin and baicalein, inhibited activation of the NF-kB, Stat3 and PI3K-Akt pathways. Although the potential advantages of the “network target, multi-component” strategy of network pharmacology are obvious, the content of TCM ingredients is usually ignored in network pharmacology research on Chinese medicine, yet the influence of ingredient content and concentration on efficacy cannot be ignored. The predicted targets that have not yet been discussed may provide clues for further research on the mechanism of YOL.
[ "Materials and Methods", "Chemicals and Reagents", "Cell Culture", "Collection of Drug Molecular Information and Screening of Active Ingredients", "Known Therapeutic Targets of Chronic Pharyngitis (CP)", "Network Construction and Analysis", "Protein-Protein Interaction (PPI) Data", "Pathway Enrichment Analysis", "Statistical Analysis", "Results", "Screening and Analysis of Active Compounds", "Network Analysis of “Active Component-Target” Interaction in YOL", "The Construction of Key Protein Network of YOL and CP", "Pharmacological Mechanisms of YOL Acting on CP", "Experimental Verification of Predicted Results", "Discussion", "Conclusion" ]
[ "Chemicals and Reagents Yinqin Oral Liquid (YOL) was provided by su zhou si yuan natural products research and development Co. Ltd. YOL is composed of honeysuckle, scutellaria, bupleurum and cicada, decompressed and concentrated to 2 g/mL through boiling and alcohol sinking, and stored at 4°C for later use. Primary antibodies: COX-2 and iNOS antibodies were purchased from Abcam (Cambridge, UK); glyceraldehyde-3-phosphate dehydrogenase (GAPDH) antibody was obtained from Millipore (Billerica, MA, USA); PI3K, p-AKT, AKT, p-Stats, Stat3 and p-p65 (S536) antibodies were purchased from Cell Signaling Technology (Danvers, MA, USA); and p65 antibodies was obtained from Santa Cruz (CA, USA). The horseradish peroxidase (HRP)-conjugated sheep anti-mouse or anti-rabbit secondary antibodies were purchased from Thermo Fisher (Waltham, MA, USA). The proteins were visualized using an ECL detection kit (Thermo Fisher).\nYinqin Oral Liquid (YOL) was provided by su zhou si yuan natural products research and development Co. Ltd. YOL is composed of honeysuckle, scutellaria, bupleurum and cicada, decompressed and concentrated to 2 g/mL through boiling and alcohol sinking, and stored at 4°C for later use. Primary antibodies: COX-2 and iNOS antibodies were purchased from Abcam (Cambridge, UK); glyceraldehyde-3-phosphate dehydrogenase (GAPDH) antibody was obtained from Millipore (Billerica, MA, USA); PI3K, p-AKT, AKT, p-Stats, Stat3 and p-p65 (S536) antibodies were purchased from Cell Signaling Technology (Danvers, MA, USA); and p65 antibodies was obtained from Santa Cruz (CA, USA). The horseradish peroxidase (HRP)-conjugated sheep anti-mouse or anti-rabbit secondary antibodies were purchased from Thermo Fisher (Waltham, MA, USA). The proteins were visualized using an ECL detection kit (Thermo Fisher).\nCell Culture RAW264.7 cells, a mouse macrophage cell line (Cell Bank of Chinese Academy of Sciences, Shanghai, China) were cultured using DMEM supplemented with FBS (10%) and penicillin-streptomycin (1%). The cells were cultured at 37°C in a humidified environment of 5% carbon dioxide. The medium was changed every other day and the cells were passaged at a dilution of 1:3.\nRAW264.7 cells, a mouse macrophage cell line (Cell Bank of Chinese Academy of Sciences, Shanghai, China) were cultured using DMEM supplemented with FBS (10%) and penicillin-streptomycin (1%). The cells were cultured at 37°C in a humidified environment of 5% carbon dioxide. The medium was changed every other day and the cells were passaged at a dilution of 1:3.\nCollection of Drug Molecular Information and Screening of Active Ingredients In the TCMSP database (https://tcmsp-e.com/),7 the “Herb name” was selected to retrieve the molecular ADME parameter information of honeysuckle, scutellaria, bupleurum and cicada slough, and then the screen conformed to the oral bioavailability (OB) ADME parameter information of honey (Drug-likeness, DL) of ≥0.18 or more active ingredients (Table 1). YOL ingredients were supplemented through literature review. 
Collection of Drug Molecular Information and Screening of Active Ingredients: In the TCMSP database (https://tcmsp-e.com/),7 the “Herb name” field was used to retrieve the molecular ADME parameter information for honeysuckle, scutellaria, bupleurum and cicada slough, and compounds meeting the thresholds of oral bioavailability (OB) ≥30% and drug-likeness (DL) ≥0.18 were retained as active ingredients (Table 1). YOL ingredients were supplemented through a literature review. Potential targets related to the active ingredients were searched through the TCMSP database and Cytoscape 3.6.1 software.
Table 1. 55 active compounds predicted by OB and DL among the 4 herbs in YOL (Mol ID | Molecule name | MW | OB (%) | DL)
Honeysuckle:
MOL001494 | Mandenol | 308.56 | 42 | 0.19
MOL001495 | Ethyl linolenate | 306.54 | 46.1 | 0.2
MOL002914 | Eriodyctiol (flavanone) | 288.27 | 41.35 | 0.24
MOL003006 | (-)-(3R,8S,9R,9aS,10aS)-9-ethenyl-8-(beta-D-glucopyranosyloxy)-2,3,9,9a,10,10a-hexahydro-5-oxo-5H,8H-pyrano[4,3-d]oxazolo[3,2-a]pyridine-3-carboxylic acid_qt | 281.29 | 87.47 | 0.23
MOL003014 | Secologanic dibutylacetal_qt | 384.57 | 53.65 | 0.29
MOL002773 | Beta-carotene | 536.96 | 37.18 | 0.58
MOL003036 | ZINC03978781 | 412.77 | 43.83 | 0.76
MOL003044 | Chryseriol | 300.28 | 35.85 | 0.27
MOL003095 | 5-hydroxy-7-methoxy-2-(3,4,5-trimethoxyphenyl)chromone | 358.37 | 51.96 | 0.41
MOL003111 | Centauroside_qt | 434.48 | 55.79 | 0.5
MOL003117 | Ioniceracetalides B_qt | 314.37 | 61.19 | 0.19
MOL003128 | Dinethylsecologanoside | 434.44 | 48.46 | 0.48
MOL000358 | Beta-sitosterol | 414.79 | 36.91 | 0.75
MOL000422 | Kaempferol | 286.25 | 41.88 | 0.24
MOL000449 | Stigmasterol | 412.77 | 43.83 | 0.76
MOL000006 | Luteolin | 286.25 | 36.16 | 0.25
MOL000098 | Quercetin | 302.25 | 46.43 | 0.28
Scutellaria:
MOL000073 | Ent-Epicatechin | 290.29 | 48.96 | 0.24
MOL000173 | Wogonin | 284.28 | 30.68 | 0.23
MOL000228 | (2R)-7-hydroxy-5-methoxy-2-phenylchroman-4-one | 270.3 | 55.23 | 0.2
MOL000359 | Sitosterol | 414.79 | 36.91 | 0.75
MOL000525 | Norwogonin | 270.25 | 39.4 | 0.21
MOL000552 | 5,2ʹ-Dihydroxy-6,7,8-trimethoxyflavone | 344.34 | 31.71 | 0.35
MOL001458 | Coptisine | 320.34 | 30.67 | 0.86
MOL001490 | Bis[(2S)-2-ethylhexyl] benzene-1,2-dicarboxylate | 390.62 | 43.59 | 0.35
MOL001689 | Acacetin | 284.28 | 34.97 | 0.24
MOL002714 | Baicalein | 270.25 | 33.52 | 0.21
MOL002879 | Diop | 390.62 | 43.59 | 0.39
MOL002897 | Epiberberine | 336.39 | 43.09 | 0.78
MOL002909 | 5,7,2,5-tetrahydroxy-8,6-dimethoxyflavone | 376.34 | 33.82 | 0.45
MOL002910 | Carthamidin | 288.27 | 41.15 | 0.24
MOL002913 | Dihydrobaicalin_qt | 272.27 | 40.04 | 0.21
MOL002914 | Eriodyctiol (flavanone) | 288.27 | 41.35 | 0.24
MOL002915 | Salvigenin | 328.34 | 49.07 | 0.33
MOL002917 | 5,2ʹ,6ʹ-Trihydroxy-7,8-dimethoxyflavone | 330.31 | 45.05 | 0.33
MOL002925 | 5,7,2ʹ,6ʹ-Tetrahydroxyflavone | 286.25 | 37.01 | 0.24
MOL002927 | Skullcapflavone II | 374.37 | 69.51 | 0.44
MOL002928 | Oroxylin a | 284.28 | 41.37 | 0.23
MOL002932 | Panicolin | 314.31 | 76.26 | 0.29
MOL002933 | 5,7,4ʹ-Trihydroxy-8-methoxyflavone | 300.28 | 36.56 | 0.27
MOL002934 | Neobaicalein | 374.37 | 104.3 | 0.44
MOL002937 | Dihydrooroxylin A | 286.3 | 66.06 | 0.23
MOL008206 | Moslosooflavone | 298.31 | 44.09 | 0.25
MOL010415 | 11,13-Eicosadienoic acid, methyl ester | 322.59 | 39.28 | 0.23
MOL012266 | Rivularin | 344.34 | 37.94 | 0.37
MOL000490 | Petunidin | 317.29 | 30.05 | 0.31
MOL004598 | 3,5,6,7-tetramethoxy-2-(3,4,5-trimethoxyphenyl)chromone | 432.46 | 31.97 | 0.59
Bupleurum:
MOL002776 | Baicalin | 446.39 | 40.12 | 0.75
MOL001645 | Kaempferol | 308.56 | 42.1 | 0.2
MOL004718 | Linoleyl acetate | 412.77 | 42.98 | 0.76
MOL004653 | α-spinasterol | 426.5 | 46.06 | 0.66
MOL004624 | Stigmasterol | 348.48 | 47.72 | 0.53
MOL004609 | Areapillin | 360.34 | 48.96 | 0.41
MOL000354 | Isorhamnetin | 316.28 | 49.6 | 0.31
MOL013187 | Cubebin | 356.4 | 57.13 | 0.64
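The OB/DL screen in the subsection above is essentially a two-threshold filter over the per-herb ingredient tables exported from TCMSP. The short sketch below illustrates that filtering step with pandas; the column names and the toy rows (copied from Table 1) are assumptions made for illustration and are not part of the original workflow.

```python
# Minimal sketch of the ADME screening step (OB >= 30% and DL >= 0.18).
# Column names and example rows are illustrative placeholders.
import pandas as pd

OB_CUTOFF = 30.0   # oral bioavailability threshold, in percent
DL_CUTOFF = 0.18   # drug-likeness threshold

ingredients = pd.DataFrame(
    [
        ("MOL001494", "Mandenol",  308.56, 42.00, 0.19, "honeysuckle"),
        ("MOL000006", "Luteolin",  286.25, 36.16, 0.25, "honeysuckle"),
        ("MOL002714", "Baicalein", 270.25, 33.52, 0.21, "scutellaria"),
        ("MOL002776", "Baicalin",  446.39, 40.12, 0.75, "bupleurum"),
    ],
    columns=["mol_id", "name", "mw", "ob", "dl", "herb"],
)

# Keep compounds that pass both thresholds, then drop ingredients shared
# between herbs so that each active compound is counted only once.
active = ingredients[(ingredients["ob"] >= OB_CUTOFF) & (ingredients["dl"] >= DL_CUTOFF)]
active = active.drop_duplicates(subset="mol_id")
print(active[["mol_id", "name", "ob", "dl"]])
```

Applied to the full TCMSP exports for the four herbs, this filter corresponds to the 55 unique active compounds listed in Table 1.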
Known Therapeutic Targets of Chronic Pharyngitis (CP): In the drug treatment of CP, the known therapeutic targets were mainly obtained from two sources: the GeneCards database (https://www.genecards.org/) and the CTD database (http://ctd.mdibl.org/). After redundant entries were removed, 5878 therapeutic targets for the treatment of CP entered the data analysis. Figure S1 shows the specific information about these known therapeutic targets. Network Construction and Analysis: To understand the relationships among the herbs and chemical compounds that constitute YOL, their putative targets, and the known therapeutic targets for CP, network visualization was conducted using Cytoscape 3.6.1 software,8 and the degree of each compound and target node was analyzed. Protein-Protein Interaction (PPI) Data: A PPI core network (PPICN) is used to study the relationships between chemical compounds and disease-associated protein molecules based on biochemistry, signal transduction and genetic networks.9 The differing ID types of the proteins were converted to UniProt IDs. To further understand how YOL and CP interact at the protein level, the selected targets in this study were uploaded to the online Venn diagram tool (http://bioinfogp.cnb.csic.es/tools/venny/index.html). The genes at the intersection of the active compound targets and the CP-related genes were then selected and uploaded to STRING 10.5 (https://string-db.org) to obtain the PPI relationships. Pathway Enrichment Analysis: The data were imported in Gene Symbol format into the DAVID 6.8 database (https://david.ncifcrf.gov/). The biological process (BP), cellular component (CC) and molecular function (MF) categories were selected. Pathway enrichment analysis was performed using GO biological process enrichment analysis and KEGG pathway analysis (http://www.genome.jp/kegg/) to clarify the pathways involving the putative CP targets, and the results were visualized with the online mapping website Omicshare Tools (http://www.omicshare.com/tools/index.php/). Statistical Analysis: Each experiment was repeated at least in triplicate and data are shown as mean ± SD. Statistical significance in the GO and KEGG analyses was assessed using hypergeometric and Fisher's exact tests. P<0.05 was considered statistically significant.", 
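The PPI and enrichment steps above hinge on the overlap between putative YOL targets and CP-related genes, which is then scored statistically. The sketch below illustrates that idea: it intersects two gene sets and computes a hypergeometric p-value, the test named in the Statistical Analysis subsection. The gene symbols and the background size are placeholders chosen for illustration, not values taken from this study.

```python
# Illustrative overlap of putative YOL targets with CP-related genes,
# scored with a hypergeometric test (placeholder gene sets).
from scipy.stats import hypergeom

yol_targets = {"PTGS1", "ESR1", "ESR2", "GSK3B", "NCOA2", "RELA", "IL6"}
cp_genes = {"IL6", "RELA", "ESR1", "VEGFA", "HIF1A", "TP53"}
background = 20000  # rough count of human protein-coding genes (assumption)

overlap = yol_targets & cp_genes  # genes shared by the drug and the disease

# P(X >= |overlap|) when |cp_genes| "successes" sit in the background
# population and |yol_targets| genes are drawn without replacement.
p_value = hypergeom.sf(len(overlap) - 1, background, len(cp_genes), len(yol_targets))
print(sorted(overlap), p_value)
```

In the actual analysis the overlapping genes were uploaded to STRING and DAVID rather than scored locally; the snippet only makes the underlying counting explicit.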
P<0.05 was considered statistically significant.", "Yinqin Oral Liquid (YOL) was provided by su zhou si yuan natural products research and development Co. Ltd. YOL is composed of honeysuckle, scutellaria, bupleurum and cicada, decompressed and concentrated to 2 g/mL through boiling and alcohol sinking, and stored at 4°C for later use. Primary antibodies: COX-2 and iNOS antibodies were purchased from Abcam (Cambridge, UK); glyceraldehyde-3-phosphate dehydrogenase (GAPDH) antibody was obtained from Millipore (Billerica, MA, USA); PI3K, p-AKT, AKT, p-Stats, Stat3 and p-p65 (S536) antibodies were purchased from Cell Signaling Technology (Danvers, MA, USA); and p65 antibodies was obtained from Santa Cruz (CA, USA). The horseradish peroxidase (HRP)-conjugated sheep anti-mouse or anti-rabbit secondary antibodies were purchased from Thermo Fisher (Waltham, MA, USA). The proteins were visualized using an ECL detection kit (Thermo Fisher).", "RAW264.7 cells, a mouse macrophage cell line (Cell Bank of Chinese Academy of Sciences, Shanghai, China) were cultured using DMEM supplemented with FBS (10%) and penicillin-streptomycin (1%). The cells were cultured at 37°C in a humidified environment of 5% carbon dioxide. The medium was changed every other day and the cells were passaged at a dilution of 1:3.", "In the TCMSP database (https://tcmsp-e.com/),7 the “Herb name” was selected to retrieve the molecular ADME parameter information of honeysuckle, scutellaria, bupleurum and cicada slough, and then the screen conformed to the oral bioavailability (OB) ADME parameter information of honey (Drug-likeness, DL) of ≥0.18 or more active ingredients (Table 1). YOL ingredients were supplemented through literature review. Potential targets related to active ingredients were searched through the TCMSP database and Cytoscape 3.6.1 software.Table 155 Active Compounds Predicted by OB and DL Among 4 Herbs in YOLDrugMol IDMolecule NameMWOB%DLHoneysuckleMOL001494Mandenol308.56420.19MOL001495Ethyl linolenate306.5446.10.2MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL003006(-)-(3R,8S,9R,9aS,10aS)-9-ethenyl-8-(beta-D-glucopyranosyloxy)-2,3,9,9a,10,10a-hexahydro-5-oxo-5H,8H-pyrano[4,3-d]oxazolo[3,2-a]pyridine-3-carboxylic acid_qt281.2987.470.23MOL003014Secologanic dibutylacetal_qt384.5753.650.29MOL002773Beta-carotene536.9637.180.58MOL003036ZINC03978781412.7743.830.76MOL003044Chryseriol300.2835.850.27MOL0030955-hydroxy-7-methoxy-2-(3,4,5-trimethoxyphenyl)chromone358.3751.960.41MOL003111Centauroside_qt434.4855.790.5MOL003117Ioniceracetalides B_qt314.3761.190.19MOL003128Dinethylsecologanoside434.4448.460.48MOL000358Beta-sitosterol414.7936.910.75MOL000422Kaempferol286.2541.880.24MOL000449Stigmasterol412.7743.830.76MOL000006Luteolin286.2536.160.25MOL000098Quercetin302.2546.430.28ScutellariaMOL000073Ent-Epicatechin290.2948.960.24MOL000173Wogonin284.2830.680.23MOL000228(2R)-7-hydroxy-5-methoxy-2-phenylchroman-4-one270.355.230.2MOL000359Sitosterol414.7936.910.75MOL000525Norwogonin270.2539.40.21MOL0005525,2ʹ-Dihydroxy-6,7,8-trimethoxyflavone344.3431.710.35MOL001458Coptisine320.3430.670.86MOL001490Bis[(2S)-2-ethylhexyl] benzene-1,2-dicarboxylate390.6243.590.35MOL001689Acacetin284.2834.970.24MOL002714Baicalein270.2533.520.21MOL002879Diop390.6243.590.39MOL002897Epiberberine336.3943.090.78MOL0029095,7,2,5-tetrahydroxy-8,6-dimethoxyflavone376.3433.820.45MOL002910Carthamidin288.2741.150.24MOL002913Dihydrobaicalin_qt272.2740.040.21MOL002914Eriodyctiol 
(flavanone)288.2741.350.24MOL002915Salvigenin328.3449.070.33MOL0029175,2ʹ,6ʹ-Trihydroxy-7,8-dimethoxyflavone330.3145.050.33MOL0029255,7,2ʹ,6ʹ-Tetrahydroxyflavone286.2537.010.24MOL002927Skullcapflavone II374.3769.510.44MOL002928Oroxylin a284.2841.370.23MOL002932Panicolin314.3176.260.29MOL0029335,7,4ʹ-Trihydroxy-8-methoxyflavone300.2836.560.27MOL002934Neobaicalein374.37104.30.44MOL002937Dihydrooroxylin A286.366.060.23MOL008206Moslosooflavone298.3144.090.25MOL01041511,13-Eicosadienoic acid, methyl ester322.5939.280.23MOL012266Rivularin344.3437.940.37MOL000490Petunidin317.2930.050.31MOL0045983,5,6,7-tetramethoxy-2-(3,4,5-trimethoxyphenyl)chromone432.4631.970.59BupleurumMOL002776Baicalin446.3940.120.75MOL001645Kaempferol308.5642.10.2MOL004718Linoleyl acetate412.7742.980.76MOL004653α-spinasterol426.546.060.66MOL004624Stigmasterol348.4847.720.53MOL004609Areapillin360.3448.960.41MOL000354Isorhamnetin316.2849.60.31MOL013187Cubebin356.457.130.64\n\n55 Active Compounds Predicted by OB and DL Among 4 Herbs in YOL", "In the drug treatment of CP, the known therapeutic targets were mainly obtained from two sources: GeneCards database (https://www.genecards.org/) and CTD database (http://ctd.mdibl.org/). The data analysis was conducted in 5878 therapeutic targets for the treatment of CP, after the redundant entries were removed. Figure S1 shows the specific information about these known therapeutic targets.", "To understand the relationship between the herbs and chemical compounds that consist of YOL and its putative targets, and the therapeutic targets known for CP, Network Visualization was conducted using Cytoscape 3.6.1 software,8 and the degree between compounds and targets was analyzed.", "PPI core network (PPICN) is used to study the relationship between chemical compounds and disease-associated protein molecules based on biochemistry, signal transduction and genetic networks.9 The differing ID types of the proteins were converted to UniProt IDs. In order to further understand how YOL and CP interact at the protein level, the selected targets in this study were uploaded on the online Venn diagram (http://bioinfogp.cnb.csic.es/tools/venny/index.html). Moreover, the genes at the intersection of the active compound and CP were selected and uploaded on STRING 10.5 (https://string-db.org) to obtain the PPI relationship.", "The data was imported in the format Gene Symbol into David 6.8 database (https://david.ncifcrf.gov/). Molecular function (CC), biological process (BP), and cellular function (MF) were selected, respectively. A pathway enrichment analysis was prospectively performed using biological process enrichment GO analysis and KEGG pathway analysis (http://www.genome.jp/kegg/) to clarify the pathways involving putative CP targets, and visualized by online mapping website Omicshare Tools (http://www.omicshare.com/tools/index.php/).", "Each experiment was repeated at least in triplicate and data were shown as mean ± SD. Statistical difference in GO and KEGG analysis was performed using hypergeometric and Fisher's exact tests. P<0.05 was considered statistically significant.", "Screening and Analysis of Active Compounds First, all the compounds related to four TCMs were retrieved by TCMSP. Second, according to the screening conditions of OB ≥30% and DL ≥0.18,10 23 active constituents of honeysuckle (17 predicted relevant targets), 36 of baicalensis (30 predicted relevant targets), 17 of bupleurum (13 predicted relevant targets), and no suitable compounds of cicada slough were obtained. 
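As a minimal sketch of this screening step, the snippet below applies the OB ≥ 30% and DL ≥ 0.18 filter to a hypothetical TCMSP export; the file name and column names are assumptions for illustration only.

```python
# Hedged sketch of the ADME screen (OB >= 30%, DL >= 0.18) on a TCMSP export.
# "tcmsp_ingredients.csv" and its column names are assumed, not from the study.
import pandas as pd

df = pd.read_csv("tcmsp_ingredients.csv")            # one row per candidate compound
active = df[(df["ob"] >= 30) & (df["dl"] >= 0.18)]   # keep compounds passing both cutoffs

# Herbs can share compounds, so count each molecule only once
active = active.drop_duplicates(subset="molecule_name")
print(len(active), "active compounds retained")
```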
Among them, baicalensis and honeysuckle contained two identical ingredients, while bupleurum and honeysuckle contained three identical components. Finally, a total of 55 active components were obtained for the data analysis after removing redundant entries (Table 1).\nFirst, all the compounds related to four TCMs were retrieved by TCMSP. Second, according to the screening conditions of OB ≥30% and DL ≥0.18,10 23 active constituents of honeysuckle (17 predicted relevant targets), 36 of baicalensis (30 predicted relevant targets), 17 of bupleurum (13 predicted relevant targets), and no suitable compounds of cicada slough were obtained. Among them, baicalensis and honeysuckle contained two identical ingredients, while bupleurum and honeysuckle contained three identical components. Finally, a total of 55 active components were obtained for the data analysis after removing redundant entries (Table 1).\nNetwork Analysis of “Active Component-Target” Interaction in YOL The compound-target network contained 175 nodes including 55 compound nodes and 113 target nodes, of which two had no corresponding target and 577 edges. As shown in Figure 1, blue represents compounds in bupleurum, red represents compounds in scutellaria, purple represents compounds in honeysuckle, green represents drug targets, each edge indicates the interaction between compounds and their targets, and two of the 55 compounds were excluded from the network construction. The degree value of a node represents the amount of connected routes in the network, and the larger the shape, the higher the degree value. The network in line with its topological properties showed that the nodes with higher degree of screening were analyzed. These nodes that connect compounds or targets act as hubs in the whole network and may predict key compounds or targets. The top five compounds ranked by degree were MOL000098-quercetin, MOL000422-Kaempferol, MOL000449-stigmasterol, MOL000358-quebrachol and MOL000006-luteolin, which could interact with 154, 68, 39, 32 and 28 target proteins, respectively. PTGS1, NCOA2, AR, PRSS1 and PPARG were the top five targets with the highest degree, which could interact with 42, 38, 29, 28 and 17 compounds, respectively.Figure 1Compounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.\nCompounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.\nThe compound-target network contained 175 nodes including 55 compound nodes and 113 target nodes, of which two had no corresponding target and 577 edges. As shown in Figure 1, blue represents compounds in bupleurum, red represents compounds in scutellaria, purple represents compounds in honeysuckle, green represents drug targets, each edge indicates the interaction between compounds and their targets, and two of the 55 compounds were excluded from the network construction. The degree value of a node represents the amount of connected routes in the network, and the larger the shape, the higher the degree value. The network in line with its topological properties showed that the nodes with higher degree of screening were analyzed. 
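A small sketch of this degree calculation is given below, using networkx on a toy edge list; the edges are illustrative only and do not reproduce the full compound-target data.

```python
# Hedged sketch: rank nodes of a compound-target network by degree.
# The edge list is a toy example, not the network reported in this study.
import networkx as nx

edges = [("quercetin", "PTGS1"), ("quercetin", "PPARG"),
         ("kaempferol", "PTGS1"), ("luteolin", "NCOA2")]
compounds = {"quercetin", "kaempferol", "luteolin"}

G = nx.Graph(edges)
by_degree = sorted(G.degree, key=lambda pair: pair[1], reverse=True)

print("top compounds:", [n for n, d in by_degree if n in compounds])
print("top targets:  ", [n for n, d in by_degree if n not in compounds])
```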
These nodes that connect compounds or targets act as hubs in the whole network and may predict key compounds or targets. The top five compounds ranked by degree were MOL000098-quercetin, MOL000422-Kaempferol, MOL000449-stigmasterol, MOL000358-quebrachol and MOL000006-luteolin, which could interact with 154, 68, 39, 32 and 28 target proteins, respectively. PTGS1, NCOA2, AR, PRSS1 and PPARG were the top five targets with the highest degree, which could interact with 42, 38, 29, 28 and 17 compounds, respectively.Figure 1Compounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.\nCompounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.\nThe Construction of Key Protein Network of YOL and CP Through the internationally recognized CTD and GeneCards disease databases, 5150 and 1247 pharyngitis-related targets were obtained, respectively. Through the Venny online tool, 30 targets of honeysuckle, scutellaria, bupleuri and pharyngitis were obtained, including VEGFA, IL6, ESR1, RELA, HIF1A, etc. (Figure 2A). Finally, STRING online tool was used to construct the PPI network interaction map of drug and disease. It contained 30 nodes representing proteins and 48 edges representing the interaction between proteins. The thicker the line, the higher is the correlation degree. Through the protein interaction network, we could further explore the therapeutic target and mechanism of YOL for CP (Figure 2B).Figure 2(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.\n(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.\nThrough the internationally recognized CTD and GeneCards disease databases, 5150 and 1247 pharyngitis-related targets were obtained, respectively. Through the Venny online tool, 30 targets of honeysuckle, scutellaria, bupleuri and pharyngitis were obtained, including VEGFA, IL6, ESR1, RELA, HIF1A, etc. (Figure 2A). Finally, STRING online tool was used to construct the PPI network interaction map of drug and disease. It contained 30 nodes representing proteins and 48 edges representing the interaction between proteins. The thicker the line, the higher is the correlation degree. Through the protein interaction network, we could further explore the therapeutic target and mechanism of YOL for CP (Figure 2B).Figure 2(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.\n(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.\nPharmacological Mechanisms of YOL Acting on CP A total of 30 common targets were obtained by DAVID online tool and 148 GO items were obtained (p < 0.05) by performing the functional enrichment analysis. There were 102 entries on biological process (BP), 34 entries on Molecular Function (MF), and 12 entries on cell composition (CC) (Figure 3A). 
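The enriched GO and KEGG terms were visualized with the OmicShare tools; as a rough equivalent, the sketch below draws a -log10(p) bar chart with matplotlib, using placeholder terms and p-values rather than the study's results.

```python
# Hedged sketch of an enrichment bar chart; terms and p-values are placeholders.
import numpy as np
import matplotlib.pyplot as plt

terms = ["PI3K-Akt signaling", "HIF-1 signaling", "p53 signaling"]
pvals = np.array([1e-6, 3e-5, 2e-4])

plt.barh(terms, -np.log10(pvals))
plt.xlabel("-log10(p-value)")
plt.tight_layout()
plt.savefig("kegg_enrichment.png")
```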
A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p<0.05), involving cancer, PI3K-AKT, hepatitis, proteoglycans, p53, HIF-1 signaling pathways, etc. (Figure 3B and Table S1).Figure 3(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.\n(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.\nA total of 30 common targets were obtained by DAVID online tool and 148 GO items were obtained (p < 0.05) by performing the functional enrichment analysis. There were 102 entries on biological process (BP), 34 entries on Molecular Function (MF), and 12 entries on cell composition (CC) (Figure 3A). A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p<0.05), involving cancer, PI3K-AKT, hepatitis, proteoglycans, p53, HIF-1 signaling pathways, etc. (Figure 3B and Table S1).Figure 3(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.\n(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.\nExperimental Verification of Predicted Results In order to experimentally demonstrate the predictions about the molecular mechanism of YOL, we chose two ingredients for testing (Figure 4), based on their composition score, the group to which they belong, and their herbal origin to examine how ingredients from different herbs affect the same proteins. The effects of these compounds on the expression levels of related proteins were analyzed by Western blot. Macrophages secrete several inflammatory cytokines, which play significant roles in inflammation. Signal transduction and transcriptional activator 3 (STAT3), nuclear factor kappa B (NF-kB), and phosphatidylinositol 3-kinase (PI3K)/Akt pathways are critical inflammatory pathways. LPS can significantly increase the production of many proinflammatory factors, including inducible NO synthase (iNOS) and cyclooxygenase-2 (COX-2) in macrophages. After pretreatment of macrophages with YOL, luteolin and baicalein, the protein expression levels of iNOS and COX-2 induced by LPS were significantly inhibited in a dose-dependent manner (Figure 4A). The related protein changes were consistent with iNOS and COX-2 inflammation protein expression (Figure 4B–D).Figure 4The YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.\nThe YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. 
#P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.\nIn order to experimentally demonstrate the predictions about the molecular mechanism of YOL, we chose two ingredients for testing (Figure 4), based on their composition score, the group to which they belong, and their herbal origin to examine how ingredients from different herbs affect the same proteins. The effects of these compounds on the expression levels of related proteins were analyzed by Western blot. Macrophages secrete several inflammatory cytokines, which play significant roles in inflammation. Signal transduction and transcriptional activator 3 (STAT3), nuclear factor kappa B (NF-kB), and phosphatidylinositol 3-kinase (PI3K)/Akt pathways are critical inflammatory pathways. LPS can significantly increase the production of many proinflammatory factors, including inducible NO synthase (iNOS) and cyclooxygenase-2 (COX-2) in macrophages. After pretreatment of macrophages with YOL, luteolin and baicalein, the protein expression levels of iNOS and COX-2 induced by LPS were significantly inhibited in a dose-dependent manner (Figure 4A). The related protein changes were consistent with iNOS and COX-2 inflammation protein expression (Figure 4B–D).Figure 4The YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.\nThe YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.", "First, all the compounds related to four TCMs were retrieved by TCMSP. Second, according to the screening conditions of OB ≥30% and DL ≥0.18,10 23 active constituents of honeysuckle (17 predicted relevant targets), 36 of baicalensis (30 predicted relevant targets), 17 of bupleurum (13 predicted relevant targets), and no suitable compounds of cicada slough were obtained. Among them, baicalensis and honeysuckle contained two identical ingredients, while bupleurum and honeysuckle contained three identical components. Finally, a total of 55 active components were obtained for the data analysis after removing redundant entries (Table 1).", "The compound-target network contained 175 nodes including 55 compound nodes and 113 target nodes, of which two had no corresponding target and 577 edges. As shown in Figure 1, blue represents compounds in bupleurum, red represents compounds in scutellaria, purple represents compounds in honeysuckle, green represents drug targets, each edge indicates the interaction between compounds and their targets, and two of the 55 compounds were excluded from the network construction. The degree value of a node represents the amount of connected routes in the network, and the larger the shape, the higher the degree value. 
The network in line with its topological properties showed that the nodes with higher degree of screening were analyzed. These nodes that connect compounds or targets act as hubs in the whole network and may predict key compounds or targets. The top five compounds ranked by degree were MOL000098-quercetin, MOL000422-Kaempferol, MOL000449-stigmasterol, MOL000358-quebrachol and MOL000006-luteolin, which could interact with 154, 68, 39, 32 and 28 target proteins, respectively. PTGS1, NCOA2, AR, PRSS1 and PPARG were the top five targets with the highest degree, which could interact with 42, 38, 29, 28 and 17 compounds, respectively.Figure 1Compounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.\nCompounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.", "Through the internationally recognized CTD and GeneCards disease databases, 5150 and 1247 pharyngitis-related targets were obtained, respectively. Through the Venny online tool, 30 targets of honeysuckle, scutellaria, bupleuri and pharyngitis were obtained, including VEGFA, IL6, ESR1, RELA, HIF1A, etc. (Figure 2A). Finally, STRING online tool was used to construct the PPI network interaction map of drug and disease. It contained 30 nodes representing proteins and 48 edges representing the interaction between proteins. The thicker the line, the higher is the correlation degree. Through the protein interaction network, we could further explore the therapeutic target and mechanism of YOL for CP (Figure 2B).Figure 2(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.\n(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.", "A total of 30 common targets were obtained by DAVID online tool and 148 GO items were obtained (p < 0.05) by performing the functional enrichment analysis. There were 102 entries on biological process (BP), 34 entries on Molecular Function (MF), and 12 entries on cell composition (CC) (Figure 3A). A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p<0.05), involving cancer, PI3K-AKT, hepatitis, proteoglycans, p53, HIF-1 signaling pathways, etc. (Figure 3B and Table S1).Figure 3(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.\n(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.", "In order to experimentally demonstrate the predictions about the molecular mechanism of YOL, we chose two ingredients for testing (Figure 4), based on their composition score, the group to which they belong, and their herbal origin to examine how ingredients from different herbs affect the same proteins. The effects of these compounds on the expression levels of related proteins were analyzed by Western blot. 
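Band intensities from such blots are typically quantified by densitometry and normalized to a loading control before comparison; a minimal sketch of that normalization, assuming GAPDH as the loading control (as used in this study) and using placeholder intensity values, is given below.

```python
# Hedged sketch of Western blot densitometry normalization; values are placeholders.
import numpy as np

groups = ["control", "LPS", "LPS+YOL (low)", "LPS+YOL (high)"]
inos   = np.array([1.0, 8.5, 4.2, 2.1])   # raw band intensities for iNOS
gapdh  = np.array([1.0, 1.1, 0.9, 1.0])   # loading-control intensities

relative = (inos / gapdh) / (inos[0] / gapdh[0])   # fold change vs untreated control
for g, r in zip(groups, relative):
    print(f"{g}: {r:.2f}")
```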
Macrophages secrete several inflammatory cytokines, which play significant roles in inflammation. Signal transduction and transcriptional activator 3 (STAT3), nuclear factor kappa B (NF-kB), and phosphatidylinositol 3-kinase (PI3K)/Akt pathways are critical inflammatory pathways. LPS can significantly increase the production of many proinflammatory factors, including inducible NO synthase (iNOS) and cyclooxygenase-2 (COX-2) in macrophages. After pretreatment of macrophages with YOL, luteolin and baicalein, the protein expression levels of iNOS and COX-2 induced by LPS were significantly inhibited in a dose-dependent manner (Figure 4A). The related protein changes were consistent with iNOS and COX-2 inflammation protein expression (Figure 4B–D).Figure 4The YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.\nThe YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.", "Many diseases, including cancer and chronic inflammation, are known to be regulated by multiple signaling pathways. TCMs are considered as multi-component and multi-target therapeutic drugs, which is in accordance with the methodologies of network pharmacology. The holistic view of TCM considers the human body as a complex biological network system, which is connected with network pharmacology. By combining the network pharmacology with the holistic view of TCM, the disease-target-drug network is obtained. The network shows that different components of TCM may act on one target, and different targets may act on the same pathway. It is also possible that the same component of TCM acts on different targets, and the same target acts on different pathways. The multi-component, multi-target and multi-pathway of TCM can be clearly visualized through network pharmacology, which resolves many difficulties encountered in the study of TCM.13–15 CP is the diffuse inflammation of pharyngeal mucosa, submucosa and lymph tissues. Clinically, CP is mainly manifested as pharyngeal discomfort (foreign body sensation, burning sensation, stimulation sensation, etc.), pharyngeal itching, cough, etc. Pharyngitis belongs to the category of “Fenghou Bi” in TCM. Pharyngitis pertains to the category of “wind throat arthralgia” in Chinese medicine, and is often caused by evil heat entering the body, and heat in the lungs and stomach. YOL is an oral TCM preparation, which is further optimized on the basis of Jinchan oral liquid developed by Children’s Hospital of Suzhou University.5,6 However, there is a lack of research on its active ingredients and its mechanism of action.\nIn this study, 30 potential targets, 102 biological processes, 12 molecular functions, 34 cell components, and 46 KEGG pathways were obtained. 
The active ingredients in YOL can treat CP through multiple targets and pathways. To elucidate the biological pathways that may be affected by YOL, a significantly overexpressed KEGG pathway was identified. The results showed VEGFA, IL6, ESR1 and RELA as the key targets and the involvement of PI3K-Akt, p53 and HIF-1 signaling pathways. TCM aims to restore a patient’s health through the use of a TCM prescription, which is usually composed of two or more TCMs in optimal proportions.\nYOL, for example, consists of four TCMs. As expected, most of the selected active ingredients are directly or indirectly associated with inflammation. In a previous study, neochlorogenic acid was demonstrated to inhibit LPS-activated inflammatory responses through up-regulation of Nrf2/HO-1 and AMPK pathways.16 The decoction extract from six natural herbs, which contain honeysuckle, exhibits anti-inflammatory and immunomodulatory effects by targeting NF-κB/IL-6/STAT3 signaling.17 Luteolin is abundant in honeysuckle, which alleviates CP by inhibiting the NF-κB pathway and anti-inflammatory polarization of M1 macrophages.11 Protective effects of baicalein on liver injury in mice induced by multi-microbial septicemia were found to be based on inhibiting inflammation.12 Network pharmacological studies have shown that the main active components of honeysuckle may play a role through inflammation-related proteins such as HSP90AA1, HSP90AB1, ESR1, PTGS2, TERT and PPARG, and especially heat shock protein HSP90A (HSP90AA1).18 Flavonoids such as baicalin and wogonin alone and in combination have anti-inflammatory activity in Scutellaria.19 Network pharmacological studies have also shown that alkaloids such as coptisine and epiphone in Scutellaria and components such as dihydrokaempferol, and rographolide flavonoids also show anti-inflammatory activity. MAPK14, TNFRSF1A, EGFR and SELE are the primary anti-inflammatory components of Scutellaria. The anti-inflammatory active components in Scutellaria can directly affect MAPK14 and EGFR to produce anti-inflammatory effects, and can also indirectly affect other targets to exert anti-inflammatory effects. As one of the four p38 MAPKs, MAPK14 plays a vital role in the cellular cascade induced by extracellular stimuli such as proinflammatory cytokines or physical stimuli.20 Studies predicted that the pharmacodynamic basis of Bupleurum is mainly saponins, flavonoids, volatile oils, fatty acids and other components. Flavonoids mainly act on PI3K-Akt, NF-kB and other pathways, participate in regulating inflammatory factors, estrogen signal transduction and other physiological processes. Bupleurum-Scutellaria drug pair, first used in small bupleurum decoction in Treatise on Febrile Diseases, is an important part of Bupleurum prescription. Bupleurum is bitter, flat and clear, and can disperse the stagnation of bile fire. Scutellaria has a bitter and cold taste, and can clear internal heat. Both of them must be used together to regulate the liver and gallbladder, and clear the dampness and heat of internal accumulation, which are commonly used to treat fever, pharyngitis, and respiratory diseases.21 Cicada slough can evacuate wind and heat, and clear the throat. Clinically, it is common in classical prescriptions such as Sheng jiang San and Pharyngitis San. However, the active components of Cicada slough did not satisfy the prediction requirements in this research.\nYOL is composed of honeysuckle, scutellaria, bupleurum and cicada slough. 
According to TCM theory, bupleurum is bitter and neutral in nature, enters the liver and gallbladder meridians, penetrates and clears Shaoyang, and can relieve stagnation of qi. Scutellaria is bitter and cold, and clears the heat of Shaoyang. The dispersing action of bupleurum facilitates the clearing and draining action of scutellaria; the two are compatible and together harmonize Shaoyang. Honeysuckle cools and releases the exterior, clears heat and detoxifies, and its aromatic nature helps to dispel turbidity. Cicada slough is light and ascending, clears the upper burner, works with honeysuckle to release exterior pathogenic factors, and accords with the principle of “treating the upper burner as lightly as a feather.” The four drugs combined, dispersing and descending together, expel the pathogenic factors, thereby clearing heat and detoxifying and dispelling the pathogen outward without trapping it inside. Therefore, the treatment of CP with YOL can control the pharyngitis, regulate the balance of Yin and Yang, strengthen the foundation and improve the immunity of the body.", "This study used a network pharmacology platform to probe the treatment of CP with YOL through multi-target analysis. The results showed that YOL could treat CP by acting on related targets, which is consistent with the reported literature. In addition, we verified that YOL and two of its monomer compounds, luteolin and baicalein, inhibited activation of the NF-kB, Stat3 and PI3K-Akt pathways. Although the potential advantages of the “network target, multi-component” strategy of network pharmacology are obvious, the content of individual ingredients is usually ignored in network pharmacology studies of Chinese medicine, and the influence of content and concentration on efficacy cannot be overlooked. The predicted targets that have not yet been examined may provide clues for further research on the mechanism of YOL." ]
[ "Introduction", "Materials and Methods", "Chemicals and Reagents", "Cell Culture", "Collection of Drug Molecular Information and Screening of Active Ingredients", "Known Therapeutic Targets of Chronic Pharyngitis (CP)", "Network Construction and Analysis", "Protein-Protein Interaction (PPI) Data", "Pathway Enrichment Analysis", "Statistical Analysis", "Results", "Screening and Analysis of Active Compounds", "Network Analysis of “Active Component-Target” Interaction in YOL", "The Construction of Key Protein Network of YOL and CP", "Pharmacological Mechanisms of YOL Acting on CP", "Experimental Verification of Predicted Results", "Discussion", "Conclusion" ]
[ "Chronic pharyngitis (CP) is a very common disease associated with chronic inflammation involving pharyngeal mucosa, submucosal and lymphatic tissues. Clinically, CP is mainly manifested as pharyngeal discomfort (foreign body sensation, burning sensation, irritation, etc.), with occasional pharyngeal itch, cough, etc., which belongs to the category of “slow throat arthralgia” in the traditional Chinese medicine (TCM).1 CP has a high incidence and accounts for 10–12% of all pharyngeal diseases, with a long disease course, recurrence and is difficult to cure. At present, western medicine uses antibiotics supplemented by hormone preparations, such as dexamethasone and antiviral drugs, for the treatment of pharyngitis. These treatment methods have drawbacks such as narrow therapeutic spectrum, recurrence, poor tolerance and major toxic and side effects. Hence, developing more effective therapeutic strategies against CP, and reducing the occurrence of side effects from the current therapeutic options is of great clinical importance.\nTCM is a major component of complementary and alternative medicine. Owing to its excellent clinical efficacy, TCM is a research hotspot worldwide. Dialectical theory of TCM is used to treat acute and chronic pharyngitis and has unique advantages such as minor toxic side effects, obvious curative effect and treatment of both symptoms and root causes.2–4 The pharmacological effects of TCM herbal formulae play an important role in their appropriate application. However, the complex nature of herbal formulae has impeded this understanding. Numerous chemical ingredients involving multiple potential targets are present in an herbal formula. It is not adequate to explain the effects produced by a whole herbal formula if we individually consider every single ingredient in it. Yinqin oral liquid (YOL) is composed of extracts of honeysuckle, scutellaria, bupleurum and cicada. YOL has the effects of relieving wind, muscle and clearing heat, which is mainly used for cold, fever, aversion to cold, pharyngeal pain and other upper respiratory tract infections caused by wind fever and wind-cold. Long-term clinical observation has proven its effect in relieving pharyngeal pain and treating hand-foot-mouth disease. YOL contains several chemical compounds, which regulate diverse targets, and thus precisely determine the pharmacological mechanisms involved in its therapeutic actions. In addition, there are several challenges in the relationships between the herbs and diseases.5,6\nNetwork pharmacology is based on the theory of system biology. As a new subject, it selects specific signal nodes to design multi-target drug molecules through the network analysis of system biology. It is possible to study both the active components and the potential gene targets from TCM because of the establishment of network pharmacology and bioinformatics. This study predicted and analyzed the active components and target of YOL using network pharmacology, to explore the rationality of its formula and the scientific nature of treating CP, and further explore the pharmacological mechanisms of YOL on CP.", "Chemicals and Reagents Yinqin Oral Liquid (YOL) was provided by su zhou si yuan natural products research and development Co. Ltd. YOL is composed of honeysuckle, scutellaria, bupleurum and cicada, decompressed and concentrated to 2 g/mL through boiling and alcohol sinking, and stored at 4°C for later use. 
Primary antibodies: COX-2 and iNOS antibodies were purchased from Abcam (Cambridge, UK); glyceraldehyde-3-phosphate dehydrogenase (GAPDH) antibody was obtained from Millipore (Billerica, MA, USA); PI3K, p-AKT, AKT, p-Stats, Stat3 and p-p65 (S536) antibodies were purchased from Cell Signaling Technology (Danvers, MA, USA); and p65 antibodies was obtained from Santa Cruz (CA, USA). The horseradish peroxidase (HRP)-conjugated sheep anti-mouse or anti-rabbit secondary antibodies were purchased from Thermo Fisher (Waltham, MA, USA). The proteins were visualized using an ECL detection kit (Thermo Fisher).\nYinqin Oral Liquid (YOL) was provided by su zhou si yuan natural products research and development Co. Ltd. YOL is composed of honeysuckle, scutellaria, bupleurum and cicada, decompressed and concentrated to 2 g/mL through boiling and alcohol sinking, and stored at 4°C for later use. Primary antibodies: COX-2 and iNOS antibodies were purchased from Abcam (Cambridge, UK); glyceraldehyde-3-phosphate dehydrogenase (GAPDH) antibody was obtained from Millipore (Billerica, MA, USA); PI3K, p-AKT, AKT, p-Stats, Stat3 and p-p65 (S536) antibodies were purchased from Cell Signaling Technology (Danvers, MA, USA); and p65 antibodies was obtained from Santa Cruz (CA, USA). The horseradish peroxidase (HRP)-conjugated sheep anti-mouse or anti-rabbit secondary antibodies were purchased from Thermo Fisher (Waltham, MA, USA). The proteins were visualized using an ECL detection kit (Thermo Fisher).\nCell Culture RAW264.7 cells, a mouse macrophage cell line (Cell Bank of Chinese Academy of Sciences, Shanghai, China) were cultured using DMEM supplemented with FBS (10%) and penicillin-streptomycin (1%). The cells were cultured at 37°C in a humidified environment of 5% carbon dioxide. The medium was changed every other day and the cells were passaged at a dilution of 1:3.\nRAW264.7 cells, a mouse macrophage cell line (Cell Bank of Chinese Academy of Sciences, Shanghai, China) were cultured using DMEM supplemented with FBS (10%) and penicillin-streptomycin (1%). The cells were cultured at 37°C in a humidified environment of 5% carbon dioxide. The medium was changed every other day and the cells were passaged at a dilution of 1:3.\nCollection of Drug Molecular Information and Screening of Active Ingredients In the TCMSP database (https://tcmsp-e.com/),7 the “Herb name” was selected to retrieve the molecular ADME parameter information of honeysuckle, scutellaria, bupleurum and cicada slough, and then the screen conformed to the oral bioavailability (OB) ADME parameter information of honey (Drug-likeness, DL) of ≥0.18 or more active ingredients (Table 1). YOL ingredients were supplemented through literature review. 
Potential targets related to active ingredients were searched through the TCMSP database and Cytoscape 3.6.1 software.Table 155 Active Compounds Predicted by OB and DL Among 4 Herbs in YOLDrugMol IDMolecule NameMWOB%DLHoneysuckleMOL001494Mandenol308.56420.19MOL001495Ethyl linolenate306.5446.10.2MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL003006(-)-(3R,8S,9R,9aS,10aS)-9-ethenyl-8-(beta-D-glucopyranosyloxy)-2,3,9,9a,10,10a-hexahydro-5-oxo-5H,8H-pyrano[4,3-d]oxazolo[3,2-a]pyridine-3-carboxylic acid_qt281.2987.470.23MOL003014Secologanic dibutylacetal_qt384.5753.650.29MOL002773Beta-carotene536.9637.180.58MOL003036ZINC03978781412.7743.830.76MOL003044Chryseriol300.2835.850.27MOL0030955-hydroxy-7-methoxy-2-(3,4,5-trimethoxyphenyl)chromone358.3751.960.41MOL003111Centauroside_qt434.4855.790.5MOL003117Ioniceracetalides B_qt314.3761.190.19MOL003128Dinethylsecologanoside434.4448.460.48MOL000358Beta-sitosterol414.7936.910.75MOL000422Kaempferol286.2541.880.24MOL000449Stigmasterol412.7743.830.76MOL000006Luteolin286.2536.160.25MOL000098Quercetin302.2546.430.28ScutellariaMOL000073Ent-Epicatechin290.2948.960.24MOL000173Wogonin284.2830.680.23MOL000228(2R)-7-hydroxy-5-methoxy-2-phenylchroman-4-one270.355.230.2MOL000359Sitosterol414.7936.910.75MOL000525Norwogonin270.2539.40.21MOL0005525,2ʹ-Dihydroxy-6,7,8-trimethoxyflavone344.3431.710.35MOL001458Coptisine320.3430.670.86MOL001490Bis[(2S)-2-ethylhexyl] benzene-1,2-dicarboxylate390.6243.590.35MOL001689Acacetin284.2834.970.24MOL002714Baicalein270.2533.520.21MOL002879Diop390.6243.590.39MOL002897Epiberberine336.3943.090.78MOL0029095,7,2,5-tetrahydroxy-8,6-dimethoxyflavone376.3433.820.45MOL002910Carthamidin288.2741.150.24MOL002913Dihydrobaicalin_qt272.2740.040.21MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL002915Salvigenin328.3449.070.33MOL0029175,2ʹ,6ʹ-Trihydroxy-7,8-dimethoxyflavone330.3145.050.33MOL0029255,7,2ʹ,6ʹ-Tetrahydroxyflavone286.2537.010.24MOL002927Skullcapflavone II374.3769.510.44MOL002928Oroxylin a284.2841.370.23MOL002932Panicolin314.3176.260.29MOL0029335,7,4ʹ-Trihydroxy-8-methoxyflavone300.2836.560.27MOL002934Neobaicalein374.37104.30.44MOL002937Dihydrooroxylin A286.366.060.23MOL008206Moslosooflavone298.3144.090.25MOL01041511,13-Eicosadienoic acid, methyl ester322.5939.280.23MOL012266Rivularin344.3437.940.37MOL000490Petunidin317.2930.050.31MOL0045983,5,6,7-tetramethoxy-2-(3,4,5-trimethoxyphenyl)chromone432.4631.970.59BupleurumMOL002776Baicalin446.3940.120.75MOL001645Kaempferol308.5642.10.2MOL004718Linoleyl acetate412.7742.980.76MOL004653α-spinasterol426.546.060.66MOL004624Stigmasterol348.4847.720.53MOL004609Areapillin360.3448.960.41MOL000354Isorhamnetin316.2849.60.31MOL013187Cubebin356.457.130.64\n\n55 Active Compounds Predicted by OB and DL Among 4 Herbs in YOL\nIn the TCMSP database (https://tcmsp-e.com/),7 the “Herb name” was selected to retrieve the molecular ADME parameter information of honeysuckle, scutellaria, bupleurum and cicada slough, and then the screen conformed to the oral bioavailability (OB) ADME parameter information of honey (Drug-likeness, DL) of ≥0.18 or more active ingredients (Table 1). YOL ingredients were supplemented through literature review. 
Potential targets related to active ingredients were searched through the TCMSP database and Cytoscape 3.6.1 software.Table 155 Active Compounds Predicted by OB and DL Among 4 Herbs in YOLDrugMol IDMolecule NameMWOB%DLHoneysuckleMOL001494Mandenol308.56420.19MOL001495Ethyl linolenate306.5446.10.2MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL003006(-)-(3R,8S,9R,9aS,10aS)-9-ethenyl-8-(beta-D-glucopyranosyloxy)-2,3,9,9a,10,10a-hexahydro-5-oxo-5H,8H-pyrano[4,3-d]oxazolo[3,2-a]pyridine-3-carboxylic acid_qt281.2987.470.23MOL003014Secologanic dibutylacetal_qt384.5753.650.29MOL002773Beta-carotene536.9637.180.58MOL003036ZINC03978781412.7743.830.76MOL003044Chryseriol300.2835.850.27MOL0030955-hydroxy-7-methoxy-2-(3,4,5-trimethoxyphenyl)chromone358.3751.960.41MOL003111Centauroside_qt434.4855.790.5MOL003117Ioniceracetalides B_qt314.3761.190.19MOL003128Dinethylsecologanoside434.4448.460.48MOL000358Beta-sitosterol414.7936.910.75MOL000422Kaempferol286.2541.880.24MOL000449Stigmasterol412.7743.830.76MOL000006Luteolin286.2536.160.25MOL000098Quercetin302.2546.430.28ScutellariaMOL000073Ent-Epicatechin290.2948.960.24MOL000173Wogonin284.2830.680.23MOL000228(2R)-7-hydroxy-5-methoxy-2-phenylchroman-4-one270.355.230.2MOL000359Sitosterol414.7936.910.75MOL000525Norwogonin270.2539.40.21MOL0005525,2ʹ-Dihydroxy-6,7,8-trimethoxyflavone344.3431.710.35MOL001458Coptisine320.3430.670.86MOL001490Bis[(2S)-2-ethylhexyl] benzene-1,2-dicarboxylate390.6243.590.35MOL001689Acacetin284.2834.970.24MOL002714Baicalein270.2533.520.21MOL002879Diop390.6243.590.39MOL002897Epiberberine336.3943.090.78MOL0029095,7,2,5-tetrahydroxy-8,6-dimethoxyflavone376.3433.820.45MOL002910Carthamidin288.2741.150.24MOL002913Dihydrobaicalin_qt272.2740.040.21MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL002915Salvigenin328.3449.070.33MOL0029175,2ʹ,6ʹ-Trihydroxy-7,8-dimethoxyflavone330.3145.050.33MOL0029255,7,2ʹ,6ʹ-Tetrahydroxyflavone286.2537.010.24MOL002927Skullcapflavone II374.3769.510.44MOL002928Oroxylin a284.2841.370.23MOL002932Panicolin314.3176.260.29MOL0029335,7,4ʹ-Trihydroxy-8-methoxyflavone300.2836.560.27MOL002934Neobaicalein374.37104.30.44MOL002937Dihydrooroxylin A286.366.060.23MOL008206Moslosooflavone298.3144.090.25MOL01041511,13-Eicosadienoic acid, methyl ester322.5939.280.23MOL012266Rivularin344.3437.940.37MOL000490Petunidin317.2930.050.31MOL0045983,5,6,7-tetramethoxy-2-(3,4,5-trimethoxyphenyl)chromone432.4631.970.59BupleurumMOL002776Baicalin446.3940.120.75MOL001645Kaempferol308.5642.10.2MOL004718Linoleyl acetate412.7742.980.76MOL004653α-spinasterol426.546.060.66MOL004624Stigmasterol348.4847.720.53MOL004609Areapillin360.3448.960.41MOL000354Isorhamnetin316.2849.60.31MOL013187Cubebin356.457.130.64\n\n55 Active Compounds Predicted by OB and DL Among 4 Herbs in YOL\nKnown Therapeutic Targets of Chronic Pharyngitis (CP) In the drug treatment of CP, the known therapeutic targets were mainly obtained from two sources: GeneCards database (https://www.genecards.org/) and CTD database (http://ctd.mdibl.org/). The data analysis was conducted in 5878 therapeutic targets for the treatment of CP, after the redundant entries were removed. Figure S1 shows the specific information about these known therapeutic targets.\nIn the drug treatment of CP, the known therapeutic targets were mainly obtained from two sources: GeneCards database (https://www.genecards.org/) and CTD database (http://ctd.mdibl.org/). The data analysis was conducted in 5878 therapeutic targets for the treatment of CP, after the redundant entries were removed. 
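As a minimal sketch of this step, the snippet below pools the two disease-target lists, removes redundant entries, and intersects the result with the compound targets; the gene symbols shown are examples only, not the full lists used in the study.

```python
# Hedged sketch: merge disease-target lists and intersect with compound targets.
# The gene symbols are illustrative examples, not the full target lists.
genecards = {"IL6", "VEGFA", "ESR1", "RELA"}
ctd       = {"IL6", "HIF1A", "PTGS2"}

cp_targets = genecards | ctd                     # union with duplicate entries removed
compound_targets = {"IL6", "PTGS1", "RELA", "PPARG"}

shared = sorted(compound_targets & cp_targets)   # candidates for the PPI network
print(shared)
```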
Figure S1 shows the specific information about these known therapeutic targets.\nNetwork Construction and Analysis To understand the relationship between the herbs and chemical compounds that consist of YOL and its putative targets, and the therapeutic targets known for CP, Network Visualization was conducted using Cytoscape 3.6.1 software,8 and the degree between compounds and targets was analyzed.\nTo understand the relationship between the herbs and chemical compounds that consist of YOL and its putative targets, and the therapeutic targets known for CP, Network Visualization was conducted using Cytoscape 3.6.1 software,8 and the degree between compounds and targets was analyzed.\nProtein-Protein Interaction (PPI) Data PPI core network (PPICN) is used to study the relationship between chemical compounds and disease-associated protein molecules based on biochemistry, signal transduction and genetic networks.9 The differing ID types of the proteins were converted to UniProt IDs. In order to further understand how YOL and CP interact at the protein level, the selected targets in this study were uploaded on the online Venn diagram (http://bioinfogp.cnb.csic.es/tools/venny/index.html). Moreover, the genes at the intersection of the active compound and CP were selected and uploaded on STRING 10.5 (https://string-db.org) to obtain the PPI relationship.\nPPI core network (PPICN) is used to study the relationship between chemical compounds and disease-associated protein molecules based on biochemistry, signal transduction and genetic networks.9 The differing ID types of the proteins were converted to UniProt IDs. In order to further understand how YOL and CP interact at the protein level, the selected targets in this study were uploaded on the online Venn diagram (http://bioinfogp.cnb.csic.es/tools/venny/index.html). Moreover, the genes at the intersection of the active compound and CP were selected and uploaded on STRING 10.5 (https://string-db.org) to obtain the PPI relationship.\nPathway Enrichment Analysis The data was imported in the format Gene Symbol into David 6.8 database (https://david.ncifcrf.gov/). Molecular function (CC), biological process (BP), and cellular function (MF) were selected, respectively. A pathway enrichment analysis was prospectively performed using biological process enrichment GO analysis and KEGG pathway analysis (http://www.genome.jp/kegg/) to clarify the pathways involving putative CP targets, and visualized by online mapping website Omicshare Tools (http://www.omicshare.com/tools/index.php/).\nThe data was imported in the format Gene Symbol into David 6.8 database (https://david.ncifcrf.gov/). Molecular function (CC), biological process (BP), and cellular function (MF) were selected, respectively. A pathway enrichment analysis was prospectively performed using biological process enrichment GO analysis and KEGG pathway analysis (http://www.genome.jp/kegg/) to clarify the pathways involving putative CP targets, and visualized by online mapping website Omicshare Tools (http://www.omicshare.com/tools/index.php/).\nStatistical Analysis Each experiment was repeated at least in triplicate and data were shown as mean ± SD. Statistical difference in GO and KEGG analysis was performed using hypergeometric and Fisher's exact tests. P<0.05 was considered statistically significant.\nEach experiment was repeated at least in triplicate and data were shown as mean ± SD. Statistical difference in GO and KEGG analysis was performed using hypergeometric and Fisher's exact tests. 
P<0.05 was considered statistically significant.", "Yinqin Oral Liquid (YOL) was provided by su zhou si yuan natural products research and development Co. Ltd. YOL is composed of honeysuckle, scutellaria, bupleurum and cicada, decompressed and concentrated to 2 g/mL through boiling and alcohol sinking, and stored at 4°C for later use. Primary antibodies: COX-2 and iNOS antibodies were purchased from Abcam (Cambridge, UK); glyceraldehyde-3-phosphate dehydrogenase (GAPDH) antibody was obtained from Millipore (Billerica, MA, USA); PI3K, p-AKT, AKT, p-Stats, Stat3 and p-p65 (S536) antibodies were purchased from Cell Signaling Technology (Danvers, MA, USA); and p65 antibodies was obtained from Santa Cruz (CA, USA). The horseradish peroxidase (HRP)-conjugated sheep anti-mouse or anti-rabbit secondary antibodies were purchased from Thermo Fisher (Waltham, MA, USA). The proteins were visualized using an ECL detection kit (Thermo Fisher).", "RAW264.7 cells, a mouse macrophage cell line (Cell Bank of Chinese Academy of Sciences, Shanghai, China) were cultured using DMEM supplemented with FBS (10%) and penicillin-streptomycin (1%). The cells were cultured at 37°C in a humidified environment of 5% carbon dioxide. The medium was changed every other day and the cells were passaged at a dilution of 1:3.", "In the TCMSP database (https://tcmsp-e.com/),7 the “Herb name” was selected to retrieve the molecular ADME parameter information of honeysuckle, scutellaria, bupleurum and cicada slough, and then the screen conformed to the oral bioavailability (OB) ADME parameter information of honey (Drug-likeness, DL) of ≥0.18 or more active ingredients (Table 1). YOL ingredients were supplemented through literature review. Potential targets related to active ingredients were searched through the TCMSP database and Cytoscape 3.6.1 software.Table 155 Active Compounds Predicted by OB and DL Among 4 Herbs in YOLDrugMol IDMolecule NameMWOB%DLHoneysuckleMOL001494Mandenol308.56420.19MOL001495Ethyl linolenate306.5446.10.2MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL003006(-)-(3R,8S,9R,9aS,10aS)-9-ethenyl-8-(beta-D-glucopyranosyloxy)-2,3,9,9a,10,10a-hexahydro-5-oxo-5H,8H-pyrano[4,3-d]oxazolo[3,2-a]pyridine-3-carboxylic acid_qt281.2987.470.23MOL003014Secologanic dibutylacetal_qt384.5753.650.29MOL002773Beta-carotene536.9637.180.58MOL003036ZINC03978781412.7743.830.76MOL003044Chryseriol300.2835.850.27MOL0030955-hydroxy-7-methoxy-2-(3,4,5-trimethoxyphenyl)chromone358.3751.960.41MOL003111Centauroside_qt434.4855.790.5MOL003117Ioniceracetalides B_qt314.3761.190.19MOL003128Dinethylsecologanoside434.4448.460.48MOL000358Beta-sitosterol414.7936.910.75MOL000422Kaempferol286.2541.880.24MOL000449Stigmasterol412.7743.830.76MOL000006Luteolin286.2536.160.25MOL000098Quercetin302.2546.430.28ScutellariaMOL000073Ent-Epicatechin290.2948.960.24MOL000173Wogonin284.2830.680.23MOL000228(2R)-7-hydroxy-5-methoxy-2-phenylchroman-4-one270.355.230.2MOL000359Sitosterol414.7936.910.75MOL000525Norwogonin270.2539.40.21MOL0005525,2ʹ-Dihydroxy-6,7,8-trimethoxyflavone344.3431.710.35MOL001458Coptisine320.3430.670.86MOL001490Bis[(2S)-2-ethylhexyl] benzene-1,2-dicarboxylate390.6243.590.35MOL001689Acacetin284.2834.970.24MOL002714Baicalein270.2533.520.21MOL002879Diop390.6243.590.39MOL002897Epiberberine336.3943.090.78MOL0029095,7,2,5-tetrahydroxy-8,6-dimethoxyflavone376.3433.820.45MOL002910Carthamidin288.2741.150.24MOL002913Dihydrobaicalin_qt272.2740.040.21MOL002914Eriodyctiol 
(flavanone)288.2741.350.24MOL002915Salvigenin328.3449.070.33MOL0029175,2ʹ,6ʹ-Trihydroxy-7,8-dimethoxyflavone330.3145.050.33MOL0029255,7,2ʹ,6ʹ-Tetrahydroxyflavone286.2537.010.24MOL002927Skullcapflavone II374.3769.510.44MOL002928Oroxylin a284.2841.370.23MOL002932Panicolin314.3176.260.29MOL0029335,7,4ʹ-Trihydroxy-8-methoxyflavone300.2836.560.27MOL002934Neobaicalein374.37104.30.44MOL002937Dihydrooroxylin A286.366.060.23MOL008206Moslosooflavone298.3144.090.25MOL01041511,13-Eicosadienoic acid, methyl ester322.5939.280.23MOL012266Rivularin344.3437.940.37MOL000490Petunidin317.2930.050.31MOL0045983,5,6,7-tetramethoxy-2-(3,4,5-trimethoxyphenyl)chromone432.4631.970.59BupleurumMOL002776Baicalin446.3940.120.75MOL001645Kaempferol308.5642.10.2MOL004718Linoleyl acetate412.7742.980.76MOL004653α-spinasterol426.546.060.66MOL004624Stigmasterol348.4847.720.53MOL004609Areapillin360.3448.960.41MOL000354Isorhamnetin316.2849.60.31MOL013187Cubebin356.457.130.64\n\n55 Active Compounds Predicted by OB and DL Among 4 Herbs in YOL", "In the drug treatment of CP, the known therapeutic targets were mainly obtained from two sources: GeneCards database (https://www.genecards.org/) and CTD database (http://ctd.mdibl.org/). The data analysis was conducted in 5878 therapeutic targets for the treatment of CP, after the redundant entries were removed. Figure S1 shows the specific information about these known therapeutic targets.", "To understand the relationship between the herbs and chemical compounds that consist of YOL and its putative targets, and the therapeutic targets known for CP, Network Visualization was conducted using Cytoscape 3.6.1 software,8 and the degree between compounds and targets was analyzed.", "PPI core network (PPICN) is used to study the relationship between chemical compounds and disease-associated protein molecules based on biochemistry, signal transduction and genetic networks.9 The differing ID types of the proteins were converted to UniProt IDs. In order to further understand how YOL and CP interact at the protein level, the selected targets in this study were uploaded on the online Venn diagram (http://bioinfogp.cnb.csic.es/tools/venny/index.html). Moreover, the genes at the intersection of the active compound and CP were selected and uploaded on STRING 10.5 (https://string-db.org) to obtain the PPI relationship.", "The data was imported in the format Gene Symbol into David 6.8 database (https://david.ncifcrf.gov/). Molecular function (CC), biological process (BP), and cellular function (MF) were selected, respectively. A pathway enrichment analysis was prospectively performed using biological process enrichment GO analysis and KEGG pathway analysis (http://www.genome.jp/kegg/) to clarify the pathways involving putative CP targets, and visualized by online mapping website Omicshare Tools (http://www.omicshare.com/tools/index.php/).", "Each experiment was repeated at least in triplicate and data were shown as mean ± SD. Statistical difference in GO and KEGG analysis was performed using hypergeometric and Fisher's exact tests. P<0.05 was considered statistically significant.", "Screening and Analysis of Active Compounds First, all the compounds related to four TCMs were retrieved by TCMSP. Second, according to the screening conditions of OB ≥30% and DL ≥0.18,10 23 active constituents of honeysuckle (17 predicted relevant targets), 36 of baicalensis (30 predicted relevant targets), 17 of bupleurum (13 predicted relevant targets), and no suitable compounds of cicada slough were obtained. 
Among them, baicalensis and honeysuckle contained two identical ingredients, while bupleurum and honeysuckle contained three identical components. Finally, a total of 55 active components were obtained for the data analysis after removing redundant entries (Table 1).\nFirst, all the compounds related to four TCMs were retrieved by TCMSP. Second, according to the screening conditions of OB ≥30% and DL ≥0.18,10 23 active constituents of honeysuckle (17 predicted relevant targets), 36 of baicalensis (30 predicted relevant targets), 17 of bupleurum (13 predicted relevant targets), and no suitable compounds of cicada slough were obtained. Among them, baicalensis and honeysuckle contained two identical ingredients, while bupleurum and honeysuckle contained three identical components. Finally, a total of 55 active components were obtained for the data analysis after removing redundant entries (Table 1).\nNetwork Analysis of “Active Component-Target” Interaction in YOL The compound-target network contained 175 nodes including 55 compound nodes and 113 target nodes, of which two had no corresponding target and 577 edges. As shown in Figure 1, blue represents compounds in bupleurum, red represents compounds in scutellaria, purple represents compounds in honeysuckle, green represents drug targets, each edge indicates the interaction between compounds and their targets, and two of the 55 compounds were excluded from the network construction. The degree value of a node represents the amount of connected routes in the network, and the larger the shape, the higher the degree value. The network in line with its topological properties showed that the nodes with higher degree of screening were analyzed. These nodes that connect compounds or targets act as hubs in the whole network and may predict key compounds or targets. The top five compounds ranked by degree were MOL000098-quercetin, MOL000422-Kaempferol, MOL000449-stigmasterol, MOL000358-quebrachol and MOL000006-luteolin, which could interact with 154, 68, 39, 32 and 28 target proteins, respectively. PTGS1, NCOA2, AR, PRSS1 and PPARG were the top five targets with the highest degree, which could interact with 42, 38, 29, 28 and 17 compounds, respectively.Figure 1Compounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.\nCompounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.\nThe compound-target network contained 175 nodes including 55 compound nodes and 113 target nodes, of which two had no corresponding target and 577 edges. As shown in Figure 1, blue represents compounds in bupleurum, red represents compounds in scutellaria, purple represents compounds in honeysuckle, green represents drug targets, each edge indicates the interaction between compounds and their targets, and two of the 55 compounds were excluded from the network construction. The degree value of a node represents the amount of connected routes in the network, and the larger the shape, the higher the degree value. The network in line with its topological properties showed that the nodes with higher degree of screening were analyzed. 
These nodes that connect compounds or targets act as hubs in the whole network and may predict key compounds or targets. The top five compounds ranked by degree were MOL000098-quercetin, MOL000422-Kaempferol, MOL000449-stigmasterol, MOL000358-quebrachol and MOL000006-luteolin, which could interact with 154, 68, 39, 32 and 28 target proteins, respectively. PTGS1, NCOA2, AR, PRSS1 and PPARG were the top five targets with the highest degree, which could interact with 42, 38, 29, 28 and 17 compounds, respectively.Figure 1Compounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.\nCompounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.\nThe Construction of Key Protein Network of YOL and CP Through the internationally recognized CTD and GeneCards disease databases, 5150 and 1247 pharyngitis-related targets were obtained, respectively. Through the Venny online tool, 30 targets of honeysuckle, scutellaria, bupleuri and pharyngitis were obtained, including VEGFA, IL6, ESR1, RELA, HIF1A, etc. (Figure 2A). Finally, STRING online tool was used to construct the PPI network interaction map of drug and disease. It contained 30 nodes representing proteins and 48 edges representing the interaction between proteins. The thicker the line, the higher is the correlation degree. Through the protein interaction network, we could further explore the therapeutic target and mechanism of YOL for CP (Figure 2B).Figure 2(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.\n(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.\nThrough the internationally recognized CTD and GeneCards disease databases, 5150 and 1247 pharyngitis-related targets were obtained, respectively. Through the Venny online tool, 30 targets of honeysuckle, scutellaria, bupleuri and pharyngitis were obtained, including VEGFA, IL6, ESR1, RELA, HIF1A, etc. (Figure 2A). Finally, STRING online tool was used to construct the PPI network interaction map of drug and disease. It contained 30 nodes representing proteins and 48 edges representing the interaction between proteins. The thicker the line, the higher is the correlation degree. Through the protein interaction network, we could further explore the therapeutic target and mechanism of YOL for CP (Figure 2B).Figure 2(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.\n(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.\nPharmacological Mechanisms of YOL Acting on CP A total of 30 common targets were obtained by DAVID online tool and 148 GO items were obtained (p < 0.05) by performing the functional enrichment analysis. There were 102 entries on biological process (BP), 34 entries on Molecular Function (MF), and 12 entries on cell composition (CC) (Figure 3A). 
A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p<0.05), involving cancer, PI3K-AKT, hepatitis, proteoglycans, p53, HIF-1 signaling pathways, etc. (Figure 3B and Table S1).Figure 3(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.\n(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.\nA total of 30 common targets were obtained by DAVID online tool and 148 GO items were obtained (p < 0.05) by performing the functional enrichment analysis. There were 102 entries on biological process (BP), 34 entries on Molecular Function (MF), and 12 entries on cell composition (CC) (Figure 3A). A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p<0.05), involving cancer, PI3K-AKT, hepatitis, proteoglycans, p53, HIF-1 signaling pathways, etc. (Figure 3B and Table S1).Figure 3(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.\n(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.\nExperimental Verification of Predicted Results In order to experimentally demonstrate the predictions about the molecular mechanism of YOL, we chose two ingredients for testing (Figure 4), based on their composition score, the group to which they belong, and their herbal origin to examine how ingredients from different herbs affect the same proteins. The effects of these compounds on the expression levels of related proteins were analyzed by Western blot. Macrophages secrete several inflammatory cytokines, which play significant roles in inflammation. Signal transduction and transcriptional activator 3 (STAT3), nuclear factor kappa B (NF-kB), and phosphatidylinositol 3-kinase (PI3K)/Akt pathways are critical inflammatory pathways. LPS can significantly increase the production of many proinflammatory factors, including inducible NO synthase (iNOS) and cyclooxygenase-2 (COX-2) in macrophages. After pretreatment of macrophages with YOL, luteolin and baicalein, the protein expression levels of iNOS and COX-2 induced by LPS were significantly inhibited in a dose-dependent manner (Figure 4A). The related protein changes were consistent with iNOS and COX-2 inflammation protein expression (Figure 4B–D).Figure 4The YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.\nThe YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. 
#P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.\nIn order to experimentally demonstrate the predictions about the molecular mechanism of YOL, we chose two ingredients for testing (Figure 4), based on their composition score, the group to which they belong, and their herbal origin to examine how ingredients from different herbs affect the same proteins. The effects of these compounds on the expression levels of related proteins were analyzed by Western blot. Macrophages secrete several inflammatory cytokines, which play significant roles in inflammation. Signal transduction and transcriptional activator 3 (STAT3), nuclear factor kappa B (NF-kB), and phosphatidylinositol 3-kinase (PI3K)/Akt pathways are critical inflammatory pathways. LPS can significantly increase the production of many proinflammatory factors, including inducible NO synthase (iNOS) and cyclooxygenase-2 (COX-2) in macrophages. After pretreatment of macrophages with YOL, luteolin and baicalein, the protein expression levels of iNOS and COX-2 induced by LPS were significantly inhibited in a dose-dependent manner (Figure 4A). The related protein changes were consistent with iNOS and COX-2 inflammation protein expression (Figure 4B–D).Figure 4The YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.\nThe YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.", "First, all the compounds related to four TCMs were retrieved by TCMSP. Second, according to the screening conditions of OB ≥30% and DL ≥0.18,10 23 active constituents of honeysuckle (17 predicted relevant targets), 36 of baicalensis (30 predicted relevant targets), 17 of bupleurum (13 predicted relevant targets), and no suitable compounds of cicada slough were obtained. Among them, baicalensis and honeysuckle contained two identical ingredients, while bupleurum and honeysuckle contained three identical components. Finally, a total of 55 active components were obtained for the data analysis after removing redundant entries (Table 1).", "The compound-target network contained 175 nodes including 55 compound nodes and 113 target nodes, of which two had no corresponding target and 577 edges. As shown in Figure 1, blue represents compounds in bupleurum, red represents compounds in scutellaria, purple represents compounds in honeysuckle, green represents drug targets, each edge indicates the interaction between compounds and their targets, and two of the 55 compounds were excluded from the network construction. The degree value of a node represents the amount of connected routes in the network, and the larger the shape, the higher the degree value. 
The network in line with its topological properties showed that the nodes with higher degree of screening were analyzed. These nodes that connect compounds or targets act as hubs in the whole network and may predict key compounds or targets. The top five compounds ranked by degree were MOL000098-quercetin, MOL000422-Kaempferol, MOL000449-stigmasterol, MOL000358-quebrachol and MOL000006-luteolin, which could interact with 154, 68, 39, 32 and 28 target proteins, respectively. PTGS1, NCOA2, AR, PRSS1 and PPARG were the top five targets with the highest degree, which could interact with 42, 38, 29, 28 and 17 compounds, respectively.Figure 1Compounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.\nCompounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value.", "Through the internationally recognized CTD and GeneCards disease databases, 5150 and 1247 pharyngitis-related targets were obtained, respectively. Through the Venny online tool, 30 targets of honeysuckle, scutellaria, bupleuri and pharyngitis were obtained, including VEGFA, IL6, ESR1, RELA, HIF1A, etc. (Figure 2A). Finally, STRING online tool was used to construct the PPI network interaction map of drug and disease. It contained 30 nodes representing proteins and 48 edges representing the interaction between proteins. The thicker the line, the higher is the correlation degree. Through the protein interaction network, we could further explore the therapeutic target and mechanism of YOL for CP (Figure 2B).Figure 2(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.\n(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP.", "A total of 30 common targets were obtained by DAVID online tool and 148 GO items were obtained (p < 0.05) by performing the functional enrichment analysis. There were 102 entries on biological process (BP), 34 entries on Molecular Function (MF), and 12 entries on cell composition (CC) (Figure 3A). A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p<0.05), involving cancer, PI3K-AKT, hepatitis, proteoglycans, p53, HIF-1 signaling pathways, etc. (Figure 3B and Table S1).Figure 3(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.\n(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP.", "In order to experimentally demonstrate the predictions about the molecular mechanism of YOL, we chose two ingredients for testing (Figure 4), based on their composition score, the group to which they belong, and their herbal origin to examine how ingredients from different herbs affect the same proteins. The effects of these compounds on the expression levels of related proteins were analyzed by Western blot. 
Macrophages secrete several inflammatory cytokines, which play significant roles in inflammation. Signal transduction and transcriptional activator 3 (STAT3), nuclear factor kappa B (NF-kB), and phosphatidylinositol 3-kinase (PI3K)/Akt pathways are critical inflammatory pathways. LPS can significantly increase the production of many proinflammatory factors, including inducible NO synthase (iNOS) and cyclooxygenase-2 (COX-2) in macrophages. After pretreatment of macrophages with YOL, luteolin and baicalein, the protein expression levels of iNOS and COX-2 induced by LPS were significantly inhibited in a dose-dependent manner (Figure 4A). The related protein changes were consistent with iNOS and COX-2 inflammation protein expression (Figure 4B–D).Figure 4The YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.\nThe YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group.", "Many diseases, including cancer and chronic inflammation, are known to be regulated by multiple signaling pathways. TCMs are considered as multi-component and multi-target therapeutic drugs, which is in accordance with the methodologies of network pharmacology. The holistic view of TCM considers the human body as a complex biological network system, which is connected with network pharmacology. By combining the network pharmacology with the holistic view of TCM, the disease-target-drug network is obtained. The network shows that different components of TCM may act on one target, and different targets may act on the same pathway. It is also possible that the same component of TCM acts on different targets, and the same target acts on different pathways. The multi-component, multi-target and multi-pathway of TCM can be clearly visualized through network pharmacology, which resolves many difficulties encountered in the study of TCM.13–15 CP is the diffuse inflammation of pharyngeal mucosa, submucosa and lymph tissues. Clinically, CP is mainly manifested as pharyngeal discomfort (foreign body sensation, burning sensation, stimulation sensation, etc.), pharyngeal itching, cough, etc. Pharyngitis belongs to the category of “Fenghou Bi” in TCM. Pharyngitis pertains to the category of “wind throat arthralgia” in Chinese medicine, and is often caused by evil heat entering the body, and heat in the lungs and stomach. YOL is an oral TCM preparation, which is further optimized on the basis of Jinchan oral liquid developed by Children’s Hospital of Suzhou University.5,6 However, there is a lack of research on its active ingredients and its mechanism of action.\nIn this study, 30 potential targets, 102 biological processes, 12 molecular functions, 34 cell components, and 46 KEGG pathways were obtained. 
The active ingredients in YOL can treat CP through multiple targets and pathways. To elucidate the biological pathways that may be affected by YOL, a significantly overexpressed KEGG pathway was identified. The results showed VEGFA, IL6, ESR1 and RELA as the key targets and the involvement of PI3K-Akt, p53 and HIF-1 signaling pathways. TCM aims to restore a patient’s health through the use of a TCM prescription, which is usually composed of two or more TCMs in optimal proportions.\nYOL, for example, consists of four TCMs. As expected, most of the selected active ingredients are directly or indirectly associated with inflammation. In a previous study, neochlorogenic acid was demonstrated to inhibit LPS-activated inflammatory responses through up-regulation of Nrf2/HO-1 and AMPK pathways.16 The decoction extract from six natural herbs, which contain honeysuckle, exhibits anti-inflammatory and immunomodulatory effects by targeting NF-κB/IL-6/STAT3 signaling.17 Luteolin is abundant in honeysuckle, which alleviates CP by inhibiting the NF-κB pathway and anti-inflammatory polarization of M1 macrophages.11 Protective effects of baicalein on liver injury in mice induced by multi-microbial septicemia were found to be based on inhibiting inflammation.12 Network pharmacological studies have shown that the main active components of honeysuckle may play a role through inflammation-related proteins such as HSP90AA1, HSP90AB1, ESR1, PTGS2, TERT and PPARG, and especially heat shock protein HSP90A (HSP90AA1).18 Flavonoids such as baicalin and wogonin alone and in combination have anti-inflammatory activity in Scutellaria.19 Network pharmacological studies have also shown that alkaloids such as coptisine and epiphone in Scutellaria and components such as dihydrokaempferol, and rographolide flavonoids also show anti-inflammatory activity. MAPK14, TNFRSF1A, EGFR and SELE are the primary anti-inflammatory components of Scutellaria. The anti-inflammatory active components in Scutellaria can directly affect MAPK14 and EGFR to produce anti-inflammatory effects, and can also indirectly affect other targets to exert anti-inflammatory effects. As one of the four p38 MAPKs, MAPK14 plays a vital role in the cellular cascade induced by extracellular stimuli such as proinflammatory cytokines or physical stimuli.20 Studies predicted that the pharmacodynamic basis of Bupleurum is mainly saponins, flavonoids, volatile oils, fatty acids and other components. Flavonoids mainly act on PI3K-Akt, NF-kB and other pathways, participate in regulating inflammatory factors, estrogen signal transduction and other physiological processes. Bupleurum-Scutellaria drug pair, first used in small bupleurum decoction in Treatise on Febrile Diseases, is an important part of Bupleurum prescription. Bupleurum is bitter, flat and clear, and can disperse the stagnation of bile fire. Scutellaria has a bitter and cold taste, and can clear internal heat. Both of them must be used together to regulate the liver and gallbladder, and clear the dampness and heat of internal accumulation, which are commonly used to treat fever, pharyngitis, and respiratory diseases.21 Cicada slough can evacuate wind and heat, and clear the throat. Clinically, it is common in classical prescriptions such as Sheng jiang San and Pharyngitis San. However, the active components of Cicada slough did not satisfy the prediction requirements in this research.\nYOL is composed of honeysuckle, scutellaria, bupleurum and cicada slough. 
According to TCM theory, bupleurum is bitter and flat, enters the liver and gallbladder meridian, penetrates and clears Shaoyang, and can drain the stagnation of gas. Scutellaria is bitter and cold, and clears the heat of Shaoyang. The actions of Bupleurum facilitate the clear discharge of Scutellaria, the two are compatible, and achieve the goal of reconciliation and Shaoyang. Honeysuckle cools through the surface, has heat-clearing and detoxifying effects, and also has the effect of fragrant obscenity. Cicada slough is light floating, clears the coke gas, shows honeysuckle compatibility, understanding the table evil, and is combined with the “treatment of the coke such as feather.” The combination of the four drugs, Xuan and descent together, clear the evil, thereby clearing heat and detoxification, warming the evil without hiding, scattered outside and recovered. Therefore, the treatment of CP with YOL can control the pharyngitis, regulate the balance of Yin and Yang, strengthen the foundation and improve the immunity of the body.", "This study used the network pharmacology platform to probe the YOL treatment of CP by multi-target analysis. The results showed that YOL could treat CP by acting on related targets, which were consistent with the reported literature. In addition, we have verified that YOL and its two monomer compounds, Luteolin and Baicalein inhibited activation of the NF-kB, Stat3 and PI3K-Akt pathways. Although the potential advantages of the “network target, multi-component” strategy of network pharmacology are obvious, the content of TCM ingredients is usually ignored in the research of network pharmacology of Chinese medicine, and the influence of content and concentration on the efficacy cannot be ignored. The predicted targets, which have not been discussed, may provide clues for further research on the mechanism of YOL." ]
[ "intro", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Yinqin oral liquid", "chronic pharyngitis", "network pharmacology", "NF-kB", "multi-target analysis" ]
Introduction: Chronic pharyngitis (CP) is a very common disease associated with chronic inflammation involving pharyngeal mucosa, submucosal and lymphatic tissues. Clinically, CP is mainly manifested as pharyngeal discomfort (foreign body sensation, burning sensation, irritation, etc.), with occasional pharyngeal itch, cough, etc., which belongs to the category of “slow throat arthralgia” in the traditional Chinese medicine (TCM).1 CP has a high incidence and accounts for 10–12% of all pharyngeal diseases, with a long disease course, recurrence and is difficult to cure. At present, western medicine uses antibiotics supplemented by hormone preparations, such as dexamethasone and antiviral drugs, for the treatment of pharyngitis. These treatment methods have drawbacks such as narrow therapeutic spectrum, recurrence, poor tolerance and major toxic and side effects. Hence, developing more effective therapeutic strategies against CP, and reducing the occurrence of side effects from the current therapeutic options is of great clinical importance. TCM is a major component of complementary and alternative medicine. Owing to its excellent clinical efficacy, TCM is a research hotspot worldwide. Dialectical theory of TCM is used to treat acute and chronic pharyngitis and has unique advantages such as minor toxic side effects, obvious curative effect and treatment of both symptoms and root causes.2–4 The pharmacological effects of TCM herbal formulae play an important role in their appropriate application. However, the complex nature of herbal formulae has impeded this understanding. Numerous chemical ingredients involving multiple potential targets are present in an herbal formula. It is not adequate to explain the effects produced by a whole herbal formula if we individually consider every single ingredient in it. Yinqin oral liquid (YOL) is composed of extracts of honeysuckle, scutellaria, bupleurum and cicada. YOL has the effects of relieving wind, muscle and clearing heat, which is mainly used for cold, fever, aversion to cold, pharyngeal pain and other upper respiratory tract infections caused by wind fever and wind-cold. Long-term clinical observation has proven its effect in relieving pharyngeal pain and treating hand-foot-mouth disease. YOL contains several chemical compounds, which regulate diverse targets, and thus precisely determine the pharmacological mechanisms involved in its therapeutic actions. In addition, there are several challenges in the relationships between the herbs and diseases.5,6 Network pharmacology is based on the theory of system biology. As a new subject, it selects specific signal nodes to design multi-target drug molecules through the network analysis of system biology. It is possible to study both the active components and the potential gene targets from TCM because of the establishment of network pharmacology and bioinformatics. This study predicted and analyzed the active components and target of YOL using network pharmacology, to explore the rationality of its formula and the scientific nature of treating CP, and further explore the pharmacological mechanisms of YOL on CP. Materials and Methods: Chemicals and Reagents Yinqin Oral Liquid (YOL) was provided by su zhou si yuan natural products research and development Co. Ltd. YOL is composed of honeysuckle, scutellaria, bupleurum and cicada, decompressed and concentrated to 2 g/mL through boiling and alcohol sinking, and stored at 4°C for later use. 
Primary antibodies: COX-2 and iNOS antibodies were purchased from Abcam (Cambridge, UK); glyceraldehyde-3-phosphate dehydrogenase (GAPDH) antibody was obtained from Millipore (Billerica, MA, USA); PI3K, p-AKT, AKT, p-Stats, Stat3 and p-p65 (S536) antibodies were purchased from Cell Signaling Technology (Danvers, MA, USA); and p65 antibody was obtained from Santa Cruz (CA, USA). The horseradish peroxidase (HRP)-conjugated sheep anti-mouse or anti-rabbit secondary antibodies were purchased from Thermo Fisher (Waltham, MA, USA). The proteins were visualized using an ECL detection kit (Thermo Fisher). Cell Culture RAW264.7 cells, a mouse macrophage cell line (Cell Bank of Chinese Academy of Sciences, Shanghai, China), were cultured using DMEM supplemented with FBS (10%) and penicillin-streptomycin (1%). The cells were cultured at 37°C in a humidified environment of 5% carbon dioxide. The medium was changed every other day and the cells were passaged at a dilution of 1:3. Collection of Drug Molecular Information and Screening of Active Ingredients In the TCMSP database (https://tcmsp-e.com/),7 the “Herb name” field was used to retrieve the compounds and ADME parameters of honeysuckle, scutellaria, bupleurum and cicada slough, and compounds meeting the screening criteria of oral bioavailability (OB) ≥30% and drug-likeness (DL) ≥0.18 were retained as active ingredients (Table 1). YOL ingredients were supplemented through literature review. 
Potential targets related to active ingredients were searched through the TCMSP database and Cytoscape 3.6.1 software.Table 155 Active Compounds Predicted by OB and DL Among 4 Herbs in YOLDrugMol IDMolecule NameMWOB%DLHoneysuckleMOL001494Mandenol308.56420.19MOL001495Ethyl linolenate306.5446.10.2MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL003006(-)-(3R,8S,9R,9aS,10aS)-9-ethenyl-8-(beta-D-glucopyranosyloxy)-2,3,9,9a,10,10a-hexahydro-5-oxo-5H,8H-pyrano[4,3-d]oxazolo[3,2-a]pyridine-3-carboxylic acid_qt281.2987.470.23MOL003014Secologanic dibutylacetal_qt384.5753.650.29MOL002773Beta-carotene536.9637.180.58MOL003036ZINC03978781412.7743.830.76MOL003044Chryseriol300.2835.850.27MOL0030955-hydroxy-7-methoxy-2-(3,4,5-trimethoxyphenyl)chromone358.3751.960.41MOL003111Centauroside_qt434.4855.790.5MOL003117Ioniceracetalides B_qt314.3761.190.19MOL003128Dinethylsecologanoside434.4448.460.48MOL000358Beta-sitosterol414.7936.910.75MOL000422Kaempferol286.2541.880.24MOL000449Stigmasterol412.7743.830.76MOL000006Luteolin286.2536.160.25MOL000098Quercetin302.2546.430.28ScutellariaMOL000073Ent-Epicatechin290.2948.960.24MOL000173Wogonin284.2830.680.23MOL000228(2R)-7-hydroxy-5-methoxy-2-phenylchroman-4-one270.355.230.2MOL000359Sitosterol414.7936.910.75MOL000525Norwogonin270.2539.40.21MOL0005525,2ʹ-Dihydroxy-6,7,8-trimethoxyflavone344.3431.710.35MOL001458Coptisine320.3430.670.86MOL001490Bis[(2S)-2-ethylhexyl] benzene-1,2-dicarboxylate390.6243.590.35MOL001689Acacetin284.2834.970.24MOL002714Baicalein270.2533.520.21MOL002879Diop390.6243.590.39MOL002897Epiberberine336.3943.090.78MOL0029095,7,2,5-tetrahydroxy-8,6-dimethoxyflavone376.3433.820.45MOL002910Carthamidin288.2741.150.24MOL002913Dihydrobaicalin_qt272.2740.040.21MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL002915Salvigenin328.3449.070.33MOL0029175,2ʹ,6ʹ-Trihydroxy-7,8-dimethoxyflavone330.3145.050.33MOL0029255,7,2ʹ,6ʹ-Tetrahydroxyflavone286.2537.010.24MOL002927Skullcapflavone II374.3769.510.44MOL002928Oroxylin a284.2841.370.23MOL002932Panicolin314.3176.260.29MOL0029335,7,4ʹ-Trihydroxy-8-methoxyflavone300.2836.560.27MOL002934Neobaicalein374.37104.30.44MOL002937Dihydrooroxylin A286.366.060.23MOL008206Moslosooflavone298.3144.090.25MOL01041511,13-Eicosadienoic acid, methyl ester322.5939.280.23MOL012266Rivularin344.3437.940.37MOL000490Petunidin317.2930.050.31MOL0045983,5,6,7-tetramethoxy-2-(3,4,5-trimethoxyphenyl)chromone432.4631.970.59BupleurumMOL002776Baicalin446.3940.120.75MOL001645Kaempferol308.5642.10.2MOL004718Linoleyl acetate412.7742.980.76MOL004653α-spinasterol426.546.060.66MOL004624Stigmasterol348.4847.720.53MOL004609Areapillin360.3448.960.41MOL000354Isorhamnetin316.2849.60.31MOL013187Cubebin356.457.130.64 55 Active Compounds Predicted by OB and DL Among 4 Herbs in YOL In the TCMSP database (https://tcmsp-e.com/),7 the “Herb name” was selected to retrieve the molecular ADME parameter information of honeysuckle, scutellaria, bupleurum and cicada slough, and then the screen conformed to the oral bioavailability (OB) ADME parameter information of honey (Drug-likeness, DL) of ≥0.18 or more active ingredients (Table 1). YOL ingredients were supplemented through literature review. 
Potential targets related to active ingredients were searched through the TCMSP database and Cytoscape 3.6.1 software.Table 155 Active Compounds Predicted by OB and DL Among 4 Herbs in YOLDrugMol IDMolecule NameMWOB%DLHoneysuckleMOL001494Mandenol308.56420.19MOL001495Ethyl linolenate306.5446.10.2MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL003006(-)-(3R,8S,9R,9aS,10aS)-9-ethenyl-8-(beta-D-glucopyranosyloxy)-2,3,9,9a,10,10a-hexahydro-5-oxo-5H,8H-pyrano[4,3-d]oxazolo[3,2-a]pyridine-3-carboxylic acid_qt281.2987.470.23MOL003014Secologanic dibutylacetal_qt384.5753.650.29MOL002773Beta-carotene536.9637.180.58MOL003036ZINC03978781412.7743.830.76MOL003044Chryseriol300.2835.850.27MOL0030955-hydroxy-7-methoxy-2-(3,4,5-trimethoxyphenyl)chromone358.3751.960.41MOL003111Centauroside_qt434.4855.790.5MOL003117Ioniceracetalides B_qt314.3761.190.19MOL003128Dinethylsecologanoside434.4448.460.48MOL000358Beta-sitosterol414.7936.910.75MOL000422Kaempferol286.2541.880.24MOL000449Stigmasterol412.7743.830.76MOL000006Luteolin286.2536.160.25MOL000098Quercetin302.2546.430.28ScutellariaMOL000073Ent-Epicatechin290.2948.960.24MOL000173Wogonin284.2830.680.23MOL000228(2R)-7-hydroxy-5-methoxy-2-phenylchroman-4-one270.355.230.2MOL000359Sitosterol414.7936.910.75MOL000525Norwogonin270.2539.40.21MOL0005525,2ʹ-Dihydroxy-6,7,8-trimethoxyflavone344.3431.710.35MOL001458Coptisine320.3430.670.86MOL001490Bis[(2S)-2-ethylhexyl] benzene-1,2-dicarboxylate390.6243.590.35MOL001689Acacetin284.2834.970.24MOL002714Baicalein270.2533.520.21MOL002879Diop390.6243.590.39MOL002897Epiberberine336.3943.090.78MOL0029095,7,2,5-tetrahydroxy-8,6-dimethoxyflavone376.3433.820.45MOL002910Carthamidin288.2741.150.24MOL002913Dihydrobaicalin_qt272.2740.040.21MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL002915Salvigenin328.3449.070.33MOL0029175,2ʹ,6ʹ-Trihydroxy-7,8-dimethoxyflavone330.3145.050.33MOL0029255,7,2ʹ,6ʹ-Tetrahydroxyflavone286.2537.010.24MOL002927Skullcapflavone II374.3769.510.44MOL002928Oroxylin a284.2841.370.23MOL002932Panicolin314.3176.260.29MOL0029335,7,4ʹ-Trihydroxy-8-methoxyflavone300.2836.560.27MOL002934Neobaicalein374.37104.30.44MOL002937Dihydrooroxylin A286.366.060.23MOL008206Moslosooflavone298.3144.090.25MOL01041511,13-Eicosadienoic acid, methyl ester322.5939.280.23MOL012266Rivularin344.3437.940.37MOL000490Petunidin317.2930.050.31MOL0045983,5,6,7-tetramethoxy-2-(3,4,5-trimethoxyphenyl)chromone432.4631.970.59BupleurumMOL002776Baicalin446.3940.120.75MOL001645Kaempferol308.5642.10.2MOL004718Linoleyl acetate412.7742.980.76MOL004653α-spinasterol426.546.060.66MOL004624Stigmasterol348.4847.720.53MOL004609Areapillin360.3448.960.41MOL000354Isorhamnetin316.2849.60.31MOL013187Cubebin356.457.130.64 55 Active Compounds Predicted by OB and DL Among 4 Herbs in YOL Known Therapeutic Targets of Chronic Pharyngitis (CP) In the drug treatment of CP, the known therapeutic targets were mainly obtained from two sources: GeneCards database (https://www.genecards.org/) and CTD database (http://ctd.mdibl.org/). The data analysis was conducted in 5878 therapeutic targets for the treatment of CP, after the redundant entries were removed. Figure S1 shows the specific information about these known therapeutic targets. In the drug treatment of CP, the known therapeutic targets were mainly obtained from two sources: GeneCards database (https://www.genecards.org/) and CTD database (http://ctd.mdibl.org/). The data analysis was conducted in 5878 therapeutic targets for the treatment of CP, after the redundant entries were removed. 
Figure S1 shows the specific information about these known therapeutic targets. Network Construction and Analysis To understand the relationship between the herbs and chemical compounds that make up YOL, its putative targets, and the known therapeutic targets for CP, network visualization was conducted using Cytoscape 3.6.1 software,8 and the degree between compounds and targets was analyzed. Protein-Protein Interaction (PPI) Data PPI core network (PPICN) is used to study the relationship between chemical compounds and disease-associated protein molecules based on biochemistry, signal transduction and genetic networks.9 The differing ID types of the proteins were converted to UniProt IDs. In order to further understand how YOL and CP interact at the protein level, the selected targets in this study were uploaded on the online Venn diagram (http://bioinfogp.cnb.csic.es/tools/venny/index.html). Moreover, the genes at the intersection of the active compound and CP were selected and uploaded on STRING 10.5 (https://string-db.org) to obtain the PPI relationship. Pathway Enrichment Analysis The data were imported in Gene Symbol format into the DAVID 6.8 database (https://david.ncifcrf.gov/). Cellular component (CC), biological process (BP), and molecular function (MF) were selected, respectively. A pathway enrichment analysis was performed using GO biological process enrichment analysis and KEGG pathway analysis (http://www.genome.jp/kegg/) to clarify the pathways involving putative CP targets, and the results were visualized with the online mapping website Omicshare Tools (http://www.omicshare.com/tools/index.php/). Statistical Analysis Each experiment was repeated at least in triplicate and data were shown as mean ± SD. Statistical difference in GO and KEGG analysis was performed using hypergeometric and Fisher's exact tests. 
P<0.05 was considered statistically significant. Chemicals and Reagents: Yinqin Oral Liquid (YOL) was provided by Suzhou Siyuan Natural Products Research and Development Co., Ltd. YOL is composed of honeysuckle, scutellaria, bupleurum and cicada; the decoction was subjected to alcohol precipitation, concentrated under reduced pressure to 2 g/mL, and stored at 4°C for later use. Primary antibodies: COX-2 and iNOS antibodies were purchased from Abcam (Cambridge, UK); glyceraldehyde-3-phosphate dehydrogenase (GAPDH) antibody was obtained from Millipore (Billerica, MA, USA); PI3K, p-AKT, AKT, p-Stats, Stat3 and p-p65 (S536) antibodies were purchased from Cell Signaling Technology (Danvers, MA, USA); and p65 antibody was obtained from Santa Cruz (CA, USA). The horseradish peroxidase (HRP)-conjugated sheep anti-mouse or anti-rabbit secondary antibodies were purchased from Thermo Fisher (Waltham, MA, USA). The proteins were visualized using an ECL detection kit (Thermo Fisher). Cell Culture: RAW264.7 cells, a mouse macrophage cell line (Cell Bank of Chinese Academy of Sciences, Shanghai, China), were cultured using DMEM supplemented with FBS (10%) and penicillin-streptomycin (1%). The cells were cultured at 37°C in a humidified environment of 5% carbon dioxide. The medium was changed every other day and the cells were passaged at a dilution of 1:3. Collection of Drug Molecular Information and Screening of Active Ingredients: In the TCMSP database (https://tcmsp-e.com/),7 the “Herb name” field was used to retrieve the compounds and ADME parameters of honeysuckle, scutellaria, bupleurum and cicada slough, and compounds meeting the screening criteria of oral bioavailability (OB) ≥30% and drug-likeness (DL) ≥0.18 were retained as active ingredients (Table 1). YOL ingredients were supplemented through literature review. 
Potential targets related to active ingredients were searched through the TCMSP database and Cytoscape 3.6.1 software.Table 155 Active Compounds Predicted by OB and DL Among 4 Herbs in YOLDrugMol IDMolecule NameMWOB%DLHoneysuckleMOL001494Mandenol308.56420.19MOL001495Ethyl linolenate306.5446.10.2MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL003006(-)-(3R,8S,9R,9aS,10aS)-9-ethenyl-8-(beta-D-glucopyranosyloxy)-2,3,9,9a,10,10a-hexahydro-5-oxo-5H,8H-pyrano[4,3-d]oxazolo[3,2-a]pyridine-3-carboxylic acid_qt281.2987.470.23MOL003014Secologanic dibutylacetal_qt384.5753.650.29MOL002773Beta-carotene536.9637.180.58MOL003036ZINC03978781412.7743.830.76MOL003044Chryseriol300.2835.850.27MOL0030955-hydroxy-7-methoxy-2-(3,4,5-trimethoxyphenyl)chromone358.3751.960.41MOL003111Centauroside_qt434.4855.790.5MOL003117Ioniceracetalides B_qt314.3761.190.19MOL003128Dinethylsecologanoside434.4448.460.48MOL000358Beta-sitosterol414.7936.910.75MOL000422Kaempferol286.2541.880.24MOL000449Stigmasterol412.7743.830.76MOL000006Luteolin286.2536.160.25MOL000098Quercetin302.2546.430.28ScutellariaMOL000073Ent-Epicatechin290.2948.960.24MOL000173Wogonin284.2830.680.23MOL000228(2R)-7-hydroxy-5-methoxy-2-phenylchroman-4-one270.355.230.2MOL000359Sitosterol414.7936.910.75MOL000525Norwogonin270.2539.40.21MOL0005525,2ʹ-Dihydroxy-6,7,8-trimethoxyflavone344.3431.710.35MOL001458Coptisine320.3430.670.86MOL001490Bis[(2S)-2-ethylhexyl] benzene-1,2-dicarboxylate390.6243.590.35MOL001689Acacetin284.2834.970.24MOL002714Baicalein270.2533.520.21MOL002879Diop390.6243.590.39MOL002897Epiberberine336.3943.090.78MOL0029095,7,2,5-tetrahydroxy-8,6-dimethoxyflavone376.3433.820.45MOL002910Carthamidin288.2741.150.24MOL002913Dihydrobaicalin_qt272.2740.040.21MOL002914Eriodyctiol (flavanone)288.2741.350.24MOL002915Salvigenin328.3449.070.33MOL0029175,2ʹ,6ʹ-Trihydroxy-7,8-dimethoxyflavone330.3145.050.33MOL0029255,7,2ʹ,6ʹ-Tetrahydroxyflavone286.2537.010.24MOL002927Skullcapflavone II374.3769.510.44MOL002928Oroxylin a284.2841.370.23MOL002932Panicolin314.3176.260.29MOL0029335,7,4ʹ-Trihydroxy-8-methoxyflavone300.2836.560.27MOL002934Neobaicalein374.37104.30.44MOL002937Dihydrooroxylin A286.366.060.23MOL008206Moslosooflavone298.3144.090.25MOL01041511,13-Eicosadienoic acid, methyl ester322.5939.280.23MOL012266Rivularin344.3437.940.37MOL000490Petunidin317.2930.050.31MOL0045983,5,6,7-tetramethoxy-2-(3,4,5-trimethoxyphenyl)chromone432.4631.970.59BupleurumMOL002776Baicalin446.3940.120.75MOL001645Kaempferol308.5642.10.2MOL004718Linoleyl acetate412.7742.980.76MOL004653α-spinasterol426.546.060.66MOL004624Stigmasterol348.4847.720.53MOL004609Areapillin360.3448.960.41MOL000354Isorhamnetin316.2849.60.31MOL013187Cubebin356.457.130.64 55 Active Compounds Predicted by OB and DL Among 4 Herbs in YOL Known Therapeutic Targets of Chronic Pharyngitis (CP): In the drug treatment of CP, the known therapeutic targets were mainly obtained from two sources: GeneCards database (https://www.genecards.org/) and CTD database (http://ctd.mdibl.org/). The data analysis was conducted in 5878 therapeutic targets for the treatment of CP, after the redundant entries were removed. Figure S1 shows the specific information about these known therapeutic targets. Network Construction and Analysis: To understand the relationship between the herbs and chemical compounds that consist of YOL and its putative targets, and the therapeutic targets known for CP, Network Visualization was conducted using Cytoscape 3.6.1 software,8 and the degree between compounds and targets was analyzed. 
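The compound-target degree analysis described above was carried out in Cytoscape. Purely as an illustration of the same idea, a minimal Python sketch using pandas and networkx is given below; the input file name and the "compound"/"target" column names are hypothetical placeholders, and the snippet is not part of the original workflow.

```python
# Minimal sketch (not the study's Cytoscape workflow): build the compound-target
# network from screened TCMSP pairs and rank nodes by degree.
# The file name and column names are hypothetical placeholders.
import pandas as pd
import networkx as nx

edges = pd.read_csv("compound_target_edges.csv")  # columns: compound, target

G = nx.Graph()
G.add_edges_from(zip(edges["compound"], edges["target"]))

# Degree = number of edges attached to a node; high-degree nodes are the hub
# compounds (e.g., quercetin) or hub targets (e.g., PTGS1) in the network.
ranked = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
print("Nodes:", G.number_of_nodes(), "Edges:", G.number_of_edges())
print("Top 5 hubs:", ranked[:5])
```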
Protein-Protein Interaction (PPI) Data: PPI core network (PPICN) is used to study the relationship between chemical compounds and disease-associated protein molecules based on biochemistry, signal transduction and genetic networks.9 The differing ID types of the proteins were converted to UniProt IDs. In order to further understand how YOL and CP interact at the protein level, the selected targets in this study were uploaded on the online Venn diagram (http://bioinfogp.cnb.csic.es/tools/venny/index.html). Moreover, the genes at the intersection of the active compound and CP were selected and uploaded on STRING 10.5 (https://string-db.org) to obtain the PPI relationship. Pathway Enrichment Analysis: The data were imported in Gene Symbol format into the DAVID 6.8 database (https://david.ncifcrf.gov/). Cellular component (CC), biological process (BP), and molecular function (MF) were selected, respectively. A pathway enrichment analysis was performed using GO biological process enrichment analysis and KEGG pathway analysis (http://www.genome.jp/kegg/) to clarify the pathways involving putative CP targets, and the results were visualized with the online mapping website Omicshare Tools (http://www.omicshare.com/tools/index.php/). Statistical Analysis: Each experiment was repeated at least in triplicate and data were shown as mean ± SD. Statistical difference in GO and KEGG analysis was performed using hypergeometric and Fisher's exact tests. P<0.05 was considered statistically significant. 
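The hypergeometric and Fisher's exact tests used for the GO and KEGG enrichment statistics can be reproduced in outline as follows. This is a minimal sketch with placeholder counts, assuming a simple gene-list-versus-background setup; it is not the DAVID implementation used in the study.

```python
# Illustrative sketch of the enrichment statistics behind GO/KEGG analysis.
# All counts are hypothetical placeholders, not values from this study.
from scipy.stats import hypergeom, fisher_exact

N = 20000  # background genes
K = 150    # background genes annotated to a given pathway
n = 30     # genes in the query list (e.g., the common YOL-CP targets)
k = 6      # query genes annotated to that pathway

# P(X >= k) under the hypergeometric distribution
p_hyper = hypergeom.sf(k - 1, N, K, n)

# Equivalent one-sided Fisher's exact test on the 2x2 contingency table
table = [[k, K - k], [n - k, N - K - (n - k)]]
_, p_fisher = fisher_exact(table, alternative="greater")

print(f"hypergeometric p = {p_hyper:.3g}, Fisher p = {p_fisher:.3g}")
```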
The network in line with its topological properties showed that the nodes with higher degree of screening were analyzed. These nodes that connect compounds or targets act as hubs in the whole network and may predict key compounds or targets. The top five compounds ranked by degree were MOL000098-quercetin, MOL000422-Kaempferol, MOL000449-stigmasterol, MOL000358-quebrachol and MOL000006-luteolin, which could interact with 154, 68, 39, 32 and 28 target proteins, respectively. PTGS1, NCOA2, AR, PRSS1 and PPARG were the top five targets with the highest degree, which could interact with 42, 38, 29, 28 and 17 compounds, respectively.Figure 1Compounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value. Compounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value. The compound-target network contained 175 nodes including 55 compound nodes and 113 target nodes, of which two had no corresponding target and 577 edges. As shown in Figure 1, blue represents compounds in bupleurum, red represents compounds in scutellaria, purple represents compounds in honeysuckle, green represents drug targets, each edge indicates the interaction between compounds and their targets, and two of the 55 compounds were excluded from the network construction. The degree value of a node represents the amount of connected routes in the network, and the larger the shape, the higher the degree value. The network in line with its topological properties showed that the nodes with higher degree of screening were analyzed. These nodes that connect compounds or targets act as hubs in the whole network and may predict key compounds or targets. The top five compounds ranked by degree were MOL000098-quercetin, MOL000422-Kaempferol, MOL000449-stigmasterol, MOL000358-quebrachol and MOL000006-luteolin, which could interact with 154, 68, 39, 32 and 28 target proteins, respectively. PTGS1, NCOA2, AR, PRSS1 and PPARG were the top five targets with the highest degree, which could interact with 42, 38, 29, 28 and 17 compounds, respectively.Figure 1Compounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value. Compounds-target network diagram. Blue represents compounds in bupleurum, Red represents compounds in scutellaria, Purple represents compounds in honeysuckle, Green represents drug targets. The size represents the degree value. The larger the shape, the larger the degree value. The Construction of Key Protein Network of YOL and CP Through the internationally recognized CTD and GeneCards disease databases, 5150 and 1247 pharyngitis-related targets were obtained, respectively. Through the Venny online tool, 30 targets of honeysuckle, scutellaria, bupleuri and pharyngitis were obtained, including VEGFA, IL6, ESR1, RELA, HIF1A, etc. (Figure 2A). Finally, STRING online tool was used to construct the PPI network interaction map of drug and disease. 
It contained 30 nodes representing proteins and 48 edges representing the interaction between proteins. The thicker the line, the higher is the correlation degree. Through the protein interaction network, we could further explore the therapeutic target and mechanism of YOL for CP (Figure 2B).Figure 2(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP. (A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP. Through the internationally recognized CTD and GeneCards disease databases, 5150 and 1247 pharyngitis-related targets were obtained, respectively. Through the Venny online tool, 30 targets of honeysuckle, scutellaria, bupleuri and pharyngitis were obtained, including VEGFA, IL6, ESR1, RELA, HIF1A, etc. (Figure 2A). Finally, STRING online tool was used to construct the PPI network interaction map of drug and disease. It contained 30 nodes representing proteins and 48 edges representing the interaction between proteins. The thicker the line, the higher is the correlation degree. Through the protein interaction network, we could further explore the therapeutic target and mechanism of YOL for CP (Figure 2B).Figure 2(A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP. (A) Venn diagram of compounds-target network diagram; (B) Core network diagram of protein interaction between YOL and CP. Pharmacological Mechanisms of YOL Acting on CP A total of 30 common targets were obtained by DAVID online tool and 148 GO items were obtained (p < 0.05) by performing the functional enrichment analysis. There were 102 entries on biological process (BP), 34 entries on Molecular Function (MF), and 12 entries on cell composition (CC) (Figure 3A). A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p<0.05), involving cancer, PI3K-AKT, hepatitis, proteoglycans, p53, HIF-1 signaling pathways, etc. (Figure 3B and Table S1).Figure 3(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP. (A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP. A total of 30 common targets were obtained by DAVID online tool and 148 GO items were obtained (p < 0.05) by performing the functional enrichment analysis. There were 102 entries on biological process (BP), 34 entries on Molecular Function (MF), and 12 entries on cell composition (CC) (Figure 3A). A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p<0.05), involving cancer, PI3K-AKT, hepatitis, proteoglycans, p53, HIF-1 signaling pathways, etc. (Figure 3B and Table S1).Figure 3(A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP. (A) GO analysis function annotation diagram. Biological process (BP), molecular function (MF), and cell composition (CC). (B) Enriched KEGG pathways of YOL selected targets for CP. 
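The 30 common targets above were obtained with the Venny web tool. As an illustration of the same step, the intersection of the predicted compound targets and the CP-related disease targets could also be computed with plain Python sets; the file names below are hypothetical placeholders.

```python
# Illustrative alternative to the Venny web tool: intersect the predicted
# compound targets with the GeneCards/CTD pharyngitis targets.
# File names are hypothetical placeholders for one-gene-per-line lists.
def read_gene_set(path):
    with open(path) as fh:
        return {line.strip().upper() for line in fh if line.strip()}

herb_targets = read_gene_set("yol_compound_targets.txt")
disease_targets = read_gene_set("cp_disease_targets.txt")

common = sorted(herb_targets & disease_targets)
print(len(common), "common targets, e.g.", common[:5])
# The resulting gene list is what would then be submitted to STRING for the PPI network.
```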
Experimental Verification of Predicted Results In order to experimentally demonstrate the predictions about the molecular mechanism of YOL, we chose two ingredients for testing (Figure 4), based on their composition score, the group to which they belong, and their herbal origin to examine how ingredients from different herbs affect the same proteins. The effects of these compounds on the expression levels of related proteins were analyzed by Western blot. Macrophages secrete several inflammatory cytokines, which play significant roles in inflammation. Signal transduction and transcriptional activator 3 (STAT3), nuclear factor kappa B (NF-kB), and phosphatidylinositol 3-kinase (PI3K)/Akt pathways are critical inflammatory pathways. LPS can significantly increase the production of many proinflammatory factors, including inducible NO synthase (iNOS) and cyclooxygenase-2 (COX-2) in macrophages. After pretreatment of macrophages with YOL, luteolin and baicalein, the protein expression levels of iNOS and COX-2 induced by LPS were significantly inhibited in a dose-dependent manner (Figure 4A). The related protein changes were consistent with iNOS and COX-2 inflammation protein expression (Figure 4B–D).Figure 4The YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group. The YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. The protein levels (normalized) of iNOS, COX2 (E), and p-p65, p65, PI3K, p-AKT, AKT, p-Stats, Stat3 (F) in cells. All the values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group. *P < 0.05 and **P < 0.01 vs LPS group. In order to experimentally demonstrate the predictions about the molecular mechanism of YOL, we chose two ingredients for testing (Figure 4), based on their composition score, the group to which they belong, and their herbal origin to examine how ingredients from different herbs affect the same proteins. The effects of these compounds on the expression levels of related proteins were analyzed by Western blot. Macrophages secrete several inflammatory cytokines, which play significant roles in inflammation. Signal transduction and transcriptional activator 3 (STAT3), nuclear factor kappa B (NF-kB), and phosphatidylinositol 3-kinase (PI3K)/Akt pathways are critical inflammatory pathways. LPS can significantly increase the production of many proinflammatory factors, including inducible NO synthase (iNOS) and cyclooxygenase-2 (COX-2) in macrophages. After pretreatment of macrophages with YOL, luteolin and baicalein, the protein expression levels of iNOS and COX-2 induced by LPS were significantly inhibited in a dose-dependent manner (Figure 4A). The related protein changes were consistent with iNOS and COX-2 inflammation protein expression (Figure 4B–D).Figure 4The YOL inhibits PI3K/p-AKT/p-Stats pathway in RAW 264.7 cells. Representative Western blot images show the relative expressions of iNOS, COX-2 (A), p-p65, p65 (B), PI3K, p-AKT, AKT (C), p-Stats, Stat3 (D) in the groups. 
Screening and Analysis of Active Compounds: First, all compounds related to the four TCMs were retrieved from TCMSP. Second, according to the screening conditions of OB ≥ 30% and DL ≥ 0.18,10 23 active constituents of honeysuckle (17 predicted relevant targets), 36 of Scutellaria baicalensis (30 predicted relevant targets), and 17 of bupleurum (13 predicted relevant targets) were obtained, while no suitable compounds of cicada slough met the criteria. Among them, Scutellaria baicalensis and honeysuckle shared two identical ingredients, and bupleurum and honeysuckle shared three. Finally, a total of 55 active components were retained for data analysis after removing redundant entries (Table 1). Network Analysis of “Active Component-Target” Interaction in YOL: The compound-target network contained 175 nodes, including 55 compound nodes and 113 target nodes (two compounds had no corresponding target and were excluded from the network construction), and 577 edges. As shown in Figure 1, blue represents compounds in bupleurum, red represents compounds in scutellaria, purple represents compounds in honeysuckle, green represents drug targets, and each edge indicates an interaction between a compound and its target. The degree value of a node represents the number of edges connected to it in the network; the larger the shape, the higher the degree value. Nodes with high degree were analyzed in line with the network's topological properties: nodes connecting many compounds or targets act as hubs in the whole network and may indicate key compounds or targets. The top five compounds ranked by degree were MOL000098-quercetin, MOL000422-kaempferol, MOL000449-stigmasterol, MOL000358-quebrachol and MOL000006-luteolin, which interacted with 154, 68, 39, 32 and 28 target proteins, respectively. PTGS1, NCOA2, AR, PRSS1 and PPARG were the top five targets by degree, interacting with 42, 38, 29, 28 and 17 compounds, respectively. Figure 1 Compound-target network diagram. Blue represents compounds in bupleurum, red represents compounds in scutellaria, purple represents compounds in honeysuckle, and green represents drug targets. Node size represents the degree value: the larger the shape, the larger the degree value.
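The compound screening and degree ranking described above map onto a small data-wrangling pipeline. The sketch below is a minimal illustration rather than the authors' code: the compound table, OB/DL values, and edge list are hypothetical stand-ins for the TCMSP exports, and the published network figure was drawn in Cytoscape.

import pandas as pd
import networkx as nx

# Hypothetical excerpt of a TCMSP-style compound table
compounds = pd.DataFrame({
    "mol_id": ["MOL000098", "MOL000006", "MOL000422", "MOL999999"],
    "name":   ["quercetin", "luteolin", "kaempferol", "filtered_out_compound"],
    "OB":     [46.4, 36.2, 41.9, 12.0],   # oral bioavailability (%), hypothetical values
    "DL":     [0.28, 0.25, 0.24, 0.05],   # drug-likeness, hypothetical values
})

# Apply the screening thresholds used in the study: OB >= 30% and DL >= 0.18
active = compounds[(compounds["OB"] >= 30) & (compounds["DL"] >= 0.18)]

# Hypothetical compound-target edges; the real ones come from TCMSP target predictions
edges = [("quercetin", "PTGS1"), ("quercetin", "PPARG"),
         ("luteolin", "PTGS1"), ("kaempferol", "NCOA2")]
g = nx.Graph()
g.add_edges_from(edges)

# Degree ranking: high-degree nodes correspond to the large hub nodes in Figure 1
for node, degree in sorted(g.degree, key=lambda kv: kv[1], reverse=True):
    print(node, degree)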
The Construction of Key Protein Network of YOL and CP: Through the internationally recognized CTD and GeneCards disease databases, 5150 and 1247 pharyngitis-related targets were obtained, respectively. Using the Venny online tool, 30 targets shared by honeysuckle, scutellaria, bupleurum and pharyngitis were identified, including VEGFA, IL6, ESR1, RELA and HIF1A (Figure 2A). Finally, the STRING online tool was used to construct the PPI network of drug and disease targets. It contained 30 nodes representing proteins and 48 edges representing interactions between the proteins; the thicker the line, the higher the correlation degree. Through this protein interaction network, the therapeutic targets and mechanism of YOL for CP could be explored further (Figure 2B). Figure 2 (A) Venn diagram of the compound-target network; (B) core network diagram of protein interactions between YOL and CP. Pharmacological Mechanisms of YOL Acting on CP: A total of 30 common targets were analyzed with the DAVID online tool, and functional enrichment analysis yielded 148 GO terms (p < 0.05): 102 biological process (BP), 34 molecular function (MF), and 12 cellular component (CC) terms (Figure 3A). KEGG pathway enrichment identified 46 signaling pathways (p < 0.05), involving the cancer, PI3K-AKT, hepatitis, proteoglycans, p53, and HIF-1 signaling pathways, among others (Figure 3B and Table S1). Figure 3 (A) GO functional annotation: biological process (BP), molecular function (MF), and cellular component (CC). (B) Enriched KEGG pathways of YOL-selected targets for CP. Experimental Verification of Predicted Results: To experimentally verify the predicted molecular mechanism of YOL, two ingredients were chosen for testing (Figure 4) based on their composition score, the group to which they belong, and their herbal origin, so as to examine how ingredients from different herbs affect the same proteins. The effects of these compounds on the expression levels of the related proteins were analyzed by Western blot. Macrophages secrete several inflammatory cytokines, which play significant roles in inflammation. The signal transducer and activator of transcription 3 (STAT3), nuclear factor kappa B (NF-kB), and phosphatidylinositol 3-kinase (PI3K)/Akt pathways are critical inflammatory pathways. LPS markedly increases the production of many proinflammatory factors, including inducible NO synthase (iNOS) and cyclooxygenase-2 (COX-2), in macrophages. After pretreatment of macrophages with YOL, luteolin or baicalein, the LPS-induced protein expression of iNOS and COX-2 was significantly inhibited in a dose-dependent manner (Figure 4A). The changes in the related pathway proteins were consistent with the iNOS and COX-2 inflammation protein expression (Figure 4B-D). Figure 4 YOL inhibits the PI3K/p-AKT/p-Stat3 pathway in RAW 264.7 cells. Representative Western blot images show the relative expression of iNOS and COX-2 (A), p-p65 and p65 (B), PI3K, p-AKT and AKT (C), and p-Stat3 and Stat3 (D) in the groups. Normalized protein levels of iNOS and COX-2 (E), and of p-p65, p65, PI3K, p-AKT, AKT, p-Stat3 and Stat3 (F) in cells. All values are presented as mean ± SD. #P < 0.05 and ##P < 0.01 vs control group; *P < 0.05 and **P < 0.01 vs LPS group.
Discussion: Many diseases, including cancer and chronic inflammation, are known to be regulated by multiple signaling pathways. TCMs are considered multi-component, multi-target therapeutic drugs, which accords with the methodology of network pharmacology. The holistic view of TCM regards the human body as a complex biological network system, a view that connects naturally with network pharmacology. By combining network pharmacology with the holistic view of TCM, a disease-target-drug network is obtained. The network shows that different components of a TCM may act on one target, and different targets may act on the same pathway; likewise, the same component may act on different targets, and the same target may act on different pathways. The multi-component, multi-target and multi-pathway nature of TCM can thus be clearly visualized through network pharmacology, which resolves many difficulties encountered in the study of TCM.13–15 CP is diffuse inflammation of the pharyngeal mucosa, submucosa and lymphoid tissues. Clinically, CP is mainly manifested as pharyngeal discomfort (foreign body sensation, burning sensation, irritation, etc.), pharyngeal itching, cough, etc. Pharyngitis belongs to the category of “Fenghou Bi” (wind-throat arthralgia) in TCM and is often attributed to pathogenic heat entering the body and heat in the lungs and stomach. YOL is an oral TCM preparation, further optimized on the basis of Jinchan oral liquid developed by the Children’s Hospital of Suzhou University.5,6 However, research on its active ingredients and mechanism of action is lacking. In this study, 30 potential targets, 102 biological processes, 34 molecular functions, 12 cellular components, and 46 KEGG pathways were obtained. The active ingredients in YOL can treat CP through multiple targets and pathways. To elucidate the biological pathways that may be affected by YOL, significantly enriched KEGG pathways were identified. The results highlighted VEGFA, IL6, ESR1 and RELA as key targets and implicated the PI3K-Akt, p53 and HIF-1 signaling pathways. TCM aims to restore a patient’s health through a prescription that is usually composed of two or more TCMs in optimal proportions; YOL, for example, consists of four TCMs. As expected, most of the selected active ingredients are directly or indirectly associated with inflammation.
In a previous study, neochlorogenic acid was demonstrated to inhibit LPS-activated inflammatory responses through up-regulation of the Nrf2/HO-1 and AMPK pathways.16 The decoction extract from six natural herbs, which contains honeysuckle, exhibits anti-inflammatory and immunomodulatory effects by targeting NF-κB/IL-6/STAT3 signaling.17 Luteolin, which is abundant in honeysuckle, alleviates CP by inhibiting the NF-κB pathway and the polarization of pro-inflammatory M1 macrophages.11 Protective effects of baicalein against liver injury in mice with polymicrobial sepsis were found to be based on inhibition of inflammation.12 Network pharmacological studies have shown that the main active components of honeysuckle may act through inflammation-related proteins such as HSP90AA1, HSP90AB1, ESR1, PTGS2, TERT and PPARG, especially heat shock protein HSP90A (HSP90AA1).18 Flavonoids such as baicalin and wogonin, alone and in combination, have anti-inflammatory activity in Scutellaria.19 Network pharmacological studies have also shown that alkaloids such as coptisine and epiphone in Scutellaria, as well as components such as dihydrokaempferol and andrographolide, show anti-inflammatory activity. MAPK14, TNFRSF1A, EGFR and SELE are the primary anti-inflammatory targets of Scutellaria. The anti-inflammatory active components in Scutellaria can act directly on MAPK14 and EGFR to produce anti-inflammatory effects, and can also act indirectly on other targets. As one of the four p38 MAPKs, MAPK14 plays a vital role in the cellular cascades induced by extracellular stimuli such as proinflammatory cytokines or physical stimuli.20 Studies have predicted that the pharmacodynamic basis of Bupleurum consists mainly of saponins, flavonoids, volatile oils, fatty acids and other components; its flavonoids act mainly on the PI3K-Akt, NF-kB and other pathways and participate in regulating inflammatory factors, estrogen signal transduction and other physiological processes. The Bupleurum-Scutellaria drug pair, first used in Minor Bupleurum Decoction in the Treatise on Febrile Diseases, is an important part of Bupleurum prescriptions. Bupleurum is bitter, flat and clearing, and can disperse the stagnation of bile fire. Scutellaria has a bitter and cold taste and clears internal heat. Used together, they regulate the liver and gallbladder and clear internally accumulated dampness and heat, and the pair is commonly used to treat fever, pharyngitis, and respiratory diseases.21 Cicada slough can dispel wind-heat and soothe the throat; clinically, it is common in classical prescriptions such as Shengjiang San and Pharyngitis San. However, the active components of cicada slough did not satisfy the prediction requirements in this research. YOL is composed of honeysuckle, scutellaria, bupleurum and cicada slough. According to TCM theory, bupleurum is bitter and flat, enters the liver and gallbladder meridians, penetrates and clears Shaoyang, and can relieve qi stagnation. Scutellaria is bitter and cold, and clears the heat of Shaoyang. The dispersing action of Bupleurum facilitates the clearing and draining action of Scutellaria; the two are compatible and together achieve the goal of harmonizing Shaoyang. Honeysuckle cools and releases the exterior, has heat-clearing and detoxifying effects, and is also aromatic, dispelling turbidity.
Cicada slough is light and floating, clears the qi of the upper jiao, and is compatible with honeysuckle to release the exterior pathogen, in keeping with the principle of “treating the upper jiao as lightly as a feather.” In combination, the four drugs disperse and descend together to expel the pathogen, thereby clearing heat and detoxifying, so that the warm pathogen cannot lurk internally but is dispersed outward and resolved. Therefore, the treatment of CP with YOL can control the pharyngitis, regulate the balance of Yin and Yang, strengthen the body's foundation and improve immunity. Conclusion: This study used the network pharmacology platform to probe the treatment of CP with YOL by multi-target analysis. The results showed that YOL could treat CP by acting on related targets, consistent with the reported literature. In addition, we verified that YOL and its two monomer compounds, luteolin and baicalein, inhibited activation of the NF-kB, Stat3 and PI3K-Akt pathways. Although the potential advantages of the “network target, multi-component” strategy of network pharmacology are obvious, the content of TCM ingredients is usually neglected in network pharmacology research on Chinese medicine, even though the influence of content and concentration on efficacy cannot be ignored. The predicted targets that have not been discussed here may provide clues for further research on the mechanism of YOL.
Background: Yinqin oral liquid (YOL) has a curative effect on upper respiratory tract infections, especially chronic pharyngitis (CP). Since traditional Chinese herbal formulae are complicated, the pharmacological mechanism of YOL remains unclear. The aim of this work was to explore the active ingredients and mechanisms of YOL against CP. Methods: First, the profile of putative targets of YOL was predicted based on the structural and functional similarities of all available YOL components, obtained from the DrugBank database, to known drugs using TCMSP. The chemical constituents and targets of honeysuckle, scutellaria, bupleurum and cicada slough were searched with TCMSP; CTD, GeneCards and other databases were used to query CP-related genes, and target names were standardized with the UniProt database. Thereafter, the interaction network between compounds and overlapping genes was constructed, visualized, and analyzed with Cytoscape software. Finally, pathway enrichment analysis of the overlapping genes was carried out on the Database for Annotation, Visualization, and Integrated Discovery (DAVID) platform. Results: The network analysis showed 55 compounds and 113 corresponding targets in the compound-target network, and the key targets involved PTGS1, ESR2, GSK3β, NCOA2 and ESR1. The PPI core network contained 30 proteins, including VEGFA, IL6, ESR1, RELA and HIF1A. A total of 148 GO terms were obtained (p < 0.05), including 102 biological process (BP), 34 molecular function (MF) and 12 cellular component (CC) terms. A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p < 0.05), involving the cancer, PI3K-AKT, hepatitis, proteoglycans, p53 and HIF-1 signaling pathways. Conclusions: These results collectively indicate YOL (including the main ingredients luteolin and baicalein) as a highly effective anti-inflammatory therapeutic agent acting through the NF-kB pathway.
Introduction: Chronic pharyngitis (CP) is a very common disease associated with chronic inflammation involving the pharyngeal mucosa, submucosa and lymphatic tissues. Clinically, CP is mainly manifested as pharyngeal discomfort (foreign body sensation, burning sensation, irritation, etc.), with occasional pharyngeal itch, cough, etc., and belongs to the category of “slow throat arthralgia” in traditional Chinese medicine (TCM).1 CP has a high incidence, accounting for 10–12% of all pharyngeal diseases; it has a long course, recurs frequently, and is difficult to cure. At present, western medicine uses antibiotics supplemented by hormone preparations, such as dexamethasone, and antiviral drugs for the treatment of pharyngitis. These treatments have drawbacks such as a narrow therapeutic spectrum, recurrence, poor tolerance and major toxic and side effects. Hence, developing more effective therapeutic strategies against CP and reducing the side effects of the current therapeutic options are of great clinical importance. TCM is a major component of complementary and alternative medicine. Owing to its excellent clinical efficacy, TCM is a research hotspot worldwide. The syndrome-differentiation (dialectical) theory of TCM is used to treat acute and chronic pharyngitis and has unique advantages such as minor toxic side effects, obvious curative effect and treatment of both symptoms and root causes.2–4 Understanding the pharmacological effects of TCM herbal formulae plays an important role in their appropriate application; however, the complex nature of herbal formulae has impeded this understanding. Numerous chemical ingredients involving multiple potential targets are present in an herbal formula, and it is not adequate to explain the effects produced by a whole formula by considering every single ingredient individually. Yinqin oral liquid (YOL) is composed of extracts of honeysuckle, scutellaria, bupleurum and cicada slough. YOL has the effects of dispelling wind, releasing the muscle layer and clearing heat, and is mainly used for cold, fever, aversion to cold, pharyngeal pain and other upper respiratory tract infections caused by wind-heat and wind-cold. Long-term clinical observation has proven its effect in relieving pharyngeal pain and treating hand-foot-and-mouth disease. YOL contains numerous chemical compounds that regulate diverse targets, which makes it difficult to precisely determine the pharmacological mechanisms involved in its therapeutic actions. In addition, there are several challenges in clarifying the relationships between the herbs and diseases.5,6 Network pharmacology is based on systems biology theory. As a new discipline, it selects specific signaling nodes for the design of multi-target drug molecules through systems-biology network analysis. The establishment of network pharmacology and bioinformatics makes it possible to study both the active components of TCM and their potential gene targets. This study predicted and analyzed the active components and targets of YOL using network pharmacology, to explore the rationality of its formula and the scientific basis of its use in treating CP, and to further explore the pharmacological mechanisms of YOL in CP. Conclusion: This study used the network pharmacology platform to probe the treatment of CP with YOL by multi-target analysis. The results showed that YOL could treat CP by acting on related targets, consistent with the reported literature.
In addition, we verified that YOL and its two monomer compounds, luteolin and baicalein, inhibited activation of the NF-kB, Stat3 and PI3K-Akt pathways. Although the potential advantages of the “network target, multi-component” strategy of network pharmacology are obvious, the content of TCM ingredients is usually neglected in network pharmacology research on Chinese medicine, even though the influence of content and concentration on efficacy cannot be ignored. The predicted targets that have not been discussed here may provide clues for further research on the mechanism of YOL.
Background: Yinqin oral liquid (YOL) has a curative effect on upper respiratory tract infections, especially chronic pharyngitis (CP). Since traditional Chinese herbal formulae are complicated, the pharmacological mechanism of YOL remains unclear. The aim of this work was to explore the active ingredients and mechanisms of YOL against CP. Methods: First, the profile of putative targets of YOL was predicted based on the structural and functional similarities of all available YOL components, obtained from the DrugBank database, to known drugs using TCMSP. The chemical constituents and targets of honeysuckle, scutellaria, bupleurum and cicada slough were searched with TCMSP; CTD, GeneCards and other databases were used to query CP-related genes, and target names were standardized with the UniProt database. Thereafter, the interaction network between compounds and overlapping genes was constructed, visualized, and analyzed with Cytoscape software. Finally, pathway enrichment analysis of the overlapping genes was carried out on the Database for Annotation, Visualization, and Integrated Discovery (DAVID) platform. Results: The network analysis showed 55 compounds and 113 corresponding targets in the compound-target network, and the key targets involved PTGS1, ESR2, GSK3β, NCOA2 and ESR1. The PPI core network contained 30 proteins, including VEGFA, IL6, ESR1, RELA and HIF1A. A total of 148 GO terms were obtained (p < 0.05), including 102 biological process (BP), 34 molecular function (MF) and 12 cellular component (CC) terms. A total of 46 signaling pathways were obtained by KEGG pathway enrichment screening (p < 0.05), involving the cancer, PI3K-AKT, hepatitis, proteoglycans, p53 and HIF-1 signaling pathways. Conclusions: These results collectively indicate YOL (including the main ingredients luteolin and baicalein) as a highly effective anti-inflammatory therapeutic agent acting through the NF-kB pathway.
8,336
358
[ 1644, 185, 77, 198, 64, 45, 104, 81, 40, 2649, 113, 329, 179, 188, 491, 1126, 150 ]
18
[ "targets", "compounds", "network", "yol", "cp", "represents", "akt", "figure", "analysis", "target" ]
[ "tcm pharyngitis", "pharyngitis cp drug", "chronic pharyngitis", "pharyngitis treatment methods", "targets chronic pharyngitis" ]
null
null
[CONTENT] Yinqin oral liquid | chronic pharyngitis | network pharmacology | NF-kB | multi-target analysis [SUMMARY]
null
null
[CONTENT] Yinqin oral liquid | chronic pharyngitis | network pharmacology | NF-kB | multi-target analysis [SUMMARY]
[CONTENT] Yinqin oral liquid | chronic pharyngitis | network pharmacology | NF-kB | multi-target analysis [SUMMARY]
[CONTENT] Yinqin oral liquid | chronic pharyngitis | network pharmacology | NF-kB | multi-target analysis [SUMMARY]
[CONTENT] Administration, Oral | Animals | Anti-Inflammatory Agents | Chronic Disease | Databases, Factual | Drugs, Chinese Herbal | Flavanones | Luteolin | Mice | NF-kappa B | Network Pharmacology | Pharyngitis | RAW 264.7 Cells [SUMMARY]
null
null
[CONTENT] Administration, Oral | Animals | Anti-Inflammatory Agents | Chronic Disease | Databases, Factual | Drugs, Chinese Herbal | Flavanones | Luteolin | Mice | NF-kappa B | Network Pharmacology | Pharyngitis | RAW 264.7 Cells [SUMMARY]
[CONTENT] Administration, Oral | Animals | Anti-Inflammatory Agents | Chronic Disease | Databases, Factual | Drugs, Chinese Herbal | Flavanones | Luteolin | Mice | NF-kappa B | Network Pharmacology | Pharyngitis | RAW 264.7 Cells [SUMMARY]
[CONTENT] Administration, Oral | Animals | Anti-Inflammatory Agents | Chronic Disease | Databases, Factual | Drugs, Chinese Herbal | Flavanones | Luteolin | Mice | NF-kappa B | Network Pharmacology | Pharyngitis | RAW 264.7 Cells [SUMMARY]
[CONTENT] tcm pharyngitis | pharyngitis cp drug | chronic pharyngitis | pharyngitis treatment methods | targets chronic pharyngitis [SUMMARY]
null
null
[CONTENT] tcm pharyngitis | pharyngitis cp drug | chronic pharyngitis | pharyngitis treatment methods | targets chronic pharyngitis [SUMMARY]
[CONTENT] tcm pharyngitis | pharyngitis cp drug | chronic pharyngitis | pharyngitis treatment methods | targets chronic pharyngitis [SUMMARY]
[CONTENT] tcm pharyngitis | pharyngitis cp drug | chronic pharyngitis | pharyngitis treatment methods | targets chronic pharyngitis [SUMMARY]
[CONTENT] targets | compounds | network | yol | cp | represents | akt | figure | analysis | target [SUMMARY]
null
null
[CONTENT] targets | compounds | network | yol | cp | represents | akt | figure | analysis | target [SUMMARY]
[CONTENT] targets | compounds | network | yol | cp | represents | akt | figure | analysis | target [SUMMARY]
[CONTENT] targets | compounds | network | yol | cp | represents | akt | figure | analysis | target [SUMMARY]
[CONTENT] pharyngeal | tcm | effects | herbal | clinical | formula | cp | cold | wind | pharmacological [SUMMARY]
null
null
[CONTENT] pharmacology | network pharmacology | network | content | ignored | yol | multi | research | target | ingredients usually ignored research [SUMMARY]
[CONTENT] targets | network | compounds | yol | cp | represents | analysis | akt | target | figure [SUMMARY]
[CONTENT] targets | network | compounds | yol | cp | represents | analysis | akt | target | figure [SUMMARY]
[CONTENT] Yinqin | YOL ||| Chinese | YOL ||| YOL [SUMMARY]
null
null
[CONTENT] YOL | luteolin | baicalein [SUMMARY]
[CONTENT] YOL ||| Chinese | YOL ||| YOL | CP. ||| First | YOL | YOL | the Drug Bank | TCMSP ||| cicada | TCMSP | CTD | GeneCards | UniProt ||| Cytoscape ||| Database for Annotation | Integrated Discovery ||| ||| 55 | 113 | PTGS1 | ESR2 | NCOA2 ||| 30 | VEGFA | 148 | p<0.05 | 102 | 34 | 12 | CC ||| 46 | KEGG | p<0.05 | proteoglycans ||| YOL | luteolin | baicalein [SUMMARY]
[CONTENT] YOL ||| Chinese | YOL ||| YOL | CP. ||| First | YOL | YOL | the Drug Bank | TCMSP ||| cicada | TCMSP | CTD | GeneCards | UniProt ||| Cytoscape ||| Database for Annotation | Integrated Discovery ||| ||| 55 | 113 | PTGS1 | ESR2 | NCOA2 ||| 30 | VEGFA | 148 | p<0.05 | 102 | 34 | 12 | CC ||| 46 | KEGG | p<0.05 | proteoglycans ||| YOL | luteolin | baicalein [SUMMARY]
Work ability and associated factors in people living with human T-cell leukemia virus type 1.
35946625
Infection with the human T-lymphotropic virus type 1 (HTLV-1) affects an estimated 10-15 million people worldwide. However, knowledge of the impact of HTLV-1 infection on work ability is lacking. This study aimed to measure the frequency and identify factors associated with poor work ability in patients living with HTLV-1.
BACKGROUND
This cross-sectional study included 207 individuals infected with HTLV-1 who attended the University Hospital in Salvador, Bahia, Brazil. HTLV-1 antibodies were detected in the participants' blood by enzyme-linked immunosorbent assay (ELISA) and confirmed by western blotting. Participants answered a questionnaire on sociodemographic data, personal habits, clinical data, health-related quality of life, and work ability, evaluated using the work ability index questionnaire. A Poisson regression model with a robust variance estimate was used to identify the factors associated with the prevalence of poor work ability.
METHODS
Patients' mean age was 55.2 years (range, 19-84), 73.0% were female, 100% had a monthly family income below US$ 394, and 33.8% presented HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). No individual was classified as having excellent work ability. Poor work ability prevalence was strongly associated (prevalence ratio; 95% confidence interval [CI]) with sedentarism (1.30; 1.03-1.65), neurological symptoms (1.25; 1.02-1.52), and low physical (0.95; 0.94-0.96) and mental (0.98; 0.97-0.99) component summaries of health-related quality of life.
RESULTS
Poor work ability among people living with HTLV-1 is associated with sedentarism, neurologic symptoms, and low health-related quality of life.
CONCLUSIONS
[ "Cross-Sectional Studies", "Female", "HTLV-I Infections", "Human T-lymphotropic virus 1", "Humans", "Leukemia, T-Cell", "Male", "Middle Aged", "Paraparesis, Tropical Spastic", "Quality of Life", "Work Capacity Evaluation" ]
9344948
INTRODUCTION
Human T-lymphotropic virus type 1 (HTLV-1) is a type C retrovirus that was first isolated and identified from a patient with cutaneous T-cell malignancy in 1980.1 It is transmitted through breastfeeding, sexual contact, blood transfusion, and sharing of syringes and needles.2 The prevalence of HTLV-1 infection is poorly known; however, it is estimated that it affects 10-15 million people worldwide.3 Clusters of high prevalence are found adjacent to areas with negligible prevalence. HTLV-1 infection is endemic in southwestern Japan, sub-Saharan Africa, South America, and the Caribbean area, with foci in the Middle East and Australo-Melanesia.4 HTLV-1 infection is frequent in Brazil,5 in the State of Bahia,6 and particularly in its capital, Salvador, with an estimated prevalence of 1.8%.7 Most people (approximately 95%) infected with HTLV-1 remain asymptomatic.8 Individuals with HTLV-1 have a 57% greater risk of death from any cause than HTLV-1-negative individuals. HTLV-1 is associated with increased odds of seborrheic dermatitis and Sjogren's syndrome and a lower relative risk of gastric cancer.9 HTLV-1 can cause two severe diseases, adult T-cell leukemia/lymphoma (ATLL) and HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). It is estimated that 0.25-3% of people infected with HTLV-1 will develop HAM/TSP during their lifetime. HAM/TSP mainly occurs in adulthood. It has an insidious onset that progressively evolves to neurological features, such as spasticity or hyperreflexia of the lower extremities, lower extremity muscle weakness, and urinary bladder disturbances. Approximately 50% of cases present with sensory disturbances and low back pain. HAM/TSP can be associated with other HTLV-1-associated conditions such as uveitis, myositis, and infective dermatitis.10 Patients with HAM/TSP have difficulty performing daily routine activities, particularly because of a disturbed gait that compromises physical, emotional, and social aspects, impairing their quality of life.11 Patients with HIV-HTLV-1 coinfection or HTLV-1 infection report more difficulty performing daily activities than those infected with human immunodeficiency virus (HIV) alone.12 The work ability index (WAI) questionnaire is an instrument that evaluates workers' perception of work demands and the environment, work organization, the work community, promotion of workers' health and functional capacity, and promotion of professional competence. Good work ability means high-quality work, enjoyment of staying in one's job, and the expectation of a meaningful retirement.13 Therefore, this study aimed to measure the frequency of and identify factors associated with poor work ability in patients living with HTLV-1.
METHODS
Study design and study population This cross-sectional study was conducted from February 2018 to December 2019 at the University Hospital, Federal University of Bahia, Brazil. This study is part of broader research that investigates other health aspects of people with HTLV-1.14 The target population comprised 209 individuals aged 18 years or older who were invited to participate in the study. Severe cognitive deficits that prevented the elicitation of reliable information in the interview were an exclusion criterion. There were only two refusals, resulting in a final study population of 207 individuals. Data collection instruments and procedures Participants were interviewed by a member of the research team after the medical consultation in a quiet room, preserving the patient's privacy. Information about sociodemographic characteristics (age, race, schooling, civil status, number of children, and monthly family income, coded as < 1 or 1-2 Brazilian minimum wages; one Brazilian minimum wage was equivalent to US$ 197 at the time of the study), personal habits (smoking, drinking, and sedentarism), health-related quality of life, clinical data (comorbidities and HTLV-1 symptoms), and work ability was collected using structured questionnaires. Work ability was used as the dependent variable and was evaluated using the WAI questionnaire. The WAI is a summary measure of seven dimensions (total range 7-49): (1) current work ability compared with the lifetime best; (2) work ability in relation to the demands of the job; (3) number of current diseases diagnosed by a physician; (4) estimated work impairment due to diseases; (5) sick leave; (6) self-prognosis of work ability two years from now; and (7) mental resources. The total score was classified into four work ability categories: poor (7-27 points), moderate (28-36 points), good (37-43 points), and excellent (44-49 points). For the purposes of this study, the four possible WAI subgroups were categorized as poor versus others.15 The WAI questionnaire has been validated in a Brazilian population and showed satisfactory psychometric properties.16 Neurologic evaluation was performed on all 207 individuals at the University Hospital, according to World Health Organization criteria.17 Seventy (33.8%) of the 207 individuals in the study presented neurological symptoms; all of them presented weakness and spasticity of one or both legs, compatible with HAM/TSP. Other diagnoses were lumbar pain (44), neurogenic bladder (33), hyperreflexia (28), polyneuropathy (11), and erectile dysfunction (5) (Figure 1). FIGURE 1: Study population flowchart. Health-related quality of life was evaluated using the 36-Item Short-Form Health Survey version 2 (SF-36v2) questionnaire. This instrument comprises 36 items that generate eight domains: physical functioning, physical role, bodily pain, general health, vitality, social functioning, emotional role, and mental health.
Two summary measures can be calculated from these domains: the physical and mental component summaries. The psychometric properties of the SF-36v2 have been validated in a Brazilian population.18 PRO CoRE software, version 1.4 (Optum Inc., Johnston, RI, USA), was used to score the survey. The normalized scores have a mean of 50 and a standard deviation of 10, a transformation that enables better comparisons among domains. A commercial license (license number QM025905) granted permission for using the SF-36v2.
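The WAI cut-offs and the poor-versus-others dichotomization described in this section translate directly into analysis code. The sketch below is a hypothetical illustration (the scores are invented); it only shows how total WAI scores could be binned into the four categories and then dichotomized, as done in this study.

import pandas as pd

wai_scores = pd.Series([24, 31, 40, 45, 27])  # hypothetical WAI totals (possible range 7-49)

# Bins reproduce the published cut-offs: 7-27 poor, 28-36 moderate, 37-43 good, 44-49 excellent
categories = pd.cut(
    wai_scores,
    bins=[6, 27, 36, 43, 49],
    labels=["poor", "moderate", "good", "excellent"],
)

# Dichotomization used in the analysis: poor versus all other categories
poor_vs_others = (categories == "poor").astype(int)
print(categories.tolist(), poor_vs_others.tolist())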
Laboratory examinations HTLV-1 antibodies were detected in the blood of the participants by enzyme-linked immunosorbent assay (ELISA) and confirmed by western blotting at the Infectology Research Laboratory, University Hospital, Federal University of Bahia. Statistical data analysis Differences between subgroups of continuous variables were compared using Student's t-test. Differences between subgroups of categorical variables were compared using Pearson's chi-square test. Variables with a P-value < 0.20 in the bivariate analysis were selected for a Poisson regression model with robust variance estimators that had work ability as the dependent variable.19-21 Cronbach's alpha coefficient was used to evaluate the internal consistency of the SF-36v2 and WAI instruments; values above 0.70 were considered acceptable.22,23 Ethical aspects The research protocol was approved by the research ethics committee of the Federal University of Bahia (opinion number: 30762714.4.0000.5577). All participants provided written informed consent.
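Cronbach's alpha, used here to check the internal consistency of the SF-36v2 and WAI instruments, is simple to compute from a respondents-by-items matrix. The function below is a generic sketch of the standard formula, not the authors' code, and the example matrix is hypothetical.

import numpy as np

def cronbach_alpha(items):
    # items: 2-D array, rows = respondents, columns = questionnaire items
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical responses from four respondents to three items
example = np.array([[3, 4, 4], [2, 3, 3], [4, 5, 4], [1, 2, 2]])
print(round(cronbach_alpha(example), 2))  # values above 0.70 are considered acceptable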
RESULTS
The mean age was 55.2 years (range, 19-84), 73.0% were female, 100% had a monthly family income below US$ 394, and 33.8% presented HAM/TSP. The work ability of the 207 individuals with HTLV-1 was poor in 54.1% (n = 112), moderate in 37.7% (n = 78), and good in 8.2% (n = 17). No individual was classified as having excellent work ability. The alpha coefficient of the WAI questionnaire was 0.84, indicating high internal consistency. The prevalence of poor work ability was higher (P < 0.20) among individuals who had children, had schooling < 8 years, had a civil status other than a stable relationship, did not report alcohol consumption, or had sedentary habits, comorbidities, or neurological symptoms (Table 1).
TABLE 1: Work ability according to characteristics of 207 individuals with HTLV-1, Salvador, Brazil, 2018-2019 (poor, n = 112; moderate/good, n = 95). Values are n (%); PR (95% CI) and P-value compare poor versus moderate/good work ability.
Sex: male 31 (55.4) vs 25 (44.6), PR 1.03 (0.78-1.36), P = 0.826; female 81 (53.4) vs 70 (46.6).
Race: white 10 (58.8) vs 7 (41.2), PR 1.10 (0.72-1.67), P = 0.683; other 102 (53.7) vs 88 (46.3).
Children: no 11 (39.3) vs 17 (60.7), PR 0.70 (0.43-1.12), P = 0.090; yes 101 (56.4) vs 78 (43.6).
Schooling: < 8 years 80 (61.1) vs 51 (38.9), PR 1.45 (1.08-1.95), P = 0.008; ≥ 8 years 32 (42.1) vs 44 (57.9).
Civil status: other 66 (58.9) vs 46 (41.1), PR 1.22 (0.94-1.58), P = 0.131; stable relation 46 (48.4) vs 49 (51.6).
Monthly family income: < 1 MW 36 (55.4) vs 29 (44.6), PR 1.04 (0.79-1.35), P = 0.803; 1 to 2 MW 76 (53.5) vs 66 (46.5).
Smoking: yes 11 (68.8) vs 5 (31.2), PR 1.30 (0.91-1.86), P = 0.222; no 101 (52.9) vs 90 (47.1).
Drinking: yes 22 (42.3) vs 30 (57.7), PR 0.73 (0.52-1.03), P = 0.049; no 90 (58.1) vs 65 (41.9).
Comorbidities: yes 97 (59.1) vs 67 (40.9), PR 1.70 (1.11-2.60), P = 0.005; no 15 (34.9) vs 28 (65.1).
Sedentary: yes 79 (57.7) vs 58 (42.3), PR 1.22 (0.92-1.63), P = 0.151; no 33 (47.1) vs 37 (52.9).
Neurological symptoms: yes 55 (78.6) vs 15 (21.4), PR 1.89 (1.50-2.38), P < 0.001; no 57 (41.6) vs 80 (58.4).
*Fisher test; MW: Brazilian minimum wage (approx. US$ 197.39/month).
Bivariate analyses showed that individuals with poor work ability presented systematically lower (P < 0.001) SF-36 domain scores and physical and mental component summaries of health-related quality of life and were significantly older (P = 0.042) than those with moderate or good work ability (Table 2).
TABLE 2: Work ability according to SF-36 health-related quality-of-life domains (mean [SD], in %) and age (mean [SD], in years) of 207 individuals with HTLV-1, Salvador, Brazil, 2018-2019 (poor, n = 112; moderate/good, n = 95). Values are Cronbach's alpha; poor; moderate/good; P-value.
Physical functioning: 0.95; 33.1 (10.9); 50.9 (8.9); < 0.001.
Role physical: 0.95; 29.1 (10.4); 48.4 (12.9); < 0.001.
Bodily pain: 0.78; 34.6 (11.2); 47.5 (11.1); < 0.001.
General health: 0.77; 36.2 (10.0); 50.2 (8.9); < 0.001.
Vitality: 0.82; 41.6 (12.7); 56.2 (10.0); < 0.001.
Social functioning: 0.75; 38.3 (14.3); 52.9 (8.9); < 0.001.
Role emotional: 0.94; 32.8 (16.6); 49.0 (13.6); < 0.001.
Mental health: 0.84; 39.1 (15.5); 51.6 (10.3); < 0.001.
Physical component summary: -; 32.7 (9.7); 49.1 (9.8); < 0.001.
Mental component summary: -; 40.3 (16.6); 52.5 (10.7); < 0.001.
Age: -; 56.8 (12.4); 53.3 (12.4); 0.042.
The Poisson regression model estimated that the adjusted prevalence ratio (PR) of poor work ability was 30% higher among sedentary individuals (PR = 1.30; 95% confidence interval [CI]: 1.03-1.65) and 25% higher among those with neurological symptoms (PR = 1.25; 95% CI: 1.02-1.52). Each one-point increase in the physical component summary of health-related quality of life was associated with a 5% lower prevalence of poor work ability (PR = 0.95; 95% CI: 0.94-0.96), and each one-point increase in the mental component summary with a 2% lower prevalence (PR = 0.98; 95% CI: 0.97-0.99).
The alpha coefficients of the eight domains of the SF-36v2 questionnaire varied from 0.75 to 0.95, revealing high internal consistency (Table 2). The adjusted prevalence ratios from the Poisson regression are shown in Table 3.
TABLE 3: Results of Poisson regression having the prevalence ratio (PR) of poor work ability as the dependent variable among 207 individuals with HTLV-1, Salvador, Brazil, 2018-2019. Values are adjusted PR (95% CI) and P-value; the reference category is shown in parentheses.
Children (yes): 0.91 (0.66-1.26), P = 0.571.
Schooling (≥ 8 years): 1.05 (0.84-1.31), P = 0.696.
Civil status (stable relation): 1.07 (0.87-1.31), P = 0.502.
Drinking (no): 1.12 (0.86-1.46), P = 0.392.
Comorbidities (no): 0.98 (0.69-1.35), P = 0.845.
Age (years): 1.01 (1.00-1.02), P = 0.061.
Sedentary (no): 1.30 (1.03-1.65), P = 0.030.
Neurological symptoms (no): 1.25 (1.02-1.52), P = 0.028.
Physical component summary (%): 0.95 (0.94-0.96), P < 0.001.
Mental component summary (%): 0.98 (0.97-0.99), P < 0.001.
PR: adjusted prevalence ratio.
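Adjusted prevalence ratios such as those in Table 3 are typically obtained from a Poisson regression with a robust (sandwich) variance estimator, as described in the Methods. The sketch below is a generic illustration on simulated data, not the authors' analysis; all variable names and values are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated analytic dataset: binary outcome (1 = poor work ability) and a few predictors
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "poor_wai": rng.integers(0, 2, 200),
    "sedentary": rng.integers(0, 2, 200),
    "neuro_symptoms": rng.integers(0, 2, 200),
    "pcs": rng.normal(45, 10, 200),  # SF-36 physical component summary
    "mcs": rng.normal(48, 10, 200),  # SF-36 mental component summary
    "age": rng.normal(55, 12, 200),
})

# Poisson regression of a binary outcome with a robust (sandwich) variance estimator
model = smf.glm(
    "poor_wai ~ sedentary + neuro_symptoms + pcs + mcs + age",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC0")

prevalence_ratios = np.exp(model.params)         # exponentiated coefficients = PRs
confidence_intervals = np.exp(model.conf_int())  # 95% CIs on the PR scale
print(prevalence_ratios.round(2))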
null
null
[ "Study design and study population", "Data collection instruments and procedures", "Laboratory examinations", "Statistical data analysis", "Ethical aspects" ]
[ "This cross-sectional study was conducted from February 2018 to December 2019 at\nthe University Hospital, Federal University of Bahia, Brazil. This study is part\nof broader research that investigates other health aspects of people with\nHTLV-1\n14\n. The target population comprised 209 individuals aged 18 years or older\nwho were invited to participate in the study. Severe cognitive deficits that\nprevented the elicitation of reliable information in the interview were an\nexclusion criterion. There were only two refusals, resulting in a final study\npopulation of 207 individuals.", "Participants were interviewed by a member of the research team after the medical\nconsultation in a quiet room, keeping the patient’s privacy. Information about\nsociodemographic characteristics (age, race, schooling, civil status, number of\nchildren, and monthly family income - coded as < 1 Brazilian minimum wage and\n1-2 Brazilian minimal wage. One Brazilian minimal wage was equivalent to US$ 197\nby the time of the study) personal habits (smoking, drinking, and sedentarism),\nhealth-related quality of life, clinical data (comorbidities and HTLV-1\nsymptoms), and work ability were collected using structured questionnaires. \nWork ability was used as the dependent variable. Work ability was evaluated using\nthe WAI questionnaire. WAI is a summary measure of seven dimensions (range\n7-49): 1 -current work ability compared with the lifetime best, 2 - work ability\nin relation to the demands of the job, 3 - number of current diseases diagnosed\nby a physician, 4 - estimated work impairment due to diseases, 5 - sick leave, 6\n- self-prognosis of work ability 2 years from now, and 7 - mental resources. The\ntotal score was classified into four work ability categories: poor (7-27\npoints), moderate (28-36 points), good (37-43 points), and excellent (44-49\npoints). For the purposes of this study, the four possible subgroups of WAI were\ncategorized as poor versus others\n15\n. The WAI questionnaire was validated in a Brazilian population and\nshowed satisfactory psychometric properties\n16\n.\nNeurologic evaluation was performed on all 207 individuals at the University\nHospital, according to the World Health Organization criteria\n17\n. Seventy (33.8%) of the 207 individuals in the study presented\nneurological symptoms; all of them presented weakness and spasticity of one or\nboth legs, compatible with HAM/TSP. Other diagnostics were 44 lumbar pain, 33\nneurogenic bladder, 28 hyperreflexia, 11 polyneuropathy, and 5 erectile\ndysfunction (Figure 1).\n\nFIGURE 1:Study population flowchart.\n\nHealth-related quality of life was evaluated using the 36-Item Short-Form Health\nSurvey version 2 (SF-36v2) questionnaire. This instrument comprises 36 items\nthat generate eight domains: physical functioning, physical role, bodily pain,\ngeneral health, vitality, social functioning, emotional role, and mental health.\nTwo summary measures can be calculated from these domains: physical and mental\ncomponent summaries. The psychometric properties of the SF-36v2 have been\nvalidated in a Brazilian population\n18\n. PRO CoRE software, version 1.4 (Optum Inc., Johnston, RI, USA), was\nused to score the survey. The normalized scores have a mean of 50 and a standard\ndeviation of 10, a transformation that enables better comparisons among domains.\nA commercial license (license number QM025905) granted permission for using the\nSF-36v2. 
", "HTLV-1 antibodies were detected in the blood of the participants by enzyme-linked\nimmunosorbent assay (ELISA) and confirmed by western blotting at the Infectology\nResearch Laboratory, University Hospital, Federal University of Bahia.", "Differences between subgroups of continuous variables were compared using\nStudent's t-test. Differences between subgroups of categorical variables were\ncompared using Pearson's chi-square test. Variables with a\nP-value < 0.20 in bivariate analysis were selected for\ncomposing a Poisson regression model with robust variance estimators that had\nwork ability as the dependent variable\n19\n\n-\n\n21\n. Cronbach’s alpha coefficient was used to evaluate the internal\nconsistency of the SF-36v2 and WAI instruments; values above 0.70 were\nconsidered acceptable\n22\n\n-\n\n23\n. ", "The research protocol was approved by the research ethics committee of the\nFederal University of Bahia (opinion number:30762714.4.0000.5577). All the\nparticipants provided written informed consent. " ]
[ null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study design and study population", "Data collection instruments and procedures", "Laboratory examinations", "Statistical data analysis", "Ethical aspects", "RESULTS", "DISCUSSION" ]
[ "Human T-lymphotropic virus type 1 (HTLV-1) is a type C retrovirus that was first\nisolated and identified from a patient with cutaneous T-cell malignancy in 1980\n1\n. It is transmitted through breastfeeding, sexual contact, blood transfusion,\nand sharing syringes and needles\n2\n. \nThe prevalence of HTLV-1 infection is poorly known; however, it is estimated that it\naffects 10-15 million people worldwide\n3\n. Clusters of high prevalence were found in nearby areas with negligible\nprevalence. HTLV-1 infection is endemic in southwestern Japan, sub-Saharan Africa,\nSouth America, and the Caribbean area, with foci in the Middle East and\nAustralo-Melanesia\n4\n. HTLV-1 infection is frequent in Brazil\n5\n, in the State of Bahia\n6\n, particularly in its capital, Salvador city, with an estimated prevalence of\n1.8%\n7\n. \nMost people (approximately 95%) infected with HTLV-1 remain asymptomatic\n8\n. Individuals with HTLV-1 have a 57% greater risk of death due to any cause\nthan those HTLV-1-negative individuals. HTLV-1 is associated with increased odds of\nseborrheic dermatitis and Sjogren’s syndrome and a lower relative risk of gastric\ncancer\n9\n. HTLV-1 can cause two severe diseases, adult T-cell leukemia/lymphoma (ATLL)\nand HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). It is\nestimated that 0.25-3% of people infected with HTLV-1 will develop HAM/TSP during\ntheir lifetime. HAM/TSP mainly occurs in adulthood. HAM/TSP has an insidious onset\nthat progressively evolves to neurological features, such as spasticity or\nhyperreflexia of the lower extremities, lower extremity muscle weakness, and urinary\nbladder disturbances. Approximately 50% of cases present with sensory disturbances\nand low back pain. HAM/TSP can be associated with other HTLV-1 associated symptoms\nlike uveitis, myositis, and infective dermatitis\n10\n. Patients with HAM/TSP have difficulty performing daily routine activities,\nparticularly because of a disturbed gait that compromises physical, emotional, and\nsocial aspects, impairing their quality of life\n11\n. Patients with HIV-HTLV-1 coinfection or HTLV-1 infection report more\ndifficulty performing daily activities than those with exclusive infection with\nhuman immunodeficiency virus (HIV)\n12\n. \nThe work ability index (WAI) questionnaire is an instrument that evaluates workers’\nperception of work demands and the environment, work organization, work community,\npromotion of workers’ health and functional capacity, and promotion of professional\ncompetence. Good work ability means high-quality work, enjoyment of staying in one’s\njob, and the expectation of a meaningful retirement\n13\n. \nTherefore, this study aimed to measure the frequency and identify factors associated\nwith work ability in patients living with HTLV-1. ", "Study design and study population This cross-sectional study was conducted from February 2018 to December 2019 at\nthe University Hospital, Federal University of Bahia, Brazil. This study is part\nof broader research that investigates other health aspects of people with\nHTLV-1\n14\n. The target population comprised 209 individuals aged 18 years or older\nwho were invited to participate in the study. Severe cognitive deficits that\nprevented the elicitation of reliable information in the interview were an\nexclusion criterion. 
There were only two refusals, resulting in a final study\npopulation of 207 individuals.\nThis cross-sectional study was conducted from February 2018 to December 2019 at\nthe University Hospital, Federal University of Bahia, Brazil. This study is part\nof broader research that investigates other health aspects of people with\nHTLV-1\n14\n. The target population comprised 209 individuals aged 18 years or older\nwho were invited to participate in the study. Severe cognitive deficits that\nprevented the elicitation of reliable information in the interview were an\nexclusion criterion. There were only two refusals, resulting in a final study\npopulation of 207 individuals.\nData collection instruments and procedures Participants were interviewed by a member of the research team after the medical\nconsultation in a quiet room, keeping the patient’s privacy. Information about\nsociodemographic characteristics (age, race, schooling, civil status, number of\nchildren, and monthly family income - coded as < 1 Brazilian minimum wage and\n1-2 Brazilian minimal wage. One Brazilian minimal wage was equivalent to US$ 197\nby the time of the study) personal habits (smoking, drinking, and sedentarism),\nhealth-related quality of life, clinical data (comorbidities and HTLV-1\nsymptoms), and work ability were collected using structured questionnaires. \nWork ability was used as the dependent variable. Work ability was evaluated using\nthe WAI questionnaire. WAI is a summary measure of seven dimensions (range\n7-49): 1 -current work ability compared with the lifetime best, 2 - work ability\nin relation to the demands of the job, 3 - number of current diseases diagnosed\nby a physician, 4 - estimated work impairment due to diseases, 5 - sick leave, 6\n- self-prognosis of work ability 2 years from now, and 7 - mental resources. The\ntotal score was classified into four work ability categories: poor (7-27\npoints), moderate (28-36 points), good (37-43 points), and excellent (44-49\npoints). For the purposes of this study, the four possible subgroups of WAI were\ncategorized as poor versus others\n15\n. The WAI questionnaire was validated in a Brazilian population and\nshowed satisfactory psychometric properties\n16\n.\nNeurologic evaluation was performed on all 207 individuals at the University\nHospital, according to the World Health Organization criteria\n17\n. Seventy (33.8%) of the 207 individuals in the study presented\nneurological symptoms; all of them presented weakness and spasticity of one or\nboth legs, compatible with HAM/TSP. Other diagnostics were 44 lumbar pain, 33\nneurogenic bladder, 28 hyperreflexia, 11 polyneuropathy, and 5 erectile\ndysfunction (Figure 1).\n\nFIGURE 1:Study population flowchart.\n\nHealth-related quality of life was evaluated using the 36-Item Short-Form Health\nSurvey version 2 (SF-36v2) questionnaire. This instrument comprises 36 items\nthat generate eight domains: physical functioning, physical role, bodily pain,\ngeneral health, vitality, social functioning, emotional role, and mental health.\nTwo summary measures can be calculated from these domains: physical and mental\ncomponent summaries. The psychometric properties of the SF-36v2 have been\nvalidated in a Brazilian population\n18\n. PRO CoRE software, version 1.4 (Optum Inc., Johnston, RI, USA), was\nused to score the survey. 
The normalized scores have a mean of 50 and a standard\ndeviation of 10, a transformation that enables better comparisons among domains.\nA commercial license (license number QM025905) granted permission for using the\nSF-36v2. \nParticipants were interviewed by a member of the research team after the medical\nconsultation in a quiet room, keeping the patient’s privacy. Information about\nsociodemographic characteristics (age, race, schooling, civil status, number of\nchildren, and monthly family income - coded as < 1 Brazilian minimum wage and\n1-2 Brazilian minimal wage. One Brazilian minimal wage was equivalent to US$ 197\nby the time of the study) personal habits (smoking, drinking, and sedentarism),\nhealth-related quality of life, clinical data (comorbidities and HTLV-1\nsymptoms), and work ability were collected using structured questionnaires. \nWork ability was used as the dependent variable. Work ability was evaluated using\nthe WAI questionnaire. WAI is a summary measure of seven dimensions (range\n7-49): 1 -current work ability compared with the lifetime best, 2 - work ability\nin relation to the demands of the job, 3 - number of current diseases diagnosed\nby a physician, 4 - estimated work impairment due to diseases, 5 - sick leave, 6\n- self-prognosis of work ability 2 years from now, and 7 - mental resources. The\ntotal score was classified into four work ability categories: poor (7-27\npoints), moderate (28-36 points), good (37-43 points), and excellent (44-49\npoints). For the purposes of this study, the four possible subgroups of WAI were\ncategorized as poor versus others\n15\n. The WAI questionnaire was validated in a Brazilian population and\nshowed satisfactory psychometric properties\n16\n.\nNeurologic evaluation was performed on all 207 individuals at the University\nHospital, according to the World Health Organization criteria\n17\n. Seventy (33.8%) of the 207 individuals in the study presented\nneurological symptoms; all of them presented weakness and spasticity of one or\nboth legs, compatible with HAM/TSP. Other diagnostics were 44 lumbar pain, 33\nneurogenic bladder, 28 hyperreflexia, 11 polyneuropathy, and 5 erectile\ndysfunction (Figure 1).\n\nFIGURE 1:Study population flowchart.\n\nHealth-related quality of life was evaluated using the 36-Item Short-Form Health\nSurvey version 2 (SF-36v2) questionnaire. This instrument comprises 36 items\nthat generate eight domains: physical functioning, physical role, bodily pain,\ngeneral health, vitality, social functioning, emotional role, and mental health.\nTwo summary measures can be calculated from these domains: physical and mental\ncomponent summaries. The psychometric properties of the SF-36v2 have been\nvalidated in a Brazilian population\n18\n. PRO CoRE software, version 1.4 (Optum Inc., Johnston, RI, USA), was\nused to score the survey. The normalized scores have a mean of 50 and a standard\ndeviation of 10, a transformation that enables better comparisons among domains.\nA commercial license (license number QM025905) granted permission for using the\nSF-36v2. 
Laboratory examinations: HTLV-1 antibodies were detected in the blood of the participants by enzyme-linked immunosorbent assay (ELISA) and confirmed by western blotting at the Infectology Research Laboratory, University Hospital, Federal University of Bahia.
Statistical data analysis: Differences between subgroups of continuous variables were compared using Student's t-test. Differences between subgroups of categorical variables were compared using Pearson's chi-square test. Variables with a P-value < 0.20 in bivariate analysis were selected for composing a Poisson regression model with robust variance estimators that had work ability as the dependent variable 19-21. Cronbach's alpha coefficient was used to evaluate the internal consistency of the SF-36v2 and WAI instruments; values above 0.70 were considered acceptable 22-23.
Ethical aspects: The research protocol was approved by the research ethics committee of the Federal University of Bahia (opinion number: 30762714.4.0000.5577). All the participants provided written informed consent.
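A minimal sketch of the two modelling steps described in the statistical analysis above: a Poisson regression with a robust (sandwich) variance estimator, whose exponentiated coefficients are read as adjusted prevalence ratios, and Cronbach's alpha for the internal consistency of a multi-item scale. It assumes a pandas DataFrame `df` with a binary `poor_wai` outcome; the column names are hypothetical and this is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf


def prevalence_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Poisson regression with robust variance; exp(coefficients) are
    interpreted as adjusted prevalence ratios for the binary outcome."""
    model = smf.glm(
        "poor_wai ~ sedentary + neuro_symptoms + pcs + mcs + age",
        data=df,
        family=sm.families.Poisson(),
    ).fit(cov_type="HC1")  # robust (sandwich) variance estimator
    ci = model.conf_int()
    return pd.DataFrame({
        "PR": np.exp(model.params),
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
        "p": model.pvalues,
    }).drop(index="Intercept")


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the items of a scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```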
The mean age was 55.2 years, ranging from 19 to 84 years; 73.0% were females, 100% had a monthly family income less than US$ 394, and 33.8% presented HAM/TSP. The work ability of the 207 individuals with HTLV-1 was poor in 54.1% (n = 112), moderate in 37.7% (n = 78), and good in 8.2% (n = 17). No individual was classified as having excellent work ability. The alpha coefficient of the WAI questionnaire was 0.84, indicating high internal consistency.
The prevalence of poor work ability was significantly higher (P < 0.20) among individuals who had children, had schooling < 8 years, had a civil status other than a stable relationship, did not report alcohol consumption, and had sedentary status, comorbidities, or neurological symptoms (Table 1).

TABLE 1: Work ability according to characteristics of 207 individuals with HTLV-1, Salvador, Brazil, 2018-2019.
Characteristic | Poor, n (%) | Moderate/good, n (%) | PR | 95% CI | P-value
Sex: Male | 31 (55.4) | 25 (44.6) | 1.03 | 0.78-1.36 | 0.826
Sex: Female | 81 (53.4) | 70 (46.6) | | |
Race: White | 10 (58.8) | 7 (41.2) | 1.10 | 0.72-1.67 | 0.683
Race: Other | 102 (53.7) | 88 (46.3) | | |
Children: No | 11 (39.3) | 17 (60.7) | 0.70 | 0.43-1.12 | 0.090
Children: Yes | 101 (56.4) | 78 (43.6) | | |
Schooling: < 8 years | 80 (61.1) | 51 (38.9) | 1.45 | 1.08-1.95 | 0.008
Schooling: ≥ 8 years | 32 (42.1) | 44 (57.9) | | |
Civil status: Other | 66 (58.9) | 46 (41.1) | 1.22 | 0.94-1.58 | 0.131
Civil status: Stable relation | 46 (48.4) | 49 (51.6) | | |
Monthly family income: < 1 MW | 36 (55.4) | 29 (44.6) | 1.04 | 0.79-1.35 | 0.803
Monthly family income: 1 to 2 MW | 76 (53.5) | 66 (46.5) | | |
Smoking: Yes | 11 (68.8) | 5 (31.2) | 1.30 | 0.91-1.86 | 0.222
Smoking: No | 101 (52.9) | 90 (47.1) | | |
Drinking: Yes | 22 (42.3) | 30 (57.7) | 0.73 | 0.52-1.03 | 0.049
Drinking: No | 90 (58.1) | 65 (41.9) | | |
Comorbidities: Yes | 97 (59.1) | 67 (40.9) | 1.70 | 1.11-2.60 | 0.005
Comorbidities: No | 15 (34.9) | 28 (65.1) | | |
Sedentary: Yes | 79 (57.7) | 58 (42.3) | 1.22 | 0.92-1.63 | 0.151
Sedentary: No | 33 (47.1) | 37 (52.9) | | |
Neurological symptoms: Yes | 55 (78.6) | 15 (21.4) | 1.89 | 1.50-2.38 | < 0.001
Neurological symptoms: No | 57 (41.6) | 80 (58.4) | | |
*Fisher test; MW: Brazilian minimum wage (approx. 197.39 US$/month). Poor: n = 112; moderate/good: n = 95.

Bivariate analyses showed that individuals with poor work ability presented systematically lower (P < 0.001) SF-36 domain scores and physical and mental component summaries of health-related quality of life and were significantly older (P = 0.042) than those with moderate or good work ability (Table 2).

TABLE 2: Work ability according to SF-36 health-related quality of life domains (mean [SD], in %) and age (mean [SD], in years) of 207 individuals with HTLV-1, Salvador, Brazil, 2018-2019.
Variable | Cronbach alpha | Poor (n = 112) | Moderate/good (n = 95) | P-value
Physical Functioning | 0.95 | 33.1 (10.9) | 50.9 (8.9) | < 0.001
Role Physical | 0.95 | 29.1 (10.4) | 48.4 (12.9) | < 0.001
Bodily Pain | 0.78 | 34.6 (11.2) | 47.5 (11.1) | < 0.001
General Health | 0.77 | 36.2 (10.0) | 50.2 (8.9) | < 0.001
Vitality | 0.82 | 41.6 (12.7) | 56.2 (10.0) | < 0.001
Social Functioning | 0.75 | 38.3 (14.3) | 52.9 (8.9) | < 0.001
Role Emotional | 0.94 | 32.8 (16.6) | 49.0 (13.6) | < 0.001
Mental Health | 0.84 | 39.1 (15.5) | 51.6 (10.3) | < 0.001
Physical Component Summary | - | 32.7 (9.7) | 49.1 (9.8) | < 0.001
Mental Component Summary | - | 40.3 (16.6) | 52.5 (10.7) | < 0.001
Age | - | 56.8 (12.4) | 53.3 (12.4) | 0.042

The Poisson regression model estimated that adjusted prevalence rates (PR) of poor work ability were 30% higher among sedentary individuals (PR = 1.30; 95% confidence interval [CI]: 1.03-1.65) and 25% higher among those with neurological symptoms (PR = 1.25; 95% CI: 1.02-1.52). The mean level of the physical component summary of health-related quality of life was 5% lower (PR = 0.95; 95% CI: 0.94-0.96), and the mean level of the mental component summary was 2% lower (PR = 0.98; 95% CI: 0.97-0.99) among individuals with poor work ability compared with those with moderate or good work ability. The alpha coefficients of the eight domains of the SF-36v2 questionnaire varied from 0.75 to 0.95, revealing high internal consistency (Table 3).

TABLE 3: Results of Poisson regression having the prevalence ratio of low work ability as the dependent variable among 207 individuals with HTLV-1, Salvador, Brazil, 2018-2019.
Predictors (referent) | PR | 95% CI | P-value
Children (Yes) | 0.91 | 0.66-1.26 | 0.571
Schooling (≥ 8 years) | 1.05 | 0.84-1.31 | 0.696
Civil status (Stable relation) | 1.07 | 0.87-1.31 | 0.502
Drinking (No) | 1.12 | 0.86-1.46 | 0.392
Comorbidities (No) | 0.98 | 0.69-1.35 | 0.845
Age (Years) | 1.01 | 1.00-1.02 | 0.061
Sedentary (No) | 1.30 | 1.03-1.65 | 0.030
Neurological symptoms (No) | 1.25 | 1.02-1.52 | 0.028
Physical Component Summary (%) | 0.95 | 0.94-0.96 | < 0.001
Mental Component Summary (%) | 0.98 | 0.97-0.99 | < 0.001
PR: adjusted prevalence ratio.

Poor work ability was common in the study population (54.1%). In addition, poor work ability is associated with an increased risk of sickness absence 24, early retirement 25, and higher mortality in older age 26. This study among people living with HTLV-1 found that poor work ability was associated with sedentarism, neurologic symptoms, and low health-related quality of life in both the physical and mental components.
Multivariate analysis estimated that the adjusted prevalence of poor work ability was 30% higher among individuals with a sedentary lifestyle and 25% higher among those presenting with neurologic symptoms. People infected with HTLV-1 who already present with neurological symptoms are expected to have impaired work ability. Patients with HAM/TSP usually have impaired gait, dependence in daily activities, and a poor quality of life due to intense muscle weakness 27. The same reasoning applies to the relationship between sedentarism and poor work ability 28. Unfortunately, this study did not collect information on the temporal sequence of the relationship between these independent variables (sedentarism and neurologic symptoms) and the outcome (work ability).
HTLV-1 infection has been associated with several diseases. Fortunately, only a few of these are fatal, such as leukemia. However, this disease is rare and has a relatively low impact on community mortality rates 9. The results of this study raise awareness of the poorly recognized burden of HTLV-1 infection on the morbidity caused by neurologic symptoms and its impact on work ability.
Individuals with poor work ability had a lower health-related quality of life than those with moderate or good work ability. The differences found in the bivariate analyses were confirmed in the multivariate analyses, which estimated a 5% lower physical and a 2% lower mental component summary for patients with poor work ability after adjusting for relevant variables. The complex construct of the WAI questionnaire has many points of convergence with that of the SF-36v2 29, since both deal with physical and mental demands. Therefore, the WAI score is expected to be strongly associated with SF-36v2 dimensions and component summaries 30, 31. For example, the progressive and disabling gait disturbances of patients with HTLV-1 may impair physical, emotional, social, and mental aspects that, in turn, may modify the perception of health-related quality of life 11.
The magnitude of the differences in physical and mental component summaries according to work ability can be analyzed from the perspective of the minimal clinically important difference (MCID) 32.
The concept of MCID evolved to the minimal important difference, defined as "the smallest difference in score in the domain of interest that patients perceive as important, either beneficial or harmful, and that would lead the clinician to consider a change in the patient's management" 33. The MCID for the physical component summary varied across studies, including patients with moderate to severe psoriasis (2.5 points) 34, patients undergoing lumbar spine surgeries (4.11-5.21 35; 4.93 36), and surgical (7.83) and non-surgical (2.15) patients with spinal deformities 37. The MCID for the mental component summary was 2.5 points in a study of patients with moderate to severe psoriasis 34. Roughly half of all patients treated for hepatitis C fail to achieve clinically important improvements in physical and mental component summaries 38. However, the MCID is not an immutable characteristic and may vary by population and context 39. Concerning patients' quality of life scores, the MCID at the group level is necessarily smaller than that at the individual patient level 40. We are unaware of a study that determined the MCID for the work ability index in a population similar to ours.
The high frequency (54.1%) of poor work ability among people living with HTLV-1 and the nature of the factors associated with poor work ability suggest the need to implement strategies to provide adequate health care for this population.
One important limitation of this preliminary cross-sectional study is the lack of information about the temporal sequence of neurological symptoms and sedentarism in relation to the investigated outcome of poor work ability. However, to the best of our knowledge, this is the first study to evaluate work ability and associated factors among people living with HTLV-1.
The frequency of poor work ability among people living with HTLV-1 was high and was associated with sedentarism, neurologic symptoms, and low health-related quality of life.
[ "intro", "methods", null, null, null, null, null, "results", "discussion" ]
[ "human T-lymphotropic virus 1", "Work capacity evaluation", "Paraparesis", "Tropical spastic", "Quality of life" ]
INTRODUCTION: Human T-lymphotropic virus type 1 (HTLV-1) is a type C retrovirus that was first isolated and identified from a patient with cutaneous T-cell malignancy in 1980 1. It is transmitted through breastfeeding, sexual contact, blood transfusion, and sharing syringes and needles 2. The prevalence of HTLV-1 infection is poorly known; however, it is estimated that it affects 10-15 million people worldwide 3. Clusters of high prevalence were found near areas with negligible prevalence. HTLV-1 infection is endemic in southwestern Japan, sub-Saharan Africa, South America, and the Caribbean area, with foci in the Middle East and Australo-Melanesia 4. HTLV-1 infection is frequent in Brazil 5, in the State of Bahia 6, particularly in its capital, Salvador city, with an estimated prevalence of 1.8% 7. Most people (approximately 95%) infected with HTLV-1 remain asymptomatic 8. Individuals with HTLV-1 have a 57% greater risk of death due to any cause than HTLV-1-negative individuals. HTLV-1 is associated with increased odds of seborrheic dermatitis and Sjogren's syndrome and a lower relative risk of gastric cancer 9. HTLV-1 can cause two severe diseases, adult T-cell leukemia/lymphoma (ATLL) and HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). It is estimated that 0.25-3% of people infected with HTLV-1 will develop HAM/TSP during their lifetime. HAM/TSP mainly occurs in adulthood. HAM/TSP has an insidious onset that progressively evolves to neurological features, such as spasticity or hyperreflexia of the lower extremities, lower extremity muscle weakness, and urinary bladder disturbances. Approximately 50% of cases present with sensory disturbances and low back pain. HAM/TSP can be associated with other HTLV-1-associated symptoms, such as uveitis, myositis, and infective dermatitis 10. Patients with HAM/TSP have difficulty performing daily routine activities, particularly because of a disturbed gait that compromises physical, emotional, and social aspects, impairing their quality of life 11. Patients with HIV-HTLV-1 coinfection or HTLV-1 infection report more difficulty performing daily activities than those with exclusive infection with human immunodeficiency virus (HIV) 12. The work ability index (WAI) questionnaire is an instrument that evaluates workers' perception of work demands and the environment, work organization, work community, promotion of workers' health and functional capacity, and promotion of professional competence. Good work ability means high-quality work, enjoyment of staying in one's job, and the expectation of a meaningful retirement 13. Therefore, this study aimed to measure the frequency and identify factors associated with work ability in patients living with HTLV-1.
METHODS: Study design and study population: This cross-sectional study was conducted from February 2018 to December 2019 at the University Hospital, Federal University of Bahia, Brazil. This study is part of broader research that investigates other health aspects of people with HTLV-1 14. The target population comprised 209 individuals aged 18 years or older who were invited to participate in the study. Severe cognitive deficits that prevented the elicitation of reliable information in the interview were an exclusion criterion. There were only two refusals, resulting in a final study population of 207 individuals.
Data collection instruments and procedures: Participants were interviewed by a member of the research team after the medical consultation in a quiet room, preserving the patient's privacy. Information about sociodemographic characteristics (age, race, schooling, civil status, number of children, and monthly family income, coded as < 1 Brazilian minimum wage and 1-2 Brazilian minimum wages; one Brazilian minimum wage was equivalent to US$ 197 at the time of the study), personal habits (smoking, drinking, and sedentarism), health-related quality of life, clinical data (comorbidities and HTLV-1 symptoms), and work ability was collected using structured questionnaires. Work ability, the dependent variable, was evaluated using the WAI questionnaire. The WAI is a summary measure of seven dimensions (range 7-49): 1 - current work ability compared with the lifetime best; 2 - work ability in relation to the demands of the job; 3 - number of current diseases diagnosed by a physician; 4 - estimated work impairment due to diseases; 5 - sick leave; 6 - self-prognosis of work ability 2 years from now; and 7 - mental resources. The total score was classified into four work ability categories: poor (7-27 points), moderate (28-36 points), good (37-43 points), and excellent (44-49 points). For the purposes of this study, the four possible subgroups of WAI were categorized as poor versus others 15. The WAI questionnaire was validated in a Brazilian population and showed satisfactory psychometric properties 16. Neurologic evaluation was performed on all 207 individuals at the University Hospital, according to the World Health Organization criteria 17. Seventy (33.8%) of the 207 individuals in the study presented neurological symptoms; all of them presented weakness and spasticity of one or both legs, compatible with HAM/TSP. Other diagnoses were 44 lumbar pain, 33 neurogenic bladder, 28 hyperreflexia, 11 polyneuropathy, and 5 erectile dysfunction (Figure 1). FIGURE 1: Study population flowchart. Health-related quality of life was evaluated using the 36-Item Short-Form Health Survey version 2 (SF-36v2) questionnaire. This instrument comprises 36 items that generate eight domains: physical functioning, physical role, bodily pain, general health, vitality, social functioning, emotional role, and mental health. Two summary measures can be calculated from these domains: the physical and mental component summaries. The psychometric properties of the SF-36v2 have been validated in a Brazilian population 18. PRO CoRE software, version 1.4 (Optum Inc., Johnston, RI, USA), was used to score the survey. The normalized scores have a mean of 50 and a standard deviation of 10, a transformation that enables better comparisons among domains. A commercial license (license number QM025905) granted permission for using the SF-36v2.
Laboratory examinations: HTLV-1 antibodies were detected in the blood of the participants by enzyme-linked immunosorbent assay (ELISA) and confirmed by western blotting at the Infectology Research Laboratory, University Hospital, Federal University of Bahia.
Statistical data analysis: Differences between subgroups of continuous variables were compared using Student's t-test. Differences between subgroups of categorical variables were compared using Pearson's chi-square test. Variables with a P-value < 0.20 in bivariate analysis were selected for composing a Poisson regression model with robust variance estimators that had work ability as the dependent variable 19-21. Cronbach's alpha coefficient was used to evaluate the internal consistency of the SF-36v2 and WAI instruments; values above 0.70 were considered acceptable 22-23.
Ethical aspects: The research protocol was approved by the research ethics committee of the Federal University of Bahia (opinion number: 30762714.4.0000.5577). All the participants provided written informed consent.
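The bivariate screening step described in the statistical analysis above (Student's t-test for continuous variables, Pearson's chi-square test for categorical ones, retaining predictors with P < 0.20 for the regression model) could be sketched as follows; the DataFrame layout and column names are hypothetical and the snippet is only an illustration of the procedure.

```python
import pandas as pd
from scipy import stats


def screen_predictors(df, outcome, categorical, continuous, threshold=0.20):
    """Return the predictors whose bivariate P-value against the binary
    outcome falls below the screening threshold (0.20 in the study)."""
    selected = []
    for col in categorical:
        table = pd.crosstab(df[col], df[outcome])
        # correction=False gives the plain Pearson chi-square statistic
        _, p, _, _ = stats.chi2_contingency(table, correction=False)
        if p < threshold:
            selected.append(col)
    for col in continuous:
        group0 = df.loc[df[outcome] == 0, col].dropna()
        group1 = df.loc[df[outcome] == 1, col].dropna()
        _, p = stats.ttest_ind(group0, group1)
        if p < threshold:
            selected.append(col)
    return selected
```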
RESULTS: The mean age was 55.2 years, ranging from 19 to 84 years; 73.0% were females, 100% had a monthly family income less than US$ 394, and 33.8% presented HAM/TSP. The work ability of the 207 individuals with HTLV-1 was poor in 54.1% (n = 112), moderate in 37.7% (n = 78), and good in 8.2% (n = 17). No individual was classified as having excellent work ability. The alpha coefficient of the WAI questionnaire was 0.84, indicating high internal consistency. The prevalence of poor work ability was significantly higher (P < 0.20) among individuals who had children, had schooling < 8 years, had a civil status other than a stable relationship, did not report alcohol consumption, and had sedentary status, comorbidities, or neurological symptoms (Table 1).

TABLE 1: Work ability according to characteristics of 207 individuals with HTLV-1, Salvador, Brazil, 2018-2019.
Characteristic | Poor, n (%) | Moderate/good, n (%) | PR | 95% CI | P-value
Sex: Male | 31 (55.4) | 25 (44.6) | 1.03 | 0.78-1.36 | 0.826
Sex: Female | 81 (53.4) | 70 (46.6) | | |
Race: White | 10 (58.8) | 7 (41.2) | 1.10 | 0.72-1.67 | 0.683
Race: Other | 102 (53.7) | 88 (46.3) | | |
Children: No | 11 (39.3) | 17 (60.7) | 0.70 | 0.43-1.12 | 0.090
Children: Yes | 101 (56.4) | 78 (43.6) | | |
Schooling: < 8 years | 80 (61.1) | 51 (38.9) | 1.45 | 1.08-1.95 | 0.008
Schooling: ≥ 8 years | 32 (42.1) | 44 (57.9) | | |
Civil status: Other | 66 (58.9) | 46 (41.1) | 1.22 | 0.94-1.58 | 0.131
Civil status: Stable relation | 46 (48.4) | 49 (51.6) | | |
Monthly family income: < 1 MW | 36 (55.4) | 29 (44.6) | 1.04 | 0.79-1.35 | 0.803
Monthly family income: 1 to 2 MW | 76 (53.5) | 66 (46.5) | | |
Smoking: Yes | 11 (68.8) | 5 (31.2) | 1.30 | 0.91-1.86 | 0.222
Smoking: No | 101 (52.9) | 90 (47.1) | | |
Drinking: Yes | 22 (42.3) | 30 (57.7) | 0.73 | 0.52-1.03 | 0.049
Drinking: No | 90 (58.1) | 65 (41.9) | | |
Comorbidities: Yes | 97 (59.1) | 67 (40.9) | 1.70 | 1.11-2.60 | 0.005
Comorbidities: No | 15 (34.9) | 28 (65.1) | | |
Sedentary: Yes | 79 (57.7) | 58 (42.3) | 1.22 | 0.92-1.63 | 0.151
Sedentary: No | 33 (47.1) | 37 (52.9) | | |
Neurological symptoms: Yes | 55 (78.6) | 15 (21.4) | 1.89 | 1.50-2.38 | < 0.001
Neurological symptoms: No | 57 (41.6) | 80 (58.4) | | |
*Fisher test; MW: Brazilian minimum wage (approx. 197.39 US$/month). Poor: n = 112; moderate/good: n = 95.

Bivariate analyses showed that individuals with poor work ability presented systematically lower (P < 0.001) SF-36 domain scores and physical and mental component summaries of health-related quality of life and were significantly older (P = 0.042) than those with moderate or good work ability (Table 2).

TABLE 2: Work ability according to SF-36 health-related quality of life domains (mean [SD], in %) and age (mean [SD], in years) of 207 individuals with HTLV-1, Salvador, Brazil, 2018-2019.
Variable | Cronbach alpha | Poor (n = 112) | Moderate/good (n = 95) | P-value
Physical Functioning | 0.95 | 33.1 (10.9) | 50.9 (8.9) | < 0.001
Role Physical | 0.95 | 29.1 (10.4) | 48.4 (12.9) | < 0.001
Bodily Pain | 0.78 | 34.6 (11.2) | 47.5 (11.1) | < 0.001
General Health | 0.77 | 36.2 (10.0) | 50.2 (8.9) | < 0.001
Vitality | 0.82 | 41.6 (12.7) | 56.2 (10.0) | < 0.001
Social Functioning | 0.75 | 38.3 (14.3) | 52.9 (8.9) | < 0.001
Role Emotional | 0.94 | 32.8 (16.6) | 49.0 (13.6) | < 0.001
Mental Health | 0.84 | 39.1 (15.5) | 51.6 (10.3) | < 0.001
Physical Component Summary | - | 32.7 (9.7) | 49.1 (9.8) | < 0.001
Mental Component Summary | - | 40.3 (16.6) | 52.5 (10.7) | < 0.001
Age | - | 56.8 (12.4) | 53.3 (12.4) | 0.042

The Poisson regression model estimated that adjusted prevalence rates (PR) of poor work ability were 30% higher among sedentary individuals (PR = 1.30; 95% confidence interval [CI]: 1.03-1.65) and 25% higher among those with neurological symptoms (PR = 1.25; 95% CI: 1.02-1.52). The mean level of the physical component summary of health-related quality of life was 5% lower (PR = 0.95; 95% CI: 0.94-0.96), and the mean level of the mental component summary was 2% lower (PR = 0.98; 95% CI: 0.97-0.99) among individuals with poor work ability compared with those with moderate or good work ability. The alpha coefficients of the eight domains of the SF-36v2 questionnaire varied from 0.75 to 0.95, revealing high internal consistency (Table 3).

TABLE 3: Results of Poisson regression having the prevalence ratio of low work ability as the dependent variable among 207 individuals with HTLV-1, Salvador, Brazil, 2018-2019.
Predictors (referent) | PR | 95% CI | P-value
Children (Yes) | 0.91 | 0.66-1.26 | 0.571
Schooling (≥ 8 years) | 1.05 | 0.84-1.31 | 0.696
Civil status (Stable relation) | 1.07 | 0.87-1.31 | 0.502
Drinking (No) | 1.12 | 0.86-1.46 | 0.392
Comorbidities (No) | 0.98 | 0.69-1.35 | 0.845
Age (Years) | 1.01 | 1.00-1.02 | 0.061
Sedentary (No) | 1.30 | 1.03-1.65 | 0.030
Neurological symptoms (No) | 1.25 | 1.02-1.52 | 0.028
Physical Component Summary (%) | 0.95 | 0.94-0.96 | < 0.001
Mental Component Summary (%) | 0.98 | 0.97-0.99 | < 0.001
PR: adjusted prevalence ratio.

DISCUSSION: Poor work ability was common in the study population (54.1%).
In addition, poor work ability is associated with an increased risk of sickness absence 24, early retirement 25, and higher mortality in older age 26. This study among people living with HTLV-1 found that poor work ability was associated with sedentarism, neurologic symptoms, and low health-related quality of life in both the physical and mental components. Multivariate analysis estimated that the adjusted prevalence of poor work ability was 30% higher among individuals with a sedentary lifestyle and 25% higher among those presenting with neurologic symptoms. People infected with HTLV-1 who already present with neurological symptoms are expected to have impaired work ability. Patients with HAM/TSP usually have impaired gait, dependence in daily activities, and a poor quality of life due to intense muscle weakness 27. The same reasoning applies to the relationship between sedentarism and poor work ability 28. Unfortunately, this study did not collect information on the temporal sequence of the relationship between these independent variables (sedentarism and neurologic symptoms) and the outcome (work ability). HTLV-1 infection has been associated with several diseases. Fortunately, only a few of these are fatal, such as leukemia. However, this disease is rare and has a relatively low impact on community mortality rates 9. The results of this study raise awareness of the poorly recognized burden of HTLV-1 infection on the morbidity caused by neurologic symptoms and its impact on work ability. Individuals with poor work ability had a lower health-related quality of life than those with moderate or good work ability. The differences found in the bivariate analyses were confirmed in the multivariate analyses, which estimated a 5% lower physical and a 2% lower mental component summary for patients with poor work ability after adjusting for relevant variables. The complex construct of the WAI questionnaire has many points of convergence with that of the SF-36v2 29, since both deal with physical and mental demands. Therefore, the WAI score is expected to be strongly associated with SF-36v2 dimensions and component summaries 30, 31. For example, the progressive and disabling gait disturbances of patients with HTLV-1 may impair physical, emotional, social, and mental aspects that, in turn, may modify the perception of health-related quality of life 11. The magnitude of the differences in physical and mental component summaries according to work ability can be analyzed from the perspective of the minimal clinically important difference (MCID) 32. The concept of MCID evolved to the minimal important difference, defined as "the smallest difference in score in the domain of interest that patients perceive as important, either beneficial or harmful, and that would lead the clinician to consider a change in the patient's management" 33. The MCID for the physical component summary varied across studies, including patients with moderate to severe psoriasis (2.5 points) 34, patients undergoing lumbar spine surgeries (4.11-5.21 35; 4.93 36), and surgical (7.83) and non-surgical (2.15) patients with spinal deformities 37. The MCID for the mental component summary was 2.5 points in a study of patients with moderate to severe psoriasis 34. Roughly half of all patients treated for hepatitis C fail to achieve clinically important improvements in physical and mental component summaries 38. However, the MCID is not an immutable characteristic and may vary by population and context 39. Concerning patients' quality of life scores, the MCID at the group level is necessarily smaller than that at the individual patient level 40. We are unaware of a study that determined the MCID for the work ability index in a population similar to ours. The high frequency (54.1%) of poor work ability among people living with HTLV-1 and the nature of the factors associated with poor work ability suggest the need to implement strategies to provide adequate health care for this population. One important limitation of this preliminary cross-sectional study is the lack of information about the temporal sequence of neurological symptoms and sedentarism in relation to the investigated outcome of poor work ability. However, to the best of our knowledge, this is the first study to evaluate work ability and associated factors among people living with HTLV-1. The frequency of poor work ability among people living with HTLV-1 was high and was associated with sedentarism, neurologic symptoms, and low health-related quality of life.
Background: Infection with the human T-lymphotropic virus type 1 (HTLV-1) affects an estimated 10-15 million people worldwide. However, knowledge of the impact of HTLV-1 infection on work ability is lacking. This study aimed to measure the frequency and identify factors associated with poor work ability in patients living with HTLV-1. Methods: This cross-sectional study included 207 individuals infected with HTLV-1 who attended the University Hospital in Salvador, Bahia, Brazil. HTLV-1 antibodies were detected in the participants' blood by enzyme-linked immunosorbent assay (ELISA) and confirmed by western blotting. Participants answered a questionnaire on sociodemographic data, personal habits, clinical data, health-related quality of life, and work ability, evaluated using the work ability index questionnaire. A Poisson regression model with a robust variance estimate was used to identify the factors associated with the prevalence of poor work ability. Results: The patients' mean age was 55.2 years, ranging from 19 to 84 years; 73.0% were females, 100% had a monthly family income less than US$ 394, and 33.8% presented HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). No individual was classified as having excellent work ability. Poor work ability prevalence was strongly associated (prevalence ratio; 95% confidence interval [CI]) with sedentarism (1.30; 1.03-1.65), neurological symptoms (1.25; 1.02-1.52), and low physical (0.95; 0.94-0.96) and mental (0.98; 0.97-0.99) component summaries of health-related quality of life. Conclusions: Poor work ability among people living with HTLV-1 is associated with sedentarism, neurologic symptoms, and low health-related quality of life.
null
null
5,002
327
[ 106, 582, 40, 109, 31 ]
9
[ "work", "work ability", "ability", "study", "htlv", "health", "individuals", "poor", "physical", "population" ]
[ "negligible prevalence htlv", "infected htlv develop", "people infected htlv", "htlv infection endemic", "prevalence htlv infection" ]
null
null
[CONTENT] human T-lymphotropic virus 1 | Work capacity evaluation | Paraparesis | Tropical spastic | Quality of life [SUMMARY]
[CONTENT] human T-lymphotropic virus 1 | Work capacity evaluation | Paraparesis | Tropical spastic | Quality of life [SUMMARY]
[CONTENT] human T-lymphotropic virus 1 | Work capacity evaluation | Paraparesis | Tropical spastic | Quality of life [SUMMARY]
null
[CONTENT] human T-lymphotropic virus 1 | Work capacity evaluation | Paraparesis | Tropical spastic | Quality of life [SUMMARY]
null
[CONTENT] Cross-Sectional Studies | Female | HTLV-I Infections | Human T-lymphotropic virus 1 | Humans | Leukemia, T-Cell | Male | Middle Aged | Paraparesis, Tropical Spastic | Quality of Life | Work Capacity Evaluation [SUMMARY]
[CONTENT] Cross-Sectional Studies | Female | HTLV-I Infections | Human T-lymphotropic virus 1 | Humans | Leukemia, T-Cell | Male | Middle Aged | Paraparesis, Tropical Spastic | Quality of Life | Work Capacity Evaluation [SUMMARY]
[CONTENT] Cross-Sectional Studies | Female | HTLV-I Infections | Human T-lymphotropic virus 1 | Humans | Leukemia, T-Cell | Male | Middle Aged | Paraparesis, Tropical Spastic | Quality of Life | Work Capacity Evaluation [SUMMARY]
null
[CONTENT] Cross-Sectional Studies | Female | HTLV-I Infections | Human T-lymphotropic virus 1 | Humans | Leukemia, T-Cell | Male | Middle Aged | Paraparesis, Tropical Spastic | Quality of Life | Work Capacity Evaluation [SUMMARY]
null
[CONTENT] negligible prevalence htlv | infected htlv develop | people infected htlv | htlv infection endemic | prevalence htlv infection [SUMMARY]
[CONTENT] negligible prevalence htlv | infected htlv develop | people infected htlv | htlv infection endemic | prevalence htlv infection [SUMMARY]
[CONTENT] negligible prevalence htlv | infected htlv develop | people infected htlv | htlv infection endemic | prevalence htlv infection [SUMMARY]
null
[CONTENT] negligible prevalence htlv | infected htlv develop | people infected htlv | htlv infection endemic | prevalence htlv infection [SUMMARY]
null
[CONTENT] work | work ability | ability | study | htlv | health | individuals | poor | physical | population [SUMMARY]
[CONTENT] work | work ability | ability | study | htlv | health | individuals | poor | physical | population [SUMMARY]
[CONTENT] work | work ability | ability | study | htlv | health | individuals | poor | physical | population [SUMMARY]
null
[CONTENT] work | work ability | ability | study | htlv | health | individuals | poor | physical | population [SUMMARY]
null
[CONTENT] htlv | infection | associated | work | ham | tsp | ham tsp | htlv infection | htlv associated | prevalence [SUMMARY]
[CONTENT] study | work | ability | work ability | brazilian | health | population | university | number | points [SUMMARY]
[CONTENT] pr | work ability | work | ability | 95 | table | component summary | ci | component | individuals [SUMMARY]
null
[CONTENT] work | ability | work ability | study | htlv | university | research | health | population | individuals [SUMMARY]
null
[CONTENT] 1 | an estimated 10-15 million ||| ||| [SUMMARY]
[CONTENT] 207 | the University Hospital | Salvador | Bahia | Brazil ||| ELISA ||| ||| Poisson [SUMMARY]
[CONTENT] 55.2 | 19 to 84 years | 73.0% | 100% | monthly | less than US$ 394 | 33.8% ||| ||| 95% ||| CI | 1.30 | 1.03-1.65 | 1.25 | 1.02 | 0.95 | 0.94 | 0.98 | 0.97 [SUMMARY]
null
[CONTENT] 1 | an estimated 10-15 million ||| ||| ||| 207 | the University Hospital | Salvador | Bahia | Brazil ||| ELISA ||| ||| Poisson ||| ||| 55.2 | 19 to 84 years | 73.0% | 100% | monthly | less than US$ 394 | 33.8% ||| ||| 95% ||| CI | 1.30 | 1.03-1.65 | 1.25 | 1.02 | 0.95 | 0.94 | 0.98 | 0.97 ||| [SUMMARY]
null
Body mass index and survival in women with breast cancer-systematic literature review and meta-analysis of 82 follow-up studies.
24769692
A positive association between obesity and survival after breast cancer was demonstrated in previous meta-analyses of published data, but only the results for the comparison of obese versus non-obese women were summarised.
BACKGROUND
We systematically searched in MEDLINE and EMBASE for follow-up studies of breast cancer survivors with body mass index (BMI) before and after diagnosis, and total and cause-specific mortality until June 2013, as part of the World Cancer Research Fund Continuous Update Project. Random-effects meta-analyses were conducted to explore the magnitude and the shape of the associations.
METHODS
Eighty-two studies, including 213 075 breast cancer survivors with 41 477 deaths (23 182 from breast cancer), were identified. For BMI before diagnosis, compared with normal weight women, the summary relative risks (RRs) of total mortality were 1.41 [95% confidence interval (CI) 1.29-1.53] for obese (BMI >30.0), 1.07 (95% CI 1.02-1.12) for overweight (BMI 25.0-<30.0) and 1.10 (95% CI 0.92-1.31) for underweight (BMI <18.5) women. For obese women, the summary RRs were 1.75 (95% CI 1.26-2.41) for pre-menopausal and 1.34 (95% CI 1.18-1.53) for post-menopausal breast cancer. For each 5 kg/m2 increment of BMI before, <12 months after, and ≥12 months after diagnosis, increased risks of 17%, 11%, and 8% for total mortality, and 18%, 14%, and 29% for breast cancer mortality were observed, respectively.
RESULTS
Obesity is associated with poorer overall and breast cancer survival in pre- and post-menopausal breast cancer, regardless of when BMI is ascertained. Being overweight is also related to a higher risk of mortality. Randomised clinical trials are needed to test interventions for weight loss and maintenance on survival in women with breast cancer.
CONCLUSIONS
[ "Body Mass Index", "Breast Neoplasms", "Female", "Follow-Up Studies", "Humans", "MEDLINE", "Obesity", "Prognosis", "Randomized Controlled Trials as Topic", "Risk Factors", "Survivors" ]
4176449
introduction
The number of female breast cancer survivors is growing because of longer survival as a consequence of advances in treatment and early diagnosis. There were ∼2.6 million female breast cancer survivors in the US in 2008 [1], and in the UK, breast cancer accounted for ∼28% of the 2 million cancer survivors in 2008 [2]. Obesity is a pandemic health concern, with over 500 million adults worldwide estimated to be obese and 958 million to be overweight in 2008 [3]. One of the established risk factors for breast cancer development in post-menopausal women is obesity [4], which has further been linked to breast cancer recurrence [5] and poorer survival in pre- and post-menopausal breast cancer [6, 7]. Preliminary findings from randomised, controlled trials suggest that lifestyle modifications improved biomarkers associated with breast cancer progression and overall survival [8]. The biological mechanisms underlying the association between obesity and breast cancer survival are not established, and could involve interacting mediators of hormones, adipocytokines, and inflammatory cytokines which link to cell survival or apoptosis, migration, and proliferation [9]. Higher levels of oestradiol, produced in postmenopausal women through aromatisation of androgens in the adipose tissues [10], and higher levels of insulin [11], both common in obese women, are linked to a poorer prognosis in breast cancer. A possible interaction between leptin and insulin [12] and obesity-related markers of inflammation [13] have also been linked to breast cancer outcomes. Non-biological mechanisms could include chemotherapy under-dosing in obese women, suboptimal treatment, and obesity-related complications [14]. Numerous studies have examined the relationship between obesity and breast cancer outcomes, and past reviews have concluded that obesity is linked to lower survival; however, when investigated in a meta-analysis of published data, only the results of obese compared with non-obese or lighter women were summarised [6, 7, 15]. We carried out a systematic literature review and meta-analysis of published studies to explore the magnitude and the shape of the association between body fatness, as measured by body mass index (BMI), and the risk of total and cause-specific mortality, overall and in women with pre- and post-menopausal breast cancer. As body weight may change close to diagnosis and during primary treatment of breast cancer [16], we examined BMI in three periods: before diagnosis, <12 months after diagnosis, and ≥12 months after breast cancer diagnosis.
materials and methods
data sources and search: We carried out a systematic literature search, limited to publications in English, for articles on BMI and survival in women with breast cancer in OVID MEDLINE and EMBASE from inception to 30 June 2013, using the search strategy implemented for the WCRF/AICR Continuous Update Project on breast cancer survival. The search strategy contained medical subject headings and text words that covered a broad range of factors on diet, physical activity, and anthropometry. The protocol for the review is available at http://www.dietandcancerreport.org/index.php [17]. In addition, we hand-searched the reference lists of relevant articles, reviews, and meta-analysis papers.
study selection: Included were follow-up studies of breast cancer survivors, which reported estimates of the associations of BMI ascertained before and after breast cancer diagnosis with total or cause-specific mortality risks. Studies that investigated BMI after diagnosis were divided into two groups: BMI <12 months after diagnosis (BMI <12 months) and BMI 12 months or more after diagnosis (BMI ≥12 months). Outcomes included total mortality, breast cancer mortality, death from cardiovascular disease, and death from causes other than breast cancer. When multiple publications on the same study population were found, results based on longer follow-up and more outcomes were selected for the meta-analysis.
data extraction: DSMC, TN, and DA conducted the search. DSMC, ARV, and DNR extracted the study characteristics, tumour-related information, cancer treatment, timing and method of weight and height assessment, BMI levels, number of outcomes and population at-risk, outcome type, estimates of association and their measure of variance [95% confidence interval (CI) or P value], and adjustment factors in the analysis.
statistical analysis: Categorical and dose–response meta-analyses were conducted using random-effects models to account for between-study heterogeneity [18]. Summary relative risks (RRs) were estimated by averaging the natural logarithm of the RRs of each study, weighted by the inverse of the variance and incorporating a random-effects variance component derived from the extent of variability of the effect sizes across studies. The maximally adjusted RR estimates were used for the meta-analysis, except for the follow-up of randomised, controlled trials [19, 20], where unadjusted results were also included, as these studies mostly involved a more homogeneous study population. BMI or Quetelet's Index (QI) measured in units of kg/m2 was used. We conducted categorical meta-analyses by pooling the categorical results reported in the studies. The studies used different BMI categories. In some studies, underweight (BMI <18.5 kg/m2 according to the WHO international classification) and normal weight women (BMI 18.5–<25.0 kg/m2) were classified together, but in other studies they were classified separately. Similarly, most studies classified overweight (BMI 25.0–<30.0 kg/m2) and obese (BMI ≥30.0 kg/m2) women separately, but in some studies overweight and obese women were combined. The reference category was normal weight, or underweight together with normal weight, depending on the study. For convenience, the BMI categories are referred to as underweight, normal weight, overweight, and obese in the present review. We derived the RRs for overweight and obese women compared with normal weight women in two studies [19, 21] that had more than four BMI categories using the method of Hamling et al. [22]. Studies that reported results for obese compared with non-obese women were analysed separately. The non-linear dose–response relationship between BMI and mortality was examined using the best-fitting second-order fractional polynomial regression model [23], defined as the one with the lowest deviance. Non-linearity was tested using the likelihood ratio test [24]. In the non-linear meta-analysis, the reference category was the lowest BMI category in each study, and RRs were recalculated using the method of Hamling et al.
We also conducted linear dose–response meta-analyses, excluding the underweight category when it was reported separately, by pooling estimates of RR per unit increase (with standard error) provided by the studies, or derived by us from categorical data using generalised least-squares for trend estimation [25]. To estimate the trend, the following had to be available: the numbers of outcomes and the population at risk for at least three BMI categories (or the information required to derive them using standard methods [26]), and the means or medians of the BMI categories or, if not reported, the estimated midpoints of the categories. When the extreme BMI categories were open-ended, we used the width of the adjacent close-ended category to estimate the midpoints. Where the RRs were presented by subgroups (age group [27], menopausal status [28, 29], stage [30] or subtype [31] of breast cancer, or others [32–34]), an overall estimate for the study was obtained by a fixed-effect model before pooling in the meta-analysis. We estimated the risk increase of death for an increment of 5 kg/m2 of BMI.
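To make the trend-estimation step concrete, the sketch below shows how a per 5 kg/m2 log-RR slope can be derived from one study's categorical results using category midpoints. It is a simplified, hypothetical illustration: the review used generalised least-squares for trend estimation, which additionally models the covariance induced by the shared reference category, and the numbers below are invented rather than taken from any included study.

```python
# Illustrative sketch (not the authors' code): deriving a per 5 kg/m2 trend from
# one study's categorical results via variance-weighted least squares through the
# origin. The actual method (generalised least-squares for trend) also accounts
# for the correlation between estimates sharing a common reference group.
import numpy as np

# Hypothetical categorical results: BMI midpoints (kg/m2), RRs, and 95% CIs.
midpoints = np.array([21.7, 27.5, 32.5])   # normal (reference), overweight, obese
rr        = np.array([1.00, 1.10, 1.40])
ci_low    = np.array([1.00, 0.95, 1.15])
ci_high   = np.array([1.00, 1.27, 1.70])

log_rr = np.log(rr)
# Standard errors recovered from the CIs (the reference category carries no variance).
se = np.where(rr == 1.0, np.nan, (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96))

mask = ~np.isnan(se)
x = midpoints[mask] - midpoints[0]          # BMI difference from the reference midpoint
y = log_rr[mask]
w = 1.0 / se[mask] ** 2                     # inverse-variance weights

beta = np.sum(w * x * y) / np.sum(w * x ** 2)       # log RR per 1 kg/m2
se_beta = np.sqrt(1.0 / np.sum(w * x ** 2))

rr_per_5 = np.exp(5 * beta)                          # rescale to per 5 kg/m2
ci_per_5 = np.exp(5 * (beta + np.array([-1.96, 1.96]) * se_beta))
print(f"RR per 5 kg/m2: {rr_per_5:.2f} (95% CI {ci_per_5[0]:.2f}-{ci_per_5[1]:.2f})")
```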
To assess heterogeneity, we computed the Cochran Q test and the I2 statistic [35]. Cut points of 30% and 50% were used to define low, moderate, and substantial levels of heterogeneity. Sources of heterogeneity were explored by meta-regression and subgroup analyses using pre-defined factors, including indicators of study quality (menopausal status, hormone receptor status, number of outcomes, length of follow-up, study design, geographic location, BMI assessment, adjustment for confounders, and others). Small-study or publication bias was examined by Egger's test [36] and visual inspection of the funnel plots. The influence of each individual study on the summary RR was examined by excluding each study in turn [37]. A P value of <0.05 was considered statistically significant. All analyses were conducted using Stata version 12.1 (Stata Statistical Software: Release 12, StataCorp LP, College Station, TX).
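The heterogeneity statistics reported throughout the results follow the standard definitions [35]:

\[
Q = \sum_{i=1}^{k} w_i \bigl(\ln RR_i - \widehat{\ln RR}\bigr)^2, \qquad
I^2 = \max\!\left(0,\; \frac{Q - (k-1)}{Q}\right) \times 100\%,
\]

where \(w_i\) are the inverse-variance weights and \(k\) is the number of studies; \(I^2\) estimates the proportion of total variation in study estimates that is due to between-study heterogeneity rather than chance, with the 30% and 50% cut points above marking moderate and substantial heterogeneity.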
results
A total of 124 publications investigating the relationship of body fatness and mortality in women with breast cancer were identified. We excluded 31 publications, including four publications on other obesity indices [38–41], 12 publications without a measure of association [42–53], and 15 publications superseded by publications of the same study with more outcomes [54–68]. A further 14 publications were excluded because of insufficient data for the meta-analysis (five publications [69–73]) or unadjusted results (nine publications [74–82]); of these, nine publications reported a statistically significant increased risk of total, breast cancer, or non-breast cancer mortality in obese women (before or <12 months after diagnosis) compared with the reference BMI [69, 71–74, 76, 77, 79, 82], two reported non-significant inverse associations [75, 80], and three reported no association of BMI with survival after breast cancer [70, 78, 81]. Hence, 79 publications from 82 follow-up studies, with 41 477 deaths (23 182 from breast cancer) in 213 075 breast cancer survivors, were included in the meta-analyses (Figure 1). Supplementary Table S1, available at Annals of Oncology online, shows the characteristics of the studies included in the meta-analyses; details of the excluded studies are in supplementary Table S2, available at Annals of Oncology online. Results of the meta-analyses are summarised in Table 1.

Table 1. Summary of meta-analyses of BMI and survival in women with breast cancer (a)

                                    BMI before diagnosis               |  BMI <12 months after diagnosis     |  BMI ≥12 months after diagnosis
                                    N   RR (95% CI)        I2    Ph    |  N   RR (95% CI)        I2    Ph    |  N   RR (95% CI)        I2    Ph
Total mortality
  Under versus normal weight        10  1.10 (0.92–1.31)   48%   0.04  |  11  1.25 (0.99–1.57)   63%   <0.01 |  3   1.29 (1.02–1.63)   0%    0.39
  Over versus normal weight         19  1.07 (1.02–1.12)   0%    0.88  |  22  1.07 (1.02–1.12)   21%   0.18  |  4   0.98 (0.86–1.11)   0%    0.72
  Obese versus normal weight        21  1.41 (1.29–1.53)   38%   0.04  |  24  1.23 (1.12–1.33)   69%   <0.01 |  5   1.21 (1.06–1.38)   0%    0.70
  Obese versus non-obese            –                                  |  12  1.26 (1.07–1.47)   80%   <0.01 |  –
  Per 5 kg/m2 increase              15  1.17 (1.13–1.21)   7%    0.38  |  12  1.11 (1.06–1.16)   55%   0.01  |  4   1.08 (1.01–1.15)   0%    0.52
Breast cancer mortality
  Under versus normal weight        8   1.02 (0.85–1.21)   31%   0.18  |  5   1.53 (1.27–1.83)   0%    0.59  |  1   1.10 (0.15–8.08)   –     –
  Over versus normal weight         21  1.11 (1.06–1.17)   0%    0.66  |  12  1.11 (1.03–1.20)   14%   0.31  |  2   1.37 (0.96–1.95)   0%    0.90
  Obese versus normal weight        22  1.35 (1.24–1.47)   36%   0.05  |  12  1.25 (1.10–1.42)   53%   0.02  |  2   1.68 (0.90–3.15)   67%   0.08
  Obese versus non-obese            –                                  |  6   1.26 (1.05–1.51)   64%   0.02  |  –
  Per 5 kg/m2 increase              18  1.18 (1.12–1.25)   47%   0.01  |  8   1.14 (1.05–1.24)   66%   0.01  |  2   1.29 (0.97–1.72)   64%   0.10
Cardiovascular disease related mortality
  Over versus normal weight         2   1.01 (0.80–1.29)   0%    0.87  |  –                                  |  –
  Obese versus normal weight        2   1.60 (0.66–3.87)   78%   0.03  |  –                                  |  –
  Per 5 kg/m2 increase              2   1.21 (0.83–1.77)   80%   0.03  |  –                                  |  –
Non-breast cancer mortality
  Over versus normal weight         –                                  |  5   0.96 (0.83–1.11)   26%   0.25  |  –
  Obese versus normal weight        –                                  |  5   1.29 (0.99–1.68)   72%   0.01  |  –

(a) BMI before and after diagnosis (<12 months after, or ≥12 months after diagnosis) was classified according to the exposure period which the studies referred to in the BMI assessment; the BMI categories were included in the categorical meta-analyses as defined by the studies.
N, number of studies; Ph, P for heterogeneity between studies.

Figure 1. Flowchart of search.
Studies were follow-ups of women with breast cancer identified in prospective aetiologic cohort studies (women were free of cancer at enrolment), cohorts of breast cancer survivors identified in hospitals or through cancer registries, or follow-ups of breast cancer patients enrolled in case–control studies or randomised clinical trials. Some studies included only premenopausal women [83–85] or only postmenopausal women [21, 27, 86–94], but most included both. Menopausal status was usually determined at the time of diagnosis. Year of diagnosis ranged from 1957–1965 [70] to 2002–2009 [74]. Patient tumour characteristics and stage of disease at diagnosis varied across studies, and some studies included carcinoma in situ. Not all studies provided clinical information on the tumour, treatment, and co-morbidities. Most of the studies were based in North America or Europe. There were three studies from each of Australia [79, 95, 96], Korea [97, 98], and China [99–101]; two studies from Japan [71, 102]; one study from Tunisia [103]; and four international studies [19, 104–106]. Study size ranged from 96 [107] to 24 698 patients [97]. The total number of deaths ranged from 56 [93] to 7397 [108], and the proportion of deaths from breast cancer ranged from 22% [27] to 98% [84] when reported. All but eight studies [30, 93, 94, 98, 99, 109–111] had an average follow-up of more than 5 years.

BMI and total mortality
categorical meta-analysis
For BMI before diagnosis, compared with normal weight women, the summary RRs were 1.41 (95% CI 1.29–1.53, 21 studies) for obese women, 1.07 (95% CI 1.02–1.12, 19 studies) for overweight women, and 1.10 (95% CI 0.92–1.31, 10 studies) for underweight women (Figure 2). For BMI <12 months after diagnosis and the same comparisons, the summary RRs were 1.23 (95% CI 1.12–1.33, 24 studies) for obese women, 1.07 (95% CI 1.02–1.12, 22 studies) for overweight women, and 1.25 (95% CI 0.99–1.57, 11 studies) for underweight women (supplementary Figure S1, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women and between studies of underweight women (I2 = 69%, P < 0.01; I2 = 63%, P < 0.01, respectively). For BMI ≥12 months after diagnosis, the summary RRs were 1.21 (95% CI 1.06–1.38, 5 studies) for obese women, 0.98 (95% CI 0.86–1.11, 4 studies) for overweight women, and 1.29 (95% CI 1.02–1.63, 3 studies) for underweight women (supplementary Figure S2, available at Annals of Oncology online). Twelve additional studies reported results for obese versus non-obese women <12 months after diagnosis; the summary RR was 1.26 (95% CI 1.07–1.47, I2 = 80%, P < 0.01).

Figure 2. Categorical meta-analysis of pre-diagnosis BMI and total mortality.
dose–response meta-analysis
There was evidence of a J-shaped association in the non-linear dose–response meta-analyses of BMI before and after diagnosis with total mortality (all P < 0.01; Figure 3), suggesting that underweight women may be at slightly increased risk compared with normal weight women. The curves show linear increasing trends from 20 kg/m2 for BMI before diagnosis and <12 months after diagnosis, and from 25 kg/m2 for BMI ≥12 months after diagnosis. When linear models were fitted excluding the underweight category, the summary RRs of total mortality for each 5 kg/m2 increase in BMI were 1.17 (95% CI 1.13–1.21, 15 studies, 6358 deaths), 1.11 (95% CI 1.06–1.16, 12 studies, 6020 deaths), and 1.08 (95% CI 1.01–1.15, 4 studies, 1703 deaths) for BMI before, <12 months after, and ≥12 months after diagnosis, respectively (Figure 4). Substantial heterogeneity was observed between studies on BMI <12 months after diagnosis (I2 = 55%, P = 0.01).

Figure 3. Non-linear dose–response curves of BMI and mortality.
Figure 4. Linear dose–response meta-analysis of BMI and total mortality.
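As an aid to interpretation (not a result reported by the included studies), the per 5 kg/m2 summary estimates can be rescaled to other BMI differences under the log-linearity assumed by these models:

\[
RR(\Delta) = \exp\!\left(\frac{\Delta}{5}\,\ln RR_{5}\right),
\]

so that, for pre-diagnosis BMI and total mortality (\(RR_{5} = 1.17\)), a 10 kg/m2 higher BMI corresponds to approximately \(1.17^{2} \approx 1.37\), i.e. a 37% higher risk, provided the linear trend holds over that range.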
BMI and breast cancer mortality
categorical meta-analysis
BMI was significantly associated with breast cancer mortality. Compared with normal weight women, for BMI before diagnosis, the summary RRs were 1.35 (95% CI 1.24–1.47, 22 studies) for obese women, 1.11 (95% CI 1.06–1.17, 21 studies) for overweight women, and 1.02 (95% CI 0.85–1.21, 8 studies) for underweight women (Figure 5). For BMI <12 months after diagnosis, the summary RRs were 1.25 (95% CI 1.10–1.42, 12 studies) for obese women, 1.11 (95% CI 1.03–1.20, 12 studies) for overweight women, and 1.53 (95% CI 1.27–1.83, 5 studies) for underweight women (supplementary Figure S3, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women (I2 = 53%, P = 0.02). For BMI ≥12 months after diagnosis, the summary RRs of the two studies identified were 1.68 (95% CI 0.90–3.15) for obese women and 1.37 (95% CI 0.96–1.95) for overweight women (supplementary Figure S4, available at Annals of Oncology online). The summary RR of another six studies that reported results for obese versus non-obese women <12 months after diagnosis was 1.26 (95% CI 1.05–1.51, I2 = 64%, P = 0.02).

Figure 5. Categorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.
dose–response meta-analysis
There was no significant evidence of a non-linear relationship between BMI before, <12 months after, or ≥12 months after diagnosis and breast cancer mortality (P = 0.21, P = 1.00, and P = 0.86, respectively) (Figure 3). When linear models were fitted excluding data from the underweight category, statistically significant increased risks of breast cancer mortality with BMI before and <12 months after diagnosis were observed (Figure 6). The summary RRs for each 5 kg/m2 increase were 1.18 (95% CI 1.12–1.25, 18 studies, 5262 breast cancer deaths) for BMI before diagnosis and 1.14 (95% CI 1.05–1.24, 8 studies, 3857 breast cancer deaths) for BMI <12 months after diagnosis, with moderate (I2 = 47%, P = 0.01) and substantial (I2 = 66%, P = 0.01) heterogeneity between studies, respectively. Only two studies on BMI ≥12 months after diagnosis and breast cancer mortality (N = 220 deaths) were identified; the summary RR was 1.29 (95% CI 0.97–1.72).

Figure 6. Linear dose–response meta-analysis of BMI and breast cancer mortality.
BMI and other mortality outcomes
Only two studies reported results for death from cardiovascular disease (N = 151 deaths) [27, 112]. The summary RR for obese versus normal weight before diagnosis was 1.60 (95% CI 0.66–3.87). No association was observed for overweight versus normal weight (summary RR = 1.01, 95% CI 0.80–1.29). For each 5 kg/m2 increase in BMI, the summary RR was 1.21 (95% CI 0.83–1.77). Five studies reported results for deaths from any cause other than breast cancer (N = 2704 deaths) [21, 34, 108, 113, 114]. The summary RRs were 1.29 (95% CI 0.99–1.68, I2 = 72%, P = 0.01) for obese women and 0.96 (95% CI 0.83–1.11, I2 = 26%, P = 0.25) for overweight women compared with normal weight women.

subgroup, meta-regression, and sensitivity analyses
The results of the subgroup and meta-regression analyses are in supplementary Tables S3 and S4, available at Annals of Oncology online. Subgroup analysis was not carried out for BMI ≥12 months after diagnosis because the limited number of studies would hinder any meaningful comparisons. Increased risks of mortality were observed in the meta-analyses by menopausal status. Although the summary risk estimates seemed stronger for premenopausal breast cancer, there was no significant heterogeneity between pre- and post-menopausal breast cancer in the meta-regression analyses (P = 0.28–0.89) (supplementary Tables S3 and S4, available at Annals of Oncology online).
For BMI before diagnosis and total mortality, the summary RRs for obese versus normal weight were 1.75 (95% CI 1.26–2.41, I2 = 70%, P < 0.01, 7 studies) in women with pre-menopausal breast cancer and 1.34 (95% CI 1.18–1.53, I2 = 27%, P = 0.20, 9 studies) in women with post-menopausal breast cancer. Studies with a larger number of deaths [105, 115], conducted in Europe [28, 115], or with weight and height assessed through medical records [28, 104, 115, 116] tended to report weaker associations of BMI <12 months after diagnosis with total mortality than other studies (meta-regression P = 0.01, 0.02, and 0.01, respectively) (supplementary Table S3, available at Annals of Oncology online), while studies with a larger number of deaths [101], conducted in Asia [101, 102], or adjusted for co-morbidity [101, 102] reported weaker associations of BMI <12 months after diagnosis with breast cancer mortality (meta-regression P = 0.01, 0.02, and 0.01, respectively) (supplementary Table S4, available at Annals of Oncology online). Analyses stratified by study design, or restricted to studies with invasive cases only, early-stage non-metastatic cases only, or mammography screening-detected cases only, or controlled for previous diseases, did not produce results materially different from those obtained in the overall analyses (results not shown). Summary risk estimates remained statistically significant when each study was omitted in turn, except for BMI ≥12 months after diagnosis and total mortality: the summary RR was 1.06 (95% CI 0.98–1.15) per 5 kg/m2 increase when Flatt et al. [117], which contributed 315 deaths, was omitted.
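The leave-one-out sensitivity analysis described above can be sketched as follows. This is a generic illustration with hypothetical study-level inputs, re-pooling with a DerSimonian–Laird random-effects model after dropping each study in turn; it is not the authors' Stata analysis code.

```python
# Illustrative leave-one-out influence analysis: re-estimate the random-effects
# summary RR after omitting each study in turn. Hypothetical inputs only.
import numpy as np

def dl_pool(log_rr, se):
    """DerSimonian-Laird random-effects pooling of log relative risks."""
    w = 1.0 / se**2                                   # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)             # Cochran's Q
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re))

# Hypothetical per 5 kg/m2 log RRs and standard errors for a handful of studies.
log_rr = np.array([0.16, 0.08, 0.22, 0.05, 0.12])
se     = np.array([0.05, 0.06, 0.09, 0.07, 0.04])

for i in range(len(log_rr)):
    keep = np.arange(len(log_rr)) != i
    pooled, se_pooled = dl_pool(log_rr[keep], se[keep])
    lo, hi = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
    print(f"Omitting study {i + 1}: RR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```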
small studies or publication bias
Asymmetry was detected only in the funnel plots of BMI <12 months after diagnosis and total mortality, and breast cancer mortality, which suggests that small studies with an inverse association are missing (plots not shown). Egger's tests were borderline significant (P = 0.05) and statistically significant (P = 0.03), respectively.
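For completeness, Egger's regression test for small-study effects referred to above can be sketched as follows. The study-level values are hypothetical and the code is a generic illustration, not the analysis code used in the review (which was run in Stata).

```python
# Illustrative sketch of Egger's regression test for funnel plot asymmetry [36].
# Inputs are hypothetical study-level estimates, not data from the included studies.
import numpy as np
from scipy import stats

log_rr = np.array([0.26, 0.18, 0.34, 0.05, 0.41, 0.12])   # hypothetical ln(RR) values
se     = np.array([0.10, 0.08, 0.20, 0.06, 0.25, 0.12])   # their standard errors

# Egger's test: regress the standardised effect (ln RR / SE) on precision (1 / SE);
# the intercept estimates small-study bias, and H0: intercept = 0 is tested with a t-test.
y = log_rr / se
x = 1.0 / se
X = np.column_stack([np.ones_like(x), x])                  # intercept + precision

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
dof = len(y) - 2
sigma2 = np.sum((y - fitted) ** 2) / dof                   # residual variance
cov_beta = sigma2 * np.linalg.inv(X.T @ X)
t_intercept = beta[0] / np.sqrt(cov_beta[0, 0])
p_value = 2 * stats.t.sf(abs(t_intercept), dof)

print(f"Egger intercept = {beta[0]:.3f}, t = {t_intercept:.2f}, P = {p_value:.3f}")
```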
null
null
[ "data sources and search", "study selection", "data extraction", "statistical analysis", "BMI and total mortality", "categorical meta-analysis", "dose–response meta-analysis", "BMI and breast cancer mortality", "categorical meta-analysis", "dose–response meta-analysis", "BMI and other mortality outcomes", "subgroup, meta-regression, and sensitivity analyses", "small studies or publication bias", "funding", "disclosure" ]
[ "We carried out a systematic literature search, limited to publications in English, for articles on BMI and survival in women with breast cancer in OVID MEDLINE and EMBASE from inception to 30 June 2013 using the search strategy implemented for the WCRF/AICR Continuous Update Project on breast cancer survival. The search strategy contained medical subject headings and text words that covered a broad range of factors on diet, physical activity, and anthropometry. The protocol for the review is available at http://www.dietandcancerreport.org/index.php [17]. In addition, we hand-searched the reference lists of relevant articles, reviews, and meta-analysis papers.", "Included were follow-up studies of breast cancer survivors, which reported estimates of the associations of BMI ascertained before and after breast cancer diagnosis with total or cause-specific mortality risks. Studies that investigated BMI after diagnosis were divided into two groups: BMI <12 months after diagnosis (BMI <12 months) and BMI 12 months or more after diagnosis (BMI ≥12 months). Outcomes included total mortality, breast cancer mortality, death from cardiovascular disease, and death from causes other than breast cancer. When multiple publications on the same study population were found, results based on longer follow-up and more outcomes were selected for the meta-analysis.", "DSMC, TN, and DA conducted the search. DSMC, ARV, and DNR extracted the study characteristics, tumour-related information, cancer treatment, timing and method of weight and height assessment, BMI levels, number of outcomes and population at-risk, outcome type, estimates of association and their measure of variance [95% confidence interval (CI) or P value], and adjustment factors in the analysis.", "Categorical and dose–response meta-analyses were conducted using random-effects models to account for between-study heterogeneity [18]. Summary relative risks (RRs) were estimated using the average of the natural logarithm of the RRs of each study weighted by the inverse of the variance and then unweighted by applying a random-effects variance component which is derived from the extent of variability of the effect sizes of the studies. The maximally adjusted RR estimates were used for the meta-analysis except for the follow-up of randomised, controlled trials [19, 20] where unadjusted results were also included, as these studies mostly involved a more homogeneous study population. BMI or Quetelet's Index (QI) measured in units of kg/m2 was used.\nWe conducted categorical meta-analyses by pooling the categorical results reported in the studies. The studies used different BMI categories. In some studies, underweight (BMI <18.5 kg/m2 according to WHO international classification) and normal weight women (BMI 18.5–<25.0 kg/m2) were classified together but, in some studies, they were classified separately. Similarly, most studies classified overweight (BMI 25.0–<30.0 kg/m2) and obese (BMI ≥30.0 kg/m2) women separately but, in some studies, overweight and obese women were combined. The reference category was normal weight or underweight together with normal weight, depending on the studies. For convenience, the BMI categories are referred to as underweight, normal weight, overweight, and obese in the present review. We derived the RRs for overweight and obese women compared with normal weight women in two studies [19, 21] that had more than four BMI categories using the method of Hamling et al. [22]. 
Studies that reported results for obese compared with non-obese women were analysed separately.\nThe non-linear dose–response relationship between BMI and mortality was examined using the best-fitting second-order fractional polynomial regression model [23], defined as the one with the lowest deviance. Non-linearity was tested using the likelihood ratio test [24]. In the non-linear meta-analysis, the reference category was the lowest BMI category in each study and RRs were recalculated using the method of Hamling et al. [22] when the reference category was not the lowest BMI category in the study.\nWe also conducted linear dose–response meta-analyses, excluding the category underweight when reported separately in the studies, by pooling estimates of RR per unit increase (with its standard error) provided by the studies, or derived by us from categorical data using generalised least-squares for trend estimation [25]. To estimate the trend, the numbers of outcomes and population at-risk for at least three BMI categories, or the information required to derive them using standard methods [26], and means or medians of the BMI categories, or if not reported in the studies, the estimated midpoints of the categories had to be available. When the extreme BMI categories were open-ended, we used the width of the adjacent close-ended category to estimate the midpoints. Where the RRs were presented by subgroups (age group [27], menopausal status [28, 29], stage [30] or subtype [31] of breast cancer, or others [32–34]), an overall estimate for the study was obtained by a fixed-effect model before pooling in the meta-analysis. We estimated the risk increase of death for an increment of 5 kg/m2 of BMI.\nTo assess heterogeneity, we computed the Cochran Q test and I2 statistic [35]. The cut points of 30% and 50% were used for low, moderate, and substantial level of heterogeneity. Sources of heterogeneity were explored by meta-regression and subgroup analyses using pre-defined factors, including indicators of study quality (menopausal status, hormone receptor status, number of outcomes, length of follow-up, study design, geographic location, BMI assessment, adjustment for confounders, and others). Small study or publication bias was examined by Egger's test [36] and visual inspection of the funnel plots. The influence of each individual study on the summary RR was examined by excluding the study in turn [37]. A P value of <0.05 was considered statistically significant. All analyses were conducted using Stata version 12.1 (Stata Statistical Software: Release 12, StataCorp LP, College Station, TX).", " categorical meta-analysis For BMI before diagnosis, compared with normal weight women, the summary RRs were 1.41 (95% CI 1.29–1.53, 21 studies) for obese women, 1.07 (95% CI 1.02–1.12, 19 studies) for overweight women, and 1.10 (95% CI 0.92–1.31, 10 studies) for underweight women (Figure 2). For BMI <12 months after diagnosis and the same comparisons, the summary RRs were 1.23 (95% CI 1.12–1.33, 24 studies) for obese women, 1.07 (95% CI 1.02–1.12, 22 studies) for overweight women, and 1.25 (95% CI 0.99–1.57, 11 studies) for underweight women (supplementary Figure S1, available at Annals of Oncology online). Substantial heterogeneities were observed between studies of obese women and underweight women (I2 = 69%, P < 0.01; I2 = 63%, P < 0.01, respectively). 
For BMI ≥12 months after diagnosis, the summary RRs were 1.21 (95% CI 1.06–1.38, 5 studies) for obese women, 0.98 (95% CI 0.86–1.11, 4 studies) for overweight women, and 1.29 (95% CI 1.02–1.63, 3 studies) for underweight women (supplementary Figure S2, available at Annals of Oncology online). Twelve additional studies reported results for obese versus non-obese women <12 months after diagnosis, and the summary RR was 1.26 (95% CI 1.07–1.47, I2 = 80%, P < 0.01).Figure 2.Categorical meta-analysis of pre-diagnosis BMI and total mortality.\nCategorical meta-analysis of pre-diagnosis BMI and total mortality.\nFor BMI before diagnosis, compared with normal weight women, the summary RRs were 1.41 (95% CI 1.29–1.53, 21 studies) for obese women, 1.07 (95% CI 1.02–1.12, 19 studies) for overweight women, and 1.10 (95% CI 0.92–1.31, 10 studies) for underweight women (Figure 2). For BMI <12 months after diagnosis and the same comparisons, the summary RRs were 1.23 (95% CI 1.12–1.33, 24 studies) for obese women, 1.07 (95% CI 1.02–1.12, 22 studies) for overweight women, and 1.25 (95% CI 0.99–1.57, 11 studies) for underweight women (supplementary Figure S1, available at Annals of Oncology online). Substantial heterogeneities were observed between studies of obese women and underweight women (I2 = 69%, P < 0.01; I2 = 63%, P < 0.01, respectively). For BMI ≥12 months after diagnosis, the summary RRs were 1.21 (95% CI 1.06–1.38, 5 studies) for obese women, 0.98 (95% CI 0.86–1.11, 4 studies) for overweight women, and 1.29 (95% CI 1.02–1.63, 3 studies) for underweight women (supplementary Figure S2, available at Annals of Oncology online). Twelve additional studies reported results for obese versus non-obese women <12 months after diagnosis, and the summary RR was 1.26 (95% CI 1.07–1.47, I2 = 80%, P < 0.01).Figure 2.Categorical meta-analysis of pre-diagnosis BMI and total mortality.\nCategorical meta-analysis of pre-diagnosis BMI and total mortality.\n dose–response meta-analysis There was evidence of a J-shaped association in the non-linear dose–response meta-analyses of BMI before and after diagnosis with total mortality (all P < 0.01; Figure 3), suggesting that underweight women may be at slightly increased risk compared with normal weight women. The curves show linear increasing trends from 20 kg/m2 for BMI before diagnosis and <12 months after diagnosis, and from 25 kg/m2 for BMI ≥12 months after diagnosis. When linear models were fitted excluding the underweight category, the summary RRs of total mortality for each 5 kg/m2 increase in BMI were 1.17 (95% CI 1.13–1.21, 15 studies, 6358 deaths), 1.11 (95% CI 1.06–1.16, 12 studies, 6020 deaths), and 1.08 (95% CI 1.01–1.15, 4 studies, 1703 deaths) for BMI before, <12 months after, and ≥12 months after diagnosis, respectively (Figure 4). Substantial heterogeneity was observed between studies on BMI <12 months after diagnosis (I2 = 55%, P = 0.01).Figure 3.Non-linear dose–response curves of BMI and mortality.Figure 4.Linear dose–response meta-analysis of BMI and total mortality.\nNon-linear dose–response curves of BMI and mortality.\nLinear dose–response meta-analysis of BMI and total mortality.\nThere was evidence of a J-shaped association in the non-linear dose–response meta-analyses of BMI before and after diagnosis with total mortality (all P < 0.01; Figure 3), suggesting that underweight women may be at slightly increased risk compared with normal weight women. 
The curves show linear increasing trends from 20 kg/m2 for BMI before diagnosis and <12 months after diagnosis, and from 25 kg/m2 for BMI ≥12 months after diagnosis. When linear models were fitted excluding the underweight category, the summary RRs of total mortality for each 5 kg/m2 increase in BMI were 1.17 (95% CI 1.13–1.21, 15 studies, 6358 deaths), 1.11 (95% CI 1.06–1.16, 12 studies, 6020 deaths), and 1.08 (95% CI 1.01–1.15, 4 studies, 1703 deaths) for BMI before, <12 months after, and ≥12 months after diagnosis, respectively (Figure 4). Substantial heterogeneity was observed between studies on BMI <12 months after diagnosis (I2 = 55%, P = 0.01).Figure 3.Non-linear dose–response curves of BMI and mortality.Figure 4.Linear dose–response meta-analysis of BMI and total mortality.\nNon-linear dose–response curves of BMI and mortality.\nLinear dose–response meta-analysis of BMI and total mortality.", "For BMI before diagnosis, compared with normal weight women, the summary RRs were 1.41 (95% CI 1.29–1.53, 21 studies) for obese women, 1.07 (95% CI 1.02–1.12, 19 studies) for overweight women, and 1.10 (95% CI 0.92–1.31, 10 studies) for underweight women (Figure 2). For BMI <12 months after diagnosis and the same comparisons, the summary RRs were 1.23 (95% CI 1.12–1.33, 24 studies) for obese women, 1.07 (95% CI 1.02–1.12, 22 studies) for overweight women, and 1.25 (95% CI 0.99–1.57, 11 studies) for underweight women (supplementary Figure S1, available at Annals of Oncology online). Substantial heterogeneities were observed between studies of obese women and underweight women (I2 = 69%, P < 0.01; I2 = 63%, P < 0.01, respectively). For BMI ≥12 months after diagnosis, the summary RRs were 1.21 (95% CI 1.06–1.38, 5 studies) for obese women, 0.98 (95% CI 0.86–1.11, 4 studies) for overweight women, and 1.29 (95% CI 1.02–1.63, 3 studies) for underweight women (supplementary Figure S2, available at Annals of Oncology online). Twelve additional studies reported results for obese versus non-obese women <12 months after diagnosis, and the summary RR was 1.26 (95% CI 1.07–1.47, I2 = 80%, P < 0.01).Figure 2.Categorical meta-analysis of pre-diagnosis BMI and total mortality.\nCategorical meta-analysis of pre-diagnosis BMI and total mortality.", "There was evidence of a J-shaped association in the non-linear dose–response meta-analyses of BMI before and after diagnosis with total mortality (all P < 0.01; Figure 3), suggesting that underweight women may be at slightly increased risk compared with normal weight women. The curves show linear increasing trends from 20 kg/m2 for BMI before diagnosis and <12 months after diagnosis, and from 25 kg/m2 for BMI ≥12 months after diagnosis. When linear models were fitted excluding the underweight category, the summary RRs of total mortality for each 5 kg/m2 increase in BMI were 1.17 (95% CI 1.13–1.21, 15 studies, 6358 deaths), 1.11 (95% CI 1.06–1.16, 12 studies, 6020 deaths), and 1.08 (95% CI 1.01–1.15, 4 studies, 1703 deaths) for BMI before, <12 months after, and ≥12 months after diagnosis, respectively (Figure 4). Substantial heterogeneity was observed between studies on BMI <12 months after diagnosis (I2 = 55%, P = 0.01).Figure 3.Non-linear dose–response curves of BMI and mortality.Figure 4.Linear dose–response meta-analysis of BMI and total mortality.\nNon-linear dose–response curves of BMI and mortality.\nLinear dose–response meta-analysis of BMI and total mortality.", " categorical meta-analysis BMI was significantly associated with breast cancer mortality. 
Compared with normal weight women, for BMI before diagnosis, the summary RRs were 1.35 (95% CI 1.24–1.47, 22 studies) for obese women, 1.11 (95% CI 1.06–1.17, 21 studies) for overweight women, and 1.02 (95% CI 0.85–1.21, 8 studies) for underweight women (Figure 5). For BMI <12 months after diagnosis, the summary RRs were 1.25 (95% CI 1.10–1.42, 12 studies) for obese women, 1.11 (95% CI 1.03–1.20, 12 studies) for overweight women, and 1.53 (95% CI 1.27–1.83, 5 studies) for underweight women (supplementary Figure S3, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women (I2 = 53%, P = 0.02). For BMI ≥12 months after diagnosis, the summary RRs of the two studies identified were 1.68 (95% CI 0.90–3.15) for obese women and 1.37 (95% CI 0.96–1.95) for overweight women (supplementary Figure S4, available at Annals of Oncology online). The summary of another six studies that reported RRs for obese versus non-obese <12 months after diagnosis was 1.26 (95% CI 1.05–1.51, I2 = 64%, P = 0.02).Figure 5.Categorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\nCategorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\nBMI was significantly associated with breast cancer mortality. Compared with normal weight women, for BMI before diagnosis, the summary RRs were 1.35 (95% CI 1.24–1.47, 22 studies) for obese women, 1.11 (95% CI 1.06–1.17, 21 studies) for overweight women, and 1.02 (95% CI 0.85–1.21, 8 studies) for underweight women (Figure 5). For BMI <12 months after diagnosis, the summary RRs were 1.25 (95% CI 1.10–1.42, 12 studies) for obese women, 1.11 (95% CI 1.03–1.20, 12 studies) for overweight women, and 1.53 (95% CI 1.27–1.83, 5 studies) for underweight women (supplementary Figure S3, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women (I2 = 53%, P = 0.02). For BMI ≥12 months after diagnosis, the summary RRs of the two studies identified were 1.68 (95% CI 0.90–3.15) for obese women and 1.37 (95% CI 0.96–1.95) for overweight women (supplementary Figure S4, available at Annals of Oncology online). The summary of another six studies that reported RRs for obese versus non-obese <12 months after diagnosis was 1.26 (95% CI 1.05–1.51, I2 = 64%, P = 0.02).Figure 5.Categorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\nCategorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\n dose–response meta-analysis There was no significant evidence of a non-linear relationship between BMI before, <12 months after, and ≥12 months after diagnosis and breast cancer mortality (P = 0.21, P = 1.00, P = 0.86, respectively) (Figure 3). When linear models were fitted excluding data from the underweight category, statistically significant increased risks of breast cancer mortality with BMI before and <12 months after diagnosis were observed (Figure 6). The summary RRs for each 5 kg/m2 increase were 1.18 (95% CI 1.12–1.25, 18 studies, 5262 breast cancer deaths) for BMI before diagnosis and 1.14 (95% CI 1.05–1.24, 8 studies, 3857 breast cancer deaths) for BMI <12 months after diagnosis, with moderate (I2 = 47%, P = 0.01) and substantial (I2 = 66%, P = 0.01) heterogeneities between studies, respectively. Only two studies on BMI ≥12 months after diagnosis and breast cancer mortality (N = 220 deaths) were identified. 
The summary RR was 1.29 (95% CI 0.97–1.72).Figure 6.Linear dose–response meta-analysis of BMI and breast cancer mortality.\nLinear dose–response meta-analysis of BMI and breast cancer mortality.\nThere was no significant evidence of a non-linear relationship between BMI before, <12 months after, and ≥12 months after diagnosis and breast cancer mortality (P = 0.21, P = 1.00, P = 0.86, respectively) (Figure 3). When linear models were fitted excluding data from the underweight category, statistically significant increased risks of breast cancer mortality with BMI before and <12 months after diagnosis were observed (Figure 6). The summary RRs for each 5 kg/m2 increase were 1.18 (95% CI 1.12–1.25, 18 studies, 5262 breast cancer deaths) for BMI before diagnosis and 1.14 (95% CI 1.05–1.24, 8 studies, 3857 breast cancer deaths) for BMI <12 months after diagnosis, with moderate (I2 = 47%, P = 0.01) and substantial (I2 = 66%, P = 0.01) heterogeneities between studies, respectively. Only two studies on BMI ≥12 months after diagnosis and breast cancer mortality (N = 220 deaths) were identified. The summary RR was 1.29 (95% CI 0.97–1.72).Figure 6.Linear dose–response meta-analysis of BMI and breast cancer mortality.\nLinear dose–response meta-analysis of BMI and breast cancer mortality.", "BMI was significantly associated with breast cancer mortality. Compared with normal weight women, for BMI before diagnosis, the summary RRs were 1.35 (95% CI 1.24–1.47, 22 studies) for obese women, 1.11 (95% CI 1.06–1.17, 21 studies) for overweight women, and 1.02 (95% CI 0.85–1.21, 8 studies) for underweight women (Figure 5). For BMI <12 months after diagnosis, the summary RRs were 1.25 (95% CI 1.10–1.42, 12 studies) for obese women, 1.11 (95% CI 1.03–1.20, 12 studies) for overweight women, and 1.53 (95% CI 1.27–1.83, 5 studies) for underweight women (supplementary Figure S3, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women (I2 = 53%, P = 0.02). For BMI ≥12 months after diagnosis, the summary RRs of the two studies identified were 1.68 (95% CI 0.90–3.15) for obese women and 1.37 (95% CI 0.96–1.95) for overweight women (supplementary Figure S4, available at Annals of Oncology online). The summary of another six studies that reported RRs for obese versus non-obese <12 months after diagnosis was 1.26 (95% CI 1.05–1.51, I2 = 64%, P = 0.02).Figure 5.Categorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\nCategorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.", "There was no significant evidence of a non-linear relationship between BMI before, <12 months after, and ≥12 months after diagnosis and breast cancer mortality (P = 0.21, P = 1.00, P = 0.86, respectively) (Figure 3). When linear models were fitted excluding data from the underweight category, statistically significant increased risks of breast cancer mortality with BMI before and <12 months after diagnosis were observed (Figure 6). The summary RRs for each 5 kg/m2 increase were 1.18 (95% CI 1.12–1.25, 18 studies, 5262 breast cancer deaths) for BMI before diagnosis and 1.14 (95% CI 1.05–1.24, 8 studies, 3857 breast cancer deaths) for BMI <12 months after diagnosis, with moderate (I2 = 47%, P = 0.01) and substantial (I2 = 66%, P = 0.01) heterogeneities between studies, respectively. Only two studies on BMI ≥12 months after diagnosis and breast cancer mortality (N = 220 deaths) were identified. 
BMI and other mortality outcomes
Only two studies reported results for death from cardiovascular disease (N = 151 deaths) [27, 112]. The summary RR for obese versus normal weight before diagnosis was 1.60 (95% CI 0.66–3.87). No association was observed for overweight versus normal weight (summary RR = 1.01, 95% CI 0.80–1.29). For each 5 kg/m2 increase in BMI, the summary RR was 1.21 (95% CI 0.83–1.77). Five studies reported results for deaths from any cause other than breast cancer (N = 2704 deaths) [21, 34, 108, 113, 114]. Compared with normal weight women, the summary RRs were 1.29 (95% CI 0.99–1.68, I2 = 72%, P = 0.01) for obese women and 0.96 (95% CI 0.83–1.11, I2 = 26%, P = 0.25) for overweight women.

subgroup, meta-regression, and sensitivity analyses
The results of the subgroup and meta-regression analyses are shown in supplementary Tables S3 and S4, available at Annals of Oncology online. Subgroup analysis was not carried out for BMI ≥12 months after diagnosis because the limited number of studies would hinder any meaningful comparison.
Increased risks of mortality were observed in the meta-analyses by menopausal status. Although the summary risk estimates appeared stronger for premenopausal breast cancer, there was no significant heterogeneity between pre- and post-menopausal breast cancer in the meta-regression analyses (P = 0.28–0.89) (supplementary Tables S3 and S4, available at Annals of Oncology online). For BMI before diagnosis and total mortality, the summary RRs for obese versus normal weight were 1.75 (95% CI 1.26–2.41, I2 = 70%, P < 0.01, 7 studies) in women with pre-menopausal breast cancer and 1.34 (95% CI 1.18–1.53, I2 = 27%, P = 0.20, 9 studies) in women with post-menopausal breast cancer.
Studies with a larger number of deaths [105, 115], conducted in Europe [28, 115], or with weight and height assessed through medical records [28, 104, 115, 116] tended to report weaker associations between BMI <12 months after diagnosis and total mortality than other studies (meta-regression P = 0.01, 0.02, and 0.01, respectively) (supplementary Table S3, available at Annals of Oncology online), while studies with a larger number of deaths [101], conducted in Asia [101, 102], or adjusted for co-morbidity [101, 102] reported weaker associations between BMI <12 months after diagnosis and breast cancer mortality (meta-regression P = 0.01, 0.02, and 0.01, respectively) (supplementary Table S4, available at Annals of Oncology online).
Analyses stratified by study design, or restricted to studies with invasive cases only, early-stage non-metastatic cases only, or mammography screening-detected cases only, or controlled for previous diseases, did not produce results materially different from those obtained in the overall analyses (results not shown). Summary risk estimates remained statistically significant when each study was omitted in turn, except for BMI ≥12 months after diagnosis and total mortality: the summary RR was 1.06 (95% CI 0.98–1.15) per 5 kg/m2 increase when Flatt et al. [117], which contributed 315 deaths, was omitted.

small studies or publication bias
Asymmetry was detected only in the funnel plots of BMI <12 months after diagnosis with total mortality and with breast cancer mortality, which suggests that small studies with an inverse association are missing (plots not shown). Egger's tests were borderline significant (P = 0.05) and statistically significant (P = 0.03), respectively.
funding
This work was supported by the World Cancer Research Fund International (grant number: 2007/SP01) (http://www.wcrf-uk.org/). The funder of this study had no role in the decisions about the design and conduct of the study; the collection, management, analysis, or interpretation of the data; or the preparation, review, or approval of the manuscript. The views expressed in this review are the opinions of the authors. They may not represent the views of the World Cancer Research Fund International/American Institute for Cancer Research and may differ from those in future updates of the evidence related to food, nutrition, physical activity, and cancer survival.

disclosure
DCG reports personal fees from World Cancer Research Fund/American Institute for Cancer Research during the conduct of the study, and grants from Danone and from Kelloggs outside the submitted work. AM reports personal fees from Metagenics/Metaproteomics and from Pfizer outside the submitted work. All remaining authors have declared no conflicts of interest.
[ "introduction", "materials and methods", "data sources and search", "study selection", "data extraction", "statistical analysis", "results", "BMI and total mortality", "categorical meta-analysis", "dose–response meta-analysis", "BMI and breast cancer mortality", "categorical meta-analysis", "dose–response meta-analysis", "BMI and other mortality outcomes", "subgroup, meta-regression, and sensitivity analyses", "small studies or publication bias", "discussion", "funding", "disclosure", "Supplementary Material" ]
[ "The number of female breast cancer survivors is growing because of longer survival as a consequence of advances in treatment and early diagnosis. There were ∼2.6 million female breast cancer survivors in US in 2008 [1], and in the UK, breast cancer accounted for ∼28% of the 2 million cancer survivors in 2008 [2].\nObesity is a pandemic health concern, with over 500 million adults worldwide estimated to be obese and 958 million were overweight in 2008 [3]. One of the established risk factors for breast cancer development in post-menopausal women is obesity [4], which has further been linked to breast cancer recurrence [5] and poorer survival in pre- and post-menopausal breast cancer [6, 7]. Preliminary findings from randomised, controlled trials suggest that lifestyle modifications improved biomarkers associated with breast cancer progression and overall survival [8].\nThe biological mechanisms underlying the association between obesity and breast cancer survival are not established, and could involve interacting mediators of hormones, adipocytokines, and inflammatory cytokines which link to cell survival or apoptosis, migration, and proliferation [9]. Higher level of oestradiol produced in postmenopausal women through aromatisation of androgens in the adipose tissues [10], and higher level of insulin [11], a condition common in obese women, are linked to poorer prognosis in breast cancer. A possible interaction between leptin and insulin [12], and obesity-related markers of inflammation [13] have also been linked to breast cancer outcomes. Non-biological mechanisms could include chemotherapy under-dosing in obese women, suboptimal treatment, and obesity-related complications [14].\nNumerous studies have examined the relationship between obesity and breast cancer outcomes, and past reviews have concluded that obesity is linked to a lower survival; however, when investigated in a meta-analysis of published data, only the results of obese compared with non-obese or lighter women were summarised [6, 7, 15].\nWe carried out a systematic literature review and meta-analysis of published studies to explore the magnitude and the shape of the association between body fatness, as measured by body mass index (BMI), and the risk of total and cause-specific mortality, overall and in women with pre- and post-menopausal breast cancer. As body weight may change close to diagnosis and during primary treatment of breast cancer [16], we examined BMI in three periods: before diagnosis, <12 months after diagnosis, and ≥12 months after breast cancer diagnosis.", " data sources and search We carried out a systematic literature search, limited to publications in English, for articles on BMI and survival in women with breast cancer in OVID MEDLINE and EMBASE from inception to 30 June 2013 using the search strategy implemented for the WCRF/AICR Continuous Update Project on breast cancer survival. The search strategy contained medical subject headings and text words that covered a broad range of factors on diet, physical activity, and anthropometry. The protocol for the review is available at http://www.dietandcancerreport.org/index.php [17]. 
In addition, we hand-searched the reference lists of relevant articles, reviews, and meta-analysis papers.\nWe carried out a systematic literature search, limited to publications in English, for articles on BMI and survival in women with breast cancer in OVID MEDLINE and EMBASE from inception to 30 June 2013 using the search strategy implemented for the WCRF/AICR Continuous Update Project on breast cancer survival. The search strategy contained medical subject headings and text words that covered a broad range of factors on diet, physical activity, and anthropometry. The protocol for the review is available at http://www.dietandcancerreport.org/index.php [17]. In addition, we hand-searched the reference lists of relevant articles, reviews, and meta-analysis papers.\n study selection Included were follow-up studies of breast cancer survivors, which reported estimates of the associations of BMI ascertained before and after breast cancer diagnosis with total or cause-specific mortality risks. Studies that investigated BMI after diagnosis were divided into two groups: BMI <12 months after diagnosis (BMI <12 months) and BMI 12 months or more after diagnosis (BMI ≥12 months). Outcomes included total mortality, breast cancer mortality, death from cardiovascular disease, and death from causes other than breast cancer. When multiple publications on the same study population were found, results based on longer follow-up and more outcomes were selected for the meta-analysis.\nIncluded were follow-up studies of breast cancer survivors, which reported estimates of the associations of BMI ascertained before and after breast cancer diagnosis with total or cause-specific mortality risks. Studies that investigated BMI after diagnosis were divided into two groups: BMI <12 months after diagnosis (BMI <12 months) and BMI 12 months or more after diagnosis (BMI ≥12 months). Outcomes included total mortality, breast cancer mortality, death from cardiovascular disease, and death from causes other than breast cancer. When multiple publications on the same study population were found, results based on longer follow-up and more outcomes were selected for the meta-analysis.\n data extraction DSMC, TN, and DA conducted the search. DSMC, ARV, and DNR extracted the study characteristics, tumour-related information, cancer treatment, timing and method of weight and height assessment, BMI levels, number of outcomes and population at-risk, outcome type, estimates of association and their measure of variance [95% confidence interval (CI) or P value], and adjustment factors in the analysis.\nDSMC, TN, and DA conducted the search. DSMC, ARV, and DNR extracted the study characteristics, tumour-related information, cancer treatment, timing and method of weight and height assessment, BMI levels, number of outcomes and population at-risk, outcome type, estimates of association and their measure of variance [95% confidence interval (CI) or P value], and adjustment factors in the analysis.\n statistical analysis Categorical and dose–response meta-analyses were conducted using random-effects models to account for between-study heterogeneity [18]. Summary relative risks (RRs) were estimated using the average of the natural logarithm of the RRs of each study weighted by the inverse of the variance and then unweighted by applying a random-effects variance component which is derived from the extent of variability of the effect sizes of the studies. 
statistical analysis
Categorical and dose–response meta-analyses were conducted using random-effects models to account for between-study heterogeneity [18]. Summary relative risks (RRs) were estimated by averaging the natural logarithms of the study-specific RRs, with each study weighted by the inverse of the sum of its variance and a random-effects variance component derived from the extent of variability of the effect sizes across studies. The maximally adjusted RR estimates were used for the meta-analysis, except for the follow-up of randomised, controlled trials [19, 20], where unadjusted results were also included, as these studies mostly involved a more homogeneous study population. BMI or Quetelet's Index (QI) measured in units of kg/m2 was used.
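As a concrete illustration of this pooling step, the sketch below implements inverse-variance random-effects pooling with a DerSimonian-Laird between-study variance, assuming each study supplies an RR and its 95% CI. It is a simplified stand-in for the Stata routines actually used, and the example inputs are invented.

```python
import math

def random_effects_summary(rrs, ci_lows, ci_highs):
    """Pool study RRs with a DerSimonian-Laird random-effects model."""
    # Each study contributes log(RR); its variance is recovered from the 95% CI:
    # se = (log(upper) - log(lower)) / (2 * 1.96)
    logs = [math.log(r) for r in rrs]
    variances = [((math.log(hi) - math.log(lo)) / (2 * 1.96)) ** 2
                 for lo, hi in zip(ci_lows, ci_highs)]
    w_fixed = [1 / v for v in variances]

    # Fixed-effect pooled log RR and Cochran's Q
    pooled_fixed = sum(w * y for w, y in zip(w_fixed, logs)) / sum(w_fixed)
    q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, logs))

    # DerSimonian-Laird between-study variance (tau^2)
    c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - (len(rrs) - 1)) / c)

    # Random-effects weights add tau^2 to each within-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(w * y for w, y in zip(w_re, logs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(pooled), math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

# Invented study results (RR with 95% CI) for demonstration only
print(random_effects_summary([1.4, 1.2, 1.6], [1.1, 0.9, 1.2], [1.8, 1.6, 2.1]))
```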
We conducted categorical meta-analyses by pooling the categorical results reported in the studies. The studies used different BMI categories: in some, underweight women (BMI <18.5 kg/m2 according to the WHO international classification) and normal weight women (BMI 18.5–<25.0 kg/m2) were classified together, while in others they were classified separately. Similarly, most studies classified overweight (BMI 25.0–<30.0 kg/m2) and obese (BMI ≥30.0 kg/m2) women separately, but in some studies overweight and obese women were combined. The reference category was normal weight, or underweight together with normal weight, depending on the study. For convenience, the BMI categories are referred to as underweight, normal weight, overweight, and obese in the present review. We derived the RRs for overweight and obese women compared with normal weight women in two studies [19, 21] that had more than four BMI categories using the method of Hamling et al. [22]. Studies that reported results for obese compared with non-obese women were analysed separately.
The non-linear dose–response relationship between BMI and mortality was examined using the best-fitting second-order fractional polynomial regression model [23], defined as the one with the lowest deviance. Non-linearity was tested using the likelihood ratio test [24]. In the non-linear meta-analysis, the reference category was the lowest BMI category in each study, and RRs were recalculated using the method of Hamling et al. [22] when the reference category was not the lowest BMI category in the study.
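The following is a simplified, single-stage sketch of how a best-fitting second-order fractional polynomial can be selected over the conventional power grid. The actual dose–response analysis is a two-stage meta-analysis that also accounts for the covariance of estimates sharing a reference group; the data points, reference value, and weights below are invented.

```python
import numpy as np

# Invented dose-response points: BMI level, log RR versus the lowest
# category (reference at 21 kg/m2), and inverse-variance weights
bmi = np.array([21.0, 24.0, 27.0, 31.0, 36.0])
log_rr = np.array([0.00, 0.03, 0.08, 0.20, 0.33])
weights = np.array([0.0, 70.0, 60.0, 40.0, 20.0])  # reference point carries no weight

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # power 0 is taken as log(x), as usual for FP models
REF = bmi[0]

def fp_term(x, p):
    return np.log(x) if p == 0 else x ** p

def deviance_fp2(p1, p2):
    """Weighted RSS of a second-order fractional polynomial constrained to RR = 1 at the reference."""
    t1 = fp_term(bmi, p1) - fp_term(REF, p1)
    t2 = fp_term(bmi, p2) - fp_term(REF, p2)
    if p1 == p2:  # repeated-power convention: second term gains an extra log(x) factor
        t2 = fp_term(bmi, p1) * np.log(bmi) - fp_term(REF, p1) * np.log(REF)
    X = np.column_stack([t1, t2])
    sw = np.sqrt(weights)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], log_rr * sw, rcond=None)
    resid = log_rr - X @ beta
    return float(np.sum(weights * resid ** 2)), beta

# Pick the power pair with the lowest (weighted RSS) deviance
best_p1, best_p2 = min(((p1, p2) for p1 in POWERS for p2 in POWERS if p1 <= p2),
                       key=lambda pq: deviance_fp2(*pq)[0])
print("best-fitting powers:", best_p1, best_p2)
```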
We also conducted linear dose–response meta-analyses, excluding the underweight category when it was reported separately, by pooling estimates of RR per unit increase (with their standard errors) provided by the studies or derived by us from categorical data using generalised least-squares for trend estimation [25]. To estimate the trend, the numbers of outcomes and population at risk for at least three BMI categories, or the information required to derive them using standard methods [26], and the means or medians of the BMI categories (or, if not reported, the estimated midpoints of the categories) had to be available. When the extreme BMI categories were open-ended, we used the width of the adjacent close-ended category to estimate the midpoints. Where RRs were presented by subgroups (age group [27], menopausal status [28, 29], stage [30] or subtype [31] of breast cancer, or others [32–34]), an overall estimate for the study was obtained with a fixed-effect model before pooling in the meta-analysis. We estimated the increase in risk of death for an increment of 5 kg/m2 of BMI.
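A minimal sketch of deriving a per-5 kg/m2 trend from the categorical results of a single (hypothetical) study is shown below. It uses simple weighted least squares through the reference category and therefore ignores the within-study correlation that the generalised least-squares method of Greenland and Longnecker handles; all numbers are invented.

```python
import numpy as np

# Hypothetical categorical results from one study: category midpoint BMI,
# RR versus the normal-weight reference, and 95% CI bounds
mid_bmi = np.array([22.0, 27.0, 32.0, 37.0])
rr      = np.array([1.00, 1.08, 1.25, 1.40])
ci_low  = np.array([1.00, 0.95, 1.05, 1.10])
ci_high = np.array([1.00, 1.22, 1.49, 1.78])

log_rr = np.log(rr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = np.zeros_like(se)
w[1:] = 1 / se[1:] ** 2          # the reference category carries no independent information

# Weighted least-squares slope through the reference point (no intercept)
x = mid_bmi - mid_bmi[0]
slope = np.sum(w * x * log_rr) / np.sum(w * x ** 2)
slope_se = np.sqrt(1 / np.sum(w * x ** 2))

rr_per_5 = np.exp(5 * slope)
print(f"RR per 5 kg/m2: {rr_per_5:.2f} "
      f"({np.exp(5 * (slope - 1.96 * slope_se)):.2f}-{np.exp(5 * (slope + 1.96 * slope_se)):.2f})")
```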
To assess heterogeneity, we computed the Cochran Q test and the I2 statistic [35]. Cut points of 30% and 50% were used to denote low, moderate, and substantial heterogeneity. Sources of heterogeneity were explored by meta-regression and subgroup analyses using pre-defined factors, including indicators of study quality (menopausal status, hormone receptor status, number of outcomes, length of follow-up, study design, geographic location, BMI assessment, adjustment for confounders, and others). Small-study effects or publication bias were examined by Egger's test [36] and visual inspection of the funnel plots. The influence of each individual study on the summary RR was examined by excluding each study in turn [37]. A P value of <0.05 was considered statistically significant. All analyses were conducted using Stata version 12.1 (Stata Statistical Software: Release 12, StataCorp LP, College Station, TX).
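For illustration, the sketch below computes Cochran's Q, I2, a simplified Egger regression intercept, and leave-one-out summary estimates from study-level log RRs and standard errors. It is not the authors' code; the Egger step returns only the intercept (its standard error and P value are omitted for brevity), and the leave-one-out step uses a fixed-effect pool to keep the example short.

```python
import math

def heterogeneity_and_egger(log_rr, se):
    """Cochran's Q, I-squared (%), and Egger's regression intercept for a set of studies."""
    w = [1 / s ** 2 for s in se]
    pooled = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, log_rr))
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Egger's test: regress the standardised effect (y/se) on precision (1/se);
    # an intercept far from zero suggests small-study effects / funnel asymmetry
    x = [1 / s for s in se]
    z = [y / s for y, s in zip(log_rr, se)]
    n = len(x)
    x_bar, z_bar = sum(x) / n, sum(z) / n
    slope = (sum((xi - x_bar) * (zi - z_bar) for xi, zi in zip(x, z)) /
             sum((xi - x_bar) ** 2 for xi in x))
    intercept = z_bar - slope * x_bar
    return q, i2, intercept

def leave_one_out(log_rr, se):
    """Re-pool the studies leaving out each one in turn (fixed-effect, for brevity)."""
    summaries = []
    for i in range(len(log_rr)):
        y = [v for j, v in enumerate(log_rr) if j != i]
        s = [v for j, v in enumerate(se) if j != i]
        w = [1 / v ** 2 for v in s]
        summaries.append(math.exp(sum(wi * yi for wi, yi in zip(w, y)) / sum(w)))
    return summaries
```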
results
A total of 124 publications investigating the relationship of body fatness and mortality in women with breast cancer were identified. We excluded 31 publications, including four publications on other obesity indices [38–41], 12 publications without a measure of association [42–53], and 15 publications superseded by publications of the same study with more outcomes [54–68]. A further 14 publications were excluded because of insufficient data for the meta-analysis (five publications [69–73]) or unadjusted results (nine publications [74–82]); of these, nine publications reported a statistically significant increased risk of total, breast cancer, or non-breast cancer mortality in obese women (before or <12 months after diagnosis) compared with the reference BMI [69, 71–74, 76, 77, 79, 82], two reported non-significant inverse associations [75, 80], and three reported no association [70, 78, 81] of BMI with survival after breast cancer. Hence, 79 publications from 82 follow-up studies, with 41 477 deaths (23 182 from breast cancer) in 213 075 breast cancer survivors, were included in the meta-analyses (Figure 1). Supplementary Table S1, available at Annals of Oncology online, shows the characteristics of the studies included in the meta-analyses; details of the excluded studies are in supplementary Table S2, available at Annals of Oncology online.
Results of the meta-analyses are summarised in Table 1.

Table 1. Summary of meta-analyses of BMI and survival in women with breast cancer(a)

Total mortality
  Underweight versus normal weight: before diagnosis, N = 10, RR 1.10 (0.92–1.31), I2 = 48%, Ph = 0.04; <12 months after diagnosis, N = 11, RR 1.25 (0.99–1.57), I2 = 63%, Ph < 0.01; ≥12 months after diagnosis, N = 3, RR 1.29 (1.02–1.63), I2 = 0%, Ph = 0.39
  Overweight versus normal weight: before, N = 19, RR 1.07 (1.02–1.12), I2 = 0%, Ph = 0.88; <12 months, N = 22, RR 1.07 (1.02–1.12), I2 = 21%, Ph = 0.18; ≥12 months, N = 4, RR 0.98 (0.86–1.11), I2 = 0%, Ph = 0.72
  Obese versus normal weight: before, N = 21, RR 1.41 (1.29–1.53), I2 = 38%, Ph = 0.04; <12 months, N = 24, RR 1.23 (1.12–1.33), I2 = 69%, Ph < 0.01; ≥12 months, N = 5, RR 1.21 (1.06–1.38), I2 = 0%, Ph = 0.70
  Obese versus non-obese: <12 months, N = 12, RR 1.26 (1.07–1.47), I2 = 80%, Ph < 0.01 (no studies for the other exposure periods)
  Per 5 kg/m2 increase: before, N = 15, RR 1.17 (1.13–1.21), I2 = 7%, Ph = 0.38; <12 months, N = 12, RR 1.11 (1.06–1.16), I2 = 55%, Ph = 0.01; ≥12 months, N = 4, RR 1.08 (1.01–1.15), I2 = 0%, Ph = 0.52

Breast cancer mortality
  Underweight versus normal weight: before, N = 8, RR 1.02 (0.85–1.21), I2 = 31%, Ph = 0.18; <12 months, N = 5, RR 1.53 (1.27–1.83), I2 = 0%, Ph = 0.59; ≥12 months, N = 1, RR 1.10 (0.15–8.08)
  Overweight versus normal weight: before, N = 21, RR 1.11 (1.06–1.17), I2 = 0%, Ph = 0.66; <12 months, N = 12, RR 1.11 (1.03–1.20), I2 = 14%, Ph = 0.31; ≥12 months, N = 2, RR 1.37 (0.96–1.95), I2 = 0%, Ph = 0.90
  Obese versus normal weight: before, N = 22, RR 1.35 (1.24–1.47), I2 = 36%, Ph = 0.05; <12 months, N = 12, RR 1.25 (1.10–1.42), I2 = 53%, Ph = 0.02; ≥12 months, N = 2, RR 1.68 (0.90–3.15), I2 = 67%, Ph = 0.08
  Obese versus non-obese: <12 months, N = 6, RR 1.26 (1.05–1.51), I2 = 64%, Ph = 0.02 (no studies for the other exposure periods)
  Per 5 kg/m2 increase: before, N = 18, RR 1.18 (1.12–1.25), I2 = 47%, Ph = 0.01; <12 months, N = 8, RR 1.14 (1.05–1.24), I2 = 66%, Ph = 0.01; ≥12 months, N = 2, RR 1.29 (0.97–1.72), I2 = 64%, Ph = 0.10

Cardiovascular disease related mortality (BMI before diagnosis only)
  Overweight versus normal weight: N = 2, RR 1.01 (0.80–1.29), I2 = 0%, Ph = 0.87
  Obese versus normal weight: N = 2, RR 1.60 (0.66–3.87), I2 = 78%, Ph = 0.03
  Per 5 kg/m2 increase: N = 2, RR 1.21 (0.83–1.77), I2 = 80%, Ph = 0.03

Non-breast cancer mortality (BMI <12 months after diagnosis only)
  Overweight versus normal weight: N = 5, RR 0.96 (0.83–1.11), I2 = 26%, Ph = 0.25
  Obese versus normal weight: N = 5, RR 1.29 (0.99–1.68), I2 = 72%, Ph = 0.01

(a) BMI before and after diagnosis (<12 months after, or ≥12 months after diagnosis) was classified according to the exposure period that the studies referred to in the BMI assessment; the BMI categories were included in the categorical meta-analyses as defined by the studies. N, number of studies; RR, summary relative risk (95% CI); Ph, P for heterogeneity between studies.

Figure 1. Flowchart of search.

Studies were follow-up of women with breast cancer identified in prospective aetiologic cohort studies (women free of cancer at enrolment), cohorts of breast cancer survivors identified in hospitals or through cancer registries, or follow-up of breast cancer patients enrolled in case–control studies or randomised clinical trials.
Some studies included only premenopausal women [83–85] or postmenopausal women [21, 27, 86–94], but most studies included both. Menopausal status was usually determined at the time of diagnosis. Year of diagnosis ranged from 1957–1965 [70] to 2002–2009 [74]. Patient tumour characteristics and stage of disease at diagnosis varied across studies, and some studies included carcinoma in situ. Not all studies provided clinical information on the tumour, treatment, and co-morbidities.
Most of the studies were based in North America or Europe. There were three studies from each of Australia [79, 95, 96], Korea [97, 98] and China [99–101]; two studies from Japan [71, 102]; one study from Tunisia [103]; and four international studies [19, 104–106]. Study size ranged from 96 [107] to 24 698 patients [97].
Total number of deaths ranged from 56 [93] to 7397 [108], and the proportion of deaths from breast cancer ranged from 22% [27] to 98% [84] when reported. All but eight studies [30, 93, 94, 98, 99, 109–111] had an average follow-up of more than 5 years.

BMI and total mortality
categorical meta-analysis
For BMI before diagnosis, compared with normal weight women, the summary RRs were 1.41 (95% CI 1.29–1.53, 21 studies) for obese women, 1.07 (95% CI 1.02–1.12, 19 studies) for overweight women, and 1.10 (95% CI 0.92–1.31, 10 studies) for underweight women (Figure 2). For BMI <12 months after diagnosis and the same comparisons, the summary RRs were 1.23 (95% CI 1.12–1.33, 24 studies) for obese women, 1.07 (95% CI 1.02–1.12, 22 studies) for overweight women, and 1.25 (95% CI 0.99–1.57, 11 studies) for underweight women (supplementary Figure S1, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women and of underweight women (I2 = 69%, P < 0.01 and I2 = 63%, P < 0.01, respectively). For BMI ≥12 months after diagnosis, the summary RRs were 1.21 (95% CI 1.06–1.38, 5 studies) for obese women, 0.98 (95% CI 0.86–1.11, 4 studies) for overweight women, and 1.29 (95% CI 1.02–1.63, 3 studies) for underweight women (supplementary Figure S2, available at Annals of Oncology online). Twelve additional studies reported results for obese versus non-obese women <12 months after diagnosis; the summary RR was 1.26 (95% CI 1.07–1.47, I2 = 80%, P < 0.01).

Figure 2. Categorical meta-analysis of pre-diagnosis BMI and total mortality.
dose–response meta-analysis
There was evidence of a J-shaped association in the non-linear dose–response meta-analyses of BMI before and after diagnosis with total mortality (all P < 0.01; Figure 3), suggesting that underweight women may be at slightly increased risk compared with normal weight women. The curves show linear increasing trends from 20 kg/m2 for BMI before diagnosis and <12 months after diagnosis, and from 25 kg/m2 for BMI ≥12 months after diagnosis. When linear models were fitted excluding the underweight category, the summary RRs of total mortality for each 5 kg/m2 increase in BMI were 1.17 (95% CI 1.13–1.21, 15 studies, 6358 deaths), 1.11 (95% CI 1.06–1.16, 12 studies, 6020 deaths), and 1.08 (95% CI 1.01–1.15, 4 studies, 1703 deaths) for BMI before, <12 months after, and ≥12 months after diagnosis, respectively (Figure 4). Substantial heterogeneity was observed between studies on BMI <12 months after diagnosis (I2 = 55%, P = 0.01).

Figure 3. Non-linear dose–response curves of BMI and mortality.
Figure 4. Linear dose–response meta-analysis of BMI and total mortality.
When linear models were fitted excluding the underweight category, the summary RRs of total mortality for each 5 kg/m2 increase in BMI were 1.17 (95% CI 1.13–1.21, 15 studies, 6358 deaths), 1.11 (95% CI 1.06–1.16, 12 studies, 6020 deaths), and 1.08 (95% CI 1.01–1.15, 4 studies, 1703 deaths) for BMI before, <12 months after, and ≥12 months after diagnosis, respectively (Figure 4). Substantial heterogeneity was observed between studies on BMI <12 months after diagnosis (I2 = 55%, P = 0.01).Figure 3.Non-linear dose–response curves of BMI and mortality.Figure 4.Linear dose–response meta-analysis of BMI and total mortality.\nNon-linear dose–response curves of BMI and mortality.\nLinear dose–response meta-analysis of BMI and total mortality.\n BMI and breast cancer mortality categorical meta-analysis BMI was significantly associated with breast cancer mortality. Compared with normal weight women, for BMI before diagnosis, the summary RRs were 1.35 (95% CI 1.24–1.47, 22 studies) for obese women, 1.11 (95% CI 1.06–1.17, 21 studies) for overweight women, and 1.02 (95% CI 0.85–1.21, 8 studies) for underweight women (Figure 5). For BMI <12 months after diagnosis, the summary RRs were 1.25 (95% CI 1.10–1.42, 12 studies) for obese women, 1.11 (95% CI 1.03–1.20, 12 studies) for overweight women, and 1.53 (95% CI 1.27–1.83, 5 studies) for underweight women (supplementary Figure S3, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women (I2 = 53%, P = 0.02). For BMI ≥12 months after diagnosis, the summary RRs of the two studies identified were 1.68 (95% CI 0.90–3.15) for obese women and 1.37 (95% CI 0.96–1.95) for overweight women (supplementary Figure S4, available at Annals of Oncology online). The summary of another six studies that reported RRs for obese versus non-obese <12 months after diagnosis was 1.26 (95% CI 1.05–1.51, I2 = 64%, P = 0.02).Figure 5.Categorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\nCategorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\nBMI was significantly associated with breast cancer mortality. Compared with normal weight women, for BMI before diagnosis, the summary RRs were 1.35 (95% CI 1.24–1.47, 22 studies) for obese women, 1.11 (95% CI 1.06–1.17, 21 studies) for overweight women, and 1.02 (95% CI 0.85–1.21, 8 studies) for underweight women (Figure 5). For BMI <12 months after diagnosis, the summary RRs were 1.25 (95% CI 1.10–1.42, 12 studies) for obese women, 1.11 (95% CI 1.03–1.20, 12 studies) for overweight women, and 1.53 (95% CI 1.27–1.83, 5 studies) for underweight women (supplementary Figure S3, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women (I2 = 53%, P = 0.02). For BMI ≥12 months after diagnosis, the summary RRs of the two studies identified were 1.68 (95% CI 0.90–3.15) for obese women and 1.37 (95% CI 0.96–1.95) for overweight women (supplementary Figure S4, available at Annals of Oncology online). 
The summary of another six studies that reported RRs for obese versus non-obese <12 months after diagnosis was 1.26 (95% CI 1.05–1.51, I2 = 64%, P = 0.02).Figure 5.Categorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\nCategorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\n dose–response meta-analysis There was no significant evidence of a non-linear relationship between BMI before, <12 months after, and ≥12 months after diagnosis and breast cancer mortality (P = 0.21, P = 1.00, P = 0.86, respectively) (Figure 3). When linear models were fitted excluding data from the underweight category, statistically significant increased risks of breast cancer mortality with BMI before and <12 months after diagnosis were observed (Figure 6). The summary RRs for each 5 kg/m2 increase were 1.18 (95% CI 1.12–1.25, 18 studies, 5262 breast cancer deaths) for BMI before diagnosis and 1.14 (95% CI 1.05–1.24, 8 studies, 3857 breast cancer deaths) for BMI <12 months after diagnosis, with moderate (I2 = 47%, P = 0.01) and substantial (I2 = 66%, P = 0.01) heterogeneities between studies, respectively. Only two studies on BMI ≥12 months after diagnosis and breast cancer mortality (N = 220 deaths) were identified. The summary RR was 1.29 (95% CI 0.97–1.72).Figure 6.Linear dose–response meta-analysis of BMI and breast cancer mortality.\nLinear dose–response meta-analysis of BMI and breast cancer mortality.\nThere was no significant evidence of a non-linear relationship between BMI before, <12 months after, and ≥12 months after diagnosis and breast cancer mortality (P = 0.21, P = 1.00, P = 0.86, respectively) (Figure 3). When linear models were fitted excluding data from the underweight category, statistically significant increased risks of breast cancer mortality with BMI before and <12 months after diagnosis were observed (Figure 6). The summary RRs for each 5 kg/m2 increase were 1.18 (95% CI 1.12–1.25, 18 studies, 5262 breast cancer deaths) for BMI before diagnosis and 1.14 (95% CI 1.05–1.24, 8 studies, 3857 breast cancer deaths) for BMI <12 months after diagnosis, with moderate (I2 = 47%, P = 0.01) and substantial (I2 = 66%, P = 0.01) heterogeneities between studies, respectively. Only two studies on BMI ≥12 months after diagnosis and breast cancer mortality (N = 220 deaths) were identified. The summary RR was 1.29 (95% CI 0.97–1.72).Figure 6.Linear dose–response meta-analysis of BMI and breast cancer mortality.\nLinear dose–response meta-analysis of BMI and breast cancer mortality.\n categorical meta-analysis BMI was significantly associated with breast cancer mortality. Compared with normal weight women, for BMI before diagnosis, the summary RRs were 1.35 (95% CI 1.24–1.47, 22 studies) for obese women, 1.11 (95% CI 1.06–1.17, 21 studies) for overweight women, and 1.02 (95% CI 0.85–1.21, 8 studies) for underweight women (Figure 5). For BMI <12 months after diagnosis, the summary RRs were 1.25 (95% CI 1.10–1.42, 12 studies) for obese women, 1.11 (95% CI 1.03–1.20, 12 studies) for overweight women, and 1.53 (95% CI 1.27–1.83, 5 studies) for underweight women (supplementary Figure S3, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women (I2 = 53%, P = 0.02). For BMI ≥12 months after diagnosis, the summary RRs of the two studies identified were 1.68 (95% CI 0.90–3.15) for obese women and 1.37 (95% CI 0.96–1.95) for overweight women (supplementary Figure S4, available at Annals of Oncology online). 
The summary of another six studies that reported RRs for obese versus non-obese <12 months after diagnosis was 1.26 (95% CI 1.05–1.51, I2 = 64%, P = 0.02).Figure 5.Categorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\nCategorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\nBMI was significantly associated with breast cancer mortality. Compared with normal weight women, for BMI before diagnosis, the summary RRs were 1.35 (95% CI 1.24–1.47, 22 studies) for obese women, 1.11 (95% CI 1.06–1.17, 21 studies) for overweight women, and 1.02 (95% CI 0.85–1.21, 8 studies) for underweight women (Figure 5). For BMI <12 months after diagnosis, the summary RRs were 1.25 (95% CI 1.10–1.42, 12 studies) for obese women, 1.11 (95% CI 1.03–1.20, 12 studies) for overweight women, and 1.53 (95% CI 1.27–1.83, 5 studies) for underweight women (supplementary Figure S3, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women (I2 = 53%, P = 0.02). For BMI ≥12 months after diagnosis, the summary RRs of the two studies identified were 1.68 (95% CI 0.90–3.15) for obese women and 1.37 (95% CI 0.96–1.95) for overweight women (supplementary Figure S4, available at Annals of Oncology online). The summary of another six studies that reported RRs for obese versus non-obese <12 months after diagnosis was 1.26 (95% CI 1.05–1.51, I2 = 64%, P = 0.02).Figure 5.Categorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\nCategorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.\n dose–response meta-analysis There was no significant evidence of a non-linear relationship between BMI before, <12 months after, and ≥12 months after diagnosis and breast cancer mortality (P = 0.21, P = 1.00, P = 0.86, respectively) (Figure 3). When linear models were fitted excluding data from the underweight category, statistically significant increased risks of breast cancer mortality with BMI before and <12 months after diagnosis were observed (Figure 6). The summary RRs for each 5 kg/m2 increase were 1.18 (95% CI 1.12–1.25, 18 studies, 5262 breast cancer deaths) for BMI before diagnosis and 1.14 (95% CI 1.05–1.24, 8 studies, 3857 breast cancer deaths) for BMI <12 months after diagnosis, with moderate (I2 = 47%, P = 0.01) and substantial (I2 = 66%, P = 0.01) heterogeneities between studies, respectively. Only two studies on BMI ≥12 months after diagnosis and breast cancer mortality (N = 220 deaths) were identified. The summary RR was 1.29 (95% CI 0.97–1.72).Figure 6.Linear dose–response meta-analysis of BMI and breast cancer mortality.\nLinear dose–response meta-analysis of BMI and breast cancer mortality.\nThere was no significant evidence of a non-linear relationship between BMI before, <12 months after, and ≥12 months after diagnosis and breast cancer mortality (P = 0.21, P = 1.00, P = 0.86, respectively) (Figure 3). When linear models were fitted excluding data from the underweight category, statistically significant increased risks of breast cancer mortality with BMI before and <12 months after diagnosis were observed (Figure 6). The summary RRs for each 5 kg/m2 increase were 1.18 (95% CI 1.12–1.25, 18 studies, 5262 breast cancer deaths) for BMI before diagnosis and 1.14 (95% CI 1.05–1.24, 8 studies, 3857 breast cancer deaths) for BMI <12 months after diagnosis, with moderate (I2 = 47%, P = 0.01) and substantial (I2 = 66%, P = 0.01) heterogeneities between studies, respectively. 
Only two studies on BMI ≥12 months after diagnosis and breast cancer mortality (N = 220 deaths) were identified. The summary RR was 1.29 (95% CI 0.97–1.72).Figure 6.Linear dose–response meta-analysis of BMI and breast cancer mortality.\nLinear dose–response meta-analysis of BMI and breast cancer mortality.\n BMI and other mortality outcomes Only two studies reported results for death from cardiovascular disease (N = 151 deaths) [27, 112]. The summary RR for obese versus normal weight before diagnosis was 1.60 (95% CI 0.66–3.87). No association was observed for overweight versus normal weight (summary RR = 1.01, 95% CI 0.80–1.29). For each 5 kg/m2 increase in BMI, the summary RR was 1.21 (95% CI 0.83–1.77). Five studies reported results for deaths from any cause other than breast cancer (N = 2704 deaths) [21, 34, 108, 113, 114]. The summary RRs were 1.29 (95% CI 0.99–1.68, I2 = 72%, P = 0.01) for obese women, and 0.96 (95% CI 0.83–1.11, I2 = 26%, P = 0.25) for overweight women compared with normal weight women.\nOnly two studies reported results for death from cardiovascular disease (N = 151 deaths) [27, 112]. The summary RR for obese versus normal weight before diagnosis was 1.60 (95% CI 0.66–3.87). No association was observed for overweight versus normal weight (summary RR = 1.01, 95% CI 0.80–1.29). For each 5 kg/m2 increase in BMI, the summary RR was 1.21 (95% CI 0.83–1.77). Five studies reported results for deaths from any cause other than breast cancer (N = 2704 deaths) [21, 34, 108, 113, 114]. The summary RRs were 1.29 (95% CI 0.99–1.68, I2 = 72%, P = 0.01) for obese women, and 0.96 (95% CI 0.83–1.11, I2 = 26%, P = 0.25) for overweight women compared with normal weight women.\n subgroup, meta-regression, and sensitivity analyses The results of the subgroup and meta-regression analyses are in supplementary Tables S3 and S4, available at Annals of Oncology online. Subgroup analysis was not carried out for BMI ≥12 months after diagnosis as the limited number of studies would hinder any meaningful comparisons.\nIncreased risks of mortality were observed in the meta-analyses by menopausal status. While the summary risk estimates seem stronger with premenopausal breast cancer, there was no significant heterogeneity between pre- and post-menopausal breast cancer as shown in the meta-regression analyses (P = 0.28–0.89) (supplementary Tables S3 and S4, available at Annals of Oncology online). 
For BMI before diagnosis and total mortality, the summary RRs for obese versus normal weight were 1.75 (95% CI 1.26–2.41, I2 = 70%, P < 0.01, 7 studies) in women with pre-menopausal breast cancer and 1.34 (95% CI 1.18–1.53, I2 = 27%, P = 0.20, 9 studies) in women with post-menopausal breast cancer.\nStudies with larger number of deaths [105, 115], conducted in Europe [28, 115], or with weight and height assessed through medical records [28, 104, 115, 116] tended to report weaker associations for BMI <12 months after diagnosis and total mortality compared with other studies (meta-regression P = 0.01, 0.02, 0.01, respectively) (supplementary Table S3, available at Annals of Oncology online); while studies with larger number of deaths [101], conducted in Asia [101, 102], or adjusted for co-morbidity [101, 102] reported weaker associations for BMI <12 months after diagnosis and breast cancer mortality (meta-regression P = 0.01, 0.02, 0.01, respectively) (supplementary Table S4, available at Annals of Oncology online).\nAnalyses stratified by study designs, or restricted to studies with invasive cases only, early-stage non-metastatic cases only, or mammography screening detected cases only, or controlled for previous diseases did not produce results that were materially different from those obtained in the overall analyses (results not shown). Summary risk estimates remained statistically significant when each study was omitted in turn, except for BMI ≥12 months after diagnosis and total mortality. The summary RR was 1.06 (95% CI 0.98–1.15) per 5 kg/m2 increase when Flatt et al. [117] which contributed 315 deaths was omitted.\nThe results of the subgroup and meta-regression analyses are in supplementary Tables S3 and S4, available at Annals of Oncology online. Subgroup analysis was not carried out for BMI ≥12 months after diagnosis as the limited number of studies would hinder any meaningful comparisons.\nIncreased risks of mortality were observed in the meta-analyses by menopausal status. While the summary risk estimates seem stronger with premenopausal breast cancer, there was no significant heterogeneity between pre- and post-menopausal breast cancer as shown in the meta-regression analyses (P = 0.28–0.89) (supplementary Tables S3 and S4, available at Annals of Oncology online). 
For BMI before diagnosis and total mortality, the summary RRs for obese versus normal weight were 1.75 (95% CI 1.26–2.41, I2 = 70%, P < 0.01, 7 studies) in women with pre-menopausal breast cancer and 1.34 (95% CI 1.18–1.53, I2 = 27%, P = 0.20, 9 studies) in women with post-menopausal breast cancer.\nStudies with larger number of deaths [105, 115], conducted in Europe [28, 115], or with weight and height assessed through medical records [28, 104, 115, 116] tended to report weaker associations for BMI <12 months after diagnosis and total mortality compared with other studies (meta-regression P = 0.01, 0.02, 0.01, respectively) (supplementary Table S3, available at Annals of Oncology online); while studies with larger number of deaths [101], conducted in Asia [101, 102], or adjusted for co-morbidity [101, 102] reported weaker associations for BMI <12 months after diagnosis and breast cancer mortality (meta-regression P = 0.01, 0.02, 0.01, respectively) (supplementary Table S4, available at Annals of Oncology online).\nAnalyses stratified by study designs, or restricted to studies with invasive cases only, early-stage non-metastatic cases only, or mammography screening detected cases only, or controlled for previous diseases did not produce results that were materially different from those obtained in the overall analyses (results not shown). Summary risk estimates remained statistically significant when each study was omitted in turn, except for BMI ≥12 months after diagnosis and total mortality. The summary RR was 1.06 (95% CI 0.98–1.15) per 5 kg/m2 increase when Flatt et al. [117] which contributed 315 deaths was omitted.\n small studies or publication bias Asymmetry was only detected in the funnel plots of BMI <12 months after diagnosis and total mortality, and breast cancer mortality, which suggests that small studies with an inverse association are missing (plots not shown). Egger's tests were borderline significant (P = 0.05) or statistically significant (P = 0.03), respectively.\nAsymmetry was only detected in the funnel plots of BMI <12 months after diagnosis and total mortality, and breast cancer mortality, which suggests that small studies with an inverse association are missing (plots not shown). Egger's tests were borderline significant (P = 0.05) or statistically significant (P = 0.03), respectively.", " categorical meta-analysis For BMI before diagnosis, compared with normal weight women, the summary RRs were 1.41 (95% CI 1.29–1.53, 21 studies) for obese women, 1.07 (95% CI 1.02–1.12, 19 studies) for overweight women, and 1.10 (95% CI 0.92–1.31, 10 studies) for underweight women (Figure 2). For BMI <12 months after diagnosis and the same comparisons, the summary RRs were 1.23 (95% CI 1.12–1.33, 24 studies) for obese women, 1.07 (95% CI 1.02–1.12, 22 studies) for overweight women, and 1.25 (95% CI 0.99–1.57, 11 studies) for underweight women (supplementary Figure S1, available at Annals of Oncology online). Substantial heterogeneities were observed between studies of obese women and underweight women (I2 = 69%, P < 0.01; I2 = 63%, P < 0.01, respectively). For BMI ≥12 months after diagnosis, the summary RRs were 1.21 (95% CI 1.06–1.38, 5 studies) for obese women, 0.98 (95% CI 0.86–1.11, 4 studies) for overweight women, and 1.29 (95% CI 1.02–1.63, 3 studies) for underweight women (supplementary Figure S2, available at Annals of Oncology online). 
For BMI before diagnosis, compared with normal weight women, the summary RRs were 1.41 (95% CI 1.29–1.53, 21 studies) for obese women, 1.07 (95% CI 1.02–1.12, 19 studies) for overweight women, and 1.10 (95% CI 0.92–1.31, 10 studies) for underweight women (Figure 2). For BMI <12 months after diagnosis and the same comparisons, the summary RRs were 1.23 (95% CI 1.12–1.33, 24 studies) for obese women, 1.07 (95% CI 1.02–1.12, 22 studies) for overweight women, and 1.25 (95% CI 0.99–1.57, 11 studies) for underweight women (supplementary Figure S1, available at Annals of Oncology online). Substantial heterogeneity was observed among studies of obese women and among studies of underweight women (I2 = 69%, P < 0.01; I2 = 63%, P < 0.01, respectively). For BMI ≥12 months after diagnosis, the summary RRs were 1.21 (95% CI 1.06–1.38, 5 studies) for obese women, 0.98 (95% CI 0.86–1.11, 4 studies) for overweight women, and 1.29 (95% CI 1.02–1.63, 3 studies) for underweight women (supplementary Figure S2, available at Annals of Oncology online). Twelve additional studies reported results for obese versus non-obese women <12 months after diagnosis, and the summary RR was 1.26 (95% CI 1.07–1.47, I2 = 80%, P < 0.01).

Figure 2. Categorical meta-analysis of pre-diagnosis BMI and total mortality.

dose–response meta-analysis

There was evidence of a J-shaped association in the non-linear dose–response meta-analyses of BMI before and after diagnosis with total mortality (all P < 0.01; Figure 3), suggesting that underweight women may be at slightly increased risk compared with normal weight women. The curves show linear increasing trends from 20 kg/m2 for BMI before diagnosis and <12 months after diagnosis, and from 25 kg/m2 for BMI ≥12 months after diagnosis. When linear models were fitted excluding the underweight category, the summary RRs of total mortality for each 5 kg/m2 increase in BMI were 1.17 (95% CI 1.13–1.21, 15 studies, 6358 deaths), 1.11 (95% CI 1.06–1.16, 12 studies, 6020 deaths), and 1.08 (95% CI 1.01–1.15, 4 studies, 1703 deaths) for BMI before, <12 months after, and ≥12 months after diagnosis, respectively (Figure 4). Substantial heterogeneity was observed between studies on BMI <12 months after diagnosis (I2 = 55%, P = 0.01).

Figure 3. Non-linear dose–response curves of BMI and mortality.

Figure 4. Linear dose–response meta-analysis of BMI and total mortality.
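As a rough illustration of how a per 5 kg/m2 trend can be obtained from category-level results, the sketch below fits a weighted log-linear trend through the reference category for one hypothetical study. The midpoints, RRs, and standard errors are invented, and the review itself used generalised least-squares for trend estimation, which also accounts for the correlation between categories that share the same reference group; the version below ignores that correlation and is only an approximation.

import numpy as np

# One hypothetical study's category-level results (reference category: normal weight)
ref_mid = 22.0                             # assumed midpoint of the reference BMI category
bmi_mid = np.array([27.0, 32.0])           # midpoints of the overweight and obese categories
log_rr  = np.log(np.array([1.10, 1.35]))   # hypothetical category RRs versus the reference
se      = np.array([0.06, 0.10])           # hypothetical standard errors of the log RRs

x = bmi_mid - ref_mid                      # BMI difference from the reference midpoint
w = 1.0 / se**2                            # inverse-variance weights

# Weighted regression through the origin (log RR = 0 at the reference midpoint)
slope    = np.sum(w * x * log_rr) / np.sum(w * x**2)
slope_se = np.sqrt(1.0 / np.sum(w * x**2))   # valid only if the category estimates were independent

rr5  = np.exp(5 * slope)
low  = np.exp(5 * (slope - 1.96 * slope_se))
high = np.exp(5 * (slope + 1.96 * slope_se))
print(f"RR per 5 kg/m2: {rr5:.2f} (95% CI {low:.2f}-{high:.2f})")

Study-specific trends obtained in this way, or reported directly by the studies, were then pooled across studies with the random-effects models described in the methods.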
BMI and breast cancer mortality

categorical meta-analysis

BMI was significantly associated with breast cancer mortality.
Compared with normal weight women, for BMI before diagnosis, the summary RRs were 1.35 (95% CI 1.24–1.47, 22 studies) for obese women, 1.11 (95% CI 1.06–1.17, 21 studies) for overweight women, and 1.02 (95% CI 0.85–1.21, 8 studies) for underweight women (Figure 5). For BMI <12 months after diagnosis, the summary RRs were 1.25 (95% CI 1.10–1.42, 12 studies) for obese women, 1.11 (95% CI 1.03–1.20, 12 studies) for overweight women, and 1.53 (95% CI 1.27–1.83, 5 studies) for underweight women (supplementary Figure S3, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women (I2 = 53%, P = 0.02). For BMI ≥12 months after diagnosis, the summary RRs from the two studies identified were 1.68 (95% CI 0.90–3.15) for obese women and 1.37 (95% CI 0.96–1.95) for overweight women (supplementary Figure S4, available at Annals of Oncology online). The summary RR of another six studies that reported results for obese versus non-obese women <12 months after diagnosis was 1.26 (95% CI 1.05–1.51, I2 = 64%, P = 0.02).

Figure 5. Categorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.

dose–response meta-analysis

There was no significant evidence of a non-linear relationship between BMI before, <12 months after, or ≥12 months after diagnosis and breast cancer mortality (P = 0.21, P = 1.00, and P = 0.86, respectively) (Figure 3). When linear models were fitted excluding data from the underweight category, statistically significant increased risks of breast cancer mortality with BMI before and <12 months after diagnosis were observed (Figure 6). The summary RRs for each 5 kg/m2 increase were 1.18 (95% CI 1.12–1.25, 18 studies, 5262 breast cancer deaths) for BMI before diagnosis and 1.14 (95% CI 1.05–1.24, 8 studies, 3857 breast cancer deaths) for BMI <12 months after diagnosis, with moderate (I2 = 47%, P = 0.01) and substantial (I2 = 66%, P = 0.01) heterogeneity between studies, respectively. Only two studies on BMI ≥12 months after diagnosis and breast cancer mortality (N = 220 deaths) were identified; the summary RR was 1.29 (95% CI 0.97–1.72).

Figure 6. Linear dose–response meta-analysis of BMI and breast cancer mortality.
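Under the log-linear model assumed in these linear dose–response analyses, a summary RR expressed per 5 kg/m2 can be rescaled to any other BMI increment. The snippet below does this for the pre-diagnosis BMI and breast cancer mortality estimate of 1.18 per 5 kg/m2 reported above; the increments chosen are arbitrary and shown only for illustration.

import math

rr_per5 = 1.18   # pre-diagnosis BMI and breast cancer mortality, per 5 kg/m2 (from the text above)

def rr_for_increment(rr_per5: float, delta_bmi: float) -> float:
    """RR for a BMI difference of delta_bmi kg/m2, assuming log-linearity."""
    return math.exp(math.log(rr_per5) * delta_bmi / 5.0)

print(round(rr_for_increment(rr_per5, 1.0), 3))    # per 1 kg/m2, about 1.034
print(round(rr_for_increment(rr_per5, 10.0), 3))   # per 10 kg/m2, about 1.392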
Only two studies reported results for death from cardiovascular disease (N = 151 deaths) [27, 112]. The summary RR for obese versus normal weight before diagnosis was 1.60 (95% CI 0.66–3.87); no association was observed for overweight versus normal weight (summary RR = 1.01, 95% CI 0.80–1.29). For each 5 kg/m2 increase in BMI, the summary RR was 1.21 (95% CI 0.83–1.77). Five studies reported results for deaths from any cause other than breast cancer (N = 2704 deaths) [21, 34, 108, 113, 114]. The summary RRs were 1.29 (95% CI 0.99–1.68, I2 = 72%, P = 0.01) for obese women and 0.96 (95% CI 0.83–1.11, I2 = 26%, P = 0.25) for overweight women compared with normal weight women.
discussion

The present systematic literature review and meta-analysis of follow-up studies clearly supports that, in breast cancer survivors, higher BMI is consistently associated with lower overall and breast cancer survival, regardless of when BMI is ascertained. The limited number of studies on death from cardiovascular disease is also consistent with a positive association. For BMI before, <12 months after, and 12 months or more after breast cancer diagnosis, compared with normal weight women, obese women had 41%, 23%, and 21% higher risks of total mortality, and 35%, 25%, and 68% higher risks of breast cancer mortality, respectively. These findings were supported by the positive associations observed in the linear dose–response meta-analyses. All associations were statistically significant, apart from the relationship between BMI ≥12 months after diagnosis and breast cancer mortality; this may be due to limited statistical power, with only 220 breast cancer deaths from two follow-up studies.

Positive associations, in some cases statistically significant, were also observed in overweight and underweight women compared with normal weight women. Women with a BMI of 20 kg/m2 before, or <12 months after, diagnosis, and of 25 kg/m2 12 months or more after diagnosis, appeared to have the lowest mortality risk in the non-linear dose–response analyses. Co-morbid conditions may cause the observed increased risk in underweight women; a thorough investigation of this group, and of its contribution to the shape of the association, is hindered because not all studies in this review reported results for underweight women. The increased risk associated with obesity was similar in pre- and post-menopausal breast cancer. We did not find any evidence of a protective effect of obesity on survival after pre-menopausal breast cancer, contrary to what has been observed for the development of breast cancer in pre-menopausal women [4].

A large body of evidence, with 41 477 deaths (23 182 from breast cancer) in over 210 000 breast cancer survivors, was systematically reviewed in the present study. We carried out categorical, linear, and non-linear dose–response meta-analyses to examine the magnitude and the shape of the associations for total and cause-specific mortality in underweight, overweight, and obese women, by time periods before and after diagnosis that are important in relation to the population at risk and to breast cancer survivors. Our findings agree with and further extend the results from previous meta-analyses. A review published in 2010 reported statistically significant increased risks of 33% for both total and breast cancer mortality for obesity versus non-obesity around diagnosis [7]. These estimates are slightly higher than ours, which may be explained by the different search periods and inclusion criteria for the articles (33 and 15 studies included in the analyses, respectively). Another review published in 2012 further reported consistent positive associations of total and breast cancer mortality with higher versus lower BMI around diagnosis [6]; no significant differences were observed by menopausal status or hormone receptor status. The After Breast Cancer Pooling Project of four prospective cohort studies found differential effects of levels of pre-diagnosis obesity on survival [118].
Compared with normal weight women, significant or borderline significant increased risks of 81% for total and 40% for breast cancer mortality were observed only for morbidly obese (≥40 kg/m2) women and not for women in other obesity categories. We observed statistically significant increased risks also for overweight women, probably because of a larger number of studies. We were unable to investigate the associations in severely and morbidly obese women because only two studies included in this review reported such results [19, 113]. Overall, our findings are consistent with previous meta-analyses in showing elevated total and breast cancer mortality associated with higher BMI, and they support the current guidelines for breast cancer survivors to stay as lean as possible within the normal range of body weight [4], for overweight women to avoid weight gain during treatment, and for obese women to lose weight after treatment [119].

The present review is limited by the challenges and flaws encountered by the individual epidemiological studies evaluating the body fatness–mortality relationship in breast cancer survivors. Most studies did not adjust for co-morbidities or assess intentional weight loss. Women with more serious health issues, and especially smokers, may lose weight but are at an increased risk of mortality, and this might cause an apparent increased risk in underweight women. Information on body weight through the natural history of the disease and on treatment was usually incomplete or unavailable. An increase in body weight after diagnosis is common in women with breast cancer, particularly during chemotherapy [16]. Chemotherapy under-dosing is a common problem in obese women and may contribute to their increased mortality [120]. Although several studies with pre-diagnosis BMI adjusted for underlying illnesses or excluded the first few years of follow-up, reverse causation may have affected the results in studies that assessed BMI in women with cancer and other illnesses; however, in these studies, the associations were similar to those in other studies. A possible survival benefit (subjects with better prognostic factors survive longer and are therefore more likely to be included) may be present in the survival cohorts, in which the range of BMI could be narrower, and may cause an underestimation of the association.

Follow-up studies with variable characteristics were pooled in the meta-analysis. Women identified in clinical trials may have had specific tumour subtypes with fewer co-morbidities, and were more likely to receive protocol treatments with high treatment completion rates. Women recruited through mammography screening programmes may have had healthier lifestyles or better access to medical facilities, and were more likely to be diagnosed with in situ or early-stage breast cancer. Cancer detection methods, tumour classifications, and treatment regimens change over time, may vary within (if follow-up is long) and between studies, and could not simply be examined using the diagnosis or treatment date. We cannot rule out the effect of unmeasured or residual confounding in our analysis. Nevertheless, most results were adjusted for multiple confounding factors, including tumour stage or other related variables, and stratified analyses by several key factors showed similar summary risk estimates. Small-study or publication bias was observed in the analyses of BMI <12 months after diagnosis. However, the overall evidence is supported by large, well-designed studies and is unlikely to be changed.
We did not conduct analyses by race/ethnicity or treatment type because only a limited number of studies had published such results.

Future studies of body fatness and breast cancer outcomes should aim to account for co-morbidities, separate intended from unintended changes in body weight, and collect complete treatment information during study follow-up. Randomised clinical trials are needed to test interventions for weight loss and maintenance on survival in women with breast cancer.

In conclusion, the present systematic literature review and meta-analysis extends and confirms the associations of obesity with unfavourable overall and breast cancer survival in pre- and post-menopausal breast cancer, regardless of when BMI is ascertained. Increased risks of mortality in underweight and overweight women were also observed. Given the comparable elevated risks associated with obesity in the development (for post-menopausal women) and prognosis of breast cancer, and the complications with cancer treatment and other obesity-related co-morbidities, it is prudent to maintain a healthy body weight (BMI 18.5–<25.0 kg/m2) throughout life.

funding

This work was supported by the World Cancer Research Fund International (grant number: 2007/SP01) (http://www.wcrf-uk.org/). The funder of this study had no role in the decisions about the design and conduct of the study; collection, management, analysis, or interpretation of the data; or the preparation, review, or approval of the manuscript. The views expressed in this review are the opinions of the authors. They may not represent the views of the World Cancer Research Fund International/American Institute for Cancer Research and may differ from those in future updates of the evidence related to food, nutrition, physical activity, and cancer survival.

disclosure

DCG reports personal fees from World Cancer Research Fund/American Institute for Cancer Research, during the conduct of the study; grants from Danone and grants from Kelloggs, outside the submitted work. AM reports personal fees from Metagenics/Metaproteomics and personal fees from Pfizer, outside the submitted work. All remaining authors have declared no conflicts of interest.
[ "intro", "methods", null, null, null, null, "results", null, null, null, null, null, null, null, null, null, "discussion", null, null, "supplementary-material" ]
[ "body mass index", "meta-analysis", "survival after breast cancer", "systematic literature review" ]
introduction:

The number of female breast cancer survivors is growing because of longer survival as a consequence of advances in treatment and early diagnosis. There were ∼2.6 million female breast cancer survivors in the US in 2008 [1], and in the UK breast cancer accounted for ∼28% of the 2 million cancer survivors in 2008 [2]. Obesity is a pandemic health concern, with over 500 million adults worldwide estimated to be obese and 958 million overweight in 2008 [3]. One of the established risk factors for breast cancer development in post-menopausal women is obesity [4], which has further been linked to breast cancer recurrence [5] and poorer survival in pre- and post-menopausal breast cancer [6, 7]. Preliminary findings from randomised, controlled trials suggest that lifestyle modifications improved biomarkers associated with breast cancer progression and overall survival [8]. The biological mechanisms underlying the association between obesity and breast cancer survival are not established, and could involve interacting mediators of hormones, adipocytokines, and inflammatory cytokines linked to cell survival or apoptosis, migration, and proliferation [9]. A higher level of oestradiol, produced in post-menopausal women through aromatisation of androgens in adipose tissue [10], and a higher level of insulin [11], a condition common in obese women, are linked to poorer prognosis in breast cancer. A possible interaction between leptin and insulin [12], and obesity-related markers of inflammation [13], have also been linked to breast cancer outcomes. Non-biological mechanisms could include chemotherapy under-dosing in obese women, suboptimal treatment, and obesity-related complications [14]. Numerous studies have examined the relationship between obesity and breast cancer outcomes, and past reviews have concluded that obesity is linked to lower survival; however, when investigated in meta-analyses of published data, only the results of obese compared with non-obese or lighter women were summarised [6, 7, 15]. We carried out a systematic literature review and meta-analysis of published studies to explore the magnitude and the shape of the association between body fatness, as measured by body mass index (BMI), and the risk of total and cause-specific mortality, overall and in women with pre- and post-menopausal breast cancer. As body weight may change close to diagnosis and during primary treatment of breast cancer [16], we examined BMI in three periods: before diagnosis, <12 months after diagnosis, and ≥12 months after breast cancer diagnosis.

materials and methods:

data sources and search

We carried out a systematic literature search, limited to publications in English, for articles on BMI and survival in women with breast cancer in OVID MEDLINE and EMBASE from inception to 30 June 2013, using the search strategy implemented for the WCRF/AICR Continuous Update Project on breast cancer survival. The search strategy contained medical subject headings and text words that covered a broad range of factors on diet, physical activity, and anthropometry. The protocol for the review is available at http://www.dietandcancerreport.org/index.php [17]. In addition, we hand-searched the reference lists of relevant articles, reviews, and meta-analysis papers.
study selection

Included were follow-up studies of breast cancer survivors that reported estimates of the associations of BMI ascertained before or after breast cancer diagnosis with total or cause-specific mortality. Studies that investigated BMI after diagnosis were divided into two groups: BMI <12 months after diagnosis (BMI <12 months) and BMI 12 months or more after diagnosis (BMI ≥12 months). Outcomes included total mortality, breast cancer mortality, death from cardiovascular disease, and death from causes other than breast cancer. When multiple publications on the same study population were found, results based on longer follow-up and more outcomes were selected for the meta-analysis.

data extraction

DSMC, TN, and DA conducted the search. DSMC, ARV, and DNR extracted the study characteristics, tumour-related information, cancer treatment, timing and method of weight and height assessment, BMI levels, number of outcomes and population at risk, outcome type, estimates of association and their measure of variance [95% confidence interval (CI) or P value], and adjustment factors in the analysis.

statistical analysis

Categorical and dose–response meta-analyses were conducted using random-effects models to account for between-study heterogeneity [18]. Summary relative risks (RRs) were estimated as the weighted average of the natural logarithm of the study-specific RRs, with weights based on the inverse of the variance incorporating a random-effects variance component derived from the extent of variability of the effect sizes across studies.
The maximally adjusted RR estimates were used for the meta-analysis, except for the follow-up of randomised, controlled trials [19, 20], where unadjusted results were also included, as these studies mostly involved a more homogeneous study population. BMI or Quetelet's Index (QI), measured in kg/m2, was used. We conducted categorical meta-analyses by pooling the categorical results reported in the studies. The studies used different BMI categories: in some studies, underweight (BMI <18.5 kg/m2 according to the WHO international classification) and normal weight women (BMI 18.5–<25.0 kg/m2) were classified together, whereas in others they were classified separately. Similarly, most studies classified overweight (BMI 25.0–<30.0 kg/m2) and obese (BMI ≥30.0 kg/m2) women separately, but in some studies overweight and obese women were combined. The reference category was normal weight, or underweight together with normal weight, depending on the study. For convenience, the BMI categories are referred to as underweight, normal weight, overweight, and obese in the present review. We derived the RRs for overweight and obese women compared with normal weight women in two studies [19, 21] that had more than four BMI categories using the method of Hamling et al. [22]. Studies that reported results for obese compared with non-obese women were analysed separately. The non-linear dose–response relationship between BMI and mortality was examined using the best-fitting second-order fractional polynomial regression model [23], defined as the one with the lowest deviance. Non-linearity was tested using the likelihood ratio test [24]. In the non-linear meta-analysis, the reference category was the lowest BMI category in each study, and RRs were recalculated using the method of Hamling et al. [22] when the reference category was not the lowest BMI category in the study. We also conducted linear dose–response meta-analyses, excluding the underweight category when it was reported separately, by pooling estimates of RR per unit increase (with their standard errors) provided by the studies or derived by us from categorical data using generalised least-squares for trend estimation [25]. To estimate the trend, the numbers of outcomes and population at risk for at least three BMI categories (or the information required to derive them using standard methods [26]) and the means or medians of the BMI categories (or, if not reported, the estimated midpoints of the categories) had to be available. When the extreme BMI categories were open-ended, we used the width of the adjacent close-ended category to estimate the midpoints. Where RRs were presented by subgroup (age group [27], menopausal status [28, 29], stage [30] or subtype [31] of breast cancer, or others [32–34]), an overall estimate for the study was obtained with a fixed-effect model before pooling in the meta-analysis. We estimated the increase in risk of death for an increment of 5 kg/m2 of BMI. To assess heterogeneity, we computed the Cochran Q test and the I2 statistic [35]; cut points of 30% and 50% were used for low, moderate, and substantial levels of heterogeneity. Sources of heterogeneity were explored by meta-regression and subgroup analyses using pre-defined factors, including indicators of study quality (menopausal status, hormone receptor status, number of outcomes, length of follow-up, study design, geographic location, BMI assessment, adjustment for confounders, and others).
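For readers who want to see the pooling step in code, the sketch below computes a DerSimonian–Laird random-effects summary together with Cochran's Q and the I2 statistic. The study inputs are hypothetical, and this is a minimal illustration rather than a reproduction of the review's Stata analyses.

import numpy as np

# Hypothetical per 5 kg/m2 study results (not data from this review)
rr = np.array([1.45, 1.20, 1.60, 1.05, 1.35])
se = np.array([0.12, 0.09, 0.20, 0.10, 0.15])   # standard errors of the log RRs

y = np.log(rr)
w_fixed = 1.0 / se**2                            # inverse-variance (fixed-effect) weights

# Cochran's Q and I2 around the fixed-effect mean
mu_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
Q  = np.sum(w_fixed * (y - mu_fixed)**2)
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird estimate of the between-study variance tau^2
C    = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights 1 / (se^2 + tau^2) and pooled summary
w_re  = 1.0 / (se**2 + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"Q = {Q:.2f} on {df} df, I2 = {I2:.0f}%, tau2 = {tau2:.4f}")
print(f"summary RR = {np.exp(mu_re):.2f} "
      f"(95% CI {np.exp(mu_re - 1.96 * se_re):.2f}-{np.exp(mu_re + 1.96 * se_re):.2f})")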
Small-study or publication bias was examined by Egger's test [36] and visual inspection of the funnel plots. The influence of each individual study on the summary RR was examined by excluding each study in turn [37]. A P value of <0.05 was considered statistically significant. All analyses were conducted using Stata version 12.1 (Stata Statistical Software: Release 12, StataCorp LP, College Station, TX).
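The following sketch illustrates, again on hypothetical inputs and in Python rather than Stata, the two checks just described: Egger's regression test for funnel-plot asymmetry and a leave-one-out influence analysis. A fixed-effect summary is used in the influence loop only to keep the example short; the review applied the influence analysis to its random-effects summaries.

import numpy as np
from scipy import stats

# Hypothetical study results (not data from this review)
log_rr = np.log(np.array([1.45, 1.20, 1.60, 1.05, 1.35, 0.95]))
se     = np.array([0.12, 0.09, 0.25, 0.10, 0.15, 0.30])
n = len(log_rr)

# Egger's test: regress the standardised effect (log RR / SE) on precision (1 / SE);
# an intercept far from zero suggests funnel-plot asymmetry.
y = log_rr / se
x = 1.0 / se
slope, intercept, r, p_slope, slope_se = stats.linregress(x, y)
resid = y - (intercept + slope * x)
s2 = np.sum(resid**2) / (n - 2)
se_int = np.sqrt(s2 * (1.0 / n + np.mean(x)**2 / np.sum((x - np.mean(x))**2)))
p_egger = 2 * stats.t.sf(abs(intercept / se_int), df=n - 2)
print(f"Egger intercept = {intercept:.2f}, P = {p_egger:.2f}")

# Leave-one-out: recompute a fixed-effect summary omitting each study in turn
w = 1.0 / se**2
for i in range(n):
    keep = np.arange(n) != i
    mu = np.sum(w[keep] * log_rr[keep]) / np.sum(w[keep])
    print(f"omitting study {i + 1}: summary RR = {np.exp(mu):.2f}")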
results:

A total of 124 publications investigating the relationship of body fatness and mortality in women with breast cancer were identified. We excluded 31 publications: four publications on other obesity indices [38–41], 12 publications without a measure of association [42–53], and 15 publications superseded by publications of the same study with more outcomes [54–68]. A further 14 publications were excluded because of insufficient data for the meta-analysis (five publications [69–73]) or unadjusted results (nine publications [74–82]); of these, nine publications reported statistically significant increased risks of total, breast cancer, or non-breast cancer mortality in obese women (before or <12 months after diagnosis) compared with the reference BMI [69, 71–74, 76, 77, 79, 82], two reported non-significant inverse associations [75, 80], and three reported no association [70, 78, 81] of BMI with survival after breast cancer. Hence, 79 publications from 82 follow-up studies, with 41 477 deaths (23 182 from breast cancer) in 213 075 breast cancer survivors, were included in the meta-analyses (Figure 1). Supplementary Table S1, available at Annals of Oncology online, shows the characteristics of the studies included in the meta-analyses; details of the excluded studies are in supplementary Table S2, available at Annals of Oncology online.
Results of the meta-analyses are summarised in Table 1.

Table 1. Summary of meta-analyses of BMI and survival in women with breast cancer(a). Entries are N; RR (95% CI); I2; Ph, given for BMI before diagnosis / BMI <12 months after diagnosis / BMI ≥12 months after diagnosis.

Total mortality
  Under versus normal weight: 10; 1.10 (0.92–1.31); 48%; 0.04 / 11; 1.25 (0.99–1.57); 63%; <0.01 / 3; 1.29 (1.02–1.63); 0%; 0.39
  Over versus normal weight: 19; 1.07 (1.02–1.12); 0%; 0.88 / 22; 1.07 (1.02–1.12); 21%; 0.18 / 4; 0.98 (0.86–1.11); 0%; 0.72
  Obese versus normal weight: 21; 1.41 (1.29–1.53); 38%; 0.04 / 24; 1.23 (1.12–1.33); 69%; <0.01 / 5; 1.21 (1.06–1.38); 0%; 0.70
  Obese versus non-obese: – / 12; 1.26 (1.07–1.47); 80%; <0.01 / –
  Per 5 kg/m2 increase: 15; 1.17 (1.13–1.21); 7%; 0.38 / 12; 1.11 (1.06–1.16); 55%; 0.01 / 4; 1.08 (1.01–1.15); 0%; 0.52
Breast cancer mortality
  Under versus normal weight: 8; 1.02 (0.85–1.21); 31%; 0.18 / 5; 1.53 (1.27–1.83); 0%; 0.59 / 1; 1.10 (0.15–8.08); –; –
  Over versus normal weight: 21; 1.11 (1.06–1.17); 0%; 0.66 / 12; 1.11 (1.03–1.20); 14%; 0.31 / 2; 1.37 (0.96–1.95); 0%; 0.90
  Obese versus normal weight: 22; 1.35 (1.24–1.47); 36%; 0.05 / 12; 1.25 (1.10–1.42); 53%; 0.02 / 2; 1.68 (0.90–3.15); 67%; 0.08
  Obese versus non-obese: – / 6; 1.26 (1.05–1.51); 64%; 0.02 / –
  Per 5 kg/m2 increase: 18; 1.18 (1.12–1.25); 47%; 0.01 / 8; 1.14 (1.05–1.24); 66%; 0.01 / 2; 1.29 (0.97–1.72); 64%; 0.10
Cardiovascular disease related mortality
  Over versus normal weight: 2; 1.01 (0.80–1.29); 0%; 0.87 / – / –
  Obese versus normal weight: 2; 1.60 (0.66–3.87); 78%; 0.03 / – / –
  Per 5 kg/m2 increase: 2; 1.21 (0.83–1.77); 80%; 0.03 / – / –
Non-breast cancer mortality
  Over versus normal weight: – / 5; 0.96 (0.83–1.11); 26%; 0.25 / –
  Obese versus normal weight: – / 5; 1.29 (0.99–1.68); 72%; 0.01 / –

(a) BMI before and after diagnosis (<12 months after, or ≥12 months after diagnosis) was classified according to the exposure period to which the studies referred in the BMI assessment; the BMI categories were included in the categorical meta-analyses as defined by the studies. Ph, P for heterogeneity between studies.

Figure 1. Flowchart of search.

Studies were follow-ups of women with breast cancer identified in prospective aetiologic cohort studies (women were free of cancer at enrolment), cohorts of breast cancer survivors identified in hospitals or through cancer registries, or follow-ups of breast cancer patients enrolled in case–control studies or randomised clinical trials. Some studies included only premenopausal women [83–85] or only postmenopausal women [21, 27, 86–94], but most included both. Menopausal status was usually determined at the time of diagnosis. Year of diagnosis ranged from 1957–1965 [70] to 2002–2009 [74]. Patient and tumour characteristics and stage of disease at diagnosis varied across studies, and some studies included carcinoma in situ. Not all studies provided clinical information on the tumour, treatment, and co-morbidities. Most of the studies were based in North America or Europe. There were three studies from each of Australia [79, 95, 96], Korea [97, 98], and China [99–101]; two studies from Japan [71, 102]; one study from Tunisia [103]; and four international studies [19, 104–106]. Study size ranged from 96 [107] to 24 698 patients [97].
The total number of deaths ranged from 56 [93] to 7397 [108], and the proportion of deaths from breast cancer ranged from 22% [27] to 98% [84], when reported. All but eight studies [30, 93, 94, 98, 99, 109–111] had an average follow-up of more than 5 years.

BMI and total mortality

categorical meta-analysis

For BMI before diagnosis, compared with normal weight women, the summary RRs were 1.41 (95% CI 1.29–1.53, 21 studies) for obese women, 1.07 (95% CI 1.02–1.12, 19 studies) for overweight women, and 1.10 (95% CI 0.92–1.31, 10 studies) for underweight women (Figure 2). For BMI <12 months after diagnosis and the same comparisons, the summary RRs were 1.23 (95% CI 1.12–1.33, 24 studies) for obese women, 1.07 (95% CI 1.02–1.12, 22 studies) for overweight women, and 1.25 (95% CI 0.99–1.57, 11 studies) for underweight women (supplementary Figure S1, available at Annals of Oncology online). Substantial heterogeneity was observed among studies of obese women and among studies of underweight women (I2 = 69%, P < 0.01; I2 = 63%, P < 0.01, respectively). For BMI ≥12 months after diagnosis, the summary RRs were 1.21 (95% CI 1.06–1.38, 5 studies) for obese women, 0.98 (95% CI 0.86–1.11, 4 studies) for overweight women, and 1.29 (95% CI 1.02–1.63, 3 studies) for underweight women (supplementary Figure S2, available at Annals of Oncology online). Twelve additional studies reported results for obese versus non-obese women <12 months after diagnosis, and the summary RR was 1.26 (95% CI 1.07–1.47, I2 = 80%, P < 0.01).

Figure 2. Categorical meta-analysis of pre-diagnosis BMI and total mortality.

dose–response meta-analysis

There was evidence of a J-shaped association in the non-linear dose–response meta-analyses of BMI before and after diagnosis with total mortality (all P < 0.01; Figure 3), suggesting that underweight women may be at slightly increased risk compared with normal weight women. The curves show linear increasing trends from 20 kg/m2 for BMI before diagnosis and <12 months after diagnosis, and from 25 kg/m2 for BMI ≥12 months after diagnosis.
When linear models were fitted excluding the underweight category, the summary RRs of total mortality for each 5 kg/m2 increase in BMI were 1.17 (95% CI 1.13–1.21, 15 studies, 6358 deaths), 1.11 (95% CI 1.06–1.16, 12 studies, 6020 deaths), and 1.08 (95% CI 1.01–1.15, 4 studies, 1703 deaths) for BMI before, <12 months after, and ≥12 months after diagnosis, respectively (Figure 4). Substantial heterogeneity was observed between studies on BMI <12 months after diagnosis (I2 = 55%, P = 0.01).

Figure 3. Non-linear dose–response curves of BMI and mortality.

Figure 4. Linear dose–response meta-analysis of BMI and total mortality.
BMI and breast cancer mortality: categorical meta-analysis
BMI was significantly associated with breast cancer mortality. Compared with normal weight women, for BMI before diagnosis, the summary RRs were 1.35 (95% CI 1.24–1.47, 22 studies) for obese women, 1.11 (95% CI 1.06–1.17, 21 studies) for overweight women, and 1.02 (95% CI 0.85–1.21, 8 studies) for underweight women (Figure 5). For BMI <12 months after diagnosis, the summary RRs were 1.25 (95% CI 1.10–1.42, 12 studies) for obese women, 1.11 (95% CI 1.03–1.20, 12 studies) for overweight women, and 1.53 (95% CI 1.27–1.83, 5 studies) for underweight women (supplementary Figure S3, available at Annals of Oncology online). Substantial heterogeneity was observed between studies of obese women (I2 = 53%, P = 0.02). For BMI ≥12 months after diagnosis, the summary RRs of the two studies identified were 1.68 (95% CI 0.90–3.15) for obese women and 1.37 (95% CI 0.96–1.95) for overweight women (supplementary Figure S4, available at Annals of Oncology online). The summary RR of another six studies that reported results for obese versus non-obese women <12 months after diagnosis was 1.26 (95% CI 1.05–1.51, I2 = 64%, P = 0.02).
Figure 5. Categorical meta-analysis of pre-diagnosis BMI and breast cancer mortality.
dose–response meta-analysis
There was no significant evidence of a non-linear relationship between BMI before, <12 months after, and ≥12 months after diagnosis and breast cancer mortality (P = 0.21, P = 1.00, and P = 0.86, respectively; Figure 3). When linear models were fitted excluding data from the underweight category, statistically significant increased risks of breast cancer mortality with BMI before and <12 months after diagnosis were observed (Figure 6). The summary RRs for each 5 kg/m2 increase were 1.18 (95% CI 1.12–1.25, 18 studies, 5262 breast cancer deaths) for BMI before diagnosis and 1.14 (95% CI 1.05–1.24, 8 studies, 3857 breast cancer deaths) for BMI <12 months after diagnosis, with moderate (I2 = 47%, P = 0.01) and substantial (I2 = 66%, P = 0.01) heterogeneity between studies, respectively. Only two studies on BMI ≥12 months after diagnosis and breast cancer mortality (N = 220 deaths) were identified; the summary RR was 1.29 (95% CI 0.97–1.72).
Figure 6. Linear dose–response meta-analysis of BMI and breast cancer mortality.
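The per-5 kg/m2 estimates above come from dose–response meta-analysis of category-specific RRs. A simplified sketch of the within-study step is shown below: it fits the log RRs of the non-referent BMI categories against the distance of their midpoints from the referent midpoint and rescales the slope to a 5 kg/m2 increment. The published analyses would typically use generalized least squares for trend (Greenland–Longnecker), which also accounts for the correlation among categories sharing the same referent; that refinement is omitted here, and all input numbers are illustrative.

```python
import numpy as np

def rr_per_5_units(ref_midpoint, midpoints, rr, lo, hi):
    """Simplified within-study dose-response slope, rescaled to 5 kg/m^2.

    midpoints, rr, lo, hi describe the non-referent BMI categories; the
    referent category is represented only by its midpoint. The fit is an
    inverse-variance weighted regression of log(RR) on exposure, through
    the origin (log RR = 0 at the referent by definition).
    """
    x = np.asarray(midpoints, float) - ref_midpoint
    y = np.log(np.asarray(rr, float))
    se = (np.log(np.asarray(hi, float)) -
          np.log(np.asarray(lo, float))) / (2 * 1.96)
    w = 1.0 / se ** 2
    slope = np.sum(w * x * y) / np.sum(w * x ** 2)
    return float(np.exp(5.0 * slope))


# Illustrative (made-up) category-specific results from a single study,
# with normal weight (midpoint ~21.7 kg/m^2) as the referent.
print(rr_per_5_units(
    ref_midpoint=21.7,
    midpoints=[27.5, 32.5],   # overweight, obese
    rr=[1.10, 1.35],
    lo=[0.95, 1.15],
    hi=[1.27, 1.58],
))
```

The study-specific slopes obtained this way would then be pooled across studies with the same random-effects machinery sketched earlier.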
BMI and other mortality outcomes
Only two studies reported results for death from cardiovascular disease (N = 151 deaths) [27, 112]. The summary RR for obese versus normal weight before diagnosis was 1.60 (95% CI 0.66–3.87). No association was observed for overweight versus normal weight (summary RR = 1.01, 95% CI 0.80–1.29). For each 5 kg/m2 increase in BMI, the summary RR was 1.21 (95% CI 0.83–1.77). Five studies reported results for deaths from any cause other than breast cancer (N = 2704 deaths) [21, 34, 108, 113, 114]. The summary RRs were 1.29 (95% CI 0.99–1.68, I2 = 72%, P = 0.01) for obese women and 0.96 (95% CI 0.83–1.11, I2 = 26%, P = 0.25) for overweight women, compared with normal weight women.
subgroup, meta-regression, and sensitivity analyses
The results of the subgroup and meta-regression analyses are shown in supplementary Tables S3 and S4, available at Annals of Oncology online. Subgroup analysis was not carried out for BMI ≥12 months after diagnosis because the limited number of studies would hinder any meaningful comparison. Increased risks of mortality were observed in the meta-analyses by menopausal status. Although the summary risk estimates appeared stronger for pre-menopausal breast cancer, there was no significant heterogeneity between pre- and post-menopausal breast cancer in the meta-regression analyses (P = 0.28–0.89) (supplementary Tables S3 and S4, available at Annals of Oncology online). For BMI before diagnosis and total mortality, the summary RRs for obese versus normal weight were 1.75 (95% CI 1.26–2.41, I2 = 70%, P < 0.01, 7 studies) in women with pre-menopausal breast cancer and 1.34 (95% CI 1.18–1.53, I2 = 27%, P = 0.20, 9 studies) in women with post-menopausal breast cancer. Studies with a larger number of deaths [105, 115], conducted in Europe [28, 115], or with weight and height assessed through medical records [28, 104, 115, 116] tended to report weaker associations for BMI <12 months after diagnosis and total mortality than other studies (meta-regression P = 0.01, 0.02, and 0.01, respectively) (supplementary Table S3, available at Annals of Oncology online), while studies with a larger number of deaths [101], conducted in Asia [101, 102], or adjusted for co-morbidity [101, 102] reported weaker associations for BMI <12 months after diagnosis and breast cancer mortality (meta-regression P = 0.01, 0.02, and 0.01, respectively) (supplementary Table S4, available at Annals of Oncology online). Analyses stratified by study design, or restricted to studies with invasive cases only, early-stage non-metastatic cases only, or mammography-screening-detected cases only, or that controlled for previous diseases, did not produce results materially different from those obtained in the overall analyses (results not shown). Summary risk estimates remained statistically significant when each study was omitted in turn, except for BMI ≥12 months after diagnosis and total mortality: the summary RR was 1.06 (95% CI 0.98–1.15) per 5 kg/m2 increase when Flatt et al. [117], which contributed 315 deaths, was omitted.
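Meta-regression relates study-level effect sizes (log RRs) to study characteristics such as region or how anthropometry was assessed. A minimal fixed-effect-style sketch using weighted least squares is shown below; the covariate, the numbers, and the use of statsmodels' WLS are illustrative assumptions, and the published analyses would have used random-effects meta-regression, which additionally models residual between-study variance.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative (made-up) study-level data: log RR, its standard error,
# and a binary covariate (1 = weight/height taken from medical records).
log_rr  = np.array([0.30, 0.12, 0.25, 0.05, 0.18])
se      = np.array([0.10, 0.08, 0.12, 0.07, 0.09])
records = np.array([0, 1, 0, 1, 1])

X = sm.add_constant(records)                           # intercept + covariate
fit = sm.WLS(log_rr, X, weights=1.0 / se ** 2).fit()   # inverse-variance weights

print(fit.params)    # coefficient on `records` = difference in log RR between subgroups
print(fit.pvalues)   # fixed-effect analogue of the meta-regression P-value
```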
small studies or publication bias
Asymmetry was detected only in the funnel plots of BMI <12 months after diagnosis and total mortality, and of BMI <12 months after diagnosis and breast cancer mortality, which suggests that small studies with an inverse association are missing (plots not shown). Egger's tests were borderline significant (P = 0.05) and statistically significant (P = 0.03), respectively.
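Egger's test, cited above, regresses each study's standardized effect (log RR divided by its standard error) on its precision (1/SE); an intercept significantly different from zero indicates funnel-plot asymmetry. A minimal sketch with made-up inputs:

```python
import numpy as np
import statsmodels.api as sm

def egger_test(log_rr, se):
    """Egger's regression test for small-study effects / funnel-plot asymmetry."""
    log_rr = np.asarray(log_rr, float)
    se = np.asarray(se, float)
    y = log_rr / se                       # standardized effects
    X = sm.add_constant(1.0 / se)         # intercept + precision
    fit = sm.OLS(y, X).fit()
    return fit.params[0], fit.pvalues[0]  # intercept and its two-sided P-value


# Illustrative (made-up) study effects
intercept, p = egger_test(
    log_rr=[0.35, 0.22, 0.15, 0.40, 0.05, 0.28],
    se=[0.20, 0.12, 0.08, 0.25, 0.06, 0.15],
)
print(intercept, p)
```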
discussion:
The present systematic literature review and meta-analysis of follow-up studies clearly supports that, in breast cancer survivors, higher BMI is consistently associated with lower overall and breast cancer survival, regardless of when BMI is ascertained. The limited number of studies on death from cardiovascular disease is also consistent with a positive association. For BMI before, <12 months after, and 12 months or more after breast cancer diagnosis, compared with normal weight women, obese women had 41%, 23%, and 21% higher risks of total mortality, and 35%, 25%, and 68% higher risks of breast cancer mortality, respectively. The findings were supported by the positive associations observed in the linear dose–response meta-analyses. All associations were statistically significant, apart from the relationship between BMI ≥12 months after diagnosis and breast cancer mortality; this may be due to limited statistical power, with only 220 breast cancer deaths from two follow-up studies. Positive associations, in some cases statistically significant, were also observed in overweight and underweight women compared with normal weight women. Women with a BMI of 20 kg/m2 before or <12 months after diagnosis, and of 25 kg/m2 12 months or more after diagnosis, appeared to have the lowest mortality risk in the non-linear dose–response analysis. Co-morbid conditions may explain the observed increased risk in underweight women; a thorough investigation within this group, and of its contribution to the shape of the association, is hindered because not all studies in this review reported results for it. The increased risk associated with obesity was similar in pre- and post-menopausal breast cancer. We did not find any evidence of a protective effect of obesity on survival after pre-menopausal breast cancer, contrary to what has been observed for the development of breast cancer in pre-menopausal women [4]. A large body of evidence, with 41 477 deaths (23 182 from breast cancer) in over 210 000 breast cancer survivors, was systematically reviewed in the present study. We carried out categorical, linear, and non-linear dose–response meta-analyses to examine the magnitude and the shape of the associations for total and cause-specific mortality in underweight, overweight, and obese women, by time periods before and after diagnosis that are important in relation to the population at risk and to breast cancer survivors. Our findings agree with and further extend the results of previous meta-analyses. A review published in 2010 reported statistically significant increased risks of 33% for both total and breast cancer mortality for obesity versus non-obesity around diagnosis [7].
These estimates are slightly higher than ours, which may be explained by the different search periods and inclusion criteria for the articles (33 studies and 15 studies included in the analyses, respectively). Another review, published in 2012, further reported consistent positive associations of total and breast cancer mortality with higher versus lower BMI around diagnosis [6]. No significant differences were observed by menopausal status or hormone receptor status. The After Breast Cancer Pooling Project of four prospective cohort studies found differential effects of levels of pre-diagnosis obesity on survival [118]. Compared with normal weight women, significant or borderline significant increased risks of 81% for total and 40% for breast cancer mortality were observed only for morbidly obese (≥40 kg/m2) women and not for women in other obesity categories. We also observed statistically significant increased risks for overweight women, probably because of a larger number of studies. We were unable to investigate the associations in severely and morbidly obese women because only two studies included in this review reported such results [19, 113]. Overall, our findings are consistent with previous meta-analyses in showing elevated total and breast cancer mortality associated with higher BMI, and they support the current guidelines for breast cancer survivors to stay as lean as possible within the normal range of body weight [4], for overweight women to avoid weight gain during treatment, and for obese women to lose weight after treatment [119]. The present review is limited by the challenges and flaws encountered by the individual epidemiological studies evaluating the body fatness–mortality relationship in breast cancer survivors. Most studies did not adjust for co-morbidities or assess intentional weight loss. Women with more serious health issues, and especially smokers, may lose weight but are at an increased risk of mortality, and this might cause an apparent increased risk in underweight women. Body weight information throughout the natural history of the disease, and treatment information, were usually not complete or available. Weight gain post-diagnosis is common in women with breast cancer, particularly during chemotherapy [16]. Chemotherapy under-dosing is a common problem in obese women and may contribute to their increased mortality [120]. Although several studies with pre-diagnosis BMI adjusted for underlying illnesses or excluded the first few years of follow-up, reverse causation may have affected the results in studies that assessed BMI in women with cancer and other illnesses; however, the associations in these studies were similar to those in other studies. A possible survivor bias (subjects with better prognostic factors are more likely to survive and be included) may be present in the survival cohorts, in which the range of BMI could be narrower, and may cause an underestimation of the association. Follow-up studies with variable characteristics were pooled in the meta-analysis. Women identified in clinical trials may have had specific tumour subtypes with fewer co-morbidities, and were more likely to receive protocol treatments with high treatment completion rates. Women who were recruited through mammography screening programmes may have had healthier lifestyles or better access to medical facilities, and were more likely to be diagnosed with in situ or early-stage breast cancer.
Cancer detection methods, tumour classifications and treatment regimens change over time, and may vary within (if follow-up is long) and between studies, and could not be simply examined by using the diagnosis or treatment date. We cannot rule out the effect of unmeasured or residual confounding in our analysis. Nevertheless, most results were adjusted for multiple confounding factors, including tumour stage or other-related variables and stratified analyses by several key factors showed similar summary risk estimates. Small study or publication bias was observed in the analyses of BMI <12 months after diagnosis. However, the overall evidence is supported by large, well-designed studies and is unlikely to be changed. We did not conduct analyses by race/ethnicity and treatment types as only limited studies had published results. Future studies of body fatness and breast cancer outcomes should aim to account for co-morbidities, separate intended and unintended changes of body weight, and collect complete treatment information during study follow-up. Randomised clinical trials are needed to test interventions for weight loss and maintenance on survival in women with breast cancer. In conclusion, the present systematic literature review and meta-analysis extends and confirms the associations of obesity with an unfavourable overall and breast cancer survival in pre- and post-menopausal breast cancer, regardless of when BMI is ascertained. Increased risks of mortality in underweight and overweight women were also observed. Given the comparable elevated risks with obesity in the development (for post-menopausal women) and prognosis of breast cancer, and the complications with cancer treatment and other obesity-related co-morbidities, it is prudent to maintain a healthy body weight (BMI 18.5–<25.0 kg/m2) throughout life. funding: This work was supported by the World Cancer Research Fund International (grant number: 2007/SP01) (http://www.wcrf-uk.org/). The funder of this study had no role in the decisions about the design and conduct of the study; collection, management, analysis, or interpretation of the data; or the preparation, review, or approval of the manuscript. The views expressed in this review are the opinions of the authors. They may not represent the views of the World Cancer Research Fund International/American Institute for Cancer Research and may differ from those in future updates of the evidence related to food, nutrition, physical activity, and cancer survival. disclosure: DCG reports personal fees from World Cancer Research Fund/American Institute for Cancer Research, during the conduct of the study; grants from Danone, and grants from Kelloggs, outside the submitted work. AM reports personal fees from Metagenics/Metaproteomics, personal fees from Pfizer, outside the submitted work. All remaining authors have declared no conflicts of interest. Supplementary Material:
Background: Positive association between obesity and survival after breast cancer was demonstrated in previous meta-analyses of published data, but only the results for the comparison of obese versus non-obese were summarised. Methods: We systematically searched in MEDLINE and EMBASE for follow-up studies of breast cancer survivors with body mass index (BMI) before and after diagnosis, and total and cause-specific mortality until June 2013, as part of the World Cancer Research Fund Continuous Update Project. Random-effects meta-analyses were conducted to explore the magnitude and the shape of the associations. Results: Eighty-two studies, including 213 075 breast cancer survivors with 41 477 deaths (23 182 from breast cancer), were identified. For BMI before diagnosis, compared with normal weight women, the summary relative risks (RRs) of total mortality were 1.41 [95% confidence interval (CI) 1.29-1.53] for obese (BMI >30.0), 1.07 (95% CI 1.02-1.12) for overweight (BMI 25.0-<30.0) and 1.10 (95% CI 0.92-1.31) for underweight (BMI <18.5) women. For obese women, the summary RRs were 1.75 (95% CI 1.26-2.41) for pre-menopausal and 1.34 (95% CI 1.18-1.53) for post-menopausal breast cancer. For each 5 kg/m(2) increment of BMI before, <12 months after, and ≥12 months after diagnosis, increased risks of 17%, 11%, and 8% for total mortality, and 18%, 14%, and 29% for breast cancer mortality were observed, respectively. Conclusions: Obesity is associated with poorer overall and breast cancer survival in pre- and post-menopausal breast cancer, regardless of when BMI is ascertained. Being overweight is also related to a higher risk of mortality. Randomised clinical trials are needed to test interventions for weight loss and maintenance on survival in women with breast cancer.
null
null
16,332
377
[ 115, 125, 81, 861, 1120, 298, 254, 1022, 273, 230, 165, 464, 63, 123, 66 ]
20
[ "studies", "bmi", "women", "diagnosis", "12", "95", "ci", "95 ci", "cancer", "mortality" ]
[ "cancer mortality obese", "breast cancer survivors", "menopausal breast cancer", "obesity linked breast", "menopausal women obesity" ]
null
null
[CONTENT] body mass index | meta-analysis | survival after breast cancer | systematic literature review [SUMMARY]
[CONTENT] body mass index | meta-analysis | survival after breast cancer | systematic literature review [SUMMARY]
[CONTENT] body mass index | meta-analysis | survival after breast cancer | systematic literature review [SUMMARY]
null
[CONTENT] body mass index | meta-analysis | survival after breast cancer | systematic literature review [SUMMARY]
null
[CONTENT] Body Mass Index | Breast Neoplasms | Female | Follow-Up Studies | Humans | MEDLINE | Obesity | Prognosis | Randomized Controlled Trials as Topic | Risk Factors | Survivors [SUMMARY]
[CONTENT] Body Mass Index | Breast Neoplasms | Female | Follow-Up Studies | Humans | MEDLINE | Obesity | Prognosis | Randomized Controlled Trials as Topic | Risk Factors | Survivors [SUMMARY]
[CONTENT] Body Mass Index | Breast Neoplasms | Female | Follow-Up Studies | Humans | MEDLINE | Obesity | Prognosis | Randomized Controlled Trials as Topic | Risk Factors | Survivors [SUMMARY]
null
[CONTENT] Body Mass Index | Breast Neoplasms | Female | Follow-Up Studies | Humans | MEDLINE | Obesity | Prognosis | Randomized Controlled Trials as Topic | Risk Factors | Survivors [SUMMARY]
null
[CONTENT] cancer mortality obese | breast cancer survivors | menopausal breast cancer | obesity linked breast | menopausal women obesity [SUMMARY]
[CONTENT] cancer mortality obese | breast cancer survivors | menopausal breast cancer | obesity linked breast | menopausal women obesity [SUMMARY]
[CONTENT] cancer mortality obese | breast cancer survivors | menopausal breast cancer | obesity linked breast | menopausal women obesity [SUMMARY]
null
[CONTENT] cancer mortality obese | breast cancer survivors | menopausal breast cancer | obesity linked breast | menopausal women obesity [SUMMARY]
null
[CONTENT] studies | bmi | women | diagnosis | 12 | 95 | ci | 95 ci | cancer | mortality [SUMMARY]
[CONTENT] studies | bmi | women | diagnosis | 12 | 95 | ci | 95 ci | cancer | mortality [SUMMARY]
[CONTENT] studies | bmi | women | diagnosis | 12 | 95 | ci | 95 ci | cancer | mortality [SUMMARY]
null
[CONTENT] studies | bmi | women | diagnosis | 12 | 95 | ci | 95 ci | cancer | mortality [SUMMARY]
null
[CONTENT] breast cancer | breast | obesity | cancer | million | linked | survival | 2008 | women | body [SUMMARY]
[CONTENT] bmi | study | studies | categories | bmi categories | category | 30 | meta | separately | conducted [SUMMARY]
[CONTENT] 95 ci | studies | 95 | ci | women | bmi | diagnosis | 12 | figure | mortality [SUMMARY]
null
[CONTENT] studies | bmi | women | 95 ci | 95 | ci | diagnosis | cancer | 12 | breast [SUMMARY]
null
[CONTENT] obese [SUMMARY]
[CONTENT] MEDLINE | BMI | June 2013 | the World Cancer Research Fund ||| [SUMMARY]
[CONTENT] Eighty-two | 213 075 | 41 477 | 23 182 ||| BMI | 1.41 | 95% | CI | 1.29-1.53 | BMI | 30.0 | 1.07 | 95 | CI | 1.02-1.12 | BMI | 1.10 | 95% | CI | 0.92-1.31 ||| obese | 1.75 | 95% | CI | 1.26-2.41 | 1.34 | 95% | CI | 1.18-1.53 ||| 5 kg/m(2 | BMI | 12 months | months | 17% | 11% | 8% | 18% | 14% | 29% [SUMMARY]
null
[CONTENT] obese ||| MEDLINE | BMI | June 2013 | the World Cancer Research Fund ||| ||| Eighty-two | 213 075 | 41 477 | 23 182 ||| BMI | 1.41 | 95% | CI | 1.29-1.53 | BMI | 30.0 | 1.07 | 95 | CI | 1.02-1.12 | BMI | 1.10 | 95% | CI | 0.92-1.31 ||| obese | 1.75 | 95% | CI | 1.26-2.41 | 1.34 | 95% | CI | 1.18-1.53 ||| 5 kg/m(2 | BMI | 12 months | months | 17% | 11% | 8% | 18% | 14% | 29% ||| BMI ||| ||| [SUMMARY]
null
Is application of salt for 3 days locally is sufficient to treat umbilical granuloma?
34341201
The umbilical stump usually falls off by 7-15 days of age. Healing of the umbilical stump may be complicated by umbilical granuloma, which is often treated by chemical cauterisation; this requires repeated applications and may lead to local or systemic complications. Common salt, by way of its desiccative property, may help in the treatment of umbilical granuloma.
BACKGROUND
This is a retrospective study over 3 years from a pediatric surgery unit in Northern India. The study subjects were infants less than 10 weeks of age who presented with umbilical granuloma. The method of salt application was 1 pinch of common salt applied for 1 hour twice a day for 3 consecutive days. The babies were assessed on day 5 for resolution. Success was defined as resolution after up to 3 cycles. Baseline demographic details were recorded and the association with success of treatment was analyzed.
MATERIALS AND METHODS
A total of 36 infants were treated with common salt application for umbilical granuloma. The success rate was around 96%, and the cases that presented early responded well. Most of the cases resolved after 3 cycles of treatment.
RESULTS
Common salt application is effective in the treatment of umbilical granuloma, without any side effects.
CONCLUSION
[ "Digestive System Abnormalities", "Female", "Granuloma", "Humans", "Infant", "Infant, Newborn", "Male", "Retrospective Studies", "Treatment Outcome", "Wound Healing" ]
8362916
INTRODUCTION
The umbilical stump separates by 7–15 days postpartum.[1] The umbilical ring is closed by epithelialisation after the stump falls from the umbilicus.[2] In a few infants, the umbilical ring does not close properly, and a small velvety, pink to bright red outgrowth develops at the umbilicus, called an umbilical granuloma (UG). It needs attention because of serous discharge and, if not treated, may lead to infection or mild oozing of blood.[3] Umbilical granuloma is not a congenital abnormality; it represents granulation tissue that has not properly epithelialised. Histologically, it is composed of fibroblasts and capillaries with no nerves.[1] Spontaneous resolution of umbilical granuloma is not known to occur; hence, early and proper treatment is necessary to avoid potential infective complications.[3] Umbilical granuloma can be treated by various modalities, the most common of which is chemical cauterisation. The agents used for chemical cauterisation are silver nitrate sticks,[4,5] copper sulphate granules,[6,7] ethanol wipes,[8] doxycycline,[9] clobetasol propionate[10,11] and, more recently, common salt. All chemical agents used for the treatment of UG have shown complete resolution in the reports of different authors, though each chemical has its own merits and demerits. The major drawbacks of chemical cauterisation are the need for repeated applications and local side effects. Silver nitrate sticks and copper sulphate granules may burn the local skin if not applied properly and require expertise for proper application to get the desired result, hence requiring frequent visits to the hospital. Clobetasol propionate does not require any expertise for application and can be safely applied by parents at home. The disadvantage of clobetasol propionate is the long time frame of treatment, usually 30 days, and, being a steroid, it carries a potential risk of local and systemic side effects.[11] Ethanol wipes and doxycycline are comparatively safe and can be used by parents at home, but their results were not convincing, hence they are not widely used. In search of safer and more effective management of umbilical granuloma, common salt has been used by a few researchers[12,13] based on its desiccative property. Application of salt leads to a higher concentration of sodium ions around the UG and draws water out of it, resulting in shrinkage of the granulation tissue. This study was carried out to know the effect of salt on UG and the duration of salt (rock salt) treatment needed to achieve the optimal result.
null
null
RESULTS
A total of 36 patients with umbilical granuloma, aged between 4 weeks and 20 weeks (except one), were managed during July 2018–December 2019 [Table 1]. In 6 cases, the granuloma separated after one complete session of salt application, whereas in 11 cases it took two complete sessions for the resolution of UG. In most cases, UG resolved with three cycles of salt application. Two infants required four complete sessions for the resolution of UG [Table 2]. Cases that presented early required relatively fewer salt applications than those that presented late, and this difference was statistically significant. One case was lost to follow-up after the second session of salt application, and one case did not respond to salt application and was managed with suture ligation. None of the patients had a local complication in terms of skin colour change, skin maceration or excessive crying during salt application. No parent reported any difficulty or irritability of their infant while applying salt. All parents completed the treatment without difficulty and had a satisfactory result, except in two cases: one was lost to follow-up and the other was termed a non-responder and later treated by the suture ligation technique. Table 1: Demographic profile of study infants (n=36); Chi-square test applied. Table 2: Response in study infants in different age groups (n=36); Fisher's exact test applied. *Patient lost to follow-up after the second cycle and considered a non-responder. #Patient was 18 months old, did not respond to cauterisation with salt and was managed with suture ligation.
CONCLUSION
Umbilical granuloma can be effectively managed with common salt, and the best result is obtained if salt application is carried out for at least 9–12 days. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
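The 9–12 day figure follows directly from the protocol's cycle arithmetic (one cycle = 3 consecutive days of twice-daily, 1-hour applications; most granulomas resolved by the third cycle, and non-responders received a fourth). The small helper below is an illustration only, not part of the original paper, and simply makes that arithmetic explicit.

```python
# Hypothetical helper illustrating the cycle arithmetic behind the conclusion:
# one cycle = 3 consecutive days, with 2 one-hour salt applications per day.
def salt_protocol_summary(cycles: int) -> dict:
    days_per_cycle = 3
    applications_per_day = 2
    treatment_days = cycles * days_per_cycle
    return {
        "cycles": cycles,
        "treatment_days": treatment_days,
        "total_applications": treatment_days * applications_per_day,
    }

for n in (3, 4):  # most cases resolved by cycle 3; non-responders received a 4th cycle
    print(salt_protocol_summary(n))
# 3 cycles -> 9 treatment days (18 applications); 4 cycles -> 12 days (24 applications),
# which is where the "at least 9-12 days" recommendation comes from.
```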
[ "Statistical analysis", "Limitation(s)", "Financial support and sponsorship" ]
[ "Data were entered into Excel and scrutinised for error. Further, categorical variables were expressed in frequency and percentage. Association of categorical variable was assessed by Chi-square/Fisher's exact test. P < 0.05 was considered as statistically significant and analysed by Stata 14 (StataCorp, College Station, Texas, USA).", "It was a retrospective study and there was no comparison with any other mode of treatment. The prospective study with three or four arms would have been better because, in the present series, the same patients were subjected to the next cycle once if not resolved with one cycle till completion of the fourth cycle. The contact time of the salt to the UG was subjective because we have to rely on the parents. This retrospective observational study was carried to know the efficacy of common salt/table salt (rock salt) in the management of umbilical granuloma.", "Nil." ]
[ null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Statistical analysis", "RESULTS", "DISCUSSION", "Limitation(s)", "CONCLUSION", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Umbilical stump separates by 7–15 days postpartum.[1] Umbilical ring is closed by epithelisation after the stump falls from the umbilicus.[2] In few infants, the umbilical ring is not closed properly and develop small velvety, pink to bright red coloured outgrowth at the umbilicus called as umbilical granuloma (UG). It needs attention because of serous discharge and if not treated may lead to infection or mild oozing of blood.[3] Umbilical granuloma is not a congenital abnormality, but it represents granulation tissue which is not properly epithelialised. Histologically, it is composed of fibroblasts and capillaries with no nerves.[1] The spontaneous resolution of umbilical granuloma is not known, hence the early and proper treatment is necessary to avoid potential infective complications.[3] Umbilical Granuloma (UG) can be treated by various modalities and most common modality is chemical cauterisation. The drugs used for chemical cauterisation are silver nitrate sticks,[45] copper sulphate granules,[67] ethanol wipes,[8] doxycycline,[9] clobetasol propionate[1011] and more recently, common salt is used for chemical cauterisation. All chemical agents used for the treatment of UG have shown complete resolution by different authors though each chemical has its own merits and demerits.\nMajor drawback of chemical cauterisation is need of repeated applications and local side effect. Silver nitrate sticks and copper sulphate granules may lead to burning of local skin if not applied properly and require expertise for proper application to get the desired result, hence require a frequent visit to the hospital. Clobetasol propionate does not require any expertise for application and can be safely applied by parents at home. Disadvantage of clobestsol propionate is a long time frame of treatment which is usually of 30 days and being a steroid, it has a potential risk of local and systemic side effect[11] Ethanol wipes and doxycycline are comparatively safe and can be used by parents at home, but their results were not convincing, hence not widely used.\nIn search of safer and effective management of umbilical granuloma, common salt was used by few researchers[1213] based on its desiccative property. Application of salt leads to a higher concentration of sodium ion around UG and draws water out of it, resulting in shrinkage of the granulation tissue. This study was carried out to know the effect of salt on UG and duration of salt (rock salt) treatment to achieve the optimal result.", "This was a retrospective study performed at paediatric surgery outpatient clinic, tertiary centre in North India, from January 2015 to December 2018.\nAs per the record, a total of 40 infants of UG presented to the paediatric surgery outpatient department, of which only 36 were managed with salt application. Four patients already received some treatment from the general practitioner, hence were not included in the study.\nAs per the record, the procedure followed for salt application was (i) lesion cleaned with lukewarm water soaked cotton ball [Figure 1], (ii) pinch of rock salt sprinkled over lesion and removed after 1 h and (iii) procedure properly taught and explained to parent of the infant to repeat at home twice daily for 3 days and review on 5th day. This is one complete session of salt application. At the first review that is on 5th day if UG resolved, it was considered as cured and the patient was reviewed after 1 month. 
If UG was not resolved at the first visit, parents are advised to repeat the same procedure. As per the record, application of salt twice daily for 1 h for 3 consecutive days was considered as one complete cycle.\nShowing the effect of slat application\nAs per the records, the patients were levelled nonresponsive if they do not respond to four cycles of salt application and were managed with suture ligation technique. All patients were followed for 12 weeks.\n\nCured: If granuloma was separated completely displaying normal skin cover and without any serous discharge,Nonresponsive: No change in the nature of the disease\n\nCured: If granuloma was separated completely displaying normal skin cover and without any serous discharge,\nNonresponsive: No change in the nature of the disease\nThe following parameters were retrieved from the record at each visit: (1) change in colour of lesion, (2) change in size of the lesion, (3) soakage if any from the lesion, (4) excessive crying during salt application and (7) any skin change. Parents did not report any behavioural changes, sleep disturbances or irritability in the infants after the application (though the first application is done by the treating clinician itself.)\nStatistical analysis Data were entered into Excel and scrutinised for error. Further, categorical variables were expressed in frequency and percentage. Association of categorical variable was assessed by Chi-square/Fisher's exact test. P < 0.05 was considered as statistically significant and analysed by Stata 14 (StataCorp, College Station, Texas, USA).\nData were entered into Excel and scrutinised for error. Further, categorical variables were expressed in frequency and percentage. Association of categorical variable was assessed by Chi-square/Fisher's exact test. P < 0.05 was considered as statistically significant and analysed by Stata 14 (StataCorp, College Station, Texas, USA).", "Data were entered into Excel and scrutinised for error. Further, categorical variables were expressed in frequency and percentage. Association of categorical variable was assessed by Chi-square/Fisher's exact test. P < 0.05 was considered as statistically significant and analysed by Stata 14 (StataCorp, College Station, Texas, USA).", "A total of 36 patients with umbilical granuloma aged between 4 weeks and 20 weeks (except one) were managed during July 2018–December 2019 [Table 1]: In 6 cases, granuloma separated after one complete session of salt application, whereas in 11 cases, it took two complete sessions for the resolution of UG. In most cases, UG was resolved with three cycles of salt application. Two infants required four complete sessions for the resolution of UG [Table 2]. The cases presented early required relatively less number of salt applications than who presented late and this difference was found out to be statistically significant. One case lost to follow-up after second session of salt application, whereas one case not responded to salt application and managed with suture ligation. None of patients had a local complication in terms of skin colour change, skin maceration or excessive cry during salt application. No parent complained for any difficulty or irritability of their infant while applying salt. All parents completed the treatment without difficulty and had satisfactory result except in two cases, one of which lost to follow-up and another which was termed as nonresponder and latter treated by suture ligation technique.\nDemographic profile of study infants. 
n=36\nChi-square test applied\nResponse in study infants in different age groups. n=36\nFisher’s exact test applied. *Patient lost to follow-up after second cycle and considered as non-responder, #Patient was 18 month old, not responded to the cauterisation with salt and managed with suture ligation", "UG is relatively a common condition with no recognised associated anomalies. Chemical cauterisation is the most common method to treat UG. Few other umbilical conditions may present in a similar manner which are difficult to distinguish from UG like patent urachus and patent vitello-intestinal duct and should not be treated with chemical cauterisation. Therefore adequate assessment of the discharge and swelling of the umbilicus should be done to minimise diagnostic errors and delay in proper management.\nApplication of common salt for treating umbilical granuloma has been tried in the past and used at some centres in India[1314] for the management of UG. A review of the English literature revealed the following steps for salt application: (i) cleaning of the umbilical area with a wet cotton pad, (ii) pinch of salt crystals is sprinkled on the granuloma, (iii) granuloma is closed with an adhesive drape, (iv) drape is opened 30 min after the procedure and the application procedure is terminated and (v) this process was repeated two times a day for 3 days and the patient was reviewed on the 6th day.[15]\nThe procedure of salt application was modification which was done by either increasing the days of application or increase the time of contact of salt to the granuloma for better results. It ranged from single application where salt remained for 24 h[16] to 12 application of salt over 7 consecutive days where salt was removed after 1 h of each application.[14] Few studies adopted the covering of the granuloma site after application of salt using adhesive tape for ensuring proper contact of the salt to the granuloma,[161718] whereas others left the site open to prevent possible sogginess around the granuloma.[141719]\nMost of the published series on chemical cauterisation of UG using common salt had excellent results[141718] and only in few results were not convincing.[67] The overall efficacy of salt to treat UG ranged from 53% to 100%, as reported in different studies.[13141516171819] It was not clear how many days of treatment is required for optimum result; current studies used salt in a phased manner and determined the time and number of salt application for the resolution of UG.\nIn the present study, salt was used as the primary agent for the cauterisation of UG with slight modification of standard technique based on the fact that local and systemic side effects of salt are negligible, its application can be done by the parents easily at their home if properly demonstrated to them, duration of treatment is short and last but not the least was the low cost and easy availability of the salt. Modification of the standard procedure (vide above 16) done in the present study was (a) contact time of salt was made to 1 h instead of 30 min at each application and (b) patients were not marked as nonresponsive if UG was not resolved after application of salt for 3 consecutive days (one cycle). The patients were asked to apply at least for three more cycles of salt application before marking them as no responder. 
With this modification, complete resolution of UG was achieved in 96% cases and only one case who was nonresponsive to cauterisation with salt was 18 months.\nLimitation(s) It was a retrospective study and there was no comparison with any other mode of treatment. The prospective study with three or four arms would have been better because, in the present series, the same patients were subjected to the next cycle once if not resolved with one cycle till completion of the fourth cycle. The contact time of the salt to the UG was subjective because we have to rely on the parents. This retrospective observational study was carried to know the efficacy of common salt/table salt (rock salt) in the management of umbilical granuloma.\nIt was a retrospective study and there was no comparison with any other mode of treatment. The prospective study with three or four arms would have been better because, in the present series, the same patients were subjected to the next cycle once if not resolved with one cycle till completion of the fourth cycle. The contact time of the salt to the UG was subjective because we have to rely on the parents. This retrospective observational study was carried to know the efficacy of common salt/table salt (rock salt) in the management of umbilical granuloma.", "It was a retrospective study and there was no comparison with any other mode of treatment. The prospective study with three or four arms would have been better because, in the present series, the same patients were subjected to the next cycle once if not resolved with one cycle till completion of the fourth cycle. The contact time of the salt to the UG was subjective because we have to rely on the parents. This retrospective observational study was carried to know the efficacy of common salt/table salt (rock salt) in the management of umbilical granuloma.", "Umbilical granuloma can be effectively managed with common salt, and the best result is obtained if application salt is done at least for 9–12 days.\nFinancial support and sponsorship Nil.\nNil.\nConflicts of interest There are no conflicts of interest.\nThere are no conflicts of interest.", "Nil.", "There are no conflicts of interest." ]
[ "intro", "materials|methods", null, "results", "discussion", null, "conclusion", null, "COI-statement" ]
[ "Chemical cauterisation", "persistent discharge", "umbilical polyp" ]
INTRODUCTION: Umbilical stump separates by 7–15 days postpartum.[1] Umbilical ring is closed by epithelisation after the stump falls from the umbilicus.[2] In few infants, the umbilical ring is not closed properly and develop small velvety, pink to bright red coloured outgrowth at the umbilicus called as umbilical granuloma (UG). It needs attention because of serous discharge and if not treated may lead to infection or mild oozing of blood.[3] Umbilical granuloma is not a congenital abnormality, but it represents granulation tissue which is not properly epithelialised. Histologically, it is composed of fibroblasts and capillaries with no nerves.[1] The spontaneous resolution of umbilical granuloma is not known, hence the early and proper treatment is necessary to avoid potential infective complications.[3] Umbilical Granuloma (UG) can be treated by various modalities and most common modality is chemical cauterisation. The drugs used for chemical cauterisation are silver nitrate sticks,[45] copper sulphate granules,[67] ethanol wipes,[8] doxycycline,[9] clobetasol propionate[1011] and more recently, common salt is used for chemical cauterisation. All chemical agents used for the treatment of UG have shown complete resolution by different authors though each chemical has its own merits and demerits. Major drawback of chemical cauterisation is need of repeated applications and local side effect. Silver nitrate sticks and copper sulphate granules may lead to burning of local skin if not applied properly and require expertise for proper application to get the desired result, hence require a frequent visit to the hospital. Clobetasol propionate does not require any expertise for application and can be safely applied by parents at home. Disadvantage of clobestsol propionate is a long time frame of treatment which is usually of 30 days and being a steroid, it has a potential risk of local and systemic side effect[11] Ethanol wipes and doxycycline are comparatively safe and can be used by parents at home, but their results were not convincing, hence not widely used. In search of safer and effective management of umbilical granuloma, common salt was used by few researchers[1213] based on its desiccative property. Application of salt leads to a higher concentration of sodium ion around UG and draws water out of it, resulting in shrinkage of the granulation tissue. This study was carried out to know the effect of salt on UG and duration of salt (rock salt) treatment to achieve the optimal result. MATERIALS AND METHODS: This was a retrospective study performed at paediatric surgery outpatient clinic, tertiary centre in North India, from January 2015 to December 2018. As per the record, a total of 40 infants of UG presented to the paediatric surgery outpatient department, of which only 36 were managed with salt application. Four patients already received some treatment from the general practitioner, hence were not included in the study. As per the record, the procedure followed for salt application was (i) lesion cleaned with lukewarm water soaked cotton ball [Figure 1], (ii) pinch of rock salt sprinkled over lesion and removed after 1 h and (iii) procedure properly taught and explained to parent of the infant to repeat at home twice daily for 3 days and review on 5th day. This is one complete session of salt application. At the first review that is on 5th day if UG resolved, it was considered as cured and the patient was reviewed after 1 month. 
If UG was not resolved at the first visit, parents are advised to repeat the same procedure. As per the record, application of salt twice daily for 1 h for 3 consecutive days was considered as one complete cycle. Showing the effect of slat application As per the records, the patients were levelled nonresponsive if they do not respond to four cycles of salt application and were managed with suture ligation technique. All patients were followed for 12 weeks. Cured: If granuloma was separated completely displaying normal skin cover and without any serous discharge,Nonresponsive: No change in the nature of the disease Cured: If granuloma was separated completely displaying normal skin cover and without any serous discharge, Nonresponsive: No change in the nature of the disease The following parameters were retrieved from the record at each visit: (1) change in colour of lesion, (2) change in size of the lesion, (3) soakage if any from the lesion, (4) excessive crying during salt application and (7) any skin change. Parents did not report any behavioural changes, sleep disturbances or irritability in the infants after the application (though the first application is done by the treating clinician itself.) Statistical analysis Data were entered into Excel and scrutinised for error. Further, categorical variables were expressed in frequency and percentage. Association of categorical variable was assessed by Chi-square/Fisher's exact test. P < 0.05 was considered as statistically significant and analysed by Stata 14 (StataCorp, College Station, Texas, USA). Data were entered into Excel and scrutinised for error. Further, categorical variables were expressed in frequency and percentage. Association of categorical variable was assessed by Chi-square/Fisher's exact test. P < 0.05 was considered as statistically significant and analysed by Stata 14 (StataCorp, College Station, Texas, USA). Statistical analysis: Data were entered into Excel and scrutinised for error. Further, categorical variables were expressed in frequency and percentage. Association of categorical variable was assessed by Chi-square/Fisher's exact test. P < 0.05 was considered as statistically significant and analysed by Stata 14 (StataCorp, College Station, Texas, USA). RESULTS: A total of 36 patients with umbilical granuloma aged between 4 weeks and 20 weeks (except one) were managed during July 2018–December 2019 [Table 1]: In 6 cases, granuloma separated after one complete session of salt application, whereas in 11 cases, it took two complete sessions for the resolution of UG. In most cases, UG was resolved with three cycles of salt application. Two infants required four complete sessions for the resolution of UG [Table 2]. The cases presented early required relatively less number of salt applications than who presented late and this difference was found out to be statistically significant. One case lost to follow-up after second session of salt application, whereas one case not responded to salt application and managed with suture ligation. None of patients had a local complication in terms of skin colour change, skin maceration or excessive cry during salt application. No parent complained for any difficulty or irritability of their infant while applying salt. All parents completed the treatment without difficulty and had satisfactory result except in two cases, one of which lost to follow-up and another which was termed as nonresponder and latter treated by suture ligation technique. Demographic profile of study infants. 
n=36 Chi-square test applied Response in study infants in different age groups. n=36 Fisher’s exact test applied. *Patient lost to follow-up after second cycle and considered as non-responder, #Patient was 18 month old, not responded to the cauterisation with salt and managed with suture ligation DISCUSSION: UG is relatively a common condition with no recognised associated anomalies. Chemical cauterisation is the most common method to treat UG. Few other umbilical conditions may present in a similar manner which are difficult to distinguish from UG like patent urachus and patent vitello-intestinal duct and should not be treated with chemical cauterisation. Therefore adequate assessment of the discharge and swelling of the umbilicus should be done to minimise diagnostic errors and delay in proper management. Application of common salt for treating umbilical granuloma has been tried in the past and used at some centres in India[1314] for the management of UG. A review of the English literature revealed the following steps for salt application: (i) cleaning of the umbilical area with a wet cotton pad, (ii) pinch of salt crystals is sprinkled on the granuloma, (iii) granuloma is closed with an adhesive drape, (iv) drape is opened 30 min after the procedure and the application procedure is terminated and (v) this process was repeated two times a day for 3 days and the patient was reviewed on the 6th day.[15] The procedure of salt application was modification which was done by either increasing the days of application or increase the time of contact of salt to the granuloma for better results. It ranged from single application where salt remained for 24 h[16] to 12 application of salt over 7 consecutive days where salt was removed after 1 h of each application.[14] Few studies adopted the covering of the granuloma site after application of salt using adhesive tape for ensuring proper contact of the salt to the granuloma,[161718] whereas others left the site open to prevent possible sogginess around the granuloma.[141719] Most of the published series on chemical cauterisation of UG using common salt had excellent results[141718] and only in few results were not convincing.[67] The overall efficacy of salt to treat UG ranged from 53% to 100%, as reported in different studies.[13141516171819] It was not clear how many days of treatment is required for optimum result; current studies used salt in a phased manner and determined the time and number of salt application for the resolution of UG. In the present study, salt was used as the primary agent for the cauterisation of UG with slight modification of standard technique based on the fact that local and systemic side effects of salt are negligible, its application can be done by the parents easily at their home if properly demonstrated to them, duration of treatment is short and last but not the least was the low cost and easy availability of the salt. Modification of the standard procedure (vide above 16) done in the present study was (a) contact time of salt was made to 1 h instead of 30 min at each application and (b) patients were not marked as nonresponsive if UG was not resolved after application of salt for 3 consecutive days (one cycle). The patients were asked to apply at least for three more cycles of salt application before marking them as no responder. With this modification, complete resolution of UG was achieved in 96% cases and only one case who was nonresponsive to cauterisation with salt was 18 months. 
Limitation(s) It was a retrospective study and there was no comparison with any other mode of treatment. The prospective study with three or four arms would have been better because, in the present series, the same patients were subjected to the next cycle once if not resolved with one cycle till completion of the fourth cycle. The contact time of the salt to the UG was subjective because we have to rely on the parents. This retrospective observational study was carried to know the efficacy of common salt/table salt (rock salt) in the management of umbilical granuloma. It was a retrospective study and there was no comparison with any other mode of treatment. The prospective study with three or four arms would have been better because, in the present series, the same patients were subjected to the next cycle once if not resolved with one cycle till completion of the fourth cycle. The contact time of the salt to the UG was subjective because we have to rely on the parents. This retrospective observational study was carried to know the efficacy of common salt/table salt (rock salt) in the management of umbilical granuloma. Limitation(s): It was a retrospective study and there was no comparison with any other mode of treatment. The prospective study with three or four arms would have been better because, in the present series, the same patients were subjected to the next cycle once if not resolved with one cycle till completion of the fourth cycle. The contact time of the salt to the UG was subjective because we have to rely on the parents. This retrospective observational study was carried to know the efficacy of common salt/table salt (rock salt) in the management of umbilical granuloma. CONCLUSION: Umbilical granuloma can be effectively managed with common salt, and the best result is obtained if application salt is done at least for 9–12 days. Financial support and sponsorship Nil. Nil. Conflicts of interest There are no conflicts of interest. There are no conflicts of interest. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Background: The umbilical stump falls off by 7-15 days of age. Healing of the umbilical stump may be complicated by umbilical granuloma. It is often treated by chemical cauterisation, which requires repeated applications and may lead to local or systemic complications. Common salt, by way of its desiccative property, may help in the treatment of umbilical granuloma. Methods: This is a retrospective study over 3 years from a pediatric surgery unit in Northern India. The study subjects were infants less than 10 weeks of age who presented with umbilical granuloma. Salt was applied as one pinch of common salt left in place for 1 hour, twice a day, for 3 consecutive days. The babies were assessed on day 5 for resolution. Success was defined as resolution of the granuloma after 3 such cycles. Baseline demographic details were recorded, and their association with treatment success was analyzed. Results: A total of 36 infants were treated with common salt application for umbilical granuloma. The success rate was around 96%, and cases that presented early responded well. Most cases resolved after 3 cycles of treatment. Conclusions: Common salt application is effective in the treatment of umbilical granuloma, without any side effects.
INTRODUCTION: Umbilical stump separates by 7–15 days postpartum.[1] Umbilical ring is closed by epithelisation after the stump falls from the umbilicus.[2] In few infants, the umbilical ring is not closed properly and develop small velvety, pink to bright red coloured outgrowth at the umbilicus called as umbilical granuloma (UG). It needs attention because of serous discharge and if not treated may lead to infection or mild oozing of blood.[3] Umbilical granuloma is not a congenital abnormality, but it represents granulation tissue which is not properly epithelialised. Histologically, it is composed of fibroblasts and capillaries with no nerves.[1] The spontaneous resolution of umbilical granuloma is not known, hence the early and proper treatment is necessary to avoid potential infective complications.[3] Umbilical Granuloma (UG) can be treated by various modalities and most common modality is chemical cauterisation. The drugs used for chemical cauterisation are silver nitrate sticks,[45] copper sulphate granules,[67] ethanol wipes,[8] doxycycline,[9] clobetasol propionate[1011] and more recently, common salt is used for chemical cauterisation. All chemical agents used for the treatment of UG have shown complete resolution by different authors though each chemical has its own merits and demerits. Major drawback of chemical cauterisation is need of repeated applications and local side effect. Silver nitrate sticks and copper sulphate granules may lead to burning of local skin if not applied properly and require expertise for proper application to get the desired result, hence require a frequent visit to the hospital. Clobetasol propionate does not require any expertise for application and can be safely applied by parents at home. Disadvantage of clobestsol propionate is a long time frame of treatment which is usually of 30 days and being a steroid, it has a potential risk of local and systemic side effect[11] Ethanol wipes and doxycycline are comparatively safe and can be used by parents at home, but their results were not convincing, hence not widely used. In search of safer and effective management of umbilical granuloma, common salt was used by few researchers[1213] based on its desiccative property. Application of salt leads to a higher concentration of sodium ion around UG and draws water out of it, resulting in shrinkage of the granulation tissue. This study was carried out to know the effect of salt on UG and duration of salt (rock salt) treatment to achieve the optimal result. CONCLUSION: Umbilical granuloma can be effectively managed with common salt, and the best result is obtained if application salt is done at least for 9–12 days. Financial support and sponsorship Nil. Nil. Conflicts of interest There are no conflicts of interest. There are no conflicts of interest.
Background: The umbilical stump falls off by 7-15 days of age. Healing of the umbilical stump may be complicated by umbilical granuloma. It is often treated by chemical cauterisation, which requires repeated applications and may lead to local or systemic complications. Common salt, by way of its desiccative property, may help in the treatment of umbilical granuloma. Methods: This is a retrospective study over 3 years from a pediatric surgery unit in Northern India. The study subjects were infants less than 10 weeks of age who presented with umbilical granuloma. Salt was applied as one pinch of common salt left in place for 1 hour, twice a day, for 3 consecutive days. The babies were assessed on day 5 for resolution. Success was defined as resolution of the granuloma after 3 such cycles. Baseline demographic details were recorded, and their association with treatment success was analyzed. Results: A total of 36 infants were treated with common salt application for umbilical granuloma. The success rate was around 96%, and cases that presented early responded well. Most cases resolved after 3 cycles of treatment. Conclusions: Common salt application is effective in the treatment of umbilical granuloma, without any side effects.
2,349
230
[ 61, 106, 2 ]
9
[ "salt", "application", "ug", "granuloma", "umbilical", "study", "salt application", "cycle", "umbilical granuloma", "treatment" ]
[ "umbilical granuloma tried", "umbilical granuloma known", "umbilical granuloma congenital", "treating umbilical granuloma", "umbilical granuloma ug" ]
null
[CONTENT] Chemical cauterisation | persistent discharge | umbilical polyp [SUMMARY]
null
[CONTENT] Chemical cauterisation | persistent discharge | umbilical polyp [SUMMARY]
[CONTENT] Chemical cauterisation | persistent discharge | umbilical polyp [SUMMARY]
[CONTENT] Chemical cauterisation | persistent discharge | umbilical polyp [SUMMARY]
[CONTENT] Chemical cauterisation | persistent discharge | umbilical polyp [SUMMARY]
[CONTENT] Digestive System Abnormalities | Female | Granuloma | Humans | Infant | Infant, Newborn | Male | Retrospective Studies | Treatment Outcome | Wound Healing [SUMMARY]
null
[CONTENT] Digestive System Abnormalities | Female | Granuloma | Humans | Infant | Infant, Newborn | Male | Retrospective Studies | Treatment Outcome | Wound Healing [SUMMARY]
[CONTENT] Digestive System Abnormalities | Female | Granuloma | Humans | Infant | Infant, Newborn | Male | Retrospective Studies | Treatment Outcome | Wound Healing [SUMMARY]
[CONTENT] Digestive System Abnormalities | Female | Granuloma | Humans | Infant | Infant, Newborn | Male | Retrospective Studies | Treatment Outcome | Wound Healing [SUMMARY]
[CONTENT] Digestive System Abnormalities | Female | Granuloma | Humans | Infant | Infant, Newborn | Male | Retrospective Studies | Treatment Outcome | Wound Healing [SUMMARY]
[CONTENT] umbilical granuloma tried | umbilical granuloma known | umbilical granuloma congenital | treating umbilical granuloma | umbilical granuloma ug [SUMMARY]
null
[CONTENT] umbilical granuloma tried | umbilical granuloma known | umbilical granuloma congenital | treating umbilical granuloma | umbilical granuloma ug [SUMMARY]
[CONTENT] umbilical granuloma tried | umbilical granuloma known | umbilical granuloma congenital | treating umbilical granuloma | umbilical granuloma ug [SUMMARY]
[CONTENT] umbilical granuloma tried | umbilical granuloma known | umbilical granuloma congenital | treating umbilical granuloma | umbilical granuloma ug [SUMMARY]
[CONTENT] umbilical granuloma tried | umbilical granuloma known | umbilical granuloma congenital | treating umbilical granuloma | umbilical granuloma ug [SUMMARY]
[CONTENT] salt | application | ug | granuloma | umbilical | study | salt application | cycle | umbilical granuloma | treatment [SUMMARY]
null
[CONTENT] salt | application | ug | granuloma | umbilical | study | salt application | cycle | umbilical granuloma | treatment [SUMMARY]
[CONTENT] salt | application | ug | granuloma | umbilical | study | salt application | cycle | umbilical granuloma | treatment [SUMMARY]
[CONTENT] salt | application | ug | granuloma | umbilical | study | salt application | cycle | umbilical granuloma | treatment [SUMMARY]
[CONTENT] salt | application | ug | granuloma | umbilical | study | salt application | cycle | umbilical granuloma | treatment [SUMMARY]
[CONTENT] chemical | umbilical | chemical cauterisation | salt | require | propionate | cauterisation | umbilical granuloma | ug | granuloma [SUMMARY]
null
[CONTENT] cases | salt | salt application | follow | lost | lost follow | application | suture ligation | suture | ligation [SUMMARY]
[CONTENT] conflicts interest | conflicts | interest | conflicts interest conflicts interest | conflicts interest conflicts | interest conflicts interest | interest conflicts | nil | salt | nil conflicts interest conflicts [SUMMARY]
[CONTENT] nil | salt | conflicts interest | interest | conflicts | application | ug | study | granuloma | umbilical [SUMMARY]
[CONTENT] nil | salt | conflicts interest | interest | conflicts | application | ug | study | granuloma | umbilical [SUMMARY]
[CONTENT] 7-15 days of age ||| Umbilical Granuloma ||| ||| Umbilical Granuloma [SUMMARY]
null
[CONTENT] 36 ||| around 96% ||| 3 [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] 7-15 days of age ||| Umbilical Granuloma ||| ||| Umbilical Granuloma ||| 3 years | Northern India ||| less than 10 weeks of age ||| 1 | 1 hour | 3 consecutive days ||| day 5th ||| 3 ||| ||| 36 ||| around 96% ||| 3 ||| [SUMMARY]
[CONTENT] 7-15 days of age ||| Umbilical Granuloma ||| ||| Umbilical Granuloma ||| 3 years | Northern India ||| less than 10 weeks of age ||| 1 | 1 hour | 3 consecutive days ||| day 5th ||| 3 ||| ||| 36 ||| around 96% ||| 3 ||| [SUMMARY]
Cannabidiol (CBD) Use among children with juvenile idiopathic arthritis.
34903213
Juvenile idiopathic arthritis (JIA) is common and difficult to treat. Cannabidiol (CBD) is now widely available, but no studies to date have investigated the use of CBD for JIA.
BACKGROUND
We performed a chart review to identify patients with JIA at a Midwestern medical institution between 2017 and 2019. We surveyed primary caregivers of JIA patients using an anonymous, online survey with questions on caregiver knowledge and attitudes towards CBD. We compared respondents with no interest in CBD use vs. those contemplating or currently using CBD using descriptive statistics.
METHODS
Of 900 reviewed charts, 422 met inclusion criteria. Of these, 236 consented to be sent a survey link, and n=136 (58%) completed surveys. Overall, 34.5% (n=47) of respondents reported no interest in using a CBD product for their child's JIA, while 54% (n=79) reported contemplating using CBD and 7% (n=10) reported currently giving their child CBD. Only 2% of respondents contemplating or actively using a CBD product learned about CBD from their child's rheumatologist, compared with television (70%) or a friend (50%). Most respondents had not talked to their child's rheumatologist about using CBD. Of those currently using CBD, most used oral or topical products, and only 10% of respondents (n=1) knew what dose they were giving their child.
RESULTS
Our results show infrequent use but a large interest in CBD among caregivers of children with JIA. Given CBD's unknown safety profile in children with JIA, this study highlights a need for better studies and education around CBD for pediatric rheumatologists.
CONCLUSIONS
[ "Adolescent", "Arthritis, Juvenile", "Cannabidiol", "Child", "Female", "Health Knowledge, Attitudes, Practice", "Humans", "Male", "Parents", "Surveys and Questionnaires" ]
8670290
Background
Juvenile idiopathic arthritis (JIA) is the most common type of chronic arthritis in children, affecting 1 in 1000 children. It is an important cause of short and long-term disability and causes significant financial burden with annual direct medical costs ranging $400-$7,000 [1]. Effective treatments for JIA include non-steroidal anti-inflammatory drugs (NSAIDs), corticosteroids, disease modifying anti-rheumatic drugs (DMARDs), and biologic agents, but each carries potential adverse effects [2]. Indeed, parents and children frequently worry about side effects and the long-term safety of medications prescribed for JIA [3, 4]. As a result, many parents and children (34-92%) use complementary and integrative medicine (CIM) separately or in conjunction with standard treatment of JIA [5–10]. One such CIM treatment gaining popularity in the past years is cannabidiol (CBD), which is derived from Cannabis sativa. Since the removal of some CBD products from the Controlled Substances Act in 2018, a vast number of products made from hemp (Cannabis sativa with <0.3% Δ-9-tetrahydrocannabinol [THC]) have become available in brick and mortar retailers in topical, oral or inhaled forms [11]. CBD is non-intoxicating and has been widely advertised as a safe and natural therapy for many ailments including chronic pain, arthritis, other inflammatory diseases, and mental health conditions, resulting in frequent use for these conditions [12, 13]. In non-human animal studies, CBD reduces pain and inflammation due to arthritis, but these findings have not been validated in human studies [14, 15]. With the exception of Epidiolex, which is approved for the treatment of the rare seizure disorders Lennox Gastaut and Dravet Syndrome, CBD is minimally regulated by the Food and Drug Administration (FDA) [16, 17]. With the exception of these rare seizure disorders, evidence of a therapeutic benefit of CBD for pediatric conditions is lacking [18]. CBD’s safety profile has only been characterized among individuals with Dravet and Lennox Gastaut syndrome, so whether CBD is safe for use in healthy children or other pediatric populations remains unknown. Complicating matters, CBD is a promiscuous molecule that interacts with numerous systems in the body (e.g., serotonergic 5HT1A, endocannabinoid system as cannabinoid receptor 1 antagonist) [19, 20], and may interact with the metabolism of drugs commonly taken by children with JIA including prednisone and naproxen [21]. Further, testing of safety and potency of CBD products is not governed by a strong regulatory apparatus, [22, 23] and a recent JAMA study revealed that only 31% of CBD products sold online are accurately labeled for potency with 21% of samples containing THC [24]. As such, there are safety concerns about use in children, especially those with JIA. As pediatric rheumatologists, the authors (C.F., M. R.) have been frequently asked about using CBD products to treat JIA symptoms, but to date, there is no literature available regarding the use of CBD in children with JIA. The objective of this study was to determine the frequency of CBD use among children with JIA and investigate caregiver knowledge and opinions about CBD use for their children.
null
null
Results
Participation Overall, 422 JIA patients met inclusion criteria. Of those, 236 parent/guardians agreed to be sent the survey link and 136 participants completed the survey (58% response rate, Fig. 1). 10 respondents (7%) reported using a CBD product to treat their child’s JIA, 79 respondents (58%) reported contemplating use of a CBD product to treat their child’s JIA, and 47 respondents (34.5%) reported no interest in starting a CBD product. Demographic characteristics of the survey respondents are shown in Table 1. The study population was largely white, had a bachelor’s degree or higher, and had an annual income of more than 50,000 dollars per year. A large majority of respondents in both groups reported using one or more CIM therapies in their lifetime. There was no significant difference in the specific types or number of CIM therapies used across groups. Report of high disease activity was more frequent among those currently using or contemplating CBD use than those not contemplating use. Fig. 1Flow diagram of studyTable 1Correlation of demographics and disease characteristicsParent/guardian and child demographics and disease characteristicsTotal (N=136)Not contemplating starting a CBD product for child (N=47)Contemplating starting a CBD product for child (N=79) and using CBD product for child (N=10)P-valueRespondent parent/guardian age in years (mean +/- SD)29.7 (8)26.5 (7)Respondent Parent/guardian Gender- Female N (%)119 (87)42 (89)77 (86)Race/ethnicity- N (%)White/Caucasian132 (97)44 (94)88 (98.9)Black/African American2 (1)1 (2)1 (1.1)Asian American2 (1)2 (4)0Parent/guardian education levelHigh School or GED20 (16)2 (4)18 (21)χ2= 8.03p=0.045*Some college but no degree26 (19)7 (15)19 (22)Associate degree26 (19)12 (26)14 (16)Bachelor’s degree or higher62 (46)26 (55)36 (41)Income level- US dollars per yearLess than 19,0003 (2)1 (2)2 (2)χ2= 1.34p= 0.7220,000 to 49,00027 (20)7 (15)20 (22)50,000 to 99,00041 (30)13 (28)28 (32)100,000 or more65 (48)26 (55)39 (44)Child’s age in years- (mean +/- SD)11 (4)11.9 (4)Child gender: Female- N (%)95 (70)31 (66)64 (72)JIA duration N (%)< 6 months2 (1)2 (4)0χ2= 1.45p= 0.696-12 months9 (7)3 (6)6 (6)> 12- 24 months20 (15)7 (15)13 (15)> 24 months105 (77)35 (75)70 (79)JIA Subtype N (%)Oligoarticular47 (35)19 (47)28 (32)χ2= 7.93p= 0.13Polyarticular53 (39)12 (26)41 (46)Psoriatic arthritis6 (4)3 (2)3 (3)Systemic14 (10)8 (17)6 (7)Ankylosing spondylitis2 (1)02 (2)Enthesitis related arthritis1 (0.7)1 (2)0Unsure12 (9)3 (6)9 (10)Current disease activity N (%)Active59 (44)15 (29)44 (49)χ2= 9.56p= 0.022*Inactive on medication49 (36)16 (34)33 (37)Inactive off medication for < 1 year18 (13)11 (23)7 (7.8)Clinical remission10 (7)5 (10)5 (5.6)Current medications reported using N (%)None22 (16)10 (21)12 (13)χ2= 2.47p= 0.48NSAID69 (51)23 (49)46 (51)Non-biologic DMARD51 (38)18 (38)33 (37)Biologic DMARD65 (48)18 (38)47 (52)aColumn percentages are displayedbP-values derived from the chi-squared (χ 2 ) or Wilcoxon testscDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugsdMedication categories are not mutually exclusive, therefore, medications do not sum to 100% Flow diagram of study Correlation of demographics and disease characteristics χ2= 8.03 p=0.045* χ2= 1.34 p= 0.72 χ2= 1.45 p= 0.69 χ2= 7.93 p= 0.13 χ2= 9.56 p= 0.022* χ2= 2.47 p= 0.48 aColumn percentages are displayed bP-values derived from the chi-squared (χ 2 ) or Wilcoxon tests cDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugs dMedication 
categories are not mutually exclusive, therefore, medications do not sum to 100% A majority of those using CBD or contemplating using CBD for their child learned about it from TV (66%), a friend/relative (34%) or JIA online blog/support group (35%). Very few obtained information from a scientific journal article (17%) or their child’s rheumatologist (2%). Around half (52%) used 2 or more sources to learn about CBD. A majority of parents/guardians (75%) reported believing that CBD would reduce their child’s joint pain (Fig. 2 A), while only 15% of respondents reported believing that CBD has side effects. More than half of respondents reported thinking that CBD is safe because it is a natural product (Fig. 2 C). Nearly two-thirds (63%, n = 56) of respondents had not discussed using CBD with their child’s rheumatologist and over half (61%) of those did not plan on discussing with their child’s rheumatologist for the following reasons: scared of what provider may think (35%), felt they wouldn’t be taken seriously (29%), and believed rheumatologist would have no knowledge about CBD (18%). Fig. 2Parent/gaurdian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89) on how they percieve CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of safety fo CBD (C). 56 respondents haven’t told their child’s rheumatologist for the following reasons (D) Parent/gaurdian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89) on how they percieve CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of safety fo CBD (C). 56 respondents haven’t told their child’s rheumatologist for the following reasons (D) Overall, 422 JIA patients met inclusion criteria. Of those, 236 parent/guardians agreed to be sent the survey link and 136 participants completed the survey (58% response rate, Fig. 1). 10 respondents (7%) reported using a CBD product to treat their child’s JIA, 79 respondents (58%) reported contemplating use of a CBD product to treat their child’s JIA, and 47 respondents (34.5%) reported no interest in starting a CBD product. Demographic characteristics of the survey respondents are shown in Table 1. The study population was largely white, had a bachelor’s degree or higher, and had an annual income of more than 50,000 dollars per year. A large majority of respondents in both groups reported using one or more CIM therapies in their lifetime. There was no significant difference in the specific types or number of CIM therapies used across groups. Report of high disease activity was more frequent among those currently using or contemplating CBD use than those not contemplating use. Fig. 
1Flow diagram of studyTable 1Correlation of demographics and disease characteristicsParent/guardian and child demographics and disease characteristicsTotal (N=136)Not contemplating starting a CBD product for child (N=47)Contemplating starting a CBD product for child (N=79) and using CBD product for child (N=10)P-valueRespondent parent/guardian age in years (mean +/- SD)29.7 (8)26.5 (7)Respondent Parent/guardian Gender- Female N (%)119 (87)42 (89)77 (86)Race/ethnicity- N (%)White/Caucasian132 (97)44 (94)88 (98.9)Black/African American2 (1)1 (2)1 (1.1)Asian American2 (1)2 (4)0Parent/guardian education levelHigh School or GED20 (16)2 (4)18 (21)χ2= 8.03p=0.045*Some college but no degree26 (19)7 (15)19 (22)Associate degree26 (19)12 (26)14 (16)Bachelor’s degree or higher62 (46)26 (55)36 (41)Income level- US dollars per yearLess than 19,0003 (2)1 (2)2 (2)χ2= 1.34p= 0.7220,000 to 49,00027 (20)7 (15)20 (22)50,000 to 99,00041 (30)13 (28)28 (32)100,000 or more65 (48)26 (55)39 (44)Child’s age in years- (mean +/- SD)11 (4)11.9 (4)Child gender: Female- N (%)95 (70)31 (66)64 (72)JIA duration N (%)< 6 months2 (1)2 (4)0χ2= 1.45p= 0.696-12 months9 (7)3 (6)6 (6)> 12- 24 months20 (15)7 (15)13 (15)> 24 months105 (77)35 (75)70 (79)JIA Subtype N (%)Oligoarticular47 (35)19 (47)28 (32)χ2= 7.93p= 0.13Polyarticular53 (39)12 (26)41 (46)Psoriatic arthritis6 (4)3 (2)3 (3)Systemic14 (10)8 (17)6 (7)Ankylosing spondylitis2 (1)02 (2)Enthesitis related arthritis1 (0.7)1 (2)0Unsure12 (9)3 (6)9 (10)Current disease activity N (%)Active59 (44)15 (29)44 (49)χ2= 9.56p= 0.022*Inactive on medication49 (36)16 (34)33 (37)Inactive off medication for < 1 year18 (13)11 (23)7 (7.8)Clinical remission10 (7)5 (10)5 (5.6)Current medications reported using N (%)None22 (16)10 (21)12 (13)χ2= 2.47p= 0.48NSAID69 (51)23 (49)46 (51)Non-biologic DMARD51 (38)18 (38)33 (37)Biologic DMARD65 (48)18 (38)47 (52)aColumn percentages are displayedbP-values derived from the chi-squared (χ 2 ) or Wilcoxon testscDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugsdMedication categories are not mutually exclusive, therefore, medications do not sum to 100% Flow diagram of study Correlation of demographics and disease characteristics χ2= 8.03 p=0.045* χ2= 1.34 p= 0.72 χ2= 1.45 p= 0.69 χ2= 7.93 p= 0.13 χ2= 9.56 p= 0.022* χ2= 2.47 p= 0.48 aColumn percentages are displayed bP-values derived from the chi-squared (χ 2 ) or Wilcoxon tests cDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugs dMedication categories are not mutually exclusive, therefore, medications do not sum to 100% A majority of those using CBD or contemplating using CBD for their child learned about it from TV (66%), a friend/relative (34%) or JIA online blog/support group (35%). Very few obtained information from a scientific journal article (17%) or their child’s rheumatologist (2%). Around half (52%) used 2 or more sources to learn about CBD. A majority of parents/guardians (75%) reported believing that CBD would reduce their child’s joint pain (Fig. 2 A), while only 15% of respondents reported believing that CBD has side effects. More than half of respondents reported thinking that CBD is safe because it is a natural product (Fig. 2 C). 
Nearly two-thirds (63%, n = 56) of respondents had not discussed using CBD with their child’s rheumatologist and over half (61%) of those did not plan on discussing with their child’s rheumatologist for the following reasons: scared of what provider may think (35%), felt they wouldn’t be taken seriously (29%), and believed rheumatologist would have no knowledge about CBD (18%). Fig. 2Parent/gaurdian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89) on how they percieve CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of safety fo CBD (C). 56 respondents haven’t told their child’s rheumatologist for the following reasons (D) Parent/gaurdian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89) on how they percieve CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of safety fo CBD (C). 56 respondents haven’t told their child’s rheumatologist for the following reasons (D) Contemplating CBD use Respondents contemplating starting a CBD product for their child’s JIA (n=79) were interested in the following CBD products: CBD oil balm (30%), oil drops (25%), gummies (15%), soft gels/capsules (6.5%), and oil roll on (23%). Around a third (32%) of respondents were unsure what products they were interested in. Of those respondents (n=52) who were interested in starting a CBD product, 32.6% were interested in only oral CBD, 36.5% in a combination of oral and topical CBD, and 30.7% were interested only in topical CBD. Respondents contemplating starting a CBD product for their child’s JIA (n=79) were interested in the following CBD products: CBD oil balm (30%), oil drops (25%), gummies (15%), soft gels/capsules (6.5%), and oil roll on (23%). Around a third (32%) of respondents were unsure what products they were interested in. Of those respondents (n=52) who were interested in starting a CBD product, 32.6% were interested in only oral CBD, 36.5% in a combination of oral and topical CBD, and 30.7% were interested only in topical CBD. Current CBD use Respondents using CBD products for their child’s JIA (n=10) reported administering CBD orally (50%) or topically (50%). The majority (60%) reported using CBD on an as needed basis, while 40% reported using CBD on a scheduled basis. Overall, 40% reported administering CBD once per day, 20% twice per day and 40% at least three times per day. Respondents who reported administering CBD as needed (n=6) gave it for joint pain (66%), joint swelling (50%), joint stiffness (66%), and/or when their child requested it (33%). Respondents reported their child’s overall wellbeing to be an average 3.6 prior to starting CBD (0 = very poor, 10 = very good) and 5.3 after taking a CBD product. Half (50%, n=5) of parents reported improvement of their child’s wellbeing after they started CBD while 30% reported no change in their child’s wellbeing and 20% reported decreased well-being. Respondents used the following CBD products: oil drops (40%), lotion (10%), soft gels (10%) and oil balm (40%). Only one respondent knew the total dose of CBD administered per day (20 mg daily) while 70% (n=7) were unsure and 20% (n=2) reported they believed that the dose of CBD was irrelevant. Respondents using CBD products for their child’s JIA (n=10) reported administering CBD orally (50%) or topically (50%). The majority (60%) reported using CBD on an as needed basis, while 40% reported using CBD on a scheduled basis. 
Overall, 40% reported administering CBD once per day, 20% twice per day and 40% at least three times per day. Respondents who reported administering CBD as needed (n=6) gave it for joint pain (66%), joint swelling (50%), joint stiffness (66%), and/or when their child requested it (33%). Respondents reported their child’s overall wellbeing to be an average 3.6 prior to starting CBD (0 = very poor, 10 = very good) and 5.3 after taking a CBD product. Half (50%, n=5) of parents reported improvement of their child’s wellbeing after they started CBD while 30% reported no change in their child’s wellbeing and 20% reported decreased well-being. Respondents used the following CBD products: oil drops (40%), lotion (10%), soft gels (10%) and oil balm (40%). Only one respondent knew the total dose of CBD administered per day (20 mg daily) while 70% (n=7) were unsure and 20% (n=2) reported they believed that the dose of CBD was irrelevant.
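The group comparison reported above (respondents not interested in CBD vs. those contemplating or currently using it, summarised as frequencies and percentages and compared with chi-square tests) could be reproduced from respondent-level data along the lines of the sketch below. The dataframe is a tiny hypothetical stand-in for the survey records, so the numbers it prints will not match the published table.

```python
# Illustrative sketch (hypothetical respondents, not the survey data) of the
# descriptive comparison used above: per-group frequencies/percentages plus a
# chi-square test of group membership vs. reported disease activity.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "group": ["no_interest"] * 4 + ["contemplating_or_using"] * 6,
    "disease_activity": [
        "active", "inactive_on_med", "remission", "active",
        "active", "active", "inactive_on_med", "active", "inactive_off_med", "active",
    ],
})

counts = pd.crosstab(df["disease_activity"], df["group"])          # frequencies
col_pct = pd.crosstab(df["disease_activity"], df["group"],
                      normalize="columns").mul(100).round(1)       # column percentages
print(counts)
print(col_pct)

chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```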
Conclusions
As CBD continues to gain popularity, parental interest in CBD for treating their child’s health condition(s) will likely increase. In this study, we show that while CBD use is currently infrequent for JIA, many parents/guardians are interested in using CBD to help with JIA symptoms. As such, it is important that pediatric rheumatologists and other pediatric providers educate themselves about CBD to increase their comfort in discussing CBD and its potential safety issues with their patients and/or parents. Such efforts should focus on harm reduction, communicating uncertainty without harming the patient-physician relationship, and guiding interested parties to reliable sources on CBD (e.g., the Arthritis Foundation) [35] to ensure that they are obtaining information based on scientific evidence. In addition, rigorous clinical studies are warranted to investigate both the safety and efficacy of CBD in JIA to bridge this knowledge gap.
[ "Background", "Methods", "Participant eligibility", "Survey", "Statistical analysis", "Participation", "Contemplating CBD use", "Current CBD use", "Limitations", "" ]
[ "Juvenile idiopathic arthritis (JIA) is the most common type of chronic arthritis in children, affecting 1 in 1000 children. It is an important cause of short and long-term disability and causes significant financial burden with annual direct medical costs ranging $400-$7,000 [1]. Effective treatments for JIA include non-steroidal anti-inflammatory drugs (NSAIDs), corticosteroids, disease modifying anti-rheumatic drugs (DMARDs), and biologic agents, but each carries potential adverse effects [2]. Indeed, parents and children frequently worry about side effects and the long-term safety of medications prescribed for JIA [3, 4]. As a result, many parents and children (34-92%) use complementary and integrative medicine (CIM) separately or in conjunction with standard treatment of JIA [5–10].\nOne such CIM treatment gaining popularity in the past years is cannabidiol (CBD), which is derived from Cannabis sativa. Since the removal of some CBD products from the Controlled Substances Act in 2018, a vast number of products made from hemp (Cannabis sativa with <0.3% Δ-9-tetrahydrocannabinol [THC]) have become available in brick and mortar retailers in topical, oral or inhaled forms [11]. CBD is non-intoxicating and has been widely advertised as a safe and natural therapy for many ailments including chronic pain, arthritis, other inflammatory diseases, and mental health conditions, resulting in frequent use for these conditions [12, 13]. In non-human animal studies, CBD reduces pain and inflammation due to arthritis, but these findings have not been validated in human studies [14, 15]. With the exception of Epidiolex, which is approved for the treatment of the rare seizure disorders Lennox Gastaut and Dravet Syndrome, CBD is minimally regulated by the Food and Drug Administration (FDA) [16, 17]. With the exception of these rare seizure disorders, evidence of a therapeutic benefit of CBD for pediatric conditions is lacking [18].\nCBD’s safety profile has only been characterized among individuals with Dravet and Lennox Gastaut syndrome, so whether CBD is safe for use in healthy children or other pediatric populations remains unknown. Complicating matters, CBD is a promiscuous molecule that interacts with numerous systems in the body (e.g., serotonergic 5HT1A, endocannabinoid system as cannabinoid receptor 1 antagonist) [19, 20], and may interact with the metabolism of drugs commonly taken by children with JIA including prednisone and naproxen [21]. Further, testing of safety and potency of CBD products is not governed by a strong regulatory apparatus, [22, 23] and a recent JAMA study revealed that only 31% of CBD products sold online are accurately labeled for potency with 21% of samples containing THC [24]. As such, there are safety concerns about use in children, especially those with JIA.\nAs pediatric rheumatologists, the authors (C.F., M. R.) have been frequently asked about using CBD products to treat JIA symptoms, but to date, there is no literature available regarding the use of CBD in children with JIA. The objective of this study was to determine the frequency of CBD use among children with JIA and investigate caregiver knowledge and opinions about CBD use for their children.", "All study procedures and protocols were approved by the Institutional Review Board (IRB) at the University of Michigan (HUM00169198). 
We first conducted an administrative data query at the University of Michigan to identify all children ages 0-17 years of age at the time of a visit associated with the ICD-10 code for JIA between 1/1/2017 and 12/31/2019. That administrative data query identified 900 patients with ICD-10 codes for JIA.\nParticipant eligibility The charts of those 900 patients were then reviewed by C.F. Parents or guardians of patients were invited to participate in the study if the patient was younger than 18 years of age at the time of survey, had a diagnosis of JIA, had more than 1 visit to Pediatric Rheumatology clinic, and had been evaluated by a Pediatric Rheumatologist within the last 18 months. N=422 eligible participants were contacted by phone and invited to take an anonymous online survey created by the authors using a unique link through Qualtrics between December of 2019 and February of 2020. Only respondents interested in the survey were sent the unique link. Respondents were not compensated for completing the survey.\nSurvey The survey consisted of 83 items, some of which were variably displayed depending on participant’s responses. Questions addressed parent/guardian demographics (age, gender, ethnicity, education level, annual household income), use of complementary and integrative medicine (CIM) over lifetime (no use, 1 CIM, 2-4 CIMs, > 4 CIM), history of parent/guardian CBD product and cannabis use, child demographics and disease characteristics (age, gender, subtype of JIA, disease duration, parent/guardian report of disease activity at last rheumatology appointment, current rheumatologic medications used, co-morbid health conditions), and total number of CIM therapies used over child’s lifetime.\nRespondents using CBD or contemplating CBD use for treatment of their child’s arthritis answered questions about sources of CBD information, perceptions of how CBD might improve their child’s arthritis, perceptions of the safety of CBD, and whether they had discussed CBD with their child’s provider. If respondents had not discussed CBD with their child’s healthcare provider, they were asked for the reasons why.\nIf parent/guardian reported using CBD product for child’s arthritis, they were asked questions about their CBD product(s), route of CBD administration, frequency of CBD use, parental perception of child’s disease activity pre and post-CBD use (using a 0-10 visual analogue scale), and total daily dosage of CBD (if known).\nStatistical analysis We performed descriptive analyses, and present results as frequency, n (%) and mean +/- standard deviation for categorical and continuous variables, respectively. We used Fisher’s chi-square test to assess differences in categorical variables. Participants were divided into 2 comparison groups for analyses: currently using or contemplating starting a CBD product for their child and no interest in starting a CBD product. Participants using CBD for treatment of their child’s arthritis were not used for standalone comparison due to small sample size (n=10). All statistical analysis was performed using Microsoft Excel (2016, Microsoft Corporation).", "The charts of those 900 patients were then reviewed by C.F. Parents or guardians of patients were invited to participate in the study if the patient was younger than 18 years of age at the time of survey, had a diagnosis of JIA, had more than 1 visit to Pediatric Rheumatology clinic, and had been evaluated by a Pediatric Rheumatologist within the last 18 months. N=422 eligible participants were contacted by phone and invited to take an anonymous online survey created by the authors using a unique link through Qualtrics between December of 2019 and February of 2020. Only respondents interested in the survey were sent the unique link. Respondents were not compensated for completing the survey.", "The survey consisted of 83 items, some of which were variably displayed depending on participant’s responses. 
Questions addressed parent/guardian demographics (age, gender, ethnicity, education level, annual household income), use of complementary and integrative medicine (CIM) over lifetime (no use, 1 CIM, 2-4 CIMs, > 4 CIM), history of parent/guardian CBD product and cannabis use, child demographics and disease characteristics (age, gender, subtype of JIA, disease duration, parent/guardian report of disease activity at last rheumatology appointment, current rheumatologic medications used, co-morbid health conditions), and total number of CIM therapies used over child’s lifetime.\nRespondents using CBD or contemplating CBD use for treatment of their child’s arthritis answered questions about sources of CBD information, perceptions of how CBD might improve their child’s arthritis, perceptions of the safety of CBD, and whether they had discussed CBD with their child’s provider. If respondents had not discussed CBD with their child’s healthcare provider, they were asked for the reasons why.\nIf parent/guardian reported using CBD product for child’s arthritis, they were asked questions about their CBD product(s), route of CBD administration, frequency of CBD use, parental perception of child’s disease activity pre and post-CBD use (using a 0-10 visual analogue scale), and total daily dosage of CBD (if known).", "We performed descriptive analyses, and present results as frequency, n (%) and mean +/- standard deviation for categorical and continuous variables, respectively. We used Fisher’s chi-square test to assess differences in categorical variables. Participants were divided into 2 comparison groups for analyses: currently using or contemplating starting a CBD product for their child and no interest in starting a CBD product. Participants using CBD for treatment of their child’s arthritis were not used for standalone comparison due to small sample size (n=10). All statistical analysis was performed using Microsoft Excel (2016, Microsoft Corporation).", "Overall, 422 JIA patients met inclusion criteria. Of those, 236 parent/guardians agreed to be sent the survey link and 136 participants completed the survey (58% response rate, Fig. 1). 10 respondents (7%) reported using a CBD product to treat their child’s JIA, 79 respondents (58%) reported contemplating use of a CBD product to treat their child’s JIA, and 47 respondents (34.5%) reported no interest in starting a CBD product. Demographic characteristics of the survey respondents are shown in Table 1. The study population was largely white, had a bachelor’s degree or higher, and had an annual income of more than 50,000 dollars per year. A large majority of respondents in both groups reported using one or more CIM therapies in their lifetime. There was no significant difference in the specific types or number of CIM therapies used across groups. Report of high disease activity was more frequent among those currently using or contemplating CBD use than those not contemplating use.\nFig. 
1Flow diagram of studyTable 1Correlation of demographics and disease characteristicsParent/guardian and child demographics and disease characteristicsTotal (N=136)Not contemplating starting a CBD product for child (N=47)Contemplating starting a CBD product for child (N=79) and using CBD product for child (N=10)P-valueRespondent parent/guardian age in years (mean +/- SD)29.7 (8)26.5 (7)Respondent Parent/guardian Gender- Female N (%)119 (87)42 (89)77 (86)Race/ethnicity- N (%)White/Caucasian132 (97)44 (94)88 (98.9)Black/African American2 (1)1 (2)1 (1.1)Asian American2 (1)2 (4)0Parent/guardian education levelHigh School or GED20 (16)2 (4)18 (21)χ2= 8.03p=0.045*Some college but no degree26 (19)7 (15)19 (22)Associate degree26 (19)12 (26)14 (16)Bachelor’s degree or higher62 (46)26 (55)36 (41)Income level- US dollars per yearLess than 19,0003 (2)1 (2)2 (2)χ2= 1.34p= 0.7220,000 to 49,00027 (20)7 (15)20 (22)50,000 to 99,00041 (30)13 (28)28 (32)100,000 or more65 (48)26 (55)39 (44)Child’s age in years- (mean +/- SD)11 (4)11.9 (4)Child gender: Female- N (%)95 (70)31 (66)64 (72)JIA duration N (%)< 6 months2 (1)2 (4)0χ2= 1.45p= 0.696-12 months9 (7)3 (6)6 (6)> 12- 24 months20 (15)7 (15)13 (15)> 24 months105 (77)35 (75)70 (79)JIA Subtype N (%)Oligoarticular47 (35)19 (47)28 (32)χ2= 7.93p= 0.13Polyarticular53 (39)12 (26)41 (46)Psoriatic arthritis6 (4)3 (2)3 (3)Systemic14 (10)8 (17)6 (7)Ankylosing spondylitis2 (1)02 (2)Enthesitis related arthritis1 (0.7)1 (2)0Unsure12 (9)3 (6)9 (10)Current disease activity N (%)Active59 (44)15 (29)44 (49)χ2= 9.56p= 0.022*Inactive on medication49 (36)16 (34)33 (37)Inactive off medication for < 1 year18 (13)11 (23)7 (7.8)Clinical remission10 (7)5 (10)5 (5.6)Current medications reported using N (%)None22 (16)10 (21)12 (13)χ2= 2.47p= 0.48NSAID69 (51)23 (49)46 (51)Non-biologic DMARD51 (38)18 (38)33 (37)Biologic DMARD65 (48)18 (38)47 (52)aColumn percentages are displayedbP-values derived from the chi-squared (χ 2 ) or Wilcoxon testscDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugsdMedication categories are not mutually exclusive, therefore, medications do not sum to 100%\nFlow diagram of study\nCorrelation of demographics and disease characteristics\nχ2= 8.03\np=0.045*\nχ2= 1.34\np= 0.72\nχ2= 1.45\np= 0.69\nχ2= 7.93\np= 0.13\nχ2= 9.56\np= 0.022*\nχ2= 2.47\np= 0.48\naColumn percentages are displayed\nbP-values derived from the chi-squared (χ 2 ) or Wilcoxon tests\ncDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugs\ndMedication categories are not mutually exclusive, therefore, medications do not sum to 100%\nA majority of those using CBD or contemplating using CBD for their child learned about it from TV (66%), a friend/relative (34%) or JIA online blog/support group (35%). Very few obtained information from a scientific journal article (17%) or their child’s rheumatologist (2%). Around half (52%) used 2 or more sources to learn about CBD. A majority of parents/guardians (75%) reported believing that CBD would reduce their child’s joint pain (Fig. 2 A), while only 15% of respondents reported believing that CBD has side effects. More than half of respondents reported thinking that CBD is safe because it is a natural product (Fig. 2 C). 
Nearly two-thirds (63%, n = 56) of respondents had not discussed using CBD with their child’s rheumatologist, and over half (61%) of those did not plan on discussing it with their child’s rheumatologist for the following reasons: fear of what the provider might think (35%), feeling they would not be taken seriously (29%), and believing the rheumatologist would have no knowledge about CBD (18%).\nFig. 2 Parent/guardian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89): how they perceive CBD will help their child’s arthritis (A), how they learned about CBD (B), and perception of the safety of CBD (C). Reasons the 56 respondents had not discussed CBD with their child’s rheumatologist (D)", "Respondents contemplating starting a CBD product for their child’s JIA (n=79) were interested in the following CBD products: CBD oil balm (30%), oil drops (25%), gummies (15%), soft gels/capsules (6.5%), and oil roll-on (23%). Around a third (32%) of respondents were unsure which products they were interested in. Of those respondents (n=52) who were interested in starting a CBD product, 32.6% were interested in only oral CBD, 36.5% in a combination of oral and topical CBD, and 30.7% in only topical CBD.", "Respondents using CBD products for their child’s JIA (n=10) reported administering CBD orally (50%) or topically (50%). The majority (60%) reported using CBD on an as-needed basis, while 40% reported using CBD on a scheduled basis. Overall, 40% reported administering CBD once per day, 20% twice per day, and 40% at least three times per day. Respondents who reported administering CBD as needed (n=6) gave it for joint pain (66%), joint swelling (50%), joint stiffness (66%), and/or when their child requested it (33%). Respondents rated their child’s overall wellbeing at an average of 3.6 before starting CBD (0 = very poor, 10 = very good) and 5.3 after taking a CBD product. Half (50%, n=5) of parents reported improvement in their child’s wellbeing after starting CBD, while 30% reported no change and 20% reported decreased wellbeing. Respondents used the following CBD products: oil drops (40%), lotion (10%), soft gels (10%), and oil balm (40%). Only one respondent knew the total dose of CBD administered per day (20 mg daily), while 70% (n=7) were unsure and 20% (n=2) believed that the dose of CBD was irrelevant.", "Respondents of both comparison groups were similar in terms of race/ethnicity, education, age, and gender; however, >95% of respondents were white/Caucasian, which is not representative of the JIA patient population at our institution or in the US. Survey links were only generated for parents or guardians who expressed interest in participating in the study, so selection bias was likely present. In addition, respondents may have interpreted survey questions differently than we intended, and the wording of questions may have introduced bias. Finally, we only queried the parents/guardians of individuals with JIA rather than directly asking individuals with JIA about their experiences with or interest in CBD.", "\nAdditional file 1" ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Participant eligibility", "Survey", "Statistical analysis", "Results", "Participation", "Contemplating CBD use", "Current CBD use", "Discussion", "Limitations", "Conclusions", "Supplementary information", "" ]
[ "Juvenile idiopathic arthritis (JIA) is the most common type of chronic arthritis in children, affecting 1 in 1000 children. It is an important cause of short and long-term disability and causes significant financial burden with annual direct medical costs ranging $400-$7,000 [1]. Effective treatments for JIA include non-steroidal anti-inflammatory drugs (NSAIDs), corticosteroids, disease modifying anti-rheumatic drugs (DMARDs), and biologic agents, but each carries potential adverse effects [2]. Indeed, parents and children frequently worry about side effects and the long-term safety of medications prescribed for JIA [3, 4]. As a result, many parents and children (34-92%) use complementary and integrative medicine (CIM) separately or in conjunction with standard treatment of JIA [5–10].\nOne such CIM treatment gaining popularity in the past years is cannabidiol (CBD), which is derived from Cannabis sativa. Since the removal of some CBD products from the Controlled Substances Act in 2018, a vast number of products made from hemp (Cannabis sativa with <0.3% Δ-9-tetrahydrocannabinol [THC]) have become available in brick and mortar retailers in topical, oral or inhaled forms [11]. CBD is non-intoxicating and has been widely advertised as a safe and natural therapy for many ailments including chronic pain, arthritis, other inflammatory diseases, and mental health conditions, resulting in frequent use for these conditions [12, 13]. In non-human animal studies, CBD reduces pain and inflammation due to arthritis, but these findings have not been validated in human studies [14, 15]. With the exception of Epidiolex, which is approved for the treatment of the rare seizure disorders Lennox Gastaut and Dravet Syndrome, CBD is minimally regulated by the Food and Drug Administration (FDA) [16, 17]. With the exception of these rare seizure disorders, evidence of a therapeutic benefit of CBD for pediatric conditions is lacking [18].\nCBD’s safety profile has only been characterized among individuals with Dravet and Lennox Gastaut syndrome, so whether CBD is safe for use in healthy children or other pediatric populations remains unknown. Complicating matters, CBD is a promiscuous molecule that interacts with numerous systems in the body (e.g., serotonergic 5HT1A, endocannabinoid system as cannabinoid receptor 1 antagonist) [19, 20], and may interact with the metabolism of drugs commonly taken by children with JIA including prednisone and naproxen [21]. Further, testing of safety and potency of CBD products is not governed by a strong regulatory apparatus, [22, 23] and a recent JAMA study revealed that only 31% of CBD products sold online are accurately labeled for potency with 21% of samples containing THC [24]. As such, there are safety concerns about use in children, especially those with JIA.\nAs pediatric rheumatologists, the authors (C.F., M. R.) have been frequently asked about using CBD products to treat JIA symptoms, but to date, there is no literature available regarding the use of CBD in children with JIA. The objective of this study was to determine the frequency of CBD use among children with JIA and investigate caregiver knowledge and opinions about CBD use for their children.", "All study procedures and protocols were approved by the Institutional Review Board (IRB) at the University of Michigan (HUM00169198). 
We first conducted an administrative data query at the University of Michigan to identify all children ages 0-17 years of age at the time of a visit associated with the ICD-10 code for JIA between 1/1/2017 and 12/31/2019. That administrative data query identified 900 patients with ICD-10 codes for JIA.\nParticipant eligibility The charts of those 900 patients were then reviewed by C.F. Parents or guardians of patients were invited to participate in the study if the patient was younger than 18 years of age at the time of survey, had a diagnosis of JIA, had more than 1 visit to Pediatric Rheumatology clinic, and had been evaluated by a Pediatric Rheumatologist within the last 18 months. N=422 eligible participants were contacted by phone and invited to take an anonymous online survey created by the authors using a unique link through Qualtrics between December of 2019 and February of 2020. Only respondents interested in the survey were sent the unique link. Respondents were not compensated for completing the survey.\nThe charts of those 900 patients were then reviewed by C.F. Parents or guardians of patients were invited to participate in the study if the patient was younger than 18 years of age at the time of survey, had a diagnosis of JIA, had more than 1 visit to Pediatric Rheumatology clinic, and had been evaluated by a Pediatric Rheumatologist within the last 18 months. N=422 eligible participants were contacted by phone and invited to take an anonymous online survey created by the authors using a unique link through Qualtrics between December of 2019 and February of 2020. Only respondents interested in the survey were sent the unique link. Respondents were not compensated for completing the survey.\nSurvey The survey consisted of 83 items, some of which were variably displayed depending on participant’s responses. Questions addressed parent/guardian demographics (age, gender, ethnicity, education level, annual household income), use of complementary and integrative medicine (CIM) over lifetime (no use, 1 CIM, 2-4 CIMs, > 4 CIM), history of parent/guardian CBD product and cannabis use, child demographics and disease characteristics (age, gender, subtype of JIA, disease duration, parent/guardian report of disease activity at last rheumatology appointment, current rheumatologic medications used, co-morbid health conditions), and total number of CIM therapies used over child’s lifetime.\nRespondents using CBD or contemplating CBD use for treatment of their child’s arthritis answered questions about sources of CBD information, perceptions of how CBD might improve their child’s arthritis, perceptions of the safety of CBD, and whether they had discussed CBD with their child’s provider. If respondents had not discussed CBD with their child’s healthcare provider, they were asked for the reasons why.\nIf parent/guardian reported using CBD product for child’s arthritis, they were asked questions about their CBD product(s), route of CBD administration, frequency of CBD use, parental perception of child’s disease activity pre and post-CBD use (using a 0-10 visual analogue scale), and total daily dosage of CBD (if known).\nThe survey consisted of 83 items, some of which were variably displayed depending on participant’s responses. 
Questions addressed parent/guardian demographics (age, gender, ethnicity, education level, annual household income), use of complementary and integrative medicine (CIM) over lifetime (no use, 1 CIM, 2-4 CIMs, > 4 CIM), history of parent/guardian CBD product and cannabis use, child demographics and disease characteristics (age, gender, subtype of JIA, disease duration, parent/guardian report of disease activity at last rheumatology appointment, current rheumatologic medications used, co-morbid health conditions), and total number of CIM therapies used over child’s lifetime.\nRespondents using CBD or contemplating CBD use for treatment of their child’s arthritis answered questions about sources of CBD information, perceptions of how CBD might improve their child’s arthritis, perceptions of the safety of CBD, and whether they had discussed CBD with their child’s provider. If respondents had not discussed CBD with their child’s healthcare provider, they were asked for the reasons why.\nIf parent/guardian reported using CBD product for child’s arthritis, they were asked questions about their CBD product(s), route of CBD administration, frequency of CBD use, parental perception of child’s disease activity pre and post-CBD use (using a 0-10 visual analogue scale), and total daily dosage of CBD (if known).\nStatistical analysis We performed descriptive analyses, and present results as frequency, n (%) and mean +/- standard deviation for categorical and continuous variables, respectively. We used Fisher’s chi-square test to assess differences in categorical variables. Participants were divided into 2 comparison groups for analyses: currently using or contemplating starting a CBD product for their child and no interest in starting a CBD product. Participants using CBD for treatment of their child’s arthritis were not used for standalone comparison due to small sample size (n=10). All statistical analysis was performed using Microsoft Excel (2016, Microsoft Corporation).\nWe performed descriptive analyses, and present results as frequency, n (%) and mean +/- standard deviation for categorical and continuous variables, respectively. We used Fisher’s chi-square test to assess differences in categorical variables. Participants were divided into 2 comparison groups for analyses: currently using or contemplating starting a CBD product for their child and no interest in starting a CBD product. Participants using CBD for treatment of their child’s arthritis were not used for standalone comparison due to small sample size (n=10). All statistical analysis was performed using Microsoft Excel (2016, Microsoft Corporation).", "The charts of those 900 patients were then reviewed by C.F. Parents or guardians of patients were invited to participate in the study if the patient was younger than 18 years of age at the time of survey, had a diagnosis of JIA, had more than 1 visit to Pediatric Rheumatology clinic, and had been evaluated by a Pediatric Rheumatologist within the last 18 months. N=422 eligible participants were contacted by phone and invited to take an anonymous online survey created by the authors using a unique link through Qualtrics between December of 2019 and February of 2020. Only respondents interested in the survey were sent the unique link. Respondents were not compensated for completing the survey.", "The survey consisted of 83 items, some of which were variably displayed depending on participant’s responses. 
Questions addressed parent/guardian demographics (age, gender, ethnicity, education level, annual household income), use of complementary and integrative medicine (CIM) over lifetime (no use, 1 CIM, 2-4 CIMs, > 4 CIM), history of parent/guardian CBD product and cannabis use, child demographics and disease characteristics (age, gender, subtype of JIA, disease duration, parent/guardian report of disease activity at last rheumatology appointment, current rheumatologic medications used, co-morbid health conditions), and total number of CIM therapies used over child’s lifetime.\nRespondents using CBD or contemplating CBD use for treatment of their child’s arthritis answered questions about sources of CBD information, perceptions of how CBD might improve their child’s arthritis, perceptions of the safety of CBD, and whether they had discussed CBD with their child’s provider. If respondents had not discussed CBD with their child’s healthcare provider, they were asked for the reasons why.\nIf parent/guardian reported using CBD product for child’s arthritis, they were asked questions about their CBD product(s), route of CBD administration, frequency of CBD use, parental perception of child’s disease activity pre and post-CBD use (using a 0-10 visual analogue scale), and total daily dosage of CBD (if known).", "We performed descriptive analyses, and present results as frequency, n (%) and mean +/- standard deviation for categorical and continuous variables, respectively. We used Fisher’s chi-square test to assess differences in categorical variables. Participants were divided into 2 comparison groups for analyses: currently using or contemplating starting a CBD product for their child and no interest in starting a CBD product. Participants using CBD for treatment of their child’s arthritis were not used for standalone comparison due to small sample size (n=10). All statistical analysis was performed using Microsoft Excel (2016, Microsoft Corporation).", "Participation Overall, 422 JIA patients met inclusion criteria. Of those, 236 parent/guardians agreed to be sent the survey link and 136 participants completed the survey (58% response rate, Fig. 1). 10 respondents (7%) reported using a CBD product to treat their child’s JIA, 79 respondents (58%) reported contemplating use of a CBD product to treat their child’s JIA, and 47 respondents (34.5%) reported no interest in starting a CBD product. Demographic characteristics of the survey respondents are shown in Table 1. The study population was largely white, had a bachelor’s degree or higher, and had an annual income of more than 50,000 dollars per year. A large majority of respondents in both groups reported using one or more CIM therapies in their lifetime. There was no significant difference in the specific types or number of CIM therapies used across groups. Report of high disease activity was more frequent among those currently using or contemplating CBD use than those not contemplating use.\nFig. 
1Flow diagram of studyTable 1Correlation of demographics and disease characteristicsParent/guardian and child demographics and disease characteristicsTotal (N=136)Not contemplating starting a CBD product for child (N=47)Contemplating starting a CBD product for child (N=79) and using CBD product for child (N=10)P-valueRespondent parent/guardian age in years (mean +/- SD)29.7 (8)26.5 (7)Respondent Parent/guardian Gender- Female N (%)119 (87)42 (89)77 (86)Race/ethnicity- N (%)White/Caucasian132 (97)44 (94)88 (98.9)Black/African American2 (1)1 (2)1 (1.1)Asian American2 (1)2 (4)0Parent/guardian education levelHigh School or GED20 (16)2 (4)18 (21)χ2= 8.03p=0.045*Some college but no degree26 (19)7 (15)19 (22)Associate degree26 (19)12 (26)14 (16)Bachelor’s degree or higher62 (46)26 (55)36 (41)Income level- US dollars per yearLess than 19,0003 (2)1 (2)2 (2)χ2= 1.34p= 0.7220,000 to 49,00027 (20)7 (15)20 (22)50,000 to 99,00041 (30)13 (28)28 (32)100,000 or more65 (48)26 (55)39 (44)Child’s age in years- (mean +/- SD)11 (4)11.9 (4)Child gender: Female- N (%)95 (70)31 (66)64 (72)JIA duration N (%)< 6 months2 (1)2 (4)0χ2= 1.45p= 0.696-12 months9 (7)3 (6)6 (6)> 12- 24 months20 (15)7 (15)13 (15)> 24 months105 (77)35 (75)70 (79)JIA Subtype N (%)Oligoarticular47 (35)19 (47)28 (32)χ2= 7.93p= 0.13Polyarticular53 (39)12 (26)41 (46)Psoriatic arthritis6 (4)3 (2)3 (3)Systemic14 (10)8 (17)6 (7)Ankylosing spondylitis2 (1)02 (2)Enthesitis related arthritis1 (0.7)1 (2)0Unsure12 (9)3 (6)9 (10)Current disease activity N (%)Active59 (44)15 (29)44 (49)χ2= 9.56p= 0.022*Inactive on medication49 (36)16 (34)33 (37)Inactive off medication for < 1 year18 (13)11 (23)7 (7.8)Clinical remission10 (7)5 (10)5 (5.6)Current medications reported using N (%)None22 (16)10 (21)12 (13)χ2= 2.47p= 0.48NSAID69 (51)23 (49)46 (51)Non-biologic DMARD51 (38)18 (38)33 (37)Biologic DMARD65 (48)18 (38)47 (52)aColumn percentages are displayedbP-values derived from the chi-squared (χ 2 ) or Wilcoxon testscDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugsdMedication categories are not mutually exclusive, therefore, medications do not sum to 100%\nFlow diagram of study\nCorrelation of demographics and disease characteristics\nχ2= 8.03\np=0.045*\nχ2= 1.34\np= 0.72\nχ2= 1.45\np= 0.69\nχ2= 7.93\np= 0.13\nχ2= 9.56\np= 0.022*\nχ2= 2.47\np= 0.48\naColumn percentages are displayed\nbP-values derived from the chi-squared (χ 2 ) or Wilcoxon tests\ncDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugs\ndMedication categories are not mutually exclusive, therefore, medications do not sum to 100%\nA majority of those using CBD or contemplating using CBD for their child learned about it from TV (66%), a friend/relative (34%) or JIA online blog/support group (35%). Very few obtained information from a scientific journal article (17%) or their child’s rheumatologist (2%). Around half (52%) used 2 or more sources to learn about CBD. A majority of parents/guardians (75%) reported believing that CBD would reduce their child’s joint pain (Fig. 2 A), while only 15% of respondents reported believing that CBD has side effects. More than half of respondents reported thinking that CBD is safe because it is a natural product (Fig. 2 C). 
Nearly two-thirds (63%, n = 56) of respondents had not discussed using CBD with their child’s rheumatologist and over half (61%) of those did not plan on discussing with their child’s rheumatologist for the following reasons: scared of what provider may think (35%), felt they wouldn’t be taken seriously (29%), and believed rheumatologist would have no knowledge about CBD (18%).\nFig. 2Parent/gaurdian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89) on how they percieve CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of safety fo CBD (C). 56 respondents haven’t told their child’s rheumatologist for the following reasons (D)\nParent/gaurdian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89) on how they percieve CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of safety fo CBD (C). 56 respondents haven’t told their child’s rheumatologist for the following reasons (D)\nOverall, 422 JIA patients met inclusion criteria. Of those, 236 parent/guardians agreed to be sent the survey link and 136 participants completed the survey (58% response rate, Fig. 1). 10 respondents (7%) reported using a CBD product to treat their child’s JIA, 79 respondents (58%) reported contemplating use of a CBD product to treat their child’s JIA, and 47 respondents (34.5%) reported no interest in starting a CBD product. Demographic characteristics of the survey respondents are shown in Table 1. The study population was largely white, had a bachelor’s degree or higher, and had an annual income of more than 50,000 dollars per year. A large majority of respondents in both groups reported using one or more CIM therapies in their lifetime. There was no significant difference in the specific types or number of CIM therapies used across groups. Report of high disease activity was more frequent among those currently using or contemplating CBD use than those not contemplating use.\nFig. 
1Flow diagram of studyTable 1Correlation of demographics and disease characteristicsParent/guardian and child demographics and disease characteristicsTotal (N=136)Not contemplating starting a CBD product for child (N=47)Contemplating starting a CBD product for child (N=79) and using CBD product for child (N=10)P-valueRespondent parent/guardian age in years (mean +/- SD)29.7 (8)26.5 (7)Respondent Parent/guardian Gender- Female N (%)119 (87)42 (89)77 (86)Race/ethnicity- N (%)White/Caucasian132 (97)44 (94)88 (98.9)Black/African American2 (1)1 (2)1 (1.1)Asian American2 (1)2 (4)0Parent/guardian education levelHigh School or GED20 (16)2 (4)18 (21)χ2= 8.03p=0.045*Some college but no degree26 (19)7 (15)19 (22)Associate degree26 (19)12 (26)14 (16)Bachelor’s degree or higher62 (46)26 (55)36 (41)Income level- US dollars per yearLess than 19,0003 (2)1 (2)2 (2)χ2= 1.34p= 0.7220,000 to 49,00027 (20)7 (15)20 (22)50,000 to 99,00041 (30)13 (28)28 (32)100,000 or more65 (48)26 (55)39 (44)Child’s age in years- (mean +/- SD)11 (4)11.9 (4)Child gender: Female- N (%)95 (70)31 (66)64 (72)JIA duration N (%)< 6 months2 (1)2 (4)0χ2= 1.45p= 0.696-12 months9 (7)3 (6)6 (6)> 12- 24 months20 (15)7 (15)13 (15)> 24 months105 (77)35 (75)70 (79)JIA Subtype N (%)Oligoarticular47 (35)19 (47)28 (32)χ2= 7.93p= 0.13Polyarticular53 (39)12 (26)41 (46)Psoriatic arthritis6 (4)3 (2)3 (3)Systemic14 (10)8 (17)6 (7)Ankylosing spondylitis2 (1)02 (2)Enthesitis related arthritis1 (0.7)1 (2)0Unsure12 (9)3 (6)9 (10)Current disease activity N (%)Active59 (44)15 (29)44 (49)χ2= 9.56p= 0.022*Inactive on medication49 (36)16 (34)33 (37)Inactive off medication for < 1 year18 (13)11 (23)7 (7.8)Clinical remission10 (7)5 (10)5 (5.6)Current medications reported using N (%)None22 (16)10 (21)12 (13)χ2= 2.47p= 0.48NSAID69 (51)23 (49)46 (51)Non-biologic DMARD51 (38)18 (38)33 (37)Biologic DMARD65 (48)18 (38)47 (52)aColumn percentages are displayedbP-values derived from the chi-squared (χ 2 ) or Wilcoxon testscDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugsdMedication categories are not mutually exclusive, therefore, medications do not sum to 100%\nFlow diagram of study\nCorrelation of demographics and disease characteristics\nχ2= 8.03\np=0.045*\nχ2= 1.34\np= 0.72\nχ2= 1.45\np= 0.69\nχ2= 7.93\np= 0.13\nχ2= 9.56\np= 0.022*\nχ2= 2.47\np= 0.48\naColumn percentages are displayed\nbP-values derived from the chi-squared (χ 2 ) or Wilcoxon tests\ncDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugs\ndMedication categories are not mutually exclusive, therefore, medications do not sum to 100%\nA majority of those using CBD or contemplating using CBD for their child learned about it from TV (66%), a friend/relative (34%) or JIA online blog/support group (35%). Very few obtained information from a scientific journal article (17%) or their child’s rheumatologist (2%). Around half (52%) used 2 or more sources to learn about CBD. A majority of parents/guardians (75%) reported believing that CBD would reduce their child’s joint pain (Fig. 2 A), while only 15% of respondents reported believing that CBD has side effects. More than half of respondents reported thinking that CBD is safe because it is a natural product (Fig. 2 C). 
Nearly two-thirds (63%, n = 56) of respondents had not discussed using CBD with their child’s rheumatologist and over half (61%) of those did not plan on discussing with their child’s rheumatologist for the following reasons: scared of what provider may think (35%), felt they wouldn’t be taken seriously (29%), and believed rheumatologist would have no knowledge about CBD (18%).\nFig. 2Parent/gaurdian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89) on how they percieve CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of safety fo CBD (C). 56 respondents haven’t told their child’s rheumatologist for the following reasons (D)\nParent/gaurdian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89) on how they percieve CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of safety fo CBD (C). 56 respondents haven’t told their child’s rheumatologist for the following reasons (D)\nContemplating CBD use Respondents contemplating starting a CBD product for their child’s JIA (n=79) were interested in the following CBD products: CBD oil balm (30%), oil drops (25%), gummies (15%), soft gels/capsules (6.5%), and oil roll on (23%). Around a third (32%) of respondents were unsure what products they were interested in. Of those respondents (n=52) who were interested in starting a CBD product, 32.6% were interested in only oral CBD, 36.5% in a combination of oral and topical CBD, and 30.7% were interested only in topical CBD.\nRespondents contemplating starting a CBD product for their child’s JIA (n=79) were interested in the following CBD products: CBD oil balm (30%), oil drops (25%), gummies (15%), soft gels/capsules (6.5%), and oil roll on (23%). Around a third (32%) of respondents were unsure what products they were interested in. Of those respondents (n=52) who were interested in starting a CBD product, 32.6% were interested in only oral CBD, 36.5% in a combination of oral and topical CBD, and 30.7% were interested only in topical CBD.\nCurrent CBD use Respondents using CBD products for their child’s JIA (n=10) reported administering CBD orally (50%) or topically (50%). The majority (60%) reported using CBD on an as needed basis, while 40% reported using CBD on a scheduled basis. Overall, 40% reported administering CBD once per day, 20% twice per day and 40% at least three times per day. Respondents who reported administering CBD as needed (n=6) gave it for joint pain (66%), joint swelling (50%), joint stiffness (66%), and/or when their child requested it (33%). Respondents reported their child’s overall wellbeing to be an average 3.6 prior to starting CBD (0 = very poor, 10 = very good) and 5.3 after taking a CBD product. Half (50%, n=5) of parents reported improvement of their child’s wellbeing after they started CBD while 30% reported no change in their child’s wellbeing and 20% reported decreased well-being. Respondents used the following CBD products: oil drops (40%), lotion (10%), soft gels (10%) and oil balm (40%). Only one respondent knew the total dose of CBD administered per day (20 mg daily) while 70% (n=7) were unsure and 20% (n=2) reported they believed that the dose of CBD was irrelevant.\nRespondents using CBD products for their child’s JIA (n=10) reported administering CBD orally (50%) or topically (50%). The majority (60%) reported using CBD on an as needed basis, while 40% reported using CBD on a scheduled basis. 
Overall, 40% reported administering CBD once per day, 20% twice per day and 40% at least three times per day. Respondents who reported administering CBD as needed (n=6) gave it for joint pain (66%), joint swelling (50%), joint stiffness (66%), and/or when their child requested it (33%). Respondents reported their child’s overall wellbeing to be an average 3.6 prior to starting CBD (0 = very poor, 10 = very good) and 5.3 after taking a CBD product. Half (50%, n=5) of parents reported improvement of their child’s wellbeing after they started CBD while 30% reported no change in their child’s wellbeing and 20% reported decreased well-being. Respondents used the following CBD products: oil drops (40%), lotion (10%), soft gels (10%) and oil balm (40%). Only one respondent knew the total dose of CBD administered per day (20 mg daily) while 70% (n=7) were unsure and 20% (n=2) reported they believed that the dose of CBD was irrelevant.", "Overall, 422 JIA patients met inclusion criteria. Of those, 236 parent/guardians agreed to be sent the survey link and 136 participants completed the survey (58% response rate, Fig. 1). 10 respondents (7%) reported using a CBD product to treat their child’s JIA, 79 respondents (58%) reported contemplating use of a CBD product to treat their child’s JIA, and 47 respondents (34.5%) reported no interest in starting a CBD product. Demographic characteristics of the survey respondents are shown in Table 1. The study population was largely white, had a bachelor’s degree or higher, and had an annual income of more than 50,000 dollars per year. A large majority of respondents in both groups reported using one or more CIM therapies in their lifetime. There was no significant difference in the specific types or number of CIM therapies used across groups. Report of high disease activity was more frequent among those currently using or contemplating CBD use than those not contemplating use.\nFig. 
1Flow diagram of studyTable 1Correlation of demographics and disease characteristicsParent/guardian and child demographics and disease characteristicsTotal (N=136)Not contemplating starting a CBD product for child (N=47)Contemplating starting a CBD product for child (N=79) and using CBD product for child (N=10)P-valueRespondent parent/guardian age in years (mean +/- SD)29.7 (8)26.5 (7)Respondent Parent/guardian Gender- Female N (%)119 (87)42 (89)77 (86)Race/ethnicity- N (%)White/Caucasian132 (97)44 (94)88 (98.9)Black/African American2 (1)1 (2)1 (1.1)Asian American2 (1)2 (4)0Parent/guardian education levelHigh School or GED20 (16)2 (4)18 (21)χ2= 8.03p=0.045*Some college but no degree26 (19)7 (15)19 (22)Associate degree26 (19)12 (26)14 (16)Bachelor’s degree or higher62 (46)26 (55)36 (41)Income level- US dollars per yearLess than 19,0003 (2)1 (2)2 (2)χ2= 1.34p= 0.7220,000 to 49,00027 (20)7 (15)20 (22)50,000 to 99,00041 (30)13 (28)28 (32)100,000 or more65 (48)26 (55)39 (44)Child’s age in years- (mean +/- SD)11 (4)11.9 (4)Child gender: Female- N (%)95 (70)31 (66)64 (72)JIA duration N (%)< 6 months2 (1)2 (4)0χ2= 1.45p= 0.696-12 months9 (7)3 (6)6 (6)> 12- 24 months20 (15)7 (15)13 (15)> 24 months105 (77)35 (75)70 (79)JIA Subtype N (%)Oligoarticular47 (35)19 (47)28 (32)χ2= 7.93p= 0.13Polyarticular53 (39)12 (26)41 (46)Psoriatic arthritis6 (4)3 (2)3 (3)Systemic14 (10)8 (17)6 (7)Ankylosing spondylitis2 (1)02 (2)Enthesitis related arthritis1 (0.7)1 (2)0Unsure12 (9)3 (6)9 (10)Current disease activity N (%)Active59 (44)15 (29)44 (49)χ2= 9.56p= 0.022*Inactive on medication49 (36)16 (34)33 (37)Inactive off medication for < 1 year18 (13)11 (23)7 (7.8)Clinical remission10 (7)5 (10)5 (5.6)Current medications reported using N (%)None22 (16)10 (21)12 (13)χ2= 2.47p= 0.48NSAID69 (51)23 (49)46 (51)Non-biologic DMARD51 (38)18 (38)33 (37)Biologic DMARD65 (48)18 (38)47 (52)aColumn percentages are displayedbP-values derived from the chi-squared (χ 2 ) or Wilcoxon testscDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugsdMedication categories are not mutually exclusive, therefore, medications do not sum to 100%\nFlow diagram of study\nCorrelation of demographics and disease characteristics\nχ2= 8.03\np=0.045*\nχ2= 1.34\np= 0.72\nχ2= 1.45\np= 0.69\nχ2= 7.93\np= 0.13\nχ2= 9.56\np= 0.022*\nχ2= 2.47\np= 0.48\naColumn percentages are displayed\nbP-values derived from the chi-squared (χ 2 ) or Wilcoxon tests\ncDMARDs disease-modifying antirheumatic drugs, NSAIDs nonsteroidal anti-inflammatory drugs\ndMedication categories are not mutually exclusive, therefore, medications do not sum to 100%\nA majority of those using CBD or contemplating using CBD for their child learned about it from TV (66%), a friend/relative (34%) or JIA online blog/support group (35%). Very few obtained information from a scientific journal article (17%) or their child’s rheumatologist (2%). Around half (52%) used 2 or more sources to learn about CBD. A majority of parents/guardians (75%) reported believing that CBD would reduce their child’s joint pain (Fig. 2 A), while only 15% of respondents reported believing that CBD has side effects. More than half of respondents reported thinking that CBD is safe because it is a natural product (Fig. 2 C). 
Nearly two-thirds (63%, n = 56) of respondents had not discussed using CBD with their child’s rheumatologist and over half (61%) of those did not plan on discussing with their child’s rheumatologist for the following reasons: scared of what provider may think (35%), felt they wouldn’t be taken seriously (29%), and believed rheumatologist would have no knowledge about CBD (18%).\nFig. 2Parent/gaurdian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89) on how they percieve CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of safety fo CBD (C). 56 respondents haven’t told their child’s rheumatologist for the following reasons (D)\nParent/gaurdian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89) on how they percieve CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of safety fo CBD (C). 56 respondents haven’t told their child’s rheumatologist for the following reasons (D)", "Respondents contemplating starting a CBD product for their child’s JIA (n=79) were interested in the following CBD products: CBD oil balm (30%), oil drops (25%), gummies (15%), soft gels/capsules (6.5%), and oil roll on (23%). Around a third (32%) of respondents were unsure what products they were interested in. Of those respondents (n=52) who were interested in starting a CBD product, 32.6% were interested in only oral CBD, 36.5% in a combination of oral and topical CBD, and 30.7% were interested only in topical CBD.", "Respondents using CBD products for their child’s JIA (n=10) reported administering CBD orally (50%) or topically (50%). The majority (60%) reported using CBD on an as needed basis, while 40% reported using CBD on a scheduled basis. Overall, 40% reported administering CBD once per day, 20% twice per day and 40% at least three times per day. Respondents who reported administering CBD as needed (n=6) gave it for joint pain (66%), joint swelling (50%), joint stiffness (66%), and/or when their child requested it (33%). Respondents reported their child’s overall wellbeing to be an average 3.6 prior to starting CBD (0 = very poor, 10 = very good) and 5.3 after taking a CBD product. Half (50%, n=5) of parents reported improvement of their child’s wellbeing after they started CBD while 30% reported no change in their child’s wellbeing and 20% reported decreased well-being. Respondents used the following CBD products: oil drops (40%), lotion (10%), soft gels (10%) and oil balm (40%). Only one respondent knew the total dose of CBD administered per day (20 mg daily) while 70% (n=7) were unsure and 20% (n=2) reported they believed that the dose of CBD was irrelevant.", "To our knowledge, this is the first study exploring parent/guardian knowledge and opinions regarding CBD use for their children with JIA. We found that while CBD use is infrequent, there is a strong parent/guardian interest in using CBD for treating JIA, especially among respondents reporting more active disease and a longer disease course. Use of stronger medications such as biologics, on the other hand, was not associated with a significant difference in CBD interest. 
These findings are consistent with other studies showing that children with JIA use CIM more frequently if they have more active disease and longer disease duration, and that use of immunosuppressive or biologic medications is not a factor related to CIM use among children with JIA [6, 25].\nThe majority of the survey respondents learned about CBD from television, the internet (JIA online blog/support group), or friend or family member with only a small percentage of respondents learned from a health care provider or scientific study, mirroring results from other studies of adults using CBD oil or cannabis [26, 27]. Our study further showed that many parent/guardians are not discussing CBD with their child’s rheumatologist. This is because they expressed worry that their child’s rheumatologist would negatively judge them and or not take them seriously if they discussed their experience with or interest in CBD. This finding is similar to a recent study in which only 9.6% of young adults reported discussing CBD usage with their healthcare provider [27]. Previous studies evaluating CIM use in adolescents with JIA have demonstrated similarly low rates of discussions with their health care provider, [10] and parents of children with other chronic health conditions have reported similar reasons for not discussing CIM with their child’s health provider. These results suggest that providers need CBD and CIM-related education to better serve individuals with JIA, and also that providers need to specifically ask about use of CBD and other CIM modalities.\nAs CBD becomes increasingly more popular, parental interest in using CBD to treat their child’s health conditions continues to grow. The use of the search terms for “CBD for children” and “CBD for kids” have increased since 2018, [22] and numerous blog posts and other forms of media report positive results from giving CBD to children [22]. These forms of media do often mention preclinical CBD research conducted in mice, which demonstrate that CBD has potent anti-inflammatory and analgesic effects in induced inflammatory arthritis [14, 15]. Further, some small clinical trials of CBD in adults do show that CBD may have analgesic activity (in neuropathy and temporomandibular joint disorder [28, 29]) and short-term anxiolytic effects, [30–32] and several clinical trials of CBD in arthritis are ongoing (for example, in rheumatoid arthritis) [33]. However, what is often not communicated is that studies on safety and efficacy of CBD among children with those symptoms (e.g., pain, inflammation) have not yet been conducted. As such, additional rigorous research is needed to investigate whether these preliminary therapeutic findings translate to the JIA context.\nConsistent with prior reports about CBD administration among young adults, [12, 27] a majority of those using CBD for their child’s arthritis are administering CBD orally (60%) on an as needed basis as often as several times per day for joint pain and/or stiffness. In addition, 69% of those contemplating CBD expressed interest in an oral CBD product (alone or in combination with topical CBD). 
This strong interest in oral CBD is important to note, as CBD has been suggested to interact with hepatic cytochrome P450 enzymes and could interfere with the metabolism of several commonly prescribed rheumatologic medications, including prednisone, naproxen, and tofacitinib, potentially leading to increased drug levels and an increased risk of toxicity.\nThe large majority of respondents believed CBD is safe because it is a natural product and did not believe there were adverse effects of CBD. Surprisingly, only 1 of 10 participants currently giving their child CBD knew what dose they were administering. The overall safety of CBD for healthy children or other clinical populations remains unknown, but the Epidiolex trials, [16, 34] which used high doses of CBD, reported non-serious adverse effects in children including dry mouth, sedation, and/or decreased appetite. Other studies have reported similar adverse effects in young adults or adults taking CBD [13, 27].", "Respondents of both panels were similar in terms of race/ethnicity, education, age, and gender; however, > 95% of respondents were white/Caucasian, which is not representative of the JIA patient population at our institution or in the US. Survey links were only generated for parents or guardians who expressed interest in participating in the study, so selection bias was likely present. In addition, respondents may have interpreted survey questions differently than we intended, and the wording of questions may have introduced bias. Finally, we only queried the parents/guardians of individuals with JIA rather than directly asking individuals with JIA about their experiences with or interest in CBD.", "As CBD continues to gain popularity, parental interest in CBD for treating their child’s health condition(s) will likely increase. In this study, we show that while CBD use is currently infrequent for JIA, many parents/guardians are interested in using CBD to help with JIA symptoms. As such, it is important that pediatric rheumatologists and other pediatric providers educate themselves about CBD to increase their comfort in discussing CBD and its potential safety issues with their patients and/or parents. Such efforts should focus on harm reduction, communicating uncertainty without harming the patient-physician relationship, and guiding interested parties to reliable sources on CBD (e.g., the Arthritis Foundation) [35] to ensure that they are obtaining information based on scientific evidence. In addition, rigorous clinical studies are warranted to investigate both the safety and efficacy of CBD in JIA to bridge the gap in knowledge.", " \nAdditional file 1\nAdditional file 1\n\nAdditional file 1\nAdditional file 1", "\nAdditional file 1\nAdditional file 1" ]
[ null, null, null, null, null, "results", null, null, null, "discussion", null, "conclusion", "supplementary-material", null ]
[ "Juvenile idiopathic arthritis", "Cannabidiol", "Pediatric rheumatology", "Complementary and integrative medicine" ]
Background: Juvenile idiopathic arthritis (JIA) is the most common type of chronic arthritis in children, affecting 1 in 1000 children. It is an important cause of short and long-term disability and causes significant financial burden with annual direct medical costs ranging $400-$7,000 [1]. Effective treatments for JIA include non-steroidal anti-inflammatory drugs (NSAIDs), corticosteroids, disease modifying anti-rheumatic drugs (DMARDs), and biologic agents, but each carries potential adverse effects [2]. Indeed, parents and children frequently worry about side effects and the long-term safety of medications prescribed for JIA [3, 4]. As a result, many parents and children (34-92%) use complementary and integrative medicine (CIM) separately or in conjunction with standard treatment of JIA [5–10]. One such CIM treatment gaining popularity in the past years is cannabidiol (CBD), which is derived from Cannabis sativa. Since the removal of some CBD products from the Controlled Substances Act in 2018, a vast number of products made from hemp (Cannabis sativa with <0.3% Δ-9-tetrahydrocannabinol [THC]) have become available in brick and mortar retailers in topical, oral or inhaled forms [11]. CBD is non-intoxicating and has been widely advertised as a safe and natural therapy for many ailments including chronic pain, arthritis, other inflammatory diseases, and mental health conditions, resulting in frequent use for these conditions [12, 13]. In non-human animal studies, CBD reduces pain and inflammation due to arthritis, but these findings have not been validated in human studies [14, 15]. With the exception of Epidiolex, which is approved for the treatment of the rare seizure disorders Lennox Gastaut and Dravet Syndrome, CBD is minimally regulated by the Food and Drug Administration (FDA) [16, 17]. With the exception of these rare seizure disorders, evidence of a therapeutic benefit of CBD for pediatric conditions is lacking [18]. CBD’s safety profile has only been characterized among individuals with Dravet and Lennox Gastaut syndrome, so whether CBD is safe for use in healthy children or other pediatric populations remains unknown. Complicating matters, CBD is a promiscuous molecule that interacts with numerous systems in the body (e.g., serotonergic 5HT1A, endocannabinoid system as cannabinoid receptor 1 antagonist) [19, 20], and may interact with the metabolism of drugs commonly taken by children with JIA including prednisone and naproxen [21]. Further, testing of safety and potency of CBD products is not governed by a strong regulatory apparatus, [22, 23] and a recent JAMA study revealed that only 31% of CBD products sold online are accurately labeled for potency with 21% of samples containing THC [24]. As such, there are safety concerns about use in children, especially those with JIA. As pediatric rheumatologists, the authors (C.F., M. R.) have been frequently asked about using CBD products to treat JIA symptoms, but to date, there is no literature available regarding the use of CBD in children with JIA. The objective of this study was to determine the frequency of CBD use among children with JIA and investigate caregiver knowledge and opinions about CBD use for their children. Methods: All study procedures and protocols were approved by the Institutional Review Board (IRB) at the University of Michigan (HUM00169198). 
We first conducted an administrative data query at the University of Michigan to identify all children ages 0-17 years at the time of a visit associated with the ICD-10 code for JIA between 1/1/2017 and 12/31/2019. That administrative data query identified 900 patients with ICD-10 codes for JIA. Participant eligibility: The charts of those 900 patients were then reviewed by C.F. Parents or guardians of patients were invited to participate in the study if the patient was younger than 18 years of age at the time of survey, had a diagnosis of JIA, had more than 1 visit to Pediatric Rheumatology clinic, and had been evaluated by a Pediatric Rheumatologist within the last 18 months. N=422 eligible participants were contacted by phone and invited to take an anonymous online survey created by the authors using a unique link through Qualtrics between December of 2019 and February of 2020. Only respondents interested in the survey were sent the unique link. Respondents were not compensated for completing the survey. Survey: The survey consisted of 83 items, some of which were variably displayed depending on participant’s responses. Questions addressed parent/guardian demographics (age, gender, ethnicity, education level, annual household income), use of complementary and integrative medicine (CIM) over lifetime (no use, 1 CIM, 2-4 CIMs, > 4 CIM), history of parent/guardian CBD product and cannabis use, child demographics and disease characteristics (age, gender, subtype of JIA, disease duration, parent/guardian report of disease activity at last rheumatology appointment, current rheumatologic medications used, co-morbid health conditions), and total number of CIM therapies used over child’s lifetime. Respondents using CBD or contemplating CBD use for treatment of their child’s arthritis answered questions about sources of CBD information, perceptions of how CBD might improve their child’s arthritis, perceptions of the safety of CBD, and whether they had discussed CBD with their child’s provider. If respondents had not discussed CBD with their child’s healthcare provider, they were asked for the reasons why. If parent/guardian reported using CBD product for child’s arthritis, they were asked questions about their CBD product(s), route of CBD administration, frequency of CBD use, parental perception of child’s disease activity pre and post-CBD use (using a 0-10 visual analogue scale), and total daily dosage of CBD (if known). Statistical analysis: We performed descriptive analyses, and present results as frequency, n (%) and mean +/- standard deviation for categorical and continuous variables, respectively. We used Fisher’s chi-square test to assess differences in categorical variables. Participants were divided into 2 comparison groups for analyses: currently using or contemplating starting a CBD product for their child and no interest in starting a CBD product. Participants using CBD for treatment of their child’s arthritis were not used for standalone comparison due to small sample size (n=10). All statistical analysis was performed using Microsoft Excel (2016, Microsoft Corporation).
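The authors report performing these analyses in Microsoft Excel; purely as an illustration of the same workflow, the sketch below uses Python with pandas and SciPy on a small hypothetical dataset. The variable names, group labels, and counts here are invented for illustration and are not the study data.

```python
# Illustrative sketch of the described analysis (the authors report using
# Microsoft Excel); toy data, pandas, and SciPy are used here instead.
import pandas as pd
from scipy.stats import fisher_exact

# Hypothetical respondents: comparison group and one categorical variable.
df = pd.DataFrame({
    "group": ["no_interest"] * 5 + ["contemplating_or_using"] * 7,
    "high_disease_activity": [True, False, False, True, False,
                              True, True, False, True, True, False, True],
})

# Descriptive output: frequency, n (%), per group.
counts = pd.crosstab(df["group"], df["high_disease_activity"])
print(counts)
print((pd.crosstab(df["group"], df["high_disease_activity"],
                   normalize="index") * 100).round(1))

# Two-group comparison of a categorical variable; Fisher's exact test
# is the small-sample analogue of the chi-square test.
odds_ratio, p_value = fisher_exact(counts.values)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```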
Results: Participation: Overall, 422 JIA patients met inclusion criteria. Of those, 236 parent/guardians agreed to be sent the survey link and 136 participants completed the survey (58% response rate, Fig. 1). 10 respondents (7%) reported using a CBD product to treat their child’s JIA, 79 respondents (58%) reported contemplating use of a CBD product to treat their child’s JIA, and 47 respondents (34.5%) reported no interest in starting a CBD product. Demographic characteristics of the survey respondents are shown in Table 1. The study population was largely white, had a bachelor’s degree or higher, and had an annual income of more than 50,000 dollars per year. A large majority of respondents in both groups reported using one or more CIM therapies in their lifetime. There was no significant difference in the specific types or number of CIM therapies used across groups. Report of high disease activity was more frequent among those currently using or contemplating CBD use than those not contemplating use.
Fig. 1 Flow diagram of study
Table 1 Correlation of demographics and disease characteristics
Columns: Total (N=136) | Not contemplating starting a CBD product for child (N=47) | Contemplating starting a CBD product for child (N=79) or using a CBD product for child (N=10)
Respondent parent/guardian age in years, mean (SD): - | 29.7 (8) | 26.5 (7)
Respondent parent/guardian gender, female, N (%): 119 (87) | 42 (89) | 77 (86)
Race/ethnicity, N (%):
  White/Caucasian: 132 (97) | 44 (94) | 88 (98.9)
  Black/African American: 2 (1) | 1 (2) | 1 (1.1)
  Asian American: 2 (1) | 2 (4) | 0
Parent/guardian education level, N (%) (χ2 = 8.03, p = 0.045*):
  High school or GED: 20 (16) | 2 (4) | 18 (21)
  Some college but no degree: 26 (19) | 7 (15) | 19 (22)
  Associate degree: 26 (19) | 12 (26) | 14 (16)
  Bachelor’s degree or higher: 62 (46) | 26 (55) | 36 (41)
Income level, US dollars per year, N (%) (χ2 = 1.34, p = 0.72):
  Less than 19,000: 3 (2) | 1 (2) | 2 (2)
  20,000 to 49,000: 27 (20) | 7 (15) | 20 (22)
  50,000 to 99,000: 41 (30) | 13 (28) | 28 (32)
  100,000 or more: 65 (48) | 26 (55) | 39 (44)
Child’s age in years, mean (SD): - | 11 (4) | 11.9 (4)
Child gender, female, N (%): 95 (70) | 31 (66) | 64 (72)
JIA duration, N (%) (χ2 = 1.45, p = 0.69):
  < 6 months: 2 (1) | 2 (4) | 0
  6-12 months: 9 (7) | 3 (6) | 6 (6)
  > 12-24 months: 20 (15) | 7 (15) | 13 (15)
  > 24 months: 105 (77) | 35 (75) | 70 (79)
JIA subtype, N (%) (χ2 = 7.93, p = 0.13):
  Oligoarticular: 47 (35) | 19 (47) | 28 (32)
  Polyarticular: 53 (39) | 12 (26) | 41 (46)
  Psoriatic arthritis: 6 (4) | 3 (2) | 3 (3)
  Systemic: 14 (10) | 8 (17) | 6 (7)
  Ankylosing spondylitis: 2 (1) | 0 | 2 (2)
  Enthesitis related arthritis: 1 (0.7) | 1 (2) | 0
  Unsure: 12 (9) | 3 (6) | 9 (10)
Current disease activity, N (%) (χ2 = 9.56, p = 0.022*):
  Active: 59 (44) | 15 (29) | 44 (49)
  Inactive on medication: 49 (36) | 16 (34) | 33 (37)
  Inactive off medication for < 1 year: 18 (13) | 11 (23) | 7 (7.8)
  Clinical remission: 10 (7) | 5 (10) | 5 (5.6)
Current medications reported using, N (%) (χ2 = 2.47, p = 0.48):
  None: 22 (16) | 10 (21) | 12 (13)
  NSAID: 69 (51) | 23 (49) | 46 (51)
  Non-biologic DMARD: 51 (38) | 18 (38) | 33 (37)
  Biologic DMARD: 65 (48) | 18 (38) | 47 (52)
Notes: a) Column percentages are displayed. b) P-values derived from the chi-squared (χ2) or Wilcoxon tests. c) DMARDs, disease-modifying antirheumatic drugs; NSAIDs, nonsteroidal anti-inflammatory drugs. d) Medication categories are not mutually exclusive; therefore, medications do not sum to 100%.
A majority of those using CBD or contemplating using CBD for their child learned about it from TV (66%), a friend/relative (34%) or JIA online blog/support group (35%). Very few obtained information from a scientific journal article (17%) or their child’s rheumatologist (2%). Around half (52%) used 2 or more sources to learn about CBD. A majority of parents/guardians (75%) reported believing that CBD would reduce their child’s joint pain (Fig. 2 A), while only 15% of respondents reported believing that CBD has side effects. More than half of respondents reported thinking that CBD is safe because it is a natural product (Fig. 2 C). 
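The chi-square statistics in Table 1 can be recomputed, at least approximately, from the published counts. The sketch below runs a standard chi-square test of independence on the disease-activity rows (not contemplating vs. contemplating/using CBD); it is an illustration in Python rather than the authors' Excel analysis, and because the table reports rounded values and the exact test variant is not stated, the output may differ slightly from the reported χ2 = 9.56, p = 0.022.

```python
# Recompute a chi-square test of independence from the Table 1
# disease-activity counts (illustrative; not the authors' Excel analysis).
from scipy.stats import chi2_contingency

# Rows: not contemplating CBD (n=47) vs. contemplating/using CBD (n=89).
# Columns: active, inactive on medication, inactive off medication < 1 yr,
# clinical remission.
observed = [
    [15, 16, 11, 5],
    [44, 33, 7, 5],
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```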
Nearly two-thirds (63%, n = 56) of respondents had not discussed using CBD with their child’s rheumatologist, and over half (61%) of those did not plan to discuss it with their child’s rheumatologist for the following reasons: fear of what the provider might think (35%), feeling they would not be taken seriously (29%), and a belief that the rheumatologist would have no knowledge about CBD (18%).
Fig. 2 Parent/guardian perceptions of those using CBD for their child’s arthritis and those contemplating use of CBD (n=89): how they perceive CBD will help their child’s arthritis (A), how they learned about CBD (B), perception of the safety of CBD (C), and the reasons given by the 56 respondents who have not told their child’s rheumatologist (D)
Contemplating CBD use: Respondents contemplating starting a CBD product for their child’s JIA (n=79) were interested in the following CBD products: CBD oil balm (30%), oil drops (25%), gummies (15%), soft gels/capsules (6.5%), and oil roll-on (23%). Around a third (32%) of respondents were unsure what products they were interested in. Of those respondents (n=52) who were interested in starting a CBD product, 32.6% were interested in only oral CBD, 36.5% in a combination of oral and topical CBD, and 30.7% in only topical CBD.
Current CBD use: Respondents using CBD products for their child’s JIA (n=10) reported administering CBD orally (50%) or topically (50%). The majority (60%) reported using CBD on an as-needed basis, while 40% reported using CBD on a scheduled basis. Overall, 40% reported administering CBD once per day, 20% twice per day, and 40% at least three times per day. Respondents who reported administering CBD as needed (n=6) gave it for joint pain (66%), joint swelling (50%), joint stiffness (66%), and/or when their child requested it (33%). Respondents rated their child’s overall wellbeing at an average of 3.6 before starting CBD (0 = very poor, 10 = very good) and 5.3 after taking a CBD product. Half (50%, n=5) of parents reported improvement in their child’s wellbeing after starting CBD, while 30% reported no change and 20% reported decreased wellbeing. Respondents used the following CBD products: oil drops (40%), lotion (10%), soft gels (10%), and oil balm (40%). Only one respondent knew the total dose of CBD administered per day (20 mg daily), while 70% (n=7) were unsure and 20% (n=2) believed that the dose of CBD was irrelevant.
Discussion: To our knowledge, this is the first study exploring parent/guardian knowledge and opinions regarding CBD use for their children with JIA. We found that while CBD use is infrequent, there is strong parent/guardian interest in using CBD for treating JIA, especially among respondents reporting more active disease and a longer disease course. Use of stronger medications such as biologics, on the other hand, was not associated with a significant difference in CBD interest. 
These findings are consistent with other studies showing that children with JIA use CIM more frequently if they have more active disease and longer disease duration, and that use of immunosuppressive or biologic medications is not a factor related to CIM use among children with JIA [6, 25]. The majority of the survey respondents learned about CBD from television, the internet (JIA online blog/support group), or a friend or family member, with only a small percentage learning about it from a health care provider or scientific study, mirroring results from other studies of adults using CBD oil or cannabis [26, 27]. Our study further showed that many parents/guardians are not discussing CBD with their child’s rheumatologist. This is because they expressed worry that their child’s rheumatologist would negatively judge them and/or not take them seriously if they discussed their experience with or interest in CBD. This finding is similar to a recent study in which only 9.6% of young adults reported discussing CBD usage with their healthcare provider [27]. Previous studies evaluating CIM use in adolescents with JIA have demonstrated similarly low rates of discussions with their health care provider, [10] and parents of children with other chronic health conditions have reported similar reasons for not discussing CIM with their child’s health provider. These results suggest that providers need CBD and CIM-related education to better serve individuals with JIA, and also that providers need to specifically ask about use of CBD and other CIM modalities. As CBD becomes increasingly popular, parental interest in using CBD to treat their child’s health conditions continues to grow. Use of the search terms “CBD for children” and “CBD for kids” has increased since 2018, [22] and numerous blog posts and other forms of media report positive results from giving CBD to children [22]. These forms of media often mention preclinical CBD research conducted in mice, which demonstrates that CBD has potent anti-inflammatory and analgesic effects in induced inflammatory arthritis [14, 15]. Further, some small clinical trials of CBD in adults do show that CBD may have analgesic activity (in neuropathy and temporomandibular joint disorder [28, 29]) and short-term anxiolytic effects, [30–32] and several clinical trials of CBD in arthritis are ongoing (for example, in rheumatoid arthritis) [33]. However, what is often not communicated is that studies on the safety and efficacy of CBD among children with those symptoms (e.g., pain, inflammation) have not yet been conducted. As such, additional rigorous research is needed to investigate whether these preliminary therapeutic findings translate to the JIA context. Consistent with prior reports about CBD administration among young adults, [12, 27] a majority of those using CBD for their child’s arthritis are administering CBD orally (60%) on an as-needed basis, as often as several times per day, for joint pain and/or stiffness. In addition, 69% of those contemplating CBD expressed interest in an oral CBD product (alone or in combination with topical CBD). This strong interest in oral CBD is important to note, as CBD has been suggested to interact with hepatic cytochrome P450 enzymes and could interfere with the metabolism of several commonly prescribed rheumatologic medications, including prednisone, naproxen, and tofacitinib, potentially leading to increased drug levels and an increased risk of toxicity. 
The large majority of respondents believed CBD is safe because it is a natural product and did not believe there were adverse effects of CBD. Surprisingly, only 1 of 10 participants currently giving their child CBD knew what dose they were administering. The overall safety of CBD for healthy children or other clinical populations remains unknown, but the Epidiolex trials, [16, 34] which used high doses of CBD, reported non-serious adverse effects in children including dry mouth, sedation, and/or decreased appetite. Other studies have reported similar adverse effects in young adults or adults taking CBD [13, 27]. Limitations: Respondents of both panels were similar in terms of race/ethnicity, education, age, and gender; however, > 95% of respondents were white/Caucasian, which is not representative of the JIA patient population at our institution or in the US. Survey links were only generated for parents or guardians who expressed interest in participating in the study, so selection bias was likely present. In addition, respondents may have interpreted survey questions differently than we intended, and the wording of questions may have introduced bias. Finally, we only queried the parents/guardians of individuals with JIA rather than directly asking individuals with JIA about their experiences with or interest in CBD. Conclusions: As CBD continues to gain popularity, parental interest in CBD for treating their child’s health condition(s) will likely increase. In this study, we show that while CBD use is currently infrequent for JIA, many parents/guardians are interested in using CBD to help with JIA symptoms. As such, it is important that pediatric rheumatologists and other pediatric providers educate themselves about CBD to increase their comfort in discussing CBD and its potential safety issues with their patients and/or parents. Such efforts should focus on harm reduction, communicating uncertainty without harming the patient-physician relationship, and guiding interested parties to reliable sources on CBD (e.g., the Arthritis Foundation) [35] to ensure that they are obtaining information based on scientific evidence. In addition, rigorous clinical studies are warranted to investigate both the safety and efficacy of CBD in JIA to bridge the gap in knowledge. Supplementary information: Additional file 1
Background: Juvenile idiopathic arthritis (JIA) is common and difficult to treat. Cannabidiol (CBD) is now widely available, but no studies to date have investigated the use of CBD for JIA. Methods: We performed a chart review to identify patients with JIA at a Midwestern medical institution between 2017 and 2019. We surveyed primary caregivers of JIA patients using an anonymous, online survey with questions on caregiver knowledge and attitudes towards CBD. We compared respondents with no interest in CBD use vs. those contemplating or currently using CBD using descriptive statistics. Results: Of 900 reviewed charts, 422 met inclusion criteria. Of these, 236 consented to be sent a survey link, and n=136 (58%) completed surveys. Overall, 34.5% (n=47) of respondents reported no interest in using a CBD product for their child's JIA, while 54% (n=79) reported contemplating using CBD and 7% (n=10) reported currently giving their child CBD. Only 2% of respondents contemplating or actively using a CBD product learned about CBD from their child's rheumatologist, compared with television (70%) or a friend (50%). Most respondents had not talked to their child's rheumatologist about using CBD. Of those currently using CBD, most used oral or topical products, and only 10% of respondents (n=1) knew what dose they were giving their child. Conclusions: Our results show infrequent use but a large interest in CBD among caregivers of children with JIA. Given CBD's unknown safety profile in children with JIA, this study highlights a need for better studies and education around CBD for pediatric rheumatologists.
Background: Juvenile idiopathic arthritis (JIA) is the most common type of chronic arthritis in children, affecting 1 in 1000 children. It is an important cause of short and long-term disability and causes significant financial burden with annual direct medical costs ranging $400-$7,000 [1]. Effective treatments for JIA include non-steroidal anti-inflammatory drugs (NSAIDs), corticosteroids, disease modifying anti-rheumatic drugs (DMARDs), and biologic agents, but each carries potential adverse effects [2]. Indeed, parents and children frequently worry about side effects and the long-term safety of medications prescribed for JIA [3, 4]. As a result, many parents and children (34-92%) use complementary and integrative medicine (CIM) separately or in conjunction with standard treatment of JIA [5–10]. One such CIM treatment gaining popularity in the past years is cannabidiol (CBD), which is derived from Cannabis sativa. Since the removal of some CBD products from the Controlled Substances Act in 2018, a vast number of products made from hemp (Cannabis sativa with <0.3% Δ-9-tetrahydrocannabinol [THC]) have become available in brick and mortar retailers in topical, oral or inhaled forms [11]. CBD is non-intoxicating and has been widely advertised as a safe and natural therapy for many ailments including chronic pain, arthritis, other inflammatory diseases, and mental health conditions, resulting in frequent use for these conditions [12, 13]. In non-human animal studies, CBD reduces pain and inflammation due to arthritis, but these findings have not been validated in human studies [14, 15]. With the exception of Epidiolex, which is approved for the treatment of the rare seizure disorders Lennox Gastaut and Dravet Syndrome, CBD is minimally regulated by the Food and Drug Administration (FDA) [16, 17]. With the exception of these rare seizure disorders, evidence of a therapeutic benefit of CBD for pediatric conditions is lacking [18]. CBD’s safety profile has only been characterized among individuals with Dravet and Lennox Gastaut syndrome, so whether CBD is safe for use in healthy children or other pediatric populations remains unknown. Complicating matters, CBD is a promiscuous molecule that interacts with numerous systems in the body (e.g., serotonergic 5HT1A, endocannabinoid system as cannabinoid receptor 1 antagonist) [19, 20], and may interact with the metabolism of drugs commonly taken by children with JIA including prednisone and naproxen [21]. Further, testing of safety and potency of CBD products is not governed by a strong regulatory apparatus, [22, 23] and a recent JAMA study revealed that only 31% of CBD products sold online are accurately labeled for potency with 21% of samples containing THC [24]. As such, there are safety concerns about use in children, especially those with JIA. As pediatric rheumatologists, the authors (C.F., M. R.) have been frequently asked about using CBD products to treat JIA symptoms, but to date, there is no literature available regarding the use of CBD in children with JIA. The objective of this study was to determine the frequency of CBD use among children with JIA and investigate caregiver knowledge and opinions about CBD use for their children. Conclusions: As CBD continues to gain popularity, parental interest in CBD for treating their child’s health condition(s) will likely increase. In this study, we show that while CBD use is currently infrequent for JIA, many parents/guardians are interested in using CBD to help with JIA symptoms. 
As such, it is important that pediatric rheumatologists and other pediatric providers educate themselves about CBD to increase their comfort in discussing CBD and its potential safety issues with their patients and/or parents. Such efforts should focus on harm reduction, communicating uncertainty without harming the patient-physician relationship, and guiding interested parties to reliable sources on CBD (e.g., the Arthritis Foundation) [35] to ensure that they are obtaining information based on scientific evidence. In addition, rigorous clinical studies are warranted to investigate both the safety and efficacy of CBD in JIA to bridge the gap in knowledge.
Background: Juvenile idiopathic arthritis (JIA) is common and difficult to treat. Cannabidiol (CBD) is now widely available, but no studies to date have investigated the use of CBD for JIA. Methods: We performed a chart review to identify patients with JIA at a Midwestern medical institution between 2017 and 2019. We surveyed primary caregivers of JIA patients using an anonymous, online survey with questions on caregiver knowledge and attitudes towards CBD. We compared respondents with no interest in CBD use vs. those contemplating or currently using CBD using descriptive statistics. Results: Of 900 reviewed charts, 422 met inclusion criteria. Of these, 236 consented to be sent a survey link, and n=136 (58%) completed surveys. Overall, 34.5% (n=47) of respondents reported no interest in using a CBD product for their child's JIA, while 54% (n=79) reported contemplating using CBD and 7% (n=10) reported currently giving their child CBD. Only 2% of respondents contemplating or actively using a CBD product learned about CBD from their child's rheumatologist, compared with television (70%) or a friend (50%). Most respondents had not talked to their child's rheumatologist about using CBD. Of those currently using CBD, most used oral or topical products, and only 10% of respondents (n=1) knew what dose they were giving their child. Conclusions: Our results show infrequent use but a large interest in CBD among caregivers of children with JIA. Given CBD's unknown safety profile in children with JIA, this study highlights a need for better studies and education around CBD for pediatric rheumatologists.
8,021
317
[ 622, 1115, 124, 275, 114, 1116, 124, 276, 124, 8 ]
14
[ "cbd", "child", "respondents", "reported", "jia", "use", "product", "cbd product", "10", "disease" ]
[ "cbd product treat", "cbd products treat", "cbd arthritis foundation", "arthritis learned cbd", "cbd child arthritis" ]
null
[CONTENT] Juvenile idiopathic arthritis | Cannabidiol | Pediatric rheumatology | Complementary and integrative medicine [SUMMARY]
null
[CONTENT] Juvenile idiopathic arthritis | Cannabidiol | Pediatric rheumatology | Complementary and integrative medicine [SUMMARY]
[CONTENT] Juvenile idiopathic arthritis | Cannabidiol | Pediatric rheumatology | Complementary and integrative medicine [SUMMARY]
[CONTENT] Juvenile idiopathic arthritis | Cannabidiol | Pediatric rheumatology | Complementary and integrative medicine [SUMMARY]
[CONTENT] Juvenile idiopathic arthritis | Cannabidiol | Pediatric rheumatology | Complementary and integrative medicine [SUMMARY]
[CONTENT] Adolescent | Arthritis, Juvenile | Cannabidiol | Child | Female | Health Knowledge, Attitudes, Practice | Humans | Male | Parents | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] Adolescent | Arthritis, Juvenile | Cannabidiol | Child | Female | Health Knowledge, Attitudes, Practice | Humans | Male | Parents | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adolescent | Arthritis, Juvenile | Cannabidiol | Child | Female | Health Knowledge, Attitudes, Practice | Humans | Male | Parents | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adolescent | Arthritis, Juvenile | Cannabidiol | Child | Female | Health Knowledge, Attitudes, Practice | Humans | Male | Parents | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adolescent | Arthritis, Juvenile | Cannabidiol | Child | Female | Health Knowledge, Attitudes, Practice | Humans | Male | Parents | Surveys and Questionnaires [SUMMARY]
[CONTENT] cbd product treat | cbd products treat | cbd arthritis foundation | arthritis learned cbd | cbd child arthritis [SUMMARY]
null
[CONTENT] cbd product treat | cbd products treat | cbd arthritis foundation | arthritis learned cbd | cbd child arthritis [SUMMARY]
[CONTENT] cbd product treat | cbd products treat | cbd arthritis foundation | arthritis learned cbd | cbd child arthritis [SUMMARY]
[CONTENT] cbd product treat | cbd products treat | cbd arthritis foundation | arthritis learned cbd | cbd child arthritis [SUMMARY]
[CONTENT] cbd product treat | cbd products treat | cbd arthritis foundation | arthritis learned cbd | cbd child arthritis [SUMMARY]
[CONTENT] cbd | child | respondents | reported | jia | use | product | cbd product | 10 | disease [SUMMARY]
null
[CONTENT] cbd | child | respondents | reported | jia | use | product | cbd product | 10 | disease [SUMMARY]
[CONTENT] cbd | child | respondents | reported | jia | use | product | cbd product | 10 | disease [SUMMARY]
[CONTENT] cbd | child | respondents | reported | jia | use | product | cbd product | 10 | disease [SUMMARY]
[CONTENT] cbd | child | respondents | reported | jia | use | product | cbd product | 10 | disease [SUMMARY]
[CONTENT] children | cbd | use | jia | products | cbd products | children jia | use children | drugs | safety [SUMMARY]
null
[CONTENT] cbd | child | reported | χ2 | respondents | contemplating | 15 | product | 10 | 50 [SUMMARY]
[CONTENT] cbd | increase | pediatric | interested | jia | safety | health condition likely | uncertainty harming | uncertainty harming patient | uncertainty harming patient physician [SUMMARY]
[CONTENT] cbd | child | file | additional file | additional | respondents | reported | use | additional file additional file | file additional file [SUMMARY]
[CONTENT] cbd | child | file | additional file | additional | respondents | reported | use | additional file additional file | file additional file [SUMMARY]
[CONTENT] Juvenile | JIA ||| Cannabidiol (CBD | JIA [SUMMARY]
null
[CONTENT] 422 ||| 236 | 58% ||| 34.5% | JIA | 54% | n=79 | 7% ||| Only 2% | 70% | 50% ||| CBD ||| only 10% [SUMMARY]
[CONTENT] JIA ||| JIA [SUMMARY]
[CONTENT] JIA ||| Cannabidiol (CBD ||| JIA | Midwestern | between 2017 and 2019 ||| JIA ||| CBD ||| 422 ||| 236 | 58% ||| 34.5% | JIA | 54% | n=79 | 7% ||| Only 2% | 70% | 50% ||| CBD ||| only 10% ||| JIA ||| JIA [SUMMARY]
[CONTENT] JIA ||| Cannabidiol (CBD ||| JIA | Midwestern | between 2017 and 2019 ||| JIA ||| CBD ||| 422 ||| 236 | 58% ||| 34.5% | JIA | 54% | n=79 | 7% ||| Only 2% | 70% | 50% ||| CBD ||| only 10% ||| JIA ||| JIA [SUMMARY]
Influence of oral health condition on swallowing and oral intake level for patients affected by chronic stroke.
25565784
According to the literature, the occurrence of dysphagia is high in cases of stroke, and its severity can be enhanced by loss of teeth and the use of poorly fitting prostheses.
BACKGROUND
Thirty elderly individuals affected by stroke in chronic phase participated. All subjects underwent assessment of their oral condition, with classification from the Functional Oral Intake Scale (FOIS) and nasoendoscopic swallowing assessment to classify the degree of dysphagia. The statistical analysis examined a heterogeneous group (HG, n=30) and two groups designated by the affected body part, right (RHG, n=8) and left (LHG, n=11), excluding totally dentate or edentulous individuals without rehabilitation with more than one episode of stroke.
METHODS
There was a negative correlation between the need for replacement prostheses and the FOIS scale for the HG (P=0.02) and RHG (P=0.01). Differences in FOIS between types of prostheses of the upper dental arch in the LHG (P=0.01) and lower dental arch in the RHG (P=0.04). A negative correlation was found between the number of teeth present and the degree of dysfunction in swallowing liquid in the LHG (P=0.05). There were differences in the performance in swallowing solids between individuals without prosthesis and those with partial prosthesis in the inferior dental arch (P=0.04) for the HG.
RESULTS
The need for replacement prostheses, type of prostheses, and the number of teeth of elderly patients poststroke in chronic phase showed an association with the level of oral intake and the degree of oropharyngeal dysphagia.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Brazil", "Chronic Disease", "Deglutition", "Deglutition Disorders", "Dental Prosthesis", "Diagnosis, Oral", "Endoscopy, Gastrointestinal", "Female", "Humans", "Male", "Mouth Rehabilitation", "Oral Health", "Outcome Assessment, Health Care", "Risk Factors", "Severity of Illness Index", "Stroke", "Tooth Loss" ]
4279671
Introduction
The process of swallowing physiologically changes with aging. The loss of teeth, very common in this population, is related to the reduction of bone tissue, receptors (proprioceptors and periodontal ligaments), and muscle atrophy. Consequently, orofacial functions are impaired in toothless individuals.1 In addition, prostheses are functionally less efficient when compared to natural dentition.1,2 The stability of jaw occlusion provided by the posterior teeth or prostheses is important for the swallowing function.2 The dental condition and the use of poorly fitting prostheses may add to the difficulties arising from old age, and the oral diet and nutritional status of the elderly may be affected by changes related to aging3 and by changes in the ability and desire to feel, bite, chew4 and swallow food,3,5 significantly compromising and further aggravating the quality of life of these individuals.6 The literature has shown the influence of oral health condition on swallowing and nutrition, indirectly influencing the daily life activities of the elderly.7 Furthermore, it is clear that maintenance of oral health is essential for general health, quality of life, chewing ability8 and, mainly, the reduction of pneumonia risk in fragile elderly people,8,9 for it is already known that bacterial colonies in oropharyngeal tissues and in dental plaque are the greatest precursors for the development of aspiration respiratory infections.10 Besides the influence of the loss of dentition, the functional impairments resulting from aging may be potentiated by neurological changes that result in dysphagia, an important cause of morbidity and mortality in this population.11 In the adult and elderly population, dysphagia is most commonly associated with stroke.12 More than half of patients who have suffered a stroke present between six and ten types of disability, muscle weakness being the most prevalent, followed by communication, speech, and swallowing disorders.13 Specifically regarding swallowing, stroke entails increased oropharyngeal transit time,14,15 changes in motor control of the tongue,16 as well as the presence of laryngotracheal aspiration of food.16,17 Since the physiological changes in swallowing that result from aging may be aggravated by loss of teeth and the use of poorly fitting prostheses, the severity of dysphagia could be influenced by the oral health condition and the use of dental prostheses in elderly patients with neurological diseases. However, according to the literature reviewed, no research has correlated swallowing performance with oral health in elderly patients after stroke. Therefore, the aim of this study was to relate the condition of oral health to the level of oral intake and the degree of swallowing dysfunction in elderly patients with stroke in chronic phase.
Statistical analysis
Spearman’s test was used to assess the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and performance in the swallowing function (classification on the FOIS scale and dysphagia rating for each consistency). For this analysis, the FOIS scale and the degree of dysphagia were numerically coded from 1 to 7 and from 1 to 4, respectively. For comparison of FOIS results and dysphagia classification among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture, or none), the Kruskal–Wallis test was used, followed by Dunn’s post-hoc test.
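The statistical workflow above can be illustrated with standard open-source tools. The sketch below assumes the coded scores are held in plain Python lists; all variable names and values are illustrative, not the study's data, and the authors' actual statistical software is not stated. It shows how Spearman's correlation and the Kruskal–Wallis test would be applied to the numerically coded FOIS, dysphagia, and oral health variables.

```python
# Minimal sketch of the described analysis (illustrative data only).
from scipy.stats import spearmanr, kruskal

# Hypothetical per-patient scores (same ordering across all lists).
fois = [4, 5, 5, 7, 4, 5]                # FOIS levels I-VII coded 1-7
dysphagia_liquid = [2, 1, 2, 1, 3, 2]    # normal/mild/moderate/severe coded 1-4
teeth_present = [0, 6, 2, 14, 0, 4]      # number of remaining teeth
replacement_need = [4, 1, 3, 0, 4, 2]    # prosthesis replacement need, coded 0-4

# Spearman correlation between an oral health measure and a swallowing score.
rho, p = spearmanr(replacement_need, fois)
print(f"replacement need vs FOIS: rho={rho:.2f}, P={p:.3f}")

rho, p = spearmanr(teeth_present, dysphagia_liquid)
print(f"teeth present vs liquid dysphagia: rho={rho:.2f}, P={p:.3f}")

# Kruskal-Wallis test comparing FOIS across prosthesis types in one arch;
# each list holds the FOIS scores of patients wearing that prosthesis type.
fois_complete = [4, 4, 5]
fois_partial = [5, 7, 5]
fois_none = [4, 5]
h, p = kruskal(fois_complete, fois_partial, fois_none)
print(f"Kruskal-Wallis across prosthesis types: H={h:.2f}, P={p:.3f}")

# If the Kruskal-Wallis test is significant, a Dunn's post-hoc comparison could
# be run, for example with the scikit-posthocs package (posthoc_dunn), to
# locate which pair of prosthesis types differs.
```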
Results
According to the evaluation of the oral condition, the mean DMFT of the elderly individuals affected by stroke in this study was 28.7, while the mean number of remaining teeth was 5.6, showing that most individuals were edentulous. Table 2 shows the types of prostheses used by the elderly individuals affected by stroke in each dental arch and the need for prostheses replacement. According to the results of the Functional Oral Intake Scale, no individuals were found to be on a single-consistency diet or tube dependent. Most individuals (53%) presented as level V on the FOIS, followed by level IV (34%) and level VII (13%). Regarding the severity rating of the swallowing disorder, no occurrence of severe dysphagia was verified. The distribution of the elderly affected by stroke according to the severity rating of the swallowing disorder is shown in Table 3. According to the results of the statistical analysis, correlations between the oral health condition and the results of the FOIS scale were found in the elderly individuals affected by stroke in this study. There was a statistically significant negative correlation between the need for replacement of prostheses and the FOIS rating, both in the HG and in the RHG, showing that the greater the need for replacement of the prosthesis, the worse the rating on the FOIS scale. There were differences in the classification of the FOIS scale, for the LHG, among the different types of prostheses used in the upper dental arch, whereas for the RHG the difference in the classification of FOIS occurred between groups of individuals with different types of prostheses used in the lower dental arch. After completion of Dunn’s post-test, the difference was confirmed only in the LHG, demonstrating that individuals of this group with partial prostheses in the upper arch (ie, who still had natural elements and whose missing teeth had been rehabilitated) had better ratings on the FOIS scale compared to individuals with complete prostheses. The results of the correlation between the FOIS scale and the oral health condition are shown in Table 4. In the statistical analysis between the degree of swallowing dysfunction and the oral health condition, a significant negative correlation between the number of teeth and the swallowing performance for liquid in the LHG was found, demonstrating that the greater the number of teeth, the lower the degree of swallowing dysfunction. According to the Kruskal–Wallis test, there was a difference in the performance of swallowing solid boluses depending on the type of prosthesis used in the lower dental arch in the HG; however, when applying Dunn’s post-test, this difference was not confirmed (P>0.05). The comparison of the FOIS scale between the types of dental prostheses is shown in Table 5. The results of the correlation between the degree of dysphagia and the oral health condition are shown in Table 6, and the comparison of the degree of swallowing dysfunction between types of dental prostheses is described in Table 7.
Conclusion
The findings of this study indicate that the oral health condition of elderly individuals after stroke in the chronic phase was associated with the level of oral intake and the degree of oropharyngeal dysphagia. The limitations of the study should be considered, one being the lack of access to imaging examinations to characterize the individuals according to the type of stroke, the damaged areas, and the extent of injury, as well as the heterogeneity of the time since brain involvement, in addition to the effects of medications and other comorbidities, which were not taken into account. Also, owing to the small sample, strong correlations could not be observed, and a cause/effect relationship cannot be determined. Despite the limitations of this study, and the fact that hemiplegia does not necessarily correspond to a lesion in the contralateral brain hemisphere, since cerebellar lesions may cause ipsilateral motor damage,31 the findings indicated that the level of oral intake was influenced by the condition or type of dental prostheses, regardless of the side of the body affected. On the other hand, the number of teeth was shown to be related to the level of dysphagia in individuals with right hemiplegia, which shows that the oral health condition differentially affects the physiological responses of swallowing, influenced by the brain damage. Considering the present sample, further research is necessary regarding the oral condition and the neurological involvement of individuals. In addition, these issues demonstrate the need for an interdisciplinary research team comprising dentists, physicians, speech pathologists, and other health professionals, so as to study and define the treatment and guidelines for patients and caregivers, in order to maintain the best possible oral health condition in elderly individuals presenting with oropharyngeal dysphagia.
[ "Methods", "Procedures", "Oral condition evaluation", "Functional Oral Intake Scale", "Fiberoptic endoscopic evaluation of swallowing", "Conclusion" ]
[ " Study design and subjects Thirty poststroke individuals, aged 61 to 90 years, whose injury happened 6 months to 9 years previously to their participation in this prospective study. Table 1 shows the characterization data of the sample.\nThe inclusion criteria considered the following cases: affected by stroke as attested by a medical report, through clinical diagnosis and imaging, with a minimum time of 6 months; aged over 60 years, being in regular clinical neurological monitoring, not having undergone dysphagia rehabilitation; presenting general stable health that would enable the realization of the proposed tests.\nIn order to achieve a greater delineation of the group, in addition to considering the group of 30 individuals (heterogeneous group [HG]), the total dentate, the total edentulous without dental prostheses, and individuals with more than one episode of stroke were excluded and the participants were divided into two groups, according to the affected body side: right hemiplegic group (RHG, n=8) and left hemiplegic group (LHG, n=11). A distribution and recruitment fluxogram is shown in Figure 1.\nThirty poststroke individuals, aged 61 to 90 years, whose injury happened 6 months to 9 years previously to their participation in this prospective study. Table 1 shows the characterization data of the sample.\nThe inclusion criteria considered the following cases: affected by stroke as attested by a medical report, through clinical diagnosis and imaging, with a minimum time of 6 months; aged over 60 years, being in regular clinical neurological monitoring, not having undergone dysphagia rehabilitation; presenting general stable health that would enable the realization of the proposed tests.\nIn order to achieve a greater delineation of the group, in addition to considering the group of 30 individuals (heterogeneous group [HG]), the total dentate, the total edentulous without dental prostheses, and individuals with more than one episode of stroke were excluded and the participants were divided into two groups, according to the affected body side: right hemiplegic group (RHG, n=8) and left hemiplegic group (LHG, n=11). A distribution and recruitment fluxogram is shown in Figure 1.\n Procedures Oral condition evaluation The participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\nThe participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. 
The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\n Functional Oral Intake Scale Research of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\nResearch of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\n Fiberoptic endoscopic evaluation of swallowing The fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\nThe fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. 
Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\n Statistical analysis Spearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.\nSpearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.\n Oral condition evaluation The participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. 
The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\nThe participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\n Functional Oral Intake Scale Research of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\nResearch of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\n Fiberoptic endoscopic evaluation of swallowing The fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\nThe fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. 
Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\n Statistical analysis Spearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.\nSpearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.", " Oral condition evaluation The participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. 
The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\nThe participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\n Functional Oral Intake Scale Research of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\nResearch of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\n Fiberoptic endoscopic evaluation of swallowing The fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\nThe fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. 
Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\n Statistical analysis Spearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.\nSpearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.", "The participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.", "Research of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. 
From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21", "The fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).", "The findings of this study indicate that the oral health condition of elderly individuals after stroke in chronic phase showed an association with the level of oral intake and the degree of oropharyngeal dysphagia. The limitations of the study should be considered, one being the lack of access to the imaging examinations to characterize the individuals according to the type of stroke, damaged areas, and extent of injury, and heterogeneity of the time of brain involvement, in addition to the effects of medications and other comorbidities, which were not taken into account. Also, owing to the small sample, strong correlations could not be observed, and a cause/effect relationship cannot be determined.\nDespite the limitations of this study, and the fact that hemiplegia does not necessarily represent a direct relationship with the injured brain hemisphere contralaterally, since cerebellum lesions may cause ipsilateral motor damages,31 the findings indicated that the level of oral intake was influenced by the condition or type of dental prostheses, regardless of the side of the body affected. On the other hand, the number of teeth were demonstrated to be related to the level of dysphagia in individuals with right hemiplegia, which shows that the oral health condition differentially affects the physiological responses of swallowing, influenced by the brain damage.\nConsidering the present sample, further research is necessary pertaining to the oral condition and the neurological involvement of individuals. In addition, these issues demonstrate the need for an interdisciplinary research team comprising dentists, physicians, speech pathologists, and other health professionals, so as to study and define the treatment and guidelines to patients and caretakers, in order to maintain the best oral health condition in the elderly presented with oropharyngeal dysphagia." ]
[ "methods", "methods", null, null, null, null ]
[ "Introduction", "Methods", "Study design and subjects", "Procedures", "Oral condition evaluation", "Functional Oral Intake Scale", "Fiberoptic endoscopic evaluation of swallowing", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ "The process of swallowing physiologically changes with aging. The loss of teeth, very common in this population, is related to the reduction of bone tissue, receptors (proprioceptors and periodontal ligaments), and muscle atrophy. Consequently, orofacial functions are impaired in toothless individuals.1 In addition, prostheses are functionally less efficient when compared to individuals with natural dentition.1,2\nThe stability of the jaw occlusion of the posterior teeth or prostheses is important for the swallowing function.2 The dental condition and the use of poorly fitting prostheses may contribute to the difficulties arising from old age, and the oral diet and nutritional status of the elderly may be affected by changes related to aging3 and changes in skill and desire to feel, bite, chew4 and swallow food,3,5 significantly compromising and further aggravating the quality of life of these individuals.6\nThe literature has shown the influence of oral health condition on swallowing and nutrition, indirectly influencing the daily life activities of the elderly.7 Furthermore, it is clear that maintenance of oral health is essential for the general health, quality of life, chewing ability8 and, mainly, the reduction of pneumonia risk in fragile elderly people,8,9 for it is already known that bacterial colonies in oropharyngeal tissues and in dental plaque are the greatest precursors for the development of aspiration respiratory infections.10\nBesides the influence of the loss of dentition, the functional impairments resulting from aging may be potentiated by neurological changes resulting in dysphagia in patients, an important cause of morbidity and mortality in this population.11\nIn the adult and elderly population, dysphagia is most commonly associated with stroke.12 More than half of patients who have suffered stroke present between six and ten types of disability, muscle weakness being the most prevalent one, followed by communication, speech, and swallowing disorders.13 Specifically regarding swallowing, the stroke entails increased oropharyngeal transit time,14,15 changes in motor control of tongue,16 as well as the presence of laryngotracheal aspiration of food.16,17\nOnce the physiological changes resulting from aging in swallowing may be aggravated by loss of teeth and the use of poorly fitting prostheses, the severity of dysphagia could be influenced by the oral health condition and use of dental prostheses in elderly patients with neurological diseases. However, according to the literature studied, no research has made the correlation between the findings of the swallowing performance with oral health in elderly patients after stroke. Therefore, the aim of this study was to relate the condition of oral health to the level of oral intake and the degree of swallowing dysfunction in elderly patients with stroke in chronic phase.", " Study design and subjects Thirty poststroke individuals, aged 61 to 90 years, whose injury happened 6 months to 9 years previously to their participation in this prospective study. 
Table 1 shows the characterization data of the sample.\nThe inclusion criteria considered the following cases: affected by stroke as attested by a medical report, through clinical diagnosis and imaging, with a minimum time of 6 months; aged over 60 years, being in regular clinical neurological monitoring, not having undergone dysphagia rehabilitation; presenting general stable health that would enable the realization of the proposed tests.\nIn order to achieve a greater delineation of the group, in addition to considering the group of 30 individuals (heterogeneous group [HG]), the total dentate, the total edentulous without dental prostheses, and individuals with more than one episode of stroke were excluded and the participants were divided into two groups, according to the affected body side: right hemiplegic group (RHG, n=8) and left hemiplegic group (LHG, n=11). A distribution and recruitment fluxogram is shown in Figure 1.\nThirty poststroke individuals, aged 61 to 90 years, whose injury happened 6 months to 9 years previously to their participation in this prospective study. Table 1 shows the characterization data of the sample.\nThe inclusion criteria considered the following cases: affected by stroke as attested by a medical report, through clinical diagnosis and imaging, with a minimum time of 6 months; aged over 60 years, being in regular clinical neurological monitoring, not having undergone dysphagia rehabilitation; presenting general stable health that would enable the realization of the proposed tests.\nIn order to achieve a greater delineation of the group, in addition to considering the group of 30 individuals (heterogeneous group [HG]), the total dentate, the total edentulous without dental prostheses, and individuals with more than one episode of stroke were excluded and the participants were divided into two groups, according to the affected body side: right hemiplegic group (RHG, n=8) and left hemiplegic group (LHG, n=11). A distribution and recruitment fluxogram is shown in Figure 1.\n Procedures Oral condition evaluation The participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\nThe participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. 
The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\n Functional Oral Intake Scale Research of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\nResearch of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\n Fiberoptic endoscopic evaluation of swallowing The fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\nThe fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. 
Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\n Statistical analysis Spearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.\nSpearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.\n Oral condition evaluation The participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. 
The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\nThe participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\n Functional Oral Intake Scale Research of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\nResearch of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\n Fiberoptic endoscopic evaluation of swallowing The fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\nThe fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. 
Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\n Statistical analysis Spearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.\nSpearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.", "Thirty poststroke individuals, aged 61 to 90 years, whose injury happened 6 months to 9 years previously to their participation in this prospective study. Table 1 shows the characterization data of the sample.\nThe inclusion criteria considered the following cases: affected by stroke as attested by a medical report, through clinical diagnosis and imaging, with a minimum time of 6 months; aged over 60 years, being in regular clinical neurological monitoring, not having undergone dysphagia rehabilitation; presenting general stable health that would enable the realization of the proposed tests.\nIn order to achieve a greater delineation of the group, in addition to considering the group of 30 individuals (heterogeneous group [HG]), the total dentate, the total edentulous without dental prostheses, and individuals with more than one episode of stroke were excluded and the participants were divided into two groups, according to the affected body side: right hemiplegic group (RHG, n=8) and left hemiplegic group (LHG, n=11). 
A distribution and recruitment fluxogram is shown in Figure 1.", " Oral condition evaluation The participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\nThe participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. The data were collected from the clinical examination by a dentist as instructed by the WHO – Oral Health Surveys: Basic Methods,18 including the assessment of dental status by the number of decayed, missing, and filled teeth (DMFT) and evaluation of dental prostheses.\nThe prostheses were evaluated for retention, functional stability, smile aesthetics, degree of bone resorption, and quality of the mucosa.19\nFrom the evaluation, the dental arches were classified as without prosthesis or with the presence of partial or complete dental prosthesis; later, the use or need for replacement of dental prostheses was indicated. The need for prostheses replacement was as follows: 0, no need to change; 1, partial dental prosthesis was necessary in one of the dental arches; 2, need for partial dental prosthesis in both dental arches; 3, need for complete dental prosthesis in one of the dental arches; 4, need for dental prosthesis in both dental arches.\n Functional Oral Intake Scale Research of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\nResearch of oral ingestion was performed by reviewing the usual food consumption patterns referred to in the 24-hour dietary recall. From the data obtained by the research of oral ingestion, patients were classified according to the levels of the Functional Oral Intake Scale (FOIS),20 on a scale from I to VII, considering the diet characteristics based on the properties and texture of the food.21\n Fiberoptic endoscopic evaluation of swallowing The fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. 
Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\nThe fiberoptic endoscopic evaluation of swallowing (FEES) was performed by an otorhinolaryngologist physician collaboratively with the speech therapist. Subjects were asked to remain seated with their heads arranged in the direction of the body axis, without bending or rotation, and the examination was performed using a flexible endoscopic equipment of fiberoptic bronchoscopic type (model Olympus CLV-U20) and an Olympus OTV–SC nasopharyngoscope (Olympus Corporation, Tokyo, Japan).\nThree consistent standardized foods were evaluated: liquid (10 mL of filtered water), thick pudding (10 mL of thickened dietary grape juice, reaching a final consistency similar to that of pudding), and solid (half a slice of bread, 1 cm thick). From the data obtained by the FEES, the severity rating of the swallowing disorder was determined in accordance with the scale of functional impairment of swallowing of Macedo Filho,22,23 which subdivides the dysphagia severity levels (normal, mild, moderate, and severe).\n Statistical analysis Spearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.\nSpearman’s test was used to verify the correlation between the oral health condition (DMFT, number of teeth present, and the need for prostheses replacement) and the performance in the swallowing function (classification in the FOIS scale and dysphagia rating in each consistency). For this analysis to be performed, the FOIS scale and the degree of dysphagia were numerically classified from 1 to 7 and from 1 to 4, respectively.\nFor comparison of FOIS results and dysphagia classification, among the different types of prostheses used in the upper and lower dental arches (complete dental prosthesis, partial dental prosthesis, denture or none), the Kruskal–Wallis test was used, followed by the Dunn’s test.", "The participants underwent assessment of various components of oral health through traditional health indicators, based on the presence or absence of disease. 
Results: According to the evaluation of oral condition, the mean DMFT of the elderly individuals affected by stroke in this study was 28.7, while the mean number of remaining teeth was 5.6, showing that most individuals were edentulous.
Table 2 shows the types of prostheses used by the elderly individuals affected by stroke in each dental arch and the need for prostheses replacement.
According to the results of the Functional Oral Intake Scale, no individuals were found to be on a single-consistency diet or tube dependent. Most individuals (53%) were at level V on the FOIS, followed by level IV (34%) and level VII (13%).
Regarding the severity rating of the swallowing disorder, no cases of severe dysphagia were found. The distribution of the elderly individuals affected by stroke according to the severity rating of the swallowing disorder is also shown in Table 3.
The statistical analysis revealed correlations between the oral health condition and the FOIS results in the elderly individuals affected by stroke in this study. There was a statistically significant negative correlation between the need for replacement of prostheses and the FOIS rating in both the HG and the RHG, showing that the greater the need for replacement of the prosthesis, the worse the FOIS rating.
There were differences in the FOIS classification among the types of prostheses used in the upper dental arch for the LHG, whereas for the RHG the difference in FOIS classification occurred among groups of individuals with different types of prostheses in the lower dental arch. After Dunn's post hoc test, the difference was confirmed only in the LHG, demonstrating that individuals in this group with partial prostheses in the upper arch (ie, who still had natural teeth and whose missing teeth had been rehabilitated) had better FOIS ratings than individuals with complete prostheses.
The results of the correlation between the FOIS scale and the oral health condition are shown in Table 4.
In the analysis of the degree of swallowing dysfunction and oral health condition, a significant negative correlation was found between the number of teeth and the degree of dysfunction in swallowing liquid in the LHG, demonstrating that the greater the number of teeth, the lower the degree of swallowing dysfunction.
According to the Kruskal–Wallis test, there was a difference in the performance of swallowing solid boluses depending on the type of prosthesis used in the lower dental arch in the HG; however, when Dunn's post hoc test was applied, this difference was not confirmed (P>0.05). The comparison of the FOIS scale among the types of dental prostheses is shown in Table 5.
The results of the correlation between the degree of dysphagia and the oral health condition are shown in Table 6, and the comparison of the degree of swallowing dysfunction among types of dental prostheses is described in Table 7.
Discussion: According to the literature, individuals affected by stroke often exhibit a decreased level of consciousness, paralysis of the muscles involved in swallowing, sensory deficits of the pharynx and oral cavity, and loss of appetite.24 With these swallowing-related disorders, dysphagia becomes an important symptom in this population, and it may be aggravated by tooth loss and the use of poorly fitting prostheses, both common during aging.
Furthermore, the hypothesis that oropharyngeal colonization is primarily responsible for subsequent respiratory infection, owing to aspiration, is strongly supported by the literature.10
No studies in the literature have correlated swallowing performance findings with oral health condition in elderly patients after stroke.
The results of this study showed some correlations between the oral condition and swallowing performance, even when the elderly individuals affected by stroke were analyzed as a heterogeneous group.
In addition to the statistical analysis of the whole study group, in an attempt to reduce its heterogeneity, the subjects were divided according to the side of the motor impairment after stroke. Previous studies have suggested that dysphagia may be associated with diffuse lesions in one or both hemispheres, the anterior fossa, or the brainstem, as swallowing has bilateral cortical representation. It is well established that protection of the airways during swallowing requires precise coordination, with accurate movements of the structures involved and adequate propulsion of the bolus to the oropharynx and esophagus;25 thus, the motor ability for swallowing seems to be essential for this protection.
In the HG and RHG, it was found that the greater the need for replacement of the prosthesis, the worse the FOIS rating; that is, individuals with a greater need for replacement of their prostheses perceived themselves as making more compensations and changes in their diet, which was reflected in their classified level of oral intake.
The FOIS scale was developed to describe the functional level of a patient's daily oral intake of food and liquid, considering the changes in diet and the need for compensation during swallowing.20 These limitations and changes in oral intake result from the patient's perception and self-perception of an impaired swallowing ability. Therefore, it can be inferred that when there was a greater need for replacement of the prosthesis, the patient's perception of making more compensations and dietary modifications was reflected in the FOIS classification.
Significant changes in oral condition can have a major effect on the functions of eating and drinking and on the nutritional status of elderly individuals and those with neurological diseases.26 Thus, the selection of food is determined by the individual's difficulty in chewing and swallowing. Missing teeth and the use of prostheses change the chewing ability of the elderly according to the type of food and influence the swallowing function;27 soft or mashed foods are therefore preferentially ingested by the elderly, owing to the difficulty in mastication,28 showing that the oral health condition influences the selection of foods.
Besides the influence of the oral condition on swallowing, there was a difference between the types of prostheses and the FOIS classification when the results of individuals with right and left hemiplegia were analyzed separately. The difference between the types of prostheses was confirmed by Dunn's post hoc test in the LHG, demonstrating that individuals with partial dentures had better FOIS ratings than individuals with complete prostheses. No studies were found relating the oral condition to this rating scale.
Thus, we can infer that when there was partial prosthetic rehabilitation along with the presence of some natural teeth, subjects needed fewer compensations and changes in the oral diet. In this sense, studies show that the use of prostheses in edentulous individuals contributes to the maintenance of the physiological process of swallowing in the elderly,29 and that fixation of the prosthesis in the mandible30 and the presence of natural teeth enable greater preservation of functional swallowing.
When comparing swallowing performance according to the type of prosthesis used, there was no confirmed difference between the types of prostheses fitted in the lower dental arch in swallowing solids in the HG. Studies in healthy elderly subjects demonstrate the importance of the oral condition in the performance of the oral phase of swallowing. Changes in masticatory function related to the oral condition, such as difficulty biting and chewing food,27 have been shown to influence the pharyngeal phase of swallowing, with premature escape of food, residue retention in the vallecula and pyriform sinuses, and the presence of coughing and gagging.27
Moreover, the absence of a dental prosthesis can result in a longer duration of swallowing,29 and chewing and swallowing difficulties in healthy older adults decreased after replacement of removable complete prostheses with mandibular implant-supported prostheses.30 The literature has shown positive effects of prostheses on the swallowing function, as well as differences in swallowing performance with prostheses in the upper or lower arches.8 A study with healthy elderly individuals showed differences in the total duration of swallowing, the latency period before pharyngeal elevation, and the duration of the preparatory and oral phases, with and without dental prostheses.29
Finally, in the LHG, it was observed that the smaller the number of teeth present, the worse the dysphagia rating for liquid. Although this study aimed at finding an influence of the presence and number of teeth on chewing, and consequently on swallowing solids,1,27,28 it also demonstrated an influence of the oral condition on liquid swallowing in this population. According to Tamura et al,2 especially for liquid food, which demands more coordination during swallowing, jaw stability is also required, so that there is greater stability of the hyolaryngeal complex. To start the pharyngeal phase, the hyoid bone must be raised by the suprahyoid muscles and the larynx must elevate. To complement these events, the jaw must be stabilized in the proper position by natural or rehabilitated teeth, because when the elevation of the hyoid bone and larynx is insufficient, there is an increased risk of aspiration during swallowing.
Conclusion: The findings of this study indicate that the oral health condition of elderly individuals after stroke in the chronic phase was associated with the level of oral intake and the degree of oropharyngeal dysphagia. The limitations of the study should be considered, one being the lack of access to imaging examinations to characterize the individuals according to the type of stroke, the damaged areas, and the extent of the injury; others include the heterogeneity of the time since brain injury, as well as the effects of medications and other comorbidities, which were not taken into account.
Also, owing to the small sample, strong correlations could not be observed, and a cause-and-effect relationship cannot be established.
Despite the limitations of this study, and the fact that hemiplegia does not necessarily correspond to a contralateral injured brain hemisphere, since cerebellar lesions may cause ipsilateral motor damage,31 the findings indicated that the level of oral intake was influenced by the condition or type of dental prostheses, regardless of the side of the body affected. On the other hand, the number of teeth was shown to be related to the level of dysphagia in individuals with left hemiplegia, which indicates that the oral health condition differentially affects the physiological responses of swallowing, depending on the brain damage.
Considering the present sample, further research is needed on the oral condition and the neurological involvement of these individuals. In addition, these issues demonstrate the need for an interdisciplinary research team comprising dentists, physicians, speech pathologists, and other health professionals, in order to study and define treatment and guidelines for patients and caregivers, so as to maintain the best possible oral health condition in elderly individuals with oropharyngeal dysphagia.
[ "intro", "methods", "methods|subjects", "methods", null, null, null, "methods", "results", "discussion", null ]
[ "deglutition", "mouth rehabilitation", "aged", "prosthodontics", "dysphagia", "cerebrovascular disorders" ]
Background: According to the literature, the occurrence of dysphagia is high in cases of stroke, and its severity can be aggravated by loss of teeth and the use of poorly fitting prostheses. Methods: Thirty elderly individuals affected by stroke in the chronic phase participated. All subjects underwent assessment of their oral condition, classification on the Functional Oral Intake Scale (FOIS), and nasoendoscopic swallowing assessment to classify the degree of dysphagia. The statistical analysis examined a heterogeneous group (HG, n=30) and two groups defined by the affected body side, right (RHG, n=8) and left (LHG, n=11), after excluding totally dentate individuals, edentulous individuals without prosthetic rehabilitation, and individuals with more than one episode of stroke. Results: There was a negative correlation between the need for replacement prostheses and the FOIS scale for the HG (P=0.02) and the RHG (P=0.01). Differences in FOIS were found between types of prostheses of the upper dental arch in the LHG (P=0.01) and of the lower dental arch in the RHG (P=0.04). A negative correlation was found between the number of teeth present and the degree of dysfunction in swallowing liquid in the LHG (P=0.05). There were differences in the performance in swallowing solids between individuals without prostheses and those with partial prostheses in the lower dental arch (P=0.04) for the HG. Conclusions: The need for replacement prostheses, the type of prostheses, and the number of teeth of elderly patients poststroke in the chronic phase showed an association with the level of oral intake and the degree of oropharyngeal dysphagia.
Introduction: The process of swallowing changes physiologically with aging. The loss of teeth, very common in this population, is related to the reduction of bone tissue and of receptors (proprioceptors and periodontal ligaments), and to muscle atrophy. Consequently, orofacial functions are impaired in toothless individuals.1 In addition, prosthesis wearers are functionally less efficient when compared with individuals with natural dentition.1,2 The stability of jaw occlusion provided by the posterior teeth or prostheses is important for the swallowing function.2 The dental condition and the use of poorly fitting prostheses may add to the difficulties arising from old age; the oral diet and nutritional status of the elderly may be affected by changes related to aging3 and by changes in the skill and desire to feel, bite, chew4 and swallow food,3,5 significantly compromising and further aggravating the quality of life of these individuals.6 The literature has shown the influence of the oral health condition on swallowing and nutrition, indirectly influencing the daily life activities of the elderly.7 Furthermore, it is clear that maintenance of oral health is essential for general health, quality of life, chewing ability8 and, mainly, the reduction of pneumonia risk in frail elderly people,8,9 for it is already known that bacterial colonies in oropharyngeal tissues and in dental plaque are the greatest precursors for the development of aspiration respiratory infections.10 Besides the influence of the loss of dentition, the functional impairments resulting from aging may be potentiated by neurological changes that result in dysphagia, an important cause of morbidity and mortality in this population.11 In the adult and elderly population, dysphagia is most commonly associated with stroke.12 More than half of patients who have suffered a stroke present between six and ten types of disability, muscle weakness being the most prevalent, followed by communication, speech, and swallowing disorders.13 Specifically regarding swallowing, stroke entails an increased oropharyngeal transit time,14,15 changes in the motor control of the tongue,16 and the presence of laryngotracheal aspiration of food.16,17 Since the physiological changes in swallowing resulting from aging may be aggravated by loss of teeth and the use of poorly fitting prostheses, the severity of dysphagia could be influenced by the oral health condition and the use of dental prostheses in elderly patients with neurological diseases. However, according to the literature reviewed, no research has correlated findings of swallowing performance with oral health in elderly patients after stroke. Therefore, the aim of this study was to relate the condition of oral health to the level of oral intake and the degree of swallowing dysfunction in elderly patients with stroke in the chronic phase. Conclusion: The findings of this study indicate that the oral health condition of elderly individuals after stroke in the chronic phase showed an association with the level of oral intake and the degree of oropharyngeal dysphagia. The limitations of the study should be considered, one being the lack of access to imaging examinations to characterize the individuals according to the type of stroke, damaged areas, and extent of injury, as well as the heterogeneity of the time of brain involvement and the effects of medications and other comorbidities, which were not taken into account.
Also, owing to the small sample, strong correlations could not be observed, and a cause/effect relationship cannot be determined. Despite the limitations of this study, and the fact that hemiplegia does not necessarily have a direct relationship with the contralateral injured brain hemisphere, since cerebellar lesions may cause ipsilateral motor damage,31 the findings indicated that the level of oral intake was influenced by the condition or type of dental prostheses, regardless of the side of the body affected. On the other hand, the number of teeth was demonstrated to be related to the level of dysphagia in individuals with right hemiplegia, which shows that the oral health condition affects the physiological responses of swallowing differently, depending on the brain damage. Considering the present sample, further research is necessary pertaining to the oral condition and the neurological involvement of individuals. In addition, these issues demonstrate the need for an interdisciplinary research team comprising dentists, physicians, speech pathologists, and other health professionals, so as to study and define treatment and guidelines for patients and caretakers, in order to maintain the best possible oral health condition in elderly individuals presenting with oropharyngeal dysphagia.
Background: According to the literature, the occurrence of dysphagia is high in cases of stroke, and its severity can be increased by loss of teeth and the use of poorly fitting prostheses. Methods: Thirty elderly individuals affected by stroke in the chronic phase participated. All subjects underwent assessment of their oral condition, classification using the Functional Oral Intake Scale (FOIS), and nasoendoscopic swallowing assessment to classify the degree of dysphagia. The statistical analysis examined a heterogeneous group (HG, n=30) and two groups designated by the affected body side, right (RHG, n=8) and left (LHG, n=11), excluding totally dentate individuals, edentulous individuals without rehabilitation, and those with more than one episode of stroke. Results: There was a negative correlation between the need for replacement prostheses and the FOIS scale for the HG (P=0.02) and RHG (P=0.01). There were differences in FOIS between types of prostheses in the upper dental arch in the LHG (P=0.01) and in the lower dental arch in the RHG (P=0.04). A negative correlation was found between the number of teeth present and the degree of dysfunction in swallowing liquids in the LHG (P=0.05). There were differences in the performance in swallowing solids between individuals without a prosthesis and those with a partial prosthesis in the lower dental arch (P=0.04) for the HG. Conclusions: The need for replacement prostheses, the type of prostheses, and the number of teeth of elderly patients poststroke in the chronic phase showed an association with the level of oral intake and the degree of oropharyngeal dysphagia.
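To make the reported correlation analyses concrete, the short sketch below shows one way an association such as "fewer teeth, worse dysphagia rating for liquids" could be computed. It is purely illustrative: the data and column names are hypothetical, and the use of Spearman's rank correlation is an assumption, since the abstract only reports a negative correlation and P-values, not the specific test.

```python
# Hypothetical sketch: rank correlation between number of teeth present and a
# dysphagia severity rating. Data and column names are invented for illustration.
import pandas as pd
from scipy.stats import spearmanr

# Invented example records (one row per participant, e.g. in the LHG)
lhg = pd.DataFrame({
    "teeth_present": [0, 4, 8, 12, 15, 20, 22, 25, 28, 30, 11],
    "liquid_dysphagia_rating": [4, 4, 3, 3, 3, 2, 2, 1, 1, 1, 3],  # higher = worse
})

rho, p_value = spearmanr(lhg["teeth_present"], lhg["liquid_dysphagia_rating"])
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
# A negative rho would mirror the reported pattern: the smaller the number of
# teeth present, the worse the rating of dysphagia for liquids.
```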
7,294
293
[ 2801, 1198, 202, 70, 181, 316 ]
11
[ "dental", "oral", "swallowing", "prostheses", "prosthesis", "need", "scale", "dental prosthesis", "health", "arches" ]
[ "chewing ability elderly", "teeth swallowing performance", "age oral diet", "swallowing function dental", "oral health elderly" ]
[CONTENT] deglutition | mouth rehabilitation | aged | prosthodontics | dysphagia | cerebrovascular disorders [SUMMARY]
[CONTENT] deglutition | mouth rehabilitation | aged | prosthodontics | dysphagia | cerebrovascular disorders [SUMMARY]
[CONTENT] deglutition | mouth rehabilitation | aged | prosthodontics | dysphagia | cerebrovascular disorders [SUMMARY]
[CONTENT] deglutition | mouth rehabilitation | aged | prosthodontics | dysphagia | cerebrovascular disorders [SUMMARY]
[CONTENT] deglutition | mouth rehabilitation | aged | prosthodontics | dysphagia | cerebrovascular disorders [SUMMARY]
[CONTENT] deglutition | mouth rehabilitation | aged | prosthodontics | dysphagia | cerebrovascular disorders [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Brazil | Chronic Disease | Deglutition | Deglutition Disorders | Dental Prosthesis | Diagnosis, Oral | Endoscopy, Gastrointestinal | Female | Humans | Male | Mouth Rehabilitation | Oral Health | Outcome Assessment, Health Care | Risk Factors | Severity of Illness Index | Stroke | Tooth Loss [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Brazil | Chronic Disease | Deglutition | Deglutition Disorders | Dental Prosthesis | Diagnosis, Oral | Endoscopy, Gastrointestinal | Female | Humans | Male | Mouth Rehabilitation | Oral Health | Outcome Assessment, Health Care | Risk Factors | Severity of Illness Index | Stroke | Tooth Loss [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Brazil | Chronic Disease | Deglutition | Deglutition Disorders | Dental Prosthesis | Diagnosis, Oral | Endoscopy, Gastrointestinal | Female | Humans | Male | Mouth Rehabilitation | Oral Health | Outcome Assessment, Health Care | Risk Factors | Severity of Illness Index | Stroke | Tooth Loss [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Brazil | Chronic Disease | Deglutition | Deglutition Disorders | Dental Prosthesis | Diagnosis, Oral | Endoscopy, Gastrointestinal | Female | Humans | Male | Mouth Rehabilitation | Oral Health | Outcome Assessment, Health Care | Risk Factors | Severity of Illness Index | Stroke | Tooth Loss [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Brazil | Chronic Disease | Deglutition | Deglutition Disorders | Dental Prosthesis | Diagnosis, Oral | Endoscopy, Gastrointestinal | Female | Humans | Male | Mouth Rehabilitation | Oral Health | Outcome Assessment, Health Care | Risk Factors | Severity of Illness Index | Stroke | Tooth Loss [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Brazil | Chronic Disease | Deglutition | Deglutition Disorders | Dental Prosthesis | Diagnosis, Oral | Endoscopy, Gastrointestinal | Female | Humans | Male | Mouth Rehabilitation | Oral Health | Outcome Assessment, Health Care | Risk Factors | Severity of Illness Index | Stroke | Tooth Loss [SUMMARY]
[CONTENT] chewing ability elderly | teeth swallowing performance | age oral diet | swallowing function dental | oral health elderly [SUMMARY]
[CONTENT] chewing ability elderly | teeth swallowing performance | age oral diet | swallowing function dental | oral health elderly [SUMMARY]
[CONTENT] chewing ability elderly | teeth swallowing performance | age oral diet | swallowing function dental | oral health elderly [SUMMARY]
[CONTENT] chewing ability elderly | teeth swallowing performance | age oral diet | swallowing function dental | oral health elderly [SUMMARY]
[CONTENT] chewing ability elderly | teeth swallowing performance | age oral diet | swallowing function dental | oral health elderly [SUMMARY]
[CONTENT] chewing ability elderly | teeth swallowing performance | age oral diet | swallowing function dental | oral health elderly [SUMMARY]
[CONTENT] dental | oral | swallowing | prostheses | prosthesis | need | scale | dental prosthesis | health | arches [SUMMARY]
[CONTENT] dental | oral | swallowing | prostheses | prosthesis | need | scale | dental prosthesis | health | arches [SUMMARY]
[CONTENT] dental | oral | swallowing | prostheses | prosthesis | need | scale | dental prosthesis | health | arches [SUMMARY]
[CONTENT] dental | oral | swallowing | prostheses | prosthesis | need | scale | dental prosthesis | health | arches [SUMMARY]
[CONTENT] dental | oral | swallowing | prostheses | prosthesis | need | scale | dental prosthesis | health | arches [SUMMARY]
[CONTENT] dental | oral | swallowing | prostheses | prosthesis | need | scale | dental prosthesis | health | arches [SUMMARY]
[CONTENT] changes | elderly | swallowing | life | resulting | oral | stroke | patients | loss | population [SUMMARY]
[CONTENT] test | fois | dental | dysphagia | fois scale | classification | dental prosthesis | prosthesis | scale | prostheses [SUMMARY]
[CONTENT] individuals | table | fois | arch | scale | shown table | prostheses | fois scale | dental arch | difference [SUMMARY]
[CONTENT] brain | oral | condition | individuals | study | level | condition elderly | oropharyngeal dysphagia | relationship | involvement [SUMMARY]
[CONTENT] dental | oral | swallowing | prosthesis | prostheses | scale | dental prosthesis | need | fois | dental arches [SUMMARY]
[CONTENT] dental | oral | swallowing | prosthesis | prostheses | scale | dental prosthesis | need | fois | dental arches [SUMMARY]
[CONTENT] dysphagia [SUMMARY]
[CONTENT] ||| the Functional Oral Intake Scale | FOIS | dysphagia ||| HG | two | RHG | LHG | n=11 | more than one [SUMMARY]
[CONTENT] FOIS | HG | RHG ||| FOIS | LHG | RHG ||| LHG ||| HG [SUMMARY]
[CONTENT] dysphagia [SUMMARY]
[CONTENT] ||| dysphagia ||| ||| the Functional Oral Intake Scale | FOIS | dysphagia ||| HG | two | RHG | LHG | n=11 | more than one ||| FOIS | HG | RHG ||| FOIS | LHG | RHG ||| LHG ||| HG ||| dysphagia [SUMMARY]
[CONTENT] ||| dysphagia ||| ||| the Functional Oral Intake Scale | FOIS | dysphagia ||| HG | two | RHG | LHG | n=11 | more than one ||| FOIS | HG | RHG ||| FOIS | LHG | RHG ||| LHG ||| HG ||| dysphagia [SUMMARY]
Longitudinal Relationship between Cognitive Function and Health-Related Quality of Life among Middle-Aged and Older Patients with Diabetes in China: Digital Usage Behavior Differences.
36231699
Cognitive function and health-related quality of life (HRQoL) are important issues in diabetes care. According to the China Association for Aging, it is estimated that by 2030, the number of elderly people with dementia in China will reach 22 million. The World Health Organization reports that by 2044, the number of people with diabetes in China is expected to reach 175 million.
BACKGROUND
Cohort analyses were conducted based on 854 diabetic patients aged ≥45 years from the third (2015) and fourth (2018) surveys of the China Health and Retirement Longitudinal Study (CHARLS). Correlation analysis, repeated-measures analysis of variance, and cross-lagged panel models were used to examine whether the established relationship differed by digital usage behavior.
METHODS
The results show that the cognitive function of middle-aged and older diabetic patients is positively correlated with HRQoL. HRQoL at T1 could significantly predict cognitive function at T2 (PCS: B = 0.12, p < 0.01; MCS: B = 0.14, p < 0.01). This relationship is more associated with individual performance than digital usage behavior.
RESULTS
A unidirectional association may exist between cognitive function and HRQoL among middle-aged and older Chinese diabetes patients. In the future, doctors and nurses who recognize a lowering of the self-perceived HRQoL of middle-aged and older diabetic patients can pay closer attention to their cognitive function, in turn strengthening its evaluation, detection, and intervention.
CONCLUSIONS
[ "Aged", "China", "Cognition", "Diabetes Mellitus", "Humans", "Longitudinal Studies", "Middle Aged", "Quality of Life" ]
9566018
1. Introduction
Cognitive impairment is one of the common complications in elderly patients [1]. For example, in China, the prevalence of mild cognitive impairments in the aging population (aged 60 and above) is 14.71% [2], and as age increases, its annual rate of progression to dementia is between 8% and 15% [2]. At the same time, cognitive impairment is also one of the common complications of diabetic patients. It is estimated that by 2045, about 170 million elderly people in China will be diabetic patients [3]. Therefore, the academic community has paid great attention to the cognitive function of the diabetic population, and found that the cognitive function of the diabetic population is closely related to the quality of life (QoL) of the elderly [4]. HRQoL is the perceived physical and mental health of an individual or group over time, including both physical component summary (PCS) and mental component summary (MCS) [5]. In contrast with the QoL, HRQoL pays special attention to the impact of disease and treatment process on the life of a person or a group [6]. Existing studies have found that changes in cognitive function are positively correlated with changes in HRQoL, and play a predictive role in future changes in HRQoL [7,8]. For example, cognitive decline was found to be a predictor of HRQoL decline in studies on multiple sclerosis patients, AIDS patients, and older women [4,9,10]. At the same time, some scholars have found that changes in HRQoL—whether it is the PCS or the MCS of HRQoL—can also predict cognitive changes in individuals in the future [11]. For example, Ezzati’s 2019 study demonstrated that changes in HRQoL preceded changes in cognition and predicted the occurrence of dementia [12]. Thus, cognitive function may be bi-directionally associated with HRQoL. The bidirectional relationship between cognitive function and HRQoL may be more pronounced in terms of diabetic patients. This is because not only are people with diabetes 1–2 times more likely to develop cognitive risks than the general population [13,14], but this risk will increase over time [15]. At the same time, with the deepening of the research on HRQoL, the medical community generally believes that the core of diabetes management should include HRQoL maintenance in addition to prevention and delay of its complications [16]. In summary, this research aims to focus on middle-aged and older diabetic patients (over 45 years old) in China—not only because China has one of the largest numbers of diabetic patients in the world, but also because the age of the population affected with diabetes is showing a downward trend [3,17]. Thus, we propose the first hypothesis: in middle-aged and older Chinese patients with diabetes, the relationship between cognitive function and HRQoL may be bidirectional. To our knowledge, previous studies on cognitive function and HRQoL have focused on unmodifiable clinical factors, such as age, gender, etc., with a lack of studies on modifiable factors [4]. Digital technologies such as the Internet and smartphones have attracted the attention of psychologists because of their portability, rapidity, and immediacy. In clinical medicine, digital usage behaviors are often used for managing and intervening with patients with diabetes and cognitive dysfunction [18]. Thus, digital technology may be seen as a modifiable factor in the relationship between cognitive function and HRQoL. 
It has also been found that, with the continuous development and improvement of digital technology, digital usage behavior can have a profound impact on the cognitive function of patients with chronic diseases [18,19,20]. In addition, digital usage behavior, such as Internet use, can also have a certain impact on HRQoL [21]. Previous research in other populations found that using the Internet resulted in significantly higher PCS and physical pain scores [22]. According to self-determination theory, we believe that when individuals use the internet for leisure or work, their needs are met, which in turn produces positive long-term psychological outcomes such as quality of life [23]. So far, existing research leans towards the belief that an increase in digital usage behavior can improve the health of individuals [24]. However, there are also studies holding the opposite view; they argue that long-term digital usage behavior reduces patients' time for outdoor activities, and that the large amount of negative information received through digital usage is more likely to cause mood swings, which may have side effects on patients' recovery [25]. Therefore, digital usage behavior is likely to have mixed effects, so we propose another hypothesis: in middle-aged and older Chinese patients with diabetes, the relationship between cognitive function and HRQoL may differ depending on digital usage behavior. To date, most studies on the association of cognitive function with HRQoL have used cross-sectional designs, and have failed to elucidate the direction of influence between cognitive function and HRQoL, owing to the limitations of cross-sectional studies in making causal inferences and explaining the direction of association. Therefore, it is necessary to further explore, especially for diabetic patients, the longitudinal association between cognitive function and HRQoL, and to clarify the direction of their effects. As a longitudinal study, this study uses a cross-lagged model to explore the relationship between cognitive function and HRQoL in Chinese middle-aged and older diabetic patients, the direction of this relationship, and whether it differs by digital usage behavior. This research attempts to explore the longitudinal relationship between cognition and HRQoL in middle-aged and older diabetic patients, and to provide evidence which can promote their health. It should be noted that, owing to the existence of multiple databases in China, most scholars use databases such as the China Household Finance Survey, in addition to the China Health and Retirement Longitudinal Survey (CHARLS), when researching topics related to public health [26]. CHARLS contains richer content regarding the physical and mental health of middle-aged and elderly people, so we chose it as our data source.
null
null
3. Results
3.1. Descriptive Statistics
The variable descriptive statistics and correlation analysis results are shown in Table 3 and Table 4. The correlation analysis shows that there is a significant correlation between cognition at T1 and T2 and HRQoL levels at T1 and T2, indicating that cognitive function and HRQoL levels in middle-aged and older diabetic patients have a certain, stable relationship. Meanwhile, the simultaneous and sequential correlations between cognitive function and HRQoL levels are also significant. The correlation coefficients between cognitive function and HRQoL at T1 are 0.28 (p < 0.01) and 0.37 (p < 0.01); the correlation coefficients between cognitive function at T2 and HRQoL at T2 are 0.32 (p < 0.01) and 0.30 (p < 0.01); the correlation coefficients between cognitive function at T1 and HRQoL at T2 are 0.25 (p < 0.01) and 0.22 (p < 0.01); and the correlation coefficients between HRQoL at T1 and cognitive function at T2 are 0.32 (p < 0.01) and 0.36 (p < 0.01). This indicates that cognitive function shows largely consistent synchronous and stable correlations with HRQoL levels, which is in line with the basic assumptions of the cross-lagged design.
3.2. Stability Analysis of Cognitive Function and HRQoL
With cognitive function as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated-measures analysis of variance was performed. The results show that testing time is the main and most significant effect (F = 11.21, p < 0.001, η2 = 0.01); the cognitive function level at T2 is significantly lower than at T1, and there are developmental differences. The main effect of digital usage behavior is significant (F = 36.325, p < 0.001, η2 = 0.06), and the cognitive function of patients using digital technology is significantly better than that of patients not using digital technology. The interaction between the two is not significant (F = 2.22, p > 0.05, η2 = 0.04). With PCS as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated-measures analysis of variance was performed. The results show that testing time is the main and most significant effect (F = 16.25, p < 0.001, η2 = 0.03); the PCS level of the post-test is significantly lower than that of the pre-test, and there is a developmental difference. The main effect of digital usage behavior is significant (F = 36.54, p < 0.001, η2 = 0.06), and the PCS of patients using digital technology is significantly better than that of patients not using digital technology. The interaction between the two is not significant (F = 0.18, p > 0.05, η2 = 0.00). With MCS as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated-measures analysis of variance was performed. The results show that testing time is the main and most significant effect (F = 44.93, p < 0.001, η2 = 0.07); the MCS level of the post-test is significantly lower than that of the pre-test, and there is a developmental difference. The main effect of digital usage behavior is significant (F = 50.97, p < 0.001, η2 = 0.08), and the MCS of patients using digital technology is significantly better than that of patients not using digital technology. The interaction between the two is significant (F = 4.43, p < 0.05, η2 = 0.08).
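As a rough illustration of the 2 × 2 design described above (test time as the within-subject factor, digital usage behavior as the between-subject factor), the sketch below runs a mixed-design analysis of variance in Python with the pingouin package. It is only a sketch under assumptions: the original analyses were run in SPSS, the data here are simulated, and all variable names are invented.

```python
# Illustrative sketch of a 2 (time: T1/T2) x 2 (digital use: yes/no)
# mixed-design ANOVA, analogous to the analyses reported in Section 3.2.
# Simulated data; the study itself used IBM SPSS Statistics.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 200  # hypothetical number of patients

long_rows = []
for pid in range(n):
    digital_use = rng.integers(0, 2)                 # 0 = non-user, 1 = user
    base = 15 + 2.0 * digital_use + rng.normal(0, 2)  # users score a bit higher
    for time, drop in (("T1", 0.0), ("T2", 0.8)):     # small decline at T2
        long_rows.append({
            "id": pid,
            "time": time,
            "digital_use": "use" if digital_use else "non-use",
            "cognition": base - drop + rng.normal(0, 1),
        })
df = pd.DataFrame(long_rows)

# Within-subject factor: time; between-subject factor: digital_use
aov = pg.mixed_anova(data=df, dv="cognition", within="time",
                     subject="id", between="digital_use")
print(aov[["Source", "F", "p-unc", "np2"]])
```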
3.3. Cross-Lagged Analysis of Cognitive Function and HRQoL
In the cross-lagged model applied in this study, HRQoL at T2 was predicted by cognitive function at T1, and cognitive function at T2 was predicted by HRQoL at T1. After the corresponding control variables were added to the models, both models achieved acceptable fit criteria (Model 1: RMSEA = 0.093, CFI = 0.994, TCL = 0.912; Model 2: RMSEA = 0.098, CFI = 0.984, TCL = 0.901). The cross-lagged relationship between cognitive function and PCS scores at the two points in time is shown by the test of Model 1 in Figure 2. First, within the same time period, the baseline association between cognitive function and PCS is significantly positive (B = 0.19, p < 0.01), implying better cognitive performance in middle-aged and older diabetic patients with higher PCS scores at T1, and vice versa. Second, there is no significant correlation between cognitive function at T1 and PCS at T2 (B = 0.05, p > 0.01), but PCS at T1 is positively correlated with cognitive function at T2 (B = 0.12, p < 0.01). This suggests that there is a positive and significant relationship between PCS and cognitive function in middle-aged and older diabetic patients over time, but that early cognitive function in these patients does not affect later PCS. The cross-lagged relationship between cognitive function and MCS scores at the two points in time is shown in Model 2 of Figure 2. First, within the same time period, the baseline association between cognitive function and MCS is significantly positive (B = 0.31, p < 0.01), which implies better cognitive performance in middle-aged and older diabetic patients with higher MCS scores at T1, and vice versa. Second, there is no significant correlation between cognitive function at T1 and MCS at T2 (B = 0.05, p > 0.01), but there is a significant positive correlation between MCS at T1 and cognitive function at T2 (B = 0.14, p < 0.01). This suggests that there is a positive and significant relationship between MCS and cognitive function in middle-aged and older diabetic patients over time, but that early cognitive function in these patients does not affect later MCS. In conclusion, there may be a one-way causal relationship between cognitive function and HRQoL, with the causal direction running from HRQoL to cognition.
3.4. Heterogeneity Analysis
In order to investigate whether the cross-lagged relationship between cognitive function and HRQoL differs by digital usage behavior, the study performed a multi-group analysis. All diabetic patients were divided into two groups, one representing non-digital usage behavior (Figure 3) and the other representing digital usage behavior (Figure 4). In the grouped cross-lagged models, the corresponding control variables were also added. The models in Figure 3 and Figure 4 met fit criteria (Figure 3: Model 3: RMSEA = 0.073, CFI = 0.996, TCL = 0.901; Model 4: RMSEA = 0.100, CFI = 0.990, TCL = 0.936. Figure 4: Model 5: RMSEA = 0.073, CFI = 0.996, TCL = 0.941; Model 6: RMSEA = 0.079, CFI = 0.970, TCL = 0.936). In both sets of models, the study focused on the relationship between cognitive function and HRQoL at a point in time, as well as their relationship at the time of the follow-up. For middle-aged and older diabetic patients using digital technology, although the model could be fitted, cognitive function at T1 did not significantly predict HRQoL at T2 (PCS: B = 0.30, p > 0.01; MCS: B = 0.14, p > 0.01), and vice versa (PCS: B = 0.06, p > 0.01; MCS: B = 0.05, p > 0.01) (see Figure 3). For diabetic patients who did not use digital technology, although T1 cognitive function did not significantly predict T2 HRQoL (PCS: B = 0.04, p > 0.01; MCS: B = 0.05, p > 0.01), T1 HRQoL significantly predicted T2 cognitive function (PCS: B = 0.10, p < 0.01; MCS: B = 0.11, p < 0.01), and the results were consistent with those across all diabetic patients (see Figure 4). This shows that the cross-lagged model of cognitive function and HRQoL in middle-aged and older diabetic patients varies with the use of digital technology, that is, this relationship is affected by digital usage behavior.
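To clarify the structure of the cross-lagged analysis reported above, the sketch below approximates the two lagged paths with ordinary least squares regressions in statsmodels: T2 cognition regressed on T1 HRQoL while adjusting for T1 cognition, and T2 HRQoL regressed on T1 cognition while adjusting for T1 HRQoL. This is a simplified stand-in, not the authors' procedure: the study fitted full cross-lagged panel structural equation models in Mplus with additional covariates, and every column name and number below is hypothetical.

```python
# Simplified cross-lagged sketch using two OLS regressions.
# The original analysis used cross-lagged panel SEM (Mplus); this only
# illustrates the logic of the two lagged paths. Column names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 854  # sample size reported in the abstract

# Simulated standardized scores at the two waves
cog_t1 = rng.normal(size=n)
pcs_t1 = 0.3 * cog_t1 + rng.normal(size=n)
cog_t2 = 0.5 * cog_t1 + 0.12 * pcs_t1 + rng.normal(size=n)  # earlier HRQoL feeds later cognition
pcs_t2 = 0.5 * pcs_t1 + 0.00 * cog_t1 + rng.normal(size=n)  # earlier cognition does not feed later HRQoL
df = pd.DataFrame(dict(cog_t1=cog_t1, pcs_t1=pcs_t1, cog_t2=cog_t2, pcs_t2=pcs_t2))

# Path 1: does T1 HRQoL (PCS) predict T2 cognition beyond T1 cognition?
path_to_cog = smf.ols("cog_t2 ~ cog_t1 + pcs_t1", data=df).fit()
# Path 2: does T1 cognition predict T2 HRQoL (PCS) beyond T1 HRQoL?
path_to_pcs = smf.ols("pcs_t2 ~ pcs_t1 + cog_t1", data=df).fit()

print(path_to_cog.params[["pcs_t1"]], path_to_cog.pvalues[["pcs_t1"]])
print(path_to_pcs.params[["cog_t1"]], path_to_pcs.pvalues[["cog_t1"]])
```

In a full SEM treatment, both equations are estimated jointly with the control variables, and overall fit is summarized with indices such as RMSEA and CFI, as reported above.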
5. Conclusions
The cognitive function and HRQoL (PCS and MCS) of middle-aged and older Chinese diabetic patients show a certain degree of stability, and both tended to decline over time. More importantly, this study found that HRQoL at T1 can significantly predict cognitive function at T2, but cognitive function at T1 cannot significantly predict HRQoL at T2. HRQoL may therefore be an antecedent of cognitive function in middle-aged and older diabetic patients, and this predictive relationship can differ depending on digital usage behavior. In the future, effective intervention on the HRQoL of middle-aged and older diabetic patients can be considered as a way to improve their cognitive function, thereby promoting their healthy development.
[ "2. Materials and Methods", "2.2. Measurements", "2.2.1. Cognitive Functioning", "2.2.2. HRQoL", "2.2.3. Socio-Demographic Variables", "2.3. Statistical Methods", "3.1. Descriptive Statistics", "3.2. Stability Analysis of Cognitive Function and HRQoL", "3.3. Cross-Lagged Analysis of Cognitive Function and HRQoL", "3.4. Heterogeneity Analysis" ]
[ "2.1. Participants The China Health and Retirement Longitudinal Survey (CHARLS) is a representative follow-up survey of middle-aged and older people in China, chaired by the National Research Institute of Development at Peking University. The study protocol was approved by the Institutional Review Board of Peking University (approval number: IRB00001052-11015), and the study protocol complies with the ethical guidelines of the 1975 Declaration of Helsinki. In order to obtain the relevant data, we applied for the CHARLS database online on 31 August 2022, and approval was quickly obtained. This study used two waves of CHARLS data, from 2015 (T1) and 2018 (T2). CHARLS data can be accessed through its official website (https://charls.pku.edu.cn (accessed on 31 August 2022)).\nFigure 1 shows the detailed process for including and excluding study participants. Wave 3 was used as the baseline data set for this study, and individuals who lacked information on HRQoL, cognitive function, digital usage behavior, and covariates were excluded from this study. Subsequently, new participants in 2018 were further excluded. Individuals lacking information on digital usage behavior, cognitive function, and covariates were also excluded from this study. In addition, in the sample selection, the research team excluded individuals with other diseases (such as cognitive diseases that may affect the detection process, mental diseases, and other diseases related to aging), and included individuals with diabetes only. In addition, the abbreviations of the main variables involved in the study are organized in Table 1.\nThe China Health and Retirement Longitudinal Survey (CHARLS) is a representative follow-up survey of middle-aged and older people in China, chaired by the National Research Institute of Development at Peking University. The study protocol was approved by the Institutional Review Board of Peking University (approval number: IRB00001052-11015), and the study protocol complies with the ethical guidelines of the 1975 Declaration of Helsinki. In order to obtain the relevant data, we applied for the CHARLS database online on 31 August 2022, and approval was quickly obtained. This study used two waves of CHARLS data, from 2015 (T1) and 2018 (T2). CHARLS data can be accessed through its official website (https://charls.pku.edu.cn (accessed on 31 August 2022)).\nFigure 1 shows the detailed process for including and excluding study participants. Wave 3 was used as the baseline data set for this study, and individuals who lacked information on HRQoL, cognitive function, digital usage behavior, and covariates were excluded from this study. Subsequently, new participants in 2018 were further excluded. Individuals lacking information on digital usage behavior, cognitive function, and covariates were also excluded from this study. In addition, in the sample selection, the research team excluded individuals with other diseases (such as cognitive diseases that may affect the detection process, mental diseases, and other diseases related to aging), and included individuals with diabetes only. In addition, the abbreviations of the main variables involved in the study are organized in Table 1.\n2.2. Measurements 2.2.1. Cognitive Functioning This study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. 
CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].\nThis study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].\n2.2.2. HRQoL This study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. 
Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].\nThis study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].\n2.2.3. Socio-Demographic Variables In order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables.\nIn order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. 
In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables.\n2.2.1. Cognitive Functioning This study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].\nThis study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].\n2.2.2. HRQoL This study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. 
The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].\nThis study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].\n2.2.3. Socio-Demographic Variables In order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. 
In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables.\nIn order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables.\n2.3. Statistical Methods IBM SPSS Statistics, version 23 (IBM SPSS Statistics, Armonk, NY, USA) and Mplus version 8.0 are used for data analysis. Descriptive analyses of diabetic patient characteristics, cognitive function, and PCM and MCS of HRQoL were performed. The relationship between cognitive function and HRQoL in diabetic patients at T1 and T2 was assessed using the Pearson correlation test. A repeated-measures analysis of variance was performed to assess the differences between these variables with and without the digital usage behavior. Furthermore, cross-lagged panel structural equation modeling (SEM) was used to assess the longitudinal association between cognition and HRQoL at T1 and T2. Finally, a multi-group test was performed to assess whether digital usage behavior was making a difference in this relationship. Finally, regarding the grouping of digital technologies as well, previous studies were referred to [24]. Digital technology usage was obtained from 2015 baseline data. In the CHARLS 2015 questionnaire, internet usage was measured using the following question whether respondents have used the internet in the last month (0 for no, 1 for yes).\nIBM SPSS Statistics, version 23 (IBM SPSS Statistics, Armonk, NY, USA) and Mplus version 8.0 are used for data analysis. Descriptive analyses of diabetic patient characteristics, cognitive function, and PCM and MCS of HRQoL were performed. The relationship between cognitive function and HRQoL in diabetic patients at T1 and T2 was assessed using the Pearson correlation test. A repeated-measures analysis of variance was performed to assess the differences between these variables with and without the digital usage behavior. Furthermore, cross-lagged panel structural equation modeling (SEM) was used to assess the longitudinal association between cognition and HRQoL at T1 and T2. Finally, a multi-group test was performed to assess whether digital usage behavior was making a difference in this relationship. Finally, regarding the grouping of digital technologies as well, previous studies were referred to [24]. Digital technology usage was obtained from 2015 baseline data. In the CHARLS 2015 questionnaire, internet usage was measured using the following question whether respondents have used the internet in the last month (0 for no, 1 for yes).", "2.2.1. Cognitive Functioning This study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. 
, "2.2.1. Cognitive Functioning This study obtained data on cognitive function from the 2015 baseline data and the 2018 follow-up data. The baseline wave stipulated by CHARLS is from 2011, but we use CHARLS 2015 (T1) as the baseline in our study. The CHARLS cognitive assessment is similar to that used in the American Health and Retirement Study, and constructs its cognitive function evaluation criteria from the same two aspects, memory and mental state [27,28]. Previous studies have also used the CHARLS cognitive function assessment criteria in comparable research. Memory is evaluated with immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured along three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction (0–1 points) is assessed by asking respondents to reproduce a previously displayed picture; and mathematical performance (0–5 points) is measured by asking respondents to serially subtract 7 from 100 five times. A participant's cognitive function score equals the sum of the memory and mental state scores, ranging from 0 to 31 points, with higher scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study added up the memory and mental state scores and then normalized and averaged them [29].
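To make the scoring rule concrete, a minimal sketch follows. The paper does not publish code, so the column names and the z-score reading of "normalized and averaged" are assumptions.

```python
# Hedged sketch of the cognitive score described in Section 2.2.1.
# Column names and the z-score normalization step are assumptions;
# the text only states that the score was "normalized and averaged" [29].
import pandas as pd

def cognitive_scores(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    # Memory: immediate (0-10) + delayed (0-10) word recall.
    out["memory"] = df["immediate_recall"] + df["delayed_recall"]
    # Mental state: orientation (0-5) + visual construction (0-1) + serial sevens (0-5).
    out["mental_state"] = df["orientation"] + df["visual_construction"] + df["serial_sevens"]
    # Total cognitive function score, 0-31 points.
    out["total"] = out["memory"] + out["mental_state"]
    # One plausible reading of "normalized and averaged": z-score each component
    # across the sample and average the two standardized scores.
    components = out[["memory", "mental_state"]]
    z = (components - components.mean()) / components.std()
    out["composite"] = z.mean(axis=1)
    return out
```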
2.2.2. HRQoL This study constructed a new scale based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of the SF-36, and the corresponding CHARLS variables were selected to assess the following eight dimensions (Table 2): physical function (PF), role–physical (RP), bodily pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotional (RE), and mental health (MH). The scores for these eight dimensions were calculated by summing the item scores and then converting the raw scores to a 0–100 scale. Scores from the eight subscales were aggregated into two summary scores according to the conceptual model of the SF-36 [30]: physical function, role–physical, bodily pain, and general health perceptions were combined into the PCS, and mental health, vitality, role–emotional, and social functioning were combined into the MCS. Although the CHARLS-based HRQoL questionnaire differs slightly from other HRQoL instruments, they share a similar focus on physical, emotional, and social domains. The questionnaire has been shown to be valid in the Chinese population [31] and has already been used in related research [32].
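A minimal sketch of the 0–100 rescaling and the PCS/MCS aggregation is given below. The item-to-dimension mapping, the raw score ranges, and the unweighted averaging are assumptions, since the text reports only the conceptual model.

```python
# Hedged sketch of the HRQoL scoring described in Section 2.2.2.
# Dimension item lists, raw score ranges, and unweighted averaging are assumptions.
import pandas as pd

# Hypothetical raw score ranges for each SF-36-style dimension built from CHARLS items.
RANGES = {"PF": (0, 20), "RP": (0, 4), "BP": (0, 10), "GH": (0, 5),
          "VT": (0, 8), "SF": (0, 6), "RE": (0, 4), "MH": (0, 12)}
PCS_DIMS = ["PF", "RP", "BP", "GH"]   # physical component summary
MCS_DIMS = ["MH", "VT", "RE", "SF"]   # mental component summary

def to_0_100(raw: pd.Series, lo: float, hi: float) -> pd.Series:
    """Linearly transform a raw dimension score to a 0-100 scale."""
    return (raw - lo) / (hi - lo) * 100.0

def summary_scores(df: pd.DataFrame) -> pd.DataFrame:
    scaled = pd.DataFrame({d: to_0_100(df[d], lo, hi) for d, (lo, hi) in RANGES.items()})
    return pd.DataFrame({
        "PCS": scaled[PCS_DIMS].mean(axis=1),
        "MCS": scaled[MCS_DIMS].mean(axis=1),
    })
```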
2.2.3. Socio-Demographic Variables In order to minimize the influence of other variables on the cognitive function–HRQoL relationship, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. Following previous studies, all covariates were taken from the baseline data (CHARLS 2015) [32]. Firstly, the demographic control variables were age, gender, and education status. Secondly, since HRQoL is divided into two parts, PCS and MCS, different control variables were used in the models for cognitive function, PCS, and MCS. Depression and current smoking and drinking habits were included as control variables for the PCS scores, while marital status, depression, physical activity, and current smoking and drinking habits were included as control variables for the MCS scores."
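The covariate sets described above can be summarized as a small configuration mapping. The variable names are hypothetical, and treating the cognitive function model as using only the demographic controls is an assumption, since the text does not list additional covariates for that model.

```python
# Hedged sketch of the covariate sets described in Section 2.2.3.
# Variable names are hypothetical; all covariates come from the 2015 baseline wave.
COVARIATES = {
    # Demographics only is an assumption for the cognition model.
    "cognition": ["age", "gender", "education"],
    "PCS": ["age", "gender", "education", "depression", "smoking", "drinking"],
    "MCS": ["age", "gender", "education", "marital_status", "depression",
            "physical_activity", "smoking", "drinking"],
}
```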
, "The variable descriptive statistics and correlation analysis results are shown in Table 3 and Table 4. The correlation analysis shows that cognition at T1 and T2 is significantly correlated with HRQoL at T1 and T2, indicating that cognitive function and HRQoL in middle-aged and older diabetic patients are related and that this relationship is stable over time. Both the simultaneous and the sequential correlations between cognitive function and HRQoL are significant. The correlation coefficients between cognitive function and HRQoL at T1 are 0.28 (p < 0.01) and 0.37 (p < 0.01); between cognitive function at T2 and HRQoL at T2, 0.32 (p < 0.01) and 0.30 (p < 0.01); between cognitive function at T1 and HRQoL at T2, 0.25 (p < 0.01) and 0.22 (p < 0.01); and between HRQoL at T1 and cognitive function at T2, 0.32 (p < 0.01) and 0.36 (p < 0.01). This indicates that the synchronous and stable correlations between cognitive function and HRQoL are largely consistent, in line with the basic assumptions of the cross-lagged design."
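The stability analyses reported next are 2 (time) × 2 (digital usage) designs with repeated measures on time. A rough Python illustration of such a mixed ANOVA is sketched below; the paper used SPSS, and the long-format layout and column names are assumptions.

```python
# Hedged sketch of the 2 (time: T1/T2) x 2 (digital use: yes/no) analysis
# reported in Section 3.2. The published analysis was run in SPSS; pingouin's
# mixed ANOVA is used here only for illustration, and column names are hypothetical.
import pandas as pd
import pingouin as pg

# Expected long format: one row per person per wave,
# with columns id, time ("T1"/"T2"), internet_use (0/1), score.
long_df = pd.read_csv("cognition_long.csv")

aov = pg.mixed_anova(data=long_df, dv="score", within="time",
                     between="internet_use", subject="id")
print(aov[["Source", "F", "p-unc", "np2"]])  # main effects and interaction
```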
, "With cognitive function as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated-measures analysis of variance was performed. The results show a significant main effect of testing time (F = 11.21, p < 0.001, η2 = 0.01): the cognitive function level at T2 is significantly lower than at T1, indicating a developmental difference. The main effect of digital usage behavior is also significant (F = 36.325, p < 0.001, η2 = 0.06); the cognitive function of patients using digital technology is significantly better than that of patients not using digital technology. The interaction between the two is not significant (F = 2.22, p > 0.05, η2 = 0.04).\nWith PCS as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated-measures analysis of variance was performed. The main effect of testing time is significant (F = 16.25, p < 0.001, η2 = 0.03): the PCS level at the post-test is significantly lower than at the pre-test, indicating a developmental difference. The main effect of digital usage behavior is significant (F = 36.54, p < 0.001, η2 = 0.06); the PCS of patients using digital technology is significantly better than that of patients not using digital technology. The interaction between the two is not significant (F = 0.18, p > 0.05, η2 = 0.00).\nWith MCS as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated-measures analysis of variance was performed. The main effect of testing time is significant (F = 44.93, p < 0.001, η2 = 0.07): the MCS level at the post-test is significantly lower than at the pre-test, indicating a developmental difference. The main effect of digital usage behavior is significant (F = 50.97, p < 0.001, η2 = 0.08); the MCS of patients using digital technology is significantly better than that of patients not using digital technology. The interaction between the two is significant (F = 4.43, p < 0.05, η2 = 0.08).", "In the cross-lagged models applied in this study, HRQoL at T2 was predicted by cognitive function at T1, and cognitive function at T2 was predicted by HRQoL at T1. After the corresponding control variables were added, both models reached acceptable fit criteria (Model 1: RMSEA = 0.093, CFI = 0.994, TLI = 0.912; Model 2: RMSEA = 0.098, CFI = 0.984, TLI = 0.901).\nThe cross-lagged relationship between cognitive function and PCS scores at the two time points is shown for Model 1 in Figure 2. First, within the same time period, the baseline association between cognitive function and PCS is significantly positive (B = 0.19, p < 0.01), implying better cognitive performance in middle-aged and older diabetic patients with higher PCS scores at T1, and vice versa. Second, cognitive function at T1 is not significantly correlated with PCS at T2 (B = 0.05, p > 0.01), but PCS at T1 is positively correlated with cognitive function at T2 (B = 0.12, p < 0.01). This suggests that PCS is positively and significantly related to later cognitive function in middle-aged and older diabetic patients, whereas early cognitive function does not affect later PCS.\nThe cross-lagged relationship between cognitive function and MCS scores at the two time points is shown for Model 2 in Figure 2. First, within the same time period, the baseline association between cognitive function and MCS is significantly positive (B = 0.31, p < 0.01), implying better cognitive performance in middle-aged and older diabetic patients with higher MCS scores at T1, and vice versa. Second, cognitive function at T1 is not significantly correlated with MCS at T2 (B = 0.05, p > 0.01), but MCS at T1 is significantly positively correlated with cognitive function at T2 (B = 0.14, p < 0.01).
This suggests that MCS is positively and significantly related to later cognitive function in middle-aged and older diabetic patients, whereas early cognitive function does not affect later MCS. In conclusion, there may be a one-way causal relationship between cognitive function and HRQoL, with the direction running from HRQoL to cognition.", "In order to investigate whether the cross-lagged relationship between cognitive function and HRQoL differs by digital usage behavior, the study performed a multi-group analysis. All diabetic patients were divided into two groups, one representing non-digital usage behavior (Figure 3) and the other representing digital usage behavior (Figure 4). The corresponding control variables were also added to the grouped cross-lagged models. The models in Figure 3 and Figure 4 met the fit criteria (Figure 3: Model 3: RMSEA = 0.073, CFI = 0.996, TLI = 0.901; Model 4: RMSEA = 0.100, CFI = 0.990, TLI = 0.936. Figure 4: Model 5: RMSEA = 0.073, CFI = 0.996, TLI = 0.941; Model 6: RMSEA = 0.079, CFI = 0.970, TLI = 0.936). In both sets of models, the study focused on the relationship between cognitive function and HRQoL at each time point, as well as their relationship at follow-up.\nFor middle-aged and older diabetic patients using digital technology, although the model could be fitted, cognitive function at T1 did not significantly predict HRQoL at T2 (PCS: B = 0.30, p > 0.01; MCS: B = 0.14, p > 0.01), and vice versa (PCS: B = 0.06, p > 0.01; MCS: B = 0.05, p > 0.01) (see Figure 3). For diabetic patients who did not use digital technology, although T1 cognitive function did not significantly predict T2 HRQoL (PCS: B = 0.04, p > 0.01; MCS: B = 0.05, p > 0.01), T1 HRQoL significantly predicted T2 cognitive function (PCS: B = 0.10, p < 0.01; MCS: B = 0.11, p < 0.01), and the results were consistent with those across all diabetic patients (see Figure 4). This shows that the cross-lagged model of cognitive function and HRQoL in middle-aged and older diabetic patients varies with the use of digital technology; that is, this relationship is affected by digital usage behavior." ]
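The cross-lagged panel models and the multi-group comparison above were estimated in Mplus 8.0. Purely to illustrate the model form, a rough sketch using the Python semopy package is given below; the variable names, the reduced covariate set, and the simple per-group loop are assumptions and do not reproduce the published estimates.

```python
# Hedged sketch of a cross-lagged panel model of the same form as Model 1
# (cognition and PCS at T1/T2). The published models were fitted in Mplus 8.0;
# semopy is used here only to illustrate the structure, and variable names
# plus the simplified covariate set are assumptions.
import pandas as pd
from semopy import Model

desc = """
cognition_t2 ~ cognition_t1 + pcs_t1 + age + gender + education
pcs_t2 ~ pcs_t1 + cognition_t1 + age + gender + education
cognition_t1 ~~ pcs_t1
"""

df = pd.read_csv("charls_diabetes_panel.csv")  # hypothetical analysis file

# Multi-group idea: fit the same specification separately for internet users
# and non-users and compare the cross-lagged paths.
for label, group in df.groupby("internet_use"):
    model = Model(desc)
    model.fit(group)
    print(f"internet_use = {label}")
    print(model.inspect())  # path estimates, standard errors, p-values
```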
[ "1. Introduction", "2. Materials and Methods", "2.1. Participants", "2.2. Measurements", "2.2.1. Cognitive Functioning", "2.2.2. HRQoL", "2.2.3. Socio-Demographic Variables", "2.3. Statistical Methods", "3. Results", "3.1. Descriptive Statistics", "3.2. Stability Analysis of Cognitive Function and HRQoL", "3.3. Cross-Lagged Analysis of Cognitive Function and HRQoL", "3.4. Heterogeneity Analysis", "4. Discussion", "5. Conclusions" ]
[ "Cognitive impairment is one of the common complications in elderly patients [1]. For example, in China, the prevalence of mild cognitive impairments in the aging population (aged 60 and above) is 14.71% [2], and as age increases, its annual rate of progression to dementia is between 8% and 15% [2]. At the same time, cognitive impairment is also one of the common complications of diabetic patients. It is estimated that by 2045, about 170 million elderly people in China will be diabetic patients [3]. Therefore, the academic community has paid great attention to the cognitive function of the diabetic population, and found that the cognitive function of the diabetic population is closely related to the quality of life (QoL) of the elderly [4].\nHRQoL is the perceived physical and mental health of an individual or group over time, including both physical component summary (PCS) and mental component summary (MCS) [5]. In contrast with the QoL, HRQoL pays special attention to the impact of disease and treatment process on the life of a person or a group [6]. Existing studies have found that changes in cognitive function are positively correlated with changes in HRQoL, and play a predictive role in future changes in HRQoL [7,8]. For example, cognitive decline was found to be a predictor of HRQoL decline in studies on multiple sclerosis patients, AIDS patients, and older women [4,9,10]. At the same time, some scholars have found that changes in HRQoL—whether it is the PCS or the MCS of HRQoL—can also predict cognitive changes in individuals in the future [11]. For example, Ezzati’s 2019 study demonstrated that changes in HRQoL preceded changes in cognition and predicted the occurrence of dementia [12]. Thus, cognitive function may be bi-directionally associated with HRQoL.\nThe bidirectional relationship between cognitive function and HRQoL may be more pronounced in terms of diabetic patients. This is because not only are people with diabetes 1–2 times more likely to develop cognitive risks than the general population [13,14], but this risk will increase over time [15]. At the same time, with the deepening of the research on HRQoL, the medical community generally believes that the core of diabetes management should include HRQoL maintenance in addition to prevention and delay of its complications [16]. In summary, this research aims to focus on middle-aged and older diabetic patients (over 45 years old) in China—not only because China has one of the largest numbers of diabetic patients in the world, but also because the age of the population affected with diabetes is showing a downward trend [3,17]. Thus, we propose the first hypothesis: in middle-aged and older Chinese patients with diabetes, the relationship between cognitive function and HRQoL may be bidirectional.\nTo our knowledge, previous studies on cognitive function and HRQoL have focused on unmodifiable clinical factors, such as age, gender, etc., with a lack of studies on modifiable factors [4]. Digital technologies such as the Internet and smartphones have attracted the attention of psychologists because of their portability, rapidity, and immediacy. In clinical medicine, digital usage behaviors are often used for managing and intervening with patients with diabetes and cognitive dysfunction [18]. Thus, digital technology may be seen as a modifiable factor in the relationship between cognitive function and HRQoL. 
We also found that with the continuous development and improvement of digital technology, digital usage behavior can have a profound impact on the cognitive function of patients with chronic diseases [18,19,20]. In addition to this, it is important that digital usage behavior, such as Internet use, can also have a certain impact on HRQoL [21]. Previous research in other populations found that using the Internet resulted in significantly higher PCS and physical pain scores [22]. According to self-determination theory, we believe that when individuals use the internet for leisure or work, their needs are met, which in turn produces positive long-term psychological outcomes such as quality of life [23]. So far, existing research is leaning towards the belief that an increase in digital usage behavior can improve the health of individuals [24]. However, there are also studies holding the opposite view; they believe that the long-term digital usage behavior reduces patients’ time for outdoor activities, and that a large amount of negative information received due to digital usage behavior is more likely to cause mood swings, which may have side effects on patients’ recovery [25]. Therefore, it is easy to find that digital usage behavior is likely to have various effects, so we propose another hypothesis: in middle-aged and older Chinese patients with diabetes, the relationship between cognitive function and HRQoL may differ considering digital usage behavior.\nTo date, most studies on the association of cognitive function with HRQoL have used cross-sectional designs, and have failed to elucidate the direction of influence between cognitive function and HRQoL due to the limitations of cross-sectional studies in terms of making causal inferences and explaining the direction of association. Therefore, it is necessary to further explore, especially for diabetic patients, the longitudinal association between the cognitive function and HRQoL, and to clarify the direction of their effects. As a longitudinal study, this study attempts to use a cross-lag model to explore the relationship between cognitive function and HRQoL in Chinese middle-aged and older diabetic patients, as well as the direction of this relationship and whether there were differences in digital usage behavior. This research is attempting to explore the longitudinal relationship between cognition and HRQoL in middle-aged and older diabetic patients, and to provide evidence which can promote their health.\nIt should be noted that, due to the existence of multiple databases in China, in addition to the China Health and Retirement Longitudinal Survey (CHARLS), most scholars use databases such as the China Household Finance Survey when researching topics related to public health [26]. There is more content regarding the physical and mental health of middle-aged and elderly people, so we chose CHARLS as our data source.", "2.1. Participants The China Health and Retirement Longitudinal Survey (CHARLS) is a representative follow-up survey of middle-aged and older people in China, chaired by the National Research Institute of Development at Peking University. The study protocol was approved by the Institutional Review Board of Peking University (approval number: IRB00001052-11015), and the study protocol complies with the ethical guidelines of the 1975 Declaration of Helsinki. In order to obtain the relevant data, we applied for the CHARLS database online on 31 August 2022, and approval was quickly obtained. 
This study used two waves of CHARLS data, from 2015 (T1) and 2018 (T2). CHARLS data can be accessed through its official website (https://charls.pku.edu.cn (accessed on 31 August 2022)).\nFigure 1 shows the detailed process for including and excluding study participants. Wave 3 was used as the baseline data set for this study, and individuals who lacked information on HRQoL, cognitive function, digital usage behavior, or covariates were excluded. Subsequently, participants who newly entered the survey in 2018 were excluded, as were individuals lacking information on digital usage behavior, cognitive function, or covariates. In addition, during sample selection the research team excluded individuals with other diseases (such as cognitive or mental diseases that might affect the assessment process and other aging-related diseases) and included only individuals with diabetes. The abbreviations of the main variables involved in the study are listed in Table 1."
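As an illustration of the inclusion and exclusion flow in Figure 1, a minimal filtering sketch follows; the file paths, variable names, and disease flags are hypothetical, since the CHARLS variable codes are not reproduced in the text.

```python
# Hedged sketch of the sample-selection flow described in Section 2.1 / Figure 1.
# File paths, column names, and disease flags are hypothetical.
import pandas as pd

wave3 = pd.read_csv("charls_2015.csv")   # baseline (T1), hypothetical extract
wave4 = pd.read_csv("charls_2018.csv")   # follow-up (T2), hypothetical extract

required_t1 = ["pcs", "mcs", "cognition", "internet_use", "age", "gender", "education"]
required_t2 = ["cognition"]

# Baseline: complete key variables, diabetes only, no exclusionary comorbidities.
base = wave3.dropna(subset=required_t1)
base = base[(base["diabetes"] == 1)
            & (base["memory_related_disease"] == 0)
            & (base["psychiatric_problems"] == 0)]

# Follow-up: only respondents already in the baseline sample (this drops
# participants newly added in 2018) with complete follow-up information.
follow = wave4[wave4["id"].isin(base["id"])].dropna(subset=required_t2)

sample = base.merge(follow, on="id", suffixes=("_t1", "_t2"))
print(len(sample), "diabetic patients retained for analysis")
```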
Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].\nThis study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].\n2.2.2. HRQoL This study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. 
The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].\nThis study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].\n2.2.3. Socio-Demographic Variables In order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables.\nIn order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables.\n2.2.1. Cognitive Functioning This study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. 
CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].\nThis study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].\n2.2.2. HRQoL This study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. 
Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].\nThis study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].\n2.2.3. Socio-Demographic Variables In order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables.\nIn order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. 
In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables.\n2.3. Statistical Methods IBM SPSS Statistics, version 23 (IBM SPSS Statistics, Armonk, NY, USA) and Mplus version 8.0 are used for data analysis. Descriptive analyses of diabetic patient characteristics, cognitive function, and PCM and MCS of HRQoL were performed. The relationship between cognitive function and HRQoL in diabetic patients at T1 and T2 was assessed using the Pearson correlation test. A repeated-measures analysis of variance was performed to assess the differences between these variables with and without the digital usage behavior. Furthermore, cross-lagged panel structural equation modeling (SEM) was used to assess the longitudinal association between cognition and HRQoL at T1 and T2. Finally, a multi-group test was performed to assess whether digital usage behavior was making a difference in this relationship. Finally, regarding the grouping of digital technologies as well, previous studies were referred to [24]. Digital technology usage was obtained from 2015 baseline data. In the CHARLS 2015 questionnaire, internet usage was measured using the following question whether respondents have used the internet in the last month (0 for no, 1 for yes).\nIBM SPSS Statistics, version 23 (IBM SPSS Statistics, Armonk, NY, USA) and Mplus version 8.0 are used for data analysis. Descriptive analyses of diabetic patient characteristics, cognitive function, and PCM and MCS of HRQoL were performed. The relationship between cognitive function and HRQoL in diabetic patients at T1 and T2 was assessed using the Pearson correlation test. A repeated-measures analysis of variance was performed to assess the differences between these variables with and without the digital usage behavior. Furthermore, cross-lagged panel structural equation modeling (SEM) was used to assess the longitudinal association between cognition and HRQoL at T1 and T2. Finally, a multi-group test was performed to assess whether digital usage behavior was making a difference in this relationship. Finally, regarding the grouping of digital technologies as well, previous studies were referred to [24]. Digital technology usage was obtained from 2015 baseline data. In the CHARLS 2015 questionnaire, internet usage was measured using the following question whether respondents have used the internet in the last month (0 for no, 1 for yes).", "The China Health and Retirement Longitudinal Survey (CHARLS) is a representative follow-up survey of middle-aged and older people in China, chaired by the National Research Institute of Development at Peking University. The study protocol was approved by the Institutional Review Board of Peking University (approval number: IRB00001052-11015), and the study protocol complies with the ethical guidelines of the 1975 Declaration of Helsinki. In order to obtain the relevant data, we applied for the CHARLS database online on 31 August 2022, and approval was quickly obtained. This study used two waves of CHARLS data, from 2015 (T1) and 2018 (T2). CHARLS data can be accessed through its official website (https://charls.pku.edu.cn (accessed on 31 August 2022)).\nFigure 1 shows the detailed process for including and excluding study participants. Wave 3 was used as the baseline data set for this study, and individuals who lacked information on HRQoL, cognitive function, digital usage behavior, and covariates were excluded from this study. 
Subsequently, new participants in 2018 were further excluded. Individuals lacking information on digital usage behavior, cognitive function, and covariates were also excluded from this study. In addition, in the sample selection, the research team excluded individuals with other diseases (such as cognitive diseases that may affect the detection process, mental diseases, and other diseases related to aging), and included individuals with diabetes only. In addition, the abbreviations of the main variables involved in the study are organized in Table 1.", "2.2.1. Cognitive Functioning This study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].\nThis study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. 
Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].\n2.2.2. HRQoL This study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].\nThis study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].\n2.2.3. Socio-Demographic Variables In order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. 
In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables.\nIn order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables.", "This study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29].", "This study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. 
Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32].", "In order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. For MCS scores, marital status, depression, physical activity, and current smoking and drinking habits were also included as control variables.", "IBM SPSS Statistics, version 23 (IBM SPSS Statistics, Armonk, NY, USA) and Mplus version 8.0 were used for data analysis. Descriptive analyses of diabetic patient characteristics, cognitive function, and the PCS and MCS of HRQoL were performed. The relationship between cognitive function and HRQoL in diabetic patients at T1 and T2 was assessed using the Pearson correlation test. A repeated-measures analysis of variance was performed to assess differences in these variables between patients with and without digital usage behavior. Furthermore, cross-lagged panel structural equation modeling (SEM) was used to assess the longitudinal association between cognition and HRQoL at T1 and T2. Finally, a multi-group test was performed to assess whether digital usage behavior made a difference in this relationship. Regarding the grouping by digital technology use, previous studies were referred to [24]. Digital technology usage was obtained from the 2015 baseline data. In the CHARLS 2015 questionnaire, internet usage was measured by asking whether respondents had used the internet in the last month (0 for no, 1 for yes).", "3.1. Descriptive Statistics The variable descriptive statistics and correlation analysis results are shown in Table 3 and Table 4. The correlation analysis shows that cognition at T1 and T2 is significantly correlated with HRQoL levels at T1 and T2, indicating that cognitive function and HRQoL levels in middle-aged and older diabetic patients are related and that this relationship is stable. Meanwhile, the simultaneous and sequential correlations between cognitive function and HRQoL levels are also significant. The correlation coefficients between cognitive function and HRQoL at T1 are 0.28 (p < 0.01) and 0.37 (p < 0.01); the correlation coefficients between cognitive function at T2 and HRQoL at T2 are 0.32 (p < 0.01) and 0.30 (p < 0.01); the correlation coefficients between cognitive function at T1 and HRQoL at T2 are 0.25 (p < 0.01) and 0.22 (p < 0.01); and the correlation coefficients between HRQoL at T1 and cognitive function at T2 are 0.32 (p < 0.01) and 0.36 (p < 0.01).
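The synchronous and cross-wave correlations reported above can be reproduced with a simple routine such as the sketch below; the column names for cognition and the HRQoL summaries at the two waves are placeholders, not CHARLS variable names.

```python
from itertools import product

import pandas as pd
from scipy.stats import pearsonr

def cognition_hrqol_correlations(df: pd.DataFrame) -> pd.DataFrame:
    """Synchronous and cross-wave Pearson correlations between cognition and HRQoL."""
    cog_cols = ["cog_t1", "cog_t2"]                       # placeholder column names
    hrqol_cols = ["pcs_t1", "mcs_t1", "pcs_t2", "mcs_t2"]
    rows = []
    for c, h in product(cog_cols, hrqol_cols):
        r, p = pearsonr(df[c], df[h])
        rows.append({"cognition": c, "hrqol": h, "r": round(r, 2), "p": p})
    return pd.DataFrame(rows)
```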
This indicates that the synchronous and cross-wave correlations between cognitive function and HRQoL are stable, which is in line with the basic assumptions of the cross-lag design.\n3.2. Stability Analysis of Cognitive Function and HRQoL With cognitive function as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated-measures analysis of variance was performed. The results show a significant main effect of testing time (F = 11.21, p < 0.001, η2 = 0.01). The cognitive function level at T2 is significantly lower than at T1, and there are developmental differences. The main effect of digital usage behavior is significant (F = 36.325, p < 0.001, η2 = 0.06), and the cognitive function of patients using digital technology is significantly better than that of patients not using digital technology. The interaction between the two is not significant (F = 2.22, p > 0.05, η2 = 0.04).\nWith PCS as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated-measures analysis of variance was performed. The results show a significant main effect of testing time (F = 16.25, p < 0.001, η2 = 0.03); the PCS level of the post-test is significantly lower than that of the pre-test, and there is a developmental difference. The main effect of digital usage behavior is significant (F = 36.54, p < 0.001, η2 = 0.06), and the PCS of patients using digital technology is significantly better than that of those not using digital technology. The interaction between the two is not significant (F = 0.18, p > 0.05, η2 = 0.00).\nWith MCS as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated-measures analysis of variance was performed. The results show a significant main effect of testing time (F = 44.93, p < 0.001, η2 = 0.07); the MCS level of the post-test is significantly lower than that of the pre-test, and there is a developmental difference. The main effect of digital usage behavior is significant (F = 50.97, p < 0.001, η2 = 0.08), and the MCS of patients using digital technology is significantly better than that of those not using digital technology.
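The original 2 × 2 analyses were run as repeated-measures ANOVAs in SPSS. As a rough equivalent, the sketch below reshapes the data to long format and fits a random-intercept mixed model with time, digital use, and their interaction; it tests the same effects but is not numerically identical to the SPSS procedure, and all column names are assumed.

```python
import pandas as pd
import statsmodels.formula.api as smf

def time_by_use_model(wide: pd.DataFrame, outcome: str):
    """Approximate the 2 (T1/T2) x 2 (use/non-use) repeated-measures analysis with a
    random-intercept mixed model fitted to one row per participant per wave."""
    long = wide.melt(
        id_vars=["pid", "digital_use"],                     # assumed identifier columns
        value_vars=[f"{outcome}_t1", f"{outcome}_t2"],
        var_name="time",
        value_name="score",
    )
    model = smf.mixedlm("score ~ C(time) * C(digital_use)", data=long, groups=long["pid"])
    return model.fit()

# Usage with assumed column names: time_by_use_model(df, "cog"), then "pcs" and "mcs".
```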
The interaction between the two is significant (F = 4.43, p < 0.05, η2 = 0.08).\nWith cognitive function as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated measures square analysis was performed. The results show that the testing time is the main and most significant effect (F = 11.21, p < 0.001, η2 = 0.01). The cognitive function level at T2 is significantly lower than at T1, and there are developmental differences. The main effect of digital usage behavior is significant (F = 36.325, p < 0.001, η2 = 0.06), and the cognitive function of patients using digital technology is significantly better than those patients without digital technology. The interaction between the two is not significant (F = 2.22, p > 0.05, η2 = 0.04).\nWith PCS as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated measures square analysis was performed. The results show that the testing time is the main and most significant effect (F =16.25, p < 0.001, η2 = 0.03), the PCS level of the post-test is significantly lower than that of the pre-test, and there is a developmental difference. The main effect of digital usage behavior is significant (F = 36.54, p < 0.001, η2 = 0.06), and the PCS of patients using digital technology is significantly better than those not using digital technology. The interaction between the two is not significant (F = 0.18, p > 0.05, η2 = 0.00).\nWith MCS as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated measures square analysis was performed. The results showed that the testing time is the main and most significant effect (F = 44.93, p < 0.001, η2 = 0.07), the MCS level of the post-test is significantly lower than the pre-test, and there is a developmental difference. The main effect of digital usage behavior is significant (F = 50.97, p < 0.001, η2 = 0.08), and the MCS of patients using digital technology is significantly better than those not using digital technology. The interaction between the two is significant (F = 4.43, p < 0.05, η2 = 0.08).\n3.3. Cross-Lagged Analysis of Cognitive Function and HRQoL In the cross-lag model which this study applied, HRQoL at T2 was predicted by cognitive function at T1, and cognitive function at T2 was predicted by HRQoL at T1. After the corresponding control variables were added to the models, both models achieved acceptable fitness criteria (Model 1: RMSEA = 0.093; CFI = 0.994, TCL = 0.912; Model 2: RMSEA = 0.098; CFI = 0.984, TCL = 0.901).\nThe cross-lag relationship between cognitive function and PCS scores at two points in time is shown from the test of Model 1 in Figure 2. First, in the same time period, the baseline association between cognitive function and PCS is significantly positive (B = 0.19, p < 0.01), implying better cognitive performance in middle-aged and older diabetic patients with higher PCS scores at T1, and vice versa. Second, there is no significant correlation between cognitive function at T1 and PCS at T2 (B = 0.05, p > 0.01), but PCS at T1 is positively correlated with cognitive function at T2 (B = 0.12, p < 0.01). 
This suggests that there is a positive and significant relationship between PCS and cognitive function in middle-aged and older diabetic patients over time, but early cognitive function in middle-aged and older diabetic patients does not affect later PCS.\nThe cross-lag relationship between cognitive function and MCS scores at the two points in time is shown in Model 2 of Figure 2. First, in the same time period, the baseline association between cognitive function and MCS is significantly positively correlated (B = 0.31, p < 0.01), which implies a better cognitive performance in middle-aged and older diabetic patients with higher MCS scores at T1, and vice versa. Second, there is no significant correlation between cognitive function at T1 and MCS at T2 (B = 0.05, p > 0.01), but there is a significant positive correlation between MCS at T1 and cognitive function at T2 (B = 0.14, p < 0.01). This suggests that there is a positive and significant relationship between MCS and cognitive function in middle-aged and older diabetic patients over time, but early cognitive function in middle-aged and older diabetic patients does not affect later MCS. In conclusion, there may be a one-way causal relationship between cognitive function and HRQoL, and the causal direction is from HRQoL to cognition.\nIn the cross-lagged model applied in this study, HRQoL at T2 was predicted by cognitive function at T1, and cognitive function at T2 was predicted by HRQoL at T1. After the corresponding control variables were added to the models, both models achieved acceptable fit criteria (Model 1: RMSEA = 0.093; CFI = 0.994, TCL = 0.912; Model 2: RMSEA = 0.098; CFI = 0.984, TCL = 0.901).\nThe cross-lag relationship between cognitive function and PCS scores at two points in time is shown by the test of Model 1 in Figure 2. First, in the same time period, the baseline association between cognitive function and PCS is significantly positive (B = 0.19, p < 0.01), implying better cognitive performance in middle-aged and older diabetic patients with higher PCS scores at T1, and vice versa. Second, there is no significant correlation between cognitive function at T1 and PCS at T2 (B = 0.05, p > 0.01), but PCS at T1 is positively correlated with cognitive function at T2 (B = 0.12, p < 0.01). This suggests that there is a positive and significant relationship between PCS and cognitive function in middle-aged and older diabetic patients over time, but early cognitive function in middle-aged and older diabetic patients does not affect later PCS.\nThe cross-lag relationship between cognitive function and MCS scores at the two points in time is shown in Model 2 of Figure 2. First, in the same time period, the baseline association between cognitive function and MCS is significantly positively correlated (B = 0.31, p < 0.01), which implies a better cognitive performance in middle-aged and older diabetic patients with higher MCS scores at T1, and vice versa. Second, there is no significant correlation between cognitive function at T1 and MCS at T2 (B = 0.05, p > 0.01), but there is a significant positive correlation between MCS at T1 and cognitive function at T2 (B = 0.14, p < 0.01). This suggests that there is a positive and significant relationship between MCS and cognitive function in middle-aged and older diabetic patients over time, but early cognitive function in middle-aged and older diabetic patients does not affect later MCS.
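The study estimated these cross-lagged paths as a panel SEM in Mplus. As a simplified illustration of the same path logic, the sketch below fits the two cross-lagged equations as ordinary regressions with baseline covariates; it does not reproduce the simultaneous estimation or the RMSEA/CFI fit indices, and the variable names and covariate set are assumptions.

```python
import statsmodels.formula.api as smf

COVARIATES = "age + C(gender) + C(education)"   # assumed baseline covariates

def cross_lagged_paths(df, hrqol: str = "pcs"):
    """Two cross-lagged regressions: each T2 variable on both T1 variables plus covariates."""
    hrqol_on_cog = smf.ols(f"{hrqol}_t2 ~ cog_t1 + {hrqol}_t1 + {COVARIATES}", data=df).fit()
    cog_on_hrqol = smf.ols(f"cog_t2 ~ {hrqol}_t1 + cog_t1 + {COVARIATES}", data=df).fit()
    return hrqol_on_cog, cog_on_hrqol
```

Refitting the same pair of equations within the internet-use and non-use subsamples would give a rough analogue of the multi-group comparison described in Section 3.4.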
In conclusion, there may be a one-way causal relationship between cognitive function and HRQoL, and the causal direction is from HRQoL to cognition.\n3.4. Heterogeneity Analysis In order to investigate whether the cross-lag relationship between cognitive function and HRQoL differs in digital usage behavior, the study performed a multi-group analysis. All diabetic patients were divided into two groups, one representing non-digital usage behavior (Figure 3), and the other representing digital usage behavior (Figure 4). In the grouped cross-lag model, the corresponding control variables were also added to the model. The models in Figure 3 and Figure 4 met fitness criteria (Figure 3: Model 3: RMSEA = 0.073, CFI = 0.996, TCL = 0.901; Model 4: RMSEA = 0.100, CFI = 0.990, TCL = 0.936. Figure 4: Model 5: RMSEA = 0.073, CFI = 0.996, TCL = 0.941; Model 6: RMSEA = 0.079, CFI = 0.970, TCL = 0.936). In both sets of models, the study focused on the relationship between cognitive function and HRQoL at a point in time, as well as their relationship at the time of follow-up research.\nFor middle-aged and older diabetic patients using digital technology, although the model could be fitted, cognitive function at T1 did not significantly predict HRQoL at T2 (PCS: B = 0.30, p > 0.01; MCS: B = 0.14, p > 0.01), and vice versa (PCS: B = 0.06, p > 0.01; MCS: B = 0.05, p > 0.01) (see Figure 3). For diabetic patients who did not use digital technology, although T1 cognitive function did not significantly predict T2 HRQoL (PCS: B = 0.04, p > 0.01; MCS: B = 0.05, p > 0.01), T1 HRQoL significantly predicted T2 cognitive function (PCS: B = 0.10, p < 0.01; MCS: B = 0.11, p < 0.01), and the results were consistent across all diabetic patients (see Figure 4). This shows that the cross-lag model of cognitive function and HRQoL in middle-aged and older diabetic patients shows variance in the use of digital technology, that is, this relationship will be affected by digital usage behavior.\nIn order to investigate whether the cross-lag relationship between cognitive function and HRQoL differs in digital usage behavior, the study performed a multi-group analysis. All diabetic patients were divided into two groups, one representing non-digital usage behavior (Figure 3), and the other representing digital usage behavior (Figure 4). In the grouped cross-lag model, the corresponding control variables were also added to the model. The models in Figure 3 and Figure 4 met fitness criteria (Figure 3: Model 3: RMSEA = 0.073, CFI = 0.996, TCL = 0.901; Model 4: RMSEA = 0.100, CFI = 0.990, TCL = 0.936. Figure 4: Model 5: RMSEA = 0.073, CFI = 0.996, TCL = 0.941; Model 6: RMSEA = 0.079, CFI = 0.970, TCL = 0.936). In both sets of models, the study focused on the relationship between cognitive function and HRQoL at a point in time, as well as their relationship at the time of follow-up research.\nFor middle-aged and older diabetic patients using digital technology, although the model could be fitted, cognitive function at T1 did not significantly predict HRQoL at T2 (PCS: B = 0.30, p > 0.01; MCS: B = 0.14, p > 0.01), and vice versa (PCS: B = 0.06, p > 0.01; MCS: B = 0.05, p > 0.01) (see Figure 3). 
For diabetic patients who did not use digital technology, although T1 cognitive function did not significantly predict T2 HRQoL (PCS: B = 0.04, p > 0.01; MCS: B = 0.05, p > 0.01), T1 HRQoL significantly predicted T2 cognitive function (PCS: B = 0.10, p < 0.01; MCS: B = 0.11, p < 0.01), and the results were consistent across all diabetic patients (see Figure 4). This shows that the cross-lag model of cognitive function and HRQoL in middle-aged and older diabetic patients shows variance in the use of digital technology, that is, this relationship will be affected by digital usage behavior.", "The variable descriptive statistics and correlation analysis results are shown in Table 3 and Table 4. The correlation analysis shows that there is a significant correlation between cognition at T1 and T2 and HRQoL levels at T1 and T2, indicating that cognitive function and HRQoL levels in middle-aged and older diabetic patients have a certain relationship, which is showing stability. Meanwhile, simultaneous and sequential correlations between cognitive function and HRQoL levels are also significant. The correlation coefficients between cognitive function and HRQoL at T1 are 0.28 (p < 0.01) and 0.37 (p < 0.01); the correlation coefficients between cognitive function at T2 and HRQoL at T2 are 0.32 (p < 0.01) and 0.30 (p < 0.01); the correlation coefficients between cognitive function at T1 and HRQoL at T2 are 0.25 (p < 0.01) and 0.22 (p < 0.01); and the correlation coefficients between HRQoL at T1 and cognitive function at T2 are 0.32 (p < 0.01) and 0.36 (p < 0.01). This indicates that cognitive function is basically consistent with the synchronous and stable correlations of HRQoL levels, which is in line with the basic assumptions of the cross-lag design.", "With cognitive function as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated measures square analysis was performed. The results show that the testing time is the main and most significant effect (F = 11.21, p < 0.001, η2 = 0.01). The cognitive function level at T2 is significantly lower than at T1, and there are developmental differences. The main effect of digital usage behavior is significant (F = 36.325, p < 0.001, η2 = 0.06), and the cognitive function of patients using digital technology is significantly better than those patients without digital technology. The interaction between the two is not significant (F = 2.22, p > 0.05, η2 = 0.04).\nWith PCS as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated measures square analysis was performed. The results show that the testing time is the main and most significant effect (F =16.25, p < 0.001, η2 = 0.03), the PCS level of the post-test is significantly lower than that of the pre-test, and there is a developmental difference. The main effect of digital usage behavior is significant (F = 36.54, p < 0.001, η2 = 0.06), and the PCS of patients using digital technology is significantly better than those not using digital technology. The interaction between the two is not significant (F = 0.18, p > 0.05, η2 = 0.00).\nWith MCS as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated measures square analysis was performed. The results showed that the testing time is the main and most significant effect (F = 44.93, p < 0.001, η2 = 0.07), the MCS level of the post-test is significantly lower than the pre-test, and there is a developmental difference. 
The main effect of digital usage behavior is significant (F = 50.97, p < 0.001, η2 = 0.08), and the MCS of patients using digital technology is significantly better than those not using digital technology. The interaction between the two is significant (F = 4.43, p < 0.05, η2 = 0.08).", "In the cross-lag model which this study applied, HRQoL at T2 was predicted by cognitive function at T1, and cognitive function at T2 was predicted by HRQoL at T1. After the corresponding control variables were added to the models, both models achieved acceptable fitness criteria (Model 1: RMSEA = 0.093; CFI = 0.994, TCL = 0.912; Model 2: RMSEA = 0.098; CFI = 0.984, TCL = 0.901).\nThe cross-lag relationship between cognitive function and PCS scores at two points in time is shown from the test of Model 1 in Figure 2. First, in the same time period, the baseline association between cognitive function and PCS is significantly positive (B = 0.19, p < 0.01), implying better cognitive performance in middle-aged and older diabetic patients with higher PCS scores at T1, and vice versa. Second, there is no significant correlation between cognitive function at T1 and PCS at T2 (B = 0.05, p > 0.01), but PCS at T1 is positively correlated with cognitive function at T2 (B = 0.12, p < 0.01). This suggests that there is a positive and significant relationship between PCS and cognitive function in middle-aged and older diabetic patients over time, but early cognitive function in middle-aged and older diabetic patients does not affect later PSC.\nThe cross-lag relationship between cognitive function and MCS scores at the two points in time is shown in Model 2 of Figure 2. First, in the same time period, the baseline association between cognitive function and MCS is significantly positively correlated (B = 0.31, p < 0.01), which implies a better cognitive performance in middle-aged and older diabetic patients with higher MCS scores at T1, and vice versa. Second, there is no significant correlation between cognitive function at T1 and MCS at T2 (B = 0.05, p > 0.01), but there is a significant positive correlation between MCS at T1 and cognitive function at T2 (B = 0.14, p < 0.01). This suggests that there is a positive and significant relationship between MCS and cognitive function in middle-aged and older diabetic patients over time, but early cognitive function in middle-aged and older diabetic patients does not affect later MCS. In conclusion, there may be a one-way causal relationship between cognitive function and HRQoL, and the causal direction is from HRQoL to cognition.", "In order to investigate whether the cross-lag relationship between cognitive function and HRQoL differs in digital usage behavior, the study performed a multi-group analysis. All diabetic patients were divided into two groups, one representing non-digital usage behavior (Figure 3), and the other representing digital usage behavior (Figure 4). In the grouped cross-lag model, the corresponding control variables were also added to the model. The models in Figure 3 and Figure 4 met fitness criteria (Figure 3: Model 3: RMSEA = 0.073, CFI = 0.996, TCL = 0.901; Model 4: RMSEA = 0.100, CFI = 0.990, TCL = 0.936. Figure 4: Model 5: RMSEA = 0.073, CFI = 0.996, TCL = 0.941; Model 6: RMSEA = 0.079, CFI = 0.970, TCL = 0.936). 
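As a hypothetical illustration of the grouped analysis, the snippet below refits one cross-lagged path separately for users and non-users of digital technology and collects the coefficients side by side; it is not the Mplus multi-group SEM used in the study, and the formula, column names, and 0/1 coding of digital use are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

def path_by_group(df: pd.DataFrame,
                  formula: str = "cog_t2 ~ pcs_t1 + cog_t1 + age") -> pd.DataFrame:
    """Fit the same cross-lagged regression separately within each digital-use group."""
    rows = []
    for group, sub in df.groupby("digital_use"):     # assumed coding: 0 = non-use, 1 = use
        fit = smf.ols(formula, data=sub).fit()
        rows.append({
            "digital_use": group,
            "n": int(fit.nobs),
            "b_pcs_t1": fit.params["pcs_t1"],
            "p_pcs_t1": fit.pvalues["pcs_t1"],
        })
    return pd.DataFrame(rows)
```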
In both sets of models, the study focused on the relationship between cognitive function and HRQoL at a point in time, as well as their relationship at the time of follow-up research.\nFor middle-aged and older diabetic patients using digital technology, although the model could be fitted, cognitive function at T1 did not significantly predict HRQoL at T2 (PCS: B = 0.30, p > 0.01; MCS: B = 0.14, p > 0.01), and vice versa (PCS: B = 0.06, p > 0.01; MCS: B = 0.05, p > 0.01) (see Figure 3). For diabetic patients who did not use digital technology, although T1 cognitive function did not significantly predict T2 HRQoL (PCS: B = 0.04, p > 0.01; MCS: B = 0.05, p > 0.01), T1 HRQoL significantly predicted T2 cognitive function (PCS: B = 0.10, p < 0.01; MCS: B = 0.11, p < 0.01), and the results were consistent across all diabetic patients (see Figure 4). This shows that the cross-lag model of cognitive function and HRQoL in middle-aged and older diabetic patients shows variance in the use of digital technology, that is, this relationship will be affected by digital usage behavior.", "The present study aimed to further deepen our understanding of the association between cognition and HRQoL. This study used a cross-lagged model and controlled for the corresponding covariates in order to verify the longitudinal relationship between cognition and HRQoL in middle-aged and older diabetic patients. The corresponding findings suggest that this relationship is unidirectional in the cross-lag model, that is, only HRQoL at T1 can predict cognitive function at T2, and cognitive function at T1 does not predict HRQoL at T2. This study further explored whether the use of digital technology would lead to different manifestations of this relationship, and divided the study population into two categories: those who used digital technology and those who did not. The results found that this one-way lag relationship still existed in middle-aged and older diabetic patients who did not use digital technology, but was not significant in patients who used digital technology.\nFirst of all, the study found that there are certain developmental differences in the cognitive function, PCS, and MCS levels of middle-aged and older diabetic patients. There is an increasing trend over time which is consistent with previous research results [33,34]. From an age perspective, not only are changes in cognitive function closely related to age [17,35], but HRQoL also has a distinct age-specific trajectory [36]. Therefore, as patients with diabetes age, the risk of cognitive decline and reduced PCS and MCS levels increases [15,37]. Second, diabetes is considered a risk factor for developing cognitive impairments [38]. Although some studies suggest that it may not accelerate the cognitive deterioration process, there is a general consensus in the academic community that diabetes increases the risk of abnormal cognition by increasing the incidence of cognition-related diseases or affecting blood sugar levels [37,38]. Furthermore, previous cross-sectional and longitudinal studies have identified that chronic diseases such as diabetes, as well as BMI, can affect HRQoL in a negative way [39,40]. Thus, this trend may be more obvious in the diabetic population.\nIt was found that the cognitive function of T1 was highly correlated with the PCS and MCS at T1 and T2, and the cognitive function of T2 was highly correlated with the PCS and MCS at T1 and T2, which was in line with the basic assumption of a cross-lag design. 
Our results are consistent with previous studies on other populations [9]. In the subsequent cross-lag analysis, the results were found to be consistent with previous studies [12]. The cognitive function, PCS, and MCS levels of middle-aged and elderly diabetic patients have a certain degree of lateral stability between T1 and T2, and HRQoL can predict cognitive function. However, there is no research to explain its internal mechanism. Verghese, J. et al., and Daviglus, M.L. et al., suggest that interventions to improve physical and general mental health can prevent or delay the onset of dementia [41,42]. Since HRQoL scores from CHARLS in this study include evaluations of physical activity and general mental health, it is believed that physical activity may be a plausible mechanism by which HRQoL can predict changes in cognitive function, based on previous studies. The mediating role of physical activity in this relationship should be further investigated in the future. In general, there are few studies focusing on the impact of HRQoL on cognitive function. In the future, more in-depth research is needed to explore the internal mechanism of how HRQoL affects cognitive function [43,44].\nThird, although cognitive function at T1 was found to be significantly associated with HRQoL at T2 (PCS and MCS) in the initial correlation analysis, the predictive role of cognitive function at T1 on HRQoL at T2 in the cross-lag model was not significant, suggesting that cognitive function at T1 does not predict HRQoL at T2. This is consistent with previous studies [45]. One possible explanation is that the differences in cognitive function between the participants included are relatively small at baseline and at follow-up, as shown by a mean baseline cognitive function score of 15.73 and follow-up of 15.03. Therefore, there may not be enough individuals with the degree of cognitive function variation that would interfere with the study results. On the other hand, it may be that clinical health status alone does not determine a better quality of life [32], and for older adults, perceived life satisfaction appears to be more affected by ability to perform daily tasks [46]. Therefore, the influence of cognitive function on HRQoL may not be significant.\nFinally, results show that there is a significant difference in the use of digital technology in the one-way predictive role of cognitive function and HRQoL, that is, the causal relationship between cognitive function and HRQoL is more applicable to diabetic patients who do not use digital technology. This finding is consistent with the study hypothesis. In terms of cognitive function, previous studies believe that the use of digital technology can slow the decline of cognitive function by increasing the cognitive reserve of individuals [18,47,48,49,50]. It has also been confirmed that in the diabetic population, digital usage behavior can improve the cognitive function of patients [51]; therefore, individuals who do not use digital technology tend to have greater changes in cognitive function, which are easier to detect than changes in individuals who do use digital technology. In terms of HRQoL, the use of digital technology has a direct positive impact on HRQoL [52], and for people with diabetes, digital usage behavior can have an indirect impact on HRQoL by stimulating positive lifestyle changes such as physical activity [53,54,55,56]. 
Therefore, it is believed that due to the direct or indirect impact of digital usage behavior on cognitive function and HRQoL, digital usage behavior may have weakened the longitudinal association between cognitive function and HRQoL in middle-aged and older diabetic patients to some extent.\nThis study can help us to make some policy recommendations. First, the aging population has greatly increased the public health burden in China. Cognitive function and HRQoL in diabetic patients were affected to varying degrees. Therefore, it is necessary to improve the relevant medical service system for diabetics in China as soon as possible to ensure that the medical needs of the elderly are met. In addition, attention will need to be put into improving the Internet and medical service systems. In order to reduce the digital technology gap, promotion of internet use and healthy aging is encouraged.\nThis study did not focus on describing intra-individual variation due to the inherent limitations of the cross-lag model [57]. However, the cross-lag model used in this study still provides new evidence on the longitudinal relationship between cognitive function and HRQoL in middle-aged and older Chinese diabetic patients, and may provide new insights into the underlying mechanisms behind this relationship. Aside from the limitations of the model itself, there are several potential limitations to consider. First, there was a 3-year lag between baseline and follow-up, which may be considered too long to assess the cross-lag relationship between cognitive function and HRQoL. Future studies will need to test whether shorter and longer temporal associations differ in terms of gaining insight into the exact relationship between cognitive function and HRQoL in different conditions. Second, this study only selected the data of wave 3 and wave 4 of the CHARLS database, and did not include wave 1 or wave 2. This study used three questions that were only included after wave 3 in the evaluation criteria for the digital usage behavior. If there were enough data, it would obviously be better to include waves 1 and 2 in the analysis. In addition, a large portion of participants were excluded due to lack of necessary data. Although we controlled for demographic variables, this may still lead to selection bias in the data. Third, in the group analysis, due to the significant sample size difference between those who used digital technology and those who did not, results may have been inaccurate. However, the study results still show that the longitudinal relationship between cognitive function and HRQoL differs in terms of digital usage behavior. Finally, because patients with diabetes often have other chronic diseases of the elderly, future research is needed to further distinguish and identify the differences between diabetes and other chronic diseases of the elderly.", "The cognitive function and HRQoL (PCS and MCS) in middle-aged and older Chinese diabetic patients have a certain degree of lateral stability, and both tend to increase over time. More importantly, this study found that HRQoL at T1 can significantly predict cognitive function at T2, but cognitive function at T1 cannot significantly predict HRQoL at T2. HRQoL may be an antecedent for middle-aged and older diabetic patients’ cognitive function, and this predictive relationship can be different depending on digital usage behavior.
In the future, effective intervention on HRQoL of middle-aged and older diabetic patients can be considered to improve their cognitive function, thereby promoting the healthy development of middle-aged and older diabetic patients." ]
[ "intro", null, "subjects", null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusions" ]
[ "cognitive function", "HRQoL", "digital usage behavior", "middle-aged and older people", "diabetes" ]
1. Introduction: Cognitive impairment is one of the common complications in elderly patients [1]. For example, in China, the prevalence of mild cognitive impairments in the aging population (aged 60 and above) is 14.71% [2], and as age increases, its annual rate of progression to dementia is between 8% and 15% [2]. At the same time, cognitive impairment is also one of the common complications of diabetic patients. It is estimated that by 2045, about 170 million elderly people in China will be diabetic patients [3]. Therefore, the academic community has paid great attention to the cognitive function of the diabetic population, and found that the cognitive function of the diabetic population is closely related to the quality of life (QoL) of the elderly [4]. HRQoL is the perceived physical and mental health of an individual or group over time, including both physical component summary (PCS) and mental component summary (MCS) [5]. In contrast with the QoL, HRQoL pays special attention to the impact of disease and treatment process on the life of a person or a group [6]. Existing studies have found that changes in cognitive function are positively correlated with changes in HRQoL, and play a predictive role in future changes in HRQoL [7,8]. For example, cognitive decline was found to be a predictor of HRQoL decline in studies on multiple sclerosis patients, AIDS patients, and older women [4,9,10]. At the same time, some scholars have found that changes in HRQoL—whether it is the PCS or the MCS of HRQoL—can also predict cognitive changes in individuals in the future [11]. For example, Ezzati’s 2019 study demonstrated that changes in HRQoL preceded changes in cognition and predicted the occurrence of dementia [12]. Thus, cognitive function may be bi-directionally associated with HRQoL. The bidirectional relationship between cognitive function and HRQoL may be more pronounced in terms of diabetic patients. This is because not only are people with diabetes 1–2 times more likely to develop cognitive risks than the general population [13,14], but this risk will increase over time [15]. At the same time, with the deepening of the research on HRQoL, the medical community generally believes that the core of diabetes management should include HRQoL maintenance in addition to prevention and delay of its complications [16]. In summary, this research aims to focus on middle-aged and older diabetic patients (over 45 years old) in China—not only because China has one of the largest numbers of diabetic patients in the world, but also because the age of the population affected with diabetes is showing a downward trend [3,17]. Thus, we propose the first hypothesis: in middle-aged and older Chinese patients with diabetes, the relationship between cognitive function and HRQoL may be bidirectional. To our knowledge, previous studies on cognitive function and HRQoL have focused on unmodifiable clinical factors, such as age, gender, etc., with a lack of studies on modifiable factors [4]. Digital technologies such as the Internet and smartphones have attracted the attention of psychologists because of their portability, rapidity, and immediacy. In clinical medicine, digital usage behaviors are often used for managing and intervening with patients with diabetes and cognitive dysfunction [18]. Thus, digital technology may be seen as a modifiable factor in the relationship between cognitive function and HRQoL. 
We also found that with the continuous development and improvement of digital technology, digital usage behavior can have a profound impact on the cognitive function of patients with chronic diseases [18,19,20]. In addition to this, it is important that digital usage behavior, such as Internet use, can also have a certain impact on HRQoL [21]. Previous research in other populations found that using the Internet resulted in significantly higher PCS and physical pain scores [22]. According to self-determination theory, we believe that when individuals use the internet for leisure or work, their needs are met, which in turn produces positive long-term psychological outcomes such as quality of life [23]. So far, existing research is leaning towards the belief that an increase in digital usage behavior can improve the health of individuals [24]. However, there are also studies holding the opposite view; they believe that the long-term digital usage behavior reduces patients’ time for outdoor activities, and that a large amount of negative information received due to digital usage behavior is more likely to cause mood swings, which may have side effects on patients’ recovery [25]. Therefore, it is easy to find that digital usage behavior is likely to have various effects, so we propose another hypothesis: in middle-aged and older Chinese patients with diabetes, the relationship between cognitive function and HRQoL may differ considering digital usage behavior. To date, most studies on the association of cognitive function with HRQoL have used cross-sectional designs, and have failed to elucidate the direction of influence between cognitive function and HRQoL due to the limitations of cross-sectional studies in terms of making causal inferences and explaining the direction of association. Therefore, it is necessary to further explore, especially for diabetic patients, the longitudinal association between the cognitive function and HRQoL, and to clarify the direction of their effects. As a longitudinal study, this study attempts to use a cross-lag model to explore the relationship between cognitive function and HRQoL in Chinese middle-aged and older diabetic patients, as well as the direction of this relationship and whether there were differences in digital usage behavior. This research is attempting to explore the longitudinal relationship between cognition and HRQoL in middle-aged and older diabetic patients, and to provide evidence which can promote their health. It should be noted that, due to the existence of multiple databases in China, in addition to the China Health and Retirement Longitudinal Survey (CHARLS), most scholars use databases such as the China Household Finance Survey when researching topics related to public health [26]. There is more content regarding the physical and mental health of middle-aged and elderly people, so we chose CHARLS as our data source. 2. Materials and Methods: 2.1. Participants The China Health and Retirement Longitudinal Survey (CHARLS) is a representative follow-up survey of middle-aged and older people in China, chaired by the National Research Institute of Development at Peking University. The study protocol was approved by the Institutional Review Board of Peking University (approval number: IRB00001052-11015), and the study protocol complies with the ethical guidelines of the 1975 Declaration of Helsinki. In order to obtain the relevant data, we applied for the CHARLS database online on 31 August 2022, and approval was quickly obtained. 
This study used two waves of CHARLS data, from 2015 (T1) and 2018 (T2). CHARLS data can be accessed through its official website (https://charls.pku.edu.cn (accessed on 31 August 2022)). Figure 1 shows the detailed process for including and excluding study participants. Wave 3 was used as the baseline data set for this study, and individuals who lacked information on HRQoL, cognitive function, digital usage behavior, and covariates were excluded from this study. Subsequently, new participants in 2018 were further excluded. Individuals lacking information on digital usage behavior, cognitive function, and covariates were also excluded from this study. In addition, in the sample selection, the research team excluded individuals with other diseases (such as cognitive diseases that may affect the detection process, mental diseases, and other diseases related to aging), and included individuals with diabetes only. In addition, the abbreviations of the main variables involved in the study are organized in Table 1. The China Health and Retirement Longitudinal Survey (CHARLS) is a representative follow-up survey of middle-aged and older people in China, chaired by the National Research Institute of Development at Peking University. The study protocol was approved by the Institutional Review Board of Peking University (approval number: IRB00001052-11015), and the study protocol complies with the ethical guidelines of the 1975 Declaration of Helsinki. In order to obtain the relevant data, we applied for the CHARLS database online on 31 August 2022, and approval was quickly obtained. This study used two waves of CHARLS data, from 2015 (T1) and 2018 (T2). CHARLS data can be accessed through its official website (https://charls.pku.edu.cn (accessed on 31 August 2022)). Figure 1 shows the detailed process for including and excluding study participants. Wave 3 was used as the baseline data set for this study, and individuals who lacked information on HRQoL, cognitive function, digital usage behavior, and covariates were excluded from this study. Subsequently, new participants in 2018 were further excluded. Individuals lacking information on digital usage behavior, cognitive function, and covariates were also excluded from this study. In addition, in the sample selection, the research team excluded individuals with other diseases (such as cognitive diseases that may affect the detection process, mental diseases, and other diseases related to aging), and included individuals with diabetes only. In addition, the abbreviations of the main variables involved in the study are organized in Table 1. 2.2. Measurements 2.2.1. Cognitive Functioning This study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. 
Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29]. This study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29]. 2.2.2. HRQoL This study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. 
The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32]. This study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32]. 2.2.3. Socio-Demographic Variables In order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables. In order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables. 2.2.1. Cognitive Functioning This study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. 
CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29]. This study obtained data on cognitive function from baseline data in 2015 and follow-up data in 2018. The baseline data stipulated by CHARLS is from 2011, but we include CHARLS 2015 (T1) as the baseline data in our study. CHARLS is similar to the cognitive assessment used in the American Health and Retirement Study, and has constructed cognitive function evaluation criteria from the same two aspects of memory and mental state [27,28]. Furthermore, previous studies have also used the CHARLS cognitive function assessment criteria for class-correspondence studies. Firstly, memory evaluation includes immediate word recall (0–10 points) and delayed word recall (0–10 points). Mental state is measured from three dimensions: orientation, visual construction, and mathematical performance. Orientation (0–5 points) is measured by asking respondents to name the date, day of the week, and season; visual construction is assessed by drawing a previously displayed picture (0–1 points); and mathematical performance (0–5 points) is measured by asking respondents to subtract 7 consecutive times from 100. The scores of the participants’ cognitive function are equal to the sum of the scores of memory and mental state. Cognitive function scores range from 0 to 31 points, higher cognitive function scores indicating better cognitive function. Finally, in order to obtain a composite measure of cognitive function, this study normalized and averaged the total cognitive function score by adding up memory and mental state scores [29]. 2.2.2. HRQoL This study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. 
Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32]. This study used a new scale construction based on the variables of the Short Form 36 (SF-36) and the CHARLS questionnaire to measure HRQoL in diabetic patients. The construction of the new scale was derived from the eight dimensions of SF-36, and the corresponding variables of CHARLS were selected to assess the following eight dimensions (Table 2): physical function (PF), role–body (RP), body pain (BP), general health (GH), vitality (VT), social functioning (SF), role–emotion (RE), and mental health (MH). The scores for the above eight dimensions were calculated by adding up the category scores and then converting the raw scores to a 0 to 100 scale. Scores from the eight subscales were aggregated into two overall scores according to the conceptual model of the SF-36 [30]. Physical function, body roles, body pain, and general perceptions of health were calculated as PCS, and mental health, vitality, emotional role, and social functioning were calculated as MCS. Although the HRQoL questionnaire based on CHARLS is slightly different from other HRQoL questionnaires, they all have similar focuses, including physical, emotional, and social diversity. The questionnaire has been determined to be effective in the Chinese population [31], and has already been used in related research [32]. 2.2.3. Socio-Demographic Variables In order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. In this research, PCS scores are also considered with marital status, depression, physical activity, and current smoking and drinking habits as control variables. In order to minimize the possibility of other variables influencing the cognitive function–HRQoL relationship study, and to simplify the model, this research controlled for several specific covariates associated with cognitive function and HRQoL. According to previous studies, all covariates were based on baseline data (CHARLS 2015) [32]. Firstly, population control variables include age, gender, and education status. Secondly, since HRQoL can be divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS. This research included depression as well as current smoking and drinking habits as control variables for PCS scores. 
2.2.3. Socio-Demographic Variables: To minimize the possibility of other variables influencing the cognitive function–HRQoL relationship, and to simplify the model, this research controlled for several covariates associated with cognitive function and HRQoL. Following previous studies, all covariates were taken from the baseline data (CHARLS 2015) [32]. First, the demographic control variables include age, gender, and education status. Second, since HRQoL is divided into two parts, PCS and MCS, this study used different control variables in the models for cognitive function, PCS, and MCS: depression and current smoking and drinking habits were included as control variables in the PCS models, while marital status, depression, physical activity, and current smoking and drinking habits were included as control variables in the MCS models.
2.1. Participants: The China Health and Retirement Longitudinal Survey (CHARLS) is a representative follow-up survey of middle-aged and older people in China, chaired by the National Research Institute of Development at Peking University. The study protocol was approved by the Institutional Review Board of Peking University (approval number: IRB00001052-11015) and complies with the ethical guidelines of the 1975 Declaration of Helsinki. To obtain the relevant data, we applied for access to the CHARLS database online on 31 August 2022, and approval was quickly granted. This study used two waves of CHARLS data, from 2015 (T1) and 2018 (T2). CHARLS data can be accessed through its official website (https://charls.pku.edu.cn (accessed on 31 August 2022)). Figure 1 shows the detailed process of including and excluding study participants. Wave 3 (2015) was used as the baseline data set for this study, and individuals who lacked information on HRQoL, cognitive function, digital usage behavior, or covariates were excluded. Participants newly added in 2018 were then excluded, as were individuals lacking follow-up information on digital usage behavior, cognitive function, or covariates. In addition, during sample selection the research team excluded individuals with other diseases (such as cognitive disorders that may affect the assessment process, mental disorders, and other aging-related diseases) and retained only individuals with diabetes. The abbreviations of the main variables involved in the study are listed in Table 1.
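The sample-selection steps described above can be sketched as a simple filtering pipeline. This is purely illustrative: the column names (id, has_diabetes, hrqol, cognition, digital_use, and the covariate list) are hypothetical stand-ins, not actual CHARLS variable names.

```python
import pandas as pd

def select_sample(baseline: pd.DataFrame, followup: pd.DataFrame, covariates) -> pd.DataFrame:
    """Apply the exclusion steps of Section 2.1 to hypothetical CHARLS extracts."""
    # Step 1: keep baseline (wave 3) diabetic respondents with complete key information.
    required_t1 = ["hrqol", "cognition", "digital_use", *covariates]
    t1 = baseline.dropna(subset=required_t1)
    t1 = t1[t1["has_diabetes"] == 1]

    # Step 2: drop respondents who first entered the survey at follow-up,
    # and require complete follow-up information on the key variables.
    required_t2 = ["cognition", "digital_use", *covariates]
    t2 = followup[followup["id"].isin(t1["id"])].dropna(subset=required_t2)

    # Step 3: keep only respondents with complete data at both waves.
    return t1.merge(t2, on="id", suffixes=("_t1", "_t2"))
```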
2.3. Statistical Methods: IBM SPSS Statistics version 23 (IBM, Armonk, NY, USA) and Mplus version 8.0 were used for data analysis. Descriptive analyses of diabetic patient characteristics, cognitive function, and the PCS and MCS of HRQoL were performed. The relationship between cognitive function and HRQoL in diabetic patients at T1 and T2 was assessed using Pearson correlation tests. A repeated-measures analysis of variance was performed to assess differences in these variables between patients with and without digital usage behavior. Furthermore, cross-lagged panel structural equation modeling (SEM) was used to assess the longitudinal association between cognition and HRQoL at T1 and T2. Finally, a multi-group test was performed to assess whether digital usage behavior made a difference in this relationship. Regarding the grouping by digital technology use, previous studies were followed [24]. Digital technology usage was obtained from the 2015 baseline data: in the CHARLS 2015 questionnaire, internet usage was measured by asking whether respondents had used the internet in the last month (0 = no, 1 = yes).
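To make the cross-lagged logic concrete, the sketch below estimates the two lagged paths with ordinary least-squares regressions instead of the full structural equation model estimated in Mplus. It is a simplified approximation under assumed column names (cog_t1, cog_t2, pcs_t1, pcs_t2, and the covariates); the actual analysis is estimated as an SEM with correlated residuals and model fit indices.

```python
import pandas as pd
import statsmodels.formula.api as smf

def cross_lagged_paths(df: pd.DataFrame, covariates: str = "age + gender + education"):
    """Approximate the two cross-lagged paths between cognition and PCS with OLS."""
    # Path 1: does T1 PCS predict T2 cognition, controlling for T1 cognition?
    cog_model = smf.ols(f"cog_t2 ~ cog_t1 + pcs_t1 + {covariates}", data=df).fit()

    # Path 2: does T1 cognition predict T2 PCS, controlling for T1 PCS?
    pcs_model = smf.ols(f"pcs_t2 ~ pcs_t1 + cog_t1 + {covariates}", data=df).fit()

    return cog_model.params["pcs_t1"], pcs_model.params["cog_t1"]
```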
3. Results: 3.1. Descriptive Statistics: The descriptive statistics and correlation analysis results for the study variables are shown in Table 3 and Table 4. The correlation analysis shows significant correlations between cognition at T1 and T2 and HRQoL levels at T1 and T2, indicating that cognitive function and HRQoL levels in middle-aged and older diabetic patients are related and that this relationship is stable over time. Both the simultaneous and the sequential correlations between cognitive function and HRQoL levels are significant. The correlation coefficients between cognitive function and HRQoL (PCS and MCS) at T1 are 0.28 (p < 0.01) and 0.37 (p < 0.01); between cognitive function at T2 and HRQoL at T2, 0.32 (p < 0.01) and 0.30 (p < 0.01); between cognitive function at T1 and HRQoL at T2, 0.25 (p < 0.01) and 0.22 (p < 0.01); and between HRQoL at T1 and cognitive function at T2, 0.32 (p < 0.01) and 0.36 (p < 0.01). This indicates that cognitive function shows consistent synchronous and stable correlations with HRQoL levels, in line with the basic assumptions of the cross-lagged design.
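A correlation matrix of this kind can be reproduced with a few lines of code. The sketch below is illustrative only: the column names are assumed, and scipy's pearsonr returns both the coefficient and its p-value.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import pearsonr

def correlation_table(df: pd.DataFrame,
                      cols=("cog_t1", "cog_t2", "pcs_t1", "pcs_t2", "mcs_t1", "mcs_t2")):
    """Pairwise Pearson correlations (with p-values) for the study variables."""
    rows = []
    for a, b in combinations(cols, 2):
        r, p = pearsonr(df[a], df[b])
        rows.append({"pair": f"{a} x {b}", "r": round(r, 2), "p": p})
    return pd.DataFrame(rows)
```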
3.2. Stability Analysis of Cognitive Function and HRQoL: With cognitive function as the dependent variable, a 2 (test time: T1/T2) × 2 (digital usage behavior: use/non-use) repeated-measures analysis of variance was performed. The main effect of test time is significant (F = 11.21, p < 0.001, η2 = 0.01): cognitive function at T2 is significantly lower than at T1, indicating a developmental difference. The main effect of digital usage behavior is also significant (F = 36.325, p < 0.001, η2 = 0.06), with patients using digital technology showing significantly better cognitive function than those not using it. The interaction between the two factors is not significant (F = 2.22, p > 0.05, η2 = 0.04). With PCS as the dependent variable, the same 2 (test time) × 2 (digital usage behavior) repeated-measures analysis of variance was performed. The main effect of test time is significant (F = 16.25, p < 0.001, η2 = 0.03): the post-test PCS level is significantly lower than the pre-test level, indicating a developmental difference. The main effect of digital usage behavior is significant (F = 36.54, p < 0.001, η2 = 0.06), with patients using digital technology showing significantly better PCS than those not using it. The interaction is not significant (F = 0.18, p > 0.05, η2 = 0.00). With MCS as the dependent variable, the same 2 × 2 repeated-measures analysis of variance was performed. The main effect of test time is significant (F = 44.93, p < 0.001, η2 = 0.07): the post-test MCS level is significantly lower than the pre-test level, indicating a developmental difference. The main effect of digital usage behavior is significant (F = 50.97, p < 0.001, η2 = 0.08), with patients using digital technology showing significantly better MCS than those not using it. The interaction is significant (F = 4.43, p < 0.05, η2 = 0.08).
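A 2 × 2 mixed design of this kind, with time as the within-subject factor and digital use as the between-subject factor, can be reproduced as sketched below, assuming the pingouin package is available and the data are first reshaped to long format; the column names are placeholders.

```python
import pandas as pd
import pingouin as pg

def mixed_anova_long(wide: pd.DataFrame, dv_t1: str, dv_t2: str) -> pd.DataFrame:
    """Reshape one outcome to long format and run a 2 (time) x 2 (digital use) mixed ANOVA."""
    long_df = wide.melt(
        id_vars=["id", "digital_use"],
        value_vars=[dv_t1, dv_t2],
        var_name="time",
        value_name="score",
    )
    return pg.mixed_anova(
        data=long_df, dv="score", within="time", between="digital_use", subject="id"
    )

# e.g. mixed_anova_long(df, "cog_t1", "cog_t2") for the cognitive-function model.
```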
3.3. Cross-Lagged Analysis of Cognitive Function and HRQoL: In the cross-lagged models applied in this study, HRQoL at T2 was predicted by cognitive function at T1, and cognitive function at T2 was predicted by HRQoL at T1. After the corresponding control variables were added, both models achieved acceptable fit (Model 1: RMSEA = 0.093, CFI = 0.994, TLI = 0.912; Model 2: RMSEA = 0.098, CFI = 0.984, TLI = 0.901). The cross-lagged relationship between cognitive function and PCS scores at the two time points is shown by Model 1 in Figure 2. First, within the same time period, the baseline association between cognitive function and PCS is significantly positive (B = 0.19, p < 0.01), implying that middle-aged and older diabetic patients with higher PCS scores at T1 showed better cognitive performance, and vice versa. Second, cognitive function at T1 is not significantly associated with PCS at T2 (B = 0.05, p > 0.01), but PCS at T1 is positively associated with cognitive function at T2 (B = 0.12, p < 0.01). This suggests that PCS positively predicts cognitive function over time in middle-aged and older diabetic patients, whereas earlier cognitive function does not affect later PCS. The cross-lagged relationship between cognitive function and MCS scores at the two time points is shown by Model 2 in Figure 2. First, within the same time period, the baseline association between cognitive function and MCS is significantly positive (B = 0.31, p < 0.01), implying that patients with higher MCS scores at T1 showed better cognitive performance, and vice versa. Second, cognitive function at T1 is not significantly associated with MCS at T2 (B = 0.05, p > 0.01), but MCS at T1 is positively associated with cognitive function at T2 (B = 0.14, p < 0.01). This suggests that MCS positively predicts cognitive function over time, whereas earlier cognitive function does not affect later MCS. In conclusion, there may be a one-way causal relationship between cognitive function and HRQoL, with the causal direction running from HRQoL to cognition.
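For readers unfamiliar with the fit indices quoted above, the helper below labels them against commonly used cutoffs (RMSEA ≤ 0.08 close fit, ≤ 0.10 acceptable; CFI and TLI ≥ 0.90 acceptable, ≥ 0.95 good). These cutoffs are conventional rules of thumb from the SEM literature, not criteria stated by the authors.

```python
def evaluate_fit(rmsea: float, cfi: float, tli: float) -> dict:
    """Label SEM fit indices against conventional cutoff values."""
    return {
        "rmsea": "close" if rmsea <= 0.08 else "acceptable" if rmsea <= 0.10 else "poor",
        "cfi": "good" if cfi >= 0.95 else "acceptable" if cfi >= 0.90 else "poor",
        "tli": "good" if tli >= 0.95 else "acceptable" if tli >= 0.90 else "poor",
    }

# Model 1 as reported above:
print(evaluate_fit(rmsea=0.093, cfi=0.994, tli=0.912))
# {'rmsea': 'acceptable', 'cfi': 'good', 'tli': 'acceptable'}
```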
3.4. Heterogeneity Analysis: To investigate whether the cross-lagged relationship between cognitive function and HRQoL differs by digital usage behavior, the study performed a multi-group analysis. All diabetic patients were divided into two groups, one representing non-digital usage behavior (Figure 3) and the other representing digital usage behavior (Figure 4). The corresponding control variables were also added to the grouped cross-lagged models. The models in Figure 3 and Figure 4 met the fit criteria (Figure 3: Model 3: RMSEA = 0.073, CFI = 0.996, TLI = 0.901; Model 4: RMSEA = 0.100, CFI = 0.990, TLI = 0.936; Figure 4: Model 5: RMSEA = 0.073, CFI = 0.996, TLI = 0.941; Model 6: RMSEA = 0.079, CFI = 0.970, TLI = 0.936). In both sets of models, the study focused on the relationship between cognitive function and HRQoL within each time point, as well as their relationship across the follow-up period. For middle-aged and older diabetic patients using digital technology, although the models could be fitted, cognitive function at T1 did not significantly predict HRQoL at T2 (PCS: B = 0.30, p > 0.01; MCS: B = 0.14, p > 0.01), and HRQoL at T1 did not significantly predict cognitive function at T2 (PCS: B = 0.06, p > 0.01; MCS: B = 0.05, p > 0.01) (see Figure 3). For diabetic patients who did not use digital technology, T1 cognitive function did not significantly predict T2 HRQoL (PCS: B = 0.04, p > 0.01; MCS: B = 0.05, p > 0.01), but T1 HRQoL significantly predicted T2 cognitive function (PCS: B = 0.10, p < 0.01; MCS: B = 0.11, p < 0.01), consistent with the results for all diabetic patients (see Figure 4). This shows that the cross-lagged relationship between cognitive function and HRQoL in middle-aged and older diabetic patients varies with the use of digital technology; that is, the relationship is affected by digital usage behavior.
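The grouped analysis can be sketched by fitting the same lagged regressions separately within each digital-use stratum. As before, this is a simplified OLS approximation with assumed column names rather than the multi-group SEM reported in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

def lagged_paths_by_group(df: pd.DataFrame, outcome: str = "pcs",
                          covariates: str = "age + gender + education") -> pd.DataFrame:
    """Estimate the two cross-lagged paths within each digital-use group."""
    rows = []
    for group, sub in df.groupby("digital_use"):
        cog = smf.ols(f"cog_t2 ~ cog_t1 + {outcome}_t1 + {covariates}", data=sub).fit()
        out = smf.ols(f"{outcome}_t2 ~ {outcome}_t1 + cog_t1 + {covariates}", data=sub).fit()
        rows.append({
            "digital_use": group,
            f"{outcome}_t1 -> cog_t2": cog.params[f"{outcome}_t1"],
            f"cog_t1 -> {outcome}_t2": out.params["cog_t1"],
        })
    return pd.DataFrame(rows)
```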
4. Discussion: The present study aimed to deepen our understanding of the association between cognition and HRQoL. It used a cross-lagged model and controlled for the corresponding covariates to verify the longitudinal relationship between cognition and HRQoL in middle-aged and older diabetic patients. The findings suggest that this relationship is unidirectional in the cross-lagged model; that is, only HRQoL at T1 predicts cognitive function at T2, while cognitive function at T1 does not predict HRQoL at T2. The study further explored whether the use of digital technology alters this relationship by dividing the study population into those who used digital technology and those who did not. The results show that this one-way lagged relationship persisted in middle-aged and older diabetic patients who did not use digital technology, but was not significant in patients who did. First, the study found certain developmental differences in the cognitive function, PCS, and MCS levels of middle-aged and older diabetic patients, and these differences show an increasing trend over time, consistent with previous research [33,34]. From an age perspective, not only are changes in cognitive function closely related to age [17,35], but HRQoL also has a distinct age-specific trajectory [36]. Therefore, as patients with diabetes age, the risk of cognitive decline and of reduced PCS and MCS levels increases [15,37]. Second, diabetes is considered a risk factor for developing cognitive impairment [38]. Although some studies suggest that it may not accelerate the process of cognitive deterioration, there is a general consensus that diabetes increases the risk of abnormal cognition by increasing the incidence of cognition-related diseases or by affecting blood sugar levels [37,38]. Furthermore, previous cross-sectional and longitudinal studies have found that chronic diseases such as diabetes, as well as BMI, can negatively affect HRQoL [39,40]. Thus, this trend may be more pronounced in the diabetic population.
It was found that cognitive function at T1 was highly correlated with PCS and MCS at T1 and T2, and that cognitive function at T2 was highly correlated with PCS and MCS at T1 and T2, in line with the basic assumption of a cross-lagged design. Our results are consistent with previous studies on other populations [9]. In the subsequent cross-lagged analysis, the results were also consistent with previous studies [12]. The cognitive function, PCS, and MCS levels of middle-aged and elderly diabetic patients show a certain degree of lateral stability between T1 and T2, and HRQoL can predict cognitive function. However, no research has yet explained the underlying mechanism. Verghese et al. and Daviglus et al. suggest that interventions to improve physical and general mental health can prevent or delay the onset of dementia [41,42]. Since the HRQoL scores derived from CHARLS in this study include evaluations of physical activity and general mental health, physical activity may be a plausible mechanism by which HRQoL predicts changes in cognitive function, based on previous studies. The mediating role of physical activity in this relationship should be investigated further in the future. In general, few studies have focused on the impact of HRQoL on cognitive function, and more in-depth research is needed to explore the internal mechanism by which HRQoL affects cognitive function [43,44]. Third, although cognitive function at T1 was significantly associated with HRQoL at T2 (PCS and MCS) in the initial correlation analysis, its predictive role in the cross-lagged model was not significant, suggesting that cognitive function at T1 does not predict HRQoL at T2. This is consistent with previous studies [45]. One possible explanation is that differences in cognitive function among the included participants were relatively small at baseline and at follow-up, as shown by mean cognitive function scores of 15.73 at baseline and 15.03 at follow-up; there may therefore not have been enough variation in cognitive function to produce detectable effects on later HRQoL. On the other hand, clinical health status alone may not determine a better quality of life [32], and for older adults, perceived life satisfaction appears to be more strongly affected by the ability to perform daily tasks [46]. Therefore, the influence of cognitive function on HRQoL may not be significant. Finally, the results show a significant difference by digital technology use in the one-way predictive relationship between cognitive function and HRQoL; that is, the causal relationship between cognitive function and HRQoL applies mainly to diabetic patients who do not use digital technology. This finding is consistent with the study hypothesis. In terms of cognitive function, previous studies hold that the use of digital technology can slow cognitive decline by increasing individuals' cognitive reserve [18,47,48,49,50]. It has also been confirmed that, in the diabetic population, digital usage behavior can improve patients' cognitive function [51]; therefore, individuals who do not use digital technology tend to show greater changes in cognitive function, which are easier to detect than the changes in individuals who do.
In terms of HRQoL, the use of digital technology has a direct positive impact on HRQoL [52], and for people with diabetes, digital usage behavior can also affect HRQoL indirectly by stimulating positive lifestyle changes such as physical activity [53,54,55,56]. Therefore, because of these direct and indirect effects on cognitive function and HRQoL, digital usage behavior may have weakened, to some extent, the longitudinal association between cognitive function and HRQoL in middle-aged and older diabetic patients. This study also allows us to make some policy recommendations. First, population aging has greatly increased the public health burden in China, and cognitive function and HRQoL in diabetic patients are affected to varying degrees. It is therefore necessary to improve the medical service system for diabetic patients in China as soon as possible to ensure that the medical needs of the elderly are met. In addition, attention will need to be put into improving the internet and medical service systems; to reduce the digital technology gap, the promotion of internet use and healthy aging is encouraged. This study did not focus on describing intra-individual variation, owing to the inherent limitations of the cross-lagged model [57]. However, the cross-lagged model used here still provides new evidence on the longitudinal relationship between cognitive function and HRQoL in middle-aged and older Chinese diabetic patients, and may provide new insights into the mechanisms underlying this relationship. Aside from the limitations of the model itself, several potential limitations should be considered. First, there was a 3-year lag between baseline and follow-up, which may be too long to assess the cross-lagged relationship between cognitive function and HRQoL. Future studies should test whether shorter and longer time intervals yield different associations in order to gain insight into the exact relationship between cognitive function and HRQoL under different conditions. Second, this study only used data from waves 3 and 4 of the CHARLS database and did not include wave 1 or wave 2, because the three questions used in the evaluation criteria for digital usage behavior were only included from wave 3 onward. With sufficient data, it would obviously be better to include waves 1 and 2 in the analysis. In addition, a large portion of participants were excluded due to missing data; although we controlled for demographic variables, this may still introduce selection bias. Third, in the group analysis, the large difference in sample size between those who used digital technology and those who did not may have made the results less accurate. Nevertheless, the results still show that the longitudinal relationship between cognitive function and HRQoL differs by digital usage behavior. Finally, because patients with diabetes often have other chronic diseases of old age, future research is needed to further distinguish the effects of diabetes from those of other chronic diseases. 5. Conclusions: Cognitive function and HRQoL (PCS and MCS) in middle-aged and older Chinese diabetic patients show a certain degree of lateral stability and change over time. More importantly, this study found that HRQoL at T1 can significantly predict cognitive function at T2, whereas cognitive function at T1 cannot significantly predict HRQoL at T2.
HRQoL may be an antecedent of cognitive function in middle-aged and older diabetic patients, and this predictive relationship can differ depending on digital usage behavior. In the future, effective interventions targeting the HRQoL of middle-aged and older diabetic patients could be considered as a way to improve their cognitive function and thereby promote their healthy development.
Background: Cognitive function and health-related quality of life (HRQoL) are important issues in diabetes care. According to the China Association for Aging, it is estimated that by 2030, the number of elderly people with dementia in China will reach 22 million. The World Health Organization reports that by 2044, the number of people with diabetes in China is expected to reach 175 million. Methods: Cohort analyses were conducted based on 854 diabetic patients aged ≥45 years from the third (2015) and fourth (2018) survey of the China Health and Retirement Longitudinal Study (CHARLS). Correlation analysis, repeated-measures variance analysis, and cross-lagged panel models were used to measure the difference in digital usage behavior in the established relationship. Results: The results show that the cognitive function of middle-aged and older diabetic patients is positively correlated with HRQoL. HRQoL at T1 could significantly predict cognitive function at T2 (PCS: B = 0.12, p < 0.01; MCS: B = 0.14, p < 0.01). This relationship is more associated with individual performance than digital usage behavior. Conclusions: Unidirectional associations may exist between cognitive function and HRQoL among middle-aged and older Chinese diabetes patients. In the future, doctors and nurses can recognize the lowering of self-perceived HRQoL of middle-aged and older diabetic patients, and thus draw more attention to their cognitive function, in turn strengthening the evaluation, detection, and intervention of their cognitive function.
1. Introduction: Cognitive impairment is one of the common complications in elderly patients [1]. For example, in China, the prevalence of mild cognitive impairment in the aging population (aged 60 and above) is 14.71% [2], and as age increases, its annual rate of progression to dementia is between 8% and 15% [2]. At the same time, cognitive impairment is also one of the common complications in diabetic patients. It is estimated that by 2045, about 170 million elderly people in China will be diabetic patients [3]. The academic community has therefore paid great attention to the cognitive function of the diabetic population, and has found that it is closely related to the quality of life (QoL) of the elderly [4]. HRQoL is the perceived physical and mental health of an individual or group over time, comprising a physical component summary (PCS) and a mental component summary (MCS) [5]. In contrast to QoL, HRQoL pays special attention to the impact of disease and the treatment process on the life of a person or group [6]. Existing studies have found that changes in cognitive function are positively correlated with changes in HRQoL and play a predictive role in future changes in HRQoL [7,8]. For example, cognitive decline was found to be a predictor of HRQoL decline in studies of multiple sclerosis patients, AIDS patients, and older women [4,9,10]. At the same time, some scholars have found that changes in HRQoL, whether in the PCS or the MCS, can also predict individuals' future cognitive changes [11]. For example, Ezzati's 2019 study demonstrated that changes in HRQoL preceded changes in cognition and predicted the occurrence of dementia [12]. Thus, cognitive function may be bi-directionally associated with HRQoL. This bidirectional relationship may be more pronounced in diabetic patients: not only are people with diabetes 1–2 times more likely to develop cognitive impairment than the general population [13,14], but this risk increases over time [15]. At the same time, as research on HRQoL has deepened, the medical community has come to agree that the core of diabetes management should include maintaining HRQoL in addition to preventing and delaying complications [16]. In summary, this research focuses on middle-aged and older diabetic patients (over 45 years old) in China, not only because China has one of the largest numbers of diabetic patients in the world, but also because the age of the population affected by diabetes is showing a downward trend [3,17]. Thus, we propose the first hypothesis: in middle-aged and older Chinese patients with diabetes, the relationship between cognitive function and HRQoL may be bidirectional. To our knowledge, previous studies on cognitive function and HRQoL have focused on unmodifiable clinical factors, such as age and gender, with a lack of studies on modifiable factors [4]. Digital technologies such as the Internet and smartphones have attracted the attention of psychologists because of their portability, rapidity, and immediacy. In clinical medicine, digital usage behaviors are often used for managing and intervening with patients with diabetes and cognitive dysfunction [18]. Thus, digital technology may be seen as a modifiable factor in the relationship between cognitive function and HRQoL.
We also found that, with the continuous development and improvement of digital technology, digital usage behavior can have a profound impact on the cognitive function of patients with chronic diseases [18,19,20]. In addition, digital usage behavior, such as Internet use, can also have a certain impact on HRQoL [21]. Previous research in other populations found that using the Internet resulted in significantly higher PCS and bodily pain scores [22]. According to self-determination theory, we believe that when individuals use the Internet for leisure or work, their needs are met, which in turn produces positive long-term psychological outcomes such as a better quality of life [23]. So far, existing research leans towards the view that an increase in digital usage behavior can improve individuals' health [24]. However, some studies hold the opposite view: they argue that long-term digital usage behavior reduces patients' time for outdoor activities, and that the large amount of negative information received through digital use is more likely to cause mood swings, which may have side effects on patients' recovery [25]. Digital usage behavior is therefore likely to have mixed effects, so we propose a second hypothesis: in middle-aged and older Chinese patients with diabetes, the relationship between cognitive function and HRQoL may differ depending on digital usage behavior. To date, most studies on the association of cognitive function with HRQoL have used cross-sectional designs and have failed to elucidate the direction of influence between cognitive function and HRQoL, owing to the limitations of cross-sectional studies in making causal inferences and explaining the direction of association. It is therefore necessary to further explore, especially for diabetic patients, the longitudinal association between cognitive function and HRQoL, and to clarify the direction of their effects. As a longitudinal study, this study uses a cross-lagged model to explore the relationship between cognitive function and HRQoL in Chinese middle-aged and older diabetic patients, the direction of this relationship, and whether it differs by digital usage behavior. This research attempts to explore the longitudinal relationship between cognition and HRQoL in middle-aged and older diabetic patients and to provide evidence that can promote their health. It should be noted that, because multiple databases exist in China, most scholars use databases such as the China Household Finance Survey, in addition to the China Health and Retirement Longitudinal Survey (CHARLS), when researching topics related to public health [26]. CHARLS contains richer content on the physical and mental health of middle-aged and elderly people, so we chose it as our data source.
In the future, effective interventions targeting the HRQoL of middle-aged and older diabetic patients could be considered as a way to improve their cognitive function and thereby promote their healthy development.
Background: Cognitive function and health-related quality of life (HRQoL) are important issues in diabetes care. According to the China Association for Aging, it is estimated that by 2030, the number of elderly people with dementia in China will reach 22 million. The World Health Organization reports that by 2044, the number of people with diabetes in China is expected to reach 175 million. Methods: Cohort analyses were conducted based on 854 diabetic patients aged ≥45 years from the third (2015) and fourth (2018) surveys of the China Health and Retirement Longitudinal Study (CHARLS). Correlation analysis, repeated-measures analysis of variance, and cross-lagged panel models were used to test these relationships and to measure differences by digital usage behavior in the established relationship. Results: The results show that the cognitive function of middle-aged and older diabetic patients is positively correlated with HRQoL. HRQoL at T1 could significantly predict cognitive function at T2 (PCS: B = 0.12, p < 0.01; MCS: B = 0.14, p < 0.01). This relationship is more associated with individual performance than digital usage behavior. Conclusions: Unidirectional associations may exist between cognitive function and HRQoL among middle-aged and older Chinese diabetes patients. In the future, doctors and nurses can recognize a lowering of self-perceived HRQoL in middle-aged and older diabetic patients and thus pay more attention to their cognitive function, in turn strengthening the evaluation, detection, and intervention of cognitive function.
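To make the longitudinal analysis described above concrete, the following is a minimal sketch of the cross-lagged logic. It is not the study's code: the file name and column names (cognition and PCS scores at the 2015 and 2018 waves) are hypothetical, and the two cross-lagged paths are approximated here with separate ordinary least-squares regressions rather than the full cross-lagged panel model used in the study.

# Minimal cross-lagged sketch (illustrative only; file and column names are hypothetical).
# Approximates the two cross-lagged paths with OLS instead of a full SEM:
#   cog_t2 ~ cog_t1 + pcs_t1   (does earlier HRQoL predict later cognition?)
#   pcs_t2 ~ pcs_t1 + cog_t1   (does earlier cognition predict later HRQoL?)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("charls_diabetes_wide.csv")  # hypothetical wide file: one row per patient

# Standardize so the path coefficients are comparable, as cross-lagged B values usually are.
cols = ["cog_t1", "cog_t2", "pcs_t1", "pcs_t2"]
df[cols] = (df[cols] - df[cols].mean()) / df[cols].std()

path_to_cog = smf.ols("cog_t2 ~ cog_t1 + pcs_t1", data=df).fit()
path_to_pcs = smf.ols("pcs_t2 ~ pcs_t1 + cog_t1", data=df).fit()

print(path_to_cog.params["pcs_t1"], path_to_cog.pvalues["pcs_t1"])  # HRQoL(T1) -> cognition(T2)
print(path_to_pcs.params["cog_t1"], path_to_pcs.pvalues["cog_t1"])  # cognition(T1) -> HRQoL(T2)

A full cross-lagged panel model estimates both equations simultaneously, typically in a structural equation modeling framework that also lets the T1 predictors and the T2 residuals covary; the regressions above only illustrate the directional question of whether earlier HRQoL predicts later cognition and vice versa.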
14,040
292
[ 3773, 1384, 274, 258, 151, 207, 222, 456, 462, 410 ]
15
[ "cognitive", "function", "cognitive function", "hrqol", "digital", "study", "scores", "patients", "t1", "pcs" ]
[ "elderly diabetic patients", "older chinese diabetic", "cognitive impairments aging", "diabetes relationship cognitive", "cognitive function diabetic" ]
null
[CONTENT] cognitive function | HRQoL | digital usage behavior | middle-aged and older people | diabetes [SUMMARY]
null
[CONTENT] cognitive function | HRQoL | digital usage behavior | middle-aged and older people | diabetes [SUMMARY]
[CONTENT] cognitive function | HRQoL | digital usage behavior | middle-aged and older people | diabetes [SUMMARY]
[CONTENT] cognitive function | HRQoL | digital usage behavior | middle-aged and older people | diabetes [SUMMARY]
[CONTENT] cognitive function | HRQoL | digital usage behavior | middle-aged and older people | diabetes [SUMMARY]
[CONTENT] Aged | China | Cognition | Diabetes Mellitus | Humans | Longitudinal Studies | Middle Aged | Quality of Life [SUMMARY]
null
[CONTENT] Aged | China | Cognition | Diabetes Mellitus | Humans | Longitudinal Studies | Middle Aged | Quality of Life [SUMMARY]
[CONTENT] Aged | China | Cognition | Diabetes Mellitus | Humans | Longitudinal Studies | Middle Aged | Quality of Life [SUMMARY]
[CONTENT] Aged | China | Cognition | Diabetes Mellitus | Humans | Longitudinal Studies | Middle Aged | Quality of Life [SUMMARY]
[CONTENT] Aged | China | Cognition | Diabetes Mellitus | Humans | Longitudinal Studies | Middle Aged | Quality of Life [SUMMARY]
[CONTENT] elderly diabetic patients | older chinese diabetic | cognitive impairments aging | diabetes relationship cognitive | cognitive function diabetic [SUMMARY]
null
[CONTENT] elderly diabetic patients | older chinese diabetic | cognitive impairments aging | diabetes relationship cognitive | cognitive function diabetic [SUMMARY]
[CONTENT] elderly diabetic patients | older chinese diabetic | cognitive impairments aging | diabetes relationship cognitive | cognitive function diabetic [SUMMARY]
[CONTENT] elderly diabetic patients | older chinese diabetic | cognitive impairments aging | diabetes relationship cognitive | cognitive function diabetic [SUMMARY]
[CONTENT] elderly diabetic patients | older chinese diabetic | cognitive impairments aging | diabetes relationship cognitive | cognitive function diabetic [SUMMARY]
[CONTENT] cognitive | function | cognitive function | hrqol | digital | study | scores | patients | t1 | pcs [SUMMARY]
null
[CONTENT] cognitive | function | cognitive function | hrqol | digital | study | scores | patients | t1 | pcs [SUMMARY]
[CONTENT] cognitive | function | cognitive function | hrqol | digital | study | scores | patients | t1 | pcs [SUMMARY]
[CONTENT] cognitive | function | cognitive function | hrqol | digital | study | scores | patients | t1 | pcs [SUMMARY]
[CONTENT] cognitive | function | cognitive function | hrqol | digital | study | scores | patients | t1 | pcs [SUMMARY]
[CONTENT] hrqol | cognitive | patients | changes | digital | china | cognitive function | found | function | digital usage [SUMMARY]
null
[CONTENT] 01 | cognitive | cognitive function | function | significant | digital | t2 | t1 | η2 | time [SUMMARY]
[CONTENT] middle aged older | middle aged | middle | aged older | aged | older | hrqol | cognitive function | cognitive | diabetic [SUMMARY]
[CONTENT] cognitive | cognitive function | function | hrqol | digital | 01 | scores | study | patients | t2 [SUMMARY]
[CONTENT] cognitive | cognitive function | function | hrqol | digital | 01 | scores | study | patients | t2 [SUMMARY]
[CONTENT] ||| the China Association for Aging | 2030 | China | 22 million ||| The World Health Organization | 2044 | China | 175 million [SUMMARY]
null
[CONTENT] T1 | T2 ||| 0.12, p & | 0.01 | 0.14, p &lt | 0.01 ||| [SUMMARY]
[CONTENT] Chinese ||| [SUMMARY]
[CONTENT] ||| the China Association for Aging | 2030 | China | 22 million ||| The World Health Organization | 2044 | China | 175 million ||| 854 | years | third | 2015 | fourth (2018 | the China Health and Retirement Longitudinal Study ||| ||| ||| T1 | T2 ||| 0.12, p & | 0.01 | 0.14, p &lt | 0.01 ||| ||| Chinese ||| [SUMMARY]
[CONTENT] ||| the China Association for Aging | 2030 | China | 22 million ||| The World Health Organization | 2044 | China | 175 million ||| 854 | years | third | 2015 | fourth (2018 | the China Health and Retirement Longitudinal Study ||| ||| ||| T1 | T2 ||| 0.12, p & | 0.01 | 0.14, p &lt | 0.01 ||| ||| Chinese ||| [SUMMARY]
Repair or replace ischemic mitral regurgitation during coronary artery bypass grafting? A meta-analysis.
27585461
No agreement has been reached for the best surgical treatment for patients with chronic ischemic mitral regurgitation (IMR) undergoing coronary artery bypass grafting (CABG). Our objective was to meta-analyze the clinical outcomes of repair and replacement.
BACKGROUND
A computerized search was performed using Pubmed, Embase, Ovid medline and Cochrane Library. The search terms "ischemic or ischaemic" and "mitral valve" and "repair or replacement or annuloplasty" and "coronary artery bypass grafting" were entered as MeSH terms and keywords. The primary outcomes were operative mortality and late mortality. Secondary outcomes were 2+ or greater recurrence of mitral regurgitation and reoperation rate.
METHODS
Eleven studies were eligible for the final meta-analysis. These studies included a total of 1750 patients, 60.4 % of whom received mitral valve repair. All patients underwent concomitant coronary artery bypass graft. No differences were found in operative mortality (summary odds ratio [OR] 0.65; 95 % confidence interval [CI] 0.43-1.00; p = 0.05), late mortality (summary hazard ratio [HR] 0.87; 95 % confidence interval [CI] 0.67-1.14; p = 0.31) and reoperation (summary odds ratio [OR] 1.47; 95 % confidence interval [CI] 0.90-2.38; p = 0.12). Regurgitation recurrence was lower in the replacement group (summary odds ratio [OR] 5.41; 95 % confidence interval [CI] 3.12-9.38; p < 0.001).
RESULTS
In patients with chronic ischemic mitral regurgitation during CABG, mitral valve replacement is associated with lower recurrence of regurgitation. No differences were found regarding survival and reoperation rates.
CONCLUSION
[ "Aged", "Coronary Artery Bypass", "Female", "Heart Valve Prosthesis Implantation", "Humans", "Male", "Middle Aged", "Mitral Valve", "Mitral Valve Insufficiency", "Myocardial Ischemia", "Reoperation", "Survival Rate", "Treatment Outcome" ]
5008002
Background
Chronic ischemic mitral regurgitation (IMR) is a frequent and important complication after myocardial infarction. Its pathophysiologic mechanism involves segmental or global remodeling of the left ventricle (LV), which induces papillary muscle displacement and leaflet tethering [1]. The presence of IMR is independently associated with mortality and morbidity after myocardial infarction [2]. Given the severity of IMR, surgery performed for IMR ranges from coronary artery bypass grafting (CABG) alone to CABG combined with mitral valve surgery [3, 4]. Two randomized trials indicated that repair was associated with a reduced prevalence of mitral regurgitation but did not show a clinically meaningful advantage of adding mitral valve repair to CABG [5, 6]. In addition, previous meta-analyses comparing repair with replacement concluded that repair is associated with lower operative mortality but higher recurrence of regurgitation in patients with ischemic mitral regurgitation, with or without CABG [7, 8]. For patients with chronic IMR undergoing concomitant CABG, the best surgical treatment is still controversial: some studies support replacement [9, 10], others support repair [11, 12], and others showed similar survival for the two procedures [13]. Current guidelines recommend mitral valve surgery for severe IMR but do not specify a particular type of procedure [14, 15]. Numerous non-randomized studies have compared the clinical outcomes of mitral valve repair (MVP) plus CABG with those of mitral valve replacement (MVR) plus CABG for IMR, yet there is still no systematic and quantitative assessment of the accumulated literature on this topic. Meta-analysis is a powerful tool for providing a meaningful comparison of the short- and long-term outcomes of these procedures. The present meta-analysis aimed to assess the clinical outcomes of patients who underwent mitral valve surgery and CABG for chronic IMR.
null
null
Results
Search results and study quality: The literature search identified a total of 545 studies, published between 1965 and 2015. On the basis of titles and abstracts, 34 articles were selected and reviewed in full, and eleven articles met the inclusion and exclusion criteria [9–13, 23–28] (Fig. 1: flow chart of study selection). Of the included studies, ten were retrospective observational studies [9–13, 23–27] and one was a prospective observational study [28]; all were nonrandomized. These studies included a total of 1807 patients, 1091 (60.4 %) of whom underwent repair and 716 (39.6 %) of whom underwent replacement. All patients had CABG. Patient characteristics and a summary of operative details are given in Tables 1 and 2, respectively. With the exception of the replacement patients being older in 2 of the studies, the two groups were similar in terms of hypertension (HTN), diabetes, atrial fibrillation (AF), left ventricular ejection fraction (LVEF) and New York Heart Association (NYHA) class. Eight of the studies reported data on the type of prosthesis used for mitral valve replacement and on preservation of the subvalvular apparatus. In half of the studies, the majority of patients received a bioprosthetic valve, and preservation of the subvalvular apparatus (either total or partial) was performed in the vast majority of mitral valve replacements.

[Table 1: Key features of included studies (subjects, mean age, sex, HTN, diabetes, AF, NYHA III–IV, mean LVEF, MR grade and follow-up period, by MVP + CABG versus MVR + CABG); the tabulated values are not recoverable from this extraction. Abbreviations: AF atrial fibrillation, LVEF left ventricular ejection fraction, CABG coronary artery bypass grafting, MR mitral regurgitation, HTN hypertension, MVP mitral valve repair, MVR mitral valve replacement, NR not reported.]

[Table 2: Operative characteristics (cardiopulmonary bypass and aortic cross-clamping times, MVR prosthesis type, subvalvular apparatus preservation, annuloplasty type and undersizing); the tabulated values are not recoverable from this extraction.]

All eleven trials were assessed with the Newcastle-Ottawa Scale, evaluating adequacy of selection, comparability, and outcome assessment for each trial (Table 3: study quality assessment for nonrandomized studies; per-study scores not recoverable from this extraction). All studies included in our meta-analysis were of high quality (scores ≥ 6).

Peri-operative mortality: Ten observational studies involving a total of 1750 patients reported operative mortality. The odds ratios in the individual studies ranged from 0.16 to 2.32 (Fig. 2: mitral valve repair versus mitral valve replacement on peri-operative mortality). The summary odds ratio was 0.65 (95 % CI, 0.43-1.00; P = 0.05), indicating a trend towards lower peri-operative mortality with repair that did not reach statistical significance. Heterogeneity across the studies was absent (I2 = 0 %), and no publication bias was found by either Egger's test (P = 0.83) or Begg's test (P = 0.68).

Late mortality: A total of nine studies (1622 patients) reported late mortality (Fig. 3: mitral valve repair versus mitral valve replacement on late mortality). The overall hazard ratio was 0.87 (95 % CI, 0.67-1.14; P = 0.31), suggesting that late mortality was not significantly reduced following repair. Heterogeneity was moderate (I2 = 30 %). Ten of the studies included patients with varying degrees of regurgitation and left ventricular dysfunction; in the remaining study [23], all patients had severely impaired LV function (ejection fraction <25 %) and severe ischemic MR at the time of CABG. Because severely decreased left ventricular function and severe IMR could plausibly affect mortality, a sensitivity analysis excluding that study was conducted. Restricting the analysis to the remaining studies had no significant impact on late mortality following repair (summary hazard ratio, 1.03; 95 % CI, 0.90-1.17; P = 0.66), while heterogeneity fell to I2 = 0 %, indicating no variability among the remaining studies. Further exclusion of any single study did not significantly reduce heterogeneity. In addition, our meta-analysis included 10 retrospective studies and 1 prospective study; because different study designs may influence pooled outcomes, a sensitivity analysis restricted to retrospective studies was also performed and did not significantly change the result for late mortality (HR, 0.86; 95 % CI, 0.64-1.14; P = 0.30; I2 = 38 %).

Mitral valve reoperation: Reoperation, for causes such as recurrent MV regurgitation, thromboembolism and prosthetic endocarditis, was reported in five studies involving a total of 845 patients. The combined odds ratio was 1.47, suggesting a trend favouring replacement; however, no significant difference was reached between the two surgical approaches (95 % CI, 0.90-2.38; I2 = 0 %; P = 0.12) (Fig. 4: mitral valve repair versus mitral valve replacement on reoperation).

Recurrence of MR: Five studies involving a total of 837 patients provided data on recurrence of MR during follow-up. The MVP + CABG group was associated with a significantly increased recurrence rate of MR (OR, 5.41; 95 % CI, 3.12–9.38; P < 0.001), with low heterogeneity among those studies (I2 = 10 %) (Fig. 5: mitral valve repair versus mitral valve replacement on recurrence of mitral valve regurgitation). A sensitivity analysis restricted to retrospective studies did not significantly change the result for recurrence of MR (OR, 5.97; 95 % CI, 3.36-10.58; P < 0.001; I2 = 0 %).
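The summary odds ratios, confidence intervals and I2 values reported above follow from standard inverse-variance pooling of study-level log odds ratios. The sketch below is illustrative only and is not the authors' code: the per-study 2x2 counts are hypothetical placeholders, and a fixed-effect model is assumed because the reported heterogeneity was low.

# Fixed-effect (inverse-variance) pooling of log odds ratios with Cochran's Q and I2.
# Illustrative only: the event counts below are placeholders, not data from the included studies.
import math

# (events_repair, n_repair, events_replacement, n_replacement) per study -- hypothetical numbers.
studies = [(5, 100, 8, 90), (3, 60, 4, 55), (7, 120, 9, 110)]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_or = math.log((a * d) / (b * c))   # log odds ratio for one study
    var = 1 / a + 1 / b + 1 / c + 1 / d    # variance of the log odds ratio
    log_ors.append(log_or)
    weights.append(1 / var)                # inverse-variance weight

pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))  # Cochran's Q
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0               # I2 statistic

print(f"summary OR {math.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I2 {i2:.0f}%")

When I2 exceeds 50 %, the statistical methods described for this meta-analysis switch to a random-effects model, which adds an estimated between-study variance component to each study's variance before computing the weights.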
Conclusions
In patients with chronic ischemic mitral regurgitation during CABG, mitral valve replacement is associated with lower recurrence of regurgitation. No differences were found regarding survival and reoperation rates.
[ "Search strategy", "Study selection", "Data extraction and quality assessment", "Statistical analysis", "Search results and study quality", "Peri-operative mortality", "Late mortality", "Mitral valve reoperation", "Recurrence of MR", "Limitations" ]
[ "This meta-analysis was conducted according to the recommendations of the Meta-Analysis of Observational Studies in Epidemiology (MOOSE) [16]. A computerized search was performed using Pubmed, Embase, Ovid medline and Cochrane Library from their dates of inception to December 2015 without language restriction. The search terms “ischemic or ischaemic” and “mitral valve” and “repair or replacement or annuloplasty” and “coronary artery bypass grafting” were entered as MeSH terms and keywords. The language of publication was restricted to English. We also reviewed the full text and references lists of all relevant review articles in detail. YW and XS independently undertook the literature search, screening of titles and abstracts. Any disagreement was resolved by consensus.", "Articles were included if there is a direct comparison of repair versus replacement and all patients with IMR had CABG. The exclusion criteria were applied to select the final articles for the meta-analysis: (1) ischemic etiology in only a subset of the patients with outcomes not specifically provided (2) nonischemic dilated cardiomyopathy (3) beating heart procedures (4) concomitant surgical ventricular restoration (5) preoperative hemodynamic instability (6) lack of annuloplasty in > 20 % of the patients in the repair group (7) acute IMR.", "All data were extracted independently by 2 investigators (Y.W., X.S.) according to the prespecified selection criteria, with disagreement resolved by consensus among all authors. The following data from each study were extracted: the last name of the first author, year of publication, study population, patients’ age and gender, comorbidities, cardiac function, severity of mitral regurgitation at baseline and follow-up period. Any disagreement was resolved by consensus.\nBased on the extracted data, the quality of the included studies was evaluated using the nine-item Newcastle-Ottawa Quality scale [17], a widely used tool for the quality assessment of non-randomized trials. The high-quality study was defined as a study with ≥6 scores.", "The primary end points were operative mortality and late mortality (considered to be year after operation). Operative mortality was defined as death within 30 days after operation or in-hospital death. Secondary end points were MR recurrence 2+ or greater and reoperation at follow-up. The meta-analysis was performed using Review Manager (Revman, version 5.3 for windows, Oxford, England, Cochrane Collaboration) and Stata (version 11.0; StataCorp, College Station, TX). Hazard ratio (HR) with a 95 % confidence intervals (CIs), directly extracted from these included studies or indirectly calculated using the method of Tierney and colleagues [18] to assess the efficacy of the surgical intervention in each study. A summary of odds ratio (OR) and their corresponding 95 % CI were computed for each dichotomous outcome using either fixed-effects models or, in the presence of substantial heterogeneity (I2 > 50 %), random-effects models [19]. Statistical heterogeneity across studies was examined with Cochran’s Q test as well as the I2 statistics. Studies with an I2 statistics of <25 % were considered to have low heterogeneity, those with an I2 statistics of 25–50 % were considered to have moderate heterogeneity, and those with an I2 statistics of >50 % were considered to have a high degree of heterogeneity [20]. If there was high heterogeneity, the possible clinical and methodological factors for this were further explored. 
Potential sources of heterogeneity were investigated using sensitivity analyses and each study involved in the meta-analysis was excluded each time to reflect the influence of the individual data set on the pooled RRs.\nPublication bias was assessed using the Egger regression asymmetry test [21] and Begg adjusted rank correlation test [22]; a P value of less than 0.05 was considered representative of statistically significant publication bias. Meta-analysis results are displayed in forest plots. A p value < 0.05 was considered statistically significant.", "The literature search identified a total of 545 studies, which were published between 1965 and 2015. On the basis of title and abstracts, 34 articles were selected and reviewed in full. Eleven articles met the inclusion and exclusion criteria [9–13, 23–28] (Fig. 1). Of the included studies, there were ten retrospective observational studies [9–13, 23–27] and one prospective observational study [28]. All were nonrandomized studies. These studies included a total of 1807 patients, 1091 (60.4 %) of whom underwent repair and 716 (39.6 %) of whom underwent replacement. All patients had CABG. Patient characteristics and a summary of operative details are summarized in Tables 1 and 2, respectively. With the exception of the replacement patients being older in 2 of the studies, the two groups were similar in terms of hypertension (HTN), diabetes, atrial fibrillation (AF), left ventricular ejection fraction (LVEF) and the New York Heart Association (NYHA) class. Eight of the studies reported data on the type of prosthesis used for mitral valve replacement and preservation of the subvalvular apparatus. In half of the studies, the majority of patients received a bioprothesis valve. In addition, preservation of the subvalvular (either total or partial) apparatus were performed in the vast majority of mitral valve replacements.Fig. 
1: Flow chart of study selection. [Table 1: Key features of included studies (subjects, mean age, sex, HTN, diabetes, AF, NYHA III–IV, mean LVEF, MR grade and follow-up period, by MVP + CABG versus MVR + CABG) and Table 2: Operative characteristics (cardiopulmonary bypass and aortic cross-clamping times, MVR prosthesis type, subvalvular apparatus preservation, annuloplasty type and undersizing); the tabulated values are not recoverable from this extraction. Abbreviations: AF atrial fibrillation, LVEF left ventricular ejection fraction, CABG coronary artery bypass grafting, MR mitral regurgitation, HTN hypertension, MVP mitral valve repair, MVR mitral valve replacement, ACC aortic cross-clamping, CPB cardiopulmonary bypass, NR not reported.] All the eleven trials were assessed by the Newcastle-Ottawa Scale, evaluating adequacy of selection, comparability, and outcome assessment for each trial (Table 3: study quality assessment for nonrandomized studies; per-study scores not recoverable from this extraction). All studies included in our meta-analysis were of high quality (scores ≥ 6).", "Ten observational studies involving a total of 1750 patients reported operative mortality. The odds ratios in the individual studies ranged from 0.16 to 2.32 (Fig. 2: mitral valve repair versus mitral valve replacement on peri-operative mortality). The summary odds ratio was 0.65 (95 % CI, 0.43-1.00; P = 0.05), indicating a trend towards lower peri-operative mortality with repair that did not reach statistical significance. Heterogeneity across the studies was absent (I2 = 0 %), and no publication bias was found by either Egger's test (P = 0.83) or Begg's test (P = 0.68).", "A total of nine studies (1622 patients) reported late mortality (Fig. 3: mitral valve repair versus mitral valve replacement on late mortality). The overall hazard ratio was 0.87 (95 % CI, 0.67-1.14; P = 0.31), suggesting that late mortality was not significantly reduced following repair. Heterogeneity was moderate (I2 = 30 %). Ten of the studies included patients with varying degrees of regurgitation and left ventricular dysfunction; in the remaining study [23], all patients had severely impaired LV function (ejection fraction <25 %) and severe ischemic MR at the time of CABG. Because severely decreased left ventricular function and severe IMR could plausibly affect mortality, a sensitivity analysis excluding that study was conducted; it had no significant impact on late mortality following repair (summary hazard ratio, 1.03; 95 % CI, 0.90-1.17; P = 0.66), while heterogeneity fell to I2 = 0 %, indicating no variability among the remaining studies. Further exclusion of any single study did not significantly reduce heterogeneity. In addition, our meta-analysis included 10 retrospective studies and 1 prospective study; because different study designs may influence pooled outcomes, a sensitivity analysis restricted to retrospective studies was also performed and did not significantly change the result for late mortality (HR, 0.86; 95 % CI, 0.64-1.14; P = 0.30; I2 = 38 %).", "Reoperation, for causes such as recurrent MV regurgitation, thromboembolism and prosthetic endocarditis, was reported in five studies involving a total of 845 patients. The combined odds ratio was 1.47, suggesting a trend favouring replacement; however, no significant difference was reached between the two surgical approaches (95 % CI, 0.90-2.38; I2 = 0 %; P = 0.12) (Fig. 4: mitral valve repair versus mitral valve replacement on reoperation).", "Five studies involving a total of 837 patients provided data on recurrence of MR during follow-up. The MVP + CABG group was associated with a significantly increased recurrence rate of MR (OR, 5.41; 95 % CI, 3.12–9.38; P < 0.001), with low heterogeneity among those studies (I2 = 10 %) (Fig. 5: mitral valve repair versus mitral valve replacement on recurrence of mitral valve regurgitation). A sensitivity analysis restricted to retrospective studies did not significantly change the result for recurrence of MR (OR, 5.97; 95 % CI, 3.36-10.58; P < 0.001; I2 = 0 %).", "Our meta-analysis has several limitations. Firstly, it was based on observational, retrospective studies with the inherent biases of such designs, and the included publications were relatively small, nonrandomized studies. Secondly, changes in NYHA class, LVEF and left ventricular reverse remodeling were too scarcely reported in the included studies to enable meta-analysis. Eight of the eleven included studies reported data on subvalvular apparatus preservation in mitral valve replacement, yet without uniform preservation of both the anterior and posterior leaflets; the other three studies gave no description of subvalvular apparatus preservation. Thirdly, potential confounding factors such as preoperative risk evaluation (e.g., STS score), suitability of the mitral valve for repair, age, cause of mitral regurgitation (ischemia, fibrosis, ventricular remodeling), EF and complexity of revascularization were not considered or adjusted for in some of the included studies. Therefore, the superiority of repair over replacement may be affected by these and other factors that cannot be revealed by a meta-analysis of observational trials, and well-designed RCTs are required to further verify the conclusion. Another limitation of our report is that follow-up periods were heterogeneous between studies, with differing use of mean and median durations of follow-up; therefore, subgroup analysis could not be performed statistically." ]
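The statistical-analysis text above also describes two robustness checks: a leave-one-out sensitivity analysis, in which each study is excluded in turn to gauge its influence on the pooled estimate, and Egger's regression test for publication bias. The sketch below illustrates both on hypothetical effect sizes; it is not the authors' code, and the numbers are placeholders.

# Leave-one-out sensitivity analysis and Egger's regression test for study-level log odds ratios.
# Effect sizes and standard errors are hypothetical placeholders, not values from the included studies.
import numpy as np
import statsmodels.api as sm

log_or = np.array([-0.45, -0.10, 0.25, -0.60, 0.05])  # per-study log odds ratios (hypothetical)
se     = np.array([ 0.30,  0.25, 0.40,  0.35, 0.28])  # their standard errors (hypothetical)

def pool_fixed(y, s):
    """Fixed-effect (inverse-variance) pooled estimate and its standard error."""
    w = 1.0 / s**2
    est = np.sum(w * y) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

# Leave-one-out: drop each study in turn and re-pool, mirroring the sensitivity analysis
# in which each study is excluded to assess its influence on the pooled result.
for i in range(len(log_or)):
    keep = np.arange(len(log_or)) != i
    est, est_se = pool_fixed(log_or[keep], se[keep])
    print(f"without study {i + 1}: OR = {np.exp(est):.2f} "
          f"(95% CI {np.exp(est - 1.96 * est_se):.2f}-{np.exp(est + 1.96 * est_se):.2f})")

# Egger's regression asymmetry test: regress the standard normal deviate (effect / SE)
# on precision (1 / SE); an intercept far from zero suggests small-study effects.
z, precision = log_or / se, 1.0 / se
egger = sm.OLS(z, sm.add_constant(precision)).fit()
print(f"Egger intercept = {egger.params[0]:.3f}, p = {egger.pvalues[0]:.3f}")

Begg's test, also cited in the methods, instead checks the rank correlation between standardized effect sizes and their variances; dedicated meta-analysis packages implement both tests alongside the pooling itself.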
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Search strategy", "Study selection", "Data extraction and quality assessment", "Statistical analysis", "Results", "Search results and study quality", "Peri-operative mortality", "Late mortality", "Mitral valve reoperation", "Recurrence of MR", "Discussion", "Limitations", "Conclusions" ]
[ "Chronic ischemic mitral regurgitation (IMR) is a frequent and important complication after myocardial infarction. Its pathophysiologic mechanisms account for remodeling of segmental/global left ventricle (LV) inducing papillary muscle displacement and leaflet tethering [1]. The presence of IMR is independently associated with mortality and morbidity after myocardial infarction [2].\nGiven the severity of IMR, surgery performed for IMR ranges from coronary artery bypass grafting (CABG) alone to both CABG and mitral valve surgery [3, 4]. Two randomized trials indicated that repair was associated with a reduced prevalence of mitral regurgitation but did not show a clinically meaningful advantage of adding mitral valve repair to CABG [5, 6]. In addition, when compared with replacement, previous meta-analyses concluded that repair is associated with lower operative mortality but higher recurrence of regurgitation in patients with ischemic mitral regurgitation, with or without CABG [7, 8]. For patients with chronic IMR undergoing combined CABG, the best surgical treatment is still controversial. Some studies support replacement [9, 10], others support repair [11, 12], and others showed similar survival for the two procedures [13]. Current guidelines recommend mitral valve surgery for severe IMR, but do not demonstrate a specific type of procedure [14, 15]. Numerous non-randomized studies have been published comparing the clinical outcomes between MVP + CABG and MVR + CABG for IMR. However, there is still no systematic and quantitative assessment of accumulated literature on this topic. Meta-analysis is a powerful tool to provide meaningful comparison of short and long-term outcomes of these procedures. The present meta-analysis aimed to assess the clinical outcomes of patients who underwent mitral valve surgery and CABG for chronic IMR.", " Search strategy This meta-analysis was conducted according to the recommendations of the Meta-Analysis of Observational Studies in Epidemiology (MOOSE) [16]. A computerized search was performed using Pubmed, Embase, Ovid medline and Cochrane Library from their dates of inception to December 2015 without language restriction. The search terms “ischemic or ischaemic” and “mitral valve” and “repair or replacement or annuloplasty” and “coronary artery bypass grafting” were entered as MeSH terms and keywords. The language of publication was restricted to English. We also reviewed the full text and references lists of all relevant review articles in detail. YW and XS independently undertook the literature search, screening of titles and abstracts. Any disagreement was resolved by consensus.\nThis meta-analysis was conducted according to the recommendations of the Meta-Analysis of Observational Studies in Epidemiology (MOOSE) [16]. A computerized search was performed using Pubmed, Embase, Ovid medline and Cochrane Library from their dates of inception to December 2015 without language restriction. The search terms “ischemic or ischaemic” and “mitral valve” and “repair or replacement or annuloplasty” and “coronary artery bypass grafting” were entered as MeSH terms and keywords. The language of publication was restricted to English. We also reviewed the full text and references lists of all relevant review articles in detail. YW and XS independently undertook the literature search, screening of titles and abstracts. 
Any disagreement was resolved by consensus.\n Study selection Articles were included if there is a direct comparison of repair versus replacement and all patients with IMR had CABG. The exclusion criteria were applied to select the final articles for the meta-analysis: (1) ischemic etiology in only a subset of the patients with outcomes not specifically provided (2) nonischemic dilated cardiomyopathy (3) beating heart procedures (4) concomitant surgical ventricular restoration (5) preoperative hemodynamic instability (6) lack of annuloplasty in > 20 % of the patients in the repair group (7) acute IMR.\nArticles were included if there is a direct comparison of repair versus replacement and all patients with IMR had CABG. The exclusion criteria were applied to select the final articles for the meta-analysis: (1) ischemic etiology in only a subset of the patients with outcomes not specifically provided (2) nonischemic dilated cardiomyopathy (3) beating heart procedures (4) concomitant surgical ventricular restoration (5) preoperative hemodynamic instability (6) lack of annuloplasty in > 20 % of the patients in the repair group (7) acute IMR.\n Data extraction and quality assessment All data were extracted independently by 2 investigators (Y.W., X.S.) according to the prespecified selection criteria, with disagreement resolved by consensus among all authors. The following data from each study were extracted: the last name of the first author, year of publication, study population, patients’ age and gender, comorbidities, cardiac function, severity of mitral regurgitation at baseline and follow-up period. Any disagreement was resolved by consensus.\nBased on the extracted data, the quality of the included studies was evaluated using the nine-item Newcastle-Ottawa Quality scale [17], a widely used tool for the quality assessment of non-randomized trials. The high-quality study was defined as a study with ≥6 scores.\nAll data were extracted independently by 2 investigators (Y.W., X.S.) according to the prespecified selection criteria, with disagreement resolved by consensus among all authors. The following data from each study were extracted: the last name of the first author, year of publication, study population, patients’ age and gender, comorbidities, cardiac function, severity of mitral regurgitation at baseline and follow-up period. Any disagreement was resolved by consensus.\nBased on the extracted data, the quality of the included studies was evaluated using the nine-item Newcastle-Ottawa Quality scale [17], a widely used tool for the quality assessment of non-randomized trials. The high-quality study was defined as a study with ≥6 scores.\n Statistical analysis The primary end points were operative mortality and late mortality (considered to be year after operation). Operative mortality was defined as death within 30 days after operation or in-hospital death. Secondary end points were MR recurrence 2+ or greater and reoperation at follow-up. The meta-analysis was performed using Review Manager (Revman, version 5.3 for windows, Oxford, England, Cochrane Collaboration) and Stata (version 11.0; StataCorp, College Station, TX). Hazard ratio (HR) with a 95 % confidence intervals (CIs), directly extracted from these included studies or indirectly calculated using the method of Tierney and colleagues [18] to assess the efficacy of the surgical intervention in each study. 
A summary of odds ratio (OR) and their corresponding 95 % CI were computed for each dichotomous outcome using either fixed-effects models or, in the presence of substantial heterogeneity (I2 > 50 %), random-effects models [19]. Statistical heterogeneity across studies was examined with Cochran’s Q test as well as the I2 statistics. Studies with an I2 statistics of <25 % were considered to have low heterogeneity, those with an I2 statistics of 25–50 % were considered to have moderate heterogeneity, and those with an I2 statistics of >50 % were considered to have a high degree of heterogeneity [20]. If there was high heterogeneity, the possible clinical and methodological factors for this were further explored. Potential sources of heterogeneity were investigated using sensitivity analyses and each study involved in the meta-analysis was excluded each time to reflect the influence of the individual data set on the pooled RRs.\nPublication bias was assessed using the Egger regression asymmetry test [21] and Begg adjusted rank correlation test [22]; a P value of less than 0.05 was considered representative of statistically significant publication bias. Meta-analysis results are displayed in forest plots. A p value < 0.05 was considered statistically significant.\nThe primary end points were operative mortality and late mortality (considered to be year after operation). Operative mortality was defined as death within 30 days after operation or in-hospital death. Secondary end points were MR recurrence 2+ or greater and reoperation at follow-up. The meta-analysis was performed using Review Manager (Revman, version 5.3 for windows, Oxford, England, Cochrane Collaboration) and Stata (version 11.0; StataCorp, College Station, TX). Hazard ratio (HR) with a 95 % confidence intervals (CIs), directly extracted from these included studies or indirectly calculated using the method of Tierney and colleagues [18] to assess the efficacy of the surgical intervention in each study. A summary of odds ratio (OR) and their corresponding 95 % CI were computed for each dichotomous outcome using either fixed-effects models or, in the presence of substantial heterogeneity (I2 > 50 %), random-effects models [19]. Statistical heterogeneity across studies was examined with Cochran’s Q test as well as the I2 statistics. Studies with an I2 statistics of <25 % were considered to have low heterogeneity, those with an I2 statistics of 25–50 % were considered to have moderate heterogeneity, and those with an I2 statistics of >50 % were considered to have a high degree of heterogeneity [20]. If there was high heterogeneity, the possible clinical and methodological factors for this were further explored. Potential sources of heterogeneity were investigated using sensitivity analyses and each study involved in the meta-analysis was excluded each time to reflect the influence of the individual data set on the pooled RRs.\nPublication bias was assessed using the Egger regression asymmetry test [21] and Begg adjusted rank correlation test [22]; a P value of less than 0.05 was considered representative of statistically significant publication bias. Meta-analysis results are displayed in forest plots. A p value < 0.05 was considered statistically significant.", "This meta-analysis was conducted according to the recommendations of the Meta-Analysis of Observational Studies in Epidemiology (MOOSE) [16]. 
A computerized search was performed using Pubmed, Embase, Ovid medline and Cochrane Library from their dates of inception to December 2015 without language restriction. The search terms “ischemic or ischaemic” and “mitral valve” and “repair or replacement or annuloplasty” and “coronary artery bypass grafting” were entered as MeSH terms and keywords. The language of publication was restricted to English. We also reviewed the full text and references lists of all relevant review articles in detail. YW and XS independently undertook the literature search, screening of titles and abstracts. Any disagreement was resolved by consensus.", "Articles were included if there is a direct comparison of repair versus replacement and all patients with IMR had CABG. The exclusion criteria were applied to select the final articles for the meta-analysis: (1) ischemic etiology in only a subset of the patients with outcomes not specifically provided (2) nonischemic dilated cardiomyopathy (3) beating heart procedures (4) concomitant surgical ventricular restoration (5) preoperative hemodynamic instability (6) lack of annuloplasty in > 20 % of the patients in the repair group (7) acute IMR.", "All data were extracted independently by 2 investigators (Y.W., X.S.) according to the prespecified selection criteria, with disagreement resolved by consensus among all authors. The following data from each study were extracted: the last name of the first author, year of publication, study population, patients’ age and gender, comorbidities, cardiac function, severity of mitral regurgitation at baseline and follow-up period. Any disagreement was resolved by consensus.\nBased on the extracted data, the quality of the included studies was evaluated using the nine-item Newcastle-Ottawa Quality scale [17], a widely used tool for the quality assessment of non-randomized trials. The high-quality study was defined as a study with ≥6 scores.", "The primary end points were operative mortality and late mortality (considered to be year after operation). Operative mortality was defined as death within 30 days after operation or in-hospital death. Secondary end points were MR recurrence 2+ or greater and reoperation at follow-up. The meta-analysis was performed using Review Manager (Revman, version 5.3 for windows, Oxford, England, Cochrane Collaboration) and Stata (version 11.0; StataCorp, College Station, TX). Hazard ratio (HR) with a 95 % confidence intervals (CIs), directly extracted from these included studies or indirectly calculated using the method of Tierney and colleagues [18] to assess the efficacy of the surgical intervention in each study. A summary of odds ratio (OR) and their corresponding 95 % CI were computed for each dichotomous outcome using either fixed-effects models or, in the presence of substantial heterogeneity (I2 > 50 %), random-effects models [19]. Statistical heterogeneity across studies was examined with Cochran’s Q test as well as the I2 statistics. Studies with an I2 statistics of <25 % were considered to have low heterogeneity, those with an I2 statistics of 25–50 % were considered to have moderate heterogeneity, and those with an I2 statistics of >50 % were considered to have a high degree of heterogeneity [20]. If there was high heterogeneity, the possible clinical and methodological factors for this were further explored. 
Search results and study quality: The literature search identified a total of 545 studies published between 1965 and 2015. On the basis of titles and abstracts, 34 articles were selected and reviewed in full. Eleven articles met the inclusion and exclusion criteria [9-13, 23-28] (Fig. 1): ten retrospective observational studies [9-13, 23-27] and one prospective observational study [28], all nonrandomized. These studies included a total of 1807 patients, 1091 (60.4 %) of whom underwent repair and 716 (39.6 %) of whom underwent replacement; all patients had CABG. Patient characteristics and operative details are summarized in Tables 1 and 2, respectively. With the exception of the replacement patients being older in two of the studies, the two groups were similar in terms of hypertension (HTN), diabetes, atrial fibrillation (AF), left ventricular ejection fraction (LVEF) and New York Heart Association (NYHA) class. Eight of the studies reported the type of prosthesis used for mitral valve replacement and whether the subvalvular apparatus was preserved; in half of the studies the majority of patients received a bioprosthesis, and preservation of the subvalvular apparatus (either total or partial) was performed in the vast majority of mitral valve replacements.
(Fig. 1: Flow chart of study selection.)
Table 1 (Key features of the included studies) reports, for the MVP + CABG and MVR + CABG groups of each study, the number of subjects, mean age, proportion of males, hypertension, diabetes, atrial fibrillation, NYHA class III-IV, mean LVEF, MR grade and follow-up period (footnotes: a = mean; b = median; c = percentage in class IV; d = p < 0.05 between MVP and MVR). Table 2 (Operative characteristics) reports cardiopulmonary bypass and aortic cross-clamp times, MVR prosthesis type (mechanical vs bioprosthesis), extent of subvalvular apparatus preservation, and the type and sizing of annuloplasty used for repair. Abbreviations: AF, atrial fibrillation; LVEF, left ventricular ejection fraction; CABG, coronary artery bypass grafting; MR, mitral regurgitation; HTN, hypertension; MVP, mitral valve repair; MVR, mitral valve replacement; ACC, aortic cross-clamping; CPB, cardiopulmonary bypass; NR, not reported.
All eleven studies were assessed with the Newcastle-Ottawa Scale for adequacy of selection, comparability and outcome assessment (Table 3).
All studies included in our meta-analysis were of high quality (score ≥6).
Table 3. Study quality assessment using the Newcastle-Ottawa Scale for nonrandomized studies (selection items: representativeness of exposed cohort / selection of nonexposed cohort / ascertainment of exposure / outcome of interest absent at start; comparability; outcome items: assessment of outcome / follow-up long enough / adequacy of follow-up; total score):
Lorusso et al.: 1/1/1/1; 0; 1/1/1; total 7
Lio et al.: 1/1/1/1; 1; 1/1/1; total 8
Ljubacev et al.: 1/1/1/1; 0; 1/0/1; total 6
Roshanali et al.: 1/1/1/1; 2; 1/1/1; total 9
Maltais et al.: 1/1/1/1; 2; 1/1/1; total 9
Qiu et al.: 1/1/1/1; 2; 1/1/1; total 9
Micovic et al.: 1/1/1/1; 2; 1/1/1; total 9
Bonacchi et al.: 1/1/1/1; 2; 1/1/1; total 9
Silberman et al.: 1/1/1/1; 1; 1/1/1; total 8
Mantovani et al.: 1/1/1/1; 2; 1/1/1; total 9
Reece et al.: 1/1/1/1; 1; 1/0/1; total 7
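The Newcastle-Ottawa assessment reduces to summing the item scores and applying the ≥6 threshold for high quality. A tiny sketch, using two rows taken from Table 3, is shown below; the remaining studies would be added in the same way.

```python
# Tally Newcastle-Ottawa Scale item scores and apply the >=6 "high quality" threshold.
NOS_ITEMS = ("representativeness", "selection_nonexposed", "ascertainment",
             "outcome_absent_at_start", "comparability",
             "outcome_assessment", "follow_up_length", "follow_up_adequacy")

scores = {
    "Lorusso et al.":  (1, 1, 1, 1, 0, 1, 1, 1),   # total 7 in Table 3
    "Ljubacev et al.": (1, 1, 1, 1, 0, 1, 0, 1),   # total 6 in Table 3
}

for study, items in scores.items():
    total = sum(items)
    quality = "high" if total >= 6 else "low"
    print(f"{study}: total {total}/9 -> {quality} quality")
```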
Peri-operative mortality: Ten observational studies involving a total of 1750 patients reported operative mortality. The odds ratios of the individual studies ranged from 0.16 to 2.32 (Fig. 2). The summary odds ratio was 0.65 (95 % CI, 0.43-1.00; P = 0.05), indicating a trend towards lower peri-operative mortality with repair that did not reach statistical significance. There was no heterogeneity across the studies (I2 = 0 %), and no publication bias was detected by either the Egger test (P = 0.83) or the Begg test (P = 0.68). (Fig. 2: Mitral valve repair versus mitral valve replacement, peri-operative mortality.)
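The reported P value follows directly from the summary OR and its confidence interval: on the log scale the 95 % CI implies a standard error, and the corresponding z statistic gives a two-sided P of about 0.05. A short check using only the numbers reported above:

```python
# Back-of-the-envelope check of the peri-operative mortality result on the log scale.
import math

or_hat, lo, hi = 0.65, 0.43, 1.00                    # summary OR and 95% CI reported above
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)      # SE of log-OR implied by the CI
z = math.log(or_hat) / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))   # two-sided normal P value
print(f"SE(logOR) ~ {se:.3f}, z ~ {z:.2f}, P ~ {p:.3f}")    # P comes out near 0.05
```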
Late mortality: Nine studies (1622 patients) reported late mortality (Fig. 3). The overall hazard ratio was 0.87 (95 % CI, 0.67-1.14; P = 0.31), indicating that late mortality was not significantly reduced after repair, with moderate heterogeneity (I2 = 30 %). Ten of the studies enrolled patients with varying degrees of regurgitation and left ventricular dysfunction; in the single exception [23], all patients had severely impaired LV function (ejection fraction <25 %) and severe ischemic MR undergoing CABG, and such severe LV dysfunction and severe IMR could themselves influence mortality. A sensitivity analysis was therefore conducted that included only the studies in which not all patients had severe ischemic MR and severely impaired LV function. Restricting the analysis in this way did not change the finding of no late-mortality benefit with repair (summary hazard ratio, 1.03; 95 % CI, 0.90-1.17; P = 0.66), while I2 fell to 0 %, indicating no appreciable variability among the remaining studies; excluding any other single study did not significantly reduce the heterogeneity. In addition, the meta-analysis included ten retrospective studies and one prospective study, and differences in study design could influence the pooled results; a sensitivity analysis restricted to the retrospective studies did not materially change the result for late mortality (HR, 0.86; 95 % CI, 0.64-1.14; P = 0.30; I2 = 38 %). (Fig. 3: Mitral valve repair versus mitral valve replacement, late mortality.)
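The leave-one-out sensitivity analysis described in the methods can be sketched as below; the log hazard ratios and standard errors are illustrative placeholders, not the values extracted from the nine studies.

```python
# Minimal leave-one-out sensitivity analysis: drop each study in turn and re-pool.
import numpy as np

def pool(effects, ses):
    """Fixed-effect pooled estimate and I^2 (%) for log-scale effects."""
    w = 1.0 / np.asarray(ses) ** 2
    est = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - est) ** 2)
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return est, i2

log_hr = np.array([-0.20, 0.05, -0.35, 0.10, -0.05])   # placeholder log hazard ratios
se     = np.array([0.15, 0.20, 0.25, 0.18, 0.22])      # placeholder standard errors

full_est, full_i2 = pool(log_hr, se)
print(f"all studies: HR {np.exp(full_est):.2f}, I2 {full_i2:.0f}%")
for i in range(len(log_hr)):
    keep = np.arange(len(log_hr)) != i
    est, i2 = pool(log_hr[keep], se[keep])
    print(f"omitting study {i + 1}: HR {np.exp(est):.2f}, I2 {i2:.0f}%")
```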
Mitral valve reoperation: Reoperation, for causes such as MV regurgitation, thromboembolism and prosthetic endocarditis, was reported in five studies involving a total of 845 patients. The combined odds ratio was 1.47, suggesting a trend in favour of replacement; however, the difference between the two surgical approaches was not significant (95 % CI, 0.90-2.38; I2 = 0 %; P = 0.12). (Fig. 4: Mitral valve repair versus mitral valve replacement, reoperation.)
Recurrence of MR: Five studies involving a total of 837 patients provided data on recurrence of MR during follow-up. The MVP + CABG group had a significantly higher rate of recurrent MR (OR, 5.41; 95 % CI, 3.12-9.38; P < 0.001), with low heterogeneity among the studies (I2 = 10 %) (Fig. 5: Mitral valve repair versus mitral valve replacement, recurrence of mitral regurgitation). A sensitivity analysis restricted to the retrospective studies did not materially change this result (OR, 5.97; 95 % CI, 3.36-10.58; P < 0.001; I2 = 0 %).
Discussion: In our meta-analysis of eleven studies of patients undergoing elective repair or replacement with concomitant CABG, no differences were found in peri-operative mortality or long-term survival, and mitral valve replacement was associated with a lower incidence of recurrent mitral regurgitation in patients with IMR. The Society of Thoracic Surgeons database reports nationwide mortality of approximately 5 % for the MVP + CABG group (4.8 % in-hospital mortality and 5.3 % operative mortality), compared with approximately 8 % for the MVR + CABG group (7.8 % in-hospital mortality and 8.5 % operative mortality) [29].
Moderate or severe recurrent MR after a restrictive annuloplasty ring occurs early and affects a substantial proportion of patients by 2 years [30]. In the present study, the main disadvantage of the MVP + CABG group was recurrence of MR compared with the MVR + CABG group. Although MR recurrence was common, mitral valve reoperation was not: a similar reoperation rate was found in both groups, suggesting that not all patients with MR recurrence of 2+ or greater underwent reoperation. There are several possible explanations. Recurrent IMR after annuloplasty might be considered inconsequential compared with the underlying myocardial disease, or surgeons might be reluctant to perform a second or third cardiac operation in these elderly, high-risk patients [31]; many patients with recurrent MR are simply too sick or too old, or both, for reoperation to be considered.
A case-matched study found that replacement was associated with a lower incidence of valve-related complications than repair, with no significant difference in LV function at follow-up [32]. However, replacement carried higher rates of thromboembolism and ischemic stroke than repair despite anticoagulant therapy [33]. Although mitral valve replacement can fully correct the regurgitation, the structural integrity of the mitral apparatus is usually compromised after replacement, leading to continuing damage to the left ventricular tethering loop, with adverse effects on left ventricular contraction and a poor prognosis [8]. Individualized consideration should therefore be given to the two surgical procedures.
To date, no RCTs have compared the clinical outcomes of the two surgical strategies specifically in patients with chronic IMR undergoing CABG.
To our knowledge, this is the first meta-analysis to compare the short-term and long-term outcomes of the two mitral valve procedures specifically in patients with chronic IMR undergoing concomitant CABG. Studies were selected with rigorous inclusion and exclusion criteria. All patients in the included studies underwent concomitant CABG, which ensures homogeneity of the IMR populations and facilitates comparison between studies. In addition, patients with acute IMR due to papillary muscle rupture were excluded, so the findings reflect the surgical treatment of chronic IMR and avoid biasing the results against the replacement group. By excluding articles in which more than 20 % of the repair group lacked an annuloplasty ring, we made the comparison between the two mitral valve operations more robust. The results therefore reflect the surgical management of patients with IMR undergoing simultaneous CABG.
Limitations: Our meta-analysis has several limitations. First, it is based on observational, largely retrospective studies, with the inherent biases of such designs, and the included publications were relatively small and nonrandomized. Second, changes in NYHA class, LVEF and left ventricular reverse remodeling were reported too sparsely in the included studies to allow meta-analysis; eight of the eleven studies reported data on preservation of the subvalvular apparatus during mitral valve replacement, but without uniform preservation of both the anterior and posterior leaflets, and the other three gave no description of subvalvular preservation. Third, potential confounders, such as preoperative risk evaluation (e.g., STS score), valve anatomy more suitable for repair, age, the cause of mitral regurgitation (ischemia, fibrosis, ventricular remodeling), ejection fraction and the complexity of revascularization, were not considered or adjusted for in some of the included studies; any apparent superiority of repair over replacement may therefore be influenced by these and other factors that a meta-analysis of observational studies cannot resolve, and well-designed RCTs are needed to verify the conclusions. Finally, follow-up periods were heterogeneous, with some studies reporting mean and others median durations, so subgroup analysis by follow-up could not be performed.
Conclusions: In patients with chronic ischemic mitral regurgitation undergoing CABG, mitral valve replacement is associated with a lower recurrence of regurgitation. No differences were found in survival or reoperation rates.
[ "introduction", "materials|methods", null, null, null, null, "results", null, null, null, null, null, "discussion", null, "conclusion" ]
[ "Ischemic mitral regurgitation", "Mitral valve repair", "Mitral valve replacement", "Coronary artery bypass grafting", "Meta-analysis" ]
Background: Chronic ischemic mitral regurgitation (IMR) is a frequent and important complication of myocardial infarction. Its pathophysiologic mechanism is remodeling of the left ventricle (LV), segmental or global, that induces papillary muscle displacement and leaflet tethering [1]. The presence of IMR is independently associated with mortality and morbidity after myocardial infarction [2]. Depending on the severity of IMR, surgery ranges from coronary artery bypass grafting (CABG) alone to CABG combined with mitral valve surgery [3, 4]. Two randomized trials indicated that repair reduced the prevalence of mitral regurgitation but did not show a clinically meaningful advantage of adding mitral valve repair to CABG [5, 6]. In addition, previous meta-analyses concluded that, compared with replacement, repair is associated with lower operative mortality but higher recurrence of regurgitation in patients with ischemic mitral regurgitation, with or without CABG [7, 8]. For patients with chronic IMR undergoing combined CABG, the best surgical treatment remains controversial: some studies support replacement [9, 10], others support repair [11, 12], and others show similar survival for the two procedures [13]. Current guidelines recommend mitral valve surgery for severe IMR but do not specify the type of procedure [14, 15]. Numerous non-randomized studies have compared the clinical outcomes of MVP + CABG and MVR + CABG for IMR, but there has been no systematic, quantitative assessment of the accumulated literature. Meta-analysis is a powerful tool for comparing the short- and long-term outcomes of these procedures. The present meta-analysis aimed to assess the clinical outcomes of patients who underwent mitral valve surgery and CABG for chronic IMR.
Methods: Search strategy. This meta-analysis was conducted according to the recommendations of the Meta-Analysis of Observational Studies in Epidemiology (MOOSE) [16]. A computerized search of PubMed, Embase, Ovid MEDLINE and the Cochrane Library was performed from their dates of inception to December 2015. The search terms "ischemic or ischaemic" and "mitral valve" and "repair or replacement or annuloplasty" and "coronary artery bypass grafting" were entered as MeSH terms and keywords. The language of publication was restricted to English. We also reviewed the full text and reference lists of all relevant review articles in detail. YW and XS independently undertook the literature search and the screening of titles and abstracts; any disagreement was resolved by consensus.
Study selection. Articles were included if they directly compared repair with replacement and all patients with IMR underwent CABG. The following exclusion criteria were applied to select the final articles: (1) ischemic etiology in only a subset of the patients, with outcomes not reported separately; (2) nonischemic dilated cardiomyopathy; (3) beating-heart procedures; (4) concomitant surgical ventricular restoration; (5) preoperative hemodynamic instability; (6) lack of annuloplasty in more than 20 % of the patients in the repair group; (7) acute IMR.
Data extraction and quality assessment. All data were extracted independently by two investigators (Y.W., X.S.) according to the prespecified selection criteria, with disagreement resolved by consensus among all authors. The following data were extracted from each study: the last name of the first author, year of publication, study population, patients' age and gender, comorbidities, cardiac function, severity of mitral regurgitation at baseline, and follow-up period. Based on the extracted data, the quality of the included studies was evaluated with the nine-item Newcastle-Ottawa quality scale [17], a widely used tool for the quality assessment of non-randomized studies; a high-quality study was defined as one scoring ≥6.
Statistical analysis. The primary end points were operative mortality and late mortality (mortality beyond the first year after operation); operative mortality was defined as death within 30 days of operation or in-hospital death. Secondary end points were MR recurrence of grade 2+ or greater and reoperation during follow-up. The meta-analysis was performed with Review Manager (RevMan, version 5.3 for Windows; Cochrane Collaboration, Oxford, England) and Stata (version 11.0; StataCorp, College Station, TX). Hazard ratios (HRs) with 95 % confidence intervals (CIs) were extracted directly from the included studies or calculated indirectly with the method of Tierney and colleagues [18] to assess the efficacy of the surgical intervention in each study. Summary odds ratios (ORs) with corresponding 95 % CIs were computed for each dichotomous outcome using fixed-effects models or, in the presence of substantial heterogeneity (I2 > 50 %), random-effects models [19]. Statistical heterogeneity across studies was examined with Cochran's Q test and the I2 statistic; I2 < 25 % was considered low, 25-50 % moderate and >50 % high heterogeneity [20]. Where heterogeneity was high, possible clinical and methodological sources were explored, and sensitivity analyses were performed in which each study was excluded in turn to gauge its influence on the pooled estimates. Publication bias was assessed with the Egger regression asymmetry test [21] and the Begg adjusted rank correlation test [22], with P < 0.05 considered indicative of significant publication bias. Meta-analysis results are displayed in forest plots, and P < 0.05 was considered statistically significant.
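When a study reports only a hazard ratio and its 95 % CI, the log-HR and its standard error can be reconstructed as in the sketch below; this is one of the simpler indirect calculations in the spirit of the methods of Tierney and colleagues cited above (their approach also covers reconstruction from P values, event counts and survival curves, which is not shown). The numbers used are the pooled late-mortality figures reported later in this article, purely for illustration.

```python
# Reconstruct a log hazard ratio and its standard error from a reported HR and 95% CI.
import math

def log_hr_and_se(hr, ci_low, ci_high):
    """Return (log HR, SE of log HR) implied by a reported HR and 95% CI."""
    log_hr = math.log(hr)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    return log_hr, se

log_hr, se = log_hr_and_se(0.87, 0.67, 1.14)   # e.g. the pooled late-mortality HR
print(f"log HR = {log_hr:.3f}, SE = {se:.3f}")
```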
Results: Search results and study quality. The literature search identified 545 studies published between 1965 and 2015. On the basis of titles and abstracts, 34 articles were reviewed in full, and eleven met the inclusion and exclusion criteria [9-13, 23-28] (Fig. 1): ten retrospective observational studies [9-13, 23-27] and one prospective observational study [28], all nonrandomized. These studies included a total of 1807 patients, 1091 (60.4 %) of whom underwent repair and 716 (39.6 %) replacement; all patients had CABG. Patient characteristics and operative details are summarized in Tables 1 and 2. With the exception of the replacement patients being older in two studies, the two groups were similar with respect to hypertension, diabetes, atrial fibrillation, LVEF and NYHA class. Eight studies reported the type of prosthesis used for mitral valve replacement and whether the subvalvular apparatus was preserved; in half of the studies most patients received a bioprosthesis, and the subvalvular apparatus (total or partial) was preserved in the vast majority of replacements.
(Fig. 1: Flow chart of study selection. Table 1: Key features of the included studies. Table 2: Operative characteristics.) All eleven studies were assessed with the Newcastle-Ottawa Scale for adequacy of selection, comparability and outcome assessment (Table 3).
All studies included in the meta-analysis were of high quality (score ≥6), with Newcastle-Ottawa totals ranging from 6 to 9 (Table 3: Study quality assessment using the Newcastle-Ottawa Scale for nonrandomized studies).
Peri-operative mortality
Ten observational studies involving a total of 1750 patients reported operative mortality. The study-level odds ratios ranged from 0.16 to 2.32 (Fig. 2). The summary odds ratio was 0.65 (95 % CI, 0.43-1.00; P = 0.05), indicating a trend toward lower peri-operative mortality with repair that did not reach statistical significance. There was no heterogeneity across the studies (I2 = 0 %), and no publication bias was detected by either Egger's test (P = 0.83) or Begg's test (P = 0.68).
Fig. 2 Mitral valve repair versus mitral valve replacement on peri-operative mortality
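For readers who want to reproduce this kind of pooling, the sketch below shows, in Python, a fixed-effect (inverse-variance) summary odds ratio with Cochran's Q and the I2 statistic. It is a minimal illustration only: the per-study 2 x 2 counts are hypothetical placeholders, and the original analysis may have used a different weighting scheme (for example, Mantel-Haenszel or a random-effects model) and dedicated meta-analysis software.

```python
import math

# Hypothetical per-study counts: (deaths_repair, n_repair, deaths_replacement, n_replacement).
studies = [(6, 244, 10, 244), (3, 98, 2, 28), (2, 34, 5, 41)]

def log_or_and_var(a, n1, c, n2):
    b, d = n1 - a, n2 - c
    if 0 in (a, b, c, d):                      # 0.5 continuity correction for empty cells
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d        # variance of the log odds ratio
    return log_or, var

effects = [log_or_and_var(*s) for s in studies]
weights = [1 / v for _, v in effects]          # inverse-variance weights
pooled = sum(w * y for (y, _), w in zip(effects, weights)) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Cochran's Q and I^2 heterogeneity
q = sum(w * (y - pooled) ** 2 for (y, _), w in zip(effects, weights))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled OR = {math.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I2 = {i2:.0f}%")
```

Egger's and Begg's tests for publication bias operate on the same per-study log odds ratios and their standard errors.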
Late mortality
A total of nine studies (1622 patients) reported late mortality (Fig. 3). The overall hazard ratio was 0.87 (95 % CI, 0.67-1.14; P = 0.31), indicating that late mortality was not significantly reduced after repair. Heterogeneity was moderate (I2 = 30 %). Ten of the studies included patients with varying degrees of regurgitation and left ventricular dysfunction; in the remaining study [23], all patients undergoing CABG had severely impaired LV function (ejection fraction <25 %) and severe ischemic MR. Because severely decreased left ventricular function and severe IMR could plausibly affect mortality in these patients, a sensitivity analysis was conducted that excluded the study restricted to severe ischemic MR and severely impaired LV function. Restricting the analysis to the remaining studies had no significant impact on late mortality after repair (summary hazard ratio, 1.03; 95 % CI, 0.90-1.17; P = 0.66), whereas heterogeneity (I2) fell to 0 %, indicating no detectable variability among the remaining studies. Exclusion of any other single study did not significantly reduce the heterogeneity. In addition, our meta-analysis included ten retrospective studies and one prospective study, and the different designs could influence the pooled results; a sensitivity analysis restricted to the retrospective studies did not significantly change the result for late mortality (HR, 0.86; 95 % CI, 0.64-1.14; P = 0.30; I2 = 38 %).
Fig. 3 Mitral valve repair versus mitral valve replacement on late mortality
Mitral valve reoperation
Reoperation, for causes such as MV regurgitation, thromboembolism, and prosthetic endocarditis, was reported in five studies involving a total of 845 patients. The combined odds ratio was 1.47, a trend favoring replacement, but the difference between the two surgical approaches was not significant (95 % CI, 0.90-2.38; I2 = 0 %; P = 0.12) (Fig. 4).
Fig. 4 Mitral valve repair versus mitral valve replacement on reoperation
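The sensitivity analyses described above for late mortality, re-pooling after excluding particular studies and after leaving out one study at a time, can be sketched as a leave-one-out loop over study-level hazard ratios. The HRs and confidence intervals below are hypothetical placeholders, and recovering the variance of a log HR from a reported 95 % CI is a standard approximation rather than the authors' exact procedure.

```python
import math

# Hypothetical per-study hazard ratios with 95% CIs for late mortality.
studies = {
    "Study A": (0.80, 0.55, 1.16),
    "Study B": (1.05, 0.85, 1.30),
    "Study C": (0.60, 0.35, 1.03),
    "Study D": (0.95, 0.70, 1.29),
}

def to_log_var(hr, lo, hi):
    # SE of log HR recovered from the reported 95% CI bounds
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    return math.log(hr), se ** 2

def pooled(logs_vars):
    w = [1 / v for _, v in logs_vars]
    mean = sum(wi * y for (y, _), wi in zip(logs_vars, w)) / sum(w)
    q = sum(wi * (y - mean) ** 2 for (y, _), wi in zip(logs_vars, w))
    df = len(logs_vars) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(mean), i2

data = {name: to_log_var(*vals) for name, vals in studies.items()}
print("all studies:", pooled(list(data.values())))

for name in data:                              # leave-one-out sensitivity analysis
    rest = [v for k, v in data.items() if k != name]
    hr, i2 = pooled(rest)
    print(f"without {name}: HR = {hr:.2f}, I2 = {i2:.0f}%")
```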
Recurrence of MR
Five studies involving a total of 837 patients provided data on recurrence of MR during follow-up. The MVP + CABG group was associated with a significantly higher recurrence rate of MR (OR, 5.41; 95 % CI, 3.12-9.38; P < 0.001), with low heterogeneity among the studies (I2 = 10 %) (Fig. 5). A sensitivity analysis restricted to the retrospective studies did not significantly change the result for recurrence of MR (OR, 5.97; 95 % CI, 3.36-10.58; P < 0.001; I2 = 0 %).
Fig. 5 Mitral valve repair versus mitral valve replacement on recurrence of mitral regurgitation
Search results and study quality: The literature search identified a total of 545 studies published between 1965 and 2015. On the basis of titles and abstracts, 34 articles were selected and reviewed in full. Eleven articles met the inclusion and exclusion criteria [9–13, 23–28] (Fig. 1). Of the included studies, ten were retrospective observational studies [9–13, 23–27] and one was a prospective observational study [28]; all were nonrandomized. These studies included a total of 1807 patients, 1091 (60.4 %) of whom underwent repair and 716 (39.6 %) of whom underwent replacement. All patients underwent CABG. Patient characteristics and operative details are summarized in Tables 1 and 2, respectively. With the exception of the replacement patients being older in two of the studies, the two groups were similar in terms of hypertension (HTN), diabetes, atrial fibrillation (AF), left ventricular ejection fraction (LVEF), and New York Heart Association (NYHA) class. Eight of the studies reported the type of prosthesis used for mitral valve replacement and preservation of the subvalvular apparatus. In half of the studies, the majority of patients received a bioprosthesis. In addition, the subvalvular apparatus was preserved (either totally or partially) in the vast majority of mitral valve replacements.
Fig. 1 Flow chart of study selection. Table 1 Key features of included studies. Table 2 Operative characteristics. All eleven studies were assessed with the Newcastle-Ottawa Scale, evaluating adequacy of selection, comparability, and outcome assessment for each study (Table 3).
All studies included in the meta-analysis were of high quality (Newcastle-Ottawa score ≥ 6). Table 3 Study quality assessment using the Newcastle-Ottawa Scale for nonrandomized studies. Peri-operative mortality: Ten observational studies involving a total of 1750 patients reported operative mortality. The study-level odds ratios ranged from 0.16 to 2.32 (Fig. 2). The summary odds ratio was 0.65 (95 % CI, 0.43-1.00; P = 0.05), indicating a trend toward lower peri-operative mortality with repair that did not reach statistical significance. There was no heterogeneity across the studies (I2 = 0 %), and no publication bias was detected by either Egger's test (P = 0.83) or Begg's test (P = 0.68). Fig. 2 Mitral valve repair versus mitral valve replacement on peri-operative mortality. Late mortality: A total of nine studies (1622 patients) reported late mortality (Fig. 3). The overall hazard ratio was 0.87 (95 % CI, 0.67-1.14; P = 0.31), indicating that late mortality was not significantly reduced after repair. Heterogeneity was moderate (I2 = 30 %). Ten of the studies included patients with varying degrees of regurgitation and left ventricular dysfunction; in the remaining study [23], all patients undergoing CABG had severely impaired LV function (ejection fraction <25 %) and severe ischemic MR. Because severely decreased left ventricular function and severe IMR could plausibly affect mortality in these patients, a sensitivity analysis was conducted that excluded the study restricted to severe ischemic MR and severely impaired LV function. Restricting the analysis to the remaining studies had no significant impact on late mortality after repair (summary hazard ratio, 1.03; 95 % CI, 0.90-1.17; P = 0.66), whereas heterogeneity (I2) fell to 0 %, indicating no detectable variability among the remaining studies. Exclusion of any other single study did not significantly reduce the heterogeneity. In addition, our meta-analysis included ten retrospective studies and one prospective study, and the different designs could influence the pooled results; a sensitivity analysis restricted to the retrospective studies did not significantly change the result for late mortality (HR, 0.86; 95 % CI, 0.64-1.14; P = 0.30; I2 = 38 %).
Fig. 3 Mitral valve repair versus mitral valve replacement on late mortality. Mitral valve reoperation: Reoperation, for causes such as MV regurgitation, thromboembolism, and prosthetic endocarditis, was reported in five studies involving a total of 845 patients. The combined odds ratio was 1.47, a trend favoring replacement, but the difference between the two surgical approaches was not significant (95 % CI, 0.90-2.38; I2 = 0 %; P = 0.12) (Fig. 4). Fig. 4 Mitral valve repair versus mitral valve replacement on reoperation. Recurrence of MR: Five studies involving a total of 837 patients provided data on recurrence of MR during follow-up. The MVP + CABG group was associated with a significantly higher recurrence rate of MR (OR, 5.41; 95 % CI, 3.12-9.38; P < 0.001), with low heterogeneity among the studies (I2 = 10 %) (Fig. 5). A sensitivity analysis restricted to the retrospective studies did not significantly change the result for recurrence of MR (OR, 5.97; 95 % CI, 3.36-10.58; P < 0.001; I2 = 0 %). Fig. 5 Mitral valve repair versus mitral valve replacement on recurrence of mitral regurgitation. Discussion: In our meta-analysis of eleven studies of patients undergoing elective repair or replacement with concomitant CABG, no differences were found in peri-operative mortality or long-term survival, and mitral valve replacement was associated with a lower incidence of recurrent mitral regurgitation in patients with IMR undergoing CABG. The Society of Thoracic Surgeons database reports nationwide mortality of approximately 5 % for MVP + CABG (4.8 % in-hospital and 5.3 % operative mortality) versus approximately 8 % for MVR + CABG (7.8 % in-hospital and 8.5 % operative mortality) [29]. Moderate or severe recurrent MR after a restrictive annuloplasty ring occurred early and affected a substantial proportion of patients by 2 years [30]. In the present study, the main disadvantage of MVP + CABG compared with MVR + CABG was recurrence of MR. Although MR recurrence was common, mitral valve reoperation was not: reoperation rates were similar between the groups, suggesting that not all patients with recurrent MR of grade 2+ or greater required reoperation. There are several possible explanations. IMR after annuloplasty might be considered inconsequential compared with the underlying myocardial disease, or surgeons might be hesitant to perform a second or third cardiac operation in these elderly, high-risk patients [31]; many patients with recurrent MR are simply too sick, too old, or both for reoperation to be considered. A case-matched study found that replacement was associated with a lower incidence of valve-related complications than repair, and the two procedures showed no significant difference in LV function at follow-up [32]. However, replacement had higher thromboembolic and ischemic stroke rates than repair despite anticoagulant therapy [33].
Although mitral valve replacement can adequately correct regurgitation, the structural integrity of the mitral apparatus is usually compromised after replacement, disrupting the tethering between the mitral annulus and the left ventricle; this adversely affects left ventricular contraction and prognosis [8]. The choice between the two procedures should therefore be individualized. To date, no RCTs have compared the clinical outcomes of the two procedures specifically in patients with chronic IMR undergoing CABG. To our knowledge, this is the first meta-analysis comparing the short- and long-term outcomes of the two mitral valve procedures specifically in patients with chronic IMR undergoing concomitant CABG. We selected studies with rigorous inclusion and exclusion criteria. All patients in the included studies underwent concomitant CABG, which ensures homogeneity of the IMR population and facilitates comparisons between studies. In addition, patients with acute IMR due to papillary muscle rupture were excluded, so the results reflect the surgical management of chronic IMR and avoid biasing the comparison against the replacement group. By excluding articles in which more than 20 % of repairs lacked an annuloplasty ring, we strengthened the comparison between the two procedures. The results of our study therefore reflect the surgical management of patients with IMR undergoing concomitant CABG.
Limitations: Our meta-analysis has several limitations. First, it was based on observational, mostly retrospective studies, with the inherent biases of such designs; the included publications were relatively small and all were nonrandomized. Second, changes in NYHA class, LVEF, and left ventricular reverse remodeling were too scarcely reported in the included studies to permit meta-analysis. Eight of the eleven studies reported data on subvalvular apparatus preservation during mitral valve replacement, but preservation of both the anterior and posterior leaflets was not uniform; the other three studies did not describe subvalvular preservation at all. Third, potential confounders, such as preoperative risk scores (e.g., the STS score), anatomic suitability of the valve for repair, age, the mechanism of regurgitation (ischemia, fibrosis, ventricular remodeling), ejection fraction, and the complexity of revascularization, were not considered or adjusted for in some of the included studies. Any apparent superiority of repair over replacement may therefore be affected by these and other factors that cannot be resolved by a meta-analysis of observational studies, and well-designed RCTs are required to verify the conclusions. Finally, follow-up periods were heterogeneous across studies, with inconsistent use of mean and median durations, so subgroup analysis by follow-up could not be performed. Conclusions: In patients with chronic ischemic mitral regurgitation during CABG, mitral valve replacement is associated with lower recurrence of regurgitation. No differences were found regarding survival and reoperation rates.
Background: No agreement has been reached on the best surgical treatment for patients with chronic ischemic mitral regurgitation (IMR) undergoing coronary artery bypass grafting (CABG). Our objective was to meta-analyze the clinical outcomes of repair and replacement. Methods: A computerized search was performed using PubMed, Embase, Ovid Medline, and the Cochrane Library. The search terms "ischemic or ischaemic" and "mitral valve" and "repair or replacement or annuloplasty" and "coronary artery bypass grafting" were entered as MeSH terms and keywords. The primary outcomes were operative mortality and late mortality. Secondary outcomes were recurrence of mitral regurgitation of grade 2+ or greater and the reoperation rate. Results: Eleven studies were eligible for the final meta-analysis. These studies included a total of 1750 patients, 60.4 % of whom received mitral valve repair. All patients underwent concomitant coronary artery bypass grafting. No differences were found in operative mortality (summary odds ratio [OR] 0.65; 95 % confidence interval [CI] 0.43-1.00; p = 0.05), late mortality (summary hazard ratio [HR] 0.87; 95 % CI 0.67-1.14; p = 0.31), or reoperation (OR 1.47; 95 % CI 0.90-2.38; p = 0.12). Regurgitation recurrence was lower in the replacement group (OR 5.41; 95 % CI 3.12-9.38; p < 0.001). Conclusions: In patients with chronic ischemic mitral regurgitation during CABG, mitral valve replacement is associated with lower recurrence of regurgitation. No differences were found regarding survival and reoperation rates.
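A minimal sketch of how the PubMed arm of such a search could be issued programmatically with Biopython's Entrez module is shown below; the contact e-mail and the exact Boolean form of the query are illustrative assumptions, and the published strategy also covered Embase, Ovid Medline, and the Cochrane Library, which are not queried here.

```python
from Bio import Entrez  # Biopython

Entrez.email = "reviewer@example.org"  # placeholder contact address required by NCBI

# Illustrative approximation of the reported search string (Boolean grouping assumed).
query = (
    '("ischemic" OR "ischaemic") AND "mitral valve" '
    'AND ("repair" OR "replacement" OR "annuloplasty") '
    'AND "coronary artery bypass grafting"'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:10])
```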
Background: Chronic ischemic mitral regurgitation (IMR) is a frequent and important complication after myocardial infarction. Its pathophysiologic mechanism involves segmental or global remodeling of the left ventricle (LV), which induces papillary muscle displacement and leaflet tethering [1]. The presence of IMR is independently associated with mortality and morbidity after myocardial infarction [2]. Depending on the severity of IMR, surgery ranges from coronary artery bypass grafting (CABG) alone to CABG combined with mitral valve surgery [3, 4]. Two randomized trials indicated that repair was associated with a reduced prevalence of mitral regurgitation but did not show a clinically meaningful advantage of adding mitral valve repair to CABG [5, 6]. In addition, previous meta-analyses concluded that, compared with replacement, repair is associated with lower operative mortality but higher recurrence of regurgitation in patients with ischemic mitral regurgitation, with or without CABG [7, 8]. For patients with chronic IMR undergoing concomitant CABG, the best surgical treatment remains controversial: some studies support replacement [9, 10], others support repair [11, 12], and others report similar survival for the two procedures [13]. Current guidelines recommend mitral valve surgery for severe IMR but do not specify a particular procedure [14, 15]. Numerous nonrandomized studies have compared the clinical outcomes of MVP + CABG and MVR + CABG for IMR; however, there has been no systematic, quantitative assessment of the accumulated literature on this topic. Meta-analysis is a powerful tool for providing a meaningful comparison of the short- and long-term outcomes of these procedures. The present meta-analysis aimed to assess the clinical outcomes of patients who underwent mitral valve surgery and CABG for chronic IMR. Conclusions: In patients with chronic ischemic mitral regurgitation during CABG, mitral valve replacement is associated with lower recurrence of regurgitation. No differences were found regarding survival and reoperation rates.
Background: No agreement has been reached on the best surgical treatment for patients with chronic ischemic mitral regurgitation (IMR) undergoing coronary artery bypass grafting (CABG). Our objective was to meta-analyze the clinical outcomes of repair and replacement. Methods: A computerized search was performed using PubMed, Embase, Ovid Medline, and the Cochrane Library. The search terms "ischemic or ischaemic" and "mitral valve" and "repair or replacement or annuloplasty" and "coronary artery bypass grafting" were entered as MeSH terms and keywords. The primary outcomes were operative mortality and late mortality. Secondary outcomes were recurrence of mitral regurgitation of grade 2+ or greater and the reoperation rate. Results: Eleven studies were eligible for the final meta-analysis. These studies included a total of 1750 patients, 60.4 % of whom received mitral valve repair. All patients underwent concomitant coronary artery bypass grafting. No differences were found in operative mortality (summary odds ratio [OR] 0.65; 95 % confidence interval [CI] 0.43-1.00; p = 0.05), late mortality (summary hazard ratio [HR] 0.87; 95 % CI 0.67-1.14; p = 0.31), or reoperation (OR 1.47; 95 % CI 0.90-2.38; p = 0.12). Regurgitation recurrence was lower in the replacement group (OR 5.41; 95 % CI 3.12-9.38; p < 0.001). Conclusions: In patients with chronic ischemic mitral regurgitation during CABG, mitral valve replacement is associated with lower recurrence of regurgitation. No differences were found regarding survival and reoperation rates.
9,512
332
[ 138, 105, 140, 383, 1010, 145, 356, 102, 161, 261 ]
15
[ "studies", "mitral", "valve", "mitral valve", "analysis", "repair", "replacement", "study", "patients", "mortality" ]
[ "cabg mitral valve", "mitral valve reoperation", "severity mitral regurgitation", "mitral regurgitation imr", "ischemic mitral regurgitation" ]
null
[CONTENT] Ischemic mitral regurgitation | Mitral valve repair | Mitral valve replacement | Coronary artery bypass grafting | Meta-analysis [SUMMARY]
null
[CONTENT] Ischemic mitral regurgitation | Mitral valve repair | Mitral valve replacement | Coronary artery bypass grafting | Meta-analysis [SUMMARY]
[CONTENT] Ischemic mitral regurgitation | Mitral valve repair | Mitral valve replacement | Coronary artery bypass grafting | Meta-analysis [SUMMARY]
[CONTENT] Ischemic mitral regurgitation | Mitral valve repair | Mitral valve replacement | Coronary artery bypass grafting | Meta-analysis [SUMMARY]
[CONTENT] Ischemic mitral regurgitation | Mitral valve repair | Mitral valve replacement | Coronary artery bypass grafting | Meta-analysis [SUMMARY]
[CONTENT] Aged | Coronary Artery Bypass | Female | Heart Valve Prosthesis Implantation | Humans | Male | Middle Aged | Mitral Valve | Mitral Valve Insufficiency | Myocardial Ischemia | Reoperation | Survival Rate | Treatment Outcome [SUMMARY]
null
[CONTENT] Aged | Coronary Artery Bypass | Female | Heart Valve Prosthesis Implantation | Humans | Male | Middle Aged | Mitral Valve | Mitral Valve Insufficiency | Myocardial Ischemia | Reoperation | Survival Rate | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Coronary Artery Bypass | Female | Heart Valve Prosthesis Implantation | Humans | Male | Middle Aged | Mitral Valve | Mitral Valve Insufficiency | Myocardial Ischemia | Reoperation | Survival Rate | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Coronary Artery Bypass | Female | Heart Valve Prosthesis Implantation | Humans | Male | Middle Aged | Mitral Valve | Mitral Valve Insufficiency | Myocardial Ischemia | Reoperation | Survival Rate | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Coronary Artery Bypass | Female | Heart Valve Prosthesis Implantation | Humans | Male | Middle Aged | Mitral Valve | Mitral Valve Insufficiency | Myocardial Ischemia | Reoperation | Survival Rate | Treatment Outcome [SUMMARY]
[CONTENT] cabg mitral valve | mitral valve reoperation | severity mitral regurgitation | mitral regurgitation imr | ischemic mitral regurgitation [SUMMARY]
null
[CONTENT] cabg mitral valve | mitral valve reoperation | severity mitral regurgitation | mitral regurgitation imr | ischemic mitral regurgitation [SUMMARY]
[CONTENT] cabg mitral valve | mitral valve reoperation | severity mitral regurgitation | mitral regurgitation imr | ischemic mitral regurgitation [SUMMARY]
[CONTENT] cabg mitral valve | mitral valve reoperation | severity mitral regurgitation | mitral regurgitation imr | ischemic mitral regurgitation [SUMMARY]
[CONTENT] cabg mitral valve | mitral valve reoperation | severity mitral regurgitation | mitral regurgitation imr | ischemic mitral regurgitation [SUMMARY]
[CONTENT] studies | mitral | valve | mitral valve | analysis | repair | replacement | study | patients | mortality [SUMMARY]
null
[CONTENT] studies | mitral | valve | mitral valve | analysis | repair | replacement | study | patients | mortality [SUMMARY]
[CONTENT] studies | mitral | valve | mitral valve | analysis | repair | replacement | study | patients | mortality [SUMMARY]
[CONTENT] studies | mitral | valve | mitral valve | analysis | repair | replacement | study | patients | mortality [SUMMARY]
[CONTENT] studies | mitral | valve | mitral valve | analysis | repair | replacement | study | patients | mortality [SUMMARY]
[CONTENT] imr | cabg | surgery | valve surgery | mitral valve surgery | mitral | chronic | meaningful | infarction | myocardial infarction [SUMMARY]
null
[CONTENT] valve | mm | mitral | mitral valve | studies | cabgmvr | time | cabgmvp cabgmvr | cabgmvr cabgmvp | cabgmvr cabgmvp cabgmvr [SUMMARY]
[CONTENT] regurgitation differences | reoperation rates | regurgitation cabg mitral | regurgitation cabg mitral valve | survival reoperation | regurgitation differences found | patients chronic ischemic | patients chronic ischemic mitral | regurgitation differences found survival | recurrence regurgitation differences [SUMMARY]
[CONTENT] mitral | valve | studies | mitral valve | analysis | repair | study | replacement | patients | mortality [SUMMARY]
[CONTENT] mitral | valve | studies | mitral valve | analysis | repair | study | replacement | patients | mortality [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] ||| 1750 | 60.4 % ||| ||| ||| 0.65 | 95 % | CI | 0.43 | 0.05 ||| 0.87 | 95 % | CI] 0.67-1.14 | 0.31 ||| 1.47 | 95 % | CI] 0.90-2.38 | 0.12 ||| 5.41 | 95 % | CI | 3.12-9.38 | 0.001 [SUMMARY]
[CONTENT] CABG ||| [SUMMARY]
[CONTENT] ||| ||| Pubmed, Embase | Ovid | Cochrane Library ||| ||| ||| 2+ ||| ||| ||| 1750 | 60.4 % ||| ||| ||| 0.65 | 95 % | CI | 0.43 | 0.05 ||| 0.87 | 95 % | CI] 0.67-1.14 | 0.31 ||| 1.47 | 95 % | CI] 0.90-2.38 | 0.12 ||| 5.41 | 95 % | CI | 3.12-9.38 | 0.001 ||| CABG ||| [SUMMARY]
[CONTENT] ||| ||| Pubmed, Embase | Ovid | Cochrane Library ||| ||| ||| 2+ ||| ||| ||| 1750 | 60.4 % ||| ||| ||| 0.65 | 95 % | CI | 0.43 | 0.05 ||| 0.87 | 95 % | CI] 0.67-1.14 | 0.31 ||| 1.47 | 95 % | CI] 0.90-2.38 | 0.12 ||| 5.41 | 95 % | CI | 3.12-9.38 | 0.001 ||| CABG ||| [SUMMARY]
Novel reassortant 2.3.4.4B H5N6 highly pathogenic avian influenza viruses circulating among wild, domestic birds in Xinjiang, Northwest China.
34170087
The H5 avian influenza viruses (AIVs) of clade 2.3.4.4 circulate in wild and domestic birds worldwide. In 2017, nine strains of H5N6 AIVs were isolated from aquatic poultry in Xinjiang, Northwest China.
BACKGROUND
AIVs were isolated from oropharyngeal and cloacal swabs of poultry. Identification was accomplished by inoculating isolates into embryonated chicken eggs and performing hemagglutination tests and reverse transcription polymerase chain reaction (RT-PCR). The viral genomes were amplified with RT-PCR and then sequenced. The sequence alignment, phylogenetic, and molecular characteristic analyses were performed by using bioinformatic software.
METHODS
Nine isolates originated from the same ancestor. The viral HA gene belonged to clade 2.3.4.4B, while the NA gene had a close phylogenetic relationship with the 2.3.4.4C H5N6 highly pathogenic avian influenza viruses (HPAIVs) isolated from shoveler ducks in Ningxia in 2015. The NP gene was grouped into an independent subcluster within the 2.3.4.4B H5N8 AIVs, and the remaining six genes all had close phylogenetic relationships with the 2.3.4.4B H5N8 HPAIVs isolated from wild birds in China, Egypt, Uganda, Cameroon, and India in 2016-2017. Multiple basic amino acid residues associated with HPAIVs were located adjacent to the cleavage site of the HA protein. The nine isolates comprised reassortant 2.3.4.4B HPAIVs originating from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses in wild birds.
RESULTS
These results suggest that the Northern Tianshan Mountain wetlands in Xinjiang may play a key role in the dissemination of AIVs from Central China to the Eurasian continent and East Africa.
CONCLUSIONS
[ "Animals", "Animals, Domestic", "Animals, Wild", "Birds", "China", "Influenza A virus", "Influenza in Birds", "Phylogeny", "Reassortant Viruses", "Virulence", "Whole Genome Sequencing" ]
8318794
INTRODUCTION
Since 2014, highly pathogenic avian influenza viruses (HPAIVs) of clade 2.3.4.4 (H5) have been monitored in poultry and wild birds in European and Asian countries, and the viruses of this clade have evolved into eight groups (A-H) [1, 2]. Group A includes H5N8 avian influenza viruses (AIVs) that were identified in Asia, Europe, and North America [1]. In 2016, a novel H5N8 AIV lineage initially emerged in wild birds at Qinghai Lake, spread to Mongolia, Siberia, and Europe, and was subsequently identified in China and Korea; this new lineage was classified into clade 2.3.4.4B [3]. On the other hand, group C H5N6 AIVs were first reported in Laos in 2013 and gradually became the main source of sporadic AIV infections in poultry in Southern China [4]. Group D comprises H5N6 viruses mainly identified in China and Vietnam. Subsequent studies have reported that H5N6 viruses of groups 2.3.4.4E, 2.3.4.4F, 2.3.4.4G, and 2.3.4.4H have been isolated from poultry and wild birds [2]. Outbreaks of 2.3.4.4B H5N8 AIVs were reported in South Korea in 2014 [5]. The viruses were subsequently disseminated via migratory birds and have since been detected in wild and domestic birds worldwide [1, 2, 6-8]. Almost simultaneously, 2.3.4.4B H5N6 HPAIVs were undergoing global transmission and have been detected in wild and domestic birds [8-13]. The cocirculation of H5N6 and H5N8 viruses among wild and domestic birds could increase the probability of reassortment into novel viruses [9-11]; indeed, reassortants derived from H5N6 and H5N8 HPAIVs have been detected in various wild birds and poultry [8, 9, 14, 15]. Moreover, it has been reported that 46% of all 2.3.4.4B H5N6 viruses in domestic poultry were derived from wild birds [9]. Thus, reassortant viruses could pose potential threats to poultry farming and human public health. In this study, nine H5N6 AIVs were isolated from aquatic poultry in the Xinjiang Uyghur Autonomous Region, China, and all were grouped into clade 2.3.4.4B. This study aimed to analyze the origin of these isolates.
null
null
RESULTS
Virus isolation Nine strains of H5N6 AIV were isolated between July and November 2017, one from a duck swab and the rest from goose swabs. The eight gene segments of the nine isolates shared very high nucleotide homology (> 99.9%); thus, the isolates originated from the same ancestor. The strains have been designated A/goose/Xinjiang/011-015/2017(H5N6), A/duck/Xinjiang/016/2017(H5N6), and A/goose/Xinjiang/017-019/2017(H5N6), or XJ-H5N6/2017 for short. The complete sequences of the isolates have been submitted to NCBI (accession numbers: MW109029-MW109036, MW110101-MW110108, and MW110205-MW110260). Sequence and phylogenetic analysis Homology BLAST and phylogenetic analyses were performed on the eight genes of the viral isolates (Table 1 and Fig. 1). The viral NA gene exhibited its highest sequence homology (99.1%) with, and clustered together with, H5N6 AIVs isolated from shoveler ducks sampled in Ningxia (NX488-53) and from the environment of Chongqing in 2015; those sequences were designated 2.3.4.4C H5N6 AIVs. The viral NP gene segments shared their highest sequence homology (99.4%) with 2.3.4.4B H5N8 AIVs isolated from green-winged teal in Egypt in 2016 and were grouped into an independent subcluster within the phylogenetic tree. The remaining six genes had their highest sequence homologies (99.1%-99.6%) and relatively close genetic relationships in the phylogenetic tree with the 2.3.4.4B H5N8 AIVs. Five of those genes, HA, PB2, NS, PA, and MP, clustered together with 2.3.4.4B H5N8 AIVs isolated from migratory swans in Sanmenxia, Hubei, and Shanxi in 2016, and the sixth gene, PB1, clustered together with H5N8 AIVs isolated from grey-headed gulls sampled in Uganda in 2017. The above results suggest that XJ-H5N6/2017 is a novel reassortant 2.3.4.4B H5N6 AIV derived from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses present in wild birds.
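The nucleotide homologies quoted above come from BLAST comparisons against public sequences; as a toy illustration of the underlying quantity, percent identity between two aligned sequences can be computed directly, as in the sketch below. The short fragments are hypothetical placeholders, not actual isolate sequences.

```python
# Percent identity over gap-free positions of two aligned sequences,
# a rough stand-in for the homology comparisons described above.
def percent_identity(seq1: str, seq2: str) -> float:
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper()) if a != "-" and b != "-"]
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

# Hypothetical aligned fragments (real comparisons used full gene segments).
isolate_na   = "ATGAATCCAAATCAGAAGATA"
reference_na = "ATGAATCCAAATCAAAAGATA"
print(f"{percent_identity(isolate_na, reference_na):.1f}% identity")
```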
Molecular characteristics of the H5N6 virus isolates Assessment of the multiple amino acid mutations present in the nine isolates could help elucidate H5N6 virulence. Multiple basic amino acids (KEKRRKR↓GLF) were observed at the cleavage site of the HA protein of the H5N6 isolates (Table 2), suggesting that the isolates were HPAIVs. The viral receptor-binding sites all contained Q226 and G228 (H3 numbering), indicative of an avian-like α2,3-sialic acid (α2,3-SA) receptor-binding preference; however, the S128P, S137A, and T160A mutations in HA could enhance the capacity to bind a human-like α2,6-SA receptor [17]. In all nine isolates, an 11-amino-acid deletion (residues 59-69) in the NA stalk was observed, suggesting the isolates could have different adaptive and virulence characteristics in poultry and mammals [18]. In addition, the L89V, G309D, R477G, I495V [19], and I504V [20] mutations in the PB2 protein and the P42S and D92E mutations in NS1 were retained in all nine isolates; these mutations can enhance viral virulence and replication activity in mammals [21]. Moreover, the M2-F38L mutation has been associated with antiviral resistance (amantadine and rimantadine) [22].
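A screening step of the kind described, checking the HA cleavage-site motif and known virulence or adaptation markers at fixed positions, can be sketched as below. The marker table is limited to residues mentioned in this section, the toy sequences are placeholders, and a real analysis would apply such checks to properly aligned, correctly numbered proteins (e.g., H3 numbering for HA).

```python
# Known markers discussed above (1-based positions on full-length, aligned proteins).
MARKERS = {
    "PB2": {89: "V", 309: "D", 477: "G", 495: "V", 504: "V"},
    "NS1": {42: "S", 92: "E"},
}

def scan_markers(protein_name: str, sequence: str) -> dict:
    """Return {marker: present?} for the residues listed for this protein."""
    result = {}
    for pos, residue in MARKERS.get(protein_name, {}).items():
        if pos <= len(sequence):
            result[f"{protein_name}-{pos}{residue}"] = sequence[pos - 1] == residue
    return result

def polybasic_cleavage_site(ha_protein: str, window: int = 7) -> bool:
    """Crude check for multiple basic residues just upstream of the HA2 'GLF' fusion peptide."""
    i = ha_protein.find("GLF")
    if i < window:
        return False
    upstream = ha_protein[i - window:i]
    return sum(res in "RK" for res in upstream) >= 4

# Hypothetical placeholder sequences for illustration only.
toy_pb2 = "M" + "A" * 87 + "V" + "A" * 500                  # carries V at position 89
toy_ha_junction = "A" * 20 + "KEKRRKR" + "GLFGAIAGFIE"      # motif reported for the isolates
print(scan_markers("PB2", toy_pb2))
print(polybasic_cleavage_site(toy_ha_junction))
```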
null
null
[ "Sample collection", "Virus isolation and identification", "Whole-genome sequencing", "Phylogenetic analysis", "Virus isolation", "Sequence and phylogenetic analysis", "Molecular characteristics of the H5N6 virus isolates" ]
[ "From March 2017 to December 2018, a total of 354 oropharyngeal and cloacal swabs from apparently healthy poultry (chickens, ducks, and geese) in live poultry markets (LPMs) in Urumqi, China were collected and placed in tubes containing viral transport medium DMEM (1,000 u/L penicillin, 500 ug/L streptomycin, 100 mg/L Nystatin, 100 mg/L Polymyxin B sulfate salt, 1000 mg/L Sulfamethoxazole, 0.05 g/L Ofloxacin). These tubes were kept in an icebox at −20°C before transport to the laboratory and then immediately stored at −80°C.\nThe animal experiment portion of this study was approved by the Committee on the Ethics of Animal Experiments of Xinjiang Key Laboratory of Biological Resources and Genetic Engineering (BRGE-AE001) and carried out under the guidelines of the Animal Care and Use Committee of the College of Life Science and Technology, Xinjiang University.", "The swab samples were vortexed and centrifuged, and the supernatants inoculated into 10-day-old specific-pathogen-free embryonated chicken eggs. Seventy-two hours after inoculation, allantois fluid was harvested, and hemagglutinin (HA) activity was assayed using 1% chicken red blood cells [14]. All HA-positive samples underwent reverse transcription polymerase chain reaction (RT-PCR) using universal primers targeting the PB1 gene [15]. Viral RNA extraction (Bioer) and RT-PCR (TOYOBO) were performed according to the manufacturers' instructions.\nRoutine surveillance samples were processed in a biosafety level 2 (BSL-2) laboratory of the Center for Influenza Research and Early-Warning (CASCIRE), Chinese Academy of Science, while experiments with live H5N6 viruses were conducted in a CASCIRE biosafety level 3 (BSL-3) laboratory. Coveralls, gloves, and N95 masks were used during the work at the BSL-2 and BSL-3 laboratories, and all wastes were autoclaved.", "Next-generation sequencing was used to determine the whole-genome sequences of the AIV isolates. The viral RNA samples were quantified using the 2100 Bioanalyzer System (Agilent Technologies). The RT-PCR and cDNA synthesis procedures were conducted using PrimeScript One-Step RT-PCR Kit Ver.2 (RR055A Takara) and influenza A-specific primers MBTuni-12 and MBTuni-13. Sequencing libraries with an insert size of 200bp were prepared by end-repairing, dA-tailing, adaptor ligation, and PCR amplification, all performed according to the standard manufacturer's instructions (Illumina, USA). The libraries were sequenced on an Illumina HiSeq4000 platform (Illumina) [416].", "Phylogenetic trees were constructed for each gene segment of the nine AIV isolates to investigate their evolutionary relationships. The AIV reference sequences were obtained from GenBank (http://www.ncbi.nlm.nih.gov/genbank) and GISAID (http://www.gisaid.org) via the online Basic Local Alignment Search Tool (BLAST). The datasets for phylogenetic analysis were generated by aligning with reference sequences closely related to the viral sequences of the isolates and removing sequences with the same strain name and those without a clear subtype or collection date. The final sequence numbers of each gene segment are PB2 37, PB1 28, PA 32, HA 35, NP 33, NA 24, MP 27, and NS 33 (Supplementary Table 1). The sequences were aligned with ClustalW using MEGA 7.0. The general time-reversible nucleotide substitution model with invariant sites (I) and the gamma rate of heterogeneity (GTR +Γ) results were selected as providing the best fit for all datasets. 
Maximum clade credibility (MCC) phylogenetic trees were generated by applying maximum likelihood analysis with 1000 bootstrap replicates in the MEGA-X program and visualized/annotated using Fig tree 1.4.3 software [4].", "Nine strains of H5N6 AIV were isolated between July and November 2017, one from a duck swab and the rest from goose swabs. The eight gene segments of the 9 isolates shared very high nucleotide homology (> 99.9%); thus, the isolates originated from the same ancestor. The strains have been designated as A/goose/Xinjiang/011-015/2017(H5N6), A/duck/Xinjiang/016/2017 (H5N6), and A/goose/Xinjiang/017-019/2017(H5N6), or XJ-H5N6/2017 for short. The complete sequences of the isolates have been submitted to NCBI (accession numbers: MW109029-MW109036, MW110101-MW110108, and MW110205-MW110260).", "Homology BLAST and phylogenetic analyses were performed on eight genes of the viral isolates (Table 1 and Fig. 1). The viral NA gene exhibited the highest sequence homology (99.1%), and the sequence clustered together with those of H5N6 AIVs isolated from shoveler ducks sampled in Ningxia (NX488-53) and from the environment of Chongqing in 2015; those sequences were designated as 2.3.4.4C H5N6 AIVs. The viral NP gene segments shared the highest sequence homology (99.4%) with 2.3.4.4B H5N8 AIVs isolated from the green-winged teal in Egypt in 2016 and were grouped into an independent subcluster within the phylogenetic tree. The remaining six genes had the highest sequence homologies (99.1%–99.6%) and relatively close genetic relationships in the phylogenetic tree with the 2.3.4.4B H5N8 AIVs. Five of those genes, HA, PB2, NS, PA, and MP, clustered together with 2.3.4.4B H5N8 AIVs isolated from migratory swans in Sanmenxia, Hubei, and Shanxi in 2016, and this sixth gene, PB1, clustered together with H5N8 AIVs isolated from grey-headed gulls sampled in Uganda in 2017. The above results suggest that XJ-H5N6/2017 is a novel reassortant 2.3.4.4B H5N6 AIV derived from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses present in wild birds.", "Assessment of the multiple amino acid mutations present in the nine isolates could help elucidate H5N6 virulence. Multiple basic amino acids (KEKRRKR↓GLF) were observed at cleavage sites in the HA protein of the H5N6 isolates (Table 2), suggesting that the isolates were HPAIVs. The viral receptor-binding sites all contained Q226 and G228 (H3 numbering), indicative of an avian-like α2,3-sialic acid (α2,3-SA) receptor-binding preference; however, the S128P, S137A, and T160A mutations in HA could enhance the capacity to bind to a human-like α-2,6-SA receptor [17]. In all nine isolates, 11 amino acid deletions (59–69) in the NA stalk were observed, suggesting the isolates could have different adaptive and virulence characteristics in poultry and mammals [18]. In addition, the L89V, G309D, R477G, I495V [19], and I504V [20] mutations in the PB2 protein and the P42S and D92E mutations in the NS1 were retained in all nine isolates; notably, those mutations can enhance viral virulence and replication activity in mammals [21]. Moreover, the M2-F38L mutation has been associated with antiviral resistance (amantadine and rimantadine) [22]." ]
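The alignment-and-tree workflow described in the Phylogenetic analysis subsection (ClustalW alignment and GTR + Γ maximum-likelihood trees with 1000 bootstrap replicates in MEGA-X) was run in dedicated software. As a rough, simplified stand-in, a distance-based tree can be built from a pre-aligned FASTA file with Biopython; the file name is a hypothetical placeholder, and neighbor-joining on identity distances is not equivalent to the maximum-likelihood analysis used in the study.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Pre-aligned nucleotide sequences for one gene segment (placeholder file name).
alignment = AlignIO.read("ha_segments_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")                    # simple identity-based distances
constructor = DistanceTreeConstructor(calculator, method="nj") # neighbor-joining tree
tree = constructor.build_tree(alignment)

Phylo.draw_ascii(tree)                                         # quick text rendering of the tree
```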
[ null, null, null, null, null, null, null ]
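The tree-building step described in the phylogenetic analysis (ClustalW alignment in MEGA 7.0, then maximum-likelihood trees under GTR+Γ with 1,000 bootstrap replicates in MEGA-X) was carried out with GUI software. As a rough, scriptable stand-in rather than a reproduction of that workflow, the sketch below builds a neighbor-joining tree from a pre-aligned FASTA with Biopython; the file name is hypothetical, and an identity-distance NJ tree is a much cruder model than GTR+Γ maximum likelihood.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Assumption: "pb2_aligned.fasta" already contains aligned PB2 sequences.
alignment = AlignIO.read("pb2_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")   # simple p-distances between sequences
dm = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(dm)                     # neighbor-joining topology

Phylo.write(tree, "pb2_nj.nwk", "newick")     # Newick output can be opened in FigTree
Phylo.draw_ascii(tree)                        # quick text rendering in the terminal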
[ "INTRODUCTION", "MATERIALS AND METHODS", "Sample collection", "Virus isolation and identification", "Whole-genome sequencing", "Phylogenetic analysis", "RESULTS", "Virus isolation", "Sequence and phylogenetic analysis", "Molecular characteristics of the H5N6 virus isolates", "DISCUSSION" ]
[ "Since 2014, highly pathogenic avian influenza viruses (HPAIVs) of clade 2.3.4.4 (H5) have been monitored in poultry and wild birds in European and Asian countries, and the viruses of that clade have evolved into eight groups (A-H) [12]. Group A includes H5N8 avian influenza viruses (AIVs) that were identified in Asia, Europe, and North America [1]. In 2016, a novel H5N8 AIV lineage initially emerged in the wild birds in Qinghai Lake, spread to Mongolia, Siberia, and Europe, and was subsequently identified in China and Korea; this new lineage was classified into clade 2.3.4.4B [3]. On the other hand, group C H5N6 AIVs were first reported in Laos in 2013 and gradually became the main source of sporadic AIV infections in poultry in Southern China [4]. Group D comprises H5N6 viruses mainly identified in China and Vietnam. Subsequent studies have reported that H5N6 viruses of groups 2.3.4.4E, 2.3.4.4F, 2.3.4.4G, and 2.3.4.4H have been isolated from poultry and wild birds [2].\nOutbreaks of 2.3.4.4B H5N8 AIVs were reported in South Korea in 2014 [5]. The viruses were subsequently disseminated via migratory birds and have since been detected in wild and domestic birds worldwide [12678]. Almost simultaneously, 2.3.4.4B H5N6 HPAIVs were undergoing global transmission and have been detected in wild and domestic birds [8910111213]. The cocirculation of H5N6 and H5N8 viruses among wild and domestic birds could increase the probability of reassorting novel viruses [91011]; indeed, reassortants derived from the H5N6 and H5N8 HPAIVs have been detected in various wild birds and poultry [891415]. Moreover, it has been reported that 46% of all 2.3.4.4B H5N6 viruses in domestic poultry were derived from wild birds [9]. Thus, reassortant viruses could pose potential threats to poultry farming and human public health. In this study, nine H5N6 AIVs were isolated from aquatic poultry in Xinjiang Uyghur Autonomous Region, China, and all were grouped into clade 2.3.4.4B. This study aimed to analyze the origin of the isolates.", "Sample collection From March 2017 to December 2018, a total of 354 oropharyngeal and cloacal swabs from apparently healthy poultry (chickens, ducks, and geese) in live poultry markets (LPMs) in Urumqi, China were collected and placed in tubes containing viral transport medium DMEM (1,000 u/L penicillin, 500 ug/L streptomycin, 100 mg/L Nystatin, 100 mg/L Polymyxin B sulfate salt, 1000 mg/L Sulfamethoxazole, 0.05 g/L Ofloxacin). These tubes were kept in an icebox at −20°C before transport to the laboratory and then immediately stored at −80°C.\nThe animal experiment portion of this study was approved by the Committee on the Ethics of Animal Experiments of Xinjiang Key Laboratory of Biological Resources and Genetic Engineering (BRGE-AE001) and carried out under the guidelines of the Animal Care and Use Committee of the College of Life Science and Technology, Xinjiang University.\nFrom March 2017 to December 2018, a total of 354 oropharyngeal and cloacal swabs from apparently healthy poultry (chickens, ducks, and geese) in live poultry markets (LPMs) in Urumqi, China were collected and placed in tubes containing viral transport medium DMEM (1,000 u/L penicillin, 500 ug/L streptomycin, 100 mg/L Nystatin, 100 mg/L Polymyxin B sulfate salt, 1000 mg/L Sulfamethoxazole, 0.05 g/L Ofloxacin). 
These tubes were kept in an icebox at −20°C before transport to the laboratory and then immediately stored at −80°C.\nThe animal experiment portion of this study was approved by the Committee on the Ethics of Animal Experiments of Xinjiang Key Laboratory of Biological Resources and Genetic Engineering (BRGE-AE001) and carried out under the guidelines of the Animal Care and Use Committee of the College of Life Science and Technology, Xinjiang University.\nVirus isolation and identification The swab samples were vortexed and centrifuged, and the supernatants inoculated into 10-day-old specific-pathogen-free embryonated chicken eggs. Seventy-two hours after inoculation, allantois fluid was harvested, and hemagglutinin (HA) activity was assayed using 1% chicken red blood cells [14]. All HA-positive samples underwent reverse transcription polymerase chain reaction (RT-PCR) using universal primers targeting the PB1 gene [15]. Viral RNA extraction (Bioer) and RT-PCR (TOYOBO) were performed according to the manufacturers' instructions.\nRoutine surveillance samples were processed in a biosafety level 2 (BSL-2) laboratory of the Center for Influenza Research and Early-Warning (CASCIRE), Chinese Academy of Science, while experiments with live H5N6 viruses were conducted in a CASCIRE biosafety level 3 (BSL-3) laboratory. Coveralls, gloves, and N95 masks were used during the work at the BSL-2 and BSL-3 laboratories, and all wastes were autoclaved.\nThe swab samples were vortexed and centrifuged, and the supernatants inoculated into 10-day-old specific-pathogen-free embryonated chicken eggs. Seventy-two hours after inoculation, allantois fluid was harvested, and hemagglutinin (HA) activity was assayed using 1% chicken red blood cells [14]. All HA-positive samples underwent reverse transcription polymerase chain reaction (RT-PCR) using universal primers targeting the PB1 gene [15]. Viral RNA extraction (Bioer) and RT-PCR (TOYOBO) were performed according to the manufacturers' instructions.\nRoutine surveillance samples were processed in a biosafety level 2 (BSL-2) laboratory of the Center for Influenza Research and Early-Warning (CASCIRE), Chinese Academy of Science, while experiments with live H5N6 viruses were conducted in a CASCIRE biosafety level 3 (BSL-3) laboratory. Coveralls, gloves, and N95 masks were used during the work at the BSL-2 and BSL-3 laboratories, and all wastes were autoclaved.\nWhole-genome sequencing Next-generation sequencing was used to determine the whole-genome sequences of the AIV isolates. The viral RNA samples were quantified using the 2100 Bioanalyzer System (Agilent Technologies). The RT-PCR and cDNA synthesis procedures were conducted using PrimeScript One-Step RT-PCR Kit Ver.2 (RR055A Takara) and influenza A-specific primers MBTuni-12 and MBTuni-13. Sequencing libraries with an insert size of 200bp were prepared by end-repairing, dA-tailing, adaptor ligation, and PCR amplification, all performed according to the standard manufacturer's instructions (Illumina, USA). The libraries were sequenced on an Illumina HiSeq4000 platform (Illumina) [416].\nNext-generation sequencing was used to determine the whole-genome sequences of the AIV isolates. The viral RNA samples were quantified using the 2100 Bioanalyzer System (Agilent Technologies). The RT-PCR and cDNA synthesis procedures were conducted using PrimeScript One-Step RT-PCR Kit Ver.2 (RR055A Takara) and influenza A-specific primers MBTuni-12 and MBTuni-13. 
Sequencing libraries with an insert size of 200bp were prepared by end-repairing, dA-tailing, adaptor ligation, and PCR amplification, all performed according to the standard manufacturer's instructions (Illumina, USA). The libraries were sequenced on an Illumina HiSeq4000 platform (Illumina) [416].\nPhylogenetic analysis Phylogenetic trees were constructed for each gene segment of the nine AIV isolates to investigate their evolutionary relationships. The AIV reference sequences were obtained from GenBank (http://www.ncbi.nlm.nih.gov/genbank) and GISAID (http://www.gisaid.org) via the online Basic Local Alignment Search Tool (BLAST). The datasets for phylogenetic analysis were generated by aligning with reference sequences closely related to the viral sequences of the isolates and removing sequences with the same strain name and those without a clear subtype or collection date. The final sequence numbers of each gene segment are PB2 37, PB1 28, PA 32, HA 35, NP 33, NA 24, MP 27, and NS 33 (Supplementary Table 1). The sequences were aligned with ClustalW using MEGA 7.0. The general time-reversible nucleotide substitution model with invariant sites (I) and the gamma rate of heterogeneity (GTR +Γ) results were selected as providing the best fit for all datasets. Maximum clade credibility (MCC) phylogenetic trees were generated by applying maximum likelihood analysis with 1000 bootstrap replicates in the MEGA-X program and visualized/annotated using Fig tree 1.4.3 software [4].\nPhylogenetic trees were constructed for each gene segment of the nine AIV isolates to investigate their evolutionary relationships. The AIV reference sequences were obtained from GenBank (http://www.ncbi.nlm.nih.gov/genbank) and GISAID (http://www.gisaid.org) via the online Basic Local Alignment Search Tool (BLAST). The datasets for phylogenetic analysis were generated by aligning with reference sequences closely related to the viral sequences of the isolates and removing sequences with the same strain name and those without a clear subtype or collection date. The final sequence numbers of each gene segment are PB2 37, PB1 28, PA 32, HA 35, NP 33, NA 24, MP 27, and NS 33 (Supplementary Table 1). The sequences were aligned with ClustalW using MEGA 7.0. The general time-reversible nucleotide substitution model with invariant sites (I) and the gamma rate of heterogeneity (GTR +Γ) results were selected as providing the best fit for all datasets. Maximum clade credibility (MCC) phylogenetic trees were generated by applying maximum likelihood analysis with 1000 bootstrap replicates in the MEGA-X program and visualized/annotated using Fig tree 1.4.3 software [4].", "From March 2017 to December 2018, a total of 354 oropharyngeal and cloacal swabs from apparently healthy poultry (chickens, ducks, and geese) in live poultry markets (LPMs) in Urumqi, China were collected and placed in tubes containing viral transport medium DMEM (1,000 u/L penicillin, 500 ug/L streptomycin, 100 mg/L Nystatin, 100 mg/L Polymyxin B sulfate salt, 1000 mg/L Sulfamethoxazole, 0.05 g/L Ofloxacin). 
These tubes were kept in an icebox at −20°C before transport to the laboratory and then immediately stored at −80°C.\nThe animal experiment portion of this study was approved by the Committee on the Ethics of Animal Experiments of Xinjiang Key Laboratory of Biological Resources and Genetic Engineering (BRGE-AE001) and carried out under the guidelines of the Animal Care and Use Committee of the College of Life Science and Technology, Xinjiang University.", "The swab samples were vortexed and centrifuged, and the supernatants inoculated into 10-day-old specific-pathogen-free embryonated chicken eggs. Seventy-two hours after inoculation, allantois fluid was harvested, and hemagglutinin (HA) activity was assayed using 1% chicken red blood cells [14]. All HA-positive samples underwent reverse transcription polymerase chain reaction (RT-PCR) using universal primers targeting the PB1 gene [15]. Viral RNA extraction (Bioer) and RT-PCR (TOYOBO) were performed according to the manufacturers' instructions.\nRoutine surveillance samples were processed in a biosafety level 2 (BSL-2) laboratory of the Center for Influenza Research and Early-Warning (CASCIRE), Chinese Academy of Science, while experiments with live H5N6 viruses were conducted in a CASCIRE biosafety level 3 (BSL-3) laboratory. Coveralls, gloves, and N95 masks were used during the work at the BSL-2 and BSL-3 laboratories, and all wastes were autoclaved.", "Next-generation sequencing was used to determine the whole-genome sequences of the AIV isolates. The viral RNA samples were quantified using the 2100 Bioanalyzer System (Agilent Technologies). The RT-PCR and cDNA synthesis procedures were conducted using PrimeScript One-Step RT-PCR Kit Ver.2 (RR055A Takara) and influenza A-specific primers MBTuni-12 and MBTuni-13. Sequencing libraries with an insert size of 200bp were prepared by end-repairing, dA-tailing, adaptor ligation, and PCR amplification, all performed according to the standard manufacturer's instructions (Illumina, USA). The libraries were sequenced on an Illumina HiSeq4000 platform (Illumina) [416].", "Phylogenetic trees were constructed for each gene segment of the nine AIV isolates to investigate their evolutionary relationships. The AIV reference sequences were obtained from GenBank (http://www.ncbi.nlm.nih.gov/genbank) and GISAID (http://www.gisaid.org) via the online Basic Local Alignment Search Tool (BLAST). The datasets for phylogenetic analysis were generated by aligning with reference sequences closely related to the viral sequences of the isolates and removing sequences with the same strain name and those without a clear subtype or collection date. The final sequence numbers of each gene segment are PB2 37, PB1 28, PA 32, HA 35, NP 33, NA 24, MP 27, and NS 33 (Supplementary Table 1). The sequences were aligned with ClustalW using MEGA 7.0. The general time-reversible nucleotide substitution model with invariant sites (I) and the gamma rate of heterogeneity (GTR +Γ) results were selected as providing the best fit for all datasets. Maximum clade credibility (MCC) phylogenetic trees were generated by applying maximum likelihood analysis with 1000 bootstrap replicates in the MEGA-X program and visualized/annotated using Fig tree 1.4.3 software [4].", "Virus isolation Nine strains of H5N6 AIV were isolated between July and November 2017, one from a duck swab and the rest from goose swabs. 
The eight gene segments of the 9 isolates shared very high nucleotide homology (> 99.9%); thus, the isolates originated from the same ancestor. The strains have been designated as A/goose/Xinjiang/011-015/2017(H5N6), A/duck/Xinjiang/016/2017 (H5N6), and A/goose/Xinjiang/017-019/2017(H5N6), or XJ-H5N6/2017 for short. The complete sequences of the isolates have been submitted to NCBI (accession numbers: MW109029-MW109036, MW110101-MW110108, and MW110205-MW110260).\nSequence and phylogenetic analysis Homology BLAST and phylogenetic analyses were performed on eight genes of the viral isolates (Table 1 and Fig. 1). The viral NA gene exhibited the highest sequence homology (99.1%), and the sequence clustered together with those of H5N6 AIVs isolated from shoveler ducks sampled in Ningxia (NX488-53) and from the environment of Chongqing in 2015; those sequences were designated as 2.3.4.4C H5N6 AIVs. The viral NP gene segments shared the highest sequence homology (99.4%) with 2.3.4.4B H5N8 AIVs isolated from the green-winged teal in Egypt in 2016 and were grouped into an independent subcluster within the phylogenetic tree. The remaining six genes had the highest sequence homologies (99.1%–99.6%) and relatively close genetic relationships in the phylogenetic tree with the 2.3.4.4B H5N8 AIVs. Five of those genes, HA, PB2, NS, PA, and MP, clustered together with 2.3.4.4B H5N8 AIVs isolated from migratory swans in Sanmenxia, Hubei, and Shanxi in 2016, and the sixth gene, PB1, clustered together with H5N8 AIVs isolated from grey-headed gulls sampled in Uganda in 2017. The above results suggest that XJ-H5N6/2017 is a novel reassortant 2.3.4.4B H5N6 AIV derived from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses present in wild birds.\nMolecular characteristics of the H5N6 virus isolates Assessment of the multiple amino acid mutations present in the nine isolates could help elucidate H5N6 virulence. Multiple basic amino acids (KEKRRKR↓GLF) were observed at cleavage sites in the HA protein of the H5N6 isolates (Table 2), suggesting that the isolates were HPAIVs. The viral receptor-binding sites all contained Q226 and G228 (H3 numbering), indicative of an avian-like α2,3-sialic acid (α2,3-SA) receptor-binding preference; however, the S128P, S137A, and T160A mutations in HA could enhance the capacity to bind to a human-like α-2,6-SA receptor [17]. In all nine isolates, 11 amino acid deletions (59–69) in the NA stalk were observed, suggesting the isolates could have different adaptive and virulence characteristics in poultry and mammals [18]. In addition, the L89V, G309D, R477G, I495V [19], and I504V [20] mutations in the PB2 protein and the P42S and D92E mutations in the NS1 were retained in all nine isolates; notably, those mutations can enhance viral virulence and replication activity in mammals [21]. Moreover, the M2-F38L mutation has been associated with antiviral resistance (amantadine and rimantadine) [22].", "Nine strains of H5N6 AIV were isolated between July and November 2017, one from a duck swab and the rest from goose swabs. The eight gene segments of the 9 isolates shared very high nucleotide homology (> 99.9%); thus, the isolates originated from the same ancestor. The strains have been designated as A/goose/Xinjiang/011-015/2017(H5N6), A/duck/Xinjiang/016/2017 (H5N6), and A/goose/Xinjiang/017-019/2017(H5N6), or XJ-H5N6/2017 for short. The complete sequences of the isolates have been submitted to NCBI (accession numbers: MW109029-MW109036, MW110101-MW110108, and MW110205-MW110260).", "Homology BLAST and phylogenetic analyses were performed on eight genes of the viral isolates (Table 1 and Fig. 1). The viral NA gene exhibited the highest sequence homology (99.1%), and the sequence clustered together with those of H5N6 AIVs isolated from shoveler ducks sampled in Ningxia (NX488-53) and from the environment of Chongqing in 2015; those sequences were designated as 2.3.4.4C H5N6 AIVs. 
The viral NP gene segments shared the highest sequence homology (99.4%) with 2.3.4.4B H5N8 AIVs isolated from the green-winged teal in Egypt in 2016 and were grouped into an independent subcluster within the phylogenetic tree. The remaining six genes had the highest sequence homologies (99.1%–99.6%) and relatively close genetic relationships in the phylogenetic tree with the 2.3.4.4B H5N8 AIVs. Five of those genes, HA, PB2, NS, PA, and MP, clustered together with 2.3.4.4B H5N8 AIVs isolated from migratory swans in Sanmenxia, Hubei, and Shanxi in 2016, and the sixth gene, PB1, clustered together with H5N8 AIVs isolated from grey-headed gulls sampled in Uganda in 2017. The above results suggest that XJ-H5N6/2017 is a novel reassortant 2.3.4.4B H5N6 AIV derived from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses present in wild birds.", "Assessment of the multiple amino acid mutations present in the nine isolates could help elucidate H5N6 virulence. Multiple basic amino acids (KEKRRKR↓GLF) were observed at cleavage sites in the HA protein of the H5N6 isolates (Table 2), suggesting that the isolates were HPAIVs. The viral receptor-binding sites all contained Q226 and G228 (H3 numbering), indicative of an avian-like α2,3-sialic acid (α2,3-SA) receptor-binding preference; however, the S128P, S137A, and T160A mutations in HA could enhance the capacity to bind to a human-like α-2,6-SA receptor [17]. In all nine isolates, 11 amino acid deletions (59–69) in the NA stalk were observed, suggesting the isolates could have different adaptive and virulence characteristics in poultry and mammals [18]. In addition, the L89V, G309D, R477G, I495V [19], and I504V [20] mutations in the PB2 protein and the P42S and D92E mutations in the NS1 were retained in all nine isolates; notably, those mutations can enhance viral virulence and replication activity in mammals [21]. Moreover, the M2-F38L mutation has been associated with antiviral resistance (amantadine and rimantadine) [22].", "The Xinjiang region is located in northwest China within the interior of the Eurasian Continent. It comprises an overlapping bird migration region between the Central Asian Flyway and the West Asian–East African Flyway. The Northern Tianshan Mountain (NTM) region in Xinjiang includes many complex mountain-oasis-desert systems, and several water reservoirs have been formed by dam construction in narrow mouths of ravines or rivers, with much of the water coming from snowmelt [23]. Many wild birds migrate along the NTM region every year, and the reservoirs in the region have become key stopover areas for migratory birds from Eurasia and Africa. Moreover, these reservoirs are also used during aquatic poultry farming, thereby increasing the odds of contact between aquatic poultry and wild birds, including direct contact with infected wild birds or indirect contact with related materials (e.g., wild-bird feces), which could result in the introduction of wild-bird origin AIVs into aquatic poultry [8, 24]. The majority of H5 HPAIV detections in wild and domestic birds have been associated with migratory flyways and wild-bird aggregation areas [8, 25]. This wild-domestic bird interface has had an important role in the spread, reassortment, evolution, and epidemiology of AIVs [8], in particular for the 2.3.4.4 H5 HPAIVs that have emerged since 2013. Such HPAIVs are continuing to reassort, evolve, and spread and have been detected in wild and domestic birds around the world [26]. Since 2016, HPAIV outbreaks in Xinjiang have occurred repeatedly, and there have been several outbreaks of H5N6 HPAIVs (including 2.3.4.4B and 2.3.4.4H) in poultry and migratory birds sampled in Xinjiang [27, 28]. In this study, the 2.3.4.4B HPAIV isolates from the aquatic poultry in the LPMs of Urumqi were shown to be novel reassortant viruses derived from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 AIVs of wild birds, suggesting that 2.3.4.4 clade AIVs are circulating in both wild and domestic birds in Xinjiang.\nIn this study, the NA gene of XJ-H5N6/2017 had a close relationship with the wild-bird origin 2.3.4.4C H5N6 AIVs identified from Chongqing and Ningxia, China. Our previous study reported a reassortant 2.3.4.4C H5N6 AIV in Xinjiang (GISAID accession nos.: EPI1548859-1548874), which originated from the NX488-53 virus isolated in December 2016 [29]. The NP gene sequences of the isolates in this study were grouped into an independent subcluster within the 2.3.4.4B H5N8 AIVs, indicating that NP has continued to evolve in Xinjiang. The PB1 gene was most closely related to the 2.3.4.4B H5N8 AIVs from the migratory wild birds sampled in Africa in 2017. In addition, it had a close relationship with 2.3.4.4B H5N8 AIVs from the migratory swans sampled in Central China in 2016. The remaining five genes had close relationships with 2.3.4.4B H5N8 AIVs identified from the migratory swans sampled in Central China in 2016 and bar-headed geese sampled in Qinghai in 2017. The Sanmenxia wetland of the Yellow River is an important wintering ground for migratory swans, with the swans arriving at the wetland in late October each year, departing during the late February to late March period of the following year, and migrating northwest directly to Siberia and northwest China (e.g., Qinghai wetlands) for breeding and molting [30]. An outbreak of H5N1 HPAIV has occurred at the Sanmenxia wetland, and migrating swans carrying the H5N1 HPAIVs have spread it over long distances [31, 32]. Also, H5N8 HPAIVs from wild birds in Hubei, China, have been reported to cause the death of migratory swans, and the H5N8 viruses have been isolated in samples from Qinghai Lake and Europe. Thus, the wetlands and lakes in Central China may have a key role in spreading H5N8 viruses in the East Asian-Australasian and Central Asian flyways [1, 2]. These observations suggest that the isolated 2.3.4.4B H5N8 AIVs could have been carried into Xinjiang by infected migratory wild birds present in Central China in 2016 and 2017 and could also have spread into Africa in 2017. Furthermore, these 2.3.4.4B H5N8 AIVs might have reassorted with the NX488-53 virus to generate the novel 2.3.4.4B virus of XJ-H5N6/2017, which could then circulate within aquatic poultry in the water reservoirs of the NTM, subsequently spreading into the LPMs of Urumqi via the sale of live poultry. In January 2020, 2.3.4.4H H5N6 HPAIVs [SW/XJ/1/2020(H5N6)] were identified from migratory whooper and mute swans in Xinjiang [28], and the low level of similarity between the isolates in this study and SW/XJ/1/2020(H5N6) suggests that these viruses spread into Xinjiang independently. Based on the above observations, the wetlands in Xinjiang may have a key role in the AIVs circulating among the migratory birds and aquatic poultry in Xinjiang and in disseminating the viruses from Central China to the Eurasian continent and East Africa via the Central Asian Flyway and the West Asian–East African Flyway.\nIn our study, the multiple mutations detected in the isolates may be associated with viral virulence, adaptation, and transmission (Table 2). The viral HA protein included cleavage sites containing multiple basic amino acids associated with HPAIVs; moreover, the S128P, S137A, and T160A mutations can enhance binding capacity to a human-like α-2,6-SA receptor [17]. These three mutations were also present in the NX488-53 virus, which can infect BALB/c mice and transmit among guinea pigs via direct contact or aerosol routes [33, 34]. The 11 amino acid deletion in the stalk region of the NA protein has been associated with the adaptation of wild-bird origin AIVs to gallinaceous poultry [35], increasing viral virulence [36], and enhancing viral transmission in ducks [37]. It has also been reported that the NA short-stalk region in H7N9 [36] and H5N1 [38] AIVs can increase virulence in mice, and it is characteristic of viral adaptation to chickens [35]. Five amino acid substitutions in the PB2 protein (L89V, G309D, R477G, I495V, and I504V) may increase viral replication activity in mammalian cells and viral virulence in mice [19, 20]. Two amino acid substitutions in the NS1 protein (D92E and P42S) could be associated with high viral fatality rates and replication efficiency [21, 39, 40].\nIn summary, our data indicate that XJ-H5N6/2017 is a novel reassortant 2.3.4.4B HPAIV originating from the 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses present in wild birds, and its multiple amino acid mutations may be associated with its pathogenicity. To detect novel reassortant HPAIVs in a timely manner, long-term, risk-based surveillance and analysis of AIVs in poultry and wild birds is essential in Xinjiang, China." ]
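The molecular markers tabulated above (the polybasic HA cleavage motif KEKRRKR↓GLF, the avian-type receptor-binding residues Q226 and G228, and substitutions such as HA T160A or PB2 I504V) are the kind of features typically flagged by scanning translated sequences at fixed reference positions. The toy Python sketch below shows that style of check only; the placeholder sequence, the regular expression for a polybasic cleavage site, and the hard-coded positions (which in real pipelines must follow H3 numbering obtained by aligning to a reference) are illustrative assumptions, not the authors' code.

import re

# Crude pattern for a polybasic HA cleavage site: a run of basic residues followed by GLF.
POLYBASIC = re.compile(r"[RK]{4,}.?GLF")

def residue(seq: str, pos: int) -> str:
    """1-based residue lookup; real pipelines renumber via alignment to an H3 reference."""
    return seq[pos - 1] if 0 < pos <= len(seq) else "?"

# Placeholder HA protein fragment, illustrative only, not a real isolate sequence.
ha = "MEKIVLLLAIVSLVKS" + "KEKRRKRGLF"

print("polybasic cleavage motif present:", bool(POLYBASIC.search(ha)))

# Hypothetical marker table: position -> residue state of interest at that position.
markers = {
    226: "Q (avian-type receptor binding)",
    228: "G (avian-type receptor binding)",
    160: "A (loss of glycosylation, enhanced human-type binding)",
}
for pos, note in markers.items():
    print(f"HA position {pos}: found '{residue(ha, pos)}' (marker state: {note})")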
[ "intro", "materials|methods", null, null, null, null, "results", null, null, null, "discussion" ]
[ "Highly pathogenic avian influenza virus", "H5N6", "reassortant", "poultry", "wild bird" ]
INTRODUCTION: Since 2014, highly pathogenic avian influenza viruses (HPAIVs) of clade 2.3.4.4 (H5) have been monitored in poultry and wild birds in European and Asian countries, and the viruses of that clade have evolved into eight groups (A-H) [12]. Group A includes H5N8 avian influenza viruses (AIVs) that were identified in Asia, Europe, and North America [1]. In 2016, a novel H5N8 AIV lineage initially emerged in the wild birds in Qinghai Lake, spread to Mongolia, Siberia, and Europe, and was subsequently identified in China and Korea; this new lineage was classified into clade 2.3.4.4B [3]. On the other hand, group C H5N6 AIVs were first reported in Laos in 2013 and gradually became the main source of sporadic AIV infections in poultry in Southern China [4]. Group D comprises H5N6 viruses mainly identified in China and Vietnam. Subsequent studies have reported that H5N6 viruses of groups 2.3.4.4E, 2.3.4.4F, 2.3.4.4G, and 2.3.4.4H have been isolated from poultry and wild birds [2]. Outbreaks of 2.3.4.4B H5N8 AIVs were reported in South Korea in 2014 [5]. The viruses were subsequently disseminated via migratory birds and have since been detected in wild and domestic birds worldwide [12678]. Almost simultaneously, 2.3.4.4B H5N6 HPAIVs were undergoing global transmission and have been detected in wild and domestic birds [8910111213]. The cocirculation of H5N6 and H5N8 viruses among wild and domestic birds could increase the probability of reassorting novel viruses [91011]; indeed, reassortants derived from the H5N6 and H5N8 HPAIVs have been detected in various wild birds and poultry [891415]. Moreover, it has been reported that 46% of all 2.3.4.4B H5N6 viruses in domestic poultry were derived from wild birds [9]. Thus, reassortant viruses could pose potential threats to poultry farming and human public health. In this study, nine H5N6 AIVs were isolated from aquatic poultry in Xinjiang Uyghur Autonomous Region, China, and all were grouped into clade 2.3.4.4B. This study aimed to analyze the origin of the isolates. MATERIALS AND METHODS: Sample collection From March 2017 to December 2018, a total of 354 oropharyngeal and cloacal swabs from apparently healthy poultry (chickens, ducks, and geese) in live poultry markets (LPMs) in Urumqi, China were collected and placed in tubes containing viral transport medium DMEM (1,000 u/L penicillin, 500 ug/L streptomycin, 100 mg/L Nystatin, 100 mg/L Polymyxin B sulfate salt, 1000 mg/L Sulfamethoxazole, 0.05 g/L Ofloxacin). These tubes were kept in an icebox at −20°C before transport to the laboratory and then immediately stored at −80°C. The animal experiment portion of this study was approved by the Committee on the Ethics of Animal Experiments of Xinjiang Key Laboratory of Biological Resources and Genetic Engineering (BRGE-AE001) and carried out under the guidelines of the Animal Care and Use Committee of the College of Life Science and Technology, Xinjiang University. From March 2017 to December 2018, a total of 354 oropharyngeal and cloacal swabs from apparently healthy poultry (chickens, ducks, and geese) in live poultry markets (LPMs) in Urumqi, China were collected and placed in tubes containing viral transport medium DMEM (1,000 u/L penicillin, 500 ug/L streptomycin, 100 mg/L Nystatin, 100 mg/L Polymyxin B sulfate salt, 1000 mg/L Sulfamethoxazole, 0.05 g/L Ofloxacin). These tubes were kept in an icebox at −20°C before transport to the laboratory and then immediately stored at −80°C. 
The animal experiment portion of this study was approved by the Committee on the Ethics of Animal Experiments of Xinjiang Key Laboratory of Biological Resources and Genetic Engineering (BRGE-AE001) and carried out under the guidelines of the Animal Care and Use Committee of the College of Life Science and Technology, Xinjiang University. Virus isolation and identification The swab samples were vortexed and centrifuged, and the supernatants inoculated into 10-day-old specific-pathogen-free embryonated chicken eggs. Seventy-two hours after inoculation, allantois fluid was harvested, and hemagglutinin (HA) activity was assayed using 1% chicken red blood cells [14]. All HA-positive samples underwent reverse transcription polymerase chain reaction (RT-PCR) using universal primers targeting the PB1 gene [15]. Viral RNA extraction (Bioer) and RT-PCR (TOYOBO) were performed according to the manufacturers' instructions. Routine surveillance samples were processed in a biosafety level 2 (BSL-2) laboratory of the Center for Influenza Research and Early-Warning (CASCIRE), Chinese Academy of Science, while experiments with live H5N6 viruses were conducted in a CASCIRE biosafety level 3 (BSL-3) laboratory. Coveralls, gloves, and N95 masks were used during the work at the BSL-2 and BSL-3 laboratories, and all wastes were autoclaved. The swab samples were vortexed and centrifuged, and the supernatants inoculated into 10-day-old specific-pathogen-free embryonated chicken eggs. Seventy-two hours after inoculation, allantois fluid was harvested, and hemagglutinin (HA) activity was assayed using 1% chicken red blood cells [14]. All HA-positive samples underwent reverse transcription polymerase chain reaction (RT-PCR) using universal primers targeting the PB1 gene [15]. Viral RNA extraction (Bioer) and RT-PCR (TOYOBO) were performed according to the manufacturers' instructions. Routine surveillance samples were processed in a biosafety level 2 (BSL-2) laboratory of the Center for Influenza Research and Early-Warning (CASCIRE), Chinese Academy of Science, while experiments with live H5N6 viruses were conducted in a CASCIRE biosafety level 3 (BSL-3) laboratory. Coveralls, gloves, and N95 masks were used during the work at the BSL-2 and BSL-3 laboratories, and all wastes were autoclaved. Whole-genome sequencing Next-generation sequencing was used to determine the whole-genome sequences of the AIV isolates. The viral RNA samples were quantified using the 2100 Bioanalyzer System (Agilent Technologies). The RT-PCR and cDNA synthesis procedures were conducted using PrimeScript One-Step RT-PCR Kit Ver.2 (RR055A Takara) and influenza A-specific primers MBTuni-12 and MBTuni-13. Sequencing libraries with an insert size of 200bp were prepared by end-repairing, dA-tailing, adaptor ligation, and PCR amplification, all performed according to the standard manufacturer's instructions (Illumina, USA). The libraries were sequenced on an Illumina HiSeq4000 platform (Illumina) [416]. Next-generation sequencing was used to determine the whole-genome sequences of the AIV isolates. The viral RNA samples were quantified using the 2100 Bioanalyzer System (Agilent Technologies). The RT-PCR and cDNA synthesis procedures were conducted using PrimeScript One-Step RT-PCR Kit Ver.2 (RR055A Takara) and influenza A-specific primers MBTuni-12 and MBTuni-13. 
Sequencing libraries with an insert size of 200bp were prepared by end-repairing, dA-tailing, adaptor ligation, and PCR amplification, all performed according to the standard manufacturer's instructions (Illumina, USA). The libraries were sequenced on an Illumina HiSeq4000 platform (Illumina) [416]. Phylogenetic analysis Phylogenetic trees were constructed for each gene segment of the nine AIV isolates to investigate their evolutionary relationships. The AIV reference sequences were obtained from GenBank (http://www.ncbi.nlm.nih.gov/genbank) and GISAID (http://www.gisaid.org) via the online Basic Local Alignment Search Tool (BLAST). The datasets for phylogenetic analysis were generated by aligning with reference sequences closely related to the viral sequences of the isolates and removing sequences with the same strain name and those without a clear subtype or collection date. The final sequence numbers of each gene segment are PB2 37, PB1 28, PA 32, HA 35, NP 33, NA 24, MP 27, and NS 33 (Supplementary Table 1). The sequences were aligned with ClustalW using MEGA 7.0. The general time-reversible nucleotide substitution model with invariant sites (I) and the gamma rate of heterogeneity (GTR +Γ) results were selected as providing the best fit for all datasets. Maximum clade credibility (MCC) phylogenetic trees were generated by applying maximum likelihood analysis with 1000 bootstrap replicates in the MEGA-X program and visualized/annotated using Fig tree 1.4.3 software [4]. Phylogenetic trees were constructed for each gene segment of the nine AIV isolates to investigate their evolutionary relationships. The AIV reference sequences were obtained from GenBank (http://www.ncbi.nlm.nih.gov/genbank) and GISAID (http://www.gisaid.org) via the online Basic Local Alignment Search Tool (BLAST). The datasets for phylogenetic analysis were generated by aligning with reference sequences closely related to the viral sequences of the isolates and removing sequences with the same strain name and those without a clear subtype or collection date. The final sequence numbers of each gene segment are PB2 37, PB1 28, PA 32, HA 35, NP 33, NA 24, MP 27, and NS 33 (Supplementary Table 1). The sequences were aligned with ClustalW using MEGA 7.0. The general time-reversible nucleotide substitution model with invariant sites (I) and the gamma rate of heterogeneity (GTR +Γ) results were selected as providing the best fit for all datasets. Maximum clade credibility (MCC) phylogenetic trees were generated by applying maximum likelihood analysis with 1000 bootstrap replicates in the MEGA-X program and visualized/annotated using Fig tree 1.4.3 software [4]. Sample collection: From March 2017 to December 2018, a total of 354 oropharyngeal and cloacal swabs from apparently healthy poultry (chickens, ducks, and geese) in live poultry markets (LPMs) in Urumqi, China were collected and placed in tubes containing viral transport medium DMEM (1,000 u/L penicillin, 500 ug/L streptomycin, 100 mg/L Nystatin, 100 mg/L Polymyxin B sulfate salt, 1000 mg/L Sulfamethoxazole, 0.05 g/L Ofloxacin). These tubes were kept in an icebox at −20°C before transport to the laboratory and then immediately stored at −80°C. 
The animal experiment portion of this study was approved by the Committee on the Ethics of Animal Experiments of Xinjiang Key Laboratory of Biological Resources and Genetic Engineering (BRGE-AE001) and carried out under the guidelines of the Animal Care and Use Committee of the College of Life Science and Technology, Xinjiang University. Virus isolation and identification: The swab samples were vortexed and centrifuged, and the supernatants inoculated into 10-day-old specific-pathogen-free embryonated chicken eggs. Seventy-two hours after inoculation, allantois fluid was harvested, and hemagglutinin (HA) activity was assayed using 1% chicken red blood cells [14]. All HA-positive samples underwent reverse transcription polymerase chain reaction (RT-PCR) using universal primers targeting the PB1 gene [15]. Viral RNA extraction (Bioer) and RT-PCR (TOYOBO) were performed according to the manufacturers' instructions. Routine surveillance samples were processed in a biosafety level 2 (BSL-2) laboratory of the Center for Influenza Research and Early-Warning (CASCIRE), Chinese Academy of Science, while experiments with live H5N6 viruses were conducted in a CASCIRE biosafety level 3 (BSL-3) laboratory. Coveralls, gloves, and N95 masks were used during the work at the BSL-2 and BSL-3 laboratories, and all wastes were autoclaved. Whole-genome sequencing: Next-generation sequencing was used to determine the whole-genome sequences of the AIV isolates. The viral RNA samples were quantified using the 2100 Bioanalyzer System (Agilent Technologies). The RT-PCR and cDNA synthesis procedures were conducted using PrimeScript One-Step RT-PCR Kit Ver.2 (RR055A Takara) and influenza A-specific primers MBTuni-12 and MBTuni-13. Sequencing libraries with an insert size of 200bp were prepared by end-repairing, dA-tailing, adaptor ligation, and PCR amplification, all performed according to the standard manufacturer's instructions (Illumina, USA). The libraries were sequenced on an Illumina HiSeq4000 platform (Illumina) [416]. Phylogenetic analysis: Phylogenetic trees were constructed for each gene segment of the nine AIV isolates to investigate their evolutionary relationships. The AIV reference sequences were obtained from GenBank (http://www.ncbi.nlm.nih.gov/genbank) and GISAID (http://www.gisaid.org) via the online Basic Local Alignment Search Tool (BLAST). The datasets for phylogenetic analysis were generated by aligning with reference sequences closely related to the viral sequences of the isolates and removing sequences with the same strain name and those without a clear subtype or collection date. The final sequence numbers of each gene segment are PB2 37, PB1 28, PA 32, HA 35, NP 33, NA 24, MP 27, and NS 33 (Supplementary Table 1). The sequences were aligned with ClustalW using MEGA 7.0. The general time-reversible nucleotide substitution model with invariant sites (I) and the gamma rate of heterogeneity (GTR +Γ) results were selected as providing the best fit for all datasets. Maximum clade credibility (MCC) phylogenetic trees were generated by applying maximum likelihood analysis with 1000 bootstrap replicates in the MEGA-X program and visualized/annotated using Fig tree 1.4.3 software [4]. RESULTS: Virus isolation Nine strains of H5N6 AIV were isolated between July and November 2017, one from a duck swab and the rest from goose swabs. The eight gene segments of the 9 isolates shared very high nucleotide homology (> 99.9%); thus, the isolates originated from the same ancestor. 
The strains have been designated as A/goose/Xinjiang/011-015/2017(H5N6), A/duck/Xinjiang/016/2017 (H5N6), and A/goose/Xinjiang/017-019/2017(H5N6), or XJ-H5N6/2017 for short. The complete sequences of the isolates have been submitted to NCBI (accession numbers: MW109029-MW109036, MW110101-MW110108, and MW110205-MW110260). Nine strains of H5N6 AIV were isolated between July and November 2017, one from a duck swab and the rest from goose swabs. The eight gene segments of the 9 isolates shared very high nucleotide homology (> 99.9%); thus, the isolates originated from the same ancestor. The strains have been designated as A/goose/Xinjiang/011-015/2017(H5N6), A/duck/Xinjiang/016/2017 (H5N6), and A/goose/Xinjiang/017-019/2017(H5N6), or XJ-H5N6/2017 for short. The complete sequences of the isolates have been submitted to NCBI (accession numbers: MW109029-MW109036, MW110101-MW110108, and MW110205-MW110260). Sequence and phylogenetic analysis Homology BLAST and phylogenetic analyses were performed on eight genes of the viral isolates (Table 1 and Fig. 1). The viral NA gene exhibited the highest sequence homology (99.1%), and the sequence clustered together with those of H5N6 AIVs isolated from shoveler ducks sampled in Ningxia (NX488-53) and from the environment of Chongqing in 2015; those sequences were designated as 2.3.4.4C H5N6 AIVs. The viral NP gene segments shared the highest sequence homology (99.4%) with 2.3.4.4B H5N8 AIVs isolated from the green-winged teal in Egypt in 2016 and were grouped into an independent subcluster within the phylogenetic tree. The remaining six genes had the highest sequence homologies (99.1%–99.6%) and relatively close genetic relationships in the phylogenetic tree with the 2.3.4.4B H5N8 AIVs. Five of those genes, HA, PB2, NS, PA, and MP, clustered together with 2.3.4.4B H5N8 AIVs isolated from migratory swans in Sanmenxia, Hubei, and Shanxi in 2016, and this sixth gene, PB1, clustered together with H5N8 AIVs isolated from grey-headed gulls sampled in Uganda in 2017. The above results suggest that XJ-H5N6/2017 is a novel reassortant 2.3.4.4B H5N6 AIV derived from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses present in wild birds. Homology BLAST and phylogenetic analyses were performed on eight genes of the viral isolates (Table 1 and Fig. 1). The viral NA gene exhibited the highest sequence homology (99.1%), and the sequence clustered together with those of H5N6 AIVs isolated from shoveler ducks sampled in Ningxia (NX488-53) and from the environment of Chongqing in 2015; those sequences were designated as 2.3.4.4C H5N6 AIVs. The viral NP gene segments shared the highest sequence homology (99.4%) with 2.3.4.4B H5N8 AIVs isolated from the green-winged teal in Egypt in 2016 and were grouped into an independent subcluster within the phylogenetic tree. The remaining six genes had the highest sequence homologies (99.1%–99.6%) and relatively close genetic relationships in the phylogenetic tree with the 2.3.4.4B H5N8 AIVs. Five of those genes, HA, PB2, NS, PA, and MP, clustered together with 2.3.4.4B H5N8 AIVs isolated from migratory swans in Sanmenxia, Hubei, and Shanxi in 2016, and this sixth gene, PB1, clustered together with H5N8 AIVs isolated from grey-headed gulls sampled in Uganda in 2017. The above results suggest that XJ-H5N6/2017 is a novel reassortant 2.3.4.4B H5N6 AIV derived from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses present in wild birds. 
Molecular characteristics of the H5N6 virus isolates Assessment of the multiple amino acid mutations present in the nine isolates could help elucidate H5N6 virulence. Multiple basic amino acids (KEKRRKR↓GLF) were observed at cleavage sites in the HA protein of the H5N6 isolates (Table 2), suggesting that the isolates were HPAIVs. The viral receptor-binding sites all contained Q226 and G228 (H3 numbering), indicative of an avian-like α2,3-sialic acid (α2,3-SA) receptor-binding preference; however, the S128P, S137A, and T160A mutations in HA could enhance the capacity to bind to a human-like α-2,6-SA receptor [17]. In all nine isolates, 11 amino acid deletions (59–69) in the NA stalk were observed, suggesting the isolates could have different adaptive and virulence characteristics in poultry and mammals [18]. In addition, the L89V, G309D, R477G, I495V [19], and I504V [20] mutations in the PB2 protein and the P42S and D92E mutations in the NS1 were retained in all nine isolates; notably, those mutations can enhance viral virulence and replication activity in mammals [21]. Moreover, the M2-F38L mutation has been associated with antiviral resistance (amantadine and rimantadine) [22]. Assessment of the multiple amino acid mutations present in the nine isolates could help elucidate H5N6 virulence. Multiple basic amino acids (KEKRRKR↓GLF) were observed at cleavage sites in the HA protein of the H5N6 isolates (Table 2), suggesting that the isolates were HPAIVs. The viral receptor-binding sites all contained Q226 and G228 (H3 numbering), indicative of an avian-like α2,3-sialic acid (α2,3-SA) receptor-binding preference; however, the S128P, S137A, and T160A mutations in HA could enhance the capacity to bind to a human-like α-2,6-SA receptor [17]. In all nine isolates, 11 amino acid deletions (59–69) in the NA stalk were observed, suggesting the isolates could have different adaptive and virulence characteristics in poultry and mammals [18]. In addition, the L89V, G309D, R477G, I495V [19], and I504V [20] mutations in the PB2 protein and the P42S and D92E mutations in the NS1 were retained in all nine isolates; notably, those mutations can enhance viral virulence and replication activity in mammals [21]. Moreover, the M2-F38L mutation has been associated with antiviral resistance (amantadine and rimantadine) [22]. Virus isolation: Nine strains of H5N6 AIV were isolated between July and November 2017, one from a duck swab and the rest from goose swabs. The eight gene segments of the 9 isolates shared very high nucleotide homology (> 99.9%); thus, the isolates originated from the same ancestor. The strains have been designated as A/goose/Xinjiang/011-015/2017(H5N6), A/duck/Xinjiang/016/2017 (H5N6), and A/goose/Xinjiang/017-019/2017(H5N6), or XJ-H5N6/2017 for short. The complete sequences of the isolates have been submitted to NCBI (accession numbers: MW109029-MW109036, MW110101-MW110108, and MW110205-MW110260). Sequence and phylogenetic analysis: Homology BLAST and phylogenetic analyses were performed on eight genes of the viral isolates (Table 1 and Fig. 1). The viral NA gene exhibited the highest sequence homology (99.1%), and the sequence clustered together with those of H5N6 AIVs isolated from shoveler ducks sampled in Ningxia (NX488-53) and from the environment of Chongqing in 2015; those sequences were designated as 2.3.4.4C H5N6 AIVs. 
The viral NP gene segments shared the highest sequence homology (99.4%) with 2.3.4.4B H5N8 AIVs isolated from the green-winged teal in Egypt in 2016 and were grouped into an independent subcluster within the phylogenetic tree. The remaining six genes had the highest sequence homologies (99.1%–99.6%) and relatively close genetic relationships in the phylogenetic tree with the 2.3.4.4B H5N8 AIVs. Five of those genes, HA, PB2, NS, PA, and MP, clustered together with 2.3.4.4B H5N8 AIVs isolated from migratory swans in Sanmenxia, Hubei, and Shanxi in 2016, and this sixth gene, PB1, clustered together with H5N8 AIVs isolated from grey-headed gulls sampled in Uganda in 2017. The above results suggest that XJ-H5N6/2017 is a novel reassortant 2.3.4.4B H5N6 AIV derived from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses present in wild birds. Molecular characteristics of the H5N6 virus isolates: Assessment of the multiple amino acid mutations present in the nine isolates could help elucidate H5N6 virulence. Multiple basic amino acids (KEKRRKR↓GLF) were observed at cleavage sites in the HA protein of the H5N6 isolates (Table 2), suggesting that the isolates were HPAIVs. The viral receptor-binding sites all contained Q226 and G228 (H3 numbering), indicative of an avian-like α2,3-sialic acid (α2,3-SA) receptor-binding preference; however, the S128P, S137A, and T160A mutations in HA could enhance the capacity to bind to a human-like α-2,6-SA receptor [17]. In all nine isolates, 11 amino acid deletions (59–69) in the NA stalk were observed, suggesting the isolates could have different adaptive and virulence characteristics in poultry and mammals [18]. In addition, the L89V, G309D, R477G, I495V [19], and I504V [20] mutations in the PB2 protein and the P42S and D92E mutations in the NS1 were retained in all nine isolates; notably, those mutations can enhance viral virulence and replication activity in mammals [21]. Moreover, the M2-F38L mutation has been associated with antiviral resistance (amantadine and rimantadine) [22]. DISCUSSION: The Xinjiang region is located in northwest China within the interior of the Eurasian Continent. It comprises an overlapping bird migration region between the Central Asian Flyway and the West Asian–East African Flyway. The Northern Tianshan Mountain (NTM) region in Xinjiang includes many complex mountain-oasis-desert systems, and several water reservoirs have been formed by dam construction in narrow mouths of ravines or rivers, with much of the water coming from snowmelt [23]. Many wild birds migrate along the NTM region every year, and the reservoirs in the region have become key stopover areas for migratory birds from Eurasia and Africa. Moreover, these reservoirs are also used during aquatic poultry farming, thereby increasing the odds of contact between aquatic poultry and wild birds, including direct contact with infected wild birds or indirect contact with related materials (e.g., wild-bird feces), which could result in the introduction of wild-bird origin AIVs into aquatic poultry [824]. The majority of H5 HPAIVs detections in wild and domestic birds have been associated with migratory flyways and wild-bird aggregation areas [825]. This wild-domestic bird interface has had an important role in the spread, reassortment, evolution, and epidemiology of AIVs [8]; in particular, 2.3.4.4 H5 HPAIVs that have emerged since 2013. Such HPAIVs are continuing to reassort, evolve, and spread and have been detected in wild and domestic birds around the world [26]. 
Since 2016, HPAIVs outbreaks in Xinjiang have occurred repeatedly, and there have been several outbreaks of H5N6 HPAIVs (including 2.3.4.4B and 2.3.4.4H) in poultry and migratory birds sampled in Xinjiang [2728]. In this study, the 2.3.4.4B HPAIV isolates from the aquatic poultry in the LPMs of Urumqi were shown to be novel reassortant viruses derived from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 AIVs of wild birds, suggesting that 2.3.4.4 clade AIVs are circulating in both wild and domestic birds in Xinjiang. In this study, the NA gene of XJ-H5N6/2017 had a close relationship with the wild-bird origin 2.3.4.4C H5N6 AIVs identified from Chongqing and Ningxia, China. Our previous study reported a reassortant 2.3.4.4C H5N6 AIV in Xinjiang (GISAID accession nos.: EPI1548859-1548874), which originated from the NX488-53 virus isolated in December 2016 [29]. The NP gene sequences of the isolates in this study were grouped into an independent subcluster within the 2.3.4.4B H5N8 AIVs, indicating that NP has continued to evolve in Xinjiang. The PB1 gene was most closely related to the 2.3.4.4B H5N8 AIVs from the migratory wild birds sampled in Africa in 2017. In addition, it had a close relationship with 2.3.4.4B H5N8 AIVs from the migratory swans sampled in Central China in 2016. The remaining five genes had close relationships with 2.3.4.4B H5N8 AIVs identified from the migratory swans sampled in Central China in 2016 and bar-headed geese sampled in Qinghai in 2017. The Sanmenxia wetland of the Yellow River is an important wintering ground for migratory swans, with the swans arriving at the wetland in late October each year, departing the wetland during the late February to late March period of the following year, and migrating northwest directly to Siberia and northwest China (e.g., Qinghai wetlands) for breeding and molting [30]. An outbreak of H5N1 HPAIV has occurred at the Sanmenxia wetland, and migrating swans carrying the H5N1 HPAIVs have spread it over long distances [3132]. Also, H5N8 HPAIVs from wild birds in Hubei, China have been reported to cause the death of migratory swans, and the H5N8 viruses have been isolated in samples from Qinghai Lake and Europe. Thus, the wetlands and lakes in Central China may have a key role in spreading H5N8 viruses in the East Asian-Australasian and Central Asian flyways [12]. These observations suggest that the isolated 2.3.4.4B H5N8 AIVs could have spread into Xinjiang by infected migratory wild birds present in Central China in 2016 and 2017 and spreading into Africa in 2017. Furthermore, these 2.3.4.4B H5N8 AIVs might have reassorted with the NX488-53 virus to generate the novel 2.3.4.4B virus of XJ-H5N6/2017, which could then circulate within aquatic poultry in the water reservoirs of NTM, subsequently spreading into the LPMs of Urumqi via the sale of live poultry. In January 2020, 2.3.4.4H H5N6 HPAIVs [SW/XJ/1/2020(H5N6)] were identified from migratory whooper and mute swans in Xinjiang [28], and the low level of similarity between the isolates in this study and those of SW/XJ/1/2020(H5N6) suggests that these viruses spread into Xinjiang independently. Based on the above observations, the wetlands in Xinjiang may have a key role in the AIVs circulating among the migratory birds and aquatic poultry in Xinjiang and in disseminating the viruses from Central China to the Eurasian continent and East African via the Central Asian Flyway and the West Asian–East African Flyway. 
In our study, the multiple mutations detected in the isolates may be associated with viral virulence, adaptation, and transmission (Table 2). The viral HA protein included a cleavage site containing multiple basic amino acids associated with HPAIVs; moreover, the S128P, S137A, and T160A mutations can enhance binding capacity to a human-like α2,6-SA receptor [17]. These three mutations were also present in the NX488-53 virus, which can infect BALB/c mice and transmit among guinea pigs via direct contact or aerosol routes [33, 34]. The 11-amino-acid deletion in the stalk region of the NA protein has been associated with the adaptation of wild-bird origin AIVs to gallinaceous poultry [35], increased viral virulence [36], and enhanced viral transmission in ducks [37]. It has also been reported that the short NA stalk in H7N9 [36] and H5N1 [38] AIVs can increase virulence in mice and is characteristic of viral adaptation to chickens [35]. Five amino acid substitutions in the PB2 protein (L89V, G309D, R477G, I495V, and I504V) may increase viral replication activity in mammalian cells and viral virulence in mice [19, 20]. Two amino acid substitutions in the NS1 protein (D92E and P42S) could be associated with high viral fatality rates and replication efficiency [21, 39, 40]. In summary, our data indicate that XJ-H5N6/2017 is a novel reassortant 2.3.4.4B HPAIV originating from the 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses present in wild birds. The multiple amino acid mutations in the novel reassortant are associated with its pathogenicity. To detect novel reassortant HPAIVs in a timely manner, long-term, risk-based surveillance and analysis of AIVs in poultry and wild birds are essential in Xinjiang, China.
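The kind of marker screening summarized above can be illustrated with a small script. The sketch below is hypothetical and is not the authors' pipeline: positions follow the numbering conventions cited in the text (e.g., H3 numbering for HA), the marker table lists only a subset of the substitutions discussed, and the input sequences are placeholders rather than real viral proteins.

# Marker residues discussed in the text (subset, positions assumed 1-based
# in the relevant numbering scheme for each aligned protein).
MARKERS = {
    "HA":  {128: "P", 137: "A", 160: "A"},                     # linked to human-like alpha-2,6-SA binding
    "PB2": {89: "V", 309: "D", 477: "G", 495: "V", 504: "V"},  # linked to replication/virulence in mammals
    "NS1": {42: "S", 92: "E"},                                 # linked to increased virulence
}

def screen_markers(proteins):
    """Report, per protein, whether each marker residue is present.

    `proteins` maps a protein name to its aligned amino acid sequence.
    """
    report = {}
    for name, markers in MARKERS.items():
        seq = proteins.get(name, "")
        report[name] = {
            f"{residue}{pos}": len(seq) >= pos and seq[pos - 1] == residue
            for pos, residue in markers.items()
        }
    return report

# Toy example with a dummy NS1 sequence (positions 42 and 92 set to S and E):
dummy_ns1 = "M" * 41 + "S" + "M" * 49 + "E" + "M" * 100
print(screen_markers({"NS1": dummy_ns1}))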
Background: The H5 avian influenza viruses (AIVs) of clade 2.3.4.4 circulate in wild and domestic birds worldwide. In 2017, nine strains of H5N6 AIVs were isolated from aquatic poultry in Xinjiang, Northwest China. Methods: AIVs were isolated from oropharyngeal and cloacal swabs of poultry. Identification was accomplished by inoculating isolates into embryonated chicken eggs and performing hemagglutination tests and reverse transcription polymerase chain reaction (RT-PCR). The viral genomes were amplified with RT-PCR and then sequenced. Sequence alignment, phylogenetic, and molecular characterization analyses were performed using bioinformatics software. Results: The nine isolates originated from the same ancestor. The viral HA gene belonged to clade 2.3.4.4B, while the NA gene had a close phylogenetic relationship with the 2.3.4.4C H5N6 highly pathogenic avian influenza viruses (HPAIVs) isolated from shoveler ducks in Ningxia in 2015. The NP gene was grouped into an independent subcluster within the 2.3.4.4B H5N8 AIVs, and the remaining six genes all had close phylogenetic relationships with the 2.3.4.4B H5N8 HPAIVs isolated from wild birds in China, Egypt, Uganda, Cameroon, and India in 2016-2017. Multiple basic amino acid residues associated with HPAIVs were located adjacent to the cleavage site of the HA protein. The nine isolates comprised reassortant 2.3.4.4B HPAIVs originating from 2.3.4.4B H5N8 and 2.3.4.4C H5N6 viruses in wild birds. Conclusions: These results suggest that the Northern Tianshan Mountain wetlands in Xinjiang may have a key role in AIVs disseminating from Central China to the Eurasian continent and East Africa.
null
null
5,613
287
[ 174, 185, 125, 210, 124, 234, 235 ]
11
[ "h5n6", "isolates", "aivs", "viral", "4b", "h5n8", "2017", "xinjiang", "wild", "birds" ]
[ "h5n6 viruses groups", "characteristics h5n6 virus", "viruses domestic poultry", "pathogenic avian influenza", "avian influenza viruses" ]
null
null
null
[CONTENT] Highly pathogenic avian influenza virus | H5N6 | reassortant | poultry | wild bird [SUMMARY]
null
[CONTENT] Highly pathogenic avian influenza virus | H5N6 | reassortant | poultry | wild bird [SUMMARY]
null
[CONTENT] Highly pathogenic avian influenza virus | H5N6 | reassortant | poultry | wild bird [SUMMARY]
null
[CONTENT] Animals | Animals, Domestic | Animals, Wild | Birds | China | Influenza A virus | Influenza in Birds | Phylogeny | Reassortant Viruses | Virulence | Whole Genome Sequencing [SUMMARY]
null
[CONTENT] Animals | Animals, Domestic | Animals, Wild | Birds | China | Influenza A virus | Influenza in Birds | Phylogeny | Reassortant Viruses | Virulence | Whole Genome Sequencing [SUMMARY]
null
[CONTENT] Animals | Animals, Domestic | Animals, Wild | Birds | China | Influenza A virus | Influenza in Birds | Phylogeny | Reassortant Viruses | Virulence | Whole Genome Sequencing [SUMMARY]
null
[CONTENT] h5n6 viruses groups | characteristics h5n6 virus | viruses domestic poultry | pathogenic avian influenza | avian influenza viruses [SUMMARY]
null
[CONTENT] h5n6 viruses groups | characteristics h5n6 virus | viruses domestic poultry | pathogenic avian influenza | avian influenza viruses [SUMMARY]
null
[CONTENT] h5n6 viruses groups | characteristics h5n6 virus | viruses domestic poultry | pathogenic avian influenza | avian influenza viruses [SUMMARY]
null
[CONTENT] h5n6 | isolates | aivs | viral | 4b | h5n8 | 2017 | xinjiang | wild | birds [SUMMARY]
null
[CONTENT] h5n6 | isolates | aivs | viral | 4b | h5n8 | 2017 | xinjiang | wild | birds [SUMMARY]
null
[CONTENT] h5n6 | isolates | aivs | viral | 4b | h5n8 | 2017 | xinjiang | wild | birds [SUMMARY]
null
[CONTENT] birds | viruses | wild | poultry | h5n6 | reported | domestic | h5n8 | wild birds | 4b [SUMMARY]
null
[CONTENT] h5n6 | isolates | aivs | 2017 | 99 | mutations | h5n8 | 4b | isolated | sequence [SUMMARY]
null
[CONTENT] h5n6 | aivs | isolates | 4b | h5n8 | 2017 | wild | birds | xinjiang | viral [SUMMARY]
null
[CONTENT] 2.3.4.4 ||| 2017 | nine | Xinjiang | Northwest China [SUMMARY]
null
[CONTENT] Nine ||| 2.3.4.4B | NA | 2.3.4.4C | avian | Ningxia | 2015 ||| NP | 2.3.4.4B | six | 2.3.4.4B | China | Egypt | Uganda | Cameroon | India | 2016-2017 | Multiple | HA ||| nine | 2.3.4.4B | 2.3.4.4B | 2.3.4.4C H5N6 [SUMMARY]
null
[CONTENT] 2.3.4.4 ||| 2017 | nine | Xinjiang | Northwest China ||| ||| Identification | transcription | RT-PCR ||| RT-PCR ||| ||| ||| Nine ||| 2.3.4.4B | NA | 2.3.4.4C | avian | Ningxia | 2015 ||| NP | 2.3.4.4B | six | 2.3.4.4B | China | Egypt | Uganda | Cameroon | India | 2016-2017 | Multiple | HA ||| nine | 2.3.4.4B | 2.3.4.4B | 2.3.4.4C H5N6 ||| the Northern Tianshan Mountain | Xinjiang | Central China | Eurasian | East African [SUMMARY]
null
Trends in conventional cardiovascular risk factors and myocardial infarction subtypes among young Chinese men with a first acute myocardial infarction.
34964143
There is limited data on the characteristics of conventional risk factors (RFs) in young Chinese men hospitalized with a first acute myocardial infarction (AMI).
BACKGROUND
A total of 2739 men aged 18-44 years hospitalized for a first AMI were identified from 2007 to 2017. The overall prevalence of RFs and their respective temporal trends and subtypes of AMI were evaluated.
METHODS
The most prevalent conditions were smoking, followed by hypertension and then obesity. Patients aged <35 years had a much higher prevalence of hypercholesterolemia and obesity. Compared with a similar reference population in the United States, young Chinese men had a higher prevalence of smoking and dyslipidemia, but a lower prevalence of obesity, hypertension, and diabetes. The prevalence of hypertension increased from 2007 through 2017 (p trend <.001), whereas smoking decreased gradually. AMI frequently presented as ST-segment elevation MI (STEMI) (77.5%). A cluster of conventional RFs (3 RFs, odds ratio [OR]: 1.69, 95% confidence interval [CI]: 1.11-2.57; ≥4 RFs, OR: 2.50, 95% CI: 1.55-4.03) and multivessel disease (OR = 1.32, 95% CI: 1.08-1.60) increased the risk of non-STEMI (NSTEMI).
RESULTS
Conventional RFs were highly prevalent in young Chinese men who were hospitalized for first AMI events, and the temporal trends differed between the Chinese and US populations. Multivessel disease and a cluster of conventional RFs are closely related to NSTEMI. Optimized preventive strategies among young adults are warranted.
CONCLUSIONS
[ "Cardiovascular Diseases", "China", "Heart Disease Risk Factors", "Humans", "Male", "Myocardial Infarction", "Non-ST Elevated Myocardial Infarction", "Risk Factors", "ST Elevation Myocardial Infarction", "United States", "Young Adult" ]
8799041
INTRODUCTION
The primary and secondary prevention of coronary heart disease (CHD) in young adults has garnered tremendous attention given the rapid increase in the incidence of acute coronary events and hospitalization rates, especially in young men. Evidence from observational epidemiological studies showed the incidence of acute coronary events increased by 37.4% in the year 2009 compared to 2007 in young adults aged 35–39 years, making it the largest increase for this age group. 1 Hospitalization rates for acute myocardial infarction (AMI) per 100 000 population experienced the most significant increase in young men (<55 years), by 45.8% from 2007 to 2012 in Beijing; 2 the proportion of young adults hospitalized for CHD was nearly 90% in men from 2013 to 2014 in Beijing. 3 This trend parallels an increase in cardiovascular risk factors (RFs) including smoking, hypertension, diabetes, obesity, and dyslipidemia in the general Chinese population as well as an increase in hospitalizations for AMI. 2 , 4 , 5 , 6 The Prospective Urban Rural Epidemiology (PURE) study showed that approximately 70% of cardiovascular disease (CVD) cases were attributed to modifiable RFs. 7 Though several studies have evaluated the prevalence of these RFs during a first or any episode of AMI and have found a high prevalence of at least 1 RF (approximately 90%), 8 , 9 , 10 most patients were classified as being at low or intermediate risk by traditional CHD risk prediction scores. 11 , 12 Unfortunately, this does not aid in the development of appropriate primary preventive strategies to decrease the risk of CHD. The prevalence of conventional RFs, other clinical characteristics, and their trends need to be clarified, which can be used in formulating preventive strategies. Few studies have evaluated recent long‐term trends and prevalence of modifiable RFs during a first AMI in young adults in China. In a retrospective analysis of patients with coronary artery disease aged ≤45 years conducted from 2010 to 2014, 6 the prevalence figures varied for the United States and German populations. 8 , 10 Contemporary data about trends in and prevalence of modifiable RFs, and subtypes of AMI are lacking in young patients. Using a retrospective analysis of hospital data from 2007 to 2017, we aimed to evaluate the trends in and prevalence of modifiable RFs and subtypes of MI during the first AMI in young Chinese men. This evidence will provide a reference point for the development of preventive strategies in this population.
METHODS
Participants This study was based on a retrospective, single‐center analysis of young men hospitalized for a first AMI. Clinical and demographic data were collected from Beijing Anzhen Hospital by trained abstractors, using physician notes, laboratory reports, patient histories, and discharge summaries from January 2007 through December 2017. Young men aged 18–44 years hospitalized for a first AMI were identified from 2007 to 2017; 136 patients were sampled at the end of 2007 and a total of 2739 patients were recruited at the end of 2017 (Figure S1). The reference population for this analysis has previously been reported by Yandrapalli et al. Briefly, hospitalizations for a first AMI in young adults aged 18–44 years were identified from the US Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS) from January 2005 through September 2015. 10 Hospitalizations for AMI were determined according to the fourth universal definition of MI. 13 AMI cases were first identified by excluding cases with secondary diagnoses of prior MI, prior percutaneous coronary intervention, prior coronary artery bypass grafting, post‐AMI syndrome, chronic ischemic heart disease, heart transplant recipient, and coronary arterial disease of bypass grafts or in transplanted hearts. Cases with a history of heart failure (HF), arteritis, congenital heart disease, and cancer were excluded. Those without coronary angiography and those with missing values for laboratory reports were excluded. Measurements and diagnostic criteria Primary outcomes of interest were the overall prevalence of the RFs and their respective temporal trends and subtypes of AMI. 
AMI was defined based on established criteria, which included 13: detection of a rise and/or fall of cardiac biomarker values (preferably cardiac troponin [cTn]) with at least one value above the 99th percentile upper reference limit (URL) and with at least one of the following: (1) ischemia; (2) new or presumed new significant ST‐segment–T wave (ST–T) changes or new left bundle branch block; (3) development of pathological Q waves in the ECG; (4) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality; and (5) detection of an intracoronary thrombus by angiography or autopsy. AMI was classified as ST‐segment elevation myocardial infarction (STEMI) and non‐STEMI (NSTEMI). Prevalent hypertension and diabetes were defined based on a documented history of hypertension and diabetes, respectively, in medical records. Hypercholesterolemia was defined as total cholesterol (TC) ≥ 5.2 mmol/L (200 mg/dl) or low‐density lipoprotein cholesterol (LDL‐C) ≥ 3.4 mmol/L (130 mg/dl). Obesity was defined as a body mass index (BMI) ≥ 28 kg/m2. Smokers were defined as those who reported smoking cigarettes for >6 months. When the conventional RFs were compared with those of the US population, dyslipidemia and obesity were defined according to criteria in the report by Yandrapalli et al. 10
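As a minimal illustration of how the definitions above translate into a per-patient risk-factor count, the sketch below applies the stated thresholds (TC ≥ 5.2 mmol/L or LDL-C ≥ 3.4 mmol/L for hypercholesterolemia, BMI ≥ 28 kg/m2 for obesity, self-reported smoking for >6 months); the record field names are assumptions for illustration, not the study's actual data dictionary.

def classify_risk_factors(patient):
    """Return the five conventional risk factors and their count for one record."""
    rfs = {
        "hypertension": bool(patient.get("history_hypertension", False)),
        "diabetes": bool(patient.get("history_diabetes", False)),
        "hypercholesterolemia": (patient.get("tc_mmol_l", 0) >= 5.2
                                 or patient.get("ldl_c_mmol_l", 0) >= 3.4),
        "obesity": patient.get("bmi", 0) >= 28,
        "smoking": patient.get("smoking_months", 0) > 6,
    }
    return rfs, sum(rfs.values())

# Example: hypercholesterolemia, obesity, and smoking are flagged (3 risk factors).
print(classify_risk_factors({"tc_mmol_l": 5.6, "bmi": 29.1, "smoking_months": 120}))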
Statistical analysis Categorical variables were expressed as total numbers (proportions); differences in RF prevalence across age groups were compared using χ2 tests for categorical variables, and trends in the prevalence of RFs were analyzed using the linear‐by‐linear association test. Normally distributed continuous variables were presented as means ± standard deviations. A Student's t test was used to compare two independent samples for normally distributed continuous variables and a Mann–Whitney U test for continuous variables with skewed distributions. Odds ratios (ORs) with 95% confidence intervals (CIs) for associations were derived from multivariable logistic regression. All reported p values were two sided. Statistical analysis was performed using IBM SPSS Statistics version 25.0 (IBM).
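The same analyses can be sketched outside SPSS. The fragment below is an illustrative reimplementation (not the authors' code) of a chi-square comparison across groups and a multivariable logistic regression reporting odds ratios with 95% confidence intervals, run on made-up data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# Made-up records: AMI subtype (NSTEMI = 1, STEMI = 0), risk-factor count, multivessel disease.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "nstemi": rng.integers(0, 2, 500),
    "n_risk_factors": rng.integers(0, 5, 500),
    "multivessel": rng.integers(0, 2, 500),
})

# Chi-square test of association between risk-factor count and AMI subtype.
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["n_risk_factors"], df["nstemi"]))
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

# Multivariable logistic regression: exponentiated coefficients give ORs with 95% CIs.
X = sm.add_constant(df[["n_risk_factors", "multivessel"]])
fit = sm.Logit(df["nstemi"], X).fit(disp=0)
summary = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
summary.columns = ["OR", "2.5%", "97.5%"]
print(summary)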
RESULTS
Prevalence of conventional cardiovascular RFs and MI subtypes across different groups In this study, 2739 cases of first AMI hospitalizations in young men were enrolled from January 1, 2007, to December 31, 2017. Of the total, 462 (16.9%) patients were <35 years, 761 (27.8%) were 35–39 years and 1516 (55.3%) were 40–44 years. The most frequent presentation of AMI was STEMI (77.5%) and over 80% underwent revascularization. Baseline characteristics of the overall sample and by age subgroups are presented in Table 1. The most prevalent RFs were smoking (75.5%), followed by hypertension (40.6%) and then obesity (38.3%). The proportion of patients without any of the five conventional CVD RFs was only 5.8% and 71.7% of patients had at least 2 RFs. Significant differences were noted for the prevalence of obesity, hypertension, diabetes, and hypercholesterolemia across age groups. Patients aged <35 years had a higher prevalence of hypercholesterolemia and obesity, with lower values for hypertension; the proportion of patients with 2 or 3 RFs in this age group was also higher than that for the other age groups. Baseline characteristics of the study population Note: Data are presented as mean ± standard deviation or n (%). Abbreviations: AMI, acute myocardial infarction; NSTEMI, non‐ST‐segment elevation myocardial infarction; STEMI, ST‐segment elevation myocardial infarction. Prevalence of conventional CVD RFs compared with a reference population We compared the prevalence of classic modifiable cardiovascular RFs in the study population with that of the reference population. Compared with the reference population, young Chinese men had a higher prevalence of smoking (75.5% vs. 58.1%; rate difference 17.4%) and dyslipidemia (65.0% vs. 54.6%; rate difference 10.4%), with lower prevalence of obesity (13.1% vs. 18.6%; rate difference 19.7%), hypertension (40.6% vs. 49.3%; rate difference 8.7%), and diabetes (14.9% vs. 19.9%; rate difference 5.0%). The three leading conventional RFs in the reference population were smoking, dyslipidemia, and hypertension, and this was similar for the target population (Table 2). 
Prevalence of conventional cardiovascular risk factors in the study population compared with the reference population Note: Data are presented as mean ± standard deviation or n (%). Abbreviation: BMI, body mass index. Conventional cardiovascular RFs and other clinical characteristics across AMI subtypes Compared to STEMI patients, NSTEMI patients had a higher prevalence of all individual conventional RFs except for smoking, which was similar in prevalence. The proportion of patients who had at least 3 RFs was 39.6% in NSTEMI patients, which was higher than that in STEMI patients (27.5%). The coronary artery involved most frequently was the left anterior descending coronary artery (LAD) in both groups, but this was higher in STEMI patients. Patients with multivessel coronary disease and those without significant coronary stenosis were more frequent in the NSTEMI group (Table 3). Conventional risk factors and clinical characteristics according to AMI subtype Note: Data are presented as n (%). 
Abbreviations: EF, ejection fraction; LAD, left anterior descending coronary artery; LCX, left circumflex coronary artery; NSTEMI, non‐ST elevation acute coronary syndrome; RCA, right coronary artery; STEMI, ST‐segment elevation myocardial infarction. We also conducted logistic regression analyses to evaluate the relationships between AMI subtype and conventional RFs. Multiple conventional RFs significantly increased the risk of NSTEMI; the OR was 1.69 (95% confidence interval [CI]: 1.11–2.57) for patients with three RFs and 2.50 (95% CI: 1.55–4.03) for patients with at least 4 RFs. Patients with multivessel coronary disease (OR = 1.32, 95% CI: 1.08–1.60) and patients without significant coronary stenosis or normal patients (OR = 2.01, 95% CI: 1.47–2.76) had an increased risk of NSTEMI compared with those with single-vessel disease after adjusting for the other covariates (Table 4). Relationship between AMI subtype and conventional RFs and coronary artery disease Trends of conventional RFs and AMI subtypes The trends in the prevalence of conventional RFs are shown in Figure 1A. Compared with 2007, the prevalence of hypertension increased in 2017 (p trend = 0.004), that for smoking decreased gradually (p trend <0.05), and those for diabetes, hypercholesterolemia, and obesity remained unchanged (p trend >0.05). 
Between 2007 and 2017, the prevalence of hypertension increased from 26.5% to 44.2% (rate difference 17.7%) and that for smoking decreased from 77.9% to 72.4% (rate difference 5.5%). The prevalence of hypertension increased between 2007 and 2011 and then changed little. The greatest relative increase in prevalence between 2007 and 2009 was observed for hypercholesterolemia (from 28.7% to 35.5%), which then decreased in 2010 (from 35.5% to 20.5%) and changed little thereafter. There were only small shifts for diabetes and obesity (diabetes, 9.6% in 2007 and 14.8% in 2017, p = .283, rate difference 5.2%; obesity, 36.8% in 2007, and 40.7% in 2017, p = .105, rate difference 3.9%). Trends in the percentage of five conventional risk factors and acute myocardial infarction (AMI) subtype during a first acute myocardial infarction in young men 18–44 years old between 2007 and 2017. (A) Trends in the percentage of conventional risk factors; an increasing prevalence was noted for hypertension (p trend = 0.004). Declines in the rates of current smoking and drinking are apparent. The prevalence of diabetes, hypercholesterolemia, and obesity was similar over time. (B) Trends in the percentage of AMI subtypes; an increasing percentage was noted for NSTEMI (p trend <0.001). The most frequent presentation of AMI was STEMI, but the proportion decreased gradually from 90.4% to 63.2% (p trend <0.001), then stabilized at around 65% in recent years (Figure 1B). The mean SBP and BMI increased over the three periods; the mean SBP increased from 119.4 mmHg to 123.3 mmHg (mean difference 3.9 mmHg), with BMI increasing from 26.5 kg/m2 to 27.6 kg/m2 (mean difference 1.2 kg/m2). The mean TC decreased from 4.62 mmol/L to 4.49 mmol/L and LDL‐C decreased from 2.97 mmol/L to 2.80 mmol/L. No differences were seen in mean DBP and FPG over the three periods (Table S1).
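The year-by-year p-trend values above come from the linear-by-linear association test. A minimal sketch of that statistic (M2 = (N − 1)·r² with 1 degree of freedom, where r is the Pearson correlation between row and column scores, as reported by SPSS) is shown below on invented counts; the data are placeholders, not the study's figures.

import numpy as np
from scipy.stats import chi2

def linear_by_linear(counts, row_scores, col_scores):
    """Return (M2, p) for an r x c table of counts with given ordinal scores."""
    n = counts.sum()
    r_idx, c_idx = np.indices(counts.shape)
    weights = counts.ravel()
    x = np.repeat(row_scores[r_idx.ravel()], weights)   # row score per observation
    y = np.repeat(col_scores[c_idx.ravel()], weights)   # column score per observation
    r = np.corrcoef(x, y)[0, 1]
    m2 = (n - 1) * r ** 2
    return m2, chi2.sf(m2, df=1)

# Invented counts: rows = four successive years, columns = risk factor absent/present.
toy = np.array([[100, 36], [110, 50], [120, 70], [115, 80]])
m2, p = linear_by_linear(toy, np.arange(4), np.array([0, 1]))
print(f"M2 = {m2:.2f}, p trend = {p:.4f}")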
CONCLUSIONS
In conclusion, the characteristics of conventional RFs among young Chinese men hospitalized with a first MI have important healthcare implications with regard to the planning of appropriate primary preventive strategies. The three leading cardiovascular RFs (smoking, hypertension, and obesity) should be the target of intervention and treatment strategies aimed at reducing the incidence of MI in young adults. Furthermore, preventing hypercholesterolemia in patients aged 34 years or younger should also be a primary focus. Young adults with two or more conventional RFs should be given considerable attention even if they are classified as being at low or intermediate risk by traditional CHD risk prediction scores.
[ "INTRODUCTION", "Participants", "Measurements and diagnostic criteria", "Statistical analysis", "Prevalence of conventional cardiovascular RFs and MI subtypes across different groups", "Prevalence of conventional CVD RFs compared with a reference population", "Conventional cardiovascular RFs and other clinical characteristics across AMI subtypes", "Trends of conventional RFs and AMI subtypes", "CONTROL OF CONVENTIONAL RFS", "LIMITATIONS" ]
[ "The primary and secondary prevention of coronary heart disease (CHD) in young adults has garnered tremendous attention given the rapid increase in the incidence of acute coronary events and hospitalization rates, especially in young men. Evidence from observational epidemiological studies showed the incidence of acute coronary events increased by 37.4% in the year 2009 compared to 2007 in young adults aged 35–39 years, making it the largest increase for this age group.\n1\n Hospitalization rates for acute myocardial infarction (AMI) per 100 000 population experienced the most significant increase in young men (<55 years), by 45.8% from 2007 to 2012 in Beijing;\n2\n the proportion of young adults hospitalized for CHD was nearly 90% in men from 2013 to 2014 in Beijing.\n3\n This trend parallels an increase in cardiovascular risk factors (RFs) including smoking, hypertension, diabetes, obesity, and dyslipidemia in the general Chinese population as well as an increase in hospitalizations for AMI.\n2\n, \n4\n, \n5\n, \n6\n\n\nThe Prospective Urban Rural Epidemiology (PURE) study showed that approximately 70% of cardiovascular disease (CVD) cases were attributed to modifiable RFs.\n7\n Though several studies have evaluated the prevalence of these RFs during a first or any episode of AMI and have found a high prevalence of at least 1 RF (approximately 90%),\n8\n, \n9\n, \n10\n most patients were classified as being at low or intermediate risk by traditional CHD risk prediction scores.\n11\n, \n12\n Unfortunately, this does not aid in the development of appropriate primary preventive strategies to decrease the risk of CHD. The prevalence of conventional RFs, other clinical characteristics, and their trends need to be clarified, which can be used in formulating preventive strategies.\nFew studies have evaluated recent long‐term trends and prevalence of modifiable RFs during a first AMI in young adults in China. In a retrospective analysis of patients with coronary artery disease aged ≤45 years conducted from 2010 to 2014,\n6\n the prevalence figures varied for the United States and German populations.\n8\n, \n10\n Contemporary data about trends in and prevalence of modifiable RFs, and subtypes of AMI are lacking in young patients. Using a retrospective analysis of hospital data from 2007 to 2017, we aimed to evaluate the trends in and prevalence of modifiable RFs and subtypes of MI during the first AMI in young Chinese men. This evidence will provide a reference point for the development of preventive strategies in this population.", "This study was based on a retrospective, single‐center analysis of young men hospitalized for a first AMI. Clinical and demographic data were collected from Beijing Anzhen Hospital by trained abstractors, using physician notes, laboratory reports, patient histories, and discharge summaries from January 2007 through December 2017. Young men aged 18–44 years hospitalized for a first AMI were identified from 2007 to 2017; 136 patients were sampled at the end of 2007 and a total of 2739 patients were recruited at the end of 2017 (Figure S1). The reference population for this analysis has previously been reported by Yandrapalli et al. 
Briefly, hospitalizations for a first AMI in young adults aged 18–44 years were identified from the US Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS) from January 2005 through September 2015.\n10\n Hospitalizations for AMI were determined according to the fourth universal definition of MI.\n13\n AMI cases were first identified by excluding cases with secondary diagnoses of prior MI, prior percutaneous coronary intervention, prior coronary artery bypass grafting, post‐AMI syndrome, chronic ischemic heart disease, heart transplant recipient, and coronary arterial disease of bypass grafts or in transplanted hearts. Cases with a history of heart failure (HF), arteritis, congenital heart disease, and cancer were excluded. Those without coronary angiography and those with missing values for laboratory reports were excluded.", "Primary outcomes of interest were the overall prevalence of the RFs and their respective temporal trends and subtypes of AMI. AMI was defined based on established criteria, which included\n13\n: detection of a rise and/or fall of cardiac biomarker values (preferably cardiac troponin [cTn]) with at least one value above the 99th percentile upper reference limit (URL) and with at least one of the following: (1) ischemia; (2) new or presumed new significant ST‐segment–T wave (ST–T) changes or new left bundle branch block; (3) development of pathological Q waves in the ECG; (4) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality; and (5) detection of an intracoronary thrombus by angiography or autopsy. AMI was classified as ST‐segment elevation myocardial infarction (STEMI) and non‐STEMI (NSTEMI). Prevalent hypertension and diabetes were defined based on a documented history of hypertension and diabetes, respectively, in medical records. Hypercholesterolemia was defined as total cholesterol (TC) ≥ 5.2 mmol/L (200 mg/dl) or low‐density lipoprotein cholesterol (LDL‐C) ≥ 3.4 mmol/L (130 mg/dl). Obesity was defined as a body mass index (BMI) ≥ 28 (kg/m2). Smokers were defined as those who reported smoking cigarettes for >6 months. When the conventional RFs were compared with that of the US population, dyslipidemia and obesity were defined according to criteria in the report by Yandrapalli' et al.\n10\n\n", "Categorical variables were expressed as total numbers (proportions), differences in RF prevalence across age groups were compared using χ\n2 tests for categorical variables and trends in the prevalence of RFs were analyzed using linear‐by‐linear association. Normally distributed continuous variables were presented as means ± standard deviations. A student's t test was used to compare two independent samples for normally distributed continuous variables and a Mann–Whitney U test for continuous variables with skewed distributions. Odds ratios (ORs) with 95% confidence intervals (CIs) for associations were derived from multivariable logistic regression. All reported p values were two sided. Statistical analysis was performed using IBM SPSS Statistics version 25.0 (IBM).", "In this study, 2739 cases of first AMI hospitalizations in young men were enrolled from January 1, 2007, to December 31, 2017. Of the total, 462 (16.9%) patients were <35 years, 761 (27.8%) were 35–39 years and 1516 (55.3%) were 40–44 years. The most frequent presentation of AMI was STEMI (77.5%) and over 80% underwent revascularization. Baseline characteristics of the overall sample and by age subgroups are presented in Table 1. 
The most prevalent RFs were smoking (75.5%), followed by hypertension (40.6%) and then obesity (38.3%). The proportion of patients without any of the five conventional CVD RFs was only 5.8% and 71.7% of patients had at least 2 RFs. Significant differences were noted for the prevalence of obesity, hypertension, diabetes, and hypercholesterolemia across age groups. Patients aged <35 years had a higher prevalence of hypercholesterolemia and obesity, with lower values for hypertension; the proportion of patients with 2 or 3 RFs in this age group was also higher than that for the other age groups.\nBaseline characteristics of the study population\n\nNote: Data are presented as mean ± standard deviation or n (%).\nAbbreviations: AMI, acute myocardial infarction; NSTEMI, non‐ST‐segment elevation myocardial infarction; STEMI, ST‐segment elevation myocardial infarction.", "We compared the prevalence of classic modifiable cardiovascular RFs in the study population with that of the reference population. Compared with the reference population, young Chinese men had a higher prevalence of smoking (75.5% vs. 58.1%; rate difference 17.4%) and dyslipidemia (65.0% vs. 54.6%; rate difference 10.4%), with lower prevalence of obesity (13.1% vs. 18.6%; rate difference 19.7%), hypertension (40.6% vs. 49.3%; rate difference 8.7%), and diabetes (14.9% vs. 19.9%; rate difference 5.0%). The three leading conventional RFs in the reference population were smoking, dyslipidemia, and hypertension, and this was similar for the target population (Table 2).\nPrevalence of conventional cardiovascular risk factors in the study population compared with the reference population\n\nNote: Data are presented as mean ± standard deviation or n (%).\nAbbreviation: BMI, body mass index.", "Compared to STEMI patients, NSTEMI patients had a high prevalence of all individual conventional RFs except for smoking, which was similar in prevalence. The proportion of patients who had at least 3 RFs was 39.6% in NSTEMI patients, which was higher than that in STEMI patients (27.5%). The coronary artery involved most frequently was left anterior descending coronary artery (LAD) in both groups, but this was higher in STEMI patients. Patients with multivessel coronary disease and those without significant coronary stenosis were higher in the NSTEMI group (Table 3).\nConventional risk factors and clinic characteristics according to AMI subtype\n\nNote: Data are presented as n (%).\nAbbreviations: EF, ejection fraction; LAD, left anterior descending coronary artery; LCX, left circumflex coronary artery; NSTEMI, non‐ST elevation acute coronary syndrome; RCA, right coronary artery; STEMI, ST‐segment elevation myocardial infarction.\nWe also conducted logistic regression analyses to evaluate the relationships between AMI subtype and conventional RFs. Multiple conventional RFs significantly increased the risk of NSTEMI; the OR was 1.69 (95% confident interval [CI]: 1.11–2.57) for patients with three RFs and 2.50 (95% CI: 1.55–4.03) for patients with at least 4 RFs. Patients with multivessel coronary disease (OR = 1.32, 95% CI: 1.08–1.60) and patients without significant coronary stenosis or normal patients (OR = 2.01, 95% CI: 1.47–2.76) had an increased risk of NSTEMI compared with those with the single vessel disease after adjusting for the other covariates (Table 4).\nRelationship between AMI subtype and conventional RFs and coronary artery disease", "The trends in the prevalence of conventional RFs are shown in Figure 1A. 
Compared with 2007, the prevalence of hypertension increased in 2017 (p trend = 0.004) that for smoking decreased gradually (p trend <0.05) and those for diabetes, hypercholesterolemia, and obesity remained unchanged (p trend >0.05). Between 2007 and 2017, the prevalence of hypertension increased from 26.5% to 44.2% (rate difference 17.7%) and that for smoking decreased from 77.9% to 72.4% (rate difference 5.5%). The prevalence of hypertension increased through 2007 and 2011, and then maintained a little shift. The greatest relative increase in prevalence between 2007 and 2009 was observed for hypercholesterolemia (from 28.7% to 35.5%), decreased in 2010 (from 35.5% to 20.5%), and then maintained a little shift. There were little shifts for diabetes and obesity (diabetes, 9.6% in 2007 and 14.8% in 2017, p = .283, rate difference 5.2%; obesity, 36.8% in 2007, and 40.7% in 2017, p = .105, rate difference 3.9%).\nTrends in the percentage of five conventional risk factors and acute myocardial infarction (AMI) subtype during a first acute myocardial infarction in young men 18–44 years old between 2007 and 2017. (A) Trends in the percentage of conventional risk factors, the increasing prevalence was noted for hypertension (p trend = 0.004). Decline in the rates of current smoking and drinking are appreciated. There was a similar prevalence for diabetes and hypercholesterolemia and obesity over time. (B) Trends in the percentage of AMI subtybe, the Increasing percentage was noted for NSTEMI(p trend <0.001)\nThe most frequent presentation of AMI was STEMI, but the proportion decreased gradually from 90.4% to 63.2% (p trend <0.001), then stabilized at around 65% during the current years (Figure 1B).\nThe mean SBP and BMI increased over the three periods; the mean SBP increased from 119.4 mmHg to 123.3 mmHg (mean difference 3.9 mmHg), with BMI increasing from 26.5 kg/m2 to 27.6 kg/m2 (mean difference 1.2 kg/m2). The mean TC decreased from 4.62 mmol/L to 4.49 mmol/L and LDL‐C decreased from 2.97 mmol/L to 2.80 mmol/L. No differences were seen in mean DBP and FPG over the three periods (Table S1).", "The control rate of hypertension and diabetes was 62.8% (699/1113) and 17.7% (72/407), respectively; smoking cessation was 4.8% (105/2173).", "There are some limitations that deserve consideration. This was a single‐center study and the data were collected from Beijing Anzhen hospital, which is well known for the management of CHD. Furthermore, some patients may have more severe symptoms. Hence, the results may not be extrapolated to the entire nation. Data on conventional RFs were collected from medical records; therefore, health behaviors such as physical activity, sleep duration, emotion, and stress could not be included. The sample for the annual number of AMI hospitalizations was not large enough, which may have influenced the comparisons of conventional RFs and trends in RFs over time with the reference US population." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Participants", "Measurements and diagnostic criteria", "Statistical analysis", "RESULTS", "Prevalence of conventional cardiovascular RFs and MI subtypes across different groups", "Prevalence of conventional CVD RFs compared with a reference population", "Conventional cardiovascular RFs and other clinical characteristics across AMI subtypes", "Trends of conventional RFs and AMI subtypes", "CONTROL OF CONVENTIONAL RFS", "DISCUSSION", "LIMITATIONS", "CONCLUSIONS", "CONFLICT OF INTERESTS", "", "Supporting information" ]
[ "The primary and secondary prevention of coronary heart disease (CHD) in young adults has garnered tremendous attention given the rapid increase in the incidence of acute coronary events and hospitalization rates, especially in young men. Evidence from observational epidemiological studies showed the incidence of acute coronary events increased by 37.4% in the year 2009 compared to 2007 in young adults aged 35–39 years, making it the largest increase for this age group.\n1\n Hospitalization rates for acute myocardial infarction (AMI) per 100 000 population experienced the most significant increase in young men (<55 years), by 45.8% from 2007 to 2012 in Beijing;\n2\n the proportion of young adults hospitalized for CHD was nearly 90% in men from 2013 to 2014 in Beijing.\n3\n This trend parallels an increase in cardiovascular risk factors (RFs) including smoking, hypertension, diabetes, obesity, and dyslipidemia in the general Chinese population as well as an increase in hospitalizations for AMI.\n2\n, \n4\n, \n5\n, \n6\n\n\nThe Prospective Urban Rural Epidemiology (PURE) study showed that approximately 70% of cardiovascular disease (CVD) cases were attributed to modifiable RFs.\n7\n Though several studies have evaluated the prevalence of these RFs during a first or any episode of AMI and have found a high prevalence of at least 1 RF (approximately 90%),\n8\n, \n9\n, \n10\n most patients were classified as being at low or intermediate risk by traditional CHD risk prediction scores.\n11\n, \n12\n Unfortunately, this does not aid in the development of appropriate primary preventive strategies to decrease the risk of CHD. The prevalence of conventional RFs, other clinical characteristics, and their trends need to be clarified, which can be used in formulating preventive strategies.\nFew studies have evaluated recent long‐term trends and prevalence of modifiable RFs during a first AMI in young adults in China. In a retrospective analysis of patients with coronary artery disease aged ≤45 years conducted from 2010 to 2014,\n6\n the prevalence figures varied for the United States and German populations.\n8\n, \n10\n Contemporary data about trends in and prevalence of modifiable RFs, and subtypes of AMI are lacking in young patients. Using a retrospective analysis of hospital data from 2007 to 2017, we aimed to evaluate the trends in and prevalence of modifiable RFs and subtypes of MI during the first AMI in young Chinese men. This evidence will provide a reference point for the development of preventive strategies in this population.", "Participants This study was based on a retrospective, single‐center analysis of young men hospitalized for a first AMI. Clinical and demographic data were collected from Beijing Anzhen Hospital by trained abstractors, using physician notes, laboratory reports, patient histories, and discharge summaries from January 2007 through December 2017. Young men aged 18–44 years hospitalized for a first AMI were identified from 2007 to 2017; 136 patients were sampled at the end of 2007 and a total of 2739 patients were recruited at the end of 2017 (Figure S1). The reference population for this analysis has previously been reported by Yandrapalli et al. 
Briefly, hospitalizations for a first AMI in young adults aged 18–44 years were identified from the US Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS) from January 2005 through September 2015.\n10\n Hospitalizations for AMI were determined according to the fourth universal definition of MI.\n13\n AMI cases were first identified by excluding cases with secondary diagnoses of prior MI, prior percutaneous coronary intervention, prior coronary artery bypass grafting, post‐AMI syndrome, chronic ischemic heart disease, heart transplant recipient, and coronary arterial disease of bypass grafts or in transplanted hearts. Cases with a history of heart failure (HF), arteritis, congenital heart disease, and cancer were excluded. Those without coronary angiography and those with missing values for laboratory reports were excluded.\nThis study was based on a retrospective, single‐center analysis of young men hospitalized for a first AMI. Clinical and demographic data were collected from Beijing Anzhen Hospital by trained abstractors, using physician notes, laboratory reports, patient histories, and discharge summaries from January 2007 through December 2017. Young men aged 18–44 years hospitalized for a first AMI were identified from 2007 to 2017; 136 patients were sampled at the end of 2007 and a total of 2739 patients were recruited at the end of 2017 (Figure S1). The reference population for this analysis has previously been reported by Yandrapalli et al. Briefly, hospitalizations for a first AMI in young adults aged 18–44 years were identified from the US Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS) from January 2005 through September 2015.\n10\n Hospitalizations for AMI were determined according to the fourth universal definition of MI.\n13\n AMI cases were first identified by excluding cases with secondary diagnoses of prior MI, prior percutaneous coronary intervention, prior coronary artery bypass grafting, post‐AMI syndrome, chronic ischemic heart disease, heart transplant recipient, and coronary arterial disease of bypass grafts or in transplanted hearts. Cases with a history of heart failure (HF), arteritis, congenital heart disease, and cancer were excluded. Those without coronary angiography and those with missing values for laboratory reports were excluded.\nMeasurements and diagnostic criteria Primary outcomes of interest were the overall prevalence of the RFs and their respective temporal trends and subtypes of AMI. AMI was defined based on established criteria, which included\n13\n: detection of a rise and/or fall of cardiac biomarker values (preferably cardiac troponin [cTn]) with at least one value above the 99th percentile upper reference limit (URL) and with at least one of the following: (1) ischemia; (2) new or presumed new significant ST‐segment–T wave (ST–T) changes or new left bundle branch block; (3) development of pathological Q waves in the ECG; (4) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality; and (5) detection of an intracoronary thrombus by angiography or autopsy. AMI was classified as ST‐segment elevation myocardial infarction (STEMI) and non‐STEMI (NSTEMI). Prevalent hypertension and diabetes were defined based on a documented history of hypertension and diabetes, respectively, in medical records. Hypercholesterolemia was defined as total cholesterol (TC) ≥ 5.2 mmol/L (200 mg/dl) or low‐density lipoprotein cholesterol (LDL‐C) ≥ 3.4 mmol/L (130 mg/dl). 
Obesity was defined as a body mass index (BMI) ≥ 28 (kg/m2). Smokers were defined as those who reported smoking cigarettes for >6 months. When the conventional RFs were compared with that of the US population, dyslipidemia and obesity were defined according to criteria in the report by Yandrapalli' et al.\n10\n\n\nPrimary outcomes of interest were the overall prevalence of the RFs and their respective temporal trends and subtypes of AMI. AMI was defined based on established criteria, which included\n13\n: detection of a rise and/or fall of cardiac biomarker values (preferably cardiac troponin [cTn]) with at least one value above the 99th percentile upper reference limit (URL) and with at least one of the following: (1) ischemia; (2) new or presumed new significant ST‐segment–T wave (ST–T) changes or new left bundle branch block; (3) development of pathological Q waves in the ECG; (4) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality; and (5) detection of an intracoronary thrombus by angiography or autopsy. AMI was classified as ST‐segment elevation myocardial infarction (STEMI) and non‐STEMI (NSTEMI). Prevalent hypertension and diabetes were defined based on a documented history of hypertension and diabetes, respectively, in medical records. Hypercholesterolemia was defined as total cholesterol (TC) ≥ 5.2 mmol/L (200 mg/dl) or low‐density lipoprotein cholesterol (LDL‐C) ≥ 3.4 mmol/L (130 mg/dl). Obesity was defined as a body mass index (BMI) ≥ 28 (kg/m2). Smokers were defined as those who reported smoking cigarettes for >6 months. When the conventional RFs were compared with that of the US population, dyslipidemia and obesity were defined according to criteria in the report by Yandrapalli' et al.\n10\n\n\nStatistical analysis Categorical variables were expressed as total numbers (proportions), differences in RF prevalence across age groups were compared using χ\n2 tests for categorical variables and trends in the prevalence of RFs were analyzed using linear‐by‐linear association. Normally distributed continuous variables were presented as means ± standard deviations. A student's t test was used to compare two independent samples for normally distributed continuous variables and a Mann–Whitney U test for continuous variables with skewed distributions. Odds ratios (ORs) with 95% confidence intervals (CIs) for associations were derived from multivariable logistic regression. All reported p values were two sided. Statistical analysis was performed using IBM SPSS Statistics version 25.0 (IBM).\nCategorical variables were expressed as total numbers (proportions), differences in RF prevalence across age groups were compared using χ\n2 tests for categorical variables and trends in the prevalence of RFs were analyzed using linear‐by‐linear association. Normally distributed continuous variables were presented as means ± standard deviations. A student's t test was used to compare two independent samples for normally distributed continuous variables and a Mann–Whitney U test for continuous variables with skewed distributions. Odds ratios (ORs) with 95% confidence intervals (CIs) for associations were derived from multivariable logistic regression. All reported p values were two sided. Statistical analysis was performed using IBM SPSS Statistics version 25.0 (IBM).", "This study was based on a retrospective, single‐center analysis of young men hospitalized for a first AMI. 
Clinical and demographic data were collected from Beijing Anzhen Hospital by trained abstractors, using physician notes, laboratory reports, patient histories, and discharge summaries from January 2007 through December 2017. Young men aged 18–44 years hospitalized for a first AMI were identified from 2007 to 2017; 136 patients were sampled at the end of 2007 and a total of 2739 patients were recruited at the end of 2017 (Figure S1). The reference population for this analysis has previously been reported by Yandrapalli et al. Briefly, hospitalizations for a first AMI in young adults aged 18–44 years were identified from the US Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS) from January 2005 through September 2015.\n10\n Hospitalizations for AMI were determined according to the fourth universal definition of MI.\n13\n AMI cases were first identified by excluding cases with secondary diagnoses of prior MI, prior percutaneous coronary intervention, prior coronary artery bypass grafting, post‐AMI syndrome, chronic ischemic heart disease, heart transplant recipient, and coronary arterial disease of bypass grafts or in transplanted hearts. Cases with a history of heart failure (HF), arteritis, congenital heart disease, and cancer were excluded. Those without coronary angiography and those with missing values for laboratory reports were excluded.", "Primary outcomes of interest were the overall prevalence of the RFs and their respective temporal trends and subtypes of AMI. AMI was defined based on established criteria, which included\n13\n: detection of a rise and/or fall of cardiac biomarker values (preferably cardiac troponin [cTn]) with at least one value above the 99th percentile upper reference limit (URL) and with at least one of the following: (1) ischemia; (2) new or presumed new significant ST‐segment–T wave (ST–T) changes or new left bundle branch block; (3) development of pathological Q waves in the ECG; (4) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality; and (5) detection of an intracoronary thrombus by angiography or autopsy. AMI was classified as ST‐segment elevation myocardial infarction (STEMI) and non‐STEMI (NSTEMI). Prevalent hypertension and diabetes were defined based on a documented history of hypertension and diabetes, respectively, in medical records. Hypercholesterolemia was defined as total cholesterol (TC) ≥ 5.2 mmol/L (200 mg/dl) or low‐density lipoprotein cholesterol (LDL‐C) ≥ 3.4 mmol/L (130 mg/dl). Obesity was defined as a body mass index (BMI) ≥ 28 (kg/m2). Smokers were defined as those who reported smoking cigarettes for >6 months. When the conventional RFs were compared with that of the US population, dyslipidemia and obesity were defined according to criteria in the report by Yandrapalli' et al.\n10\n\n", "Categorical variables were expressed as total numbers (proportions), differences in RF prevalence across age groups were compared using χ\n2 tests for categorical variables and trends in the prevalence of RFs were analyzed using linear‐by‐linear association. Normally distributed continuous variables were presented as means ± standard deviations. A student's t test was used to compare two independent samples for normally distributed continuous variables and a Mann–Whitney U test for continuous variables with skewed distributions. Odds ratios (ORs) with 95% confidence intervals (CIs) for associations were derived from multivariable logistic regression. All reported p values were two sided. 
RESULTS

Prevalence of conventional cardiovascular RFs and MI subtypes across different groups
In this study, 2739 cases of first AMI hospitalizations in young men were enrolled from January 1, 2007, to December 31, 2017. Of the total, 462 (16.9%) patients were <35 years, 761 (27.8%) were 35–39 years, and 1516 (55.3%) were 40–44 years. The most frequent presentation of AMI was STEMI (77.5%), and over 80% of patients underwent revascularization. Baseline characteristics of the overall sample and by age subgroup are presented in Table 1. The most prevalent RFs were smoking (75.5%), followed by hypertension (40.6%) and obesity (38.3%). The proportion of patients without any of the five conventional CVD RFs was only 5.8%, and 71.7% of patients had at least 2 RFs. Significant differences were noted in the prevalence of obesity, hypertension, diabetes, and hypercholesterolemia across age groups. Patients aged <35 years had a higher prevalence of hypercholesterolemia and obesity, with lower values for hypertension; the proportion of patients with 2 or 3 RFs in this age group was also higher than in the other age groups.
Table 1. Baseline characteristics of the study population. Note: Data are presented as mean ± standard deviation or n (%). Abbreviations: AMI, acute myocardial infarction; NSTEMI, non‐ST‐segment elevation myocardial infarction; STEMI, ST‐segment elevation myocardial infarction.
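A short sketch of how the Table 1-style summaries could be reproduced is given below. It is an assumed workflow, not the authors' code; the column names (age_group and the 0/1 risk-factor flags) are hypothetical.

```python
# Sketch only: prevalence of each conventional RF by age group, and a chi-square
# test of whether a given RF differs across the three age groups (<35, 35-39, 40-44).
import pandas as pd
from scipy.stats import chi2_contingency

RFS = ["smoking", "hypertension", "obesity", "diabetes", "hypercholesterolemia"]


def prevalence_by_age_group(df: pd.DataFrame) -> pd.DataFrame:
    # Percentage of patients with each RF within each age group.
    return df.groupby("age_group")[RFS].mean().mul(100).round(1)


def compare_rf_across_groups(df: pd.DataFrame, rf: str) -> tuple[float, float]:
    # Chi-square test of independence on the age-group x RF contingency table.
    table = pd.crosstab(df["age_group"], df[rf])
    chi2_stat, p_value, _dof, _expected = chi2_contingency(table)
    return chi2_stat, p_value
```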
Prevalence of conventional CVD RFs compared with a reference population
We compared the prevalence of classic modifiable cardiovascular RFs in the study population with that of the reference population. Compared with the reference population, young Chinese men had a higher prevalence of smoking (75.5% vs. 58.1%; rate difference 17.4%) and dyslipidemia (65.0% vs. 54.6%; rate difference 10.4%), with a lower prevalence of obesity (13.1% vs. 18.6%; rate difference 19.7%), hypertension (40.6% vs. 49.3%; rate difference 8.7%), and diabetes (14.9% vs. 19.9%; rate difference 5.0%). The three leading conventional RFs in the reference population were smoking, dyslipidemia, and hypertension, and the same was true for the target population (Table 2).
Table 2. Prevalence of conventional cardiovascular risk factors in the study population compared with the reference population. Note: Data are presented as mean ± standard deviation or n (%). Abbreviation: BMI, body mass index.

Conventional cardiovascular RFs and other clinical characteristics across AMI subtypes
Compared with STEMI patients, NSTEMI patients had a higher prevalence of every individual conventional RF except smoking, whose prevalence was similar in the two groups. The proportion of patients with at least 3 RFs was 39.6% among NSTEMI patients, higher than among STEMI patients (27.5%). The coronary artery most frequently involved was the left anterior descending coronary artery (LAD) in both groups, although LAD involvement was more common in STEMI patients. Multivessel coronary disease and the absence of significant coronary stenosis were both more frequent in the NSTEMI group (Table 3).
Table 3. Conventional risk factors and clinical characteristics according to AMI subtype. Note: Data are presented as n (%). Abbreviations: EF, ejection fraction; LAD, left anterior descending coronary artery; LCX, left circumflex coronary artery; NSTEMI, non‐ST‐segment elevation myocardial infarction; RCA, right coronary artery; STEMI, ST‐segment elevation myocardial infarction.
We also conducted logistic regression analyses to evaluate the relationships between AMI subtype and conventional RFs. Multiple conventional RFs significantly increased the risk of NSTEMI; the OR was 1.69 (95% confidence interval [CI]: 1.11–2.57) for patients with three RFs and 2.50 (95% CI: 1.55–4.03) for patients with at least 4 RFs. Patients with multivessel coronary disease (OR = 1.32, 95% CI: 1.08–1.60) and patients without significant coronary stenosis or with normal coronary arteries (OR = 2.01, 95% CI: 1.47–2.76) had an increased risk of NSTEMI compared with those with single‐vessel disease after adjusting for the other covariates (Table 4).
Table 4. Relationship between AMI subtype, conventional RFs, and coronary artery disease.
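The adjusted odds ratios above could be derived along the lines of the sketch below. This is an assumed implementation, not the authors' SPSS workflow; the variable names (nstemi, rf_count_cat, vessel_cat, age_group) are hypothetical labels for the outcome and the categorical covariates.

```python
# Sketch only: multivariable logistic regression for NSTEMI (vs. STEMI) with
# odds ratios and 95% confidence intervals obtained by exponentiating coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def nstemi_odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    # nstemi: 1 = NSTEMI, 0 = STEMI; C(...) dummy-codes the categorical covariates.
    model = smf.logit(
        "nstemi ~ C(rf_count_cat) + C(vessel_cat) + C(age_group)", data=df
    ).fit(disp=False)
    table = pd.concat([model.params, model.conf_int()], axis=1)
    table.columns = ["log_or", "ci_low", "ci_high"]
    # Exponentiate log-odds to obtain ORs and their 95% CIs (intercept row included).
    return np.exp(table).round(2)
```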
Trends of conventional RFs and AMI subtypes
The trends in the prevalence of conventional RFs are shown in Figure 1A. Compared with 2007, the prevalence of hypertension had increased by 2017 (p trend = 0.004), the prevalence of smoking decreased gradually (p trend < 0.05), and the prevalences of diabetes, hypercholesterolemia, and obesity remained essentially unchanged (p trend > 0.05). Between 2007 and 2017, the prevalence of hypertension increased from 26.5% to 44.2% (rate difference 17.7%) and that of smoking decreased from 77.9% to 72.4% (rate difference 5.5%). The prevalence of hypertension rose between 2007 and 2011 and changed little thereafter. The greatest relative increase was observed for hypercholesterolemia between 2007 and 2009 (from 28.7% to 35.5%); it then fell in 2010 (from 35.5% to 20.5%) and changed little thereafter. Diabetes and obesity showed only small shifts (diabetes, 9.6% in 2007 and 14.8% in 2017, p = .283, rate difference 5.2%; obesity, 36.8% in 2007 and 40.7% in 2017, p = .105, rate difference 3.9%).
Figure 1. Trends in the percentage of the five conventional risk factors and of acute myocardial infarction (AMI) subtypes during a first AMI in young men aged 18–44 years between 2007 and 2017. (A) Trends in the percentage of conventional risk factors: an increasing prevalence was noted for hypertension (p trend = 0.004), declines were seen in current smoking and drinking, and the prevalences of diabetes, hypercholesterolemia, and obesity were similar over time. (B) Trends in the percentage of AMI subtypes: an increasing percentage was noted for NSTEMI (p trend < 0.001).
The most frequent presentation of AMI was STEMI, but its proportion decreased gradually from 90.4% to 63.2% (p trend < 0.001) and then stabilized at around 65% in recent years (Figure 1B).
The mean SBP and BMI increased over the three periods: mean SBP increased from 119.4 mmHg to 123.3 mmHg (mean difference 3.9 mmHg) and BMI increased from 26.5 kg/m2 to 27.6 kg/m2 (mean difference 1.2 kg/m2). Mean TC decreased from 4.62 mmol/L to 4.49 mmol/L and LDL‐C from 2.97 mmol/L to 2.80 mmol/L. No differences were seen in mean DBP and FPG over the three periods (Table S1).
CONTROL OF CONVENTIONAL RFS
The control rates of hypertension and diabetes were 62.8% (699/1113) and 17.7% (72/407), respectively; the smoking cessation rate was 4.8% (105/2173).

DISCUSSION
In this study of young men with a first hospitalization for AMI in China, the most prevalent CVD RFs were smoking (75.5%), hypertension (40.6%), and obesity (38.3%), and over 70% of patients had at least two conventional RFs. Compared with a similar reference population, these young Chinese men had a higher prevalence of smoking and dyslipidemia and a lower prevalence of obesity, hypertension, and diabetes. Between 2007 and 2017, the prevalence of hypertension increased, whereas that of smoking decreased. Patients presenting with NSTEMI had a higher prevalence of all individual conventional RFs except smoking, which was more prevalent in STEMI patients. The LAD was the coronary artery most frequently involved in both subtypes of AMI, but LAD involvement was more common in STEMI patients, whereas multivessel coronary disease was more common in NSTEMI.
Conventional RFs were highly prevalent in young men aged 18–44 years hospitalized for a first AMI event.
The prevalence values were higher than the corresponding national figures: smoking (53.9% among young adults aged 25–44 years), hypertension (11.3% among urban adults and 10.0% among rural adults aged 18–44 years), obesity (11.0% among young adults aged 18–44 years), and diabetes (5.9% among young adults aged 18–44 years); the national prevalences of high TC (≥ 5.2 mmol/L) and high LDL‐C (≥ 3.4 mmol/L) were 28.5% and 26.3%, respectively.[14, 15, 16, 17] Our results are consistent with previous studies of young patients diagnosed with AMI and CHD, in which the prevalence of hypertension, diabetes, and smoking has been reported as 34.3%–41.7%, 11.1%–22.4%, and 57.4%–74.0%, respectively.[3, 6] However, these figures differ from those in the US report. Compared with the data from the US NIS for patients hospitalized for a first AMI from 2005 to 2015,[10] young Chinese men had a higher prevalence of smoking and dyslipidemia, with a lower prevalence of hypertension, obesity, and diabetes. There were clear age differences in the prevalence of some RFs: patients aged <35 years had a higher prevalence of smoking, obesity, and hypercholesterolemia, whereas patients aged 35–44 years had a higher prevalence of smoking, hypertension, and diabetes.
Our study also showed differences between the Chinese and US populations in the temporal trends of the conventional RFs. The prevalence of all the evaluated conventional RFs progressively increased between 2005 and 2015 in the United States,[10] whereas in China the prevalence of hypertension increased and that of smoking decreased gradually, consistent with a previous Chinese study. In a retrospective analysis of coronary artery disease patients aged ≤45 years conducted from 2010 to 2014, the prevalence of hypertension increased from 40.7% to 47.5%, diabetes from 20.3% to 26.1%, and hyperlipidemia from 27.3% to 35.7%, whereas the prevalence of smoking showed a downward trend.[6]
Approximately 77% of young Chinese men hospitalized with their first MI presented with STEMI, which is consistent with other studies conducted in young Chinese adults.[2, 18, 19] Previous population‐based studies have, however, reported NSTEMI to be the most common presentation[2, 19]; the proportion of patients presenting with NSTEMI increased from 11.6% to 36.2% in males and from 15.8% to 45.5% in females from 2007 through 2012,[2] which is quite different from the presentation of young adults in the US population.[10, 20, 21] Recent studies conducted in the United States have shown that AMI more frequently presented as NSTEMI, increasing from 54.6% to 67.9% in males and from 58.8% to 80.2% in females from 2000 through 2014.[21] A similar trend has been observed in Germany.[8] According to our findings, multivessel disease and clustering of conventional RFs are closely related to NSTEMI, which is consistent with the results of previous studies.[10, 22]
In this study, we focused on young adults at high risk of CVD in routine clinical practice, yet most such young adults are classified as low or intermediate risk by traditional CVD risk prediction scores that use lifetime risk estimates.[11, 12] This approach reduces the ability to identify RFs in young adults and leads to suboptimal use of preventive strategies. Furthermore, the majority of CHD events appear to occur in these "low" and "intermediate" risk groups.
Knowledge of the prevalence and trends of conventional RFs among young Chinese men hospitalized with a first MI has important healthcare implications for planning appropriate primary preventive strategies. Considerable attention should be paid to young men with at least two of the five conventional RFs (hypercholesterolemia, hypertension, diabetes, obesity, and smoking), and healthcare providers should focus more attention on the control of metabolic factors and encourage smoking cessation.

LIMITATIONS
Some limitations deserve consideration. This was a single‐center study, and the data were collected from Beijing Anzhen Hospital, which is well known for the management of CHD; some patients may therefore have had more severe symptoms, so the results may not be generalizable to the entire nation. Data on conventional RFs were collected from medical records, so health behaviors such as physical activity, sleep duration, emotion, and stress could not be included. The annual number of AMI hospitalizations was not large, which may have influenced the comparisons of conventional RFs, and of trends in RFs over time, with the reference US population.

CONCLUSIONS
In conclusion, the characteristics of conventional RFs among young Chinese men hospitalized with a first MI have important healthcare implications for planning appropriate primary preventive strategies. The three leading cardiovascular RFs (smoking, hypertension, and obesity) should be the target of intervention and treatment strategies aimed at reducing the incidence of MI in young adults. Preventing hypercholesterolemia in patients aged 34 years or younger should also be a primary focus. Young adults with two or more conventional RFs should be given considerable attention even if they are classified as being at low or intermediate risk by traditional CHD risk prediction scores.

CONFLICT OF INTERESTS
The authors declare that there are no conflicts of interest.

Supporting information
Supplementary information is available as additional data files.
[ null, "methods", null, null, null, "results", null, null, null, null, null, "discussion", null, "conclusions", "COI-statement", null, "supplementary-material" ]
[ "acute myocardial infarction", "risk factor", "trends", "youth" ]
INTRODUCTION
The primary and secondary prevention of coronary heart disease (CHD) in young adults has garnered considerable attention given the rapid increase in the incidence of acute coronary events and in hospitalization rates, especially in young men. Evidence from observational epidemiological studies showed that the incidence of acute coronary events increased by 37.4% in 2009 compared with 2007 in young adults aged 35–39 years, the largest increase for any age group.[1] Hospitalization rates for acute myocardial infarction (AMI) per 100 000 population increased most markedly in young men (<55 years), by 45.8% from 2007 to 2012 in Beijing,[2] and the proportion of young adults hospitalized for CHD was nearly 90% in men from 2013 to 2014 in Beijing.[3] This trend parallels an increase in cardiovascular risk factors (RFs), including smoking, hypertension, diabetes, obesity, and dyslipidemia, in the general Chinese population, as well as an increase in hospitalizations for AMI.[2, 4, 5, 6] The Prospective Urban Rural Epidemiology (PURE) study showed that approximately 70% of cardiovascular disease (CVD) cases were attributable to modifiable RFs.[7] Although several studies have evaluated the prevalence of these RFs during a first or any episode of AMI and have found a high prevalence of at least 1 RF (approximately 90%),[8, 9, 10] most patients were classified as being at low or intermediate risk by traditional CHD risk prediction scores.[11, 12] Unfortunately, this does not aid the development of appropriate primary preventive strategies to decrease the risk of CHD. The prevalence of conventional RFs, other clinical characteristics, and their trends therefore need to be clarified so that they can inform preventive strategies.
Few studies have evaluated recent long‐term trends in, and the prevalence of, modifiable RFs during a first AMI in young adults in China; one retrospective analysis covered patients with coronary artery disease aged ≤45 years from 2010 to 2014,[6] and the prevalence figures reported for the United States and German populations differ.[8, 10] Contemporary data on the trends in and prevalence of modifiable RFs, and on the subtypes of AMI, are lacking for young patients. Using a retrospective analysis of hospital data from 2007 to 2017, we aimed to evaluate the trends in and prevalence of modifiable RFs and the subtypes of MI during a first AMI in young Chinese men. This evidence will provide a reference point for the development of preventive strategies in this population.
13 AMI cases were first identified by excluding cases with secondary diagnoses of prior MI, prior percutaneous coronary intervention, prior coronary artery bypass grafting, post‐AMI syndrome, chronic ischemic heart disease, heart transplant recipient, and coronary arterial disease of bypass grafts or in transplanted hearts. Cases with a history of heart failure (HF), arteritis, congenital heart disease, and cancer were excluded. Those without coronary angiography and those with missing values for laboratory reports were excluded. This study was based on a retrospective, single‐center analysis of young men hospitalized for a first AMI. Clinical and demographic data were collected from Beijing Anzhen Hospital by trained abstractors, using physician notes, laboratory reports, patient histories, and discharge summaries from January 2007 through December 2017. Young men aged 18–44 years hospitalized for a first AMI were identified from 2007 to 2017; 136 patients were sampled at the end of 2007 and a total of 2739 patients were recruited at the end of 2017 (Figure S1). The reference population for this analysis has previously been reported by Yandrapalli et al. Briefly, hospitalizations for a first AMI in young adults aged 18–44 years were identified from the US Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS) from January 2005 through September 2015. 10 Hospitalizations for AMI were determined according to the fourth universal definition of MI. 13 AMI cases were first identified by excluding cases with secondary diagnoses of prior MI, prior percutaneous coronary intervention, prior coronary artery bypass grafting, post‐AMI syndrome, chronic ischemic heart disease, heart transplant recipient, and coronary arterial disease of bypass grafts or in transplanted hearts. Cases with a history of heart failure (HF), arteritis, congenital heart disease, and cancer were excluded. Those without coronary angiography and those with missing values for laboratory reports were excluded. Measurements and diagnostic criteria Primary outcomes of interest were the overall prevalence of the RFs and their respective temporal trends and subtypes of AMI. AMI was defined based on established criteria, which included 13 : detection of a rise and/or fall of cardiac biomarker values (preferably cardiac troponin [cTn]) with at least one value above the 99th percentile upper reference limit (URL) and with at least one of the following: (1) ischemia; (2) new or presumed new significant ST‐segment–T wave (ST–T) changes or new left bundle branch block; (3) development of pathological Q waves in the ECG; (4) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality; and (5) detection of an intracoronary thrombus by angiography or autopsy. AMI was classified as ST‐segment elevation myocardial infarction (STEMI) and non‐STEMI (NSTEMI). Prevalent hypertension and diabetes were defined based on a documented history of hypertension and diabetes, respectively, in medical records. Hypercholesterolemia was defined as total cholesterol (TC) ≥ 5.2 mmol/L (200 mg/dl) or low‐density lipoprotein cholesterol (LDL‐C) ≥ 3.4 mmol/L (130 mg/dl). Obesity was defined as a body mass index (BMI) ≥ 28 (kg/m2). Smokers were defined as those who reported smoking cigarettes for >6 months. When the conventional RFs were compared with that of the US population, dyslipidemia and obesity were defined according to criteria in the report by Yandrapalli' et al. 
10 Primary outcomes of interest were the overall prevalence of the RFs and their respective temporal trends and subtypes of AMI. AMI was defined based on established criteria, which included 13 : detection of a rise and/or fall of cardiac biomarker values (preferably cardiac troponin [cTn]) with at least one value above the 99th percentile upper reference limit (URL) and with at least one of the following: (1) ischemia; (2) new or presumed new significant ST‐segment–T wave (ST–T) changes or new left bundle branch block; (3) development of pathological Q waves in the ECG; (4) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality; and (5) detection of an intracoronary thrombus by angiography or autopsy. AMI was classified as ST‐segment elevation myocardial infarction (STEMI) and non‐STEMI (NSTEMI). Prevalent hypertension and diabetes were defined based on a documented history of hypertension and diabetes, respectively, in medical records. Hypercholesterolemia was defined as total cholesterol (TC) ≥ 5.2 mmol/L (200 mg/dl) or low‐density lipoprotein cholesterol (LDL‐C) ≥ 3.4 mmol/L (130 mg/dl). Obesity was defined as a body mass index (BMI) ≥ 28 (kg/m2). Smokers were defined as those who reported smoking cigarettes for >6 months. When the conventional RFs were compared with that of the US population, dyslipidemia and obesity were defined according to criteria in the report by Yandrapalli' et al. 10 Statistical analysis Categorical variables were expressed as total numbers (proportions), differences in RF prevalence across age groups were compared using χ 2 tests for categorical variables and trends in the prevalence of RFs were analyzed using linear‐by‐linear association. Normally distributed continuous variables were presented as means ± standard deviations. A student's t test was used to compare two independent samples for normally distributed continuous variables and a Mann–Whitney U test for continuous variables with skewed distributions. Odds ratios (ORs) with 95% confidence intervals (CIs) for associations were derived from multivariable logistic regression. All reported p values were two sided. Statistical analysis was performed using IBM SPSS Statistics version 25.0 (IBM). Categorical variables were expressed as total numbers (proportions), differences in RF prevalence across age groups were compared using χ 2 tests for categorical variables and trends in the prevalence of RFs were analyzed using linear‐by‐linear association. Normally distributed continuous variables were presented as means ± standard deviations. A student's t test was used to compare two independent samples for normally distributed continuous variables and a Mann–Whitney U test for continuous variables with skewed distributions. Odds ratios (ORs) with 95% confidence intervals (CIs) for associations were derived from multivariable logistic regression. All reported p values were two sided. Statistical analysis was performed using IBM SPSS Statistics version 25.0 (IBM). Participants: This study was based on a retrospective, single‐center analysis of young men hospitalized for a first AMI. Clinical and demographic data were collected from Beijing Anzhen Hospital by trained abstractors, using physician notes, laboratory reports, patient histories, and discharge summaries from January 2007 through December 2017. 
Young men aged 18–44 years hospitalized for a first AMI were identified from 2007 to 2017; 136 patients were sampled at the end of 2007 and a total of 2739 patients were recruited at the end of 2017 (Figure S1). The reference population for this analysis has previously been reported by Yandrapalli et al. Briefly, hospitalizations for a first AMI in young adults aged 18–44 years were identified from the US Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS) from January 2005 through September 2015. 10 Hospitalizations for AMI were determined according to the fourth universal definition of MI. 13 AMI cases were first identified by excluding cases with secondary diagnoses of prior MI, prior percutaneous coronary intervention, prior coronary artery bypass grafting, post‐AMI syndrome, chronic ischemic heart disease, heart transplant recipient, and coronary arterial disease of bypass grafts or in transplanted hearts. Cases with a history of heart failure (HF), arteritis, congenital heart disease, and cancer were excluded. Those without coronary angiography and those with missing values for laboratory reports were excluded. Measurements and diagnostic criteria: Primary outcomes of interest were the overall prevalence of the RFs and their respective temporal trends and subtypes of AMI. AMI was defined based on established criteria, which included 13 : detection of a rise and/or fall of cardiac biomarker values (preferably cardiac troponin [cTn]) with at least one value above the 99th percentile upper reference limit (URL) and with at least one of the following: (1) ischemia; (2) new or presumed new significant ST‐segment–T wave (ST–T) changes or new left bundle branch block; (3) development of pathological Q waves in the ECG; (4) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality; and (5) detection of an intracoronary thrombus by angiography or autopsy. AMI was classified as ST‐segment elevation myocardial infarction (STEMI) and non‐STEMI (NSTEMI). Prevalent hypertension and diabetes were defined based on a documented history of hypertension and diabetes, respectively, in medical records. Hypercholesterolemia was defined as total cholesterol (TC) ≥ 5.2 mmol/L (200 mg/dl) or low‐density lipoprotein cholesterol (LDL‐C) ≥ 3.4 mmol/L (130 mg/dl). Obesity was defined as a body mass index (BMI) ≥ 28 (kg/m2). Smokers were defined as those who reported smoking cigarettes for >6 months. When the conventional RFs were compared with that of the US population, dyslipidemia and obesity were defined according to criteria in the report by Yandrapalli' et al. 10 Statistical analysis: Categorical variables were expressed as total numbers (proportions), differences in RF prevalence across age groups were compared using χ 2 tests for categorical variables and trends in the prevalence of RFs were analyzed using linear‐by‐linear association. Normally distributed continuous variables were presented as means ± standard deviations. A student's t test was used to compare two independent samples for normally distributed continuous variables and a Mann–Whitney U test for continuous variables with skewed distributions. Odds ratios (ORs) with 95% confidence intervals (CIs) for associations were derived from multivariable logistic regression. All reported p values were two sided. Statistical analysis was performed using IBM SPSS Statistics version 25.0 (IBM). 
RESULTS: Prevalence of conventional cardiovascular RFs and MI subtypes across different groups In this study, 2739 cases of first AMI hospitalizations in young men were enrolled from January 1, 2007, to December 31, 2017. Of the total, 462 (16.9%) patients were <35 years, 761 (27.8%) were 35–39 years and 1516 (55.3%) were 40–44 years. The most frequent presentation of AMI was STEMI (77.5%) and over 80% underwent revascularization. Baseline characteristics of the overall sample and by age subgroups are presented in Table 1. The most prevalent RFs were smoking (75.5%), followed by hypertension (40.6%) and then obesity (38.3%). The proportion of patients without any of the five conventional CVD RFs was only 5.8% and 71.7% of patients had at least 2 RFs. Significant differences were noted for the prevalence of obesity, hypertension, diabetes, and hypercholesterolemia across age groups. Patients aged <35 years had a higher prevalence of hypercholesterolemia and obesity, with lower values for hypertension; the proportion of patients with 2 or 3 RFs in this age group was also higher than that for the other age groups. Baseline characteristics of the study population Note: Data are presented as mean ± standard deviation or n (%). Abbreviations: AMI, acute myocardial infarction; NSTEMI, non‐ST‐segment elevation myocardial infarction; STEMI, ST‐segment elevation myocardial infarction. In this study, 2739 cases of first AMI hospitalizations in young men were enrolled from January 1, 2007, to December 31, 2017. Of the total, 462 (16.9%) patients were <35 years, 761 (27.8%) were 35–39 years and 1516 (55.3%) were 40–44 years. The most frequent presentation of AMI was STEMI (77.5%) and over 80% underwent revascularization. Baseline characteristics of the overall sample and by age subgroups are presented in Table 1. The most prevalent RFs were smoking (75.5%), followed by hypertension (40.6%) and then obesity (38.3%). The proportion of patients without any of the five conventional CVD RFs was only 5.8% and 71.7% of patients had at least 2 RFs. Significant differences were noted for the prevalence of obesity, hypertension, diabetes, and hypercholesterolemia across age groups. Patients aged <35 years had a higher prevalence of hypercholesterolemia and obesity, with lower values for hypertension; the proportion of patients with 2 or 3 RFs in this age group was also higher than that for the other age groups. Baseline characteristics of the study population Note: Data are presented as mean ± standard deviation or n (%). Abbreviations: AMI, acute myocardial infarction; NSTEMI, non‐ST‐segment elevation myocardial infarction; STEMI, ST‐segment elevation myocardial infarction. Prevalence of conventional CVD RFs compared with a reference population We compared the prevalence of classic modifiable cardiovascular RFs in the study population with that of the reference population. Compared with the reference population, young Chinese men had a higher prevalence of smoking (75.5% vs. 58.1%; rate difference 17.4%) and dyslipidemia (65.0% vs. 54.6%; rate difference 10.4%), with lower prevalence of obesity (13.1% vs. 18.6%; rate difference 19.7%), hypertension (40.6% vs. 49.3%; rate difference 8.7%), and diabetes (14.9% vs. 19.9%; rate difference 5.0%). The three leading conventional RFs in the reference population were smoking, dyslipidemia, and hypertension, and this was similar for the target population (Table 2). 
Prevalence of conventional cardiovascular risk factors in the study population compared with the reference population Note: Data are presented as mean ± standard deviation or n (%). Abbreviation: BMI, body mass index. We compared the prevalence of classic modifiable cardiovascular RFs in the study population with that of the reference population. Compared with the reference population, young Chinese men had a higher prevalence of smoking (75.5% vs. 58.1%; rate difference 17.4%) and dyslipidemia (65.0% vs. 54.6%; rate difference 10.4%), with lower prevalence of obesity (13.1% vs. 18.6%; rate difference 19.7%), hypertension (40.6% vs. 49.3%; rate difference 8.7%), and diabetes (14.9% vs. 19.9%; rate difference 5.0%). The three leading conventional RFs in the reference population were smoking, dyslipidemia, and hypertension, and this was similar for the target population (Table 2). Prevalence of conventional cardiovascular risk factors in the study population compared with the reference population Note: Data are presented as mean ± standard deviation or n (%). Abbreviation: BMI, body mass index. Conventional cardiovascular RFs and other clinical characteristics across AMI subtypes Compared to STEMI patients, NSTEMI patients had a high prevalence of all individual conventional RFs except for smoking, which was similar in prevalence. The proportion of patients who had at least 3 RFs was 39.6% in NSTEMI patients, which was higher than that in STEMI patients (27.5%). The coronary artery involved most frequently was left anterior descending coronary artery (LAD) in both groups, but this was higher in STEMI patients. Patients with multivessel coronary disease and those without significant coronary stenosis were higher in the NSTEMI group (Table 3). Conventional risk factors and clinic characteristics according to AMI subtype Note: Data are presented as n (%). Abbreviations: EF, ejection fraction; LAD, left anterior descending coronary artery; LCX, left circumflex coronary artery; NSTEMI, non‐ST elevation acute coronary syndrome; RCA, right coronary artery; STEMI, ST‐segment elevation myocardial infarction. We also conducted logistic regression analyses to evaluate the relationships between AMI subtype and conventional RFs. Multiple conventional RFs significantly increased the risk of NSTEMI; the OR was 1.69 (95% confident interval [CI]: 1.11–2.57) for patients with three RFs and 2.50 (95% CI: 1.55–4.03) for patients with at least 4 RFs. Patients with multivessel coronary disease (OR = 1.32, 95% CI: 1.08–1.60) and patients without significant coronary stenosis or normal patients (OR = 2.01, 95% CI: 1.47–2.76) had an increased risk of NSTEMI compared with those with the single vessel disease after adjusting for the other covariates (Table 4). Relationship between AMI subtype and conventional RFs and coronary artery disease Compared to STEMI patients, NSTEMI patients had a high prevalence of all individual conventional RFs except for smoking, which was similar in prevalence. The proportion of patients who had at least 3 RFs was 39.6% in NSTEMI patients, which was higher than that in STEMI patients (27.5%). The coronary artery involved most frequently was left anterior descending coronary artery (LAD) in both groups, but this was higher in STEMI patients. Patients with multivessel coronary disease and those without significant coronary stenosis were higher in the NSTEMI group (Table 3). Conventional risk factors and clinic characteristics according to AMI subtype Note: Data are presented as n (%). 
Abbreviations: EF, ejection fraction; LAD, left anterior descending coronary artery; LCX, left circumflex coronary artery; NSTEMI, non‐ST elevation acute coronary syndrome; RCA, right coronary artery; STEMI, ST‐segment elevation myocardial infarction. We also conducted logistic regression analyses to evaluate the relationships between AMI subtype and conventional RFs. Multiple conventional RFs significantly increased the risk of NSTEMI; the OR was 1.69 (95% confident interval [CI]: 1.11–2.57) for patients with three RFs and 2.50 (95% CI: 1.55–4.03) for patients with at least 4 RFs. Patients with multivessel coronary disease (OR = 1.32, 95% CI: 1.08–1.60) and patients without significant coronary stenosis or normal patients (OR = 2.01, 95% CI: 1.47–2.76) had an increased risk of NSTEMI compared with those with the single vessel disease after adjusting for the other covariates (Table 4). Relationship between AMI subtype and conventional RFs and coronary artery disease Trends of conventional RFs and AMI subtypes The trends in the prevalence of conventional RFs are shown in Figure 1A. Compared with 2007, the prevalence of hypertension increased in 2017 (p trend = 0.004) that for smoking decreased gradually (p trend <0.05) and those for diabetes, hypercholesterolemia, and obesity remained unchanged (p trend >0.05). Between 2007 and 2017, the prevalence of hypertension increased from 26.5% to 44.2% (rate difference 17.7%) and that for smoking decreased from 77.9% to 72.4% (rate difference 5.5%). The prevalence of hypertension increased through 2007 and 2011, and then maintained a little shift. The greatest relative increase in prevalence between 2007 and 2009 was observed for hypercholesterolemia (from 28.7% to 35.5%), decreased in 2010 (from 35.5% to 20.5%), and then maintained a little shift. There were little shifts for diabetes and obesity (diabetes, 9.6% in 2007 and 14.8% in 2017, p = .283, rate difference 5.2%; obesity, 36.8% in 2007, and 40.7% in 2017, p = .105, rate difference 3.9%). Trends in the percentage of five conventional risk factors and acute myocardial infarction (AMI) subtype during a first acute myocardial infarction in young men 18–44 years old between 2007 and 2017. (A) Trends in the percentage of conventional risk factors, the increasing prevalence was noted for hypertension (p trend = 0.004). Decline in the rates of current smoking and drinking are appreciated. There was a similar prevalence for diabetes and hypercholesterolemia and obesity over time. (B) Trends in the percentage of AMI subtybe, the Increasing percentage was noted for NSTEMI(p trend <0.001) The most frequent presentation of AMI was STEMI, but the proportion decreased gradually from 90.4% to 63.2% (p trend <0.001), then stabilized at around 65% during the current years (Figure 1B). The mean SBP and BMI increased over the three periods; the mean SBP increased from 119.4 mmHg to 123.3 mmHg (mean difference 3.9 mmHg), with BMI increasing from 26.5 kg/m2 to 27.6 kg/m2 (mean difference 1.2 kg/m2). The mean TC decreased from 4.62 mmol/L to 4.49 mmol/L and LDL‐C decreased from 2.97 mmol/L to 2.80 mmol/L. No differences were seen in mean DBP and FPG over the three periods (Table S1). The trends in the prevalence of conventional RFs are shown in Figure 1A. Compared with 2007, the prevalence of hypertension increased in 2017 (p trend = 0.004) that for smoking decreased gradually (p trend <0.05) and those for diabetes, hypercholesterolemia, and obesity remained unchanged (p trend >0.05). 
Between 2007 and 2017, the prevalence of hypertension increased from 26.5% to 44.2% (rate difference 17.7%) and that for smoking decreased from 77.9% to 72.4% (rate difference 5.5%). The prevalence of hypertension increased through 2007 and 2011, and then maintained a little shift. The greatest relative increase in prevalence between 2007 and 2009 was observed for hypercholesterolemia (from 28.7% to 35.5%), decreased in 2010 (from 35.5% to 20.5%), and then maintained a little shift. There were little shifts for diabetes and obesity (diabetes, 9.6% in 2007 and 14.8% in 2017, p = .283, rate difference 5.2%; obesity, 36.8% in 2007, and 40.7% in 2017, p = .105, rate difference 3.9%). Trends in the percentage of five conventional risk factors and acute myocardial infarction (AMI) subtype during a first acute myocardial infarction in young men 18–44 years old between 2007 and 2017. (A) Trends in the percentage of conventional risk factors, the increasing prevalence was noted for hypertension (p trend = 0.004). Decline in the rates of current smoking and drinking are appreciated. There was a similar prevalence for diabetes and hypercholesterolemia and obesity over time. (B) Trends in the percentage of AMI subtybe, the Increasing percentage was noted for NSTEMI(p trend <0.001) The most frequent presentation of AMI was STEMI, but the proportion decreased gradually from 90.4% to 63.2% (p trend <0.001), then stabilized at around 65% during the current years (Figure 1B). The mean SBP and BMI increased over the three periods; the mean SBP increased from 119.4 mmHg to 123.3 mmHg (mean difference 3.9 mmHg), with BMI increasing from 26.5 kg/m2 to 27.6 kg/m2 (mean difference 1.2 kg/m2). The mean TC decreased from 4.62 mmol/L to 4.49 mmol/L and LDL‐C decreased from 2.97 mmol/L to 2.80 mmol/L. No differences were seen in mean DBP and FPG over the three periods (Table S1). Prevalence of conventional cardiovascular RFs and MI subtypes across different groups: In this study, 2739 cases of first AMI hospitalizations in young men were enrolled from January 1, 2007, to December 31, 2017. Of the total, 462 (16.9%) patients were <35 years, 761 (27.8%) were 35–39 years and 1516 (55.3%) were 40–44 years. The most frequent presentation of AMI was STEMI (77.5%) and over 80% underwent revascularization. Baseline characteristics of the overall sample and by age subgroups are presented in Table 1. The most prevalent RFs were smoking (75.5%), followed by hypertension (40.6%) and then obesity (38.3%). The proportion of patients without any of the five conventional CVD RFs was only 5.8% and 71.7% of patients had at least 2 RFs. Significant differences were noted for the prevalence of obesity, hypertension, diabetes, and hypercholesterolemia across age groups. Patients aged <35 years had a higher prevalence of hypercholesterolemia and obesity, with lower values for hypertension; the proportion of patients with 2 or 3 RFs in this age group was also higher than that for the other age groups. Baseline characteristics of the study population Note: Data are presented as mean ± standard deviation or n (%). Abbreviations: AMI, acute myocardial infarction; NSTEMI, non‐ST‐segment elevation myocardial infarction; STEMI, ST‐segment elevation myocardial infarction. Prevalence of conventional CVD RFs compared with a reference population: We compared the prevalence of classic modifiable cardiovascular RFs in the study population with that of the reference population. 
Compared with the reference population, young Chinese men had a higher prevalence of smoking (75.5% vs. 58.1%; rate difference 17.4%) and dyslipidemia (65.0% vs. 54.6%; rate difference 10.4%), with lower prevalence of obesity (13.1% vs. 18.6%; rate difference 19.7%), hypertension (40.6% vs. 49.3%; rate difference 8.7%), and diabetes (14.9% vs. 19.9%; rate difference 5.0%). The three leading conventional RFs in the reference population were smoking, dyslipidemia, and hypertension, and this was similar for the target population (Table 2). Prevalence of conventional cardiovascular risk factors in the study population compared with the reference population Note: Data are presented as mean ± standard deviation or n (%). Abbreviation: BMI, body mass index. Conventional cardiovascular RFs and other clinical characteristics across AMI subtypes: Compared to STEMI patients, NSTEMI patients had a high prevalence of all individual conventional RFs except for smoking, which was similar in prevalence. The proportion of patients who had at least 3 RFs was 39.6% in NSTEMI patients, which was higher than that in STEMI patients (27.5%). The coronary artery involved most frequently was left anterior descending coronary artery (LAD) in both groups, but this was higher in STEMI patients. Patients with multivessel coronary disease and those without significant coronary stenosis were higher in the NSTEMI group (Table 3). Conventional risk factors and clinic characteristics according to AMI subtype Note: Data are presented as n (%). Abbreviations: EF, ejection fraction; LAD, left anterior descending coronary artery; LCX, left circumflex coronary artery; NSTEMI, non‐ST elevation acute coronary syndrome; RCA, right coronary artery; STEMI, ST‐segment elevation myocardial infarction. We also conducted logistic regression analyses to evaluate the relationships between AMI subtype and conventional RFs. Multiple conventional RFs significantly increased the risk of NSTEMI; the OR was 1.69 (95% confident interval [CI]: 1.11–2.57) for patients with three RFs and 2.50 (95% CI: 1.55–4.03) for patients with at least 4 RFs. Patients with multivessel coronary disease (OR = 1.32, 95% CI: 1.08–1.60) and patients without significant coronary stenosis or normal patients (OR = 2.01, 95% CI: 1.47–2.76) had an increased risk of NSTEMI compared with those with the single vessel disease after adjusting for the other covariates (Table 4). Relationship between AMI subtype and conventional RFs and coronary artery disease Trends of conventional RFs and AMI subtypes: The trends in the prevalence of conventional RFs are shown in Figure 1A. Compared with 2007, the prevalence of hypertension increased in 2017 (p trend = 0.004) that for smoking decreased gradually (p trend <0.05) and those for diabetes, hypercholesterolemia, and obesity remained unchanged (p trend >0.05). Between 2007 and 2017, the prevalence of hypertension increased from 26.5% to 44.2% (rate difference 17.7%) and that for smoking decreased from 77.9% to 72.4% (rate difference 5.5%). The prevalence of hypertension increased through 2007 and 2011, and then maintained a little shift. The greatest relative increase in prevalence between 2007 and 2009 was observed for hypercholesterolemia (from 28.7% to 35.5%), decreased in 2010 (from 35.5% to 20.5%), and then maintained a little shift. 
There were little shifts for diabetes and obesity (diabetes, 9.6% in 2007 and 14.8% in 2017, p = .283, rate difference 5.2%; obesity, 36.8% in 2007, and 40.7% in 2017, p = .105, rate difference 3.9%). Trends in the percentage of five conventional risk factors and acute myocardial infarction (AMI) subtype during a first acute myocardial infarction in young men 18–44 years old between 2007 and 2017. (A) Trends in the percentage of conventional risk factors, the increasing prevalence was noted for hypertension (p trend = 0.004). Decline in the rates of current smoking and drinking are appreciated. There was a similar prevalence for diabetes and hypercholesterolemia and obesity over time. (B) Trends in the percentage of AMI subtybe, the Increasing percentage was noted for NSTEMI(p trend <0.001) The most frequent presentation of AMI was STEMI, but the proportion decreased gradually from 90.4% to 63.2% (p trend <0.001), then stabilized at around 65% during the current years (Figure 1B). The mean SBP and BMI increased over the three periods; the mean SBP increased from 119.4 mmHg to 123.3 mmHg (mean difference 3.9 mmHg), with BMI increasing from 26.5 kg/m2 to 27.6 kg/m2 (mean difference 1.2 kg/m2). The mean TC decreased from 4.62 mmol/L to 4.49 mmol/L and LDL‐C decreased from 2.97 mmol/L to 2.80 mmol/L. No differences were seen in mean DBP and FPG over the three periods (Table S1). CONTROL OF CONVENTIONAL RFS: The control rate of hypertension and diabetes was 62.8% (699/1113) and 17.7% (72/407), respectively; smoking cessation was 4.8% (105/2173). DISCUSSION: In this study comprising of young men with a first hospitalization for AMI in China, the most prevalent CVD RFs were smoking (75.5%), hypertension (40.6%), and obesity (38.3%). Additionally, over 70% of them had at least two conventional RFs. Compared with a similar reference population, these young Chinese men had a higher prevalence of smoking and dyslipidemia and a lower prevalence of obesity, hypertension, and diabetes. Between 2007 and 2017, the prevalence of hypertension increased, whereas that of smoking decreased. Patients presenting with NSTEMI had a high prevalence of all individual conventional RFs except for smoking, which was more prevalent in STEMI patients. LAD was the most frequently involved coronary artery in the subtypes of AMI, but it was more common in STEMI patients. Multivessel coronary disease was more common in NSTEMI. Conventional RFs were highly prevalent in young men aged 18–44 years who were hospitalized for first AMI events. The prevalence values were higher compared to national figures smoking (53.9% for young adults aged 25–44 years), hypertension (11.3% for urban adults and 10.0% for rural adults aged 18–44 years), obesity (11.0% for young adults aged 18–44 years), and diabetes (5.9% for young adults aged 18–44 years); high TC ( ≥ 5.2 mmol/L) and high LDL‐C( ≥ 3.4 mmol/L) were 28.5% and 26.3%, respectively. 14 , 15 , 16 , 17 Our results are consistent with previous studies conducted in young patients diagnosed with AMI and CHD; the prevalence for hypertension, diabetes, and smoking has been reported as 34.3%–41.7%, 11.1%–22.4%, and 57.4%–74.0%, respectively. 3 , 6 However, the prevalence figures were different from that of the US report. Compared with the data from the US NIS for patients hospitalized for a first AMI from 2005 to 2015, 10 young Chinese men had a higher prevalence of smoking and dyslipidemia, with a lower prevalence of hypertension, obesity, and diabetes. 
There were clear age differences in the prevalence of some of the RFs; patients aged <35 years had a higher prevalence of smoking, obesity, and hypercholesterolemia, whereas patients aged 35–44 years had a higher prevalence of smoking, hypertension, and diabetes. Our study also showed differences in the temporal trends of the conventional RFs between the Chinese and US populations. The prevalence of all the evaluated conventional RFs progressively increased between 2005 and 2015 in the United States, 10 whereas in China the prevalence of hypertension increased and that of smoking decreased gradually, which is also consistent with a previous study conducted in China. In a retrospective analysis of coronary artery disease patients aged ≤45 years conducted from 2010 to 2014, the prevalence of hypertension increased from 40.7% to 47.5%, that of diabetes from 20.3% to 26.1%, and that of hyperlipidemia from 27.3% to 35.7%, whereas the prevalence of smoking exhibited a downward trend. 6 Approximately 77% of young Chinese men hospitalized with their first MI presented with STEMI, which is consistent with other studies conducted in young Chinese adults. 2 , 18 , 19 Previous population‐based studies have, however, reported an increasing proportion of NSTEMI presentations 2 , 19 ; the proportion of patients presenting with NSTEMI increased from 11.6% to 36.2% in males and from 15.8% to 45.5% in females from 2007 through 2012, 2 which was still quite different from the presentation in young adults in the US population. 10 , 20 , 21 Recent studies conducted in the United States have shown that AMI more frequently presented as NSTEMI, the proportion of which increased from 54.6% to 67.9% in males and from 58.8% to 80.2% in females from 2000 through 2014. 21 A similar trend has been observed in Germany. 8 According to our findings, multivessel disease and clustering of conventional RFs are closely related to NSTEMI, which is consistent with the results of previous studies. 10 , 22 In our study, we focused on young adults at high risk of CVD in routine clinical practice, but most of these young adults are classified as low or intermediate risk based on traditional CVD risk prediction scores that use lifetime risk estimates. 11 , 12 Using this approach leads to a decreased ability to identify RFs in young adults and to suboptimal utilization of preventive strategies. Furthermore, the majority of CHD events seem to occur in these “low” and “intermediate” risk groups. Knowledge of the prevalence and trends of conventional RFs among young Chinese men hospitalized with a first MI has important healthcare implications for planning appropriate primary preventive strategies. Considerable attention should be paid to young men with at least two out of five conventional RFs (hypercholesterolemia, hypertension, diabetes, obesity, smoking). Healthcare providers should also focus more attention on the control of metabolic factors and encourage smoking cessation. LIMITATIONS: There are some limitations that deserve consideration. This was a single‐center study and the data were collected from Beijing Anzhen hospital, which is well known for the management of CHD. Furthermore, some patients may have had more severe symptoms. Hence, the results may not be extrapolated to the entire nation. Data on conventional RFs were collected from medical records; therefore, health behaviors such as physical activity, sleep duration, emotion, and stress could not be included.
The sample for the annual number of AMI hospitalizations was not large enough, which may have influenced the comparisons of conventional RFs and of trends in RFs over time with the reference US population. CONCLUSIONS: In conclusion, the characteristics of conventional RFs among young Chinese men hospitalized with a first MI have important healthcare implications with regard to the planning of appropriate primary preventive strategies. The three leading cardiovascular RFs (smoking, hypertension, and obesity) should be the target of intervention and treatment strategies aimed at reducing the incidence of MI in young adults. Furthermore, preventing hypercholesterolemia in patients aged 34 years or younger should also be a primary focus. Young adults with two or more conventional RFs should be given considerable attention even if they are classified as being at low or intermediate risk by traditional CHD risk prediction scores. CONFLICT OF INTERESTS: The authors declare that there are no conflicts of interest. Supporting information: Supplementary information is provided as an additional data file.
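The odds ratios for NSTEMI reported above come from multivariable logistic regression. As an illustration only, the following minimal sketch shows how such adjusted ORs and 95% CIs are typically obtained in Python with statsmodels; the variable names (nstemi, rf_count, vessels, age) and the simulated data are hypothetical and are not taken from the study.

```python
# Illustrative only: adjusted odds ratios with 95% CIs via logistic regression,
# mirroring the kind of analysis described above (hypothetical data and variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "nstemi": rng.integers(0, 2, n),                       # 1 = NSTEMI, 0 = STEMI
    "rf_count": rng.choice(["0-1", "2", "3", ">=4"], n),   # number of conventional RFs
    "vessels": rng.choice(["single", "multi", "none"], n), # coronary involvement
    "age": rng.integers(18, 45, n),
})

# Logistic regression of AMI subtype on RF burden and coronary involvement,
# adjusting for age; "0-1" RFs and single-vessel disease are the reference groups.
model = smf.logit(
    "nstemi ~ C(rf_count, Treatment(reference='0-1'))"
    " + C(vessels, Treatment(reference='single')) + age",
    data=df,
).fit(disp=False)

# Exponentiate coefficients to obtain odds ratios and their 95% confidence intervals.
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(or_table.round(2))
```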
Background: There are limited data on the characteristics of conventional risk factors (RFs) in young Chinese men hospitalized with a first acute myocardial infarction (AMI). Methods: A total of 2739 men aged 18-44 years hospitalized for a first AMI were identified from 2007 to 2017. The overall prevalence of RFs, their respective temporal trends, and the subtypes of AMI were evaluated. Results: The most prevalent conditions were smoking, followed by hypertension and then obesity. Patients aged <35 years had a much higher prevalence of hypercholesterolemia and obesity. Compared with a similar reference population in the United States, young Chinese men had a higher prevalence of smoking and dyslipidemia, but a lower prevalence of obesity, hypertension, and diabetes. The prevalence of hypertension increased from 2007 through 2017 (p trend <.001), whereas smoking decreased gradually. AMI frequently presented as ST-segment elevation MI (STEMI) (77.5%). Clustering of conventional RFs (3 RFs, odds ratio [OR]: 1.69, 95% confidence interval [CI]: 1.11-2.57; ≥4 RFs, OR: 2.50, 95% CI: 1.55-4.03) and multivessel disease (OR = 1.32, 95% CI: 1.08-1.60) increased the risk of non-STEMI (NSTEMI). Conclusions: Conventional RFs were highly prevalent in young Chinese men who were hospitalized for first AMI events, and the temporal trends differed between the Chinese and US populations. Multivessel disease and clustering of conventional RFs are closely related to NSTEMI. Optimized preventive strategies among young adults are warranted.
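The abstract and results above report p-for-trend values for risk-factor prevalence across 2007–2017. One common way to obtain such a p for trend is to regress the binary risk factor on calendar year with a binomial model and test the year coefficient; the sketch below illustrates this with hypothetical yearly counts, not the study's data.

```python
# Illustrative only: a p-for-trend for a binary risk factor across calendar years,
# computed by binomial (logistic) regression on year (one common approach; hypothetical counts).
import numpy as np
import statsmodels.api as sm

years = np.arange(2007, 2018)
n_patients = np.array([104, 150, 180, 210, 250, 270, 280, 300, 310, 320, 330])  # hypothetical
n_with_rf  = np.array([ 28,  45,  60,  75,  95, 105, 115, 125, 133, 140, 146])  # hypothetical

# Binomial GLM with a logit link: response is (events, non-events), predictor is year.
X = sm.add_constant(years.astype(float))
y = np.column_stack([n_with_rf, n_patients - n_with_rf])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

print(f"p for trend (year coefficient): {fit.pvalues[1]:.4f}")
print(f"odds ratio per calendar year: {np.exp(fit.params[1]):.3f}")
```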
INTRODUCTION: The primary and secondary prevention of coronary heart disease (CHD) in young adults has garnered tremendous attention given the rapid increase in the incidence of acute coronary events and hospitalization rates, especially in young men. Evidence from observational epidemiological studies showed that the incidence of acute coronary events in young adults aged 35–39 years increased by 37.4% in 2009 compared with 2007, the largest increase for this age group. 1 Hospitalization rates for acute myocardial infarction (AMI) per 100 000 population experienced the most significant increase in young men (<55 years), rising by 45.8% from 2007 to 2012 in Beijing; 2 men accounted for nearly 90% of young adults hospitalized for CHD from 2013 to 2014 in Beijing. 3 This trend parallels an increase in cardiovascular risk factors (RFs) including smoking, hypertension, diabetes, obesity, and dyslipidemia in the general Chinese population, as well as an increase in hospitalizations for AMI. 2 , 4 , 5 , 6 The Prospective Urban Rural Epidemiology (PURE) study showed that approximately 70% of cardiovascular disease (CVD) cases were attributable to modifiable RFs. 7 Though several studies have evaluated the prevalence of these RFs during a first or any episode of AMI and have found a high prevalence of at least 1 RF (approximately 90%), 8 , 9 , 10 most patients were classified as being at low or intermediate risk by traditional CHD risk prediction scores. 11 , 12 Unfortunately, this does not aid in the development of appropriate primary preventive strategies to decrease the risk of CHD. The prevalence of conventional RFs, other clinical characteristics, and their trends need to be clarified so that they can be used in formulating preventive strategies. Few studies have evaluated recent long‐term trends in and the prevalence of modifiable RFs during a first AMI in young adults in China beyond a retrospective analysis of patients with coronary artery disease aged ≤45 years conducted from 2010 to 2014, 6 and the prevalence figures reported for the United States and German populations differ. 8 , 10 Contemporary data about trends in and prevalence of modifiable RFs, and subtypes of AMI, are lacking in young patients. Using a retrospective analysis of hospital data from 2007 to 2017, we aimed to evaluate the trends in and prevalence of modifiable RFs and subtypes of MI during the first AMI in young Chinese men. This evidence will provide a reference point for the development of preventive strategies in this population. CONCLUSIONS: In conclusion, the characteristics of conventional RFs among young Chinese men hospitalized with a first MI have important healthcare implications with regard to the planning of appropriate primary preventive strategies. The three leading cardiovascular RFs (smoking, hypertension, and obesity) should be the target of intervention and treatment strategies aimed at reducing the incidence of MI in young adults. Furthermore, preventing hypercholesterolemia in patients aged 34 years or younger should also be a primary focus. Young adults with two or more conventional RFs should be given considerable attention even if they are classified as being at low or intermediate risk by traditional CHD risk prediction scores.
Background: There are limited data on the characteristics of conventional risk factors (RFs) in young Chinese men hospitalized with a first acute myocardial infarction (AMI). Methods: A total of 2739 men aged 18-44 years hospitalized for a first AMI were identified from 2007 to 2017. The overall prevalence of RFs, their respective temporal trends, and the subtypes of AMI were evaluated. Results: The most prevalent conditions were smoking, followed by hypertension and then obesity. Patients aged <35 years had a much higher prevalence of hypercholesterolemia and obesity. Compared with a similar reference population in the United States, young Chinese men had a higher prevalence of smoking and dyslipidemia, but a lower prevalence of obesity, hypertension, and diabetes. The prevalence of hypertension increased from 2007 through 2017 (p trend <.001), whereas smoking decreased gradually. AMI frequently presented as ST-segment elevation MI (STEMI) (77.5%). Clustering of conventional RFs (3 RFs, odds ratio [OR]: 1.69, 95% confidence interval [CI]: 1.11-2.57; ≥4 RFs, OR: 2.50, 95% CI: 1.55-4.03) and multivessel disease (OR = 1.32, 95% CI: 1.08-1.60) increased the risk of non-STEMI (NSTEMI). Conclusions: Conventional RFs were highly prevalent in young Chinese men who were hospitalized for first AMI events, and the temporal trends differed between the Chinese and US populations. Multivessel disease and clustering of conventional RFs are closely related to NSTEMI. Optimized preventive strategies among young adults are warranted.
7,764
309
[ 489, 259, 305, 130, 264, 184, 319, 487, 31, 122 ]
17
[ "rfs", "prevalence", "ami", "patients", "conventional", "coronary", "hypertension", "young", "smoking", "population" ]
[ "coronary events increased", "incidence acute coronary", "myocardial infarction prevalence", "prevention coronary heart", "cardiovascular risk factors" ]
[CONTENT] acute myocardial infarction | risk factor | trends | youth [SUMMARY]
[CONTENT] acute myocardial infarction | risk factor | trends | youth [SUMMARY]
[CONTENT] acute myocardial infarction | risk factor | trends | youth [SUMMARY]
[CONTENT] acute myocardial infarction | risk factor | trends | youth [SUMMARY]
[CONTENT] acute myocardial infarction | risk factor | trends | youth [SUMMARY]
[CONTENT] acute myocardial infarction | risk factor | trends | youth [SUMMARY]
[CONTENT] Cardiovascular Diseases | China | Heart Disease Risk Factors | Humans | Male | Myocardial Infarction | Non-ST Elevated Myocardial Infarction | Risk Factors | ST Elevation Myocardial Infarction | United States | Young Adult [SUMMARY]
[CONTENT] Cardiovascular Diseases | China | Heart Disease Risk Factors | Humans | Male | Myocardial Infarction | Non-ST Elevated Myocardial Infarction | Risk Factors | ST Elevation Myocardial Infarction | United States | Young Adult [SUMMARY]
[CONTENT] Cardiovascular Diseases | China | Heart Disease Risk Factors | Humans | Male | Myocardial Infarction | Non-ST Elevated Myocardial Infarction | Risk Factors | ST Elevation Myocardial Infarction | United States | Young Adult [SUMMARY]
[CONTENT] Cardiovascular Diseases | China | Heart Disease Risk Factors | Humans | Male | Myocardial Infarction | Non-ST Elevated Myocardial Infarction | Risk Factors | ST Elevation Myocardial Infarction | United States | Young Adult [SUMMARY]
[CONTENT] Cardiovascular Diseases | China | Heart Disease Risk Factors | Humans | Male | Myocardial Infarction | Non-ST Elevated Myocardial Infarction | Risk Factors | ST Elevation Myocardial Infarction | United States | Young Adult [SUMMARY]
[CONTENT] Cardiovascular Diseases | China | Heart Disease Risk Factors | Humans | Male | Myocardial Infarction | Non-ST Elevated Myocardial Infarction | Risk Factors | ST Elevation Myocardial Infarction | United States | Young Adult [SUMMARY]
[CONTENT] coronary events increased | incidence acute coronary | myocardial infarction prevalence | prevention coronary heart | cardiovascular risk factors [SUMMARY]
[CONTENT] coronary events increased | incidence acute coronary | myocardial infarction prevalence | prevention coronary heart | cardiovascular risk factors [SUMMARY]
[CONTENT] coronary events increased | incidence acute coronary | myocardial infarction prevalence | prevention coronary heart | cardiovascular risk factors [SUMMARY]
[CONTENT] coronary events increased | incidence acute coronary | myocardial infarction prevalence | prevention coronary heart | cardiovascular risk factors [SUMMARY]
[CONTENT] coronary events increased | incidence acute coronary | myocardial infarction prevalence | prevention coronary heart | cardiovascular risk factors [SUMMARY]
[CONTENT] coronary events increased | incidence acute coronary | myocardial infarction prevalence | prevention coronary heart | cardiovascular risk factors [SUMMARY]
[CONTENT] rfs | prevalence | ami | patients | conventional | coronary | hypertension | young | smoking | population [SUMMARY]
[CONTENT] rfs | prevalence | ami | patients | conventional | coronary | hypertension | young | smoking | population [SUMMARY]
[CONTENT] rfs | prevalence | ami | patients | conventional | coronary | hypertension | young | smoking | population [SUMMARY]
[CONTENT] rfs | prevalence | ami | patients | conventional | coronary | hypertension | young | smoking | population [SUMMARY]
[CONTENT] rfs | prevalence | ami | patients | conventional | coronary | hypertension | young | smoking | population [SUMMARY]
[CONTENT] rfs | prevalence | ami | patients | conventional | coronary | hypertension | young | smoking | population [SUMMARY]
[CONTENT] modifiable rfs | young | increase | prevalence | modifiable | trends prevalence modifiable | prevalence modifiable | trends prevalence modifiable rfs | prevalence modifiable rfs | rfs [SUMMARY]
[CONTENT] defined | new | variables | ami | heart | continuous | continuous variables | identified | prior | coronary [SUMMARY]
[CONTENT] difference | patients | prevalence | rate difference | coronary | rfs | rate | conventional | mean | decreased [SUMMARY]
[CONTENT] strategies | young | primary | young adults | adults | rfs | mi | risk | attention classified | reducing [SUMMARY]
[CONTENT] rfs | prevalence | patients | ami | coronary | young | conventional | hypertension | difference | rate [SUMMARY]
[CONTENT] rfs | prevalence | patients | ami | coronary | young | conventional | hypertension | difference | rate [SUMMARY]
[CONTENT] Chinese | first | AMI [SUMMARY]
[CONTENT] 2739 | 18-44 years | first | AMI | 2007 to 2017 ||| AMI [SUMMARY]
[CONTENT] ||| 35 years ||| the United States | Chinese ||| 2007 | 2017 ||| AMI | 77.5% ||| 3 ||| 1.69 | 95% ||| CI | 1.11 | ≥4 | 2.50 | 95% | CI | 1.55-4.03 | 1.32 | 95% | CI | 1.08-1.60 | NSTEMI [SUMMARY]
[CONTENT] Chinese | first | AMI | China | US ||| Multivessel | NSTEMI ||| [SUMMARY]
[CONTENT] Chinese | first | AMI ||| 2739 | 18-44 years | first | AMI | 2007 to 2017 ||| AMI ||| ||| ||| 35 years ||| the United States | Chinese ||| 2007 | 2017 ||| AMI | 77.5% ||| 3 ||| 1.69 | 95% ||| CI | 1.11 | ≥4 | 2.50 | 95% | CI | 1.55-4.03 | 1.32 | 95% | CI | 1.08-1.60 | NSTEMI ||| Chinese | first | AMI | China | US ||| Multivessel | NSTEMI ||| [SUMMARY]
[CONTENT] Chinese | first | AMI ||| 2739 | 18-44 years | first | AMI | 2007 to 2017 ||| AMI ||| ||| ||| 35 years ||| the United States | Chinese ||| 2007 | 2017 ||| AMI | 77.5% ||| 3 ||| 1.69 | 95% ||| CI | 1.11 | ≥4 | 2.50 | 95% | CI | 1.55-4.03 | 1.32 | 95% | CI | 1.08-1.60 | NSTEMI ||| Chinese | first | AMI | China | US ||| Multivessel | NSTEMI ||| [SUMMARY]
Economic and health impacts of introducing Helicobacter pylori eradication strategy into national gastric cancer policy in Japan: A cost-effectiveness analysis.
34278663
Helicobacter pylori (H. pylori) eradication reduces gastric cancer risk. Since 2013, a population-wide H. pylori eradication strategy for patients with chronic gastritis has been implemented in Japan to prevent gastric cancer. The aim of this study was to evaluate the economic and health effects of the H. pylori eradication strategy within the national gastric cancer prevention program.
BACKGROUND
We developed a cohort state-transition model for H. pylori eradication and no eradication over a lifetime horizon from a healthcare payer perspective, and performed one-way and probabilistic sensitivity analyses. We targeted a hypothetical cohort of H. pylori-positive patients aged 20, 30, 40, 50, 60, 70, and 80. The main outcomes were costs, quality-adjusted life-years (QALYs), life expectancy life-years (LYs), incremental cost-effectiveness ratios, gastric cancer cases, and deaths from gastric cancer.
MATERIALS AND METHODS
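The methods summarized above are based on a cohort state-transition (Markov) model run in yearly cycles over a lifetime horizon, with discounted costs and QALYs and an incremental cost-effectiveness ratio computed against the comparator. The sketch below is a deliberately simplified, hypothetical version of such a model (four health states and made-up transition probabilities, costs, and utilities); it is not the authors' TreeAge model, only an illustration of the mechanics.

```python
# Illustrative only: a stripped-down cohort state-transition (Markov) model of the kind
# described above. All transition probabilities, costs, and utilities are hypothetical.
import numpy as np

STATES = ["H. pylori positive", "Gastric cancer", "Cured cancer", "Dead"]
CYCLES = 60          # yearly cycles, approximating a lifetime horizon
DISCOUNT = 0.03      # 3% annual discount rate applied to costs and QALYs

def run_cohort(p_cancer, annual_cost, utility_infected, cancer_cost, cancer_utility):
    """Run the cohort through yearly cycles; return discounted cost and QALYs per person."""
    # Rows = from-state, columns = to-state (hypothetical transition probabilities).
    T = np.array([
        [0.985 - p_cancer, p_cancer, 0.000, 0.015],  # infected: stay, develop cancer, die
        [0.000,            0.550,    0.300, 0.150],  # cancer: persist, be cured, die
        [0.000,            0.020,    0.960, 0.020],  # cured: recurrence, stay cured, die
        [0.000,            0.000,    0.000, 1.000],  # dead is absorbing
    ])
    state = np.array([1.0, 0.0, 0.0, 0.0])           # whole cohort starts infected
    costs = np.array([annual_cost, cancer_cost, 300.0, 0.0])
    utils = np.array([utility_infected, cancer_utility, 0.90, 0.0])
    total_cost = total_qaly = 0.0
    for t in range(CYCLES):
        disc = 1.0 / (1.0 + DISCOUNT) ** t
        total_cost += disc * float(state @ costs)
        total_qaly += disc * float(state @ utils)
        state = state @ T
    return total_cost, total_qaly

# Hypothetical comparison: eradication lowers cancer risk but adds treatment/screening cost.
cost_erad, qaly_erad = run_cohort(p_cancer=0.002, annual_cost=250.0,
                                  utility_infected=0.95, cancer_cost=20000.0, cancer_utility=0.70)
cost_none, qaly_none = run_cohort(p_cancer=0.005, annual_cost=150.0,
                                  utility_infected=0.92, cancer_cost=20000.0, cancer_utility=0.70)

d_cost, d_qaly = cost_erad - cost_none, qaly_erad - qaly_none
if d_cost <= 0 and d_qaly >= 0:
    print("Eradication dominates (cheaper and more effective).")
else:
    print(f"ICER: {d_cost / d_qaly:,.0f} US$ per QALY gained")
```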
H. pylori eradication was more effective and cost-saving for all age groups than no eradication. Sensitivity analyses showed strong robustness of the results. From 2013-2019 for 8.50 million patients, H. pylori eradication saved US$3.75 billion, increased 11.11 million QALYs and 0.45 million LYs, and prevented 284,188 cases and 65,060 deaths. For 35.59 million patients without eradication, H. pylori eradication has the potential to save US$14.82 billion, increase 43.10 million QALYs and 1.66 million LYs, and prevent 1,084,532 cases and 250,256 deaths.
RESULTS
A national policy using population-wide H. pylori eradication to prevent gastric cancer yields significant cost savings and health benefits for young-, middle-, and old-aged individuals in Japan. The findings strongly support the promotion of an H. pylori eradication strategy for all age groups in high-incidence countries.
CONCLUSIONS
[ "Aged", "Cost-Benefit Analysis", "Helicobacter Infections", "Helicobacter pylori", "Humans", "Japan", "Middle Aged", "Policy", "Stomach Neoplasms" ]
9286640
INTRODUCTION
More than half of the world's population is infected with Helicobacter pylori (H. pylori). 1 H. pylori infection causes chronic atrophic gastritis, a common stage in the progression to gastric cancer, and is responsible for 98% of gastric cancer cases in Japan. 2 , 3 , 4 , 5 Japan has the third highest age‐standardized rate of gastric cancer in the world. 6 The incidence of gastric cancer in Japan is almost 10 times higher than that observed in the United States. The Taipei global consensus guidelines for screening and eradication of H. pylori for gastric cancer prevention recommend that mass screening and eradication of H. pylori should be considered in populations at higher risk of gastric cancer and that eradication therapy should be offered to all individuals infected with H. pylori. 7 In the guidelines for the management of H. pylori infection by the Japanese Society for Helicobacter Research, H. pylori eradication treatment is recommended to prevent gastric cancer in patients with H. pylori infection. 8 The Ministry of Health, Labour and Welfare approved the expansion of National Health Insurance coverage for H. pylori eradication treatment in patients with chronic gastritis from February 2013. 9 During 2013‐2019, 8.50 million H. pylori‐positive patients received eradication treatment. 10 , 11 The number of deaths from gastric cancer is gradually declining, with 42,931 deaths in 2019 and 42,318 deaths in 2020 (Figure 1). 12 , 13 Changes in gastric cancer deaths in Japan from 2000 to 2020 In this study, we aimed to evaluate the economic and health effects of the H. pylori eradication strategy within the national gastric cancer prevention program in Japan.
null
null
RESULTS
Base‐case analysis: H. pylori eradication strategy was less costly and yielded greater benefits than no eradication strategy for all age groups (Table 2). No eradication strategy was dominated for all age groups. The patients aged 40 had the highest per capita cost‐savings. Per capita gains of QALYs in younger patients were higher than in older patients (Table 2). From 2013 to 2019, the patients aged 60 had the highest cost savings and health outcomes (Table S1). One‐way sensitivity analysis and probabilistic sensitivity analysis: The incremental cost‐effectiveness ratio tornado diagram of H. pylori eradication strategy versus no eradication strategy showed that cost‐effectiveness was not sensitive to any variables in all age groups (Figure 3A, Figure S2). In probabilistic sensitivity analysis using Monte‐Carlo simulation for 10,000 trials, the acceptability curve showed that H. pylori eradication strategy was cost‐effective 100% of the time at two willingness‐to‐pay thresholds of US$50,000 per QALY gained and US$100,000 per QALY gained in all age groups (Figure 3B, Figure S3). Incremental cost‐effectiveness scatterplots showed that H. pylori eradication strategy dominated no‐eradication strategy in more than 9800 trials in all age groups (Figure 3C, Figure S4). The results showed strong robustness of H. pylori eradication strategy in all age groups. [Table 2: Results of the base‐case analysis — costs, QALYs, ICER (US$/QALY gained), life expectancy life‐years (LYs), and ICER (US$/LY gained) by age group. Abbreviations: H. pylori = Helicobacter pylori; QALY = quality‐adjusted life‐year; LY = life expectancy life‐years; ICER = incremental cost‐effectiveness ratio; dominated = less effective and more costly than others.] [Figure 3: One‐way sensitivity analysis and probabilistic sensitivity analysis in 60‐year‐old H. pylori‐positive patients. A, The incremental cost‐effectiveness ratio (ICER) tornado diagram for H. pylori eradication strategy versus no eradication strategy; cost‐effectiveness was not sensitive to changes in any variables. B, Cost‐effectiveness acceptability curve; the probabilistic sensitivity analysis ran 10,000 simulations of the model in which input parameters were randomly varied across pre‐specified statistical distributions, the x‐axis represents the willingness‐to‐pay threshold, and the curve showed that H. pylori eradication strategy was cost‐effective 100% of the time at willingness‐to‐pay thresholds of US$50,000 and US$100,000 per QALY gained. C, Incremental cost‐effectiveness scatterplots with 95% confidence ellipses; each dot represents a single simulation out of 10,000, with H. pylori eradication strategy dominating no‐eradication strategy in 9811 trials and being more cost‐effective than no‐eradication strategy in the remaining 189 trials. EV = expected value; H. pylori = Helicobacter pylori; ICER = incremental cost‐effectiveness ratio; QALY = quality‐adjusted life‐year; WTP = willingness‐to‐pay threshold.] Cumulative lifetime economic and health outcomes: H. pylori‐positive patients aged 60 had the highest cumulative lifetime economic and health outcomes (Table S1). From 2013 to 2019 for 8.50 million patients, H. pylori eradication saved US$3.75 billion, increased 11.11 million QALYs and 0.45 million LYs, and prevented 284,188 cases and 65,060 deaths. For 35.59 million patients without eradication, H. pylori eradication has the potential to save US$14.82 billion, increase 43.10 million QALYs and 1.66 million LYs, and prevent 1,084,532 cases and 250,256 deaths (Table S1). In the Markov cohort analysis, the cumulative lifetime potential of gastric cancer cases and deaths from gastric cancer in H. pylori eradication strategy compared with no eradication strategy decreased by 30 to 33% in patients under 50 and by 25 to 28% in patients aged 50 and over (Figure S5). H. pylori eradication reduced the incidence and mortality of gastric cancer more in the younger age groups than in the older age groups (Table 2, Figure S5).
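The probabilistic sensitivity analysis and acceptability curve described above follow a standard recipe: draw each uncertain input from its assigned distribution (beta for probabilities, log-normal for costs), recompute incremental costs and QALYs many times, and count how often the strategy is cost-effective at a given willingness-to-pay threshold. The sketch below illustrates that recipe with hypothetical distributions and a toy cost/QALY calculation; it does not reproduce the study's model.

```python
# Illustrative only: second-order Monte-Carlo probabilistic sensitivity analysis with a
# cost-effectiveness acceptability check at two willingness-to-pay thresholds.
# Distributions and the toy cost/QALY model are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
N_TRIALS = 10_000
WTP_THRESHOLDS = [50_000, 100_000]   # US$ per QALY gained

# Draw uncertain inputs: beta for probabilities, log-normal for costs (as in the methods).
p_cancer_no_erad = rng.beta(5, 995, N_TRIALS)               # annual cancer risk, no eradication
rr_after_erad    = rng.beta(30, 70, N_TRIALS)                # relative risk after eradication
cost_eradication = rng.lognormal(np.log(300), 0.2, N_TRIALS)
cost_cancer      = rng.lognormal(np.log(20_000), 0.3, N_TRIALS)
qaly_loss_cancer = rng.beta(20, 80, N_TRIALS) * 10           # QALYs lost per cancer case

# Toy comparison over a 30-year horizon (no discounting, for brevity).
years = 30
cancers_no_erad = p_cancer_no_erad * years
cancers_erad    = p_cancer_no_erad * rr_after_erad * years

delta_cost = cost_eradication + (cancers_erad - cancers_no_erad) * cost_cancer
delta_qaly = (cancers_no_erad - cancers_erad) * qaly_loss_cancer

for wtp in WTP_THRESHOLDS:
    # Net monetary benefit > 0 means eradication is cost-effective at this threshold.
    nmb = wtp * delta_qaly - delta_cost
    print(f"WTP US${wtp:,}/QALY: cost-effective in {np.mean(nmb > 0):.1%} of trials")
print(f"Eradication dominates (cheaper and more effective) in "
      f"{np.mean((delta_cost < 0) & (delta_qaly > 0)):.1%} of trials")
```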
null
null
[ "INTRODUCTION", "Study design and model structure", "H. pylori eradication strategy", "No eradication strategy", "Target population", "Epidemiologic parameters and clinical probabilities", "Costs", "Health utilities, effectiveness, and health outcomes", "Sensitivity analyses", "Base‐case analysis", "One‐way sensitivity analysis and probabilistic sensitivity analysis", "Cumulative lifetime economic and health outcomes", "AUTHOR CONTRIBUTIONS" ]
[ "More than half of the world's population is infected with Helicobacter pylori (H. pylori).\n1\n\nH. pylori infection causes chronic atrophic gastritis, a common stage of progression to gastric cancer, and is responsible for 98% of the causes of gastric cancer in Japan.\n2\n, \n3\n, \n4\n, \n5\n Japan has the third highest age‐standardized rate for gastric cancer in the world.\n6\n The incidence of gastric cancer in Japan is almost 10 times higher than that observed in the United States. The Taipei global consensus guidelines for screening and eradication of H. pylori for gastric cancer prevention recommend that mass screening and eradication of H. pylori should be considered in populations at higher risk of gastric cancer and that eradication therapy should be offered to all individuals infected with H. pylori.\n7\n In the guidelines for the management of H. pylori infection by the Japanese Society for Helicobacter Research, H. pylori eradication treatment is recommended to prevent gastric cancer for patients with H. pylori infection.\n8\n The Ministry of Health, Labour and Welfare approved expansion of National Health Insurance coverage for H. pylori eradication treatment in patients with chronic gastritis from February 2013.\n9\n During 2013‐2019, 8.50 million H. pylori‐positive patients received eradication treatment.\n10\n, \n11\n The number of deaths from gastric cancer is gradually declining, with 42,931 deaths in 2019 and 42,318 deaths in 2020 (Figure 1).\n12\n, \n13\n\n\nChanges in gastric cancer deaths in Japan from 2000 to 2020\nIn this study, we aimed to evaluate the economic and health effects of H. pylori eradication strategy in national gastric cancer prevention program in Japan.", "We constructed a cohort state‐transition model with a Markov cycle tree for two strategies: H. pylori eradication strategy and no eradication strategy, using a healthcare payer perspective and a lifetime horizon. A cycle length of one year was chosen. The half‐cycle correction was applied. In the model, decision branches leaded directly to one Markov node per intervention strategy and the first events were modeled within the Markov cycle tree (Figure 2). We used TreeAge Pro (TreeAge Software Inc., Williamstown, Massachusetts) for the Decision‐analytical calculations. As this was a modeling study with all inputs and parameters derived from the published literature and Japanese statistics, ethics approval was not required.\nSchematic depiction of the Markov cycle tree in the cohort state‐transition model. We show that health states in the model as ovals. In a yearly model cycle, transitions can occur between the health states and other health states, represented by the arrows. H. pylori = Helicobacter pylori; GC = gastric cancer\nH. pylori eradication strategy The H. pylori‐positive patient receives first‐line eradication treatment (proton‐pump inhibitor, clarithromycin, and amoxicillin). The patient who fails first‐line eradication treatment receives second‐line eradication treatment (proton‐pump inhibitor, metronidazole, and amoxicillin). We consider the eradication and compliance rates of first‐line and second‐line eradication treatments in the model. After successful H. pylori eradication, H. pylori‐positive changes to H. pylori‐negative. When the patient fails both treatments, H. pylori‐positive remains until death. We calculate the costs of H. pylori test, endoscopy, and two urea breath tests when the patient receives H. pylori eradication treatment. 
When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer followed by the Japanese gastric cancer treatment guidelines: endoscopic mucosal resection (EMR), endoscopic submucosal dissection treatment (ESD), surgery, chemotherapy, and radiotherapy with palliative care according to cancer stages, stage I‐IV.\n14\n The model includes the relative risk of developing gastric cancer after successful eradication, stage‐specific 5‐year survival rates, and mortality due to other causes (Table 1).\n12\n, \n15\n The patient aged 50 and over receives endoscopic screening every year from the year after eradication.\nBaseline estimates for selected variables\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n0.000771\n0.001167\n0.001881\n0.002803\n0.005122\n0.007949\n0.009117\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n0.000509\n0.00077\n0.00124\n0.00185\n0.00338\n0.00524\n0.00602\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n6.1\n14.7\n23.7\n33.7\n47.7\n58.6\n63.6\nStage I\nStage II\nStage III\nStage IV\n96.0\n69.2\n41.9\n6.3\n90‐99\n50‐80\n30‐50\n0‐20\n20‐29y\n30‐39y\n40‐49y\n50‐59y\n60‐69y\n70‐79y\n80‐89y\n123,986\n422,965\n1,083,631\n1,664,732\n2,860,031\n1,903,756\n440,503\n20‐29y\n30‐39y\n40‐49y\n50‐59y\n60‐69y\n70‐79y\n80‐89y\n773,480\n2,034,480\n4,256,520\n5,634,640\n7,331,490\n9,622,120\n5,933,880\nStage I\nStage II\nStage III\nStage IV\n62.5\n11.0\n7.5\n19.0\n30‐80\n5‐20\n2‐15\n10‐50\nStage I\nStage II\nStage III\nStage IV\n3675\n15,898\n24,841\n29,809\n1838‐7350\n7949‐31,796\n12,421‐49,682\n14,905‐59,618\n25,26\n\nAbbrevations H. pylori = Helicobacter pylori; N/A = not applicable\n\nThe H. pylori‐positive patient receives first‐line eradication treatment (proton‐pump inhibitor, clarithromycin, and amoxicillin). The patient who fails first‐line eradication treatment receives second‐line eradication treatment (proton‐pump inhibitor, metronidazole, and amoxicillin). We consider the eradication and compliance rates of first‐line and second‐line eradication treatments in the model. After successful H. pylori eradication, H. pylori‐positive changes to H. pylori‐negative. When the patient fails both treatments, H. pylori‐positive remains until death. We calculate the costs of H. pylori test, endoscopy, and two urea breath tests when the patient receives H. pylori eradication treatment. 
When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer followed by the Japanese gastric cancer treatment guidelines: endoscopic mucosal resection (EMR), endoscopic submucosal dissection treatment (ESD), surgery, chemotherapy, and radiotherapy with palliative care according to cancer stages, stage I‐IV.\n14\n The model includes the relative risk of developing gastric cancer after successful eradication, stage‐specific 5‐year survival rates, and mortality due to other causes (Table 1).\n12\n, \n15\n The patient aged 50 and over receives endoscopic screening every year from the year after eradication.\nBaseline estimates for selected variables\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n0.000771\n0.001167\n0.001881\n0.002803\n0.005122\n0.007949\n0.009117\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n0.000509\n0.00077\n0.00124\n0.00185\n0.00338\n0.00524\n0.00602\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n6.1\n14.7\n23.7\n33.7\n47.7\n58.6\n63.6\nStage I\nStage II\nStage III\nStage IV\n96.0\n69.2\n41.9\n6.3\n90‐99\n50‐80\n30‐50\n0‐20\n20‐29y\n30‐39y\n40‐49y\n50‐59y\n60‐69y\n70‐79y\n80‐89y\n123,986\n422,965\n1,083,631\n1,664,732\n2,860,031\n1,903,756\n440,503\n20‐29y\n30‐39y\n40‐49y\n50‐59y\n60‐69y\n70‐79y\n80‐89y\n773,480\n2,034,480\n4,256,520\n5,634,640\n7,331,490\n9,622,120\n5,933,880\nStage I\nStage II\nStage III\nStage IV\n62.5\n11.0\n7.5\n19.0\n30‐80\n5‐20\n2‐15\n10‐50\nStage I\nStage II\nStage III\nStage IV\n3675\n15,898\n24,841\n29,809\n1838‐7350\n7949‐31,796\n12,421‐49,682\n14,905‐59,618\n25,26\n\nAbbrevations H. pylori = Helicobacter pylori; N/A = not applicable\n\nNo eradication strategy The latest version of Japanese guidelines for effective secondary prevention of gastric cancer recommends upper gastrointestinal series and endoscopy in adults 50 years of age and older. In the model, the H. pylori‐positive patient does not receive H. pylori eradication treatment, and the patient aged 50 and over receives annual endoscopic screening annually. When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer.\nThe latest version of Japanese guidelines for effective secondary prevention of gastric cancer recommends upper gastrointestinal series and endoscopy in adults 50 years of age and older. In the model, the H. pylori‐positive patient does not receive H. pylori eradication treatment, and the patient aged 50 and over receives annual endoscopic screening annually. When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer.", "The H. pylori‐positive patient receives first‐line eradication treatment (proton‐pump inhibitor, clarithromycin, and amoxicillin). The patient who fails first‐line eradication treatment receives second‐line eradication treatment (proton‐pump inhibitor, metronidazole, and amoxicillin). We consider the eradication and compliance rates of first‐line and second‐line eradication treatments in the model. After successful H. pylori eradication, H. pylori‐positive changes to H. pylori‐negative. When the patient fails both treatments, H. pylori‐positive remains until death. We calculate the costs of H. pylori test, endoscopy, and two urea breath tests when the patient receives H. pylori eradication treatment. 
When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer followed by the Japanese gastric cancer treatment guidelines: endoscopic mucosal resection (EMR), endoscopic submucosal dissection treatment (ESD), surgery, chemotherapy, and radiotherapy with palliative care according to cancer stages, stage I‐IV.\n14\n The model includes the relative risk of developing gastric cancer after successful eradication, stage‐specific 5‐year survival rates, and mortality due to other causes (Table 1).\n12\n, \n15\n The patient aged 50 and over receives endoscopic screening every year from the year after eradication.\nBaseline estimates for selected variables\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n0.000771\n0.001167\n0.001881\n0.002803\n0.005122\n0.007949\n0.009117\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n0.000509\n0.00077\n0.00124\n0.00185\n0.00338\n0.00524\n0.00602\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n6.1\n14.7\n23.7\n33.7\n47.7\n58.6\n63.6\nStage I\nStage II\nStage III\nStage IV\n96.0\n69.2\n41.9\n6.3\n90‐99\n50‐80\n30‐50\n0‐20\n20‐29y\n30‐39y\n40‐49y\n50‐59y\n60‐69y\n70‐79y\n80‐89y\n123,986\n422,965\n1,083,631\n1,664,732\n2,860,031\n1,903,756\n440,503\n20‐29y\n30‐39y\n40‐49y\n50‐59y\n60‐69y\n70‐79y\n80‐89y\n773,480\n2,034,480\n4,256,520\n5,634,640\n7,331,490\n9,622,120\n5,933,880\nStage I\nStage II\nStage III\nStage IV\n62.5\n11.0\n7.5\n19.0\n30‐80\n5‐20\n2‐15\n10‐50\nStage I\nStage II\nStage III\nStage IV\n3675\n15,898\n24,841\n29,809\n1838‐7350\n7949‐31,796\n12,421‐49,682\n14,905‐59,618\n25,26\n\nAbbrevations H. pylori = Helicobacter pylori; N/A = not applicable\n", "The latest version of Japanese guidelines for effective secondary prevention of gastric cancer recommends upper gastrointestinal series and endoscopy in adults 50 years of age and older. In the model, the H. pylori‐positive patient does not receive H. pylori eradication treatment, and the patient aged 50 and over receives annual endoscopic screening annually. When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer.", "We targeted a hypothetical cohort of Japanese H. pylori‐positive chronic gastritis patients who had the initial endoscopic diagnosis needing H. pylori eradication at the age of 20, 30, 40, 50, 60, 70, and 80. Children and adolescents (age <20 y) were not included in the model.", "Epidemiologic parameters and clinical probabilities were collected using MEDLINE from 2000 to June 2, 2021, national census, and Japanese cancer statistics (Table 1).\n2\n, \n3\n, \n10\n, \n11\n, \n12\n, \n15\n, \n16\n, \n17\n, \n18\n, \n19\n, \n20\n We estimated annual age‐specific numbers of H. pylori‐positive patients with eradication treatment from the literature\n10\n, \n11\n and expert opinion (Table1, Figure S1A). The numbers of H. pylori‐positive patients with and without eradication were estimated from the literature\n16\n and national census (Table1, Figure S1B). Relative risk of gastric cancer development after successful eradication\n15\n, and eradication and compliance rates of first‐ and second‐line eradication treatments\n19\n were obtained from the literature. Age‐specific gastric cancer incidence and stage‐specific 5‐year survival rate were obtained from Japanese cancer statistics.\n10\n The responsibility rate of H. pylori infection for gastric cancer development was assumed to be 98%.\n2\n, \n3\n, \n4\n, \n5\n The incidence of gastric cancer in H. pylori‐positive patients was estimated using the responsibility rate of H. 
pylori infection for gastric cancer development and the prevalence of H. pylori infection.\n2\n, \n3\n, \n4\n, \n5\n, \n12\n, \n16\n The incidence of gastric cancer in H. pylori‐positive patients after successful eradication treatment was estimated using the relative risk of gastric cancer development after successful eradication treatment.\n2\n, \n3\n, \n4\n, \n5\n, \n12\n, \n15\n, \n16\n The sensitivity and specificity of endoscopy were obtained from the literature.\n20\n\n", "Costs were calculated based on the costs from the Japanese national fee schedule\n17\n and were adjusted to 2019 Japanese yen, using the medical care component of the Japanese consumer price index and were converted to US dollars, using the Organisation for Economic Co‐operation and Development (OECD) purchasing power parity rate in 2019 (US$1 = ¥100.64) (Table 1).\n21\n All costs were discounted by 3%.\n22\n, \n23\n Incremental cost‐effectiveness ratios (ICERs) were calculated and compared to two willingness‐to‐pay levels of US$50,000 per quality‐adjusted life‐year (QALY) gained and US$100,000 per QALY gained.\n24\n Age‐specific and total cumulative lifetime cost savings of H. pylori eradication strategy compared with no eradication strategy were calculated.", "Health status was included to represent possible eight clinical states: (i) No H. pylori infection, (ii) H. pylori infection, (iii) gastric cancer on stage I; (iv) gastric cancer on stage II; (v) gastric cancer on stage III; (vi) gastric cancer on stage IV; (vii) cured gastric cancer; and (viii) death (Figure 2). Health state utilities were obtained from the literature and were calculated using utility weights (Table 1).\n25\n, \n26\n The annual discounting of the utilities in this analysis was set at a rate of 3%.\n22\n, \n23\n\n\nThe health outcomes were QALYs, life expectancy life‐years (LYs), gastric cancer cases, and deaths from gastric cancer. Age‐specific and total cumulative lifetime health outcomes of H. pylori eradication strategy compared with no eradication strategy were calculated and evaluated.", "We performed a one‐way sensitivity analysis to determine which strategy was more cost‐effective when we tested a single variable over a wide range of possible values while holding all other variables constant, and performed a probabilistic sensitivity analysis using a second‐order Monte‐Carlo simulation for 10,000 trials to assess the impact of the uncertainty in the model on the base‐case estimates. The uncertainty had a beta distribution in clinical probabilities and accuracies, and a log‐normal distribution in costs.", "\nH. pylori eradication strategy was less costly and yielded greater benefits than no eradication strategy for all age groups (Table 2). No eradication strategy was dominated for all age groups. The patients aged 40 had the highest per capita cost‐savings. Per capita gains of QALYs in younger patients were higher than in older patients (Table 2). From 2013 to 2019, the patients aged 60 had the highest cost savings and health outcomes (Table S1).", "Incremental cost‐effectiveness ratio tornado diagram of H. pylori eradication strategy versus no eradication strategy showed that cost‐effectiveness was not sensitive to any variables in all age groups (Figure 3A, Figure S2).\nIn probabilistic sensitivity analysis using Monte‐Carlo simulation for 10,000 trials, the acceptability curve showed that H. 
pylori eradication strategy was cost‐effective 100% of the time at two willingness‐to‐pay thresholds of US$50,000 per QALY gained and US$100,000 per QALY gained in all age groups (Figure 3B, Figure S3). Incremental cost‐effectiveness scatterplots showed that H. pylori eradication strategy dominated no‐eradication strategy in more than 9800 trials in all age groups (Figure 3C, Figure S4). The results showed strong robustness of H. pylori eradication strategy in all age groups.\nResults of the base‐case analysis\nICER\n(US$/\nQALY gained)\nLife expectancy\nlife‐years (LYs)\nICER\n(US$/LY\ngained)\n\nAbbrevations H. pylori = Helicobacter pylori; QALY = quality‐adjusted life‐year; LY = life expectancy life‐years; ICER = incremental cost‐effectiveness ratio; dominated = less effective and more costly than others;\n\nOne‐way sensitivity analysis and probabilistic sensitivity analysis in 60‐year‐old H. pylori‐positive patients. A, The incremental cost‐effectiveness ratio (ICER) tornado diagram for H. pylori eradication strategy versus no eradication strategy. The cost‐effectiveness of H. pylori eradication strategy was not sensitive to changes in any variables. B, Cost‐effectiveness acceptability curve for H. pylori eradication strategy versus no eradication strategy. The probabilistic sensitivity analysis analyzed 10,000 simulations of the model in which input parameters were randomly varied across pre‐specified statistical distributions. The x‐axis represents the willingness‐to‐pay threshold. The acceptability curve showed that H. pylori eradication strategy was cost‐effective 100% of the time at two willingness‐to‐pay thresholds of US$50,000 per QALY gained and US$100,000 per QALY gained. C, Incremental cost‐effectiveness scatterplots with 95% confidence ellipses for H. pylori eradication strategy versus no eradication strategy. Each dot represents a single simulation for a total of 10,000 simulations. Incremental cost‐effectiveness scatterplots showed that H. pylori eradication strategy dominated no‐eradication strategy in 9811 trials, and that H. pylori eradication strategy was more cost‐effective than no‐eradication strategy in 189 trials. EV = expected value; H. pylori = Helicobacter pylori; ICER = incremental cost‐effectivenessratio; QALY = quality‐adjusted life‐year; WTP = willingness‐to‐pay threshold", "\nH. pylori‐positive patients aged 60 had the highest cumulative lifetime economic and health outcomes (Table S1). From 2013 to 2019 for 8.50 million patients, H. pylori eradication saved US$3.75 billion, increased 11.11 million QALYs and 0.45 million LYs, and prevented 284,188 cases and 65,060 deaths. For 35.59 million patients without eradication, H. pylori eradication has the potential to save US$14.82 billion, increase 43.10 million QALYs and 1.66 million LYs, and prevent 1,084,532 cases and 250,256 deaths (Table S1).\nIn the Markov cohort analysis, the cumulative lifetime potential of gastric cancer cases and deaths from gastric cancer in H. pylori eradication strategy compared with no eradication strategy decreased by 30 to 33% in patients under 50 and by 25 to 28% in patients aged 50 and over (Figure S5). H. pylori eradication reduced the incidence and mortality of gastric cancer in the younger age groups greater than in the older age groups (Table 2, Figure S5).", "AK had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. 
AK and MA approved the final version of the manuscript, and involved in concept, design, and critical revision of the manuscript for important intellectual content. AK involved in acquisition, analysis, interpretation of data, drafting of the manuscript, and administrative, technical, or material support. MA involved in supervision." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study design and model structure", "H. pylori eradication strategy", "No eradication strategy", "Target population", "Epidemiologic parameters and clinical probabilities", "Costs", "Health utilities, effectiveness, and health outcomes", "Sensitivity analyses", "RESULTS", "Base‐case analysis", "One‐way sensitivity analysis and probabilistic sensitivity analysis", "Cumulative lifetime economic and health outcomes", "DISCUSSION", "CONFLICTS OF INTEREST", "AUTHOR CONTRIBUTIONS", "Supporting information" ]
[ "More than half of the world's population is infected with Helicobacter pylori (H. pylori).\n1\n\nH. pylori infection causes chronic atrophic gastritis, a common stage of progression to gastric cancer, and is responsible for 98% of the causes of gastric cancer in Japan.\n2\n, \n3\n, \n4\n, \n5\n Japan has the third highest age‐standardized rate for gastric cancer in the world.\n6\n The incidence of gastric cancer in Japan is almost 10 times higher than that observed in the United States. The Taipei global consensus guidelines for screening and eradication of H. pylori for gastric cancer prevention recommend that mass screening and eradication of H. pylori should be considered in populations at higher risk of gastric cancer and that eradication therapy should be offered to all individuals infected with H. pylori.\n7\n In the guidelines for the management of H. pylori infection by the Japanese Society for Helicobacter Research, H. pylori eradication treatment is recommended to prevent gastric cancer for patients with H. pylori infection.\n8\n The Ministry of Health, Labour and Welfare approved expansion of National Health Insurance coverage for H. pylori eradication treatment in patients with chronic gastritis from February 2013.\n9\n During 2013‐2019, 8.50 million H. pylori‐positive patients received eradication treatment.\n10\n, \n11\n The number of deaths from gastric cancer is gradually declining, with 42,931 deaths in 2019 and 42,318 deaths in 2020 (Figure 1).\n12\n, \n13\n\n\nChanges in gastric cancer deaths in Japan from 2000 to 2020\nIn this study, we aimed to evaluate the economic and health effects of H. pylori eradication strategy in national gastric cancer prevention program in Japan.", "Study design and model structure We constructed a cohort state‐transition model with a Markov cycle tree for two strategies: H. pylori eradication strategy and no eradication strategy, using a healthcare payer perspective and a lifetime horizon. A cycle length of one year was chosen. The half‐cycle correction was applied. In the model, decision branches leaded directly to one Markov node per intervention strategy and the first events were modeled within the Markov cycle tree (Figure 2). We used TreeAge Pro (TreeAge Software Inc., Williamstown, Massachusetts) for the Decision‐analytical calculations. As this was a modeling study with all inputs and parameters derived from the published literature and Japanese statistics, ethics approval was not required.\nSchematic depiction of the Markov cycle tree in the cohort state‐transition model. We show that health states in the model as ovals. In a yearly model cycle, transitions can occur between the health states and other health states, represented by the arrows. H. pylori = Helicobacter pylori; GC = gastric cancer\nH. pylori eradication strategy The H. pylori‐positive patient receives first‐line eradication treatment (proton‐pump inhibitor, clarithromycin, and amoxicillin). The patient who fails first‐line eradication treatment receives second‐line eradication treatment (proton‐pump inhibitor, metronidazole, and amoxicillin). We consider the eradication and compliance rates of first‐line and second‐line eradication treatments in the model. After successful H. pylori eradication, H. pylori‐positive changes to H. pylori‐negative. When the patient fails both treatments, H. pylori‐positive remains until death. We calculate the costs of H. pylori test, endoscopy, and two urea breath tests when the patient receives H. pylori eradication treatment. 
When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer followed by the Japanese gastric cancer treatment guidelines: endoscopic mucosal resection (EMR), endoscopic submucosal dissection treatment (ESD), surgery, chemotherapy, and radiotherapy with palliative care according to cancer stages, stage I‐IV.\n14\n The model includes the relative risk of developing gastric cancer after successful eradication, stage‐specific 5‐year survival rates, and mortality due to other causes (Table 1).\n12\n, \n15\n The patient aged 50 and over receives endoscopic screening every year from the year after eradication.\nBaseline estimates for selected variables\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n0.000771\n0.001167\n0.001881\n0.002803\n0.005122\n0.007949\n0.009117\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n0.000509\n0.00077\n0.00124\n0.00185\n0.00338\n0.00524\n0.00602\n20y\n30y\n40y\n50y\n60y\n70y\n80y\n6.1\n14.7\n23.7\n33.7\n47.7\n58.6\n63.6\nStage I\nStage II\nStage III\nStage IV\n96.0\n69.2\n41.9\n6.3\n90‐99\n50‐80\n30‐50\n0‐20\n20‐29y\n30‐39y\n40‐49y\n50‐59y\n60‐69y\n70‐79y\n80‐89y\n123,986\n422,965\n1,083,631\n1,664,732\n2,860,031\n1,903,756\n440,503\n20‐29y\n30‐39y\n40‐49y\n50‐59y\n60‐69y\n70‐79y\n80‐89y\n773,480\n2,034,480\n4,256,520\n5,634,640\n7,331,490\n9,622,120\n5,933,880\nStage I\nStage II\nStage III\nStage IV\n62.5\n11.0\n7.5\n19.0\n30‐80\n5‐20\n2‐15\n10‐50\nStage I\nStage II\nStage III\nStage IV\n3675\n15,898\n24,841\n29,809\n1838‐7350\n7949‐31,796\n12,421‐49,682\n14,905‐59,618\n25,26\n\nAbbrevations H. pylori = Helicobacter pylori; N/A = not applicable\n\nThe H. pylori‐positive patient receives first‐line eradication treatment (proton‐pump inhibitor, clarithromycin, and amoxicillin). The patient who fails first‐line eradication treatment receives second‐line eradication treatment (proton‐pump inhibitor, metronidazole, and amoxicillin). We consider the eradication and compliance rates of first‐line and second‐line eradication treatments in the model. After successful H. pylori eradication, H. pylori‐positive changes to H. pylori‐negative. When the patient fails both treatments, H. pylori‐positive remains until death. We calculate the costs of H. pylori test, endoscopy, and two urea breath tests when the patient receives H. pylori eradication treatment. 
When the patient has gastric cancer, the patient receives standard treatment according to the Japanese gastric cancer treatment guidelines: endoscopic mucosal resection (EMR), endoscopic submucosal dissection (ESD), surgery, chemotherapy, and radiotherapy with palliative care, depending on cancer stage (stages I-IV).14 The model includes the relative risk of developing gastric cancer after successful eradication, stage-specific 5-year survival rates, and mortality from other causes (Table 1).12, 15 Patients aged 50 and over receive endoscopic screening every year from the year after eradication.

[Table 1. Baseline estimates for selected variables; the values were flattened in extraction and row labels are not recoverable. The cells comprise two sets of age-specific annual probabilities and one set of age-specific percentages for ages 20-80; stage I-IV values of 96.0/69.2/41.9/6.3 with ranges; age-group counts of H. pylori-positive patients with eradication (123,986 to 2,860,031 per group) and without eradication (773,480 to 9,622,120 per group); stage I-IV values of 62.5/11.0/7.5/19.0 with ranges; and stage I-IV costs of 3,675/15,898/24,841/29,809 with ranges. Abbreviations: H. pylori = Helicobacter pylori; N/A = not applicable.]

No eradication strategy
The latest version of the Japanese guidelines for effective secondary prevention of gastric cancer recommends upper gastrointestinal series and endoscopy in adults 50 years of age and older. In the model, the H. pylori-positive patient does not receive H. pylori eradication treatment, and patients aged 50 and over receive annual endoscopic screening. When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer.

Study design and model structure
We constructed a cohort state-transition model with a Markov cycle tree for two strategies, H. pylori eradication and no eradication, from a healthcare payer perspective over a lifetime horizon. A cycle length of one year was chosen, and a half-cycle correction was applied. In the model, decision branches led directly to one Markov node per intervention strategy, and the first events were modeled within the Markov cycle tree (Figure 2). We used TreeAge Pro (TreeAge Software Inc., Williamstown, Massachusetts) for the decision-analytical calculations. As this was a modeling study with all inputs and parameters derived from the published literature and Japanese statistics, ethics approval was not required.

Figure 2. Schematic depiction of the Markov cycle tree in the cohort state-transition model. Health states are shown as ovals; in each yearly model cycle, transitions between health states can occur, represented by arrows. H. pylori = Helicobacter pylori; GC = gastric cancer
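The yearly-cycle cohort logic described above can be made concrete with a small numerical sketch. The following Python snippet is illustrative only: the state list mirrors the model's eight health states, but every transition probability is a hypothetical placeholder rather than one of the study's calibrated, age-specific inputs, and the half-cycle correction is applied in the simplest standard way (averaging adjacent cycle occupancies).

```python
import numpy as np

# Illustrative cohort state-transition (Markov) model with a 1-year cycle.
# States follow the paper's eight health states; probabilities are placeholders.
STATES = ["HP_negative", "HP_positive", "GC_I", "GC_II", "GC_III", "GC_IV",
          "GC_cured", "Dead"]

# Hypothetical yearly transition matrix (each row sums to 1).
P = np.array([
    # HPneg  HPpos   GC_I    GC_II   GC_III  GC_IV   cured   Dead
    [0.994,  0.0,    0.001,  0.0,    0.0,    0.0,    0.0,    0.005],  # HP_negative
    [0.0,    0.990,  0.003,  0.001,  0.0005, 0.0005, 0.0,    0.005],  # HP_positive
    [0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.95,   0.05],   # GC stage I
    [0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.80,   0.20],   # GC stage II
    [0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.55,   0.45],   # GC stage III
    [0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.10,   0.90],   # GC stage IV
    [0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.990,  0.010],  # cured GC
    [0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    1.0],    # Dead (absorbing)
])

def run_cohort(p0, P, n_cycles):
    """Return an (n_cycles + 1, n_states) occupancy trace for a closed cohort."""
    trace = [np.asarray(p0, dtype=float)]
    for _ in range(n_cycles):
        trace.append(trace[-1] @ P)
    return np.vstack(trace)

# Everyone starts H. pylori-positive at model entry.
p0 = np.zeros(len(STATES))
p0[STATES.index("HP_positive")] = 1.0
trace = run_cohort(p0, P, n_cycles=60)

# Life-years with a half-cycle correction: average adjacent cycles of "alive".
alive = 1.0 - trace[:, STATES.index("Dead")]
life_years = np.sum((alive[:-1] + alive[1:]) / 2.0)
print(f"Undiscounted life expectancy (illustrative): {life_years:.2f} years")
```

In the actual analysis the transition probabilities differ by strategy (eradication changes the infection state and lowers subsequent cancer risk) and by age; the sketch only shows how a cohort trace and half-cycle-corrected outcomes are accumulated cycle by cycle.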
Target population
We targeted a hypothetical cohort of Japanese H. pylori-positive chronic gastritis patients who had an initial endoscopic diagnosis requiring H. pylori eradication at the age of 20, 30, 40, 50, 60, 70, or 80. Children and adolescents (age <20 y) were not included in the model.

Epidemiologic parameters and clinical probabilities
Epidemiologic parameters and clinical probabilities were collected from MEDLINE (2000 to June 2, 2021), the national census, and Japanese cancer statistics (Table 1).2, 3, 10, 11, 12, 15, 16, 17, 18, 19, 20 We estimated annual age-specific numbers of H. pylori-positive patients receiving eradication treatment from the literature10, 11 and expert opinion (Table 1, Figure S1A). The numbers of H. pylori-positive patients with and without eradication were estimated from the literature16 and the national census (Table 1, Figure S1B). The relative risk of gastric cancer development after successful eradication15 and the eradication and compliance rates of first- and second-line eradication treatments19 were obtained from the literature. Age-specific gastric cancer incidence and stage-specific 5-year survival rates were obtained from Japanese cancer statistics.10 The responsibility rate (attributable fraction) of H. pylori infection for gastric cancer development was assumed to be 98%.2, 3, 4, 5 The incidence of gastric cancer in H. pylori-positive patients was estimated from this responsibility rate and the prevalence of H. pylori infection.2, 3, 4, 5, 12, 16 The incidence of gastric cancer in H. pylori-positive patients after successful eradication treatment was estimated by applying the relative risk of gastric cancer development after successful eradication.2, 3, 4, 5, 12, 15, 16 The sensitivity and specificity of endoscopy were obtained from the literature.20
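The incidence derivation described in this subsection is simple arithmetic: population incidence is multiplied by the responsibility (attributable) rate and divided by the prevalence of infection to obtain the risk among infected patients, and a relative risk is then applied to obtain the post-eradication risk. The sketch below follows that logic; the numeric prevalence, incidence, and relative-risk values are placeholders for illustration, not the study's Table 1 inputs.

```python
# Illustrative derivation of gastric cancer incidence in H. pylori-positive
# patients, and after successful eradication, from population-level inputs.
RESPONSIBILITY_RATE = 0.98   # share of gastric cancers attributable to H. pylori

def incidence_hp_positive(pop_incidence, hp_prevalence):
    """Annual gastric cancer risk among H. pylori-positive individuals."""
    return pop_incidence * RESPONSIBILITY_RATE / hp_prevalence

def incidence_after_eradication(pop_incidence, hp_prevalence, relative_risk):
    """Annual risk after successful eradication (relative risk vs. still infected)."""
    return relative_risk * incidence_hp_positive(pop_incidence, hp_prevalence)

# Example for one age group (hypothetical values):
pop_incidence = 0.0027   # overall annual gastric cancer incidence in the age group
hp_prevalence = 0.48     # H. pylori prevalence in the age group
relative_risk = 0.35     # post-eradication relative risk

print(incidence_hp_positive(pop_incidence, hp_prevalence))                   # ~0.0055
print(incidence_after_eradication(pop_incidence, hp_prevalence, relative_risk))
```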
Costs
Costs were taken from the Japanese national fee schedule,17 adjusted to 2019 Japanese yen using the medical care component of the Japanese consumer price index, and converted to US dollars using the Organisation for Economic Co-operation and Development (OECD) purchasing power parity rate for 2019 (US$1 = ¥100.64) (Table 1).21 All costs were discounted at 3% per year.22, 23 Incremental cost-effectiveness ratios (ICERs) were calculated and compared against two willingness-to-pay levels, US$50,000 and US$100,000 per quality-adjusted life-year (QALY) gained.24 Age-specific and total cumulative lifetime cost savings of the H. pylori eradication strategy compared with no eradication were calculated.

Health utilities, effectiveness, and health outcomes
Health status was represented by eight possible clinical states: (i) no H. pylori infection; (ii) H. pylori infection; (iii) stage I gastric cancer; (iv) stage II gastric cancer; (v) stage III gastric cancer; (vi) stage IV gastric cancer; (vii) cured gastric cancer; and (viii) death (Figure 2). Health state utilities were obtained from the literature and calculated using utility weights (Table 1).25, 26 Utilities were discounted at an annual rate of 3%.22, 23 The health outcomes were QALYs, life expectancy in life-years (LYs), gastric cancer cases, and deaths from gastric cancer. Age-specific and total cumulative lifetime health outcomes of the H. pylori eradication strategy compared with no eradication were calculated and evaluated.
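The cost and outcome calculations above (currency conversion at the 2019 PPP rate, 3% annual discounting, and ICER comparison against willingness-to-pay thresholds) follow standard formulas. A minimal sketch is given below; the yearly cost and QALY totals passed to the functions are invented for illustration, not results of the study.

```python
PPP_JPY_PER_USD = 100.64             # 2019 OECD purchasing power parity (as in the text)
DISCOUNT_RATE = 0.03                 # 3% annual discounting of costs and utilities
WTP_THRESHOLDS = (50_000, 100_000)   # US$ per QALY gained

def yen_to_usd(cost_jpy_2019):
    """Convert a 2019-yen cost to US dollars at the PPP rate."""
    return cost_jpy_2019 / PPP_JPY_PER_USD

def discounted_total(yearly_values, rate=DISCOUNT_RATE):
    """Present value of a yearly stream (year 0 undiscounted)."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(yearly_values))

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """ICER of strategy A versus strategy B (US$ per QALY gained).

    Division by zero when the QALY difference is exactly zero is not handled
    in this sketch.
    """
    d_cost, d_qaly = cost_a - cost_b, qaly_a - qaly_b
    if d_cost <= 0 and d_qaly >= 0:
        return "A dominates B"   # cheaper and at least as effective
    return d_cost / d_qaly

# Hypothetical discounted lifetime totals for one age cohort:
cost_erad, qaly_erad = 4_200.0, 16.8    # eradication strategy
cost_none, qaly_none = 4_750.0, 16.5    # no eradication strategy
print(icer(cost_erad, qaly_erad, cost_none, qaly_none))   # -> "A dominates B"
```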
Sensitivity analyses
We performed one-way sensitivity analyses to determine which strategy was more cost-effective when a single variable was varied over a wide range of plausible values while all other variables were held constant, and a probabilistic sensitivity analysis using a second-order Monte Carlo simulation with 10,000 trials to assess the impact of parameter uncertainty on the base-case estimates. Uncertainty was modeled with beta distributions for clinical probabilities and test accuracies and log-normal distributions for costs.
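The probabilistic sensitivity analysis can be sketched as follows. The beta and log-normal parameters and the toy outcome model that links the sampled inputs to incremental costs and QALYs are placeholders standing in for re-running the full Markov model; only the sampling scheme (second-order Monte Carlo with 10,000 trials) mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS = 10_000

def lognormal_from_mean_sd(mean, sd):
    """Sample a log-normal parameterized by its arithmetic mean and SD."""
    sigma2 = np.log(1.0 + (sd / mean) ** 2)
    mu = np.log(mean) - sigma2 / 2.0
    return rng.lognormal(mu, np.sqrt(sigma2), N_TRIALS)

# Second-order sampling of uncertain inputs (placeholder parameters):
p_erad_success = rng.beta(90, 10, N_TRIALS)                  # first-line eradication rate
p_gc_annual    = rng.beta(5, 995, N_TRIALS)                  # annual cancer risk if infected
cost_erad      = lognormal_from_mean_sd(600.0, 150.0)        # US$, eradication episode
cost_gc        = lognormal_from_mean_sd(20_000.0, 8_000.0)   # US$, cancer treatment

# Toy per-trial outcome model (stands in for the cohort Markov model):
d_cost = cost_erad - p_erad_success * p_gc_annual * 30 * cost_gc   # 30-year horizon
d_qaly = p_erad_success * p_gc_annual * 30 * 5.0                   # QALYs saved per averted case

for wtp in (50_000, 100_000):
    prob_ce = np.mean(wtp * d_qaly - d_cost > 0)   # net monetary benefit > 0
    print(f"P(cost-effective at WTP ${wtp:,}/QALY) = {prob_ce:.3f}")
```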
Base-case analysis
The H. pylori eradication strategy was less costly and yielded greater benefits than the no eradication strategy for all age groups (Table 2); the no eradication strategy was dominated in every age group. Patients aged 40 had the highest per capita cost savings, and per capita QALY gains were higher in younger than in older patients (Table 2). From 2013 to 2019, patients aged 60 had the highest cost savings and health outcomes (Table S1).
One-way sensitivity analysis and probabilistic sensitivity analysis
The incremental cost-effectiveness ratio tornado diagram for the H. pylori eradication strategy versus no eradication showed that cost-effectiveness was not sensitive to any variable in any age group (Figure 3A, Figure S2). In the probabilistic sensitivity analysis using Monte Carlo simulation with 10,000 trials, the acceptability curve showed that the H. pylori eradication strategy was cost-effective 100% of the time at willingness-to-pay thresholds of US$50,000 and US$100,000 per QALY gained in all age groups (Figure 3B, Figure S3). Incremental cost-effectiveness scatterplots showed that the eradication strategy dominated the no-eradication strategy in more than 9,800 of 10,000 trials in all age groups (Figure 3C, Figure S4). These results indicate strong robustness of the H. pylori eradication strategy in all age groups.

[Table 2. Results of the base-case analysis: costs, QALYs, ICER (US$ per QALY gained), life expectancy in life-years (LYs), and ICER (US$ per LY gained) by strategy and age group; the numeric cells were lost in extraction. Abbreviations: H. pylori = Helicobacter pylori; QALY = quality-adjusted life-year; LY = life-year; ICER = incremental cost-effectiveness ratio; dominated = less effective and more costly.]

Figure 3. One-way sensitivity analysis and probabilistic sensitivity analysis in 60-year-old H. pylori-positive patients. A, Incremental cost-effectiveness ratio (ICER) tornado diagram for the H. pylori eradication strategy versus no eradication; cost-effectiveness was not sensitive to changes in any variable. B, Cost-effectiveness acceptability curve; the probabilistic sensitivity analysis ran 10,000 simulations of the model with input parameters randomly drawn from pre-specified statistical distributions, and the x-axis shows the willingness-to-pay threshold. The eradication strategy was cost-effective 100% of the time at thresholds of US$50,000 and US$100,000 per QALY gained. C, Incremental cost-effectiveness scatterplots with 95% confidence ellipses; each dot is one of 10,000 simulations. The eradication strategy dominated the no-eradication strategy in 9,811 trials and was more cost-effective than no eradication in the remaining 189 trials. EV = expected value; H. pylori = Helicobacter pylori; ICER = incremental cost-effectiveness ratio; QALY = quality-adjusted life-year; WTP = willingness-to-pay threshold
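The acceptability curve and dominance counts reported above are computed from the simulated (incremental cost, incremental QALY) pairs in a standard way: a trial counts as cost-effective at a given willingness-to-pay if its net monetary benefit is positive, and as dominant if the strategy is both cheaper and more effective. A hedged sketch with simulated placeholder draws:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder PSA output: incremental cost (US$) and incremental QALYs of
# eradication versus no eradication for 10,000 simulated trials.
d_cost = rng.normal(-550.0, 200.0, 10_000)   # negative -> eradication is cheaper
d_qaly = rng.normal(0.30, 0.10, 10_000)      # positive -> eradication is more effective

def acceptability(d_cost, d_qaly, wtp):
    """Share of trials in which eradication is cost-effective at the given WTP."""
    return np.mean(wtp * d_qaly - d_cost > 0)

dominant = np.mean((d_cost < 0) & (d_qaly > 0))   # cheaper AND more effective
print(f"Dominant in {dominant:.1%} of trials")
for wtp in (50_000, 100_000):
    share = acceptability(d_cost, d_qaly, wtp)
    print(f"Cost-effective at ${wtp:,}/QALY in {share:.1%} of trials")
```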
Cumulative lifetime economic and health outcomes
H. pylori-positive patients aged 60 had the highest cumulative lifetime economic and health outcomes (Table S1). For the 8.50 million patients treated from 2013 to 2019, H. pylori eradication saved US$3.75 billion, added 11.11 million QALYs and 0.45 million LYs, and prevented 284,188 gastric cancer cases and 65,060 deaths. For the 35.59 million patients who have not received eradication, H. pylori eradication has the potential to save US$14.82 billion, add 43.10 million QALYs and 1.66 million LYs, and prevent 1,084,532 cases and 250,256 deaths (Table S1).
In the Markov cohort analysis, the cumulative lifetime numbers of gastric cancer cases and deaths from gastric cancer under the H. pylori eradication strategy, compared with no eradication, decreased by 30 to 33% in patients under 50 and by 25 to 28% in patients aged 50 and over (Figure S5). H. pylori eradication therefore reduced the incidence and mortality of gastric cancer more in the younger age groups than in the older age groups (Table 2, Figure S5).
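The population totals in this subsection follow from scaling per-patient model outputs by the number of treated (8.50 million) or untreated (35.59 million) H. pylori-positive patients. The per-patient figures in the snippet below are not reported in the paper; they are back-calculated approximations from the stated totals, used only to show the arithmetic.

```python
# Scaling per-patient model outputs to population totals (illustrative values).
patients_treated_2013_2019 = 8_500_000

per_patient = {   # approximate lifetime differences per treated patient (back-calculated)
    "cost_saving_usd": 441.0,
    "qalys_gained": 1.31,
    "life_years_gained": 0.053,
}

totals = {k: v * patients_treated_2013_2019 for k, v in per_patient.items()}
print(f"Total cost saving: US${totals['cost_saving_usd'] / 1e9:.2f} billion")      # ~3.75
print(f"Total QALYs gained: {totals['qalys_gained'] / 1e6:.2f} million")            # ~11.1
print(f"Total life-years gained: {totals['life_years_gained'] / 1e6:.2f} million")  # ~0.45
```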
DISCUSSION
To the best of our knowledge, this is the first study to assess the economic and health impacts of a population-wide H. pylori eradication strategy within a national gastric cancer prevention program covered by National Health Insurance.
We demonstrated that the population-wide H. pylori eradication strategy reduced costs, prevented gastric cancer, and reduced deaths from gastric cancer for all age groups in a modeling study with real-life settings in Japan, even though most older adults with gastric mucosal atrophy require more than 10 years of follow-up endoscopic screening after successful H. pylori eradication.27, 28 The cost savings of the H. pylori eradication strategy from 2013 to 2019 were US$3.75 billion, 10.46 times the annual budget for cancer control in Japan. This means that promoting an H. pylori eradication strategy focused on primary prevention of gastric cancer not only saves many lives from gastric cancer but also produces significant savings in the national budget.
It is well known that the benefits of H. pylori eradication on the reduction of gastric cancer risk are greater in younger age groups than in older age groups. Young individuals would benefit most from H. pylori eradication because it cures H. pylori-related gastritis, reduces the risk of gastric cancer, and reduces transmission to their children.7 This modeling study, which used a constant risk of gastric cancer development after successful eradication treatment, showed that H. pylori eradication reduced the incidence and mortality of gastric cancer more in the younger age groups than in the older age groups. If the risk of gastric cancer development after successful eradication could be further reduced in the younger age groups, even larger reductions in gastric cancer incidence and mortality could be expected in those groups.
Surveillance of the local antibiotic resistance of H. pylori is recommended to identify the optimal empirical therapy for H. pylori eradication in each country.7 Chiang et al demonstrated no change in the antibiotic resistance rate of H. pylori through the selection of effective eradication regimens and retesting of those who had completed H. pylori treatments in a mass H. pylori eradication program.29
Guo et al found that successful H. pylori eradication potentially restored the gastric microbiota to a status similar to that of uninfected individuals and showed beneficial effects on the gut microbiota.30 Liou et al showed that H. pylori eradication had no effect on the antibiotic resistance of E. coli and produced no significant change in the prevalence of metabolic syndrome.31 These recent studies suggest that an H. pylori eradication strategy with effective regimens and high compliance rates could provide significant benefits with minimal adverse effects in high-risk countries.
Several economic analyses have suggested that H. pylori screening followed by eradication treatment is cost-effective for preventing gastric cancer, particularly in high-risk populations.32-40 Han et al demonstrated that H. pylori screening and eradication treatment effectively reduced the morbidity of gastric cancer and cancer-related costs in asymptomatic infected individuals in China.33 Chen et al showed that a population-based screen-and-treat strategy for H. pylori infection was cheaper and more effective for preventing gastric cancer, peptic ulcer disease, and nonulcer dyspepsia in the asymptomatic general population than a no-screen strategy in China.34 Zheng et al found that H. pylori eradication treatment was an economical strategy with lower costs and greater efficacy in first-degree relatives of patients with gastric cancer in China.35 Cheng et al demonstrated that an H. pylori test-and-treat program was cost-effective for preventing gastric cancer in an endemic area of Taiwan where H. pylori prevalence was >73.5%.36 Teng et al found that H. pylori screening was likely to be cost-effective, particularly for Māori in New Zealand.37 Beresniak et al showed that an H. pylori test-and-eradication strategy including the urea breath test was the most cost-effective option compared with symptomatic treatment and upper gastrointestinal endoscopy in Spain.38 Our previous studies demonstrated the superior cost-effectiveness of H. pylori screening with eradication compared with no screening, upper gastrointestinal series, and endoscopic screening for asymptomatic general populations in Japan.39, 40
This study has several limitations. First, age-specific numbers of patients with eradication were estimated from a database for Hokkaido (the northern island of Japan), expert opinion, and the literature.10, 11 Second, we did not consider reinfection and recurrence of H. pylori infection in the model; the reinfection rate after eradication is very low, H. pylori infection is mainly transmitted in childhood, and recurrence after successful eradication is rare in adults.41 Third, nonmedical indirect costs, such as lost productivity, were not included. Fourth, we did not consider other risk factors for gastric cancer, such as smoking, high salt intake, a diet low in fruit and vegetables, and genetic factors. Fifth, differences in the stage distribution of gastric cancer between age groups were not included in the model. Sixth, we did not consider the histological changes after eradication in chronic gastritis patients. H. pylori infection is well known to initiate sequential histological changes such as non-atrophic gastritis, atrophic gastritis, intestinal metaplasia, dysplasia, and intestinal-type gastric cancer.
Diffuse-type gastric cancer is also associated with H. pylori infection. Persistent inflammation results in the development of gastric atrophy, so earlier H. pylori eradication should be considered to prevent gastric cancer development before precancerous lesions appear.42 H. pylori eradication strongly correlates with improvement of intestinal metaplasia in the antrum and of gastric atrophy in the corpus and antrum of the stomach.43 More research is needed to incorporate the histological changes of the gastric mucosa and the subsequent development of gastric cancer in chronic gastritis patients into the model. Finally, costs, epidemiological parameters, and medical systems differ between countries, so further cost-effectiveness studies reflecting each country's circumstances are required.
In conclusion, we demonstrated in a modeling study with real-life settings that a national policy using population-wide H. pylori eradication to prevent gastric cancer yields significant cost savings and health benefits for young-, middle-, and old-aged individuals in Japan. The findings strongly support the promotion of an H. pylori eradication strategy for all age groups in high-incidence countries. Based on its cost-effectiveness, introducing an H. pylori eradication strategy into national gastric cancer policy should be considered in high-risk countries around the world.

Conflict of interest: The author has no conflicts of interest to declare.

Author contributions: AK had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. AK and MA approved the final version of the manuscript and were involved in the concept, design, and critical revision of the manuscript for important intellectual content. AK was involved in acquisition, analysis, and interpretation of data, drafting of the manuscript, and administrative, technical, or material support. MA was involved in supervision.

Supplementary Material: an additional supporting data file is available online.
[ null, "materials-and-methods", null, null, null, null, null, null, null, null, "results", null, null, null, "discussion", "COI-statement", null, "supplementary-material" ]
[ "cancer prevention", "cost‐effectiveness", "gastric cancer", "health economics", "\nHelicobacter pylori eradication" ]
INTRODUCTION: More than half of the world's population is infected with Helicobacter pylori (H. pylori). 1 H. pylori infection causes chronic atrophic gastritis, a common stage of progression to gastric cancer, and is responsible for 98% of the causes of gastric cancer in Japan. 2 , 3 , 4 , 5 Japan has the third highest age‐standardized rate for gastric cancer in the world. 6 The incidence of gastric cancer in Japan is almost 10 times higher than that observed in the United States. The Taipei global consensus guidelines for screening and eradication of H. pylori for gastric cancer prevention recommend that mass screening and eradication of H. pylori should be considered in populations at higher risk of gastric cancer and that eradication therapy should be offered to all individuals infected with H. pylori. 7 In the guidelines for the management of H. pylori infection by the Japanese Society for Helicobacter Research, H. pylori eradication treatment is recommended to prevent gastric cancer for patients with H. pylori infection. 8 The Ministry of Health, Labour and Welfare approved expansion of National Health Insurance coverage for H. pylori eradication treatment in patients with chronic gastritis from February 2013. 9 During 2013‐2019, 8.50 million H. pylori‐positive patients received eradication treatment. 10 , 11 The number of deaths from gastric cancer is gradually declining, with 42,931 deaths in 2019 and 42,318 deaths in 2020 (Figure 1). 12 , 13 Changes in gastric cancer deaths in Japan from 2000 to 2020 In this study, we aimed to evaluate the economic and health effects of H. pylori eradication strategy in national gastric cancer prevention program in Japan. MATERIALS AND METHODS: Study design and model structure We constructed a cohort state‐transition model with a Markov cycle tree for two strategies: H. pylori eradication strategy and no eradication strategy, using a healthcare payer perspective and a lifetime horizon. A cycle length of one year was chosen. The half‐cycle correction was applied. In the model, decision branches leaded directly to one Markov node per intervention strategy and the first events were modeled within the Markov cycle tree (Figure 2). We used TreeAge Pro (TreeAge Software Inc., Williamstown, Massachusetts) for the Decision‐analytical calculations. As this was a modeling study with all inputs and parameters derived from the published literature and Japanese statistics, ethics approval was not required. Schematic depiction of the Markov cycle tree in the cohort state‐transition model. We show that health states in the model as ovals. In a yearly model cycle, transitions can occur between the health states and other health states, represented by the arrows. H. pylori = Helicobacter pylori; GC = gastric cancer H. pylori eradication strategy The H. pylori‐positive patient receives first‐line eradication treatment (proton‐pump inhibitor, clarithromycin, and amoxicillin). The patient who fails first‐line eradication treatment receives second‐line eradication treatment (proton‐pump inhibitor, metronidazole, and amoxicillin). We consider the eradication and compliance rates of first‐line and second‐line eradication treatments in the model. After successful H. pylori eradication, H. pylori‐positive changes to H. pylori‐negative. When the patient fails both treatments, H. pylori‐positive remains until death. We calculate the costs of H. pylori test, endoscopy, and two urea breath tests when the patient receives H. pylori eradication treatment. 
When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer followed by the Japanese gastric cancer treatment guidelines: endoscopic mucosal resection (EMR), endoscopic submucosal dissection treatment (ESD), surgery, chemotherapy, and radiotherapy with palliative care according to cancer stages, stage I‐IV. 14 The model includes the relative risk of developing gastric cancer after successful eradication, stage‐specific 5‐year survival rates, and mortality due to other causes (Table 1). 12 , 15 The patient aged 50 and over receives endoscopic screening every year from the year after eradication. Baseline estimates for selected variables 20y 30y 40y 50y 60y 70y 80y 0.000771 0.001167 0.001881 0.002803 0.005122 0.007949 0.009117 20y 30y 40y 50y 60y 70y 80y 0.000509 0.00077 0.00124 0.00185 0.00338 0.00524 0.00602 20y 30y 40y 50y 60y 70y 80y 6.1 14.7 23.7 33.7 47.7 58.6 63.6 Stage I Stage II Stage III Stage IV 96.0 69.2 41.9 6.3 90‐99 50‐80 30‐50 0‐20 20‐29y 30‐39y 40‐49y 50‐59y 60‐69y 70‐79y 80‐89y 123,986 422,965 1,083,631 1,664,732 2,860,031 1,903,756 440,503 20‐29y 30‐39y 40‐49y 50‐59y 60‐69y 70‐79y 80‐89y 773,480 2,034,480 4,256,520 5,634,640 7,331,490 9,622,120 5,933,880 Stage I Stage II Stage III Stage IV 62.5 11.0 7.5 19.0 30‐80 5‐20 2‐15 10‐50 Stage I Stage II Stage III Stage IV 3675 15,898 24,841 29,809 1838‐7350 7949‐31,796 12,421‐49,682 14,905‐59,618 25,26 Abbrevations H. pylori = Helicobacter pylori; N/A = not applicable The H. pylori‐positive patient receives first‐line eradication treatment (proton‐pump inhibitor, clarithromycin, and amoxicillin). The patient who fails first‐line eradication treatment receives second‐line eradication treatment (proton‐pump inhibitor, metronidazole, and amoxicillin). We consider the eradication and compliance rates of first‐line and second‐line eradication treatments in the model. After successful H. pylori eradication, H. pylori‐positive changes to H. pylori‐negative. When the patient fails both treatments, H. pylori‐positive remains until death. We calculate the costs of H. pylori test, endoscopy, and two urea breath tests when the patient receives H. pylori eradication treatment. When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer followed by the Japanese gastric cancer treatment guidelines: endoscopic mucosal resection (EMR), endoscopic submucosal dissection treatment (ESD), surgery, chemotherapy, and radiotherapy with palliative care according to cancer stages, stage I‐IV. 14 The model includes the relative risk of developing gastric cancer after successful eradication, stage‐specific 5‐year survival rates, and mortality due to other causes (Table 1). 12 , 15 The patient aged 50 and over receives endoscopic screening every year from the year after eradication. 
Baseline estimates for selected variables 20y 30y 40y 50y 60y 70y 80y 0.000771 0.001167 0.001881 0.002803 0.005122 0.007949 0.009117 20y 30y 40y 50y 60y 70y 80y 0.000509 0.00077 0.00124 0.00185 0.00338 0.00524 0.00602 20y 30y 40y 50y 60y 70y 80y 6.1 14.7 23.7 33.7 47.7 58.6 63.6 Stage I Stage II Stage III Stage IV 96.0 69.2 41.9 6.3 90‐99 50‐80 30‐50 0‐20 20‐29y 30‐39y 40‐49y 50‐59y 60‐69y 70‐79y 80‐89y 123,986 422,965 1,083,631 1,664,732 2,860,031 1,903,756 440,503 20‐29y 30‐39y 40‐49y 50‐59y 60‐69y 70‐79y 80‐89y 773,480 2,034,480 4,256,520 5,634,640 7,331,490 9,622,120 5,933,880 Stage I Stage II Stage III Stage IV 62.5 11.0 7.5 19.0 30‐80 5‐20 2‐15 10‐50 Stage I Stage II Stage III Stage IV 3675 15,898 24,841 29,809 1838‐7350 7949‐31,796 12,421‐49,682 14,905‐59,618 25,26 Abbrevations H. pylori = Helicobacter pylori; N/A = not applicable No eradication strategy The latest version of Japanese guidelines for effective secondary prevention of gastric cancer recommends upper gastrointestinal series and endoscopy in adults 50 years of age and older. In the model, the H. pylori‐positive patient does not receive H. pylori eradication treatment, and the patient aged 50 and over receives annual endoscopic screening annually. When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer. The latest version of Japanese guidelines for effective secondary prevention of gastric cancer recommends upper gastrointestinal series and endoscopy in adults 50 years of age and older. In the model, the H. pylori‐positive patient does not receive H. pylori eradication treatment, and the patient aged 50 and over receives annual endoscopic screening annually. When the patient has gastric cancer, the patient receives the standard treatment of gastric cancer. We constructed a cohort state‐transition model with a Markov cycle tree for two strategies: H. pylori eradication strategy and no eradication strategy, using a healthcare payer perspective and a lifetime horizon. A cycle length of one year was chosen. The half‐cycle correction was applied. In the model, decision branches leaded directly to one Markov node per intervention strategy and the first events were modeled within the Markov cycle tree (Figure 2). We used TreeAge Pro (TreeAge Software Inc., Williamstown, Massachusetts) for the Decision‐analytical calculations. As this was a modeling study with all inputs and parameters derived from the published literature and Japanese statistics, ethics approval was not required. Schematic depiction of the Markov cycle tree in the cohort state‐transition model. We show that health states in the model as ovals. In a yearly model cycle, transitions can occur between the health states and other health states, represented by the arrows. H. pylori = Helicobacter pylori; GC = gastric cancer H. pylori eradication strategy The H. pylori‐positive patient receives first‐line eradication treatment (proton‐pump inhibitor, clarithromycin, and amoxicillin). The patient who fails first‐line eradication treatment receives second‐line eradication treatment (proton‐pump inhibitor, metronidazole, and amoxicillin). We consider the eradication and compliance rates of first‐line and second‐line eradication treatments in the model. After successful H. pylori eradication, H. pylori‐positive changes to H. pylori‐negative. When the patient fails both treatments, H. pylori‐positive remains until death. We calculate the costs of H. pylori test, endoscopy, and two urea breath tests when the patient receives H. 
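To make the yearly cohort state-transition logic concrete, the sketch below shows one way such a Markov trace could be implemented. It is a minimal illustration under assumed inputs, not the authors' TreeAge model: the state names are simplified from the eight states described later, and every transition probability is a placeholder rather than a published parameter.

```python
import numpy as np

# Simplified health states (the actual model uses eight states, including cancer stages I-IV)
STATES = ["hp_negative", "hp_positive", "gastric_cancer", "cured_cancer", "dead"]

# Illustrative one-year transition matrix (rows sum to 1); all values are placeholders.
P = np.array([
    [0.990, 0.000, 0.001, 0.000, 0.009],   # from hp_negative
    [0.000, 0.986, 0.005, 0.000, 0.009],   # from hp_positive
    [0.000, 0.000, 0.600, 0.250, 0.150],   # from gastric_cancer
    [0.000, 0.000, 0.020, 0.971, 0.009],   # from cured_cancer
    [0.000, 0.000, 0.000, 0.000, 1.000],   # from dead (absorbing)
])

def run_markov_trace(start_state="hp_positive", years=60):
    """Propagate a cohort through yearly cycles over a lifetime horizon."""
    dist = np.zeros(len(STATES))
    dist[STATES.index(start_state)] = 1.0
    trace = [dist.copy()]
    for _ in range(years):
        dist = dist @ P          # one yearly model cycle
        trace.append(dist.copy())
    return np.array(trace)       # shape: (years + 1, number of states)

trace = run_markov_trace()
# Half-cycle correction: average adjacent cycles so events are credited mid-cycle.
corrected = 0.5 * (trace[:-1] + trace[1:])
print("State membership after 10 corrected cycles:", dict(zip(STATES, corrected[9].round(4))))
```

In practice the trace would be computed separately for the eradication and no-eradication strategies, with costs and utilities attached to each state, which is what the sections below parameterize.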
Target population
We targeted a hypothetical cohort of Japanese H. pylori-positive chronic gastritis patients who had an initial endoscopic diagnosis requiring H. pylori eradication at the ages of 20, 30, 40, 50, 60, 70, and 80. Children and adolescents (age <20 y) were not included in the model.
Epidemiologic parameters and clinical probabilities
Epidemiologic parameters and clinical probabilities were collected from MEDLINE (2000 to June 2, 2021), the national census, and Japanese cancer statistics (Table 1). 2 , 3 , 10 , 11 , 12 , 15 , 16 , 17 , 18 , 19 , 20 We estimated annual age-specific numbers of H. pylori-positive patients with eradication treatment from the literature 10 , 11 and expert opinion (Table 1, Figure S1A). The numbers of H. pylori-positive patients with and without eradication were estimated from the literature 16 and the national census (Table 1, Figure S1B). The relative risk of gastric cancer development after successful eradication 15 and the eradication and compliance rates of first- and second-line eradication treatments 19 were obtained from the literature. Age-specific gastric cancer incidence and stage-specific 5-year survival rates were obtained from Japanese cancer statistics. 10 The responsibility rate (attributable proportion) of H. pylori infection for gastric cancer development was assumed to be 98%. 2 , 3 , 4 , 5 The incidence of gastric cancer in H. pylori-positive patients was estimated using this responsibility rate and the prevalence of H. pylori infection. 2 , 3 , 4 , 5 , 12 , 16 The incidence of gastric cancer in H. pylori-positive patients after successful eradication treatment was estimated using the relative risk of gastric cancer development after successful eradication. 2 , 3 , 4 , 5 , 12 , 15 , 16 The sensitivity and specificity of endoscopy were obtained from the literature. 20
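The two incidence derivations described above reduce to simple arithmetic. The sketch below shows one way to read that calculation; the incidence, prevalence, and relative-risk values are assumed examples for illustration, not the published inputs.

```python
# Illustrative derivation of gastric cancer incidence in H. pylori-positive patients.
# All numeric inputs below are assumed examples, not the study's parameters.

overall_incidence = 0.00185        # annual gastric cancer incidence in an age group
responsibility_rate = 0.98         # share of gastric cancer attributable to H. pylori
hp_prevalence = 0.337              # prevalence of H. pylori infection in that age group
rr_after_eradication = 0.34        # assumed relative risk after successful eradication

# Incidence among H. pylori-positive individuals: attributable cases divided by
# the infected fraction of the population.
incidence_hp_positive = overall_incidence * responsibility_rate / hp_prevalence

# Incidence after successful eradication: scale by the relative risk.
incidence_after_eradication = incidence_hp_positive * rr_after_eradication

print(f"Incidence in H. pylori-positive patients: {incidence_hp_positive:.5f}")
print(f"Incidence after successful eradication:  {incidence_after_eradication:.5f}")
```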
Costs
Costs were based on the Japanese national fee schedule 17 , adjusted to 2019 Japanese yen using the medical care component of the Japanese consumer price index, and converted to US dollars using the Organisation for Economic Co-operation and Development (OECD) purchasing power parity rate in 2019 (US$1 = ¥100.64) (Table 1). 21 All costs were discounted at 3% annually. 22 , 23 Incremental cost-effectiveness ratios (ICERs) were calculated and compared with two willingness-to-pay levels of US$50,000 and US$100,000 per quality-adjusted life-year (QALY) gained. 24 Age-specific and total cumulative lifetime cost savings of the H. pylori eradication strategy compared with the no eradication strategy were calculated.
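As an illustration of how the discounting, QALY accumulation, and ICER comparison described above fit together, the sketch below computes discounted totals for two strategies and checks them against the stated willingness-to-pay thresholds. The cost streams, utility weights, and horizon are invented for the example and are not the study's inputs or results.

```python
# Illustrative discounting, QALY accumulation, and ICER calculation.
# All inputs are assumed example values, not the study's parameters or results.

DISCOUNT_RATE = 0.03
WTP_THRESHOLDS = [50_000, 100_000]   # US$ per QALY gained

def discounted_total(values, rate=DISCOUNT_RATE):
    """Sum a yearly stream of costs or utility weights with annual discounting."""
    return sum(v / (1 + rate) ** year for year, v in enumerate(values))

# Hypothetical 3-year streams (cost in US$, utility weight per year) for each strategy.
eradication = {"costs": [600, 150, 150], "utilities": [0.95, 0.95, 0.95]}
no_eradication = {"costs": [100, 400, 2_000], "utilities": [0.93, 0.90, 0.80]}

cost_e = discounted_total(eradication["costs"])
cost_n = discounted_total(no_eradication["costs"])
qaly_e = discounted_total(eradication["utilities"])
qaly_n = discounted_total(no_eradication["utilities"])

delta_cost, delta_qaly = cost_e - cost_n, qaly_e - qaly_n
if delta_cost <= 0 and delta_qaly >= 0:
    print("Eradication dominates (less costly and more effective); no ICER is reported.")
else:
    icer = delta_cost / delta_qaly
    for wtp in WTP_THRESHOLDS:
        print(f"ICER = {icer:,.0f} US$/QALY; cost-effective at {wtp:,} US$/QALY: {icer <= wtp}")
```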
Health utilities, effectiveness, and health outcomes
Health status was represented by eight possible clinical states: (i) no H. pylori infection; (ii) H. pylori infection; (iii) stage I gastric cancer; (iv) stage II gastric cancer; (v) stage III gastric cancer; (vi) stage IV gastric cancer; (vii) cured gastric cancer; and (viii) death (Figure 2). Health state utilities were obtained from the literature and calculated using utility weights (Table 1). 25 , 26 Utilities were discounted at an annual rate of 3%. 22 , 23 The health outcomes were QALYs, life expectancy life-years (LYs), gastric cancer cases, and deaths from gastric cancer. Age-specific and total cumulative lifetime health outcomes of the H. pylori eradication strategy compared with the no eradication strategy were calculated and evaluated.
Sensitivity analyses
We performed one-way sensitivity analyses to determine which strategy was more cost-effective when a single variable was varied over a wide range of possible values while all other variables were held constant, and a probabilistic sensitivity analysis using a second-order Monte Carlo simulation with 10,000 trials to assess the impact of parameter uncertainty on the base-case estimates. Uncertainty was modeled with beta distributions for clinical probabilities and test accuracies and log-normal distributions for costs.
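The probabilistic sensitivity analysis described above can be sketched as repeated resampling of the model inputs. The illustration below draws clinical probabilities from beta distributions and costs from log-normal distributions, as stated in the text; the distribution parameters, the toy relationship linking inputs to incremental cost and effect, and every numeric value are assumptions for the example, not the study's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS = 10_000

# Assumed second-order distributions for a few illustrative parameters.
p_eradication_success = rng.beta(a=90, b=10, size=N_TRIALS)        # clinical probability
p_cancer_if_positive  = rng.beta(a=10, b=90, size=N_TRIALS)        # assumed lifetime probability
cost_eradication      = rng.lognormal(mean=np.log(600), sigma=0.2, size=N_TRIALS)
cost_cancer_treatment = rng.lognormal(mean=np.log(20_000), sigma=0.3, size=N_TRIALS)

# Toy link from sampled inputs to incremental cost and effect (placeholder model):
# eradication averts a share of cancers, saving treatment costs and gaining QALYs.
risk_reduction = p_eradication_success * p_cancer_if_positive * 0.66   # 0.66 = 1 - assumed relative risk
delta_cost = cost_eradication - risk_reduction * cost_cancer_treatment
delta_qaly = risk_reduction * 5.0                                      # assumed QALYs per case averted

dominant = np.mean((delta_cost < 0) & (delta_qaly > 0))
print(f"Trials where eradication is less costly and more effective: {dominant:.1%}")
print(f"Mean incremental cost: {delta_cost.mean():,.0f} US$; mean QALY gain: {delta_qaly.mean():.3f}")
```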
RESULTS
Base-case analysis
The H. pylori eradication strategy was less costly and yielded greater benefits than the no eradication strategy for all age groups (Table 2); the no eradication strategy was dominated in every age group. Patients aged 40 had the highest per capita cost savings, and per capita QALY gains were higher in younger patients than in older patients (Table 2). Among the patients treated from 2013 to 2019, those aged 60 had the highest cost savings and health outcomes (Table S1).
One-way sensitivity analysis and probabilistic sensitivity analysis
Incremental cost-effectiveness ratio tornado diagrams of the H. pylori eradication strategy versus the no eradication strategy showed that cost-effectiveness was not sensitive to any variable in any age group (Figure 3A, Figure S2). In the probabilistic sensitivity analysis using Monte Carlo simulation with 10,000 trials, the acceptability curves showed that the H. pylori eradication strategy was cost-effective 100% of the time at willingness-to-pay thresholds of US$50,000 and US$100,000 per QALY gained in all age groups (Figure 3B, Figure S3). Incremental cost-effectiveness scatterplots showed that the H. pylori eradication strategy dominated the no eradication strategy in more than 9,800 of 10,000 trials in all age groups (Figure 3C, Figure S4). These results indicate that the H. pylori eradication strategy was robust in all age groups.
Table 2. Results of the base-case analysis: costs, QALYs, ICER (US$/QALY gained), life expectancy life-years (LYs), and ICER (US$/LY gained). Abbreviations: H. pylori = Helicobacter pylori; QALY = quality-adjusted life-year; LY = life expectancy life-years; ICER = incremental cost-effectiveness ratio; dominated = less effective and more costly than the comparator.
Figure 3. One-way sensitivity analysis and probabilistic sensitivity analysis in 60-year-old H. pylori-positive patients. A, Incremental cost-effectiveness ratio (ICER) tornado diagram for the H. pylori eradication strategy versus the no eradication strategy; cost-effectiveness was not sensitive to changes in any variable. B, Cost-effectiveness acceptability curve for the H. pylori eradication strategy versus the no eradication strategy. The probabilistic sensitivity analysis comprised 10,000 simulations of the model in which input parameters were randomly varied across pre-specified statistical distributions; the x-axis represents the willingness-to-pay threshold. The eradication strategy was cost-effective 100% of the time at willingness-to-pay thresholds of US$50,000 and US$100,000 per QALY gained. C, Incremental cost-effectiveness scatterplots with 95% confidence ellipses; each dot represents one of 10,000 simulations. The eradication strategy dominated the no eradication strategy in 9,811 trials and was more cost-effective than the no eradication strategy in the remaining 189 trials. EV = expected value; H. pylori = Helicobacter pylori; ICER = incremental cost-effectiveness ratio; QALY = quality-adjusted life-year; WTP = willingness-to-pay threshold
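The acceptability curve in Figure 3B is simply the share of simulated trials that are cost-effective at each willingness-to-pay value. A minimal sketch of that post-processing step is shown below; the simulated incremental costs and effects are randomly generated stand-ins for the model's actual Monte Carlo output.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for 10,000 simulated (incremental cost, incremental QALY) pairs.
delta_cost = rng.normal(loc=-500, scale=150, size=10_000)   # negative values mean cost savings
delta_qaly = rng.normal(loc=0.3, scale=0.05, size=10_000)

def acceptability(delta_cost, delta_qaly, wtp):
    """Fraction of trials with positive net monetary benefit at a given WTP threshold."""
    nmb = wtp * delta_qaly - delta_cost
    return np.mean(nmb > 0)

for wtp in (0, 25_000, 50_000, 100_000):
    share = acceptability(delta_cost, delta_qaly, wtp)
    print(f"WTP US${wtp:>7,}/QALY: cost-effective in {share:.1%} of trials")

dominant = np.mean((delta_cost < 0) & (delta_qaly > 0))
print(f"Trials in the dominant (cheaper and more effective) quadrant: {dominant:.1%}")
```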
Cumulative lifetime economic and health outcomes
H. pylori-positive patients aged 60 had the highest cumulative lifetime economic and health outcomes (Table S1). For the 8.50 million patients treated from 2013 to 2019, H. pylori eradication saved US$3.75 billion, added 11.11 million QALYs and 0.45 million LYs, and prevented 284,188 gastric cancer cases and 65,060 gastric cancer deaths. For the 35.59 million patients without eradication, H. pylori eradication has the potential to save US$14.82 billion, add 43.10 million QALYs and 1.66 million LYs, and prevent 1,084,532 cases and 250,256 deaths (Table S1). In the Markov cohort analysis, the cumulative lifetime numbers of gastric cancer cases and deaths from gastric cancer under the H. pylori eradication strategy, compared with the no eradication strategy, decreased by 30% to 33% in patients under 50 and by 25% to 28% in patients aged 50 and over (Figure S5). H. pylori eradication reduced gastric cancer incidence and mortality to a greater extent in younger age groups than in older age groups (Table 2, Figure S5).
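The cumulative lifetime totals reported above follow from scaling the per-patient differences between strategies by the size of each age cohort. The sketch below shows that aggregation step with invented per-patient differences and cohort counts; only the arithmetic, not the numbers, reflects the study.

```python
# Illustrative aggregation of per-patient model outputs to population totals.
# Cohort sizes and per-patient differences below are assumed examples.

cohorts = {
    "20-29y": {"patients": 120_000,   "cost_saving": 400, "qaly_gain": 0.9, "cases_prevented": 0.04},
    "40-49y": {"patients": 1_100_000, "cost_saving": 550, "qaly_gain": 0.6, "cases_prevented": 0.05},
    "60-69y": {"patients": 2_900_000, "cost_saving": 450, "qaly_gain": 0.3, "cases_prevented": 0.04},
}

totals = {"cost_saving": 0.0, "qaly_gain": 0.0, "cases_prevented": 0.0}
for group, d in cohorts.items():
    for key in totals:
        totals[key] += d["patients"] * d[key]   # per-patient difference times cohort size

print(f"Total cost saving:     US${totals['cost_saving']:,.0f}")
print(f"Total QALYs gained:    {totals['qaly_gain']:,.0f}")
print(f"Total cases prevented: {totals['cases_prevented']:,.0f}")
```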
DISCUSSION
To the best of our knowledge, this is the first study to assess the economic and health impacts of a population-wide H. pylori eradication strategy within a national gastric cancer prevention program covered by National Health Insurance. We demonstrated in a modeling study reflecting real-life settings in Japan that a population-wide H. pylori eradication strategy reduced costs, prevented gastric cancer, and reduced deaths from gastric cancer in all age groups, even though most older adults with gastric mucosal atrophy require more than 10 years of follow-up endoscopic screening after successful H. pylori eradication. 27 , 28 The cost savings of the H. pylori eradication strategy from 2013 to 2019 were US$3.75 billion, 10.46 times the annual budget for cancer control in Japan. This means that promoting an H. pylori eradication strategy focused on primary prevention of gastric cancer not only saves many lives but also produces significant savings in the national budget. It is well known that the benefits of H. pylori eradication on gastric cancer risk reduction are greater in younger age groups than in older age groups. Young individuals would benefit most from H. pylori eradication because it cures H. pylori-related gastritis, reduces the risk of gastric cancer, and reduces transmission to their children. 7 This modeling study, which used a constant risk of gastric cancer development after successful eradication treatment, demonstrated that H. pylori eradication reduced gastric cancer incidence and mortality to a greater extent in younger age groups than in older age groups. If the model were modified to assign a lower post-eradication risk of gastric cancer to younger age groups, even larger reductions in incidence and mortality would be expected in those groups. Surveillance of the local antibiotic resistance of H. pylori is recommended to identify the optimal empirical therapy for H. pylori eradication in each country. 7 Chiang et al demonstrated no change in the antibiotic resistance rate of H. pylori when effective eradication regimens were selected and patients who had completed H. pylori treatment were retested in a mass H. pylori eradication program. 29
Guo et al found that successful H. pylori eradication potentially restored the gastric microbiota to a status similar to that of uninfected individuals and showed beneficial effects on the gut microbiota. 30 Liou et al showed that H. pylori eradication had no effect on the antibiotic resistance of E. coli and produced no significant change in the prevalence of metabolic syndrome. 31 These recent studies suggest that an H. pylori eradication strategy with effective regimens and high compliance rates could provide significant benefits with minimal adverse effects in high-risk countries. Several economic analyses have suggested that H. pylori screening followed by eradication treatment is cost-effective for preventing gastric cancer, particularly in high-risk populations. 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 Han et al demonstrated that H. pylori screening and eradication treatment effectively reduced the morbidity of gastric cancer and cancer-related costs in asymptomatic infected individuals in China. 33 Chen et al showed that a population-based screen-and-treat strategy for H. pylori infection was cheaper and more effective for preventing gastric cancer, peptic ulcer disease, and nonulcer dyspepsia in the asymptomatic general population than a no-screen strategy in China. 34 Zheng et al found that H. pylori eradication treatment was an economical strategy with lower costs and greater efficacy in first-degree relatives of patients with gastric cancer in China. 35 Cheng et al demonstrated that an H. pylori test-and-treat program was cost-effective for preventing gastric cancer in an endemic area of Taiwan where H. pylori prevalence was >73.5%. 36 Teng et al found that H. pylori screening was likely to be cost-effective, particularly for Māori in New Zealand. 37 Beresniak et al showed that an H. pylori test-and-eradication strategy including the urea breath test was the most cost-effective option compared with symptomatic treatment and upper gastrointestinal endoscopy in Spain. 38 Our previous studies demonstrated the superior cost-effectiveness of H. pylori screening with eradication compared with no screening, upper gastrointestinal series, and endoscopic screening for asymptomatic general populations in Japan. 39 , 40 This study has several limitations. First, age-specific numbers of patients with eradication were estimated from a database for Hokkaido (the northern island of Japan), expert opinion, and the literature. 10 , 11 Second, we did not consider reinfection or recurrence of H. pylori infection in the model; the reinfection rate after H. pylori eradication is very low, H. pylori infection is mainly transmitted in childhood, and recurrence after successful eradication is rare in adults. 41 Third, nonmedical indirect costs, such as lost productivity, were not included. Fourth, we did not consider other risk factors for gastric cancer, such as smoking, high salt intake, a diet low in fruit and vegetables, and genetic factors. Fifth, differences in the stage distribution of gastric cancer between age groups were not included in the model. Sixth, we did not consider the histological changes after eradication in chronic gastritis patients. H. pylori infection is well known to initiate sequential histological changes such as non-atrophic gastritis, atrophic gastritis, intestinal metaplasia, dysplasia, and intestinal-type gastric cancer; diffuse-type gastric cancer is also associated with H. pylori infection, and persistent inflammation results in the development of gastric atrophy.
Earlier H. pylori eradication should be considered to prevent gastric cancer development before precancerous lesions appear. 42 H. pylori eradication strongly correlates with improvement in intestinal metaplasia in the antrum and in gastric atrophy in the corpus and antrum of the stomach. 43 More research is needed to incorporate the histological changes of the gastric mucosa and the subsequent development of gastric cancer in chronic gastritis patients into the model. Finally, costs, epidemiological parameters, and medical systems differ between countries, so further cost-effectiveness studies tailored to each country are required. In conclusion, we demonstrated in a modeling study reflecting real-life settings that a national policy using population-wide H. pylori eradication to prevent gastric cancer yields significant cost savings and health benefits for young-, middle-, and old-aged individuals in Japan. The findings strongly support the promotion of an H. pylori eradication strategy for all age groups in high-incidence countries. Based on its cost-effectiveness, introducing an H. pylori eradication strategy into national gastric cancer policy should be considered in high-risk countries around the world.
CONFLICTS OF INTEREST
The author has no conflicts of interest to declare.
AUTHOR CONTRIBUTIONS
AK had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. AK and MA approved the final version of the manuscript and were involved in the concept, design, and critical revision of the manuscript for important intellectual content. AK was involved in the acquisition, analysis, and interpretation of data, drafting of the manuscript, and administrative, technical, or material support. MA was involved in supervision.
Supporting information
Supplementary Material.
Background: Helicobacter pylori (H. pylori) eradication reduces gastric cancer risk. Since 2013, a population-wide H. pylori eradication strategy for patients with chronic gastritis has begun to prevent gastric cancer in Japan. The aim of this study was to evaluate the economic and health effects of H. pylori eradication strategy in national gastric cancer prevention program. Methods: We developed a cohort state-transition model for H. pylori eradication and no eradication over a lifetime horizon from a healthcare payer perspective, and performed one-way and probabilistic sensitivity analyses. We targeted a hypothetical cohort of H. pylori-positive patients aged 20, 30, 40, 50, 60, 70, and 80. The main outcomes were costs, quality-adjusted life-years (QALYs), life expectancy life-years (LYs), incremental cost-effectiveness ratios, gastric cancer cases, and deaths from gastric cancer. Results: H. pylori eradication was more effective and cost-saving for all age groups than no eradication. Sensitivity analyses showed strong robustness of the results. From 2013-2019 for 8.50 million patients, H. pylori eradication saved US$3.75 billion, increased 11.11 million QALYs and 0.45 million LYs, and prevented 284,188 cases and 65,060 deaths. For 35.59 million patients without eradication, H. pylori eradication has the potential to save US$14.82 billion, increase 43.10 million QALYs and 1.66 million LYs, and prevent 1,084,532 cases and 250,256 deaths. Conclusions: National policy using population-wide H. pylori eradication to prevent gastric cancer has significant cost savings and health impacts for young-, middle-, and old-aged individuals in Japan. The findings strongly support the promotion of H. pylori eradication strategy for all age groups in high-incidence countries.
null
null
10,643
331
[ 322, 1275, 468, 74, 57, 341, 145, 175, 81, 87, 421, 182, 87 ]
18
[ "pylori", "eradication", "cancer", "gastric", "gastric cancer", "stage", "strategy", "pylori eradication", "eradication strategy", "treatment" ]
[ "cancer pylori eradication", "pylori test eradication", "effectiveness pylori eradication", "eradication pylori gastric", "pylori screening eradication" ]
null
null
null
[CONTENT] cancer prevention | cost‐effectiveness | gastric cancer | health economics | Helicobacter pylori eradication [SUMMARY]
null
[CONTENT] cancer prevention | cost‐effectiveness | gastric cancer | health economics | Helicobacter pylori eradication [SUMMARY]
null
[CONTENT] cancer prevention | cost‐effectiveness | gastric cancer | health economics | Helicobacter pylori eradication [SUMMARY]
null
[CONTENT] Aged | Cost-Benefit Analysis | Helicobacter Infections | Helicobacter pylori | Humans | Japan | Middle Aged | Policy | Stomach Neoplasms [SUMMARY]
null
[CONTENT] Aged | Cost-Benefit Analysis | Helicobacter Infections | Helicobacter pylori | Humans | Japan | Middle Aged | Policy | Stomach Neoplasms [SUMMARY]
null
[CONTENT] Aged | Cost-Benefit Analysis | Helicobacter Infections | Helicobacter pylori | Humans | Japan | Middle Aged | Policy | Stomach Neoplasms [SUMMARY]
null
[CONTENT] cancer pylori eradication | pylori test eradication | effectiveness pylori eradication | eradication pylori gastric | pylori screening eradication [SUMMARY]
null
[CONTENT] cancer pylori eradication | pylori test eradication | effectiveness pylori eradication | eradication pylori gastric | pylori screening eradication [SUMMARY]
null
[CONTENT] cancer pylori eradication | pylori test eradication | effectiveness pylori eradication | eradication pylori gastric | pylori screening eradication [SUMMARY]
null
[CONTENT] pylori | eradication | cancer | gastric | gastric cancer | stage | strategy | pylori eradication | eradication strategy | treatment [SUMMARY]
null
[CONTENT] pylori | eradication | cancer | gastric | gastric cancer | stage | strategy | pylori eradication | eradication strategy | treatment [SUMMARY]
null
[CONTENT] pylori | eradication | cancer | gastric | gastric cancer | stage | strategy | pylori eradication | eradication strategy | treatment [SUMMARY]
null
[CONTENT] gastric cancer | cancer | gastric | pylori | japan | eradication | deaths | screening eradication pylori | pylori pylori | cancer japan [SUMMARY]
null
[CONTENT] eradication strategy | strategy | eradication | cost | pylori | pylori eradication | pylori eradication strategy | cost effectiveness | effectiveness | groups [SUMMARY]
null
[CONTENT] pylori | eradication | cancer | gastric | gastric cancer | strategy | eradication strategy | stage | patient | pylori eradication [SUMMARY]
null
[CONTENT] ||| 2013 | Japan ||| [SUMMARY]
null
[CONTENT] ||| ||| 2013-2019 | 8.50 million | US$3.75 billion | 11.11 million | 0.45 million | 284,188 | 65,060 ||| 35.59 million | US$14.82 billion | 43.10 million | 1.66 million | 1,084,532 | 250,256 [SUMMARY]
null
[CONTENT] ||| 2013 | Japan ||| ||| one ||| 20, 30 | 40 | 50 | 60 | 70 | 80 ||| ||| ||| ||| 2013-2019 | 8.50 million | US$3.75 billion | 11.11 million | 0.45 million | 284,188 | 65,060 ||| 35.59 million | US$14.82 billion | 43.10 million | 1.66 million | 1,084,532 | 250,256 ||| Japan ||| [SUMMARY]
null
Immune checkpoint inhibitor-mediated colitis is associated with cancer overall survival.
36338892
Immune checkpoint inhibitor-mediated colitis (IMC) is a common adverse event following immune checkpoint inhibitor (ICI) therapy for cancer. IMC has been associated with improved overall survival (OS) and progression-free survival (PFS), but data are limited to a single site and predominantly for melanoma patients.
BACKGROUND
We performed a retrospective case-control study including 64 ICI users who developed IMC matched according to age, sex, ICI class, and malignancy to a cohort of ICI users without IMC, from May 2011 to May 2020. Using univariate and multivariate logistic regression, we determined association of presence of IMC on OS, PFS, and clinical predictors of IMC. Kaplan-Meier curves were generated to compare OS and PFS between ICI users with and without IMC.
METHODS
IMC was significantly associated with a higher OS (mean 24.3 mo vs 17.7 mo, P = 0.05) but not PFS (mean 13.7 mo vs 11.9 mo, P = 0.524). IMC was significantly associated with OS greater than 12 mo [Odds ratio (OR) 2.81, 95% confidence interval (CI) 1.17-6.77]. Vitamin D supplementation was significantly associated with increased risk of IMC (OR 2.48, 95%CI 1.01-6.07).
RESULTS
IMC was significantly associated with OS greater than 12 mo. In contrast to prior work, we found that vitamin D use may be a risk factor for IMC.
CONCLUSION
[ "Humans", "Immune Checkpoint Inhibitors", "Antineoplastic Agents, Immunological", "Retrospective Studies", "Case-Control Studies", "Melanoma", "Colitis", "Vitamin D" ]
9627421
INTRODUCTION
Immune checkpoint inhibitors (ICI) have dramatically changed the landscape of cancer therapy. Early studies showed significantly prolonged survival in patients with metastatic melanoma compared to standard chemotherapy[1], and evidence now exists for improved outcomes in a variety of tumors ranging from lung cancers to urothelial carcinoma to breast cancer[2-5]. Although these are powerful treatments in our armamentarium against malignancy, ICI can cause immune-related adverse events (irAE) characterized by autoimmune-like inflammation in a variety of non-tumor organs, leading to increased morbidity for patients[6]. One of the most common irAE is immune checkpoint inhibitor-mediated colitis (IMC). IMC may occur in up to 40% of patients treated with ipilimumab, an antibody targeting CTLA-4, 11%-17% of patients treated with antibodies against anti-PD-1 or anti-PD-L1, such as nivolumab, pembrolizumab, or atezolizumab, and around 32% of patients treated with a combination of anti-CTLA-4 and anti-PD-1[7]. Prior retrospective analyses of patients with IMC have attempted to identify characteristics associated with development of IMC, including type of malignancy, ICI class, dose of ICI, cancer stage, and vitamin D use[8-11]. Intriguingly, two prior studies have suggested that development of IMC may positively correlate with improved progression-free survival (PFS) and overall survival (OS)[9,10]. One of these studies controlled for confounding effects of ICI class via frequency matching, but was limited to patients with melanoma, hindering wider applicability of their findings[10]. These findings also conflict with data suggesting that use of steroids and the anti-TNF antibody infliximab in patients treated with ICI are associated with worse cancer outcomes[12,13]. These discrepancies represent a significant knowledge gap that impedes our ability to evaluate and manage IMC and ICI use. Here we present data from a retrospective study of patients treated with ICI at our institution who developed IMC across malignancy types. We compare this cohort to a matched control cohort to determine whether IMC was associated with improved progression-free survival and overall survival. We also evaluate which clinical characteristics increase the risk of developing IMC, including severe IMC.
MATERIALS AND METHODS
Study design and population We conducted a retrospective case-control single-center study after obtaining approval from the Institutional Review Board at Stanford University (IRB 57125, approved 6/30/2020). Our primary aim was to determine the association of the presence and severity of IMC with OS and PFS in ICI users. Our secondary aim was to identify clinical variables that predicted development of IMC in ICI users. We evaluated all patients over the age of 18 who had been treated with immune checkpoint inhibitors (ICI) for malignancy at Stanford Health Care from May 2011 to May 2020, including anti-CTLA-4 (ipilimumab), anti-PD-1 (nivolumab, pembrolizumab), and anti-PD-L1 (atezolizumab, avelumab, durvalumab), with follow-up through October 2020. Using the Stanford Research Repository tool, we screened patients treated with ICI who were assigned International Classification of Diseases (ICD) 9 and ICD 10 codes associated with non-infectious colitis and diarrhea (Supplementary Table 1). Each chart that passed the initial screen was further reviewed against clinic notes to confirm the diagnosis of immune checkpoint inhibitor-related colitis by oncology providers. Any patient found to have another explanation for their clinical presentation was excluded from the study. Control patients were matched one to one with each IMC patient for sex, age, malignancy, type of ICI used, prior ICI exposure, and duration of ICI exposure (matched to the number of doses from initiation of ICI to development of colitis in the study cohort). Control patients were initially identified as ICI users lacking the above ICD codes and were confirmed, via direct evaluation of each chart, to lack diarrhea and/or colitis ascribable to ICI per their treating oncologist. We extracted clinical data from IMC and control patient charts, including demographics (age at time of ICI initiation, sex, body mass index, race per patient report), medical history (presence of prior non-liver and non-upper gastrointestinal disease, personal history of autoimmune disease, family history of autoimmune disease), and cancer history (type of malignancy, tumor stage at ICI initiation, prior chemotherapy, prior radiation therapy, type of ICI used, duration of ICI use, OS, and PFS) (Supplementary Table 2). OS was determined as time from initiation of ICI to death, while PFS was determined as time from initiation of ICI to death or progression of disease as determined by oncology providers, based on radiographic evidence of progression. IMC severity was graded using commonly accepted determinants of IMC and irAE grading[14]. We specifically noted prior use of therapies designed to increase immune responses [interleukin (IL)-2, interferon (IFN)-γ, toll-like receptor (TLR)-9 agonist, tebentafusp, or anti-CD47 antibody]. Vitamin D and non-steroidal anti-inflammatory drug (NSAID) use were defined as a vitamin D supplement or NSAID medication, respectively, noted in the history of present illness or on the patient's medication list at the clinic visit closest to the date of ICI initiation. We collected data on IMC diagnosis, including the number of patients who received endoscopy (flexible sigmoidoscopy or colonoscopy), findings on endoscopy, and fecal calprotectin (Supplementary Table 3). Data on management of IMC included treatment with anti-diarrheal medications, mesalamine, steroids (prednisone, budesonide, dexamethasone), infliximab, and vedolizumab.
Statistical analysis

The rates of the primary outcomes (OS > 12 mo and PFS > 6 mo among all ICI users, and OS > 12 mo and PFS > 6 mo in patients with IMC) and secondary outcomes (risk of IMC among patients with malignancy using ICI, and IMC severity), the predictive value of clinical variables for the primary and secondary outcomes, odds ratios (OR) with their 95% confidence intervals (CI), and P values were calculated using Stata/IC 15.1 for Windows (StataCorp, College Station, TX, United States). Dichotomous variables were analyzed using the chi-squared test or Fisher’s exact test where appropriate, and continuous variables were analyzed using Student’s t-test if normally distributed or the Wilcoxon signed-rank test for non-normal data. For the multivariate analyses, model building was based on forward stepwise logistic regression, with a P value of 0.05 required for entry; known predictors were also included. We constructed Kaplan-Meier curves for OS and PFS between patients with and without IMC and between patients with mild vs severe IMC using GraphPad Prism (version 8.3; GraphPad Software, Inc., La Jolla, CA, United States). All authors had access to the study data and reviewed and approved the final manuscript.
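The analyses above were run in Stata and GraphPad Prism. Purely as an illustration of the same workflow, and not the authors' code, the comparisons could be reproduced in Python roughly as follows; the DataFrame `df` and its column names ('imc', 'os_gt_12mo', 'age', 'n_infusions', 'os_months', 'died') are hypothetical.

```python
# Illustrative sketch of the univariate tests, logistic regression, and Kaplan-Meier
# curves described above; assumes one row per patient with hypothetical columns.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency, fisher_exact, ttest_ind
from lifelines import KaplanMeierFitter

def univariate_tests(df: pd.DataFrame) -> dict:
    # Dichotomous predictor vs dichotomous outcome: chi-squared (Fisher's exact if sparse)
    table = pd.crosstab(df["imc"], df["os_gt_12mo"])
    p_chi2 = chi2_contingency(table)[1]
    p_fisher = fisher_exact(table)[1]
    # Continuous variable between groups: t-test if roughly normal; a paired rank test
    # (scipy.stats.wilcoxon) would replace this for skewed, matched data
    p_age = ttest_ind(df.loc[df["imc"] == 1, "age"], df.loc[df["imc"] == 0, "age"]).pvalue
    return {"chi2": p_chi2, "fisher": p_fisher, "age_ttest": p_age}

def multivariate_or(df: pd.DataFrame) -> pd.DataFrame:
    # Logistic regression for adjusted odds ratios; the forward stepwise selection
    # used in the study is omitted here for brevity.
    X = sm.add_constant(df[["imc", "age", "n_infusions"]])
    fit = sm.Logit(df["os_gt_12mo"], X).fit(disp=False)
    ci = fit.conf_int()
    return pd.DataFrame({
        "OR": np.exp(fit.params),
        "95%CI_low": np.exp(ci[0]),
        "95%CI_high": np.exp(ci[1]),
    })

def plot_os_by_imc(df: pd.DataFrame):
    # Kaplan-Meier curves of overall survival stratified by IMC status
    kmf, ax = KaplanMeierFitter(), None
    for label, group in df.groupby("imc"):
        kmf.fit(group["os_months"], event_observed=group["died"], label=f"IMC={label}")
        ax = kmf.plot_survival_function(ax=ax)
    return ax
```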
RESULTS
[ "Immune checkpoint inhibitors (ICI) have dramatically changed the landscape of cancer therapy. Early studies showed significantly prolonged survival in patients with metastatic melanoma compared to standard chemotherapy[1], and evidence now exists for improved outcomes in a variety of tumors ranging from lung cancers to urothelial carcinoma to breast cancer[2-5]. Although these are powerful treatments in our armamentarium against malignancy, ICI can cause immune-related adverse events (irAE) characterized by autoimmune-like inflammation in a variety of non-tumor organs, leading to increased morbidity for patients[6].\nOne of the most common irAE is immune checkpoint inhibitor-mediated colitis (IMC). IMC may occur in up to 40% of patients treated with ipilimumab, an antibody targeting CTLA-4, 11%-17% of patients treated with antibodies against anti-PD-1 or anti-PD-L1, such as nivolumab, pembrolizumab, or atezolizumab, and around 32% of patients treated with a combination of anti-CTLA-4 and anti-PD-1[7]. Prior retrospective analyses of patients with IMC have attempted to identify characteristics associated with development of IMC, including type of malignancy, ICI class, dose of ICI, cancer stage, and vitamin D use[8-11]. Intriguingly, two prior studies have suggested that development of IMC may positively correlate with improved progression-free survival (PFS) and overall survival (OS)[9,10]. One of these studies controlled for confounding effects of ICI class via frequency matching, but was limited to patients with melanoma, hindering wider applicability of their findings[10]. These findings also conflict with data suggesting that use of steroids and the anti-TNF antibody infliximab in patients treated with ICI are associated with worse cancer outcomes[12,13]. These discrepancies represent a significant knowledge gap that impedes our ability to evaluate and manage IMC and ICI use.\nHere we present data from a retrospective study of patients treated with ICI at our institution who developed IMC across malignancy types. We compare this cohort to a matched control cohort to determine whether IMC was associated with improved progression-free survival and overall survival. We also evaluate which clinical characteristics increase the risk of developing IMC, including severe IMC.", "We conducted a retrospective case-control single-center study after obtaining approval from the Institutional Review Board at Stanford University (IRB 57125, approved 6/30/2020). Our primary aim was to determine the association of presence and severity of IMC on OS and PFS in ICI users. Our secondary aim was to identify clinical variables which predicted development of IMC in ICI users. We evaluated all patients over the age of 18 who had been treated with immune checkpoint inhibitors (ICI) for malignancy at Stanford Health Care from May 2011 to May 2020, including anti-CTLA-4 (ipilimumab), anti-PD-1 (nivolumab, pembrolizumab), and anti-PD-L1 (atezolizumab, avelumab, durvalumab), with follow up through October 2020. Using the Stanford Research Repository tool, we screened patients treated with ICI who were assigned International Classification of Diseases (ICD) 9 and ICD 10 codes associated with non-infectious colitis and diarrhea (Supplementary Table 1). Each chart which passed the initial screen was further screened by review of clinic notes to confirm diagnosis of immune checkpoint inhibitor-related colitis by oncology providers. 
Any patient found to have other explanations for their clinical presentation was excluded from the study.\nControl patients were matched one to one with each IMC patient for sex, age, malignancy, type of ICI used, prior ICI exposure, and duration of ICI exposure (matched to number of doses from initiation of ICI to development of colitis in study cohort). Control patients were initially screened by those lacking the above ICD codes and were confirmed via direct evaluation of each chart to lack diarrhea and/or colitis ascribable to ICI per their treating oncologist.\nWe extracted clinical data on IMC and control patient charts including demographics (age at time of ICI initiation, sex, body mass index, race per patient report), medical history (presence of prior non-liver and non-upper gastrointestinal disease, personal history of autoimmune disease, family history of autoimmune disease), and cancer history (type of malignancy, tumor stage at ICI initiation, prior chemotherapy, prior radiation therapy, type of ICI used, duration of ICI use, OS and PFS) (Supplementary Table 2). OS was determined as time from initiation of ICI to death, while PFS was determined as time from initiation of ICI to death or progression of disease as determined by oncology providers, based on radiographic evidence of progression. IMC severity was graded using commonly accepted determinants of IMC and irAE grading[14]. We specifically noted prior use of therapies designed to increase immune responses [interleukin (IL)-2, interferon (IFN)-γ, toll-like receptor (TLR)-9 agonist, tebentafusp, or anti-CD47 antibody]. Vitamin D and non-steroidal anti-inflammatory (NSAID) use were defined as vitamin D supplement or NSAID medication, respectively, noted in the history of present illness or on the patient’s medication list at the clinic visit closest to their date of ICI initiation.\nWe collected data on IMC diagnosis including number of patients who received endoscopy (flexible sigmoidoscopy or colonoscopy), findings on endoscopy, and fecal calprotectin (Supplementary Table 3). Data on management of IMC included treatment with anti-diarrheal medications, mesalamine, steroids (prednisone, budesonide, dexamethasone), infliximab, and vedolizumab.", "The rate of the primary outcomes (OS > 12 mo and PFS > 6 mo among all ICI users, OS > 12 mo and PFS > 6 mo in patients with IMC) and secondary outcomes (risks of IMC among patients with malignancy using ICI, IMC severity), predictive value of clinical variables on primary and secondary outcomes, odds ratio (OR) with its 95% confidence interval (CI), and P values were calculated using Statistics/Data Analysis (Stata/IC 15.1 for Windows, College Station, TX, United States). Dichotomous variables were analyzed for outcomes using the chi-squared test or the Fisher’s exact test where appropriate, and continuous variables were analyzed using Student’s t-tests if normally distributed, or the Wilcoxon signed-rank test for non-normal data. For our multivariate analyses, model building was based on forward stepwise logistic regression, with a P value of 0.05 required for entry, and known predictors were also included. We constructed Kaplan Meier curves for the outcomes of OS and PFS between patients with and without IMC and patients with mild vs severe IMC using GraphPad Prism (version 8.3; GraphPad Software, Inc., La Jolla, CA, United States). 
Clinical characteristics associated with IMC

We identified a total of 314 patients treated with ICI at Stanford Health Care from May 2011 to May 2020 who had ICD codes matching our query (Supplementary Table 1). Of these, 64 had a diagnosis of IMC per review of oncology providers’ notes, after excluding patients with alternative diagnoses for their symptoms. Twenty-four (37.5%) of these IMC patients underwent endoscopy (colonoscopy or flexible sigmoidoscopy) during workup, of whom seven (29.2%) had a normal endoscopic appearance, consistent with prior reports demonstrating that approximately one third of patients with IMC related to anti-PD-1 therapy have microscopic colitis[15] (Supplementary Table 3). An additional 14 patients (21.9%) had imaging findings suggestive of IMC, while 3 patients (4.69%) without imaging or endoscopy had an elevated fecal calprotectin or lactoferrin.

These 64 patients were manually matched 1:1 with control patients based on age, sex, malignancy, type of ICI, prior ICI exposure, and duration of ICI use. We compared clinical characteristics of patients from the IMC cohort and the control cohort (Table 1). None of the matched characteristics differed significantly between the two cohorts. The mean age across the combined cohorts was 66.6 years, with an average age of 67.4 in the IMC cohort compared with 65.8 in the control cohort (P = 0.42). In each group, 57.81% of patients were male (P = 1.00). Patients were predominantly white in both groups, with 52 (81.25%) white individuals in the IMC cohort compared to 50 (78.13%) in the control group (P = 0.66). The most common malignancy in each group was melanoma [33 (51.56%) in both cohorts], followed by renal cell carcinoma [8 (12.5%) in the IMC cohort and 7 (10.94%) in the control cohort] and non-small cell lung cancer [6 (9.38%) in both cohorts]. Both groups had similar numbers of patients with stage IV malignancy [56 (87.5%) in the IMC cohort and 58 (90.63%) in the control cohort, P = 0.778]. Combination ipilimumab and nivolumab was the most commonly used checkpoint therapy [24 (37.5%) of patients in each cohort], followed by nivolumab monotherapy [19 (29.69%) of each cohort] and ipilimumab monotherapy [11 (17.19%) of each cohort].

Table 1. Baseline characteristics of patients with immune checkpoint inhibitor use. Footnotes: variables matched between cases and controls; number of ICI infusions prior to IMC diagnosis (cases) or total (controls); see Supplementary Table 2. IMC: Immune checkpoint inhibitor-mediated colitis; ICI: Immune checkpoint inhibitor; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; PFS: Progression-free survival; irAE: Immune-related adverse event; OS: Overall survival.

Among the remaining clinical characteristics evaluated, a personal history of autoimmune disease (including prior irAE) and a family history of autoimmune disease were significantly more common in patients with IMC (P = 0.037 and 0.048, respectively). Intriguingly, prior use of a therapy designed to increase immune responses was more common in the control cohort without IMC (P = 0.027). In contrast to prior data[11], use of vitamin D supplementation at the time of the first dose of ICI was significantly more prevalent in patients with IMC (P = 0.020). Neither smoking status, NSAID use at the time of ICI initiation, steroid use at the time of ICI initiation, nor recent vaccination was significantly more common in IMC patients compared with controls.

IMC significantly increases overall survival

As IMC has previously been associated with increased OS and PFS in cancer patients[9,10], we evaluated whether this association was present in our study. We found that OS was significantly longer in patients who developed IMC compared with those who did not, with a mean OS of 24.3 mo in patients with IMC and 17.7 mo in controls (P = 0.05, Table 1). OS at 12 mo following ICI initiation was significantly higher in patients who developed IMC compared with those who did not (P = 0.02, Figure 1). However, in contrast to prior findings, our study did not find a significant difference in PFS between IMC patients and controls, with a mean PFS of 13.7 mo in IMC patients and 11.9 mo in controls (P = 0.524) (Table 1). PFS also did not differ between patients who developed mild vs severe IMC (P = 0.690, Supplementary Table 5).

Figure 1. Overall survival at 12 mo in patients with and without immune checkpoint inhibitor-mediated colitis. Kaplan-Meier curve of overall survival at 12 mo in patients with immune checkpoint inhibitor-mediated colitis (IMC, red) and without IMC (black). IMC: Immune checkpoint inhibitor-mediated colitis; HR: Hazard ratio.

Across both cohorts, we identified clinical characteristics significantly associated with OS greater than 12 mo and PFS greater than 6 mo, which are correlated with cancer outcomes in patients treated with ICI[16] (Tables 2 and 3, Supplementary Tables 4 and 5). IMC was significantly and independently associated with OS > 12 mo in the multivariate model (OR 2.81, 95%CI 1.17-6.77, P = 0.021) (Table 2). The number of ICI infusions was also positively associated with OS > 12 mo (OR 1.23, 95%CI 1.09-1.40), while sarcoma as the underlying malignancy was significantly associated with OS < 12 mo (OR 0.17, 95%CI 0.029-0.947). Within the IMC cohort, nivolumab use was associated with OS < 12 mo in the univariate analysis (OR 0.09, 95%CI 0.01-0.83), while only age was associated with OS < 12 mo in the multivariate analysis (OR 0.93, 95%CI 0.88-0.99) (Table 3). No individual malignancy was significantly associated with OS > 12 mo within the IMC cohort (Table 3).

Table 2. Univariate and multivariate predictors of overall survival > 12 mo among patients with malignancy using immune checkpoint inhibitors (n = 128). Footnotes: number of ICI infusions prior to IMC diagnosis (cases) or total (controls); see Supplementary Table 2. ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; OR: Odds ratio; CI: Confidence interval; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune-related adverse event.

Table 3. Univariate and multivariate predictors of overall survival > 12 mo among patients with immune checkpoint inhibitor colitis (n = 64). Footnotes and abbreviations as for Table 2.

Significant risk factors for developing IMC and severe IMC

As certain clinical characteristics were significantly more common in patients with IMC than in controls, we evaluated whether any of these characteristics were associated with the risk of developing IMC (Table 4). In univariate analysis, a history of autoimmune disease and vitamin D use were both significantly associated with an increased risk of IMC (OR 2.45, 95%CI 1.04-5.78, P = 0.040 for autoimmune disease; OR 2.51, 95%CI 1.14-5.54, P = 0.022 for vitamin D use). Interestingly, vitamin D supplementation has previously been associated with a decreased risk of IMC, in contrast to our findings here[11]. Prior use of an immune-enhancing therapy (Supplementary Table 2) was associated with a significantly decreased risk of IMC (OR 0.20, 95%CI 0.04-0.95, P = 0.043). In the multivariate model incorporating these characteristics, only the use of immune-enhancing therapy remained significantly associated with a decreased risk of IMC, with an OR of 0.20 (95%CI 0.04-1.00, P = 0.050).

Table 4. Univariate and multivariate predictors of immune checkpoint inhibitor-mediated colitis among patients using immune checkpoint inhibitors (n = 128). Footnotes: number of ICI infusions prior to IMC diagnosis (cases) or total (controls); see Supplementary Table 2. ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune-related adverse event.

We next determined whether any variables were associated with an increased risk of severe IMC. Consistent with prior studies of irAE in ICI users[17-19], we defined grade 1-2 IMC as mild and grade 3 or higher IMC as severe. In our study, 38 of the 64 patients (59.4%) had severe IMC (Supplementary Table 3). In the univariate model, ipilimumab and vitamin D supplementation were significantly associated with the development of severe IMC (OR 8.93, 95%CI 1.07-74.8, P = 0.043 for ipilimumab; OR 3.33, 95%CI 1.10-10.14, P = 0.034 for vitamin D) (Supplementary Table 6). Combination therapy (ipilimumab plus nivolumab) trended towards an increased risk of severe IMC but did not reach significance (P = 0.053). In contrast, pembrolizumab was significantly associated with a decreased risk of severe IMC (OR 0.26, 95%CI 0.09-0.81, P = 0.020). In the multivariate model no characteristic reached significance for association with severe IMC, although both combination therapy and ipilimumab monotherapy approached significance for an increased risk of severe IMC (P = 0.058 and 0.060, respectively).
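For context on how odds ratios and 95% confidence intervals of the kind reported above are conventionally obtained (the manuscript reports only the final estimates): an adjusted OR is exp(b) for a logistic-regression coefficient b, with a Wald 95%CI of exp(b plus or minus 1.96 times its standard error), and an unadjusted OR from a 2x2 table with cells a, b, c, d is (a*d)/(b*c). A generic helper, using no study data, is sketched below.

```python
# Generic helper (no study data): unadjusted odds ratio and Wald 95% confidence interval
# from a 2x2 table with cells a/b (exposed with/without outcome) and c/d (unexposed).
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(odds_ratio) - z * se_log_or)
    high = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, (low, high)
```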
DISCUSSION

In our study, development of IMC following ICI use was associated with improved overall survival, although not improved progression-free survival, compared with ICI users without IMC. This is similar to findings at another center demonstrating both improved OS and PFS in patients with IMC[9,10]. We also found that vitamin D supplementation at the start of ICI treatment is a risk factor for developing IMC, in contrast to other research suggesting that vitamin D use is associated with a lower risk of IMC[11]. Our results therefore provide critical additional information on these previous associations and highlight the need for prospective studies.

Both publications showing improved survival in patients with IMC were retrospective analyses performed at the same center[9,10]. One study noted that ICI class was significantly associated with development of IMC[9], a finding that has been demonstrated several times in retrospective work[8,17,18,20-23]. However, unlike our work, that study did not match control patients to account for this likely confounder, as ICI class has been associated with differences in PFS in some malignancies[24,25]. The second study at this center examined survival in melanoma patients with IMC, in contrast to our work across multiple malignancies, although frequency matching was performed to account for the use of different ICI classes[10]. Since our study is the first to examine survival in patients with IMC at a different center, our work reinforces that IMC may be associated with increased overall survival and underscores the need for prospective studies.

The only other independent factor in our study positively associated with OS > 12 mo was the number of ICI doses. This finding may reflect length-time bias, as patients who survive longer are more likely to receive more doses of ICI. It is also possible that patients who required cessation of ICI due to IMC had worse outcomes, although prior work has suggested that patients still derive equivalent long-term benefit from ICI even if it is stopped due to irAE[26]. Type of underlying malignancy (sarcoma) was independently associated with OS < 12 mo in our study. These findings are not unexpected, as most advanced soft tissue sarcomas have a median OS of less than one year[27].

In contrast to prior work, we found a positive association between vitamin D supplementation and development of IMC[11]. It is unclear whether this is related to low serum vitamin D levels or to a negative impact of the supplementation itself, as vitamin D levels near the time of ICI initiation were not recorded in most patients. Additionally, the prior report on vitamin D in IMC included melanoma patients only, which may partially account for discrepancies with our study. As this association did not remain significant in our multivariate analysis, it is possible that another confounding factor explains the association between vitamin D supplementation and IMC in our study.

In addition to challenging existing findings, we report here on additional novel risk factors for IMC. We are the first to report that use of immune-enhancing medications prior to ICI, such as IL-2 or interferon-γ, is significantly and independently associated with a decreased risk of IMC. Much more work should be done to evaluate the relationship between these medications and future risk of IMC.

Finally, our study is the first to examine risk factors for severe IMC. In addition to increasing the risk of IMC overall, vitamin D supplementation may also be a risk factor for severe IMC. Similarly, our results suggest that the use of ipilimumab may be associated with an increased risk of severe IMC, while pembrolizumab may be associated with a decreased risk of severe IMC in patients who develop this syndrome. As ipilimumab has previously been associated with an increased risk of IMC overall, while anti-PD-1 agents, including pembrolizumab, are associated with a lower risk of IMC overall[8,9], these findings emphasize that ICI class may affect the severity of IMC.

Our findings may significantly impact clinical practice by identifying novel risks for IMC and severe IMC of which clinicians, including oncologists and gastroenterologists, should be aware, while also potentially providing reassurance to physicians and patients that development of IMC may be a positive prognosticator for cancer survival. Neither prior work nor ours found that treatment of IMC, including steroids or infliximab, negatively impacts OS[9,10], and therefore appropriate treatment of IMC should be pursued early to minimize morbidity and mortality. Both steroid and infliximab use have been suggested to worsen survival in ICI users[12,13], but current evidence suggests that use of these medications for IMC specifically does not impair cancer outcomes. Our work also cautions against vitamin D supplementation in ICI users, as it may increase the risk of IMC and severe IMC, although carefully designed studies with vitamin D measurements should be performed.

Our work has several strengths. We performed robust cohort matching to minimize confounding effects of ICI class and malignancy. This is also the first study to explore risk factors associated with severe IMC. However, there are limitations to our work. As a retrospective, observational study, it is subject to recall bias and cannot evaluate causation, and it may also be subject to immortal time bias (ITB). Patients may have longer exposure to checkpoint inhibitors before developing IMC, compared with patients who do not manifest this irAE, leading to a period during which they must survive long enough to develop IMC and are therefore "immortal"[28]. We found that OS > 12 mo was significantly associated with greater numbers of ICI infusions (Table 2), which is likely due to ITB. However, greater numbers of infusions were not associated with IMC (Table 4). This suggests that the association between OS > 12 mo and IMC is likely independent of the number of ICI infusions, limiting this as a source of ITB in our study.

Other weaknesses of our work include selection of patients based on clinical criteria for IMC, including patients who did not undergo endoscopy or other objective testing for intestinal inflammation and therefore may not have had true colitis. Like prior work, this is a single-center study, and our results may not be widely generalizable, particularly since we identified fewer patients than prior work and our patient population is heterogeneous, including individuals with several different underlying malignancies. We did not exclude patients with prior non-GI irAEs in either group, although the presence of these was not independently associated with increased OS in our study. We also have not accounted for other factors that may be potential predictors of ICI response, including tumor PD-L1 expression, tumor mutational burden, gut microbial composition, proton pump inhibitor use, and combination treatment with tyrosine kinase inhibitors[29-34].

CONCLUSION

In conclusion, our findings suggest that the presence of IMC is associated with improved OS in cancer patients when cases are matched closely to controls. We also found that vitamin D supplementation was significantly associated with development of both IMC and severe IMC, while immune-enhancing medications were significantly associated with a decreased risk of IMC. Future research should seek to expand current knowledge of the relationship between IMC and cancer survival; in particular, future work should focus on broadening the type and number of patients treated with immune checkpoint inhibitors, on tracking patients prospectively from before ICI initiation to determine whether this relationship holds, and on resolving the discrepancies raised in our work. Closely involving gastroenterologists in the workup and management of IMC will be crucial to ensuring the best possible care for these patients.
Control patients were initially screened by those lacking the above ICD codes and were confirmed via direct evaluation of each chart to lack diarrhea and/or colitis ascribable to ICI per their treating oncologist.\nWe extracted clinical data on IMC and control patient charts including demographics (age at time of ICI initiation, sex, body mass index, race per patient report), medical history (presence of prior non-liver and non-upper gastrointestinal disease, personal history of autoimmune disease, family history of autoimmune disease), and cancer history (type of malignancy, tumor stage at ICI initiation, prior chemotherapy, prior radiation therapy, type of ICI used, duration of ICI use, OS and PFS) (Supplementary Table 2). OS was determined as time from initiation of ICI to death, while PFS was determined as time from initiation of ICI to death or progression of disease as determined by oncology providers, based on radiographic evidence of progression. IMC severity was graded using commonly accepted determinants of IMC and irAE grading[14]. We specifically noted prior use of therapies designed to increase immune responses [interleukin (IL)-2, interferon (IFN)-γ, toll-like receptor (TLR)-9 agonist, tebentafusp, or anti-CD47 antibody]. Vitamin D and non-steroidal anti-inflammatory (NSAID) use were defined as vitamin D supplement or NSAID medication, respectively, noted in the history of present illness or on the patient’s medication list at the clinic visit closest to their date of ICI initiation.\nWe collected data on IMC diagnosis including number of patients who received endoscopy (flexible sigmoidoscopy or colonoscopy), findings on endoscopy, and fecal calprotectin (Supplementary Table 3). Data on management of IMC included treatment with anti-diarrheal medications, mesalamine, steroids (prednisone, budesonide, dexamethasone), infliximab, and vedolizumab.", "The rate of the primary outcomes (OS > 12 mo and PFS > 6 mo among all ICI users, OS > 12 mo and PFS > 6 mo in patients with IMC) and secondary outcomes (risks of IMC among patients with malignancy using ICI, IMC severity), predictive value of clinical variables on primary and secondary outcomes, odds ratio (OR) with its 95% confidence interval (CI), and P values were calculated using Statistics/Data Analysis (Stata/IC 15.1 for Windows, College Station, TX, United States). Dichotomous variables were analyzed for outcomes using the chi-squared test or the Fisher’s exact test where appropriate, and continuous variables were analyzed using Student’s t-tests if normally distributed, or the Wilcoxon signed-rank test for non-normal data. For our multivariate analyses, model building was based on forward stepwise logistic regression, with a P value of 0.05 required for entry, and known predictors were also included. We constructed Kaplan Meier curves for the outcomes of OS and PFS between patients with and without IMC and patients with mild vs severe IMC using GraphPad Prism (version 8.3; GraphPad Software, Inc., La Jolla, CA, United States). All authors had access to the study data and reviewed and approved the final manuscript.", "Clinical characteristics associated with IMC We identified a total of 314 patients treated with ICI at Stanford Health Care from May 2011 to May 2020 who had ICD codes matching our query (Supplementary Table 1). Of these, 64 had a diagnosis of IMC per review of Oncology providers’ notes, after excluding patients with alternative diagnoses for their symptoms. 
24 (37.5%) of these IMC patients underwent an endoscopy (colonoscopy or flexible sigmoidoscopy) during workup, of which seven (29.2%) had a normal endoscopic appearance, consistent with prior reports demonstrating that approximately one third of patients with IMC related to anti-PD-1 therapy have microscopic colitis[15] (Supplementary Table 3). An additional 14 patients (21.9%) had imaging findings suggestive of IMC while 3 patients (4.69%) without imaging or endoscopy had an elevated calprotectin or fecal lactoferrin.\nThese 64 patients were manually matched 1:1 with control patients based on age, sex, malignancy, type of ICI, whether or not the patient had prior ICI exposure, and duration of ICI use. We compared clinical characteristics of patients from the IMC cohort and the control cohort (Table 1). None of the matched characteristics were significantly different between the two cohorts. The mean age across the combined cohorts was 66.6 years, with an average age of 67.4 in the cohort with IMC compared with 65.8 in the control cohort (P = 0.42). 57.81% of patients in each group were male (P = 1.00). Patients were predominantly white in both groups, with 52 (81.25%) white individuals in the IMC cohort compared to 50 (78.13%) in the control group (P = 0.66). The most common malignancy in each group was melanoma [33 (51.56%) in both cohorts], followed by renal cell carcinoma [8 (12.5%) in the IMC cohort and 7 (10.94%) in the control cohort] and non-small cell lung cancer [6 (9.38%) in both cohorts]. Both groups had similar numbers of patients with stage IV malignancy [56 (87.5%) in the IMC cohort and 58 (90.63%) in the control cohort, P = 0.778]. Combination ipilimumab and nivolumab was the most commonly used checkpoint therapy [24 (37.5%) of patients in each cohort], followed by nivolumab monotherapy [19 (29.69%) of each cohort] and ipilimumab monotherapy [11 (17.19%) of each cohort].\nBaseline characteristics of patients with immune checkpoint inhibitor use\nVariable matched between cases and controls.\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nIMC: Immune checkpoint inhibitor-mediated colitis; ICI: Immune checkpoint inhibitor; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; PFS: Progression-Free Survival; irAE: Immune related adverse event; OS: Overall survival.\nAmong the remainder of the clinical characteristics evaluated, personal history of autoimmune disease (including prior irAE) and family history of autoimmune disease were significantly more common in patients with IMC (P = 0.037 and 0.048, respectively). Intriguingly, prior use of a therapy designed to increase immune responses was more common in the control cohort without IMC (P = 0.027). In contrast to prior data[11], use of vitamin D supplementation at the time of first dose of ICI was significantly more prevalent in patients with IMC (P = 0.020). Neither smoking status, NSAID use at time of ICI initiation, steroid use at the time of ICI initiation, nor recent vaccination were significantly more common in IMC patients compared to controls.\nWe identified a total of 314 patients treated with ICI at Stanford Health Care from May 2011 to May 2020 who had ICD codes matching our query (Supplementary Table 1). 
Of these, 64 had a diagnosis of IMC per review of Oncology providers’ notes, after excluding patients with alternative diagnoses for their symptoms. 24 (37.5%) of these IMC patients underwent an endoscopy (colonoscopy or flexible sigmoidoscopy) during workup, of which seven (29.2%) had a normal endoscopic appearance, consistent with prior reports demonstrating that approximately one third of patients with IMC related to anti-PD-1 therapy have microscopic colitis[15] (Supplementary Table 3). An additional 14 patients (21.9%) had imaging findings suggestive of IMC while 3 patients (4.69%) without imaging or endoscopy had an elevated calprotectin or fecal lactoferrin.\nThese 64 patients were manually matched 1:1 with control patients based on age, sex, malignancy, type of ICI, whether or not the patient had prior ICI exposure, and duration of ICI use. We compared clinical characteristics of patients from the IMC cohort and the control cohort (Table 1). None of the matched characteristics were significantly different between the two cohorts. The mean age across the combined cohorts was 66.6 years, with an average age of 67.4 in the cohort with IMC compared with 65.8 in the control cohort (P = 0.42). 57.81% of patients in each group were male (P = 1.00). Patients were predominantly white in both groups, with 52 (81.25%) white individuals in the IMC cohort compared to 50 (78.13%) in the control group (P = 0.66). The most common malignancy in each group was melanoma [33 (51.56%) in both cohorts], followed by renal cell carcinoma [8 (12.5%) in the IMC cohort and 7 (10.94%) in the control cohort] and non-small cell lung cancer [6 (9.38%) in both cohorts]. Both groups had similar numbers of patients with stage IV malignancy [56 (87.5%) in the IMC cohort and 58 (90.63%) in the control cohort, P = 0.778]. Combination ipilimumab and nivolumab was the most commonly used checkpoint therapy [24 (37.5%) of patients in each cohort], followed by nivolumab monotherapy [19 (29.69%) of each cohort] and ipilimumab monotherapy [11 (17.19%) of each cohort].\nBaseline characteristics of patients with immune checkpoint inhibitor use\nVariable matched between cases and controls.\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nIMC: Immune checkpoint inhibitor-mediated colitis; ICI: Immune checkpoint inhibitor; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; PFS: Progression-Free Survival; irAE: Immune related adverse event; OS: Overall survival.\nAmong the remainder of the clinical characteristics evaluated, personal history of autoimmune disease (including prior irAE) and family history of autoimmune disease were significantly more common in patients with IMC (P = 0.037 and 0.048, respectively). Intriguingly, prior use of a therapy designed to increase immune responses was more common in the control cohort without IMC (P = 0.027). In contrast to prior data[11], use of vitamin D supplementation at the time of first dose of ICI was significantly more prevalent in patients with IMC (P = 0.020). 
Neither smoking status, NSAID use at time of ICI initiation, steroid use at the time of ICI initiation, nor recent vaccination were significantly more common in IMC patients compared to controls.\nIMC significantly increases overall survival As IMC has previously been associated with increased OS and PFS in cancer patients[9,10], we evaluated whether this association was seen in our study. We found that OS was significantly longer in patients who developed IMC compared to those who did not, with a mean OS of 24.3 mo in patients with IMC and 17.7 mo in control (P = 0.05, Table 1). OS at 12 mo following ICI initiation was significantly higher in patients who developed IMC compared to those who did not (P = 0.02, Figure 1). However, in contrast to prior findings, our study did not find a significant difference in PFS between IMC patients and controls, with a mean PFS 13.7 mo in IMC patients and 11.9 mo in controls (P = 0.524) (Table 1). PFS also did not differ between patients who developed mild vs severe IMC (P = 0.690, Supplementary Table 5).\n\nOverall survival at 12 mo in patients with and without immune checkpoint inhibitor-mediated colitis. Kaplan-Meier curve of overall survival at 12 mo in patients with immune checkpoint inhibitor-mediated colitis (IMC, red) and without IMC (black). IMC: Immune checkpoint inhibitor-mediated colitis; HR: Hazard ratio.\nAcross both cohorts, we identified clinical characteristics significantly associated with OS greater than 12 mo and PFS greater than 6 mo, which are correlated with cancer outcomes in patients treated with ICI[16] (Tables 2 and 3) (Supplementary Tables 4 and 5). IMC was significantly and independently associated with OS > 12 mo in the multivariate model (OR 2.81, 95%CI 1.17-6.77, P = 0.021) (Table 2). Number of ICI infusions was also positively associated with OS > 12 mo (OR 1.23, 95%CI 1.09-1.40), while sarcoma as underlying malignancy was significantly associated with OS < 12 mo (OR 0.17, 95%CI 0.029-0.947). Within the IMC cohort, nivolumab use was associated with OS < 12 mo in the univariate analysis (OR 0.09, 95%CI 0.01-0.83), while only age was associated with OS < 12 mo in multivariate analysis (OR 0.93, 95%CI 0.88-0.99) (Table 3). 
No individual malignancy was significantly associated with OS > 12 mo within the IMC cohort (Table 3).\nUnivariate and multivariate predictors of overall survival > 12 mo among patients with malignancy using immune checkpoint inhibitor (n = 128)\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; OR: Odds ratios; CI: Confidence interval; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event.\nUnivariate and multivariate predictors of overall survival > 12 mo among patients with immune checkpoint inhibitor colitis (n = 64)\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event.\nAs IMC has previously been associated with increased OS and PFS in cancer patients[9,10], we evaluated whether this association was seen in our study. We found that OS was significantly longer in patients who developed IMC compared to those who did not, with a mean OS of 24.3 mo in patients with IMC and 17.7 mo in control (P = 0.05, Table 1). OS at 12 mo following ICI initiation was significantly higher in patients who developed IMC compared to those who did not (P = 0.02, Figure 1). However, in contrast to prior findings, our study did not find a significant difference in PFS between IMC patients and controls, with a mean PFS 13.7 mo in IMC patients and 11.9 mo in controls (P = 0.524) (Table 1). PFS also did not differ between patients who developed mild vs severe IMC (P = 0.690, Supplementary Table 5).\n\nOverall survival at 12 mo in patients with and without immune checkpoint inhibitor-mediated colitis. Kaplan-Meier curve of overall survival at 12 mo in patients with immune checkpoint inhibitor-mediated colitis (IMC, red) and without IMC (black). IMC: Immune checkpoint inhibitor-mediated colitis; HR: Hazard ratio.\nAcross both cohorts, we identified clinical characteristics significantly associated with OS greater than 12 mo and PFS greater than 6 mo, which are correlated with cancer outcomes in patients treated with ICI[16] (Tables 2 and 3) (Supplementary Tables 4 and 5). IMC was significantly and independently associated with OS > 12 mo in the multivariate model (OR 2.81, 95%CI 1.17-6.77, P = 0.021) (Table 2). Number of ICI infusions was also positively associated with OS > 12 mo (OR 1.23, 95%CI 1.09-1.40), while sarcoma as underlying malignancy was significantly associated with OS < 12 mo (OR 0.17, 95%CI 0.029-0.947). Within the IMC cohort, nivolumab use was associated with OS < 12 mo in the univariate analysis (OR 0.09, 95%CI 0.01-0.83), while only age was associated with OS < 12 mo in multivariate analysis (OR 0.93, 95%CI 0.88-0.99) (Table 3). 
No individual malignancy was significantly associated with OS > 12 mo within the IMC cohort (Table 3).\nUnivariate and multivariate predictors of overall survival > 12 mo among patients with malignancy using immune checkpoint inhibitor (n = 128)\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; OR: Odds ratios; CI: Confidence interval; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event.\nUnivariate and multivariate predictors of overall survival > 12 mo among patients with immune checkpoint inhibitor colitis (n = 64)\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event.\nSignificant risk factors for developing IMC and severe IMC As certain clinical characteristics were significantly more common in patients with IMC compared to controls, we evaluated whether any of these clinical characteristics were associated with risk of developing IMC (Table 4). In univariate analysis, history of autoimmune disease and vitamin D use were both significantly associated with increased risk of IMC (OR 2.45, 95%CI 1.04-5.78, P = 0.040 for autoimmune disease; OR 2.51, 95%CI 1.14-5.54, P = 0.022 for vitamin D use). Interestingly, the use of vitamin D supplementation has previously been associated with a decreased risk of IMC, in contrast to our findings here[11]. Prior use of an immune-enhancing therapy (Supplementary Table 2) was associated with a significantly decreased risk of IMC (OR 0.20, 95%CI 0.04-0.95, P = 0.043). In the multivariate model which incorporated these characteristics, only the use of immune-enhancing therapy remained significantly associated with decreased risk of IMC, with an OR of 0.20 (95%CI 0.04-1.00, P = 0.050). \nUnivariate and multivariate predictors of immune checkpoint inhibitor-mediated colitis among patients using immune checkpoint inhibitor (n = 128)\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event.\nWe next determined if any variables were associated with an increased risk of severe IMC. Consistent with prior studies of irAE in ICI[17-19], we defined grade 1-2 IMC as mild and grade 3 or higher IMC as severe. In our study, 38 of the 64 patients (59.4%) had severe IMC (Supplementary Table 3). In the univariate model, ipilimumab and vitamin D supplementation were significantly associated with development of severe IMC (OR 8.93, 95%CI 1.07-74.8, P = 0.043 for ipilimumab; OR 3.33, 95%CI 1.10-10.14, P = 0.034 for vitamin D) (Supplementary Table 6). Combination therapy (ipilimumab plus nivolumab) trended towards an increased risk of severe IMC but did not reach significance (P = 0.053). 
In contrast, pembrolizumab was significantly associated with a decreased risk of severe IMC (OR 0.26, 95%CI 0.09-0.81, P = 0.020). In the multivariate model no characteristic reached significance for association with severe IMC, although both combination therapy and ipilimumab monotherapy approached significance for increased risk of severe IMC (P = 0.058 and 0.060, respectively).\nAs certain clinical characteristics were significantly more common in patients with IMC compared to controls, we evaluated whether any of these clinical characteristics were associated with risk of developing IMC (Table 4). In univariate analysis, history of autoimmune disease and vitamin D use were both significantly associated with increased risk of IMC (OR 2.45, 95%CI 1.04-5.78, P = 0.040 for autoimmune disease; OR 2.51, 95%CI 1.14-5.54, P = 0.022 for vitamin D use). Interestingly, the use of vitamin D supplementation has previously been associated with a decreased risk of IMC, in contrast to our findings here[11]. Prior use of an immune-enhancing therapy (Supplementary Table 2) was associated with a significantly decreased risk of IMC (OR 0.20, 95%CI 0.04-0.95, P = 0.043). In the multivariate model which incorporated these characteristics, only the use of immune-enhancing therapy remained significantly associated with decreased risk of IMC, with an OR of 0.20 (95%CI 0.04-1.00, P = 0.050). \nUnivariate and multivariate predictors of immune checkpoint inhibitor-mediated colitis among patients using immune checkpoint inhibitor (n = 128)\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event.\nWe next determined if any variables were associated with an increased risk of severe IMC. Consistent with prior studies of irAE in ICI[17-19], we defined grade 1-2 IMC as mild and grade 3 or higher IMC as severe. In our study, 38 of the 64 patients (59.4%) had severe IMC (Supplementary Table 3). In the univariate model, ipilimumab and vitamin D supplementation were significantly associated with development of severe IMC (OR 8.93, 95%CI 1.07-74.8, P = 0.043 for ipilimumab; OR 3.33, 95%CI 1.10-10.14, P = 0.034 for vitamin D) (Supplementary Table 6). Combination therapy (ipilimumab plus nivolumab) trended towards an increased risk of severe IMC but did not reach significance (P = 0.053). In contrast, pembrolizumab was significantly associated with a decreased risk of severe IMC (OR 0.26, 95%CI 0.09-0.81, P = 0.020). In the multivariate model no characteristic reached significance for association with severe IMC, although both combination therapy and ipilimumab monotherapy approached significance for increased risk of severe IMC (P = 0.058 and 0.060, respectively).", "We identified a total of 314 patients treated with ICI at Stanford Health Care from May 2011 to May 2020 who had ICD codes matching our query (Supplementary Table 1). Of these, 64 had a diagnosis of IMC per review of Oncology providers’ notes, after excluding patients with alternative diagnoses for their symptoms. 
24 (37.5%) of these IMC patients underwent an endoscopy (colonoscopy or flexible sigmoidoscopy) during workup, of which seven (29.2%) had a normal endoscopic appearance, consistent with prior reports demonstrating that approximately one third of patients with IMC related to anti-PD-1 therapy have microscopic colitis[15] (Supplementary Table 3). An additional 14 patients (21.9%) had imaging findings suggestive of IMC while 3 patients (4.69%) without imaging or endoscopy had an elevated calprotectin or fecal lactoferrin.\nThese 64 patients were manually matched 1:1 with control patients based on age, sex, malignancy, type of ICI, whether or not the patient had prior ICI exposure, and duration of ICI use. We compared clinical characteristics of patients from the IMC cohort and the control cohort (Table 1). None of the matched characteristics were significantly different between the two cohorts. The mean age across the combined cohorts was 66.6 years, with an average age of 67.4 in the cohort with IMC compared with 65.8 in the control cohort (P = 0.42). 57.81% of patients in each group were male (P = 1.00). Patients were predominantly white in both groups, with 52 (81.25%) white individuals in the IMC cohort compared to 50 (78.13%) in the control group (P = 0.66). The most common malignancy in each group was melanoma [33 (51.56%) in both cohorts], followed by renal cell carcinoma [8 (12.5%) in the IMC cohort and 7 (10.94%) in the control cohort] and non-small cell lung cancer [6 (9.38%) in both cohorts]. Both groups had similar numbers of patients with stage IV malignancy [56 (87.5%) in the IMC cohort and 58 (90.63%) in the control cohort, P = 0.778]. Combination ipilimumab and nivolumab was the most commonly used checkpoint therapy [24 (37.5%) of patients in each cohort], followed by nivolumab monotherapy [19 (29.69%) of each cohort] and ipilimumab monotherapy [11 (17.19%) of each cohort].\nBaseline characteristics of patients with immune checkpoint inhibitor use\nVariable matched between cases and controls.\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nIMC: Immune checkpoint inhibitor-mediated colitis; ICI: Immune checkpoint inhibitor; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; PFS: Progression-Free Survival; irAE: Immune related adverse event; OS: Overall survival.\nAmong the remainder of the clinical characteristics evaluated, personal history of autoimmune disease (including prior irAE) and family history of autoimmune disease were significantly more common in patients with IMC (P = 0.037 and 0.048, respectively). Intriguingly, prior use of a therapy designed to increase immune responses was more common in the control cohort without IMC (P = 0.027). In contrast to prior data[11], use of vitamin D supplementation at the time of first dose of ICI was significantly more prevalent in patients with IMC (P = 0.020). Neither smoking status, NSAID use at time of ICI initiation, steroid use at the time of ICI initiation, nor recent vaccination were significantly more common in IMC patients compared to controls.", "As IMC has previously been associated with increased OS and PFS in cancer patients[9,10], we evaluated whether this association was seen in our study. 
We found that OS was significantly longer in patients who developed IMC compared to those who did not, with a mean OS of 24.3 mo in patients with IMC and 17.7 mo in control (P = 0.05, Table 1). OS at 12 mo following ICI initiation was significantly higher in patients who developed IMC compared to those who did not (P = 0.02, Figure 1). However, in contrast to prior findings, our study did not find a significant difference in PFS between IMC patients and controls, with a mean PFS 13.7 mo in IMC patients and 11.9 mo in controls (P = 0.524) (Table 1). PFS also did not differ between patients who developed mild vs severe IMC (P = 0.690, Supplementary Table 5).\n\nOverall survival at 12 mo in patients with and without immune checkpoint inhibitor-mediated colitis. Kaplan-Meier curve of overall survival at 12 mo in patients with immune checkpoint inhibitor-mediated colitis (IMC, red) and without IMC (black). IMC: Immune checkpoint inhibitor-mediated colitis; HR: Hazard ratio.\nAcross both cohorts, we identified clinical characteristics significantly associated with OS greater than 12 mo and PFS greater than 6 mo, which are correlated with cancer outcomes in patients treated with ICI[16] (Tables 2 and 3) (Supplementary Tables 4 and 5). IMC was significantly and independently associated with OS > 12 mo in the multivariate model (OR 2.81, 95%CI 1.17-6.77, P = 0.021) (Table 2). Number of ICI infusions was also positively associated with OS > 12 mo (OR 1.23, 95%CI 1.09-1.40), while sarcoma as underlying malignancy was significantly associated with OS < 12 mo (OR 0.17, 95%CI 0.029-0.947). Within the IMC cohort, nivolumab use was associated with OS < 12 mo in the univariate analysis (OR 0.09, 95%CI 0.01-0.83), while only age was associated with OS < 12 mo in multivariate analysis (OR 0.93, 95%CI 0.88-0.99) (Table 3). No individual malignancy was significantly associated with OS > 12 mo within the IMC cohort (Table 3).\nUnivariate and multivariate predictors of overall survival > 12 mo among patients with malignancy using immune checkpoint inhibitor (n = 128)\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; OR: Odds ratios; CI: Confidence interval; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event.\nUnivariate and multivariate predictors of overall survival > 12 mo among patients with immune checkpoint inhibitor colitis (n = 64)\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event.", "As certain clinical characteristics were significantly more common in patients with IMC compared to controls, we evaluated whether any of these clinical characteristics were associated with risk of developing IMC (Table 4). 
In univariate analysis, history of autoimmune disease and vitamin D use were both significantly associated with increased risk of IMC (OR 2.45, 95%CI 1.04-5.78, P = 0.040 for autoimmune disease; OR 2.51, 95%CI 1.14-5.54, P = 0.022 for vitamin D use). Interestingly, the use of vitamin D supplementation has previously been associated with a decreased risk of IMC, in contrast to our findings here[11]. Prior use of an immune-enhancing therapy (Supplementary Table 2) was associated with a significantly decreased risk of IMC (OR 0.20, 95%CI 0.04-0.95, P = 0.043). In the multivariate model which incorporated these characteristics, only the use of immune-enhancing therapy remained significantly associated with decreased risk of IMC, with an OR of 0.20 (95%CI 0.04-1.00, P = 0.050). \nUnivariate and multivariate predictors of immune checkpoint inhibitor-mediated colitis among patients using immune checkpoint inhibitor (n = 128)\nNumber of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls).\nSee Supplementary Table 2.\nICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event.\nWe next determined if any variables were associated with an increased risk of severe IMC. Consistent with prior studies of irAE in ICI[17-19], we defined grade 1-2 IMC as mild and grade 3 or higher IMC as severe. In our study, 38 of the 64 patients (59.4%) had severe IMC (Supplementary Table 3). In the univariate model, ipilimumab and vitamin D supplementation were significantly associated with development of severe IMC (OR 8.93, 95%CI 1.07-74.8, P = 0.043 for ipilimumab; OR 3.33, 95%CI 1.10-10.14, P = 0.034 for vitamin D) (Supplementary Table 6). Combination therapy (ipilimumab plus nivolumab) trended towards an increased risk of severe IMC but did not reach significance (P = 0.053). In contrast, pembrolizumab was significantly associated with a decreased risk of severe IMC (OR 0.26, 95%CI 0.09-0.81, P = 0.020). In the multivariate model no characteristic reached significance for association with severe IMC, although both combination therapy and ipilimumab monotherapy approached significance for increased risk of severe IMC (P = 0.058 and 0.060, respectively).", "In our study, development of IMC following ICI use was associated with improved overall survival, although not improved progression-free survival, compared to ICI users without IMC. This is similar to findings at another center demonstrating both improved OS and PFS in patients with IMC[9,10]. We also found that vitamin D supplementation at the start of ICI treatment is a risk factor for developing IMC, in contrast to other research suggesting vitamin D use is associated with lower risk of IMC[11]. Our results, therefore, provide critical additional information on these previous associations and present a need for prospective studies.\nBoth publications showing improved survival in patients with IMC were retrospective analyses performed at the same center[9,10]. One study noted that ICI class was significantly associated with development of IMC[9], a finding that has been demonstrated several times in retrospective work[8,17,18,20-23]. However, unlike our work, this study did not match control patients to account for this likely confounder, as ICI class has been associated with differences in PFS in some malignancies[24,25]. 
The second study at this center examined survival in melanoma patients with IMC, compared to our work across multiple malignancies, although frequency matching was performed to account for use of different ICI classes[10]. Since our study is the first to examine survival in patients with IMC at a different center, our work here reinforces that IMC may be associated with increased overall survival and prompts a need for prospective studies.\nThe only other independent factor in our study positively associated with OS > 12 mo was number of ICI doses. This finding may be due to trivial length-time bias, as patients who survive longer are more likely to receive more doses of ICI. It is also possible that patients who required cessation of ICI due to IMC had worse outcomes, although prior work has suggested that patients still derive equivalent long-term benefit from ICI even if stopped due to irAE[26]. Type of underlying malignancy (sarcoma) was independently associated with OS < 12 mo in our study. These findings are not unexpected, as most advanced soft tissue sarcomas have a median OS of less than one year[27].\nIn contrast to prior work, we found a positive association between vitamin D supplementation and development of IMC[11]. It is unclear if this is related to low serum vitamin D levels or negative impact of the supplementation itself, as vitamin D levels near the time of ICI initiation were not recorded in most patients. Additionally, the prior report on vitamin D in IMC was in melanoma patients only, which may partially account for discrepancies with our study. As this association did not remain significant in our multivariate analysis, it is possible that another confounding factor may explain the association between vitamin D supplementation and IMC in our study.\nIn addition to challenging existing findings, we report here on additional novel risk factors for IMC. We are the first to report that prior use of immune-enhancing medications prior to ICI, such as IL-2 or interferon-γ, is significantly and independently associated with decreased risk of IMC. Much more work should be done to evaluate the relationship between these medications and future risk of IMC.\nFinally, our study is the first to examine risk factors for severe IMC. In addition to increasing risk for IMC overall, we find that vitamin D supplementation may also be a risk factor for severe IMC. Similarly, our results suggest that the use of ipilimumab may be associated with increased risk of severe IMC, while pembrolizumab may be associated with decreased risk of severe IMC in patients who develop this syndrome. As ipilimumab has previously been associated with increased risk of IMC overall, while anti-PD-1, including pembrolizumab, are associated with lower risk of IMC overall[8,9], these findings emphasize that ICI class may affect severity of IMC.\nOur findings may significantly impact clinical practice by identifying novel risks for IMC and severe IMC that clinicians, including oncologists and gastroenterologists, should be aware of, while also potentially providing reassurance to physicians and patients that development of IMC may be a positive prognosticator for cancer survival. Neither prior work nor ours found that treatment of IMC, including steroids or infliximab, negatively impacts OS[9,10], and therefore appropriate treatment of IMC should be pursued early on to minimize morbidity and mortality. 
Both steroid and infliximab use have been suggested to worsen survival in ICI users[12,13], but all current evidence suggests that use of these medications for IMC specifically does not impair cancer outcomes. Our work also cautions against supplementation with vitamin D in ICI users, as this may increase risk of IMC and severe IMC, although carefully designed studies with vitamin D measurements should be performed.\nOur work has several strengths. We performed robust cohort matching to minimize confounding effects of ICI class and malignancy. This is also the first study to explore risk factors associated with severe IMC. However, there are limitations to our work. As a retrospective, observational study, it is subject to recall bias and cannot evaluate causation, and may also be subject to immortal time bias (ITB). Patients may have longer exposure to checkpoint inhibitors before developing IMC, compared to patients who do not manifest this irAE, leading to a period where they must survive for long enough to develop IMC and are therefore “immortal”[28]. We found that OS > 12 mo was significantly associated with greater numbers of ICI infusions (Table 2), which is likely due to ITB. However, greater numbers of infusions were not associated with IMC (Table 4). This suggests that the association between OS > 12 mo and IMC is likely independent of the number of ICI infusions, limiting this as a source of ITB in our study.\nOther weaknesses of our work include selection of patients based on clinical criteria for IMC, including those who did not undergo endoscopy or other objective testing for intestinal inflammation, and therefore may not have had a true colitis. Like prior work, this is also a single-center study, and our results may not be widely generalizable, particularly since we identified fewer patients compared to prior work and our patient population is highly variable, including individuals with several different underlying malignancies. We did not exclude patients with prior non-GI irAEs in either group, although the presence of these was not independently associated with increased OS in our study. We also have not accounted for other factors which may be potential predictors of ICI response, including tumor PD-L1 expression burden, tumor mutational burden, gut microbial composition, proton pump inhibitor use, and combination treatment with tyrosine kinase inhibitors[29-34].", "In conclusion, our findings suggest presence of IMC is associated with improved OS in cancer patients when cases were matched closely to controls. We also found that vitamin D supplementation was significantly associated with development of both IMC and severe IMC, while immune-enhancing medications were significantly associated with decreased risk of IMC. Future work should focus on broader populations to resolve the discrepancies raised in our work, and to confirm the association between IMC and increased cancer survival. Closely involving gastroenterologists with the workup and management of IMC will be crucial to ensuring the best care possible for these patients." ]
[ null, "methods", null, null, null, null, null, null, null, null ]
[ "Immune checkpoint inhibitors", "Immune checkpoint inhibitor-mediated colitis", "Immune-related adverse events" ]
INTRODUCTION: Immune checkpoint inhibitors (ICI) have dramatically changed the landscape of cancer therapy. Early studies showed significantly prolonged survival in patients with metastatic melanoma compared to standard chemotherapy[1], and evidence now exists for improved outcomes in a variety of tumors ranging from lung cancers to urothelial carcinoma to breast cancer[2-5]. Although these are powerful treatments in our armamentarium against malignancy, ICI can cause immune-related adverse events (irAE) characterized by autoimmune-like inflammation in a variety of non-tumor organs, leading to increased morbidity for patients[6]. One of the most common irAE is immune checkpoint inhibitor-mediated colitis (IMC). IMC may occur in up to 40% of patients treated with ipilimumab, an antibody targeting CTLA-4, 11%-17% of patients treated with antibodies against anti-PD-1 or anti-PD-L1, such as nivolumab, pembrolizumab, or atezolizumab, and around 32% of patients treated with a combination of anti-CTLA-4 and anti-PD-1[7]. Prior retrospective analyses of patients with IMC have attempted to identify characteristics associated with development of IMC, including type of malignancy, ICI class, dose of ICI, cancer stage, and vitamin D use[8-11]. Intriguingly, two prior studies have suggested that development of IMC may positively correlate with improved progression-free survival (PFS) and overall survival (OS)[9,10]. One of these studies controlled for confounding effects of ICI class via frequency matching, but was limited to patients with melanoma, hindering wider applicability of their findings[10]. These findings also conflict with data suggesting that use of steroids and the anti-TNF antibody infliximab in patients treated with ICI are associated with worse cancer outcomes[12,13]. These discrepancies represent a significant knowledge gap that impedes our ability to evaluate and manage IMC and ICI use. Here we present data from a retrospective study of patients treated with ICI at our institution who developed IMC across malignancy types. We compare this cohort to a matched control cohort to determine whether IMC was associated with improved progression-free survival and overall survival. We also evaluate which clinical characteristics increase the risk of developing IMC, including severe IMC. MATERIALS AND METHODS: Study design and population We conducted a retrospective case-control single-center study after obtaining approval from the Institutional Review Board at Stanford University (IRB 57125, approved 6/30/2020). Our primary aim was to determine the association of presence and severity of IMC on OS and PFS in ICI users. Our secondary aim was to identify clinical variables which predicted development of IMC in ICI users. We evaluated all patients over the age of 18 who had been treated with immune checkpoint inhibitors (ICI) for malignancy at Stanford Health Care from May 2011 to May 2020, including anti-CTLA-4 (ipilimumab), anti-PD-1 (nivolumab, pembrolizumab), and anti-PD-L1 (atezolizumab, avelumab, durvalumab), with follow up through October 2020. Using the Stanford Research Repository tool, we screened patients treated with ICI who were assigned International Classification of Diseases (ICD) 9 and ICD 10 codes associated with non-infectious colitis and diarrhea (Supplementary Table 1). Each chart which passed the initial screen was further screened by review of clinic notes to confirm diagnosis of immune checkpoint inhibitor-related colitis by oncology providers. 
Any patient found to have other explanations for their clinical presentation was excluded from the study. Control patients were matched one to one with each IMC patient for sex, age, malignancy, type of ICI used, prior ICI exposure, and duration of ICI exposure (matched to number of doses from initiation of ICI to development of colitis in study cohort). Control patients were initially identified as those lacking the above ICD codes and were confirmed via direct evaluation of each chart to lack diarrhea and/or colitis ascribable to ICI per their treating oncologist. We extracted clinical data on IMC and control patient charts including demographics (age at time of ICI initiation, sex, body mass index, race per patient report), medical history (presence of prior non-liver and non-upper gastrointestinal disease, personal history of autoimmune disease, family history of autoimmune disease), and cancer history (type of malignancy, tumor stage at ICI initiation, prior chemotherapy, prior radiation therapy, type of ICI used, duration of ICI use, OS and PFS) (Supplementary Table 2). OS was determined as time from initiation of ICI to death, while PFS was determined as time from initiation of ICI to death or progression of disease as determined by oncology providers, based on radiographic evidence of progression. IMC severity was graded using commonly accepted determinants of IMC and irAE grading[14]. We specifically noted prior use of therapies designed to increase immune responses [interleukin (IL)-2, interferon (IFN)-γ, toll-like receptor (TLR)-9 agonist, tebentafusp, or anti-CD47 antibody]. Vitamin D and non-steroidal anti-inflammatory (NSAID) use were defined as a vitamin D supplement or an NSAID medication, respectively, noted in the history of present illness or on the patient’s medication list at the clinic visit closest to their date of ICI initiation. We collected data on IMC diagnosis including number of patients who received endoscopy (flexible sigmoidoscopy or colonoscopy), findings on endoscopy, and fecal calprotectin (Supplementary Table 3). Data on management of IMC included treatment with anti-diarrheal medications, mesalamine, steroids (prednisone, budesonide, dexamethasone), infliximab, and vedolizumab. Statistical analysis The rate of the primary outcomes (OS > 12 mo and PFS > 6 mo among all ICI users, OS > 12 mo and PFS > 6 mo in patients with IMC) and secondary outcomes (risks of IMC among patients with malignancy using ICI, IMC severity), predictive value of clinical variables on primary and secondary outcomes, odds ratio (OR) with its 95% confidence interval (CI), and P values were calculated using Statistics/Data Analysis (Stata/IC 15.1 for Windows, College Station, TX, United States). Dichotomous variables were analyzed for outcomes using the chi-squared test or Fisher’s exact test where appropriate, and continuous variables were analyzed using Student’s t-tests if normally distributed, or the Wilcoxon signed-rank test for non-normal data. For our multivariate analyses, model building was based on forward stepwise logistic regression, with a P value of 0.05 required for entry, and known predictors were also included. We constructed Kaplan-Meier curves for the outcomes of OS and PFS between patients with and without IMC and patients with mild vs severe IMC using GraphPad Prism (version 8.3; GraphPad Software, Inc., La Jolla, CA, United States). All authors had access to the study data and reviewed and approved the final manuscript.
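For readers who want to reproduce this style of case-control comparison outside of Stata, a minimal sketch of the 2 x 2 analyses described above (Fisher's exact or chi-squared test, plus an odds ratio with a Woolf 95% confidence interval) is shown below. The cell counts, and the use of Python rather than Stata, are illustrative assumptions, not the study's actual code or data.

```python
# Minimal sketch of a 2 x 2 case-control comparison: exact/asymptotic tests
# and an odds ratio with a Woolf (log-scale) 95% confidence interval.
# The counts below are placeholders, not the study data.
import numpy as np
from scipy.stats import fisher_exact, chi2_contingency

# rows: exposure present / absent; columns: IMC / no IMC (hypothetical counts)
table = np.array([[14, 6],
                  [50, 58]])

_, p_fisher = fisher_exact(table)          # exact test, preferred for small cells
_, p_chi2, _, _ = chi2_contingency(table)  # large-sample chi-squared alternative

# Woolf method for the 95% CI of the odds ratio
a, b, c, d = table.ravel().astype(float)
log_or = np.log((a * d) / (b * c))
se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se)

print(f"OR {np.exp(log_or):.2f} (95%CI {ci_low:.2f}-{ci_high:.2f}), "
      f"Fisher P = {p_fisher:.3f}, chi-squared P = {p_chi2:.3f}")
```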
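The forward stepwise model building with a P < 0.05 entry criterion could be sketched along the following lines. The data frame and variable names are hypothetical, and the loop is simplified relative to the published multivariate models (which also forced in known predictors).

```python
# Hedged sketch of forward stepwise logistic regression with P < 0.05 for entry.
# `df` is assumed to be a pandas DataFrame with a binary outcome column and
# numeric/dummy-coded candidate predictors; names are illustrative only.
import statsmodels.api as sm

def forward_stepwise(df, outcome, candidates, p_enter=0.05):
    selected = []
    remaining = list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            fit = sm.Logit(df[outcome], X).fit(disp=0)
            pvals[var] = fit.pvalues[var]
        best = min(pvals, key=pvals.get)   # candidate with the smallest P value
        if pvals[best] >= p_enter:
            break                          # nothing left that meets the entry criterion
        selected.append(best)
        remaining.remove(best)
    return sm.Logit(df[outcome], sm.add_constant(df[selected])).fit(disp=0)

# e.g. model = forward_stepwise(df, "imc", ["autoimmune_history", "vitamin_d",
#                                           "immune_enhancing_rx", "prior_radiation"])
# Exponentiating model.params and model.conf_int() gives odds ratios with 95%CIs.
```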
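The survival comparison itself (Kaplan-Meier curves for the two cohorts, with a log-rank comparison as one common way to test the difference such curves display) can be illustrated with the lifelines package; the published curves were drawn in GraphPad Prism, and the follow-up times below are placeholders rather than patient data.

```python
# Sketch of a Kaplan-Meier comparison of overall survival between cohorts.
# Times are months from ICI initiation; event flags are 1 if death was observed.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

t_imc, d_imc = np.array([24.3, 30.1, 8.0, 15.5]), np.array([1, 0, 1, 0])
t_ctl, d_ctl = np.array([17.7, 5.2, 12.0, 22.4]), np.array([1, 1, 0, 1])

kmf = KaplanMeierFitter()
ax = kmf.fit(t_imc, event_observed=d_imc, label="IMC").plot_survival_function()
kmf.fit(t_ctl, event_observed=d_ctl, label="No IMC").plot_survival_function(ax=ax)

res = logrank_test(t_imc, t_ctl, event_observed_A=d_imc, event_observed_B=d_ctl)
print(f"log-rank P = {res.p_value:.3f}")
```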
RESULTS: Clinical characteristics associated with IMC We identified a total of 314 patients treated with ICI at Stanford Health Care from May 2011 to May 2020 who had ICD codes matching our query (Supplementary Table 1). Of these, 64 had a diagnosis of IMC per review of Oncology providers’ notes, after excluding patients with alternative diagnoses for their symptoms. Twenty-four (37.5%) of these IMC patients underwent an endoscopy (colonoscopy or flexible sigmoidoscopy) during workup, of which seven (29.2%) had a normal endoscopic appearance, consistent with prior reports demonstrating that approximately one third of patients with IMC related to anti-PD-1 therapy have microscopic colitis[15] (Supplementary Table 3). An additional 14 patients (21.9%) had imaging findings suggestive of IMC, while 3 patients (4.69%) without imaging or endoscopy had an elevated calprotectin or fecal lactoferrin. These 64 patients were manually matched 1:1 with control patients based on age, sex, malignancy, type of ICI, whether or not the patient had prior ICI exposure, and duration of ICI use.
We compared clinical characteristics of patients from the IMC cohort and the control cohort (Table 1). None of the matched characteristics were significantly different between the two cohorts. The mean age across the combined cohorts was 66.6 years, with an average age of 67.4 in the cohort with IMC compared with 65.8 in the control cohort (P = 0.42). 57.81% of patients in each group were male (P = 1.00). Patients were predominantly white in both groups, with 52 (81.25%) white individuals in the IMC cohort compared to 50 (78.13%) in the control group (P = 0.66). The most common malignancy in each group was melanoma [33 (51.56%) in both cohorts], followed by renal cell carcinoma [8 (12.5%) in the IMC cohort and 7 (10.94%) in the control cohort] and non-small cell lung cancer [6 (9.38%) in both cohorts]. Both groups had similar numbers of patients with stage IV malignancy [56 (87.5%) in the IMC cohort and 58 (90.63%) in the control cohort, P = 0.778]. Combination ipilimumab and nivolumab was the most commonly used checkpoint therapy [24 (37.5%) of patients in each cohort], followed by nivolumab monotherapy [19 (29.69%) of each cohort] and ipilimumab monotherapy [11 (17.19%) of each cohort]. Baseline characteristics of patients with immune checkpoint inhibitor use Variable matched between cases and controls. Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. IMC: Immune checkpoint inhibitor-mediated colitis; ICI: Immune checkpoint inhibitor; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; PFS: Progression-Free Survival; irAE: Immune related adverse event; OS: Overall survival. Among the remainder of the clinical characteristics evaluated, personal history of autoimmune disease (including prior irAE) and family history of autoimmune disease were significantly more common in patients with IMC (P = 0.037 and 0.048, respectively). Intriguingly, prior use of a therapy designed to increase immune responses was more common in the control cohort without IMC (P = 0.027). In contrast to prior data[11], use of vitamin D supplementation at the time of first dose of ICI was significantly more prevalent in patients with IMC (P = 0.020). Neither smoking status, NSAID use at time of ICI initiation, steroid use at the time of ICI initiation, nor recent vaccination were significantly more common in IMC patients compared to controls. We identified a total of 314 patients treated with ICI at Stanford Health Care from May 2011 to May 2020 who had ICD codes matching our query (Supplementary Table 1). Of these, 64 had a diagnosis of IMC per review of Oncology providers’ notes, after excluding patients with alternative diagnoses for their symptoms. 24 (37.5%) of these IMC patients underwent an endoscopy (colonoscopy or flexible sigmoidoscopy) during workup, of which seven (29.2%) had a normal endoscopic appearance, consistent with prior reports demonstrating that approximately one third of patients with IMC related to anti-PD-1 therapy have microscopic colitis[15] (Supplementary Table 3). An additional 14 patients (21.9%) had imaging findings suggestive of IMC while 3 patients (4.69%) without imaging or endoscopy had an elevated calprotectin or fecal lactoferrin. 
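As a concrete check, the categorical comparisons in Table 1 above can be reproduced from the reported counts. The snippet below rebuilds the race comparison (52/64 white patients in the IMC cohort vs 50/64 in controls) and, assuming an uncorrected chi-squared test was used, returns a P value close to the reported 0.66; the exact test applied to each Table 1 variable is not stated, so this is only an illustrative reconstruction.

```python
# Worked check of one Table 1 comparison using the counts reported above:
# 52/64 white patients with IMC vs 50/64 white controls. Assuming an
# uncorrected chi-squared test, this gives P of roughly 0.66, matching the text.
from scipy.stats import chi2_contingency

table = [[52, 64 - 52],   # IMC cohort: white / non-white
         [50, 64 - 50]]   # control cohort: white / non-white
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, P = {p:.2f}")
```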
These 64 patients were manually matched 1:1 with control patients based on age, sex, malignancy, type of ICI, whether or not the patient had prior ICI exposure, and duration of ICI use. We compared clinical characteristics of patients from the IMC cohort and the control cohort (Table 1). None of the matched characteristics were significantly different between the two cohorts. The mean age across the combined cohorts was 66.6 years, with an average age of 67.4 in the cohort with IMC compared with 65.8 in the control cohort (P = 0.42). 57.81% of patients in each group were male (P = 1.00). Patients were predominantly white in both groups, with 52 (81.25%) white individuals in the IMC cohort compared to 50 (78.13%) in the control group (P = 0.66). The most common malignancy in each group was melanoma [33 (51.56%) in both cohorts], followed by renal cell carcinoma [8 (12.5%) in the IMC cohort and 7 (10.94%) in the control cohort] and non-small cell lung cancer [6 (9.38%) in both cohorts]. Both groups had similar numbers of patients with stage IV malignancy [56 (87.5%) in the IMC cohort and 58 (90.63%) in the control cohort, P = 0.778]. Combination ipilimumab and nivolumab was the most commonly used checkpoint therapy [24 (37.5%) of patients in each cohort], followed by nivolumab monotherapy [19 (29.69%) of each cohort] and ipilimumab monotherapy [11 (17.19%) of each cohort]. Baseline characteristics of patients with immune checkpoint inhibitor use Variable matched between cases and controls. Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. IMC: Immune checkpoint inhibitor-mediated colitis; ICI: Immune checkpoint inhibitor; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; PFS: Progression-Free Survival; irAE: Immune related adverse event; OS: Overall survival. Among the remainder of the clinical characteristics evaluated, personal history of autoimmune disease (including prior irAE) and family history of autoimmune disease were significantly more common in patients with IMC (P = 0.037 and 0.048, respectively). Intriguingly, prior use of a therapy designed to increase immune responses was more common in the control cohort without IMC (P = 0.027). In contrast to prior data[11], use of vitamin D supplementation at the time of first dose of ICI was significantly more prevalent in patients with IMC (P = 0.020). Neither smoking status, NSAID use at time of ICI initiation, steroid use at the time of ICI initiation, nor recent vaccination were significantly more common in IMC patients compared to controls. IMC significantly increases overall survival As IMC has previously been associated with increased OS and PFS in cancer patients[9,10], we evaluated whether this association was seen in our study. We found that OS was significantly longer in patients who developed IMC compared to those who did not, with a mean OS of 24.3 mo in patients with IMC and 17.7 mo in control (P = 0.05, Table 1). OS at 12 mo following ICI initiation was significantly higher in patients who developed IMC compared to those who did not (P = 0.02, Figure 1). However, in contrast to prior findings, our study did not find a significant difference in PFS between IMC patients and controls, with a mean PFS 13.7 mo in IMC patients and 11.9 mo in controls (P = 0.524) (Table 1). 
PFS also did not differ between patients who developed mild vs severe IMC (P = 0.690, Supplementary Table 5). Overall survival at 12 mo in patients with and without immune checkpoint inhibitor-mediated colitis. Kaplan-Meier curve of overall survival at 12 mo in patients with immune checkpoint inhibitor-mediated colitis (IMC, red) and without IMC (black). IMC: Immune checkpoint inhibitor-mediated colitis; HR: Hazard ratio. Across both cohorts, we identified clinical characteristics significantly associated with OS greater than 12 mo and PFS greater than 6 mo, which are correlated with cancer outcomes in patients treated with ICI[16] (Tables 2 and 3) (Supplementary Tables 4 and 5). IMC was significantly and independently associated with OS > 12 mo in the multivariate model (OR 2.81, 95%CI 1.17-6.77, P = 0.021) (Table 2). Number of ICI infusions was also positively associated with OS > 12 mo (OR 1.23, 95%CI 1.09-1.40), while sarcoma as underlying malignancy was significantly associated with OS < 12 mo (OR 0.17, 95%CI 0.029-0.947). Within the IMC cohort, nivolumab use was associated with OS < 12 mo in the univariate analysis (OR 0.09, 95%CI 0.01-0.83), while only age was associated with OS < 12 mo in multivariate analysis (OR 0.93, 95%CI 0.88-0.99) (Table 3). No individual malignancy was significantly associated with OS > 12 mo within the IMC cohort (Table 3). Univariate and multivariate predictors of overall survival > 12 mo among patients with malignancy using immune checkpoint inhibitor (n = 128) Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; OR: Odds ratios; CI: Confidence interval; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event. Univariate and multivariate predictors of overall survival > 12 mo among patients with immune checkpoint inhibitor colitis (n = 64) Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event. As IMC has previously been associated with increased OS and PFS in cancer patients[9,10], we evaluated whether this association was seen in our study. We found that OS was significantly longer in patients who developed IMC compared to those who did not, with a mean OS of 24.3 mo in patients with IMC and 17.7 mo in control (P = 0.05, Table 1). OS at 12 mo following ICI initiation was significantly higher in patients who developed IMC compared to those who did not (P = 0.02, Figure 1). However, in contrast to prior findings, our study did not find a significant difference in PFS between IMC patients and controls, with a mean PFS 13.7 mo in IMC patients and 11.9 mo in controls (P = 0.524) (Table 1). PFS also did not differ between patients who developed mild vs severe IMC (P = 0.690, Supplementary Table 5). Overall survival at 12 mo in patients with and without immune checkpoint inhibitor-mediated colitis. 
Kaplan-Meier curve of overall survival at 12 mo in patients with immune checkpoint inhibitor-mediated colitis (IMC, red) and without IMC (black). IMC: Immune checkpoint inhibitor-mediated colitis; HR: Hazard ratio. Across both cohorts, we identified clinical characteristics significantly associated with OS greater than 12 mo and PFS greater than 6 mo, which are correlated with cancer outcomes in patients treated with ICI[16] (Tables 2 and 3) (Supplementary Tables 4 and 5). IMC was significantly and independently associated with OS > 12 mo in the multivariate model (OR 2.81, 95%CI 1.17-6.77, P = 0.021) (Table 2). Number of ICI infusions was also positively associated with OS > 12 mo (OR 1.23, 95%CI 1.09-1.40), while sarcoma as underlying malignancy was significantly associated with OS < 12 mo (OR 0.17, 95%CI 0.029-0.947). Within the IMC cohort, nivolumab use was associated with OS < 12 mo in the univariate analysis (OR 0.09, 95%CI 0.01-0.83), while only age was associated with OS < 12 mo in multivariate analysis (OR 0.93, 95%CI 0.88-0.99) (Table 3). No individual malignancy was significantly associated with OS > 12 mo within the IMC cohort (Table 3). Univariate and multivariate predictors of overall survival > 12 mo among patients with malignancy using immune checkpoint inhibitor (n = 128) Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; OR: Odds ratios; CI: Confidence interval; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event. Univariate and multivariate predictors of overall survival > 12 mo among patients with immune checkpoint inhibitor colitis (n = 64) Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event. Significant risk factors for developing IMC and severe IMC As certain clinical characteristics were significantly more common in patients with IMC compared to controls, we evaluated whether any of these clinical characteristics were associated with risk of developing IMC (Table 4). In univariate analysis, history of autoimmune disease and vitamin D use were both significantly associated with increased risk of IMC (OR 2.45, 95%CI 1.04-5.78, P = 0.040 for autoimmune disease; OR 2.51, 95%CI 1.14-5.54, P = 0.022 for vitamin D use). Interestingly, the use of vitamin D supplementation has previously been associated with a decreased risk of IMC, in contrast to our findings here[11]. Prior use of an immune-enhancing therapy (Supplementary Table 2) was associated with a significantly decreased risk of IMC (OR 0.20, 95%CI 0.04-0.95, P = 0.043). In the multivariate model which incorporated these characteristics, only the use of immune-enhancing therapy remained significantly associated with decreased risk of IMC, with an OR of 0.20 (95%CI 0.04-1.00, P = 0.050). 
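The odds ratios and 95% confidence intervals reported here (for example, OR 0.20, 95%CI 0.04-1.00 for prior immune-enhancing therapy in the multivariate model) are the exponentiated coefficients of a logistic model. A minimal Python analogue is shown below; the published analysis used forward stepwise selection in Stata, and the data frame and predictor names here are hypothetical.

```python
# Minimal sketch of a multivariable logistic model for IMC risk with odds ratios
# and 95% CIs (cf. Table 4). The study used forward stepwise selection in Stata;
# predictor and file names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("imc_cohort.csv")
predictors = ["autoimmune_history", "vitamin_d_use", "immune_enhancing_rx"]
X = sm.add_constant(df[predictors].astype(float))
y = df["imc"].astype(int)

fit = sm.Logit(y, X).fit(disp=0)
ci = fit.conf_int()                     # columns 0 and 1: bounds on the log-odds scale
summary = pd.DataFrame({
    "OR": np.exp(fit.params),
    "95% CI low": np.exp(ci[0]),
    "95% CI high": np.exp(ci[1]),
    "P": fit.pvalues,
}).drop(index="const")
print(summary.round(3))
```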
Univariate and multivariate predictors of immune checkpoint inhibitor-mediated colitis among patients using immune checkpoint inhibitor (n = 128) Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event. We next determined if any variables were associated with an increased risk of severe IMC. Consistent with prior studies of irAE in ICI[17-19], we defined grade 1-2 IMC as mild and grade 3 or higher IMC as severe. In our study, 38 of the 64 patients (59.4%) had severe IMC (Supplementary Table 3). In the univariate model, ipilimumab and vitamin D supplementation were significantly associated with development of severe IMC (OR 8.93, 95%CI 1.07-74.8, P = 0.043 for ipilimumab; OR 3.33, 95%CI 1.10-10.14, P = 0.034 for vitamin D) (Supplementary Table 6). Combination therapy (ipilimumab plus nivolumab) trended towards an increased risk of severe IMC but did not reach significance (P = 0.053). In contrast, pembrolizumab was significantly associated with a decreased risk of severe IMC (OR 0.26, 95%CI 0.09-0.81, P = 0.020). In the multivariate model no characteristic reached significance for association with severe IMC, although both combination therapy and ipilimumab monotherapy approached significance for increased risk of severe IMC (P = 0.058 and 0.060, respectively). As certain clinical characteristics were significantly more common in patients with IMC compared to controls, we evaluated whether any of these clinical characteristics were associated with risk of developing IMC (Table 4). In univariate analysis, history of autoimmune disease and vitamin D use were both significantly associated with increased risk of IMC (OR 2.45, 95%CI 1.04-5.78, P = 0.040 for autoimmune disease; OR 2.51, 95%CI 1.14-5.54, P = 0.022 for vitamin D use). Interestingly, the use of vitamin D supplementation has previously been associated with a decreased risk of IMC, in contrast to our findings here[11]. Prior use of an immune-enhancing therapy (Supplementary Table 2) was associated with a significantly decreased risk of IMC (OR 0.20, 95%CI 0.04-0.95, P = 0.043). In the multivariate model which incorporated these characteristics, only the use of immune-enhancing therapy remained significantly associated with decreased risk of IMC, with an OR of 0.20 (95%CI 0.04-1.00, P = 0.050). Univariate and multivariate predictors of immune checkpoint inhibitor-mediated colitis among patients using immune checkpoint inhibitor (n = 128) Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event. We next determined if any variables were associated with an increased risk of severe IMC. Consistent with prior studies of irAE in ICI[17-19], we defined grade 1-2 IMC as mild and grade 3 or higher IMC as severe. In our study, 38 of the 64 patients (59.4%) had severe IMC (Supplementary Table 3). 
In the univariate model, ipilimumab and vitamin D supplementation were significantly associated with development of severe IMC (OR 8.93, 95%CI 1.07-74.8, P = 0.043 for ipilimumab; OR 3.33, 95%CI 1.10-10.14, P = 0.034 for vitamin D) (Supplementary Table 6). Combination therapy (ipilimumab plus nivolumab) trended towards an increased risk of severe IMC but did not reach significance (P = 0.053). In contrast, pembrolizumab was significantly associated with a decreased risk of severe IMC (OR 0.26, 95%CI 0.09-0.81, P = 0.020). In the multivariate model no characteristic reached significance for association with severe IMC, although both combination therapy and ipilimumab monotherapy approached significance for increased risk of severe IMC (P = 0.058 and 0.060, respectively). Clinical characteristics associated with IMC: We identified a total of 314 patients treated with ICI at Stanford Health Care from May 2011 to May 2020 who had ICD codes matching our query (Supplementary Table 1). Of these, 64 had a diagnosis of IMC per review of Oncology providers’ notes, after excluding patients with alternative diagnoses for their symptoms. 24 (37.5%) of these IMC patients underwent an endoscopy (colonoscopy or flexible sigmoidoscopy) during workup, of which seven (29.2%) had a normal endoscopic appearance, consistent with prior reports demonstrating that approximately one third of patients with IMC related to anti-PD-1 therapy have microscopic colitis[15] (Supplementary Table 3). An additional 14 patients (21.9%) had imaging findings suggestive of IMC while 3 patients (4.69%) without imaging or endoscopy had an elevated calprotectin or fecal lactoferrin. These 64 patients were manually matched 1:1 with control patients based on age, sex, malignancy, type of ICI, whether or not the patient had prior ICI exposure, and duration of ICI use. We compared clinical characteristics of patients from the IMC cohort and the control cohort (Table 1). None of the matched characteristics were significantly different between the two cohorts. The mean age across the combined cohorts was 66.6 years, with an average age of 67.4 in the cohort with IMC compared with 65.8 in the control cohort (P = 0.42). 57.81% of patients in each group were male (P = 1.00). Patients were predominantly white in both groups, with 52 (81.25%) white individuals in the IMC cohort compared to 50 (78.13%) in the control group (P = 0.66). The most common malignancy in each group was melanoma [33 (51.56%) in both cohorts], followed by renal cell carcinoma [8 (12.5%) in the IMC cohort and 7 (10.94%) in the control cohort] and non-small cell lung cancer [6 (9.38%) in both cohorts]. Both groups had similar numbers of patients with stage IV malignancy [56 (87.5%) in the IMC cohort and 58 (90.63%) in the control cohort, P = 0.778]. Combination ipilimumab and nivolumab was the most commonly used checkpoint therapy [24 (37.5%) of patients in each cohort], followed by nivolumab monotherapy [19 (29.69%) of each cohort] and ipilimumab monotherapy [11 (17.19%) of each cohort]. Baseline characteristics of patients with immune checkpoint inhibitor use Variable matched between cases and controls. Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. 
IMC: Immune checkpoint inhibitor-mediated colitis; ICI: Immune checkpoint inhibitor; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; PFS: Progression-Free Survival; irAE: Immune related adverse event; OS: Overall survival. Among the remainder of the clinical characteristics evaluated, personal history of autoimmune disease (including prior irAE) and family history of autoimmune disease were significantly more common in patients with IMC (P = 0.037 and 0.048, respectively). Intriguingly, prior use of a therapy designed to increase immune responses was more common in the control cohort without IMC (P = 0.027). In contrast to prior data[11], use of vitamin D supplementation at the time of first dose of ICI was significantly more prevalent in patients with IMC (P = 0.020). Neither smoking status, NSAID use at time of ICI initiation, steroid use at the time of ICI initiation, nor recent vaccination were significantly more common in IMC patients compared to controls. IMC significantly increases overall survival: As IMC has previously been associated with increased OS and PFS in cancer patients[9,10], we evaluated whether this association was seen in our study. We found that OS was significantly longer in patients who developed IMC compared to those who did not, with a mean OS of 24.3 mo in patients with IMC and 17.7 mo in control (P = 0.05, Table 1). OS at 12 mo following ICI initiation was significantly higher in patients who developed IMC compared to those who did not (P = 0.02, Figure 1). However, in contrast to prior findings, our study did not find a significant difference in PFS between IMC patients and controls, with a mean PFS 13.7 mo in IMC patients and 11.9 mo in controls (P = 0.524) (Table 1). PFS also did not differ between patients who developed mild vs severe IMC (P = 0.690, Supplementary Table 5). Overall survival at 12 mo in patients with and without immune checkpoint inhibitor-mediated colitis. Kaplan-Meier curve of overall survival at 12 mo in patients with immune checkpoint inhibitor-mediated colitis (IMC, red) and without IMC (black). IMC: Immune checkpoint inhibitor-mediated colitis; HR: Hazard ratio. Across both cohorts, we identified clinical characteristics significantly associated with OS greater than 12 mo and PFS greater than 6 mo, which are correlated with cancer outcomes in patients treated with ICI[16] (Tables 2 and 3) (Supplementary Tables 4 and 5). IMC was significantly and independently associated with OS > 12 mo in the multivariate model (OR 2.81, 95%CI 1.17-6.77, P = 0.021) (Table 2). Number of ICI infusions was also positively associated with OS > 12 mo (OR 1.23, 95%CI 1.09-1.40), while sarcoma as underlying malignancy was significantly associated with OS < 12 mo (OR 0.17, 95%CI 0.029-0.947). Within the IMC cohort, nivolumab use was associated with OS < 12 mo in the univariate analysis (OR 0.09, 95%CI 0.01-0.83), while only age was associated with OS < 12 mo in multivariate analysis (OR 0.93, 95%CI 0.88-0.99) (Table 3). No individual malignancy was significantly associated with OS > 12 mo within the IMC cohort (Table 3). Univariate and multivariate predictors of overall survival > 12 mo among patients with malignancy using immune checkpoint inhibitor (n = 128) Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. 
ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; OR: Odds ratios; CI: Confidence interval; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event. Univariate and multivariate predictors of overall survival > 12 mo among patients with immune checkpoint inhibitor colitis (n = 64) Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; SD: Standard deviation; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event. Significant risk factors for developing IMC and severe IMC: As certain clinical characteristics were significantly more common in patients with IMC compared to controls, we evaluated whether any of these clinical characteristics were associated with risk of developing IMC (Table 4). In univariate analysis, history of autoimmune disease and vitamin D use were both significantly associated with increased risk of IMC (OR 2.45, 95%CI 1.04-5.78, P = 0.040 for autoimmune disease; OR 2.51, 95%CI 1.14-5.54, P = 0.022 for vitamin D use). Interestingly, the use of vitamin D supplementation has previously been associated with a decreased risk of IMC, in contrast to our findings here[11]. Prior use of an immune-enhancing therapy (Supplementary Table 2) was associated with a significantly decreased risk of IMC (OR 0.20, 95%CI 0.04-0.95, P = 0.043). In the multivariate model which incorporated these characteristics, only the use of immune-enhancing therapy remained significantly associated with decreased risk of IMC, with an OR of 0.20 (95%CI 0.04-1.00, P = 0.050). Univariate and multivariate predictors of immune checkpoint inhibitor-mediated colitis among patients using immune checkpoint inhibitor (n = 128) Number of infusions of immune checkpoint inhibitor prior to immune checkpoint inhibitor-mediated colitis diagnosis (cases) or total (controls). See Supplementary Table 2. ICI: Immune checkpoint inhibitor; IMC: Immune checkpoint inhibitor-mediated colitis; RCC: Renal cell carcinoma; NSCLC: Non-small cell lung cancer; SCC: Squamous cell carcinoma; irAE: Immune related adverse event. We next determined if any variables were associated with an increased risk of severe IMC. Consistent with prior studies of irAE in ICI[17-19], we defined grade 1-2 IMC as mild and grade 3 or higher IMC as severe. In our study, 38 of the 64 patients (59.4%) had severe IMC (Supplementary Table 3). In the univariate model, ipilimumab and vitamin D supplementation were significantly associated with development of severe IMC (OR 8.93, 95%CI 1.07-74.8, P = 0.043 for ipilimumab; OR 3.33, 95%CI 1.10-10.14, P = 0.034 for vitamin D) (Supplementary Table 6). Combination therapy (ipilimumab plus nivolumab) trended towards an increased risk of severe IMC but did not reach significance (P = 0.053). In contrast, pembrolizumab was significantly associated with a decreased risk of severe IMC (OR 0.26, 95%CI 0.09-0.81, P = 0.020). In the multivariate model no characteristic reached significance for association with severe IMC, although both combination therapy and ipilimumab monotherapy approached significance for increased risk of severe IMC (P = 0.058 and 0.060, respectively). 
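The survival comparison summarized above and plotted in Figure 1 was generated with GraphPad Prism. If the lifelines package is available, the same Kaplan-Meier curves and a log-rank comparison could be reproduced in Python as sketched below, using the OS definition given in the methods (time from ICI initiation to death, censored at last follow-up); the file and column names are hypothetical.

```python
# Hedged Python analogue of the Kaplan-Meier OS comparison in Figure 1
# (the published curves were drawn in GraphPad Prism); columns are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("imc_cohort.csv", parse_dates=["ici_start", "last_followup_or_death"])

# OS per the methods: time from ICI initiation to death, censored at last follow-up.
df["os_months"] = (df["last_followup_or_death"] - df["ici_start"]).dt.days / 30.44
df["event"] = df["died"].astype(int)

kmf = KaplanMeierFitter()
ax = None
for label, grp in df.groupby("imc"):
    kmf.fit(grp["os_months"], event_observed=grp["event"], label=f"IMC = {label}")
    ax = kmf.plot_survival_function(ax=ax)

imc, ctrl = df[df["imc"] == 1], df[df["imc"] == 0]
result = logrank_test(imc["os_months"], ctrl["os_months"],
                      event_observed_A=imc["event"], event_observed_B=ctrl["event"])
print(f"log-rank P = {result.p_value:.3f}")
```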
DISCUSSION: In our study, development of IMC following ICI use was associated with improved overall survival, although not improved progression-free survival, compared to ICI users without IMC. This is similar to findings at another center demonstrating both improved OS and PFS in patients with IMC[9,10]. We also found that vitamin D supplementation at the start of ICI treatment is a risk factor for developing IMC, in contrast to other research suggesting vitamin D use is associated with lower risk of IMC[11]. Our results, therefore, provide critical additional information on these previous associations and present a need for prospective studies. Both publications showing improved survival in patients with IMC were retrospective analyses performed at the same center[9,10]. One study noted that ICI class was significantly associated with development of IMC[9], a finding that has been demonstrated several times in retrospective work[8,17,18,20-23]. However, unlike our work, this study did not match control patients to account for this likely confounder, as ICI class has been associated with differences in PFS in some malignancies[24,25]. The second study at this center examined survival in melanoma patients with IMC, compared to our work across multiple malignancies, although frequency matching was performed to account for use of different ICI classes[10]. Since our study is the first to examine survival in patients with IMC at a different center, our work here reinforces that IMC may be associated with increased overall survival and prompts a need for prospective studies. The only other independent factor in our study positively associated with OS > 12 mo was number of ICI doses. This finding may be due to trivial length-time bias, as patients who survive longer are more likely to receive more doses of ICI. It is also possible that patients who required cessation of ICI due to IMC had worse outcomes, although prior work has suggested that patients still derive equivalent long-term benefit from ICI even if stopped due to irAE[26]. Type of underlying malignancy (sarcoma) was independently associated with OS < 12 mo in our study. These findings are not unexpected, as most advanced soft tissue sarcomas have a median OS of less than one year[27]. In contrast to prior work, we found a positive association between vitamin D supplementation and development of IMC[11]. It is unclear if this is related to low serum vitamin D levels or negative impact of the supplementation itself, as vitamin D levels near the time of ICI initiation were not recorded in most patients. Additionally, the prior report on vitamin D in IMC was in melanoma patients only, which may partially account for discrepancies with our study. As this association did not remain significant in our multivariate analysis, it is possible that another confounding factor may explain the association between vitamin D supplementation and IMC in our study. In addition to challenging existing findings, we report here on additional novel risk factors for IMC. We are the first to report that prior use of immune-enhancing medications prior to ICI, such as IL-2 or interferon-γ, is significantly and independently associated with decreased risk of IMC. Much more work should be done to evaluate the relationship between these medications and future risk of IMC. Finally, our study is the first to examine risk factors for severe IMC. 
In addition to increasing risk for IMC overall, we find that vitamin D supplementation may also be a risk factor for severe IMC. Similarly, our results suggest that the use of ipilimumab may be associated with increased risk of severe IMC, while pembrolizumab may be associated with decreased risk of severe IMC in patients who develop this syndrome. As ipilimumab has previously been associated with increased risk of IMC overall, while anti-PD-1, including pembrolizumab, are associated with lower risk of IMC overall[8,9], these findings emphasize that ICI class may affect severity of IMC. Our findings may significantly impact clinical practice by identifying novel risks for IMC and severe IMC that clinicians, including oncologists and gastroenterologists, should be aware of, while also potentially providing reassurance to physicians and patients that development of IMC may be a positive prognosticator for cancer survival. Neither prior work nor ours found that treatment of IMC, including steroids or infliximab, negatively impacts OS[9,10], and therefore appropriate treatment of IMC should be pursued early on to minimize morbidity and mortality. Both steroid and infliximab use have been suggested to worsen survival in ICI users[12,13], but all current evidence suggests that use of these medications for IMC specifically does not impair cancer outcomes. Our work also cautions against supplementation with vitamin D in ICI users, as this may increase risk of IMC and severe IMC, although carefully designed studies with vitamin D measurements should be performed. Our work has several strengths. We performed robust cohort matching to minimize confounding effects of ICI class and malignancy. This is also the first study to explore risk factors associated with severe IMC. However, there are limitations to our work. As a retrospective, observational study, it is subject to recall bias and cannot evaluate causation, and may also be subject to immortal time bias (ITB). Patients may have longer exposure to checkpoint inhibitors before developing IMC, compared to patients who do not manifest this irAE, leading to a period where they must survive for long enough to develop IMC and are therefore “immortal”[28]. We found that OS > 12 mo was significantly associated with greater numbers of ICI infusions (Table 2), which is likely due to ITB. However, greater numbers of infusions were not associated with IMC (Table 4). This suggests that the association between OS > 12 mo and IMC is likely independent of the number of ICI infusions, limiting this as a source of ITB in our study. Other weaknesses of our work include selection of patients based on clinical criteria for IMC, including those who did not undergo endoscopy or other objective testing for intestinal inflammation, and therefore may not have had a true colitis. Like prior work, this is also a single-center study, and our results may not be widely generalizable, particularly since we identified fewer patients compared to prior work and our patient population is highly variable, including individuals with several different underlying malignancies. We did not exclude patients with prior non-GI irAEs in either group, although the presence of these was not independently associated with increased OS in our study. 
We also have not accounted for other factors that may be potential predictors of ICI response, including tumor PD-L1 expression, tumor mutational burden, gut microbial composition, proton pump inhibitor use, and combination treatment with tyrosine kinase inhibitors[29-34]. CONCLUSION: In conclusion, our findings suggest that the presence of IMC is associated with improved OS in cancer patients when cases were matched closely to controls. We also found that vitamin D supplementation was significantly associated with development of both IMC and severe IMC, while immune-enhancing medications were significantly associated with decreased risk of IMC. Future work should focus on broader populations to resolve the discrepancies raised in our work and to confirm the association between IMC and increased cancer survival. Closely involving gastroenterologists in the workup and management of IMC will be crucial to ensuring the best possible care for these patients.
Background: Immune checkpoint inhibitor-mediated colitis (IMC) is a common adverse event following immune checkpoint inhibitor (ICI) therapy for cancer. IMC has been associated with improved overall survival (OS) and progression-free survival (PFS), but data are limited to a single site and predominantly for melanoma patients. Methods: We performed a retrospective case-control study including 64 ICI users who developed IMC, matched according to age, sex, ICI class, and malignancy to a cohort of ICI users without IMC, from May 2011 to May 2020. Using univariate and multivariate logistic regression, we determined the association of IMC with OS and PFS and identified clinical predictors of IMC. Kaplan-Meier curves were generated to compare OS and PFS between ICI users with and without IMC. Results: IMC was significantly associated with a longer OS (mean 24.3 mo vs 17.7 mo, P = 0.05) but not PFS (mean 13.7 mo vs 11.9 mo, P = 0.524). IMC was significantly associated with OS greater than 12 mo [Odds ratio (OR) 2.81, 95% confidence interval (CI) 1.17-6.77]. Vitamin D supplementation was significantly associated with increased risk of IMC (OR 2.48, 95%CI 1.01-6.07). Conclusions: IMC was significantly associated with OS greater than 12 mo. In contrast to prior work, we found that vitamin D use may be a risk factor for IMC.
INTRODUCTION: Immune checkpoint inhibitors (ICI) have dramatically changed the landscape of cancer therapy. Early studies showed significantly prolonged survival in patients with metastatic melanoma compared to standard chemotherapy[1], and evidence now exists for improved outcomes in a variety of tumors ranging from lung cancers to urothelial carcinoma to breast cancer[2-5]. Although these are powerful treatments in our armamentarium against malignancy, ICI can cause immune-related adverse events (irAE) characterized by autoimmune-like inflammation in a variety of non-tumor organs, leading to increased morbidity for patients[6]. One of the most common irAE is immune checkpoint inhibitor-mediated colitis (IMC). IMC may occur in up to 40% of patients treated with ipilimumab, an antibody targeting CTLA-4, 11%-17% of patients treated with antibodies against anti-PD-1 or anti-PD-L1, such as nivolumab, pembrolizumab, or atezolizumab, and around 32% of patients treated with a combination of anti-CTLA-4 and anti-PD-1[7]. Prior retrospective analyses of patients with IMC have attempted to identify characteristics associated with development of IMC, including type of malignancy, ICI class, dose of ICI, cancer stage, and vitamin D use[8-11]. Intriguingly, two prior studies have suggested that development of IMC may positively correlate with improved progression-free survival (PFS) and overall survival (OS)[9,10]. One of these studies controlled for confounding effects of ICI class via frequency matching, but was limited to patients with melanoma, hindering wider applicability of their findings[10]. These findings also conflict with data suggesting that use of steroids and the anti-TNF antibody infliximab in patients treated with ICI are associated with worse cancer outcomes[12,13]. These discrepancies represent a significant knowledge gap that impedes our ability to evaluate and manage IMC and ICI use. Here we present data from a retrospective study of patients treated with ICI at our institution who developed IMC across malignancy types. We compare this cohort to a matched control cohort to determine whether IMC was associated with improved progression-free survival and overall survival. We also evaluate which clinical characteristics increase the risk of developing IMC, including severe IMC. CONCLUSION: Future research in this area should seek to expand current knowledge of the relationship between IMC and cancer survival. In particular, future work should focus on broadening the type and number of patients treated with immune checkpoint inhibitors and on tracking patients prior to initiating checkpoint inhibitors to determine if this relationship remains significant prospectively.
Background: Immune checkpoint inhibitor-mediated colitis (IMC) is a common adverse event following immune checkpoint inhibitor (ICI) therapy for cancer. IMC has been associated with improved overall survival (OS) and progression-free survival (PFS), but data are limited to a single site and predominantly for melanoma patients. Methods: We performed a retrospective case-control study including 64 ICI users who developed IMC, matched according to age, sex, ICI class, and malignancy to a cohort of ICI users without IMC, from May 2011 to May 2020. Using univariate and multivariate logistic regression, we determined the association of IMC with OS and PFS and identified clinical predictors of IMC. Kaplan-Meier curves were generated to compare OS and PFS between ICI users with and without IMC. Results: IMC was significantly associated with a longer OS (mean 24.3 mo vs 17.7 mo, P = 0.05) but not PFS (mean 13.7 mo vs 11.9 mo, P = 0.524). IMC was significantly associated with OS greater than 12 mo [Odds ratio (OR) 2.81, 95% confidence interval (CI) 1.17-6.77]. Vitamin D supplementation was significantly associated with increased risk of IMC (OR 2.48, 95%CI 1.01-6.07). Conclusions: IMC was significantly associated with OS greater than 12 mo. In contrast to prior work, we found that vitamin D use may be a risk factor for IMC.
10,141
279
[ 406, 625, 249, 3796, 718, 658, 510, 1251, 109 ]
10
[ "imc", "patients", "ici", "immune", "checkpoint", "immune checkpoint", "inhibitor", "checkpoint inhibitor", "immune checkpoint inhibitor", "associated" ]
[ "imc immune checkpoint", "predictors immune checkpoint", "malignancy immune checkpoint", "ici immune checkpoint", "checkpoint inhibitor colitis" ]
null
[CONTENT] Immune checkpoint inhibitors | Immune checkpoint inhibitor-mediated colitis | Immune-related adverse events [SUMMARY]
[CONTENT] Immune checkpoint inhibitors | Immune checkpoint inhibitor-mediated colitis | Immune-related adverse events [SUMMARY]
null
[CONTENT] Immune checkpoint inhibitors | Immune checkpoint inhibitor-mediated colitis | Immune-related adverse events [SUMMARY]
[CONTENT] Immune checkpoint inhibitors | Immune checkpoint inhibitor-mediated colitis | Immune-related adverse events [SUMMARY]
[CONTENT] Immune checkpoint inhibitors | Immune checkpoint inhibitor-mediated colitis | Immune-related adverse events [SUMMARY]
[CONTENT] Humans | Immune Checkpoint Inhibitors | Antineoplastic Agents, Immunological | Retrospective Studies | Case-Control Studies | Melanoma | Colitis | Vitamin D [SUMMARY]
[CONTENT] Humans | Immune Checkpoint Inhibitors | Antineoplastic Agents, Immunological | Retrospective Studies | Case-Control Studies | Melanoma | Colitis | Vitamin D [SUMMARY]
null
[CONTENT] Humans | Immune Checkpoint Inhibitors | Antineoplastic Agents, Immunological | Retrospective Studies | Case-Control Studies | Melanoma | Colitis | Vitamin D [SUMMARY]
[CONTENT] Humans | Immune Checkpoint Inhibitors | Antineoplastic Agents, Immunological | Retrospective Studies | Case-Control Studies | Melanoma | Colitis | Vitamin D [SUMMARY]
[CONTENT] Humans | Immune Checkpoint Inhibitors | Antineoplastic Agents, Immunological | Retrospective Studies | Case-Control Studies | Melanoma | Colitis | Vitamin D [SUMMARY]
[CONTENT] imc immune checkpoint | predictors immune checkpoint | malignancy immune checkpoint | ici immune checkpoint | checkpoint inhibitor colitis [SUMMARY]
[CONTENT] imc immune checkpoint | predictors immune checkpoint | malignancy immune checkpoint | ici immune checkpoint | checkpoint inhibitor colitis [SUMMARY]
null
[CONTENT] imc immune checkpoint | predictors immune checkpoint | malignancy immune checkpoint | ici immune checkpoint | checkpoint inhibitor colitis [SUMMARY]
[CONTENT] imc immune checkpoint | predictors immune checkpoint | malignancy immune checkpoint | ici immune checkpoint | checkpoint inhibitor colitis [SUMMARY]
[CONTENT] imc immune checkpoint | predictors immune checkpoint | malignancy immune checkpoint | ici immune checkpoint | checkpoint inhibitor colitis [SUMMARY]
[CONTENT] imc | patients | ici | immune | checkpoint | immune checkpoint | inhibitor | checkpoint inhibitor | immune checkpoint inhibitor | associated [SUMMARY]
[CONTENT] imc | patients | ici | immune | checkpoint | immune checkpoint | inhibitor | checkpoint inhibitor | immune checkpoint inhibitor | associated [SUMMARY]
null
[CONTENT] imc | patients | ici | immune | checkpoint | immune checkpoint | inhibitor | checkpoint inhibitor | immune checkpoint inhibitor | associated [SUMMARY]
[CONTENT] imc | patients | ici | immune | checkpoint | immune checkpoint | inhibitor | checkpoint inhibitor | immune checkpoint inhibitor | associated [SUMMARY]
[CONTENT] imc | patients | ici | immune | checkpoint | immune checkpoint | inhibitor | checkpoint inhibitor | immune checkpoint inhibitor | associated [SUMMARY]
[CONTENT] imc | patients | ici | anti | treated | patients treated | survival | improved | variety | studies [SUMMARY]
[CONTENT] ici | imc | patients | data | initiation | anti | patient | history | outcomes | pfs [SUMMARY]
null
[CONTENT] imc | closely | work | associated | significantly associated | focus | care possible patients | os cancer | os cancer patients | os cancer patients cases [SUMMARY]
[CONTENT] imc | patients | ici | immune | mo | associated | immune checkpoint | checkpoint | checkpoint inhibitor | immune checkpoint inhibitor [SUMMARY]
[CONTENT] imc | patients | ici | immune | mo | associated | immune checkpoint | checkpoint | checkpoint inhibitor | immune checkpoint inhibitor [SUMMARY]
[CONTENT] IMC | ICI ||| IMC [SUMMARY]
[CONTENT] 64 | ICI | IMC | ICI | IMC | May 2011 to May 2020 ||| IMC | IMC ||| Kaplan-Meier | ICI | IMC [SUMMARY]
null
[CONTENT] IMC ||| IMC [SUMMARY]
[CONTENT] IMC | ICI ||| IMC ||| 64 | ICI | IMC | ICI | IMC | May 2011 to May 2020 ||| IMC | IMC ||| Kaplan-Meier | ICI | IMC ||| IMC | 24.3 mo | 17.7 mo | 0.05 | 13.7 mo | 11.9 | 0.524 ||| IMC ||| 2.81 | 95% | CI | 1.17-6.77 ||| Vitamin D | IMC | 2.48 | 1.01-6.07 ||| IMC ||| IMC [SUMMARY]
[CONTENT] IMC | ICI ||| IMC ||| 64 | ICI | IMC | ICI | IMC | May 2011 to May 2020 ||| IMC | IMC ||| Kaplan-Meier | ICI | IMC ||| IMC | 24.3 mo | 17.7 mo | 0.05 | 13.7 mo | 11.9 | 0.524 ||| IMC ||| 2.81 | 95% | CI | 1.17-6.77 ||| Vitamin D | IMC | 2.48 | 1.01-6.07 ||| IMC ||| IMC [SUMMARY]
Magnesium Isoglycyrrhizinate Induces an Inhibitory Effect on Progression and Epithelial-Mesenchymal Transition of Laryngeal Cancer via the NF-κB/Twist Signaling.
33376307
Magnesium isoglycyrrhizinate (MI), a compound derived from the roots of the plant Glycyrrhiza glabra, displays multiple pharmacological activities, including anti-inflammatory, anti-apoptotic, and anti-tumor effects. Here, we aimed to investigate the effect of MI on the progression and epithelial-mesenchymal transition (EMT) of laryngeal cancer.
BACKGROUND
Forty laryngeal cancer clinical samples were used. The role of MI in the proliferation of laryngeal cancer cells was assessed by MTT assay, EdU assay and colony formation assay. The function of MI in the migration and invasion of laryngeal cancer cells was tested by transwell assays. The effect of MI on apoptosis of laryngeal cancer cells was determined by cell apoptosis assay. The impact of MI on tumor growth in vivo was analyzed by tumorigenicity analysis using BALB/c nude mice. qPCR and Western blot analysis were performed to measure gene and protein expression levels, respectively.
METHODS
We identified that the EMT-related transcription factor Twist was significantly elevated in laryngeal cancer tissues. The expression of Twist was also enhanced in human laryngeal carcinoma HEP-2 cells compared with that in primary laryngeal epithelial cells. High expression of Twist was markedly correlated with poor overall survival of patients with laryngeal cancer. Meanwhile, our data revealed that MI reduced cell proliferation, migration and invasion and enhanced apoptosis of laryngeal cancer cells in vitro. Moreover, MI decreased transcriptional activation and the expression levels of NF-κB and Twist, and alleviated EMT in vitro and in vivo. MI markedly inhibited tumor growth and EMT of laryngeal cancer cells in vivo.
RESULTS
MI restrains the progression of laryngeal cancer and inhibits EMT by modulating NF-κB/Twist signaling. Our findings provide new insight into the mechanism by which MI inhibits laryngeal carcinoma development and enrich the understanding of its anti-tumor function.
CONCLUSION
[ "Animals", "Apoptosis", "Cell Proliferation", "Epithelial-Mesenchymal Transition", "Humans", "Laryngeal Neoplasms", "Mice", "Mice, Inbred BALB C", "Mice, Nude", "NF-kappa B", "Neoplasms, Experimental", "Nuclear Proteins", "Saponins", "Signal Transduction", "Triterpenes", "Tumor Cells, Cultured", "Twist-Related Protein 1" ]
7765753
Introduction
Laryngeal cancer is one of the most prevalent malignancies of the respiratory system, accounting for 26% to 30% of head and neck cancers.1,2 Almost 95% of the histologic pathogeny of laryngeal cancer is laryngeal squamous carcinoma, and the survival incidence of patients is low.3 Despite the development of new strategies in surgery, chemotherapy, and radiation, the targeted treatment of laryngeal cancer is still a challenge.4 Thus, identification of safe and effective treatment candidates for laryngeal cancer is urgently needed. Epithelial–mesenchymal transition (EMT) serves as a cellular program, in which cells drop their epithelial features and gain mesenchymal characteristics.5,6 EMT is correlated with multiple tumor progressions, such as resistance to therapy, blood intravasation, tumor initiation, tumor cell migration, tumor stemness, malignant progression, and metastasis.7–9 As a critical process of cancer development, EMT contributes to the development of laryngeal cancer. It has been reported that the combination of photodynamic therapy and carboplatin suppresses the expression of MMP-2/MMP-9 and EMT of laryngeal cancer by ROS-inhibited MEK/ERK signaling.10 EZH2 increases metastasis and aggression of laryngeal squamous cell carcinoma through the EMT program by modulating H3K27me3.11 However, investigation about the inhibitory candidates of EMT in laryngeal cancer remains limited. Nutraceutical agents display a unique therapeutic activity to treat diseases such as inflammation and cancer.12–17 Glycyrrhizic acid (GA) is extracted from the roots of licorice and serves as a major component of licorice, presenting multiple biomedical activities, such as anti-oxidant and anti-inflammatory.18 Magnesium isoglycyrrhizinate (MI), refined from GA, is a18-α-GA stereoisomer magnesium salt and demonstrates a better activity than 18-β-GA.19 As a natural and safe compound, MI shows many biomedical activities such as anti-inflammation capacities,20 anti-apoptosis21 and anti-tumor.22 It has been reported that MI inhibits fructose-modulated lipid metabolism disorder and activation of the NF-κB/NLRP3 inflammasome.23 MI reduces paclitaxel in patients with epithelial ovarian cancer managed with cisplatin and paclitaxel.24 Moreover, MI attenuates high fructose-induced liver fibrosis and EMT by upregulating miR-375-3p to overcome the TGF-β1/Smad pathway and JAK2/STAT3 signaling.25 However, the role of MI in cancer development is unknown. The effect of MI on the progression and EMT of laryngeal cancer remains unreported. Many transcription factors, such as Twist, serve as molecule switches in EMT progression.26 Twist is a crucial transcription factor for the modulating of EMT, which advances cell invasion, migration, and cancer metastasis, conferring tumor cells with stem cell-like properties and providing therapy resistance.27,28 The function of Twist in EMT of cancer development has been well reported. 
Disrupting the diacetylated Twist represses the progression of Basal-like breast cancer.29 The activation of Twist promotes progression and EMT of breast cancer.30 The inhibition of Twist limits the stem cell features and EMT of prostate cancer.31 Twist serves as a critical factor in promoting metastasis of pancreatic cancer.32 Besides, tumor necrosis factor α provokes EMT of hypopharyngeal cancer and induces metastasis through NF-κB signaling-regulated expression of Twist.33 NF-κB/Twist signaling is involved in the mechanism of Chysin inhibiting stem cell characteristics of ovarian cancer.34 Repression of the NF-κB/Twist axis decreases the stemness features of lung cancer stem cell.35 Down-regulation of TWIST reduces invasion and migration of laryngeal carcinoma cells by controlling the expression of N-cadherin and E-cadherin.36 The expression of Twist displays the clinical significance of laryngeal cancer.37 However, whether NF-κB and Twist are involved in the biomedical activities of MI is unclear. In this study, we aimed to explore the function of MI in the development and EMT process of laryngeal cancer. We identified a novel inhibitory effect of MI in the progression and EMT of laryngeal cancer by regulating the NF-κB/Twist signaling.
null
null
null
null
Conclusion
In conclusion, we discovered that magnesium isoglycyrrhizinate attenuated the progression of laryngeal cancer in vitro and in vivo. Magnesium isoglycyrrhizinate inhibited epithelial–mesenchymal transition in laryngeal cancer by modulating NF-κB/Twist signaling. Our findings provide new insights into the mechanism by which magnesium isoglycyrrhizinate inhibits laryngeal carcinoma development. Magnesium isoglycyrrhizinate may serve as a potential anti-tumor candidate for laryngeal cancer in clinical treatment strategies.
[ "Methods", "Laryngeal Cancer Clinical Samples", "Cell Culture and Treatment", "Quantitative Reverse Transcription-PCR (qRT-PCR)", "MTT Assays", "EdU Assays", "Colony Formation Assay", "Transwell Assays", "Analysis of Cell Apoptosis", "Luciferase Reporter Gene Assay", "Western Blot Analysis", "Analysis of Tumorigenicity in Nude Mice", "Statistical Analysis", "Results", "Twist is Potentially Correlated with the Progression and Poor Prognosis of Laryngeal Cancer", "Magnesium Isoglycyrrhizinate (MI) Attenuates Cell Proliferation of Laryngeal Cancer in vitro", "MI Inhibits Laryngeal Cancer Cell Migration and Invasion and Enhances Cell Apoptosis in vitro", "MI Inhibits Transcriptional Activation and the Expression of NF-κB and Twist and Alleviates EMT in Laryngeal Cancer Cells", "MI Inhibits Tumor Growth and EMT of Laryngeal Cancer via the Twist/NF-κB Signaling in vivo", "Discussion", "Conclusion" ]
[ " Laryngeal Cancer Clinical Samples A total of 40 laryngeal cancer clinical samples used in this study were obtained from the Second Affiliated Hospital, Harbin Medical University between June 2016 and August 2018. All the patients were diagnosed by clinical, radiographic, and histopathological analysis. Before surgery, no systemic or local therapy was performed on the subjects. The laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40) obtained from the patients were immediately frozen into liquid nitrogen and stored at −80 °C before use. The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan–Meier survival analysis. The patients and healthy controls provided written informed consent, and that the Ethics Committee of The Second Affiliated Hospital, Harbin Medical University approved this study.\nA total of 40 laryngeal cancer clinical samples used in this study were obtained from the Second Affiliated Hospital, Harbin Medical University between June 2016 and August 2018. All the patients were diagnosed by clinical, radiographic, and histopathological analysis. Before surgery, no systemic or local therapy was performed on the subjects. The laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40) obtained from the patients were immediately frozen into liquid nitrogen and stored at −80 °C before use. The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan–Meier survival analysis. The patients and healthy controls provided written informed consent, and that the Ethics Committee of The Second Affiliated Hospital, Harbin Medical University approved this study.\n Cell Culture and Treatment The human laryngeal carcinoma cells HEP-2 and primary laryngeal epithelial cells LEC-P were purchased from American Type Tissue Culture Collection. Cells were cultured in DMEM (Solarbio, China) containing 10% fetal bovine serum (Gibco, USA), 0.1 mg/mL streptomycin (Solarbio, China) and 100 units/mL penicillin (Solarbio, China) at 37 °C with 5% CO2. The Magnesium isoglycyrrhizinate (MI) (purity > 98%) was obtained from Zhengda Tianqing Pharmaceutical Co., Ltd (Jiangsu, China). Cells were treated with MI of indicated dose before further analysis.\nThe human laryngeal carcinoma cells HEP-2 and primary laryngeal epithelial cells LEC-P were purchased from American Type Tissue Culture Collection. Cells were cultured in DMEM (Solarbio, China) containing 10% fetal bovine serum (Gibco, USA), 0.1 mg/mL streptomycin (Solarbio, China) and 100 units/mL penicillin (Solarbio, China) at 37 °C with 5% CO2. The Magnesium isoglycyrrhizinate (MI) (purity > 98%) was obtained from Zhengda Tianqing Pharmaceutical Co., Ltd (Jiangsu, China). Cells were treated with MI of indicated dose before further analysis.\n Quantitative Reverse Transcription-PCR (qRT-PCR) Total RNAs were extracted using TRIZOL (Invitrogen, USA), followed by reverse transcription into cDNA. The qRT-PCR reactions were prepared using the SYBR Real-time PCR I kit (Takara, Japan). GAPDH was used as the internal control. The qRT-PCR experiment was conducted in triplicate. 
The primer sequences were as follows:\nTwist forward: 5′-CGCTGAACGAGGCATTTGC-3′\nTwist reverse: 5′-CCAGTTTGAGGGTCTGAATC-3′\nSlug forward: 5′-GTGTTTGCAAGATCTGCGGC-3′\nSlug reverse: 5′-GCAGATGAGCCCTCAGATTTGA-3′\nZEB1 forward: 5′-CCCCAGGTGTAAGCGCAGAA-3′\nZEB1 reverse: 5′-TGGCAGGTCATCCTCTGGTACAC-3′\nSnail forward: 5′-CCACACTGGTGAGAAGCCTTTC-3′\nSnail reverse: 5′-GTCTGGAGGTGGGCACGTA-3′\nGAPDH forward: 5′-AAGAAGGTGGTGAAGCAGGC-3′.\nGAPDH reverse: 5′-TCCACCACCCAGTTGCTGTA-3′\nTotal RNAs were extracted using TRIZOL (Invitrogen, USA), followed by reverse transcription into cDNA. The qRT-PCR reactions were prepared using the SYBR Real-time PCR I kit (Takara, Japan). GAPDH was used as the internal control. The qRT-PCR experiment was conducted in triplicate. The primer sequences were as follows:\nTwist forward: 5′-CGCTGAACGAGGCATTTGC-3′\nTwist reverse: 5′-CCAGTTTGAGGGTCTGAATC-3′\nSlug forward: 5′-GTGTTTGCAAGATCTGCGGC-3′\nSlug reverse: 5′-GCAGATGAGCCCTCAGATTTGA-3′\nZEB1 forward: 5′-CCCCAGGTGTAAGCGCAGAA-3′\nZEB1 reverse: 5′-TGGCAGGTCATCCTCTGGTACAC-3′\nSnail forward: 5′-CCACACTGGTGAGAAGCCTTTC-3′\nSnail reverse: 5′-GTCTGGAGGTGGGCACGTA-3′\nGAPDH forward: 5′-AAGAAGGTGGTGAAGCAGGC-3′.\nGAPDH reverse: 5′-TCCACCACCCAGTTGCTGTA-3′\n MTT Assays MTT assays were conducted to measure cell viability of HEP-2 cells. Briefly, about 1 × 104 HEP-2 cells were put into 96 wells and cultured for 12 h. Cells were then added with 10 μL MTT solution (5 mg/mL) (Sigma, USA) and cultured for another 4 h. The culture medium was discarded, and 150 μL/well DMSO (Thermo, USA) was added to the wells. An ELISA browser was used to analyze the absorbance at 570 nm (Bio-Tek EL 800, USA).\nMTT assays were conducted to measure cell viability of HEP-2 cells. Briefly, about 1 × 104 HEP-2 cells were put into 96 wells and cultured for 12 h. Cells were then added with 10 μL MTT solution (5 mg/mL) (Sigma, USA) and cultured for another 4 h. The culture medium was discarded, and 150 μL/well DMSO (Thermo, USA) was added to the wells. An ELISA browser was used to analyze the absorbance at 570 nm (Bio-Tek EL 800, USA).\n EdU Assays The cell proliferation was analyzed by EdU assays using EdU detecting kit (RiboBio, China). Briefly, HEP-2 cells were cultured with EdU for 2 h, followed by fixation with 4% paraformaldehyde at room temperature for 30 min. Then, cells were permeabilized with 0.4% Triton X-100 for 10 min and stained with staining cocktail of EdU at room temperature for 30 min in the dark. Next, nuclear of the cells was stained with Hoechst at room temperature for 30 min. Images were analyzed using a fluorescence microscope.\nThe cell proliferation was analyzed by EdU assays using EdU detecting kit (RiboBio, China). Briefly, HEP-2 cells were cultured with EdU for 2 h, followed by fixation with 4% paraformaldehyde at room temperature for 30 min. Then, cells were permeabilized with 0.4% Triton X-100 for 10 min and stained with staining cocktail of EdU at room temperature for 30 min in the dark. Next, nuclear of the cells was stained with Hoechst at room temperature for 30 min. Images were analyzed using a fluorescence microscope.\n Colony Formation Assay About 1 × 103 HEP-2 cells were layered in 6 wells and incubated in DMEM at 37 °C. After two weeks, cells were cleaned with PBS Buffer, made in methanol for 30 min, and dyed with 1% crystal violet. The number of colonies was then calculated.\nAbout 1 × 103 HEP-2 cells were layered in 6 wells and incubated in DMEM at 37 °C. 
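The quantification model is not specified above; for illustration, relative expression with GAPDH as the internal control could be computed under the commonly used 2^(−ΔΔCt) assumption, as in the minimal sketch below (all Ct values are hypothetical).

```python
# Minimal sketch of relative quantification by the 2^(-delta-delta-Ct) method,
# assuming GAPDH as the internal control. The quantification model is not
# stated in the text above, and all Ct values below are hypothetical.

def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Fold change of a target gene in a sample relative to a reference sample."""
    delta_ct = ct_target - ct_gapdh              # normalize sample to GAPDH
    delta_ct_ref = ct_target_ref - ct_gapdh_ref  # normalize reference to GAPDH
    return 2 ** (-(delta_ct - delta_ct_ref))

# Hypothetical example: Twist in a tumor sample versus adjacent normal tissue.
fold_change = relative_expression(ct_target=24.1, ct_gapdh=18.0,
                                  ct_target_ref=27.3, ct_gapdh_ref=18.2)
print(f"Twist fold change (tumor vs. normal): {fold_change:.2f}")
```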
MTT Assays

MTT assays were conducted to measure the viability of HEP-2 cells. Briefly, about 1 × 10^4 HEP-2 cells were seeded into 96-well plates and cultured for 12 h. Cells were then treated with 10 μL of MTT solution (5 mg/mL) (Sigma, USA) and cultured for another 4 h. The culture medium was discarded, 150 μL/well of DMSO (Thermo, USA) was added, and the absorbance at 570 nm was read on a microplate reader (Bio-Tek EL 800, USA).

EdU Assays

Cell proliferation was analyzed with an EdU detection kit (RiboBio, China). Briefly, HEP-2 cells were incubated with EdU for 2 h and then fixed with 4% paraformaldehyde at room temperature for 30 min. Cells were permeabilized with 0.4% Triton X-100 for 10 min and stained with the EdU staining cocktail at room temperature for 30 min in the dark. Nuclei were then counterstained with Hoechst at room temperature for 30 min, and images were analyzed under a fluorescence microscope.

Colony Formation Assay

About 1 × 10^3 HEP-2 cells were seeded into 6-well plates and incubated in DMEM at 37 °C. After two weeks, cells were washed with PBS, fixed in methanol for 30 min, and stained with 1% crystal violet. The number of colonies was then counted.

Transwell Assays

Transwell assays were conducted to evaluate the effects of MI on the invasion and migration of HEP-2 cells using Transwell plates (Corning, USA). Briefly, about 1 × 10^5 cells were plated in the upper chambers. Cells were then fixed with 4% paraformaldehyde and stained with crystal violet, and the invaded and migrated cells were recorded and counted.

Analysis of Cell Apoptosis

Approximately 2 × 10^5 HEP-2 cells were plated in 6-well dishes. Cell apoptosis was detected using the Annexin V-FITC Apoptosis Detection Kit (CST, USA) following the manufacturer's instructions. Briefly, cells were collected, washed with binding buffer (BD Biosciences, USA), stained at 25 °C, and analyzed by flow cytometry.

Luciferase Reporter Gene Assay

Luciferase reporter gene assays were performed using the Dual-Luciferase Reporter Assay System (Promega, USA). Briefly, cells were treated with MI at the indicated doses and then transfected with pGL3-NF-κB and pGL3-Twist using Lipofectamine 3000 (Invitrogen, USA). Firefly luciferase activities were detected and normalized to Renilla luciferase.
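As noted above, reporter activity was normalized to Renilla luciferase; a minimal sketch of such normalization, using hypothetical luminescence readings, is shown below.

```python
# Minimal sketch of dual-luciferase normalization: the firefly signal of each
# sample is divided by its Renilla signal, then expressed relative to the
# untreated control. All luminescence readings below are hypothetical.
firefly = {"control": 125000.0, "MI": 41000.0}   # e.g., pGL3-NF-κB reporter signal
renilla = {"control": 98000.0, "MI": 95000.0}    # co-transfected Renilla control signal

ratio = {k: firefly[k] / renilla[k] for k in firefly}            # per-sample normalization
relative = {k: v / ratio["control"] for k, v in ratio.items()}   # fold change vs. control
print(relative)  # control = 1.0; a value below 1.0 for MI indicates reduced reporter activity
```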
Western Blot Analysis

Total proteins were extracted from cells or tumor tissues using RIPA buffer (CST, USA), and protein concentrations were measured using the BCA Protein Quantification Kit (Abbkine, USA). Equal amounts of protein were separated by SDS-PAGE (12% polyacrylamide gels) and transferred to PVDF membranes (Millipore, USA). The membranes were blocked with 5% milk and incubated at 4 °C overnight with primary antibodies (all 1:1000; Abcam, UK) against Twist, E-cadherin, occludin, vimentin, N-cadherin, Ki-67, IKK, NF-κB p65, NF-κB, and β-actin, with β-actin serving as the loading control. The membranes were then incubated with the corresponding secondary antibodies (1:1000) (Abcam, UK) at room temperature for 1 h and visualized using an Odyssey CLx Infrared Imaging System. Band intensities were quantified with ImageJ software.

Analysis of Tumorigenicity in Nude Mice

The effect of MI on tumor growth in vivo was analyzed in Balb/c nude mice randomly separated into two groups (n = 3). To establish the in vivo tumor model, HEP-2 cells were treated with MI (300 mg/kg) or an equal volume of saline, and about 2 × 10^6 cells were subcutaneously injected into the mice. Beginning 7 days after injection, tumor growth was measured every 7 days. The mice were sacrificed 35 days after injection and the tumors were weighed. Tumor volume (V) was determined by measuring the length (L) and width (W) with calipers and calculated with the formula V = (L × W^2) × 0.5. The expression of Ki-67 in tumor tissues was assessed by immunohistochemical staining with a Ki-67 antibody (1:1000) (Abcam, UK). Protein expression levels in tumor tissues were determined by Western blot analysis using antibodies (all 1:1000; Abcam, UK) against Twist, E-cadherin, occludin, vimentin, N-cadherin, Ki-67, IKK, NF-κB p65, NF-κB, and β-actin. Animal care and experimental procedures in this study were approved by the Animal Ethics Committee of the Second Affiliated Hospital, Harbin Medical University.
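For clarity, the caliper-based volume formula given above, V = (L × W^2) × 0.5, can be applied as in the short sketch below; the length and width values shown are hypothetical.

```python
# Tumor volume from caliper measurements using the formula stated above:
# V = (L x W^2) x 0.5. The length and width values below are hypothetical.

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Approximate tumor volume in mm^3 from caliper length and width in mm."""
    return 0.5 * length_mm * width_mm ** 2

print(tumor_volume(length_mm=12.0, width_mm=8.0))  # 384.0 mm^3
```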
Statistical Analysis

Data are presented as mean ± SD, and statistical analysis was performed using SPSS software (version 18.0). The unpaired Student's t-test was applied for comparisons between two groups, and one-way ANOVA was applied for comparisons among multiple groups. P < 0.05 was considered statistically significant.
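The analyses above were performed in SPSS; an equivalent minimal sketch using SciPy, with hypothetical measurements, is shown below for illustration.

```python
# Minimal sketch of the statistical tests described above (unpaired Student's
# t-test for two groups, one-way ANOVA for multiple groups), implemented with
# SciPy rather than SPSS. All measurement values below are hypothetical.
from scipy import stats

control = [0.95, 1.02, 0.98, 1.05]
mi_treated = [0.61, 0.58, 0.66, 0.63]

t_stat, p_two_groups = stats.ttest_ind(control, mi_treated)   # two-group comparison

dose_low = [0.90, 0.88, 0.92]
dose_mid = [0.72, 0.74, 0.69]
dose_high = [0.50, 0.48, 0.55]
f_stat, p_multi = stats.f_oneway(control, dose_low, dose_mid, dose_high)  # multi-group comparison

print(f"t-test p = {p_two_groups:.4g}; ANOVA p = {p_multi:.4g}")
# As in the text above, P < 0.05 would be considered statistically significant.
```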
Results

Twist is Potentially Correlated with the Progression and Poor Prognosis of Laryngeal Cancer

To assess the potential correlation of EMT-related transcription factors with laryngeal cancer, the expression of Twist, Slug, ZEB1, and Snail in clinical laryngeal samples and laryngeal cells was measured by qPCR. The expression levels of Twist, Slug, ZEB1, and Snail were significantly elevated in laryngeal cancer tissues (n = 40) compared with normal tissues (n = 40), among which Twist displayed the highest expression (P < 0.01) (Figure 1A). The expression levels of Twist, Slug, ZEB1, and Snail were likewise enhanced in human laryngeal carcinoma HEP-2 cells compared with primary laryngeal epithelial cells (LEC-P), with Twist again showing the highest expression among these transcription factors (P < 0.001) (Figure 1B), implying that Twist is closely associated with the development of laryngeal cancer. To determine whether Twist could serve as a potential biomarker for patients with laryngeal cancer, we separated the clinical laryngeal cancer samples into two groups according to the mean expression of Twist. High expression of Twist was remarkably correlated with poor overall survival (P < 0.01) (Figure 1C), suggesting that Twist may play a crucial role in the progression of laryngeal cancer.

Figure 1. Twist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist, and overall survival was analyzed by Kaplan–Meier survival analysis. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01, ***P < 0.001.
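The survival comparison above splits patients at the mean Twist expression and compares the groups by Kaplan–Meier analysis; a minimal sketch of such an analysis using the lifelines package, with entirely hypothetical expression values, follow-up times, and event indicators, is shown below.

```python
# Minimal sketch of a Kaplan–Meier comparison of Twist-high versus Twist-low
# patients, grouped at the mean expression as described above. Uses the
# lifelines package; all values below are hypothetical.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

twist = np.array([2.1, 5.4, 3.3, 6.8, 1.9, 4.7, 7.2, 2.5])  # relative Twist expression
months = np.array([60, 22, 55, 18, 62, 30, 12, 58])          # follow-up time (months)
event = np.array([0, 1, 0, 1, 0, 1, 1, 0])                   # 1 = death observed

high = twist > twist.mean()  # grouping at the mean, as in the text above

kmf = KaplanMeierFitter()
for label, mask in (("Twist high", high), ("Twist low", ~high)):
    kmf.fit(months[mask], event[mask], label=label)
    print(label, "median survival (months):", kmf.median_survival_time_)

result = logrank_test(months[high], months[~high], event[high], event[~high])
print("log-rank p =", result.p_value)
```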
Magnesium Isoglycyrrhizinate (MI) Attenuates Cell Proliferation of Laryngeal Cancer in vitro

The role of MI in the modulation of laryngeal cancer proliferation was then investigated; the structural formula of MI is shown in Figure 2A. To evaluate the effect of MI on the progression of laryngeal cancer, MTT, colony formation, and EdU assays were performed in HEP-2 cells treated with MI. The MTT assay demonstrated that MI significantly inhibited cell viability in a dose-dependent manner, with a half-maximal inhibitory concentration (IC50) in HEP-2 cells of 3.22 mg/mL (P < 0.01) (Figure 2B); this dose was therefore used in the following experiments. Similarly, the colony formation assay showed that the colony numbers of HEP-2 cells were remarkably reduced by MI treatment (P < 0.01) (Figure 2C), and the EdU assay showed that MI notably decreased the number of EdU-positive cells (P < 0.01) (Figure 2D). These data suggest that MI is able to inhibit the proliferation of laryngeal cancer cells.

Figure 2. Magnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structural formula of MI. (B) Cell viability was analyzed by MTT assays in HEP-2 cells treated with MI at the indicated doses. (C) Cell proliferation was measured by colony formation assays in HEP-2 cells treated with MI at the indicated doses. (D) Cell proliferation was examined by EdU assays in HEP-2 cells treated with MI at the indicated doses. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01.
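The IC50 reported above was derived from the dose-dependent MTT data; one minimal way such a value could be estimated is by fitting a Hill-type dose–response curve, as sketched below with hypothetical viability data (the fitting procedure is not specified above).

```python
# Minimal sketch of estimating an IC50 from MTT viability data by fitting a
# Hill-type dose–response curve with SciPy. The fitting procedure is not stated
# in the text above, and the dose/viability values below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])             # MI concentration (mg/mL)
viability = np.array([0.92, 0.80, 0.62, 0.41, 0.22])    # fraction of untreated control

def hill(c, ic50, slope):
    """Viable fraction at concentration c (top fixed at 1, bottom at 0)."""
    return 1.0 / (1.0 + (c / ic50) ** slope)

(ic50, slope), _ = curve_fit(hill, dose, viability, p0=[2.0, 1.0])
print(f"Estimated IC50 = {ic50:.2f} mg/mL (Hill slope = {slope:.2f})")
```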
MI Inhibits Laryngeal Cancer Cell Migration and Invasion and Enhances Cell Apoptosis in vitro

The role of MI in HEP-2 cell migration, invasion, and apoptosis was then evaluated. Transwell assays revealed that MI treatment remarkably reduced cell migration (P < 0.01) (Figure 3A), and cell invasion was likewise significantly decreased by MI in HEP-2 cells (P < 0.01) (Figure 3B). Moreover, flow cytometry analysis showed that cell apoptosis was notably increased by MI treatment (P < 0.01) (Figure 3C). These results indicate that MI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro.

Figure 3. MI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) Cell migration was examined by Transwell assays in HEP-2 cells treated with MI at the indicated doses. (B) Cell invasion was examined by Transwell assays in HEP-2 cells treated with MI at the indicated doses. (C) Cell apoptosis was measured by flow cytometry analysis in HEP-2 cells treated with MI at the indicated doses. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01.
MI Inhibits Transcriptional Activation and the Expression of NF-κB and Twist and Alleviates EMT in Laryngeal Cancer Cells

Next, the mechanism underlying the effect of MI on the development of laryngeal cancer in HEP-2 cells was further explored. MI treatment significantly reduced the luciferase activity of the NF-κB reporter in the cells (P < 0.01) (Figure 4A), suggesting that MI may inhibit NF-κB at the transcriptional level. Dual-luciferase reporter gene assays further revealed that MI treatment remarkably decreased the transcriptional activity of Twist (P < 0.01) (Figure 4B). Furthermore, Western blot analysis demonstrated that the total expression (Figure 4C) and nuclear expression (Figure 4D) of NF-κB and Twist were significantly down-regulated by MI in HEP-2 cells (P < 0.01), suggesting that MI may inhibit Twist by modulating NF-κB. Moreover, to assess the effect of MI on the EMT of laryngeal cancer, the expression of the EMT markers E-cadherin, occludin, vimentin, and N-cadherin was measured in HEP-2 cells treated with MI. The expression levels of E-cadherin and occludin were enhanced, whereas those of vimentin and N-cadherin were reduced, by MI treatment in HEP-2 cells (P < 0.01) (Figure 4E), suggesting that MI can inhibit EMT of laryngeal cancer.

Figure 4. MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activity of NF-κB was determined by luciferase reporter gene assays in HEP-2 cells treated with MI at the indicated doses. (B) The luciferase activity of Twist was determined by luciferase reporter gene assays in HEP-2 cells treated with MI at the indicated doses. (C) The total expression of NF-κB, Twist, and β-actin was measured by Western blot analysis in HEP-2 cells treated with MI at the indicated doses. (D) The nuclear expression of NF-κB, Twist, and histone H3 was tested by Western blot analysis in HEP-2 cells treated with MI at the indicated doses. (E) The expression levels of E-cadherin, occludin, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in HEP-2 cells treated with MI at the indicated doses. The results of Western blot analysis were quantified with ImageJ software. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01.
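The Western blot panels above were quantified with ImageJ and normalized to β-actin; a minimal sketch of how such normalized, control-relative values could be derived from band intensities (hypothetical densitometry readouts) is shown below.

```python
# Minimal sketch of normalizing Western blot band intensities to the β-actin
# loading control and expressing them relative to the untreated control, as in
# the ImageJ quantification above. All intensity values below are hypothetical.
twist_band = {"control": 1850.0, "MI": 920.0}    # target band intensity (e.g., Twist)
actin_band = {"control": 2100.0, "MI": 2050.0}   # β-actin band intensity

normalized = {k: twist_band[k] / actin_band[k] for k in twist_band}
relative = {k: v / normalized["control"] for k, v in normalized.items()}
print(relative)  # control = 1.0; a value below 1.0 for MI indicates reduced Twist protein
```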
We observed that high expression levels of Twist was remarkably correlated with the poor overall survival (P < 0.01) (Figure 1C), suggesting that Twist may play a crucial role in the progression of laryngeal cancer.Figure 1Twist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in the laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan-Meier survival analysis. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01, *** P < 0.001.\nTwist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in the laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan-Meier survival analysis. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01, *** P < 0.001.", "Then, the role of MI in the modulation of laryngeal cancer proliferation was investigated, and the structure formula of MI is shown in Figure 2A. To evaluate the effect of MI on the progression of laryngeal cancer, MTT assay, colony formation assay, and EdU assay were performed in HEP-2 cells treated with MI. MTT assay demonstrated that MI significantly inhibited cell viability in a dose-dependent manner, and the half-maximal inhibitory concentrations (IC50) of MI in HEP-2 cells was 3.22 mg/mL (P < 0.01) (Figure 2B). Hence, we selected the MI dose of 3.22 mg/mL in the following experiments. Similarly, colony formation assay showed that the colony numbers of HEP-2 cells were remarkably reduced by MI treatment (P < 0.01) (Figure 2C). Besides, EdU assay showed that MI notably decreased number of EdU-positive cells (P < 0.01) (Figure 2D). These data suggested that MI was able to inhibit cell proliferation of laryngeal cancer.Figure 2Magnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structure formula of MI was shown. (B) The cell viability was analyzed by MTT assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell proliferation was measured by colony formation assays in the HEP-2 cells treated with MI at indicated dosage. (D) The cell proliferation was examined by EdU assays in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.\nMagnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structure formula of MI was shown. (B) The cell viability was analyzed by MTT assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell proliferation was measured by colony formation assays in the HEP-2 cells treated with MI at indicated dosage. 
(D) The cell proliferation was examined by EdU assays in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.", "The role of MI in HEP-2 cell migration, invasion, and apoptosis was then evaluated. Transwell assays revealed that MI treatment remarkably reduced cell migration (P < 0.01) (Figure 3A). Similarly, cell invasion was significantly decreased by MI in HEP-2 cells (P < 0.01) (Figure 3B). Moreover, flow cytometry analysis showed that cell apoptosis was notably increased by MI treatment in the system (P < 0.01) (Figure 3C). These results indicated that MI inhibited laryngeal cancer cell migration and invasion and enhanced cell apoptosis in vitro.Figure 3MI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) The cell migration was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (B) The cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell apoptosis was measure by flow cytometry analysis in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.\nMI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) The cell migration was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (B) The cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell apoptosis was measure by flow cytometry analysis in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.", "Next, the underlying mechanism of the effect of MI on the development of laryngeal cancer in HEP-2 cells was further explored. It showed that MI treatment significantly reduced the luciferase activities of NF-κB in the cells (P < 0.01) (Figure 4A), suggesting that MI may inhibit NF-κB at transcriptional level. Meanwhile, dual-luciferase reporter gene assays further revealed that MI treatment remarkably decreased the transcriptional activities of Twist in the cells (P < 0.01) (Figure 4B). Furthermore, Western blot analysis demonstrated that the total expression (Figure 4C) and nucleus expression (Figure 4D) of NF-κB and Twist were significantly down-regulated by MI in HEP-2 cells (P < 0.01), suggesting that MI may inhibit Twist by modulating NF-κB. Moreover, to assess the effect of MI on the EMT of laryngeal cancer, the expression of EMT markers, including E-Cadherin, occluding, vimentin, and N-cadherin, in HEP-2 cells treated with MI were measured. Our data showed that the expression levels of E-Cadherin and occluding were enhanced while the expression levels of vimentin and N-cadherin were reduced by MI treatment in HEP-2 cells (P < 0.01) (Figure 4E), suggesting that MI can inhibit EMT of laryngeal cancer.Figure 4MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activities of NF-κB were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (B) The luciferase activities of Twist were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. 
"Next, the underlying mechanism of the effect of MI on the development of laryngeal cancer in HEP-2 cells was further explored. MI treatment significantly reduced the luciferase activity of NF-κB in the cells (P < 0.01) (Figure 4A), suggesting that MI may inhibit NF-κB at the transcriptional level. Meanwhile, dual-luciferase reporter gene assays further revealed that MI treatment remarkably decreased the transcriptional activity of Twist in the cells (P < 0.01) (Figure 4B). Furthermore, Western blot analysis demonstrated that the total expression (Figure 4C) and nuclear expression (Figure 4D) of NF-κB and Twist were significantly down-regulated by MI in HEP-2 cells (P < 0.01), suggesting that MI may inhibit Twist by modulating NF-κB. Moreover, to assess the effect of MI on the EMT of laryngeal cancer, the expression of EMT markers, including E-cadherin, occludin, vimentin, and N-cadherin, was measured in HEP-2 cells treated with MI. Our data showed that the expression levels of E-cadherin and occludin were enhanced, while the expression levels of vimentin and N-cadherin were reduced, by MI treatment in HEP-2 cells (P < 0.01) (Figure 4E), suggesting that MI can inhibit EMT of laryngeal cancer.\nFigure 4. MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activities of NF-κB were determined by luciferase reporter gene assays in HEP-2 cells treated with MI at the indicated doses. (B) The luciferase activities of Twist were determined by luciferase reporter gene assays in HEP-2 cells treated with MI at the indicated doses. (C) The total expression of NF-κB, Twist, and β-actin was measured by Western blot analysis in HEP-2 cells treated with MI at the indicated doses; the results were quantified with ImageJ software. (D) The nuclear expression of NF-κB, Twist, and histone H3 was examined by Western blot analysis in HEP-2 cells treated with MI at the indicated doses; the results were quantified with ImageJ software. (E) The expression levels of E-cadherin, occludin, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in HEP-2 cells treated with MI at the indicated doses; the results were quantified with ImageJ software. Data are presented as mean ± SD. Statistically significant differences are indicated: ** P < 0.01.", 
"The effect of MI on laryngeal cancer development was further investigated in vivo. Tumorigenicity analysis was conducted in nude mice injected with HEP-2 cells treated with MI or the corresponding control. MI treatment significantly reduced tumor size (Figure 5A), tumor weight (P < 0.01) (Figure 5B), tumor volume (P < 0.01) (Figure 5C), and the expression levels of Ki-67 in tumor tissues of the mice (Figure 5D), suggesting that MI inhibited tumor growth of laryngeal cancer in vivo.\nFigure 5. MI inhibits tumor growth of laryngeal cancer in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by tumorigenicity assay in nude mice. The HEP-2 cells were treated with MI (300 mg/kg) or an equal volume of saline and injected into the nude mice (n = 3). (A) Representative images of dissected tumors from the nude mice. (B) The average tumor weight. (C) The average tumor volume. (D) The expression levels of Ki-67 in the tumor tissues were measured by immunohistochemical staining. Data are presented as mean ± SD. Statistically significant differences are indicated: ** P < 0.01.\nMoreover, our data showed that MI treatment significantly enhanced the expression levels of E-cadherin and occludin, whereas it reduced the expression levels of vimentin and N-cadherin, in tumor tissues of the mice (P < 0.01) (Figure 6A and B), indicating that MI attenuated EMT of laryngeal cancer in vivo. Besides, the expression levels of Twist, Ki-67, and the NF-κB signaling proteins p65 and IKK were significantly decreased by MI in tumor tissues of the mice (P < 0.01) (Figure 6C and D), implying that MI may inhibit EMT of laryngeal cancer via Twist/NF-κB signaling.\nFigure 6. MI inhibits EMT of laryngeal cancer via Twist/NF-κB signaling in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by nude mouse tumorigenicity assay. The HEP-2 cells were treated with MI (300 mg/kg) or an equal volume of saline and injected into the nude mice (n = 3). (A) The expression levels of E-cadherin, occludin, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the tumor tissues of the mice. (B) Quantification of the Western blot results in (A) by ImageJ software. (C) The expression levels of NF-κB, NF-κB p65, Twist, Ki-67, IKK, and β-actin were measured by Western blot analysis in the tumor tissues of the mice. (D) Quantification of the Western blot results in (C) by ImageJ software. Data are presented as mean ± SD. Statistically significant differences are indicated: ** P < 0.01.",
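The in vivo comparisons above rely on the caliper-based volume formula given in the Methods, V = (L × W²) × 0.5, and on the unpaired Student's t-test named in the Statistical Analysis section. As a minimal sketch of that calculation, the snippet below computes volumes from hypothetical caliper measurements and compares two groups; the numbers are illustrative placeholders, not the study's data.

```python
# Hedged sketch: caliper-based tumor volume, V = 0.5 * L * W^2 (formula from the Methods),
# followed by an unpaired Student's t-test between groups.
# All measurements below are hypothetical placeholders.
import numpy as np
from scipy.stats import ttest_ind

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Tumor volume in mm^3 from caliper length (L) and width (W)."""
    return 0.5 * length_mm * width_mm ** 2

control = [(12.0, 9.0), (13.5, 10.0), (11.0, 9.5)]   # (L, W) pairs, saline group
mi_treated = [(8.0, 6.0), (7.5, 6.5), (9.0, 5.5)]    # (L, W) pairs, MI group

control_vol = np.array([tumor_volume(l, w) for l, w in control])
mi_vol = np.array([tumor_volume(l, w) for l, w in mi_treated])

t_stat, p_value = ttest_ind(control_vol, mi_vol)
print(f"Control: {control_vol.mean():.0f} ± {control_vol.std(ddof=1):.0f} mm^3")
print(f"MI-treated: {mi_vol.mean():.0f} ± {mi_vol.std(ddof=1):.0f} mm^3")
print(f"Unpaired t-test: t = {t_stat:.2f}, P = {p_value:.4f}")
```

The same pattern, per-animal readouts summarized as mean ± SD and then compared between groups, underlies the tumor weight and Ki-67 comparisons reported in the figure legends.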
"Laryngeal cancer is the second most prevalent head and neck malignancy.38 Patients with laryngeal cancer have poor survival rates and prognosis, and many still suffer from recurrence.39 Although improvements have been made in chemotherapy, radiotherapy, and surgery, invasion and metastasis remain the principal causes of laryngeal cancer-related mortality in patients with advanced disease.40 Searching for safer and more practical treatment candidates for the effective therapy of laryngeal cancer is therefore of great importance.41 As a natural compound, MI has shown practical potential in medical applications. It has been reported that MI represses cardiac hypertrophy by modulating TLR4/NF-κB signaling in mice.42 MI protects against triptolide-induced hepatotoxicity by activating Nrf2 signaling.43 MI relieves fructose-mediated apoptosis of podocytes by down-regulating miR-193a to enhance WT1.44 However, investigation of MI in cancer progression is limited. A previous study examined the exposure-effect-toxicity relationship of MI combined with docetaxel in non-small cell lung cancer mice.22 MI inhibits chemotherapy-induced liver damage during the initial therapy of patients with gastrointestinal tumors.45 MI decreases paclitaxel-related toxicity in patients with epithelial ovarian cancer treated with cisplatin and paclitaxel.24 The function of MI is associated with its activity to restrain several crucial pathways, such as phospholipase A2/arachidonic acid signaling,46 STAT3 signaling,47 and NF-κB signaling.23,48 In this study, we identified for the first time that MI inhibited cell proliferation, migration, and invasion and enhanced cell apoptosis in laryngeal cancer. MI also reduced tumor growth of laryngeal cancer in vivo. Our data reveal a novel inhibitory function of MI in laryngeal cancer and provide valuable insights into the role of MI in cancer development.\nEMT plays a critical role in the development of laryngeal cancer, especially in metastatic progression, by promoting resistance to apoptotic stimuli, invasion, and motility.49 It was reported that TGF-β-induced lncRNA MIR155HG promoted EMT of laryngeal squamous cell carcinoma by regulating the miR-155/SOX10 axis.50 YAP modulates Wnt/β-catenin signaling and the EMT program of laryngeal cancer.51 Aberrant methylation and down-regulation of ZNF667 and ZNF667-AS1 enhance EMT of laryngeal carcinoma.52 Meanwhile, MI was reported to alleviate high fructose-induced liver fibrosis and EMT by enhancing miR-375-3p to repress TGF-β1/Smad and JAK2/STAT3 signaling in rats.25 Our data demonstrated that MI attenuated EMT of laryngeal cancer by modulating Twist/NF-κB signaling, providing valuable evidence that MI exerts an inhibitory effect on EMT in cancer progression.\n
As a critical EMT transcription factor, Twist regulates the EMT program during cancer development.53 The relationship between Twist and NF-κB signaling in the modulation of EMT during cancer progression is well established. It has been reported that exposure to TGF-β combined with TNF-α promotes tumorigenesis by modulating NF-κB/Twist signaling in vitro.54 Antrodia salmonea represses invasion and metastasis by modifying EMT via NF-κB/Twist signaling in triple-negative breast cancer cells.55 Twist is involved in the mechanism by which ursolic acid restrains EMT of gastric cancer through Axl/NF-κB signaling.56 NF-κB activation by the RANKL/RANK pathway enhances the expression of Snail and Twist and promotes EMT of mammary cancer cells.57 Twist plays a crucial role in the effects of NF-κB and HIF-1α signaling on hypoxia-induced chemoresistance and EMT in pancreatic cancer.58 MiR-153 depletion down-regulates the expression of metastasis-associated family member 3 and Twist family BHLH transcription factor 1 in laryngeal squamous carcinoma cells.59 The expression of TWIST is remarkably down-regulated in response to paclitaxel, and TWIST may play a crucial role in paclitaxel-related laryngeal cancer cell apoptosis.60 TWIST is also involved in the mechanism by which TrkB promotes metastasis of laryngeal cancer through activation of PI3K/AKT signaling.61 In the present study, we revealed that Twist was significantly elevated in laryngeal cancer tissues and in human laryngeal carcinoma HEP-2 cells, suggesting that Twist may play a critical role in the development of laryngeal cancer. NF-κB/Twist signaling was involved in the mechanism by which MI inhibited EMT of laryngeal cancer, providing new evidence for the role of Twist in cancer progression.", "In conclusion, we discovered that magnesium isoglycyrrhizinate attenuated the progression of laryngeal cancer in vitro and in vivo. Magnesium isoglycyrrhizinate exerted an inhibitory effect on epithelial–mesenchymal transition in laryngeal cancer by modulating NF-κB/Twist signaling. Our findings provide new insights into the mechanism by which magnesium isoglycyrrhizinate inhibits laryngeal carcinoma development. Magnesium isoglycyrrhizinate may serve as a potential anti-tumor candidate in clinical treatment strategies for laryngeal cancer." ]
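The survival analysis described in the Results (clinical samples split into high and low groups at the mean Twist expression, then compared by Kaplan-Meier analysis) can be reproduced with standard tooling. The sketch below uses the lifelines package on hypothetical expression and follow-up data; the column names, values, and the log-rank comparison are illustrative assumptions, since the article does not state which implementation or test it used.

```python
# Hedged sketch: split samples at the mean Twist expression and compare overall
# survival with Kaplan-Meier estimates and a log-rank test (lifelines package).
# All values and column names are hypothetical placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "twist":  [1.2, 3.4, 0.8, 2.9, 4.1, 0.5, 2.2, 3.8],   # relative expression
    "months": [60, 14, 58, 22, 9, 61, 35, 16],             # follow-up time
    "event":  [0, 1, 0, 1, 1, 0, 1, 1],                    # 1 = death observed
})

high = df["twist"] >= df["twist"].mean()   # split at the mean, as described in the Methods

kmf_high = KaplanMeierFitter().fit(df.loc[high, "months"], df.loc[high, "event"], label="Twist high")
kmf_low = KaplanMeierFitter().fit(df.loc[~high, "months"], df.loc[~high, "event"], label="Twist low")
print("Median survival (high):", kmf_high.median_survival_time_)
print("Median survival (low): ", kmf_low.median_survival_time_)

result = logrank_test(df.loc[high, "months"], df.loc[~high, "months"],
                      event_observed_A=df.loc[high, "event"],
                      event_observed_B=df.loc[~high, "event"])
print(f"Log-rank P value: {result.p_value:.4f}")
```

Calling plot_survival_function() on each fitter would draw stepwise survival curves analogous to those referenced in Figure 1C.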
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Laryngeal Cancer Clinical Samples", "Cell Culture and Treatment", "Quantitative Reverse Transcription-PCR (qRT-PCR)", "MTT Assays", "EdU Assays", "Colony Formation Assay", "Transwell Assays", "Analysis of Cell Apoptosis", "Luciferase Reporter Gene Assay", "Western Blot Analysis", "Analysis of Tumorigenicity in Nude Mice", "Statistical Analysis", "Results", "Twist is Potentially Correlated with the Progression and Poor Prognosis of Laryngeal Cancer", "Magnesium Isoglycyrrhizinate (MI) Attenuates Cell Proliferation of Laryngeal Cancer in vitro", "MI Inhibits Laryngeal Cancer Cell Migration and Invasion and Enhances Cell Apoptosis in vitro", "MI Inhibits Transcriptional Activation and the Expression of NF-κB and Twist and Alleviates EMT in Laryngeal Cancer Cells", "MI Inhibits Tumor Growth and EMT of Laryngeal Cancer via the Twist/NF-κB Signaling in vivo", "Discussion", "Conclusion" ]
[ "Laryngeal cancer is one of the most prevalent malignancies of the respiratory system, accounting for 26% to 30% of head and neck cancers.1,2 Almost 95% of the histologic pathogeny of laryngeal cancer is laryngeal squamous carcinoma, and the survival incidence of patients is low.3 Despite the development of new strategies in surgery, chemotherapy, and radiation, the targeted treatment of laryngeal cancer is still a challenge.4 Thus, identification of safe and effective treatment candidates for laryngeal cancer is urgently needed. Epithelial–mesenchymal transition (EMT) serves as a cellular program, in which cells drop their epithelial features and gain mesenchymal characteristics.5,6 EMT is correlated with multiple tumor progressions, such as resistance to therapy, blood intravasation, tumor initiation, tumor cell migration, tumor stemness, malignant progression, and metastasis.7–9 As a critical process of cancer development, EMT contributes to the development of laryngeal cancer. It has been reported that the combination of photodynamic therapy and carboplatin suppresses the expression of MMP-2/MMP-9 and EMT of laryngeal cancer by ROS-inhibited MEK/ERK signaling.10 EZH2 increases metastasis and aggression of laryngeal squamous cell carcinoma through the EMT program by modulating H3K27me3.11 However, investigation about the inhibitory candidates of EMT in laryngeal cancer remains limited.\nNutraceutical agents display a unique therapeutic activity to treat diseases such as inflammation and cancer.12–17 Glycyrrhizic acid (GA) is extracted from the roots of licorice and serves as a major component of licorice, presenting multiple biomedical activities, such as anti-oxidant and anti-inflammatory.18 Magnesium isoglycyrrhizinate (MI), refined from GA, is a18-α-GA stereoisomer magnesium salt and demonstrates a better activity than 18-β-GA.19 As a natural and safe compound, MI shows many biomedical activities such as anti-inflammation capacities,20 anti-apoptosis21 and anti-tumor.22 It has been reported that MI inhibits fructose-modulated lipid metabolism disorder and activation of the NF-κB/NLRP3 inflammasome.23 MI reduces paclitaxel in patients with epithelial ovarian cancer managed with cisplatin and paclitaxel.24 Moreover, MI attenuates high fructose-induced liver fibrosis and EMT by upregulating miR-375-3p to overcome the TGF-β1/Smad pathway and JAK2/STAT3 signaling.25 However, the role of MI in cancer development is unknown. The effect of MI on the progression and EMT of laryngeal cancer remains unreported.\nMany transcription factors, such as Twist, serve as molecule switches in EMT progression.26 Twist is a crucial transcription factor for the modulating of EMT, which advances cell invasion, migration, and cancer metastasis, conferring tumor cells with stem cell-like properties and providing therapy resistance.27,28 The function of Twist in EMT of cancer development has been well reported. 
Disrupting the diacetylated Twist represses the progression of Basal-like breast cancer.29 The activation of Twist promotes progression and EMT of breast cancer.30 The inhibition of Twist limits the stem cell features and EMT of prostate cancer.31 Twist serves as a critical factor in promoting metastasis of pancreatic cancer.32 Besides, tumor necrosis factor α provokes EMT of hypopharyngeal cancer and induces metastasis through NF-κB signaling-regulated expression of Twist.33 NF-κB/Twist signaling is involved in the mechanism of Chysin inhibiting stem cell characteristics of ovarian cancer.34 Repression of the NF-κB/Twist axis decreases the stemness features of lung cancer stem cell.35 Down-regulation of TWIST reduces invasion and migration of laryngeal carcinoma cells by controlling the expression of N-cadherin and E-cadherin.36 The expression of Twist displays the clinical significance of laryngeal cancer.37 However, whether NF-κB and Twist are involved in the biomedical activities of MI is unclear.\nIn this study, we aimed to explore the function of MI in the development and EMT process of laryngeal cancer. We identified a novel inhibitory effect of MI in the progression and EMT of laryngeal cancer by regulating the NF-κB/Twist signaling.", " Laryngeal Cancer Clinical Samples A total of 40 laryngeal cancer clinical samples used in this study were obtained from the Second Affiliated Hospital, Harbin Medical University between June 2016 and August 2018. All the patients were diagnosed by clinical, radiographic, and histopathological analysis. Before surgery, no systemic or local therapy was performed on the subjects. The laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40) obtained from the patients were immediately frozen into liquid nitrogen and stored at −80 °C before use. The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan–Meier survival analysis. The patients and healthy controls provided written informed consent, and that the Ethics Committee of The Second Affiliated Hospital, Harbin Medical University approved this study.\nA total of 40 laryngeal cancer clinical samples used in this study were obtained from the Second Affiliated Hospital, Harbin Medical University between June 2016 and August 2018. All the patients were diagnosed by clinical, radiographic, and histopathological analysis. Before surgery, no systemic or local therapy was performed on the subjects. The laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40) obtained from the patients were immediately frozen into liquid nitrogen and stored at −80 °C before use. The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan–Meier survival analysis. The patients and healthy controls provided written informed consent, and that the Ethics Committee of The Second Affiliated Hospital, Harbin Medical University approved this study.\n Cell Culture and Treatment The human laryngeal carcinoma cells HEP-2 and primary laryngeal epithelial cells LEC-P were purchased from American Type Tissue Culture Collection. Cells were cultured in DMEM (Solarbio, China) containing 10% fetal bovine serum (Gibco, USA), 0.1 mg/mL streptomycin (Solarbio, China) and 100 units/mL penicillin (Solarbio, China) at 37 °C with 5% CO2. 
The Magnesium isoglycyrrhizinate (MI) (purity > 98%) was obtained from Zhengda Tianqing Pharmaceutical Co., Ltd (Jiangsu, China). Cells were treated with MI of indicated dose before further analysis.\nThe human laryngeal carcinoma cells HEP-2 and primary laryngeal epithelial cells LEC-P were purchased from American Type Tissue Culture Collection. Cells were cultured in DMEM (Solarbio, China) containing 10% fetal bovine serum (Gibco, USA), 0.1 mg/mL streptomycin (Solarbio, China) and 100 units/mL penicillin (Solarbio, China) at 37 °C with 5% CO2. The Magnesium isoglycyrrhizinate (MI) (purity > 98%) was obtained from Zhengda Tianqing Pharmaceutical Co., Ltd (Jiangsu, China). Cells were treated with MI of indicated dose before further analysis.\n Quantitative Reverse Transcription-PCR (qRT-PCR) Total RNAs were extracted using TRIZOL (Invitrogen, USA), followed by reverse transcription into cDNA. The qRT-PCR reactions were prepared using the SYBR Real-time PCR I kit (Takara, Japan). GAPDH was used as the internal control. The qRT-PCR experiment was conducted in triplicate. The primer sequences were as follows:\nTwist forward: 5′-CGCTGAACGAGGCATTTGC-3′\nTwist reverse: 5′-CCAGTTTGAGGGTCTGAATC-3′\nSlug forward: 5′-GTGTTTGCAAGATCTGCGGC-3′\nSlug reverse: 5′-GCAGATGAGCCCTCAGATTTGA-3′\nZEB1 forward: 5′-CCCCAGGTGTAAGCGCAGAA-3′\nZEB1 reverse: 5′-TGGCAGGTCATCCTCTGGTACAC-3′\nSnail forward: 5′-CCACACTGGTGAGAAGCCTTTC-3′\nSnail reverse: 5′-GTCTGGAGGTGGGCACGTA-3′\nGAPDH forward: 5′-AAGAAGGTGGTGAAGCAGGC-3′.\nGAPDH reverse: 5′-TCCACCACCCAGTTGCTGTA-3′\nTotal RNAs were extracted using TRIZOL (Invitrogen, USA), followed by reverse transcription into cDNA. The qRT-PCR reactions were prepared using the SYBR Real-time PCR I kit (Takara, Japan). GAPDH was used as the internal control. The qRT-PCR experiment was conducted in triplicate. The primer sequences were as follows:\nTwist forward: 5′-CGCTGAACGAGGCATTTGC-3′\nTwist reverse: 5′-CCAGTTTGAGGGTCTGAATC-3′\nSlug forward: 5′-GTGTTTGCAAGATCTGCGGC-3′\nSlug reverse: 5′-GCAGATGAGCCCTCAGATTTGA-3′\nZEB1 forward: 5′-CCCCAGGTGTAAGCGCAGAA-3′\nZEB1 reverse: 5′-TGGCAGGTCATCCTCTGGTACAC-3′\nSnail forward: 5′-CCACACTGGTGAGAAGCCTTTC-3′\nSnail reverse: 5′-GTCTGGAGGTGGGCACGTA-3′\nGAPDH forward: 5′-AAGAAGGTGGTGAAGCAGGC-3′.\nGAPDH reverse: 5′-TCCACCACCCAGTTGCTGTA-3′\n MTT Assays MTT assays were conducted to measure cell viability of HEP-2 cells. Briefly, about 1 × 104 HEP-2 cells were put into 96 wells and cultured for 12 h. Cells were then added with 10 μL MTT solution (5 mg/mL) (Sigma, USA) and cultured for another 4 h. The culture medium was discarded, and 150 μL/well DMSO (Thermo, USA) was added to the wells. An ELISA browser was used to analyze the absorbance at 570 nm (Bio-Tek EL 800, USA).\nMTT assays were conducted to measure cell viability of HEP-2 cells. Briefly, about 1 × 104 HEP-2 cells were put into 96 wells and cultured for 12 h. Cells were then added with 10 μL MTT solution (5 mg/mL) (Sigma, USA) and cultured for another 4 h. The culture medium was discarded, and 150 μL/well DMSO (Thermo, USA) was added to the wells. An ELISA browser was used to analyze the absorbance at 570 nm (Bio-Tek EL 800, USA).\n EdU Assays The cell proliferation was analyzed by EdU assays using EdU detecting kit (RiboBio, China). Briefly, HEP-2 cells were cultured with EdU for 2 h, followed by fixation with 4% paraformaldehyde at room temperature for 30 min. 
Then, cells were permeabilized with 0.4% Triton X-100 for 10 min and stained with staining cocktail of EdU at room temperature for 30 min in the dark. Next, nuclear of the cells was stained with Hoechst at room temperature for 30 min. Images were analyzed using a fluorescence microscope.\nThe cell proliferation was analyzed by EdU assays using EdU detecting kit (RiboBio, China). Briefly, HEP-2 cells were cultured with EdU for 2 h, followed by fixation with 4% paraformaldehyde at room temperature for 30 min. Then, cells were permeabilized with 0.4% Triton X-100 for 10 min and stained with staining cocktail of EdU at room temperature for 30 min in the dark. Next, nuclear of the cells was stained with Hoechst at room temperature for 30 min. Images were analyzed using a fluorescence microscope.\n Colony Formation Assay About 1 × 103 HEP-2 cells were layered in 6 wells and incubated in DMEM at 37 °C. After two weeks, cells were cleaned with PBS Buffer, made in methanol for 30 min, and dyed with 1% crystal violet. The number of colonies was then calculated.\nAbout 1 × 103 HEP-2 cells were layered in 6 wells and incubated in DMEM at 37 °C. After two weeks, cells were cleaned with PBS Buffer, made in methanol for 30 min, and dyed with 1% crystal violet. The number of colonies was then calculated.\n Transwell Assays Transwell assays were conducted to evaluate the impacts of MI on cell invasion and migration of HEP-2 cells using a Transwell plate (Corning, USA). Briefly, the upper chambers were plated with about 1 × 105 cells. Cells were then solidified using 4% paraformaldehyde and dyed with crystal violet. The invaded and migrated cells were recorded and calculated.\nTranswell assays were conducted to evaluate the impacts of MI on cell invasion and migration of HEP-2 cells using a Transwell plate (Corning, USA). Briefly, the upper chambers were plated with about 1 × 105 cells. Cells were then solidified using 4% paraformaldehyde and dyed with crystal violet. The invaded and migrated cells were recorded and calculated.\n Analysis of Cell Apoptosis Approximately 2 × 105 HEP-2 cells were plated on 6-well dishes. Cell apoptosis was detected using the Annexin V-FITC Apoptosis Detection Kit (CST, USA) following the manufacturer’s instructions. Briefly, cells were collected and washed with binding buffer (BD Biosciences, USA), and then dyed at 25 °C, followed by flow cytometry analysis.\nApproximately 2 × 105 HEP-2 cells were plated on 6-well dishes. Cell apoptosis was detected using the Annexin V-FITC Apoptosis Detection Kit (CST, USA) following the manufacturer’s instructions. Briefly, cells were collected and washed with binding buffer (BD Biosciences, USA), and then dyed at 25 °C, followed by flow cytometry analysis.\n Luciferase Reporter Gene Assay Luciferase reporter gene assay was performed using the Dual-luciferase Reporter Assay System (Promega, USA). Briefly, cells were treated MI as indicated dose, followed by transfection with pGL3-NF-κB and pGL3-Twist using Lipofectamine 3000 (Invitrogen, USA). Luciferase activities were detected and Renilla was used as a normalized control.\nLuciferase reporter gene assay was performed using the Dual-luciferase Reporter Assay System (Promega, USA). Briefly, cells were treated MI as indicated dose, followed by transfection with pGL3-NF-κB and pGL3-Twist using Lipofectamine 3000 (Invitrogen, USA). 
Luciferase activities were detected and Renilla was used as a normalized control.\n Western Blot Analysis Total proteins were extracted from cells or tumor tissues using RIPA buffer (CST, USA). Protein concentrations were measured using the BCA Protein Quantification Kit (Abbkine, USA). Same amount of protein samples was separated by SDS-PAGE (12% polyacrylamide gels), transferred to PVDF membranes (Millipore, USA) in the subsequent step. The membranes were blocked with 5% milk and incubated with the primary antibodies for Twist (1:1000) (Abcam, UK), E-Cadherin (1:1000) (Abcam, UK), occluding (1:1000) (Abcam, UK), vimentin (1:1000) (Abcam, UK), N-cadherin (1:1000) (Abcam, UK), Ki-67 (1:1000) (Abcam, UK), IKK (1:1000) (Abcam, UK), NF-κB p65 (1:1000) (Abcam, UK), NF-κB (1:1000) (Abcam, UK), and β-actin (1:1000) (Abcam, UK) at 4 °C overnight, in which β-actin served as the control. Then, the corresponding second antibodies (1:1000) (Abcam, UK) were used to incubate the membranes at room temperature for 1 h, followed by visualization using an Odyssey CLx Infrared Imaging System. ImageJ software was used to quantify the results.\nTotal proteins were extracted from cells or tumor tissues using RIPA buffer (CST, USA). Protein concentrations were measured using the BCA Protein Quantification Kit (Abbkine, USA). Same amount of protein samples was separated by SDS-PAGE (12% polyacrylamide gels), transferred to PVDF membranes (Millipore, USA) in the subsequent step. The membranes were blocked with 5% milk and incubated with the primary antibodies for Twist (1:1000) (Abcam, UK), E-Cadherin (1:1000) (Abcam, UK), occluding (1:1000) (Abcam, UK), vimentin (1:1000) (Abcam, UK), N-cadherin (1:1000) (Abcam, UK), Ki-67 (1:1000) (Abcam, UK), IKK (1:1000) (Abcam, UK), NF-κB p65 (1:1000) (Abcam, UK), NF-κB (1:1000) (Abcam, UK), and β-actin (1:1000) (Abcam, UK) at 4 °C overnight, in which β-actin served as the control. Then, the corresponding second antibodies (1:1000) (Abcam, UK) were used to incubate the membranes at room temperature for 1 h, followed by visualization using an Odyssey CLx Infrared Imaging System. ImageJ software was used to quantify the results.\n Analysis of Tumorigenicity in Nude Mice The effect of MI on tumor growth in vivo was analyzed in nude mice of Balb/c. Mice were randomly separated into two groups (n = 3). To establish the in vivo tumor model, HEP-2 cells were treated with MI (300 mg/kg) or equal volume of saline. And about 2 × 106 cells were subcutaneously injected into mice. After 7 days of injection, tumor growth was measured every 7 days. The mice were sacrificed after 35 days of injection and tumors were scaled. Tumor volume (V) was observed by estimating the length (L) and width (W) with calipers and measured with the formula (L ×W2) × 0.5. The expression levels of Ki-67 of tumor tissues were measured by immunohistochemical staining with the Ki67 antibody (1:1000) (Abcam, UK). Protein expression levels in tumor tissues were determined by Western blot analysis using Twist (1:1000) (Abcam, UK), E-Cadherin (1:1000) (Abcam, UK), occluding (1:1000) (Abcam, UK), vimentin (1:1000) (Abcam, UK), N-cadherin (1:1000) (Abcam, UK), Ki-67 (1:1000) (Abcam, UK), IKK (1:1000) (Abcam, UK), NF-κB p65 (1:1000) (Abcam, UK), NF-κB (1:1000) (Abcam, UK), and β-actin (1:1000) (Abcam, UK). 
Animal care and experimental procedures in this study were approved by the Animal Ethics Committee of the Second Affiliated Hospital, Harbin Medical University.\nThe effect of MI on tumor growth in vivo was analyzed in nude mice of Balb/c. Mice were randomly separated into two groups (n = 3). To establish the in vivo tumor model, HEP-2 cells were treated with MI (300 mg/kg) or equal volume of saline. And about 2 × 106 cells were subcutaneously injected into mice. After 7 days of injection, tumor growth was measured every 7 days. The mice were sacrificed after 35 days of injection and tumors were scaled. Tumor volume (V) was observed by estimating the length (L) and width (W) with calipers and measured with the formula (L ×W2) × 0.5. The expression levels of Ki-67 of tumor tissues were measured by immunohistochemical staining with the Ki67 antibody (1:1000) (Abcam, UK). Protein expression levels in tumor tissues were determined by Western blot analysis using Twist (1:1000) (Abcam, UK), E-Cadherin (1:1000) (Abcam, UK), occluding (1:1000) (Abcam, UK), vimentin (1:1000) (Abcam, UK), N-cadherin (1:1000) (Abcam, UK), Ki-67 (1:1000) (Abcam, UK), IKK (1:1000) (Abcam, UK), NF-κB p65 (1:1000) (Abcam, UK), NF-κB (1:1000) (Abcam, UK), and β-actin (1:1000) (Abcam, UK). Animal care and experimental procedures in this study were approved by the Animal Ethics Committee of the Second Affiliated Hospital, Harbin Medical University.\n Statistical Analysis Data were presented as mean ± SD, and statistical analysis was performed using SPSS software (version 18.0). The unpaired Student’s t-test was applied for comparing two groups, and one-way ANOVA was applied for comparing multiple groups. P < 0.05 was considered as statistically significant.\nData were presented as mean ± SD, and statistical analysis was performed using SPSS software (version 18.0). The unpaired Student’s t-test was applied for comparing two groups, and one-way ANOVA was applied for comparing multiple groups. P < 0.05 was considered as statistically significant.", "A total of 40 laryngeal cancer clinical samples used in this study were obtained from the Second Affiliated Hospital, Harbin Medical University between June 2016 and August 2018. All the patients were diagnosed by clinical, radiographic, and histopathological analysis. Before surgery, no systemic or local therapy was performed on the subjects. The laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40) obtained from the patients were immediately frozen into liquid nitrogen and stored at −80 °C before use. The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan–Meier survival analysis. The patients and healthy controls provided written informed consent, and that the Ethics Committee of The Second Affiliated Hospital, Harbin Medical University approved this study.", "The human laryngeal carcinoma cells HEP-2 and primary laryngeal epithelial cells LEC-P were purchased from American Type Tissue Culture Collection. Cells were cultured in DMEM (Solarbio, China) containing 10% fetal bovine serum (Gibco, USA), 0.1 mg/mL streptomycin (Solarbio, China) and 100 units/mL penicillin (Solarbio, China) at 37 °C with 5% CO2. The Magnesium isoglycyrrhizinate (MI) (purity > 98%) was obtained from Zhengda Tianqing Pharmaceutical Co., Ltd (Jiangsu, China). 
Cells were treated with MI of indicated dose before further analysis.", "Total RNAs were extracted using TRIZOL (Invitrogen, USA), followed by reverse transcription into cDNA. The qRT-PCR reactions were prepared using the SYBR Real-time PCR I kit (Takara, Japan). GAPDH was used as the internal control. The qRT-PCR experiment was conducted in triplicate. The primer sequences were as follows:\nTwist forward: 5′-CGCTGAACGAGGCATTTGC-3′\nTwist reverse: 5′-CCAGTTTGAGGGTCTGAATC-3′\nSlug forward: 5′-GTGTTTGCAAGATCTGCGGC-3′\nSlug reverse: 5′-GCAGATGAGCCCTCAGATTTGA-3′\nZEB1 forward: 5′-CCCCAGGTGTAAGCGCAGAA-3′\nZEB1 reverse: 5′-TGGCAGGTCATCCTCTGGTACAC-3′\nSnail forward: 5′-CCACACTGGTGAGAAGCCTTTC-3′\nSnail reverse: 5′-GTCTGGAGGTGGGCACGTA-3′\nGAPDH forward: 5′-AAGAAGGTGGTGAAGCAGGC-3′.\nGAPDH reverse: 5′-TCCACCACCCAGTTGCTGTA-3′", "MTT assays were conducted to measure cell viability of HEP-2 cells. Briefly, about 1 × 104 HEP-2 cells were put into 96 wells and cultured for 12 h. Cells were then added with 10 μL MTT solution (5 mg/mL) (Sigma, USA) and cultured for another 4 h. The culture medium was discarded, and 150 μL/well DMSO (Thermo, USA) was added to the wells. An ELISA browser was used to analyze the absorbance at 570 nm (Bio-Tek EL 800, USA).", "The cell proliferation was analyzed by EdU assays using EdU detecting kit (RiboBio, China). Briefly, HEP-2 cells were cultured with EdU for 2 h, followed by fixation with 4% paraformaldehyde at room temperature for 30 min. Then, cells were permeabilized with 0.4% Triton X-100 for 10 min and stained with staining cocktail of EdU at room temperature for 30 min in the dark. Next, nuclear of the cells was stained with Hoechst at room temperature for 30 min. Images were analyzed using a fluorescence microscope.", "About 1 × 103 HEP-2 cells were layered in 6 wells and incubated in DMEM at 37 °C. After two weeks, cells were cleaned with PBS Buffer, made in methanol for 30 min, and dyed with 1% crystal violet. The number of colonies was then calculated.", "Transwell assays were conducted to evaluate the impacts of MI on cell invasion and migration of HEP-2 cells using a Transwell plate (Corning, USA). Briefly, the upper chambers were plated with about 1 × 105 cells. Cells were then solidified using 4% paraformaldehyde and dyed with crystal violet. The invaded and migrated cells were recorded and calculated.", "Approximately 2 × 105 HEP-2 cells were plated on 6-well dishes. Cell apoptosis was detected using the Annexin V-FITC Apoptosis Detection Kit (CST, USA) following the manufacturer’s instructions. Briefly, cells were collected and washed with binding buffer (BD Biosciences, USA), and then dyed at 25 °C, followed by flow cytometry analysis.", "Luciferase reporter gene assay was performed using the Dual-luciferase Reporter Assay System (Promega, USA). Briefly, cells were treated MI as indicated dose, followed by transfection with pGL3-NF-κB and pGL3-Twist using Lipofectamine 3000 (Invitrogen, USA). Luciferase activities were detected and Renilla was used as a normalized control.", "Total proteins were extracted from cells or tumor tissues using RIPA buffer (CST, USA). Protein concentrations were measured using the BCA Protein Quantification Kit (Abbkine, USA). Same amount of protein samples was separated by SDS-PAGE (12% polyacrylamide gels), transferred to PVDF membranes (Millipore, USA) in the subsequent step. 
The membranes were blocked with 5% milk and incubated with the primary antibodies for Twist (1:1000) (Abcam, UK), E-Cadherin (1:1000) (Abcam, UK), occluding (1:1000) (Abcam, UK), vimentin (1:1000) (Abcam, UK), N-cadherin (1:1000) (Abcam, UK), Ki-67 (1:1000) (Abcam, UK), IKK (1:1000) (Abcam, UK), NF-κB p65 (1:1000) (Abcam, UK), NF-κB (1:1000) (Abcam, UK), and β-actin (1:1000) (Abcam, UK) at 4 °C overnight, in which β-actin served as the control. Then, the corresponding second antibodies (1:1000) (Abcam, UK) were used to incubate the membranes at room temperature for 1 h, followed by visualization using an Odyssey CLx Infrared Imaging System. ImageJ software was used to quantify the results.", "The effect of MI on tumor growth in vivo was analyzed in nude mice of Balb/c. Mice were randomly separated into two groups (n = 3). To establish the in vivo tumor model, HEP-2 cells were treated with MI (300 mg/kg) or equal volume of saline. And about 2 × 106 cells were subcutaneously injected into mice. After 7 days of injection, tumor growth was measured every 7 days. The mice were sacrificed after 35 days of injection and tumors were scaled. Tumor volume (V) was observed by estimating the length (L) and width (W) with calipers and measured with the formula (L ×W2) × 0.5. The expression levels of Ki-67 of tumor tissues were measured by immunohistochemical staining with the Ki67 antibody (1:1000) (Abcam, UK). Protein expression levels in tumor tissues were determined by Western blot analysis using Twist (1:1000) (Abcam, UK), E-Cadherin (1:1000) (Abcam, UK), occluding (1:1000) (Abcam, UK), vimentin (1:1000) (Abcam, UK), N-cadherin (1:1000) (Abcam, UK), Ki-67 (1:1000) (Abcam, UK), IKK (1:1000) (Abcam, UK), NF-κB p65 (1:1000) (Abcam, UK), NF-κB (1:1000) (Abcam, UK), and β-actin (1:1000) (Abcam, UK). Animal care and experimental procedures in this study were approved by the Animal Ethics Committee of the Second Affiliated Hospital, Harbin Medical University.", "Data were presented as mean ± SD, and statistical analysis was performed using SPSS software (version 18.0). The unpaired Student’s t-test was applied for comparing two groups, and one-way ANOVA was applied for comparing multiple groups. P < 0.05 was considered as statistically significant.", " Twist is Potentially Correlated with the Progression and Poor Prognosis of Laryngeal Cancer To assess the potential correlation of EMT-related transcription factors with laryngeal cancer, the expression of Twist, Slug, ZEB1, and Snail in the clinical laryngeal samples and laryngeal cells were measured by qPCR assays. The data showed that the expression levels of Twist, Slug, ZEB1, and Snail were significantly elevated in laryngeal cancer tissues (n = 40) compared to that in normal tissues (n = 40), among which Twist displayed the highest expression levels (P < 0.01) (Figure 1A). Meanwhile, the expression levels of Twist, Slug, ZEB1, and Snail were also enhanced in human laryngeal carcinoma HEP-2 cells compared with that in primary laryngeal epithelial cells (LEC-P), and Twist exerted the highest expression levels among these transcription factors (P < 0.001) (Figure 1B), implying that Twist is closely associated with the development of laryngeal cancer. To determine whether Twist was able to serve as the potential biomarker for patients with laryngeal cancer, we separated the clinical laryngeal cancer samples into two groups according to the mean expression of Twist. 
We observed that high expression levels of Twist was remarkably correlated with the poor overall survival (P < 0.01) (Figure 1C), suggesting that Twist may play a crucial role in the progression of laryngeal cancer.Figure 1Twist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in the laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan-Meier survival analysis. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01, *** P < 0.001.\nTwist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in the laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan-Meier survival analysis. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01, *** P < 0.001.\nTo assess the potential correlation of EMT-related transcription factors with laryngeal cancer, the expression of Twist, Slug, ZEB1, and Snail in the clinical laryngeal samples and laryngeal cells were measured by qPCR assays. The data showed that the expression levels of Twist, Slug, ZEB1, and Snail were significantly elevated in laryngeal cancer tissues (n = 40) compared to that in normal tissues (n = 40), among which Twist displayed the highest expression levels (P < 0.01) (Figure 1A). Meanwhile, the expression levels of Twist, Slug, ZEB1, and Snail were also enhanced in human laryngeal carcinoma HEP-2 cells compared with that in primary laryngeal epithelial cells (LEC-P), and Twist exerted the highest expression levels among these transcription factors (P < 0.001) (Figure 1B), implying that Twist is closely associated with the development of laryngeal cancer. To determine whether Twist was able to serve as the potential biomarker for patients with laryngeal cancer, we separated the clinical laryngeal cancer samples into two groups according to the mean expression of Twist. We observed that high expression levels of Twist was remarkably correlated with the poor overall survival (P < 0.01) (Figure 1C), suggesting that Twist may play a crucial role in the progression of laryngeal cancer.Figure 1Twist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in the laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan-Meier survival analysis. Data are presented as mean ± SD. 
Statistic significant differences were indicated: ** P < 0.01, *** P < 0.001.\nTwist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in the laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan-Meier survival analysis. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01, *** P < 0.001.\n Magnesium Isoglycyrrhizinate (MI) Attenuates Cell Proliferation of Laryngeal Cancer in vitro Then, the role of MI in the modulation of laryngeal cancer proliferation was investigated, and the structure formula of MI is shown in Figure 2A. To evaluate the effect of MI on the progression of laryngeal cancer, MTT assay, colony formation assay, and EdU assay were performed in HEP-2 cells treated with MI. MTT assay demonstrated that MI significantly inhibited cell viability in a dose-dependent manner, and the half-maximal inhibitory concentrations (IC50) of MI in HEP-2 cells was 3.22 mg/mL (P < 0.01) (Figure 2B). Hence, we selected the MI dose of 3.22 mg/mL in the following experiments. Similarly, colony formation assay showed that the colony numbers of HEP-2 cells were remarkably reduced by MI treatment (P < 0.01) (Figure 2C). Besides, EdU assay showed that MI notably decreased number of EdU-positive cells (P < 0.01) (Figure 2D). These data suggested that MI was able to inhibit cell proliferation of laryngeal cancer.Figure 2Magnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structure formula of MI was shown. (B) The cell viability was analyzed by MTT assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell proliferation was measured by colony formation assays in the HEP-2 cells treated with MI at indicated dosage. (D) The cell proliferation was examined by EdU assays in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.\nMagnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structure formula of MI was shown. (B) The cell viability was analyzed by MTT assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell proliferation was measured by colony formation assays in the HEP-2 cells treated with MI at indicated dosage. (D) The cell proliferation was examined by EdU assays in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.\nThen, the role of MI in the modulation of laryngeal cancer proliferation was investigated, and the structure formula of MI is shown in Figure 2A. To evaluate the effect of MI on the progression of laryngeal cancer, MTT assay, colony formation assay, and EdU assay were performed in HEP-2 cells treated with MI. MTT assay demonstrated that MI significantly inhibited cell viability in a dose-dependent manner, and the half-maximal inhibitory concentrations (IC50) of MI in HEP-2 cells was 3.22 mg/mL (P < 0.01) (Figure 2B). Hence, we selected the MI dose of 3.22 mg/mL in the following experiments. 
Similarly, colony formation assay showed that the colony numbers of HEP-2 cells were remarkably reduced by MI treatment (P < 0.01) (Figure 2C). Besides, EdU assay showed that MI notably decreased number of EdU-positive cells (P < 0.01) (Figure 2D). These data suggested that MI was able to inhibit cell proliferation of laryngeal cancer.Figure 2Magnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structure formula of MI was shown. (B) The cell viability was analyzed by MTT assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell proliferation was measured by colony formation assays in the HEP-2 cells treated with MI at indicated dosage. (D) The cell proliferation was examined by EdU assays in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.\nMagnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structure formula of MI was shown. (B) The cell viability was analyzed by MTT assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell proliferation was measured by colony formation assays in the HEP-2 cells treated with MI at indicated dosage. (D) The cell proliferation was examined by EdU assays in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.\n MI Inhibits Laryngeal Cancer Cell Migration and Invasion and Enhances Cell Apoptosis in vitro The role of MI in HEP-2 cell migration, invasion, and apoptosis was then evaluated. Transwell assays revealed that MI treatment remarkably reduced cell migration (P < 0.01) (Figure 3A). Similarly, cell invasion was significantly decreased by MI in HEP-2 cells (P < 0.01) (Figure 3B). Moreover, flow cytometry analysis showed that cell apoptosis was notably increased by MI treatment in the system (P < 0.01) (Figure 3C). These results indicated that MI inhibited laryngeal cancer cell migration and invasion and enhanced cell apoptosis in vitro.Figure 3MI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) The cell migration was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (B) The cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell apoptosis was measure by flow cytometry analysis in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.\nMI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) The cell migration was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (B) The cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell apoptosis was measure by flow cytometry analysis in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.\nThe role of MI in HEP-2 cell migration, invasion, and apoptosis was then evaluated. Transwell assays revealed that MI treatment remarkably reduced cell migration (P < 0.01) (Figure 3A). Similarly, cell invasion was significantly decreased by MI in HEP-2 cells (P < 0.01) (Figure 3B). 
Moreover, flow cytometry analysis showed that cell apoptosis was notably increased by MI treatment in the system (P < 0.01) (Figure 3C). These results indicated that MI inhibited laryngeal cancer cell migration and invasion and enhanced cell apoptosis in vitro.Figure 3MI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) The cell migration was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (B) The cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell apoptosis was measure by flow cytometry analysis in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.\nMI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) The cell migration was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (B) The cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell apoptosis was measure by flow cytometry analysis in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01.\n MI Inhibits Transcriptional Activation and the Expression of NF-κB and Twist and Alleviates EMT in Laryngeal Cancer Cells Next, the underlying mechanism of the effect of MI on the development of laryngeal cancer in HEP-2 cells was further explored. It showed that MI treatment significantly reduced the luciferase activities of NF-κB in the cells (P < 0.01) (Figure 4A), suggesting that MI may inhibit NF-κB at transcriptional level. Meanwhile, dual-luciferase reporter gene assays further revealed that MI treatment remarkably decreased the transcriptional activities of Twist in the cells (P < 0.01) (Figure 4B). Furthermore, Western blot analysis demonstrated that the total expression (Figure 4C) and nucleus expression (Figure 4D) of NF-κB and Twist were significantly down-regulated by MI in HEP-2 cells (P < 0.01), suggesting that MI may inhibit Twist by modulating NF-κB. Moreover, to assess the effect of MI on the EMT of laryngeal cancer, the expression of EMT markers, including E-Cadherin, occluding, vimentin, and N-cadherin, in HEP-2 cells treated with MI were measured. Our data showed that the expression levels of E-Cadherin and occluding were enhanced while the expression levels of vimentin and N-cadherin were reduced by MI treatment in HEP-2 cells (P < 0.01) (Figure 4E), suggesting that MI can inhibit EMT of laryngeal cancer.Figure 4MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activities of NF-κB were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (B) The luciferase activities of Twist were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (C) The total expression of NF-κB, Twist, and β-actin was measured by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (D) The nucleus expression of NF-κB, Twist, and histone H3 was tested by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. 
(E) The expression levels of E-cadherin, occludin, vimentin, N-cadherin, and β-actin were analyzed by Western blot in the HEP-2 cells treated with MI at the indicated doses. The results of Western blot analysis were quantified with ImageJ software. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01. MI Inhibits Tumor Growth and EMT of Laryngeal Cancer via the Twist/NF-κB Signaling in vivo The effect of MI on laryngeal cancer development was further investigated in vivo. Tumorigenicity analysis was conducted in nude mice injected with HEP-2 cells treated with MI or the corresponding control. MI treatment significantly reduced tumor size (Figure 5A), tumor weight (P < 0.01) (Figure 5B), tumor volume (P < 0.01) (Figure 5C), and the expression levels of Ki-67 in tumor tissues of the mice (Figure 5D), suggesting that MI inhibited tumor growth of laryngeal cancer in vivo. Figure 5. MI inhibits tumor growth of laryngeal cancer in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by tumorigenicity assay in nude mice. HEP-2 cells were treated with MI (300 mg/kg) or an equal volume of saline and injected into the nude mice (n = 3). (A) Representative images of dissected tumors from the nude mice are presented. (B) The average tumor weight was calculated and is shown. (C) The average tumor volume was calculated and is shown. (D) The expression levels of Ki-67 in the tumor tissues were measured by immunohistochemical staining. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01. Moreover, our data showed that MI treatment significantly enhanced the expression levels of E-cadherin and occludin, whereas it reduced the expression levels of vimentin and N-cadherin in tumor tissues of the mice (P < 0.01) (Figure 6A and B), indicating that MI attenuated EMT of laryngeal cancer in vivo. In addition, the expression levels of Twist, Ki-67, and the NF-κB signaling proteins p65 and IKK were significantly decreased by MI in tumor tissues of the mice (P < 0.01) (Figure 6C and D), implying that MI may inhibit EMT of laryngeal cancer via the Twist/NF-κB signaling. Figure 6. MI inhibits EMT of laryngeal cancer via Twist/NF-κB signaling in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by nude mouse tumorigenicity assay. HEP-2 cells were treated with MI (300 mg/kg) or an equal volume of saline and injected into the nude mice (n = 3). (A) The expression levels of E-cadherin, occludin, vimentin, N-cadherin, and β-actin were analyzed by Western blot in the tumor tissues of the mice. (B) The results of Western blot analysis in (A) were quantified with ImageJ software. (C) The expression levels of NF-κB, NF-κB p65, Twist, Ki-67, IKK, and β-actin were measured by Western blot in the tumor tissues of the mice. (D) The results of Western blot analysis in (C) were quantified with ImageJ software. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01.",
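The tumor volumes summarized above were obtained from caliper measurements using the formula given in the Methods, V = (L × W²) × 0.5. As a minimal illustration, the Python sketch below applies that formula to invented caliper readings and reports each group as mean ± SD; the numbers are hypothetical and are not the study's data.

```python
import statistics

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Tumor volume from caliper readings, V = (L x W^2) x 0.5, in mm^3."""
    return length_mm * width_mm ** 2 * 0.5

# Hypothetical caliper readings (length, width) in mm for each mouse (n = 3 per group).
saline_readings = [(14.0, 11.0), (15.5, 12.0), (13.8, 10.5)]
mi_readings = [(9.5, 7.0), (10.2, 7.8), (8.9, 6.9)]

for label, readings in [("saline", saline_readings), ("MI 300 mg/kg", mi_readings)]:
    volumes = [tumor_volume(l, w) for l, w in readings]
    print(f"{label}: {statistics.mean(volumes):.1f} ± {statistics.stdev(volumes):.1f} mm^3")
```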
"To assess the potential correlation of EMT-related transcription factors with laryngeal cancer, the expression of Twist, Slug, ZEB1, and Snail in clinical laryngeal samples and laryngeal cell lines was measured by qPCR assays. The data showed that the expression levels of Twist, Slug, ZEB1, and Snail were significantly elevated in laryngeal cancer tissues (n = 40) compared with normal tissues (n = 40), and Twist displayed the highest expression level (P < 0.01) (Figure 1A). Meanwhile, the expression levels of Twist, Slug, ZEB1, and Snail were also enhanced in human laryngeal carcinoma HEP-2 cells compared with primary laryngeal epithelial cells (LEC-P), and Twist again showed the highest expression among these transcription factors (P < 0.001) (Figure 1B), implying that Twist is closely associated with the development of laryngeal cancer. To determine whether Twist could serve as a potential biomarker for patients with laryngeal cancer, we separated the clinical laryngeal cancer samples into two groups according to the mean expression of Twist. We observed that high expression of Twist was remarkably correlated with poor overall survival (P < 0.01) (Figure 1C), suggesting that Twist may play a crucial role in the progression of laryngeal cancer. Figure 1. Twist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist, and overall survival was analyzed by Kaplan–Meier survival analysis. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01, ***P < 0.001.", "Then, the role of MI in the modulation of laryngeal cancer cell proliferation was investigated; the structural formula of MI is shown in Figure 2A. To evaluate the effect of MI on the progression of laryngeal cancer, MTT, colony formation, and EdU assays were performed in HEP-2 cells treated with MI. The MTT assay demonstrated that MI significantly inhibited cell viability in a dose-dependent manner, and the half-maximal inhibitory concentration (IC50) of MI in HEP-2 cells was 3.22 mg/mL (P < 0.01) (Figure 2B). Hence, we selected the MI dose of 3.22 mg/mL for the following experiments. Similarly, the colony formation assay showed that the colony numbers of HEP-2 cells were remarkably reduced by MI treatment (P < 0.01) (Figure 2C). In addition, the EdU assay showed that MI notably decreased the number of EdU-positive cells (P < 0.01) (Figure 2D). These data suggested that MI is able to inhibit cell proliferation of laryngeal cancer. Figure 2. Magnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structural formula of MI is shown. (B) Cell viability was analyzed by MTT assays in the HEP-2 cells treated with MI at the indicated doses. (C) Cell proliferation was measured by colony formation assays in the HEP-2 cells treated with MI at the indicated doses. (D) Cell proliferation was examined by EdU assays in the HEP-2 cells treated with MI at the indicated doses. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01.",
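The IC50 of 3.22 mg/mL reported above comes from the MTT dose–response data. A common way to obtain such a value is to fit a four-parameter logistic curve to viability versus dose; the paper does not state which fitting model was used, so the SciPy sketch below, run on invented viability readings, is only an illustration of the idea.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical MTT readings: MI dose (mg/mL) vs. relative viability (%).
dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
viability = np.array([95.0, 88.0, 71.0, 44.0, 21.0, 9.0])

params, _ = curve_fit(four_pl, dose, viability, p0=[0.0, 100.0, 3.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.2f} mg/mL (Hill slope {hill:.2f})")
```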
"The role of MI in HEP-2 cell migration, invasion, and apoptosis was then evaluated. Transwell assays revealed that MI treatment remarkably reduced cell migration (P < 0.01) (Figure 3A). Similarly, cell invasion was significantly decreased by MI in HEP-2 cells (P < 0.01) (Figure 3B). Moreover, flow cytometry analysis showed that cell apoptosis was notably increased by MI treatment (P < 0.01) (Figure 3C). These results indicated that MI inhibited laryngeal cancer cell migration and invasion and enhanced cell apoptosis in vitro. Figure 3. MI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) Cell migration was examined by transwell assays in the HEP-2 cells treated with MI at the indicated doses. (B) Cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at the indicated doses. (C) Cell apoptosis was measured by flow cytometry analysis in the HEP-2 cells treated with MI at the indicated doses. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01.",
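Flow cytometry with Annexin V-FITC/PI staining (described in the Methods) conventionally reports apoptosis as the fraction of events in the early- and late-apoptotic quadrants. The paper does not give its gating details, so the snippet below is only a schematic of that arithmetic with invented event counts.

```python
# Hypothetical quadrant counts from an Annexin V-FITC / PI plot for one sample.
events = {
    "viable":          8200,  # Annexin V-, PI-
    "early_apoptotic":  950,  # Annexin V+, PI-
    "late_apoptotic":   620,  # Annexin V+, PI+
    "necrotic":         230,  # Annexin V-, PI+
}

total = sum(events.values())
apoptotic = events["early_apoptotic"] + events["late_apoptotic"]
print(f"Apoptosis rate: {100.0 * apoptotic / total:.1f}% of {total} events")
```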
"Next, the underlying mechanism of the effect of MI on the development of laryngeal cancer in HEP-2 cells was further explored. MI treatment significantly reduced the luciferase activities of NF-κB in the cells (P < 0.01) (Figure 4A), suggesting that MI may inhibit NF-κB at the transcriptional level. Meanwhile, dual-luciferase reporter gene assays further revealed that MI treatment remarkably decreased the transcriptional activities of Twist in the cells (P < 0.01) (Figure 4B). Furthermore, Western blot analysis demonstrated that both the total expression (Figure 4C) and the nuclear expression (Figure 4D) of NF-κB and Twist were significantly down-regulated by MI in HEP-2 cells (P < 0.01), suggesting that MI may inhibit Twist by modulating NF-κB. Moreover, to assess the effect of MI on the EMT of laryngeal cancer, the expression of EMT markers, including E-cadherin, occludin, vimentin, and N-cadherin, was measured in HEP-2 cells treated with MI. Our data showed that the expression levels of E-cadherin and occludin were enhanced while the expression levels of vimentin and N-cadherin were reduced by MI treatment in HEP-2 cells (P < 0.01) (Figure 4E), suggesting that MI can inhibit EMT of laryngeal cancer. Figure 4. MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activities of NF-κB were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at the indicated doses. (B) The luciferase activities of Twist were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at the indicated doses. (C) The total expression of NF-κB, Twist, and β-actin was measured by Western blot in the HEP-2 cells treated with MI at the indicated doses. The results of Western blot analysis were quantified with ImageJ software. (D) The nuclear expression of NF-κB, Twist, and histone H3 was tested by Western blot in the HEP-2 cells treated with MI at the indicated doses. The results of Western blot analysis were quantified with ImageJ software. (E) The expression levels of E-cadherin, occludin, vimentin, N-cadherin, and β-actin were analyzed by Western blot in the HEP-2 cells treated with MI at the indicated doses. The results of Western blot analysis were quantified with ImageJ software. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01.",
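In the dual-luciferase assays above, the firefly signal from the pGL3-NF-κB or pGL3-Twist reporter is normalized to the co-transfected Renilla control (as stated in the Methods), and treated wells are then expressed relative to the untreated control. The sketch below shows that normalization on invented luminescence readings; it is not the authors' analysis script.

```python
from statistics import mean

def relative_activity(firefly, renilla):
    """Firefly/Renilla ratios for replicate wells of one condition."""
    return [f / r for f, r in zip(firefly, renilla)]

# Hypothetical luminescence readings (arbitrary units), three replicate wells each.
control = relative_activity([12500, 13100, 12800], [5200, 5400, 5300])
mi_treated = relative_activity([5600, 5900, 5400], [5100, 5300, 5000])

fold_change = mean(mi_treated) / mean(control)
print(f"NF-κB reporter activity after MI: {fold_change:.2f}-fold of control")
```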
"The effect of MI on laryngeal cancer development was further investigated in vivo. Tumorigenicity analysis was conducted in nude mice injected with HEP-2 cells treated with MI or the corresponding control. MI treatment significantly reduced tumor size (Figure 5A), tumor weight (P < 0.01) (Figure 5B), tumor volume (P < 0.01) (Figure 5C), and the expression levels of Ki-67 in tumor tissues of the mice (Figure 5D), suggesting that MI inhibited tumor growth of laryngeal cancer in vivo. Figure 5. MI inhibits tumor growth of laryngeal cancer in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by tumorigenicity assay in nude mice. HEP-2 cells were treated with MI (300 mg/kg) or an equal volume of saline and injected into the nude mice (n = 3). (A) Representative images of dissected tumors from the nude mice are presented. (B) The average tumor weight was calculated and is shown. (C) The average tumor volume was calculated and is shown. (D) The expression levels of Ki-67 in the tumor tissues were measured by immunohistochemical staining. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01. Moreover, our data showed that MI treatment significantly enhanced the expression levels of E-cadherin and occludin, whereas it reduced the expression levels of vimentin and N-cadherin in tumor tissues of the mice (P < 0.01) (Figure 6A and B), indicating that MI attenuated EMT of laryngeal cancer in vivo. In addition, the expression levels of Twist, Ki-67, and the NF-κB signaling proteins p65 and IKK were significantly decreased by MI in tumor tissues of the mice (P < 0.01) (Figure 6C and D), implying that MI may inhibit EMT of laryngeal cancer via the Twist/NF-κB signaling. Figure 6. MI inhibits EMT of laryngeal cancer via Twist/NF-κB signaling in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by nude mouse tumorigenicity assay. HEP-2 cells were treated with MI (300 mg/kg) or an equal volume of saline and injected into the nude mice (n = 3). (A) The expression levels of E-cadherin, occludin, vimentin, N-cadherin, and β-actin were analyzed by Western blot in the tumor tissues of the mice. (B) The results of Western blot analysis in (A) were quantified with ImageJ software. (C) The expression levels of NF-κB, NF-κB p65, Twist, Ki-67, IKK, and β-actin were measured by Western blot in the tumor tissues of the mice. (D) The results of Western blot analysis in (C) were quantified with ImageJ software. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01.",
"Laryngeal cancer is the second most prevalent head and neck malignancy.38 Patients with laryngeal cancer have poor survival rates and prognosis, and many cases still suffer from recurrence.39 Although improvements have been made in chemotherapy, radiotherapy, and surgery, invasion and metastasis remain the principal causes of laryngeal cancer-related mortality in patients with advanced disease.40 Searching for safer and more practical treatment candidates for the effective therapy of laryngeal cancer is therefore of great importance.41 As a natural compound, MI has shown practical potential in medical applications. It has been reported that MI represses cardiac hypertrophy by modulating the TLR4/NF-κB signaling in mice.42 MI protects against triptolide-induced hepatotoxicity through activating the Nrf2 signaling.43 MI relieves fructose-mediated apoptosis of podocytes by down-regulating miR-193a to enhance WT1.44 However, the investigation of MI in cancer progression is limited. A previous study characterized the exposure–effect–toxicity relationship of MI combined with docetaxel in non-small cell lung cancer mice.22 MI inhibits chemotherapy-induced liver damage during the initial therapy of gastrointestinal tumor patients.45 MI alleviates paclitaxel-related toxicity in patients with epithelial ovarian cancer treated with cisplatin and paclitaxel.24 The function of MI is associated with its activity to restrain several crucial pathways, such as phospholipase A2/arachidonic acid signaling,46 STAT3 signaling,47 and NF-κB signaling.23,48 In this study, we identified for the first time that MI inhibited cell proliferation, migration, and invasion and enhanced cell apoptosis in laryngeal cancer. MI also reduced tumor growth of laryngeal cancer in vivo. Our data reveal a novel inhibitory function of MI in laryngeal cancer and provide valuable insights into the role of MI in cancer development. EMT plays a critical role in the development of laryngeal cancer, especially in metastatic progression, by promoting resistance to apoptotic stimulation, invasion, and motility.49 It was reported that TGF-β-induced lncRNA MIR155HG promoted EMT of laryngeal squamous cell carcinoma by regulating the miR-155/SOX10 signaling.50 YAP modulates the Wnt/β-catenin signaling and the EMT program of laryngeal cancer.51 Aberrant methylation and down-regulation of ZNF667 and ZNF667-AS1 enhance EMT of laryngeal carcinoma.52 Meanwhile, it was reported that MI alleviated high fructose-induced liver fibrosis and EMT by enhancing miR-375-3p to repress the TGF-β1/Smad and JAK2/STAT3 signaling in rats.25 Our data demonstrated that MI attenuated EMT of laryngeal cancer by modulating the Twist/NF-κB signaling, providing evidence that MI exerts an inhibitory function on EMT in cancer progression. As a critical EMT transcription factor, Twist regulates the EMT program during cancer development.53 The correlation of Twist with NF-κB signaling in the modulation of EMT in cancer progression is well established.
It has been reported that presentation to TGF β combined with TNF α provokes tumorigenesis by modulating NF-κB/Twist signaling in vitro.54 Antrodia salmonea represses aggression and metastasis through modifying EMT via the NF-κB/Twist signaling in triple-negative breast cancer cells.55 Twist is involved in the mechanism that ursolic acid restrains EMT of gastric cancer through the Axl/NF-κB signaling.56 NF-κB activation by the RANKL/RANK pathway enhances the expression of snail and twist and promotes EMT of mammary cancer cells.57 Twist plays a crucial role in regulating the NF-κB and HIF-1α signaling on hypoxia-induced chemoresistance and EMT in pancreatic cancer.58 MiR-153 depletion down-regulates the expression of metastasis-associated family member 3 and Twist family BHLH transcription factor 1 in laryngeal squamous carcinoma cells.59 The expression of TWIST is remarkably down-regulated in response to paclitaxel, and TWIST may present a crucial function in paclitaxel-related laryngeal cancer cell apoptosis.60 TWIST is involved in the mechanism that TrkB elevates the metastasis of laryngeal cancer by activating the PI3K/AKT signaling.61 In the present study, we revealed that Twist was significantly elevated in laryngeal cancer tissues and human laryngeal carcinoma HEP-2 cells. It suggests that Twist may play a critical role in the development of laryngeal cancer. NF-κB/Twist signaling was involved in the mechanism of MI inhibiting EMT of laryngeal cancer. It provides new evidence of the role of Twist in cancer progression.", "In conclusion, we discovered that magnesium isoglycyrrhizinate attenuated the progression of laryngeal cancer in vitro and in vivo. Magnesium isoglycyrrhizinate induced an inhibitory effect on epithelial–mesenchymal transition in laryngeal cancer by modulating the NF-κB/Twist signaling. Our findings provide new insights into the mechanism by which magnesium isoglycyrrhizinate inhibits laryngeal carcinoma development. Magnesium isoglycyrrhizinate may be applied as a potential anti-tumor candidate for laryngeal cancer in clinical treatment strategy." ]
[ "intro", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "laryngeal cancer", "progression", "EMT", "magnesium isoglycyrrhizinate", "Twist", "NF-κB signaling" ]
Introduction: Laryngeal cancer is one of the most prevalent malignancies of the respiratory system, accounting for 26% to 30% of head and neck cancers.1,2 Almost 95% of the histologic pathogeny of laryngeal cancer is laryngeal squamous carcinoma, and the survival incidence of patients is low.3 Despite the development of new strategies in surgery, chemotherapy, and radiation, the targeted treatment of laryngeal cancer is still a challenge.4 Thus, identification of safe and effective treatment candidates for laryngeal cancer is urgently needed. Epithelial–mesenchymal transition (EMT) serves as a cellular program, in which cells drop their epithelial features and gain mesenchymal characteristics.5,6 EMT is correlated with multiple tumor progressions, such as resistance to therapy, blood intravasation, tumor initiation, tumor cell migration, tumor stemness, malignant progression, and metastasis.7–9 As a critical process of cancer development, EMT contributes to the development of laryngeal cancer. It has been reported that the combination of photodynamic therapy and carboplatin suppresses the expression of MMP-2/MMP-9 and EMT of laryngeal cancer by ROS-inhibited MEK/ERK signaling.10 EZH2 increases metastasis and aggression of laryngeal squamous cell carcinoma through the EMT program by modulating H3K27me3.11 However, investigation about the inhibitory candidates of EMT in laryngeal cancer remains limited. Nutraceutical agents display a unique therapeutic activity to treat diseases such as inflammation and cancer.12–17 Glycyrrhizic acid (GA) is extracted from the roots of licorice and serves as a major component of licorice, presenting multiple biomedical activities, such as anti-oxidant and anti-inflammatory.18 Magnesium isoglycyrrhizinate (MI), refined from GA, is a18-α-GA stereoisomer magnesium salt and demonstrates a better activity than 18-β-GA.19 As a natural and safe compound, MI shows many biomedical activities such as anti-inflammation capacities,20 anti-apoptosis21 and anti-tumor.22 It has been reported that MI inhibits fructose-modulated lipid metabolism disorder and activation of the NF-κB/NLRP3 inflammasome.23 MI reduces paclitaxel in patients with epithelial ovarian cancer managed with cisplatin and paclitaxel.24 Moreover, MI attenuates high fructose-induced liver fibrosis and EMT by upregulating miR-375-3p to overcome the TGF-β1/Smad pathway and JAK2/STAT3 signaling.25 However, the role of MI in cancer development is unknown. The effect of MI on the progression and EMT of laryngeal cancer remains unreported. Many transcription factors, such as Twist, serve as molecule switches in EMT progression.26 Twist is a crucial transcription factor for the modulating of EMT, which advances cell invasion, migration, and cancer metastasis, conferring tumor cells with stem cell-like properties and providing therapy resistance.27,28 The function of Twist in EMT of cancer development has been well reported. 
Disrupting diacetylated Twist represses the progression of basal-like breast cancer.29 The activation of Twist promotes progression and EMT of breast cancer.30 The inhibition of Twist limits the stem cell features and EMT of prostate cancer.31 Twist serves as a critical factor in promoting metastasis of pancreatic cancer.32 In addition, tumor necrosis factor-α provokes EMT of hypopharyngeal cancer and induces metastasis through NF-κB signaling-regulated expression of Twist.33 NF-κB/Twist signaling is involved in the mechanism by which chrysin inhibits the stem cell characteristics of ovarian cancer.34 Repression of the NF-κB/Twist axis decreases the stemness features of lung cancer stem cells.35 Down-regulation of TWIST reduces invasion and migration of laryngeal carcinoma cells by controlling the expression of N-cadherin and E-cadherin.36 The expression of Twist has clinical significance in laryngeal cancer.37 However, whether NF-κB and Twist are involved in the biomedical activities of MI is unclear. In this study, we aimed to explore the function of MI in the development and EMT process of laryngeal cancer. We identified a novel inhibitory effect of MI on the progression and EMT of laryngeal cancer through regulation of the NF-κB/Twist signaling. Methods: Laryngeal Cancer Clinical Samples A total of 40 laryngeal cancer clinical samples used in this study were obtained from the Second Affiliated Hospital, Harbin Medical University between June 2016 and August 2018. All the patients were diagnosed by clinical, radiographic, and histopathological analysis. Before surgery, no systemic or local therapy was performed on the subjects. The laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40) obtained from the patients were immediately frozen in liquid nitrogen and stored at −80 °C before use. The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist, and overall survival was analyzed by Kaplan–Meier survival analysis. The patients and healthy controls provided written informed consent, and the Ethics Committee of the Second Affiliated Hospital, Harbin Medical University approved this study.
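The survival comparison described above splits the 40 patients by the mean Twist expression and applies Kaplan–Meier analysis. A minimal sketch of that kind of analysis, assuming the lifelines package and an invented table of follow-up times (months), event indicators, and Twist values, is shown below; it is not the authors' code, and the group comparison via a log-rank test is an added assumption.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical patient data: follow-up time (months), death event (1/0), Twist expression.
time = np.array([12, 40, 55, 18, 60, 25, 48, 9, 36, 52])
event = np.array([1, 0, 0, 1, 0, 1, 0, 1, 1, 0])
twist = np.array([3.1, 1.2, 0.9, 2.8, 1.0, 2.5, 1.4, 3.4, 2.2, 1.1])

high = twist >= twist.mean()  # split by mean Twist expression
kmf = KaplanMeierFitter()
for label, mask in [("Twist high", high), ("Twist low", ~high)]:
    kmf.fit(time[mask], event_observed=event[mask], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

result = logrank_test(time[high], time[~high],
                      event_observed_A=event[high], event_observed_B=event[~high])
print("log-rank P =", result.p_value)
```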
Cell Culture and Treatment The human laryngeal carcinoma cell line HEP-2 and primary laryngeal epithelial cells (LEC-P) were purchased from the American Type Culture Collection. Cells were cultured in DMEM (Solarbio, China) containing 10% fetal bovine serum (Gibco, USA), 0.1 mg/mL streptomycin (Solarbio, China), and 100 units/mL penicillin (Solarbio, China) at 37 °C with 5% CO2. Magnesium isoglycyrrhizinate (MI) (purity > 98%) was obtained from Zhengda Tianqing Pharmaceutical Co., Ltd (Jiangsu, China). Cells were treated with MI at the indicated doses before further analysis. Quantitative Reverse Transcription-PCR (qRT-PCR) Total RNA was extracted using TRIZOL (Invitrogen, USA), followed by reverse transcription into cDNA. The qRT-PCR reactions were prepared using the SYBR Real-time PCR I kit (Takara, Japan). GAPDH was used as the internal control, and the qRT-PCR experiments were conducted in triplicate. The primer sequences were as follows: Twist forward: 5′-CGCTGAACGAGGCATTTGC-3′; Twist reverse: 5′-CCAGTTTGAGGGTCTGAATC-3′; Slug forward: 5′-GTGTTTGCAAGATCTGCGGC-3′; Slug reverse: 5′-GCAGATGAGCCCTCAGATTTGA-3′; ZEB1 forward: 5′-CCCCAGGTGTAAGCGCAGAA-3′; ZEB1 reverse: 5′-TGGCAGGTCATCCTCTGGTACAC-3′; Snail forward: 5′-CCACACTGGTGAGAAGCCTTTC-3′; Snail reverse: 5′-GTCTGGAGGTGGGCACGTA-3′; GAPDH forward: 5′-AAGAAGGTGGTGAAGCAGGC-3′; GAPDH reverse: 5′-TCCACCACCCAGTTGCTGTA-3′. MTT Assays MTT assays were conducted to measure the viability of HEP-2 cells. Briefly, about 1 × 104 HEP-2 cells were seeded into 96-well plates and cultured for 12 h. Cells were then treated with 10 μL MTT solution (5 mg/mL) (Sigma, USA) and cultured for another 4 h. The culture medium was discarded, and 150 μL/well DMSO (Thermo, USA) was added to the wells. A microplate reader (Bio-Tek EL 800, USA) was used to measure the absorbance at 570 nm. EdU Assays Cell proliferation was analyzed by EdU assays using an EdU detection kit (RiboBio, China). Briefly, HEP-2 cells were cultured with EdU for 2 h, followed by fixation with 4% paraformaldehyde at room temperature for 30 min. Then, cells were permeabilized with 0.4% Triton X-100 for 10 min and stained with the EdU staining cocktail at room temperature for 30 min in the dark. Next, the nuclei of the cells were stained with Hoechst at room temperature for 30 min. Images were analyzed using a fluorescence microscope. Colony Formation Assay About 1 × 103 HEP-2 cells were seeded in 6-well plates and incubated in DMEM at 37 °C. After two weeks, cells were washed with PBS buffer, fixed in methanol for 30 min, and stained with 1% crystal violet. The number of colonies was then counted. Transwell Assays Transwell assays were conducted to evaluate the impact of MI on the invasion and migration of HEP-2 cells using a Transwell plate (Corning, USA). Briefly, the upper chambers were plated with about 1 × 105 cells. Cells were then fixed with 4% paraformaldehyde and stained with crystal violet. The invaded and migrated cells were recorded and counted. Analysis of Cell Apoptosis Approximately 2 × 105 HEP-2 cells were plated on 6-well dishes. Cell apoptosis was detected using the Annexin V-FITC Apoptosis Detection Kit (CST, USA) following the manufacturer's instructions. Briefly, cells were collected and washed with binding buffer (BD Biosciences, USA), stained at 25 °C, and then subjected to flow cytometry analysis. Luciferase Reporter Gene Assay Luciferase reporter gene assays were performed using the Dual-Luciferase Reporter Assay System (Promega, USA). Briefly, cells were treated with MI at the indicated doses, followed by transfection with pGL3-NF-κB and pGL3-Twist using Lipofectamine 3000 (Invitrogen, USA). Luciferase activities were detected, and Renilla luciferase was used as the normalization control. Western Blot Analysis Total proteins were extracted from cells or tumor tissues using RIPA buffer (CST, USA). Protein concentrations were measured using the BCA Protein Quantification Kit (Abbkine, USA). Equal amounts of protein were separated by SDS-PAGE (12% polyacrylamide gels) and transferred to PVDF membranes (Millipore, USA). The membranes were blocked with 5% milk and incubated with primary antibodies against Twist, E-cadherin, occludin, vimentin, N-cadherin, Ki-67, IKK, NF-κB p65, NF-κB, and β-actin (all 1:1000, Abcam, UK) at 4 °C overnight, with β-actin serving as the loading control. Then, the corresponding secondary antibodies (1:1000) (Abcam, UK) were used to incubate the membranes at room temperature for 1 h, followed by visualization using an Odyssey CLx Infrared Imaging System. ImageJ software was used to quantify the results.
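Band intensities quantified in ImageJ are typically exported as raw integrated densities and then normalized to the β-actin (or, for nuclear fractions, histone H3) loading control before comparing treatments. The snippet below illustrates that normalization step on invented intensity values; the authors' exact quantification workflow is not described beyond the use of ImageJ, so the numbers and layout are assumptions.

```python
# Hypothetical ImageJ band intensities (arbitrary units) for one blot.
lanes = {
    "control": {"E-cadherin": 1850.0, "beta_actin": 9200.0},
    "MI":      {"E-cadherin": 4100.0, "beta_actin": 9350.0},
}

# Normalize the target band to the loading control in each lane.
normalized = {name: vals["E-cadherin"] / vals["beta_actin"] for name, vals in lanes.items()}
fold = normalized["MI"] / normalized["control"]
print(f"E-cadherin / β-actin: control {normalized['control']:.3f}, MI {normalized['MI']:.3f}")
print(f"Fold change after MI treatment: {fold:.2f}")
```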
Analysis of Tumorigenicity in Nude Mice The effect of MI on tumor growth in vivo was analyzed in Balb/c nude mice. Mice were randomly separated into two groups (n = 3). To establish the in vivo tumor model, HEP-2 cells were treated with MI (300 mg/kg) or an equal volume of saline, and about 2 × 106 cells were subcutaneously injected into the mice. Starting 7 days after injection, tumor growth was measured every 7 days. The mice were sacrificed 35 days after injection, and the tumors were dissected and weighed. Tumor volume (V) was determined by measuring the length (L) and width (W) with calipers and calculated with the formula (L × W²) × 0.5. The expression levels of Ki-67 in tumor tissues were measured by immunohistochemical staining with a Ki-67 antibody (1:1000) (Abcam, UK). Protein expression levels in tumor tissues were determined by Western blot analysis using antibodies against Twist, E-cadherin, occludin, vimentin, N-cadherin, Ki-67, IKK, NF-κB p65, NF-κB, and β-actin (all 1:1000, Abcam, UK). Animal care and experimental procedures in this study were approved by the Animal Ethics Committee of the Second Affiliated Hospital, Harbin Medical University. Statistical Analysis Data were presented as mean ± SD, and statistical analysis was performed using SPSS software (version 18.0). The unpaired Student's t-test was applied for comparing two groups, and one-way ANOVA was applied for comparing multiple groups. P < 0.05 was considered statistically significant.
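As a concrete illustration of the statistical plan just described (unpaired Student's t-test for two groups, one-way ANOVA for several), the snippet below runs both tests with SciPy on invented triplicate measurements; SPSS was used in the study itself, so this is only an equivalent sketch.

```python
from scipy import stats

# Hypothetical triplicate measurements (e.g., relative colony counts).
control = [100.0, 96.0, 104.0]
mi_low = [71.0, 68.0, 74.0]
mi_high = [38.0, 42.0, 35.0]

t_stat, p_two_groups = stats.ttest_ind(control, mi_high)    # two-group comparison
f_stat, p_multi = stats.f_oneway(control, mi_low, mi_high)   # multi-group comparison
print(f"t-test: t = {t_stat:.2f}, P = {p_two_groups:.4f}")
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_multi:.4f}")
```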
Results: Twist is Potentially Correlated with the Progression and Poor Prognosis of Laryngeal Cancer To assess the potential correlation of EMT-related transcription factors with laryngeal cancer, the expression of Twist, Slug, ZEB1, and Snail in clinical laryngeal samples and laryngeal cell lines was measured by qPCR assays. The data showed that the expression levels of Twist, Slug, ZEB1, and Snail were significantly elevated in laryngeal cancer tissues (n = 40) compared with normal tissues (n = 40), and Twist displayed the highest expression level (P < 0.01) (Figure 1A). Meanwhile, the expression levels of Twist, Slug, ZEB1, and Snail were also enhanced in human laryngeal carcinoma HEP-2 cells compared with primary laryngeal epithelial cells (LEC-P), and Twist again showed the highest expression among these transcription factors (P < 0.001) (Figure 1B), implying that Twist is closely associated with the development of laryngeal cancer. To determine whether Twist could serve as a potential biomarker for patients with laryngeal cancer, we separated the clinical laryngeal cancer samples into two groups according to the mean expression of Twist. We observed that high expression of Twist was remarkably correlated with poor overall survival (P < 0.01) (Figure 1C), suggesting that Twist may play a crucial role in the progression of laryngeal cancer. Figure 1. Twist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist, and overall survival was analyzed by Kaplan–Meier survival analysis. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01, ***P < 0.001. Magnesium Isoglycyrrhizinate (MI) Attenuates Cell Proliferation of Laryngeal Cancer in vitro Then, the role of MI in the modulation of laryngeal cancer cell proliferation was investigated; the structural formula of MI is shown in Figure 2A. To evaluate the effect of MI on the progression of laryngeal cancer, MTT, colony formation, and EdU assays were performed in HEP-2 cells treated with MI. The MTT assay demonstrated that MI significantly inhibited cell viability in a dose-dependent manner, and the half-maximal inhibitory concentration (IC50) of MI in HEP-2 cells was 3.22 mg/mL (P < 0.01) (Figure 2B). Hence, we selected the MI dose of 3.22 mg/mL for the following experiments. Similarly, the colony formation assay showed that the colony numbers of HEP-2 cells were remarkably reduced by MI treatment (P < 0.01) (Figure 2C). In addition, the EdU assay showed that MI notably decreased the number of EdU-positive cells (P < 0.01) (Figure 2D). These data suggested that MI is able to inhibit cell proliferation of laryngeal cancer. Figure 2. Magnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structural formula of MI is shown. (B) Cell viability was analyzed by MTT assays in the HEP-2 cells treated with MI at the indicated doses. (C) Cell proliferation was measured by colony formation assays in the HEP-2 cells treated with MI at the indicated doses. (D) Cell proliferation was examined by EdU assays in the HEP-2 cells treated with MI at the indicated doses. Data are presented as mean ± SD. Statistically significant differences are indicated: **P < 0.01. MI Inhibits Laryngeal Cancer Cell Migration and Invasion and Enhances Cell Apoptosis in vitro The role of MI in HEP-2 cell migration, invasion, and apoptosis was then evaluated. Transwell assays revealed that MI treatment remarkably reduced cell migration (P < 0.01) (Figure 3A). Similarly, cell invasion was significantly decreased by MI in HEP-2 cells (P < 0.01) (Figure 3B).
Moreover, flow cytometry analysis showed that cell apoptosis was notably increased by MI treatment in the system (P < 0.01) (Figure 3C). These results indicated that MI inhibited laryngeal cancer cell migration and invasion and enhanced cell apoptosis in vitro.Figure 3MI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) The cell migration was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (B) The cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell apoptosis was measure by flow cytometry analysis in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) The cell migration was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (B) The cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell apoptosis was measure by flow cytometry analysis in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI Inhibits Transcriptional Activation and the Expression of NF-κB and Twist and Alleviates EMT in Laryngeal Cancer Cells Next, the underlying mechanism of the effect of MI on the development of laryngeal cancer in HEP-2 cells was further explored. It showed that MI treatment significantly reduced the luciferase activities of NF-κB in the cells (P < 0.01) (Figure 4A), suggesting that MI may inhibit NF-κB at transcriptional level. Meanwhile, dual-luciferase reporter gene assays further revealed that MI treatment remarkably decreased the transcriptional activities of Twist in the cells (P < 0.01) (Figure 4B). Furthermore, Western blot analysis demonstrated that the total expression (Figure 4C) and nucleus expression (Figure 4D) of NF-κB and Twist were significantly down-regulated by MI in HEP-2 cells (P < 0.01), suggesting that MI may inhibit Twist by modulating NF-κB. Moreover, to assess the effect of MI on the EMT of laryngeal cancer, the expression of EMT markers, including E-Cadherin, occluding, vimentin, and N-cadherin, in HEP-2 cells treated with MI were measured. Our data showed that the expression levels of E-Cadherin and occluding were enhanced while the expression levels of vimentin and N-cadherin were reduced by MI treatment in HEP-2 cells (P < 0.01) (Figure 4E), suggesting that MI can inhibit EMT of laryngeal cancer.Figure 4MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activities of NF-κB were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (B) The luciferase activities of Twist were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (C) The total expression of NF-κB, Twist, and β-actin was measured by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (D) The nucleus expression of NF-κB, Twist, and histone H3 was tested by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. 
(E) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activities of NF-κB were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (B) The luciferase activities of Twist were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (C) The total expression of NF-κB, Twist, and β-actin was measured by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (D) The nucleus expression of NF-κB, Twist, and histone H3 was tested by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (E) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. Next, the underlying mechanism of the effect of MI on the development of laryngeal cancer in HEP-2 cells was further explored. It showed that MI treatment significantly reduced the luciferase activities of NF-κB in the cells (P < 0.01) (Figure 4A), suggesting that MI may inhibit NF-κB at transcriptional level. Meanwhile, dual-luciferase reporter gene assays further revealed that MI treatment remarkably decreased the transcriptional activities of Twist in the cells (P < 0.01) (Figure 4B). Furthermore, Western blot analysis demonstrated that the total expression (Figure 4C) and nucleus expression (Figure 4D) of NF-κB and Twist were significantly down-regulated by MI in HEP-2 cells (P < 0.01), suggesting that MI may inhibit Twist by modulating NF-κB. Moreover, to assess the effect of MI on the EMT of laryngeal cancer, the expression of EMT markers, including E-Cadherin, occluding, vimentin, and N-cadherin, in HEP-2 cells treated with MI were measured. Our data showed that the expression levels of E-Cadherin and occluding were enhanced while the expression levels of vimentin and N-cadherin were reduced by MI treatment in HEP-2 cells (P < 0.01) (Figure 4E), suggesting that MI can inhibit EMT of laryngeal cancer.Figure 4MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activities of NF-κB were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (B) The luciferase activities of Twist were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (C) The total expression of NF-κB, Twist, and β-actin was measured by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. 
(D) The nucleus expression of NF-κB, Twist, and histone H3 was tested by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (E) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activities of NF-κB were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (B) The luciferase activities of Twist were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (C) The total expression of NF-κB, Twist, and β-actin was measured by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (D) The nucleus expression of NF-κB, Twist, and histone H3 was tested by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (E) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI Inhibits Tumor Growth and EMT of Laryngeal Cancer via the Twist/NF-κB Signaling in vivo The effect of MI on laryngeal cancer development was further investigated in vivo. Tumorigenicity analysis was conducted in nude mice injected with HEP-2 cells, which were treated with MI or corresponding control. The MI treatment significantly reduced tumor size (Figure 5A), tumor weight (P < 0.01) (Figure 5B), tumor volume (P < 0.01) (Figure 5C), and the expression levels of Ki-67 in tumor tissues of the mice (Figure 5D), suggesting that MI inhibited tumor growth of laryngeal cancer in vivo.Figure 5MI inhibits tumor growth of laryngeal cancer in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by tumorigenicity assay in nude mice. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) Representative images of dissected tumors from nude mice were presented. (B) The average tumor weight was calculated and shown. (C) The average tumor volume was calculated and shown. (D) The expression levels of Ki-67 of the tumor tissues were measured by immunohistochemical staining. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits tumor growth of laryngeal cancer in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by tumorigenicity assay in nude mice. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) Representative images of dissected tumors from nude mice were presented. (B) The average tumor weight was calculated and shown. (C) The average tumor volume was calculated and shown. 
(D) The expression levels of Ki-67 of the tumor tissues were measured by immunohistochemical staining. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. Moreover, our data showed that MI treatment significantly enhanced the expression levels of E-Cadherin and occluding, whereas reduced the expression levels of vimentin and N-cadherin in tumor tissues of the mice (P < 0.01) (Figure 6A and B), indicating that MI attenuated EMT of laryngeal cancer in vivo. Besides, the expression levels of Twist, Ki-67, and NF-κB signaling proteins containing p65 and IKK were significantly decreased by MI in tumor tissues of the mice (P < 0.01) (Figure 6C and D), implying that MI may inhibit EMT of laryngeal cancer via the Twist/NF-κB signaling.Figure 6MI inhibits EMT of laryngeal cancer via Twist/NF-κB signaling in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by nude mice tumorigenicity assay. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the tumor tissues of the mice. (B) The results of Western blot analysis in (A) were quantified by ImageJ software. (C) The expression levels of NF-κB, NF-κB p65, Twist, Ki-67, IKK, and β-actin were measured by Western blot analysis in the tumor tissues of the mice. (D) The results of Western blot analysis in (C) were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits EMT of laryngeal cancer via Twist/NF-κB signaling in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by nude mice tumorigenicity assay. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the tumor tissues of the mice. (B) The results of Western blot analysis in (A) were quantified by ImageJ software. (C) The expression levels of NF-κB, NF-κB p65, Twist, Ki-67, IKK, and β-actin were measured by Western blot analysis in the tumor tissues of the mice. (D) The results of Western blot analysis in (C) were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. The effect of MI on laryngeal cancer development was further investigated in vivo. Tumorigenicity analysis was conducted in nude mice injected with HEP-2 cells, which were treated with MI or corresponding control. The MI treatment significantly reduced tumor size (Figure 5A), tumor weight (P < 0.01) (Figure 5B), tumor volume (P < 0.01) (Figure 5C), and the expression levels of Ki-67 in tumor tissues of the mice (Figure 5D), suggesting that MI inhibited tumor growth of laryngeal cancer in vivo.Figure 5MI inhibits tumor growth of laryngeal cancer in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by tumorigenicity assay in nude mice. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) Representative images of dissected tumors from nude mice were presented. (B) The average tumor weight was calculated and shown. (C) The average tumor volume was calculated and shown. 
(D) The expression levels of Ki-67 of the tumor tissues were measured by immunohistochemical staining. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits tumor growth of laryngeal cancer in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by tumorigenicity assay in nude mice. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) Representative images of dissected tumors from nude mice were presented. (B) The average tumor weight was calculated and shown. (C) The average tumor volume was calculated and shown. (D) The expression levels of Ki-67 of the tumor tissues were measured by immunohistochemical staining. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. Moreover, our data showed that MI treatment significantly enhanced the expression levels of E-Cadherin and occluding, whereas reduced the expression levels of vimentin and N-cadherin in tumor tissues of the mice (P < 0.01) (Figure 6A and B), indicating that MI attenuated EMT of laryngeal cancer in vivo. Besides, the expression levels of Twist, Ki-67, and NF-κB signaling proteins containing p65 and IKK were significantly decreased by MI in tumor tissues of the mice (P < 0.01) (Figure 6C and D), implying that MI may inhibit EMT of laryngeal cancer via the Twist/NF-κB signaling.Figure 6MI inhibits EMT of laryngeal cancer via Twist/NF-κB signaling in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by nude mice tumorigenicity assay. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the tumor tissues of the mice. (B) The results of Western blot analysis in (A) were quantified by ImageJ software. (C) The expression levels of NF-κB, NF-κB p65, Twist, Ki-67, IKK, and β-actin were measured by Western blot analysis in the tumor tissues of the mice. (D) The results of Western blot analysis in (C) were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits EMT of laryngeal cancer via Twist/NF-κB signaling in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by nude mice tumorigenicity assay. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the tumor tissues of the mice. (B) The results of Western blot analysis in (A) were quantified by ImageJ software. (C) The expression levels of NF-κB, NF-κB p65, Twist, Ki-67, IKK, and β-actin were measured by Western blot analysis in the tumor tissues of the mice. (D) The results of Western blot analysis in (C) were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. Twist is Potentially Correlated with the Progression and Poor Prognosis of Laryngeal Cancer: To assess the potential correlation of EMT-related transcription factors with laryngeal cancer, the expression of Twist, Slug, ZEB1, and Snail in the clinical laryngeal samples and laryngeal cells were measured by qPCR assays. 
The data showed that the expression levels of Twist, Slug, ZEB1, and Snail were significantly elevated in laryngeal cancer tissues (n = 40) compared to that in normal tissues (n = 40), among which Twist displayed the highest expression levels (P < 0.01) (Figure 1A). Meanwhile, the expression levels of Twist, Slug, ZEB1, and Snail were also enhanced in human laryngeal carcinoma HEP-2 cells compared with that in primary laryngeal epithelial cells (LEC-P), and Twist exerted the highest expression levels among these transcription factors (P < 0.001) (Figure 1B), implying that Twist is closely associated with the development of laryngeal cancer. To determine whether Twist was able to serve as the potential biomarker for patients with laryngeal cancer, we separated the clinical laryngeal cancer samples into two groups according to the mean expression of Twist. We observed that high expression levels of Twist was remarkably correlated with the poor overall survival (P < 0.01) (Figure 1C), suggesting that Twist may play a crucial role in the progression of laryngeal cancer.Figure 1Twist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in the laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan-Meier survival analysis. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01, *** P < 0.001. Twist is potentially correlated with the progression and poor prognosis of laryngeal cancer. (A) The expression levels of Twist, Slug, ZEB1, and Snail were measured by qPCR in the laryngeal cancer tissues (n = 40) and adjacent normal tissues (n = 40). (B) The expression levels of Twist, Slug, ZEB1, and Snail were assessed by qPCR in HEP-2 cells and primary laryngeal epithelial cells. (C) The clinical laryngeal cancer samples were separated into two groups according to the mean expression of Twist. The overall survival was analyzed by Kaplan-Meier survival analysis. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01, *** P < 0.001. Magnesium Isoglycyrrhizinate (MI) Attenuates Cell Proliferation of Laryngeal Cancer in vitro: Then, the role of MI in the modulation of laryngeal cancer proliferation was investigated, and the structure formula of MI is shown in Figure 2A. To evaluate the effect of MI on the progression of laryngeal cancer, MTT assay, colony formation assay, and EdU assay were performed in HEP-2 cells treated with MI. MTT assay demonstrated that MI significantly inhibited cell viability in a dose-dependent manner, and the half-maximal inhibitory concentrations (IC50) of MI in HEP-2 cells was 3.22 mg/mL (P < 0.01) (Figure 2B). Hence, we selected the MI dose of 3.22 mg/mL in the following experiments. Similarly, colony formation assay showed that the colony numbers of HEP-2 cells were remarkably reduced by MI treatment (P < 0.01) (Figure 2C). Besides, EdU assay showed that MI notably decreased number of EdU-positive cells (P < 0.01) (Figure 2D). 
These data suggested that MI was able to inhibit cell proliferation of laryngeal cancer.Figure 2Magnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structure formula of MI was shown. (B) The cell viability was analyzed by MTT assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell proliferation was measured by colony formation assays in the HEP-2 cells treated with MI at indicated dosage. (D) The cell proliferation was examined by EdU assays in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. Magnesium isoglycyrrhizinate (MI) attenuates cell proliferation of laryngeal cancer in vitro. (A) The structure formula of MI was shown. (B) The cell viability was analyzed by MTT assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell proliferation was measured by colony formation assays in the HEP-2 cells treated with MI at indicated dosage. (D) The cell proliferation was examined by EdU assays in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI Inhibits Laryngeal Cancer Cell Migration and Invasion and Enhances Cell Apoptosis in vitro: The role of MI in HEP-2 cell migration, invasion, and apoptosis was then evaluated. Transwell assays revealed that MI treatment remarkably reduced cell migration (P < 0.01) (Figure 3A). Similarly, cell invasion was significantly decreased by MI in HEP-2 cells (P < 0.01) (Figure 3B). Moreover, flow cytometry analysis showed that cell apoptosis was notably increased by MI treatment in the system (P < 0.01) (Figure 3C). These results indicated that MI inhibited laryngeal cancer cell migration and invasion and enhanced cell apoptosis in vitro.Figure 3MI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) The cell migration was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (B) The cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell apoptosis was measure by flow cytometry analysis in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits laryngeal cancer cell migration and invasion and enhances cell apoptosis in vitro. (A) The cell migration was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (B) The cell invasion was examined by transwell assays in the HEP-2 cells treated with MI at indicated dosage. (C) The cell apoptosis was measure by flow cytometry analysis in the HEP-2 cells treated with MI at indicated dosage. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI Inhibits Transcriptional Activation and the Expression of NF-κB and Twist and Alleviates EMT in Laryngeal Cancer Cells: Next, the underlying mechanism of the effect of MI on the development of laryngeal cancer in HEP-2 cells was further explored. It showed that MI treatment significantly reduced the luciferase activities of NF-κB in the cells (P < 0.01) (Figure 4A), suggesting that MI may inhibit NF-κB at transcriptional level. 
Meanwhile, dual-luciferase reporter gene assays further revealed that MI treatment remarkably decreased the transcriptional activities of Twist in the cells (P < 0.01) (Figure 4B). Furthermore, Western blot analysis demonstrated that the total expression (Figure 4C) and nucleus expression (Figure 4D) of NF-κB and Twist were significantly down-regulated by MI in HEP-2 cells (P < 0.01), suggesting that MI may inhibit Twist by modulating NF-κB. Moreover, to assess the effect of MI on the EMT of laryngeal cancer, the expression of EMT markers, including E-Cadherin, occluding, vimentin, and N-cadherin, in HEP-2 cells treated with MI were measured. Our data showed that the expression levels of E-Cadherin and occluding were enhanced while the expression levels of vimentin and N-cadherin were reduced by MI treatment in HEP-2 cells (P < 0.01) (Figure 4E), suggesting that MI can inhibit EMT of laryngeal cancer.Figure 4MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activities of NF-κB were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (B) The luciferase activities of Twist were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (C) The total expression of NF-κB, Twist, and β-actin was measured by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (D) The nucleus expression of NF-κB, Twist, and histone H3 was tested by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (E) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits transcriptional activation and expression of NF-κB and Twist and alleviates EMT in laryngeal cancer cells. (A) The luciferase activities of NF-κB were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (B) The luciferase activities of Twist were determined by luciferase reporter gene assays in the HEP-2 cells treated with MI at indicated dosage. (C) The total expression of NF-κB, Twist, and β-actin was measured by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (D) The nucleus expression of NF-κB, Twist, and histone H3 was tested by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. (E) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the HEP-2 cells treated with MI at indicated dosage. The results of Western blot analysis were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI Inhibits Tumor Growth and EMT of Laryngeal Cancer via the Twist/NF-κB Signaling in vivo: The effect of MI on laryngeal cancer development was further investigated in vivo. 
Tumorigenicity analysis was conducted in nude mice injected with HEP-2 cells, which were treated with MI or corresponding control. The MI treatment significantly reduced tumor size (Figure 5A), tumor weight (P < 0.01) (Figure 5B), tumor volume (P < 0.01) (Figure 5C), and the expression levels of Ki-67 in tumor tissues of the mice (Figure 5D), suggesting that MI inhibited tumor growth of laryngeal cancer in vivo.Figure 5MI inhibits tumor growth of laryngeal cancer in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by tumorigenicity assay in nude mice. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) Representative images of dissected tumors from nude mice were presented. (B) The average tumor weight was calculated and shown. (C) The average tumor volume was calculated and shown. (D) The expression levels of Ki-67 of the tumor tissues were measured by immunohistochemical staining. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits tumor growth of laryngeal cancer in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by tumorigenicity assay in nude mice. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) Representative images of dissected tumors from nude mice were presented. (B) The average tumor weight was calculated and shown. (C) The average tumor volume was calculated and shown. (D) The expression levels of Ki-67 of the tumor tissues were measured by immunohistochemical staining. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. Moreover, our data showed that MI treatment significantly enhanced the expression levels of E-Cadherin and occluding, whereas reduced the expression levels of vimentin and N-cadherin in tumor tissues of the mice (P < 0.01) (Figure 6A and B), indicating that MI attenuated EMT of laryngeal cancer in vivo. Besides, the expression levels of Twist, Ki-67, and NF-κB signaling proteins containing p65 and IKK were significantly decreased by MI in tumor tissues of the mice (P < 0.01) (Figure 6C and D), implying that MI may inhibit EMT of laryngeal cancer via the Twist/NF-κB signaling.Figure 6MI inhibits EMT of laryngeal cancer via Twist/NF-κB signaling in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by nude mice tumorigenicity assay. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). (A) The expression levels of E-Cadherin, occluding, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the tumor tissues of the mice. (B) The results of Western blot analysis in (A) were quantified by ImageJ software. (C) The expression levels of NF-κB, NF-κB p65, Twist, Ki-67, IKK, and β-actin were measured by Western blot analysis in the tumor tissues of the mice. (D) The results of Western blot analysis in (C) were quantified by ImageJ software. Data are presented as mean ± SD. Statistic significant differences were indicated: ** P < 0.01. MI inhibits EMT of laryngeal cancer via Twist/NF-κB signaling in vivo. (A–D) The effect of MI on tumor growth of laryngeal cancer in vivo was analyzed by nude mice tumorigenicity assay. The HEP-2 cells were treated with MI (300 mg/kg) or equal volume saline and injected into the nude mice (n = 3). 
(A) The expression levels of E-Cadherin, occludin, vimentin, N-cadherin, and β-actin were analyzed by Western blot analysis in the tumor tissues of the mice. (B) The results of Western blot analysis in (A) were quantified by ImageJ software. (C) The expression levels of NF-κB, NF-κB p65, Twist, Ki-67, IKK, and β-actin were measured by Western blot analysis in the tumor tissues of the mice. (D) The results of Western blot analysis in (C) were quantified by ImageJ software. Data are presented as mean ± SD. Statistically significant differences were indicated: ** P < 0.01. Discussion: Laryngeal cancer is the second most prevalent head and neck malignancy.38 Patients with laryngeal cancer have poor survival rates and prognosis, and many still suffer from recurrence.39 Although improvements have been made in chemotherapy, radiotherapy, and surgery, invasion and metastasis remain the principal causes of laryngeal cancer-related mortality in patients with advanced disease.40 Searching for safer and more practical treatment candidates for the effective therapy of laryngeal cancer is therefore of great importance.41 As a natural compound, MI has shown practical potential in medical applications. It has been reported that MI represses cardiac hypertrophy by modulating TLR4/NF-κB signaling in mice.42 MI protects against triptolide-induced hepatotoxicity by activating Nrf2 signaling.43 MI relieves fructose-induced podocyte apoptosis by down-regulating miR-193a to enhance WT1.44 However, the investigation of MI in cancer progression is limited. A previous study examined the exposure–effect–toxicity relationships of MI combined with docetaxel in mice bearing non-small cell lung cancer.22 MI inhibits chemotherapy-induced liver damage during the initial therapy of patients with gastrointestinal tumors.45 MI has been reported to reduce paclitaxel-related toxicity in patients with epithelial ovarian cancer treated with cisplatin and paclitaxel.24 The function of MI is associated with its ability to restrain several crucial pathways, such as phospholipase A2/arachidonic acid signaling,46 STAT3 signaling,47 and NF-κB signaling.23,48 In this study, we identified for the first time that MI inhibited cell proliferation, migration, and invasion and enhanced apoptosis in laryngeal cancer. MI also reduced tumor growth of laryngeal cancer in vivo. Our data present a novel inhibitory function of MI in laryngeal cancer and provide valuable insights into the role of MI in cancer development. EMT plays a critical role in the development of laryngeal cancer, especially in metastatic progression, by promoting resistance to apoptotic stimuli, invasion, and motility.49 It was reported that TGF-β-induced lncRNA MIR155HG promoted EMT of laryngeal squamous cell carcinoma by regulating miR-155/SOX10 signaling.50 YAP modulates Wnt/β-catenin signaling and the EMT program of laryngeal cancer.51 Abnormal methylation and down-regulation of ZNF667 and ZNF667-AS1 enhance EMT of laryngeal carcinoma.52 Meanwhile, it was reported that MI alleviated high fructose-induced liver fibrosis and EMT by enhancing miR-375-3p to repress TGF-β1/Smad signaling and JAK2/STAT3 signaling in rats.25 Our data demonstrated that MI attenuated EMT of laryngeal cancer by modulating Twist/NF-κB signaling, providing valuable information that MI exerts an inhibitory function on EMT in cancer progression.
As a critical EMT transcription factor, Twist regulates the EMT program during cancer development.53 The correlation of Twist with NF-κB signaling in the modulation of EMT during cancer progression is well documented. It has been reported that exposure to TGF-β combined with TNF-α promotes tumorigenesis by modulating NF-κB/Twist signaling in vitro.54 Antrodia salmonea represses aggressiveness and metastasis by modulating EMT via NF-κB/Twist signaling in triple-negative breast cancer cells.55 Twist is involved in the mechanism by which ursolic acid restrains EMT of gastric cancer through Axl/NF-κB signaling.56 NF-κB activation by the RANKL/RANK pathway enhances the expression of Snail and Twist and promotes EMT of mammary cancer cells.57 Twist plays a crucial role in the regulation of NF-κB and HIF-1α signaling in hypoxia-induced chemoresistance and EMT in pancreatic cancer.58 MiR-153 depletion down-regulates the expression of metastasis-associated family member 3 and Twist family BHLH transcription factor 1 in laryngeal squamous carcinoma cells.59 The expression of TWIST is remarkably down-regulated in response to paclitaxel, and TWIST may play a crucial role in paclitaxel-induced apoptosis of laryngeal cancer cells.60 TWIST is also involved in the mechanism by which TrkB promotes metastasis of laryngeal cancer through activation of PI3K/AKT signaling.61 In the present study, we revealed that Twist was significantly elevated in laryngeal cancer tissues and human laryngeal carcinoma HEP-2 cells, suggesting that Twist may play a critical role in the development of laryngeal cancer. NF-κB/Twist signaling was involved in the mechanism by which MI inhibited EMT of laryngeal cancer, providing new evidence for the role of Twist in cancer progression. Conclusion: In conclusion, we discovered that magnesium isoglycyrrhizinate attenuated the progression of laryngeal cancer in vitro and in vivo. Magnesium isoglycyrrhizinate induced an inhibitory effect on epithelial–mesenchymal transition in laryngeal cancer by modulating the NF-κB/Twist signaling. Our findings provide new insights into the mechanism by which magnesium isoglycyrrhizinate inhibits laryngeal carcinoma development. Magnesium isoglycyrrhizinate may serve as a potential anti-tumor candidate in clinical treatment strategies for laryngeal cancer.
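As an illustration of the tumor measurements described in the Methods, the short Python sketch below applies the caliper formula V = (L × W²) × 0.5 and compares two groups with an unpaired t-test, the test named in the Statistical Analysis section. Only the formula and the choice of test are taken from the text; the caliper readings and the helper function name are hypothetical values invented for illustration, not data from the study.

from scipy.stats import ttest_ind

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Caliper-based estimate used in the paper: V = (L x W^2) x 0.5, in mm^3."""
    return 0.5 * length_mm * width_mm ** 2

# Hypothetical (length, width) caliper readings in mm for two groups of three tumors,
# mirroring the n = 3 group size stated in the Methods but not the actual measurements.
saline_group = [(15.0, 11.0), (14.0, 10.5), (16.0, 11.5)]
mi_group = [(9.0, 7.0), (8.5, 6.5), (10.0, 7.5)]

saline_volumes = [tumor_volume(l, w) for l, w in saline_group]
mi_volumes = [tumor_volume(l, w) for l, w in mi_group]

# Unpaired Student's t-test for the two-group comparison.
t_stat, p_value = ttest_ind(saline_volumes, mi_volumes)
print(f"saline mean = {sum(saline_volumes)/3:.0f} mm^3, "
      f"MI mean = {sum(mi_volumes)/3:.0f} mm^3, p = {p_value:.4f}")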
Background: Magnesium isoglycyrrhizinate (MI) was extracted from roots of the plant Glycyrrhiza glabra, which displays multiple pharmacological activities such as anti-inflammation, anti-apoptosis, and anti-tumor. Here, we aimed to investigate the effect of MI on the progression and epithelial-mesenchymal transition (EMT) of laryngeal cancer. Methods: Forty laryngeal cancer clinical samples were used. The role of MI in the proliferation of laryngeal cancer cells was assessed by MTT assay, Edu assay and colony formation assay. The function of MI in the migration and invasion of laryngeal cancer cells was tested by transwell assays. The effect of MI on apoptosis of laryngeal cancer cells was determined by cell apoptosis assay. The impact of MI on tumor growth in vivo was analyzed by tumorigenicity analysis using Balb/c nude mice. qPCR and Western blot analysis were performed to measure the expression levels of gene and protein, respectively. Results: We identified that EMT-related transcription factor Twist was significantly elevated in the laryngeal cancer tissues. The expression of Twist was also enhanced in the human laryngeal carcinoma HEP-2 cells compared with that in the primary laryngeal epithelial cells. The high expression of Twist was remarkably correlated with poor overall survival of patients with laryngeal cancer. Meanwhile, our data revealed that MI reduced cell proliferation, migration and invasion and enhanced apoptosis of laryngeal cancer cells in vitro. Moreover, MI decreased transcriptional activation and the expression levels of NF-κB and Twist, and alleviated EMT in vitro and in vivo. MI remarkably inhibited tumor growth and EMT of laryngeal cancer cells in vivo. Conclusions: MI restrains the progression of laryngeal cancer and induces an inhibitory effect on EMT in laryngeal cancer by modulating the NF-κB/Twist signaling. Our finding provides new insights into the mechanism by which MI inhibits laryngeal carcinoma development, enriching the understanding of the anti-tumor function of MI.
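The overall-survival comparison summarized above, in which patients were split into high- and low-Twist groups at the mean expression value and compared by Kaplan-Meier analysis, can be illustrated with the lifelines package in Python. This is a minimal sketch under assumed data: the follow-up times, event flags, Twist values, and column names below are hypothetical, and the study's own survival data are not reproduced here.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort: follow-up time in months, death flag (1 = died), and a
# relative Twist qPCR level. Invented for illustration only.
df = pd.DataFrame({
    "months": [10, 24, 36, 48, 60, 12, 18, 30, 42, 55],
    "died":   [1, 1, 0, 0, 0, 1, 1, 1, 0, 1],
    "twist":  [2.1, 1.8, 0.6, 0.4, 0.5, 2.5, 3.0, 1.9, 0.7, 2.2],
})

# Split the cohort at the mean Twist expression, as described for Figure 1C.
high = df[df["twist"] >= df["twist"].mean()]
low = df[df["twist"] < df["twist"].mean()]

kmf = KaplanMeierFitter()
kmf.fit(high["months"], event_observed=high["died"], label="Twist high")
print("median survival, Twist high:", kmf.median_survival_time_)
kmf.fit(low["months"], event_observed=low["died"], label="Twist low")
print("median survival, Twist low:", kmf.median_survival_time_)

# Log-rank test for a difference in overall survival between the two groups.
result = logrank_test(high["months"], low["months"],
                      event_observed_A=high["died"], event_observed_B=low["died"])
print(f"log-rank p-value: {result.p_value:.4f}")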
Introduction: Laryngeal cancer is one of the most prevalent malignancies of the respiratory system, accounting for 26% to 30% of head and neck cancers.1,2 Almost 95% of laryngeal cancers are histologically laryngeal squamous carcinoma, and the survival rate of patients is low.3 Despite the development of new strategies in surgery, chemotherapy, and radiation, the targeted treatment of laryngeal cancer is still a challenge.4 Thus, identification of safe and effective treatment candidates for laryngeal cancer is urgently needed. Epithelial–mesenchymal transition (EMT) is a cellular program in which cells lose their epithelial features and gain mesenchymal characteristics.5,6 EMT is correlated with multiple aspects of tumor progression, such as resistance to therapy, blood intravasation, tumor initiation, tumor cell migration, tumor stemness, malignant progression, and metastasis.7–9 As a critical process of cancer development, EMT contributes to the development of laryngeal cancer. It has been reported that the combination of photodynamic therapy and carboplatin suppresses the expression of MMP-2/MMP-9 and EMT of laryngeal cancer through ROS-inhibited MEK/ERK signaling.10 EZH2 increases metastasis and aggressiveness of laryngeal squamous cell carcinoma through the EMT program by modulating H3K27me3.11 However, investigation of candidate inhibitors of EMT in laryngeal cancer remains limited. Nutraceutical agents display unique therapeutic activity in treating diseases such as inflammation and cancer.12–17 Glycyrrhizic acid (GA) is extracted from the roots of licorice and serves as a major component of licorice, presenting multiple biomedical activities, such as anti-oxidant and anti-inflammatory effects.18 Magnesium isoglycyrrhizinate (MI), refined from GA, is an 18-α-GA stereoisomer magnesium salt and demonstrates better activity than 18-β-GA.19 As a natural and safe compound, MI shows many biomedical activities, such as anti-inflammatory,20 anti-apoptotic,21 and anti-tumor activities.22 It has been reported that MI inhibits fructose-induced lipid metabolism disorder and activation of the NF-κB/NLRP3 inflammasome.23 MI reduces paclitaxel-related toxicity in patients with epithelial ovarian cancer managed with cisplatin and paclitaxel.24 Moreover, MI attenuates high fructose-induced liver fibrosis and EMT by upregulating miR-375-3p to suppress the TGF-β1/Smad pathway and JAK2/STAT3 signaling.25 However, the role of MI in cancer development is largely unknown, and the effect of MI on the progression and EMT of laryngeal cancer remains unreported. Many transcription factors, such as Twist, serve as molecular switches in EMT progression.26 Twist is a crucial transcription factor for the modulation of EMT; it promotes cell invasion, migration, and cancer metastasis, confers stem cell-like properties on tumor cells, and contributes to therapy resistance.27,28 The function of Twist in EMT during cancer development has been well reported.
Disrupting the diacetylated Twist represses the progression of Basal-like breast cancer.29 The activation of Twist promotes progression and EMT of breast cancer.30 The inhibition of Twist limits the stem cell features and EMT of prostate cancer.31 Twist serves as a critical factor in promoting metastasis of pancreatic cancer.32 Besides, tumor necrosis factor α provokes EMT of hypopharyngeal cancer and induces metastasis through NF-κB signaling-regulated expression of Twist.33 NF-κB/Twist signaling is involved in the mechanism of Chysin inhibiting stem cell characteristics of ovarian cancer.34 Repression of the NF-κB/Twist axis decreases the stemness features of lung cancer stem cell.35 Down-regulation of TWIST reduces invasion and migration of laryngeal carcinoma cells by controlling the expression of N-cadherin and E-cadherin.36 The expression of Twist displays the clinical significance of laryngeal cancer.37 However, whether NF-κB and Twist are involved in the biomedical activities of MI is unclear. In this study, we aimed to explore the function of MI in the development and EMT process of laryngeal cancer. We identified a novel inhibitory effect of MI in the progression and EMT of laryngeal cancer by regulating the NF-κB/Twist signaling. Conclusion: In conclusion, we discovered that magnesium isoglycyrrhizinate attenuated the progression of laryngeal cancer in vitro and in vivo. Magnesium isoglycyrrhizinate induced an inhibitory effect on epithelial–mesenchymal transition in laryngeal cancer by modulating the NF-κB/Twist signaling. Our findings provide new insights into the mechanism by which magnesium isoglycyrrhizinate inhibits laryngeal carcinoma development. Magnesium isoglycyrrhizinate may be applied as a potential anti-tumor candidate for laryngeal cancer in clinical treatment strategy.
Background: Magnesium isoglycyrrhizinate (MI) was extracted from roots of the plant Glycyrrhiza glabra, which displays multiple pharmacological activities such as anti-inflammation, anti-apoptosis, and anti-tumor. Here, we aimed to investigate the effect of MI on the progression and epithelial-mesenchymal transition (EMT) of laryngeal cancer. Methods: Forty laryngeal cancer clinical samples were used. The role of MI in the proliferation of laryngeal cancer cells was assessed by MTT assay, Edu assay and colony formation assay. The function of MI in the migration and invasion of laryngeal cancer cells was tested by transwell assays. The effect of MI on apoptosis of laryngeal cancer cells was determined by cell apoptosis assay. The impact of MI on tumor growth in vivo was analyzed by tumorigenicity analysis using Balb/c nude mice. qPCR and Western blot analysis were performed to measure the expression levels of gene and protein, respectively. Results: We identified that EMT-related transcription factor Twist was significantly elevated in the laryngeal cancer tissues. The expression of Twist was also enhanced in the human laryngeal carcinoma HEP-2 cells compared with that in the primary laryngeal epithelial cells. The high expression of Twist was remarkably correlated with poor overall survival of patients with laryngeal cancer. Meanwhile, our data revealed that MI reduced cell proliferation, migration and invasion and enhanced apoptosis of laryngeal cancer cells in vitro. Moreover, MI decreased transcriptional activation and the expression levels of NF-κB and Twist, and alleviated EMT in vitro and in vivo. MI remarkably inhibited tumor growth and EMT of laryngeal cancer cells in vivo. Conclusions: MI restrains the progression of laryngeal cancer and induces an inhibitory effect on EMT in laryngeal cancer by modulating the NF-κB/Twist signaling. Our finding provides new insights into the mechanism by which MI inhibits laryngeal carcinoma development, enriching the understanding of the anti-tumor function of MI.
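The MTT result quoted above, a dose-dependent loss of viability with an IC50 of 3.22 mg/mL, implies a dose-response fit. The sketch below shows one common way to estimate an IC50 in Python by fitting a four-parameter logistic model with SciPy; the doses and viability values are hypothetical, and the model choice is an assumption rather than the authors' stated method.

import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic (Hill) model commonly used for dose-response data.
def four_pl(dose, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical MI doses (mg/mL) and % viability values, invented for illustration.
dose = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
viability = np.array([95.0, 90.0, 78.0, 60.0, 42.0, 20.0])

params, _ = curve_fit(four_pl, dose, viability, p0=[10.0, 100.0, 3.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"estimated IC50 ≈ {ic50:.2f} mg/mL")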
14,977
362
[ 2990, 153, 114, 116, 101, 99, 54, 66, 69, 65, 255, 306, 56, 5892, 530, 417, 312, 722, 918, 759, 81 ]
22
[ "mi", "cells", "laryngeal", "cancer", "laryngeal cancer", "twist", "hep", "hep cells", "expression", "cell" ]
[ "laryngeal carcinoma development", "emt laryngeal cancer", "transition laryngeal cancer", "inhibits laryngeal carcinoma", "laryngeal cancer cells" ]
null
null
[CONTENT] laryngeal cancer | progression | EMT | magnesium isoglycyrrhizinate | Twist | NF-κB signaling [SUMMARY]
null
null
[CONTENT] laryngeal cancer | progression | EMT | magnesium isoglycyrrhizinate | Twist | NF-κB signaling [SUMMARY]
[CONTENT] laryngeal cancer | progression | EMT | magnesium isoglycyrrhizinate | Twist | NF-κB signaling [SUMMARY]
[CONTENT] laryngeal cancer | progression | EMT | magnesium isoglycyrrhizinate | Twist | NF-κB signaling [SUMMARY]
[CONTENT] Animals | Apoptosis | Cell Proliferation | Epithelial-Mesenchymal Transition | Humans | Laryngeal Neoplasms | Mice | Mice, Inbred BALB C | Mice, Nude | NF-kappa B | Neoplasms, Experimental | Nuclear Proteins | Saponins | Signal Transduction | Triterpenes | Tumor Cells, Cultured | Twist-Related Protein 1 [SUMMARY]
null
null
[CONTENT] Animals | Apoptosis | Cell Proliferation | Epithelial-Mesenchymal Transition | Humans | Laryngeal Neoplasms | Mice | Mice, Inbred BALB C | Mice, Nude | NF-kappa B | Neoplasms, Experimental | Nuclear Proteins | Saponins | Signal Transduction | Triterpenes | Tumor Cells, Cultured | Twist-Related Protein 1 [SUMMARY]
[CONTENT] Animals | Apoptosis | Cell Proliferation | Epithelial-Mesenchymal Transition | Humans | Laryngeal Neoplasms | Mice | Mice, Inbred BALB C | Mice, Nude | NF-kappa B | Neoplasms, Experimental | Nuclear Proteins | Saponins | Signal Transduction | Triterpenes | Tumor Cells, Cultured | Twist-Related Protein 1 [SUMMARY]
[CONTENT] Animals | Apoptosis | Cell Proliferation | Epithelial-Mesenchymal Transition | Humans | Laryngeal Neoplasms | Mice | Mice, Inbred BALB C | Mice, Nude | NF-kappa B | Neoplasms, Experimental | Nuclear Proteins | Saponins | Signal Transduction | Triterpenes | Tumor Cells, Cultured | Twist-Related Protein 1 [SUMMARY]
[CONTENT] laryngeal carcinoma development | emt laryngeal cancer | transition laryngeal cancer | inhibits laryngeal carcinoma | laryngeal cancer cells [SUMMARY]
null
null
[CONTENT] laryngeal carcinoma development | emt laryngeal cancer | transition laryngeal cancer | inhibits laryngeal carcinoma | laryngeal cancer cells [SUMMARY]
[CONTENT] laryngeal carcinoma development | emt laryngeal cancer | transition laryngeal cancer | inhibits laryngeal carcinoma | laryngeal cancer cells [SUMMARY]
[CONTENT] laryngeal carcinoma development | emt laryngeal cancer | transition laryngeal cancer | inhibits laryngeal carcinoma | laryngeal cancer cells [SUMMARY]
[CONTENT] mi | cells | laryngeal | cancer | laryngeal cancer | twist | hep | hep cells | expression | cell [SUMMARY]
null
null
[CONTENT] mi | cells | laryngeal | cancer | laryngeal cancer | twist | hep | hep cells | expression | cell [SUMMARY]
[CONTENT] mi | cells | laryngeal | cancer | laryngeal cancer | twist | hep | hep cells | expression | cell [SUMMARY]
[CONTENT] mi | cells | laryngeal | cancer | laryngeal cancer | twist | hep | hep cells | expression | cell [SUMMARY]
[CONTENT] cancer | emt | laryngeal | twist | laryngeal cancer | mi | anti | metastasis | stem cell | stem [SUMMARY]
null
null
[CONTENT] isoglycyrrhizinate | magnesium | magnesium isoglycyrrhizinate | laryngeal | laryngeal cancer | cancer | induced inhibitory effect epithelial | isoglycyrrhizinate applied | vivo magnesium | provide new insights mechanism [SUMMARY]
[CONTENT] mi | cells | cancer | laryngeal | laryngeal cancer | twist | 1000 | 1000 abcam uk | 1000 abcam | abcam uk [SUMMARY]
[CONTENT] mi | cells | cancer | laryngeal | laryngeal cancer | twist | 1000 | 1000 abcam uk | 1000 abcam | abcam uk [SUMMARY]
[CONTENT] Glycyrrhiza ||| EMT [SUMMARY]
null
null
[CONTENT] EMT | the NF-κB/Twist ||| [SUMMARY]
[CONTENT] Glycyrrhiza ||| EMT ||| ||| MTT ||| transwell assays ||| ||| Balb/c nude ||| qPCR | Western ||| ||| EMT | Twist ||| Twist ||| Twist ||| ||| NF-κB | Twist | EMT ||| EMT ||| EMT | the NF-κB/Twist ||| [SUMMARY]
[CONTENT] Glycyrrhiza ||| EMT ||| ||| MTT ||| transwell assays ||| ||| Balb/c nude ||| qPCR | Western ||| ||| EMT | Twist ||| Twist ||| Twist ||| ||| NF-κB | Twist | EMT ||| EMT ||| EMT | the NF-κB/Twist ||| [SUMMARY]
Measles outbreak investigation in an urban slum of Kaduna Metropolis, Kaduna State, Nigeria, March 2015.
31303921
Despite the availability of an effective vaccine, measles epidemics continue to occur in Nigeria. In February 2015, we investigated a suspected measles outbreak in Rigasa, an urban slum of Kaduna State, Nigeria. The study aimed to confirm the outbreak, determine the risk factors and implement appropriate control measures.
INTRODUCTION
We identified cases through an active search and a review of health records. We conducted an unmatched case-control (1:1) study involving 75 randomly sampled under-5 cases and 75 neighborhood controls. We interviewed the caregivers of these children using a structured questionnaire to collect information on sociodemographic characteristics and the vaccination status of the children. We collected 15 blood samples for measles IgM testing by enzyme-linked immunosorbent assay. Descriptive, bivariate and logistic regression analyses were performed using Epi Info software. Confidence intervals were set at 95%.
METHODS
We recorded 159 cases with two deaths (case fatality rate = 1.3%). Eighty (50.3%) of the cases were male. Of the 15 serum samples, 11 (73.3%) were confirmed IgM positive for measles. Compared with the controls, the cases were more likely to have had no or incomplete routine immunization (RI) [adjusted odds ratio (AOR) (95% confidence interval, CI): 28.3 (2.1, 392.0)], to have had contact with measles cases [AOR (95% CI): 7.5 (2.9, 19.7)], and to have a caregiver younger than 20 years [AOR (95% CI): 5.2 (1.2, 22.5)].
RESULTS
We identified low RI uptake and contact with measles cases as predictors of the measles outbreak in Rigasa, Kaduna State. We recommended strengthening RI and educating caregivers on completing the RI schedule.
CONCLUSION
[ "Caregivers", "Case-Control Studies", "Child, Preschool", "Disease Outbreaks", "Enzyme-Linked Immunosorbent Assay", "Epidemics", "Female", "Humans", "Immunoglobulin M", "Infant", "Infant, Newborn", "Logistic Models", "Male", "Measles", "Measles Vaccine", "Nigeria", "Poverty Areas", "Risk Factors", "Surveys and Questionnaires", "Vaccination" ]
6607246
Introduction
Measles is an acute, highly contagious, vaccine-preventable viral disease that usually affects younger children. Transmission is primarily person-to-person via aerosolized droplets or by direct contact with the nasal and throat secretions of infected persons [1, 2]. The incubation period is 10-14 days (range, 8-15 days) from exposure to onset of rash, and the individual becomes contagious before the rash erupts. In 2014, the World Health Organization (WHO) reported 266,701 measles cases globally, with 145,700 measles deaths [1]. Being unvaccinated against measles is a risk factor for contracting the disease [3]. Other factors responsible for measles outbreaks and transmission in developing countries are lack of parental awareness of the importance of vaccination and of compliance with the routine immunization schedule, household overcrowding with easy contact with someone with measles, acquired or inherited immunodeficiency states, and malnutrition [4-6]. During outbreaks, the measles case fatality rate (CFR) in developing countries is normally estimated at 3-5%, but may reach 10-30%, compared with the 0.1% reported from industrialized countries [2]. Malnutrition, poor supportive case management and complications such as pneumonia, diarrhea, croup and central nervous system involvement are responsible for the high measles CFR [7, 8]. Nigeria is second only to India among the ten countries with the largest numbers of children unvaccinated against measles, accounting for 2.7 million of the 21.5 million children globally who had received no dose of measles-containing vaccine (MCV1) in 2013 [9]. Measles is one of the top ten causes of childhood morbidity and mortality, with recurrent episodes common in Northern Nigeria in the first quarter of each year [10, 11]. In 2012, 2,553 measles cases were reported in Nigeria, an increase from the 390 cases reported in 2006 [10]. In line with the regional strategic plan, Nigeria planned to eliminate measles by 2020 by strengthening routine immunization, conducting bi-annual measles immunization campaigns to provide a second opportunity for vaccination, carrying out epidemiologic surveillance with laboratory confirmation of cases, and improving case management, including vitamin A supplementation. This is meant to improve measles vaccination coverage from 51% as of 2014 [12] to the 95% needed for effective herd immunity. In October 2013, following a measles outbreak in 19 states in Northern Nigeria, a mass measles vaccination campaign was conducted. The first reported case of suspected measles in Rigasa community, an urban slum of Kaduna Metropolis, occurred on 5th January 2015 in an unimmunized 10-month-old child. The District or Local Government Area (LGA) Disease Surveillance and Notification Officer (DSNO) notified the State Epidemiologist and State DSNO on 10th February 2015, when the reported cases reached the epidemic threshold. We investigated to confirm this outbreak, determine the risk factors for contracting the infection and implement appropriate control measures. This paper describes the epidemiological methods employed in the investigation, summarizes the key findings and highlights the public health actions undertaken.
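The approximately 95% coverage figure cited above follows from the herd immunity threshold implied by the basic reproduction number of measles. The worked relation below is a general epidemiological illustration, not a calculation from this investigation; the R0 range of 12-18 is the commonly quoted value for measles and is assumed here rather than estimated from the Rigasa data.

```latex
% Herd immunity threshold (HIT) as a function of the basic reproduction number R_0.
% R_0 = 12--18 is the commonly quoted range for measles (assumed, not measured in this study).
\[
  \mathrm{HIT} = 1 - \frac{1}{R_0}, \qquad
  R_0 = 12 \Rightarrow \mathrm{HIT} \approx 0.92, \qquad
  R_0 = 18 \Rightarrow \mathrm{HIT} \approx 0.94
\]
% Allowing for some primary vaccine failure, the operational coverage target rises to roughly 95%.
```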
Methods
Study site and study population: Rigasa is a densely populated urban slum in the southwest of Igabi LGA, Kaduna State, North-West Nigeria. It has an estimated 59,906 households with about 14,156 children under one year of age. The settlement has three health facilities rendering RI services. The community is noted for poor utilization of RI services and has rejected polio supplemental immunization services in the past. Measles outbreaks have previously been reported from this community. The last measles supplementary immunization activity (SIA) was conducted from 5th to 9th October 2013. Descriptive epidemiology (quantitative): in this investigation, a suspected measles case was any person living in Rigasa community, from January 2015 when the index case was reported to March 2015, with generalized maculopapular rash, fever, and at least one of the following: cough, coryza or conjunctivitis, or in whom a physician suspected measles. A confirmed case was any suspected case with a positive measles IgM test or an epidemiological link to a laboratory-confirmed case living in the same community during the same period. We actively searched for cases in the community, where measles is locally known as “Bakon dauro”. We interviewed and physically examined some cases at the treatment center to verify the diagnosis and ensure that they met the case definition. We developed a line list to collect information from all suspected cases on their age, sex, residence, time of onset, migration history and immunization status. We analyzed the line-list data to characterize the outbreak in time, place and person, and to develop a plausible hypothesis for measles transmission in the community. We conducted in-depth interviews with health workers rendering routine immunization services to ascertain the utilization of RI services by under-five children in the community. Case-control study: we conducted an unmatched case-control study. A suspected case was any child aged 0-59 months residing in Rigasa with a history of fever, rash and at least one of the following: cough, coryza or conjunctivitis, or in whom a physician suspected measles; a confirmed case was a suspected case positive for measles IgM by enzyme-linked immunosorbent assay (ELISA); and a probable case was a suspected case with an epidemiological link to a laboratory-confirmed measles case. A control was any child aged 0-59 months residing in the same community but without signs and symptoms of measles. We enrolled 75 cases and 75 controls to detect an odds ratio of 3 (for a risk factor on which intervention would have a significant impact), assuming a 21.2% prevalence of exposure among controls [4], with a 95% CI and 80% power. The sample size was determined using the StatCalc function of the Epi Info statistical software. We selected and recruited the cases consecutively from among the patients who presented at the health facilities and in the community. The controls were selected from the community; each control was selected from the 3rd homestead to the right of the household of a case. We used structured questionnaires to collect data on demographic characteristics, exposures, vaccination status and associated factors from both cases and controls, and clinical information from the cases only. Sample collection and laboratory analysis: we collected blood samples from 15 suspected cases for measles serum IgM determination by ELISA at the WHO regional laboratory in Kaduna. Data management: we entered the data into Epi Info statistical software and performed univariate analysis to obtain frequencies and proportions, and bivariate analysis to obtain odds ratios and determine associations, setting a p-value of 0.05 as the cut-off for statistical significance. We also performed unconditional logistic regression to adjust for possible confounders and identify independent factors for contracting measles infection. Factors that were significant at p < 0.05 in the bivariate analysis and biologically plausible variables such as age and sex were included in the model. In the final model, only variables found to significantly affect the outcome at p < 0.05 were retained. Data management was done using Epi Info™ software 3.5.3 (CDC, Atlanta, USA) and Microsoft Excel. For the qualitative study, the interview content was analyzed thematically. Ethical considerations: ethical approval was not obtained, as the study was conducted as part of an outbreak response. However, permission to conduct the study was granted by the State Primary Health Care Agency, the District or LGA Director of Primary Health Care and the district head of Rigasa community. Informed consent was obtained from all respondents interviewed. Confidentiality of all subjects was assured and maintained during and after the study.
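The 75-per-group figure was obtained with the StatCalc function of Epi Info. The sketch below is a rough, illustrative re-derivation in Python (not the authors' tool) of a per-group sample size under the stated assumptions: an odds ratio of 3, 21.2% exposure among controls, 95% confidence and 80% power. It uses the standard two-proportion normal approximation with a Fleiss continuity correction; depending on the exact formula and rounding Epi Info applies, the result lands in the low seventies per group, close to the 75 that were enrolled.

```python
from math import sqrt
from statistics import NormalDist


def case_control_sample_size(p0, odds_ratio, alpha=0.05, power=0.80):
    """Per-group sample size for an unmatched 1:1 case-control study,
    using the two-proportion normal approximation with a Fleiss
    continuity correction. p0 is the exposure prevalence among controls."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    # Exposure prevalence among cases implied by the target odds ratio
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
    p_bar = (p0 + p1) / 2
    # Uncorrected two-proportion formula
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2 / (p1 - p0) ** 2
    # Fleiss continuity correction
    n_cc = n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p0)))) ** 2
    return p1, n, n_cc


if __name__ == "__main__":
    p1, n, n_cc = case_control_sample_size(p0=0.212, odds_ratio=3)
    print(f"Exposure among cases implied by OR = 3: {p1:.3f}")   # ~0.447
    print(f"Per-group n, uncorrected: {n:.1f}")                  # ~62
    print(f"Per-group n, continuity-corrected: {n_cc:.1f}")      # ~70 (the study enrolled 75)
```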
Results
Descriptive epidemiology (quantitative): a total of 159 cases with two deaths (CFR = 1.3%) were identified. Eighty (50.3%) were male. Cumulatively, under-five children accounted for 90% of cases. The mean age of cases was 32 (± 22.5) months, but the 0-11 months age group was severely affected, with an age-specific death rate of 5% (Table 1). The index case for this outbreak was a 10-month-old female child who developed a maculopapular rash on 5th January 2015. She was unvaccinated against measles and had never received any immunization against vaccine-preventable diseases according to the Expanded Programme on Immunization (EPI) schedule. She had no history of contact with someone with measles and had not travelled out of the community in the preceding month. She was managed as an outpatient and died 3 days later. Figure 1 shows the epidemic curve of the outbreak. The epidemic curve has a propagated pattern with four peaks, the highest on 10th March 2015. The outbreak spanned from 5th January to 4th April 2015. Table 1: Age distribution of measles cases and case fatality rate (CFR) in Rigasa community, March 2015. Figure 1: Epidemic curve of the measles outbreak in Rigasa community, weeks 1 to 15, 2015. Descriptive epidemiology (qualitative evaluation of RI services): content analysis of in-depth interviews with three health workers rendering routine immunization services revealed that caregivers usually utilized RI services in the first 6-10 weeks of life, that there had been no stock-out of measles vaccine in the last 6 months, and that most caregivers failed to comply with the EPI schedule. Also, all planned RI sessions had been held in the past 3 months, there was no vaccine stock-out in the 6 months prior to the outbreak, and the facilities had a functioning cold chain system. Analytic study: the total population sampled was 150 (75 cases, 75 controls). The mean ages of cases and controls were 33.2 (± 21) and 37.6 (± 29) months, respectively. Males comprised 42 (56%) of the cases and 37 (49%) of the controls. Among the cases, 15 (20%) were vaccinated against measles, compared with 23 (30.7%) of the controls. Among the 112 children unvaccinated against measles, the attack rate was 54%. Cumulatively, 41 (27%) were vaccinated against measles. Only 1 (1.3%) case, compared with 12 (16%) controls, had completed the Expanded Programme on Immunization schedule (Table 2). In the bivariate analysis, cases were more likely than controls to have had no or incomplete RI [OR (95% CI): 14.0 (1.8, 111.4)]; not to have received measles vaccination [OR (95% CI): 2.0 (0.8, 3.7)]; to have had close contact with measles cases [OR (95% CI): 6.0 (2.7, 11.2)]; and to have caregivers younger than 20 years [OR (95% CI): 2.6 (1.0, 6.8)] (Table 3). In the logistic regression model, independent predictors of measles transmission in Rigasa, an urban slum in Kaduna metropolis, were no or incomplete routine immunization (RI) [adjusted odds ratio (AOR) (95% CI): 28.3 (2.1, 392.0)], being unvaccinated against measles [AOR (95% CI): 1.8 (0.8, 3.7)], recent contact with measles cases [AOR (95% CI): 7.5 (2.9, 19.7)], and having a caregiver younger than 20 years [AOR (95% CI): 5.2 (1.2, 22.5)] (Table 4). Table 2: Characteristics of measles cases and controls in Rigasa community, March 2015. Table 3: Factors that may be responsible for the measles outbreak in Rigasa community, Igabi LGA, Kaduna State, March 2015. Table 4: Factors responsible for measles outbreak transmission in Rigasa community, Kaduna metropolis, Kaduna State, March 2015. Laboratory findings: eleven (73%) of the 15 serum samples were confirmed IgM positive for measles. Two samples that were negative for measles IgM were also negative for rubella IgM; one was indeterminate, but there was an epidemiological link to a confirmed case of measles. The result of the last sample could not be ascertained.
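The crude odds ratios in Table 3 follow from the 2×2 counts given above. The sketch below is a minimal Python illustration, not the authors' Epi Info analysis: it computes the crude odds ratio and a Woolf (log-normal) 95% confidence interval for measles vaccination status from the reported counts (15/75 cases and 23/75 controls vaccinated). It reproduces the (0.8, 3.7) interval reported for "not vaccinated", with a point estimate close to the 2.0 in the table; the simple CFR and attack-rate arithmetic quoted in the Results is included as comments.

```python
from math import exp, log, sqrt
from statistics import NormalDist


def crude_odds_ratio(a, b, c, d, alpha=0.05):
    """Crude odds ratio and Woolf (log-normal) confidence interval for a 2x2 table:
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)


if __name__ == "__main__":
    # Exposure = "not vaccinated against measles".
    # Cases: 75 total, 15 vaccinated  -> 60 unvaccinated.
    # Controls: 75 total, 23 vaccinated -> 52 unvaccinated.
    or_, lo, hi = crude_odds_ratio(a=60, b=15, c=52, d=23)
    print(f"Crude OR = {or_:.2f}, 95% CI ({lo:.1f}, {hi:.1f})")   # ~1.77, (0.8, 3.7)

    # Simple arithmetic behind two other figures quoted in the Results:
    cfr = 2 / 159            # case fatality rate, ~1.3%
    attack_rate = 60 / 112   # attack rate among the 112 unvaccinated children, ~54%
    print(f"CFR = {cfr:.1%}, attack rate among unvaccinated = {attack_rate:.0%}")
```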
Conclusion
We confirmed that there was a measles outbreak in Rigasa community. Low measles vaccine coverage, as a result of poor uptake of RI, was responsible for the outbreak of measles infection in the community. This resulted in the accumulation of susceptible children, thus lowering herd immunity against measles infection. This study also suggests that children of younger caregivers were more affected by measles infection during this outbreak. The poor housing conditions and overcrowding in this community greatly fueled the outbreak, supporting the evidence that close contact with a measles case is a risk factor for measles transmission. A major public health implication of this study is the need to strengthen RI services. We therefore recommend that the state ministry of health increase demand creation for RI services through more sensitization and education of caregivers. Additionally, health workers should encourage and motivate caregivers who access RI services to complete the EPI schedule to prevent vaccine-preventable diseases. Engaging caregivers who have completely immunized their children according to the EPI schedule as community role models could be a good strategy to motivate other caregivers to access and complete RI services. What is known about this topic: Measles is a highly contagious vaccine-preventable viral disease that usually affects younger children; several factors, such as low coverage for measles antigens and overcrowding, have been noted to be risk factors for measles outbreaks. What this study adds: We found children less than one year old to be severely affected by measles, with the highest case fatality; measles vaccination coverage in Rigasa was 27%, less than the reported national coverage of 51%; this study also reveals that younger caregivers, aged 20 years or less, compared with older ones, are more likely to have children with measles.
[ "What is known about this topic", "What this study adds" ]
[ "Measles is a highly contagious vaccine preventable viral disease that usually affects younger children;\nSeveral factors such as low coverage for measles antigens and overcrowding have been noted to be risk factors for measles outbreak.", "We found children less than one year to be severely affected by measles and having the highest case-fatality;\nIn this investigation, we found measles vaccination coverage to be 27% in this Rigasa; this is less than the reported national coverage of 51%;\nThis study also reveals that younger caregivers who are 20 years or less compared to older ones, are more likely to have children with measles." ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds", "Competing interests" ]
[ "Measles is an acute, highly contagious vaccine preventable viral disease which usually affects younger children. Transmission is primarily person-to-person via aerosolized droplets or by direct contact with the nasal and throat secretions of infected persons [1, 2]. Incubation period is 10-14 days (range, 8-15 days) from exposure to onset of rash, and the individual becomes contagious before the eruption of rashes. In 2014, World Health Organization (WHO) reported 266,701 measles cases globally with 145,700 measles deaths [1]. Being unvaccinated against measles is a risk factor for contracting the disease [3]. Other factors responsible for measles outbreak and transmissions in developing countries are; lack of parental awareness of vaccination importance and compliance with routine immunization schedule, household overcrowding with easy contact with someone with measles, acquired or inherited immunodeficiency states and malnutrition [4-6]. During outbreaks, measles case fatality rate (CFR) in developing countries are normally estimated to be 3-5%, but may reach 10-30% compared with 0.1% reported from industrialized countries [2]. Malnutrition, poor supportive case management and complications like pneumonia, diarrhea, croup and central nervous system involvement are responsible for high measles CFR [7, 8]. Nigeria is second to India among ten countries with large number of unvaccinated children for measles, and has 2.7 million of the 21.5 million children globally that have zero dose for measles containing vaccine (MCV1) in 2013 [9]. Measles is one of the top ten causes of childhood morbidity and mortality with recurrent episodes common in Northern Nigeria at the first quarter of each year [10, 11]. In 2012, 2,553 measles cases were reported in Nigeria, an increase from 390 cases reported in 2006 [10].\nIn line with regional strategic plan, Nigeria planned to eliminate measles by 2020 by strengthening routine immunization, conduct bi-annual measles immunization campaign for second opportunity, epidemiologic surveillance with laboratory confirmation of cases and improve case management including Vitamin A supplementation. This is meant to improve measles coverage from the present 51% as at 2014 [12] to 95% needed for effective herd immunity. In October 2013, following measles outbreak in 19 States in Northern Nigeria, mass measles vaccination campaign was conducted. The first reported case of a suspected measles in Rigasa community, an urban slum of Kaduna Metropolis occurred on 5th of January 2015 in an unimmunized 10 months old child. The District or Local Government Area (LGA), Disease Surveillance and Notification Officer (DSNO) notified the State Epidemiologist and State DSNO on 10th February, 2015 when the reported cases reached an epidemic threshold. We investigated to confirm this outbreak, determine the risk factors for contracting infection and implement appropriate control measures. This paper describes the epidemiological methods employed in the investigation, summarizes the key findings and highlights the public health actions undertaken.", "Study site and study population: Rigasa is a densely populated urban slum in the south west of Igabi LGA, in Kaduna State, North-West Nigeria. It has an estimated 59,906 households with about 14,156 under-one children. The settlement has three health facilities rendering RI services. The community is noted for poor utilization of RI services and has rejected polio supplemental immunization services in the past. 
Measles outbreaks have been previously reported from this community. The Last measles supplementary immunization activities (SIA) was conducted from 5th to 9th October, 2013.\nDescriptive epidemiology-quantitative: in this investigation, a suspected measles case was, any person with generalized maculopapular rash, fever, and at least one of the following; cough, coryza or conjunctivitis or in whom a physician suspected measles, living in Rigasa community, from January 2015 when the index case was reported to March 2015. A confirmed case was, any suspected case with measles IgM positive test or an epidemiological link to a laboratory confirmed case living in the same community at same period. We actively search for cases in the community where measles is locally known as “Bakon dauro”. We interviewed and physically examined some cases at the treatment center to verify diagnosis and ensure that they met the case definition. We developed a line-list to collect information from all suspected cases on their age, sex, residence and time of onset, migration history and immunization status. We analyzed the line-list data to characterize the outbreak in time, place and person, and to develop a plausible hypothesis for measles transmission in the community. We conducted an in-depth interview with health workers rendering routine immunization services to ascertain utilization of RI services by under five-children in the community.\nCase-control study: we conducted an unmatched case-control study. A suspected case is, any child aged 0-59 months residing in Rigasa with history of fever, rash and at least one of the following: cough, coryza or conjunctivitis or in whom a physician suspected measles; a confirmed case is, positive to measles IgM using enzyme-linked immunosorbent assay (ELISA); and a probable case if there was epidemiological link to a laboratory confirmed measles case. A control was, any child 0-59 months residing in the same community but without the signs and symptoms of measles. We enrolled 75 cases and 75 controls to identify an odds ratio of 3 (for a risk factor on which intervention would have a significant impact), assuming 21.2% prevalence of exposure among control [4], with 95% CI and power of 80%. The sample size was determined using the Statcal function of Epi-Info statistical software. We selected and recruited the cases consecutively from among the patients that presented at the health facility and in the community. The controls were selected from the community; each control was selected from the 3rd homestead to the right of the household of a case. We used structured questionnaires to collect data on demographic characteristics, exposures, vaccination status and associated factors from both cases and controls, and clinical information from the cases only.\nSample collection and laboratory analysis: we collected blood samples for 15 suspected cases for measles serum IgM determination using ELISA method at the WHO regional laboratory at Kaduna.\nData management: we entered data into Epi-Info statistical software and performed univariate analysis to obtain frequencies and proportions, and bivariate analysis to obtain odds ratios and determine associations, setting p-value of 0.05 as the cut-off for statistical significance. We also performed unconditional logistic regression to adjust for possible confounders and identify the independent factors for contracting measles infection. 
Factors that were significant at p < 0.05 in the bivariate analysis and biological plausible variable such as age and sex were included in the model. In the final model, only variables that were found to significantly affect the outcome at P < 0.05 were retained. Data management was done using Epi InfoTM software 3.5.3 (CDC Atlanta, USA), and Microsoft Excel. For qualitative study, content analysis was thematically analyzed.\nEthical consideration: ethical approval was not obtained as study was conducted as part of an outbreak response. However, permission to conduct the study was granted by the State Primary Health Care Agency, District or LGA Director of Primary Health Care and District head of Rigasa community. Informed consent was obtained from all respondents interviewed. Confidentiality of all the subjects was assured and maintained during and after the study.", "Descriptive epidemiology-quantitatively: a total of 159 cases with two deaths (CFR = 1.3%) were identified. 80 (50.3%) were male. Cumulatively, under-five children accounted for 90% of cases. The mean age of cases was 32 (± 22.5) months but the age group of 0-11 months were severely affected with age specific death rate of 5% (Table 1). The index case for this outbreak was a 10 months old female child that had maculopapular rash on 5th January, 2015. She was unvaccinated for measles and never had any immunization for vaccine preventable diseases according to Expanded Programme on Immunization (EPI) schedule. She had no history of contact with someone with measles and had not travelled out of the community in the last one month. She was managed as an outpatient and died 3 days later. Figure 1 shows the epidemic curve of the outbreak. The epidemic curve has a propagated pattern with four peaks, the highest on 10th March, 2015. The outbreak spanned from 5th January to 4th April, 2015.\nAge distributions of measles cases and case fatality rate (CFR) in Rigasa community, March 2015\nEpidemic curve of measles outbreak in Rigasa community, week 1 to 15, 2015\nDescriptive epidemiology-qualitative evaluation of RI services: content analysis of in depth interview of three health workers rendering routine immunization services revealed that caregivers' usually utilized RI services in the first 6-10 weeks of life, no stock out of measles vaccine in last 6 months and most caregivers failed to comply with EPI schedule. Also, all planned RI sessions were held in the past 3 months and there was no vaccine stock out in 6 months prior to the outbreak and the facilities have functioning cold chain system.\nAnalytic study: total population sampled was 150 (cases 75, control 75). The mean age for cases and controls was 33.2 (± 21) and 37.6 (± 29) months. Males were 42 (56%) cases and 37(49%) controls. Among the cases, 15 (20%) was vaccinated for measles compared to 23 (30.7%) of the control. Among the 112 children unvaccinated for measles, attack rate was 54%. Cumulatively, 41(27%) were vaccinated for measles. Only 1 (1.3%) case compared to 12 (16%) of controls had completed the Expanded Programme on Immunization schedule (Table 2). From bivariate analysis, cases were more likely than control, to have had none or incomplete RI [OR (95% CI)]:14.0 (1.8, 111.4); not to have received measles vaccination [OR (95% CI)]:2.0 (0.8, 3.7); to have had close contact with measles cases [OR (95% CI)]:6.0 (2.7, 11.2); and to have caregivers who were younger than 20 years [OR (95% CI)]:2.6 (1, 6.8) (Table 3). 
From modelling, independent predictors of measles transmission in Rigasa, an urban slum in Kaduna metropolis, were no or incomplete routine immunization (RI) [adjusted odds ratio (AOR) (95% CI)]: 28.3 (2.1, 392.0), being unvaccinated against measles [AOR (95% CI)]: 1.8 (0.8, 3.7), recent contact with measles cases [AOR (95% CI)]: 7.5 (2.9, 19.7), and having a caregiver younger than 20 years [AOR (95% CI)]: 5.2 (1.2, 22.5) (Table 4).\nTable 2: Characteristics of measles cases and controls in Rigasa community, March 2015.\nTable 3: Factors that may be responsible for the measles outbreak in Rigasa community, Igabi LGA, Kaduna State, March 2015.\nTable 4: Factors responsible for measles outbreak transmission in Rigasa community, Kaduna metropolis, Kaduna State, March 2015.\nLaboratory findings: eleven (73%) of the 15 serum samples were confirmed IgM positive for measles. Two samples that were negative for measles IgM were also negative for rubella IgM; one was indeterminate, but there was an epidemiological link to a confirmed case of measles. The result of the last sample could not be ascertained.", "This laboratory-confirmed measles outbreak caused severe morbidity in a densely populated urban slum community in Kaduna metropolis where substandard housing and poor living conditions existed. In addition, the community, like most cosmopolitan settlements in Northern Nigeria, has witnessed poor uptake of RI for all antigens. In this investigation, measles vaccination coverage was 27%; this is lower than the estimated national coverage of 51% [12]. The outbreak of measles in Rigasa spanned from 5th January to 4th April 2015, weeks after the outbreak investigation and response. This prolonged spread of the infection could be due to a lack of herd immunity in the community. The case fatality rate of 1.3% in this study is similar to the 1.5% reported in Lagos, a cosmopolitan city in Nigeria, but lower than the 3.9% reported in Bayelsa, South-South Nigeria, and the 6.9% reported in a rural community in Southwestern Nigeria [8, 11, 13]. The lower CFR reported in this study could be due to early presentation of affected children to health facilities and improved case management. During the outbreak, the state primary health care agency supplied drugs, including vitamin A, to health facilities at no cost for the treatment of cases. Although measles usually affects under-five children [11], in this study we found children under one year of age to be severely affected, with a CFR of 5%. In developing countries, malnutrition, lack of supportive care, crowding and poorly managed complications of measles have been implicated as causes of high CFR [8]. Routine immunization is significantly associated with a reduction in measles infection among vaccinated individuals. We found that children who had no or incomplete vaccination according to the EPI schedule were fourteen times more likely to have measles infection (95% CI: 1.8-111.4). In this community, only 8.6% of the children had completed the EPI schedule. This was far below the national target of at least 87% RI coverage, with no fewer than 90% of LGAs reaching at least 80% of infants with the complete schedule of routine antigens by 2015 [14]. Countries with a single-dose measles schedule tend to be poor and least developed, report the lowest routine vaccination coverage and experience a high measles disease burden [15]. During the study period, the EPI schedule specified a single dose of measles antigen administered to a child at age nine months. Apart from the country's low measles vaccination coverage, primary vaccine failure occurs in 25% of children vaccinated at 9 months, who are thus unable to develop protective humoral antibodies against measles virus [10]. We also found that caregivers who were 20 years old or younger were more likely to have children with measles. This could be attributed to a lack of knowledge about the importance of routine immunization and childcare. Caregivers in this community usually present their children for immunization according to the EPI schedule in the first 3 months of life but fail to complete the schedule, thereby missing measles vaccination at the ninth month. The reasons put forward by caregivers for why their children missed measles vaccination ranged from being unaware of the EPI schedule for measles, to competing priorities, to adverse events following immunization (AEFI). These reasons are similar to those cited in previous literature [8, 16]. In this outbreak, the epidemic curve revealed a propagated pattern, which affirms that the disease was transmitted from person to person. Rigasa is a densely populated urban slum community, and this allowed for the spread of measles. Overcrowding is an important risk factor in the transmission of respiratory infections [17]; and measles being a highly contagious disease, recent contact and overcrowding are risk factors for disease transmission during measles outbreaks [1, 3]. This outbreak response had one major limitation: a delay in reactive vaccination in the community after the investigation, due to fear of potential post-election violence. The investigation was conducted just before the 2015 national election, and past elections in Nigeria, especially the preceding 2011 election, were marred by post-election violence. However, during the investigation, with support from the traditional leaders and government, we were able to implement the following public health actions: educating the community on the importance of measles vaccination, prompt identification of cases in the community, and referral to health facilities where drugs supplied by the state government for free treatment of cases were available. A reactive measles vaccination campaign was carried out in the community after the 2015 national election.", "We confirmed that there was a measles outbreak in Rigasa community. Low measles vaccine coverage, as a result of poor uptake of RI, was responsible for the outbreak of measles infection in the community. This resulted in the accumulation of susceptible children, thus lowering herd immunity against measles infection. This study also suggests that children of younger caregivers were more affected by measles infection during this outbreak. The poor housing conditions and overcrowding in this community greatly fueled the outbreak, supporting the evidence that close contact with a measles case is a risk factor for measles transmission. A major public health implication of this study is the need to strengthen RI services. We therefore recommend that the state ministry of health increase demand creation for RI services through more sensitization and education of caregivers. Additionally, health workers should encourage and motivate caregivers who access RI services to complete the EPI schedule to prevent vaccine-preventable diseases. Engaging caregivers who have completely immunized their children according to the EPI schedule as community role models could be a good strategy to motivate other caregivers to access and complete RI services.\nWhat is known about this topic: Measles is a highly contagious vaccine-preventable viral disease that usually affects younger children;\nSeveral factors, such as low coverage for measles antigens and overcrowding, have been noted to be risk factors for measles outbreaks.\nWhat this study adds: We found children less than one year old to be severely affected by measles, with the highest case fatality;\nIn this investigation, we found measles vaccination coverage to be 27% in Rigasa; this is less than the reported national coverage of 51%;\nThis study also reveals that younger caregivers, aged 20 years or less, compared with older ones, are more likely to have children with measles.", "Measles is a highly contagious vaccine-preventable viral disease that usually affects younger children;\nSeveral factors, such as low coverage for measles antigens and overcrowding, have been noted to be risk factors for measles outbreaks.", "We found children less than one year old to be severely affected by measles, with the highest case fatality;\nIn this investigation, we found measles vaccination coverage to be 27% in Rigasa; this is less than the reported national coverage of 51%;\nThis study also reveals that younger caregivers, aged 20 years or less, compared with older ones, are more likely to have children with measles.", "The authors declare no competing interests." ]
[ "intro", "methods", "results", "discussion", "conclusion", null, null, "COI-statement" ]
[ "Measles", "outbreak investigation", "routine immunization", "urban slum" ]
Introduction: Measles is an acute, highly contagious vaccine preventable viral disease which usually affects younger children. Transmission is primarily person-to-person via aerosolized droplets or by direct contact with the nasal and throat secretions of infected persons [1, 2]. Incubation period is 10-14 days (range, 8-15 days) from exposure to onset of rash, and the individual becomes contagious before the eruption of rashes. In 2014, World Health Organization (WHO) reported 266,701 measles cases globally with 145,700 measles deaths [1]. Being unvaccinated against measles is a risk factor for contracting the disease [3]. Other factors responsible for measles outbreak and transmissions in developing countries are; lack of parental awareness of vaccination importance and compliance with routine immunization schedule, household overcrowding with easy contact with someone with measles, acquired or inherited immunodeficiency states and malnutrition [4-6]. During outbreaks, measles case fatality rate (CFR) in developing countries are normally estimated to be 3-5%, but may reach 10-30% compared with 0.1% reported from industrialized countries [2]. Malnutrition, poor supportive case management and complications like pneumonia, diarrhea, croup and central nervous system involvement are responsible for high measles CFR [7, 8]. Nigeria is second to India among ten countries with large number of unvaccinated children for measles, and has 2.7 million of the 21.5 million children globally that have zero dose for measles containing vaccine (MCV1) in 2013 [9]. Measles is one of the top ten causes of childhood morbidity and mortality with recurrent episodes common in Northern Nigeria at the first quarter of each year [10, 11]. In 2012, 2,553 measles cases were reported in Nigeria, an increase from 390 cases reported in 2006 [10]. In line with regional strategic plan, Nigeria planned to eliminate measles by 2020 by strengthening routine immunization, conduct bi-annual measles immunization campaign for second opportunity, epidemiologic surveillance with laboratory confirmation of cases and improve case management including Vitamin A supplementation. This is meant to improve measles coverage from the present 51% as at 2014 [12] to 95% needed for effective herd immunity. In October 2013, following measles outbreak in 19 States in Northern Nigeria, mass measles vaccination campaign was conducted. The first reported case of a suspected measles in Rigasa community, an urban slum of Kaduna Metropolis occurred on 5th of January 2015 in an unimmunized 10 months old child. The District or Local Government Area (LGA), Disease Surveillance and Notification Officer (DSNO) notified the State Epidemiologist and State DSNO on 10th February, 2015 when the reported cases reached an epidemic threshold. We investigated to confirm this outbreak, determine the risk factors for contracting infection and implement appropriate control measures. This paper describes the epidemiological methods employed in the investigation, summarizes the key findings and highlights the public health actions undertaken. Methods: Study site and study population: Rigasa is a densely populated urban slum in the south west of Igabi LGA, in Kaduna State, North-West Nigeria. It has an estimated 59,906 households with about 14,156 under-one children. The settlement has three health facilities rendering RI services. The community is noted for poor utilization of RI services and has rejected polio supplemental immunization services in the past. 
Measles outbreaks have been previously reported from this community. The Last measles supplementary immunization activities (SIA) was conducted from 5th to 9th October, 2013. Descriptive epidemiology-quantitative: in this investigation, a suspected measles case was, any person with generalized maculopapular rash, fever, and at least one of the following; cough, coryza or conjunctivitis or in whom a physician suspected measles, living in Rigasa community, from January 2015 when the index case was reported to March 2015. A confirmed case was, any suspected case with measles IgM positive test or an epidemiological link to a laboratory confirmed case living in the same community at same period. We actively search for cases in the community where measles is locally known as “Bakon dauro”. We interviewed and physically examined some cases at the treatment center to verify diagnosis and ensure that they met the case definition. We developed a line-list to collect information from all suspected cases on their age, sex, residence and time of onset, migration history and immunization status. We analyzed the line-list data to characterize the outbreak in time, place and person, and to develop a plausible hypothesis for measles transmission in the community. We conducted an in-depth interview with health workers rendering routine immunization services to ascertain utilization of RI services by under five-children in the community. Case-control study: we conducted an unmatched case-control study. A suspected case is, any child aged 0-59 months residing in Rigasa with history of fever, rash and at least one of the following: cough, coryza or conjunctivitis or in whom a physician suspected measles; a confirmed case is, positive to measles IgM using enzyme-linked immunosorbent assay (ELISA); and a probable case if there was epidemiological link to a laboratory confirmed measles case. A control was, any child 0-59 months residing in the same community but without the signs and symptoms of measles. We enrolled 75 cases and 75 controls to identify an odds ratio of 3 (for a risk factor on which intervention would have a significant impact), assuming 21.2% prevalence of exposure among control [4], with 95% CI and power of 80%. The sample size was determined using the Statcal function of Epi-Info statistical software. We selected and recruited the cases consecutively from among the patients that presented at the health facility and in the community. The controls were selected from the community; each control was selected from the 3rd homestead to the right of the household of a case. We used structured questionnaires to collect data on demographic characteristics, exposures, vaccination status and associated factors from both cases and controls, and clinical information from the cases only. Sample collection and laboratory analysis: we collected blood samples for 15 suspected cases for measles serum IgM determination using ELISA method at the WHO regional laboratory at Kaduna. Data management: we entered data into Epi-Info statistical software and performed univariate analysis to obtain frequencies and proportions, and bivariate analysis to obtain odds ratios and determine associations, setting p-value of 0.05 as the cut-off for statistical significance. We also performed unconditional logistic regression to adjust for possible confounders and identify the independent factors for contracting measles infection. 
Factors that were significant at p < 0.05 in the bivariate analysis and biological plausible variable such as age and sex were included in the model. In the final model, only variables that were found to significantly affect the outcome at P < 0.05 were retained. Data management was done using Epi InfoTM software 3.5.3 (CDC Atlanta, USA), and Microsoft Excel. For qualitative study, content analysis was thematically analyzed. Ethical consideration: ethical approval was not obtained as study was conducted as part of an outbreak response. However, permission to conduct the study was granted by the State Primary Health Care Agency, District or LGA Director of Primary Health Care and District head of Rigasa community. Informed consent was obtained from all respondents interviewed. Confidentiality of all the subjects was assured and maintained during and after the study. Results: Descriptive epidemiology-quantitatively: a total of 159 cases with two deaths (CFR = 1.3%) were identified. 80 (50.3%) were male. Cumulatively, under-five children accounted for 90% of cases. The mean age of cases was 32 (± 22.5) months but the age group of 0-11 months were severely affected with age specific death rate of 5% (Table 1). The index case for this outbreak was a 10 months old female child that had maculopapular rash on 5th January, 2015. She was unvaccinated for measles and never had any immunization for vaccine preventable diseases according to Expanded Programme on Immunization (EPI) schedule. She had no history of contact with someone with measles and had not travelled out of the community in the last one month. She was managed as an outpatient and died 3 days later. Figure 1 shows the epidemic curve of the outbreak. The epidemic curve has a propagated pattern with four peaks, the highest on 10th March, 2015. The outbreak spanned from 5th January to 4th April, 2015. Age distributions of measles cases and case fatality rate (CFR) in Rigasa community, March 2015 Epidemic curve of measles outbreak in Rigasa community, week 1 to 15, 2015 Descriptive epidemiology-qualitative evaluation of RI services: content analysis of in depth interview of three health workers rendering routine immunization services revealed that caregivers' usually utilized RI services in the first 6-10 weeks of life, no stock out of measles vaccine in last 6 months and most caregivers failed to comply with EPI schedule. Also, all planned RI sessions were held in the past 3 months and there was no vaccine stock out in 6 months prior to the outbreak and the facilities have functioning cold chain system. Analytic study: total population sampled was 150 (cases 75, control 75). The mean age for cases and controls was 33.2 (± 21) and 37.6 (± 29) months. Males were 42 (56%) cases and 37(49%) controls. Among the cases, 15 (20%) was vaccinated for measles compared to 23 (30.7%) of the control. Among the 112 children unvaccinated for measles, attack rate was 54%. Cumulatively, 41(27%) were vaccinated for measles. Only 1 (1.3%) case compared to 12 (16%) of controls had completed the Expanded Programme on Immunization schedule (Table 2). From bivariate analysis, cases were more likely than control, to have had none or incomplete RI [OR (95% CI)]:14.0 (1.8, 111.4); not to have received measles vaccination [OR (95% CI)]:2.0 (0.8, 3.7); to have had close contact with measles cases [OR (95% CI)]:6.0 (2.7, 11.2); and to have caregivers who were younger than 20 years [OR (95% CI)]:2.6 (1, 6.8) (Table 3). 
From modelling, independent predictors of measles transmission in Rigassa, an urban slum in Kaduna metropolis were none or incomplete routine immunization (RI) [adjusted odds ratio (AOR) (95% confidence interval (CI)<]:28.3 (2.1, 392.0), unvaccinated for measles [AOR (95% CI)]:1.8 (0.8, 3.7), recent contact with measles cases [AOR (95% CI)]:7.5 (2.9, 19.7), and having a caregiver younger than 20 years [AOR (95% CI)]:5.2 (1.2, 22.5) (Table 4). Characteristics of measles cases and controls in Rigasa community, March 2015 Factors that may be responsible for measles outbreak at Rigasa community, Igabi LGA Kaduna State-March 2015 Factors responsible for measles outbreak’s transmission at Rigasa community, Kaduna metropolis, Kaduna State - March 2015 Laboratory findings: eleven (73%) of the 15 serum samples were confirmed IgM positive for measles. Two samples that were negative for measles IgM were also negative of rubella IgM; one was indeterminate, but there was an epidemiological link with a confirmed case of measles. The outcome of the last sample could not be ascertained. Discussion: This laboratory confirmed that measles outbreak caused severe morbidity in a densely populated urban slum community in Kaduna metropolis where substandard housing and poor living conditions existed. In addition, the community, like most cosmopolitan settlements in Northern Nigeria, has witnessed poor uptake of RI for all antigens. In this investigation, measles vaccination coverage was 27%; this is lower than the estimated national coverage of 51% [12]. The outbreak of measles in Rigasa spanned from 5th January to 4th April, 2015, weeks after outbreak investigation and response. This prolonged spread of the infection could be due to lack of herd immunity in the community. The case fatality of 1.3% in this study is similar to 1.5% reported in Lagos, a cosmopolitan city in Nigeria, but lower than 3.9% reported in Bayelsa-South-South Nigeria, and 6.9% reported in a rural community in Southwestern Nigeria [8, 11, 13]. The lower CFR reported in this study could be due to early presentation of affected children to health facilities, and improved case management. During the outbreak, the state primary health care agency supplied drugs, including vitamin A, to health facilities that were used in the treatment of cases at no cost. Although measles usually affects under-five children [11], in this study we found under-one to be severely affected with CFR of 5%. In developing countries, malnutrition, lack of supportive care, crowding and poorly managed complications of measles have been implicated as causes of high CFR [8]. Routine immunization is significantly associated with reduction in measles infection among vaccinated individuals. We found that those who had no or incomplete vaccination according to EPI schedule were fourteen times more likely to have measles infection (95% CI. 1.8-111.4). In this community only 8.6% of the children had complete EPI schedule. This was by far less than the national target of at least 87% of RI coverage, and in which no fewer than 90% of the LGAs reach at least 80% of infants with complete scheduled of routine antigens by 2015 [14]. Countries with a single-dose of measles are said to be poor, least developed, report the lowest routine vaccination coverage and experience high measles diseases burden [15]. During the study period, on the EPI schedule, a single dose of measles antigen should be administered to a child at age nine months. 
Apart from the country's low measles vaccination coverage, primary vaccine failure occurs in 25% of children vaccinated at 9 months, and thus are unable to develop protective humoral antibodies against measles virus [10]. We also found that caregivers who were 20 years or less were more likely to have children with measles. This could be attributed to lack of knowledge on the importance of routine immunization and childcare. Caregivers in this community usually present their children for immunization according to EPI in the first 3 months of life but failed to complete the schedule therefore missing measles vaccination at the ninth month. Reasons put forward by caregivers to why their children missed measles vaccination ranged from being unaware of EPI schedule for measles, to competing priorities, to adverse events following immunization (AEFI). These reasons are similar to those cited in previous literatures [8, 16]. In this outbreak, the epi-curve revealed a propagated epidemic pattern which probably affirms that the disease was transmitted from person to person. Rigasa is a densely populated urban slum community and this allow for the spread of measles in this community. Overcrowding is an important risk factor in the transmission of respiratory infections [17]; and measles being a highly contagious disease, recent contact and overcrowding are risk factors for disease transmissions during measles outbreak [1, 3]. This outbreak response had one major limitation; that is, delay in reactive vaccination in the community after the investigation due to fear of potential postelection violence. This investigation was conducted just before 2015 national election, and the past elections in Nigeria, especially the preceding 2011 election, were marred by postelection violence. However, during the investigation, with support from the traditional leaders and government we were able to implement the following public health actions: educate the community on the importance of measles vaccination, prompt identification of cases in the community and referral to health facilities where drugs supplied by the state government for free treatment of cases were available. A retroactive measles vaccination campaign was carried out in the community after the 2015 national election. Conclusion: We confirmed there was a measles outbreak in Rigasa community. Low measles vaccine coverage as a result of poor uptake of RI was responsible for the outbreak of measles infection in the community. This resulted in the accumulation of susceptible children thus lowering the herd immunity against measles infection. This study also suggests that children of younger caregivers were more afflicted with measles infection during this outbreak. The poor housing conditions and overcrowding in this community greatly fueled the outbreak, supporting the evidence that close contact with a measles case is a risk factor for measles transmission. A major public health implication of this study is the need to strengthen RI services. We therefore recommend that the state ministry of health should increase demand creation for RI services through more sensitization and education of caregivers. Additionally, health workers should encourage and motivate caregivers who access RI services to complete the EPI schedule to prevent vaccine preventable diseases. 
Engaging caregivers who had completely immunized their children according to EPI schedule as a community role model could be a good strategy to motivate other caregivers to access and complete RI services. What is known about this topic Measles is a highly contagious vaccine preventable viral disease that usually affects younger children; Several factors such as low coverage for measles antigens and overcrowding have been noted to be risk factors for measles outbreak. Measles is a highly contagious vaccine preventable viral disease that usually affects younger children; Several factors such as low coverage for measles antigens and overcrowding have been noted to be risk factors for measles outbreak. What this study adds We found children less than one year to be severely affected by measles and having the highest case-fatality; In this investigation, we found measles vaccination coverage to be 27% in this Rigasa; this is less than the reported national coverage of 51%; This study also reveals that younger caregivers who are 20 years or less compared to older ones, are more likely to have children with measles. We found children less than one year to be severely affected by measles and having the highest case-fatality; In this investigation, we found measles vaccination coverage to be 27% in this Rigasa; this is less than the reported national coverage of 51%; This study also reveals that younger caregivers who are 20 years or less compared to older ones, are more likely to have children with measles. What is known about this topic: Measles is a highly contagious vaccine preventable viral disease that usually affects younger children; Several factors such as low coverage for measles antigens and overcrowding have been noted to be risk factors for measles outbreak. What this study adds: We found children less than one year to be severely affected by measles and having the highest case-fatality; In this investigation, we found measles vaccination coverage to be 27% in this Rigasa; this is less than the reported national coverage of 51%; This study also reveals that younger caregivers who are 20 years or less compared to older ones, are more likely to have children with measles. Competing interests: The authors declare no competing interests.
Background: Despite the availability of an effective vaccine, measles epidemics continue to occur in Nigeria. In February 2015, we investigated a suspected measles outbreak in an urban slum in Rigasa, Kaduna State, Nigeria. The study aimed to confirm the outbreak, determine the risk factors and implement appropriate control measures. Methods: We identified cases through active search and health record review. We conducted an unmatched case-control (1:1) study involving 75 under-5 cases who were randomly sampled, and 75 neighborhood controls. We interviewed caregivers of these children using a structured questionnaire to collect information on sociodemographic characteristics and vaccination status of the children. We collected 15 blood samples for measles IgM testing using an enzyme-linked immunosorbent assay. Descriptive, bivariate and logistic regression analyses were performed using Epi Info software. Confidence intervals were set at 95%. Results: We recorded 159 cases with two deaths (case fatality rate = 1.3%); 80 (50.3%) of the cases were male. Of the 15 serum samples, 11 (73.3%) were confirmed IgM positive for measles. Compared to the controls, the cases were more likely to have had no or incomplete routine immunization (RI) [adjusted odds ratio (AOR) (95% confidence interval (CI)): 28.3 (2.1, 392.0)], contact with measles cases [AOR (95% CI): 7.5 (2.9, 19.7)], and a caregiver younger than 20 years [AOR (95% CI): 5.2 (1.2, 22.5)]. Conclusions: We identified low RI uptake and contact with measles cases as predictors of the measles outbreak in Rigasa, Kaduna State. We recommended strengthening of RI and education of caregivers on completing the RI schedule.
Introduction: Measles is an acute, highly contagious vaccine-preventable viral disease which usually affects younger children. Transmission is primarily person-to-person via aerosolized droplets or by direct contact with the nasal and throat secretions of infected persons [1, 2]. The incubation period is 10-14 days (range, 8-15 days) from exposure to onset of rash, and the individual becomes contagious before the eruption of the rash. In 2014, the World Health Organization (WHO) reported 266,701 measles cases globally with 145,700 measles deaths [1]. Being unvaccinated against measles is a risk factor for contracting the disease [3]. Other factors responsible for measles outbreaks and transmission in developing countries are lack of parental awareness of the importance of vaccination and compliance with the routine immunization schedule, household overcrowding with easy contact with someone with measles, acquired or inherited immunodeficiency states, and malnutrition [4-6]. During outbreaks, measles case fatality rates (CFR) in developing countries are normally estimated to be 3-5%, but may reach 10-30%, compared with 0.1% reported from industrialized countries [2]. Malnutrition, poor supportive case management and complications like pneumonia, diarrhea, croup and central nervous system involvement are responsible for the high measles CFR [7, 8]. Nigeria is second only to India among the ten countries with the largest numbers of children unvaccinated against measles, accounting for 2.7 million of the 21.5 million children globally who had received zero doses of measles-containing vaccine (MCV1) in 2013 [9]. Measles is one of the top ten causes of childhood morbidity and mortality, with recurrent episodes common in Northern Nigeria in the first quarter of each year [10, 11]. In 2012, 2,553 measles cases were reported in Nigeria, an increase from 390 cases reported in 2006 [10]. In line with the regional strategic plan, Nigeria planned to eliminate measles by 2020 by strengthening routine immunization, conducting biannual measles immunization campaigns to provide a second opportunity for vaccination, carrying out epidemiologic surveillance with laboratory confirmation of cases, and improving case management, including Vitamin A supplementation. This is meant to improve measles coverage from 51% as of 2014 [12] to the 95% needed for effective herd immunity. In October 2013, following a measles outbreak in 19 states in Northern Nigeria, a mass measles vaccination campaign was conducted. The first reported case of suspected measles in Rigasa community, an urban slum of Kaduna Metropolis, occurred on 5 January 2015 in an unimmunized 10-month-old child. The district (Local Government Area, LGA) Disease Surveillance and Notification Officer (DSNO) notified the State Epidemiologist and State DSNO on 10 February 2015, when the reported cases reached the epidemic threshold. We investigated to confirm this outbreak, determine the risk factors for contracting infection and implement appropriate control measures. This paper describes the epidemiological methods employed in the investigation, summarizes the key findings and highlights the public health actions undertaken. Conclusion: We confirmed there was a measles outbreak in Rigasa community. Low measles vaccine coverage as a result of poor uptake of RI was responsible for the outbreak of measles infection in the community. This resulted in the accumulation of susceptible children, thus lowering the herd immunity against measles infection.
This study also suggests that children of younger caregivers were more affected by measles infection during this outbreak. The poor housing conditions and overcrowding in this community greatly fueled the outbreak, supporting the evidence that close contact with a measles case is a risk factor for measles transmission. A major public health implication of this study is the need to strengthen RI services. We therefore recommend that the state ministry of health increase demand creation for RI services through more sensitization and education of caregivers. Additionally, health workers should encourage and motivate caregivers who access RI services to complete the EPI schedule to prevent vaccine-preventable diseases. Engaging caregivers who have fully immunized their children according to the EPI schedule as community role models could be a good strategy to motivate other caregivers to access and complete RI services. What is known about this topic: Measles is a highly contagious vaccine-preventable viral disease that usually affects younger children; several factors such as low coverage for measles antigens and overcrowding have been noted to be risk factors for measles outbreaks. What this study adds: We found children less than one year old to be severely affected by measles and to have the highest case fatality; in this investigation, we found measles vaccination coverage to be 27% in Rigasa, which is lower than the reported national coverage of 51%; this study also reveals that younger caregivers, aged 20 years or less, are more likely than older ones to have children with measles.
Background: Despite the availability of an effective vaccine, measles epidemics continue to occur in Nigeria. In February 2015, we investigated a suspected measles outbreak in an urban slum in Rigasa, Kaduna State, Nigeria. The study aimed to confirm the outbreak, determine the risk factors and implement appropriate control measures. Methods: We identified cases through active search and health record review. We conducted an unmatched case-control (1:1) study involving 75 under-5 cases who were randomly sampled, and 75 neighborhood controls. We interviewed caregivers of these children using a structured questionnaire to collect information on sociodemographic characteristics and vaccination status of the children. We collected 15 blood samples for measles IgM testing using an enzyme-linked immunosorbent assay. Descriptive, bivariate and logistic regression analyses were performed using Epi Info software. Confidence intervals were set at 95%. Results: We recorded 159 cases with two deaths (case fatality rate = 1.3%); 80 (50.3%) of the cases were male. Of the 15 serum samples, 11 (73.3%) were confirmed IgM positive for measles. Compared to the controls, the cases were more likely to have had no or incomplete routine immunization (RI) [adjusted odds ratio (AOR) (95% confidence interval (CI)): 28.3 (2.1, 392.0)], contact with measles cases [AOR (95% CI): 7.5 (2.9, 19.7)], and a caregiver younger than 20 years [AOR (95% CI): 5.2 (1.2, 22.5)]. Conclusions: We identified low RI uptake and contact with measles cases as predictors of the measles outbreak in Rigasa, Kaduna State. We recommended strengthening of RI and education of caregivers on completing the RI schedule.
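The abstract above reports adjusted odds ratios with wide 95% confidence intervals (e.g., 28.3 with CI 2.1-392.0) from a 1:1 case-control analysis run in Epi Info. As a minimal sketch of the underlying arithmetic, the Python snippet below computes a crude odds ratio and a Woolf (log-based) 95% CI from a 2x2 table; the cell counts are hypothetical and chosen only for illustration, not taken from the study, and a crude OR will generally differ from the adjusted ORs reported.

import math

# Hypothetical 2x2 table (exposure = no or incomplete routine immunization):
#                 cases   controls
# exposed           a       b
# unexposed         c       d
a, b, c, d = 60, 20, 15, 55   # illustrative counts only, not study data

odds_ratio = (a * d) / (b * c)

# Woolf (log) method for the 95% confidence interval of the odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"crude OR = {odds_ratio:.1f}, 95% CI = ({lower:.1f}, {upper:.1f})")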
3,648
344
[ 38, 79 ]
8
[ "measles", "community", "cases", "case", "outbreak", "children", "study", "immunization", "health", "2015" ]
[ "deaths unvaccinated measles", "predictors measles transmission", "measles cases globally", "factors measles outbreak", "measles highly contagious" ]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] deaths unvaccinated measles | predictors measles transmission | measles cases globally | factors measles outbreak | measles highly contagious [SUMMARY]
[CONTENT] deaths unvaccinated measles | predictors measles transmission | measles cases globally | factors measles outbreak | measles highly contagious [SUMMARY]
[CONTENT] deaths unvaccinated measles | predictors measles transmission | measles cases globally | factors measles outbreak | measles highly contagious [SUMMARY]
[CONTENT] deaths unvaccinated measles | predictors measles transmission | measles cases globally | factors measles outbreak | measles highly contagious [SUMMARY]
[CONTENT] deaths unvaccinated measles | predictors measles transmission | measles cases globally | factors measles outbreak | measles highly contagious [SUMMARY]
[CONTENT] deaths unvaccinated measles | predictors measles transmission | measles cases globally | factors measles outbreak | measles highly contagious [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | 10 | nigeria | reported | countries | cases | surveillance | 2014 | million | states [SUMMARY]
[CONTENT] case | community | suspected | measles | cases | data | study | analysis | services | control [SUMMARY]
[CONTENT] measles | cases | ci | 2015 | 95 | 95 ci | months | march | march 2015 | aor 95 [SUMMARY]
[CONTENT] measles | caregivers | children | coverage | ri | outbreak | ri services | services | younger | study [SUMMARY]
[CONTENT] measles | community | children | cases | coverage | outbreak | case | interests | declare competing interests | competing interests [SUMMARY]
[CONTENT] measles | community | children | cases | coverage | outbreak | case | interests | declare competing interests | competing interests [SUMMARY]
[CONTENT] Nigeria ||| February 2015 | Rigasa | Kaduna State | Nigeria ||| [SUMMARY]
[CONTENT] ||| 75 | 75 ||| ||| 15 | Enzyme Linked Immunosorbent ||| Epi-info ||| 95% [SUMMARY]
[CONTENT] 159 | two | 1.3% ||| 50.3% | 80 ||| 15 | 11(73.3% ||| ||| AOR | 95% | CI | 28.3 | 2.1 | 392.0 | 95% | CI | 7.5 | 2.9 | 19.7 | 20 years ||| AOR | 95% | CI | 5.2 | 1.2 | 22.5 ||| 11 [SUMMARY]
[CONTENT] Rigasa | Kaduna State ||| [SUMMARY]
[CONTENT] Nigeria ||| February 2015 | Rigasa | Kaduna State | Nigeria ||| ||| ||| 75 | 75 ||| ||| 15 | Enzyme Linked Immunosorbent ||| Epi-info ||| 95% ||| ||| 159 | two | 1.3% ||| 50.3% | 80 ||| 15 | 11(73.3% ||| ||| AOR | 95% | CI | 28.3 | 2.1 | 392.0 | 95% | CI | 7.5 | 2.9 | 19.7 | 20 years ||| AOR | 95% | CI | 5.2 | 1.2 | 22.5 ||| 11 ||| Rigasa | Kaduna State ||| [SUMMARY]
[CONTENT] Nigeria ||| February 2015 | Rigasa | Kaduna State | Nigeria ||| ||| ||| 75 | 75 ||| ||| 15 | Enzyme Linked Immunosorbent ||| Epi-info ||| 95% ||| ||| 159 | two | 1.3% ||| 50.3% | 80 ||| 15 | 11(73.3% ||| ||| AOR | 95% | CI | 28.3 | 2.1 | 392.0 | 95% | CI | 7.5 | 2.9 | 19.7 | 20 years ||| AOR | 95% | CI | 5.2 | 1.2 | 22.5 ||| 11 ||| Rigasa | Kaduna State ||| [SUMMARY]
Detection of Babesia annae DNA in lung exudate samples from Red foxes (Vulpes vulpes) in Great Britain.
26867572
This study aimed to determine the prevalence of Babesia species DNA in lung exudate samples collected from red foxes (Vulpes vulpes) from across Great Britain. Babesia are small piroplasmid parasites which are mainly transmitted through the bite of infected ticks of the family Ixodidae. Babesia can cause potentially fatal disease in a wide range of mammalian species including humans, dogs and cattle, making them of significant economic importance to both the medical and veterinary fields.
BACKGROUND
DNA was extracted from lung exudate samples of 316 foxes. A semi-nested PCR was used to initially screen samples, using universal Babesia-Theileria primers which target the 18S rRNA gene. A selection of positive PCR amplicons were purified and sequenced. Subsequently, specific primers were designed to detect Babesia annae and used to screen all 316 DNA samples. Randomly selected positive samples were purified and sequenced (GenBank accession KT580786). Clones spanning a 1717 bp region of the 18S rRNA gene were generated from 2 positive samples, and the resultant consensus sequence was submitted to GenBank (KT580785). Sequence KT580785 was used in the phylogenetic analysis.
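The methods above cite public GenBank accessions (KT580786 and KT580785) for the partial and near-full-length 18S rRNA consensus sequences. As a minimal, illustrative Biopython sketch (not part of the study's own workflow), the snippet below retrieves one of these records; the email address is a placeholder that NCBI requires for Entrez queries.

from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder contact address required by NCBI

# Fetch the 18S rRNA sequence deposited by the study (accession taken from the text)
handle = Entrez.efetch(db="nucleotide", id="KT580786", rettype="fasta", retmode="text")
record = SeqIO.read(handle, "fasta")
handle.close()

print(record.id, len(record.seq), "bp")
print(record.description)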
METHODS
Babesia annae DNA was detected in the fox samples; in total, 46/316 (14.6%) of samples tested positive for the presence of Babesia annae DNA. The central region of England had the highest prevalence at 36.7%, while no positive samples were found from Wales, though only 12 samples were tested from this region. Male foxes were found to have a higher prevalence of Babesia annae DNA than females in all regions of Britain where positive samples were found. Phylogenetic and sequence analysis of the GenBank submissions (accession numbers KT580785 and KT580786) showed 100% identity to Babesia sp. 'Spanish Dog' (AY534602, EU583387 and AF188001).
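The overall prevalence reported above (46/316, 14.6%) is given in the Results section with a 95% confidence interval of 10.66-18.44%. The short Python sketch below reproduces that interval using the normal (Wald) approximation for a binomial proportion; the choice of the Wald method is an inference from the matching numbers, since the paper states only that Minitab was used.

import math

positive, tested = 46, 316
p_hat = positive / tested

# Normal (Wald) approximation for a 95% binomial confidence interval
se = math.sqrt(p_hat * (1 - p_hat) / tested)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"prevalence = {p_hat:.2%}")
print(f"95% CI = {lower:.2%} to {upper:.2%}")  # approx. 10.7% to 18.4%, in line with the reported 10.66-18.44%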
RESULTS
This is the first time that Babesia annae DNA has been reported in red foxes in Great Britain, with positive samples being found across England and Scotland, indicating that this parasite is well established within the red fox population of Britain. Phylogenetic analysis demonstrated that, though B. annae is closely related to B. microti, it is a distinct species.
CONCLUSIONS
[ "Animals", "Babesia", "Babesiosis", "DNA Primers", "DNA, Protozoan", "DNA, Ribosomal", "Exudates and Transudates", "Foxes", "Lung", "Mass Screening", "Molecular Sequence Data", "Polymerase Chain Reaction", "RNA, Ribosomal, 18S", "Sequence Analysis, DNA", "United Kingdom" ]
4751633
Background
Babesia are small piroplasmid parasites which are widely distributed throughout the world and are transmitted to hosts through the bite of infected ticks, transplacentally [1] and mechanically through the exchange of blood, i.e., in dog fights [2], as well as transovarially and transstadially in ticks. More than 100 species of Babesia have been documented [3], and Babesia parasites are capable of infecting a wide range of wild and domestic host species, including humans, cattle and dogs. This ability to infect and cause disease in many mammalian species makes Babesia of great economic importance in both the medical and veterinary fields. Bovine babesiosis (red water fever), for example, is considered the most important arthropod-transmitted disease in cattle [4]. In Europe the main cause of human babesiosis is considered to be B. divergens [4], though over recent years other Babesia species, including B. microti [5], have also been found to be responsible for human babesiosis. Babesia annae is among these economically important species, having been shown to cause severe (even fatal) disease in dogs [1, 2]. Little is known about the prevalence of piroplasm infections in dogs in Great Britain. A recent study by Crawford and colleagues [6] examining 262 canine blood donors found none were positive for Babesia DNA, while a study by Smith and colleagues [7] found that only 2.4 % (16/742) of ticks collected from dogs from across Great Britain tested PCR positive for Babesia DNA, 11 of which showed 97–100 % identity to B. gibsoni. There are no records of Babesia annae having been previously detected in Britain. Babesia annae was identified previously under numerous synonyms including Theileria annae, Babesia sp. ‘Spanish dog’ and Babesia-microti-like [8]. This species has been found in red foxes (Vulpes vulpes) and dogs (Canis familiaris) in North America and Canada [9, 10] and throughout Europe [1, 11–15], and is considered by some authors to be “hyperendemic” in northwest Spain [16]. Red foxes are common throughout Great Britain, being highly adaptive and opportunistic predators and scavengers [17]. The most recent surveys suggested that there are between 240,000 and 258,000 red foxes in Britain, of which approximately 87 % (225,000) live in rural areas, compared to 13 % (33,000) urban foxes (Game & Wildlife Conservation Trust, 2013; DEFRA FOI release, 2013). The foxes are most likely to encounter questing adult ticks, nymphs and larvae whilst hunting and scavenging. Anthropogenic changes, such as wildlife and forest management strategies and changes in land use, have also created more suitable habitats for foxes, their prey and the Ixodid ticks that feed on both. These changes have led not only to increases in tick numbers, but have also enabled ticks to increase their distribution across Europe, with Ixodes ticks now being found in areas previously considered tick-free, e.g., northern Sweden [18]. The main tick species found in Britain are Ixodes ricinus (sheep tick, also known in some countries as the deer tick) and Ixodes hexagonus (hedgehog tick). Both tick species can be found on a wide range of domestic and wild animal species and have been shown in previous studies in Spain and Germany to be positive for B. annae (synonym T. annae) DNA [15, 19]. This is, however, not conclusive proof that these species are vectors for the transmission of B. annae.
This study aimed at determining the prevalence of piroplasm infection in red foxes and the species of piroplasm circulating in the red fox population in Great Britain, through the analysis of lung exudate samples.
null
null
Results
Screening of fox lung exudate samples for the presence of Babesia spp. DNA
Of the 120 samples initially screened, 29 samples gave positive results. From these, nine samples that gave positive results for both duplicates from the semi-nested PCR (universal Babesia-Theileria primers BT1-F-BTH-1R and BT1-F-BT1-R) were cleaned as described above and sent for sequencing in one direction, using either the BT1-F or BTH-1 F primers. Following this initial screening, all sequences returned with 99–100 % identity to several isolates of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’; accession numbers EU583387, AY534602 and AF188001) [28]. No other Babesia species were identified in the fox lung exudate samples.

Verification of PCR specificity
A random selection of nine positive samples from across Britain that gave positive results for both duplicates were sent for sequencing using the Babesia annae specific BTFox1F and BTFox1R primers; this created a 597 bp consensus sequence (submitted to GenBank under accession number KT580786). When this resultant sequence was compared using NCBI BLAST to the published sequences, it was found to have 100 % identity to several isolates of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’; accession numbers EU583387, AY534602 and AF188001) [28]. A further 13 positive samples were sequenced only using the BTFox1F primer; all of these sequences were 100 % identical to Babesia annae (previously referred to as: Babesia sp. ‘Spanish Dog’; AY534602, EU583387 and AF188001). Seven clones were created from two of the positive samples (F501 (4 clones) and F340 (3 clones)) using the primers BT1-F & BT-Outer-R and BT1-F & BT-Inner-R. These clones were used to create two consensus sequences, which were identical to each other. One of the 1717 bp consensus sequences was submitted to GenBank (accession number KT580785). When this consensus sequence was compared using BLAST it demonstrated 100 % identity to the published sequences of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’; accession numbers AY534602, EU583387 and AF188001).

Distribution of Babesia annae in red foxes in Great Britain
Of the 316 lung exudate samples tested by nested PCR (using the BTFox1F and BTFox1R primers), 46 (14.55 %, 95 % confidence interval (CI) 10.66–18.44 %) returned positive results for the presence of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’) DNA. Positive samples were found in foxes widely distributed across Great Britain (see Table 3), with B. annae DNA being found in Scotland as well as across the whole of England. The central region of England had the highest prevalence with 18/49 positive samples (36.7 % prevalence, 95 % CI 23.23–50.23 %), which was significantly (p = 0.007) higher than the British average. Of the regions of Britain where positive samples were found, Scotland had the lowest prevalence, with only 6/80 positive samples (7.5 % prevalence, 95 % CI 1.72–13.27 %); this was significantly lower (p = 0.012) than the British average. No positive samples were found in Wales, though only 12 samples were available for analysis for this region. The data show that a higher proportion of male foxes tested positive for parasite DNA than females in all the regions where positive samples were found (Table 3), although no statistically significant difference (p = 0.093) was observed between the numbers of positive males and females when comparing genders across Britain (an illustrative re-computation of one regional contrast is sketched after this section).

Table 3. Prevalence of Babesia annae DNA in red foxes (Vulpes vulpes) from across Great Britain

Region               No tested  No positive  Prevalence (%)  95 % CI      Gender  No / No positive  Prevalence (%)  95 % CI
Scotland             80         6            7.5*            1.7–13.3     Male    48 / 5            10.4            1.8–19.0
                                                                          Female  32 / 1            3.1             0.0–9.2
Wales                12         0            0               -            Male    5 / 0             0               -
                                                                          Female  7 / 0             0               -
Northern (England)   34         8            23.5            9.3–37.8     Male    20 / 6            30.0            9.9–50.1
                                                                          Female  14 / 2            14.2            0.0–32.6
Central (England)    49         18           36.7**          23.2–50.2    Male    22 / 10           45.0            24.6–66.3
                                                                          Female  27 / 8            29.6            12.4–46.9
Southern (England)   78         11           14.1            6.4–21.8     Male    45 / 8            17.8            6.6–28.9
                                                                          Female  9 / 3             9.1             0.0–18.9
Total                                                                     Male    140 / 29          20.7            14.0–27.4
                                                                          Female  113 / 14          12.4            6.3–18.5

No = number; CI = confidence interval
* Significantly lower prevalence than the national average (p = 0.045)
** Significantly higher prevalence than the national average (p = 0.003)

Phylogenetic relationship of Babesia annae and related species
Following maximum likelihood analysis, the Babesia parasites appear to fall into three separate clades (Fig. 1), using the nomenclature suggested by Lack and colleagues [29], with KT580785 (B. annae) situated in clade VIII (microti group) together with AY534602, EU583387 and AF188001, which are other published sequences of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’) found in Spain and the USA. The next most closely associated group of sequences have all been classified as B. microti (AB032434, AB085191 and AB190459); these were isolated from a human, a bank vole and a forest mouse respectively. Two Piroplasmida species found in Eurasian badgers (Meles meles) (EF057099) and Eurasian otter (Lutra lutra) (FJ225390) were also closely related to B. annae. The remaining Babesia species are further separated into two clades: the classical Babesia group (clade I) and the duncani group (clade IV). The Theileria species, with the exception of T. equi, are situated in clade III, while the Hepatozoon parasites are situated in a separate clade.

Fig. 1. Phylogenetic analysis of the 18S rRNA gene. Phylogenetic tree showing maximum likelihood approximation of the standard likelihood ratio test scores; a score of 0 indicates that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The phylogenetic analysis places KT580785 in clade VIII (microti group according to the nomenclature suggested by Lack and colleagues [26]).
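The regional comparisons above report p-values for individual regions against the overall British prevalence (e.g., Central England 18/49 vs. the national 46/316). The paper states elsewhere that Fisher's exact test was used where event counts were below 5 and a normal approximation otherwise, run in Minitab 15; the SciPy sketch below is only an illustrative re-computation of one contrast (Central England vs. the rest of Britain) and will not necessarily reproduce the published p-values, since the published tests compared each region with the national average.

from scipy.stats import fisher_exact

# Central England vs. the rest of Britain (counts taken from Table 3)
central_pos, central_n = 18, 49
total_pos, total_n = 46, 316
rest_pos, rest_n = total_pos - central_pos, total_n - central_n

table = [
    [central_pos, central_n - central_pos],  # Central England: positive / negative
    [rest_pos, rest_n - rest_pos],           # Rest of Britain: positive / negative
]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")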
Conclusions
This is the first study to demonstrate the presence of B. annae DNA in Britain. Sequence analysis has shown the B. annae 18S rRNA gene sequence detected in foxes in Britain to be identical to that detected in foxes in Europe and North America. Phylogenetic analysis shows that B. annae is closely related to Babesia microti, but clearly is a distinct species.
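The phylogenetic placement summarized above (KT580785 grouping with published B. annae sequences in the microti clade, with B. microti as the nearest neighbouring group) can be visualized from a Newick tree. The Biopython sketch below draws an ASCII rendering of a hand-written toy topology that mirrors the relationships stated in the Results; the branch structure is illustrative only and is not the PhyML tree produced by the study.

from io import StringIO
from Bio import Phylo

# Toy Newick topology reflecting the relationships described in the Results.
# Branch lengths are omitted; taxon labels combine species and accession for clarity.
newick = ("((((KT580785_B_annae_GB,AY534602_B_annae_Spain),AF188001_B_annae),"
          "B_microti_AB032434),B_divergens_outgroup);")

tree = Phylo.read(StringIO(newick), "newick")
Phylo.draw_ascii(tree)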
[ "Sample collection and preparation", "DNA extraction", "PCR for detection of Babesia DNA in blood samples", "PCR clean up and DNA sequencing", "Cloning of the 18 s rRNA gene of Babesia annae", "Sequence / phylogenetic analysis", "Statistical analysis", "Screening of fox lung exudate samples for the presence of Babesia spp. DNA", "Verification of PCR specificity", "Distribution of Babesia annae in red foxes in Great Britain", "Phylogenetic relationship of Babesia annae and related species" ]
[ "The three hundred and sixteen foxes were originally collected by the University of Edinburgh as part of a study looking at the prevalence of Echinococcus multilocularis, Neospora caninum and Toxoplasma gondii [20, 21]. Lung samples were collected at post mortem examination and frozen at −20 °C prior to processing. Lung fluids were prepared as previously described [20]. Briefly, lungs were defrosted and bloody exudate was collected into a 1.5 ml microfuge tube, where no exudate was visible, lung samples were placed in a stomacher bag with approximately 5 ml of phosphate buffer saline (PBS) and processed in a stomacher for 15 s. All exudates / PBS samples were stored at −20 °C prior to DNA extraction.\nThe foxes sampled were collected over a wide range of locations in each of the study regions. A majority of the foxes were shot by game keepers and land owners as part of routine pest control procedures, so the location data from where the foxes were originally collected and the gender of the animals were often available. However some of the foxes were obtained from direct culls, for these animals the location and gender of the animal was not always available [21].", "Three hundred and sixteen lung exudate samples were defrosted and mixed by vortexing, 400 μl of each sample was added to 900 μl Nuclei Lysis Solution (Promega, Madison WI, USA) and incubated at 55 °C overnight. The samples were then processed to DNA using the Wizard® genomic DNA (Promega, Madison WI, USA) purification protocol, which was adapted by Bartley and colleagues [22] to be scaled up to allow the use of 400 μl of starting material. The DNA was resuspended in 300 μl of DNase and RNase free water and stored at +4 °C for immediate use or at −20 °C for longer term storage. Extraction controls (water) were also prepared with each batch of exudate samples, these were used as indicators of contamination and acted as additional negative controls.", "A semi-nested PCR was used to screen for the presence of Babesia DNA in 120 lung exudate samples using universal Babesia-Theileria primers BT1-F and BTH-1R for the primary amplification and BT1-F and BT1-R for the second round amplification (previously described [23]). Following this initial screening the B. annae specific primers BTFox1F and BTFox1R were designed (Primer3web v4.0.0) for use in the second round amplifications (Table 1). All 316 lung exudate samples were tested using the B. 
annae specific primers BTFox1F and BTFox1R.Table 1Primer names, specificity and sequences used for the detection of Babesia DNA in fox lung exudate samplesPrimer NameSpecificitya\nSequence (5’ – 3’)ReferenceBT1-FUniversal Babesia – Theileria\nGGTTGATCCTGCCAGTAGT[23]BTH-1RUniversal Babesia – Theileria\nTTGCGACCATACTCCCCCCABTH-1 FUniversal Babesia – Theileria\nCCTGMGARACGGCTACCACATCTBT1-RUniversal Babesia – Theileria\nGCCTGCTGCCTTCCTTABTFox1F\nBabesia annae\nAGTTATAAGCTTTTATACAGCDeveloped in the studyBTFox1R\nBabesia annae\nCACTCTAGTTTTCTCAAAGTAAAATABT-Outer-R\nBabesia annae\nGGAAACCTTGTTACGACTTCTCBT-Inner-R\nBabesia annae\nTTCTCCTTCCTTTAAGTGATAAG600-F\nBabesia annae\nAGTTAAGAAGCTCGTAGTTG1200-F\nBabesia annae\nAGGATTGACAGATTGATAGC\naAll primers were designed against the 18S rRNA gene\nPrimer names, specificity and sequences used for the detection of Babesia DNA in fox lung exudate samples\n\naAll primers were designed against the 18S rRNA gene\nThe reaction mixture was adapted from that previously described by Burrells and colleagues [24] and amplification conditions were as follows; each reaction (20 μl) consisted of custom PCR master mix (containing final concentrations of 45 mM Tris–HCl, 11 mM (NH4)2SO4, 4.5 mM MgCl2, 0.113 mg/ml BSA, 4.4 μM EDTA and 1.0 mM each of dATP, dCTP, dGTP and dTTP) (ABgene, Surrey, UK), 0.25pM of each forward and reverse primer (Eurofins MWG Operon), 0.75U Taq polymerase (Bioline Ltd. London, UK) and 2 μl sample template DNA, to increase sensitivity each sample was analysed in duplicate. The basic reaction conditions for all the PCR amplifications were as follows; 94 °C for 5 min followed by 35 cycles at 94 °C for 1 min, annealing (see Table 2) for 1 min and 72 °C for 1 min with a final extension period at 72 °C for 5 min.Table 2Annealing temperatures and amplicon sizes of nested PCR reactions used for the detection of Babesia annae DNA in fox lung exudate samplesPrimersAmplicon size (bp)Annealing Temp (°C)BT1-F and BTH-1R107355BT1-F and BT1-R40860BTFox1F and BTFox1R65552BT1-F and BT-Outer-R173755BT1-F and BT-Inner-R171749\nAnnealing temperatures and amplicon sizes of nested PCR reactions used for the detection of Babesia annae DNA in fox lung exudate samples\nA selection of positive samples that gave positive results for both duplicates using the B. annae specific BTFox1F and BTFox1R primers were tested in a semi-nested PCR using the primers BT1-F and BT-Outer-R (Table 1) in the primary reaction and BT1-F and BT-Inner-R (Table 1) in the second round reaction. These primers were designed to produce a longer 18S rRNA gene fragment (approx 1.7Kb). Following the second round amplification, 10 μl of each PCR product was analysed by agarose gel electrophoresis (2 % agarose in 1x TAE buffer), stained with gel red (1:10,000) (Biotonium, Hayward, CA, USA) and visualised using UV light.", "PCR products from samples that gave positive results for both duplicates using the BT1-F and BT-Outer-R (Table 1) in the primary reaction and BT1-F and BT-Inner-R primers in the second round reaction were selected for sequencing. These amplicons were cleaned using the commercially available Wizard® SV Gel and PCR Clean-up System (Promega, Madison WI, USA), as per manufacturers’ instructions. 
The PCR product was eluted in 30 μl of DNase / RNase free water and the nucleic acid concentration was determined by spectralphotometry (Nanodrop ND1000), 100 ng of each sample was sent for sequencing with each primer (MWG Operon) BT1-F, BTH-1 F, BTFox1F and BTFox1R (Table 1) to create a forward and reverse consensus sequence.", "Seven samples which gave positive results for both duplicates from the semi nested PCR (B. annae specific BT1-F and BT-Outer-R followed by BT1-F-BT-Inner-R), were selected for cloning and PCR products were purified as above, 7 μl of purified product was ligated into the pGEM®-T Easy Vector (Promega, Madison WI, USA) using 1 μl of T4 DNA ligase (3 Weiss units/μl), 1 μl of 10X Rapid Ligation Buffer and 1 μl of pGEM®-T Easy vector (50 ng/μl) according to the manufacturers’ instructions. Following ligation, 2 μl of ligated vector/insert was used to transform 50 μl of high-efficiency competent JM109 cells (≥1 × 108 cfu/μg DNA) (Promega, Madison WI, USA) using manufacturers’ instructions, with the following exception that LB broth was used instead of SOC medium to culture the cells. Successful transformation was confirmed using LB agar plates containing 100 μg/ml ampicillin and spread with 100 μl of IPTG (100 mM) and 20 μl of X-Gal (50 mg/ml). White colonies were screened by PCR using the BTFox1F and BTFox1R primers (Table 1) to determine the presence of the B. annae 18S rRNA gene insert. Positive colonies were cultured overnight (approx 18 h) in 10 ml LB broth containing 100 μg/ml ampicillin. Following this incubation plasmid DNA was then purified from 5 ml of each culture using the QIAprep® Miniprep kit (Qiagen), according to the manufacturers’ instructions. Purified plasmid DNA (100 ng per primer) was sent to be sequenced (MWG Operon) using T7 and SP6 primers along with BTH-1 F, 600-F, 1200-F, BT1-R, and BTH-1R (Table 1), this created an overlapping forward and reverse consensus sequence for each clone.", "Following sequencing, results were compared using NCBI Basic Local Alignment Search Tool (BLAST) to determine percentage identity of the generated sequences against published sequences. Multiple sequence alignments were created to compare these sequences to previously published data (EMBL-EBI Multiple Sequence Comparison by Log- Expectation (MUSCLE)), while phylogenetic analysis was performed on the long (1717 bp) consensus sequence using PhyML 3.0 (ATCG, Phylogeny.fr). The Gblocks programme was used for automatic alignment curation, while PhyML was used for tree building and TreeDyn programme was used to draw the phylogenetic tree [25–27]. All analyses are based on the maximum likelihood principal using an approximation of the standard Likelihood Ratio Test.", "Calculations regarding the prevalence of Babesia in red foxes from separate regions of Britain were performed only using data from animals where a location was known (253 / 316 samples). Comparisons of infection rates in relation to gender were made where data was available. All 316 samples were used when calculating the overall prevalence of Babesia DNA in red foxes in Britain.\nProportion positive (prevalence), with confidence intervals (95 % CI), was calculated for the overall study set as well as at regional level. Prevalence at the regional level was compared with the overall UK prevalence to determine if there was a significant difference (Minitab 15 (v15.1.0.0)). In addition the overall prevalence in male animals was compared with that in females (Minitab 15 (v15.1.0.0)). 
In these analyses, Fisher’s Exact Test was used where the number of events was less than 5; in all other cases the hypothesis test was based on the normal approximation.", "Of the 120 samples initially screened, 29 samples gave positive results. From these nine samples that gave positive results for both duplicates from the semi-nested PCR (universal Babesia-Theileria primers BT1-F-BTH-1R and BT1-F-BT1-R) were cleaned as described above and sent for sequencing in one direction, using either the BT1-F or BTH-1 F primers. Following this initial screening all sequences returned with 99–100 % identity to several isolates of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ accession numbers EU583387, AY534602 and AF188001) [28]. No other Babesia species were identified in the fox lung exudate samples.", "A random selection of nine positive samples from across Britain that gave positive results for both duplicates, were sent for sequencing using the Babesia annae specific BTFox1F and BTFox1R primers, this created a 597 bp consensus sequence (submitted to GenBank accession number KT580786). When this resultant sequence was compared using NCBI BLAST to the published sequences, it was found to have 100 % identity to several isolates of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ (accession numbers EU583387, AY534602 and AF188001) [28]. A further 13 positive samples were sequenced only using the BTFox1F primer; all of these sequences were 100 % identical to Babesia annae (previously referred to as: Babesia sp. ‘Spanish Dog’-AY534602, EU583387 and AF188001).\nSeven clones were created from two of the positive samples (F501 (4 clones) and F340 (3 clones)) using the primers BT1-F & BT-Outer-R and BT1-F & BT-Inner-R. These clones were used to create two consensus sequences, which were identical to each other. One of the 1717 bp consensus sequences was submitted to GenBank (accession number KT580785). When this consensus sequence was compared using BLAST it demonstrated a 100 % identity to the published sequences of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ (accession numbers AY534602, EU583387 and AF188001).", "Of the 316 lung exudate samples tested by nested PCR (using the BTFox1F and BTFox1R primers), 46 (14.55 % with a 95 % Confidence Interval (CI) of 10.66–18.44 %) returned positive results for the presence of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’) DNA. Positive samples were found in foxes widely distributed across Great Britain (see Table 3), with B. annae DNA being found in Scotland as well as across the whole of England. The central region of England had the highest prevalence with 18 / 49 positive samples (36.7 % prevalence - 95 % CI 23.23 %–50.23 %), which was significantly (p = 0.007) higher than the British average. Of the regions of Britain where positive samples were found, Scotland had the lowest prevalence with only 6 / 80 positive samples (7.5 % prevalence - 95 % CI 1.72 %–13.27 %). Scotland had a significantly lower (p = 0.012) prevalence than the British average. 
No positive samples were found in Wales, though only 12 samples were available for analysis for this region.Table 3Prevalence of Babesia annae DNA in red foxes (Vulpes vulpes) from across Great BritainRegionNo testedNo PositivePrevalence (%)95 % CIGenderNo / No Positive% Prevalence95 % CIScotland8067.5*\n1.7–13.3 %Male48 / 510.41.8–19.0 %Female32 / 13.10.00–9.2 %Wales1200-Male5 / 00-Female7 / 00-Northern (England)34823.59.3–37.8 %Male20 / 6309.9–50.1 %Female14 / 214.20.0–32.6 %Central (England)491836.7**\n23.2–50.2 %Male22 / 104524.6–66.3 %Female27 / 829.612.4–46.9 %Southern (England)781114.16.4–21.8 %Male45 / 817.86.6–28.9 %Female9 / 39.10.0–18.9 %TotalMale140 / 2920.714.0–27.4 %Female113 / 1412.46.3–18.5 %\nN Number, CI Confidence interval\n* Significantly lower prevalence than nation average (p = 0.045)\n** Significantly higher prevalence than national average (p = 0.003)\nPrevalence of Babesia annae DNA in red foxes (Vulpes vulpes) from across Great Britain\n\nN Number, CI Confidence interval\n\n* Significantly lower prevalence than nation average (p = 0.045)\n\n** Significantly higher prevalence than national average (p = 0.003)\nThe data shows that a higher number male foxes tested positive for parasite DNA than females in all the regions where positive samples were found (Table 3), although no statistically significant differences (p = 0.093) were observed between numbers of positive males and females when comparing genders across Britain.", "Following maximum likelihood analysis it can be seen that Babesia parasites appear to be in three separate clades (Fig. 1) using the nomenclature suggested by Lack and colleagues [29], with KT580785 (B. annae) situated in clade VIII (microti group) with AY534602, EU583387 and AF188001, which are other published sequences of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’) (see Fig. 1) found in Spain and the USA. The next most closely associated group of sequences have all been classified as B. microti (AB032434, AB085191 and AB190459); these were isolated from a human, bank vole and forest mouse respectively. Two Piroplasmida species found in Eurasian badges (Meles meles) (EF057099) and Eurasian otter (Lutra lutra) (FJ225390) were also closely related to B. annae. The remaining Babesia species are further separated into 2 clades: The classical Babesia group (clade I) and the duncani group (clade IV). The Theileria species, with the exception of T. equi are situated in clade III, while the Hepatozoon parasites are situated in a separate clade.Fig. 1Phylogenetic analysis on 18S rRNA gene. Phylogenetic tree showing maximum likelihood approximation of the standard likelihood ratio test scores. A maximum likelihood approximation of the standard likelihood ratio test score of 0 indicating that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The phylogenetic analysis places KT580785 in clade VIII (microti group according to the nomenclature suggested by Lack and colleagues [26]).\nPhylogenetic analysis on 18S rRNA gene. Phylogenetic tree showing maximum likelihood approximation of the standard likelihood ratio test scores. A maximum likelihood approximation of the standard likelihood ratio test score of 0 indicating that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The phylogenetic analysis places KT580785 in clade VIII (microti group according to the nomenclature suggested by Lack and colleagues [26])." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Sample collection and preparation", "DNA extraction", "PCR for detection of Babesia DNA in blood samples", "PCR clean up and DNA sequencing", "Cloning of the 18 s rRNA gene of Babesia annae", "Sequence / phylogenetic analysis", "Statistical analysis", "Results", "Screening of fox lung exudate samples for the presence of Babesia spp. DNA", "Verification of PCR specificity", "Distribution of Babesia annae in red foxes in Great Britain", "Phylogenetic relationship of Babesia annae and related species", "Discussion", "Conclusions" ]
[ "Babesia are small piroplasmid parasites which are widely distributed throughout the world and are transmitted to hosts through the bite of infected ticks, transplacentally [1] and mechanically through the exchange of blood i.e., in dog fights [2] as well as transovarially and transstadially in ticks . More than 100 species of Babesia have been documented [3], Babesia parasites are capable of infecting a wide range of wild and domestic host species, including humans, cattle and dogs. This ability to infect and cause disease in many mammalian species make Babesia of great economic importance in both the medical and veterinary fields. Bovine babesiosis (Red water fever) for example is considered the most important arthropod transmitted disease in cattle [4]. In Europe the main cause of human babesiosis is considered to be B. divergens [4], though over recent years other Babesia species including B. microti [5] have also been found to be responsible for human babesiosis. Babesia annae is among these economically important species having been shown to cause severe (even fatal) disease in dogs [1, 2].\nLittle is known about the prevalence of piroplasm infections in dogs in Great Britain. A recent study by Crawford and colleagues [6] examining 262 canine blood donors found none were positive for Babesia DNA, while a study by Smith and colleagues [7] found only 2.4 % 16/742 ticks collected from dogs from across Great Britain tested PCR positive for Babesia DNA, 11 of which showed 97–100 % identity to B. gibsoni. There are no records of Babesia annae having been previously detected in Britain.\nBabesia annae was identified previously under numerous synonyms including Theileria annae, Babesia sp. ‘Spanish dog’ and Babesia-microti-like [8]. This species has been found in red foxes (Vulpes vulpes), and dogs (Canis familiaris) in North America and Canada [9, 10] and throughout Europe [1, 11–15] and is considered by some authors to be “hyperendemic” in northwest Spain [16].\nRed foxes are common throughout Great Britain, being highly adaptive and opportunistic predators and scavengers [17]. The most recent surveys suggested that there are between 240,000 and 258,000 red foxes in Britain, of which approximately 87 % (225,000) live in rural areas, compared to 13 % (33,000) of urban foxes (Game & Wildlife Conservation Trust (2013) and DEFRA FOI release (2013). The foxes are most likely to encounter questing adult ticks, nymphs and larvae whilst hunting and scavenging. Anthropogenic changes, such as wildlife and forest management strategies and changes in land use have also created more suitable habitats for foxes, their prey and the Ixodid ticks that feed on both. These changes have lead not only to increases in tick numbers, but have also enabled ticks to increase their distribution across Europe, with Ixodes ticks now being found in areas previously considered tick free i.e., northern Sweden [18].\nThe main tick species found in Britain are Ixodes ricinus (sheep tick, which is also known in some countries as the deer tick) and Ixodes hexagonus (hedgehog tick). Both tick species can be found on a wide range of domestic and wild animal species and have been shown in previous studies in Spain and Germany to be positive for B. annae (synonym T. annae) DNA [15, 19]. This is however not conclusive proof that these species are vectors for the transmission of B. 
annae.\nThis study aimed at determining the prevalence of piroplasm infection in red foxes and the species of piroplasm circulating in the red fox population in Great Britain, through the analysis of lung exudate samples.", " Sample collection and preparation The three hundred and sixteen foxes were originally collected by the University of Edinburgh as part of a study looking at the prevalence of Echinococcus multilocularis, Neospora caninum and Toxoplasma gondii [20, 21]. Lung samples were collected at post mortem examination and frozen at −20 °C prior to processing. Lung fluids were prepared as previously described [20]. Briefly, lungs were defrosted and bloody exudate was collected into a 1.5 ml microfuge tube, where no exudate was visible, lung samples were placed in a stomacher bag with approximately 5 ml of phosphate buffer saline (PBS) and processed in a stomacher for 15 s. All exudates / PBS samples were stored at −20 °C prior to DNA extraction.\nThe foxes sampled were collected over a wide range of locations in each of the study regions. A majority of the foxes were shot by game keepers and land owners as part of routine pest control procedures, so the location data from where the foxes were originally collected and the gender of the animals were often available. However some of the foxes were obtained from direct culls, for these animals the location and gender of the animal was not always available [21].\nThe three hundred and sixteen foxes were originally collected by the University of Edinburgh as part of a study looking at the prevalence of Echinococcus multilocularis, Neospora caninum and Toxoplasma gondii [20, 21]. Lung samples were collected at post mortem examination and frozen at −20 °C prior to processing. Lung fluids were prepared as previously described [20]. Briefly, lungs were defrosted and bloody exudate was collected into a 1.5 ml microfuge tube, where no exudate was visible, lung samples were placed in a stomacher bag with approximately 5 ml of phosphate buffer saline (PBS) and processed in a stomacher for 15 s. All exudates / PBS samples were stored at −20 °C prior to DNA extraction.\nThe foxes sampled were collected over a wide range of locations in each of the study regions. A majority of the foxes were shot by game keepers and land owners as part of routine pest control procedures, so the location data from where the foxes were originally collected and the gender of the animals were often available. However some of the foxes were obtained from direct culls, for these animals the location and gender of the animal was not always available [21].\n DNA extraction Three hundred and sixteen lung exudate samples were defrosted and mixed by vortexing, 400 μl of each sample was added to 900 μl Nuclei Lysis Solution (Promega, Madison WI, USA) and incubated at 55 °C overnight. The samples were then processed to DNA using the Wizard® genomic DNA (Promega, Madison WI, USA) purification protocol, which was adapted by Bartley and colleagues [22] to be scaled up to allow the use of 400 μl of starting material. The DNA was resuspended in 300 μl of DNase and RNase free water and stored at +4 °C for immediate use or at −20 °C for longer term storage. 
Extraction controls (water) were also prepared with each batch of exudate samples, these were used as indicators of contamination and acted as additional negative controls.\nThree hundred and sixteen lung exudate samples were defrosted and mixed by vortexing, 400 μl of each sample was added to 900 μl Nuclei Lysis Solution (Promega, Madison WI, USA) and incubated at 55 °C overnight. The samples were then processed to DNA using the Wizard® genomic DNA (Promega, Madison WI, USA) purification protocol, which was adapted by Bartley and colleagues [22] to be scaled up to allow the use of 400 μl of starting material. The DNA was resuspended in 300 μl of DNase and RNase free water and stored at +4 °C for immediate use or at −20 °C for longer term storage. Extraction controls (water) were also prepared with each batch of exudate samples, these were used as indicators of contamination and acted as additional negative controls.\n PCR for detection of Babesia DNA in blood samples A semi-nested PCR was used to screen for the presence of Babesia DNA in 120 lung exudate samples using universal Babesia-Theileria primers BT1-F and BTH-1R for the primary amplification and BT1-F and BT1-R for the second round amplification (previously described [23]). Following this initial screening the B. annae specific primers BTFox1F and BTFox1R were designed (Primer3web v4.0.0) for use in the second round amplifications (Table 1). All 316 lung exudate samples were tested using the B. annae specific primers BTFox1F and BTFox1R.Table 1Primer names, specificity and sequences used for the detection of Babesia DNA in fox lung exudate samplesPrimer NameSpecificitya\nSequence (5’ – 3’)ReferenceBT1-FUniversal Babesia – Theileria\nGGTTGATCCTGCCAGTAGT[23]BTH-1RUniversal Babesia – Theileria\nTTGCGACCATACTCCCCCCABTH-1 FUniversal Babesia – Theileria\nCCTGMGARACGGCTACCACATCTBT1-RUniversal Babesia – Theileria\nGCCTGCTGCCTTCCTTABTFox1F\nBabesia annae\nAGTTATAAGCTTTTATACAGCDeveloped in the studyBTFox1R\nBabesia annae\nCACTCTAGTTTTCTCAAAGTAAAATABT-Outer-R\nBabesia annae\nGGAAACCTTGTTACGACTTCTCBT-Inner-R\nBabesia annae\nTTCTCCTTCCTTTAAGTGATAAG600-F\nBabesia annae\nAGTTAAGAAGCTCGTAGTTG1200-F\nBabesia annae\nAGGATTGACAGATTGATAGC\naAll primers were designed against the 18S rRNA gene\nPrimer names, specificity and sequences used for the detection of Babesia DNA in fox lung exudate samples\n\naAll primers were designed against the 18S rRNA gene\nThe reaction mixture was adapted from that previously described by Burrells and colleagues [24] and amplification conditions were as follows; each reaction (20 μl) consisted of custom PCR master mix (containing final concentrations of 45 mM Tris–HCl, 11 mM (NH4)2SO4, 4.5 mM MgCl2, 0.113 mg/ml BSA, 4.4 μM EDTA and 1.0 mM each of dATP, dCTP, dGTP and dTTP) (ABgene, Surrey, UK), 0.25pM of each forward and reverse primer (Eurofins MWG Operon), 0.75U Taq polymerase (Bioline Ltd. London, UK) and 2 μl sample template DNA, to increase sensitivity each sample was analysed in duplicate. 
The basic reaction conditions for all the PCR amplifications were as follows; 94 °C for 5 min followed by 35 cycles at 94 °C for 1 min, annealing (see Table 2) for 1 min and 72 °C for 1 min with a final extension period at 72 °C for 5 min.Table 2Annealing temperatures and amplicon sizes of nested PCR reactions used for the detection of Babesia annae DNA in fox lung exudate samplesPrimersAmplicon size (bp)Annealing Temp (°C)BT1-F and BTH-1R107355BT1-F and BT1-R40860BTFox1F and BTFox1R65552BT1-F and BT-Outer-R173755BT1-F and BT-Inner-R171749\nAnnealing temperatures and amplicon sizes of nested PCR reactions used for the detection of Babesia annae DNA in fox lung exudate samples\nA selection of positive samples that gave positive results for both duplicates using the B. annae specific BTFox1F and BTFox1R primers were tested in a semi-nested PCR using the primers BT1-F and BT-Outer-R (Table 1) in the primary reaction and BT1-F and BT-Inner-R (Table 1) in the second round reaction. These primers were designed to produce a longer 18S rRNA gene fragment (approx 1.7Kb). Following the second round amplification, 10 μl of each PCR product was analysed by agarose gel electrophoresis (2 % agarose in 1x TAE buffer), stained with gel red (1:10,000) (Biotonium, Hayward, CA, USA) and visualised using UV light.\nA semi-nested PCR was used to screen for the presence of Babesia DNA in 120 lung exudate samples using universal Babesia-Theileria primers BT1-F and BTH-1R for the primary amplification and BT1-F and BT1-R for the second round amplification (previously described [23]). Following this initial screening the B. annae specific primers BTFox1F and BTFox1R were designed (Primer3web v4.0.0) for use in the second round amplifications (Table 1). All 316 lung exudate samples were tested using the B. annae specific primers BTFox1F and BTFox1R.Table 1Primer names, specificity and sequences used for the detection of Babesia DNA in fox lung exudate samplesPrimer NameSpecificitya\nSequence (5’ – 3’)ReferenceBT1-FUniversal Babesia – Theileria\nGGTTGATCCTGCCAGTAGT[23]BTH-1RUniversal Babesia – Theileria\nTTGCGACCATACTCCCCCCABTH-1 FUniversal Babesia – Theileria\nCCTGMGARACGGCTACCACATCTBT1-RUniversal Babesia – Theileria\nGCCTGCTGCCTTCCTTABTFox1F\nBabesia annae\nAGTTATAAGCTTTTATACAGCDeveloped in the studyBTFox1R\nBabesia annae\nCACTCTAGTTTTCTCAAAGTAAAATABT-Outer-R\nBabesia annae\nGGAAACCTTGTTACGACTTCTCBT-Inner-R\nBabesia annae\nTTCTCCTTCCTTTAAGTGATAAG600-F\nBabesia annae\nAGTTAAGAAGCTCGTAGTTG1200-F\nBabesia annae\nAGGATTGACAGATTGATAGC\naAll primers were designed against the 18S rRNA gene\nPrimer names, specificity and sequences used for the detection of Babesia DNA in fox lung exudate samples\n\naAll primers were designed against the 18S rRNA gene\nThe reaction mixture was adapted from that previously described by Burrells and colleagues [24] and amplification conditions were as follows; each reaction (20 μl) consisted of custom PCR master mix (containing final concentrations of 45 mM Tris–HCl, 11 mM (NH4)2SO4, 4.5 mM MgCl2, 0.113 mg/ml BSA, 4.4 μM EDTA and 1.0 mM each of dATP, dCTP, dGTP and dTTP) (ABgene, Surrey, UK), 0.25pM of each forward and reverse primer (Eurofins MWG Operon), 0.75U Taq polymerase (Bioline Ltd. London, UK) and 2 μl sample template DNA, to increase sensitivity each sample was analysed in duplicate. 
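The relationship between the primer pairs and the amplicon sizes listed in Table 2 can be illustrated with a short script. The sketch below is not part of the study; it simply locates a forward primer and the reverse complement of a reverse primer in a reference sequence and reports the expected product length. The reference sequence used here is a padded placeholder (a real check would use a published 18S rRNA sequence such as AY534602); the primer sequences are the universal BT1-F / BT1-R pair from Table 1.

```python
# Illustrative sketch (not part of the original study): locating primer binding
# sites in a reference sequence and reporting the expected amplicon size, as
# summarised for each primer pair in Table 2. The "reference" below is a padded
# placeholder, not a real 18S rRNA gene sequence.

COMPLEMENT = str.maketrans("ACGTMRWSYKVHDBN", "TGCAKYWSRMBDHVN")

def reverse_complement(seq: str) -> str:
    """Reverse complement, tolerating IUPAC ambiguity codes."""
    return seq.translate(COMPLEMENT)[::-1]

def expected_amplicon_size(reference: str, fwd_primer: str, rev_primer: str) -> int:
    """Length from the 5' end of the forward primer site to the 5' end of the
    reverse primer site (the reverse primer binds the opposite strand, so its
    reverse complement is searched for on the given strand)."""
    fwd_start = reference.find(fwd_primer)
    rev_site = reference.find(reverse_complement(rev_primer))
    if fwd_start == -1 or rev_site == -1:
        raise ValueError("primer binding site not found in reference")
    return rev_site + len(rev_primer) - fwd_start

# Universal primers BT1-F and BT1-R (Table 1); the placeholder spacer is sized
# so that the product matches the 408 bp reported for this pair in Table 2.
bt1_f = "GGTTGATCCTGCCAGTAGT"
bt1_r = "GCCTGCTGCCTTCCTTA"
reference_placeholder = bt1_f + "N" * 372 + reverse_complement(bt1_r)
print(expected_amplicon_size(reference_placeholder, bt1_f, bt1_r))  # -> 408
```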
The basic reaction conditions for all the PCR amplifications were as follows; 94 °C for 5 min followed by 35 cycles at 94 °C for 1 min, annealing (see Table 2) for 1 min and 72 °C for 1 min with a final extension period at 72 °C for 5 min.Table 2Annealing temperatures and amplicon sizes of nested PCR reactions used for the detection of Babesia annae DNA in fox lung exudate samplesPrimersAmplicon size (bp)Annealing Temp (°C)BT1-F and BTH-1R107355BT1-F and BT1-R40860BTFox1F and BTFox1R65552BT1-F and BT-Outer-R173755BT1-F and BT-Inner-R171749\nAnnealing temperatures and amplicon sizes of nested PCR reactions used for the detection of Babesia annae DNA in fox lung exudate samples\nA selection of positive samples that gave positive results for both duplicates using the B. annae specific BTFox1F and BTFox1R primers were tested in a semi-nested PCR using the primers BT1-F and BT-Outer-R (Table 1) in the primary reaction and BT1-F and BT-Inner-R (Table 1) in the second round reaction. These primers were designed to produce a longer 18S rRNA gene fragment (approx 1.7Kb). Following the second round amplification, 10 μl of each PCR product was analysed by agarose gel electrophoresis (2 % agarose in 1x TAE buffer), stained with gel red (1:10,000) (Biotonium, Hayward, CA, USA) and visualised using UV light.\n PCR clean up and DNA sequencing PCR products from samples that gave positive results for both duplicates using the BT1-F and BT-Outer-R (Table 1) in the primary reaction and BT1-F and BT-Inner-R primers in the second round reaction were selected for sequencing. These amplicons were cleaned using the commercially available Wizard® SV Gel and PCR Clean-up System (Promega, Madison WI, USA), as per manufacturers’ instructions. The PCR product was eluted in 30 μl of DNase / RNase free water and the nucleic acid concentration was determined by spectralphotometry (Nanodrop ND1000), 100 ng of each sample was sent for sequencing with each primer (MWG Operon) BT1-F, BTH-1 F, BTFox1F and BTFox1R (Table 1) to create a forward and reverse consensus sequence.\nPCR products from samples that gave positive results for both duplicates using the BT1-F and BT-Outer-R (Table 1) in the primary reaction and BT1-F and BT-Inner-R primers in the second round reaction were selected for sequencing. These amplicons were cleaned using the commercially available Wizard® SV Gel and PCR Clean-up System (Promega, Madison WI, USA), as per manufacturers’ instructions. The PCR product was eluted in 30 μl of DNase / RNase free water and the nucleic acid concentration was determined by spectralphotometry (Nanodrop ND1000), 100 ng of each sample was sent for sequencing with each primer (MWG Operon) BT1-F, BTH-1 F, BTFox1F and BTFox1R (Table 1) to create a forward and reverse consensus sequence.\n Cloning of the 18 s rRNA gene of Babesia annae Seven samples which gave positive results for both duplicates from the semi nested PCR (B. annae specific BT1-F and BT-Outer-R followed by BT1-F-BT-Inner-R), were selected for cloning and PCR products were purified as above, 7 μl of purified product was ligated into the pGEM®-T Easy Vector (Promega, Madison WI, USA) using 1 μl of T4 DNA ligase (3 Weiss units/μl), 1 μl of 10X Rapid Ligation Buffer and 1 μl of pGEM®-T Easy vector (50 ng/μl) according to the manufacturers’ instructions. 
Following ligation, 2 μl of ligated vector/insert was used to transform 50 μl of high-efficiency competent JM109 cells (≥1 × 108 cfu/μg DNA) (Promega, Madison WI, USA) using manufacturers’ instructions, with the following exception that LB broth was used instead of SOC medium to culture the cells. Successful transformation was confirmed using LB agar plates containing 100 μg/ml ampicillin and spread with 100 μl of IPTG (100 mM) and 20 μl of X-Gal (50 mg/ml). White colonies were screened by PCR using the BTFox1F and BTFox1R primers (Table 1) to determine the presence of the B. annae 18S rRNA gene insert. Positive colonies were cultured overnight (approx 18 h) in 10 ml LB broth containing 100 μg/ml ampicillin. Following this incubation plasmid DNA was then purified from 5 ml of each culture using the QIAprep® Miniprep kit (Qiagen), according to the manufacturers’ instructions. Purified plasmid DNA (100 ng per primer) was sent to be sequenced (MWG Operon) using T7 and SP6 primers along with BTH-1 F, 600-F, 1200-F, BT1-R, and BTH-1R (Table 1), this created an overlapping forward and reverse consensus sequence for each clone.\nSeven samples which gave positive results for both duplicates from the semi nested PCR (B. annae specific BT1-F and BT-Outer-R followed by BT1-F-BT-Inner-R), were selected for cloning and PCR products were purified as above, 7 μl of purified product was ligated into the pGEM®-T Easy Vector (Promega, Madison WI, USA) using 1 μl of T4 DNA ligase (3 Weiss units/μl), 1 μl of 10X Rapid Ligation Buffer and 1 μl of pGEM®-T Easy vector (50 ng/μl) according to the manufacturers’ instructions. Following ligation, 2 μl of ligated vector/insert was used to transform 50 μl of high-efficiency competent JM109 cells (≥1 × 108 cfu/μg DNA) (Promega, Madison WI, USA) using manufacturers’ instructions, with the following exception that LB broth was used instead of SOC medium to culture the cells. Successful transformation was confirmed using LB agar plates containing 100 μg/ml ampicillin and spread with 100 μl of IPTG (100 mM) and 20 μl of X-Gal (50 mg/ml). White colonies were screened by PCR using the BTFox1F and BTFox1R primers (Table 1) to determine the presence of the B. annae 18S rRNA gene insert. Positive colonies were cultured overnight (approx 18 h) in 10 ml LB broth containing 100 μg/ml ampicillin. Following this incubation plasmid DNA was then purified from 5 ml of each culture using the QIAprep® Miniprep kit (Qiagen), according to the manufacturers’ instructions. Purified plasmid DNA (100 ng per primer) was sent to be sequenced (MWG Operon) using T7 and SP6 primers along with BTH-1 F, 600-F, 1200-F, BT1-R, and BTH-1R (Table 1), this created an overlapping forward and reverse consensus sequence for each clone.\n Sequence / phylogenetic analysis Following sequencing, results were compared using NCBI Basic Local Alignment Search Tool (BLAST) to determine percentage identity of the generated sequences against published sequences. Multiple sequence alignments were created to compare these sequences to previously published data (EMBL-EBI Multiple Sequence Comparison by Log- Expectation (MUSCLE)), while phylogenetic analysis was performed on the long (1717 bp) consensus sequence using PhyML 3.0 (ATCG, Phylogeny.fr). The Gblocks programme was used for automatic alignment curation, while PhyML was used for tree building and TreeDyn programme was used to draw the phylogenetic tree [25–27]. 
All analyses are based on the maximum likelihood principal using an approximation of the standard Likelihood Ratio Test.\nFollowing sequencing, results were compared using NCBI Basic Local Alignment Search Tool (BLAST) to determine percentage identity of the generated sequences against published sequences. Multiple sequence alignments were created to compare these sequences to previously published data (EMBL-EBI Multiple Sequence Comparison by Log- Expectation (MUSCLE)), while phylogenetic analysis was performed on the long (1717 bp) consensus sequence using PhyML 3.0 (ATCG, Phylogeny.fr). The Gblocks programme was used for automatic alignment curation, while PhyML was used for tree building and TreeDyn programme was used to draw the phylogenetic tree [25–27]. All analyses are based on the maximum likelihood principal using an approximation of the standard Likelihood Ratio Test.\n Statistical analysis Calculations regarding the prevalence of Babesia in red foxes from separate regions of Britain were performed only using data from animals where a location was known (253 / 316 samples). Comparisons of infection rates in relation to gender were made where data was available. All 316 samples were used when calculating the overall prevalence of Babesia DNA in red foxes in Britain.\nProportion positive (prevalence), with confidence intervals (95 % CI), was calculated for the overall study set as well as at regional level. Prevalence at the regional level was compared with the overall UK prevalence to determine if there was a significant difference (Minitab 15 (v15.1.0.0)). In addition the overall prevalence in male animals was compared with that in females (Minitab 15 (v15.1.0.0)). In these analyses, Fisher’s Exact Test was used where the number of events was less than 5; in all other cases the hypothesis test was based on the normal approximation.\nCalculations regarding the prevalence of Babesia in red foxes from separate regions of Britain were performed only using data from animals where a location was known (253 / 316 samples). Comparisons of infection rates in relation to gender were made where data was available. All 316 samples were used when calculating the overall prevalence of Babesia DNA in red foxes in Britain.\nProportion positive (prevalence), with confidence intervals (95 % CI), was calculated for the overall study set as well as at regional level. Prevalence at the regional level was compared with the overall UK prevalence to determine if there was a significant difference (Minitab 15 (v15.1.0.0)). In addition the overall prevalence in male animals was compared with that in females (Minitab 15 (v15.1.0.0)). In these analyses, Fisher’s Exact Test was used where the number of events was less than 5; in all other cases the hypothesis test was based on the normal approximation.", "The three hundred and sixteen foxes were originally collected by the University of Edinburgh as part of a study looking at the prevalence of Echinococcus multilocularis, Neospora caninum and Toxoplasma gondii [20, 21]. Lung samples were collected at post mortem examination and frozen at −20 °C prior to processing. Lung fluids were prepared as previously described [20]. Briefly, lungs were defrosted and bloody exudate was collected into a 1.5 ml microfuge tube, where no exudate was visible, lung samples were placed in a stomacher bag with approximately 5 ml of phosphate buffer saline (PBS) and processed in a stomacher for 15 s. 
All exudates / PBS samples were stored at −20 °C prior to DNA extraction.\nThe foxes sampled were collected over a wide range of locations in each of the study regions. A majority of the foxes were shot by game keepers and land owners as part of routine pest control procedures, so the location data from where the foxes were originally collected and the gender of the animals were often available. However some of the foxes were obtained from direct culls, for these animals the location and gender of the animal was not always available [21].", "Three hundred and sixteen lung exudate samples were defrosted and mixed by vortexing, 400 μl of each sample was added to 900 μl Nuclei Lysis Solution (Promega, Madison WI, USA) and incubated at 55 °C overnight. The samples were then processed to DNA using the Wizard® genomic DNA (Promega, Madison WI, USA) purification protocol, which was adapted by Bartley and colleagues [22] to be scaled up to allow the use of 400 μl of starting material. The DNA was resuspended in 300 μl of DNase and RNase free water and stored at +4 °C for immediate use or at −20 °C for longer term storage. Extraction controls (water) were also prepared with each batch of exudate samples, these were used as indicators of contamination and acted as additional negative controls.", "A semi-nested PCR was used to screen for the presence of Babesia DNA in 120 lung exudate samples using universal Babesia-Theileria primers BT1-F and BTH-1R for the primary amplification and BT1-F and BT1-R for the second round amplification (previously described [23]). Following this initial screening the B. annae specific primers BTFox1F and BTFox1R were designed (Primer3web v4.0.0) for use in the second round amplifications (Table 1). All 316 lung exudate samples were tested using the B. annae specific primers BTFox1F and BTFox1R.Table 1Primer names, specificity and sequences used for the detection of Babesia DNA in fox lung exudate samplesPrimer NameSpecificitya\nSequence (5’ – 3’)ReferenceBT1-FUniversal Babesia – Theileria\nGGTTGATCCTGCCAGTAGT[23]BTH-1RUniversal Babesia – Theileria\nTTGCGACCATACTCCCCCCABTH-1 FUniversal Babesia – Theileria\nCCTGMGARACGGCTACCACATCTBT1-RUniversal Babesia – Theileria\nGCCTGCTGCCTTCCTTABTFox1F\nBabesia annae\nAGTTATAAGCTTTTATACAGCDeveloped in the studyBTFox1R\nBabesia annae\nCACTCTAGTTTTCTCAAAGTAAAATABT-Outer-R\nBabesia annae\nGGAAACCTTGTTACGACTTCTCBT-Inner-R\nBabesia annae\nTTCTCCTTCCTTTAAGTGATAAG600-F\nBabesia annae\nAGTTAAGAAGCTCGTAGTTG1200-F\nBabesia annae\nAGGATTGACAGATTGATAGC\naAll primers were designed against the 18S rRNA gene\nPrimer names, specificity and sequences used for the detection of Babesia DNA in fox lung exudate samples\n\naAll primers were designed against the 18S rRNA gene\nThe reaction mixture was adapted from that previously described by Burrells and colleagues [24] and amplification conditions were as follows; each reaction (20 μl) consisted of custom PCR master mix (containing final concentrations of 45 mM Tris–HCl, 11 mM (NH4)2SO4, 4.5 mM MgCl2, 0.113 mg/ml BSA, 4.4 μM EDTA and 1.0 mM each of dATP, dCTP, dGTP and dTTP) (ABgene, Surrey, UK), 0.25pM of each forward and reverse primer (Eurofins MWG Operon), 0.75U Taq polymerase (Bioline Ltd. London, UK) and 2 μl sample template DNA, to increase sensitivity each sample was analysed in duplicate. 
The basic reaction conditions for all the PCR amplifications were as follows; 94 °C for 5 min followed by 35 cycles at 94 °C for 1 min, annealing (see Table 2) for 1 min and 72 °C for 1 min with a final extension period at 72 °C for 5 min.Table 2Annealing temperatures and amplicon sizes of nested PCR reactions used for the detection of Babesia annae DNA in fox lung exudate samplesPrimersAmplicon size (bp)Annealing Temp (°C)BT1-F and BTH-1R107355BT1-F and BT1-R40860BTFox1F and BTFox1R65552BT1-F and BT-Outer-R173755BT1-F and BT-Inner-R171749\nAnnealing temperatures and amplicon sizes of nested PCR reactions used for the detection of Babesia annae DNA in fox lung exudate samples\nA selection of positive samples that gave positive results for both duplicates using the B. annae specific BTFox1F and BTFox1R primers were tested in a semi-nested PCR using the primers BT1-F and BT-Outer-R (Table 1) in the primary reaction and BT1-F and BT-Inner-R (Table 1) in the second round reaction. These primers were designed to produce a longer 18S rRNA gene fragment (approx 1.7Kb). Following the second round amplification, 10 μl of each PCR product was analysed by agarose gel electrophoresis (2 % agarose in 1x TAE buffer), stained with gel red (1:10,000) (Biotonium, Hayward, CA, USA) and visualised using UV light.", "PCR products from samples that gave positive results for both duplicates using the BT1-F and BT-Outer-R (Table 1) in the primary reaction and BT1-F and BT-Inner-R primers in the second round reaction were selected for sequencing. These amplicons were cleaned using the commercially available Wizard® SV Gel and PCR Clean-up System (Promega, Madison WI, USA), as per manufacturers’ instructions. The PCR product was eluted in 30 μl of DNase / RNase free water and the nucleic acid concentration was determined by spectralphotometry (Nanodrop ND1000), 100 ng of each sample was sent for sequencing with each primer (MWG Operon) BT1-F, BTH-1 F, BTFox1F and BTFox1R (Table 1) to create a forward and reverse consensus sequence.", "Seven samples which gave positive results for both duplicates from the semi nested PCR (B. annae specific BT1-F and BT-Outer-R followed by BT1-F-BT-Inner-R), were selected for cloning and PCR products were purified as above, 7 μl of purified product was ligated into the pGEM®-T Easy Vector (Promega, Madison WI, USA) using 1 μl of T4 DNA ligase (3 Weiss units/μl), 1 μl of 10X Rapid Ligation Buffer and 1 μl of pGEM®-T Easy vector (50 ng/μl) according to the manufacturers’ instructions. Following ligation, 2 μl of ligated vector/insert was used to transform 50 μl of high-efficiency competent JM109 cells (≥1 × 108 cfu/μg DNA) (Promega, Madison WI, USA) using manufacturers’ instructions, with the following exception that LB broth was used instead of SOC medium to culture the cells. Successful transformation was confirmed using LB agar plates containing 100 μg/ml ampicillin and spread with 100 μl of IPTG (100 mM) and 20 μl of X-Gal (50 mg/ml). White colonies were screened by PCR using the BTFox1F and BTFox1R primers (Table 1) to determine the presence of the B. annae 18S rRNA gene insert. Positive colonies were cultured overnight (approx 18 h) in 10 ml LB broth containing 100 μg/ml ampicillin. Following this incubation plasmid DNA was then purified from 5 ml of each culture using the QIAprep® Miniprep kit (Qiagen), according to the manufacturers’ instructions. 
Purified plasmid DNA (100 ng per primer) was sent to be sequenced (MWG Operon) using T7 and SP6 primers along with BTH-1 F, 600-F, 1200-F, BT1-R, and BTH-1R (Table 1); this created an overlapping forward and reverse consensus sequence for each clone.", "Following sequencing, results were compared using the NCBI Basic Local Alignment Search Tool (BLAST) to determine the percentage identity of the generated sequences against published sequences. Multiple sequence alignments were created to compare these sequences to previously published data (EMBL-EBI Multiple Sequence Comparison by Log-Expectation (MUSCLE)), while phylogenetic analysis was performed on the long (1717 bp) consensus sequence using PhyML 3.0 (ATCG, Phylogeny.fr). The Gblocks programme was used for automatic alignment curation, while PhyML was used for tree building and the TreeDyn programme was used to draw the phylogenetic tree [25–27]. All analyses are based on the maximum likelihood principle using an approximation of the standard Likelihood Ratio Test.", "Calculations regarding the prevalence of Babesia in red foxes from separate regions of Britain were performed only using data from animals where a location was known (253 / 316 samples). Comparisons of infection rates in relation to gender were made where data were available. All 316 samples were used when calculating the overall prevalence of Babesia DNA in red foxes in Britain.\nProportion positive (prevalence), with confidence intervals (95 % CI), was calculated for the overall study set as well as at regional level. Prevalence at the regional level was compared with the overall UK prevalence to determine if there was a significant difference (Minitab 15 (v15.1.0.0)). In addition, the overall prevalence in male animals was compared with that in females (Minitab 15 (v15.1.0.0)). In these analyses, Fisher’s Exact Test was used where the number of events was less than 5; in all other cases the hypothesis test was based on the normal approximation.", " Screening of fox lung exudate samples for the presence of Babesia spp. DNA Of the 120 samples initially screened, 29 samples gave positive results. From these, nine samples that gave positive results for both duplicates from the semi-nested PCR (universal Babesia-Theileria primers BT1-F-BTH-1R and BT1-F-BT1-R) were cleaned as described above and sent for sequencing in one direction, using either the BT1-F or BTH-1 F primers. Following this initial screening, all sequences returned 99–100 % identity to several isolates of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ accession numbers EU583387, AY534602 and AF188001) [28]. No other Babesia species were identified in the fox lung exudate samples.\nOf the 120 samples initially screened, 29 samples gave positive results. From these, nine samples that gave positive results for both duplicates from the semi-nested PCR (universal Babesia-Theileria primers BT1-F-BTH-1R and BT1-F-BT1-R) were cleaned as described above and sent for sequencing in one direction, using either the BT1-F or BTH-1 F primers. Following this initial screening, all sequences returned 99–100 % identity to several isolates of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ accession numbers EU583387, AY534602 and AF188001) [28]. 
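The "99–100 % identity" figures above come from BLAST comparisons against published 18S rRNA sequences. As a minimal illustration of the underlying quantity, the sketch below computes percentage identity between two sequences taken from a pairwise alignment; the sequences shown are short placeholders (the query fragment reuses the BTFox1F primer sequence from Table 1), not the actual amplicons or GenBank entries.

```python
# Illustrative sketch (not the BLAST search itself): percentage identity between
# two sequences of equal length taken from a pairwise alignment. The sequences
# here are placeholders, not the study's 18S rRNA amplicons.

def percent_identity(aligned_query: str, aligned_reference: str) -> float:
    """Identical positions / aligned length, ignoring gap-gap columns."""
    if len(aligned_query) != len(aligned_reference):
        raise ValueError("sequences must come from the same alignment")
    columns = [(q, r) for q, r in zip(aligned_query, aligned_reference)
               if not (q == "-" and r == "-")]
    matches = sum(1 for q, r in columns if q == r and q != "-")
    return 100.0 * matches / len(columns)

query     = "AGTTATAAGCTTTTATACAGC-GGA"   # placeholder query with one gap
reference = "AGTTATAAGCTTTTATACAGCTGGA"   # placeholder reference
print(f"{percent_identity(query, reference):.1f} % identity")  # -> 96.0 % identity
```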
No other Babesia species were identified in the fox lung exudate samples.\n Verification of PCR specificity A random selection of nine positive samples from across Britain that gave positive results for both duplicates, were sent for sequencing using the Babesia annae specific BTFox1F and BTFox1R primers, this created a 597 bp consensus sequence (submitted to GenBank accession number KT580786). When this resultant sequence was compared using NCBI BLAST to the published sequences, it was found to have 100 % identity to several isolates of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ (accession numbers EU583387, AY534602 and AF188001) [28]. A further 13 positive samples were sequenced only using the BTFox1F primer; all of these sequences were 100 % identical to Babesia annae (previously referred to as: Babesia sp. ‘Spanish Dog’-AY534602, EU583387 and AF188001).\nSeven clones were created from two of the positive samples (F501 (4 clones) and F340 (3 clones)) using the primers BT1-F & BT-Outer-R and BT1-F & BT-Inner-R. These clones were used to create two consensus sequences, which were identical to each other. One of the 1717 bp consensus sequences was submitted to GenBank (accession number KT580785). When this consensus sequence was compared using BLAST it demonstrated a 100 % identity to the published sequences of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ (accession numbers AY534602, EU583387 and AF188001).\nA random selection of nine positive samples from across Britain that gave positive results for both duplicates, were sent for sequencing using the Babesia annae specific BTFox1F and BTFox1R primers, this created a 597 bp consensus sequence (submitted to GenBank accession number KT580786). When this resultant sequence was compared using NCBI BLAST to the published sequences, it was found to have 100 % identity to several isolates of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ (accession numbers EU583387, AY534602 and AF188001) [28]. A further 13 positive samples were sequenced only using the BTFox1F primer; all of these sequences were 100 % identical to Babesia annae (previously referred to as: Babesia sp. ‘Spanish Dog’-AY534602, EU583387 and AF188001).\nSeven clones were created from two of the positive samples (F501 (4 clones) and F340 (3 clones)) using the primers BT1-F & BT-Outer-R and BT1-F & BT-Inner-R. These clones were used to create two consensus sequences, which were identical to each other. One of the 1717 bp consensus sequences was submitted to GenBank (accession number KT580785). When this consensus sequence was compared using BLAST it demonstrated a 100 % identity to the published sequences of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ (accession numbers AY534602, EU583387 and AF188001).\n Distribution of Babesia annae in red foxes in Great Britain Of the 316 lung exudate samples tested by nested PCR (using the BTFox1F and BTFox1R primers), 46 (14.55 % with a 95 % Confidence Interval (CI) of 10.66–18.44 %) returned positive results for the presence of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’) DNA. Positive samples were found in foxes widely distributed across Great Britain (see Table 3), with B. annae DNA being found in Scotland as well as across the whole of England. 
The central region of England had the highest prevalence with 18 / 49 positive samples (36.7 % prevalence - 95 % CI 23.23 %–50.23 %), which was significantly (p = 0.007) higher than the British average. Of the regions of Britain where positive samples were found, Scotland had the lowest prevalence with only 6 / 80 positive samples (7.5 % prevalence - 95 % CI 1.72 %–13.27 %). Scotland had a significantly lower (p = 0.012) prevalence than the British average. No positive samples were found in Wales, though only 12 samples were available for analysis for this region.Table 3Prevalence of Babesia annae DNA in red foxes (Vulpes vulpes) from across Great BritainRegionNo testedNo PositivePrevalence (%)95 % CIGenderNo / No Positive% Prevalence95 % CIScotland8067.5*\n1.7–13.3 %Male48 / 510.41.8–19.0 %Female32 / 13.10.00–9.2 %Wales1200-Male5 / 00-Female7 / 00-Northern (England)34823.59.3–37.8 %Male20 / 6309.9–50.1 %Female14 / 214.20.0–32.6 %Central (England)491836.7**\n23.2–50.2 %Male22 / 104524.6–66.3 %Female27 / 829.612.4–46.9 %Southern (England)781114.16.4–21.8 %Male45 / 817.86.6–28.9 %Female9 / 39.10.0–18.9 %TotalMale140 / 2920.714.0–27.4 %Female113 / 1412.46.3–18.5 %\nN Number, CI Confidence interval\n* Significantly lower prevalence than nation average (p = 0.045)\n** Significantly higher prevalence than national average (p = 0.003)\nPrevalence of Babesia annae DNA in red foxes (Vulpes vulpes) from across Great Britain\n\nN Number, CI Confidence interval\n\n* Significantly lower prevalence than nation average (p = 0.045)\n\n** Significantly higher prevalence than national average (p = 0.003)\nThe data shows that a higher number male foxes tested positive for parasite DNA than females in all the regions where positive samples were found (Table 3), although no statistically significant differences (p = 0.093) were observed between numbers of positive males and females when comparing genders across Britain.\nOf the 316 lung exudate samples tested by nested PCR (using the BTFox1F and BTFox1R primers), 46 (14.55 % with a 95 % Confidence Interval (CI) of 10.66–18.44 %) returned positive results for the presence of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’) DNA. Positive samples were found in foxes widely distributed across Great Britain (see Table 3), with B. annae DNA being found in Scotland as well as across the whole of England. The central region of England had the highest prevalence with 18 / 49 positive samples (36.7 % prevalence - 95 % CI 23.23 %–50.23 %), which was significantly (p = 0.007) higher than the British average. Of the regions of Britain where positive samples were found, Scotland had the lowest prevalence with only 6 / 80 positive samples (7.5 % prevalence - 95 % CI 1.72 %–13.27 %). Scotland had a significantly lower (p = 0.012) prevalence than the British average. 
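The prevalence estimates and comparisons reported here were produced in Minitab 15. As a hedged illustration only, the sketch below reproduces the overall prevalence with a normal-approximation 95 % CI and compares male and female foxes with Fisher's exact test, using the counts reported in Table 3. SciPy is assumed to be available, and the exact p-value may differ slightly from the Minitab output.

```python
# Illustrative sketch (not the original Minitab 15 analysis): overall prevalence
# with a normal-approximation 95 % CI, and a Fisher's exact comparison of male
# and female foxes using the Table 3 totals. Requires SciPy.
from math import sqrt
from scipy.stats import fisher_exact

def prevalence_ci(positive: int, tested: int, z: float = 1.96):
    """Proportion positive with a normal-approximation confidence interval."""
    p = positive / tested
    half_width = z * sqrt(p * (1 - p) / tested)
    return p, p - half_width, p + half_width

# Overall: 46 positive of 316 foxes -> ~14.6 % (10.7-18.4 %), matching the
# reported 14.55 % (95 % CI 10.66-18.44 %) to within rounding.
p, lo, hi = prevalence_ci(46, 316)
print(f"overall prevalence {p:.2%} (95% CI {lo:.2%}-{hi:.2%})")

# Male vs female (Table 3 totals): 29/140 males and 14/113 females positive.
# Fisher's exact test on the 2x2 table; the p-value may differ slightly from
# the reported p = 0.093 depending on the exact procedure used.
table = [[29, 140 - 29], [14, 113 - 14]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"male vs female: odds ratio {odds_ratio:.2f}, p = {p_value:.3f}")
```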
No positive samples were found in Wales, though only 12 samples were available for analysis for this region.Table 3Prevalence of Babesia annae DNA in red foxes (Vulpes vulpes) from across Great BritainRegionNo testedNo PositivePrevalence (%)95 % CIGenderNo / No Positive% Prevalence95 % CIScotland8067.5*\n1.7–13.3 %Male48 / 510.41.8–19.0 %Female32 / 13.10.00–9.2 %Wales1200-Male5 / 00-Female7 / 00-Northern (England)34823.59.3–37.8 %Male20 / 6309.9–50.1 %Female14 / 214.20.0–32.6 %Central (England)491836.7**\n23.2–50.2 %Male22 / 104524.6–66.3 %Female27 / 829.612.4–46.9 %Southern (England)781114.16.4–21.8 %Male45 / 817.86.6–28.9 %Female9 / 39.10.0–18.9 %TotalMale140 / 2920.714.0–27.4 %Female113 / 1412.46.3–18.5 %\nN Number, CI Confidence interval\n* Significantly lower prevalence than nation average (p = 0.045)\n** Significantly higher prevalence than national average (p = 0.003)\nPrevalence of Babesia annae DNA in red foxes (Vulpes vulpes) from across Great Britain\n\nN Number, CI Confidence interval\n\n* Significantly lower prevalence than nation average (p = 0.045)\n\n** Significantly higher prevalence than national average (p = 0.003)\nThe data shows that a higher number male foxes tested positive for parasite DNA than females in all the regions where positive samples were found (Table 3), although no statistically significant differences (p = 0.093) were observed between numbers of positive males and females when comparing genders across Britain.\n Phylogenetic relationship of Babesia annae and related species Following maximum likelihood analysis it can be seen that Babesia parasites appear to be in three separate clades (Fig. 1) using the nomenclature suggested by Lack and colleagues [29], with KT580785 (B. annae) situated in clade VIII (microti group) with AY534602, EU583387 and AF188001, which are other published sequences of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’) (see Fig. 1) found in Spain and the USA. The next most closely associated group of sequences have all been classified as B. microti (AB032434, AB085191 and AB190459); these were isolated from a human, bank vole and forest mouse respectively. Two Piroplasmida species found in Eurasian badges (Meles meles) (EF057099) and Eurasian otter (Lutra lutra) (FJ225390) were also closely related to B. annae. The remaining Babesia species are further separated into 2 clades: The classical Babesia group (clade I) and the duncani group (clade IV). The Theileria species, with the exception of T. equi are situated in clade III, while the Hepatozoon parasites are situated in a separate clade.Fig. 1Phylogenetic analysis on 18S rRNA gene. Phylogenetic tree showing maximum likelihood approximation of the standard likelihood ratio test scores. A maximum likelihood approximation of the standard likelihood ratio test score of 0 indicating that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The phylogenetic analysis places KT580785 in clade VIII (microti group according to the nomenclature suggested by Lack and colleagues [26]).\nPhylogenetic analysis on 18S rRNA gene. Phylogenetic tree showing maximum likelihood approximation of the standard likelihood ratio test scores. A maximum likelihood approximation of the standard likelihood ratio test score of 0 indicating that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. 
The phylogenetic analysis places KT580785 in clade VIII (microti group according to the nomenclature suggested by Lack and colleagues [26]).\nFollowing maximum likelihood analysis it can be seen that Babesia parasites appear to be in three separate clades (Fig. 1) using the nomenclature suggested by Lack and colleagues [29], with KT580785 (B. annae) situated in clade VIII (microti group) with AY534602, EU583387 and AF188001, which are other published sequences of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’) (see Fig. 1) found in Spain and the USA. The next most closely associated group of sequences have all been classified as B. microti (AB032434, AB085191 and AB190459); these were isolated from a human, bank vole and forest mouse respectively. Two Piroplasmida species found in Eurasian badges (Meles meles) (EF057099) and Eurasian otter (Lutra lutra) (FJ225390) were also closely related to B. annae. The remaining Babesia species are further separated into 2 clades: The classical Babesia group (clade I) and the duncani group (clade IV). The Theileria species, with the exception of T. equi are situated in clade III, while the Hepatozoon parasites are situated in a separate clade.Fig. 1Phylogenetic analysis on 18S rRNA gene. Phylogenetic tree showing maximum likelihood approximation of the standard likelihood ratio test scores. A maximum likelihood approximation of the standard likelihood ratio test score of 0 indicating that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The phylogenetic analysis places KT580785 in clade VIII (microti group according to the nomenclature suggested by Lack and colleagues [26]).\nPhylogenetic analysis on 18S rRNA gene. Phylogenetic tree showing maximum likelihood approximation of the standard likelihood ratio test scores. A maximum likelihood approximation of the standard likelihood ratio test score of 0 indicating that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The phylogenetic analysis places KT580785 in clade VIII (microti group according to the nomenclature suggested by Lack and colleagues [26]).", "Of the 120 samples initially screened, 29 samples gave positive results. From these nine samples that gave positive results for both duplicates from the semi-nested PCR (universal Babesia-Theileria primers BT1-F-BTH-1R and BT1-F-BT1-R) were cleaned as described above and sent for sequencing in one direction, using either the BT1-F or BTH-1 F primers. Following this initial screening all sequences returned with 99–100 % identity to several isolates of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ accession numbers EU583387, AY534602 and AF188001) [28]. No other Babesia species were identified in the fox lung exudate samples.", "A random selection of nine positive samples from across Britain that gave positive results for both duplicates, were sent for sequencing using the Babesia annae specific BTFox1F and BTFox1R primers, this created a 597 bp consensus sequence (submitted to GenBank accession number KT580786). When this resultant sequence was compared using NCBI BLAST to the published sequences, it was found to have 100 % identity to several isolates of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ (accession numbers EU583387, AY534602 and AF188001) [28]. 
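The tree in Fig. 1 was produced with the MUSCLE, Gblocks and PhyML 3.0 pipeline described in the Methods. The sketch below is only a simplified, distance-based alternative using Biopython, included to show the general shape of such a workflow rather than the maximum likelihood analysis actually used; the input file name is hypothetical and would contain the 1717 bp consensus plus reference 18S rRNA sequences (e.g. AY534602, EU583387, AF188001) in aligned form.

```python
# Illustrative sketch only: the study's tree was built with MUSCLE, Gblocks and
# PhyML 3.0 (maximum likelihood). This minimal Biopython alternative builds a
# simple neighbour-joining tree from an existing alignment, just to show the
# general workflow. "babesia_18s_aligned.fasta" is a hypothetical file name.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("babesia_18s_aligned.fasta", "fasta")
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)
Phylo.draw_ascii(tree)  # quick text rendering; Phylo.write(tree, "tree.nwk", "newick") to save
```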
A further 13 positive samples were sequenced only using the BTFox1F primer; all of these sequences were 100 % identical to Babesia annae (previously referred to as: Babesia sp. ‘Spanish Dog’-AY534602, EU583387 and AF188001).\nSeven clones were created from two of the positive samples (F501 (4 clones) and F340 (3 clones)) using the primers BT1-F & BT-Outer-R and BT1-F & BT-Inner-R. These clones were used to create two consensus sequences, which were identical to each other. One of the 1717 bp consensus sequences was submitted to GenBank (accession number KT580785). When this consensus sequence was compared using BLAST it demonstrated a 100 % identity to the published sequences of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’ (accession numbers AY534602, EU583387 and AF188001).", "Of the 316 lung exudate samples tested by nested PCR (using the BTFox1F and BTFox1R primers), 46 (14.55 % with a 95 % Confidence Interval (CI) of 10.66–18.44 %) returned positive results for the presence of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’) DNA. Positive samples were found in foxes widely distributed across Great Britain (see Table 3), with B. annae DNA being found in Scotland as well as across the whole of England. The central region of England had the highest prevalence with 18 / 49 positive samples (36.7 % prevalence - 95 % CI 23.23 %–50.23 %), which was significantly (p = 0.007) higher than the British average. Of the regions of Britain where positive samples were found, Scotland had the lowest prevalence with only 6 / 80 positive samples (7.5 % prevalence - 95 % CI 1.72 %–13.27 %). Scotland had a significantly lower (p = 0.012) prevalence than the British average. No positive samples were found in Wales, though only 12 samples were available for analysis for this region.Table 3Prevalence of Babesia annae DNA in red foxes (Vulpes vulpes) from across Great BritainRegionNo testedNo PositivePrevalence (%)95 % CIGenderNo / No Positive% Prevalence95 % CIScotland8067.5*\n1.7–13.3 %Male48 / 510.41.8–19.0 %Female32 / 13.10.00–9.2 %Wales1200-Male5 / 00-Female7 / 00-Northern (England)34823.59.3–37.8 %Male20 / 6309.9–50.1 %Female14 / 214.20.0–32.6 %Central (England)491836.7**\n23.2–50.2 %Male22 / 104524.6–66.3 %Female27 / 829.612.4–46.9 %Southern (England)781114.16.4–21.8 %Male45 / 817.86.6–28.9 %Female9 / 39.10.0–18.9 %TotalMale140 / 2920.714.0–27.4 %Female113 / 1412.46.3–18.5 %\nN Number, CI Confidence interval\n* Significantly lower prevalence than nation average (p = 0.045)\n** Significantly higher prevalence than national average (p = 0.003)\nPrevalence of Babesia annae DNA in red foxes (Vulpes vulpes) from across Great Britain\n\nN Number, CI Confidence interval\n\n* Significantly lower prevalence than nation average (p = 0.045)\n\n** Significantly higher prevalence than national average (p = 0.003)\nThe data shows that a higher number male foxes tested positive for parasite DNA than females in all the regions where positive samples were found (Table 3), although no statistically significant differences (p = 0.093) were observed between numbers of positive males and females when comparing genders across Britain.", "Following maximum likelihood analysis it can be seen that Babesia parasites appear to be in three separate clades (Fig. 1) using the nomenclature suggested by Lack and colleagues [29], with KT580785 (B. 
annae) situated in clade VIII (microti group) with AY534602, EU583387 and AF188001, which are other published sequences of Babesia annae (previously referred to as: Babesia sp. ‘Spanish dog’) (see Fig. 1) found in Spain and the USA. The next most closely associated group of sequences have all been classified as B. microti (AB032434, AB085191 and AB190459); these were isolated from a human, bank vole and forest mouse respectively. Two Piroplasmida species found in Eurasian badges (Meles meles) (EF057099) and Eurasian otter (Lutra lutra) (FJ225390) were also closely related to B. annae. The remaining Babesia species are further separated into 2 clades: The classical Babesia group (clade I) and the duncani group (clade IV). The Theileria species, with the exception of T. equi are situated in clade III, while the Hepatozoon parasites are situated in a separate clade.Fig. 1Phylogenetic analysis on 18S rRNA gene. Phylogenetic tree showing maximum likelihood approximation of the standard likelihood ratio test scores. A maximum likelihood approximation of the standard likelihood ratio test score of 0 indicating that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The phylogenetic analysis places KT580785 in clade VIII (microti group according to the nomenclature suggested by Lack and colleagues [26]).\nPhylogenetic analysis on 18S rRNA gene. Phylogenetic tree showing maximum likelihood approximation of the standard likelihood ratio test scores. A maximum likelihood approximation of the standard likelihood ratio test score of 0 indicating that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The phylogenetic analysis places KT580785 in clade VIII (microti group according to the nomenclature suggested by Lack and colleagues [26]).", "The results described in the current study clearly demonstrate that B. annae DNA is present widely in the red fox (Vulpes vulpes) population in Great Britain, with positive samples being found across all regions of England and throughout Scotland. Though we did not find any positives in the samples collected from Wales, the sample size for this region was small, with only 12 samples available for examination.\nThe overall prevalence of B. annae DNA from foxes in Britain was 14.6 % (see Table 1), the highest prevalence of B. annae was found in the central region of England where an overall prevalence of 36.7 % was observed, which was significantly (p = 0.007) higher than the national average. These figures are higher than the levels seen in foxes in numerous other European countries. In Croatia, 5.2 % (10 / 191) of foxes studied were PCR positive for Babesia annae (Synonym Theileria annae) DNA [13], while in Poland only 1 /123 (0.7 %) of fox spleens tested were PCR positive for Babesia annae (Synonym Babesia microti-like) DNA [14]. In two separate studies in Spain, 20 % (1 / 5) of foxes tested in the Burgos Region were positive for Babesia annae (Synonym Theileria annae) DNA [30], while in another study 50 % (5 / 10) of foxes from central Spain (Guadalajara) were positive for Babesia annae (Synonym T. annae, which had a 100 % identity to AF188001-Babesia sp. ‘Spanish Dog’) [23]. However these results may not reflect the actual prevalence across Spain, as sample numbers for both of these studies were very small with only 5 and 10 animals being tested, respectively. 
In contrast, a higher prevalence of Babesia annae (synonym Babesia microti-like) has been found in Portugal, where 63 / 91 (69.2 %) red foxes were PCR positive for the parasite [12].\nAn interesting observation from our study was that male foxes had a higher prevalence of B. annae DNA than females in all regions where positive samples were found (Table 3), though these differences were not statistically significant (p = 0.093). This may be a consequence of male foxes leaving or being driven out of their home range territories in search of new territories, food and potential mates [31], increasing the chance of male foxes being exposed to infected ticks. Over recent years, Ixodes tick numbers and their distribution appear to have been steadily increasing in Britain [32]. Exposure of foxes to Babesia parasites is likely to occur through contact with infected ticks whilst scavenging, and through small prey animals such as rodents and other small mammals, which are known to be reservoirs for Babesia parasites and also hosts to both common types of Ixodid ticks found in Britain (in particular I. ricinus). In a recent study in Britain, Brown and colleagues [33] demonstrated B. microti infections in 30.3 % of common shrews (Sorex araneus) and 30.4 % of field voles (Microtus agrestis) tested. Both shrews and voles were also found to be infested with I. ricinus ticks, strongly suggesting a role for these small mammals in the epidemiology of tick-borne infections. We are unsure whether rodents can be infected with B. annae, as the parasite has never been described in any species other than canids. The primers used by Brown and colleagues [33] were B. microti specific and would not have detected B. annae even if it had been present. However, Babesia annae (synonym Theileria annae) DNA has been demonstrated in both I. ricinus and I. hexagonus ticks in Spain [11, 19], with one tick being removed from a wood mouse (Apodemus sylvaticus). Although the tick tested positive for Babesia annae (synonym Theileria annae) DNA, there is no evidence that the parasite was transmitted from the host mouse, or whether the tick was already infected prior to attaching to the mouse [19]. There is also currently no evidence that either of these tick species is a competent vector for the transmission of B. annae [34].\nDuring this study we only detected parasite DNA from frozen exudates; we did not detect viable parasites or examine blood smears for the presence of intra-erythrocytic life cycle stages. Nor were any clinical data available for the animals involved in this study, so we are unsure whether B. annae caused clinical symptoms in infected foxes or whether it caused an asymptomatic infection. However, recent studies in Spain, Sweden and the USA have demonstrated that B. annae is the causative agent of severe clinical disease and pathological abnormalities in dogs [1, 9, 34]. Clinical Babesia infections in dogs are often attributed to B. canis, as there are few diagnostic tools for veterinarians to distinguish between the blood-borne piroplasms in routine veterinary practice [34], other than the direct examination of red blood cells under light microscopy. In laboratories, the immunofluorescent antibody test (IFAT) is used for serological diagnosis, while PCR generally provides a more sensitive and specific diagnostic tool.\nOur phylogenetic analysis shows that the 18S rRNA gene of B. 
annae described in this study is identical to that described in Europe, with a maximum likelihood approximation of the standard likelihood ratio test score of 0 indicating that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The sequences generated were also more closely related to B. rodhaini, B. felis and B. leo than to B. divergens, B. gibsoni and B. duncani. The sequence analysis carried out in this study agrees with that carried out by Lack and colleagues [29], who placed B. microti and the microti-like parasites in a distinct phylogenetic clade (referred to as the microti group), which included Babesia sp. ‘Spanish dog’ and B. microti found in mice (AB190459) and bank voles (AB085191), amongst others [29]. The phylogenetic data presented in this study show that the Babesia are separated into three distinct clades, while the Theileria and Hepatozoon species are also all situated in separate clades. However, the piroplasmid parasites found in otters and badgers (EF057099 and FJ225390) are positioned in between the Babesia clades, suggesting that the classification of these members of the group requires further revision.\nMore work needs to be carried out to help determine the dynamics of transmission of B. annae to foxes; an examination of small prey animals (rodents etc.) and the ticks that infest them may help demonstrate their role in maintaining B. annae infections in the environment. Further studies are also required to examine cases of canine babesiosis in Britain to speciate the causative agent and to determine whether B. annae is present within the British dog population and whether it is causing clinical disease in canids.", "This is the first study to demonstrate the presence of B. annae DNA in Britain. Sequence analysis has shown the B. annae 18S rRNA gene sequence detected in foxes in Britain to be identical to that detected in foxes in Europe and North America. Phylogenetic analysis shows that B. annae is closely related to Babesia microti, but is clearly a distinct species." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusion" ]
[ "\nBabesia annae\n", "DNA", "Red foxes", "\nVulpes vulpes\n", "Babesiosis", "Great Britain", "Phylogeny" ]
Background: Babesia are small piroplasmid parasites which are widely distributed throughout the world and are transmitted to hosts through the bite of infected ticks, transplacentally [1] and mechanically through the exchange of blood i.e., in dog fights [2] as well as transovarially and transstadially in ticks . More than 100 species of Babesia have been documented [3], Babesia parasites are capable of infecting a wide range of wild and domestic host species, including humans, cattle and dogs. This ability to infect and cause disease in many mammalian species make Babesia of great economic importance in both the medical and veterinary fields. Bovine babesiosis (Red water fever) for example is considered the most important arthropod transmitted disease in cattle [4]. In Europe the main cause of human babesiosis is considered to be B. divergens [4], though over recent years other Babesia species including B. microti [5] have also been found to be responsible for human babesiosis. Babesia annae is among these economically important species having been shown to cause severe (even fatal) disease in dogs [1, 2]. Little is known about the prevalence of piroplasm infections in dogs in Great Britain. A recent study by Crawford and colleagues [6] examining 262 canine blood donors found none were positive for Babesia DNA, while a study by Smith and colleagues [7] found only 2.4 % 16/742 ticks collected from dogs from across Great Britain tested PCR positive for Babesia DNA, 11 of which showed 97–100 % identity to B. gibsoni. There are no records of Babesia annae having been previously detected in Britain. Babesia annae was identified previously under numerous synonyms including Theileria annae, Babesia sp. ‘Spanish dog’ and Babesia-microti-like [8]. This species has been found in red foxes (Vulpes vulpes), and dogs (Canis familiaris) in North America and Canada [9, 10] and throughout Europe [1, 11–15] and is considered by some authors to be “hyperendemic” in northwest Spain [16]. Red foxes are common throughout Great Britain, being highly adaptive and opportunistic predators and scavengers [17]. The most recent surveys suggested that there are between 240,000 and 258,000 red foxes in Britain, of which approximately 87 % (225,000) live in rural areas, compared to 13 % (33,000) of urban foxes (Game & Wildlife Conservation Trust (2013) and DEFRA FOI release (2013). The foxes are most likely to encounter questing adult ticks, nymphs and larvae whilst hunting and scavenging. Anthropogenic changes, such as wildlife and forest management strategies and changes in land use have also created more suitable habitats for foxes, their prey and the Ixodid ticks that feed on both. These changes have lead not only to increases in tick numbers, but have also enabled ticks to increase their distribution across Europe, with Ixodes ticks now being found in areas previously considered tick free i.e., northern Sweden [18]. The main tick species found in Britain are Ixodes ricinus (sheep tick, which is also known in some countries as the deer tick) and Ixodes hexagonus (hedgehog tick). Both tick species can be found on a wide range of domestic and wild animal species and have been shown in previous studies in Spain and Germany to be positive for B. annae (synonym T. annae) DNA [15, 19]. This is however not conclusive proof that these species are vectors for the transmission of B. annae. 
This study aimed at determining the prevalence of piroplasm infection in red foxes and the species of piroplasm circulating in the red fox population in Great Britain, through the analysis of lung exudate samples. Methods: Sample collection and preparation The three hundred and sixteen foxes were originally collected by the University of Edinburgh as part of a study looking at the prevalence of Echinococcus multilocularis, Neospora caninum and Toxoplasma gondii [20, 21]. Lung samples were collected at post mortem examination and frozen at −20 °C prior to processing. Lung fluids were prepared as previously described [20]. Briefly, lungs were defrosted and bloody exudate was collected into a 1.5 ml microfuge tube, where no exudate was visible, lung samples were placed in a stomacher bag with approximately 5 ml of phosphate buffer saline (PBS) and processed in a stomacher for 15 s. All exudates / PBS samples were stored at −20 °C prior to DNA extraction. The foxes sampled were collected over a wide range of locations in each of the study regions. A majority of the foxes were shot by game keepers and land owners as part of routine pest control procedures, so the location data from where the foxes were originally collected and the gender of the animals were often available. However some of the foxes were obtained from direct culls, for these animals the location and gender of the animal was not always available [21]. The three hundred and sixteen foxes were originally collected by the University of Edinburgh as part of a study looking at the prevalence of Echinococcus multilocularis, Neospora caninum and Toxoplasma gondii [20, 21]. Lung samples were collected at post mortem examination and frozen at −20 °C prior to processing. Lung fluids were prepared as previously described [20]. Briefly, lungs were defrosted and bloody exudate was collected into a 1.5 ml microfuge tube, where no exudate was visible, lung samples were placed in a stomacher bag with approximately 5 ml of phosphate buffer saline (PBS) and processed in a stomacher for 15 s. All exudates / PBS samples were stored at −20 °C prior to DNA extraction. The foxes sampled were collected over a wide range of locations in each of the study regions. A majority of the foxes were shot by game keepers and land owners as part of routine pest control procedures, so the location data from where the foxes were originally collected and the gender of the animals were often available. However some of the foxes were obtained from direct culls, for these animals the location and gender of the animal was not always available [21]. DNA extraction Three hundred and sixteen lung exudate samples were defrosted and mixed by vortexing, 400 μl of each sample was added to 900 μl Nuclei Lysis Solution (Promega, Madison WI, USA) and incubated at 55 °C overnight. The samples were then processed to DNA using the Wizard® genomic DNA (Promega, Madison WI, USA) purification protocol, which was adapted by Bartley and colleagues [22] to be scaled up to allow the use of 400 μl of starting material. The DNA was resuspended in 300 μl of DNase and RNase free water and stored at +4 °C for immediate use or at −20 °C for longer term storage. Extraction controls (water) were also prepared with each batch of exudate samples, these were used as indicators of contamination and acted as additional negative controls. 
PCR for detection of Babesia DNA in lung exudate samples
A semi-nested PCR was used to screen for the presence of Babesia DNA in 120 lung exudate samples, using the universal Babesia-Theileria primers BT1-F and BTH-1R for the primary amplification and BT1-F and BT1-R for the second round amplification (previously described [23]). Following this initial screening, the B. annae-specific primers BTFox1F and BTFox1R were designed (Primer3web v4.0.0) for use in the second round amplifications (Table 1). All 316 lung exudate samples were tested using the B. annae-specific primers BTFox1F and BTFox1R.

Table 1. Primer names, specificity and sequences (5'-3') used for the detection of Babesia DNA in fox lung exudate samples. All primers were designed against the 18S rRNA gene.
Universal Babesia-Theileria primers (from [23]):
  BT1-F: GGTTGATCCTGCCAGTAGT
  BTH-1R: TTGCGACCATACTCCCCCCA
  BTH-1F: CCTGMGARACGGCTACCACATCT
  BT1-R: GCCTGCTGCCTTCCTTA
Babesia annae-specific primers (developed in this study):
  BTFox1F: AGTTATAAGCTTTTATACAGC
  BTFox1R: CACTCTAGTTTTCTCAAAGTAAAATA
  BT-Outer-R: GGAAACCTTGTTACGACTTCTC
  BT-Inner-R: TTCTCCTTCCTTTAAGTGATAAG
  600-F: AGTTAAGAAGCTCGTAGTTG
  1200-F: AGGATTGACAGATTGATAGC

The reaction mixture was adapted from that previously described by Burrells and colleagues [24]. Each reaction (20 μl) consisted of custom PCR master mix (containing final concentrations of 45 mM Tris–HCl, 11 mM (NH4)2SO4, 4.5 mM MgCl2, 0.113 mg/ml BSA, 4.4 μM EDTA and 1.0 mM each of dATP, dCTP, dGTP and dTTP) (ABgene, Surrey, UK), 0.25 pM of each forward and reverse primer (Eurofins MWG Operon), 0.75 U Taq polymerase (Bioline Ltd., London, UK) and 2 μl of sample template DNA. To increase sensitivity, each sample was analysed in duplicate.
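As a rough illustration of how primer specificity of the kind summarised in Table 1 can be checked in silico, the Python sketch below searches a reference 18S rRNA sequence for each forward primer and for the reverse complement of each reverse primer. The FASTA filename and the idea of checking against a single published B. annae 18S sequence are illustrative assumptions, not part of the original protocol.

    # Minimal in-silico primer check against a reference 18S rRNA sequence.
    # Assumes a single-record FASTA file (e.g. a published B. annae 18S sequence
    # downloaded from GenBank) saved locally as "babesia_18S.fasta" (hypothetical name).

    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G",
                  "M": "K", "K": "M", "R": "Y", "Y": "R", "N": "N"}

    def revcomp(seq):
        return "".join(COMPLEMENT.get(base, "N") for base in reversed(seq.upper()))

    def read_fasta(path):
        with open(path) as handle:
            return "".join(line.strip() for line in handle if not line.startswith(">")).upper()

    primers = {
        "BT1-F":      ("forward", "GGTTGATCCTGCCAGTAGT"),
        "BTFox1F":    ("forward", "AGTTATAAGCTTTTATACAGC"),
        "BTFox1R":    ("reverse", "CACTCTAGTTTTCTCAAAGTAAAATA"),
        "BT-Inner-R": ("reverse", "TTCTCCTTCCTTTAAGTGATAAG"),
    }

    reference = read_fasta("babesia_18S.fasta")
    for name, (orientation, seq) in primers.items():
        probe = seq if orientation == "forward" else revcomp(seq)
        position = reference.find(probe)
        print(f"{name}: {'match at ' + str(position) if position >= 0 else 'no exact match'}")

Note that this simple exact-match search ignores primer ambiguity codes (e.g. the M and R in BTH-1F), which a full specificity check would need to handle.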
The basic reaction conditions for all PCR amplifications were as follows: 94 °C for 5 min, followed by 35 cycles of 94 °C for 1 min, annealing (see Table 2) for 1 min and 72 °C for 1 min, with a final extension at 72 °C for 5 min.

Table 2. Annealing temperatures and amplicon sizes of the nested PCR reactions used for the detection of Babesia annae DNA in fox lung exudate samples.
  BT1-F and BTH-1R: 1073 bp, annealing at 55 °C
  BT1-F and BT1-R: 408 bp, annealing at 60 °C
  BTFox1F and BTFox1R: 655 bp, annealing at 52 °C
  BT1-F and BT-Outer-R: 1737 bp, annealing at 55 °C
  BT1-F and BT-Inner-R: 1717 bp, annealing at 49 °C

A selection of samples that gave positive results for both duplicates with the B. annae-specific BTFox1F and BTFox1R primers were tested in a further semi-nested PCR, using primers BT1-F and BT-Outer-R (Table 1) in the primary reaction and BT1-F and BT-Inner-R (Table 1) in the second round reaction. These primers were designed to produce a longer 18S rRNA gene fragment (approximately 1.7 kb). Following the second round amplification, 10 μl of each PCR product was analysed by agarose gel electrophoresis (2 % agarose in 1x TAE buffer), stained with GelRed (1:10,000) (Biotium, Hayward, CA, USA) and visualised under UV light.
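The annealing temperatures in Table 2 reflect the specific primer pairs and the empirical optimisation carried out by the authors. Purely as a hedged illustration of how such values are first approximated from primer composition, the sketch below applies the simple Wallace rule (2 °C per A/T, 4 °C per G/C); this is only a crude estimate for short oligonucleotides and is not the optimisation method used in the study.

    # Rough melting-temperature (Tm) estimates for some of the study primers using
    # the Wallace rule: Tm ≈ 2*(A+T) + 4*(G+C). Annealing temperatures are normally
    # set a few degrees below the lower Tm of the pair and then refined empirically.

    primers = {
        "BT1-F":   "GGTTGATCCTGCCAGTAGT",
        "BT1-R":   "GCCTGCTGCCTTCCTTA",
        "BTFox1F": "AGTTATAAGCTTTTATACAGC",
        "BTFox1R": "CACTCTAGTTTTCTCAAAGTAAAATA",
    }

    def wallace_tm(seq):
        seq = seq.upper()
        at = seq.count("A") + seq.count("T")
        gc = seq.count("G") + seq.count("C")
        return 2 * at + 4 * gc

    def gc_fraction(seq):
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

    for name, seq in primers.items():
        print(f"{name}: length {len(seq)} nt, GC {gc_fraction(seq):.0%}, Wallace Tm ~{wallace_tm(seq)} °C")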
PCR clean-up and DNA sequencing
PCR products from samples that gave positive results for both duplicates using BT1-F and BT-Outer-R (Table 1) in the primary reaction and BT1-F and BT-Inner-R in the second round reaction were selected for sequencing. These amplicons were cleaned using the commercially available Wizard® SV Gel and PCR Clean-up System (Promega, Madison WI, USA), as per the manufacturer's instructions. The PCR product was eluted in 30 μl of DNase/RNase-free water and the nucleic acid concentration was determined by spectrophotometry (Nanodrop ND1000); 100 ng of each sample was sent for sequencing (MWG Operon) with each of the primers BT1-F, BTH-1F, BTFox1F and BTFox1R (Table 1) to create a forward and reverse consensus sequence.

Cloning of the 18S rRNA gene of Babesia annae
Seven samples that gave positive results for both duplicates from the semi-nested PCR (B. annae-specific BT1-F and BT-Outer-R followed by BT1-F and BT-Inner-R) were selected for cloning. PCR products were purified as above, and 7 μl of purified product was ligated into the pGEM®-T Easy Vector (Promega, Madison WI, USA) using 1 μl of T4 DNA ligase (3 Weiss units/μl), 1 μl of 10X Rapid Ligation Buffer and 1 μl of pGEM®-T Easy vector (50 ng/μl), according to the manufacturer's instructions.
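For readers unfamiliar with TA-cloning set-ups of this kind, the short calculation below shows how the amount of insert for a ligation is typically estimated from the vector mass and the insert:vector size ratio. The 3:1 molar ratio and the ~3.0 kb size assumed for the pGEM-T Easy vector are illustrative assumptions, not values reported in the study.

    # Illustrative insert mass calculation for a ligation:
    #   ng insert = ng vector * (insert size / vector size) * (molar ratio insert:vector)
    vector_ng   = 50.0   # 50 ng pGEM-T Easy vector used per ligation (from the methods)
    vector_kb   = 3.0    # approximate vector size in kb (assumption)
    insert_kb   = 1.7    # ~1.7 kb 18S rRNA amplicon (from the methods)
    molar_ratio = 3.0    # commonly recommended 3:1 insert:vector molar ratio (assumption)

    insert_ng = vector_ng * (insert_kb / vector_kb) * molar_ratio
    print(f"Insert required: ~{insert_ng:.0f} ng for a {molar_ratio:.0f}:1 molar ratio")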
Following ligation, 2 μl of ligated vector/insert was used to transform 50 μl of high-efficiency competent JM109 cells (≥1 × 10^8 cfu/μg DNA) (Promega, Madison WI, USA) according to the manufacturer's instructions, with the exception that LB broth was used instead of SOC medium to culture the cells. Successful transformation was confirmed using LB agar plates containing 100 μg/ml ampicillin and spread with 100 μl of IPTG (100 mM) and 20 μl of X-Gal (50 mg/ml). White colonies were screened by PCR using the BTFox1F and BTFox1R primers (Table 1) to confirm the presence of the B. annae 18S rRNA gene insert. Positive colonies were cultured overnight (approximately 18 h) in 10 ml LB broth containing 100 μg/ml ampicillin, after which plasmid DNA was purified from 5 ml of each culture using the QIAprep® Miniprep kit (Qiagen), according to the manufacturer's instructions. Purified plasmid DNA (100 ng per primer) was sequenced (MWG Operon) using the T7 and SP6 primers along with BTH-1F, 600-F, 1200-F, BT1-R and BTH-1R (Table 1), creating an overlapping forward and reverse consensus sequence for each clone.

Sequence / phylogenetic analysis
Following sequencing, results were compared using the NCBI Basic Local Alignment Search Tool (BLAST) to determine the percentage identity of the generated sequences against published sequences. Multiple sequence alignments were created to compare these sequences with previously published data (EMBL-EBI Multiple Sequence Comparison by Log-Expectation (MUSCLE)), while phylogenetic analysis was performed on the long (1717 bp) consensus sequence using PhyML 3.0 (ATGC, Phylogeny.fr). The Gblocks programme was used for automatic alignment curation, PhyML was used for tree building and the TreeDyn programme was used to draw the phylogenetic tree [25–27]. All analyses are based on the maximum likelihood principle, using an approximation of the standard likelihood ratio test.
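The BLAST comparisons referred to above report a percentage identity over an alignment. As a simplified, hedged illustration of what that figure means (this is not the actual BLAST scoring algorithm, which works over locally aligned high-scoring segment pairs), the sketch below computes identity between two already-aligned sequences of equal length, ignoring gap columns.

    # Simplified percent-identity calculation over two aligned sequences
    # (equal length, '-' used for gaps). Illustrative only.

    def percent_identity(aligned_a, aligned_b):
        assert len(aligned_a) == len(aligned_b), "sequences must be aligned to equal length"
        matches = compared = 0
        for a, b in zip(aligned_a.upper(), aligned_b.upper()):
            if a == "-" or b == "-":
                continue               # skip gap columns
            compared += 1
            matches += (a == b)
        return 100.0 * matches / compared if compared else 0.0

    # Toy example with one mismatch and one gap column.
    print(f"{percent_identity('AGTTATAAGCTT-TATACAGC', 'AGTTATAAGCTTTTATACGGC'):.1f} % identity")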
Statistical analysis
Calculations of the prevalence of Babesia in red foxes from separate regions of Britain were performed using only data from animals for which a location was known (253/316 samples). Comparisons of infection rates in relation to gender were made where these data were available. All 316 samples were used when calculating the overall prevalence of Babesia DNA in red foxes in Britain. The proportion positive (prevalence), with 95 % confidence intervals (CI), was calculated for the overall study set as well as at regional level. Prevalence at the regional level was compared with the overall UK prevalence to determine whether there was a significant difference (Minitab 15, v15.1.0.0). In addition, the overall prevalence in male animals was compared with that in females (Minitab 15, v15.1.0.0). In these analyses, Fisher's exact test was used where the number of events was less than 5; in all other cases the hypothesis test was based on the normal approximation.
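The study performed these calculations in Minitab. Purely as an illustrative re-implementation (not the authors' code), the sketch below computes a prevalence with a normal-approximation 95 % confidence interval, and a two-proportion comparison of male versus female positivity using the normal approximation, with counts taken from the Results and Table 3.

    # Illustrative re-computation of prevalence, a normal-approximation 95 % CI,
    # and a two-proportion z-test comparing male and female positivity.
    # Counts are taken from the Results/Table 3; the study itself used Minitab,
    # so exact p-values may differ slightly.
    from math import sqrt, erf

    def prevalence_ci(positive, tested, z=1.96):
        p = positive / tested
        half_width = z * sqrt(p * (1 - p) / tested)
        return p, max(p - half_width, 0.0), p + half_width

    def two_proportion_z(pos1, n1, pos2, n2):
        p1, p2 = pos1 / n1, pos2 / n2
        pooled = (pos1 + pos2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_two_sided

    p, lo, hi = prevalence_ci(46, 316)
    print(f"Overall prevalence: {p:.2%} (95 % CI {lo:.2%}-{hi:.2%})")

    z, p_val = two_proportion_z(29, 140, 14, 113)   # male vs female totals, Table 3
    print(f"Male vs female positivity: z = {z:.2f}, two-sided p = {p_val:.3f}")

Running this reproduces the headline figure of 14.56 % (10.67–18.44 %) reported for the 46/316 positive samples.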
Results: Screening of fox lung exudate samples for the presence of Babesia spp. DNA
Of the 120 samples initially screened, 29 gave positive results. From these, the nine samples that gave positive results for both duplicates from the semi-nested PCR (universal Babesia-Theileria primers BT1-F/BTH-1R and BT1-F/BT1-R) were cleaned as described above and sent for sequencing in one direction, using either the BT1-F or the BTH-1F primer. Following this initial screening, all sequences returned 99–100 % identity to several isolates of Babesia annae (previously referred to as Babesia sp. 'Spanish dog'; accession numbers EU583387, AY534602 and AF188001) [28]. No other Babesia species were identified in the fox lung exudate samples.
Verification of PCR specificity
A random selection of nine positive samples from across Britain that gave positive results for both duplicates was sequenced using the Babesia annae-specific BTFox1F and BTFox1R primers, creating a 597 bp consensus sequence (submitted to GenBank under accession number KT580786). When this sequence was compared against published sequences using NCBI BLAST, it showed 100 % identity to several isolates of Babesia annae (previously referred to as Babesia sp. 'Spanish dog'; accession numbers EU583387, AY534602 and AF188001) [28]. A further 13 positive samples were sequenced using only the BTFox1F primer; all of these sequences were 100 % identical to Babesia annae (previously referred to as Babesia sp. 'Spanish dog'; AY534602, EU583387 and AF188001). Seven clones were created from two of the positive samples (F501, 4 clones; F340, 3 clones) using the primers BT1-F/BT-Outer-R and BT1-F/BT-Inner-R. These clones were used to create two consensus sequences, which were identical to each other. One of the 1717 bp consensus sequences was submitted to GenBank (accession number KT580785). When this consensus sequence was compared using BLAST, it showed 100 % identity to the published sequences of Babesia annae (previously referred to as Babesia sp. 'Spanish dog'; accession numbers AY534602, EU583387 and AF188001).
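The two consensus sequences referred to above were deposited in GenBank. For readers who want to retrieve them, the hedged sketch below uses Biopython's Entrez/SeqIO interface; it assumes Biopython is installed and internet access is available, and the email address is a placeholder that must be replaced before use.

    # Retrieve the two 18S rRNA consensus sequences deposited by the study
    # (KT580785 and KT580786) from GenBank using Biopython.
    from Bio import Entrez, SeqIO

    Entrez.email = "your.name@example.org"   # placeholder - replace with a real contact address

    for accession in ("KT580785", "KT580786"):
        handle = Entrez.efetch(db="nucleotide", id=accession, rettype="fasta", retmode="text")
        record = SeqIO.read(handle, "fasta")
        handle.close()
        print(f"{record.id}: {len(record.seq)} bp - {record.description}")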
Distribution of Babesia annae in red foxes in Great Britain
Of the 316 lung exudate samples tested by nested PCR (using the BTFox1F and BTFox1R primers), 46 (14.55 %; 95 % confidence interval (CI) 10.66–18.44 %) returned positive results for the presence of Babesia annae (previously referred to as Babesia sp. 'Spanish dog') DNA. Positive samples were found in foxes widely distributed across Great Britain (see Table 3), with B. annae DNA being found in Scotland as well as across the whole of England. The central region of England had the highest prevalence, with 18/49 positive samples (36.7 %; 95 % CI 23.23–50.23 %), which was significantly (p = 0.007) higher than the British average. Of the regions of Britain where positive samples were found, Scotland had the lowest prevalence, with only 6/80 positive samples (7.5 %; 95 % CI 1.72–13.27 %), which was significantly lower (p = 0.012) than the British average. No positive samples were found in Wales, though only 12 samples were available for analysis from this region.

Table 3. Prevalence of Babesia annae DNA in red foxes (Vulpes vulpes) from across Great Britain (No, number; CI, confidence interval).
  Scotland: 80 tested, 6 positive, 7.5 %* (95 % CI 1.7–13.3 %); males 5/48 positive, 10.4 % (1.8–19.0 %); females 1/32 positive, 3.1 % (0.0–9.2 %)
  Wales: 12 tested, 0 positive; males 0/5; females 0/7
  Northern England: 34 tested, 8 positive, 23.5 % (9.3–37.8 %); males 6/20 positive, 30.0 % (9.9–50.1 %); females 2/14 positive, 14.2 % (0.0–32.6 %)
  Central England: 49 tested, 18 positive, 36.7 %** (23.2–50.2 %); males 10/22 positive, 45.0 % (24.6–66.3 %); females 8/27 positive, 29.6 % (12.4–46.9 %)
  Southern England: 78 tested, 11 positive, 14.1 % (6.4–21.8 %); males 8/45 positive, 17.8 % (6.6–28.9 %); females 3/33 positive, 9.1 % (0.0–18.9 %)
  Total by gender: males 29/140 positive, 20.7 % (14.0–27.4 %); females 14/113 positive, 12.4 % (6.3–18.5 %)
  * Significantly lower prevalence than the national average (p = 0.045); ** significantly higher prevalence than the national average (p = 0.003)

In all regions where positive samples were found, a higher proportion of male foxes than females tested positive for parasite DNA (Table 3), although no statistically significant difference (p = 0.093) was observed between the numbers of positive males and females when comparing genders across Britain.
Phylogenetic relationship of Babesia annae and related species
Maximum likelihood analysis places the Babesia parasites in three separate clades (Fig. 1), using the nomenclature suggested by Lack and colleagues [29], with KT580785 (B. annae) situated in clade VIII (the microti group) together with AY534602, EU583387 and AF188001, the other published sequences of Babesia annae (previously referred to as Babesia sp. 'Spanish dog') found in Spain and the USA (see Fig. 1). The next most closely related group of sequences have all been classified as B. microti (AB032434, AB085191 and AB190459); these were isolated from a human, a bank vole and a forest mouse, respectively. Two Piroplasmida species found in the Eurasian badger (Meles meles) (EF057099) and the Eurasian otter (Lutra lutra) (FJ225390) were also closely related to B. annae. The remaining Babesia species are further separated into two clades: the classical Babesia group (clade I) and the duncani group (clade IV). The Theileria species, with the exception of T. equi, are situated in clade III, while the Hepatozoon parasites are situated in a separate clade.

Fig. 1. Phylogenetic analysis of the 18S rRNA gene. The phylogenetic tree shows maximum likelihood approximations of the standard likelihood ratio test scores; a score of 0 indicates that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The analysis places KT580785 in clade VIII (the microti group, according to the nomenclature suggested by Lack and colleagues [29]).
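As a hedged illustration of how a reader could inspect a tree like the one in Fig. 1 programmatically, the sketch below reads a Newick file with Biopython's Bio.Phylo module and lists the tip labels. The filename stands in for a tree exported from PhyML / Phylogeny.fr and is an assumption; no tree file is distributed with this article.

    # Inspect a Newick-format tree (e.g. the PhyML output behind Fig. 1) with Biopython.
    # "babesia_18S_phyml.nwk" is an illustrative, hypothetical filename.
    from Bio import Phylo

    tree = Phylo.read("babesia_18S_phyml.nwk", "newick")
    print(f"{tree.count_terminals()} taxa in the tree")

    # List the tip labels, e.g. to confirm that KT580785 groups with the other
    # microti-group accessions (AY534602, EU583387, AF188001).
    for tip in tree.get_terminals():
        print(tip.name)

    # Simple text rendering of the topology.
    Phylo.draw_ascii(tree)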
Discussion: The results of the current study clearly demonstrate that B. annae DNA is widely present in the red fox (Vulpes vulpes) population in Great Britain, with positive samples being found across all regions of England and throughout Scotland. Although no positives were found in the samples collected from Wales, the sample size for this region was small, with only 12 samples available for examination. The overall prevalence of B. annae DNA in foxes in Britain was 14.6 % (see Table 3); the highest prevalence was found in the central region of England, where an overall prevalence of 36.7 % was observed, significantly (p = 0.007) higher than the national average. These figures are higher than the levels reported in foxes in several other European countries. In Croatia, 5.2 % (10/191) of foxes studied were PCR positive for Babesia annae (synonym Theileria annae) DNA [13], while in Poland only 1/123 (0.7 %) of fox spleens tested were PCR positive for Babesia annae (synonym Babesia microti-like) DNA [14]. In two separate studies in Spain, 20 % (1/5) of foxes tested in the Burgos region were positive for Babesia annae (synonym Theileria annae) DNA [30], while in another study 50 % (5/10) of foxes from central Spain (Guadalajara) were positive for Babesia annae (synonym T. annae, showing 100 % identity to AF188001, Babesia sp. 'Spanish dog') [23]. However, these results may not reflect the actual prevalence across Spain, as the sample numbers in both studies were very small, with only 5 and 10 animals tested, respectively.
By contrast, a higher prevalence of Babesia annae (synonym Babesia microti-like) has been reported in Portugal, where 63/91 (69.2 %) of red foxes were PCR positive for the parasite [12]. An interesting observation from our study was that male foxes had a higher prevalence of B. annae DNA than females in all regions where positive samples were found (Table 3), though these differences were not statistically significant (p = 0.093). This may be a consequence of male foxes leaving, or being driven out of, their home range territories in search of new territories, food and potential mates [31], increasing the chance of male foxes being exposed to infected ticks. Over recent years, Ixodes tick numbers and their distribution appear to have been steadily increasing in Britain [32]. Exposure of foxes to Babesia parasites is likely to occur through contact with infected ticks whilst scavenging and through small prey animals such as rodents and other small mammals, which are known reservoirs for Babesia parasites and are also hosts to both common types of Ixodid ticks found in Britain (in particular I. ricinus). In a recent study in Britain, Brown and colleagues [33] demonstrated B. microti infections in 30.3 % of common shrews (Sorex araneus) and 30.4 % of field voles (Microtus agrestis) tested. Both shrews and voles were also found to be infested with I. ricinus ticks, strongly suggesting a role for these small mammals in the epidemiology of tick-borne infections. It is not known whether rodents can be infected with B. annae, as the parasite has never been described in any species other than canids, and the primers used by Brown and colleagues [33] were B. microti specific and would not have detected B. annae even if it had been present. Babesia annae (synonym Theileria annae) DNA has, however, been demonstrated in both I. ricinus and I. hexagonus ticks in Spain [11, 19], with one positive tick having been removed from a wood mouse (Apodemus sylvaticus). Although this tick tested positive for Babesia annae (synonym Theileria annae) DNA, there is no evidence that the parasite was transmitted from the host mouse, nor whether the tick was already infected prior to attaching to the mouse [19]. There is also currently no evidence that either of these tick species is a competent vector for the transmission of B. annae [34]. In this study we detected parasite DNA only from frozen exudates; we did not isolate viable parasites or examine blood smears for the presence of intra-erythrocytic life cycle stages. Nor was any clinical data available for the animals involved, so we do not know whether B. annae caused clinical signs in infected foxes or whether infections were asymptomatic. However, recent studies in Spain, Sweden and the USA have demonstrated that B. annae is a causative agent of severe clinical disease and pathological abnormalities in dogs [1, 9, 34]. Clinical Babesia infections in dogs are often attributed to B. canis, as there are few diagnostic tools allowing veterinarians to distinguish between the blood-borne piroplasms in routine practice [34], other than the direct examination of red blood cells under light microscopy. In laboratories, the immunofluorescent antibody test (IFAT) is used for serological diagnosis, while PCR generally provides a more sensitive and specific diagnostic tool.
Our phylogenetic analysis shows that the 18S rRNA gene of the B. annae described in this study is identical to that described in Europe, with a maximum likelihood approximation of the standard likelihood ratio test score of 0 indicating that no base pair substitutions were observed between AY534602, EU583387, AF188001 and KT580785. The sequences generated were also more closely related to B. rodhaini, B. felis and B. leo than to B. divergens, B. gibsoni and B. duncani. The sequence analysis carried out in this study agrees with that of Lack and colleagues [29], who placed B. microti and the microti-like parasites in a distinct phylogenetic clade (referred to as the microti group), which included Babesia sp. 'Spanish dog' and the B. microti isolates found in mice (AB190459) and bank voles (AB085191), amongst others [29]. The phylogenetic data presented in this study show that the Babesia are separated into three distinct clades, while the Theileria and Hepatozoon species are also situated in separate clades. However, the Piroplasmida parasites found in otters and badgers (EF057099 and FJ225390) are positioned between the Babesia clades, suggesting that these members of the phylum require further reclassification. More work is needed to determine the dynamics of transmission of B. annae to foxes; an examination of small prey animals (rodents etc.) and of the ticks that infest them may help demonstrate their role in maintaining B. annae infections in the environment. Further studies are also required to examine cases of canine babesiosis in Britain, to speciate the causative agent and to determine whether B. annae is present within the British dog population and whether it is causing clinical disease in canids.

Conclusions: This is the first study to demonstrate the presence of B. annae DNA in Britain. Sequence analysis has shown the B. annae 18S rRNA gene sequence detected in foxes in Britain to be identical to that detected in foxes in Europe and North America. Phylogenetic analysis shows that B. annae is closely related to Babesia microti but is clearly a distinct species.
Abstract
Background: This study aimed to determine the prevalence of Babesia species DNA in lung exudate samples collected from red foxes (Vulpes vulpes) from across Great Britain. Babesia are small piroplasmid parasites which are mainly transmitted through the bite of infected ticks of the family Ixodidae. Babesia can cause potentially fatal disease in a wide range of mammalian species, including humans, dogs and cattle, making them of significant economic importance to both the medical and veterinary fields.
Methods: DNA was extracted from lung exudate samples of 316 foxes. A semi-nested PCR was used to initially screen samples, using universal Babesia-Theileria primers targeting the 18S rRNA gene. A selection of positive PCR amplicons were purified and sequenced. Subsequently, specific primers were designed to detect Babesia annae and were used to screen all 316 DNA samples. Randomly selected positive samples were purified and sequenced (GenBank accession KT580786). Clones spanning a 1717 bp region of the 18S rRNA gene were generated from 2 positive samples; the resultant consensus sequence was submitted to GenBank (KT580785) and used in the phylogenetic analysis.
Results: Babesia annae DNA was detected in the fox samples; in total, 46/316 (14.6 %) samples tested positive for the presence of Babesia annae DNA. The central region of England had the highest prevalence at 36.7 %, while no positive samples were found from Wales, though only 12 samples were tested from this region. Male foxes were found to have a higher prevalence of Babesia annae DNA than females in all regions of Britain. Phylogenetic and sequence analysis of the GenBank submissions (accession numbers KT580785 and KT580786) showed 100 % identity to Babesia sp. 'Spanish dog' (AY534602, EU583387 and AF188001).
Conclusions: This is the first time that Babesia annae DNA has been reported in red foxes in Great Britain, with positive samples being found across England and Scotland, indicating that this parasite is well established within the red fox population of Britain. Phylogenetic analysis demonstrated that, though B. annae is closely related to B. microti, it is a distinct species.
Background: Babesia are small piroplasmid parasites which are widely distributed throughout the world and are transmitted to hosts through the bite of infected ticks, transplacentally [1] and mechanically through the exchange of blood i.e., in dog fights [2] as well as transovarially and transstadially in ticks . More than 100 species of Babesia have been documented [3], Babesia parasites are capable of infecting a wide range of wild and domestic host species, including humans, cattle and dogs. This ability to infect and cause disease in many mammalian species make Babesia of great economic importance in both the medical and veterinary fields. Bovine babesiosis (Red water fever) for example is considered the most important arthropod transmitted disease in cattle [4]. In Europe the main cause of human babesiosis is considered to be B. divergens [4], though over recent years other Babesia species including B. microti [5] have also been found to be responsible for human babesiosis. Babesia annae is among these economically important species having been shown to cause severe (even fatal) disease in dogs [1, 2]. Little is known about the prevalence of piroplasm infections in dogs in Great Britain. A recent study by Crawford and colleagues [6] examining 262 canine blood donors found none were positive for Babesia DNA, while a study by Smith and colleagues [7] found only 2.4 % 16/742 ticks collected from dogs from across Great Britain tested PCR positive for Babesia DNA, 11 of which showed 97–100 % identity to B. gibsoni. There are no records of Babesia annae having been previously detected in Britain. Babesia annae was identified previously under numerous synonyms including Theileria annae, Babesia sp. ‘Spanish dog’ and Babesia-microti-like [8]. This species has been found in red foxes (Vulpes vulpes), and dogs (Canis familiaris) in North America and Canada [9, 10] and throughout Europe [1, 11–15] and is considered by some authors to be “hyperendemic” in northwest Spain [16]. Red foxes are common throughout Great Britain, being highly adaptive and opportunistic predators and scavengers [17]. The most recent surveys suggested that there are between 240,000 and 258,000 red foxes in Britain, of which approximately 87 % (225,000) live in rural areas, compared to 13 % (33,000) of urban foxes (Game & Wildlife Conservation Trust (2013) and DEFRA FOI release (2013). The foxes are most likely to encounter questing adult ticks, nymphs and larvae whilst hunting and scavenging. Anthropogenic changes, such as wildlife and forest management strategies and changes in land use have also created more suitable habitats for foxes, their prey and the Ixodid ticks that feed on both. These changes have lead not only to increases in tick numbers, but have also enabled ticks to increase their distribution across Europe, with Ixodes ticks now being found in areas previously considered tick free i.e., northern Sweden [18]. The main tick species found in Britain are Ixodes ricinus (sheep tick, which is also known in some countries as the deer tick) and Ixodes hexagonus (hedgehog tick). Both tick species can be found on a wide range of domestic and wild animal species and have been shown in previous studies in Spain and Germany to be positive for B. annae (synonym T. annae) DNA [15, 19]. This is however not conclusive proof that these species are vectors for the transmission of B. annae. 
This study aimed to determine the prevalence of piroplasm infection in red foxes, and the species of piroplasm circulating in the red fox population of Great Britain, through the analysis of lung exudate samples. Conclusions: This is the first study to demonstrate the presence of B. annae DNA in Britain. Sequence analysis has shown the B. annae 18S rRNA gene sequence detected in foxes in Britain to be identical to that detected in foxes in Europe and North America. Phylogenetic analysis shows that B. annae is closely related to Babesia microti but is clearly a distinct species.
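As a rough illustration of the kind of sequence comparison described above (not the authors' actual pipeline), the sketch below uses Biopython to fetch the study's consensus 18S rRNA sequence (KT580785) and one of the Babesia sp. 'Spanish dog' references (AY534602) from GenBank and to estimate their pairwise identity. The e-mail address is a placeholder required by NCBI Entrez, a network connection is assumed, and the identity estimate is deliberately crude.

```python
# Hedged sketch: fetch two 18S rRNA sequences from GenBank and estimate their
# pairwise identity with Biopython. This is NOT the analysis pipeline used in
# the study; it only illustrates the comparison the abstract describes.
from Bio import Entrez, SeqIO, Align

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

def fetch_seq(accession: str):
    """Download a nucleotide record from GenBank and return its sequence."""
    handle = Entrez.efetch(db="nucleotide", id=accession,
                           rettype="fasta", retmode="text")
    record = SeqIO.read(handle, "fasta")
    handle.close()
    return record.seq

query = fetch_seq("KT580785")  # consensus sequence reported by this study
ref = fetch_seq("AY534602")    # Babesia sp. 'Spanish dog' reference

# With match = 1 and all other scores 0, the optimal alignment score equals the
# number of identical aligned bases, giving a simple identity estimate.
aligner = Align.PairwiseAligner()
aligner.match_score = 1
aligner.mismatch_score = 0
aligner.open_gap_score = 0
aligner.extend_gap_score = 0

matches = aligner.score(query, ref)
identity = matches / min(len(query), len(ref))
print(f"approximate pairwise identity: {identity:.1%}")
```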
Background: This study aimed to determine the prevalence of Babesia species DNA in lung exudate samples collected from red foxes (Vulpes vulpes) from across Great Britain. Babesia are small piroplasmid parasites which are mainly transmitted through the bite of infected ticks of the family Ixodidae. Babesia can cause potentially fatal disease in a wide range of mammalian species, including humans, dogs and cattle, making them of significant economic importance to both the medical and veterinary fields. Methods: DNA was extracted from lung exudate samples of 316 foxes. A semi-nested PCR was used for the initial screening of samples, using universal Babesia-Theileria primers which target the 18S rRNA gene. A selection of positive PCR amplicons was purified and sequenced. Subsequently, specific primers were designed to detect Babesia annae and used to screen all 316 DNA samples. Randomly selected positive samples were purified and sequenced (GenBank accession KT580786). Clones spanning a 1717 bp region of the 18S rRNA gene were generated from 2 positive samples, and the resultant consensus sequence was submitted to GenBank (KT580785). Sequence KT580785 was used in the phylogenetic analysis. Results: Babesia annae DNA was detected in the fox samples; in total, 46/316 (14.6%) of samples tested positive for the presence of Babesia annae DNA. The central region of England had the highest prevalence at 36.7%, while no positive samples were found from Wales, though only 12 samples were tested from this region. Male foxes were found to have a higher prevalence of Babesia annae DNA than females in all regions of Britain. Phylogenetic and sequence analysis of the GenBank submissions (accession numbers KT580785 and KT580786) showed 100% identity to Babesia sp. 'Spanish Dog' (AY534602, EU583387 and AF188001). Conclusions: This is the first time that Babesia annae DNA has been reported in red foxes in Great Britain, with positive samples found across England and Scotland, indicating that this parasite is well established within the red fox population of Britain. Phylogenetic analysis demonstrated that, though B. annae is closely related to B. microti, it is a distinct species.
11,833
390
[ 228, 160, 666, 156, 385, 130, 182, 131, 273, 498, 372 ]
16
[ "babesia", "annae", "samples", "dna", "positive", "babesia annae", "prevalence", "bt1", "pcr", "foxes" ]
[ "clinical babesia infections", "seen babesia parasites", "babesia parasites hosts", "babesia parasites capable", "babesia infections dogs" ]
null
[CONTENT] Babesia annae | DNA | Red foxes | Vulpes vulpes | Babesiosis | Great Britain | Phylogeny [SUMMARY]
null
[CONTENT] Babesia annae | DNA | Red foxes | Vulpes vulpes | Babesiosis | Great Britain | Phylogeny [SUMMARY]
[CONTENT] Babesia annae | DNA | Red foxes | Vulpes vulpes | Babesiosis | Great Britain | Phylogeny [SUMMARY]
[CONTENT] Babesia annae | DNA | Red foxes | Vulpes vulpes | Babesiosis | Great Britain | Phylogeny [SUMMARY]
[CONTENT] Babesia annae | DNA | Red foxes | Vulpes vulpes | Babesiosis | Great Britain | Phylogeny [SUMMARY]
[CONTENT] Animals | Babesia | Babesiosis | DNA Primers | DNA, Protozoan | DNA, Ribosomal | Exudates and Transudates | Foxes | Lung | Mass Screening | Molecular Sequence Data | Polymerase Chain Reaction | RNA, Ribosomal, 18S | Sequence Analysis, DNA | United Kingdom [SUMMARY]
null
[CONTENT] Animals | Babesia | Babesiosis | DNA Primers | DNA, Protozoan | DNA, Ribosomal | Exudates and Transudates | Foxes | Lung | Mass Screening | Molecular Sequence Data | Polymerase Chain Reaction | RNA, Ribosomal, 18S | Sequence Analysis, DNA | United Kingdom [SUMMARY]
[CONTENT] Animals | Babesia | Babesiosis | DNA Primers | DNA, Protozoan | DNA, Ribosomal | Exudates and Transudates | Foxes | Lung | Mass Screening | Molecular Sequence Data | Polymerase Chain Reaction | RNA, Ribosomal, 18S | Sequence Analysis, DNA | United Kingdom [SUMMARY]
[CONTENT] Animals | Babesia | Babesiosis | DNA Primers | DNA, Protozoan | DNA, Ribosomal | Exudates and Transudates | Foxes | Lung | Mass Screening | Molecular Sequence Data | Polymerase Chain Reaction | RNA, Ribosomal, 18S | Sequence Analysis, DNA | United Kingdom [SUMMARY]
[CONTENT] Animals | Babesia | Babesiosis | DNA Primers | DNA, Protozoan | DNA, Ribosomal | Exudates and Transudates | Foxes | Lung | Mass Screening | Molecular Sequence Data | Polymerase Chain Reaction | RNA, Ribosomal, 18S | Sequence Analysis, DNA | United Kingdom [SUMMARY]
[CONTENT] clinical babesia infections | seen babesia parasites | babesia parasites hosts | babesia parasites capable | babesia infections dogs [SUMMARY]
null
[CONTENT] clinical babesia infections | seen babesia parasites | babesia parasites hosts | babesia parasites capable | babesia infections dogs [SUMMARY]
[CONTENT] clinical babesia infections | seen babesia parasites | babesia parasites hosts | babesia parasites capable | babesia infections dogs [SUMMARY]
[CONTENT] clinical babesia infections | seen babesia parasites | babesia parasites hosts | babesia parasites capable | babesia infections dogs [SUMMARY]
[CONTENT] clinical babesia infections | seen babesia parasites | babesia parasites hosts | babesia parasites capable | babesia infections dogs [SUMMARY]
[CONTENT] babesia | annae | samples | dna | positive | babesia annae | prevalence | bt1 | pcr | foxes [SUMMARY]
null
[CONTENT] babesia | annae | samples | dna | positive | babesia annae | prevalence | bt1 | pcr | foxes [SUMMARY]
[CONTENT] babesia | annae | samples | dna | positive | babesia annae | prevalence | bt1 | pcr | foxes [SUMMARY]
[CONTENT] babesia | annae | samples | dna | positive | babesia annae | prevalence | bt1 | pcr | foxes [SUMMARY]
[CONTENT] babesia | annae | samples | dna | positive | babesia annae | prevalence | bt1 | pcr | foxes [SUMMARY]
[CONTENT] species | ticks | tick | babesia | dogs | found | considered | britain | foxes | great [SUMMARY]
null
[CONTENT] babesia | positive | samples | likelihood | prevalence | annae | clade | positive samples | babesia annae | found [SUMMARY]
[CONTENT] detected foxes | detected | annae | britain | foxes | analysis | sequence detected | shown annae | study demonstrate presence annae | europe north [SUMMARY]
[CONTENT] babesia | annae | samples | μl | bt1 | foxes | prevalence | positive | dna | babesia annae [SUMMARY]
[CONTENT] babesia | annae | samples | μl | bt1 | foxes | prevalence | positive | dna | babesia annae [SUMMARY]
[CONTENT] Babesia | Great Britain ||| Babesia | Ixodidae ||| Babesia [SUMMARY]
null
[CONTENT] Babesia | 46/316 | 14.6% | Babesia ||| England | 36.7% | Wales | only 12 ||| Babesia | Britain ||| GenBank | KT580785 | KT580786 | 100% | Babesia [SUMMARY]
[CONTENT] first | Babesia | Great Britain | England | Scotland | Britain ||| B. | B. microti [SUMMARY]
[CONTENT] Babesia | Great Britain ||| Babesia | Ixodidae ||| Babesia ||| 316 ||| PCR | Babesia | 18S ||| PCR ||| Babesia | 316 ||| GenBank | KT580786 ||| 1717 | 18S | 2 | GenBank | KT580785 ||| Sequence | Babesia | 46/316 | 14.6% | Babesia ||| England | 36.7% | Wales | only 12 ||| Babesia | Britain ||| GenBank | KT580785 | KT580786 | 100% | Babesia ||| first | Babesia | Great Britain | England | Scotland | Britain ||| B. | B. microti [SUMMARY]
[CONTENT] Babesia | Great Britain ||| Babesia | Ixodidae ||| Babesia ||| 316 ||| PCR | Babesia | 18S ||| PCR ||| Babesia | 316 ||| GenBank | KT580786 ||| 1717 | 18S | 2 | GenBank | KT580785 ||| Sequence | Babesia | 46/316 | 14.6% | Babesia ||| England | 36.7% | Wales | only 12 ||| Babesia | Britain ||| GenBank | KT580785 | KT580786 | 100% | Babesia ||| first | Babesia | Great Britain | England | Scotland | Britain ||| B. | B. microti [SUMMARY]